System Design Considerations: The On-screen Keyboard

User Control, Application Control, or System Control


User, application, or system: these three approaches to controlling an on-screen keyboard represent a major design choice for anyone building a complete computer system.

The first question is: will the system be general purpose, or will it perform specific tasks? In a touch-driven system, the approach the designer selects becomes a critical choice in how the user accesses the on-screen keyboard for text input.

There are various ways an on-screen keyboard can appear in a system: it is always present, it appears on user demand, it appears when the application needs it, or it appears when the system determines it is needed. Additionally, what exactly is meant by "an on-screen keyboard"?
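The four appearance models above can be sketched as a simple visibility policy. This is an illustrative sketch only, with hypothetical names; it is not IMG's implementation or any actual system API.

```python
from enum import Enum, auto

class KeyboardPolicy(Enum):
    """Who decides when the on-screen keyboard appears (hypothetical)."""
    ALWAYS_VISIBLE = auto()      # keyboard is always on screen
    USER_ON_DEMAND = auto()      # user explicitly shows or hides it
    APPLICATION_DRIVEN = auto()  # application requests it when needed
    SYSTEM_DRIVEN = auto()       # operating system decides on its own

def should_show_keyboard(policy, user_requested=False,
                         app_requested=False, system_detected_focus=False):
    """Decide visibility under a given policy (illustrative logic only)."""
    if policy is KeyboardPolicy.ALWAYS_VISIBLE:
        return True
    if policy is KeyboardPolicy.USER_ON_DEMAND:
        return user_requested
    if policy is KeyboardPolicy.APPLICATION_DRIVEN:
        return app_requested
    # SYSTEM_DRIVEN: the system shows it when it detects a text field
    return system_detected_focus
```

The sketch makes the article's point concrete: under the first three policies the user or the application holds the decision, while under the last one the operating system does.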

A good system designer typically wants the flexibility to bring up a number pad when only numbers are needed, the alphabet alone for password entry, or a full keyboard for system-level tasks. Aesthetics can also be important, and flexible layouts, colors, keys, sizes, and operating logic are all things a system-imposed on-screen keyboard cannot accommodate.
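The context-dependent layouts described above amount to a mapping from the type of input field to a keyboard layout. A minimal sketch, assuming hypothetical field-type and layout names:

```python
def select_layout(field_type):
    """Map an input field type to a keyboard layout (names are illustrative)."""
    layouts = {
        "numeric": "number_pad",     # only numbers are needed
        "password": "alpha_only",    # alphabet alone for password entry
        "system": "full_keyboard",   # full keyboard for system-level tasks
    }
    # Fall back to the full keyboard for any unrecognized field type
    return layouts.get(field_type, "full_keyboard")
```

An application-controlled keyboard can apply such a mapping per field; a keyboard forced on the user by the system generally cannot.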

As computer systems have become more sophisticated, system designers must be involved in the choice of operating system, which largely determines the capabilities of the finished product, along with the cost of the entire device and its software. Since each design choice can affect the options available for the on-screen keyboard, it makes sense for some systems to keep components as flexible as possible, so more possibilities remain open as choices are made that affect other components of the system.

Even within the history and family of Microsoft Windows, the system-level on-screen keyboard has been implemented in more ways than anyone would want to count. A system-level choice that defines the user interface adds yet another factor to weigh when making decisions across various systems.

IMG's on-screen keyboards have always focused on user control or application control. We have not invested much effort in system control because we consider it a poor design choice: a system-level approach imposes too many constraints on the flexibility allowed by letting the user or application control the user interface.

There can be security concerns, training concerns, and ease-of-use concerns. Additionally, when the system defines the user interface, the role of the operating system becomes blurred. While a system-controlled on-screen keyboard offers some benefits, those same benefits are available in conjunction with a user- or application-controlled approach. A system-controlled approach, however, can limit what the user, application designer, or system designer can do, and such limits should not be imposed by operating system software.

Having a consistent, flexible on-screen keyboard that can operate across many versions of Windows (and now multiple platforms) has served many of IMG's customers well. Our design choice has always been to let the user or application designer control the on-screen keyboard. When the system forces an on-screen keyboard onto the scene, our software must defer to that design choice.

Having the keyboard software take design or user interface actions on its own has always seemed counter to what the software was designed and built for. Historically, a keyboard has been a user interface tool and not much more. So what, then, is a keyboard that shares the screen with the system? Is it more like an input device, or a user interface component? If it is a user interface component, should it be handled by the user, the application, or the system? On systems where the operating system controls the keyboard, this can greatly limit what the user and application designer can do.

So when we are asked about having the keyboard do things on its own (such as appear when needed), we point to our Developer's Kit and discuss the best approaches to handle and manipulate the various keyboard products we offer. Our design approach is to make the keyboard as flexible as our customers need it to be.

So by definition, the software cannot act as an operating system component on its own; that is up to the system designer, not the keyboard software. For the same reason, if the application will be in control, then the application should be in control: the keyboard should be flexible and easy to manipulate, but it should not decide for itself when to act. If the user will be in control, then having the keyboard readily accessible is paramount; for example, the classic software has four different user-selectable minimize options (without even mentioning programmatic options).
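The application-in-control pattern described above can be sketched as a thin controller the application drives from its own focus events. This is a hypothetical illustration of the pattern, not IMG's Developer's Kit API.

```python
class OnScreenKeyboardController:
    """Hypothetical application-side controller for an on-screen keyboard.

    The application, not the keyboard, decides when the keyboard appears.
    """

    def __init__(self):
        self.visible = False
        self.minimized = False

    def on_focus(self, widget_accepts_text):
        # Application policy: show the keyboard only for text-input widgets
        self.visible = bool(widget_accepts_text)
        if self.visible:
            self.minimized = False

    def on_blur(self):
        # Application policy: hide the keyboard when the field loses focus
        self.visible = False

    def minimize(self):
        # The user remains free to minimize the keyboard at any time
        self.minimized = True
```

The key point is that the keyboard object exposes operations but never triggers them itself; every state change originates with the application or the user.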

So the focus has always been either on the user and what the user might want to do with the keyboard software, or on the application developer and how to manipulate the keyboard software. When we are asked to make our keyboard act more like a system keyboard, we remember all the systems that have come and gone, and ask why it would be a good idea to invest time in something so fleeting. We can point you in the right direction, but with dozens and dozens of systems left behind in the trash can of computer history, we must focus our resources on our customers' current and future needs.

Typically the operating system provides the bare minimum and forces its design choices onto the user, a poor compromise that usually makes for a less-than-ideal user interface. When you compare that to what IMG's keyboard software can do, it is easy to see why customers with specific usage scenarios want complete flexibility in their on-screen keyboard software.

written by:

Kermit Komm

VP Programming

Innovation Management Group, Inc.

Publisher of My-T-Soft® - On-screen Keyboard Utilities for Windows (incl. Embedded & Terminal Server)

and My-T-Soft® Build-A-Board System - User Interface Designer with Cross-platform O/S Run-times

Published: January 29, 2014