Abstract:
A method for conversational computing includes executing code embodying a conversational virtual machine, registering a plurality of input/output resources with a conversational kernel, and providing an interface between a plurality of active applications and the conversational kernel for processing input/output data. Input queries and input events of a multi-modal dialog are received across a plurality of user interface modalities of the active applications, and output messages and output events of the multi-modal dialog are generated in connection with the active applications. The conversational kernel manages a context stack associated with the active applications and the multi-modal dialog to transform the input queries into application calls for the active applications and to convert the output messages into speech, wherein the context stack accumulates a context of each of the active applications.
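The central data structure here is the context stack. The following Python fragment is a minimal sketch of how such a stack could accumulate per-application context and bind an input query to an application call; the names (AppContext, ContextStack, resolve_query) and the dictionary-based grammar are illustrative assumptions, not details taken from the source.

    # Minimal sketch of a context stack; all names are illustrative.
    class AppContext:
        """Per-application dialog context (a toy grammar here)."""
        def __init__(self, grammar):
            self.grammar = grammar  # query phrases this application understands

        def match(self, query):
            # Return an application call if the query fits this context.
            return self.grammar.get(query)

    class ContextStack:
        """Accumulates the context of each active application."""
        def __init__(self):
            self._stack = []  # most recently activated context is last

        def push_context(self, app_id, context):
            self._stack.append((app_id, context))

        def resolve_query(self, query):
            # Walk from the most recent context downward and return the
            # first application call the query binds to.
            for app_id, context in reversed(self._stack):
                call = context.match(query)
                if call is not None:
                    return app_id, call
            return None  # no active application handles this query

    stack = ContextStack()
    stack.push_context("calendar", AppContext({"next meeting": "calendar.next()"}))
    stack.push_context("mail", AppContext({"read mail": "mail.read_new()"}))
    print(stack.resolve_query("next meeting"))  # ('calendar', 'calendar.next()')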
Abstract:
A conversational computing system that provides a universal coordinated multi-modal conversational user interface (CUI) (10) across a plurality of conversationally aware applications (11) (i.e., applications that "speak" conversational protocols) and conventional applications (12). The conversationally aware applications (11) communicate with a conversational kernel (14) via conversational application APIs (13). The conversational kernel (14) controls the dialog across applications and devices (local and networked) on the basis of their registered conversational capabilities and requirements and provides a unified conversational user interface and conversational services and behaviors. The conversational computing system may be built on top of a conventional operating system and APIs (15) and conventional device hardware (16). The conversational kernel (14) handles all I/O processing and controls conversational engines (18). The conversational kernel (14) converts voice requests into queries and converts outputs and results into spoken messages using conversational engines (18) and conversational arguments (17). The conversational application API (13) conveys all the information for the conversational kernel (14) to transform queries into application calls and conversely convert output into speech, appropriately sorted before being provided to the user.
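Registration of conversational capabilities can be sketched in the same spirit. In this Python fragment the kernel routes a decoded voice query to whichever application registered the matching capability; the register/dispatch API is hypothetical and merely stands in for the conversational application API the abstract describes.

    # Toy kernel that routes queries by registered capability;
    # the API shown here is an illustrative assumption.
    class ConversationalKernel:
        def __init__(self):
            self._apps = {}

        def register(self, app_id, capabilities, handler):
            # Applications declare their conversational capabilities.
            self._apps[app_id] = (set(capabilities), handler)

        def dispatch(self, topic, query):
            # Route a decoded voice query to the first application that
            # registered the matching capability.
            for app_id, (caps, handler) in self._apps.items():
                if topic in caps:
                    return handler(query)
            raise LookupError(f"no application registered for {topic!r}")

    kernel = ConversationalKernel()
    kernel.register("weather", {"forecast"}, lambda q: f"weather lookup: {q}")
    print(kernel.dispatch("forecast", "will it rain tomorrow"))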
Abstract:
An intelligent subsystem separately supports inking functions in order to allow stroke-ignorant software to be supported in a stylus driven environment. This subsystem thus provides the inking capability missing in existing flat-panel display controllers. Separate inking functions are incorporated into the subsystem in order to support inking management functions which do not corrupt the display refresh buffer as it is understood by existing application software. The subsystem makes no assumptions about the application's awareness of stroke data as an input modality. Instead, the subsystem assumes that a conventional display subsystem also exists in the system. The subsystem utilizes the strobes and clocks generated by the conventional display controller to generate addresses in a memory which has physically separate address and strobe lines from the display refresh buffer. The content of this added memory is used to control the source of input to the data lines of the display. The invention can be generalized to allow any number of planes to be added to a display system providing access to that display system by any number of asynchronous processes such as inking.
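The plane-selection behavior can be modeled in a few lines. The actual subsystem performs this per pixel in hardware, clocked by the display controller's strobes; the Python below only illustrates the logic, and all names and values are illustrative.

    # Software model of the hardware plane mux: the added ink plane is
    # addressed in lockstep with the refresh buffer, and a per-pixel
    # select bit decides which memory drives the display data lines.
    def mux_scanline(refresh_row, ink_row, select_row):
        # select bit 1: display the ink plane; 0: pass the application's
        # refresh buffer through untouched.
        return [ink if sel else app
                for app, ink, sel in zip(refresh_row, ink_row, select_row)]

    # Ink appears over pixels 1 and 2 while the refresh buffer, as the
    # application understands it, is never modified.
    print(mux_scanline([7, 7, 7, 7], [0, 9, 9, 0], [0, 1, 1, 0]))  # [7, 9, 9, 7]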
Abstract:
PLANAR SOLID STATE LASER ARRAY

An array of collimated wide-aperture electrically pumped leaky corrugated AlGaAs optical waveguide lasers is formed on a single chip by etching a series of grooves oriented with respect to the crystallographic planes to isolate discrete lasers in the array and provide the requisite orientation of the internal reflecting surfaces to support the lasing action. The corrugation period is chosen such that the laser radiation exits from the array in a direction normal to the plane of the waveguide.
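The choice of corrugation period can be made concrete with the standard grating-coupler phase-matching relation, given here as background; the abstract itself does not state it, and the numbers below are only representative. For radiation diffracted into order q at angle θ from the surface normal,

    \[ k_0 \sin\theta = \beta - q\,\frac{2\pi}{\Lambda}, \qquad \beta = \frac{2\pi n_{\mathrm{eff}}}{\lambda}, \]

so normal emission (θ = 0) in first order (q = 1) requires

    \[ \Lambda = \frac{\lambda}{n_{\mathrm{eff}}}. \]

With a free-space wavelength λ ≈ 0.85 µm, typical of AlGaAs, and an effective mode index n_eff ≈ 3.4, this gives Λ ≈ 0.25 µm.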
Abstract:
A method for entry and recognition of elements from a set of symbols, involving a template of line segments displayed on an electronic writing surface. A stylus is applied to the electronic writing surface so as to trace a desired symbol. Computing means are used to "snap" the strokes made by the stylus onto the corresponding template line segments. Upon completion of a symbol, a code is generated to represent the line segments, and this code is used to reference entries in a data structure to identify the appropriate corresponding computer code. If there is no match, the code for the line segments and a corresponding set of computer codes can be added to the data structure. This method takes advantage of natural handwriting skills and can be used for a variety of symbol sets.
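The snap-and-lookup step lends itself to a short sketch. The Python below assumes a seven-segment template, encodes a completed symbol as the frozenset of segment indices its strokes snapped to, and uses that code as the key into a growable table; these representations are assumptions made for illustration, not details from the source.

    # Sketch: snap each stroke to the nearest template segment, encode
    # the symbol as the set of segments touched, then look the code up.
    SEGMENTS = {  # index -> ((x1, y1), (x2, y2)) of a 7-segment template
        0: ((0, 2), (1, 2)), 1: ((0, 1), (0, 2)), 2: ((1, 1), (1, 2)),
        3: ((0, 1), (1, 1)), 4: ((0, 0), (0, 1)), 5: ((1, 0), (1, 1)),
        6: ((0, 0), (1, 0)),
    }

    def snap(stroke):
        """Return the template segment index nearest this stroke."""
        (sx, sy), (ex, ey) = stroke[0], stroke[-1]
        def dist(seg):  # endpoint-to-endpoint Manhattan distance
            (x1, y1), (x2, y2) = SEGMENTS[seg]
            return abs(sx - x1) + abs(sy - y1) + abs(ex - x2) + abs(ey - y2)
        return min(SEGMENTS, key=dist)

    table = {frozenset({1, 2, 3, 4, 5}): "H"}  # known symbol codes

    def recognize(strokes):
        code = frozenset(snap(s) for s in strokes)
        return table.get(code)  # None means no match yet

    def learn(strokes, char):
        # No match: add the line-segment code and its computer code.
        table[frozenset(snap(s) for s in strokes)] = char

    strokes = [[(0, 1), (0, 2)], [(1, 1), (1, 2)], [(0, 1), (1, 1)],
               [(0, 0), (0, 1)], [(1, 0), (1, 1)]]
    print(recognize(strokes))  # 'H'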
Abstract:
A user interface includes a process model unit (34) for predicting one or more allowable next states from a current state of a process, and a display processing unit (26) for deriving, for each of the allowable next states, a representation of the allowable next state. The display processing unit has an output coupled to a display screen (30) for displaying each of the representations (30b-30g) in conjunction with a representation (30a) of a current state of the process. The user interface further includes an actuator control unit (22) that is coupled to an input mechanism whereby a user selects one of the displayed representations of one of the allowable next states. The actuator control unit controls the process to cause it to enter a new current state that corresponds to the selected derived representation. In one embodiment, the display screen has a touchscreen capability whereby the user selects one of the representations by physically touching the display screen within an area associated with a selected one of the derived allowable states.
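The interaction loop can be sketched as follows. Here the process model is reduced to a transition table, and touchscreen taps are simulated by a list of selections; all names and states are illustrative, not drawn from the source.

    # Sketch of the interface loop: the process model predicts the
    # allowable next states, each is displayed alongside the current
    # state, and the user's selection drives the process forward.
    TRANSITIONS = {  # illustrative process model: state -> allowable next states
        "idle":     ["filling", "cleaning"],
        "filling":  ["heating", "idle"],
        "heating":  ["holding", "idle"],
        "holding":  ["idle"],
        "cleaning": ["idle"],
    }

    def allowable_next_states(current):
        return TRANSITIONS[current]

    def run(selections, current="idle"):
        # selections stands in for touchscreen taps on the displayed
        # representations of the allowable next states.
        for choice in selections:
            options = allowable_next_states(current)
            print(f"current: {current}; allowable next: {options}")
            if choice in options:  # only allowable states may be entered
                current = choice
        return current

    print(run(["filling", "heating", "holding", "idle"]))  # -> 'idle'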