Patent classifications
G06F3/038
MULTI-CHANNEL PERIPHERAL INTERCONNECT SUPPORTING SIMULTANEOUS VIDEO AND BUS PROTOCOLS
A method includes generating, by a control unit of a first device, a handshaking signal to be transmitted to a second device via a second channel. The method further includes, based on the handshaking signal being acknowledged by the second device, configuring, by the control unit, the second channel to communicate non-display data and configuring a first channel connecting the first device to the second device to selectively communicate either display data or non-display data; and, based on the handshaking signal not being acknowledged by the second device, configuring, by the control unit, the first channel to communicate display data.
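The claim's two-way branch on the handshake outcome can be sketched roughly as follows; the `ChannelConfig` type and the channel-mode labels are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ChannelConfig:
    channel1: str  # "display" or "selectable" (display or non-display data)
    channel2: str  # "non-display" or "unused"

def configure_channels(handshake_acknowledged: bool) -> ChannelConfig:
    """Configure the two channels per the claimed method.

    If the second device acknowledges the handshaking signal sent over
    the second channel, the second channel carries non-display data and
    the first channel may carry either kind; otherwise the first channel
    is dedicated to display data.
    """
    if handshake_acknowledged:
        return ChannelConfig(channel1="selectable", channel2="non-display")
    return ChannelConfig(channel1="display", channel2="unused")
```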
Systems, Methods, and Computer-Readable Media for Generating Computer-Mediated Reality Display Data
Systems, methods, and computer-readable media are provided for generating computer-mediated reality display data based on user instantaneous motion data. A system includes at least one sensor, a mediated reality data source, and a mediated reality display generator that generates displayable mediated reality scene data based on (a) current reality data of the system from the at least one sensor; (b) mediated reality data from the mediated reality data source; and (c) instantaneous motion data of the system from the at least one sensor. In one example, the mediated reality display generator generates the displayable mediated reality scene data by generating displayable mediated reality frame data based on the current reality data and the mediated reality data. The operations further include selecting a portion of the displayable mediated reality frame data as the displayable mediated reality scene data based on the instantaneous motion data. The selected portion is offset from the center of the displayable mediated reality frame data and is smaller than the frame size of the displayable mediated reality frame data, with the offset selected based on the instantaneous motion data.
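The window-selection step described above can be sketched as picking a sub-rectangle of the rendered frame whose center is shifted in the direction of motion; the parameter names, the linear `gain` mapping from motion to offset, and the clamping behavior are assumptions for illustration, not details from the abstract:

```python
def select_scene_window(frame_w, frame_h, win_w, win_h, vx, vy, gain=0.1):
    """Select a sub-window of the rendered frame, offset from center
    in the direction of instantaneous motion (vx, vy).

    Returns the (x, y, w, h) of the selected portion, clamped so the
    window stays entirely inside the frame.
    """
    # Top-left corner of a centered window.
    cx = (frame_w - win_w) // 2
    cy = (frame_h - win_h) // 2
    # Offset proportional to instantaneous motion (illustrative mapping).
    ox = int(vx * gain)
    oy = int(vy * gain)
    x = max(0, min(frame_w - win_w, cx + ox))
    y = max(0, min(frame_h - win_h, cy + oy))
    return x, y, win_w, win_h
```

With no motion the window is centered; as motion grows the window slides toward the frame edge, which is one way a display could pre-compensate for head movement without re-rendering the frame.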
Contextual assistant using mouse pointing or touch cues
A method for a contextual assistant to use mouse pointing or touch cues includes receiving audio data corresponding to a query spoken by a user; receiving, in a graphical user interface displayed on a screen, a user input indication indicating a spatial input applied at a first location on the screen; and processing the audio data to determine a transcription of the query. The method also includes performing query interpretation on the transcription to determine that the query refers to an object displayed on the screen, without uniquely identifying the object, and requests information about the object. The method further includes disambiguating the query, using the user input indication indicating the spatial input applied at the first location on the screen, to uniquely identify the object the query refers to; obtaining the information about the object requested by the query; and providing a response to the query.
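The disambiguation step can be sketched as a nearest-candidate search: among on-screen objects matching the query's generic noun, pick the one closest to the touch or pointer location. The object representation, the nearest-center heuristic, and all names here are illustrative assumptions, not the patent's method:

```python
def disambiguate(objects, query_noun, touch_xy):
    """Pick the on-screen object an under-specified query refers to.

    objects:    list of dicts with a 'label' and a 'bbox' (x0, y0, x1, y1).
    query_noun: the generic noun from the transcription, e.g. "shoe".
    touch_xy:   the (x, y) screen location of the spatial input.
    """
    def center(b):
        return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)

    # Restrict to objects matching the query's noun; fall back to all
    # objects if nothing matches the label.
    candidates = [o for o in objects if o["label"] == query_noun] or objects
    tx, ty = touch_xy
    # Choose the candidate whose bounding-box center is nearest the input.
    return min(candidates,
               key=lambda o: (center(o["bbox"])[0] - tx) ** 2
                           + (center(o["bbox"])[1] - ty) ** 2)
```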
METHOD AND DEVICE FOR PROVIDING A TRUSTED ENVIRONMENT FOR EXECUTING AN ANALOGUE-DIGITAL SIGNATURE
The invention relates to the field of providing a trusted environment for executing an analogue-digital signature. The claimed document-signing device, in the form of a stylus, includes a protective compartment in which the following are disposed: a microcontroller with programme code; a memory holding a secret digital-signature key; inertial sensors connected to the microcontroller; and a lens and a camera, the camera also connected to the microcontroller. A wireless interface is used to communicate with a computer. The inertial sensors serve to verify the user's handwritten signature, while the lens and camera serve to carry out a comparison with the text of an electronic document uploaded via the wireless interface. In this way it is ensured that only verified information enters the trusted environment of the stylus.
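The gating logic described above, signing only after both the handwritten-signature check and the document-text check succeed inside the protected compartment, can be sketched as follows. The patent does not specify a signature algorithm; the HMAC-SHA256 call and all names below are purely illustrative stand-ins for the device's secret-key signing operation:

```python
import hashlib
import hmac

def sign_if_verified(doc_bytes: bytes, secret_key: bytes,
                     signature_motion_ok: bool,
                     doc_text_matches_camera: bool):
    """Produce a signature only when both trust checks pass.

    signature_motion_ok:      result of verifying the handwritten
                              signature via the inertial sensors.
    doc_text_matches_camera:  result of comparing the camera image
                              against the uploaded electronic document.
    Returns a hex signature string, or None if either check failed.
    """
    if not (signature_motion_ok and doc_text_matches_camera):
        return None  # verified information never reaches the key
    return hmac.new(secret_key, doc_bytes, hashlib.sha256).hexdigest()
```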
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, COMPUTER PROGRAM, AND STORAGE MEDIUM
An information processing apparatus includes an operating unit capable of recognizing a peripheral apparatus. The operating unit includes a first recognizing unit and a second recognizing unit. When a peripheral apparatus is connected to the operating unit and identification information about the connected peripheral apparatus is included in peripheral apparatus information containing predetermined identification information, the first recognizing unit recognizes the connected peripheral apparatus as a first peripheral apparatus; when the identification information about the connected peripheral apparatus is not included in the peripheral apparatus information, the second recognizing unit recognizes the connected peripheral apparatus as a second peripheral apparatus.
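The two-way classification reduces to a membership test against the stored peripheral-apparatus information; the function name and the "first"/"second" return values are illustrative:

```python
def recognize_peripheral(device_id: str, known_ids: set[str]) -> str:
    """Classify a newly connected peripheral.

    Returns "first" if the device's identification information appears
    in the stored peripheral-apparatus information (the set of
    predetermined identification information), otherwise "second".
    """
    return "first" if device_id in known_ids else "second"
```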
APPARATUS AND METHOD FOR EXECUTING APPLICATION FOR MOBILE DEVICE
The present invention relates to an apparatus and a method for executing a desired application from any screen of a mobile device with a simple operation, without switching screens. The apparatus for executing an application for a mobile device according to one embodiment of the present invention comprises: a text input unit; an application recognition unit for recognizing an application to be executed; a message recognition unit for recognizing, based on the text input through the text input unit, an operation message to be performed in the application to be executed; a transmission instruction sensing unit for sensing a user's operation-message transmission instruction; a message transfer unit for transferring the operation message to the application when the transmission instruction sensing unit senses the transmission instruction; and an application execution unit for executing the operation message transferred by the message transfer unit.
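One way to picture the application-recognition and message-recognition units is a parser that splits free text into a target app and an operation message. The keyword-prefix scheme, the `app_keywords` mapping, and all names are assumptions for illustration only:

```python
def parse_operation(text: str, app_keywords: dict[str, list[str]]):
    """Split free text into (target app, operation message).

    app_keywords maps an app name to its trigger words; if the text
    starts with a trigger word, that app is the target and the rest of
    the text becomes the operation message. Returns (None, text) when
    no trigger matches.
    """
    for app, triggers in app_keywords.items():
        for trigger in triggers:
            if text.lower().startswith(trigger.lower() + " "):
                return app, text[len(trigger):].strip()
    return None, text
```

A transmission instruction (for example, pressing send) would then hand the recognized operation message to the target application for execution.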
ELECTRONIC PEN AND ELECTRONIC PEN MAIN BODY
An electronic pen includes a magnetic core that has a through-hole and around which a coil is wound in a direction along this through-hole, a core body that is inserted in the through-hole of this magnetic core and has electrical conductivity, a capacitor that forms a resonant circuit with the coil, a signal generation circuit that generates a signal that enables a position of the electronic pen to be detected, which is transmitted through the core body, an electricity storage device, and a charge circuit that charges the electricity storage device by an induced current generated in the coil according to an external magnetic field. While the resonant circuit operates, the signal generated by the signal generation circuit is concurrently transmitted through the core body.
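The coil and capacitor described above form a standard LC tank, whose resonant frequency follows f = 1 / (2π√(LC)); the specific inductance and capacitance values in the test are illustrative, not taken from the patent:

```python
import math

def resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency (Hz) of the pen's coil/capacitor tank circuit,
    which couples to the external magnetic field that both powers the
    charge circuit and enables position detection."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))
```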