Patent classifications
G06F2203/0381
MULTI SENSORY INPUT TO IMPROVE HANDS-FREE ACTIONS OF AN ELECTRONIC DEVICE
In one general aspect, a method can include detecting at least one indicator of user-initiated interaction with a computing device, obtaining data related to a demographic of a user of the computing device, identifying a current state of the computing device, determining, based on the at least one indicator of the user-initiated interaction, the demographic data, and the current state, that content displayed on a first display device included in the computing device is to be cast to a second display device separate from the computing device, and casting the content displayed on the first display device to the second display device.
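The decision step in this abstract can be pictured as a predicate over the three signals. The sketch below is a hypothetical illustration only: the rule, field names, and `DeviceState` structure are assumptions, not the patent's actual logic.

```python
# Hypothetical sketch of the casting decision: combine a user-interaction
# indicator, demographic data, and device state into a yes/no decision to
# cast content to a second display. The rule below is purely illustrative.
from dataclasses import dataclass

@dataclass
class DeviceState:
    screen_on: bool
    media_playing: bool

def should_cast(interaction_detected: bool,
                demographic: dict,
                state: DeviceState) -> bool:
    """Return True when the displayed content should be cast."""
    if not interaction_detected:
        return False
    # Example rule: cast while media is playing on the built-in display
    # and the user's demographic segment prefers a large screen.
    prefers_large_screen = demographic.get("prefers_large_screen", False)
    return state.screen_on and state.media_playing and prefers_large_screen

print(should_cast(True, {"prefers_large_screen": True},
                  DeviceState(screen_on=True, media_playing=True)))  # True
```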
Human-Computer Interaction Method and System, and Apparatus
A human-computer interaction method relates to the field of human-computer interaction technologies. The method includes: establishing a correspondence between a first voiceprint and a first output position on a touchscreen; and receiving a first voice input, and when it is determined that a voiceprint of the first voice input matches the first voiceprint, recognizing content of the voice input and outputting and displaying the content at the first output position.
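The voiceprint-to-position correspondence can be sketched as a small router. Everything below is an assumption for illustration: real voiceprint matching uses speaker-recognition models, stubbed here as a distance threshold over feature vectors.

```python
# Illustrative sketch: bind a voiceprint to a touchscreen position, then
# route recognized speech to the position of the best-matching voiceprint.
# Matching is faked with a Euclidean-distance threshold over feature tuples.
from math import dist

class VoiceRouter:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.bindings = []  # (voiceprint_features, (x, y)) pairs

    def bind(self, voiceprint, position):
        self.bindings.append((voiceprint, position))

    def route(self, utterance_features, text):
        """Return (text, position) for the first matching voiceprint, or None."""
        for voiceprint, position in self.bindings:
            if dist(voiceprint, utterance_features) <= self.threshold:
                return (text, position)
        return None

router = VoiceRouter()
router.bind((0.1, 0.9), (40, 120))           # speaker A -> left panel
print(router.route((0.12, 0.88), "hello"))   # ('hello', (40, 120))
```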
Control system and method using in-vehicle gesture input
A control system and method control a vehicle's functions using in-vehicle gesture input; more particularly, the system receives an occupant's gesture and controls the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input therein, and a processor configured to execute the control program. The processor transmits a command for executing a function that corresponds to the gesture according to a usage pattern.
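The "function corresponding to a gesture according to a usage pattern" step can be sketched as a dispatch table with a usage-based tiebreak. The gesture names, candidate functions, and the most-used-wins rule are illustrative assumptions, not the patent's claimed logic.

```python
# Hypothetical sketch: map a gesture to candidate vehicle functions and,
# when a gesture is ambiguous, pick the function the occupant uses most.
from collections import Counter

class GestureController:
    def __init__(self):
        self.gesture_map = {}   # gesture -> list of candidate function names
        self.usage = Counter()  # function name -> times executed

    def register(self, gesture, function_name):
        self.gesture_map.setdefault(gesture, []).append(function_name)

    def execute(self, gesture):
        """Choose the most-used candidate for this gesture, then record it."""
        candidates = self.gesture_map.get(gesture, [])
        if not candidates:
            return None
        chosen = max(candidates, key=lambda f: self.usage[f])
        self.usage[chosen] += 1
        return chosen

ctl = GestureController()
ctl.register("swipe_left", "next_track")
ctl.register("swipe_left", "decline_call")
ctl.usage["next_track"] = 3          # prior usage pattern
print(ctl.execute("swipe_left"))     # next_track
```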
METHODS FOR DISPLAYING USER INTERFACE ELEMENTS RELATIVE TO MEDIA CONTENT
In some embodiments, a computer system displays a caption for a media item at different depths depending on the depth of the portion of the media item over which the caption is displayed. In some embodiments, a computer system displays a user interface element that includes information associated with the media item at different locations relative to the media item depending on attention of the user. In some embodiments, a computer system displays a user interface element that includes information associated with the media item with different visual appearances depending on visual characteristics of the portion of the media item over which the user interface element is displayed.
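One way to picture the depth-dependent caption placement is to render the caption slightly in front of the nearest content it overlays. The function name and the fixed margin below are assumptions for illustration, not the filing's method.

```python
# Hypothetical sketch: choose a caption depth just in front of the closest
# depth sampled from the media region the caption would cover.
def caption_depth(region_depths: list, margin: float = 0.05) -> float:
    """Place the caption `margin` units nearer than the closest content."""
    return min(region_depths) - margin

# Closest sampled depth is 0.9, so the caption sits at about 0.85.
print(caption_depth([1.2, 0.9, 1.5]))
```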
Multimodal inputs for computer-generated reality
Implementations of the subject technology provide for determining an operating mode of an electronic device based at least in part on whether the electronic device is communicatively coupled to an associated base device. Based on the determined operating mode, the subject technology identifies a set of input modalities for initiating a recording of content within a field of view of the electronic device. The subject technology monitors sensor information generated by at least one sensor included in, or communicatively coupled to, the electronic device. Further, the subject technology initiates the recording of content within the field of view of the electronic device when the monitored sensor information indicates that at least one of the identified set of input modalities has been triggered.
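The mode-dependent modality selection can be sketched in a few lines. The mode names and modality sets below are assumptions made for the example, not the filing's actual sets.

```python
# Hypothetical sketch: pick the set of input modalities that may start a
# recording based on whether the device is coupled to its base device,
# then trigger recording when a monitored event matches an active modality.
TETHERED_MODALITIES = {"voice", "gaze", "hand_gesture", "base_controller"}
STANDALONE_MODALITIES = {"voice", "hand_gesture"}

def active_modalities(coupled_to_base: bool) -> set:
    """Select which input modalities may initiate a recording."""
    return TETHERED_MODALITIES if coupled_to_base else STANDALONE_MODALITIES

def maybe_start_recording(coupled_to_base: bool, sensor_events: list) -> bool:
    """Start recording when any monitored event matches an active modality."""
    allowed = active_modalities(coupled_to_base)
    return any(event in allowed for event in sensor_events)

print(maybe_start_recording(False, ["base_controller"]))  # False (standalone)
print(maybe_start_recording(True, ["base_controller"]))   # True  (tethered)
```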
Haptic hand controller system for mixed reality
The technology disclosed herein includes a controller or device that provides multi-dimensional hand interaction with the digital world by delivering physical sensations to the palm and the fingertips. The device translates motion from the hand and fingers into control of a computer device, while simultaneously receiving signals to display haptic sensations. The device is "controller-held" around a user's hand, holding onto hand anatomy at key locations. In some embodiments, the device has one-handed engagement and disengagement. In some embodiments, the device may be used as a game controller, incorporating WebVR electronics and software, wireless communication, power-harvesting electronics, inertial measurement unit (IMU) electronics including additional inputs for camera-based IMU supplementation, battery-recharging electronics, and internal communication protocol support electronics. In some embodiments, the device may be used in non-gaming environments, and include additional electronics that support universal remote controller components, IoT compatibility, and compatibility for wireless charging.
Multimodal dialog in a motor vehicle
A method for carrying out a multimodal dialog in a vehicle, in particular a motor vehicle, which improves the interaction between the vehicle and a vehicle user by providing a dialog that is as natural as possible. To this end, the following acts are performed: sensing an input of a vehicle user for activating a voice dialog, and activating gesture recognition.
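The two acts above couple the modalities: activating the voice dialog also switches on gesture recognition so both can serve the same dialog. A minimal sketch, with class and method names that are illustrative assumptions:

```python
# Hypothetical sketch: one user input activates the voice dialog and, as a
# side effect, enables gesture recognition for the multimodal dialog.
class MultimodalDialog:
    def __init__(self):
        self.voice_dialog_active = False
        self.gesture_recognition_active = False

    def on_user_input(self, activates_voice: bool):
        # Sensing an input that activates the voice dialog also activates
        # gesture recognition, per the two acts of the method.
        if activates_voice:
            self.voice_dialog_active = True
            self.gesture_recognition_active = True

dialog = MultimodalDialog()
dialog.on_user_input(activates_voice=True)
print(dialog.gesture_recognition_active)  # True
```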
Automated robotic process selection and configuration
A system for selection and configuration of an automated robotic process includes a media input module structured to receive at least one functional media, a media analysis module structured to analyze the at least one functional media and identify an action parameter, and a solution selection module structured to select at least one component of an AI solution for use in an automated robotic process, wherein the selection is based, at least in part, on the action parameter.
Virtual and augmented reality instruction system
A virtual and augmented reality instruction system may include a complete format and a portable format. The complete format may include a board system to capture all movement (including writing and erasing) on the board's surface, and a tracking system to capture all physical movements. The portable format may include a touch-enabled device or digital pen and a microphone, and is designed to capture a subset of the data captured by the complete format. In one embodiment of the complete format, the board system and the tracking system can communicate with each other through a network, and control devices (such as laptops, desktops, mobile phones, and tablets) can be used to control the board system and tracking system through the network. In further embodiments of the complete format, augmented reality can be achieved within the tracking system through the combination of 3D sensors and see-through augmented reality glasses.
Information processing apparatus, information processing method, and non-transitory computer-readable storage medium
An information processing apparatus comprises a first acquiring unit configured to acquire a command input to application software, a second acquiring unit configured to acquire scene information representing a scene represented by a screen displayed when executing the application software, a third acquiring unit configured to acquire a command file based on the command and the scene information, and an execution unit configured to execute processing in accordance with the command file.
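The three acquiring units can be pictured as a lookup keyed on both the command and the scene, so the same command resolves to different command files in different scenes. The table contents and the `execute` stub below are assumptions made for the example.

```python
# Hypothetical sketch: resolve a command file from (command, scene) and
# execute processing in accordance with it.
COMMAND_FILES = {
    ("attack", "battle"): "battle_attack.cmd",
    ("attack", "menu"): "menu_noop.cmd",
}

def acquire_command_file(command: str, scene: str):
    """Third acquiring unit: resolve a command file from command + scene."""
    return COMMAND_FILES.get((command, scene))

def execute(command: str, scene: str) -> str:
    """Execution unit: run the resolved command file, or do nothing."""
    cmd_file = acquire_command_file(command, scene)
    return f"executed {cmd_file}" if cmd_file else "no-op"

print(execute("attack", "battle"))  # executed battle_attack.cmd
```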