Patent classifications
G06F3/005
See-through computer display systems with adjustable zoom cameras
Aspects of the present invention relate to methods and systems for see-through computer display systems with adjustable-zoom cameras positioned such that their respective capture fields of view at least partially overlap at a target distance.
Neuromuscular text entry, writing and drawing in augmented reality systems
Methods and systems for providing input to an augmented reality system or an extended reality system based, at least in part, on neuromuscular signals. The methods and systems comprise: detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that a computerized system is in a mode configured to provide input, including text, to the augmented reality system; identifying the input based, at least in part, on the neuromuscular signals and/or on information derived from the neuromuscular signals, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
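The steps in the abstract above can be sketched as a minimal pipeline in which the system's current mode gates how raw signals are interpreted. All names and thresholds here are hypothetical illustrations, not the patent's actual decoder:

```python
# Hedged sketch (hypothetical names): mode-conditioned identification of
# input from neuromuscular sensor samples.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class InputMode(Enum):
    TEXT = auto()
    DRAWING = auto()

@dataclass
class NeuromuscularSample:
    channel: int
    amplitude: float

def identify_input(samples: List[NeuromuscularSample], mode: InputMode) -> str:
    """Toy identifier: the same signals yield different input per mode."""
    energy = sum(abs(s.amplitude) for s in samples)
    if mode is InputMode.TEXT:
        # A real system would decode signal features into characters.
        return "keystroke" if energy > 1.0 else "no-input"
    return "stroke" if energy > 1.0 else "no-input"

samples = [NeuromuscularSample(0, 0.8), NeuromuscularSample(1, 0.6)]
print(identify_input(samples, InputMode.TEXT))  # energy 1.4 > 1.0 -> "keystroke"
```

The key point the sketch illustrates is that the identified input depends jointly on the signals and on the mode, matching the "further identified based on the mode" language of the abstract.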
Electronic device and control method thereof
Disclosed is an electronic device. The electronic device comprises: a microphone comprising circuitry; a speaker comprising circuitry; and a processor electrically connected to the microphone and the speaker. When a first user's voice is input through the microphone, the processor identifies the user who uttered it and provides, through the speaker, a first response sound obtained by inputting the first user's voice to an artificial intelligence model trained by an artificial intelligence algorithm. When a second user's voice is input through the microphone, the processor identifies the user who uttered it; if that user is the same as the user who uttered the first user's voice, the processor provides, through the speaker, a second response sound obtained by inputting the second user's voice and utterance history information to the artificial intelligence model. In particular, at least some of the methods of providing a response sound to a user's voice may use an artificial intelligence model trained in accordance with at least one of a machine learning, neural network, or deep learning algorithm.
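The behavior described above can be sketched as a responder that reuses utterance history only when consecutive utterances are identified as coming from the same user. The class and the bracketed reply format are hypothetical stand-ins for the patent's AI model:

```python
# Hedged sketch: speaker-conditioned dialogue context. The f-string replies
# stand in for a real model's response sound.
class VoiceAssistant:
    def __init__(self):
        self.last_user = None
        self.history = []

    def respond(self, user_id: str, utterance: str) -> str:
        if user_id == self.last_user:
            # Same identified speaker: feed prior turns as utterance history.
            context = " | ".join(self.history)
            reply = f"[{user_id} w/ context: {context}] {utterance}"
        else:
            # Different speaker: start a fresh conversation context.
            self.history = []
            reply = f"[{user_id}] {utterance}"
        self.last_user = user_id
        self.history.append(utterance)
        return reply
```

For example, two consecutive utterances from the same user produce a context-aware second reply, while an utterance from a different user resets the history.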
Method and apparatus for processing a screen using a device
A method and an apparatus for processing a screen by using a device are provided. The method includes: obtaining, at a second device, a display screen displayed on a first device and information related to the display screen, according to a screen display request regarding the first device; determining, at the second device, an additional screen based on the display screen on the first device and the information related to the display screen; and displaying the additional screen near the display screen on the first device.
Enhanced input using recognized gestures
A representation of a user can move with respect to a graphical user interface based on input from the user. The graphical user interface comprises a central region and interaction elements disposed outside the central region. The interaction elements are not shown until the representation of the user is aligned with the central region. A gesture of the user is recognized and, based on the recognized gesture, the display of the graphical user interface is altered and an application control is outputted.
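The reveal behavior described above can be sketched as a simple alignment test: interaction elements stay hidden until the user's representation falls within the central region. The function name and radius are illustrative assumptions, not the patent's actual geometry:

```python
# Toy sketch: interaction elements become visible only when the user's
# representation (cursor) is aligned with the GUI's central region.
def elements_visible(cursor, center, radius=10):
    """cursor/center: (x, y); alignment = within `radius` of the center."""
    dx, dy = cursor[0] - center[0], cursor[1] - center[1]
    return dx * dx + dy * dy <= radius * radius

print(elements_visible((52, 48), (50, 50)))  # aligned -> True
print(elements_visible((90, 10), (50, 50)))  # not aligned -> False
```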
Method for controlling an application employing identification of a displayed image
An application control system and method are adapted for use with an entertainment system of a type that includes a display, such as a monitor or TV, having display functions. A control device, which may be conveniently held by a user, employs an imager. The control system and method image the screen of the TV or other display to detect distinctive markers displayed on the screen. This information is transmitted to the entertainment system to control an application, or is used by the control device to control an application.
Vehicular vision system
A vehicular vision system includes a camera disposed at a vehicle, at least one non-vision sensor disposed at the vehicle, and a display system of the vehicle that displays video images for viewing by the driver of the vehicle. Image data captured by the camera and sensor data sensed by the non-vision sensor are provided to a control of the vehicle. Responsive at least in part to processing at the control of image data captured by the camera, video images are displayed by a video display screen of the display system. The vehicular vision system determines an augmented reality overlay and the video display screen also displays the augmented reality overlay. The displayed augmented reality overlay pertains to at least one accessory of the equipped vehicle and/or is responsive at least in part to a driving condition of the equipped vehicle.
Personalized videos using selfies and stock videos
A method is provided that includes displaying, by a computing device, representations of a plurality of stock videos to a user. The representations may be a still image, a partial clip, and/or a full play of the stock video. Each of the representations includes a face outline for insertion of a facial image of a user. When the user has provided a self-image to the computing device, the facial image of the user, extracted from the self-image, is inserted into the face outline of the representations. The method may include receiving a selection of one of the representations of the plurality of stock videos, and displaying a personalized video comprising the selected stock video with the facial image positioned within a further face outline corresponding to the face outline of the selected representation.
Embedding a trainer in virtual reality (VR) environment using chroma-keying
A virtual reality (VR) system comprising a head-mounted display (HMD) and handheld controller set is enhanced to provide a more realistic end user VR experience. According to this disclosure, a chroma-keyed trainer is embedded in the VR environment for use in workouts. This creates the illusion that the trainer is with the user in the virtual room. To create this effect, the trainer is filmed using a chroma key setup (e.g., a green screen background) and with standard cameras. The video is used in the headset with the keyed color (green) removed. As the training video is rendered, a set of virtual reality (VR) events that have been synchronized to movements of the trainer in the training video are output to the user so that the trainer actively participates in the workout with the user.
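The keyed-color removal described above can be sketched as a per-pixel test: pixels whose green channel clearly dominates are made transparent so the trainer can be composited into the virtual room. The dominance threshold is an illustrative assumption; production keyers operate on full video frames with far more sophisticated matting:

```python
# Minimal chroma-key sketch: pixels where green dominates red and blue are
# given alpha 0 (transparent); all others are kept opaque.
def chroma_key(pixels, threshold=1.3):
    """pixels: list of (r, g, b) tuples; returns list of (r, g, b, a)."""
    out = []
    for r, g, b in pixels:
        # A pixel is "keyed" when green exceeds the other channels by the
        # threshold factor (max(..., 1) avoids division-free zero issues).
        keyed = g > threshold * max(r, b, 1)
        out.append((r, g, b, 0 if keyed else 255))
    return out

frame = [(20, 240, 30), (200, 180, 170)]  # green-screen pixel, skin-tone pixel
print(chroma_key(frame))  # [(20, 240, 30, 0), (200, 180, 170, 255)]
```

The transparent pixels let the VR renderer show the virtual room behind the trainer, creating the illusion that the trainer is present in the scene.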
Systems and methods for non-contacting interaction with user terminals
Systems and methods are provided to enable users to interact with user terminals having a touch screen interface without requiring the user to physically contact a surface of the touch screen interface.