Patent classifications
G06V40/28
Artificial reality collaborative working environments
- Michael James LeBeau ,
- Manuel Ricardo Freire Santos ,
- Aleksejs Anpilogovs ,
- Alexander Sorkine Hornung ,
- Bjorn Wanbo ,
- Connor Treacy ,
- Fangwei Lee ,
- Federico Ruiz ,
- Jonathan Mallinson ,
- Jonathan Richard Mayoh ,
- Marcus Tanner ,
- Panya Inversin ,
- Sarthak Ray ,
- Sheng Shen ,
- William Arthur Hugh Steptoe ,
- Alessia Marra ,
- Gioacchino Noris ,
- Derrick Readinger ,
- Jeffrey Wai-King Lock ,
- Jeffrey Witthuhn ,
- Jennifer Lynn Spurlock ,
- Larissa Heike Laich ,
- Javier Alejandro Sierra Santos
Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
User effort detection
A variety of systems and methods can include evaluation of human-user effort data. Various embodiments apply techniques to identify anomalous effort data in order to detect the efforts of a single person, as well as to segment and isolate multiple persons within a single collection of data. Additional embodiments describe methods for real-time anomaly detection systems that provide indicators for scoring effort data in synthesized risk analysis. Other embodiments include approaches for distinguishing anomalous effort data when the abnormalities are known to be produced by a single entity, as might be applied in medical research and sentiment analysis, as well as for detecting the presence of a single person's effort data among multiple collections, as might be applied in fraud analysis and insider-threat investigations. Embodiments also include techniques for analyzing how adding or removing detected anomalies from a given collection affects subsequent analysis.
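The abstract does not specify a scoring technique; as a minimal sketch, a simple z-score can serve as the "indicator for scoring effort data," and removing high-scoring samples shows the effect of anomalies on subsequent statistics. The function names and the threshold are illustrative assumptions, not taken from the patent.

```python
from statistics import mean, stdev

def anomaly_scores(samples):
    """Score each effort sample by its distance from the collection
    mean, in units of standard deviation (a plain z-score)."""
    mu, sigma = mean(samples), stdev(samples)
    return [abs(x - mu) / sigma for x in samples]

def without_anomalies(samples, threshold=2.0):
    """Return the collection with samples scoring above `threshold`
    removed, so statistics before and after can be compared."""
    scores = anomaly_scores(samples)
    return [x for x, s in zip(samples, scores) if s <= threshold]
```

A single grossly out-of-range sample among many consistent ones scores well above the threshold and is dropped, while the consistent samples survive.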
Sign language information processing method and apparatus, electronic device and readable storage medium
The sign language information processing method and apparatus, electronic device and readable storage medium provided by the present disclosure achieve real-time collection of language data in a user's current communication by obtaining voice information and video information collected by a user terminal in real time. The method then matches a speaker with his or her speech content by determining, in the video information, the speaking object corresponding to the voice information. Finally, an augmented reality (AR) sign language animation corresponding to the voice information is superimposed and displayed on a gesture area corresponding to the speaking object to obtain a sign language video, so that the user can identify the corresponding speaker when viewing the AR sign language animation in the sign language video. This makes it possible to provide an improved user experience.
ACTIVATING CROSS-DEVICE INTERACTION WITH POINTING GESTURE RECOGNITION
A method and handheld device for remotely interacting with a second device. The method and apparatus identify the second device from a plurality of devices based on the gestures of the user. As the user gestures, movement sensors sensing the motion of these gestures can generate signals that can be processed by rule-based and/or learning-based methods. The result of processing these signals can be used to identify the second device. To improve performance, the user can be prompted to confirm that the identified second device is the one the user wants to control remotely. The results of processing these signals can also be used so that the user can remotely interact with the second device.
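One plausible rule-based method, sketched below under assumptions not stated in the abstract: represent each candidate device by its direction relative to the user, and pick the device whose bearing is closest in angle to the sensed pointing vector. The device registry and 2D geometry are purely illustrative.

```python
import math

# Hypothetical device registry: name -> position (x, y) relative to the user.
DEVICES = {"tv": (3.0, 0.0), "speaker": (0.0, 3.0), "lamp": (-2.0, 2.0)}

def identify_device(pointing_vector, devices=DEVICES):
    """Rule-based identification: return the device whose direction from
    the user forms the smallest angle with the pointing vector."""
    px, py = pointing_vector

    def angle_to(pos):
        dx, dy = pos
        dot = px * dx + py * dy
        norm = math.hypot(px, py) * math.hypot(dx, dy)
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    return min(devices, key=lambda name: angle_to(devices[name]))
```

The confirmation prompt described in the abstract would then present `identify_device(...)`'s result to the user before any remote interaction begins.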
DEVICE AND METHOD FOR ACQUIRING DEPTH OF SPACE BY USING CAMERA
A device and method of obtaining a depth of a space are provided. The method includes obtaining a plurality of images by photographing a periphery of a camera a plurality of times while sequentially rotating the camera by a preset angle, identifying a first feature region in a first image and an n-th feature region in an n-th image, the n-th feature region being identical with the first feature region, by comparing adjacent images between the first image and the n-th image from among the plurality of images, obtaining a base line value with respect to the first image and the n-th image, obtaining a disparity value between the first feature region and the n-th feature region, and determining a depth of the first feature region or the n-th feature region based on at least the base line value and the disparity value.
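The final step of the abstract is classical triangulation: once the base line between the first and n-th capture positions and the disparity of the matched feature region are known, depth follows as focal length times base line divided by disparity. A minimal sketch (the parameter names and units are assumptions for illustration):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate depth from a base line (meters), a focal length
    (pixels), and the disparity of a matched feature region (pixels):
    depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 0.1 m base line, an 800 px focal length, and a 40 px disparity place the feature region 2 m away; halving the disparity doubles the estimated depth.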
Driver Attention And Hand Placement Systems And Methods
Driver attention and hand placement systems and methods are disclosed herein. An example method includes providing warning messages to a driver of a vehicle based on steering wheel input or hand-wheel contact by the driver. The warning messages are provided according to a first scheme when the steering wheel input is above a threshold value, and according to a second scheme when the steering wheel input is below the threshold value and images obtained by a camera in the vehicle indicate that at least one hand of the driver is on the steering wheel.
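The scheme selection described above reduces to a small decision rule. A sketch follows, assuming a normalized steering-input value and a boolean camera result; the threshold and scheme names are illustrative, and the abstract does not specify behavior for the remaining case (input below threshold with no hand detected), so that case is left unresolved here.

```python
def warning_scheme(steering_input, hand_on_wheel, threshold=0.2):
    """Choose which warning scheme governs messages to the driver.

    steering_input: normalized magnitude of steering wheel input.
    hand_on_wheel: whether in-vehicle camera images indicate at least
    one of the driver's hands is on the steering wheel.
    """
    if steering_input > threshold:
        return "first scheme"
    if hand_on_wheel:
        return "second scheme"
    return None  # not specified by the abstract
```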
DIGITAL AUDIO WORKSTATION AUGMENTED WITH VR/AR FUNCTIONALITIES
Embodiments of the present technology are directed at features and functionalities of a VR/AR-enabled digital audio workstation. The disclosed audio workstation can be configured to allow users to record, produce, mix, and edit audio in virtual 3D space based on detecting and manipulating human gestures in a virtual reality environment. The audio can relate to music, voice, speeches, background noise, one or more musical instruments, special-effects music, electronic humming or noise from electrical/mechanical equipment, or any other type of audio.
Gesture control for communication with an autonomous vehicle on the basis of a simple 2D camera
A method of recognizing gestures of a person from at least one image from a monocular camera, e.g. a vehicle camera, comprises the steps of: a) detecting key points of the person in the at least one image, b) connecting the key points to form a skeleton-like representation of body parts of the person, wherein the skeleton-like representation represents a relative position and a relative orientation of the respective body parts of the person, c) recognizing a gesture of the person from the skeleton-like representation, and d) outputting a signal indicating the gesture.
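Step c) can be sketched with a toy rule on the skeleton-like representation: a "hand raised" gesture holds when a wrist key point lies above the corresponding shoulder. The key-point names and image-coordinate convention (y grows downward) are assumptions for illustration, not part of the claimed method.

```python
def recognize_gesture(keypoints):
    """Classify a 'hand raised' gesture from detected key points.

    keypoints: dict mapping names like 'left_wrist' to (x, y) image
    coordinates, with y increasing downward as is common for images.
    Returns a signal string indicating the recognized gesture.
    """
    for side in ("left", "right"):
        wrist = keypoints.get(f"{side}_wrist")
        shoulder = keypoints.get(f"{side}_shoulder")
        # Wrist above shoulder (smaller y) means the arm is raised.
        if wrist and shoulder and wrist[1] < shoulder[1]:
            return f"{side}_hand_raised"
    return "none"
```

A real system would layer such rules (or a learned classifier) over a full key-point detector; the dictionary stands in for the skeleton-like representation of step b).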
Mobile terminal and control method therefor
The present invention relates to a device and a control method therefor. More specifically, the device comprises: a memory for storing at least one command; a depth camera for capturing at least one hand of a user; a display module; and a controller for controlling the memory, the depth camera, and the display module. The controller controls the depth camera so as to capture the at least one hand of the user, and controls the display module so as to output visual feedback that changes on the basis of the captured hand of the user.
Enhanced graphical user interface for voice communications
Enhanced graphical user interfaces for transcription of audio and video messages are disclosed. Audio data may be transcribed, and the transcription may include emphasized words and/or punctuation corresponding to emphasis of user speech. Additionally, the transcription may be translated into a second language. A message spoken by a user depicted in one or more images of video data may also be transcribed and provided to one or more devices.