Patent classifications
G06F3/011
GRAPHICAL MENU STRUCTURE
A human interface including steps of presenting an image and then receiving a gesture from the user. The image is analyzed to identify its elements, which are compared to known images, after which either an input is solicited from the user or a menu is displayed to the user. Comparing the image and/or graphical image elements may be effectuated using a trained artificial intelligence engine or, in some embodiments, with a structured data source, said data source including predetermined images and menu options. If the image is known, a predetermined menu is presented. If the image is not known, an image or other menu options are presented, and the desired options are solicited from the user. Once the user selects an option, the resulting selection may be used to further train the AI system or added to the structured data source for future reference.
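The lookup-then-fallback flow described above can be sketched with an in-memory dictionary standing in for the structured data source. This is a hypothetical illustration; all names (`present_menu`, `record_selection`, `known_menus`) are assumptions, not terms from the patent:

```python
def present_menu(image_label, known_menus):
    """Return the predetermined menu if the image is known;
    return None so the caller can solicit input from the user otherwise."""
    if image_label in known_menus:
        return known_menus[image_label]  # known image -> predetermined menu
    return None                          # unknown image -> solicit options

def record_selection(image_label, selection, known_menus):
    """Add the user's choice to the structured data source for future reference."""
    options = known_menus.setdefault(image_label, [])
    if selection not in options:
        options.append(selection)

# Example: a known image yields its menu; an unknown one is learned.
menus = {"coffee_cup": ["Order refill", "Nutrition info"]}
print(present_menu("coffee_cup", menus))
record_selection("house_plant", "Set watering reminder", menus)
print(present_menu("house_plant", menus))
```

In an AI-backed embodiment, `record_selection` would instead feed the (image, selection) pair back as a training example.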
VISUALIZATION OF A KNOWLEDGE DOMAIN
An exemplary process obtains sensor data corresponding to a physical environment including one or more physical objects. A physical property of the one or more physical objects is determined based on the sensor data. A presentation mode associated with a knowledge domain is determined. An extended reality environment including a view of the physical environment and a visualization selected based on the determined presentation mode is provided. The visualization includes virtual content associated with the knowledge domain. The virtual content is provided based on display characteristics specified by the presentation mode that depend upon the physical property of the one or more objects.
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content in augmented reality space. A method includes forming a three-dimensional (3D) map of the surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of the user's gaze point based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user's gaze.
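The depth-wise gaze location from eye vergence can be approximated by simple triangulation: the two gaze rays converge at the fixation point, so depth follows from the interpupillary distance (IPD) and the vergence angle. A minimal sketch of that geometry, not the patent's actual method:

```python
import math

def gaze_depth(ipd_m, vergence_deg):
    """Estimate the depth of the gaze point from eye vergence.

    The gaze rays of the two eyes form an isosceles triangle whose
    base is the IPD and whose apex angle is the vergence angle, so
    depth ~= (IPD / 2) / tan(vergence / 2).  Illustrative only.
    """
    half_angle = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle)

# With a 63 mm IPD, a vergence of ~3.6 degrees puts the gaze point
# at roughly one metre; halving the vergence roughly doubles the depth.
print(gaze_depth(0.063, 3.6))
print(gaze_depth(0.063, 1.8))
```

The HMD would feed per-eye gaze directions from its eye tracker into this relation to place the rendered MR object at the matching depth in the 3D map.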
COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
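The abstract's "defined rule" on the level of motion is not specified; one plausible reduction is a stillness threshold, so that a measurement request is transmitted only when the device is steady enough to be located reliably. A hypothetical sketch under that assumption (threshold value and all names are illustrative):

```python
def should_request_position(motion_level, threshold=0.05):
    """Illustrative 'defined rule': the device is considered still
    enough to be measured when its motion level is at or below
    an assumed threshold."""
    return motion_level <= threshold

def maybe_request_position(motion_samples, transmit, threshold=0.05):
    """Send one measurement request per run of stillness.

    `transmit` stands in for the network-interface call that asks the
    second device to measure this device's position."""
    requested = False
    for level in motion_samples:
        if should_request_position(level, threshold):
            if not requested:
                transmit()       # ask the second device to measure position
                requested = True
        else:
            requested = False    # motion resumed; re-arm for the next still period
```

Each successful request would then be followed by storing the measured position and receiving the second device's acknowledgement, as the abstract describes.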
ROTATIONAL DEVICE FOR AN AUGMENTED REALITY DISPLAY SURFACE USING NFC TECHNOLOGY
A device for displaying AR markings comprising a top and a base, with the top rotatably attached to the base, and the base configured to be held by a hand or placed on a fixed surface. The AR markings are positioned on the top such that when the top rotates with respect to the base, so do the AR markings. When the AR markings are scanned by an appropriate scanning and display device, such as a smartphone, a 3D image associated with the AR markings will be displayed on the display device as an augmented reality projection. When the top rotates with respect to the base, so too does the augmented reality projection.
TEMPERATURE STIMULUS PRESENTATION APPARATUS AND METHOD
A temperature stimulus presentation device 1 includes a presentation unit 11 that generates heat at a predetermined temperature, and a control unit 12 that changes an area of the presentation unit 11 depending on whether or not a temperature stimulus is presented.
A METHOD FOR ADAPTING TO A DRIVER POSITION AN IMAGE DISPLAYED ON A MONITOR IN A VEHICLE CAB
The invention relates to a method for adapting an image displayed on a monitor in the cab of a vehicle to a driver position. The invention also relates to a system for such adaptation, and further to a vehicle comprising such a system.
VIRTUAL CONTENT EXPERIENCE SYSTEM AND CONTROL METHOD FOR SAME
Disclosed is a virtual content experience system. In the virtual content experience system, a central server for driving the system comprises: a content conversion unit which converts two-dimensional image content, received by means of a data transmission and reception unit or input by a user, into a stereoscopic image; a motion information generation unit which recognizes text information extracted from the two-dimensional image content and converts the text information into motion information; a content playback control unit which is provided to transmit the motion information to a motion information management unit provided in a virtual reality experience chair, or to receive start information and end information about the motion information from the motion information management unit so as to generate and change control information for controlling whether to provide new two-dimensional image content; and a display unit for displaying the content converted by the content conversion unit, together with the motion information or control information.
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
The present technology relates to an information processing device and an information processing method capable of allowing users at remote locations to each gain a deeper grasp of the condition of the space where the other is present. Provided is an information processing device including a control unit, wherein, between a first space where a first imaging device and a first display device are installed and a second space where a second imaging device and a second display device are installed, when an image captured by the imaging device in one of the spaces is displayed by the display device in the other space in real time, the control unit performs control for presenting a state of the second space in an ineffective region of the display region of the first display device, the ineffective region excluding the effective region in which the image captured by the second imaging device is displayed. The present technology can be applied to, for example, a video communication system.
VR-Based Treatment System and Method
An XR-based system (virtual reality, augmented reality, or mixed reality system) is provided to visualize and resolve at least one condition of a subject. A dynamic virtual representation of the subject's body is generated based on physical traits and movement of the subject's body captured by at least one motion tracking device, and is rendered in the extended reality environment. The dynamic virtual representation is synchronized with the movement of the subject's body. A virtual representation of at least one condition of the subject is generated in response to one or more inputs and overlaid or rendered on the virtual representation of the subject's body, and one or more inputs representing one or more attributes of the condition are received and processed to adjust the virtual representation of the condition of the subject in the extended reality environment.