Patent classifications
G06F2203/011
HEAD-MOUNTED DISPLAY SENSOR STATUS
An example device comprises: a head-mounted display; a housing for the head-mounted display, the housing including an external surface; sensors to monitor a wearer of the head-mounted display; a visual indicator at the external surface; and a controller. The controller is generally to: control subsets of the sensors to be on or off based on respective permissions for usage of subsets of the sensors; and control the visual indicator to indicate respective status of the subsets of the sensors.
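The abstract describes per-subset permission control mirrored on an external visual indicator. Below is a minimal Python sketch of that control flow; the SensorSubset and HMDController classes, the two-color indicator, and the permission map are illustrative assumptions, not names or details from the patent.

```python
from dataclasses import dataclass, field
from enum import Enum


class IndicatorColor(Enum):
    """Illustrative status colors for the external visual indicator."""
    GREEN = "on"    # subset permitted and active
    RED = "off"     # subset denied and powered down


@dataclass
class SensorSubset:
    """A named group of wearer-monitoring sensors (e.g. gaze, heart rate)."""
    name: str
    enabled: bool = False


@dataclass
class HMDController:
    """Toggles sensor subsets per permissions and mirrors status on the indicator."""
    subsets: dict = field(default_factory=dict)    # name -> SensorSubset
    indicator: dict = field(default_factory=dict)  # name -> IndicatorColor

    def apply_permissions(self, permissions: dict) -> None:
        for name, allowed in permissions.items():
            subset = self.subsets.setdefault(name, SensorSubset(name))
            subset.enabled = allowed               # control the subset on/off
            self.indicator[name] = (
                IndicatorColor.GREEN if allowed else IndicatorColor.RED
            )


controller = HMDController()
controller.apply_permissions({"gaze_tracking": True, "heart_rate": False})
print(controller.indicator)   # gaze_tracking -> GREEN, heart_rate -> RED
```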
TELEPRESENCE SYSTEM
The present disclosure relates to a telepresence system that enables better communication. The telepresence system includes: a network that connects a plurality of bases; and a plurality of telepresence apparatuses that transmit and receive video images and sound via the network, and share the video images and the sound between the respective bases. Further, in each of the telepresence apparatuses, a presentation device that performs presentation for prompting communication by users at different bases is disposed at the center of a shared communication space in which the users communicate with each other. The present technology can be applied to a telepresence system, for example.
METHODS AND SYSTEMS FOR SUGGESTING AN ENHANCED MULTIMODAL INTERACTION
Provided are methods and systems for suggesting an enhanced multimodal interaction. The method for suggesting at least one modality of interaction includes: identifying, by an electronic device, initiation of an interaction by a user with a first device using a first modality; detecting, by the electronic device, an intent of the user and a state of the user based on the identified initiated interaction; determining, by the electronic device, at least one of a second modality and at least one second device, to continue the initiated interaction, based on the detected intent of the user and the detected state of the user; and providing, by the electronic device, a suggestion to the user to continue the interaction with the first device using the determined second modality, by indicating the second modality on the first device or the at least one second device.
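A rough Python sketch of the suggestion flow (identify the initiated interaction, detect intent and state, pick a second modality and device, return the suggestion); the toy intent and state detectors stand in for the unspecified models in the abstract, and all field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Interaction:
    """An interaction a user has initiated with a first device (illustrative)."""
    device: str
    modality: str       # e.g. "voice", "touch", "gesture"
    utterance: str


def detect_intent_and_state(interaction: Interaction) -> tuple[str, str]:
    """Stand-in for the intent/state detection described in the abstract."""
    intent = "play_music" if "play" in interaction.utterance else "unknown"
    state = "hands_busy" if interaction.modality == "voice" else "idle"
    return intent, state


def suggest_continuation(interaction: Interaction) -> dict:
    """Pick a second modality and/or second device to continue the interaction."""
    intent, state = detect_intent_and_state(interaction)
    if state == "hands_busy":
        # keep the interaction hands-free: continue by voice on a nearby device
        return {"second_modality": "voice", "second_device": "smart_speaker",
                "indicate_on": interaction.device}
    # otherwise fall back to touch on the same device
    return {"second_modality": "touch", "second_device": interaction.device,
            "indicate_on": interaction.device}


print(suggest_continuation(Interaction("phone", "voice", "play my playlist")))
```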
System and method for making a recommendation for a user of a life management system
A life management system receives data from a client device worn by a user, the data comprising biotelemetry data and activity data collected about the user wearing the client device. The life management system generates snapshot information using information from a group consisting of: the biotelemetry data, the activity data, social data associated with the user, and user profile information associated with the user. The life management system generates a recommendation using portions of the snapshot information, and updates the snapshot information with the recommendation. The life management system executes a recommendation associated with the snapshot information in accordance with user controls associated with the user.
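A small sketch of the snapshot/recommendation loop under stated assumptions: the snapshot fields, the rule that produces a recommendation, and the user-control flag are illustrative choices, not taken from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class Snapshot:
    """Aggregated view built from biotelemetry and activity data (illustrative)."""
    avg_heart_rate: float
    steps_today: int
    recommendations: list = field(default_factory=list)


def generate_snapshot(biotelemetry: list[float], activity: list[int]) -> Snapshot:
    return Snapshot(
        avg_heart_rate=sum(biotelemetry) / len(biotelemetry),
        steps_today=sum(activity),
    )


def generate_recommendation(snapshot: Snapshot) -> str:
    # simple illustrative rule: suggest a walk when activity is low
    if snapshot.steps_today < 5000:
        return "take_a_walk"
    return "keep_it_up"


def execute(snapshot: Snapshot, user_controls: dict) -> None:
    rec = generate_recommendation(snapshot)
    snapshot.recommendations.append(rec)         # update the snapshot with the recommendation
    if user_controls.get("allow_notifications", False):
        print(f"Notify user: {rec}")             # execute per the user's controls


snap = generate_snapshot(biotelemetry=[62.0, 70.0, 68.0], activity=[1200, 800, 1500])
execute(snap, user_controls={"allow_notifications": True})
```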
ARTIFICIAL INTELLIGENCE SYSTEM, ARTIFICIAL INTELLIGENCE PROGRAM, AND NATURAL LANGUAGE PROCESSING SYSTEM
An artificial intelligence system includes: a storage configured to previously store a data model; a generator configured to extract the data model from the storage and generate a human object capable of reproducing a motion and a thought of a human; a world builder including a first platform and a second platform and configured to construct a world in which a motion and a thought of the human object are developed, the human object being disposed on the first platform and the second platform; an external world reproduction unit configured to dispose the human object on the first platform and reproduce an external world; and an output determiner configured to obtain an external situation by recognizing the external world reproduced on the first platform, dispose the human object on the second platform, and determine an output to the outside by manipulating the human object.
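One way to picture the two-platform arrangement, as a toy Python sketch; the model contents, the "situation" encoding, and the reaction rule are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class HumanObject:
    """Object generated from a stored data model to reproduce motion and thought."""
    model_name: str
    state: dict = field(default_factory=dict)


class Storage:
    """Holds previously stored data models."""
    def __init__(self):
        self._models = {"default_human": {"reaction": "greet"}}

    def load(self, name: str) -> dict:
        return self._models[name]


def generate_human_object(storage: Storage, name: str) -> HumanObject:
    return HumanObject(model_name=name, state=dict(storage.load(name)))


def reproduce_external_world(observation: str) -> dict:
    """First platform: reproduce the external world from an observation."""
    return {"situation": observation}


def determine_output(human: HumanObject, situation: dict) -> str:
    """Second platform: manipulate the human object and decide an output."""
    if situation["situation"] == "person_approaching":
        return human.state["reaction"]        # e.g. respond with a greeting
    return "idle"


storage = Storage()
human = generate_human_object(storage, "default_human")
world = reproduce_external_world("person_approaching")
print(determine_output(human, world))         # -> "greet"
```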
AUGMENTED REALITY ARTIFICIAL INTELLIGENCE ENHANCE WAYS USER PERCEIVE THEMSELVES
Methods and systems are provided for generating augmented reality (AR) scenes in which the AR scene can be adjusted to modify at least part of an image of the physical features of a user to produce a virtual mesh of those physical features. The method includes generating an AR scene for rendering on a display for a user wearing AR glasses, the AR scene including a real-world space and virtual objects overlaid in the real-world space. The method includes analyzing a field of view into the AR scene from the AR glasses; the analyzing is configured to detect images of physical features of the user when the field of view is directed toward at least part of said physical features. The method includes adjusting the AR scene, in substantially real time, to modify at least part of the images of the physical features of the user when those features are detected to be in the AR scene as viewed from the field of view of the AR glasses, wherein said modifying includes detecting depth data and original texture data from said physical features to produce a virtual mesh of said physical features; the virtual mesh is changed in size and shape and rendered using modified texture data that blends with said original texture data. In one embodiment, the modified physical features of the user appear to the user, when viewed via the AR glasses, as existing in the real-world space. In this way, when the physical features of a user are detected to be in the AR scene, the physical features are augmented in the AR scene, which can improve the user's self-perception and, in turn, give the user confidence to overcome challenging tasks or obstacles during gameplay.
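A minimal NumPy sketch of the mesh step (depth plus original texture in, resized mesh with blended texture out); the toy pinhole lift without camera intrinsics and the fixed blend weight are assumptions, not the patent's method.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class VirtualMesh:
    """Mesh built from depth and texture captured for a detected physical feature."""
    vertices: np.ndarray    # (N, 3) positions derived from depth data
    texture: np.ndarray     # (H, W, 3) texture image


def build_mesh(depth: np.ndarray, texture: np.ndarray) -> VirtualMesh:
    # lift each depth pixel to a 3-D point (toy model, no camera intrinsics)
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1).astype(float)
    return VirtualMesh(vertices=vertices, texture=texture)


def adjust_mesh(mesh: VirtualMesh, scale: float, modified_texture: np.ndarray,
                blend: float = 0.5) -> VirtualMesh:
    """Change size/shape and blend modified texture with the original texture."""
    resized = mesh.vertices * scale
    blended = (blend * modified_texture + (1.0 - blend) * mesh.texture).astype(
        mesh.texture.dtype)
    return VirtualMesh(vertices=resized, texture=blended)


depth = np.full((4, 4), 2.0)                        # toy depth map of a feature
original = np.zeros((4, 4, 3), dtype=np.uint8)      # original texture data
modified = np.full((4, 4, 3), 255, dtype=np.uint8)  # modified texture to blend in
mesh = adjust_mesh(build_mesh(depth, original), scale=1.1, modified_texture=modified)
print(mesh.vertices.shape, mesh.texture.mean())
```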
Control method, terminal, and system using environmental feature data and biological feature data to display a current movement picture
A control method includes obtaining feature data using at least one sensor, the feature data being acquired by the terminal using the at least one sensor, generating an action instruction based on the feature data and a decision-making mechanism of the terminal, and executing the action instruction. In this application, various aspects of feature data are acquired using a plurality of sensors, data analysis is performed on the feature data, and a corresponding action instruction is then generated based on a corresponding decision-making mechanism to implement interactive control.
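A compact sketch of the sense-decide-execute loop the abstract describes; the sensor fields and the decision rules are illustrative stand-ins for the terminal's actual decision-making mechanism.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FeatureData:
    """Environmental and biological feature data acquired via the terminal's sensors."""
    ambient_light: float     # lux, from a light sensor
    heart_rate: float        # bpm, from a biometric sensor


def decision_mechanism(features: FeatureData) -> str:
    """Maps analyzed feature data to an action instruction (illustrative rules)."""
    if features.heart_rate > 120:
        return "show_movement_picture"       # display the current movement picture
    if features.ambient_light < 10:
        return "dim_display"
    return "no_action"


def control_loop(read_sensors: Callable[[], FeatureData],
                 execute: Callable[[str], None]) -> None:
    features = read_sensors()                    # obtain feature data from the sensors
    instruction = decision_mechanism(features)   # generate an action instruction
    execute(instruction)                         # execute the action instruction


control_loop(
    read_sensors=lambda: FeatureData(ambient_light=300.0, heart_rate=135.0),
    execute=lambda instr: print("executing:", instr),
)
```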
System for extended reality visual contributions
Aspects of the subject disclosure may include, for example, receiving information about a task to be completed by a user, receiving information about the user, and receiving information about a physical environment of the user. The subject disclosure may further include creating one or more immersion objects based on the information about the task, the information about the user, and the information about the physical environment, creating an immersive environment including the one or more immersion objects and at least a portion of the physical environment of the user, and communicating to an extended reality (XR) device of the user information about the immersive environment to create an immersive experience for completion of the task by the user. Other embodiments are disclosed.
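A short sketch of how immersion objects might be derived from task, user, and environment information and pushed to an XR device; the object fields, the anchor strings, and the FakeXRDevice stub are assumptions made for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class ImmersionObject:
    """A virtual object created to support completion of the task (illustrative)."""
    label: str
    anchor: str          # where in the physical environment it is placed


@dataclass
class ImmersiveEnvironment:
    objects: list = field(default_factory=list)
    physical_portion: str = ""


def create_immersion_objects(task: dict, user: dict, environment: dict) -> list:
    objects = [ImmersionObject(label=f"step: {s}", anchor=environment["work_surface"])
               for s in task["steps"]]
    if user.get("needs_guidance"):
        objects.append(ImmersionObject(label="guide_avatar", anchor="front_of_user"))
    return objects


def build_and_send(task: dict, user: dict, environment: dict, xr_device) -> None:
    env = ImmersiveEnvironment(
        objects=create_immersion_objects(task, user, environment),
        physical_portion=environment["room"],
    )
    xr_device.render(env)        # communicate the immersive environment to the XR device


class FakeXRDevice:
    def render(self, env: ImmersiveEnvironment) -> None:
        print(f"{len(env.objects)} objects overlaid in {env.physical_portion}")


build_and_send(
    task={"steps": ["measure", "cut", "assemble"]},
    user={"needs_guidance": True},
    environment={"room": "workshop", "work_surface": "bench"},
    xr_device=FakeXRDevice(),
)
```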
Affective-cognitive load based digital assistant
Embodiments of the present disclosure set forth a computer-implemented method comprising: receiving, from at least one sensor, sensor data associated with an environment; computing, based on the sensor data, a cognitive load associated with a user within the environment; computing, based on the sensor data, an affective load associated with an emotional state of the user; determining, based on both the cognitive load and the affective load, an affective-cognitive load; determining, based on the affective-cognitive load, a user readiness state associated with the user; and causing one or more actions to occur based on the user readiness state.
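A toy Python sketch of combining the two loads into an affective-cognitive load and mapping it to a readiness state; the sensor proxies, the normalizations, and the 0.6/0.4 weighting are assumptions, not values from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class SensorData:
    """Sensor readings from the environment (illustrative fields)."""
    pupil_diameter_mm: float        # proxy for cognitive load
    heart_rate_variability: float   # proxy for emotional arousal


def cognitive_load(data: SensorData) -> float:
    # toy normalization of pupil diameter into [0, 1]
    return min(max((data.pupil_diameter_mm - 2.0) / 6.0, 0.0), 1.0)


def affective_load(data: SensorData) -> float:
    # lower heart-rate variability taken as higher emotional load (toy rule)
    return min(max(1.0 - data.heart_rate_variability / 100.0, 0.0), 1.0)


def readiness_state(data: SensorData) -> str:
    # affective-cognitive load as a simple weighted combination (assumed weighting)
    acl = 0.6 * cognitive_load(data) + 0.4 * affective_load(data)
    if acl > 0.7:
        return "overloaded"    # e.g. defer notifications, simplify the interface
    if acl > 0.4:
        return "engaged"
    return "ready"


print(readiness_state(SensorData(pupil_diameter_mm=6.5, heart_rate_variability=30.0)))
```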
Modifying virtual content to invoke a target user state
In one implementation, a method includes: while presenting reference CGR content, obtaining a request from a user to invoke a target state for the user; generating, based on a user model and the reference CGR content, modified CGR content to invoke the target state for the user; presenting the modified CGR content; after presenting the modified CGR content, determining a resultant state of the user; in accordance with a determination that the resultant state of the user corresponds to the target state for the user, updating the user model to indicate that the modified CGR content successfully invoked the target state for the user; and in accordance with a determination that the resultant state of the user does not correspond to the target state for the user, updating the user model to indicate that the modified CGR content did not successfully invoke the target state for the user.
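A small sketch of the update loop (generate modified content from a user model, present it, compare the resultant state to the target, record the outcome); the candidate modifications, the success-rate bookkeeping, and the CGR content encoding are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Tracks which content modifications have invoked which target states."""
    outcomes: dict = field(default_factory=dict)   # (target, modification) -> (hits, attempts)

    def record(self, target: str, modification: str, success: bool) -> None:
        hits, attempts = self.outcomes.get((target, modification), (0, 0))
        self.outcomes[(target, modification)] = (hits + int(success), attempts + 1)


def _rate(model: UserModel, target: str, modification: str) -> float:
    hits, attempts = model.outcomes.get((target, modification), (0, 0))
    return hits / attempts if attempts else 0.5    # unexplored modifications get a prior


def generate_modified_content(reference: str, target: str, model: UserModel) -> str:
    # pick the modification with the best success rate so far (assumed strategy)
    candidates = ["dim_lighting", "add_calming_audio", "slow_motion"]
    best = max(candidates, key=lambda m: _rate(model, target, m))
    return f"{reference} + {best}"


def present_and_update(reference: str, target: str, model: UserModel,
                       measure_state) -> None:
    modified = generate_modified_content(reference, target, model)
    resultant = measure_state(modified)            # e.g. inferred from sensors
    model.record(target, modified.split(" + ")[1], resultant == target)


model = UserModel()
present_and_update("forest_scene", "relaxed", model,
                   measure_state=lambda content: "relaxed")
print(model.outcomes)    # {('relaxed', 'dim_lighting'): (1, 1)}
```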