Electronic Device

To provide an electronic device capable of recognizing a user's emotion with high accuracy. The electronic device includes a detection device, an arithmetic device, and a housing. The housing includes a space at a position overlapping with the user's nose when the user wears the electronic device. The detection device is located between the housing and the user's nose. The detection device has a function of obtaining the user's data relating to an emotion of the user and outputting the user's data to the arithmetic device. The arithmetic device has a function of generating display data based on the user's data and outputting the display data.
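The abstract's data path (detection device → user data → arithmetic device → display data) can be sketched as follows. All function names and the threshold are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the abstract's data flow: a detection device
# samples emotion-related signals near the nose, and an arithmetic
# device turns that user data into display data.

def detect(raw_samples):
    """Detection device: reduce raw sensor samples to user data
    (here, a single averaged signal level)."""
    return sum(raw_samples) / len(raw_samples)

def generate_display_data(user_data, threshold=0.5):
    """Arithmetic device: map user data to display data, e.g. an
    emotion label to render for the user."""
    label = "calm" if user_data < threshold else "aroused"
    return {"emotion": label, "level": user_data}

samples = [0.2, 0.3, 0.25, 0.35]   # raw readings from the nose-area sensor
display = generate_display_data(detect(samples))
print(display)  # -> {'emotion': 'calm', 'level': 0.275}
```

The split into two functions mirrors the claimed separation between the detection device and the arithmetic device.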

ELECTRONIC DEVICE AND METHOD FOR VIDEO CALL BASED ON REACTION SERVICE
20230012919 · 2023-01-19

According to various embodiments, an electronic device includes: a communication module comprising communication circuitry, a camera, a display, and a processor, wherein the processor is configured to control the electronic device to: make a video call connection with at least one counterpart device through the communication module, analyze user emotion based on a user's face recognized in an image acquired from the camera in response to an utterance of a speaker of the at least one counterpart device, and display on the display a reaction induction object that induces a suitable reaction according to the user emotion varying in response to the utterance of the speaker.
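One way to read the claimed "reaction induction object" logic is as a mismatch check between the user's detected emotion and the reaction suited to the speaker's utterance. The mapping and names below are invented for illustration:

```python
# Hypothetical sketch: show a "reaction induction object" only when the
# user's detected emotion diverges from a reaction suitable for the
# counterpart speaker's utterance. Mapping is illustrative, not claimed.

SUITABLE_REACTION = {       # reaction suited to each utterance tone
    "joke": "smiling",
    "sad_news": "concerned",
}

def reaction_object(utterance_tone, user_emotion):
    """Return an overlay prompt if the user's emotion does not match
    the reaction suited to the speaker's utterance, else None."""
    suitable = SUITABLE_REACTION.get(utterance_tone)
    if suitable is None or user_emotion == suitable:
        return None                       # nothing to induce
    return f"Try a {suitable} reaction"   # object drawn on the display

print(reaction_object("joke", "neutral"))  # -> Try a smiling reaction
print(reaction_object("joke", "smiling"))  # -> None
```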

BRAIN-ACTIVITY ACTUATED EXTENDED-REALITY DEVICE
20230018247 · 2023-01-19

Quantum sensors may have a size suitable for integration with an extended reality device, such as an augmented reality device or a virtual reality device. When the extended reality device is worn on the head of a user, the quantum sensors can detect magnetoencephalography (MEG) signals from the user's brain. Trained computer models may be used in a recognition algorithm to detect and/or classify particular brain activities. The particular brain activities may then be used to control an extended reality application.
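The recognition-and-control chain described above (MEG features → trained classifier → XR action) can be sketched with a stub in place of the trained model; every label and mapping here is an assumption:

```python
# Illustrative-only sketch: a classifier maps MEG sensor features to a
# brain-activity class, which then selects an XR application action.
# classify() is a stand-in for the patent's trained model.

ACTIVITY_TO_ACTION = {
    "motor_imagery_left": "rotate_scene_left",
    "motor_imagery_right": "rotate_scene_right",
    "rest": "no_op",
}

def classify(meg_features):
    """Stand-in for a trained recognition model: pick the class whose
    feature channel shows the strongest response."""
    labels = ["motor_imagery_left", "motor_imagery_right", "rest"]
    best = max(range(len(meg_features)), key=lambda i: meg_features[i])
    return labels[best]

def control_xr(meg_features):
    """Detect/classify a brain activity, then drive the XR app."""
    return ACTIVITY_TO_ACTION[classify(meg_features)]

print(control_xr([0.9, 0.1, 0.2]))  # -> rotate_scene_left
```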

Enhanced Emotive Engagement with Volumetric Content

A volumetric content enhancement system (“the system”) can annotate at least a portion of a plurality of voxels from a volumetric video with contextual data. The system can determine at least one actionable position within the volumetric video. The system can create an annotated volumetric video that includes the volumetric video, an annotation with the contextual data, and the at least one actionable position. The system can provide the annotated volumetric video to a volumetric content playback system. The system can obtain viewer feedback associated with the viewer and can determine an emotional state of the viewer based, at least in part, upon the viewer feedback. The system can receive viewer position information that identifies a specific actionable position of the viewer. The system can generate manipulation instructions to instruct the volumetric content playback system to manipulate the annotated volumetric content to achieve a desired emotional state of the viewer.
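The abstract describes a closed loop: estimate the viewer's emotional state from feedback, compare it with a desired state, and emit manipulation instructions for the playback system. A minimal sketch, with all names and the valence scale invented:

```python
# Hedged sketch of the closed loop: viewer feedback -> estimated
# emotional state -> manipulation instructions targeting the viewer's
# actionable position. Scoring scheme is an assumption, not claimed.

def determine_emotional_state(feedback):
    """Crude stand-in: average valence of viewer feedback scores."""
    return sum(feedback) / len(feedback)

def manipulation_instructions(current, desired, actionable_position):
    """Instruct the playback system to manipulate annotated content at
    the viewer's actionable position to move toward `desired`."""
    if abs(current - desired) < 0.1:
        return []                         # close enough: no manipulation
    direction = "uplift" if desired > current else "calm"
    return [{"position": actionable_position, "effect": direction}]

state = determine_emotional_state([0.2, 0.4, 0.3])
print(manipulation_instructions(state, desired=0.8, actionable_position=3))
```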

Cognitive stimulation in vehicles

Interactive content can be managed and provided to occupants of an automated vehicle to enhance their experience while in the vehicle. An orchestrator component can determine interactive content based on conditions associated with the vehicle, user preferences, video content, or other information. Interactive content can comprise video content, audio content, and control content. Video content can comprise augmented reality or virtual reality content. Control content can be used to control vehicle operation in relation to, or in synchronization with, presentation of video content. The orchestrator component can correlate certain roads on which the vehicle can travel with entertainment presentations presented to a vehicle occupant. The orchestrator component can control vehicle operation to have the vehicle recreate a vehicle action sequence (VAS) in a video program being presented to the occupant in the vehicle. The orchestrator component can notify nearby vehicles when a VAS is to be recreated, or another vehicle can also participate in the VAS.

Multimodal inputs for computer-generated reality
11698674 · 2023-07-11

Implementations of the subject technology provide for determining an operating mode of an electronic device based at least in part on whether the electronic device is communicatively coupled to an associated base device. Based on the determined operating mode, the subject technology identifies a set of input modalities for initiating a recording of content within a field of view of the electronic device. The subject technology monitors sensor information generated by at least one sensor included in, or communicatively coupled to, the electronic device. Further, the subject technology initiates the recording of content within the field of view of the electronic device when the monitored sensor information indicates that at least one of the identified set of input modalities has been triggered.
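The mode-gated trigger logic above reduces to: coupling state selects a modality set, and only events in that set start a recording. A sketch with invented mode names and modality sets:

```python
# Sketch (invented names) of the described flow: whether the device is
# coupled to its base device sets the operating mode; the mode selects
# which input modalities may initiate a recording.

def operating_mode(coupled_to_base):
    return "tethered" if coupled_to_base else "standalone"

MODALITIES = {
    "tethered": {"voice", "gaze", "hand_gesture"},  # base can offload compute
    "standalone": {"hardware_button", "voice"},     # conserve on-device power
}

def maybe_start_recording(coupled_to_base, sensor_event):
    """Initiate recording only if the event's modality is in the set
    identified for the current operating mode."""
    allowed = MODALITIES[operating_mode(coupled_to_base)]
    return sensor_event in allowed

print(maybe_start_recording(True, "gaze"))   # -> True
print(maybe_start_recording(False, "gaze"))  # -> False
```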

DETERMINING MENTAL STATES BASED ON BIOMETRIC DATA

Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to an Analytics Engine that receives one or more signal files that include neural signal data of a user based on voltages detected by one or more electrodes on a set of headphones worn by the user. The Analytics Engine preprocesses the data, extracts features from the received data, and feeds the extracted features into one or more machine learning models to generate determined output that corresponds to at least one of a current mental state of the user and a type of facial gesture performed by the user. The Analytics Engine sends the determined output to a computing device to perform an action based on the determined output.
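The preprocess → extract features → model → dispatch pipeline can be sketched end to end; the toy feature and threshold classifier below are stand-ins for the patent's trained models, and every name is an assumption:

```python
# Hedged sketch of the described pipeline: preprocess neural signal
# data, extract features, feed them to a model, and dispatch the
# determined output. The "model" is a toy threshold, not a trained one.

def preprocess(signal):
    """Remove the DC offset from raw electrode voltages."""
    mean = sum(signal) / len(signal)
    return [v - mean for v in signal]

def extract_features(signal):
    """Single toy feature: mean absolute amplitude."""
    return [sum(abs(v) for v in signal) / len(signal)]

def model(features):
    """Stand-in classifier: high amplitude -> 'focused', else
    'relaxed' (a facial-gesture label would be the other branch)."""
    return "focused" if features[0] > 0.5 else "relaxed"

def analytics_engine(raw_signal):
    """Run the full chain and package output for a computing device."""
    output = model(extract_features(preprocess(raw_signal)))
    return {"action": "notify_device", "state": output}

print(analytics_engine([1.0, -1.0, 2.0, -2.0]))
# -> {'action': 'notify_device', 'state': 'focused'}
```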

HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES

Systems and methods are described for selectively sharing audio and video streams amongst electronic eyewear devices. Each electronic eyewear device includes a camera arranged to capture a video stream in an environment of the wearer, a microphone arranged to capture an audio stream in the environment of the wearer, and a display. A processor of each electronic eyewear device executes instructions to establish an always-on session with other electronic eyewear devices and selectively shares an audio stream, a video stream, or both with other electronic eyewear devices in the session. Each electronic eyewear device also generates and receives annotations from other users in the session for display with the selectively shared video stream on the display of the electronic eyewear device that provided the selectively shared video stream. The annotation may include manipulation of an object in the shared video stream or an overlay of images registered with the shared video stream.

Wearable device and wearable system

A wearable device includes a skin-attachable device to be attached to a skin of a user to acquire user data, an electronic device that supplies power to the skin-attachable device, and a connection device including a first cable connected to the skin-attachable device and a second cable connected to the first cable and detachably attached to the electronic device.

Methods and systems for modulating physiological states between biological entities
11693000 · 2023-07-04

The invention provides methods and systems for the treatment and diagnosis of pathologic disorders by modulating a physiological state of a target biological entity via exposure of the target entity to a single triggered entity or a plurality of triggered entities, and for transferring information in a non-direct way as part of a virtual reality interactive environment.