Patent classifications
G06F3/015
CONTROLLING PROGRESS OF AUDIO-VIDEO CONTENT BASED ON SENSOR DATA OF MULTIPLE USERS, COMPOSITE NEURO-PHYSIOLOGICAL STATE AND/OR CONTENT ENGAGEMENT POWER
Provided is a system for controlling progress of audio-video content based on sensor data of multiple users, composite neuro-physiological state (CNS), and/or content engagement power (CEP). Sensor data is received from sensors positioned on an electronic device of a first user to sense neuro-physiological responses of the first user and of second users who are in the field-of-view (FOV) of the sensors. Based on the sensor data and at least one of a CNS value for a social interaction application and a CEP value for immersive content, recommendations of action items for the first user are predicted. Content of a feedback loop, created based on the sensor data, the CNS value, the CEP value, and the predicted recommendations, is rendered on an output unit of the electronic device during play of the at least one of the social interaction application and the immersive content experience. Progress of the social interaction and the immersive content experience is controlled by the first user based on the predicted recommendations.
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
The present technology relates to an information processing device and an information processing method that allow users at remote locations to each gain a deeper grasp of the condition of the space where the other is present. Provided is an information processing device including a control unit. Between a first space, where a first imaging device and a first display device are installed, and a second space, where a second imaging device and a second display device are installed, when a captured image from the imaging device in one space is displayed by the display device in the other space in real time, the control unit performs control for presenting a state of the second space in an ineffective region of the display region of the first display device, i.e., the region excluding the effective region in which the captured image from the second imaging device is displayed. The present technology can be applied to, for example, a video communication system.
Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons
The present invention discloses an indoor Electronic Traveling Aid (ETA) system for blind and visually impaired (BVI) people. The system comprises a headband, an intuitive tactile display with electromyographic (EMG) feedback, a controller, and server-based methods corresponding to three operation modalities. In the first modality, sighted users mark routes, map navigational directions, and create semantic comments for BVIs; this route information is continuously collected and estimated on the ETA servers. In the second modality, BVIs choose routes from the servers and are thereby supplied with real-time navigational guidance; an EMG interface also enables the user's facial muscles to send commands to the ETA system. In the third modality, BVIs receive real-time audio guidance in complex or unforeseen situations: the ETA provides a crowd-assisted interface and real-time sensory (e.g., video) data, and crowd-assistants analyze the situation and help the BVI navigate.
Ring motion capture and message composition system
Systems, devices, media, and methods are presented for composing and sharing a message based on the motion of a handheld electronic device such as a ring. The methods in some implementations include presenting a keyboard on a display, collecting course data associated with a course traveled by the ring, and overlaying a trace onto the keyboard, such that the trace is correlated in near real time with the course traveled by the ring. In some implementations the display element is part of a portable device, such as the lens of an electronic eyewear device. Based on the course data relative to the key locations on the keyboard, the system identifies and presents candidate words to be included in a message.
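The course-to-candidate-word step can be illustrated with a minimal sketch: each sampled trace point is snapped to its nearest key centre, repeats are collapsed, and vocabulary words whose letters appear in order along the resulting key sequence become candidates. The key layout, the subsequence matcher, and all names are assumptions for illustration, not the patented decoder.

```python
from itertools import groupby

# Hypothetical key centres on a normalized keyboard layout (toy subset).
KEY_CENTRES = {"c": (0.3, 0.9), "a": (0.1, 0.5), "t": (0.45, 0.1)}

def nearest_key(point):
    """Return the key whose centre is closest to a sampled trace point."""
    x, y = point
    return min(KEY_CENTRES,
               key=lambda k: (KEY_CENTRES[k][0] - x) ** 2 + (KEY_CENTRES[k][1] - y) ** 2)

def candidate_words(trace, vocabulary):
    """Collapse the trace into a key sequence and return words matching it."""
    keys = [k for k, _ in groupby(nearest_key(p) for p in trace)]
    seq = "".join(keys)

    def matches(word):
        # A word is a candidate if its letters occur in order along the trace.
        it = iter(seq)
        return all(ch in it for ch in word)

    return [w for w in vocabulary if matches(w)]
```

A real system would rank candidates by fit to the trace; here the unranked match suffices to show the idea.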
Wearable device having high security and stable blood pressure detection
A wearable device including a skin sensor and a processor is provided. The processor is configured to receive authentication data for authenticating a user when the wearable device is worn adjacent to a skin surface of the user; to execute a predetermined function in response to a request when the authentication data matches pre-stored data and the skin sensor determines that the wearable device has not left the skin surface since the authentication data was received; and to reject or ignore the request when the skin sensor determines that the wearable device left the skin surface before the predetermined function was executed. The processor further calculates blood pressures according to PPG signals detected by a PPG sensor of the skin sensor.
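The security gate described above is essentially a small state machine: authentication is valid only while the device stays on the skin, and removal invalidates it. A minimal sketch, with all class and method names hypothetical:

```python
class WearableAuth:
    """Toy model of the gate: a request is honoured only if the device
    has remained on the skin continuously since authentication."""

    def __init__(self):
        self.authenticated = False

    def authenticate(self, credential, stored, on_skin):
        # Authentication only counts while the device is actually worn.
        self.authenticated = on_skin and credential == stored

    def skin_event(self, on_skin):
        # Leaving the skin invalidates any prior authentication.
        if not on_skin:
            self.authenticated = False

    def execute(self, request):
        return f"executed:{request}" if self.authenticated else "rejected"
```

Binding authorization to continuous skin contact means a stolen, already-unlocked device is useless once removed from the owner's wrist.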
Whole-body human-computer interface
A human-computer interface system having an exoskeleton including a plurality of structural members coupled to one another by at least one articulation configured to apply a force to a body segment of a user, the exoskeleton comprising a body-borne portion and a point-of-use portion; the body-borne portion configured to be operatively coupled to the point-of-use portion; and at least one locomotor module including at least one actuator configured to actuate the at least one articulation, the at least one actuator being in operative communication with the exoskeleton.
System and method for iterative classification using neurophysiological signals
A method of training an image classification neural network comprises: presenting a first plurality of images to an observer as a visual stimulus, while collecting neurophysiological signals from a brain of the observer; processing the neurophysiological signals to identify a neurophysiological event indicative of a detection of a target by the observer in at least one image of the first plurality of images; training the image classification neural network to identify the target in the image, based on the identification of the neurophysiological event; and storing the trained image classification neural network in a computer-readable storage medium.
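The core idea, detecting a neurophysiological event and using it as a training label, can be sketched in a few lines. The peak-amplitude detector and its threshold are illustrative placeholders for the patented event identification, and all names are assumptions.

```python
def neuro_labels(eeg_epochs, threshold=2.0):
    """Label each presented image from its EEG epoch: a large evoked
    deflection is taken as 'target detected' (1), otherwise 0. The peak
    statistic and threshold stand in for the real event detector."""
    return [1 if max(abs(v) for v in epoch) > threshold else 0
            for epoch in eeg_epochs]

def training_pairs(images, eeg_epochs):
    """Pair each image with the label inferred from the observer's brain
    response, yielding supervision for the image classification network."""
    return list(zip(images, neuro_labels(eeg_epochs)))
```

The resulting pairs would then be fed to an ordinary supervised training loop, which is what makes the scheme iterative: classifier outputs can select the next images to show the observer.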
Method And System For Determining The Intention Of Performing A Voluntary Action
The invention relates to methods and systems for determining the intention of a subject to perform a voluntary action based on the analysis of the subject's respiratory phases and neuroelectrical signals.
ARTIFICIAL INTELLIGENCE-BASED PLATFORM TO OPTIMIZE SKILL TRAINING AND PERFORMANCE
Artificial intelligence-based systems and methods for learning management are described.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
An information processing apparatus of the present disclosure includes an acquisition unit that acquires respiration information indicating respiration of a user, and a determination unit that determines an operation amount regarding an operation by the user on the basis of the respiration of the user indicated by the respiration information acquired by the acquisition unit.
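One plausible reading of "operation amount determined from respiration" is a mapping from breath depth to a control value. A minimal sketch under that assumption, with the linear mapping and all names purely illustrative:

```python
def operation_amount(breath_samples, full_scale=1.0):
    """Map respiration depth to an operation amount: a larger
    peak-to-trough swing in the breath signal yields a larger control
    value, clamped to the device's full-scale range."""
    depth = max(breath_samples) - min(breath_samples)
    # Clamp so an unusually deep breath cannot exceed the control range.
    return min(depth, full_scale)
```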