G06V40/23

SYSTEMS AND METHODS FOR CONSTRUCTING MOTION MODELS BASED ON SENSOR DATA

This disclosure relates to systems, media, and methods for updating motion models using sensor data. In an embodiment, the system may perform operations including receiving sensor data from at least one motion sensor; generating training data based on at least one annotation associated with the sensor data and at least one data manipulation; receiving at least one experiment parameter; performing a first experiment using the training data and the at least one experiment parameter to generate experiment results; and performing at least one of updating or validating a first motion model based on the experiment results.
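A minimal sketch of the pipeline described above, assuming a scalar sensor stream, boolean annotations, and a simple threshold model. All function names, the augmentation scheme, and the accuracy-based validate-or-update rule are illustrative assumptions, not the patented implementation:

```python
def generate_training_data(sensor_data, annotations, manipulations):
    """Pair raw sensor samples with their annotations, then apply each
    data manipulation (e.g. noise injection) to augment the set."""
    base = list(zip(sensor_data, annotations))
    augmented = [(manipulate(sample), label)
                 for sample, label in base
                 for manipulate in manipulations]
    return base + augmented

def run_experiment(training_data, params):
    """Toy 'experiment': score a threshold model against the labels."""
    threshold = params["threshold"]
    correct = sum(1 for x, y in training_data if (x > threshold) == y)
    return {"accuracy": correct / len(training_data)}

def update_or_validate(model, results, min_accuracy=0.8):
    """Validate the model if the experiment clears a quality bar,
    otherwise apply a (simplistic) update step."""
    if results["accuracy"] >= min_accuracy:
        return model, "validated"
    model["threshold"] -= 0.1
    return model, "updated"
```

The three functions mirror the claimed operations one-to-one: data generation, experimentation, then conditional update or validation.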

Recognition of activity in a video image sequence using depth information
11568682 · 2023-01-31

Techniques are provided for recognition of activity in a sequence of video image frames that include depth information. A methodology embodying the techniques includes segmenting each of the received image frames into multiple windows and generating spatio-temporal image cells from groupings of windows from a selected sub-sequence of the frames. The method also includes calculating a four-dimensional (4D) optical flow vector for each of the pixels of each of the image cells and calculating a three-dimensional (3D) angular representation from each of the optical flow vectors. The method further includes generating a classification feature for each of the image cells based on a histogram of the 3D angular representations of the pixels in that image cell. The classification features are then provided to a recognition classifier configured to recognize the type of activity depicted in the video sequence, based on the generated classification features.
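The angle-and-histogram step can be sketched as follows. The particular parameterization of the three angles (azimuth and elevation of the spatial motion, plus the angle of the spatial magnitude against the temporal component) is an assumption, since the abstract does not fix one:

```python
import math

def angular_representation(flow):
    """Map a 4D optical-flow vector (dx, dy, dz, dt) to three angles."""
    dx, dy, dz, dt = flow
    azimuth = math.atan2(dy, dx)                      # in [-pi, pi]
    elevation = math.atan2(dz, math.hypot(dx, dy))    # in [-pi/2, pi/2]
    magnitude = math.sqrt(dx * dx + dy * dy + dz * dz)
    temporal = math.atan2(magnitude, dt)              # in [0, pi]
    return azimuth, elevation, temporal

def cell_feature(flows, bins=8):
    """Classification feature for one spatio-temporal image cell: a
    joint histogram over the three angles, flattened and normalized."""
    hist = [0] * (bins ** 3)
    for flow in flows:
        idxs = []
        for angle, lo, hi in zip(angular_representation(flow),
                                 (-math.pi, -math.pi / 2, 0.0),
                                 (math.pi, math.pi / 2, math.pi)):
            frac = (angle - lo) / (hi - lo)
            idxs.append(min(bins - 1, max(0, int(frac * bins))))
        hist[(idxs[0] * bins + idxs[1]) * bins + idxs[2]] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

One such normalized histogram per cell would then be concatenated and fed to the recognition classifier.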

Reflective video display apparatus for interactive training and demonstration and methods of using same
11712614 · 2023-08-01

A smart mirror can show live or recorded streaming video of an instructor performing a workout in a package that is attractive and unobtrusive enough to hang in a living room. The smart mirror includes a mirror surface with a fully reflecting section and a partially reflecting section. A display behind the partially reflecting section shows the video when the smart mirror is on and is almost invisible when the smart mirror is off. The smart mirror also has a speaker, a microphone, and a camera to enable a user to view the video content and interact with the instructor. The smart mirror may connect to the user's smart phone, a peripheral device (e.g., a Bluetooth speaker) to augment user experience, a biometric sensor to provide biometric data to assess user performance, and/or a network router to connect the smart mirror to a content provider, an instructor, and/or other users.

Neural network based radiowave monitoring of fall characteristics in injury diagnosis

Training a machine learning neural network (MLNN) in radiowave-based monitoring of fall characteristics for diagnosing injury. The method comprises receiving, in a first set of input layers of the MLNN, from a millimeter wave (mmWave) radar sensing device, a set of mmWave radar point cloud data representing fall attributes associated with a subject, each of the first set associated with a respective fall attribute; receiving, at a second set of input layers of the MLNN, a set of personal attributes of the subject; training an MLNN classifier based on supervised training that establishes a correlation between an injury condition of the subject as generated at the output layer, the mmWave point cloud data, and the personal attributes; and adjusting an initial matrix of weights by backpropagation to increase the correlation between the injury condition, the mmWave point cloud data, and the personal attributes.
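A minimal stand-in for the two-branch training loop: point-cloud features and personal attributes are concatenated and fed to a single logistic unit whose weights are adjusted by gradient descent (the degenerate case of backpropagation). The single-layer shape, learning rate, and feature encoding are illustrative assumptions, not the patented architecture:

```python
import math

def train_injury_classifier(point_cloud_feats, personal_attrs, labels,
                            epochs=200, lr=0.5):
    """Supervised training correlating fall features and personal
    attributes with an injury-condition label."""
    X = [pc + pa for pc, pa in zip(point_cloud_feats, personal_attrs)]
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(X, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            grad = p - y  # dLoss/dz for the cross-entropy loss
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, pc, pa):
    """Probability of the injury condition for one subject."""
    z = sum(wi * xi for wi, xi in zip(w, pc + pa)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

In the claimed system the scalar features would be replaced by mmWave point clouds entering a dedicated set of input layers, with the same backpropagation principle adjusting the weight matrices.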

Content clustering of new photographs for digital picture frame display

A method for automated routing of pictures taken on mobile electronic devices to a digital picture frame including a camera integrated with the frame, and a network connection module allowing direct contact with the frame and upload of photos from electronic devices or from photo collections of community members. The integrated camera is used to automatically determine the identity of a frame viewer and can capture gesture-based feedback. The displayed photos are automatically shown and/or changed according to the detected viewers. The photos can be filtered and cropped at the receiver side. Clustering photos by content is used to improve the display and to respond to photo viewer desires.
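Content clustering of this kind can be sketched with plain k-means over per-photo feature vectors (e.g. color histograms or embeddings). This is a generic stand-in, assumed here only to illustrate the clustering step, not the frame's actual content model:

```python
import random

def kmeans(features, k, iters=20, seed=0):
    """Cluster per-photo feature vectors into k content groups."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in features:
            # Assign each photo to its nearest center (squared distance).
            best = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(f, centers[i])))
            groups[best].append(f)
        # Move each center to the mean of its group (keep it if empty).
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```

The resulting groups could then drive which cluster of photos is shown to a detected viewer.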

Suggesting behavioral adjustments based on physiological responses to stimuli on electronic devices
11568166 · 2023-01-31

Introduced here are health management platforms able to monitor changes in the health state of a subject based on the context of digital activities performed by, or involving, the subject. Initially, a health management platform can identify a physiological response by examining physiological data associated with a subject. Then, the health management platform can identify a stimulus presented by an electronic device that provoked the physiological response by examining contextual data associated with the subject. The contextual data may be in the form of a screenshot of a computer program in use by the subject during the physiological response. In some embodiments, the health management platform prompts the subject to specify whether the physiological response is a positive physiological response that resulted in an upward shift in health or a negative physiological response that resulted in a downward shift in health.
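The two identification steps can be sketched as follows, assuming timestamped heart-rate samples for the physiological data and labeled time intervals (e.g. from screenshots of the program in use) for the contextual data. The threshold-based spike detector is an assumption for illustration:

```python
def detect_response(samples, baseline, delta):
    """Return the timestamps where heart rate deviates from the
    baseline by more than delta (a crude physiological-response
    detector)."""
    return [t for t, hr in samples if abs(hr - baseline) > delta]

def stimulus_at(events, t):
    """Find the contextual event whose interval covers time t, i.e.
    the stimulus presented when the response occurred."""
    for start, end, label in events:
        if start <= t <= end:
            return label
    return None
```

A platform built this way could then prompt the subject to classify the response at each detected timestamp as positive or negative.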

Augmented reality object manipulation

A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
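The comparison-and-notify step can be sketched as below, assuming each video has already been reduced to a sequence of per-frame action labels. The label sequences and notification wording are illustrative assumptions:

```python
def action_difference(actions_a, actions_b):
    """Compare per-frame action labels from the two video streams and
    report the first frame where they diverge."""
    for i, (a, b) in enumerate(zip(actions_a, actions_b)):
        if a != b:
            return {"frame": i, "first_user": a, "second_user": b}
    return None

def notification(diff):
    """Render a human-readable notification for the detected difference."""
    if diff is None:
        return "Actions match."
    return (f"At frame {diff['frame']}: first user performed "
            f"'{diff['first_user']}' but second user performed "
            f"'{diff['second_user']}'.")
```

In the claimed system the labels would come from action detection on each object within the shared visual communication session.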

COMPUTER SYSTEM AND METHOD FOR CONTROLLING GENERATION OF VIRTUAL MODEL

Model data of a virtual model imitating an object model is generated based on photographed data obtained by photographing the object model, which includes a joint structure. A given joint structure is applied to the virtual model. The virtual model based on the model data is disposed in a given virtual space. Virtual model management data, including the model data and data of the applied joint structure, is stored in a predetermined storage section or is externally output as data for causing a joint of the virtual model to function.

DYNAMIC INTERACTION-ORIENTED SUBJECT'S LIMB TIME-VARYING STIFFNESS IDENTIFICATION METHOD AND DEVICE

The disclosure provides a dynamic interaction-oriented subject's limb time-varying stiffness identification method and device. The method includes: collecting the combination of the subject's limb displacement and measured force data, or the combination of angle and measured torque data; based on a time-varying dynamic system constructed from a second-order impedance model, using the linear parameter varying method to substitute the time-varying impedance parameters and reconstruct the restoring force/torque expression; performing iterative identification on the variable weights, dynamic interaction force/torque, and restoring force/torque using time-varying dynamic parameters, based on the dynamic interaction force/torque expression expanded from basis functions; and solving the time-varying stiffness using the variable weights and dynamic interaction force/torque according to the expression with the substituted time-varying impedance parameters. The disclosure not only improves the accuracy of time-varying stiffness identification but also expands its application scenarios.
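The second-order impedance model underlying the method can be written in the standard form below. The symbols (inertia M, damping B(t), stiffness K(t)) and the basis-function expansion of the time-varying parameters are assumptions drawn from common impedance-identification practice, since the abstract does not fix the notation:

```latex
% Second-order impedance model: measured interaction force F(t)
% as a function of limb displacement x(t)
F(t) = M\,\ddot{x}(t) + B(t)\,\dot{x}(t) + K(t)\,x(t)

% Linear parameter varying substitution: expand the time-varying
% parameters over basis functions \phi_j(t) with variable weights
K(t) = \sum_{j=1}^{n} w_j\,\phi_j(t), \qquad
B(t) = \sum_{j=1}^{n} v_j\,\phi_j(t)
```

Under this substitution, identifying the stiffness reduces to iteratively identifying the weights w_j (and v_j) from the measured force/torque and displacement/angle data, as the method describes.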

Action recognition method and apparatus, and human-machine interaction method and apparatus

A computer device extracts a plurality of target windows from a target video. Each of the target windows comprises a respective plurality of consecutive video frames. For each of the target windows, the device performs action recognition on the respective plurality of consecutive video frames corresponding to the target window to obtain respective first action feature information of the target window. The device obtains a similarity between the first action feature information of the target window and preset feature information. The device determines, from the respective obtained similarities corresponding to the plurality of target windows, a highest first similarity and a first target window corresponding to the highest first similarity. If the highest first similarity satisfies the threshold settings, the device determines that the dynamic action corresponding to the highest first similarity is the preset dynamic action.
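The window-matching logic can be sketched as below, assuming each window has already been reduced to an action feature vector and using cosine similarity with a fixed threshold (both the similarity measure and the threshold value are assumptions, as the abstract leaves them open):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

def match_preset_action(window_features, preset_feature, threshold=0.8):
    """Score each target window against the preset feature, pick the
    highest-scoring window, and accept the match only if its
    similarity clears the threshold."""
    sims = [cosine(f, preset_feature) for f in window_features]
    best = max(range(len(sims)), key=sims.__getitem__)
    if sims[best] >= threshold:
        return best, sims[best]
    return None, sims[best]
```

The returned index identifies the first target window whose dynamic action is treated as the preset dynamic action.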