Patent classifications
G06V40/25
APPARATUS AND METHOD FOR ESTIMATING BEHAVIOR OF USER BASED ON IMAGE CONVERTED FROM SENSING DATA, AND METHOD FOR CONVERTING SENSING DATA INTO IMAGE
Disclosed herein are an apparatus and a method for estimating the behavior of a user based on an image converted from sensing data. The apparatus for estimating the behavior of a user based on an image converted from sensing data includes memory for storing at least one program and a processor for executing the program, wherein the program acquires sensing data measured by one or more behavior measurement devices worn by the user, converts the user's sensing data obtained over a predetermined time period into images, and estimates the behavior of the user from those images based on a pre-trained model.
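The abstract does not disclose the conversion scheme, so the following is only a minimal sketch of one plausible time-series-to-image mapping: min-max normalizing each sensor channel of a fixed window and treating the result as grayscale pixel intensities. The function name and the choice of normalization are assumptions for illustration.

```python
import numpy as np

def window_to_image(window: np.ndarray) -> np.ndarray:
    """Convert a (timesteps, channels) sensor window into a uint8 image.

    Hypothetical scheme: min-max normalize each channel to [0, 255] and
    treat the 2-D array as grayscale intensities. The patent does not
    specify its conversion; this is one plausible mapping, not its method.
    """
    lo = window.min(axis=0, keepdims=True)
    hi = window.max(axis=0, keepdims=True)
    scaled = (window - lo) / np.maximum(hi - lo, 1e-8)
    return (scaled * 255).astype(np.uint8).T  # channels become image rows

# Example: 5 s of 3-axis accelerometer data sampled at 50 Hz.
sensor_window = np.random.randn(250, 3)
image = window_to_image(sensor_window)
print(image.shape)  # (3, 250): one row per sensor channel
```

An image like this can then be fed to any pre-trained image classifier to estimate the behavior class.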
Detecting interactions with non-discretized items and associating interactions with actors using digital images
Commercial interactions with non-discretized items, such as liquids in carafes or other dispensers, are detected and associated with actors using images captured by one or more digital cameras whose fields of view include the carafes or dispensers. The images are processed to detect actors' body parts and other aspects of the scene, and not only to determine that a commercial interaction has occurred but also to identify the actor who performed it. Based on information or data determined from such images, movements of body parts associated with raising, lowering, or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.
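As an illustration of the association step only, a minimal sketch follows that attributes a detected dispenser movement to the actor whose detected wrist keypoint is nearest in the image. A real system would fuse many frames and body parts; the nearest-wrist rule, the function name, and the coordinates are all assumptions.

```python
import math

def associate_event_with_actor(event_xy, actors):
    """Attribute a detected carafe/dispenser movement to the nearest actor.

    `event_xy` is the image location of the detected movement; `actors`
    maps actor IDs to detected wrist-keypoint locations. This nearest-
    wrist rule is only an illustrative association heuristic, not the
    patent's actual method.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(actors, key=lambda a: dist(actors[a], event_xy))

actors = {"actor_1": (120, 340), "actor_2": (480, 310)}
print(associate_event_with_actor((460, 300), actors))  # actor_2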
Sessions and groups
Athletic activity may be tracked while providing encouragement to perform athletic activity. For example, energy expenditure values and energy expenditure intensity values may be calculated and associated with the duration and type of activity performed by an individual. These values and other movement data may be displayed on an interface in a manner that motivates the individual and maintains the individual's interest. The interface may track one or more discrete “sessions”. Each session may be associated with energy expenditure values accumulated over a duration that falls within a longer duration, such as a day, which is itself tracked with respect to variables such as energy expenditure. Other individuals (e.g., friends) may also be displayed on the interface through which a user's progress is tracked, allowing the user to view those individuals' progress toward completing an activity goal and/or challenge.
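For concreteness, here is a minimal sketch of computing per-session energy expenditure with the standard MET formula (kcal = MET × weight in kg × hours) and aggregating sessions into a daily total. The `Session` type and the specific MET values are illustrative assumptions, not the patent's calculation.

```python
from dataclasses import dataclass

@dataclass
class Session:
    activity: str
    minutes: float
    met: float  # metabolic equivalent of the activity

def session_energy(session: Session, weight_kg: float) -> float:
    """Energy expenditure in kcal via the standard MET formula:
    kcal = MET * weight (kg) * duration (hours). Assumed here; the
    patent does not fix a formula."""
    return session.met * weight_kg * (session.minutes / 60.0)

# Aggregate several tracked sessions into a daily total.
day = [Session("run", 30, 9.8), Session("walk", 45, 3.5)]
daily_total = sum(session_energy(s, weight_kg=70) for s in day)
print(round(daily_total, 1))  # kcal for the day
```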
Model learning device, model learning method, and recording medium
A model learning device includes: an error-added movement locus generation unit that adds an error to movement locus data for action learning (data representing the movement locus of a subject, to which an action label, i.e., information representing the subject's action, is assigned), thereby generating error-added movement locus data; and an action recognition model learning unit that learns, using at least the error-added movement locus data and learning data created on the basis of the action label, a model by which the action of a subject can be recognized from the subject's movement locus. This makes it possible to provide a model that can recognize a subject's action with high accuracy based on a movement locus estimated from a camera image.
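The error-addition step is a form of data augmentation. A minimal sketch follows, assuming the added error is Gaussian positional noise (the abstract says only "an error"); the function name and noise scale are illustrative.

```python
import numpy as np

def add_locus_error(locus: np.ndarray, sigma: float = 0.05, rng=None) -> np.ndarray:
    """Add positional error to a (timesteps, 2) movement locus.

    Mimics the noise a camera-based locus estimate would contain, so the
    action-recognition model trains on realistic, error-added data.
    Gaussian noise is an assumption; the patent only says "an error".
    """
    rng = rng or np.random.default_rng()
    return locus + rng.normal(scale=sigma, size=locus.shape)

clean_locus = np.cumsum(np.random.randn(100, 2) * 0.1, axis=0)
noisy_locus = add_locus_error(clean_locus)
# (noisy_locus, action_label) pairs form the error-added training data.
```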
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions that cause a processor to perform the steps of: outputting for presentation, via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and, in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
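To make the gesture-to-perspective mapping concrete, here is a minimal sketch assuming two hypothetical multi-finger gestures: a pinch that scales zoom and a two-finger drag that rotates the scene. The gesture names, state fields, and mappings are illustrative, not the patent's encoding.

```python
def update_perspective(perspective, gesture):
    """Map a multi-finger touch gesture to a new scene perspective.

    `perspective` holds 'zoom' and 'yaw_deg'; 'pinch' scales zoom and
    'two_finger_drag' rotates the scene. All names and mappings are
    assumptions for illustration.
    """
    kind, amount = gesture
    updated = dict(perspective)
    if kind == "pinch":
        updated["zoom"] = max(0.1, perspective["zoom"] * amount)
    elif kind == "two_finger_drag":
        updated["yaw_deg"] = (perspective["yaw_deg"] + amount) % 360
    return updated

view = {"zoom": 1.0, "yaw_deg": 0.0}                      # first perspective
view = update_perspective(view, ("pinch", 1.5))           # second perspective
view = update_perspective(view, ("two_finger_drag", 30))  # third perspective
print(view)
```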
PEDESTRIAN SEARCH METHOD, SERVER, AND STORAGE MEDIUM
Provided are a pedestrian search method, a server, and a storage medium. The pedestrian search method is as follows: pedestrian detection is performed on each segment of surveillance video to obtain multiple pedestrian tracks, where each of the multiple pedestrian tracks includes multiple video frame images of a same pedestrian; pedestrian tracks belonging to the same pedestrian are then determined according to the video frame images in the multiple pedestrian tracks, and the pedestrian tracks of the same pedestrian are merged.
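One simple way to decide which tracks belong to the same pedestrian is to compare per-track appearance embeddings. The sketch below greedily groups tracks whose mean embeddings have high cosine similarity; the greedy rule, threshold, and feature shapes are assumptions, not the patent's method.

```python
import numpy as np

def merge_tracks(track_features, threshold=0.8):
    """Greedily group pedestrian tracks whose L2-normalized mean
    appearance embeddings have cosine similarity above `threshold`.
    An illustrative strategy only; the patent does not fix one.
    """
    groups = []
    for tid in track_features:
        for group in groups:
            rep = track_features[group[0]]
            if float(np.dot(rep, track_features[tid])) > threshold:
                group.append(tid)
                break
        else:
            groups.append([tid])
    return groups  # each group is treated as one pedestrian

raw = {"t1": np.array([1.0, 0.1]),
       "t2": np.array([0.9, 0.2]),
       "t3": np.array([0.0, 1.0])}
feats = {t: v / np.linalg.norm(v) for t, v in raw.items()}
print(merge_tracks(feats))  # [['t1', 't2'], ['t3']]
```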
SYSTEMS AND METHODS FOR MACHINE LEARNING-INFORMED AUTOMATED RECORDING OF TIME ACTIVITIES WITH AN AUTOMATED ELECTRONIC TIME RECORDING SYSTEM OR SERVICE
A system and method for machine learning-based automated electronic time recording for personnel includes: identifying, via a scene capturing device, a representation of a time recording space; identifying a body having a time recording pose within the time recording space based on an assessment of the representation of the time recording space; extracting a plurality of distinct features from the representation of the time recording space based on identifying the body having the time recording pose; executing automated user-recognition based on the extracting of the plurality of distinct features; executing automated time recording recognition based on the extracting of the plurality of distinct features; and executing automated electronic time recording, via a time recording application, based on the automated user-recognition and the automated time recording recognition.
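The steps form a pipeline: pose gating, feature extraction, user recognition, event recognition, and record writing. The sketch below wires these together end to end; every function, field, and pose label is a hypothetical stand-in for the patent's ML components.

```python
from datetime import datetime

def detect_time_recording_pose(frame):
    """Hypothetical stand-in for the pose detector: returns a pose label
    when a body in a time recording pose is present, else None."""
    return frame.get("pose")

def extract_features(frame, pose):
    """Hypothetical stand-in for the feature extractor."""
    return {"face": frame.get("face_embedding"), "pose": pose}

def record_time(frame, user_db, ledger):
    """End-to-end sketch: pose gating, feature extraction, user
    recognition, event recognition, then electronic record writing."""
    pose = detect_time_recording_pose(frame)
    if pose is None:
        return None  # no time recording pose detected; do nothing
    features = extract_features(frame, pose)
    user = user_db.get(features["face"])                      # user recognition
    event = "clock_in" if pose == "raised_badge" else "clock_out"
    ledger.append((user, event, datetime.now()))              # electronic record
    return user, event

ledger = []
frame = {"pose": "raised_badge", "face_embedding": "emb_42"}
print(record_time(frame, {"emb_42": "alice"}, ledger))  # ('alice', 'clock_in')
```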
MULTI-MODAL FEW-SHOT LEARNING DEVICE FOR USER IDENTIFICATION USING WALKING PATTERN BASED ON DEEP LEARNING ENSEMBLE
Disclosed is a multi-modal few-shot learning device for user identification using a walking pattern based on a deep learning ensemble. The device includes: a walking data collector configured to collect walking data of a user from a smart insole that includes any one or more of a pressure sensor, an acceleration sensor, and a gyro sensor; a preprocessor configured to convert the series of time-series walking data obtained from each of the sensors included in the smart insole into a unit-format data set; and an ensemble learner configured to apply an ensemble learning model that produces one final prediction by training CNN-series and RNN-series learners respectively and independently on the unit-format data set generated by the preprocessor.
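A common way to fuse independently trained branches into one final prediction is weighted soft voting over their class probabilities. The sketch below assumes soft voting and equal weights; the abstract does not fix the fusion rule.

```python
import numpy as np

def ensemble_predict(prob_cnn: np.ndarray, prob_rnn: np.ndarray,
                     weights=(0.5, 0.5)) -> int:
    """Fuse independently trained CNN- and RNN-branch class probabilities
    into one final user-ID prediction by weighted soft voting. Soft
    voting is one common ensemble rule, assumed here for illustration."""
    fused = weights[0] * prob_cnn + weights[1] * prob_rnn
    return int(np.argmax(fused))

# Per-class probabilities from each branch for one gait sample
# (e.g., pressure/IMU windows from the smart insole).
p_cnn = np.array([0.7, 0.2, 0.1])
p_rnn = np.array([0.4, 0.5, 0.1])
print(ensemble_predict(p_cnn, p_rnn))  # predicted user index: 0
```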
Walking training system, non-transitory storage medium storing control program for walking training system and control method for walking training system
A walking training system includes a treadmill configured to prompt a trainee to walk, a display device installed such that the trainee views the display device while walking on the treadmill, a camera configured to image the trainee at an angle of view at which the trainee's gait is recognizable, a calculation unit configured to calculate a tilt of the body core of the walking trainee based on an image captured by the camera, and a display control unit configured to control the display device to display a body core line associated with the tilt and an index indicating at least one end of a permissible range of deflection of the body core line.
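A minimal sketch of the tilt calculation follows, assuming the body core line runs from the hip-keypoint midpoint to the shoulder-keypoint midpoint in image coordinates; this definition and the permissible-range value are illustrative, not the patent's exact ones.

```python
import math

def body_core_tilt_deg(shoulder_mid, hip_mid):
    """Tilt of the body core line (hip midpoint to shoulder midpoint)
    from vertical, in degrees; positive means leaning toward +x in image
    coordinates. Using shoulder/hip keypoint midpoints is an assumed
    definition of the body core, not the patent's exact one."""
    dx = shoulder_mid[0] - hip_mid[0]
    dy = hip_mid[1] - shoulder_mid[1]  # image y grows downward
    return math.degrees(math.atan2(dx, dy))

tilt = body_core_tilt_deg(shoulder_mid=(330, 200), hip_mid=(320, 400))
permissible = 10.0  # degrees; the index marks the range's end on screen
print(f"tilt={tilt:.1f} deg, within range: {abs(tilt) <= permissible}")
```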
State estimation program, trained model, rehabilitation support system, learning apparatus, and state estimation method
A state estimation program causes a computer to determine the state of training in a rehabilitation support system used by a trainee to perform training of a preset motion, and includes a threshold setting step and a state estimation step. The threshold setting step acquires a sensor output (the output of a sensor included in the rehabilitation support system during the training performed by the trainee) and, based on the sensor output, sets a threshold for determining whether the training is in a normal state or an abnormal state. The state estimation step estimates whether the training is performed in the normal state or the abnormal state based on the threshold.
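The abstract does not say how the threshold is derived from the sensor output; a minimal sketch follows, assuming the common mean-plus-k-sigma rule over a baseline recording, with a simple comparison for the state estimation step.

```python
import numpy as np

def set_threshold(baseline: np.ndarray, k: float = 3.0) -> float:
    """Set the normal/abnormal threshold from sensor outputs recorded
    during training, here as mean + k standard deviations. The
    mean-plus-k-sigma rule is an assumed choice; the patent only says
    the threshold is set based on the sensor output."""
    return float(baseline.mean() + k * baseline.std())

def estimate_state(sample: float, threshold: float) -> str:
    """State estimation step: compare a new sensor output to the threshold."""
    return "abnormal" if sample > threshold else "normal"

baseline = np.random.normal(1.0, 0.1, size=500)  # sensor output during training
thr = set_threshold(baseline)
print(estimate_state(1.05, thr), estimate_state(2.0, thr))  # normal abnormal
```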