G06T7/248

Automated physical training system

Systems, methods and computer readable media comprising a virtual exercise board, which is represented by images on the screen of a pad device; wearable devices configured to attach to each shoe of a user and to collect and transmit touch data to the pad device; cameras for tracking movement and calibrating with the data collected by the wearable devices; and computer programs for collecting user data, processing user data, and generating outputs. In embodiments, features include augmented reality; ratings of performance; automated workouts/protocols; real-time progress bar; multi-location database capabilities; and reports.

MOVEMENT STATE ESTIMATION DEVICE, MOVEMENT STATE ESTIMATION METHOD AND PROGRAM RECORDING MEDIUM
20180005046 · 2018-01-04

[Problem] To provide a motion condition estimation device, a motion condition estimation method and a motion condition estimation program capable of accurately estimating the motion condition of monitored subjects even in a crowded environment. [Solution] A motion condition estimation device according to the present invention is provided with a quantity estimating means 81 and a motion condition estimating means 82. The quantity estimating means 81 uses a plurality of chronologically consecutive images to estimate a quantity of monitored subjects for each local region in each image. The motion condition estimating means 82 estimates the motion condition of the monitored subjects from chronological changes in the quantities estimated in each local region.
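The two means above can be sketched in code. This is a minimal illustration, assuming per-region subject counts are already available (e.g., from a density-map regressor, which is not shown); the grid layout, cell size, and inflow/outflow labels are illustrative, not from the patent.

```python
# Sketch: estimate crowd motion state from temporal changes in
# per-region subject counts across consecutive frames.

def region_counts(points, grid_w, grid_h, cell):
    """Quantity estimation: count monitored subjects per grid cell."""
    counts = [[0] * grid_w for _ in range(grid_h)]
    for x, y in points:
        cx = min(int(x // cell), grid_w - 1)
        cy = min(int(y // cell), grid_h - 1)
        counts[cy][cx] += 1
    return counts

def motion_state(prev, curr):
    """Motion state estimation: label each local region by its
    chronological count change (inflow / outflow / stable)."""
    states = []
    for row_p, row_c in zip(prev, curr):
        states.append(['inflow' if c > p else 'outflow' if c < p else 'stable'
                       for p, c in zip(row_p, row_c)])
    return states

frame_a = region_counts([(1, 1), (2, 1), (9, 1)], grid_w=2, grid_h=1, cell=5)
frame_b = region_counts([(6, 1), (7, 1), (9, 1)], grid_w=2, grid_h=1, cell=5)
print(frame_a)                          # [[2, 1]]
print(frame_b)                          # [[0, 3]]
print(motion_state(frame_a, frame_b))   # [['outflow', 'inflow']]
```

Even when individual subjects cannot be tracked in a crowd, the shift of mass between adjacent regions over time reveals the aggregate motion state.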

METHOD FOR PROVIDING USER INTERFACE THROUGH HEAD MOUNTED DISPLAY USING EYE RECOGNITION AND BIO-SIGNAL, APPARATUS USING SAME, AND COMPUTER READABLE RECORDING MEDIUM

A method for providing a user interface through a head mounted display using eye recognition and bio-signals comprises the steps of: (a) moving a cursor to a particular location at which a user gazes, by referencing eye information obtained through a camera module from a first eyeball of the user, when the user gazes at the particular location on an output screen; and (b) providing detailed selection items corresponding to an entity, when such an entity exists at the particular location, by referencing movement information obtained through a bio-signal acquisition module from the eyelid corresponding to a second eyeball of the user.
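The two-step interface can be sketched as follows: gaze from one eye positions the cursor, and an eyelid movement (blink) detected on the other eye triggers the detail menu for whatever entity lies under the cursor. The entity names, coordinates, and selection radius below are illustrative assumptions, not from the patent.

```python
# Sketch of gaze-to-move, blink-to-select interaction.
ENTITIES = {(100, 200): 'photo_icon', (300, 50): 'settings_icon'}

def update_cursor(gaze_point):
    """Step (a): the cursor follows the gaze position tracked
    from the first eyeball via the camera module."""
    return gaze_point

def on_blink(cursor, radius=20):
    """Step (b): an eyelid movement on the second eyeball selects
    the entity near the cursor, if any, and opens its detail items."""
    for (ex, ey), name in ENTITIES.items():
        if abs(cursor[0] - ex) <= radius and abs(cursor[1] - ey) <= radius:
            return f'show detailed items for {name}'
    return None

cursor = update_cursor((105, 195))
print(on_blink(cursor))   # show detailed items for photo_icon
```

Splitting the roles between the two eyes avoids the classic "Midas touch" problem of gaze-only interfaces, where every fixation risks being interpreted as a click.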

MOVING OBJECT DETECTION METHOD IN DYNAMIC SCENE USING MONOCULAR CAMERA
20180005055 · 2018-01-04

The present invention relates to a moving object detection method in a dynamic scene using a monocular camera, which is capable of detecting a moving object using a monocular camera installed on a moving body such as a vehicle, and of warning a driver of a dangerous situation. The method can detect a moving object in a dynamic scene using the monocular camera alone, without a stereo camera.
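One common monocular approach to this problem (offered here as a generic sketch, not necessarily the patented method) is to compensate the camera's ego-motion by warping the previous frame, then flag pixels whose residual difference against the current frame is large. The pure horizontal shift and threshold below are simplifying assumptions.

```python
# Sketch: ego-motion-compensated frame differencing for moving
# object detection with a single camera.
import numpy as np

def detect_moving(prev, curr, shift, thresh=30):
    """Warp the previous frame by the estimated ego-motion (modeled
    as a horizontal pixel shift), difference, and threshold."""
    warped = np.roll(prev, shift, axis=1)
    residual = np.abs(curr.astype(int) - warped.astype(int))
    return residual > thresh

prev = np.zeros((4, 8), dtype=np.uint8)
curr = np.zeros((4, 8), dtype=np.uint8)
prev[:, 2] = 200      # static structure, displaced by ego-motion only
curr[:, 3] = 200
curr[1, 6] = 255      # an independently moving object
mask = detect_moving(prev, curr, shift=1)
print(mask[1, 6], mask[1, 3])   # True False
```

Static scenery cancels out after the warp, so only independently moving objects survive the threshold; in practice the ego-motion would be estimated from image features rather than given.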

Systems and methods for machine learning based physiological motion measurement

A system for physiological motion measurement is provided. The system may acquire a reference image corresponding to a reference motion phase of an ROI and a target image of the ROI corresponding to a target motion phase, wherein the reference motion phase may be different from the target motion phase. The system may identify one or more feature points relating to the ROI from the reference image, and determine a motion field of the feature points from the reference motion phase to the target motion phase using a motion prediction model. An input of the motion prediction model may include at least the reference image and the target image. The system may further determine a physiological condition of the ROI based on the motion field.
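The pipeline can be sketched end to end: feature points from the reference image, a motion field to the target phase, and a scalar condition derived from that field. The linear "model" below is a stand-in for the learned motion prediction model, and the amplitude range and labels are illustrative assumptions.

```python
# Sketch: feature-point motion field -> physiological condition.
import numpy as np

def motion_field(ref_points, model):
    """Predict per-point displacement from the reference motion
    phase to the target motion phase."""
    return np.array([model(p) for p in ref_points])

def physiological_condition(field, normal_range=(1.0, 3.0)):
    """Classify mean motion amplitude (e.g., respiratory excursion
    of the ROI) against an assumed normal range."""
    amplitude = np.linalg.norm(field, axis=1).mean()
    lo, hi = normal_range
    return 'normal' if lo <= amplitude <= hi else 'abnormal'

ref_points = np.array([[10.0, 20.0], [12.0, 22.0], [15.0, 25.0]])
toy_model = lambda p: np.array([0.0, 2.0])   # uniform 2-px shift
field = motion_field(ref_points, toy_model)
print(physiological_condition(field))   # normal
```

In the described system the predictor would consume the reference and target images jointly; reducing its dense output to a per-feature-point field is what makes the downstream condition estimate cheap to compute.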

Surveillance information generation apparatus, imaging direction estimation apparatus, surveillance information generation method, imaging direction estimation method, and program
11710322 · 2023-07-25

A surveillance information generation apparatus (2000) includes a first surveillance image acquisition unit (2020), a second surveillance image acquisition unit (2040), and a generation unit (2060). The first surveillance image acquisition unit (2020) acquires a first surveillance image (12) generated by a fixed camera (10). The second surveillance image acquisition unit (2040) acquires a second surveillance image (22) generated by a moving camera (20). The generation unit (2060) generates surveillance information (30) relating to object surveillance, using the first surveillance image (12) and the second surveillance image (22).

ONLINE MATCHING AND OPTIMIZATION METHOD COMBINING GEOMETRY AND TEXTURE, 3D SCANNING DEVICE, SYSTEM AND NON-TRANSITORY STORAGE MEDIUM
20230237693 · 2023-07-27

An online matching and optimization method combining geometry and texture and a three-dimensional (3D) scanning system are provided. The method includes obtaining pairs of depth texture images with a one-to-one corresponding relationship, the pairs comprising depth images collected by a depth sensor and texture images collected by a camera device; adopting a coarse-to-fine strategy to perform feature matching on the depth texture images corresponding to a current frame and on the depth texture images corresponding to target frames, to estimate a preliminary pose of the depth sensor in the 3D scanning system; and combining a geometric constraint and a texture constraint to optimize the estimated preliminary pose, obtaining a refined motion estimation between the frames.
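The final refinement step can be sketched as minimizing a weighted sum of a geometric residual and a texture (photometric) residual, starting from the coarse pose. The scalar "pose", the toy residuals, and the weights below are simplifying stand-ins for the full 6-DoF optimization.

```python
# Sketch: refine a preliminary pose under combined geometric
# and texture constraints via gradient descent.
def refined_pose(prelim, geo_residual, tex_residual,
                 w_geo=0.7, w_tex=0.3, lr=0.1, steps=200):
    """Minimize w_geo*geo(p)^2 + w_tex*tex(p)^2 from the coarse pose."""
    pose = prelim
    for _ in range(steps):
        grad = 2 * w_geo * geo_residual(pose) + 2 * w_tex * tex_residual(pose)
        pose -= lr * grad
    return pose

# Toy residuals whose combined optimum lies between their minima.
geo = lambda p: p - 1.0    # geometry alone prefers pose = 1.0
tex = lambda p: p - 2.0    # texture alone prefers pose = 2.0
pose = refined_pose(prelim=0.0, geo_residual=geo, tex_residual=tex)
print(round(pose, 2))   # 1.3
```

The refined pose settles at the weighted compromise between the two constraints (here 0.7·1.0 + 0.3·2.0 = 1.3), which is the point of combining geometry and texture: each term disambiguates regions where the other is weak (textureless geometry, geometrically flat texture).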

AUGMENTED REALITY SYSTEMS AND METHODS FOR FINGERNAIL DETECTION AND TRACKING

A user may be directed to the nail application (e.g., a mobile application that provides the various augmented reality and booking features discussed herein) via interactions with a social media site. For example, a user may browse Facebook and see a friend's post or advertisement regarding a nail design, or more generally, some artwork that the user feels would look nice as a nail design. The user may select the artwork and be redirected to the nail application to begin an augmented reality testing of one or more nail artwork designs and/or schedule treatments. In some embodiments, the social media website utilizes an API provided by the data center (e.g., the entity that coordinates the treatment booking process and provides augmented reality software to the users) so that the user can “Touch to try-on” a particular design in social media by clicking on the design.

METHOD FOR DETECTING VEHICLE AND DEVICE FOR EXECUTING THE SAME
20230005163 · 2023-01-05

There is provided a method for detecting a vehicle including receiving continuously captured front images, setting a search area of the vehicle in a target image based on a location of the vehicle or a vehicle area detected from a previous image among the front images, detecting the vehicle in the search area according to a machine learning model, and tracking the vehicle in the target image by using feature points of the vehicle extracted from the previous image according to a vehicle detection result based on the machine learning model. Since the entire image is not used as a vehicle detection area, a processing speed may be increased, and a forward vehicle tracked in an augmented reality navigation may be continuously displayed without interruption, thereby providing a stable service to the user.
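The per-frame loop described above can be sketched as follows: the search area is the previous detection grown by a margin, the detector runs only inside that area, and feature-point tracking serves as a fallback when the detector misses. The detector and tracker below are stubs, and the margin and image bounds are illustrative assumptions.

```python
# Sketch: ROI-restricted vehicle detection with tracking fallback.
def expand(box, margin, bounds):
    """Search area: previous vehicle box grown by a margin,
    clipped to the image bounds."""
    x1, y1, x2, y2 = box
    w, h = bounds
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(w, x2 + margin), min(h, y2 + margin))

def process_frame(prev_box, detect, track, bounds=(1280, 720), margin=40):
    roi = expand(prev_box, margin, bounds)
    box = detect(roi)           # ML detector, restricted to the ROI
    if box is None:
        box = track(prev_box)   # feature-point tracking fallback
    return box

prev = (600, 300, 700, 380)
hit = process_frame(prev, detect=lambda roi: (610, 305, 705, 382),
                    track=lambda b: b)
miss = process_frame(prev, detect=lambda roi: None,
                     track=lambda b: (605, 302, 702, 381))
print(hit)    # (610, 305, 705, 382)
print(miss)   # (605, 302, 702, 381)
```

Restricting detection to the ROI is what yields the claimed speedup, and the tracking fallback is what keeps the forward vehicle overlaid without interruption in the AR navigation view when the detector briefly fails.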

TIME-OF-FLIGHT IMAGING CIRCUITRY, TIME-OF-FLIGHT IMAGING SYSTEM, TIME-OF-FLIGHT IMAGING METHOD

The present disclosure generally pertains to a time-of-flight imaging circuitry configured to: obtain first image data from an image sensor, the first image data being indicative of a scene, which is illuminated with spotted light; determine a first image feature in the first image data; obtain second image data from the image sensor, the second image data being indicative of the scene; determine a second image feature in the second image data; estimate a motion of the second image feature with respect to the first image feature; and merge the first and the second image data based on the estimated motion.
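The feature-motion-compensated merge can be sketched as follows: locate a feature in each capture, estimate the shift between them, realign the second capture by that shift, and average. Treating the brightest illumination spot as the feature and the motion as a pure translation are simplifying assumptions, not the patent's method.

```python
# Sketch: motion-compensated merging of two spot-illuminated
# time-of-flight captures.
import numpy as np

def brightest_spot(img):
    """Feature: pixel location of the strongest illumination spot."""
    return np.unravel_index(np.argmax(img), img.shape)

def merge(first, second):
    f1, f2 = brightest_spot(first), brightest_spot(second)
    dy, dx = f1[0] - f2[0], f1[1] - f2[1]       # estimated motion
    aligned = np.roll(second, (dy, dx), axis=(0, 1))
    return (first + aligned) / 2.0

first = np.zeros((5, 5)); first[2, 2] = 100.0
second = np.zeros((5, 5)); second[3, 3] = 100.0   # same spot, moved
merged = merge(first, second)
print(merged[2, 2])   # 100.0
```

Merging after alignment accumulates signal from both exposures at the same scene point instead of smearing it, which is the point of estimating the inter-frame feature motion before combining the image data.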