G06T7/251

Method and apparatus for optimizing scan data and method and apparatus for correcting trajectory

A method and an apparatus optimize scan data obtained by sensors on a vehicle, and correct a trajectory for a vehicle or robot based on the optimized scan data. The method for optimizing scan data obtained by scanning environment elements includes: a step of obtaining the scan data, including obtaining at least two frames of scan data respectively corresponding to different timings; a step of cluster processing based on characteristics of the data points, including classifying the plurality of data points in each frame of the scan data into one or more clusters; a step of establishing correspondence among the at least two frames of scan data, including searching for and obtaining at least one set of clusters having correspondence; a step of optimizing clusters among the at least two frames of scan data, including conducting a calculation on each set of the at least one set of clusters having correspondence, to obtain optimized clusters respectively corresponding to each set; and a step of optimizing the scan data, including accumulating all optimized clusters to obtain optimized scan data for the at least two frames of scan data.
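The claimed pipeline (cluster each frame, match clusters across frames, fuse the matched pairs) can be sketched roughly as below. The greedy single-linkage clustering, the nearest-centroid matching, and every function name are illustrative assumptions, not the patented method:

```python
from math import dist

def cluster(points, eps=1.0):
    """Greedy single-linkage clustering: a point joins an existing cluster
    if it lies within `eps` of any member, else it starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(dist(p, q) <= eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def centroid(c):
    return tuple(sum(axis) / len(c) for axis in zip(*c))

def match_clusters(frame_a, frame_b, max_gap=1.0):
    """Pair each cluster of frame_a with the frame_b cluster whose
    centroid is nearest, keeping only pairs closer than `max_gap`."""
    pairs = []
    for ca in frame_a:
        cb = min(frame_b, key=lambda c: dist(centroid(ca), centroid(c)))
        if dist(centroid(ca), centroid(cb)) <= max_gap:
            pairs.append((ca, cb))
    return pairs

def optimize(pairs):
    """Fuse each corresponding pair into one optimized cluster and
    accumulate all fused clusters into the optimized scan data."""
    return [ca + cb for ca, cb in pairs]
```

For two frames of 2D points, `optimize(match_clusters(cluster(f1), cluster(f2)))` yields one fused cluster per corresponding pair.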

Apparatus and method for executing a safety function

An apparatus and a method for executing a safety function are particularly applicable to monitoring a safety area of a technical installation. An imaging unit acquires an event that triggers the safety function within a defined working area. A controller carries out a safety-related reaction depending on the triggering event. A test unit is configured to verify the operability of the imaging unit and includes a processing unit and a projection unit. The projection unit projects a pattern with defined properties into the working area. The processing unit evaluates the image data acquired by the imaging unit to detect the projected pattern within the acquired image data. Further, the processing unit extracts the specific properties of the detected projected pattern, and compares the specific properties of the detected projected pattern with the defined properties.
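The final verification step, comparing extracted pattern properties against the defined ones, could look like the following minimal sketch; the property names, the dictionary representation, and the per-property tolerances are all hypothetical:

```python
def verify_imaging_unit(detected, expected, tolerances):
    """Imaging unit passes the test only if every defined property of the
    projected pattern is found in the image data and lies within its
    tolerance of the defined value."""
    for name, ref in expected.items():
        if name not in detected:
            return False  # pattern property not detected in the image data
        if abs(detected[name] - ref) > tolerances.get(name, 0):
            return False  # property deviates beyond the allowed tolerance
    return True
```

A failed comparison would here stand in for the safety-related reaction path, e.g. declaring the imaging unit inoperable.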

Monitoring handling of an object

In order to reduce a radiation dose delivered to an object or an observer, a facility for monitoring handling of the object has an optical unit configured to direct ionizing radiation onto the object and also a filter element in order to attenuate a part of the ionizing radiation. An imaging unit may detect portions of the ionizing radiation passing through the object in order to create an image of the object. A view acquisition system may acquire a viewing movement, and a control unit is configured, during a first operating mode, to control a position of the filter element as a function of the viewing movement. The control unit is configured to identify a predefined sequence of viewing movements and, as a function thereof, to switch into a second operating mode. The position of the filter element is controlled during the second operating mode as a function of an image analysis.
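The mode-switching logic, following the viewer's gaze until a predefined sequence of viewing movements is recognized, then handing control to image analysis, can be sketched as a small state machine. The class, the movement labels, and the numeric mode codes are assumptions for illustration only:

```python
class FilterController:
    """Tracks viewing movements; stays in gaze-following mode (mode 1)
    until the predefined trigger sequence of movements is observed,
    then switches to image-analysis mode (mode 2)."""

    def __init__(self, trigger_sequence):
        self.trigger = list(trigger_sequence)
        self.recent = []
        self.mode = 1

    def observe(self, movement):
        # keep only the most recent movements, as many as the trigger is long
        self.recent.append(movement)
        self.recent = self.recent[-len(self.trigger):]
        if self.recent == self.trigger:
            self.mode = 2
        return self.mode

    def filter_position(self, gaze_point, image_roi):
        # mode 1: position follows the viewing movement;
        # mode 2: position follows the image analysis result
        return gaze_point if self.mode == 1 else image_roi
```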

Calibration of an eye tracking system

Mechanisms are provided for calibration of an eye tracking system. An eye tracking system comprises a pupil centre corneal reflection (PCCR) based eye tracker and a non-PCCR based eye tracker. A method comprises obtaining at least one first eye position of a subject by applying the PCCR based eye tracker on an image set depicting the subject. The method comprises calibrating a head model of the non-PCCR based eye tracker, as applied on the image set, for the subject, using the obtained at least one first eye position from the PCCR based eye tracker as ground truth. The head model comprises facial features that include at least one second eye position. The calibrating involves positioning the head model in order for its at least one second eye position to be consistent with the at least one first eye position given by the PCCR based eye tracker.
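The positioning step, shifting the head model so its eye landmarks agree with the PCCR ground truth, reduces in its simplest form to applying a mean translation. This sketch assumes a head model given as a dictionary of landmark lists and uses only a rigid translation (the actual calibration could involve rotation and scale as well):

```python
def calibrate_head_model(head_model, pccr_eye_positions):
    """Translate every landmark of the head model by the mean offset
    between the PCCR (ground-truth) eye positions and the model's own
    eye landmarks, so the two sets of eye positions coincide."""
    model_eyes = head_model["eye_positions"]
    n = len(model_eyes)
    dims = len(model_eyes[0])
    offset = tuple(
        sum(p[i] - m[i] for p, m in zip(pccr_eye_positions, model_eyes)) / n
        for i in range(dims)
    )
    calibrated = {
        key: [tuple(v + o for v, o in zip(pt, offset)) for pt in pts]
        for key, pts in head_model.items()
    }
    return calibrated, offset
```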

DYNAMIC FACIAL HAIR CAPTURE OF A SUBJECT

Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).

PHYSICAL ABILITY EVALUATION SERVER, PHYSICAL ABILITY EVALUATION SYSTEM, AND PHYSICAL ABILITY EVALUATION METHOD
20230230259 · 2023-07-20

A physical ability evaluation server includes an image processing unit that executes evaluation score calculation processing on a plurality of still images included in a measurement video to calculate a physical ability evaluation score, a physical ability evaluation unit that evaluates the physical ability based on the evaluation score, and an evaluation result notification unit that creates and outputs an evaluation report based on the evaluation result. The evaluation score calculation processing includes a first process of acquiring joint position coordinates by physique estimation for each still image, a second process of acquiring physique information by segmentation for a first still image corresponding to a first target period, and a third process of calculating the evaluation score of physical ability by a predetermined calculation formula, using the information acquired in the first and second processes, with respect to a second still image corresponding to a second target period.
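The three processes can be combined as sketched below. The abstract does not disclose the predetermined calculation formula, so the one used here (vertical range of motion from the joint coordinates, normalized by a limb length from the physique information, in a weighted sum) is entirely hypothetical:

```python
def evaluation_score(joint_coords, physique, weights=(0.7, 0.3)):
    """Illustrative physical-ability score from joint positions and
    physique information (the actual patented formula is unspecified)."""
    # first process: joint position coordinates -> vertical range of motion
    ys = [y for _, y in joint_coords]
    motion_range = max(ys) - min(ys)
    # second process: physique information from segmentation (here a limb length)
    limb_length = physique["limb_length"]
    # third process: weighted combination of the normalized motion and physique terms
    w_motion, w_physique = weights
    return w_motion * (motion_range / limb_length) + w_physique * limb_length
```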

VOLUMETRIC CAPTURE AND MESH-TRACKING BASED MACHINE LEARNING 4D FACE/BODY DEFORMATION TRAINING
20230230304 · 2023-07-20

Mesh-tracking based dynamic 4D modeling for machine learning deformation training includes: using a volumetric capture system for high-quality 4D scanning, using mesh-tracking to establish temporal correspondences across a 4D scanned human face and full-body mesh sequence, using mesh registration to establish spatial correspondences between a 4D scanned human face and full-body mesh and a 3D CG physical simulator, and training surface deformation as a delta from the physical simulator using machine learning. The deformation for natural animation can be predicted and synthesized using the standard MoCAP animation workflow. Machine learning based deformation synthesis and animation using the standard MoCAP animation workflow includes using single-view or multi-view 2D videos of MoCAP actors as input, solving 3D model parameters (3D solving) for animation (deformation not included), and, given the 3D model parameters solved by 3D solving, predicting 4D surface deformation from the machine learning training.

Three-dimensional object reconstruction from a video

A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object reconstruction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well, particularly for non-rigid objects.

DISCHARGE RISK AND MANAGEMENT

A method comprises receiving an input indicating intake information associated with a patient. Based on the input, the method further includes determining an initial discharge date and receiving mobility information associated with the patient. Based in part on the mobility information, the method further includes determining an estimated discharge date and a confidence metric associated with the estimated discharge date, determining that the estimated discharge date is later than the initial discharge date by more than a threshold period of time, and determining that the confidence metric is greater than a threshold metric. Based in part on the estimated discharge date being later than the initial discharge date by more than the threshold period of time and the confidence metric being greater than the threshold metric, the method further includes generating an alert.
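The alert condition described above is a conjunction of two threshold tests, which can be sketched directly; the function name and the default threshold values are assumptions, not values from the disclosure:

```python
from datetime import date, timedelta

def should_alert(initial_discharge, estimated_discharge, confidence,
                 threshold_days=2, threshold_confidence=0.8):
    """Alert only when the estimated discharge date slips past the
    initial date by more than the threshold period of time AND the
    confidence metric exceeds the threshold metric."""
    slipped = estimated_discharge - initial_discharge > timedelta(days=threshold_days)
    trusted = confidence > threshold_confidence
    return slipped and trusted
```

For example, `should_alert(date(2024, 1, 10), date(2024, 1, 14), 0.9)` would trigger an alert, while a small slip or a low-confidence estimate would not.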

Real-Time Alignment of Multiple Point Clouds to Video Capture

The presented invention includes the generation of point clouds, the identification of objects in the point clouds, and the calculation of the positions of those objects in the point clouds. In addition, the invention includes capturing images, streaming data, and performing digital image processing at different points of the system, as well as calculating the positions of objects. The invention may use the cameras of mobile smart devices, smart glasses, or 3D cameras, but is not limited to these. The data streaming provides video streaming and sensor data streaming from mobile smart devices. The presented invention further includes point clouds of buildings within which the positioning of separate objects can be carried out. It also comprises a database of point clouds of isolated objects, which helps to calculate the position in the building. Finally, the invention comprises a method of object feature extraction, comparison within the point clouds, and position calculation.
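The feature-comparison and position-calculation step can be sketched at its simplest as nearest-neighbor matching between an isolated object's point cloud and the building's point cloud, with the matched points' centroid taken as the object's position. The brute-force search, the stride parameter, and the function name are illustrative assumptions (a real system would use descriptors and an accelerated search structure):

```python
from math import dist

def locate_object(object_cloud, building_cloud, stride=1):
    """Estimate an isolated object's position inside a building point
    cloud: match each (optionally subsampled) object point to its
    nearest building point, then report the centroid of the matches."""
    matches = [
        min(building_cloud, key=lambda q: dist(p, q))
        for p in object_cloud[::stride]
    ]
    n = len(matches)
    return tuple(sum(axis) / n for axis in zip(*matches))
```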