G06V10/755

CONTOUR SHAPE RECOGNITION METHOD
20230047131 · 2023-02-16

Provided is a contour shape recognition method, including: sampling and extracting salient feature points of a contour of a shape sample; calculating a feature function of the shape sample at a semi-global scale by using three types of shape descriptors; dividing the scale with a single pixel as a spacing to acquire a shape feature function in a full-scale space; storing feature function values at various scales into a matrix to acquire three types of feature grayscale map representations of the shape sample in the full-scale space; synthesizing the three types of grayscale map representations of the shape sample, as three channels of RGB, into a color feature representation image; constructing a two-stream convolutional neural network by taking the shape sample and the feature representation image as inputs at the same time; and training the two-stream convolutional neural network, and inputting a test sample into a trained network model to achieve shape classification.
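The feature-map construction above can be sketched as follows: each descriptor is evaluated at every scale (one-pixel spacing) to build a grayscale matrix, and the three matrices become the R, G, B channels of one image. The descriptor functions here are hypothetical stand-ins, not the patent's actual descriptors.

```python
import numpy as np

def full_scale_feature_maps(contour, n_scales, descriptors):
    """Evaluate each shape descriptor at every scale (one-pixel spacing)
    and stack the values into an (n_scales x n_points) grayscale map."""
    n_points = len(contour)
    maps = []
    for desc in descriptors:
        m = np.zeros((n_scales, n_points))
        for s in range(1, n_scales + 1):
            m[s - 1] = desc(contour, s)  # feature function at scale s
        # normalize each map to [0, 255] so it reads as a grayscale image
        m = 255.0 * (m - m.min()) / (np.ptp(m) + 1e-9)
        maps.append(m.astype(np.uint8))
    return maps

def fuse_to_rgb(maps):
    """Use the three descriptor maps as the R, G, B channels of one image."""
    return np.stack(maps, axis=-1)
```

A toy run with three copies of a chord-length descriptor produces an `(n_scales, n_points, 3)` color image suitable as the second stream of the network.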

Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas

An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
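The viewpoint-dependent color weighting can be illustrated with a minimal sketch. The cosine weighting used here is an assumption; the abstract only requires that weights follow the geometric relationship between each camera vantage point and the user-selected viewpoint.

```python
import numpy as np

def blend_camera_colors(colors, cam_dirs, view_dir):
    """Weight each camera's pixel color by how closely its vantage
    direction aligns with the user-selected viewpoint (cosine weighting,
    one plausible geometric scheme)."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    cam_dirs = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    w = np.clip(cam_dirs @ view_dir, 0.0, None)  # ignore cameras facing away
    if w.sum() == 0:
        w = np.ones(len(colors))  # degenerate case: average everything
    w = w / w.sum()
    return (w[:, None] * colors).sum(axis=0)
```

With the viewpoint aligned to one camera, that camera's color dominates entirely; intermediate viewpoints blend smoothly between vantage points.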

Solution for Determination of Supraphysiological Body Joint Movements

A solution for non-invasive determination of supraphysiological body joint kinematics. The solution obtains external images related to a test procedure of the body joint and performs image analysis on the obtained images to define a pattern of a plurality of spatial points in a region of interest. Each individual spatial point is defined by a unique pattern of neighboring surrounding pixels in each image, and the pattern is part of a high-contrast speckle pattern applied to the body joint. The solution identifies displacements of the spatial points in subsequently obtained images by tracing a location of the unique pattern of neighboring pixels in each image in relation to a base image of the body joint, calculates deformation measures from the displacements of the plurality of spatial points, and obtains deformation measures of a reference body joint. The solution compares the deformation measures and determines supraphysiological body joint kinematics from the comparison.
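Tracing a unique neighborhood of pixels between images is, at its core, a template-matching step. This sketch uses exhaustive sum-of-squared-differences search over a small window, an assumed implementation of the tracking described above.

```python
import numpy as np

def track_point(base, frame, pt, patch=3, search=5):
    """Locate the unique pixel neighborhood of `pt` (from the base image)
    in a later frame by exhaustive SSD search; returns the displacement
    (dy, dx) of the spatial point."""
    y, x = pt
    tpl = base[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best, best_d = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            win = frame[yy - patch:yy + patch + 1,
                        xx - patch:xx + patch + 1].astype(float)
            d = np.sum((win - tpl) ** 2)
            if d < best_d:
                best_d, best = d, (dy, dx)
    return best
```

Applying this to every point of the speckle pattern yields the displacement field from which deformation measures are calculated.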

APPARATUS AND METHOD FOR DETERMINING LANE CHANGE OF SURROUNDING OBJECTS
20220410942 · 2022-12-29

A method for determining a lane change, performed by an apparatus for determining a lane change of an object located around a driving vehicle equipped with a sensor, the method including: detecting a plurality of objects located around the driving vehicle using scanning information obtained repeatedly at every predetermined period of time by the sensor scanning the surroundings of the driving vehicle; selecting at least one candidate object estimated to change lanes among the plurality of objects based on previously detected lane edge information; and determining whether the candidate object changes lanes based on information on movement of the candidate object.
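The two-stage logic (candidate selection, then a movement-based decision) can be sketched with signed lateral offsets of an object from a detected lane edge over successive scans. Both rules and both thresholds here are assumptions for illustration, not the patent's criteria.

```python
def is_candidate(offsets, edge_margin=0.5):
    """Candidate selection (assumed rule): the object approaches a
    previously detected lane edge closer than `edge_margin` metres."""
    return min(abs(o) for o in offsets) < edge_margin

def changes_lane(offsets, trend_thresh=0.3):
    """Decision (assumed rule): the signed offset from the lane edge
    flips sign over the scan history with sufficient net movement,
    i.e. the object actually crossed the edge rather than drifting."""
    crossed = (offsets[0] > 0) != (offsets[-1] > 0)
    return crossed and abs(offsets[-1] - offsets[0]) > trend_thresh
```

An object whose offset shrinks and changes sign is flagged; one holding a steady offset never becomes a candidate.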

HYBRID LANE MODEL
20220405515 · 2022-12-22 ·

A method of hybrid lane modeling, including: receiving a roadway image; extracting a set of lane points from the roadway image; fitting a polynomial line to the set of lane points; determining a fitted error of the fitted polynomial line; outputting the polynomial line if the fitted error is less than a predetermined threshold; selecting a set of clean lane points from the set of lane points if the fitted error is greater than the predetermined threshold; and interpolating a cubic spline through the set of clean lane points.
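The fallback logic above can be sketched directly. The RMS-residual error measure and the below-median-residual rule for selecting "clean" points are assumptions; the abstract does not specify either.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def hybrid_lane_fit(xs, ys, deg=3, err_thresh=0.5):
    """Fit a polynomial to the lane points; if the RMS residual exceeds
    the threshold, keep only the cleaner points (assumed: below-median
    residual) and interpolate a cubic spline through them instead.
    xs must be strictly increasing for CubicSpline."""
    coeffs = np.polyfit(xs, ys, deg)
    resid = np.abs(np.polyval(coeffs, xs) - ys)
    rms = float(np.sqrt(np.mean(resid ** 2)))
    if rms < err_thresh:
        return ("poly", lambda x: np.polyval(coeffs, x))
    keep = resid <= np.median(resid)  # "clean" lane points
    return ("spline", CubicSpline(xs[keep], ys[keep]))
```

On well-behaved points the polynomial branch is taken; a noisy or kinked lane trips the threshold and falls through to the spline.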

LARGE SCALE COMPUTATIONAL LITHOGRAPHY USING MACHINE LEARNING MODELS

A computational lithography process uses machine learning models. An aerial image produced by a lithographic mask is first calculated using a two-dimensional model of the lithographic mask. This first aerial image is applied to a first machine learning model, which infers a second aerial image. The first machine learning model was trained using a training set that includes aerial images calculated using a more accurate three-dimensional model of lithographic masks. The two-dimensional model is faster to compute than the three-dimensional model but it is less accurate. The first machine learning model mitigates this inaccuracy.
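The idea of learning a correction from the fast 2D-model output toward the accurate 3D-model output can be shown with the simplest possible stand-in for the trained network: a linear map fitted on paired aerial images. The real model is a machine learning model; this per-pixel linear fit is purely illustrative.

```python
import numpy as np

def fit_correction(cheap_imgs, accurate_imgs):
    """Fit a linear map a*x + b from fast 2D-model aerial images to the
    slower, more accurate 3D-model images (a toy stand-in for training
    the first machine learning model on such paired data)."""
    x = np.concatenate([im.ravel() for im in cheap_imgs])
    y = np.concatenate([im.ravel() for im in accurate_imgs])
    a, b = np.polyfit(x, y, 1)
    return lambda img: a * img + b  # "inference": 2D image -> corrected image
```

At inference time only the cheap 2D model plus the learned correction is evaluated, which is the source of the speedup described above.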

Apparatus for constructing kinematic information of robot manipulator and method therefor
20220383540 · 2022-12-01 ·

An apparatus for constructing kinematic information of a robot manipulator is provided. The apparatus includes: a robot image acquisition part for acquiring a robot image containing shape information and coordinate information of the robot manipulator; a feature detection part for detecting the type of each of a plurality of joints of the robot manipulator and the three-dimensional coordinates of the joint using a feature detection model generated through deep learning based on the robot image containing shape information and coordinate information; and a variable derivation part for deriving Denavit-Hartenberg (DH) parameters based on the type of each of the plurality of joints of the robot manipulator and the three-dimensional coordinates of the joint.
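Once joint types and 3D coordinates are detected, some DH parameters follow from geometry. As a heavily simplified sketch: if consecutive joint axes are assumed parallel, the DH link length a_i reduces to the distance between consecutive joint origins. Full DH derivation (common normals, twist angles) is more involved than shown here.

```python
import numpy as np

def link_lengths(joint_xyz):
    """Simplified stand-in for DH-parameter derivation: with all joint
    axes assumed parallel, link length a_i is the Euclidean distance
    between consecutive detected joint origins."""
    p = np.asarray(joint_xyz, float)
    return np.linalg.norm(np.diff(p, axis=0), axis=1)
```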

Geometrically constrained, unsupervised training of convolutional autoencoders for extraction of eye landmarks

The disclosure relates to systems, methods and programs for geometrically constrained, unsupervised training of convolutional autoencoders on unlabeled images for extracting eye landmarks. Disclosed systems for unsupervised deep learning of gaze estimation from eye image data are implementable in a computerized system. Disclosed methods include capturing an unlabeled image comprising the eye region of a user; and training a plurality of convolutional autoencoders on the unlabeled image using an initial geometrically regularized loss function to determine a plurality of eye landmarks.
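A geometrically regularized loss adds a geometry penalty to the autoencoder's reconstruction error so that predicted landmarks stay plausible without labels. The specific constraint used here (landmarks should lie near a circle around the eye center) is an assumed example; the patent does not disclose its exact regularizer.

```python
import numpy as np

def geometric_loss(pred_landmarks, recon_err, center, radius, lam=0.1):
    """Assumed geometric regularizer: penalize predicted eye landmarks
    for deviating from a circle of the given radius around the eye
    center, added to the autoencoder's reconstruction error."""
    d = np.linalg.norm(pred_landmarks - center, axis=1)
    geo = np.mean((d - radius) ** 2)
    return recon_err + lam * geo
```

When the landmarks satisfy the constraint exactly, the loss collapses to the plain reconstruction error, so the regularizer only steers geometrically implausible predictions.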

FACE IMAGE PROCESSING METHOD, FACE IMAGE PROCESSING MODEL TRAINING METHOD, APPARATUS, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT

This application discloses a face image processing method performed by an electronic device. The method includes: acquiring a face image of a source face and a face template image of a template face; performing three-dimensional face modeling on the face image and the face template image to obtain a three-dimensional face image feature of the face image and a three-dimensional face template image feature of the face template image; fusing the three-dimensional face image feature and the three-dimensional face template image feature to obtain a three-dimensional fusion feature; performing face replacement feature extraction on the face image based on the face template image to obtain an initial face replacement feature; transforming the initial face replacement feature based on the three-dimensional fusion feature to obtain a target face replacement feature; and replacing the template face with the source face based on the target face replacement feature to obtain a target face image.
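The fusion and transformation steps can be sketched on feature vectors. Concatenation for fusion, and feature-wise modulation by a learned projection `W`, are both assumptions standing in for the trained components of the actual model.

```python
import numpy as np

def fuse_and_transform(src_3d, tmpl_3d, replace_feat, W):
    """Sketch of the fusion/transform steps: concatenate the two 3D face
    features into a fusion feature, then modulate the initial face
    replacement feature with a projection of it (W is a stand-in for
    trained weights)."""
    fusion = np.concatenate([src_3d, tmpl_3d])       # 3D fusion feature
    return replace_feat * (W @ fusion)               # target replacement feature
```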

IMAGE ANALYSIS SYSTEM FOR FORENSIC FACIAL COMPARISON

An automatic forensic facial comparison (FFC) system operating on a questioned image (I1) and a reference image (I2) acquired from a subject, comprising processing means configured to carry out FFC stages: at least one mandatory morphological analysis stage (11), and optionally a holistic comparison stage (12), and/or an image overlay stage (13), and/or a photo-anthropometry stage (14), and/or a decision-making stage (15). For each stage (11, 12, 13, 14) corresponding to FFC methods, the processing means calculate an overall indicator value of the stage carried out. In the final decision-making stage (15), the processing means apply soft computing to calculate a fuzzy value, obtained as a sum of the overall indicator values of each stage previously carried out (11, 12, 13, 14), each value weighted by a weight, based on a set of data supporting the decision-making stage (15), indicative of the degree of reliability of each stage and of the quality of the starting images (I1, I2).
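The decision stage reduces to a weighted sum of per-stage indicator values. The normalization by total weight below (keeping the fuzzy value in [0, 1] when scores are) is an assumption; the abstract specifies only a weighted sum.

```python
def fuzzy_decision(stage_scores, weights):
    """Decision stage sketch: sum each stage's overall indicator value,
    weighted by that stage's reliability/image-quality weight, then
    normalize by the total weight (normalization is an assumption)."""
    total_w = sum(weights.values())
    return sum(stage_scores[s] * weights[s] for s in stage_scores) / total_w
```

For example, a strong morphological indicator with a high reliability weight dominates a weaker optional stage with a low weight.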