G06V10/50

SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)

A method and hardware-based system provide for descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image (or premade terrain map) and a second image are acquired, and features are detected in both images. A scale and an orientation of each detected feature are estimated based on an intensity centroid (IC) computed from the feature's image moments, an orientation given by the angle between the center of the feature and the IC, and an orientation stability measure based on a radius. Signatures are computed for each detected feature using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features between the two images, and the matches are used to perform TRN.
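The intensity-centroid orientation described here can be sketched as follows. This is a minimal illustration (in the style of the well-known ORB intensity-centroid technique), assuming the feature is a square image patch and the moments are taken about the patch center; the abstract's scale estimate and stability radius are omitted.

```python
import numpy as np

def intensity_centroid_orientation(patch):
    """Estimate a feature's orientation as the angle between the patch
    center and the intensity centroid (IC) derived from image moments."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0   # patch center
    m00 = patch.sum()                        # zeroth moment (total intensity)
    m10 = ((xs - cx) * patch).sum()          # first moments about the center
    m01 = ((ys - cy) * patch).sum()
    ic = np.array([m10, m01]) / m00          # intensity centroid, center-relative
    return np.arctan2(ic[1], ic[0])          # angle from center to IC
```

For a patch whose brightness increases to the right, the IC lies to the right of the center and the estimated orientation is 0; for a top-to-bottom gradient it is π/2.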

Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas

An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
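The viewpoint-dependent weighting step can be sketched as below. The abstract only requires that weights follow a geometric relationship between each camera vantage point and the user-selected viewpoint; the cosine weighting and back-face rejection here are illustrative assumptions.

```python
import numpy as np

def blend_colors(colors, cam_dirs, view_dir):
    """Blend per-camera pixel colors for one surface point: cameras whose
    vantage direction better agrees with the user-selected viewing
    direction receive larger weights."""
    cam_dirs = np.asarray(cam_dirs, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    cam_dirs /= np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    view_dir /= np.linalg.norm(view_dir)
    w = np.clip(cam_dirs @ view_dir, 0.0, None)   # cosine weight; drop back-facing
    if w.sum() == 0.0:
        w = np.ones(len(cam_dirs))                # fall back to uniform weights
    w /= w.sum()
    return w @ np.asarray(colors, dtype=float)    # weighted color blend
```

A camera aligned exactly with the chosen viewpoint thus contributes its full pixel color, while an orthogonal or opposing camera contributes nothing.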

System and method for determining probability that a vehicle driver is associated with a driver identifier
11580756 · 2023-02-14

A method for driver identification including: recording a first image of a vehicle driver; extracting a set of values for a set of facial features of the vehicle driver from the first image; determining a filtering parameter; selecting a cluster of driver identifiers from a set of clusters based on the filtering parameter; computing a probability that the set of values is associated with each driver identifier of the cluster; determining, at the vehicle sensor system, driving characterization data for the driving session; and, in response to the computed probability exceeding a first threshold probability, determining that the set of values corresponds to one driver identifier within the selected cluster and associating the driving characterization data with that driver identifier.
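The cluster selection and probability-threshold steps can be sketched as below. The cluster lookup by a filtering parameter, the distance-based probability (softmax over negative Euclidean distance), and the threshold value are all placeholder assumptions; the patent does not fix any of these computations.

```python
import numpy as np

def identify_driver(face_values, clusters, filter_param, threshold=0.8):
    """Match extracted facial-feature values against one cluster of driver
    identifiers and return the matched identifier if the probability
    exceeds the threshold, else None."""
    cluster = clusters[filter_param]           # {driver_id: reference features}
    ids = list(cluster)
    d = np.array([np.linalg.norm(np.asarray(face_values) - np.asarray(cluster[i]))
                  for i in ids])
    p = np.exp(-d) / np.exp(-d).sum()          # probability per identifier
    best = int(np.argmax(p))
    if p[best] > threshold:
        return ids[best], float(p[best])       # associate session data with driver
    return None, float(p[best])
```

The driving characterization data would then be associated with the returned identifier only when a match is found.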

IMAGE PROCESSING SYSTEM AND METHOD
20230040513 · 2023-02-09

There is provided an image processing system and method for identifying a user. The system comprises a processor configured to identify a first user in an image, determine a plurality of characteristic vectors associated with the first user, compare the characteristic vectors associated with the first user with a plurality of predetermined characteristic vectors associated with a plurality of users including the first user, and identify the first user based on the comparison.
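The vector comparison step can be sketched as below. Averaged cosine similarity between the first user's characteristic vectors and each stored user's vectors is one plausible comparison; the abstract leaves the metric open, so treat this as an assumption.

```python
import numpy as np

def identify_user(user_vecs, gallery):
    """Identify a user by comparing their characteristic vectors against
    predetermined per-user vectors and returning the best-matching user."""
    def score(vecs_a, vecs_b):
        a = np.asarray(vecs_a, dtype=float)
        b = np.asarray(vecs_b, dtype=float)
        a /= np.linalg.norm(a, axis=1, keepdims=True)   # unit-normalize rows
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        return float((a @ b.T).mean())                  # mean cosine similarity
    return max(gallery, key=lambda uid: score(user_vecs, gallery[uid]))
```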

VIDEO PROCESSING METHOD, VIDEO SEARCHING METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

A video processing method, comprising: editing a video to be edited according to a scenario to obtain a target video (S100); acquiring feature parameters of the target video (S200); generating a keyword for the target video according to the feature parameters (S300); and storing the keyword in association with the target video (S400).
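Steps S200-S400 amount to building a keyword index over videos. A minimal sketch follows; the keyword rule (stringifying each feature parameter) and the in-memory dictionary index are placeholder assumptions, since the abstract does not specify either.

```python
def store_with_keywords(video_id, feature_params, index):
    """Generate keywords from a target video's feature parameters and
    store them in association with the video (steps S200-S400)."""
    keywords = [f"{name}:{value}" for name, value in feature_params.items()]
    for kw in keywords:
        index.setdefault(kw, []).append(video_id)   # associative storage
    return keywords

def search_videos(keyword, index):
    """Look up the videos previously associated with a keyword."""
    return index.get(keyword, [])
```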

OBJECT DETECTION APPARATUS, SYSTEM, AND METHOD, DATA CONVERSION UNIT, AND NON-TRANSITORY COMPUTER READABLE MEDIUM

A receiver receives a radio wave transmitted to a target and scattered by the target to acquire a signal. An imaging unit generates a 3D complex image of the target based on the signal. A value extraction unit extracts intensities and phases from the 3D complex image to acquire a value set including an intensity matrix and a phase matrix, the extracted intensities constituting the intensity matrix and the extracted phases constituting the phase matrix. A subset selection unit selects a subset from the value set. A transformation unit changes a representation of the subset to generate a 2D real image. A detection unit detects whether there is an undesired object on the target based on the 2D real image.
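The extraction and transformation pipeline can be sketched as below. Taking a single range slice as the selected subset and simple amplitude thresholding as the detector are illustrative assumptions; the abstract leaves both the subset selection and the detection criterion open.

```python
import numpy as np

def detect_from_complex_image(img3d, z_slice, threshold):
    """From a 3D complex image: extract the intensity and phase matrices,
    select a subset (one range slice), change representation to a 2D real
    image, and flag whether any pixel suggests an undesired object."""
    intensity = np.abs(img3d)             # intensity of each complex voxel
    phase = np.angle(img3d)               # phase of each complex voxel
    real2d = intensity[:, :, z_slice]     # subset selection + 3D -> 2D real image
    detected = bool((real2d > threshold).any())
    return real2d, phase[:, :, z_slice], detected
```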

HALFTONE SCREENS
20230039819 · 2023-02-09

In an example, a method includes, by one or more processors, receiving a greyscale image having a plurality of pixels, each pixel being associated with a grey level, and the greyscale image having a first number of grey levels. An order of the pixels may be determined based on the grey level. A second number of grey levels may be determined, wherein the second number of grey levels is greater than the first number, and an indication of a target number of pixels per grey level of the second number of grey levels may further be determined. Taking the pixels in order, and based on the target number of pixels per grey level, a new grey level may be allocated to each pixel to provide the second number of grey levels. The new grey levels may be converted to a threshold of a threshold halftone screen.
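The grey-level reallocation can be sketched as below. Stable tie-breaking by flat pixel index and an equal target count per new level are assumptions; the abstract allows other orderings and target indications.

```python
import numpy as np

def expand_grey_levels(image, second_levels):
    """Order pixels by grey level, then walk them in that order assigning
    new grey levels with ~equal target counts per level, so the image
    gains the (larger) second number of grey levels. The resulting array
    of new levels can serve directly as a threshold screen."""
    flat = image.ravel()
    order = np.argsort(flat, kind="stable")       # pixel order by grey level
    target = len(flat) / second_levels            # target pixels per new level
    new = np.empty(flat.shape, dtype=int)
    new[order] = (np.arange(len(flat)) / target).astype(int)
    return new.reshape(image.shape)
```

A 2x2 image with two grey levels expanded to four levels splits each original level's pixels across two new levels, preserving the original ordering.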

Scene-based automatic white balance

A method and apparatus may be used for performing a scene-based automatic white balance correction. The method may include obtaining an input image. The method may include obtaining a raw image thumbnail. The method may include obtaining an augmented image thumbnail. The method may include computing a histogram from an image thumbnail. The method may include determining a scene classification. The method may include learning a filter. The filter may be learned from one or several different instances of the raw image thumbnail, the augmented image thumbnail, the scene classification, or any combination thereof. The method may include applying the filter to the histogram to determine white balance correction coefficients and obtain a processed image.
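The final correction step can be sketched as below. A grey-world estimate blended with per-scene prior gains stands in for the learned, scene-classified filter described in the abstract; the blend weight, the grey-world estimator, and the per-scene gains are all placeholder assumptions.

```python
import numpy as np

def scene_awb(image, scene_gains, blend=0.5):
    """Apply scene-based white-balance correction coefficients to an RGB
    image: combine a grey-world gain estimate with scene-specific prior
    gains, then scale each channel to obtain the processed image."""
    img = np.asarray(image, dtype=float)
    means = img.reshape(-1, 3).mean(axis=0)            # per-channel averages
    grey_world = means.mean() / means                  # grey-world gain estimate
    gains = blend * grey_world + (1.0 - blend) * np.asarray(scene_gains, dtype=float)
    return np.clip(img * gains, 0.0, 255.0)            # corrected image
```

After correction the channel means should be closer together than in the input, which is the usual effect of a white-balance gain.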