SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)

A method and hardware-based system provide for descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image (a premade terrain map) and a second image are acquired, and features are detected in both images. A scale and an orientation are estimated for each detected feature based on an intensity centroid (IC) computed from the feature's image moments, an orientation derived from the angle between the feature's center and the IC, and an orientation stability measure derived from a radius. Signatures are computed for each detected feature using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features between the two images, and the matches are then used to perform TRN.
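The intensity-centroid orientation described above can be illustrated with a short sketch. This follows the well-known IC technique (as popularized by ORB): first-order image moments locate the centroid, and the orientation is the angle of the vector from the patch center to that centroid. The stability proxy shown is an assumption for illustration, not the patented SOSE measure.

```python
import numpy as np

def ic_orientation(patch):
    """Estimate a keypoint's orientation from the intensity centroid (IC).

    `patch` is a square grayscale patch centered on the detected feature.
    Moments m10 and m01 (taken about the patch center) locate the IC;
    the orientation is the angle of the vector from the center to the IC.
    Illustrative sketch only, not the patented SOSE algorithm.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    m00 = patch.sum()                    # zeroth-order moment (total mass)
    m10 = ((xs - cx) * patch).sum()      # first-order moment, x direction
    m01 = ((ys - cy) * patch).sum()      # first-order moment, y direction
    theta = np.arctan2(m01, m10)         # angle from feature center to IC
    # crude stability proxy: offset of the IC from the patch center
    radius = np.hypot(m10, m01) / max(m00, 1e-9)
    return theta, radius
```

A patch whose mass lies directly to the right of its center yields an orientation of zero, matching the intuition that the IC vector points along the dominant intensity gradient.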

Using mapped elevation to determine navigational parameters

Systems and methods for navigating a host vehicle. The system may perform operations including receiving, from an image capture device, at least one image representative of an environment of the host vehicle; analyzing the at least one image to identify an object in the environment of the host vehicle; determining a location of the host vehicle; receiving map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle; determining a distance from the host vehicle to the object based on at least the elevation information; and determining a navigational action for the host vehicle based on the determined distance.
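The distance-from-elevation step above can be sketched with simple pinhole geometry. Under a level-camera, flat-local-ground assumption, an object's road-contact point imaged `v - cv` pixels below the principal point lies at range `h * f / (v - cv)`, where `h` is the camera's height above the plane the object rests on; the map's elevation information refines `h` when the object's ground differs from the host vehicle's. All parameter names are hypothetical, and this is a sketch of the geometry, not the patented method.

```python
def distance_from_elevation(cam_elev_m, ground_elev_m, cam_height_m,
                            focal_px, v_px, cv_px):
    """Estimate ground distance to an object's contact point.

    cam_elev_m: host vehicle's map elevation; ground_elev_m: map elevation
    at the object; cam_height_m: camera height above the host's ground;
    focal_px / cv_px: focal length and principal-point row in pixels;
    v_px: image row of the object's ground-contact point.
    """
    # effective camera height above the object's local ground plane
    h = cam_height_m + (cam_elev_m - ground_elev_m)
    dv = v_px - cv_px
    if dv <= 0:
        raise ValueError("contact point must image below the horizon line")
    return h * focal_px / dv
```

With equal elevations this reduces to the classic flat-world range estimate; a lower object elevation increases the effective height and thus the estimated distance for the same image row.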

LEARNING DATA GENERATION DEVICE AND DEFECT IDENTIFICATION SYSTEM
20230039064 · 2023-02-09

A learning data generation device that can generate learning data suitable for learning of an identification model. The learning data generation device has a function of cutting out part of first image data as second image data, a function of generating a two-dimensional graphic corresponding to the area of the second image data and representing a pseudo defect, a function of generating third image data by combining the second image data and the two-dimensional graphic, and a function of assigning a label corresponding to the two-dimensional graphic to the third image data. By using the third image data for learning of the identification model, a highly accurate identification model can be generated.
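The four functions described above (cut out a crop, generate a two-dimensional graphic representing a pseudo defect, composite the two, and attach a label) can be sketched as follows. A filled circle stands in for the pseudo-defect graphic, and the label format is an illustrative assumption.

```python
import numpy as np

def make_training_sample(first_image, top, left, size, rng):
    """Cut second image data from first_image, paint a pseudo defect,
    and return labeled third image data.

    The pseudo defect is a filled circle of random center and radius,
    standing in for the patent's 'two-dimensional graphic'. Sketch only.
    """
    # function 1: cut out part of the first image data
    crop = first_image[top:top + size, left:left + size].copy()
    # function 2: generate a 2-D graphic sized to the crop area
    cy, cx = rng.integers(2, size - 2, size=2)        # defect center
    r = int(rng.integers(1, size // 4))               # defect radius
    ys, xs = np.mgrid[0:size, 0:size]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2
    # function 3: combine the crop and the graphic (third image data)
    crop[mask] = 255
    # function 4: assign a label corresponding to the graphic
    label = {"defect": True, "shape": "circle",
             "center": (int(cy), int(cx)), "radius": r}
    return crop, label
```

Repeating this over many crops and graphic shapes would yield a labeled set for training the identification model without collecting real defect images.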

Devices, systems and methods for predicting gaze-related parameters using a neural network
11556741 · 2023-01-17

A method for creating and updating a database is disclosed. In one example, the method includes presenting a first stimulus to a first user wearing a head-wearable device and using a first camera of the head-wearable device to generate a first left image of at least a portion of the left eye of the first user. When the first user is expected to respond, or to have responded, to the first stimulus, a second camera of the head-wearable device is used to generate a first right image of at least a portion of the right eye of the first user. A data connection is established between the head-wearable device and the database. A first dataset is generated comprising the first left image, the first right image, and a first representation of a gaze-related parameter, the first representation being correlated with the first stimulus, and the first dataset is added to a device database.
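The dataset records described above pair the two eye images with a stimulus-correlated gaze parameter. A minimal container might look like the following sketch; all names are hypothetical, not from the patent.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class GazeDataset:
    """Minimal sketch of the device database: each record holds a
    left-eye image, a right-eye image, and a representation of a
    gaze-related parameter correlated with the presented stimulus."""
    records: List[dict] = field(default_factory=list)

    def add(self, left_image: Any, right_image: Any, gaze_parameter: Any):
        self.records.append({
            "left": left_image,       # first left image (left eye)
            "right": right_image,     # first right image (right eye)
            "gaze": gaze_parameter,   # e.g. screen coordinates of stimulus
        })
        return len(self.records)
```

Such records, accumulated across users and stimuli, form the training corpus for a neural network that predicts gaze-related parameters from eye images.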

METHOD AND APPARATUS WITH IMAGE TRANSFORMATION
20230007964 · 2023-01-12

A method with image transformation includes: identifying an original image; and determining a transformed image by inputting the original image to a neural network model configured to transform a color of the original image, wherein the neural network model comprises an operation block configured to perform white balancing on the original image, a correction block configured to correct a color of an output image of the operation block, and a mapping block configured to apply a lookup table to an output image of the correction block.
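The three-block pipeline described above (an operation block for white balancing, a correction block, and a mapping block applying a lookup table) can be sketched with plain fixed operations standing in for the learned neural-network blocks. All parameter names and the specific operations are illustrative assumptions, not the patent's model.

```python
import numpy as np

def transform_image(img, wb_gains, color_matrix, lut):
    """Apply the three blocks in order to an HxWx3 float image in [0, 1]:
    per-channel white-balance gains, a 3x3 color-correction matrix, and
    a shared 1-D lookup table (e.g. a tone curve)."""
    # operation block: white balancing via per-channel gains
    out = img * wb_gains
    # correction block: 3x3 color-correction matrix applied per pixel
    out = out @ color_matrix.T
    # mapping block: index each channel into the 1-D LUT
    idx = np.clip(out, 0.0, 1.0) * (len(lut) - 1)
    return lut[idx.round().astype(int)]
```

With identity gains, an identity matrix, and a linear LUT the pipeline is a no-op, which makes it easy to verify each block in isolation.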

METHOD AND DEVICE FOR FINGERPRINT AUTHENTICATION

A fingerprint authentication method includes a first step of acquiring partial fingerprint measurement data for a part of a fingerprint, and a second step of calculating a matching rate by comparing the partial fingerprint measurement data with reference comparison data selected from a plurality of partial fingerprint registration data, each corresponding to a part of a fingerprint. The method further includes a third step of determining whether the matching rate is equal to or greater than an authentication threshold, and a fourth step of either declaring the authentication successful based on the result of the third step, or repeating the second and third steps with newly selected reference comparison data, depending on whether the matching rate is equal to or greater than a preset threshold smaller than the authentication threshold.
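The two-threshold loop above can be sketched as follows: a matching rate at or above the authentication threshold authenticates immediately; a rate at or above the lower preset threshold keeps the loop going with new reference data; anything below it rejects. Threshold values, the early-stop policy, and `match_fn` are illustrative assumptions.

```python
def authenticate(measured, templates, match_fn, auth_thr=0.9, retry_thr=0.6):
    """Two-threshold matching loop (sketch).

    measured: partial fingerprint measurement data.
    templates: iterable of partial fingerprint registration data.
    match_fn(measured, template): returns a matching rate in [0, 1].
    """
    for template in templates:                # reference comparison data
        rate = match_fn(measured, template)   # second step: matching rate
        if rate >= auth_thr:                  # third step met: success
            return True
        if rate < retry_thr:                  # below preset threshold: stop
            return False
    return False                              # candidates exhausted
```

The lower threshold prunes hopeless comparisons early, so a finger that partially overlaps one registered patch still gets compared against neighboring patches, while a clearly foreign finger is rejected quickly.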

ANALYSIS DEVICE

An analysis device includes an analysis unit configured to receive scattered light, transmitted light, fluorescence, or electromagnetic waves from an observed object located in a light irradiation region illuminated by a light source. A light-receiving unit converts the received light or electromagnetic waves into an electrical signal, and the analysis unit analyzes the observed object on the basis of a signal extracted on the basis of the time axis of that electrical signal.
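The time-axis extraction described above amounts to gating the photodetector signal in time and summarizing the gated pulse. The following sketch shows one plausible reading; the window bounds, names, and the specific statistics (pulse area and peak) are illustrative assumptions.

```python
import numpy as np

def analyze_pulse(signal, sample_rate_hz, t_start_s, t_stop_s):
    """Gate the light-receiving unit's electrical signal on its time
    axis and summarize the gated pulse (area and peak amplitude)."""
    i0 = int(round(t_start_s * sample_rate_hz))   # window start sample
    i1 = int(round(t_stop_s * sample_rate_hz))    # window end sample
    gated = np.asarray(signal)[i0:i1]             # time-axis extraction
    dt = 1.0 / sample_rate_hz
    return {"area": float(gated.sum() * dt),      # integrated intensity
            "peak": float(gated.max())}           # peak amplitude
```

Pulse area tracks total collected light (e.g. fluorescence yield) while the peak tracks instantaneous intensity, two quantities commonly used to characterize an observed object passing through an irradiation region.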
