G06T7/50

CONTOUR SHAPE RECOGNITION METHOD
20230047131 · 2023-02-16

Provided is a contour shape recognition method, including: sampling and extracting salient feature points of a contour of a shape sample; calculating a feature function of the shape sample at a semi-global scale by using three types of shape descriptors; dividing the scale with a single pixel as a spacing to acquire a shape feature function in a full-scale space; storing feature function values at various scales into a matrix to acquire three types of feature grayscale map representations of the shape sample in the full-scale space; synthesizing the three types of grayscale map representations of the shape sample, as three channels of RGB, into a color feature representation image; constructing a two-stream convolutional neural network by taking the shape sample and the feature representation image as inputs at the same time; and training the two-stream convolutional neural network, and inputting a test sample into a trained network model to achieve shape classification.
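The pipeline above (per-scale feature functions stacked row by row, three descriptor types mapped to RGB channels) can be sketched as follows. The three descriptors here are toy stand-ins, since the abstract does not name them; only the stacking scheme follows the text.

```python
import numpy as np

def full_scale_feature_image(contour, num_scales, descriptors):
    """Stack per-scale feature functions into one color representation:
    rows are scales (single-pixel spacing), columns are contour points,
    and the three descriptor types become the three RGB channels."""
    channels = []
    for desc in descriptors:
        rows = [desc(contour, s) for s in range(1, num_scales + 1)]
        channels.append(np.stack(rows))        # (num_scales, N) grayscale map
    return np.stack(channels, axis=-1)         # (num_scales, N, 3) color image

def _smooth(signal, scale):
    # circular moving average whose window grows with the scale
    k = np.ones(2 * scale + 1) / (2 * scale + 1)
    padded = np.r_[signal[-scale:], signal, signal[:scale]]
    return np.convolve(padded, k, mode="valid")

# Hypothetical descriptor functions used only for this demo:
def d_radial(c, s): return _smooth(np.linalg.norm(c - c.mean(0), axis=1), s)
def d_x(c, s):      return _smooth(c[:, 0], s)
def d_y(c, s):      return _smooth(c[:, 1], s)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit circle
image = full_scale_feature_image(contour, num_scales=16,
                                 descriptors=[d_radial, d_x, d_y])
print(image.shape)  # (16, 64, 3)
```

The resulting array can feed the second stream of the two-stream network, alongside the raw shape sample in the first stream.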

INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
20230048222 · 2023-02-16

An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data from an observation space. The processor detects a subject image of a detection target from the observation data, calculates a plurality of individual indices indicating degrees of reliability, each of which relates to at least one of identification information or measurement information regarding the detection target, and also calculates an integrated index obtained by integrating the plurality of calculated individual indices. The output interface outputs the integrated index.
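The integration step can be sketched minimally. The abstract does not specify how the individual indices are combined, so a weighted mean is used here as one plausible rule (an assumption).

```python
def integrated_index(individual, weights=None):
    """Combine several per-aspect reliability indices into one score.

    `individual` holds reliability values (e.g. for the identification
    of the target and for its measured distance); the weighted-mean
    combination rule is an assumption, not taken from the abstract."""
    if weights is None:
        weights = [1.0] * len(individual)
    total = sum(w * x for w, x in zip(weights, individual))
    return total / sum(weights)

# e.g. identification reliability 0.5, measurement reliability 1.0
print(integrated_index([0.5, 1.0]))  # 0.75
```

A downstream consumer (such as a mobile object's controller) can then act on the single integrated index instead of inspecting each individual one.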

SYSTEMS AND METHODS FOR MASKING A RECOGNIZED OBJECT DURING AN APPLICATION OF A SYNTHETIC ELEMENT TO AN ORIGINAL IMAGE
20230050857 · 2023-02-16

An exemplary object masking system is configured to mask a recognized object during an application of a synthetic element to an original image. For example, the object masking system accesses a model of a recognized object depicted in an original image of a scene. The object masking system associates the model with the recognized object. The object masking system then generates presentation data for use by a presentation system to present an augmented version of the original image in which a synthetic element added to the original image is, based on the model as associated with the recognized object, prevented from occluding at least a portion of the recognized object. In this way, the synthetic element is made to appear as if located behind the recognized object. Corresponding systems and methods are also disclosed.
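The masking idea can be sketched as alpha compositing with the element's alpha suppressed wherever the model associated with the recognized object covers the image. The function name and the per-pixel boolean mask are illustrative assumptions; the abstract leaves the exact presentation-data format open.

```python
import numpy as np

def composite_with_mask(original, synthetic, synthetic_alpha, object_mask):
    """Blend a synthetic element over an image, except where the model
    of a recognized object says the object is in front.

    `object_mask` (bool, H x W) marks pixels covered by the model that
    is associated with the recognized object; the element's alpha is
    zeroed there, so the element appears to lie behind the object."""
    alpha = np.where(object_mask, 0.0, synthetic_alpha)[..., None]
    return (1.0 - alpha) * original + alpha * synthetic

original = np.zeros((2, 2, 3))          # toy 2x2 image
synthetic = np.ones((2, 2, 3))          # fully "white" synthetic element
alpha = np.full((2, 2), 1.0)            # element would cover every pixel
mask = np.array([[True, False], [False, False]])  # object at one pixel
out = composite_with_mask(original, synthetic, alpha, mask)
print(out[0, 0, 0], out[1, 1, 0])  # 0.0 1.0
```

The masked pixel keeps the original image, which is exactly what makes the synthetic element appear to pass behind the recognized object.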

CORRECTING DEPTH ESTIMATIONS DERIVED FROM IMAGE DATA USING ACOUSTIC INFORMATION
20230047317 · 2023-02-16

In one implementation, a method includes: obtaining a first depth estimation characterizing a distance between a device and a surface in a real-world environment, wherein the first depth estimation is derived from image data including a representation of the surface; receiving, using an audio transceiver, an acoustic reflection of an acoustic wave, wherein the acoustic wave is transmitted in a known direction relative to the device; determining a second depth estimation based on the acoustic reflection, wherein the second depth estimation characterizes the distance between the device and the surface; and determining a confirmed depth estimation characterizing the distance between the device and the surface based on resolving any mismatch between the first depth estimation and the second depth estimation.
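The acoustic side of this method is a time-of-flight computation, and the confirmation step resolves any mismatch between the two estimates. The resolution policy below (average when the estimates agree within a tolerance, otherwise prefer the acoustic value) is an assumption; the abstract does not specify it.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def acoustic_depth(round_trip_time_s):
    """Second depth estimation from the acoustic reflection: the wave
    travels to the surface and back, so halve the round trip."""
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0

def confirmed_depth(image_depth_m, acoustic_depth_m, tolerance_m=0.05):
    """Resolve a mismatch between the image-derived and acoustic
    estimates (the averaging/fallback policy here is an assumption)."""
    if abs(image_depth_m - acoustic_depth_m) <= tolerance_m:
        return (image_depth_m + acoustic_depth_m) / 2.0
    return acoustic_depth_m  # trust the direct time-of-flight reading

print(acoustic_depth(0.01))           # ~1.715 m for a 10 ms round trip
print(confirmed_depth(1.70, 1.715))   # within tolerance -> averaged
```

A tolerance threshold like this is one simple way to decide when the image-derived estimate needs correcting rather than merely refining.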

GENERATING VIRTUAL IMAGES BASED ON CAPTURED IMAGE DATA
20230050264 · 2023-02-16

Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate, based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera, a virtual image depicting objects within the input image according to the desired pose of the virtual camera.
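With the depth sensor supplying per-pixel depth, the core of such a reprojection is: back-project a pixel through the physical camera's intrinsics, apply the relative pose between the two camera poses, and re-project into the virtual camera. The shared intrinsic matrix and the function below are illustrative assumptions, not the patent's stated implementation.

```python
import numpy as np

def reproject_pixel(K, depth, u, v, R, t):
    """Map one pixel with known depth from the physical camera into a
    virtual camera.

    K: 3x3 intrinsic matrix (assumed shared by both cameras);
    R, t: rotation and translation of the virtual camera's desired pose
    relative to the capturing device's actual pose. This back-project /
    transform / re-project chain is consistent with the epipolar
    relation between the two poses."""
    p = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth  # ray scaled by depth
    q = K @ (R @ p + t)                                   # into virtual camera
    return q[:2] / q[2]                                   # perspective divide

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Identical actual and desired poses map each pixel to itself:
same_pose = reproject_pixel(K, 2.0, 100.0, 80.0, np.eye(3), np.zeros(3))
print(same_pose)  # [100.  80.]
```

Applying this per pixel of the input image (using the depth sensor's values) yields the virtual image for the desired pose.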
