G06V10/757

Apparatus and method for identifying an articulatable part of a physical object using multiple 3D point clouds

An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to: align the first and second point clouds; find nearest neighbors in the second point cloud of points in the first point cloud; eliminate those nearest neighbors from the second point cloud, such that the remaining points of the second point cloud comprise points associated with the articulatable part and points associated with noise; generate an output comprising at least the remaining points associated with the articulatable part, without the noise points; and communicate the output to the output interface.
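The eliminate-and-denoise step can be sketched in a few lines. Everything here (the function name, thresholds, the brute-force nearest-neighbor search, and the density-based noise filter) is an illustrative assumption, not the patent's actual implementation:

```python
import numpy as np

def articulated_part_points(cloud_a, cloud_b, nn_thresh=0.05, min_cluster=3):
    """Return points of cloud_b that moved relative to cloud_a.

    cloud_a, cloud_b: (N,3)/(M,3) arrays, assumed already aligned
    (the patent aligns the clouds first).
    nn_thresh: a point in cloud_b whose nearest neighbor in cloud_a is
    closer than this is treated as unchanged and eliminated.
    min_cluster: crude noise filter -- keep a surviving point only if
    enough other survivors lie near it.
    """
    # Brute-force nearest-neighbor distances from cloud_b to cloud_a.
    d = np.linalg.norm(cloud_b[:, None, :] - cloud_a[None, :, :], axis=2)
    nn_dist = d.min(axis=1)

    # Eliminate points with a close match: what remains is the
    # articulated part plus noise.
    survivors = cloud_b[nn_dist > nn_thresh]
    if len(survivors) == 0:
        return survivors

    # Drop isolated noise points via a simple density test.
    dd = np.linalg.norm(survivors[:, None, :] - survivors[None, :, :], axis=2)
    neighbor_counts = (dd < nn_thresh * 3).sum(axis=1) - 1
    return survivors[neighbor_counts >= min_cluster]
```

A k-d tree would replace the O(N·M) distance matrix for realistic cloud sizes; the brute-force form is kept only for brevity.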

Content-based detection and three dimensional geometric reconstruction of objects in image and video data

Systems, computer program products, and techniques for detecting and/or reconstructing objects depicted in digital image data within a three-dimensional space are disclosed. The concepts utilize internal features for detection and reconstruction, avoiding reliance on information derived from the location of edges. The inventive concepts provide an improvement over conventional techniques, since objects may be detected and/or reconstructed even when edges are obscured or not depicted in the digital image data. In one aspect, detecting a document depicted in a digital image includes: detecting a plurality of identifying features of the document, wherein the plurality of identifying features are located internally with respect to the document; projecting a location of one or more edges of the document based at least in part on the plurality of identifying features; and outputting the projected location of the one or more edges of the document to a display of a computer and/or to memory.
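One way to project edge locations from internal features: fit a transform between the features' canonical (template) positions and their detected image positions, then map the template's corners through it. This sketch fits a 2D affine transform by least squares; the affine model and every name are assumptions for illustration (a full implementation would likely use a homography with robust estimation):

```python
import numpy as np

def project_edges(template_pts, image_pts, template_corners):
    """Project document corners into the image from internal features.

    template_pts: (N,2) canonical locations of internal features
    image_pts: (N,2) where those features were detected in the image
    template_corners: (4,2) document corners in canonical coordinates
    """
    n = len(template_pts)
    A = np.hstack([template_pts, np.ones((n, 1))])    # (N,3)
    # Solve A @ M ~= image_pts for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, image_pts, rcond=None)
    corners_h = np.hstack([template_corners, np.ones((4, 1))])
    return corners_h @ M
```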

SYSTEM USING IMAGE CONNECTIVITY TO REDUCE BUNDLE SIZE FOR BUNDLE ADJUSTMENT
20230081366 · 2023-03-16

Systems and methods are disclosed, including a non-transitory computer-readable medium storing computer-executable instructions that, when executed by a processor, cause the processor to: identify a first image, a second image, and a third image, the first image overlapping the second image and the third image, the second image overlapping the third image; determine a first connectivity between the first image and the second image; determine a second connectivity between the first image and the third image; determine a third connectivity between the second image and the third image, the second connectivity being less than the first connectivity, the third connectivity being greater than the second connectivity; assign the first image, the second image, and the third image to a cluster based on the first connectivity and the third connectivity; and conduct a bundle adjustment process on the cluster of the first image, the second image, and the third image.
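The cluster-assignment idea can be sketched with union-find over strongly connected image pairs: images joined by an edge whose connectivity clears a threshold land in the same cluster, and weakly connected pairs (like the first/third pair above) are linked only transitively. The interface and the threshold rule are hypothetical, not the patent's:

```python
def cluster_images(pairs, threshold):
    """Group images whose pairwise connectivity meets a threshold.

    pairs: dict mapping (img_i, img_j) -> connectivity, e.g. the
    number of shared tie points in the overlap region.
    Returns a list of clusters (sets of image ids).
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (a, b), conn in pairs.items():
        find(a); find(b)                    # register both nodes
        if conn >= threshold:
            union(a, b)

    clusters = {}
    for node in list(parent):
        clusters.setdefault(find(node), set()).add(node)
    return list(clusters.values())
```

Bundle adjustment would then run per cluster, which is what keeps the bundle size small.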

Relocalization method and apparatus in camera pose tracking process, device, and storage medium

This application discloses a repositioning method and apparatus in a camera pose tracking process, a device, and a storage medium, belonging to the field of augmented reality (AR). The method includes: obtaining a current image acquired after an i-th anchor image in a plurality of anchor images; obtaining an initial feature point and an initial pose parameter in the first anchor image in the plurality of anchor images in a case that the current image satisfies a repositioning condition; performing feature point tracking on the current image relative to the first anchor image, to obtain a plurality of matching feature point pairs; filtering the plurality of matching feature point pairs according to a constraint condition, to obtain a filtered matching feature point pair; calculating a pose change amount of a camera from the initial pose parameter to a target pose parameter according to the filtered matching feature point pair; and performing repositioning according to the initial pose parameter and the pose change amount to obtain the target pose parameter of the camera.
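The filter-then-estimate pipeline can be illustrated in 2D. As a stand-in for the patent's constraint condition, this sketch drops match pairs whose displacement deviates from the median, then recovers a rotation and translation (the "pose change amount") with a least-squares Kabsch fit; all names and the specific constraint are assumptions:

```python
import numpy as np

def pose_change_from_matches(anchor_pts, current_pts, tol=3.0):
    """Estimate a 2D pose change from tracked feature-point pairs.

    anchor_pts, current_pts: (N,2) matched feature coordinates in the
    first anchor image and the current image.
    Pairs whose displacement deviates from the median displacement by
    more than tol pixels are filtered out as constraint violations.
    Returns (R, t) with current ~= anchor @ R.T + t.
    """
    disp = current_pts - anchor_pts
    med = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - med, axis=1) < tol
    a, c = anchor_pts[keep], current_pts[keep]

    # Kabsch: optimal rotation between the centered point sets.
    a0, c0 = a - a.mean(0), c - c.mean(0)
    U, _, Vt = np.linalg.svd(a0.T @ c0)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T   # d guards against reflections
    t = c.mean(0) - a.mean(0) @ R.T
    return R, t
```

Repositioning then composes this change with the initial pose parameter to obtain the target pose.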

LABELING TECHNIQUES FOR A MODIFIED PANOPTIC LABELING NEURAL NETWORK

A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
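The final label-assignment step reduces to a per-pixel decision once the network has produced its probabilities. A toy stand-in (the 0.5 cutoff and all names are assumptions, not the modified PLNN's actual decision rule):

```python
import numpy as np

def label_mask_pixels(same_label_prob, object_labels, background=0):
    """Assign a label to each mask pixel from a 'same label' probability.

    same_label_prob: (H,W) probability, per mask pixel, that it shares
    the label of the corresponding object pixel.
    object_labels: (H,W) integer category labels of the object pixels.
    A mask pixel inherits the object pixel's label when the probability
    is at least 0.5; otherwise it is marked background.
    """
    return np.where(same_label_prob >= 0.5, object_labels, background)
```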

PRECIOUS METAL AUTHENTICATION SYSTEM AND METHOD
20230081262 · 2023-03-16

A method for authentication of precious metals according to an embodiment may include: obtaining a surface image by photographing a surface of a predetermined region of a precious metal; generating an authentication value based on the obtained surface image; and determining the authenticity of the precious metal by comparing the generated authentication value with a pre-stored authentication value.
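One simple realization of an "authentication value" is a perceptual hash of the surface image, compared to the enrolled hash by Hamming distance. The average-hash scheme, the 8×8 size, and the distance threshold below are illustrative assumptions only; the patent does not specify this construction:

```python
import numpy as np

def average_hash(surface_img, size=8):
    """64-bit average hash of a grayscale surface image, as a toy
    stand-in for the patent's authentication value."""
    h, w = surface_img.shape
    small = surface_img[:h - h % size, :w - w % size]
    # Block-average down to size x size, then threshold at the mean.
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def authenticate(surface_img, stored_bits, max_hamming=5):
    """Genuine if the hash differs from the enrolled hash in at most
    max_hamming bit positions."""
    bits = average_hash(surface_img)
    return int(np.sum(bits != stored_bits)) <= max_hamming
```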

Method, device and storage medium for determining camera posture information

Embodiments of this application disclose a method for determining camera pose information of a camera of a mobile terminal. The method includes: obtaining a first image, a second image, and a template image, the first image being the frame preceding the second image, the first and second images each including an instance of the template image and being captured by the camera of the mobile terminal at a corresponding spatial position; determining a first homography between the template image and the second image; determining a second homography between the first image and the second image; and performing complementary filtering on the first homography and the second homography to obtain camera pose information of the camera, wherein the camera pose information represents the spatial position of the mobile terminal when it captures the second image using the camera.
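A complementary filter fuses one estimate that is smooth but drifts with one that is drift-free but noisy. For homographies, a minimal sketch blends the chained frame-to-frame estimate with the absolute template estimate after scale normalization; the blending weight and the linear blend itself are illustrative assumptions, not the patent's actual filter:

```python
import numpy as np

def complementary_filter(h_template, h_frame, h_prev_est, alpha=0.8):
    """Toy complementary filter blending two homography estimates.

    h_template: homography from the template image to the second image
    (drift-free, but noisy frame to frame).
    h_frame: homography from the first image to the second image
    (smooth, but accumulates drift when chained).
    h_prev_est: filtered template-to-first-image estimate.
    """
    chained = h_frame @ h_prev_est        # template -> first -> second
    chained = chained / chained[2, 2]     # homographies are scale-free
    h_abs = h_template / h_template[2, 2]
    h = alpha * chained + (1 - alpha) * h_abs
    return h / h[2, 2]
```

The filtered homography would then be decomposed into the rotation and translation reported as the camera pose.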

Pattern Matching Device, Pattern Measurement System, and Non-Transitory Computer-Readable Medium
20230071668 · 2023-03-09

A pattern matching apparatus includes a computer system configured to execute pattern matching processing between first pattern data based on design data and second pattern data representing an image captured by an electron microscope. The computer system acquires a first edge candidate group including one or more first edge candidates, acquires a selection-required number (the number of second edge candidates to be selected based on the second pattern data), acquires a second edge candidate group including the second edge candidates of the selection-required number, acquires an association evaluation value for each of the different association combinations between the first edge candidate group and the second edge candidate group, selects one of the combinations based on the association evaluation value, and calculates a matching shift amount based on the selected combination.
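The combination search can be sketched in 1D: enumerate choices of image edge candidates, score each association against the design edges, and take the best. Here the evaluation value is the variance of the per-edge shifts (a rigid pattern shift makes all shifts equal), which is an assumed scoring rule, not the patent's:

```python
from itertools import permutations

def best_association(design_edges, image_edge_candidates, n_select):
    """Pick the best pairing of design edges to image edge candidates.

    design_edges: x-positions of edges from the design data.
    image_edge_candidates: x-positions of candidate edges detected in
    the microscope image (may include spurious candidates).
    n_select: the 'selection-required number' of candidates.
    Returns (chosen candidates, matching shift amount).
    """
    assert len(design_edges) == n_select
    best = None
    for combo in permutations(image_edge_candidates, n_select):
        shifts = [c - d for c, d in zip(combo, design_edges)]
        mean = sum(shifts) / n_select
        score = sum((s - mean) ** 2 for s in shifts)   # lower is better
        if best is None or score < best[0]:
            best = (score, combo, mean)
    return best[1], best[2]
```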

SYSTEMS AND METHODS FOR OBJECT RECOGNITION

The present disclosure relates to systems and methods for object recognition. The systems may obtain image data captured by an imaging device. The image data may include one or more objects. The systems may determine a centerline of a target object in the one or more objects based on the image data. The systems may determine a recognition result of the target object using a trained neural network model based on at least one feature parameter of the centerline of the target object. The recognition result may include a name of the target object. The systems may perform an anomaly detection on the target object based on the recognition result of the target object.
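The kind of feature parameters a centerline yields can be sketched with a crude row-wise centerline (real systems would use proper skeletonization, and the model consuming the features is a trained neural network; everything here is an illustrative assumption):

```python
import numpy as np

def centerline_features(mask):
    """Extract simple feature parameters from an object's centerline.

    mask: (H,W) binary array of the target object. For each row that
    contains object pixels, the centerline point is the mean column of
    those pixels. Returns (centerline, features) where features holds
    the centerline length and a mean-curvature proxy -- the sort of
    parameters a recognition model could consume.
    """
    rows = [r for r in range(mask.shape[0]) if mask[r].any()]
    centerline = np.array([[r, np.flatnonzero(mask[r]).mean()] for r in rows])
    seg = np.diff(centerline, axis=0)
    length = np.linalg.norm(seg, axis=1).sum()
    if len(seg) > 1:
        # Curvature proxy: mean change of direction between segments.
        angles = np.arctan2(seg[:, 1], seg[:, 0])
        curvature = float(np.abs(np.diff(angles)).mean())
    else:
        curvature = 0.0
    return centerline, {"length": float(length), "curvature": curvature}
```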

AUTHENTICATION METHOD, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING AUTHENTICATION PROGRAM, AND INFORMATION PROCESSING DEVICE
20230070660 · 2023-03-09

An authentication method implemented by a computer, the authentication method including: extracting a feature amount for each of a plurality of feature points of a living body from imaged data of the living body; calculating a similarity between the feature amount of each feature point and a feature amount stored in a storage unit in association with the corresponding feature point; referring to the storage unit, which stores weight information indicating a weight to be applied to a given similarity, to acquire the weight information associated with each calculated similarity; and executing authentication processing on the living body based on new similarities generated by applying the acquired weight information to the calculated similarities.
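The weighting step amounts to a lookup table keyed by similarity value. This sketch uses piecewise bins (e.g. down-weighting unreliable mid-range similarities); the bin boundaries, weights, and threshold rule are assumptions for illustration, not the patent's stored weight information:

```python
import bisect

def score_probe(similarities, weight_bins, weights, threshold):
    """Score a biometric probe from per-feature-point similarities.

    similarities: similarity of each probe feature point to its
    enrolled counterpart (e.g. values in [0, 1]).
    weight_bins / weights: piecewise weight table; similarity s gets
    weights[i], where i is the bin that s falls into.
    Returns (weighted score, accept decision).
    """
    total = 0.0
    for s in similarities:
        i = bisect.bisect_right(weight_bins, s)   # look up the weight
        total += weights[i] * s                   # new, weighted similarity
    score = total / len(similarities)
    return score, score >= threshold
```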