G06V20/647

COMPUTER-BASED SYSTEMS CONFIGURED FOR TEXTURE WARPING-BASED ENCRYPTION AND METHODS OF USE THEREOF
20220414978 · 2022-12-29

Systems and methods for providing encryption and decryption involving texture warping, comprising: obtaining a visual input; obtaining a private key; generating an encrypted visual representation (visual representation A) based on the private key and the visual input; determining at least one 3D object configured so that the private key is derivable when the visual representation A is mapped to a digital model of the at least one 3D object; transmitting the visual representation A to a second computing device associated with a second user; transmitting a representation of the digital model of the at least one 3D object to the second computing device; and instructing the second computing device so that the second computing device is configured to map the visual representation A to the digital model generated based on the representation of the digital model of the at least one 3D object to extract the private key.
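The warping-based encrypt/decrypt round trip can be sketched as follows. This is a toy simplification: the "texture warp" is stood in for by a key-seeded pixel permutation, and the transmitted model representation is reduced to the shared key itself; the function names and the permutation scheme are illustrative assumptions, not the patented mapping.

```python
import random

def warp_encrypt(pixels, key):
    # Key-seeded permutation stands in for the model-specific texture warp
    # (hypothetical simplification of the claimed mapping).
    idx = list(range(len(pixels)))
    random.Random(key).shuffle(idx)
    return [pixels[i] for i in idx]

def warp_decrypt(warped, key):
    # Rebuild the same permutation from the key and invert it.
    idx = list(range(len(warped)))
    random.Random(key).shuffle(idx)
    out = [0] * len(warped)
    for dst, src in enumerate(idx):
        out[src] = warped[dst]
    return out

pixels = [10, 20, 30, 40, 50, 60]
encrypted = warp_encrypt(pixels, key=1234)
recovered = warp_decrypt(encrypted, key=1234)
```

In the claimed system the second device would not receive the key directly; it would recover it by mapping the encrypted representation onto the transmitted 3D model.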

TRACKING WITH REFERENCE TO A WORLD COORDINATE SYSTEM
20220414925 · 2022-12-29 ·

Examples described herein provide a method that includes capturing data about an environment. The method further includes generating a database of two-dimensional (2D) features and associated three-dimensional (3D) coordinates based at least in part on the data about the environment. The method further includes determining a position (x, y, z) and an orientation (pitch, roll, yaw) of a device within the environment based at least in part on the database of 2D features and associated 3D coordinates. The method further includes causing the device to display, on a display of the device, an augmented reality element at a predetermined location based at least in part on the position and the orientation of the device.
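The core data structure is the database of 2D features with associated 3D coordinates; pose then follows from the 2D-3D correspondences. A minimal sketch, assuming orientation is already known (identity) so that the device position reduces to averaging `world_point - observed_vector` over matches; in practice a full PnP solver would recover both position and orientation:

```python
def estimate_position(correspondences):
    """Toy position estimate from 2D-3D matches.

    correspondences: list of (world_xyz, vector_from_device_xyz) pairs,
    where the vector is expressed in a device frame assumed aligned with
    the world frame (identity orientation -- an illustrative assumption).
    """
    n = len(correspondences)
    return tuple(
        sum(w[i] - v[i] for w, v in correspondences) / n for i in range(3)
    )

# Two landmarks observed from the origin:
pos = estimate_position([
    ((1.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
    ((0.0, 2.0, 0.0), (0.0, 2.0, 0.0)),
])
```

With position and orientation known, the augmented reality element can be anchored at its predetermined world-frame location.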

DEPTH MEASUREMENT THROUGH DISPLAY

Disclosed herein is a display device including an illumination source for projecting an illumination pattern including a plurality of illumination features on a scene; an optical sensor for determining a first image including a plurality of reflection features; a translucent display, where the illumination source and the optical sensor are placed in a direction of propagation of the illumination pattern in front of the display; and an evaluation device configured for evaluating the first image by identifying and sorting the reflection features with respect to brightness, each reflection feature including a beam profile; determining a longitudinal coordinate for each reflection feature by analyzing its beam profile; unambiguously matching reflection features with corresponding illumination features using the longitudinal coordinate; classifying each reflection feature as a real feature or a false feature and rejecting the false features; and generating a depth map for the real features using the longitudinal coordinate.

MATCHING SUPPORT APPARATUS, MATCHING SUPPORT METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
20220398855 · 2022-12-15

A matching support apparatus 10 has a generation unit 11 that, when a feature region indicating a facial feature visible on the skin surface of a person targeted for matching is designated, using a reference face development image display region displayed on the screen of a display device 30 for displaying a reference face development image generated based on three-dimensional data of a head serving as a reference, generates feature information relating to the designated feature region; a matching unit 12 that matches the feature information against matching information in which a matching-use face development image and a matching-use feature region for each person registered in advance are associated with each other; and a selection unit 13 that selects a person to serve as a candidate based on the matching result.
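The matching unit's core step, scoring designated feature information against pre-registered per-person matching information, can be sketched with a simple set-overlap score. Jaccard similarity and the threshold are illustrative assumptions; the patent does not specify the scoring function:

```python
def match_candidates(feature_info, registry, threshold=0.8):
    """Toy matching unit: score feature overlap per registered person.

    feature_info: list of feature-region descriptors for the target person.
    registry: person -> pre-registered feature descriptors.
    """
    scores = {}
    for person, ref in registry.items():
        a, b = set(feature_info), set(ref)
        # Jaccard similarity as a stand-in matching score.
        scores[person] = len(a & b) / max(len(a | b), 1)
    best = max(scores, key=scores.get)
    # Selection unit: return a candidate only above the threshold.
    return best if scores[best] >= threshold else None

registry = {"A": ["mole_cheek", "scar_brow"], "B": ["tattoo_neck"]}
candidate = match_candidates(["mole_cheek", "scar_brow"], registry)
```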

System and method for creating a decision support material indicating damage to an anatomical joint

In accordance with one or more embodiments herein, a system for creating a decision support material indicating damage to at least a part of an anatomical joint of a patient, wherein the created decision support material comprises one or more damage images, is provided. The system comprises a storage medium and at least one processor, wherein the at least one processor is configured to i) receive a series of radiology images of the at least part of the anatomical joint from the storage medium; ii) obtain a three-dimensional image representation of the at least part of the anatomical joint which is based on at least a part of said series of radiology images, by generating said three-dimensional image representation in an image segmentation process based on said series of radiology images, or receiving said three-dimensional image representation from a storage medium; iii) identify tissue parts of the anatomical joint in at least one of at least a part of said series of radiology images and/or the three-dimensional image representation using image analysis; iv) determine damage to the identified tissue parts in the anatomical joint by analyzing at least one of at least a part of said series of radiology images and/or the three-dimensional image representation of the at least part of the anatomical joint; v) determine suitable sizes and suitable implanting positions for one or more graft plugs based on the determined damage; vi) mark damage to the anatomical joint and suitable sizes and implanting positions for the one or more graft plugs in the obtained three-dimensional image representation of the anatomical joint; and vii) generate a decision support material, where the determined damage to the at least part of the anatomical joint and the suitable sizes and implanting positions for the one or more graft plugs are marked in at least one of the one or more damage images of the decision support material, and at least one of the one or more damage images is generated based on the obtained three-dimensional image representation of the at least part of the anatomical joint.
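Step v), selecting graft plug sizes from the determined damage, can be sketched as a size-table lookup. The plug diameters, the margin rule, and the function are purely illustrative assumptions for exposition, not clinical guidance and not the patented sizing method:

```python
STANDARD_PLUG_MM = [6, 8, 10, 12]  # hypothetical available plug diameters

def pick_plug(damage_diameter_mm, margin_mm=1.0):
    """Toy sizing rule: smallest standard plug covering damage + margin."""
    needed = damage_diameter_mm + margin_mm
    for diameter in STANDARD_PLUG_MM:
        if diameter >= needed:
            return diameter
    return None  # damage too large for a single standard plug

size = pick_plug(6.5)  # needs 7.5 mm of coverage
```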

System and method for classifying an object using a starburst algorithm
11526706 · 2022-12-13

A system for classifying an object may include one or more processors, a sensor and a memory device. The memory device may include a data collection module, a starburst module, and an object classifying module. The modules have instructions that when executed by the one or more processors cause the one or more processors to obtain three dimensional point cloud data from the sensor, identify at least one cluster of points representing the object within the three dimensional point cloud data, identify a center point of the at least one cluster of points, project a plurality of rays from the center point to points of the at least one cluster of points to generate a shape, compare the shape to a plurality of candidate shapes, and classify the object when the shape matches at least one of the plurality of candidate shapes.
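The starburst steps (find the cluster center, project rays outward, record the shape they trace, compare against candidate shapes) can be sketched in 2D for brevity; the patent operates on 3D point cloud data, and the ray count, binning, and tolerance here are illustrative assumptions:

```python
import math

def starburst_signature(points, n_rays=8):
    """Record, per ray direction from the centroid, the farthest point
    distance in that angular bin -- a coarse shape signature."""
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    signature = [0.0] * n_rays
    for x, y in points:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        ray = int(angle / (2 * math.pi) * n_rays) % n_rays
        signature[ray] = max(signature[ray], math.hypot(x - cx, y - cy))
    return signature

def classify(points, candidates, tol=0.25):
    """Classify the cluster when its signature matches a candidate shape."""
    sig = starburst_signature(points)
    for label, ref in candidates.items():
        if all(abs(a - b) <= tol for a, b in zip(sig, ref)):
            return label
    return "unknown"

diamond = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
label = classify(diamond, {"diamond": [1, 0, 1, 0, 1, 0, 1, 0]})
```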

Combined 2D and 3D processing of images or spaces

2D and 3D data of a scene are linked by associating points in the 3D data with corresponding points in multiple different 2D images within the 2D data. Labels assigned to points in either data can be propagated to the other data. Labels propagated to a point in the 3D data are aggregated, and the labels ranked highest are kept and propagated back to the 2D images. 3D data including labels produced in this manner allow partially obscured objects in certain views to be more accurately identified. Thus, an object can be manipulated in all 2D views of the 2D data in which the object is at least partially visible, in order to digitally remove, alter, or replace it.
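The aggregation step, where labels propagated onto a 3D point from multiple 2D views are ranked and only the highest-ranked kept, can be sketched with a frequency count (the ranking criterion here is an assumption; the abstract does not specify it):

```python
from collections import Counter

def aggregate_labels(propagated, keep_top=1):
    """Keep the highest-ranked labels pushed onto one 3D point
    from multiple 2D views, for propagation back to the images."""
    counts = Counter(propagated)
    return [label for label, _ in counts.most_common(keep_top)]

# Per-view labels for one 3D point; the "table" vote came from a view
# where the object was partially obscured.
views = ["chair", "chair", "table", "chair"]
kept = aggregate_labels(views)
```

Propagating `kept` back to every 2D view lets the partially obscured object be identified, and hence removed, altered, or replaced, consistently across views.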

Three-dimensional pose estimation

Devices and techniques are generally described for estimating three-dimensional pose data. In some examples, a first machine learning network may generate first three-dimensional (3D) data representing input 2D data. In various examples, a first 2D projection of the first 3D data may be generated. A determination may be made that the first 2D projection conforms to a distribution of natural 2D data. A second machine learning network may generate parameters of a 3D model based at least in part on the input 2D data and based at least in part on the first 3D data. In some examples, second 3D data may be generated using the parameters of the 3D model.
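The "first 2D projection of the first 3D data" step can be sketched with a pinhole camera model, a standard choice but an assumption here, since the abstract does not name the projection:

```python
def project_2d(points_3d, focal=1.0):
    """Pinhole projection of 3D points (x, y, z) to image-plane
    coordinates (f*x/z, f*y/z); z is depth along the optical axis."""
    return [(focal * x / z, focal * y / z) for x, y, z in points_3d]

projection = project_2d([(2.0, 4.0, 2.0), (1.0, 1.0, 1.0)])
```

The projection would then be tested against a distribution of natural 2D data (e.g. by a discriminator network) before the second network fits the 3D model parameters.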

METHOD, APPARATUS AND DEVICE FOR RECOGNIZING THREE-DIMENSIONAL GESTURE BASED ON MARK POINTS
20220392264 · 2022-12-08

A method and apparatus for recognizing a three-dimensional gesture based on mark points, a device and a storage medium are provided. An image collected by each camera is acquired; for each image, an identifier and a corresponding two-dimensional coordinate position of each mark point in the image are determined; a three-dimensional coordinate position of each mark point in a coordinate system of a corresponding camera is determined according to the two-dimensional coordinate position of each mark point and a calibration parameter of the corresponding camera; the three-dimensional coordinate position is converted to an initial three-dimensional coordinate position in a coordinate system of a designated space; and a target three-dimensional coordinate position of each mark point in the coordinate system of the designated space is determined according to the initial three-dimensional coordinate position of each mark point in each image and the identifier of the corresponding mark point.
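The last two steps, converting a camera-frame position into the designated space and fusing per-camera positions of the same identifier into one target position, can be sketched as a rigid transform plus an average. The rotation/translation representation and the averaging rule are illustrative assumptions:

```python
def camera_to_world(p_cam, rotation, translation):
    """p_world = R @ p_cam + t, with R given as three row tuples
    (standing in for the per-camera calibration parameters)."""
    return tuple(
        sum(rotation[i][j] * p_cam[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def fuse_mark_point(per_camera_positions):
    """Average the designated-space positions of one mark point
    (matched across cameras by its identifier) into a target position."""
    n = len(per_camera_positions)
    return tuple(sum(p[i] for p in per_camera_positions) / n for i in range(3))

IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
p_world = camera_to_world((0.0, 0.0, 0.0), IDENTITY, (1.0, 0.0, 0.0))
target = fuse_mark_point([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```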

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND FLIGHT OBJECT

An object is to enable high-speed autonomous flight of a flight object. A three-dimensional real-time observation result is generated on the basis of self-position estimation information and three-dimensional distance measurement information. A prior map corresponding to the three-dimensional real-time observation result is acquired. The three-dimensional real-time observation result and the prior map are aligned. After the alignment, the three-dimensional real-time observation result is expanded on the basis of the prior map. A flight route is set on the basis of the expanded three-dimensional real-time observation result. In a flight object such as a drone, a relatively long flight route can thus be accurately calculated at once in a global behavior plan, enabling high-speed autonomous flight.
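The align-then-expand step can be sketched on a 2D occupancy grid: shift the real-time observation into the prior map's frame, then extend it with prior-map cells the sensors have not yet observed. The grid representation and the known alignment offset are illustrative assumptions; in practice the alignment itself would be estimated (e.g. by scan matching) in 3D:

```python
def expand_observation(observed, prior_map, offset):
    """Align the real-time observation to the prior map by a known
    (x, y) offset, then expand it with unobserved prior-map cells."""
    ox, oy = offset
    aligned = {(x + ox, y + oy) for x, y in observed}
    return aligned | prior_map

observed = {(0, 0), (0, 1)}          # cells seen by onboard sensors
prior = {(1, 0), (5, 5), (6, 5)}     # cells known from the prior map
expanded = expand_observation(observed, prior, offset=(1, 0))
```

Planning over `expanded` rather than `observed` alone is what lets a longer flight route be computed at once in the global behavior plan.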