G06V20/647

Real-time virtual try-on item modeling
11593868 · 2023-02-28

A method includes generating, based on user images, a user 3-D model. The method proceeds with obtaining, via a user interface, a request to graphically represent an accessory onto a user graphical representation. This user graphical representation is generated using the user 3-D model. In response to this request, an accessory 3-D model is obtained. Further, the method includes positioning, via the user interface and based on parameters of the user 3-D model and of the accessory 3-D model, an accessory graphical representation onto the user graphical representation. The method further includes updating, in response to detecting user movement, the user 3-D model and the accessory 3-D model, and presenting, via the user interface and based on these updated 3-D models, the accessory graphical representation and the user graphical representation in accordance with the user movement.
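The update-and-present step above can be sketched as a small pose-propagation loop: when user movement is detected, the accessory's world pose is re-derived from a tracked anchor on the user model. The anchor/offset parameterization and all names here are illustrative assumptions, not the patent's actual data model.

```python
import numpy as np

def place_accessory(anchor_position, anchor_rotation, accessory_offset):
    """Position the accessory relative to a tracked user-model anchor.

    anchor_rotation is a 3x3 rotation matrix of the anchor joint;
    accessory_offset is the accessory's rest offset in the anchor frame.
    """
    return anchor_position + anchor_rotation @ accessory_offset

def on_user_movement(user_model, accessory_model):
    """Update the accessory's world position after a user-model update."""
    accessory_model["world_position"] = place_accessory(
        user_model["anchor_position"],
        user_model["anchor_rotation"],
        accessory_model["offset"])
    return accessory_model

# Example: a watch anchored 5 cm in front of a wrist joint at (0, 1.5, 0).
user = {"anchor_position": np.array([0.0, 1.5, 0.0]),
        "anchor_rotation": np.eye(3)}
watch = {"offset": np.array([0.0, 0.0, 0.05]), "world_position": None}
on_user_movement(user, watch)
```

In a real system the anchor pose would come from body tracking each frame; the sketch only shows how both models feed one presented pose.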

Object detection in vehicles using cross-modality sensors

A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
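The two-stage scheme above can be sketched as follows, assuming both sensors have been resampled onto a common 2-D grid. The threshold "detector" and averaging fusion are stand-ins for the controller's actual processing, chosen only to show the coarse-then-selective structure.

```python
import numpy as np

def downsample(frame, factor=2):
    """Average-pool a frame by `factor` to get a lower-resolution copy."""
    h, w = frame.shape
    return frame[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def detect(fused, threshold=0.5):
    """Stand-in detector: cells whose fused response exceeds a threshold."""
    return np.argwhere(fused > threshold)

def two_stage_detect(first, second, factor=2):
    # Stage 1: identify a first set of objects on down-sampled, fused data.
    fused = (downsample(first, factor) + downsample(second, factor)) / 2
    coarse = detect(fused)
    # Stage 2: selectively process full-resolution data only near coarse hits.
    refined = []
    for r, c in coarse:
        patch = first[r*factor:(r+1)*factor, c*factor:(c+1)*factor] \
              + second[r*factor:(r+1)*factor, c*factor:(c+1)*factor]
        refined.append(((r, c), patch.max() / 2))
    return coarse, refined
```

The design point is that the expensive full-resolution pass touches only the regions flagged by the cheap low-resolution pass.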

DETERMINING THE POSITION OF AN OBJECT IN A SCENE
20180005049 · 2018-01-04

A method of determining the position of an object in a scene comprises receiving captured images of the scene, each image being captured from a different field of view of the scene. A portion of the scene with a volume comprises a detectable object, the volume is divided into volume portions, and each volume portion is within the captured field of view of at least two of the captured images, so that an image of each volume portion appears in at least two of the captured images. For each volume portion, in each of the captured images within which an image of that volume portion appears, it is detected whether or not an image of one of the detectable objects in the scene is positioned within a distance of the position of the image of that volume portion. A correspondence between the images of the detectable objects detected in the at least two of the images is established, the correspondence indicating that those detected images correspond to a single detectable object in the scene, and the position in the scene of that volume portion is established as a position in the scene of the single detectable object.
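The procedure above amounts to a voxel-voting scheme: each volume portion's centre is projected into every camera that sees it, and a portion whose projection lies near a detected object image in at least two views is taken as the object's 3-D position. The pinhole camera model and the `(R, t, f)` parameterization below are simplifying assumptions for illustration.

```python
import numpy as np

def project(point, camera):
    """Pinhole projection of a 3-D point; camera = (R, t, f)."""
    R, t, f = camera
    x, y, z = R @ point + t
    return np.array([f * x / z, f * y / z])

def locate(voxel_centres, cameras, detections, max_dist=1.0, min_views=2):
    """Return voxel centres supported by detections in >= min_views views."""
    positions = []
    for v in voxel_centres:
        views = 0
        for cam, dets in zip(cameras, detections):
            p = project(v, cam)
            # Is a detected object image within max_dist of this projection?
            if any(np.linalg.norm(p - d) <= max_dist for d in dets):
                views += 1
        if views >= min_views:
            positions.append(v)
    return positions
```

Requiring at least two supporting views is what establishes the cross-image correspondence: the same volume portion explains detections in multiple cameras at once.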

Augmenting a Moveable Entity with a Hologram

In embodiments of augmenting a moveable entity with a hologram, an alternate reality device includes a tracking system that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears associated with the entity as the entity moves in the environment.

Object modeling using light projection
11710275 · 2023-07-25

A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.

A METHOD AND APPARATUS FOR ENCODING AND DECODING VOLUMETRIC CONTENT IN AND FROM A DATA STREAM
20230239451 · 2023-07-27

Methods and apparatus for encoding and decoding a volumetric scene are disclosed. A set of attribute and geometry patches is obtained by projecting samples of the volumetric scene onto the patches according to projection parameters. If a geometry patch is comparable to a planar layer located at a constant depth according to the projection parameters, only the corresponding attribute patch is packed in an attribute atlas image and the depth value is encoded in metadata. Otherwise, both the attribute and geometry patches are packed in an atlas. At decoding, if the metadata for an attribute patch indicates that its geometry may be determined from the projection parameters and a constant depth, the attributes are inverse projected onto a planar layer. Otherwise, attributes are inverse projected according to the associated geometry patch.
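The encoder-side packing decision above can be sketched as a simple branch: when a geometry patch is close enough to a constant-depth plane, the geometry is elided and only a depth value goes into metadata. The tolerance, the atlas/metadata containers, and all names are assumptions for illustration.

```python
import numpy as np

def pack_patch(attribute, geometry, atlas, metadata, tol=0.5):
    """Pack one attribute/geometry patch pair, eliding near-planar geometry."""
    depth = geometry.mean()
    if np.abs(geometry - depth).max() <= tol:
        # Geometry is comparable to a planar layer at constant depth:
        # pack the attribute patch only, and record the depth in metadata.
        atlas["attributes"].append(attribute)
        metadata.append({"constant_depth": float(depth)})
    else:
        # Otherwise pack both patches in the atlas.
        atlas["attributes"].append(attribute)
        atlas["geometry"].append(geometry)
        metadata.append({"constant_depth": None})
    return atlas, metadata
```

The decoder mirrors the branch: a `constant_depth` entry means inverse projection onto a plane; `None` means using the packed geometry patch.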

ADAPTIVE SENSING BASED ON DEPTH

A microscope for adaptive sensing may comprise an illumination assembly, an image capture device configured to collect light from a sample illuminated by the assembly, and a processor. The processor may be configured to execute instructions which cause the microscope to capture, using the image capture device, an initial image set of the sample, identify, in response to the initial image set, an attribute of the sample, determine, in response to identifying the attribute, a three-dimensional (3D) process for sensing the sample, and generate, using the determined 3D process, an output image set comprising more than one focal plane. Various other methods, systems, and computer-readable media are also disclosed.
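The adaptive flow above (initial capture, attribute identification, 3-D process selection) can be sketched as below. The thickness heuristic and the fixed focal-plane spacing are illustrative assumptions, not the disclosed attribute analysis.

```python
def choose_3d_process(initial_images):
    """Map an initial image set to a multi-focal-plane acquisition plan."""
    # Stand-in attribute: how many initial images contain in-focus content,
    # used as a crude proxy for sample thickness.
    thickness_estimate = sum(1 for img in initial_images if img["in_focus"])
    n_planes = max(2, thickness_estimate)   # output spans more than one plane
    return {"focal_planes": [i * 1.0 for i in range(n_planes)]}

plan = choose_3d_process([{"in_focus": True}, {"in_focus": True},
                          {"in_focus": False}])
```

The point is only the control flow: the initial image set determines the attribute, and the attribute determines the 3-D sensing process.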

OBJECT SEARCH DEVICE AND OBJECT SEARCH METHOD

An object of the invention is to provide an object search device capable of expressing shape and irregularity information as features using only images, and of performing an accurate search for an object that is characteristic in shape or irregularity.

The object search device includes: an image feature extraction unit that is configured with a first neural network, and is configured to input an image to extract an image feature; a three-dimensional data feature extraction unit that is configured with a second neural network, and is configured to input three-dimensional data to extract a three-dimensional data feature; a learning unit that is configured to extract an image feature and a three-dimensional data feature from an image and three-dimensional data of an object obtained from the same individual, respectively, and to update an image feature extraction parameter so as to reduce a difference between the image feature and the three-dimensional data feature; and a search unit that is configured to extract image features of a query image and a gallery image of the object by the image feature extraction unit using the updated image feature extraction parameter, and to calculate a similarity between the image features of both images to search for the object.
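The learning and search units above can be sketched in numpy. Linear maps stand in for the two neural networks, a single gradient step on the squared feature difference stands in for the learning unit, and cosine similarity stands in for the search unit's similarity; all of this is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
W_img = rng.normal(size=(4, 8))   # image feature extractor (learned)
W_3d = rng.normal(size=(4, 6))    # 3-D data feature extractor (fixed here)

def train_step(image, points, lr=0.1):
    """Update W_img so the image feature approaches the 3-D data feature."""
    global W_img
    f_img, f_3d = W_img @ image, W_3d @ points
    grad = np.outer(f_img - f_3d, image)  # d/dW of 0.5*||f_img - f_3d||^2
    W_img -= lr * grad

def search(query_image, gallery_images):
    """Rank gallery images by cosine similarity of their image features."""
    q = W_img @ query_image
    sims = []
    for g in gallery_images:
        f = W_img @ g
        sims.append(q @ f / (np.linalg.norm(q) * np.linalg.norm(f)))
    return int(np.argmax(sims))
```

Only the image extractor is needed at search time; the 3-D network shapes the feature space during training and then drops out of the loop.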

DIMENSION MEASUREMENT METHOD AND DIMENSION MEASUREMENT DEVICE

A dimension measurement method includes: extracting a plurality of lines from a plurality of images generated by shooting a target area from a plurality of viewpoints, and generating a line segment model, which is a three-dimensional model of the target area expressed using the plurality of lines; calculating a dimension of a particular part inside the target area using the line segment model; and outputting the calculated dimension.
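The calculation step can be sketched minimally, assuming the line segment model is a list of 3-D endpoint pairs and the "particular part" is identified by a segment index whose length is the dimension of interest; both assumptions are illustrative.

```python
import numpy as np

def segment_length(seg):
    """Euclidean length of one 3-D line segment (an endpoint pair)."""
    a, b = np.asarray(seg[0], dtype=float), np.asarray(seg[1], dtype=float)
    return float(np.linalg.norm(b - a))

def measure(model, part):
    """Dimension of `part`: the length of the segment with that index."""
    return segment_length(model[part])

# Example: a 2 m wide, 3 m tall part expressed as two segments.
model = [([0, 0, 0], [2, 0, 0]), ([0, 0, 0], [0, 0, 3])]
width = measure(model, 0)
height = measure(model, 1)
```

Working on segments rather than a dense point cloud is what keeps the measurement tied to the structural edges of the target area.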

SHOOTING METHOD, SHOOTING INSTRUCTION METHOD, SHOOTING DEVICE, AND SHOOTING INSTRUCTION DEVICE

A shooting method executed by a shooting device includes: shooting first images of a target space; generating a first three-dimensional point cloud of the target space, based on the first images and a first shooting position and a first shooting orientation of each of the first images; and determining a first region of the target space for which generating a second three-dimensional point cloud denser than the first three-dimensional point cloud is difficult, using the first three-dimensional point cloud and without generating the second three-dimensional point cloud. The determining includes generating a mesh using the first three-dimensional point cloud, and determining, as the first region, the region of the target space other than a second region for which the mesh is generated.
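The region determination above can be sketched with a coarse occupancy grid standing in for the mesh: cells the sparse cloud cannot mesh (too few points) are flagged as the difficult first region, without ever computing the dense cloud. The grid resolution and point threshold are assumptions for illustration.

```python
import numpy as np

def difficult_regions(points, grid_size=4, min_points=3):
    """Return grid cells of the unit square with fewer than min_points."""
    counts = np.zeros((grid_size, grid_size), dtype=int)
    for x, y in points:
        i = min(int(x * grid_size), grid_size - 1)
        j = min(int(y * grid_size), grid_size - 1)
        counts[i, j] += 1
    # Cells with enough points are meshable (the "second region");
    # everything else is the difficult first region.
    return np.argwhere(counts < min_points)
```

In the described method, the flagged region would then drive a shooting instruction to re-capture that part of the target space.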