
METHOD FOR AUTOMATICALLY RECONSTITUTING THE REINFORCING ARCHITECTURE OF A COMPOSITE MATERIAL

A method for automatically reconstituting the architecture, along a reinforcing axis, of the reinforcement of a composite material includes: acquiring images of the reinforcement of the composite material, each image being acquired along a section plane perpendicular to the reinforcing axis; for each acquired image, detecting, using a neural network, the barycentre and/or the circumference of each section of a reinforcing thread; for at least one acquired reference image, assigning a tag corresponding to a reinforcing thread to each detected barycentre or circumference; for each other acquired image, assigning, to each detected barycentre and/or circumference, the tag of the corresponding barycentre in the acquired reference image; and reconstituting the architecture of each reinforcing thread from each detected barycentre and/or circumference bearing the tag of that reinforcing thread, together with the position on the reinforcing axis associated with the acquired image in which the barycentre and/or the circumference was detected.
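
The slice-by-slice tagging described above can be sketched as nearest-neighbour propagation of the reference tags. This is an illustrative reading only: the abstract does not specify the matching rule, and all function and variable names here are assumptions.

```python
import numpy as np

def propagate_tags(reference_centres, reference_tags, slice_centres):
    # Give each barycentre in a new slice the tag of the nearest
    # barycentre in the reference image (hypothetical matching rule).
    tags = []
    for centre in slice_centres:
        distances = np.linalg.norm(reference_centres - centre, axis=1)
        tags.append(reference_tags[int(np.argmin(distances))])
    return tags

def reconstitute_threads(slices, positions, reference_tags):
    # slices:    list of (N, 2) arrays of barycentres, one per section plane
    # positions: position of each section plane along the reinforcing axis
    # Returns one polyline of (z, x, y) points per tagged reinforcing thread.
    reference = slices[0]
    threads = {tag: [] for tag in reference_tags}
    for centres, z in zip(slices, positions):
        for centre, tag in zip(centres,
                               propagate_tags(reference, reference_tags, centres)):
            threads[tag].append((z, float(centre[0]), float(centre[1])))
    return threads
```

Swapping in the patent's neural-network detections for the barycentres would not change this bookkeeping; only the matching rule between slices is a stand-in.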

Sensor alignment
11592539 · 2023-02-28

Described herein are systems, methods, and non-transitory computer readable media for performing an alignment between a first vehicle sensor and a second vehicle sensor. Two-dimensional (2D) data indicative of a scene within an environment being traversed by a vehicle is captured by the first vehicle sensor such as a camera or a collection of multiple cameras within a sensor assembly. A three-dimensional (3D) representation of the scene is constructed using the 2D data. 3D point cloud data also indicative of the scene is captured by the second vehicle sensor, which may be a LiDAR. A 3D point cloud representation of the scene is constructed based on the 3D point cloud data. A rigid transformation is determined between the 3D representation of the scene and the 3D point cloud representation of the scene and the alignment between the sensors is performed based at least in part on the determined rigid transformation.
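
The abstract does not say how the rigid transformation between the two 3D representations is determined. A common closed-form choice for matched point correspondences is the Kabsch/Umeyama estimate, sketched here as a stand-in:

```python
import numpy as np

def rigid_transform(src, dst):
    # Estimate rotation R and translation t such that R @ src_i + t ≈ dst_i
    # for matched point pairs (Kabsch closed form via SVD).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the correspondences between a camera-derived reconstruction and a LiDAR point cloud would first have to be established (e.g. by an ICP-style loop), which this sketch leaves out.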

AUGMENTED REALITY CONTENT RENDERING VIA ALBEDO MODELS, SYSTEMS AND METHODS
20180005453 · 2018-01-04

Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
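
Under a simple Lambertian assumption (my reading, not necessarily the patent's exact model), the estimated shading can be recovered by dividing the observed appearance of the known object by its a priori albedo, then applied multiplicatively to the AR content:

```python
import numpy as np

def estimated_shading(observed, albedo, eps=1e-6):
    # Per-pixel environmental shading inferred from a known object:
    # observed ≈ albedo * shading, so shading ≈ observed / albedo.
    return observed / np.maximum(albedo, eps)

def shade_ar_content(ar_albedo, shading):
    # Apply the estimated scene shading to AR content so its lighting
    # matches the environment; clip to the displayable range.
    return np.clip(ar_albedo * shading, 0.0, 1.0)
```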

THREE-DIMENSIONAL OBJECT SCANNING FEEDBACK

Examples of providing feedback regarding a scan of a three-dimensional object are described. In one example, a method of computer modeling a three-dimensional object includes computer-tracking a three-dimensional pose of a scanning device relative to the three-dimensional object as the three-dimensional pose of the scanning device changes to measure different contours of the three-dimensional object from different vantage points, and assessing a sufficiency of contour measurements from one or more of the different vantage points based on measurements received from the scanning device. The example method further includes providing haptic feedback, via a haptic output device, indicating the sufficiency of contour measurements corresponding to a current three-dimensional pose of the scanning device.
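
One illustrative way to turn per-vantage measurement counts into a haptic intensity; the threshold and the linear mapping are assumptions, not taken from the patent:

```python
def assess_sufficiency(hits_per_vantage, threshold=50):
    # Map each vantage point's measurement count to a haptic intensity
    # in [0, 1]: 0.0 means coverage is sufficient (no buzz),
    # 1.0 means the vantage point has no measurements yet.
    return {vantage: max(0.0, 1.0 - count / threshold)
            for vantage, count in hits_per_vantage.items()}
```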

SYSTEM AND METHOD FOR PLACING A CHARACTER ANIMATION AT A LOCATION IN A GAME ENVIRONMENT
20180005426 · 2018-01-04

A method for execution by a processor of a computer system for computer gaming. The method comprises maintaining a game environment; receiving a request to execute an animation routine during gameplay; attempting to identify a location in the game environment having a surrounding area that is free to host the requested animation routine; and, if the attempt is successful, carrying out the animation routine at the identified location in the game environment.
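
The "attempt to identify a location" step could, for instance, scan an occupancy grid for a fully free square matching the animation's footprint. This is a minimal sketch under that assumption; the patent does not prescribe a search strategy or grid representation:

```python
def find_free_location(occupancy, footprint):
    # Scan a 2-D occupancy grid (0 = free, 1 = occupied) for the first
    # footprint x footprint square that is entirely free. Returns the
    # (row, col) of the square's top-left cell, or None if the attempt
    # fails, mirroring the method's success/failure branch.
    rows, cols = len(occupancy), len(occupancy[0])
    for r in range(rows - footprint + 1):
        for c in range(cols - footprint + 1):
            if all(occupancy[r + i][c + j] == 0
                   for i in range(footprint) for j in range(footprint)):
                return (r, c)
    return None
```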

Image Processing Apparatus, Image Processing Method, and Image Communication System

Methods and apparatus provide for: capturing an image of an object, which includes a face of a person wearing an optical display apparatus by which to observe a stereoscopic image that contains a first parallax image and a second parallax image obtained when the object in a three-dimensional (3D) space is viewed from different viewpoints; identifying the optical display apparatus included in the image of the object; and generating an image of the face of the person that does not include the optical display apparatus by excluding the identified optical display apparatus, and instead by adding features of the face of the person to a region in which the identified optical display apparatus is excluded.

Contextual local image recognition dataset
11710279 · 2023-07-25

A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.

Image processing device, image processing method, and surgical navigation system
11707340 · 2023-07-25

Provided is an image processing device including a matching unit that performs matching processing between a predetermined pattern on a surface of a 3D model of a biological tissue including an operating site generated on the basis of a preoperative diagnosis image and a predetermined pattern on a surface of the biological tissue included in a captured image during surgery, a shift amount estimation unit that estimates an amount of deformation from a preoperative state of the biological tissue on the basis of a result of the matching processing and information regarding a three-dimensional position of a photographing region which is a region photographed during surgery on the surface of the biological tissue, and a 3D model update unit that updates the 3D model generated before surgery on the basis of the estimated amount of deformation of the biological tissue.
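
A minimal stand-in for the shift-amount estimation: given surface-pattern points matched between the preoperative model and the intraoperative image, take the per-point displacement field and its mean magnitude, then shift the model vertices. The actual patent pipeline (matching and deformation modeling) is more involved; names here are illustrative:

```python
import numpy as np

def estimate_deformation(preop_points, intraop_points):
    # Per-point displacement between matched pattern points, plus the
    # mean displacement magnitude as a scalar "amount of deformation".
    displacements = intraop_points - preop_points
    amount = float(np.linalg.norm(displacements, axis=1).mean())
    return displacements, amount

def update_model(preop_vertices, displacements):
    # Update the preoperative 3D model by applying the estimated shifts.
    return preop_vertices + displacements
```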

Augmented reality content rendering via Albedo models, systems and methods
11710282 · 2023-07-25

Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.

METHOD FOR CO-SEGMENTATING THREE-DIMENSIONAL MODELS REPRESENTED BY SPARSE AND LOW-RANK FEATURE
20180012361 · 2018-01-11

Presently disclosed is a method for co-segmenting three-dimensional models represented by a sparse and low-rank feature, comprising: pre-segmenting each three-dimensional model of a three-dimensional model class to obtain three-dimensional model patches for each three-dimensional model; constructing a histogram for the three-dimensional model patches of each three-dimensional model to obtain a patch feature vector for each three-dimensional model; performing a sparse and low-rank representation on the patch feature vector of each three-dimensional model to obtain a representation coefficient and a representation error for each three-dimensional model; determining a confident representation coefficient for each three-dimensional model according to its representation coefficient and representation error; and clustering the confident representation coefficients of the three-dimensional models to co-segment each three-dimensional model respectively.
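
The histogram step above can be sketched as a normalised per-patch descriptor over some scalar surface quantity (e.g. per-face curvature); the abstract fixes neither the quantity nor the bin count, so both are assumptions here:

```python
import numpy as np

def patch_feature_histogram(patch_values, bins=8, value_range=(0.0, 1.0)):
    # Build a normalised histogram descriptor for one pre-segmented patch
    # from a scalar quantity sampled over the patch. The resulting vectors
    # are what the sparse and low-rank representation would operate on.
    hist, _ = np.histogram(patch_values, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)
```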