G06T3/06

DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA

Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model, via a processor, to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
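The comparison described above can be illustrated with a minimal sketch. Here both representations are assumed to be texture arrays already mapped onto the same object model (the array shapes, threshold, and function name are hypothetical, not from the patent):

```python
# Minimal sketch: flag texels where the evaluation representation
# differs from the reference representation beyond a threshold.
import numpy as np

def condition_map(reference: np.ndarray, evaluation: np.ndarray,
                  threshold: float = 0.2) -> np.ndarray:
    """Return a boolean per-texel map of differences between the
    reference and evaluation object model representations."""
    diff = np.abs(evaluation.astype(float) - reference.astype(float))
    return diff.mean(axis=-1) > threshold  # average over color channels

reference = np.zeros((4, 4, 3))
evaluation = reference.copy()
evaluation[1, 2] = [0.9, 0.1, 0.1]  # a difference appears in the evaluation pass
damage = condition_map(reference, evaluation)
```

The resulting boolean map is one plausible form of "object condition information" that could be overlaid on a graphical representation of the object model.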

DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA

A plurality of images may be analyzed to determine an object model. The object model may have a plurality of components, and each of the images may correspond with one or more of the components. Component condition information may be determined for one or more of the components based on the images. The component condition information may indicate damage incurred by the object portion corresponding with the component.
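As a rough sketch of per-component aggregation, the per-image observations can be grouped by component and reduced to one condition value each. The pair structure and max-reduction are illustrative assumptions, not the patented method:

```python
# Minimal sketch: aggregate per-image damage scores into
# per-component condition information (structure is hypothetical).
from collections import defaultdict

def component_condition(image_scores):
    """image_scores: iterable of (component_name, damage_score) pairs,
    one per image that depicts the component. Returns a dict mapping
    each component to its worst observed damage score, so damage seen
    in any single view is not averaged away."""
    per_component = defaultdict(list)
    for component, score in image_scores:
        per_component[component].append(score)
    return {c: max(scores) for c, scores in per_component.items()}

info = component_condition([("door", 0.1), ("door", 0.7), ("bumper", 0.0)])
```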

DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA

One or more images of an object, each from a respective viewpoint, may be captured via a camera at a mobile computing device. The images may be compared to reference data to identify a difference between the images and the reference data. Image capture guidance may be provided on a display screen for capturing another one or more images of the object that include the identified difference.
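A toy stand-in for such guidance: compare a captured image to reference data and, if a differing region is found, suggest where to aim the next capture. The threshold and the wording of the hint are invented for illustration:

```python
# Minimal sketch of difference-driven image capture guidance.
import numpy as np

def capture_guidance(image, reference, threshold=0.3):
    """Return a guidance string pointing at the differing region,
    or None when image and reference match within the threshold."""
    diff = np.abs(image.astype(float) - reference.astype(float)) > threshold
    if not diff.any():
        return None
    ys, xs = np.nonzero(diff)
    cy, cx = ys.mean(), xs.mean()          # centroid of the difference
    h, w = diff.shape
    vert = "up" if cy < h / 2 else "down"
    horiz = "left" if cx < w / 2 else "right"
    return f"Re-center on the {vert}-{horiz} region at ({cy:.0f}, {cx:.0f})"

reference = np.zeros((8, 8))
image = reference.copy()
image[1, 1] = 1.0
hint = capture_guidance(image, reference)
```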

OBJECT DAMAGE AGGREGATION

Images of an object may be analyzed to determine individual damage maps of the object. Each damage map may represent damage to an object depicted in one of the images. The damage may be represented in a standard view of the object. An aggregated damage map for the object may be determined based on the individual damage maps.
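Once the individual damage maps are warped into a shared standard view, aggregation can be as simple as a per-pixel reduction. A per-pixel maximum (one reasonable choice; the patent does not commit to it here) keeps damage that appears in any single view:

```python
# Minimal sketch: aggregate per-image damage maps, all already in
# the standard view, via a per-pixel maximum.
import numpy as np

def aggregate_damage(maps):
    """Combine individual damage maps into one aggregated damage map."""
    return np.maximum.reduce([np.asarray(m, dtype=float) for m in maps])

m1 = [[0, 1], [0, 0]]   # damage seen in image 1
m2 = [[0, 0], [1, 0]]   # damage seen in image 2
agg = aggregate_damage([m1, m2])
```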

Customizing virtual assets
10719910 · 2020-07-21

Customizing virtual assets is disclosed, including: transforming each of a plurality of initially identical copies of a virtual asset or a portion thereof to isolate a feature of the virtual asset or portion thereof; and enabling the isolated feature to be changed by a user in at least one of the transformed copies. In some embodiments, customizing virtual assets includes: receiving a three-dimensional model associated with the virtual asset; receiving an indication to save a two-dimensional virtual asset based on the 3D model with a 2D image wrapped on it; and using the 3D model with the 2D image wrapped on it to generate the 2D virtual asset.

Drone-enabled wildlife monitoring system
10716292 · 2020-07-21

An unmanned aerial vehicle is configured to fly along a flight path and capture audio signals produced by animal species or emitted by collars coupled to animals. For example, the unmanned aerial vehicle may include a rail mount to which various attachments can be coupled. One such attachment may be a rail attachment coupled to a support that carries a microphone, a recording device, and/or a radio frequency (RF) transceiver. The microphone can capture audio signals and transmit the captured audio signals to the recording device for storage. The RF transceiver can receive signals emitted by collars and forward the signals to a remote receiver. A remote system can compare the captured audio signals with animal species audio signatures or expected data payloads to identify which animal species are present along the flight path and/or the location at which the animal species are present.
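The remote system's signature comparison can be sketched as a nearest-match search over stored species signatures. Normalized cross-correlation is used here as a simplified stand-in for real audio fingerprinting; the signal shapes and species names are invented:

```python
# Minimal sketch: match a captured audio signal against stored
# animal species audio signatures.
import numpy as np

def best_species_match(captured, signatures):
    """Return the species whose signature correlates best with the
    captured signal (normalized cross-correlation, zero-mean)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(np.dot(a, b) / denom) if denom else 0.0
    return max(signatures, key=lambda sp: ncc(captured, signatures[sp]))

t = np.linspace(0, 1, 800)
signatures = {"owl": np.sin(2 * np.pi * 440 * t),
              "frog": np.sin(2 * np.pi * 200 * t)}
captured = np.sin(2 * np.pi * 440 * t)   # drone microphone recording
species = best_species_match(captured, signatures)
```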

Organism growth prediction system using drone-captured images
10713777 · 2020-07-14

A plant growth measurement and prediction system uses drone-captured images to measure the current growth of particular plant species and/or to predict future growth of the plant species. For example, the system instructs a drone to fly along a flight path and capture images of the land below. The captured images may include both thermographic images and high-resolution images. The system processes the images to create an orthomosaic image of the land, where each pixel in the orthomosaic image is associated with a brightness temperature. The system then uses plant species to brightness temperature mappings and the orthomosaic image to identify current plant growth. The system generates a diagnostic model using the orthomosaic image to then predict future plant growth.
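The species-to-brightness-temperature mapping step can be sketched as labeling each orthomosaic pixel by the temperature band it falls into. The band values below are placeholders, not real agronomic data:

```python
# Minimal sketch: classify orthomosaic pixels by plant-species
# brightness-temperature bands (band values are hypothetical).
import numpy as np

def classify_growth(ortho_bt, species_bands):
    """ortho_bt: per-pixel brightness temperature (kelvin).
    species_bands: list of (lo, hi) bands, one per species index.
    Returns per-pixel species labels; -1 where no band matches."""
    labels = np.full(ortho_bt.shape, -1, dtype=int)
    for idx, (lo, hi) in enumerate(species_bands):
        labels[(ortho_bt >= lo) & (ortho_bt < hi)] = idx
    return labels

ortho = np.array([[290.0, 301.0],
                  [305.0, 320.0]])
bands = [(288, 295), (300, 310)]   # illustrative species bands
labels = classify_growth(ortho, bands)
```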

METHOD AND SYSTEM FOR GENERATING A 3D DIGITAL MODEL USED FOR CREATING A HAIRPIECE
20200219328 · 2020-07-09

A method for generating a 3D digital model used for creating a hairpiece is disclosed. The method comprises: obtaining a 3D model of a head, the 3D model containing a 3D surface mesh having a single boundary and color information associated with the 3D mesh; mapping the 3D model into a 2D image in such a manner that any continuously connected line on the 3D model is mapped into a continuously connected line in the 2D image, the 2D image containing a 2D mesh with color information applied thereto; displaying the 2D image; identifying a feature in the 2D image based on the color information; and mapping the identified feature back onto the 3D model. A system for generating a 3D digital model used for creating a hairpiece is also provided in another aspect of the present disclosure.
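The flatten-identify-map-back pipeline can be illustrated with a simple azimuthal projection, one continuous single-boundary parameterization (the patent does not specify which mapping is used). Because the projection is continuous, connected lines on the mesh stay connected in 2D, and vertex indices carry identified features straight back to the 3D model:

```python
# Minimal sketch: flatten a head mesh to 2D, find a feature by
# color, and map it back to 3D via vertex indices.
import numpy as np

def flatten_head(vertices):
    """Project unit-hemisphere vertices (x, y, z) to the plane with an
    azimuthal projection: radius = angular distance from the pole."""
    x, y, z = vertices.T
    r = np.arccos(np.clip(z, -1.0, 1.0))
    theta = np.arctan2(y, x)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def feature_vertices(colors, is_feature):
    """Identify a feature (e.g. the hairline) in the flattened image by
    color; the returned indices address the same vertices in 3D."""
    return np.nonzero(is_feature(colors))[0]

vertices = np.array([[0.0, 0.0, 1.0],    # top of the head (pole)
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
uv = flatten_head(vertices)
colors = np.array([[0.9, 0.2, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1]])
hair = feature_vertices(colors, lambda c: c[:, 0] > 0.5)  # hypothetical rule
```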

ROBUST ASSOCIATION OF TRAFFIC SIGNS WITH A MAP
20200217667 · 2020-07-09

Techniques are provided for accurately matching traffic signs observed in camera images with traffic sign data from 3D maps, which can allow for error correction in a position estimate of a vehicle based on differences between the observed location of a traffic sign and its location according to the 3D map data. Embodiments include preparing the data to allow for comparison between observed and map traffic sign data, conducting a comparison in a 2D frame (e.g., the frame of the camera image) to establish an initial proximity ordering of candidate traffic signs from the map data relative to the observed traffic sign, conducting a second comparison in a 3D frame (e.g., the frame of the 3D map) to determine an association based on the closest match, and using the association to perform error correction.
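The two-stage association can be sketched as: rank map candidates by 2D image-plane distance, then pick the 3D-nearest among the top few. The candidate tuple layout, the value of k, and the coordinates are all invented for illustration:

```python
# Minimal sketch: 2D pre-ranking followed by 3D association of an
# observed traffic sign with 3D-map candidates.
import numpy as np

def associate_sign(observed_px, observed_xyz, candidates, k=2):
    """candidates: list of (name, pixel_xy, world_xyz) tuples.
    Returns the associated sign name and a position-correction vector
    (map location minus observed location)."""
    by_2d = sorted(candidates,
                   key=lambda c: np.linalg.norm(np.subtract(c[1], observed_px)))
    best = min(by_2d[:k],
               key=lambda c: np.linalg.norm(np.subtract(c[2], observed_xyz)))
    correction = np.subtract(best[2], observed_xyz)
    return best[0], correction

candidates = [("stop",  (98, 52),  (10.3, 2.1, 0.0)),
              ("yield", (300, 40), (50.0, 5.0, 0.0)),
              ("speed", (105, 48), (10.1, 2.0, 0.0))]
name, correction = associate_sign((100, 50), (10.0, 2.0, 0.0), candidates)
```

The returned correction vector is the kind of quantity that could feed the error correction of the vehicle's position estimate.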

ADAPTIVE SELECTION OF OCCUPANCY MAP PRECISION

An encoding device, a method of encoding, and a decoding device for point cloud compression of a 3D point cloud are disclosed. The encoding device is configured to generate, for the three-dimensional (3D) point cloud, at least a set of geometry frames and a set of occupancy map frames for points of the 3D point cloud. The encoding device is also configured to select an occupancy precision value based on a quantization parameter (QP) associated with at least one generated geometry frame in the set of geometry frames, subsample at least one occupancy map frame in the set of occupancy map frames based on the selected occupancy precision value, and encode the set of geometry frames and the set of occupancy map frames into a bitstream for transmission.
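The QP-driven selection and subsampling steps can be sketched as follows. The QP thresholds are illustrative (not the ones in the patent), and the block-wise OR-style pooling is one common way to subsample a binary occupancy map:

```python
# Minimal sketch: pick an occupancy precision from the geometry QP,
# then subsample the binary occupancy map by that block size.
import numpy as np

def select_precision(qp):
    """Coarser occupancy precision when geometry is heavily quantized
    anyway; thresholds are hypothetical."""
    return 1 if qp < 28 else 2 if qp < 38 else 4

def subsample_occupancy(occ, precision):
    """Subsample a binary occupancy map in precision x precision
    blocks; a block is occupied if any pixel inside it is occupied."""
    h, w = occ.shape
    trimmed = occ[:h - h % precision, :w - w % precision]
    blocks = trimmed.reshape(h // precision, precision,
                             w // precision, precision)
    return blocks.max(axis=(1, 3))

occ = np.zeros((4, 4), dtype=int)
occ[0, 3] = 1
p = select_precision(30)          # mid-range QP -> precision 2
sub = subsample_occupancy(occ, p)
```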