Patent classifications
G06T7/75
Mobile robots to generate occupancy maps
An example control system includes a memory and at least one processor to obtain image data from a given region and perform image analysis on the image data to detect a set of objects in the given region. The example control system may classify each object of the set as being one of multiple predefined classifications of object permanency: (i) a static and fixed classification, (ii) a static and unfixed classification, or (iii) a dynamic classification. The control system may generate at least a first layer of an occupancy map for the given region that depicts each detected object of the static and fixed classification while excluding each detected object of either the static and unfixed classification or the dynamic classification.
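The layering idea above can be sketched minimally as a filter over classified detections. The label strings, the `DetectedObject` structure, and the single-cell footprint are all illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

# Hypothetical permanency labels matching the abstract's classifications.
STATIC_FIXED = "static_fixed"
STATIC_UNFIXED = "static_unfixed"
DYNAMIC = "dynamic"

@dataclass
class DetectedObject:
    name: str
    cell: tuple       # (row, col) grid cell the object occupies
    permanency: str   # one of the labels above

def first_layer(objects, rows, cols):
    """Build the first occupancy-map layer: mark only static-and-fixed objects."""
    grid = [[0] * cols for _ in range(rows)]
    for obj in objects:
        if obj.permanency == STATIC_FIXED:
            r, c = obj.cell
            grid[r][c] = 1  # occupied by a permanent object
    return grid

objects = [
    DetectedObject("wall", (0, 0), STATIC_FIXED),
    DetectedObject("chair", (1, 1), STATIC_UNFIXED),
    DetectedObject("person", (2, 2), DYNAMIC),
]
layer = first_layer(objects, 3, 3)
# Only the wall's cell is marked occupied; the chair and person are excluded.
```

Keeping unfixed and dynamic objects out of this layer leaves a base map of structure that a mobile robot can trust across visits.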
VIRTUALIZING OBJECTS USING OBJECT MODELS AND OBJECT POSITION DATA
Described herein are a system and methods for generating a record of objects, as well as respective positions for those objects, with respect to a user. In some embodiments, a user may use a user device to scan an area that includes one or more objects. The one or more objects may be identified from image information obtained from the user device. Positional information for each of the one or more objects may be determined from depth information obtained from a depth sensor installed upon the user device. In some embodiments, the one or more objects may be mapped to object models stored in an object model database. The image information displayed on the user device may be augmented so that it depicts the object models associated with the one or more objects instead of the actual objects.
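The mapping step can be sketched as a lookup from identified object classes to stored models, carrying along the depth-derived position. The `MODEL_DB` contents and the record layout are hypothetical stand-ins for the object model database described above:

```python
# Hypothetical object-model "database" keyed by recognized object class.
MODEL_DB = {"chair": "chair_model_v2", "lamp": "lamp_model_v1"}

def virtualize(detections):
    """Map each detected object to a stored model, keeping its measured position.

    detections: list of (object_class, (x, y, z)) tuples, where the position
    would come from the device's depth sensor.
    """
    record = []
    for obj_class, position in detections:
        model = MODEL_DB.get(obj_class)  # None if no model is stored
        if model is not None:
            record.append({"model": model, "position": position})
    return record

scene = virtualize([("chair", (1.0, 0.0, 2.5)), ("plant", (0.5, 0.0, 1.0))])
# The chair maps to a stored model; the unrecognized plant is dropped.
```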
IMAGE PROCESSING SYSTEM AND METHOD
A computer-implemented method of determining a pose of each of a plurality of objects includes, for each given object: using image data and associated depth information to estimate a pose of the given object. The method includes iteratively updating the estimated poses by: sampling, for each given object, a plurality of points from a predetermined model of the given object transformed in accordance with the estimated pose of the given object; determining first occupancy data for each given object dependent on positions of the points sampled from the predetermined model, relative to a voxel grid containing the given object; determining second occupancy data for each given object dependent on positions of the points sampled from the predetermined models of the other objects, relative to the voxel grid containing the given object; and updating the estimated poses of the plurality of objects to reduce an occupancy penalty.
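A toy version of the occupancy penalty can be written with voxel sets: sample points from each posed model, discretize them into voxel indices, and sum, over objects, the voxels each object shares with the others. The voxel size and the point lists are illustrative; the actual method samples from predetermined object models under their estimated poses:

```python
def voxelize(points, voxel_size=0.1):
    """Map sampled surface points to discrete voxel indices."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def occupancy_penalty(sampled_points_per_object, voxel_size=0.1):
    """Sum, over objects, the voxels shared with other objects: the
    quantity an iterative pose update would try to reduce."""
    grids = [voxelize(pts, voxel_size) for pts in sampled_points_per_object]
    penalty = 0
    for i, grid in enumerate(grids):
        others = set().union(*(g for j, g in enumerate(grids) if j != i))
        penalty += len(grid & others)
    return penalty

# Two objects whose sampled points collide in exactly one voxel.
a = [(0.05, 0.05, 0.05), (0.35, 0.05, 0.05)]
b = [(0.05, 0.05, 0.05), (0.75, 0.05, 0.05)]
# The shared voxel is counted once per object, so the penalty is 2.
```

Because interpenetrating objects raise this count, nudging the estimated poses to lower it pushes the models apart into physically plausible configurations.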
IMAGE PROCESSING SYSTEM AND METHOD
A computer-implemented method of estimating a pose of a target object in a three-dimensional scene includes: obtaining image data and associated depth information representing a view of the three-dimensional scene; processing the image data and the associated depth information to generate a volumetric reconstruction for each of a plurality of objects in the three-dimensional scene, including the target object; determining a volumetric grid containing the target object; generating, using the generated volumetric reconstructions, occupancy data indicating portions of the volumetric grid occupied by free space and portions of the volumetric grid occupied by objects other than the target object; and estimating the pose of the target object using the generated occupancy data and pointwise feature data for a plurality of points on a surface of the target object.
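The occupancy data described above can be sketched as labels over the target's volumetric grid, computed from the per-object reconstructions. The 2D voxel indices and the label strings are illustrative simplifications:

```python
def occupancy_data(grid_voxels, target_voxels, other_objects_voxels):
    """Label each voxel of the target's grid as 'free' or 'other'.

    grid_voxels: voxel indices of the grid containing the target.
    target_voxels / other_objects_voxels: voxels from the volumetric
    reconstructions of the target and of all other objects.
    """
    others = set(other_objects_voxels)
    labels = {}
    for v in grid_voxels:
        if v in others:
            labels[v] = "other"  # occupied by an object other than the target
        elif v not in target_voxels:
            labels[v] = "free"   # free space the target must not extend into
    return labels

grid = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = occupancy_data(grid, target_voxels={(0, 0)},
                        other_objects_voxels={(1, 1)})
# The target's own voxel (0, 0) gets no label; the rest are constraints
# that a pose estimate for the target should respect.
```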
Attribute information prediction method, encoder, decoder and storage medium
Provided are a method for predicting attribute information, an encoder, a decoder, and a storage medium. The encoder determines a current Morton code corresponding to a point to be predicted in a point cloud to be encoded; determines a target Morton code for the point to be predicted based on the current Morton code and a preset neighbor information table; judges, according to the target Morton code, whether a neighbor point of the point to be predicted exists in the point cloud; and, in response to determining that the neighbor point exists, performs prediction according to attribute reconstruction information of the neighbor point to obtain a predicted attribute value of the point to be predicted.
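The Morton code underpinning this scheme interleaves the bits of a point's integer coordinates, so that nearby points in space tend to have nearby codes. The preset neighbor information table and the prediction rule are codec-specific details not reproduced here; this sketch shows only the interleaving:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of integer coordinates x, y, z into a 3D Morton
    code, the ordering used to locate candidate neighbor points."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)      # x bits land at positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)  # y bits at positions 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)  # z bits at positions 2, 5, 8, ...
    return code

# Unit steps along each axis set exactly one bit of the code:
# morton3(1, 0, 0) == 1, morton3(0, 1, 0) == 2, morton3(0, 0, 1) == 4
```

Sorting points by Morton code lets the encoder check a small window of codes (via the neighbor table) instead of searching the whole cloud for a neighbor.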
PLANAR CONTOUR RECOGNITION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
This application relates to a planar contour recognition method and apparatus, a computer device, and a storage medium. The method includes obtaining a target frame image collected from a target environment; fitting edge points of an object plane in the target frame image and edge points of a corresponding object plane in a previous frame image to obtain a fitting graph, the previous frame image being collected from the target environment before the target frame image; deleting edge points that do not appear on the object plane of the previous frame image, in the fitting graph; and recognizing a contour constructed by remaining edge points in the fitting graph as a planar contour.
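The deletion step can be sketched as keeping only current-frame edge points that lie near some edge point of the same plane in the previous frame. The distance tolerance and point format are illustrative assumptions; the patent's fitting step is not reproduced:

```python
def prune_edge_points(current_edges, previous_edges, tol=1.0):
    """Keep current-frame edge points that appear (within tol) on the object
    plane of the previous frame; the survivors form the planar contour."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
    return [p for p in current_edges if any(near(p, q) for q in previous_edges)]

prev = [(0, 0), (10, 0), (10, 10)]
curr = [(0.5, 0.2), (10.1, 0.0), (50, 50)]
contour = prune_edge_points(curr, prev)
# The outlier (50, 50) is deleted; the two stable edge points remain.
```

Requiring an edge point to persist across frames suppresses spurious edges from noise or moving objects before the contour is constructed.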
THREE-DIMENSIONAL SELECTIVE BONE MATCHING FROM TWO-DIMENSIONAL IMAGE DATA
A method of generating a custom three-dimensional (3D) model of a patient bone from one or more 2D images is disclosed. The method includes obtaining a 2D image of a bone, optionally of a joint, and identifying a 3D bone template for a candidate or representative bone from a pre-aligned library of representative bones. The method further includes repositioning one or more views of the 3D model or 2D images (e.g., with respect to rotation angle or caudal angle). In an iterative process, another 3D bone model for another candidate bone can be identified based on the repositioning until an accuracy threshold is satisfied. When the accuracy threshold is satisfied, surface region(s) of the current 3D bone model can then be modified to generate the resulting 3D model for the patient bone. The process can then be repeated for other bone(s) associated with the joint of the patient.
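The iterative template-selection loop can be sketched as scanning a library of candidate bone models until one matches the patient's 2D data within the accuracy threshold. The contour representation and error metric below are toy stand-ins for the repositioned-view comparison described above:

```python
def select_bone_template(patient_contour, template_library, threshold):
    """Return the first candidate template whose contour error falls below
    the accuracy threshold, or the best candidate seen if none does."""
    def contour_error(template_contour):
        # Toy error: mean absolute difference of corresponding contour samples.
        diffs = [abs(a - b) for a, b in zip(patient_contour, template_contour)]
        return sum(diffs) / len(diffs)

    best_name, best_err = None, float("inf")
    for name, contour in template_library.items():
        err = contour_error(contour)
        if err < best_err:
            best_name, best_err = name, err
        if err <= threshold:
            break  # accuracy threshold satisfied; stop iterating
    return best_name, best_err

library = {"femur_A": [1.0, 2.2, 3.1], "femur_B": [1.0, 2.0, 3.0]}
choice, err = select_bone_template([1.0, 2.0, 3.0], library, threshold=0.05)
# femur_A misses the threshold; femur_B matches exactly and ends the search.
```

In the full method the selected template's surface regions would then be deformed toward the patient's 2D image data to produce the final custom model.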
Systems and methods to insert supplemental content into presentations of two-dimensional video content
Systems and methods for inserting supplemental content into presentations of two-dimensional video content are disclosed. Exemplary implementations may: obtain two-dimensional video content depicting a three-dimensional space; obtain supplemental content; obtain a model of the three-dimensional space defining one or more visible physical features within the three-dimensional space; determine the camera position of the two-dimensional video content; identify a presentation location within the two-dimensional video content; determine integration information; modify the two-dimensional video content to include the supplemental content at the identified presentation location in accordance with the integration information; and/or perform other operations.
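The final modification step can be sketched as compositing an overlay into a frame at the identified presentation location. Real integration would warp the content using the scene model and camera position; this pixel copy, with frames as nested lists, is an illustrative simplification:

```python
def insert_content(frame, overlay, top_left):
    """Copy overlay pixels into a 2D frame at the identified presentation
    location, returning a modified copy of the frame."""
    r0, c0 = top_left
    out = [row[:] for row in frame]  # leave the original frame untouched
    for i, row in enumerate(overlay):
        for j, px in enumerate(row):
            out[r0 + i][c0 + j] = px
    return out

frame = [[0] * 4 for _ in range(4)]         # blank 4x4 frame
ad = [[9, 9], [9, 9]]                        # 2x2 supplemental content
composited = insert_content(frame, ad, (1, 1))
# The overlay occupies the 2x2 block starting at row 1, column 1.
```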
MICROWAVE IDENTIFICATION METHOD AND SYSTEM
The present disclosure provides a microwave identification method implemented on at least one device including at least one processor and at least one storage device. The method includes: obtaining, by the at least one processor, microwave data; generating an image of one or more objects based on the microwave data; obtaining a model of each of the one or more objects; and identifying, based on the model of each of the one or more objects, the one or more objects in the image.
Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval
Systems and methods are described for using data collected by unmanned aerial vehicles (UAVs) to generate insurance claim estimates that an insured individual may quickly review, approve, or modify. When an insurance-related event occurs, such as a vehicle collision, crash, or disaster, one or more UAVs are dispatched to the scene of the event to collect various data, including data related to vehicle or real property (insured asset) damage. With the insured's permission or consent, the data collected by the UAVs may then be analyzed to generate an estimated insurance claim for the insured. The estimated insurance claim may be sent to the insured individual, such as to their mobile device via wireless communication or data transmission, for subsequent review and approval. As a result, insurance claim handling and/or the online customer experience may be enhanced.