G06T2219/20

OBJECT CREATION USING BODY GESTURES

An intuitive interface may allow users of a computing device (e.g., children) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of the user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using their body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
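The scan/edit/animate workflow described above can be sketched as a small state machine. The class and phase names below are hypothetical, not taken from the patent:

```python
from enum import Enum, auto

class Phase(Enum):
    # Hypothetical phases matching the described workflow.
    SCAN = auto()     # capture an initial body gesture into a base shape
    EDIT = auto()     # recognized gestures apply edits to the object
    ANIMATE = auto()  # finished object is driven by user movement

class GestureObjectSession:
    """Minimal sketch of the scan -> edit -> animate pipeline."""

    def __init__(self):
        self.phase = Phase.SCAN
        self.shape = None
        self.edits = []

    def scan(self, gesture_points):
        # Use the scanned body pose as the object's initial shape.
        self.shape = list(gesture_points)
        self.phase = Phase.EDIT
        return self.shape

    def apply_gesture_edit(self, edit):
        # Each recognized gesture appends an edit (e.g. "stretch", "bend").
        if self.phase is not Phase.EDIT:
            raise RuntimeError("object must be scanned before editing")
        self.edits.append(edit)

    def finish(self):
        self.phase = Phase.ANIMATE
        return {"shape": self.shape, "edits": self.edits}
```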

CALIBRATION OF LIDAR SENSORS

A method of calibrating a LiDAR sensor mounted on a vehicle includes positioning the vehicle at a distance from a target including a planar mirror and features surrounding the mirror. The vehicle is positioned and oriented relative to the mirror so that an optical axis of the LiDAR sensor is nominally parallel to the optical axis of the mirror, and the target is nominally centered in the field of view of the LiDAR sensor. The method further includes acquiring, using the LiDAR sensor, a three-dimensional image of the target including images of the features of the target and a mirror image of the vehicle formed by the mirror. The method further includes determining a deviation from an expected alignment of the LiDAR sensor with respect to the vehicle by analyzing the images of the features and the mirror image of the vehicle in the three-dimensional image of the target.
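The mirror-based alignment check can be sketched numerically: fit a plane to the LiDAR returns from the mirror and measure the angle between its normal and the sensor's optical axis. The function names and the use of a plain SVD plane fit are illustrative assumptions, not the patented method:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal via SVD (points: Nx3 array-like)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def angular_deviation_deg(boresight, mirror_points):
    """Angle between the sensor's optical axis and the mirror-plane normal.

    For a nominally aligned sensor facing the mirror this should be near
    zero; the absolute cosine handles the normal's sign ambiguity.
    """
    n = fit_plane_normal(mirror_points)
    b = np.asarray(boresight, dtype=float)
    b = b / np.linalg.norm(b)
    cos_angle = abs(np.dot(n, b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```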

DYNAMIC CALIBRATION OF LIDAR SENSORS

A method of calibrating a LiDAR sensor mounted on a vehicle includes storing a reference three-dimensional image acquired by the LiDAR sensor while the LiDAR sensor is in an expected alignment with respect to the vehicle. The reference three-dimensional image includes a first image of a fixed feature on the vehicle. The method further includes acquiring, using the LiDAR sensor, a three-dimensional image including a second image of the fixed feature, and determining a deviation from the expected alignment of the LiDAR sensor with respect to the vehicle by comparing the second image of the fixed feature in the three-dimensional image to the first image of the fixed feature in the reference three-dimensional image.
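One way to quantify the deviation described above is to rigidly align the current image of the fixed feature to the reference image; the Kabsch algorithm is a standard choice, assumed here purely as a sketch (with known point correspondences). An identity rotation and zero translation indicate the sensor still holds its expected alignment:

```python
import numpy as np

def rigid_deviation(reference_pts, current_pts):
    """Estimate the rigid transform (R, t) mapping the reference image of a
    fixed vehicle feature onto its current image (Kabsch algorithm).

    Assumes the two point sets are in known one-to-one correspondence.
    """
    P = np.asarray(reference_pts, dtype=float)
    Q = np.asarray(current_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```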

MODIFYING THREE-DIMENSIONAL REPRESENTATIONS USING DIGITAL BRUSH TOOLS
20210056752 · 2021-02-25

Systems, methods, and non-transitory computer-readable media are disclosed for modifying voxel-based 3D representations using 3D digital brush tools and/or resolution filters. For example, the disclosed systems can utilize 3D digital brush tools (e.g., a digital blur brush tool, a digital smudge brush tool, and/or a digital melt brush tool) to identify and modify one or more voxels within a 3D representation using multiple buffers of visual properties. Additionally, the disclosed systems can modify one or more voxels within a 3D representation by rendering the one or more voxels at varying levels of detail using an octree (e.g., a mosaic filter tool). In particular, the disclosed systems can identify one or more voxels within an octree that are smaller than a target voxel size. Moreover, the disclosed systems can combine the identified one or more voxels within the octree to render the 3D representation at varying levels of detail.
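A toy illustration of the mosaic-filter idea: collapse any subtree whose voxels fall at or below a target size into a single leaf carrying the average color of its leaves. The octree representation here (edge length per node, colors on leaves) is a simplifying assumption, not the disclosed data structure:

```python
class OctreeNode:
    """Minimal voxel octree node: leaves carry a color, internal nodes
    carry up to 8 children."""

    def __init__(self, size, color=None, children=None):
        self.size = size          # edge length of this node's cube
        self.color = color        # set on leaves only
        self.children = children  # list of OctreeNode, or None for a leaf

def _leaf_colors(node):
    if node.children is None:
        if node.color is not None:
            yield node.color
    else:
        for child in node.children:
            yield from _leaf_colors(child)

def mosaic_filter(node, target_size):
    """Collapse any subtree at or below target_size into one averaged leaf,
    rendering the representation at a coarser level of detail."""
    if node.children is None:
        return node
    if node.size <= target_size:
        colors = list(_leaf_colors(node))
        avg = tuple(sum(c[i] for c in colors) // len(colors) for i in range(3))
        return OctreeNode(node.size, color=avg)
    node.children = [mosaic_filter(c, target_size) for c in node.children]
    return node
```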

THREE-DIMENSIONAL MEASUREMENT INTERFACE

A 3D measurement system generates and causes display of a 3D measurement interface that presents values depicting dimensions of one or more objects depicted in a presentation of a 3D model at a client device. According to certain embodiments, the system is configured to perform operations that include: accessing a data stream at a client device, the data stream comprising depth data; generating a 3D model based on at least the depth data; causing display of a presentation of the 3D model at the client device; receiving an input that selects a dimension of the 3D model; and causing display of a value based on the dimension in response to the input that selects the dimension.
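The measurement step itself reduces to computing a distance between selected model points. A minimal sketch, in which the unit label and display formatting are assumptions:

```python
import math

def measure_dimension(p1, p2):
    """Euclidean distance between two selected 3D model points, in model units."""
    return math.dist(p1, p2)

def format_measurement(p1, p2, unit="m"):
    # Hypothetical display formatting for the measurement interface.
    return f"{measure_dimension(p1, p2):.2f} {unit}"
```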

MULTI-DEVICE EDITING OF 3D MODELS

Various implementations disclosed herein include devices, systems, and methods that enable two or more devices to simultaneously view or edit the same 3D model in the same or different settings/viewing modes (e.g., monoscopically, stereoscopically, in SR, etc.). In an example, one or more users are able to use different devices to interact in the same setting to view or edit the same 3D model using different views from different viewpoints. The devices can each display different views from different viewpoints of the same 3D model and, as changes are made to the 3D model, consistency of the views on the devices is maintained.
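One common way to keep views consistent across devices is a single authoritative, ordered edit log that every connected device replays. The abstract does not specify the mechanism; this sketch simply assumes that design:

```python
class Device:
    """A device's local copy of the shared model's edit history."""

    def __init__(self, name):
        self.name = name
        self.state = []

    def apply(self, edit):
        self.state.append(edit)

    def replay(self, log):
        # A newly connecting device catches up from the full log.
        self.state = list(log)

class SharedModel:
    """Keep one 3D model consistent across devices via an ordered edit log."""

    def __init__(self):
        self.log = []       # globally ordered edit operations
        self.devices = []

    def connect(self, device):
        self.devices.append(device)
        device.replay(self.log)

    def submit(self, edit):
        # A single authoritative order keeps all views consistent.
        self.log.append(edit)
        for d in self.devices:
            d.apply(edit)
```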

METHODS AND SYSTEMS FOR SHAPE-BASED TRAINING FOR AN OBJECT DETECTION ALGORITHM

A non-transitory computer readable medium embodies instructions that cause one or more processors to perform a method. The method includes: (A) receiving, in one or more memories, a 3D model corresponding to an object, and (B) setting a depth sensor characteristic data set for a depth sensor for use in detecting a pose of the object in a real scene. The method also includes (C) generating blurred 2.5D representation data of the 3D model for at least one view around the 3D model based on the 3D model and the depth sensor characteristic data set, to generate, on the basis of the 2.5D representation data, training data for training an object detection algorithm, and (D) storing the training data in one or more memories.
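Step (C) amounts to rendering a 2.5D depth map of a view and degrading it according to the depth sensor characteristic data. As a stand-in for a real sensor model, the sketch below applies a simple box blur to a depth map; the actual characteristic data set would drive a more faithful model:

```python
import numpy as np

def blur_depth_map(depth, kernel_size=3):
    """Box-blur a 2.5D depth map to approximate depth-sensor smoothing.

    A crude stand-in for applying a sensor characteristic data set when
    synthesizing training data from a 3D model.
    """
    d = np.asarray(depth, dtype=float)
    pad = kernel_size // 2
    padded = np.pad(d, pad, mode="edge")
    out = np.zeros_like(d)
    for i in range(d.shape[0]):
        for j in range(d.shape[1]):
            out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return out
```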

ACCESSING A VIRTUAL REALITY ENVIRONMENT

A method includes a computing device interpreting a request for a requesting entity to access a set of learning objects pertaining to a common topic represented in a virtual reality environment to produce a set of requested learning object identifiers. The method further includes determining whether a license smart contract for the set of learning objects associated with a non-fungible token of the object distributed ledger affirms access by the requesting entity to the set of learning objects. When access is affirmed, the method further includes generating the virtual reality environment utilizing a group of object representations in accordance with interaction information for at least some of the object representations of the group of object representations. The method further includes outputting the virtual reality environment to the requesting entity for interactive consumption.
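The license check can be modeled, very loosely, as looking up the requesting entity in a contract's licensee table; the dict below stands in for the on-ledger license smart contract and its structure is entirely hypothetical:

```python
def access_affirmed(license_contract, requester, object_ids):
    """Sketch of the license check: the contract (modeled as a dict) must
    list the requester as licensed for every requested learning object."""
    licensed = license_contract.get("licensees", {}).get(requester, set())
    return all(obj_id in licensed for obj_id in object_ids)
```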

CLIENT-GENERATED CONTENT WITHIN A MEDIA UNIVERSE

A media universe (MU) system may leverage network-based resources to store and maintain a database of content for a world encompassed by the media universe. One or more layers of content including a client layer may be overlaid on a base or canonical layer of content in the MU database to enable an immersive client experience within the media universe. Clients may participate within the media universe via the MU system, for example to create customized digital assets. Client-generated content or other content may be dynamically integrated into digital media based within the media universe, for example by leveraging the repository to store and access the client-generated content, and network-based computing resources to dynamically insert the content into digital media for streaming to the clients. Client-generated content may be promoted to intermediate layers and/or to canon within the MU database, for example by community voting, popularity, and so on.
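Layered content resolution as described (a client layer overlaid on intermediate layers and a canonical base) can be sketched as an ordered lookup; the layer-as-dict representation is an assumption for illustration:

```python
def resolve_asset(asset_id, layers):
    """Resolve an asset through ordered content layers: client overrides
    intermediate layers, which override the canonical base layer."""
    for layer in layers:  # highest-priority layer first
        if asset_id in layer:
            return layer[asset_id]
    raise KeyError(asset_id)
```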