Patent classifications
G06T9/20
Cost-driven framework for progressive compression of textured meshes
Techniques of compressing level of detail (LOD) data involve sharing a texture image LOD among different mesh LODs for single-rate encoding. That is, a first texture image LOD corresponding to a first mesh LOD may be derived by refining a second texture image LOD corresponding to a second mesh LOD. This sharing is possible when texture atlases of LOD meshes are compatible.
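As a toy illustration of that refinement relationship, the Python sketch below derives a finer texture image LOD from a coarser one via nearest-neighbour upsampling plus a per-texel residual. The upsample-plus-residual operator is an assumption for illustration; the abstract does not specify how refinement is performed.

```python
# Sketch: one single-rate texture stream serves two mesh LODs because
# the finer texture LOD is derived from the coarser one by refinement.

def refine(coarse, residual):
    """Derive a finer texture LOD by upsampling the coarse LOD 2x
    (nearest neighbour) and adding a per-texel residual."""
    h, w = len(coarse), len(coarse[0])
    up = [[coarse[y // 2][x // 2] for x in range(2 * w)] for y in range(2 * h)]
    return [[up[y][x] + residual[y][x] for x in range(2 * w)]
            for y in range(2 * h)]

# 1x1 texture LOD shared with the coarse mesh LOD ...
coarse_lod = [[100]]
# ... refined into the 2x2 texture LOD used by the finer mesh LOD.
residual = [[0, 5], [-5, 10]]
fine_lod = refine(coarse_lod, residual)
```

Sharing only works, as the abstract notes, when the texture atlases of the two mesh LODs are compatible, i.e. a texel in the coarse atlas maps onto a predictable region of the fine atlas.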
Systems and methods for encoding hyperspectral data with variable band resolutions
An encoder may perform a dynamic encoding that adapts the encoding of hyperspectral data according to the number of bands of the electromagnetic spectrum that are captured by different imaging devices, the amount of data that is contained in each band, and/or encoding criteria that are specified by a user or that are automatically generated by the encoder for an optimal encoding of the hyperspectral data. The encoder may receive hyperspectral data for different electromagnetic spectrum bands. The encoder may determine an encoding resolution based on one or more of a number of bands and a maximum resolution within the received bands. The encoder may configure a block size for a file format that is used to store an encoding of the hyperspectral data based on the encoding resolution, and may encode the hyperspectral data contained within each band to at least one block of the block size.
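A minimal sketch of the block-sizing step described above: the encoding resolution is derived from the number of bands and the maximum resolution among them, and that resolution then fixes the block size of the storage format. The specific policy below (cap at the maximum band resolution, shrink as the band count grows against a fixed budget) is an illustrative assumption, not the patented rule.

```python
# Hypothetical block-size selection for a multi-band hyperspectral file.

def choose_block_size(band_resolutions, budget_pixels=1 << 20):
    """band_resolutions: pixel counts of each captured spectrum band."""
    num_bands = len(band_resolutions)
    max_res = max(band_resolutions)        # finest resolution captured
    per_band = budget_pixels // num_bands  # spread a fixed budget over bands
    return min(max_res, per_band)

# Three imaging devices capture bands at differing resolutions; all
# bands are then encoded into blocks of the chosen size.
block = choose_block_size([512 * 512, 1024 * 1024, 256 * 256])
```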
Automatic Area Detection
An example computing platform is configured to (i) receive a two-dimensional (2D) image file comprising a construction drawing, (ii) generate, via semantic segmentation, a first set of polygons corresponding to respective areas of the 2D image file, (iii) generate, via instance segmentation, a second set of polygons corresponding to respective areas of the 2D image file, (iv) generate, via unsupervised image processing, a third set of polygons corresponding to respective areas of the 2D image file, (v) based on (a) overlap between polygons in the first, second, and third sets of polygons and (b) respective confidence scores for each of the overlapping polygons, determine a set of merged polygons corresponding to respective areas of the 2D image file, and (vi) cause a client station to display a visual representation of the 2D image file where each merged polygon is overlaid as a respective selectable region of the 2D image file.
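Step (v) above can be sketched as an overlap-and-confidence merge. Polygons are simplified here to axis-aligned boxes `(x0, y0, x1, y1)`, and the IoU threshold and "keep the most confident polygon of each overlapping group" policy are assumptions for illustration; the platform's actual merge criteria are not spelled out in the abstract.

```python
# Merge polygons from the semantic, instance, and unsupervised passes.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge(candidates, thresh=0.5):
    """candidates: (box, confidence) pairs pooled from all three passes.
    Greedily keep the highest-confidence box of each overlapping group."""
    kept = []
    for box, conf in sorted(candidates, key=lambda c: -c[1]):
        if all(iou(box, k) < thresh for k, _ in kept):
            kept.append((box, conf))
    return kept

semantic = [((0, 0, 10, 10), 0.9)]
instance = [((1, 1, 11, 11), 0.8)]    # overlaps the semantic polygon
unsup    = [((20, 20, 30, 30), 0.7)]  # distinct area, survives the merge
merged = merge(semantic + instance + unsup)
```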
Unified shape representation
Techniques are described herein for generating and using a unified shape representation that encompasses features of different types of shape representations. In some embodiments, the unified shape representation is a unicode comprising a vector of embeddings and values for the embeddings. The embedding values are inferred, using a neural network that has been trained on different types of shape representations, based on a first representation of a three-dimensional (3D) shape. The first representation is received as input to the trained neural network and corresponds to a first type of shape representation. At least one embedding has a value dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation. The value of the at least one embedding is inferred based upon the first representation and in the absence of the second type of shape representation for the 3D shape.
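A toy sketch of the "unicode" idea: a single fixed-length embedding vector is inferred from one input representation (a point cloud here), including entries tied to features of other representation types that are absent at inference time. The stand-in network below is random and untrained; in the described technique the weights come from training on multiple shape-representation types.

```python
import random

EMBED_DIM = 8  # illustrative length of the unified embedding vector

def encode(point_cloud, weights):
    """Infer all EMBED_DIM embedding values from the point cloud alone,
    even entries that encode features native to another representation
    (e.g. mesh topology) not supplied at inference time."""
    n = len(point_cloud)
    feats = [sum(p[i] for p in point_cloud) / n for i in range(3)]
    return [sum(w * f for w, f in zip(row, feats)) for row in weights]

rng = random.Random(0)
weights = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(EMBED_DIM)]
unicode_vec = encode([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)], weights)
```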
Transformation of hand-drawn sketches to digital images
Techniques are disclosed for generating a vector image from a raster image, where the raster image is, for instance, a photographed or scanned version of a hand-drawn sketch. While drawing a sketch, an artist may perform multiple strokes to draw a line, and the resultant raster image may have adjacent or partially overlapping salient and non-salient lines, where the salient lines are representative of the artist's intent, and the non-salient (or auxiliary) lines are formed due to the redundant strokes or otherwise as artefacts of the creation process. The raster image may also include other auxiliary features, such as blemishes, non-white background (e.g., reflecting the canvas on which the hand-sketch was made), and/or uneven lighting. In an example, the vector image is generated to include the salient lines, but not the non-salient lines or other auxiliary features. Thus, the generated vector image is a cleaner version of the raster image.
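The cleanup idea can be sketched as a simple pre-vectorisation pass: faint marks (redundant strokes, background tint, blemishes) are dropped, and sufficiently dark marks are kept as salient ink. The single darkness threshold below is an illustrative assumption; the disclosed technique's actual salient/non-salient discrimination is more involved than a per-texel cutoff.

```python
# Drop auxiliary features from a scanned sketch before vectorisation.

def clean(raster, salient_thresh=80):
    """raster: 2D list of grayscale values (0 = black ink, 255 = white).
    Texels darker than the threshold are kept as ink; everything else
    (faint strokes, non-white background) becomes pure white."""
    return [[0 if v <= salient_thresh else 255 for v in row]
            for row in raster]

scan = [
    [230, 120, 230],  # 120: faint redundant stroke -> dropped
    [230,  30, 230],  # 30: dark salient line -> kept
]
cleaned = clean(scan)
```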
ENCODING LIDAR SCANNED DATA FOR GENERATING HIGH DEFINITION MAPS FOR AUTONOMOUS VEHICLES
Embodiments relate to methods for efficiently encoding sensor data captured by an autonomous vehicle and building a high definition map using the encoded sensor data. The sensor data can be LiDAR data which is expressed as multiple image representations. Image representations that include important LiDAR data undergo lossless compression, while image representations that include LiDAR data that is more error-tolerant undergo lossy compression. The compressed sensor data can then be transmitted to an online system for building a high definition map. When building a high definition map, entities, such as road signs and road lines, are constructed such that when encoded and compressed, the high definition map consumes less storage space. The positions of entities are expressed in relation to a reference centerline in the high definition map. Therefore, each position of an entity can be expressed in fewer numerical digits than with conventional methods.
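The centerline-relative coordinate idea can be sketched as follows: expressing an entity's position as a small quantised offset from a reference centerline point needs far fewer digits than a full absolute map coordinate. The quantisation step and the sample coordinates below are illustrative assumptions.

```python
# Encode an entity position as a small integer offset from a
# reference-centerline point instead of an absolute coordinate.

def encode_offset(entity_xy, centerline_xy, step=0.01):
    """Quantise the offset from the centerline point to `step` metres,
    yielding small integers in place of many-digit absolute values."""
    dx = entity_xy[0] - centerline_xy[0]
    dy = entity_xy[1] - centerline_xy[1]
    return round(dx / step), round(dy / step)

# An absolute position with ten significant digits collapses to a
# two-digit and a three-digit offset from the nearest centerline point.
off = encode_offset((4000123.45, 555678.90), (4000123.20, 555677.65))
```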
Data generation system and methods
A data generation system for generating data representing content to be displayed includes: a content dividing unit operable to divide content to be displayed into a plurality of polyhedra and generate polyhedron position information, an intersection detecting unit operable to generate intersection information that describes the intersection of one or more surfaces within the content with the plurality of polyhedra, a polyhedron classifying unit operable to classify each of the polyhedra in dependence upon the intersection information, the classification indicating the properties of the surface within the respective polyhedra, and a data generating unit operable to generate data comprising the polyhedron position information and the polyhedron classification information.
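The divide/intersect/classify pipeline can be sketched for a simple case: content divided into unit cubes, with the surface given as an implicit function f(x, y, z) = 0 (a sphere here). A polyhedron whose corner samples of f differ in sign is intersected by the surface. The three-way labels are an illustrative classification scheme; the system's classification may carry richer surface properties.

```python
# Classify unit-cube polyhedra by their intersection with a surface.

def sphere(x, y, z, r=1.5):
    """Implicit surface: negative inside, positive outside."""
    return x * x + y * y + z * z - r * r

def corners(cx, cy, cz):
    """Sample the surface function at the 8 corners of a unit cube
    whose minimum corner is (cx, cy, cz)."""
    return [sphere(cx + dx, cy + dy, cz + dz)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

def classify_cell(corner_values):
    """Label one polyhedron from its corner samples of f."""
    if all(v < 0 for v in corner_values):
        return "inside"
    if all(v > 0 for v in corner_values):
        return "outside"
    return "boundary"  # the surface intersects this polyhedron

labels = {
    (0, 0, 0): classify_cell(corners(0, 0, 0)),  # straddles the sphere
    (2, 2, 2): classify_cell(corners(2, 2, 2)),  # far outside it
}
```

The generated data would then pair each cube's position with its label, matching the position-information and classification-information outputs named in the abstract.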