Patent classifications
G06T2210/32
LANE EXTRACTION METHOD USING PROJECTION TRANSFORMATION OF THREE-DIMENSIONAL POINT CLOUD MAP
A lane extraction method uses projection transformation of a three-dimensional (3D) point cloud map. The amount of computation required to extract lane coordinates is reduced by performing deep learning and lane extraction in a two-dimensional (2D) domain, so that lane information is obtained in real time. In addition, black-and-white brightness, the most important information for lane extraction on an image, is substituted by the reflection intensity of a light detection and ranging (LiDAR) sensor, so that a deep learning model capable of accurately extracting a lane is provided. Reliability and competitiveness are therefore enhanced in the fields of autonomous driving, road recognition, lane recognition, and HD road maps for autonomous driving, and in similar or related fields, and more particularly in road recognition and autonomous driving using LiDAR.
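The core projection step, substituting LiDAR reflection intensity for image brightness in a 2D bird's-eye-view grid, can be sketched as follows. This is a minimal illustration rather than the patented method: the function name `lidar_to_bev_intensity`, the grid ranges, the 0.1 m resolution, and the assumption that intensity sits in the fourth point column are all choices made for this example.

```python
import numpy as np

def lidar_to_bev_intensity(points, x_range=(0.0, 40.0), y_range=(-10.0, 10.0),
                           resolution=0.1):
    """Project 3D LiDAR points (x, y, z, intensity) onto a 2D bird's-eye-view
    grid, storing reflection intensity where an image would store brightness."""
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.zeros((h, w), dtype=np.float32)

    x, y, intensity = points[:, 0], points[:, 1], points[:, 3]
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    cols = ((x[mask] - x_range[0]) / resolution).astype(int)
    rows = ((y[mask] - y_range[0]) / resolution).astype(int)
    # Keep the maximum intensity per cell; lane paint reflects strongly.
    np.maximum.at(bev, (rows, cols), intensity[mask])
    return bev
```

Because lane paint is strongly retroreflective, keeping the per-cell maximum intensity tends to make lane markings stand out in the resulting 2D image, which can then be processed by an ordinary 2D segmentation network.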
APPARATUS, METHOD, AND STORAGE MEDIUM
An apparatus includes an acquisition unit configured to acquire characteristic information of a target, and a determination unit configured to determine a resolution level of illumination information to be used in rendering, based on reflection information contained in the characteristic information. The determination unit decreases the resolution level as the width, which indicates the spread of a component of the reflection, increases.
POINT CLOUD COMPRESSION USING OCCUPANCY NETWORKS
Occupancy networks enable efficient and flexible point cloud compression. In addition to the voxel-based representation, occupancy networks can handle points, meshes, or projected images of 3D objects, making them very flexible in terms of input signal representation. The probability that a position is occupied is estimated using occupancy networks instead of sparse convolutional neural networks. A compression implementation using an occupancy network enables scalability with infinite reconstruction resolution.
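The key property, a continuous function mapping a coordinate to an occupancy probability so that a shape can be decoded at any resolution, can be illustrated with a toy model. The tiny untrained NumPy MLP below is purely illustrative; a real occupancy network is trained and conditions on a latent code describing the compressed object.

```python
import numpy as np

class OccupancyNetwork:
    """Tiny MLP f(x, y, z) -> P(occupied), a sketch of an occupancy network.
    Because it is a continuous function, it can be queried at any resolution,
    which is what enables "infinite" reconstruction resolution."""

    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 1.0, (3, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b2 = np.zeros(1)

    def occupancy(self, coords):
        # coords: (N, 3) query positions -> (N,) occupancy probabilities.
        h = np.tanh(coords @ self.w1 + self.b1)
        logits = h @ self.w2 + self.b2
        return 1.0 / (1.0 + np.exp(-logits.ravel()))

# One set of weights, queried at a coarse and a fine sampling of space:
net = OccupancyNetwork()
coarse = net.occupancy(np.random.default_rng(1).uniform(-1, 1, (8, 3)))
fine = net.occupancy(np.random.default_rng(2).uniform(-1, 1, (1000, 3)))
```

The same fixed-size weight set answers both queries, which is the sense in which the reconstruction resolution is unbounded by the stored representation.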
Systems and methods for capturing visible information
A transaction card construction and computer-implemented methods for a transaction card are described. The transaction card has vector-formatted visible information applied by a laser machining system. In some embodiments, systems and methods are disclosed for enabling the sourcing of visible information using a scalable vector format. The systems and methods may receive a request to add visible information to a transaction card and capture an image of the visible information. The systems and methods may capture data representing the image. The systems and methods may also determine an ambient color saturation of the image. Further, the systems and methods may translate the image based on the ambient color saturation of the image. The systems and methods may also map the translated image to a bounding box and convert the mapped image into vector format. In addition, the systems and methods may provide the converted image to a laser machining system.
Label propagation in a distributed system
Data are maintained in a distributed computing system that describe a graph. The graph represents relationships among items. The graph has a plurality of vertices that represent the items and a plurality of edges connecting the plurality of vertices. At least one vertex of the plurality of vertices includes a set of label values indicating the at least one vertex's strength of association with a label from a set of labels. The set of labels describe possible characteristics of an item represented by the at least one vertex. At least one edge of the plurality of edges includes a set of label weights for influencing label values that traverse the at least one edge. A label propagation algorithm is executed for a plurality of the vertices in the graph in parallel for a series of synchronized iterations to propagate labels through the graph.
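The synchronized, vertex-parallel scheme described above (Pregel-style supersteps, with per-label edge weights scaling the label values that traverse each edge) can be sketched in a single process. The data layout, the default weight of 1.0, and the normalization step are assumptions of this sketch, not details from the patent.

```python
from collections import defaultdict

def propagate_labels(vertices, edges, iterations=10):
    """Synchronous label propagation sketch.

    vertices: {v: {label: value}}        initial label strengths per vertex
    edges:    {(u, v): {label: weight}}  per-label weights on each edge

    In each superstep, every vertex's label values are the weighted sum of
    its neighbors' values from the *previous* superstep, so all vertices can
    be processed in parallel between synchronization barriers."""
    state = {v: dict(lv) for v, lv in vertices.items()}
    for _ in range(iterations):
        incoming = {v: defaultdict(float) for v in state}
        for (u, v), weights in edges.items():
            for label, value in state[u].items():
                # Edge weights influence label values crossing the edge.
                incoming[v][label] += value * weights.get(label, 1.0)
        new_state = {}
        for v, labels in incoming.items():
            total = sum(labels.values())
            if total > 0:
                # Normalize so label strengths remain comparable.
                new_state[v] = {l: x / total for l, x in labels.items()}
            else:
                new_state[v] = dict(state[v])  # no incoming edges: keep labels
        state = new_state
    return state
```

A distributed implementation partitions the vertices across workers and exchanges the `incoming` messages over the network at each synchronization barrier; the per-superstep logic is the same.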
Unified shape representation
Techniques are described herein for generating and using a unified shape representation that encompasses features of different types of shape representations. In some embodiments, the unified shape representation is a unicode comprising a vector of embeddings and values for the embeddings. The embedding values are inferred, using a neural network that has been trained on different types of shape representations, based on a first representation of a three-dimensional (3D) shape. The first representation is received as input to the trained neural network and corresponds to a first type of shape representation. At least one embedding has a value dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation. The value of the at least one embedding is inferred based upon the first representation and in the absence of the second type of shape representation for the 3D shape.
Embedding a Magnetic Map into an Image File
In one embodiment, a method includes accessing a magnetic map of an area that includes magnetic-field values for locations in the area. The method also includes accessing an image file that includes pixels that correspond to the locations and include components. The image file also includes a first matrix with elements that each include color values. The components of the pixels include links to elements in the first matrix. The method also includes embedding portions of the magnetic map into the image file by generating a second matrix for the image file including elements that represent the magnetic-field values and, for the locations in the area, writing to the components of the pixels corresponding to the locations links to elements of the second matrix. The method also includes communicating the image file, with the portions of the magnetic map embedded in it, to computing devices for navigation or localization.
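The two-matrix layout can be mimicked in a few lines: pixels keep their links into the color table (the first matrix), while a generated second matrix holds the distinct magnetic-field values and a parallel link layer points each pixel at one of them. The dictionary container and the function names here are illustrative only, not the patent's file format.

```python
import numpy as np

def embed_magnetic_map(pixel_indices, palette, magnetic_map):
    """Embed a magnetic map into an indexed image (sketch).

    pixel_indices: HxW links into `palette` (the first matrix, color values).
    magnetic_map:  HxW magnetic-field values for the same locations.

    Builds a second matrix of the distinct field values and writes, for each
    pixel, a link into that second matrix, so the file carries both color and
    field data without altering the color table."""
    values, links = np.unique(magnetic_map, return_inverse=True)
    return {
        "palette": palette,                        # first matrix
        "color_links": pixel_indices,              # per-pixel color links
        "field_matrix": values,                    # second matrix
        "field_links": links.reshape(magnetic_map.shape),
    }

def field_at(image, row, col):
    """Recover the magnetic-field value for a pixel via its link."""
    return image["field_matrix"][image["field_links"][row, col]]
```

A device receiving such a file can render it as an ordinary image via `color_links` and, for localization, look up the expected field value at any pixel via `field_links`.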
Methods for Correcting and Encrypting Space Coordinates of Three-Dimensional Model
The present disclosure provides a method for correcting and encrypting the space coordinates of a three-dimensional model. The method for correcting space coordinates of a three-dimensional model includes:
Step S1: reading information of the original coordinate frame of a three-dimensional model in a first format and the model's origin of coordinates; reading information of nodes from the three-dimensional model data in the first format, and calculating original coordinates of the nodes.
Step S2: calculating parameters of correction between the original coordinate frame and a target coordinate frame, based on the space coordinates of four or more control points in the original coordinate frame in the first format and the corresponding space coordinates of those control points in the target coordinate frame in a second format, and constructing a space coordinate correction matrix.
Step S3: transforming and correcting the coordinates of the origin and nodes of the three-dimensional model in the first format one by one using the space coordinate correction matrix, to obtain the coordinate points of the three-dimensional model in the second format.
Step S4: storing a file of the three-dimensional model in the second format with the corrected space coordinates.
Thus, the production efficiency is improved.
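Steps S2 and S3, estimating a correction matrix from four or more control points known in both frames and then applying it to every node, are commonly realized as an affine least-squares fit. The sketch below assumes that form; the patent does not specify the matrix model, so this is an illustrative choice.

```python
import numpy as np

def correction_matrix(src_pts, dst_pts):
    """Step S2 (sketch): estimate a 4x4 affine correction matrix from four
    or more control points with coordinates in both frames, by least squares."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    # Solve src_h @ A ~= dst for the 4x3 affine parameters A.
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    M = np.eye(4)
    M[:3, :] = A.T
    return M

def apply_correction(M, points):
    """Step S3 (sketch): transform node coordinates with the matrix."""
    pts = np.asarray(points, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (pts_h @ M.T)[:, :3]
```

With exactly four control points in general position the fit is exact; additional control points over-determine the system and the least-squares solution averages out measurement error.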
MULTIMEDIA SYSTEM AND MULTIMEDIA OPERATION METHOD
The invention relates to a multimedia system and a multimedia operation method. The multimedia system includes a first portable electronic device, a collaboration device, a camera, and an audio-visual processing device. The first portable electronic device provides a first operation instruction. The collaboration device is coupled to the first portable electronic device and receives the first operation instruction. The collaboration device provides a multimedia picture, and the multimedia picture changes according to the first operation instruction. The camera provides a video image. The audio-visual processing device is coupled to the collaboration device and the camera; it receives the multimedia picture and the video image, and outputs a synthesized image with an immersive audio-visual effect according to the multimedia picture and the video image.
IMAGE OUTPUT DEVICE AND METHOD FOR CONTROLLING THE SAME
The present invention relates to a video output device mounted on a vehicle to implement augmented reality, and a method for controlling the same. The video output device comprises: a video output unit for outputting visual information to implement the augmented reality; a communication unit for receiving a front video capturing the view ahead of the vehicle; and a processor for searching the front video for at least one to-be-driven lane on which the vehicle is to be driven, and controlling the video output unit so that main carpet images guiding the to-be-driven lanes are output lane by lane.