Patent classifications
G06T2207/20221 (Image fusion; Image merging)
System and method for large-scale lane marking detection using multimodal sensor data
A system and method for large-scale lane marking detection using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle; receiving point cloud data from a distance and intensity measuring device mounted on the vehicle; fusing the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data; and generating a lane marking map from the set of lane marking points.
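The fusion step described above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: LiDAR points (assumed already in the camera frame) are projected through a pinhole model with made-up intrinsics, and points that land on pixels flagged as lane markings are kept as the 3D lane marking set.

```python
# Hypothetical sketch: project 3D points into the image and keep those that
# hit pixels classified as lane markings. Intrinsics are illustrative values.

def project_point(point, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3D point (x, y, z) to integer pixel coords."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera
    return int(fx * x / z + cx), int(fy * y / z + cy)

def fuse_lane_points(points_3d, lane_mask):
    """Keep 3D points whose projection lands on a lane-marking pixel.

    lane_mask: dict mapping (u, v) pixel -> True if classified as lane marking.
    """
    lane_points = []
    for p in points_3d:
        uv = project_point(p)
        if uv is not None and lane_mask.get(uv, False):
            lane_points.append(p)
    return lane_points

# Toy example: one point projects onto a lane pixel, the other does not.
points = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.0)]
mask = {(320, 240): True}  # only the principal point is marked as "lane"
result = fuse_lane_points(points, mask)
```

The surviving 3D points would then be accumulated across frames to build the lane marking map.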
METHOD, APPARATUS, SYSTEM, AND STORAGE MEDIUM FOR 3D RECONSTRUCTION
A method, device, computer system and computer readable storage medium for 3D reconstruction are provided. The method comprises: performing a 3D reconstruction of an original 2D image of a target object to generate an original 3D object corresponding to the original 2D image; selecting a complementary view of the target object from candidate views based on a reconstruction quality of the original 3D object at the candidate views; obtaining a complementary 2D image of the target object based on the complementary view; performing a 3D reconstruction of the complementary 2D image to generate a complementary 3D object corresponding to the complementary 2D image; and fusing the original 3D object and the complementary 3D object to obtain a 3D reconstruction result of the target object.
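The view-selection step admits a compact sketch. Assuming (hypothetically) that reconstruction quality at each candidate view is already scored as a number, the complementary view is simply the one where the original reconstruction is weakest, since a new image from that direction adds the most missing information.

```python
# Hypothetical sketch: pick the candidate view where the original 3D object
# reconstructs worst; the scoring function itself is assumed given.

def select_complementary_view(quality_by_view):
    """Return the candidate view with the lowest reconstruction quality."""
    return min(quality_by_view, key=quality_by_view.get)

# Toy quality scores (higher = better reconstruction at that view).
candidates = {"front": 0.9, "left": 0.7, "back": 0.3, "top": 0.6}
complementary = select_complementary_view(candidates)
```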
HIGH DYNAMIC RANGE IMAGE SYNTHESIS METHOD AND APPARATUS, IMAGE PROCESSING CHIP AND AERIAL CAMERA
Embodiments of the present invention provide a high dynamic range (HDR) synthesis method and apparatus, an image processing chip and an aerial camera. The method includes: acquiring a plurality of to-be-synthesized images having different exposure times; calculating a mean brightness of the to-be-synthesized images; determining an image brightness type of the to-be-synthesized images according to the mean brightness; calculating a brightness difference between adjacent pixel points in one to-be-synthesized image; calculating an inter-frame difference of different to-be-synthesized images at a same pixel point position according to the brightness difference; determining a motion state of the to-be-synthesized images at the pixel point position according to the inter-frame difference; and weighting and synthesizing the to-be-synthesized images into a corresponding HDR image according to the image brightness type and the motion state.
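The pipeline above can be sketched on tiny grayscale "images" (flat lists of pixel values). This is a simplified stand-in, not the patented method: the adjacent-pixel normalization is omitted, the brightness threshold and blend weights are invented, and motion is detected with a plain inter-frame difference threshold.

```python
# Hypothetical HDR sketch: classify overall brightness, mark moving pixels by
# inter-frame difference, and blend with weights that favor the short exposure
# in bright scenes. Thresholds and weights are illustrative assumptions.

def mean_brightness(images):
    pixels = [p for img in images for p in img]
    return sum(pixels) / len(pixels)

def motion_mask(img_a, img_b, threshold=30):
    """Pixel is 'moving' when the inter-frame difference exceeds a threshold."""
    return [abs(a - b) > threshold for a, b in zip(img_a, img_b)]

def synthesize_hdr(short_exp, long_exp, bright_threshold=128):
    scene_is_bright = mean_brightness([short_exp, long_exp]) > bright_threshold
    w_short = 0.75 if scene_is_bright else 0.25  # assumed weighting rule
    moving = motion_mask(short_exp, long_exp)
    out = []
    for s, l, m in zip(short_exp, long_exp, moving):
        if m:
            out.append(float(s))  # moving pixel: trust a single frame
        else:
            out.append(w_short * s + (1 - w_short) * l)
    return out

short = [200, 210, 50]
long_ = [220, 120, 60]  # the middle pixel differs strongly -> motion
hdr = synthesize_hdr(short, long_)
```

Falling back to a single frame at moving pixels is the standard way to avoid ghosting artifacts in multi-exposure HDR.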
LEARNING DATA GENERATION DEVICE AND DEFECT IDENTIFICATION SYSTEM
A learning data generation device that can generate learning data suitable for learning of an identification model. The learning data generation device has a function of cutting out part of first image data as second image data, a function of generating a two-dimensional graphic corresponding to the area of the second image data and representing a pseudo defect, a function of generating third image data by combining the second image data and the two-dimensional graphic, and a function of assigning a label corresponding to the two-dimensional graphic to the third image data. By using the third image data for learning of the identification model, a highly accurate identification model can be generated.
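The four functions listed above map naturally onto code. The sketch below is a hypothetical illustration on 2D lists of pixel values, using a filled rectangle as the "two-dimensional graphic representing a pseudo defect"; the real device presumably supports richer shapes.

```python
# Hypothetical sketch of the learning-data generator: crop a patch (second
# image data), burn in a rectangular pseudo defect (third image data), and
# attach a label describing the graphic.

def cut_patch(image, top, left, h, w):
    """Second image data: a crop of the first image data."""
    return [row[left:left + w] for row in image[top:top + h]]

def add_pseudo_defect(patch, top, left, h, w, value=255):
    """Third image data: the patch with a rectangular pseudo defect, plus
    a label corresponding to the two-dimensional graphic."""
    out = [row[:] for row in patch]
    for r in range(top, top + h):
        for c in range(left, left + w):
            out[r][c] = value
    return out, {"label": "defect", "shape": "rectangle",
                 "bbox": (top, left, h, w)}

image = [[10] * 6 for _ in range(6)]                   # first image data
patch = cut_patch(image, 1, 1, 4, 4)                   # second image data
defected, label = add_pseudo_defect(patch, 1, 1, 2, 2) # third image data
```

Pairs of `defected` images and their labels would then be fed to the identification model as training data.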
HIGH-DEFINITION MAP CREATION METHOD AND DEVICE, AND ELECTRONIC DEVICE
A high-definition map creation method includes: obtaining point cloud data collected with respect to a target region, the point cloud data including K frames of point clouds and an initial pose of each frame of point cloud, K being an integer greater than 1; associating the K frames of point clouds with each other in accordance with the initial pose to obtain a first point cloud relation graph of the K frames of point clouds; performing point cloud registration on the K frames of point clouds in accordance with the first point cloud relation graph and the initial pose to obtain a target relative pose of each frame of point cloud in the K frames of point clouds; and splicing the K frames of point clouds in accordance with the target relative pose to obtain a point cloud map of the target region.
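The final splicing step can be sketched once target poses are known. For brevity this hypothetical example uses 2D poses (rotation angle plus translation) rather than full 6-DoF transforms, and it skips the registration that produces the poses.

```python
# Hypothetical splicing sketch: transform each frame's points into a common
# map frame using its target pose, then merge all frames into one point map.
import math

def transform(points, pose):
    """Apply a 2D pose (theta, tx, ty) to a list of (x, y) points."""
    theta, tx, ty = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def splice(frames, poses):
    """Merge K frames of points into one map using their target poses."""
    cloud_map = []
    for points, pose in zip(frames, poses):
        cloud_map.extend(transform(points, pose))
    return cloud_map

frames = [[(0.0, 0.0), (1.0, 0.0)],  # frame 1
          [(0.0, 0.0)]]              # frame 2
poses = [(0.0, 0.0, 0.0),            # identity
         (0.0, 2.0, 0.0)]            # shifted 2 m along x
point_map = splice(frames, poses)
```

In the full method the poses would come out of pairwise registration over the first point cloud relation graph, not be given directly.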
APPARATUS FOR ACQUIRING DEPTH IMAGE, METHOD FOR FUSING DEPTH IMAGES, AND TERMINAL DEVICE
Provided are an apparatus for acquiring a depth image, a method for fusing depth images, and a terminal device. The apparatus for acquiring a depth image includes an emitting module, a receiving module, and a processing unit. The emitting module is configured to emit a speckle array to an object, where the speckle array includes p mutually spaced apart speckles. The receiving module includes an image sensor that outputs a pixel signal. The processing unit is configured to receive the pixel signal and generate a sparse depth image based on the pixel signal, align an RGB image at a resolution of a*b with the sparse depth image, and fuse the aligned sparse depth image with the RGB image using a pre-trained image fusion model to obtain a dense depth image at a resolution of a*b.
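The sparse-to-dense direction of that fusion can be illustrated with a crude stand-in for the pre-trained model. The sketch below fills each missing depth from its nearest valid neighbor along the row; a learned fusion model would additionally be guided by the aligned RGB image, which this simplification ignores.

```python
# Hypothetical stand-in for the learned fusion model: nearest-valid-neighbor
# fill along each row of a sparse depth image (0.0 marks a missing depth).

def densify_row(row, missing=0.0):
    """Fill missing depths with the nearest valid value in the row."""
    out = row[:]
    valid = [i for i, d in enumerate(row) if d != missing]
    for i, d in enumerate(row):
        if d == missing and valid:
            nearest = min(valid, key=lambda j: abs(j - i))
            out[i] = row[nearest]
    return out

def densify(sparse_depth, missing=0.0):
    """Turn a sparse depth image into a dense one, row by row."""
    return [densify_row(row, missing) for row in sparse_depth]

sparse = [[0.0, 2.0, 0.0, 0.0],
          [5.0, 0.0, 0.0, 3.0]]
dense = densify(sparse)
```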
SYSTEM AND METHOD FOR HYBRID IMAGING
The present disclosure provides systems and methods for hybrid imaging. The systems and methods may obtain a first magnetic resonance (MR) image of a target object. The first MR image may be acquired by a magnetic resonance imaging (MRI) device using a first imaging sequence. The systems and methods may also obtain a second MR image of the target object. The second MR image may be acquired by the MRI device using a second imaging sequence. The second MR image may correspond to a target respiratory phase of the target object. The systems and methods may also obtain a target emission computed tomography (ECT) image of the target object. The target ECT image may correspond to the target respiratory phase. The systems and methods may further fuse, based on the second MR image, the first MR image and the target ECT image.
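One plausible reading of "fuse, based on the second MR image" is that the second MR image, sharing the target respiratory phase with the ECT image, anchors the alignment between the two modalities. The sketch below is purely illustrative: on 1D signals, it estimates a translation between the two MR images by exhaustive SSD search, applies that shift to the ECT values, and alpha-blends the result with the first MR image.

```python
# Hypothetical fusion sketch on 1D signals: align via MR2 (same respiratory
# phase as the ECT data), then blend MR1 with the shifted ECT values.

def best_shift(ref, moving, max_shift=2):
    """Integer shift that best aligns `moving` to `ref` by mean SSD."""
    def ssd(s):
        pairs = [(ref[i], moving[i - s]) for i in range(len(ref))
                 if 0 <= i - s < len(moving)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=ssd)

def fuse(mr1, mr2, ect, alpha=0.5):
    """Fuse MR1 with ECT, using MR2 to estimate the respiratory motion."""
    s = best_shift(mr1, mr2)
    shifted_ect = [ect[i - s] if 0 <= i - s < len(ect) else 0.0
                   for i in range(len(mr1))]
    return [alpha * a + (1 - alpha) * b for a, b in zip(mr1, shifted_ect)]

mr1 = [0, 0, 10, 0, 0]   # first MR image (reference phase)
mr2 = [0, 10, 0, 0, 0]   # second MR image, displaced by respiration
ect = [0, 4, 0, 0, 0]    # ECT data, displaced the same way as MR2
fused = fuse(mr1, mr2, ect)
```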
Personalized videos featuring multiple persons
Provided are systems and methods for personalized videos featuring multiple persons. An example method includes receiving a user selection of a video having at least one frame with metadata that include a first location and a second location and receiving an image of a source face and a further image of a further source face, modifying the image of the source face to generate an image of a modified source face and modifying the further image of the further source face to generate an image of a modified further source face, inserting, in the at least one frame of the video, the image of the modified source face at the first location and the image of the modified further source face at the second location to generate a personalized video, and sending the personalized video via a communication chat.
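The insertion step is easy to sketch. The example below is a hypothetical simplification that treats frames and faces as 2D arrays of pixel values and the metadata locations as (row, column) offsets; the face modification itself is assumed already done.

```python
# Hypothetical sketch: paste two modified source-face patches into a frame at
# the two locations carried in the frame's metadata.

def paste(frame, patch, top, left):
    """Paste a 2D patch into a 2D frame at (top, left), in place."""
    for r, row in enumerate(patch):
        for c, v in enumerate(row):
            frame[top + r][left + c] = v

def personalize(frame, metadata, face_a, face_b):
    """Insert both modified faces at their metadata locations."""
    out = [row[:] for row in frame]
    paste(out, face_a, *metadata["first_location"])
    paste(out, face_b, *metadata["second_location"])
    return out

frame = [[0] * 6 for _ in range(4)]
meta = {"first_location": (0, 0), "second_location": (2, 4)}
face_a = [[1, 1], [1, 1]]  # modified source face
face_b = [[2, 2], [2, 2]]  # modified further source face
result = personalize(frame, meta, face_a, face_b)
```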
Method and apparatus for image processing and computer storage medium
A method and an apparatus for processing an image are provided. The method may include: acquiring a set of image sequences, the set of image sequences including a plurality of image sequence subsets divided according to similarity measurements between image sequences, each image sequence subset including a basic image sequence and other image sequences, wherein a first similarity measurement corresponding to the basic image sequence is greater than or equal to a first similarity measurement corresponding to each of the other image sequences; creating an original three-dimensional model using the basic image sequence; and creating a final three-dimensional model using the other image sequences based on the original three-dimensional model.
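The selection of the basic sequence within a subset reduces to an argmax over the similarity measurements. The sketch below is a hypothetical illustration with invented sequence names and scores; the grouping into subsets and the 3D modeling itself are assumed given.

```python
# Hypothetical sketch: within a subset, the sequence with the highest
# similarity measurement becomes the basic sequence for the original model;
# the rest refine it into the final model.

def pick_basic(subset, similarity):
    """Basic sequence = the one with the greatest similarity measurement."""
    return max(subset, key=similarity.get)

similarity = {"seq_a": 0.92, "seq_b": 0.75, "seq_c": 0.88}
subset = ["seq_a", "seq_b", "seq_c"]
basic = pick_basic(subset, similarity)
others = [s for s in subset if s != basic]  # used for the final model
```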
Processing apparatus, image sensor, and system
Provided is a processing apparatus including a processing unit that is connected to a data bus and performs, through the data bus, control relating to images output by each of a plurality of image sensors connected to the data bus.