Patent classifications
G06T2207/30252
Enhanced Illumination-Invariant Imaging
Devices, systems, and methods for generating illumination-invariant images are disclosed. A method may include activating, by a device, a camera to capture first image data; while the camera is capturing the first image data, activating a first light source; receiving the first image data, the first image data having pixels having first color values; identifying first light generated by the first light source while the camera is capturing the first image data; identifying, based on the first image data, second light generated by a second light source; generating, based on the first light and the second light, second image data that are illumination-invariant; and presenting the second image data.
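The separation of the controlled (first) light from the ambient (second) light can be illustrated with a flash/no-flash differencing sketch. This is a minimal illustration, not the patented method; the function names, single-channel pixel lists, and gain parameter are all assumptions:

```python
def separate_light_sources(lit_frame, ambient_frame):
    """Estimate per-pixel contributions of the device's own light source
    (first light) and the external source (second light).

    lit_frame:     intensities captured with the first light source on
    ambient_frame: intensities captured with the first light source off
    """
    first_light = [max(l - a, 0) for l, a in zip(lit_frame, ambient_frame)]
    second_light = list(ambient_frame)
    return first_light, second_light


def illumination_invariant(lit_frame, ambient_frame, gain=1.0):
    # Re-render the scene as if lit only by the controlled source, whose
    # spectrum and intensity are known, so the output does not depend on
    # the ambient (second) illumination.
    first, _ = separate_light_sources(lit_frame, ambient_frame)
    return [min(int(gain * v), 255) for v in first]
```

Because the differenced image is lit only by a source the device controls, it stays stable across changes in sunlight or street lighting.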
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data obtained from an observation space. The processor detects a subject image of a detection target from the observation data, calculates a plurality of individual indices indicating degrees of reliability, each of which relates to at least one of identification information or measurement information regarding the detection target, and also calculates an integrated index obtained by integrating the plurality of calculated individual indices. The output interface outputs the integrated index.
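One simple way to fold several per-aspect reliability indices into a single integrated index is a normalized weighted sum. The function name, the equal default weights, and the 0..1 index range below are illustrative assumptions, not details from the abstract:

```python
def integrated_index(individual_indices, weights=None):
    """Combine per-aspect reliability indices (e.g. one for identification,
    one for measurement) into one integrated index in the same 0..1 range."""
    if weights is None:
        weights = [1.0] * len(individual_indices)
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, individual_indices)) / total
```

Downstream consumers can then threshold the one integrated value instead of reasoning about each individual index separately.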
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, AND METHOD FOR PROCESSING INFORMATION
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data obtained from an observation space. The processor detects a detection target included in the observation data. The processor maps coordinates of the detected detection target as coordinates of a detection target in a virtual space, tracks a position and a velocity of a material point indicating the detection target in the virtual space, and maps coordinates of the tracked material point in the virtual space as coordinates in a display space. The processor sequentially observes a size of the detection target in the display space and estimates the size of the detection target at a present time on the basis of observed values of the size at the present time and estimated values of the size at past times. The output interface outputs output information based on the coordinates of the material point mapped to the display space and the estimated size of the detection target.
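Estimating the present size from the present observation and the past estimate can be sketched as a recursive blend (exponential smoothing). The smoothing gain and function name are assumptions for illustration only:

```python
def update_size_estimate(previous_estimate, observed_size, alpha=0.3):
    """Blend the present observation with the past estimate so the tracked
    size changes smoothly despite noisy per-frame measurements.

    alpha is an assumed smoothing gain (0 < alpha <= 1), not a value
    taken from the abstract.
    """
    return alpha * observed_size + (1.0 - alpha) * previous_estimate
```

Calling this once per frame keeps the displayed size stable even when single-frame size observations jitter.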
CRACK DETECTION DEVICE, CRACK DETECTION METHOD AND COMPUTER READABLE MEDIUM
In a crack detection device (10), an image acquisition unit (21) acquires image data obtained by taking an image of a road surface from an oblique direction with respect to the road surface. An image classification unit (22) classifies the acquired image data into an acceptable range, with a resolution higher than a standard value, and an unacceptable range, with a resolution equal to or less than the standard value. A data output unit (23) outputs acceptable data, being the image data of the part classified into the acceptable range, as data for detecting a crack on the road surface. An image display unit (24) displays the output data.
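In an oblique view, ground resolution degrades with distance: image rows covering nearby road have a small mm-per-pixel value (fine resolution), while distant rows have a large one. A toy per-row classifier against the standard value might look like this (the row-wise model and names are assumptions, not from the abstract):

```python
def classify_rows_by_resolution(row_resolutions_mm_per_px, standard_mm_per_px):
    """Split image rows into an acceptable range (resolution finer than the
    standard, i.e. smaller mm/px) and an unacceptable range (equal to or
    coarser than the standard)."""
    acceptable, unacceptable = [], []
    for row, resolution in enumerate(row_resolutions_mm_per_px):
        if resolution < standard_mm_per_px:
            acceptable.append(row)
        else:
            unacceptable.append(row)
    return acceptable, unacceptable
```

Only the acceptable rows would then be forwarded as data for crack detection.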
PROCESSING DEVICE
Erroneous detection due to erroneous parallax measurement is suppressed to accurately detect a step present on a road. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle. The processing device includes: a stereo matching unit 200 that measures a parallax of the pair of images and generates a parallax image; a step candidate extraction unit 300 that extracts a step candidate of a road on which the vehicle travels from the parallax image generated by the stereo matching unit 200; a line segment candidate extraction unit 400 that extracts a line segment candidate from the images acquired by the stereo camera unit 100; an analysis unit 500 that collates the step candidate extracted by the step candidate extraction unit 300 with the line segment candidate extracted by the line segment candidate extraction unit 400 and analyzes the validity of the step candidate based on the collation result and an inclination of the line segment candidate; and a three-dimensional object detection unit 600 that detects a step present on the road based on the analysis result of the analysis unit 500.
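The collation of a step candidate with line segment candidates and the inclination check can be sketched as a simple validity test. The distance and inclination thresholds, the midpoint-based collation, and all names are illustrative assumptions, not the patented analysis:

```python
import math

def step_candidate_is_valid(step_xy, line_segments,
                            max_distance=5.0, max_incline_deg=30.0):
    """Keep a parallax-derived step candidate only if a nearby image line
    segment supports it and that segment's inclination is plausible for a
    road step edge (near-horizontal in the image)."""
    sx, sy = step_xy
    for x1, y1, x2, y2 in line_segments:
        # Crude collation test: distance from the step candidate to the
        # segment midpoint.
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if math.hypot(mx - sx, my - sy) > max_distance:
            continue
        incline = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if incline <= max_incline_deg:
            return True
    return False
```

A candidate produced by a parallax error typically has no supporting image edge nearby, so it is rejected.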
DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OBJECT TRACKING
A device and computer-implemented method for object tracking. The method comprises providing a sequence of digital images, determining a sequence of relational graph embeddings, wherein a first relational graph embedding of the sequence comprises a first object embedding representing a first object in a first digital image of the sequence of digital images, wherein the first relational graph embedding comprises a first relation embedding of a relation for the first object embedding, wherein the first relation embedding relates the first object embedding to embeddings representing other objects of the first digital image in the first relational graph embedding and to embeddings in a second relational graph embedding of the sequence that represent objects of a second digital image of the sequence of digital images.
IMAGE PROCESSING METHOD, NETWORK TRAINING METHOD, AND RELATED DEVICE
This application provides an image processing method, a network training method, and a related device, and relates to image processing technologies in the artificial intelligence field. The method includes: inputting a first image including a first vehicle into an image processing network to obtain a first result output by the image processing network, where the first result includes location information of a two-dimensional (2D) bounding frame of the first vehicle, coordinates of a wheel of the first vehicle, and a first angle of the first vehicle, the first angle indicating an included angle between a side line of the first vehicle and a first axis of the first image; and generating location information of a three-dimensional (3D) outer bounding box of the first vehicle based on the first result.
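One geometric step consistent with this description: extend the vehicle side line through a wheel ground-contact point at the detected angle, and intersect it with the vertical edges of the 2D frame to obtain the visible bottom edge of the 3D outer bounding box. This sketch is a hedged illustration under that assumption; all names and conventions are hypothetical:

```python
import math

def bottom_edge_of_3d_box(bbox_2d, wheel_xy, side_angle_deg):
    """bbox_2d = (left, top, right, bottom) in image coordinates.
    Returns the two endpoints of the side line clipped to the 2D frame's
    vertical edges -- a candidate bottom edge of the 3D bounding box."""
    left, _, right, _ = bbox_2d
    wx, wy = wheel_xy
    slope = math.tan(math.radians(side_angle_deg))
    return ((left, wy + slope * (left - wx)),
            (right, wy + slope * (right - wx)))
```

The full 3D box would additionally need the vehicle's extent along the side line and its height, which the network's other outputs would constrain.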
POINT CLOUD REGISTRATION METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM
A point cloud registration method, apparatus, device, and storage medium are provided. The method includes: acquiring target point cloud data; dividing the target point cloud data into a plurality of point cloud sets; determining a coincidence degree between every two point cloud sets and determining a fixed point cloud set and a registration point cloud set from two point cloud sets with a coincidence degree between the two point cloud sets being greater than a preset threshold; determining a target registration matrix between the fixed point cloud set and the registration point cloud set; and performing registration of the fixed point cloud set with the registration point cloud set according to the target registration matrix.
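The "coincidence degree" between two point cloud sets can be approximated as the fraction of points in one set with a close neighbour in the other. The brute-force implementation, the threshold values, and the fixed/registration convention below are illustrative assumptions, not the method's definitions:

```python
def coincidence_degree(set_a, set_b, distance_threshold=0.1):
    """Fraction of points in set_a having a neighbour in set_b within
    distance_threshold -- a simple stand-in for the overlap between two
    point cloud sets (O(n*m) brute force, illustrative only)."""
    t2 = distance_threshold ** 2
    def has_neighbour(p):
        return any(sum((pi - qi) ** 2 for pi, qi in zip(p, q)) <= t2
                   for q in set_b)
    return sum(1 for p in set_a if has_neighbour(p)) / len(set_a)


def pairs_to_register(point_cloud_sets, threshold=0.5):
    """Select set pairs whose coincidence degree exceeds the preset
    threshold; by convention the first of each pair is treated as the
    fixed set and the second as the registration set."""
    pairs = []
    for i in range(len(point_cloud_sets)):
        for j in range(i + 1, len(point_cloud_sets)):
            if coincidence_degree(point_cloud_sets[i],
                                  point_cloud_sets[j]) > threshold:
                pairs.append((i, j))
    return pairs
```

Each selected pair would then be passed to a rigid-transform estimator (e.g. an ICP-style solver) to compute the target registration matrix.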
OBTAINING AND AUGMENTING AGRICULTURAL DATA AND GENERATING AN AUGMENTED DISPLAY
A geographic position of an agricultural machine is captured. Agricultural data corresponding to the geographic position is received. Georeferenced visual indicia indicative of the received agricultural data are displayed.
System Adapted to Detect Road Condition in a Vehicle and a Method Thereof
A system adapted to detect a road condition in a vehicle, and a method thereof, use geometrical laser projections and an image processing system. The system includes a laser source, an imaging unit, and at least one processing unit. The laser source is adapted to project geometrical laser projections onto the road. The imaging unit is adapted to capture images of the geometrical projections. The processing unit is configured to calculate a surface reflectance for each projected geometrical projection. Further, it is configured to compute geometrical parameters of the projections at regular time intervals based on the captured images. It determines a road condition based on the surface reflectance and the geometrical parameters.
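Combining the surface reflectance with a geometrical parameter of the projection (here, the projected line's width) could be done with a simple decision rule. The thresholds, class labels, and function name are purely illustrative assumptions:

```python
def classify_road_condition(surface_reflectance, line_width_px, nominal_width_px):
    """Toy decision rule: a highly specular return suggests a water or ice
    film, while widening/breakup of the projected line suggests a rough or
    uneven surface. All thresholds are assumed values."""
    if surface_reflectance > 0.8:
        return "wet/icy"
    if line_width_px > 1.5 * nominal_width_px:
        return "rough"
    return "dry"
```

In practice the geometrical parameters would be tracked at regular intervals, so a real rule would operate on their time series rather than a single frame.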