Patent classification: G06T2207/30261
METHOD AND APPARATUS FOR DETERMINING VEHICLE LOCATION BASED ON OPTICAL CAMERA COMMUNICATION
Disclosed are a method and an apparatus for determining a vehicle location based on optical camera communication (OCC). According to an embodiment of the present disclosure, the method may include the steps of: receiving, by using a single camera provided in a vehicle, information on a distance between a plurality of rear lamps of a front vehicle and size information of the plurality of rear lamps; acquiring a rear-side image of the front vehicle through the single camera; determining a rear lamp area in the rear-side image by using a pre-trained artificial neural network; determining a driving lane of the front vehicle based on the rear lamp area; determining a distance between the single camera and each of the plurality of rear lamps based on the rear lamp area; and deriving location information of the front vehicle based on the received distance information, the size information, the distance between the single camera and each of the rear lamps, and the driving lane.
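As an illustration of the geometry such an abstract relies on, the sketch below uses a plain pinhole-camera model (not the patented method itself); the focal length in pixels, the lamp width in metres, and the triangulation over the lamp baseline are all illustrative assumptions:

```python
def lamp_distance(focal_px, lamp_width_m, lamp_width_px):
    """Pinhole-model range to one rear lamp: Z = f * W / w."""
    return focal_px * lamp_width_m / lamp_width_px

def front_vehicle_position(focal_px, lamp_sep_m, lamp_width_m,
                           left_px, right_px):
    """Locate the front vehicle from the ranges to its two rear lamps.

    left_px / right_px: detected pixel widths of the left and right lamps.
    Returns the per-lamp ranges, the midpoint range, and the lateral
    offset obtained by triangulating over the known lamp separation.
    """
    d_left = lamp_distance(focal_px, lamp_width_m, left_px)
    d_right = lamp_distance(focal_px, lamp_width_m, right_px)
    # Midpoint range: average of the two lamp ranges.
    z = 0.5 * (d_left + d_right)
    # With lamps at (x - s/2) and (x + s/2) at equal depth,
    # d_right^2 - d_left^2 = 2 * x * s, so:
    x = (d_right**2 - d_left**2) / (2.0 * lamp_sep_m)
    return d_left, d_right, z, x
```

For example, with an (assumed) 1000 px focal length, a 0.1 m-wide lamp imaged at 10 px puts that lamp at 10 m.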
In-vehicle surrounding environment recognition device
An in-vehicle surrounding environment recognition device includes: a photographic unit that photographs the road surface around a vehicle and acquires a photographic image; an application execution unit that recognizes another vehicle on the basis of the photographic image and detects a relative speed of the other vehicle with respect to the vehicle; a reflection determination unit that determines, on the basis of the photographic image, the presence or absence of a reflection of a background object from the road surface; a warning control unit that controls output of a warning signal on the basis of the result of recognition of the other vehicle; and a warning prevention adjustment unit that suppresses output of the warning signal on the basis of the relative speed of the other vehicle if it has been determined that there is a reflection of the background object from the road surface.
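The suppression step can be sketched as a threshold switch on relative speed; the sign convention (negative relative speed means the other vehicle is closing) and both threshold values are illustrative assumptions, not figures from the disclosure:

```python
def should_warn(relative_speed_mps, reflection_detected,
                base_threshold_mps=-0.5, suppressed_threshold_mps=-3.0):
    """Decide whether to emit an approach warning.

    When a road-surface reflection has been detected, the trigger
    threshold is tightened so that slow apparent 'approaches' (likely
    a reflected background object, not a real vehicle) are suppressed,
    while a fast genuine approach still fires the warning.
    """
    threshold = (suppressed_threshold_mps if reflection_detected
                 else base_threshold_mps)
    return relative_speed_mps < threshold
```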
System and method for free space estimation
A system and method for estimating free space, including: applying a machine learning model to camera images of a navigation area, where the navigation area is divided into cells; synchronizing point cloud data from the navigation area with the processed camera images; and associating with each cell in the navigation area a probability that the cell is occupied and classifications of the objects that could occupy it, based on sensor data, sensor noise, and the machine learning model.
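A common way to combine a camera-derived occupancy probability with point-cloud evidence for one cell is a log-odds update; the sketch below is a standard occupancy-grid fusion under a uniform prior, offered as a stand-in rather than the system's actual fusion rule:

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Log-odds -> probability."""
    return 1.0 / (1.0 + math.exp(-x))

def fuse_cell(camera_p_occ, lidar_p_occ, prior=0.5):
    """Fuse per-cell occupancy evidence in log-odds space.

    camera_p_occ: occupancy probability from the (assumed) learned
                  image model for this cell.
    lidar_p_occ:  occupancy probability from the synchronized
                  point-cloud returns falling in this cell.
    Each sensor contributes its evidence relative to the prior.
    """
    l = (logit(prior)
         + (logit(camera_p_occ) - logit(prior))
         + (logit(lidar_p_occ) - logit(prior)))
    return sigmoid(l)
```

Two agreeing sensors reinforce each other (0.9 and 0.9 fuse to well above 0.9), while conflicting evidence cancels back toward the prior.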
Collision avoidance system, depth imaging system, vehicle, map generator and methods thereof
According to various aspects, a collision avoidance method may include: receiving depth information from one or more depth imaging sensors of an unmanned aerial vehicle; determining from the depth information a first obstacle located within a first distance range and movement information associated with the first obstacle; determining from the depth information a second obstacle located within a second distance range and movement information associated with the second obstacle, wherein the second distance range is distinct from the first distance range; determining a virtual force vector based on the determined movement information; and controlling flight of the unmanned aerial vehicle based on the virtual force vector to avoid a collision of the unmanned aerial vehicle with the first obstacle and the second obstacle.
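One hypothetical reading of the two-band "virtual force" idea is a potential-field repulsion whose gain depends on which distance range an obstacle falls in; the bands, gains, and 1/d² falloff below are all assumptions for illustration:

```python
def virtual_force(obstacles, near_range=(0.0, 5.0), far_range=(5.0, 15.0),
                  near_gain=4.0, far_gain=1.0):
    """Sum a repulsive 'virtual force' over obstacles in two distance bands.

    obstacles: list of (x, y, distance) in the vehicle frame. Obstacles
    in the near band contribute with a larger gain than those in the far
    band, mimicking the distinct first/second distance ranges; obstacles
    outside both bands are ignored.
    """
    fx = fy = 0.0
    for x, y, d in obstacles:
        if d <= 0:
            continue
        if near_range[0] <= d < near_range[1]:
            gain = near_gain
        elif far_range[0] <= d < far_range[1]:
            gain = far_gain
        else:
            continue
        # Repulsion points away from the obstacle, magnitude ~ gain / d^2.
        mag = gain / (d * d)
        fx += -x / d * mag
        fy += -y / d * mag
    return fx, fy
```

The controller would then add this vector to the commanded velocity, so near obstacles dominate the avoidance response.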
Autonomous Navigation using Visual Odometry
A system and method are provided for autonomously navigating a vehicle. The method captures a sequence of image pairs using a stereo camera. A navigation application stores a vehicle pose (history of vehicle position). The application detects a plurality of matching feature points in a first matching image pair, and determines a plurality of corresponding object points in three-dimensional (3D) space from the first image pair. A plurality of feature points are tracked from the first image pair to a second image pair, and the plurality of corresponding object points in 3D space are determined from the second image pair. From this, a vehicle pose transformation is calculated using the object points from the first and second image pairs. The rotation angle and translation are determined from the vehicle pose transformation. If the rotation angle or translation exceeds a minimum threshold, the stored vehicle pose is updated.
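Restricting to planar motion, the pose transformation between two frames of matched ground-plane points has a closed form (a 2-D analogue of the Kabsch alignment), and the thresholded update mirrors the final step above. The point format, the thresholds, and the naive pose composition are assumptions:

```python
import math

def planar_pose_delta(pts_a, pts_b):
    """Closed-form 2-D rigid transform (theta, tx, ty) mapping pts_a -> pts_b.

    pts_a / pts_b: matched (x, y) ground-plane points from two stereo
    frames. The rotation is recovered from the cross/dot sums of the
    centered point sets, the translation from the centroids.
    """
    n = len(pts_a)
    cax = sum(p[0] for p in pts_a) / n; cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n; cby = sum(p[1] for p in pts_b) / n
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return theta, tx, ty

def update_pose(pose, delta, min_angle=1e-3, min_trans=1e-2):
    """Apply the delta only if it exceeds the minimum thresholds."""
    theta, tx, ty = delta
    if abs(theta) < min_angle and math.hypot(tx, ty) < min_trans:
        return pose  # below threshold: keep the stored pose unchanged
    x, y, heading = pose
    # Naive composition; a full implementation would rotate the
    # translation into the world frame before accumulating it.
    return (x + tx, y + ty, heading + theta)
```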
Image processing apparatus and image processing method
According to one embodiment, an image processing apparatus and an image processing method include a processed image generation unit, a provisional detection unit, and a final detection unit. The processed image generation unit generates a processed image for detection processing from an image around a vehicle taken by a camera provided on the vehicle. The provisional detection unit scans a scanning frame of a predetermined size according to a detection target object over the processed image, and detects a plurality of position candidate regions of the detection target object within the processed image by determining a feature value for each scanning position using a dictionary of the detection target object. The final detection unit determines an overlapping region of the plurality of position candidate regions, and determines a final detection position of the detection target object within the processed image based on the overlapping region and the plurality of position candidate regions.
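The final-detection step can be sketched as intersecting the candidate boxes and, when they do overlap, fusing them into one box by a score-weighted average; the (x1, y1, x2, y2) box format and the averaging rule are assumptions, not the claimed procedure:

```python
def overlap_region(boxes):
    """Intersection of axis-aligned boxes (x1, y1, x2, y2); None if empty."""
    x1 = max(b[0] for b in boxes); y1 = max(b[1] for b in boxes)
    x2 = min(b[2] for b in boxes); y2 = min(b[3] for b in boxes)
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def final_detection(candidates, scores):
    """Fuse candidate regions that share a common overlap.

    Returns None when the candidates have no common overlapping region
    (i.e. they are not detections of the same object); otherwise returns
    the score-weighted average box as the final detection position.
    """
    if overlap_region(candidates) is None:
        return None
    total = sum(scores)
    return tuple(sum(b[i] * w for b, w in zip(candidates, scores)) / total
                 for i in range(4))
```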
Information processing apparatus and information processing method using correlation between attributes
According to an embodiment, an information processing apparatus includes an attribute determiner and a setter. Each of the acquired first sets indicates a combination of observation information, indicating a result of observation of an area surrounding a moving body, and position information. The attribute determiner is configured to determine, based on the observation information, an attribute of each of the areas into which the area surrounding the moving body is divided, and to generate second sets, each indicating a combination of attribute information indicating the attribute of each area and the position information. The setter is configured to set, based on the second sets, the reliability of the attribute of the area of a target second set from the correlation between that attribute and the attributes of the corresponding areas in the other second sets.
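The abstract does not define its correlation measure; as a hypothetical stand-in, the sketch below scores reliability as the agreement ratio between the target area's attribute and the attributes observed for the corresponding area in the other second sets:

```python
def attribute_reliability(target_attr, corresponding_attrs):
    """Reliability of one area's attribute, as a stand-in correlation.

    target_attr:         attribute of the area in the target second set.
    corresponding_attrs: attributes of the same area as seen in the
                         other second sets.
    Returns the fraction of other sets that agree, in [0, 1].
    """
    if not corresponding_attrs:
        return 0.0  # no corroborating observations
    agree = sum(1 for a in corresponding_attrs if a == target_attr)
    return agree / len(corresponding_attrs)
```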
Image processing device, image processing method, and device control system
Disclosed is an image processing device for generating disparity information from a first image and a second image, wherein the first image is captured by a first capture unit and the second image is captured by a second capture unit. The image processing device includes a disparity detector configured to detect disparity information of a pixel or a pixel block of the second image by correlating the pixel or the pixel block of the second image with each of pixels or each of pixel blocks of the first image within a detection width. The disparity detector is configured to detect the disparity information of the pixel or the pixel block of the second image more than once by changing a start point of pixels or pixel blocks of the first image within the detection width.
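A minimal 1-D block-matching sketch of the multi-pass search described above: the same target block is matched against the reference row once per start point of the detection window, and the lowest-cost disparity wins. The SAD cost, window sizes, and start offsets are illustrative assumptions:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def detect_disparity(row_ref, row_tgt, x, block=3, detection_width=8,
                     start_offsets=(0, 4)):
    """Best disparity for the block of row_tgt at position x.

    The block is correlated against row_ref over `detection_width`
    positions, and the search is repeated once per entry in
    `start_offsets` (the changed start points within the detection
    width); the globally lowest-cost disparity is returned.
    """
    target = row_tgt[x:x + block]
    best = (float("inf"), 0)  # (cost, disparity)
    for start in start_offsets:
        for d in range(start, start + detection_width):
            if x + d + block > len(row_ref):
                break
            cost = sad(row_ref[x + d:x + d + block], target)
            best = min(best, (cost, d))
    return best[1]
```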
Monitoring method and apparatus using a camera
The present invention relates to a monitoring method and a monitoring apparatus using a camera. According to an embodiment of the present invention, a monitoring method using a camera includes receiving a photographed input image from the camera; detecting an object existing in the input image and positional information of the object; generating a corrected image by correcting a distorted area of the input image; and generating a synthesized image by synthesizing alarm display information which is generated on the basis of the object and the positional information, with the corrected image, and outputting the synthesized image through a display.
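Correcting a distorted area typically means inverting a lens-distortion model per pixel; the sketch below inverts a one-parameter radial model (x_d = x_u · (1 + k1 · r_u²)) by fixed-point iteration. The model and its coefficient are assumptions, not the patent's correction method:

```python
def undistort_point(xd, yd, k1, cx, cy):
    """Map a distorted pixel back toward its undistorted position.

    (xd, yd): distorted pixel; (cx, cy): distortion center; k1: radial
    coefficient. Solves x_d = x_u * (1 + k1 * r_u^2) for the undistorted
    point by fixed-point iteration (converges for mild distortion).
    """
    xu, yu = xd - cx, yd - cy   # center the distorted coordinates
    xn, yn = xu, yu             # initial guess: no distortion
    for _ in range(20):
        r2 = xn * xn + yn * yn  # squared radius of current estimate
        xn = xu / (1.0 + k1 * r2)
        yn = yu / (1.0 + k1 * r2)
    return xn + cx, yn + cy
```

Building the corrected image then amounts to sampling the input at the distorted position of every output pixel (the inverse mapping of the same model).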
AUTOMATIC OPERATION VEHICLE
An automatic operation vehicle that automatically performs an operation in an operation area is provided. A survey unit acquires the position information of a marker using a first angle between the moving direction and the direction to the marker from a first point through which the vehicle passes during movement in a constant moving direction, a second angle between the moving direction and the direction to the marker from a second, different point through which the vehicle passes during movement in the same moving direction, and the distance between the first point and the second point. The survey unit determines the first point and/or the second point during the movement of the vehicle in the moving direction such that the angle difference between the first angle and the second angle becomes close to 90°.
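The survey step is classic triangulation from two bearings along a straight run: with the moving direction as the +x axis, the two angles and the travelled baseline fix the marker position. A sketch, with the angle convention assumed:

```python
import math

def survey_marker(alpha1, alpha2, baseline):
    """Triangulate a marker from two bearings taken along a straight run.

    alpha1 / alpha2: angles (rad) between the moving direction (+x) and
    the direction to the marker, measured at the first and second points.
    baseline: distance travelled between the two points.
    Returns the marker position (x, y) relative to the first point,
    from tan(alpha1) = ym / xm and tan(alpha2) = ym / (xm - baseline).
    """
    t1, t2 = math.tan(alpha1), math.tan(alpha2)
    # Degenerate when t1 == t2 (bearings parallel); callers should
    # space the two points so the bearings actually diverge.
    xm = baseline * t2 / (t2 - t1)
    ym = xm * t1
    return xm, ym
```

The divisor t2 − t1 grows as the two bearings diverge, so a large angle difference between the two measurements conditions the solution well, consistent with the claim's preference for a difference near 90°.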