CARPET DETECTION METHOD, MOVEMENT CONTROL METHOD, AND MOBILE MACHINE USING THE SAME
Carpet detection using an RGB-D camera, and mobile machine movement control based thereon, are disclosed. Carpets and carpet curls are detected by obtaining an RGB-D image pair including an RGB image and a depth image through an RGB-D camera, detecting carpet and carpet-curl areas in the RGB image and generating a 2D bounding box to mark each area using a deep learning model, and generating groups of carpet and carpet-curl points corresponding to each of the carpet and carpet-curl areas by matching each pixel of the RGB image within each 2D bounding box corresponding to the carpet and carpet-curl areas to each pixel in the depth image.
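The pixel-matching step in this abstract amounts to back-projecting each RGB pixel inside a detected bounding box through its aligned depth value into a 3D point. A minimal sketch under a pinhole camera model follows; the function name, intrinsics, and depth values are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch: for each RGB pixel inside a detected 2D bounding box,
# look up the aligned depth value and back-project it to a 3D "carpet point"
# with a pinhole camera model. Intrinsics (fx, fy, cx, cy) are assumed.

def bbox_pixels_to_points(depth, bbox, fx, fy, cx, cy):
    """depth: row-major 2D list of metres; bbox: (u0, v0, u1, v1), inclusive."""
    u0, v0, u1, v1 = bbox
    points = []
    for v in range(v0, v1 + 1):
        for u in range(u0, u1 + 1):
            z = depth[v][u]
            if z <= 0.0:           # invalid depth reading, skip the pixel
                continue
            x = (u - cx) * z / fx  # pinhole back-projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 3x3 depth map, all pixels at 2 m; principal point at pixel (1, 1).
depth = [[2.0] * 3 for _ in range(3)]
pts = bbox_pixels_to_points(depth, (0, 0, 2, 2), fx=1.0, fy=1.0, cx=1.0, cy=1.0)
```

Grouping the resulting points per bounding box yields one point set per detected carpet or carpet-curl area.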
Image-based velocity control for a turning vehicle
An autonomous vehicle control system is provided. The control system may include a plurality of cameras to acquire a plurality of images of an area in the vicinity of a vehicle; and at least one processing device configured to: recognize a curve to be navigated based on map data and vehicle position information; determine an initial target velocity for the vehicle based on at least one characteristic of the curve as reflected in the map data; adjust a velocity of the vehicle to the initial target velocity; determine, based on the plurality of images, observed characteristics of the curve; determine an updated target velocity based on the observed characteristics of the curve; and adjust the velocity of the vehicle to the updated target velocity.
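The two-stage logic (map-based initial target, then an image-based update) can be illustrated with a common curve-speed heuristic; the lateral-acceleration limit and radii below are assumed example values, not figures from the patent:

```python
import math

# Sketch of the two-stage velocity logic: an initial target speed from the
# map's curve radius, then an updated target once the cameras observe the
# actual radius. The lateral-acceleration limit is an assumed parameter.

def curve_target_speed(radius_m, a_lat_max=2.0, v_max=30.0):
    """Fastest speed (m/s) keeping lateral acceleration under a_lat_max."""
    return min(v_max, math.sqrt(a_lat_max * radius_m))

initial = curve_target_speed(100.0)   # radius taken from map data
updated = curve_target_speed(50.0)    # tighter radius observed in the images
```

The vehicle would first slow to `initial`, then slow further to `updated` once the cameras reveal a sharper curve than the map suggested.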
Auto clean machine, cliff determining method and surface type determining method
An auto clean machine comprises: a chassis; an internal light source, located inside the chassis, for emitting internal light; an external light source, located outside the chassis, for emitting external light; an optical sensor configured to sense optical data generated according to the external light or according to the internal light; and a control circuit configured to analyze optical information of the optical data. If the internal light source is activated, the external light source is de-activated, and the control circuit determines that the variation of the optical information is larger than a variation threshold, the control circuit de-activates the internal light source and activates the external light source.
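The switching rule reads as a small state machine: while the internal source is active and the external one is off, a jump in the sensed optical information larger than the threshold flips the sources. A minimal sketch, with illustrative names and threshold:

```python
# Sketch of the light-source switching rule from the abstract. The class
# name, attribute names, and threshold value are illustrative assumptions.

class LightController:
    def __init__(self, variation_threshold):
        self.threshold = variation_threshold
        self.internal_on = True    # start on the internal light source
        self.external_on = False

    def update(self, variation):
        """Apply one control step given the measured optical variation."""
        if self.internal_on and not self.external_on and variation > self.threshold:
            self.internal_on = False   # de-activate the internal light
            self.external_on = True    # activate the external light

ctrl = LightController(variation_threshold=10.0)
ctrl.update(3.0)    # small variation: no change
ctrl.update(25.0)   # large variation: switch sources
```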
Vehicle and method of controlling the same
A vehicle and a method of controlling the same are provided. The method includes: recognizing a forward vehicle by processing image data captured by an image sensor disposed at the vehicle so as to have a field of view of the outside of the vehicle; obtaining a distance to the forward vehicle by processing detection data captured by a radar disposed at the vehicle so as to have a detecting area of the outside of the vehicle; obtaining a change amount of vertical movement of the forward vehicle in the image data when the distance to the forward vehicle is equal to or less than a reference distance; obtaining a height of an obstacle on the road surface corresponding to the change amount; obtaining the height of the obstacle on the road surface from the image data when the distance to the forward vehicle exceeds the reference distance; identifying a driving speed of the vehicle; identifying a reference height corresponding to the driving speed of the vehicle; and outputting deceleration guide information when the height of the obstacle on the road surface is greater than or equal to the reference height.
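The decision flow can be sketched compactly: the distance to the forward vehicle selects how the obstacle height is obtained, and the warning fires when that height reaches a speed-dependent reference height. The reference-height table and all numbers are assumed stand-ins, not values from the patent:

```python
# Sketch of the decision flow: near forward vehicle -> infer obstacle height
# from its vertical movement (bounce); far forward vehicle -> measure the
# obstacle directly in the image. The speed-to-reference-height table is an
# illustrative assumption.

REFERENCE_HEIGHT_BY_SPEED = [(30.0, 0.15), (60.0, 0.10), (120.0, 0.05)]

def reference_height(speed_kmh):
    """Faster driving tolerates a lower obstacle before warning."""
    for max_speed, height in REFERENCE_HEIGHT_BY_SPEED:
        if speed_kmh <= max_speed:
            return height
    return 0.05

def should_warn(distance_m, reference_distance_m, speed_kmh,
                height_from_bounce, height_from_image):
    if distance_m <= reference_distance_m:
        height = height_from_bounce   # near: use the forward vehicle's bounce
    else:
        height = height_from_image    # far: measure the obstacle in the image
    return height >= reference_height(speed_kmh)

near_warn = should_warn(20.0, 50.0, 80.0,
                        height_from_bounce=0.08, height_from_image=0.02)
far_warn = should_warn(60.0, 50.0, 80.0,
                       height_from_bounce=0.08, height_from_image=0.02)
```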
IMAGE DATA COLLECTING APPARATUS, METHOD, AND PROGRAM
An image data collecting apparatus includes: an acquisition unit that acquires position information of a plurality of mobile bodies, each having a photographing device mounted thereon; a selecting unit that, on the basis of the position information acquired for the plurality of mobile bodies, information indicating the photographing ranges of the photographing devices mounted on the plurality of mobile bodies, and position information of a predetermined object, selects from the plurality of mobile bodies a mobile body whose photographing device is to output collection target image data; a transmitting unit that transmits, to the selected mobile body, a collection instruction for image data that is a photographing result of the photographing device mounted on the mobile body; and a receiving unit that receives the image data transmitted from the mobile body having received the collection instruction.
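One simple reading of the selecting unit: among mobile bodies reporting their positions, pick the nearest one whose photographing range covers the target object. The sketch below models the photographing range as a circle, which is an assumption made for illustration; identifiers and coordinates are invented:

```python
import math

# Sketch of the selection step: choose the closest mobile body whose
# photographing range (modelled here as a simple radius) covers the
# target object's position. The circular range model is an assumption.

def select_mobile_body(bodies, target):
    """bodies: list of (body_id, (x, y), range_m); target: (x, y)."""
    best_id, best_dist = None, float("inf")
    for body_id, pos, range_m in bodies:
        dist = math.hypot(pos[0] - target[0], pos[1] - target[1])
        if dist <= range_m and dist < best_dist:  # in range and closer
            best_id, best_dist = body_id, dist
    return best_id

bodies = [("car_a", (0.0, 0.0), 30.0),
          ("car_b", (10.0, 0.0), 30.0),
          ("car_c", (100.0, 0.0), 5.0)]
chosen = select_mobile_body(bodies, target=(12.0, 0.0))
```

The transmitting unit would then send the collection instruction to `chosen`.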
PERIPHERY MONITORING DEVICE FOR WORKING MACHINE
A periphery monitoring device calculates an expected passage range indicating a range of a locus of a machine body when a lower travelling body travels in an imaging direction of a camera, based on a slewing angle of an upper slewing body and an attitude of an attachment, and superimposes a range image indicating the calculated expected passage range on an image captured by the camera to display the superimposed image on the display.
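The expected passage range is essentially the corridor the tracks would sweep, rotated by the upper slewing body's angle so that travel direction matches the camera's imaging direction. A geometric sketch, with illustrative track width and reach:

```python
import math

# Sketch of the expected passage range: the corridor swept by the lower
# travelling body, rotated by the upper body's slewing angle about the
# machine centre. Width and reach values are illustrative assumptions.

def passage_corridor(slewing_angle_rad, track_width, reach):
    """Return the 4 corners (machine-frame metres) of the swept corridor."""
    c, s = math.cos(slewing_angle_rad), math.sin(slewing_angle_rad)
    half = track_width / 2.0
    corners = [(-half, 0.0), (half, 0.0), (half, reach), (-half, reach)]
    # Rotate each corner by the slewing angle about the machine centre.
    return [(x * c - y * s, x * s + y * c) for x, y in corners]

corridor = passage_corridor(0.0, track_width=3.0, reach=5.0)
```

Projecting these corners into the camera image would give the range image that is superimposed on the display.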
APPARATUS AND METHOD FOR ESTIMATING DISTANCE AND NON-TRANSITORY COMPUTER-READABLE MEDIUM CONTAINING COMPUTER PROGRAM FOR ESTIMATING DISTANCE
An apparatus for estimating distance inputs an outside image, obtained from a camera mounted on a vehicle and representing the situation around the vehicle, into a classifier to detect a vehicle region representing a target vehicle and to classify the target vehicle represented in the detected vehicle region as one of preregistered types of vehicles. The apparatus then identifies the position of a virtual object corresponding to the target vehicle represented in the vehicle region, and estimates the distance from the vehicle to the virtual object as the distance from the vehicle to the target vehicle.
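One way to read the virtual-object idea: the classified vehicle type selects a preregistered real-world size, and the pinhole relation between real size and image size gives the range. The type table, focal length, and pixel height below are illustrative assumptions:

```python
# Sketch: once the classifier names the target vehicle's type, a
# preregistered real-world height for that type is assumed for the
# "virtual object", and distance = f * real_height / image_height
# (pinhole model). The table and values are illustrative.

VEHICLE_HEIGHT_M = {"sedan": 1.5, "truck": 3.5, "bus": 3.2}

def estimate_distance(vehicle_type, bbox_height_px, focal_length_px):
    """Range to the virtual object standing in for the target vehicle."""
    real_height = VEHICLE_HEIGHT_M[vehicle_type]
    return focal_length_px * real_height / bbox_height_px

d = estimate_distance("sedan", bbox_height_px=100.0, focal_length_px=1000.0)
```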
Obstacle detection and vehicle navigation using resolution-adaptive fusion of point clouds
A method for obstacle detection and navigation of a vehicle using resolution-adaptive fusion includes performing, by a processor, a resolution-adaptive fusion of at least a first three-dimensional (3D) point cloud and a second 3D point cloud to generate a fused, denoised, and resolution-optimized 3D point cloud that represents an environment associated with the vehicle. The first 3D point cloud is generated by a first-type 3D scanning sensor, and the second 3D point cloud is generated by a second-type 3D scanning sensor. The second-type 3D scanning sensor includes a different resolution in each of a plurality of different measurement dimensions relative to the first-type 3D scanning sensor. The method also includes detecting obstacles and navigating the vehicle using the fused, denoised, and resolution-optimized 3D point cloud.
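One simple interpretation of resolution-adaptive fusion is an inverse-variance weighted average of matched point pairs, applied independently per measurement dimension so each sensor dominates the axes it resolves best. The per-axis sigmas below are illustrative, not figures from the patent:

```python
# Sketch of resolution-adaptive fusion: matched point pairs from the two
# sensors are averaged per axis, weighted by each sensor's resolution
# (treated as a per-axis standard deviation, so weight = 1/sigma^2).
# All sigma values are illustrative assumptions.

def fuse_points(p1, sigma1, p2, sigma2):
    """Inverse-variance weighted average of two matched 3D points,
    computed independently on each measurement dimension."""
    fused = []
    for a, sa, b, sb in zip(p1, sigma1, p2, sigma2):
        wa, wb = 1.0 / sa**2, 1.0 / sb**2
        fused.append((wa * a + wb * b) / (wa + wb))
    return tuple(fused)

# Sensor 1 precise laterally (x, y); sensor 2 precise in depth (z):
fused = fuse_points((1.0, 2.0, 10.0), (0.01, 0.01, 1.0),
                    (1.2, 2.2, 10.5), (1.0, 1.0, 0.01))
```

Each fused coordinate lands close to the value reported by whichever sensor resolves that axis more finely.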
Image processing device
Provided is an image processing device that can accurately detect a target object, even when a high-distortion lens is used. According to the present invention, a camera 100 captures images in accordance with a synchronization signal Sig1, a camera 101 captures images in accordance with a synchronization signal Sig2, an area of interest setting unit 1033 sets an area of interest that represents a region to which attention is to be paid, a phase difference setting unit 1034 sets a shift Δt (a phase difference) for synchronization signal Sig1 and synchronization signal Sig2 that synchronizes the imaging timing of camera 100 and camera 101 with respect to the area of interest in the images captured by camera 100 and a region of the images captured by camera 101 that corresponds to the area of interest, and a synchronization signal generation unit 102 generates synchronization signal Sig1 and synchronization signal Sig2 on the basis of the shift Δt.
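For rolling-shutter sensors that read out line by line, a shift of this kind can be computed so both cameras expose the area of interest at the same instant. A sketch under that assumption; the line time and row indices are invented example values:

```python
# Sketch of the phase-difference idea for rolling-shutter cameras: shifting
# camera 101's frame start by delta_t aligns both cameras' line-by-line
# readout at the area of interest. Row indices and line time are assumed.

def phase_shift(roi_row_cam0, roi_row_cam1, line_time_s):
    """delta_t that aligns the two cameras' readout of the ROI rows."""
    return (roi_row_cam0 - roi_row_cam1) * line_time_s

# Camera 100 reaches the ROI at row 400, camera 101 at row 250;
# each sensor line takes 20 microseconds to read out.
dt = phase_shift(400, 250, 20e-6)  # camera 101 starts this much later
```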
Electronic device and method for vehicle driving assistance
An electronic device for and a method of assisting vehicle driving are provided. The electronic device includes: a plurality of cameras configured to capture a surrounding image around a vehicle; at least one sensor configured to sense an object around the vehicle; and a processor configured to: obtain, during vehicle driving, a plurality of image frames as the surrounding image of the vehicle is captured at a preset time interval by using the plurality of cameras; when the object is sensed using the at least one sensor while the vehicle is being driven, extract, from among the obtained plurality of image frames, an image frame corresponding to the time point when and the location where the object was sensed; perform object detection on the extracted image frame; and perform object tracking, tracking a change in the object across a plurality of image frames obtained after the extracted image frame. The present disclosure also relates to an artificial intelligence (AI) system that utilizes a machine learning algorithm, such as deep learning, and applications of the AI system.
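The extraction step reduces to picking the buffered frame whose timestamp is closest to the sensing time, then running detection on it and tracking on the frames that follow. A minimal sketch with invented timestamps:

```python
# Sketch of the extraction step: frames are captured at a fixed interval,
# and when the sensor flags an object at time t, the frame whose timestamp
# is closest to t is pulled out for detection; tracking then runs on the
# frames after it. Timestamps and the interval are illustrative.

def extract_frame_index(timestamps, sensed_time):
    """Index of the frame captured closest to the sensing time."""
    return min(range(len(timestamps)),
               key=lambda i: abs(timestamps[i] - sensed_time))

timestamps = [0.0, 0.1, 0.2, 0.3, 0.4]   # preset 100 ms capture interval
idx = extract_frame_index(timestamps, sensed_time=0.23)
frames_to_track = timestamps[idx + 1:]   # object tracking runs on these
```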