Patent classifications
G06T2207/30261
Falling object detection apparatus, in-vehicle system, vehicle, and computer readable medium
An acquisition unit of a falling object detection apparatus installed and used in a first vehicle acquires a depth image of a second vehicle, on which a load is mounted and which is traveling in front of the first vehicle, and of the area around the second vehicle. A determination unit of the falling object detection apparatus determines, using the depth image acquired by the acquisition unit, whether or not the load has made a movement different from that of the second vehicle. A detection unit of the falling object detection apparatus detects a fall of the load based on a result of the determination by the determination unit.
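A minimal sketch of the determination step, assuming the lead vehicle and its load have already been segmented into pixel masks; the mask inputs and the 0.5 m divergence threshold are assumptions, not values from the abstract:

```python
import numpy as np

def detect_falling_load(depth_prev: np.ndarray, depth_curr: np.ndarray,
                        vehicle_mask: np.ndarray, load_mask: np.ndarray,
                        motion_threshold: float = 0.5) -> bool:
    """Flag a possible fall when the load's depth change diverges from the vehicle's.

    depth_prev/depth_curr: depth images (meters) from consecutive frames.
    vehicle_mask/load_mask: boolean masks for the lead vehicle and its load
    (assumed to come from an upstream segmentation step).
    """
    # Median depth change of the lead vehicle between frames.
    vehicle_motion = np.median(depth_curr[vehicle_mask] - depth_prev[vehicle_mask])
    # Median depth change of the load region.
    load_motion = np.median(depth_curr[load_mask] - depth_prev[load_mask])
    # A load moving with the vehicle should show a similar depth change;
    # a large divergence suggests the load is falling or has fallen.
    return abs(load_motion - vehicle_motion) > motion_threshold
```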
Object detection apparatus
An object detection apparatus that detects an object in a vicinity of a vehicle includes: (a) an image processing circuit configured to: (i) derive vectors representing movement of feature points in captured images acquired periodically by a camera that captures images of the vicinity of the vehicle; and (ii) detect the object based on the vectors; and (b) a controller configured to: (i) acquire a velocity of the vehicle; and (ii) set a parameter that affects the number of feature points based on the velocity of the vehicle.
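As a sketch of how such a parameter might work, the feature-point budget of a corner detector could be tied to vehicle speed. The linear speed-to-count mapping below is an assumption, and OpenCV's goodFeaturesToTrack stands in for the patent's unspecified detector:

```python
import cv2

def detect_feature_points(gray_frame, vehicle_speed_kmh,
                          base_count=200, min_count=50):
    # At higher speeds, frame-to-frame displacement grows, so fewer,
    # stronger features keep the motion vectors reliable and cheap to track.
    max_corners = max(min_count, int(base_count - 2 * vehicle_speed_kmh))
    return cv2.goodFeaturesToTrack(
        gray_frame,
        maxCorners=max_corners,   # the velocity-dependent parameter
        qualityLevel=0.01,
        minDistance=7,
    )
```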
Image selection device and image selection method
An image selection device includes a processor configured to: input each of a series of images acquired from a camera mounted on a vehicle to a classifier to detect a region including an object represented in the image; track the detected object over the series of images; and, when the period over which the detected object can be tracked is equal to or longer than a predetermined period and the size of a region including the detected object in any one image during that period is equal to or larger than a predetermined size threshold, select, from the series of images, an image immediately before the tracking period or an image in which the tracked object is not represented during the tracking period.
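A rough sketch of the selection rule, assuming tracking results have already been summarized per object; the Track record, the thresholds, and the choice of the frame immediately before the tracking period are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class Track:
    start_idx: int          # index of the first frame in which the object is tracked
    end_idx: int            # index of the last frame in which the object is tracked
    max_region_size: int    # largest bounding-box area over the tracking period

def select_images(tracks, min_period=10, min_size=1024):
    """Return indices of images selected per the abstract's criteria."""
    selected = []
    for t in tracks:
        period = t.end_idx - t.start_idx + 1
        if period >= min_period and t.max_region_size >= min_size:
            # Pick the image immediately before the tracking period,
            # in which the tracked object does not yet appear.
            if t.start_idx > 0:
                selected.append(t.start_idx - 1)
    return selected
```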
Method and device for determining operation of an autonomous device
A method and device for determining operation of an autonomous device are disclosed. The method includes receiving pixel data and sound data associated with an environment at an instance of time, wherein the pixel data is received from at least an image sensor associated with the autonomous device, and wherein the sound data is received from at least four sound sensors placed in a quadrilateral configuration on the autonomous device. The pixel data forms a matrix, and each quadrant of the matrix is associated with one of the at least four sound sensors. The received sound data is mapped to the matrix to identify one or more pixels in the matrix corresponding to the sound data, based on the difference in amplitude between a first sound sensor of the at least four sound sensors recording the maximum sound amplitude and a plurality of second sound sensors of the at least four sound sensors.
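A sketch of the mapping step under assumed conventions: the four amplitudes arrive in a fixed sensor order, each sensor owns one image quadrant, and the loudest sensor's quadrant is returned along with the amplitude differences that would refine the estimate:

```python
import numpy as np

def locate_sound_quadrant(amplitudes, frame_shape):
    """Map four sound-sensor amplitudes onto quadrants of the pixel matrix.

    amplitudes: length-4 sequence, one amplitude per sensor; sensor i is
    assumed to be associated with quadrant i of the image.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    primary = int(np.argmax(amplitudes))   # sensor recording the maximum amplitude
    # Differences between the loudest sensor and the remaining three.
    diffs = amplitudes[primary] - np.delete(amplitudes, primary)
    h, w = frame_shape[0] // 2, frame_shape[1] // 2
    # Assumed quadrant order: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right.
    offsets = [(0, 0), (0, w), (h, 0), (h, w)]
    r0, c0 = offsets[primary]
    # Return the pixel region of the loudest sensor's quadrant, plus the
    # amplitude differences that could localize the source within it.
    return (slice(r0, r0 + h), slice(c0, c0 + w)), diffs
```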
Object recognition and pedestrian alert apparatus for a vehicle
An object recognition apparatus 10 includes a candidate image extraction unit 12 that extracts candidate image parts 22, each an image part of an object, from a pickup image 21 of a camera 2, and a candidate image determination unit 14 that determines the type of object of a candidate image part 22 in a determination area R′₁ of the pickup image 21, the horizontal width of which is a predetermined width or less, on the basis of whether or not a first predetermined number k or more of the candidate image parts 22 extracted by the candidate image extraction unit 12 lie within the determination area R′₁.
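The determination logic might look like the following sketch; the threshold k, the maximum area width, and the "group of pedestrians" interpretation suggested by the title are assumptions:

```python
def classify_determination_area(candidate_boxes, area_x_min, area_x_max,
                                k=3, max_width=120):
    """Decide the object type for a determination area R'_1.

    candidate_boxes: list of (x_min, x_max) horizontal extents of candidate
    image parts in pixels. k and max_width are illustrative values.
    """
    if area_x_max - area_x_min > max_width:
        return "undetermined"   # area wider than the predetermined width
    inside = [b for b in candidate_boxes
              if b[0] >= area_x_min and b[1] <= area_x_max]
    # Many candidate parts packed into a narrow area suggests a group of
    # pedestrians rather than a single object (an assumed interpretation).
    return "pedestrian_group" if len(inside) >= k else "single_object"
```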
Wearable aircraft towing collision warning devices and methods
The disclosed embodiments describe collision warning devices, controllers, and computer-readable media. A collision warning device for towing vehicles includes a housing, a scanning sensor, a display, and a controller. The housing is configured to be secured to at least one of a tow operator and a tug during aircraft towing operations. The scanning sensor is secured to the housing and is configured to scan an aircraft and to scan an object in an environment surrounding the aircraft. The controller is mounted to the housing and is operably coupled with the scanning sensor and the display. The controller is configured to generate a three-dimensional (3D) model of the aircraft and the environment based on a signal output from the scanning sensor, and to calculate potential collisions between the aircraft and the object based on the 3D model.
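A brute-force sketch of the collision check over the scanned 3D model, assuming both the aircraft and the nearby object are available as point clouds and using an illustrative clearance margin:

```python
import numpy as np

def check_potential_collision(aircraft_pts, object_pts, clearance=2.0):
    """Flag a potential collision from the scanned 3D model.

    aircraft_pts, object_pts: (N, 3) and (M, 3) arrays of points from the
    scanning sensor; clearance is an assumed safety margin in meters.
    """
    # Pairwise distances between every aircraft point and every object point
    # (O(N*M) memory; fine for a sketch, a k-d tree would scale better).
    d = np.linalg.norm(aircraft_pts[:, None, :] - object_pts[None, :, :], axis=2)
    min_dist = float(d.min())
    return min_dist < clearance, min_dist
```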
Geometry-aware instance segmentation in stereo image capture processes
A system detects multiple instances of an object in a digital image by receiving a two-dimensional (2D) image that includes a plurality of instances of an object in an environment. For example, the system may receive the 2D image from a camera or other sensing modality of an autonomous vehicle (AV). The system uses a first object detection network to generate a plurality of predicted object instances in the image. The system then receives a data set that comprises depth information corresponding to the plurality of instances of the object in the environment. The data set may be received, for example, from a stereo camera of an AV, and the depth information may be in the form of a disparity map. The system may use the depth information to identify an individual instance from the plurality of predicted object instances in the image.
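One way depth can disambiguate overlapping 2D predictions is to look for gaps in the disparity distribution inside a predicted mask; the sketch below, with an assumed disparity-gap threshold, illustrates the idea rather than the patent's exact procedure:

```python
import numpy as np

def count_instances_by_depth(disparity, instance_mask, gap=5.0):
    """Use disparity to separate predicted instances that overlap in 2D.

    disparity: per-pixel disparity map from the stereo camera.
    instance_mask: boolean mask of one predicted object instance.
    gap: assumed minimum disparity separation between distinct objects.
    """
    values = np.sort(disparity[instance_mask])
    # A large jump in the sorted disparities indicates two objects at
    # different depths that were merged into a single 2D prediction.
    jumps = np.where(np.diff(values) > gap)[0]
    return len(jumps) + 1   # estimated number of distinct instances
```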
Remote distance estimation system and method
Provided is a method including emitting, with a laser light emitter disposed on a robot, a collimated laser beam projecting a light point on a surface opposite the laser light emitter; capturing, with each of at least two image sensors disposed on the robot, images of the projected light point; overlaying, with a processor of the robot, the images captured by the at least two image sensors to produce a superimposed image showing both captured images in a single image; determining, with the processor, a first distance between the projected light points in the superimposed image; and determining, with the processor, a second distance based on the first distance, using a relationship that relates the distance between the light points to the distance between the robot, or a sensor thereof, and the surface on which the collimated laser beam is projected.
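The final step is classic stereo triangulation: the separation between the two imaged light points (the disparity) shrinks as the surface moves away. A minimal sketch, assuming a known sensor baseline and a focal length in pixels, both illustrative parameters rather than values from the patent:

```python
def estimate_range(pixel_gap, baseline_m, focal_px):
    """Triangulate range from the gap between the two imaged light points.

    pixel_gap: distance between the light points in the superimposed image.
    baseline_m: spacing between the two image sensors (assumed known).
    focal_px: focal length expressed in pixels (assumed known).
    Uses the standard stereo relation: range = baseline * focal / disparity.
    """
    if pixel_gap <= 0:
        raise ValueError("light points must be separated in the superimposed image")
    return baseline_m * focal_px / pixel_gap
```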
Robot cleaner and method for controlling the same
A method of controlling a robot cleaner includes recognizing information on a monitoring standby position by the robot cleaner, moving to the monitoring standby position at a monitoring start time, acquiring an image at the monitoring standby position by an image acquisition unit of the robot cleaner, determining whether an event has occurred based on the image acquired by the image acquisition unit, and transmitting the acquired image to an external remote terminal when it is determined that the event has occurred.
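A minimal control-loop sketch of the described sequence; move_to, capture, and send are assumed interface names, not a vendor API:

```python
import time

def detect_event(image) -> bool:
    # Placeholder event check; a real system might compare the image
    # against a reference view of the monitored area.
    return False

def monitor(cleaner, camera, terminal, standby_position, start_time):
    """Minimal control loop for the monitoring behavior in the abstract."""
    while time.time() < start_time:      # wait for the monitoring start time
        time.sleep(1.0)
    cleaner.move_to(standby_position)    # move to the monitoring standby position
    image = camera.capture()             # acquire an image at that position
    if detect_event(image):
        terminal.send(image)             # forward the image to the remote terminal
```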
Optical flow based motion detection
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for motion detection based on optical flow. One of the methods includes obtaining a first image of a scene in an environment taken by an agent at a first time point and a second image of the scene at a second, later time point. A point cloud characterizing the scene in the environment is obtained. A predicted optical flow between the first image and the second image is determined. For each point in the point cloud, the method determines a respective initial flow prediction that represents motion of the point between the two time points, a respective ego motion flow estimate that represents the motion of the point induced by ego motion of the agent, and a respective motion prediction that indicates whether the point was static or in motion between the two time points.
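The last step reduces to comparing the observed flow with the flow that ego motion alone would produce; a sketch with an assumed pixel threshold:

```python
import numpy as np

def classify_motion(predicted_flow, ego_flow, threshold=1.0):
    """Label each point static or moving from residual optical flow.

    predicted_flow: (N, 2) flow between the two images for each projected point.
    ego_flow: (N, 2) flow each point would exhibit if static, induced purely
    by the agent's ego motion. threshold is an assumed value in pixels.
    """
    # Residual flow is the part of the observed motion not explained by
    # the agent's own movement.
    residual = np.linalg.norm(predicted_flow - ego_flow, axis=1)
    return residual > threshold   # True where the point is predicted to be moving
```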