Patent classifications
G06T2207/30261
ROAD SHAPE RECOGNIZER, AUTONOMOUS DRIVE SYSTEM AND METHOD OF RECOGNIZING ROAD SHAPE
A road shape recognizer includes a peripheral information recognizer that recognizes at least two items of peripheral information based on an output of a periphery detector. A reliability assigner assigns a reliability level to each item of peripheral information. A point sequence generator generates and places a point sequence representing the shape of the road on which the own vehicle travels, based on the at least two items of peripheral information and the reliability levels. The point sequence generator places the points one by one toward a distant place, starting from a point located at a prescribed position relative to the own vehicle. Each next point is generated from an amount of change in shape and the position of the point placed last in the sequence. The amount of change in shape is represented by the peripheral information and determined per section of a prescribed distance.
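The point-placement step can be pictured as dead reckoning along the road: starting from a point at a fixed position relative to the vehicle, each new point is derived from the last point's position plus the per-section shape change. A minimal Python sketch, where the per-section heading change stands in for the abstract's "amount of change in shape" (the function and parameter names are illustrative, not from the patent):

```python
import math

def generate_point_sequence(start, heading, heading_change_per_section,
                            section_length=10.0):
    """Place road-shape points one by one toward a distant place.

    Each new point is computed from the position of the point placed
    last and the shape change (here, a heading change in radians)
    determined for the current fixed-length section.
    """
    points = [start]
    x, y = start
    for dtheta in heading_change_per_section:
        heading += dtheta                       # shape change for this section
        x += section_length * math.cos(heading)  # step toward a distant place
        y += section_length * math.sin(heading)
        points.append((x, y))
    return points
```

A straight road (all heading changes zero) yields evenly spaced points ahead of the start point; nonzero changes bend the sequence to follow a curve.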
Information processing apparatus, vehicle, and information processing method using correlation between attributes
According to an embodiment, an information processing apparatus includes a memory having computer executable components stored therein; and a processing circuit communicatively coupled to the memory. The processing circuit acquires a plurality of pieces of observation information of surroundings of a moving body, generates a plurality of pieces of attribute information of the surroundings of the moving body on the basis of the plurality of pieces of observation information, and sets a reliability of the attribute information of the surroundings of the moving body on the basis of correlation of the plurality of pieces of attribute information.
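One simple reading of "reliability from correlation" is consensus checking: an attribute value that agrees with the other sources' values for the same attribute gets high reliability, an outlier gets low reliability. A toy Python sketch under that assumption (the agreement measure and source names are illustrative, not the patent's method):

```python
def set_reliability(attributes):
    """Assign each attribute observation a reliability in [0, 1] based
    on how closely it agrees with the other observations.

    attributes: dict mapping source name -> observed value for the
    same attribute of the surroundings (e.g. lane width estimated by
    camera, LIDAR, and map data). Values near the consensus score
    high; the farthest outlier scores 0.
    """
    values = list(attributes.values())
    consensus = sum(values) / len(values)
    spread = max(max(abs(v - consensus) for v in values), 1e-9)
    return {name: 1.0 - abs(v - consensus) / spread
            for name, v in attributes.items()}
```

With two sources reporting 3.5 and one reporting 5.0, the two agreeing sources receive higher reliability than the outlier.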
AIRCRAFT WITH OPPOSED WINGTIP-MOUNTED CAMERAS AND METHOD OF OPERATING THE AIRCRAFT THAT COMPENSATE FOR RELATIVE MOTION OF THE OPPOSED WINGTIP-MOUNTED CAMERAS
An aircraft includes a fuselage, a first wing coupled to the fuselage and including a first wingtip that is movable relative to the fuselage during flight, and a second wing coupled to the fuselage, opposite the first wing, and including a second wingtip that is movable relative to the fuselage and relative to the first wingtip during flight. The aircraft also includes a first camera mounted to the first wingtip of the first wing and a second camera mounted to the second wingtip of the second wing. The aircraft further includes a processing unit configured to determine a real-time distance between the first camera and the second camera as the first camera and the second camera move relative to each other and relative to the fuselage during flight.
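The payoff of tracking the real-time inter-camera distance is that the stereo baseline is no longer a calibration constant: as the wingtips flex, the baseline must be recomputed before applying the standard stereo relation Z = f·B/d. A minimal sketch of that compensation step (camera positions, focal length, and disparity are assumed inputs; the patent text does not give this formula):

```python
import math

def depth_from_flexing_stereo(cam1_pos, cam2_pos, focal_px, disparity_px):
    """Estimate depth from a wingtip stereo pair whose baseline changes
    during flight.

    The baseline B is recomputed in real time from the current 3-D
    camera positions, then the pinhole stereo relation Z = f * B / d
    is applied.
    """
    baseline = math.dist(cam1_pos, cam2_pos)   # real-time inter-camera distance
    return focal_px * baseline / disparity_px
```

Without this recomputation, wing flex that changes the baseline by a few percent would bias every depth estimate by the same few percent.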
SYSTEMS AND METHODS FOR AUGMENTED STEREOSCOPIC DISPLAY
A method includes, with the aid of one or more processors individually or collectively, analyzing stereoscopic video data of an environment to determine environmental information, generating augmented stereoscopic video data of the environment by fusing the stereoscopic video data and the environmental information, and controlling an unmanned aerial vehicle (UAV) to avoid an obstacle on a motion path of the UAV according to the augmented stereoscopic video data of the environment.
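The avoidance check at the end of the pipeline can be sketched as a clearance test between the planned motion path and obstacle positions extracted from the stereo analysis. A toy Python version, assuming the stereo stage has already produced obstacle points as 3-D coordinates (the clearance value and data shapes are illustrative, not from the abstract):

```python
import math

def path_is_clear(waypoints, obstacles, clearance=2.0):
    """Check a UAV motion path against obstacles recovered from
    stereoscopic video analysis.

    waypoints, obstacles: iterables of (x, y, z) positions in metres.
    Returns True only if every waypoint keeps more than `clearance`
    metres from every obstacle; a False result would trigger replanning.
    """
    return all(math.dist(w, o) > clearance
               for w in waypoints for o in obstacles)
```

In a real system the test would run against the fused (augmented) data stream continuously, not once per path.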
Vehicular vision system using side-viewing camera
A vehicular vision system includes a side-viewing camera mounted within an exterior rearview mirror assembly attached at a side of a vehicle equipped with the vehicular vision system. The side-viewing camera has a field of view at least sideward of the side of the equipped vehicle at which the exterior rearview mirror assembly is attached. The side-viewing camera captures an image of a scene occurring exterior of the equipped vehicle. The captured image includes an image data set representative of the exterior scene. A control includes an image processor, and the image data set is provided to the control. The control processes a reduced image data set of the image data set provided to the control to detect edges present exterior of the equipped vehicle within an area of interest of the scene occurring exterior of the equipped vehicle that is within the field of view of the side-viewing camera.
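The key efficiency idea in the abstract is that the control runs edge detection on a reduced image data set, i.e. only on the area of interest within the camera's field of view. A minimal Python sketch of that pattern, using a simple horizontal-gradient test as a stand-in for the control's edge detector (the gradient test and threshold are illustrative assumptions):

```python
def detect_edges_in_roi(image, roi, threshold=30):
    """Detect edges in a reduced image data set.

    image: 2-D list of grayscale values for the full captured frame.
    roi:   (row0, row1, col0, col1) bounding the area of interest;
           pixels outside it are never touched.
    Returns (row, col) positions, in full-image coordinates, where the
    horizontal gradient exceeds the threshold.
    """
    r0, r1, c0, c1 = roi
    edges = []
    for r in range(r0, r1):
        for c in range(c0 + 1, c1):           # only the area of interest
            if abs(image[r][c] - image[r][c - 1]) > threshold:
                edges.append((r, c))
    return edges
```

Restricting processing to the ROI keeps the per-frame cost proportional to the area of interest rather than the full sensor resolution.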
VISUAL PERCEPTION METHOD AND APPARATUS, PERCEPTION NETWORK TRAINING METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM
The present disclosure provides a visual perception method and apparatus, a perception network training method and apparatus, a device and a storage medium. The visual perception method recognizes the acquired image to be perceived with a perception network to determine a perceived target and a pose of the perceived target, and finally determines a control command according to a preset control algorithm and the pose, so as to enable an object to be controlled to determine a processing strategy for the perceived target according to the control command. According to the perception network training method, image data and model data are acquired, an edited image is then generated with a preset editing algorithm according to a 2D image and a 3D model, and the perception network to be trained is finally trained according to the edited image and the label.
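The training-data step — editing a 2D image with a 3D model to obtain a labeled example — can be sketched as compositing a rendered view of the model onto the background image and recording where it was placed as the label. A toy Python version (the renderer producing the patch is assumed, not shown; the bounding-box label format is an illustrative choice, not the patent's):

```python
def make_edited_image(background, render_patch, top_left):
    """Compose a rendered view of a 3-D model onto a 2-D background
    image, producing an edited training image and its label.

    background:   2-D list of pixel values (unmodified).
    render_patch: 2-D list of pixel values rendered from the 3-D model.
    top_left:     (row, col) where the patch is pasted.
    Returns (edited_image, label) where label is the pasted target's
    bounding box (row0, col0, row1, col1).
    """
    edited = [row[:] for row in background]    # copy; keep the original intact
    r0, c0 = top_left
    for dr, row in enumerate(render_patch):
        for dc, px in enumerate(row):
            edited[r0 + dr][c0 + dc] = px
    h, w = len(render_patch), len(render_patch[0])
    return edited, (r0, c0, r0 + h, c0 + w)
```

Because the label is known exactly at paste time, the edited images need no manual annotation before training.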
VEHICLE AND METHOD FOR CONTROLLING THE SAME
A vehicle and a control method thereof are provided. The vehicle may include a camera configured to acquire a front image of the vehicle; a storage configured to store an outline image of a reference license plate and to store location information including a position of a front vehicle based on a size of the outline image of the reference license plate; and a controller configured to recognize a license plate included in the front image and determine a position of the front vehicle based on the license plate and the location information.
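Using the apparent size of a known-size license plate to range the front vehicle is an application of the pinhole relation Z = f·W/w. A minimal Python sketch, where the physical plate width and focal length are assumed example values standing in for the stored reference outline and camera calibration (neither number is from the abstract):

```python
def distance_to_front_vehicle(plate_width_px, focal_px,
                              plate_width_m=0.52):
    """Estimate the distance to a front vehicle from the apparent size
    of its recognized license plate.

    Pinhole relation: Z = f * W / w, where
      f = camera focal length in pixels,
      W = known physical width of the reference plate (metres),
      w = recognized plate width in the image (pixels).
    0.52 m is an assumed example plate width.
    """
    return focal_px * plate_width_m / plate_width_px
```

The farther the front vehicle, the smaller its plate appears, so distance falls out directly from the measured pixel width.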
Image processor, imaging device, and image processing system
An image processor according to the present disclosure includes: a multiplier that receives image data from a pixel section including pixels of a plurality of colors and multiplies the image data by an adjustment parameter that adjusts a color level in each of the pixels; an adjuster that calculates a ratio of respective colors in each of the pixels in the image data and adjusts a value of the adjustment parameter on the basis of the ratio of the respective colors; and a binarization processor that extracts a target image of a specific color on the basis of the image data multiplied by the adjustment parameter.
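The three stages of this processor — multiply by an adjustment parameter, adjust that parameter from the observed color ratio, binarize to extract the target color — can be sketched in a few lines. A toy Python version, with red as an assumed example target color and a simple share-of-total ratio as the agreement measure (both are illustrative choices, not the patent's specifics):

```python
def extract_target_mask(pixels, gain=(1.0, 1.0, 1.0), threshold=0.5):
    """Multiply pixel data by per-channel adjustment parameters, then
    binarize to extract a target of a specific color (red here).

    pixels: list of (r, g, b) values in 0..1. Returns a 0/1 mask.
    """
    mask = []
    for r, g, b in pixels:
        r, g, b = r * gain[0], g * gain[1], b * gain[2]   # multiplier stage
        total = (r + g + b) or 1.0
        mask.append(1 if r / total > threshold else 0)    # binarization stage
    return mask

def adjust_gain(pixels, gain):
    """Sketch of the adjuster: compute each channel's share of the
    image and rescale the adjustment parameters toward a balanced
    color ratio."""
    n = len(pixels)
    means = [sum(p[i] * gain[i] for p in pixels) / n for i in range(3)]
    target = sum(means) / 3.0
    return tuple(g * target / m if m else g for g, m in zip(gain, means))
```

After one `adjust_gain` pass the per-channel means are equalized, which keeps the binarization threshold meaningful under color casts.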
Tracking objects using sensor data segmentations and/or representations
Techniques are disclosed for tracking objects in sensor data, such as multiple images or multiple LIDAR clouds. The techniques may include comparing segmentations of sensor data such as by, for example, determining a similarity of a first segmentation of first sensor data and a second segmentation of second sensor data. Comparing the similarity may comprise determining a first embedding associated with the first segmentation and a second embedding associated with the second segmentation and determining a distance between the first embedding and the second embedding. The techniques may improve the accuracy and/or safety of systems integrating the techniques discussed herein.
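The core association test described — embed each segmentation, then compare embedding distances — reduces to a nearest-neighbour check in embedding space. A minimal Python sketch (the embedding network itself is assumed to exist upstream; the distance metric and threshold value are illustrative):

```python
import math

def same_object(embedding_a, embedding_b, threshold=0.5):
    """Decide whether two sensor-data segmentations show the same
    tracked object by comparing their embeddings.

    embedding_a, embedding_b: fixed-length feature vectors produced by
    an (assumed) embedding model from each segmentation. The
    segmentations are associated when the Euclidean distance between
    embeddings falls below the threshold.
    """
    return math.dist(embedding_a, embedding_b) < threshold
```

Across frames, each new segmentation is matched to the existing track whose embedding is closest (and within threshold); unmatched segmentations start new tracks.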
System and method for generating a probability distribution of a location of an object
An object detection system for generating a probability distribution of the location of an object includes one or more processors and a memory in communication with the one or more processors. The memory includes an image acquisition module, a bounding box generator module, and a probability distribution generator module. The image acquisition module causes the one or more processors to obtain a two-dimensional image displaying an object. The bounding box generator module causes the one or more processors to generate, using the two-dimensional image as an input, a bounding box, having a plurality of pixels, of the object displayed in the two-dimensional image. The probability distribution generator module causes the one or more processors to generate a probability distribution of a bounding box location for the object using a neural process using a pair of context points, a latent space, and a centered pixel location.
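The output of the last module — a probability distribution over where the bounding box lies, rather than a single box — can be illustrated with a much simpler stand-in than a neural process: a normalized Gaussian over pixel locations centred on the predicted box location. This sketch shows only the shape of the output, not the patented mechanism (the isotropic Gaussian, grid size, and sigma are all illustrative assumptions):

```python
import math

def location_distribution(mean, sigma, grid_size):
    """Return a grid_size x grid_size probability distribution over
    pixel locations for a bounding-box centre.

    mean:  (x, y) predicted centre in grid coordinates.
    sigma: spread of the (assumed) isotropic Gaussian; a larger sigma
           expresses more uncertainty about the box location.
    The returned grid sums to 1, so it can be read as a per-pixel
    probability that the box is centred there.
    """
    mx, my = mean
    grid = [[math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2 * sigma ** 2))
             for x in range(grid_size)] for y in range(grid_size)]
    total = sum(sum(row) for row in grid)
    return [[v / total for v in row] for row in grid]
```

A downstream planner can then reason about the full distribution (e.g. keep clearance from all high-probability locations) instead of trusting one hard box.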