Patent classifications
G06V10/803
SENSOR FUSION-BASED TOP-VIEW THREE-DIMENSIONAL STIXEL REPRESENTATION FOR GENERAL OBSTACLE DETECTION IN A VEHICLE
A system in a vehicle includes a first sensor to obtain first sensor data from a first field of view and provide a first top-view feature representation. The system also includes a second sensor to obtain second sensor data from a second field of view that overlaps the first field of view and provide a second top-view feature representation. Processing circuitry implements a neural network and provides a top-view three-dimensional stixel representation based on the first top-view feature representation and the second top-view feature representation. The top-view three-dimensional stixel representation is used to control an operation of the vehicle.
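The abstract leaves the network architecture unspecified. As a minimal sketch, assuming both sensors produce bird's-eye-view feature grids over the same ground plane, the fusion could be a channel-wise concatenation followed by a small convolutional head that predicts per-cell occupancy and height; every name, layer, and shape below is an illustrative assumption, not the patented design.

```python
import torch
import torch.nn as nn

class TopViewStixelFusion(nn.Module):
    """Hypothetical fusion head: concatenates two top-view feature maps and
    predicts a per-cell occupancy logit and height (a coarse 3D stixel grid)."""

    def __init__(self, channels_a: int, channels_b: int, hidden: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(channels_a + channels_b, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, kernel_size=1),  # channel 0: occupancy, channel 1: height
        )

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # Both inputs are (batch, channels, rows, cols) grids over the same ground plane.
        fused = torch.cat([feats_a, feats_b], dim=1)
        return self.head(fused)

# Example: 32-channel camera features fused with 16-channel radar features.
model = TopViewStixelFusion(channels_a=32, channels_b=16)
out = model(torch.randn(1, 32, 128, 128), torch.randn(1, 16, 128, 128))
print(out.shape)  # torch.Size([1, 2, 128, 128])
```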
ENHANCED REMOTE CONTROL OF AUTONOMOUS VEHICLES
Devices, systems, and methods for remote control of autonomous vehicles are disclosed herein. A method may include receiving, by a device, first data indicative of an autonomous vehicle in a parking area, and determining, based on the first data, a location of the autonomous vehicle. The method may include determining, based on the location, first image data including a representation of an object. The method may include generating second image data based on the first data and the first image data, and presenting the second image data. The method may include receiving an input associated with controlling operation of the autonomous vehicle, and controlling, based on the input, the operation of the autonomous vehicle.
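The abstract describes a receive/locate/render/command loop. A minimal sketch of one pass through that loop, with every callable a hypothetical stand-in for the device's subsystems:

```python
from dataclasses import dataclass

@dataclass
class VehicleReport:
    """Illustrative first data; the payload fields are assumptions."""
    vehicle_id: str
    gps: tuple  # (latitude, longitude)

def remote_control_step(report, camera_lookup, render, get_operator_input, send_command):
    """One pass of the remote-control flow in the abstract."""
    location = report.gps                         # determine the vehicle's location
    scene = camera_lookup(location)               # first image data: objects near the vehicle
    overlay = render(report, scene)               # second image data: composite for presentation
    command = get_operator_input(overlay)         # input associated with controlling operation
    if command is not None:
        send_command(report.vehicle_id, command)  # control the vehicle based on the input

remote_control_step(
    VehicleReport(vehicle_id="av-7", gps=(37.77, -122.42)),
    camera_lookup=lambda loc: f"parking-area frame near {loc}",
    render=lambda r, scene: f"overlay[{r.vehicle_id}]: {scene}",
    get_operator_input=lambda view: "stop",
    send_command=lambda vid, cmd: print(f"sending '{cmd}' to {vid}"),
)
```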
Evaluating risk factors of proposed vehicle maneuvers using external and internal data
Apparatuses and methods for evaluating the risk factors of a proposed vehicle maneuver using remote data are disclosed. In embodiments, a computer-assisted/autonomous driving vehicle communicates with one or more remote data sources to obtain remote sensor data, and processes such remote sensor data to determine the risk of a proposed vehicle maneuver. A remote data source may be authenticated and validated, such as by correlation with other remote data sources and/or local sensor data. Correlation may include performing object recognition upon the remote data sources and local sensor data. Risk evaluation is performed on the validated data, and the results of the risk evaluation are presented to a vehicle operator or to an autonomous vehicle navigation system.
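One way to read the validation step is as an overlap check between object sets recognized in remote data and in local sensor data. A toy sketch under that assumption; the overlap threshold and risk score are illustrative, not from the patent:

```python
def validate_source(remote_objects: set, local_objects: set, min_overlap: float = 0.5) -> bool:
    """Accept a remote source if enough of its recognized objects are
    corroborated by local sensing (a stand-in for the correlation step)."""
    if not remote_objects:
        return False
    overlap = len(remote_objects & local_objects) / len(remote_objects)
    return overlap >= min_overlap

def maneuver_risk(validated_reports: list) -> float:
    """Toy risk score: fraction of validated sources reporting a hazard."""
    if not validated_reports:
        return 1.0  # no corroborated data: treat the maneuver as high risk
    hazardous = sum(1 for objs in validated_reports if "hazard" in objs)
    return hazardous / len(validated_reports)

local = {"car_12", "pedestrian_3"}
remotes = [{"car_12", "hazard"}, {"bicycle_7"}]
validated = [r for r in remotes if validate_source(r, local)]
print(maneuver_risk(validated))  # 1.0 -- the only corroborated source reports a hazard
```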
Vehicle sensor fusion
Various systems and methods for optimizing use of environmental and operational sensors are described herein. A system for improving sensor efficiency includes object recognition circuitry implementable in a vehicle to detect an object ahead of the vehicle, the object recognition circuitry configured to use an object detection operation to detect the object from sensor data of a sensor array, and the object recognition circuitry configured to use at least one object tracking operation to track the object between successive object detection operations; and a processor subsystem to: calculate a relative velocity of the object with respect to the vehicle; and configure the object recognition circuitry to adjust intervals between successive object detection operations based on the relative velocity of the object.
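The core idea, as stated, is that the detection interval should shrink as the relative closing speed grows, with cheaper tracking filling the gaps between detections. A minimal sketch of such a schedule; the mapping and constants are assumptions:

```python
def detection_interval(relative_speed_mps: float,
                       min_interval_s: float = 0.05,
                       max_interval_s: float = 1.0,
                       scale_s: float = 10.0) -> float:
    """Returns seconds between full object-detection passes: fast-closing
    objects are re-detected often, slow or receding objects rarely, with
    tracking assumed to run between detections."""
    closing = max(relative_speed_mps, 0.0)  # only closing speed tightens the interval
    interval = scale_s / (closing + 1.0)
    return max(min_interval_s, min(interval, max_interval_s))

print(detection_interval(0.0))   # 1.0   -- not closing: detect rarely, track in between
print(detection_interval(20.0))  # ~0.48 -- closing fast: detect frequently
```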
Non-same camera based image processing apparatus
The present invention provides an image processing apparatus comprising: a first camera obtaining a true-color image by capturing a subject; a second camera spaced apart from the first camera and obtaining an infrared image by capturing the subject; and a control unit connected to the first camera and the second camera, wherein the control unit matches the true-color image and the infrared image and obtains three-dimensional information of the subject by using valid pixels of the matched infrared image in a region corresponding to the matched true-color image.
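Since the two cameras are spaced apart, depth can in principle come from the classical stereo relation Z = f·B/d applied to pixels matched between the true-color and infrared images. A sketch of only that last step, assuming the RGB-IR matching is done upstream:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px: float, baseline_m: float):
    """Pinhole stereo: depth Z = focal * baseline / disparity. Pixels with
    no usable RGB-IR match (disparity <= 0) are flagged invalid."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth, valid

disp = np.array([[0.0, 8.0], [16.0, 32.0]])
depth, valid = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.1)
print(depth)  # inf where unmatched; 10 m, 5 m, 2.5 m elsewhere
```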
Driving surface protrusion pattern detection for autonomous vehicles
A component of an Autonomous Vehicle (AV) system, the component having at least one processor; and a non-transitory computer-readable storage medium including instructions that, when executed by the at least one processor, cause the at least one processor to decode data encoded in a signal, wherein the data identifies a pattern of protrusions embedded in a driving surface, the signal being received from at least one vehicle sensor as a result of the vehicle driving over the pattern of protrusions in the driving surface.
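The abstract does not define the encoding. As a minimal sketch, assuming the pattern amplitude-modulates the vibration picked up by a wheel or chassis sensor, decoding could threshold fixed-length windows of the trace into bits; the window length and threshold are illustrative:

```python
import numpy as np

def decode_protrusion_pattern(vibration, samples_per_symbol: int, threshold: float):
    """Emits 1 for each window whose peak vibration exceeds the threshold
    (protrusion present) and 0 otherwise (gap)."""
    v = np.asarray(vibration, dtype=float)
    n_symbols = len(v) // samples_per_symbol
    bits = []
    for i in range(n_symbols):
        window = v[i * samples_per_symbol:(i + 1) * samples_per_symbol]
        bits.append(1 if window.max() >= threshold else 0)
    return bits

# Two strong bumps separated by a quiet gap -> pattern [1, 0, 1].
signal = [0.1, 2.0, 0.1, 0.1, 0.0, 0.2, 0.1, 0.1, 1.8, 0.3, 0.1, 0.0]
print(decode_protrusion_pattern(signal, samples_per_symbol=4, threshold=1.0))
```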
Apparatus, system and method for fusing sensor data to do sensor translation
Technologies and techniques for operating a sensor system including an image sensor and a light detection and ranging (LiDAR) sensor. Image data associated with an image scene of a landscape is received from the image sensor, and LiDAR data associated with a LiDAR scene of the landscape is received from the LiDAR sensor, wherein the LiDAR scene and image scene of the landscape substantially overlap. A machine-learning model is applied to (i) the image data to identify image points of interest in the image data, and (ii) the LiDAR data to identify LiDAR features of interest in the LiDAR data. The LiDAR features of interest and the image points of interest are fused using an attention mechanism to generate an output, and new LiDAR data is produced based on the fused output.
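The attention mechanism is unspecified beyond its role; one plausible reading is cross-attention in which LiDAR features query image features and the attended result augments each LiDAR feature. A sketch under that assumption, with dimensions and projections as illustrative choices:

```python
import torch
import torch.nn as nn

class LidarImageCrossAttention(nn.Module):
    """Hypothetical fusion step: LiDAR features attend over image features;
    the attended output is concatenated back and projected, yielding the
    'new LiDAR data' the abstract describes."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, lidar_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # lidar_feats: (batch, n_points, dim); image_feats: (batch, n_pixels, dim)
        attended, _ = self.attn(lidar_feats, image_feats, image_feats)
        return self.out(torch.cat([lidar_feats, attended], dim=-1))

fusion = LidarImageCrossAttention()
new_lidar = fusion(torch.randn(1, 512, 128), torch.randn(1, 900, 128))
print(new_lidar.shape)  # torch.Size([1, 512, 128])
```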
Method for positioning on basis of vision information and robot implementing same
The present invention relates to a method for positioning on the basis of vision information and a robot implementing the method. The method for positioning on the basis of vision information, according to an embodiment of the present invention, comprises the steps of: generating, by a control unit of a robot, first vision information by using image information of an object sensed by controlling a vision sensing unit of a sensor module of the robot; generating, by the control unit of the robot, a vision-based candidate position by matching the first vision information with second vision information stored in a vision information storage unit of a map storage unit; and generating, by the control unit, the vision-based candidate position as the position information of the robot when there is one vision-based candidate position.
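The uniqueness condition at the end is the key step: a pose is reported only when matching yields exactly one candidate position. A toy sketch with nearest-descriptor matching standing in for the vision matching; descriptors, threshold, and poses are all illustrative:

```python
import numpy as np

def vision_based_position(query_desc, map_entries, max_dist: float = 0.3):
    """Matches first vision information (query_desc) against stored second
    vision information (map_entries of (pose, descriptor) pairs); returns a
    pose only when exactly one candidate survives, per the abstract."""
    candidates = [pose for pose, desc in map_entries
                  if np.linalg.norm(np.asarray(query_desc) - np.asarray(desc)) <= max_dist]
    return candidates[0] if len(candidates) == 1 else None  # ambiguous or no match

stored = [((1.0, 2.0, 0.0), [0.9, 0.1]), ((4.0, 0.5, 1.6), [0.1, 0.95])]
print(vision_based_position([0.88, 0.12], stored))  # (1.0, 2.0, 0.0)
```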
SYSTEMS AND METHODS FOR MANAGING MEAT CUT QUALITY
In some embodiments, apparatuses and methods are provided herein useful to ensuring quality of meat cuts. In some embodiments, a system for ensuring quality of meat cuts comprises a capture device comprising an image capture device configured to capture an image of a cut of meat, a depth sensor configured to capture depth data, a transceiver configured to transmit the image and the depth data, a microcontroller configured to control the image capture device, the depth sensor, and the transceiver, a database configured to store meat cut specifications, and a control circuit configured to receive, from the capture device, the image and the depth data, retrieve, from the database, a meat cut specification, evaluate the image of the cut of meat and the depth data associated with the cut of meat, and classify the cut of meat.
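The evaluate-and-classify step reduces to checking measured properties against the retrieved specification. A minimal sketch, assuming thickness comes from the depth data and a surface-fat ratio from the image; the record fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class MeatCutSpec:
    """Illustrative specification record retrieved from the database."""
    min_thickness_mm: float
    max_thickness_mm: float
    max_surface_fat_ratio: float

def classify_cut(thickness_mm: float, surface_fat_ratio: float, spec: MeatCutSpec) -> str:
    """Evaluates depth-derived thickness and image-derived fat ratio
    against the spec, mirroring the abstract's evaluate/classify step."""
    if not (spec.min_thickness_mm <= thickness_mm <= spec.max_thickness_mm):
        return "reject: thickness out of spec"
    if surface_fat_ratio > spec.max_surface_fat_ratio:
        return "reject: excess surface fat"
    return "accept"

spec = MeatCutSpec(min_thickness_mm=20.0, max_thickness_mm=30.0, max_surface_fat_ratio=0.15)
print(classify_cut(25.0, 0.10, spec))  # accept
print(classify_cut(35.0, 0.10, spec))  # reject: thickness out of spec
```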