G06T2207/30261

Intelligent crop maintenance device with independently controlled blades

System that automates crop maintenance activities, such as cultivating and weeding, with a device that intelligently and independently controls two blades that drag along either side of a crop row using sensors to repeatedly track the position of the blades and of the plants in the row. Blades may be moved in and out independently using an actuator for each blade to contour closely around the individual plants, even if plants or rows vary in their positions, and even if plant sizes and shapes differ. An illustrative system may use a single camera and a processor per crop row; the processor may analyze camera images to locate plant positions and shapes, to plan blade trajectories, and to control blade actuators. The processor may be able to control blade movement precisely to respond quickly to sensor input on changes in plant positions, shapes, and sizes along the row.
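The per-plant blade retraction described above can be sketched as a simple planning step. This is a minimal illustration, not the patent's method: the function name `plan_blade_offsets`, the clearance margin, and the geometry are all assumptions.

```python
CLEARANCE_CM = 2.0  # assumed safety margin kept around each plant


def plan_blade_offsets(plant_half_widths_cm, max_reach_cm=10.0):
    """For each sensed plant half-width (from camera analysis), retract
    the blade just far enough to clear the plant plus a margin; a wider
    plant yields a smaller blade extension into the row."""
    offsets = []
    for half_width in plant_half_widths_cm:
        retract = min(half_width + CLEARANCE_CM, max_reach_cm)
        offsets.append(max_reach_cm - retract)  # remaining blade extension
    return offsets
```

Each blade would run such a plan independently, which is what lets the pair contour around plants whose positions and sizes vary along the row.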

DATA PROCESSING APPARATUS, SENDING APPARATUS, AND DATA PROCESSING METHOD

An image generation apparatus (20) includes an acquisition unit (210) and a data processing unit (220). The acquisition unit (210) repeatedly acquires analysis data from at least one sending apparatus (10). The analysis data include at least type data and relative position data. Each time analysis data are acquired, the data processing unit (220) generates a reconfigured image by using the analysis data and causes a display (230) to display the reconfigured image. Further, when a criterion is satisfied, the data processing unit (220) requests a captured image from the sending apparatus (10).
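The acquire/reconstruct/display loop with a conditional image request might look like the following sketch. The class shape, the dictionary fields, and the "unknown type" criterion are illustrative assumptions, not details from the abstract.

```python
class DataProcessingUnit:
    """Toy stand-in for the data processing unit (220)."""

    def __init__(self, sender, display):
        self.sender = sender    # stand-in for sending apparatus (10)
        self.display = display  # stand-in for display (230)

    def on_analysis_data(self, analysis):
        # analysis data carry at least a type and a relative position
        image = self.reconstruct(analysis)
        self.display.show(image)
        if self.criterion_satisfied(analysis):
            self.sender.request_captured_image()

    def reconstruct(self, analysis):
        # render a reconfigured image from types and relative positions
        return {"objects": [(d["type"], d["relative_position"]) for d in analysis]}

    def criterion_satisfied(self, analysis):
        # assumed criterion: ask for a raw frame when a type is unknown
        return any(d["type"] == "unknown" for d in analysis)
```

Sending compact analysis data and requesting full captured images only on demand keeps the link bandwidth low, which is presumably the point of the criterion.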

Vehicle collision detection and driver notification system

A vehicle collision avoidance and driver notification system includes an object detection unit configured to detect environmental obstacles and a collision detection unit for assessing the risk of collision. Depending on the risk assessment, a collision avoidance unit either gives feedback to the driver or directly interacts with the vehicle engine.
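The tiered response (driver feedback first, direct engine intervention only at high risk) can be sketched as below. The time-to-collision metric and both thresholds are illustrative assumptions; the abstract does not specify how risk is quantified.

```python
def collision_response(time_to_collision_s, warn_threshold_s=3.0,
                       brake_threshold_s=1.0):
    """Map an assessed risk (here: time to collision) to an action:
    intervene directly at high risk, warn the driver at moderate risk,
    otherwise do nothing."""
    if time_to_collision_s <= brake_threshold_s:
        return "intervene"       # e.g., act on the vehicle engine
    if time_to_collision_s <= warn_threshold_s:
        return "warn_driver"     # feedback to the driver
    return "no_action"
```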

THREE-DIMENSIONAL TARGET ESTIMATION USING KEYPOINTS
20230087261 · 2023-03-23

Systems and techniques are described for performing object detection and tracking. For example, a tracking object can obtain an image comprising a target object at least partially in contact with a surface. The tracking object can obtain a plurality of two-dimensional (2D) keypoints based on one or more features associated with one or more portions of the target object in contact with the surface in the image. The tracking object can obtain information associated with a contour of the surface. Based on the plurality of 2D keypoints and the information associated with the contour of the surface, the tracking object can determine a three-dimensional (3D) representation associated with the plurality of 2D keypoints.
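Lifting 2D contact keypoints into 3D using known surface geometry can be sketched by back-projecting each keypoint's camera ray onto the surface. The sketch below assumes a flat ground plane and a simple pinhole camera, which is much narrower than the general surface contour the abstract describes; all names are illustrative.

```python
import numpy as np


def lift_keypoints_to_surface(keypoints_px, fx, fy, cx, cy, cam_height_m):
    """Back-project 2D keypoints (pixels) of object/surface contact
    points onto a flat ground plane located cam_height_m below the
    camera, yielding 3D points in camera coordinates."""
    pts3d = []
    for (u, v) in keypoints_px:
        # ray direction through the pixel, in camera coordinates
        d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        t = cam_height_m / d[1]  # scale so the ray reaches the plane
        pts3d.append(d * t)
    return np.array(pts3d)
```

With a non-planar surface, the same idea holds but the ray/plane intersection is replaced by a ray/contour intersection.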

ROAD AND INFRASTRUCTURE ANALYSIS TOOL

Systems and methods are provided for road hazard analysis. The method includes obtaining sensor data of a road environment, including a road and observable surroundings, and applying labels to the sensor data. The method further includes training a first neural network model to identify road hazards, training a second neural network model to identify faded lane markings, and training a third neural network model to identify overhanging trees and blocking foliage. The method further includes implementing the trained neural network models to detect road hazards in a real road setting.
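At inference time, the three trained models are applied per frame and their findings merged, roughly as in this sketch. The model callables and label strings are stand-ins (assumptions), not the patent's interfaces.

```python
def analyze_road(sensor_frame, hazard_model, lane_model, foliage_model):
    """Run the three specialized detectors over one frame of sensor
    data and tag each detection with its category."""
    findings = []
    findings += [("hazard", d) for d in hazard_model(sensor_frame)]
    findings += [("faded_lane_marking", d) for d in lane_model(sensor_frame)]
    findings += [("blocking_foliage", d) for d in foliage_model(sensor_frame)]
    return findings
```

Keeping one model per defect category lets each be retrained independently as new labeled data for that category arrives.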

Vehicular vision system with object detection
11610410 · 2023-03-21

A vehicular vision system includes a camera disposed at an in-cabin side of a windshield of a vehicle and viewing forward of the vehicle. The vehicular vision system, responsive at least in part to image processing of multiple frames of captured image data, detects an object present exterior of the vehicle that is moving relative to the vehicle. The system, when the vehicle is moving, and based at least in part to received vehicle motion data indicative of motion of the vehicle when the vehicle is moving and image processing of multiple frames of captured image data, (i) estimates object trajectory of the detected object based at least in part on corresponding object features present in multiple frames of image data captured by the camera and (ii) determines motion of the detected object relative to the moving vehicle based on the estimated object trajectory.
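Determining object motion relative to the moving vehicle amounts to combining per-frame object positions (from tracked image features) with ego-motion data. A toy 2D sketch, with all names and the constant-velocity assumption being illustrative:

```python
def relative_object_motion(object_positions, ego_positions):
    """Given per-frame object positions (from feature tracking) and
    ego-vehicle positions (from vehicle motion data), estimate the
    object's average displacement per frame relative to the vehicle."""
    rel = [(ox - ex, oy - ey)
           for (ox, oy), (ex, ey) in zip(object_positions, ego_positions)]
    n = len(rel) - 1
    dx = rel[-1][0] - rel[0][0]
    dy = rel[-1][1] - rel[0][1]
    return (dx / n, dy / n)
```

The key point mirrored from the abstract: subtracting ego motion first means the estimated trajectory reflects the object's motion relative to the vehicle, not just its apparent motion in the image.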

Method of and system for predicting future event in self driving car (SDC)

Methods and devices for generating a trajectory for a self-driving car (SDC) are disclosed. The method includes, for a given one of the plurality of trajectory points of a given trajectory: (i) determining the presence of a plurality of dynamic objects around the SDC, (ii) applying a first algorithm to determine a set of collision candidates, (iii) generating a segment line for the SDC, (iv) generating a bounding box for each of the set of collision candidates, (v) for a given one of the set of collision candidates, determining a distance between the segment line and the respective bounding box to determine a separation distance, (vi) in response to the separation distance being lower than a threshold, determining that the given one of the set of collision candidates would cause a collision with the SDC, and (vii) amending at least one of the plurality of trajectory points to render a revised candidate trajectory.
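Steps (v) and (vi), the separation-distance test between the SDC's segment line and a candidate's bounding box, can be sketched as follows. For brevity the bounding box is approximated here by a circumscribing circle; that simplification, and all names, are assumptions.

```python
import math


def point_segment_distance(p, a, b):
    """Minimum distance from 2D point p to segment ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby  # closest point on the segment
    return math.hypot(px - cx, py - cy)


def would_collide(segment, box_center, box_radius, threshold):
    """Flag a collision candidate when the separation between the SDC's
    segment line and the (circle-approximated) bounding box falls below
    the threshold, as in steps (v)-(vi)."""
    separation = point_segment_distance(box_center, *segment) - box_radius
    return separation < threshold
```

A candidate that trips this test is what triggers step (vii), amending trajectory points to produce a revised candidate trajectory.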

SYSTEMS AND METHODS FOR PRODUCING AMODAL CUBOIDS
20220343101 · 2022-10-27 ·

Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
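One plausible way to define an amodal cuboid from several loose-fit cuboids of the same object is to take the maximum extent along each axis across observations, so partial views combine into the object's full size. The abstract does not specify the aggregation; this is an illustrative assumption.

```python
def amodal_cuboid(loose_fit_cuboids):
    """Aggregate loose-fit cuboids, each given as (min_xyz, max_xyz)
    corner tuples, into amodal dimensions: per-axis maximum extent."""
    dims = [tuple(hi - lo for lo, hi in zip(mn, mx))
            for mn, mx in loose_fit_cuboids]
    return tuple(max(d[i] for d in dims) for i in range(3))
```

The resulting full-size box is what makes the cuboid "amodal": it covers the object even where LiDAR returns were occluded in any single frame.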

Spatio-temporal-interactive networks

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using spatio-temporal-interactive networks.

Cross-domain image comparison method and system using semantic segmentation

A cross-domain image comparison method and a cross-domain image comparison system are provided. The cross-domain image comparison method includes the following steps. Two cross-domain videos, generated by different types of devices, are obtained. A plurality of semantic segmentation areas are obtained from one frame of each of the videos. A region of interest pair (ROI pair) is obtained according to the moving paths of the semantic segmentation areas in the videos. Two bounding boxes and two central points of the ROI pair are obtained. A similarity between the frames is obtained according to the bounding boxes and the central points.
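The final step, scoring similarity from the two bounding boxes and central points, might combine box overlap with center distance as in this sketch. The abstract does not give the formula; the IoU/distance combination below is an assumption.

```python
import math


def roi_pair_similarity(box_a, box_b):
    """Similarity of an ROI pair from its two bounding boxes
    (x1, y1, x2, y2) and their central points: intersection-over-union
    damped by the distance between centers."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection-over-union of the two boxes
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union else 0.0
    # distance between the two central points
    ca = ((ax1 + ax2) / 2, (ay1 + ay2) / 2)
    cb = ((bx1 + bx2) / 2, (by1 + by2) / 2)
    dist = math.hypot(ca[0] - cb[0], ca[1] - cb[1])
    return iou / (1.0 + dist)
```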