Patent classifications
G06V10/893
CROP ROW GUIDANCE SYSTEMS
Technologies for guiding an agricultural vehicle through crop rows using a camera and signal processing to locate a crop row or the centers of crop rows. The signal processing applies a filter to data from images captured by the camera and locates the row, or its centers, from the filtered data. The filter is generated from a signal processing transform and an initial camera image of the crop row, and is then applied to subsequent images of the row. In some embodiments, the camera includes one lens; for example, some embodiments use monocular computer vision. Also, in some embodiments, a central processing unit generates the filter from the transform and the initial image of the crop row and applies the generated filter to the subsequent images of the row.
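The described pipeline — generate a filter from a transform of an initial image, then apply it to later frames to locate the row — can be sketched as a frequency-domain matched filter. The function names and the 1D image-profile simplification below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def build_filter(initial_row_profile):
    # Generate the filter from a signal-processing transform (here, the FFT)
    # of an initial image profile of the crop row.
    return np.conj(np.fft.fft(initial_row_profile))

def locate_row(frame_profile, row_filter):
    # Apply the filter to a subsequent frame; the peak of the correlation
    # response gives the estimated offset of the row center.
    response = np.real(np.fft.ifft(np.fft.fft(frame_profile) * row_filter))
    return int(np.argmax(response))

# Toy example: a bright "crop row" bump that shifts by 5 columns between frames.
x = np.arange(64)
initial = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)
later = np.exp(-0.5 * ((x - 25) / 2.0) ** 2)

f = build_filter(initial)
shift = locate_row(later, f)
print(shift)  # → 5, the row's displacement in the later frame
```

A steering controller could use this offset directly as the lateral error signal for keeping the vehicle centered on the row.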
OPTICAL 4f SYSTEM PERFORMING EXTENDED CONVOLUTION OPERATION AND OPERATING METHOD THEREOF
Disclosed is an optical system that includes a first lens that receives input data from an object; a kernel that performs a first Fourier transform on the input data and generates learning data by performing a calculation on the result of the first Fourier transform and pattern data; and a second lens that generates result data by performing a second Fourier transform on the learning data. The input data, the learning data, and the result data each include both positive and negative values.
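Numerically, a 4f system is a forward Fourier transform, an elementwise product with the pattern in the Fourier plane, and a second forward Fourier transform; because the second transform is forward rather than inverse, the output is the circular convolution up to a coordinate flip. The sketch below verifies that identity with numpy FFTs; it is a numerical analogy, not a model of the disclosed optics.

```python
import numpy as np

def four_f(input_data, pattern_spectrum):
    # Numerical analog of a 4f system: forward FFT (first lens), elementwise
    # product with the pattern in the Fourier plane (kernel), then a second
    # forward FFT (second lens).
    first = np.fft.fft(input_data)       # field at the Fourier plane
    learned = first * pattern_spectrum   # "learning data" after the kernel
    return np.fft.fft(learned)           # field at the output plane

x = np.array([1.0, 2.0, 3.0, 0.0])
h = np.array([1.0, -1.0, 0.0, 0.0])     # signed values, as the abstract notes
H = np.fft.fft(h)
out = four_f(x, H)

# Reference: circular convolution, then the coordinate flip n -> -n (mod N)
# introduced by using two forward transforms in a row.
conv = np.real(np.fft.ifft(np.fft.fft(x) * H))
flipped = conv[(-np.arange(len(x))) % len(x)]
print(np.allclose(np.real(out) / len(x), flipped))  # → True
```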
MODEL-BASED ROAD ESTIMATION
A computer-implemented method for estimating the road geometry of a road upon which a vehicle is traveling, and related aspects, are disclosed. The disclosed embodiments provide a model-based technique for estimating a road model having an arbitrary number of lanes as well as any entrance lanes (i.e. on-ramps or slip roads) and exit lanes (i.e. off-ramps or off-slip roads). In more detail, the herein disclosed embodiments utilize a Bayesian approach to multi-lane tracking in various traffic scenarios, such as highway scenarios. The employed model-based algorithm estimates the lane center curves of multi-lane roads based on vehicle motion data (e.g. speed, angular velocity) and perception data (e.g. camera output, lidar output, radar output, etc.) that includes detected road features (e.g. lane markers, road edges, road barriers, guard rails, road markers, etc.) or other objects (e.g. other road users).
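The Bayesian predict-from-motion / update-from-perception cycle can be illustrated with a one-dimensional Kalman filter tracking the lateral offset of a single lane center. This is a deliberately minimal sketch of the idea, not the patent's multi-lane algorithm; all values and names are assumptions.

```python
def predict(mean, var, lateral_motion, process_noise):
    # Motion update: shift the offset estimate by the vehicle's lateral
    # displacement (from speed and yaw rate) and grow the uncertainty.
    return mean + lateral_motion, var + process_noise

def update(mean, var, measurement, meas_noise):
    # Measurement update: fuse a noisy lane-marker detection from perception.
    gain = var / (var + meas_noise)
    return mean + gain * (measurement - mean), (1.0 - gain) * var

mean, var = 0.0, 1.0  # prior on the lane-center offset (metres)
# Vehicle drifts 0.1 m per step; markers are detected with variance 0.04.
for z in [0.12, 0.19, 0.33, 0.41]:
    mean, var = predict(mean, var, 0.1, 0.01)
    mean, var = update(mean, var, z, 0.04)
print(round(mean, 2), var < 0.05)  # estimate near the last measurement, low variance
```

The full method would run a bank of such filters over spline or clothoid lane-center curves and add lane birth/death logic for entrances and exits, but the fusion step has this same structure.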
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING SYSTEM
An information processing apparatus according to an embodiment includes a conversion part (311) that converts a processing parameter of a recognition process performed by a second recognizer (312), which operates on a second signal read from a second sensor, based on the output of a first recognizer (310), which performs a recognition process on a first signal read from a first sensor having a characteristic different from that of the second sensor. The conversion part converts the processing parameter so that the output of the second recognizer approximates the output of the first recognizer.
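One minimal reading of this conversion is a parameter search: tune a processing parameter (here a detection threshold) of the second recognizer until its output is as close as possible to the first recognizer's output on a sensor with a different characteristic (here a different signal scale). The recognizers, signals, and grid search below are illustrative assumptions.

```python
def recognizer(signal, threshold):
    # Toy recognizer: count samples exceeding a detection threshold.
    return sum(1 for s in signal if s > threshold)

def convert_parameter(first_out, second_signal, candidates):
    # Convert the processing parameter: pick the threshold whose
    # second-recognizer output best approximates the first recognizer's output.
    return min(candidates,
               key=lambda t: abs(recognizer(second_signal, t) - first_out))

first_signal = [0.2, 0.8, 0.5, 0.9, 0.1]    # e.g. visible-light sensor, 0..1 range
second_signal = [2.0, 8.5, 4.0, 9.5, 1.0]   # e.g. IR sensor, different scale
first_out = recognizer(first_signal, 0.6)   # 2 detections

theta = convert_parameter(first_out, second_signal,
                          [t * 0.5 for t in range(20)])
print(theta, recognizer(second_signal, theta) == first_out)  # → 4.0 True
```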
Feature extraction by directional wavelet packets for image processing by neural networks
Methods and systems that replace convolutional layers of a convolutional neural network (CNN) with quasi-analytic directional wavelet packet (qWP)-based filters, and use the qWP-based filters to extract features from image data. The extracted features are then used by the CNN to perform a classification task. The results of the classification task are output to a user.
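The general idea — a fixed, directionally oriented filter bank in place of learned convolutional filters — can be sketched with simple oriented difference kernels standing in for the quasi-analytic wavelet packets. The kernels and the mean-absolute-activation feature are crude illustrative stand-ins, not the qWP construction.

```python
import numpy as np

# Oriented 3x3 kernels: horizontal, vertical, and the two diagonals.
BANK = [
    np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),   # horizontal edges
    np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),   # vertical edges
    np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]], float),   # 45-degree edges
    np.array([[1, 1, 0], [1, 0, -1], [0, -1, -1]], float),   # 135-degree edges
]

def filter_features(image):
    # Convolve the image with each directional filter (valid region only) and
    # summarize each response map by its mean absolute activation, yielding a
    # fixed-length feature vector for a downstream classifier.
    h, w = image.shape
    feats = []
    for k in BANK:
        resp = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                resp[i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

# An image of vertical stripes activates the vertical-edge filter most.
img = np.tile([0.0, 0.0, 1.0, 1.0], (8, 2))  # 8x8 vertical stripes
f = filter_features(img)
print(int(np.argmax(f)))  # → 1, the vertical-edge filter dominates
```

In the described systems, such fixed-filter features feed the remaining (trainable) CNN layers, which perform the classification.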