G06V10/255

Plant group identification

A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
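The decision step described above can be sketched in code. This is a hypothetical illustration, not the patented implementation: `TREAT_GROUPS`, the mask layout, and the centroid-based location are all invented stand-ins for the model's labeled pixels and the treatment-mechanism trigger.

```python
# Hypothetical sketch: given per-pixel plant-group labels from an
# identification model, locate each group and decide which groups trigger
# the treatment mechanism. TREAT_GROUPS is an illustrative configuration.

TREAT_GROUPS = {"weed"}  # plant groups that should be treated

def locate_groups(label_mask):
    """Map each plant group to the (row, col) pixels labeled with it."""
    locations = {}
    for r, row in enumerate(label_mask):
        for c, group in enumerate(row):
            if group is not None:
                locations.setdefault(group, []).append((r, c))
    return locations

def treatment_actions(label_mask):
    """Return (group, centroid) pairs for groups that should be treated."""
    actions = []
    for group, pixels in locate_groups(label_mask).items():
        if group in TREAT_GROUPS:
            cr = sum(p[0] for p in pixels) / len(pixels)
            cc = sum(p[1] for p in pixels) / len(pixels)
            actions.append((group, (cr, cc)))
    return actions

mask = [
    [None, "crop", None],
    ["weed", "weed", None],
    [None, None, "crop"],
]
```

Accumulating the per-image group locations over many frames would then give the plant identification map the abstract mentions.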

Localization and mapping method and moving apparatus

A localization and mapping method is for localizing and mapping a moving apparatus in a moving process. The localization and mapping method includes an image capturing step, a feature point extracting step, a flag object identifying step, and a localizing and mapping step. The image capturing step includes capturing an image frame at a time point of a plurality of time points in the moving process by a camera unit. The flag object identifying step includes identifying whether the image frame includes a flag object among a plurality of the feature points in accordance with a flag database. The flag database includes a plurality of dynamic objects, and the flag object corresponds to one of the dynamic objects. The localizing and mapping step includes performing localization and mapping in accordance with the image frames captured and the flag objects thereof in the moving process.
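A common reason to flag dynamic objects in a SLAM pipeline is to exclude their feature points before pose estimation. The sketch below assumes that reading of the abstract; the label set and the `(x, y, label)` feature-point format are illustrative, not taken from the patent.

```python
# Hedged sketch: feature points whose identified label matches an entry in
# the flag database are treated as belonging to dynamic objects and split
# out, so localization and mapping can rely on the static points only.

FLAG_DATABASE = {"pedestrian", "car", "bicycle"}  # illustrative dynamic objects

def split_feature_points(feature_points):
    """feature_points: list of (x, y, label). Returns (static, flagged)."""
    static, flagged = [], []
    for x, y, label in feature_points:
        (flagged if label in FLAG_DATABASE else static).append((x, y, label))
    return static, flagged

points = [(10, 20, "building"), (30, 40, "car"), (50, 60, "tree")]
static_pts, flagged_pts = split_feature_points(points)
```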

Training image classifiers

Methods, systems, and apparatus, including computer programs encoded on a storage device, for training an image classifier. A method includes receiving an image that includes a depiction of an object; generating a set of poorly localized bounding boxes; and generating a set of accurately localized bounding boxes. The method includes training, at a first learning rate and using the poorly localized bounding boxes, an object classifier to classify the object; and training, at a second learning rate that is lower than the first learning rate, and using the accurately localized bounding boxes, the object classifier to classify the object. The method includes receiving a second image that includes a depiction of an object; and providing, to the trained object classifier, the second image. The method includes receiving an indication that the object classifier classified the object in the second image; and performing one or more actions.
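The two-phase schedule is the key idea here, and it can be shown with a toy model. Everything below is illustrative: the "classifier" is a single scalar fit by gradient descent on squared error, the box sets are invented numbers, and the learning rates simply satisfy the abstract's constraint that the second is lower than the first.

```python
# Toy two-phase training loop: train first at a higher learning rate on
# noisy supervision (poorly localized boxes), then at a lower rate on clean
# supervision (accurately localized boxes).

def train_phase(w, samples, lr, steps):
    """One phase of SGD on squared error between scalar w and each target."""
    for _ in range(steps):
        for target in samples:
            grad = 2.0 * (w - target)
            w -= lr * grad
    return w

poor_boxes = [0.4, 0.6, 1.4]       # noisy targets from loose boxes
accurate_boxes = [1.0, 1.0, 1.0]   # clean targets from tight boxes

w = 0.0
w = train_phase(w, poor_boxes, lr=0.1, steps=20)       # first, higher rate
w = train_phase(w, accurate_boxes, lr=0.01, steps=20)  # then, lower rate
```

The higher first rate lets the noisy boxes move the model quickly into roughly the right region; the lower second rate lets the clean boxes refine it without large, destabilizing updates.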

System and method for iterative classification using neurophysiological signals

A method of training an image classification neural network comprises: presenting a first plurality of images to an observer as a visual stimulus, while collecting neurophysiological signals from a brain of the observer; processing the neurophysiological signals to identify a neurophysiological event indicative of a detection of a target by the observer in at least one image of the first plurality of images; training the image classification neural network to identify the target in the image, based on the identification of the neurophysiological event; and storing the trained image classification neural network in a computer-readable storage medium.
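The labeling step this abstract describes can be sketched as follows. This is a deliberately simplified stand-in: a scalar event score plus a threshold takes the place of the real neurophysiological event detector, and `EVENT_THRESHOLD` is an invented parameter.

```python
# Hedged sketch: each image shown to the observer is labeled by whether a
# target-detection event was identified in the neurophysiological signals
# recorded while that image was displayed. Thresholding a scalar event
# score stands in for the actual event detector.

EVENT_THRESHOLD = 0.7  # illustrative detector threshold

def label_from_signals(images, event_scores):
    """Pair each image with a target/non-target label for classifier training."""
    return [
        (img, 1 if score >= EVENT_THRESHOLD else 0)
        for img, score in zip(images, event_scores)
    ]

training_set = label_from_signals(["img_a", "img_b", "img_c"], [0.9, 0.2, 0.8])
```

The resulting labeled pairs are what the image classification neural network would then be trained on.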

Construction zone segmentation

Systems and methods for construction zone segmentation are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes construction zone scenes having various objects. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
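The two selection criteria (far from annotated samples per the discriminator, and low prediction score) can be combined in a simple filter. The thresholds and the `(id, distance, score)` sample format below are assumptions for illustration, not values from the patent.

```python
# Sketch of the sample-selection step: each unlabeled target sample carries
# a discriminator-derived distance from existing annotated samples and a
# classifier prediction score. Samples that are both far away and
# low-confidence are chosen for annotation. Thresholds are invented.

DISTANCE_MIN = 0.5   # "far from annotated samples"
SCORE_MAX = 0.4      # "lower prediction scores"

def select_for_annotation(samples):
    """samples: list of (sample_id, discriminator_distance, prediction_score)."""
    return [
        sid for sid, dist, score in samples
        if dist >= DISTANCE_MIN and score <= SCORE_MAX
    ]

pool = [("s1", 0.9, 0.2), ("s2", 0.1, 0.1), ("s3", 0.8, 0.9)]
chosen = select_for_annotation(pool)
```

This is the usual active-learning rationale: annotating samples that are both unfamiliar and uncertain yields the most informative additions to the training set.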

OBJECT RECOGNITION APPARATUS, OBJECT RECOGNITION METHOD, AND RECORDING MEDIUM
20230039355 · 2023-02-09

In an object recognition apparatus, a storage unit stores a table in which a plurality of feature amounts are associated with each object having feature points of respective feature amounts. An object region detection unit detects object regions of a plurality of objects from an input image. A feature amount extraction unit extracts feature amounts of feature points from the input image. A refining unit refers to the table, and refines from all objects of recognition subjects to object candidates corresponding to the object regions based on feature amounts of feature points belonging to the object regions. A matching unit recognizes the plurality of objects by matching the feature points belonging to each of the object regions with feature points for each of the object candidates, and outputs a recognition result.
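The refining step can be pictured as intersecting table lookups. The table contents and feature-amount keys below are invented for illustration; the point is that the candidate set shrinks before the more expensive point-matching stage runs.

```python
# Illustrative refinement step: the table maps feature amounts to the
# objects whose feature points produce them. Candidates for a detected
# region are the objects consistent with every feature amount extracted
# inside that region.

TABLE = {
    "fa1": {"cup", "bottle"},
    "fa2": {"cup"},
    "fa3": {"phone"},
}

def refine_candidates(region_feature_amounts, all_objects):
    """Intersect the table entries for each feature amount in the region."""
    candidates = set(all_objects)
    for fa in region_feature_amounts:
        candidates &= TABLE.get(fa, set())
    return candidates

objects = {"cup", "bottle", "phone"}
cands = refine_candidates(["fa1", "fa2"], objects)
```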

IMAGING SYSTEM AND METHOD USING A MULTI-LAYER MODEL APPROACH TO PROVIDE ROBUST OBJECT DETECTION

A system and method of detecting an image of a template object in a captured image may include comparing, by a processor, an image model of an imaged template object to multiple locations, rotations, and scales in the captured image. The image model may be defined by multiple model base point sets derived from contours of the imaged template object, where each model base point set is inclusive of a plurality of model base points positioned at corresponding locations associated with distinctive features of the imaged template object. Each corresponding model base point of the model base point sets may (i) be associated with respective layers and (ii) have an associated gradient vector. A determination may be made as to whether and where the image of the object described by the image model is located in the captured image.
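One way to score a single candidate placement under such a model is a gradient-correlation measure. The sketch below assumes that reading and omits the rotation, scale, and layer dimensions entirely; the point format and the use of a mean dot product are illustrative choices, not the patented method.

```python
# Assumption-laden sketch of scoring one candidate translation: each model
# base point carries an offset and a gradient vector, and the score is the
# mean dot product between model gradients and image gradients sampled at
# the translated point locations.

def placement_score(model_points, image_gradients, offset):
    """model_points: list of ((dx, dy), (gx, gy)); image_gradients: dict
    mapping (x, y) -> (gx, gy); offset: candidate (x, y) translation."""
    total = 0.0
    for (dx, dy), (mgx, mgy) in model_points:
        igx, igy = image_gradients.get(
            (offset[0] + dx, offset[1] + dy), (0.0, 0.0))
        total += mgx * igx + mgy * igy
    return total / len(model_points)

model = [((0, 0), (1.0, 0.0)), ((1, 0), (0.0, 1.0))]
grads = {(5, 5): (1.0, 0.0), (6, 5): (0.0, 1.0)}
```

Sweeping `offset` (and, in a full search, rotation and scale) over the captured image and keeping the best-scoring placement is what decides "whether and where" the template is located.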

OBJECT POSITION DETERMINING SYSTEM AND OBJECT DETECTION SYSTEM

An object position determining system includes: at least one light source, configured to emit light; at least one optical sensor, configured to sense optical data generated based on reflected light of the light; and a processing circuit, configured to compute distance information between the optical sensor and an object which generates the reflected light. The processing circuit further determines a position of the object according to the distance information.
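The abstract does not say how the distance is computed, so the sketch below assumes a time-of-flight scheme and a known emission angle; both are assumptions, not details from the patent.

```python
# Sketch assuming time of flight: the round-trip time of the reflected
# light gives distance, and with a known emission angle a 2-D object
# position relative to the sensor follows by polar-to-Cartesian conversion.
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(round_trip_seconds):
    """Round-trip time -> one-way distance to the reflecting object."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def position_from_distance(distance, angle_rad):
    """Polar-to-Cartesian: object position relative to the sensor."""
    return (distance * math.cos(angle_rad), distance * math.sin(angle_rad))

d = distance_from_tof(2.0e-7)  # 200 ns round trip -> ~30 m
pos = position_from_distance(d, 0.0)
```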

Systems and Methods for Image Based Perception

Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
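The assignment step (deciding when predictions from separate cameras belong to the same object) can be sketched with a simple greedy grouping. The center-distance test and its threshold below are stand-ins for whatever association criterion the system actually uses.

```python
# Hedged sketch: predicted cuboids from cameras with overlapping fields of
# view are grouped together when their centers fall within a distance
# threshold, so one detected object can own cuboids from multiple cameras.

ASSOC_THRESHOLD = 1.0  # illustrative center-distance threshold, in meters

def associate_cuboids(cuboids):
    """cuboids: list of (camera_id, (x, y, z) center). Returns a list of
    groups, each a list of cuboids assigned to one detected object."""
    groups = []
    for cam, center in cuboids:
        for group in groups:
            gx, gy, gz = group[0][1]
            dist = ((center[0] - gx) ** 2 + (center[1] - gy) ** 2
                    + (center[2] - gz) ** 2) ** 0.5
            if dist <= ASSOC_THRESHOLD:
                group.append((cam, center))
                break
        else:
            groups.append([(cam, center)])
    return groups

preds = [("cam0", (10.0, 5.0, 0.5)), ("cam1", (10.2, 5.1, 0.5)),
         ("cam1", (30.0, 2.0, 0.5))]
groups = associate_cuboids(preds)
```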

Homography error correction

An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
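The error check at the heart of this abstract is compact enough to sketch directly. The 3x3 matrix values and the threshold are illustrative; only the mechanism (apply the homography, measure the location difference, compare to a threshold) comes from the text.

```python
# Minimal sketch: a planar homography maps a pixel location to global-plane
# (x, y) coordinates; if the mapped estimate differs from the sensor's
# actual location by more than a threshold, the homography is flagged for
# recomputation.

DIFF_THRESHOLD = 0.5  # meters, illustrative

def apply_homography(h, px, py):
    """h: 3x3 homography (row-major nested lists); returns global (x, y)."""
    x = h[0][0] * px + h[0][1] * py + h[0][2]
    y = h[1][0] * px + h[1][1] * py + h[1][2]
    w = h[2][0] * px + h[2][1] * py + h[2][2]
    return (x / w, y / w)

def needs_recompute(h, pixel, actual_xy):
    """Compare estimated and actual sensor locations against the threshold."""
    est = apply_homography(h, *pixel)
    diff = ((est[0] - actual_xy[0]) ** 2
            + (est[1] - actual_xy[1]) ** 2) ** 0.5
    return diff > DIFF_THRESHOLD

H = [[0.01, 0.0, 1.0], [0.0, 0.01, 2.0], [0.0, 0.0, 1.0]]
```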