G06T7/74

PUPIL DETECTION DEVICE, LINE-OF-SIGHT DETECTION DEVICE, OCCUPANT MONITORING SYSTEM, AND PUPIL DETECTION METHOD

A pupil detection device includes an eye area image obtaining unit that obtains image data representing an eye area image in a captured image obtained by a camera; a luminance gradient calculating unit that calculates luminance gradient vectors corresponding to respective individual image units in the eye area image, using the image data; an evaluation value calculating unit that calculates evaluation values corresponding to the respective individual image units, using the luminance gradient vectors; and a pupil location detecting unit that detects a pupil location in the eye area image, using the evaluation values.
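The abstract does not specify the evaluation function; a common gradient-based choice (a Timm–Barth-style objective, assumed here for illustration) scores each candidate center by how well unit displacement vectors to strong-gradient pixels align with the luminance gradient at those pixels, then takes the argmax as the pupil location:

```python
import numpy as np

def detect_pupil_center(eye_gray):
    """Sketch of a gradient-based pupil evaluation. The exact evaluation
    value in the abstract is unspecified; this uses a Timm-Barth-style
    alignment score between displacement and gradient directions."""
    img = eye_gray.astype(float)
    gy, gx = np.gradient(img)        # luminance gradient per image unit
    mag = np.hypot(gx, gy)
    mask = mag > mag.mean()          # keep only strong-gradient pixels
    ys, xs = np.nonzero(mask)
    gxn = gx[mask] / mag[mask]       # unit gradient components
    gyn = gy[mask] / mag[mask]
    h, w = img.shape
    scores = np.zeros((h, w))
    for cy in range(h):
        for cx in range(w):
            dx = xs - cx             # displacement from candidate center
            dy = ys - cy
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0
            dots = (dx / norm) * gxn + (dy / norm) * gyn
            scores[cy, cx] = np.mean(np.maximum(dots, 0.0) ** 2)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

For a dark pupil on a bright sclera, rim gradients point radially outward, so the true center maximizes the alignment score.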

APPARATUS FOR CHECKING BATTERY POSITION AND OPERATING METHOD THEREOF

This application relates to a battery test apparatus, and an operating method thereof, that use a computer vision AI algorithm to check whether the direction of battery cells provided inside a battery tray is correct. In one aspect, the method includes performing a first test for checking whether a battery tray has been correctly aligned with the battery test apparatus. The method may also include capturing, with a camera, a battery tray image including a plurality of battery cells once the battery tray is aligned. The method may further include obtaining related information on the battery cells from the battery tray image, and loading the related information on the battery cells. The method may further include performing a second test for checking whether the directions of the battery cells are correct.
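The second test reduces, in essence, to comparing each cell's classified direction against the expected one. A minimal sketch, assuming the vision classifier that produces the per-cell direction labels (not specified in the abstract) has already run:

```python
def check_cell_directions(predicted_dirs, expected_dir):
    """Second test (sketch): flag every cell whose classified direction
    differs from the expected direction. `predicted_dirs` is assumed to
    come from an upstream computer-vision classifier."""
    wrong = [i for i, d in enumerate(predicted_dirs) if d != expected_dir]
    return len(wrong) == 0, wrong
```

The returned index list lets the apparatus report exactly which tray positions failed the direction check.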

SYSTEMS AND METHODS FOR OBJECT DETECTION

A computing system includes a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.

Localization using dynamic landmarks

A method, system and computer program product for determining a map position of an ego-vehicle are disclosed. The method includes acquiring map data comprising a road geometry, initializing at least one dynamic landmark by measuring a position and velocity, relative to the ego-vehicle, of a surrounding vehicle, and determining a first map position of the surrounding vehicle based on this measurement and the geographical position of the ego-vehicle. Further, the method includes predicting a second map position of the surrounding vehicle, and measuring a location, relative to the ego-vehicle, of the surrounding vehicle when it is estimated to be at the second map position, whereby the geographical position of the ego-vehicle can be computed and updated.
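The steps above can be sketched directly: initialize the landmark's first map position from the relative measurement, predict its second map position (a constant-velocity motion model is assumed here; the abstract leaves the prediction model open), then recover the ego position from a new relative measurement:

```python
import numpy as np

def predict_landmark_map_pos(ego_map_pos, rel_pos, rel_vel, ego_vel, dt):
    """Initialize the dynamic landmark's first map position from the
    relative measurement, then predict its second map position under a
    constant-velocity assumption."""
    first_map_pos = np.asarray(ego_map_pos, float) + np.asarray(rel_pos, float)
    abs_vel = np.asarray(ego_vel, float) + np.asarray(rel_vel, float)
    return first_map_pos + abs_vel * dt

def ego_pos_from_landmark(predicted_landmark_pos, measured_rel_pos):
    """When the landmark is estimated to be at its predicted map
    position, a fresh relative measurement yields the ego position."""
    return np.asarray(predicted_landmark_pos, float) - np.asarray(measured_rel_pos, float)
```

In a full system this update would feed a filter over the road-geometry map rather than being used raw.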

Facial recognition technology for improving motor carrier regulatory compliance

Methods for improving compliance with regulations pertaining to vehicle driving records are disclosed. One or more digital images from a camera mounted in a vehicle are received. Based on a determination that the vehicle has hours of service that have not been assigned to a driver, a subset of the one or more digital images corresponding to those hours of service is identified based on the images' timestamps. The subset of the one or more digital images is processed to identify a correspondence between a face of a person included in the one or more digital images and a face of a known person. Based on the correspondence transgressing a threshold level of correspondence, a user interface is generated for presentation on a device. The user interface includes an interactive user interface element for accepting a recommendation to assign the known person as the driver for the unassigned hours of service.
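The correspondence measure is not specified in the abstract; a common stand-in is cosine similarity between face embeddings, with the recommendation emitted only when the similarity exceeds the threshold. The threshold value below is illustrative:

```python
import numpy as np

def correspondence(face_embedding, known_embedding):
    """Cosine similarity between face descriptors; a stand-in for the
    unspecified correspondence measure in the abstract."""
    a = np.asarray(face_embedding, float)
    b = np.asarray(known_embedding, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_driver(face_embedding, known_embedding, threshold=0.8):
    """Recommend assigning the known person as driver only when the
    correspondence transgresses (here: meets or exceeds) the threshold."""
    return correspondence(face_embedding, known_embedding) >= threshold
```

Gating on a threshold rather than always surfacing the best match keeps low-confidence identifications from generating spurious assignment prompts.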

Adaptive gaussian derivative sigma systems and methods

In one embodiment, a method is provided. The method comprises determining a first value of a coefficient of an edge-determining algorithm in response to a spatial resolution of a first image acquired with an image capture device onboard a vehicle, a spatial resolution of a second image, and a second value of the coefficient in response to which the edge-determining algorithm generated a second edge map corresponding to the second image. The method further comprises determining, with the edge-determining algorithm in response to the coefficient having the first value, at least one edge of at least one object in the first image. The method further comprises generating, in response to the determined at least one edge, a first edge map corresponding to the first image. The method further comprises determining at least one navigation parameter of the vehicle in response to the first and second edge maps.
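The abstract leaves open how the coefficient (the Gaussian-derivative sigma) depends on the two resolutions and the prior value; the sketch below assumes sigma scales linearly with spatial resolution (pixels per unit length), paired with a simple derivative-of-Gaussian edge map (smoothing along the orthogonal axis omitted for brevity):

```python
import numpy as np

def adapt_sigma(sigma_prev, res_prev, res_new):
    """Assumed linear scaling: a finer-resolution image (more pixels per
    unit length) gets a proportionally larger sigma in pixels."""
    return sigma_prev * (res_new / res_prev)

def gaussian_derivative_kernel(sigma):
    """1-D derivative-of-Gaussian kernel, truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return -x / sigma ** 2 * g / g.sum()

def edge_map(image, sigma, thresh):
    """Threshold the Gaussian-derivative gradient magnitude."""
    k = gaussian_derivative_kernel(sigma)
    img = image.astype(float)
    gx = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    gy = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)
    return np.hypot(gx, gy) > thresh
```

Keeping sigma consistent across resolutions makes the first and second edge maps comparable, which is what the navigation-parameter step relies on.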

Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method

The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
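Step S4 is a rigid transform of each capture point's model into the global coordinate system using the position and direction from S2. A minimal sketch, with the direction reduced to a single yaw angle for illustration (the full method would use a complete 3D pose):

```python
import numpy as np

def place_model(points_local, yaw, position):
    """Transform one capture point's 3D model into the global frame
    (S4): rotate by the capture direction (yaw only here), then
    translate to the capture position recovered in S2."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return np.asarray(points_local, float) @ R.T + np.asarray(position, float)
```

Applying this per capture point places every individual model in the shared global coordinate system, after which the models can be connected into the overall 3D model.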

Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
11557058 · 2023-01-17

A machine vision-based method and system to facilitate the unloading of a pile of cartons within a work cell are provided. The method includes the step of providing at least one 3-D or depth sensor having a field of view at the work cell. Each sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data. The 3-D sensor data includes a plurality of pixels. For each possible pixel location and each possible carton orientation, the method includes generating a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses. The method further includes ranking the plurality of hypotheses. The step of ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals. The step of ranking is based on the surprisals of the hypotheses.
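Surprisal here is Shannon self-information: the less surprising the observed 3-D data is under a carton hypothesis, the better that hypothesis. A sketch of the ranking step, with the per-hypothesis likelihoods treated as given inputs (the abstract does not specify how they are computed):

```python
import math

def surprisal(likelihood):
    """Shannon surprisal (self-information) of a hypothesis, in bits."""
    return -math.log2(likelihood)

def rank_hypotheses(hypotheses):
    """Rank (label, likelihood) hypotheses by ascending surprisal, so
    the best-supported carton placement comes first."""
    return sorted(hypotheses, key=lambda h: surprisal(h[1]))
```

Because `-log2` is monotone decreasing, ranking by ascending surprisal is equivalent to ranking by descending likelihood; surprisal has the practical advantage of summing over independent observations.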

Avian detection systems and methods

Provided herein are detection systems and related methods for detecting moving objects in an airspace surrounding the detection system. In an aspect, the moving object is a flying animal, and the detection system comprises a first imager and a second imager that determine the position of the moving object; for moving objects within a user-selected distance from the system, the system determines whether the moving object is a flying animal, such as a bird or a bat. The systems and methods are compatible with wind turbines, identifying avians of interest in the airspace around wind turbines and, if necessary, taking action to minimize avian strikes by wind turbine blades.
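The abstract does not state how the two imagers determine position; a common two-imager approach is stereo triangulation, where depth follows from the disparity between the images. A textbook sketch, offered as an assumption rather than the patented method:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth = f * B / d, where f is the
    focal length in pixels, B the baseline between the two imagers in
    meters, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With depth known, the user-selected distance gate from the abstract is a simple comparison before the flying-animal classification runs.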

Autonomous driving with surfel maps

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using a surfel map to generate a prediction for a state of an environment. One of the methods includes obtaining surfel data comprising a plurality of surfels, wherein each surfel corresponds to a respective different location in an environment, and each surfel has associated data that comprises an uncertainty measure; obtaining sensor data for one or more locations in the environment, the sensor data having been captured by one or more sensors of a first vehicle; determining one or more particular surfels corresponding to respective locations of the obtained sensor data; and combining the surfel data and the sensor data to generate a respective object prediction for each of the one or more locations of the obtained sensor data.
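The combining step pairs each surfel's stored estimate and uncertainty measure with new sensor data. One natural reading, assumed here since the abstract does not fix the update rule, is inverse-variance (Kalman-style) fusion:

```python
def fuse_surfel(prior_value, prior_var, meas_value, meas_var):
    """Inverse-variance fusion of a surfel's stored estimate with a new
    sensor reading (an assumed update rule; the abstract only says each
    surfel carries an uncertainty measure). Returns the fused estimate
    and its reduced variance."""
    k = prior_var / (prior_var + meas_var)     # gain: trust measurement
    fused = prior_value + k * (meas_value - prior_value)
    fused_var = (1.0 - k) * prior_var          # uncertainty shrinks
    return fused, fused_var
```

A noisy measurement (large `meas_var`) barely moves the surfel, while a precise one dominates it, which is the behavior an uncertainty-carrying map representation is designed to support.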