G06V10/446

Optical detection apparatus and methods

An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., a fixed-focal-length, wide-aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to relative motion between the object and the lens), the object representation on the image sensor plane may be in focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out-of-focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
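
The contrast-based detection described above can be sketched with a common focus measure, the variance of a discrete Laplacian. This is a minimal illustration, assuming grayscale images as 2-D lists of intensities; the threshold value and function names are not from the patent.

```python
def laplacian_contrast(image):
    """Return the variance of a 4-neighbour Laplacian response over a
    2-D grayscale image (list of lists of pixel intensities).
    Sharp, in-focus content yields a large variance."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] -
                   4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def detect(image, threshold=10.0):
    """Produce a detection indication when the contrast parameter
    meets the (illustrative) criterion."""
    return laplacian_contrast(image) >= threshold

sharp = [[0, 0, 100, 100]] * 4   # a hard edge: high contrast
flat = [[50] * 4] * 4            # defocused/uniform: zero contrast
```

An in-focus object crossing the depth of field area would raise the contrast parameter above the criterion, triggering `detect`.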

System and method to enable the application of optical tracking techniques for generating dynamic quantities of interest with alias protection
10713516 · 2020-07-14

Systems and methods for realizing practical applications of high speed digital image correlation (DIC) for dynamic quantities of interest are provided. In particular, a series of images are captured for a component of interest in which a non-filtered sensor and an analog low-pass filtered sensor are included within the region of interest for the series of images. Displacement signals are obtained for the component of interest, the non-filtered sensor, and the analog low-pass filtered sensor by applying digital image correlation processing to the series of images, which may also be wavelet filtered. Dynamic quantities of interest may be generated and derived from the displacement signals after having been wavelet filtered. Such dynamic quantities of interest based on the wavelet filtered DIC-derived displacement signal may be compared to sensor-derived dynamic quantities of interest to determine if aliasing is or is likely to be present.
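
The wavelet-filtering and aliasing-comparison steps can be illustrated with a toy single-level Haar low-pass filter and a zero-crossing comparison of apparent oscillation rates. This is a sketch under loose assumptions; the actual wavelet family, reconstruction, and comparison criterion are not specified here.

```python
def haar_lowpass(signal):
    """One-level Haar decomposition with the detail coefficients
    discarded, then reconstructed: keeps pairwise averages, acting
    as a crude low-pass filter on a displacement signal."""
    out = []
    for i in range(0, len(signal) - 1, 2):
        avg = (signal[i] + signal[i + 1]) / 2
        out.extend([avg, avg])
    return out

def zero_crossings(signal):
    """Count sign changes as a rough proxy for dominant frequency."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

def aliasing_suspected(dic_signal, sensor_signal, tolerance=2):
    """Flag likely aliasing when the DIC-derived and (analog
    low-pass filtered) sensor-derived signals disagree on apparent
    oscillation rate. Tolerance is illustrative."""
    return abs(zero_crossings(dic_signal) -
               zero_crossings(sensor_signal)) > tolerance
```

A DIC-derived signal oscillating much faster than the analog-filtered sensor's signal would suggest the camera frame rate undersampled the true motion.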

VIDEO PROCESSING METHODS AND SOFTWARE ARCHITECTURES FOR ANALYZING TRANSFORMATION IN OBJECTS
20200175686 · 2020-06-04

Video processing methods and the associated system architecture for measuring transformation in objects, including pupils, entail the following steps: (1) motion correction; (2) object (eye) detection; (3) image correction; and (4) Fourier-based analysis for motion estimation of the item of interest (in some embodiments, the item is a pupil).
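
Step 4, Fourier-based motion estimation, is commonly realized as phase correlation. The sketch below recovers a circular shift between two 1-D signals using a naive DFT; a real implementation would use 2-D FFTs over image regions, and nothing here is taken from the patent itself.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                for k in range(n)) for i in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * i * k / n)
                for k in range(n)).real / n for i in range(n)]

def estimate_shift(a, b):
    """Phase correlation: normalize the cross-power spectrum, invert,
    and take the peak location as the circular shift from a to b."""
    fa, fb = dft(a), dft(b)
    cross = []
    for u, v in zip(fa, fb):
        p = v * u.conjugate()
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = idft(cross)
    return max(range(len(corr)), key=lambda i: corr[i])

signal = [0, 0, 1, 2, 1, 0, 0, 0]       # a 1-D "pupil" profile
shifted = signal[-3:] + signal[:-3]     # same profile moved by 3
```

The normalized cross-power spectrum makes the correlation peak sharp and largely insensitive to uniform brightness changes, which is why phase correlation suits pupil-motion estimation.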

Machine vision and robotic installation systems and methods

Machine vision methods and systems determine if an object within a work field has one or more predetermined features. Methods comprise capturing image data of the work field, applying a filter to the image data, in which the filter comprises an aspect corresponding to a specific feature, and based at least in part on the applying, determining if the object has the specific feature. Systems comprise a camera system configured to capture image data of the work field, and a controller communicatively coupled to the camera system and comprising non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data, and based at least in part on applying the filter, determine if the object has the specific feature. Robotic installation methods and systems that utilize machine vision methods and systems also are disclosed.
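
One plausible reading of "a filter comprising an aspect corresponding to a specific feature" is cross-correlation with a small kernel shaped like the feature. The sketch below is an assumption-laden illustration, not the disclosed implementation; the kernel, scene, and threshold are invented.

```python
def apply_filter(image, kernel):
    """Slide a small kernel over a 2-D image (lists of intensities)
    and return the maximum correlation response."""
    kh, kw = len(kernel), len(kernel[0])
    best = float("-inf")
    for y in range(len(image) - kh + 1):
        for x in range(len(image[0]) - kw + 1):
            score = sum(image[y + j][x + i] * kernel[j][i]
                        for j in range(kh) for i in range(kw))
            best = max(best, score)
    return best

def has_feature(image, kernel, threshold):
    """Decide, based at least in part on applying the filter, whether
    the object has the specific feature."""
    return apply_filter(image, kernel) >= threshold

vertical_edge = [[-1, 1], [-1, 1]]              # feature-shaped kernel
scene = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 9, 9]]
```

In the robotic-installation setting, the controller would run such a check on work-field imagery before committing the robot to act on the object.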

Detecting and measuring the size of clods and other soil features from imagery

The present disclosure provides systems and methods that measure soil roughness in a field from imagery of the field. In particular, the present subject matter is directed to systems and methods that include or otherwise leverage a machine-learned clod detection model to determine a soil roughness value for a portion of a field based at least in part on imagery of such portion of the field captured by an imaging device. For example, the imaging device can be a camera positioned in a downward-facing direction and physically coupled to a work vehicle or an implement towed by the work vehicle through the field.
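
Downstream of the machine-learned clod detection model, one plausible roughness statistic is the mean equivalent diameter of the detected clods. Both the detection output format (pixel bounding boxes) and the metric below are assumptions for illustration only.

```python
import math

def soil_roughness(clod_boxes, pixels_per_cm):
    """clod_boxes: list of (x, y, w, h) detections, e.g. from a
    clod detection model run on downward-facing imagery.
    Returns the mean equivalent clod diameter in cm (0.0 if none)."""
    if not clod_boxes:
        return 0.0
    diameters = []
    for _, _, w, h in clod_boxes:
        area = w * h                               # box area in pixels
        # diameter of a circle with the same area, converted to cm
        diameters.append(2 * math.sqrt(area / math.pi) / pixels_per_cm)
    return sum(diameters) / len(diameters)
```

Larger detected clods yield a larger roughness value, which a work vehicle could use to adjust tillage settings for the imaged portion of the field.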

ADVANCED DRIVER ASSISTANCE SYSTEM AND METHOD
20200143176 · 2020-05-07

An advanced driver assistance system is configured to detect lane markings in a perspective image of a road in front of the vehicle. The perspective image of the road is separated into horizontal stripes corresponding to different road portions at different average distances from the vehicle. Features are extracted from the plurality of horizontal stripes using a plurality of kernels.
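
The stripe-and-kernel pipeline above can be sketched as follows, assuming the perspective image is a list of pixel rows; the stripe count and the edge kernel are illustrative stand-ins for the plurality of kernels.

```python
def split_into_stripes(image, n_stripes):
    """Cut the image into horizontal stripes, each covering a band of
    rows and hence a different average distance from the vehicle."""
    rows_per = len(image) // n_stripes
    return [image[i * rows_per:(i + 1) * rows_per]
            for i in range(n_stripes)]

def filter_stripe(stripe, kernel):
    """Apply a 1-D horizontal kernel (valid mode) to each row of a
    stripe, producing feature responses such as edge strengths."""
    k = len(kernel)
    return [[sum(row[x + i] * kernel[i] for i in range(k))
             for x in range(len(row) - k + 1)]
            for row in stripe]

image = [[0, 0, 8, 8, 0, 0]] * 6          # a bright vertical band
stripes = split_into_stripes(image, 3)
edges = filter_stripe(stripes[0], [-1, 0, 1])
```

Because distant lane markings appear narrower in a perspective image, a real system would likely vary the kernel per stripe; here a single kernel keeps the sketch short.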

Fast and robust face detection, region extraction, and tracking for improved video coding
10628700 · 2020-04-21

Techniques related to improved video coding based on face detection, region extraction, and tracking are discussed. Such techniques may include performing a facial search of a video frame to determine candidate face regions in the video frame, testing the candidate face regions based on skin tone information to determine valid and invalid face regions, rejecting invalid face regions, and encoding the video frame based on valid face regions to generate a coded bitstream.
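
The skin-tone validation step might look like the sketch below, which uses a well-known RGB skin heuristic; this particular rule and the acceptance fraction are assumptions, not necessarily what the patent employs.

```python
def is_skin(r, g, b):
    """Crude skin-tone test on 8-bit RGB values (a common heuristic:
    bright, red-dominant, sufficiently saturated pixels)."""
    return (r > 95 and g > 40 and b > 20 and r > g and r > b
            and (max(r, g, b) - min(r, g, b)) > 15)

def valid_face_region(pixels, min_skin_fraction=0.4):
    """pixels: (r, g, b) tuples sampled from a candidate face region.
    Keep the region as a valid face only when enough of its pixels
    look skin-like; otherwise it is rejected before encoding."""
    if not pixels:
        return False
    skin = sum(1 for p in pixels if is_skin(*p))
    return skin / len(pixels) >= min_skin_fraction

face_like = [(200, 140, 110)] * 8 + [(30, 30, 30)] * 2
sky_like = [(80, 120, 200)] * 10
```

Regions surviving this test would then steer bit allocation during encoding, spending more bits where faces are present.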

DRIVE ASSIST METHOD, DRIVE ASSIST PROGRAM, AND VEHICLE CONTROL DEVICE
20200101977 · 2020-04-02

A vehicle control device includes: detecting at least one of a position or a state of an occupant in a vehicle; determining an operation inputting part that is most easily operable for the occupant from among a plurality of operation inputting parts in the vehicle, based on the at least one of the position or the state of the occupant detected in the detecting; notifying a consent request in the vehicle with respect to a processing content scheduled to be performed in response to an occurrence of an event; and validating a consent operation to the operation inputting part that is most easily operable.
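
One simple way to rank "most easily operable" is by distance from the detected occupant position (e.g., a hand) to each operation inputting part. The coordinates, part names, and nearest-part criterion below are invented for illustration.

```python
def most_operable_part(hand_pos, parts):
    """parts: dict mapping part name -> (x, y) position in a cabin
    coordinate frame. Returns the part nearest to the occupant's
    detected hand position, as a proxy for easiest operability."""
    def dist2(p):
        return (p[0] - hand_pos[0]) ** 2 + (p[1] - hand_pos[1]) ** 2
    return min(parts, key=lambda name: dist2(parts[name]))

# Hypothetical operation inputting parts in the vehicle
parts = {"steering_switch": (0.0, 0.5),
         "center_console": (0.4, 0.2),
         "rear_panel": (0.4, -1.0)}
```

The consent request for the scheduled processing would then be validated only on the part this selection returns.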

Apparatus for automated monitoring of facial images and a process therefor

Provided herein is an apparatus for automated monitoring of facial images. The apparatus includes a cabinet having at least one video capturing device for continuously capturing video. The apparatus also includes means for analyzing frames to identify human facial images and for cropping facial images with date and time information, if detected, and at least one means for instantaneously transmitting the cropped facial images with date and time to at least one predefined storage unit operating in an unattended mode. The predefined storage unit is operatively connected to the cabinet. The apparatus can be configured for headless startup by means of built-in application software. A process for automated monitoring of facial images for surveillance purposes in a monitoring apparatus is also provided.
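
The crop-and-timestamp step can be sketched as below, assuming a frame is a 2-D list of pixels and a face detection is a bounding box; the record layout is an invention for illustration, not the patent's format.

```python
import datetime

def crop_with_timestamp(frame, box, captured_at=None):
    """frame: 2-D list of pixels; box: (x, y, w, h) face detection.
    Cut the face region out of the frame and tag it with the capture
    date and time before transmission to the storage unit."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in frame[y:y + h]]
    when = captured_at or datetime.datetime.now()
    return {"face": crop,
            "date": when.date().isoformat(),
            "time": when.time().isoformat(timespec="seconds")}

# A synthetic 6x6 frame whose pixel values encode (row, column)
frame = [[c + 10 * r for c in range(6)] for r in range(6)]
record = crop_with_timestamp(frame, (1, 2, 3, 2),
                             datetime.datetime(2020, 4, 2, 9, 30, 0))
```

In an unattended deployment, each such record would be sent immediately to the predefined storage unit rather than retained in the cabinet.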

Multiple-parts based vehicle detection integrated with lane detection for improved computational efficiency and robustness

The presence of target vehicles in front of a host vehicle is detected by obtaining, using one or more visual sensors, an image of the field of view in front of the host vehicle and detecting, individually, a plurality of parts of one or more target vehicles in the obtained image. The detected plurality of parts of the one or more target vehicles are extracted from the obtained image and paired to form a complete target vehicle from the plurality of parts. The pairing is performed only on individual parts that overlap and have similar sizes, indicating that they belong to the same target vehicle. A complete target vehicle is detected in response to forming a substantially complete target vehicle.
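
The pairing rule above can be sketched as follows: two detected parts are paired only when their bounding boxes overlap and their areas are similar. The overlap test, the size-ratio threshold, and the greedy pairing order are illustrative assumptions.

```python
def overlap(a, b):
    """Intersection area of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return ix * iy

def similar_size(a, b, max_ratio=2.0):
    """Accept a pair only when neither box area dominates the other
    by more than the (illustrative) ratio."""
    area_a, area_b = a[2] * a[3], b[2] * b[3]
    big, small = max(area_a, area_b), min(area_a, area_b)
    return big <= max_ratio * small

def pair_parts(parts):
    """Greedily pair overlapping, similarly sized part detections;
    each paired set would seed one candidate target vehicle."""
    pairs = []
    used = set()
    for i, a in enumerate(parts):
        for j in range(i + 1, len(parts)):
            if i in used or j in used:
                continue
            b = parts[j]
            if overlap(a, b) > 0 and similar_size(a, b):
                pairs.append((i, j))
                used.update({i, j})
    return pairs

wheel = (0, 0, 4, 4)
bumper = (2, 2, 5, 4)        # overlaps the wheel, similar size
far_sign = (40, 0, 1, 1)     # no overlap: never paired
```

Restricting pairing to overlapping, size-consistent parts is what keeps the combinatorics cheap, since most part pairs are rejected before any vehicle hypothesis is formed.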