G06V10/25

GROUND ENGAGING TOOL WEAR AND LOSS DETECTION SYSTEM AND METHOD

An example wear detection system receives a plurality of images from a plurality of sensors associated with a work machine. Individual sensors of the plurality of sensors have respective fields-of-view different from other sensors of the plurality of sensors. The wear detection system identifies a first region of interest and a second region of interest associated with at least one ground engaging tool (GET). The wear detection system determines a first set of image points and a second set of image points for the at least one GET based on geometric parameters associated with the GET, and determines a GET measurement from the image points. The wear detection system determines a wear level or loss for the at least one GET based on the GET measurement.
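The final steps of the abstract above — deriving a GET measurement from two sets of image points and comparing it against nominal geometry — can be sketched roughly as follows. The function names, the centroid-distance measurement, and the 50% loss threshold are illustrative assumptions, not details taken from the patent:

```python
import math

def get_measurement(points_a, points_b):
    """Distance between the centroids of two sets of image points,
    e.g. the tip and base regions of a ground engaging tool (GET)."""
    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    (ax, ay), (bx, by) = centroid(points_a), centroid(points_b)
    return math.hypot(bx - ax, by - ay)

def wear_level(measured_len, nominal_len, loss_threshold=0.5):
    """Fractional wear relative to the nominal GET length; flags a
    possible tool loss when wear exceeds the assumed threshold."""
    wear = max(0.0, 1.0 - measured_len / nominal_len)
    return wear, wear >= loss_threshold
```

For example, a measured tip-to-base distance of 80 against a nominal length of 100 would give 20% wear and no loss flag under these assumptions.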

LOW POWER MACHINE LEARNING USING REAL-TIME CAPTURED REGIONS OF INTEREST

Systems and methods are described for generating image content. The systems and methods may include, in response to receiving a request to cause a sensor of a computing device to identify image content associated with optical data captured by the sensor, detecting a first sensor data stream having a first image resolution, and detecting a second sensor data stream having a second image resolution. The systems and methods may also include identifying, by processing circuitry of the computing device, at least one region of interest in the first sensor data stream, determining cropping coordinates that define a first plurality of pixels in the at least one region of interest in the first sensor data stream, and generating a cropped image representing the at least one region of interest.
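A minimal sketch of the cropping step described above, assuming the region of interest is detected in the lower-resolution stream and its coordinates are then mapped onto the higher-resolution stream (the (x, y, w, h) bounding-box format and helper names are assumptions for illustration):

```python
def scale_bbox(bbox, low_res, high_res):
    """Map an (x, y, w, h) box found in the low-resolution stream onto
    the high-resolution stream by the ratio of the two resolutions."""
    sx = high_res[0] / low_res[0]
    sy = high_res[1] / low_res[1]
    x, y, w, h = bbox
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

def crop_roi(image, bbox):
    """Extract the pixels inside the box from a row-major image
    (a list of pixel rows), yielding the cropped ROI image."""
    x, y, w, h = bbox
    return [row[x:x + w] for row in image[y:y + h]]
```

Detecting in the low-resolution stream and cropping only the mapped region from the high-resolution stream is one way the described approach could save power: full-resolution pixels outside the ROI never need to be processed.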

METHOD, COMPUTER PROGRAM, AND APPARATUS FOR CONTROLLING IMAGE ACQUISITION DEVICE

A method of controlling an image acquisition device for tracking a target object includes: detecting an event in which tracking of a first object, which is a tracking target object, fails in a first image acquired by the image acquisition device; determining, in the first image, a reference object which is used as a reference for controlling the image acquisition device; controlling the image acquisition device such that at least one of an image capturing range and an image capturing direction of the image acquisition device is adjusted based on at least one of a size and a location of the reference object in the first image; and recognizing the first object in a second image acquired by the image acquisition device in a state in which at least one of the image capturing range and the image capturing direction is adjusted.
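One reading of the adjustment step above: if the reference object appears smaller in the first image than some target apparent size, the image capturing range is narrowed (zoomed in) proportionally before re-attempting recognition. A rough sketch under that assumption — the proportional rule, the pixel-size inputs, and the zoom limits are all illustrative, not from the patent:

```python
def adjust_zoom(current_zoom, ref_size_px, target_size_px,
                min_zoom=1.0, max_zoom=10.0):
    """Scale the zoom so the reference object would appear at roughly
    the target apparent size, clamped to the device's zoom limits."""
    factor = target_size_px / ref_size_px
    return min(max_zoom, max(min_zoom, current_zoom * factor))
```

After adjusting, the tracker would attempt to recognize the lost target in the second image captured under the new zoom setting.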

METHOD AND APPARATUS FOR IDENTIFYING PARTITIONS ASSOCIATED WITH ERRATIC PEDESTRIAN BEHAVIORS AND THEIR CORRELATIONS TO POINTS OF INTEREST
20230052037 · 2023-02-16 ·

An approach is provided for identifying partitions associated with erratic pedestrian behaviors and their correlations to points of interest. For example, the approach involves receiving sensor data associated with a geographic area. The approach also involves determining, based on the sensor data, pedestrian-behavior parameter(s) for respective partition(s). Each respective partition of the partition(s) represents a respective subarea of the geographic area, a respective time period, or a combination thereof. The approach further involves identifying at least one erratic partition from the partition(s) based on determining that a respective pedestrian-behavior parameter associated with the at least one erratic partition deviates from a baseline pedestrian-behavior parameter by at least a threshold extent. The approach further involves determining a correlation of the at least one erratic partition to at least one map feature of a geographic database. The approach further involves providing the correlation as an output.
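The erratic-partition test above reduces to a threshold on deviation from a baseline. A minimal sketch, assuming each partition's behavior parameter is a single scalar (the data shapes and names are illustrative assumptions):

```python
def erratic_partitions(partition_params, baseline, threshold):
    """Return the ids of partitions whose pedestrian-behavior
    parameter deviates from the baseline by at least the threshold."""
    return [pid for pid, value in partition_params.items()
            if abs(value - baseline) >= threshold]
```

For example, with a baseline of 2.0 and a threshold of 2.5, only a partition whose parameter is 5.0 or more (or -0.5 or less) would be flagged as erratic.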

DYNAMIC CAPTURE PARAMETER PROCESSING FOR LOW POWER

In one general aspect, a method can include capturing, using an image sensor, a first raw image at a first resolution, converting the first raw image to a digitally processed image using an image signal processor, and analyzing at least a portion of the digitally processed image based on a processing condition. The method can include determining that the first resolution does not satisfy the processing condition; and triggering capture of a second raw image at the image sensor at a second resolution greater than the first resolution.
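The capture-then-escalate loop described above can be sketched as follows, with the capture callback and the processing condition left abstract (all names here are illustrative, and the patent describes only a single escalation step, generalized here to a resolution ladder):

```python
def capture_with_escalation(capture, resolutions, satisfies):
    """Capture at each resolution in ascending order, triggering a
    higher-resolution re-capture whenever the processed image does
    not satisfy the processing condition."""
    image = None
    for res in resolutions:
        image = capture(res)
        if satisfies(image):
            return image, res
    return image, res  # highest resolution reached; best effort
```

The power saving comes from the early exit: if the low-resolution capture already satisfies the condition, the more expensive high-resolution capture is never triggered.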

SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMAGE ANALYSIS

Image analytics systems, methods, and computer program products autonomously analyze an image to identify and detect features in the image, such as the horizon, and/or to identify and detect objects of interest therein, such as smoke or possible smoke. The image is captured, for example, by RGB cameras and depicts a scene to be analyzed. The intelligent image analytics system is configured to provide alerts and/or other information to one or more concerned parties and/or computing systems so that an appropriate response can be taken.

OBJECT RECOGNITION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM
20230045897 · 2023-02-16 ·

An object recognition apparatus includes at least one processor and at least one memory communicably coupled to the processor. The processor sets one or more object presence regions in each of which an object is likely to be present, on the basis of observation data of a distance sensor. The processor estimates an attribute of the object that is likely to be present in each of the one or more object presence regions. The processor sets, on the basis of the attribute of the object, a level of processing load to be spent on object recognition processing to be executed for each of the one or more object presence regions by using at least image data generated by an imaging device. The processor performs, for each of the one or more object presence regions, the object recognition processing corresponding to the set level of the processing load.
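The attribute-dependent load assignment above can be sketched as a simple lookup: each object-presence region gets a processing-load level keyed by its estimated attribute. The concrete attributes, level values, and default rule below are assumptions for illustration; the patent does not specify them:

```python
# Illustrative attribute-to-load mapping; higher means more recognition
# effort is spent on that region's image data.
LOAD_LEVELS = {"pedestrian": 3, "vehicle": 2, "static_obstacle": 1}

def plan_recognition(regions):
    """Pair each object-presence region with its processing-load level,
    defaulting to the highest level when the attribute is unknown."""
    top = max(LOAD_LEVELS.values())
    return [(r, LOAD_LEVELS.get(r["attribute"], top)) for r in regions]
```

Defaulting unknown attributes to the highest load is a conservative choice: when the distance-sensor-based estimate is uncertain, the region still receives full recognition processing.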

AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
20230050636 · 2023-02-16 ·

A method for performing a procedure on a patient includes acquiring a three-dimensional image of a location of interest on the patient and acquiring a two-dimensional image of the location of interest. A computer system can relate the three-dimensional image with the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram, based on the holographic image dataset, relative to the patient. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. The practitioner can view the hologram with the augmented reality system and perform the procedure on the patient. The practitioner can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image during the procedure.
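Cross-referencing a point on the three-dimensional (CT) projection with its counterpart on the two-dimensional (fluoroscopic) projection amounts to applying the registered imaging geometry's projective mapping. A minimal pinhole-style sketch via homogeneous coordinates, where the 3x4 matrix P is an assumed stand-in for the registered C-arm geometry (not a detail taken from the patent):

```python
def project(point3d, P):
    """Map a 3-D point through a 3x4 projection matrix P to the
    corresponding 2-D image point, via homogeneous coordinates."""
    x, y, z = point3d
    h = [sum(P[i][j] * v for j, v in enumerate((x, y, z, 1.0)))
         for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])
```

With such a mapping, a point the practitioner visualizes on the three-dimensional projection can be rendered at its corresponding location on the two-dimensional projection of the hologram.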