
OBJECT TRACKING BY EVENT CAMERA
20230021408 · 2023-01-26

A tracking system is disclosed utilizing one or more dynamic vision sensors (e.g., an event camera) configured to generate luminance-transition events associated with a target object; a depth estimation unit configured to generate, based on the luminance-transition events, depth data/signals indicative of a distance of the target object from the event camera; a spatial tracking unit configured to generate, based on the luminance-transition events, spatial tracking signals/data indicative of transitions of the target object in the scene; and an error correction unit configured to process the depth and spatial tracking data/signals and generate error-correcting data/signals for the tracking of the target object by the one or more dynamic vision sensors.
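
The claim language maps naturally onto a small processing pipeline. Below is a minimal Python sketch of that structure; the event format, the width-based depth model, and the blending rule in the error-correction step are assumptions for illustration, not the disclosed method.

```python
import numpy as np

# Hypothetical event rows: (x_px, y_px, timestamp_us, polarity).
def estimate_depth(events, object_width_m, focal_px):
    """Depth unit (illustrative): pinhole model on the event cluster's width."""
    width_px = events[:, 0].max() - events[:, 0].min() + 1
    return object_width_m * focal_px / width_px

def track_position(events):
    """Spatial tracking unit (illustrative): centroid of the event cluster."""
    return events[:, :2].mean(axis=0)

def error_correct(prev_state, centroid, depth, alpha=0.5):
    """Error correction unit (illustrative): blend measurement with prior state."""
    measurement = np.array([centroid[0], centroid[1], depth])
    return alpha * measurement + (1.0 - alpha) * prev_state

events = np.array([[100, 80, 0, 1], [140, 82, 5, -1], [120, 90, 9, 1]], dtype=float)
state = np.array([118.0, 84.0, 3.5])  # (x_px, y_px, depth_m)
state = error_correct(state, track_position(events), estimate_depth(events, 0.2, 800.0))
print(state)  # corrected track: x, y in pixels, depth in meters
```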

Autonomous vehicle control method, system, and medium

Apparatus and methods for identification of a coded pattern visible to a computerized imaging apparatus while invisible or inconspicuous to human eyes. A pattern and/or marking may serve to indicate the identity of an object and/or the relative position of the pattern to a viewer. While some solutions exist for identifying patterns (for example, QR codes), they may be visually obtrusive to a human observer due to visual clutter. In exemplary implementations, apparatus and methods are capable of generating patterns with sufficient structure to be used for either discrimination or some aspect of localization, while incorporating spectral properties that are more aesthetically acceptable, such as being (a) imperceptible or subtle to the human observer and/or (b) aligned to an existing acceptable visual form, such as a logo. In one variant, a viewer comprises an imaging system comprising a processor and a laser scanner, a camera, or a moving photodiode.
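
One plausible way to recover such a code, sketched below under assumptions the abstract does not spell out: the marking modulates a spectral band the imaging apparatus senses strongly but the eye does not, so differencing a sensing channel against a visible-reference channel isolates the pattern.

```python
import numpy as np

# Assumed inputs: two co-registered channels of the same scene. The 0.1
# threshold and the channel arrangement are illustrative choices.
def extract_pattern(sensing_channel, reference_channel, threshold=0.1):
    """Return a binary mask of the embedded code."""
    diff = sensing_channel.astype(float) - reference_channel.astype(float)
    return diff / 255.0 > threshold

rng = np.random.default_rng(0)
reference = rng.integers(0, 200, (8, 8)).astype(np.uint8)  # visible appearance
sensing = reference.copy()
sensing[2:6, 2:6] += 40  # subtle modulation carrying the code
print(extract_pattern(sensing, reference).astype(int))
```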

Image noise compensating system, and auto clean machine
11703595 · 2023-07-18

An image noise compensating system, comprising: a distance determining device, configured to determine whether a distance is larger than a distance threshold; an image sensor, comprising at least one image sensing unit, wherein the image sensor forms a combined image sensing unit when the distance is smaller than the distance threshold and senses images without forming the combined image sensing unit when the distance is larger than the distance threshold, wherein a width of an area that the combined image sensing unit can sense is larger than a width of an area that the image sensing unit can sense; a noise compensating circuit, configured to compensate for image noise; and a control circuit, configured to calculate a location of the image noise compensating system.
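
A minimal sketch of the distance-dependent readout follows, assuming the combined unit is realized by 2x2 pixel binning; the patent fixes neither the binning factor nor the threshold value.

```python
import numpy as np

def sense(image, distance_m, threshold_m=0.5, bin_factor=2):
    """Form combined image sensing units only below the distance threshold."""
    if distance_m >= threshold_m:
        return image  # normal per-pixel readout
    h, w = image.shape
    h, w = h - h % bin_factor, w - w % bin_factor
    binned = image[:h, :w].reshape(h // bin_factor, bin_factor,
                                   w // bin_factor, bin_factor)
    return binned.sum(axis=(1, 3))  # each combined unit senses a wider area

img = np.arange(16, dtype=float).reshape(4, 4)
print(sense(img, 0.3).shape, sense(img, 1.0).shape)  # (2, 2) (4, 4)
```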

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND PROGRAM
20230222648 · 2023-07-13

The present disclosure provides an information processing method for performing the following steps, in which captured image data including at least the robot arm is acquired every predetermined period: a step of causing a captured image data acquisition unit to acquire captured image data of an imaging target including at least a robot arm and a control object; a step of causing a control unit to change a state of the control object every predetermined period based on user settings; an image comparison step of causing an image comparison unit to compare the captured image data with reference image data; and a step of causing a result information acquisition unit to detect a predetermined state change based on a result of the comparison in the image comparison step, acquire result information regarding the work of the robot arm, and store the result information in a result information storage unit.
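
Sketched below is how the periodic capture-compare-store loop could look in Python; the comparison metric, tolerance, and the in-memory "storage" are placeholders for the claimed units.

```python
import numpy as np

def compare(frame, reference, tol=10.0):
    """Image comparison step (illustrative): mean absolute pixel difference."""
    return float(np.abs(frame - reference).mean()) > tol

results = []                      # stands in for the result information storage unit
reference = np.zeros((4, 4))      # reference image data
for period in range(3):
    controlled_on = period % 2 == 1                        # control unit changes state
    frame = reference + (50.0 if controlled_on else 0.0)   # captured image data
    if compare(frame, reference):                          # predetermined state change
        results.append({"period": period, "change": True})  # result information
print(results)
```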

Method and Apparatus for Vision-Based Tool Localization

A method for vision-based tool localization (VTL) in a robotic assembly system including one or more calibrated cameras, the method comprising capturing a plurality of images of a tool contact area from a plurality of different vantage points, determining an estimated position of the tool contact area based on a first of the images, and refining the estimated position based on a second image captured from a different vantage point. The method further comprises providing the refined position to the robotic assembly system to enable accurate control of the tool by the robotic assembly system.
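
The coarse-then-refine flow can be illustrated with least-squares ray intersection, a standard multi-view technique; the camera geometry below is invented, and the patent does not commit to this particular refinement.

```python
import numpy as np

def closest_point_to_rays(origins, dirs):
    """Least-squares point minimizing distance to all rays (o_i, d_i)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

origins = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
target = np.array([0.4, 0.2, 1.0])                       # true contact point
dirs = [target - o for o in origins]                     # ideal image rays
coarse = origins[0] + dirs[0] / np.linalg.norm(dirs[0])  # single-view guess, assumed range
refined = closest_point_to_rays(origins, dirs)           # second vantage point added
print(coarse.round(3), refined.round(3))
```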

Image-capturing unit and component-mounting device
11557109 · 2023-01-17

The image-capturing unit includes an imaging section; a holding section configured to hold a subject to be imaged by the imaging section; a light irradiation section configured to select light of one or more light sources out of multiple light sources having different wavelengths, and to irradiate the subject held in the holding section with the light; a storage section configured to store a correspondence among a color of the light emitted for irradiating the subject by the light irradiation section, a material of an irradiation surface irradiated with the light, and a resolution representing the number of pixels per unit length; and an image processing section configured to obtain the resolution from the correspondence, based on the color of the light emitted for irradiating the subject and the material of the irradiation surface of the subject, and to process a subject image by using the resolution.
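
The stored correspondence reads like a lookup table; a minimal sketch follows, with invented entries (the light colors, materials, and pixels-per-mm values are placeholders, not the patent's data).

```python
# Correspondence: (light color, surface material) -> resolution (px per mm).
RESOLUTION_TABLE = {
    ("red", "ceramic"): 120.0,
    ("red", "metal"): 95.0,
    ("blue", "ceramic"): 140.0,
    ("blue", "metal"): 110.0,
}

def resolution_for(light_color: str, material: str) -> float:
    """Image processing section: obtain the resolution from the correspondence."""
    return RESOLUTION_TABLE[(light_color, material)]

def pixels_to_length_mm(pixel_count: int, light_color: str, material: str) -> float:
    """Process a measurement on the subject image using the looked-up resolution."""
    return pixel_count / resolution_for(light_color, material)

print(pixels_to_length_mm(240, "blue", "ceramic"))  # 240 px -> ~1.71 mm
```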

Spacing-aware plant detection model for agricultural task control

Methods and systems for controlling robotic actions for agricultural tasks are disclosed which use a spacing-aware plant detection model. A disclosed method, in which all steps are computer-implemented, includes receiving, using an imager moving along a crop row, at least one image of at least a portion of the crop row. The method also includes using the at least one image, a plant detection model, and an average inter-crop spacing for the crop row to generate an output from the plant detection model. The plant detection model is spacing-aware in that its output is altered or overridden based on the average inter-crop spacing. The method also includes outputting a control signal for the robotic action based on the output from the spacing-aware plant detection model. The method also includes conducting the robotic action for the agricultural task in response to the control signal.
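
As a concrete (and hypothetical) illustration of altering or overriding detections with the average inter-crop spacing, the sketch below suppresses low-confidence detections that fall too close to the previous plant and infers a plant where a gap spans roughly two expected spacings; the tolerances and score values are assumptions.

```python
def apply_spacing_prior(detections, avg_spacing_m, tol=0.25):
    """detections: list of (position_m_along_row, confidence_score)."""
    out = []
    last_pos = None
    for pos, score in sorted(detections):
        if last_pos is not None:
            gap = pos - last_pos
            if gap < (1 - tol) * avg_spacing_m and score < 0.5:
                continue  # override: too close to the last plant, likely spurious
            if gap > (2 - tol) * avg_spacing_m:
                out.append((last_pos + avg_spacing_m, 0.4))  # alter: infer missed plant
        out.append((pos, score))
        last_pos = pos
    return out

dets = [(0.00, 0.9), (0.12, 0.3), (0.30, 0.8), (0.92, 0.85)]
print(apply_spacing_prior(dets, avg_spacing_m=0.30))
```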

Image-based solar estimates

An example device is configured to determine, based on a sky image of a portion of sky over a power distribution network and using a convolutional neural network (CNN)-based image regression model, an estimated global horizontal irradiance (GHI) value and manage or control the power distribution network using the estimated GHI value. The device may also be configured to determine, based on GHI values and aggregate load values for at least a portion of the power distribution network, using a Bayesian Structural Time Series model, an estimated photovoltaic power output value for the at least a portion of the power distribution network. The device may manage or control the power distribution network using the estimated photovoltaic power output value.
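
A CNN-based image regression model of the kind described might look like the following PyTorch sketch; the architecture, input size, and normalization are illustrative, and the Bayesian Structural Time Series stage is omitted.

```python
import torch
import torch.nn as nn

class GHIRegressor(nn.Module):
    """Illustrative CNN that regresses a sky image to a scalar GHI value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # regression head: estimated GHI (W/m^2)

    def forward(self, sky_image):
        z = self.features(sky_image).flatten(1)
        return self.head(z)

model = GHIRegressor()
sky = torch.rand(1, 3, 64, 64)  # one normalized sky image
ghi_estimate = model(sky)
print(ghi_estimate.shape)       # torch.Size([1, 1])
```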

Machine vision system and interactive graphical user interfaces related thereto

Machine vision devices may be configured to automatically connect to a remote management server (e.g., a “cloud”-based management server), and may offload and/or communicate images and analyses to the remote management server via wired or wireless communications. The machine vision devices may further communicate with the management server, user computing devices, and/or human machine interface devices, e.g., to provide remote access to the machine vision device, provide real-time information from the machine vision device, receive configurations/updates, provide interactive graphical user interfaces, and/or the like.
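
The offload path could be as simple as an HTTP POST from the device to the management server; the endpoint, payload schema, and encoding below are assumptions, since the abstract describes behavior rather than an API.

```python
import json
import urllib.request

def offload(server_url, device_id, image_bytes, analysis):
    """Post an image and its analysis to a (hypothetical) management endpoint."""
    payload = {
        "device_id": device_id,
        "image_hex": image_bytes.hex(),  # naive encoding, for the sketch only
        "analysis": analysis,            # e.g., pass/fail inspection result
    }
    req = urllib.request.Request(
        server_url + "/api/results",     # assumed endpoint, not from the patent
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call; may raise
        return resp.status

# offload("https://management.example", "cam-01", b"\x89PNG...", {"pass": True})
```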

Computer vision based safety hazard detection

Devices and techniques are generally described for computer-vision-based safety hazard detection. A frame of image data representing a physical environment is received. In some examples, a first object represented in the frame of image data may be detected. A determination may be made that the first object is of a first class. A first zone represented in the frame of image data may be identified. The first zone may correspond to a ground surface of the physical environment. A determination may be made that the first object at least partially overlaps with the first zone. A first rule associated with the first zone may be determined. The first rule may restrict objects of the first class from being present within the first zone. Output data may be generated indicating that the first object is at least partially within the first zone, in violation of the first rule.
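
The claimed flow reduces to an overlap-plus-rule check; below is a minimal sketch, assuming axis-aligned bounding boxes for both detected objects and ground-surface zones.

```python
def boxes_overlap(a, b):
    """Axis-aligned boxes as (x1, y1, x2, y2); True if they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def check_violations(detections, zones):
    """Flag objects whose class a zone's rule restricts from that zone."""
    events = []
    for obj_box, obj_class in detections:
        for zone_box, restricted_classes in zones:
            if obj_class in restricted_classes and boxes_overlap(obj_box, zone_box):
                events.append({"class": obj_class, "violation": True})
    return events

detections = [((50, 200, 120, 300), "forklift")]
zones = [((0, 180, 400, 400), {"forklift", "pallet"})]  # pedestrian-only floor zone
print(check_violations(detections, zones))
```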