Patent classifications
G06T7/207
Digital foveation for machine vision
A machine vision method includes obtaining a first representation of an image captured by an image sensor array; analyzing the first representation to assess whether it is sufficient to support execution of a machine vision task by a processor; if the first representation is not sufficient, determining, based on the first representation, a region of the image of interest for the execution of the machine vision task; reusing the image captured by the image sensor array to obtain a further representation of the image by directing the image sensor array to sample the image in a manner guided by the determined region of interest and by the assessment; and analyzing the further representation to assess whether it is sufficient to support the execution of the machine vision task by implementing a procedure for the execution of the machine vision task in accordance with the further representation.
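The coarse-to-fine sampling loop described in this abstract might be sketched as follows. All names, the stride schedule, and the confidence threshold are illustrative stand-ins, not details from the patent:

```python
# A minimal sketch of the claimed foveation loop: sample coarsely, test
# sufficiency, and if insufficient re-sample the retained sensor image
# more densely inside a region of interest. The sufficiency test
# (task_confidence) and ROI detector (find_roi) are illustrative stubs.

def subsample(image, step, region=None):
    """Sample the retained sensor image at the given stride, optionally
    restricted to a (row0, row1, col0, col1) region of interest."""
    r0, r1, c0, c1 = region or (0, len(image), 0, len(image[0]))
    return [row[c0:c1:step] for row in image[r0:r1:step]]

def foveate(image, task_confidence, find_roi, threshold=0.9):
    step = 8                      # start with a coarse first representation
    region = None                 # whole frame on the first pass
    rep = subsample(image, step, region)
    while task_confidence(rep) < threshold and step > 1:
        region = find_roi(rep)    # region of interest, in image coordinates
        step //= 2                # sample more densely, guided by the ROI
        rep = subsample(image, step, region)
    return rep
```

The key point the claim hinges on is that the same captured image is reused: each pass re-reads the sensor array rather than acquiring a new frame.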
IMAGE PROCESSING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
Disclosed are an image processing method, an electronic device, and a storage medium. The method includes obtaining feature information of a first region in a current image frame, the first region including a region determined by performing optical-flow-based motion estimation on the current and previous image frames; obtaining feature information of a second region in the current image frame, the second region including a region corresponding to those pixel points among first pixel points of the current image frame whose association with pixel points among second pixel points of the previous image frame satisfies a condition; and, based on the feature information of the first region and that of the second region, fusing the previous and current image frames to obtain a processed current image frame, which serves as the previous image frame for the next image frame.
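The region-guided fusion step might be sketched as a per-pixel blend, applied only where a pixel falls inside one of the two regions. The membership predicates and the blend weight are illustrative; the patent does not specify them:

```python
# Hypothetical sketch of the fusion step: blend the previous frame into
# the current one inside the motion-estimated region (in_region1) or the
# pixel-association region (in_region2); keep other pixels unchanged.

def fuse_frames(prev, cur, in_region1, in_region2, alpha=0.5):
    out = []
    for r, row in enumerate(cur):
        out_row = []
        for c, p in enumerate(row):
            if in_region1(r, c) or in_region2(r, c):
                out_row.append(alpha * prev[r][c] + (1 - alpha) * p)
            else:
                out_row.append(p)
        out.append(out_row)
    return out  # becomes the "previous frame" for the next iteration
```

Feeding the fused output back in as the previous frame makes the blend recursive, which is what lets per-frame noise average out over time.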
METHOD AND DEVICE FOR OBJECT TRACKING
The present disclosure relates to a computer-implemented method for object tracking. The method includes defining a state-space of interest based on a class of objects subject to tracking; representing the state-space of interest using an FEM representation that partitions the state-space into elements; initiating a state-space distribution defining a probability density over the states of at least one tracked object in the state-space of interest; updating the state-space distribution based on evidence, the evidence being at least one of sensor data and external data of at least one tracked object in said class of objects; and propagating the state-space distribution of the at least one tracked object over a time period.
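On a one-dimensional state-space discretized into elements, the update-then-propagate cycle might look like the following. The diffusion kernel standing in for the motion model is an assumption, not taken from the disclosure:

```python
# Illustrative sketch: the state-space density is a list of probability
# masses, one per element. "update" applies Bayesian evidence weighting;
# "propagate" spreads mass to neighbouring elements over one time step.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def update(density, likelihood):
    """Bayes update: element-wise product with the evidence likelihood."""
    return normalize([d * l for d, l in zip(density, likelihood)])

def propagate(density, diffusion=0.25):
    """Diffuse mass to adjacent elements as a stand-in motion model."""
    n = len(density)
    out = [0.0] * n
    for i, d in enumerate(density):
        out[i] += (1 - 2 * diffusion) * d
        out[max(i - 1, 0)] += diffusion * d
        out[min(i + 1, n - 1)] += diffusion * d
    return normalize(out)
```

A real FEM representation would use basis functions over each element rather than a single mass per element, but the update/propagate alternation is the same.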
Analysis and visualization of subtle motions in videos
Example embodiments allow for fast, efficient motion-magnification of video streams by decomposing image frames of the video stream into local phase information at multiple spatial scales and/or orientations. The phase information for each image frame is then scaled to magnify local motion and the scaled phase information is transformed back into image frames to generate a motion-magnified video stream. Scaling of the phase information can include temporal filtering of the phase information across image frames, for example, to magnify motion at a particular frequency. In some embodiments, temporal filtering of phase information at a frequency of breathing, cardiovascular pulse, or some other process of interest allows for motion-magnification of motions within the video stream corresponding to the breathing or the other particular process of interest. The phase information can also be used to determine time-varying motion signals corresponding to motions of interest within the video stream.
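The core idea, amplifying motion by scaling phase differences, can be shown on a one-dimensional signal with a plain DFT instead of the multi-scale, multi-orientation decomposition the embodiments use. Everything below is a simplified stand-in for illustration:

```python
# Sketch of phase-based motion magnification on a 1-D "frame": a shift
# of d samples between two frames becomes a shift of (1 + alpha) * d
# after the per-frequency phase difference is scaled by (1 + alpha).
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def magnify_motion(frame0, frame1, alpha):
    F0, F1 = dft(frame0), dft(frame1)
    out = []
    for a, b in zip(F0, F1):
        dphi = cmath.phase(b) - cmath.phase(a)   # motion shows up as phase
        out.append(abs(b) * cmath.exp(1j * (cmath.phase(a) + (1 + alpha) * dphi)))
    return idft(out)
```

The embodiments' temporal filtering step would band-pass `dphi` across many frames before scaling, so that only motion at a chosen frequency (e.g. a breathing rate) is magnified.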
AUTO-FOCUS TRACKING FOR REMOTE FLYING TARGETS
A system for automatically maintaining focus while tracking remote flying objects includes an interface and a processor. The interface is configured to receive two or more images. The processor is configured to determine a bounding box for an object in the two or more images; determine an estimated position for the object in a future image; and determine an estimated focus setting and an estimated pointing direction for a lens system.
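A minimal version of the prediction step might extrapolate the bounding-box centre with a constant-velocity assumption and derive a focus distance from the box's apparent size. The pinhole-style constant and all geometry below are assumptions for illustration:

```python
# Sketch: predict the next-frame position (pointing direction) from two
# bounding boxes, and a distance proxy (focus setting) from apparent
# width. Boxes are (x0, y0, x1, y1) in pixels.

FOCAL_SCALE = 1000.0  # assumed lens constant: width of 1 px <-> 1000 m

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def predict(box_prev, box_cur):
    (px, py), (cx, cy) = center(box_prev), center(box_cur)
    est = (2 * cx - px, 2 * cy - py)   # constant-velocity extrapolation
    width = box_cur[2] - box_cur[0]
    distance = FOCAL_SCALE / width     # pinhole model: distance ~ 1/size
    return est, distance
```

The estimated position drives the pointing direction and the distance proxy drives the focus setting; a production tracker would filter both over more than two frames.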
Intent-based dynamic change of region of interest of vehicle perception system
The present disclosure provides a perception system for a vehicle. The perception system includes a perception filter for determining a region of interest (“ROI”) for the vehicle based on an intent of the vehicle and a current state of the vehicle; and a perception module for perceiving an environment of the vehicle based on the ROI; wherein the vehicle is caused to take appropriate action based on the perceived environment, the current state of the vehicle, and the intent of the vehicle.
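The perception filter's mapping from intent and state to an ROI could be as simple as the following. The intents, distances, and speed scaling are invented for illustration and are not from the disclosure:

```python
# Sketch of an intent-based perception filter: pick a look-ahead region
# (metres) from the vehicle's intent, and extend it with current speed.

def region_of_interest(intent, speed_mps):
    ahead = 20 + 2 * speed_mps  # look further ahead at higher speed
    if intent == "turn_left":
        return {"ahead": ahead, "left": 15, "right": 5}
    if intent == "turn_right":
        return {"ahead": ahead, "left": 5, "right": 15}
    return {"ahead": ahead, "left": 5, "right": 5}  # e.g. lane keeping
```

The point of the claim is that the downstream perception module only processes this region, so the ROI shrinks or redirects compute as the vehicle's intent changes.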