Patent classifications
G06T7/248
OBJECT TRACKING METHOD AND OBJECT TRACKING APPARATUS
An object tracking method and an object tracking apparatus, which are adapted for low-latency applications, are provided. In the method, object detection is performed on one of a series of continuous image frames to identify a target. The continuous image frames are temporarily stored. Object tracking is then performed on the temporarily stored continuous image frames according to a result of the object detection, wherein the object tracking associates the target in one of the continuous image frames with the target in another of the continuous image frames. Accordingly, the accuracy of object tracking may be improved, and the requirement for low latency may be satisfied.
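The association step can be illustrated with a minimal sketch: greedy matching of previously tracked targets to new detections by bounding-box overlap (IoU). The greedy IoU strategy, the box format, and the `associate` helper are illustrative assumptions; the patent does not fix a particular association algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(prev_targets, detections, threshold=0.3):
    """Greedily match each previously tracked target (id -> box) to the
    unused detection with the highest IoU in the next buffered frame."""
    matches, used = {}, set()
    for tid, box in prev_targets.items():
        best, best_iou = None, threshold
        for i, det in enumerate(detections):
            if i in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = detections[best]
            used.add(best)
    return matches
```

Because the frames are buffered, the same association can be run over the stored sequence once a detection result becomes available, which is what allows the low-latency pipeline described above.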
Systems and Methods for Adaptive Beam Steering for Throughways
Systems and methods for monitoring a throughway using a radio frequency identification (RFID) detection system. The RFID detection system includes (i) an image sensor configured to have a field of view directed towards a lane of the throughway; (ii) an RFID transceiver arrangement configured to interrogate RFID tags disposed on vehicles within the lane of the throughway; and (iii) a controller operatively connected to the image sensor and the RFID transceiver arrangement. The controller is configured to (1) cause the image sensor to capture a frame of image data representative of the lane of the throughway; (2) analyze the frame of image data to detect a presence of a vehicle in the lane of the throughway; (3) based on the analysis, determine a position of the vehicle relative to the RFID transceiver arrangement; and (4) configure an antenna array to generate a beam directed at the position of the vehicle.
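The controller's last two steps can be sketched as follows, assuming a uniform linear antenna array and a vehicle position expressed as a lateral offset and range from the array. The helper names and the classic progressive-phase steering formula are illustrative assumptions, not the patent's specified implementation.

```python
import math

def angle_to_vehicle(lateral_offset_m, range_m):
    """Bearing of the detected vehicle relative to the array boresight, degrees."""
    return math.degrees(math.atan2(lateral_offset_m, range_m))

def steering_phases(num_elements, element_spacing_m, wavelength_m, angle_deg):
    """Per-element phase shifts (radians) that steer a uniform linear
    array's main beam toward angle_deg off broadside."""
    k = 2 * math.pi / wavelength_m  # wavenumber
    step = -k * element_spacing_m * math.sin(math.radians(angle_deg))
    return [(n * step) % (2 * math.pi) for n in range(num_elements)]
```

With half-wavelength element spacing, steering 30 degrees off broadside produces a progressive phase step of pi/2 per element, which the beamformer would apply before interrogating the tag.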
Tracking Soft Tissue in Medical Images
The present invention relates to a medical data processing method of determining the representation of an anatomical body part (2) of a patient (1) in a sequence of medical images, the anatomical body part (2) being subject to a vital movement of the patient (1), the method being configured to be executed by a computer and comprising the following steps: a) acquiring advance medical image data comprising a time-related advance medical image comprising a representation of the anatomical body part (2) in a specific movement phase; b) acquiring current medical image data describing a sequence of current medical images, wherein the sequence comprises a specific current medical image comprising a representation of the anatomical body part (2) in the specific movement phase, and a tracking current medical image which is different from the specific current medical image and comprises a representation of the anatomical body part (2) in a tracking movement phase which is different from the specific movement phase; c) determining, based on the advance medical image data and the current medical image data, specific image subset data describing a specific image subset of the specific current medical image, the specific image subset comprising the representation of the anatomical body part (2); d) determining, based on the current medical image data and the specific image subset data, subset tracking data describing a tracked image subset in the tracking current medical image, the tracked image subset comprising the representation of the anatomical body part (2).
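Step d), locating the image subset in a later movement phase, can be sketched as exhaustive template matching by sum of squared differences over a small search window. Pure-Python lists stand in for image arrays, and the SSD criterion is an illustrative assumption; the patent does not prescribe a specific matching measure.

```python
def track_subset(frame, template, search_origin, search_radius):
    """Locate `template` (2-D list) inside `frame` by exhaustive
    sum-of-squared-differences search around `search_origin` (row, col).
    Returns the (row, col) of the best-matching position."""
    th, tw = len(template), len(template[0])
    oy, ox = search_origin
    best, best_pos = float("inf"), search_origin
    for y in range(max(0, oy - search_radius),
                   min(len(frame) - th, oy + search_radius) + 1):
        for x in range(max(0, ox - search_radius),
                       min(len(frame[0]) - tw, ox + search_radius) + 1):
            ssd = sum((frame[y + i][x + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

Here the template would be the specific image subset extracted in step c), and the search runs in the tracking current medical image near the subset's last known position.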
SEQUENCE RECOGNITION FROM VIDEO IMAGES
Methods, systems, and apparatus for an image recognition system. The image recognition system includes a memory. The memory is configured to store multiple sequences of movements of multiple standard objects. The image recognition system includes a sensor. The sensor is configured to capture image data of a surrounding environment. The image recognition system includes a processor. The processor is coupled to the memory and the sensor. The processor is configured to recognize an object in the image data. The processor is configured to determine a movement of the object based on the image data. The processor is configured to compare the movement of the object in the image data to a sequence of movements of a standard object of the multiple standard objects, and determine that the object is a living being based on the comparison.
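The comparison step might look like the following sketch, which scores an observed displacement sequence against the stored standard sequences by mean squared distance and flags the object as living if any reference matches closely. The distance measure, the (dx, dy) encoding, and the threshold are illustrative assumptions.

```python
def sequence_distance(observed, reference):
    """Mean squared distance between two equal-length movement
    sequences of (dx, dy) per-frame displacements."""
    assert len(observed) == len(reference)
    return sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in zip(observed, reference)) / len(observed)

def is_living_being(observed, standard_sequences, threshold=0.5):
    """Classify the object as living if its movement matches any
    stored standard sequence closely enough."""
    return any(sequence_distance(observed, ref) <= threshold
               for ref in standard_sequences)
```

A stored sequence here plays the role of one "sequence of movements of a standard object" (for example, a typical walking gait) held in the memory.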
Vehicle path restoration system through sequential image analysis and vehicle path restoration method using the same
Disclosed is a vehicle path restoration system based on sequential image analysis, which includes: an image capturing unit that acquires sequential images from a front camera installed in the subject vehicle; an image analysis unit that recognizes multiple lanes in the sequential images acquired by the image capturing unit, calculates multiple paths using the geometric characteristics of the lanes recognized at the current time and the speed of the subject vehicle, and restores the path of the subject vehicle and the path of a front vehicle driving ahead of it; a memory for storing the path data of the subject vehicle and the front vehicle restored by the image analysis unit; and a display unit that presents the path data of the subject vehicle and the front vehicle stored in the memory in the form of a top view.
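The restoration step can be sketched as dead reckoning: integrating the per-frame vehicle speed and a heading recovered from the lane geometry into top-view coordinates. The function name and the assumption that speed and heading are already estimated per frame are illustrative; the patent's geometric lane analysis is not reproduced here.

```python
import math

def restore_path(speeds_mps, headings_deg, frame_interval_s):
    """Reconstruct a top-view path by integrating per-frame speed (m/s)
    and heading (degrees, 0 = straight ahead) estimates. Returns a list
    of (x, y) positions in metres, starting at the origin."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for v, h in zip(speeds_mps, headings_deg):
        step = v * frame_interval_s
        x += step * math.sin(math.radians(h))  # lateral displacement
        y += step * math.cos(math.radians(h))  # longitudinal displacement
        path.append((round(x, 3), round(y, 3)))
    return path
```

The resulting coordinate list is exactly the kind of path data the memory would store and the display unit would render as a top view.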
DEPTH COMPLETION METHOD AND APPARATUS USING SPATIAL-TEMPORAL INFORMATION
Provided are a depth completion method and apparatus using spatial-temporal information. The depth completion apparatus according to the present invention comprises a processor and a memory connected to the processor, wherein the memory stores program instructions executable by the processor for performing operations comprising: receiving an RGB image and a sparse depth image through a camera and a LiDAR; generating a dense first depth map by processing color information of the RGB image through a first branch based on an encoder-decoder; generating a dense second depth map by up-sampling the sparse depth image through a second branch based on an encoder-decoder; generating a third depth map by fusing the first depth map and the second depth map; and generating a final depth map, including the trajectory of a moving object captured in the RGB images continuously acquired during movement, by inputting the third depth map to a convolutional long short-term memory (ConvLSTM) network.
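The fusion of the two dense depth maps can be sketched pixel-wise. Here a simple confidence-weighted blend stands in for the learned fusion described in the patent, and the `lidar_confidence` map is an assumed input rather than something the patent specifies.

```python
def fuse_depth_maps(color_branch_depth, lidar_branch_depth, lidar_confidence):
    """Fuse two dense depth maps (2-D lists, metres) pixel-wise: trust
    the up-sampled LiDAR branch where its confidence weight (0..1) is
    high, and fall back to the color branch elsewhere."""
    fused = []
    for row_c, row_l, row_w in zip(color_branch_depth,
                                   lidar_branch_depth,
                                   lidar_confidence):
        fused.append([w * l + (1 - w) * c
                      for c, l, w in zip(row_c, row_l, row_w)])
    return fused
```

In the described pipeline this fused (third) depth map, computed per frame, is what the ConvLSTM consumes to add temporal consistency across the sequence.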
Stereo camera apparatus, vehicle, and parallax calculation method
A stereo camera apparatus includes a first imaging unit including a first imaging optical system provided with a plurality of lens groups, and a first actuator configured to change a focal length by driving at least one of the plurality of lens groups of the first imaging optical system; a second imaging unit including a second imaging optical system provided with a plurality of lens groups, and a second actuator configured to change a focal length by driving at least one of the plurality of lens groups of the second imaging optical system; a focal length controller configured to output synchronized driving signals to the first and second actuators; and an image processing unit configured to calculate a distance to a subject by using images captured by the first imaging unit and the second imaging unit.
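The image processing unit's distance calculation reduces to standard stereo triangulation, Z = f * B / d, where the focal length f varies with the zoom position set by the focal length controller. This sketch assumes rectified images and an already-measured disparity; the helper name is illustrative.

```python
def distance_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate subject distance from stereo disparity:
    Z = f * B / d, with f the current (zoom-dependent) focal length in
    pixels and B the baseline between the two imaging units in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

Because the two actuators receive synchronized driving signals, both optical systems share the same focal length at capture time, so a single f value is valid for the disparity computation.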
Automatic field of view detection
Implementations are described herein for analyzing a sequence of digital images captured by a mobile vision sensor (e.g., integral with a robot), in conjunction with information (e.g., ground truth) known about movement of the vision sensor, to determine spatial dimensions of object(s) and/or an area captured in a field of view of the mobile vision sensor. These techniques avoid the use of visual indicia of known dimensions and/or other conventional calibration tools, such as checkerboards, for determining spatial dimensions. Instead, the techniques described herein allow spatial dimensions to be determined using fewer resources, and are more scalable than conventional techniques.
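The core idea can be sketched for the simplest case, assuming a purely lateral sensor translation parallel to a planar scene: the known motion and the measured pixel shift of a static feature between two frames yield a metres-per-pixel scale, which converts the image extent into spatial dimensions. Both helper names are illustrative.

```python
def metres_per_pixel(camera_translation_m, feature_shift_px):
    """Ground-plane scale recovered from known sensor motion: the sensor
    moved a known distance while a static feature shifted by a measured
    pixel count between two frames."""
    return camera_translation_m / feature_shift_px

def field_of_view_size(image_w_px, image_h_px, scale_m_per_px):
    """Spatial extent (width_m, height_m) of the imaged area at that scale."""
    return (image_w_px * scale_m_per_px, image_h_px * scale_m_per_px)
```

The key point is that no checkerboard or object of known size appears anywhere: the only metric input is the sensor's own motion.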
System and method for generating accurate hyperlocal nowcasts
A computing system includes at least one processor, and a memory communicatively coupled to the at least one processor. The processor is configured to receive at least two successive radar images of precipitation data, generate a motion vector field using the at least two successive radar images, forecast linear prediction imagery of future precipitation using the motion vector field, and generate corrected output imagery corresponding to the forecasted linear prediction imagery of the future precipitation corrected by a first neural network. In addition, the processor is further configured to receive, by a second neural network, the linear prediction imagery, and one of observed imagery and the corrected output imagery, and distinguish, by the second neural network, between the corrected output imagery and the observed imagery to produce conditioned output imagery. The processor is also configured to display the conditioned output imagery on a display.
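The linear-prediction step can be sketched as advecting the most recent radar field along the motion vector. This toy version uses a single uniform integer-pixel vector, whereas the described system derives a dense motion vector field from the successive images and then corrects the extrapolation with neural networks.

```python
def advect(field, dx, dy):
    """One linear-prediction step of a nowcast: shift the precipitation
    field (2-D list) by (dx, dy) pixels, filling vacated cells with 0."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx  # source cell for this output cell
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = field[sy][sx]
    return out
```

Applying `advect` repeatedly extrapolates the field further into the future; in the patent, the first neural network then corrects this linear forecast and the second (adversarial) network conditions the output against observed imagery.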
METHODS FOR HANDLING OCCLUSION IN AUGMENTED REALITY APPLICATIONS USING MEMORY AND DEVICE TRACKING AND RELATED APPARATUS
A method performed by a device for occlusion handling in augmented reality is provided. The device can generate at least one pixel classification image in a frame including an occluding object and having foreground, background, and unknown pixels. Generation of the at least one pixel classification image can include (1) calculating an initial foreground pixel probability image, and an initial background pixel probability image, and (2) calculating a normalized depth image based on depth information of the occluding object. The device can obtain an alpha mask to blend a virtual object and the foreground of the at least one pixel classification image based on determining a color of the unknown pixels. The device can render a final composition of an augmented reality image containing the virtual object occluded by the occluding object based on applying the alpha mask to pixels in the at least one pixel classification image.
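The final compositing step can be sketched per pixel: the alpha mask weights the real occluding foreground against the virtual object. The pixel-tuple representation and helper name are illustrative assumptions; the patent's mask itself comes from the pixel classification and color estimation described above.

```python
def composite(frame_px, virtual_px, alpha):
    """Blend one RGB pixel of the real occluding foreground over the
    rendered virtual object: alpha = 1 keeps the real occluder,
    alpha = 0 shows the virtual object through."""
    return tuple(round(alpha * f + (1 - alpha) * v)
                 for f, v in zip(frame_px, virtual_px))
```

Fractional alpha values at the unknown pixels are what produce soft, natural-looking occlusion boundaries in the final augmented reality image.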