Patent classifications
G06T7/80
UNDERWATER ORGANISM IMAGING AID SYSTEM, UNDERWATER ORGANISM IMAGING AID METHOD, AND STORAGE MEDIUM
An underwater organism imaging aid system according to an aspect of the present disclosure includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to: detect an underwater organism from an image acquired by a camera, determine a positional relationship between the detected underwater organism and the camera, and output, based on the positional relationship, auxiliary information for moving the camera in such a way that a side face of the underwater organism and an imaging face of the camera face each other.
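The guidance step above can be sketched in two dimensions: if the organism's heading is known, the side face points along the perpendicular, and the auxiliary information is the displacement that brings the camera onto that perpendicular. This is a minimal illustrative sketch; the standoff distance and all names are assumptions, not the patented method.

```python
import math

def camera_guidance(org_pos, org_heading_deg, cam_pos, standoff=2.0):
    """Suggest a camera displacement so the imaging face looks at the
    organism's side face (2-D sketch; standoff distance is assumed)."""
    # The side-face normal is perpendicular to the organism's heading.
    normal = math.radians(org_heading_deg + 90.0)
    target = (org_pos[0] + standoff * math.cos(normal),
              org_pos[1] + standoff * math.sin(normal))
    # Auxiliary information: the move that places the camera on the normal.
    return (target[0] - cam_pos[0], target[1] - cam_pos[1])
```

For an organism at the origin heading along +x, the suggested camera position lies two metres off its flank, perpendicular to its body axis.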
ELECTRONIC DEVICE COMPRISING MULTI-CAMERA, AND PHOTOGRAPHING METHOD
An electronic device having a multi-camera, according to various embodiments of the present disclosure, includes: the multi-camera, a display, a memory, and a processor operatively connected to the camera, the display, and the memory. The processor may be configured to receive a first image being photographed at a first angle of view of the camera, receive a second image being photographed at a second angle of view of the camera, identify a subject in the first image according to predetermined criteria, generate a third image in which the identified subject is cropped according to a predetermined area of interest, and display the second image and the third image on at least a portion of an area in which the first image is displayed. Various other embodiments may be possible.
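The cropping step described above, generating a third image from the identified subject according to a predetermined area of interest, can be sketched as a bounding box expanded by a margin and clamped to the frame. The margin rule and function name here are illustrative assumptions.

```python
import numpy as np

def crop_subject(frame, box, margin=0.2):
    """Crop an identified subject from a wide-angle frame, expanding its
    bounding box by a margin (hypothetical area-of-interest rule)."""
    h, w = frame.shape[:2]
    x0, y0, x1, y1 = box
    mx = int((x1 - x0) * margin)
    my = int((y1 - y0) * margin)
    # Clamp the expanded region of interest to the frame bounds.
    x0, y0 = max(0, x0 - mx), max(0, y0 - my)
    x1, y1 = min(w, x1 + mx), min(h, y1 + my)
    return frame[y0:y1, x0:x1]
```

The resulting crop would then be composited onto a portion of the area displaying the first image.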
Secure Camera Based Inertial Measurement Unit Calibration for Stationary Systems
Described are techniques and systems for secure camera based IMU calibration for stationary systems, including vehicles. Existing vehicle camera systems are employed, with enhanced security to prevent malicious attempts by hackers to cause a vehicle to enter IMU calibration mode. IMU calibration occurs when a calibration system determines the vehicle is parked in a controlled environment; calibration targets are positioned at different viewing angles to vehicle cameras to act as sources of optical patterns of encoded data. Features of the patterns serve both security and alignment functions. Images of the calibration targets enable inference of a vehicle coordinate system, from which calculations for IMU mounting error compensations are performed. A relative rotation between the IMU and the vehicle coordinate system is applied to IMU data to compensate for relative rotations between the vehicle and the IMU, thereby improving vehicle slope and bank metrics.
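The final compensation step, applying the relative rotation between the IMU and the vehicle coordinate system to IMU data, amounts to rotating each measurement by the calibrated rotation matrix. A minimal sketch, assuming the mounting error has already been estimated as a 3x3 rotation:

```python
import numpy as np

def compensate_imu(imu_vec, r_imu_to_vehicle):
    """Rotate a raw IMU measurement into the vehicle coordinate system
    to compensate for mounting error (sketch of the final step only)."""
    return r_imu_to_vehicle @ imu_vec

# Example mounting error: IMU rotated 90 degrees about the vehicle z-axis.
theta = np.pi / 2
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
```

With this compensation in place, slope and bank read from the corrected data reflect the vehicle body rather than the sensor housing.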
METHOD FOR ACQUIRING DISTANCE FROM MOVING BODY TO AT LEAST ONE OBJECT LOCATED IN ANY DIRECTION OF MOVING BODY BY PERFORMING NEAR REGION SENSING AND IMAGE PROCESSING DEVICE USING THE SAME
A method for acquiring a distance from a moving body to an object located in any direction of the moving body includes steps of: an image processing device (a) instructing a rounded cuboid sweep network to project pixels of images, generated by cameras covering all directions of the moving body, onto N virtual rounded cuboids to generate rounded cuboid images and apply a 3D concatenation operation thereon to generate an initial 4D cost volume, (b) instructing a cost volume computation network to generate a final 3D cost volume from the initial 4D cost volume, and (c) generating inverse radius indices, corresponding to inverse radii representing inverse values of separation distances of the N virtual rounded cuboids, by referring to the final 3D cost volume and extracting the inverse radii by using the inverse radius indices, to acquire the separation distances and thus, the distance from the moving body to the object.
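Step (c), extracting inverse radii from the final 3D cost volume and inverting them into separation distances, can be sketched as a per-pixel argmin over the N candidate shells. The uniform inverse-radius sampling and the radius range below are assumptions in the style of sphere-sweep stereo, not details from the disclosure.

```python
import numpy as np

def distances_from_cost_volume(cost, r_min=0.5, r_max=50.0):
    """Given a final cost volume shaped (N, H, W) over N virtual shells,
    pick the best inverse radius index per pixel and convert it back to a
    separation distance (sampling scheme and range are assumptions)."""
    n = cost.shape[0]
    # Candidate inverse radii sampled uniformly, near shells last.
    inv_radii = np.linspace(1.0 / r_max, 1.0 / r_min, n)
    idx = np.argmin(cost, axis=0)    # inverse radius indices, shape (H, W)
    return 1.0 / inv_radii[idx]      # separation distance per pixel
```

A pixel whose lowest matching cost falls on the innermost shell resolves to the minimum radius; one that never matches stays at the far limit.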
SYSTEM AND METHOD FOR EVALUATING SPORT BALL DATA
A system and a method for evaluating sport ball data. The method includes calibrating a first coordinate system of a first camera to a second coordinate system of a baseball field; capturing, with the first camera, one or more images including a first batter; determining biometric characteristics of the first batter based on the one or more images and the calibration of the first camera to the baseball field; mapping the biometric characteristics of the first batter to an upper positional limit and a lower positional limit of a first strike zone for the first batter; and determining positional limits of the first strike zone in the second coordinate system.
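The mapping from biometric characteristics to the vertical limits of the strike zone can be sketched following the baseball rulebook convention: the top is midway between the shoulders and the top of the pants, and the bottom is the hollow beneath the kneecap. The specific landmarks passed in are assumptions about which biometrics the system measures.

```python
def strike_zone_limits(shoulder_y, waist_y, knee_hollow_y):
    """Map batter landmark heights (in field coordinates, e.g. metres)
    to the strike zone's upper and lower positional limits, per the
    rulebook convention (landmark choice is an assumption)."""
    upper = (shoulder_y + waist_y) / 2.0   # midway shoulders-to-waistband
    lower = knee_hollow_y                  # hollow beneath the kneecap
    return upper, lower
```

Because the landmarks are already expressed in the field's coordinate system after camera calibration, the limits come out directly in that second coordinate system.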
Vehicular vision system that dynamically calibrates a vehicular camera
A vehicular vision system includes a camera disposed at a vehicle and operable to capture multiple frames of image data during a driving maneuver of the vehicle. A control includes an image processor that processes frames of captured image data to determine feature points in an image frame when the vehicle is operated within a first range of steering angles, and to determine motion trajectories of those feature points in subsequent image frames for the respective range of steering angles. The control determines a horizon line based on the determined motion trajectories. Responsive to determination that the determined horizon line is non-parallel to the horizontal axis of the image plane, at least one of pitch, roll or yaw of the camera is adjusted. Image data captured by the camera is processed at the control for object detection.
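The horizon-based check above, determining whether the horizon line inferred from feature-point trajectories is parallel to the image's horizontal axis, can be sketched as a least-squares line fit whose slope yields a roll correction. The tolerance and function name are illustrative assumptions.

```python
import numpy as np

def roll_correction_from_horizon(points, tol_deg=0.5):
    """Fit a horizon line to tracked feature-point endpoints and return
    the camera roll adjustment, in degrees, needed to level it against
    the image's horizontal axis (tolerance is an assumption)."""
    xs, ys = points[:, 0], points[:, 1]
    slope, _ = np.polyfit(xs, ys, 1)       # least-squares horizon fit
    roll = np.degrees(np.arctan(slope))
    # Within tolerance the horizon counts as parallel: no adjustment.
    return roll if abs(roll) > tol_deg else 0.0
```

A level horizon returns zero; a tilted one returns the signed roll to apply, with analogous fits over vertical motion serving pitch and yaw.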
Homography error correction
An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
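The core check above, projecting a pixel location through the homography, comparing the estimated sensor location with the actual one, and flagging recomputation when the difference exceeds a threshold, can be sketched directly. The threshold value and units below are assumptions.

```python
import numpy as np

def needs_recalibration(h, pixel, actual_xy, threshold=0.25):
    """Apply a 3x3 homography to a pixel location, compare the estimated
    sensor location on the global plane with the known actual location,
    and flag recomputation when the error exceeds a threshold
    (threshold and units are assumptions)."""
    u, v = pixel
    x, y, w = h @ np.array([u, v, 1.0])
    est = np.array([x / w, y / w])   # estimated (x, y) on the global plane
    return np.linalg.norm(est - np.array(actual_xy)) > threshold
```

When the flag is raised, the system would re-derive the homography coefficients from fresh pixel-to-plane correspondences; otherwise the existing calibration stands.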