Patent classifications: G06T2207/30208
TECHNIQUES FOR THREE-DIMENSIONAL ANALYSIS OF SPACES
An example method includes receiving a 2D image of a 3D space from an optical camera and identifying, in the 2D image, a virtual image generated by an optical instrument refracting and/or reflecting light. The example method further includes identifying, in the 2D image, a first object depicting a subject disposed in the 3D space from a first direction extending from the optical camera to the subject, and identifying, in the virtual image, a second object depicting the subject from a second direction extending from the optical camera to the subject via the optical instrument, the second direction being different from the first direction. A 3D image depicting the subject is generated based on the first object and the second object. Alternatively, a location of the subject in the 3D space is determined based on the first object and the second object.
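The two viewing directions give two rays through the subject: one from the camera directly, and one equivalent to a ray from the camera's mirror image across the optical instrument. A minimal sketch of recovering the subject's 3D location by triangulating those two rays (the mirror plane, camera positions, and subject location below are invented for illustration, not taken from the patent):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint of closest approach between two rays p(t) = o + t*d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2

# Hypothetical setup: direct view from a camera at the origin; the
# second view behaves like a ray from the camera's reflection across
# a mirror plane at x = 2.
subject = np.array([1.0, 0.5, 3.0])
cam = np.array([0.0, 0.0, 0.0])
cam_mirror = np.array([4.0, 0.0, 0.0])  # reflection of cam across x = 2

p = triangulate_rays(cam, subject - cam,
                     cam_mirror, subject - cam_mirror)
```

With noisy image measurements the two rays would be skew, and the midpoint of closest approach is a standard least-squares compromise between them.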
Realistic virtual/augmented/mixed reality viewing and interactions
The present invention discloses systems and methods for viewing and interacting with a virtual reality (VR), augmented reality (AR), or mixed reality (MR). More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented within such environments, by manipulating a control device that has an inside-out camera mounted on-board. The apparatus or system uses two distinct representations, including a reduced representation, in determining the pose of the control device, and uses these representations to compute an interactive pose portion of the control device to be used for interacting with the virtual item. The reduced representation is consonant with a constrained motion of the control device.
Secure Camera Based Inertial Measurement Unit Calibration for Stationary Systems
Described are techniques and systems for secure camera-based IMU calibration for stationary systems, including vehicles. Existing vehicle camera systems are employed, with enhanced security to prevent malicious attempts by hackers to force a vehicle into IMU calibration mode. IMU calibration occurs when a calibration system determines that the vehicle is parked in a controlled environment; calibration targets positioned at different viewing angles to the vehicle cameras act as sources of optical patterns of encoded data. Features of the patterns serve security as well as alignment functions. Images of the calibration targets enable inference of a vehicle coordinate system, from which calculations for IMU mounting-error compensations are performed. A relative rotation between the IMU and the vehicle coordinate system is applied to IMU data to compensate for relative rotations between the vehicle and the IMU, thereby improving vehicle slope and bank metrics.
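As a rough illustration of the final compensation step, the sketch below applies an assumed mounting rotation to a simulated accelerometer reading and shows how removing it corrects the derived slope (pitch) metric. The 5-degree misalignment, z-up axis convention, and helper names are all invented for the example:

```python
import numpy as np

def rot_y(deg):
    """Rotation about the y axis (stand-in for a pitch mounting error)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def slope_deg(a):
    """Vehicle slope (pitch) inferred from a gravity-only accel reading."""
    ax, ay, az = a
    return np.degrees(np.arctan2(-ax, np.hypot(ay, az)))

g_vehicle = np.array([0.0, 0.0, 9.81])   # level vehicle, z-up convention
R_mount = rot_y(5.0)                     # assumed IMU mounting error
a_imu = R_mount @ g_vehicle              # what the misaligned IMU reports

raw_slope = slope_deg(a_imu)             # biased by the mounting error
a_comp = R_mount.T @ a_imu               # apply calibrated relative rotation
compensated_slope = slope_deg(a_comp)    # ~0 deg for a level vehicle
```

The calibration procedure in the abstract is what supplies `R_mount`; once known, its inverse is applied to every IMU sample.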
Homography error correction
An object tracking system includes a sensor configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
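Applying the homography is a projective transform of the pixel location; a small sketch of the check described above, with an invented scale-and-translate homography and threshold:

```python
import numpy as np

def apply_homography(H, px):
    """Map a pixel (u, v) to (x, y) in the global plane via a 3x3 H."""
    u, v = px
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

# Hypothetical homography: pure scale plus translation, for illustration.
H = np.array([[0.01, 0.00, 2.0],
              [0.00, 0.01, 5.0],
              [0.00, 0.00, 1.0]])

estimated = apply_homography(H, (120, 340))
actual = np.array([3.2, 8.4])            # surveyed sensor location (assumed)
diff = np.linalg.norm(estimated - actual)

THRESHOLD = 0.05                          # invented difference threshold
needs_recalibration = diff > THRESHOLD
```

When `diff` exceeds the threshold, the system would re-fit `H` from fresh pixel/global correspondences rather than continue tracking with a drifted mapping.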
AUTOMATIC ANALYSER
A two-dimensional code is attached to a location on a reagent storage unit that is visually recognizable from the outside, and coordinate information for the installation position of a reagent bottle is held in the coordinate system of the two-dimensional code. An image of the two-dimensional code is then captured by a portable terminal, and the coordinate system of the terminal's image capture unit is converted into the coordinate system of the two-dimensional code using AR technology. On the basis of this conversion, the coordinate information of the bottle's installation position in the code's coordinate system is treated as positional coordinates in the captured image, thereby ascertaining the position of the reagent bottle in the captured image and displaying the ascertained position on a display unit.
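The conversion step can be sketched as a plane-to-image homography standing in for the pose that an AR toolkit would estimate from the detected code; the scale, image center, and bottle offset below are assumed values:

```python
import numpy as np

def code_to_pixel(H, p_code):
    """Project a point given in the 2D-code plane coordinate system into
    the captured image via a plane-to-image homography H (a stand-in for
    the pose estimated by the AR toolkit)."""
    x, y = p_code
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical pose: code at the image center, 4 px per mm, no tilt.
H = np.array([[4.0, 0.0, 320.0],
              [0.0, 4.0, 240.0],
              [0.0, 0.0,   1.0]])

# Stored installation position of the reagent bottle, in mm, relative
# to the 2D code's origin (assumed values).
bottle_mm = (25.0, -10.0)
u, v = code_to_pixel(H, bottle_mm)   # overlay a marker at this pixel
```

A real capture would include perspective tilt, which the same projective division (`u / w`, `v / w`) handles without code changes.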
APPARATUS, METHOD AND COMPUTER PROGRAM FOR MONITORING A SUBJECT DURING A MEDICAL IMAGING PROCEDURE
The invention refers to an apparatus for monitoring a subject (121) during an imaging procedure, e.g. CT imaging. The apparatus (110) comprises a monitoring image providing unit (111) providing a first monitoring image and a second monitoring image acquired at different support positions, a monitoring position providing unit (112) providing a first monitoring position of a region of interest in the first monitoring image, a support position providing unit (113) providing support position data of the support positions, a position map providing unit (114) providing a position map mapping calibration support positions to calibration monitoring positions, and a region of interest position determination unit (115) determining a position of the region of interest in the second monitoring image based on the first monitoring position, the support position data, and the position map. This allows the position of the region of interest to be determined accurately and with low computational effort.
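One way to read the mapping step: the position map relates support positions to where a fixed feature lands in the monitoring image, so the ROI position in the second image is the first monitoring position plus the map's predicted shift between the two support positions. A sketch with an invented one-dimensional calibration table:

```python
import numpy as np

# Calibration: support positions (mm) vs. where a fixed marker appears
# in the monitoring image (pixel row) -- assumed values.
cal_support = np.array([0.0, 100.0, 200.0, 300.0])
cal_pixel = np.array([400.0, 320.0, 240.0, 160.0])

def predicted_shift(s_from, s_to):
    """Pixel displacement implied by moving the support s_from -> s_to,
    linearly interpolated from the calibration table."""
    return (np.interp(s_to, cal_support, cal_pixel)
            - np.interp(s_from, cal_support, cal_pixel))

# ROI seen at row 350 with the support at 50 mm; support moves to 150 mm.
roi_second = 350.0 + predicted_shift(50.0, 150.0)
```

Because only a table lookup and an addition are needed per frame, this matches the "low computational effort" claim: no image registration between the two monitoring images is required.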
System and method for pose estimation of an imaging device and for determining the location of a medical device with respect to a target
A system and method for estimating a pose of an imaging device for one or more images is provided.
Calibration for multi-camera and multi-sensor systems
A method and apparatus for calibrating an image capture device are provided. The method includes capturing one or more of a single or multiview image set by the image capture device, detecting one or more calibration features in each set by a processor, initializing each of the one or more calibration parameters to a corresponding default value, extracting one or more relevant calibration parameters, computing an individual cost term for each of the identified relevant calibration parameters, and scaling each of the relevant cost terms. The method continues with combining all the cost terms once each of the calculated relevant cost terms has been scaled, determining whether the combination of the cost terms has been minimized, adjusting the calibration parameters if it is determined that the combination of the cost terms has not been minimized, and returning to the step of extracting one or more of the relevant calibration parameters.
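The scale-combine-minimize loop can be sketched on a toy two-parameter problem; the cost terms, scales, and plain gradient-descent optimizer below are illustrative stand-ins, not the patented method:

```python
import numpy as np

def combined_cost(params, terms, scales):
    """Scale each relevant cost term and sum, as in the described loop."""
    return sum(s * t(params) for t, s in zip(terms, scales))

# Toy example: two cost terms over (focal, cx) with optimum (500, 320).
terms = [lambda p: (p[0] - 500.0) ** 2,   # e.g. a reprojection-style term
         lambda p: (p[1] - 320.0) ** 2]   # e.g. a principal-point prior
scales = [1.0, 0.5]

p = np.array([400.0, 300.0])              # default initial values
for _ in range(500):                      # adjust until minimized
    g = np.zeros_like(p)
    for i in range(len(p)):               # central-difference gradient
        e = np.zeros_like(p)
        e[i] = 1e-4
        g[i] = (combined_cost(p + e, terms, scales)
                - combined_cost(p - e, terms, scales)) / 2e-4
    p -= 0.1 * g
```

Scaling matters because the raw terms can have very different magnitudes (pixels squared vs. millimetres squared); without it, one term dominates the combined cost and the others are effectively ignored.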
METHOD AND SYSTEM FOR PERFORMING AUTOMATIC CAMERA CALIBRATION
A system and method for performing automatic camera calibration is provided. The system receives a calibration image, and determines a plurality of image coordinates for representing respective locations at which a plurality of pattern elements of a calibration pattern appear in a calibration image. The system determines, based on the plurality of image coordinates and defined pattern element coordinates, an estimate for a first lens distortion parameter of a set of lens distortion parameters, wherein the estimate for the first lens distortion parameter is determined while estimating a second lens distortion parameter of the set of lens distortion parameters to be zero, or is determined without estimating the second lens distortion parameter. The system determines, after the estimate of the first lens distortion parameter is determined, an estimate for the second lens distortion parameter based on the estimate for the first lens distortion parameter.
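With a polynomial radial model such as r_d = r(1 + k1·r² + k2·r⁴) (an assumed model for illustration), the staged estimation reads: fit k1 with k2 held at zero, then fit k2 with k1 fixed at its estimate. A sketch on synthetic distortion data:

```python
import numpy as np

k1_true, k2_true = -0.2, 0.05               # synthetic ground truth
r = np.linspace(0.1, 1.0, 10)               # undistorted radii
rd = r * (1 + k1_true * r**2 + k2_true * r**4)   # distorted radii

# Stage 1: estimate k1 while treating k2 as zero (least squares on
# rd - r = k1 * r^3). This stage absorbs some of k2's effect, so the
# estimate is close to, but not exactly, k1_true.
k1_hat = np.sum(r**3 * (rd - r)) / np.sum(r**6)

# Stage 2: estimate k2 with k1 fixed at its stage-1 estimate
# (least squares of the remaining residual against r^5).
k2_hat = np.sum(r**5 * (rd - r - k1_hat * r**3)) / np.sum(r**10)
```

In practice the staged estimates would typically serve as initialization for a joint refinement; the staging avoids the instability of estimating strongly correlated distortion parameters simultaneously from scratch.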
DIMENSIONAL CALIBRATION OF THE FIELD-OF-VIEW OF A SINGLE CAMERA
A method for calibrating the active FOV of a single camera, wherein the calibration of the camera's active FOV yields a coordinate matrix that remotely produces a virtual interpolation measurement network at any point within an image (a frame) extracted from a video stream recorded by the single camera, eliminating the need to be physically present at the actual location where the video stream was recorded. According to an embodiment of the invention, the basis of the active FOV of a camera is the ability to obtain (measure) the coordinates of measurement points marked on a calibration board.
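The virtual measurement network can be approximated with a plane-to-plane homography fitted from the calibration-board points; once fitted, any pixel in any frame can be converted to board-plane coordinates remotely. The board dimensions and pixel positions below are invented for the sketch:

```python
import numpy as np

def fit_homography(px, xy):
    """Direct linear transform: H mapping pixel -> board coordinates,
    fitted from point correspondences (at least four)."""
    A = []
    for (u, v), (x, y) in zip(px, xy):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)     # null-space vector as 3x3 matrix

def measure(H, u, v):
    """Virtual measurement: board-plane coordinates at pixel (u, v)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Four board corners: pixel positions vs. known 60 x 40 cm dimensions.
px = [(100, 80), (540, 90), (530, 400), (110, 390)]   # assumed pixels
xy = [(0, 0), (60, 0), (60, 40), (0, 40)]             # centimetres

H = fit_homography(px, xy)
x_cm, y_cm = measure(H, 320, 240)   # measurement at an arbitrary pixel
```

This only measures points lying in the board's plane; extending the network off-plane would require the full calibration described in the abstract.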