G06T2207/10021

AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
20230050636 · 2023-02-16 ·

A method for performing a procedure on a patient includes acquiring a three-dimensional image of a location of interest on the patient and a two-dimensional image of the location of interest. A computer system can relate the three-dimensional image with the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram based on the holographic image dataset registered to the patient. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. The practitioner can view the hologram with the augmented reality system while performing the procedure on the patient, and can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image during the procedure.
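
The cross-referencing step described above can be sketched as follows. Once both datasets are registered to the patient, a point picked on the 3D projection can be mapped to its corresponding pixel on the 2D projection with the fluoroscope's projection matrix. The matrix values and function names below are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: map a 3D point into the 2D fluoroscopic projection using an
# illustrative 3x4 pinhole projection matrix P.

def project_point(P, point_3d):
    """Apply a 3x4 projection matrix to a 3D point; return pixel (u, v)."""
    x, y, z = point_3d
    vec = (x, y, z, 1.0)
    u, v, w = (sum(P[r][c] * vec[c] for c in range(4)) for r in range(3))
    return (u / w, v / w)

# Illustrative projection: focal length 1000 px, principal point (256, 256).
P = [
    [1000.0, 0.0, 256.0, 0.0],
    [0.0, 1000.0, 256.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
# A point picked on the 3D hologram (metres, camera frame) lands here on the
# 2D projection, so both holographic views can highlight the same anatomy.
print(project_point(P, (0.01, -0.02, 0.5)))  # (276.0, 216.0)
```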

SYSTEM AND METHOD FOR CALIBRATING A TIME DIFFERENCE BETWEEN AN IMAGE PROCESSOR AND AN INERTIAL MEASUREMENT UNIT BASED ON INTER-FRAME POINT CORRESPONDENCE
20230049084 · 2023-02-16 ·

Systems and methods are used for calibrating a time difference between an image signal processor (ISP) and an inertial measurement unit (IMU) of an image capture device. An image capture device includes a lens, an image sensor, an IMU, and an ISP. The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a time difference between the ISP and the IMU. The ISP performs a calibration using the calibration parameters.
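
One plausible realization of this calibration: the inter-frame key-point matches imply an angular speed for each frame interval, and the offset between the ISP and IMU clocks is the shift that best aligns that signal with the gyro's angular speed. The signal model, function names, and grid search below are illustrative assumptions, not the patent's method.

```python
# Hedged sketch: recover the ISP-IMU clock offset by aligning the angular
# speed implied by inter-frame key-point motion with the IMU's angular speed.

def angular_speed(t):
    """Toy ground-truth angular speed of the device (rad/s)."""
    return 1.0 + 0.5 * ((t % 2.0) - 1.0) ** 2

def calibrate_time_offset(frame_times, frame_speeds, imu_times, imu_speeds,
                          search=(-0.1, 0.1), step=0.001):
    """Grid-search the offset dt that best aligns the two speed signals."""
    def interp(ts, ys, t):
        # Linear interpolation, clamped at the ends of the IMU record.
        if t <= ts[0]: return ys[0]
        if t >= ts[-1]: return ys[-1]
        for i in range(len(ts) - 1):
            if ts[i] <= t <= ts[i + 1]:
                w = (t - ts[i]) / (ts[i + 1] - ts[i])
                return (1 - w) * ys[i] + w * ys[i + 1]
    best_dt, best_err = None, float("inf")
    dt = search[0]
    while dt <= search[1]:
        err = sum((fs - interp(imu_times, imu_speeds, ft + dt)) ** 2
                  for ft, fs in zip(frame_times, frame_speeds))
        if err < best_err:
            best_dt, best_err = dt, err
        dt += step
    return best_dt

# Simulate: IMU samples at 200 Hz; frames at 30 Hz, but frame timestamps lag
# the IMU clock by 23 ms.
true_offset = 0.023
imu_times = [i / 200.0 for i in range(400)]
imu_speeds = [angular_speed(t) for t in imu_times]
frame_times = [i / 30.0 for i in range(55)]
frame_speeds = [angular_speed(t + true_offset) for t in frame_times]

dt = calibrate_time_offset(frame_times, frame_speeds, imu_times, imu_speeds)
print(round(dt, 3))  # close to 0.023
```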

VEHICULAR ACCESS CONTROL BASED ON VIRTUAL INDUCTIVE LOOP

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for monitoring events using a Virtual Inductive Loop system. In some implementations, image data is obtained from cameras. A region depicted in the obtained image data is identified, the region comprising lines spaced by a distance that satisfies a distance threshold. For each line included in the region, it is determined whether an object depicted crossing the line satisfies a height criterion indicating that the line is activated. In response to determining that an object depicted crossing a line satisfies the height criterion, an event is determined to have likely occurred using data indicating (i) which of the lines were activated and (ii) the order in which the lines were activated. In response to determining that an event likely occurred, actions are performed using at least some of the data.
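
The activation-and-ordering logic can be sketched as follows: each virtual line records the time at which a sufficiently tall object crossed it, and the order of activation distinguishes an entry event from an exit event. The threshold, event labels, and function names are illustrative assumptions.

```python
# Hedged sketch of the virtual-inductive-loop ordering logic.

HEIGHT_THRESHOLD = 1.0  # metres; illustrative stand-in for the height criterion

def activations(crossings, threshold=HEIGHT_THRESHOLD):
    """Time-ordered (time, line_id) pairs for crossings that pass the height test."""
    return sorted((t, line) for line, t, height in crossings if height >= threshold)

def classify_event(crossings):
    """Entry if the lines fire in ascending id order, exit if descending."""
    order = [line for _, line in activations(crossings)]
    if len(order) < 2:
        return "unknown"
    if order == sorted(order):
        return "entry"
    if order == sorted(order, reverse=True):
        return "exit"
    return "unknown"

# A vehicle (1.5 m tall) activates line 0 then line 1: an entry event.
print(classify_event([(0, 0.10, 1.5), (1, 0.35, 1.5)]))  # entry
# A low object (0.4 m) never activates a line, so no event is reported.
print(classify_event([(0, 0.10, 0.4), (1, 0.35, 0.4)]))  # unknown
```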

System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function

A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
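
The minimal-color-vector step admits a compact sketch: if specular highlights only add light on top of a patch's diffuse base color, then the per-channel minimum of the color vectors observed across viewpoints approximates the diffuse BRDF component. The data and function name below are illustrative.

```python
# Hedged sketch: approximate the diffuse term for one planar patch as the
# per-channel minimum of its observed colour vectors across viewpoints.

def diffuse_color(observations):
    """Per-channel minimum across all views of one planar patch (RGB)."""
    return tuple(min(view[c] for view in observations) for c in range(3))

# Three views of the same patch; one contains a specular highlight that
# brightens every channel, and the minimum recovers the diffuse base.
views = [
    (0.30, 0.22, 0.10),   # plain view
    (0.31, 0.21, 0.11),   # plain view, slight noise
    (0.85, 0.80, 0.70),   # view with a specular highlight
]
print(diffuse_color(views))  # (0.3, 0.21, 0.1)
```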

Method and apparatus for sensing moving ball
11582426 · 2023-02-14 ·

Provided are an apparatus and method for sensing a moving ball, which extract a feature portion, such as a trademark or logo indicated on the ball, from consecutive images of the moving ball acquired by an image acquisition unit embodied by a predetermined camera device, and calculate a spin axis and spin amount of the moving ball based on the feature portion. The spin of the ball is thus simply, rapidly, and accurately calculated with a low computational load, enabling rapid and stable spin calculation even in a relatively low-performance system. The sensing apparatus includes an image acquisition unit for acquiring consecutive images, an image processing unit for extracting a feature portion from the acquired images, and a spin calculation unit for calculating spin using the extracted feature portion.
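
The geometry of the spin calculation can be sketched in a simplified form: if the feature portion is reduced to a single 3D point on the unit ball, the rotation between two consecutive frames yields a spin axis (the cross product of the two position vectors) and a spin rate (the angle between them divided by the frame interval). Everything here is an illustrative assumption, not the patent's algorithm.

```python
# Hedged sketch: spin axis and rate from one tracked feature point on a ball.
import math

def spin_from_feature(p0, p1, dt):
    """Axis (unit vector) and rate (rad/s) rotating p0 onto p1 in dt seconds.
    Assumes the feature point actually moved between the two frames."""
    cx = p0[1]*p1[2] - p0[2]*p1[1]
    cy = p0[2]*p1[0] - p0[0]*p1[2]
    cz = p0[0]*p1[1] - p0[1]*p1[0]
    norm = math.sqrt(cx*cx + cy*cy + cz*cz)
    dot = sum(a*b for a, b in zip(p0, p1))
    angle = math.atan2(norm, dot)          # angle between the two positions
    axis = (cx/norm, cy/norm, cz/norm)     # normalised rotation axis
    return axis, angle / dt

# The logo point rotates 30 degrees about the z axis between frames 1/240 s apart.
theta = math.radians(30)
p0 = (1.0, 0.0, 0.0)
p1 = (math.cos(theta), math.sin(theta), 0.0)
axis, rate = spin_from_feature(p0, p1, 1/240)
print(axis)                               # (0.0, 0.0, 1.0)
print(round(rate * 60 / (2 * math.pi)))   # 1200 (rpm)
```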

Passive wide-area three-dimensional imaging

Radar, lidar, and other active 3D imaging techniques require large, heavy sensors that consume substantial power. Passive 3D imaging techniques based on feature matching are computationally expensive and limited by the quality of the feature matching. Fortunately, there is a robust, computationally inexpensive way to generate 3D images from full-motion video acquired from a platform that moves relative to the scene. The full-motion video frames are registered to each other and mapped to the scene coordinates using data about the trajectory of the platform with respect to the scene. The time derivative of the registered frames equals the product of the height map of the scene, the projected angular velocity of the platform, and the spatial gradient of the registered frames. This relationship can be solved in (near) real time to produce the height map of the scene from the full-motion video and the trajectory.
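
The stated relationship can be inverted pixel-wise wherever the spatial gradient is non-zero: h = (dI/dt) / (omega * dI/dx). The 1-D toy scene below stands in for full frames, and all names are illustrative assumptions.

```python
# Hedged sketch: solve the abstract's relation dI/dt = h * omega * dI/dx for
# the height h at each interior pixel of a 1-D registered image pair.

def height_from_motion(frame0, frame1, dt, omega, dx=1.0, eps=1e-6):
    """Per-pixel h = (dI/dt) / (omega * dI/dx); 0.0 where the gradient vanishes."""
    heights = []
    for i in range(1, len(frame0) - 1):
        dI_dt = (frame1[i] - frame0[i]) / dt            # temporal derivative
        dI_dx = (frame0[i + 1] - frame0[i - 1]) / (2 * dx)  # central difference
        denom = omega * dI_dx
        heights.append(dI_dt / denom if abs(denom) > eps else 0.0)
    return heights

# Toy scene: a linear intensity ramp over flat terrain of height 2.5, with a
# projected angular velocity of 0.1; the registered second frame is the first
# shifted by h * omega * dt, consistent with the relation above.
h_true, omega, dt = 2.5, 0.1, 0.05
frame0 = [float(i) for i in range(10)]
frame1 = [i + h_true * omega * dt for i in range(10)]
heights = height_from_motion(frame0, frame1, dt, omega)
print([round(h, 6) for h in heights])  # all 2.5
```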

System and method to simultaneously track multiple organisms at high resolution
20230045152 · 2023-02-09 ·

A microscopy system includes multiple cameras working together to capture image data of a sample having a group of organisms distributed over a wide area, under the influence of an excitation instrument. A first processor is coupled to each camera to process the image data captured by that camera. Outputs from the multiple first processors are aggregated and streamed serially to a second processor for tracking the organisms. The multiple cameras capturing images of the sample, configured with 50% or more overlap, can allow 3D tracking of the organisms through photogrammetry.
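
A minimal illustration of how overlapping cameras enable 3D tracking: when the same organism is seen by two cameras with a known baseline, its depth follows from the stereo disparity, z = f·b/d. The parameters below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: depth of a tracked organism from stereo disparity between
# two overlapping cameras.

def triangulate_depth(u_left, u_right, focal_px, baseline_m):
    """Depth (m) from horizontal pixel positions in a rectified camera pair."""
    disparity = u_left - u_right
    return focal_px * baseline_m / disparity

# Organism at pixel 350 in the left view, 340 in the right; 800 px focal
# length; 2 cm baseline between the overlapping cameras.
print(triangulate_depth(350.0, 340.0, focal_px=800.0, baseline_m=0.02))  # 1.6
```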

PRODUCT TARGET QUALITY CONTROL SYSTEM

A process includes receiving a target quality value, receiving a measured quality value, receiving a source quality value, and sending a source control instruction. The source control instruction is based at least in part on the target quality value, the measured quality value, and the source quality value. The target quality value, the measured quality value, the source quality value, and the source control instruction are communicated via a communication port. The measured quality value is generated by an inspection device configured to inspect a sample. The source quality value is associated with a quality level of a first group of samples. The target quality value indicates a desired quality value of an output group of samples. The source control instruction causes a source selecting device to select one of a plurality of groups of samples, each group having identified quality characteristics.
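
One simple way the selection step could work: blend each candidate source group's quality level with the measured output quality, and instruct the source selecting device to pick the group whose blend lands closest to the target. The blending model, mix weight, and names are illustrative assumptions.

```python
# Hedged sketch: choose the source group that pulls blended output quality
# closest to the target quality value.

def source_control_instruction(target, measured, source_groups, mix=0.5):
    """Return the id of the source group minimising |blend - target|."""
    def blended(source_quality):
        # Illustrative model: output quality is a weighted mix of the recent
        # measured quality and the selected group's quality level.
        return (1 - mix) * measured + mix * source_quality
    return min(source_groups, key=lambda g: abs(blended(source_groups[g]) - target))

groups = {"A": 0.95, "B": 0.80, "C": 0.60}
# Output is running below target, so the controller selects the best group.
print(source_control_instruction(target=0.90, measured=0.78, source_groups=groups))  # A
```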

Neural network processing for multi-object 3D modeling

Embodiments are directed to neural network processing for multi-object three-dimensional (3D) modeling. An embodiment of a computer-readable storage medium includes executable computer program instructions for obtaining data from multiple cameras, the data including multiple images, and generating a 3D model for 3D imaging based at least in part on the data from the cameras, wherein generating the 3D model includes one or more of performing processing with a first neural network to determine temporal direction based at least in part on motion of one or more objects identified in an image of the multiple images or performing processing with a second neural network to determine semantic content information for an image of the multiple images.

REAL-TIME SYSTEM FOR GENERATING 4D SPATIO-TEMPORAL MODEL OF A REAL WORLD ENVIRONMENT
20230008567 · 2023-01-12 ·

The present invention relates to a method for deriving 3D data from image data, comprising: receiving, from at least one camera, image data representing an environment; detecting, from the image data, at least one object within the environment; and classifying the at least one detected object. For each classified object, the method comprises: determining a 2D skeleton of the classified object by implementing a neural network to identify features of the classified object in the image data corresponding to the classified object; and constructing a 3D skeleton for the classified object, comprising mapping the determined 2D skeleton to 3D.
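
The 2D-to-3D mapping step can be sketched with a pinhole camera model: each 2D joint of the detected skeleton is back-projected using a per-joint depth estimate, yielding the 3D skeleton in the camera frame. The intrinsics, joint list, and depth source are illustrative assumptions, not the patent's method.

```python
# Hedged sketch: lift a 2D skeleton to 3D by back-projecting each joint
# through a pinhole camera model with an estimated depth.

def lift_skeleton(joints_2d, depths, fx, fy, cx, cy):
    """Back-project (u, v) pixel joints with depth z to camera-frame (x, y, z)."""
    skeleton_3d = []
    for (u, v), z in zip(joints_2d, depths):
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        skeleton_3d.append((x, y, z))
    return skeleton_3d

# Toy skeleton: head and hip joints seen by a 640x480 camera, f = 500 px.
joints = [(320.0, 120.0), (320.0, 300.0)]
depths = [3.0, 3.0]  # metres; e.g. from a depth network or triangulation
print(lift_skeleton(joints, depths, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
# → [(0.0, -0.72, 3.0), (0.0, 0.36, 3.0)]
```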