
REAL-TIME SYSTEM FOR GENERATING 4D SPATIO-TEMPORAL MODEL OF A REAL WORLD ENVIRONMENT
20230008567 · 2023-01-12

The present invention relates to a method for deriving 3D data from image data, comprising: receiving, from at least one camera, image data representing an environment; detecting, from the image data, at least one object within the environment; and classifying the at least one detected object, wherein the method comprises, for each of the classified objects: determining a 2D skeleton of the classified object by implementing a neural network to identify features of the classified object in the image data; and constructing a 3D skeleton for the classified object, comprising mapping the determined 2D skeleton to 3D.
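The 2D-to-3D mapping step could be sketched as a pinhole back-projection, assuming camera intrinsics and a per-joint depth estimate are available (both assumptions; the abstract does not specify the mapping):

```python
import numpy as np

def lift_skeleton_to_3d(keypoints_2d, depths, fx, fy, cx, cy):
    """Back-project 2D skeleton joints into 3D camera coordinates.

    keypoints_2d: (N, 2) pixel coordinates predicted by a 2D pose
    network; depths: (N,) per-joint depth estimates (assumed to come
    from elsewhere, e.g. a second network head); fx, fy, cx, cy:
    pinhole intrinsics. Illustrative only, not the patented mapping.
    """
    kp = np.asarray(keypoints_2d, dtype=float)
    z = np.asarray(depths, dtype=float)
    # Invert the pinhole projection u = fx * x / z + cx (and likewise for v).
    x = (kp[:, 0] - cx) / fx * z
    y = (kp[:, 1] - cy) / fy * z
    return np.stack([x, y, z], axis=1)
```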

COMPOSITION-GUIDED POST PROCESSING FOR X-RAY IMAGES
20230007835 · 2023-01-12

A method of enhancing an x-ray image is disclosed. The method involves obtaining an input image based on a source x-ray image of an object. Compositional information representing physical characteristics of the object is also obtained. An image enhancement process is applied to the input image to generate a processed image. Application of the image enhancement process is controlled by one or more parameters determined in dependence on the compositional information. An output image is then provided based on the processed image.
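One way the composition-dependent parameter control could look is a gamma value selected from a material fraction; the rule below is a deliberately simplified stand-in, not the patent's actual parameter determination:

```python
import numpy as np

def enhance_xray(image, composition):
    """Apply a parameter-controlled enhancement to an x-ray image.

    `composition` is a dict of material fractions (e.g. bone vs. soft
    tissue) obtained separately from the image. Assumed rule: a higher
    bone fraction picks a lower gamma (stronger mid-tone lift).
    """
    gamma = 1.0 - 0.4 * composition.get("bone_fraction", 0.0)
    img = np.asarray(image, dtype=float) / 255.0
    # Gamma correction controlled by the compositional information.
    enhanced = np.clip(img ** gamma, 0.0, 1.0)
    return (enhanced * 255.0).round().astype(np.uint8)
```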

IMAGING SYSTEMS WITH MULTIPLE RADIATION SOURCES
20230010044 · 2023-01-12

Disclosed herein is a method and a system for reconstructing a three-dimensional image of an object, based on stitched images of the object obtained using multiple beams.

Method and apparatus for improved medical imaging
11571177 · 2023-02-07

This invention provides a method to optimize an x-ray beam for more than one structure within the field of view. The preferred embodiment comprises a modular construction of a collimator comprising multiple materials of varying thickness. A first attenuation is performed by the first portion of the collimator to optimize a first anatomic feature and a second attenuation is performed by the second portion of the collimator to optimize a second anatomic feature.

Performing 3D reconstruction via an unmanned aerial vehicle

In some examples, an unmanned aerial vehicle (UAV) employs one or more image sensors to capture images of a scan target and may use distance information from the images for determining respective locations in three-dimensional (3D) space of a plurality of points of a 3D model representative of a surface of the scan target. The UAV may compare a first image with a second image to determine a difference between a current frame of reference position for the UAV and an estimate of an actual frame of reference position for the UAV. Further, based at least on the difference, the UAV may determine, while the UAV is in flight, an update to the 3D model including at least one of an updated location of at least one point in the 3D model, or a location of a new point in the 3D model.
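Once the frame-of-reference difference is recovered, the in-flight model update could be sketched as applying a rigid correction to the existing points (the image-comparison step that estimates the correction is not shown; the transform parameters are assumed inputs):

```python
import numpy as np

def correct_model_points(points, rotation, translation):
    """Apply an in-flight frame-of-reference correction to 3D model points.

    `rotation` (3x3) and `translation` (3,) express the difference between
    the UAV's current reference frame and the estimated actual frame.
    Applying them updates the locations of existing points in the model.
    """
    pts = np.asarray(points, dtype=float)
    R = np.asarray(rotation, dtype=float)
    t = np.asarray(translation, dtype=float)
    # Row-vector convention: p' = p R^T + t for each point p.
    return pts @ R.T + t
```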

Virtual lens optical system
11592598 · 2023-02-28

An optical system has a virtual lens comprising an array of image sensors, and a processor in communication with the virtual lens and configured to focus the virtual lens by way of mathematical image processing of a dynamic object plane.

System and method for fusing information of a captured environment

A method, apparatus and computer program product for fusing information, to be performed by a device comprising a processor and a memory device, the method comprising: receiving one or more distance readings related to an environment from a Lidar device emitting light at a predetermined wavelength; receiving an image captured by a multi-spectral camera, the camera being sensitive at least to visible light and to the predetermined wavelength; identifying within the image light points or areas having the predetermined wavelength; identifying one or more objects within the image; identifying a correspondence between each of the light points or areas and one of the readings; associating each object with a distance, based on the readings and the points or areas within the object; and outputting an indication of each object and its associated distance.
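The association step at the end could be sketched as follows, assuming the lidar returns have already been localized in image coordinates and the objects are given as bounding boxes (both data shapes are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    u: int            # image column of the detected laser return
    v: int            # image row
    distance: float   # corresponding range reading, metres

@dataclass
class DetectedObject:
    label: str
    box: tuple        # (u_min, v_min, u_max, v_max) bounding box in the image

def associate_distances(objects, points):
    """Associate each detected object with a distance by averaging the
    lidar readings whose image points fall inside the object's box."""
    results = {}
    for obj in objects:
        u0, v0, u1, v1 = obj.box
        inside = [p.distance for p in points
                  if u0 <= p.u <= u1 and v0 <= p.v <= v1]
        if inside:
            results[obj.label] = sum(inside) / len(inside)
    return results
```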

Efficient training and accuracy improvement of imaging based assay
11593590 · 2023-02-28

The present disclosure relates to devices, apparatus and methods for improving the accuracy of image-based assays that use an imaging system having uncertainties or deviations (imperfections) compared with an ideal imaging system. One aspect of the present invention is to add monitoring marks on the sample holder, with at least one of the geometric and/or optical properties of the monitoring marks being predetermined and known, to take images of the sample together with the monitoring marks, and to train a machine learning model using the images with the monitoring marks.

System, method and apparatus for macroscopic inspection of reflective specimens

An inspection apparatus includes a specimen stage configured to retain a specimen, at least three imaging devices arranged in a triangular array positioned above the specimen stage, each of the at least three imaging devices configured to capture an image of the specimen, one or more sets of lights positioned between the specimen stage and the at least three imaging devices, and a control system in communication with the at least three imaging devices.

Surgical camera system with high dynamic range
11595589 · 2023-02-28

An endoscopic camera device having an optical assembly; a first image sensor in optical communication with the optical assembly, the first image sensor receiving a first exposure and transmitting a first low dynamic range image; a second image sensor in optical communication with the optical assembly, the second image sensor receiving a second exposure and transmitting a second low dynamic range image, the second exposure being higher than the first exposure; and a processor for receiving the first low dynamic range image and the second low dynamic range image; wherein the processor is configured to combine the first low dynamic range image and the second low dynamic range image into a high dynamic range image using a luminosity value derived as a preselected percentage of a cumulative luminosity distribution of at least one of the first low dynamic range image and the second low dynamic range image.
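The fusion rule can be sketched as a percentile threshold on the longer exposure: take the luminosity at a preselected percentage of the cumulative distribution and switch to the scaled short exposure above it. The 95th-percentile cutoff and exposure ratio below are assumed values, not figures from the patent:

```python
import numpy as np

def fuse_hdr(low_exp, high_exp, exposure_ratio=4.0, percentile=95.0):
    """Blend two low-dynamic-range frames into one HDR frame.

    The switch-over luminosity is taken at a preselected percentage of
    the cumulative luminosity distribution of the higher-exposure frame.
    """
    threshold = np.percentile(high_exp, percentile)
    # Bring the short exposure onto the radiometric scale of the long one.
    scaled_low = np.asarray(low_exp, dtype=float) * exposure_ratio
    # Keep the long exposure where it is well exposed; use the scaled
    # short exposure where the long frame approaches saturation.
    return np.where(high_exp < threshold,
                    np.asarray(high_exp, dtype=float), scaled_low)
```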