G06T7/337

Image processing device, image processing method, and surgical navigation system
11707340 · 2023-07-25

Provided is an image processing device including: a matching unit that performs matching processing between a predetermined pattern on the surface of a 3D model of biological tissue including an operating site, the 3D model being generated on the basis of a preoperative diagnosis image, and a corresponding pattern on the surface of the biological tissue in an image captured during surgery; a shift amount estimation unit that estimates the amount of deformation of the biological tissue from its preoperative state on the basis of the matching result and information regarding the three-dimensional position of the region of the tissue surface photographed during surgery; and a 3D model update unit that updates the preoperatively generated 3D model on the basis of the estimated amount of deformation of the biological tissue.
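The update step described in the abstract can be illustrated with a minimal sketch: given sparse correspondences between matched surface pattern points before and during surgery, propagate their displacement vectors to every vertex of the preoperative model. The inverse-distance weighting used here is a hypothetical simplification, not the patent's actual method; function names are illustrative.

```python
import numpy as np

def estimate_deformation(matched_pre, matched_intra):
    # Per-correspondence 3D displacement: intraoperative minus preoperative.
    return matched_intra - matched_pre

def update_model(vertices, matched_pre, displacements, eps=1e-6):
    # Propagate sparse surface displacements to every model vertex by
    # inverse-distance weighting of the matched surface points
    # (an illustrative stand-in for a full deformation model).
    updated = np.empty_like(vertices)
    for i, v in enumerate(vertices):
        d = np.linalg.norm(matched_pre - v, axis=1)
        w = 1.0 / (d + eps)
        w /= w.sum()
        updated[i] = v + (w[:, None] * displacements).sum(axis=0)
    return updated
```

If every matched point shifts by the same vector, every model vertex shifts by that vector, which is the expected degenerate behavior of any such interpolation scheme.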

Detection and replacement of transient obstructions from high elevation digital images

Implementations relate to detecting/replacing transient obstructions from high-elevation digital images. A digital image of a geographic area includes pixels that align spatially with respective geographic units of the geographic area. Analysis of the digital image may uncover obscured pixel(s) that align spatially with geographic unit(s) of the geographic area that are obscured by transient obstruction(s). Domain fingerprint(s) of the obscured geographic unit(s) may be determined across pixels of a corpus of digital images that align spatially with the one or more obscured geographic units. Unobscured pixel(s) of the same/different digital image may be identified that align spatially with unobscured geographic unit(s) of the geographic area. The unobscured geographic unit(s) also may have domain fingerprint(s) that match the domain fingerprint(s) of the obscured geographic unit(s). Replacement pixel data may be calculated based on the unobscured pixels and used to generate a transient-obstruction-free version of the digital image.
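The fingerprint-matching replacement can be sketched as follows, treating each geographic unit's domain fingerprint as a vector over a corpus (for example a per-date vegetation-index series). The Euclidean nearest-neighbor match and the direct value copy are assumptions for illustration; the actual implementation may calculate replacement data differently.

```python
import numpy as np

def replace_obscured(values, fingerprints, obscured):
    """values: (N,) pixel values, one per geographic unit
    fingerprints: (N, T) per-unit domain fingerprints across a corpus
    obscured: (N,) bool mask of units hidden by transient obstructions."""
    out = values.copy()
    clear_idx = np.flatnonzero(~obscured)
    for i in np.flatnonzero(obscured):
        # Pick the unobscured unit with the most similar fingerprint
        # and borrow its pixel value as replacement data.
        d = np.linalg.norm(fingerprints[clear_idx] - fingerprints[i], axis=1)
        out[i] = values[clear_idx[np.argmin(d)]]
    return out
```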

ANALYZING POSTURE-BASED IMAGE DATA
20180012357 · 2018-01-11 ·

Various embodiments are directed to systems and methods for determining whether an individual uses proper posture to perform a job duty/task. For example, systems may determine whether an individual utilizes proper posture when lifting a heavy item from a floor. Accordingly, various embodiments comprise an image capture device and a central computing entity configured to receive item information/data for an item to be moved by an individual and to determine whether the item information/data satisfies one or more image collection criteria. Upon determining the item information/data satisfies one or more of the image collection criteria, the computing entity may activate an image capture device to collect image information/data of individuals performing the job duty/task, to compare collected image information/data against a plurality of reference images, and to determine whether the collected image information/data is indicative of the individual performing the job duty/task according to proper posture considerations.

3D MULTI-PARAMETRIC ULTRASOUND IMAGING

Systems and methods are disclosed that facilitate obtaining two-dimensional (2D) ultrasound images, using two or more ultrasound imaging modes or modalities, to generate 2D multi-parametric ultrasound (mpUS) images and/or a three-dimensional (3D) mpUS image. The different ultrasound imaging modes acquire images in a common frame of reference during a single procedure to facilitate their registration. The mpUS images (i.e., 2D or 3D) may be used for enhanced and/or automated detection of one or more suspicious regions. After identifying one or more suspicious regions, the mpUS images may be utilized with a real-time image to guide biopsy or therapy of the region(s). All these processes may be performed in a single medical procedure.

Apparatus and method for successive multi-frame image denoising

An apparatus and method for successive multi-frame image denoising are herein disclosed. The apparatus includes a first subtractor including a first input to receive a frame of the image, a second input to receive a reference frame, and an output; an absolute value function block including an input connected to the output of the first subtractor and an output; a second subtractor including a first input connected to the output of the absolute value function block, a second input for receiving a first predetermined value, and an output; and a maximum value divider function block including an input connected to the output of the second subtractor and an output for outputting filter weights.
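The block diagram in the abstract can be read as a per-pixel weighting pipeline: subtract the reference frame, take the absolute value, offset by a predetermined constant, and derive filter weights. The reciprocal-with-floor interpretation of the "maximum value divider" below is one plausible reading, not the patent's confirmed formula, and the blending function is an illustrative use of the weights.

```python
import numpy as np

def frame_weights(frame, reference, c=8.0):
    # First subtractor + absolute value block: per-pixel |frame - reference|.
    diff = np.abs(frame.astype(np.float64) - reference.astype(np.float64))
    # Second subtractor: offset the difference by a predetermined value c.
    shifted = diff - c
    # Assumed reading of the "maximum value divider": clamp at a floor of 1
    # and take the reciprocal, so pixels similar to the reference get full
    # weight and dissimilar pixels are down-weighted.
    return 1.0 / np.maximum(shifted, 1.0)

def blend(frames, reference, c=8.0):
    # Weighted average of successive frames against the reference frame.
    ws = np.stack([frame_weights(f, reference, c) for f in frames])
    fs = np.stack([f.astype(np.float64) for f in frames])
    return (ws * fs).sum(axis=0) / ws.sum(axis=0)
```

With this weighting, static regions average across frames (reducing noise) while regions that differ strongly from the reference contribute little, limiting ghosting from motion.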

Photoacoustic image evaluation apparatus, method, and program, and photoacoustic image generation apparatus

A photoacoustic image evaluation apparatus includes a processor configured to acquire a first photoacoustic image generated at a first point in time and a second photoacoustic image generated at a second point in time before the first point in time, the first and second photoacoustic images being generated by detecting photoacoustic waves produced inside a subject, who has undergone blood vessel regeneration treatment, by emission of light into the subject; acquire a blood vessel regeneration index, which indicates the state of blood vessel regeneration resulting from the treatment, based on a difference between a blood vessel included in the first photoacoustic image and a blood vessel included in the second photoacoustic image; and display the blood vessel regeneration index on a display.
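One simple form such an index could take is the relative change in vessel area between the two time points. The thresholded-area definition below is a hypothetical example for illustration; the patent does not specify this particular formula.

```python
import numpy as np

def regeneration_index(first_img, second_img, threshold=0.5):
    # Hypothetical index: relative change in vessel area between the earlier
    # (second) and later (first) photoacoustic images, where a vessel pixel
    # is any value above a signal threshold.
    first_area = np.count_nonzero(first_img > threshold)
    second_area = np.count_nonzero(second_img > threshold)
    if second_area == 0:
        return float('inf') if first_area else 0.0
    return (first_area - second_area) / second_area
```

An index of 0 indicates no change, positive values indicate vessel growth since the earlier image, and negative values indicate regression.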

SYSTEMS AND METHODS TO REGISTER PATIENT ANATOMY OR TO DETERMINE AND PRESENT MEASUREMENTS RELATIVE TO PATIENT ANATOMY

Systems and methods are disclosed for use in electronic guidance systems for surgical navigation. A sensor unit is provided with an optical sensor, to provide optical information, and a measuring sensor, to provide measurements for determining a direction of gravity. The sensor unit communicates the optical information and measurements to an intra-operative computing unit. In an embodiment, the intra-operative computing unit receives first optical information for a registration device and a patient anatomy, together with a measurement to determine a direction of gravity, to perform a registration step. The intra-operative computing unit then receives second optical information for the patient anatomy and an object, and determines and presents measurements relative to the anatomy. The measurements relative to the anatomy are determined from the second optical information and in relation to the registration of the patient anatomy.

MICROSCOPE-BASED SUPER-RESOLUTION

A method for microscope-based super-resolution includes acquiring a to-be-processed image and at least one auxiliary image, where the to-be-processed image includes a target area, the auxiliary image overlaps the target area, and both are microscope images of a first resolution. The method further includes registering the to-be-processed image and the auxiliary image to obtain a registered image, and extracting one or more high-resolution features from the registered image. The one or more high-resolution features represent image features of the target area at a second resolution, and the second resolution is greater than the first resolution. The method also includes reconstructing, based on the one or more high-resolution features, a target image of the second resolution corresponding to the to-be-processed image of the first resolution. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated.

METHOD FOR ASCERTAINING SUITABLE POSITIONING OF MEASURING DEVICES AND SIMPLIFIED MOVING IN MEASURING AREAS USING VIS DATA AND REFERENCE TRAJECTORIES
20230237681 · 2023-07-27

A method for ascertaining a suitable deployment of a mobile measuring device within measurement surroundings, wherein first measurement surroundings containing first object features are automatically optically captured at the first deployment, and second measurement surroundings containing second object features are captured and tracked using a visual inertial system (VIS) while the deployment is changed. The first and second measurement surroundings are compared, the comparison being based on searching for corresponding first and second object features visible in a certain number and quality in both surroundings; this number and quality of corresponding features serves as a criterion that registration of the first and second point clouds is possible.

DISPLAY NON-UNIFORMITY CORRECTION

In one embodiment, a computing system may determine an estimated distance of an eye of a user to a display plane of a display. The system may access correction maps corresponding to a number of reference distances to the display plane of the display. The system may select a first reference distance and a second reference distance based on the estimated distance. The system may generate a custom correction map for the user based on an interpolation of a first correction map corresponding to the first reference distance and a second correction map corresponding to the second reference distance. The system may adjust an image to be displayed on the display using the custom correction map. The custom correction map may correct non-uniformity of the display as viewed from the eye of the user. The system may display the image adjusted using the custom correction map on the display.
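The selection-and-interpolation step can be sketched as a linear blend of the two reference correction maps that bracket the estimated eye distance. Linear interpolation and a multiplicative per-pixel gain map are assumptions for illustration; the patent does not commit to either.

```python
import numpy as np

def custom_correction_map(distance, ref_distances, ref_maps):
    # Select the two reference distances bracketing the estimated eye
    # distance and linearly interpolate their correction maps.
    ref_distances = np.asarray(ref_distances, dtype=np.float64)
    maps = np.asarray(ref_maps, dtype=np.float64)
    if distance <= ref_distances[0]:
        return maps[0]
    if distance >= ref_distances[-1]:
        return maps[-1]
    j = np.searchsorted(ref_distances, distance)  # first ref >= distance
    i = j - 1
    t = (distance - ref_distances[i]) / (ref_distances[j] - ref_distances[i])
    return (1 - t) * maps[i] + t * maps[j]

def correct_image(image, correction_map):
    # Apply the correction as a per-pixel gain compensating non-uniformity
    # (an assumed multiplicative model).
    return image * correction_map
```

Clamping at the extremes means a viewer closer than the nearest reference, or farther than the farthest, simply receives that boundary map rather than an extrapolated one.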