G06T5/94

Integration system for a medical image archive system
10867697 · 2020-12-15

A medical picture archive integration system includes a de-identification system that includes a first memory designated for protected health information (PHI) and is operable to perform a de-identification function on a DICOM image received from a medical picture archive system, to identify at least one patient identifier and generate a de-identified medical scan that does not include the at least one patient identifier. The medical picture archive integration system further includes a de-identified image storage system that stores the de-identified medical scan in a second memory separate from the first memory, and an annotating system operable to utilize model parameters received from a central server to perform an inference function on the de-identified medical scan retrieved from the second memory, generating annotation data for transmission to the medical picture archive system as an annotated DICOM file.
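
As a rough sketch of the de-identification step described here, the snippet below blanks a few PHI elements from a DICOM file and writes the result to a separate location; the pydicom library, the tag list, and the two storage paths are illustrative assumptions rather than the patent's own implementation.

    # Illustrative only: pydicom, the PHI tag list, and the paths are assumptions,
    # not the patent's prescribed toolkit or tag set.
    import pydicom

    PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

    def deidentify(phi_path, deidentified_path):
        ds = pydicom.dcmread(phi_path)              # DICOM image held in the PHI-designated first memory
        removed = {}
        for tag in PHI_TAGS:
            if tag in ds:
                removed[tag] = ds.data_element(tag).value
                ds.data_element(tag).value = ""     # strip the patient identifier from the scan
        ds.save_as(deidentified_path)               # de-identified scan goes to the separate second memory
        return removed                              # identifiers never leave the PHI store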

IMAGE PROCESSING APPARATUS AND METHOD
20200389631 · 2020-12-10

The present disclosure relates to an image processing apparatus and method configured so that degradation of the invisibility of a pattern can be reduced. A luminance difference between two pattern images, which are projected in a superimposed state on a content image, have an identical shape, and have patterns in opposite luminance change directions, is adjusted according to the luminance of the content image. The present disclosure is applicable, for example, to an image processing apparatus, an image projection apparatus, a control apparatus, an information processing apparatus, a projection image capturing system, an image processing method, a program, or the like.
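
A minimal numpy sketch of the adjustment idea, assuming the two projected frames are formed by adding and subtracting the same pattern and that the pattern amplitude is scaled by local content luminance; the specific scaling curve is an illustrative choice, not taken from the disclosure.

    import numpy as np

    def embed_pattern_pair(content_luma, pattern, base_amp=4.0):
        """Build two pattern frames with opposite luminance change directions whose
        difference shrinks in dark content regions, where a fixed difference would
        be easier to see."""
        amp = base_amp * np.clip(content_luma / 255.0, 0.2, 1.0)   # luminance-dependent amplitude (illustrative)
        delta = amp * pattern                                      # pattern holds +1 / -1 values
        frame_pos = np.clip(content_luma + delta, 0, 255)
        frame_neg = np.clip(content_luma - delta, 0, 255)
        return frame_pos, frame_neg                                # averaged over time, the pattern cancels

    content = np.full((480, 640), 120.0)                           # flat mid-gray content image
    pattern = (np.indices((480, 640)).sum(axis=0) % 2) * 2 - 1     # +/-1 checkerboard
    bright_frame, dark_frame = embed_pattern_pair(content, pattern)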

SATURATION MANAGEMENT FOR LUMINANCE GAINS IN IMAGE PROCESSING

Image analysis and processing may include using an image processor to receive image data corresponding to an input image, determine an initial gain value for the image data based on at least one of a two-dimensional gain map or a parameterized radial gain model, determine whether the initial gain value is below a threshold, determine a maximum RGB triplet value for the image data where the initial gain value is below the threshold, determine a pixel intensity as output of a function for saturation management, determine a final gain value for the image data based on the maximum RGB triplet value and the pixel intensity, apply the final gain value against the image data to produce processed image data, and output the processed image data for further processing using the image processor.
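
The listed steps read naturally as a per-pixel gain pipeline; the numpy sketch below follows that flow under assumed specifics (the threshold value, the headroom rule used as the saturation-management function, and the normalized image range are all illustrative).

    import numpy as np

    def apply_gain_with_saturation_management(rgb, gain_map, threshold=2.0):
        """Apply a per-pixel luminance gain, rolling it off where needed so the
        brightest channel of a pixel is not pushed past full scale. The roll-off
        rule stands in for the 'function for saturation management'."""
        max_rgb = rgb.max(axis=-1, keepdims=True)              # maximum RGB triplet value per pixel
        headroom = 1.0 / np.maximum(max_rgb, 1e-6)             # largest gain before the brightest channel clips
        final_gain = np.where(gain_map < threshold,
                              np.minimum(gain_map, headroom),  # saturation-managed gain
                              gain_map)                        # gains at or above the threshold pass through
        return np.clip(rgb * final_gain, 0.0, 1.0)

    rgb = np.random.rand(8, 8, 3)                              # normalized input image data
    gain_map = np.full((8, 8, 1), 1.5)                         # e.g. sampled from a 2D gain map or radial model
    processed = apply_gain_with_saturation_management(rgb, gain_map)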

Image processing apparatus, image processing method, and recording medium
10861140 · 2020-12-08

An imaging apparatus 1 includes an image acquisition unit 51 and an image processing unit 53. The image acquisition unit 51 acquires a face image. The image processing unit 53 adjusts the brightness of the face image acquired by the image acquisition unit 51. The image processing unit 53 combines the face image acquired by the image acquisition unit 51 with the adjusted image, using map data (map) in which a predetermined region of the face is set as a transparent region on the basis of a three-dimensional shape of the face.
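
A small numpy sketch of the combining step, assuming the map is a per-pixel alpha in [0, 1]; deriving that map from the 3D face shape is outside the sketch.

    import numpy as np

    def blend_with_face_map(original, brightened, face_map):
        """Combine the acquired face image with its brightness-adjusted version,
        weighting each pixel by a map that marks how strongly the adjusted layer
        shows through in that region of the face."""
        alpha = face_map[..., None].astype(np.float32)        # expand to broadcast over the color channels
        blended = alpha * brightened + (1.0 - alpha) * original
        return blended.astype(original.dtype)

    original = np.full((4, 4, 3), 100, dtype=np.uint8)        # acquired face image (toy data)
    brightened = np.full((4, 4, 3), 160, dtype=np.uint8)      # brightness-adjusted image
    face_map = np.full((4, 4), 0.5, dtype=np.float32)         # illustrative map, not derived from a 3D shape here
    combined = blend_with_face_map(original, brightened, face_map)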

Image processing apparatus, image processing method, and storage medium
10861136 · 2020-12-08

An image processing apparatus includes a setting unit configured to set a virtual light source for a captured image; a brightness correction unit configured to correct brightness of a partial region of an object using the virtual light source set by the setting unit; an attribute detection unit configured to detect an attribute of the partial region; a glossy component generation unit configured to generate a glossy component that is to be applied to the partial region, according to the attribute of the partial region detected by the attribute detection unit; and a glossy appearance correction unit configured to correct a glossy appearance of the partial region using the glossy component generated by the glossy component generation unit.
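
The glossy-component step can be pictured with a Blinn-Phong style highlight whose sharpness depends on the detected attribute; this is a stand-in illustration, and the per-attribute shininess table and toy region data are assumptions, not the claimed method.

    import numpy as np

    def add_gloss(region_rgb, normals, light_dir, view_dir, attribute="skin"):
        """Add a glossy component to a partial region lit by a virtual light
        source, with a per-attribute shininess (illustrative values only)."""
        shininess = {"skin": 16.0, "hair": 64.0}.get(attribute, 8.0)
        half_vec = light_dir + view_dir
        half_vec = half_vec / np.linalg.norm(half_vec)
        gloss = np.clip(normals @ half_vec, 0.0, 1.0) ** shininess   # Blinn-Phong style highlight
        return np.clip(region_rgb + gloss[..., None], 0.0, 1.0)

    region = np.zeros((16, 16, 3)) + 0.4                      # brightness-corrected partial region
    normals = np.zeros((16, 16, 3))
    normals[..., 2] = 1.0                                     # normals facing the camera (toy data)
    lit = add_gloss(region, normals, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))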

METHOD OF DISPLAYING AN IMAGE ON A SEE-THROUGH DISPLAY

The present invention concerns a method of displaying an image on a see-through display. The method comprises: obtaining (101) a first electro-magnetic radiation matrix of an object, the first matrix comprising first matrix elements representing radiation intensity values of corresponding locations of the object; dividing (103) the first matrix into a second matrix representing a first subset of the radiation intensity values of the matrix elements, and a third, different matrix representing a second, different subset of the radiation intensity values of the matrix elements; generating (105) a first histogram for the second matrix; equalising (107) the first histogram to obtain an equalised second histogram; generating (109) a first grayscale image representing the first subset of the radiation intensity values from the second matrix and the equalised second histogram; colouring (111) the first grayscale image with a first colourmap to obtain a first colour image; generating (113) a second grayscale image representing the second subset of the radiation intensity values by mapping substantially linearly the second subset of the radiation intensity values to a given number of encoded radiation intensity values; colouring (115) the second grayscale image with a second colourmap, which is different from the first colourmap, to obtain a second colour image; combining (117) the first colour image and the second colour image to obtain a combined colour image; and displaying (123) the combined colour image on the see-through display.
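
Assuming an OpenCV-based implementation, the pipeline above can be sketched roughly as below; the split rule, the 8-bit encoding, and the two colourmaps are illustrative choices rather than the claimed ones.

    import cv2
    import numpy as np

    def fuse_for_see_through(radiation, split_value=150.0):
        """Split a radiation intensity matrix into two subsets, histogram-equalise
        one and map the other substantially linearly, colour each with a different
        colourmap, and combine the two colour images."""
        low_mask = radiation < split_value
        # First subset: scale to 8 bit, equalise its histogram, apply the first colourmap.
        low8 = cv2.normalize(np.where(low_mask, radiation, 0), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        low_colour = cv2.applyColorMap(cv2.equalizeHist(low8), cv2.COLORMAP_BONE)
        # Second subset: substantially linear 8-bit mapping, second colourmap.
        high8 = cv2.normalize(np.where(low_mask, 0, radiation), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        high_colour = cv2.applyColorMap(high8, cv2.COLORMAP_HOT)
        # Combine, keeping each colour image only where its subset applies.
        mask3 = np.repeat(low_mask[..., None], 3, axis=2)
        return np.where(mask3, low_colour, high_colour)

    radiation = (np.random.rand(240, 320) * 300.0).astype(np.float32)   # synthetic intensity matrix
    combined = fuse_for_see_through(radiation)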

CLINICAL TRIAL RE-EVALUATION SYSTEM

A clinical trial re-evaluation system is operable to perform at least one assessment function on a set of medical scans for each of a first subset of a set of patients of a failed clinical trial to generate automated assessment data for each of the first subset of the set of patients. The first subset of the set of patients corresponds to a subset of human assessment data determined to have failed to meet criteria of the clinical trial. Patient re-evaluation data is generated for each of the first subset of the set of patients by comparing the automated assessment data to the criteria. The patient re-evaluation data for a second subset of the first subset of the set of patients indicates the automated assessment data passes the criteria. Trial re-evaluation data is generated based on the patient re-evaluation data for transmission to a computing device for display.
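
A toy Python sketch of the re-evaluation flow, with a single numeric criterion and dictionary inputs standing in for the automated assessment data; all field names are hypothetical.

    def re_evaluate_trial(human_failed_patients, automated_assessments, criterion):
        """For patients whose human assessment failed the trial criteria, compare
        automated assessment data against the same criterion and report which
        patients would now pass."""
        per_patient = {}
        for patient_id in human_failed_patients:                 # first subset: human-failed patients
            score = automated_assessments[patient_id]            # automated assessment data
            per_patient[patient_id] = score >= criterion         # patient re-evaluation data
        reclassified = [p for p, passed in per_patient.items() if passed]   # second subset: now passing
        return {"per_patient": per_patient, "reclassified_as_passing": reclassified}

    trial_re_evaluation = re_evaluate_trial(
        human_failed_patients=["p01", "p02", "p03"],
        automated_assessments={"p01": 0.71, "p02": 0.42, "p03": 0.66},
        criterion=0.6,
    )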

ENHANCED LOCAL CONTRAST
20200380647 · 2020-12-03

Various implementations described herein provide content on an optical see-through display using enhanced local contrast. In some implementations, the enhanced local contrast may be used to provide an apparent reduction in brightness (e.g., a shadow) or other visual effect. For example, the appearance of a virtual shadow of a virtual cup on a real table of a physical environment can be provided even though the brightness of the table cannot be reduced. The appearance of the shadow may be provided by selectively enhancing contrast so that the user cognitively interprets a relatively darker area where the shadow should be (e.g., via an optical illusion/effect).
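
A rough numpy/scipy sketch of the surround-brightening idea: since an additive see-through display cannot darken the real table, the region around the intended shadow is brightened so the untouched area reads as darker. The ring width and boost factor are illustrative.

    import numpy as np
    from scipy import ndimage

    def apparent_shadow(luma, shadow_mask, boost=1.2, ring_px=5):
        """Raise local contrast around a shadow region on an additive display so
        the unmodified shadow area is perceived as darker."""
        surround = ndimage.binary_dilation(shadow_mask, iterations=ring_px) & ~shadow_mask
        out = luma.astype(np.float32)
        out[surround] = np.clip(out[surround] * boost, 0, 255)   # brighten only the surround
        return out.astype(np.uint8)

    luma = np.full((120, 160), 140, dtype=np.uint8)              # view of the real table (toy data)
    shadow_mask = np.zeros((120, 160), dtype=bool)
    shadow_mask[50:70, 60:100] = True                            # where the virtual cup's shadow should fall
    display_luma = apparent_shadow(luma, shadow_mask)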

ROBOTIC CONTROL BASED ON 3D BOUNDING SHAPE, FOR AN OBJECT, GENERATED USING EDGE-DEPTH VALUES FOR THE OBJECT
20200376675 · 2020-12-03

Techniques are described for generating edge-depth values for an object, utilizing the edge-depth values in generating a 3D point cloud for the object, and utilizing the generated 3D point cloud to generate a 3D bounding shape (e.g., a 3D bounding box) for the object. Edge-depth values for an object are depth values that are determined from frame(s) of vision data (e.g., left/right images) that capture the object, and that are determined to correspond to an edge of the object (an edge from the perspective of the frame(s) of vision data). Techniques that utilize edge-depth values for an object (exclusively, or in combination with other depth values for the object) in generating 3D bounding shapes can enable accurate 3D bounding shapes to be generated for partially or fully transparent objects. Such increased-accuracy 3D bounding shapes directly improve the performance of a robot that utilizes the 3D bounding shapes in performing various tasks.
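
A compact numpy sketch of one way to read this: keep depth only at image-edge pixels, back-project those into a point cloud, and fit a bounding shape. The gradient-based edge test, pinhole intrinsics, and axis-aligned box are illustrative simplifications.

    import numpy as np

    def edge_depth_bounding_box(depth, gray, fx, fy, cx, cy, edge_thresh=10.0):
        """Build a 3D point cloud from depth values at edge pixels only, then
        return an axis-aligned 3D bounding box for the object."""
        gy, gx = np.gradient(gray.astype(np.float32))
        edges = np.hypot(gx, gy) > edge_thresh                # crude edge detector (illustrative)
        v, u = np.nonzero(edges & (depth > 0))                # edge pixels with valid depth
        z = depth[v, u]
        x = (u - cx) * z / fx                                 # pinhole back-projection
        y = (v - cy) * z / fy
        points = np.stack([x, y, z], axis=1)                  # edge-depth point cloud
        return points.min(axis=0), points.max(axis=0)         # (min corner, max corner) of the box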

AUTOMATED GENERATION OF SYNTHETIC LIGHTING SCENE IMAGES USING GENERATIVE ADVERSARIAL NETWORKS
20200380652 · 2020-12-03

This disclosure is directed to systems and methods for automated generation of lighting scene images. An image of an environment is provided to the system, and lamps with particular light styles can be added to various locations on the image. A modified image of the environment which includes the added lamps and light styles is generated using a generative adversarial network. The generative adversarial network focuses on one or more zones around the added lamp and applies learned or pre-specified decay functions to the light style in the zones.
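
The zone-and-decay portion can be sketched independently of the GAN: a pre-specified decay function shapes how a lamp's light style falls off around its location. The Gaussian decay, radius, and colour below are illustrative.

    import numpy as np

    def lamp_light_layer(height, width, lamp_xy, colour=(1.0, 0.9, 0.7), radius=80.0):
        """Build an additive light-style layer for one added lamp, attenuated by a
        pre-specified radial decay function around the lamp's position."""
        ys, xs = np.mgrid[0:height, 0:width]
        dist = np.hypot(xs - lamp_xy[0], ys - lamp_xy[1])
        decay = np.exp(-(dist / radius) ** 2)                 # illustrative Gaussian decay
        return decay[..., None] * np.asarray(colour)          # HxWx3 contribution in the lamp's zone

    layer = lamp_light_layer(480, 640, lamp_xy=(320, 200))    # would be blended into the photo by the GAN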