
SYSTEM AND METHOD FOR ENHANCING VISUAL ACUITY

A head wearable display system comprising a target object detection module receiving multiple image pixels of a first portion and a second portion of a target object, and the corresponding depths; a first light emitter emitting multiple first-eye light signals to display a first-eye virtual image of the first portion and the second portion of the target object for a viewer; a first light direction modifier for respectively varying a light direction of each of the multiple first-eye light signals emitted from the first light emitter; a first collimator; a first combiner, for redirecting and converging the multiple first-eye light signals towards a first eye of the viewer. The first-eye virtual image of the first portion of the target object in a first field of view has a greater number of the multiple first-eye light signals per degree than that of the first-eye virtual image of the second portion of the target object in a second field of view.
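The abstract above describes foveated rendering: more light signals per degree in the first field of view than in the second. A minimal sketch of that density split, with an illustrative budget-allocation function not taken from the patent:

```python
# Hedged sketch: split a budget of light signals between two fields of
# view so the first (foveal) FOV receives a higher angular density than
# the second (peripheral) FOV. All names and values are illustrative.

def signals_per_degree(total_signals, fov1_deg, fov2_deg, foveal_ratio=4.0):
    """Return per-degree densities (d1, d2) with d1 = foveal_ratio * d2
    and d1 * fov1_deg + d2 * fov2_deg = total_signals."""
    d2 = total_signals / (foveal_ratio * fov1_deg + fov2_deg)
    d1 = foveal_ratio * d2
    return d1, d2

# e.g., 6000 signals over a 10-degree foveal FOV and 40-degree peripheral FOV
d1, d2 = signals_per_degree(total_signals=6000, fov1_deg=10, fov2_deg=40)
```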

METHODS AND APPARATUSES FOR TRAINING MAGNETIC RESONANCE IMAGING MODEL

Methods and apparatuses for training a magnetic resonance imaging model, electronic devices and computer readable storage media are provided. A method may include: acquiring a magnetic resonance image data set; constructing a ring deep neural network to be trained; inputting an under-sampled magnetic resonance image and a full-sampled magnetic resonance image respectively to two neural networks included in the ring deep neural network, to generate respective simulated magnetic resonance images; inputting a first simulated full-sampled magnetic resonance image and the full-sampled magnetic resonance image to a pre-constructed first simulated magnetic resonance image class discrimination model, to obtain a first discrimination result indicating whether or not the first simulated full-sampled magnetic resonance image is of a simulated magnetic resonance image class; and adjusting a network parameter of the ring deep neural network based on a preset loss function, to obtain a trained magnetic resonance imaging model.
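The "ring" of two networks with a discriminator resembles cycle-consistent adversarial training. A toy sketch of one such objective, assuming an adversarial term plus a cycle-consistency term as the preset loss; the network stubs are placeholders, not the patent's architecture:

```python
import numpy as np

# Hedged sketch: cycle-consistent training objective for two toy
# "networks" mapping between under-sampled and full-sampled MR images.
rng = np.random.default_rng(0)
under = rng.random((8, 8))   # under-sampled MR image (toy)
full = rng.random((8, 8))    # fully-sampled MR image (toy)

def G_up2full(x):  # under-sampled -> simulated full-sampled
    return x * 1.1

def G_full2up(x):  # full-sampled -> simulated under-sampled
    return x * 0.9

def D(x):  # discriminator: probability the input is a real full-sampled image
    return 1.0 / (1.0 + np.exp(-(x.mean() - 0.5)))

sim_full = G_up2full(under)
sim_under = G_full2up(full)

# preset loss: adversarial term + weighted cycle-consistency term
adv_loss = -np.log(D(sim_full) + 1e-12)
cycle_loss = (np.abs(G_full2up(sim_full) - under).mean()
              + np.abs(G_up2full(sim_under) - full).mean())
loss = adv_loss + 10.0 * cycle_loss
```

In training, this loss would drive the adjustment of the ring network's parameters; here it is only evaluated once.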

SYSTEMS AND METHODS FOR MASKING A RECOGNIZED OBJECT DURING AN APPLICATION OF A SYNTHETIC ELEMENT TO AN ORIGINAL IMAGE
20230050857 · 2023-02-16

An exemplary object masking system is configured to mask a recognized object during an application of a synthetic element to an original image. For example, the object masking system accesses a model of a recognized object depicted in an original image of a scene. The object masking system associates the model with the recognized object. The object masking system then generates presentation data for use by a presentation system to present an augmented version of the original image in which a synthetic element added to the original image is, based on the model as associated with the recognized object, prevented from occluding at least a portion of the recognized object. In this way, the synthetic element is made to appear as if located behind the recognized object. Corresponding systems and methods are also disclosed.
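Preventing the synthetic element from occluding the recognized object amounts to a per-pixel depth test against the object's model. A minimal sketch, with all geometry and values illustrative:

```python
import numpy as np

# Hedged sketch: composite a synthetic element into an original image,
# but only where the element is closer to the camera than the recognized
# object's model-derived depth, so it appears to sit behind the object.
H, W = 4, 4
original = np.zeros((H, W, 3))   # toy original image
synthetic = np.ones((H, W, 3))   # toy synthetic element, drawn everywhere

object_depth = np.full((H, W), np.inf)  # inf where no recognized object
object_depth[1:3, 1:3] = 1.0            # object occupies the center pixels
synthetic_depth = np.full((H, W), 2.0)  # element is farther than the object

# draw the synthetic element only where it is in front of the object
mask = synthetic_depth < object_depth
augmented = np.where(mask[..., None], synthetic, original)
```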

OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC APPARATUS, OPHTHALMIC INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
20230050680 · 2023-02-16

An ophthalmic information processing apparatus includes a specifying unit and an image deforming unit. The specifying unit is configured to specify a three-dimensional position of each pixel in a two-dimensional front image depicting a predetermined site of a subject's eye, based on OCT data obtained by performing optical coherence tomography on the predetermined site. The image deforming unit is configured to deform the two-dimensional front image, by changing the position of at least one pixel in the two-dimensional front image based on the three-dimensional position, to generate a three-dimensional front image.
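The deformation step moves pixels of the 2-D front image according to their OCT-derived 3-D positions. A toy sketch, assuming a simple depth-proportional horizontal shift (the actual deformation in the patent is not specified here):

```python
import numpy as np

# Hedged sketch: shift each pixel of a 2-D front image horizontally by an
# amount proportional to its OCT-derived depth, producing a "3-D" front
# image. Shift rule and data are illustrative only.
front = np.arange(16, dtype=float).reshape(4, 4)     # toy front image
depth = np.tile(np.arange(4, dtype=float), (4, 1))   # per-pixel z from OCT

def deform(front, depth, shift_per_unit=1):
    h, w = front.shape
    out = np.zeros_like(front)
    for y in range(h):
        for x in range(w):
            new_x = x + int(round(shift_per_unit * depth[y, x]))
            if 0 <= new_x < w:
                out[y, new_x] = front[y, x]  # move pixel by its depth
    return out

deformed = deform(front, depth)
```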

VR-Based Treatment System and Method
20230047622 · 2023-02-16

An XR-based system (a virtual reality, augmented reality, or mixed reality system) is provided to visualize and resolve at least one condition of a subject. A dynamic virtual representation of the subject's body is generated based on physical traits and movement of the subject's body captured by at least one motion tracking device, and is rendered in the extended reality environment. The dynamic virtual representation is synchronized with the movement of the subject's body. The method further includes generating a virtual representation of at least one condition of the subject in response to one or more inputs, overlaying or rendering the virtual representation of the condition on the virtual representation of the body, and receiving and processing one or more inputs representing one or more attributes of the condition to adjust the virtual representation of the condition in the extended reality environment.

Method of Operating Intraoral Scanner for Fast and Accurate Full Mouth Reconstruction
20230048005 · 2023-02-16

An intraoral scanner includes an image capturing device and a processor. A method of operating the intraoral scanner includes the image capturing device sequentially capturing M images of a buccal bite; the processor generating M sets of buccal bite point clouds according to the M images; the processor matching the M sets of buccal bite point clouds to generate a bite model; when the number of data points of the bite model exceeds a first threshold, the processor computing P sets of bite feature descriptors of the bite model; and, when a predetermined quantity of bite feature descriptors in a set of the P sets of bite feature descriptors exceeds a second threshold, the processor performing a registration of an upper arch model and a lower arch model to the buccal bite model to generate a full mouth model.
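The two-threshold gating of the registration step can be sketched as a simple control flow. Threshold values, the descriptor extractor, and the registration routine below are all illustrative stand-ins, not the patent's implementations:

```python
import numpy as np

# Hedged sketch of the described gating: merge per-image buccal-bite point
# clouds into a bite model, then register the arches only once the bite
# model has enough data points and enough feature-descriptor support.
FIRST_THRESHOLD = 100    # min data points in the bite model (illustrative)
SECOND_THRESHOLD = 5     # min descriptors in one descriptor set (illustrative)

def compute_descriptors(cloud):           # stand-in feature extractor
    return cloud[::10]

def register(upper, lower, bite):         # stand-in registration
    return np.vstack([upper, lower, bite])

def build_full_mouth(buccal_clouds, upper_arch, lower_arch):
    bite_model = np.vstack(buccal_clouds)         # merge M point clouds
    if len(bite_model) <= FIRST_THRESHOLD:
        return None                               # keep scanning
    descriptor_sets = [compute_descriptors(c) for c in buccal_clouds]
    if max(len(s) for s in descriptor_sets) <= SECOND_THRESHOLD:
        return None                               # not enough feature support
    return register(upper_arch, lower_arch, bite_model)

clouds = [np.random.default_rng(i).random((60, 3)) for i in range(3)]
full = build_full_mouth(clouds, np.zeros((10, 3)), np.zeros((10, 3)))
```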

Computer apparatus and methods for generating color composite images from multi-echo chemical shift-encoded MRI
11580626 · 2023-02-14

A computer apparatus and methods generate multi-parametric color composite images from multi-echo chemical shift-encoded (CSE) MRI. Some embodiments use inherently co-registered images (i.e., image maps) that are combined into a single intuitive composite color image. The color (e.g., brightness, hue, and/or saturation) reflects in part the water and fat content of the tissue, as well as other properties, particularly T2* relaxation (related to magnetic susceptibility).
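Folding several co-registered parameter maps into one color can be sketched per pixel with an HSV mapping. The specific assignment below (hue from fat fraction, saturation from T2*, brightness from water signal) is an assumption for illustration, not the patent's mapping:

```python
import colorsys

# Hedged sketch: combine three co-registered parameter values into one
# RGB color. The mapping choices are illustrative assumptions.
def composite_pixel(fat_fraction, t2_star_ms, water_signal):
    hue = 0.66 * fat_fraction                      # water-dominant -> red end
    sat = min(1.0, 10.0 / max(t2_star_ms, 1e-3))   # short T2* -> saturated
    val = min(1.0, max(0.0, water_signal))         # water signal -> brightness
    return colorsys.hsv_to_rgb(hue, sat, val)

rgb = composite_pixel(fat_fraction=0.2, t2_star_ms=20.0, water_signal=0.8)
```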

Methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis
11580673 · 2023-02-14

The subject matter described herein includes methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis. One method for mask embedding for realistic high-resolution image synthesis includes receiving, as input, a mask embedding vector and a latent features vector, wherein the mask embedding vector acts as a semantic constraint; generating, using a trained image synthesis algorithm and the input, a realistic image, wherein the realistic image is constrained by the mask embedding vector; and outputting, by the trained image synthesis algorithm, the realistic image to a display or a storage device.
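Conditioning a generator on a mask embedding is commonly done by concatenating the embedding with the latent vector before synthesis. A toy sketch of that input scheme; the stub "generator" is a single random linear layer, not the paper's trained network:

```python
import numpy as np

# Hedged sketch: concatenate a mask embedding (semantic constraint) with a
# latent features vector and pass the result through a toy generator.
rng = np.random.default_rng(0)
mask_embedding = rng.random(16)   # semantic constraint (toy)
latent = rng.random(64)           # latent features vector (toy)

W = rng.random((80, 32 * 32))     # stand-in "trained" generator weights

def generate(embedding, z):
    x = np.concatenate([embedding, z])      # joint conditioning input, (80,)
    return np.tanh(x @ W).reshape(32, 32)   # constrained synthetic image

image = generate(mask_embedding, latent)
```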

System and method for generating a virtual mathematical model of the dental (stomatognathic) system

A method for forming a virtual 3D mathematical model of a dental system, including receiving DICOM files representing the dental system; identifying the number and location of voxels of tissues of the dental system; combining the voxels of the tissues into voxels of organs of the dental system; combining the organs into the virtual 3D mathematical model of the dental system, wherein the virtual 3D mathematical model supports linear, non-linear and volumetric measurements of the dental system; and presenting the virtual 3D mathematical model to a user. The DICOM files can be derived from cone beam or multispiral computed tomography, MRT, PET and/or ultrasonography. The tissues include enamel, dentin, pulp, cartilage, periodontium, and/or jaw bone. The organs include teeth, gums, the temporomandibular joint and/or jaw. A size of the voxels is typically between 40 μm and 200 μm.
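The tissue-to-organ grouping can be sketched as labeling voxels by intensity range and taking the union of tissue labels as an organ mask. The intensity thresholds below are illustrative, not calibrated values from the patent:

```python
import numpy as np

# Hedged sketch: label voxels as tissues from intensity ranges, then
# group tissue voxels into an organ (tooth) mask. Thresholds are toy values.
volume = np.array([[0, 1500, 3000],
                   [200, 1600, 2900],
                   [0, 100, 1400]], dtype=float)   # toy intensity slice

BACKGROUND, DENTIN, ENAMEL = 0, 1, 2

tissue = np.full(volume.shape, BACKGROUND)
tissue[(volume >= 1000) & (volume < 2500)] = DENTIN   # mid intensities
tissue[volume >= 2500] = ENAMEL                       # highest intensities

# a "tooth" organ is the union of its constituent tissue voxels
tooth_mask = (tissue == DENTIN) | (tissue == ENAMEL)
tooth_voxel_count = int(tooth_mask.sum())
```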

Patient-specific instrumentation for implant revision surgery

A system for creating at least one model of a bone and implanted implant comprises a processing unit; and a non-transitory computer-readable memory communicatively coupled to the processing unit and comprising computer-readable program instructions executable by the processing unit for: obtaining at least one image of at least part of a bone and of an implanted implant on the bone, the at least one image being patient specific, obtaining a virtual model of the implanted implant using an identity of the implanted implant, overlaying the virtual model of the implanted implant on the at least one image to determine a relative orientation of the implanted implant relative to the bone in the at least one image, and generating and outputting a current bone and implant model using the at least one image, the virtual model of the implanted implant and the overlaying.
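Determining the implant's orientation by overlaying its virtual model on the image can be sketched as a search over candidate poses for the best fit. The brute-force 2-D rotation search below is purely illustrative of the overlay step, not the system's actual registration method:

```python
import numpy as np

# Hedged sketch: recover an implant's orientation by overlaying the
# virtual implant model on observed image points at candidate angles and
# keeping the angle with the smallest overlay error. Toy 2-D geometry.
def rotate(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

model = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # virtual implant
true_theta = np.deg2rad(30)
observed = rotate(model, true_theta)    # implant as seen in the image

def fit_orientation(model, observed, n=360):
    thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)  # 1-degree steps
    errs = [np.abs(rotate(model, t) - observed).sum() for t in thetas]
    return thetas[int(np.argmin(errs))]

best = fit_orientation(model, observed)
```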