Patent classifications
H04N13/122
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes obtaining a virtual model of a portion of a patient's anatomy from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the anatomy of the patient; and registering, based on the identified positions, the virtual model of the portion of the anatomy with a corresponding observed portion of the anatomy.
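As a rough illustration only (not the patented method), registering a virtual model to observed anatomy from matched marker positions can be sketched as a least-squares rigid alignment (the Kabsch algorithm). All names below are hypothetical:

```python
import numpy as np

def register_rigid(model_pts, observed_pts):
    """Estimate rotation R and translation t that map virtual-model
    marker positions onto sensor-observed marker positions
    (least-squares rigid fit via the Kabsch algorithm)."""
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t
```

With three or more non-collinear markers, `observed ≈ model @ R.T + t` recovers the pose used to overlay the virtual model on the observed anatomy.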
Method for image processing of image data for image and visual effects on a two-dimensional display wall
A scene captured of a live action set while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene, which includes a live actor and the display wall displaying a first rendering of a precursor image, is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
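As one hedged sketch of the final step (adjusting image data using pixel display values and a matte, not the patent's exact compositing), the matte can gate which pixels receive the new effect values:

```python
import numpy as np

def apply_effect_with_matte(image, effect_pixels, matte):
    """Blend effect pixel values into the display-wall region only.

    image         : H x W x C captured frame
    effect_pixels : H x W x C replacement/effect values
    matte         : H x W, 1.0 on the live-actor portion,
                    0.0 on the display-wall portion
    """
    matte = matte[..., None]  # broadcast over colour channels
    # Actor pixels keep the captured values; wall pixels take the effect.
    return image * matte + effect_pixels * (1.0 - matte)
```

A soft matte (values between 0 and 1) gives feathered edges at the actor/wall boundary with the same formula.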
Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
In some aspects, the techniques described herein relate to systems, methods, and computer-readable media for (1) data pre-processing for stereo-temporal image sequences to improve three-dimensional (3-D) data reconstruction, (2) improved correspondence refinement for image areas affected by oversaturation, and (3) filling missing correspondences to improve 3-D reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether they should be used for the image processing.
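The fill-and-cross-check idea can be sketched, under assumptions of my own (scanline interpolation from valid neighbours, a simple agreement test as the cross-check), roughly as follows:

```python
import numpy as np

def fill_missing_disparities(disparity, valid, max_dev=2.0):
    """For pixels without a correspondence (valid == False), approximate
    a disparity from the nearest valid neighbours on the same scanline,
    then cross-check the approximation against those neighbours before
    accepting it."""
    filled = disparity.copy()
    accepted = valid.copy()
    h, w = disparity.shape
    for y in range(h):
        for x in range(w):
            if valid[y, x]:
                continue
            # Nearest valid neighbours to the left and right.
            left = next((disparity[y, i] for i in range(x - 1, -1, -1)
                         if valid[y, i]), None)
            right = next((disparity[y, i] for i in range(x + 1, w)
                          if valid[y, i]), None)
            cands = [v for v in (left, right) if v is not None]
            if not cands:
                continue
            approx = sum(cands) / len(cands)
            # Cross-check: the approximation must agree with every
            # neighbour it was built from, else it is rejected.
            if all(abs(approx - v) <= max_dev for v in cands):
                filled[y, x] = approx
                accepted[y, x] = True
    return filled, accepted
```

Rejected approximations (e.g., across a depth discontinuity where neighbours disagree) simply stay marked invalid, which matches the abstract's "determine whether the approximated correspondences should be used" step.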
CORRECTION OF A HALO IN A DIGITAL IMAGE AND DEVICE FOR IMPLEMENTING SAID CORRECTION
The object of the invention is a method (400) for correcting a halo (H) in a digital image (1) captured using photogrammetry in a 3-D modeling studio, the halo being generated through the interaction of light originating from a light source (L3, L4, L5, L6) in the studio with the optics of the shooting device, and manifesting as a local lightening of the digital image. The method comprises the steps of generating (410) a light intensity map (M) characterizing the light source in terms of spatial distribution and light intensity, providing (420) a convolution kernel specific to the shooting device, calculating (430) a convolution product of the light intensity map and the kernel to obtain a corrective value map (CVM), and subtracting the corrective value map from the digital image pixel by pixel to produce a corrected image (Icorr) in which the halo is not present.
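The convolve-then-subtract pipeline can be sketched as follows (a minimal direct convolution with zero padding; the real device kernel and intensity-map generation are outside this illustration):

```python
import numpy as np

def correct_halo(image, intensity_map, kernel):
    """Estimate a corrective value map (CVM) by convolving the light
    intensity map with a device-specific kernel, then subtract the CVM
    from the image pixel by pixel."""
    kernel = np.asarray(kernel)[::-1, ::-1]  # flip for true convolution
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(intensity_map, ((ph, ph), (pw, pw)))
    h, w = intensity_map.shape
    cvm = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            cvm[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    # Clamp so subtraction cannot push pixel values negative.
    corrected = np.clip(image - cvm, 0.0, None)
    return corrected, cvm
```

For a point light source, the CVM is just the kernel stamped at the source position, i.e., the kernel plays the role of the device's halo point-spread response.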
SEPARABLE DISTORTION DISPARITY DETERMINATION
Systems and methods for determining disparity between two images are disclosed. Such systems and methods include obtaining a first pixel image of a scene from a first viewpoint and a second pixel image of the scene from a second viewpoint (e.g., separated from the first viewpoint in a camera baseline direction such as horizontal or vertical). The first and second pixel images are modified using component-separated correction to create respective first and second corrected pixel images, maintaining pixel scene correspondence in the camera baseline direction from the original image pair to the corrected image pair. Pixel pairs are then determined from corresponding pixels between the first and second corrected pixel images in the camera baseline direction, and a disparity correspondence is determined for each pixel pair from the pixel locations in the first and second pixel images that correspond to the respective pixel locations of the pixel pairs in the corrected images.
SMART WEARABLE DEVICE FOR VISION ENHANCEMENT AND METHOD FOR REALIZING STEREOSCOPIC VISION TRANSPOSITION
The invention discloses a smart wearable device for vision enhancement and a method for realizing stereoscopic vision transposition. The device comprises a wearable device body provided with camera lenses, image sensors, an image information receiving and transmitting unit, image enhancement units, and near-to-eye optical systems; the optical axis and field angle of each near-to-eye optical system are matched with the optical axis and field angle of the corresponding camera lens, and each image sensor is arranged behind its camera lens. The real scene enters the image sensor through the camera lens for image acquisition, and the image enhancement unit enhances the low-light image collected by the smart wearable device so that it is displayed clearly. The invention can ensure enhancement of real stereoscopic vision in dark environments and the exchange of remote, barrier-free stereoscopic real-world vision.
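The abstract does not specify the enhancement algorithm; as one common low-light stand-in (purely illustrative, not the patent's image enhancement unit), gamma correction brightens dark regions while preserving highlights:

```python
import numpy as np

def enhance_low_light(image, gamma=0.4):
    """Brighten a low-light 8-bit image via gamma correction on
    normalised intensities (gamma < 1 lifts dark pixels the most)."""
    img = image.astype(float) / 255.0
    out = np.power(img, gamma)
    return (out * 255.0).round().astype(np.uint8)
```

In a stereoscopic device, the same enhancement would be applied to both sensor streams so the left/right pair stays photometrically consistent for fusion.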