Patent classifications
H04N2013/0077
Processing color information for intraoral scans
A method includes receiving scan data of a tooth during a first mode of operation, the scan data of the tooth having been generated by an intraoral scanner. The method includes invoking a second mode of operation and presenting, in a GUI, an image of the tooth. The method includes presenting, in the GUI, indications of a plurality of color zones of the tooth, the indications comprising, for at least one color zone of the plurality of color zones, an indication that insufficient color information has been received, wherein each color zone represents a separate region of the tooth for which an approximately uniform color is to be used. The method includes categorizing, for one or more color zones of the plurality of color zones for which sufficient color information has been received, each of the one or more color zones according to a color palette used for dental prosthetics.
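The final categorization step can be read as a nearest-neighbor lookup against a shade palette. The sketch below assumes a toy palette (the shade names and RGB values are illustrative, not taken from the patent) and averages a zone's sampled colors before matching:

```python
import numpy as np

# Illustrative palette (name -> RGB) standing in for a dental shade
# guide; the actual palette used by the method is not specified here.
PALETTE = {
    "A1": (246, 238, 221),
    "A2": (236, 223, 199),
    "B1": (248, 242, 227),
    "C2": (216, 202, 180),
}

def categorize_zone(zone_pixels):
    """Assign a color zone to the nearest palette shade.

    zone_pixels: (N, 3) array of RGB samples collected for one zone.
    Returns the palette key whose color is closest (Euclidean distance
    in RGB) to the zone's mean color.
    """
    mean_color = np.asarray(zone_pixels, dtype=float).mean(axis=0)
    names = list(PALETTE)
    colors = np.array([PALETTE[n] for n in names], dtype=float)
    dists = np.linalg.norm(colors - mean_color, axis=1)
    return names[int(np.argmin(dists))]
```

A perceptual color space (e.g. CIELAB) would be a more faithful distance metric than raw RGB, but the nearest-neighbor structure is the same.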
METHODS AND SYSTEMS FOR DIAGNOSING VISION LOSS AND PROVIDING VISUAL COMPENSATION
Methods, systems and apparatus for compensating vision loss for a patient. In some embodiments, a computer processor receives vision loss data associated with a vision loss region of an eye of a patient from a head mounted display (HMD) device worn by the patient, generates a parameterized perceptual loss model, and then generates inverse data to correct for color loss, contrast and luminance desaturation, and visual rotational and spatial distortion suffered by the eye of the patient. The computer processor then transmits the inverse data to the HMD device being worn by the patient for use in correcting the visual rotational and spatial distortion loss of the eye of the patient.
Polarization capture device, system, and method
A device includes a first lens. The device also includes a first polarized image sensor coupled with the first lens and configured to capture, from a first perspective, a first set of image data in a plurality of polarization orientations. The device also includes a second lens disposed apart from the first lens. The device further includes a second polarized image sensor coupled with the second lens and configured to capture, from a second perspective different from the first perspective, a second set of image data in the plurality of polarization orientations.
SYSTEM AND METHOD FOR GENERATING DEWARPED IMAGE USING PROJECTION PATTERNS CAPTURED FROM OMNI-DIRECTIONAL STEREO CAMERAS
A system for generating a high-resolution de-warped omni-directional stereo image from a captured omni-directional stereo image by correcting optical distortions using projection patterns is provided. The system includes a projection pattern capturing arrangement, a projector or a display, and a de-warping server. The projection pattern capturing arrangement includes one or more omni-directional cameras to capture projection patterns from the omni-directional stereo image captured by each omni-directional stereo camera. The projector or the display displays the projection patterns. The de-warping server obtains the projection patterns and processes them to generate the high-resolution de-warped omni-directional stereo image by correcting optical distortions in the captured omni-directional stereo image and mapping the captured omni-directional stereo image to the high-resolution de-warped omni-directional stereo image.
Systems, methods, and media for colorizing grayscale images
In one embodiment, a computing system may access a first grayscale image and a second grayscale image. The system may generate a first color image and a second color image based on the first grayscale image and the second grayscale image, respectively. The system may generate affinity information based on the first grayscale image and the second grayscale image, the affinity information identifying relationships between pixels of the first grayscale image and pixels of the second grayscale image. The system may modify the color of the first color image and the second color image based on the affinity information. The system may generate a first visual output based on the modified first color image and a second visual output based on the modified second color image.
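One way to realize the affinity step is to score pixel pairs by grayscale similarity and then blend each image's colors toward the affinity-weighted colors of the other. A minimal sketch, assuming a Gaussian affinity on intensity differences (the abstract does not specify the affinity model, and the function names are ours):

```python
import numpy as np

def affinity(gray1, gray2, sigma=0.1):
    """Pairwise affinity between pixels of two grayscale images
    (flattened); higher where intensities are similar."""
    g1 = gray1.reshape(-1, 1).astype(float)
    g2 = gray2.reshape(1, -1).astype(float)
    return np.exp(-((g1 - g2) ** 2) / (2 * sigma ** 2))

def harmonize(color1, color2, gray1, gray2, alpha=0.5):
    """Pull each image's colors toward the affinity-weighted average of
    the other image's colors, so similar pixels get similar colors."""
    A = affinity(gray1, gray2)
    Wr = A / (A.sum(axis=1, keepdims=True) + 1e-8)  # row-normalized
    Wc = A / (A.sum(axis=0, keepdims=True) + 1e-8)  # column-normalized
    c1 = color1.reshape(-1, 3).astype(float)
    c2 = color2.reshape(-1, 3).astype(float)
    out1 = (1 - alpha) * c1 + alpha * (Wr @ c2)
    out2 = (1 - alpha) * c2 + alpha * (Wc.T @ c1)
    return out1.reshape(color1.shape), out2.reshape(color2.shape)
```

With `alpha=0` the colors pass through unchanged; with `alpha=1` each pixel takes the affinity-weighted average color of its counterparts in the other image.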
Removing moving objects from a video scene captured by a moving camera
Methods, apparatus, and software media are provided for removing unwanted information such as moving or temporary foreground objects from a video sequence. The method performs, for each pixel, a statistical analysis to create a background data model whose color values can be used to detect and remove the unwanted information. The method assumes that for each pixel the background is present in a majority of the frames. The camera that records the video sequence may move relative to the geometry of the video scene. A pixel in a first frame is matched to a location in the geometry. The method determines color values of pixels, matched to the location in the geometry, in successive frames and clusters color values to determine a background color value range. It may use quadratic or better interpolation and extrapolation to determine background color values for unavailable frames.
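The per-pixel statistical analysis can be sketched as a simple one-pass clustering of the color values observed at one geometry location across frames, with the largest cluster taken as background (the majority assumption above). The clustering scheme and tolerance below are illustrative assumptions:

```python
import numpy as np

def background_color(samples, tol=10.0):
    """Estimate the background color of one geometry location from the
    color values observed across frames, assuming the background is
    present in a majority of frames.

    samples: (F, 3) array, one RGB value per frame in which the location
    was visible. One-pass clustering: each sample joins the first cluster
    whose running mean is within `tol`; the largest cluster wins.
    """
    clusters = []  # each entry: [sum_of_members, member_count]
    for s in np.asarray(samples, dtype=float):
        for c in clusters:
            if np.linalg.norm(c[0] / c[1] - s) <= tol:
                c[0] += s
                c[1] += 1
                break
        else:
            clusters.append([s.copy(), 1])
    best = max(clusters, key=lambda c: c[1])
    return best[0] / best[1]  # mean color of the dominant cluster
```

Foreground pixels are then those whose observed color falls outside the dominant cluster's range, and they can be replaced by the background estimate.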
Dirty lens image correction
Systems and methods for correcting images that include artifacts due to dirty camera lenses of an electronic device are disclosed. Correction of images by the systems and methods includes obtaining a first raw pixel image of a scene captured with a first camera, obtaining a second raw image of the scene captured with a second camera separated from the first camera in a camera baseline direction, rectifying the first and second raw pixel images to create respective first and second rectified pixel images, determining disparity correspondence between corresponding image pixel pairs of the first and second rectified images in the camera baseline direction, mapping the first and second rectified images into the same domain using the determined disparity, detecting image artifact regions within each domain-mapped image by comparing corresponding regions of the domain-mapped images, determining correction factors for each detected image artifact region, and correcting the rectified first and second images by applying the determined correction factors.
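Reduced to one rectified row per camera, the detect-and-correct loop might look like the following sketch; the threshold and the assumption that dirt only attenuates (darkens) the affected view are illustrative, not from the patent:

```python
import numpy as np

def correct_dirty_lens(rect1, rect2, disparity, thresh=0.15):
    """Detect and correct dirt attenuation on 1-D rectified rows.

    rect1, rect2: float intensity rows from two rectified images.
    disparity: integer per-pixel shift mapping rect2 into rect1's domain.
    Where the domain-mapped images disagree by more than `thresh`
    (relative difference) a dirt artifact is flagged, and the correction
    factor is the intensity ratio between the two views.
    """
    idx = np.arange(rect1.size)
    # Map image 2 into image 1's domain using the disparity.
    mapped2 = rect2[np.clip(idx + disparity, 0, rect2.size - 1)]
    rel_diff = np.abs(rect1 - mapped2) / (np.maximum(rect1, mapped2) + 1e-8)
    artifact = rel_diff > thresh
    corrected1 = rect1.copy()
    # Where image 1 is the darker (dirty) view, scale it up to match.
    dirty1 = artifact & (rect1 < mapped2)
    factor = mapped2[dirty1] / (rect1[dirty1] + 1e-8)
    corrected1[dirty1] *= factor
    return corrected1, artifact
```

The real method operates on 2-D regions rather than single pixels, which makes the comparison robust to disparity noise, but the ratio-based correction factor is the same idea.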
Systems and methods for color dithering
In one embodiment, a computing system may determine a barycentric coordinate system associated with a target color value for a target image region. The system may determine barycentric weights for the target color value with respect to vertices of the barycentric coordinate system. The system may determine a number of pixel groups for the target image region based on the barycentric weights of the target color value and a dithering mask satisfying a spatial stacking constraint. Each pixel group may be associated with a color of a color space associated with the vertices of the barycentric coordinate system. The system may generate an image including the target image region by assigning pixels in the pixel groups to associated colors, respectively. The average color value of the target image region may be substantially equal to the target color value.
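The mapping from barycentric weights to pixel-group sizes can be sketched directly: solve for the weights of the target color with respect to the vertex colors, then split the region's pixel budget in proportion so the average reproduces the target. The 2-D color plane and triangle below are illustrative; the spatial placement via a stacking-constrained dithering mask is omitted:

```python
import numpy as np

def dither_counts(target, vertices, n_pixels):
    """Split `n_pixels` of a target region among the colors at the
    vertices of a barycentric coordinate system (a triangle in a 2-D
    color plane here), so the region's average color matches `target`.
    """
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in vertices)
    # Solve target = w0*v0 + w1*v1 + w2*v2 subject to w0 + w1 + w2 = 1.
    M = np.column_stack([v0 - v2, v1 - v2])
    w01 = np.linalg.solve(M, np.asarray(target, dtype=float) - v2)
    w = np.array([w01[0], w01[1], 1.0 - w01.sum()])
    counts = np.rint(w * n_pixels).astype(int)
    counts[np.argmax(w)] += n_pixels - counts.sum()  # keep total exact
    return counts
```

The rounding correction on the largest group keeps the total pixel count exact, at the cost of a sub-pixel error in the average that the patent's "substantially equal" wording allows for.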
METHOD FOR DISPLAYING A MODEL OF A SURROUNDING AREA, CONTROL UNIT AND VEHICLE
A method, including recording a first and a second camera image, the first camera image and the second camera image having an overlap region. The method includes: assigning pixels of the first camera image and pixels of the second camera image to predefined points of a three-dimensional lattice structure, the predefined points being situated in a region of the three-dimensional lattice structure that represents the overlap region; ascertaining a color-information difference for each predefined point as a function of the assigned color information; ascertaining a quality value at each predefined point as a function of the color-information difference ascertained there; determining a global color transformation matrix as a function of the color-information differences, each weighted as a function of the corresponding quality value; and adapting the second camera image as a function of the determined color transformation matrix.
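Determining the global color transformation matrix can be read as a weighted least-squares problem over the overlap points. A sketch, assuming a linear 3x3 model and per-point quality weights (the function names and the exact weighting scheme are ours, not the patent's):

```python
import numpy as np

def fit_color_transform(colors1, colors2, quality):
    """Weighted least-squares fit of a global 3x3 matrix T such that
    T @ colors2[i] approximates colors1[i], each overlap point weighted
    by its quality value.

    colors1, colors2: (N, 3) colors observed at the predefined points.
    quality: (N,) non-negative weights.
    """
    w = np.sqrt(np.asarray(quality, dtype=float))[:, None]
    A = np.asarray(colors2, dtype=float) * w  # weighted source colors
    B = np.asarray(colors1, dtype=float) * w  # weighted target colors
    T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return T.T  # so that (T @ c2) matches c1

def adapt_image(pixels, T):
    """Apply the fitted transform to every pixel of the second image."""
    return pixels @ T.T
```

Down-weighting points with low quality values keeps misassigned pixels (e.g. parallax errors at the lattice points) from skewing the global fit.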
Detection method and detection device, electronic device and computer readable storage medium
A detection method for an image acquisition device, a detection device for an image acquisition device, an electronic device and a computer-readable storage medium are disclosed. The image acquisition device includes a light source and a diffuser; the diffuser is configured to scatter light emitted by the light source, and the scattered light is irradiated on at least a partial area of a scene. The detection method includes: acquiring an image of the scene; determining an effective area of the image; and determining whether or not the diffuser is in an abnormal working state according to information of the effective area.
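The abnormal-state decision from the effective area might be implemented as a coverage-and-uniformity heuristic: a concentrated bright spot (diffuser not scattering) yields low coverage, while a working diffuser lights most of the scene evenly. The effective-area definition and all thresholds below are illustrative assumptions:

```python
import numpy as np

def diffuser_abnormal(image, min_coverage=0.5, max_rel_std=0.3):
    """Heuristic diffuser check based on the image's effective area.

    The effective area is taken here as pixels above 10% of the frame
    maximum. A working diffuser should light most of the scene
    (coverage) fairly uniformly (low relative standard deviation).
    """
    img = np.asarray(image, dtype=float)
    mask = img > 0.1 * img.max()
    if not mask.any():
        return True                       # no usable signal at all
    coverage = mask.mean()                # fraction of scene that is lit
    effective = img[mask]
    rel_std = effective.std() / (effective.mean() + 1e-8)
    return bool(coverage < min_coverage or rel_std > max_rel_std)
```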