Patent classifications
G06T7/529
ELECTRONIC DEVICE INCLUDING PROCESSING CIRCUIT FOR GENERATING DEPTH INFORMATION USING LUMINANCE DATA AND METHOD OF GENERATING DEPTH INFORMATION
Disclosed is an electronic device configured to generate depth information. The electronic device includes: a memory storing one or more instructions and image data; and at least one processing circuit configured to generate the depth information on the image data by executing the one or more instructions, wherein the at least one processing circuit is further configured to obtain luminance data of the image data, generate absolute depth data for the luminance data by using a first artificial neural network configured to extract disparity features, and generate the depth information based on the absolute depth data.
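The abstract above describes deriving luminance data from image data, extracting disparity features with a neural network, and converting the result to depth. A minimal, hypothetical sketch of that pipeline follows; the luminance weights are the standard ITU-R BT.601 coefficients, the stereo relation depth = f·B/disparity is textbook, and `extract_disparity` is a placeholder stand-in (a gradient magnitude) for the patent's first artificial neural network, included only to keep the sketch runnable.

```python
import numpy as np

def rgb_to_luminance(rgb):
    """Convert an H x W x 3 RGB array to luminance (ITU-R BT.601 weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

def disparity_to_depth(disparity, focal_length, baseline, eps=1e-6):
    """Textbook stereo relation: depth = focal_length * baseline / disparity."""
    return focal_length * baseline / np.maximum(disparity, eps)

def extract_disparity(luma):
    """Placeholder for the disparity-feature network: a gradient magnitude,
    offset so disparity stays positive. Not the patented model."""
    gy, gx = np.gradient(luma)
    return np.hypot(gx, gy) + 1.0

rgb = np.random.rand(4, 4, 3)          # dummy image data
luma = rgb_to_luminance(rgb)           # obtain luminance data
disparity = extract_disparity(luma)    # stand-in for the neural network
depth = disparity_to_depth(disparity, focal_length=500.0, baseline=0.06)
```

The focal length and baseline values here are arbitrary; in a real device they would come from camera calibration.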
Optical detection device of detecting a distance relative to a target object
An optical detection device for detecting a distance relative to a target object includes a substrate, an optical sensor, and a processor. The optical sensor is disposed on the substrate and adapted to capture an image of the target object. The processor is disposed on the substrate and electrically connected with the optical sensor. The processor is adapted to mark a first region and a second region within the image to acquire a first quantity of the first region and a second quantity of the second region, and to compare the first quantity with the second quantity to determine whether the distance has varied to meet a predefined condition.
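The region-comparison logic in the abstract above can be sketched as follows. The abstract does not specify what the "quantity" of a region is or what the predefined condition is, so both are assumptions here: the quantity is taken to be a count of bright pixels in the region, and the condition is a hypothetical ratio test between the two counts.

```python
import numpy as np

def region_quantity(image, region, threshold=128):
    """Count pixels at or above a brightness threshold inside a region.
    region = (row, col, height, width). The choice of 'bright-pixel count'
    as the quantity is an assumption, not taken from the patent."""
    r, c, h, w = region
    return int(np.count_nonzero(image[r:r + h, c:c + w] >= threshold))

def distance_condition_met(image, first_region, second_region, ratio=1.5):
    """Hypothetical predefined condition: the first region's quantity exceeds
    the second region's by a fixed ratio."""
    q1 = region_quantity(image, first_region)
    q2 = region_quantity(image, second_region)
    return q1 >= ratio * max(q2, 1)
```

A caller would mark the two regions within the captured image and evaluate `distance_condition_met` on each new frame.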
MACHINE LEARNING-BASED 2D STRUCTURED IMAGE GENERATION
Techniques are described for a multiple-phase process that uses machine learning (ML) models to produce a texturized version of an input image. During a first phase, using a pix2pix-based ML model, an automatically-generated image that depicts structured texture is generated based on an input image that visually identifies a plurality of image areas for the structured texture. During a second phase, a neural style transfer-based ML model is used to apply the style of a style image (e.g., a target image from the training dataset of the pix2pix-based ML model) to the texture image generated at the first phase (the content image) to produce a modified texture image. According to an embodiment, during a third phase, the generated texture image produced at the first phase and the modified texture image produced at the second phase are combined to produce a structured texture image with a moderated amount of detail.
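The third phase described above combines the phase-one texture image with the phase-two style-transferred image to moderate the amount of detail. The abstract does not say how the combination is performed; a simple per-pixel weighted blend, shown below as an assumption, is one plausible reading.

```python
import numpy as np

def moderate_detail(generated, stylized, alpha=0.5):
    """Blend the phase-one generated texture image with the phase-two
    style-transferred image. alpha=1.0 keeps only the generated texture,
    alpha=0.0 keeps only the stylized version; intermediate values
    moderate the level of detail. The blend itself is an assumption,
    not the patent's stated method."""
    return alpha * np.asarray(generated) + (1.0 - alpha) * np.asarray(stylized)
```

In practice the pix2pix output and the neural-style-transfer output would be aligned to the same resolution before blending.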
Surface imaging using high incident angle of light rays
A plurality of light sources placed at one end of a surface imaging system may generate and direct light rays to be incident onto a surface to be detected at high incident angles that are greater than or equal to a predetermined angle threshold. The predetermined angle threshold may be set to ensure that images of the surface to be detected having at least a predetermined degree of contrast are obtained. An image sensor placed at another end of the surface imaging system may collect light rays reflected from the surface to be detected to form a first image of the surface to be detected.
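The abstract above hinges on two checks: incident angles at or above a threshold, and images meeting a minimum contrast. A minimal sketch of both checks follows; the specific threshold values and the use of Michelson contrast as the contrast measure are assumptions, not taken from the patent.

```python
import numpy as np

def select_incident_angles(candidate_angles_deg, angle_threshold_deg=70.0):
    """Keep only the high incident angles at or above the predetermined
    threshold. The 70-degree default is an illustrative assumption."""
    return [a for a in candidate_angles_deg if a >= angle_threshold_deg]

def michelson_contrast(image):
    """Michelson contrast of a grayscale image: (Imax - Imin)/(Imax + Imin).
    One common contrast measure; the patent does not specify which is used."""
    i_max, i_min = float(image.max()), float(image.min())
    return (i_max - i_min) / (i_max + i_min + 1e-12)

def image_meets_contrast(image, min_contrast=0.3):
    """Check the captured image against a predetermined degree of contrast."""
    return michelson_contrast(image) >= min_contrast
```

A controller could iterate over candidate lighting angles and keep only captures whose contrast passes the check.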
Light field imaging system by projecting near-infrared spot in remote sensing based on multifocal microlens array
The present disclosure provides a light field imaging system that projects near-infrared spots in remote sensing based on a multifocal microlens array. The light field imaging system includes a near-infrared spot projection apparatus (100) and a light field imaging component (200). The near-infrared spot projection apparatus (100) is configured to scatter near-infrared spots onto a to-be-observed object to add texture information to a target image, and the light field imaging component (200) is configured to image target scene light rays carrying the additional texture information. The present disclosure can extend the target depth-of-field (DOF) detection range and, in particular, reconstruct the surface of a weak-texture object.