Patent classifications
G06T5/70
Image dehazing method, apparatus, and device, and computer storage medium
This application provides an image dehazing method, apparatus, and device, and a computer storage medium. The method includes: in response to obtaining an image dehazing instruction, acquiring a first image and a second image of a target scene captured at the same moment; calculating, based on a first pixel value of each pixel of the first image and a second pixel value of each pixel of the second image, haze density information of each pixel; generating an image fusion factor of each pixel according to the haze density information, the image fusion factor indicating a degree of fusion between the first image and the second image; and fusing the first image and the second image according to the image fusion factor to obtain a dehazed image.
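The steps above can be sketched as a per-pixel weighted blend. Note the haze-density proxy (normalized per-pixel difference between the two captures) and the identity mapping from density to fusion factor are illustrative assumptions, not the formulas claimed in the patent.

```python
import numpy as np

def fuse_by_haze_density(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered images of the same scene, weighted per pixel
    by an estimated haze density. Inputs are float arrays in [0, 1]."""
    # Assumed density proxy: the normalized absolute difference between
    # the two simultaneous captures.
    density = np.abs(img_a - img_b)
    density = density / (density.max() + 1e-8)
    # Assumed fusion factor: use the density directly, so hazier pixels
    # draw more heavily on the second image.
    alpha = density
    return (1.0 - alpha) * img_a + alpha * img_b
```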
IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
The image processing apparatus includes a processor. The processor is configured to: acquire a first image, which is obtained by performing first AI processing on a processing target image, and a second image, which is obtained without performing the first AI processing on the processing target image; and adjust for excess or deficiency of the first AI processing by combining the first image and the second image.
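One simple way to realize this combination is a linear blend between the AI-processed and unprocessed images; the `strength` parameter and the linear form are assumptions for illustration, not the method specified in the abstract.

```python
import numpy as np

def adjust_ai_strength(ai_img: np.ndarray, raw_img: np.ndarray,
                       strength: float = 0.7) -> np.ndarray:
    """Blend an AI-processed image with the unprocessed original.
    strength=1.0 keeps the full AI effect; 0.0 discards it entirely,
    letting excess processing be dialed back (or deficiency tolerated)."""
    return strength * ai_img + (1.0 - strength) * raw_img
```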
APPARATUS AND METHODS TO GENERATE DEBLURRING MODEL AND DEBLUR IMAGE
Described herein are a method and system for training a deblurring model and deblurring an image (e.g., an SEM image) of a patterned substrate using the deblurring model and depth data associated with multiple layers of the patterned substrate. The method includes obtaining, via a simulator using a target pattern as input, a simulated image of the substrate, the target pattern comprising a first target feature to be formed on a first layer and a second target feature to be formed on a second layer located below the first layer; determining, based on depth data associated with multiple layers of the substrate, edge range data for features of the substrate; and adjusting, using the simulated image and the edge range data associated with the target pattern as training data, parameters of a base model to generate the deblurring model used to deblur a captured image.
SYSTEMS AND METHODS FOR SPECTACLE REMOVAL AND VIRTUAL TRY-ON
A system includes a computing device including a processor communicatively coupled to a camera. The computing device is configured to, in response to receiving a request, capture an image via the camera and detect, within the image, a first plurality of locations of a first object. The computing device is further configured to segment the first plurality of locations of the first object by determining, for each location of the first plurality of locations, a likelihood the corresponding location includes a part of the first object. The computing device is configured to inpaint a second plurality of locations with an associated likelihood the corresponding location includes the part of the first object above a threshold value. The computing device is additionally configured to generate an augmented image by superimposing a selected second object over the image and display the augmented image on a user interface of the computing device.
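The segment-then-inpaint step can be illustrated as thresholding a per-location likelihood map and filling the selected locations. The fill rule here (median of the remaining background pixels) is a deliberately minimal stand-in; a real system would use a learned or diffusion-based inpainter.

```python
import numpy as np

def inpaint_by_threshold(image: np.ndarray, likelihood: np.ndarray,
                         threshold: float = 0.5) -> np.ndarray:
    """Replace pixels whose object likelihood exceeds the threshold.
    `likelihood` holds, per location, the probability that the location
    contains part of the detected object (e.g., spectacles)."""
    mask = likelihood > threshold
    out = image.copy()
    # Illustrative fill only: the median of the non-object pixels.
    out[mask] = np.median(image[~mask])
    return out
```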
Anomaly Detection System
An image analysis system including an image gathering unit that gathers a high-altitude image having multiple channels, and an image analysis unit that segments the high-altitude image into a plurality of equally sized tiles and determines an index value based on at least one channel of the image, wherein the image analysis unit identifies areas containing anomalies in each image.
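A minimal sketch of the tile-and-index pipeline follows. The abstract does not specify the index or the anomaly criterion, so the per-tile channel mean and the z-score test below are assumptions for illustration.

```python
import numpy as np

def tile_index_values(image: np.ndarray, tile: int = 32,
                      channel: int = 0) -> np.ndarray:
    """Segment an (H, W, C) image into equally sized tiles and compute a
    per-tile index value (here, simply the mean of one channel)."""
    h, w = image.shape[:2]
    h_t, w_t = h // tile, w // tile
    vals = np.empty((h_t, w_t))
    for i in range(h_t):
        for j in range(w_t):
            block = image[i * tile:(i + 1) * tile,
                          j * tile:(j + 1) * tile, channel]
            vals[i, j] = block.mean()
    return vals

def flag_anomalies(vals: np.ndarray, z: float = 2.0) -> np.ndarray:
    """Flag tiles whose index deviates from the mean by more than z std devs."""
    return np.abs(vals - vals.mean()) > z * vals.std()
```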
APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR COMBINING REAL-NUMBER-BASED AND COMPLEX-NUMBER-BASED IMAGES
The present disclosure relates to a real-number-based neural network operating in combination with a complex-number-based neural network to perform image processing (e.g., using phase-based medical images). In one embodiment, a method includes, but is not limited to, applying, to inputs of a first trained neural network trained to process real-number-based images, first image data generated from real-number-based measurements obtained by imaging a subject; applying, to inputs of a second trained neural network trained to process complex-number-based images, second image data generated from complex-number-based measurements obtained by imaging the subject; and combining a first output of the first trained neural network and a second output of the second trained neural network to produce a combined image, based on the first image data and the second image data.
ADAPTABLE AND HIERARCHICAL POINT CLOUD COMPRESSION
Systems and methods are provided for point cloud compression. An exemplary method includes: receiving point cloud data; quantizing the point cloud data to remove noise, producing quantized point cloud data; generating, using the quantized point cloud data, a plurality of multi-level tiles; reordering data within the multi-level tiles to optimize the compression rate, producing a plurality of reordered multi-level tiles; bit packing the reordered multi-level tiles to minimize the bits required to store per-tile header data, producing bit-packed multi-level tile data; and performing additional compression on the bit-packed multi-level tile data using a first compression algorithm.
GENERATION OF IMAGES WITH TOOTH COLOR DETERMINED USING DEPTH INFORMATION
A method includes determining depth values associated with a first set of pixel locations in a first image of a mouth. One or more functions are generated for one or more color channels based on intensities of the one or more color channels at the first set of pixel locations and the depth values associated with the first set of pixel locations. Image data comprising a new representation of the teeth is received, wherein the image data comprises a second set of pixel locations and new depth values associated with the second set of pixel locations. A new image is generated based on the image data and the one or more functions.
Spatially Varying Reduction of Haze in Images
Methods, systems, devices, and tangible non-transitory computer readable media for haze reduction are provided. The disclosed technology can include generating feature vectors based on an input image including points. The feature vectors can correspond to feature windows associated with features of different portions of the points. Based on the feature vectors and a machine-learned model, a haze thickness map can be generated. The haze thickness map can be associated with an estimate of haze thickness at each of the points. Further, the machine-learned model can estimate haze thickness associated with the features. A refined haze thickness map can be generated based on the haze thickness map and a guided filter. A dehazed image can be generated based on application of the refined haze thickness map to the input image. Furthermore, a color corrected dehazed image can be generated based on performance of color correction operations on the dehazed image.
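The final application step (removing haze given a per-pixel thickness map) can be sketched under the standard airlight image-formation model I = J(1 - h) + A·h; this model and the airlight constant are assumptions, and the machine-learned thickness estimation and guided-filter refinement are not reproduced here.

```python
import numpy as np

def dehaze(image: np.ndarray, thickness: np.ndarray,
           airlight: float = 1.0, eps: float = 1e-3) -> np.ndarray:
    """Invert the assumed haze model per pixel.
    image:     (H, W, C) hazy input in [0, 1]
    thickness: (H, W) refined haze thickness map in [0, 1)"""
    # Transmission = 1 - thickness; clamp to avoid division blow-up
    # where the haze is near-opaque.
    t = np.clip(1.0 - thickness, eps, 1.0)
    return (image - airlight * thickness[..., None]) / t[..., None]
```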
SYSTEMS AND METHODS FOR DISPLAYING AUTONOMOUS VEHICLE ENVIRONMENTAL AWARENESS
The disclosed computer-implemented method may include displaying vehicle environment awareness. In some embodiments, a visualization system may display an abstract representation of a vehicle's physical environment via a mobile device and/or a device embedded in the vehicle. For example, the visualization may use a voxel grid to represent the environment and may alter characteristics of shapes in the grid to increase their visual prominence when the sensors of the vehicle detect that an object is occupying the space represented by the shapes. In some embodiments, the visualization may gradually increase and reduce the visual prominence of shapes in the grid to create a soothing wave effect. Various other methods, systems, and computer-readable media are also disclosed.