Patent classifications
G06T5/001
SATURATION ENHANCEMENT METHOD AND DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
Disclosed is a saturation enhancement method, the method comprising: performing color space conversion on an input image to obtain color-converted color data; performing saturation expansion on the color-converted color data to obtain expanded color data; and performing Gamma preprocessing on the expanded color data to obtain processed color data. Also disclosed are a saturation enhancement device and a computer-readable storage medium.
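The pipeline in the abstract (color space conversion, saturation expansion, gamma preprocessing) can be illustrated with a minimal per-pixel sketch. This is not the patented algorithm; the gain and gamma parameters, and the choice of HSV as the converted color space, are assumptions for illustration only.

```python
import colorsys

def enhance_saturation(rgb, gain=1.3, gamma=2.2):
    """Illustrative sketch: convert to HSV, expand saturation, gamma-encode.

    `rgb` is a tuple of floats in [0, 1]; `gain` and `gamma` are
    hypothetical parameters, not values from the patent.
    """
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)       # color space conversion
    s = min(s * gain, 1.0)                       # saturation expansion (clipped)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)    # back to RGB
    # gamma preprocessing on the expanded color data
    return tuple(c ** (1.0 / gamma) for c in (r2, g2, b2))
```

A real implementation would vectorize this over the whole image and likely use a display-referred transfer function rather than a single scalar gamma.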
POINT CLOUD FEATURE ENHANCEMENT METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
The present disclosure relates to a point cloud feature enhancement method and apparatus, a computer device and a storage medium. The method includes: acquiring a three-dimensional point cloud comprising a plurality of input points; for each input point, performing feature aggregation on neighborhood point features of the input point to obtain a first feature of the input point; mapping the first feature to an attention point corresponding to the input point; performing feature aggregation on neighborhood point features of the attention point to obtain a second feature of the input point; and performing feature fusion on the first feature and the second feature of the input point to obtain a corresponding enhanced feature. The method improves the enhancement effect of point cloud features.
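The two-stage aggregation described above can be sketched with NumPy. This is a toy rendition under stated assumptions: k-nearest-neighbor max pooling stands in for the feature aggregation, a fixed linear offset stands in for the learned mapping to an attention point, and concatenation stands in for the feature fusion; `enhance_features` and its parameters are illustrative names, not from the patent.

```python
import numpy as np

def knn(points, query, k):
    """Indices of the k nearest points to `query`."""
    d = np.linalg.norm(points - query, axis=1)
    return np.argsort(d)[:k]

def enhance_features(points, feats, k=4):
    """Toy sketch of the two-stage aggregation: first feature from the
    input point's neighborhood, second feature from an attention point's
    neighborhood, then fusion by concatenation."""
    enhanced = []
    for i, p in enumerate(points):
        nbrs = knn(points, p, k)
        f1 = feats[nbrs].max(axis=0)               # first feature (aggregation)
        # map the first feature to an attention point (toy linear mapping;
        # the patent would use a learned mapping here)
        attn_pt = p + 0.1 * f1[:3]
        nbrs2 = knn(points, attn_pt, k)
        f2 = feats[nbrs2].max(axis=0)              # second feature (aggregation)
        enhanced.append(np.concatenate([f1, f2]))  # feature fusion
    return np.stack(enhanced)
```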
INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing apparatus includes a processor configured to: obtain plural images, each including any of plural objects; and determine, based on an analysis result for the plural objects in the plural images, which of two or more objects, among the plural objects, included in an image the image is to be corrected according to.
LEARNING DEVICE, DEPTH INFORMATION ACQUISITION DEVICE, ENDOSCOPE SYSTEM, LEARNING METHOD, AND PROGRAM
Provided are a learning device, a depth information acquisition device, an endoscope system, a learning method, and a program capable of efficiently acquiring a learning data set used in machine learning for depth estimation, and of implementing highly accurate depth estimation for an actually captured endoscope image.
The learning device includes a processor that performs: endoscope image acquisition processing of acquiring an endoscope image obtained by imaging a body cavity with an endoscope system; actual measurement information acquisition processing of acquiring actually measured first depth information corresponding to at least one measurement point in the endoscope image; imitation image acquisition processing of acquiring an imitation image that imitates an image of the body cavity imaged with the endoscope system; imitation depth acquisition processing of acquiring second depth information including depth information of one or more regions in the imitation image; and learning processing of causing a learning model to learn by using a first learning data set and a second learning data set.
SYSTEM AND METHOD FOR VISUAL ENHANCEMENT OF A SCENE DURING CAPTURE THEREOF
A system is disclosed. The system includes an image display system, a display device, a camera system, one or more control devices, and a control system. The display device is configured to display an image received from the image display system. The camera system is configured to capture the image displayed by the display device during a capture of a scene. The control system is communicably coupled to the image display system, the display device, the camera system and the one or more control devices. The control system comprises a processing circuit and is configured to automatically adjust settings of the image display system, the display device and the camera system. The control system is also configured to determine which of the adjustments results in the least degradation of the image, and to apply that adjustment.
SYSTEMS AND METHODS FOR AUTONOMOUS VISION-GUIDED OBJECT COLLECTION FROM WATER SURFACES WITH A CUSTOMIZED MULTIROTOR
Various embodiments of a vision-guided unmanned aerial vehicle (UAV) system to identify and collect foreign objects from the surface of a body of water are disclosed herein. A vision system and methodology have been developed to reduce reflections and glare from a water surface so that an object can be better identified for removal. A linearized polarization filter and a specularity-removal algorithm are used to eliminate excessive reflection and glare. A contour-based detection algorithm is implemented for detecting the targeted objects on the water surface. Further, the system includes a boundary-layer sliding mode control (BLSMC) methodology to minimize position and velocity errors between the UAV and the object in the presence of modeling and parameter uncertainties due to variation in a moving water surface.
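The specularity-removal step can be illustrated with a deliberately simple sketch: bright glare pixels are detected by an intensity threshold and replaced with the median of the non-glare pixels. This is a stand-in for, not a reproduction of, the patented specularity-removal algorithm; the threshold value and function name are assumptions.

```python
import numpy as np

def suppress_specularity(img, thresh=0.9):
    """Toy glare suppression: pixels brighter than `thresh` (assumed
    specular highlights) are replaced by the median of the remaining
    pixels. Real specularity removal works per-channel with a
    reflection model; this only conveys the idea."""
    out = img.astype(float).copy()
    mask = out > thresh                       # candidate glare pixels
    if (~mask).any():
        out[mask] = np.median(out[~mask])     # fill with diffuse estimate
    return out
```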
Systems and methods for image reconstruction
The present disclosure provides a system for image reconstruction. The system may obtain an initial image of a subject. The initial image may be generated based on scan data of the subject that is collected by an imaging device. The system may also generate a gradient image associated with the initial image. The system may further generate a target image of the subject by applying an image reconstruction model based on the initial image and the gradient image. The target image may have a higher image quality than the initial image.
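The role of the gradient image in the abstract can be sketched as follows. The gradient image is computed with finite differences; the "image reconstruction model" is represented by a caller-supplied function, since the abstract does not specify its form. The `sharpen` stand-in below is purely illustrative.

```python
import numpy as np

def gradient_image(img):
    """Gradient magnitude of a 2-D image via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def reconstruct(initial, model):
    """Apply an image reconstruction model to the initial image and its
    gradient image, as in the abstract. `model` is a placeholder for the
    (unspecified, likely learned) reconstruction model."""
    grad = gradient_image(initial)
    return model(initial, grad)

# Toy stand-in "model": boost the image where gradients are strong.
sharpen = lambda img, grad: img + 0.5 * grad
```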
Method and apparatus for selecting slide media image read location
The present disclosure is directed to a method and apparatus for locating a target location on a reaction cell and using the target location to perform an assay. In an example embodiment, a method of performing at least one assay includes obtaining at least one image of a fluid sample located on a reaction cell and creating a set of derivative data including a plurality of derivative data points based on the at least one image. The method also includes determining an image gradient data point for each of the plurality of derivative data points and determining a target location of the fluid sample in the reaction cell based on the image gradient data points. The method further includes performing at least one assay using the target location of the fluid sample in the reaction cell.
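The derivative-then-gradient logic for locating the read position can be shown on a one-dimensional intensity profile: take differences as the derivative data points, use their magnitudes as the image gradient data, and pick the location of the strongest edge. This is a minimal sketch, not the patented method; a real system works on 2-D slide images and applies smoothing first.

```python
import numpy as np

def find_target_location(profile):
    """Locate the strongest edge in a 1-D intensity profile.

    Returns the index of the largest gradient magnitude, standing in
    for the target read location of the fluid sample."""
    deriv = np.diff(profile.astype(float))   # derivative data points
    grad_mag = np.abs(deriv)                 # image gradient data points
    return int(np.argmax(grad_mag))          # target location
```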
Biometric feature reconstruction method, storage medium and neural network
A method, storage medium and neural network for reconstructing a biometric feature are provided. The method includes inputting an obtained partial texture image to the neural network and obtaining a predictive value of an entire texture image output by the neural network. In this solution, the neural network used to process the images includes a feature value layer: the plurality of partial texture images is converted to feature values at the technical level, so that composite calculation over the plurality of partial texture images is avoided at the application level. Because the entire texture image is never synthesized, data leakage and theft are avoided, and the security of the texture image analysis method is thus improved.
Evaluation value calculation device and electronic endoscope system
An electronic endoscope system includes: a plotting unit that plots pixel correspondence points, corresponding to the pixels that constitute an intracavitary color image having a plurality of color components, on a target plane according to the color components of those pixels, the target plane passing through the origin of a predetermined color space; an axis setting unit that sets a reference axis in the target plane based on the pixel correspondence points plotted on it; and an evaluation value calculating unit that calculates a prescribed evaluation value for the captured image based on the positional relationship between the reference axis and the pixel correspondence points.
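The reference-axis idea can be sketched numerically: plot 2-D color coordinates on the target plane, take the dominant direction of the plotted points (via SVD) as the reference axis, and derive an evaluation value from how the points sit relative to that axis. The use of SVD and of mean perpendicular distance as the evaluation value are assumptions for illustration; the patent defines its own axis-setting rule and evaluation formula.

```python
import numpy as np

def evaluation_value(plane_points):
    """Illustrative evaluation value from points on a target plane.

    `plane_points` is an (N, 2) array of pixel correspondence points.
    The reference axis is taken as the principal direction of the
    centered points; the evaluation value is the mean perpendicular
    distance of the points from that axis (an assumed choice)."""
    pts = plane_points - plane_points.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    axis = vt[0]                              # reference axis direction
    # perpendicular component of each point relative to the axis
    perp = pts - np.outer(pts @ axis, axis)
    return float(np.linalg.norm(perp, axis=1).mean())
```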