Patent classifications
G06T7/557
POWDER LEAKAGE MONITORING DEVICE AND POWDER LEAKAGE MONITORING METHOD
The invention discloses a powder leakage monitoring device and a powder leakage monitoring method. The powder leakage monitoring device comprises a light field camera, a 3D PTZ and a computer. The light field camera records original light field images of the monitored area; the 3D PTZ beneath the light field camera rotates in a set direction to adjust the camera's shooting angle; and the computer, connected to both the light field camera and the 3D PTZ, generates refocused images from the original light field images and determines, from the refocused images and the shooting angle, the spatial coordinates of the powder leakage point and the hazard range of the powder leakage within the monitored area. The invention thereby increases both the range and the accuracy of powder leakage monitoring.
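The refocused images mentioned in the abstract are typically produced from a light field by shift-and-add synthesis: each sub-aperture view is shifted in proportion to its aperture offset and the views are averaged. The sketch below illustrates that generic technique only; it is not the patented implementation, and the array layout and the focus parameter `alpha` are assumptions.

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    subviews: array of shape (U, V, H, W), one grayscale image per
              sub-aperture position (u, v).
    alpha:    focus parameter; pixel shift per unit of aperture offset.
    """
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # shift each view toward the chosen focal plane, then average
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(subviews[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` produces a focal stack from a single exposure, which is what lets a light field camera localize a leakage point in depth after the fact.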
METHOD FOR DIGITALLY STAINING CELLS
The invention relates to a method for digitally staining a cell and/or a medical preparation, the method comprising the following steps: determining three-dimensional information of a cell and/or of a medical preparation by means of an analyser for analysing a medical sample, the analyser comprising an apparatus for determining the three-dimensional information of the cell and/or of the medical preparation; digitally staining the cell and/or the medical preparation according to a predetermined correlation between the three-dimensional information of the cell and/or of the medical preparation and the staining of a corresponding cell and/or medical preparation and/or cellular and/or sub-cellular structures of the cell and/or of the medical preparation by means of a staining protocol; representing the digitally stained cell and/or the preparation, the representation involving a predetermined defocus region, and regions of the cell and/or of the preparation being represented by means of different digital staining in the area of the defocus region as corresponding modulations of colour intensities and/or as mixed colour.
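The core of the method is a predetermined correlation between measured three-dimensional information and stain colours, with colour intensities modulated inside a defocus region. A minimal sketch of that idea, assuming a per-pixel scalar feature derived from the 3-D measurement, a hypothetical lookup table `STAIN_LUT`, and a boolean defocus mask (none of these names come from the patent):

```python
import numpy as np

# Hypothetical staining protocol: map a structure label and a per-pixel
# feature from the 3-D measurement to an RGB stain colour.
STAIN_LUT = {
    "nucleus":   (0.26, 0.22, 0.62),   # haematoxylin-like blue-violet
    "cytoplasm": (0.92, 0.55, 0.71),   # eosin-like pink
}

def digital_stain(feature, labels, defocus_mask, defocus_weight=0.4):
    """feature: (H, W) scalar map in [0, 1]; labels: (H, W) array of
    structure names; defocus_mask: (H, W) bool, True where defocused."""
    h, w = feature.shape
    rgb = np.zeros((h, w, 3))
    for name, colour in STAIN_LUT.items():
        mask = labels == name
        rgb[mask] = np.asarray(colour) * feature[mask, None]
    # In the defocus region, modulate colour intensity rather than drop it,
    # as the abstract describes.
    rgb[defocus_mask] *= defocus_weight
    return rgb
```

In practice the correlation would be learned or calibrated against chemically stained reference preparations rather than hard-coded.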
SYSTEM AND METHOD FOR IDENTIFYING ITEMS
In variants, a method for item recognition can include: optionally calibrating a sampling system, determining visual data using the sampling system, determining a point cloud, determining region masks based on the point cloud, generating a surface reconstruction for each item, generating image segments for each item based on the surface reconstruction, and determining a class identifier for each item using the respective image segments.
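The steps above form a point-cloud-to-classifier pipeline. The skeleton below sketches that flow under loud assumptions: the table-plane foreground test, the toy clustering, and the height-based classifier are placeholders standing in for the patent's region masking, surface reconstruction and class identification stages.

```python
import numpy as np

def determine_region_masks(points, z_table=0.0, eps=1e-3):
    """Split an (N, 3) point cloud into per-item regions: points above the
    table plane are foreground, grouped here by a toy coarse-x cluster id."""
    fg = points[points[:, 2] > z_table + eps]
    ids = np.floor(fg[:, 0]).astype(int)        # stand-in for real clustering
    return {i: fg[ids == i] for i in np.unique(ids)}

def classify(region):
    """Stand-in class identifier: uses region height as the only feature."""
    return "tall" if region[:, 2].max() > 0.5 else "short"

def recognize(points):
    """Region masks -> per-item class identifiers."""
    return {i: classify(r) for i, r in determine_region_masks(points).items()}
```

A real system would replace the clustering with instance segmentation on the point cloud and the classifier with one operating on the per-item image segments.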
Light field imaging system by projecting near-infrared spot in remote sensing based on multifocal microlens array
The present disclosure provides a light field imaging system that projects near-infrared spots for remote sensing based on a multifocal microlens array. The light field imaging system includes a near-infrared spot projection apparatus (100) and a light field imaging component (200). The near-infrared spot projection apparatus (100) is configured to scatter near-infrared spots onto the object to be observed, adding texture information to the target image, and the light field imaging component (200) is configured to image the target-scene light rays carrying the additional texture information. The present disclosure can extend the target depth-of-field (DOF) detection range and, in particular, reconstruct the surface of a weak-texture object.
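A multifocal microlens array yields several images of the scene focused at different depths; one generic way to exploit this for an extended DOF is to fuse the stack by a per-pixel sharpness measure. The sketch below shows that standard focus-fusion step only; the Laplacian focus measure and the function names are assumptions, not the disclosed design.

```python
import numpy as np

def laplacian_energy(img):
    """Simple focus measure: squared discrete Laplacian response."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def fuse_focal_stack(stack):
    """stack: (N, H, W) images, one per microlens focal length.
    Returns an all-in-focus image by picking the sharpest layer per pixel."""
    energy = np.stack([laplacian_energy(im) for im in stack])
    best = np.argmax(energy, axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The projected near-infrared spots matter here because a sharpness measure like this needs texture to respond to; on a weak-texture surface the added spots supply it.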
Systems and methods for an improved camera system using a graded lens and filters to estimate depth
Systems, methods, and other embodiments described herein relate to an improved camera system that uses directional optics to estimate the depth of grayscale and color images. In one embodiment, the camera system includes a graded lens that receives light associated with a scene and resolves multiple angles of the light according to parameters of the graded lens. The camera system also includes a detector that senses the light from the graded lens per pixel, integrating multiple views of the scene into a single image from which depth of objects is estimated; the single image includes data for overlapping views of the objects whose angles are resolved in association with the lens parameters.
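When optics resolve two angular views of the same scene point with a pixel disparity d, a pinhole model relates disparity to depth as Z = f·B / d (f the focal length, B the effective baseline). The sketch below is the generic correspondence-plus-triangulation step under that assumption; the SSD search, the 1-D simplification, and the f and B values are all illustrative, not the patent's method.

```python
import numpy as np

def disparity_1d(row_l, row_r, max_d=8):
    """Best integer shift aligning row_r onto row_l, by mean SSD."""
    errs = [np.sum((row_l[d:] - row_r[:len(row_r) - d]) ** 2)
            / (len(row_l) - d) for d in range(max_d)]
    return int(np.argmin(errs))

def depth_from_disparity(d, f=8e-3, baseline=5e-4):
    """Pinhole-model depth in metres; infinite depth at zero disparity."""
    return f * baseline / d if d > 0 else float("inf")
```

A full system would run this search densely over 2-D patches and per colour channel, but the geometry is the same.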