Patent classifications
G06V10/60
Systems, methods and apparatus for autonomous diagnostic verification of optical components of vision-based inspection systems
Methods of autonomous diagnostic verification and detection of defects in the optical components of a vision-based inspection system are provided. The method includes illuminating a light panel with a first light intensity pattern, capturing a first image of the first light intensity pattern with a sensor, illuminating the light panel with a second light intensity pattern different from the first light intensity pattern, capturing a second image of the second light intensity pattern with the sensor, comparing the first image and the second image to generate a comparison of images, and identifying defects in the light panel or the sensor based upon the comparison of images. Systems adapted to carry out the methods are provided, as are other aspects.
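The compare-and-flag step described above can be sketched as follows. This is a minimal illustration only; the function name, the list-of-lists image representation, and the relative threshold are assumptions, not details taken from the patent. The idea is that a healthy pixel should respond strongly to the commanded change between the two illumination patterns, while a stuck or occluded pixel shows little change:

```python
def find_panel_defects(img_a, img_b, rel_threshold=0.2):
    """Compare images captured under two different illumination patterns.

    img_a, img_b: 2-D lists of pixel intensities. Pixels whose response
    to the pattern change is small relative to the largest observed
    change are flagged as potential panel or sensor defects.
    Returns a list of (row, col) coordinates.
    """
    diffs = [[abs(a - b) for a, b in zip(row_a, row_b)]
             for row_a, row_b in zip(img_a, img_b)]
    peak = max(max(row) for row in diffs)
    return [(y, x) for y, row in enumerate(diffs)
            for x, d in enumerate(row) if d < rel_threshold * peak]
```

For example, if the panel switches every pixel from dark to bright between the two patterns, a pixel that stays dark in both captures is returned as a defect candidate.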
Optical encoder capable of identifying absolute positions
The present disclosure relates to an optical encoder configured to provide precise coding reference data through feature-recognition technology. To apply the present disclosure, it is not necessary to provide a particular dense pattern on the working surface; the precise coding reference data can instead be generated by detecting surface features of the working surface itself.
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure discloses an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly includes approaching control over a target position and feedback control over environment information.
According to the method, under the complex illumination condition, weighted fusion is conducted on visible-light and depth images of a preselected region, identification and positioning of a target object are completed by a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. In addition, the pose of the mechanical arm is adjusted according to contact-force information from a sensor module, the external environment, and the target object. Finally, visual information and haptic information of the target object are fused, and the optimal grabbing pose and an appropriate grabbing force for the target object are selected.
By adopting the method, object positioning precision and grabbing accuracy are improved, collision damage to and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
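The weighted-fusion step for the visible-light and depth channels can be sketched per pixel. The weighting scheme below is an assumption for illustration (the patent does not disclose its weights): the visible channel dominates when it is well exposed, and the depth channel takes over when the visible pixel is very dark or very bright, which is the failure mode under complex illumination:

```python
def fuse_pixel(visible, depth):
    """Fuse one visible-light intensity (0..1) with one normalized
    depth value (0..1).

    The visible-channel weight peaks at mid-grey exposure and falls to
    zero at the extremes, so badly lit pixels lean on the depth channel.
    """
    w = 1.0 - 4.0 * (visible - 0.5) ** 2
    return w * visible + (1.0 - w) * depth


def fuse_images(visible, depth):
    """Apply the per-pixel fusion over two aligned 2-D images."""
    return [[fuse_pixel(v, d) for v, d in zip(row_v, row_d)]
            for row_v, row_d in zip(visible, depth)]
```

The fused image would then be passed to the detection network for identification and positioning of the target object.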
Method and apparatus of active identity verification based on gaze path analysis
Disclosed herein are a method and apparatus for active identity verification based on gaze path analysis. The method may include extracting a face image of a user, extracting the gaze path of the user based on the face image, verifying the identity of the user based on the gaze path, and determining whether the face image is authentic.
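The verification step — comparing an extracted gaze path against an enrolled one — can be sketched as follows. This is a simplification under stated assumptions: gaze paths are represented as sequences of (x, y) fixation points, and plain point-wise distance stands in for whatever matching model the patent actually uses (a real system would more likely use elastic alignment such as dynamic time warping):

```python
import math


def verify_gaze_path(observed, enrolled, tolerance=0.5):
    """Accept the user when the mean point-wise Euclidean distance
    between the observed and enrolled gaze paths is within tolerance.

    observed, enrolled: sequences of (x, y) fixation points.
    """
    n = min(len(observed), len(enrolled))
    if n == 0:
        return False
    mean_dist = sum(math.dist(o, e)
                    for o, e in zip(observed[:n], enrolled[:n])) / n
    return mean_dist <= tolerance
```

Because the path is produced by live eye movement, a matching path also serves as evidence that the face image is authentic rather than a static photograph.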
METHOD OF DETERMINING IMAGE QUALITY IN DIGITAL PATHOLOGY SYSTEM
Disclosed is an image quality evaluation method for a digital pathology system according to the present invention. The image quality evaluation method includes receiving a digital slide image by an image quality evaluation unit; dividing the digital slide image into a plurality of blocks; analyzing the plurality of blocks to extract a foreground; calculating blur for the extracted foreground; calculating brightness distortion for the extracted foreground; calculating contrast distortion for the extracted foreground; and evaluating the overall quality of the digital slide image using the blur, the brightness distortion, and the contrast distortion.
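A per-block quality score along these lines can be sketched as follows. The concrete choices here — Laplacian variance as a blur proxy, nominal brightness/contrast targets, and the multiplicative combination — are illustrative assumptions, not the patent's actual formulas:

```python
import statistics


def block_quality(block, target_mean=0.5, target_std=0.2):
    """Quality score in [0, 1] for one foreground block of a grayscale
    slide image (pixel values in 0..1).

    Blur is proxied by the variance of a discrete Laplacian over the
    block's interior (sharp tissue has high variance); brightness and
    contrast distortion are the deviations of the block's mean and
    standard deviation from nominal targets.
    """
    h, w = len(block), len(block[0])
    lap = [block[y - 1][x] + block[y + 1][x]
           + block[y][x - 1] + block[y][x + 1] - 4 * block[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    sharpness = statistics.pvariance(lap) if lap else 0.0
    pixels = [p for row in block for p in row]
    brightness_err = abs(statistics.mean(pixels) - target_mean)
    contrast_err = abs(statistics.pstdev(pixels) - target_std)
    return (min(1.0, sharpness * 50)
            * max(0.0, 1.0 - brightness_err)
            * max(0.0, 1.0 - contrast_err))
```

Averaging this score over the foreground blocks (background blocks, which are near-white on a pathology slide, would be skipped) yields an overall slide-quality estimate in the spirit of the method described.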
Authentication of a Physical Credential
Aspects described herein may provide detection of a physical characteristic of a credential, thereby allowing for authentication of the credential. According to some aspects, these and other benefits may be achieved by detecting the physical characteristic of the credential. An image of the credential may be received. An optical characteristic of a secure feature of the credential may be determined. An expected optical characteristic of the secure feature may be determined based on known properties of the secure feature. A determination as to whether the credential is authentic may be based on a comparison of the determined optical characteristic of the secure feature to the expected optical characteristic.
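The final comparison step can be sketched as below. The representation is an assumption for illustration: the measured and expected optical characteristics are modeled as vectors of samples (e.g. reflectance at several wavelengths), and authenticity is accepted when every sample falls within a relative tolerance of the expected value for a genuine secure feature:

```python
def is_authentic(measured, expected, rel_tolerance=0.05):
    """Compare a measured optical characteristic of a secure feature
    against the expected characteristic for a genuine credential.

    measured, expected: equal-length sequences of samples. Returns True
    only when every sample is within `rel_tolerance` of its expected
    value (relative to that value's magnitude).
    """
    if len(measured) != len(expected):
        return False
    return all(abs(m - e) <= rel_tolerance * abs(e)
               for m, e in zip(measured, expected))
```

In practice the tolerance would be calibrated from the known optical properties of the secure feature and the imaging conditions.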