Patent classifications
G06V10/147
Sensor-based Bare Hand Data Labeling Method and System
A sensor-based bare hand data labeling method and system are provided. The method comprises: performing device calibration processing on a depth camera and on one or more sensors respectively preset at one or more specified positions of a bare hand, so as to acquire coordinate transformation data; collecting a depth image of the bare hand by the depth camera, and collecting 6DoF data of one or more bone points; acquiring, based on the 6DoF data and the coordinate transformation data, three-dimensional position information of a preset number of bone points; determining two-dimensional position information of the preset number of bone points on the depth image based on the three-dimensional position information of the preset number of bone points; and labeling joint information on all of the bone points in the depth image according to the two-dimensional position information and the three-dimensional position information.
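The abstract describes transforming sensor 6DoF data into the depth camera's frame and then projecting 3D bone points onto the depth image. A minimal sketch of that two-step geometry, assuming a standard pinhole camera model and a rigid calibration transform (the function names, intrinsics tuple, and rotation/translation representation are illustrative, not from the patent):

```python
import numpy as np

def transform_to_camera_frame(points_sensor, rotation, translation):
    """Apply the calibration transform (rotation matrix + translation vector)
    taking sensor-frame bone points into the depth camera frame."""
    return np.asarray(points_sensor, dtype=float) @ np.asarray(rotation).T + np.asarray(translation)

def project_bone_points(points_3d, intrinsics):
    """Project 3D bone points (camera frame, metres) to 2D pixel coordinates
    on the depth image via the pinhole model."""
    fx, fy, cx, cy = intrinsics
    pts = np.asarray(points_3d, dtype=float)
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)
```

Each labeled joint then carries both its 3D position and the corresponding 2D pixel location, as the abstract's final labeling step requires.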
SECURELY EXCHANGING INFORMATION BETWEEN A MEDICAL DEVICE AND A MOBILE COMPUTING DEVICE USING VISUAL INDICATORS
A medical system is provided. The medical system includes a medical device and a mobile computing device. The medical device includes at least one physiologic sensor configured to acquire physiological signals from a patient, at least one processor coupled to the at least one physiologic sensor, and at least one optical code encoded with encrypted data. The mobile computing device includes a camera and one or more processors coupled to the camera and configured to acquire one or more images of the at least one optical code, decode the one or more images of the at least one optical code to generate a copy of the encrypted data, decrypt the copy of the encrypted data to generate decrypted data, and process the decrypted data to establish an operable connection between the mobile computing device and the medical device.
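The pairing flow in this abstract is: image the optical code, decode it to recover encrypted bytes, decrypt, then use the result to establish the connection. A minimal sketch of the decode-and-decrypt stage, assuming a base64-encoded payload carrying JSON connection parameters; the XOR keystream here is a deliberately simplified placeholder for whatever cipher the device actually uses (e.g. AES), and all names are hypothetical:

```python
import base64
import hashlib
import json

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Placeholder symmetric cipher for illustration only; a real device
    would use an authenticated cipher such as AES-GCM."""
    keystream = hashlib.sha256(key).digest()
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(data))

def pair_from_optical_code(code_text: str, shared_key: bytes) -> dict:
    """Decode the scanned optical code, decrypt its payload, and return
    the connection parameters for the medical device."""
    encrypted = base64.b64decode(code_text)
    decrypted = xor_stream(encrypted, shared_key)
    return json.loads(decrypted)
```

The mobile app would pass the returned parameters (e.g. a device identifier and port) to its transport layer to open the operable connection.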
Photoelectric fingerprint identification apparatus, terminal, and fingerprint identification method
A photoelectric fingerprint identification apparatus, a terminal, and a fingerprint identification method are provided. The apparatus includes: a light-emitting unit, where the light-emitting unit generates at least a first light signal and a second light signal; a photoelectric fingerprint sensor, where the photoelectric fingerprint sensor includes a first sensing region and a second sensing region that do not overlap each other, and the first sensing region is covered with an infrared filter; an image detection unit, configured to detect reflected light energy of the first sensing region to obtain fingerprint information; and a living body detection unit, configured to detect reflected light energy of the second sensing region to obtain living body detection information.
Mobile phone-based miniature microscopic image acquisition device and image stitching and recognition methods
A mobile phone-based miniature microscopic image acquisition device, and image stitching and recognition methods are provided. The acquisition device comprises a support, wherein a mobile phone fixing table is provided on the support. A microscope head is provided below a camera of a mobile phone. A slide holder is provided below the microscope head, and a lighting source is provided below the slide holder. A scanning movement is performed between the slide holder and the microscope head along X and Y axes, so that images of a slide are acquired into the mobile phone. The slide sample images acquired into the mobile phone can be stitched and recognized, and can be uploaded to the cloud to be processed by cloud AI, thereby significantly improving the accuracy and efficiency of cell recognition, greatly reducing the medical cost, and enabling more remote medical institutions to apply such technology for diagnosis.
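The X/Y scanning described above produces overlapping tiles whose relative offsets must be recovered before stitching. The abstract does not specify the stitching algorithm; one common choice for translation-only tiles is phase correlation, sketched here (the function name and image shapes are illustrative):

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation of img_b relative to img_a
    via phase correlation, assuming the tiles differ by a pure shift."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12          # normalize to keep only phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    if dy > h // 2:                          # wrap shifts into [-N/2, N/2)
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

With the pairwise shifts known, tiles can be pasted into a common canvas to form the full slide image before recognition or cloud upload.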
System and method for adaptive object-oriented sensor fusion for environmental mapping
The present disclosure relates to a mapping system adapted for detecting objects in an environmental scene, by scanning an environmental scene with propagating energy, and receiving reflected energy back from objects present in the environmental scene, in a prioritized manner, for later use. The system may comprise an imaging subsystem which includes a detection and ranging subsystem for initially identifying primitive objects in the environmental scene. The imaging subsystem may also include an identification and mapping subsystem for prioritizing the primitive objects for further scanning and analysis, to ultimately identify one or more of the primitive objects as one or more abstract objects. An environmental model, updated in real time, is used to maintain a map of the primitive objects and the known abstract objects within the environmental scene, as new primitive objects and new abstract objects are obtained with repeated scans of the environmental scene.
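The identification and mapping subsystem prioritizes primitive objects for further scanning. A minimal sketch of such a scheduler using a priority queue, assuming lower scores mean higher scanning priority; the class and field names are hypothetical, not taken from the disclosure:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PrimitiveObject:
    """A coarsely detected object awaiting further scanning.
    Only `priority` participates in ordering (lower = scan sooner)."""
    priority: float
    object_id: int = field(compare=False)
    position: tuple = field(compare=False)

class ScanScheduler:
    """Maintain the set of primitive objects and yield the highest-priority
    one for the next focused scan."""
    def __init__(self):
        self._queue = []

    def add(self, obj: PrimitiveObject):
        heapq.heappush(self._queue, obj)

    def next_to_scan(self) -> PrimitiveObject:
        return heapq.heappop(self._queue)
```

Objects promoted to abstract objects after focused scans would be moved into the environmental model, while new coarse detections re-enter the queue on each scan cycle.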
Non-same camera based image processing apparatus
The present invention provides an image processing apparatus comprising: a first camera obtaining a true-color image by capturing a subject; a second camera spaced apart from the first camera and obtaining an infrared image by capturing the subject; and a control unit connected to the first camera and the second camera, wherein the control unit matches the true-color image and the infrared image and obtains three-dimensional information of the subject by using the matched infrared image in a region corresponding to the matched true-color image and a valid pixel.
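Once the infrared image is matched to the true-color image, 3D information is obtained only at valid pixels. A minimal sketch of that last step, assuming the matched infrared image yields a depth map registered to the color image and a boolean validity mask, with a pinhole back-projection (intrinsics tuple and function name are illustrative):

```python
import numpy as np

def backproject_valid(depth, intrinsics, valid_mask):
    """Back-project only the valid depth pixels to 3D camera coordinates.
    depth: HxW depth map registered to the true-color image.
    valid_mask: HxW boolean array marking pixels with usable depth."""
    fx, fy, cx, cy = intrinsics
    v, u = np.nonzero(valid_mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # one (x, y, z) row per valid pixel
```

Restricting the computation to the valid mask mirrors the abstract's condition that 3D information is derived from the matched region and valid pixels only.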
Vehicular control system with traffic lane detection
A vehicular control system includes a forward viewing camera disposed at an in-cabin side of a windshield of a vehicle and viewing forward of the vehicle. Road curvature of a road along which the vehicle is traveling is determined responsive at least in part to processing by an image processor of image data captured by the camera. Responsive at least in part to processing of captured image data, a traffic lane of the road along which the vehicle is traveling is determined. Upon approach of the vehicle to a curve in the road, speed of the vehicle is reduced to a reduced speed for traveling around the curve in the road at least in part responsive to at least one selected from the group consisting of (a) processing of image data captured by the forward viewing camera and (b) data relevant to a current geographical location of the equipped vehicle.
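The abstract's speed reduction on curve approach can be grounded in the standard lateral-acceleration constraint v = sqrt(a_lat * R). A minimal sketch under that assumption; the comfort limit of 2.5 m/s² and both function names are illustrative choices, not from the patent:

```python
import math

def curve_speed_limit(radius_m: float, max_lateral_accel: float = 2.5) -> float:
    """Maximum speed (m/s) around a curve of the given radius such that
    lateral acceleration v^2 / R stays within the comfort limit."""
    return math.sqrt(max_lateral_accel * radius_m)

def target_speed(current_speed: float, radius_m: float) -> float:
    """Reduce speed only when the upcoming curve requires it; the radius
    would come from camera-based curvature estimation or map data."""
    return min(current_speed, curve_speed_limit(radius_m))
```

The radius input corresponds to the two sources the claim names: curvature derived from the forward camera's image data, or map data tied to the vehicle's current geographic location.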