Patent classifications
G06T2207/10012
GROUND ENGAGING TOOL WEAR AND LOSS DETECTION SYSTEM AND METHOD
An example wear detection system receives a plurality of images from a plurality of sensors associated with a work machine. Individual sensors of the plurality of sensors have respective fields-of-view different from those of the other sensors. The wear detection system identifies a first region of interest and a second region of interest associated with at least one ground engaging tool (GET). The wear detection system determines a first set of image points and a second set of image points for the at least one GET based on geometric parameters associated with the GET, and derives a GET measurement from those image points. The wear detection system then determines a wear level or loss for the at least one GET based on the GET measurement.
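As a rough illustration of the wear/loss decision, the sketch below measures a tooth's apparent length between two image points and compares it to a nominal length from the geometric parameters. The function name, point layout, and loss threshold are assumptions for illustration, not details from the patent.

```python
import math

# Hypothetical sketch: wear fraction and loss flag for one ground engaging
# tool (GET), given two image points bounding the tooth. The 20% loss
# threshold is an assumed value.

def get_wear_level(tip, base, nominal_len_px, loss_ratio=0.2):
    """Estimate wear from two image points on a GET.

    tip, base: (x, y) image points bounding the tooth.
    nominal_len_px: expected tooth length in pixels, from geometric parameters.
    Returns (wear_fraction, is_lost).
    """
    measured = math.dist(tip, base)
    wear = max(0.0, 1.0 - measured / nominal_len_px)
    # Treat a tooth far shorter than nominal as lost, not merely worn.
    return wear, measured < loss_ratio * nominal_len_px

wear, lost = get_wear_level((10, 10), (10, 70), nominal_len_px=100)
# measured span is 60 px of a nominal 100 px: worn but not lost
```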
DOCUMENT AUTHENTICITY VERIFICATION IN REAL-TIME
A method for determining the authenticity of a document in real time is disclosed. The method, performed by a processor, includes receiving image data of a document. The image data corresponds to at least two images of the document taken simultaneously using at least two cameras. The method includes analyzing the image data to determine a plurality of measurements corresponding to the document along three dimensions. The method includes determining a thickness at a plurality of location points on the document based on the plurality of measurements, and determining the authenticity of the document in real time based on the determined thickness of the document at the plurality of location points.
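A minimal sketch of the final step, assuming thickness is recovered as the gap between the document surface and its backing at each sampled point, and that authenticity means every point falls within a tolerance of an expected thickness. The expected value and tolerance are illustrative assumptions.

```python
# Illustrative sketch of the per-point thickness check. The expected
# thickness (0.75 mm) and tolerance are assumed values, not from the patent.

def thickness_from_stereo(depth_to_surface, depth_to_backing):
    """Per-point thickness as the gap between backing and document surface."""
    return [b - s for s, b in zip(depth_to_surface, depth_to_backing)]

def is_authentic(thicknesses_mm, expected_mm=0.75, tol_mm=0.05):
    """A document passes only if every sampled thickness is within tolerance."""
    return all(abs(t - expected_mm) <= tol_mm for t in thicknesses_mm)

t = thickness_from_stereo([100.00, 100.01], [100.76, 100.75])
ok = is_authentic(t)  # both sampled points are close to 0.75 mm
```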
SYSTEM AND METHOD FOR DETERMINATION OF A 3D INFORMATION AND OF A MODIFICATION OF A METALLURGICAL VESSEL
Method, imaging system (5), data processing device (60), and system (10) for determining 3D information (90), especially a point cloud (80), a 3D surface reconstruction (81), or a 3D object (82), of an inner part (55) of a metallurgical vessel (50), or of a modification thereof. The method comprises the steps of: providing (100) a metallurgical vessel (50); capturing (110) a first optical image (21) of at least one first inner part (51) of the metallurgical vessel (50), from a first imaging device position (22) outside the metallurgical vessel (50), with a first optical axis (23), by a first imaging device (20); capturing (120) a second optical image (31) of at least one second inner part (52) of the metallurgical vessel (50), from a second imaging device position (32) outside the metallurgical vessel (50), with a second optical axis (33), by a second imaging device (30); and calculating (130) 3D information (90), such as a point cloud (80), a 3D surface reconstruction (81), or a 3D object (82), of at least one inner part (55) of the metallurgical vessel (50) from at least the first optical image (21) and the second optical image (31), wherein the first optical image (21) is captured from a first fixed imaging device position (22) with a first fixed optical axis (23), and the second optical image (31) is captured from a second fixed imaging device position (32) with a second fixed optical axis (33).
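Two fixed viewpoints with known optical axes permit classical two-view triangulation. The sketch below recovers one 3D point as the midpoint of the shortest segment between the two viewing rays; the camera positions and ray directions are illustrative, not taken from the patent.

```python
# Minimal two-view triangulation sketch, one point per ray pair.
# Ray i is p_i + t * d_i, with p_i the imaging device position.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1+t*d1 and p2+s*d2."""
    r = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

pt = triangulate_midpoint((0, 0, 0), (1, 0, 1), (2, 0, 0), (-1, 0, 1))
# these two rays cross at (1, 0, 1)
```

Repeating this over many matched pixel pairs yields the point cloud; surface reconstruction then fits geometry to those points.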
PROCESSING DEVICE
Erroneous detection due to erroneous parallax measurement is suppressed, so that a step present on a road can be detected accurately. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle. The processing device includes: a stereo matching unit 200 that measures the parallax of the pair of images and generates a parallax image; a step candidate extraction unit 300 that extracts a step candidate of the road on which the vehicle travels from the parallax image generated by the stereo matching unit 200; a line segment candidate extraction unit 400 that extracts a line segment candidate from the images acquired by the stereo camera unit 100; an analysis unit 500 that collates the step candidate extracted by the step candidate extraction unit 300 with the line segment candidate extracted by the line segment candidate extraction unit 400, and analyzes the validity of the step candidate based on the collation result and the inclination of the line segment candidate; and a three-dimensional object detection unit 600 that detects a step present on the road based on the analysis result of the analysis unit 500.
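A hedged sketch of the collation performed by the analysis unit 500: a parallax step candidate is kept only if some image line segment lies nearby and has a plausible inclination (road steps such as curbs tend to appear as near-horizontal edges). The distance and inclination thresholds are illustrative assumptions.

```python
import math

# Toy collation: keep a step candidate (given by its image row) only if a
# nearby line segment is close to horizontal. Thresholds are assumed values.

def validate_step(step_y, segments, max_gap_px=5, max_incline_deg=15):
    """segments: list of ((x1, y1), (x2, y2)) image line segments."""
    for (x1, y1), (x2, y2) in segments:
        incline = math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
        mid_y = (y1 + y2) / 2
        if abs(mid_y - step_y) <= max_gap_px and incline <= max_incline_deg:
            return True   # collation succeeded: candidate is valid
    return False          # likely a parallax artefact: suppress it
```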
SYSTEMS AND METHODS FOR IMAGE PROCESSING BASED ON OPTIMAL TRANSPORT AND EPIPOLAR GEOMETRY
Systems and methods for image processing determine a registration map between a first image of a scene and a second image of the scene. They solve an optimal transport (OT) problem to produce the registration map by optimizing a cost function that determines a minimum of a ground cost distance between the first and the second images, modified with an epipolar geometry-based regularizer that includes a distance quantifying the violation of an epipolar geometry constraint between corresponding points defined by the registration map. The ground cost compares features extracted from the first image with features extracted from the second image.
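The regularized cost can be illustrated on a toy problem: each candidate match pays a feature distance plus a penalty proportional to the epipolar residual |x2ᵀ F x1|. The sketch below brute-forces the minimizing assignment as a stand-in for a real OT solver, with scalar features and a made-up fundamental matrix; all names and values are assumptions for illustration.

```python
import itertools

# Toy regularized ground cost: feature distance + lam * |x2^T F x1|.
# A brute-force assignment stands in for a proper OT solver here.

def epipolar_residual(F, x1, x2):
    """|x2^T F x1| for homogeneous image points x1, x2 (3-vectors)."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Fx1[i] for i in range(3)))

def registration_map(feats1, feats2, pts1, pts2, F, lam=1.0):
    """Permutation minimizing feature cost plus epipolar violation."""
    n = len(feats1)
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        cost = sum(
            abs(feats1[i] - feats2[perm[i]])
            + lam * epipolar_residual(F, pts1[i], pts2[perm[i]])
            for i in range(n)
        )
        if cost < best:
            best, best_perm = cost, perm
    return best_perm

# For a rectified pair this F makes the residual equal the vertical disparity,
# so the regularizer breaks the tie between equal-feature candidates.
F_rect = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
match = registration_map([1.0, 1.0], [1.0, 1.0],
                         [(0, 0, 1), (0, 5, 1)],
                         [(3, 0, 1), (3, 5, 1)], F_rect)
# rows pair with same-height rows: match == (0, 1)
```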
Systems, methods, and computer-readable media for detecting image degradation during surgical procedures
Methods, systems, and computer-readable media for detecting image degradation during a surgical procedure are provided. A method includes receiving images of a surgical instrument; obtaining baseline images of an edge of the surgical instrument; comparing a characteristic of the images of the surgical instrument to a characteristic of the baseline images of the edge of the surgical instrument, the images of the surgical instrument being received subsequent to obtaining the baseline images of the edge of the surgical instrument and being received while the surgical instrument is disposed at a surgical site in a patient; determining whether the images of the surgical instrument are degraded, based on the comparing of the characteristic of the images of the surgical instrument and the characteristic of the baseline images of the surgical instrument; and generating an image degradation notification, in response to a determination that the images of the surgical instrument are degraded.
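The comparison step can be sketched with a simple sharpness statistic: if the live view of the instrument edge is much less sharp than the baseline captured before insertion, flag degradation. The gradient metric and the 50% drop threshold are illustrative stand-ins for whatever characteristic the system actually compares.

```python
# Hypothetical degradation check on 2D grayscale images (lists of rows).
# Metric and threshold are assumed, not from the patent.

def sharpness(img):
    """Mean absolute horizontal gradient, a crude edge-sharpness score."""
    grads = [abs(row[x + 1] - row[x]) for row in img for x in range(len(row) - 1)]
    return sum(grads) / len(grads)

def is_degraded(live_img, baseline_img, drop_ratio=0.5):
    """Flag degradation when sharpness falls below half the baseline's."""
    return sharpness(live_img) < drop_ratio * sharpness(baseline_img)
```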
Method and system for detecting and picking up objects
A method includes steps of: capturing an image of a container; recognizing at least one object in the container based on the image; determining at least one first coordinate set corresponding to the at least one object; determining at least one second coordinate set that corresponds to target one(s) of the at least one first coordinate set and that relates to a fixed picking device of a robotic arm; adjusting position(s) of unfixed picking device(s) of the robotic arm if necessary; and controlling the robotic arm to pick up one(s) of the at least one object that correspond(s) to the at least one second coordinate set with the fixed picking device and/or at least one unfixed picking device.
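One way to read the two coordinate sets: camera-frame detections (the first set) are mapped into the robot frame of the fixed picking device (the second set) by a hand-eye transform. The sketch below uses a plain translation as that transform and sorts picks top-down; both choices are illustrative assumptions.

```python
# Toy coordinate-set mapping. A real system would use a full rigid
# hand-eye transform; a fixed offset stands in for it here.

def to_robot_frame(cam_xyz, offset=(0.5, -0.2, 1.0)):
    """Second coordinate set: a camera-frame point shifted into the robot frame."""
    return tuple(c + o for c, o in zip(cam_xyz, offset))

def pick_order(objects_cam):
    """Pick the highest object first (largest z) to avoid disturbing the pile."""
    robot_pts = [to_robot_frame(p) for p in objects_cam]
    return sorted(robot_pts, key=lambda p: -p[2])
```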
Depth estimation using biometric data
A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user, who is associated with a second device. The server then receives biometric data of the second user from the second device. The biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
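One plausible reading of biometric-based depth: if a biometric dimension such as the interpupillary distance (IPD) is known, its apparent pixel size in the image stream gives range via the pinhole model's similar triangles. The focal length and IPD values below are assumed for illustration.

```python
# Pinhole-model range estimate from a known biometric dimension.
# distance = focal_length * real_size / apparent_size.

def distance_from_ipd(ipd_px, focal_px=1000.0, ipd_mm=63.0):
    """Range to a face whose IPD spans `ipd_px` pixels (result in mm)."""
    return focal_px * ipd_mm / ipd_px

d = distance_from_ipd(ipd_px=63.0)
# a 63 mm IPD spanning 63 px at f = 1000 px puts the face ~1 m away
```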
System and method for assisting collaborative sensor calibration
Embodiments described herein include a method of receiving, by a moving assisting vehicle, a calibration assistance request related to a moving ego vehicle that requested assistance in collaborative calibration of a sensor deployed on the moving ego vehicle. The method further includes analyzing the calibration assistance request to extract at least one of a schedule or an assistance route associated with the requested assistance. The method includes communicating with the moving ego vehicle about a desired location, relative to the position of the moving ego vehicle, for the moving assisting vehicle to occupy so that the sensor can acquire information of a target present on the moving assisting vehicle. The method includes facilitating driving the moving assisting vehicle to the desired location to achieve the collaborative calibration of the sensor on the moving ego vehicle.
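The "desired location" step can be sketched as simple 2D geometry: place the assisting vehicle's calibration target at a chosen range and bearing inside the ego sensor's field of view. All parameters below (range, bearing, field of view) are illustrative assumptions.

```python
import math

# Hypothetical planar geometry for the desired assisting-vehicle position.

def desired_location(ego_xy, ego_heading_deg, range_m=15.0, bearing_deg=0.0):
    """Point at `range_m` along `bearing_deg` relative to the ego heading."""
    theta = math.radians(ego_heading_deg + bearing_deg)
    return (ego_xy[0] + range_m * math.cos(theta),
            ego_xy[1] + range_m * math.sin(theta))

def in_fov(bearing_deg, half_fov_deg=30.0):
    """Check that the relative bearing keeps the target inside the sensor FOV."""
    return abs(bearing_deg) <= half_fov_deg
```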
Viewpoint dependent brick selection for fast volumetric reconstruction
A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low use of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of an image sensor from which the image data used to create the 3D reconstruction is obtained.
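The two culling tests can be sketched in 2D: a brick survives only if it lies inside the camera frustum and is not well behind the surface reported by the depth image. The simplified camera model, brick representation, and margin below are assumptions for illustration.

```python
import math

# Toy brick culling against a 2D frustum and a per-ray surface depth.
# A brick is reduced to its center (x, z); real systems test whole bricks.

def in_frustum(brick_center, half_fov_deg=45.0, near=0.1, far=10.0):
    """2D frustum test: depth within [near, far] and angle within the FOV."""
    x, z = brick_center
    if not (near < z < far):
        return False
    return abs(math.degrees(math.atan2(x, z))) <= half_fov_deg

def keep_brick(brick_center, surface_depth, margin=0.5):
    """Cull bricks outside the frustum or well behind the observed surface."""
    return in_frustum(brick_center) and brick_center[1] <= surface_depth + margin
```

Bricks rejected by either test cannot contribute visible geometry, so skipping them keeps reconstruction updates cheap.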