G01B11/03

Method, device, apparatus and storage medium for detecting a height of an obstacle

A method, device, apparatus and a computer-readable storage medium for detecting a height of an obstacle are provided. The method can include: acquiring observation data of a plurality of reference obstacles from a frame; fitting a function F: Z=F(ymax) according to the observation data of each of the reference obstacles, wherein the observation data of a reference obstacle comprise a longitudinal coordinate of the bottom of the reference obstacle in the frame and a distance between the reference obstacle and the camera capturing the frame; determining a distance between the obstacle to be detected and the camera according to the longitudinal coordinate of the bottom of the obstacle in the frame and the function F; and determining an evaluation value of the height of the obstacle to be detected according to the distance between the obstacle to be detected and the camera.
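The two stages described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the polynomial degree, the pinhole-model height conversion (H = Z · h_px / f) and all function names are assumptions; the abstract only states that height is evaluated from the recovered distance.

```python
import numpy as np

def fit_ground_model(y_bottoms, distances, degree=2):
    """Fit F so that Z = F(ymax): camera distance as a function of the
    image row of an obstacle's bottom edge. A degree-2 polynomial is an
    illustrative choice, not specified by the abstract."""
    return np.polynomial.Polynomial.fit(y_bottoms, distances, degree)

def estimate_height(F, y_bottom, pixel_height, focal_length_px):
    """Recover distance via the fitted F, then convert the obstacle's
    pixel height to a metric height with the pinhole relation
    H = Z * h_px / f (an assumed evaluation rule)."""
    Z = F(y_bottom)
    return Z * pixel_height / focal_length_px

# Reference obstacles: bottom rows (px) and known camera distances (m)
F = fit_ground_model([700, 600, 550, 520, 505], [5, 10, 15, 20, 25])
H = estimate_height(F, 560, pixel_height=120, focal_length_px=1400)
```

Obstacles whose bottoms sit lower in the image are closer, so the fitted F decreases with the row coordinate, and a single lookup replaces per-obstacle range sensing.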

Method for determining correct scanning distance using augmented reality and machine learning models

A smart device is provided with an application program for displaying a video feed received from the smart device's camera. The application can determine the coordinates of an intersection point, which is the point on the ground at which the smart device is pointing. The application can display a target on the visual representation of the intersection point. Based on whether the smart device is at an appropriate distance from the intersection point, the user interface can superimpose an indicator on the video feed received from the camera. This can inform the user whether the smart device is at an optimal scan distance from the intersection point (or an object) so that the object can be identified by a machine learning model.
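The geometry behind the indicator can be sketched as a ray–plane intersection followed by a range check. This is a hedged illustration under assumed conventions (y-up coordinates, ground plane y = 0, a made-up optimal range); the abstract specifies none of these values.

```python
import numpy as np

OPTIMAL_RANGE_M = (0.5, 2.0)  # assumed scan window; the abstract gives no numbers

def ground_intersection(cam_pos, cam_dir):
    """Intersect the ray from the camera with the ground plane y = 0.
    Returns None when the device points at or above the horizon."""
    cam_pos, cam_dir = np.asarray(cam_pos, float), np.asarray(cam_dir, float)
    if cam_dir[1] >= 0:
        return None
    t = -cam_pos[1] / cam_dir[1]
    return cam_pos + t * cam_dir

def scan_indicator(cam_pos, cam_dir, lo=OPTIMAL_RANGE_M[0], hi=OPTIMAL_RANGE_M[1]):
    """Classify the pose for the superimposed UI indicator."""
    p = ground_intersection(cam_pos, cam_dir)
    if p is None:
        return "no_target"
    d = float(np.linalg.norm(p - cam_pos))
    return "ok" if lo <= d <= hi else ("too_close" if d < lo else "too_far")

# Device held 1.4 m above the ground, tilted 45 degrees downward
status = scan_indicator([0.0, 1.4, 0.0], [0.0, -1.0, 1.0])
```

In an AR framework the camera pose would come from the device's tracking stack; here it is passed in directly to keep the sketch self-contained.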

Knocked-down case inspection and erection method

Techniques for corrugate and chipboard knocked-down case inspection and assembly are described herein. In one example, the disclosed techniques include a method to inspect and assemble a knocked-down case. In the example, a measurement of at least one aspect of the knocked-down case is obtained. A difference is determined between the obtained measurement and a standard measurement and/or a range of standard measurements. Programming of a case-handling tool is executed that is based at least in part on the determined difference. In an example, the programming is configured to adjust for and/or compensate for differences between the actual knocked-down case and a knocked-down case that is within a specification. A case-handling tool is operated responsive to the executed programming to at least partially erect the knocked-down case into an erected case.
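The inspect-then-compensate flow can be sketched as below. The tolerance check and the offset policy are illustrative assumptions; the abstract says only that the programming adjusts for the determined difference.

```python
from dataclasses import dataclass

@dataclass
class Spec:
    nominal_mm: float
    tolerance_mm: float  # half-width of the acceptable range

def deviation(measured_mm: float, spec: Spec) -> float:
    """Signed difference between a measured aspect of the knocked-down
    case and its standard measurement; zero means on-spec."""
    return measured_mm - spec.nominal_mm

def tool_offset(measured_mm: float, spec: Spec) -> float:
    """Compensation applied to the case-handling tool: no adjustment
    inside the tolerance band, otherwise shift by the deviation.
    (An assumed policy, not the patent's actual programming.)"""
    d = deviation(measured_mm, spec)
    return 0.0 if abs(d) <= spec.tolerance_mm else d

panel = Spec(nominal_mm=300.0, tolerance_mm=0.5)
offset = tool_offset(301.2, panel)  # out-of-tolerance panel: compensate ~1.2 mm
```

The returned offset would then parameterize the erection motion so the tool grips and folds the actual case rather than an idealized in-spec one.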

Optical axis based measuring apparatus

It is an object of the present invention to provide a measuring apparatus capable of easily grasping a tracking state of a target and performing an efficient measurement. One aspect of the present invention is a measuring apparatus that emits a light beam toward a target, captures and tracks the target, and measures the three-dimensional coordinates of the target. The measuring apparatus comprises: a light source for emitting a light beam; an angle control unit for controlling the emission angle of the light beam emitted from the light source so as to track the moving target; a display unit provided on a device that is wearable by a measurer; a calculation unit for calculating the three-dimensional coordinates of the target based on the emission angle of the light beam and the light returning from the target; and a display control unit that controls information displayed on the display unit based on the three-dimensional coordinates calculated by the calculation unit. The display control unit performs control to superimpose and display an optical axis graphic image at the position of the optical axis of the light beam.
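The calculation unit's core computation, combining the beam's emission angles with the range measured from the returning light, amounts to a spherical-to-Cartesian conversion. The sketch below assumes one common laser-tracker convention (azimuth in the x–y plane from +x, elevation from that plane); the abstract does not fix one.

```python
import math

def target_coordinates(azimuth_rad, elevation_rad, range_m):
    """Convert the tracked beam's emission angles and the measured range
    into Cartesian target coordinates (assumed angle convention)."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Target 10 m away, 30 degrees above the horizontal, straight ahead
x, y, z = target_coordinates(0.0, math.radians(30), 10.0)
```

The display control unit would then project such coordinates into the wearable display's view to place the optical axis graphic.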

Three-dimensional position estimation device and three-dimensional position estimation method

A three-dimensional position estimation device includes: a feature point extracting unit for detecting an area corresponding to the face of an occupant in an image captured by a camera for imaging a vehicle interior and extracting a plurality of feature points in the detected area; an inter-feature-point distance calculating unit for calculating a first inter-feature-point distance that is a distance between distance-calculating feature points among the plurality of feature points; a face direction detecting unit for detecting the face direction of the occupant; a head position angle calculating unit for calculating a head position angle indicating the position of the head of the occupant with respect to an imaging axis of the camera; an inter-feature-point distance correcting unit for correcting the first inter-feature-point distance to a second inter-feature-point distance that is a distance between distance-calculating feature points in a state where portions of the head corresponding to the distance-calculating feature points are arranged along a plane parallel to an imaging plane of the camera using a result detected by the face direction detecting unit and the head position angle; and a three-dimensional position estimating unit for estimating the three-dimensional position of the head using the head position angle, the second inter-feature-point distance, and a reference inter-feature-point distance.
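The correction and estimation steps can be sketched with a simple cosine foreshortening model and the pinhole similar-triangles relation. Both models, and all numeric values, are assumptions for illustration; the patent's exact correction using face direction and head position angle is not given in the abstract.

```python
import math

def corrected_distance(d1_px, face_yaw_rad, head_angle_rad):
    """Second inter-feature-point distance: undo the foreshortening from
    the face being rotated relative to the camera's imaging plane
    (simple cosine model, an assumption)."""
    return d1_px / math.cos(face_yaw_rad - head_angle_rad)

def head_position(d2_px, d_ref_mm, focal_px, head_angle_rad):
    """Pinhole estimate: depth from the ratio of the reference
    (real-world) inter-feature distance to its corrected image size,
    lateral offset from the head position angle."""
    depth_mm = focal_px * d_ref_mm / d2_px
    lateral_mm = depth_mm * math.tan(head_angle_rad)
    return lateral_mm, depth_mm

# Face yawed 20 degrees, head 5 degrees off the imaging axis
d2 = corrected_distance(55.0, math.radians(20), 0.0)
x_mm, z_mm = head_position(d2, d_ref_mm=62.0, focal_px=900,
                           head_angle_rad=math.radians(5))
```

The reference inter-feature-point distance (here 62 mm, a typical inter-pupillary figure used as a stand-in) is what anchors the pixel measurement to metric depth.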

Measuring method, measuring apparatus, and measuring system
20230087312 · 2023-03-23

A target-object-encompassing region representing a region causing any change between first image information and second image information is determined based on the first image information precluding a target object in an imageable area and the second image information including the target object in the imageable area. A two-dimensional size of the target object included in the target-object-encompassing region is determined based on the target-object-encompassing region and the height information.
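The change-detection step can be sketched as image differencing followed by a bounding box, with a scale factor converting pixels to physical size. The threshold and the mm-per-pixel stand-in for the abstract's height-derived scale are assumptions.

```python
import numpy as np

def encompassing_region(img_without, img_with, threshold=25):
    """Bounding box of pixels that changed between the first image
    (imageable area without the target object) and the second (with it).
    Returns (x0, y0, x1, y1) or None if nothing changed."""
    diff = np.abs(img_with.astype(int) - img_without.astype(int)) > threshold
    ys, xs = np.nonzero(diff)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

def object_size_2d(region, mm_per_px):
    """Two-dimensional size of the target object, scaling the
    encompassing region by a known mm-per-pixel factor (a stand-in for
    the abstract's use of height information)."""
    x0, y0, x1, y1 = region
    return ((x1 - x0 + 1) * mm_per_px, (y1 - y0 + 1) * mm_per_px)

bg = np.zeros((100, 100), np.uint8)
fg = bg.copy()
fg[30:60, 40:50] = 200  # a 10 x 30 px object appears in the area
w_mm, h_mm = object_size_2d(encompassing_region(bg, fg), mm_per_px=2.0)
```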

Apparatus and method for contactless checking of the dimensions and/or shape of a complex-shaped body

Apparatus (1) for checking the dimensions and/or shape of a complex-shaped body (3), comprising a checking support (5) on which the body to be checked is positioned, a robotic system (8) with an optical assembly (17) and a memory unit (19) for storing reference data relating to a reference shape of the body. A processing and control unit (18) controls movements of the optical assembly so as to obtain dimensional values relating to the body at predetermined measuring points, these dimensional values then being compared with the reference data stored in the memory unit. The apparatus further comprises reference elements (35) defined in the checking support in predetermined positions and a distance sensor (17) for acquiring actual positions of said reference elements. Local compensation parameters for correcting positioning errors of the robotic system are calculated for each of the reference elements on the basis of the predetermined positions and the actual positions acquired. A method for checking the dimensions and/or shape of a complex-shaped body by using the above described apparatus includes a calibration phase of the robotic system to calculate the local compensation parameters, a phase for collecting the reference data related to the predetermined measuring points and a dimensional checking phase of the body. The reference data collecting phase and the dimensional checking phase take into consideration the local compensation parameters.
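The calibration phase's local compensation can be sketched as per-element offsets between nominal and sensed reference positions, later applied to measuring points. Nearest-neighbour lookup is an illustrative interpolation choice; the abstract does not specify how parameters are applied between reference elements.

```python
import numpy as np

def local_compensation(nominal, measured):
    """Local compensation parameters: offset (measured - nominal) at
    each reference element of the checking support."""
    return {k: np.asarray(measured[k], float) - np.asarray(nominal[k], float)
            for k in nominal}

def compensate(point, nominal, offsets):
    """Correct a measuring point by subtracting the offset of the
    nearest reference element (assumed nearest-neighbour scheme)."""
    p = np.asarray(point, float)
    key = min(nominal, key=lambda k: np.linalg.norm(p - np.asarray(nominal[k])))
    return p - offsets[key]

nominal = {"A": (0.0, 0.0), "B": (100.0, 0.0)}
measured = {"A": (0.1, -0.05), "B": (100.2, 0.0)}  # distance-sensor readings
offsets = local_compensation(nominal, measured)
corrected = compensate((98.0, 1.0), nominal, offsets)  # nearest element: B
```

Both the reference-data collecting phase and the dimensional checking phase would pass their measuring points through such a correction so that robot positioning errors cancel out of the comparison.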