HEIGHT MEASUREMENT METHOD BASED ON MONOCULAR MACHINE VISION
20180116556 · 2018-05-03
Inventors
CPC classification
G06F18/214
PHYSICS
A61B5/0077
HUMAN NECESSITIES
A61B2576/00
HUMAN NECESSITIES
A61B5/6887
HUMAN NECESSITIES
A41H1/02
HUMAN NECESSITIES
G06V10/446
PHYSICS
G06V10/774
PHYSICS
A61B5/1072
HUMAN NECESSITIES
International classification
A61B5/107
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
A41H1/02
HUMAN NECESSITIES
Abstract
The present disclosure provides a height measurement method based on monocular machine vision. The method includes: picking up, by an RGB camera arranged on the head of a robot, an image containing a two-dimensional identifier and the person under measurement from the head to the feet; calculating, by the robot, a homography matrix of a current visual field according to four corner points on the visual location identifier; acquiring a head image region by segmenting the image, and calculating pixel coordinates of a head vertex; and calculating a height of the person under measurement. The height measurement method based on monocular machine vision according to the present disclosure is simple in operation and calculation. A person under measurement may measure his or her own height without assistance from others, and the measurement is non-contact. The method further improves measurement precision and enhances measurement speed.
Claims
1. A height measurement method based on monocular machine vision, comprising the following steps: obtaining, by a camera on a robot, an image when a person under measurement stands in a specified region corresponding to a visual location identifier, the image comprising the visual location identifier from the head to the feet of the person under measurement; calculating, by the robot, a homography matrix of a current visual field according to four corner points on the visual location identifier; acquiring a head image region by segmenting the image, and calculating pixel coordinates of a head vertex of the person under measurement; and calculating a height of the person under measurement according to the pixel coordinates of the head vertex of the person under measurement and the homography matrix of the current visual field.
2. The height measurement method based on monocular machine vision according to claim 1, wherein the calculating, by the robot, a homography matrix of a current visual field according to four corner points on the visual location identifier comprises: substituting each of the corner points into the following predefined equations:
3. The height measurement method based on monocular machine vision according to claim 1, wherein the acquiring a head image region by segmenting the image, and calculating pixel coordinates of a head vertex of the person under measurement comprises: detecting a face rectangular region in the picked-up image using the Haar-Adaboost face detection algorithm; acquiring the head image region via segmentation based on the Watershed algorithm; and obtaining the pixel coordinates of the head vertex of the person under measurement according to the rectangular region and the acquired head image region.
4. The height measurement method based on monocular machine vision according to claim 3, wherein the detecting a face rectangular region in the picked-up image using the Haar-Adaboost face detection algorithm comprises: identifying the face rectangular region in the image by a face detector trained on face image samples, based on the Haar-Adaboost face detection algorithm.
5. The height measurement method based on monocular machine vision according to claim 3, wherein the acquiring the head image region via segmentation based on the Watershed algorithm comprises: marking the face rectangular region as a foreground image region after the face rectangular region is identified; and marking a background image region not including the head of the person according to size and position of the face rectangular region, and obtaining the head image region of the person under measurement.
6. The height measurement method based on monocular machine vision according to claim 3, wherein the obtaining the pixel coordinates of the head vertex of the person under measurement according to the rectangular region and the acquired head image region comprises: determining, as the head vertex, an intersection of a head vertex profile in the head image region with a vertical line that passes through a central point of the face rectangular region and is parallel to the y-axis; and determining a pixel coordinate in the x-axis direction of the head vertex of the person under measurement as an x-axis coordinate value of the central point of the face rectangular region.
7. The height measurement method based on monocular machine vision according to claim 1, wherein the calculating a height of the person under measurement according to the pixel coordinates of the head vertex of the person under measurement and the homography matrix of the current visual field comprises: substituting the pixel coordinates of the head vertex and the homography matrix of the current visual field into the following predefined equations, and calculating the height of the person under measurement:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0017]
[0018]
[0019]
DETAILED DESCRIPTION
[0020] Hereinafter a height measurement method based on monocular machine vision according to the present disclosure is described in detail with reference to the accompanying drawings.
[0021] The present disclosure provides a height measurement method based on monocular machine vision. The method includes the following steps:
[0022] causing a person under measurement to stand in a specified region on a planar identifier;
[0023] maintaining the head of a robot in a horizontal state, and adjusting a distance between the robot and the person under measurement such that an RGB camera arranged on the head of the robot picks up a two-dimensional identifier from the head to the feet of the person under measurement;
[0024] calculating, by the robot, a homography matrix H=sM[r1, r2, r3, t] of a current visual field according to four corner points on the two-dimensional identifier based on the following predefined equations:

s[x, y, 1]^T = M[r1, r2, r3, t][X, Y, Z, 1]^T
[0025] wherein (x, y, 1) denotes homogeneous coordinates, in pixels, of any corner point on the visual location identifier (that is, the planar identifier) in an image coordinate system of the camera; (X, Y, Z, 1) denotes homogeneous coordinates of the corner point in a coordinate system of the visual location identifier (positions of the four corner points are known, and therefore their homogeneous coordinates in the visual location identifier coordinate system are predefined); s denotes an introduced scale proportion parameter; M denotes an internal parameter matrix of the camera; r1, r2 and r3 denote three column vectors of a rotary matrix of the visual location identifier coordinate system relative to the image coordinate system of the camera; and t denotes a translation vector;
[0026] assuming that Z is equal to 0 in the plane of the visual location identifier, the homogeneous coordinates of the corner point in the visual location identifier coordinate system are simplified as (X, Y, 0, 1), and the homography matrix is transformable, such that r1, r2 and t may be calculated according to the four corner points; r1, r2 and r3 are all column vectors of a rotary matrix R, and since the rotary matrix R is a unitary orthogonal matrix, r1, r2 and r3 are unit vectors that are orthogonal to each other; the unit vector r3 may therefore be calculated as the cross product of r1 and r2, that is, r3 = r1 × r2; and
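The decomposition above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: given a plane-induced homography H = sM[r1, r2, t] and the internal parameter matrix M, it recovers r1, r2 and t up to scale and forms r3 as the cross product of r1 and r2.

```python
import numpy as np

def pose_from_homography(H, M):
    """Recover r1, r2, r3 and t from a plane-induced homography
    H = sM[r1, r2, t], given the camera internal parameter matrix M."""
    M_inv = np.linalg.inv(M)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    s = 1.0 / np.linalg.norm(M_inv @ h1)   # scale: r1 must be a unit vector
    r1 = s * (M_inv @ h1)
    r2 = s * (M_inv @ h2)
    r3 = np.cross(r1, r2)                  # r3 = r1 x r2 (R is unitary orthogonal)
    t = s * (M_inv @ h3)
    return r1, r2, r3, t
```

In practice H itself would first be estimated from the four known corner correspondences (e.g. by a direct linear transform); the sketch assumes H is already available.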
[0027] acquiring a head image region via segmentation, and calculating pixel coordinates (x0, y0) of a head vertex of the person under measurement.
[0028] The pixel coordinates of the head vertex of the person under measurement may be calculated by the following three steps:
[0029] (4) detecting a face rectangular region in the image using the Haar-Adaboost face detection algorithm; and identifying the face rectangular region in the image by a face image sample trained face detector based on the Haar-Adaboost face detection algorithm; and
[0030] (5) acquiring the head region via segmentation based on the Watershed algorithm; wherein the face rectangular region may be marked as a foreground image region after the face rectangular region is identified, non-face background regions on both sides of the face may be marked as background image regions, and a head profile of the person under measurement may be completely acquired via segmentation from the background based on the watershed segmentation algorithm;
[0031] The watershed algorithm refers to the watershed image segmentation algorithm.
[0032] This algorithm is capable of automatically acquiring a border profile between two regions via segmentation by means of respectively marking a foreground image region and a background image region. According to the present disclosure, a profile region of the head vertex is acquired via segmentation based on the watershed algorithm, and then the pixel coordinates of an uppermost point of the head vertex are acquired according to the profile of the head vertex; the specific process includes:
[0033] a. detecting a face rectangular region 1 using a face detection algorithm;
[0034] b. marking the face rectangular region 1 as a foreground image region for segmentation; and
[0035] c. marking a background image region not including the head of the person according to the size and position of the face rectangular region 1; specifically, the position and size of the head profile are concluded from the face rectangular region, and the region outside the head profile region is then marked as a background image region 2, such that a head image region 3 of the person under measurement is automatically generated, that is, the head profile image (this is a function that can be implemented by the watershed algorithm, which is not described herein any further), as illustrated in
[0036] (6) obtaining the pixel coordinates of the head vertex of the person under measurement.
[0037] Assuming that the pixel coordinates at the center of the face rectangular region have been detected as (x1, y1) and the head of the person under measurement remains upright, the intersection of the vertical line passing through the center of the face rectangular region with the head vertex profile in the head profile image is a head vertex 4. The pixel coordinates (x0, y0) of the head vertex are found on the head profile, wherein x0=x1.
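Given the head-region mask and the x-coordinate of the face-rectangle center, the vertex lookup reduces to finding the uppermost mask pixel on one column; a minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def head_vertex(head_mask, x1):
    """Head vertex: the uppermost head-mask pixel on the vertical line
    x = x1 through the center of the face rectangular region, so x0 = x1.
    Note that image row indices (y) grow downward, hence min() is topmost."""
    rows = np.flatnonzero(head_mask[:, x1])
    if rows.size == 0:
        return None                  # the vertical line misses the head
    return x1, int(rows.min())       # (x0, y0)
```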
[0038] A height Z of the person under measurement is calculated by substituting x=x0, y=y0, X=0 and Y=Y0 into the following predefined equations, and solving for the scale s and the height Z:

s[x0, y0, 1]^T = M[r1, r2, r3, t][0, Y0, Z, 1]^T
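After the substitution, only the scale s and the height Z remain unknown in the three linear equations, so both can be recovered by linear least squares. A sketch under that reading (function and parameter names are illustrative):

```python
import numpy as np

def height_from_vertex(x0, y0, M, r1, r2, r3, t, X=0.0, Y0=0.0):
    """Solve s*[x0, y0, 1]^T = M[r1, r2, r3, t][X, Y0, Z, 1]^T for the two
    unknowns s and Z, where (X, Y0) are the known ground-plane coordinates
    of the standing position on the visual location identifier."""
    P = M @ np.column_stack([r1, r2, r3, t])   # 3x4 projection matrix
    p = np.array([x0, y0, 1.0])
    # Rearranged: s*p - Z*P[:,2] = P[:,0]*X + P[:,1]*Y0 + P[:,3]
    A = np.column_stack([p, -P[:, 2]])         # unknown vector [s, Z]
    b = P[:, 0] * X + P[:, 1] * Y0 + P[:, 3]
    (s, Z), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(Z)
```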
[0039] As illustrated in
[0040] As illustrated in
[0041] As illustrated in
[0042] The above embodiments are merely used to illustrate the technical solutions of the present disclosure, instead of limiting the protection scope of the present disclosure. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of the present disclosure should fall within the protection scope defined by the appended claims of the present disclosure.