METHOD AND APPARATUS FOR EVALUATING MOTION STATE OF TRAFFIC TOOL, DEVICE, AND MEDIUM
20230053952 · 2023-02-23
Inventors
CPC classification
G06T7/246
PHYSICS
International classification
G06T7/246
PHYSICS
Abstract
This application provides a method for evaluating a motion state of a vehicle performed by a computer device. The method includes: obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups; obtaining matching feature points between two image frames included in each of the N image frame groups; constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups. By adopting the embodiments of this application, the evaluation accuracy of the motion state of the vehicle may be improved.
Claims
1. A method for evaluating a motion state of a vehicle, performed by a computer device and comprising: obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups, N being a positive integer; obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups; constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
2. The method according to claim 1, wherein the obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups comprises: obtaining original image data captured by the cameras on the vehicle, and performing denoising processing on the original image data to obtain denoised image data; performing distortion removing processing on the denoised image data according to a device parameter corresponding to the cameras, to obtain the target image data corresponding to the vehicle; and dividing the target image data into N+1 image frames, and combining every two neighboring image frames in the N+1 image frames to obtain the N image frame groups.
3. The method according to claim 1, wherein the obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups comprises: obtaining an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, and i is a positive integer less than or equal to N; obtaining a first feature point set in the image frame T.sub.i and a second feature point set in the image frame T.sub.i+1; and performing feature point matching on the first feature point set and the second feature point set to obtain matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
4. The method according to claim 1, wherein the constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group comprises: obtaining q matching feature points between an image frame T.sub.i and an image frame T.sub.i+1 comprised in an image frame group G.sub.i, and determining matching coordinate information respectively corresponding to the q matching feature points according to image coordinate information of the q matching feature points in the image frame T.sub.i and the image frame T.sub.i+1 respectively, wherein the image frame group G.sub.i belongs to the N image frame groups, i is a positive integer less than or equal to N, and q is a positive integer; obtaining an initial polygon linked list associated with the q matching feature points according to the matching coordinate information respectively corresponding to the q matching feature points; obtaining, for a matching feature point c.sub.r in the q matching feature points, an associated polygon matching the matching feature point c.sub.r from the initial polygon linked list, wherein a circumscribed circle of the associated polygon comprises the matching feature point c.sub.r, and r is a positive integer less than or equal to q; deleting a common side of the associated polygon, and connecting the matching feature point c.sub.r to vertexes of the associated polygon to generate a candidate polygon corresponding to the matching feature point c.sub.r; converting the candidate polygon into a target polygon corresponding to the matching feature point c.sub.r, and adding the target polygon to the initial polygon linked list, wherein a circumscribed circle of the target polygon does not comprise any matching feature point; and determining the initial polygon linked list as a target polygon linked list when target polygons respectively corresponding to the q matching feature points are all added to the initial polygon 
linked list, and generating a target mesh plane figure corresponding to the image frame group G.sub.i according to the target polygon linked list.
5. The method according to claim 1, wherein the determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining average coordinate change values respectively corresponding to the M polygons, sorting the average coordinate change values respectively corresponding to the M polygons, and determining the sorted average coordinate change values as a first coordinate sequence of the target mesh plane figure H.sub.i; determining a statistical value associated with the target mesh plane figure H.sub.i according to the first coordinate sequence; and determining the motion state of the vehicle as a static state when statistical values respectively associated with N target mesh plane figures are all less than a first threshold.
6. The method according to claim 1, wherein the determining the motion state of the vehicle according to the target mesh plane figures comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining a first direction of each side of the M polygons in the image frame T.sub.i, and obtaining a second direction of each side of the M polygons in the image frame T.sub.i+1; determining a direction change value corresponding to each side of the M polygons according to the first direction and the second direction; generating a direction change sequence associated with the M polygons according to direction change values respectively corresponding to the M polygons, and determining a direction average value and a direction median associated with the target mesh plane figure H.sub.i according to the direction change sequence; and determining the motion state of the vehicle as a linear motion state when direction average values respectively associated with N target mesh plane figures and direction medians respectively associated with the N target mesh plane figures are all less than a second threshold.
7. The method according to claim 1, further comprising: when the motion state of the vehicle is in a static state, determining positioning information of the vehicle according to a positioning and navigation system mounted on the vehicle; and when the motion state of the vehicle is in a linear motion state, determining a motion direction of the vehicle according to the linear motion state, and determining the positioning information of the vehicle according to the motion direction and the positioning and navigation system.
8. A computer device, comprising a memory and a processor, the processor being connected to the memory, the memory being configured to store a computer program, and the processor being configured to execute the computer program and cause the computer device to perform a method for evaluating a motion state of a vehicle including: obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups, N being a positive integer; obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups; constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
9. The computer device according to claim 8, wherein the obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups comprises: obtaining original image data captured by the cameras on the vehicle, and performing denoising processing on the original image data to obtain denoised image data; performing distortion removing processing on the denoised image data according to a device parameter corresponding to the cameras, to obtain the target image data corresponding to the vehicle; and dividing the target image data into N+1 image frames, and combining every two neighboring image frames in the N+1 image frames to obtain the N image frame groups.
10. The computer device according to claim 8, wherein the obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups comprises: obtaining an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, and i is a positive integer less than or equal to N; obtaining a first feature point set in the image frame T.sub.i and a second feature point set in the image frame T.sub.i+1; and performing feature point matching on the first feature point set and the second feature point set to obtain matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
11. The computer device according to claim 8, wherein the constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group comprises: obtaining q matching feature points between an image frame T.sub.i and an image frame T.sub.i+1 comprised in an image frame group G.sub.i, and determining matching coordinate information respectively corresponding to the q matching feature points according to image coordinate information of the q matching feature points in the image frame T.sub.i and the image frame T.sub.i+1 respectively, wherein the image frame group G.sub.i belongs to the N image frame groups, i is a positive integer less than or equal to N, and q is a positive integer; obtaining an initial polygon linked list associated with the q matching feature points according to the matching coordinate information respectively corresponding to the q matching feature points; obtaining, for a matching feature point c.sub.r in the q matching feature points, an associated polygon matching the matching feature point c.sub.r from the initial polygon linked list, wherein a circumscribed circle of the associated polygon comprises the matching feature point c.sub.r, and r is a positive integer less than or equal to q; deleting a common side of the associated polygon, and connecting the matching feature point c.sub.r to vertexes of the associated polygon to generate a candidate polygon corresponding to the matching feature point c.sub.r; converting the candidate polygon into a target polygon corresponding to the matching feature point c.sub.r, and adding the target polygon to the initial polygon linked list, wherein a circumscribed circle of the target polygon does not comprise any matching feature point; and determining the initial polygon linked list as a target polygon linked list when target polygons respectively corresponding to the q matching feature points are all added to the 
initial polygon linked list, and generating a target mesh plane figure corresponding to the image frame group G.sub.i according to the target polygon linked list.
12. The computer device according to claim 8, wherein the determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining average coordinate change values respectively corresponding to the M polygons, sorting the average coordinate change values respectively corresponding to the M polygons, and determining the sorted average coordinate change values as a first coordinate sequence of the target mesh plane figure H.sub.i; determining a statistical value associated with the target mesh plane figure H.sub.i according to the first coordinate sequence; and determining the motion state of the vehicle as a static state when statistical values respectively associated with N target mesh plane figures are all less than a first threshold.
13. The computer device according to claim 8, wherein the determining the motion state of the vehicle according to the target mesh plane figures comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining a first direction of each side of the M polygons in the image frame T.sub.i, and obtaining a second direction of each side of the M polygons in the image frame T.sub.i+1; determining a direction change value corresponding to each side of the M polygons according to the first direction and the second direction; generating a direction change sequence associated with the M polygons according to direction change values respectively corresponding to the M polygons, and determining a direction average value and a direction median associated with the target mesh plane figure H.sub.i according to the direction change sequence; and determining the motion state of the vehicle as a linear motion state when direction average values respectively associated with N target mesh plane figures and direction medians respectively associated with the N target mesh plane figures are all less than a second threshold.
14. The computer device according to claim 8, wherein the method further comprises: when the motion state of the vehicle is in a static state, determining positioning information of the vehicle according to a positioning and navigation system mounted on the vehicle; and when the motion state of the vehicle is in a linear motion state, determining a motion direction of the vehicle according to the linear motion state, and determining the positioning information of the vehicle according to the motion direction and the positioning and navigation system.
15. A non-transitory computer-readable storage medium, storing a computer program, the computer program being adapted to be loaded and executed by a processor of a computer device, causing the computer device to perform a method for evaluating a motion state of a vehicle including: obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups, N being a positive integer; obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups; constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
16. The non-transitory computer-readable storage medium according to claim 15, wherein the obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups comprises: obtaining original image data captured by the cameras on the vehicle, and performing denoising processing on the original image data to obtain denoised image data; performing distortion removing processing on the denoised image data according to a device parameter corresponding to the cameras, to obtain the target image data corresponding to the vehicle; and dividing the target image data into N+1 image frames, and combining every two neighboring image frames in the N+1 image frames to obtain the N image frame groups.
17. The non-transitory computer-readable storage medium according to claim 15, wherein the obtaining matching feature points between two image frames comprised in each image frame group in the N image frame groups comprises: obtaining an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, and i is a positive integer less than or equal to N; obtaining a first feature point set in the image frame T.sub.i and a second feature point set in the image frame T.sub.i+1; and performing feature point matching on the first feature point set and the second feature point set to obtain matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
18. The non-transitory computer-readable storage medium according to claim 15, wherein the determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining average coordinate change values respectively corresponding to the M polygons, sorting the average coordinate change values respectively corresponding to the M polygons, and determining the sorted average coordinate change values as a first coordinate sequence of the target mesh plane figure H.sub.i; determining a statistical value associated with the target mesh plane figure H.sub.i according to the first coordinate sequence; and determining the motion state of the vehicle as a static state when statistical values respectively associated with N target mesh plane figures are all less than a first threshold.
19. The non-transitory computer-readable storage medium according to claim 15, wherein the determining the motion state of the vehicle according to the target mesh plane figures comprises: obtaining a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, wherein the image frame group G.sub.i comprises an image frame T.sub.i and an image frame T.sub.i+1, the target mesh plane figure H.sub.i comprises M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not comprise any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N; obtaining a first direction of each side of the M polygons in the image frame T.sub.i, and obtaining a second direction of each side of the M polygons in the image frame T.sub.i+1; determining a direction change value corresponding to each side of the M polygons according to the first direction and the second direction; generating a direction change sequence associated with the M polygons according to direction change values respectively corresponding to the M polygons, and determining a direction average value and a direction median associated with the target mesh plane figure H.sub.i according to the direction change sequence; and determining the motion state of the vehicle as a linear motion state when direction average values respectively associated with N target mesh plane figures and direction medians respectively associated with the N target mesh plane figures are all less than a second threshold.
20. The non-transitory computer-readable storage medium according to claim 15, wherein the method further comprises: when the motion state of the vehicle is in a static state, determining positioning information of the vehicle according to a positioning and navigation system mounted on the vehicle; and when the motion state of the vehicle is in a linear motion state, determining a motion direction of the vehicle according to the linear motion state, and determining the positioning information of the vehicle according to the motion direction and the positioning and navigation system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0036] The technical solutions in the embodiments of this application are clearly and completely described below with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are merely some embodiments of this application rather than all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without making creative efforts shall fall within the protection scope of this application.
[0037] The embodiments of this application involve the following concepts:
[0038] Location-based service (LBS): LBS is a location-related service provided by a wireless carrier for a user. The LBS obtains a current position of a positioning device through various positioning technologies, and provides information resources and basic services to the positioning device through the mobile Internet. The LBS integrates a plurality of information technologies such as mobile communication, the Internet, spatial positioning, location information, and big data, and performs data updating and exchange through a mobile Internet service platform, so that the user may obtain a corresponding service through spatial positioning.
[0039] In-vehicle image data may be data provided by an in-vehicle camera, and the in-vehicle camera is a basis for implementing various pre-warning and recognition-type advanced driving assistance system (ADAS) functions. A visual image processing system is fundamental to most ADAS functions, and a camera may serve as an input of the visual image processing system, so the in-vehicle camera is essential for smart driving. In-vehicle cameras of a vehicle may include a built-in camera, a surround view camera, a side view camera, a reverse rear view camera, a front view sensor camera, and a driving recorder camera. The front view sensor camera may be a monocular camera or a binocular camera. Different from the monocular camera, the binocular camera provides better ranging, but needs to be installed at two different positions, so that the binocular camera costs about 50% more than the monocular camera. The surround view camera may use a wide-angle lens; four surround view cameras may be mounted around the vehicle, and the images they capture are spliced into a panoramic image, based on which road line sensing may be implemented by an algorithm. The rear view camera may use a wide-angle lens or a fisheye lens, and is mainly used during reversing. The in-vehicle cameras of the vehicle are mainly applied to reversing image (captured by the rear view camera) and 360-degree panoramic image (captured by the surround view cameras) scenarios. Up to 8 cameras may be configured for various assistance devices of a high-end vehicle, to assist a driver in parking or trigger emergency braking. When in-vehicle cameras successfully replace side mirrors, the number of cameras on the vehicle may reach 12, and with the development of unmanned driving technologies, the demand of smart driving vehicle types for cameras may increase further. The embodiments of this application may use image data acquired by a front view sensor camera or a driving recorder camera to assist a satellite positioning device.
[0040] An image feature point is a point whose grayscale value changes drastically, or a point with relatively great curvature on an image edge (namely, an intersection point of two edges). During image processing, image feature points play an important role in feature point-based image matching algorithms: they reflect essential features of an image and allow a target object in the image to be identified, so that image matching can be completed through feature point matching. An image feature point mainly includes two parts: a key point and a descriptor. The key point refers to the position of the feature point in the image, and sometimes further includes information such as a direction and a scale; the descriptor is typically a vector, often manually designed, which describes the relationship between the key point and its surrounding pixels. Generally, following the principle that features with similar appearance should have similar descriptors, during feature point matching, if the distance (for example, a Mahalanobis distance or a Hamming distance) between the descriptors of two feature points in a vector space is small, the two feature points may be considered the same feature point.
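The descriptor-distance matching principle described above can be illustrated with a minimal sketch. It assumes binary descriptors (for example, ORB-like bit strings stored as integers) compared by Hamming distance; the descriptor values, the `match_features` helper, and the threshold are illustrative only, not part of the claimed method.

```python
# Hypothetical sketch of descriptor matching by Hamming distance.
# Descriptors are binary strings stored as Python integers.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance: number of differing bits between descriptors."""
    return bin(d1 ^ d2).count("1")

def match_features(desc_a, desc_b, max_dist=40):
    """For each descriptor in desc_a, find the nearest descriptor in
    desc_b; keep the pair only if the distance is below max_dist."""
    matches = []
    for i, da in enumerate(desc_a):
        j, dist = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                      key=lambda t: t[1])
        if dist < max_dist:
            matches.append((i, j, dist))
    return matches

# Descriptor 0b1011 differs from 0b1010 by one bit, so it matches;
# descriptor 0b0000 is at distance >= 2 from everything, so it is dropped.
print(match_features([0b1011, 0b0000], [0b1010, 0b1111], max_dist=2))
# -> [(0, 0, 1)]
```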
[0041] A Delaunay triangulation network refers to a set of connected but non-overlapping triangles in which the circumscribed circle of each triangle does not include any point other than the vertexes of that triangle. The Delaunay triangulation network has features such as a good structure, a simple data structure, small data redundancy, high storage efficiency, and consistency with irregular ground features. It can represent region boundaries of linear features with superimposed shapes of any kind, is easy to update, and may adapt to data of various distribution densities. The Delaunay triangulation network has two notable properties, as follows:
[0042] 1. The empty circumscribed circle property: the Delaunay triangulation network is unique (provided that no four points are concyclic), and the circumscribed circle of each triangle does not include any other point of the plane point set; this property may be used as a criterion for creating a Delaunay triangulation network; and
[0043] 2. The largest minimum angle property: among all triangulations that may be formed from a scattered point set, the Delaunay triangulation maximizes the minimum internal angle of its triangles. In other words, after the diagonal of the convex quadrilateral formed by any two adjacent triangles in the Delaunay triangulation network is interchanged, the minimum of the six internal angles no longer increases.
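The empty circumscribed circle property above can be checked with the standard in-circle determinant. The following is a sketch under the assumption that the triangle's vertexes are listed counterclockwise; the function name and sample points are illustrative, not from the source.

```python
# Hypothetical empty-circumcircle test: for a counterclockwise triangle
# (a, b, c), point d lies strictly inside its circumscribed circle iff
# the 3x3 in-circle determinant (rows: a-d, b-d, c-d with squared norms)
# is positive.

def in_circumcircle(a, b, c, d):
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# The triangle (0,0), (1,0), (0,1) is counterclockwise; its circumcircle
# is centered at (0.5, 0.5). The center is inside, a far point is not.
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2)))      # False
```

In a practical implementation this exact predicate is what decides whether an inserted point invalidates an existing triangle, as in the incremental construction described in the claims.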
[0044] Referring to
[0045] The user terminal 10a is used as an example. The user terminal 10a may be a terminal device in a vehicle and may obtain image data photographed by a camera (for example, a front view camera or a driving recorder camera), where the image data may be a series of consecutive image frames (for example, N+1 image frames, N being a positive integer such as 1, 2, 3, . . . ), or may be photographed video data. When the image data is video data, the user terminal 10a may perform framing on the image data to obtain N+1 image frames. The user terminal 10a may perform feature point extraction on the N+1 image frames, match image feature points in every two adjacent image frames to obtain matching feature points between them, construct a mesh plane figure (for example, a Delaunay triangulation network) corresponding to every two adjacent image frames according to the matching feature points, and further evaluate a motion state of the vehicle according to the mesh plane figures, where the motion state may be used for assisting a satellite positioning device. Certainly, the user terminal 10a may also transmit the N+1 image frames to the server 10d, for the server 10d to perform feature point extraction, match image feature points in every two adjacent image frames, and construct the corresponding mesh plane figures. The motion state of the vehicle evaluated based on the mesh plane figures may be returned to the user terminal 10a, and the user terminal 10a may provide an assistance foundation for the satellite positioning device of the vehicle according to the motion state.
[0046] Referring to
The in-vehicle terminal 20b may perform feature point extraction on the image frame T1 and the image frame T2 included in the image frame group 1. Points whose grayscale values change drastically, or points having a relatively great curvature on an image edge (namely, intersection points of two edges), are extracted from the image frame T1 through a feature extraction algorithm, and the extracted points are determined as a feature point set 20d corresponding to the image frame T1. Similarly, a feature point set 20e may be extracted from the image frame T2 through the same feature extraction algorithm. An image coordinate system may be established in both the image frame T1 and the image frame T2, where the image coordinate system includes an axis U and an axis V, a coordinate on the axis U may be used for representing the column of a feature point in the image frame, and a coordinate on the axis V may be used for representing the row of the feature point in the image frame. As shown in
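The idea of "points whose grayscale values change drastically" can be conveyed with a toy sketch that scores each interior pixel by its gradient magnitude. Real detectors (Harris, FAST, ORB) are considerably more involved; the image, threshold, and `candidate_points` helper below are made-up illustrations, not the extraction algorithm of the embodiments.

```python
# Hypothetical sketch: flag interior pixels whose grayscale gradient
# magnitude exceeds a threshold as candidate feature points. Coordinates
# are returned as (u, v) = (column, row), matching the U/V axes above.

def candidate_points(img, thresh=50.0):
    h, w = len(img), len(img[0])
    points = []
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            gu = (img[v][u + 1] - img[v][u - 1]) / 2.0  # horizontal change
            gv = (img[v + 1][u] - img[v - 1][u]) / 2.0  # vertical change
            if (gu * gu + gv * gv) ** 0.5 > thresh:
                points.append((u, v))
    return points

# A 4x4 image with a sharp vertical edge between columns 1 and 2:
# all interior pixels adjacent to the edge are flagged.
img = [[0, 0, 200, 200]] * 4
print(candidate_points(img))  # [(1, 1), (2, 1), (1, 2), (2, 2)]
```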
[0048] The in-vehicle terminal 20b may perform feature point matching on the feature point set 20d and the feature point set 20e through a matching algorithm, determine matching feature points between the feature point set 20d and the feature point set 20e by using a distance between two feature point descriptors (or may be referred to as descriptive features) as a matching principle, to obtain a matching feature point set 20f corresponding to the image frame group 1, where image coordinates corresponding to each matching feature point in the matching feature point set 20f may be: an average value of image coordinates on the image frame T1 and the image frame T2. For example, when a distance between a feature point 1 in the feature point set 20d and a feature point 2 in the feature point set 20e is less than a distance threshold (the distance threshold may be manually set, and for example, the distance threshold is set to 0.8), the feature point 1 and the feature point 2 are determined as a matching feature point A between the image frame T1 and the image frame T2. If image coordinates of the feature point 1 are represented as (u1, v1) and image coordinates of the feature point 2 are represented as (u2, v2), image coordinates of the matching feature point A may be represented as ((u1+u2)/2, (v1+v2)/2).
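The matching principle above, including the averaging of image coordinates across the two frames, can be sketched as follows; the descriptor layout, the Euclidean distance, and the threshold value are illustrative assumptions:

```python
import numpy as np

def match_feature_points(pts1, desc1, pts2, desc2, dist_thresh=0.8):
    # pts1/pts2: (m, 2) arrays of (u, v) image coordinates in the two frames;
    # desc1/desc2: (m, d) descriptor arrays (hypothetical layout)
    matches = []
    for i in range(len(desc1)):
        dists = np.linalg.norm(desc2 - desc1[i], axis=1)
        j = int(np.argmin(dists))
        if dists[j] < dist_thresh:
            # the matching feature point keeps the average of its image
            # coordinates on the two image frames
            avg = (pts1[i] + pts2[j]) / 2.0
            matches.append((i, j, avg))
    return matches
```

For example, a point at (10, 20) in the first frame matched to (12, 22) in the second frame yields the averaged coordinates (11, 21).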
[0049] Further, a Delaunay triangulation network 20g corresponding to the image frame group 1 may be constructed according to the matching feature point set 20f. The Delaunay triangulation network 20g includes connected but non-overlapping triangles, a circumscribed circle of each triangle does not include any other matching feature points other than vertexes of the current triangle, and vertexes of each triangle in the Delaunay triangulation network 20g are matching feature points in the matching feature point set 20f.
[0050] The in-vehicle terminal 20b may remove abnormal matching feature points in the matching feature point set 20f according to the Delaunay triangulation network 20g. For any triangle (vertexes of the triangle may be respectively represented as: a vertex A, a vertex B, and a vertex C) in the Delaunay triangulation network 20g, a coordinate change value of a side length AB, a coordinate change value of a side length BC, and a coordinate change value of a side length AC of the triangle may be calculated according to coordinates of the three vertexes in the image frame T1 and the image frame T2 respectively. The abnormal matching feature points in the matching feature point set 20f are filtered based on the coordinate change values of the three side lengths in each triangle, to delete the abnormal matching feature points from the matching feature point set 20f. The in-vehicle terminal 20b may update the Delaunay triangulation network 20g according to a matching feature point set from which the abnormal matching feature points are deleted, to generate a Delaunay triangulation network 20h corresponding to the image frame group 1. Based on the same operation process, Delaunay triangulation networks respectively corresponding to the N image frame groups may be finally obtained, and the motion state of the driving vehicle is evaluated according to the Delaunay triangulation networks respectively corresponding to the N image frame groups. The motion state of the driving vehicle may include a static state, a non-static state, and a linear motion state, and the motion state of the driving vehicle may be used for assisting a satellite positioning device of the vehicle.
In the embodiments of this application, a Delaunay triangulation network may be constructed according to matching feature points between adjacent image frames, and the motion state of the vehicle may be evaluated based on the Delaunay triangulation network, thereby improving the evaluation accuracy of the motion state of the vehicle.
[0051] Referring to
[0052] S101. Obtain target image data captured by cameras on the vehicle, and combine every two neighboring image frames in the target image data into N image frame groups.
[0053] Specifically, a camera (for example, the in-vehicle camera 20a in the embodiment corresponding to
[0054] The vehicle may be a vehicle on the land, a ship in the sea, an airplane in the air, or a spacecraft. The camera may serve as a video input device; the camera may be a binocular camera or a monocular camera, or may be a wide-angle camera or a fisheye camera, and the type of the camera is not limited in this application.
[0055] S102. Obtain matching feature points between two image frames included in each image frame group in the N image frame groups.
[0056] Specifically, the computer device may perform feature point extraction on the target image data, namely, extract feature points from the N+1 image frames respectively, and matching feature points (which may be understood as same feature points between two image frames included in each image frame group) respectively corresponding to the N image frame groups may be obtained according to the feature points extracted from each image frame. For the N image frame groups, an obtaining process of the matching feature points in each image frame group is the same. Therefore, the following describes the obtaining process of the matching feature points by only using any image frame group in the N image frame groups as an example.
[0057] The computer device may obtain any image frame group G.sub.i from the N image frame groups, where the image frame group G.sub.i may include an image frame T.sub.i and an image frame T.sub.i+1, and i is a positive integer less than or equal to N, namely, a minimum value of i is 1, and a maximum value thereof is N. The computer device may further obtain a first feature point set in the image frame T.sub.i (for example, the feature point set 20d in the embodiment corresponding to
[0058] During feature point extraction, the computer device may construct a scale space for the image frame T.sub.i and the image frame T.sub.i+1, search for a first key point position set corresponding to the image frame T.sub.i in the scale space, and search for a second key point position set corresponding to the image frame T.sub.i+1 in the scale space. Further, the computer device may determine, according to a local gradient corresponding to a first key point position included in the first key point position set, a first descriptive feature corresponding to the first key point position, and obtain the first feature point set of the image frame T.sub.i according to the first key point position and the first descriptive feature, where a feature point included in the first feature point set may be shown in the following formula (1):

F.sup.i={f.sub.j.sup.i=(u.sub.j.sup.i, v.sub.j.sup.i)|j=1, 2, . . . , m.sub.i}, Π.sup.i={ξ.sub.1.sup.i, ξ.sub.2.sup.i, . . . , ξ.sub.m.sub.i.sup.i}  (1)
[0059] F.sup.i may represent the first key point position set corresponding to the image frame T.sub.i, namely, F.sup.i may represent a coordinate set formed by first key point positions of all feature points in the first feature point set; f.sub.j.sup.i may represent coordinates of a j.sup.th feature point in the first feature point set, u.sub.j.sup.i may represent a U-axis coordinate of the j.sup.th feature point in an image coordinate system, and v.sub.j.sup.i may represent a V-axis coordinate of the j.sup.th feature point in the image coordinate system; m.sub.i may represent the number of feature points included in the first feature point set; and Π.sup.i may represent a descriptive feature set formed by first descriptive features of all the feature points in the first feature point set, and ξ.sub.1.sup.i may represent a first descriptive feature of a first feature point in the first feature point set.
[0060] The computer device may determine, according to a local gradient corresponding to a second key point position included in the second key point position set, a second descriptive feature corresponding to the second key point position, and obtain the second feature point set of the image frame T.sub.i+1 according to the second key point position and the second descriptive feature. For a representation form of each feature point included in the second feature point set, reference may be made to the foregoing formula (1). For the second feature point set, F.sup.i may be replaced with F.sup.i+1, and Π.sup.i may be replaced with Π.sup.i+1, where F.sup.i+1 represents a coordinate set formed by second key point positions of all feature points in the second feature point set, and Π.sup.i+1 represents a descriptive feature set formed by second descriptive features of all the feature points in the second feature point set. In other words, for either the image frame T.sub.i or the image frame T.sub.i+1, the computer device may extract key points in the image frame, namely, search for pixels having a specific feature in the image frame (in this case, the specific feature is associated with the adopted feature point extraction algorithm), and may further calculate a descriptive feature (or may be referred to as a descriptor) of each key point according to a position of the key point, where the position of the key point and the corresponding descriptive feature may be referred to as a feature point in the image frame. The first feature point set may include one or more feature points, the second feature point set may include one or more feature points, and the number of feature points included in the first feature point set and the number of feature points included in the second feature point set may be the same or may be different. Feature point extraction algorithms adopted for the N image frame groups are the same.
[0061] After the first feature point set of the image frame T.sub.i and the second feature point set of the image frame T.sub.i+1 are obtained, the computer device may obtain a first feature point a.sub.t from the first feature point set and obtain a second feature point b.sub.k from the second feature point set, where t is a positive integer less than or equal to the number of the feature points included in the first feature point set, and k is a positive integer less than or equal to the number of the feature points included in the second feature point set. Further, the computer device may determine a matching degree between the first feature point a.sub.t and the second feature point b.sub.k according to a first descriptive feature corresponding to the first feature point a.sub.t and a second descriptive feature corresponding to the second feature point b.sub.k; and determine, when the matching degree is greater than a matching threshold, the first feature point a.sub.t and the second feature point b.sub.k as matching feature points between the image frame T.sub.i and the image frame T.sub.i+1. In other words, for any feature point in the first feature point set, matching degrees between the feature point and all the feature points included in the second feature point set need to be calculated respectively, and two feature points whose matching degree is greater than the matching threshold are determined as the matching feature points between the image frame T.sub.i and the image frame T.sub.i+1. 
That is, two feature points whose matching degree is greater than the matching threshold are a same feature point in different image frames, and two feature points whose matching degree is less than or equal to the matching threshold are different feature points in different image frames, where the matching degree may be calculated according to a distance, obtained through the matching algorithm, between descriptive features corresponding to the two feature points, and the distance may include, but is not limited to, a Mahalanobis distance or a Hamming distance.
[0062] A feature point in the first feature point set matches at most one feature point in the second feature point set, namely, for any feature point a.sub.t in the first feature point set, no matching feature point or one matching feature point exists in the second feature point set. For example, the first feature point set includes a feature point a1, a feature point a2, and a feature point a3, and the second feature point set includes a feature point b1, a feature point b2, a feature point b3, and a feature point b4. Matching degrees between the feature point a1 and the feature point b1, the feature point b2, the feature point b3, and the feature point b4 may be calculated respectively, and if the feature point b2 whose matching degree is greater than the matching threshold exists in the feature point b1, the feature point b2, the feature point b3, and the feature point b4, the feature point a1 and the feature point b2 may be determined as matching feature points between the image frame T.sub.i and the image frame T.sub.i+1. Similarly, matching degrees between the feature point a2 and the feature point b1, the feature point b2, the feature point b3, and the feature point b4 may be calculated respectively, and if no feature point whose matching degree is greater than the matching threshold exists in the feature point b1, the feature point b2, the feature point b3, and the feature point b4, it may be determined that no feature point same as the feature point a2 exists in the image frame T.sub.i+1.
Matching degrees between the feature point a3 and the feature point b1, the feature point b2, the feature point b3, and the feature point b4 are calculated respectively, and if the feature point b4 whose matching degree is greater than the matching threshold exists in the feature point b1, the feature point b2, the feature point b3, and the feature point b4, the feature point a3 and the feature point b4 may be determined as matching feature points between the image frame T.sub.i and the image frame T.sub.i+1, and in this case, two matching feature points exist between the image frame T.sub.i and the image frame T.sub.i+1. It may be understood that, when matching degrees between feature points in the first feature point set and the second feature point set are calculated, parallel calculation may be performed, and serial calculation may also be performed. The computer device may calculate matching degrees between the feature points in the first feature point set and the feature points in the second feature point set in a parallel manner, and further determine the matching feature points between the image frame T.sub.i and the image frame T.sub.i+1 according to all calculated matching degrees. Alternatively, the computer device may calculate matching degrees between the feature points in the first feature point set and the feature points in the second feature point set in a serial manner, namely, the computer device may stop, when a feature point matching a feature point a.sub.t is determined from the second feature point set, calculating matching degrees between the feature point a.sub.t and other feature points in the second feature point set. 
For example, in a case of determining that the matching degree between the feature point a1 and the feature point b2 is greater than the matching threshold, matching degrees between the feature point a1 and other feature points other than the feature point b2 in the second feature point set do not need to be calculated subsequently.
[0063] Referring to
[0064] The computer device may construct a scale space of the image frame T.sub.i, the scale space may be represented by using a Gaussian pyramid, and the Gaussian pyramid may be obtained by performing blurring and down-sampling processing on the image frame T.sub.i through a Gaussian function. That is, Gaussian blurring of different scales may be performed on the image frame T.sub.i, and down-sampling processing is performed on the image frame T.sub.i, where in this case, the down-sampling processing may be dot interlaced sampling processing. The Gaussian pyramid of the image frame T.sub.i introduces a Gaussian filter on the basis of an image pyramid formed through simple down-sampling. The image pyramid may refer to continuously performing reduced-order sampling processing on the image frame T.sub.i to obtain a series of images in different sizes, and constructing the series of images from big to small and from bottom to top into a tower-shaped model. The image frame T.sub.i may be a first layer of the image pyramid, and a new image obtained through each down-sampling may serve as one layer in the image pyramid (each layer in the image pyramid is an image). Further, Gaussian blurring may be performed on each layer of image in the image pyramid by using different parameters, so that each layer of the image pyramid may include a plurality of Gaussian-blurred images, and in this case, the image pyramid may be referred to as a Gaussian pyramid. The Gaussian blurring is an image filter, which may use a normal distribution (namely, a Gaussian function) to calculate a blurring template, and use the blurring template and the image frame T.sub.i to perform a convolution operation, to blur an image.
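The construction just described, repeated every-other-pixel down-sampling with several Gaussian blurs per layer, can be sketched as follows; the number of layers and the blur scales are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, n_layers=3, sigmas=(0.5, 1.0, 2.0)):
    pyramid = []
    layer = img.astype(float)
    for _ in range(n_layers):
        # each layer of the image pyramid holds several Gaussian-blurred copies
        blurred = [ndimage.gaussian_filter(layer, sigma=s) for s in sigmas]
        pyramid.append(blurred)
        # dot interlaced (every-other-pixel) down-sampling for the next layer
        layer = layer[::2, ::2]
    return pyramid
```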
[0065] After the scale space (namely, the foregoing Gaussian pyramid) of the image frame T.sub.i is constructed, the SIFT algorithm may include steps such as scale space extrema detection, key point positioning, direction determining, and key point description, and the scale space extrema detection, the key point positioning, and the direction determining in this case may be referred to as a feature point detection process. The scale space extrema detection may include searching for image positions on all scales, namely, identifying potential points of interest invariant to scales and rotation through a Gaussian differentiation function, where there may be a plurality of points of interest in this case. The key point positioning may include determining a position and a scale of a key point in the Gaussian pyramid from the points of interest through a finely fitted model, where selection of the key point depends on the stability of the points of interest. The direction determining may include allocating one or more directions to a position of each feature point based on a local gradient direction of an image in the image frame T.sub.i, so that subsequent operations on the image frame T.sub.i may be performed based on the direction, the scale, and the position of the key point. The key point description may include measuring a local gradient of an image on a selected scale in the Gaussian pyramid in a neighborhood around each key point, where the gradient may be transformed into a vector representation, and the vector representation may permit relatively great local shape deformation and illumination changes. According to the foregoing feature point detection and feature point description parts, the first feature point set corresponding to the image frame T.sub.i may be obtained. Similarly, the second feature point set corresponding to the image frame T.sub.i+1 may be obtained based on the same processing procedure.
The computer device may perform feature point matching on the first feature point set and the second feature point set to obtain the matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
[0066] For the feature point matching process between the first feature point set and the second feature point set, reference may be made to
[0067] S103. Construct target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group.
[0068] Specifically, the computer device may construct target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points respectively corresponding to each image frame group. That is, one image frame group may correspond to one target mesh plane figure, each target mesh plane figure may include a plurality of connected but non-overlapping polygons, a circumscribed circle of each polygon does not include any matching feature point other than the vertexes of the polygon, and vertexes of each polygon may be the matching feature points in each image frame group. In the embodiments of this application, the target mesh plane figure may be a Delaunay triangulation network, the polygons in the target mesh plane figure may all be triangles, and the computer device may construct a corresponding Delaunay triangulation network for each image frame group according to the empty circumscribed circle property and the largest minimum angle property of the Delaunay triangulation network.
[0069] A method for constructing a Delaunay triangulation network may include, but is not limited to: a segmentation and merging algorithm, a point-by-point insertion algorithm, and a recursive growth algorithm. The segmentation and merging algorithm may segment the matching feature points corresponding to a single image frame group into point subsets that are easy to triangulate (for example, each point subset may include 3 or 4 matching feature points), perform triangulation processing on each point subset, optimize each triangulated network into a Delaunay triangulation network through a local optimization procedure (LOP) algorithm, and may further merge the Delaunay triangulation networks of the point subsets to form a final Delaunay triangulation network. The point-by-point insertion algorithm may construct initial triangles of all matching feature points in a single image frame group, store the initial triangles in a triangle linked list, and may further insert the matching feature points in the image frame group into the triangle linked list sequentially, so that a Delaunay triangulation network corresponding to the image frame group is formed after all the matching feature points in the image frame group are inserted into the triangle linked list.
The recursive growth algorithm may select any matching feature point 1 from matching feature points corresponding to a single image frame group, search for a matching feature point 2 that is closest to the matching feature point 1, connect the matching feature point 1 and the matching feature point 2 as an initial baseline 1-2, search for a matching feature point 3 on one side (for example, a right side or a left side) of the initial baseline 1-2 by adopting a Delaunay principle to form a first Delaunay triangle, use two new sides (a side length 2-3 and a side length 1-3) of the Delaunay triangle as a new initial baseline, and repeat the foregoing steps until all the matching feature points in the image frame group are processed, to obtain a Delaunay triangulation network corresponding to the image frame group.
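In practice, a triangulation satisfying the empty circumscribed circle property can also be produced by a library routine; a minimal sketch, with hypothetical averaged coordinates of matching feature points:

```python
import numpy as np
from scipy.spatial import Delaunay

# hypothetical averaged image coordinates of four matching feature points,
# one of which lies inside the triangle formed by the other three
points = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0], [1.0, 0.5]])
tri = Delaunay(points)
# tri.simplices lists the vertex indices of the connected, non-overlapping
# triangles; one interior point always yields exactly three triangles here
print(len(tri.simplices))  # → 3
```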
[0070] S104. Determine the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
[0071] Specifically, the computer device may determine the motion state of the vehicle according to an average coordinate change value of each triangle in the Delaunay triangulation network, and the motion state of the vehicle may be a static state or a non-static state in this case; or determine the motion state of the vehicle according to direction change values respectively corresponding to three sides included in each triangle in the Delaunay triangulation network, and the motion state of the vehicle may be a linear motion state or a non-linear motion state. The motion state of the vehicle may be used for providing an assistance foundation for positioning and navigation of the vehicle.
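A simplified sketch of such a decision rule, using the average coordinate change of triangle sides for the static/non-static decision and the direction change of the sides for the linear-motion decision; the thresholds (and the absence of angle wrap-around handling) are illustrative assumptions:

```python
import math

def classify_motion(tris_t1, tris_t2, static_thresh=1.0, direction_thresh=2.0):
    # tris_t1/tris_t2: per triangle, three (u, v) vertex coordinates observed
    # in image frame T_i and image frame T_i+1 respectively
    coord_changes, direction_changes = [], []
    for t1, t2 in zip(tris_t1, tris_t2):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            d1 = (t1[b][0] - t1[a][0], t1[b][1] - t1[a][1])
            d2 = (t2[b][0] - t2[a][0], t2[b][1] - t2[a][1])
            # coordinate change of this side between the two frames
            coord_changes.append(abs(d2[0] - d1[0]) + abs(d2[1] - d1[1]))
            # direction change of this side, in degrees (no wrap-around handling)
            delta = math.atan2(d2[1], d2[0]) - math.atan2(d1[1], d1[0])
            direction_changes.append(abs(math.degrees(delta)))
    if sum(coord_changes) / len(coord_changes) < static_thresh:
        return "static"
    if sum(direction_changes) / len(direction_changes) < direction_thresh:
        return "linear"
    return "non-linear"
```

Identical triangles in both frames classify as static; uniformly scaled triangles (sides change length but keep direction) classify as linear motion.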
[0072] In some embodiments, positioning information of the vehicle may be determined according to a positioning and navigation system (for example, satellite positioning) when the motion state of the vehicle is the static state; and when the motion state of the vehicle is the linear motion state, a motion direction of the vehicle may be determined according to the linear motion state, and the positioning information of the vehicle may be determined according to the motion direction and the positioning and navigation system. In other words, the motion state of the vehicle may effectively assist the positioning and navigation system, and may further improve the positioning precision of a terminal configured for the vehicle. In a driving process of the vehicle, an accurate motion state may effectively assist lane-level positioning and navigation.
[0073] In the embodiments of this application, the target image data captured by one or more cameras mounted on the vehicle may be obtained, the matching feature points between two adjacent image frames are extracted, the target mesh plane figure corresponding to the two adjacent image frames is constructed according to the matching feature points between the two adjacent image frames, and the motion state of the vehicle may be accurately evaluated based on the target mesh plane figure, so that the evaluation accuracy of the motion state of the vehicle may be improved. The motion state of the vehicle may effectively assist the positioning and navigation system, and may further improve the positioning precision of a terminal configured for the vehicle.
[0074] Further, referring to
[0075] S201. Obtain in-vehicle image data through a connection line.
[0076] Specifically, a computer device may obtain original image data for a vehicle acquired by a camera. When the vehicle is a driving vehicle, in this case, the computer device may be an in-vehicle terminal, the camera may be a front-facing camera or a driving recorder camera configured for the driving vehicle, and the original image data may be in-vehicle image data. The in-vehicle image data may refer to image data acquired by the camera within a period of time. For example, the in-vehicle image data may refer to N+1 image frames acquired by the camera within 1 second.
[0077] Referring to
[0078] S202. Perform denoising processing on the in-vehicle image data based on a Wiener filter, and perform distortion removing processing on an image according to parameters in the in-vehicle camera to obtain a corrected image.
[0079] Specifically, after obtaining the in-vehicle image data (namely, the original image data) acquired by the camera, the computer device may perform denoising processing on the in-vehicle image data to obtain denoised image data, and may further perform distortion removing processing on the denoised image data according to device parameters corresponding to the camera to obtain target image data associated with the vehicle. The target image data is divided into N+1 image frames, every two neighboring image frames in the N+1 image frames are combined to obtain N image frame groups, and in this case, image frames included in the N image frame groups are all corrected images obtained through denoising processing and distortion removing processing.
[0080] The denoising processing process of the in-vehicle image data may be implemented through a Wiener filter (which may be referred to as a least squares filter), and noise and interference may be filtered from the in-vehicle image data through the Wiener filter. The Wiener filter may refer to a linear filter using least squares as an optimality criterion, and under specific constraint conditions, a square of a difference between an output and a desired output of the Wiener filter is minimized, which may be finally converted into a problem of calculating a solution to a Toeplitz equation through mathematical calculation. The device parameters corresponding to the camera may be parameters in the in-vehicle camera, and the distortion removing processing on the in-vehicle image data may be implemented based on the parameters in the in-vehicle camera. The denoising processing and the distortion removing processing belong to a pre-processing process of the in-vehicle image data; the pre-processing process may further include deblurring processing and smoothing processing, and a method adopted in the pre-processing process is not specifically limited in the embodiments of this application.
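A minimal denoising sketch using the Wiener filter available in SciPy; the synthetic test frame, the noise level, and the window size are illustrative assumptions:

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
# a smooth synthetic "image" plus additive noise, standing in for a frame
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
# the Wiener filter minimizes the mean squared error between its output
# and the desired (clean) signal under its local-statistics model
denoised = wiener(noisy, mysize=5)
```

On this synthetic frame, the mean squared error of `denoised` against `clean` is lower than that of `noisy`.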
[0081] Referring to
[0082] S203. Obtain image feature points of an image frame T.sub.i and an image frame T.sub.i+1 according to an image feature point extraction algorithm.
[0083] S204. Determine matching feature points between the image frame T.sub.i and the image frame T.sub.i+1 through a matching algorithm.
[0084] Every image frame group in the N image frame groups includes two adjacent image frames, and the two adjacent image frames are corrected image frames on which denoising processing and distortion removing processing have been performed. A generation process of a target mesh plane figure corresponding to each image frame group is similar, and for ease of description, the following describes the generation process of the target mesh plane figure by using any image frame group G.sub.i in the N image frame groups as an example, where the image frame group G.sub.i may include a corrected image frame T.sub.i and a corrected image frame T.sub.i+1. The generation process of the target mesh plane figure may include a process of extracting feature points of the image frame T.sub.i and the image frame T.sub.i+1, a process of determining matching feature points between the image frame T.sub.i and the image frame T.sub.i+1, and a process of constructing a Delaunay triangulation network corresponding to the image frame group G.sub.i. Therefore, for a specific implementation of step S203 and step S204, reference may be made to step S102 in the embodiment corresponding to
[0085] S205. Perform Delaunay triangulation processing on all matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
[0086] Specifically, it is assumed that the number of matching feature points between the image frame T.sub.i and the image frame T.sub.i+1 is q, and the computer device may determine, according to image coordinate information of each matching feature point in the q matching feature points in the image frame T.sub.i and the image frame T.sub.i+1, matching coordinate information respectively corresponding to the q matching feature points, where the matching coordinate information is an image coordinate average value in the image frame T.sub.i and the image frame T.sub.i+1, i is a positive integer less than or equal to N, and q is a positive integer. Further, the computer device may obtain an initial polygon linked list associated with the q matching feature points according to the matching coordinate information respectively corresponding to the q matching feature points. For any matching feature point c.sub.r in the q matching feature points, an associated polygon matching the matching feature point c.sub.r is obtained from the initial polygon linked list, where a circumscribed circle of the associated polygon may include the matching feature point c.sub.r, and r is a positive integer less than or equal to q, namely, a minimum value of r is 1, and a maximum value thereof is q. The computer device may delete a common side of the associated polygon, connect the matching feature point c.sub.r to all vertexes of the associated polygon to generate a candidate polygon corresponding to the matching feature point c.sub.r, may further convert the candidate polygon into a target polygon corresponding to the matching feature point c.sub.r, and add the target polygon to the initial polygon linked list, where the target polygon meets the empty circumscribed circle property in this case.
When target polygons respectively corresponding to the q matching feature points are added to the initial polygon linked list, the initial polygon linked list including the target polygons respectively corresponding to the q matching feature points is determined as a target polygon linked list, and a mesh plane figure corresponding to the image frame group G.sub.i is generated according to the target polygon linked list, where the mesh plane figure in this case may be referred to as a candidate mesh plane figure.
[0087] When the mesh plane figure is a Delaunay triangulation network, Delaunay triangulation network processing may be performed on the q matching feature points between the image frame T.sub.i and the image frame T.sub.i+1 based on a Bowyer-Watson algorithm (a triangulation network generation algorithm), and a process thereof may include: 1. Traverse the q matching feature points, construct an initial triangle containing the q matching feature points according to the matching coordinate information respectively corresponding to the q matching feature points, and store the initial triangle into an initial triangle linked list (namely, the initial polygon linked list). 2. Insert each matching feature point in the q matching feature points sequentially (for example, the matching feature point c.sub.r in the q matching feature points): find each affected triangle (namely, the associated polygon) whose circumscribed circle includes the matching feature point c.sub.r from the initial triangle linked list, delete the common sides of the affected triangles, and connect the matching feature point c.sub.r to all vertexes of the affected triangles to obtain candidate triangles (namely, the foregoing candidate polygons) corresponding to the matching feature point c.sub.r, thereby completing insertion of the matching feature point c.sub.r into the initial triangle linked list. 3. Optimize the locally newly formed candidate triangles according to a Delaunay triangulation network optimization principle to generate target triangles associated with the matching feature point c.sub.r, and store the target triangles into the initial triangle linked list. 4.
Perform step 2 and step 3 circularly until all of the q matching feature points are inserted into the initial triangle linked list, where the initial triangle linked list in this case may be referred to as a target triangle linked list (namely, the target polygon linked list), and the target triangle linked list may be determined as the Delaunay triangulation network corresponding to the image frame group G.sub.i.
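The four steps above can be sketched in Python. This is a minimal, illustrative Bowyer-Watson implementation, not the patented method itself: the super-triangle construction, the strict in-circumcircle test, and the final filtering of super-triangle vertexes are standard details assumed here, and the optimization of step 3 is implicit because re-triangulating the cavity boundary already yields Delaunay triangles.

```python
def in_circumcircle(tri, p, pts):
    """True if point p lies strictly inside the circumscribed circle
    of triangle tri (a triple of indexes into pts)."""
    (ax, ay), (bx, by), (cx, cy) = (pts[v] for v in tri)
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:                       # degenerate (collinear) triangle
        return False
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2
    return (p[0] - ux) ** 2 + (p[1] - uy) ** 2 < r2

def bowyer_watson(points):
    """Incremental Delaunay triangulation of 2-D matching feature
    points; returns triangles as triples of point indexes."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0
    r = 10.0 * max(max(xs) - min(xs), max(ys) - min(ys), 1.0)
    # step 1: an initial ("super") triangle containing every point
    pts = list(points) + [(cx - 2*r, cy - r), (cx + 2*r, cy - r), (cx, cy + 2*r)]
    n = len(points)
    tris = [(n, n + 1, n + 2)]
    for i in range(n):                       # step 4: insert the points in turn
        # step 2: affected triangles whose circumscribed circle contains the point
        bad = [t for t in tris if in_circumcircle(t, pts[i], pts)]
        edge_count = {}
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                k = tuple(sorted(e))
                edge_count[k] = edge_count.get(k, 0) + 1
        # common sides of affected triangles appear twice and are deleted
        boundary = [e for e, c in edge_count.items() if c == 1]
        tris = [t for t in tris if t not in bad]
        # step 3: connect the inserted point to all vertexes of the cavity boundary
        tris.extend((a, b, i) for a, b in boundary)
    # discard triangles that still touch a super-triangle vertex
    return [t for t in tris if all(v < n for v in t)]
```

For instance, four corner points plus a center point produce a fan of four triangles around the center.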
[0088] Referring to
[0089] The computer device may delete a common side AB of the triangle ABD and the triangle ABC, connect the matching feature point P to all vertexes of the triangle ABD and the triangle ABC to form new triangles. That is, the matching feature point P is connected to the matching feature point A, the matching feature point P is connected to the matching feature point B, the matching feature point P is connected to the matching feature point C, and the matching feature point P is connected to the matching feature point D, to form a new triangle ADP, a new triangle ACP, a new triangle BCP, and a new triangle BDP. The triangle ADP, the triangle ACP, the triangle BCP, and the triangle BDP are stored into the triangle linked list, and the Delaunay triangulation network corresponding to the image frame group G.sub.i is generated according to the triangle linked list in this case.
[0090] S206. Remove abnormal matching feature points based on an established Delaunay triangulation network.
[0091] Specifically, the matching feature points between the image frame T.sub.i and the image frame T.sub.i+1 may have abnormal matching feature points, so that the computer device may remove the abnormal matching feature points from the Delaunay triangulation network generated according to step S205. The computer device may determine the Delaunay triangulation network generated through step S205 as a candidate mesh plane figure, and obtain any polygon D.sub.j from the candidate mesh plane figure, where j is a positive integer less than or equal to the number of polygons included in the candidate mesh plane figure. Further, the computer device may obtain first vertex coordinates of matching feature points corresponding to the polygon D.sub.j in the image frame T.sub.i and second vertex coordinates of the matching feature points corresponding to the polygon D.sub.j in the image frame T.sub.i+1. A coordinate change value corresponding to each side length included in the polygon D.sub.j is determined according to the first vertex coordinates and the second vertex coordinates; a side length whose coordinate change value is less than a change threshold is determined as a target side length, normal verification parameters (in this case, the normal verification parameter may be a counter value) corresponding to matching feature points on two ends of the target side length are accumulated to obtain a total value of a normal verification parameter corresponding to each of the matching feature points of the polygon D.sub.j; and a matching feature point whose total value of the normal verification parameter is greater than a parameter threshold is determined as a normal matching feature point, and another matching feature point other than the normal matching feature point may be referred to as an abnormal matching feature point. Specifically, matching feature points other than normal matching feature points in the q matching feature points may be further deleted.
[0092] It is assumed that the candidate mesh plane figure includes m triangles, and the candidate mesh plane figure in this case may be shown in the following formula (2):
Θ={Δ.sub.D.sub.1,Δ.sub.D.sub.2, . . . ,Δ.sub.D.sub.m} (2)
[0093] Θ represents the candidate mesh plane figure corresponding to the image frame group G.sub.i, Δ.sub.D.sub.j represents the j-th triangle in the candidate mesh plane figure, and m represents the number of triangles included in the candidate mesh plane figure.
[0094] The first vertex coordinates may refer to coordinates of the three vertexes of the triangle Δ.sub.D.sub.j in the image frame T.sub.i.
[0095] Second vertex coordinates of the three vertexes of the triangle Δ.sub.D.sub.j may be obtained in the image frame T.sub.i+1 in the same manner.
[0096] The second vertex coordinates may refer to coordinates of the three vertexes of the triangle Δ.sub.D.sub.j in the image frame T.sub.i+1.
[0097] When the three vertexes of the triangle Δ.sub.D.sub.j are a matching feature point A, a matching feature point B, and a matching feature point C, and the coordinates of a vertex (for example, the matching feature point A) in the image frame T.sub.i and the image frame T.sub.i+1 are denoted as (u.sub.A.sup.i, v.sub.A.sup.i) and (u.sub.A.sup.i+1, v.sub.A.sup.i+1) respectively, a coordinate change value of a side length AB in the triangle Δ.sub.D.sub.j may be shown in the following formula (5):
dAB=|(u.sub.A.sup.i−u.sub.B.sup.i)−(u.sub.A.sup.i+1−u.sub.B.sup.i+1)|+|(v.sub.A.sup.i−v.sub.B.sup.i)−(v.sub.A.sup.i+1−v.sub.B.sup.i+1)| (5)
[0098] A coordinate change value of a side length BC in the triangle Δ.sub.D.sub.j may be shown in the following formula (6):
dBC=|(u.sub.B.sup.i−u.sub.C.sup.i)−(u.sub.B.sup.i+1−u.sub.C.sup.i+1)|+|(v.sub.B.sup.i−v.sub.C.sup.i)−(v.sub.B.sup.i+1−v.sub.C.sup.i+1)| (6)
[0099] A coordinate change value of a side length AC in the triangle Δ.sub.D.sub.j may be shown in the following formula (7):
dAC=|(u.sub.A.sup.i−u.sub.C.sup.i)−(u.sub.A.sup.i+1−u.sub.C.sup.i+1)|+|(v.sub.A.sup.i−v.sub.C.sup.i)−(v.sub.A.sup.i+1−v.sub.C.sup.i+1)| (7)
[0100] A counter may be allocated to each matching feature point in the q matching feature points, and if the coordinate change value dAB of the side length AB in formula (5) is less than a change threshold (the change threshold may be preset), the side length AB may be determined as a target side length, and counters of two vertexes (namely, the matching feature point A and the matching feature point B) of the target side length AB may be increased by 1; and if the coordinate change value dAB is greater than or equal to the change threshold, no processing may be performed on the counters of the matching feature point A and the matching feature point B. If the coordinate change value dBC of the side length BC in formula (6) is less than the change threshold, the side length BC may be determined as a target side length, and counters of two vertexes (namely, the matching feature point B and the matching feature point C) of the target side length BC may be increased by 1; and if the coordinate change value dBC is greater than or equal to the change threshold, no processing may be performed on the counters of the matching feature point B and the matching feature point C. If the coordinate change value dAC of the side length AC in formula (7) is less than the change threshold, the side length AC may be determined as a target side length, and counters of two vertexes (namely, the matching feature point A and the matching feature point C) of the target side length AC may be increased by 1; and if the coordinate change value dAC is greater than or equal to the change threshold, no processing may be performed on the counters of the matching feature point A and the matching feature point C.
[0101] The computer device may circularly process each triangle in the candidate mesh plane figure according to formula (5) to formula (7) until the m triangles are all processed, to obtain total values of counters (namely, the total values of the normal verification parameters) respectively corresponding to the q matching feature points. If a total value of the counter of a matching feature point is greater than a parameter threshold (the parameter threshold may be preset, and for example, the parameter threshold is 4), the matching feature point may be determined as a normal matching feature point, and the matching feature point may be reserved; and if a total value of the counter of a matching feature point is less than or equal to the parameter threshold (for example, the parameter threshold is 4), the matching feature point may be determined as an abnormal matching feature point, and the abnormal matching feature point may be deleted.
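The counter-based verification above can be sketched as follows. The exact per-side change metric depends on formulas (5) to (7), so this sketch assumes a side is a "target side length" when its two endpoints moved by nearly the same displacement between the two frames; the threshold values are illustrative, and shared sides are counted once per triangle, as in the per-triangle loop described above.

```python
def remove_abnormal_points(triangles, coords_t, coords_t1,
                           change_threshold=0.5, param_threshold=2):
    """Counter-based ("normal verification parameter") filtering of
    matching feature points.  triangles: index triples; coords_t and
    coords_t1: point coordinates in frames T_i and T_i+1."""
    counters = {}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            # how differently the two endpoints of the side moved
            dua = coords_t1[a][0] - coords_t[a][0]
            dva = coords_t1[a][1] - coords_t[a][1]
            dub = coords_t1[b][0] - coords_t[b][0]
            dvb = coords_t1[b][1] - coords_t[b][1]
            change = abs(dua - dub) + abs(dva - dvb)
            if change < change_threshold:        # a "target side length"
                counters[a] = counters.get(a, 0) + 1
                counters[b] = counters.get(b, 0) + 1
    # keep points whose accumulated counter exceeds the parameter threshold
    return {p for p, c in counters.items() if c > param_threshold}
```

A point that moved inconsistently with its neighbors (for example, a reflection on the windshield) collects no votes and is removed.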
[0102] S207. Obtain the Delaunay triangulation network from which abnormal matching feature points are removed.
[0103] Specifically, the computer device may update, according to normal matching feature points, the candidate mesh plane figure to a target mesh plane figure corresponding to the image frame group G.sub.i, for example, obtain a Delaunay triangulation network from which abnormal matching feature points are removed according to the normal matching feature points from which the abnormal matching feature points are removed. That is, the computer device may delete triangles including the abnormal matching feature points from the candidate mesh plane figure, to form a final Delaunay triangulation network corresponding to the image frame group G.sub.i.
[0104] In some embodiments, the computer device may further re-construct, according to the Delaunay triangulation processing process described in step S205, a Delaunay triangulation network according to the normal matching feature points, and the Delaunay triangulation network including the normal matching feature points in this case may be referred to as the target mesh plane figure.
[0105] Target Delaunay triangulation networks respectively corresponding to the N image frame groups may be obtained based on step S203 to step S207.
[0106] In the embodiments of this application, in-vehicle image data corresponding to a driving vehicle is obtained, matching feature points between two adjacent image frames are extracted, a Delaunay triangulation network corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the Delaunay triangulation network. Therefore, the effectiveness of the matching feature points may be improved, and determining a motion state of the driving vehicle based on the Delaunay triangulation network may improve the evaluation accuracy of the motion state of the vehicle.
[0107] Referring to
[0108] S301. Pre-process in-vehicle image data (Eliminate image noise and distortion).
[0109] S302. Extract image feature points of the in-vehicle image data.
[0110] S303. Match feature points of adjacent image frames.
[0111] S304. Establish a Delaunay triangulation network by using matching feature points.
[0112] S305. Remove abnormal matching feature points based on the Delaunay triangulation network.
[0113] In this embodiment of this application, an implementation procedure of the method for evaluating a motion state of a vehicle is described by using an example in which the vehicle is a driving vehicle and a target mesh plane figure is a Delaunay triangulation network. For specific implementations of step S301 to step S305, reference may be made to step S201 to step S207 in the embodiment corresponding to
[0114] S306. Determine a motion state of a vehicle based on the Delaunay triangulation network.
[0115] The computer device may determine a motion state of a driving vehicle according to triangles included in the Delaunay triangulation network, and the following describes a process of determining a motion state of a vehicle (or a process of evaluating a motion state of a vehicle) in detail based on
[0116] Referring to
[0117] S401. Obtain a Delaunay triangulation network from which abnormal matching feature points are removed.
[0118] Specifically, the computer device may obtain an image frame group G.sub.i in N image frame groups, and further obtain a target mesh plane figure H.sub.i corresponding to the image frame group G.sub.i, where the target mesh plane figure H.sub.i includes M polygons, M is a positive integer, and i is a positive integer less than or equal to N. In this embodiment of this application, the target mesh plane figure H.sub.i may refer to a Delaunay triangulation network from which abnormal matching feature points are removed and corresponding to the image frame group G.sub.i (for ease of description, the “Delaunay triangulation network from which abnormal matching feature points are removed and corresponding to the image frame group G.sub.i” is referred to as a “Delaunay triangulation network corresponding to the image frame group G.sub.i”). The image frame group G.sub.i may include an image frame T.sub.i and an image frame T.sub.i+1, the Delaunay triangulation network corresponding to the image frame group G.sub.i may include M triangles, and a minimum value of i is 1 and a maximum value thereof is N. For ease of description, the following describes the determining process of a static state of a vehicle by using the Delaunay triangulation network corresponding to the image frame group G.sub.i as an example.
[0119] S402. Calculate average coordinate change values of each triangle in the Delaunay triangulation network in a U-axis direction and a V-axis direction.
[0120] Specifically, the computer device may obtain image coordinate information of all matching feature points included in the Delaunay triangulation network in the image frame T.sub.i and the image frame T.sub.i+1 respectively, that is, image coordinate information of vertexes of each triangle in the Delaunay triangulation network in the image frame T.sub.i and the image frame T.sub.i+1. Further, the computer device may obtain average coordinate change values respectively corresponding to the M triangles in the Delaunay triangulation network (namely, the polygon in this embodiment of this application is a triangle) according to the image coordinate information of the vertexes of each triangle in the image frame T.sub.i and the image frame T.sub.i+1, where the average coordinate change value herein may include average coordinate change values in a U-axis direction and in a V-axis direction in an image coordinate system.
[0121] The Delaunay triangulation network corresponding to the image frame group G.sub.i may be shown in the following formula (8):
Ψ={Δ.sub.Ds.sub.1,Δ.sub.Ds.sub.2, . . . ,Δ.sub.Ds.sub.M} (8)
[0122] Ψ may represent the Delaunay triangulation network (namely, the Delaunay triangulation network from which abnormal matching feature points are removed) corresponding to the image frame group G.sub.i, Δ.sub.Ds.sub.j represents the j-th triangle in the Delaunay triangulation network, and M represents the number of triangles included in the Delaunay triangulation network.
[0123] u.sub.k.sup.i and v.sub.k.sup.i (k=1, 2, 3) may represent image coordinate information of the three vertexes of the triangle Δ.sub.Ds.sub.j in the image frame T.sub.i in the U-axis direction and the V-axis direction, respectively.
[0124] Image coordinate information of the three vertexes of the triangle Δ.sub.Ds.sub.j in the image frame T.sub.i+1 may be denoted in the same manner.
[0125] u.sub.k.sup.i+1 and v.sub.k.sup.i+1 (k=1, 2, 3) may represent image coordinate information of the three vertexes of the triangle Δ.sub.Ds.sub.j in the image frame T.sub.i+1 in the U-axis direction and the V-axis direction, respectively.
[0126] An average coordinate change value d.sub.u,sj of the triangle Δ.sub.Ds.sub.j in the U-axis direction may be shown in the following formula (11):
d.sub.u,sj=(|u.sub.1.sup.i−u.sub.1.sup.i+1|+|u.sub.2.sup.i−u.sub.2.sup.i+1|+|u.sub.3.sup.i−u.sub.3.sup.i+1|)/3 (11)
[0127] An average coordinate change value d.sub.v,sj of the triangle Δ.sub.Ds.sub.j in the V-axis direction may be shown in the following formula (12):
d.sub.v,sj=(|v.sub.1.sup.i−v.sub.1.sup.i+1|+|v.sub.2.sup.i−v.sub.2.sup.i+1|+|v.sub.3.sup.i−v.sub.3.sup.i+1|)/3 (12)
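One plausible reading of the average coordinate change values is the mean absolute per-vertex displacement of a triangle between the two frames, computed separately per axis; a sketch under that assumption:

```python
def triangle_avg_changes(triangles, coords_t, coords_t1):
    """Average absolute coordinate change of each triangle's three
    vertexes between frames T_i and T_i+1, per axis.  Returns the
    U-axis and V-axis change lists (one value per triangle)."""
    d_u, d_v = [], []
    for tri in triangles:
        d_u.append(sum(abs(coords_t1[k][0] - coords_t[k][0]) for k in tri) / 3.0)
        d_v.append(sum(abs(coords_t1[k][1] - coords_t[k][1]) for k in tri) / 3.0)
    return d_u, d_v
```

A stationary camera yields values near zero on both axes; forward motion yields a characteristic radial spread.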
[0128] S403. Combine average coordinate change values of all triangles in the Delaunay triangulation network in the U-axis direction and in the V-axis direction into two sequences.
[0129] Specifically, the computer device may form a sequence of the Delaunay triangulation network in the U-axis direction according to average coordinate change values of the M triangles in the Delaunay triangulation network in the U-axis direction, and the sequence may be shown in the following formula (13):
Φ.sub.u={d.sub.u,s1,d.sub.u,s2, . . . ,d.sub.u,sM} (13)
[0130] Further, the computer device may form a sequence of the Delaunay triangulation network in the V-axis direction according to average coordinate change values of the M triangles in the Delaunay triangulation network in the V-axis direction, and the sequence may be shown in the following formula (14):
Φ.sub.v={d.sub.v,s1,d.sub.v,s2, . . . ,d.sub.v,sM} (14)
[0131] S404. Process the two sequences, and obtain average values, medians, and standard deviation values respectively corresponding to the two sequences.
[0132] Specifically, the computer device may process a sequence Φ.sub.u and a sequence Φ.sub.v to obtain statistical values associated with the Delaunay triangulation network. In this case, the statistical values may include an average value, a median, and a standard deviation value calculated according to the sequence Φ.sub.u and an average value, a median, and a standard deviation value calculated according to the sequence Φ.sub.v. The average values corresponding to the sequence Φ.sub.u and the sequence Φ.sub.v may be referred to as coordinate average values, the medians corresponding to the sequence Φ.sub.u and the sequence Φ.sub.v may be referred to as coordinate medians, and the standard deviation values corresponding to the sequence Φ.sub.u and the sequence Φ.sub.v may be referred to as coordinate standard deviation values. For a specific calculation process of the statistical values, reference may be made to the description in the following embodiment corresponding to
[0133] S405. Determine that the current vehicle is in a static state when the average values, the medians, and the standard deviation values of the two sequences are all less than a given threshold.
[0134] Specifically, a target mesh plane figure may be constructed for each image frame group, namely, N Delaunay triangulation networks may be obtained according to normal matching feature points respectively corresponding to the N image frame groups, one Delaunay triangulation network may correspond to one sequence Φ.sub.u and one sequence Φ.sub.v, and when statistical values respectively associated with the N Delaunay triangulation networks are all less than a first threshold, it may be determined that the motion state of the vehicle is a static state. It may be understood that, the first threshold herein may include a first average value threshold, a first median threshold, and a standard deviation value threshold, and the first average value threshold, the first median threshold, and the standard deviation value threshold may be different preset values.
[0135] In the embodiments of this application, in-vehicle image data corresponding to a vehicle is obtained, matching feature points between two adjacent image frames are extracted, a Delaunay triangulation network corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the Delaunay triangulation network. Therefore, the effectiveness of the matching feature points may be improved, and determining a motion state of the vehicle based on the Delaunay triangulation network may not only improve the evaluation accuracy of the motion state of the vehicle, but may also simplify the evaluation procedure of the static state of the vehicle, thereby further improving the evaluation efficiency of the motion state of the vehicle.
[0136] Further, referring to
[0137] The first coordinate sequence may include a sequence of the target mesh plane figure H.sub.i in the U-axis direction (for example, a sequence obtained by sorting average coordinate change values included in a first sequence below in ascending order) and a sequence of the target mesh plane figure H.sub.i in the V-axis direction (for example, a sequence obtained by sorting average coordinate change values included in a second sequence below in ascending order). The statistical values may include a coordinate average value, a coordinate median, and a coordinate standard deviation value, where the coordinate average value may include an average value corresponding to the sequence in the U-axis direction and an average value corresponding to the sequence in the V-axis direction, the coordinate median may include a median corresponding to the sequence in the U-axis direction and a median corresponding to the sequence in the V-axis direction, and the coordinate standard deviation value may include a standard deviation value corresponding to the sequence in the U-axis direction and a standard deviation value corresponding to the sequence in the V-axis direction. The first threshold may include a first average value threshold, a first median threshold, and a standard deviation value threshold adopting different values.
[0138] Processing processes of the sequence in the U-axis direction and the sequence in the V-axis direction are similar (as shown in
[0139] As shown in
[0140] S1201. Obtain a Delaunay triangulation network from which abnormal matching feature points are removed.
[0141] S1202. Calculate average coordinate change values of each triangle in the Delaunay triangulation network in a U-axis direction and a V-axis direction.
[0142] For a specific implementation of step S1201 and step S1202, reference may be made to step S401 and step S402 in the embodiment corresponding to
[0143] S1203. Combine the average coordinate change value of each triangle in the U-axis direction into a first sequence.
[0144] Specifically, the computer device may form a sequence of the Delaunay triangulation network in the U-axis direction according to average coordinate change values of M triangles in the Delaunay triangulation network in the U-axis direction. In this case, the sequence may be referred to as a first sequence, and the first sequence is presented as the foregoing formula (13).
[0145] S1204. Sort elements of the first sequence in ascending order.
[0146] Specifically, the first sequence may include the average coordinate change values of the M triangles of the Delaunay triangulation network in the U-axis direction respectively, and a first coordinate sequence in the U-axis direction may be obtained by sorting the average coordinate change values included in the first sequence in ascending order, where the first coordinate sequence in the U-axis direction may be shown in the following formula (15):
Φ.sub.u.sup.sort={d.sub.u,S1,d.sub.u,S2, . . . ,d.sub.u,SM} (15)
[0147] Φ.sub.u.sup.sort may represent the first coordinate sequence in the U-axis direction of the Delaunay triangulation network corresponding to the image frame group G.sub.i, d.sub.u,S1 may be a minimum average coordinate change value in the first coordinate sequence Φ.sub.u.sup.sort, and d.sub.u,SM may be a maximum average coordinate change value in the first coordinate sequence Φ.sub.u.sup.sort.
[0148] S1205. Initialize a first tagged array.
[0149] Specifically, an initial tagged array corresponding to the first coordinate sequence in the U-axis direction is obtained. In this case, the initial tagged array may refer to an initialized first tagged array, and the initial tagged array may be set to:
mask[M]={0,0, . . . ,0} (16)
[0150] The initial tagged array mask[M] may be an array including M elements, namely, the number of elements in the initial tagged array mask[M] is the same as the number of triangles included in the Delaunay triangulation network.
[0151] S1206. Calculate an upper quartile, a lower quartile, and a median of the sorted first sequence.
[0152] Specifically, the sorted first sequence may be referred to as the first coordinate sequence Φ.sub.u.sup.sort of the M triangles in the U-axis direction, and the computer device may calculate an upper quartile, a lower quartile, and a median of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction. The upper quartile may refer to an average coordinate change value at a 75% position in the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, the lower quartile may refer to an average coordinate change value at a 25% position in the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, and the median may refer to an average coordinate change value at a 50% position in the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction.
[0153] When the computer device calculates the lower quartile of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, if (M+1)/4 is an integer, the lower quartile may be represented as M.sub.1/4=d.sub.u,S((M+1)/4);
and if (M+1)/4 is not an integer, the lower quartile may be represented as:
M.sub.1/4=d.sub.u,S(Int((M+1)/4))+Dec((M+1)/4)·(d.sub.u,S(Int((M+1)/4)+1)−d.sub.u,S(Int((M+1)/4))) (17)
[0154] Int in formula (17) represents an integer calculation operation (taking the integer part), and Dec represents a decimal calculation operation (taking the decimal part).
[0155] When the computer device calculates the upper quartile of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, if 3(M+1)/4 is an integer, the upper quartile may be represented as M.sub.3/4=d.sub.u,S(3(M+1)/4);
and if 3(M+1)/4 is not an integer, the upper quartile may be represented as:
M.sub.3/4=d.sub.u,S(Int(3(M+1)/4))+Dec(3(M+1)/4)·(d.sub.u,S(Int(3(M+1)/4)+1)−d.sub.u,S(Int(3(M+1)/4))) (18)
[0156] When the computer device calculates the median of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, if (M+1)/2 is an integer, the median may be represented as M.sub.1/2=d.sub.u,S((M+1)/2);
and if (M+1)/2 is not an integer, the median may be represented as:
M.sub.1/2=d.sub.u,S(Int((M+1)/2))+Dec((M+1)/2)·(d.sub.u,S(Int((M+1)/2)+1)−d.sub.u,S(Int((M+1)/2))) (19)
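The (M+1)-position convention above (exact element when (M+1)·p is an integer at a 1-based position, otherwise interpolation between the neighboring elements using the Int and Dec parts) can be sketched as a single helper; the clamping at the sequence ends is an added safety assumption not spelled out in the text:

```python
def quantile_m1(sorted_seq, fraction):
    """Quantile of an ascending sequence using the (M+1)*fraction
    position rule: exact element when the position is an integer,
    otherwise linear interpolation between neighbors."""
    M = len(sorted_seq)
    pos = (M + 1) * fraction
    k = int(pos)                 # Int(.): integer part
    frac = pos - k               # Dec(.): decimal part
    if frac == 0:
        return sorted_seq[k - 1]
    if k < 1:                    # clamp below the first element (assumption)
        return sorted_seq[0]
    if k >= M:                   # clamp above the last element (assumption)
        return sorted_seq[-1]
    return sorted_seq[k - 1] + frac * (sorted_seq[k] - sorted_seq[k - 1])
```

For a seven-element sequence the positions 2, 4, and 6 fall exactly on elements, so the lower quartile, median, and upper quartile are the 2nd, 4th, and 6th values.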
[0157] S1207. Update the first tagged array.
[0158] Specifically, after the upper quartile M.sub.3/4 and the lower quartile M.sub.1/4 corresponding to the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction are calculated, the computer device may perform gross error detection on the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction according to the upper quartile M.sub.3/4 and the lower quartile M.sub.1/4, namely, perform the following processing on each element (average coordinate change value) included in the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction: when d.sub.u,Sj<M.sub.1/4−1.5·(M.sub.3/4−M.sub.1/4) or d.sub.u,Sj>M.sub.3/4+1.5·(M.sub.3/4−M.sub.1/4), mask[j]=mask[j]+1; or otherwise, no processing is performed, where j is a positive integer less than or equal to M, and this operation is a process of updating the first tagged array (the initial tagged array). A gross error is an error greater than the maximum error that is likely to occur under normal observation conditions, and may be caused, for example, by carelessness of a staff member or by equipment failure.
[0159] S1208. Process the sorted first sequence to obtain a new sequence.
[0160] Specifically, the computer device may update the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction according to the median M.sub.1/2 of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction to obtain a new sequence. In this case, the new sequence may be referred to as a second coordinate sequence in the U-axis direction.
[0161] S1209. Calculate an upper quartile, a lower quartile, and a median of the new sequence.
[0162] Specifically, the computer device may calculate a lower quartile, an upper quartile, and a median corresponding to the second coordinate sequence in the U-axis direction.
[0163] Further, the computer device may determine an effective value range of element values included in the second coordinate sequence in the U-axis direction according to the upper quartile and the lower quartile corresponding to the second coordinate sequence.
[0164] The computer device may further calculate a median of the second coordinate sequence in the U-axis direction.
[0165] S1210. Update the first tagged array.
[0166] Specifically, the computer device may update the first tagged array again according to the median and the effective value range corresponding to the second coordinate sequence in the U-axis direction: when an element of the second coordinate sequence falls outside the effective value range,
mask[j]=mask[j]+1; or otherwise, no processing is performed. This operation is a process of updating the updated first tagged array again, and an updated first tagged array may be obtained after this operation.
[0167] S1211. Remove a gross error in the first sequence according to the first tagged array.
[0168] Specifically, the computer device may remove an average coordinate change value included in the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction according to the first tagged array mask[M] obtained through step S1210, and if mask[j]≥1, an average coordinate change value d.sub.u,Sj in the first coordinate sequence Φ.sub.u.sup.sort is removed, and the average coordinate change value d.sub.u,Sj in this case may be referred to as a gross error in the first coordinate sequence. If mask[j]=0, the average coordinate change value d.sub.u,Sj in the first coordinate sequence Φ.sub.u.sup.sort is reserved, and the first coordinate sequence from which the gross error is removed is determined as a fourth coordinate sequence {tilde over (Φ)}.sub.u in the U-axis direction of the Delaunay triangulation network.
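Steps S1205 to S1211 amount to two passes of interquartile-range (IQR) gross-error detection over a tagged array. A sketch, assuming the "new sequence" of step S1208 is formed from absolute deviations from the median M.sub.1/2 (the text does not spell this out), and using Python's `statistics.quantiles`, whose "exclusive" method matches the (M+1)-position quartile convention:

```python
import statistics

def iqr_bounds(values):
    """Effective value range [Q1 - 1.5*IQR, Q3 + 1.5*IQR]; the
    'exclusive' method uses (M+1)-based quartile positions."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="exclusive")
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def remove_gross_errors(seq):
    """Two-pass tagged-array filtering of average coordinate change
    values; returns the surviving values in ascending order."""
    s = sorted(seq)
    mask = [0] * len(s)                    # the initialized tagged array
    lo, hi = iqr_bounds(s)                 # pass 1: the raw sorted values
    for j, x in enumerate(s):
        if x < lo or x > hi:
            mask[j] += 1
    med = statistics.median(s)
    dev = [abs(x - med) for x in s]        # pass 2: deviations from the median
    dlo, dhi = iqr_bounds(sorted(dev))     # (assumed reading of S1208-S1210)
    for j, d in enumerate(dev):
        if d < dlo or d > dhi:
            mask[j] += 1
    # keep only elements never tagged (mask[j] == 0)
    return [x for j, x in enumerate(s) if mask[j] == 0]
```

A single wildly different element, such as a triangle sitting on a moving vehicle in an otherwise static scene, is tagged by both passes and removed.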
[0169] S1212. Calculate an average value, a median, and a standard deviation value of the first sequence from which the gross error is removed.
[0170] Specifically, the computer device may obtain an average value Avg.sub.u, a median Med.sub.u, and a standard deviation value STD.sub.u of the fourth coordinate sequence {tilde over (Φ)}.sub.u in the U-axis direction according to element values included in the fourth coordinate sequence {tilde over (Φ)}.sub.u in the U-axis direction. In this case, the average value Avg.sub.u, the median Med.sub.u, and the standard deviation value STD.sub.u may be referred to as statistical values of the Delaunay triangulation network in the U-axis direction.
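Step S1212 reduces to three standard statistics over the cleaned sequence. Whether the sample or the population standard deviation is intended is not stated, so the population form is assumed here:

```python
import statistics

def sequence_stats(seq):
    """Average, median, and standard deviation of the sequence from
    which gross errors were removed (population deviation assumed)."""
    return statistics.mean(seq), statistics.median(seq), statistics.pstdev(seq)
```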
[0171] S1213. Combine the average coordinate change value of each triangle in the V-axis direction into a second sequence.
[0172] Specifically, the computer device may form a sequence of the Delaunay triangulation network in the V-axis direction according to average coordinate change values of the M triangles in the Delaunay triangulation network in the V-axis direction. In this case, the sequence may be referred to as a second sequence, and the second sequence is presented as the foregoing formula (14).
[0173] S1214. Sort elements of the second sequence in ascending order.
[0174] Specifically, the second sequence may include the average coordinate change values of the M triangles of the Delaunay triangulation network in the V-axis direction respectively, and a first coordinate sequence in the V-axis direction may be obtained by sorting the average coordinate change values included in the second sequence in ascending order, where the first coordinate sequence in the V-axis direction may be shown in the following formula (21):
Φ.sub.v.sup.sort={d.sub.v,S1,d.sub.v,S2, . . . ,d.sub.v,SM} (21)
[0175] Φ.sub.v.sup.sort may represent the first coordinate sequence in the V-axis direction of the Delaunay triangulation network corresponding to the image frame group G.sub.i, d.sub.v,S1 may be a minimum average coordinate change value in the first coordinate sequence Φ.sub.v.sup.sort, and d.sub.v,SM may be a maximum average coordinate change value in the first coordinate sequence Φ.sub.v.sup.sort.
[0176] S1215. Initialize a second tagged array.
[0177] Specifically, an initial tagged array corresponding to the first coordinate sequence in the V-axis direction and the initial tagged array corresponding to the first coordinate sequence in the U-axis direction are the same, namely, an initialized second tagged array and the initialized first tagged array are the same, the initialized second tagged array may be shown in the foregoing formula (16), and the second tagged array may also be an array including M elements.
[0178] S1216. Calculate an upper quartile, a lower quartile, and a median of the sorted second sequence.
[0179] S1217. Update the second tagged array.
[0180] S1218. Process the second sequence to obtain a new sequence.
[0181] S1219. Calculate an upper quartile, a lower quartile, and a median of the new sequence.
[0182] S1220. Update the second tagged array.
[0183] S1221. Remove a gross error in the second sequence according to the second tagged array.
[0184] S1222. Calculate an average value, a median, and a standard deviation value of the second sequence from which the gross error is removed.
[0185] Specifically, the computer device may process the first coordinate sequence Φ.sub.v.sup.sort in the V-axis direction according to step S1216 to step S1222, for a processing process of the first coordinate sequence Φ.sub.v.sup.sort, reference may be made to the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction in step S1206 to step S1212, and details are not described herein again. The processing process of the first coordinate sequence Φ.sub.v.sup.sort in the V-axis direction is similar to that of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, namely, in the processing processes, processed objects are different, but processing manners are the same. Based on step S1206 to step S1212, a fourth coordinate sequence {tilde over (Φ)}.sub.v of the Delaunay triangulation network in the V-axis direction may be obtained. Further, an average value Avg.sub.v, a median Med.sub.v, and a standard deviation value STD.sub.v of the fourth coordinate sequence {tilde over (Φ)}.sub.v in the V-axis direction may be obtained according to element values included in the fourth coordinate sequence {tilde over (Φ)}.sub.v in the V-axis direction. In this case, the average value Avg.sub.v, the median Med.sub.v, and the standard deviation value STD.sub.v may be referred to as statistical values of the Delaunay triangulation network in the V-axis direction. The average value Avg.sub.u, the median Med.sub.u, the standard deviation value STD.sub.u, the average value Avg.sub.v, the median Med.sub.v, and the standard deviation value STD.sub.v may be referred to as statistical values of the Delaunay triangulation network corresponding to the image frame group G.sub.i.
[0186] S1223. Determine that the vehicle is in a static state when the average values, the medians, and the standard deviation values of the two sequences are all less than a given threshold; or otherwise, determine that the vehicle is in a non-static state.
[0187] Specifically, according to the foregoing step S1201 to step S1222, the computer device may obtain statistical values respectively corresponding to the N Delaunay triangulation networks. That is, each Delaunay triangulation network may be associated with the average value Avg.sub.u, the median Med.sub.u, and the standard deviation value STD.sub.u in the U-axis direction and the average value Avg.sub.v, the median Med.sub.v, and the standard deviation value STD.sub.v in the V-axis direction. When the statistical values respectively associated with the N Delaunay triangulation networks are all less than a given threshold (namely, the first threshold), it may be determined that the motion state of the vehicle is a static state; and when a statistical value in the statistical values respectively associated with the N Delaunay triangulation networks is greater than or equal to the given threshold (namely, the first threshold), it may be determined that the motion state of the vehicle is a non-static state.
[0188] In some embodiments, when the first threshold includes a first average value threshold, a first median threshold, and a standard deviation value threshold that adopt different values, if the average values (including the average values Avg.sub.u and the average values Avg.sub.v) respectively associated with the N Delaunay triangulation networks are all less than the first average value threshold, the medians (including the medians Med.sub.u and the medians Med.sub.v) respectively associated with the N Delaunay triangulation networks are all less than the first median threshold, and the standard deviation values (including the standard deviation values STD.sub.u and the standard deviation values STD.sub.v) respectively associated with the N Delaunay triangulation networks are all less than the standard deviation value threshold, it may be determined that the motion state of the vehicle is a static state; or otherwise, it may be determined that the motion state of the vehicle is a non-static state.
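The static-state decision described above can be sketched in a few lines. The helper below is a minimal illustration only; the function name, the layout of the input sequences, and the threshold values are assumptions for illustration and are not taken from this application:

```python
import statistics

def is_static(change_sequences, avg_thr=1.5, med_thr=1.5, std_thr=1.0):
    """Static-state check sketch: every sequence of filtered coordinate
    changes (one per axis per Delaunay triangulation network) must have
    an average value, a median, and a standard deviation below the given
    thresholds; any violation means a non-static state.
    Threshold values here are illustrative, not from the specification."""
    for seq in change_sequences:
        if not seq:
            return False
        if (statistics.mean(seq) >= avg_thr
                or statistics.median(seq) >= med_thr
                or statistics.pstdev(seq) >= std_thr):
            return False
    return True
```

For example, sub-pixel coordinate changes in both axes yield a static verdict, while changes of several pixels do not.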
[0189] In the embodiments of this application, in-vehicle image data corresponding to a vehicle is obtained, matching feature points between two adjacent image frames are extracted, a Delaunay triangulation network corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the Delaunay triangulation network. Therefore, the effectiveness of the matching feature points may be improved, and determining a motion state of the vehicle based on the Delaunay triangulation network may not only improve the evaluation accuracy of the motion state of the vehicle, but may also simplify the evaluation procedure of the static state of the vehicle, thereby further improving the evaluation efficiency of the motion state of the vehicle. The motion state of the vehicle is evaluated based on the in-vehicle image data and the Delaunay triangulation network, so that limitations to a scenario in which the vehicle is located may be reduced, thereby effectively improving the evaluation accuracy of the motion state of the vehicle in a complex scenario (such as city overpass) and further improving the applicability of the method for evaluating a motion state of a vehicle.
[0190] Referring to
[0191] Any polygon in the M polygons may include a plurality of side lengths. For example, when the polygon is a triangle, the polygon includes three sides. In this case, the first direction may include the directions of the three sides in the image frame T.sub.i, the second direction may include the directions of the three sides in the image frame T.sub.i+1, and the number of direction change sequences is 3 (for example, a third sequence, a fourth sequence, and a fifth sequence in the embodiment corresponding to
[0192] As shown in
[0193] S501. Obtain a Delaunay triangulation network from which abnormal matching feature points are removed.
[0194] For a specific implementation process of step S501, reference may be made to step S401 in the embodiment corresponding to
[0195] S502. Calculate directions of three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i.
[0196] Specifically, the computer device may obtain image coordinate information of all matching feature points included in the Delaunay triangulation network in the image frame T.sub.i, that is, image coordinate information of the vertexes of each triangle in the Delaunay triangulation network in the image frame T.sub.i. The computer device may further obtain the directions of the three sides included in each triangle in the Delaunay triangulation network (namely, each polygon in this embodiment of this application is a triangle) according to the image coordinate information of the vertexes of each triangle in the image frame T.sub.i.
[0197] The Delaunay triangulation network corresponding to the image frame group G.sub.i may be shown in the foregoing formula (8), and the Delaunay triangulation network may include M triangles. Image coordinate information of the three vertexes of any triangle Δ.sub.Ds in the M triangles in the image frame T.sub.i may be represented as (u.sub.i.sup.Ds,1, v.sub.i.sup.Ds,1), (u.sub.i.sup.Ds,2, v.sub.i.sup.Ds,2), and (u.sub.i.sup.Ds,3, v.sub.i.sup.Ds,3), and the directions of the three sides of the triangle Δ.sub.Ds in the image frame T.sub.i may be calculated as follows:

b.sub.i.sup.Ds,1=atan((v.sub.i.sup.Ds,2-v.sub.i.sup.Ds,1)/(u.sub.i.sup.Ds,2-u.sub.i.sup.Ds,1))

b.sub.i.sup.Ds,2=atan((v.sub.i.sup.Ds,3-v.sub.i.sup.Ds,2)/(u.sub.i.sup.Ds,3-u.sub.i.sup.Ds,2))

b.sub.i.sup.Ds,3=atan((v.sub.i.sup.Ds,1-v.sub.i.sup.Ds,3)/(u.sub.i.sup.Ds,1-u.sub.i.sup.Ds,3))

[0198] atan may represent an arc-tangent function, and b.sub.i.sup.Ds,1, b.sub.i.sup.Ds,2, and b.sub.i.sup.Ds,3 may represent the directions of the three sides of the triangle Δ.sub.Ds in the image frame T.sub.i, where s is a positive integer less than or equal to M.
[0199] S503. Calculate directions of the three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i+1.
[0200] Specifically, the computer device may obtain image coordinate information of all the matching feature points included in the Delaunay triangulation network in the image frame T.sub.i+1, that is, image coordinate information of the vertexes of each triangle in the Delaunay triangulation network in the image frame T.sub.i+1, and the computer device may further obtain directions of the three sides included in each triangle in the Delaunay triangulation network in the image frame T.sub.i+1 according to the image coordinate information of the vertexes of each triangle in the image frame T.sub.i+1.
[0201] Image coordinate information of the three vertexes of any triangle Δ.sub.Ds in the M triangles in the image frame T.sub.i+1 may be represented as (u.sub.i+1.sup.Ds,1, v.sub.i+1.sup.Ds,1), (u.sub.i+1.sup.Ds,2, v.sub.i+1.sup.Ds,2), and (u.sub.i+1.sup.Ds,3, v.sub.i+1.sup.Ds,3), and the directions of the three sides of the triangle Δ.sub.Ds in the image frame T.sub.i+1 may be calculated as follows:

b.sub.i+1.sup.Ds,1=atan((v.sub.i+1.sup.Ds,2-v.sub.i+1.sup.Ds,1)/(u.sub.i+1.sup.Ds,2-u.sub.i+1.sup.Ds,1))

b.sub.i+1.sup.Ds,2=atan((v.sub.i+1.sup.Ds,3-v.sub.i+1.sup.Ds,2)/(u.sub.i+1.sup.Ds,3-u.sub.i+1.sup.Ds,2))

b.sub.i+1.sup.Ds,3=atan((v.sub.i+1.sup.Ds,1-v.sub.i+1.sup.Ds,3)/(u.sub.i+1.sup.Ds,1-u.sub.i+1.sup.Ds,3))

[0202] atan may represent an arc-tangent function, and b.sub.i+1.sup.Ds,1, b.sub.i+1.sup.Ds,2, and b.sub.i+1.sup.Ds,3 may represent the directions of the three sides of the triangle Δ.sub.Ds in the image frame T.sub.i+1.
[0203] S504. Calculate direction change values of the three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i and in the image frame T.sub.i+1 and form three sequences.
[0204] Specifically, the computer device may calculate the direction change values of the three sides of the triangle Δ.sub.Ds between the image frame T.sub.i and the image frame T.sub.i+1, which may be shown in the following formula (22) to formula (24):

db.sub.Ds.sup.1=b.sub.i+1.sup.Ds,1-b.sub.i.sup.Ds,1  (22)

db.sub.Ds.sup.2=b.sub.i+1.sup.Ds,2-b.sub.i.sup.Ds,2  (23)

db.sub.Ds.sup.3=b.sub.i+1.sup.Ds,3-b.sub.i.sup.Ds,3  (24)

[0205] db.sub.Ds.sup.1, db.sub.Ds.sup.2, and db.sub.Ds.sup.3 may represent the direction change values of the three sides of the triangle Δ.sub.Ds between the image frame T.sub.i and the image frame T.sub.i+1.
[0206] The computer device may obtain three direction change values respectively corresponding to the M triangles in the Delaunay triangulation network according to formula (22) to formula (24), and may further generate three sequences corresponding to the Delaunay triangulation network according to the three direction change values respectively corresponding to the M triangles, where the three sequences may be shown in the following formula (25):
Ω.sup.1={db.sub.D1.sup.1, db.sub.D2.sup.1, . . . , db.sub.DM.sup.1}

Ω.sup.2={db.sub.D1.sup.2, db.sub.D2.sup.2, . . . , db.sub.DM.sup.2}

Ω.sup.3={db.sub.D1.sup.3, db.sub.D2.sup.3, . . . , db.sub.DM.sup.3}  (25)
[0207] Ω.sup.1, Ω.sup.2, and Ω.sup.3 may represent the three sequences corresponding to the Delaunay triangulation network. In this case, the three sequences may all be referred to as direction change sequences corresponding to the Delaunay triangulation network, and the three sequences include a third sequence Ω.sup.1, a fourth sequence Ω.sup.2, and a fifth sequence Ω.sup.3.
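The per-triangle computation of steps S502 to S504 can be illustrated as follows. This is a sketch under stated assumptions: atan2 is used instead of a plain arc-tangent of the slope to avoid division by zero for vertical sides, and the side pairing (vertex 1-2, 2-3, 3-1) is an illustrative choice not fixed by this application:

```python
import math

def side_directions(tri):
    """Directions of the three sides of a triangle given as three
    (u, v) image coordinates. The pairing of sides is an assumption."""
    (u1, v1), (u2, v2), (u3, v3) = tri
    return (math.atan2(v2 - v1, u2 - u1),
            math.atan2(v3 - v2, u3 - u2),
            math.atan2(v1 - v3, u1 - u3))

def direction_change_sequences(tris_i, tris_i1):
    """Build the three direction change sequences of formulas (22)-(25):
    element s of sequence e is the change of side e's direction between
    the image frames T_i and T_{i+1} for triangle s."""
    omega1, omega2, omega3 = [], [], []
    for tri_i, tri_i1 in zip(tris_i, tris_i1):
        b_i = side_directions(tri_i)
        b_i1 = side_directions(tri_i1)
        db = [b1 - b0 for b0, b1 in zip(b_i, b_i1)]
        omega1.append(db[0])
        omega2.append(db[1])
        omega3.append(db[2])
    return omega1, omega2, omega3
```

A triangle whose matched copy has not rotated between the two frames contributes zero to all three sequences.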
[0208] S505. Process the three sequences to obtain average values and medians respectively corresponding to the three sequences.
[0209] Specifically, the computer device may process the third sequence Ω.sup.1, the fourth sequence Ω.sup.2, and the fifth sequence Ω.sup.3 corresponding to the Delaunay triangulation network, to obtain an average value Avg.sub.1 and a median Med.sub.1 of the third sequence Ω.sup.1, an average value Avg.sub.2 and a median Med.sub.2 of the fourth sequence Ω.sup.2, and an average value Avg.sub.3 and a median Med.sub.3 of the fifth sequence Ω.sup.3. Further, the computer device may calculate an average value Avg.sub.b (which may be referred to as a direction average value) of the average value Avg.sub.1, the average value Avg.sub.2, and the average value Avg.sub.3, represented as: Avg.sub.b=(Avg.sub.1+Avg.sub.2+Avg.sub.3)/3. Similarly, the computer device may calculate an average value Med.sub.b (which may be referred to as a direction median) of the median Med.sub.1, the median Med.sub.2, and the median Med.sub.3, represented as: Med.sub.b=(Med.sub.1+Med.sub.2+Med.sub.3)/3.
[0210] S506. Determine that the vehicle is in a linear motion state when the average values and the medians of the three sequences are all less than a given threshold.
[0211] Specifically, a target mesh plane figure may be constructed for each image frame group; that is, N Delaunay triangulation networks may be obtained according to the normal matching feature points respectively corresponding to the N image frame groups. One Delaunay triangulation network may correspond to one sequence Ω.sup.1, one sequence Ω.sup.2, and one sequence Ω.sup.3, and direction average values respectively associated with the N Delaunay triangulation networks and direction medians respectively associated with the N Delaunay triangulation networks may be further calculated. When the direction average values respectively associated with the N Delaunay triangulation networks and the direction medians respectively associated with the N Delaunay triangulation networks are all less than a second threshold (which may be a preset threshold), the motion state of the vehicle may be determined as a linear motion state. It may be understood that the second threshold herein may include a second average value threshold and a second median threshold adopting different values.
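Under the same assumptions, the decision of step S506 might look like the sketch below. Taking absolute direction changes (so that negative rotations are not treated as small) and the threshold values are illustrative choices not fixed by this application:

```python
import statistics

def is_linear_motion(network_sequences, avg_thr=0.05, med_thr=0.05):
    """Linear-motion check sketch: for each Delaunay triangulation
    network, the direction average value Avg_b (mean of the three
    sequence averages) and the direction median Med_b (mean of the three
    sequence medians) must both stay below the second threshold.
    Absolute changes and threshold values are assumptions."""
    for omega1, omega2, omega3 in network_sequences:
        avgs = [statistics.mean([abs(x) for x in s])
                for s in (omega1, omega2, omega3)]
        meds = [statistics.median([abs(x) for x in s])
                for s in (omega1, omega2, omega3)]
        avg_b = sum(avgs) / 3
        med_b = sum(meds) / 3
        if avg_b >= avg_thr or med_b >= med_thr:
            return False
    return True
```

Sequences of near-zero direction changes across all networks yield a linear-motion verdict; any large systematic rotation does not.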
[0212] In the embodiments of this application, in-vehicle image data corresponding to a vehicle is obtained, matching feature points between two adjacent image frames are extracted, a Delaunay triangulation network corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the Delaunay triangulation network. Therefore, the effectiveness of the matching feature points may be improved, and determining a motion state of the vehicle based on the Delaunay triangulation network may not only improve the evaluation accuracy of the motion state of the vehicle, but may also simplify the evaluation procedure of the linear motion state of the vehicle, thereby further improving the evaluation efficiency of the motion state of the vehicle.
[0213] Further, referring to
[0214] S1401. Obtain a Delaunay triangulation network from which abnormal matching feature points are removed.
[0215] S1402. Calculate directions of three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i.
[0216] S1403. Calculate directions of the three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i+1.
[0217] S1404. Calculate direction change values of the three sides of each triangle in the Delaunay triangulation network in the image frame T.sub.i and in the image frame T.sub.i+1 and form three sequences.
[0218] For a specific implementation process of step S1401 to step S1404, reference may be made to step S501 to step S504 in the embodiment corresponding to
[0219] S1405. Sort elements of a third sequence in ascending order.
[0220] Specifically, the computer device may obtain a sorted third sequence Ω.sup.1 by sorting the direction change values included in the third sequence Ω.sup.1 in formula (25) in ascending order.
[0221] S1406. Initialize a third tagged array.
[0222] Specifically, the computer device may obtain an initial tagged array corresponding to the third sequence Ω.sup.1, where the initial tagged array may refer to an initialized third tagged array. When the Delaunay triangulation network includes M triangles, it may be determined that the initialized third tagged array and the initialized first tagged array in the embodiment corresponding to
[0223] S1407. Calculate an upper quartile, a lower quartile, and a median of the sorted third sequence.
[0224] S1408. Update the third tagged array.
[0225] S1409. Process the third sequence to obtain a new sequence.
[0226] S1410. Calculate an upper quartile, a lower quartile, and a median of the new sequence.
[0227] S1411. Update the third tagged array.
[0228] S1412. Remove a gross error in the third sequence according to the third tagged array.
[0229] S1413. Calculate an average value and a median of the third sequence from which the gross error is removed.
[0230] Specifically, the computer device may process the sorted third sequence Ω.sup.1 according to step S1407 to step S1413. For the processing process of the sorted third sequence Ω.sup.1, reference may be made to the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction in step S1206 to step S1212, and details are not described herein again. The processing process of the sorted third sequence Ω.sup.1 is similar to that of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction; that is, the processed objects are different, but the processing manners are the same. Based on step S1407 to step S1413, a third sequence Ω.sup.1 from which a gross error is removed and corresponding to the Delaunay triangulation network may be obtained, and an average value Avg.sub.1 and a median Med.sub.1 of the third sequence Ω.sup.1 from which the gross error is removed may be further obtained according to element values included in that sequence.
[0231] It may be understood that, in the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, the median, the average value, and the standard deviation value corresponding to the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction need to be finally obtained. However, in this embodiment of this application, only the average value and the median corresponding to the third sequence Ω.sup.1 need to be finally obtained. In other words, the factors for evaluating the static state of the vehicle may include an average value, a median, and a standard deviation value, while the factors for evaluating the linear motion state of the vehicle may include an average value and a median.
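The quartile-based gross-error removal of steps S1405 to S1413 can be sketched as follows. The median-split quartile rule, the 1.5×IQR fence, and the single filtering pass are assumptions: the specification does not fix the quartile rule or the fence, and it updates the tagged array twice (once on the sorted sequence and once on the filtered sequence) rather than once as shown here:

```python
import statistics

def quartiles(sorted_seq):
    """Lower quartile, median, and upper quartile of an ascending
    sequence via a simple median-split rule (assumes len >= 2)."""
    n = len(sorted_seq)
    med = statistics.median(sorted_seq)
    lower = statistics.median(sorted_seq[: n // 2])
    upper = statistics.median(sorted_seq[(n + 1) // 2:])
    return lower, med, upper

def remove_gross_errors(seq, k=1.5):
    """Tag elements outside [Q1 - k*IQR, Q3 + k*IQR] as gross errors
    and drop them (the analogue of the tagged array), then return the
    average value and the median of the remaining elements."""
    s = sorted(seq)
    q1, _, q3 = quartiles(s)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [x for x in s if lo <= x <= hi]
    return statistics.mean(kept), statistics.median(kept)
```

For example, a single outlier of 100 among values near 2 is fenced out before the average value and median are computed.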
[0232] S1414. Sort elements of a fourth sequence in ascending order.
[0233] Specifically, the computer device may obtain a sorted fourth sequence Ω.sup.2 by sorting the direction change values included in the fourth sequence Ω.sup.2 in formula (25) in ascending order.
[0234] S1415. Initialize a fourth tagged array.
[0235] Specifically, the computer device may obtain an initial tagged array corresponding to the fourth sequence Ω.sup.2, where the initial tagged array may refer to an initialized fourth tagged array. When the Delaunay triangulation network includes M triangles, it may be determined that the initialized fourth tagged array and the initialized first tagged array in the embodiment corresponding to
[0236] S1416. Calculate an upper quartile, a lower quartile, and a median of the sorted fourth sequence.
[0237] S1417. Update the fourth tagged array.
[0238] S1418. Process the fourth sequence to obtain a new sequence.
[0239] S1419. Calculate an upper quartile, a lower quartile, and a median of the new sequence.
[0240] S1420. Update the fourth tagged array.
[0241] S1421. Remove a gross error in the fourth sequence according to the fourth tagged array.
[0242] S1422. Calculate an average value and a median of the fourth sequence from which the gross error is removed.
[0243] Specifically, the computer device may process the sorted fourth sequence Ω.sup.2 according to step S1416 to step S1422. For the processing process of the sorted fourth sequence Ω.sup.2, reference may be made to the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction in step S1206 to step S1212, and details are not described herein again. The processing process of the sorted fourth sequence Ω.sup.2 is similar to that of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction; that is, the processed objects are different, but the processing manners are the same. Based on step S1416 to step S1422, a fourth sequence Ω.sup.2 from which a gross error is removed and corresponding to the Delaunay triangulation network may be obtained, and an average value Avg.sub.2 and a median Med.sub.2 of the fourth sequence Ω.sup.2 from which the gross error is removed may be further obtained according to element values included in that sequence.
[0244] It may be understood that, in the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, the median, the average value, and the standard deviation value corresponding to the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction need to be finally obtained. However, in this embodiment of this application, only the average value and the median corresponding to the fourth sequence Ω.sup.2 need to be finally obtained.
[0245] S1423. Sort elements of a fifth sequence in ascending order.
[0246] Specifically, the computer device may obtain a sorted fifth sequence Ω.sup.3 by sorting the direction change values included in the fifth sequence Ω.sup.3 in formula (25) in ascending order.
[0247] S1424. Initialize a fifth tagged array.
[0248] Specifically, the computer device may obtain an initial tagged array corresponding to the fifth sequence Ω.sup.3, where the initial tagged array may refer to an initialized fifth tagged array. When the Delaunay triangulation network includes M triangles, it may be determined that the initialized fifth tagged array and the initialized first tagged array in the embodiment corresponding to
[0249] S1425. Calculate an upper quartile, a lower quartile, and a median of the sorted fifth sequence.
[0250] S1426. Update the fifth tagged array.
[0251] S1427. Process the fifth sequence to obtain a new sequence.
[0252] S1428. Calculate an upper quartile, a lower quartile, and a median of the new sequence.
[0253] S1429. Update the fifth tagged array.
[0254] S1430. Remove a gross error in the fifth sequence according to the fifth tagged array.
[0255] S1431. Calculate an average value and a median of the fifth sequence from which the gross error is removed.
[0256] Specifically, the computer device may process the sorted fifth sequence Ω.sup.3 according to step S1425 to step S1431. For the processing process of the sorted fifth sequence Ω.sup.3, reference may be made to the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction in step S1206 to step S1212, and details are not described herein again. The processing process of the sorted fifth sequence Ω.sup.3 is similar to that of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction; that is, the processed objects are different, but the processing manners are the same. Based on step S1425 to step S1431, a fifth sequence Ω.sup.3 from which a gross error is removed and corresponding to the Delaunay triangulation network may be obtained, and an average value Avg.sub.3 and a median Med.sub.3 of the fifth sequence Ω.sup.3 from which the gross error is removed may be further obtained according to element values included in that sequence.
[0257] It may be understood that, in the processing process of the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction, the median, the average value, and the standard deviation value corresponding to the first coordinate sequence Φ.sub.u.sup.sort in the U-axis direction need to be finally obtained. However, in this embodiment of this application, only the average value and the median corresponding to the fifth sequence Ω.sup.3 need to be finally obtained.
[0258] S1432. Determine that the vehicle is in a linear motion state when the average values and the medians of the three sequences are all less than a given threshold.
[0259] Specifically, after obtaining the average value Avg.sub.1 and the median Med.sub.1 of the third sequence Ω.sup.1, the average value Avg.sub.2 and the median Med.sub.2 of the fourth sequence Ω.sup.2, and the average value Avg.sub.3 and the median Med.sub.3 of the fifth sequence Ω.sup.3, the computer device may determine an average value of the average value Avg.sub.1, the average value Avg.sub.2, and the average value Avg.sub.3 as a direction average value associated with the Delaunay triangulation network, and determine an average value of the median Med.sub.1, the median Med.sub.2, and the median Med.sub.3 as a direction median associated with the Delaunay triangulation network. Based on the same processing process, the computer device may obtain direction average values respectively associated with the N Delaunay triangulation networks and direction medians respectively associated with the N Delaunay triangulation networks. When the direction average values respectively associated with the N Delaunay triangulation networks and the direction medians respectively associated with the N Delaunay triangulation networks are all less than a second threshold (which may be a preset threshold), the motion state of the vehicle may be determined as a linear motion state; and when a direction average value greater than or equal to the second threshold exists in the direction average values associated with the N Delaunay triangulation networks or a direction median greater than or equal to the second threshold exists in the direction medians associated with the N Delaunay triangulation networks, the motion state of the vehicle may be determined as a non-linear motion state.
[0260] In some embodiments, when the second threshold includes a second average value threshold and a second median threshold, if the direction average values respectively associated with the N Delaunay triangulation networks are all less than the second average value threshold, and the direction medians respectively associated with the N Delaunay triangulation networks are all less than the second median threshold, the motion state of the vehicle is determined as a linear motion state; and when a direction average value greater than or equal to the second average value threshold exists in the direction average values associated with the N Delaunay triangulation networks or a direction median greater than or equal to the second median threshold exists in the direction medians associated with the N Delaunay triangulation networks, the motion state of the vehicle may be determined as a non-linear motion state.
[0261] In the embodiments of this application, in-vehicle image data corresponding to a vehicle is obtained, matching feature points between two adjacent image frames are extracted, a Delaunay triangulation network corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the Delaunay triangulation network. Therefore, the effectiveness of the matching feature points may be improved, and determining a motion state of the vehicle based on the Delaunay triangulation network may not only improve the evaluation accuracy of the motion state of the vehicle, but may also simplify the evaluation procedure of the linear motion state of the vehicle, thereby further improving the evaluation efficiency of the motion state of the vehicle. The motion state of the vehicle is evaluated based on the in-vehicle image data and the Delaunay triangulation network, so that limitations to a scenario in which the vehicle is located may be reduced, thereby effectively improving the evaluation accuracy of the motion state of the vehicle in a complex scenario (such as city overpass) and further improving the applicability of the method for evaluating a motion state of a vehicle.
[0262] Referring to
[0263] the first obtaining module 11 is configured to obtain target image data captured by cameras on the vehicle, and combine every two neighboring image frames in the target image data into N image frame groups, N being a positive integer;
[0264] the second obtaining module 12 is configured to obtain matching feature points between two image frames included in each image frame group in the N image frame groups;
[0265] the mesh figure construction module 13 is configured to construct target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and
[0266] the motion state determining module 14 is configured to determine the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
[0267] For specific function implementations of the first obtaining module 11, the second obtaining module 12, the mesh figure construction module 13, and the motion state determining module 14, reference may be made to step S101 to step S104 in the embodiment corresponding to
[0268] In some feasible implementations, the first obtaining module 11 may include a denoising processing unit 111, a distortion removing processing unit 112, and an image frame combination unit 113, where
[0269] the denoising processing unit 111 is configured to obtain original image data acquired by a camera for the vehicle, and perform denoising processing on the original image data to obtain denoised image data;
[0270] the distortion removing processing unit 112 is configured to perform distortion removing processing on the denoised image data according to a device parameter corresponding to the camera, to obtain the target image data associated with the vehicle; and
[0271] the image frame combination unit 113 is configured to divide the target image data into N+1 image frames, and combine every two neighboring image frames in the N+1 image frames to obtain the N image frame groups.
[0272] For specific function implementations of the denoising processing unit 111, the distortion removing processing unit 112, and the image frame combination unit 113, reference may be made to step S202 in the embodiment corresponding to
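The behavior of the image frame combination unit 113 reduces to pairing neighbors: N+1 frames give N groups. A minimal sketch (the function name is illustrative):

```python
def make_frame_groups(frames):
    """Combine every two neighboring image frames: N+1 frames yield the
    N groups (T_1, T_2), (T_2, T_3), ..., (T_N, T_{N+1})."""
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
```

For example, four frames produce the three groups (T.sub.1, T.sub.2), (T.sub.2, T.sub.3), and (T.sub.3, T.sub.4).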
[0273] In some feasible implementations, the second obtaining module 12 may include an image frame group selection unit 121, a feature point obtaining unit 122, and a feature point matching unit 123, where
[0274] the image frame group selection unit 121 is configured to obtain an image frame group G.sub.i in the N image frame groups, where the image frame group G.sub.i includes an image frame T.sub.i and an image frame T.sub.i+1, and i is a positive integer less than or equal to N;
[0275] the feature point obtaining unit 122 is configured to obtain a first feature point set in the image frame T.sub.i and a second feature point set in the image frame T.sub.i+1; and
[0276] the feature point matching unit 123 is configured to perform feature point matching on the first feature point set and the second feature point set to obtain matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
[0277] For specific function implementations of the image frame group selection unit 121, the feature point obtaining unit 122, and the feature point matching unit 123, reference may be made to step S102 in the embodiment corresponding to
[0278] In some feasible implementations, the feature point obtaining unit 122 may include a scale space construction subunit 1221, a first feature point set obtaining subunit 1222, and a second feature point set obtaining subunit 1223, where
[0279] the scale space construction subunit 1221 is configured to construct a scale space for the image frame T.sub.i and the image frame T.sub.i+1, search for a first key point position set corresponding to the image frame T.sub.i in the scale space, and search for a second key point position set corresponding to the image frame T.sub.i+1 in the scale space;
[0280] the first feature point set obtaining subunit 1222 is configured to determine, according to a local gradient corresponding to a first key point position included in the first key point position set, a first descriptive feature corresponding to the first key point position, and obtain the first feature point set of the image frame T.sub.i according to the first key point position and the first descriptive feature; and
[0281] the second feature point set obtaining subunit 1223 is configured to determine, according to a local gradient corresponding to a second key point position included in the second key point position set, a second descriptive feature corresponding to the second key point position, and obtain the second feature point set of the image frame T.sub.i+1 according to the second key point position and the second descriptive feature.
[0282] For specific function implementations of the scale space construction subunit 1221, the first feature point set obtaining subunit 1222, and the second feature point set obtaining subunit 1223, reference may be made to step S102 in the embodiment corresponding to
[0283] In some feasible implementations, the feature point matching unit 123 may include a feature point selection subunit 1231, a matching degree determining subunit 1232, and a matching feature point determining subunit 1233, where:
[0284] the feature point selection subunit 1231 is configured to obtain a first feature point a.sub.t from the first feature point set, and obtain a second feature point b.sub.k from the second feature point set, where t is a positive integer less than or equal to the number of feature points included in the first feature point set, and k is a positive integer less than or equal to the number of feature points included in the second feature point set;
[0285] the matching degree determining subunit 1232 is configured to determine a matching degree between the first feature point a.sub.t and the second feature point b.sub.k according to a first descriptive feature corresponding to the first feature point a.sub.t and a second descriptive feature corresponding to the second feature point b.sub.k; and
[0286] the matching feature point determining subunit 1233 is configured to determine, when the matching degree is greater than a matching threshold, the first feature point a.sub.t and the second feature point b.sub.k as the matching feature points between the image frame T.sub.i and the image frame T.sub.i+1.
[0287] For specific function implementations of the feature point selection subunit 1231, the matching degree determining subunit 1232, and the matching feature point determining subunit 1233, reference may be made to step S102 in the embodiment corresponding to
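A minimal sketch of the matching subunits: cosine similarity stands in for the unspecified "matching degree", and a greedy best-match loop stands in for the matching strategy (both are assumptions; practical systems would typically also apply a ratio test):

```python
import math

def matching_degree(desc_a, desc_b):
    """Cosine similarity between two descriptive feature vectors; an
    illustrative choice of matching degree."""
    dot = sum(x * y for x, y in zip(desc_a, desc_b))
    na = math.sqrt(sum(x * x for x in desc_a))
    nb = math.sqrt(sum(y * y for y in desc_b))
    return dot / (na * nb) if na and nb else 0.0

def match_feature_points(set_a, set_b, threshold=0.9):
    """For each first feature point a_t, find the second feature point
    b_k with the highest matching degree; (a_t, b_k) is kept as a
    matching pair when the degree exceeds the matching threshold.
    Feature points are (position, descriptor) pairs."""
    matches = []
    for t, (_pos_a, desc_a) in enumerate(set_a):
        best = max(range(len(set_b)),
                   key=lambda k: matching_degree(desc_a, set_b[k][1]),
                   default=None)
        if best is not None and matching_degree(desc_a, set_b[best][1]) > threshold:
            matches.append((t, best))
    return matches
```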
[0288] In some feasible implementations, the mesh figure construction module 13 may include a matching feature point obtaining unit 131, an initial linked list obtaining unit 132, an associated polygon obtaining unit 133, a candidate polygon generation unit 134, a polygon conversion unit 135, and a target mesh figure generation unit 136, where
[0289] the matching feature point obtaining unit 131 is configured to obtain q matching feature points between an image frame T.sub.i and an image frame T.sub.i+1 included in an image frame group G.sub.i, and determine matching coordinate information respectively corresponding to the q matching feature points according to image coordinate information of each matching feature point in the q matching feature points in the image frame T.sub.i and the image frame T.sub.i+1 respectively, where the image frame group G.sub.i belongs to the N image frame groups, i is a positive integer less than or equal to N, and q is a positive integer;
[0290] the initial linked list obtaining unit 132 is configured to obtain an initial polygon linked list associated with the q matching feature points according to the matching coordinate information respectively corresponding to the q matching feature points;
[0291] the associated polygon obtaining unit 133 is configured to obtain, for a matching feature point c.sub.r in the q matching feature points, an associated polygon matching the matching feature point c.sub.r from the initial polygon linked list, where a circumscribed circle of the associated polygon includes the matching feature point c.sub.r, and r is a positive integer less than or equal to q;
[0292] the candidate polygon generation unit 134 is configured to delete a common side of the associated polygon, and connect the matching feature point c.sub.r to vertexes of the associated polygon to generate a candidate polygon corresponding to the matching feature point c.sub.r;
[0293] the polygon conversion unit 135 is configured to convert the candidate polygon into a target polygon corresponding to the matching feature point c.sub.r, and add the target polygon to the initial polygon linked list, where a circumscribed circle of the target polygon does not include any matching feature point; and
[0294] the target mesh figure generation unit 136 is configured to determine the initial polygon linked list as a target polygon linked list when target polygons respectively corresponding to the q matching feature points are all added to the initial polygon linked list, and generate a target mesh plane figure corresponding to the image frame group G.sub.i according to the target polygon linked list.
[0295] For specific function implementations of the matching feature point obtaining unit 131, the initial linked list obtaining unit 132, the associated polygon obtaining unit 133, the candidate polygon generation unit 134, the polygon conversion unit 135, and the target mesh figure generation unit 136, reference may be made to step S205 in the embodiment corresponding to
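By way of non-limiting illustration, the behavior of units 132 to 135, in which an inserted matching feature point removes every associated polygon whose circumscribed circle contains it, deletes the common sides, and reconnects the cavity to the new point, corresponds to an incremental (Bowyer-Watson style) triangulation. The sketch below makes that reading concrete; the function names, the size of the enclosing super-triangle, and the use of a Python list in place of the polygon linked list are assumptions of this sketch, not part of the disclosed embodiment:

```python
import math

def circumcenter(a, b, c):
    # Center of the circle through triangle vertices a, b, c.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def in_circumcircle(tri, p):
    # True when p lies strictly inside the circumscribed circle of tri.
    cx, cy = circumcenter(*tri)
    r2 = (tri[0][0] - cx) ** 2 + (tri[0][1] - cy) ** 2
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 < r2

def triangulate(points):
    # Incremental insertion: each inserted point removes the "associated
    # polygons" whose circumcircles contain it, deletes their common sides,
    # and connects the cavity boundary to the new point.
    big = 1e6
    super_tri = ((-big, -big), (big, -big), (0.0, big))
    triangles = [super_tri]
    for p in points:
        bad = [t for t in triangles if in_circumcircle(t, p)]
        # Sides shared by two removed triangles are the "common sides";
        # sides seen exactly once form the cavity boundary.
        side_count = {}
        for t in bad:
            for side in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                key = frozenset(side)
                side_count[key] = side_count.get(key, 0) + 1
        boundary = [s for s, n in side_count.items() if n == 1]
        triangles = [t for t in triangles if t not in bad]
        for s in boundary:
            a, b = tuple(s)
            triangles.append((a, b, p))
    # Discard triangles attached to the artificial enclosing triangle.
    return [t for t in triangles if not any(v in super_tri for v in t)]
```

The resulting mesh satisfies the property recited for the target polygon linked list: the circumscribed circle of every remaining polygon contains no matching feature point.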
[0296] In some feasible implementations, the target mesh figure generation unit 136 may include a polygon selection subunit 1361, a vertex coordinate obtaining subunit 1362, a coordinate change value determining subunit 1363, a parameter total value obtaining subunit 1364, and an abnormal point deleting subunit 1365, where
[0297] the polygon selection subunit 1361 is configured to generate a candidate mesh plane figure corresponding to the image frame group G.sub.i according to the target polygon linked list, and obtain a polygon D.sub.j from the candidate mesh plane figure, where j is a positive integer less than or equal to the number of polygons included in the candidate mesh plane figure;
[0298] the vertex coordinate obtaining subunit 1362 is configured to obtain first vertex coordinates of matching feature points corresponding to the polygon D.sub.j in the image frame T.sub.i, and obtain second vertex coordinates of the matching feature points corresponding to the polygon D.sub.j in the image frame T.sub.i+1;
[0299] the coordinate change value determining subunit 1363 is configured to determine a coordinate change value corresponding to each side length included in the polygon D.sub.j according to the first vertex coordinates and the second vertex coordinates;
[0300] the parameter total value obtaining subunit 1364 is configured to determine a side length whose coordinate change value is less than a change threshold as a target side length, and accumulate normal verification parameters corresponding to the matching feature points at the two ends of the target side length, to obtain a total value of the normal verification parameter corresponding to each of the matching feature points of the polygon D.sub.j; and
[0301] the abnormal point deleting subunit 1365 is configured to determine a matching feature point whose total value of the normal verification parameter is greater than a parameter threshold as a normal matching feature point, and update the candidate mesh plane figure to the target mesh plane figure corresponding to the image frame group G.sub.i according to the normal matching feature point.
[0302] For specific function implementations of the polygon selection subunit 1361, the vertex coordinate obtaining subunit 1362, the coordinate change value determining subunit 1363, the parameter total value obtaining subunit 1364, and the abnormal point deleting subunit 1365, reference may be made to step S206 and step S207 in the embodiment corresponding to
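As a non-authoritative illustration of subunits 1362 to 1365, the side-length consistency check can be sketched as a voting scheme over the sides of the mesh. The threshold values, the use of Euclidean side length as the coordinate change value, and the integer-vote representation of the normal verification parameter are assumptions of this sketch:

```python
import math

def filter_abnormal_points(sides, coords_t, coords_t1,
                           change_threshold=2.0, parameter_threshold=0):
    # sides: (point_id_a, point_id_b) pairs taken from the polygons of the
    # candidate mesh plane figure.
    # coords_t / coords_t1: point_id -> (x, y) in frames T_i and T_i+1.
    totals = {p: 0 for p in coords_t}
    for a, b in sides:
        length_t = math.dist(coords_t[a], coords_t[b])
        length_t1 = math.dist(coords_t1[a], coords_t1[b])
        # A side whose length barely changes between the two frames is a
        # "target side length"; both of its endpoints accumulate one
        # normal verification parameter.
        if abs(length_t - length_t1) < change_threshold:
            totals[a] += 1
            totals[b] += 1
    normal = {p for p, v in totals.items() if v > parameter_threshold}
    return normal, totals
```

A mismatched feature point distorts every side incident to it in the second frame, so it collects no votes and is excluded when the candidate mesh plane figure is updated to the target mesh plane figure.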
[0303] In some feasible implementations, the motion state determining module 14 may include a first mesh figure selection unit 141, a sorting unit 142, a statistical value determining unit 143, and a static state determining unit 144, where
[0304] the first mesh figure selection unit 141 is configured to obtain a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, where the target mesh plane figure H.sub.i includes M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not include any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N;
[0305] the sorting unit 142 is configured to obtain average coordinate change values respectively corresponding to the M polygons, sort the average coordinate change values respectively corresponding to the M polygons, and determine the sorted average coordinate change values as a first coordinate sequence of the target mesh plane figure H.sub.i;
[0306] the statistical value determining unit 143 is configured to determine a statistical value associated with the target mesh plane figure H.sub.i according to the first coordinate sequence; and
[0307] the static state determining unit 144 is configured to determine the motion state of the vehicle as a static state when statistical values respectively associated with N target mesh plane figures are all less than a first threshold.
[0308] For specific function implementations of the first mesh figure selection unit 141, the sorting unit 142, the statistical value determining unit 143, and the static state determining unit 144, reference may be made to the embodiment corresponding to
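The static-state decision of units 142 to 144 can be illustrated, purely as a sketch, with the three statistical values named in the following paragraphs (coordinate average value, coordinate median, and coordinate standard deviation value); the threshold of 1.0 pixel and the use of the population standard deviation are assumptions of this sketch:

```python
import statistics

def figure_statistics(polygon_changes):
    # polygon_changes: average coordinate change value of each of the M
    # polygons in one target mesh plane figure.
    seq = sorted(polygon_changes)            # the first coordinate sequence
    return (statistics.fmean(seq),
            statistics.median(seq),
            statistics.pstdev(seq))

def is_static(all_figures, first_threshold=1.0):
    # Static when every statistical value of every one of the N target mesh
    # plane figures stays below the first threshold (threshold value is an
    # assumption, in pixels).
    return all(s < first_threshold
               for changes in all_figures
               for s in figure_statistics(changes))
```

Sub-pixel coordinate changes across all N frame groups thus map to a static determination, while any frame group with large changes blocks it.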
[0309] In some feasible implementations, the statistical value associated with the target mesh plane figure H.sub.i includes a coordinate average value, a coordinate median, and a coordinate standard deviation value; and
[0310] the statistical value determining unit 143 may include a first sequence update subunit 1431, a second sequence update subunit 1432, a tagged array update subunit 1433, and a filtering subunit 1434, where
[0311] the first sequence update subunit 1431 is configured to update the first coordinate sequence according to a median corresponding to the first coordinate sequence and the average coordinate change values included in the first coordinate sequence to obtain a second coordinate sequence;
[0312] the second sequence update subunit 1432 is configured to update the second coordinate sequence according to a lower quartile corresponding to the second coordinate sequence and an upper quartile corresponding to the second coordinate sequence to obtain a third coordinate sequence;
[0313] the tagged array update subunit 1433 is configured to obtain an initial tagged array corresponding to the first coordinate sequence, and update the initial tagged array according to a lower quartile corresponding to the first coordinate sequence, an upper quartile corresponding to the first coordinate sequence, and a median corresponding to the third coordinate sequence to obtain a target tagged array; and
[0314] the filtering subunit 1434 is configured to filter the average coordinate change values included in the first coordinate sequence according to a non-zero element in the target tagged array to update the first coordinate sequence to a fourth coordinate sequence, and obtain a coordinate average value, a coordinate median, and a coordinate standard deviation value corresponding to the fourth coordinate sequence.
[0315] The second sequence update subunit 1432 is specifically configured to:
[0316] determine an effective value range of element values included in the second coordinate sequence according to the lower quartile corresponding to the second coordinate sequence and the upper quartile corresponding to the second coordinate sequence; and
[0317] determine element values within the effective value range in the second coordinate sequence as the third coordinate sequence.
[0318] For specific function implementations of the first sequence update subunit 1431, the second sequence update subunit 1432, the tagged array update subunit 1433, and the filtering subunit 1434, reference may be made to step S1203 to step S1222 in the embodiment corresponding to
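The exact update rules of subunits 1431 to 1434 live in steps S1203 to S1222, which are not reproduced here; as one plausible, non-authoritative reading, the pipeline is a quartile-based outlier filter. In the sketch below, treating the second coordinate sequence as deviations from the median, the 1.5 x IQR fences as the effective value range, and the target tagged array as an inlier mask are all assumptions:

```python
import statistics

def quartiles(seq):
    q = statistics.quantiles(seq, n=4)       # exclusive method by default
    return q[0], q[2]

def robust_statistics(first_seq):
    med = statistics.median(first_seq)
    # Second coordinate sequence: deviation of each value from the median
    # (assumed reading of "update according to the median").
    second = [abs(v - med) for v in first_seq]
    q1, q3 = quartiles(second)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # effective value range
    third = [v for v in second if lo <= v <= hi]
    # Target tagged array: non-zero where the original value looks like an
    # inlier given the first-sequence quartiles and third-sequence median.
    fq1, fq3 = quartiles(first_seq)
    tmed = statistics.median(third)
    tags = [1 if fq1 - tmed <= v <= fq3 + tmed else 0 for v in first_seq]
    fourth = [v for v, t in zip(first_seq, tags) if t]
    return (statistics.fmean(fourth),
            statistics.median(fourth),
            statistics.pstdev(fourth))
```

Filtering before computing the mean and standard deviation keeps a single large polygon displacement (for example, a moving object crossing the image) from dominating the statistics used for the static-state decision.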
[0319] In some feasible implementations, the motion state determining module 14 may include a second mesh figure selection unit 145, a direction obtaining unit 146, a direction change value determining unit 147, a direction change sequence generation unit 148, and a linear motion state determining unit 149, where
[0320] the second mesh figure selection unit 145 is configured to obtain a target mesh plane figure H.sub.i corresponding to an image frame group G.sub.i in the N image frame groups, where the image frame group G.sub.i includes an image frame T.sub.i and an image frame T.sub.i+1, the target mesh plane figure H.sub.i includes M connected but non-overlapping polygons, a circumscribed circle of each of the polygons does not include any matching feature point, M is a positive integer, and i is a positive integer less than or equal to N;
[0321] the direction obtaining unit 146 is configured to obtain a first direction of each side of the M polygons in the image frame T.sub.i, and obtain a second direction of each side of the M polygons in the image frame T.sub.i+1;
[0322] the direction change value determining unit 147 is configured to determine a direction change value corresponding to each side of the M polygons according to the first direction and the second direction;
[0323] the direction change sequence generation unit 148 is configured to generate a direction change sequence associated with the M polygons according to direction change values respectively corresponding to the M polygons, and determine a direction average value and a direction median associated with the target mesh plane figure H.sub.i according to the direction change sequence; and
[0324] the linear motion state determining unit 149 is configured to determine the motion state of the vehicle as a linear motion state when direction average values respectively associated with N target mesh plane figures and direction medians respectively associated with the N target mesh plane figures are all less than a second threshold.
[0325] For specific function implementations of the second mesh figure selection unit 145, the direction obtaining unit 146, the direction change value determining unit 147, the direction change sequence generation unit 148, and the linear motion state determining unit 149, reference may be made to the embodiments corresponding to
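The linear-motion test of units 146 to 149 can likewise be sketched; the degree units, the angle-wrapping convention, and the 3-degree second threshold are assumptions of this illustration:

```python
import math
import statistics

def side_direction(a, b):
    # Direction of side a -> b, in degrees.
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def direction_changes(sides, coords_t, coords_t1):
    changes = []
    for a, b in sides:
        d1 = side_direction(coords_t[a], coords_t[b])
        d2 = side_direction(coords_t1[a], coords_t1[b])
        diff = abs(d2 - d1) % 360.0
        changes.append(min(diff, 360.0 - diff))   # wrap into [0, 180]
    return changes

def is_linear_motion(per_figure_changes, second_threshold=3.0):
    # Linear motion when the direction average value and the direction
    # median of every target mesh plane figure stay below the second
    # threshold (the 3-degree value is an assumption).
    for changes in per_figure_changes:
        if statistics.fmean(changes) >= second_threshold:
            return False
        if statistics.median(changes) >= second_threshold:
            return False
    return True
```

The underlying intuition: a pure translation of the scene leaves the direction of every side unchanged, while a turn rotates all sides together, so small per-side direction changes across all N figures indicate linear motion.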
[0326] In some feasible implementations, the apparatus 1 for evaluating a motion state of a vehicle may further include a first positioning module 15 and a second positioning module 16, where
[0327] the first positioning module 15 is configured to determine positioning information of the vehicle according to a positioning and navigation system when the motion state of the vehicle is the static state; and
[0328] the second positioning module 16 is configured to determine, when the motion state of the vehicle is the linear motion state, a motion direction of the vehicle according to the linear motion state, and determine the positioning information of the vehicle according to the motion direction and the positioning and navigation system.
[0329] For specific function implementations of the first positioning module 15 and the second positioning module 16, reference may be made to step S104 in the embodiment corresponding to
[0330] In the embodiments of this application, in-vehicle image data corresponding to a driving vehicle is obtained, matching feature points between two adjacent image frames are extracted, a target mesh plane figure corresponding to the two adjacent image frames is constructed by using the matching feature points between the two adjacent image frames, and abnormal matching feature points are removed based on the target mesh plane figure. Therefore, the effectiveness of the matching feature points may be improved, and determining the motion state of the vehicle based on the target mesh plane figure may not only improve the evaluation accuracy of the motion state of the vehicle, but may also simplify the evaluation procedure of the motion state of the vehicle, thereby further improving the evaluation efficiency of the motion state of the vehicle. The motion state of the vehicle is evaluated based on the in-vehicle image data and the target mesh plane figure, so that limitations imposed by the scenario in which the vehicle is located may be reduced, thereby effectively improving the evaluation accuracy of the motion state of the vehicle in a complex scenario (such as a city overpass) and further improving the applicability of the method for evaluating a motion state of a vehicle. The motion state of the vehicle may effectively assist the positioning and navigation system, and may further improve the positioning precision of a terminal configured for the vehicle.
[0331] Referring to
[0332] In the computer device 1000 shown in
[0333] obtaining target image data captured by cameras on the vehicle, and combining every two neighboring image frames in the target image data into N image frame groups, N being a positive integer;
[0334] obtaining matching feature points between two image frames included in each image frame group in the N image frame groups;
[0335] constructing target mesh plane figures respectively corresponding to the N image frame groups according to the matching feature points in each image frame group; and
[0336] determining the motion state of the vehicle according to the target mesh plane figures respectively corresponding to the N image frame groups.
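The first of the steps above, which pairs N+1 neighboring image frames into N image frame groups, can be expressed as a one-line sketch (the function name is an assumption of this illustration):

```python
def frame_groups(frames):
    # N + 1 neighboring image frames -> N image frame groups
    # (T_1, T_2), (T_2, T_3), ..., as recited in the claims.
    return [(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
```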
[0337] It is to be understood that, the computer device 1000 described in this embodiment of this application may implement the description of the method for evaluating a motion state of a vehicle in the embodiment corresponding to any one of
[0338] In addition, an embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program executed by the apparatus 1 for evaluating a motion state of a vehicle mentioned above, and the computer program includes program instructions. When executing the program instructions, a processor can implement the description of the method for evaluating a motion state of a vehicle in the embodiment corresponding to any one of
[0339] In addition, an embodiment of this application further provides a computer program product or a computer program. The computer program product or the computer program may include computer instructions, and the computer instructions may be stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, to cause the computer device to perform the description of the method for evaluating a motion state of a vehicle in the embodiment corresponding to any one of
[0340] To make the description simple, the foregoing method embodiments are stated as a series of action combinations. However, a person skilled in the art is to know that this application is not limited to the described sequence of the actions because according to this application, some steps may use another sequence or may be simultaneously performed. In addition, a person skilled in the art is further to understand that the embodiments described in this specification are exemplary embodiments, and the involved actions and modules mentioned are not necessarily required by this application.
[0341] A sequence of the steps of the method in the embodiments of this application may be adjusted, and certain steps may also be combined or deleted according to an actual requirement.
[0342] The modules in the apparatus in the embodiments of this application may be combined, divided, and deleted according to an actual requirement.
[0343] A person of ordinary skill in the art may understand that all or some of the procedures of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium. When the program is executed, the procedures of the foregoing method embodiments may be performed. The foregoing storage medium may include a magnetic disc, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
[0344] The foregoing disclosure is merely exemplary embodiments of this application, and certainly is not intended to limit the protection scope of this application. Therefore, equivalent variations made in accordance with the claims of this application shall fall within the scope of this application. In this application, the term “unit” or “module” in this application refers to a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal and may be all or partially implemented by using software, hardware (e.g., processing circuitry and/or memory configured to perform the predefined functions), or a combination thereof. Each unit or module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules or units. Moreover, each module or unit can be part of an overall module that includes the functionalities of the module or unit.