SYSTEM AND METHOD FOR OBJECT MONITORING, LOCALIZATION, AND CONTROLLING
20230074477 · 2023-03-09
CPC classification
B60Q9/00
PERFORMING OPERATIONS; TRANSPORTING
G01S7/003
PHYSICS
G01S13/87
PHYSICS
G01S7/415
PHYSICS
International classification
G01S7/41
PHYSICS
B60Q9/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The present invention is a system and method to monitor, localize, and control objects in an implicitly defined geofenced area and to determine object position and orientation (vehicles only) by capturing the object's image, 3D data, or position using camera, LiDAR, and/or RADAR sensors installed on structures mounted on the ground. The sensors capture images, 3D data points, and distances of surface points, which are processed to ultimately obtain 3D data of the surface points of the object. The 3D data points from different sensors are then combined, or fused, by a controller into a single set of 3D points, called fusion data, under one coordinate system such as the GPS coordinate system. The single set of 3D points is then processed by the controller using a deep neural network and/or other algorithms to obtain the position and orientation of the object. Additionally, the controller or sensors can send current and desired future object positions and orientations to controllable objects. The controller and/or sensors can send site image data to a scene marking device and receive marked image data for geofenced monitoring of objects. The controller or sensors send an alert to devices if objects, or abnormal object behavior, are detected within the geofenced area.
Claims
1. A system to monitor, localize, and control an object by sensing the object with a plurality of optical, RADAR, and LiDAR sensors, where the sensors are mounted on structures on the ground at known locations, the monitoring area can be marked for geofencing, and landmark points are used for positioning, the system comprising: a plurality of optical sensors, a plurality of RADAR sensors, and a plurality of LiDAR sensors mounted on structures on the ground; a controller to analyze data captured by the sensors, send vehicle control commands to vehicles, and send object information to display devices; a plurality of devices receiving analytical information from the controller; three or more landmark points with known positions visible in one or more scene images; a scene marking device that can collect the said scene images, facilitate the capability to add additional information such as points, lines, and curves drawn on the images, and upload the additional information back into the controller and/or sensors; and networked communication channels established among the sensors, controller, and devices.
2. The system as defined in claim 1, wherein the said locations are expressed in the GPS coordinate system or in another coordinate system common or accessible to all the sensors or the structures they are installed on.
3. The system as defined in claim 1, wherein a user can mark (manually or automatically) points and areas in the said scene images of claim 1 and upload the scene images back into the controller and/or sensors.
4. A method to monitor, localize, and control an object, the method comprising the steps of: capturing sensor data perceived by the ground sensors; transferring the sensor data into the controller and scene marking device; adding points, lines, and curves into the scene images with the scene marking device and uploading the marked scene images into the controller and/or sensors; processing all the data from the different sensors to obtain position data of object surface points; fusing the position data from the different sensors into a common coordinate system known to all sensors, the result being called fusion data; using the fusion data or its projection in an already trained deep neural network or other algorithms, such as computer vision algorithms, to determine the current object position and orientation; sending current and desired future object positions and orientations to controllable objects; sending the said object positions, dimensions, orientations, and directions of travel to display devices; and the controller or sensors sending an alert to devices if objects, or abnormal object behavior, are detected within the geofenced area.
5. The method as defined in claim 4, wherein the deep neural network is trained with manually prepared 2D or 3D structure data of multiple objects.
6. The method as defined in claim 4, wherein the deep neural network is alternatively trained with the fusion data.
7. The method as defined in claim 4, wherein the said object position and orientation can be determined by other means in addition to or without using deep neural network.
8. The method as defined in claim 4, wherein the display devices could be a stationary display device or a mobile one such as a cell phone screen or a display screen in a vehicle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] System and method of the present invention are illustrated as an example and are not limited by the figures of the accompanying diagrams and pictures, in which:
DETAILED DESCRIPTION OF THE INVENTION
[0009] The terminology used herein for the purpose of describing the system and method is not intended to be limiting of the invention. The term “and/or” includes any and all combinations of one or more of the associated listed items. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well as the singular forms, unless the context clearly indicates otherwise. The terms “comprising” and/or “comprises,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
[0010] If not otherwise defined, all terms used herein have the same meaning as commonly understood by one having ordinary skill in the art to which this invention belongs. Furthermore, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0011] In the description of the invention, it will be understood that a number of techniques and steps are disclosed. Each of these has individual benefit and each can also be used in conjunction with one or more, or in some cases all, of the other disclosed techniques. Accordingly, for the sake of clarity, this description will refrain from repeating every possible combination of the individual steps in an unnecessary fashion. However, the specification and claims should be read with the understanding that such combinations are entirely within the scope of the invention and the claims.
[0012] The present invention, a system and method for object monitoring, localization, and control, will now be described by referencing the appended figures.
[0013] The present invention is a system and method to monitor, localize, and control objects in an implicitly defined geofenced area and to determine object position and orientation (vehicles only) by capturing the object's image, 3D data, or position using optical 1, LiDAR 2, and/or RADAR 3 sensors that are installed on structures mounted on the ground. The sensors capture images, 3D surface data points, and distances of the surface points of the object; all the captures are processed by a controller 6, with memory 5 holding the processing logic, to ultimately obtain more complete 3D data of the surface points of the object. The 3D surface data points from different sensors are then combined, or fused, by the controller into a single set of 3D surface points, called fusion data, under one coordinate system such as the GPS coordinate system. The single set of 3D surface points is then processed by the controller using a deep neural network and/or other algorithms to obtain the position and orientation of the object.
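The fusion and localization steps described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: it assumes each sensor's rigid mounting transform into the common site frame is known (e.g. derived from the known locations of the structures), and it substitutes a simple centroid-plus-principal-axis estimate for the deep-neural-network stage. All function names are hypothetical.

```python
import numpy as np

def fuse_point_clouds(clouds, transforms):
    """Fuse per-sensor 3D surface points into one common frame.

    clouds:     list of (N_i, 3) arrays, each in its sensor's local frame
    transforms: list of 4x4 homogeneous matrices mapping each sensor
                frame into the shared (e.g. GPS-anchored) site frame
    """
    fused = []
    for pts, T in zip(clouds, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 4) homogeneous
        fused.append((homo @ T.T)[:, :3])                # transform, drop w
    return np.vstack(fused)  # single set of 3D points ("fusion data")

def estimate_pose_2d(fused):
    """Crude stand-in for the pose-estimation stage: centroid plus the
    dominant horizontal axis of the fused points (via SVD/PCA)."""
    centroid = fused.mean(axis=0)
    xy = fused[:, :2] - centroid[:2]
    # principal axis of the ground-plane projection gives a heading
    # (ambiguous up to 180 degrees, as expected for a pure line fit)
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    heading = np.arctan2(vt[0, 1], vt[0, 0])
    return centroid, heading
```

In practice the patented method feeds the fusion data (or its projection) into a trained deep neural network rather than a line fit; the sketch only shows how per-sensor data can be brought into one coordinate system before that stage.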
[0014] Additionally, the controller or sensors can send current and desired future object positions and orientations to controllable objects such as vehicle 8. The controller and/or sensors can send their image data to a scene marking device 4 and receive marked image data for geofenced monitoring of objects. The controller or sensors send an alert to display devices 7 if objects, or abnormal object behavior, are detected within the geofenced area.
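A geofence check of the kind described above can be sketched as a standard point-in-polygon test, assuming the area marked on the scene marking device 4 has already been projected into the common site coordinate frame as a polygon of (x, y) vertices. The names below are illustrative, not from the specification.

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test.

    point:   (x, y) object position in the common site frame
    polygon: list of (x, y) vertices of the marked geofenced area
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def check_and_alert(object_pos, polygon, send_alert):
    """Invoke an alert callback when an object is inside the marked area."""
    if inside_geofence(object_pos, polygon):
        send_alert(f"object detected inside geofence at {object_pos}")
```

A real deployment would call `check_and_alert` on each localized object per sensing cycle, with `send_alert` delivering the message to display devices 7 over the networked communication channels.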