Collision avoidance system and method for an underground mine environment
09747802 · 2017-08-29
Assignee
Inventors
- Matthew A. Fisher (Rustburg, VA, US)
- Paul R. Carpenter (Lynchburg, VA, US)
- James E. Silverstrim (Moneta, VA, US)
CPC classification
B60K28/10
PERFORMING OPERATIONS; TRANSPORTING
B60W30/0956
PERFORMING OPERATIONS; TRANSPORTING
B60W30/0953
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
G08G1/166
PHYSICS
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
G06V40/10
PHYSICS
B60W30/085
PERFORMING OPERATIONS; TRANSPORTING
B60W2554/00
PERFORMING OPERATIONS; TRANSPORTING
B60W30/095
PERFORMING OPERATIONS; TRANSPORTING
B60W30/08
PERFORMING OPERATIONS; TRANSPORTING
B60W2554/80
PERFORMING OPERATIONS; TRANSPORTING
International classification
H04N7/12
ELECTRICITY
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
Abstract
Described are methods and systems for collision avoidance in an underground mine environment that use one or more of a computer vision component, an asset tracking component, and a motion detection component to determine and respond to potential collision threats. Imagery is captured and processed in real time so that assets of interest can be identified and evaluated for potential collision with other assets. Location data from the asset tracking system is likewise evaluated and used to determine the proximity of assets relative to the host. A final input is provided by the motion detection component, which intelligently determines movement patterns and direction of travel. Once these components' inputs are collectively evaluated, a proximity or threat value is generated that determines an audible or visual signal or action to prevent collision and increase safety in unfavorable conditions.
Claims
1. A collision avoidance system, the system comprising: a computer vision component comprising: an imaging modality comprising one or more thermal infrared or ultraviolet cameras configured to provide an image capture of a region of interest; a computer processor; and a memory comprising a set of computer-executable instructions configured for instructing the computer processor to analyze the thermal image capture to identify assets present in the region of interest; an asset tracking component based on fixed mesh radio nodes and mobile mesh radio nodes, wherein a mobile mesh radio node is placed on a first asset and the asset tracking component is configured to determine the location of the mobile mesh radio node based on a Received Signal Strength Indication (RSSI) between the mobile mesh radio node and surrounding fixed mesh radio nodes; at least one motion detection component capable of determining a directional velocity component for the asset tracking component and comprising an accelerometer-based motion sensor device placed on the first asset, wherein the directional velocity component comprises a speed and direction of travel; and a collision avoidance component which is configured to receive inputs from the computer vision component, the asset tracking component, and the motion detection component and combine the inputs into a collision avoidance algorithm programmed in a set of computer-executable instructions which instruct a computer processor to calculate a Threat Rating Value that determines a warning or action for the first asset to avoid collision with a second asset; wherein the computer-executable instructions are configured to instruct the computer processor to calculate the Threat Rating Value as:
TRV = (K_VH · A_VH · V_VH) + (K_VO · A_VO · V_VO) + (K_TS · max[TRV_TS1 … TRV_TSn]) wherein:
TRV_TS1 = C_TS1 · D_TS1 · V_TS1
TRV_TSn = C_TSn · D_TSn · V_TSn
K_VH = Weight constant for a host computer vision component input
A_VH = Amplitude level for the host computer vision component input
V_VH = Value of the host computer vision component input
K_VO = Weight constant for an object computer vision component input
A_VO = Amplitude level for the object computer vision component input
V_VO = Value of the object computer vision component input
K_TS = Weight constant for the asset tracking component input
C_TS1 = Confidence level for the first asset tracking component input
D_TS1 = Directional velocity component for the first asset tracking component input
V_TS1 = Value of the first asset tracking component input
TRV_TS1 = Threat rating value for the first asset tracking component input
C_TSn = Confidence level for the n-th asset tracking component input
D_TSn = Directional velocity component for the n-th asset tracking component input
V_TSn = Value of the n-th asset tracking component input
TRV_TSn = Threat rating value for the n-th asset tracking component input
TRV = Threat rating value for the Collision Avoidance Component.
2. The collision avoidance system of claim 1, wherein the first asset is a vehicle or human and the second asset is a vehicle or human.
3. The collision avoidance system of claim 1, wherein object recognition is achieved by training the computer vision component with positive samples of objects to be detected and with negative samples wherein no objects to be detected exist, wherein positive sample measurements are manually calibrated using the formula:
F = P · D / S, where F is the focal length of the camera; P is the width of the subject in pixels; D is the distance from the camera to the subject; and S is the size of the subject.
4. The collision avoidance system of claim 1, wherein the collision avoidance algorithm is capable of detecting co-location of humans riding in a vehicle and overriding the warning or action on a user interface for the system and the mobile mesh radio nodes where a vehicle must be stopped for on-boarding and off-boarding of human assets.
5. The collision avoidance system of claim 1, wherein: the motion detection component comprises an accelerometer placed on each vehicle and each human asset that is configured to provide a speed, direction of travel, and unique ID for each human and vehicle; the imaging modality comprises a thermal or ultraviolet imaging component on each vehicle to capture images in dark and dusty environments using one or more passive long wave infrared cameras and/or ultraviolet cameras configured for imaging one or more areas not otherwise capable of being seen by a driver, to display real-time live video to the driver and real-time object recognition corresponding to known objects including humans, vehicles, and electrical infrastructure; and the collision avoidance algorithm is on each vehicle and is configured to use position information, directional velocity information, a unique ID for each mobile object, and object recognition information as input values to calculate proximity between assets, speed and direction of travel between assets, and a threat rating value between the first asset and the second asset.
6. The collision avoidance system of claim 1, wherein the collision avoidance component is configured to receive inputs comprising object recognition information from the computer vision component, position information from the asset tracking component, and directional velocity information from the motion detection component and combine the inputs into the algorithm.
7. The system of claim 1, wherein the asset tracking component comprises a plurality of mobile mesh radio nodes and a plurality of fixed mesh radio nodes wherein: (a) a mobile mesh radio node is placed on each mobile vehicular asset and mobile human asset and the fixed mesh radio nodes are placed on fixed objects in the underground mine; and (b) the mobile mesh radio nodes and the fixed mesh radio nodes together form a Wireless Mesh Network capable of determining position information of each of the mobile vehicular assets and each of the mobile human assets in non-line-of-sight (NLOS) conditions in an underground mine environment based on a tracking algorithm that uses Received Signal Strength Indication (RSSI) calculations from the fixed mesh radio nodes, which together comprise multiple surrounding fixed mesh radio nodes with known locations.
8. The system of claim 1, wherein the computer vision component comprises one or more cameras and one or more object recognition algorithms.
9. The system of claim 1, wherein the computer vision component comprises one or more long wave infrared cameras or ultraviolet cameras and the imaging is capable of being captured at a rate of 1 to 100 frames per second to form a continual live video feed for analysis to perform real-time image processing and object recognition to determine short range line-of-sight collision threats, distance of threats, and/or speed of threats.
10. The system of claim 8, wherein one or more of the object recognition algorithms comprise appearance-based or feature-based techniques chosen from edges, gradients, Histogram of Oriented Gradients (HOG), Haar wavelets, linear binary patterns, extracted features and boosted learning algorithms, bag-of-words models, gradient-based and derivative-based matching approaches, Viola-Jones algorithm, template matching, image segmentation and blob analysis, local feature detectors, Speeded Up Robust Features (SURF), blob detection methods, or Maximally Stable Extremal Regions (MSER) and provides a realtime computer vision system capable of performing object recognition on thermal image frames received as a video feed for one or more of: object recognition of humans; object recognition of human faces; object recognition of vehicles; and object recognition of heat emitting infrastructure.
11. The system of claim 1, wherein the asset tracking component is configured to determine direction of travel and speed of a mobile mesh radio node using data from an accelerometer placed on the mobile node to measure directional velocity.
12. The system of claim 1, comprising collision avoidance software which provides user interface instructions for avoiding collisions between objects.
13. The system of claim 1, comprising at least one motion detection component comprising an accelerometer to provide directional velocity information for at least one mobile unit.
14. The system of claim 1, wherein the computer vision component comprises: a video imaging modality configured to provide an image capture of an asset; wherein the computer processor is configured to receive asset tracking information and video image frames for analysis to identify assets according to a set of computer-executable instructions stored in the memory.
15. The system of claim 1, wherein the computer-executable instructions are configured to calculate the proximity of a human form by the size and heat intensity of the thermal or ultraviolet image capture whereby a brighter image pixel indicates warmer areas and closer and larger humans span a greater number of pixels than do farther and smaller humans.
16. The system of claim 1, wherein: in addition to at least one of the motion detection components being placed on the first asset, at least one of the motion detection components is placed on the second asset; and inputs from the asset tracking component and one or more of the motion detection components are combined into the collision avoidance algorithm to determine an intersection point and time to intersection point of the first asset and the second asset.
17. A method for avoiding asset collisions, the method comprising: thermal imaging a first asset with a computer vision component and identifying the type of asset with the computer vision component, the computer vision component comprising: a video imaging component comprising one or more thermal infrared or ultraviolet cameras which provide video image frames comprising a thermal or ultraviolet image capture of an object; a computer processor; and a memory comprising a set of computer-executable instructions which instruct the computer processor to analyze video image frames received from the video imaging component to identify assets present in the thermal or ultraviolet image capture, wherein the set of computer-executable instructions employ object classification algorithms to identify the asset; tracking the location of the first asset with an asset tracking component comprising fixed mesh radio nodes and mobile mesh radio nodes, wherein a mobile mesh radio node is placed on a first vehicle or a human and the asset tracking component determines the location of the mobile mesh radio node based on a Received Signal Strength Indication (RSSI) between the mobile mesh radio node and surrounding fixed mesh radio nodes; tracking the speed and direction of travel of the first asset with a motion detection component which determines a directional velocity component for the asset tracking component based on an accelerometer-based motion sensor device placed on the first asset, wherein the directional velocity component comprises a speed and direction of travel; determining a Threat Rating Value through a collision avoidance component which receives inputs from the computer vision component, asset tracking component, and motion detection component and combines the inputs into a collision avoidance algorithm programmed in a set of computer-executable instructions which instruct a processor to calculate the Threat Rating Value; and issuing a warning or instruction for action for the first asset to avoid collision with a second asset based on the Threat Rating Value; wherein the Threat Rating Value is calculated as:
TRV = (K_VH · A_VH · V_VH) + (K_VO · A_VO · V_VO) + (K_TS · max[TRV_TS1 … TRV_TSn]) wherein:
TRV_TS1 = C_TS1 · D_TS1 · V_TS1
TRV_TSn = C_TSn · D_TSn · V_TSn
K_VH = Weight constant for a host computer vision component input
A_VH = Amplitude level for the host computer vision component input
V_VH = Value of the host computer vision component input
K_VO = Weight constant for an object computer vision component input
A_VO = Amplitude level for the object computer vision component input
V_VO = Value of the object computer vision component input
K_TS = Weight constant for the asset tracking component input
C_TS1 = Confidence level for the first asset tracking component input
D_TS1 = Directional velocity component for the first asset tracking component input
V_TS1 = Value of the first asset tracking component input
TRV_TS1 = Threat rating value for the first asset tracking component input
C_TSn = Confidence level for the n-th asset tracking component input
D_TSn = Directional velocity component for the n-th asset tracking component input
V_TSn = Value of the n-th asset tracking component input
TRV_TSn = Threat rating value for the n-th asset tracking component input
TRV = Threat rating value for the Collision Avoidance Component.
18. The method of claim 17, wherein the first asset is a vehicle or human and the second asset is a vehicle or human.
19. The method of claim 17, wherein the collision avoidance component receives inputs comprising object recognition information from the computer vision component, position information from the asset tracking component, and directional velocity information from the motion detection component and combines the inputs into the collision avoidance algorithm.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The accompanying drawings illustrate certain aspects of some of the embodiments of the present disclosure, and should not be used to limit or define the disclosure. Together with the written description the drawings serve to explain certain principles of the disclosure.
DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS OF THE INVENTION
(10) It is to be understood by persons of ordinary skill in the art that the following descriptions are provided for purposes of illustration and not for limitation. An artisan understands there are many variations that lie within the spirit of the disclosure and the scope of the appended claims. Unnecessary detail of known functions and operations may be omitted from the current description so as not to obscure the present disclosure.
(11) As used herein, the term “asset” refers to a vehicle or human in an underground mine environment. The vehicle may be any vehicle employed in underground mining, including but not limited to a personnel transport vehicle, a rescue vehicle, a utility vehicle, a loader, a truck, a shearer, a drill, a crane, a flat bed, a lift, a plow, a roof support carrier, and may be powered by any source including electric, battery, diesel, and gasoline. The human may be any person in the underground mining environment, including but not limited to miners, managers, foremen, supervisors, and support personnel.
(12) As used herein, the term “approximately” applied to a value refers to a value that ranges from minus 10% of the value to plus 10% of the value. Thus, “approximately” 100 would refer to any number from 90 to 110.
(14) Human operators diligently avoid collisions between their vehicles and other vehicles or pedestrians when a threat is perceived; however, many situations, such as underground mining, do not provide workers with sufficient sensory inputs to perceive collision threats. In underground mining, vehicle drivers may be located in a small space facing a wall, as depicted in the accompanying drawings.
(15) Computer Vision Component
(16) The computer vision component, an embodiment of which is depicted in the accompanying drawings, comprises an imaging modality and a computer processor that analyzes captured image frames to identify assets present in the region of interest.
(17) In embodiments, the means for video imaging 5 or cameras 5, depicted in the accompanying drawings, may comprise one or more passive long wave infrared cameras and/or ultraviolet cameras configured for imaging areas not otherwise capable of being seen by a driver.
(18) The use of a thermal camera allows humans to be identified even in dusty conditions and heavy clothing, as shown in the accompanying drawings.
(19) Multiple image frames are taken to form a continual live video feed for analysis. The number of image frames per second (fps) may range from 10 fps to 30 fps, 5 fps to 50 fps, or 1 fps to 100 fps in various embodiments. Video analysis is performed by the computer vision software on each frame in real time as the image frames are acquired from the camera. Previous frame analysis results are stored for comparison with incoming frames as they are captured. Real-time image processing and comparison with previous images give the computer vision component the ability to confidently determine short range line-of-sight collision threats, the distance of threats, and the speed of threats as they appear and progress.
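As a rough illustration of this per-frame loop only — assuming OpenCV, a thermal camera exposed as an ordinary video device, and a toy brightness-threshold detector standing in for the trained classifiers described below — the flow might look like:

```python
# Illustrative sketch only: assumes OpenCV (pip install opencv-python) and a
# thermal camera exposed as an ordinary video device. The brightness-threshold
# "detector" is a toy stand-in for the trained classifiers described below.
import cv2

def detect_warm_blobs(gray, thresh=200, min_area=500):
    """Treat bright pixels in a thermal frame as warm objects and return the
    bounding boxes of connected bright regions."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def run_video_analysis(source=0):
    cap = cv2.VideoCapture(source)
    prev_boxes = []                  # prior frame's results, kept for comparison
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detect_warm_blobs(gray)
        # Comparing boxes against prev_boxes frame-over-frame is what lets the
        # component estimate a threat's distance and approach speed over time.
        prev_boxes = boxes
    cap.release()
```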
(20) Object classification algorithms perform the role of computer vision and automated machine perception to identify the objects in the analyzed image frames. These algorithms have the ability to detect human forms, identify objects, and track motion. Using the heat signature, visible, or ultraviolet image outline contained in the images, the algorithms determine whether the object is a human form, even if the person is in a prone, supine, crouched, upright/standing, sitting, crawling, squatting, kneeling, or other natural human position. In one embodiment, the proximity of the human is calculated from size and heat intensity, whereby warmer areas appear as brighter image pixels and closer, larger humans span a greater number of pixels than farther, smaller humans do. Given a recognized body position of the human, comparisons are made to subsequent findings and distance is measured. In other embodiments, other types of proximity sensors may be used in place of or in addition to the heat intensity measurements, including sensors based on LIDAR (light detection and ranging), RADAR (radio detection and ranging), SONAR (sound navigation and ranging), ultrasonic, or other infrared sensors.
(21) Confident, acceptable object recognition is achieved by training the computer vision software with positive samples of objects that should be detected, and by providing negative samples whereby no desired objects to be detected exist. When training and calibrating the computer vision software with positive samples, measurements are first manually calibrated using the formula:
F = P · D / S
(22) Where:
(23) F is the focal length of the camera
(24) P is the width of the subject in pixels
(25) D is the distance from the camera to the subject
(26) S is the size of the subject
(27) As the subject's pixel size changes in the image, the distance to the object is calculated with a new pixel value P using the formula:
D = S · F / P
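A worked illustration of the calibration and distance formulas above (the subject size and pixel counts in the example are hypothetical, and the units are arbitrary as long as they are used consistently):

```python
def calibrate_focal_length(pixel_width, distance, subject_size):
    """F = P * D / S: one-time manual calibration with a subject of known
    size S placed at a known distance D from the camera."""
    return pixel_width * distance / subject_size

def estimate_distance(subject_size, focal_length, pixel_width):
    """D = S * F / P: after calibration, distance follows from the subject's
    new apparent width P in pixels."""
    return subject_size * focal_length / pixel_width

# Hypothetical numbers: a 1.83 m subject spans 120 px at a measured 10 m.
F = calibrate_focal_length(pixel_width=120, distance=10.0, subject_size=1.83)
# The same subject later spans 240 px, so it is now about 5 m away.
print(estimate_distance(1.83, F, 240))  # -> 5.0
```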
(28) The output of the video analysis is a set of values, based on the type of object detected in the scene and the confidence of that detection, that is continuously fed to the collision avoidance component in real time. Two distinct values are provided: one for human forms and another for fixed and mobile objects. The higher the value, the greater the confidence that the video analysis correctly identified the particular shape. Each value is also accompanied by an amplitude level which represents how close the object is to the camera. A large outline and/or a high heat signature results in a large amplitude level.
(29) Asset Tracking Component
(30) Another embodiment of this disclosure is a wireless communication and tracking system (i.e., asset tracking component) for an underground mine environment, shown in the accompanying drawings, which comprises fixed mesh radio nodes and mobile mesh radio nodes.
(31) The Fixed Mesh Radio Nodes and Mobile Mesh Radio Nodes form part of a Wireless Mesh Network (WMN). The Fixed Mesh Node (FMN) is a stationary dual-transceiver mesh radio unit that operates on the WMN. Multiple units operate together to form the semi-static infrastructure for the WMN. Each FMN has the capability to coordinate individual clusters within the WMN and route data through the network between mobile nodes and to a Gateway Node. An FMN can also communicate with a wired backbone, such as a leaky feeder system, through a wired backbone headend, as well as form the core links of the WMN.
(32) The Mobile Mesh Radio (MMR) is a portable device carried by personnel that allows them to have voice and data communication with a Network Operations Center and/or other personnel equipped with an MMR. An MMR can also serve as a relay link between another MMR and an FMN, or between a sensor mesh node (SMN) and an FMN.
(33) The accuracy may be enhanced to approximately 50 feet by adding transmit only Beacon Nodes (BCN) to form a grid with spacing of approximately 200 feet.
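The position calculation itself is not spelled out in this passage. As a minimal sketch, assuming a simple RSSI-weighted centroid over surrounding fixed nodes with known locations (the dBm-to-weight mapping below is an illustrative choice, not a disclosed one):

```python
def rssi_weight(rssi_dbm):
    """Map RSSI in dBm (negative; closer to 0 is stronger) to a relative
    proximity weight. The exponent base is an assumed tuning choice."""
    return 10 ** (rssi_dbm / 20.0)

def estimate_position(readings):
    """readings: list of ((x, y), rssi_dbm) pairs, one per surrounding fixed
    mesh node with a known location. Returns the weighted-centroid estimate
    of the mobile node's (x, y) position."""
    wx = wy = total = 0.0
    for (x, y), rssi in readings:
        w = rssi_weight(rssi)
        wx, wy, total = wx + w * x, wy + w * y, total + w
    return wx / total, wy / total

# Three fixed nodes on a 200 ft grid; the strongest reading pulls the estimate.
print(estimate_position([((0, 0), -55), ((200, 0), -75), ((0, 200), -80)]))
```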
(34) To improve the average tracking accuracy over a time interval, a maximum speed is applied to the estimated position. If the newly calculated position is farther from the previous position than can be reached by traveling at the maximum speed, the new position's distance is capped at the maximum distance allowed given the time difference between the two calculations. The bearing between the two positions is unaffected. The maximum speed value is set by the tracking system and is based on whether the tracked mobile object is identified as a human or a vehicle.
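A minimal sketch of that cap, assuming planar coordinates in feet and speed in feet per second (the example values are hypothetical):

```python
import math

def cap_position(prev, new, dt, max_speed):
    """If the newly calculated position implies travel faster than max_speed,
    clamp its distance from the previous position while keeping the bearing
    between the two positions unchanged."""
    dx, dy = new[0] - prev[0], new[1] - prev[1]
    dist = math.hypot(dx, dy)
    max_dist = max_speed * dt
    if dist <= max_dist:
        return new                   # plausible move: accept as-is
    scale = max_dist / dist          # same bearing, capped distance
    return prev[0] + dx * scale, prev[1] + dy * scale

# A walking human (assumed max ~6 ft/s) cannot have moved 100 ft in 1 s:
print(cap_position((0.0, 0.0), (100.0, 0.0), dt=1.0, max_speed=6.0))  # (6.0, 0.0)
```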
(35) The tracking confidence value assigned to a tracked device is determined by several factors. A strong RSSI value indicates that two tracked objects are close to each other. If the RF signal is from another mobile (non-fixed) device, the position calculation accuracy is reduced accordingly based on the distance of the transmitting device. Fixed infrastructure devices have very high position accuracy and are algorithmically favored in the mobile device's position calculation.
(36) The bearing or direction of travel is also a factor in the tracking analysis. Based on an accelerometer-based motion sensor device in the mobile objects, the speed and direction of travel are measured and transmitted over the wireless Asset Tracking Component along with the object's unique ID. The collision avoidance tracking algorithm determines whether the direction of travel of other mobile objects will intersect with its current path (forward or reverse). If the two paths do not intersect, the directional velocity component will be near zero. If the paths intersect, the intersection point will result in a higher directional velocity factor. Time to the intersection point (based on the calculated speed) will also affect the directional velocity component.
(37) Persons with a tracked device riding in the vehicle present a special case for the vehicle's collision avoidance alarms. The collision avoidance algorithm detects that they are co-located and overrides the alarms. The de-activation period is extended if the RSSI values are very high and shortened when the RSSI falls below a configured threshold.
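A sketch of that override logic; every threshold and period below is an assumption for illustration, not a value from the patent:

```python
def alarm_suppression_period(rssi_dbm, base_s=10.0,
                             very_strong_dbm=-40.0, threshold_dbm=-70.0):
    """Rider-aboard special case: how long (seconds) to keep the vehicle's
    collision alarms de-activated for a co-located tracked device."""
    if rssi_dbm >= very_strong_dbm:   # very high RSSI: clearly riding aboard
        return base_s * 2             # extend the de-activation period
    if rssi_dbm < threshold_dbm:      # RSSI below threshold: likely dismounted
        return base_s / 2             # shorten it so alarms resume promptly
    return base_s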
(38) Motion Component
(39) Aiding the Collision Avoidance Component in determining when and how to react to a collision threat is the Motion component. This component provides two valuable metrics in determining if a collision could occur: direction of travel and speed. Without adequate knowledge of these two metrics, proper collision avoidance would not be possible.
(40) In embodiments, the motion component may also be considered distributed within the underground mine environment, as it includes sensors placed on one or more mobile objects (i.e., vehicles) within the underground mine environment. Both direction of travel and speed are acquired from the motion sensor of each mobile object. When a change in gravitational force is detected, the motion component evaluates whether or not the change is useful for collision avoidance. For instance, if a coal shuttle vehicle receives a heavy load of coal, the sudden downward thrust of the vehicle will trigger the accelerometer and produce data that is not associated with avoiding collision, since the shuttle vehicle is not moving forward or backward. However, if the coal shuttle vehicle moves forward or backward, it will produce useful gravitational force metrics for determining direction and speed to avoid collision with nearby humans and vehicles. The accelerometer may be any type of accelerometer known in the art, analog or digital, including without limitation capacitive, piezoelectric, piezoresistive, Hall Effect, and magnetoresistive accelerometers.
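The shuttle-loading example suggests a simple filter on the accelerometer axes; the axis convention (z vertical) and the threshold below are assumptions for illustration:

```python
import math

def indicates_travel(ax, ay, az, horiz_thresh_g=0.2):
    """A loading jolt shows up almost entirely on the vertical axis (az) and
    is ignored, while sustained horizontal acceleration indicates forward or
    backward travel worth reporting. Readings are in g units."""
    return math.hypot(ax, ay) >= horiz_thresh_g

print(indicates_travel(0.02, 0.01, -1.8))  # heavy load dropped:  False
print(indicates_travel(0.30, 0.05, -1.0))  # shuttle pulling away: True
```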
(41) Collision Avoidance
(42) The Collision Avoidance Component (CAC) 8, an embodiment of which is shown in the accompanying drawings, receives inputs from the computer vision component, the asset tracking component, and the motion component and combines them to calculate a Threat Rating Value that determines the warning or action taken to avoid collision.
(43) Embodiments of the high level algorithms are demonstrated below:
(44) Threat rating value for tracking system input 1:
TRV_TS1 = C_TS1 · D_TS1 · V_TS1
(45) Threat rating value for tracking system input n:
TRV_TSn = C_TSn · D_TSn · V_TSn
(46) Threat rating for Collision Avoidance Component:
TRV = (K_VH · A_VH · V_VH) + (K_VO · A_VO · V_VO) + (K_TS · max[TRV_TS1 … TRV_TSn])
(47) Where:
(48) K_VH = weight constant for the host computer vision component input
(49) A_VH = amplitude level for the host computer vision component input
(50) V_VH = value of the host computer vision component input
(51) K_VO = weight constant for the object computer vision component input
(52) A_VO = amplitude level for the object computer vision component input
(53) V_VO = value of the object computer vision component input
(54) K_TS = weight constant for the asset tracking component input
(55) C_TS1 = confidence level for the first asset tracking component input
(56) D_TS1 = directional velocity component for the first asset tracking component input
(57) V_TS1 = value of the first asset tracking component input
(58) TRV_TS1 = threat rating value for the first asset tracking component input
(59) C_TSn = confidence level for the n-th asset tracking component input
(60) D_TSn = directional velocity component for the n-th asset tracking component input
(61) V_TSn = value of the n-th asset tracking component input
(62) TRV_TSn = threat rating value for the n-th asset tracking component input
(63) TRV = threat rating value for the Collision Avoidance Component
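Translated directly into code, with hypothetical numeric inputs in the example:

```python
def threat_rating_value(k_vh, a_vh, v_vh,        # host vision: K_VH, A_VH, V_VH
                        k_vo, a_vo, v_vo,        # object vision: K_VO, A_VO, V_VO
                        k_ts, tracking_inputs):  # K_TS and per-asset tuples
    """TRV = K_VH*A_VH*V_VH + K_VO*A_VO*V_VO + K_TS*max(TRV_TS1..TRV_TSn).
    tracking_inputs is an iterable of (C_TSi, D_TSi, V_TSi) tuples, one per
    tracked asset, each contributing TRV_TSi = C_TSi * D_TSi * V_TSi."""
    tracking = k_ts * max((c * d * v for c, d, v in tracking_inputs),
                          default=0.0)
    return k_vh * a_vh * v_vh + k_vo * a_vo * v_vo + tracking

# Hypothetical inputs: a confidently seen human close ahead, plus two tracked
# assets, the second of which is on an intersecting path.
trv = threat_rating_value(0.5, 0.9, 1.0,
                          0.3, 0.4, 0.5,
                          0.8, [(0.9, 0.0, 0.6), (0.95, 0.7, 0.8)])
print(trv)  # 0.45 + 0.06 + 0.8 * max(0.0, 0.532) = 0.9356
```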
(67) In the first scenario, the computer vision component of vehicle 201 identifies the vehicle ahead, and the computer vision component of vehicle 202 identifies the vehicle behind. The vehicle objects are recognized, and their speeds and distances are continuously calculated while travelling. The asset tracking component likewise determines that the vehicle assets are nearby, and determines the location of each vehicle. Last, the motion component on each vehicle determines its own speed and direction of travel. On each vehicle, the values are then passed to the collision avoidance component, and in this scenario, vehicle 202 is alerted of immediate danger of collision, and vehicle 201 slows to a safe speed of four MPH to allow vehicle 202 to distance itself farther from potential collision.
(68) In the second scenario, all of the collision determination steps are taken as they were in the first scenario. The difference here is that both vehicle 202 and vehicle 203 alert their drivers and reduce to a safe speed below five MPH to allow the drivers to carry on in a safe manner.
(70) The human 307 is in the front camera field of view 308 and travelling in the same direction as the vehicle. In this instance, the human can safely travel in the same path as the vehicle as long as the vehicle is not travelling too close or approaching too fast. Sudden changes in the perceived danger of collision will alert both the vehicle and the human.
(71) Human 311 is travelling near the vehicle, but not directly in its path. This scenario provides warning of nearby danger but does not stop the vehicle, as no collision is imminent. As human 311 enters the field of view 308 at close proximity, however, the vehicle will be halted to avoid colliding with the human.
(72) Human 312 is travelling perpendicular to the vehicle, such as in an underground mine crosscut. As the human approaches the vehicle, an audible and/or visible warning is provided for both parties, and the vehicle is halted if the human gets too close. This scenario is quite common in underground mines since crosscuts provide safe shelter for humans while vehicles pass. The vehicle's speed, however, will be reduced as a chance exists that the vehicle may turn into the crosscut where the human is located.
(73) Human 313, like human 311, is travelling near the vehicle and into a camera's field of view. The difference is that human 313 is approaching the rear of the vehicle while travelling in a direction similar to the vehicle's. The human is not in imminent danger of collision with the vehicle, so human 313 will be alerted to nearby danger, but the vehicle will not be halted.
(74) Human 314 is directly in the vehicle's rear camera field of view 309 but is travelling away from the vehicle. Similar to human 313, human 314 will be warned of nearby danger, but the vehicle will continue on at a safe speed, since there is not an imminent risk of collision with human 314.
(75) It will be understood that the various processes, operations, and/or algorithms described and/or depicted in this disclosure may be carried out by a group of computer-executable instructions that may be organized into routines, subroutines, procedures, objects, methods, functions, or any other organization of computer-executable instructions that is known or becomes known to a skilled artisan in light of this disclosure, where the computer-executable instructions are configured to direct a computer or other data processing device to perform one or more of the specified processes, operations, and/or algorithms. Embodiments of this disclosure include one or more computers or devices loaded with a set of the computer-executable instructions described herein, wherein the one or more computers or devices are instructed and configured to carry out the processes, operations, and/or algorithms of the disclosure. The computer or device performing the specified processes, operations, and/or algorithms may comprise at least one processing element, such as a central processing unit, and a form of computer-readable memory, which may include random-access memory (RAM) or read-only memory (ROM). In embodiments, the computer or device may be positioned on one or more vehicles as one, several, or all of the components of a Collision Avoidance System described in this disclosure. The computer-executable instructions can be embedded in computer hardware or stored in the computer-readable memory such that the computer or device may be directed to perform one or more of the processes, operations, and/or algorithms depicted and/or described herein. Embodiments of this disclosure also include a computer program product comprising one or more computer files comprising a set of computer-executable instructions for performing one or more of the processes, operations, and/or algorithms described and/or depicted herein. In exemplary embodiments, the files may be stored contiguously or non-contiguously on a computer-readable medium, in computer-readable memory on a single computer, or distributed across multiple computers. Embodiments of this disclosure also include a computer-readable medium comprising one or more computer files comprising a set of computer-executable instructions for performing one or more of the calculations, processes, operations, and/or algorithms described and/or depicted herein. Further, embodiments of the disclosure include a computer program product comprising the computer files, either in the form of the computer-readable medium comprising the computer files and, optionally, made available to a consumer through packaging, or alternatively made available to a consumer through electronic distribution. As used herein, a “computer-readable medium” includes any kind of computer memory such as floppy disks, conventional hard disks, CD-ROMs, Flash ROMs, non-volatile ROM, electrically erasable programmable read-only memory (EEPROM), and RAM.
(76) A skilled artisan will further appreciate, in light of this disclosure, how the processes, operations, and/or algorithms can be implemented, in addition to software, using hardware or firmware. As such, as used herein, the operations in this disclosure can be implemented in a system comprising any combination of software, hardware, or firmware.
(77) Embodiments of this disclosure may include a user interface which may be used in conjunction with the computer-executable instructions. For example, the user interface may include a graphical user interface configured to allow a user to access a camera image or video feed, display a Received Signal Strength Indication of a nearby fixed or mobile mesh radio node, or display one or more warnings or alarms to take action to avoid a collision, including a message. The message may indicate the level of threat of collision with an asset and/or provide instructions designed to avoid collision, such as to decelerate or execute an evasive maneuver. The graphical user interface may also communicate that such an action was automatically taken by the collision avoidance component. The graphical user interface may allow a user to perform these tasks through the use of text fields, check boxes, pull-downs, command buttons, and the like. A skilled artisan will appreciate how such graphical features may be implemented for performing the tasks of this disclosure.
(78) Such graphical controls and components are reusable class files that are delivered with a programming language. For example, pull-down menus may be implemented in an object-oriented programming language wherein the menu and its options can be defined with program code. Further, some programming languages' integrated development environments (IDEs) provide a menu designer, a graphical tool that allows programmers to develop their own menus and menu options. The menu designers generate behind the scenes the series of statements that a programmer could have created on their own. The menu options may then be associated with event handler code that ties each option to specific functions. Text fields, check boxes, and command buttons may be implemented similarly through the use of code or graphical tools. A skilled artisan will appreciate that the design of such graphical controls and components is routine in the art.
(79) The present disclosure has been described with reference to particular embodiments having various features. In light of the disclosure provided above, it will be apparent to those skilled in the art that various modifications and variations can be made in the practice of the present disclosure without departing from the scope or spirit of the disclosure. One skilled in the art will recognize that the disclosed features may be used singularly, in any combination, or omitted based on the requirements and specifications of a given application or design. For example, any of the methods described can be implemented in systems according to the disclosure, while any of the systems described can be configured to operate any of the inventive methods. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
(80) It is noted in particular that where a range of values is provided in this specification, each value between the upper and lower limits of that range is also specifically disclosed. The upper and lower limits of these smaller ranges may independently be included or excluded in the range as well. The singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. It is intended that the specification and examples be considered as exemplary in nature and that variations that do not depart from the essence of the disclosure fall within the scope of the disclosure. Further, all of the references cited in this disclosure including published patents, published patent applications, and non-patent literature are each individually incorporated by reference herein in their entireties and as such are intended to provide an efficient way of supplementing the enabling disclosure as well as provide background detailing the level of ordinary skill in the art.