Event triggered drone system and method for image collection and transmission
12585288 · 2026-03-24
CPC classification
G05D1/86
PHYSICS
G05D2111/32
PHYSICS
Abstract
An autonomous system and method for capturing event-driven aerial imagery utilizes a drone equipped with advanced sensors and navigation subsystems to operate without human intervention. Upon detecting predefined target events, such as structure fires, explosions, gunshots, emergency sirens, a specific license plate, or a specific face, the drone autonomously launches and employs direction-finding triangulation to pinpoint the event's latitude, longitude, and elevation. The drone autonomously executes optimized flight profiles. Data transmission utilizes bonded and blended communication channels to ensure reliable video streaming to users, such as first responders or news agencies. Compliance with FAA altitude regulations is enforced, and the system can be controlled remotely via Internet or cellular connections. Continuous operation is enabled through tethered power or automated battery replacement stations. Applications include law enforcement, emergency services, and news gathering, providing immediate aerial reconnaissance without requiring human operators to be present at unpredictable event locations.
Claims
1. An autonomous aerial imagery system comprising six subsystems and six functional modules configured to work together, comprising: a drone comprising a propulsion subsystem, a control subsystem, an electric power subsystem, a sensor subsystem, a video processing subsystem, and a wireless communications subsystem; a first functional module configured for event detection by autonomously detecting occurrence of target events without human intervention; a second functional module configured for event location using triangulation from multiple spatial positions; a third functional module configured for autonomous profile flight based on detected event types; a fourth functional module configured for multiplexed data communication across a plurality of wireless communications channels external to the drone; a fifth functional module configured for flight endurance despite limited battery charge; and a sixth functional module configured for modified parameters by remote control, wherein the second functional module configured for event location comprises: direction-finding components including at least four sensorpods mounted on respective rotor booms of the drone; each sensorpod having an acoustic sensor and an optical sensor positioned at different spatial locations; wherein the second functional module autonomously determines spatial coordinates of a target event by calculating latitude, longitude, and elevation through triangulation using multiple bearing measurements taken from the different spatial positions of the sensorpods.
2. An autonomous aerial imagery system comprising six subsystems and six functional modules configured to work together, comprising: a drone comprising a propulsion subsystem, a control subsystem, an electric power subsystem, a sensor subsystem, a video processing subsystem, and a wireless communications subsystem; a first functional module configured for event detection by autonomously detecting occurrence of target events without human intervention; a second functional module configured for event location using triangulation from multiple spatial positions; a third functional module configured for autonomous profile flight based on detected event types; a fourth functional module configured for multiplexed data communication across a plurality of wireless communications channels external to the drone; a fifth functional module configured for flight endurance despite limited battery charge; and a sixth functional module configured for modified parameters by remote control, wherein the fourth functional module configured for multiplexed data communication comprises: a dispatcher and a gatherer linked by a plurality of wireless communications channels; wherein the dispatcher packetizes video streams and distributes them across the plurality of communications channels, and the gatherer reassembles the video streams at a destination.
3. An autonomous aerial imagery system comprising six subsystems and six functional modules configured to work together, comprising: a drone comprising a propulsion subsystem, a control subsystem, an electric power subsystem, a sensor subsystem, a video processing subsystem, and a wireless communications subsystem; a first functional module configured for event detection by autonomously detecting occurrence of target events without human intervention; a second functional module configured for event location using triangulation from multiple spatial positions; a third functional module configured for autonomous profile flight based on detected event types; a fourth functional module configured for multiplexed data communication across a plurality of wireless communications channels external to the drone; a fifth functional module configured for flight endurance despite limited battery charge; and a sixth functional module configured for modified parameters by remote control, wherein the drone comprises at least four sensorpods at least one of which is attached to the distal end of each rotor boom, wherein each sensorpod comprises both an acoustic sensor and an optical sensor configured for directional sensing and triangulation.
4. A method for obtaining aerial video imagery of a target event upon detection of the target event comprising: subscribing to a plurality of communications channels; acquiring a drone; modifying the drone with a system enabling the drone to: detect the occurrence of the target event autonomously; determine the location of the target event autonomously; fly appropriate flight profiles relative to the target event autonomously; transmit video through the plurality of communications channels; and maintain its source of electrical power; installing a gatherer near a transmitter or Internet server; defining the target event; providing the system with appropriate sound, light, and vibration signatures associated with the target event in the form of templates; providing the system with event-determined flight-profile definitions related to types of target events; allowing the system to be transported to a location proximate to where the target event is expected to occur; activating the plurality of communications channels; autonomously detecting occurrence of the target event; launching the drone autonomously; locating the target event autonomously; flying the event-determined flight profiles autonomously; sending video through the plurality of communications channels; and allowing a user to monitor mission performance and video collected and to make adjustments by sending commands.
5. The method of claim 4, wherein sending the video through the plurality of communications channels comprises: using a dispatcher connected to the gatherer through the plurality of communications channels; chunking data streams received from a sensor subsystem into units suitable for allocating among the plurality of communications channels; allocating such units to each of the plurality of communications channels based on its characteristics; and detecting units that do not arrive at the gatherer and throttling the particular communications channel of the plurality of communications channels on which that unit was sent.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
(1) To identify the discussion of any particular element or act easily, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
DETAILED DESCRIPTION
Overview
(30) The system is distinguished from the prior art by its automatic event triggering, by its use of direction finding to locate the target event, by its autonomous execution of flight profiles appropriate to collecting imagery of the target event, by its use of a plurality of communications channels for transmitting its imagery and data, and by its techniques for extending flight endurance.
(31) The system comprises six subsystems: a propulsion subsystem, a control subsystem, an electric power subsystem, a sensor subsystem, a video processing subsystem, and a wireless communications subsystem, configured to work together with six functional modules: a first functional module configured for event detection; a second functional module configured for event location; a third functional module configured for profile flight; a fourth functional module configured for multiplexed data communication; a fifth functional module configured for flight endurance despite limited battery charge; and a sixth functional module configured for modified parameters by remote control.
(32) System operation is premised on automatic detection of a target event such as one likely to be of interest to law enforcement, to other first responders or to news organizations.
(33) The system accepts inputs from a plurality of event detectors comprising gunshot detectors, license plate readers, emergency siren detectors, human face detectors, and microphones and cameras feeding imagery and sounds to pattern matching computer code in the system.
(34) Users can specify in advance sound, light, and vibration thresholds and particular patterns they want the system to recognize by supplying templates developed through machine learning. No claim is made to the machine learning processes incident to the development of such templates.
(35) Once the system detects such a specified threshold or pattern, it launches the drone and applies a triangulation pattern to pinpoint the spatial coordinates of the detected target event, unless the user has already supplied the longitude, latitude, and elevation coordinates of the event.
(36) The drone then flies flight profiles appropriate to the event. The flight profiles are provided by the user and correlated with event type. The system selects a flight profile corresponding to the detected event and flies it autonomously.
(37) The imagery enabling these steps can be collected by means of a variety of configurations of airborne and ground-based assets. In the simplest embodiment, a single drone performs all of the functions: event detection, launch, event location, image capture, and image transmission.
(38) In other embodiments some of the functions, particularly those related to event detection and location, are delegated to ground-based sensors. In still other embodiments, a plurality of drones shares the functions among themselves and with ground-based components.
(39) As the drone captures imagery, it transmits digital representations of the imagery to its mission controller and thence through bonded and blended Wi-Fi, cellular, microwave, and satellite links to a receiving station, which might be a television station, an Internet equivalent of a television station, or a law enforcement or other first-responder agency.
(40) Drones are kept in the air through autonomous use of battery replacement stations or tethers that transmit continuous electrical power while also transmitting data up and down to the drone.
(41) A single drone embodiment may be associated with automatic ground-based battery replacement, or it may not, assuming that the drone can complete its mission within the endurance limits necessitated by its propulsion and onboard battery system.
(43) Data Communications
(44) Together the video processing subsystem 312 and the wireless communications subsystem 314 constitute the video downlink 108. The wireless communications subsystem 314, the control link 126, and the video downlink 108 connect the drone with the remote control device 124, the dispatcher 110, and the mission controller 128.
(45) As in the parent application, in an alternate embodiment, a user controls the system by entering commands on a remote console 2302 linked to the mission controller 128 by an Internet connection 2301. The remote console 2302 and the mission controller 128 would be configured as nodes with Internet protocol (IP) addresses.
(46) As in the parent application, in an alternate embodiment, a user controls the system by entering commands on a remote console 2402 linked to the mission controller 128 by a cellular connection 2401.
(47) As in the parent application, either an Internet or a cellular connection can carry video imagery back to the remote human user, as well as telemetry and commands to and from the mission controller. Remote operation over these kinds of links may be desirable when information about the most desirable launch circumstances is not available when the drone and its ground support devices are positioned. Internet or cellular connectivity allows a user to position the system and leave without setting up its mission profile, launch trigger, or target event coordinates, inputting such information only later, when it becomes available. Time is of the essence for event-driven drone imagery capture. Accordingly, the system utilizes the bonded and blended digital communication techniques disclosed in U.S. Pat. No. 12,250,377 to send imagery captured by the drone to news entities or public safety agencies.
(48) As in the parent application, providing for the transmission of both electrical power and data over the same conductors reduces the weight of a tether that might otherwise include separate conductors for power and data.
(49) The system makes use of blended and bonded data communications channels to ensure that high-quality imagery is made available to the human users at remote locations.
(50) The data transmitted from the system over these bonded and blended links comprise location coordinates, still and moving imagery, speed, and updated coordinate information.
(51) A digital signal processor on the drone compresses the captured video stream, adhering to industry standards such as H.264 or H.265, reducing the data load for transmission without sacrificing quality. The drone's remote control device allows the operator to manage both flight maneuvers and camera controls, while a radio transmitter onboard ensures the transmission of high-definition video. The remote control device has a plurality of controls allowing a human user to command aerial vehicle maneuvers and flight profiles capable of launching the drone and flying it to a position where it has a view of the target event. The remote control device and the drone have interoperable computer software and hardware that use data from onboard sensors to carry out human commands autonomously, to maintain position, and to adhere to flight profiles. The drone is initially activated by a human user, typically a journalist, and commanded by that human user to fly to a position where it has a view of the target event. Thereafter, the drone maintains its position and flight path and otherwise carries out human commands autonomously through its onboard sensors and navigation software.
(52) The system's dispatcher manages the flow of video data from the drone. It chunks the video stream into smaller segments, ensuring that these packets are appropriately sized for transmission across the various wireless digital links, which are blended to carry data from the single stream originating on the drone. The dispatcher's task is to monitor network performance, assigning sequence numbers to each packet and transmitting them across the available channels. This ensures that the video data flows smoothly, even when network conditions are less than optimal.
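For illustration only, the chunk-tag-distribute behavior described above can be sketched in a few lines of Python. The chunk size, channel names, and the send() stub are assumptions made for the sketch, not details taken from the disclosure; a production dispatcher would replace the round-robin rotation with the capacity-aware allocation described in the paragraphs that follow.

```python
import itertools

def send(channel: str, packet: bytes) -> None:
    """Transmit stub; a real system would hand the packet to the radio for `channel`."""

CHUNK_SIZE = 1200  # bytes; illustrative, kept below a typical cellular-path MTU

class Dispatcher:
    """Minimal sketch of the dispatcher's chunking and sequencing loop."""

    def __init__(self, channels):
        self.rotation = itertools.cycle(channels)  # e.g. ["cell_1", "cell_2", "satellite"]
        self.sequence = 0
        self.sent_log = {}  # sequence number -> channel, retained for throttling feedback

    def dispatch(self, video_stream: bytes) -> None:
        for offset in range(0, len(video_stream), CHUNK_SIZE):
            chunk = video_stream[offset:offset + CHUNK_SIZE]
            header = self.sequence.to_bytes(4, "big")  # sequence number prefixed to each chunk
            channel = next(self.rotation)              # simple round-robin until channel statistics exist
            self.sent_log[self.sequence] = channel
            send(channel, header + chunk)
            self.sequence += 1
```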
(53) At the receiving end, a gatherer collects the video packets transmitted over multiple wireless links. The gatherer's responsibility is to reassemble the packets in the correct order using the sequence numbers, effectively reconstructing the original high-definition video stream. The system handles varying latency and performance across channels, ensuring that even delayed packets arriving over slower links like satellite are correctly reordered to maintain video integrity.
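The gatherer's reordering role reduces to a priority-queue pattern. A minimal sketch under the same assumed framing (a 4-byte big-endian sequence header) follows; the per-channel buffer sizing and latency handling described in the paragraphs below are omitted for brevity.

```python
import heapq

class Gatherer:
    """Minimal sketch of sequence-number reassembly at the receiving end."""

    def __init__(self):
        self.expected = 0   # next sequence number owed to the output feed
        self.pending = []   # min-heap of (sequence, payload) held while out of order

    def receive(self, packet: bytes) -> bytes:
        seq = int.from_bytes(packet[:4], "big")
        heapq.heappush(self.pending, (seq, packet[4:]))
        out = []
        # Release every chunk that is now contiguous with the reconstructed stream.
        while self.pending and self.pending[0][0] == self.expected:
            _, payload = heapq.heappop(self.pending)
            out.append(payload)
            self.expected += 1
        return b"".join(out)  # bytes ready for the transmitter/server feed
```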
(54) The system and method are suitable for over-the air broadcasting, broadcasting through wired terrestrial networks, broadcasting through satellite services, and streaming through the Internet.
(55) One of the system's strengths lies in its use of sequence numbers to manage data transmission over blended networks. Since different communication channels often exhibit different speeds and latency, packets may arrive out of order. The dispatcher's sequencing mechanism ensures that each packet is tagged with a unique sequence number, which the gatherer uses to reorder them. This process prevents corrupted data or jittery video playback, ensuring that the transmission remains smooth even when packets arrive out of sequence.
(56) To handle varying transmission speeds, the system incorporates a robust flow control mechanism. This ensures that no single connection is overwhelmed by the data rate, preventing dropped packets and minimizing delays. The system balances the load across faster and slower links by adjusting transmission rates in real-time, based on feedback from the receiver. This adaptive flow control maintains synchronized data delivery, especially when different channels have significant differences in bandwidth and latency.
(57) Buffering is another element of the system's design. Each channel in the blended network may have different latency characteristics, and buffering helps accommodate these differences. At the receiving end, buffers hold packets until all data arrives, ensuring the correct reassembly of the video stream. Larger buffers are used for high-latency links, such as satellite, while smaller buffers handle faster, more reliable connections. This adaptive buffering system ensures that no data is lost or processed out of order.
(58) The system is also designed to handle packet loss effectively. If a chunk of data is lost in transmission, the gatherer sends feedback to the dispatcher in the form of the sequence numbers of the missing chunks, requesting that the missing chunks be resent or allowing the system to throttle the underperforming channel. Additionally, techniques such as forward error correction (FEC) can reconstruct lost packets without requiring retransmission, further enhancing the system's resilience against packet loss and ensuring smooth video delivery even in challenging environments.
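As one concrete example of forward error correction, and only as an assumption (the disclosure does not specify an FEC scheme), a single XOR parity chunk computed over each group of equal-length chunks lets the gatherer rebuild any one lost chunk in the group without retransmission:

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    """Parity chunk computed over a group of equal-length data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(survivors: list[bytes], parity: bytes) -> bytes:
    """XOR the surviving chunks with the parity chunk to rebuild the one lost chunk."""
    missing = bytearray(parity)
    for chunk in survivors:
        for i, b in enumerate(chunk):
            missing[i] ^= b
    return bytes(missing)
```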
(59) The system comprises a drone 102, a target event 104, a video downlink 108, a dispatcher 110, a first communications channel 112, a second communications channel 114, a third communications channel 116, a satellite link 118, a microwave link 120, a cellular link 122, a remote control device 124, and a control link 126.
Vehicle Technology
(62) In some embodiments, the drone 102 is a multicopter equipped with obstacle detection sensors to navigate autonomously, flying a flight profile determined by the system. In other embodiments, the drone is a fixed-wing aircraft equipped with obstacle detection sensors to navigate autonomously, flying a flight profile determined by the system.
(63) The drone 102 comprises a body, rotors 202, cameras 204, a gimbal 206, motors 208, associated DC brushless motor controllers 210, a control module 212, antennas 214, a video downlink transmitter 216, a command transceiver 218, a digital signal processor 222, and a battery 220.
(64) The drone 102 may be a rotary wing vehicle with multiple rotors known as a multicopter, or it may be a fixed-wing unmanned aircraft. Only a rotary wing vehicle is shown.
(65) The components of the drone 102 comprise a propulsion subsystem 304, an electric power subsystem 308, a sensor subsystem 310, a control subsystem 306, a video processing subsystem 312, and a wireless communications subsystem 314. In one embodiment, the propulsion subsystem 304 uses four brushless electric motors controlled by digital brushless controllers to rotate four rotors. In the electric power subsystem 308, a rechargeable battery supplies electrical power to the motors, to one or more onboard computers, and to a plurality of navigation and indicator lights. The sensor subsystem 310 comprises one or more high-definition cameras, a lidar sensor, and a sonar sensor. The control subsystem 306 computes the orientation of the drone 102 in space from inputs from the sensor subsystem 310 and translates commands from the remote control device 124 and the mission controller 128 into detailed inputs to the digital brushless controllers to effect desired revolutions per minute (RPM) of each rotor, thereby bringing about the desired pitch, roll, yaw, and thrust to place the drone 102 on a commanded flight path.
(66) The remote control device 124 allows a human user to enter commands to control drone 102 flight maneuvers, including launch and landing. The mission controller 128 allows a human user to pre-program the system by entering instructions as to when the drone 102 should launch and where it should fly to capture optical imagery 106 of a target event 104. The mission controller 128 receives data from the drone 102 through the wireless communications subsystem 314 and combines it with commands from the remote control device 124 to manage drone flight and maneuvers.
(67) The propulsion subsystem 304, control subsystem 306, electric power subsystem 308, the sensor subsystem 310, the video processing subsystem 312 and the wireless communications subsystem 314 are native to the parent application, but are enhanced by components disclosed in this application below.
(68) The basic drone 102 from the parent application is modified to enable autonomous event detection and location.
(69) The control subsystem is configured to navigate to the target event autonomously based on preprogrammed coordinates and flight profile parameters or based on coordinates of an event detected by the system.
(70) The functions of the subsystems are distributed between onboard hardware and software and ground-based hardware and software, balancing processing capability with wireless bandwidth requirements for data transmission. For example, doing more of the direction-finding processing in ground-based computer hardware requires more data exchange over the wireless connection. Doing more of it aboard the drone reduces wireless data communication but requires more computation capability aboard the drone.
(71) As modified, the drone 102 also comprises sensorpods attached at the distal end of each rotor boom: a first sensorpod 408, a second sensorpod 410, a third sensorpod 412, and a fourth sensorpod 414.
(72) The inherent direction-finding capability of the drone is enhanced by attaching sensorpods at the ends of each rotor boom. Each sensorpod contains an acoustic and an optical sensor. By combining and analyzing the differences among the signals received from the plurality of acoustic and optical sensors, the system can use triangulation, beamforming, and binocular vision to facilitate the location of the target object. For example, beamforming can rely on phase differences between the sound waves received at the acoustic sensors on the four different sensorpods. Triangulation can use the binocular vision available from the forward pair of optical sensors, as well as the binocular vision available from the left and right pairs of optical sensors, to determine the position of the target object. Fusion of the acoustically determined location and the optically determined location improves accuracy.
(73) The first sensorpod 408 is attached to the forward left rotor boom 520, the second sensorpod 410 is attached to the forward right rotor boom 522, the third sensorpod 412 is attached to the aft right rotor boom 526 and the fourth sensorpod 414 is attached to the aft left rotor boom 524.
(74) Each sensorpod is equipped with an acoustic sensor 604 and an optical sensor 606, a gimbaled camera in some embodiments. Only the first sensorpod 408 is depicted, but it should be understood that each of the four sensorpods is similarly equipped with an acoustic sensor and an optical sensor.
Detecting the Event
(75) The present disclosure modifies a basic drone available on the market to enable it to detect events for which imagery is desirable. The disclosure in the parent application is enhanced in this application by providing additional detail as to the structure and functioning of the apparatus for identifying events.
(76) The first functional module is configured for event detection. It comprises a plurality of sensors selected from the group consisting of gunshot detectors, emergency siren detectors, license plate readers, human face detectors, motion detectors, light change detectors, vibration detectors, acoustic sensors, and optical sensors, wherein the first functional module compares real-time sensor inputs to pre-stored templates to identify occurrence of a target event.
(77) The target event entered by the human user may not occur at a particular time, but it may be indicated by input from a ground-based sensor, such as a light or motion detector. In some situations, the mission controller is programmed to launch the drone 102 and begin capturing video imagery, for example when a motion detector detects movement in the vicinity or a light detector detects changes in lighting.
(78) The drone 102 disclosed in the parent application is modified with sophisticated event-detection technology, via commercially available event detectors or via pattern-matching software added to the parent as part of this present disclosure, that accepts a predefined event condition to be detected by a plurality of event sensors, the predefined event condition including at least one of a motion detection, a light change, or a vibration detection. Detection of the predefined event condition identifies the target event.
(79) The sensor subsystem 310 comprises products that enable detection of emergency sirens, gunshots and explosions. Signals from those products and from the optical sensor 606 and the acoustic sensor 604 are compared to data sets comprising stored images of newsworthy events and events of interest to law enforcement such as explosions, structure fires and unlawful assemblies to enable pattern matching of images captured in real-time.
(80) The sensorpods installed on the booms of the drone 102 allow it to collect data from the event to permit pattern matching with acoustic signatures and light or infrared signatures associated with the target event in the form of templates pre-stored in the system before launch.
(81) The mission controller comprises an input device configured to accept coordinates of the target event, camera presets for capturing imagery of the target event, and parameters for a flight profile over the target event.
(82) Related technologies for explosion detection use seismic sensors; pressure spike detectors, in which sudden increases in air pressure indicate an explosion; sound signature analysis supported by templates of explosion wave forms; and data fusion that combines seismic, acoustic, and optical data to reduce false alarms. In some embodiments, the sensor subsystem 310 also comprises such explosion detection technology.
(83) High-risk locations such as banks, jewelry stores, pharmacies, and hospitals often have emergency alert systems linked directly to law enforcement agencies. The sensor subsystems in some embodiments are configured to receive alarms from such systems and treat the alarm as a target event. In such embodiments, metadata associated with the alarm would provide spatial coordinates for the target event.
(84) The system also could utilize face-detection technology and treat the recognition of a predefined face as a target event, and launch the drone and cause it to follow the individual with that face. It also could utilize license-plate-detection technology and treat the recognition of a predefined license plate as a target event. The sensor subsystems in some embodiments comprise human face detectors or license plate detectors.
(85) In some application environments, event detection may be more effective if it is based on sensors above the ground. The system accommodates sensors such as license plate readers and gunshot detectors located on above-ground structures such as traffic light supports or utility poles.
(86) The system also accommodates event detectors mounted on drones which remain aloft at the end of tethers that supply electrical power to them and which accommodate data exchange and power simultaneously. The tether carries electrical power from the ground to the drone, telemetry from the drone to the ground, and digital control commands from the ground to the drone.
(87) The sensor subsystem integrates the signals received from all the sensors, whether installed on the drone, on a separate airborne vehicle or on the ground.
(88) The mission controller 128 transforms user input regarding time of day or sensor conditions and signals from the sensor subsystem 310 into appropriate commands.
Locating the Event
(89) The present disclosure modifies a basic drone available in the market to enable it to locate events for which imagery is desirable. The disclosure in the parent application is enhanced in this application by providing additional detail as to the structure and functioning of the triangulation processes used to identify the latitude, longitude, and elevation of the event.
(90) The second functional module is configured for event location. It comprises direction-finding components including sensorpods mounted on rotor booms of a drone, each sensorpod having an acoustic sensor and an optical sensor.
(91) The second functional module determines spatial coordinates of the target event by autonomously determining latitude, longitude, and elevation through triangulation using multiple bearing measurements taken from different spatial positions.
(92) In some circumstances the location of an event is known in advance, such as a bridge or a building about to be demolished. When the location is known in advance the user of the system may enter the latitude and longitude or other data sufficient to define the location. In such a case only the occurrence of the event need be detected.
(93) In other circumstances, the exact location of an event is not known in advance, for example a gunshot or an emergency siren. When the location is not known in advance, the system determines spatial coordinates of the target event by autonomously determining a latitude, longitude, and elevation of an event by in-flight direction-finding triangulation using multiple bearing measurements taken from different spatial positions.
(94) The system uses multiple bearing measurements taken from different spatial positions. The system accomplishes triangulation, direction finding, and location of the event by processing signals from the sensorpods 408, 410, 412, and 414 located on the booms of the drone. Optical sensors determine the azimuth of the event-triggering image by comparing the images from the four optical sensors on the four sensorpods. These optical sensors may be cameras or they may be infrared detectors. Widely separated optical sensors on the same drone use binocular vision to estimate not only azimuth but distance from the drone on which they are mounted. The drone knows its latitude and longitude from its GPS system, and it can take the azimuth and distance from its binocular optical sensor system to determine the latitude and longitude of the event.
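The geometry in this paragraph can be made concrete with a short sketch. The stereo relation and flat-earth projection below are standard; the baseline, pixel focal length, and disparity inputs are assumed to come from camera calibration and image matching, which the disclosure does not detail.

```python
import math

def stereo_range(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Range to the event from a pair of optical sensors (pinhole stereo relation)."""
    return baseline_m * focal_px / disparity_px

def event_position(drone_lat: float, drone_lon: float,
                   azimuth_deg: float, range_m: float) -> tuple[float, float]:
    """Project azimuth and range from the drone's GPS fix to an event latitude and
    longitude, using a flat-earth approximation adequate at typical sensing ranges."""
    az = math.radians(azimuth_deg)  # azimuth measured clockwise from true north
    dlat = range_m * math.cos(az) / 111_320.0  # metres per degree of latitude
    dlon = range_m * math.sin(az) / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon
```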
(95) Those optical sensors need not be mounted only on the booms of the drone; they also can be ground-based, mounted on a tethered drone, or mounted on a plurality of drones in flight.
(96) The optical sensors are supplemented by acoustic sensors also mounted on the sensorpods 408, 410, 412, and 414. These acoustic sensors can determine the location of an event with an acoustic signature. Acoustic sensors installed on sensorpods 408, 410, 412, and 414 are equipped with acoustic filters that suppress the acoustic signature of the drone's motors and rotors.
(97) Widely spaced microphones on the drones permit the azimuth of a sound source to be determined by one or more of the well-known means for triangulating sound including time differences of arrival, direction of arrival, and phase differences at multiple sensors. Time differences of arrival determine the times at which a sound wave reaches two different sensors. Because sound travels at a known speed, the distance from each sensor can be computed, and the sound source lies on a hyperbola with the two sensors as foci. Multiple sensor pairs permit locating the sound source at the intersection of the hyperbolas.
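For a single sensor pair, the time-difference relation reduces to one line. The sketch below assumes a far-field source and standard sea-level sound speed; the clamp guards against noisy measurements that would push the ratio outside the valid arcsine domain.

```python
import math

SPEED_OF_SOUND_MPS = 343.0  # dry air at 20 °C

def tdoa_bearing_deg(delta_t_s: float, sensor_spacing_m: float) -> float:
    """Bearing of a far-field sound source, in degrees off the perpendicular
    bisector of the two-sensor axis, from its time difference of arrival."""
    ratio = SPEED_OF_SOUND_MPS * delta_t_s / sensor_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))
```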
(98) Direction-of-arrival techniques use beamforming or array processing to estimate the angle from which the sound is coming; the azimuth is computed from phase differences between the received sound waves. The sensor subsystem 310 transforms user inputs regarding target event location into formats usable by the control subsystem 306 of the drone 102.
(99) In the case of a license plate detector, the system utilizes pixel size analysis, together with known data about license plate dimensions, to determine the distance of a target license plate from the known coordinates of the license plate detector.
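Pixel-size ranging follows directly from the pinhole camera model. In the sketch below, the 0.3048 m (12-inch) plate width is the standard United States dimension, and the focal length in pixels is assumed to come from calibration of the detector's camera.

```python
def plate_distance_m(focal_px: float, plate_width_px: float,
                     plate_width_m: float = 0.3048) -> float:
    """Range to a license plate from its apparent width in the image."""
    return plate_width_m * focal_px / plate_width_px
```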
(100) In alternative embodiments the acoustic sensors may be located on the ground, they may be installed on a tethered drone, or they may be installed on untethered drones.
(101) The event can be located by triangulation. The drone 102 flies to three arbitrary points at the same altitude and determines the latitude and longitude of each point by means of the built-in GPS system and the azimuth of the signal from that point by means of the direction-finding equipment onboard the UAV. A vector from each such point is defined by the latitude and longitude of the point and the angle represented by the azimuth. The intersection of the vectors determines the location of the event.
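A least-squares version of this vector-intersection step, tolerant of small bearing errors across the three (or more) fixes, might look like the following sketch; representing each fix as east/north offsets in metres from a local reference point is an assumption made for the sketch.

```python
import math

def triangulate(fixes: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Least-squares intersection of bearing lines.

    Each fix is (east_m, north_m, azimuth_deg), azimuth clockwise from north.
    Minimises the sum of squared perpendicular distances from the estimated
    event position to every bearing line."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, y, az_deg in fixes:
        az = math.radians(az_deg)
        nx, ny = math.cos(az), -math.sin(az)  # normal to the bearing direction
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        proj = nx * x + ny * y
        b1 += nx * proj
        b2 += ny * proj
    det = a11 * a22 - a12 * a12  # zero only if all bearings are parallel
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

For example, triangulate([(0, 0, 45), (100, 0, 315)]) returns approximately (50.0, 50.0), the intersection of the two bearing lines.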
(103) In locating the event, the system allows for the possibility that the altitude of the UAV may be lower than the elevation of the event. As a first step, it asks whether the azimuth for the event signal is less than 90 degrees, which would signify that the event is above the UAV. If that condition is satisfied, the drone climbs and takes another direction-finding fix.
(104) Alternatively the drone launches and ascends to a position where it has a good view of the event. The system then takes multiple photographs. The drone uses its GPS-determined position and azimuth and elevation information from the photographs to determine the geographic position of the event.
(105) The triangulation functionality provides a more finely grained level of detail about location of the target event than is likely available from the crude estimates of gunshot detectors and siren detectors.
(106) For example, the system, upon detecting a license plate number identified in advance by a user organization, would treat such detection as a triggering target event, launch the drone and follow the vehicle bearing that license plate.
(107) As another example, the system upon detecting gunshots, would use its triangulation function to pinpoint the location of the gunshots and then would fly a predefined orbit around that location and follow any individuals whose images show them moving rapidly away from the scene.
(108) In a third example, the system, upon detecting emergency sirens, would use this triangulation capability to determine the point on which the sirens were converging and would orbit above that point.
(109) The system also could utilize face detection technology and treat the recognition of a predefined face as a triggering target event, launching the drone and causing it to follow the individual with that face.
(112) Although the example routine depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine. In other examples, different components of an example device or system that implements the routine may perform functions at substantially the same time or in a specific sequence.
Flight Profiles
(113) The present disclosure modifies a basic drone available in the market to enable it to fly flight profiles relative to the event location most suitable for collecting imagery of that type of event. The disclosure in the parent application is enhanced in this application by providing additional detail as to the definition of profiles and the structure and functioning of the profile selection and definition processes.
(114) The third functional module is configured for profile flight. It comprises a control subsystem with computer hardware and software configured autonomously to execute flight profiles appropriate to the target event type, wherein the flight profiles are selected from pre-loaded templates correlated with event types and executed while maintaining flight altitude within Federal Aviation Administration regulatory limits.
(115) The most appropriate flight profile to be flown by the drone depends on the nature of the target event. A plurality of flight-profile templates relative to target events are loaded into the system before operation. The system matches inputs from its sensors to the templates to select the optimal flight profile.
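A template table keyed by event type is one straightforward realization of this matching step. The event names and numeric parameters below are placeholders, not values from the disclosure; the fallback corresponds to the wide search profile described later for events without a specific template.

```python
# Illustrative profile templates; names and parameters are assumptions for the sketch.
FLIGHT_PROFILES = {
    "gunshot":        {"pattern": "orbit",   "radius_m": 150,  "altitude_ft_agl": 300},
    "structure_fire": {"pattern": "orbit",   "radius_m": 250,  "altitude_ft_agl": 350},
    "wildfire":       {"pattern": "ellipse", "radius_m": 3000, "altitude_ft_agl": 300},
    "vehicle_plate":  {"pattern": "follow",  "standoff_m": 80, "altitude_ft_agl": 250},
}

def select_profile(event_type: str) -> dict:
    """Match a detected event type to its pre-loaded template; fall back to a
    wide area-search pattern when no event-specific profile exists."""
    return FLIGHT_PROFILES.get(event_type,
                               {"pattern": "area_search", "altitude_ft_agl": 400})
```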
(116) The system autonomously calculates an optimal flight path from a current location of the drone to the target. The control subsystem is configured to navigate to the target event autonomously based on spatial coordinates of a target event detected by the control subsystem. The drone flies that calculated path and then flies the flight profile most suitable to capture imagery of the particular target event: the optimal flight profile. The drone executes the flight profile to maintain optimal positioning relative to the target event during imagery capture without requiring continuous manual control during flight.
(117) The system comprises computer program logic that causes the drone to fly mission profiles appropriate to the target event. The computer program code in the system forces the drone to maintain flight altitude within Federal Aviation Administration (FAA) regulatory limits.
(118) As in the parent application, the flight profile might be a circular or elliptical orbit over the target event 104; it might be a hover at an offset angle from the target event; it might be rectilinear traverses over and adjacent to the target event, depending on the nature of the optical imagery 106 to be collected. In any case, the flight profile prescribes the altitudes to be flown.
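A circular-orbit profile can be reduced to waypoints using the same flat-earth conversion as the position sketch above. The 400-foot cap reflects the FAA small-UAS ceiling referenced throughout this disclosure; the waypoint count is an arbitrary choice for the sketch.

```python
import math

def orbit_waypoints(center_lat: float, center_lon: float, radius_m: float,
                    altitude_ft_agl: float, n: int = 16) -> list[tuple[float, float, float]]:
    """Generate n (latitude, longitude, altitude) waypoints on a circular orbit
    around the event coordinates, capped at the FAA 400 ft AGL ceiling."""
    altitude = min(altitude_ft_agl, 400)
    waypoints = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        dlat = radius_m * math.cos(theta) / 111_320.0
        dlon = radius_m * math.sin(theta) / (111_320.0 * math.cos(math.radians(center_lat)))
        waypoints.append((center_lat + dlat, center_lon + dlon, altitude))
    return waypoints
```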
(119) The system comprises two protocols for selecting flight-profile templates. The first protocol is autonomous; it matches sensor information about a target event and automatically loads the appropriate flight profile template. The other protocol allows a user to select from a larger set of flight-profile templates according to what the user sees in the imagery captured by the drone after launch.
(120) Once a drone has determined the coordinates of the target event, it flies a predetermined flight pattern specified by the flight-profile template to collect the video imagery most suitable for the purpose, whether that be news reporting, law enforcement response, or other emergency response.
(121) Circular orbits are standard for localized, dynamic events like police standoffs, while wider or elliptical orbits suit large-scale events like wildfires or parades. Tight, circular orbits are preferred for continuous coverage of a dynamic, localized event. Pilots may adjust radius or altitude to avoid interfering with law enforcement aircraft (e.g., police helicopters) or to stay clear of temporary flight restrictions (TFRs).
(122) Wildfires call for wider elliptical or irregular orbits to cover large areas. Drones avoid smoke plumes and coordinate with firefighting aircraft, often orbiting at higher altitudes (e.g., 2,000-10,000 feet above ground level (AGL)).
(123) Crowd events call for pre-planned circular orbits at higher altitudes to minimize noise and ensure safety over crowds. Orbits may be elongated to capture the event's full length (e.g., a marathon route).
(124) Traffic coverage calls for broader, slower orbits over highways or intersections to provide wide shots of congestion without distracting drivers.
(125) In the law enforcement context, the most suitable flight profile may be following a vehicle with a particular license plate or a human figure emerging quickly from the vicinity of the target event. The system might cause the drone to follow the individual and track him not only with optical sensors, but also with infrared sensors.
(126) The result resembles a human pilot's response to commands: "Follow that car! Follow him! Don't lose him!"
(127) An alternative flight profile supporting or covering a police chase might be a tight circular orbit, one-tenth to one-mile radius at 300 feet AGL, adjusted frequently to follow movement.
(128) Another law-enforcement flight profile, for example, might be designed to enforce a perimeter established around a scene with a fugitive or an active shooter. That profile might cause the drone to fly the perimeter and enable command personnel to determine if personnel were deployed appropriately to enforce the perimeter.
(129) Circular orbits are standard for localized, dynamic events like structure fires.
(132) Natural disaster aftermath coverage would involve even wider orbits at 400 feet AGL to capture long shots of widespread damage. Coverage of a wildfire might involve a wide elliptical orbit, 2-100 miles, at 300 feet AGL, staying upwind of smoke.
(133) Temporary Flight Restrictions (TFRs) are common during major incidents such as presidential visits or disasters. News drones must orbit outside TFR boundaries, which may force wider or offset orbits. Drones must adjust orbits to avoid restricted areas. The system maintains digital maps of such FAA-designated areas and adjusts flight profiles to avoid them.
(134) When an event-specific profile is not available the drone flies wide flight profiles over an area in which an event is likely to occur.
(135) The flight profiles are autonomously determined in a manner roughly similar to those described in the Robocowboy patents U.S. Pat. Nos. 12,102,060 and 12,153,451. The central concept of the present disclosure, compared with the parent disclosure, lies in the details of its event-detection, its use of triangulation to locate the event, its use of blended and bonded communications channels, and its use of simultaneous transmission of electrical power and data over the same conductors in a tether.
Data Communications
(136) The system is distinguished from the parent application and from the prior art by its use of a plurality of blended and bonded communications channels to transmit its video imagery to its users. A plurality of communications channels 112, 114, and 116 connect the components of the system to each other and to a Remote Media Fusion Unit 1702, comprising a monitoring station and a broadcast studio with its associated transmitter or Internet server, remote from the target event. The user of the system may rely on third-party providers for some of these communications channels. In some cases, the user subscribes to channels, as with cellular service providers. In other cases, the user may arrange its own frequencies and licenses, as with satellite link, microwave link, broadband wireless, control link, and video downlink connections.
(137) The fourth functional module is configured for multiplexed data communication. It comprises a dispatcher and a gatherer linked by a plurality of communications channels, wherein the dispatcher packetizes video streams and distributes them across the plurality of communications channels, and the gatherer reassembles the video streams at a destination.
(138) The communications channels terminate at the Remote Media Fusion Unit (RMFU) 1702, which may contain a studio; a broadcast, terrestrial wired, or satellite transmitter site; or an Internet server site.
(139) The integrated arrangement comprises a dispatcher and a gatherer, linked by a plurality of communications channels, that packetize video streams, distribute them efficiently over the plurality of communications channels, and reassemble the video streams at a destination proximate to a monitoring station, a television transmitter, or an Internet server.
(140) The functions of the video processing subsystem 312 and wireless communications subsystem 314 are depicted in
(141) The hardware and software acquire a video signal from the cameras; compress it according to a standard such as H.264 or H.265, depending on the embodiment; chunk it according to the network abstraction layer (NAL) algorithms; assign a sequence number to each chunk; and transmit the chunk to the dispatcher, which dispatches it to a wireless channel and records the sequence number and channel.
(143) The dispatcher 110 comprises hardware and computer program code that distributes the video stream from the camera across a plurality of communication channels for optimal load balancing.
(144) Frequency division multiplexing is a term frequently used when a signal is divided among a plurality of frequencies, each on a different medium. A more proper term is space division multiplexing, with frequency division multiplexing reserved for division among a plurality of frequencies on the same medium. So the allocation of the video signal among multiple channels in the present disclosure is space division multiplexing and is referred to as such.
(145) The dispatcher 110 uses predictive algorithms based on non-delivered chunks of data to monitor communication channel capacity and preemptively adjusts video transmission rates to minimize latency. Its computer program code assigns video data chunks to the communication channels using flow control and sequencing algorithms to ensure correct chunk order and data integrity across variable network conditions, records the communications channel to which it assigns each chunk and uses machine learning to predict network conditions and adjust transmission rates for each communication channel based on real-time feedback.
(146) The dispatcher 110 comprises transmitters connected to antennas for each uplink channel.
(147) The packetizer 1201 comprises a computer and computer program code that perform the digital processing to assign the full-motion video stream from the vehicle's camera to a plurality of uplink channels: a satellite link 118, a microwave link 120, and cellular links 122, shown on
(148) In some embodiments the dispatcher is a ground-based unit, integral to the mission controller 128, linked to the drone via a video downlink 1204. In other embodiments, the dispatcher is aboard the drone, integral to its wireless communications subsystem 314 components.
(150) As a component of the dispatcher, the packetizer is responsible for dissecting the video stream received from the cameras into frames, packets, and other groups of bits appropriate for insertion into different channels. In other words, it performs the functions enabling space division multiplexing, with each channel representing a separate medium, in that context.
(151) The packetizer performs network abstraction layer (NAL) processing on the downloaded image data, fragmenting the image data frames to the maximum transmission unit (MTU) of the lower layers in the OSI stack, 1032-1370 bytes in one embodiment. The packetizer 1201 ensures that the chunk size does not exceed the MTU of any lower layer. Ensuring that the size of the NAL unit passed down to the lower layers does not exceed the MTU of any of those lower layers eliminates the possibility of further fragmentation at lower-layer processing and permits the NAL units to be distributed to the respective communication channels intact.
(152) The dispatcher also performs functions beyond the scope of the packetizer relating to efficient allocation of information among the channels, detection of packet loss, and throttling of channels.
(154) Attaching sequence numbers to each chunk permits the gatherer to reassemble the chunks in proper order even though they arrive at the gatherer at different times, based on different channel performance.
(155) If the byte counter is greater than the MTU 1314, the dispatcher selects a channel 1301 (which may be the same channel or a different one) and repeats the process.
(156) The dispatcher provides a buffer to contain data downlinked from the camera until it can be processed and buffers to contain data directed to an output channel until the output channel can accept it.
(158) The result of this process is to fragment the much larger units of data represented by an entire video frame from the camera downlink into chunks no greater than can be handled by the MTUs of the lower layers in the OSI stack, preparing them for transmission on the output channels.
(159) The sequence numbers in the chunk headers permit the smaller chunks of information to be reassembled by the gatherer 1704 in the correct order after they have been fragmented. The dispatcher ensures that the chunks created by this process are no larger in size than the smallest MTU acceptable by lower layers in the OSI stack; such sizing ensures that no further fragmentation will occur, which might result in loss of sequencing. The dispatcher module distributes the packets among the available channels, which may be as limited as two conventional cellular channels or as numerous as a dozen or more cellular channels, Wi-Fi channels, proprietary video channels, microwave channels, and satellite channels. Until it learns more about channel capacity, the dispatcher module feeds the packets into each channel sequentially so that the data rate of packets entering each channel is the same.
(160) The flows of NAL units and their corresponding Quick UDP Internet Connections (QUIC/UDP) segments and IP packets are separated into streams and inserted into communications channels, one stream per channel, but the sequence of NAL units in a particular stream is not a continuous portion of the higher-level video sequence. Rather, the NAL units comprising the higher-level sequence are distributed across multiple streams and channels according to channel capacity and performance.
(161) The dispatcher software uses UDP (User Datagram Protocol) or optionally QUIC or RTP protocols combined with NAL application-layer flow control and sequencing techniques to manage the video streams. The dispatcher application program code handles packet reordering and flow management, providing greater flexibility for adapting to the variable conditions of cellular networks. The dispatcher paces the sending of packets into each stream based on input from the gatherer to enable the dispatcher's role as a congestion controller.
(162) To perform its function effectively, the dispatcher discovers bit rate, latency, and error rate for each channel. Latency discovery is straightforward by means of the Internet ping command. Discovery of bit rate and error rate occurs through the interaction between the dispatcher 110 and the gatherer 1704.
(163) The flow control and sequencing algorithms balance speed and reliability. Flow control continuously adapts to the real-time state of each cellular connection, while sequencing mechanisms ensure that data is delivered in the correct order despite packet loss or out-of-order arrival. The dispatcher uses machine learning and predictive algorithms to monitor network conditions and to adjust flow control and sequencing parameters preemptively to minimize latency and maximize throughput.
(164) One embodiment achieves inter-stream sequencing by defining a protocol that tags data, such as NAL chunks with identifiers specifying which part of a larger message they belong to, regardless of the stream that carries them. The receiver buffers out-of-order chunks and reorders them based on the protocol's rules before further processing. This aspect of the design overcomes a deficiency in the QUIC protocol, which ensures correct intrastream sequencing but not correct interstream sequencing, important for bonded cellular and blended transmission of real time audio and video.
(165) The dispatcher's advanced flow control for blended networks relies on AI (artificial intelligence) algorithms that predict the behavior of each link based on historical data. These algorithms adjust transmission rates, buffer sizes, and sequencing strategies in real-time, optimizing performance based on current network conditions.
(166) The gatherer, as part of the RMFU 1702, undoes what the packetizer did, reassembling discrete packets, frames, and other groups of bits into streams compatible with the needs of the monitor, transmitter or Internet server. It also coordinates with the dispatcher to buffer information and to identify lost packets.
(168) When the gatherer detects a missing chunk and therefore sends later chunks to its buffer, the logic saves the number of the lowest-numbered chunk in the buffer. This is the first chunk to arrive after the missing one. By decrementing that chunk's number by one, the logic obtains the number of the missing chunk and sends that number to the dispatcher 1512. The gatherer then flushes the buffer to the transmitter feed 1514.
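That decrement rule is a one-line computation; a sketch, assuming the gatherer tracks the sequence numbers currently held in its buffer:

```python
def missing_chunk_number(buffered_sequence_numbers: list[int]) -> int:
    """The lowest-numbered buffered chunk is the first to arrive after the gap,
    so decrementing its sequence number by one identifies the missing chunk."""
    return min(buffered_sequence_numbers) - 1
```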
(170) The gatherer 1704 is part of a Remote Media Fusion Unit (RMFU) 1702, located proximate to the user of the system. The basic function of the gatherer 1704 is depicted in
(171) The gatherer 1704 comprises a bundler 1801; a plurality of satellite links 118, microwave links 120, and cellular links (a first cellular link 1207, a second cellular link 1202, and a third cellular link 1203); and a feed to the transmitter/server 1808.
(172) The RMFU comprises a video display 1901, which presents a matrix displaying channel numbers 1902, 1903, 1904, 1905, and 1910, a type of channel 1906, a bit rate 1907, a latency 1908, and an error rate 1909.
(173) In an alternative embodiment, the video display can be located proximate to the dispatcher.
Continuity of Electrical Power
(174) The present disclosure modifies a basic drone available in the market to enable it to remain aloft beyond the normal exhaustion of batteries provided with the drone. Several different embodiments address this goal of increasing flight endurance. This application enhances the disclosure in the parent application by providing additional detail as to the structure and functioning of those different enablements, including use of a ground-based battery replacement station and a tether capable of carrying both electrical power and data signals on the same pair of conductors.
(175) The fifth functional module is configured for flight endurance despite limited battery charge. It comprises a power management system configured to extend drone operation beyond onboard battery capacity through external power supply. The power management system comprises a battery replacement system including an onboard battery monitoring system, a power manager, and an automated battery replacement station.
(176) The drone comprises an onboard battery monitoring system that monitors drone battery levels, signals a power manager when the battery level reaches a pre-determined threshold based on distance from a battery replacement station to the target event, and directs the drone to the battery replacement station.
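One plausible form of that distance-based threshold, with every parameter an assumption made for the sketch: enough charge to fly from the target event back to the replacement station at cruise speed, plus a fixed reserve.

```python
def battery_threshold_pct(distance_to_station_m: float,
                          cruise_speed_mps: float,
                          discharge_rate_pct_per_s: float,
                          reserve_pct: float = 15.0) -> float:
    """Battery percentage at which the power manager should recall the drone:
    the charge consumed during the return transit plus a safety reserve."""
    transit_s = distance_to_station_m / cruise_speed_mps
    return reserve_pct + transit_s * discharge_rate_pct_per_s
```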
(178) The battery replacement apparatus automatically detects the arrival of the drone at the landing zone and performs the battery replacement process without human intervention, comprising removal of a depleted battery and insertion of a fully charged battery into the drone, and includes a drone recognition and battery handling alignment mechanism specific to the drone's physical battery interface.
(179) In one embodiment, a second drone called the relief vehicle 2003 is provided. As soon as the first drone 102 sends a battery exhaustion message through the battery exhaustion dialogue 1206 to the power manager 2001, the power manager 2001 transmits a message through the relief vehicle dialogue 2004 to the relief vehicle 2003, launching it and causing it to take up the position now vacated by the first vehicle. Availability of a relief vehicle 2003 means that coverage of the triggering event is continuous, even while drones are obtaining battery replacements.
(180) The battery replacement station 2002 is a commercially available off-the-shelf automatic-landing-zone, battery-charger and battery-replacement system capable of directing a vehicle to dock in the proper place on the automatic landing zone, after which it removes the nearly depleted battery and places it on the charging apparatus. The apparatus inserts a fully charged battery into the vehicle. After the charging station completes those operations, it signals the power manager 2001, and the power manager gives a signal to the vehicle, which now has a fully charged battery, authorizing it to return to its position for monitoring the target event 104. Upon receiving that signal from the power manager, the vehicle returns to a position proximate to the target event.
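The ordering of the relief launch and the swap-and-return handshake of paragraphs (179) and (180) can be sketched as follows; all method names are hypothetical, since the station itself is off-the-shelf hardware.

```python
def replacement_cycle(drone, relief, station, power_manager, event):
    # Illustrative ordering only; names are hypothetical.
    power_manager.launch(relief)            # relief vehicle 2003 covers the event
    station.dock(drone)                     # drone docks on the automatic landing zone
    depleted = station.remove_battery(drone)
    station.charger.accept(depleted)        # depleted battery goes onto the charger
    station.insert_battery(drone)           # fully charged battery goes into the drone
    power_manager.authorize_return(drone)   # station signals; power manager authorizes
    drone.navigate_to(event)                # resume position proximate to event 104
```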
(181) The source of electrical power is adequate to handle the total number of batteries.
(182) The charging station is powered by solar panels of appropriate size, three square feet in area in one embodiment. An auxiliary gasoline or diesel engine backs up the solar power in one embodiment. Fixed batteries in the charging station serve as a buffer and storage medium between the electrical source and the vehicle-battery charging apparatus.
(183) In an alternative embodiment, shown in the accompanying drawing figure, the drone 102 can remain aloft, covering the target event 104, as long as the event persists.
(184) In this embodiment, the drone 102 does not have to land to receive a charged battery. It receives a continuous supply of electricity through a power-up conductor 2108 contained in a tether 2120 from the power manager 2001. The tether 2120 also carries telemetry 2116 down from the drone to the mission controller 128 and the dispatcher 110, and carries commands up from the remote control device 124 and the mission controller 128 to the drone. All signals travel on a plurality of conductors enclosed by the tether.
(185) The tether comprises an electrical power conductor and a digital communications conductor capable of carrying video, telemetry, and command information to enable continuous operation of the drone without battery constraints.
(186) In an alternative embodiment, shown in the accompanying drawing figure, the drone 102 is connected through a tether 2220 enclosing a single wire pair 2222.
(187) The single wire pair 2222 carries electrical power up 2226, video down 2228, telemetry down 2230, and commands up 2232, to and from the power manager 2001, the mission controller 128, the remote control device 124, and the dispatcher 110, with the signals and currents separated by appropriate LC filters.
(188) In this embodiment the tether comprises only two conductors, which carry digital video, telemetry, and command information simultaneously with the electrical power, enabling continuous operation of the drone without battery constraints and lessening the weight of the tether compared to a tether with more than a single pair of conductors.
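The disclosure does not specify component values, but the frequency-division principle behind the LC filters can be stated as a sketch: a filter section built from an inductance L and a capacitance C has a cutoff near its resonant frequency,

```latex
% Illustrative only; no component values are given in the disclosure.
f_c = \frac{1}{2\pi\sqrt{LC}}
```

Choosing f_c between the power current's frequency (direct current, in the simplest case) and the lowest data carrier lets a low-pass branch deliver electrical power while a high-pass branch couples the video, telemetry, and command signals onto the same wire pair.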
(189) Remote Control of the System
(190) The system is subject to varying degrees of remote control. Its autonomous operations are controlled in part by commands sent by the mission controller 128, as determined by the program code distributed between the drone and the mission controller. Manual operations are controlled by a remote control device 124 operated by a human user proximate to the drone.
(191) The sixth functional module is configured for modified parameters by remote control. It comprises a mission controller with interfaces for receiving inputs from remote consoles through Internet or cellular connections, wherein such inputs modify target event definitions, launch conditions, flight profiles, spatial coordinates, or camera presets.
(192) The system also allows for remote control at greater, essentially arbitrary, distances by means of cellular or Internet connections. These connections may be implemented on a plurality of wired or wireless channels, including those depicted as being bonded and blended for carrying video from the system to a broadcast station or Internet server.
(195) Regardless of whether the remote connection is effected through an Internet connection or a cellular connection, inputs from the remote console through the connection may comprise a definition of a target event, a modified launch condition, modified flight profiles, modified target coordinates, or modified camera presets.
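By way of illustration, such an input might take the following shape; the field names and values are hypothetical, since the disclosure defines no wire format.

```python
# Hypothetical remote-console input covering the five kinds of modification
# listed in paragraph (195); illustrative only.
remote_command = {
    "target_event": {"type": "gunshot", "templates": ["acoustic_signature_07"]},
    "launch_condition": {"minimum_confidence": 0.9},
    "flight_profile": "slow_orbit",
    "target_coordinates": {"lat": 0.0, "lon": 0.0, "elevation_m": 0.0},
    "camera_presets": {"zoom": 4, "mode": "thermal"},
}
```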
(197) The first steps are to subscribe to communications channels 2502 and to acquire a drone 2504 equipped with the system. Then a user installs the gatherer near a transmitter or Internet server 2506 and defines a target event 2508, providing the system with the appropriate sound, light, and vibration signatures associated with the event in the form of templates.
(198) Then the user transports the system 2510 to a location proximate to where the event is expected to occur and activates the communications channels 2512.
(199) The system detects occurrence of the event 2514 and launches the drone 2516.
(200) Once the drone is launched the system locates the event 2518 and flies event-determined flight profiles 2520 autonomously.
(201) As it flies the profiles, the dispatcher disaggregates the video into data units such as packets and frames and allocates units to each of the plurality of communications channels based on the characteristics of that channel, such as its bandwidth, latency, and error rate.
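One way to sketch such an allocation, in Python with a hypothetical scoring formula (the disclosure names the inputs, bandwidth, latency, and error rate, but not the formula):

```python
def allocate(units, channels):
    # Score each channel so that higher bandwidth, lower latency, and lower
    # error rate attract proportionally more data units. The weighting is an
    # illustrative assumption, not taken from the disclosure.
    def score(ch):
        return ch.bandwidth_kbps / ((1.0 + ch.latency_ms) * (1.0 + 100.0 * ch.error_rate))

    total = sum(score(ch) for ch in channels)
    plan = {ch: [] for ch in channels}
    for i, unit in enumerate(units):
        # Weighted round-robin: unit i maps to the channel whose cumulative
        # score interval contains the point (i / len(units)) * total.
        point = (i / len(units)) * total
        cumulative = 0.0
        for ch in channels:
            cumulative += score(ch)
            if point < cumulative:
                plan[ch].append(unit)
                break
    return plan
```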
(202) The system transmits video through the plurality of communications channels 2522.
(203) The user monitors mission performance and the video collected, and makes adjustments by sending commands 2524.
(204) The dispatcher and gatherer cooperate to determine which of the plurality of communications channels is losing data units, based on a determination of which units do not arrive, and the dispatcher throttles that communications channel.
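A minimal sketch of that cooperation, with hypothetical names and an assumed loss threshold:

```python
def throttle_lossy_channels(dispatcher, gatherer, channels, loss_limit=0.05):
    # The 5% limit is an illustrative assumption; the disclosure specifies
    # only that channels losing data units are throttled.
    for ch in channels:
        sent = dispatcher.units_sent(ch)       # units dispatched on this channel
        lost = gatherer.units_missing(ch)      # units that never arrived
        if sent and lost / sent > loss_limit:
            dispatcher.throttle(ch)            # reduce that channel's share
```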
Distinguishing the Prior Art
(205) While applications involving drones used in the property insurance context have disclosed somewhat similar features and methods, the differences in purpose undermine any obvious mapping of the insurance industry disclosures to the newsgathering and public safety context. Speed is of the essence in newsgathering and public safety contexts, but not in the insurance context; the types of target events are substantively different, although they overlap to a limited degree; and the insurance applications do not call for triangulation to determine the location of a target event.
(206) Insurance applications do not disclose the features necessary for effective law enforcement, other first responder, and newsgathering applications. The insurance applications focus on determining damage that might support claims, detecting fraudulent claims, and the infrastructure necessary to process claims. None of this is relevant to the law enforcement, other first responder, and newsgathering applications.
(207) Disclaimer
(208) The inventor does not claim the blended and bonded digital communications system and methods disclosed in U.S. Pat. No. 12,250,377, but only the combination of such system and method with the other features of the present disclosure.