Command and Control Systems and Methods for Distributed Assets
20210403157 · 2021-12-30
Inventors
CPC classification
B64U2201/10
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0094
PHYSICS
B64U50/19
PERFORMING OPERATIONS; TRANSPORTING
G06T17/20
PHYSICS
G05D1/0027
PHYSICS
B64U2101/30
PERFORMING OPERATIONS; TRANSPORTING
B64F1/362
PERFORMING OPERATIONS; TRANSPORTING
G06T19/00
PHYSICS
International classification
G05D1/00
PHYSICS
G05D1/10
PHYSICS
G06T17/20
PHYSICS
Abstract
Various embodiments of a command and control system of distributed assets, including drones, other mobile assets, and fixed assets. Optionally, information is gleaned from sensory units and transformed into a pictorial representation for easy understanding, decision-making, and control. In some embodiments, the pictorial representation is a 3D image. Optionally, a user interface is designed for particular usages, and in some embodiments may be customized by different operators. Optionally, sensory units or assets are in communicative contact in a mesh network. Various embodiments of methods to operate command and control systems are also described.
Claims
1. A command and control system for management of drone fleets, comprising: a command and control unit for receiving data, and configured to issue commands for controlling sub-systems; and wherein the sub-systems are configured to receive the data and transmit it to the command and control unit.
2. The command and control system of claim 1, wherein the sub-systems comprise: a docking station for storage, charging, launching, and retrieving drones; a plurality of drones; an instrument on the drones for receiving and transmitting the data; a positioning sub-system for determining the positions and orientations of drones in relation to a target; and static support equipment for receiving data or taking other action.
3. The command and control system of claim 2, further comprising: a processing device configured to receive the data, and to process the data into a 3D model in relation to the pictorial representation of an area in which the target is located.
4. The command and control system of claim 3, further comprising: wherein the processing device is configured to transmit the 3D model to the command and control unit.
5. The command and control system of claim 4, further comprising: the sub-systems are communicatively connected in a mesh network.
6. A method for real-time mapping by a mesh network of a target, comprising: capturing data about the location of a target by a plurality of drones; compressing the data by the drones; applying computer algorithms by the drones to transform the data for each drone into a 3D model of an area in which the target is located; adding to the 3D models positioning data about the drones to create a shared position map of the location of the target; processing visual markers with the shared position map into a single map of target location, and the positions and orientations of the drones; and creating a visual map of the area in which the target is located, such that the single map is configured to be altered as the received data changes over time.
7. The method of claim 6, further comprising repetition of the method described so as to update the single map in real-time.
8. The method of claim 7, further comprising the mesh network transmitting the single map to a command and control unit configured to receive such transmission.
9. The method of claim 8, further comprising the command and control unit combining an external map of the area in which the target is located with the single map in order to produce a unified and updated map of the target and the area in which the target is located.
10. The method of claim 9, further comprising the command and control unit integrating external intelligence into the unified and updated map.
11. A device configured to display dynamic UI about a target, comprising a device with user interface in an initial state showing a map and sensory data from a plurality of drones.
12. The device of claim 11, wherein the device is configured to detect, and to display in the user interface, a change in conditions related to the initial state.
13. The device of claim 12, wherein the detection occurs in real-time relative to the change in conditions, and automatically without human intervention.
14. The device of claim 13, wherein the change in display occurs in real-time relative to the change in conditions, and automatically without human intervention.
15. The device of claim 14, wherein notification of the change to the user occurs in real-time, and wherein the user indicates a manner in which the display should change to best present the change in conditions.
16. A system for real-time mapping of a target, comprising: a plurality of drones for receiving sensory data from a geographic area and transmitting such data to a portable device; wherein the portable device is communicatively connected to the plurality of drones, and the portable device is configured to receive the sensory data transmitted from the drones.
17. The system of claim 16, further comprising: the portable device is configured to process the sensory data received from the plurality of drones to create a real-time 3D model of an area in which a target is located.
18. The system of claim 17, further comprising: the portable device is configured to send commands to the drones to perform actions in relation to the target.
19. The system of claim 18, further comprising: the portable device is configured to retransmit the sensory data to a processing device; and the processing device is configured to process the sensory data received from the portable device to create a real-time 3D model of an area in which a target is located.
20. The system of claim 19, further comprising: the processing device is configured to send commands to the drones to perform actions in relation to the target.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] For a fuller understanding of the nature and advantages of various embodiments described herein, reference should be made to the following detailed description in conjunction with the accompanying drawings.
DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
[0028] As used herein, the following terms have the following meanings:
[0029] “Commands to the drones” or “commands to the assets” are any of a variety of commands that may be given from either a portable unit or a central unit to a drone in relation to a particular target. Non-limiting examples include “observe the target,” “ignore the target,” “follow the target,” “scan the target,” “mark the target,” and “attack the target.”
[0030] “Distributed assets,” sometimes called merely “assets,” are devices in the field that are collecting data and sending it to a point at which the data is processed into a map. One example of a mobile asset is a drone, but other mobile assets may be on land, at sea, or in the air. The term “distributed assets” may also include fixed assets which are permanently or at least semi-permanently placed at a specific location, although the sensing device within the asset, such as a camera, may have either a fixed area of observation or an ability to roam within the fixed asset, thereby changing the field of observation.
[0031] “Drone” is a type of mobile distributed asset. When it appears herein, the term does not exclude other assets, mobile or fixed, that may perform the same function ascribed to the drone.
[0032] “External intelligence” is information that is gleaned from outside a Command & Control System, which is then incorporated with real-time data within the system. External intelligence may be updated in real-time, or may be relatively fixed and unchanging. One alternative method by which external intelligence may be gleaned is illustrated in
[0033] “External map” is a map taken from an external source, such as the Internet. For example, a map of the various buildings in
[0034] “Fixed assets” are sensors whose data may be received by a command & control unit, and which may receive and execute orders from such a unit. In
[0035] “Mesh network” is a group of units, either all of the same kind or of different kinds, in which the units are in communicative contact directly and dynamically with as many other units as possible in the network. In this sense, “direct” means unit-to-unit contact, although that does not impact a unit's contact with portable units or processing units. In this sense, “dynamically” means either or both of the following: data is conveyed in real-time, and/or the communicative contacts between units are updated continually as units change position and either lose or gain communicative contact with other units. Some mesh networks are non-hierarchical in that all units may communicate equally and with equal priority to other units, but that is not required, and in some mesh networks some of the units may either have higher priority or may direct the communicative contacts of other units.
[0036] “Mobile assets” are assets capable of movement, such as drones and vehicles. Assets, whether mobile or fixed, including sensors of all types, such as visual, auditory, and olfactory sensors, measuring location, movement, temperature, pressure, or any other measurable item, are included within the concept of “distributed assets” subject in part to a Command and Control System (“C&C System”). In
[0037] “Pictorial representation” is a conversion of various data, which may be numerical, Boolean (Yes-No, True-False, or another binary outcome), descriptive, visual, or any other sensing (olfactory, auditory, tactile, or taste), into a visual display, such as a map with objects and information displayed visually, a drawing, a simulated photograph, or another visual display of the various data.
[0038] Brief Summary of Various Embodiments:
[0039] The Command & Control (C&C) System delivers start-to-finish simultaneous operation/mission control of multiple autonomous systems to a single user from a single device. One example is a drone group, but the System can manage any assets on or under the land, in the air, on or under the sea, or in space. The assets may be mobile or stationary, and may include, without limitation, sub-systems including drones, robots, cameras, self-driving vehicles, weapon arrays, sensors, information/communication devices, and others.
[0040] Further, in various embodiments there is a tablet or other portable device for aggregating communication to and from the remote assets. In some embodiments, the portable device also processes the information from the assets, although in other embodiments much or all of the processing may be done “off-line” by a separate processing unit. In some of the embodiments described herein, a person is involved, typically at the site of the portable device, although a person may be involved also at a processing device (in which case one person may operate both the portable and processing devices, or different people may operate each device). In other embodiments, there are no people involved at one or more stages, and the System may manage itself.
[0041] Among some experts in the art, the degree of machine autonomy is discussed according to various levels, which, according to some, include the following:
[0042] Level I: No machine autonomy. The machine does only what the human specifically orders. For example, when a person fires a gun, the gun has no autonomy.
[0043] Level II: The machine is relatively autonomous when switched on, but with very little flexibility or autonomy. For example, when a robotic cleaner is switched on but can only vacuum a floor.
[0044] Level III: A system or asset is launched or turned on, and progresses to a certain point, at which the human must confirm that the operation is to continue (as in pressing a button to say, “Yes, attack,”) or conversely that the operation will be executed automatically unless the human says “stop” (as in pressing a button to terminate an operation).
[0045] Level IV: This is the level of what is called “complete machine autonomy.” Humans give a very general instruction, and then leave the system, including the control center and the remote assets, to decide what to do and how to do it in order to execute the general instruction. For example, “Drive back the enemy,” or “Find out what is happening at a certain intersection or within one kilometer thereof, in particular with reference to specified people.” In essence, the people give a mission, but the machines both create and execute the plan.
[0046] The various embodiments described herein may operate at any or all of the four levels, although it is anticipated in particular that this System will operate at some version of Level III and/or Level IV.
[0047] One aspect of the C&C System is the connection of all the sub-systems needed to run an operation from planning to launching to retrieval to post-operation analysis, all while some of the assets or a mobile control station are in the field. The sub-systems combined into the C&C System include, but are not limited to: communication, data streaming, piloting, automated detection, real-time mapping, status monitoring, swarm control, asset replacement, mission/objective parameters, safety measures, points-of-interest/waypoint marking, and broadcasting. The fact that the System, specifically a person with a portable device for communication and/or processing, may be deployed in the field rather than in an office or other fixed location, means that the System is easy to deploy remotely and dynamically adjustable, with a user interface (“UI”) that is easy to use, flexible, and adjustable either by a person or, in some embodiments, automatically.
[0048] Further, an additional potential advantage of deployment in the field is that the System may operate in the absence of what is considered standard communication infrastructure, such as an RF base station and network operation control center. The RF base station is not required, and the processing of the control center may be done by the portable device (called here “on-load processing”) or by another processing device located remotely from the portable device (called here “off-load processing”). In off-load mode, the processing device does some, most, or all of the processing, and communicates the results to the portable device in the field. The processing device may be in a fixed location, or may also be mobile. In some embodiments the portable device and the processing device are in close proximity, even next to one another, but the portable device communicates with the remote assets whereas the processing device processes the results (and such processing device may or may not be in direct communication with the remote assets, according to various embodiments).
[0049] In one exemplary embodiment involving assets, a single user may deploy in a mission area with a tablet or other portable device for communication and processing, connected via a mesh network to a docking station containing a group of assets. The user would open the device and see a map view of his or her current coordinates, on which the user would use the touch screen to place markers outlining the mission environment. The user would then select from prebuilt mission parameters, objectives, and operating procedures indicating what actions the assets will take. (The user could also create new behavior on-the-fly by using context menus or by placing markers on the map and selecting from actions.) Once these are all locked in, and any relevant Points of Interest or Objective Markers have been placed on the map, the user would have an option to launch the mission. Once the mission is launched, the C&C System may automatically start sending commands to the docking station and the assets. The docking station would open and launch all the needed assets, which would then follow their predefined orders to fly to specific heights, locations, and orientations, all without the need of any pilots or additional commands. (The user would be able to modify any mission parameters, objectives, or operations during the mission, or could even send direct commands to individual assets that contradict preset orders.) While the assets are active, the user would have live information on all asset positions, orientations, heights, status, current mission/action, video, sensors, payload, etc. The C&C System also has the unique ability to use the live video and positions from the assets to create real-time mapping of the mission environment or specific target areas using computer vision technology included in the C&C System.
The user may also choose any video feed from any asset and scrub back through the footage to find a specific time, play it back at lower speeds, and then jump right back to the live feed. Another important feature of the C&C System is the simplified User Interface, which allows a single user to keep track of large amounts of information and multiple simultaneous video/sensor feeds by using priority-based design schemes and automated resizing, information prioritization, pop-ups, and context menus, allowing the C&C System to curate and simplify what is on the user's screen at any given moment to best support the active mission. During the mission, another important feature of the C&C System is its ability to have the assets not only communicate with each other to provide fast information sharing, but also to create a safety net for any and all possible errors or system failures. Since each asset is communicating with all the others, if any kind of communication or system error happens, the assets are still able to automatically carry out their mission parameters or enact fallback procedures. This also means that if one or more assets run out of battery or fuel, are damaged, or otherwise cannot complete their mission, the other assets are able to communicate with each other or with the C&C System to automatically pick up the slack and cover the missing asset's mission objectives. Once all mission parameters have been completed, or the user chooses to end the mission, all the assets would automatically return to their docking station, self-land, and power off. After the mission, the C&C System would then be in post-mission assessment mode, where the user could review all the parameters, data, video, decisions, and feedback throughout the whole mission. The user would also be able to scrub through the whole mission timeline to see exactly what happened with each element of the mission at each minute.
The mission map would still be interactive during this mode, allowing the user to dynamically move the map at any point on the mission replay timeline to see the data from different angles. All of the mission information/data could also be live-streamed to other C&C stations or viewing stations, where the data could be backed up, or the data could be uploaded later from a direct connection to the tablet or other portable device.
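The failover behavior described above, in which surviving assets absorb a failed asset's mission objectives, can be sketched in simplified form. This is an illustrative sketch only; the asset names, objective names, and least-loaded assignment rule are hypothetical and not part of the disclosed System.

```python
# Hypothetical sketch: when an asset drops out, its remaining objectives
# are redistributed among the surviving assets so the mission continues.

def redistribute(assignments, failed_asset):
    """assignments maps asset name -> list of objective names.
    Orphaned objectives go to the least-loaded surviving asset."""
    orphaned = assignments.pop(failed_asset, [])
    for objective in orphaned:
        least_loaded = min(assignments, key=lambda a: len(assignments[a]))
        assignments[least_loaded].append(objective)
    return assignments

fleet = {
    "drone_1": ["patrol_north"],
    "drone_2": ["observe_target"],
    "drone_3": ["patrol_south", "relay_video"],
}
# drone_3 is damaged or out of battery; its objectives are picked up.
redistribute(fleet, "drone_3")
```

After the call, `drone_3`'s two objectives are split between the two remaining drones, so no mission objective is dropped.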
[0050] Since the communication network is specifically a mesh network, the assets or other remote assets may be in direct contact with the portable device (and/or processing device). They may also be in contact with one another, and in some embodiments there may be a chain communication to and from remote asset A, from and to remote asset B, and to and from the portable device in the field and/or the processing device.
[0051] In various alternative embodiments, the C&C System will have one or more mission parameter/objective creator modes in which advanced users could create new behaviors for their drones/assets to engage in, to adapt to the ever-changing environments in which they function. This mode could also facilitate creating an entire mission timeline, so that a field user could have a one-touch mission loaded up when he or she arrives at the launch location. Mission and objective parameters, or Mission/Objective Parameters, or MOPs for short, mean that any project or operation or task becomes a “mission” with goals to be achieved, and “objectives” which are sought in order to achieve the goals. It is also possible, in some embodiments, to have negative objective parameters, such as not to harm people or a particular building. One possible purpose for this mode would be to allow behaviors and adaptations to be added constantly to the System. The C&C System would be robust enough that modifications or updates could be easily integrated into the System in the future, allowing for augmentation of operational capabilities. Non-limiting examples of such augmentation include creating a specific automated patrol routine for a complicated and/or dense geographic area, building one or more protocols for different kinds of search & rescue environments, or building a decision hierarchy among different remote assets that have unique equipment. The last example envisions a situation in which different assets, drones or USVs or UGVs or other, have different capabilities and different roles, and the controlling unit, whether it is a tablet or portable unit, a processing unit, both portable and processing units, or another, must plan and execute a procedure for coordinating such different remote assets, including updating goals and/or objectives and/or routines in a real-time mode based on what is happening in the environment.
In various embodiments, with or without specialized or customized remote assets, goals or objectives or routines will be updated on a real-time basis.
[0052] Detailed Description of Various Embodiments with Reference to Figures:
[0053]
[0054] As the drones maintain position or move along their respective flight paths, they continue to take video pictures of the target, particularly of points of interest in the target. The drones send all this data back to a unit that processes the data, which may be a portable device or a centralized processing device. That data is used by the processing unit to create a 3D model of the point of interest, and also allows decisions about continuations or changes in flight paths, placement, height, angle of visuals, and other positional aspects of the drones. In some embodiments, the drones are in direct communicative contact, which they may use for one or more of three purposes: first, to convey data from one drone to another, where only the second drone is in communicative contact with the portable device or centralized processing device; second, to communicate with one another such that their visual coverage of the target area is maximized over time; and third, to focus particular attention, with two or more drones, on a particular event or object of interest at a particular time, even though there may be some or substantial overlap of the fields of vision for the two drones. In
[0055]
[0056] In step 210, video data is captured starting with a known image or position that is the target of interest. This is done by each of the units in a fleet with multiple assets, such as, for example, several drones in the drone fleet. This example will use drones, but it is understood that the method applies equally to a fleet of assets on land, or a fleet of assets on the water, or a combined fleet with air and/or land and/or water assets.
[0057] In step 220, each drone compresses its video data to enable quick processing and transfer.
[0058] In step 230, there is a point or multiple points for processing the video data. Pre-processing prior to transmission may be performed by the drones. The pre-processed data is then transferred to a processing unit, such as a portable unit or a centralized processing device. Upon receipt, the portable device or processing device will process the data into information. The processing unit (portable device or processing device) applies computer vision algorithms to create known mesh points and landmarks. The overall process of receiving and processing data to create a map of known mesh points and landmarks may be called “computer vision.”
[0059] In step 240, the processing unit creates a 3D mesh from the video footage received from drones. The processing unit also creates a computer vision point cloud.
[0060] In step 250, the processing unit uses positioning data among the multiple drones and their cameras to create a shared position map of all the cameras' paths.
[0061] In step 260, the processing unit uses shared visual markers from the initial positions of the drones, and throughout each drone's flight, to combine the separate meshes into one map of the drones in correct positions and orientations to receive a continuing image of the target. In this sense, a “separate mesh” is the mesh of views created by continuously moving video from a single drone. When these separate meshes are used with known positions and angles, as well as landmarks, correct positions and orientations of all the drones may be calculated on an essentially continuous basis.
[0062] In step 270, the separate meshes are unified by the processing unit to create a single unified mesh within the Command & Control System, in essence a 3D visual map of the target at a particular time, which then changes as the drones continue to move.
[0063] This process, steps 210 to 270, is repeated to continue to update a unified mesh image of the target area.
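The core of steps 250 through 270, placing each drone's locally observed points into a shared position map and unifying them into a single mesh, can be sketched in a much-simplified 2D form. This is an illustrative sketch only: it stands in for the disclosed computer vision processing with a plain pose transform (rotation by heading plus translation by position), and all function names, coordinates, and the de-duplication of shared landmarks are assumptions for the example.

```python
import math

def to_world(local_points, drone_pos, heading_rad):
    """Rotate a drone's locally observed 2D points by its heading and
    translate by its position (the shared position map of step 250)."""
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    world = []
    for x, y in local_points:
        wx = drone_pos[0] + x * cos_h - y * sin_h
        wy = drone_pos[1] + x * sin_h + y * cos_h
        world.append((round(wx, 6), round(wy, 6)))
    return world

def unify(per_drone_observations):
    """Merge each drone's transformed points into one map (steps
    260-270), de-duplicating landmarks seen by multiple drones."""
    unified = set()
    for local_points, pos, heading in per_drone_observations:
        unified.update(to_world(local_points, pos, heading))
    return unified

# Two drones observing the same landmark from different poses should
# contribute a single shared point to the unified map.
obs = [
    ([(1.0, 0.0)], (0.0, 0.0), 0.0),           # drone A at origin, facing east
    ([(1.0, 0.0)], (1.0, 1.0), -math.pi / 2),  # drone B offset, facing south
]
unified_map = unify(obs)
```

Repeating this merge as new observations arrive corresponds to the repetition of steps 210 to 270 that keeps the unified mesh current.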
[0064] One embodiment is a method for real-time mapping by a mesh network of a target. In a first step, one or more drones capture data about the location of a certain target area or person of interest, and compress the data. In some embodiments, the drones then send that compressed data to a processor, which may be a portable unit, or a separate processing unit, or both. In other embodiments, each drone or some of the drones may process raw data into a 3D model for that drone, and that 3D model is then transmitted to the processor together with the raw data and location data of the drone. The drones also send positioning data and orientation data for each drone. The processor, be it a portable device or a separate processing unit or both, will process any raw data that has no 3D model into a 3D model for that drone. Positioning data and orientation data are added to the 3D model for each drone. Using the collective positioning and orientation data from all the drones, and visual markers that are unique to the target area, the processor creates a single 3D map of the target area. The map is configured in such a way that it may be updated as new data is transmitted to the processor from the drones.
[0065] In an alternative embodiment to the method just described, in addition the drones collect and send updated data, which is used by the processor to create a continuously updated 3D map of the target area.
[0066] In an alternative embodiment to the method just described, in addition processing occurs within the mesh network, and the mesh network sends, in real-time, an updated map to a command and control unit configured to receive such transmission.
[0067] In an alternative to the method just described, in addition the command and control unit combines an external map of the area in which the target is located with the single map received, in order to produce a unified and updated map of the target and the area in which the target is located. The command and control unit may obtain such map from the Internet or from a private communication network.
[0068] In an alternative to the method just described, in addition the command and control unit integrates external intelligence into the unified and updated map. Such external intelligence may be obtained from the Internet or a private communication network. It may include such things as weather conditions, locations of various people or physical assets, expected changes in the topography, or other.
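The layering described in the last two alternatives, combining the drone-derived single map with an external map and then integrating external intelligence, can be sketched as follows. The grid-cell representation, layer contents, and precedence rule (real-time observations override the static external map, while intelligence annotates rather than replaces) are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of producing the unified and updated map:
# a static external base map, overridden by real-time drone data,
# then annotated with external intelligence per map cell.

def build_unified_map(drone_map, external_map, external_intel):
    """Each layer maps a grid cell to a feature label; intelligence is
    attached alongside the winning label rather than replacing it."""
    unified = dict(external_map)   # static base layer (e.g. buildings, roads)
    unified.update(drone_map)      # real-time observations take precedence
    return {cell: (label, external_intel.get(cell))
            for cell, label in unified.items()}

drone_map = {(2, 3): "vehicle"}                      # live observation
external_map = {(2, 3): "road", (0, 0): "building"}  # e.g. Internet map
intel = {(2, 3): "expected convoy route"}            # external intelligence
unified = build_unified_map(drone_map, external_map, intel)
```

Cells observed by the drones carry the live label together with any intelligence note, while unobserved cells fall back to the external map.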
[0069]
[0070] Many of the elements at left and right may be unchanged, except that, as shown, due to the detection at 340v5a within video 3 340v3a, the screen allocation to video 3 has expanded greatly at right 340v3b, whereas the other video images, video 1 340v1b, video 2 340v2b, and video 4 340v4b, have contracted in size to allow a more detailed presentation of video 3 340v3b. In video 3 340v3b, the particular pictorial form of interest that was 340v5a is now shown as 340v5b, except that this new 340v5b may be expanded, or moved to the center of video 3 340v3b, or annotated with additional information such as, for example, direction and speed of movement of the image within 340v5b, or processed in some other way. It is also possible that more than one video will focus on 340v5b, although that particular embodiment is not shown in
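The automated resizing described above, expanding the feed containing a detection while contracting the others, can be sketched as a simple weighted allocation. The weight value and feed names are hypothetical assumptions for illustration; the disclosed UI may allocate screen area in any manner.

```python
# Hypothetical sketch of priority-based screen allocation: the feed with
# an active detection gets a boosted weight, and each feed's share of
# the screen is its weight divided by the total.

def allocate_screen(feeds, detected=None, boost=4.0):
    """feeds: list of feed names. Returns fraction of screen per feed."""
    weights = {f: (boost if f == detected else 1.0) for f in feeds}
    total = sum(weights.values())
    return {f: w / total for f, w in weights.items()}

# A detection occurs in video 3, so it expands and the others contract.
layout = allocate_screen(["video_1", "video_2", "video_3", "video_4"],
                         detected="video_3")
```

With these assumed weights, video 3 occupies four sevenths of the screen and each remaining feed one seventh, mirroring the expansion of 340v3b at the expense of 340v1b, 340v2b, and 340v4b.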
[0071] One embodiment is a device configured to display dynamic UI about a target. Such device includes a user interface in an initial state showing a map and sensory data from a plurality of drones. The device is in communicative contact with a user, which may be as simple as the user looking at the device, or may be an electronic connection between the user and the device.
[0072] In one alternative embodiment to the device just described, further the device is configured to detect, and to display in the user interface, a change in conditions related to the initial state.
[0073] In one alternative embodiment to the device just described, further the detection occurs in real-time relative to the change in conditions.
[0074] In one alternative embodiment to the device just described, further the change in display occurs in real-time relative to the change in conditions. In some embodiments the change may occur automatically, without human intervention. In other embodiments, the change will occur at the command of a human user.
[0075] In some embodiments the change may occur automatically, without human intervention, and the human user is notified of the change in real-time. In other embodiments, the user indicates a manner in which the display should change to best present the change in conditions.
[0076]
[0080] One embodiment is a command and control system for management of drone fleets. In some embodiments, such system includes a command and control unit for receiving data, which is configured to issue commands for controlling sub-systems. In such embodiments, the sub-systems are configured to receive the data and transmit it to the command and control unit.
[0081] In an alternative embodiment to the command and control system just described, further the sub-systems include (1) a docking station for storage, charging, launching, and retrieving drones; (2) one or more drones; (3) an instrument on each drone for receiving and transmitting the data; (4) a positioning sub-system on each drone for determining the position and orientation of the drone in relation to a target; and (5) static support equipment for receiving data or taking other action.
[0082] In an alternative embodiment to the command and control system just described, the system further includes a processing device configured to receive the data, and to process the data into a 3D model in relation to the pictorial representation of an area in which the target is located.
[0083] In an alternative embodiment to the command and control system just described, further the processing device is configured to transmit the 3D model to the command and control unit.
[0084] In an alternative embodiment to the command and control system just described, further the sub-systems are communicatively connected in a mesh network.
[0085]
[0086] Device 500 may operate automatically, or at the command of human user 510. There are three drones in this example, drone 1 520a, drone 2 520b, and drone 3 520c, each with a direct communication path to the portable device (530a, 530b, and 530c, respectively), and each with connection to the other drones (for drone 1, 540a and 540c; for drone 2, 540a and 540b; for drone 3, 540b and 540c). It is not required that all units be in contact with all other units all of the time. In a mesh network, the key requirement is that at least one remote unit, say drone 1 520a, is in direct communication with portable device 500 acting as a controller, and one or more of the other remote devices, drone 2 520b and drone 3 520c, are in contact with the unit that is in direct contact with the controller (for example, at some point in time path 530b may be broken or down, but drone 2 520b is still in contact with the controller through drone 1 520a on path 530a).
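The relaying scenario just described, in which drone 2 reaches the controller through drone 1 when its direct path 530b is down, can be sketched as route discovery over the currently live links. The breadth-first search shown here is one standard way to find such a relay path; it is an illustrative assumption, not the routing method disclosed for the System.

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search over live links; links maps each node to the
    set of nodes it can currently reach directly."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route: the node is cut off from the controller

# Path 530b (drone_2 <-> controller) is down, but drone-to-drone links
# and drone_1's path 530a to the controller remain.
links = {
    "controller": {"drone_1"},
    "drone_1": {"controller", "drone_2", "drone_3"},
    "drone_2": {"drone_1", "drone_3"},
    "drone_3": {"drone_1", "drone_2"},
}
route = find_route(links, "drone_2", "controller")
```

As links are lost or regained, re-running the search over the updated link table reflects the dynamic reconnection behavior of the mesh.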
[0087] In the system portrayed in
[0088]
[0089]
[0090] (1) No feed at all (640 or 650) to the portable device 610 acting as a controller; instead, either real-time videos 640, or video files 650, or a mix of both according to defined criteria, is transmitted by the drones 630a-630d to the processing device 620.
[0091] (2) No feed at all (640 or 650) to the processing device 620; instead, either real-time videos 640, or video files 650, or a mix of both according to defined criteria, is transmitted to the portable device 610. The portable device 610 would then relay feeds to the processing device 620, in some cases without any pre-processing by the portable device 610, and in other cases with some pre-processing. In all cases, after the processing device 620 receives the feeds, it could perform significant processing, and then store part or all of the data, send part or all of the data back to the portable device 610, or transmit part or all of the data to a receiver or transceiver located outside the system illustrated
[0092] (3) Any mix of feeds (real-time videos 640, video files 650, or both) to both the portable device 610 and the processing device 620 according to pre-defined criteria. In
[0093] (4) All of the alternative embodiments, including the three described immediately above, are changeable according to time, changing conditions, or changing criteria. By "changing conditions" is meant factors such as a change in the field of vision of specific drones 630a-630d, the quality of the field of vision of specific drones, changing atmospheric conditions affecting communication, events occurring at the target, the quantity of data being generated at each point in the system, the need for processing of specific types of data, the remaining flight time of drones, and other factors that can impact the desirability of collecting or transmitting either real-time video feeds 640 or video files 650. By "changing criteria" is meant rules regarding how much data should be pre-processed at which point in the system, how much data may be transmitted in what format at what time, the ranking of data by importance, changes in the importance of the target, changes in the available processing power at the portable device 610 or the processing device 620, and other factors within the control of the system that could increase the quantity or quality of the data collected and/or the information produced from the data.
[0094] (5) All of the foregoing discussion assumes one portable device 610, and one processing device 620, as illustrated in
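The mode selection contemplated by alternatives (1) through (4) above can be sketched as a small decision routine. The sketch below is illustrative Python and not part of the patent; the field names (link quality, target importance, storage, remaining flight time) are assumed examples of the "changing conditions" and "changing criteria" discussed, and the thresholds are arbitrary.

```python
def select_feed_mode(drone_status, criteria):
    """Pick a transmission mode for one drone: 'realtime' (feed 640),
    'file' (feed 650), 'both', or 'none', based on illustrative
    changing conditions and changing criteria.
    All field names here are assumptions, not from the patent."""
    # Poor link quality rules out a live stream; fall back to recording
    # a video file on board if storage remains, otherwise send nothing.
    if drone_status["link_quality"] < criteria["min_link_quality"]:
        return "file" if drone_status["storage_free_mb"] > 0 else "none"
    # High-importance targets justify both a live feed and an archived file.
    if drone_status["target_importance"] >= criteria["both_feeds_threshold"]:
        return "both"
    # Low remaining flight time: stream live so nothing is lost on landing.
    if drone_status["flight_minutes_left"] < criteria["low_flight_minutes"]:
        return "realtime"
    return "realtime"

criteria = {"min_link_quality": 0.4, "both_feeds_threshold": 8,
            "low_flight_minutes": 5}
drone = {"link_quality": 0.9, "target_importance": 9,
         "storage_free_mb": 512, "flight_minutes_left": 20}
print(select_feed_mode(drone, criteria))  # prints "both"
```

Re-evaluating such a routine over time, as conditions and criteria change, yields the adaptive behavior described in alternative (4).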
[0095] One embodiment is a system for real-time mapping of a target, including one or more drones for receiving sensory data from a geographic area and transmitting such data to a portable device, wherein the portable device is communicatively connected to the drones, and the portable device is configured to receive the sensory data transmitted from the drones.
[0096] In one alternative embodiment of the system for real-time mapping of a target just described, the portable device is further configured to process the sensory data received from the drones to create a real-time 3D model of an area in which a target is located.
[0097] In one alternative embodiment of the system for real-time mapping of a target just described, the portable device is further configured to send commands to the drones to perform actions in relation to the target.
[0098] In one alternative embodiment of the system for real-time mapping of a target just described, the portable device is further configured to retransmit the sensory data to a processing device, and the processing device is configured to process the sensory data received from the portable device to create a real-time 3D model of an area in which a target is located.
[0099] In one alternative embodiment of the system for real-time mapping of a target just described, the processing device is further configured to send commands to the drones to perform actions in relation to the target.
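The fusion of sensory data from several drones into a single real-time 3D model can be sketched at its simplest as a voxel occupancy grid. The Python fragment below is an illustrative stand-in and not the patent's method: the actual modelling the embodiments contemplate is unspecified, and here point observations from the drones are merely binned into coarse occupied voxels so that overlapping observations collapse into one shared model.

```python
def build_voxel_model(observations, voxel_size=1.0):
    """Fuse (x, y, z) point observations reported by the drones into a
    coarse 3D occupancy model: a set of occupied voxel indices.
    A simplified stand-in for the real-time 3D modelling step."""
    occupied = set()
    for x, y, z in observations:
        # Map each point to the index of the voxel that contains it.
        voxel = (int(x // voxel_size),
                 int(y // voxel_size),
                 int(z // voxel_size))
        occupied.add(voxel)
    return occupied

# Points from two drones observing the same area; overlapping
# observations collapse into shared voxels.
drone_a_points = [(0.2, 0.3, 0.1), (1.5, 0.4, 0.0)]
drone_b_points = [(0.4, 0.6, 0.3), (2.7, 1.1, 0.2)]
model = build_voxel_model(drone_a_points + drone_b_points, voxel_size=1.0)
print(sorted(model))  # [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
```

In the embodiments above, this fusion step would run either on the portable device or on the processing device, with new observations folded into the model as they arrive.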
[0100]
[0101] In the example presented in
[0102]
[0103]
[0104]
[0105] Exemplary Usages: Various embodiments of the invention will prove useful for many different usages, including, without limitation, any or all of the following:
[0106] (1) Military operations: According to what is known as "the Revolution in Military Affairs" ("RMA"), drone air fleets are becoming one of the major modes of aerial warfare and may become the predominant mode by the decade of the 2030's. Drones are often referenced as "UAV's," short for "Unmanned Aerial Vehicles," but in addition to UAV's, there are now "UGV's," or "Unmanned Ground Vehicles," of many types, and a beginning of "USV's," short for "Unmanned Surface Vehicles," essentially ships of various kinds. It is likely that there will be fleets of UAV's, separate fleets of UGV's, and separate fleets of USV's. It is likely that there will also be combined fleets of multiple types of units, all managed on a single controller infrastructure. There will be espionage air fleets, or combined-asset fleets, as well, and these may be coordinated with military fleets in the air, on the ground, on the sea, and/or in space, managed on one C&C System or managed by multiple Systems that are in close and continuous communication with one another.
[0107] (2) Civilian operations: In area after area of the civilian economy, mobile assets, particularly unmanned mobile assets, are becoming increasingly important. In many areas of agriculture and water management, such assets are critical for monitoring and measuring the environment. For agriculture, as an example, unmanned mobile assets may help monitor water consumption of plants, fertilizer consumption, or pest infestation. Mobile assets could also be deployed to help manage these factors, by turning on water, or by spraying either fertilizer or pest control material. For water management, for example, mobile assets can monitor water flow, points of storage, possible leaks, points and quantities of consumption, and other factors. Other mobile assets could help execute water management by activating or deactivating points of exit, or by plugging leaks.
[0108] Depending on the scale of such projects, fleets of mobile assets may be essential. For example, on construction sites, particularly those that include multiple buildings, mobile assets are needed for safety and management: monitoring the creation and maintenance of safety structures such as barriers, checking to ensure that personnel use mandated safety equipment, or monitoring situations to help enhance fire safety. In security, infrastructure, and transportation applications, mobile assets are becoming increasingly important. Various embodiments of the systems and methods described herein will be useful in deploying and managing such mobile assets, possibly in conjunction with fixed assets.
[0109] (3) Mixed civilian and military operations: There are usages in which civilian projects must be protected by military assets. One might think, for example, of platforms for the drilling of gas and oil wells, which require civilian assets to complete the project and military assets to protect the platform. Drones or other mobile assets of different types would be deployed, and may be serviced by various embodiments of the C&C Systems described herein. As one example, it is possible to envision a maritime drilling platform in which mobile assets help monitor drilling progress and safety conditions, while the same mobile assets, or other mobile assets, monitor the area for military threats from the air, on the water, or underwater, and either direct friendly military assets responding to the threats or take military action against the threats themselves.
[0110] In this description, numerous specific details are set forth. However, the embodiments/cases of the invention may be practiced without some of these specific details. In other instances, well-known hardware, materials, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. In this description, references to “one embodiment” and “one case” mean that the feature being referred to may be included in at least one embodiment/case of the invention. Moreover, separate references to “one embodiment”, “some embodiments”, “one case”, or “some cases” in this description do not necessarily refer to the same embodiment/case. Illustrated embodiments/cases are not mutually exclusive, unless so stated and except as will be readily apparent to those of ordinary skill in the art. Thus, the invention may include any variety of combinations and/or integrations of the features of the embodiments/cases described herein. Also herein, flow diagrams illustrate non-limiting embodiment/case examples of the methods, and block diagrams illustrate non-limiting embodiment/case examples of the devices. Some operations in the flow diagrams may be described with reference to the embodiments/cases illustrated by the block diagrams. However, the methods of the flow diagrams could be performed by embodiments/cases of the invention other than those discussed with reference to the block diagrams, and embodiments/cases discussed with reference to the block diagrams could perform operations different from those discussed with reference to the flow diagrams. Moreover, although the flow diagrams may depict serial operations, certain embodiments/cases could perform certain operations in parallel and/or in different orders from those depicted. 
Moreover, the use of repeated reference numerals and/or letters in the text and/or drawings is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments/cases and/or configurations discussed. Furthermore, methods and mechanisms of the embodiments/cases will sometimes be described in singular form for clarity. However, some embodiments/cases may include multiple iterations of a method or multiple instantiations of a mechanism unless noted otherwise. For example, when a controller or an interface are disclosed in an embodiment/case, the scope of the embodiment/case is intended to also cover the use of multiple controllers or interfaces.
[0111] Certain features of the embodiments/cases, which may have been, for clarity, described in the context of separate embodiments/cases, may also be provided in various combinations in a single embodiment/case. Conversely, various features of the embodiments/cases, which may have been, for brevity, described in the context of a single embodiment/case, may also be provided separately or in any suitable sub-combination. The embodiments/cases are not limited in their applications to the details of the order or sequence of steps of operation of methods, or to details of implementation of devices, set forth in the description, drawings, or examples. In addition, individual blocks illustrated in the FIGs. may be functional in nature and do not necessarily correspond to discrete hardware elements. While the methods disclosed herein have been described and shown with reference to particular steps performed in a particular order, it is understood that these steps may be combined, sub-divided, or reordered to form an equivalent method without departing from the teachings of the embodiments/cases. Accordingly, unless specifically indicated herein, the order and grouping of the steps is not a limitation of the embodiments/cases. Embodiments/cases described in conjunction with specific examples are presented by way of example, and not limitation. Moreover, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and scope of the appended claims and their equivalents.