Methods and Device for Autonomous Rocketry
20230249847 · 2023-08-10
Inventors
CPC classification
B64G1/247
PERFORMING OPERATIONS; TRANSPORTING
B64G1/62
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
Rocket control is a difficult and unpredictable task in environments with inclement weather. As a result, launch missions are often strictly limited based on weather conditions. The present invention provides methods for controlling a rocket to account for environmental uncertainties and maintain optimal mission performance. First, sensors collect data about the rocket's environment, passing the information to storage in the rocket's database. Second, the rocket's processor manipulates the database with an optimization algorithm producing instructions. Third, the instructions command the rocket's control system for optimal end-to-end trajectory.
Claims
1. A method for rocket control, the method comprising a rocket, data sensors conveying data to a database and processor, a convolutional neural network processing data from the database for computer vision and environmental sensing, the convolutional neural network integrating with a deep reinforcement learning algorithm, the deep reinforcement learning algorithm processing the data from the convolutional neural network and producing commands for thrust vector valve manipulation, the commands manipulating the thrust vector valves, and the thrust vector valves controlling thruster output to optimize rocket control for landing performance.
2. The method of claim 1, wherein a convolutional neural network optimizes metrics corresponding to distance, time, and impact.
3. The method of claim 1, wherein a deep neural network integrated with a reinforcement learning program optimizes metrics corresponding to distance, time, and impact.
4. The method of claim 1, wherein the database and processor are radiation hardened graphics processing units.
5. The method of claim 1, wherein the database and processor are configured using a field programmable gate array.
6. A computing device for commanding a reaction control system, the computing device comprising a simulation trained artificial intelligence computer program embedded on a radiation hardened processor, the radiation hardened processor further comprising at least one graphics processing unit, storing at least one artificial intelligence computer program processing real time sensor data, generalizing about the rocket's trajectory environment, and storing at least one deep learning program to optimize commands for reaction control, the commands controlling thrust vectors for the rocket, wherein wiring connects the radiation hardened processor to the rocket's thrust vectors further comprising a fuel injector, injecting fuel to one or more engines according to the commands produced by the deep learning computer program, the deep learning computer program further comprising at least one neural network.
7. The device of claim 6, wherein an artificial intelligence computer program optimizes metrics corresponding to distance, time, and impact, informing the command sequences for controlling the thrust vectors for the rocket, maintaining an optimal trajectory course throughout the rocket's flight, from point-to-point, wherein the first point is the launch pad, and the final point is the landing pad.
8. The device of claim 6, wherein a convolutional neural network integrated with a reinforcement learning program optimizes metrics corresponding to distance, time, and impact.
9. The device of claim 6, wherein the reinforcement learning agent controls thrust vectors to manipulate thrust control and steer the rocket's pitch, attitude, roll, and yaw.
10. The device of claim 6, wherein the engines, receiving fuel injections from the fuel injectors according to commands from thrust vectors, propel the rocket using chemical propellants, generating thrust throughout the rocket's point-to-point trajectory.
11. The device of claim 6, wherein the radiation hardened processor processes data using a graphics processing unit, performing convolutional operations on real-time sensor data, predicting associations between thrust vector commands and flight path accuracy according to an optimal trajectory, signaling the thrust vector commands from launch to landing for optimal performance.
12. A method for intelligent rocket trajectory control, the method comprising a rocket, launching from a launch pad, following an optimal trajectory, using sensors to assess position, recording sensor data to a database and computer processor, the database and processor stored on board the rocket, the processor further processing the data using convolutional neural networks, the convolutional neural networks generating a point cloud environment, and a reinforcement learning agent further processing the point cloud environment for action-oriented commands to manipulate the rocket's control system to adjust deviations from the flight path and optimize trajectory.
13. The method of claim 12, wherein a reinforcement learning program optimizes metrics corresponding to distance, time, and impact, informing commands to a thrust vector control system, manipulating fuel injectors for the rocket's engines, producing thrust to control the rocket's trajectory.
14. The method of claim 12, wherein the database and processor are configured using a field programmable gate array, further comprising at least one graphics processing unit.
15. The method of claim 12, wherein the launch pad and landing zone are both based on land, the coordinates for each defining the rocket's start and end state respectively.
16. The method of claim 12, wherein the convolutional neural network generating the point cloud environment processes the sensor data using deep convolutional operations, classifying data according to the optimal flight path, maintaining controls commanding fuel injection for the rocket's engines, producing thrust.
17. The method of claim 12, wherein the reinforcement learning agent controlling the rocket's thrust vectors further comprises a simulation trained optimal policy, generalizing about decision making using at least one deep neural network.
18. The method of claim 12, wherein the reinforcement learning agent controlling the rocket's thrust vectors further comprises a simulation trained optimal policy, generalizing about decision making using two neural networks, further comprising an actor network and a critic network processing data to make intelligent control commands.
19. The method of claim 12, wherein the reinforcement learning agent controlling the rocket's thrust vectors further comprises a simulation trained optimal policy, generalizing about decision making using a predictive program processing data from input to output.
20. The method of claim 12, wherein the reinforcement learning agent controlling the rocket's thrust vectors further comprises a statistical program, accounting for uncertainty in the rocket's trajectory using a predictive graph program, mapping thrust vector commands to positions in the point cloud environment, updating in real time according to sensor data aggregating from the rocket's sensors.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0027] In certain embodiments, the present disclosure is a process for autonomous rocketry including a simulation trained deep reinforcement learning computer program 100. In certain embodiments, the deep reinforcement learning computer program may be embedded in a radiation hardened processor and database 101. The database and processor may receive real time data regarding the rocket's environment 102, further processing the data to generalize about uncertainty and make control decisions 103 using a deep learning program 104. In such embodiments, the rocket's end-to-end trajectory is optimized according to intelligent commands from the deep learning algorithm 105.
[0028] In certain embodiments, the present disclosure is a process for end-to-end rocket trajectory optimization, including a rocket starting on a launch pad, autonomously launching. In such embodiments, the rocket autonomously follows an optimized trajectory path to orbit, where the rocket reaches orbit and travels to a return point. Next, the rocket reorients its position to return to Earth using a control system autoactivating for landing control 200. Then, the rocket performs a vertical powered landing 201 in a landing zone 202.
[0029] In certain embodiments of the present disclosure, an onboard database and processor receive LIDAR sensor data 300 from LIDAR sensors on the rocket. The data is then stored in radiation hardened FPGA 301 which processes the sensor data with a trained deep intelligence 302. The deep intelligence then sends signals to the thrust vector control system 303. In turn, the thrust vector control system manipulates thrust vector valves 304. As a result, the rocket's control is optimized from launch to landing 305.
[0030] In certain embodiments, the present disclosure is a device for commanding a reaction control system. The device comprises a simulation trained artificial intelligence program, which operates on a radiation hardened processor. The artificial intelligence program processes real time sensor data and generalizes about the rocket's trajectory environment 104. Specifically, in certain embodiments, the artificial intelligence program uses a deep learning program to optimize commands for end-to-end trajectory. In such embodiments, the artificial intelligence computer program produces commands that control thrust vectors for the rocket. In such embodiments, the thrust vectors also may include a fuel injector, injecting fuel to one or more engines according to the commands produced by the deep learning program 302.
[0031] In certain embodiments, the present disclosure is a method for rocket control. In such embodiments, the method includes a rocket with several LiDAR or other data sensors, which record information about the rocket's environment in real time 300. The LiDAR and other data sensors may then transmit the data to a database and processor 301. The processor may in certain embodiments include an embedded machine learning algorithm, using a convolutional neural network to generate an accurate point cloud environment. Additionally, in certain embodiments, the processor may include a second machine learning algorithm, such as a deep reinforcement learning algorithm for producing commands for thrust vector valve manipulation 304.
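Purely as a non-limiting sketch of the perception path described in paragraph [0031] — the function names and the single hand-rolled convolution layer are illustrative assumptions, not the claimed implementation — LiDAR returns may be projected into a Cartesian point cloud and passed through a convolutional operation:

```python
import numpy as np

def lidar_to_point_cloud(ranges, azimuths, elevations):
    """Project spherical LiDAR returns (range, azimuth, elevation) to x, y, z."""
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=-1)

def conv2d_valid(image, kernel):
    """Minimal 'valid' 2-D convolution, standing in for one CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A straight-ahead return one metre away maps to the point (1, 0, 0).
points = lidar_to_point_cloud(np.array([1.0]), np.array([0.0]), np.array([0.0]))

# A feature map over a toy 4x4 range image with an averaging kernel.
features = conv2d_valid(np.ones((4, 4)), np.ones((2, 2)) / 4.0)
```

An embodiment as described would instead apply many stacked, trained convolutional layers on a graphics processing unit; the single layer above only illustrates the operation's shape.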
[0032] In certain embodiments, the present disclosure is a process for autonomous rocket trajectory control 105. In such embodiments, the process involves a rocket launching from a launch pad and following an optimal trajectory to orbit. The rocket reaches orbit, delivers a payload, and then reorients before returning to Earth for a landing in a landing zone 202. In such embodiments, the rocket may use sensors to assess position 300, recording sensor data to a database and computer processor. The processor may process the data using an artificial intelligence computer program, with further processing for action-oriented commands using a reinforcement learning agent to manipulate the rocket's control system, optimizing trajectory until the rocket's landing 201.
[0033] In certain embodiments, the present disclosure utilizes various hardware components. For example, certain embodiments include mounting a radiation hardened field programmable gate array (FPGA) on the rocket 101, with wiring connections to various thrust chambers. In certain embodiments, the FPGA may contain both a central processing unit and a graphics processing unit to perform computations. Commands from the FPGA move to control vector units, which may open and close thrust chambers on the rocket, or limit thrust output to a certain degree 304.
[0034] In certain embodiments of the present disclosure, the FPGA may be embedded with a deep learning algorithm 104. The embedded deep learning algorithm may be expressed as software code written in one of several programming languages, including Python, C, C++, or other machine code. The deep learning algorithm may be trained in a simulation environment before being embedded in the hardware processor 100. Throughout the mission, the algorithm may correct for differences between the actual flight path and the optimal flight path by issuing commands corresponding to thrust vector control.
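As a simplified, non-limiting stand-in for the deviation-correction behaviour described in paragraph [0034] — a classical proportional-derivative law rather than the deep learning algorithm itself, with gains, gimbal limits, and control authority chosen purely for illustration — the mapping from flight-path deviation to a thrust vector command can be sketched as:

```python
def thrust_vector_correction(deviation, deviation_rate,
                             kp=0.8, kd=0.3, max_angle=0.1):
    """PD correction of gimbal angle (radians), clipped to actuator limits.

    All gains and limits here are illustrative assumptions, not values
    from the disclosure.
    """
    angle = -(kp * deviation + kd * deviation_rate)
    return max(-max_angle, min(max_angle, angle))

def simulate(steps=200, dt=0.05):
    """Toy 1-D closed loop: lateral deviation from the optimal path shrinks."""
    pos, vel = 5.0, 0.0        # metres off the optimal path, m/s
    accel_per_rad = 20.0       # assumed lateral authority per radian of gimbal
    for _ in range(steps):
        angle = thrust_vector_correction(pos, vel)
        vel += accel_per_rad * angle * dt
        pos += vel * dt
    return pos
```

In the disclosed embodiments this correction would be produced by the simulation-trained deep learning program; the PD loop above only makes the "measure deviation, command gimbal, reduce deviation" cycle concrete.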
[0035] In certain embodiments, the present disclosure may converge hardware and software components, including both a radiation hardened FPGA and a deep reinforcement learning algorithm, which may be fastened in the rocket to control the rocket's thrust output 304. In certain embodiments, electric wiring from the FPGA may carry signals from the deep reinforcement learning control algorithm to fuel injectors throughout the point-to-point journey 303. In such embodiments, the entire trajectory, from launch to landing, may be controlled by the deep reinforcement learning control algorithm manipulating thrust vector commands corresponding to thrust output.
[0036] In certain embodiments, the present disclosure may include sensors collecting data about the rocket's environment 102. The sensor data may be processed and stored in the rocket's database, and subsequently processed by convolutional neural networks to create a digital environment 103. The sensor data may be further processed and manipulated by a reinforcement learning agent, which performs optimal control commands to manipulate rocket trajectory 305.
[0037] In certain embodiments of the present disclosure, the hardware for the rocket may use a niobium alloy metal with a protective heat shield for the rocket body 200. In such embodiments, the inside of the rocket is made up of a chemical propellant engine, with thrust chambers relaying force through a nozzle. The control systems are embedded on a radiation hardened processor 301, with electrical wiring sending signals throughout the rocket 303.
[0038] In certain embodiments, the present disclosure may be composed of three parts, reflecting the three flight stages, which include launch, powered flight, and landing 201. In each stage, a separate software component may control the rocket to optimize safety and performance for point-to-point travel. Moreover, in such embodiments the software stack embedded in the rocket's hardware processors includes convolutional neural networks, reinforcement learning agents, and integrated deep reinforcement learning systems 104. In embodiments, the disclosure provides a way to unify computer perception and decision-making technologies for point-to-point rocket control in a singular system 302. In doing so, the methods marry software code for deep learning and reinforcement learning technologies which collaboratively control the rocket from liftoff to landing.
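The three-stage decomposition described in paragraph [0038] may be made concrete with a simple dispatcher that routes the current flight state to a per-stage controller. This is an illustrative sketch only: the stage names follow the disclosure, but the altitude and velocity thresholds and the controller labels are assumptions introduced here for demonstration.

```python
from enum import Enum, auto

class Stage(Enum):
    LAUNCH = auto()
    POWERED_FLIGHT = auto()
    LANDING = auto()

def flight_stage(altitude_m, vertical_velocity_ms):
    """Crude stage selection from vehicle state; thresholds are illustrative."""
    if altitude_m < 1000.0 and vertical_velocity_ms >= 0.0:
        return Stage.LAUNCH
    if vertical_velocity_ms < 0.0 and altitude_m < 5000.0:
        return Stage.LANDING
    return Stage.POWERED_FLIGHT

# One software component per stage, as paragraph [0038] describes; the
# string labels stand in for the stage-specific control programs.
CONTROLLERS = {
    Stage.LAUNCH: lambda state: "ascent_guidance",
    Stage.POWERED_FLIGHT: lambda state: "trajectory_tracking",
    Stage.LANDING: lambda state: "powered_descent",
}
```

In an actual embodiment the dispatch would select among the perception and decision-making programs named in the disclosure rather than placeholder labels.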
[0039] In certain embodiments, the present disclosure includes LiDAR sensors gathering real-time data about the rocket's environment which is stored in an on-board database and processed with a deep reinforcement learning algorithm producing instructions to optimize rocket control in uncertain environments including inclement weather conditions 100. In embodiments, the hardware components for the rocket include embedding LiDAR sensors on the rocket, which gather data relating to the rocket's environment. The data collected is routed to an on-board hardware processor with electrical wiring, which allows the data to be processed to create a virtual environment 103. Further electrical wiring connects the on-board hardware processor to thrust chamber valves which command and control propellant injectors.
[0040] In certain embodiments, the present disclosure includes using LiDAR sensors 300 for perception and convolutional neural networks 302 for generating a digital environment. Programming code for the convolutional neural networks may be written in various programming languages including Python, C, and C++ depending on mission need. The software may be developed in a simulation environment prior to flight and subsequently embedded in the rocket's on-board processor 104.
[0041] In certain embodiments, the present disclosure provides a way to unify computer vision and decision-making technologies using LiDAR sensors 300 and trained deep reinforcement learning algorithms 100 to process data in real-time and effectively command rocket control systems 105. In such embodiments, the software may be developed using simulation data and subsequently embedded in a hardware processor prior to flight 104. The combined hardware-software stack may be optimized for point-to-point mission performance with considerations to both efficiency and safety.
[0042] In certain embodiments, the disclosed methods include data sensors gathering real-time data about the environment 102. The data may be stored in an on-board database 101, projected to a point-cloud environment modeling the physical world in real time. The data is further processed with a deep intelligence algorithm 104 controlling the rocket through command sequences corresponding to thruster command controls to manipulate rocket positioning including roll, pitch, yaw, and attitude 105.
[0043] In certain embodiments, the present disclosure may use hardware such as a radiation hardened processor using graphics processing units to process data. For example, certain embodiments include mounting a radiation hardened FPGA on the rocket, with wiring connections to various thrust chambers. The FPGA may contain both a central processing unit and a graphics processing unit to perform computations. Commands from the FPGA 301 move to control vector units, which may open and close thrust chambers on the rocket, or limit thrust output to a certain degree. The FPGA may be connected throughout the rocket and to sensors with various electrical wirings for transmitting data. Data sensors collecting information may include LiDAR 300, cameras, video, radio, or inertial instruments.
[0044] In embodiments the software control system utilizes artificial intelligence programs 104 processing data in real time to command the rocket through space 105. For example, the point cloud environment may be processed with convolutional neural networks predicting probabilities and assigning associated actions to optimize the rocket's trajectory. In certain embodiments, the digital point-cloud provides real-time data regarding the rocket's environment from liftoff to landing. In processing the point-cloud data, the rocket's software stack iteratively produces commands corresponding to thrust vector controls 304 for manipulating the rocket to ensure safety and efficiency 305.
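The iterative prediction step described in paragraph [0044] — network outputs converted to action probabilities, with the associated thrust vector command issued — may be sketched as follows. The action names and the argmax selection rule are illustrative assumptions; a trained agent might instead sample from the distribution or emit continuous gimbal angles.

```python
import numpy as np

# Hypothetical discrete action set for one gimbal axis.
ACTIONS = ("gimbal_minus", "hold", "gimbal_plus")

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def command_from_logits(logits):
    """Pick the most probable action for the next thrust vector command."""
    probs = softmax(logits)
    return ACTIONS[int(np.argmax(probs))], probs
```

Each control cycle, the point-cloud features would be pushed through the convolutional network to produce fresh logits, and the resulting command would be signaled to the thrust vector controls 304.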
[0045] In certain embodiments of the disclosure, a rocket launches a satellite to orbit and returns to Earth. During return, an autonomous control system activates with the push of a button. Once activated, the control system autonomously commands the rocket by processing real time data 102 about the landing zone and adapting the rocket's mechanics, positioning, and trajectory accordingly by manipulating the rocket's thrust vector output 304. Multiple LiDAR sensors, GPS sensors, and inertial navigation sensors on the rocket, the landing pad, or other locations such as drones or ships may record data for processing to create a 3D point-cloud environment. In real time, a convolutional neural network identifies the landing zone, performing the rocket's vision function. Meanwhile, an embedded reinforcement learning agent maximizes a reward function defining optimal landing metrics including distance, time, and impact trajectory and force 201.
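The reward function named in paragraph [0045] may be sketched as a weighted combination of the stated landing metrics. The specific weights and the linear negative-cost form are assumptions introduced for illustration; an actual embodiment would tune the reward in simulation before embedding the trained policy.

```python
def landing_reward(distance_m, elapsed_s, impact_velocity_ms,
                   w_dist=1.0, w_time=0.01, w_impact=0.5):
    """Negative weighted cost over the landing metrics from paragraph [0045]:
    distance to the pad, elapsed time, and impact force (via touchdown
    velocity). Weights are illustrative. Higher (less negative) is better,
    so a reinforcement learning agent maximizing this reward is driven
    toward accurate, timely, soft landings.
    """
    return -(w_dist * distance_m
             + w_time * elapsed_s
             + w_impact * abs(impact_velocity_ms))
```

For example, a touchdown 0.5 m off-centre at 1 m/s scores higher than one 20 m off-centre at 8 m/s over the same flight time, which is the ordering the agent's training would reinforce.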
[0046] In certain embodiments, the disclosed methods include LiDAR sensors 300 gathering real-time data about the rocket's environment 102. The data is stored in an on-board database and processed with a deep reinforcement learning algorithm producing instructions to optimize rocket control in uncertain environments including inclement weather conditions. In embodiments, the hardware components for the rocket include embedding LiDAR sensors on the rocket, which gather data relating to the rocket's environment. The data collected is routed to an on-board hardware processor with electrical wiring, which allows the data to be processed to create a virtual environment. Further electrical wiring connects the on-board hardware processor to thrust chamber valves which command and control propellant injectors and thrust vector valves to optimize the rocket's control during landing 201.
[0047] It is to be understood that while certain embodiments and examples of the invention are illustrated herein, the invention is not limited to the specific embodiments or forms described and set forth herein. It will be apparent to those skilled in the art that various changes and substitutions may be made without departing from the scope or spirit of the invention and the invention is not considered to be limited to what is shown and described in the specification and the embodiments and examples that are set forth therein. Moreover, several details describing structures and processes that are well-known to those skilled in the art and often associated with rockets and rocket trajectories or other launch vehicles are not set forth in the following description to better focus on the various embodiments and novel features of the disclosure of the present invention. One skilled in the art would readily appreciate that such structures and processes are at least inherently in the invention and in the specific embodiments and examples set forth herein.
[0048] One skilled in the art will readily appreciate that the present invention is well adapted to carry out the objectives and obtain the ends and advantages mentioned herein, as well as those that are inherent in the invention and in the specific embodiments and examples set forth herein. The embodiments, examples, methods, and compositions described or set forth herein are representative of certain preferred embodiments and are intended to be exemplary and not limitations on the scope of the invention. Those skilled in the art will understand that changes to the embodiments, examples, methods, and uses set forth herein may be made that will still be encompassed within the scope and spirit of the invention. Indeed, various embodiments and modifications of the described compositions and methods herein which are obvious to those skilled in the art are intended to be within the scope of the invention disclosed herein. Moreover, although the embodiments of the present invention are described in reference to use in connection with rockets or launch vehicles, those of ordinary skill in the art will understand that the principles of the present invention could be applied to other types of aerial vehicles or apparatus in a wide variety of environments, including environments in the atmosphere, in space, on the ground, and underwater.