Patent classifications
G01C23/00
Machine to machine targeting maintaining positive identification
A method of targeting, which involves capturing a first video of a scene about a potential targeting coordinate by a first video sensor on a first aircraft; transmitting the first video and associated potential targeting coordinate by the first aircraft; receiving the first video on a first display in communication with a processor, the processor also receiving the potential targeting coordinate; selecting the potential targeting coordinate to be an actual targeting coordinate for a second aircraft in response to viewing the first video on the first display; and guiding the second aircraft toward the actual targeting coordinate; where positive identification of a target corresponding to the actual targeting coordinate is maintained from selection of the actual targeting coordinate.
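A minimal Python sketch of the selection step described above. The `VideoFrame` type, the operator-confirmation callback, and the one-second continuity gap are illustrative assumptions, not details from the patent; the idea is simply that a potential coordinate is promoted to an actual targeting coordinate only while the video feed, and hence positive identification, is unbroken.

```python
from dataclasses import dataclass


@dataclass
class VideoFrame:
    timestamp: float          # seconds since start of the feed
    coordinate: tuple         # the potential targeting coordinate in view


def select_target(frames, operator_confirms, max_gap=1.0):
    """Promote a potential coordinate to an actual targeting coordinate.

    Iterates over the received video frames; if the feed has a gap
    longer than max_gap seconds, positive identification is treated as
    lost and no coordinate is handed off. Otherwise, the first frame
    the operator confirms yields the actual targeting coordinate.
    """
    last_t = None
    for frame in frames:
        if last_t is not None and frame.timestamp - last_t > max_gap:
            return None  # identification lost: abort the handoff
        last_t = frame.timestamp
        if operator_confirms(frame):
            return frame.coordinate  # becomes the actual targeting coordinate
    return None
```

In practice the confirmation would come from the operator viewing the first display; here it is modeled as a callback over frames.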
METHOD FOR VERIFYING AT LEAST ONE FLIGHT PLAN, AND ASSOCIATED COMPUTER PROGRAM PRODUCT AND MODULE FOR VERIFYING AT LEAST ONE FLIGHT PLAN
The present invention relates to a method of assisting in the checking of at least one flight plan among at least a first flight plan and a second flight plan, each flight plan being associated with an ordered list of elements. The method comprises the steps of comparing the flight plans in order to identify, among the elements associated with said plans, common elements shared by all the plans and distinctive elements specific to each flight plan, and of displaying a first comparative zone comprising a tree structure defining a plurality of levels, each level comprising a single root formed from one of the common elements or a branch for each flight plan, at least one of the branches in the same level comprising at least one of the distinctive elements associated with the corresponding flight plan.
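The comparison the abstract describes is essentially a diff over two ordered waypoint lists: common elements become single "root" levels, and runs of plan-specific elements become "branch" levels. A minimal sketch, using the standard-library `difflib.SequenceMatcher` as an assumed stand-in for whatever comparison the patent actually specifies:

```python
from difflib import SequenceMatcher


def compare_plans(plan_a, plan_b):
    """Split two ordered element lists into display levels.

    Each element common to both plans becomes a ('root', element)
    level; each run of distinctive elements between common ones
    becomes a ('branch', {...}) level holding each plan's elements.
    """
    levels = []
    sm = SequenceMatcher(a=plan_a, b=plan_b, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            # Common elements: one single-root level per element.
            for elem in plan_a[i1:i2]:
                levels.append(("root", elem))
        else:
            # Distinctive elements: one branch per flight plan.
            levels.append(("branch", {"plan_a": plan_a[i1:i2],
                                      "plan_b": plan_b[j1:j2]}))
    return levels
```

For example, two plans sharing departure and arrival but differing in an intermediate waypoint produce a root, a branch, and a root level.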
Method for representing virtual information in a real environment
The invention relates to a method for ergonomically representing virtual information in a real environment, including the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup, wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in the view of the real environment.
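The core of the method above is a distance-based subdivision of the view: real objects nearer the system setup fall into a first region and are blended in differently from objects in a second, farther region. A minimal sketch under assumed names; the 50-metre boundary and the opacity-based rendering difference are illustrative choices, not taken from the patent:

```python
import math


def region_of(setup_pos, obj_pos, near_limit=50.0):
    """Assign a real-world object to the first (near) or second (far)
    region, based on its distance from the system setup's position."""
    return "first" if math.dist(setup_pos, obj_pos) < near_limit else "second"


def blend(setup_pos, obj_pos, label):
    """Blend in a virtual annotation differently per region:
    near objects get a fully opaque label, far ones a subdued one."""
    if region_of(setup_pos, obj_pos) == "first":
        return {"text": label, "alpha": 1.0}
    return {"text": label, "alpha": 0.4}
```

The setup position would come from the pose-estimation step (ascertaining position and orientation relative to the real environment); here it is passed in directly.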
Real-time lightning monitor for synthetic vision systems (SVS)
An aircraft-based synthetic vision system (SVS) is disclosed. In embodiments, the SVS includes avionics processors in communication with onboard lightning detection sensors, which provide the SVS with real-time lightning data (e.g., bearing to, and distance from, the aircraft) about proximate lightning strikes. Based on the real-time lightning data, the avionics processors generate flight deck effects (FDE) corresponding to identified areas of lightning activity (e.g., a sufficient quantity of strikes, exceeding a strike threshold, within a particular airspace during a time window), each FDE having a particular bearing to and distance from the aircraft. The FDE data is processed by a display system aboard the aircraft (e.g., a cockpit-based primary flight display (PFD) or head-worn/heads-up display (HWD/HUD)) which incorporates the generated FDEs into the SVS status display provided to the flight crew or pilot at the appropriate bearing and distance relative to the aircraft.
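The FDE-generation step can be sketched as binning recent strikes by bearing and range and flagging any cell that exceeds a strike threshold within the time window. All bin sizes, the threshold, and the window below are assumptions for illustration; the patent only requires that a sufficient quantity of strikes within a particular airspace and time window yields an FDE at a bearing and distance from the aircraft:

```python
from collections import Counter


def generate_fdes(strikes, now, window=300.0, threshold=3,
                  bearing_bin=30.0, range_bin=10.0):
    """Turn raw strike reports into flight deck effects (FDEs).

    strikes: iterable of (bearing_deg, distance_nm, time_s) tuples.
    Strikes inside the time window are grouped into bearing/range
    cells; any cell reaching the threshold becomes an FDE placed at
    the cell's centre bearing and distance.
    """
    cells = Counter()
    for bearing, distance, t in strikes:
        if now - t <= window:
            cells[(int(bearing // bearing_bin), int(distance // range_bin))] += 1
    return [{"bearing": (b + 0.5) * bearing_bin,
             "distance": (r + 0.5) * range_bin,
             "strikes": count}
            for (b, r), count in cells.items() if count >= threshold]
```

The resulting FDE list is what the display system (PFD or HWD/HUD) would render into the SVS view at the corresponding bearing and distance.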
Aerial operation support and real-time management
A method for supporting aerial operation over a surface includes obtaining a three-dimensional (3D) representation of the surface; converting the 3D representation of the surface to a two-dimensional (2D) representation of the surface; obtaining a 2D flight path of an aircraft based on the 2D representation of the surface; converting the 2D flight path to a 3D flight path including location coordinates; and controlling the aircraft to conduct a flight mission following the 3D flight path.
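The round trip in the abstract, flattening the surface to 2D for path planning and then lifting the 2D path back to 3D, can be sketched as follows. The elevation-lookup dictionary and the fixed height above ground are illustrative assumptions; the patent does not specify how the conversions are implemented:

```python
def flatten_surface(surface_points):
    """Convert a 3D surface representation to 2D by dropping elevation."""
    return [(x, y) for x, y, _ in surface_points]


def plan_3d_path(elevation, path_2d, agl=30.0):
    """Lift a 2D flight path back to 3D location coordinates.

    elevation: mapping from (x, y) to surface elevation at that point.
    Each 2D waypoint gains a z coordinate equal to the surface
    elevation plus a constant height above ground (agl).
    """
    return [(x, y, elevation[(x, y)] + agl) for x, y in path_2d]
```

A planner would normally generate `path_2d` over the flattened representation (e.g. a coverage pattern); here the flattened points themselves stand in for it.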
Video display system and method
A system for displaying videos, comprising a processing resource configured to: provide a data repository comprising a plurality of previously captured video segments (PCVSs) captured during previous operations of corresponding platforms, each being associated with metadata indicative of a Line-of-Sight (LoS) of a sensor, carried by the corresponding platform used to capture the corresponding PCVS, with respect to a fixed coordinate system established in space during capture of the corresponding PCVS; obtain an indication of a Region-of-Interest (RoI); identify one or more of the PCVSs that include at least part of the RoI, utilizing the LoSs associated with the PCVSs, giving rise to RoI matching PCVSs; and display at least part of at least one of the RoI matching PCVSs, referred to as displayed RoI matching PCVSs, on a display of an operating platform to an operator of the operating platform during a current operation of the operating platform.
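The identification step can be sketched as a spatial query over the stored LoS metadata. Representing each segment's LoS by the ground point the sensor was looking at (in the fixed coordinate system) is a simplifying assumption; a real system would intersect the full LoS footprint with the RoI:

```python
import math


def roi_matching_pcvs(segments, roi, max_offset=100.0):
    """Return the previously captured video segments (PCVSs) whose
    recorded line-of-sight ground point lies within max_offset metres
    of the region of interest.

    segments: list of dicts with a 'los_point' key holding the
    fixed-frame coordinate the sensor was aimed at during capture.
    """
    return [seg for seg in segments
            if math.dist(seg["los_point"], roi) <= max_offset]
```

The matching segments would then be offered for display on the operating platform alongside, or in place of, its live feed.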
Method for calibrating an altitude sensing stereo vision device of a UAV
The present invention relates to a method for calibrating an altitude sensing stereo vision device (122) of an unmanned aerial vehicle (100), wherein the method includes: arranging the unmanned aerial vehicle to take off from ground (G) and ascend; deriving at least one first altitude value (10a-15a) from the stereo vision device and obtaining at least one corresponding second altitude value (10b-15b) from another device (123) of the unmanned aerial vehicle during the ascent (1) of the unmanned aerial vehicle; recording the derived at least one first altitude value and the obtained at least one corresponding second altitude value as calibration data; deriving an additional first altitude value from the stereo vision device while the unmanned aerial vehicle flies a route; and adjusting the derived additional first altitude value based on the recorded calibration data.
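The calibration step above pairs stereo-derived altitudes with reference altitudes recorded during the ascent, then uses that data to correct later stereo readings. A minimal sketch assuming a least-squares linear correction, which is one plausible realization, not the patent's stated model:

```python
def fit_calibration(stereo_alts, reference_alts):
    """Least-squares linear fit mapping stereo-derived altitudes onto
    the reference device's altitudes recorded during the ascent.
    Returns (a, b) such that reference ~= a * stereo + b."""
    n = len(stereo_alts)
    mx = sum(stereo_alts) / n
    my = sum(reference_alts) / n
    sxx = sum((x - mx) ** 2 for x in stereo_alts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(stereo_alts, reference_alts))
    a = sxy / sxx
    b = my - a * mx
    return a, b


def adjust(stereo_alt, calibration):
    """Correct a stereo-derived altitude measured while flying a route,
    using the calibration data recorded during the ascent."""
    a, b = calibration
    return a * stereo_alt + b
```

With more than two calibration pairs the fit averages out per-sample noise, which is why the abstract records multiple first/second altitude value pairs during the ascent.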