Video system for piloting a drone in immersive mode
09747725 · 2017-08-29
Assignee
Inventors
Cpc classification
G06T19/20
PHYSICS
H04N23/90
ELECTRICITY
G05D1/0038
PHYSICS
International classification
G02B27/00
PHYSICS
G06T19/20
PHYSICS
G06T19/00
PHYSICS
H04N7/18
ELECTRICITY
Abstract
This system comprises a drone and a remote station with virtual reality glasses rendering images transmitted from the drone, and provided with means for detecting changes of orientation of the user's head. The drone generates a “viewpoint” image (P′1) and a “bird's eye” image (P′2) whose field is wider and whose definition is lower. When the sight axis of the “viewpoint” image is modified in response to changes of position of the user's head, the station generates locally during the movement of the user's head a combination of the current “viewpoint” and “bird's eye” images, with outlines (CO′) adjusted as a function of the changes of position detected by the detection means.
Claims
1. A system for piloting a drone in immersion, comprising: a drone provided with shooting means and a ground station comprising: virtual reality glasses rendering images captured via the shooting means and transmitted from the drone by wireless communication means; detecting means for detecting movement of a head of a user wearing the glasses; as well as ground graphic processing means adapted to generate rendered images, a first generating means for generating on board the drone a first image, called a viewpoint image, and transmitting the first image to the ground graphic processing means; modification means provided in the drone to modify a sight axis of the viewpoint image in response to movement of the user's head detected by the detecting means and transmitted to the drone via the wireless communication means, a second generating means for generating on board the drone a second image, called a bird's view image, wherein a field of the second image is wider and an angular resolution is lower than the viewpoint image, and transmitting the second image to the ground graphic processing means; and wherein the ground graphic processing means are adapted to generate locally during a movement of the user's head, by a combination of a current viewpoint image and a current bird's view image present in the ground station, images to be rendered, with outlines adjusted as a function of the changes of movement detected by the detecting means.
2. The system of claim 1, wherein the ground graphic processing means are adapted to perform an incrustation of the current viewpoint image into the current bird's view image and to apply variable cropping operations to a so-obtained image.
3. The system of claim 1, wherein the shooting means comprise a set of shooting cameras of wide field of view and of different sight axes.
4. The system of claim 3, wherein the shooting means comprise cameras of different resolutions for the viewpoint image and the bird's view image.
5. The system of claim 4, comprising a first camera whose sight axis is arranged according to a main axis of the drone, and a set of cameras of lower resolutions with sight axes oriented to the left and to the right with respect to the main axis.
6. The system of claim 5, wherein the lower resolution cameras are of complementary fields covering together all directions in a horizontal plane.
7. The system of claim 3, wherein the shooting means comprise cameras of a common set of cameras all having a same resolution, and circuits for generating images of different resolutions from the common set of cameras.
8. The system of claim 3, comprising a set of cameras of complementary fields covering together all directions in a horizontal plane.
9. The system of claim 3, wherein at least certain cameras have optical systems of the fisheye type, and wherein correction means for correcting the distortions generated by the type of optical system are provided.
Description
(1) An example of implementation of the invention will now be described, with reference to the appended drawings in which the same references denote identical or functionally similar elements throughout the figures.
(11) An “immersive mode” or “FPV” shooting system according to the present invention comprises a drone equipped with a set of shooting cameras, and ground equipment communicating with the drone through a wireless link of suitable range and comprising virtual reality glasses, provided with means for rendering in front of the user's eyes images giving him the feeling of flying on board the drone in the most realistic manner possible.
(12) The cameras equipping the drone are wide-field cameras, such as cameras with an optical system of the fisheye type, i.e. provided with a hemispherical-field lens covering a field of about 180°.
(13) In a first embodiment of the invention, and with reference to
(14) A first camera or main camera 110 has a fisheye-type lens, whose sight axis A10 is directed forward. It has a large sensor with a significant number of pixels, typically from 8 to 20 Mpixels with current technologies.
(15) As illustrated in
(16) The drone further has two auxiliary cameras 121, 122, pointing in the present example to each of the two sides of the drone as shown in
(17) As will be seen hereinafter, the two auxiliary sensors may have any orientation, provided that their respective sight axes are essentially opposite to each other, one aim in particular being that the junction between the two images, each covering about half a sphere, is located at the least obtrusive place for the sight and/or for the processing operations to be performed. It may moreover be provided not to cover the angular area located close to the upward vertical, which is the least useful. For example, the part of the image above 70° upward may be neglected.
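Purely by way of illustration (the following sketch is not part of the patent text; it merely quantifies the remark above), the fraction of the sphere lost by neglecting the image above 70° upward follows from the area of the spherical cap above that elevation:

```python
import math

# A spherical cap above elevation e covers a fraction (1 - sin e) / 2
# of the full sphere; at e = 70 degrees this is only about 3%.
def cap_fraction(elevation_deg):
    return (1 - math.sin(math.radians(elevation_deg))) / 2

print(f"{cap_fraction(70):.1%} of the sphere is neglected")  # about 3.0%
```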
(18) From this set of cameras, the electronic circuitry of the drone transmits to the ground station two images intended to be processed, as will be seen, for a combined rendering on the ocular displays of a pair of virtual reality glasses.
(19) A first image P1, called the “viewpoint” image, is the one captured by the camera 110, reflecting the viewpoint of a virtual pilot of the drone, and a second image P2, called the “bird's eye” image, is the one, combined by the on-board electronic circuitry, that comes from the two lateral cameras 121 and 122. The first image is a full-definition image corresponding to a limited field of view, whereas the second image is an image of lower resolution over a field of view of 360° horizontally, and 360° or slightly less vertically.
(20) It will be noted that the focal lengths of the three cameras are all the same, so that a superimposition of the two images can be performed with no anomaly.
(22) It will be noted that the invention may be advantageous in a case where a VGA image (640×480) is used for the “viewpoint” area of about 90°×90° of field of view, and another VGA image is used for the remainder (field of view of 360°×360°).
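Purely for illustration (this Python sketch is not part of the patent text), the difference in angular resolution between the two VGA images of the example above can be made explicit:

```python
# Comparing the angular resolution of the "viewpoint" image and the
# "bird's eye" image when both use a VGA (640x480) sensor, as in the
# example hereinabove.

def angular_resolution(px_width, px_height, fov_h_deg, fov_v_deg):
    """Pixels per degree, horizontally and vertically."""
    return px_width / fov_h_deg, px_height / fov_v_deg

# "Viewpoint" image: VGA over a 90 x 90 degree field.
vp_h, vp_v = angular_resolution(640, 480, 90, 90)

# "Bird's eye" image: VGA over the full 360 x 360 degree field.
be_h, be_v = angular_resolution(640, 480, 360, 360)

print(f"viewpoint : {vp_h:.2f} x {vp_v:.2f} px/deg")   # ~7.11 x 5.33
print(f"bird's eye: {be_h:.2f} x {be_v:.2f} px/deg")   # ~1.78 x 1.33
```

The “viewpoint” image is thus about four times finer per degree, which is why rendering the “bird's eye” image alone would cause the abrupt drop in definition discussed further on.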
(23) The different steps implemented in a system according to the invention to obtain, in the virtual reality glasses, an image reflecting an experience of the FPV type will now be described.
Generation and Transmission of the “Viewpoint” Image P1
(24) This step implements the following operations: 1) a sensor (accelerometer or other) equipping the glasses measures the movements of the user's head; 2) the information of position of the user's head is periodically sent to the circuitry of the drone from that of the ground station via the wireless communication channel, with a rate typically corresponding to that of the images to be rendered, for example at least 30 times per second; 3) on board the drone, the new sight axis for the “viewpoint” image is defined as a function of said head position information received; 4) each image captured by the camera is cropped as a function of this sight axis to generate the image P1; 5) in the circuitry of the drone, this image is, if necessary, reprocessed so as to compensate for the distortion induced by the fisheye lens (such a processing is known per se and will not be described in more detail); 6) the so-reprocessed image is coded, preferably with compression, with a suitable algorithm; 7) the compressed image is transmitted to the ground station via the wireless communication channel.
(25) These operations are repeated, for example, at a rate of 30 images per second, each time with an update of the sight axis A10 of the camera 110 and the corresponding cropping.
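Steps 3) and 4) hereinabove (defining the new sight axis from the head position information and cropping each captured image accordingly) may be sketched as follows; this Python fragment is purely illustrative, the function name, the equirectangular pixel layout and the field-of-view values being assumptions and not part of the patent:

```python
# Illustrative sketch: crop a "viewpoint" window out of a wide-field
# frame, centred on the sight axis requested by the user's head.
# The frame is assumed already mapped to an equirectangular grid
# (rows = elevation, columns = azimuth) spanning `frame_fov_deg`.

def viewpoint_frame(frame, head_yaw_deg, head_pitch_deg,
                    fov_deg=90, frame_fov_deg=180):
    """Return a window of `fov_deg` around the requested sight axis."""
    h, w = len(frame), len(frame[0])
    px_per_deg_x = w / frame_fov_deg
    px_per_deg_y = h / frame_fov_deg
    # Centre of the crop, shifted by the head orientation.
    cx = w / 2 + head_yaw_deg * px_per_deg_x
    cy = h / 2 - head_pitch_deg * px_per_deg_y
    half_w = int(fov_deg * px_per_deg_x / 2)
    half_h = int(fov_deg * px_per_deg_y / 2)
    x0 = max(0, int(cx) - half_w)
    y0 = max(0, int(cy) - half_h)
    return [row[x0:x0 + 2 * half_w] for row in frame[y0:y0 + 2 * half_h]]
```

In the actual system this cropping runs on board the drone, after the fisheye-distortion compensation mentioned in step 5).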
(26) It will be noted herein that, as a variant, the camera 110 could be made movable by actuators so as to adjust its physical sight axis in response to the head position information.
(27) According to another variant, the viewpoint image may, in the same way as the “bird's eye” image, be generated by combining the images captured by two or several cameras oriented differently.
Generation and Transmission of the “Bird's Eye” Image P2
(28) This step implements the following operations: 1) two images are acquired by means of two lateral cameras 121 and 122; 2) the two images are combined into a single image by the on-board electronic circuitry; 3) the so-combined image is coded, preferably with compression, with a suitable algorithm; 4) the compressed image is transmitted to the ground station via the wireless communication channel.
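Step 2) hereinabove (combining the two lateral images into a single image) can be sketched, purely for illustration, as a row-by-row juxtaposition of two half-panoramas; the function name and the list-of-rows representation are assumptions and not part of the patent:

```python
# Minimal sketch: the two lateral images, each covering roughly half
# the sphere, are stitched side by side into one "bird's eye"
# panorama before encoding and transmission.

def combine_lateral(left_img, right_img):
    """Juxtapose two equirectangular half-panoramas row by row."""
    assert len(left_img) == len(right_img), "images must share height"
    return [l_row + r_row for l_row, r_row in zip(left_img, right_img)]
```

A real implementation would additionally blend the seam between the two halves, precisely the junction whose placement is discussed hereinabove.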
Processing of the Images in the Ground Station
(29) As long as the user does not move his head, the ground station displays in the glasses the “viewpoint” images P1 in high definition, streamed from the drone. A framerate of 30 images per second is herein possible because no cropping processing of the image captured by the camera 110 is required.
(30) But when the user turns his head, the steps of the hereinabove process of generation and transmission of the “viewpoint” image require a processing time that, as explained hereinabove, is incompatible with the sought framerate, due to the cropping operations to be performed on each individual image P1.
(31) According to the invention, the electronic circuitry of the station constructs, for the needs of the transition (i.e. until the user's head is still again), transition images from the most recent images P1 and P2 available at that time in the ground station, together with the coordinates of the virtual reality glasses. These transition images are created from data fully available on the ground station and glasses side, by combination and cropping operations as just described. Given that no transfer via the communication channel is required, and that only a cropping and a refreshing of the display have to be performed, the latency of this operation may be extremely low.
(32) It is understood that the wider-field image P2 coming from the lateral cameras 121, 122 could simply be cropped on the ground and rendered to ensure the sought transition. But this image P2 has a lower definition than the “viewpoint” image normally generated with the camera 110, and such a solution would cause an abrupt drop in resolution of the whole image at the rendering of the transition.
(33) To avoid this phenomenon, a combination of images is performed with the graphical processor equipping the ground station.
(34) Hence,
(35) This combination of image fractions is hence performed by simple cropping and juxtaposition operations, the angular amplitude of the lateral rotation of the head being known, and the correspondence between the reference frame of the image P1 and that of the image P2 being also known by construction.
(36) Hence, the greatest part of the image rendered to the user remains in high definition, and only a marginal part of the image is in lower definition, and that only temporarily as long as the orientation of the user's head is not stabilized.
(37) It is to be noted that the rate at which these cropping/juxtaposition operations are performed may be decoupled from the rate of reception of the high-definition “viewpoint” images (typically 30 images per second with current technologies), and may be higher. In particular, the virtual reality glasses are able to perform a rendering at a rate of 75 images per second or more, and since the above-mentioned cropping/juxtaposition operations are light enough in terms of graphical processor load, this rate is reachable.
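The decoupling of rates described hereinabove can be illustrated, outside the patent text, by a simple schedule showing how a 75 Hz display re-uses frames received at 30 Hz; the function name and values are assumptions:

```python
# For each display refresh, which received frame is the latest one
# available?  At 75 Hz display / 30 Hz stream, most refreshes re-crop
# a frame already held in the ground station.

def refresh_schedule(display_hz=75, stream_hz=30, seconds=1):
    """Index of the latest received frame at each display tick."""
    return [int(t / display_hz * stream_hz) for t in range(display_hz * seconds)]

ticks = refresh_schedule()
print(len(ticks), "renders from", ticks[-1] + 1, "received frames")  # 75 renders from 30 received frames
```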
(38) During the whole period of transition, this generation of juxtaposed images will be performed as a function of the current position of the user's head. Hence,
(39) The implementation of the invention on real images will now be explained with reference to
(40) The image of
(41) As long as the user keeps his head straight, the image rendered in the virtual reality glasses is the “viewpoint” image P1 alone, whose outlines CO are illustrated in
(42) When the user turns his head, in the present example to the left, the outline of the image effectively rendered to the user (herein denoted CO′) is shifted to the left, and it is understood that the image rendered to the user actually corresponds to a combination of a fraction P′1 of the image P1, deprived of a band of determined width on the right, and of a fraction P′2 of the image P2 located immediately to the left of the image P1 (the frontier between these two image fractions being indicated in dotted line in
(43) Of course, the left part of the image has a lower definition, but it has the advantage of being immediately available to be combined with the truncated image P1, then displayed. The result is an imperceptible latency and an image quality that remains perfectly acceptable to the user.
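The combination just described can be illustrated, again outside the patent text, by computing the widths of the high-definition and low-definition bands as a function of the head rotation; the pixel-per-degree figures are the approximate VGA values from the example hereinabove, and all names are assumptions:

```python
# Illustrative sketch: when the head rotates by `yaw_deg`, the
# rendered window is the current "viewpoint" image shorn of a band on
# one side, completed by the matching band cropped (and upscaled)
# from the low-resolution "bird's eye" image.

def transition_bands(yaw_deg, vp_fov_deg=90, vp_px_per_deg=7.1,
                     be_px_per_deg=1.8):
    """Widths in pixels of the high- and low-definition parts."""
    shift = min(abs(yaw_deg), vp_fov_deg)
    hi_def_px = round((vp_fov_deg - shift) * vp_px_per_deg)
    lo_def_src_px = round(shift * be_px_per_deg)  # before upscaling
    return hi_def_px, lo_def_src_px
```

For a small rotation, almost all of the rendered image therefore still comes from the high-definition image P1, which is the point made in paragraph (36).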
(44) Of course, the head rotation information is sent to the electronic circuitry of the drone to accordingly adjust the sight axis A10 of the camera 110, and to deliver to the ground station the new view generated by the camera 110, which will then be rendered in the glasses as long as the movements of the user's head are slow enough to allow good fluidity without having to perform the cropping/combination processing operations described.
(46) Firstly, it is to be noted that, instead of providing dedicated cameras for the “viewpoint” and the “bird's eye” functions, a same camera can participate both to the generation of a “viewpoint” image and to the generation of a “bird's eye” image.
(47) In this case, the camera 110, which was unique in the embodiment described up to now, may be replaced by a set of cameras. Hence,
(48) The circuit for wireless communication with the ground station receives at its receiving part 151 the user's head position information, absolute or relative, and applies this information to the “viewpoint” image generation circuit so as to adjust the sight axis of the image P1 to be rendered, herein by digital processing of a combined image whose field of view is wider than that actually rendered.
(49) The images coming from the circuits 130 and 140 are sent to the ground station by the emitting part 152 of the wireless communication circuit.
(50) It will be noted that, for the implementation of the invention, these images may be sent either separately or in a combined manner.
(52) A device 220 for acquiring the user's head rotation, which conventionally equips virtual reality glasses to which the ground station is connected, delivers this information to a graphic processing circuit 230 adapted to perform the cropping operations (the juxtaposition being in this case already made by the incrustation by the circuit 210, as explained), as a function of the position information received from the glasses.
(53) It will be noted herein that, in practice, the circuits 210 and 230 may be constituted by one and the same circuit.
(54) In parallel, the emitting part 252 of the wireless communication circuit of the ground station sends to the drone the user's head position information for updating the sight axis of the “viewpoint” image P1, as explained hereinabove.
(55) With reference now to
(56) In
(58) Finally,
(59) Of course, the present invention is not limited in any way to the embodiments described and shown, and the one skilled in the art will know how to make many variants and modifications of them. It applies to drones of various types, for inspection, leisure or other purposes, hovering or not. It also applies to various types of virtual reality glasses, with on-board or remote electronic circuitry.