SITUATION AWARENESS SYSTEM AND METHOD FOR SITUATION AWARENESS IN A COMBAT VEHICLE
20170310936 · 2017-10-26
Assignee
Inventors
CPC classification
H04N7/181
ELECTRICITY
G06F3/011
PHYSICS
G09G2300/026
PHYSICS
G06T3/4038
PHYSICS
International classification
H04N7/18
ELECTRICITY
G06T3/40
PHYSICS
Abstract
The invention relates to a system (1) for situation awareness in a combat vehicle (2), comprising a plurality of image-capturing sensors (3A-3E) configured to record image sequences showing different partial views (V.sub.A-V.sub.E) of the surroundings of the combat vehicle, and a plurality of client devices (C1-C3), each of which is configured to show a view (V.sub.P) of the surroundings of the combat vehicle, desired by a user of the client device, on a display (D1-D3). The image-capturing sensors are configured to be connected to a network (4) and to send said image sequences over said network by means of a technique in which each image sequence sent by an image-capturing sensor can be received by a plurality of receivers, such as multicast. The client devices are also configured to be connected to said network and to receive, via said network, at least one image sequence recorded by at least one image-capturing sensor (3A-3E). Further, each client device is configured to generate, on its own, said desired view from the at least one image sequence by processing images from the at least one image sequence, and to provide for display of the desired view on said display.
Claims
1. A system for situation awareness in a combat vehicle, the system comprising: a plurality of image-capturing sensors, each configured to record an image sequence showing a partial view of surroundings of the combat vehicle, and a plurality of client devices, each configured to show a view of the surroundings of the combat vehicle, desired by a user of the client device, on a display, wherein the image-capturing sensors are configured to be connected to a network and to send said image sequences over said network by a technique in which each image sequence sent by an image-capturing sensor can be received by a plurality of receivers, and each of said client devices is configured to be connected to said network and to receive, via said network, at least one image sequence recorded by at least one image-capturing sensor, and to generate, on its own, said desired view from said at least one image sequence by processing images from said at least one image sequence, and to provide for showing of the desired view on said display.
2. The system according to claim 1, wherein at least one of the client devices is configured to receive a plurality of image sequences recorded by different image-capturing sensors, merge images from the received image sequences into a merged image comprising image information recorded by different image-capturing sensors and display, on said display, the merged image or part thereof as said desired view.
3. The system according to claim 2, wherein said merged image is a panoramic image.
4. The system according to claim 1, wherein at least one of the client devices is configured to receive an indication from a user of the client device on said desired view, and to request and receive only the image sequences required to generate said desired view, based on said indication.
5. The system according to claim 4, wherein said at least one client device is configured to request and receive at most three and preferably only one or two image sequences to generate said desired view.
6. The system according to claim 1, further comprising a network switch via which the image-capturing sensors are connected to the client devices, wherein the client devices are configured to send requests to the network switch for selected image sequences to be sent in order to generate said desired view, wherein the network switch is configured to selectively communicate the requested image sequences from the different image-capturing sensors to the different client devices, based on said requests.
7. The system according to claim 6, wherein at least one of the client devices or a component connected thereto comprises a direction sensor, wherein the client device is configured to base said request for selected image sequences to be sent on a current direction of the client device or the component connected thereto.
8. The system according to claim 1, wherein said network is an Ethernet network.
9. The system according to claim 1, wherein the image-capturing sensors are video cameras.
10. The system according to claim 1, wherein said system does not comprise any image processing hardware which modifies the image sequences from the time when they are sent by the image-capturing sensors until they are received by the client devices.
11. The system according to claim 1, wherein said client devices are constituted by general-purpose computers without special-purpose video processing cards, special-purpose plug-in cards, or other special-purpose hardware for processing the image sequences, usually not found in general-purpose computers.
12. A combat vehicle characterized in that it comprises a system according to claim 1 for providing situation awareness for vehicle operators inside the combat vehicle.
13. A method for situation awareness in a combat vehicle, comprising the steps of: recording a plurality of image sequences showing partial views of surroundings of the combat vehicle by a plurality of image-capturing sensors, displaying, on each of a plurality of displays associated with a respective client device of a plurality of client devices, a view of the surroundings of the combat vehicle, desired by a user of the client device, and sending the image sequences from the image-capturing sensors over a network of the combat vehicle by a technique in which each image sequence can be received by a plurality of receivers, wherein in each of said plurality of client devices the method further comprises the steps of: receiving, over said network, at least one image sequence recorded by at least one image-capturing sensor; generating, from said at least one image sequence, said desired view by processing images in said at least one image sequence, and displaying the desired view on the display associated with the client device.
14. The method according to claim 13, comprising the steps of receiving, in at least one of said plurality of client devices, a plurality of image sequences recorded by different image-capturing sensors, processing the images by merging images from the different image sequences to a merged image comprising image information recorded by different image-capturing sensors, and displaying, on said display, the merged image or part thereof as said desired view.
15. The method according to claim 14, wherein merging is performed such that the merged image constitutes a panoramic image.
16. The method according to claim 13, further comprising the steps of receiving, in at least one of said plurality of client devices, an indication from a user of the client device on said desired view, and, by the client device, requesting and receiving only the image sequences required to generate said desired view, based on said indication.
17. The method according to claim 16, wherein the step of requesting and receiving only the image sequences required to generate said desired view by the client device involves requesting and receiving at most three and preferably only one or two image sequences in order to generate said desired view.
18. The method according to claim 13, further comprising the steps of: connecting the image-capturing sensors and the client devices to each other via a network switch of the network; sending, from the respective client device, requests to the network switch for selected image sequences to be sent in order to generate said desired view; by the network switch, selectively communicating the requested image sequences from the image-capturing sensors to the client devices, based on said requests.
19. The method according to claim 18, further comprising the steps of: registering a direction of the respective client device or a component connected to the respective client device, and sending, from the respective client device, said request for selected image sequences to be sent in order to generate said desired view based on said direction.
20. A computer program stored on a non-transitory storage medium for providing situation awareness in a combat vehicle comprising a plurality of image-capturing sensors configured to record image sequences showing respective partial views of surroundings of the combat vehicle, the computer program comprising: program code which when executed by a processor in a client device causes the client device to show a view of the surroundings of the combat vehicle, desired by a user of the client device, on a display, and program code which when executed by said processor causes the client device to, via a network of the combat vehicle over which the image-capturing sensors send said image sequences by a technique in which each image sequence can be received by a plurality of receivers: receive at least one image sequence recorded by at least one image-capturing sensor; generate, based on said at least one image sequence, said desired view by processing images from said at least one image sequence, and display the desired view on said display.
21. A computer program product comprising the computer program according to claim 20, wherein the non-transitory storage medium comprises non-volatile memory.
Description
DESCRIPTION OF FIGURES
[0060] The present invention will be better understood by reference to the following detailed description when considered together with the accompanying drawings, in which the same reference numerals refer to the same parts in the different views.
DETAILED DESCRIPTION OF THE INVENTION
[0066] By “merging images” is meant a process in which a new image is generated by merging together two or more original images, wherein the new image comprises image information from each of the merged original images.
[0067] With “panoramic view” is meant a wide-angle view that comprises more image information than can be recorded by a single image-capturing sensor. Thus, a panoramic image is a wide-angle image created by merging a plurality of images recorded by different image-capturing sensors, merged in such a way that the panoramic image shows a larger field of view than any of the individual images does on its own.
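The merging of images into a panoramic image defined above can be sketched as follows. This is a minimal illustration only, assuming images represented as nested lists of pixel values and a known, fixed column overlap between adjacent sensors; the function name and sample data are hypothetical, and a real stitcher would estimate the overlap and blend or warp the images.

```python
def merge_horizontal(left, right, overlap):
    """Merge two equally tall images (lists of pixel rows) into one
    wider image, assuming a known column overlap between them. In the
    overlap region the right image's pixels are kept; a real
    implementation would blend and geometrically align instead."""
    merged = []
    for row_l, row_r in zip(left, right):
        # keep the non-overlapping part of the left row, then the whole right row
        merged.append(row_l[:len(row_l) - overlap] + row_r)
    return merged

# two tiny 2x4 / 2x3 "images" whose last/first column coincides
left = [[1, 2, 3, 4], [5, 6, 7, 8]]
right = [[4, 9, 9], [8, 9, 9]]
pano = merge_horizontal(left, right, overlap=1)
# pano is 2 rows x 6 columns: wider than either input image
```

The resulting image contains image information from both originals, which is the defining property of a merged (panoramic) image in the sense used here.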
[0068] With simultaneous reference to
[0069] The situation awareness system 1 is configured to be integrated in the combat vehicle 2. Herein, the combat vehicle 2 is described as a land vehicle, such as a tank, but it should be noted that the system can also be realised and implemented in a watercraft, such as a surface vessel, or an airborne vehicle, such as e.g. a helicopter or an airplane.
[0070] The system 1 comprises a sensor device 3 comprising a plurality of image-capturing sensors 3A-3E, each arranged to record an image sequence showing at least a part of the surroundings of the combat vehicle during operation.
[0071] The image-capturing sensors 3A-3E may be digital electro-optical sensors, each comprising at least one electro-optical sensor for capturing image sequences in the form of still-image sequences and/or video sequences.
[0072] The image-capturing sensors 3A-3E may be digital cameras or video cameras configured to record images within the visual and/or infrared (IR) range. They may also be constituted by image intensifiers configured to record images in the near-infrared (NIR) range.
[0073] The image-capturing sensors 3A-3E may be arranged on the exterior of the combat vehicle 2 or in the interior of the combat vehicle 2 protected by transparent, protective material through which recording of image sequences is performed.
[0074] The image-capturing sensors 3A-3E are preferably aligned relative to each other so that the image-capturing areas of the different sensors, i.e. the partial views referred to as V.sub.A-V.sub.E in
[0075] Further, the system 1 comprises a plurality of client devices C1-C3, each associated with a screen or display D1-D3, which may be integrated in or connected to the client device. The client devices are configured to receive image sequences from the image-capturing sensors 3A-3E, preferably one or two image sequences at a time, and to process and, if necessary, merge images from the different image sequences for display on the display D1-D3 associated with the client device, as will be described in more detail below.
[0076] For this purpose, the client devices C1-C3 comprise a data processing device or processor P1-P3 and a digital storage medium or memory M1-M3. It should be realized that the actions or method steps referred to herein as being performed by a client device C1-C3 are performed by the processor P1-P3 of the client device through execution of a certain part, i.e. a certain program code sequence, of a computer program stored in the memory M1-M3 of the client device.
[0077] In one embodiment, the client devices are constituted by standard computers in the sense that they do not comprise any special-purpose hardware for processing the received image sequences. The client devices may for example be constituted by laptop or desktop personal computers or smaller portable computing devices, such as tablet computers. In
[0078] The client devices C1-C3 and the image-capturing sensors 3A-3E are all connected to a network 4 of the combat vehicle 2. In a preferred embodiment, the network is an Ethernet network, preferably a Gigabit Ethernet network (GigE). The client devices C1-C3 are connected to the image-capturing sensors 3A-3E over said network 4 via a network switch 5, typically in the form of an Ethernet switch.
[0079] The image-capturing sensors 3A-3E are configured to record image sequences showing a respective partial view V.sub.A-V.sub.E of the surroundings of the combat vehicle, and to send these image sequences over said network 4 by means of a technique (e.g. multicast technique) which enables a plurality of receivers to be reached by a certain image sequence even if said image sequence is sent only once by an image-capturing sensor 3A-3E. Each client device C1-C3 is in turn configured to receive, via said network 4, one or more image sequences showing different partial views V.sub.A-V.sub.E of the surroundings of the combat vehicle and to generate, on its own, a desired view by processing the images from the received image sequence(s), and to provide for display of the desired view on said display D1-D3.
[0080] In the exemplary embodiment shown in
[0081] In this embodiment, the image-capturing sensors 3A-3E are constituted by digital network video cameras configured to record the image sequences, which thus constitute video streams depicting the different partial views V.sub.A-V.sub.E of the surroundings of the combat vehicle. More specifically, in this embodiment, the image-capturing sensors 3A-3E are constituted by Ethernet video cameras with multicast functionality, which means that the video cameras 3A-3E are connected to the Ethernet network 4 and are configured to send each recorded image sequence by means of a technique whereby each image sequence, although sent only once, can be received by a plurality of receivers, i.e. client devices.
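The sending side described above can be sketched with standard IP multicast sockets. This is a minimal illustration, not the patented implementation: the group addresses, port number and function names are assumptions made for the example, and the traffic is deliberately confined to the loopback interface so the sketch is self-contained.

```python
import socket
import struct

# hypothetical multicast group addresses, one per video camera
CAMERA_GROUPS = {"3A": "239.0.0.1", "3B": "239.0.0.2"}
MCAST_PORT = 5004  # assumed port

def make_sender(ttl=1):
    """Create a UDP socket configured for multicast sending; TTL 1 keeps
    the traffic on the local (vehicle) network. For this self-contained
    sketch the loopback interface is selected as the outgoing interface."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton("127.0.0.1"))
    return sock

def send_frame(sock, camera_id, frame_bytes):
    """Send one encoded frame exactly once; every client that has joined
    the camera's group address receives its own copy from the network."""
    group = CAMERA_GROUPS[camera_id]
    sock.sendto(frame_bytes, (group, MCAST_PORT))
    return group

sender = make_sender()
group = send_frame(sender, "3A", b"frame-0-bytes")
```

The key property is that the camera sends each frame once, to a group address; duplication toward multiple receivers happens in the network, not in the sender.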
[0082] Furthermore, the client devices C1-C3 of this embodiment comprise a respective direction sensor S1-S3 configured to sense a current direction of the direction sensor and thus the direction of the client device or the component of which the direction sensor forms a part. This enables a user of a client device C1-C3 to indicate a desired view of the surroundings of the combat vehicle by directing the client device or a component attached thereto, comprising the direction sensor S1-S3, in the direction the user desires to “see”. As illustrated in
[0083] From the above description it should be understood that the observation system 1 typically comprises an MMI (man-machine interface) configured to allow the user to indicate a desired view by indicating, via said MMI, a direction in which the user wants to see the surroundings of the combat vehicle, and that such an MMI can be designed in several different ways. Thus, it should be understood that the observation system 1 of the present disclosure is not restricted to any particular one of the possible solutions for providing such functionality.
[0084] When a user of a client device C1-C3 indicates a desired view of the surroundings of the combat vehicle, the client device calculates which one(s) of the partial views V.sub.A-V.sub.E is/are required to generate the desired view.
[0085] In the event that the desired view fits within one of the partial views V.sub.A-V.sub.E, that is, if the image information the operator desires to have displayed on the display D1-D3 corresponds to or is a subset of one of the partial views V.sub.A-V.sub.E, the client device C1-C3 only needs to request and receive an image sequence from a single image-capturing sensor 3A-3E and need not carry out any merging of images. Even in this situation, however, a certain degree of processing of the images comprised in the image sequence is required in order to generate, from those images, the desired view for display on the display D1-D3. For example, the processing may in this case consist of extracting parts of the images, projecting the images or the extracted image parts onto a curved surface and/or rescaling the images or the extracted image parts before they are presented as said desired view on the display D1-D3 associated with the client device C1-C3.
[0086] Thus, the desired view can be generated from an image sequence recorded by a single image-capturing sensor 3A-3E. Advantageously, the client devices C1-C3 are configured to determine, based on an indication of the desired view of the surroundings of the combat vehicle, given by the user of the respective client device by means of, for example, the above-mentioned direction sensors S1-S3, from how many and which of the image-capturing sensors 3A-3E image sequences have to be obtained in order to generate the desired view. Furthermore, the client devices C1-C3 are advantageously configured to request, from the image-capturing sensors, the image sequences, and only the image sequences, required to generate the desired view. This means that the client devices C1-C3 strive, to the extent possible, to generate the desired view from an image sequence recorded by a single image-capturing sensor 3A-3E, and that further image sequences from other image-capturing sensors 3A-3E are requested only if necessary. Nevertheless, for descriptive purposes, it will henceforth be assumed that the view desired by the user requires merging of images from at least two image sequences recorded by different image-capturing sensors 3A-3E, in order to create a panoramic image corresponding to said desired view to be displayed to the user.
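The calculation of which partial views are required for a desired view can be sketched as below. The five-sector camera layout, the function name and the simplified non-wrapping angle handling are assumptions made for the illustration; they are not specified by the description above.

```python
def cameras_for_view(center_deg, view_fov_deg, camera_fovs):
    """Given the desired view's center bearing and field of view, and a
    mapping of camera id -> (start, end) bearing of its partial view in
    degrees (start < end, no wrap-around for simplicity), return the
    cameras whose partial views overlap the desired view, in bearing
    order. Only these image sequences need to be requested."""
    lo = center_deg - view_fov_deg / 2
    hi = center_deg + view_fov_deg / 2
    needed = []
    for cam, (start, end) in sorted(camera_fovs.items(),
                                    key=lambda kv: kv[1][0]):
        if end > lo and start < hi:  # partial view overlaps desired view
            needed.append(cam)
    return needed

# five cameras covering 0-360 degrees in 72-degree sectors (illustrative)
fovs = {"3A": (0, 72), "3B": (72, 144), "3C": (144, 216),
        "3D": (216, 288), "3E": (288, 360)}
needed = cameras_for_view(100, 60, fovs)  # view spans 70..130 degrees
```

With these example numbers the 60-degree view straddles the boundary between sectors A and B, so exactly two image sequences would be requested, consistent with the two-or-at-most-three limit discussed below.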
[0087] As illustrated in
[0088] As mentioned above, the client devices C1-C3 are, however, configured to minimize the number of image sequences used to generate the view desired by the user and, as this usually does not require merging of more than two or at most three image sequences, the client devices C1-C3 are advantageously configured to limit their requests for image sequences from the different video cameras to two or at most three image sequences.
[0090] In the example shown in
[0091] It should also be appreciated that the desired view V.sub.P displayed on the display D1 does not have to comprise the whole of the partial views V.sub.B, V.sub.C, or even an entire partial view. Instead, the desired view displayed on the display D1 typically constitutes a subset of a merged image that the client device C1 generates from the requested and received video streams. For example, the client device C1 can request video streams from the video cameras 3B and 3C, whereupon the client device can receive these video streams and thus the partial views V.sub.B, V.sub.C, generate a merged image corresponding to the view V.sub.P in
[0092] Storing, in the memory M1 of the client device, an image which is larger than the image currently being displayed on the display D1 associated with the client device is advantageous in that it allows for quick updates of the displayed view in response to small changes in the indication of the desired view from the operators, for example caused by small head movements of an operator provided with an integrated helmet direction sensor S1, S2 by means of which the operator indicates the desired view for display on a display, as described above. The fact that the merged and stored image is larger than the image being displayed as the desired view means that there is a certain margin of image information outside the desired and shown view, and image information within this margin can be shown when indicated as desired by the operator, without the need for new calculation-intensive merging of images. For example, the merged image stored in the memory of the client device may correspond to a horizontal field of view of 90 degrees around the vehicle 2, while the desired view being displayed on the display only corresponds to a horizontal field of view of 60 degrees.
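The idea of keeping a merged image wider than the displayed view, so that small direction changes only move a crop window rather than triggering a new merge, can be sketched as follows. The pixel data, function name and the single-row stand-in "panorama" are illustrative assumptions.

```python
def crop_view(pano, pano_fov_deg, view_fov_deg, offset_deg):
    """Extract the displayed view from a wider merged image kept in
    memory. pano is a list of pixel rows spanning pano_fov_deg of
    horizontal field of view; the returned view is view_fov_deg wide,
    shifted offset_deg from the panorama's left edge. A small change
    in indicated direction only changes offset_deg."""
    width = len(pano[0])
    px_per_deg = width / pano_fov_deg
    start = int(offset_deg * px_per_deg)
    end = start + int(view_fov_deg * px_per_deg)
    return [row[start:end] for row in pano]

# one row of 90 columns standing in for a stored 90-degree panorama
pano = [list(range(90))]
view = crop_view(pano, 90, 60, offset_deg=10)
# view covers degrees 10-70 of the stored 90-degree panorama
```

Here the stored 90-degree merged image gives a 30-degree margin, so any 60-degree view with an offset between 0 and 30 degrees can be produced by re-cropping alone.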
[0093] As indicated above, the system 1 is advantageously designed such that each client device C1-C3 is configured to request, based on the desired view as indicated by the user of the client device, the minimal number of image sequences from the video cameras 3A-3E required to generate said desired view. In one embodiment, two is the upper limit for the number of image sequences from different video cameras that may be required and merged by the respective client device. In another embodiment, said upper limit is three. In yet another embodiment, the client devices are configured to allow the users, through user input, to specify an upper limit for the number of image sequences that should be requested and merged based on the indication of desired view by the user. In this way, the maximum number of images that are merged by the client device can, for example, be adapted to personal preferences of the respective user and/or to the calculation capacity of each client device.
[0095] In a first step S11, a first client device “Client device 1” sends a request to the switch for image sequences to be sent from the video cameras 1 and 2. As described above, the client device bases its choice of video cameras on an indication of the desired view for display, received from the user of the client device.
[0096] In a second step S12, a second client device “Client device 2” sends, in the same way, a request to the switch for image sequences to be sent from the video cameras 2 and 3.
[0097] In a third step S13, the switch receives an image sequence from “Video camera 1” and forwards it to the “Client device 1” since this is the only client device that has requested the image sequence.
[0098] In a fourth step S14, the switch receives an image sequence from “Video camera 2”. This is requested by both “Client device 1” and “Client device 2”. Thus, the switch duplicates the image sequence and then sends a respective copy of the image sequence to the two client devices.
[0099] In a fifth step S15, the switch receives an image sequence from “Video camera 3” and forwards it to “Client device 2” since this is the only client device that has requested the image sequence.
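The switch behaviour in steps S11-S15 can be sketched as a simple subscription table. The names and the dictionary representation are assumptions made for the illustration; a real Ethernet switch performs the equivalent forwarding per multicast group in hardware.

```python
def forward(subscriptions, camera_id, frame):
    """Deliver one frame from camera_id to every client whose request
    (subscription) includes that camera, duplicating the frame as
    needed; clients that did not request it receive nothing."""
    return {client: frame
            for client, wanted in subscriptions.items()
            if camera_id in wanted}

subs = {"client1": {"cam1", "cam2"},   # request sent in step S11
        "client2": {"cam2", "cam3"}}   # request sent in step S12
d1 = forward(subs, "cam1", b"f1")  # step S13: only client1 requested it
d2 = forward(subs, "cam2", b"f2")  # step S14: duplicated to both clients
d3 = forward(subs, "cam3", b"f3")  # step S15: only client2 requested it
```

Note that each camera still sends each frame only once; the duplication in step S14 happens at the switch, which is what keeps the per-camera network load independent of the number of viewers.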
[0100] As mentioned above, the network connected video cameras 3A-3E are configured to send the recorded image sequences over the network 4 by means of a technique that allows a plurality of client devices C1-C3 to receive the same image sequence, although this is only sent once by a video camera. In one embodiment, this is accomplished by configuring the network devices, comprised in the Ethernet network 4, for use of IP multicast.
[0101] IP multicast is a well-known technology that is frequently used to stream media over the Internet or other networks. The technology is based on the use of group addresses for IP multicast, and each video camera 3A-3E is advantageously configured to use a specific group address as the destination address of the data packets in which the recorded image sequences are sent. The client devices then use these group addresses to inform the network that they are interested in certain selected image sequences by specifying that they want to receive data packets sent to a specific group address. When a client device informs the network that it wants to receive packets sent to a specific group address, the client device is said to join a group with this group address. In one embodiment, the above-mentioned requests sent from the client devices C1-C3 to the network switch 5 are such join requests, which indicate which video streams the client device wishes to receive and thus which it does not wish to receive.
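A join request of the kind described above can be sketched with the standard socket API: joining a group is done by passing an ip_mreq structure (group address plus local interface address) to the IP_ADD_MEMBERSHIP socket option. The group address and port below are hypothetical values chosen for the example.

```python
import socket
import struct

def membership_request(group_addr, iface_addr="0.0.0.0"):
    """Build the ip_mreq structure passed to IP_ADD_MEMBERSHIP: the
    4-byte group address followed by the 4-byte local interface
    address (0.0.0.0 lets the OS choose the interface)."""
    return struct.pack("4s4s", socket.inet_aton(group_addr),
                       socket.inet_aton(iface_addr))

def join_group(group_addr, port):
    """Open a UDP socket and join the multicast group; issuing the join
    is what tells the network (and an IGMP-aware switch) to start
    delivering packets sent to that group address to this client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group_addr))
    return sock

mreq = membership_request("239.0.0.2")  # hypothetical group for camera 3B
# join_group("239.0.0.2", 5004).recvfrom(65536) would then yield frames
```

Leaving a group (IP_DROP_MEMBERSHIP with the same structure) is the complementary operation by which a client stops receiving streams it no longer needs for the desired view.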
[0103] In a first step, S21, a plurality of image sequences are recorded showing partial views V.sub.A-V.sub.E of the surroundings of the combat vehicle by means of a plurality of image-capturing sensors 3A-3E.
[0104] In a second step, S22, these image sequences are sent over a network 4 comprised in the combat vehicle 2 by means of a multi-receiver technique, i.e. a technique in which each image sequence can be received by a plurality of receivers, such as multicast.
[0105] In a third step, S23, selected image sequences are received in the client devices C1-C3. As mentioned earlier, the client devices C1-C3 are preferably configured to request and receive image sequences from a minimum number of image-capturing sensors 3A-3E, where the image-capturing sensors, and thus the requested image sequences, are selected by the client device based on an indication of the desired view for display, received by the client device from a user thereof.
[0106] In a fourth step S24, each client device creates, on its own, the desired view by processing images from at least one received image sequence and, if more than one image sequence is needed to create the desired view, by merging images from at least two image sequences recorded by different image-capturing sensors. As mentioned above, the desired view is typically but not necessarily a part of a panoramic view created in and by the respective client device by software for generating panoramic images from a plurality of image sequences, which software is stored in the respective client device.
[0107] In a fifth step, S25, each client device displays the desired view on a display D1-D3 associated with the respective client device.
[0108] It has been described that the desired view shown on the client device display can be a merged image composed of images from different image sequences. These images may advantageously be constituted by video stream frames. Thus, it should be understood that in a preferred embodiment, a panoramic video, or part of a panoramic video, generated by merging frames from video streams recorded by the video cameras 3A-3E, is displayed on the displays of the client devices.
[0109] The foregoing description of preferred embodiments of the invention has been provided for illustrative and descriptive purposes. It is not intended to be exhaustive or to limit the invention to the precise embodiments described. Therefore, it should be understood that the invention is intended to encompass all possible embodiments that fall within the scope of the following claims.