IMAGE PROCESSING DEVICE, DISPLAY SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM
20220360753 · 2022-11-10
CPC classification
G09G5/00
PHYSICS
Abstract
A video processing device 10 according to an embodiment outputs a background video 52 producing an induced motion in a visual target 51 on a display surface of a screen 22. The video processing device 10 includes an output unit 13 that outputs the background video 52 surrounding the visual target 51 to a background video output device 21 and a control unit 12 that moves the background video 52 in an opposite direction to a direction in which the visual target 51 is desired to be moved. The background video output device 21 projects the background video 52 to the screen 22.
Claims
1. A video processing device that outputs a video in which movement in a depth direction of a visual target of which movement in the depth direction is fixed on a display surface of a display device is perceived, the video processing device comprising: an output unit, implemented using one or more computing devices, configured to output a video corresponding to a position of the visual target to the display device; and a control unit, implemented using one or more computing devices, configured to move the video in a direction in which the movement of the visual target in the depth direction is perceived.
2. The video processing device according to claim 1, wherein the video is a background video producing an induced motion in the visual target, wherein the output unit outputs the background video surrounding the visual target to the display device, and wherein the control unit moves the background video in an opposite direction to a direction in which the visual target is desired to be moved.
3. The video processing device according to claim 2, wherein the output unit outputs a second background video surrounding the background video, and wherein the control unit moves the second background video in a direction which is the same as a movement direction of the background video and sets a movement amount of the second background video to be greater than a movement amount of the background video.
4. The video processing device according to claim 3, wherein a display aspect of the second background video is a spotlight in which the visual target is illuminated.
5. The video processing device according to claim 2, wherein the control unit causes a movement amount of each portion of the background video to differ based on a movement direction of the visual target.
6. The video processing device according to claim 1, further comprising: a mid-air image output device, implemented using one or more computing devices, configured to display the visual target on a virtual image surface above a display surface of the display device, wherein the video is a shadow of the visual target, wherein the output unit outputs the shadow of the visual target to the display device and outputs a video of the visual target to the mid-air image output device, and wherein the control unit moves the shadow of the visual target to a position at which a depth position of the visual target is desired to be perceived and changes a size and a height of the visual target in accordance with a viewpoint position and the depth position of the visual target.
7. The video processing device according to claim 6, wherein the shadow of the visual target is a shadow extending in a lateral direction at the depth position of the visual target.
8. The video processing device according to claim 7, wherein the output unit outputs a video indicating an irradiation range in an aspect in which an upper portion of the visual target is illuminated in a spotlight in the lateral direction and displays the shadow of the visual target corresponding to the upper portion of the visual target within the video indicating the irradiation range.
9. A display system comprising: a plurality of display devices; and a video processing device including one or more computing devices, wherein each of the plurality of display devices displays a visual target at a position at which projection surfaces above display surfaces of the display devices intersect each other, and wherein the video processing device includes: an output unit, implemented using one or more computing devices, that outputs a background video surrounding the visual target to the display device, and a control unit, implemented using one or more computing devices, that moves the background video in an opposite direction to a direction in which the visual target is desired to be moved.
10. (canceled)
11. A non-transitory recording medium storing a program, wherein execution of the program causes one or more computers of a video processing device to perform operations comprising: outputting a video in which movement in a depth direction of a visual target of which movement in the depth direction is fixed on a display surface of a display device is perceived; outputting a video corresponding to a position of the visual target to the display device; and moving the video in a direction in which the movement of the visual target in the depth direction is perceived.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
First Embodiment
[0048] A display system according to a first embodiment will be described with reference to the drawings.
[0049] A display system 1 illustrated in
[0050] The screen 22 is disposed in parallel to the ground. The background video output device 21 projects a background video to the screen 22. The background video output device 21 may project a video in any direction.
[0051] The optical element 24 is disposed to be tilted at about 45 degrees and the mid-air image output device 23 is disposed above or below the optical element 24. A video output from the mid-air image output device 23 is reflected toward the observer 100 by the optical element 24 to form a mid-air image on the virtual image surface 30. The screen 22 and the optical element 24 are disposed so that the virtual image surface 30 is parallel to a normal direction of the screen 22. By changing a distance d1 between the mid-air image output device 23 and the optical element 24, it is possible to adjust a distance d2 between the optical element 24 and the virtual image surface 30. When the distance d1 becomes short, the distance d2 becomes short. In the embodiment, the mid-air image output device 23 is disposed so that the virtual image surface 30 is near the center of the screen 22. The virtual image surface 30 is not limited to the center of the screen 22 and may be set at any position. The positions of the mid-air image output device 23 and the optical element 24 may be fixed.
[0052] The mid-air image output device 23 and the optical element 24 may be able to display a mid-air image on the upper side of the screen 22 and the present invention is not limited to the foregoing configuration. The visual target may not be necessarily displayed to float in the air and may be displayed to be grounded to a display surface of the screen 22. Alternatively, the screen 22 may be disposed on the upper side and a visual target may be displayed to hang on a background video displayed on the screen 22.
[0053] Instead of displaying the mid-air image by the mid-air image output device 23 and the optical element 24, a transparent screen may be disposed on the screen 22 and a video transmitted through the transparent screen may be a visual target. Alternatively, a real object may be disposed on the screen 22 and the real object may be a visual target. The positions of the transparent screen and the real object may be fixed.
[0054] The video processing device 10 supplies a background video producing an induced motion in the visual target to the background video output device 21. Specifically, the video processing device 10 moves the background video in an opposite direction to the movement direction of the visual target to produce the induced motion in the visual target. The induced motion is a sensory illusion phenomenon giving a motion perception to a stationary object. The background video producing the induced motion is a video surrounding the visual target when the background video is viewed from a viewpoint of the observer 100. In the embodiment, the observer 100 is caused to perceive a floor surface which represents a movement range of the visual target and the visual target which is moving as a background video on the floor surface.
[0057] Under the darkroom condition, as illustrated in
[0058] A configuration of the video processing device 10 will be described with reference to
[0059] The setting unit 11 disposes a visual target object indicating a visual target in a visual space and a floor surface object serving as a background video at initial positions based on a positional relation between the screen 22 and the visual target in the real space. For example, the setting unit 11 disposes the floor surface object so that the visual target object stands near the center of the floor surface object. The floor surface object is a plane figure indicating a movement range of the visual target object.
[0060] The setting unit 11 disposes a background virtual camera that captures a video to be projected to the screen 22 in the virtual space. The background virtual camera images a region including the floor surface object. A video captured by the background virtual camera is projected to the screen 22. When the position of the virtual camera remains fixed and the floor surface object is moved in the virtual space, a background video projected to the screen 22 is moved.
[0061] The setting unit 11 may dispose a visual target virtual camera that images the visual target object. The visual target virtual camera images the visual target object in the lateral direction. The mid-air image output device 23 projects a video captured by the visual target virtual camera to the optical element 24 to display the visual target on the virtual image surface 30.
[0062] The control unit 12 moves the floor surface object based on a movement amount of the visual target object. For example, when the visual target is desired to be moved forward by a distance v, the control unit 12 moves the floor surface object by the distance v in the depth direction. That is, the control unit 12 moves only the floor surface object and does not move the visual target object, the visual target virtual camera, or the background virtual camera. Alternatively, the control unit 12 may leave the floor surface object still and instead move the visual target object, the visual target virtual camera, and the background virtual camera in the same direction by the same movement amount. In either case, when the floor surface object is moved, the position at which the floor surface object appears in the video captured by the background virtual camera is moved.
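The two equivalent update strategies described in this paragraph can be sketched as follows; the function names and the one-dimensional depth coordinate are illustrative assumptions, not part of the specification:

```python
# Sketch of the two equivalent update strategies.  Positions are
# 1-D depth coordinates; all names are illustrative assumptions.

def move_floor_only(floor_pos, target_pos, camera_pos, v):
    """Strategy A: move only the floor-surface object, opposite to the
    direction in which the visual target is desired to move (+v)."""
    return floor_pos - v, target_pos, camera_pos

def move_everything_else(floor_pos, target_pos, camera_pos, v):
    """Strategy B: keep the floor still and move the visual-target
    object and the virtual cameras forward by the same amount."""
    return floor_pos, target_pos + v, camera_pos + v

# The background video depends only on where the floor-surface object
# appears relative to the background virtual camera, so both strategies
# yield the same projected background video.
a = move_floor_only(0.0, 0.0, 0.0, 0.3)
b = move_everything_else(0.0, 0.0, 0.0, 0.3)
rel_a = a[0] - a[2]   # floor relative to camera, strategy A
rel_b = b[0] - b[2]   # floor relative to camera, strategy B
```

Because only the relative position matters, either strategy may be chosen for implementation convenience.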
[0063] When the visual target can be moved freely in the virtual image surface 30, the control unit 12 may move the background video 52 only in the normal direction of the virtual image surface 30. For example, in the example illustrated in
[0064] The output unit 13 outputs a video including the visual target object imaged by the visual target virtual camera to the mid-air image output device 23. The output unit 13 outputs a video including the floor surface object imaged by the background virtual camera to the background video output device 21.
[0065] An operation of the video processing device 10 will be described with reference to the flowchart of
[0066] In step S11, the setting unit 11 disposes the floor surface object at the initial position and disposes the virtual camera that images the floor surface object in the virtual space based on a positional relation between the screen 22 and the visual target in the real space. The setting unit 11 may dispose the visual target object and the visual target virtual camera in the virtual space.
[0067] In step S12, the control unit 12 calculates a movement amount of the floor surface object corresponding to one frame based on a movement amount of the visual target corresponding to one frame and moves the floor surface object in accordance with the calculated movement amount.
[0068] In step S13, the output unit 13 outputs the background video obtained by imaging a plane including the floor surface object using the virtual camera to the background video output device 21. The output unit 13 may output a video obtained by imaging the visual target object using the visual target virtual camera to the mid-air image output device 23.
[0069] The processes of steps S12 and S13 are performed for each frame.
[0070] As described above, the display system according to the embodiment displays the background video 52 surrounding the visual target 51 on the screen 22 and moves the background video 52 in the opposite direction to the direction in which the visual target 51 is desired to be moved, and thus the observer 100 can perceive the visual target 51 as moving in the background video 52.
Second Embodiment
[0071] Next, a display system according to a second embodiment will be described. A configuration of the display system according to the second embodiment is the same as that of the display system according to the first embodiment.
[0072] In general, an induced motion is a phenomenon that arises under a darkroom condition in which the amount of ambient light surrounding the display system and the observer is small. In a real environment, it is difficult to control the amount of light in a facility and completely darken the surroundings of the display system. In some cases, a nearby device shines and is visible to the observer due to light for display of the visual target, illumination light illuminating the visual target, light output by the visual target itself, or the like. As a result, there is concern of the observer perceiving movement of the background video based on a positional relation between the surrounding device and the background video.
[0073] In the second embodiment, as illustrated in
[0074] The video processing device 10 according to the second embodiment includes the setting unit 11, the control unit 12, and the output unit 13 as in the first embodiment.
[0075] The setting unit 11 disposes an induction object surrounding the floor surface object at an initial position in the virtual space in addition to the visual target object and the floor surface object. For example, the setting unit 11 disposes the induction object so that it is displayed as the background video 53 in the aspect of a spotlight illuminating the visual target 51.
[0077] The control unit 12 moves the induction object based on a movement amount of the floor surface object. Specifically, the control unit 12 moves the induction object in the same direction as the movement direction of the floor surface object so that the movement amount of the induction object is greater than the movement amount of the floor surface object. For example, when the movement amount of the floor surface object is v, the movement amount of the induction object is set to 2v. The movement amount of the induction object may be any amount as long as it is greater than the movement amount of the floor surface object.
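The relation between the two movement amounts can be sketched as follows; the function name and the `gain` parameter are illustrative, with the factor 2 taken from the example above:

```python
def background_offsets(target_shift, gain=2.0):
    """Per-frame offsets for the inner background (floor-surface
    object) and the outer induction background, given the desired
    shift of the visual target.  Both move opposite to the target;
    the induction background moves farther (gain > 1, e.g. 2 as in
    the example above).  Names are illustrative assumptions."""
    floor = -target_shift          # opposite direction, amount v
    induction = gain * floor       # same direction as floor, amount 2v
    return floor, induction
```

The larger outer movement keeps the inner background video 52 perceived as still relative to its surroundings, which is what produces the induced motion in the dim environment.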
[0079] Even when the surroundings of the observer and the display system are dim, the observer perceives the background video 52 as still and the visual target 51 as moving because of the contrast between the background videos 52 and 53 when the background videos 52 and 53 are moved separately, as illustrated in
[0080] In the foregoing case, the movement of the background video 53 itself is perceived. Therefore, the background video 53 may be displayed in an aspect in which the observer does not feel discomfort despite the movement. For example, by displaying the background video 53 in the aspect of a spotlight illuminating the visual target 51, the advantageous effect of reducing discomfort about the existence of the background video 53 can be expected.
[0081] The output unit 13 outputs a video including the induction object and the floor surface object imaged by the background virtual camera to the background video output device 21.
[0082] An operation of the video processing device 10 according to the second embodiment is basically similar to the flowchart of
[0083] In step S11, the setting unit 11 disposes the floor surface object and the induction object at initial positions based on the positional relation between the visual target and the screen 22.
[0084] In step S12, the control unit 12 calculates a movement amount of the floor surface object and the induction object corresponding to one frame based on a movement amount of the visual target corresponding to one frame and moves the floor surface object and the induction object based on the calculated movement amount.
[0085] In step S13, the output unit 13 outputs the background video obtained by imaging a plane including the floor surface object and the induction object using the virtual camera to the background video output device 21.
[0086] As described above, the display system according to the embodiment displays the background video 52 surrounding the visual target 51 and the induction background video 53 surrounding the background video 52 on the screen 22 and moves the background videos 52 and 53 in the opposite direction to the direction in which the visual target 51 is desired to be moved, with the movement amount of the induction background video 53 set to be greater than the movement amount of the background video 52. Thus, the observer 100 can perceive the visual target 51 as moving on the background video 52 even in a dim environment.
Third Embodiment
[0087] Next, a display system according to a third embodiment will be described. A configuration of the display system according to the third embodiment is the same as that of the display system according to the first and second embodiments.
[0088] When the movement amount of the background video surrounding the visual target is set to be large in order to move the visual target fast, there is concern of movement of the background video being perceived.
[0089] In the third embodiment, by moving a part of the background video, as illustrated in
[0090] The video processing device 10 according to the third embodiment includes the setting unit 11, the control unit 12, and the output unit 13 as in the first embodiment.
[0091] The setting unit 11 disposes a floor surface object at the initial position in the virtual space as in the first embodiment. The setting unit 11 may dispose an induction object surrounding the floor surface object as in the second embodiment.
[0092] The control unit 12 sets a different movement amount for each portion of the background video 52, that is, the floor surface object, and moves each portion of the background video 52 based on a movement amount of the visual target 51. In the example of
[0093] A movement example of the background video 52 when the background video 52 is rectangular will be described specifically. The floor surface object is rectangular and a circle circumscribing four sides of the rectangle is assumed. The control unit 12 moves the circle in the opposite direction to the movement direction of the visual target 51. At this time, the corners of the floor surface object may be fixed or may be moved with a movement amount less than the movement amount of the circle. The control unit 12 deforms the sides so that the sides of the floor surface object in the movement direction of the visual target 51 come into contact with the circle after the movement. The control unit 12 also performs the same deformation on the facing sides.
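The deformation described above can be sketched under the simplifying assumptions that the rectangle is centered at the origin, the circle passes through the four corners, and only the edge midpoints are deformed to touch the moved circle; the names and the one-dimensional treatment of depth are illustrative, not from the specification:

```python
import math

def deformed_edges(w, h, v):
    """Depth positions of the two deformed edge midpoints after the
    circumscribing circle is moved by v opposite to the visual
    target's movement direction.  (w, h) are the rectangle's
    half-width and half-depth; corners are assumed fixed.  This is an
    illustrative simplification of the deformation in the text."""
    r = math.hypot(w, h)   # radius of the circle through the corners
    cz = -v                # circle center after moving opposite to +v
    front = cz + r         # midpoint of the side in the +depth direction
    back = cz - r          # midpoint of the facing side
    return front, back
```

In this sketch the separation of the two edge midpoints stays equal to the circle's diameter while both shift with the circle, so the bulge of the leading side tracks the opposite-direction movement.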
The sides of the background video 52 may be blurred so that the deformation of the sides of the background video 52 is not conspicuous.
[0095] When the background video 52 is configured as a collection of points, for example, the control unit 12 quickly moves points lying in a direction close to the movement direction of the visual target 51 and slowly moves points lying in a direction different from the movement direction.
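One possible weighting for this behavior, assuming the direction in which a point lies is given as a 2-D unit vector on the floor and alignment is measured with a dot product (an illustrative choice, not specified in the text):

```python
def point_speed(point_dir, move_dir, v_fast, v_slow):
    """Speed assigned to one background point: points lying in a
    direction close to the visual target's movement direction move
    fast, points in other directions move slowly.  Both directions
    are 2-D unit vectors on the floor; names are illustrative."""
    # Alignment in [-1, 1]: 1 = same direction, -1 = opposite.
    align = point_dir[0] * move_dir[0] + point_dir[1] * move_dir[1]
    t = (align + 1.0) / 2.0        # map alignment to [0, 1]
    return v_slow + t * (v_fast - v_slow)
```

Points aligned with the movement direction receive `v_fast`, opposite points receive `v_slow`, and perpendicular points receive the midpoint, giving a smooth falloff of movement amount across the point collection.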
[0096] The output unit 13 outputs the floor surface object imaged by the virtual camera to the background video output device 21.
[0097] An operation of the video processing device 10 according to the third embodiment is basically similar to the flowchart of
[0098] In step S11, the setting unit 11 disposes the floor surface object at an initial position based on the positional relation between the visual target and the screen 22.
[0099] In step S12, the control unit 12 calculates a movement amount of each portion of the floor surface object corresponding to one frame based on a movement amount of the visual target corresponding to one frame and moves each portion of the floor surface object based on the calculated movement amount.
[0100] In step S13, the output unit 13 outputs the background video obtained by imaging a plane including the floor surface object using the virtual camera to the background video output device 21.
[0101] As described above, the display system according to the embodiment moves each portion of the background video 52 by a different movement amount based on the movement direction of the visual target 51 when the visual target 51 is to be displayed as moving fast. Thus, it is possible to inhibit the observer from perceiving the movement of the background video 52.
Fourth Embodiment
[0102] Next, a display system according to a fourth embodiment will be described. A display system according to the fourth embodiment displays a visual target which can be observed in two or more different directions.
[0103] The display system according to the fourth embodiment will be described with reference to
[0104] The video processing device 10 supplies the background video 52 producing an induced motion in the visual target 51 to the background video output device 21. The video processing device 10 may use any of the first to third embodiments when supplying the background video 52.
[0105] In the fourth embodiment, four sets of the mid-air image output devices 23 and the optical elements 24 are provided to project a mid-air image to the upper side of the screen 22 in four different directions. The mid-air image output devices 23 and the optical elements 24 are disposed so that the positions of the virtual image surfaces of the facing devices match each other. Specifically, a virtual image surface 30A formed by the mid-air image output device 23 and the optical element 24 disposed downward in the drawing of
[0106] The mid-air image output devices 23 display the visual target 51 viewed in each direction at a position at which the virtual image surfaces 30A and 30C intersect the virtual image surfaces 30B and 30D. Thus, the whole circumference of the visual target 51 can be observed. The mid-air image output devices 23 and the optical elements 24 may be disposed so that the virtual image surfaces 30A to 30D are parallel to the normal direction of the screen 22 and the virtual image surfaces 30A and 30C cross the virtual image surfaces 30B and 30D at right angles.
[0107] Instead of the optical elements 24, transparent screens may be disposed to correspond to the positions of the virtual image surfaces 30A and 30C and the virtual image surfaces 30B and 30D illustrated in
[0108] The number of directions in which the visual target 51 is projected is not limited to the four directions and may be two or three directions. In any case, the visual target 51 is projected to a position at which projection surfaces intersect each other.
[0109] As described above, the display system according to the embodiment displays the visual target 51 at the position at which the virtual image surfaces 30A to 30D intersect each other on the screen 22, displays the background video 52 on the screen 22, and moves the background video 52 in the opposite direction to the direction in which the visual target 51 is desired to be moved, and thus the visual target 51 can be perceived from the whole circumference as moving on the background video 52.
Fifth Embodiment
[0110] Next, a display system according to a fifth embodiment will be described. A configuration of the display system 1 according to the fifth embodiment is the same as the configuration of the display system 1 illustrated in
[0111] In the display system 1 in
[0112] When the viewpoint of the observer 100 is higher than the position of the visual target, changing the size of the visual target and its display position on the virtual image surface 30 allows the observer 100 to perceive movement of the visual target in the depth direction. Further, by casting a shadow at the foot of the visual target, the absolute position of the visual target on the floor surface can be perceived.
[0113] In the fifth embodiment, by changing the size and position of the visual target and displaying its shadow on the floor surface, movement of the visual target in the depth direction can be perceived. Unlike the first to fourth embodiments, the fifth embodiment does not produce an induced motion. Therefore, the darkroom condition need not be set.
[0114] The video processing device 10 according to the fifth embodiment includes the setting unit 11, the control unit 12, and the output unit 13 as in the first embodiment.
[0115] The setting unit 11 disposes a visual target object indicating a visual target and a floor surface object below the visual target object at the initial positions in the virtual space based on a positional relation between the screen 22 and the virtual image surface 30 (the visual target) in the real space. The setting unit 11 disposes a parallel light source above the visual target object to illuminate the visual target object from the upper side. The shadow of the visual target is cast on the floor surface object by the parallel light source. When the visual target object is moved in the virtual space, the shadow is also moved.
[0116] The setting unit 11 disposes a background virtual camera that captures a video to be projected to the screen 22 in the virtual space. The background virtual camera images the floor surface object including the shadow displayed on the floor surface object. A video captured by the background virtual camera is projected to the screen 22.
[0117] The setting unit 11 may dispose a visual target virtual camera that images the visual target object in the virtual space. The positional relation between the virtual camera and the visual target object in the virtual space is equal to the positional relation between the viewpoint of the observer 100 in the real space and the visual target on the virtual image surface 30, and the projection method is set to a perspective projection method.
[0118] The control unit 12 moves the visual target object in the virtual space. The shadow of the visual target object is moved in accordance with the position of the visual target object. In a video captured by the background virtual camera, the shadow of the visual target object is moved in accordance with the position of the visual target object in the virtual space. For the visual target object captured by the visual target virtual camera, the size and the position in the captured video are changed in accordance with the movement amount in the depth direction by the perspective projection method.
[0119] The output unit 13 outputs a video including the visual target object imaged by the visual target virtual camera to the mid-air image output device 23 and outputs a video including the floor surface object imaged by the background virtual camera and the shadow to the background video output device 21.
[0123] When the visual target object is moved in the depth direction in the virtual space, the shadow of the visual target object displayed on the floor surface object is also moved in the depth direction. As illustrated in
[0124] The visual target virtual camera images the visual target object by the perspective projection method. Therefore, when the visual target object is moved in the depth direction, the visual target 51 is displayed on the virtual image surface 30 with a size and at a height in accordance with a viewpoint position of the observer 100 and a depth position of the visual target 51 desired to be perceived by the observer 100.
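The size and height produced by the perspective projection can be sketched with a simple pinhole model; the function and parameter names are illustrative assumptions, not from the specification:

```python
def displayed_size_and_foot(true_size, eye_height, plane_depth, target_depth):
    """Apparent size and foot height of the visual target on the fixed
    virtual image surface, for a target standing on the floor at
    `target_depth` from the eye, viewed from `eye_height` above the
    floor (pinhole / perspective model; names are illustrative).
    When target_depth == plane_depth the target is drawn life-size
    with its foot on the floor."""
    scale = plane_depth / target_depth
    size = true_size * scale
    # The ray from the eye to the target's foot (floor level) crosses
    # the image plane at this height above the floor:
    foot = eye_height * (1.0 - scale)
    return size, foot
```

In this model, moving the target deeper shrinks it and raises its foot on the virtual image surface, matching the description that both the size and the height change with the viewpoint position and the desired depth position.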
[0126] An operation of the video processing device 10 will be described with reference to the flowchart of
[0127] In step S21, the setting unit 11 disposes the visual target object and the floor surface object at the initial positions in the virtual space and disposes a parallel light source above the visual target object based on a positional relation between the screen 22 and the visual target in the real space. The setting unit 11 disposes a virtual camera that captures the visual target to correspond to a viewpoint position of the observer 100 and disposes a virtual camera that captures a floor surface object in the virtual space.
[0128] In step S22, the control unit 12 moves the visual target object in the virtual space. In the virtual space, a shadow is displayed immediately below the visual target object.
[0129] In step S23, the output unit 13 outputs a video including the visual target object imaged by the visual target virtual camera to the mid-air image output device 23 and outputs a video including the floor surface object imaged by the background virtual camera and the shadow to the background video output device 21. The visual target 51 is displayed on the virtual image surface 30, and the floor surface and the shadow 62 are displayed on the screen 22.
[0130] The processes of steps S22 and S23 are repeatedly performed for each frame.
[0131] Instead of a parallel light source, a spotlight may be disposed above the visual target object. In this case, as illustrated in
[0132] When the visual target object is moved in the depth direction in the virtual space, the spotlight is also moved with the movement of the visual target object. When the visual target object is within the irradiation range of the spotlight, the spotlight may not be moved. The shadow of the visual target object displayed on the floor surface object is also moved in the depth direction. As illustrated in
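The conditional movement of the spotlight can be sketched in one depth dimension as follows (the function and its names are illustrative assumptions):

```python
def update_spotlight(spot_z, target_z, radius):
    """Move the spotlight only when the visual-target object leaves
    its irradiation range (1-D depth sketch; names are illustrative).
    While the target stays within the lit radius, the spotlight need
    not move; otherwise it re-centers on the target."""
    if abs(target_z - spot_z) <= radius:
        return spot_z              # target still lit: spotlight stays
    return target_z                # otherwise follow the target
```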
[0133] When the visual target object is moved in the depth direction, the visual target object is captured with a size and at a position different from the state of
[0137] As described above, the video processing device 10 according to the embodiment disposes the visual target object and the floor surface object at the initial positions in the virtual space based on the positional relation between the screen 22 and the virtual image surface 30 in the real space, disposes the parallel light source illuminating the visual target object, and disposes the background virtual camera that captures the video projected to the screen 22 and the visual target virtual camera that images the visual target object. The video processing device 10 moves the shadow 62 to the position at which the depth position of the visual target 51 is desired to be perceived with the movement of the visual target object and changes the size and the height of the visual target 51 in accordance with the viewpoint position of the observer 100 and the depth position of the visual target 51. Thus, the movement of the visual target 51 in the depth direction can be perceived on the screen 22.
Sixth Embodiment
[0138] Next, a display system according to a sixth embodiment will be described. The sixth embodiment differs from the fifth embodiment in that a light source is disposed obliquely above the visual target object in the lateral direction in the virtual space. The rest is the same as in the fifth embodiment.
[0139] In the fifth embodiment, it is assumed that the observer 100 views the visual target 51 from the front of the screen 22. When the observer 100 moves to the left or right of the front, or when a plurality of observers 100 are lined up in the left-right direction, the visual target 51 and the shadow 62 appear separated, which looks unnatural.
[0140] In the sixth embodiment, a light source is disposed obliquely above the visual target object in the lateral direction, and a laterally long shadow is displayed.
[0141] The video processing device 10 according to the sixth embodiment includes the setting unit 11, the control unit 12, and the output unit 13 as in the fifth embodiment.
[0142] As in the fifth embodiment, the setting unit 11 disposes a visual target object indicating a visual target and a floor surface object at their initial positions in the virtual space based on the positional relation between the screen 22 and the visual target in the real space. The setting unit 11 also disposes a background virtual camera that images the floor surface object, including the shadow displayed on it, and a visual target virtual camera that images the visual target object.
[0143] The setting unit 11 disposes a parallel light source illuminating the visual target object obliquely from above in the lateral direction, at the same depth position as the visual target object. The parallel light source casts a laterally long shadow of the visual target onto the floor surface object.
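The laterally long shadow follows from projecting each point of the visual target object onto the floor plane along the parallel light direction: with a light placed obliquely above to the side, the offset is mostly lateral and the shadow keeps the object's depth position. A geometric sketch, in which the coordinate convention and names are my assumptions:

```python
def cast_shadow(point, light_dir):
    """Project a point onto the floor plane y = 0 along a parallel
    light direction (x = lateral, y = up, z = depth; ly must be < 0)."""
    x, y, z = point
    lx, ly, lz = light_dir
    t = y / -ly                          # ray parameter reaching the floor
    return (x + t * lx, 0.0, z + t * lz)
```

A point one unit up, lit at 45 degrees from the side, lands one unit sideways on the floor at the same depth, which is why the shadow stays aligned with the target in depth while stretching laterally.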
[0144] The control unit 12 moves the visual target object in the virtual space as in the fifth embodiment. For the visual target object captured by the visual target virtual camera, the size and the position in the captured video are changed in accordance with the movement amount in the depth direction by the perspective projection method.
[0145] As in the fifth embodiment, the output unit 13 outputs a video including the visual target object imaged by the visual target virtual camera to the mid-air image output device 23, and outputs a video including the floor surface object and the shadow, imaged by the background virtual camera, to the background video output device 21.
[0146] The flow of the process of the video processing device 10 according to the sixth embodiment is the same as that of the video processing device 10 described with reference to
[0148] When the observer 100 views from the front of the center of the screen 22 in the display state of
[0149] A spotlight illuminating the upper portion of the visual target object, rather than the parallel light source, may be disposed. The range outside of the irradiation range of the spotlight is set to be dark so that the shadow of the visual target object cannot be distinguished there. In this case, as illustrated in
[0150] As illustrated in
[0153] As described above, the video processing device 10 according to the embodiment disposes the light source obliquely above the visual target object in the lateral direction and displays the shadow 62 elongated in the lateral direction. Thus, even when the angle at which the observer 100 views the visual target 51 differs, the visual target 51 and the shadow 62 can be inhibited from appearing separated.
[0154] The video processing device 10 according to the embodiment disposes the spotlight light source obliquely above the visual target object in the lateral direction and displays the shadow 62 of the upper portion of the visual target 51 within the irradiation range of the spotlight. Thus, it is difficult to distinguish whether the footing of the visual target 51 is separated from the shadow 62.
[0155] The video processing method according to the sixth embodiment may be applied to the display system that has four virtual image surfaces according to the fourth embodiment. Thus, it is possible to express the movement of the visual target in the depth direction with respect to the observer viewing the whole circumference.
[0156] The above-described video processing device 10 can be implemented using, for example, a general-purpose computer system that includes a central processing unit (CPU) 901, a memory 902, a storage 903, a communication device 904, an input device 905, and an output device 906, as illustrated in
REFERENCE SIGNS LIST
[0157] 1 Display system
[0158] 10 Video processing device
[0159] 11 Setting unit
[0160] 12 Control unit
[0161] 13 Output unit
[0162] 21 Background video output device
[0163] 22 Screen
[0164] 23 Mid-air image output device
[0165] 24 Optical element
[0166] 30, 30A, 30B, 30C, 30D Virtual image surface
[0167] 51 Visual target
[0168] 52, 53 Background video
[0169] 62 Shadow
[0170] 63 Irradiation range
[0171] 100 Observer