SEE-THROUGH DISPLAY, METHOD FOR OPERATING A SEE-THROUGH DISPLAY AND COMPUTER PROGRAM
20220366615 · 2022-11-17
Inventors
- Mitra Damghanian (Upplands-Bro, SE)
- Martin Pettersson (Vallentuna, SE)
- Rickard Sjöberg (Stockholm, SE)
CPC classification
G02B2027/0118
PHYSICS
G06T3/40
PHYSICS
International classification
G06T3/40
PHYSICS
Abstract
There is provided a see-through display and a method for operating a see-through display. The display is configurable to display additional image content for augmenting a user's view of a scene visible through the display. According to the method, image data are received defining an image of a scene visible through the display. By analysis of the received image data, one or more characteristics of the scene are determined. A light effect to be applied to the user's view of the scene is determined. Additional image content is generated according to the determined light effect and according to the one or more determined characteristics of the scene. The generated additional image content is displayed to the user such that light received from the scene is combined with the additional image content, thereby to implement the determined light effect in the user's view of the scene.
Claims
1. A method for operating a see-through display, the display being configurable to display additional image content for augmenting a user's view of a scene visible through the display, the method comprising: receiving image data defining an image of a scene visible through the display; determining, by analysis of the received image data, one or more characteristics of the scene; determining a light effect to be applied to the user's view of the scene; generating additional image content according to the determined light effect and according to the one or more determined characteristics of the scene; and displaying the additional image content to the user such that light received from the scene is combined with the additional image content, thereby to implement the determined light effect in the user's view of the scene.
2. The method according to claim 1, wherein determining the one or more characteristics of the scene comprises determining at least one of: characteristics of an object visible in the scene; the position of an object visible in the scene; a profile of luminance across a region in the scene; a profile of colour across a region in the scene; a light model of the scene; and a time of capture of the image data.
3. The method according to claim 1 or claim 2, wherein determining the one or more characteristics of the scene comprises at least one of: constructing, obtaining and updating a map of the scene; and executing a SLAM method to analyse the received image data.
4. (canceled)
5. The method according to claim 1, comprising: generating the additional image content comprising light with a different profile of luminance or a different profile of color to that of light received from a respective region in the scene.
6. (canceled)
7. The method according to claim 1, comprising: filtering light received from a region of the scene using an optical filter and combining the light passed by the optical filter with the additional image content, thereby to implement the determined light effect in the user's view of the scene.
8. The method according to claim 7, comprising: generating the additional image content to take account of characteristics of the light passed by the optical filter.
9-10. (canceled)
11. The method according to claim 1, comprising: generating the additional image content comprising a time varying profile of light across a respective region in the scene.
12. (canceled)
13. The method according to claim 1, wherein determining a light effect to be applied to the user's view of the scene comprises receiving user profile data defining the light effect to be applied.
14. The method according to claim 13, wherein the user profile data defines at least one event or condition for activating a respective light effect in the display, and the method comprises: responsive to determining that the at least one event or condition has occurred, generating and displaying additional image content to apply the determined light effect.
15. The method according to claim 14, wherein the at least one event or condition comprises determining, by the analysis of the received image data, a presence of one or more predetermined characteristics of the scene.
16. The method according to claim 1, comprising: controlling an active blocking layer to block or at least partially to block light received at the display from a selected region of the scene.
17-20. (canceled)
21. The method according to claim 1, comprising: generating additional image content having a first level of image quality for display in a region of an image area of the display corresponding to the user's line of sight and generating additional image content having a second, lower level of image quality for display in other regions of the image area of the display.
22. The method according to claim 21, wherein the additional image content having the first level of image quality comprises image content having a higher level of image or colour resolution than the additional image content generated having the second, lower level of image quality.
23-25. (canceled)
26. A see-through display, comprising: an image generator configured to generate additional image content and to project the generated additional image content along a user's line of sight to a scene visible through the display such that light received from the scene is combined with the additional image content in the user's view of the scene; a processor, linked to the image generator and configured: to receive image data representing an image of a scene visible through the display; to determine, by analysis of the received image data, one or more characteristics of the scene; to determine a light effect to be applied to the user's view of the scene; and to control the image generator to generate and to project additional image content according to the determined light effect and according to the one or more determined characteristics of the scene.
27. The see-through display according to claim 26, comprising an optical filter positioned to receive light from the scene and to pass received light, according to filtering characteristics of the optical filter, for viewing by the user.
28. The see-through display according to claim 26, comprising: a camera positioned to capture images of a scene visible to the user through one or both of the display and the optical filter and to output to the processor corresponding image data.
29. (canceled)
30. The see-through display according to claim 26, comprising: a memory, accessible by the processor, configurable to store light effect profile data defining one or more predetermined light effects that may be applied in the display.
31. The see-through display according to claim 30, wherein the memory is configurable to store user profile data defining one or more light effects to be applied in the display for the user.
32. The see-through display according to claim 30, wherein the light effect profile data or the user profile data defines, for a said light effect, data defining at least one event or condition for triggering selection or application of the said light effect in the display.
33. (canceled)
34. The see-through display according to claim 32, wherein the at least one event or condition includes at least one of: detection of a predetermined characteristic of the scene; detection of an input by a user in a user interface; detection of an audible input; determination of a predetermined characteristic in a detected audible input; detection of a gesture by the user; determination of the presence of a predetermined object or other feature in the received image data; a predetermined time; and a predetermined position or orientation of the display.
35. The see-through display according to claim 26, comprising: a blocking layer configurable at least partially to block light from a selected region in a user's view of the scene, wherein the processor is configured to control the configurable blocking layer according to the determined light effect and according to the one or more determined characteristics of the scene.
36-40. (canceled)
41. A computer program product, comprising a computer-readable medium, or access thereto, the computer-readable medium having stored thereon a computer program which, when loaded into and executed by a processor of a see-through display, causes the processor: to receive image data representing an image of a scene visible through the display; to determine, by analysis of the received image data, one or more characteristics of the scene; to determine a light effect to be applied to a user's view of the scene through the display; and to control an image generator of the display to generate and to project additional image content according to the determined light effect and according to the one or more determined characteristics of the scene.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0014] Example embodiments of the proposed technology will now be described in more detail and with reference to the accompanying drawings of which:
DETAILED DESCRIPTION
[0019] Known augmented reality (AR) and mixed reality (MR) display arrangements provide a user with a view of a scene, for example a real-world scene, and augment that view with additional image content. The user's view of the scene may comprise live images being displayed on an opaque screen of a real-world scene captured by a camera. The scene may alternatively be a combination of an image of a real-world scene captured by a camera and AR/MR image content. If using a see-through display, the user may continue to view light from a real-world scene directly through the see-through display.
[0020] If using an opaque screen and a camera, for example a mobile phone or other portable computing device, additional image content may be combined digitally to augment a digitally encoded image of a real-world scene captured by the camera. The resultant digitally combined image may then be displayed on the opaque screen.
[0021] If using a see-through display, the additional image content is projected so that it may be viewed along a user's direct line of sight to the real-world scene, so appearing to overlay the user's view of the real-world scene. Examples of known types of see-through display include: a head-up display system as shown schematically in
[0022] Referring to
[0023] Referring to
[0024] Referring to
[0025] Referring to
[0026] A head-mounted display may comprise any selection or combination of one or more of the arrangements shown in
[0027] In each of the see-through display arrangements shown in
[0028] It is known in opaque display systems to analyse a digital video image captured by a camera, to alter the digital image to introduce additional image content, and to display the altered image on an opaque screen. The additional image content may include effects of applying different lighting to a scene captured by the camera. For example, changes may be made to the luminance of pixels captured by the camera to simulate changes in the illumination of objects in the scene. Those changes may include altering areas of light and shade within the captured image so as to simulate a new light source appearing to illuminate particular objects within the scene. It is known to apply these techniques to video frames captured by a camera attached to the display, e.g. a mobile phone or portable tablet computer, and to display the altered images on a screen to simulate a view of a scene including one or more objects lit by a simulated light source.
[0029] The additional image content may be displayed at one or more fixed positions in the image area of a display. Alternatively, the additional image content may be displayed so as to appear to the user to be fixed relative to an object or feature in a scene being viewed, irrespective of changes in orientation of the display, i.e. the additional image content is “space-stabilised”. To be able to display space-stabilised image content, the position in the image area of the display at which the additional image content is to be displayed needs to be recalculated for each newly displayed image frame as the orientation of the display changes, so as to track the changing direction from which the scene is being viewed. If the direction of viewing the object or feature in the scene passes beyond the aperture of the display, the position of the associated additional image content also moves beyond the aperture of the display and is no longer displayed.
[0030] For example, the position of features within a scene, and changes in orientation of the display relative to the scene, may be tracked using algorithms such as SLAM, referenced above. Changes in orientation of the display itself may alternatively, or in addition, be tracked using movement sensor data output by movement sensors attached to the display, or by another type of display tracking system. A determined orientation of the display may be used as an approximate indication of a line of sight of a user through the display to a real-world scene, in particular where the display is a head-mounted display. Movement sensors may include inertial sensors of various known types, or components of a tracking system comprising components mounted on the display and components mounted separately to the display at known locations.
[0031] Examples of the latter include optical tracking systems, magnetic tracking systems and radio-frequency (RF)-based position determining systems.
[0032] Besides tracking changes in orientation of the display, it may be beneficial also to determine a direction of gaze or line of sight of a user's eye through the display. It is known to integrate an eye-tracking mechanism in AR/MR displays to detect the line of sight of an eye and to signal changes in that line of sight. The determined line of sight may be used in various ways, for example to enhance or to alter displayed image content according to the determined line of sight of the user. For example, recognising that the sensitivity of an eye to particular measures of image quality may reduce with increased viewing angles beyond the line of sight of the eye, the image quality of additional image content may be reduced for those regions known to be peripheral to the user's line of sight.
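The foveated-quality idea in paragraph [0032] can be illustrated with a minimal sketch. The function name, the two discrete quality levels and the 10° foveal cone are illustrative assumptions, not taken from the disclosure; a real display would likely use a continuous quality falloff and calibrated eye-tracker output:

```python
import math

def quality_level(gaze_dir, content_dir, fovea_deg=10.0):
    """Pick an image-quality level for additional image content based on
    its angular offset from the user's determined line of sight.

    gaze_dir, content_dir: unit vectors (x, y, z) in display coordinates.
    Returns "high" inside the assumed foveal cone, "low" in the periphery,
    where the eye is less sensitive to image quality.
    """
    dot = sum(g * c for g, c in zip(gaze_dir, content_dir))
    dot = max(-1.0, min(1.0, dot))        # guard against rounding error
    offset_deg = math.degrees(math.acos(dot))
    return "high" if offset_deg <= fovea_deg else "low"
```

Content generated at the lower level might then use reduced image or colour resolution, as in claims 21 and 22.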
[0033] In existing AR/MR solutions using see-through displays, the aim is usually to superimpose additional image content such as symbols, text, objects or characters in the user's view of a scene without otherwise affecting the user's view of the scene.
[0034] According to example embodiments disclosed herein, the user of a see-through display is able to apply personalised adjustments to their view of the scene through the display. For example, the display may be configured to alter scene properties perceived by the user.
[0035] Alterations to scene properties may for example include, without limitation, alterations to scene lighting, style (as in style transfer), mood or ambiance. Such alterations may be made for the benefit of a viewer in a single- or multi-viewer arrangement, thereby to enhance the viewing experience of the scene. In this invention, the scene may comprise a real-world scene or a combination of a real-world scene and any overlaid AR image content. Example embodiments to be described below may be implemented or used either individually or as a combination of features in a see-through display system.
[0036] According to example embodiments disclosed herein, light passing through a see-through display from a real-world scene may be altered in various ways to create visible changes to that light as compared to the light if viewed without the display. While the user continues to view light from the real-world scene directly, light received from one or more selected regions of the scene may be altered, for example in perceived luminance or colour, before reaching the user's eye. The user then perceives those one or more regions of the scene differently. In particular, light received from one or more particular objects or features identifiable within the scene may be altered such that the object or feature appears to be differently illuminated, for example by a differently positioned light source, or a light source having a different colour of light. This differs from the situation in an opaque display screen where images captured by a camera are manipulated digitally and displayed on the screen to simulate different light effects. In the present invention, the user's perception of light received from the real-world scene is altered to create the different light effects so that a user continues to view at least some of the light from the real-world scene at the user's normal viewing resolution.
[0037] Advantageously, as compared with AR/MR applications of opaque displays, embodiments of the present invention may provide an altered view of a real-world scene with higher image quality, e.g. higher image resolution, higher colour accuracy, and reduced or absent latency. Furthermore, in embodiments of the present invention, the altered view may be provided without breaking the line of sight of the user or altering the scaling of the real-world scene. The known technique of digital processing and display of a camera image of the real-world scene by opaque displays of mobile phones, other portable computing devices or “immersive” VR head-mounted displays, provides no direct view to the user of light from the real-world scene. Such techniques may also involve some level of compromise on image resolution, colour accuracy, latency, viewing experience, etc. Such compromises may be reduced or avoided in the see-through displays of the present invention.
[0038] According to example embodiments disclosed herein, alterations to the light received from particular regions of a scene, e.g. from particular objects or features, are made in such a way as to track changes in the orientation of the display and corresponding changes to the user's line of sight to those objects or features through the display. In this way, the perceived alterations remain fixed relative to the line of sight of the user through the display as the orientation of the display changes. For example, if simulating the illumination of an object in the real-world scene, visible through the display, by a virtual light source, the object will continue to appear to be illuminated by that light source if the orientation of the display changes, so long as the object remains visible to the user through the display. That is, the virtual illumination effect is “space-stabilised” within the display to the user's view of the object.
[0039] According to example embodiments disclosed herein, alterations to the light received from the scene may comprise one or both of additive and subtractive alteration. For example, perceived alterations to a region in a user's view of a scene may be achieved by filtering, blocking or partially blocking light from the region to alter the perceived luminance of that region. Alternatively, or in addition, perceived alterations to a region in the scene may be achieved by filtering, blocking or partially blocking light in one or more frequency ranges. Alternatively, or in addition, perceived alterations to a region in the scene may be achieved by passing light within only one or more defined ranges of wavelengths from the region in the scene, or from the whole scene, to cause a change in perceived colour. Alternatively, or in addition, the user's perception of a region of a scene may be altered by generating additional image content corresponding to the region in the scene and displaying it to the user in combination with light received from the region of the scene, whether altered by an optical filter or not. The additional image content may be space-stabilised within the display to track changes in the user's direction of viewing the scene through the display.
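The split between subtractive alteration (filtering or blocking) and additive alteration (displayed image content) described in paragraph [0039] can be sketched for a single region. The function and its relative-luminance units are illustrative assumptions; a real system would work per colour channel with calibrated photometric quantities:

```python
def plan_region_alteration(scene_lum, target_lum, max_added_lum=1.0):
    """Plan how to realise a target perceived luminance for one region.

    Subtractive alteration can only remove light from the scene;
    additive image content can only add light on top of it.
    Returns (filter_transmittance, added_luminance) so that
    scene_lum * transmittance + added ~= target_lum.
    Values are illustrative relative luminances, not calibrated units.
    """
    if target_lum <= scene_lum:
        # Darken purely subtractively: pass only a fraction of the light.
        transmittance = target_lum / scene_lum if scene_lum > 0 else 1.0
        return transmittance, 0.0
    # Brighten: pass all scene light and render the shortfall as
    # additional image content, limited by the display's brightness.
    added = min(target_lum - scene_lum, max_added_lum)
    return 1.0, added
```

Darkening a region thus needs no additional image content at all, while brightening needs no filtering, matching the one-or-both framing of the paragraph.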
[0040] In one example embodiment, light from a whole scene may be altered subtractively by an optical filter or other optical device configured to reduce its luminance or to pass only selected wavelengths of light. Additional image content may be generated to augment light passed by the optical filter. For example, the additional image content may relate only to one or more selected regions of the scene thereby to alter the perceived luminance or colour of those selected one or more regions.
[0041] In another example embodiment, light from one or more selected regions in the scene may be altered subtractively by an optical filter or other optical device to reduce the luminance of the light from the region or to pass only selected wavelengths of the light. At the same time, additional image content may be generated to augment the subtractively altered light from one or more of the same or from a different selected region of the scene. The additional image content may relate only to the one or more selected regions or only to another selected region or it may relate to a user's view of the whole scene.
[0042] Before proceeding further with detail of example embodiments for achieving a perceived alteration to light in a see-through display, components of an example see-through display that may be used to achieve alterations to a user's perception of a real-world scene will now be described with reference to
[0043] Referring to
[0044] In an alternative implementation of the display in
[0045] In a further alternative implementation of the display in
[0046] The optical filter may comprise one or a combination of known types of optical filter.
For example, the optical filter may be configured to alter the luminance of received light. Alternatively, or in addition, the optical filter may be configured to pass received light 145 in only one or more ranges of wavelength. Alternatively, or in addition, the optical filter may be configured to pass received light according to the angle of polarisation of the received light. Alternatively, or in addition, the optical filter may be configured to apply optical filtering to one or more areas of the scene based on information extracted from the scene. This information may be updated dynamically, for example in response to changes in the scene, to movement of the display or the viewer relative to the scene or to each other, or to a change in the line of sight of the user. Alternatively, or in addition, the optical filter may be configured to alter the characteristics of light as it passes through the filter in other ways, for example to change the angle of polarisation of the received light. The light passed by the optical filter then passes through the combiner or is redirected by the combiner towards the eye of the user, with or without also passing through an additional optical filter as discussed above.
[0048] Alternatively, or in addition, the optical filter may include or be associated with an active blocking layer configurable to prevent, or to partially prevent light in one or more selected regions within the aperture of the display from reaching the optical filter. In this way, particular regions in the user's view of the scene may be blocked, or partially blocked, i.e. dimmed. This provides an opportunity to replace the user's view of that region of the scene with additional image content generated by the image generator 155 and displayed at an appropriate position in the display while the user continues to view light from the remainder of the scene.
[0049] In the display arrangement shown in
[0050] Referring to
[0051] Not shown in
[0052] Referring to
[0053] Referring to
[0054] As discussed above with reference to
[0055] In each of the embodiments of a see-through display described above with reference to
[0056] Either or both of the processor 300 and the memory 305 may optionally be associated with the respective display, for example implemented as components of the display. Alternatively, or in addition, either or both of the processor 300 and the memory 305 may be separate from the respective display and be configured to communicate with the display over a communications link. The communications link may be a wireless communications link, for example a link established through a mobile communications network, or a short-range wireless link such as “wi-fi” (IEEE 802.11 wireless standard), Bluetooth® or an optical, e.g. infra-red (IR) communications link. Alternatively, the processor 300 may be configured to communicate with the display over a physical communications link. The physical communications link may be implemented, for example, using an optical fibre, or a communications link may be established over an electrical conductor or transmission line. A processor 300 and memory 305 may be provided as components of a single data processing facility, or they may be components of an edge computing or cloud-hosted data processing facility configured to communicate with components of the display. An edge-computing or cloud-hosted facility may for example be beneficial in a multi-user environment, as will be discussed further below.
[0057] The memory 305 may for example store one or more computer programs which when executed by the processor 300 cause the display to operate a process as will now be described in summary with reference to
[0058] Referring to
[0059] At 355, to determine a light effect to be applied, the processor may be configured to access the memory 305 which may be arranged to store profile data relating to different predetermined light effects that may be selected and generated in the display. The processor 300 may be configured to use the stored profile data to generate the additional image content, at 365, as required to simulate the determined light effect. The memory 305 may for example store user profile data indicative of a user's preferences for the creation of a specific light effect when viewing a scene through the display. The user profile data may reference one or more of the stored light effect profiles. The memory 305 may for example store information about a scene 25, for example a determined geometry of the scene. The processor 300 may be configured to improve, complement or otherwise update this stored information through time. This information, for example the geometry of the scene, may be used for example for accelerating the processing of captured image data for a scene 25 at 355.
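The relationship between stored light-effect profiles, user profile data and trigger conditions (paragraph [0059], and claims 13–15 and 30–34) can be sketched as follows. All dictionary keys, profile names and field values here are hypothetical, invented purely for illustration:

```python
# Hypothetical light-effect profile data as might be held in memory 305.
LIGHT_EFFECT_PROFILES = {
    "warm_sunset": {"colour_shift": (1.1, 1.0, 0.8), "luminance_gain": 0.9},
    "spotlight":   {"colour_shift": (1.0, 1.0, 1.0), "luminance_gain": 1.5},
}

# Hypothetical user profile: each rule pairs a condition on determined
# scene characteristics with the light effect to apply when it holds.
USER_PROFILE = {
    "rules": [
        {"when": {"time_of_day": "evening"}, "effect": "warm_sunset"},
        {"when": {"object_detected": "stage"}, "effect": "spotlight"},
    ]
}

def select_effect(scene_characteristics, user_profile=USER_PROFILE):
    """Return the first light-effect profile whose trigger condition
    matches the characteristics determined from the received image data,
    or None if no rule fires."""
    for rule in user_profile["rules"]:
        if all(scene_characteristics.get(k) == v
               for k, v in rule["when"].items()):
            return LIGHT_EFFECT_PROFILES[rule["effect"]]
    return None
```

A trigger condition could equally be a user-interface input, gesture, audible input, time or display pose, as enumerated in claim 34.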
[0060] In an example embodiment, a processor 300, if implemented as a component of the display system, may be configured to receive, at 365, from a source external to the display system, data indicative of the additional image content to be generated. The received data may for example comprise an indication of a lighting profile to be implemented in the display, or the data may comprise image data defining the additional image content to be displayed.
[0061] The processor 300 may, for example, be configured to implement functionality to receive tracking information from tracking devices associated with the display system. The tracking system may be configured, for example in any of the display arrangements shown in
[0062]
[0063] The processor 300 may, for example, be configured to implement functionality for generating, at 365, the additional image content using a frame-based digital image-generating technique, for example at a frame rate of 50 or 60 Hz. Where additional image content is required to be space-stabilised relative to a user's view of the real-world scene, the processor 300 may be configured to re-calculate the position at which additional image content is displayed within the display for each new image frame. The processor 300 may also be configured to receive data defining changes in orientation of the display at the frame rate, or more frequently. The processor 300 may use the received change in orientation data to calculate, at 365, the position at which additional image content should be displayed in the display for each new image frame in order to maintain the perceived light effect.
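The per-frame position recalculation for space-stabilised content described in paragraph [0063] can be sketched in one horizontal dimension. The field of view, pixel width and linear angle-to-pixel mapping are illustrative assumptions; a real display would use full 3-D rotations and its calibrated projection model:

```python
def world_to_display(anchor_azimuth_deg, display_yaw_deg,
                     fov_deg=40.0, width_px=1280):
    """Re-calculate, for one image frame, the horizontal pixel column at
    which space-stabilised content must be drawn so that it stays fixed
    on a world feature as the display yaws. 1-D sketch only.

    anchor_azimuth_deg: world azimuth of the tracked feature.
    Returns the pixel column, or None if the feature has moved beyond
    the aperture of the display (content no longer displayed).
    """
    # Bearing of the feature relative to the display's current boresight.
    rel = anchor_azimuth_deg - display_yaw_deg
    half_fov = fov_deg / 2.0
    if abs(rel) > half_fov:
        return None                     # outside the display aperture
    # Simple linear mapping from viewing angle to pixel column.
    return round((rel + half_fov) / fov_deg * (width_px - 1))
```

Running this once per frame at 50 or 60 Hz, with the latest orientation data, keeps the perceived light effect fixed to the feature.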
[0064] The processor 300 may be configured to execute, at 355, a SLAM algorithm, referenced above, wherein determining characteristics of the scene comprises identifying and mapping features visible within the scene. In this way, information may be derived relating to the relative position of features visible within the scene. Features may be identified within the scene by the SLAM algorithm according to changes in luminance or colour of pixels, enabling structures within the scene to be determined. The determined structures may represent objects within the scene, boundaries of shadow or light, colour change boundaries, etc. The information may also be used to determine changes in the orientation of the display. The SLAM algorithm may use any changes in relative position of the identified features to determine changes in position and/or orientation of the display. The determined changes in orientation from the SLAM algorithm may be used by the processor 300 at 365, either instead of, or to supplement tracking data received from a tracking system when generating additional image content. The processor 300 may be configured to receive information from other sensors, e.g. cameras positioned to observe the scene 25, not necessarily located at the position of the camera 310, and so observe the scene 25 from one or more different directions.
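The use of matched scene features to infer a change in display orientation, as in the SLAM processing of paragraph [0064], can be reduced to a small-angle sketch. The focal length and the yaw-only model are illustrative assumptions; a full SLAM back end would jointly estimate 6-DoF pose and the scene map:

```python
import math

def estimate_yaw_change(matched_features, focal_px=1000.0):
    """Estimate the change in display yaw between two frames from the
    mean horizontal displacement of features matched by a SLAM-style
    front end.

    matched_features: list of (x_prev, x_curr) pixel columns for the
    same identified feature in consecutive frames.
    Returns the estimated yaw change in degrees (positive when the
    display has turned so that features moved left in the image).
    """
    if not matched_features:
        return 0.0
    mean_dx = sum(x_prev - x_curr
                  for x_prev, x_curr in matched_features) / len(matched_features)
    # Small-angle pinhole model: angle = atan(displacement / focal length).
    return math.degrees(math.atan2(mean_dx, focal_px))
```

Such an estimate could supplement, or substitute for, tracking data from inertial or external tracking systems when repositioning additional image content.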
[0065] Image data of the real-world scene may be received at 350 by the processor 300 from a camera mounted in a fixed position relative to the display to capture light as may be viewed by a user from the real-world scene. In any of the example display systems shown in
[0066] The camera 310 may be configured to detect or output particular information about the scene 25. For example, the information may comprise light intensity or geometry of the scene. The camera 310 may for example be an RGB camera, a depth camera or a light-field camera.
[0067] Optionally, in an alternative arrangement or in addition to the camera 310, a camera 315 may be mounted in a fixed position to receive light passed or re-directed by the optical filter and combiner 140, 200, 230. Such an arrangement enables the processor 300, at 355, to analyse light of a real-world scene, before or after alteration in the respective display. The analysis may for example determine one or more of: the location or relative position of objects or features visible in the scene; material properties of those objects or features; the position of those objects or features relative to light sources or light obstructers; the viewing geometry; and actual illumination of the scene, including for example the variation of luminance or colour across the scene.
[0068] The processor 300 may, for example, be configured to implement functionality to receive, at 350, image data captured by one or both of the cameras 310, 315 and to determine, at 355, a light model of the scene. The light model may for example comprise one or more of the position, intensity or colour of light emitted by a light source. The resulting lighting model may then be used to determine what alterations are going to be required to the user's perception of the current lighting of the scene in order to apply a preferred light effect for the scene as it will appear to the user. The processor 300 may be configured to apply the preferred light effect by controlling the display to add one or more virtual light sources and light obstructers to the determined lighting model of the scene, calculating their effect on the lighting of the scene, and determining, at 365, any additional image content to be generated. The perceived lighting of the scene will then comprise one or more of: filtered light from the real-world scene; the light from the real-world scene combined with a view of the additional image content; and filtered light from the real-world scene combined with a view of the additional image content.
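The step of adding a virtual light source to a determined lighting model and calculating its effect, described in paragraph [0068], can be sketched with a Lambertian point-light model. The function, its units and the dictionary layout are illustrative assumptions; only the virtual contribution is computed here, since light from the real-world scene continues to reach the eye directly:

```python
def added_luminance(point, normal, virtual_light):
    """Additional image content luminance needed at one surface point so
    that a virtual point light appears to illuminate it.

    point: 3-D surface position; normal: unit surface normal.
    virtual_light: dict with "pos" (3-D) and "intensity" (illustrative
    units). Lambertian sketch with inverse-square falloff.
    """
    lx, ly, lz = (virtual_light["pos"][i] - point[i] for i in range(3))
    dist2 = lx * lx + ly * ly + lz * lz
    dist = dist2 ** 0.5
    # Lambert's cosine law: N . L, clamped so back-facing points stay dark.
    n_dot_l = max(0.0, (normal[0] * lx + normal[1] * ly + normal[2] * lz) / dist)
    return virtual_light["intensity"] * n_dot_l / dist2
```

Rendering this contribution per visible surface point, at the display position determined for each frame, makes the object appear lit by the virtual source on top of its real illumination.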
[0069] If an actively configurable optical blocking layer is provided in the display, for example one associated with the optical filter as discussed above, the processor 300 may also be configured to control the actively configurable blocking layer at least partially to block light from one or more selected regions of the scene. Corresponding additional image content may be generated at 365 and displayed at 370 at an appropriate position in the display, for example to replace the blocked or partially blocked light or otherwise to exploit the at least partial blocking of light to achieve a desired light effect in the user's perception of the scene. In one example embodiment, the light that is to be at least partially blocked by the blocking layer and the light that is to be filtered by the optical filter may be determined by functionality implemented by the processor. Furthermore, the processor 300 may be configured, for example as part of the processing at 365, or in a separate process, to determine a region within the aperture of the blocking layer that is to be activated at least partially to block light according to determined changes in orientation of the display. In this way, the region in the user's view of the scene from which light is to be at least partially blocked may remain unaltered by changes in orientation of the display when viewed through the display. The region may comprise the apparent (user's perception of the) position of an object or other feature identified within the scene. Any additional image content generated at 365 to correspond to that region of the scene may therefore relate to the object that would be visible through the display in that respective direction.
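The orientation-compensated determination of the blocking region described above may be sketched, purely by way of illustration, as a mapping from a fixed scene direction to coordinates in the blocking layer's aperture. The pinhole-style angular model, field-of-view values and names below are assumptions, not taken from the disclosure.

```python
def stabilised_block_region(region_dir_deg, display_yaw_deg, display_pitch_deg,
                            fov_deg=(40.0, 30.0), resolution=(640, 480)):
    """Map a fixed scene direction (yaw, pitch, in degrees) of a region to be
    blocked onto pixel coordinates in the blocking layer, given the display's
    current orientation, so that the blocked region stays aligned with the
    same scene feature as the display turns.

    Returns None when the region has left the display's field of view.
    """
    rel_yaw = region_dir_deg[0] - display_yaw_deg
    rel_pitch = region_dir_deg[1] - display_pitch_deg
    half_h, half_v = fov_deg[0] / 2.0, fov_deg[1] / 2.0
    if abs(rel_yaw) > half_h or abs(rel_pitch) > half_v:
        return None          # region is outside the aperture; nothing to block
    x = (rel_yaw / fov_deg[0] + 0.5) * resolution[0]
    y = (0.5 - rel_pitch / fov_deg[1]) * resolution[1]
    return x, y
```

For example, if the display yaws 5 degrees to the right, the activated region moves a corresponding number of pixels to the left, so the blocked direction in the scene is unchanged.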
[0070] Some example embodiments will now be described with reference to
[0071] Referring to
[0072] Referring to
[0073] Referring to
[0074] Referring to
[0075] Referring to
[0076] Referring to
[0077] In each of the arrangements shown in
[0078] Similarly, in a variant of either of the arrangements shown in
[0079] Some example embodiments of light effects that may be implemented using, for example, the embodiments of a display described above with reference to
[0080] An example embodiment enables a user to perceive adjustable scene lighting when viewing a scene through the display. The adjustable scene lighting may be generated by overlaying additional image content simulating the effect of one or more virtual light sources or obstructers on top of the user's actual view of the scene. Such an effect may be implemented by the see-through display described above with reference to any of
[0081] Virtual light sources and obstructers may be simulated to be far from the scene being viewed by the user, e.g. a virtual sun or virtual cloud. Alternatively, the virtual light sources may be close to or within the user's view of the scene, e.g. a virtual extra lamp in a room. Virtual light sources and obstructers may therefore be visible within the user's field of view of the scene, or they may themselves be outside the user's field of view, but with effects that are visible within the user's field of view. Light obstructers may act as a virtual object in the scene (inside or outside the field of view) and may affect the lighting of the scene, e.g. by their shadow, visible within the user's field of view.
[0082] Examples of virtual light effects that may be superimposed with additional image content in an AR scenario may include: [0083] Sunny sky with sunrays from the sun. The rays will give the impression that the virtual sun is shining bright. [0084] Sunset or sunrise with gradual darkening/reddening of the sky. By analysing image data of the scene, the processor may be configured to detect and locate the sky in a user's view of a real-world scene. The user's view of the sky may then be altered by a combination of optical filtering and additional image content to create, for example, a gradual darkening/reddening effect around a virtual sun that sets/rises at a horizon. The remainder of the scene may be darkened/reddened accordingly. [0085] Cloudy sky with rain. Clouds may be superimposed upon the user's view of a real-world sky and the sky may be slightly darkened. Virtual drops of rain may be represented in additional image content so as to appear to fall from the sky. A rainbow may also be superimposed. [0086] Moonlight by day. By darkening the real-world scene and filtering out colours to give a more monochrome effect, the user's view of the scene may be one of moonlight during the day. A full moon may also be superimposed in the sky. [0087] Virtual lightning. This may be simulated in additional image content, for example by including an image of a lightning bolt in one or two image frames of additional image content. The additional image content displayed during those image frames may comprise a brighter representation of the whole scene, generated using, for example, image data captured by a camera with a view of the scene. A virtual lightning event may be accompanied by generating the sound of a lightning strike or of a subsequent rumble of thunder, according to how close to the user's view of the scene the lightning strike is intended to have occurred. [0088] ‘Devilish’ sky with red and black clouds sweeping in. 
[0089] Unnatural light effects such as greenish light from the sky. [0090] Dim regions made lighter. Additional image content may be generated to cause a user's view of dim areas within the scene to appear brighter, for example to enhance visibility in low-light areas. One example may comprise generating additional image content corresponding to a lighter representation of an area of shadow in a football stadium (e.g. when half the field is in shadow) and displaying it in a space-stabilised position relative to the user's view of the stadium such that the user's eye does not need to adapt between dark and bright areas. [0091] Highlighting an object visible within a scene. For example, to provide individual illumination of the object, e.g. illumination from a light source associated with the object so that the light source moves with the object, or a theatre spot-light or similar illumination effect in which the light source is fixed and follows the object if it moves within the scene. The purpose of the individual illumination may for example be to highlight the object to the user, to provide a warning (e.g. illumination with red light) in respect of the object, or for tracking purposes, enabling the user more easily to track movement of the object through the scene. [0092] Re-colouring of an object, for example to appear to the user to be red instead of green.
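By way of a non-limiting sketch of one of the effects listed above, the 'moonlight by day' effect may be approximated on a camera image of the scene by darkening, desaturating towards monochrome and adding a slight blue cast; the additional image content would then be derived from the difference between this target appearance and the (filtered) real-world light. The function name and coefficients are illustrative assumptions.

```python
import numpy as np

def moonlight_effect(rgb, darkening=0.35, desaturation=0.8, blue_tint=1.15):
    """Approximate 'moonlight by day' on a scene image: darken the scene,
    shift it towards monochrome, and add a slight blue tint.

    rgb: float array in [0, 1], shape (H, W, 3).
    """
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # per-pixel luminance
    grey = np.repeat(luma[..., None], 3, axis=-1)
    mixed = (1.0 - desaturation) * rgb + desaturation * grey
    mixed[..., 2] *= blue_tint                         # cool blue cast
    return np.clip(mixed * darkening, 0.0, 1.0)
```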
[0093] The virtual scene light effects may be controlled manually by the user, e.g. via a graphical user interface or by voice control. Alternatively, the effects may be controlled according to input from sensors (e.g. light detectors, etc.) or by other external means. For example, one of a number of predetermined light effects may be triggered by a predefined sequence of events by the user. For example, the selection of a particular scene light effect may be triggered by an audible input. For example, different light effects may be selected depending on determined characteristics in detected sounds, for example a determined ‘mood’ of music that is being played by a user while viewing a scene through a display as disclosed herein. Alternatively, or in addition, a predetermined light effect may be selected according to the occurrence of one or more such events, as defined in a user profile for the user, as discussed above.
[0094] In an example embodiment, one or more predetermined lighting settings or lighting profiles may be defined and stored for selection as required. Each lighting setting or profile may define one or more virtual light sources and/or light obstructers to be implemented in a display, with a defined set of parameters. The defined parameters may include, but not be limited to defining a luminance profile across one or more regions or across the whole of a viewing aperture in a display. A luminance profile, for example, may be implemented in the display by one or both of filtering light received from a scene by an optical filter, and generating additional image content. The additional image content may be generated based upon received image data captured by a camera of one or more regions in the user's view of a scene.
[0095] Further parameters may define conditions under which the defined lighting setting or light effect profile is to be implemented in a display. For example, a given lighting setting or light effect profile may be triggered when a scene being viewed is determined as being an indoor scene or when an outdoor scene. Alternatively, or in addition, a given lighting setting or light effect profile may be triggered at particular times or during defined time intervals. For example, a given lighting setting or light effect profile may be applied in a display during a defined morning period or during a defined evening period. Alternatively, or in addition, a given lighting setting or light effect profile may be triggered when a pre-defined characteristic of the scene, for example an object, gesture or event is detected or recognised in the field of view of the display. One or multiple conditions may be defined for triggering a lighting setting or light effect profile. The lighting setting or light effect profile to be applied may be chosen or scheduled by a user or a real-world source and may be triggered for example by one or more of:
[0096] detection of a predetermined characteristic of the scene;
[0097] detection of an input by a user in a user interface;
[0098] detection of an audible input;
[0099] determination of a predetermined characteristic in a detected audible input;
[0100] detection of a gesture by the user;
[0101] determination of the presence of a predetermined object or other feature in the received image data;
[0102] a predetermined time; and
[0103] a predetermined position or orientation of the display.
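The trigger conditions listed above may, in a simplified sketch, be dispatched as a lookup from detected events to stored lighting settings or light effect profiles. The event names and profile identifiers below are hypothetical; the disclosure lists the trigger types but does not define an API for them.

```python
# Hypothetical trigger table mapping (event type, event value) pairs to
# stored lighting profiles; entries correspond to trigger types listed in
# the text (time, scene characteristic, gesture, audible input).
TRIGGERS = {
    ("time", "07:00"): "morning_profile",
    ("scene_characteristic", "indoor"): "indoor_profile",
    ("gesture", "wave"): "spotlight_profile",
    ("audio_mood", "calm"): "sunset_profile",
}

def select_light_profile(events):
    """Return the first lighting profile whose trigger condition matches a
    detected event, or None if no condition matches."""
    for event in events:
        profile = TRIGGERS.get(event)
        if profile is not None:
            return profile
    return None
```

Multiple conditions per profile could be supported by keying the table on sets of events rather than single events.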
[0104] A processor 300 for use with a display disclosed herein may be configured to receive external or local sensor information, for example time, calendar information, GPS coordinates or orientation. The processor 300 may be configured to use these data to adjust the perceived position of a light source defined in a lighting profile. In one example, a combination of received GPS data, orientation and calendar information may be used to apply a virtual sunlight effect to a user's view of an indoor or outdoor scene, or to add the effect of the virtual sunlight to the user's view of a scene on a cloudy day.
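The use of calendar and position data to place a virtual sun may be sketched with a very rough solar-geometry approximation. This sketch ignores the equation of time and longitude corrections and is an illustrative assumption only; a production system would use a full solar position algorithm.

```python
import math

def rough_sun_direction(latitude_deg, hour_of_day, day_of_year):
    """Very rough solar elevation/azimuth (degrees) from latitude, local
    solar time and day of year: enough to place a plausible virtual sun in
    the display, not an accurate ephemeris."""
    decl = math.radians(23.44) * math.sin(2 * math.pi * (day_of_year - 81) / 365.0)
    hour_angle = math.radians(15.0 * (hour_of_day - 12.0))
    lat = math.radians(latitude_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elevation = math.degrees(math.asin(sin_elev))
    azimuth = math.degrees(math.atan2(
        math.sin(hour_angle),
        math.cos(hour_angle) * math.sin(lat) - math.tan(decl) * math.cos(lat)))
    return elevation, azimuth + 180.0  # azimuth measured from north
```

Combined with the display's orientation from a tracker system, the returned direction could fix the virtual sun's apparent position in the user's view of the scene.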
[0105] In an example embodiment, several users may be using see-through displays according to embodiments discussed above, in the same environment. For example, several users may be located to view the same real-world scene from slightly different positions. In one example scenario, each of the users may agree upon a common lighting profile to be applied by their respective displays to alter each user's perception of the environment in substantially the same way. In this example scenario the processing required to control each of the displays may be shared. For example, an edge computing arrangement or cloud resources accessible to all the users may be configured to exploit redundancy. The redundancy may arise across the multiple views of the environment captured from each user's display, in which the same features or different subsets of a common set of features in a scene may be visible to each user. The processing required to analyse images captured of a scene by multiple users may thereby be reduced. For example, a SLAM algorithm may be executed to determine a set of features visible to one or more users within a group. It may not be necessary to analyse images captured for all the users in the group if the same features, or a subset of the determined features, are visible to all the users. It may also be possible to economise on the processing required to generate additional image content for a selected light effect to be applied in each user's display. The resulting lighting profiles, represented by additional image content appropriately adjusted according to each user's view position and view direction, or control signals for other display components such as a blocking layer as discussed above, may be streamed to each user's display from a common processing resource.
[0106] In an example variant of this embodiment, the light effects to be applied may be determined by a common authority, for example by a light and illumination control centre. All users having displays connected to that centre, or subject to that common authority, may receive data or control signals to generate view-port-dependent altered lighting of the scene from edge nodes or cloud servers.
[0107] The altered light effects to be applied in the display of each user may be updated substantially in real time and streamed from an edge node or cloud server. For example, an edge node or cloud server may be configured to receive data indicative of a change in position and/or orientation of the display of any one user and to use those data in generating the altered light effect for that user. Updates may be generated and communicated to the respective display for example for each new image frame of a frame-based image generator.
[0108] In an example embodiment, in a multi-user arrangement, a common processing environment, for example the edge computing or cloud server arrangement mentioned above, may be configured to perform any analysis for a shared scene. That is, scene modelling for a scene viewable by different users may be performed using edge processors or cloud servers to reduce the demand for processing power on each device. For example, overlapping portions in views of a real-world scene captured from different displays may enable a reduction in the processing resources required to analyse the scene for any one user. The personalised lighting or ambiance effect for each user may be generated using the results of this common analysis to achieve the applied lighting profile for each user's view of the real-world scene.
[0109] In an example variant of this embodiment, the modelling of the scene may be performed progressively as the view point and/or view port of one or more users changes. Such modelling may include or be performed in a similar way to a 3D reconstruction of the scene by combining the information from one or a number of moving cameras.
[0110] In an example embodiment, ‘pick-and-place’ functionality may be provided to implement a selected light effect. ‘Pick-and-place’ functionality may for example enable a user to select a light source or illumination effect, e.g. from one or more pre-defined light sources or illumination effects and, as appropriate, to place or otherwise specify a location of the selected light source or illumination effect within the user's view of the scene. Such functionality may be presented to a user or used in a similar way to an artist selecting paints from a colour palette, enabling a user to design a desired light effect in a light-augmenting AR system.
[0111] Parameters that distinguish the different light sources or illumination effects may, for example, include one or more of a colour spectrum of the light source or illumination effect, its intensity, its position and its spread profile. In one example, a tray of different types of light source or illumination effect may be prepared, from which the user may choose one or a number of light sources or illumination effects. The user may define controlling parameters and adjust the desired parameters for each light source or illumination effect. The user may place each light source or illumination effect at a desired position relative to the scene and modify the light source position or properties based on the observed effect. One exemplary use case of this embodiment may be fast prototyping of a lighting setup for professional use.
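The 'tray' of placeable light sources and their distinguishing parameters may be sketched as simple data structures. The preset names, default values and class names below are hypothetical, chosen only to mirror the parameters listed in the text (colour spectrum, intensity, position, spread profile).

```python
from dataclasses import dataclass, field, replace

@dataclass
class VirtualLightSource:
    """Parameters distinguishing a placeable light source."""
    colour: tuple = (1.0, 1.0, 1.0)      # RGB weights of the spectrum
    intensity: float = 1.0
    position: tuple = (0.0, 0.0, 0.0)    # scene coordinates
    spread_deg: float = 45.0             # cone half-angle of the beam

@dataclass
class LightTray:
    """A 'tray' of preset sources the user can pick from and place."""
    presets: dict = field(default_factory=lambda: {
        "warm_lamp": VirtualLightSource(colour=(1.0, 0.85, 0.6), spread_deg=60.0),
        "spotlight": VirtualLightSource(intensity=3.0, spread_deg=10.0),
    })
    placed: list = field(default_factory=list)

    def pick_and_place(self, name, position):
        # Copy the preset so the tray itself is never modified.
        source = replace(self.presets[name], position=position)
        self.placed.append(source)
        return source
```

A user interface could then expose the `placed` list for the iterative adjust-and-observe workflow described above.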
[0112] In an example embodiment, at least one virtual light source or virtual light obstructer may be defined and implemented in a display to have dynamic characteristics. That is, at least one of the parameters defining the light source or light obstructer may change over time or in response to new events. The parameters associated with a light source or light obstructer, having dynamic characteristics, may for example include one or more of the colour spectrum, intensity, position and spread profile of the light source. Dynamic lighting in this embodiment may for example be used for overlaying, e.g. dancing light or glitter effect to a user's view of a scene, or illumination of a moving object within the scene.
[0113] In an example embodiment, a lighting profile may be applied to different parts of a field of view of a display with different levels of detail. The result may be a different light augmenting quality in different parts of the field of view. Region-wise quality of the light-augmenting may for instance be based on a region of interest: high-quality light-augmenting within a region of interest; and low-quality light-augmenting for parts of the field of view outside the region of interest. One example of low-quality light augmenting may be to ignore the 3D structure of the scene and apply constant (uniform or with a fixed profile) light attenuation or enrichment to a part of the scene regardless of the content in that part. Attenuating (filtering out) light using optical filters is one example. Such techniques for applying different levels of quality have the benefit that a lower overall level of processing is required to implement the augmented lighting in the display.
[0114] In an example variant of this embodiment, an eye tracking system may be implemented in the display system to determine the gaze direction and/or focus of a user. Data from the eye tracking system may be used to ensure that the augmented lighting is applied in high quality (e.g. with high resolution) to the part of the field of view which is in the determined direction of the gaze, while the remainder of the field of view is processed with a lower quality (e.g. lower resolution, ignoring the scene geometry).
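The region-wise quality scheme described above may be sketched as follows: a detailed light-augmenting function is evaluated only inside a circular region of interest around the gaze point, while the rest of the field of view receives a cheap uniform attenuation that ignores the scene's 3D structure. Names and the circular-region model are illustrative assumptions.

```python
import numpy as np

def region_wise_augment(frame, gaze_xy, radius, detailed_fn, uniform_gain=0.6):
    """Apply a detailed light-augmenting function only inside a circular
    region of interest around the gaze point; elsewhere, apply a constant
    attenuation regardless of scene content."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    roi = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    out = frame * uniform_gain            # low quality: uniform attenuation
    out[roi] = detailed_fn(frame)[roi]    # high quality inside the ROI
    return out
```

In practice `detailed_fn` would be the full scene-geometry-aware lighting computation, so the saving comes from evaluating it over a small fraction of the frame.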
[0115] In an example embodiment, the optical filter may be configured to filter one or more colours from a region of a scene and a re-colouring layer may be generated and displayed overlaying the region of the scene as additional image content. The user may then perceive the region of the scene in a different colour, according to the user's perception of the resultant combination of filtered light from the scene and the re-colouring light in the additional image content. The re-colouring effect may be designed to be realistic or non-realistic. The re-colouring effect to be applied may be defined in a user's profile indicating a preference for such a light effect. One reason for applying such a re-colouring may be to help to overcome a visual deficiency of the user, for example a “colour-blindness” difficulty which may, for example, reduce the user's ability to distinguish between green and red-coloured objects. A re-colouring of green or of red objects in a scene may enable the user to recognise a difference in colour of the objects.
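The combined effect of optically filtering a colour from a region and overlaying re-colouring content may be modelled, as a non-limiting sketch, on an image of the scene: predominantly green pixels are shifted towards blue, as might help a user with a red-green colour deficiency. The threshold, attenuation factor and function name are illustrative assumptions.

```python
import numpy as np

def recolour_green_to_blue(rgb, green_threshold=0.5):
    """Re-colour predominantly green pixels towards blue.

    Models the combined result of optically filtering out much of the green
    light from a region and displaying additional blue content over it.
    rgb: float array in [0, 1], shape (H, W, 3).
    """
    rgb = rgb.astype(float)
    green_mask = (rgb[..., 1] > green_threshold) & \
                 (rgb[..., 1] > rgb[..., 0]) & (rgb[..., 1] > rgb[..., 2])
    out = rgb.copy()
    out[green_mask, 2] += rgb[green_mask, 1]   # additional blue image content
    out[green_mask, 1] *= 0.2                  # optically filtered green light
    return np.clip(out, 0.0, 1.0)
```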
[0116] In an example embodiment, a user's experience in viewing a scene, augmented by any of the ways discussed above, may be further enhanced with one or a combination of other sensory inputs. The other sensory inputs may include one or more of audio content and tactile stimuli, provided by transducers associated with the display, or provided by separate systems.
[0117] Example embodiments described above have included a method for operating a see-through display, the display being configurable to display additional image content for augmenting a user's view of a scene visible through the display, the method comprising:
[0118] receiving image data defining an image of a scene visible through the display;
[0119] determining, by analysis of the received image data, one or more characteristics of the scene;
[0120] determining a light effect to be applied to the user's view of the scene;
[0121] generating additional image content according to the determined light effect and according to the one or more determined characteristics of the scene; and
[0122] displaying the additional image content to the user such that light received from the scene is combined with the additional image content, thereby to implement the determined light effect in the user's view of the scene.
[0123] According to the method, determining the one or more characteristics of the scene may comprise determining at least one of:
[0124] characteristics of an object visible in the scene;
[0125] the position of an object visible in the scene;
[0126] a profile of luminance across a region in the scene;
[0127] a profile of colour across a region in the scene;
[0128] a light model of the scene; and
[0129] a time of capture of the image data.
[0130] According to the method, determining the one or more characteristics of the scene may comprise at least one of constructing, obtaining and updating a map of the scene.
[0131] According to the method, determining the one or more characteristics of the scene may comprise executing a SLAM method to analyse the received image data.
[0132] The method may comprise generating the additional image content comprising light with a different profile of luminance to that of light received from a respective region in the scene.
[0133] The method may comprise generating the additional image content comprising light with a different profile of colour to that of light received from a respective region in the scene.
[0134] The method may comprise filtering light received from a region of the scene using an optical filter and combining the light passed by the optical filter with the additional image content, thereby to implement the determined light effect in the user's view of the scene.
[0135] The method may comprise generating the additional image content to take account of characteristics of the light passed by the optical filter. Optionally, the determined light effect comprises changing the colour of light received from a region in the scene having a first colour such that the user sees light of a second, different colour from the region in the scene.
[0136] According to the method, the determined light effect may comprise changing the luminance of light received from a region in the scene having a first level of luminance such that the user sees light of a second, different level of luminance from the region in the scene.
[0137] The method may comprise generating the additional image content comprising a time varying profile of light across a respective region in the scene.
[0138] The method may comprise:
[0139] receiving data indicative of a change in orientation of the display; and
[0140] using the received orientation change data to determine a position in an image area of the display for displaying the additional image content such that the additional image content appears to the user to remain aligned with a respective region in the scene after the indicated change in orientation of the display.
[0141] According to the method, determining a light effect to be applied to the user's view of the scene may comprise receiving user profile data defining the light effect to be applied.
[0142] Optionally, according to the method, the user profile data may define at least one event or condition for activating a respective light effect in the display, and the method may comprise:
[0143] responsive to determining that the at least one event or condition has occurred, generating and displaying additional image content to apply the determined light effect.
[0144] Optionally, the at least one event or condition comprises determining, by the analysis of the received image data, a presence of one or more predetermined characteristics of the scene.
[0145] The method may comprise controlling an active blocking layer to block or at least partially to block light received at the display from a selected region of the scene.
[0146] Optionally, the method comprises receiving data indicative of a change in orientation of the display; and
[0147] using the received orientation change data to control the blocking layer thereby to continue to block or at least partially to block the light received from the selected region of the scene following the indicated change in orientation of the display.
[0148] Optionally, the method comprises using the received data indicative of a change in orientation of the display as an indication of a change in the user's line of sight to the scene.
[0149] Optionally, the user's line of sight to the scene is assumed to be aligned with the centre of an image area of the display.
[0150] The method may comprise:
[0151] receiving data indicative of a line of sight of a user's eye through the display; and using the data to implement the light effect to take account of the line of sight of the user's eye through the display.
[0152] The method may comprise:
[0153] generating additional image content having a first level of image quality for display in a region of an image area of the display corresponding to the user's line of sight and generating additional image content having a second, lower level of image quality for display in other regions of the image area of the display.
[0154] Optionally, the additional image content having the first level of image quality comprises image content having a higher resolution than the additional image content generated having the second, lower level of image quality.
[0155] Optionally, the additional image content having the first level of image quality comprises image content having a higher level of colour resolution than that of additional image content generated having the second, lower level of image quality.
[0156] The method may comprise:
[0157] determining the user's line of sight through the display and determining a region in the image area of the display that corresponds to the user's determined line of sight through the display.
[0158] According to the method, the region in the scene may correspond to a determined object or other feature in the scene.
[0159] Example embodiments described above have included a see-through display, comprising:
[0160] an image generator configured to generate additional image content and to project the generated additional image content along a user's line of sight to a scene visible through the display such that light received from the scene is combined with the additional image content in the user's view of the scene;
[0161] a processor, linked to the image generator and configured: [0162] to receive image data representing an image of a scene visible through the display; [0163] to determine, by analysis of the received image data, one or more characteristics of the scene; [0164] to determine a light effect to be applied to the user's view of the scene; and [0165] to control the image generator to generate and to project additional image content according to the determined light effect and according to the one or more determined characteristics of the scene.
[0166] The see-through display may comprise an optical filter positioned to receive light from the scene and to pass received light, according to filtering characteristics of the optical filter, for viewing by the user.
[0167] The see-through display may comprise a camera positioned to capture images of a scene visible to the user through the display and to output to the processor corresponding image data.
[0168] The see-through display may comprise a camera positioned to capture images of a scene visible to the user through the optical filter and to output to the processor corresponding image data.
[0169] The see-through display may comprise a memory, accessible by the processor, configurable to store light effect profile data defining one or more predetermined light effects that may be applied in the display. Optionally, the memory is configurable to store user profile data defining one or more light effects to be applied in the display for the user. Optionally, the light effect profile data defines, for a said light effect, data defining at least one event or condition for triggering selection or application of the said light effect in the display. Optionally, the user profile data comprise data defining at least one event or condition for triggering selection or application of a defined light effect in the display.
[0170] Optionally, the at least one event or condition includes at least one of:
[0171] detection of a predetermined characteristic of the scene;
[0172] detection of an input by a user in a user interface;
[0173] detection of an audible input;
[0174] determination of a predetermined characteristic in a detected audible input;
[0175] detection of a gesture by the user;
[0176] determination of the presence of a predetermined object or other feature in the received image data;
[0177] a predetermined time; and
[0178] a predetermined position or orientation of the display.
[0179] Optionally, the see-through display comprises a blocking layer configurable at least partially to block light from a selected region in a user's view of the scene, wherein the processor is configured to control the configurable blocking layer according to the determined light effect and according to the one or more determined characteristics of the scene.
[0180] The see-through display may comprise one or more components of a tracker system arranged to determine changes in orientation of the display and to output, to the processor, orientation data indicative of a change in orientation of the display, the processor being configured to receive the orientation data and to use the received orientation data to generate the additional image content.
[0181] The see-through display may comprise a blocking layer configurable at least partially to block light from a selected region in a user's view of the scene, the processor being configured to control the configurable blocking layer according to the received orientation data.
[0182] The see-through display may comprise a head-up or head-mounted see-through display.
[0183] Example embodiments described above have included a computer program which, when loaded into and executed by a processor of a see-through display, causes the processor: [0184] to receive image data representing an image of a scene visible through the display; [0185] to determine, by analysis of the received image data, one or more characteristics of the scene; [0186] to determine a light effect to be applied to a user's view of the scene through the display; and [0187] to control an image generator of the display to generate and to project additional image content according to the determined light effect and according to the one or more determined characteristics of the scene.
[0188] Optionally, the computer program, when loaded into and executed by the processor of a see-through display, causes the processor to implement the method according to any one of the embodiments of the method described herein.
[0189] Example embodiments described above have included a computer program product, comprising a computer-readable medium, or access thereto, the computer-readable medium having stored thereon the computer program defined above.
[0190] The methods of the present disclosure may be implemented in hardware, or as software modules running on one or more processors. The methods may also be carried out according to the instructions of a computer program, and the present disclosure also provides a computer readable medium having stored thereon a program for carrying out any of the methods described herein. A computer program embodying the disclosure may be stored on a computer readable medium. Alternatively, or in addition, it may, for example, be in the form of a signal such as a downloadable data signal provided from a website accessible over the Internet, or it may take any other form.
[0191] It should be noted that the above-mentioned examples illustrate rather than limit the disclosure, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word “comprising” does not exclude the presence of elements or steps other than those listed in a claim; “a” or “an” does not exclude a plurality; and a single processor or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.