DYNAMIC IMAGE ADJUSTMENT TO ENHANCE OFF-AXIS VIEWING IN A DISPLAY ASSEMBLY
20170278218 · 2017-09-28
Inventors
- James A. Carpenter (Rochester Hills, MI, US)
- Thomas A. Seder (Warren, MI, US)
- William E. Conway (Birmingham, MI, US)
CPC classification
G06T3/20
PHYSICS
G09G2320/0673
PHYSICS
G09G2320/028
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
G09G2320/0261
PHYSICS
Abstract
A display assembly includes a display console displaying at least one image on an image plane. The image is divided into a plurality of pixels. A controller is operatively connected to the display console and includes a processor and tangible, non-transitory memory on which is recorded instructions for executing a method for dynamically adjusting the image in real-time for off-axis viewing. The controller is programmed to generate a compensation-over-viewing-angle map which includes respective compensation factors for each of the plurality of pixels for multiple viewing positions. In one embodiment, the controller is programmed to apply separate respective compensation factors for the instantaneous viewing positions of a first user at a time j and a second user at a time k. In another embodiment, the controller is programmed to apply first and second compensation factors simultaneously at a time m, for a first image and a second image, respectively.
Claims
1. A display assembly comprising: a display console displaying at least one image on an image plane, the at least one image divided into a plurality of pixels; a controller operatively connected to the display console, the controller including a processor and tangible, non-transitory memory on which is recorded instructions for executing a method for dynamically adjusting the at least one image in real-time for off-axis viewing; and wherein execution of the instructions by the processor causes the controller to generate a compensation-over-viewing-angle map, the compensation-over-viewing-angle map including respective compensation factors for each of the plurality of pixels for multiple viewing positions.
2. The assembly of claim 1, wherein the controller is programmed to: obtain a first instantaneous viewing position of a first user at a time j; and apply the respective compensation factor from the compensation-over-viewing-angle map to each of the plurality of pixels at the time j, for the first instantaneous viewing position.
3. The assembly of claim 2, wherein the controller is programmed to: obtain a second instantaneous viewing position of a second user at a time k; and apply the respective compensation factor from the compensation-over-viewing-angle map to each of the plurality of pixels at the time k, for the second instantaneous viewing position.
4. The assembly of claim 2: wherein the image plane defines a coordinate system with an origin, an x-axis, a y-axis and a z-axis, the x-axis and the y-axis defining an x-y plane; wherein the first instantaneous viewing position is based at least partially on a viewing reference vector (R) between the origin and a first eye reference point of the first user, a first angle (θ) and a second angle (φ); wherein the first angle (θ) is between the x-axis and an x-y projection vector (r.sub.xy), the x-y projection vector (r.sub.xy) being a projection of the viewing reference vector (R) on the x-y plane; and wherein the second angle (φ) is between a normal vector perpendicular to the display console and the viewing reference vector (R).
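The geometry of claim 4 can be sketched numerically. The following is a hypothetical illustration, not the patented implementation: it assumes the image plane lies in the x-y plane, so the display normal is the z-axis, and that the eye reference point is given in the same coordinate system.

```python
import math

def viewing_angles(eye_point):
    """Given an eye reference point (x, y, z) relative to the image-plane
    origin, return (theta, phi) in degrees.

    theta: angle between the x-axis and the projection of the viewing
           reference vector R onto the x-y plane.
    phi:   angle between the display normal (assumed here to be the
           z-axis) and the viewing reference vector R.
    """
    x, y, z = eye_point
    r = math.sqrt(x * x + y * y + z * z)      # magnitude of R
    theta = math.degrees(math.atan2(y, x))    # angle of the x-y projection
    phi = math.degrees(math.acos(z / r))      # angle from the normal
    return theta, phi
```

For an on-axis viewer directly in front of the display, e.g. `viewing_angles((0.0, 0.0, 500.0))`, both angles are zero, consistent with φ measuring off-axis deviation from the normal.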
5. The assembly of claim 4, further comprising: a camera operatively connected to the controller and configured to obtain the first eye reference point of the first user in real-time; wherein the viewing reference vector (R), the first angle (θ) and the second angle (φ) are dynamically adjusted based at least partially on the first eye reference point in real-time.
6. The assembly of claim 4, wherein: the display console is rotated by a tilt angle (α) about a rotation axis such that an original position (x.sub.1, y.sub.1) on the image is rotated to a modified position (x.sub.2, y.sub.2) relative to the origin; wherein a y-coordinate (y.sub.2) of the modified position (x.sub.2, y.sub.2) is a function of the original position (x.sub.1, y.sub.1) and the tilt angle (α), such that y.sub.2=(y.sub.1*cosine (α)); and wherein the controller is programmed to obtain a modified first angle (θ.sub.2) and a modified second angle (φ.sub.2) to compensate for the tilt angle (α).
7. The assembly of claim 6, wherein: the modified first angle (θ.sub.2) is based at least partially on a modified projection (r.sub.xy,2) and a radial distance (r) between the origin and the eye reference point of the first user, the modified first angle (θ.sub.2) being defined as [90−(cosine.sup.−1(r.sub.xy,2/r))]; wherein the modified second angle (φ.sub.2) is based at least partially on the modified projection (r.sub.xy,2) and a modified y-coordinate (y.sub.2), the modified second angle (φ.sub.2) being defined as [180−(sine.sup.−1(y.sub.2/r.sub.xy,2))]; and wherein the modified projection (r.sub.xy,2) is a function of the original position (x.sub.1, y.sub.1) and the tilt angle (α), such that r.sub.xy,2=(x.sub.2.sup.2+y.sub.2.sup.2).sup.0.5.
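The formulas of claims 6 and 7 can be checked with a worked example. The following is a hedged sketch: it assumes rotation about the x-axis (so the x-coordinate is unchanged by the tilt), angles in degrees, and sample values chosen for illustration only.

```python
import math

def tilt_compensated_angles(x1, y1, alpha_deg, r):
    """Illustrative sketch of the tilt compensation in claims 6-7.

    x1, y1:    original position on the image plane
    alpha_deg: tilt angle (alpha) about the rotation axis, in degrees
    r:         radial distance from the origin to the eye reference point
    Returns (theta2, phi2), the modified first and second angles, in degrees.
    """
    alpha = math.radians(alpha_deg)
    x2 = x1                                 # assumption: rotation axis is the x-axis
    y2 = y1 * math.cos(alpha)               # claim 6: y2 = y1 * cos(alpha)
    r_xy2 = math.sqrt(x2 ** 2 + y2 ** 2)    # modified projection r_xy,2
    theta2 = 90.0 - math.degrees(math.acos(r_xy2 / r))   # claim 7
    phi2 = 180.0 - math.degrees(math.asin(y2 / r_xy2))   # claim 7
    return theta2, phi2
```

With a zero tilt and the 3-4-5 sample point (x.sub.1=30, y.sub.1=40, r=100), r.sub.xy,2 is 50, so θ.sub.2 = 90 − cos⁻¹(0.5) = 30 degrees, which is easy to verify by hand.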
8. The assembly of claim 1: wherein the display console includes a first set of pixels configured to support presentation of a first image visible from a first side of the display console; wherein the display console includes a second set of pixels configured to support presentation of a second image visible from a second side of the display console; and further comprising: a barrier structure positioned adjacent to the display console, the barrier structure separating the first and second images such that the first image is not visible from the second side and the second image is not visible from the first side.
9. The assembly of claim 8: wherein a first user is positioned at the first side of the display console and a second user is positioned at the second side of the display console; wherein the controller is programmed to: obtain first and second instantaneous viewing positions of the first and second users, respectively; apply a first and a second compensation factor simultaneously, at a time m, to the first and second images, respectively, based at least partially on the compensation-over-viewing-angle map; and wherein the first compensation factor is applied to the first set of pixels of the first image for the first instantaneous viewing position of the first user at the time m and the second compensation factor is applied to the second set of pixels of the second image for the second instantaneous viewing position of the second user at the time m.
10. The assembly of claim 1, wherein the at least one image is characterized by gray scale levels and the respective compensation factors adjust respective luminance steps of the gray scale levels in real-time.
11. The assembly of claim 1, wherein each of the plurality of pixels is characterized by a respective original gamma factor (γ.sub.o) and a respective voltage (V.sub.o), and wherein the controller is programmed to: employ a predefined desired gamma constant (γ.sub.d) to determine a respective desired luminance (L.sub.d) at the respective original voltage (V.sub.o), for each of the plurality of pixels; determine a respective shifted voltage (V.sub.s) that results in the respective desired luminance (L.sub.d) at the original gamma factor (γ.sub.o), for each of the plurality of pixels; and change a respective voltage applied to each of the plurality of pixels from the original voltage (V.sub.o) to the shifted voltage (V.sub.s).
12. The assembly of claim 1, wherein each of the plurality of pixels is characterized by a respective original gamma factor (γ.sub.o) and a respective original gray scale value (G.sub.o), and wherein the controller is programmed to: employ a predefined desired gamma constant (γ.sub.d) to determine a respective desired luminance (L.sub.d) at the respective original gray scale value (G.sub.o), for each of the plurality of pixels; determine a respective shifted gray scale value (G.sub.s) that results in the desired luminance (L.sub.d) at the respective original gamma factor (γ.sub.o), for each of the plurality of pixels; and change the respective original gray scale value (G.sub.o) for each of the plurality of pixels to the respective shifted gray scale value (G.sub.s).
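Claims 11 and 12 describe the same gamma correction in voltage and gray-scale terms. Below is a minimal sketch of the gray-scale variant of claim 12, assuming the standard power-law display model L = (G/G.sub.max).sup.γ, which the patent does not itself specify; the voltage variant of claim 11 is analogous with V.sub.o and V.sub.s in place of G.sub.o and G.sub.s.

```python
def shifted_gray_value(g_o, gamma_o, gamma_d, g_max=255):
    """Illustrative sketch of the gray-scale shift of claim 12.

    g_o:     original gray scale value (G_o)
    gamma_o: respective original (off-axis) gamma factor of the pixel
    gamma_d: predefined desired gamma constant (e.g. 2.2)
    Returns the shifted gray scale value (G_s) that reproduces the
    desired luminance (L_d) under the pixel's original gamma factor.
    """
    l_d = (g_o / g_max) ** gamma_d          # desired luminance at G_o under gamma_d
    g_s = g_max * l_d ** (1.0 / gamma_o)    # gray value yielding L_d at gamma_o
    return g_s
```

When the pixel's off-axis gamma already matches the desired constant, the shift is the identity; when the off-axis gamma is lower (a washed-out response), the shifted gray value is driven below the original to restore the intended luminance step.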
13. A method for dynamically adjusting at least one image in real-time for off-axis viewing in a display assembly having a controller, a display console displaying the at least one image on an image plane, the at least one image being divided into a plurality of pixels, the method comprising: generating a compensation-over-viewing-angle map having respective compensation factors for each of the plurality of pixels for multiple viewing positions, via the controller; obtaining a first instantaneous viewing position of a first user at a time j, via a camera; applying the respective compensation factor from the compensation-over-viewing-angle map to each of the plurality of pixels at the time j, for the first instantaneous viewing position via the controller; and controlling the at least one image based on the respective compensation factor from the compensation-over-viewing-angle map.
14. The method of claim 13, further comprising: obtaining a second instantaneous viewing position of a second user at a time k, via the camera; and applying the respective compensation factor from the compensation-over-viewing-angle map to each of the plurality of pixels at the time k, for the second instantaneous viewing position, via the controller.
15. The method of claim 13, wherein: the display assembly further includes a barrier structure positioned adjacent to the display console; the display console includes a first set of pixels configured to support presentation of a first image visible from a first side of the display console; the display console includes a second set of pixels configured to support presentation of a second image visible from a second side of the display console; and the barrier structure separates the first and second images such that the first image is not visible from the second side and the second image is not visible from the first side.
16. The method of claim 15, further comprising: obtaining a first instantaneous viewing position of a first user positioned at the first side of the display console at a time m, via the camera; obtaining a second instantaneous viewing position of a second user positioned at the second side of the display console at the time m, via the camera; applying a first and a second compensation factor simultaneously, at the time m, to the first and second images, respectively, based at least partially on the compensation-over-viewing-angle map, via the controller; and wherein the first compensation factor is applied to the first set of pixels of the first image for the first instantaneous viewing position of the first user at the time m and the second compensation factor is applied to the second set of pixels of the second image for the second instantaneous viewing position of the second user at the time m.
17. The method of claim 13, wherein each of the plurality of pixels is characterized by a respective original gamma factor (γ.sub.o) and a respective original voltage (V.sub.o), and further comprising: employing a predefined desired gamma constant (γ.sub.d) to determine a respective desired luminance (L.sub.d) at the respective original voltage (V.sub.o), for each of the plurality of pixels, via the controller; determining a respective shifted voltage (V.sub.s) that results in the respective desired luminance (L.sub.d) at the original gamma factor (γ.sub.o), for each of the plurality of pixels, via the controller; and changing a respective voltage applied to each of the plurality of pixels from the original voltage (V.sub.o) to the shifted voltage (V.sub.s), via the controller.
18. The method of claim 13, wherein each of the plurality of pixels is characterized by a respective original gamma factor (γ.sub.o) and a respective original gray scale value (G.sub.o), and further comprising: employing a predefined desired gamma constant (γ.sub.d) to determine a respective desired luminance (L.sub.d) at the respective original gray scale value (G.sub.o), for each of the plurality of pixels, via the controller; determining a respective shifted gray scale value (G.sub.s) that results in the desired luminance (L.sub.d) at the respective original gamma factor (γ.sub.o), for each of the plurality of pixels, via the controller; and changing the respective original gray scale value (G.sub.o) for each of the plurality of pixels to the respective shifted gray scale value (G.sub.s), via the controller.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0019] Referring to the drawings, wherein like reference numbers refer to like components.
[0028] Method 200 is a compensation process to correct the image for appearance differences caused by off-axis viewing. The human perception of brightness is a non-linear response to a linear change in luminance. The method 200 dynamically optimizes display gray scale performance for all viewing angles, increasing readability, aesthetics, and user satisfaction. The image 16 (or images 123, 125) may be characterized by gray scale levels, and the respective compensation factors may adjust respective luminance steps of the gray scale levels for each pixel 17 in the image 16 in real-time. The calibration parameter that controls the luminance steps of the gray levels, referred to herein as the compensation factor (C), is dynamically adjusted to optimize the viewing performance at an instantaneous viewing angle.
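The compensation process above can be sketched in code. This is a hypothetical illustration only: the map layout, the 5-degree angle bins, and the multiplicative form of the compensation factor (C) are assumptions made for the example, as the patent does not fix a particular data structure.

```python
def apply_compensation(pixels, comp_map, theta, phi, bin_deg=5):
    """Apply per-pixel compensation factors for an instantaneous viewing angle.

    pixels:   list of gray values for the image
    comp_map: dict mapping a quantized (theta, phi) bin, in degrees, to a
              list of per-pixel compensation factors C (the
              compensation-over-viewing-angle map)
    Returns the compensated pixel values, clamped to the 0-255 range.
    """
    # Quantize the instantaneous viewing angles to the nearest map bin.
    key = (round(theta / bin_deg) * bin_deg, round(phi / bin_deg) * bin_deg)
    factors = comp_map[key]
    # Scale each pixel by its compensation factor and clamp.
    return [min(255, max(0, round(p * c))) for p, c in zip(pixels, factors)]
```

In use, the camera-tracked viewing position would be converted to (θ, φ) each frame and the lookup repeated, so the correction follows the user's head movement in real-time.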
[0029] In block 204 of
[0030] In block 206, the controller may obtain a modified first angle (θ.sub.2) and a modified second angle (φ.sub.2) to compensate for the tilt angle (α):
θ.sub.2=[90−(cosine.sup.−1(r.sub.xy,2/r))].
φ.sub.2=[180−(sine.sup.−1(y.sub.2/r.sub.xy,2))].
[0031] In block 208 of
[0032] In block 210 of
[0033] In block 214 of
[0034] As noted above, the compensation factor (C) for each pixel (in the image 16 of the first embodiment or images 123 and 125 of the second embodiment) may be a gray scale shift (G.sub.s) or a voltage shift (V.sub.s).
[0036] The controller 50 (and execution of the method 200) improves the functioning of the device 12 by improving the readability and aesthetics of an image observed at an off-axis angle, thereby improving the accuracy of user interaction with the device 12. For example, the first user 20 may rely on the readability of the displayed information to make control decisions for the device 12, e.g. changing the trajectory of the device 12.
[0037] The controller 50 of
[0038] Look-up tables, databases, data repositories or other data stores described herein may include various kinds of mechanisms for storing, accessing, and retrieving various kinds of data, including a hierarchical database, a set of files in a file system, an application database in a proprietary format, a relational database management system (RDBMS), etc. Each such data store may be included within a computing device employing a computer operating system such as one of those mentioned above, and may be accessed via a network in any one or more of a variety of manners. A file system may be accessible from a computer operating system, and may include files stored in various formats. An RDBMS may employ the Structured Query Language (SQL) in addition to a language for creating, storing, editing, and executing stored procedures, such as the PL/SQL language mentioned above.
[0039] The detailed description and the drawings or figures are supportive and descriptive of the invention, but the scope of the invention is defined solely by the claims. While some of the best modes and other embodiments for carrying out the claimed invention have been described in detail, various alternative designs and embodiments exist for practicing the invention defined in the appended claims. Furthermore, the embodiments shown in the drawings or the characteristics of various embodiments mentioned in the present description are not necessarily to be understood as embodiments independent of each other. Rather, it is possible that each of the characteristics described in one of the examples of an embodiment can be combined with one or a plurality of other desired characteristics from other embodiments, resulting in other embodiments not described in words or by reference to the drawings. Accordingly, such other embodiments fall within the framework of the scope of the appended claims.