Distortion viewing with improved focus targeting
10636117 · 2020-04-28
Assignee
Inventors
- Hendrik Frans Verwoerd Boshoff (Stellenbosch, ZA)
- Willem Morkel Van Der Westhuizen (Stellenbosch, ZA)
- Jan Pool (Stellenbosch, ZA)
- Adri Smuts (Durbanville, ZA)
CPC classification
- G06F3/04842
- G06F2203/04805
- G06F2203/04806
- G06F3/04812
International classification
- G06F3/0484
- G06T3/40
Abstract
The invention provides a method for human-computer interaction (HCI) on a graphical user interface (GUI). The method includes the steps of displaying a plurality of objects positioned in relation to each other and in relation to the display window; determining user input; distorting at least one of the position relations according to a magnification function with a focal dip, where the focal position of the magnification function is controllable by the user input; and updating the distortion whenever the relevant user input changes.
Claims
1. A method for human-computer interaction (HCI) on a graphical user interface (GUI), which includes the steps of: displaying a plurality of objects at distinct positions of a display device; determining input from a user; determining a point on the display device corresponding to the input; modifying positions of the display device at which at least a subset of the plurality of the objects are displayed according to a distortion function that depends on a magnification function and on a demagnification function where: the magnification function has a first focal point that is the point on the display device corresponding to the input and a first radius of distortion, the demagnification function has a second focal point that is the point on the display device corresponding to the input and a second radius of distortion, the second radius of distortion being less than the first radius of distortion, and the distortion function is a sum of the magnification function and the demagnification function; displaying at least the subset of the plurality of objects at the modified positions of the display device; determining a change in the input; determining a modified point on the display device corresponding to the changed input; updating the first focal point and the second focal point to the modified point on the display device; and updating positions of the display device at which at least the subset of the plurality of the objects are displayed using the distortion function with the updated first focal point and the updated second focal point.
2. The method as claimed in claim 1, wherein the objects are discrete or discretised items, and in the latter case the method includes the step of discretising the items.
3. The method as claimed in claim 1, wherein the relations are distorted in a non-linear way.
4. The method as claimed in claim 1, wherein the magnification function is a non-negative, slowly changing, continuous function, with a flat or broad maximum around a single focal point, decreasing in all directions away from the focal point.
5. The method as claimed in claim 1, wherein the method uses an indirect specification of the magnification function.
6. The method as claimed in claim 5, wherein the magnification function is indirectly specified by specifying a transformation function, which yields the magnification function by mathematical differentiation.
7. The method as claimed in claim 5, wherein the magnification function is indirectly specified by specifying a distortion function, which yields the magnification function by mathematical integration.
8. The method as claimed in claim 5, wherein the magnification function is indirectly specified by specifying displacement of objects.
9. The method as claimed in claim 1, wherein the positions of the objects in the neighbourhood of an object in focus determine the space which is available for rendering the object in focus.
10. The method as claimed in claim 1, wherein objects are scaled and rendered by scaling first the object closest to the focal point, and, proceeding in order of increasing distance, to the furthest one.
11. The method as claimed in claim 1, wherein, in the case of occlusion, objects are scaled and rendered by completing the scaling, then rendering the objects, first the object furthest from the focal point, and, proceeding in order of decreasing distance, to the nearest one, to allow objects close to the focal point to end up on top.
12. The method of claim 1, wherein an object having a position modified according to the distortion function is displayed with a changed size that is directly related to a relative distance between the object and one or more objects, the relative distance modified from the modification to the position of the object according to the distortion function.
13. A device for human-computer interaction (HCI) on a graphical user interface (GUI), which device is configured to: display a plurality of objects at distinct positions of a display device; determine input from a user; determine a point on the display device corresponding to the input; modify positions of the display device at which at least a subset of the plurality of the displayed objects are displayed according to a distortion function that depends on a magnification function and on a demagnification function where: the magnification function has a first focal point that is the point on the display device corresponding to the input and a first radius of distortion, the demagnification function has a second focal point that is the point on the display device corresponding to the input and a second radius of distortion, the second radius of distortion being less than the first radius of distortion, and the distortion function is a sum of the magnification function and the demagnification function; display at least the subset of the plurality of objects at the modified positions of the display device; determine a change in the input; determine a modified point on the display device corresponding to the changed input; update the first focal point and the second focal point to the modified point on the display device; and update positions of the display device at which at least the subset of the plurality of the objects are displayed using the distortion function with the updated first focal point and the updated second focal point.
14. The device of claim 13, wherein an object having a position modified according to the distortion function is displayed with a changed size that is directly related to a relative distance between the object and one or more objects, the relative distance modified from the modification to the position of the object according to the distortion function.
Description
(1) The invention is now described by way of examples with reference to the accompanying drawings.
(2) In the drawings, [brief descriptions of the figures omitted].
EXAMPLE 1. IMPROVED FOCUS TARGETING BY FOCUS DEMAGNIFICATION
(16) Refer to
(17) The method also includes the step of determining user input. The example makes use of a touch screen device like a tablet computer, and the user input is the coordinates (X_t, Y_t) of the centre point of the area where the user touches the screen, which doubles as the control space 10.
(18) The commonly used rectangular coordinates (X, Y) of any point P may be converted to polar coordinates (R, θ), where
R = √(X² + Y²) and θ = tan⁻¹(Y/X).
(19) Given the position F of a focal point with coordinates (X_f, Y_f), and the radius of distortion R_s around F, the normalized relative radial distance r from any point P to the focal point F may be calculated as
(20) r = √((X − X_f)² + (Y − Y_f)²) / R_s.
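The polar conversion and the normalized relative radial distance above can be sketched in Python; the function names are illustrative, not taken from the patent:

```python
import math

def to_polar(x, y):
    """Convert rectangular coordinates (X, Y) to polar (R, theta)."""
    return math.hypot(x, y), math.atan2(y, x)

def normalized_radial_distance(p, f, r_s):
    """Normalized relative radial distance r from point P to focal point F,
    scaled by the radius of distortion R_s; r >= 1 lies outside the lens."""
    (x, y), (xf, yf) = p, f
    return math.hypot(x - xf, y - yf) / r_s
```

A point at half the distortion radius from the focus, for instance, yields r = 0.5.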
(21) The method also includes the step of distorting at least one of the position relations according to a magnification function with a focal dip, where the focal position of the magnification function is controllable by the user input. The distorting in this example is performed with a distortion mapping 11, which maps points from the persistent space 12 to a mapped space 13, which serves as the buffer in computer memory for the display 14. The distortion mapping 11 is specified by the following transformation function of r:
(22)
(23) Applying the distortion mapping 11 to points in the persistent space 12 which represent the positions of two or more objects, results in distorting the position relation between the objects involved, as the relation appears in the mapped space 13 and the display 14.
(24) The magnification function corresponding to the distortion mapping 11, which may be obtained from (E1.1) by mathematical differentiation with respect to r, is
(25)
(26) In agreement with the requirement of the distorting step, this magnification function has a focal dip, which shows up as a minimum at r=0. Points with r>1 fall outside the range of the distortion.
(27) In order to make the focal position of the magnification function controllable by the user input, this example establishes a relation between the user touch coordinates (X_t, Y_t) and the coordinates (X_f, Y_f) of the focal point, as follows. The touch screen device provides a tight connection between the control space 10 and the display space 14. The undistorted or linear mapping from the persistent space 12 through the mapped space 13 to the display space 14 may be inverted to generate coordinates representing the user touch in the persistent space 12. The focal point coordinates in the persistent space 12 are then made equal to the user touch coordinates in that space, or linked to them by a constant offset.
(28) The linear mapping is then used in the forward direction to map the focal point F in the persistent space 12 to the image point F_d with coordinates (X_df, Y_df) in the mapped space 13. In this example with the touch screen, the linear mapping is the identity function T_i(r) = r with corresponding M_i(r) = 1.
(29) F_d is called the image of F under the linear transformation, because the linear transformation maps F to F_d. The scaling factor of the linear transformation determines the size of the radius of distortion R_sd in the mapped space 13.
(30) The position of every object in the persistent space 12 is then transformed according to equation (E1.1), to yield the coordinates (T(r), θ), where T(r) is the normalized relative radial distance with respect to F_d in the mapped space 13. This distance may be denormalized:
R_d = R_sd · T(r).
(31) Now the rectangular coordinates (X_d, Y_d) of the point Q, which is the image under T(r) of the point P, can be calculated as
X_d = X_df + R_d cos θ and Y_d = Y_df + R_d sin θ,
and the corresponding object can be represented at point Q in the mapped space 13. The angle θ remains unchanged by the distortion mapping T(r).
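The mapping of a point P to its image Q can be sketched as follows. Since equation (E1.1) is not reproduced in this text, the transformation T(r) = r + a·r²(1−r)² below is an assumed stand-in, chosen only because its derivative M(r) = 1 + 2a·r(1−r)(1−2r) has the required focal dip (a minimum at r = 0) while exceeding 1 in the mid-range; the patent's own (E1.1) differs:

```python
import math

def transform(r, a=4.0):
    """Illustrative stand-in transformation T(r) = r + a*r^2*(1-r)^2.
    It satisfies T(0) = 0 and T(1) = 1, and its derivative has a
    minimum at r = 0 (the focal dip). Not the patent's (E1.1)."""
    return r + a * r**2 * (1.0 - r)**2

def distort(p, f_d, r_sd, t=transform):
    """Map point P to its image Q: express P in polar form about the
    focal image F_d, replace the normalized radius r by T(r), and
    denormalize with R_d = R_sd * T(r). The angle theta is unchanged;
    points with r >= 1 lie outside the lens and are left untouched."""
    x, y = p
    xdf, ydf = f_d
    r = math.hypot(x - xdf, y - ydf) / r_sd
    if r >= 1.0:
        return p
    theta = math.atan2(y - ydf, x - xdf)
    r_d = r_sd * t(r)
    return (xdf + r_d * math.cos(theta), ydf + r_d * math.sin(theta))
```

With this stand-in, a point halfway along the lens radius is pushed outward (T(0.5) = 0.75), while the focal point itself stays put.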
(32) The method also includes the step of updating the distortion whenever the relevant user input changes. As the display is initially undistorted, the first detection of touch is used to effect a discrete change in the form of the distortion mapping 11 from the identity transformation to the one given in equation (E1.1).
(33) When the user input changes, the focal point F and therefore the distortion mapping 11 depending on it is changed. The mapping 11 is then applied to the contents of the persistent space 12, the result represented in the mapped space 13, and displayed on the display hardware 14. An example of such a display is shown in
(34) User control of the touch coordinates (X_t, Y_t) has the effect of activating the distortion and moving its focal point around, although a static image like
(35) The tablet hardware is fast enough to generate the illusion of continuous movement of a distortion lens over the grid. Breaking finger contact with the touch screen deactivates the distortion again and returns the computer to rendering an undistorted grid.
(36) It will be appreciated that other aspects of the distortion mapping 11 may also be changed interactively based on multi-touch input. In particular, the strength of the distortion or magnification factor, dependent on c and d in equations (E1.1) and (E1.2), may be controlled by a second finger touching the touch control 10. With different hardware, e.g. three dimensional input, two dimensional focus targeting as well as control of the strength of distortion may be jointly controlled by moving a single point in three dimensions.
(37) When the distortion mapping 11 is replaced by another one having a magnification function without the focal dip, the user finds focus targeting difficult or even impossible, especially for cases with a large magnification factor. Once the focal dip is restored, focus targeting becomes not only possible, but easy. This clearly illustrates the utility of the method. However, it is difficult to render the essentially interactive effect with static images, or even with a video of someone else performing the control. It has to be experienced at first hand.
(38) Further utility may be obtained by triggering some action on breaking finger contact. For example, the easier focus targeting allows the user to adjust an initial, crudely aimed ballistic movement, by performing control based on the enlarged visual feedback from the distortion viewing.
(39) It will be appreciated that many different distortion mappings 11 may be found which include a focal dip in their magnification function, and that equations (E1.1) and (E1.2) constitute but an illustrative example.
(40) Two more examples of distortion mappings follow, specified by their distortion functions, noting that their magnification functions indeed do have focal dips. The first is the power function:
(41)
and the second the product exponential or Lambert function:
(42)
These equations have been implemented as options in the application of the current example, and they perform well.
EXAMPLE 2. DOUBLE LENSING AS A REMEDY FOR A DRAWBACK OF FOCUS DEMAGNIFICATION
(43) Refer to
(44) As stated in the general description, the method, in the step of updating the distortion whenever the relevant user input changes, may include the following step: rendering at least one object in the view with a size related to a measure of the space which is available to it after the position distortion.
(45) In this example, instead of using the distortion mapping 11 to determine both the position and the scale of each object, it is only used to determine the position. The scale at which each object is rendered is instead determined during view generation 15, which is inserted as a transformation from the mapped space 13 to the display 14. The scale is determined as follows.
(46) On an infinite rectangular grid, every object would have four neighbouring positions. If an object's position is at point P, the distances between P's image Q and the images in the mapped space 13 of these four positions may be denoted by X⁺, X⁻, Y⁺ and Y⁻. Even if no object is actually present at a certain grid position, that position can still be calculated, before and after distortion; i.e. both in the persistent space 12 and in the mapped space 13.
(47) A measure of the space available to a particular object in the mapped space 13 is calculated in this example as the average of the distances to its four neighbouring positions, either the arithmetic average d_a given by
d_a = (|X⁺| + |X⁻| + |Y⁺| + |Y⁻|)/4, (E2.1a)
or the geometric average d_g given by
d_g = (|X⁺| · |X⁻| · |Y⁺| · |Y⁻|)^(1/4). (E2.1b)
(49) The object closest to the focal point F is scaled to a radius of, say, 80% of its average neighbourly distance, while other objects are scaled to 50% of theirs. This ensures that each object uses a substantial part of the space available to it, but it does not guarantee that overlap will always be avoided.
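The two averages and the scaling rule can be sketched as follows; the function names and the parameterized shares (80% for the focused object, 50% otherwise, as in the example) are illustrative:

```python
def arithmetic_avg(dists):
    """Arithmetic average d_a of the four neighbour distances (E2.1a)."""
    return sum(abs(d) for d in dists) / 4.0

def geometric_avg(dists):
    """Geometric average d_g = (|X+| * |X-| * |Y+| * |Y-|)^(1/4) (E2.1b)."""
    prod = 1.0
    for d in dists:
        prod *= abs(d)
    return prod ** 0.25

def object_radius(dists, in_focus, focus_share=0.8, other_share=0.5):
    """Radius for rendering an object: a share of its average
    neighbourly distance, larger for the object in focus."""
    share = focus_share if in_focus else other_share
    return share * geometric_avg(dists)
```

The geometric average penalizes one very short neighbour distance more strongly than the arithmetic one, which helps keep objects inside a locally compressed region small.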
(50) In this example, a rectangular album cover or blank image is associated with each object. With the distortion mapping 11 equal to the identity function, the album covers on the undistorted grid are shown in
(51) A screen shot during an example of touch interaction, with scaling according to equation (E2.1b), is shown in
(52) The ability to separate the representation of the background space (ground), including the positions of the objects, from the representation of the objects in the foreground (figure) is a precondition for this to work.
(53) Note the overlap between object images, especially in the region where the space is compressed. The order of rendering is important here: start at the object furthest from the focus and work back to the nearest.
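This far-to-near rendering order can be sketched as a simple sort; the helper is hypothetical, not from the patent:

```python
import math

def render_order(objects, focus):
    """Sort object positions so the one furthest from the focal point
    is drawn first and the nearest last; since later draws land on top,
    objects close to the focus end up on top of any overlap."""
    return sorted(objects,
                  key=lambda p: math.hypot(p[0] - focus[0], p[1] - focus[1]),
                  reverse=True)
```

Each object would then be drawn in the returned order, nearest to the focus last.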
(54) This example illustrates the native setting for the method which combines focus demagnification with double (figure & ground) lensing. The setting includes:
(55) a discrete set of objects
(56) embedded in a continuous space
(57) with a focused distortion
(58) which can be interactively controlled.
(59) As in Example 1, focus demagnification is used to ease focus targeting. Unlike in Example 1, the undesirable side-effect of focus demagnification is here overcome by separate, linear figure lenses in addition to the non-linear ground lens. The combination is referred to as double lensing.
EXAMPLE 3. THREE DIMENSIONAL CONTROL OF FOCUS DEMAGNIFICATION AND DOUBLE LENSING
(60) In the third example of the invention, the method for human-computer interaction (HCI) on a graphical user interface (GUI) is applied in a way similar to the second example, but with three-dimensional control. Input is obtained with the Leap Motion Controller (www.leapmotion.com), represented as the 3D Control block 10 in
(61) As stated in the general description, the method, in the step of generating a view of the whole or a part of the mapped space and displaying the view, may include the following step: rendering each object in the view at a scale proportional to a measure of the total space which is available to it, after the displacement of its position and the positions of its surrounding objects by the distortion mapping.
(62) In the current example, this step is performed during the view generation 15 which is based on the mapped space 13, which view is to be displayed on the display hardware 14. Instead of using the distortion mapping 11 to determine both the position and the scale of each object, it is only used to determine the position. The scale at which each object is rendered is instead determined in the following way.
(63) Each display object has indices i and j associated with its row and column respectively. The radial distance in the (x, y) plane from the object ij to the user pointing object, normalized by the radius of the lens, is denoted by r_ij. The maximum size W_ij for each object is calculated as
(64)
(65) where m is a magnification factor, W_0 is a constant common to all objects, and q is a free exponent. The display size W_ij(z) is a function of the third dimension z, where the function increases with diminishing z. The maximum in-range distance of z is normalized to 1, and the function is chosen to have boundary conditions W_ij(0) = W_ij, calculated according to equation (E3.1), and W_ij(1) = 1. The linear function that meets these conditions and is used in this example is
W_ij(z) = W_ij + (1 − W_ij)·z. (E3.2)
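Equation (E3.2) translates directly into code; the function name is illustrative:

```python
def display_size(w_ij, z):
    """Display size as a linear function of the normalized depth z (E3.2):
    W_ij(z) = W_ij + (1 - W_ij) * z, so W_ij(0) = W_ij (the maximum size
    from (E3.1)) and W_ij(1) = 1. For W_ij > 1, moving the pointing
    finger closer to the screen (smaller z) increases the size."""
    return w_ij + (1.0 - w_ij) * z
```

At full range (z = 1) every object is drawn at its unit size, and the (E3.1) maximum is reached only at z = 0.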
(66) The objects in this example are photographs. With the distortion mapping 11 equal to the identity function, the photos on the undistorted grid are shown in
(67)
(68) Note the overlap between object images, especially in the region at the edge of the lens, where the space is compressed. The order of rendering is important here: start at the object furthest from the focus and work back to the nearest.
(69) When the user moves his finger closer to the screen, the finger coordinate in the third dimension (z) diminishes, leading to a larger display size for objects near the focus, according to equation (E3.2). Such a situation is depicted in
(70) While the dynamic effects of the focus demagnification step that allows easier focus targeting are difficult to convey with static images, it should at least be clear that the double lensing step has succeeded in compensating for the effect of the demagnification on the items at and around the focus.
(71) The availability of three control dimensions means that the user can control which object is in focus with the same finger that controls at what magnification that object appears. This eases the viewing and browsing of and selection from a large set of photographs on a relatively small screen.
(72) The method illustrated in this example can also be applied to GUI icons or other interactive images. The boundary conditions and the equations given, are meant to illustrate the method, and not to limit its scope.
EXAMPLE 4. SUGGESTING A FOCAL DIP
(73) The next example is a fragment quoted from PCT/ZA2012/0000059, filed by the same applicant as this patent application. Refer to
(74) The function family used for calculating relative angular positions may be sigmoidal, as follows. θ_ip is the relative angular position of interactive object 18.i with respect to the line connecting the reference point 20 to the pointer's coordinates 12. The relative angular position is normalized to a value between −1 and 1 by calculating
(75) u_ip = θ_ip/π.
Next the value of v_ip is determined as a function of u_ip and r_p, using a piecewise function based on u·e^u for 0 ≤ u < 1/N, a straight line for 1/N ≤ u < 2/N, and 1 − e^(−u) for 2/N ≤ u ≤ 1, with r_p as a parameter indexing the strength of the nonlinearity. The relative angular position θ_ip of display coordinates 16.i, with respect to the line connecting the reference point 20 to the pointer 14 in 10.2, is then calculated as θ_ip = π·v_ip.
(76) The three equations in this example together describe a one dimensional sigmoidal transformation function, giving v.sub.ip as a function of u.sub.ip and r.sub.p. Any transformation function can be differentiated to yield the associated magnification function. The magnification function in this case would have a minimum around the u=0 focal position, which suggests a focal dip.