Interactive image selection method

09798741 · 2017-10-24

Abstract

A method for browsing a collection of digital images on a soft-copy display comprising: receiving a collection of digital images; interactively selecting, by a user, a digital image using a user interface; determining a plurality of subsets of the digital images, wherein each subset shares a common category with the selected digital image; and displaying the subsets of digital images on the soft-copy display, together with the selected digital image.

Claims

1. A method comprising: receiving a plurality of digital images; determining, for each of the plurality of digital images, image features; calculating a strength value for each of the plurality of digital images; sorting the plurality of digital images into classes based upon the image features; receiving an indication of a selection of one or more of the plurality of digital images from a user; receiving user data associated with the user selected one or more of the plurality of digital images; measuring an emotive response of the user to the user selected one or more of the plurality of digital images; updating, using one or more electronic processors, the strength value for each of the user selected one or more of the plurality of digital images based upon the user data, wherein the user data includes the measured emotive response, and wherein the strength value is updated using captured facial images of the user, the facial images captured while the user is viewing the one or more of the plurality of digital images; and automatically selecting, by the one or more electronic processors, at least one image from the plurality of digital images in addition to the user selected one or more of the plurality of digital images based upon the selection of one or more of the plurality of digital images and the strength values of the plurality of digital images, wherein the at least one additional image is selected from one or more image classes to which the user selected one or more of the plurality of digital images belong.

2. The method of claim 1, wherein measuring the emotive response of the user comprises analyzing user physiology.

3. The method of claim 2, wherein the user physiology includes facial changes, pupil size, and skin conductivity, treated as a measure of viewer reaction.

4. The method of claim 1, wherein the user data comprises a display duration for each of the user selected one or more of the plurality of digital images.

5. The method of claim 1, wherein the user data comprises an interaction count with each of the user selected one or more of the plurality of digital images.

6. The method of claim 1, further comprising: receiving an image of a face of the user; recognizing the face of the user in a subset of the plurality of digital images; and updating the strength value of each digital image in the subset of the plurality of digital images based upon the recognition of the face of the user.

7. The method of claim 1, further comprising updating the strength value of each of the plurality of digital images based upon the image features.

8. The method of claim 7, wherein the image features comprise image sharpness or image contrast.

9. A non-transitory computer readable medium having instructions stored thereon, the instructions comprising: instructions to receive a plurality of digital images; instructions to determine, for each of the plurality of digital images, image features; instructions to calculate a strength value for each of the plurality of digital images; instructions to sort the plurality of digital images into classes based upon the image features; instructions to receive an indication of a selection of one or more of the plurality of digital images from a user; instructions to receive user data associated with the user selected one or more of the plurality of digital images; instructions to measure an emotive response of the user to the user selected one or more of the plurality of digital images; instructions to update the strength value for each of the user selected one or more of the plurality of digital images based upon the user data, wherein the user data includes the measured emotive response, and wherein the strength value is updated using captured facial images of the user, the facial images captured while the user is viewing the one or more of the plurality of digital images; and instructions to automatically select at least one image from the plurality of digital images in addition to the user selected one or more of the plurality of digital images based upon the selection of one or more of the plurality of digital images and the strength values of the plurality of digital images, wherein the at least one additional image is selected from one or more image classes to which the user selected one or more of the plurality of digital images belong.

10. The non-transitory computer readable medium of claim 9, wherein the instructions to measure the emotive response of the user comprise instructions to analyze user physiology.

11. The non-transitory computer readable medium of claim 10, wherein the user physiology includes facial changes, pupil size, and skin conductivity, treated as a measure of viewer reaction.

12. The non-transitory computer readable medium of claim 9, wherein the user data comprises a display duration for each of the user selected one or more of the plurality of digital images.

13. The non-transitory computer readable medium of claim 9, wherein the user data comprises an interaction count with each of the user selected one or more of the plurality of digital images.

14. The non-transitory computer readable medium of claim 9, further comprising: instructions to receive an image of a face of the user; instructions to recognize the face of the user in a subset of the plurality of digital images.

15. A system comprising: a memory; and one or more electronic processors coupled to the memory and configured to: receive a plurality of digital images; determine, for each of the plurality of digital images, image features; calculate a strength value for each of the plurality of digital images; sort the plurality of digital images into classes based upon the image features; receive an indication of a selection of one or more of the plurality of digital images from a user; receive user data associated with the user selected one or more of the plurality of digital images; measure an emotive response of the user to the user selected one or more of the plurality of digital images; update the strength value for each of the user selected one or more of the plurality of digital images based upon the user data, wherein the user data includes the measured emotive response and wherein the strength value is updated using captured facial images of the user, the facial images captured while the user is viewing the one or more of the plurality of digital images; and automatically select at least one image from the plurality of digital images in addition to the user selected one or more of the plurality of digital images based upon the selection of one or more of the plurality of digital images and the strength values of the plurality of digital images, wherein the at least one additional image is selected from one or more image classes to which the user selected one or more of the plurality of digital images belong.

16. The system of claim 15, wherein measurement of the emotive response of the user comprises analyzing user physiology.

17. The system of claim 16, wherein the user physiology includes facial changes, pupil size, and skin conductivity, treated as a measure of viewer reaction.

18. The system of claim 15, wherein the user data comprises a display duration for each of the user selected one or more of the plurality of digital images.

19. The system of claim 15, further comprising an image capturing component that includes the one or more electronic processors.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a flow chart illustrating an image selection method according to the invention.

(2) FIG. 2 is a flow chart illustrating an image classification method that may be carried out as a preliminary step of the method according to FIG. 1.

(3) FIG. 3 is an illustration of a possible display layout of images selected according to the method of FIG. 1.

DESCRIPTION OF THE EMBODIMENTS

(4) In the following description, identical or similar parts of different figures are indicated with the same reference signs.

(5) Reference 100 of FIG. 1 relates to a first step of an image selection process corresponding to the provision of a collection of classified images 102. As explained later, this first step may include a preliminary sorting of the images into different classes. The collection of digital images 102 comprises a plurality of classes 104a, 104b, 104c of images, each comprising a plurality of images 102, 102i, 102j. The images in the same class have similar image features. As an example, a class may contain all the images that have been captured for a given event, such as a birthday party, or all the images that have been captured on the same day or within the same time range. A class may also contain all the images in which the face of a given person has been recognized by a face recognition algorithm.

(6) The same image can belong to several different image classes if the image has features corresponding to such classes. As an example, an image of a given person, taken at his/her birthday party on a given date, may belong to each of the above-mentioned classes. In the example of FIG. 1, an image 102i, represented in bold, belongs to both classes 104b and 104c. An image 102j, represented in double line, belongs to classes 104a, 104b and 104c.
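The multi-class membership just described can be sketched with a small data structure. This is an illustrative sketch only; the class and method names are hypothetical and not taken from the patent.

```python
from collections import defaultdict

class ImageCollection:
    """Tracks which images belong to which classes; one image may
    belong to several classes at once, as in FIG. 1."""

    def __init__(self):
        self._classes = defaultdict(set)  # class id -> set of image ids

    def classify(self, image_id, class_id):
        self._classes[class_id].add(image_id)

    def classes_of(self, image_id):
        # All classes containing this image (possibly more than one).
        return {c for c, ids in self._classes.items() if image_id in ids}

# Reproducing the example of FIG. 1: 102i is in 104b and 104c,
# 102j is in 104a, 104b and 104c.
collection = ImageCollection()
for class_id in ("104b", "104c"):
    collection.classify("102i", class_id)
for class_id in ("104a", "104b", "104c"):
    collection.classify("102j", class_id)
```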

(7) In a step 110, the method then allocates to each image 102, 102i, 102j, 102k one or a plurality of strength values 112, 112i, 112j, 112k. When the method is initially run, all the strength values can be set to the same initial value. The values can also be initially set as a function of a semantic content of the images or another image feature. As an example, more recent images, or images in which a face recognition algorithm recognizes a human face, may get a higher initial strength value than older images or images without a human face.
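The initial allocation rule of step 110 might be sketched as follows. The bonus amounts, cut-off year, and feature names are assumptions chosen for illustration, not values from the text.

```python
def initial_strength(image, base=1.0, recent_bonus=0.5, face_bonus=0.5,
                     recent_after=2015):
    """Allocate an initial strength value (step 110): more recent images
    and images with a recognized human face start higher (hypothetical
    bonus values)."""
    strength = base
    if image.get("capture_year", 0) >= recent_after:
        strength += recent_bonus
    if image.get("has_face", False):
        strength += face_bonus
    return strength
```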

(8) Class membership, as well as the strength values of the images, may be stored in the form of separate data or data files, or may be part of the image metadata.

(9) The initial strength value allocated to the image is not critical since the strength values are updated whilst the process is run.

(10) Each time the process is resumed, step 110 may comprise the allocation to each image of its strength value as it was when a given user previously ran the process. The strength values can be stored in a memory.

(11) In an additional step 114, a user may be invited to enter his or her name or any other kind of identification. The user identification can be used in step 110 to retrieve, for each image, the respective strength value as it was stored for the specific identified user.

(12) The process allows the user to browse through the image collection using any per se known browsing method and allows the user to select images. The selection of an image by the user is indicated with reference sign 120. This selection can be made by depressing a selection button, by touching a screen, by clicking on a computer mouse, etc.

(13) Whilst browsing through the image collection, and more generally whilst the user is interacting with the device used to carry out the method of the invention, user data are collected. This corresponds to step 122 in FIG. 1. User data can also be collected at another time and stored in a memory until being used by the method.

(14) As previously mentioned, the user data may include data about the user such as his/her date of birth, his/her address, a photograph of the user's face, or any other references that the user may be invited to enter as an input. Other user data may include data about the user's behavior or physiological reactions whilst he/she is browsing through the collection, or viewing images of the collection. The latter may include data on how long an image was viewed, how many clicks or data inputs were made whilst the image was viewed, whether an image has been processed or altered, etc.

(15) The user data available for an image are converted into a deemed user interest according to a set of predetermined rules, and the strength values of the images are updated accordingly in a subsequent step 130.

(16) As previously mentioned, according to one possible rule, the strength value of an image may be updated by a value proportional to the relative display duration of that image. This is based on the implicit rule that a user tends to view images that interest him or her for a longer time. The strength value of such images is then increased.
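One way to express this display-duration rule is sketched below. The update rate and the use of a mean viewing duration as the baseline are assumptions for illustration; the text only requires proportionality to relative display duration.

```python
def update_strength(strength, view_seconds, mean_view_seconds, rate=0.1):
    """Update a strength value in proportion to the relative display
    duration: longer-than-average viewing increases it, shorter-than-
    average viewing decreases it (hypothetical rate)."""
    return strength + rate * (view_seconds / mean_view_seconds - 1.0)
```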

(17) On the contrary, an image that has been discarded from a subset of images that the user has processed may be considered as not interesting to the user. Its strength value may be lowered.

(18) More sophisticated rules can be based on calculations made on the image content. This can be a cross-correlation calculation or face recognition. As an example, an image in which the user's face is recognized by a face recognition algorithm may be considered as interesting to the user, and its strength value may be increased.

(19) The amount by which a strength value of an image is increased, or possibly decreased, may be preset, proportional to scalable user data, or weighted with respect to other images of the collection or other images of the same category.

(20) In addition, the strength value of an image may also be updated based on calculations or data that are not linked to a user but based on low- or high-level analysis of the images. As an example, an image with poor contrast and poor sharpness may have its strength value automatically reduced.

(21) The update of the image strength value is symbolically represented by “+/−1” on FIG. 1.

(22) It shall be stressed that if user data become available and are used to update the strength value of a given image selected by the user, then not only is the strength value of that image updated, but also the strength values of all the images in the same category as the image selected by the user.

(23) Any image in the same category as a selected image may have its strength value updated by the same value or by a smaller value than the update applied to the selected image.

(24) It is additionally noted that when a selected image belongs to several different categories, then the image strength values in all such categories can be updated by the same or by different update values.

(25) In other words, when user data is collected and corresponds to an update rule, the data is used to update the strength values of the images selected by the user and, in turn, the strength values of the images classified in the same category as the images selected by the user.
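The propagation rule of paragraphs (22) to (25) can be sketched as below. The `spread` factor, implementing the "same or smaller value" option, and the variable names are assumptions for illustration.

```python
def propagate_update(strengths, classes, selected_id, delta, spread=0.5):
    """Apply `delta` to the selected image's strength value, then a
    smaller update (`spread * delta`) to every image sharing at least
    one category with it (step 130)."""
    strengths[selected_id] += delta
    neighbours = set()
    for members in classes.values():
        if selected_id in members:
            neighbours |= members
    neighbours.discard(selected_id)
    for image_id in neighbours:
        strengths[image_id] += spread * delta
    return strengths

# Example: 102i shares class 104b with 102j and class 104c with 102k.
strengths = {"102i": 1.0, "102j": 1.0, "102k": 1.0}
classes = {"104b": {"102i", "102j"}, "104c": {"102i", "102k"}}
propagate_update(strengths, classes, "102i", 0.2)
```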

(26) A next step 142 of FIG. 1 comprises the automatic selection of additional images. This selection is based upon the user selection 120 of one or several images.

(27) The additionally selected images are taken from the image categories to which the user-selected image belongs. The selection is based on the strength values and may retain the images having the highest strength values.

(28) As an example, images belonging to a category of a user-selected image and having a strength value above a threshold value can be taken. The threshold value can be predetermined or may be a weighted function of a mean strength value of all the images in the collection or of all the images in the same category as the user-selected image. Other threshold calculations, and especially a threshold calculation as a function of a display capability of a device running the process, are not excluded.
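The threshold-based selection of step 142 might look like this. Using the mean strength of the shared categories as the threshold is only one of the options mentioned above, and all names here are illustrative.

```python
def auto_select(strengths, classes, selected_id, weight=1.0):
    """Automatically select additional images (step 142): pool the
    images sharing a category with the user-selected one, then retain
    those at or above a threshold derived from the pool's mean
    strength value."""
    pool = set()
    for members in classes.values():
        if selected_id in members:
            pool |= members
    pool.discard(selected_id)
    if not pool:
        return []
    threshold = weight * sum(strengths[i] for i in pool) / len(pool)
    return sorted(i for i in pool if strengths[i] >= threshold)

# Example: pool mean is 2.0, so only "b" and "c" pass the threshold.
strengths = {"sel": 2.0, "a": 1.0, "b": 2.0, "c": 3.0}
classes = {"104b": {"sel", "a", "b", "c"}}
chosen = auto_select(strengths, classes, "sel")
```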

(30) The image selected by the user and the additional automatically selected images are displayed, simultaneously or subsequently, in a display step 146. The display step can be replaced or complemented by a printing step or any other subsequent image processing step.

(31) An additional step 144 may comprise the calculation of a display layout based on the categories, and/or the strength values of the selected images.

(32) Box 150 of FIG. 1, drawn in mixed line, represents a device suitable for carrying out the invention, having storage means able to store instructions that carry out the invention when read and executed by a machine. Such a device can be a multimedia device, a photo frame, or a computer, for example.

(33) Although the method of the invention can be carried out on an already classified collection of images, it may be preceded by an automatic classification process. Such a process is briefly described with reference to FIG. 2.

(34) A first step 210 of this process comprises the gathering of a collection of images. This can be done, for example, by capturing images with a capture device, downloading images from a remote device, or reading the images from a memory device.

(35) A next step 212 comprises the determination of image features for each image of the collection. The image features may be established through high- or low-level analysis and corresponding calculations. As previously mentioned, a face recognition engine or algorithm may be used. Providing image features may also comprise the mere reading of metadata already attached to the image data. An example is the capture time of an image.

(36) In a step 214, the features of the images are then compared to preset features corresponding to preset categories or previously created categories. The preset features of a category can be set as a range of values. As an example, a category corresponding to a birthday may include a range of capture times corresponding to a predetermined day and month. Each image having a capture time in that range can then be sorted into that category. Other examples may include a threshold for a cross-correlation calculation, a threshold for face recognition, a threshold for detection of sea, sand or landscape, etc. Of course, an image can still be classified in a category based on explicit user input.
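The range-based comparison of step 214 can be sketched as follows; the feature names and ranges are illustrative examples, not values from the text.

```python
def classify_by_ranges(image_features, class_ranges):
    """Sort an image into every class whose preset feature ranges all
    contain the image's features (step 214)."""
    matches = []
    for class_name, ranges in class_ranges.items():
        # A missing feature compares as NaN, which fails every range test.
        if all(low <= image_features.get(feature, float("nan")) <= high
               for feature, (low, high) in ranges.items()):
            matches.append(class_name)
    return matches

# A hypothetical birthday class defined by a capture day and month range.
class_ranges = {"birthday": {"month": (6, 6), "day": (14, 14)}}
```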

(37) An additional step 216 may be carried out when the features of some images correspond to none of the existing classes. Such images can then be classified in sui generis classes or may be used to set up new classes. When a given number of images have comparable features within a range, a new class can be created in step 217 for such images.

(38) A last step 220 corresponds to the sorting of the images into the classes. The result of this step is a classified collection of images 102 that can be used for the previously described selection method. The sorting of images may include the creation of metadata for the images, indicative of the classes.

(39) FIG. 3 shows an example of a display layout of images selected as described previously.

(40) A spatial direction 302, 303, 304, 305, 306, 307 is allocated to each category of images to which a selected image belongs. The directions can be regularly distributed over 360°, or irregularly, depending on the display sizes of the selected images.

(41) The image 310 selected by the user is displayed centrally, at the intersection of all the spatial directions; around this image, the automatically selected additional images 312 are distributed. As appears in FIG. 3, the additional images are displayed smaller than the user-selected image, in a size decreasing from a center 300.

(42) In particular, the display size and the distance from the center can be set as functions of the strength values of the additional images. The additional images 312L having a higher strength value, and deemed to be more interesting to the user, are displayed in a bigger size and are located closer to the user-selected image. The images 312S with a lower strength value are more distant and smaller.
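The layout of FIG. 3 can be sketched as a polar placement. The linear radius and inverse size scalings below are assumptions; the text only requires that stronger images be larger and closer to the center.

```python
import math

def radial_layout(categories, images_by_category, strengths,
                  base_radius=100.0, base_size=60.0):
    """Place images as in FIG. 3: one spatial direction per category,
    angles regularly distributed over 360 degrees; within a direction,
    higher-strength images sit closer to the center 300 and are drawn
    larger (hypothetical scaling)."""
    positions = {}  # image id -> (x, y, display size)
    step = 2 * math.pi / len(categories)
    for k, category in enumerate(categories):
        angle = k * step
        ranked = sorted(images_by_category[category],
                        key=lambda i: strengths[i], reverse=True)
        for rank, image_id in enumerate(ranked):
            radius = base_radius * (rank + 1)   # weaker -> farther out
            size = base_size / (rank + 1)       # weaker -> smaller
            positions[image_id] = (radius * math.cos(angle),
                                   radius * math.sin(angle), size)
    return positions

# Two categories: "q" is the strongest image of the first one.
layout = radial_layout(["104a", "104b"],
                       {"104a": ["p", "q"], "104b": ["r"]},
                       {"p": 1.0, "q": 2.0, "r": 1.0})
```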

(43) As previously mentioned, all the images belonging to the same category are aligned along the same spatial direction.

(44) If an image belongs to several categories, it is possible either to display it several times or to limit its display to one single direction.

(45) The fact that the displayed images are distributed as a function of their category and strength value allows the user to browse through the collection more easily and makes him or her aware of possible links between images that are displayed in the same manner or in the same area. In other words, the display can be expressed as an angular or polar function of the categories and strengths.

(46) Other layouts, in columns, in angular sectors, etc., may also be suitable, as long as a spatial link or distribution corresponds to the categories and a display difference is made with respect to the strength values. As another example, a display duration for each displayed image can be calculated as a function of its strength value.

(47) The display can be a two-dimensional display or a three-dimensional display.

PART LIST

(48) 100. Provision of classified images

(49) 102, 102i, 102j, 102k. Images

(50) 104, 104a, 104b, 104c. Classes

(51) 110. Strength value allocation

(52) 112, 112i, 112j, 112k. Strength values

(53) 114. User input

(54) 120. Image selection

(55) 122. Data collection

(56) 130. Strength value update

(57) 142. Automatic selection of images

(58) 144. Display layout calculation

(59) 150. Multimedia device

(60) 210. Gathering step

(61) 212. Image feature determining step

(62) 214. Comparison step

(63) 216. Sui generis classification step

(64) 217. Class creation

(65) 220. Sorting step

(66) 302, 303, 304, 305, 306, 307. Spatial directions

(67) 310. User selected image

(68) 312, 312L, 312S. Additional images