Systems, methods, and devices supporting scene change-based smart search functionalities
11711557 · 2023-07-25
Assignee
Inventors
- Yatish Jayant Naik Raikar (Bengaluru, IN)
- Soham Sahabhaumik (Bengaluru, IN)
- Rakesh Ramesh (Bengaluru, IN)
- Karthik Mahabaleshwar Hegde (Bengaluru, IN)
Cpc classification
H04N21/44008
ELECTRICITY
H04N21/2387
ELECTRICITY
G06V20/46
PHYSICS
International classification
H04N21/2387
ELECTRICITY
H04N21/239
ELECTRICITY
Abstract
Systems, methods, and devices are disclosed enabling smart search functionalities utilizing key scene changes appearing in video content. In various embodiments, the method includes the step or process of, while engaged in playback of the video content, receiving a user command at a playback device to shift a current playback position of the video content to a default search playback position (PP.sub.DS). In response to receipt of the user command, the playback device searches a time window encompassing the default search playback position (PP.sub.DS) for a key scene change in the video content. If a key scene change is located within the time window, the playback device shifts playback of the video content to a playback position corresponding to the key scene change (PP.sub.ST). Otherwise, the playback device shifts playback of the video content to the default search playback position (PP.sub.DS).
Claims
1. A method of playing video content utilizing a playback device, the method comprising: receiving a user command at the playback device to shift a current playback position of the video content to a default search playback position (PP.sub.DS); defining a temporal window extending before and after the default search playback position (PP.sub.DS) of the user command, wherein a start time of the temporal window is determined by subtracting a first duration from the default search playback position (PP.sub.DS), wherein an end time of the temporal window is determined by adding a second duration to the default search playback position (PP.sub.DS), and wherein the temporal window extending before and after the default search playback position (PP.sub.DS) encompasses a segment of the video content; searching the segment of the video content encompassed by the temporal window extending before and after the default search playback position (PP.sub.DS) for a scene change; and in response to locating the scene change in the segment of the video content encompassed by the temporal window, automatically shifting playback of the video content to a new playback position corresponding to a time position of the scene change (PP.sub.ST) between the start time of the temporal window and the end time of the temporal window.
2. The method of claim 1, further comprising, in response to the scene change being located near to the temporal window, shifting playback of the video content to the new playback position corresponding to the time position of the scene change (PP.sub.ST) rather than the default search playback position (PP.sub.DS).
3. The method of claim 2, further comprising, in response to multiple scene changes being located within the temporal window, shifting playback of the video content to the new playback position corresponding to the scene change closest in time to the default search playback position (PP.sub.DS).
4. The method of claim 3, further comprising: in response to receipt of the user command, searching a time window encompassing the default search playback position (PP.sub.DS) for the scene change in the video content wherein the time window is a different window than the temporal window.
5. The method of claim 4, further comprising: in response to the default search playback position (PP.sub.DS) occurring after the current playback position in the video content, assigning a first predetermined value to the first duration; and in response to the default search playback position (PP.sub.DS) occurring prior to the current playback position in the video content, assigning a second predetermined value to the first duration.
6. The method of claim 5, wherein the user command comprises instructions to implement a SEEK function having a trick mode speed; and wherein the method further comprises determining the first duration and second duration based, at least in part, on the trick mode speed of the SEEK function.
7. The method of claim 3, further comprising: place-shifting the video content from the playback device, over a network, and to a second device coupled to a display device; wherein the user command is entered at the second device and sent as a user command message transmitted from the second device, over the network, and to the playback device.
8. The method of claim 6, further comprising: extracting the start time (W.sub.S) of the temporal window and the end time (W.sub.E) of the temporal window from a user command message.
9. The method of claim 1, further comprising: parsing the video content to identify scene changes to generate scene change data prior to playback of the video content; and subsequently utilizing the scene change data to determine whether the scene change is located within the temporal window.
10. The method of claim 9, further comprising: generating a slide bar graphic on a display device including a slide bar representative of a duration of the video content; and further producing, on the display device, markers adjacent to the slide bar graphic denoting locations of scene changes indicated by the scene change data.
11. The method of claim 10, further comprising: generating thumbnails for the video content at junctures corresponding to scene changes listed in the scene change data; and in response to user input, displaying selected ones of the thumbnails alongside the slide bar graphic.
12. The method of claim 9 wherein parsing comprises identifying the scene changes based, at least in part, on user preference data stored in a memory of the playback device.
13. The method of claim 12 wherein the user preference data specifies a sensitivity for generating the scene change data.
14. The method of claim 1, wherein the second duration between the end time of the temporal window and the default search playback position (PP.sub.DS) is dynamically determined based on a search direction.
15. The method of claim 1, wherein the second duration between the end time of the temporal window and the default search playback position (PP.sub.DS) is between 1 second and 10 seconds.
16. A playback device, comprising: a processor; and a computer-readable storage medium storing computer-readable code that, when executed by the processor, causes the playback device to perform operations comprising: receiving a user command at the playback device to shift a current playback position of video content to a default search playback position (PP.sub.DS); defining a temporal window extending before and after the default search playback position (PP.sub.DS) of the user command, wherein a start time of the temporal window is defined by subtracting a first duration from the default search playback position (PP.sub.DS), wherein an end time of the temporal window is identified by adding a second duration to the default search playback position (PP.sub.DS), and wherein the temporal window extending before and after the default search playback position (PP.sub.DS) encompasses a segment of the video content; searching the segment of the video content encompassed by the temporal window for a scene change between the start time and the end time of the temporal window; and in response to locating the scene change in the segment of video encompassed by the temporal window, automatically shifting playback of the video content to a new playback position corresponding to the scene change (PP.sub.ST).
17. The playback device of claim 16 wherein the operations further comprise: in response to locating multiple scene changes within the temporal window or near to the temporal window, shifting playback of the video content to the new playback position corresponding to the scene change closest in time location to the default search playback position (PP.sub.DS).
18. The playback device of claim 16 wherein the operations further comprise: parsing the video content to identify scene changes to generate scene change data prior to playback of the video content; and using the scene change data to determine whether the scene change is located within the temporal window or to determine whether the scene change is located near the temporal window.
19. A playback device, comprising: a scene change detector configured to: detect scene changes in video content in a time window and in a temporal window, wherein the time window is different than the temporal window; and generate scene change data indicating locations of the scene changes in the video content, wherein the scene change data is provided as a manifest; and a smart search controller to: monitor for a user command to shift a current playback position of the video content to a default search playback position (PP.sub.DS); define the temporal window extending before and after the default search playback position (PP.sub.DS) of the user command, wherein a start time of the temporal window is defined by subtracting a first duration from the default search playback position (PP.sub.DS), wherein an end time of the temporal window is defined by adding a second duration to the default search playback position (PP.sub.DS), and wherein the temporal window extending before and after the default search playback position (PP.sub.DS) encompasses a segment of the video content; utilize the scene change data to determine that a scene change occurs in the segment of the video content encompassed by the temporal window; and in response to determining that the scene change occurs in the segment of video encompassed by the temporal window, automatically shift playback of the video content to a playback position corresponding to the scene change (PP.sub.ST).
20. The playback device of claim 19 further comprising a thumbnail generator configured to: receive the scene change data from the scene change detector; and generate thumbnail images for each time position in the video content at which the scene change data indicates an occurrence of the scene change (PP.sub.ST); wherein the scene change detector is configured to detect scene changes in the video content utilizing scene change markers embedded in the video content.
Description
BRIEF DESCRIPTION OF THE DRAWING FIGURES
(1) Exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and:
(2)
(3)
(4)
(5)
DETAILED DESCRIPTION
(6) The following Detailed Description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. The term “exemplary,” as appearing throughout this document, is synonymous with the term “example” and is utilized repeatedly below to emphasize that the description appearing in the following section merely provides multiple non-limiting examples of the invention and should not be construed to restrict the scope of the invention, as set-out in the Claims, in any respect.
Overview
(7) Systems, methods, and devices are provided supporting smart search (SKIP and/or SEEK) functionalities utilizing key scene transitions or changes in video content. Generally, the systems, methods, and devices described herein improve the accuracy and ease with which users locate points-of-interest in video content by predicting user desires or points-of-interest corresponding to key scene changes appearing in the video content. Embodiments of the present disclosure may be implemented utilizing a computer-executed algorithm or process, which detects when an end user utilizes a particular search command (e.g., a SKIP or SEEK function) to change a current playback position of video content. When determining that a user has initiated such a search command, the playback device searches for key scene changes occurring within a time window encompassing a default playback position (herein “PP.sub.DS”) corresponding to the user-entered search command. If a key scene change is located within this time window, the playback device predicts a likely user intent to shift playback of the video content to a time position corresponding to the key scene change (herein “PP.sub.ST”); or, if multiple key scene changes are located within the time window, to a time position corresponding to the key scene change occurring most closely in time to PP.sub.DS. In accordance with this prediction, the playback device then shifts playback of the video content to the playback position corresponding to the key scene change (PP.sub.ST) as opposed to the default search playback position (PP.sub.DS). Otherwise, the playback device shifts playback of the video content to the default search playback position (PP.sub.DS).
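The prediction just described reduces to a simple decision rule. The sketch below expresses it in Python; the function name, argument shapes, and units (seconds) are illustrative assumptions rather than the patented implementation, and tie-breaking among equally close changes is deferred to the later discussion of STEP 86.

```python
def smart_search_target(pp_ds, scene_changes, x, y):
    """Pick the playback position for a smart search landing at pp_ds.

    pp_ds: default search playback position (PP_DS), in seconds.
    scene_changes: time positions (seconds) of key scene changes.
    x, y: window extents before and after pp_ds, in seconds.

    Returns the key scene change (PP_ST) closest in time to pp_ds if
    one falls inside the window [pp_ds - x, pp_ds + y]; otherwise
    falls back to pp_ds itself.
    """
    in_window = [t for t in scene_changes if pp_ds - x <= t <= pp_ds + y]
    if not in_window:
        return pp_ds  # no key scene change located; use the default
    return min(in_window, key=lambda t: abs(t - pp_ds))
```

For example, a skip landing at the 10-minute mark (600 s) with a ±2 s window would snap to a key scene change at 601 s, but stay at 600 s if the nearest change were at 500 s.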
(8) By predicting user desires in the above-described manner, embodiments of the disclosure provide an efficient scheme to better enable users to navigate through video content. Further, little, if any, additional effort or learning is required on behalf of the end user in executing the smart search functions. The smart search functionalities outlined above can be modified in further embodiments and can include other functionalities, such as actively varying parameters defining the time window based upon one or more factors. Additional description in this regard is provided below in connection with
Example of Streaming Media System Including Playback Devices Suitable for Performing Embodiments of the Present Disclosure
(9)
(10) In contrast to home media receiver 12, portable media receiver 14 may be any portable electronic device capable of receiving streaming video content over network 22. In certain embodiments, receivers 12, 14 may communicate over network 22, with home media receiver 12 serving as a place-shifting device providing streaming content to portable media receiver 14. In such implementations, home media receiver 12 (e.g., an STB located in a user's residence) may transmit streaming video content to portable media receiver 14 (e.g., a smartphone or tablet) for viewing by an end user. In such embodiments, home media receiver 12 may be considered a playback device, while receiver 14 is considered a second device coupled to a display screen on which the streamed video content is presented. In other instances, video content may be stored in and played from a local memory accessible to portable media receiver 14, whether an internal memory contained in the housing of receiver 14 or an external memory exterior to the receiver housing and coupled to receiver 14 through, for example, a USB (or other) connection.
(11) Video content may be initially recorded or stored in a memory accessible to home media receiver 12; e.g., a computer-readable storage area contained in receiver 12 or an external memory coupled to receiver 12 via a wired or wireless (home network) connection. Alternatively, the pertinent video content may be transmitted to home media receiver 12 and then place-shifted to portable media receiver 14 by receiver 12 as the content is received. When providing such a place-shifting functionality, home media receiver 12 may further contain at least one encoder module 26 and control module 28. Modules 26, 28 can be implemented utilizing software, hardware, firmware, and combinations thereof. The encoded media stream generated by receiver 12 will typically contain both video and audio component streams, which may be combined with packet identification data. Any currently-known or later-developed packetized format can be employed by receiver 12 including, but not limited to, MPEG, QUICKTIME, WINDOWS MEDIA, and/or other formats suitable for streaming transmission over network 22.
(12) The foregoing components can each be implemented utilizing any suitable number and combination of known devices including microprocessors, memories, power supplies, storage devices, interface cards, and other standardized components. Such components may include or cooperate with any number of software programs or instructions designed to carry-out the various methods, process tasks, encoding and decoding algorithms, and relevant display functions. Media transmission system 10 may also include various other conventionally-known components, which are operably interconnected (e.g., through network 22) and not shown in
(13) As indicated above, portable media receiver 14 can assume the form of any electronic device suitable for performing the processes and functions described herein. A non-exhaustive list of suitable electronic devices includes smartphones, wearable devices, tablet devices, laptop computers, and desktop computers. When engaged in a place-shifting session with home media receiver 12, portable media receiver 14 outputs visual signals for presentation on display device 30. Display device 30 can be integrated into portable media receiver 14 as a unitary system or electronic device. This may be the case when, for example, portable media receiver 14 assumes the form of a mobile phone, tablet, laptop computer, or similar electronic device having a dedicated display screen. Alternatively, display device 30 can assume the form of an independent device, such as a freestanding monitor or television set, which is connected to portable media receiver 14 via a wired or wireless connection. Any such video output signals may be formatted in accordance with conventionally-known standards, such as S-video, High Definition Multimedia Interface (HDMI), Sony/Philips Display Interface Format (SPDIF), DVI (Digital Video Interface), or IEEE 1394 standards, as appropriate.
(14) By way of non-limiting illustration, portable media receiver 14 is shown in
(15) Browser player 38 includes control logic 42 adapted to process user input, obtain streaming content from one or more content sources, decode received content streams, and provide corresponding output signals to display device 30. In this regard, control logic 42 may establish a data sharing connection with the remote home media receiver 12 enabling wireless bidirectional communication with control module 28 such that a place-shifting session can be established and maintained. During a place-shifting session, home media receiver 12 streams place-shifted content to portable media receiver 14 over network 22. Such streaming content can contain any visual or audiovisual programming including, but not limited to, streaming OTT TV programming and VOD content. The streaming content is received by portable media receiver 14 and decoded by decoding module 44, which may be implemented in hardware or software executing on processor 32. The decoded programming is then provided to a presentation module 46, which generates output signals to display device 30 for presentation to the end user operating portable media receiver 14. In some embodiments, presentation module 46 may combine decoded programming (e.g., programming from multiple streaming channels) to create a blended or composite image; e.g., as schematically indicated in
(16) In operation, control logic 42 of portable media receiver 14 obtains programming in response to end user input or commands received via a user interface, such as a touchscreen or keyboard interface, included within I/O features 36. Control logic 42 may establish a control connection with remotely-located home media receiver 12 via network 22 enabling the transmission of commands from control logic 42 to control module 28. Accordingly, home media receiver 12 may operate by responding to commands received from a portable media receiver 14 via network 22. Such commands may include information utilized to initiate a place-shifting session with home media receiver 12, such as data supporting mutual authentication of home media receiver 12 and portable media receiver 14. In embodiments in which home media receiver 12 assumes the form of a consumer place-shifting device, such as an STB or DVR located in an end user's residence, control commands may include instructions to remotely operate home media receiver 12 as appropriate to support the current place-shifting session.
(17)
(18) As indicated in
(19) Scene change detection module 52 may parse the entirety of video content 58 ahead of the below-described process, in which case key scene change data 60 may be provided as a single manifest to thumbnail generation module 54 when appropriate. Alternatively, scene change detection module 52 may repeatedly parse sections (e.g., successive segments of a predetermined duration) of the video content during playback, while providing key scene change data 60 for each parsed section of the video content after processing by module 52. Notably, scene change detection module 52 may not identify every scene change appearing in the video content as a “key” scene change. Instead, in embodiments, scene change detection module 52 may determine whether a scene change is considered a “key” scene change utilizing additional data, such as user preference data contained in a user profile, as discussed more fully below. Scene change detection module 52 then provides the key scene change data as an output 60, which is forwarded to thumbnail generation module 54.
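The patent does not prescribe a particular detection algorithm for module 52. One common technique that fits this description is a histogram-difference test over decoded frames, with the threshold tied to the user-preference “sensitivity” recited in claim 13. The function below is a hypothetical sketch under those assumptions:

```python
def detect_key_scene_changes(frame_histograms, fps, sensitivity=0.5):
    """Illustrative scene change detection via histogram differencing.

    frame_histograms: one normalized color histogram (list of floats)
        per decoded frame, in playback order.
    fps: frame rate of the parsed content, frames per second.
    sensitivity: threshold on the L1 distance between consecutive
        histograms; a user-preference setting could tune this value.

    Returns the time positions (seconds) flagged as key scene changes,
    suitable for emission as a manifest (key scene change data 60).
    """
    changes = []
    for i in range(1, len(frame_histograms)):
        dist = sum(abs(a - b) for a, b in
                   zip(frame_histograms[i - 1], frame_histograms[i]))
        if dist > sensitivity:
            changes.append(i / fps)  # frame index -> playback time
    return changes
```

A lower sensitivity threshold flags more transitions as “key,” mirroring the user-tunable behavior described for the user preference data.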
(20) When included in playback device architecture 50, thumbnail generation module 54 receives the key scene change data provided by scene change detection module 52 as input 60. Thumbnail generation module 54 utilizes the key scene change data to ensure that a thumbnail image is generated for each key scene change in the video content. In this regard, thumbnail generation module 54 may generate thumbnail images for the video content at predetermined intervals as a baseline set of images; e.g., module 54 may generate a thumbnail image every N seconds progressing through the entire duration of a given video content item. Additionally, thumbnail generation module 54 may further generate thumbnail images for each time point corresponding to a key scene change, as specified in the key scene change data provided by module 52. Thumbnail generation module 54 may then consolidate these two sets of images to yield a final set of thumbnail images. In so doing, module 54 may also replace any baseline thumbnail image occurring within a predetermined time proximity (e.g., a four second time span) of a key scene change thumbnail image with that key scene change thumbnail image to avoid redundancy of images. The final set of thumbnail images may then be stored in local memory for subsequent access by smart search module 56 or, instead, immediately outputted to smart search module 56 as output 62, along with corresponding metadata (e.g., specifying the time position corresponding to each thumbnail), to enhance the accuracy of the smart search process conducted by module 56.
This example notwithstanding, thumbnail generation module 54 may be omitted in further embodiments; e.g., in embodiments in which the on-screen imagery generated in conjunction with the search process does not include thumbnail images or in which on-screen imagery is not generated in conjunction with the search process.
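The consolidation step performed by module 54 might look as follows. The four-second proximity comes from the example above, while the function name, signature, and representation of thumbnails as bare time positions are assumptions made for illustration:

```python
def consolidate_thumbnails(duration, interval, scene_times, proximity=4.0):
    """Merge baseline thumbnails with key scene change thumbnails.

    duration: total length of the video content, in seconds.
    interval: baseline spacing N between thumbnails, in seconds.
    scene_times: time positions (seconds) of key scene changes.
    proximity: a baseline thumbnail within this many seconds of a key
        scene change is replaced by the scene-change thumbnail.

    Returns the sorted time positions of the final thumbnail set.
    """
    baseline = [float(t) for t in range(0, int(duration) + 1, interval)]
    # Drop baseline entries too close to a key scene change (redundant).
    kept = [t for t in baseline
            if all(abs(t - s) > proximity for s in scene_times)]
    return sorted(kept + list(scene_times))
```

For a 30-second clip with thumbnails every 10 seconds and a key scene change at 12 seconds, the baseline image at 10 seconds is dropped in favor of the scene-change image.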
(21) With continued reference to
(22) After establishing the parameters of the time window to search, smart search module 56 next scans the segment of the video content encompassed by the time window for key scene changes. In the illustrated embodiment, smart search module 56 may perform this task utilizing the key scene change data or manifest generated by scene change detection module 52. For example, if the key scene change data is provided as a listing of time points at which key scene changes occur, smart search module 56 may determine which, if any, time points fall within the time window. As a relatively simple example, if the time window is determined to have a duration of ±2 seconds (e.g., X and Y each have a value of 2 seconds), and if it is further determined that the default search playback position (PP.sub.DS) occurs at a certain point during playback (e.g., 10 minutes from the beginning of the video content), smart search module 56 may examine the key scene change data to determine if any scene change time points occur between 10 minutes±2 seconds in the video content.
(23) If a time point corresponding to a key scene change is discovered by module 56 within the time window, as determined utilizing the key scene change data, playback of the video content may then be advanced (in either a forward or backward time direction) to this time point. Otherwise, playback of the video content may be shifted to the default search playback position (PP.sub.DS). Further description in this regard is provided below in connection with
(24) Turning next to
(25) After commencing (STEP 68), scene change-based search method 67 advances to STEP 70. During STEP 70, video content is received at the playback device. At an appropriate juncture, playback of the video content may be initiated in accordance with user commands. The video content may be obtained from local memory of the playback device, streamed to the playback device from a streaming media server, place-shifted from a first playback device to a second playback device, or otherwise obtained. Also, at STEP 70 of method 67, the video content is parsed to identify key scene changes; e.g., by scene change detection module 52 utilizing any one of the techniques described above in connection with
(26) Next, at STEP 72 of scene change-based search method 67, the playback device commences playback of the video content or continues video content playback in accordance with user commands received at the playback device. As previously indicated, the video content can be presented on a local display device (that is, a display device in physical proximity of the playback device) for viewing by an end user. In such instances, the display device may be integrated into the playback device (e.g., as in the case of a smartphone, tablet, laptop, smart television, or similar device having an integrated display screen) or, instead, may be a freestanding device to which the playback device is coupled via a wired or wireless connection. In other instances, the playback device may be a streaming server, such as a consumer place-shifting device residing in an end user's residence, place of work, or similar location. In this latter instance, the playback device may stream the video content over a network (e.g., network 22 shown in
(27) As indicated in
(28) Progressing to STEP 76 of scene change-based search method 67, the playback device (e.g., smart search module 56 shown in
(29) If a user search command has been received at STEP 78, the playback device advances to STEP 80 of scene change-based search method 67. At STEP 80 of method 67, the playback device ascertains the default search playback position corresponding to the newly-received user search command (PP.sub.DS). In one approach, the playback device determines PP.sub.DS in a manner similar or identical to that traditionally employed in executing SKIP or SEEK functions. For example, in the case of a SKIP function, if a user has selected a forward SKIP button once (whether by selection via virtual button on a GUI, by pressing a button on a remote control, or in some other manner), the playback device may recall the fixed time jump associated with SKIP (e.g., 10 seconds) and determine that the default search playback position (PP.sub.DS) is 10 seconds ahead of the current playback position. If the user has instead entered a SEEK command implemented utilizing a trick mode, the playback device may begin to advance the media content in a forward or backward time direction at an increased playback speed (e.g., 2×, 4×, 8×, or 16× the normal playback speed) in accordance with the user command. Finally, if the user has entered a SEEK command by manipulating a GUI widget, such as by moving a sliding widget or bug along the length of a slide bar, the playback device may translate this user input into the default search playback position (PP.sub.DS) accordingly; e.g., observing the location of the sliding widget along the slide bar and converting this position into a corresponding time position for PP.sub.DS.
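The three command styles described above (fixed SKIP jump, trick mode SEEK, and slider manipulation) each resolve to a PP.sub.DS. The mapping below is an illustrative sketch: the dictionary command representation, the 10-second SKIP jump, and the function name are assumptions, and trick mode SEEK is omitted since its endpoint depends on when the user stops the trick play.

```python
def default_search_position(current_pos, command):
    """Translate a user search command into PP_DS, in seconds.

    current_pos: current playback position, in seconds.
    command: a hypothetical dict describing the user input, e.g.
        {"type": "SKIP", "direction": +1}  # forward skip
        {"type": "SLIDER", "fraction": 0.25, "duration": 3600.0}
    """
    if command["type"] == "SKIP":
        # Fixed time jump (10 s assumed here) in the commanded direction.
        return current_pos + command["direction"] * 10.0
    if command["type"] == "SLIDER":
        # Widget position along the slide bar, as a fraction of the
        # content duration, converted to a time position.
        return command["fraction"] * command["duration"]
    raise ValueError("unsupported command type")
```

A single forward SKIP from 100 seconds yields a PP.sub.DS of 110 seconds; dragging the slider a quarter of the way through an hour-long program yields 900 seconds.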
(30) Next, at STEP 82 of method 67, the playback device defines the parameters of the time window encompassing the default search playback position (PP.sub.DS). As indicated above, such parameters may include a window start time (W.sub.S) occurring X seconds prior to the default search playback position (PP.sub.DS) and a window end time for the search window (W.sub.E) occurring Y seconds after PP.sub.DS. In certain instances, X and Y may have equivalent static values. In this case, the playback device may simply recall a duration of the time window (e.g., on the order of 2 to 10 seconds) from memory, while defining the parameters of the time window such that the window start and end times are equidistant from the default search playback position (PP.sub.DS), as taken along the time axis. As discussed above, the values of X and Y (and therefore the duration of the time window) will vary among embodiments, but will often be selected such that the sum of X and Y is greater than 2 seconds and less than 20 seconds. In other embodiments, X and Y may not be equivalent and/or may have dynamic values that vary depending upon factors relating to the search process, as discussed below. Finally, while in many cases the playback device will independently determine the values of X and Y (e.g., by recalling these values from local memory), this need not be the case in all implementations. For example, in implementations in which the playback device acts as a place-shifting device or a streaming video server (e.g., as discussed above in connection with device 12 shown in
(31) In implementations in which X and Y have dynamic values, the values of X and Y may vary depending upon any number of factors. Such factors can include, for example, the time direction of the user search command. For example, in this case, the playback device may assign X a greater value than Y (e.g., X and Y may be assigned values of 3 and 2 seconds, respectively) if the default search playback position (PP.sub.DS) occurs prior to the current playback position in the video content; that is, if a user is searching the video content in a backward time direction. Conversely, the playback device may assign Y a greater value than X (e.g., X and Y may be assigned values of 2 and 3 seconds, respectively) if the default search playback position (PP.sub.DS) occurs after the current playback position in the video content; that is, if a user is searching the video content in a forward time direction. This, in effect, expands the time window in the time direction of the search action to further increase the likelihood of locating the point-of-interest sought by the end user.
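Using the example values from the paragraph above (3 seconds on the biased side, 2 seconds on the other), the direction-dependent assignment can be sketched as follows; the function name and default values are illustrative:

```python
def window_extents(current_pos, pp_ds, base=2.0, bias=3.0):
    """Assign window extents X (before PP_DS) and Y (after PP_DS)
    based on search direction, per the 2 s / 3 s example values.

    A backward search (pp_ds before the current position) widens the
    window before PP_DS; a forward search widens it after PP_DS.
    Returns the pair (X, Y), in seconds.
    """
    if pp_ds < current_pos:   # backward search
        return bias, base     # X = 3 s, Y = 2 s
    return base, bias         # X = 2 s, Y = 3 s
```
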
(32) In alternative embodiments, the values of X and Y may be varied based upon other characteristics related to the user search commands. For example, in instances in which the user search command is received as a SEEK function having a trick mode speed (e.g., to advance playback of the video content in a forward or backward time direction by 2×, 4×, 8×, or 16× the normal playback speed), the values of X and Y may be varied as a function of the trick mode speed of the SEEK function; e.g., more specifically, the values of X and Y may be increased as the playback speed of the trick mode increases. In still other embodiments, the values of X and Y may be increased based upon recent search history. For example, if a user has repeatedly searched a segment of the video content and continues to search such video content in a short period of time (e.g., on the order of a few seconds), the values of X and Y may be increased to effectively widen or broaden the search and increase the likelihood of locating the point-of-interest sought by the end user. Such approaches may also be combined with any of the other approaches discussed above, as desired, to yield still further embodiments of the present disclosure.
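The disclosure states only that X and Y increase with trick mode speed, without specifying a formula. The logarithmic scaling below is one assumed realization of that behavior, keeping the base window at 2× speed while quadrupling it at 16×:

```python
import math

def scaled_extents(trick_speed, base_x=2.0, base_y=2.0):
    """Widen the window extents as the trick mode (SEEK) speed grows.

    trick_speed: playback speed multiplier (2, 4, 8, 16, ...).
    The log2 scaling is an illustrative assumption: higher seek speeds
    make landing precisely on a point-of-interest harder, so the
    search window grows accordingly. Returns (X, Y), in seconds.
    """
    factor = max(1.0, math.log2(trick_speed))
    return base_x * factor, base_y * factor
```
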
(33) Continuing to STEP 84 of scene change-based search method 67, the playback device next determines whether any key scene changes are located within the time window. Again, the playback device may render this determination utilizing the data indicating the time position of key scene changes, as generated by scene change detection module 52 and/or thumbnail generation module 54. If determining that at least one key scene change occurs within the time window, the playback device shifts or advances playback of the video content to a playback position corresponding to this key scene change (STEP 86). If multiple key scene changes occur within the time window, the playback device may shift to the playback position corresponding to the key scene change closest in time to PP.sub.DS. In the rare instances that two key scene changes are equally close in time to PP.sub.DS, the playback device may shift the playback position to the key scene change occurring prior to PP.sub.DS if a user is searching in a backward time direction; or to the key scene change occurring after PP.sub.DS if a user is searching in a forward time direction. Otherwise, the playback device progresses to STEP 88 and shifts playback of the video content to the default search playback position (PP.sub.DS). In this manner, the user is more likely to arrive precisely at a key scene change within a temporal field of interest, thereby enhancing the ease and convenience with which the user may locate desired points-of-interest in an intuitive manner. Lastly, at STEP 90 of method 67, the playback device determines whether the present iteration of smart, scene change-based search method 67 should terminate. If so, the playback device shifts to STEP 92 and terminates the current iteration of method 67. Alternatively, the playback device returns to STEP 74 and the above-described process steps repeat or loop.
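The STEP 86 selection among multiple in-window scene changes, including the direction-based tie-break for equally close changes, can be sketched as follows; the function name and the +1/-1 direction encoding are assumptions made for illustration:

```python
def select_scene_change(pp_ds, candidates, direction):
    """Pick the key scene change to jump to among in-window candidates.

    pp_ds: default search playback position, in seconds.
    candidates: key scene change positions within the time window
        (must be non-empty).
    direction: +1 for a forward search, -1 for a backward search.

    Chooses the change closest in time to PP_DS; on an exact tie,
    prefers the change after PP_DS when searching forward and the
    change before PP_DS when searching backward.
    """
    best = min(abs(t - pp_ds) for t in candidates)
    closest = [t for t in candidates if abs(t - pp_ds) == best]
    if len(closest) == 1:
        return closest[0]
    return max(closest) if direction > 0 else min(closest)
```
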
(34)
(35) Similarly, in
(36) As shown in
CONCLUSION
(37) The foregoing has thus provided systems, methods, and playback devices enabling smart search or scene change-based search functionalities utilizing key scene changes in video content. By predicting user desires, embodiments of the disclosure provide an efficient scheme to better enable users to navigate through video content by searching (SKIPPING or SEEKING) to the time locations corresponding to scene changes in the video content. The above-described systems, methods, and playback devices can increase search accuracy in most instances by better predicting user viewing desires and then tailoring the search action in accordance with such predictions. Further, implementation of the above-described playback device and method requires little additional effort or learning on behalf of the end user in executing the smart search functions.
(38) While several exemplary embodiments have been presented in the foregoing Detailed Description, it should be appreciated that a vast number of alternate but equivalent variations exist, and the examples presented herein are not intended to limit the scope, applicability, or configuration of the invention in any way. To the contrary, various changes may be made in the function and arrangement of the various features described herein without departing from the scope of the claims and their legal equivalents.