MOBILE COMMUNICATION TERMINAL AND METHOD THEREOF

20180013877 · 2018-01-11

    Abstract

    A method for providing a user interface of a communication apparatus comprises switching from a low power mode to a working mode upon receiving a stream of audio data; and upon switching from the low power mode to the working mode: extracting at least one audio feature from said stream of audio data, and modifying the appearance of at least one user interface component configured for invoking a function of the communication apparatus, in accordance with said extracted audio feature.

    Claims

    1. A method for providing a user interface in a communication apparatus, said method comprising: responsive to a stream of audio data, generating an audio activation signal; responsive to the audio activation signal, switching circuitry for transmitting graphics data to a display of the communication apparatus from a low power mode to a working mode; and then modifying, in accordance with the stream of audio data, the appearance of at least one user interface component configured for invoking a function of the communication apparatus.

    2. The method of claim 1, further comprising: extracting at least one audio feature from the stream of audio data; wherein the appearance of the at least one user interface component is modified in accordance with the extracted at least one audio feature.

    3. The method of claim 2, further comprising: classifying an extracted audio feature into one of a plurality of predetermined feature representations; wherein the appearance of the at least one user interface component is modified in accordance with the predetermined feature representation for the extracted audio feature.

    4. The method of claim 1, wherein the at least one user interface component comprises a graphical object.

    5. The method of claim 1, further comprising: in the communication apparatus, generating the stream of audio data.

    6. A communication apparatus comprising: a display configured to visualize a user interface comprising at least one user interface component configured for invoking a function of the communication apparatus; an audio detector configured to generate an audio activation signal responsive to a stream of audio data; and a module for transmitting graphics data to the display, the module configured to switch from a low power mode to a working mode to transmit graphics data to the display responsive to the audio activation signal, and configured to modify, in the working mode, the at least one user interface component in accordance with the stream of audio data.

    7. The apparatus of claim 6, wherein said apparatus is a mobile communication terminal.

    8. The apparatus of claim 6, wherein the module comprises: a graphics engine configured to determine the graphics data to be transmitted to the display; an audio feature extractor, configured to extract an audio feature from the stream of audio data; and a user interface modifier, configured to generate user interface modification data for modifying a user interface component based on the extracted audio feature, and to transmit the user interface modification to the graphics engine.

    9. The apparatus of claim 8, wherein the audio feature extractor and the user interface modifier are further configured to switch from a low power mode to a working mode upon the module receiving the audio activation signal.

    10. The apparatus of claim 8, wherein the module further comprises: an audio feature classifier configured to classify the extracted audio feature into one of a set of predetermined feature representations; wherein the user interface modifier generates user interface modification data corresponding to the predetermined feature representation corresponding to the extracted audio feature.

    11. The apparatus of claim 8, wherein the at least one user interface component comprises a graphical object.

    12. The apparatus of claim 8, wherein the module is a software implemented module.

    13. The apparatus of claim 8, wherein the module is a hardware implemented module.

    14. A non-transitory computer-readable medium having computer-executable components comprising instructions that, when executed in a communication apparatus, cause the apparatus to perform a plurality of operations comprising: responsive to an audio activation signal generated in response to a stream of audio data, causing the switching of circuitry for transmitting graphics data to a display of the communication apparatus from a low power mode to a working mode; and then modifying, in accordance with the stream of audio data, the appearance of at least one user interface component configured for invoking a function of the communication apparatus.

    15. The non-transitory computer-readable medium of claim 14, wherein the plurality of operations further comprises: extracting at least one audio feature from the stream of audio data; wherein the appearance of the at least one user interface component is modified in accordance with the extracted at least one audio feature.

    16. The non-transitory computer-readable medium of claim 15, wherein the plurality of operations further comprises: classifying an extracted audio feature into one of a plurality of predetermined feature representations; wherein the appearance of the at least one user interface component is modified in accordance with the predetermined feature representation for the extracted audio feature.

    17. The non-transitory computer-readable medium of claim 14, wherein the at least one user interface component comprises a graphical object.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0050] The above, as well as additional objects, features and advantages of the disclosed embodiments, will be better understood through the following illustrative and non-limiting detailed description of the disclosed embodiments, with reference to the appended drawings, where the same reference numerals will be used for similar elements, wherein:

    [0051] FIG. 1 is a flow chart of an embodiment of a method for modifying a user interface component in accordance with audio data.

    [0052] FIG. 2 schematically illustrates a module according to the disclosed embodiments.

    [0053] FIG. 3 schematically illustrates an apparatus according to the disclosed embodiments.

    [0054] FIG. 4 illustrates an example of a user interface with user interface components being modified in accordance with audio data.

    DETAILED DESCRIPTION

    [0055] FIG. 1 is a flow chart illustrating a method according to the disclosed embodiments describing the general steps of modifying a user interface component in accordance with audio data.

    [0056] In a first step, 100, audio data is received. The audio data may be a current part of a stored audio file being played by a music player, or, alternatively, a current part of an audio stream received by an audio data receiver.

    [0057] Next, in a second step, 102, an audio feature is extracted from the received audio data. Such an audio feature may be a frequency spectrum of the audio data.
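    As a purely illustrative sketch of this second step, a frequency-spectrum feature could be computed with a windowed FFT. The block size, sample rate and test signal below are assumptions chosen for illustration and are not part of the disclosure.

```python
import numpy as np

def extract_spectrum(samples, sample_rate):
    """Extract a magnitude frequency spectrum from one block of audio samples."""
    window = np.hanning(len(samples))               # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs, spectrum

# Illustrative input: a 1 kHz sine sampled at 8 kHz
rate = 8000
t = np.arange(1024) / rate
samples = np.sin(2 * np.pi * 1000.0 * t)
freqs, spectrum = extract_spectrum(samples, rate)
peak = freqs[np.argmax(spectrum)]                   # dominant frequency, near 1000 Hz
```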

    [0058] Finally, in a third step, 104, one or several user interface components are modified in accordance with the extracted audio feature.

    [0059] The third step, 104, may be subdivided into a first substep, 106, in which the extracted audio feature is classified into a predetermined feature representation. Thereafter, in a second substep, 108, the user interface component is modified in accordance with the predetermined feature representation.

    [0060] By using predetermined feature representations, a limited number of pre-rendered appearance state images may be used for the user interface components. This implies that less computational power is needed in order to modify the user interface components in accordance with the audio data.
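    One possible sketch of this classification, assuming a single scalar feature and four predetermined representations (the thresholds, state count and image file names are hypothetical), is to quantize the feature value into an index that selects a pre-rendered appearance state image:

```python
def classify_feature(feature_value, num_states=4, max_value=1.0):
    """Classify a continuous audio feature into one of a small number of
    predetermined feature representations (indices 0..num_states-1)."""
    level = min(max(feature_value / max_value, 0.0), 1.0)
    return min(int(level * num_states), num_states - 1)

# One pre-rendered appearance state image per representation (names hypothetical)
state_images = ["icon_quiet.png", "icon_low.png", "icon_mid.png", "icon_loud.png"]
image = state_images[classify_feature(0.6)]         # selects the "mid" state
```

    Because only a fixed set of images is ever shown, no per-frame rendering of the effect itself is required.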

    [0061] The user interface components can be 3-D rendered objects. Additionally, audio visualization effects can be superposed upon the 3-D rendered objects. Then, when audio data is received and an audio feature is extracted, the audio visualization effects change, which means that the appearance of the user interface components varies in accordance with the audio data.

    [0062] Alternatively, 2-D objects may be used as user interface components. As in the case of 3-D rendered objects, audio visualization effects, which vary in accordance with the audio data, may be superposed upon the 2-D objects.

    [0063] Alternatively, instead of having superposed audio visualization effects, the size of one or several user interface components may be modified in accordance with the extracted audio features. For instance, the user interface components may be configured to change size in accordance with the amount of bass frequencies in the audio data. In this way, the size of the user interface component will be large during a drum solo and small during a guitar solo. Other options are that the colour, the orientation, the shape, the animation speed or other animation-specific attributes, such as the zooming level in a fractal animation, of the user interface components change in accordance with the audio data.
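    The size modification described above can be sketched as follows, assuming the extracted feature is a magnitude spectrum and taking a 100 Hz bass cutoff and the pixel sizes as illustrative values only:

```python
import numpy as np

def bass_energy_fraction(freqs, spectrum, cutoff_hz=100.0):
    """Fraction of total spectral energy below the bass cutoff."""
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs < cutoff_hz].sum() / total

def icon_size(freqs, spectrum, base_px=48, max_extra_px=32):
    """Grow an icon from base_px towards base_px + max_extra_px as bass content rises."""
    return int(base_px + max_extra_px * bass_energy_fraction(freqs, spectrum))

# Illustrative spectrum: half of the energy sits in a 50 Hz bass bin
freqs = np.array([50.0, 500.0, 5000.0])
spectrum = np.array([1.0, 1.0, 0.0])
size = icon_size(freqs, spectrum)                   # 48 + 32 * 0.5 = 64 px
```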

    [0064] If so-called environment mapping is utilised, existing solutions for music visualization may be used. This is an advantage since no new algorithms must be developed. Another advantage of using so-called environment mapping is that a dynamically changing environment map emphasizes the shape of a 3-D object, making UI components easier to recognize.

    [0065] Optionally, different user interface components may be associated with different frequencies. For instance, when playing a rock song comprising several different frequencies, a first user interface component, such as a “messages” icon, may change in accordance with high frequencies, i.e. treble frequencies, and a second user interface component, such as a “contacts” icon, may change in accordance with low frequencies, i.e. bass frequencies.
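    A minimal sketch of such a per-component frequency association, assuming a magnitude spectrum as input (the band limits and component names are hypothetical examples), sums the spectral energy per band so that each band can drive its own icon:

```python
import numpy as np

def band_energies(freqs, spectrum, bands):
    """Sum the spectral energy falling into each named frequency band."""
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# Hypothetical mapping of UI components to the bands that drive them
component_bands = {
    "messages": (2000.0, 20000.0),   # treble frequencies drive the "messages" icon
    "contacts": (20.0, 250.0),       # bass frequencies drive the "contacts" icon
}

freqs = np.linspace(0.0, 10000.0, 100)   # illustrative flat spectrum
spectrum = np.ones(100)
energies = band_energies(freqs, spectrum, component_bands)
```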

    [0066] The procedure of receiving audio data, 100, extracting an audio feature, 102, and modifying a UI component in accordance with the extracted audio feature, 104, may be repeated continuously as long as audio data is received. The procedure may, for instance, be repeated once every time the display is updated.
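    The repetition of steps 100, 102 and 104 once per display update can be sketched as a simple loop over incoming audio blocks. The peak-level feature and the size mapping in the callback are placeholder assumptions, not the disclosed feature extraction:

```python
def run_visualization(audio_blocks, update_ui):
    """Run one extract-and-modify pass per received audio block,
    e.g. once per display refresh, for as long as audio data arrives."""
    frames = 0
    for block in audio_blocks:
        feature = max((abs(s) for s in block), default=0.0)  # placeholder peak-level feature
        update_ui(feature)
        frames += 1
    return frames

# Illustrative run: map the feature to an icon size on each "display update"
sizes = []
n = run_visualization([[0.1, -0.2], [0.5], []],
                      lambda f: sizes.append(48 + int(32 * f)))
```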

    [0067] FIG. 2 schematically illustrates a module 200. The module 200 may be a software implemented module or a hardware implemented module, such as an ASIC, or a combination thereof, such as an FPGA circuit.

    [0068] Audio data can be input to an audio feature extractor 202. Thereafter, one or several audio features can be extracted from the audio data, and then the extracted features can be transmitted to a user interface (UI) modifier 204. UI modification data can be generated in the UI modifier 204 based upon the extracted audio feature(s). After having generated UI modification data, this data can be output from the module 200.
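    As a purely illustrative software sketch of this data flow, assuming a software-implemented module 200 (the class names, the placeholder feature and the format of the UI modification data are assumptions, not part of the disclosure), the extractor-to-modifier chain could look like:

```python
class AudioFeatureExtractor:
    """Counterpart of element 202: extracts a feature from an audio block."""
    def extract(self, audio_block):
        # Placeholder feature: mean absolute amplitude of the block
        return sum(abs(s) for s in audio_block) / max(len(audio_block), 1)

class UIModifier:
    """Counterpart of element 204: turns a feature into UI modification data."""
    def modify(self, feature):
        # Hypothetical modification data consumed by a downstream graphics engine
        return {"scale": 1.0 + feature}

class Module:
    """Counterpart of module 200: chains extractor and modifier."""
    def __init__(self):
        self.extractor = AudioFeatureExtractor()
        self.modifier = UIModifier()

    def process(self, audio_block):
        return self.modifier.modify(self.extractor.extract(audio_block))

ui_data = Module().process([0.5, -0.5, 0.25, -0.25])
```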

    [0069] The UI modification data may be data representing the extracted audio feature(s). Then, a graphics engine (not shown) is configured to receive the UI modification data, and based upon this UI modification data and original graphics data, the graphics engine is configured to determine graphics data comprising audio visualization effects.

    [0070] Alternatively, the UI modification data may be complete graphics data containing audio visualization effects. In other words, the graphics engine may be contained within said module 200.

    [0071] Optionally, the module may further comprise an audio feature classifier 206. The function of the audio feature classifier 206 can be to find characteristic features of the audio signal. Such a characteristic feature may be the amount of audio data corresponding to a certain frequency, such as a bass frequency or a treble frequency. Alternatively, if different UI components correspond to different characteristic features, a number of characteristic features may be determined in the audio feature classifier 206.

    [0072] If an audio feature classifier 206 is present, a memory 208 comprising a number of predetermined feature representations may be present as well. A predetermined feature representation may, for instance, be the amount of audio data corresponding to a sound between 20 Hz and 100 Hz. The number of predetermined feature representations, i.e. the resolution of the classification, may be user configurable, as well as the limits of each of the predetermined feature representations.

    [0073] Optionally, the module 200 may comprise an audio detector 209 configured to receive an audio detection signal. The audio detection signal may be transmitted from the music player when the playing of a song is started, or, alternatively, when the radio is switched on. When the audio detection signal is received, an audio activation signal is transmitted to the audio feature extractor 202, the UI modifier 204 or the audio feature classifier 206.

    [0074] Optionally, the module 200 may further comprise a memory 210 containing UI modification themes. A UI modification theme may comprise information of how the extracted audio feature(s) is to be presented in the UI. For instance, the extracted audio feature(s) may be presented as a histogram superposed on a 3-D rendered UI component, or the extracted audio feature(s) may be presented as a number of circles superposed on a 3-D rendered UI component.

    [0075] FIG. 3 schematically illustrates an apparatus 300, such as a mobile communication terminal, comprising the module 200, a music player 302, a graphics engine 304, a display 306, optionally a keypad 308 and optionally an audio output 310, such as a loudspeaker or a headphone output.

    [0076] When a song is started in the music player 302, which may occur after key input actuation data has been received from the keypad 308, audio data and, optionally, an audio activation signal, are transmitted from the music player 302 to the module 200. Optionally, audio data may also be transmitted to the audio output 310.

    [0077] The module 200 is configured to generate UI modification data from extracted audio features of the audio data as is described above. The UI modification data generated by the module 200 can be transmitted to the graphics engine 304. The graphics engine 304 can, in turn, be configured to generate graphics data presenting the extracted features of the audio data by using the UI modification data.

    [0078] After having determined the graphics data, this data may be transmitted to the display 306, where it is shown to the user of the apparatus 300. Alternatively, if the graphics engine 304 is comprised within the module 200, graphics data is transmitted directly from the module 200 to the display 306.

    [0079] FIG. 4 illustrates an example of a user interface 400 with user interface components being modified in accordance with audio data.

    [0080] A first user interface component may be illustrated as a “music” icon comprising a 3-D cuboid 402. Audio visualization effects in the form of a frequency diagram 404 can be superposed on the sides of the 3-D cuboid 402. Moreover, an identifying text “MUSIC” 406 may be available in connection with the 3-D cuboid 402.

    [0081] A second user interface component illustrates a “messages” icon comprising a 3-D cylinder 408. Audio visualization effects in the form of a number of rings 410a, 410b, 410c may be superposed on the top of the 3-D cylinder 408. Moreover, an identifying text “MESSAGES” 412 may be available in connection with the 3-D cylinder 408.

    [0082] A third user interface component illustrates a “contacts” icon comprising a 3-D cylinder 414. Audio visualization effects in the form of a 2-D frequency representation 416 may be superposed on the top of the 3-D cylinder 414. Moreover, an identifying text “CONTACTS” 418 may be available in connection with the 3-D cylinder 414.

    [0083] A fourth user interface component illustrates an “Internet” icon comprising a 3-D cuboid 420. Audio visualization effects in the form of a number of stripes 422a, 422b, 422c may be superposed on the sides of the 3-D cuboid 420. Moreover, an identifying text “Internet” 424 may be available in connection with the 3-D cuboid 420.

    [0084] The disclosed embodiments have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the disclosed embodiments, as defined by the appended patent claims.