APPARATUS AND METHOD FOR GENERATING OLFACTORY INFORMATION RELATED TO MULTIMEDIA CONTENT
20190019033 · 2019-01-17
Assignee
Inventors
- Sung June Chang (Daejeon, KR)
- Hae Ryong Lee (Daejeon, KR)
- Jun Seok Park (Daejeon, KR)
- Joon Hak BANG (Sejong-si, KR)
- Jong Woo Choi (Daejeon, KR)
- Sang Yun Kim (Daejeon, KR)
- Hyung Gi BYUN (Seoul, KR)
- Jang Sik CHOI (Donghae-si, KR)
Cpc classification
G06F18/254
PHYSICS
G06V20/46
PHYSICS
G06V20/41
PHYSICS
G01N33/0062
PHYSICS
G06F16/48
PHYSICS
G06V10/809
PHYSICS
International classification
G01N33/00
PHYSICS
Abstract
An apparatus for generating olfactory information related to multimedia content may comprise a processor. The processor may receive multimedia content, extract an odor image or an odor sound included in the multimedia content, and generate representative data related to the odor image or the odor sound by describing information on the extracted odor image or odor sound in a data format sharable by a media thing.
Claims
1. An olfactory information generator which generates olfactory information sharable between the real world and at least one virtual world, the olfactory information generator comprising a processor, wherein the processor receives multimedia content, extracts an odor image or an odor sound included in the multimedia content, and generates representative data related to the odor image or the odor sound by describing information on the extracted odor image or odor sound in a data format sharable by a media thing.
2. The olfactory information generator of claim 1, wherein the processor analyzes the extracted odor image or odor sound and generates text-based label information capable of describing an odor of the odor image or the odor sound through a semantic evaluation or an abstraction process related to the analyzed odor image or odor sound.
3. The olfactory information generator of claim 2, wherein the processor updates the label information of the extracted odor image or odor sound by applying a pattern recognition technique to odor image or odor sound data included in a database related to the extracted odor image or odor sound.
4. The olfactory information generator of claim 1, wherein the processor extracts each of a plurality of odor images or odor sounds included in the multimedia content and generates the representative data by using information on each of the plurality of extracted odor images or odor sounds, with a weight.
5. The olfactory information generator of claim 1, wherein the processor generates the representative data by using synchronization information between the extracted odor image or odor sound and the multimedia content to form a scent emitting sequence corresponding to the odor image or the odor sound to be synchronized with execution of the multimedia content.
6. The olfactory information generator of claim 1, wherein the processor receives sensory information related to a scent in the real world, which is generated by a gas sensor, extracts odor image or odor sound information related to content of the multimedia content, which is time-synchronized with the sensory information, and generates the representative data by adding the sensory information to the odor image or odor sound information extracted in relation to the content time-synchronized with the sensory information.
7. An olfactory information generator which generates olfactory information sharable between a real world and at least one virtual world, the olfactory information generator comprising a processor, wherein the processor obtains text-based label information related to a scent component included in a scent cartridge and generates representative data related to the label information by describing information on the label information related to the scent component in a data format sharable by a media thing.
8. The olfactory information generator of claim 7, wherein the processor searches an odor information-odor image association database, an odor information-odor sound association database, or an odor information-label information association database for the scent component and extracts the label information corresponding to the scent component.
9. The olfactory information generator of claim 7, wherein the processor obtains the label information by a user input, extracts modified label information corresponding to the label information by searching an odor information-odor image association database, an odor information-odor sound association database, or an odor information-label information association database for the label information, and generates the representative data in connection with the label information and the modified label information.
10. The olfactory information generator of claim 7, wherein the processor, periodically or when a particular event occurs, executes searching of an odor information-odor image association database, an odor information-odor sound association database, or an odor information-label information association database, executes pattern recognition or text syntax analysis, and updates the label information.
11. A method of generating olfactory information, in which olfactory information sharable between a real world and at least one virtual world is generated, the method comprising: receiving multimedia content; extracting an odor image or an odor sound included in the multimedia content; and describing information on the extracted odor image or odor sound in a data format sharable by a media thing.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0028] Example embodiments of the present invention will become more apparent by describing in detail example embodiments of the present invention with reference to the accompanying drawings, in which:
DESCRIPTION OF EXAMPLE EMBODIMENTS
[0054] Embodiments of the present disclosure are disclosed herein. However, the specific structural and functional details disclosed herein are merely representative for purposes of describing embodiments of the present disclosure; embodiments of the present disclosure may be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
[0055] Accordingly, while the present disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the present disclosure to the particular forms disclosed, but on the contrary, the present disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure. Like numbers refer to like elements throughout the description of the figures.
[0056] It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
[0057] It will be understood that when an element is referred to as being connected or coupled to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being directly connected or directly coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (i.e., between versus directly between, adjacent versus directly adjacent, etc.).
[0058] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present disclosure. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0059] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this present disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0060] Hereinafter, embodiments of the present disclosure will be described in greater detail with reference to the accompanying drawings.
[0061] The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the systems, apparatuses and/or methods described herein will be apparent to one of ordinary skill in the art. Also, descriptions of functions and constructions that are well known to one of ordinary skill in the art may be omitted for increased clarity and conciseness.
[0062] Throughout the drawings and the detailed description, the same reference numerals refer to the same elements. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
[0063] The features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein.
[0064] A general virtual world processing system included as a part of a configuration of the present invention may correspond to an engine, a virtual world, and the real world. In the real world, an electronic nose (E-nose) apparatus senses information related to the real world or a scent emitting device embodies information related to a virtual world in the real world. Also, the virtual world may include a virtual world itself embodied by a program or a scent media reproducer which reproduces content including scent-emitting information capable of being embodied in the real world.
[0065] For example, a scent in the real world, information on abilities and data of the E-nose apparatus, and the like may be sensed and transmitted to an engine by the E-nose apparatus. Also, the E-nose apparatus may include an E-nose Capability Type which transfers the abilities and data of the E-nose apparatus to the engine, an Odor Sensor Technology Classification Scheme which describes a type of sensor necessary for definition of the E-nose Capability Type, and an Enose Sensed Info Type which transfers information recognized by the E-nose apparatus to the engine.
[0066] The engine may transmit sensed information to a virtual world. Here, the sensed information is applied to the virtual world such that an effect corresponding to the Enose sensed info type corresponding to a scent of the real world may be embodied in the virtual world.
[0067] An effect event which occurs in the virtual world may be driven by the scent emitting device of the real world. Virtual information (sensory effects) related to the effect event which occurs in the virtual world may be transmitted to the engine. Also, virtual world object characteristics may be mutually transmitted between the virtual world and the engine.
[0068] The scent emitting device which exists in the real world and accommodates user preference will be described in the realm of Internet of Media Things and Wearables (IoMT). The scent emitting device exists in the real world and emits a scent to a user to allow the user to be synchronized with content of the virtual world and to have a realistic experience. For this, that which transfers the abilities and data of the scent emitting device to the engine is referred to as a Scent Capability Type. Also, that which accommodates a preference of the user to compensate for a difference in characteristics of a scent provided by the scent emitting device and a scent sensed by the user is referred to as a Scent Preference Type. Also, that which commands in order to allow the scent emitting device to emit a scent is referred to as a scent effect.
[0069] A generalized virtual world processing method included as a part of a configuration of the present invention may be performed by mutually transmitting olfactory information between a virtual world, the real world, and another virtual world to represent the olfactory information through the scent emitting device. The generalized virtual world processing method may obtain virtual information which is olfactory information of the virtual world, obtain real information that is olfactory information of the real world through a reality recognizer which is an apparatus which recognizes a scent, provide the virtual information to the real world or the other virtual world, provide the real information to the virtual world or the other virtual world, and emit a scent to a user through a scent emitting device on the basis of the virtual information and the real information.
[0070] The real information includes a type of sensor necessary for defining the E-nose Capability Type which transfers the abilities and data of the E-nose apparatus which is the reality recognizer, the Scent Sensor Technology CS, information recognized by the E-nose, and the Enose Sensed Info Type which is a part which transfers the information recognized by the Enose.
[0071] Also, an operation of defining the Scent Capability Type, which transfers the ability and data of the scent emitting device which emits a scent to the engine, an operation of defining a Scent Preference Type which transfers a user preference to compensate for a difference in characteristics of a scent provided by the scent emitting device and a scent sensed by the user, and an operation of defining a Scent Effect which commands in order to allow the scent emitting device to emit a scent are included.
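The three data types named above can be sketched as plain records. This is a minimal illustration only: the type names come from the text, but every field name and the preference-scaling rule are assumptions, not the specification's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ScentCapabilityType:
    # Abilities and data of the scent emitting device, reported to the engine.
    available_scents: list    # scent components loaded in the cartridge
    max_intensity: float      # strongest emission the device supports

@dataclass
class ScentPreferenceType:
    # Per-user adjustment compensating for differences between the scent
    # provided by the device and the scent sensed by the user.
    scent: str
    intensity_scale: float = 1.0

@dataclass
class ScentEffect:
    # Command instructing the scent emitting device to emit a scent.
    scent: str
    intensity: float
    duration_ms: int

def apply_preference(effect: ScentEffect, prefs: list) -> ScentEffect:
    """Scale a commanded effect by the user's preference, if one exists."""
    for p in prefs:
        if p.scent == effect.scent:
            return ScentEffect(effect.scent,
                               effect.intensity * p.intensity_scale,
                               effect.duration_ms)
    return effect
```

A device driver could run each incoming `ScentEffect` through `apply_preference` before actuating the cartridge.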
[0072] The terms "scent display" and "olfactory display" as used herein refer to a device which adds a scent to content and provides the user with the scent-added content while interworking with, for example, a personal computer, a laptop computer, a mobile terminal, a television, or an audiovisual display such as a head mounted display (HMD). The scent display or the olfactory display may include a scent cartridge which includes a scent component and may further include a controller or a processor which controls the scent cartridge to embody a scent atmosphere by discharging the scent component or a combination of scent components.
[0074] The olfactory information generator extracts an odor image from multimedia content, such as an image, and describes the odor image in a data format sharable with a media thing. The olfactory information generator may extract an imagery component of a sense which is associated with a scent according to characteristics of the multimedia content. When the multimedia content includes a sound as a significant component, a sound which is associated with a particular scent may be extracted as an odor sound. For example, a meat-roasting sound may be classified as an odor sound which is associated with a scent of meat, and a fruit-cutting or cooking sound may be classified as an odor sound which is associated with a scent of fruit.
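The sound-to-scent association above can be sketched as a simple lookup, following the meat-roasting and fruit-cutting examples in the text. The sound classifier itself is assumed to exist elsewhere; only its string output is mapped here, and the class names are illustrative.

```python
# Hypothetical mapping from a recognized sound class to the scent it is
# associated with; classes and labels follow the examples in the text.
ODOR_SOUND_TO_SCENT = {
    "meat_roasting": "scent_of_meat",
    "fruit_cutting": "scent_of_fruit",
    "fruit_cooking": "scent_of_fruit",
}

def classify_odor_sound(sound_class):
    """Return the scent label associated with a sound class, or None
    when the sound carries no olfactory association (e.g. speech)."""
    return ODOR_SOUND_TO_SCENT.get(sound_class)
```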
[0075] For convenience of description, a following description will focus on multimedia content with an emphasis on visual components such as a video and an odor image. However, the concept of the present invention is not limited to embodiments. The concept of the present invention described with respect to an odor image may be easily modified and applied to an odor sound or imagery component of another sense which is associated with a scent. For example, a component which generates label information and derives text-based information related to an odor image, odor sound, or imagery component of another sense which is associated with a scent may be applied to each and may also be applied to a post label information generation component or imagery components of a variety of senses.
[0076] The olfactory information generator according to one embodiment of the present invention may be a media thing which has a multimedia function. The olfactory information generator extracts an odor image capable of influencing olfactory senses by analyzing multimedia content and selects a scent component or a combination of scent components matching with characteristics of the odor image.
[0077] Referring to
[0078] In
[0079] In
[0080] One example of the olfactory information generator according to the present invention may be an olfactory-media composer shown in
[0081] Here, the olfactory-media composer is shown as an independent apparatus in
[0082] Although one odor image may be representatively extracted from one piece of multimedia content, a plurality of components may complexly or individually/independently be associated with a scent. In this case, a plurality of odor images extracted from one piece of multimedia content may be represented as weighted representative data.
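One way to form the weighted representative data described above is to accumulate a weight per extracted odor image and normalize. The `(label, weight)` pair representation and the normalization step are assumptions for illustration, not the patent's prescribed encoding.

```python
def build_representative_data(odor_images):
    """Combine several odor images extracted from one piece of content
    into weighted representative data.

    odor_images: list of (scent_label, raw_weight) pairs.
    Returns {scent_label: normalized_weight}, with weights summing to 1.
    """
    totals = {}
    for label, weight in odor_images:
        # The same scent may be contributed by several components;
        # accumulate their weights.
        totals[label] = totals.get(label, 0.0) + weight
    total = sum(totals.values())
    return {label: w / total for label, w in totals.items()}
```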
[0083] The extracted odor image may be transmitted to an apparatus capable of embodying olfactory information with the multimedia content, for example a scent emitting device. An olfactory display capable of being related to and synchronized with multimedia content to discharge a particular scent may embody the olfactory information. The extracted odor image may be, for example, transmitted to the olfactory display and synchronized with the multimedia content to be embodied such that multi-dimensional/multi-channel multimedia content including the olfactory information may be provided to a user.
[0084] The extracted odor image may be processed to be represented as text-based information. The odor image is evaluated and classified by a plurality of users or a trained group of experts and results thereof are described in order to be represented as text-based information related to the odor image. The text-based information may be referred to as tag information or label information related to the odor image.
[0085] The label information related to the odor image may include a source (related content) mark which refers to the multimedia content from which the odor image is obtained. The label information related to the odor image may competitively represent the concepts of a plurality of independent scents obtainable from one piece of content. Also, the label information related to the odor image may hierarchically represent abstract superordinate concepts and subordinate concepts related to one scent obtainable from one piece of content (for example, a smell of fruit->a smell of apples or a sweet smell->a smell of fruit).
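A label-information record along the lines described above might carry a source mark, competing independent scent labels, and a superordinate-to-subordinate hierarchy. The field names and the sample values below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class OdorImageLabel:
    source: str              # content the odor image was obtained from
    candidate_labels: list   # competing independent scent concepts
    # Hierarchy from abstract superordinate to concrete subordinate,
    # e.g. "sweet smell" -> "smell of fruit" -> "smell of apples".
    hierarchy: list = field(default_factory=list)

label = OdorImageLabel(
    source="apple_orchard.mp4",
    candidate_labels=["smell of apples", "sweet smell"],
    hierarchy=["sweet smell", "smell of fruit", "smell of apples"],
)
```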
[0086] A process of obtaining the tag information or label information related to the extracted odor image may be performed through evaluation and classification by a plurality of users or a trained group of experts in early stages. When the evaluation, classification, and description information from the early stages are collected, tag information or label information related to a similar or relevant odor image may be recognized based on pattern recognition. A process of recognizing label information of an odor image may be executed using an artificial intelligence (AI) machine learning technology.
[0087] An olfactory information generator according to another embodiment of the present invention synchronizes odor information sensed by a gas sensor with multimedia content and stores the synchronized information. Referring to
[0088] Here, the olfactory-media composer is shown as an independent apparatus in
[0089] Referring back to
[0091] The olfactory-media composer, as one example of the olfactory information generator of the present invention, obtains text-based label information related to the scent component included in the scent cartridge. Here, when the text-based label information related to the scent component does not exist, the text-based label information may be generated by analyzing odor information of the scent component. When the odor information of the scent component is analyzed, odor information produced when the scent component is actually discharged may be collected by using a gas sensor such as the E-nose. With respect to the collected odor information, an odor image may be extracted and label information related to the odor image may be obtained by searching a previously analyzed odor information-odor image association database to generate the label information related to the scent component.
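The fallback path above can be sketched as follows: when no stored label exists, sample the discharged component with a gas sensor and look the reading up in an odor-information association database. Representing the database as a dict keyed by a quantized sensor signature is an assumption for illustration only.

```python
# Hypothetical association database: quantized E-nose signature -> label.
ODOR_INFO_TO_LABEL = {
    (1, 0, 2): "smell of apples",
    (0, 3, 1): "smell of meat",
}

def quantize(reading):
    """Reduce a raw multi-channel gas-sensor reading to a coarse signature."""
    return tuple(int(round(x)) for x in reading)

def label_for_component(existing_label, sensor_reading):
    """Use the stored label when present; otherwise derive one from the
    gas-sensor reading via the association database."""
    if existing_label is not None:
        return existing_label
    return ODOR_INFO_TO_LABEL.get(quantize(sensor_reading), "unknown")
```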
[0092] In another embodiment, it may be assumed that text-based label information related to a scent component is input by a user. Here, the label information related to the scent component may not be identical to generally used label information related to the odor image. The olfactory information generator may collect label information highly related to the label information input by the user and the label information related to the odor image related to the scent component through analyzing syntax of a text. The olfactory information generator may store the label information input by the user related to the scent component and label information (generalized, standardized, or previously collected label information) derived through executing pattern recognition, database searching, and syntax analysis of the text together in the memory or the database.
[0093] When the label information input by the user related to the scent component does not coincide with the label information of the odor image of the multimedia content which is to be provided to the user, the olfactory information generator may match the label information input by the user related to the scent component with the label information of the odor image of the multimedia content by using the label information derived through pattern recognition, searching the database for the label information of the odor image related to the scent component, and analyzing the syntax of the text.
[0094] With respect to first label information of the scent component, which is derived first by analyzing the scent component, the olfactory information generator may obtain second label information of the scent component, which is updated periodically or whenever a particular event (a user command, addition of multimedia content data, or addition of an odor image database) occurs, through pattern recognition, searching the database, and analyzing the syntax of the text.
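The update policy above reduces to a small decision: re-derive the first label into second label information when one of the named events occurs, otherwise keep it. The event names and the pluggable `derive` function are illustrative assumptions.

```python
# Events named in the text that warrant re-deriving label information.
UPDATE_EVENTS = {"user_command", "content_added", "odor_image_db_added"}

def maybe_update_label(first_label, event, derive):
    """Return second label information when `event` warrants an update
    (via the supplied derivation function), else keep the first label."""
    if event in UPDATE_EVENTS:
        return derive(first_label)
    return first_label
```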
[0095] A processor of the scent display shown in
[0096] As one embodiment of the olfactory information generator of the present invention, the processor of the scent display may transmit a search query related to a particular scent component to a scent & label database, and prestored cartridge scent label information (202) may be transmitted from the scent & label database to the processor of the scent display. Meanwhile, an odor image & label database may transmit an odor image and label information corresponding to the odor image in response to a search query of the odor image analyzer processor.
[0098] Referring to
[0099] The olfactory-media composer transmits OdorImageRecognizerOutputs, which is standardized label information, to a storage through the wrapped interface for data transmission and sharing (305). The standardized label information stored in the storage is transmitted to the processor of the olfactory display (305). Using the label information of the odor image of the transmitted multimedia content, the olfactory display controls scent emission to discharge a scent component, or a combination of a plurality of scent components, equipped in its scent cartridge, thereby performing a scent-emitting treatment which interworks with the image content (306).
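The steps above can be sketched as a time-ordered emission schedule built from standardized label records. The record name echoes OdorImageRecognizerOutputs from the text, but the `time_s` and `scent_label` fields are assumptions for illustration.

```python
def build_emission_schedule(recognizer_outputs):
    """Turn standardized label records into a scent-emitting sequence
    ordered by content time, so emission stays synchronized with playback.

    recognizer_outputs: list of dicts with 'time_s' and 'scent_label'.
    """
    return sorted(recognizer_outputs, key=lambda r: r["time_s"])

# Hypothetical records as the olfactory display might receive them.
outputs = [
    {"time_s": 42.0, "scent_label": "smell of meat"},
    {"time_s": 10.5, "scent_label": "smell of fruit"},
]
schedule = build_emission_schedule(outputs)
```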
[0102] Referring to
[0103] As described above, label information related to a particular odor image may be represented, and additional label information related to an abstract superordinate concept suggested by the label information may be added.
[0104] Alternatively, a plurality of superordinate concepts related to one odor image may be competitively listed. For example, since "orange" may be connected to a superordinate concept such as "fruit" and to an abstract concept such as "sweet," it may be linked to both keywords.
[0105] Semantic similarity or semantic relation among the keywords of the odor image may be obtained by applying a natural language processing principle and may be further specified and diversified by artificial intelligence-based machine learning.
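As a toy stand-in for the natural language processing and machine learning methods named above, keyword relatedness can be illustrated with hand-written association sets and their Jaccard overlap. Every association below is an assumption; a real system would learn these relations.

```python
# Hypothetical association sets per keyword (a real system would use
# NLP embeddings or a learned model instead of hand-written sets).
ASSOCIATIONS = {
    "orange": {"fruit", "sweet", "citrus"},
    "apple":  {"fruit", "sweet"},
    "smoke":  {"fire", "meat"},
}

def keyword_similarity(a, b):
    """Jaccard similarity between the association sets of two keywords;
    0.0 when either keyword is unknown."""
    sa, sb = ASSOCIATIONS.get(a, set()), ASSOCIATIONS.get(b, set())
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```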
[0106] Referring to
[0109] Referring to
[0112] Referring to
[0113] Referring to
[0114] Although one example in which the olfactory-media composer and the odor image analyzer processor are distinguished from each other is shown in
[0119] Referring to the schema diagram of
[0120] Referring to
[0125] Referring to the schema diagram of
[0130] Referring to
[0136] Referring to
[0137] Also, since even the same scent component may be recognized differently by a human being depending on its tagging ratio, the representative data handled in the system environment including the olfactory display or the scent display may include scentLabel and tagging ratio as data fields.
[0138] Here, the tagging ratio may be applied as a concept corresponding to a concentration of gas or may be applied as a concept corresponding to a strength defined through evaluation by a plurality of users or a trained expert. That is, although an example in which the tagging ratio has a certain value is shown in
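A representative-data record with the two fields named above might look as follows. Whether the ratio is read as a gas concentration or a user-evaluated strength is left to the consuming device, as the text describes; the 0-to-1 range is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class RepresentativeData:
    scent_label: str      # the scentLabel field
    tagging_ratio: float  # concentration- or strength-like weight in [0, 1]

    def __post_init__(self):
        # Guard the assumed normalized range.
        if not 0.0 <= self.tagging_ratio <= 1.0:
            raise ValueError("tagging_ratio must be within [0, 1]")

record = RepresentativeData(scent_label="smell of apples", tagging_ratio=0.7)
```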
[0139] In
[0140] When the olfactory information generator (the olfactory-media composer) is embodied as a separate media thing, the olfactory information generator may include a processor, a memory, a storage, and a communication module. The processor may perform functions of extracting an odor image, recognizing label information of the odor image (or transmitting a command to another media thing for recognition), and the like. Necessary information may be stored in a memory or a storage, and a communication module may be included for communication and sharing with other media things.
[0141] In still another embodiment, a processor included in the olfactory display (including the scent emitting device) may operate as the olfactory information generator. The olfactory information generator may further include a memory, a storage, and a communication module in addition to the processor.
[0142] The embodiments of the present disclosure may be implemented as program instructions executable by a variety of computers and recorded on a computer readable medium. The computer readable medium may include a program instruction, a data file, a data structure, or a combination thereof. The program instructions recorded on the computer readable medium may be designed and configured specifically for the present disclosure or may be publicly known and available to those who are skilled in the field of computer software. Examples of the computer readable medium may include magnetic media such as a hard disk, floppy disk, and magnetic tape, optical media such as CD-ROM and DVD, magneto-optical media such as a floptical disk, and ROM, RAM, and flash memory, which are specifically configured to store and execute the program instructions. Examples of the program instructions include machine codes made by, for example, a compiler, as well as high-level language codes executable by a computer using an interpreter. The above exemplary hardware device can be configured to operate as at least one software module in order to perform the embodiments of the present disclosure, and vice versa.
[0143] While the example embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the scope of the invention.