Information providing system and information providing method
11520822 · 2022-12-06
Assignee
Inventors
Cpc classification
G06F16/7867
PHYSICS
G06F16/9035
PHYSICS
International classification
G06F16/00
PHYSICS
G06F16/78
PHYSICS
Abstract
A content model database stores past target information, which includes past first video information acquired in advance, reference IDs, which are linked with the past target information, and which correspond to contents, and three or more levels of degrees of content association between the past target information and the reference IDs. A first acquiring unit acquires the target information from a user terminal, a first evaluation unit looks up the content model database and acquires ID information, which includes the degrees of content association between the target information and the reference IDs, and a judging unit judges the ID information. Contents that correspond to the ID information are output to the user terminal based on the result of judgment by the judging unit.
Claims
1. An information providing system for narrowing down a plurality of contents to output to a user terminal, based on target information, which includes first video information acquired from the user terminal, with reference to a database, the information providing system comprising: a content model database that stores past target information, which includes past first video information acquired in advance, reference IDs, which are linked with the past target information, and which correspond to the contents, and three or more levels of degrees of content association, which show the degrees of content association between the past target information and the reference IDs; and a hardware processor that is configured to: acquire the target information from the user terminal; look up the content model database, and acquire ID information, which includes the degrees of content association between the target information and the reference IDs; judge the ID information; and output the contents, which correspond to the ID information, to the user terminal, based on a result of the judgement, wherein, in performing the output of the contents, the hardware processor is configured to: generate a reference ID list, which includes a plurality of pieces of ID information, based on the result of the judgement; acquire a first reference ID, which is included in the reference ID list, from the user terminal; and output, as the contents, content which corresponds to the first reference ID, to the user terminal.
2. The information providing system according to claim 1, wherein, in performing the output of the contents, the hardware processor is further configured to generate a reference summary list, which includes a plurality of summaries, corresponding respectively to the plurality of pieces of ID information included in the reference ID list, and acquire a first summary, which is selected from the reference summary list via the user terminal, and acquire the first reference ID, which corresponds to the first summary, from the reference ID list.
3. The information providing system according to claim 1, wherein the database comprises a transition information table, in which order information related to content outputs corresponding to the ID information is stored in advance, and wherein, in performing the judgement, the hardware processor is configured to look up the transition information table, and judge whether or not the order information related to the content outputs corresponding to the ID information is present.
4. The information providing system according to claim 3, wherein, when there is no order information related to the content outputs corresponding to the acquired ID information in the order information, the hardware processor is configured to judge that there is no content that corresponds to the ID information.
5. The information providing system according to claim 1, wherein the database comprises: a scene model database that stores past second video information, which is acquired in advance, scene information, which includes scene IDs linked with the past second video information, and three or more levels of degrees of scene association, which show the degrees of scene association between the past second video information and the scene information; wherein the hardware processor is further configured to: acquire target information, which includes the second video information, from the user terminal, before acquiring the target information including the first video information from the user terminal; look up the scene model database, and acquire a scene ID list, which includes degrees of scene association between the second video information and the scene information; and generate a scene name list, which includes a plurality of scene names corresponding to the scene ID list, and wherein the hardware processor acquires the target information, which includes the second video information and scene IDs corresponding to scene names selected from the scene name list, from the user terminal.
6. The information providing system according to claim 5, wherein the hardware processor is further configured to: acquire scene names corresponding to the past first video information and the past second video information, which are acquired in advance; generate scene IDs with smaller amounts of information than the scene names, for each of the scene names acquired; and generate the scene model database by way of machine learning using the scene information and the past second video information.
7. The information providing system according to claim 6, wherein the hardware processor is further configured to: acquire the past second video information, which is acquired in advance, and contents corresponding to the past second video information; generate content IDs with smaller amounts of information than the contents, for each of the contents acquired; and generate the content model database by way of machine learning using reference information, which includes at least the content IDs and the past target information.
8. The information providing system according to claim 7, wherein the content IDs are associated with a plurality of pieces of meta information.
9. The information providing system according to claim 5, wherein the user terminal further comprises: receiving means for receiving the scene ID list; judging means for judging whether or not a scene ID included in the received scene ID list is present in a cache area of the user terminal, based on a result of receiving the scene ID list; and inquiring means for making an inquiry to a content information database holding contents, when, according to a result of the judgement, the scene ID is not present in the cache area of the user terminal.
10. The information providing system according to claim 5, wherein the user terminal further comprises: receiving means for receiving the reference ID list; judging means for judging whether a reference ID included in the received reference ID list is present in a cache area of the user terminal, based on a result of receiving the reference ID list; and inquiring means for making an inquiry to a content information database holding contents, when, according to a result of the judgement, the reference ID is not present in the cache area of the user terminal.
11. The information providing system according to claim 1, wherein the user terminal comprises a display unit that is mounted on a head or glasses and that displays information generated based on the first video information acquired from the user terminal, in a transparent state.
12. The information providing system according to claim 1, wherein the contents comprise information of at least part or all of text, illustration, video, and audio.
13. An information providing method for narrowing down a plurality of contents to output to a user terminal with reference to a database, based on first video information acquired from the user terminal, the information providing method comprising: making a content model database store past target information, which includes past first video information acquired in advance, reference IDs, which are linked with the past target information, and which correspond to the contents, and three or more levels of degrees of content association, which show the degrees of content association between the past target information and the reference IDs; acquiring the target information from the user terminal; looking up the content model database, and acquiring ID information, which includes the degrees of content association between the target information and the reference IDs; judging the ID information; and outputting the contents, which correspond to the ID information, to the user terminal, based on a result of judgement in the judging, wherein said outputting the contents comprises: generating a reference ID list, which includes a plurality of pieces of ID information, based on the result of judgement; acquiring a first reference ID, which is included in the reference ID list, from the user terminal; and outputting, as the contents, content which corresponds to the first reference ID, to the user terminal.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
(26) Hereinafter, examples of information providing systems and information providing methods according to embodiments of the present invention will be described with reference to the accompanying drawings.
First Embodiment
(28) An example of the configuration of the information providing system 1 according to the first embodiment will be described with reference to
(29) As shown in
(30) The management server 2 is used, for example, to perform work such as maintenance, safeguarding, repair, etc. of devices installed in the field. The management server 2 acquires, for example, video information of devices or the like from the user terminal 7 in the field. The management server 2 acquires each piece of data, evaluates it, and outputs information, with reference to the scene model database 3 and the content model database 4.
(31) The scene model database 3 stores past second video information, which is acquired in advance, scene information, which includes scene IDs linked with the past second video information, and three or more levels of degrees of scene association, which represent the degrees of scene association between the past second video information and the scene information. The content model database 4 stores past target information, in which past first video information, which is acquired in advance, and a scene ID, which has been mentioned earlier, form a pair, reference IDs, which are linked with the past target information, and which correspond to contents, and three or more levels of degrees of content association, which represent the degrees of content association between the past target information and the reference IDs. For example, results that are built based on target information and reference IDs (content IDs) are stored in the content model database. The content information database 5 records, for example, contents. The contents may include, for example, product introduction movies, solution manual movies and so forth, as well as document-related materials such as device manuals, instruction manuals, catalogs, reports and so forth. The contents are registered, for example, by the administrator of each content. The contents to be registered may be files such as audio files, including, for example, audio files of foreign-language translations corresponding to Japanese. For example, when audio in one country's language is registered, a translated audio file of a corresponding foreign language may be stored together with the registered audio file. To register and update these contents, each manufacturer's administrator or the person in charge of preparing manuals, for example, may operate terminal devices via the public communication network 6 (network).
Furthermore, for example, a business that handles such management on behalf of the administrator or the person in charge of preparing manuals may perform these operations collectively.
(32) The public communication network 6 is an Internet network or the like, to which the information providing system 1 and the management server 2 are connected via a communication circuit. The public communication network 6 may be constituted by a so-called optical-fiber communication network. Furthermore, the public communication network 6 is not limited to a cable communication network, and may be realized in the form of a wireless communication network.
(33) <User Terminal 7>
(34) The user terminal 7 has a display unit that is integrally or partly mounted on the worker's head or glasses, and that displays information generated based on a variety of types of video information acquired from the user terminal 7, in a transparent state. For example, the user terminal 7 may be HoloLens (registered trademark), which is one type of HMD (Head-Mounted Display). The worker can check the work area and the device to evaluate, through the display unit that shows display information of the user terminal 7 in a transparent manner, such as a head-mounted display or HoloLens. This allows the worker to watch the situation at hand, and check a scene name list, a reference summary list and contents together, which are generated based on a variety of types of video information acquired.
(35) Furthermore, besides electronic devices such as a mobile phone (mobile terminal), a smartphone, a tablet terminal, a wearable terminal, a personal computer, an IoT (Internet of Things) device and so forth, any electronic device can be used to implement the user terminal 7. The user terminal 7 may be, for example, connected to the information providing system 1 via the public communication network 6, and, besides, for example, the user terminal 7 may be directly connected to the information providing system 1. The user may use the user terminal 7 to acquire a variety of types of reference information from the information providing system 1, and, besides, control the information providing system 1, for example.
(36) Furthermore, the user terminal 7 has a receiving means that receives a scene ID list acquired by a third evaluation means or a reference ID list acquired by a second acquiring means, a judging means that checks whether a target ID is present in the cache area of the user terminal 7, and an inquiring means that, when the target ID is not present in the cache area of the user terminal 7, makes an inquiry to the corresponding content information database 5.
(37) <Scene Model Database 3>
(39) The scene model database 3 is built by machine learning from past second video information and the evaluation results of that video information against scene IDs, and, for example, each relationship between these is stored as a degree of scene association. For example, “01” of the past second video information is stored such that its degree of scene association is 70% with the scene ID “A”, 50% with the scene ID “D”, 10% with the scene ID “C”, and so on. As for the first video information acquired from the user terminal 7, evaluation results of, for example, its similarity with past second video information, which is acquired in advance, are built by machine learning. For example, by performing deep learning, it is possible to deal with information that is not the same but is similar.
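As a minimal sketch (not part of the patent; the table and names here are hypothetical), the degrees of scene association described above can be modeled as a lookup from past video information to scene IDs with association percentages:

```python
# Hypothetical store: past second video information ID -> {scene ID: degree (%)},
# mirroring the "01" example above.
SCENE_ASSOCIATIONS = {
    "01": {"A": 70, "D": 50, "C": 10},
}

def rank_scene_ids(video_id, store=SCENE_ASSOCIATIONS, min_degree=0):
    """Return (scene ID, degree) pairs sorted by degree of scene association,
    highest first, optionally dropping entries below a threshold."""
    degrees = store.get(video_id, {})
    ranked = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)
    return [(sid, deg) for sid, deg in ranked if deg >= min_degree]

# e.g. rank_scene_ids("01") -> [("A", 70), ("D", 50), ("C", 10)]
```

In a real system these degrees would be produced by machine learning rather than read from a static table; the sketch only shows how ranked scene IDs could be derived from stored degrees of association.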
(40) The scene model database 3 stores past second video information, which is acquired in advance, scene information, which includes scene IDs linked with the past second video information, and three or more levels of second degrees of scene association, which represent the degrees of scene association between the past second video information and the scene information. The second evaluation means looks up the scene model database 3, selects past second video information that matches, partially matches, or resembles the acquired second video information, selects scene information, including the scene ID linked with the selected past second video information, calculates the second degree of scene association based on the degree of association between the selected past second video information and the scene information, acquires the scene ID including the second degree of scene association calculated, and displays a scene name list that is selected based on a scene ID list, on the user terminal 7.
(41) The scene model database 3 stores a scene ID list. The scene ID list is acquired by the second evaluation unit 212, which will be described later. The scene ID list is, for example, a list in which pairs of past second video information and scene IDs are evaluated based on degrees of scene association calculated by machine learning. The scene model database 3 stores these evaluation results in the form of a list. The list includes, for example, scene IDs showing high degrees of scene association, such as “scene ID A: 70%”, “scene ID D: 50%”, and so on.
(42) The scene model database 3 stores a scene ID list and a scene name list. The scene name list is generated by a third generation unit 213, which will be described later. For example, scene names corresponding to scene IDs are acquired by the second evaluation unit 212, and a list of these scene names is stored as the scene name list. The scene name list stored in the scene model database 3 is transmitted to the user terminal 7 in a later process. The user looks up the scene name list received on the user terminal 7, and finds out which scenes correspond to the second video information.
(43) If, due to updating of the scene model database, correction or addition of registered data and so forth, neither scene information corresponding to second video information nor a scene name corresponding to a scene ID is present in the scene model database 3, the process for acquiring the first video information in another field of view may be performed, or a scene name list with an addition of a scene name such as “No applicable scene” may be generated and transmitted to the user terminal 7.
(44) <Content Model Database 4>
(46) The content model database 4 stores first scene IDs together with the first video information acquired from the user terminal 7. The first scene IDs are scene IDs that correspond to scene names the user selects by referring to the scene name list stored in the scene model database 3. The content model database 4 stores these first scene IDs and the first video information as target information. Note that, when work is repeated, the process up to this point is repeated. Note that there may be a plurality of first scene IDs, in which case a plurality of scene names are selected by the user from the scene name list.
(47) The content model database 4 stores past target information, in which past first video information, which is acquired in advance, and a scene ID form a pair, reference IDs, which are linked with the past target information, and which correspond to contents, and three or more levels of degrees of content association, which represent the degrees of content association between the past target information and the reference IDs. The first evaluation means looks up the content model database 4, selects past first video information and scene IDs that match, partially match, or resemble the acquired first video information and scene IDs, selects the reference IDs linked with the past first video information that is selected and the target information, calculates the first degrees of content association based on the degrees of association between the selected past first video information and the target information, and acquires the ID information (reference IDs) including the first degrees of content association that are calculated. The judging means judges the ID information acquired. Based on the results of judgement, the output means displays a reference summary list, which is selected based on a reference ID list, on the user terminal 7.
(48) Evaluation results showing high similarity between target information and reference IDs may be calculated such that, for example, if the target information is “A×01”, its degree of association is 70% with the reference ID “reference ID: A”, 50% with the reference ID “reference ID: D”, and 10% with the reference ID “reference ID: C”. As for the target information acquired by the second acquiring means, for example, its similarity to the past target information is evaluated. This process may be performed via AI image processing or the like (not shown). By using AI image processing, the process of each stage can be performed in a much shorter time than conventional processing.
(49) Next, the content model database 4 stores a reference ID list. The reference ID list is acquired by the first evaluation means. For example, pairs of target information and reference information linked with past first video information and reference IDs are evaluated based on degrees of content association built by machine learning. Based on the results of evaluation, the reference IDs of evaluation results showing high degrees of association are listed up. The reference ID list is built of reference IDs with high degrees of content association such as, for example, “reference ID: A 70%”, “reference ID: D 50%”, and so on, and stored in the content model database 4.
(50) Next, the content model database 4 stores the reference ID list acquired by the first evaluation means, and a reference summary list generated by the second generation means. The reference summary list is generated based on a content table, which will be described later, using the reference IDs identified in the reference ID list. The second acquiring unit 206 acquires the “summaries” registered in the content table stored in the content model database 4. The second acquiring unit 206 acquires a reference ID list, which includes first degrees of content association with the reference IDs. The second generation unit 208 generates a reference summary list based on the summary information acquired. The reference summary list generated by the second generation unit 208 is transmitted to the user terminal 7.
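A hedged sketch of how a reference summary list might be assembled from a reference ID list and the “summaries” registered in a content table; the table contents and field names below are invented for illustration and are not taken from the patent:

```python
# Hypothetical content table: reference ID -> registered summary text.
CONTENT_TABLE = {
    "A": "Replace the filter cartridge",
    "D": "Check the valve seal",
}

def build_reference_summary_list(reference_id_list, content_table=CONTENT_TABLE):
    """Join each (reference ID, degree) entry with its summary from the
    content table; entries without a registered summary are skipped."""
    summary_list = []
    for ref_id, degree in reference_id_list:
        summary = content_table.get(ref_id)
        if summary is not None:
            summary_list.append({"reference_id": ref_id,
                                 "degree": degree,
                                 "summary": summary})
    return summary_list
```

The resulting list pairs each high-association reference ID with a human-readable summary, which is what the user terminal would display for selection.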
(51) Note that, if, due to updating of the content model database 4, correction or addition of registered data and so forth, there is no data that corresponds to a scene ID, such as a reference ID, content or a summary, in the content model database 4, for example, a scene ID that is prepared as an alternative for such a case where there is no corresponding data may be newly associated, and the associated alternative content may be transmitted to the user terminal 7.
(52) Also, the first evaluation unit 202 looks up the content model database 4, and acquires ID information, which includes the degrees of content association between the target information and the reference IDs. Following this, the judging unit 203 looks up a transition information table, and makes judgements based on the ID information acquired. The contents that correspond to the ID information are output from the output unit 204 to the user terminal 7, based on the results of judgements made in the judging unit 203.
(53) After that, the ID information acquired by the first evaluation unit 202 is stored in the content model database 4. As for the storage of ID information, after the output from the output unit 204, the ID information acquired by each acquiring means is stored in an ID information storage table, by way of an ID history unit 214.
(54) <Management Server 2>
(56) The CPU 101 controls the whole of the management server 2. The ROM 102 stores the operation codes for the CPU 101. The RAM 103 is the work area that is used when the CPU 101 operates. The storage unit 104 stores a variety of types of information such as measurement information. For the storage unit 104, for example, an SSD (Solid State Drive) or the like is used, in addition to an HDD (Hard Disk Drive).
(57) The I/F 105 is an interface for transmitting and receiving a variety of types of information to and from the user terminal 7, via the public communication network 6. The I/F 106 is an interface for transmitting and receiving information to and from the input part 108. For example, a keyboard is used for the input part 108, and the administrator, the worker, the content administrator and so forth who use the information providing system 1 input or select a variety of types of information, or control commands for the management server 2, via the input part 108. The I/F 107 is an interface for transmitting and receiving a variety of types of information to and from the output part 109. The output part 109 outputs a variety of types of information stored in the storage unit 104, or the status of processing in the management server 2, and so forth. A display may be used for the output part 109, and this may be, for example, a touch panel type. In this case, the output part 109 may be configured to include the input part 108.
(59) <First Acquiring Unit 201>
(60) The first acquiring unit 201 acquires first video information from the user terminal 7. The first video information shows devices, parts and the like, and is taken by the worker using, for example, an HMD (Head-Mounted Display) or HoloLens. Video that is taken may be transmitted to the management server 2 on a real-time basis. Furthermore, video that is being taken may be acquired as the first video information.
(61) <First Evaluation Unit 202>
(62) The first evaluation unit 202 looks up the content model database 4, and acquires ID information (reference ID list), which includes first degrees of content association between the target information and the reference IDs. The list that is acquired includes, for example, “reference IDs”, “degrees of content association”, and so forth, and includes reference IDs that have high degrees of association with the target information.
(63) <Judging Unit 203>
(64) The judging unit 203 judges the ID information acquired by the first evaluation unit 202. The judging unit 203 looks up the transition information table, in which information about the order of content outputs corresponding to the ID information is stored in advance. Based on the scene IDs associated with the acquired ID information, the judging unit 203 judges whether or not order information for the content IDs related to those scene IDs is stored in the transition information table.
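The judgement described above, including the “no corresponding content” case of claim 4, can be sketched as a lookup in a hypothetical transition information table (the table data and function name are assumptions, not part of the patent):

```python
# Hypothetical transition information table: scene ID -> ordered content IDs
# (the order in which contents should be output for that scene).
TRANSITION_TABLE = {
    "A": ["C-001", "C-002"],
}

def judge_order_information(scene_id, table=TRANSITION_TABLE):
    """Return the order information stored for the scene ID, or None when
    no order information is present (in which case, per claim 4, it is
    judged that there is no content corresponding to the ID information)."""
    return table.get(scene_id)
```

A caller would output contents in the returned order when the lookup succeeds, and report “no corresponding content” when it returns None.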
(65) <Output Unit 204>
(66) The output unit 204 outputs the contents that correspond to the acquired ID information (first reference ID). The contents are acquired from the content information database 5. The content to output is acquired, for example, from the content table shown in
(67) <First Generation Unit 205>
(68) The first generation unit 205 generates a reference ID list, which includes a plurality of pieces of ID information, based on the result of judgement in the judging unit 203.
(69) <Second Acquiring Unit 206>
(70) The second acquiring unit 206 acquires, via the user terminal 7, the first reference ID selected from the reference summary list.
(71) <Content Output Unit 207>
(72) The content output unit 207 outputs the contents that correspond to the first reference ID, to the user terminal 7.
(73) <Second Generation Unit 208>
(74) The second generation unit 208 generates a reference summary list, corresponding to the ID information that was evaluated and acquired by the first evaluation unit 202. The reference summary list is generated based on summary information stored in a content table, which will be described later.
(75) <Scene ID Generation Unit 209>
(76) The scene ID generation unit 209 has a scene name acquiring unit, a scene ID generation unit and a first learning unit. For example, the scene ID generation unit 209 determines the character length of the scene names that are acquired, and generates, for every scene name that is acquired, a scene ID with a smaller amount of information than the scene name. By this means, the scene IDs generated carry smaller amounts of information than the scene names.
(77) <Scene Name Acquiring Unit>
(78) The scene name acquiring unit looks up the scene table stored in the scene model database 3, and acquires past second video information, which is acquired in advance, and scene names corresponding to the past second video information.
(79) <First Learning Unit>
(80) The first learning unit generates the scene model database 3 by way of machine learning, using scene information, which at least includes the scene IDs generated, and the past second video information.
(81) <Content ID Generation Unit 210>
(82) The content ID generation unit 210 has a content acquiring unit, a content ID generation unit, and a second learning unit. For example, when content is acquired, the content ID generation unit 210 determines the acquired content's file name, its related information, or the character length of its text, and generates, for every content that is acquired, a content ID with a smaller amount of information than the data capacity of the content. By this means, the content IDs generated here carry smaller amounts of information than the contents.
(83) <Content Acquiring Unit>
(84) The content acquiring unit looks up the content table stored in the content model database 4, and acquires contents that correspond to the past first video information and past second video information which are acquired in advance.
(85) <Second Learning Unit>
(86) The second learning unit generates the content model database 4 by way of machine learning, using reference information, which at least includes the content IDs that are generated, and the past target information.
(87) <Third Acquiring Unit 211>
(88) The third acquiring unit 211 acquires, from the user terminal 7, target information, in which second video information and a first scene ID, corresponding to a scene name selected from the scene name list, form a pair.
(89) <Second Evaluation Unit 212>
(90) The second evaluation unit 212 looks up the scene model database 3. The scene model database 3 stores past second video information, which is acquired in advance, scene information, which includes scene IDs linked with the past second video information, and three or more levels of degrees of scene association, which represent the degrees of scene association between the past second video information and the scene information. The second evaluation unit 212 acquires a scene ID list, which includes first degrees of scene association between second video information and the scene information.
(91) <Third Generation Unit 213>
(92) The third generation unit 213 generates a scene name list, which corresponds to the scene ID list acquired by the second evaluation unit 212. The scene ID list acquired here includes, for example, “scene IDs”, “degrees of scene association”, and so forth, and scene IDs having high degrees of association with the past second video information. The scene IDs are generated, for example, based on the scene model table shown in
(93) <User Terminal 7: Receiving Means>
(94) A receiving means is provided in the user terminal 7, and receives the scene ID list acquired by the second evaluation unit 212. Furthermore, the receiving means receives the reference ID list acquired by the third acquiring unit 211.
(95) <User Terminal 7: Judging Means>
(96) A judging means is provided in the user terminal 7, and judges whether the scene IDs included in the received scene ID list are present in the cache area of the user terminal, based on the result of receiving the scene ID list. Also, the judging means judges whether or not the reference IDs included in the received reference ID list are present in the cache area of the user terminal 7, based on the result of receiving the reference ID list. If there is a scene ID list or a reference ID list in the cache area, the judging means checks that list first. The cache is given in the form of, for example, the name cache table and the summary cache table shown in
(97) In the name cache table, for example, scene IDs and scene names are associated and stored. Also, in the summary cache table, for example, reference IDs and their corresponding summaries are associated and stored.
(98) <User Terminal 7: Inquiring Means>
(99) If the result of judgement in the judging means indicates that the reference IDs are not present in the cache area of the user terminal, the inquiring means in the user terminal 7 makes an inquiry to the content information database 5 that holds contents.
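The cache-first behaviour of the judging and inquiring means can be sketched as follows. The dictionaries standing in for the cache tables, and the inquiry function, are hypothetical; only the pattern (resolve from the local cache area first, inquire to the content information database 5 for misses) comes from the embodiment.

```python
# Hypothetical sketch of the cache-first lookup on the user terminal:
# IDs found in the local cache area are resolved there, and only the
# misses trigger an inquiry to the content information database.

def resolve_ids(id_list, cache, inquire):
    """Resolve each ID from the cache when present, otherwise by inquiry."""
    resolved, misses = {}, []
    for id_ in id_list:
        if id_ in cache:
            resolved[id_] = cache[id_]
        else:
            misses.append(id_)
    for id_ in misses:
        resolved[id_] = inquire(id_)   # inquiry to the content information DB
        cache[id_] = resolved[id_]     # populate the cache for next time
    return resolved

# Hypothetical name cache table and database stand-in.
name_cache = {"SID1": "Remove cover"}
def inquire_db(id_):
    return {"SID2": "Check wiring"}.get(id_, "unknown")

print(resolve_ids(["SID1", "SID2"], name_cache, inquire_db))
```

Here "SID1" is served from the cache and "SID2" triggers a single inquiry, after which it too is cached.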
(100) Next,
(101) First, the scene model table shown in
(102) Next, in the scene content model table shown in
(103) Next,
(104) Next,
(105) Next,
(106) Next,
(107) Furthermore, the user terminal 7 receives, for example, the reference ID list generated in the management server 2, and stores this in the storage area. The user terminal 7 looks up the reference ID list stored in the user terminal 7, and, based on the result of this, returns a reference ID list to the management server 2, so that efficient narrowing becomes possible.
(108) Next,
(109) First, in the meta-table shown in
(110) (Judging Operation in Information Providing System 1)
(111) Next, examples of the operations of the information providing system 1 according to the present embodiment will be described.
(112) The content model database 4 stores past target information, which includes past first video information acquired in advance, reference IDs, which are linked with past target information and which correspond to contents, and three or more levels of degrees of content association, which represent the degrees of content association between the past target information and the reference IDs.
(113) <First Acquiring Means S100>
(114) A first acquiring means S100 acquires target information from the user terminal 7. The target information includes first video information. Note that the first acquiring means S100 may acquire, from the user terminal 7, target information, in which first video information and a first scene ID, which corresponds to a scene name selected from the scene name list, form a pair.
(115) The first acquiring means S100 acquires first video information from the user terminal 7. The first video information shows devices or parts photographed by the worker using, for example, an HMD (Head-Mounted Display) or HoloLens. The first video information that is taken may be transmitted to the management server 2 in real time, via the user terminal 7, by application functions of the user terminal 7 provided by the management server 2 or the like.
(116) For example, as shown in
(117) <First Evaluation Means S101>
(118) The first evaluation means S101 looks up the content model database 4, and acquires ID information (reference ID list), which includes first degrees of content association between the target information and the reference IDs. In the ID information, for example, information such as reference IDs and degrees of content association is listed.
(119) The first evaluation means S101 looks up the content model database 4, which stores past target information, including past first video information acquired in advance, reference IDs, which are linked with past target information and which correspond to contents, and three or more levels of degrees of content association, which represent the degrees of content association between the past target information and the reference IDs, and acquires ID information, including first degrees of content association between the target information and the reference IDs.
(120) The first evaluation means S101 acquires ID information, showing high degrees of association with the target information, based on, for example, “reference ID”, “degree of content association”, and so forth.
(121) <Judging Means S102>
(122) The judging means S102 judges the ID information acquired in the first evaluation means S101. The judging means S102 looks up a transition information table, in which information about the order of content outputs corresponding to the ID information is stored in advance, and judges whether or not order information for the content IDs that relate to the scene IDs associated with the acquired ID information is stored.
(123) If order information corresponding to the acquired ID information is stored for the scene IDs, the judging means S102 judges, for example, that the ID information acquired in the first evaluation unit 202 shows a prediction result that matches the situation of the worker's field. On the other hand, when no order information is stored for the scene IDs associated with the acquired ID information, the judging means S102 judges that there are no contents corresponding to the acquired ID information.
(124) Furthermore, the judging means S102 looks up the transition information table and judges whether or not there is order information related to content outputs corresponding to the ID information. When there is no order information corresponding to the ID information acquired in the first acquiring means S100, the judging means S102 judges that there is no content that corresponds to the ID information.
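The judging step can be sketched as a simple table lookup. The transition information table below, and all IDs in it, are hypothetical examples; the embodiment only specifies that order information for content outputs is stored in advance and that its absence means no corresponding content.

```python
# Hypothetical sketch of the judging means: the transition information table
# stores, per scene ID, the order of content outputs; ID information is
# accepted only when order information for its scene ID is present.

transition_table = {
    "SID1": ["CID10", "CID11", "CID12"],   # order of content outputs
    "SID2": ["CID20"],
}

def judge(id_info):
    """Return the stored content output order for the ID information,
    or None when no order information is stored (no corresponding content)."""
    return transition_table.get(id_info["scene_id"])

print(judge({"scene_id": "SID1", "reference_id": "RID1"}))
print(judge({"scene_id": "SID9", "reference_id": "RID9"}))
```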
(125) <Output Means S103>
(126) An output means S103 outputs contents that correspond to the acquired ID information. To output contents from the output means S103, contents are acquired from the content information database 5 and output to the user terminal 7. For example, as shown in
(127) By this means, the operation of the information providing system 1 of the present embodiment ends.
Second Embodiment
(128) <Content Output Operation in Information Providing System 1>
(129) Next, examples of the content output operation in the information providing system 1 according to the present embodiment will be described.
(130) The output unit 204 has a first generation unit 205 that generates a reference ID list, which includes a plurality of pieces of ID information, based on the result of judgement in the judging unit 203, a second acquiring unit 206 that acquires first reference IDs included in the reference ID list from the user terminal 7, and a content output unit 207 that outputs contents corresponding to the first reference IDs to the user terminal 7.
(131) Furthermore, following the first generation means S110 in the first generation unit 205, the output unit 204 has a second generation unit 208 that generates a reference summary list, which includes a plurality of summaries that correspond to a plurality of pieces of ID information included in the reference ID list. The second acquiring unit 206 acquires a first summary selected from the reference summary list via the user terminal 7, and acquires the first reference ID corresponding to the first summary, from the reference ID list.
(132) <First Generation Means S110>
(133) The first generation means S110 generates a reference ID list, which includes a plurality of pieces of ID information, based on the result of judgement in the judging means S102. When the first generation means S110 generates the reference ID list, for example, as shown in
(134) <Second Generation Means S113>
(135) Following the first generation means S110, the second generation means S113 generates a reference summary list, which includes a plurality of summaries that correspond to a plurality of pieces of ID information included in the reference ID list.
(136) Given the reference summary list generated by the second generation means S113, for example, summaries corresponding to reference IDs are acquired from the content table shown in
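The mapping from the reference ID list to the reference summary list can be sketched as follows; the content table, its keys, and its fields are illustrative assumptions standing in for the stored content table.

```python
# Hypothetical sketch of the second generation means: each reference ID in
# the reference ID list is mapped to its stored summary via a content table.

content_table = {
    "RID1": {"summary": "Replace the filter", "content": "filter_manual.mp4"},
    "RID2": {"summary": "Inspect the seal", "content": "seal_guide.pdf"},
}

def generate_reference_summary_list(reference_id_list):
    """Return (reference ID, summary) pairs for IDs known to the table."""
    return [(rid, content_table[rid]["summary"])
            for rid in reference_id_list if rid in content_table]

print(generate_reference_summary_list(["RID2", "RID1", "RID9"]))
```

An unknown ID ("RID9" here) is simply skipped rather than producing an empty summary.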
(137) <Second Acquiring Means S111>
(138) The second acquiring means S111 acquires the first reference IDs included in the reference ID list, from the user terminal 7. In the second acquiring means S111, for example, as shown in
(139) <Content Output Means S112>
(140) The content output means S112 outputs the contents that correspond to the first reference IDs, to the user terminal 7. The content output means S112 outputs the contents that correspond to the ID information acquired in the second acquiring means S111. To output contents, contents are acquired from the content information database 5 and output to the user terminal 7. The contents to output are stored in, for example, the content table of
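The selection-and-output flow described above can be sketched as follows: the first summary selected on the user terminal is mapped back to its first reference ID, and the content registered for that ID is fetched for output. The summary list and content database contents are hypothetical.

```python
# Hypothetical sketch of the second acquiring and content output means:
# the selected summary determines the first reference ID, whose content
# is then fetched from the content information database for the terminal.

def output_content(selected_summary, summary_list, content_db):
    """Return the content for the reference ID matching the selected summary."""
    for rid, summary in summary_list:
        if summary == selected_summary:
            return content_db[rid]      # content output to the user terminal
    return None                         # no matching reference ID

# Illustrative data.
summary_list = [("RID1", "Replace the filter"), ("RID2", "Inspect the seal")]
content_db = {"RID1": "filter_manual.mp4", "RID2": "seal_guide.pdf"}
print(output_content("Inspect the seal", summary_list, content_db))
```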
(141) By this means, the content output operation of the information providing system 1 of the present embodiment ends.
Third Embodiment
(142) <Scene Name List Generating Operation of Information Providing System 1 According to Present Embodiment>
(143) Next, an example of the scene name list generating operation in the information providing system 1 according to the present embodiment will be described.
(144) <Third Acquiring Means S120>
(145) Before the first acquiring means S100, the third acquiring means S120 acquires target information, including second video information, from the user terminal 7.
(146) <Second Evaluation Means S121>
(147) The second evaluation means S121 looks up the scene model database 3, and acquires a scene ID list, which includes the degrees of scene association between second video information and scene information. As described earlier, the scene model database 3 stores past second video information, which is acquired in advance, scene information, which includes scene IDs linked with the past second video information, and three or more levels of degrees of scene association, which represent the degrees of scene association between the past second video information and the scene information. The second evaluation means S121 acquires a scene ID list, which includes first degrees of scene association between second video information and the scene information.
(148) <Third Generation Means S122>
(149) The third generation means S122 generates a scene name list, which corresponds to the scene ID list acquired in the second evaluation means S121. The scene ID list generated by the third generation means S122 includes, for example, “scene ID”, “degree of scene ID association”, and so forth, and scene IDs having high degrees of association with past second video information.
(150) The third generation means S122 generates the scene name list based on the scene names that correspond to the scene ID list acquired in the second evaluation means S121. The third generation means S122, for example, looks up the scene table of
(151) The scene IDs are generated, for example, based on the scene model table shown in
(152) The third generation means S122 may determine the character length of the scene names that are acquired, and generate scene IDs with smaller amounts of information than the scene names, the character length of which has been determined, for every scene name that is acquired. By this means, the scene IDs generated here carry smaller amounts of information than the scene names. Furthermore, the third generation unit 213 may generate a scene model database 3, by way of machine learning using scene information, which at least includes the scene IDs generated, and the past second video information.
(153) By this means, the scene name list generating operation in the information providing system 1 of the present embodiment ends.
(154) (Operations of Scene ID Generation Unit 209 and Content ID Generation Unit 210)
(155) Next,
(156) <Scene ID Generation Means S209>
(157) First,
(158) The scene ID generation unit 209 is constituted by a scene name acquiring means S200, a scene ID generation means S201, and a first learning means S202.
(159) <Scene Name Acquiring Means S200>
(160) The scene name acquiring means S200 acquires past second video information, which is acquired in advance, and scene names, which correspond to the past second video information, from the scene model database 3.
(161) <Scene ID Generation Means S201>
(162) The scene ID generation means S201 generates scene IDs with smaller amounts of information than the scene names, for every scene name that is acquired in the scene name acquiring means S200.
(163) <First Learning Means S202>
(164) The first learning means S202 generates the scene model database 3, by way of machine learning using scene information including scene IDs, and past second video information.
(165) Next,
(166) The content ID generation unit 210 is constituted by a content acquiring means S205, a content ID generation means S206, and a second learning means S207.
(167) <Content Acquiring Means S205>
(168) The content acquiring means S205 acquires past second video information, which is acquired in advance, and contents corresponding to the past second video information, from the content model database 4.
(169) <Content ID Generation Means S206>
(170) The content ID generation means S206 generates content IDs with smaller amounts of information than contents, for every content that is acquired in the content acquiring means S205.
(171) <Second Learning Means S207>
(172) The second learning means S207 generates the content model database 4 based on machine learning using reference IDs, including at least content IDs, and past target information.
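As a minimal stand-in for the machine-learning step, which the embodiment leaves open, the sketch below builds degrees of content association between features of past target information and reference IDs from simple co-occurrence counts. All data and the counting scheme are hypothetical.

```python
# Hypothetical sketch of building a content model database: a trivial
# co-occurrence count stands in for the machine-learning step, producing
# normalized degrees of content association in [0, 1] per feature token.

from collections import defaultdict

def build_content_model(training_pairs):
    """training_pairs: iterable of (feature token, reference ID) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for token, rid in training_pairs:
        counts[token][rid] += 1
    # Normalize counts into degrees of association per feature token.
    model = {}
    for token, rid_counts in counts.items():
        total = sum(rid_counts.values())
        model[token] = {rid: c / total for rid, c in rid_counts.items()}
    return model

model = build_content_model([("valve", "RID1"), ("valve", "RID1"),
                             ("valve", "RID2"), ("pipe", "RID2")])
print(model)
```

A real implementation would use an actual learning algorithm over video features; the point of the sketch is only the shape of the result: graded, multi-level degrees of association rather than a binary match.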
(173)
(174) <Receiving Means S300>
(175) The receiving means S300 receives the scene ID list acquired in the second evaluation means S121, or the reference ID list acquired in the first evaluation means S101.
(176) <Judging Means S301>
(177) The judging means S301 judges whether or not the scene IDs included in the scene ID list received in the receiving means S300, or the reference IDs included in the received reference ID list, are present in the cache area of the user terminal 7.
(178) <Inquiring Means S302>
(179) If the judging means S301 indicates that the IDs are not present in the cache area of the user terminal 7, the inquiring means S302 makes an inquiry to the content information database 5 that holds contents.
(180) By this means, it is possible to provide appropriate information, efficiently, in the user terminal 7 and the content information database 5.
(181) Furthermore, according to the present embodiment, the scene ID generation unit 209 and the content ID generation unit 210 can generate IDs with smaller amounts of information than scene names or contents. Therefore, the amount of information exchanged between the user terminal 7 and the management server 2 can be reduced. By this means, the responsiveness of the narrowing operations between the user terminal 7 and the management server 2 improves. Furthermore, when one database is updated to enable exchange using newly-generated IDs, it is only necessary to correct the target IDs, and there is no need to update the other database. By this means, the time required for updating can be reduced, and the time and man-hours required to prepare for work in the field, such as maintenance and repair of devices, can be reduced.
(182) Furthermore, the present embodiment has a scene ID generation unit 209, in which a scene name acquiring means, a scene ID generation means, and a first learning means are provided. Consequently, it is possible to acquire scene names stored in the scene model database 3, and generate scene IDs with smaller amounts of information than the scene names, for each scene name. As a result of this, the amount of communication to be exchanged can be reduced, so that quick response is made possible. Furthermore, the scene IDs that are generated are stored in the scene table shown in
(183) Furthermore, the present embodiment has a content ID generation unit 210, in which a content acquiring means, a reference ID generation means, and a second learning means are provided. Consequently, it is possible to acquire the contents stored in the content model database 4, and generate reference IDs with smaller amounts of information than the contents, for each content. As a result of this, the amount of communication to be exchanged can be reduced, so that quick response is made possible. Furthermore, the reference IDs that are generated are stored in the content table of
(184) Furthermore, according to the present embodiment, the user terminal 7 may be any device that has a display unit that is mounted on the head or on glasses and that displays, in a transparent state, information generated based on the first video information acquired from the user terminal 7. Consequently, it is possible to narrow down information in the management server 2 based on the first video information that is taken. To operate on the first video information that is acquired, the user terminal 7 exchanges data with the management server 2 in accordance with operations made in the user terminal 7. For example, some kind of gesture operation or voice instruction may be used, or operations may be executed based on rules that are set forth between the user terminal 7 and the management server 2. This makes it possible to acquire and provide appropriate information in an efficient manner.
(185) Furthermore, according to the present embodiment, the contents to output from the output unit 204 to the user terminal 7 may be information of part or all of text, illustrations, video, and audio. Consequently, existing information assets can be used on an as-is basis. By this means, a variety of contents can be provided. This makes it possible to provide optimal information to the site from among existing information assets.
(186) Furthermore, according to the present embodiment, a first acquiring unit 201 that implements a first acquiring means S100, a first evaluation unit 202 that implements a first evaluation means S101, a judging unit 203 that implements a judging means S102, and an output unit 204 that implements an output means S103 can provide an information providing method using the information providing system 1.
(187) Therefore, even when dealing with new situations or the like, there is no need to generate a learning model every time, and appropriate information can be provided efficiently, without spending time and money. By this means, states can be narrowed down according to the conditions of the field, and existing information assets can be used on an as-is basis. Furthermore, this makes it possible to provide optimal information to the site from among existing information assets.
(188) Although embodiments of the present invention have been described, each embodiment has been presented simply by way of example, and is not intended to limit the scope of the invention. These novel embodiments can be implemented in a variety of other forms, and various omissions, replacements, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are also included in the invention described in the claims and equivalents thereof.
REFERENCE SIGNS LIST
(189) 1: Information providing system 2: Management server 3: Scene model database 4: Content model database 5: Content information database 6: Public communication network 7: User terminal 101: CPU 102: ROM 103: RAM 104: Storage unit 105 to 107: I/F 108: Input part 109: Output part 110: Internal bus 201: First acquiring unit 202: First evaluation unit 203: Judging unit 204: Output unit 205: First generation unit 206: Second acquiring unit 207: Content output unit 208: Second generation unit 209: Scene ID generation unit 210: Content ID generation unit 211: Third acquiring unit 212: Second evaluation unit 213: Third generation unit 214: ID history unit