Method and device for generating robot control scenario

09636822 · 2017-05-02

Abstract

A method for generating a robot control scenario includes: receiving a situation cognition frame and an expression frame and, by a computational means, generating a judgment frame, storing the judgment frame in a storage means, and displaying the judgment frame; selecting at least one judgment frame and at least one transition frame and, by the computational means, generating a stage, storing the stage in the storage means, and displaying the stage; and connecting a transition frame of one stage to another stage in at least two stages by the computational means, storing the connected stages in the storage means, and displaying the connected stages. According to the present invention, a robot existing close to people in a human living space can recognize the surrounding situation and information provided by the people, and can provide services that meet the people's various requests.

Claims

1. A method of creating a robot control scenario of a device for creating the robot control scenario, the method comprising: receiving a situation cognition frame and an expression frame and, by a computation means, creating a judgment frame, storing the judgment frame in a storage means, and displaying the judgment frame; selecting one or more judgment frames and one or more transition frames and, by the computation means, creating a stage, storing the stage in the storage means, and displaying the stage; and connecting a transition frame of one stage to another stage in at least two stages, storing the connection of the stages in the storage means, and displaying the connection of the stages by the computation means, wherein the situation cognition frame is determined by using an external situation of the robot using input of one or more recognition means, wherein the recognition means for determining the situation is previously set or modified by a user, and wherein the situation cognition frame is determined by further using a state of the robot.

2. The method according to claim 1, wherein the expression frame is configured by using one or more outputs of one or more expression means of the robot.

3. The method according to claim 2, wherein the expression means and the outputs are previously set or configured by selecting an expression means and an output which are modified by the user.

4. A device for creating a robot control scenario, the device comprising: a storage which stores data through an electric or magnetic signal; a judgment frame creator which selects a situation cognition frame and an expression frame, creates a judgment frame, and stores the judgment frame in the storage; a stage creator which selects one or more judgment frames and one or more transition frames, creates a stage, and stores the stage in the storage; and a scenario creator which connects a transition frame of one stage to another stage in at least two stages and stores the connection of the stages in the storage, wherein the situation cognition frame is determined by using an external situation of the robot using input of one or more recognition means, wherein the recognition means for determining the situation is previously set or modified by a user, and wherein the situation cognition frame is determined by further using a state of the robot.

5. The device according to claim 4, wherein the expression frame is configured by using one or more outputs of one or more expression means of the robot.

6. The device according to claim 5, wherein the expression means and the output are previously set or configured by selecting an expression means and an output which are modified by the user.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a view showing a conventional method of creating a robot control scenario.

(2) FIG. 2 is an exemplary view showing situation cognition of a robot.

(3) FIG. 3 is an exemplary view showing expressions of a robot.

(4) FIG. 4 is a flowchart illustrating a method of creating a robot control scenario according to the present invention.

(5) FIG. 5 is an exemplary view showing selection of a situation cognition frame.

(6) FIG. 6 is an exemplary view showing connection of a situation cognition frame and a recognition means.

(7) FIG. 7 is an exemplary view showing the configuration of an expression frame.

(8) FIG. 8 is an exemplary view showing connection of an expression frame and an expression means.

(9) FIG. 9 is an exemplary view showing creation of a stage.

(10) FIG. 10 is an exemplary view showing connection of stages.

(11) FIG. 11 is a block diagram showing a device for creating a robot control scenario according to the present invention.

DESCRIPTION OF SYMBOLS

(12) 110: Expression means; 120, 720: Timeline

(13) 510, 710: Constitutional element

BEST MODE FOR CARRYING OUT THE INVENTION

(14) The objects and technical configuration of the present invention described above, and the operational effects thereof, will be clearly understood from the following detailed description with reference to the accompanying drawings of this specification. Embodiments of the present invention will be described in detail with reference to the accompanying drawings.

(15) Since those skilled in the art may implement diverse applications of the present invention through the embodiments of this specification, the embodiments described in the detailed description are presented for exemplary purposes to describe the present invention more clearly and are not intended to limit the scope of the present disclosure.

(16) The functional units expressed in this specification are merely examples for implementing the present invention. Accordingly, other functional units may be used in other implementations of the present invention without departing from the spirit and scope of the present invention. In addition, although each functional unit may be implemented as a pure hardware or software configuration, it may also be implemented by combining various hardware and software configurations performing the same function.

(17) In this specification, a computational means may be a general-purpose central processing unit (CPU), a programmable device (CPLD or FPGA), or an application-specific integrated circuit (ASIC) designed for a specific purpose. In addition, a storage means or a storage unit may be a volatile memory element, a non-volatile memory element, or a non-volatile electromagnetic storage device.

(18) A method of creating a robot control scenario according to the present invention may be directly performed in a robot or may be performed in a separate device external to the robot.

(19) An embodiment of a method of creating a robot control scenario according to the present invention will be described with reference to FIG. 4. The method of creating a robot control scenario according to the present invention includes a judgment frame creation step (S410), a stage creation step (S420) and a scenario creation step (S430).
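The three steps above can be pictured as building a small data model: judgment frames pair a situation with an expression, stages group judgment frames with transition frames, and the scenario wires transition frames to other stages. The following minimal sketch is illustrative only; all class and field names are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class JudgmentFrame:
    # S410: a judgment frame connects a situation cognition frame
    # to an expression frame (both represented here as names).
    situation: str
    expression: str

@dataclass
class Stage:
    # S420: a stage holds one or more judgment frames and one or
    # more transition frames (situation name -> next stage name).
    name: str
    judgments: List[JudgmentFrame] = field(default_factory=list)
    transitions: Dict[str, str] = field(default_factory=dict)

def connect(stages: Dict[str, Stage], src: str, situation: str, dst: str) -> None:
    # S430: connect a transition frame of one stage to another stage.
    stages[src].transitions[situation] = dst

stages = {"Intro": Stage("Intro"), "Bow": Stage("Bow")}
stages["Intro"].judgments.append(
    JudgmentFrame("Automatic start after 0 seconds", "Introduction"))
connect(stages, "Intro", "Approach of a person", "Bow")
```

In this sketch, storing and displaying the frames (the storage and display acts recited in the claims) are omitted; only the connection structure is shown.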

(20) The judgment frame creation step (S410) starts by receiving a situation cognition frame and an expression frame from a user.

(21) The user may input a situation cognition frame by selecting one as shown in FIG. 5, rather than by directly selecting the sensing input of a recognition means.

(22) Selecting a situation cognition frame may mean selecting constitutional elements 510 describing a corresponding situation.

(23) Referring to FIG. 5, an example is presented of inputting a situation cognition frame by selecting a configuration describing the situation "One person about 1.5 meters tall approaches to within one meter in front of a robot and stays for one or more seconds".

(24) The situation cognition frame is for determining an external situation of the robot by using the input of one or more recognition means.

(25) Any means for recognizing an external situation of a robot, such as a camera, a touch sensor, a laser sensor, a microphone, or a touch display device, may serve as the recognition means; its type is not limited. In addition, the external situation of a robot includes the flow of time, and accordingly a timer for recognizing the passage of a specific time is also included among the recognition means.

(26) For example, when the recognition means are a camera and a laser sensor, the computational means grasps the number of objects around a robot, the movement and moving direction of an object, the object's size, and the like through the input of the laser sensor, and takes images of the object through the camera. The computational means then recognizes whether or not the object is a human being and, if it is, can grasp the sex, age, and the like of the human being by recognizing the face.

(27) In this case, a situation cognition frame of "One child 1.5 meters tall or smaller approaches a robot" as shown in FIG. 2(a), a situation cognition frame of "One male adult 1.5 meters or taller passes by a robot" as shown in FIG. 2(b), and a situation cognition frame of "A plurality of human beings stays around a robot" as shown in FIG. 2(c) can be created by using the inputs of the camera and the laser sensor.
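The mapping from combined sensor inputs to the situation cognition frames of FIG. 2 could be sketched as a simple classifier. The field names and thresholds below are assumptions introduced for illustration; the patent does not specify them.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    count: int           # number of people, from the laser sensor
    height_m: float      # estimated height, from the laser sensor
    is_adult_male: bool  # from face recognition via the camera
    approaching: bool    # moving direction, from the laser sensor

def match_frame(obs: Observation) -> str:
    # Match the observation against the FIG. 2 situation cognition frames.
    if obs.count > 1:
        return "A plurality of human beings stays around the robot"
    if obs.height_m <= 1.5 and obs.approaching:
        return "One child 1.5 meters tall or smaller approaches the robot"
    if obs.height_m > 1.5 and obs.is_adult_male and not obs.approaching:
        return "One male adult 1.5 meters or taller passes by the robot"
    return "No matching frame"

print(match_frame(Observation(1, 1.2, False, True)))
```

The point of the sketch is that the user names the situation; deciding which sensor readings imply it is the system's job.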

(28) Meanwhile, FIGS. 3(a), 3(b) and 3(c) show, for example, various kinds of operations that can be performed by a robot in different situations.

(29) In addition, the recognition means include means for indirectly recognizing an external situation of a robot as well as means for directly recognizing a situation. For example, when a robot is placed at a specific outdoor location, the recognition means include an indirect recognition means that grasps the weather at the robot's location through a GPS device and a wired or wireless network, as well as direct recognition means such as a light amount sensor or a camera photographing the sky.

(30) In addition, when the recognition means is a touch display means, an external situation of a robot can be determined by using touch input of the touch display.

(31) In addition, the situation cognition frame may be determined by further using a state of the robot, as well as the external situation of the robot.

(32) For example, when a robot is set to have an emotional state, a situation cognition frame can be determined by using the emotional state of the robot together with an external situation of the robot. A situation cognition frame of "One child 1.5 meters tall or smaller approaches a robot in a very pleasant state" can be created if the robot's emotional state is very pleasant, and a situation cognition frame of "One child 1.5 meters tall or smaller approaches a robot in a gloomy state" can be created if the robot's emotional state is gloomy.

(33) In addition, a state of a robot may be a mechanical or electrical state of the robot as well as an emotional state of the robot. The mechanical or electrical state of the robot may be regarded as the body state of a human being, if the robot is compared to a human being. For example, when the operating states of the expression means of a robot are periodically checked and a specific expression means is found inoperable, a situation cognition frame of "One child 1.5 meters tall or smaller approaches a robot which cannot move the right arm" may be created.

(34) A type and an input of the recognition means for determining a situation of the situation cognition frame may be set in advance.

(35) Referring to FIG. 6, for example, a reflective signal input of a laser scanner, a facial image photographed using a camera, and voice recognition of a microphone may be previously set as the recognition means and inputs for determining the situation of a situation cognition frame of "Approach of a person", and a reflective input of an ultrasonic sensor and a reflective signal input of a laser sensor may be previously set as the recognition means and inputs for determining the situation of a situation cognition frame of "Passing-by of a person".
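The preset association of FIG. 6 between a situation cognition frame and its recognition means could be represented as a lookup table, possibly downloaded from a server as described below. The table contents here follow the FIG. 6 example; the function and variable names are illustrative assumptions.

```python
# Preset table: situation cognition frame -> (recognition means, input) pairs.
# Entries mirror the FIG. 6 example; in practice this table could be
# built into the robot or downloaded from a server.
RECOGNITION_PRESETS = {
    "Approach of a person": [
        ("laser scanner", "reflective signal"),
        ("camera", "facial image"),
        ("microphone", "voice recognition"),
    ],
    "Passing-by of a person": [
        ("ultrasonic sensor", "reflective input"),
        ("laser sensor", "reflective signal"),
    ],
}

def recognition_means_for(frame_name: str):
    # The user only names the frame; the preset supplies the sensors.
    # A scenario therefore survives changes to the sensor set unchanged.
    return RECOGNITION_PRESETS[frame_name]

print(recognition_means_for("Approach of a person"))
```

Replacing this table (for example after a robot upgrade) changes which sensors determine a situation without requiring any change to existing scenarios, which is the effect the following paragraphs describe.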

(36) That is, although a user, i.e., a human being, may perceive situations as similar, the type and input of a recognition means appropriate for technically determining each situation may differ. According to the present invention, it is effective in that a user may correctly grasp a situation, since the type and input of a recognition means appropriate to the situation are determined simply by inputting a situation cognition frame.

(37) For example, if a situation cognition frame of "One child 1.5 meters tall or smaller approaches a robot which cannot move the right arm" is input, a user may grasp the situation through a previously set recognition means appropriate to the situation, without paying attention to whether an ultrasonic sensor and a video camera, or only a video camera, will be used as the recognition means for grasping the situation.

(38) In addition, although the type and input of a recognition means appropriate for recognizing a specific situation may vary according to the version and type of a robot, it is effective in that a user may create various kinds of robot control scenarios simply by inputting the same situation cognition frame, regardless of the type and input of the recognition means.

(39) A recognition means appropriate for determining a situation of a situation cognition frame may be previously set inside a robot or may be downloaded from a previously set server through a wired or wireless network.

(40) Accordingly, when a recognition means appropriate for determining the situation of a situation cognition frame is set in the server and downloaded, it is effective in that even if the algorithm for determining a situation through the input of the recognition means changes, and the type and input of the recognition means for determining a specific situation change, a user may create various kinds of robot control scenarios simply by inputting a situation cognition frame in the same way as before, without needing to consider the type and input of the recognition means.

(41) In addition, when a recognition means appropriate for determining the situation of a situation cognition frame is set in the server and downloaded, it is effective in that even if an already purchased robot is upgraded by additionally purchasing a specific recognition means, a user may create various kinds of robot control scenarios simply by inputting a situation cognition frame in the same way as before, without needing to consider the type and input of the recognition means.

(42) In addition, it is effective in that a user may easily and intuitively create a robot control scenario by inputting a situation cognition frame in a manner similar to human cognition, rather than by directly selecting the sensing inputs of recognition means one by one.

(43) A user may input an expression frame by selecting an expression frame configured of one or more outputs of one or more expression means, rather than by simply selecting one expression means and one output.

(44) Selection of an expression frame may mean selecting constitutional elements 710 describing a corresponding expression and arranging the constitutional elements 710 on timelines 720.

(45) Any means for expressing a motion, contents, or the like of a robot, such as the face or an arm of the robot, an LED, or a display device, may serve as the expression means; its type is not limited.

(46) For example, a user may input an expression frame by selecting a previously set persuasive expression frame, and the persuasive expression frame may be configured of expressions of moving the robot in a specific direction while repeatedly turning the head of the robot left and right at a predetermined angle, repeatedly swinging the arms up and down at a predetermined angle, and outputting a voice of "Hello, look at me for a moment".

(47) The expression frame may be configured by using one or more outputs of one or more expression means, and the expression means and outputs may be configured by selecting an expression means and an output previously set or modified by the user, as shown in FIG. 7.

(48) For example, as shown in FIG. 7, a user may select constitutional elements 710 such as a laughing expression of the face of a robot, playback of video contents, Text To Speech (TTS) of a specific sentence and the like and configure an expression frame by arranging the constitutional elements 710 on the timelines 720 in a drag-and-drop method or the like.
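The drag-and-drop arrangement of FIG. 7 could be modeled as placing timed elements on per-means timelines. This is a minimal sketch; the class, method, and element names are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ExpressionFrame:
    name: str
    # Each timeline element: (expression means, output, start s, end s).
    timeline: List[Tuple[str, str, float, float]] = field(default_factory=list)

    def place(self, means: str, output: str, start: float, end: float) -> None:
        # Corresponds to dragging a constitutional element onto a timeline.
        self.timeline.append((means, output, start, end))

# Assemble the "Persuasive" frame from paragraph (46); timings are invented.
frame = ExpressionFrame("Persuasive")
frame.place("head", "turn left and right repeatedly", 0.0, 5.0)
frame.place("arms", "swing up and down repeatedly", 0.0, 5.0)
frame.place("speaker", "TTS: 'Hello, look at me for a moment'", 1.0, 3.0)
```

Once assembled, such a frame can be stored and reused by name, which is the reuse benefit paragraph (49) describes.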

(49) Accordingly, since an expression frame for a specific expression is configured by combining a plurality of outputs of a plurality of expression means, it is effective in that a user may use an expression frame appropriate to a previously set situation without reconfiguring the expression frame by combining expression means and outputs one by one each time a scenario is created.

(50) In addition, since an expression means for a specific expression and an output thereof are previously determined for an expression frame, a user may simply select the expression frame for a corresponding expression.

(51) For example, since bowing is a very frequently used expression of a robot, the expression means and outputs shown in FIG. 8 are set as a basic expression frame of a bow, and a user may input the expression frame simply by selecting the frame expressing a bow.

(52) The computational means creates a judgment frame by connecting a situation cognition frame and an expression frame received from a user.

(53) The judgment frame refers to a connection of a situation cognition frame and an expression frame. Accordingly, a plurality of judgment frames can be created by connecting one situation cognition frame to different expression frames, or by connecting one expression frame to different situation cognition frames.

(54) Accordingly, it is effective in that a user may create various robot control scenarios by creating various judgment frames combining situation cognition frames and expression frames according to the use location and service characteristics of a robot.

(55) If a judgment frame is created, the computational means creates a stage by selecting at least one or more judgment frames and transition frames (S420) and creates a scenario by connecting a transition frame of one stage to another stage in at least two or more stages (S430).

(56) The transition frame is a situation cognition frame that is connected to another stage instead of to an expression frame; that is, a frame configured to execute another stage when the situation according to the situation cognition frame is recognized.

(57) For example, referring to FIG. 9, a stage having the name 910 of "Intro" is configured of a transition frame 920 named "Approach of a person" and a judgment frame 930. The judgment frame is configured by connecting a situation cognition frame 931 of "Automatic start after 0 seconds" and an expression frame 932 of "Introduction".

(58) The situation cognition frame 931 of "Automatic start after 0 seconds" may be configured to recognize the situation in which zero seconds have passed since the execution of the "Intro" stage, and the expression frame 932 of "Introduction" may be configured as an expression of playing back a specific moving image on a display device while repeatedly turning the head of the robot left and right at a predetermined angle.

(59) The transition frame 920 of "Approach of a person" may be a situation cognition frame configured to recognize a situation in which a person approaches the robot, regardless of the type and number of persons.

(60) Accordingly, if a robot executing the stage recognizes a person approaching, regardless of the type and number of persons, while repeatedly turning its head left and right at a predetermined angle and playing back a specific moving image on the display device after the "Intro" stage begins, it executes the stage connected to the transition frame 920 of "Approach of a person".
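The execution semantics just described can be sketched as follows: while a stage runs, its judgment frames drive expressions, and a recognized transition situation selects the next stage. The function and dictionary names are assumptions; this is not the patent's implementation.

```python
def run_stage(stage, recognized_situations):
    """Return the next stage name if a transition situation is recognized,
    otherwise None (the current stage keeps executing)."""
    # Judgment frames: express while the stage is active.
    for judgment in stage["judgments"]:
        print(f"expressing: {judgment['expression']}")
    # Transition frames: hand control to the connected stage on recognition.
    for situation, next_stage in stage["transitions"].items():
        if situation in recognized_situations:
            return next_stage
    return None

# The "Intro" stage of FIG. 9, as data.
intro = {
    "judgments": [{"situation": "Automatic start after 0 seconds",
                   "expression": "Introduction"}],
    "transitions": {"Approach of a person": "Bow"},
}
print(run_stage(intro, {"Approach of a person"}))  # → Bow
```

Note that expression and transition checking are concurrent in the description (the robot keeps expressing until a transition situation is recognized); the sequential loop here is a simplification.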

(61) Since one or more transition frames may be selected and included in a stage, it is possible to recognize diverse situations while executing the stage and to execute another stage appropriate to the recognized situation; thus a robot control scenario in which the robot responds appropriately to a variety of situations can be created.

(62) FIG. 10 is a view showing an example of a scenario connecting a plurality of stages to each other through transition frames. If a robot executing the scenario according to FIG. 10 recognizes a situation corresponding to the situation cognition frame 1120 of "Approach of a person" while executing a judgment frame 1130, configured of a situation cognition frame of "Automatic start after 0 seconds" and an expression frame of "Opening", at the stage 1100 named "Opening", the robot executes the stage 1300 named "Bow". If the robot instead recognizes the passage of five seconds from the execution of the "Opening" stage without recognizing a situation corresponding to the situation cognition frame 1120 of "Approach of a person", it executes the stage 1200 named "Intro".
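The FIG. 10 scenario behaves as a small state machine. The stage names below follow FIG. 10; the transition-table representation and the timeout situation name are illustrative assumptions.

```python
# Scenario of FIG. 10 as a transition table:
# current stage -> {recognized situation: next stage}.
SCENARIO = {
    "Opening": {"Approach of a person": "Bow",
                "5 seconds passed": "Intro"},
    "Intro":   {"Approach of a person": "Bow"},
    "Bow":     {},
}

def next_stage(current: str, recognized: set) -> str:
    # Transition frames are checked in order; the first recognized
    # situation selects the next stage.
    for situation, dst in SCENARIO[current].items():
        if situation in recognized:
            return dst
    return current  # no transition situation recognized: stage keeps running

print(next_stage("Opening", {"5 seconds passed"}))  # → Intro
```

Because every stage can loop back into others, such a table naturally yields the indefinitely running scenarios mentioned in paragraph (64), without laying out the robot's whole operating time on a single timeline.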

(63) In the same manner, since a stage appropriate to each situation is executed through a transition frame while another stage is executing, the operation of the robot does not follow a series of previously determined sequences; rather, the robot operates actively and diversely according to a variety of situations, and thus it is effective in that a robot control scenario operating the robot in a way more similar to a human being can be created.

(64) In addition, it is effective in that a user may create an indefinitely running robot control scenario by connecting a plurality of stages to each other according to specific situations, without the effort of arranging the motions or contents of a robot one by one on timelines corresponding to the entire operating time of the robot.

(65) An embodiment of a device for creating a robot control scenario according to the present invention will be described with reference to FIG. 11. A device for creating a robot control scenario according to the present invention includes a storage unit 1110 for storing data through an electric or magnetic signal, a judgment frame creation unit 1120, a stage creation unit 1130 and a scenario creation unit 1140.

(66) The judgment frame creation unit 1120 selects a situation cognition frame and an expression frame, creates a judgment frame, and stores the judgment frame in the storage unit.

(67) The stage creation unit 1130 selects one or more of the judgment frames and one or more transition frames, creates a stage, and stores the stage in the storage unit.

(68) The scenario creation unit 1140 connects, in at least two stages, a transition frame of one stage to another stage and stores the connection of the stages in the storage unit.

(69) While the present invention has been described with reference to the particular illustrative embodiments, it is not to be restricted by the embodiments but only by the appended claims. It is to be appreciated that those skilled in the art can change or modify the embodiments without departing from the scope and spirit of the present invention.