Video editing system and video editing method
11501801 · 2022-11-15
Assignee
Inventors
CPC classification
H04N21/858
ELECTRICITY
G11B27/02
PHYSICS
G11B27/031
PHYSICS
International classification
G11B27/02
PHYSICS
H04N21/845
ELECTRICITY
Abstract
A video editing system capable of quickly providing video in which the cast himself or herself appears while reducing a workload on a service manager. The video editing system edits, in chronological order, multiple pieces of video including original video in which the cast himself or herself appears and substitute video, to generate complete video. Specifically, the video editing system acquires cast attribute data indicating a cast attribute, acquires original video data indicating the original video shot by a shooting apparatus, and selects substitute video data of a predetermined shooting pattern from multiple shooting patterns based on the cast attribute indicated by the cast attribute data. Moreover, the video editing system edits the original video indicated by the original video data and the substitute video indicated by the selected substitute video data in chronological order to generate complete video data indicating the complete video.
Claims
1. A video editing system comprising: a processor configured to edit, in chronological order, multiple pieces of video including an original video in which a cast plays a character and a substitute video in which a substitute for the cast plays the same character as the character played by the cast, to generate a complete video; and a memory that stores substitute video data indicating pre-shot substitute video in multiple shooting patterns according to attribute items, including a gender of the cast, a body type of the cast, and a costume of the cast, and according to shooting scenes, wherein the processor is configured to: acquire cast attribute data indicating a cast attribute necessary for an appearance of the cast from a cast terminal utilized by the cast; generate, from the cast attribute indicated by the acquired cast attribute data, identification mark data indicating an identification mark for identifying the cast; transmit the generated identification mark data to the cast terminal; read the identification mark displayed on the cast terminal to confirm the cast; acquire original video data indicating the original video of the cast by reading the identification mark indicated by the identification mark data in shooting of the original video to link the cast attribute recorded in the identification mark and the original video of the cast to each other, and by acquiring multiple pieces of original video data linked to the cast attribute; select the substitute video data of a shooting pattern from the multiple shooting patterns based on the attribute items corresponding to the acquired cast attribute data of the cast and the shooting scene; edit, in chronological order, the original video indicated by the original video data, shot in multiple scenes, and the substitute video indicated by the selected substitute video data, to generate complete video data indicating the complete video, wherein the original video is a video of a shooting scene in which the character played by the cast is seen and a face of the character is seen, the substitute video is a video of a shooting scene in which the same character as the character played by the cast is seen and the face of the character is not seen, and the processor is configured to edit the original video and the substitute video in chronological order to generate the complete video data in which the cast and the substitute for the cast play the same character; and output the complete video data to the cast terminal.
2. The video editing system according to claim 1, further comprising a plurality of shooting apparatuses, wherein each of the plurality of shooting apparatuses is placed at a location where a corresponding shooting scene is shot in an experience-based facility.
3. A video editing method for editing, by a computer, multiple pieces of video in chronological order to generate a complete video, wherein the multiple pieces of video include an original video in which a cast plays a character and a substitute video in which a substitute for the cast plays the same character as the character played by the cast, and wherein the computer stores substitute video data indicating pre-shot substitute video in multiple shooting patterns according to attribute items, including a gender of the cast, a body type of the cast, and a costume of the cast, and according to shooting scenes, the method comprising: acquiring cast attribute data including a cast attribute necessary for an appearance of the cast from a cast terminal utilized by the cast; generating, from the cast attribute indicated by the acquired cast attribute data, identification mark data indicating an identification mark for identifying the cast; transmitting the generated identification mark data to the cast terminal; reading the identification mark displayed on the cast terminal to confirm the cast; acquiring original video data indicating the original video of the cast by reading the identification mark indicated by the identification mark data in shooting of the original video to link the cast attribute recorded in the identification mark and the original video of the cast to each other, and by acquiring multiple pieces of original video data linked to the cast attribute; selecting the substitute video data of a shooting pattern from the multiple shooting patterns based on the attribute items corresponding to the acquired cast attribute data of the cast and the shooting scene; editing, in chronological order, the original video indicated by the original video data, shot in multiple scenes, and the substitute video indicated by the selected substitute video data, to generate complete video data indicating the complete video; and outputting the complete video data to the cast terminal, wherein the original video is a video of a shooting scene in which the character played by the cast is seen and a face of the character is seen, the substitute video is a video of a shooting scene in which the same character as the character played by the cast is seen and the face of the character is not seen, and the computer is configured to edit the original video and the substitute video in chronological order to generate the complete video data in which the cast and the substitute for the cast play the same character.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) Various embodiments are illustrated in the drawings.
DETAILED DESCRIPTION
(13) Hereinafter, embodiments are described with reference to
(14) The entire configuration of a video editing system S according to an embodiment is illustrated in
(15) The video editing system S is a system configured to edit, in chronological order, "original video" in which a cast plays various characters and "substitute video" in which a substitute for the cast appears, to generate "complete video" as a memorial moving image and provide the complete video to the cast.
(16) Specifically, the video editing system S is a system configured to quickly provide, to an experience target, the "complete video" including the "original video" in which the experience target himself or herself acts as various characters in an experience-based service for inbound visitors.
(17) "Various characters" described herein include characters representing Japan, such as a ninja, a samurai, a prince, and a princess, as well as Japanese cartoon characters.
(18) Hereinafter, description is made in the present embodiment assuming that the cast himself or herself wears a "ninja" costume and plays a ninja.
(19) The video editing system S mainly includes a video editing apparatus 1, a shooting apparatus 20 connected to the video editing apparatus 1 and configured to shoot the original video, and cast terminals 30 connected to the video editing apparatus 1 via a network and utilized by casts.
(20) As illustrated in
(21) Specifically, the video editing apparatus 1 acquires the "cast attribute data" indicating the attribute of the cast from the cast terminal 30, generates the "identification mark data" indicating an identification mark of the cast, and transmits the "identification mark data" to the cast terminal 30. Then, the video editing apparatus 1 reads the identification mark displayed on the cast terminal 30 to confirm the identity of the cast. Then, the video editing apparatus 1 acquires the "original video data" indicating the original video shot by the shooting apparatus 20, and edits, in chronological order, the original video and the substitute video selected based on the cast attribute to generate the "complete video data" indicating the complete video. Finally, the video editing apparatus 1 outputs the "complete video data" to the cast terminal 30.
(22) Note that the video editing apparatus 1 stores multiple patterns of “substitute video data” indicating the pre-shot substitute video.
(23) The shooting apparatus 20 is a shooting camera, and shoots the original video in multiple shooting scenes to generate the “original video data” in each shooting scene.
(24) Specifically, multiple shooting apparatuses 20 are placed at locations where each shooting scene can be shot in an experience-based facility reminiscent of a ninja mansion, generate multiple pieces of "original video data" linked to the cast attribute, and transmit such data to the video editing apparatus 1.
(25) The cast terminal 30 is an information terminal to be operated by the cast, and is specifically a computer such as a smartphone, a tablet terminal, or a PC.
(26) The cast terminal 30 is connected to the video editing apparatus 1 via the network, and receives the software service provided by the video editing apparatus 1.
(27) Specifically, the cast terminal 30 receives input or selection of experience-based event application information from the cast, transmits the "cast attribute data" to the video editing apparatus 1, and receives the "identification mark data" for identifying the cast from the video editing apparatus 1. Then, on the day of the experience, the identification mark is displayed on a display unit of the cast terminal 30 to smoothly perform processing from the cast's reception and check-in to reception for studio shooting. Then, after the end of the experience, the cast terminal 30 receives the "complete video data" from the video editing apparatus 1.
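The exchange described above can be sketched as a minimal terminal-side object; the class and method names below are illustrative assumptions, not identifiers from the embodiment:

```python
class CastTerminal:
    """Minimal sketch of the cast terminal's role: send the cast
    attribute data, hold the received identification mark for display
    at reception, and receive the complete video afterwards."""

    def __init__(self, attributes: dict):
        self.attributes = attributes
        self.mark = None
        self.complete_video = None

    def send_attributes(self) -> dict:
        # Application / check-in: transmit the cast attribute data.
        return self.attributes

    def receive_mark(self, mark: str) -> None:
        # Before the experience: store the identification mark data.
        self.mark = mark

    def display_mark(self) -> str:
        # On the day of the experience: show the mark for reading.
        return self.mark

    def receive_complete_video(self, data: bytes) -> None:
        # After the experience: store the complete video data.
        self.complete_video = data
```
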
(28) Hardware Configuration of Video Editing System
(29) As illustrated in
(30) Moreover, the video editing apparatus 1 further has a display apparatus configured to display character or image information displayed in a predetermined format, an input apparatus to be input-operated when a predetermined command is input to the CPU, a storage medium apparatus such as an external hard disk, and a printing apparatus configured to output the character or image information.
(31) Note that the shooting apparatus 20 and the cast terminal 30 also include similar hardware configurations.
(32) As illustrated in
(33) Software Configuration of Video Editing System
(34) From a functional aspect, the video editing apparatus 1 includes, as main components, a storage unit 10 configured to store various programs and various types of data in addition to the “cast attribute data,” the “identification mark data,” the “substitute video data,” the “substitute video reference data,” the “original video data,” and the “complete video data,” an attribute data acquisition unit 11, a mark data generation unit 12, a data transmission unit 13, an original confirmation unit 14, a video data acquisition unit 15, a video data selection unit 16, a video data generation unit 17, and a video data output unit 18, as illustrated in
(35) These components are implemented by, for example, a CPU, a ROM, a RAM, an HDD, a communication interface, and various programs.
(36) The shooting apparatus 20 includes, as main components, a storage unit 21 configured to store various programs (various types of data), a video shooting unit 22 configured to shoot the original video in the multiple shooting scenes, a video linking unit 23 configured to read the identification mark indicated by the “identification mark data” in advance in shooting of the original video to link the cast attribute included in the identification mark and the original video to each other, and a data transmission unit 24 configured to generate multiple pieces of “original video data” linked to the cast attribute to transmit the “original video data” to the video editing apparatus 1.
(37) The cast terminal 30 includes, as main components, a storage unit 31 configured to store various programs (various types of data), a data transmission unit 32 configured to receive the input operation or the selection operation from the cast to transmit the “cast attribute data” to the video editing apparatus 1, a data reception unit 33 configured to receive the “identification mark data” and the “complete video data” from the video editing apparatus 1, and a display unit 34 configured to display the identification mark indicated by the “identification mark data.”
(38) As illustrated in
(39) The cast attribute data is referenced when generating the "identification mark data" indicating the identification mark of the cast.
(40) Specifically, the cast attribute data mainly includes information on the “ID,” “name,” “nationality,” “age,” “picture,” “gender,” “body type,” “costume size,” and “costume” of the cast.
(41) The embodiment of
(42) The "body type" described herein is, for example, set by the cast as necessary from three types of "thin," "normal," and "large." Alternatively, the video editing apparatus 1 may set the "body type" as necessary based on information on body height, body weight, bust, waist, hip, and shoe size entered by the cast's input operation.
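The alternative body-type setting described above can be sketched as a simple classifier over height and weight; the BMI-style thresholds below are illustrative assumptions, since the embodiment leaves the concrete mapping unspecified:

```python
def classify_body_type(height_cm: float, weight_kg: float) -> str:
    """Map height and weight to one of the three body types.

    The BMI cut-offs (18.5 and 25.0) are illustrative assumptions,
    not values stated in the embodiment.
    """
    bmi = weight_kg / (height_cm / 100) ** 2
    if bmi < 18.5:
        return "thin"
    if bmi < 25.0:
        return "normal"
    return "large"
```
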
(43) The “costume size” is set as necessary by the cast within a range from a small size “S” to a large size “L,” for example.
(44) For the "costume," in the case of, e.g., a male cast, the cast can select as necessary from costumes of the "ninja A" and "ninja B" types, and can select a color from "black" and "white." In the case of a female cast, the cast can select as necessary from costumes of the "kunoichi A" and "kunoichi B" types.
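Taken together, the attribute items of paragraphs (40) to (44) can be modeled as a single record; the concrete field names and types below are assumptions, not a schema defined by the embodiment:

```python
from dataclasses import dataclass

@dataclass
class CastAttributes:
    """One record of the cast attribute data described above."""
    cast_id: str
    name: str
    nationality: str
    age: int
    gender: str        # "male" / "female"
    body_type: str     # "thin" / "normal" / "large"
    costume_size: str  # "S" to "L"
    costume: str       # e.g. "ninja A (black)", "kunoichi B"
    picture: bytes = b""

attrs = CastAttributes("C001", "Taro", "JP", 30,
                       "male", "normal", "M", "ninja A (black)")
```
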
(45) As illustrated in
(46) The cast terminal 30 references such identification mark data to display the identification mark on the display unit 34.
(47) Specifically, the identification mark data includes, in addition to the “identification mark,” information on the “ID,” the “name,” the “gender,” the “body type,” the “costume size,” the “costume,” and the “picture” as cast identification contents.
(48) The “identification mark” described herein is, for example, a two-dimensional bar code, and is a mark for optically reading the above-described cast identification contents.
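One way to sketch the identification mark contents is as a serialized payload that a two-dimensional bar code would carry; the JSON layout below is an assumption, and rendering it as an actual bar code would require a separate library (e.g. the third-party `qrcode` package):

```python
import json

# The cast identification contents named in paragraph (47).
MARK_KEYS = ("id", "name", "gender", "body_type", "costume_size", "costume")

def build_mark_payload(attrs: dict) -> str:
    """Serialize the identification contents to a compact JSON string,
    which a 2-D bar code could then encode."""
    return json.dumps({k: attrs[k] for k in MARK_KEYS},
                      separators=(",", ":"))

def read_mark_payload(payload: str) -> dict:
    """Inverse of build_mark_payload, as performed after the bar code
    has been optically decoded at reception or at a shooting scene."""
    return json.loads(payload)
```
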
(49) The "substitute video data" is moving image data indicating the pre-shot substitute video, and is collectively stored in the storage unit 10 according to the multiple shooting patterns and the multiple shooting scenes.
(50) The substitute video data is stored in the storage unit 10 in the multiple shooting patterns according to a difference in cast attribute items. Specifically, the substitute video data is stored in the multiple shooting patterns according to a difference in the “gender,” “body type,” “costume size,” and “costume” of the cast.
(51) Note that the substitute video data is not particularly limited to the above-described cast attribute items, and for example, may be stored in the multiple shooting patterns according to a difference in the “nationality” and “age” of the cast. Alternatively, the substitute video data may be stored in the multiple shooting patterns according to a difference in information obtained from the “picture” of the cast.
(52) The embodiment of
(53) As illustrated in
(54) Such substitute video reference data is referenced when selecting, from the multiple shooting patterns, the substitute video data of the shooting pattern corresponding to the cast attribute items.
(55) Specifically, in the substitute video reference data, information on the “shooting scene” and the “gender,” “body type,” “costume size,” and “costume” of the cast is assigned to each piece of substitute video.
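The per-scene, per-attribute assignment described above can be sketched as a dictionary lookup keyed by the shooting scene and the four attribute items; every key and file name below is an illustrative assumption:

```python
# Sketch of the substitute video reference data: each pre-shot clip
# is keyed by (scene, gender, body type, costume size, costume).
SUBSTITUTE_VIDEO_REFERENCE = {
    ("scene2", "male", "normal", "M", "ninja A (black)"):
        "sub_s2_m_normal_M_ninjaA.mp4",
    ("scene2", "female", "thin", "S", "kunoichi A"):
        "sub_s2_f_thin_S_kunoichiA.mp4",
}

def select_substitute(scene: str, gender: str, body_type: str,
                      costume_size: str, costume: str):
    """Return the substitute clip whose shooting pattern matches the
    cast's attribute items for the given scene, or None if no
    matching pattern was pre-shot."""
    return SUBSTITUTE_VIDEO_REFERENCE.get(
        (scene, gender, body_type, costume_size, costume))
```
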
(56) The embodiment of
(57) The "original video data" is moving image data indicating the original video shot by the shooting apparatus 20, and is collectively stored in the storage unit 10 according to the multiple shooting scenes.
(58) Specifically, the original video data is moving image data linked to the cast attribute (specifically, the ID of the cast), and for each cast, is managed and stored for each of the multiple shooting scenes.
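The per-cast, per-scene management described above can be sketched as a two-level mapping keyed by cast ID and then by shooting scene; this layout is an assumption, not a stated implementation:

```python
from collections import defaultdict

# Original video store: cast ID -> shooting scene -> clip reference.
original_store: dict = defaultdict(dict)

def register_original(cast_id: str, scene: str, clip: str) -> None:
    """Record one linked original clip, as the shooting apparatus 20
    would after reading the cast's identification mark."""
    original_store[cast_id][scene] = clip

register_original("C001", "scene1", "orig_C001_s1.mp4")
register_original("C001", "scene3", "orig_C001_s3.mp4")
```
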
(59) The embodiment of
(60) Moreover, the embodiment of
(61) The “complete video data” is moving image data indicating the complete video obtained by editing the original video and the selected substitute video in chronological order, and is generated by the video editing apparatus 1 and is transmitted to the cast terminal 30.
(62) Specifically, as illustrated in
(63) The attribute data acquisition unit 11 acquires the “cast attribute data” from the cast terminal 30 utilized by the cast, and such cast attribute data is sorted according to each cast and is stored in the storage unit 10.
(64) The mark data generation unit 12 generates the “identification mark data” with reference to the cast attribute indicated by the “cast attribute data,” and such identification mark data is sorted according to each cast and is stored in the storage unit 10.
(65) The data transmission unit 13 transmits the “identification mark data” to the cast terminal 30. At this point, the cast terminal 30 (the data reception unit 33) receives the “identification mark data” from the video editing apparatus 1, and the identification mark data is stored in the storage unit 31.
(66) The original confirmation unit 14 reads the identification mark displayed on the cast terminal 30 to confirm that the user of the cast terminal 30 is the cast himself or herself.
(67) Specifically, a not-shown camera connected to the video editing apparatus 1 reads the identification mark, and the original confirmation unit 14 checks, from the cast identification contents included in the identification mark, whether or not the user is the cast himself or herself.
(68) The video data acquisition unit 15 acquires the “original video data” linked to the cast attribute from the shooting apparatus 20, and such original video data is, for each cast, sorted according to each shooting scene and is stored in the storage unit 10.
(69) Note that the video linking unit 23 of the shooting apparatus 20 reads, in shooting of the original video, the identification mark indicated by the “identification mark data” in advance to link the cast attribute recorded in the identification mark and the original video to each other.
(70) The video data selection unit 16 selects, with reference to the cast attribute indicated by the “cast attribute data,” the “substitute video data” in a predetermined shooting pattern from the multiple shooting patterns.
(71) Specifically, the video data selection unit 16 selects, for each shooting scene, the “substitute video data” in the shooting pattern corresponding to the cast attribute items from the multiple shooting patterns, as illustrated in
(72) The video data generation unit 17 edits the original video indicated by the “original video data” and the substitute video indicated by the selected “substitute video data” in chronological order to generate the “complete video data” indicating the complete video.
(73) Specifically, as illustrated in
(74) The embodiment of
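The chronological editing described above can be sketched as a per-scene merge that prefers the cast's original clip and falls back to the selected substitute clip; clip names are placeholders, and a real build would hand the resulting list to an external video tool (such as ffmpeg's concat demuxer):

```python
def assemble_complete_video(scene_order, originals, substitutes):
    """Walk the shooting scenes in chronological order and pick the
    cast's original clip where one was shot, otherwise the selected
    substitute clip for that scene."""
    timeline = []
    for scene in scene_order:
        clip = originals.get(scene) or substitutes.get(scene)
        if clip is not None:
            timeline.append(clip)
    return timeline

timeline = assemble_complete_video(
    ["scene1", "scene2", "scene3"],
    {"scene1": "orig_s1.mp4", "scene3": "orig_s3.mp4"},
    {"scene2": "sub_s2.mp4"})
# timeline: ["orig_s1.mp4", "sub_s2.mp4", "orig_s3.mp4"]
```
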
(75) The video data output unit 18 outputs the “complete video data” to the cast terminal 30.
(76) The method for outputting the “complete video data” may be, for example, data transmission of the “complete video data” to the cast terminal 30 or transmission of URL information of a website storing the “complete video data” to the cast terminal 30.
(77) Video Editing Method
(78) Next, the processing of the video editing program (a video editing method) executed in the video editing system S (the video editing apparatus 1) is described based on
(79) The above-described program according to an embodiment is a utility program aggregating various programs for implementing, as functional components of the video editing apparatus 1, the storage unit 10, the attribute data acquisition unit 11, the mark data generation unit 12, the data transmission unit 13, the original confirmation unit 14, the video data acquisition unit 15, the video data selection unit 16, the video data generation unit 17, and the video data output unit 18, and the CPU of the video editing apparatus 1 executes this video editing program.
(80) Note that the above-described program is, for example, executed in response to the operation of starting video editing from a management staff as a user.
(81) A “video editing flow” illustrated in
(82) Note that the acquired cast attribute data is sorted according to each cast and is stored in the storage unit 10.
(83) Next, at a step S2, the mark data generation unit 12 generates the “identification mark data” with reference to the cast attribute indicated by the “cast attribute data.”
(84) Then, the data transmission unit 13 transmits the “identification mark data” to the cast terminal 30 (a step S3). At this point, the cast terminal 30 receives the “identification mark data” from the video editing apparatus 1, and the identification mark data is stored in the storage unit 31.
(85) Next, at a step S4, the original confirmation unit 14 reads the identification mark displayed on the cast terminal 30 to confirm that the user of the cast terminal 30 is the cast himself or herself.
(86) Specifically, the not-shown camera connected to the video editing apparatus 1 reads the identification mark, and the original confirmation unit 14 checks, from the cast identification contents included in the identification mark, whether or not the user is the cast himself or herself.
(87) Next, at a step S5, the video data acquisition unit 15 acquires, from the shooting apparatus 20, multiple pieces of “original video data” linked to the cast attribute.
(88) Note that the video linking unit 23 of the shooting apparatus 20 reads the identification mark indicated by the "identification mark data" in advance in shooting of the original video to link the cast attribute recorded in the identification mark and the original video to each other.
(89) Next, at a step S6, the video data selection unit 16 selects, with reference to the cast attribute indicated by the “cast attribute data,” the “substitute video data” in the predetermined shooting pattern from the multiple shooting patterns.
(90) Specifically, as illustrated in
(91) Next, at a step S7, the video data generation unit 17 edits the original video indicated by the “original video data” and the substitute video indicated by the selected “substitute video data” in chronological order, thereby generating the “complete video data” indicating the complete video.
(92) Specifically, as illustrated in
(93) Finally, at a step S8, the video data output unit 18 outputs the “complete video data” to the cast terminal 30.
(94) The process of
(95) By the above-described flow of the processing of the video editing program, video in which the cast himself or herself acts as the character can be quickly provided in the experience-based service for inbound visitors, and the workload and working cost of a service manager can be reduced.
(96) Other Embodiments
(97) In the above-described embodiment, as illustrated in
(98) For example, as characters other than the “ninja,” characters such as a “samurai” and a “prince (princess),” characters of “existing famous people,” and “cartoon” characters may be employed.
(99) Note that a character such as the "ninja" is preferable because the costume covers the parts of the cast other than the face and also covers part of the face. That is, the video can be finished with such a degree of completion that a third person cannot recognize that the substitute video has been utilized.
(100) In the above-described embodiment, the video data selection unit 16 selects, from the multiple shooting patterns, the substitute video data of the shooting pattern corresponding to the cast attribute items (the gender, the body type, the costume size, and the costume), but the selection is not limited to the above-described attribute items, and changes can be made to the video data selection unit 16.
(101) For example, the video data selection unit 16 may select the substitute video data based on one attribute item of the “gender,” “body type,” “costume size,” or “costume” of the cast. Alternatively, the video data selection unit 16 may select the substitute video data based on the “nationality” and “age” of the cast or the information obtained from the “picture.”
(102) In the above-described embodiment, the video editing program is stored in a recording medium readable by the video editing apparatus 1, and the video editing apparatus 1 reads and executes the program to execute the processing. The recording medium readable by the video editing apparatus 1 described herein is, for example, a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, or a semiconductor memory.
(103) Alternatively, this video editing program may be delivered to a not-shown server terminal via a communication line, and the server terminal itself having received such a delivery may function as a video processing apparatus to execute the program. Alternatively, the cast terminal itself may function as the video processing apparatus.
(104) Note that the above-described embodiments are provided merely as examples for the sake of easy understanding and are not provided for any purpose of limitation. Changes and modifications can be made to the described embodiments, including variations and equivalents thereof.
(105) TABLE OF REFERENCE NUMERALS
S Video editing system
1 Video editing apparatus
10 Storage unit
11 Attribute data acquisition unit
12 Mark data generation unit
13 Data transmission unit
14 Original confirmation unit
15 Video data acquisition unit
16 Video data selection unit
17 Video data generation unit
18 Video data output unit
20 Shooting apparatus
21 Storage unit
22 Video shooting unit
23 Video linking unit
24 Data transmission unit
30 Cast terminal
31 Storage unit
32 Data transmission unit
33 Data reception unit
34 Display unit