MEDIA SYSTEM AND METHOD OF GENERATING MEDIA CONTENT
20220232262 · 2022-07-21
Inventors
CPC classification
H04H60/07
ELECTRICITY
International classification
Abstract
A method and system for generating media content comprising synchronised video and audio components. Media content is captured using a camera function of a user device to generate media content having a captured video component and a captured audio component corresponding to a speaker output. An audio signal corresponding to an audio signal input to the speaker is wirelessly transmitted to the user device; the wirelessly transmitted audio signal is synchronised with the captured video component and/or captured audio component of the captured media content to generate combined media content.
Claims
1-22. (canceled)
23. A method of generating media content comprising synchronised video and audio components, the method comprising: receiving media content captured using a camera function of a user device; wherein the media content has a captured video component and a captured audio component; and further wherein the captured audio component corresponds to audio output by a remote speaker; wirelessly transmitting to the user device an audio signal substantially corresponding to an audio signal input to the remote speaker; and synchronising the wirelessly transmitted audio signal with the captured video component to generate combined media content in which the captured video component is synchronised with the wirelessly transmitted audio signal.
24. The method according to claim 23, wherein the transmitted audio signal wirelessly transmitted to the user device substantially corresponds to an audio signal output from a mixing console.
25. The method according to claim 23, further comprising wirelessly transmitting synchronisation data to the user device.
26. The method according to claim 25, wherein the synchronisation data comprises clock synchronisation information to synchronise a clock function at the user device with a system clock function.
27. The method according to claim 24, further comprising providing a networking module for creating a wireless network and wirelessly transmitting the audio signal to the user device over the wireless network, wherein the user device is connected to the wireless network via the networking module.
28. The method according to claim 27, wherein the networking module facilitates wireless communication between the user device and the network; and further wherein the method includes the networking module transmitting the audio signal to the user device.
29. The method according to claim 27, wherein the networking module receives the audio signal output from the mixing console.
30. The method according to claim 27, further comprising generating synchronisation data at the networking module and wirelessly transmitting the synchronisation data to the user device.
31. The method according to claim 23, wherein the method includes wirelessly transmitting the transmitted audio signal to the user device substantially concurrently with the capturing of the media content by the user device.
32. The method according to claim 23, further comprising live streaming the combined media content.
33. The method according to claim 23, wherein the captured audio component of the captured media content is combined with or substantially replaced by the wirelessly transmitted audio signal to generate the combined media content.
34. The method according to claim 23, wherein the synchronising the wirelessly transmitted audio signal includes synchronizing the wirelessly transmitted audio signal with the captured video component and the captured audio component of the captured media content.
35. A non-transitory computer-readable medium comprising computer executable instructions which, when executed by one or more processors, cause the one or more processors to perform a method of generating media content according to claim 23.
36. A wearable device configured to communicatively couple with one or more processors comprising instructions executable by the one or more processors, and wherein the one or more processors is operable when executing the instructions to perform the method according to claim 23.
37. A signal processing device for transmitting audio and/or video signals to, and receiving audio and/or video signals from, a wireless network, the signal processing device comprising: a receiver for receiving audio signals from a mixing console or an audio workstation; one or more processors configured to generate and associate synchronisation data with the audio signals, the one or more processors being coupled to a network module for providing a wireless network; and a transmitter for transmitting the audio signals to one or more user devices over the wireless network.
38. The signal processing device according to claim 37, further comprising a clock synchronisation component for establishing a common time base between a master system clock and a clock function of the one or more user devices.
39. An audio workstation comprising the signal processing device of claim 37.
40. A public address system comprising the signal processing device of claim 37.
41. A system for generating media content comprising synchronised video and audio components comprising: a transmitter configured to wirelessly transmit to one or more user devices an audio signal substantially corresponding to an audio signal input to the remote speaker; and at least one processor for synchronising the wirelessly transmitted audio signal with a captured video component and/or a captured audio component of captured media content captured by one or more user devices to generate combined media content in which the captured video component is synchronised with the wirelessly transmitted audio signal; wherein the captured audio component corresponds to audio output by a remote speaker.
42. The system according to claim 41, further comprising a clock synchronisation component configured to generate synchronisation data.
43. The system according to claim 41, comprising one or more of: a mixing console, an audio workstation, a loudspeaker, an amplifier, a transducer, and one or more wireless access points.
44. The system according to claim 41, further comprising a plurality of the networking modules.
45. The system according to claim 41, comprising a plurality of the user devices.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0184] The Figures illustrate embodiments of the invention by way of example only.
DETAILED DESCRIPTION
[0193] The mixing console (or “mixing desk”) 4 may process analogue or digital signals. Each audio signal is directed to an input channel of the mixing console 4 and these signals are processed and combined to provide an output signal delivered to the speaker system 5 via an output channel.
[0194] Audio signal processing at the mixing console 4 may include altering signals to change, for example, relative volumes, gain, EQ (equalization), panning, mute, solo and other onboard effects.
[0195] The master output mix created at the mixing console 4 is amplified and transmitted to the audience via the speaker system 5. One or more auxiliary output mixes may also be directed to the performers on stage via stage monitors. As shown in
[0196] The mixing console 4 may further comprise or be connected to a recording device such as a digital audio workstation (DAW) for further processing and recording. Mixing consoles are commonly connected to one or more outboard processors such as digital signal processing (DSP) boxes (e.g., noise gates and compressors), each providing individual functionality to increase the overall system possibilities for sounds and audio manipulation.
[0197] The signal chain is indicated by the arrows in
[0198] As indicated, a corresponding audio signal (i.e., comprising the same audio information or the same “mix”) is also transmitted from the mixing console to the loudspeaker 7, and the audio output from the loudspeaker 7 is picked up by the user device microphone. In other words, the signal input to the loudspeaker 7 is substantially the same as the signal input to the broadcast unit 8 and the same master output audio mix is output to the user device via the loudspeaker and via the broadcast unit 8.
[0199] Referring to
[0200] As illustrated in further detail in
[0201] The broadcast unit 8 further comprises a transmitter 19 to wirelessly transmit the master audio mix signal (which may be a modulated master audio mix signal) to a remote server for processing or directly to one or more portable electronic user devices 9, such as mobile telephone communications devices, smartphones, smart watches and other mobile video devices such as wearables having video functionality.
[0202] A modulated signal includes a signal that has one or more of its characteristics set or changed in such a manner as to encode information, instructions, data, etc., in the signal.
[0203] In certain embodiments, a user device 9 may comprise any portable electronic device such as a tablet computer, a laptop, a personal digital assistant, a wearable smart watch, headgear or eyewear or other similar device with similar functionality to support a camera function and optionally transfer or stream data wirelessly to a router or cellular network. In certain embodiments, the user device 9 may comprise a plurality of connected devices, such as a wearable bracelet, glasses or headgear communicatively coupled to another portable electronic device having a user interface, such as a mobile telephone.
[0204] The user device 9 may comprise one or more processors to support a variety of applications, such as one or more of a digital video camera application, a digital camera application, a digital music player application and/or a digital video player application, a telephone application, a social media application, a web browsing application, an instant messaging application, a photo management application, a video conferencing application, and an e-mail application.
[0205] In one embodiment, the user device 9 has a front-facing camera module including a camera lens and image sensor to capture photographs or video and a rear-facing second camera module. The user device 9 further comprises an audio input-output (I/O) system, processing circuitry including an application processor, a wireless communication processor and a network communication interface. It generally also includes software stored in non-transitory memory executable by the processor(s), and various other circuitry and modules. For example, the application processor controls a camera application that allows the user to use the mobile device 9 as a digital camera to capture photographs and video.
[0206] Mobile video devices such as smartphones also usually include an operating system (OS) such as iOS®, Android®, Windows® or other OS. A GPS module determines the location of the mobile device 9 and provides data for use in applications including the camera (e.g., as photograph/video metadata).
[0208] A real-time video stream may be generated by each user and broadcast live, e.g., via a social media platform, which may be a pre-existing social media platform or a bespoke video-sharing platform forming part of the system 1.
[0209] The mobile device 9 is connected to a network 21, for example a wireless area network or Wi-Fi network, which may comprise or be part of one or more wireless local area networks (WLANs) provided by a wireless access point 11 on the broadcast unit 8, which serves as both wireless base station and transceiver for media signal processing and transmission. Communication protocols such as transmission control protocol (TCP/IP) or user datagram protocol (UDP/IP) are utilised. Other suitable wireless communication networks, protocols and technologies known in the art are envisaged and may be utilised, such as Wi-Fi, 3G, 4G, WiMAX, wireless local loop, GSM (Global System for Mobile Communications), wireless personal area networks (PAN), wireless metropolitan area networks (MAN), wireless wide area networks (WAN), networks utilising other radio communication, Bluetooth and/or infrared (IR).
[0210] In the illustrated embodiment, the network 21 is a private network and the broadcast unit 8 of the network system communicates with the software application 10 executing on the user device 9 to identify the user device 9. An authorisation module 16 verifies any necessary associated authorisations for receiving high definition audio from the mixing console 4 at the device 9. Such authorisation may include identification of a user ID, media access control (MAC) address, or any other suitable client device identifier. Optionally, authorisation data may comprise event ticket and/or GPS information. A virtual firewall (not shown) provides a secure location which users cannot access without agreeing to terms and conditions of the software application 10. Separated architecture using multiple hard drives may be utilised for firewall separation of application and user access. The network 21 may provide an encrypted communication session for authenticated users generating and receiving media data over the network.
[0211] Joining of the private network 21 may initiate software execution at the user device 9 to perform time stamping and other in-app video functions, as well as user device requests for HD audio (and/or high quality video) signals from the server. The private network 21 may also provide access to/from the Internet to allow live streaming and video uploads to social media sites.
[0212] Within the CMS it is possible to manage active broadcast units. Broadcast unit unique ID, latitude and longitude data are used to verify each broadcast unit request. If this information is not verified, any attempt to push data to the application server will be rejected.
[0213] The audio signal received at the broadcast unit 8 from the mixing console 4 is processed by a processing module 14 to generate and/or associate various data and/or metadata with the audio signal or stream. Data (and/or metadata) may be associated with the signal by modulating the audio wave and/or broadcast as chirps with the audio wave. Such data or metadata may, for example, comprise timing information, frequency information, such as frequency components of soundwave or spectrogram peaks, digital audio fingerprint information, other waveform information, click tracks, other synchronisation pulses, and/or other values and data related to the audio signal. Data may be encoded into the audio signal and decoded (demodulated) by a processor at the receiving user device 9.
[0214] A synchronisation module 12 provides synchronisation information, which may include any of this data for synchronising the high definition audio with the video stream captured by a user on the user device 9. An enhanced video stream comprising the associated high definition audio from the mixing console 4 is generated and may be provided to a social media application for sharing via the internet (either by upload, live streaming, etc.) and/or saved in memory on the user device 9, or cloud location (which may include a secure storage facility provided via the software application 10).
[0215] The synchronisation module 12 comprises a clock sync component 15 that utilises a system clock 15A associated with the broadcast unit 8 (a broadcast unit internal clock or server clock), to establish a common time base between the master system clock 15A of the broadcast unit server 8 and a plurality of user devices 9, each having their own clock function (which may be supplied by the original equipment manufacturer via default device applications or settings, or may be an alternative clock function, such as a clock function provided by the software application 10).
[0216] In one embodiment, the system clock 15A comprises a hardware reference or primary time server clock and utilises a network time protocol (NTP) type synchronisation system. The broadcast unit 8 may comprise a GPS antenna for receiving timing signals, which can be transmitted to user devices 9.
[0217] The clock sync component 15 of the synchronisation module 12 is configured to generate a timecode/timestamp, which can be utilised for correlation with the device clock function corresponding to the timing of video captured at the user device 9.
[0218] The clock sync component 15 is configured to synchronise the time at the master system clock 15A with the clock at one or more user devices 9 (which may function as a master and slave type configuration). This includes a clock component of the application 10 executing on the user device 9 and/or accessing and calibrating another clock application or widget on the user device 9, for example the manufacturer-provided operating system clock function.
[0219] In another embodiment, the clock functions may be synchronised by the application 10 executing on the user device 9, providing instructions for the user device 9 to query another time server via the wireless access point 11, which is the same as a time server providing a timing signal to the system clock 15A, such as a GPS satellite-based time server.
[0220] An authenticated user device may be prompted to query a time server (either the system clock 15A or other remote time server) at start-up of the application 10, on a request to join the private network, or at the start of a video session. The user device may reset/synchronise its internal clock, synchronise with an application clock and/or calculate a time differential between one or more user device clocks and the system clock 15A, and calculate any offset for synchronisation of audio and video, taking into account signal transmission and arrival times.
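The offset and time-differential calculation described above can be sketched as a classic NTP-style exchange between the user device and the time server. This is an illustrative sketch, not the specification's implementation; the function name and millisecond units are assumptions.

```python
def clock_offset_ms(t0, t1, t2, t3):
    """NTP-style offset/delay estimate (all timestamps in milliseconds).

    t0: user device sends query (device clock)
    t1: time server receives query (server clock)
    t2: time server sends reply (server clock)
    t3: user device receives reply (device clock)
    """
    # Offset of the device clock relative to the server clock,
    # assuming symmetric transmission times in each direction.
    offset = ((t1 - t0) + (t2 - t3)) / 2
    # Round-trip network delay, excluding server processing time.
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay
```

The returned offset would then be applied when correlating video-capture timestamps against the master system clock 15A.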
[0221] The timing information generated by the synchronisation module 12 of the unit 8 may comprise a calibration (or clock synchronisation) signal or metadata timecode. This is transmitted together with the audio signal to the user device 9. The application 10 executing on the user device 9 utilises timestamp data to synchronise high definition audio transmitted to the user device with video (and optionally audio) captured by the user using the user device 9. In certain embodiments, real-time synchronisation provides live streaming functionality such that the user may live stream the video substantially at the same time as they are recording the video footage, combined with the associated HD audio received from the mixing console 4 via the broadcast unit 8.
[0222] In the illustrative embodiment shown in
[0223] The user device 9 video function also utilises one or more built-in device microphones and captures ambient audio transmitted from the speaker system along with the captured video.
[0224] The HD audio signal received at the user device from the broadcast unit 8 can be further synchronised with the user video by algorithmic comparison and matching of characteristics of the audio signal from the device microphone (such as waveform alignment/audio fingerprinting) and the audio signal (and associated metadata) received from the broadcast unit 8. Synchronisation may be achieved and/or refined using a combination of algorithmic comparison of signals (and optionally metadata) and timing information from the clock sync module 15. In certain embodiments, a synchronisation pulse (from a GPS-based time server or otherwise) accurate to microsecond levels may be output from the broadcast unit 8 to the user device 9 with the media signal. Click track data from the stage audio may also be included in the broadcast to aid audio synchronisation.
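The waveform-alignment step described above can be sketched as a cross-correlation between the microphone-captured audio and the received HD audio. This is a minimal numpy-based sketch with hypothetical names; a production system would match compact features (spectrogram peaks, audio fingerprints) rather than raw samples.

```python
import numpy as np

def estimate_lag_seconds(mic, hd, sample_rate):
    """Estimate how far the microphone-captured audio lags the
    received HD audio by locating the cross-correlation peak.
    A positive result means the mic signal is delayed."""
    corr = np.correlate(mic, hd, mode="full")
    # Index of the peak, re-centred so zero lag maps to zero.
    lag_samples = int(np.argmax(corr)) - (len(hd) - 1)
    return lag_samples / sample_rate
```

The estimated lag can then be combined with the clock sync module's timing information to refine alignment before overlay.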
[0225] The synchronisation module 12 provides synchronisation information such that data may be aligned by the application 10 at the user device 9. Any time differences between the arrival time of the signal from the broadcast unit 8 and the audio transduced by a microphone of the user device 9 are automatically adjusted and digital audio fingerprints and/or other metadata may be used to overlay the audio transmitted from the broadcast unit to the user video, which may require a few milliseconds of adjustment.
[0226] In certain embodiments, the synchronisation of audio and video may be performed by one or more processors at the broadcast unit 8 communicating with the user device 9. Alternatively or in addition, synchronisation of audio and video may be performed at a remote server.
[0227] In certain embodiments, the system comprises a server pool comprising a plurality of local and/or remote servers, which may include cloud-based servers. An application server or CMS is responsible for communicating with the software application on the user device. A storage server stores all uploaded HD audio and user media files. Storage usage is actively monitored and increased as necessary. A database server stores all application and user data. Data is encrypted at rest and the encryption keys are stored separately. A load balancer determines which of a number of application servers has capacity to handle each current request and distributes the load accordingly. The system is able to handle a high volume of simultaneous requests for information in addition to supporting a high number of concurrent users.
[0228] The application server(s) are configured to make use of compression to serve content. This allows the server to compress data before it is sent to a user device, helping to keep load times low without compromising the content quality. The data is automatically uncompressed on the user's device. Additionally, where applicable, the application server(s) cache requests to minimise the amount of work required by the server to complete the request.
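The compress-before-serving behaviour described above is the standard HTTP content-encoding pattern. A minimal sketch using Python's standard library follows; the function names are illustrative, and in practice the web framework or HTTP stack would perform both steps transparently.

```python
import gzip

def compress_response(payload: bytes) -> bytes:
    """Compress content before transmission (e.g., Content-Encoding:
    gzip), keeping load times low without changing the content."""
    return gzip.compress(payload)

def decompress_response(payload: bytes) -> bytes:
    """Client-side counterpart; the user's device normally performs
    this automatically."""
    return gzip.decompress(payload)
```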
[0229] Server usage is monitored and adjusted automatically, for example by assigning more resources to the existing servers, shutting down unnecessary services on the server to free up resources, or employing an additional server to share the load.
[0230] In certain embodiments, signal processing may be performed at a remote server and as such, the broadcast unit 8 may transmit high definition audio signals to a remote server (which may be cloud-based) and processing may be performed at the server, such that both the broadcast unit and user device request synchronisation data from the same remote application server.
[0231] To ensure accurate synchronisation, both the App executing on a user device and the broadcast unit 8 request the current timestamp from the application server at regular intervals. This information is stored against the recorded media and used to clip the audio files to the correct length. The timestamp is accurate to the nearest millisecond, which is important for accurate synchronisation; relying solely on the internal clock function of a mobile telephone may be less reliable.
[0232] The system takes the start time of the video and checks that it falls within the start and end times of the audio file. If it does, it will then cut the audio at the video start and end times.
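The window check and clipping step described above can be sketched as follows. Millisecond timestamps and the function name are illustrative assumptions; the specification does not prescribe an implementation.

```python
def clip_audio_to_video(audio_start, audio_end, video_start, video_end):
    """Given millisecond timestamps, return (offset, length) to cut
    from the audio file so it matches the video, or None if the video
    window does not fall within the recorded audio."""
    if not (audio_start <= video_start and video_end <= audio_end):
        return None
    # Offset into the audio file, and duration of the clip.
    return (video_start - audio_start, video_end - video_start)
```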
[0233] The user may be sent a notification and the new audio clip can then be streamed to the user's device in synchronisation with the video. Synchronisation may be performed at the server or at the user device. The system also generates a version of the video with the original audio replaced with the broadcast unit audio for sharing on social platforms. The audio on these clips has a short fade in/out so they do not immediately start at maximum volume.
[0234] Upon a user pressing the record button within the App, a request is sent to the application server to get the current timestamp. Once a video has been captured, the user is presented with two options—Add to Queue (upload) or Save Video to Camera Roll. The App will prompt users to enable location services while in use. This allows the App to recognise where the user is, placing them at an event/show, determine proximity to a broadcast unit 8, and obtain certain other data. When the audio broadcast unit is automatically prompted to commence recording by listening and detecting sound, it also requests a current timestamp from the server. User devices and the broadcast unit periodically request timestamp information from the server during recording, such that timestamp information is accurate to the nearest millisecond.
[0235] Waveform or audio fingerprint data from user-generated video/audio may also be compared with data received with the HD audio signal to provide an assessment of the quality of the user-generated audio from the user device microphone. This can be used to automatically optimise any combination of user-generated audio and HD audio wirelessly received from the mixing console 4. This may be done by algorithmically adjusting volume levels or other components of the signal to provide an optimised combined audio matched to the user-generated video.
[0236] The application 10 may provide instructions such that the headphone output and/or speaker output of the user device 9 is muted automatically during synchronisation of the received audio signal with the user-generated video. Thus, the user does not hear the received HD audio during the live performance, even if live streaming the video recording.
[0237] As illustrated in
[0238] In certain embodiments, a user requests transmission of a video signal from a video source (camera module 17) to a user device 9 as an alternative, or in addition to an audio signal. The video may correspond to a video displayed on a screen at the live event, such as video of the performers on stage, or video that is not displayed at the event.
[0239] In a similar system to the audio transmission, the video signal is input to the broadcast unit 8 in addition to the audio signal from the mixing console 4. The video signal is automatically time stamped utilising a system clock 15A and is formatted, e.g., compressed into a format that can be read by media players of a user device 9. Transmission of video signals may utilise UDP/IP instead of TCP/IP. If both audio and video signals are received at the broadcast unit 8, software executing at the broadcast unit 8 provides functionality for combination of the HD audio and video data feeds and synchronisation before transmission to a user device 9. Video (and optionally additional audio) received at a user mobile video device 9 may be combined with (i.e., merged to varying degrees e.g., utilising a slider function—or otherwise utilised to provide enhanced user video) the user-generated video captured by the camera of the user device 9. Combination and optimisation of transmitted and user-generated video may be an automatic function provided in real time by the software application 10 executing on the user device for live streaming or it may be a function for post-event processing (optionally with subsequent video data download) by a user.
[0240] One illustrative embodiment of the broadcast unit 8 of the invention is shown in
[0241] This may comprise radio frequency (RF) transceiver circuitry and at least one antenna for receiving and transmitting digital signals. The unit 8 further includes a wireless access point (WAP) 11 to provide a closed local area network (which may be part of a wide area network).
[0242] An internal PC-based system clock 15A in the unit 8 provides a network-synchronised time stamping service for software events, including message logs. The synchronised, time-accurate correlation of log files between the user device 9, software application 10 and broadcast unit hardware provides this functionality.
[0243] The WAP 11 provides additional information on users of the system, including logging the number of users, how much data is being used, collecting other user data such as behavioural data for storage, as well as generating time stamp correlations. Advantageously, the broadcast unit 8 has functionality to process and transmit audio data to a large number of user devices requesting HD audio. A plurality of broadcast units may be utilised in very large venues or festivals.
[0244] A feedback system may process and store data received from user devices 9 via the network and/or application. Feedback data may include information about the user and user behaviour, such as which sections of the performance the user recorded and/or streamed, which performers the user was most engaged with, which social networking sites the user uploaded video or streamed to and GPS information on where the user was located within the venue. The feedback system may further provide aggregated data such as parts of the performance in which video or user engagement peaked, user demographic etc.
[0245] The feedback data from the system 1 may be utilised to provide customised advertisements to the user, for example via the software application 10, which may be displayed to the user during the event or subsequently. For example, GPS information may provide information on whether a user is located in a premium seating location and advertisements may be customised to target premium customers.
[0246] Feedback data or other data received by the broadcast unit 8 may be utilised by the system to automatically adjust the bitrate for streaming. At the broadcast unit 8 there may be automatic adjustment of the bitrate (upscaling if necessary) to provide an HD audio feed to a maximum of 0 dB. Transparent (musical) compression may be activated when −3 dB is reached. There may also be automatic adjustment of the signal from the mixing desk, e.g., amplification to compensate for any audio mix that is at a low level.
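The level handling described above can be sketched as two simple rules: make-up gain that raises a low-level mix toward the 0 dB ceiling, and compression that engages at −3 dB. The thresholds come from the text; the function names are illustrative assumptions.

```python
def makeup_gain_db(peak_dbfs, ceiling_dbfs=0.0):
    """Gain (dB) needed to raise a low-level mix so it peaks at the
    ceiling; never attenuates, and never pushes peaks above the
    ceiling (0 dBFS by default)."""
    return max(ceiling_dbfs - peak_dbfs, 0.0)

def compression_active(peak_dbfs, threshold_dbfs=-3.0):
    """Transparent (musical) compression engages once the feed
    reaches the threshold."""
    return peak_dbfs >= threshold_dbfs
```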
[0247] In certain embodiments, the broadcast unit comprises a tamper proof secured housing 22 in a 3U rack mount format box and a motherboard with the relevant cards and connections at the front or rear side. The size of the box (housing 22), number of antennae, user access configurations (I/O system) etc. may be varied depending on the end use location and/or venue size. For example, arena, festival, theatre, stage or street locations. For larger locations/venues, the system 1 may require a plurality of broadcast units 8 at selected locations around or within the area.
[0248] In one embodiment, the broadcast unit 8 comprises a server in a rack mount platform installed in a transportable rack case. It has a dual hard drive system with a soft firewall between these (e.g., 1×Solid State Drive and 1×SATA Hard Drive). A four port Server CAT6 Card connects to the Wireless Access Point(s), network and other network devices. A 16 GB RAM 21″ monitor keyboard and mouse may also be installed in the system with a sliding rack shelf. Windows® and DANTE® Virtual Sound Card licences enable connection to the mixing desk 4. A slot enabling an upgrade facility may be included, for, e.g., multitrack output and recording via a Dante or similar industry standard digital interface. The unit 8 further comprises dual band 2.4 GHz and 5 GHz Wireless Access Points with a tripod system.
[0249] A sound engineer or other user may listen to audio at the broadcast unit 8, via a headphone output 23, and it may be possible to adjust the volume via a volume control. A signal output display 24 indicates correct function and transmission of signal(s).
[0250] A recording facility at the broadcast unit 8 records and automatically deletes recording data after a predetermined amount of time, e.g., 1 week (and/or once the recordings have been backed up to a main server), to free up local memory at the unit 8. An embodiment of a dynamic broadcast unit storage management module or system is described with reference to
[0251] A system having a plurality of units 8, for example at a festival site, would be individually visible to a main server and cover a number of stage areas at different locations. In certain embodiments, any of the units 8 may send and receive signals to one or more other units 8.
[0252] In a further embodiment, the audio signal may be subsequently synchronised on demand with a video recording from the event at a time after the live event (i.e., not live during the event or performance). For example, video captured by the user device at the live event may be stored in memory on the user device or cloud location (and/or via the software application 10) for playback at a later time. The application 10 executing on the user device at the time of video capture associates the relevant timestamp data to the video data, which can be used to synchronise high definition audio to the video after the event. This provides functionality for downloading HD audio via the internet to be matched and accurately synchronised with a user video recording at any time after the event.
[0253] The audio received at the user device 9 from the mixing console 4 via the broadcast unit 8 can be stored separately (or be otherwise separable) from the user device microphone-captured audio. A user can therefore listen to the received audio, the microphone-captured (transduced) audio, or a combination of both at user-adjustable relative volumes.
[0254] In certain embodiments, the application 10 provides functionality for adjusting various attributes of the sound, such as mixing and equalising the sound, adjusting the relative volumes of instruments, vocals, audio captured by the user video device microphone(s) and received audio. A virtual mixing console with graphic equaliser display (not shown) having sliders (faders) and other controls may be presented via a user interface such as the screen of the user device 9. The user's personalised media mix can be combined with the captured video and saved in memory and/or uploaded to social media. This function also provides customisable combination of user-generated video with high quality received video from the video module 17.
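The user-adjustable blend of received and microphone-captured audio can be sketched as a simple per-sample mix with two gain controls, analogous to the faders on the virtual mixing console. This is a pure-Python illustration under the assumption that both streams are normalised float samples in [-1, 1]; a real application would use a DSP library.

```python
def mix(captured, received, captured_gain=0.3, received_gain=1.0):
    """Blend microphone-captured samples with the received broadcast
    audio at user-adjustable relative volumes, clipping to [-1, 1].
    A pure-Python sketch; sample lists are assumed pre-aligned."""
    n = min(len(captured), len(received))
    out = []
    for i in range(n):
        s = captured[i] * captured_gain + received[i] * received_gain
        out.append(max(-1.0, min(1.0, s)))
    return out


# Equal-weight blend of two short illustrative sample runs
mixed = mix([0.5, -0.5], [0.5, 0.5], captured_gain=0.5, received_gain=0.5)
```

Setting `captured_gain` to zero reproduces the pure mixing-desk feed; raising it reintroduces crowd ambience from the device microphone.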
[0255] All recordings are accessible via a central library, displayed as a dynamic list; additional recordings/data are loaded as the user scrolls. The user will be able to access a "cross-fader," which will enable them to slide between the recorded audio and the matched high-quality sound. The high-quality, matched audio may be the default sound for every video recording. When playing back a recording the user can access an equalizer (EQ), enabling them to adjust the bass, mids and treble of a recording. The EQ settings will be saved to the user's library per recording and will be adjustable at any time during playback. Within the CMS it will be possible to view usage statistics through the Dashboard module. Data will be collected by the platform and visible through the Dashboard and may include: Device Type, Operating System (iOS/Android), Active/Total User Numbers, Average Recordings, Average Video Duration, Popular Artists, Popular Venues, and Streams per Show/Venue/Artist.
[0256] An embodiment of the method of the invention is illustrated in
[0257]
[0258] In an embodiment illustrated in
[0259] At a step 605, timestamps from the application on the user device and the broadcast device are matched, to generate an audio file that matches user video start and end timestamps. To ensure an accurate synchronisation, both the App (user device) and the audio broadcast device request the current timestamp from the application server at regular intervals.
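One simple way the periodic timestamp requests could be turned into a local clock correction is an NTP-style offset estimate: assume the server's reply was generated midway through the round trip. This is a sketch of that assumption, not necessarily the scheme the system uses.

```python
def clock_offset(t_send: float, server_ts: float, t_recv: float) -> float:
    """Estimate the local clock's offset from the server clock using
    one timestamp request. Assumes the server reply was generated at
    the midpoint of the round trip (simplified NTP-style estimate).

    t_send    -- local time when the request was sent
    server_ts -- timestamp returned by the application server
    t_recv    -- local time when the reply arrived
    """
    midpoint = (t_send + t_recv) / 2.0
    return server_ts - midpoint


# Request sent at 100.0 s local, server reported 105.2 s, reply
# arrived at 100.4 s local: the local clock is ~5 s behind.
offset = clock_offset(100.0, 105.2, 100.4)
```

Averaging the offset over several requests would smooth out network jitter before timestamps are attached to recordings.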
[0260] In certain embodiments, audio to video matching is performed server-side. The server maintains a log of the physical location of the broadcast device(s) 8. This may be using device unique ID, manual log of location and/or GPS/assisted GPS data from the broadcast device. The broadcast device(s') location is matched with the user's location (through the App) in order to place the user at a particular venue/show/stage area. At large festival type events, where users may be moving around within a large area, this provides the advantage that a user can be matched to a particular performance at one of several stages by user proximity to a particular broadcast device or devices and/or by user network connection to a particular WAP. Furthermore, the broadcast device 8 may be easily disconnected/unplugged from an audio workstation or mixing console and utilised at another stage area if there is a change of location for a performance or change in schedule, etc.
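Where GPS data is available, the server-side matching of a user to the nearest broadcast device can be sketched as a great-circle distance comparison. The unit identifiers and coordinates below are illustrative; the specification does not mandate this particular formula.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def nearest_unit(user_pos, units):
    """Return the id of the broadcast unit closest to the user.
    `units` maps unit id -> (lat, lon); ids are illustrative."""
    return min(units, key=lambda uid: haversine_m(*user_pos, *units[uid]))


# Two hypothetical stage areas at a festival site
units = {"stage-A": (51.5007, -0.1246), "stage-B": (51.5055, -0.0754)}
closest = nearest_unit((51.5009, -0.1240), units)  # user beside stage A
```

In practice this distance check could be combined with (or replaced by) the user's WAP association, which already ties the device to one stage area.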
[0261] Once the system has determined the authorised user is at an authorised performance, it checks for any audio matching the provided timestamps. The system takes the start time of the video file and checks that it falls within the start and end times of the High Definition audio file. If there is a match, the system generates high quality audio soundtrack at the video start and end times (step 606) and provides this to the user (step 607).
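The containment check and extraction described at steps 606-607 amount to testing whether the video's start time falls inside the HD audio file's span and, if so, computing an offset and duration into that file. A minimal sketch, with all times as epoch seconds:

```python
def match_and_slice(video_start, video_end, audio_start, audio_end):
    """If the video's start time falls within the HD audio file's span,
    return (offset, duration) into the audio file for the matching
    soundtrack; otherwise return None. Times are epoch seconds."""
    if not (audio_start <= video_start <= audio_end):
        return None
    offset = video_start - audio_start
    duration = min(video_end, audio_end) - video_start
    return offset, duration


# A 30-second video starting 120 s into a 3600-second HD recording
result = match_and_slice(1120.0, 1150.0, 1000.0, 4600.0)
```

The `min(video_end, audio_end)` term handles a video that runs past the end of the recording, so the extracted soundtrack never exceeds the available audio.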
[0262] When the audio has been successfully matched, the user will also be able to play back user video with the high quality audio through the App. The user will be able to fade between the two audio streams—their own from their original audio recording with user video and the high quality audio from the broadcast unit. The system also generates a copy of the user's video with the audio replaced with the high quality audio from the mixing desk/broadcast unit adapted for sharing on social media platforms.
[0263] The broadcast unit 8 comprises software for listening, detecting and recording audio received via the broadcast unit audio input(s). The broadcast unit automatically loads/runs all required software on boot, enabling an audio engineer to simply plug it in and turn it on.
[0264] Each broadcast unit will have a unique identifier (e.g., Serial Number) assigned to it, which is used to associate each broadcast unit to a particular venue/performance and/or physical location. The unique ID also provides functionality to track usage (e.g., number of shows recorded at a particular location) and to prevent unauthorised devices from connecting to the application servers.
[0265] Where the system is utilising the venue network and a venue is unable to guarantee the broadcast unit access to an active internet connection, the software will maintain a queue of all recently recorded audio in order to keep track of audio that has been recorded but not yet uploaded. When the broadcast unit has access to the internet, the software will process the queue and upload it to the remote (or cloud based) application server.
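The queue-then-upload behaviour can be sketched as a small first-in, first-out structure that holds recordings while the unit is offline and flushes them when a connection appears. The class and method names are illustrative; the upload function is injected so the sketch stays self-contained.

```python
from collections import deque


class UploadQueue:
    """Hold recently recorded audio while the broadcast unit is
    offline; upload oldest-first once a connection is available."""

    def __init__(self, upload_fn):
        self._queue = deque()
        self._upload = upload_fn  # e.g. a call to the application server

    def add(self, recording):
        """Track a recording that has not yet been uploaded."""
        self._queue.append(recording)

    def process(self, online: bool):
        """Upload everything queued, oldest first, if online."""
        uploaded = []
        while online and self._queue:
            item = self._queue.popleft()
            self._upload(item)
            uploaded.append(item)
        return uploaded


sent = []
q = UploadQueue(sent.append)
q.add("show-1.wav")
q.add("show-2.wav")
q.process(online=False)        # nothing leaves the queue while offline
done = q.process(online=True)  # both uploads go through, in order
```

Recordings are only removed from the queue after the upload call returns, so a dropped connection mid-flush leaves the remainder safely queued.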
[0266]
[0267] When a sound from the performance matches or exceeds this threshold, the broadcast unit will generate a timestamp and start recording a higher quality feed from the audio input at 705. This recording will continue until the broadcast unit detects no sound for around 5 minutes, after which the high quality audio recording will be saved to a local storage queue 706 ready for upload to the server at 707 (which may be a remote/cloud based server).
[0268] As shown at step 702, if the broadcast unit 8 has an active internet connection, the high quality audio will be automatically processed 703 and uploaded to the server 707 and removed/deleted from the broadcast unit to give capacity for future recordings. If the broadcast unit does not have an active internet connection, the high quality audio will be added to an upload queue 706 until an active internet connection is available and upload can commence at 707.
[0269] Once a high quality recording has finished, the broadcast unit will begin listening for a sound again, ready to record.
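The listen/record/save cycle at steps 701-707 can be modelled as a small state machine: recording starts when the input level meets the threshold and stops after a sustained silent period, at which point the take joins the upload queue and the unit listens again. The sketch below feeds one level reading per time step and uses a short silence limit for illustration (the description uses around 5 minutes).

```python
class AutoRecorder:
    """Start recording when the input level meets a threshold; stop
    and queue the take after a sustained silent period, then resume
    listening. Threshold and limit values here are illustrative."""

    def __init__(self, threshold=0.1, silence_limit=300):
        self.threshold = threshold        # minimum level counted as sound
        self.silence_limit = silence_limit  # silent steps before stopping
        self.recording = False
        self.silent_for = 0
        self.queued_takes = 0             # recordings saved for upload

    def feed(self, level: float):
        """Process one input-level reading (one per time step)."""
        if not self.recording:
            if level >= self.threshold:
                self.recording = True     # sound detected: start take
                self.silent_for = 0
        elif level >= self.threshold:
            self.silent_for = 0           # still hearing the performance
        else:
            self.silent_for += 1
            if self.silent_for >= self.silence_limit:
                self.recording = False    # take ends, joins upload queue
                self.queued_takes += 1


rec = AutoRecorder(threshold=0.1, silence_limit=3)  # short limit for demo
for lvl in [0.0, 0.5, 0.4, 0.0, 0.0, 0.0, 0.0]:
    rec.feed(lvl)
```

After the silent run the unit is back in its listening state, ready for the next performance, matching the behaviour described at paragraph [0269].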
[0270] At large events, the system may utilise a plurality of broadcast units at predetermined locations around a venue. The broadcast units may be in communication via the network in order to distribute load or storage across the plurality of broadcast units.
[0271] It will be appreciated that embodiments of the invention may be implemented in hardware, one or more computer programs tangibly stored on computer-readable media, firmware, or any combination thereof. The methods described may be implemented in one or more computer programs executing on, or executable by, a programmable computer, including any combination of any number of the following: a processor, a storage medium readable and/or writable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), an input device, and an output device. Any computer program within the scope of the claims below may be implemented in any programming language and may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor.
[0272] Method steps of the invention may be performed by one or more processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, e.g., general- and special-purpose microprocessors. In general, the processor receives (reads) instructions and data from a memory (such as a read-only memory and/or a random access memory) and writes (stores) instructions and data to the memory.