Methods for collecting and managing public music performance royalties and royalty payouts
11262976 · 2022-03-01
CPC classification: H04S2400/15; G06Q20/40; H04S7/301; G06F21/00; H04L67/12; G06F3/165; G06F21/10; H04B1/0003
International classification: G06Q20/40; G06F21/00
Abstract
Methods and apparatus, including software, for collecting and managing public music performance royalties and royalty payouts are described. On the listener side, song/audio fingerprint data is collected and transmitted to the rights owner side, which verifies the song/audio fingerprint data, calculates royalty payments, and, in some cases, automates the royalty payments. Public music performance royalty payments are based on the song/audio fingerprint data collected by listeners/clients, as well as on business logic servers.
Claims
1. A system for collecting and managing public music performance royalties and royalty payouts, the system comprising: a device associated with a user; a database comprising information associated with a musical work; a business logic server communicatively coupled to a rules, rights and policy server; and a rights owner server communicatively coupled to the business logic server and the rules, rights and policy server, the business logic server being configured to: receive data units associated with the musical work from the device associated with the user; receive an authorization from the user to share the data units with a third party administrator; map the data units to the database to identify a rights owner of the data units; verify the rights owner; map the data units to the rules, rights, and policy server comprising copyright laws of a territory; verify compliance with the copyright laws of the territory; and transmit a verification message to the rights owner server to facilitate a royalty payment to the rights owner and a copyright holder.
2. The system of claim 1, wherein each of the data units is selected from the group consisting of: song information, information regarding whether a song was sung, information regarding whether a song was played live, information regarding whether a song was recorded, information regarding a time of the data unit, and information regarding a location of the data unit.
3. The system of claim 1, wherein the information of the database is selected from the group consisting of: audio fingerprint recognition information, licensing grant information, performance information, song catalog information, song ownership information, and a location or a jurisdiction associated with the royalty payment for the data units.
4. The system of claim 3, wherein the information of the database comprises the performance information, and wherein the business logic server is further configured to: map the data units to the performance information to identify a non-musical entity associated with the data units, wherein the non-musical entity is a venue commercializing musical works; verify the non-musical entity; and transmit another verification message to the rights owner server to facilitate the royalty payment to the non-musical entity.
5. The system of claim 1, wherein the device associated with the user comprises a smart device, a wearable device, and/or an IoT device.
6. The system of claim 1, wherein the verification of the rights owner comprises verification by the third party administrator.
7. The system of claim 1, wherein the rights owner is selected from the group consisting of: a songwriter, a lyricist, a composer, a musical company, and a publisher of the musical work, and wherein the copyright holder is a creator of the musical work or an assigned entity.
Description
DESCRIPTION OF THE PREFERRED EMBODIMENTS
(13) The preferred embodiments of the present invention will now be described with reference to the drawings. Identical elements in the various figures are identified with the same reference numerals.
(14) Reference will now be made in detail to each embodiment of the present invention. Such embodiments are provided by way of explanation of the present invention, which is not intended to be limited thereto. In fact, those of ordinary skill in the art may appreciate upon reading the present specification and viewing the present drawings that various modifications and variations can be made thereto.
(15) As a threshold matter, it should be noted that whenever the phrases “microphone” or “microphone-equipped” are used, it is intended to mean any device that is capable of detecting sound, not merely microphones. For example, a high-performance low frequency antenna connected to a software-defined radio may be used to input sound observations into the system, or a piezoelectric diaphragm may be used to measure the vibrations that correspond to a given sound. These examples are provided to give greater clarity as to how the term “microphone” should be interpreted, and are not to be construed as limiting.
(16) The system of the present invention operates by integrating clusters of various computing devices and wearable computers with sound management techniques and methods so that various sound “fingerprints” can be developed and used to visualize how sound is being perceived in micro-areas within a larger venue. In various embodiments, the system of the present invention can be integrated into an individual's home, vehicle audio system, concert venues, and other locations where sound is played. In addition, the system's components allow for the present invention to be scaled to accommodate sound management and monitoring control within the largest of venues such as stadiums and other sports arenas.
(17) Because the devices integrated into the system can sense the frequency and magnitude of audio signals, a sound or audio fingerprint (summary) can be generated using deterministic methods. These fingerprints are then communicated to an audio control source and can subsequently be processed and used to communicate with external applications and resources such as third-party sound databases. The purpose of this system, however, is broader than fingerprinting alone: in addition to its sound fingerprinting ability, the present invention is also capable of utilizing a series of methods to sense and control audio output in various venues.
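By way of illustration only, the following is a minimal sketch of one deterministic fingerprinting approach consistent with the summary described above: frame the sensed audio, record the dominant frequency of each frame, and hash the resulting sequence. The frame size, hop, and hash choice are assumptions for illustration, not details taken from the specification.

```python
import hashlib
import numpy as np

def audio_fingerprint(samples: np.ndarray, sample_rate: int,
                      frame_size: int = 2048, hop: int = 1024) -> str:
    """Deterministic summary: hash the sequence of dominant
    frequencies (Hz) observed across successive frames."""
    peaks = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size] * np.hanning(frame_size)
        spectrum = np.abs(np.fft.rfft(frame))
        bin_index = int(np.argmax(spectrum))
        peaks.append(round(bin_index * sample_rate / frame_size))
    return hashlib.sha256(str(peaks).encode("utf-8")).hexdigest()[:16]

# Example: fingerprint one second of a 440 Hz test tone
sr = 44100
t = np.arange(sr) / sr
print(audio_fingerprint(np.sin(2 * np.pi * 440 * t), sr))
```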
(18) In an alternative embodiment, the present invention is located in a train or airport station that has an intercom system that functions poorly when noisy crowds are present. If an audio control source within these facilities is able to autonomously collect audio data via a series of integrated devices, then with the present invention, the same audio control source can adjust system outputs accordingly in order to make important intercom announcements intelligible. In yet another embodiment, a user can enter EQ parameters in their integrated computing device to ensure that both the audio perceived by them and the audio perceived by their device are in accordance with some predetermined parameters/settings. While many short-range wireless technologies can be used with the present invention, preferably one or more of the following technologies will be used: ANT+, Bluetooth, Bluetooth Low Energy (versions 4.1, 4.2, and 5.0), cellular, IEEE 802.15.4, IEEE 802.22, 802.11ax (i.e. Wi-Fi 6), 802.11a/b/g/n/ac/ax, 802.15.4-2006, ISA 100a, Infrared, ISM (band), NFC, RFID, WPAN, UWB, Wi-Fi, Wireless HART, Wireless HD/USB, ZigBee, or Z-Wave.
(19) In yet another preferred embodiment, various in-ear systems may be integrated into the present invention. Software-defined and/or cognitive-defined in-ear transceivers can be used to wirelessly communicate with an audio control source; thus, the output of such an in-ear monitor can be autonomously adjusted after sensing audio output. A given output can be adjusted according to what is sensed within a specified location or what is sensed at external clusters. Similar to software-defined and/or cognitive-defined in-ear transceivers, an in-ear monitor system for use with the present invention will preferably comprise hardware such as earphones, at least one body pack receiver, at least one mixer, and at least one transmitter. These functions can also be adjusted and controlled via the audio control source of the present invention.
(20) According to an embodiment, the functions of the present invention include sensing and isolating frequency bands associated with musical instruments/human voices in the following order: midrange, highs, and lows. According to an embodiment, the functions further include separating like frequencies (panning). According to an embodiment, the functions additionally include balancing the volume, controlling the dynamic range of the frequencies sensed (compression), performing subtractive and additive equalization, and/or adding audio effects to provide additional depth and texture.
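As one possible realization of the band isolation described above, the following sketch uses simple FFT masking (the masking approach, the function name, and the example band edges, which follow Table 1 below, are assumptions for illustration rather than the specification's own filter design):

```python
import numpy as np

def isolate_band(samples: np.ndarray, sample_rate: int,
                 low_hz: float, high_hz: float) -> np.ndarray:
    """Keep only spectral content between low_hz and high_hz."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=len(samples))

# Isolate bands in the order named above: midrange, highs, then lows
sr = 44100
t = np.arange(sr) / sr
mix = (np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
       + np.sin(2 * np.pi * 8000 * t))
mids = isolate_band(mix, sr, 800, 2500)
highs = isolate_band(mix, sr, 5000, 10000)
lows = isolate_band(mix, sr, 40, 160)
```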
(21) Loud noises can often lead to stress and hearing loss. For example, certain frequencies and volumes can cause stress in pets, and loud music and other forms of loud sounds have put approximately 1.1 billion young people at risk of suffering from hearing loss. Furthermore, military veterans are 30% more likely to suffer from severe hearing loss than non-veterans. In fact, according to the DoD's Hearing Center for Excellence (HCE), hearing loss is the most-widespread injury among returning veterans, driving hearing loss payments to exceed $2 billion in 2016. The present invention provides for an interdisciplinary and technologically advanced approach to hearing loss prevention.
(22) It is important to note that noise pollution not only produces negative health outcomes for humans, but also can produce negative outcomes for pets. Loud noises and obtrusive, artificial light negatively affect pets such as cats and dogs, and can eventually lead to abnormal behaviors, like excessive whining, trembling, barking and panting. These behaviors are a result of the pets trying to cope with the stress tied to phenomena within their environment, and if left unchecked, can cause panic disorders such as, e.g., separation anxiety, which is not healthy for pets or their owners. It is therefore an object of the present invention to provide a method wherein at least one sound and/or light sensing device can be affixed or integrated into a pet wearable (e.g. dog collar).
(23) Hearing loss can be considered an inevitable cost of military exercises and war. However, real-time alerts using mobile devices create an opportunity to implement preventative measures, ultimately reducing hazardous exposure time and thus injury. Study considerations include data sets, hearing loss incidents among veterans (on the rise), current preventative measures, gear, and equipment such as jet engines and other inherently noisy machinery.
(24) In summary, various embodiments of the present invention respond to the DoD's commitment to reduce the number of military personnel who suffer hearing loss injuries by 1) analyzing hazardous sounds in real time and 2) alerting service members using wearable mobile devices (a new preventative technique).
(25) According to an embodiment, the present invention provides for a mobile cluster-based apparatus that analyzes, reports, and controls outputs based on a range of inputs over a swath of frequency bands, with distinct applications including sound output control, hazardous millimeter-wave, blue light or RF detection and reporting, and ultrasonic and infrasonic wave detection and reporting. In a blue light sensing application, a wearable in close proximity to a user's retina (e.g. located on a collar of a smart jacket) can measure prolonged blue light retina exposure and report the issue back to the user.
(26) According to an embodiment, the apparatus is configurable and uses standard computing devices, such as wearables, tablets, and mobile phones, to measure various frequency bands across multiple points, allowing a single user to visualize and adjust sound output, and in some cases, detect and report hazardous signals.
(27) Each year, sound companies spend billions of dollars on audio technologies and audio research to find new ways to improve audio quality in performance settings. Proposed is an apparatus and method that creatively tackles the issue of poor audio quality and sound perception across various spaces by integrating consumer-based mobile devices, wearable computers, and sound management systems. The ubiquitous computing devices in this method and apparatus sense soundwaves, associate sensed audio levels with specific clusters (locations), predict whether or not an audio-related issue is likely to occur within a specific cluster (for instance, whether an echo is likely to occur), and adjust audio intensity (and related EQs) accordingly to improve audio output quality.
(28) Key features of the Mobile Cluster-Based Audio Adjusting Method and Apparatus include:
User/listener-based sound management and control.
A scalable platform that can incorporate future technology; that is, new functionalities can be added because the method and apparatus is designed to seamlessly integrate additional components including, but not limited to, software applications such as a “sound preference” application that sets user-based sound perception settings on a mobile device or wearable computer.
Integration with existing audio hardware and software, such as in-ear systems, mixer boards, and other related audio consoles.
Autonomous audio sensing.
Configurable, manufacturable, and sellable across different industries (e.g. automobile or audio electronics industries).
Usable in sound fingerprint and music publishing/performance applications (e.g. in a performance venue, fingerprint data can be sent directly to music publishing entities from the described clusters).
Interfaces with various communication offerings such as e-mail, SMS, and visual screens (for instance, communicative updates can be sent with sensed audio measurements; a specific example: an SMS that reads “too loud in section A”/cluster A).
Support for a fixed or unfixed number of “sensing units.”
(31) Assuming that the sound field at a listening position combines the direct field of the source with the reverberant field of the room, the sound pressure level can be estimated as:
(32) SPL = SWL + 10 log10(Q_θ/(4πr²) + 4/R_C)
(33) Where:
(34) SPL = sound pressure level (dB)
SWL = sound power level = 10 log10(W/W_ref), where W is the total sound power radiated from a source with respect to a reference power W_ref (dBW re 10⁻¹² Watts)
(35) r = distance from the source (m)
(36) Q_θ = directivity factor of the source in the direction of r
(37) S = total surface area of the room (m²)
(38) α_av = average absorption coefficient of the room
(39) R_C = room constant = S·α_av/(1 − α_av)
(40) Over time, each computing device in a cluster accumulates sensed audio measurements from which these quantities can be estimated and refined.
(41) It is important to note that in any given indoor environment, R_C, α_av, and S can be predetermined and made available to each computing device, approximated, or deemed negligible. Also note that each computing device in a cluster can share these values with the other devices and with the audio control source.
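By way of illustration only, a worked numeric example of the relationship above; the inputs (source power, distance, directivity, room surface, and absorption) are assumed values, not figures from the specification:

```python
import math

def sound_pressure_level(swl_db: float, r_m: float, q_theta: float,
                         surface_m2: float, alpha_av: float) -> float:
    """SPL = SWL + 10*log10(Q_theta/(4*pi*r^2) + 4/R_C),
    with room constant R_C = S*alpha_av/(1 - alpha_av)."""
    room_constant = surface_m2 * alpha_av / (1.0 - alpha_av)
    return swl_db + 10.0 * math.log10(
        q_theta / (4.0 * math.pi * r_m ** 2) + 4.0 / room_constant)

# Assumed values: 100 dB source, 5 m away, Q = 2 (source against a
# wall), 600 m^2 of room surface, average absorption 0.3
print(round(sound_pressure_level(100.0, 5.0, 2.0, 600.0, 0.3), 1))  # ~83.4
```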
(42) Turning to FIG. 1, an example cluster-based configuration is depicted.
(43) The embodiment depicted here shows devices that sense audio signal energy within the confines of a single cluster and then sends data directly to an audio control unit and other clusters. Therefore, not only can these computing devices wirelessly share sensed data with each other, but, also, data can be shared with an audio control source 111 (for audio output management purposes) and other devices in other clusters. Depending on the audio signal energy sensed within a specific cluster(s), audio control source 111 adjusts any connected output devices in either a single cluster, or multiple clusters to ensure high quality/fidelity output.
(45) Referring now to FIG. 3, an example method for sensing, analyzing, and adjusting audio output is described.
(46) According to an embodiment, the present invention isolates and/or separates sounds within band, reports findings of those sounds to a cloud-based system for audio signal processing (if necessary), sends control commands to one or more commercial mixing consoles and/or audio control sources to alter the audio output (if necessary), and then communicates with apparatus devices to share and confirm sensed audio findings (if necessary). According to an embodiment, these sounds are associated with different frequencies and/or are associated with one or more instruments.
(47) At step 305, audio/noise is sensed by one or more audio sensing devices. According to an embodiment, the one or more sensing devices are microphones.
(48) At steps 310-315, the volume between the sensed audio is balanced. That is, one or more instruments and/or frequencies are identified and isolated from the sensed audio (at step 310), and the signal amplitude of each instrument is manipulated using a mixing console/audio source (at step 315). It is noted, however, that, at step 310, the identified sounds need not always be instruments. The sounds may be any suitable identifiable sounds, while maintaining the spirit of the present invention.
(49) According to an embodiment, the present system may sense different types of phenomena (e.g., it may sense audio using an audio transducer such as a microphone, it may include a smartwatch and/or other similar device that may be able to sense ultrasonic waves using an ultrasonic transducer, and/or the system may incorporate one or more various suitable types of transducers). According to an embodiment, the system may be configured to sense environmental phenomena outside of the acoustic frequency range by using a variety of transducers. In those cases, the underlying functionality of the system generally remains the same, regardless of the input phenomena sensed. The system may measure the intensity of an acoustic wave, ultrasonic wave, infrasonic wave, and/or any other suitable waves.
(50) According to an embodiment, the system may incorporate various input/output functions/details, such as those shown in Table 1. According to an embodiment, the system is configured to sense, analyze, and/or control audio outputs.
(51) TABLE 1
System input (network interface configured to sense audible sounds via mic or comparable audio sensing transducer):
20-40 Hz Sub Bass (Piano, Synthesizer, Strings)
40-160 Hz Bass Band (Drums, Strings, Winds, Vocals, Piano, Synthesizer)
160-300 Hz Upper Bass Band (Drums, Strings, Winds, Vocals, Piano, Synthesizer)
300-800 Hz Low-Mid Band (Drums, Strings, Winds, Vocals, Piano, Synthesizer)
800-2.5 kHz Mid-Range Band (Drums, Strings, Winds, Vocals, Piano, Synthesizer)
2.5-5 kHz Upper Mid Band (Drums, Strings, Winds, Vocals, Piano, Synthesizer)
5-10 kHz High Frequency Band (Drums, including Cymbals, Synthesizer)
10-20 kHz Ultra-High Freq Bands (Hi-Hat, Cymbals, Hiss)
System function: Apparatus will isolate/separate sounds within band, report findings to cloud-based system for audio signal processing (if necessary), send control commands to commercial mixing console and/or audio control source to alter audio output (if necessary), and communicate with apparatus devices to share and confirm sensed audio findings (if necessary).
Output (network interface configured to control mixing console(s) and/or audio control source(s) via physical or SDR-based transceiver(s)):
kHz: 125/134
MHz: 13.56/600/800/850/900/1700/1800/1900/2100/2200/L700/U700/2300/2400/2500/2700/3500/5200/5700/whitespaces between 54 and 860
GHz: 3.6/4.9/5/5.9/24 to 300
300 GHz to 430 THz
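The acoustic input bands of Table 1 lend themselves to a simple lookup structure; a sketch follows (band edges transcribed from Table 1; the function name and fallback string are assumptions):

```python
# Acoustic bands transcribed from Table 1: (low Hz, high Hz, name)
BANDS = [
    (20, 40, "Sub Bass"),
    (40, 160, "Bass Band"),
    (160, 300, "Upper Bass Band"),
    (300, 800, "Low-Mid Band"),
    (800, 2500, "Mid-Range Band"),
    (2500, 5000, "Upper Mid Band"),
    (5000, 10000, "High Frequency Band"),
    (10000, 20000, "Ultra-High Freq Bands"),
]

def band_name(frequency_hz: float) -> str:
    """Map a detected frequency to its Table 1 band, if any."""
    for low, high, name in BANDS:
        if low <= frequency_hz < high:
            return name
    return "outside acoustic range"

print(band_name(440))   # -> Low-Mid Band
```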
(52) It is also noted that the present invention may further have implications in sensing and analyzing millimeter waves, which the human ear cannot hear. Higher-frequency millimeter-waves can possibly have adverse effects on human health. According to an embodiment, the present system can (as shown in Table 2), in real-time, detect and report harmful, high-energy level millimeter waves, which are included in many 5G deployment plans.
(53) TABLE 2
System input (network interface configured to sense millimeter-waves via a mmWave transducer):
24 to 300 GHz (identify and measure millimeter-wave characteristics)
System function: Apparatus will detect, analyze, measure and/or report harmful millimeter-waves across several environments.
Output (network interface configured to report/share data via physical or SDR-based transceiver(s)):
kHz: 125/134
MHz: 13.56/600/800/850/900/1700/1800/1900/2100/2200/L700/U700/2300/2400/2500/2700/3500/5200/5700/whitespaces between 54 and 860
GHz: 3.6/4.9/5/5.9/24 to 300
300 GHz to 430 THz
(54) Weaponized infrasonic and ultrasonic devices with highly directional energy transmissions can produce both psychological and physical effects on humans. In addition, blue light (short wavelength) emitted from displays is harmful to the retina. For this reason, a light sensing transducer is a part of the apparatus described herein. According to an embodiment, the apparatus described can (as shown in Table 3), in real-time, detect and report harmful infrasonic and ultrasonic devices in weaponized scenarios.
(55) TABLE 3
System input (network interface configured to sense infrasonic, ultrasonic, and/or light waves via an ultrasonic, infrasonic, or electro-optical transducer):
18.9 Hz, 0.3 Hz, 7 Hz and 9 Hz
20 to 200 kHz
700 kHz to 3.6 MHz
400-770 THz
(identify and measure ultrasonic, infrasonic, or visible wave characteristics)
System function: Apparatus will detect, analyze, measure and/or report on harmful ultrasonic or infrasonic waves across several environments.
Output (network interface configured to report/share data via physical or SDR-based transceiver(s)):
kHz: 125/134
MHz: 13.56/600/800/850/900/1700/1800/1900/2100/2200/L700/U700/2300/2400/2500/2700/3500/5200/5700/whitespaces between 54 and 860
GHz: 3.6/4.9/5/5.9/24 to 300
300 GHz to 430 THz
(56) At step 320, it is determined whether the sensed audio includes any audio in frequencies that have been predetermined to be hazardous to human ears. According to an embodiment, if audio in the hazardous range has been detected, then one or more users are notified, at step 325. The notification may take the form of a visual notification, an audible notification, and/or any other suitable form of notification. It is noted, however, that, if automatically corrected, the user need not always be notified.
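A sketch of the step 320/325 check follows; the 85 dB threshold, the reference level, and the uncalibrated dB conversion are assumptions for illustration, since the specification leaves the hazardous ranges as predetermined values:

```python
import numpy as np

HAZARD_DB = 85.0   # assumed threshold; the real ranges are predetermined

def hazardous_frequencies(samples: np.ndarray, sample_rate: int,
                          ref: float = 1e-5) -> list:
    """Step 320: return frequencies whose (uncalibrated) estimated
    level exceeds HAZARD_DB."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = 20 * np.log10(np.maximum(spectrum, 1e-12) / ref)
    return [float(f) for f, db in zip(freqs, levels) if db > HAZARD_DB]

def notify_users(frequencies: list) -> None:
    """Step 325: visual/audible notification (print stands in here)."""
    if frequencies:
        print(f"Hazardous audio detected near {frequencies[:3]} Hz")

sr = 8000
t = np.arange(sr) / sr
notify_users(hazardous_frequencies(np.sin(2 * np.pi * 1000 * t), sr))
```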
(57) According to an embodiment, at step 330, the dynamic range of the sensed audio is controlled (compression or limiting) by sending audio data to a mixing console/audio source or cloud-based system that can identify and mitigate sudden peaks in a sensed audio stream to help sound(s) sit consistently in an audio mix (accomplished by removing sudden peaks). Altering the dynamic range may also be used to eliminate any audio in the predetermined hazardous range. At step 335, the audio is panned. That is, like frequencies in the sensed audio are separated.
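The peak mitigation of step 330 could be sketched as a simple block-wise limiter; the threshold and block size here are illustrative assumptions:

```python
import numpy as np

def limit_peaks(samples: np.ndarray, threshold: float = 0.8,
                block: int = 512) -> np.ndarray:
    """Scale down any block whose peak exceeds the threshold so
    sudden peaks sit consistently in the mix (simplified limiter)."""
    out = samples.astype(float).copy()
    for start in range(0, len(out), block):
        chunk = out[start:start + block]
        peak = np.max(np.abs(chunk)) if chunk.size else 0.0
        if peak > threshold:
            chunk *= threshold / peak   # in-place gain reduction
    return out

# A 1.5-amplitude spike is scaled so its block peaks near 0.8
spike = np.concatenate([np.zeros(600), np.full(8, 1.5), np.zeros(400)])
print(limit_peaks(spike).max())   # ~0.8
```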
(58) At step 340, effects that add depth and texture to audio outputs are added and, at step 345, equalization is added using subtractive and/or additive equalization techniques.
(59) According to an embodiment, at step 350, automation is generated that predicts environmental conditions based on sensed data (like echoes and audio wind steers) and, at step 355, volume changes and audio effects are autonomously programmed, accordingly.
(60) According to an embodiment, the present invention includes acoustic band applications. Consumer products, such as, e.g., wearables, smartphones, and other portable computing devices autonomously control sound output(s) in private spaces (e.g. cars and homes) and public spaces (e.g. transport stations and theater/concert venues). According to an embodiment, the present system senses audible sounds via a mic or comparable audio sensing transducer and isolates/separates sounds within certain bands, reports findings to cloud-based system(s) for audio signal processing, sends control commands to a commercial mixing console and/or audio control source to alter audio output, and communicates with cluster devices to share and confirm sensed audio findings. According to an embodiment, the present system outputs to control mixing console(s) and/or an audio control source(s) via physical or SDR-based transceiver(s).
(61) According to an embodiment, the present system senses and analyzes audio frequencies across clusters to adjust and control audio output and perceived sound at a given locale. In order to achieve high-quality sound and sound equalization of a sonic presentation, a sound system's audio output levels are autonomously adjusted via a central audio mixing source using intelligent tell-tale frequency characteristics gathered from clusters comprised of smart devices and/or wearable computers.
(62) According to an embodiment, the audio signal data obtained within clusters enables a system integrated mixing console to manage audio output based on detailed frequency descriptions of acoustic properties and characteristics across a venue, room, or vehicle. According to an embodiment, the present system incorporates a modular structure so that components can be added and expand as consumer needs grow.
(63) According to an embodiment, the present system provides for an apparatus that is configured to adjust and control audio output signal levels across multiple cluster locales using computing devices such as smartphones and/or wearable computers; a wireless transmission platform; transceivers—software-defined, cognitive-defined and/or hardware-defined; wireless microphones; in-ear monitors—software-defined, cognitive-defined and/or hardware-defined; and a central audio mixing source.
(64) According to an embodiment, the apparatus of the present invention may include, but is not limited to, the following functions:
Balancing the volume between sensed audio. For example, isolating instruments based on frequency and manipulating the signal amplitude of each instrument using a mixing console/audio source.
Controlling the dynamic range of the sensed audio (compression or limiting) by sending audio data to a mixing console/audio source or cloud-based system that can identify and mitigate sudden peaks in a sensed audio stream to help sounds sit consistently in an audio mix (accomplished by removing sudden peaks).
Panning.
Adding effects that add depth and texture to audio outputs.
Equalization using subtractive/additive equalization techniques.
Automation that 1) predicts environmental conditions based on sensed data (like echoes and audio wind steers) and 2) autonomously programs volume changes and audio effects accordingly.
(65) Referring again to the drawings, the audio sensed by each device is digitized for analysis and transmission.
(66) In a preferred embodiment, the sound sensing mechanisms (preferably, transducers) used within each “sensing” computer/device produce an output signal that is fed into the input of an ADC.
(68) As in method 200 of FIG. 2, the method of FIG. 4 begins with the device in a sleep mode (step 402). When a threshold breach is sensed, it is determined whether the device has a navigation unit (step 403).
(69) If the device does not have a navigation unit, the method moves to step 404, where a breach severity measurement is determined. Once the breach severity measurement is determined, the method moves to step 405, where it is determined whether there is an onset issue.
(70) If there is an onset issue, the method moves to step 406, in which any data and/or findings are reported and/or displayed. Once the data and/or findings are reported and/or displayed, the device returns to sleep mode, step 402.
(71) If there is not an onset issue, the method moves to step 407, wherein a time window is calculated at which any sensed data was determined to be unacceptable. Once this time window is calculated, the method moves to step 408, wherein breaches within the calculated time window are collected and/or analyzed. Once the breaches within the calculated time window are collected and/or analyzed, the method moves to step 409, wherein it is determined whether there were consistent breaches during the time window. If there were consistent breaches during the time window, the method moves to step 406. If there were not consistent breaches during the time window, the device goes back to sleep mode, step 402.
(72) If the device has a navigation unit, the method moves to step 410, wherein breach severity measurements with the device's location are determined. Once the breach severity measurements with the device's location are determined, the method moves to step 411, wherein it is determined whether the device's location at the time of the breach lessened the severity of the breach. If the device's location at the time of the breach did not lessen the severity, the method moves to step 405, wherein it is determined whether there is an onset issue. If the device's location at the time of the breach did lessen the severity, the method moves to step 412, wherein an analysis takes place in which location and machine learning insights are factored into the threshold breach calculations. The method then moves to step 413, where it is determined if the breach is still an issue. If the breach is still an issue, the method moves to step 405, wherein it is determined whether there is an onset issue. If the breach is not still an issue, the device goes back to sleep mode, step 402.
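The branching of method 400 (steps 402 through 413) condenses to the following control-flow sketch; each boolean parameter is a hypothetical stand-in for the corresponding device check:

```python
def handle_breach(has_nav: bool, location_lessened: bool,
                  still_issue: bool, onset: bool,
                  consistent: bool) -> str:
    """Condensed control flow of method 400 (steps 402-413)."""
    if has_nav:                        # steps 403/410: severity + location
        if location_lessened:          # step 411
            # step 412: factor in location and machine learning insights
            if not still_issue:        # step 413
                return "sleep (step 402)"
    # else step 404: breach severity measured without location
    if onset:                          # step 405
        return "report findings (step 406), then sleep (step 402)"
    # steps 407-408: collect breaches within the unacceptable window
    if consistent:                     # step 409
        return "report findings (step 406), then sleep (step 402)"
    return "sleep (step 402)"

print(handle_breach(has_nav=False, location_lessened=False,
                    still_issue=True, onset=False, consistent=True))
```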
(73) According to an embodiment, environmental measurements may be skewed depending on the device's location (e.g., in a bag, in a pocket, etc.). According to an embodiment, the location of the device is detected, and, in these cases, the system will either account for signal degradation in the measurement or disable environmental measurements based on predefined thresholds. According to an embodiment, smart devices (e.g., smartphones, etc.) will use an accelerometer and/or light sensor and/or a temperature sensor to detect whether or not the smart device is directly exposed to phenomena (i.e. whether or not the device is in a bag or pocket).
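A sketch of the exposure check described above; every threshold (light, movement, temperature) is an assumed value for illustration:

```python
def directly_exposed(lux: float, movement_g: float, temp_c: float) -> bool:
    """Guess whether the device is exposed to the environment
    (rather than in a bag or pocket). Thresholds are assumptions."""
    dark = lux < 5.0            # bags and pockets are typically dark
    warm = temp_c > 30.0        # body heat suggests a pocket
    still = movement_g < 0.05   # resting against the body or in a bag
    return not (dark and (warm or still))

def usable_measurement(spl_db: float, lux: float,
                       movement_g: float, temp_c: float):
    """Disable (or compensate) the measurement when covered."""
    if directly_exposed(lux, movement_g, temp_c):
        return spl_db
    return None   # alternatively, apply a degradation offset here

print(usable_measurement(78.0, lux=2.0, movement_g=0.01, temp_c=33.0))
```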
(74) The instant invention further describes methods for collecting and managing public music performance royalties and royalty payouts. On the listener side, song/audio fingerprint data is collected using the method and apparatus described in U.S. Pat. No. 10,127,005 and U.S. patent application Ser. No. 16/421,141, the contents of which are hereby fully incorporated by reference. On the rights owner side, verified song/audio data is received from the listener side and royalty payments are, in some cases, automated. Public performance royalty payments are based on data (e.g., the song/audio fingerprint data) collected by listeners/clients and on business logic servers.
(75) The instant invention further describes a system that facilitates and modernizes the way music performing rights royalties are earned, processed, and managed. The system includes IoT/smart devices (e.g., a first smart or wearable device 508A, a second smart or wearable device 508B, a third smart or wearable device 508C, and/or a fourth smart or wearable device 508D of FIG. 5).
(76) The example system also includes a song/sound fingerprint/data unit identifiable by a third party administrator. The third party administrator is provided information that details whether the song in a given environment is being sung, played live, or recorded. The third party administrator is also given information regarding a time and location of the data unit. The third party administrator autonomously issues royalty payments to the appropriate musical source and/or company. As defined herein, the “rights owner” refers to a songwriter, a lyricist, a composer, a musical company, and/or a publisher of a musical work. All payments are based on the data units received from the IoT/smart devices and performance rights information associated with the musical work. The third party administrator authorizes and issues the royalty payment based upon receiving the data units from the system. The IoT/smart devices, the song/sound fingerprint/data unit, and the system are further described in U.S. patent application Ser. No. 16/421,141 and U.S. Pat. No. 10,127,005, the contents of which are hereby fully incorporated by reference.
(77) As depicted in FIG. 5, an example system is organized into three tiers: a first tier 502, a second tier 504, and a third tier 506.
(78) The first tier 502 may include a first smart or wearable device 508A having an application 510 executable thereon. The first smart or wearable device 508A may be associated with a first client 512A (or user). In some examples, the first tier 502 may additionally include a second smart or wearable device 508B having the application 510 executable thereon, which may be associated with a second client 512B, and a third smart or wearable device 508C having the application 510 executable thereon, which may be associated with a third client 512C. Additionally, the first tier 502 may include a fourth smart or wearable device 508D having the application 510 executable thereon, which may be associated with a local administrator client 514, if present. In examples, the first smart or wearable device 508A, the second smart or wearable device 508B, the third smart or wearable device 508C, and/or the fourth smart or wearable device 508D may be an IoT device, a smart device, or a wearable device.
(79) The first smart or wearable device 508A may be configured to communicate with the second smart or wearable device 508B and/or the third smart or wearable device 508C, if present. The second smart or wearable device 508B and/or the third smart or wearable device 508C may communicate with the fourth smart or wearable device 508D, if present. In examples, the first smart or wearable device 508A of the first tier 502 is configured, via an input/output (I/O) socket 516, to communicate with a business logic server 518 of the second tier 504. The second tier 504 may also include a rules, rights, and policy server 520. The business logic server 518 is further configured to communicate with the rules, rights, and policy server 520.
(80) The business logic server 518 of the second tier 504 is configured to perform multiple processes, such as: retrieving musical acts and/or song information and/or audio or sound fingerprint data from rights owners and/or publishers; acting as a third party song repository and/or a custom-built song database; mapping collected data to rights owners; and/or verifying songs. The rules, rights, and policy server 520 may be associated with United States and/or foreign territories, and may be configured to store copyright laws and rules tied to royalties and royalty payouts across various territories.
(81) The third tier 506 may include a rights owner server 522, as well as a login 524 capability. The rights owner server 522 is configured to: combine data from the first tier 502, the second tier 504, and/or the third tier 506; process and appropriate royalty payments; distribute royalty payouts to financial institutions (e.g. banks or similar entities); and/or produce public performance royalty statements and/or reports.
(82) An example method executed by the business logic server 518 of FIG. 5 includes receiving data units associated with a musical work from a device (e.g., the first smart or wearable device 508A) associated with a user (e.g., the first client 512A).
(83) The method may further include receiving, by the business logic server 518, an authorization from the user (e.g., the first client 512A) to share the data units with a third party administrator. In some examples, the authorization from the user (e.g., the first client 512A) is an opt-in agreement. In response to receiving such authorization, the business logic server 518 is configured to map the received data units to a database comprising information to identify a rights owner of the data units. The information of the database may include audio or sound fingerprint recognition information, licensing grant information, performance information, song catalog information, song ownership information, and/or a location or a jurisdiction associated with a royalty payment for the data units, among other information.
(84) The method may then include verifying the rights owner. Verification of the rights owner may include mobile-to-mobile checks. In other examples, the verification may include verification by the third party administrator. The method may further include transmitting, by the business logic server 518, a verification message to the rights owner server 522 to facilitate a payment (e.g., a royalty payment) to the rights owner. The verification message is not limited to any format and may include textual, graphical, and/or audio data.
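The receive/authorize/map/verify/transmit sequence of paragraphs (82) through (84) might be sketched as follows; the in-memory dictionaries stand in for the database, the rules, rights, and policy server 520, and the rights owner server 522, and every field name here is an assumption for illustration:

```python
# Illustrative stand-ins for the database and territory rules
SONG_DB = {"fp:3f9a": {"work": "Example Song", "rights_owner": "Pub Co."}}
TERRITORY_RULES = {"US": {"public_performance_royalty": True}}

def process_data_unit(data_unit: dict, user_opt_in: bool):
    """Business-logic-server sketch: map the data unit, verify the
    rights owner and territory, and emit a verification message."""
    if not user_opt_in:                              # authorization
        return None
    record = SONG_DB.get(data_unit["fingerprint"])   # map to database
    if record is None:                               # unknown work
        return None
    rules = TERRITORY_RULES.get(data_unit["territory"], {})
    if not rules.get("public_performance_royalty"):  # compliance check
        return None
    return {          # verification message for the rights owner server
        "rights_owner": record["rights_owner"],
        "work": record["work"],
        "territory": data_unit["territory"],
        "facilitate_payment": True,
    }

print(process_data_unit({"fingerprint": "fp:3f9a", "territory": "US"},
                        user_opt_in=True))
```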
(85) In an example where the information of the database comprises the performance information, the method may further include: mapping, by the business logic server 518, the data units to the performance information to identify a non-musical entity associated with the data units. In examples, the non-musical entity may be a venue commercializing musical works. The method may further include: verifying the non-musical entity and transmitting, by the business logic server 518, another verification message to the rights owner server 522 to facilitate the payment (e.g., the royalty payment) to the non-musical entity.
(86) In a further example, the method may include: mapping, by the business logic server 518, the data units to the rules, rights, and policy server 520 comprising copyright laws of a territory, verifying compliance with the copyright laws of the territory, and transmitting, by the business logic server 518, another verification message to the rights owner server 522 to facilitate payment to a copyright holder. In examples, the copyright holder is a music producer, songwriter, recording artist and/or other rights owner or holder (e.g. publishers).
(87) An example system contemplated herein for collecting and managing public music performance royalties and royalty payouts includes numerous components, which may be depicted, at least, in FIG. 5: a device (e.g., the first smart or wearable device 508A) associated with a user (e.g., the first client 512A); a database comprising information associated with a musical work; the business logic server 518 communicatively coupled to the rules, rights, and policy server 520; and the rights owner server 522 communicatively coupled to the business logic server 518 and the rules, rights, and policy server 520.
(88) The business logic server 518 is configured to: receive the data units associated with the musical work from the device (e.g., the first smart or wearable device 508A) associated with the user (e.g., the first client 512A) and receive an authorization (e.g., an opt-in agreement) from the user (e.g., the first client 512A) to share the data units with a third party administrator. The business logic server 518 is further configured to: map the data units to the database to identify a rights owner of the data units and verify the rights owner.
(89) The business logic server 518 may also map the data units to the rules, rights, and policy server 520 comprising copyright laws of a territory and verify compliance with the copyright laws of the territory. The business logic server 518 may then transmit a verification message to the rights owner server 522 to facilitate the payment (e.g., the royalty payment) to the rights owner and a copyright holder (e.g., a publisher).
(90) In an example where the information of the database comprises the performance information, the business logic server 518 is further configured to: map the data units to the performance information to identify a non-musical entity associated with the data units (e.g., a venue commercializing musical works), verify the non-musical entity, and transmit another verification message to the rights owner server 522 to facilitate the payment (e.g., the royalty payment) to the non-musical entity.
(91) In examples, the payment to the rights owner, the copyright holder, and/or the non-musical entity are based on the data units, compliance with the copyright laws of the territory, and the performance rights associated with the musical work. In examples, the rights owner server 522 may also generate royalty statements or reports based on the royalty payment and/or location-based public performance activity reports.
(92) FIG. 6 depicts an example process for verifying collected audio or sound fingerprint data and processing the corresponding royalty payments.
(93) The method of FIG. 6 may include receiving audio or sound fingerprint data from a client (a process step 604) and assessing whether administrative data is associated with the audio or sound fingerprint data (a process step 606).
(94) The “YES” response to the process step 606 may lead to flagging the audio or sound fingerprint data with a “yes data” administrative flag; subsequent to this, a process step 640 may occur. The “NO” response to the process step 606 may lead to flagging the audio or sound fingerprint data with a “no data” administrative flag; subsequent to this, a process step 608 may occur.
(95) The process step 608 may include mapping the audio or sound fingerprint data to a database. The database may comprise information, such as: audio fingerprint recognition information, licensing grant information, performance information, song catalog information, song ownership information (including the rights owner of the audio or sound fingerprint data), and/or a location or a jurisdiction associated with the rights owner of the audio or sound fingerprint data for a royalty payment. A process step 610 may follow the process step 608, which includes determining the rights owner and the location or the jurisdiction associated with the rights owner of the audio or sound fingerprint data for the royalty payment from the information in the database.
(96) A process step 612 follows the process step 610, which includes applying one or more rules to the information. The rules that may be applied are non-exhaustive. A process step 614 follows the process step 612, which includes assessing whether the rights owner and/or the location or the jurisdiction allow for automated royalties. A first response to the process step 614 is a “NO” response, which leads the process to a process step 616. The process step 616 includes sharing and/or reporting such information to the rights owner. A process step 618 follows the process step 616 to end the process.
(97) A second response to the process step 614 is a “YES” response, which leads the process to a process step 620. The process step 620 includes assessing whether the rights owner is capable of processing their own royalties. A first response to the process step 620 includes a “YES” response, which leads to a process step 622. The process step 622 includes transmitting the data to a third party for processing. A process step 624 follows the process step 622 to end the process.
(98) A second response to the process step 620 includes a “NO” response, which leads to a process step 626. The process step 626 includes retrieving royalty rates from a third party administrator or the rights owner. A process step 628 follows the process step 626 and includes performing royalty payment calculations for the rights owner. The royalty payment calculation methods are non-exhaustive. A process step 630 follows the process step 628 and includes validating the identification of the rights owner, the location and/or the jurisdiction, the royalty payment calculation, and/or previous information. In examples, the validation for this step may occur for each song and/or for each musical act.
(99) A process step 632 follows the process step 630 and includes submitting a ticket or a report to a financial entity (e.g., a bank) for a payment to the rights owner. The payment may include a royalty payment. However, the payment is not limited to this example. The process step 632 is followed by a process step 634 that includes transmitting the payment (e.g., the royalty payment) to the rights owner. A process step 636 follows the process step 634 and ends the process.
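Process steps 626 through 634 reduce to a small calculation-and-ticket sketch; the per-play rate table, the flat-rate formula, and the ticket fields are all assumptions for illustration:

```python
# Assumed per-play rates retrieved per step 626
RATE_TABLE = {("Pub Co.", "US"): 0.12}   # dollars per verified play

def calculate_royalty(rights_owner: str, territory: str,
                      verified_plays: int) -> float:
    """Step 628: a simple flat per-play calculation (illustrative)."""
    rate = RATE_TABLE.get((rights_owner, territory))
    if rate is None:
        raise LookupError("no rate on file; retrieve per step 626")
    return round(rate * verified_plays, 2)

def submit_payment_ticket(rights_owner: str, amount: float) -> dict:
    """Steps 632-634: report to a financial entity, then pay."""
    # a real system would transmit this ticket to the bank here
    return {"payee": rights_owner, "amount": amount, "status": "submitted"}

amount = calculate_royalty("Pub Co.", "US", verified_plays=40)
print(submit_payment_ticket("Pub Co.", amount))   # 40 plays -> $4.80
```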
(100) In response to a “YES” to the process step 606, a process step 640 includes retrieving administrative data from a database. A process step 642 follows the process step 640 and includes assessing whether the audio or sound fingerprint data from the client correlates to the administrative data. A first response to the process step 642 is a “YES” response. The “YES” response to the process step 642 brings the process to the process step 608.
(101) A second response to the process step 642 is a “NO” response. The “NO” response to the process step 642 results in a process step 644. The process step 644 includes appending the audio or sound fingerprint data from the client with the administrative data. A process step 646 follows the process step 644 and includes assessing whether there are any irreversible discrepancies between the audio or sound fingerprint data from the client and the administrative data. A first response to the process step 646 is a “YES” response, which leads to a process step 648. The process step 648 includes transmitting any irreversible discrepancies between the audio or sound fingerprint data from the client and the administrative data to the rights owner. Subsequent to the process step 648, the process returns to the process step 604. A second response to the process step 646 is a “NO” response, which leads to the process step 608, where the process is continued.
(102) When introducing elements of the present disclosure or the embodiment(s) thereof, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. Similarly, the adjective “another,” when used to introduce an element, is intended to mean one or more elements. The terms “including” and “having” are intended to be inclusive such that there may be additional elements other than the listed elements.
(103) While the disclosure refers to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications will be appreciated by those skilled in the art to adapt a particular instrument, situation or material to the teachings of the disclosure without departing from the spirit thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiments disclosed.