VIDEO RECORDING AND EDITING SYSTEM
20220020396 · 2022-01-20
Inventors
CPC classification
H04N5/772
ELECTRICITY
International classification
Abstract
A video recording system includes a camera sensor, a controller in communication with the camera sensor, and a memory in communication with the controller. The memory includes a video recording application that causes the controller to: store video from the camera sensor; receive a user input to associate an enhanced time marker in the video; and generate a video clip from a subset of the stored video. The video clip begins at a video frame associated with a start time point and ends at a video frame associated with an end time point, and the start and end time points are dependent on the time point associated with the enhanced time marker.
Claims
1. A video recording system comprising: a camera sensor; a controller in communication with the camera sensor; a memory in communication with the controller, the memory including a video recording application that, when executed by the controller, causes the controller to: store video from the camera sensor; receive a user input to associate an enhanced time marker in the video; and generate a video clip from a subset of the stored video, the video clip beginning at a video frame associated with a start time point and ending at a video frame associated with an end time point, wherein the start and end time points are dependent on the time point associated with the enhanced time marker.
2. The video recording system of claim 1, wherein the step of storing video from the camera sensor comprises the step of: continuously storing video in a temporary or permanent file storage arrangement from the camera sensor.
3. The video recording system of claim 1, wherein the controller receives a plurality of user inputs to associate a plurality of enhanced time markers in the video, each user input associating a respective enhanced time marker, and wherein the controller is configured to generate a plurality of video clips from the subset of stored video.
4. The video recording system of claim 1, wherein the controller receives a first user input and a second user input associated with a first enhanced time marker and a second enhanced time marker, respectively, and wherein the video clip begins at a video frame associated with a start time point dependent on the time point associated with the first enhanced time marker and ends at a video frame associated with an end time point, wherein the end time point is dependent on the time point associated with the second enhanced time marker.
5. The video recording system of claim 1, wherein the step of receiving a user input comprises the steps of: providing a recording user interface including an enhanced time marker button; and receiving a user input via the enhanced time marker button, wherein the input associates an enhanced time marker with a time point in the video.
6. The video recording system of claim 1, wherein the step of receiving a user input comprises the step of receiving a voice command to associate an enhanced time marker with a time point in the video.
7. The video recording system of claim 1, wherein the start time point is a predetermined number of seconds prior to the time point associated with the enhanced time marker.
8. The video recording system of claim 1, wherein the end time point is one of a predetermined number of seconds after the time point associated with the enhanced time marker and the time point associated with the enhanced time marker.
9. The video recording system of claim 1, further comprising a database including user settings, wherein the user settings include parameters associated with the enhanced time marker button.
10. The video recording system of claim 9, wherein the parameters include defining a predetermined number of seconds prior to the time point associated with the enhanced time marker as the start time point.
11. The video recording system of claim 9, wherein the parameters include defining a dynamically generated number of seconds prior to the time point associated with the enhanced time marker as the start time point.
12. The video recording system of claim 11, wherein the dynamically generated number of seconds is based on artificial intelligence.
13. The video recording system of claim 9, wherein the parameters include defining a predetermined number of seconds after the time point associated with the enhanced time marker as the end time point.
14. The video recording system of claim 9, wherein the parameters include defining a dynamically generated number of seconds after the time point associated with the enhanced time marker as the end time point.
15. The video recording system of claim 14, wherein the dynamically generated number of seconds is based on artificial intelligence.
16. The video recording system of claim 1, further comprising a user device associated with the camera sensor, the controller, and the memory.
17. The video recording system of claim 16, further comprising a remote database, wherein the video is stored on the remote database.
18. The video recording system of claim 17, wherein the remote database is a memory of a further user device.
19. The video recording system of claim 17, wherein the video is stored in a temporary file storage arrangement on the remote database.
20. The video recording system of claim 17, further comprising a further device including: a further controller in communication with the remote database; and a further memory in communication with the further controller, the further memory including a further video recording application that, when executed by the further controller, causes the further controller to: provide a media gallery user interface including the temporary file storage arrangement from the remote database.
21. The video recording system of claim 20, wherein the further device includes a further camera sensor in communication with the further controller, and wherein the video recording application further causes the further controller to: store a further video from the further camera sensor; receive a further user input to associate a further enhanced time marker with a further time point in the further video; and generate a further video clip from a further subset of the further video, the further video clip beginning at a video frame associated with a further start time point and ending at a video frame associated with a further end time point, wherein the further start and further end time points are dependent on the further time point associated with the further enhanced time marker.
22. The video recording system of claim 21, wherein the step of storing the further video from the further camera sensor comprises the step of: continuously storing the further video in a further temporary file storage arrangement in one file or as a combination of file segments from the further camera sensor.
23. The video recording system of claim 21, wherein the further temporary file storage arrangement is stored on the remote database as a single file or as a combination of file segments.
24. The video recording system of claim 21, wherein the video recording application on the user device causes the controller to: provide a further media gallery user interface including the further video from the remote database.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0043] The drawing figures depict one or more implementations in accord with the present concepts, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements.
DETAILED DESCRIPTION OF THE DRAWINGS
[0066] The present application provides a video recording system 10 that enables the user to generate a video clip of an event, where the event occurs before the user provides user input that triggers the generation of the video clip. More specifically, the video recording system 10 permits users to retroactively create a video clip of a past event. The video recording system may be embodied in a video recording mobile application that may be run on mobile devices (such as iOS, Android, and Windows Mobile devices), personal computers, digital cameras (such as those produced by Nikon and GoPro), and other devices (such as Google Glass and Apple Watch). The video recording system may also be integrated into the device's native recording software.
[0068] In some embodiments, the recording is saved on the device's internal memory and is initiated via the traditional method of selecting the record button to start and stop the video. In other embodiments, the recording is continuous using the temporary file storage arrangement as discussed below. In still further embodiments, the video recorded may include a recorded video file that is saved to the device's internal memory and a temporary video file that is stored in the temporary file storage arrangements. In the embodiment illustrated in
[0073] In other embodiments, the start time point 751 may be defined via voice command as the enhanced time marker 748 is applied. For example, a voice command of “enhanced time mark lasting 10 seconds” would cause an enhanced time marker 748 to be applied to the video 310 and generate a video clip 752 having a start time point 751 corresponding to the location of the enhanced time marker 748 and an end time point 753 that is 10 seconds after the start time point 751. In still other embodiments, the start and end time points 751, 753 may be provided via user input or a separate custom input device.
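The derivation of a clip's bounds from an enhanced time marker can be sketched as follows. This is purely an illustrative model: the function and field names, and the default offsets, are assumptions for the sketch rather than part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ClipBounds:
    start: float  # seconds into the stored video
    end: float

def clip_from_marker(marker_time: float, pre_seconds: float = 10.0,
                     post_seconds: float = 0.0) -> ClipBounds:
    """Begin the clip pre_seconds before the marker; end post_seconds after it.

    post_seconds = 0 models the "end at the marker" case; a voice command such
    as "enhanced time mark lasting 10 seconds" would instead map to
    pre_seconds = 0 and post_seconds = 10.
    """
    start = max(0.0, marker_time - pre_seconds)  # clamp to start of recording
    end = marker_time + post_seconds
    return ClipBounds(start, end)
```

The clamp to zero covers markers applied within the first few seconds of recording, where the full look-back window is not yet available.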
[0074] As shown in the embodiment illustrated in
[0075] The system 100 may collect and analyze video data collected by the user to recognize patterns within the data using machine learning and/or artificial intelligence for use in the application of enhanced time markers 748. For example, a user may capture a large amount of video footage of basketball games and recognize that video clips including basketball shots are generated with an average start time of eight seconds before the ball moves through the basketball hoop and an average end time of two seconds after the ball moves through the basketball hoop. During a basketball game, the user may tap the enhanced time marker button 754 or otherwise trigger the application of an enhanced time marker 748 to the video 310 during a basketball shot, and the system 100 recognizes that a basketball shot has been made and automatically generates a video clip 752 with a start time that is eight seconds before the ball moves through the hoop and an end time that is two seconds after the ball moves through the hoop. Other non-limiting examples of patterns that machine learning may be trained to recognize include complex plays within a specific sport or a specific team and audience reactions such as clapping, cheering, or silence during live events.
[0076] In other embodiments, tapping of the enhanced time marker button 754 on the GUI 40 of
[0077] The video recording system 10 also allows for the use of a further user input that causes an enhanced time marker 748 to be applied to a video along with selection of a start time point of the corresponding video clip 752. For example, a vertical swipe down on the GUI 40 during recording causes the enhanced time marker 748 to be applied to the video 310. The user interface then presents a series of video frames from the video 310 at the bottom of the screen, enabling the user to select the start time point of the video clip 752. A user would use the down-swipe user input to apply the enhanced time marker where the start time point is outside of the range provided for in the predefined user settings for the standard user input.
[0078] In another embodiment, a still further user input may be used to apply on-the-fly tagging of a video clip generated from an enhanced time marker 748. For example, a vertical swipe up on the GUI 40 during recording causes the enhanced time marker 748 to be applied to the video 310 and then prompts the user to select a specific color tag. When the user views the video clip 752 in the gallery 300, the video clip 752 includes an indicator that the video clip 752 is tagged.
[0079] In further embodiments, the video recording system 10 allows the user to create still photos from frames of the video clip 752. In still other embodiments, the video recording system 10 can integrate special effects such as slow motion into a video clip 752 immediately upon applying the user input that applies the enhanced time marker 748. In this example, the user interface 40 includes a slow motion button that, when selected, causes the video clip 752 to run in slow motion.
[0080] To help users easily locate desirable video portions within a longer video, users can add digital markers, like digital bookmarks, that appear on the scrub bar or thumbnail bar such as the scroll selection 758 (
[0085] A thumbnail 309 of a video file 310 or clip 752 may include a designation 316 if it is in a draft mode. In draft mode, the video file 310 or clip 752 remains editable and all changes may be made virtually, meaning no new file is created. The resulting virtual files 317 are managed via time markers that include a starting point 531 and endpoint 532 marking the location of the virtual file in the temporary file storage arrangement 400 or within another video file 310. This allows for multiple video clips to be present in the gallery 300 from the same source video.
[0086] Virtual files 317 are defined by time markers that may be interpreted by the system 10 to correctly display the virtual files 317. Each time marker may include a starting point, an endpoint, and a reference to one or more source files 318. During playback, the time markers may be used to add video (for example, in the case of merged videos 310) or remove video (for example, in the case of a trimmed video) in real-time from the source video 318. Virtual files 317 may be shared, in which case a temporary new file may be created that reflects the virtual file 317 as defined by the time markers, and then after a certain time the new file gets automatically deleted. As described herein, video files 310 may be provided as actual files or virtual files 317 with reference to an actual file.
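A virtual file of this kind can be sketched as a list of time markers into one or more source files, with playback resolved from the markers rather than from a newly written media file. The class and field names below are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimeMarker:
    start: float   # starting point within the source video (seconds)
    end: float     # endpoint within the source video (seconds)
    source: str    # reference to the source file

@dataclass
class VirtualFile:
    markers: List[TimeMarker]

    def duration(self) -> float:
        # Playback length is the sum of the referenced spans; no new media
        # file exists until the virtual file is shared or exported.
        return sum(m.end - m.start for m in self.markers)
```

A merged video would simply carry several markers (possibly referencing different source files), while a trimmed video carries one marker narrower than its source.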
[0087] Once created, the video clip 752 is added to the gallery 300 and the user may modify the video clip 752 to the same extent that he may modify a video file 310, such as adjusting the starting and ending points as described in greater detail below with reference to
[0088] Similarly, the video from which the video clip 752 is generated may be a real or recorded video file stored on the device's internal memory, a virtual video file stored in the temporary file storage arrangement on the user device, or a combination of both recorded and virtual video files.
[0089] The gallery 300 in
[0090] Referring to
[0092] The temporary file storage arrangement 400 is useful because the video recording system 10 records video constantly without the user having to press the record button 110. Without the use of a temporary file storage arrangement 400, the amount of video 401 recorded by the system 10 would exceed storage limits. The temporary file storage arrangement 400 may enable the video recording system 10 to hold a pre-defined amount of video 401 (e.g., thirty seconds, a minute, five minutes, etc.) in separate temporary files recorded in the past that will be eventually discarded, effectively balancing storage space conservation against the risk of missing an important moment.
[0093] In an embodiment, each temporary file is thirty seconds long, and temporary files of the temporary file storage arrangement 400 are added every thirty seconds. In another embodiment, only two temporary files are kept at a time, unless included in a video 310. In some embodiments, in order to switch between files, recording is stopped for one temporary file and re-started to begin filling another temporary file. Those of skill in the art will recognize that such recording is continuous because the starting and stopping process does not introduce sizeable delays that would be noticeable to the user.
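The temporary file storage arrangement described above behaves like a rolling buffer of fixed-length segments, with the oldest segment discarded as new ones arrive. The following is a minimal sketch under that reading; the class name, segment length, and segment count are illustrative, not mandated by the disclosure.

```python
from collections import deque

class TemporaryStorage:
    """Keeps only the most recent fixed-length temporary video segments."""

    def __init__(self, segment_seconds: float = 30.0, max_segments: int = 2):
        self.segment_seconds = segment_seconds
        # deque with maxlen drops the oldest segment automatically
        self.segments = deque(maxlen=max_segments)

    def add_segment(self, segment):
        """Called each time recording stops and restarts on a new temporary file."""
        self.segments.append(segment)

    def buffered_seconds(self) -> float:
        return len(self.segments) * self.segment_seconds
```

With two 30-second segments retained, the system can always look back at least 30 and at most 60 seconds, matching the trade-off between storage conservation and the risk of missing a moment.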
[0095] Also shown in
[0097] As shown in
[0098] The sliding bar 590 may include one or more thumbnail frames of the temporary file storage arrangement 400. The sliding bar 590 may include a start time slider 591 and an end time slider 592. Both the start time slider 591 and the end time slider 592 may be moved along the sliding bar 590 using a drag gesture. The sliding bar 590 may include various locations along its length that the start time slider 591 and the end time slider 592 may be dragged to. In an embodiment, the locations may permit pixel-by-pixel dragging of the start time slider 591 and the end time slider 592. In another embodiment, the locations may be the thumbnail frames of the sliding bar 590. Each location along the sliding bar 590 may correspond to a time point of the video in the temporary file storage arrangement 400.
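The correspondence between a slider location and a time point can be sketched as a simple linear mapping along the bar. The function name and the pixel-based parameterization are assumptions for the sketch; the disclosure only requires that each location map to a time point.

```python
def location_to_time(pixel_x: int, bar_width_px: int,
                     buffer_duration: float) -> float:
    """Linear map: left edge -> 0 s, right edge -> full buffered duration."""
    # Clamp so drags past either end of the bar stay within the video.
    pixel_x = max(0, min(pixel_x, bar_width_px))
    return (pixel_x / bar_width_px) * buffer_duration
```

Pixel-by-pixel dragging corresponds to evaluating this map at every x-coordinate, while thumbnail-granularity dragging corresponds to evaluating it only at the thumbnail boundaries.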
[0099] In response to the user dragging the start time slider 591 to a first location, the starting point 531 may be updated based on a time point corresponding to the first location. Additionally, the fine selection bar 520 may be placed in a start selection mode and updated to the start time point. Similarly, in response to the user dragging the end time slider 592 to a second location, the endpoint 532 may be updated based on a time point corresponding to the second location. Also, the fine selection bar 520 may be placed in an end selection mode and updated to the endpoint 532.
[0100] In the start selection mode, the starting point 531 may be updated in response to a scroll gesture on the fine selection bar 520. A central frame of the movable series of video frames may be displayed in the viewing window 501. As the user scrolls through the video frames, the video frame in the central frame may be updated as the starting point 531. Likewise, in the end selection mode, in response to a scroll gesture, the endpoint 532 may be updated to the central frame of the movable series of video frames. The user may then scroll through the video frames to update the endpoint 532. The viewing window 501 may include a play button 503 that the user may press to view the video file 310 as currently edited. When the user is in end selection mode, pressing the play button 503 may result in playback of a few seconds before the endpoint 532. For example, in an embodiment, the final three seconds are played back when pressing the play button 503 in end selection mode.
[0107] In a further embodiment illustrated in
[0108] The network of user devices 30 may be composed of recorder user devices and controller user devices. Each recorder device 30A-30D captures a respective video feed, either in a traditional permanent file storage arrangement or in a temporary file storage arrangement, captured in whole or in fragments, and may have a user interface through which the captured video feed can be viewed. Through the video recording system 10 on each controller device, coaches and audience members can apply enhanced time markers to one or more of the video feeds from recorder devices 30A-30D. Some devices may serve as both a recorder and controller device.
[0109] Each recorder device 30A-30D continuously receives recorded video 401 from the camera of the respective device and stores the video 401 for a pre-defined period of time in a temporary file storage arrangement 400 or as a real or recorded video file on the respective device 30A-30D and/or the remote database 34. Each device 30 can access the video files 401, 750 of other devices 30 through the gallery 300 on the respective device or through a shared folder on the remote database 34. In one embodiment where the video 401 is stored locally on the respective device 30A-30D, the galleries 300 of devices 30A-30D may sync to the other devices 30A-30D. The system 100 may allow the owner of the networked devices 30A-30D to provide select users access to the video.
[0110] During recording, voice commands may be used to start and stop recording as well as to apply enhanced time markers or tags during recording. Voice commands may be used to apply an enhanced time marker 748 to a video feed 401, 750 on a specific device and to tag the enhanced time marker 748 with a specific player or a basketball move or play. Such user input may be provided through the controller devices and/or the recorder devices. A person stationed at each device 30 may also tap or select the enhanced time marker button 754 on the graphical user interface 40 to utilize enhanced time markers 748 within a video file 401, 750. The enhanced time markers may also be applied to the video feed 401, 750 based on screen activity, such as a basketball shot being made, or a change in audio volume, such as a crowd cheering or buzzer sounded.
[0111] In one example play, a player intercepts a pass between players on the opposing team and sprints down the court, scoring two points with a lay-up. A first device is located near the point of interception and a second device is located near the player's basketball net. The user, such as the coach, provides a first voice command to instruct the first device to apply a first enhanced time marker associated with the player to the video file. The video recording system 10 then creates a 10-second video clip having a starting point ten seconds prior to the first voice command and an ending point at the time of the first voice command. The video file is also tagged with the player's name. Moments later, the coach provides a second voice command to instruct the second device to apply a second enhanced time marker associated with the player to its video file. The video recording system 10 then creates a second 10-second video clip having a starting point ten seconds prior to the second voice command and an ending point at the time of the second voice command, tagging the second video file with the player's name. The coach can also tag each video clip by move, such as interception, pass, or layup.
[0112] In another embodiment, users in the audience view the video feeds 902A-902D from the devices 30A-30D, respectively, through the mobile application on their user devices via the user interface 900 shown in
[0113] In some embodiments, the system 10 may transfer all video clips 752 associated with enhanced time markers 748 related to a singular point or specific duration in time from remote recorders to a shared drive or remote database. The owner of the system 10 may have a large number of video files associated with a singular point, likely spanning well before and after a critical point in the video, such as the time leading up to a three-point shot. In some cases, the videos 401, 750 are virtual files and are automatically deleted unless selected by the user to be converted to a real file.
[0114] In other embodiments, users in the audience viewing the video 902A-902D from the devices 30A-30D through the mobile app on their user devices may create local video files by tapping the enhanced time marker button 754 on their user devices. After the game, a parent can review the video files (virtual or real) from the different perspectives and decide which video files to keep. In yet another embodiment, an audience member can record video in a traditional manner from their vantage point using their user device. After the recording is completed, the system can automatically generate an enhanced time marker corresponding to the start time and end time of the captured video, and then request the necessary source video from the first through fourth recorder devices in order to collect additional, fully synced videos from the vantage points provided by the available recorder devices. It should be noted that in one potential embodiment, the application of an enhanced time marker may be initiated via the user interface in a familiar manner by what appears to the user as a traditional record and stop-recording button. In yet another iteration, an enhanced time marker may be applied by a button labeled “capture past 30 seconds of video.” In one embodiment, to ensure optimal performance of the system, an initialization event of all participating devices, both recorders and controllers, should take place to sync the clocks of all devices, ensuring that the time points associated with enhanced time markers are correctly matched to the corresponding video segments of the source video file(s).
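The clock-synchronization step of the initialization event can be sketched with an NTP-style round-trip estimate: each participating device exchanges timestamps with a reference device and computes its clock offset. This simplified single-sample version, and its function name, are assumptions for illustration; a production system would average multiple samples.

```python
def estimate_clock_offset(request_sent: float, server_time: float,
                          response_received: float) -> float:
    """Estimate remote-minus-local clock offset, assuming symmetric latency.

    request_sent / response_received are local-clock timestamps; server_time
    is the reference device's timestamp inside its reply.
    """
    round_trip = response_received - request_sent
    # The server timestamp corresponds roughly to the round-trip midpoint.
    return server_time - (request_sent + round_trip / 2.0)
```

Once each device knows its offset, a marker time point recorded on a controller device can be translated into any recorder device's timeline before selecting video segments.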
[0115] After the game ends, the coach can meet with the team immediately to review video files 310, 317 or clips 752. The video files 310, 317 and clips 752 recorded by all four devices 30A-30D may be collected in a single folder in the media gallery 300. The coach may sort video files and clips in the gallery 300 according to tags, such as by player name or by move. The video files 310, 317, 752 from each device 30A-30D may also be edited using any of the features described herein in order to create shorter video files. For example, a 2-hour video file created by the third device may be edited to create 150 video clips, each lasting a few seconds long. The video clips 317, 752 may be tagged by player, by move, or by play. The video clips 317, 752 may be saved as actual files, while the 2-hour virtual video file may be deleted.
[0116] The video recording system 10 can also merge video clips 752 by tag and/or time into a single video file. For example, the coach may provide a voice command to merge all video clips 752 tagged by player name so that the team can view a single merged video to see a player's performance throughout the game. The coach could also merge all video clips tagged according to moves, so that the team can review all interceptions or passes, etc., throughout the game in a single video. The coach may also merge video clips 752 by selecting the link button 302 on the media gallery 300 and tapping the clips to merge.
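Merging clips by tag reduces to selecting every clip that carries the tag and ordering the selection chronologically before concatenation. The dictionary shape and key names below are illustrative assumptions, not the disclosed data model.

```python
from typing import List, Dict

def merge_by_tag(clips: List[Dict], tag: str) -> List[Dict]:
    """Select all clips carrying the tag and order them by start time.

    The returned list is the playback order for the merged video; actual
    concatenation could then be done virtually via time markers.
    """
    return sorted((c for c in clips if tag in c.get("tags", [])),
                  key=lambda c: c["start"])
```

The same selection could feed the virtual-file mechanism described earlier, so that the merged video exists only as an ordered set of time markers until exported.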
[0117] In a still further embodiment, the network of additional devices 30E-30K allows users to record a virtual video file 401 based on the video feeds 902A-902D. A networked device 30E provides a live video file 401 to a shared folder in the media gallery 300, which is accessible by networked devices 30F-30K as well. Viewers of devices 30F-30K may view the video feed 401 and tap the enhanced time marker button 754 on their respective GUI 40 to create a video clip 752, which they may convert to a recorded video file. For example, four devices 30A-30D may be positioned about the basketball court as described in reference to the example above. Journalists may also have access to the live video feeds 401 through devices 30G-30K and can generate a ten-second clip 752 for immediate release.
[0118] Within the network-based multiuser recording system of many recorder and controller user devices, the ability to reduce load on recorder devices and distribute video processing load is valuable. In one embodiment, the recorder devices capture the video feed in fragments of predefined or dynamically calculated lengths of time via successive stop recording and start recording events (“stop start events”), which then can be provided to controller applications on the controller devices over the network. When a controller device applies an enhanced time marker, the video recording system can determine which file fragments are needed to fulfill the requirements of the enhanced time marker, and then request only the necessary file fragments from the relevant recorder devices in order to generate the video clip 752.
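Determining which fragments an enhanced time marker requires is an interval-overlap test against the clip's start and end time points. The sketch below assumes fragments are represented as (start, end) spans on a shared timeline; this representation and the function name are illustrative.

```python
from typing import List, Tuple

def fragments_needed(fragments: List[Tuple[float, float]],
                     clip_start: float, clip_end: float) -> List[int]:
    """Return indices of fragments overlapping the [clip_start, clip_end] span.

    Only these fragments need to be requested from the recorder device,
    keeping network transfer and recorder-side load to a minimum.
    """
    return [i for i, (f_start, f_end) in enumerate(fragments)
            if f_start < clip_end and f_end > clip_start]
```

For example, with 30-second fragments, a clip spanning 25-40 seconds touches only the first two fragments, so the third never leaves the recorder device.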
[0119] Once the controller device receives the file fragment(s) from the recorder device, the controller device generates the video and presents the video in virtual form by use of the file fragment(s), time marker information, and a specialized video player. Through the controller device, the user can easily alter the desired start point and end point associated with the enhanced time marker and preview their desired video in virtual form, and if so desired convert the file from a virtual video to a real video, all the processing of which would take place on the controller device (not the recorder device). Alternatively, once the file fragment(s) are received by the controller device, the controller device can automatically generate the desired video with the use of the enhanced time marker(s) and source video fragment(s), the processing of which would take place on the controller device.
[0120] Referring back to
[0121] Sensors, devices, and additional subsystems can be coupled to the peripherals interface 106 to facilitate various functionalities. For example, a motion sensor 108 (e.g., a gyroscope), a light sensor 163, and positioning sensors 112 (e.g., GPS receiver, accelerometer) can be coupled to the peripherals interface 106 to facilitate the orientation, lighting, and positioning functions described further herein. Other sensors 114 can also be connected to the peripherals interface 106, such as a proximity sensor, a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functionalities.
[0122] A camera subsystem 116 and an optical sensor 118 (e.g., a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor) can be utilized to facilitate camera functions, such as recording photographs and video clips.
[0123] Communication functions can be facilitated through a network interface, such as one or more wireless communication subsystems 120, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 120 can depend on the communication network(s) over which the user device 30 is intended to operate. For example, the user device 30 can include communication subsystems 120 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMAX network, and a Bluetooth network. In particular, the wireless communication subsystems 120 may include hosting protocols such that the user device 30 may be configured as a base station for other wireless devices.
[0124] An audio subsystem 122 can be coupled to a speaker 124 and a microphone 126 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.
[0125] The I/O subsystem 128 may include a touch screen controller 130 and/or other input controller(s) 132. The touch screen controller 130 can be coupled to a touch screen 134. The touch screen 134 and touch screen controller 130 can, for example, detect contact and movement, or break thereof, using any of a plurality of touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 134. The other input controller(s) 132 can be coupled to other input/control devices 136, such as one or more buttons, rocker switches, thumb-wheel, infrared port, USB port, and/or a pointer device such as a stylus. The one or more buttons (not shown) can include an up/down button for volume control of the speaker 124 and/or the microphone 126.
[0126] The memory interface 102 may be coupled to memory 138. The memory 138 can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 138 may store operating system instructions 140, such as Darwin, RTXC, LINUX, UNIX, OS X, iOS, ANDROID, BLACKBERRY OS, BLACKBERRY 10, WINDOWS, or an embedded operating system such as VxWorks. The operating system instructions 140 may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, the operating system instructions 140 can be a kernel (e.g., UNIX kernel).
[0127] The memory 138 may also store communication instructions 142 to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers. The memory 138 may include graphical user interface instructions 144 to facilitate graphic user interface processing; sensor processing instructions 146 to facilitate sensor-related processing and functions; phone instructions 148 to facilitate phone-related processes and functions; electronic messaging instructions 150 to facilitate electronic-messaging related processes and functions; web browsing instructions 152 to facilitate web browsing-related processes and functions; media processing instructions 154 to facilitate media processing-related processes and functions; GPS/Navigation instructions 156 to facilitate GPS and navigation-related processes and instructions; camera instructions 158 to facilitate camera-related processes and functions; and/or other software instructions 160 to facilitate other processes and functions (e.g., access control management functions, etc.). The memory 138 may also store other software instructions controlling other processes and functions of the user device 30 as will be recognized by those skilled in the art. In some implementations, the media processing instructions 154 are divided into audio processing instructions and video processing instructions to facilitate audio processing-related processes and functions and video processing-related processes and functions, respectively. An activation record and International Mobile Equipment Identity (IMEI) 162 or similar hardware identifier can also be stored in memory 138.
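By way of illustration only, the division of the media processing instructions 154 into audio processing instructions and video processing instructions might be sketched as follows. This is a hypothetical sketch and not part of the disclosed embodiments; all function names are assumptions, and the placeholder bodies stand in for actual decoding, filtering, or scaling steps.

```python
# Hypothetical sketch: media processing instructions split into
# audio-processing and video-processing functions, with a dispatcher
# selecting the appropriate set by media type.
def process_audio(frame: bytes) -> bytes:
    # Placeholder for audio-related processing (e.g., decode, filter).
    return frame

def process_video(frame: bytes) -> bytes:
    # Placeholder for video-related processing (e.g., decode, scale).
    return frame

MEDIA_PROCESSORS = {
    "audio": process_audio,
    "video": process_video,
}

def process_media(media_type: str, frame: bytes) -> bytes:
    """Dispatch a frame to the audio or video processing instructions."""
    try:
        return MEDIA_PROCESSORS[media_type](frame)
    except KeyError:
        raise ValueError(f"unsupported media type: {media_type}")
```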
[0128] Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described herein. These instructions need not be implemented as separate software programs, procedures, or modules. The memory 138 can include additional instructions or fewer instructions. Furthermore, various functions of the user device 30 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits. Accordingly, the user device 30, as shown in
[0129] Aspects of the systems and methods described herein are controlled by one or more controllers 103. The one or more controllers 103 may be adapted to run a variety of application programs, access and store data, including accessing and storing data in associated databases, and enable one or more interactions via the user device 30. Typically, the one or more controllers 103 are implemented by one or more programmable data processing devices. The hardware elements, operating systems, and programming languages of such devices are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith.
[0130] For example, the one or more controllers 103 may be a PC based implementation of a central control processing system utilizing a central processing unit (CPU), memories and an interconnect bus. The CPU may contain a single microprocessor, or it may contain a plurality of microprocessors for configuring the CPU as a multi-processor system. The memories include a main memory, such as a dynamic random access memory (DRAM) and cache, as well as a read only memory, such as a PROM, EPROM, FLASH-EPROM, or the like. The system may also include any form of volatile or non-volatile memory. In operation, the main memory is non-transitory and stores at least portions of instructions for execution by the CPU and data for processing in accord with the executed instructions.
[0131] The one or more controllers 103 may further include appropriate input/output ports for interconnection with one or more output displays (e.g., monitors, printers, touchscreen 134, motion-sensing input device 108, etc.) and one or more input mechanisms (e.g., keyboard, mouse, voice, touch, bioelectric devices, magnetic reader, RFID reader, barcode reader, touchscreen 134, motion-sensing input device 108, etc.) serving as one or more user interfaces for the processor. For example, the one or more controllers 103 may include a graphics subsystem to drive the output display. The links of the peripherals to the system may be wired connections or use wireless communications.
[0132] Although summarized above as a PC-type implementation, those skilled in the art will recognize that the one or more controllers 103 also encompass systems such as host computers, servers, workstations, network terminals, and the like. Further, the one or more controllers 103 may be embodied in a user device 30, such as a mobile electronic device, like a smartphone or tablet computer. In fact, the use of the term controller is intended to represent a broad category of components that are well known in the art.
[0133] Hence aspects of the systems and methods provided herein encompass hardware and software for controlling the relevant functions. Software may take the form of code or executable instructions for causing a processor or other programmable equipment to perform the relevant steps, where the code or instructions are carried by or otherwise embodied in a medium readable by the processor or other machine. Instructions or code for implementing such operations may be in the form of computer instructions in any form (e.g., source code, object code, interpreted code, etc.) stored in or carried by any tangible readable medium.
[0134] It should be noted that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications may be made without departing from the spirit and scope of the present invention and without diminishing its attendant advantages.