Patent classifications
G06F3/04855
Graphical management system for interactive environment monitoring
Systems and methods for monitoring an environment using a Graphical Management System (GMS) are described. The GMS may present a vector image map of the environment for interaction by a user. The user may zoom and pan on the map to generate views with varying levels of detail. Video data from a plurality of video cameras may also be displayed on the map based on the user input, the level of zoom, and the location viewed in the map. Further, the user may select a timeline event and sensor data associated with the event, and the map may be initialized at the time and location of the event.
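The abstract's zoom- and viewport-dependent feed selection can be sketched as below; this is a minimal illustration assuming axis-aligned map coordinates, and the `Camera` and `Viewport` names are hypothetical, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    x: float
    y: float
    min_zoom: int  # zoom level at which this feed becomes visible

@dataclass
class Viewport:
    x0: float
    y0: float
    x1: float
    y1: float
    zoom: int

def visible_cameras(cameras, view):
    """Return cameras whose video should be overlaid on the current view:
    inside the viewport bounds and at sufficient zoom for their detail level."""
    return [
        c for c in cameras
        if view.x0 <= c.x <= view.x1
        and view.y0 <= c.y <= view.y1
        and view.zoom >= c.min_zoom
    ]
```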
Navigating long distances on navigable surfaces
Aspects disclosed herein relate to the use of navigational control UI elements to aid in navigating large surfaces on a touchscreen device. The navigational control UI element may be operable to facilitate traversal of the navigable surface along the axis upon which the navigational control UI element is placed. In alternate examples, the navigational control element may be operable to provide functionality to traverse or adjust the navigable surface along both the horizontal and vertical axes. In still further aspects, other types of navigational control UI elements may provide the ability to directly jump to a specific position on the navigable surface.
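The two behaviors the abstract describes, axis-aligned traversal and direct jump-to-position, reduce to offset arithmetic on one axis. A minimal sketch, with function names chosen for illustration:

```python
def traverse(offset, delta, surface_length, viewport_length):
    """Move the viewport by `delta` along the control's axis, clamped so
    the viewport never scrolls past either end of the surface."""
    max_offset = max(surface_length - viewport_length, 0)
    return min(max(offset + delta, 0), max_offset)

def jump_to(fraction, surface_length, viewport_length):
    """Map a tap at `fraction` (0.0-1.0) along the control directly to a
    scroll offset on the surface."""
    max_offset = max(surface_length - viewport_length, 0)
    return min(max(fraction, 0.0), 1.0) * max_offset
```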
Collaboratively annotating streaming videos on mobile devices
A process for annotating a video in real-time on a mobile device. The process may include creating one or more markers, allowing a user of the mobile device to annotate the video while one or more users within a group of users are annotating the streaming video in real-time. The process may include receiving a selection from the user identifying the portion of the video he or she seeks to annotate. The process further includes displaying a text box for a frame or range of frames selected by the user for annotation, and receiving a submitted text box from the user and propagating the annotations within the submitted text box to one or more users within the group in real-time.
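The submit-and-propagate step can be sketched as a fan-out of the submitted text box to every other group member; `AnnotationSession` is an illustrative name, and a real system would push deliveries over the network rather than return them.

```python
class AnnotationSession:
    """Collect text-box annotations tied to a frame range and fan them
    out to the other users in the group."""
    def __init__(self, group):
        self.group = set(group)   # user ids in the annotation session
        self.annotations = []
    def submit(self, author, start_frame, end_frame, text):
        note = {"author": author,
                "frames": (start_frame, end_frame),
                "text": text}
        self.annotations.append(note)
        # everyone except the author receives the new annotation
        return [(user, note) for user in sorted(self.group - {author})]
```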
Indications for sponsored content items within media items
In one embodiment, a method includes sending a media item to a client computing device of a user; determining an interest level of the user for the media item, wherein the interest level is determined based on a duration of time for which the media item is played on the client computing device; and if the interest level of the user is greater than a threshold interest level, then sending, to the client computing device, a sponsored-content indicator indicating that a sponsored content item will be presented and causing the sponsored content item to be presented on the client computing device.
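The threshold test at the heart of this claim is straightforward; a minimal sketch, assuming interest is modeled as the fraction of the media item actually played (one plausible reading of "duration of time for which the media item is played"):

```python
def should_send_indicator(played_seconds, total_seconds, threshold=0.5):
    """Estimate the user's interest from play duration and signal the
    sponsored-content indicator only when interest exceeds the threshold."""
    if total_seconds <= 0:
        return False
    interest = played_seconds / total_seconds
    return interest > threshold
```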
Input device, input method, and image forming device
An input device including a touchscreen sensor that detects a touch operation from a user, a non-transitory computer-readable recording medium including a program, and a hardware processor that executes the program to operate as: an area determination unit that determines whether a detected touch operation is performed in a first detection area; a trajectory determination unit that, when it is determined that the detected touch operation is performed in the first detection area, determines whether or not the detected touch operation indicates an operation drawing a linear trajectory; and a detection control unit that, when it is determined that the operation drawing a linear trajectory is indicated, controls the area determination unit to determine whether a next touch operation is performed in a second detection area that is larger than the first detection area.
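The abstract describes a two-stage control flow: a linear stroke detected inside a small first area causes the next touch to be checked against a larger second area. A minimal sketch, with illustrative names and a crude linearity test (path length close to the endpoint distance):

```python
import math

def in_area(point, area):
    x, y = point
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def is_linear(path, tolerance=1.1):
    """Treat a stroke as linear when its total length is close to the
    straight-line distance between its endpoints."""
    if len(path) < 2:
        return False
    total = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    direct = math.dist(path[0], path[-1])
    return direct > 0 and total <= tolerance * direct

class AreaController:
    """Widen the active detection area after a linear stroke in the
    first, smaller area, as the abstract describes."""
    def __init__(self, first_area, second_area):
        self.first_area = first_area
        self.second_area = second_area
        self.active_area = first_area

    def on_stroke(self, path):
        hit = all(in_area(p, self.active_area) for p in path)
        if hit and self.active_area == self.first_area and is_linear(path):
            self.active_area = self.second_area  # enlarge for next touch
        return hit
```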
Method and system for tagging and navigating through performers and other information on time-synchronized content
In one embodiment, a computer-implemented method for editing navigation of a content item is disclosed. The method may include presenting, via a user interface at a client computing device, time-synchronized text pertaining to the content item; receiving an input of a tag for the time-synchronized text of the content item, wherein the tag corresponds to a performer that performs at least a portion of the content item at a timestamp in the time-synchronized text; storing the tag associated with the portion of the content item at the timestamp in the time-synchronized text of the content item; and responsive to receiving a request to play the content item: playing the content item via a media player presented in the user interface, and concurrently presenting the time-synchronized text and the tag in the user interface, wherein the tag is presented as a graphical user element in the user interface.
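Storing tags keyed by timestamp and resolving which tag is active at the current playhead can be sketched with an ordered index; `TagIndex` is an illustrative name, not from the patent.

```python
import bisect

class TagIndex:
    """Store performer tags keyed by their timestamp in the
    time-synchronized text, and look up the tag active at the playhead."""
    def __init__(self):
        self._times = []
        self._names = []

    def add(self, timestamp_s, performer):
        # keep the index sorted by timestamp
        i = bisect.bisect(self._times, timestamp_s)
        self._times.insert(i, timestamp_s)
        self._names.insert(i, performer)

    def active(self, playhead_s):
        # the most recent tag at or before the playhead, if any
        i = bisect.bisect_right(self._times, playhead_s)
        return self._names[i - 1] if i else None
```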
Favorite-object display method and terminal
Embodiments of this application provide a favorite-object display method. The method manages favorite objects collected from different applications, and after an input operation performed by a user on a favorite object is received, the method displays, in the original application, the content corresponding to the favorite object. The method further includes the following steps: displaying a favorites management interface, where a first favorite object and a second favorite object are displayed on the favorites management interface; receiving an operation entered by a user; and if the operation is directed to the first favorite object, responding to the operation by displaying, in a first application, the content corresponding to the first favorite object; or if the operation is directed to the second favorite object, responding to the operation by displaying, in a second application, the content corresponding to the second favorite object.
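The routing logic, each favorite remembers its source application and an operation on it opens the content there, can be sketched as below; `FavoritesManager` and its methods are hypothetical names, and a real terminal would launch the application rather than return a tuple.

```python
class FavoritesManager:
    """Route an operation on a favorite object back to the application
    the object was collected from."""
    def __init__(self):
        self._items = {}  # favorite id -> (source application, content)

    def add(self, fav_id, app, content):
        self._items[fav_id] = (app, content)

    def open(self, fav_id):
        # in a real terminal this would launch `app` and render `content`
        app, content = self._items[fav_id]
        return app, content
```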