Patent classifications
G06F3/04812
Television user interface
A user interface for a television display includes a remote control with a touch pad. The remote control communicates wirelessly with a receiver. Periodic samples of touch positions are time stamped only when they are received at the receiver, and the time stamps are quantized to the interval of the periodic samples. The response of the user interface to gestures may be determined by a set of cascaded style sheets. Directional gestures may be used to skip forward or backward by a relative time during playback. During EPG scrolling, a position indicator may remain fixed in a horizontal direction until a time boundary of the EPG is reached, at which point the position indicator may move to the end of the time boundary. When scrolling programme items, an item may remain highlighted until it scrolls off the display, at which point the highlighting disappears until scrolling is complete. During scrolling, multiple directional gestures may be used to increase the scrolling speed. A swipe-and-hold gesture may be used to control the scrolling speed, which depends on the duration of the hold.
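The receiver-side time stamping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the choice of integer millisecond units are assumptions:

```python
def quantize_timestamps(arrival_times_ms, period_ms):
    """Snap receiver-side arrival times onto the periodic sample grid.

    Touch samples leave the remote at a fixed period but arrive at the
    receiver with wireless jitter; quantizing each arrival time to the
    nearest multiple of the sample period recovers uniform sample times.
    """
    return [round(t / period_ms) * period_ms for t in arrival_times_ms]

# Samples sent every 20 ms arrive with a few ms of jitter:
print(quantize_timestamps([0, 21, 39, 62], 20))  # → [0, 20, 40, 60]
```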
Waypoint detection for a contact center analysis system
A contact center analysis system can receive various types of communications from customers, such as audio from telephone calls, voicemails, or video conferences; text from speech-to-text transcriptions, emails, live chat transcripts, text messages, and the like; and other media or multimedia. The system can segment the communication data using temporal, lexical, semantic, syntactic, prosodic, user, and/or other features of the segments. The system can cluster the segments according to one or more similarity measures of the segments. The system can use the clusters to train a machine learning classifier to identify one or more of the clusters as waypoints (e.g., portions of the communications of particular relevance to a user training the classifier). The system can automatically classify new communications using the classifier and facilitate various analyses of the communications using the waypoints.
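As one hedged sketch of the segment-and-cluster stage, the toy code below splits a transcript into utterance segments, builds bag-of-words lexical features, and clusters greedily by cosine similarity. All names and the single-pass threshold scheme are illustrative assumptions; the patent's system may use richer features (temporal, prosodic, etc.) and other clustering methods:

```python
from collections import Counter
import math

def segment(transcript):
    """Split a transcript into utterance-level segments (one per line)."""
    return [line.strip() for line in transcript.splitlines() if line.strip()]

def vectorize(text):
    """Bag-of-words term counts as a crude lexical feature vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(segments, threshold=0.3):
    """Greedy single-pass clustering by lexical similarity."""
    clusters = []  # list of (centroid counts, member segments)
    for seg in segments:
        vec = vectorize(seg)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                members.append(seg)
                centroid.update(vec)  # fold the segment into the centroid
                break
        else:
            clusters.append((vec, [seg]))
    return [members for _, members in clusters]

calls = segment("hello how can I help\nI need help with my bill\ngoodbye thanks for calling")
print(cluster(calls))  # the two "help" utterances group together
```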
Eclipse cursor for virtual content in mixed reality displays
Systems and methods for displaying a cursor and a focus indicator associated with real or virtual objects in a virtual, augmented, or mixed reality environment by a wearable display device are disclosed. The system can determine a spatial relationship between a user-movable cursor and a target object within the environment. The system may render a focus indicator (e.g., a halo, shading, or highlighting) around or adjacent objects that are near the cursor. When the cursor overlaps with a target object, the system can render the object in front of the cursor (or not render the cursor at all), so the object is not occluded by the cursor. The object can be rendered closer to the user than the cursor. A group of virtual objects can be scrolled, and a virtual control panel can be displayed indicating objects that are upcoming in the scroll.
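The cursor/object spatial relationship can be illustrated with a small 2-D sketch. The function name, the circular hit region, and the `halo_range` factor are all assumptions for illustration, not the disclosed implementation:

```python
import math

def focus_state(cursor, obj_center, obj_radius, halo_range=1.5):
    """Classify the spatial relationship between cursor and object.

    Returns 'eclipsed' when the cursor lies inside the object (so the
    renderer draws the object in front, or hides the cursor, to avoid
    occlusion), 'halo' when the cursor is near enough to warrant a
    focus indicator, and 'none' otherwise.
    """
    d = math.dist(cursor, obj_center)
    if d <= obj_radius:
        return "eclipsed"
    if d <= obj_radius * halo_range:
        return "halo"
    return "none"

print(focus_state((0, 0), (0.5, 0), 1.0))  # → eclipsed
print(focus_state((0, 0), (1.2, 0), 1.0))  # → halo
print(focus_state((0, 0), (3.0, 0), 1.0))  # → none
```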
Systems and Methods for Executing Robotic Process Automation (RPA) Within a Web Browser
In some embodiments, a robotic process automation (RPA) agent executing within a first browser window/tab interacts with an RPA driver injected into a target web page displayed within a second browser window/tab. A bridge module establishes a communication channel between the RPA agent and the RPA driver. In one exemplary use case, the RPA agent receives a robot specification from a remote server, the specification indicating at least one RPA activity, and communicates details of the respective activity to the RPA driver via the communication channel. The RPA driver identifies a runtime target for the RPA activity within the target web page and executes the respective activity.
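The agent–driver round trip over the bridge can be sketched in miniature. The class names, the dict-based activity specification, and the selector-to-text page model are hypothetical stand-ins for the browser messaging channel and the DOM:

```python
class RpaDriver:
    """Stand-in for the driver injected into the target web page: it
    resolves a selector to a runtime target and executes the activity."""
    def __init__(self, page):
        self.page = page  # selector -> element text (toy DOM)

    def execute(self, activity):
        target = self.page[activity["selector"]]  # identify runtime target
        if activity["action"] == "read":
            return target
        raise ValueError(f"unsupported action: {activity['action']}")

class Bridge:
    """Stand-in for the communication channel between the RPA agent's
    browser window/tab and the driver's window/tab."""
    def __init__(self, driver):
        self.driver = driver

    def send(self, activity):
        # In the real system this crosses a browser messaging boundary.
        return self.driver.execute(activity)

# The agent receives a robot specification and relays one activity:
bridge = Bridge(RpaDriver({"#total": "42.00"}))
print(bridge.send({"selector": "#total", "action": "read"}))  # → 42.00
```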
Compressed content object and action detection
Various embodiments of a framework are described which allow, as an alternative to resource-taxing decompression, efficient computation of feature maps from a compressed content data subset, such as video, by exploiting the motion information, such as motion vectors, present in the compressed video. This framework allows frame-specific object-recognition and action-detection algorithms to be applied to compressed video and other media files by executing only on I-frames in a Group of Pictures and linearly interpolating the results. Training and machine learning increase recognition accuracy. Yielding significant computational gains, this approach accelerates frame-wise feature extraction for I-frame/P-frame/P-frame videos as well as I-frame/P-frame/B-frame videos. The present techniques may also be used for segmentation to identify and label respective regions for objects in a video.
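The I-frame-only extraction with linear interpolation can be sketched as below. The function name, the list-of-lists feature representation, and the fixed GOP length are assumptions for illustration:

```python
def interpolate_features(iframe_features, gop_len):
    """Linearly interpolate per-frame feature vectors between consecutive
    I-frames, so expensive extraction runs only once per Group of Pictures.

    iframe_features: one feature vector per I-frame.
    gop_len: number of frames from one I-frame to the next.
    """
    frames = []
    for a, b in zip(iframe_features, iframe_features[1:]):
        for i in range(gop_len):
            t = i / gop_len  # fraction of the way to the next I-frame
            frames.append([x * (1 - t) + y * t for x, y in zip(a, b)])
    frames.append(list(iframe_features[-1]))  # final I-frame itself
    return frames

# Two I-frames, GOP of 4: intermediate P/B-frame features are interpolated.
print(interpolate_features([[0.0], [1.0]], 4))
# → [[0.0], [0.25], [0.5], [0.75], [1.0]]
```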
Cursor integration with a touch screen user interface
In some embodiments, a cursor interacts with user interface objects on an electronic device. In some embodiments, an electronic device selectively displays a cursor in a user interface. In some embodiments, an electronic device displays a cursor while manipulating objects in the user interface. In some embodiments, an electronic device dismisses or switches applications using a cursor. In some embodiments, an electronic device displays user interface elements in response to requests to move a cursor beyond an edge of the display.