Patent classifications
G06F3/0485
SELECTION RING USER INTERFACE
Utilization of a selection ring to select icons is provided herein. The selection ring is presented in a first graphical user interface. In response to receiving a first selection of the selection ring by a user, a plurality of icons are presented in the first graphical user interface. User manipulation of a position of the selection ring within the first graphical user interface is received, where that manipulation occurs without altering a location of the plurality of icons being presented within the first graphical user interface. In response to receiving a second selection associated with the selection ring, an icon from the plurality of icons is selected based on the position of the selection ring within the first graphical user interface. A second graphical user interface is then presented to the user based on the selected icon.
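The interaction sequence in this abstract (reveal icons on a first selection, move the ring while icons stay fixed, pick the nearest icon on a second selection) can be sketched as a small state object. All names here (`SelectionRing`, the icon tuples) are illustrative assumptions, not terms from the patent.

```python
# Minimal sketch of the selection-ring flow described above.
# Icons keep fixed positions; only the ring moves.

class SelectionRing:
    def __init__(self, icons):
        self.icons = icons            # list of (name, (x, y)) at fixed positions
        self.position = (0, 0)        # ring position within the first GUI
        self.icons_visible = False

    def first_selection(self):
        """First selection of the ring reveals the plurality of icons."""
        self.icons_visible = True

    def move(self, x, y):
        """Reposition the ring; icon locations are never altered."""
        self.position = (x, y)

    def second_selection(self):
        """Second selection picks the icon based on the ring's position
        (here: the nearest icon by squared distance)."""
        rx, ry = self.position
        return min(self.icons,
                   key=lambda i: (i[1][0] - rx) ** 2 + (i[1][1] - ry) ** 2)

ring = SelectionRing([("mail", (10, 10)), ("maps", (50, 50))])
ring.first_selection()
ring.move(48, 52)
ring.second_selection()  # -> ("maps", (50, 50))
```

The selected icon would then drive presentation of the second graphical user interface.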
USER DEVICE FOR DISPLAYING A USER-INTERFACE OBJECT AND METHOD THEREOF
Various embodiments disclosed herein are directed to a user device for displaying a user-interface object. The user device includes a first display device with a touch-sensitive interface, at least one processor, and at least one memory storing instructions executable by the processor. The instructions executable by the processor are configured to identify a defined touch gesture applied to the touch-sensitive interface. The instructions executable by the processor are also configured to display a user-interface, UI, object, which has been displayed on the first display device, on a second display device communicatively coupled to the user device responsive to the identification of the defined touch gesture.
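The gesture-triggered routing described above can be sketched as follows; the gesture name and the list-based "display" model are hypothetical stand-ins for real touch and display APIs.

```python
# Sketch: when the defined touch gesture is identified, the UI object
# moves from the first display to the communicatively coupled second one.

DEFINED_GESTURE = "three_finger_swipe"  # illustrative assumption

def handle_touch(gesture, ui_object, displays):
    """If the defined gesture is identified and a second display exists,
    show the object there and remove it from the first display."""
    if gesture == DEFINED_GESTURE and len(displays) > 1:
        displays[1].append(ui_object)   # second display device
        displays[0].remove(ui_object)   # first display device
    return displays

primary, external = ["photo"], []
handle_touch("three_finger_swipe", "photo", [primary, external])
# primary is now [], external is ["photo"]
```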
REAL TIME VIDEO SPECIAL EFFECTS SYSTEM AND METHOD
A user interface system and method of recording/editing a video while applying special effects in real time. The interface can be associated with an electronic device including a processor in communication with a camera and a memory unit, or can receive previously prepared video. A first speed rate of the video can be changed by modifying a frame in the video data to create modified video data at a modified speed rate. This allows for continuous recording and/or displaying of video at different speed rates without altering operations or settings. The interface can include time guidelines associated with selectable speed rates, to display which speed rate setting is near a touching finger or pointing device. The guidelines can be activated automatically or can change color, shape, intensity or other property based on finger location, and can aid in object positioning in a field-of-view.
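One common way to change a speed rate by modifying frames, as the abstract describes, is dropping frames to speed playback up and duplicating frames to slow it down. The function below is a simplified sketch under that assumption; real implementations would interpolate rather than naively drop or repeat.

```python
# Sketch: frame-level speed modification.
# factor > 1 speeds up (keep every Nth frame);
# factor < 1 slows down (repeat each frame).

def apply_speed(frames, factor):
    if factor >= 1:
        step = int(factor)
        return frames[::step]
    repeat = int(round(1 / factor))
    return [f for f in frames for _ in range(repeat)]

clip = list(range(8))        # 8 source frames, stand-ins for image data
apply_speed(clip, 2)         # 2x speed: [0, 2, 4, 6]
apply_speed(clip, 0.5)       # 0.5x speed: each frame doubled
```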
CONTENT-BASED TACTILE OUTPUTS
The present disclosure generally relates to content-based tactile outputs. In some embodiments, user interfaces associated with content-based tactile outputs are described. In some embodiments, user interfaces associated with end-of-content tactile outputs are described. In some embodiments, user interfaces associated with moving a user interface in response to different types of input are described. In some embodiments, user interfaces associated with adjustable item-based tactile outputs are described. In some embodiments, user interfaces associated with input velocity-based tactile outputs are described.
COMPUTER-BASED METHOD ALLOWING USER TRANSACTIONS IN AN AUGMENTED SPACE
A computer-based method that allows users to conduct transactions and interact with environments and objects in an augmented space using any generic electronic device with a display and user input. The environment can be virtually rendered or pre-captured as a 360° image or video, and allows the user to move around within it. Products, information, and navigational or communication hotspots can be accessed by the user via the electronic device. Objects such as products are displayed as 360° images and are user-interactable. There is an option for a physical kiosk located within a retail space that allows for offline (non-Internet) use.
ELECTRONIC DEVICE HAVING FOLDABLE DISPLAY AND METHOD FOR CONTROLLING SAME
An electronic device is provided. The electronic device includes a hinge allowing the electronic device to be folded or unfolded, a foldable display disposed on at least one side of the electronic device, a processor controlling an output screen of the foldable display, sensor circuitry for detecting a folding angle of the electronic device, and input circuitry for touch input, the foldable display including a first area disposed on one side and a second area disposed on the other side with respect to the hinge, and the processor providing an execution screen of an application to a target area, which is either the first area or the second area, when an execution input for the application is received in a state in which the folding angle is within a predetermined range.
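The fold-angle routing described above can be sketched as a single decision: if the measured angle falls within the predetermined range when an app-launch input arrives, the execution screen goes to the target area. The angle range and area names below are illustrative assumptions.

```python
# Sketch: route an app's execution screen based on folding angle.

ANGLE_RANGE = (60, 120)   # hypothetical "partially folded" range, degrees

def route_app_screen(folding_angle, target_area):
    """Return the display area that should show the app's execution screen."""
    lo, hi = ANGLE_RANGE
    if lo <= folding_angle <= hi:
        return target_area        # "first_area" or "second_area"
    return "full_display"         # outside the range: default behavior

route_app_screen(90, "second_area")    # -> "second_area"
route_app_screen(180, "second_area")   # -> "full_display"
```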
MOVING CONTENT BETWEEN A VIRTUAL DISPLAY AND AN EXTENDED REALITY ENVIRONMENT
Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed. Extracting content from a virtual display includes generating a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance; generating an extended reality environment via the wearable extended reality appliance including at least one additional virtual object at a second virtual distance from the wearable extended reality appliance; receiving input for causing a specific virtual object to move from the virtual display to the extended reality environment; and in response, generating a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance different from the first virtual distance and the second virtual distance.
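The extraction step above, moving a virtual object out of the virtual display and re-presenting it at a third, distinct virtual distance, can be sketched with plain dictionaries. The record shapes and distance values are simplified assumptions, not the appliance's actual data model.

```python
# Sketch: move a virtual object from the virtual display into the
# extended-reality environment at a new virtual distance.

def extract_object(virtual_display, environment, name, third_distance):
    """Remove `name` from the display's object group and present a
    version of it in the environment, at a distance distinct from both
    the display's and the environment's existing distances."""
    assert third_distance != virtual_display["distance"]
    assert third_distance != environment["distance"]
    obj = virtual_display["objects"].pop(name)
    environment["objects"][name] = {**obj, "distance": third_distance}
    return environment

display = {"distance": 1.0, "objects": {"chart": {"kind": "2d"}}}
env = {"distance": 3.0, "objects": {}}
extract_object(display, env, "chart", 2.0)
# env["objects"]["chart"]["distance"] is now 2.0
```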