Patent classifications
G06F2203/04806
Partitioning agricultural fields for annotation
Some implementations herein relate to a graphical user interface (GUI) that facilitates dynamically partitioning agricultural fields into clusters on an individual agricultural-field basis using agricultural features. A map of a geographic area containing a plurality of agricultural fields may be rendered as part of a GUI. The agricultural fields may be partitioned into a first set of clusters based on a first granularity value and agricultural features of individual agricultural fields. The individual agricultural fields may be visually annotated in the GUI to convey the first set of clusters of similar agricultural fields. Upon receipt of a second granularity value different from the first granularity value, the agricultural fields may be partitioned into a second set of clusters of similar agricultural fields. The map of the geographic area may be updated so that individual agricultural fields are visually annotated to convey the second set of clusters.
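The granularity-driven partitioning described above can be sketched as follows. This is a hypothetical illustration, not the patented method: `partition_fields`, the greedy single-link grouping, and the use of Euclidean distance over feature vectors are all assumptions; the abstract only requires that a granularity value controls how fields cluster by similarity.

```python
# Hypothetical sketch: partition fields into clusters of similar fields, where a
# granularity value controls how similar two fields must be to share a cluster.

def partition_fields(features, granularity):
    """Greedy single-link grouping: a field joins an existing cluster when its
    feature vector lies within `granularity` (Euclidean distance) of any member."""
    clusters = []
    for field_id, vec in features.items():
        placed = False
        for cluster in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(vec, features[m])) ** 0.5 <= granularity
                   for m in cluster):
                cluster.append(field_id)
                placed = True
                break
        if not placed:
            clusters.append([field_id])
    return clusters
```

Under this sketch, a coarser (larger) granularity value yields fewer, broader clusters, and receiving a second, finer granularity value simply re-runs the partition to produce more clusters, matching the abstract's two-granularity flow.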
Electronic device properly displaying widgets in blank region when operated by one hand
An electronic device includes a display and a processor. The processor is electrically coupled to the display. The processor is configured to execute a plurality of program instructions to perform the following steps: defining a window display region and an interface display region on the display when the electronic device is operated in a one-hand operation mode; displaying a desktop display interface corresponding to the one-hand operation mode in the interface display region; and displaying a plugin in the window display region.
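The region split described above can be sketched geometrically. This is a hypothetical illustration: `one_hand_regions`, the 0.75 scale factor, and the corner-anchored layout are assumptions; the abstract only requires that a window display region (hosting widgets) and an interface display region be defined when one-hand mode is active.

```python
# Hypothetical sketch: in one-hand operation mode, the display is split into an
# interface display region (a shrunken desktop reachable by the thumb) and a
# window display region (the otherwise-blank strip) that hosts widgets.

def one_hand_regions(screen_w, screen_h, scale=0.75, hand="right"):
    """Return (interface_region, window_region) as (x, y, w, h) rectangles.
    The interface hugs the bottom corner on the operating hand's side; the
    vertical strip along the opposite edge becomes the widget region."""
    iw, ih = screen_w * scale, screen_h * scale
    ix = screen_w - iw if hand == "right" else 0
    interface = (ix, screen_h - ih, iw, ih)
    window = (0 if hand == "right" else iw, 0, screen_w - iw, screen_h)
    return interface, window
```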
Method of wearable device displaying icons, and wearable device for performing the same
A method of a wearable device displaying icons is provided. The method includes displaying a plurality of circular icons comprising a first circular icon located in a center area of a touch display in a first size and a second circular icon located outside the center area of the touch display in a second size smaller than the first size. Based on a direction of a touch input received on the touch display, the plurality of circular icons are moved such that the first circular icon is moved to a first position outside the center area of the touch display, and the second circular icon is moved from a second position outside the center area into the center area and enlarged from the second size to the first size.
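The carousel behavior above reduces to two operations: size icons by whether they occupy the center, and rotate the centered icon on a swipe. A minimal sketch, with `FIRST_SIZE`, `SECOND_SIZE`, and the wrap-around indexing as illustrative assumptions:

```python
# Hypothetical sketch: circular icons on a watch face, where the icon in the
# center area is drawn larger, and a directional swipe changes which icon is centered.

FIRST_SIZE, SECOND_SIZE = 64, 40  # assumed pixel diameters

def layout(icons, center_index):
    """Return (icon, size) pairs: the centered icon gets FIRST_SIZE, the rest SECOND_SIZE."""
    return [(icon, FIRST_SIZE if i == center_index else SECOND_SIZE)
            for i, icon in enumerate(icons)]

def swipe(center_index, direction, count):
    """Move the center to the neighbouring icon in the swipe direction (+1 or -1),
    wrapping around so any icon can be brought to the center and enlarged."""
    return (center_index + direction) % count
```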
SYSTEMS AND METHODS FOR CONCURRENT GRAPHICAL USER INTERFACE TRANSITIONS
Systems, methods, and non-transitory computer-readable media can receive a first user interaction associated with a first transition in a graphical user interface. The first transition is executed in the graphical user interface. A second user interaction associated with a second transition in the graphical user interface is received during execution of the first transition. The second transition is executed in the graphical user interface while the first transition is still executing.
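The concurrency described above can be sketched by stepping two transitions in the same frame loop rather than queueing the second behind the first. A hypothetical illustration, with `transition` and `run_concurrently` as assumed names and per-frame progress fractions as an assumed model:

```python
# Hypothetical sketch: two GUI transitions advanced concurrently, so a second
# transition received mid-way through the first runs alongside it instead of
# waiting for it to finish.

def transition(name, frames):
    """Yield one (name, progress) update per animation frame."""
    for f in range(1, frames + 1):
        yield (name, f / frames)

def run_concurrently(active):
    """Step every active transition once per frame until all complete."""
    timeline = []
    while active:
        still_running = []
        for t in active:
            step = next(t, None)
            if step is not None:
                timeline.append(step)
                still_running.append(t)
        active = still_running
    return timeline
```

The interleaved timeline shows both transitions progressing within the same frames, which is the observable effect the abstract describes.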
Digital viewfinder user interface for multiple cameras
An electronic device has multiple cameras and displays a digital viewfinder user interface for previewing visual information provided by the cameras. The multiple cameras may have different properties, such as focal lengths. When a single digital viewfinder is provided, the user interface allows zooming over a zoom range that includes the respective zoom ranges of both cameras. The zoom setting determines which camera provides visual information to the viewfinder and which camera is used to capture visual information. The user interface also allows the simultaneous display of content provided by different cameras. When two digital viewfinders are provided, the user interface allows zooming, freezing, and panning of one digital viewfinder independently of the other. The device allows storing of composite images and/or videos using both digital viewfinders and corresponding cameras.
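The single-viewfinder case above amounts to a combined zoom range plus a camera-selection rule. A minimal sketch under assumed names and example zoom ranges (the `wide`/`tele` entries and their 1x-2x / 2x-10x ranges are illustrative, not from the source):

```python
# Hypothetical sketch: a single viewfinder spanning the zoom ranges of two
# cameras, where the current zoom setting selects which camera feeds the preview.

CAMERAS = [
    {"name": "wide", "zoom_range": (1.0, 2.0)},   # assumed shorter focal length
    {"name": "tele", "zoom_range": (2.0, 10.0)},  # assumed longer focal length
]

def overall_zoom_range(cameras):
    """The viewfinder's range covers the respective ranges of both cameras."""
    return (min(c["zoom_range"][0] for c in cameras),
            max(c["zoom_range"][1] for c in cameras))

def camera_for_zoom(cameras, zoom):
    """Pick the camera whose native range contains the requested zoom setting."""
    for cam in cameras:
        lo, hi = cam["zoom_range"]
        if lo <= zoom <= hi:
            return cam["name"]
    raise ValueError("zoom outside the combined range")
```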
METHOD AND DEVICE FOR NAVIGATING IN A USER INTERFACE AND APPARATUS COMPRISING SUCH NAVIGATION
A method is provided for navigating in a display screen by way of a control surface. The method includes a measurement step that measures: a data item, termed position, relating to a position targeted on the control surface by a remote control object positioned opposite the control surface; and a data item, termed vertical distance, relating to the distance between the at least one remote control object and the control surface. A drive step then carries out, as a function of the measured vertical distance, a displacement and/or an adjustment of a parameter relating to a displacement of at least one part of a zone and/or of a symbol displayed on the display screen and chosen as a function of the targeted position.
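The drive step's "parameter relating to a displacement" can be sketched as a hover-distance-to-speed mapping. This is a hypothetical illustration: `scroll_speed`, `MAX_DISTANCE`, the millimetre units, and the linear falloff are all assumptions about one possible parameter, not the claimed method:

```python
# Hypothetical sketch: the measured vertical (hover) distance of a remote
# control object modulates a displacement parameter, e.g. the scroll speed of
# the zone chosen from the targeted position.

MAX_DISTANCE = 50.0  # assumed sensing limit, in millimetres

def scroll_speed(vertical_distance, max_speed=100.0):
    """Closer hover -> faster displacement; at the sensing limit, no movement."""
    d = min(max(vertical_distance, 0.0), MAX_DISTANCE)
    return max_speed * (1.0 - d / MAX_DISTANCE)
```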
Touch Operation Processing Method and Terminal Device
A touch operation processing method includes detecting a touch operation of a user that starts from a border of a screen display area and enters the screen display area, using the first point touched by the touch operation within the screen display area as a starting point, and performing, according to the touch operation, reduction processing on an operation interface displayed in the screen display area, such that one edge of the operation interface after the reduction processing includes the starting point.
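The geometric constraint above (one edge of the shrunken interface must pass through the swipe's starting point) can be sketched as follows. `reduce_interface`, the 0.5 scale, and the left/right anchoring rule are illustrative assumptions:

```python
# Hypothetical sketch: shrinking the operation interface so that one edge of the
# reduced interface contains the starting point of the border swipe.

def reduce_interface(screen_w, screen_h, start_x, start_y, scale=0.5):
    """Scale the full-screen interface by `scale`, anchored so the edge nearest
    the swipe's starting point stays on that point (here: left or right edge)."""
    new_w, new_h = screen_w * scale, screen_h * scale
    if start_x <= screen_w / 2:      # swipe entered from the left border
        x = start_x                  # left edge contains the starting point
    else:                            # swipe entered from the right border
        x = start_x - new_w          # right edge contains the starting point
    y = min(max(start_y - new_h / 2, 0), screen_h - new_h)
    return (x, y, new_w, new_h)
```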
INTELLIGENT INTERACTION METHOD AND DEVICE, AND STORAGE MEDIUM
Provided are an intelligent interaction device and method, and a non-transitory computer readable storage medium. The method includes: displaying, on a touch screen, a current window of a multimedia file in a playing state; displaying, in response to an instruction from a user for zooming the current window, a zoomed window of the current window at a first predetermined position of the current window, the zoomed window being smaller than the current window; and displaying, in response to an annotation operation performed by the user for the zoomed window, an annotation in the zoomed window, and updating the current window by displaying the annotation in the current window.
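Keeping the annotation synchronized between the zoomed window and the current window is essentially a coordinate-scaling problem. A minimal sketch, with `map_annotation` and the point-list representation of a stroke as illustrative assumptions:

```python
# Hypothetical sketch: an annotation drawn in the smaller zoomed window is
# replayed in the full-size current window by scaling its stroke coordinates.

def map_annotation(points, zoomed_size, current_size):
    """Map annotation points from zoomed-window coordinates to current-window
    coordinates, so both windows display the same annotation."""
    zw, zh = zoomed_size
    cw, ch = current_size
    return [(x * cw / zw, y * ch / zh) for x, y in points]
```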
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
There is provided an information processing apparatus, an information processing method, and a recording medium that enable provision of a more natural viewing experience to a user. A control unit controls display of a two-dimensional image including a plurality of objects having distance information in a three-dimensional coordinate system with a viewpoint position of a user as a reference. The control unit controls a display magnification of the two-dimensional image corresponding to a movement amount of the viewpoint position in a real space on the basis of the distance information of an object in a region of interest of the user. The present disclosure can be applied to, for example, an HMD that presents an omnidirectional image.
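One way to read the magnification control above: the same head movement should enlarge a near object more than a far one. A hypothetical sketch of such a distance-dependent magnification; `magnification` and the pinhole-style d / (d - movement) model are assumptions, not the disclosed formula:

```python
# Hypothetical sketch: display magnification of a 2D image driven by how far the
# viewpoint moves toward it, relative to the distance of the object in the
# user's region of interest (nearer objects magnify faster, for a natural feel).

def magnification(movement, object_distance):
    """Pinhole-style model: moving `movement` metres toward an object at
    `object_distance` metres scales its apparent size by d / (d - movement)."""
    remaining = max(object_distance - movement, 1e-6)  # avoid division by zero
    return object_distance / remaining
```

Under this model, a one-metre step toward an object two metres away doubles its apparent size, while the same step toward an object ten metres away barely changes it.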
MOVING CONTENT BETWEEN A VIRTUAL DISPLAY AND AN EXTENDED REALITY ENVIRONMENT
Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed. Extracting content from a virtual display includes generating a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance; generating an extended reality environment via the wearable extended reality appliance including at least one additional virtual object at a second virtual distance from the wearable extended reality appliance; receiving input for causing a specific virtual object to move from the virtual display to the extended reality environment; and in response, generating a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance different from the first virtual distance and the second virtual distance.
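The extraction step above re-homes the object at a third virtual distance distinct from both the display's and the environment's. A minimal sketch under assumptions: `extract_object` is a made-up name, the midpoint choice of third distance is illustrative (the abstract only requires it differ from the other two), and the distances are assumed unequal:

```python
# Hypothetical sketch: moving a virtual object from a virtual display at one
# distance into the extended reality environment at a new, third distance
# distinct from both the display's and the existing environment content's.

def extract_object(obj, display_distance, environment_distance):
    """Re-home the object midway between the display and the existing
    environment content (assumes the two distances differ), and mark it as
    belonging to the environment rather than the virtual display."""
    third_distance = (display_distance + environment_distance) / 2.0
    return {**obj, "distance": third_distance, "container": "environment"}
```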