Patent classifications
G06F2203/04808
Systems and methods for interactive image caricaturing by an electronic device
A method for interactive image caricaturing by an electronic device is described. The method includes detecting at least one feature location of an image. The method further includes generating, based on the at least one feature location, an image mesh that comprises a grid of at least one horizontal line and at least one vertical line. The method additionally includes obtaining at least one gesture input. The method also includes determining at least one caricature action based on the at least one gesture input. The method further includes generating a caricature image based on the image mesh, the at least one caricature action and the image.
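The mesh-plus-action pipeline above can be sketched as follows. This is a minimal illustration, not the patented method: the uniform grid, the action format (a map from vertex index to displacement), and all function names are assumptions, and a real implementation would anchor the grid on the detected feature locations rather than ignore them.

```python
def generate_image_mesh(feature_locations, width, height, rows=8, cols=8):
    """Build a grid of horizontal/vertical line intersections (mesh vertices).

    Simplification: a uniform grid over the image; the abstract's mesh is
    generated based on the detected feature locations instead.
    """
    return [[(c * width / cols, r * height / rows)
             for c in range(cols + 1)] for r in range(rows + 1)]


def apply_caricature_action(mesh, action):
    """Displace mesh vertices; `action` maps a (row, col) vertex index to an
    (dx, dy) shift derived from the gesture input (format is hypothetical)."""
    out = [row[:] for row in mesh]
    for (r, c), (dx, dy) in action.items():
        x, y = out[r][c]
        out[r][c] = (x + dx, y + dy)
    return out


# Two assumed eye locations; the action pulls the central vertex right and up.
mesh = generate_image_mesh([(40, 60), (80, 60)], width=160, height=160)
warped = apply_caricature_action(mesh, {(4, 4): (5.0, -3.0)})
```

The caricature image would then be rendered by warping the source image so each pixel follows its enclosing mesh cell from `mesh` to `warped`.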
Interactive environment with virtual environment space scanning
An interactive environment image may be displayed in a virtual environment space, and interaction with the interactive environment image may be detected within a three-dimensional space that corresponds to the virtual environment space. The interactive environment image may be a three-dimensional image, or it may be two-dimensional. An image is displayed to provide a visual representation of an interactive environment image including one or more virtual objects, which may be spatially positioned. User interaction with the visual representation in the virtual environment space may be detected and, in response to the user interaction, the interactive environment image may be changed.
Visual manipulation of a digital object
Visual manipulation of a digital object, such as three-dimensional digital object manipulation on a two-dimensional display surface, is described that avoids explicitly specifying manipulation about each of the three axes one at a time. In an example, a multipoint gesture applied to a digital object is received on a display surface, and an axis of manipulation is generated based on a position of the multipoint gesture relative to the digital object. A manipulation gesture is then recognized, indicative of a manipulation of the digital object relative to the axis of manipulation, and a visual manipulation of the digital object about the axis of manipulation is generated based on the manipulation gesture.
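The two-step flow (derive an axis from the multipoint gesture, then rotate about that axis) can be sketched as below. This is an illustrative assumption, not the patented algorithm: here the axis is simply the in-plane direction between the two touch points, and the rotation is a standard Rodrigues rotation of a 3D point about that unit axis.

```python
import math


def axis_from_touches(p1, p2):
    """Derive a manipulation axis from two touch points on the display:
    the unit direction between them, lying in the display plane (z = 0).
    This mapping is an assumption for illustration."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n, 0.0)


def rotate_about_axis(p, axis, angle):
    """Rodrigues rotation of a 3D point `p` about a unit `axis` by `angle`
    radians, standing in for the visual manipulation about the axis."""
    ax, ay, az = axis
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    dot = ax * x + ay * y + az * z
    # Cross product axis x p.
    cx, cy, cz = (ay * z - az * y, az * x - ax * z, ax * y - ay * x)
    return (x * c + cx * s + ax * dot * (1 - c),
            y * c + cy * s + ay * dot * (1 - c),
            z * c + cz * s + az * dot * (1 - c))


axis = axis_from_touches((0.0, 0.0), (3.0, 4.0))   # in-plane axis (0.6, 0.8, 0)
rotated = rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

A subsequent drag gesture would supply the `angle`, e.g. proportional to drag distance perpendicular to the axis.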
USER INTERFACES AND ASSOCIATED SYSTEMS AND PROCESSES FOR CONTROLLING PLAYBACK OF CONTENT
In some embodiments, an electronic device presents a user interface for controlling the playback of content items. In some embodiments, the user interface includes a plurality of selectable options for controlling playback of a respective content item overlaid on the content item. In some embodiments, an electronic device presents a user interface for browsing and switching between content items available for playback.
USER INTERFACES FOR MAPS AND NAVIGATION
In some embodiments, an electronic device presents navigation routes from various perspectives. In some embodiments, an electronic device modifies display of representations of (e.g., physical) objects in the vicinity of a navigation route while presenting navigation directions. In some embodiments, an electronic device modifies display of portions of a navigation route that are occluded by representations of (e.g., physical) objects in a map. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps in response to requests to search for (e.g., physical) objects.
Image forming apparatus and numerical value counting method
To provide an image forming apparatus having excellent operability for inputting a numerical value, an image forming apparatus (1) includes a touch panel (20), an angle change calculating portion (100), and a numerical value counting portion (130). The touch panel (20) is configured to detect touches of a plurality of fingers. The angle change calculating portion (100) is configured to, when it is detected that a first finger and a second finger are touching the touch panel (20), calculate an angle change of the second finger with the first finger as a fulcrum. The numerical value counting portion (130) is configured to count a numerical value that is input, in response to the angle change calculated by the angle change calculating portion (100).
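The fulcrum-based angle calculation can be sketched as below. The `atan2`-based angle and the 15-degree counting step are assumptions for illustration; the abstract does not specify either.

```python
import math


def angle_change(fulcrum, prev, cur):
    """Angle (radians) swept by the second finger around the first-finger
    fulcrum, between its previous and current touch positions."""
    a0 = math.atan2(prev[1] - fulcrum[1], prev[0] - fulcrum[0])
    a1 = math.atan2(cur[1] - fulcrum[1], cur[0] - fulcrum[0])
    d = a1 - a0
    # Wrap to (-pi, pi] so crossing the +/-pi seam counts correctly.
    return (d + math.pi) % (2 * math.pi) - math.pi


def count_value(value, delta, step=math.radians(15)):
    """Increment/decrement the counted numerical value once per `step`
    radians swept (the 15-degree step is an assumed parameter)."""
    return value + int(delta / step)


# Second finger sweeps a quarter turn counter-clockwise around the fulcrum.
sweep = angle_change((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
new_value = count_value(10, math.radians(31))  # 31 degrees -> +2 counts
```

A positive (counter-clockwise) sweep increments the value and a negative sweep decrements it, which matches the dial-like interaction the abstract describes.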
Method of handling aircraft cargo from a portable panel
Disclosed is a method of operating a Portable Cargo Panel (PCP) for an aircraft, including: detecting a gesture on a display of the PCP; determining that the gesture is a command to view on the display a first cargo compartment; securing a wireless connection with a first control panel in the first cargo compartment; receiving, from the first control panel, a health state of each of a plurality of Cargo Handling Units (CHUs) therein; displaying, on the display, the first cargo compartment with the plurality of CHUs and the health state of each of the plurality of CHUs; and controlling one or more of the plurality of CHUs in the first cargo compartment by transmitting, to the first control panel, a command to: run a diagnostic test against the one or more of the plurality of CHUs; or control the plurality of CHUs to move a Unit Load Device (ULD) into, within or out of the first cargo compartment.
Machine Translation Method and Electronic Device
A machine translation method includes: an electronic device displays a first user interface, where source text content is displayed in the first user interface; after detecting a user operation that triggers scrolling screenshot taking, the electronic device automatically starts to take a scrolling screenshot; the electronic device obtains a first picture through the scrolling screenshot taking; the electronic device obtains translation content corresponding to the source text content displayed on the first picture; and the electronic device automatically displays a second user interface, where a part or all of the translation content is displayed in the second user interface.
METHODS, DEVICES, AND COMPUTER-READABLE STORAGE MEDIA FOR PERFORMING A FUNCTION BASED ON USER INPUT
There is described a method performed by a computing device having first and second touch-sensitive user interfaces. According to this method, when a user input is applied to one of the first and second user interfaces, the computing device detects the user input and determines a force applied by the user input and a type of the user input. The computing device then determines whether to perform a function such as whether to display a virtual trackpad or a virtual keyboard, based on the force, the type of the user input, and a determination whether the user input is applied to a selected one of the first and second user interfaces.
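The three-factor decision (force, input type, which surface) can be sketched as a simple dispatch function. The specific mapping below, e.g. a two-finger press above a force threshold on the lower surface summoning the virtual trackpad, is an illustrative assumption; the abstract only says the decision depends on these three inputs.

```python
def choose_function(force, input_type, surface, threshold=1.0):
    """Decide which function (if any) to perform from the force of the user
    input, its type, and which of the two touch-sensitive surfaces received
    it. Mapping and threshold value are assumptions for illustration."""
    # Only sufficiently firm inputs on the selected (here: lower) surface
    # trigger a function at all.
    if surface != "lower" or force < threshold:
        return None
    if input_type == "two_finger_press":
        return "virtual_trackpad"
    if input_type == "multi_finger_tap":
        return "virtual_keyboard"
    return None


result = choose_function(1.5, "two_finger_press", "lower")
```

Gating on force as well as type helps reject incidental contact, such as palms resting on the lower surface while typing.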
ELECTRONIC DEVICE HAVING EXTENDABLE DISPLAY AND METHOD FOR PROVIDING CONTENT THEREOF
An electronic device according to various embodiments may include: a housing, a flexible display having at least a partial area configured to be drawn out from the housing so that a size of a visible area of the flexible display can be changed, a sensor configured to measure a length of the flexible display, a memory, and a processor operatively connected to the memory, the sensor, and the flexible display. The processor may be configured to control the electronic device to: display a first content including at least one image in a first area including a partial area of the flexible display in response to an input; measure a length of the flexible display drawn out from the housing using the sensor; based on the length of the flexible display measured using the sensor being a specified first length, determine that the electronic device is in a non-expanded state; and based on the length of the flexible display measured using the sensor being a second length longer than the first length, determine that the electronic device is in an expanded state. In addition, the processor may be configured to control the electronic device to: based on the electronic device being in an expanded state, display the first content in the first area; display, in a second area including a partial area of the flexible display drawn out from the housing to be visible, a user interface corresponding to at least one application and/or second content obtained by converting the first content into a form corresponding to a function provided by the at least one application; and transmit, in response to an input, data corresponding to the first content to a server connected to the at least one application.
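The length-based state determination can be sketched as below. The tolerance band and the `"intermediate"` fallback for lengths between the two reference values are assumptions; the abstract only defines the two reference lengths and the two states.

```python
def display_state(measured_length, first_length, second_length, tol=1.0):
    """Classify the flexible display from the sensor-measured drawn-out
    length: near the specified first length -> non-expanded; at or beyond
    the longer second length -> expanded. Tolerance is an assumed value."""
    if abs(measured_length - first_length) <= tol:
        return "non_expanded"
    if measured_length >= second_length - tol:
        return "expanded"
    return "intermediate"  # between the two reference lengths (assumption)


# Assumed reference lengths in millimetres.
state = display_state(148.0, first_length=148.0, second_length=210.0)
```

In the expanded state the device would then lay out the first content in the first area and the converted second content in the newly visible second area.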