Patent classifications
G06T13/00
Method for providing drawing effects by displaying a drawing output corresponding to a drawing input using a plurality of objects, and electronic device supporting the same
Various embodiments of the present invention relate to a method for displaying a stylus pen input, and an electronic device for the same, the electronic device including: a touch screen display; a wireless communication circuit; at least one processor operatively connected to the touch screen display and the wireless communication circuit; and a memory operatively connected to the at least one processor. The memory may store instructions which, when executed, cause the at least one processor to: display a user interface on the touch screen display; receive, through the user interface, a drawing input that has at least one drawing path formed with a stylus pen or a part of the user's body; and display a drawing output on the user interface. The drawing output includes: a first layer including a plurality of first objects having shapes selected along the drawing path; and a second layer including a plurality of moving second objects having the selected shapes, wherein the plurality of moving second objects can move from the drawing path in at least one selected direction.
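A minimal sketch of the two-layer output described above, assuming hypothetical names (DrawingObject, Layer, build_drawing_output): static objects are placed along the drawing path in a first layer, and moving copies that drift away from the path in a selected direction populate a second layer. This is an illustration only, not the claimed implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class DrawingObject:
    x: float
    y: float
    shape: str          # selected shape, e.g. "star" or "heart"
    vx: float = 0.0     # per-frame velocity; zero for static objects
    vy: float = 0.0

@dataclass
class Layer:
    objects: list = field(default_factory=list)

def build_drawing_output(path, shape, direction=(0.0, -1.0), speed=2.0):
    """Place static objects along the drawing path (first layer) and spawn
    moving copies that drift away from the path (second layer)."""
    static_layer, moving_layer = Layer(), Layer()
    dx, dy = direction
    for (x, y) in path:
        static_layer.objects.append(DrawingObject(x, y, shape))
        jitter = random.uniform(0.5, 1.5)   # vary speed so the effect looks organic
        moving_layer.objects.append(
            DrawingObject(x, y, shape, vx=dx * speed * jitter, vy=dy * speed * jitter))
    return static_layer, moving_layer

def step(moving_layer):
    """Advance the moving layer by one animation frame."""
    for obj in moving_layer.objects:
        obj.x += obj.vx
        obj.y += obj.vy
```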
AUTOMATED INTERVIEW APPARATUS AND METHOD USING TELECOMMUNICATION NETWORKS
Apparatus (1) for automatically conducting an interview over a telecommunication network (4) with at least one candidate party (2a, 2b, . . . 2N) for an open job position, comprising means for: selecting (S0) a candidate party; initiating (S1) a communication session between the candidate party and an automated interviewing party; monitoring (S2) the communication session by receiving an audio stream; converting (S3) language of said audio stream into text data; determining (S4), from said text data, at least first understandability quality features (UQF_A, UQF_G) and an information quality feature (IQF), said first understandability quality features being representative of at least word articulation and grammar correctness within said language, and said information quality feature being representative of a comparison of the semantic content of the audio stream with an expected content; and assessing (S5) a matching value of the candidate party.
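As an illustration of step S5, a hedged sketch of how the quality features might be combined into a matching value; the articulation and grammar scores (UQF_A, UQF_G) are assumed to come from upstream components, and the information quality feature (IQF) is approximated here as keyword coverage of the expected answer content. The function name, weights, and scoring rule are illustrative assumptions, not the patented method.

```python
def matching_value(transcript, expected_keywords,
                   articulation_score, grammar_score,
                   weights=(0.3, 0.2, 0.5)):
    """Combine understandability and information quality features into a
    single matching value in [0, 1].

    articulation_score (UQF_A) and grammar_score (UQF_G) are assumed to be
    produced upstream by the speech recognizer / grammar checker; the
    information quality feature (IQF) is approximated here as keyword
    coverage of the expected answer content.
    """
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    expected = {k.lower() for k in expected_keywords}
    iqf = len(words & expected) / len(expected) if expected else 0.0
    w_a, w_g, w_i = weights
    return w_a * articulation_score + w_g * grammar_score + w_i * iqf

# Example: a candidate answer scored against the expected content of a question.
score = matching_value(
    "I led a small team and shipped the payment service on time.",
    ["team", "shipped", "payment", "deadline"],
    articulation_score=0.9, grammar_score=0.85)
```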
STIMULATION OF BRAIN PATHWAYS AND RETINAL CELLS FOR VISION TRAINING
A vision stimulation platform is designed for stimulation of brain pathways and retinal cells for vision training and enhancement. Modules of eye exercises are provided to a user, where each module includes an ordered sequence of eye exercises to be performed by the user. The eye exercises comprise one or more screens showing an animated display for the user to view for a time period less than a threshold time (e.g., 10 seconds). The animated display has a color, movement, or pattern designed to stimulate a specific visual pathway of the brain or the retina of the user, and the set of displays is designed to achieve a purpose (e.g., eye relaxation, vision precision, stroke treatment, etc.). At least one of the eye exercises comprises an interactive portion for the user to interact with one or more items on the screen to test the motor cortex of the user.
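A hedged data-structure sketch of how such modules might be organized (all class and field names are assumptions, not taken from the patent): a module carries a purpose and a viewing-time threshold, and holds an ordered sequence of exercises whose animated displays each specify a color, movement, and pattern.

```python
from dataclasses import dataclass, field

@dataclass
class AnimatedDisplay:
    color: str            # e.g. "red" to target a specific visual pathway
    movement: str         # e.g. "drift-left", "expanding-rings"
    pattern: str          # e.g. "grating", "checkerboard"
    duration_s: float     # must stay below the module's threshold time

@dataclass
class EyeExercise:
    displays: list              # ordered screens shown to the user
    interactive: bool = False   # whether the user taps/drags items on screen

@dataclass
class Module:
    purpose: str                 # e.g. "eye relaxation", "stroke treatment"
    threshold_s: float = 10.0    # maximum viewing time per display
    exercises: list = field(default_factory=list)

    def validate(self):
        """Check that every display respects the viewing-time threshold."""
        return all(d.duration_s < self.threshold_s
                   for ex in self.exercises for d in ex.displays)
```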
USER INTERFACES RELATED TO TIME
The present disclosure generally relates to methods and user interfaces for managing watch face user interfaces. In some embodiments, methods and user interfaces for managing watch faces based on depth data of a previously captured media item are described. In some embodiments, methods and user interfaces for managing clock faces based on geographic data are described. In some embodiments, methods and user interfaces for managing clock faces based on state information of a computer system are described. In some embodiments, methods and user interfaces related to the management of time are described. In some embodiments, methods and user interfaces for editing user interfaces based on depth data of a previously captured media item are described.
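As one hedged illustration of "watch faces based on depth data of a previously captured media item", the sketch below composites a clock element between a photo's background and its depth-segmented foreground subject; the layering order, function name, and NumPy representation are assumptions for illustration only, not the disclosed method.

```python
import numpy as np

def composite_watch_face(background, subject_mask, clock_layer):
    """Layer a clock between a photo's background and its depth-segmented
    foreground subject.  background and clock_layer are H x W x 4 RGBA images
    in [0, 1]; subject_mask is an H x W alpha map derived from depth data.
    """
    out = background.copy()
    # Draw the clock over the background ...
    a = clock_layer[..., 3:4]
    out[..., :3] = clock_layer[..., :3] * a + out[..., :3] * (1 - a)
    # ... then bring the depth-segmented subject back in front of the clock.
    m = subject_mask[..., None]
    out[..., :3] = background[..., :3] * m + out[..., :3] * (1 - m)
    return out
```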
Virtual object display method and apparatus, electronic device, and storage medium
The present disclosure discloses a display method and apparatus for a virtual object, an electronic device, and a storage medium, and is related to the field of computer technologies. The method includes: obtaining a plurality of animation frames corresponding to each of a plurality of virtual objects and a weight of each animation frame; blending a plurality of animation frames corresponding to the plurality of virtual objects in parallel through an image processor according to the weight of each animation frame, to obtain target position and pose data of each bone in bone models of the plurality of virtual objects; and displaying the plurality of virtual objects in a graphical user interface according to the target position and pose data of each bone in the bone models of the plurality of virtual objects.
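A hedged sketch of the weighted blending step, with NumPy standing in for the image processor (the patent performs this blending in parallel on that processor); the pose encoding (translation plus quaternion) and the normalize-after-blend treatment of rotations are illustrative assumptions.

```python
import numpy as np

def blend_animation_frames(frames, weights):
    """Blend several animation frames into target per-bone pose data.

    frames:  array of shape (num_objects, num_frames, num_bones, 7), where the
             last axis is (tx, ty, tz, qx, qy, qz, qw).
    weights: array of shape (num_objects, num_frames), one weight per frame.

    All objects and bones are blended in one batched operation.
    """
    w = weights / weights.sum(axis=1, keepdims=True)   # normalize per object
    blended = np.einsum("of,ofbc->obc", w, frames)     # weighted sum over frames
    # Re-normalize the quaternion part so the blended rotations stay unit length.
    quat = blended[..., 3:]
    blended[..., 3:] = quat / np.linalg.norm(quat, axis=-1, keepdims=True)
    return blended   # shape (num_objects, num_bones, 7): target pose per bone

# Example: 2 virtual objects, 3 animation frames each, 4 bones per bone model.
frames = np.random.rand(2, 3, 4, 7)
weights = np.array([[0.5, 0.3, 0.2], [0.2, 0.2, 0.6]])
poses = blend_animation_frames(frames, weights)
```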
Mission driven virtual character for user interaction
An augmented reality (AR) display device can display a virtual assistant character that interacts with the user of the AR device. The virtual assistant may be represented by a robot (or other) avatar that assists the user with contextual objects and suggestions depending on what virtual content the user is interacting with. Animated images may be displayed above the robot's head to display its intents to the user. For example, the robot can run up to a menu and suggest an action and show the animated images. The robot can materialize virtual objects that appear on its hands. The user can remove such an object from the robot's hands and place it in the environment. If the user does not interact with the object, the robot can dematerialize it. The robot can rotate its head to keep looking at the user and/or an object that the user has picked up.
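A hedged sketch of the materialize/dematerialize behavior described above, expressed as a small state machine: the assistant offers an object on its hands, the user either takes and places it or ignores it past a timeout, in which case it is dematerialized. The class names, timeout value, and update loop are assumptions for illustration, not the patented implementation.

```python
from enum import Enum, auto

class ObjectState(Enum):
    OFFERED = auto()         # materialized on the robot's hands
    PLACED = auto()          # user took it and placed it in the environment
    DEMATERIALIZED = auto()  # user ignored it, so the robot removed it

class OfferedObject:
    """Tracks a virtual object the assistant has materialized for the user."""
    def __init__(self, name, timeout_s=8.0):
        self.name = name
        self.timeout_s = timeout_s   # illustrative timeout, not from the patent
        self.age_s = 0.0
        self.state = ObjectState.OFFERED

    def update(self, dt, user_grabbed=False):
        """Advance the offer by dt seconds; return the resulting state."""
        if self.state is not ObjectState.OFFERED:
            return self.state
        if user_grabbed:
            self.state = ObjectState.PLACED
        else:
            self.age_s += dt
            if self.age_s >= self.timeout_s:
                self.state = ObjectState.DEMATERIALIZED
        return self.state
```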