G06V40/28

SYSTEMS AND METHODS FOR MACHINE LEARNING-INFORMED AUTOMATED RECORDING OF TIME ACTIVITIES WITH AN AUTOMATED ELECTRONIC TIME RECORDING SYSTEM OR SERVICE
20230237439 · 2023-07-27

A system and method for machine learning-based automated electronic time recording for personnel includes: identifying, via a scene-capturing device, a representation of a time recording space; identifying a body having a time recording pose within the time recording space based on an assessment of the representation; extracting a plurality of distinct features from the representation based on identifying the body having the time recording pose; executing automated user recognition based on the extracted features; executing automated time recording recognition based on the extracted features; and executing automated electronic time recording, via a time recording application, based on the automated user recognition and the automated time recording recognition.
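
By way of illustration only, a minimal Python sketch of the claimed pipeline might look as follows; the component interfaces (pose detector, feature extractor, classifiers, time log) are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch of the described pipeline: detect a "time recording pose"
# in a captured scene, then run user recognition and clock-in/out logic.
# All component names and methods here are hypothetical assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TimeRecord:
    user_id: str
    action: str          # e.g. "clock_in" or "clock_out"
    timestamp: datetime

def record_time(frame, pose_detector, feature_extractor,
                user_classifier, action_classifier, time_log):
    """One pass over a captured frame of the time recording space."""
    body = pose_detector.find_time_recording_pose(frame)
    if body is None:
        return None                      # no qualifying pose in the scene
    features = feature_extractor.extract(frame, body)   # distinct features
    user_id = user_classifier.identify(features)        # user recognition
    action = action_classifier.classify(features)       # recording recognition
    record = TimeRecord(user_id, action, datetime.utcnow())
    time_log.append(record)              # automated electronic time record
    return record
```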

ENHANCED ANIMATION GENERATION BASED ON MOTION MATCHING USING LOCAL BONE PHASES

Systems and methods are provided for enhanced animation generation based on motion matching using local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, the information including local bone phases that represent phase information associated with contacts of a plurality of rigid bodies of an in-game character with an in-game environment. The method further includes executing a local motion matching process for each of the local bone phases and generating a second pose of the character model, for a second frame of the electronic game, based on the plurality of matched local poses.
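
As a hedged sketch of per-bone ("local") motion matching, the following assumes a simple database layout, a sin/cos phase encoding, and linear blending; none of these specifics come from the abstract itself.

```python
# For each bone, search a motion database using that bone's phase feature,
# then assemble the matched local poses into the next frame's pose.
import numpy as np

def local_motion_matching(local_phases, databases, prev_pose, blend=0.5):
    """
    local_phases: dict bone_name -> 2D phase vector (sin/cos encoding)
    databases:    dict bone_name -> (phase_keys [N,2], local_poses [N,D])
    prev_pose:    dict bone_name -> current local pose vector [D]
    """
    next_pose = {}
    for bone, phase in local_phases.items():
        keys, poses = databases[bone]
        # nearest neighbour in phase space for this bone only
        idx = np.argmin(np.linalg.norm(keys - phase, axis=1))
        matched = poses[idx]
        # blend toward the matched local pose to avoid popping
        next_pose[bone] = (1.0 - blend) * prev_pose[bone] + blend * matched
    return next_pose
```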

MOBILE TERMINAL
20230007118 · 2023-01-05

A mobile terminal comprises: a body frame that is expandable in a first direction and shrinkable in a second direction; a flexible display in which the area of a display unit positioned on the front surface of the body frame is expanded according to the expansion of the body frame; a driving unit that changes the size of the body frame; a sensing unit that senses a user command; and a control unit that controls the driving unit to expand or shrink the body frame on the basis of the user command sensed by the sensing unit. Thus, the screen of the display unit positioned on the front surface can be expanded through size adjustment. Because a part of the screen always faces outward, a separate secondary display unit is not required, and the screen may be expanded step by step as needed.
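
The control logic might be sketched as follows; the step size, width limits, and the DrivingUnit interface are purely illustrative assumptions.

```python
# Illustrative only: a control unit that expands or shrinks the body frame
# step by step in response to a sensed user command.
class DisplayController:
    def __init__(self, driving_unit, min_width_mm=70, max_width_mm=140,
                 step_mm=10):
        self.drive = driving_unit        # hypothetical driving-unit interface
        self.width = min_width_mm        # current body frame width
        self.min, self.max, self.step = min_width_mm, max_width_mm, step_mm

    def on_user_command(self, command):
        """command: 'expand' or 'shrink', as reported by the sensing unit."""
        if command == "expand":
            target = min(self.width + self.step, self.max)
        elif command == "shrink":
            target = max(self.width - self.step, self.min)
        else:
            return self.width
        self.drive.move_to(target)       # driving unit resizes the frame
        self.width = target              # display area follows the frame
        return self.width
```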

MOVING CONTENT BETWEEN A VIRTUAL DISPLAY AND AN EXTENDED REALITY ENVIRONMENT

Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed. Extracting content from a virtual display includes generating a virtual display via a wearable extended reality appliance, wherein the virtual display presents a group of virtual objects and is located at a first virtual distance from the wearable extended reality appliance; generating an extended reality environment via the wearable extended reality appliance including at least one additional virtual object at a second virtual distance from the wearable extended reality appliance; receiving input for causing a specific virtual object to move from the virtual display to the extended reality environment; and in response, generating a presentation of a version of the specific virtual object in the extended reality environment at a third virtual distance different from the first virtual distance and the second virtual distance.
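
A minimal sketch of the scene bookkeeping, assuming a simple model in which each virtual object carries its own virtual distance from the appliance; the class names and data layout are hypothetical.

```python
# Moving a virtual object off the virtual display (first virtual distance)
# into the XR environment at a third, distinct virtual distance.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    name: str
    distance_m: float            # virtual distance from the appliance

@dataclass
class Scene:
    display_distance_m: float                    # first virtual distance
    env_distance_m: float                        # second virtual distance
    on_display: list = field(default_factory=list)
    in_environment: list = field(default_factory=list)

def extract_to_environment(scene, obj, third_distance_m):
    """Move obj from the virtual display into the XR environment."""
    assert third_distance_m not in (scene.display_distance_m,
                                    scene.env_distance_m)
    scene.on_display.remove(obj)
    moved = VirtualObject(obj.name, third_distance_m)   # new presentation
    scene.in_environment.append(moved)
    return moved
```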

Recognition of activity in a video image sequence using depth information
11568682 · 2023-01-31

Techniques are provided for recognition of activity in a sequence of video image frames that include depth information. A methodology embodying the techniques includes segmenting each of the received image frames into multiple windows and generating spatio-temporal image cells from groupings of windows drawn from a selected sub-sequence of the frames. The method also includes calculating a four-dimensional (4D) optical flow vector for each of the pixels of each of the image cells and calculating a three-dimensional (3D) angular representation from each of the optical flow vectors. The method further includes generating a classification feature for each of the image cells based on a histogram of the 3D angular representations of the pixels in that image cell. The classification features are then provided to a recognition classifier configured to recognize the type of activity depicted in the video sequence based on the generated classification features.
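
The per-cell feature computation might be sketched as follows; the hyperspherical parameterization and bin count are assumptions, since the abstract only specifies a 3D angular representation and a histogram.

```python
# Convert each pixel's 4D optical-flow vector into three hyperspherical
# angles, then histogram the angles over a spatio-temporal cell.
import numpy as np

def angles_4d(flow):
    """flow: [..., 4] array of 4D optical flow vectors -> [..., 3] angles."""
    v1, v2, v3, v4 = np.moveaxis(flow, -1, 0)
    r = np.linalg.norm(flow, axis=-1) + 1e-9
    theta1 = np.arccos(np.clip(v1 / r, -1.0, 1.0))
    r234 = np.sqrt(v2**2 + v3**2 + v4**2) + 1e-9
    theta2 = np.arccos(np.clip(v2 / r234, -1.0, 1.0))
    theta3 = np.arctan2(v4, v3)
    return np.stack([theta1, theta2, theta3], axis=-1)

def cell_feature(cell_flow, bins=8):
    """Histogram the 3 angles of every pixel in one image cell."""
    ang = angles_4d(cell_flow.reshape(-1, 4))
    hists = [np.histogram(ang[:, i], bins=bins,
                          range=(-np.pi, np.pi))[0] for i in range(3)]
    feat = np.concatenate(hists).astype(np.float64)
    return feat / (feat.sum() + 1e-9)     # normalized classification feature
```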

Real-time hand modeling and tracking using convolution models

Technologies are provided herein for modeling and tracking physical objects, such as human hands, within the field of view of a depth sensor. A sphere-mesh model of the physical object can be created and used to track the physical object in real time. The sphere-mesh model comprises an explicit skeletal mesh and an implicit convolution surface generated based on the skeletal mesh. The skeletal mesh parameterizes the convolution surface, and distances between the sphere-mesh model and points in data frames received from the depth sensor can be efficiently determined using the skeletal mesh. The sphere-mesh model can be automatically calibrated by dynamically adjusting the positions and associated radii of vertices in the skeletal mesh to fit the convolution surface to a particular physical object.
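
The central geometric query can be illustrated with a capsule-style approximation, in which the surface around each skeletal edge is swept by linearly interpolated sphere radii; treating the convolution surface this way is an assumption made for illustration.

```python
# Signed distance from a depth-sensor point to a sphere-mesh edge, and the
# minimum over all edges as a point-to-model distance for tracking.
import numpy as np

def point_to_capsule(p, a, b, ra, rb):
    """Distance from point p to the surface swept by spheres of radius
    lerp(ra, rb, t) along segment a->b (one sphere-mesh edge)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    center = a + t * ab                 # closest skeleton point
    radius = (1.0 - t) * ra + t * rb    # interpolated sphere radius
    return np.linalg.norm(p - center) - radius

def point_to_model(p, vertices, radii, edges):
    """Minimum distance from p to any edge of the sphere-mesh model."""
    return min(point_to_capsule(p, vertices[i], vertices[j],
                                radii[i], radii[j]) for i, j in edges)
```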

Predictive information for free space gesture control and communication

The technology disclosed relates to simplifying the updating of a predictive model by clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, matching the clusters of points to segments of the predictive model, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in 3D sensory space by accessing first and second positions of a segment of a predictive model of the control object, where motion between the first position and the second position was at least partially occluded from observation.
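
A minimal sketch of the clustering step, assuming precomputed unit normals and an adjacency list; the angular threshold and flood-fill strategy are illustrative choices, not the disclosed technique's specifics.

```python
# Grow clusters of observed points whose surface normals agree within an
# angular threshold, restricted to adjacent points.
import numpy as np

def cluster_by_normals(points, normals, adjacency, max_angle_deg=15.0):
    """normals: [N,3] unit vectors; adjacency: list of neighbor index lists.
    Returns a cluster label per point; adjacent points join a cluster when
    their normals differ by less than max_angle_deg."""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        stack = [seed]
        while stack:                      # flood fill over the adjacency
            i = stack.pop()
            for j in adjacency[i]:
                if labels[j] == -1 and np.dot(normals[i],
                                              normals[j]) > cos_thresh:
                    labels[j] = next_label
                    stack.append(j)
        next_label += 1
    return labels
```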

Reinforcement learning-based remote control device and method for an unmanned aerial vehicle

A device and method for remotely controlling an unmanned aerial vehicle based on reinforcement learning are disclosed. An embodiment provides a device that includes a processor and a memory connected to the processor. The memory includes program instructions executable by the processor to determine an inclination direction corresponding to the user's hand pose, the movement direction of the hand, and the angle in the inclination direction, based on sensing data associated with the pose or movement of the hand acquired by at least one sensor, and to determine one of a movement direction, a movement speed, a mode change, a figural trajectory, and a scale of the figural trajectory of the unmanned aerial vehicle according to the determined inclination direction, movement direction, and angle.
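
An illustrative, non-learned version of the hand-to-command mapping is sketched below; in the disclosed device this mapping is informed by reinforcement learning, and all thresholds here are assumptions.

```python
# Map sensed hand inclination (roll/pitch, degrees) to a UAV velocity
# command: inclination direction picks the movement direction, and the
# inclination angle scales the speed.
def hand_to_command(roll_deg, pitch_deg, max_speed=5.0, dead_zone_deg=5.0):
    tilt = max(abs(roll_deg), abs(pitch_deg))
    if tilt < dead_zone_deg:
        return {"direction": "hover", "speed_mps": 0.0}
    if abs(pitch_deg) >= abs(roll_deg):          # dominant axis wins
        direction = "forward" if pitch_deg > 0 else "back"
    else:
        direction = "right" if roll_deg > 0 else "left"
    speed = max_speed * min(tilt / 45.0, 1.0)    # angle scales the speed
    return {"direction": direction, "speed_mps": round(speed, 2)}
```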

Electronic device for performing payment and operation method therefor

Disclosed is an electronic device for processing a touch input. The electronic device may comprise: a touch screen; a biometric sensor disposed to overlap at least a part of the touch screen; and a processor that uses the biometric sensor to acquire biometric information of a user from an input relating to an object displayed on the touch screen, receives a payment command associated with a payment function for the object, and performs the payment function for a product corresponding to the object by using the biometric information according to the payment command. Various other embodiments may be provided.
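
The control flow could be sketched roughly as follows; the sensor and payment-service interfaces are hypothetical.

```python
# The biometric information is captured from the touch on the displayed
# object itself (in-display sensor), so no separate authentication step
# is needed before the payment function runs.
def pay_for_object(touch_event, displayed_object, sensor, payments):
    """Handle a touch on an on-screen product and complete payment."""
    if not displayed_object.contains(touch_event.position):
        return None                              # touch missed the object
    biometric = sensor.capture(touch_event)      # in-display biometric read
    if not payments.verify_user(biometric):
        raise PermissionError("biometric verification failed")
    product = displayed_object.product
    return payments.charge(product, credential=biometric)
```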

DATA PROCESSING SYSTEM WITH MACHINE LEARNING ENGINE TO PROVIDE OUTPUT GENERATING FUNCTIONS

Methods, apparatuses, systems, and computer-readable media for identifying and executing one or more interactive condition evaluation tests and collecting and analyzing user behavior data to generate an output are provided. In some examples, user information may be received and one or more interactive condition evaluation tests may be identified. An instruction may be transmitted to a computing device of a user and executed on the computing device to enable functionality of one or more sensors that may be used in the identified tests. Upon initiating a test, data may be collected from the one or more sensors. The collected sensor data may be transmitted to the system and processed using one or more machine learning datasets. Additionally, user behavior data may be collected and processed using one or more machine learning datasets. The sensor data, the user behavior data, and other data may be used together to generate an output.
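
A hedged end-to-end sketch of the described flow, with every interface (test registry, device agent, models) assumed for illustration rather than taken from the disclosure.

```python
# Identify the evaluation tests for a user, enable the needed sensors on
# the user's device, then score sensor data and behavior data with
# separate models and combine the scores into one generated output.
def run_evaluation(user_info, test_registry, device, sensor_model,
                   behavior_model):
    tests = test_registry.tests_for(user_info)   # identify relevant tests
    outputs = []
    for test in tests:
        device.enable_sensors(test.required_sensors)   # remote instruction
        sensor_data = device.collect(test.required_sensors,
                                     duration_s=test.duration_s)
        behavior_data = device.collect_behavior(duration_s=test.duration_s)
        sensor_score = sensor_model.predict(sensor_data)
        behavior_score = behavior_model.predict(behavior_data)
        # combine both analyses into the generated output for this test
        outputs.append({"test": test.name,
                        "score": 0.5 * sensor_score + 0.5 * behavior_score})
    return outputs
```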