Patent classifications
G06F3/01
ELECTRONIC DEVICE FOR TRACKING OBJECTS
Systems, methods, and non-transitory media are provided for tracking operations using data received from a wearable device. An example method can include determining a first position of a wearable device in a physical space; receiving, from the wearable device, position information associated with the wearable device; determining a second position of the wearable device based on the received position information; and tracking, based on the first position and the second position, a movement of the wearable device relative to an electronic device.
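The tracking step described in the abstract can be sketched as follows; the function name, coordinate convention, and sample values are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch (hypothetical names, not the patent's implementation):
# derive the wearable's relative movement from two sampled positions.

def track_movement(first_position, second_position):
    """Return the displacement vector between two 3-D positions."""
    return tuple(b - a for a, b in zip(first_position, second_position))

# The wearable moves 0.5 m along x and 0.2 m along z relative to the device.
delta = track_movement((0.0, 1.0, 0.0), (0.5, 1.0, 0.2))
```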
AUGMENTED REALITY OBJECT MANIPULATION
A processing system having at least one processor may detect a first object in a first video of a first user and detect a second object in a second video of a second user, where the first video and the second video are part of a visual communication session between the first user and the second user. The processing system may further detect a first action in the first video relative to the first object, detect a second action in the second video relative to the second object, detect a difference between the first action and the second action, and provide a notification indicative of the difference.
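The comparison-and-notify flow above can be sketched as follows; the action labels and notification format are illustrative assumptions:

```python
# Hypothetical sketch: compare the action detected in each user's video
# and produce a notification when they differ. Labels are illustrative.

def action_difference(first_action, second_action):
    """Return a notification string if the actions differ, else None."""
    if first_action != second_action:
        return f"difference: '{first_action}' vs '{second_action}'"
    return None

note = action_difference("lift object", "rotate object")
```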
VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY CONTROL METHOD AND COMPUTER-READABLE STORAGE MEDIUM
The appearance of content that guides the traveling path of a vehicle is improved. A display control ECU shifts a path line, expressed by path line information included in map information for navigation, in the vehicle transverse direction in accordance with the distance along the vehicle transverse direction between the position of the vehicle and the path line, and causes content to be displayed on a head-up display (HUD) at a corresponding position on the shifted path line.
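The lateral shift described above can be sketched as follows; the point representation and sample values are illustrative assumptions:

```python
# Illustrative sketch: shift path-line points in the transverse (y)
# direction by the lateral distance between the vehicle and the line,
# then content would be placed at positions on the shifted line.

def shift_path_line(path_points, vehicle_y):
    """path_points are (x, y) pairs; y is the vehicle transverse axis."""
    if not path_points:
        return []
    offset = vehicle_y - path_points[0][1]  # lateral vehicle-to-line distance
    return [(x, y + offset) for (x, y) in path_points]

shifted = shift_path_line([(0.0, 0.0), (10.0, 0.0)], 1.5)
```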
GESTURE-BASED REMOTE KEYLESS ENTRY FOR NFC KEYS
A method of utilizing an access device is provided. A command mapping of user input to RKE commands is utilized to identify an RKE command based on the user input to one or more motion or orientation sensors of the access device. Responsive to environmental input from one or more environmental sensors of the access device being indicative of command entry, a transceiver of the access device is activated, the RKE command is sent, and the transceiver is deactivated. Otherwise, the user input of the RKE command is ignored.
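The command-mapping and environmental gating above can be sketched as follows; the gesture names, the mapping, and the gating flag are illustrative assumptions:

```python
# Sketch under assumptions: gesture names, the command-entry flag, and
# the mapping below are illustrative, not from the patent.

COMMAND_MAP = {"shake": "UNLOCK", "double_tap": "LOCK"}

def handle_gesture(gesture, command_entry_indicated):
    """Return the RKE command to send, or None to ignore the input when
    environmental sensors do not indicate deliberate command entry."""
    command = COMMAND_MAP.get(gesture)
    if command is None or not command_entry_indicated:
        return None
    return command
```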
DISPLACED HAPTIC FEEDBACK
Displaced haptic feedback can be provided to a person providing an input on a user interface. The user interface can be operatively connected to a processor. The processor can be configured to receive an input signal from the user interface. Responsive to receiving the input signal, the processor can be further configured to cause a haptic device to be activated. The haptic device can be physically separated from the user interface. The haptic device can provide haptic feedback to a user interacting with the user interface.
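The input-to-activation flow above can be sketched as follows; the class and method names are illustrative assumptions, with a list standing in for a real actuator:

```python
# Minimal sketch (names are assumptions): an input signal from the user
# interface triggers a physically separate haptic device.

class DisplacedHaptics:
    def __init__(self):
        self.pulses = []  # stands in for driving a real remote actuator

    def on_input_signal(self, source):
        """Activate the displaced haptic device in response to UI input."""
        self.pulses.append(source)

haptics = DisplacedHaptics()
haptics.on_input_signal("touch-surface")
```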
VISUAL REALITY SHOPPING APPLICATION
This application relates to systems, methods, devices, and other techniques for providing a visual reality shopping experience to users.
HOLOGRAPHIC DISPLAY SYSTEM
A display system for a vehicle includes a display unit that is mounted to the vehicle and is selectively operable in a first mode as a holographic display and in a second mode as a mirror. Holographic images may include rear-view images obtained from a camera or computer-generated graphics. Holographic images are displayed at a virtual image plane behind the display to reduce the accommodation required of the operator's eyes.
SYSTEMS AND METHODS FOR TERMINAL CONTROL
The embodiments of the present disclosure disclose a system and method. The system may include at least one storage device configured to store computer instructions, and at least one processor in communication with the storage device. When executing the computer instructions, the at least one processor is configured to direct the system to perform operations including: obtaining a sensing signal of at least one sensing device; identifying a signal feature of the sensing signal; and determining, based on the signal feature, an operation of a target object associated with the at least one sensing device.
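The signal-feature-to-operation flow above can be sketched as follows; the feature (peak amplitude), thresholds, and operation names are illustrative assumptions:

```python
# Illustrative sketch: identify a simple feature of a sensing signal
# (peak amplitude) and map it to an operation of the target object.
# Thresholds and operation names are assumptions, not from the patent.

def determine_operation(samples):
    """Classify an operation from the peak amplitude of the signal."""
    peak = max(abs(s) for s in samples)
    if peak > 1.0:
        return "wake"
    if peak > 0.3:
        return "adjust"
    return "idle"
```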
INTERNET OF THINGS CONFIGURATION USING EYE-BASED CONTROLS
In an approach to an Internet of Things configuration using eye-based controls, one or more computer processors receive an initiation of eye control of one or more computing devices from a user. One or more computer processors identify an eye gaze direction of the user. Based on the identified eye gaze direction, one or more computer processors determine one or more target devices of the one or more computing devices. One or more computer processors determine one or more activities associated with the one or more target devices. One or more computer processors determine one or more eye control commands associated with the one or more activities. One or more computer processors display the one or more eye control commands associated with the one or more activities in the field of view of the user.
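The gaze-to-commands steps above can be sketched as follows; the registry contents and command names are illustrative assumptions:

```python
# Hypothetical sketch of the described steps: resolve a target device
# from the identified gaze direction, then collect the eye-control
# commands to display. Registry contents are illustrative assumptions.

DEVICE_REGISTRY = {
    "left": ("lamp", ["on", "off", "dim"]),
    "right": ("thermostat", ["raise", "lower"]),
}

def eye_control_commands(gaze_direction):
    """Return (target_device, commands) for the identified gaze direction."""
    return DEVICE_REGISTRY.get(gaze_direction, (None, []))
```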
ELECTRONIC DEVICE INCLUDING FLEXIBLE DISPLAY AND METHOD FOR CONTROLLING SAME
According to an embodiment, an electronic device may include: a housing; a flexible display, at least a portion of which is visible to the outside through the housing; and at least one processor operably connected to the flexible display. In response to a sliding operation, performed based on an input while a first portion of the flexible display is visible to the outside, that makes visible to the outside a second portion including at least part of the first portion, the at least one processor may be configured to: obtain context information in a state in which the second portion is visible; identify one or more workspaces based on the context information; control the display to display a list of the one or more workspaces in a portion of the second portion; and, in response to one workspace being selected from the list, control the display to display execution screens of a plurality of applications in the second portion based on the selected workspace. Each of the one or more workspaces may include size information of the second portion, information about a plurality of applications to be executed, and layout information of the execution screens of the plurality of applications.
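The workspace records and the context-based identification step above can be sketched as follows; the field values and the matching rule are illustrative assumptions:

```python
# Sketch under assumptions: each workspace record carries the three
# fields named in the abstract (size, apps, layout); the context-matching
# rule below is an illustrative stand-in.

WORKSPACES = [
    {"name": "office", "contexts": ["weekday"], "size": (1080, 2400),
     "apps": ["mail", "calendar"], "layout": "split"},
    {"name": "media", "contexts": ["evening"], "size": (1080, 2400),
     "apps": ["video"], "layout": "full"},
]

def identify_workspaces(context):
    """Return the workspaces whose contexts match the obtained context."""
    return [w for w in WORKSPACES if context in w["contexts"]]
```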