Patent classifications
G06F1/1684
WEARABLE DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
Provided are a control method and apparatus for a wearable device, an electronic device, and a computer-readable storage medium. The wearable device has a first operation mode and a second operation mode. The first operation mode is a mode for running a first system and a second system. The second operation mode is a mode for running only the second system. The first operation mode has a higher power consumption than the second operation mode. The control method includes: obtaining user behavior data (102); determining a user behavior status based on the user behavior data (104); and switching, in response to detecting that the user behavior status is a sleep status, the wearable device from the first operation mode to the second operation mode (106).
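The described mode switch can be sketched as a small state machine. This is a hypothetical illustration, not the patented implementation: the behavior classifier, the motion/heart-rate fields, and the thresholds are all assumptions made for the example.

```python
from enum import Enum

class OperationMode(Enum):
    DUAL_SYSTEM = 1    # first operation mode: both systems run, higher power
    SINGLE_SYSTEM = 2  # second operation mode: only the second system runs

def classify_behavior(user_behavior_data: dict) -> str:
    """Hypothetical classifier: low motion plus low heart rate is treated as sleep."""
    if user_behavior_data["motion"] < 0.1 and user_behavior_data["heart_rate"] < 55:
        return "sleep"
    return "awake"

def update_mode(current_mode: OperationMode, user_behavior_data: dict) -> OperationMode:
    # Steps 102-106: obtain data, determine status, switch on sleep detection.
    status = classify_behavior(user_behavior_data)
    if status == "sleep" and current_mode is OperationMode.DUAL_SYSTEM:
        return OperationMode.SINGLE_SYSTEM  # drop to the lower-power mode
    return current_mode
```

The abstract only claims the dual-to-single transition on sleep detection, so the sketch leaves any wake-up transition out.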
MACHINE-LEARNING BASED GESTURE RECOGNITION WITH FRAMEWORK FOR ADDING USER-CUSTOMIZED GESTURES
Embodiments are disclosed for machine learning (ML) gesture recognition with a framework for adding user-customized gestures. In an embodiment, a method comprises: receiving sensor data indicative of a gesture made by a user, the sensor data obtained from at least one sensor of a wearable device worn on a limb of the user; generating a current encoding of features extracted from the sensor data using a machine learning model with the features as input; generating similarity metrics between the current encoding and each encoding in a set of previously generated encodings for gestures; generating similarity scores based on the similarity metrics; predicting the gesture made by the user based on the similarity scores; and performing an action on the wearable device or other device based on the predicted gesture.
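The compare-against-stored-encodings step can be illustrated with a minimal sketch. Cosine similarity is an assumption here (the abstract does not name the metric), and the encodings are stand-ins for the ML model's output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """One possible similarity metric between two gesture encodings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_gesture(current_encoding: np.ndarray, stored_encodings: dict):
    """stored_encodings maps gesture name -> previously generated encoding.
    Returns the best-scoring gesture and all similarity scores."""
    scores = {name: cosine_similarity(current_encoding, enc)
              for name, enc in stored_encodings.items()}
    return max(scores, key=scores.get), scores
```

Because prediction is a nearest-encoding lookup, a user-customized gesture could be added by simply storing one more reference encoding, which matches the "framework for adding user-customized gestures" described above.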
Sensor, input device, and electronic device
Provided is a sensor including a sensor electrode unit that includes: a capacitance-type sensing unit that detects pressing; and a capacitance-type temperature detection sensing unit provided in an area corresponding to the sensing unit.
Electronic apparatus
An electronic apparatus includes an electronic panel including a plurality of sensing groups spaced apart from each other, and an electronic module overlapping with the electronic panel in a plan view. Each of the sensing groups includes a first sensing electrode extending in a second direction, and second sensing electrodes spaced apart from each other in the second direction, located on the same layer as the first sensing electrode, and facing the first sensing electrode in a first direction. An opening overlapping with the electronic module and penetrating a first sensing group of the sensing groups is defined in the electronic panel, and at least one selected from the first sensing electrode and the second sensing electrodes of the first sensing group extends along an edge of the opening.
Foldable electronic device and method of estimating bioinformation using the same
Provided are a foldable electronic device and a method for estimating bio-information by using the same. The foldable electronic device may include: a main body part including a first main body and a second main body that are configured to be folded toward each other or unfolded from each other along a fold line where the first main body and the second main body meet; an image sensor part including a first image sensor and a second image sensor which are disposed at the first main body; and a processor configured to obtain a contact image of an object from the first image sensor disposed at the first main body and obtain an image of a marker that is displayed on the second main body, from the second image sensor disposed at the first main body, when the object is in contact with the first image sensor and the main body part is folded along the fold line, and estimate bio-information based on the contact image of the object and the image of the marker.
DEVELOPING SOURCE CODE LEVERAGING SMART GLASSES
Methods for supporting the development of application source code using input from a first smart glasses of a first user and a second smart glasses of a second user are provided. Methods may include retrieving the application source code from an internal development platform and displaying the application source code on an augmented reality ("AR") display of the first smart glasses and an AR display of the second smart glasses. Methods may include receiving, from the first smart glasses, a command to edit the application source code and, in response, deactivating each input device of the second smart glasses. Methods may include receiving input of one or more edits on the first smart glasses and updating the application source code to include the input. Methods may include displaying the updated application source code on the AR displays and reactivating each input device of the second smart glasses.
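The edit/lock/reactivate flow described above resembles an exclusive-edit session. The following sketch is illustrative only; the class name, two-glasses model, and in-memory state are assumptions, not the claimed system:

```python
class SharedCodeSession:
    """Hypothetical sketch: while one smart glasses edits, the other's
    input devices are deactivated until the updated source is displayed."""

    def __init__(self, source_code: str):
        self.source_code = source_code
        self.inputs_active = {"glasses_1": True, "glasses_2": True}

    def begin_edit(self, editor: str) -> None:
        # Edit command received: deactivate every other input device.
        for device in self.inputs_active:
            if device != editor:
                self.inputs_active[device] = False

    def apply_edit(self, editor: str, new_source: str) -> None:
        # Update the source, then reactivate all input devices
        # (the updated code would be redisplayed on both AR displays).
        self.source_code = new_source
        for device in self.inputs_active:
            self.inputs_active[device] = True
```

The design choice illustrated is a simple mutual-exclusion lock keyed to the editing device, which prevents conflicting simultaneous edits from the two wearers.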
Image projection device
Provided is an image projection device that can correctly discern the content of a touch operation when a user performs various kinds of touch operations on an image projected on a projection screen. An imaging unit is adjusted to come into focus on the projection screen. An image data extracting unit extracts, from the image data obtained by the imaging unit, image data in which a finger or the like exists and is in focus. An operation determining unit determines the content of the operation performed with the finger or the like on the basis of the image data extracted by the image data extracting unit. An input control unit recognizes the content of an input instruction corresponding to the operation performed with the finger or the like, on the basis of data relating to the content of the operation, position data of the finger or the like, and reference data specifying the position and size of the image projected on the projection screen, and controls a projection unit in accordance with the recognized content of the input instruction.
Performance mode control method and electronic device supporting same
An electronic device according to an embodiment of the present invention comprises: a communication module for communicating with at least one external device; a microphone for receiving a user utterance; a memory for storing performance mode information configured in the electronic device; and a processor electrically connected to the communication module, the microphone, and the memory, wherein the processor is configured to: receive, through the microphone, a second user utterance associated with task execution; transmit first data associated with the second user utterance to an external device; receive, from the external device, second data associated with at least a part of the processing of the first data; identify a first workload allocated to the electronic device at the time of receiving the second data; and compare a second workload required for processing the second data with the first workload, so as to control the performance mode. In addition, various embodiments recognized through the specification are possible.
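The workload comparison driving the performance mode can be sketched as follows. The capacity threshold, unit of workload, and mode names are assumptions for illustration; the abstract does not specify them:

```python
def control_performance_mode(first_workload: float,
                             second_workload: float,
                             capacity: float = 100.0) -> str:
    """Hypothetical policy: if the workload already allocated to the device
    plus the workload needed to process the new data would exceed capacity,
    raise the performance mode; otherwise stay in the normal mode."""
    if first_workload + second_workload > capacity:
        return "high_performance"
    return "normal"
```

A real device would presumably map these modes onto CPU frequency or core scheduling settings, but the abstract claims only the compare-and-control step shown here.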
DISPLAY ELEMENT DISPLAY METHOD AND ELECTRONIC DEVICE
A display element display method and an electronic device are provided. The method is applied to an electronic device including a first body and a second body. The first body is bendable relative to the second body. The first body and the second body respectively correspond to different display areas of the electronic device. The method includes: The electronic device detects a status of the first body and a status of the second body (301); determines a main interaction area and a main display area based on the status of the first body and the status of the second body (302); obtains one or more display elements on a to-be-displayed user interface (303); determines a display element type, where the display element type includes a main interaction element and a main display element (304); and displays the main interaction element in the main interaction area and displays the main display element in the main display area (306). The method helps a user better operate the electronic device, and also helps the user view content displayed by the electronic device.
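The element-routing step (304)-(306) can be sketched as a simple classifier-to-area mapping. The element-type tags and area names below are illustrative assumptions:

```python
def layout_elements(elements, main_interaction_area: str, main_display_area: str) -> dict:
    """elements: list of (name, element_type) pairs, where element_type is
    'interaction' (main interaction element) or 'display' (main display element).
    Routes each element to the area determined from the body statuses."""
    layout = {main_interaction_area: [], main_display_area: []}
    for name, element_type in elements:
        if element_type == "interaction":
            layout[main_interaction_area].append(name)
        else:
            layout[main_display_area].append(name)
    return layout
```

For example, with the lower (flat) body chosen as the main interaction area, buttons would land near the user's hands while video content stays on the upright display area.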
BATTERY CAPACITY REPRESENTATION METHOD AND RELATED COMPUTER SYSTEM
A battery capacity representation method for a computer system is disclosed. The computer system includes a specific absorption rate (SAR) sensor, a battery module, and a light-emitting diode (LED) module, and the battery capacity representation method includes: performing external environment sensing with the SAR sensor to determine whether a triggering condition is satisfied; sensing a battery capacity of the battery module when the triggering condition is satisfied; and determining a representation status of the LED module according to the battery capacity.
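The three claimed steps map naturally onto a small decision function. The color bands and thresholds below are invented for illustration; the abstract only requires that the LED status be determined from the sensed capacity once the SAR trigger fires:

```python
def led_status(sar_triggered: bool, battery_percent: int) -> str:
    """Hypothetical mapping from sensed battery capacity to an LED
    representation, gated by the SAR sensor's triggering condition."""
    if not sar_triggered:
        return "off"  # triggering condition not satisfied: no representation
    if battery_percent >= 80:
        return "green"
    if battery_percent >= 30:
        return "yellow"
    return "red_blinking"
```

Gating on the SAR sensor means the LED only lights when something (e.g. a hand) is near the device, which saves power compared with a permanently lit indicator.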