G06F3/01

MUTE-ABLE INPUT DEVICE WITH KEYSTROKE TACTILE FEEDBACK
20230052943 · 2023-02-16

A mute-able input device with keystroke tactile feedback includes: a plurality of keys, each including a tactile structure and a sound-generating structure for respectively generating operational tactile feedback and operating sounds; a plurality of adjusting mechanisms, each including an adjusting portion corresponding to one of the keys; at least one switching unit including an operating portion and a switch member, the operating portion connecting the switch member and the adjusting mechanism, the switch member generating a switching signal involving a mode switching between different tactile modes or different sound modes for one or more keys. When the operating portion moves in response to a force, the switch member is triggered to achieve the mode switching; meanwhile, the adjusting portion moves to interfere with at least one of the tactile structure or the sound-generating structure along with the movement of the operating portion.

System and method for an interactive digitally rendered avatar of a subject person
11582424 · 2023-02-14

A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.

Photodetector activations

An example computing device includes a photodetector to measure an amount of light incident on a detection surface of the photodetector. The example computing device includes a state sensor to activate the photodetector responsive to the computing device being in a detection state. The example computing device also includes a processor. An example processor identifies, during the detection state, a user gesture based on an output of the photodetector. The user gesture blocks light incident on the detection surface of the photodetector. The example processor also alters an operation of the computing device based on the user gesture.
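A minimal sketch of the gesture logic the abstract describes, with all names and thresholds hypothetical: a "cover" gesture is a short, bounded run of photodetector samples that fall well below the ambient baseline.

```python
def detect_cover_gesture(samples, baseline, block_ratio=0.2,
                         min_blocked=3, max_blocked=10):
    """Classify a light-blocking gesture: light drops below a fraction
    of the ambient baseline for a short, bounded run of samples."""
    blocked_run = 0
    runs = []
    for level in samples:
        if level < baseline * block_ratio:
            blocked_run += 1
        else:
            if blocked_run:
                runs.append(blocked_run)
            blocked_run = 0
    if blocked_run:
        runs.append(blocked_run)
    # A single bounded run of blocked samples counts as one gesture;
    # very long runs are treated as the device being covered, not a gesture.
    return any(min_blocked <= r <= max_blocked for r in runs)
```

The bounded run length is what distinguishes an intentional hand-wave from the device simply lying face down.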

Exercise-based watch face and complications

Exercise-based watch faces and complications for use with a portable multifunction device are disclosed. The methods described herein for exercise-based watch faces and complications provide indications of time and affordances representing applications (e.g., a workout application or a weather application). In response to detecting a user input corresponding to a selection of the affordance (e.g., representing a workout application), a workout routine can optionally be started. Further disclosed are non-transitory computer-readable storage media, systems, and devices configured to perform the methods described herein, as well as electronic devices related thereto.

Systems and methods for driving a display

An image system dynamically updates its drive sequences. Drive sequences are image display settings or display driving characteristics with which a display is operated. The image system may determine the drive sequence at least partially based on input from one or more sensors. For example, the image system may include sensors such as an inertial measurement unit, a light sensor, a camera, a temperature sensor, or other sensors from which sensor data may be collected. The image system may analyze the sensor data to calculate drive sequence settings or to select a drive sequence from a number of predetermined drive sequences. Displaying image content on a display includes providing the display with image data and includes operating the display with various drive sequences.
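A sketch of the "select from predetermined drive sequences" path, with an entirely hypothetical sequence table and thresholds; the abstract does not specify the selection rules, only that sensor data informs them.

```python
# Hypothetical drive-sequence table: refresh rate (Hz) and brightness duty.
DRIVE_SEQUENCES = {
    "low_power":  {"refresh_hz": 30,  "duty": 0.4},
    "standard":   {"refresh_hz": 60,  "duty": 0.7},
    "high_light": {"refresh_hz": 60,  "duty": 1.0},
    "motion":     {"refresh_hz": 120, "duty": 0.7},
}

def select_drive_sequence(ambient_lux, angular_rate_dps, temp_c):
    """Pick a predetermined drive sequence from sensor readings."""
    if temp_c > 45:                 # thermal limit: throttle first
        return "low_power"
    if angular_rate_dps > 90:       # fast motion reported by the IMU
        return "motion"
    if ambient_lux > 10_000:        # bright sunlight from the light sensor
        return "high_light"
    return "standard"
```

Ordering the checks by priority (thermal, then motion, then ambient light) is one simple way to resolve conflicting sensor conditions.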

System and method for PIN entry on mobile devices
11580208 · 2023-02-14

A system for entering a secure Personal Identification Number (PIN) into a mobile computing device includes a mobile computing device and a peripheral device that are connected via a data communication link. The mobile computing device includes a mobile application and a display and the mobile application runs on the mobile computing device and displays a grid on the mobile computing device display. The peripheral device includes a display and an encryption engine, and the peripheral device display displays a grid corresponding to the grid displayed on the mobile computing device display. Positional inputs on the mobile computing device grid are sent to the peripheral device and the peripheral device decodes the positional inputs into PIN digits and generates an encrypted PIN and then sends the encrypted PIN back to the mobile computing device.
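A sketch of the split-trust flow, assuming a hypothetical grid layout: the mobile side only ever forwards (row, column) positions, and the peripheral alone maps them back to digits. The MAC here is a stand-in for the peripheral's encryption engine, not the actual scheme.

```python
import hashlib
import hmac

# Hypothetical grid shown on both displays; only the peripheral
# knows which digit sits at each position.
GRID = [["1", "2", "3"],
        ["4", "5", "6"],
        ["7", "8", "9"],
        ["",  "0", ""]]

def decode_positions(positions):
    """Peripheral side: turn positional taps back into PIN digits."""
    return "".join(GRID[r][c] for r, c in positions)

def encrypt_pin(pin, key):
    """Stand-in for the peripheral's encryption engine (a real device
    would use a PIN-block format such as ISO 9564, not a bare MAC)."""
    return hmac.new(key, pin.encode(), hashlib.sha256).hexdigest()

taps = [(0, 0), (1, 1), (3, 1), (2, 2)]   # positions sent by the mobile app
pin = decode_positions(taps)               # digits exist only on the peripheral
token = encrypt_pin(pin, b"session-key")   # only this goes back to the mobile
```

The point of the design is that the cleartext PIN never exists on the (less trusted) mobile device.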

Devices, systems, and methods for multi-device interactions

There is provided a pointing device including a mode switching apparatus that switches the pointing device between a two-dimensional (2D) operational mode and a three-dimensional (3D) operational mode, and a sensor configured to determine a pointing direction of the pointing device and locations of a plurality of computing devices. When in the 2D operational mode, the pointing device is paired with and controls a first computing device of the plurality of computing devices. When in the 3D operational mode, the pointing device is configured to select a second computing device of the plurality of computing devices to additionally control, the selection based on one or more of the pointing direction of the pointing device and the location of the second computing device.
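One plausible realization of the 3D-mode selection (all names and the tolerance cone are assumptions): pick the device whose bearing from the pointer lies closest to the pointing direction, within an angular threshold.

```python
import math

def select_device(pointer_pos, pointing_dir, device_locations,
                  max_angle_deg=15):
    """Pick the device whose bearing from the pointer is closest to
    the pointing direction, within a tolerance cone; None if no device
    falls inside the cone."""
    def angle_between(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

    best, best_angle = None, max_angle_deg
    for name, loc in device_locations.items():
        bearing = tuple(l - p for l, p in zip(loc, pointer_pos))
        a = angle_between(pointing_dir, bearing)
        if a <= best_angle:
            best, best_angle = name, a
    return best
```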

Method and system for determining a current gaze direction
11579687 · 2023-02-14

A method for determining a current gaze direction of a user in relation to a three-dimensional (“3D”) scene, the 3D scene sampled by a rendering function to produce a two-dimensional (“2D”) projection image of the 3D scene, the sampling performed based on a virtual camera associated with a camera position and camera direction in the 3D scene. The method includes determining, by a gaze direction detection means, a first gaze direction of the user related to the 3D scene at a first gaze time point. The method includes determining a time-dependent virtual camera 3D transformation representing a change of the virtual camera position and/or virtual camera direction between the first gaze time point and a second sampling time point. The method includes determining the current gaze direction as a modified gaze direction calculated based on the first gaze direction and an inverse of the time-dependent virtual camera 3D transformation.
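A worked sketch of the core correction for the simplest case, where the camera change between the two time points is a pure rotation: the inverse transform is then the matrix transpose, and applying it to the first gaze direction yields the modified gaze. The yaw angle and axis are illustrative assumptions.

```python
import math

def rotation_z(deg):
    """3x3 matrix for the camera's yaw between the two sampling points."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# First gaze direction measured at the first gaze time point.
gaze_t1 = (1.0, 0.0, 0.0)
# Camera yawed +30 degrees by the second sampling; for a pure rotation
# the inverse transform is simply the transpose.
R = rotation_z(30)
gaze_now = apply(transpose(R), gaze_t1)
```

Intuitively: if the camera has turned right since the gaze was measured, the same gaze, re-expressed in the new camera frame, points correspondingly to the left.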

Systems for real-time intelligent haptic correction to typing errors and methods thereof

Systems and methods of the present disclosure enable context-aware haptic error notifications. The systems and methods include a processor to receive input segments into a software application from a character input component and determine a destination. A context identification model predicts a context classification of the input segments based at least in part on the software application and the destination, and one or more potential errors are determined in the input segments based on the context classification. An error characterization machine learning model determines an error type classification and an error severity score for each potential error. A haptic feedback pattern and a haptic event latency are then determined for each potential error based on its error type classification and error severity score.
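A sketch of the final mapping step, with hypothetical pattern names and constants; the abstract says only that pattern and latency derive from the type classification and severity score, not how.

```python
def haptic_for_error(error_type, severity):
    """Map an error type classification and a severity score in [0, 1]
    to a vibration pattern, intensity, and event latency (ms)."""
    patterns = {
        "typo":        "short_pulse",
        "wrong_field": "double_pulse",
        "formatting":  "long_buzz",
    }
    pattern = patterns.get(error_type, "short_pulse")
    # Severe errors fire sooner and stronger; minor ones are delayed so
    # the notification can be dropped if the user self-corrects first.
    latency_ms = int(100 + (1.0 - severity) * 900)
    intensity = round(0.3 + 0.7 * severity, 2)
    return {"pattern": pattern, "latency_ms": latency_ms,
            "intensity": intensity}
```

Tying latency to severity is one way to keep low-stakes errors from interrupting the user mid-keystroke.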

Dynamic feedback for haptics

A haptic system is described. The haptic system includes a linear resonant actuator (LRA), a receiver, and a transmitter. The LRA has a characteristic frequency and provides a vibration in response to an input signal. The receiver is configured to sense received vibration from the LRA. The transmitter is configured to provide the input signal to the LRA. The receiver is coupled with the transmitter and provides vibrational feedback based on the received vibration. The input signal incorporates the vibrational feedback.
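A toy closed-loop sketch of the feedback path the abstract describes, assuming a hypothetical linear plant model for the LRA: the transmitter's drive level is nudged by the error between the target and the sensed vibration amplitude, so the input signal incorporates the receiver's feedback.

```python
def drive_step(target_amp, sensed_amp, drive, gain=0.5):
    """One feedback update: adjust the drive by the amplitude error."""
    return drive + gain * (target_amp - sensed_amp)

def simulate(target=1.0, plant_gain=0.8, steps=50):
    """Toy LRA near resonance: sensed amplitude proportional to drive."""
    drive = 0.0
    for _ in range(steps):
        sensed = plant_gain * drive       # receiver senses the vibration
        drive = drive_step(target, sensed, drive)  # transmitter updates
    return plant_gain * drive             # final sensed amplitude
```

Under this model the loop converges to the target amplitude regardless of the (unknown) plant gain, which is the practical appeal of closing the loop around the sensed vibration.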