G06F3/0488

Systems and methods for controlling virtual scene perspective via physical touch input

Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
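As a rough, non-authoritative illustration of the claimed flow (the class, function names, and gesture-to-perspective mappings below are invented for this sketch and do not appear in the abstract), successive multi-finger interactions might drive successive perspective changes like this:

```python
# Hypothetical sketch: mapping multi-finger touch input to scene-perspective
# changes presented on a wearable extended reality (WER) appliance.

from dataclasses import dataclass


@dataclass
class Perspective:
    yaw: float   # viewing angle in degrees
    zoom: float  # scale factor


def apply_interaction(current: Perspective, fingers: int, delta: float) -> Perspective:
    """Derive a new perspective from a multi-finger interaction.

    In this sketch, two fingers rotate the scene and three fingers zoom it;
    any other input leaves the perspective unchanged.
    """
    if fingers == 2:
        return Perspective(yaw=(current.yaw + delta) % 360, zoom=current.zoom)
    if fingers == 3:
        return Perspective(yaw=current.yaw, zoom=max(0.1, current.zoom * (1 + delta)))
    return current


# A first interaction yields a second perspective; a second interaction a third.
first = Perspective(yaw=0.0, zoom=1.0)
second = apply_interaction(first, fingers=2, delta=30.0)
third = apply_interaction(second, fingers=3, delta=0.5)
```
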

Apparatus to dispense feminine hygiene products with one or more user sensors
11576828 · 2023-02-14

A dispenser of feminine pads and tampons activated by a touch sensor or a proximity sensor. The touch or proximity of a person's hand closes an electronic circuit, causing a motor to rotate. The motor is attached to a shaft that retains a feminine product dispenser, which transports a feminine napkin or tampon to a retrieval tray. The improved design enables the sanitary napkin rail and tampon rail to be adjacent to each other, reducing the width requirements for the cabinet housing the rails. Activation by a touch sensor or a proximity sensor significantly improves the selection and dispensing of the desired feminine napkin product or tampon product.
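A minimal sketch of the sensing logic, assuming an invented signature and proximity threshold (neither appears in the abstract):

```python
# Hypothetical sketch (not from the patent): a touch or proximity reading
# closing the dispensing circuit, which in turn would start the motor.

def should_dispense(touched: bool, proximity_cm: float, threshold_cm: float = 5.0) -> bool:
    """Close the electronic circuit when the sensor is touched or a hand
    is within the proximity threshold."""
    return touched or proximity_cm <= threshold_cm
```
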

Method and apparatus for prompting that virtual object is attacked, terminal, and storage medium

A method and an apparatus for prompting that a virtual object is attacked are provided. The method includes: displaying a user interface (UI) including a target virtual object located in a virtual environment; obtaining a being-attacked direction of the target virtual object; obtaining, according to the being-attacked direction, a display position of being-attacked direction prompt information, the prompt information being used for indicating the being-attacked direction; and displaying the being-attacked direction prompt information in the UI according to the display position. The being-attacked direction is prompted, and the content of the being-attacked prompt is diversified.
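One plausible way to derive a display position from a being-attacked direction is to place an indicator on the rim of a circular compass in the UI. This sketch assumes invented screen coordinates and an angle convention not specified in the abstract:

```python
# Hypothetical sketch: placing a "being-attacked" direction indicator on the
# edge of a circular UI compass, given the attack direction in degrees.
import math


def prompt_position(attack_angle_deg: float, center=(400, 300), radius=100):
    """Return (x, y) screen coordinates for the direction prompt.

    0 degrees points right; angles increase counter-clockwise.
    """
    rad = math.radians(attack_angle_deg)
    x = center[0] + radius * math.cos(rad)
    y = center[1] - radius * math.sin(rad)  # screen y grows downward
    return round(x), round(y)
```
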

Apparatus and method for performing multi-tasking in portable terminal

A multi-tasking execution apparatus and a method for easily controlling applications running in a portable terminal are provided. The apparatus includes a display and a controller. The display displays an application-containing image in which at least one specific image representing at least one application running in a background is contained and arranged. The controller operatively displays at least one specific image representing at least one application running in the background, so as to be contained in the application-containing image, and controls the at least one application running in the background by controlling the specific image based on a specific gesture.
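A minimal sketch of gesture-based control of background applications via their thumbnails; the gesture names and dictionary representation are invented for illustration:

```python
# Hypothetical sketch: controlling a background application by gesturing on
# its specific image (thumbnail) inside the application-containing image.

def handle_gesture(background_apps: dict, app_id: str, gesture: str) -> dict:
    """Apply a gesture performed on an app's thumbnail to the app itself.

    In this sketch, swiping up closes the background app and tapping
    brings it to the foreground; other gestures are ignored.
    """
    apps = dict(background_apps)
    if gesture == "swipe_up":
        apps.pop(app_id, None)       # close the background application
    elif gesture == "tap":
        apps[app_id] = "foreground"  # bring it to the foreground
    return apps
```
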

CORE MODEL AUGMENTED REALITY

A method of registering geological data at a formation core tracking system includes, at the tracking system: registering a formation core provided within a field of view of an optical imaging system of the tracking system; tracking the orientation and the distance of the formation core relative to the tracking system; obtaining data associated with a first section of the formation core which is located at a predetermined distance from the tracking system; displaying the data together with an image of the formation core such that an augmented reality image is provided on a display device of the tracking system; changing the distance between the tracking system and the core; and updating the displayed data by obtaining data associated with a second section of the formation core which is located at said predetermined distance from the tracking system.
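A sketch of the distance-to-section lookup, assuming an invented section length and illustrative lithology data (none of which come from the abstract):

```python
# Hypothetical sketch: mapping the tracked core-to-tracker distance to the
# core section whose data is currently overlaid in the augmented image.

def section_at_distance(distance_m: float, section_length_m: float = 0.5) -> int:
    """Index of the core section at the predetermined distance."""
    return int(distance_m // section_length_m)


core_data = {0: "sandstone", 1: "shale", 2: "limestone"}  # illustrative logs

# Changing the distance updates which section's data augments the image.
before = core_data[section_at_distance(0.3)]
after = core_data[section_at_distance(0.7)]
```
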

DISPLAY METHOD FOR FOLDABLE SCREEN AND RELATED APPARATUS

This application discloses a display method for a foldable screen, applied to an electronic device including a foldable screen. The foldable screen can be folded to form at least two screens, and the at least two screens may include a first screen and a second screen. The method includes: When the foldable screen is in an expanded state, the electronic device displays a first interface of a first application in full screen on the foldable screen. The first interface includes an image captured by a camera. When detecting that the foldable screen changes from the expanded state to a half-folded state, the electronic device displays a second interface on the first screen. In this way, a user can conveniently perform photographing, video calling, live broadcasting, or the like without holding the electronic device steady with both hands.
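The fold-state switching can be sketched as a simple mapping from hinge angle to display state; the angle thresholds and state names below are invented, as the abstract does not specify them:

```python
# Hypothetical sketch: switching the camera UI when a foldable screen moves
# from the expanded state to a half-folded state.

def display_mode(fold_angle_deg: float) -> str:
    """Map the hinge angle to a display state.

    Near 180 degrees: expanded, first interface shown full screen.
    Between ~60 and ~160 degrees: half-folded, second interface shown
    on the first screen so the device can stand unassisted.
    """
    if fold_angle_deg >= 160:
        return "full_screen_first_interface"
    if fold_angle_deg >= 60:
        return "second_interface_on_first_screen"
    return "folded_off"
```
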

Content Transmission Method, Device, and Medium
20230042460 · 2023-02-09

A content transmission method is provided. The method may include: A first device determines that a distance between the first device and a second device is less than a distance threshold. The first device provides a user with a prompt that content transmission can be performed between the first device and the second device. The first device recognizes a gesture operation performed by the user on the first device, and determines transmission content and a transmission direction of the transmission content between the first device and the second device based on the recognized gesture operation. The first device receives the transmission content from the second device or sends the transmission content to the second device based on the determined transmission direction.
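The proximity check and gesture-to-direction mapping can be sketched as follows; the distance threshold and gesture names are invented for illustration and are not defined in the abstract:

```python
# Hypothetical sketch: deciding the transmission direction between two
# nearby devices from a gesture recognized on the first device.

DISTANCE_THRESHOLD_M = 0.3


def transfer_plan(distance_m: float, gesture: str):
    """Return (direction, prompt_shown) for a recognized gesture.

    In this sketch, a "push" gesture sends content to the second device
    and a "pull" gesture receives content from it. Beyond the distance
    threshold, no prompt is shown and no transfer occurs.
    """
    if distance_m >= DISTANCE_THRESHOLD_M:
        return None, False
    direction = {"push": "send", "pull": "receive"}.get(gesture)
    return direction, direction is not None
```
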