G06F3/04845

Augmented reality placement for user feedback

Methods and systems are provided for generating augmented reality (AR) scenes that include one or more artificial intelligence elements (AIEs) rendered as visual objects in the AR scenes. The method includes generating an AR scene for rendering on a display; the AR scene includes a real-world space and virtual objects projected into the real-world space. The method includes analyzing a field of view into the AR scene; the analyzing is configured to detect an action by a hand of the user when reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE being configured to provide a dynamic interface that is selectable by a gesture of the hand of the user. In one embodiment, each of the AIEs is rendered proximate to a real-world object present in the real-world space; the real-world object is located in the direction in which the hand of the user is detected to be reaching when the user makes the action by the hand.
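The placement step described above, rendering AIEs next to real-world objects that lie along the user's reach direction, can be sketched as follows. The function name, the reach-cone angle threshold, and the simple 3-tuple coordinate model are illustrative assumptions, not part of the disclosure:

```python
import math

def place_aies(hand_pos, reach_dir, objects, max_angle_deg=30.0):
    """Hypothetical helper: return AIE anchor positions next to real-world
    objects that fall within a cone around the hand's reach direction."""
    anchors = []
    norm = math.sqrt(sum(c * c for c in reach_dir)) or 1.0
    unit = tuple(c / norm for c in reach_dir)
    for name, obj_pos in objects.items():
        to_obj = tuple(o - h for o, h in zip(obj_pos, hand_pos))
        dist = math.sqrt(sum(c * c for c in to_obj)) or 1.0
        cos_angle = sum(u * t for u, t in zip(unit, to_obj)) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle <= max_angle_deg:
            # Render the AIE slightly above the object the user reaches toward.
            anchors.append((name, (obj_pos[0], obj_pos[1] + 0.1, obj_pos[2])))
    return anchors
```

An object straight ahead of the reaching hand qualifies for an AIE, while one off to the side does not.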

Screen sharing system, screen sharing method, and display apparatus
11579832 · 2023-02-14

A screen sharing system includes a first display apparatus including first circuitry; and a second display apparatus including second circuitry. Both the first display apparatus and the second display apparatus display an input screen. The first circuitry of the first display apparatus is configured to receive first hand drafted input data that is input to the first display apparatus, and set an edit authority, of a user of the second display apparatus, for the first hand drafted input data. The second circuitry of the second display apparatus is configured to restrict editing of the first hand drafted input data based on the edit authority of the user set by the first display apparatus.
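The edit-authority restriction can be modeled minimally as follows. The class name, the single boolean authority flag, and the point-list stroke representation are assumptions made for illustration; the patent itself does not specify a data model:

```python
class SharedStroke:
    """Illustrative model of hand-drafted input shared between two displays."""

    def __init__(self, owner, points, editable_by_others=False):
        self.owner = owner
        self.points = list(points)
        # Edit authority for other users, set by the owner's display apparatus.
        self.editable_by_others = editable_by_others

    def try_edit(self, user, new_points):
        """Apply an edit only if the user owns the stroke or has authority."""
        if user != self.owner and not self.editable_by_others:
            return False  # the second display restricts the edit
        self.points = list(new_points)
        return True
```

A stroke drawn on the first display is then read-only on the second display until its owner grants authority.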

Displaying a representation of a user touch input detected by an external device

A device includes a touch-sensitive display, one or more processors, and memory storing one or more programs including instructions for receiving data from an external device representing user input received over a duration of time at the external device. The programs may include instructions for determining whether the electronic device is actively executing an application for playback. The programs may further include instructions for, in accordance with a determination that the electronic device is not actively executing an application for playback: displaying an indication of the receiving of the data; and displaying an affordance that, when selected, launches the application for playback and causes the electronic device to play back the received data according to the duration of time of the user input.
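The conditional branch described above can be sketched as a single handler. The `ui` interface with `notify`, `show_affordance`, and `play` methods is a hypothetical stand-in for the device's UI layer, not an API from the disclosure:

```python
def handle_incoming_touch_data(data, playback_app_active, ui):
    """Surface touch data received from an external device, following the
    two branches in the abstract (assumed helper names throughout)."""
    if playback_app_active:
        ui.play(data)  # app already running: replay the input directly
        return "played"
    # App not running: show an indication plus an affordance that, when
    # selected, launches playback of the input with its original timing.
    ui.notify("Touch input received")
    ui.show_affordance(on_select=lambda: ui.play(data))
    return "affordance_shown"
```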

Mid-air volumetric visualization movement compensation

A wearable computing device generates a volumetric visualization at a first position that is located in a three-dimensional space. The wearable computing device includes a volumetric source configured to create the volumetric visualization. The wearable computing device includes one or more sensors configured to determine movement of the wearable computing device. A movement of the wearable computing device is identified by the wearable computing device. Based on the movement, the wearable computing device adjusts the volumetric source.
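One simple compensation scheme consistent with the abstract is to re-aim the source so the visualization stays at a fixed world position as the wearable moves. The class name and the device-relative-offset model below are assumptions for illustration:

```python
class VolumetricWearable:
    """Sketch: keep a mid-air visualization anchored in world space."""

    def __init__(self, device_pos, target_world_pos):
        self.device_pos = list(device_pos)
        self.target = target_world_pos
        self.source_offset = self._aim()

    def _aim(self):
        # Emission offset, in device coordinates, that lands on the target.
        return tuple(t - d for t, d in zip(self.target, self.device_pos))

    def on_movement(self, delta):
        """Sensor callback: the device moved by `delta`; adjust the source
        by the opposite amount so the visualization does not drift."""
        self.device_pos = [p + d for p, d in zip(self.device_pos, delta)]
        self.source_offset = self._aim()
        return self.source_offset
```

When the wearer steps forward, the emission offset shrinks by the same amount, holding the visualization in place.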

Methods, systems, and apparatus, for receiving persistent responses to online surveys
11579750 · 2023-02-14

A press and hold function for conducting online surveys with a respondent in order to obtain genuine responses to the online surveys is presented herein. A user interface associated with an online survey is presented to the respondent on a screen of a computing device. The online survey can include a set of questions. Each question of the online survey can include a set of response elements. Each of these response elements can be associated with one or more response durations. In order to select a response to a question, the respondent can press a response button that is associated with one or more response elements to that question. The respondent then holds the response button for the response duration associated with the response element. After the response duration is complete, the response associated with the response element is deemed to be the response of the respondent.
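The press-and-hold selection reduces to a small state machine: record when the button is pressed, and accept the response only if the release comes after the required duration. The class and method names below are illustrative assumptions:

```python
class HoldToRespond:
    """Minimal press-and-hold response selector (illustrative sketch)."""

    def __init__(self, required_duration):
        self.required = required_duration  # seconds the button must be held
        self.pressed_at = None

    def press(self, timestamp):
        self.pressed_at = timestamp

    def release(self, timestamp):
        """Return True when the hold met the response element's duration,
        meaning the response is deemed the respondent's answer."""
        held = (
            self.pressed_at is not None
            and (timestamp - self.pressed_at) >= self.required
        )
        self.pressed_at = None
        return held
```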

Sound effect simulation by creating virtual reality obstacle

According to one embodiment, a method, computer system, and computer program product for modulating external sounds to reflect the acoustic effects of virtual objects in a mixed-reality (MR) environment is provided. The present invention may include creating a knowledge corpus; recording a sound effect occurring externally to an MR environment experienced by a user operating an MR device; identifying one or more objects within the MR environment, including at least one virtual object; modulating the sound effect based on the knowledge corpus to simulate one or more acoustic effects of the one or more objects within the MR environment; and playing the modulated sound effect to the user.
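One way the modulation step could work is simple attenuation: each virtual object between the sound and the listener scales the signal by a transmission factor looked up in the knowledge corpus. The function name, the material-to-factor mapping, and the sample-list signal model are assumptions for illustration, not the disclosed implementation:

```python
def modulate_sound(samples, occluding_materials, corpus):
    """Attenuate an external sound per the virtual objects that occlude it.
    `corpus` maps a material name (from an assumed knowledge corpus) to a
    transmission factor in [0, 1]."""
    gain = 1.0
    for material in occluding_materials:
        # Unknown materials pass the sound through unchanged.
        gain *= corpus.get(material, 1.0)
    return [s * gain for s in samples]
```

A fuller implementation would presumably also apply frequency-dependent filtering, since solid obstacles muffle high frequencies more than low ones.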