A63F13/217

METHOD AND APPARATUS FOR RENDERING WEATHER IN VIRTUAL ENVIRONMENT, DEVICE, MEDIUM AND PROGRAM

This application discloses a method and apparatus for rendering weather in a virtual environment, as well as a device, a medium and a program, in the field of image processing. The method includes: acquiring at least one weather map of a weather scene in a virtual environment (201); removing a first weather map from the at least one weather map to obtain a remaining second weather map (202); and rendering the weather scene in the virtual environment according to the second weather map (203). By reducing the number of weather maps, the method reduces the frequency of map sampling and improves the performance of a terminal running an application program that supports the virtual environment.
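The three steps (201)-(203) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the map names, and the idea that the removed map's contribution is simply dropped are all assumptions for demonstration.

```python
import numpy as np

def acquire_weather_maps(scene):
    """Step 201: collect the weather maps of the weather scene."""
    return scene["weather_maps"]

def remove_first_map(maps, removable_name):
    """Step 202: remove a first weather map, leaving the second map(s)."""
    return {name: m for name, m in maps.items() if name != removable_name}

def render_weather(second_maps):
    """Step 203: render by sampling only the remaining maps.

    One texture sample per remaining map, instead of one per original map.
    """
    return sum(m.mean() for m in second_maps.values())

scene = {"weather_maps": {
    "rain": np.full((4, 4), 0.8),
    "mist": np.full((4, 4), 0.2),
    "splash": np.full((4, 4), 0.5),
}}
maps = acquire_weather_maps(scene)
remaining = remove_first_map(maps, "splash")  # 2 samples per frame instead of 3
intensity = render_weather(remaining)
```

The performance claim follows directly: each dropped map removes one texture fetch per rendered fragment per frame.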

Thermopile array fusion tracking

A simultaneous localization and mapping (SLAM)-enabled video game system, a user device of the video game system, and a computer-readable storage medium of the user device are disclosed. Generally, the video game system includes a video game console, a plurality of thermal beacons, and a user device communicatively coupled with the video game console. The user device includes a thermopile array, a processor, and a memory. The user device may receive thermal data from the thermopile array, the thermal data corresponding to a thermal signal emitted from a thermal beacon of the plurality of thermal beacons and detected by the thermopile array. The user device may determine, based on the thermal data, its location in 3D space, and then transmit that location to the video game system.
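The abstract does not specify how thermal data yields a location; one plausible sketch is that the device infers a bearing to each beacon from its thermopile array and triangulates against the beacons' known positions. The pixel-to-angle model and all names below are assumptions for illustration, shown in 2D for brevity.

```python
import numpy as np

def bearing_from_frame(frame, fov_rad):
    """Map the hottest thermopile pixel column to a bearing across the
    array's horizontal field of view (assumed linear pixel-to-angle model)."""
    col = np.unravel_index(np.argmax(frame), frame.shape)[1]
    return (col / (frame.shape[1] - 1) - 0.5) * fov_rad

def triangulate(beacons, bearings):
    """Intersect the rays from the device toward two beacons at known
    positions; returns the device's (x, y)."""
    (b1, b2), (t1, t2) = beacons, bearings
    d1, d2 = (np.cos(t1), np.sin(t1)), (np.cos(t2), np.sin(t2))
    # Each ray constraint: (beacon - device) parallel to its direction d.
    A = np.array([[d1[1], -d1[0]], [d2[1], -d2[0]]])
    rhs = np.array([b1[0] * d1[1] - b1[1] * d1[0],
                    b2[0] * d2[1] - b2[1] * d2[0]])
    return np.linalg.solve(A, rhs)

# Hot spot in the rightmost column of an 8x8 frame, 60-degree field of view.
frame = np.zeros((8, 8))
frame[3, 7] = 40.0
bearing = bearing_from_frame(frame, np.pi / 3)  # half the FOV to the right

# Device at (1, 2): beacon A at (5, 2) lies at bearing 0, beacon B at (1, 6)
# lies at bearing pi/2.
pos = triangulate([(5.0, 2.0), (1.0, 6.0)], [0.0, np.pi / 2])
```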

Pressure sensor with microphone and metal oxide sensor of a gaming headset microphone mouthpiece
11590411 · 2023-02-28

A biofeedback headset for providing input to and receiving output from an information handling system may include a controller to send and receive audio signals to and from the information handling system and send biofeedback signals to the information handling system; one or more speakers mounted to a wearable head band to provide audio output from the information handling system to a user; and a mouthpiece operatively coupled to the wearable headband including: a microphone to receive audio input from the user; a pressure sensor to detect a breathing rate and amplitude of the user and, with the controller, provide breathing rate and amplitude biofeedback signals to the information handling system; and a gas sensor to detect a composition of air at the mouthpiece as the user respirates and, with the controller, provide air composition biofeedback signals to the information handling system.
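A simple way the controller could derive breathing rate and amplitude from the pressure sensor is peak counting over a sampled pressure trace. This is a hedged sketch, not the patented method; a real controller would filter sensor noise first, and the names and sampling parameters are assumptions.

```python
import numpy as np

def breathing_metrics(pressure, fs_hz):
    """Return (breaths per minute, amplitude) from a pressure trace.

    A sample counts as a breath peak when it exceeds both neighbours and
    the signal mean.
    """
    mean = pressure.mean()
    peaks = [i for i in range(1, len(pressure) - 1)
             if pressure[i] > pressure[i - 1]
             and pressure[i] > pressure[i + 1]
             and pressure[i] > mean]
    duration_min = len(pressure) / fs_hz / 60.0
    rate = len(peaks) / duration_min
    amplitude = pressure.max() - pressure.min()
    return rate, amplitude

# Synthetic 15-breaths-per-minute signal sampled at 10 Hz for 60 s.
fs = 10.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t)
rate, amp = breathing_metrics(signal, fs)
```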

Methods of transmitting and receiving additional SIB1-NB subframes in an NB-IoT network

A method performed by a network node comprises transmitting a transmission of system information. The transmission comprises coded bits obtained by reading from a circular buffer. The transmission is transmitted in a first set of subframes corresponding to subframes #4 of a plurality of radio frames. The method further comprises transmitting an additional transmission of the system information. The additional transmission comprises additional coded bits obtained by continuing reading from the circular buffer. The additional transmission is transmitted in a second set of subframes corresponding to subframes of the plurality of radio frames other than subframes #4.
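The circular-buffer behaviour described above can be sketched as follows: the base transmission reads a window of coded bits, and the additional transmission continues reading from where the first read stopped, wrapping around the buffer. The buffer contents and read sizes are illustrative assumptions, not values from the method.

```python
def read_circular(buffer, start, count):
    """Read `count` coded bits from a circular buffer starting at `start`.

    Returns the bits read and the position where a continuation would resume.
    """
    n = len(buffer)
    bits = [buffer[(start + i) % n] for i in range(count)]
    return bits, (start + count) % n

coded_bits = list(range(10))  # stand-in for rate-matched coded bits

# Base transmission (subframes #4): first window of the buffer.
base, pos = read_circular(coded_bits, 0, 6)
# Additional transmission (other subframes): continue reading, wrapping.
extra, pos = read_circular(coded_bits, pos, 6)
```

The receiver benefits because the additional transmission carries bits the base transmission did not, rather than a plain repetition.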

METHODS AND SYSTEMS FOR INTERACTIVE GAMING PLATFORM SCENE GENERATION UTILIZING CAPTURED VISUAL DATA AND ARTIFICIAL INTELLIGENCE-GENERATED ENVIRONMENT
20230218984 · 2023-07-13

Methods and systems are provided for interactive gaming platform scene generation utilizing captured visual data and an artificial-intelligence-generated environment. The gaming platform includes at least a user device that has, or is coupled to, a display. The gaming platform is configured to obtain recorded footage associated with an environment pertinent to a game playable via the user device; to generate, based on the recorded footage, one or more video frames for use during playing of the game; and to display the one or more video frames via the display during the playing of the game. The recorded footage may be processed using artificial intelligence, and the one or more video frames may be generated using the artificial intelligence and based on that processing.
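The obtain/generate/display pipeline can be outlined as below. This is a structural sketch only: the model is a trivial stand-in for the AI processing, and every name here is an assumption.

```python
def obtain_recorded_footage(source):
    """Obtain recorded footage of an environment pertinent to the game."""
    return list(source)

def generate_frames(footage, model):
    """Generate video frames from the footage via the AI model."""
    return [model(frame) for frame in footage]

def display_frames(frames, display):
    """Display the generated frames during play of the game."""
    for frame in frames:
        display.append(frame)  # stand-in for pushing to the screen

raw = obtain_recorded_footage(["f0", "f1", "f2"])
frames = generate_frames(raw, model=lambda f: f + "_styled")
screen = []
display_frames(frames, screen)
```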

Scavenger hunt facilitation

Methods and systems for facilitating a scavenger hunt are disclosed. The systems and methods described herein involve receiving at an interface a list of a plurality of attractions, and communicating the list of the plurality of attractions over a network to at least one device associated with a participant. Scavenger hunt participants may then gather imagery of the required attractions. The systems and methods then involve receiving imagery from the at least one device and executing at least one computer vision procedure to determine whether the received imagery includes at least one of the plurality of attractions.
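The matching step can be sketched as follows. The classifier here is a filename-keyed stand-in labeled as such; an actual system would run a trained vision model, and all names below are illustrative assumptions.

```python
def facilitate_hunt(attractions, submitted_images, classify):
    """Return the attractions credited to the participant.

    `classify` stands in for the computer vision procedure that labels
    each submitted image.
    """
    found = set()
    for image in submitted_images:
        label = classify(image)
        if label in attractions:
            found.add(label)
    return found

attractions = ["fountain", "statue", "clock tower"]
# Toy classifier keyed on filename, for illustration only.
classify = lambda img: img.split(".")[0].replace("_", " ")
found = facilitate_hunt(
    attractions, ["statue.jpg", "dog.jpg", "clock_tower.png"], classify)
```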
