G08B7/00

Ear-worn electronic device configured to compensate for hunched or stooped posture

A hearing device comprises a processor operatively coupled to memory, a speaker or a receiver, and one or both of a microphone arrangement and a telecoil arrangement. A sensor is operatively coupled to the processor and configured to sense an angular position of the device relative to a horizontal plane oriented orthogonal to a direction of gravity. The processor is configured to detect a change in the angular position of the device from a first angular position to a second angular position, the first angular position corresponding to a specified angular position that provides for a target or optimal level of device performance and the second angular position resulting in suboptimal device performance. The processor is also configured to implement a corrective action that improves performance of the device relative to the suboptimal device performance while operating the device at the second angular position.
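
The tilt-detection logic this abstract describes can be sketched in a few lines. The sketch below is illustrative only: the pitch estimate from a 3-axis accelerometer is standard, but the target angle, the tolerance band, and the beam-steering corrective action are assumptions, not the patented implementation.

```python
import math

TARGET_ANGLE_DEG = 0.0   # assumed "first" (optimal) angular position
TOLERANCE_DEG = 10.0     # assumed band within which performance stays on target

def pitch_from_accel(ax, ay, az):
    """Estimate device pitch in degrees relative to the horizontal plane,
    using the gravity vector measured by a 3-axis accelerometer."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def corrective_action(pitch_deg):
    """Return a hypothetical beam-steering offset (degrees) that re-aims
    a directional microphone when the wearer stoops past the tolerance."""
    deviation = pitch_deg - TARGET_ANGLE_DEG
    if abs(deviation) <= TOLERANCE_DEG:
        return 0.0      # still at the first angular position: no correction
    return -deviation   # second angular position: steer back by the excess tilt
```

For a device pitched 25 degrees forward this sketch would steer the beam back by 25 degrees; within the assumed 10-degree band it leaves the device untouched.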

Tying a virtual speaker to a physical space
11748056 · 2023-09-05

A method for tying a virtual speaker to a physical space. A first user with a first wearable extended reality appliance enters an area associated with a virtual speaker. First sounds are transmitted from the virtual speaker to the first wearable extended reality appliance. The first user hears the first sounds at first settings of the virtual speaker. The first user changes the settings of the virtual speaker to second settings. Second sounds are transmitted from the virtual speaker to the first wearable extended reality appliance, such that the first user hears the second sounds at the second settings. After the first user leaves the area, a second user with a second wearable extended reality appliance enters the area. Third sounds are transmitted from the virtual speaker to the second wearable extended reality appliance, such that the second user hears the third sounds at the second settings.
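
The key idea here, settings that persist with the place rather than the person, can be sketched as a speaker object keyed to an area. Class and attribute names below are illustrative, not taken from the patent.

```python
class VirtualSpeaker:
    """A virtual speaker bound to one physical area. Settings live with
    the area, so a later visitor hears whatever the last visitor set."""

    def __init__(self, area_id, volume=0.5, equalizer="flat"):
        self.area_id = area_id
        self.settings = {"volume": volume, "equalizer": equalizer}

    def transmit(self, appliance_id):
        # A real system would stream spatialized audio to the appliance;
        # here we just report which settings that appliance would hear.
        return appliance_id, dict(self.settings)

speaker = VirtualSpeaker("kitchen")       # the area's speaker
speaker.transmit("appliance-1")           # first user hears first settings
speaker.settings["volume"] = 0.9          # first user applies second settings
speaker.transmit("appliance-1")           # first user hears second settings
speaker.transmit("appliance-2")           # second user also hears second settings
```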

TYING A VIRTUAL SPEAKER TO A PHYSICAL SPACE
20230139626 · 2023-05-04

A method for tying a virtual speaker to a physical space. A first user with a first wearable extended reality appliance enters an area associated with a virtual speaker. First sounds are transmitted from the virtual speaker to the first wearable extended reality appliance. The first user hears the first sounds at first settings of the virtual speaker. The first user changes the settings of the virtual speaker to second settings. Second sounds are transmitted from the virtual speaker to the first wearable extended reality appliance, such that the first user hears the second sounds at the second settings. After the first user leaves the area, a second user with a second wearable extended reality appliance enters the area. Third sounds are transmitted from the virtual speaker to the second wearable extended reality appliance, such that the second user hears the third sounds at the second settings.

INITIATING SENSORY PROMPTS INDICATIVE OF CHANGES OUTSIDE A FIELD OF VIEW

Methods, systems, apparatuses, and computer-readable media are provided for initiating location-driven sensory prompts reflecting changes to virtual objects. In one implementation, the computer-readable medium includes instructions to cause a processor to enable interaction with a virtual object located in an extended reality environment associated with a wearable extended reality appliance; receive data reflecting a change associated with the virtual object; determine whether the virtual object is within or outside a field of view of the wearable extended reality appliance; cause initiating of a first sensory prompt indicative of the change when the virtual object is determined to be within the field of view; and cause initiating of a second sensory prompt indicative of the change when the virtual object is determined to be outside the field of view, the second sensory prompt differing from the first sensory prompt.
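
The branch this abstract describes, one prompt inside the field of view and a different one outside it, reduces to a small dispatch function. The modalities chosen below (visual highlight vs. audio chime) are assumptions for illustration.

```python
def sensory_prompt(change, in_field_of_view):
    """Return a prompt descriptor for a change to a virtual object,
    varying the modality with the object's visibility to the wearer."""
    if in_field_of_view:
        # First sensory prompt: the object is visible, a subtle cue suffices.
        return {"modality": "visual_highlight", "change": change}
    # Second, differing prompt: the object is out of view, use audio instead.
    return {"modality": "audio_chime", "change": change}
```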

ENHANCING VIDEOS OF PEOPLE INTERACTING WITH VIRTUAL OBJECTS IN AN EXTENDED REALITY ENVIRONMENT

A system and method for generating videos of individuals interacting with virtual objects. A wearable extended reality appliance generates an extended reality environment including at least one virtual object. First image data reflects a first perspective of an individual wearing the wearable extended reality appliance, including physical hand movements interacting with the at least one virtual object from the first perspective. Second image data reflects a second perspective facing the individual, including second physical hand movements interacting with a virtual object from the second perspective. The first image data and the second image data are analyzed to determine an interaction with the virtual object. The rendered representation of the virtual object from the second perspective is melded with the second image data to generate a video of the individual interacting with the virtual object from the second perspective.
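
The melding step, compositing the rendered virtual object into the second-perspective footage, is essentially alpha blending. The sketch below treats frames as flat lists of pixel intensities; a real pipeline would operate on RGB images after registering the two perspectives.

```python
def meld(frame, rendered, alpha):
    """Alpha-composite a rendered virtual object onto a camera frame.
    `frame` and `rendered` are equal-length pixel lists; `alpha` gives
    per-pixel opacity in [0, 1] where the virtual object appears."""
    return [a * r + (1.0 - a) * f for f, r, a in zip(frame, rendered, alpha)]

# Where alpha is 1 the virtual object replaces the camera pixel; where it
# is 0 the second-perspective footage shows through untouched.
```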

Methods and systems for outputting alerts on user interfaces
11657699 · 2023-05-23

A technique is directed to methods and systems for outputting alerts on user interfaces. In some implementations, an alert system can identify devices connected to the alert interface and determine the user interface capabilities (e.g., audio, visual, or vibration) of each device. Upon receiving an alert of an emergency event, the alert system can determine the location of the user within a structure and select one or more devices near the user to transmit or display the notification of the emergency event to the user. The selected device can identify the emergency event and output the alert based upon the visual, audible, or vibration user interface capabilities of the selected device.
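
A minimal version of the device-selection step might look like the following, modelling device locations as one-dimensional coordinates and capabilities as a set. The distance threshold and the modality preference order are assumptions.

```python
def pick_alert_device(devices, user_location, max_distance=5.0):
    """Select a nearby device and an output modality for an emergency alert.
    `devices` maps device id -> (location, capability set)."""
    nearby = [
        (abs(loc - user_location), dev_id)
        for dev_id, (loc, caps) in devices.items()
        if abs(loc - user_location) <= max_distance and caps
    ]
    if not nearby:
        return None                # no capable device close enough
    _, dev_id = min(nearby)        # nearest capable device wins
    caps = devices[dev_id][1]
    # Assumed preference order when a device supports several modalities.
    for modality in ("audio", "visual", "vibration"):
        if modality in caps:
            return dev_id, modality
```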

METHODS AND SYSTEMS FOR OUTPUTTING ALERTS ON USER INTERFACES
20230154309 · 2023-05-18

A technique is directed to methods and systems for outputting alerts on user interfaces. In some implementations, an alert system can identify devices connected to the alert interface and determine the user interface capabilities (e.g., audio, visual, or vibration) of each device. Upon receiving an alert of an emergency event, the alert system can determine the location of the user within a structure and select one or more devices near the user to transmit or display the notification of the emergency event to the user. The selected device can identify the emergency event and output the alert based upon the visual, audible, or vibration user interface capabilities of the selected device.

Package Alarm Mat and Container Device
20230351874 · 2023-11-02

The present invention relates to a novel package alarm device. The device is a storage box or door mat for delivered packages. The device comprises a base component with pressure sensors, such that when a package is placed on the base component, the weight and time of the package delivery are identified. Further, the base component comprises an alarm system with several flashing alarms that activate if motion is detected near the base component to prevent theft of a package left on its surface. The base component can be activated or deactivated with a remote and may be used in conjunction with a smart doorbell system. Further, the base component can comprise its own camera and notification system and can send a text message and a photo/video of the delivery to the user and/or the delivery service through a smartphone application.
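
The mat's behaviour reduces to a small state machine: a pressure event records a delivery, and a motion event triggers the alarm only while the mat is armed and holding a package. Names and the timestamp source below are illustrative.

```python
import time

class PackageMat:
    """Minimal model of the alarm mat's event logic."""

    def __init__(self):
        self.armed = True        # toggled by the remote in the real device
        self.deliveries = []     # (weight_kg, timestamp) per detected package

    def on_pressure(self, weight_kg, now=None):
        """Pressure sensor fired: log the package's weight and delivery time."""
        self.deliveries.append((weight_kg, now or time.time()))

    def on_motion(self):
        """Motion detected near the mat: sound the flashing alarms only
        when armed and a package is actually sitting on the surface."""
        return self.armed and bool(self.deliveries)
```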

EDGE AND GENERATIVE AI-BASED SUSTAINABLE GPS NAVIGATED WEARABLE DEVICE FOR BLIND AND VISUALLY IMPAIRED PEOPLE

The present invention relates to an EdgeGenAI based sustainable GPS navigated wearable device (100) for blind and visually impaired people. The device (100) comprises a GPS navigation unit, a plurality of sensors, an obstacle detection unit, a haptic feedback unit, an audio prompts unit, a central processing unit, power sources, and a user interface unit. The GPS navigation unit is configured to provide real-time positioning and route guidance to the user. The audio prompts unit is configured to provide auditory instructions and information to the user during navigation. The power sources are configured to supply electrical power to the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit, and the haptic feedback unit. The user interface unit is configured to provide an intuitive and accessible interface for blind and visually impaired individuals.
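
One concrete piece of the haptic feedback unit's job, turning an obstacle distance into a vibration strength, can be sketched as a linear ramp. The 3-metre range and the linear mapping are assumptions, not taken from the specification.

```python
def haptic_intensity(distance_m, max_range_m=3.0):
    """Map a distance reported by the obstacle detection unit to a
    vibration intensity in [0, 1]: closer obstacles buzz harder."""
    if distance_m >= max_range_m:
        return 0.0                      # out of range: no feedback
    return 1.0 - distance_m / max_range_m
```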