HEARING PROTECTION EQUIPMENT AND SYSTEM WITH TRAINING CONFIGURATION
20220223061 · 2022-07-14
Inventors
Cpc classification
A61B5/6803
HUMAN NECESSITIES
H04S7/302
ELECTRICITY
G06F3/017
PHYSICS
International classification
Abstract
The invention relates to a system comprising a hearing protection device (13) configured to be worn by a user (10); one or more audio output devices (26) configured to generate one or more audio signals; and a computing device (60) comprising a memory and one or more computer processors, wherein the memory comprises instructions that, when executed by the one or more computer processors, cause the one or more computer processors to: select a training configuration (25) that defines a set of audio events (74A) that correspond to a set of user reactions (74B); send a set of control signals to the one or more audio output devices (26) that cause the one or more audio output devices (26) to simulate the set of audio events (74A); receive reaction data (74C) that indicates whether the user (10) provided the set of user reactions (74B) to the set of audio events (74A); and perform at least one operation based at least in part on whether the user (10) provided the set of user reactions (74B) to the set of audio events (74A) while wearing the hearing protection device (13).
Claims
1. A system comprising: a hearing protection device 13 configured to be worn by a user 10, one or more audio output devices 26 configured to generate one or more audio signals, a computing device 60 comprising a memory and one or more computer processors, wherein the memory comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: select a training configuration 25 that defines a set of audio events 74A that correspond to a set of user reactions 74B, send a set of control signals to the one or more audio output devices 26 that cause the one or more audio output devices 26 to simulate the set of audio events 74A, receive reaction data 74C that indicates whether the user provided the set of user reactions 74B to the set of audio events 74A; and perform at least one operation based at least in part on whether the user 10 provided the set of user reactions 74B to the set of audio events 74A while wearing the hearing protection device 13.
2. The system according to claim 1, wherein the memory of the computing device 60 comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: send a set of control signals to the one or more audio output devices 26 that cause the one or more audio output devices 26 to simulate the set of audio events 74A, wherein the control signals are configured to cause the audio output devices 26 to generate at least one audio output signal that provides a three-dimensional acoustic experience to a user 10.
3. The system according to claim 1 or claim 2, wherein the memory of the computing device 60 comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: select a training configuration 25 that defines a set of audio events 74A that correspond to a set of user reactions 74B, send a set of control signals to the one or more audio output devices 26 that cause the one or more audio output devices 26 to simulate the set of audio events 74A at one or more particular locations in a three-dimensional space around the user 10 or in the hearing protection device 13 to simulate the three-dimensional acoustic experience to the user 10, receive reaction data 74C that indicates whether the user 10 provided the set of user reactions 74B to the set of audio events 74A at the one or more particular locations; and perform at least one operation based at least in part on whether the user 10 provided the set of user reactions 74B to the set of audio events 74A at the one or more particular locations while wearing the hearing protection device 13.
4. The system according to any of the preceding claims, wherein the memory of the computing device 60 comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: select a training configuration 25 that defines a set of audio events 74A that correspond to a set of user reactions 74B, send a set of control signals to the one or more audio output devices 26 that cause the one or more audio output devices 26 to simulate the set of audio events 74A, wherein the set of audio events 74A simulates a real-life three-dimensional acoustic experience that may occur during the day of a user 10.
5. The system according to any of the preceding claims, wherein the memory of the computing device 60 comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: reference the set of audio events 74A to a set of user reactions 74B identifying the corresponding location in the three-dimensional space around the user 10 from which the acoustic experience comes, and/or reference the set of audio events 74A to a set of user reactions 74B identifying the kind of acoustic experience.
6. The system according to any of the preceding claims, wherein the memory of the computing device 60 comprises instructions that when executed by the one or more computer processors cause the one or more computer processors to: reference the set of audio events 74A to a set of defined time frames for the time passing between sending out the set of control signals to the one or more audio output devices 26 that cause the one or more audio output devices 26 to simulate the set of audio events 74A and receiving reaction data 74C that indicates whether the user 10 provided the set of user reactions 74B.
7. The system according to any of the preceding claims, wherein the at least one operation based at least in part on whether the user 10 provided the set of user reactions 74B to the set of audio events 74A while wearing the hearing protection device 13 is selected from one or more of the following operations: providing predefined feedback to the user, providing a selection of an appropriate next training configuration, providing a suggestion of an appropriate hearing protection device, creating a profile of a user from a number of reaction data, providing information packages for the management of the user, providing a comparison of reaction data of one user with reaction data of another user, providing a comparison of reaction data of one user with the profile of this user.
8. The system according to any of the preceding claims, wherein the hearing protection device 13 may be any kind of known hearing protection device worn in or over at least one ear while exposed to hazardous noise to help prevent noise-induced hearing loss.
9. The system according to any of the preceding claims, wherein the hearing protection device 13 is communicatively coupled to the computing device.
10. The system according to any of the preceding claims, wherein the hearing protection device 13 may be equipped with communication devices such as, for example, a headset.
11. The system according to any of the preceding claims, wherein the audio output device 26 is integrated in the hearing protection equipment.
12. The system according to any of the claims 1 to 10, wherein the audio output device 26 is positioned in a training space.
13. The system according to any of the preceding claims, wherein the system also comprises a reaction recognition device, which is communicatively coupled to the computing device 60 and which is providing input signals for the received reaction data 74C.
14. The system according to any of the preceding claims, wherein the reaction recognition device is a separate device or is integrated into components of the system such as, for example, the hearing protection device.
15. The system according to any of the preceding claims, wherein the reaction recognition device may be configured such that it can be held in a user's hand or such that it can be fixed to a user's equipment or body.
16. The system according to any of the preceding claims, wherein the reaction recognition device comprises a camera or an acceleration sensor.
Description
[0051] The invention will now be described in more detail with reference to the following Figures exemplifying particular embodiments of the invention:
[0052]
[0053]
[0054]
[0055] Herein below various embodiments of the present invention are described and shown in the drawings wherein like elements are provided with the same reference numbers.
[0056]
[0057] As shown in
[0058] In this example, environments 8A and 8B are shown generally as having users 10, while environment 8C is shown in expanded form to provide a more detailed example and has only one user 10.
[0059] As shown in the example of
[0060] In addition, an environment, such as the environment 8C, may also include one or more wireless-enabled sensing stations, such as sensing stations 21A, 21B, 21C. Each sensing station 21 includes one or more sensors and a controller configured to output data indicative of sensed environmental conditions. Moreover, sensing stations 21 may be positioned within respective geographic regions of environment 8 or otherwise interact with beacons 17 to determine respective positions and include such positional data when reporting environmental data to safety training system 6. Example environmental conditions that may be sensed by sensing stations 21 include but are not limited to temperature, humidity, presence of gas, pressure, visibility, wind and the like.
[0061] In the example of
[0062] The hearing protection device 13, as well as the other mentioned personal protection devices, may include embedded sensors or monitoring devices and processing electronics configured to capture data in real-time as a user engages in activities while utilizing (e.g. wearing) the hearing protection device 13 or the other devices. The hearing protection device 13 may include a number of equipment sensors for sensing or controlling the operation of such components.
[0063] In addition, each hearing protection device 13 may include one or more output devices for outputting data that is indicative of the operation of the hearing protection device 13 and/or generating and outputting communications to the respective user 10. For example, hearing protection device 13 may include one or more devices to generate audible feedback (e.g. one or more speakers), visual feedback (e.g., one or more displays, light emitting diodes (LEDs) or the like), or tactile feedback (e.g. a device that vibrates or provides other haptic feedback).
[0064] In general, each of environments 8 includes computing facilities (e.g., a local area network) by which physiological sensors 22, sensing stations 21, beacons 17, and/or hearing protection device 13 are able to communicate with the safety training system 6. For example, environments 8 may be configured with wireless technology. In the example, environment 8C includes a local network 7 that provides a packet-based transport medium for communicating with the safety training system 6 via network 4. Environment 8 may include a wireless access point 19 to provide support for wireless communications. In some examples, and depending on the size of the environment, environment 8 may include a plurality of wireless access points 19 distributed throughout the environment 8C to provide support for wireless communications throughout the environment.
[0065] In some examples, the user 10 may be equipped with a respective one of wearable communication hubs 14 that enable and facilitate wireless communication. Each of environments 8 may also include computing facilities that provide an operating environment for end-user computing devices 16 for interacting with the safety training system 6 via network 4. Similarly, remote users may use computing devices 18 to interact with safety training system 6 via network 4. For purposes of example, the end-user computing devices 16 may be laptops, desktop computers, mobile devices such as tablets or smartphones, and the like.
[0066] In one example, environment 8 may provide one or more audio output devices 26. The audio output devices may be communicatively coupled to the safety training system 6 via the network 4. The one or more audio output devices may be selected and configured such that they are able to create a three-dimensional audio experience for the user. The safety training system 6 may send out control signals to the one or more audio output devices 26 in order to cause the one or more audio output devices to simulate a set of audio events. The audio output devices 26 may also be integrated into the hearing protection device 13 such that they are able to create a three-dimensional acoustic experience for the user (not shown in the drawings).
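The simplest building block of such a three-dimensional audio experience is placing a source on the horizontal plane by splitting its energy between the left and right channels. The sketch below shows equal-power panning as one illustrative approach; the function name and angle convention are assumptions, and a real spatial-audio implementation would typically use head-related transfer functions (HRTFs) rather than plain panning.

```python
import math

def pan_gains(azimuth_deg):
    """Equal-power stereo panning: map a source azimuth between
    -90 (hard left) and +90 degrees (hard right) to left/right
    channel gains whose squared sum is always 1 (constant power)."""
    # map azimuth -90..+90 degrees onto a pan angle 0..pi/2
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

# A source straight ahead (0 degrees) is heard equally on both sides.
left, right = pan_gains(0.0)
```

A control signal for an audio event at a given simulated location could then scale the event's waveform by these gains before playback on the respective loudspeaker or earcup.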
[0067]
[0068] In
[0069] Client applications executing on computing devices 60 may communicate with the safety training system 6 to send and receive data that is retrieved, stored, generated, and/or otherwise processed by services 68. For instance, the client applications may request and edit safety training configurations 25, including audio events and user reactions, stored and/or managed by the safety training system 6. In some examples, client applications may request the display of audio events and corresponding user reactions. The client applications may interact with safety training system 6 to query for analytics data about past and predicted safety training configurations and received reaction data. In some examples, the client applications may output for display data received from safety training system 6, such as reaction data. As further illustrated and described below, safety training system 6 may provide data to the client applications, which the client applications output for display in user interfaces.
[0070] Client applications executing on computing devices 60 may be implemented for different platforms but include similar or the same functionality. For instance, a client application may be a desktop application compiled to run on a desktop operating system or a mobile application compiled to run on a mobile operating system.
[0071] As shown in
[0072] As shown in
[0073] Application layer 66 may include one or more separate software services 68, e.g. processes that communicate, e.g. via a logical service bus 70 as one example. Service bus 70 generally represents logical interconnections or set of interfaces that allows different services to send messages to other services, such as by a publish/subscription communication model. For instance, each of services 68 may subscribe to specific types of messages based on criteria set for the respective service. When a service publishes a message of a particular type on service bus 70, other services that subscribe to messages of that type will receive the message. In this way, each of services 68 may communicate data to one another. As another example, services 68 may communicate in point-to-point fashion using sockets or other communication mechanisms. Before describing the functionality of each service 68, the layers are briefly described herein.
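The publish/subscribe model described for service bus 70 can be sketched in a few lines. This is a minimal illustrative implementation, not the system's actual bus; the class and method names are assumptions.

```python
from collections import defaultdict

class ServiceBus:
    """Minimal publish/subscribe bus: services subscribe to message
    types, and every message published under a type is delivered to
    all subscribers of that type."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # message type -> callbacks

    def subscribe(self, message_type, callback):
        self._subscribers[message_type].append(callback)

    def publish(self, message_type, payload):
        for callback in self._subscribers[message_type]:
            callback(payload)

# Example: a notification service subscribing to reaction messages.
received = []
bus = ServiceBus()
bus.subscribe("reaction", received.append)
bus.publish("reaction", {"user": 10, "correct": True})
bus.publish("heartbeat", {"hub": 14})  # no subscriber, silently dropped
```

Because each service only declares the message types it cares about, services can be added or removed without the publishers knowing about them, which is the decoupling the paragraph above describes.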
[0074] Data layer 72 of the safety training system 6 represents a data repository that provides persistence for data in the safety training system 6 using one or more data repositories 74. A data repository, generally, may be any data structure or software that stores and/or manages data. Examples of data repositories include but are not limited to relational databases, multi-dimensional databases, maps, and hash tables, to name only a few examples. In the safety training system 6 according to the invention, the data repositories may for example provide audio event data 74A, sets of user reaction data 74B, received reaction data 74C, or user profiles 74D.
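As a concrete illustration, the four repositories 74A-74D could be modeled as a simple in-memory map of named tables. All field names and values here are hypothetical placeholders, chosen only to mirror the categories the paragraph above lists.

```python
# Hypothetical in-memory data layer mirroring repositories 74A-74D.
data_repository = {
    "audio_events_74A": [
        {"id": 1, "sound": "forklift_horn", "location": (2.0, 0.0, 1.5)},
    ],
    "expected_reactions_74B": [
        {"event_id": 1, "gesture": "point_left", "max_seconds": 3.0},
    ],
    "received_reactions_74C": [],   # filled in during a training run
    "user_profiles_74D": {},        # keyed by user id
}

def record_reaction(repo, event_id, gesture, elapsed):
    """Append one received reaction record as name/value pairs,
    matching the row-oriented format described for 74C."""
    repo["received_reactions_74C"].append(
        {"event_id": event_id, "gesture": gesture, "elapsed": elapsed}
    )

record_reaction(data_repository, 1, "point_left", 1.8)
```

In a production system these tables would live in a relational or multi-dimensional database as the paragraph notes; the dictionary form is only meant to make the relationships between 74A, 74B, and 74C visible.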
[0075] As shown in
[0076] In accordance with techniques of the disclosure, services 68 may include an event endpoint frontend 68A, event selector 68B, an event processor 68C, a notification service 68D, a safety training management service 68E and/or a record management and reporting service 68F.
[0077] Event endpoint frontend 68A operates as a frontend interface for exchanging communications with hubs 14 and safety equipment 62. In other words, event endpoint frontend 68A operates as a frontline interface to safety equipment deployed within environments 8 and utilized by users 10. Each incoming communication may, for example, carry recently captured data representing sensed conditions, motions, temperatures, actions, or other data, generally referred to as events. Communications exchanged between the event endpoint frontend 68A and safety equipment 62 and/or hubs 14 may be real-time or pseudo real-time depending on communication delays and continuity.
[0078] Event selector 68B operates on the stream of events 69 received from safety equipment 62 and/or hubs 14 via frontend 68A and determines, based on rules or classifications, priorities associated with the incoming events. Based on the priorities, event selector 68B enqueues the events for subsequent processing by event processor 68C. In general, event processor 68C operates on the incoming streams 69 to update data within the data repository 74. For example, received reaction data 74C or user profiles 74D may be updated. User reactions 74B may include information identifying the corresponding location of an audio event in a three-dimensional space and information identifying the kind of acoustic experience of the audio event. In other instances, the user reactions 74B may include information about the time that has passed between sending out a set of audio events and receiving reaction data.
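The prioritize-then-enqueue behavior of event selector 68B can be sketched with a priority queue. The priority rules below are purely illustrative assumptions (the specification does not define concrete event types or rankings); the sketch only shows the mechanism of classifying incoming events and handing them to the processor in priority order.

```python
import heapq

# Hypothetical priority table: lower number = processed sooner.
PRIORITY = {"missed_reaction": 0, "reaction": 1, "heartbeat": 2}

class EventSelector:
    """Classify incoming events by priority and enqueue them for a
    downstream event processor (68C in the text's numbering)."""
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def enqueue(self, event):
        prio = PRIORITY.get(event["type"], 3)  # unknown types go last
        heapq.heappush(self._queue, (prio, self._counter, event))
        self._counter += 1

    def next_event(self):
        return heapq.heappop(self._queue)[2]

selector = EventSelector()
selector.enqueue({"type": "heartbeat", "hub": 14})
selector.enqueue({"type": "missed_reaction", "user": 10})
first = selector.next_event()  # the missed reaction jumps the queue
```

The counter in the heap tuple prevents Python from ever comparing two event dictionaries directly and preserves arrival order among events of equal priority.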
[0079] The event processor 68C may create, read, update, and delete received reaction data 74C or users profiles 74D. Received reaction data 74C may include name/value pairs of data, such as data tables specified in row/column format.
[0080] In accordance with the techniques of this disclosure, safety training management service 68E may determine a safety training configuration for one or more respective users 10 based at least in part on events within event stream 69. Safety training management service 68E may be configured to determine the appropriate safety training configuration for a user 10 based at least in part on a selection made by the user 10 or by the safety management of the user 10, on selected data of the user 10, and on one or more rules. Although other technologies can be used, in some examples the one or more rules are generated using machine learning. In other words, in one example implementation, safety training management service 68E utilizes machine learning when operating on event streams 69 so as to perform real-time analytics. That is, safety training management service 68E may include executable code generated by application of machine learning to determine a safety risk score for the user.
[0081] In the following one example of an application of the safety training system 6 according to the invention will be described referring to the
[0082] There are different ways in which the user might get exposed to the audio events. One example is that the user needs to enter a special training room or space 8 together with the hearing protection device 13. This training room or space 8 may, for example, be equipped with audio output devices 26 in the form of loudspeakers. The audio output devices 26 may further be communicatively connected with the training system 6 according to the invention and may be used to generate the audio events for the user 10.
[0083] Another possibility is that the audio output devices are small loudspeakers that are integrated into the hearing protection device 13. Such a configuration provides the advantage that no special training room 8 is required and that the training can be done anywhere.
[0084] As soon as the user 10 has entered the training room 8 and put on the hearing protection device 13 (or, in the second example, as soon as the user 10 has put on the hearing protection device 13) and as soon as someone has started the training system, a training configuration is selected. The training configuration may include a series of audio events 74A that resemble, for example, audio events 74A of real-life working environments. The training configurations may exist in different levels, such as a starter level as well as several levels for more experienced users 10. Training configurations may also exist for different working environments. For each audio event 74A the training configuration may provide a set of user reactions 74B that the system expects and considers to be the right reaction of a user. Which kind of information the user reactions may include will be described in the following.
[0085] As a next step the training system 6 may send out a set of control signals to the one or more audio output devices (either separate audio output devices 26 in a training room 8 or audio output devices integrated into the hearing protection device 13) that cause the one or more audio devices to simulate a set of audio events 74A. According to the training rules, the user 10 needs to concentrate on those audio events 74A and identify them. Thus, the user 10 needs to react in a certain way (see the set of user reactions 74B above) to each of the audio events 74A.
[0086] The training system 6 will, as a next step, receive reaction data 74C, wherein the reaction data 74C indicates whether or not the user provided the set of user reactions 74B to the set of audio events 74A mentioned before.
[0087] Depending on the received reaction data 74C, the training system 6 will perform at least one operation based at least in part on whether the user provided the set of user reactions to the set of audio events while wearing the hearing protection device 13. Depending on the reason for the training, the operations may be very different. For the above-described case, where a user is trained on a new working environment and a new hearing protection device, the selected operations may, for example, be to first save the received reaction data in the data layer 72 of the system 6. The training system 6 may further provide direct feedback to the user 10, for example by indicating whether the user 10 came up with reactions 74C that match the set of user reactions 74B that the training configuration includes. Or the system 6 may provide feedback if the user's reactions 74C do not match the set of user reactions 74B.
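The steps of paragraphs [0084] to [0087] (select a configuration, simulate each audio event, capture the reaction, and compare it against the expected reaction and its time frame per claim 6) can be sketched as a single loop. All names, data fields, and the stubbed devices below are hypothetical; the sketch only fixes the order of operations described in the text.

```python
def run_training(configuration, play_event, capture_reaction):
    """Hypothetical training loop: simulate each audio event (74A),
    capture the user's reaction (74C), and compare it against the
    expected reaction (74B) and its allowed time frame."""
    results = []
    for event, expected in zip(configuration["events"],
                               configuration["expected_reactions"]):
        play_event(event)                      # control signal to audio output 26
        gesture, elapsed = capture_reaction()  # reaction data from the user
        correct = (gesture == expected["gesture"]
                   and elapsed <= expected["max_seconds"])
        results.append({"event": event["id"], "correct": correct})
    return results

# Stubbed audio output and reaction recognition devices for illustration.
config = {
    "events": [{"id": 1, "sound": "alarm"}],
    "expected_reactions": [{"gesture": "raise_hand", "max_seconds": 3.0}],
}
results = run_training(config,
                       play_event=lambda e: None,
                       capture_reaction=lambda: ("raise_hand", 1.2))
```

The per-event result list is what the operations of paragraph [0087] would act on: stored in data layer 72, fed back to the user, or aggregated into a profile 74D.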
[0088] Another possible operation of the training system 6 may be the proposal of an appropriate next-level safety training configuration that considers the results of the training just completed.
[0089] The training system 6 may also propose another hearing protection device if, for example, the received reaction data 74C provides information that the user is not well enough connected to the world outside of the hearing protection device.
[0090] The system may also create a profile 74D of the user and store it in the data layer 72. A profile of a user 10 may be used for different purposes. One idea is to compare a profile of one user 10 with the profile of other peer users 10. Another idea is to compare a profile of a user 10 over a longer period of time to be able to detect any development. If the training is done on a frequent basis, the development may for example be used as an indicator of fatigue.
[0091] The different steps that the safety training system 6 performs and that have been described above may also be seen in
[0092] The system according to the invention may also comprise a reaction recognition device. The reaction recognition device may be a separate device, or it may be integrated into components of the training system such as, for example, the hearing protection device. The reaction recognition device needs to be configured such that it is able to recognize a reaction of a user. It may recognize the reaction of a user, for example, through recognizing a predefined gesture or movement of the user. It should also be able to log the time at which it recognizes the reaction of the user. One example of a reaction recognition device may be a camera. It may also be an acceleration sensor. The reaction recognition device may be configured such that it can be held in a user's hand, fixed to a user's equipment or body, or placed somewhere in the space around the user.
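For the acceleration-sensor variant, a toy recognizer might flag the first sample whose acceleration magnitude exceeds a threshold, taking that sample's index as the logged reaction time step. The threshold, units, and sample format below are illustrative assumptions; a practical gesture recognizer would use filtering or a trained classifier rather than a single threshold.

```python
def detect_reaction(samples, threshold=2.5):
    """Toy reaction recognizer for an acceleration sensor: return the
    index of the first (x, y, z) sample whose magnitude exceeds the
    threshold (e.g. a sharp hand movement), or None if no reaction."""
    for i, (x, y, z) in enumerate(samples):
        magnitude = (x * x + y * y + z * z) ** 0.5
        if magnitude > threshold:
            return i
    return None

# Quiet readings (roughly 1 g of gravity) followed by a sudden movement.
samples = [(0.1, 0.0, 1.0), (0.0, 0.1, 1.0), (0.2, 0.1, 1.0), (3.0, 1.0, 1.0)]
step = detect_reaction(samples)
```

Pairing the detected index with the sensor's sampling rate would yield the elapsed time between the audio event and the reaction, which is what the time frames of claim 6 compare against.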