Dental Device With Speech Recognition
20230008250 · 2023-01-12
Inventors
- Michael Brotzge (Koblach, AT)
- Michael Tiefenthaler (Frastanz, AT)
- Johannes Matt (Mauren, LI)
- Robert Grünenfelder (Schaanwald, LI)
- Diego Gabathuler (Buchs, CH)
CPC Classification
G10L15/22
PHYSICS
B33Y30/00
PERFORMING OPERATIONS; TRANSPORTING
G10L15/30
PHYSICS
A61C13/0004
HUMAN NECESSITIES
International Classification
G10L15/22
PHYSICS
Abstract
A dental device with a speech recognition module is provided. The speech recognition module is connected to a control device that controls at least part of the functions of the dental device, has at least one microphone, and, based on the recognition result, triggers a selected function of the dental device via the control device. An output module outputs information about the triggered function. The speech recognition module continuously listens via the microphone and has a code word module that, when a code word is recognized, activates or leaves active speech recognition for the temporally successive words, which it then attempts to recognize as predetermined control words, each assigned to a function.
Claims
1. A dental device comprising a device selected from the group consisting of a dental furnace, a dental milling machine, a dental rapid prototyping device, a dental end-finishing device, a dental polishing device, a dental polymerization device, a dental CAD/CAM device, a dental extraction device, a dental mixer, a dental casting device, a dental press, a dental thermoforming device, a dental vibrator and other dental device, said dental device comprising a speech recognition module, a control device, wherein the speech recognition module is connected to the control device, which control device controls at least a part of the functions of the dental device, at least one microphone in the speech recognition module, wherein the speech recognition module triggers a selected function of the dental device via the control device based on a recognition result, an output module connected to the control device that outputs information about the triggered function, wherein a code word module is provided which module continuously listens via the microphone, wherein the code word module activates or leaves active the speech recognition module for speech recognition of further words when a code word is recognized, and wherein the speech recognition module attempts to recognize the further words as predetermined control words or control commands each assigned to a function.
2. The dental device according to claim 1, wherein the speech recognition module is switched off or deactivated as long as the code word module has not recognized a code word.
3. The dental device according to claim 1, wherein the code word module has an analog/digital converter which operates with lower energy and/or lower sampling rate than an analog/digital converter of the speech recognition module.
4. The dental device according to claim 1, wherein the code word module is implemented separately from the speech recognition module and is switched on when the dental device is switched on, and wherein the speech recognition module is switched off as long as the code word module has not recognized a code word.
5. The dental device according to claim 1, wherein the speech recognition module is integrated into the dental device and is in electrical connection with the control device of the dental device.
6. The dental device according to claim 1, wherein the speech recognition module is spatially separated from the dental device and is or can be brought into data connection with the control device of the dental device.
7. The dental device according to claim 1, wherein the speech recognition module is connected to the output module that acoustically and/or visually reproduces the recognized word or words.
8. The dental device according to claim 1, wherein the speech recognition module is connected via an AI module to the output module, which acoustically and/or visually outputs responses to the recognized word or words, wherein the AI module is part of the control device or is realized in a cloud or other remote memory.
9. The dental device according to claim 1, wherein the speech recognition module is connected to the Internet via an AI module or the speech recognition module accesses the AI module via the Internet, and wherein the AI module or the speech recognition module or the control device triggers an action or an order, based on the recognized word or words.
10. The dental device according to claim 1, wherein the speech recognition module is connected to the output module, which, based on the recognized word or words, acoustically or visually displays a selection of existing functions and prompts for selection from the selection of existing functions.
11. The dental device according to claim 1, wherein the control device outputs, acoustically and/or visually, via the output module, information about the function the control device performs and/or is to perform.
12. The dental device according to claim 1, wherein the control device outputs a selection or a question about functions executed and/or to be executed acoustically and/or visually via the output module and waits for a response from a user via the speech recognition module and triggers a function based on the recognized word or words.
13. The dental device according to claim 12, wherein the control device stores a function to be started in the future based on the word or words recognized by the speech recognition module and starts the function at the correct time.
14. The dental device according to claim 1, wherein the speech recognition module and/or the control device is connected by data connection to at least one other dental device having a control device.
15. A dental device comprising a camera, an image recognition module connected to a control device that controls at least part of the functions of the dental device, an output module connected to the control device, and a feedback device connected to the control device, wherein the image recognition module recognizes a picture taken by the camera and triggers a selected function of the dental device via the control device based on the received recognition result, and wherein the feedback device provides information about the triggered function via the output module.
16. The dental device according to claim 15, wherein the image recognition module continuously records and recognizes images via the camera and unlocks rights when recognizing a particular image.
17. The dental device according to claim 15, wherein the image recognition module activates or leaves active rights existing for a user to trigger functions of the dental device when recognizing a body part of the user.
18. The dental device according to claim 15, wherein the image recognition module allows a user to perform at least one function of the dental device to which the user is entitled by recognizing temporally successive images and/or wherein the speech recognition module allows a user to perform at least one function of the dental device to which the user is entitled by recognizing temporally successive words.
19. A method of activating a function of a dental device comprising a device selected from the group consisting of a dental furnace, a dental milling machine, a dental rapid prototyping device, a dental end-finishing device, a dental polishing device, a dental polymerization device, a dental CAD/CAM device, a dental extraction device, a dental mixer, a dental casting device, a dental press, a dental thermoforming device, a dental vibrator and other dental device, said method comprising a code word module continuously listening via a microphone, the code word module activating or leaving active a speech recognition module for speech recognition of further words when a code word is recognized, the speech recognition module attempting to recognize the further words as predetermined control words or control commands each assigned to a function, the speech recognition module triggering a selected function of the dental device via a control device based on a recognition result, an output module connected to the control device outputting information about the triggered function.
20. A computer program product comprising program code which is stored on a non-transitory machine-readable medium, the machine-readable medium comprising computer instructions executable by a processor, which computer instructions cause the processor to perform the method according to claim 19.
Description
BRIEF DESCRIPTION OF THE INVENTION
[0089] Further advantages, details and features result from the following description of exemplary embodiments of a dental device according to the invention with reference to the drawings.
DETAILED DESCRIPTION
[0092] A dental device 10 is provided in
[0093] The dental milling machine can form the CAM device of a CAD/CAM device. It can also be part of a milling center for dental technology.
[0094] The dental device 10 comprises a schematically indicated working space 12. Outside the working space 12, for example above or below it, a speech recognition module 14 is provided. The speech recognition module 14 is implemented by means of a combination of hardware and software.
[0095] The speech recognition module 14 includes a microphone 16. The microphone 16 continuously picks up voices and sounds from the environment. Its output signal is connected to an A/D converter which digitizes the output signal.
[0096] The digital output signal is fed to a code word module 18, which is also housed in the speech recognition module 14.
[0097] The code word module 18 has speech recognition capabilities, but in a reduced manner. It is only capable of recognizing a code word and then signaling that.
[0098] The code word used is a word that is easily recognizable by a machine and does not occur in everyday speech. Preferably, the code word is predetermined in the delivery language of the dental device 10. It can be changed at the user's choice, although not arbitrarily: it can be replaced by one of, for example, five other equally suitable words.
[0099] When the code word has been recognized, the rest of the speech recognition module 14 is switched on or activated. If the speech recognition module 14 is already active, it is left active.
[0100] Subsequent words or phrases are interpreted by the speech recognition module 14 and the speech recognition module then attempts to recognize control words.
[0101] Control words are those words associated with a function of the dental device 10. The choice of such words is limited, but not so narrowly that only one word is associated with each function. Preferably, synonymous or similar words trigger the same function.
[0102] Examples can be found in the table below:
TABLE-US-00001: Dental oven

  Control word     Function
  hotter           Turn on the heating
  warmer           Turn on the heating
  fire             Turn on the heating
  cool             Switch off the heating and open the furnace hood later
  Stop             Switch off the heating and open the furnace hood later
  End              Switch off the heating and open the furnace hood later
TABLE-US-00002: Dental milling machine

  Control word            Function
  fine                    Display selection of finishing programs
  simple                  Display selection of finishing programs
  finish milling          Display selection of finishing programs
  Start at 7 a.m.         Start the specified milling program at 7 a.m. the next day
  Start 7 a.m. tomorrow   Start the specified milling program at 7 a.m. the next day
  Milling 7 o'clock       Start the specified milling program at 7 a.m. the next day
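The many-to-one mapping from synonymous control words to a shared function, as in the tables above, amounts to a simple dispatch table. The following minimal sketch shows the idea for the dental oven; the function-code strings and the lowercase matching are illustrative assumptions, not taken from the patent:

```python
# Minimal sketch: several synonymous control words map to the same
# function code, mirroring the dental-oven table above.
# The function-code strings are illustrative, not from the patent.
CONTROL_WORDS = {
    "hotter": "HEAT_ON",
    "warmer": "HEAT_ON",
    "fire":   "HEAT_ON",
    "cool":   "HEAT_OFF_OPEN_HOOD",
    "stop":   "HEAT_OFF_OPEN_HOOD",
    "end":    "HEAT_OFF_OPEN_HOOD",
}

def dispatch(recognized_word):
    """Return the function code for a recognized control word, or None."""
    return CONTROL_WORDS.get(recognized_word.lower())
```

Extending the vocabulary for a given function then only means adding another key that points at the same function code.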
[0103] A code corresponding to the function of the dental device 10 to be triggered is fed by the speech recognition module 14 to a control device 20. The control device 20 programmatically controls the function of the dental device 10.
[0104] The control device 20 is connected to an output module 22. The output module 22 is able to output speech and has a speech synthesis unit for this purpose. It is connected to loudspeakers 24, 26 that convert the generated speech into sound.
[0105] Typically, the recognized function of the dental device 10 is confirmed again via the output module 22 and the loudspeakers 24 and 26.
[0106] The output module 22 is also connected to a touch screen 28, which is connected to the dental device 10 via WiFi or Bluetooth in the exemplary embodiment shown.
[0107] The generated words are displayed additionally or alternatively on the touchscreen 28.
[0108] Communication between the touchscreen 28 and the output module 22 is bidirectional. The output module 22 is also able to receive and forward user input, for example a confirmation from the user about the function to be executed. In this respect, the output module 22 could also be referred to as a communication module 22.
[0109] Typically, the function in question is not initiated by the control device 20 until the user input has confirmed that the function in question is actually desired.
[0110] The control device 20 also has an AI module 30. The AI module is used, for example, to monitor the functions of the dental device 10 to be performed and to intervene if they do not make sense. It is also able to answer user questions and outputs the answers via output module 22.
[0111] In addition, it can perform certain operations over the Internet, such as ordering consumables. This is also initially output via the output module 22 and must be confirmed by the user via the touch screen 28.
[0112] In a modified embodiment not shown, a camera or other identification scanner (e.g., an iris scanner or a fingerprint scanner) is provided on the dental device 10 or connected to it. This scanner captures physical characteristics of the user. The AI module evaluates these and thus identifies the user.
[0113] The following is a list of exemplary communication options that can be provided by the AI module 30:
[0114] 1. “Answer questions” communication option
[0115] User: IvoAI, how can the brightness value of e.max Ceram be increased?
[0116] IvoAI: By placing Opal Effect 4 or Deep Dentin directly on the wash or ZirLiner, the brightness value in the dentin range can already be increased from depth. Then layer these areas with dentin. (https://www.ivoclarvivadent.ch/de-ch/zahntechniker/ips-emax-ceram-Gebrauchsinformation—IPS e.max Ceram.pdf)
[0117] User: IvoAI, how can the cervical depth effect of e.max Ceram be increased?
[0118] IvoAI: Use the Cervical Transpa masses with slightly higher fluorescence to complete the layering in the cervical third—e.g., CT orange-pink. (https://www.ivoclarvivadent.ch/de-ch/zahntechniker/ips-emax-ceram-Gebrauchsinformation—IPS e.max Ceram.pdf)
[0119] User: IvoAI, how long is temperature calibration still valid?
[0120] IvoAI: The next temperature calibration should be done in two weeks.
[0121] User: IvoAI, is your software up to date?
[0122] IvoAI: Yes, my software, version 1.23, is up to date.
[0123] User: IvoAI, is any training on intraoral scanners available?
[0124] IvoAI: Yes, the next online training on PrograScan Intraoral scanners will take place on March 9th at 10:00 am. The first training is free of charge. Should I register you “John Doe”?
[0125] User: Yes.
[0126] IvoAI: The registration has been carried out; the date has been entered in your calendar.
[0127] User: IvoAI, what can be printed with the Ivoclar 3D printer?
[0128] IvoAI: The PrograPrint system supports the indications: veneers, occlusal veneers, inlays, onlays, partial crowns, crowns, bridges, occlusion-supported splints, drilling templates and models.
[0129] User: And what parts does this system consist of?
[0130] IvoAI: The PrograPrint system consists of high quality materials (ProArt Print) as well as equipment for printing (PrograPrint PR5), cleaning (PrograPrint Clean) and post-exposure (PrograPrint Cure). The material tray (PrograPrint Pool), material cartridge (PrograPrint Cartridge) and building platform (PrograPrint Stage) are accessories. The print data is created with the 3Shape software “CAMbridge”, processed using the specially developed software “PrograPrint Manager” and transferred to the printer. All system components are optimally matched to each other and form a validated overall process.
[0131] 2. “Execute functions” communication option
[0132] User: IvoAI, open Washbrand for e.max Ceram.
[0133] IvoAI: The firing program “IPS e.max Press layering technique—Washbrand” has been opened.
[0134] User: IvoAI, please clean.
[0135] IvoAI: Should I run self-cleaning?
[0136] User: Yes.
[0137] IvoAI: Self-cleaning has been started.
[0138] User: IvoAI, order Opal Effect masses.
[0139] IvoAI: Should I order a pack of IPS e.max Ceram Opal Effect masses from Ivoclar Vivadent?
[0140] User: Yes.
[0141] IvoAI: The order has been placed.
[0142] 3. “Inform, recognize approach” communication option
[0143] Milling system “PrograMill”:
[0144] User: IvoAI, start the Doe job with e.max CAD.
[0145] IvoAI: Should I start the Doe job with the existing IPS e.max CAD Disc?
[0146] User: Yes.
[0147] IvoAI: The job has been started.
[0148] ˜˜˜
[0149] The PrograMill recognizes the user's approach and informs independently, without prior user request.
[0150] IvoAI: The Doe job is completed. Now the objects have to be crystallized at the Programat.
[0151] ˜
[0152] The dental furnace “Programat” recognizes the approach of the user and informs independently, without prior user request.
[0153] IvoAI: In coordination with PrograMill, the “Impulse/MO/Crystallization” program was opened for the Doe job.
[0154] User: Okay
[0155] All the user has to do is place the firing tray with the objects in the firing chamber and start the program.
[0156] The “Ivoclar Vivadent Assistant”, the PrograMill and the Programat are connected via “Ivoclar-Cloud”.
[0157] “PrograPrint” dental SL printing system:
[0158] User: IvoAI, start the John Smith job tomorrow morning so that it is ready at 7 o'clock.
[0159] IvoAI: Should I start the John Smith job with the appropriate start delay so that it is ready tomorrow at 7 o'clock?
[0160] User: Yes.
[0161] IvoAI: The job was started with a delay.
[0162] ˜˜˜
[0163] The PrograPrint detects the user's approach the next morning and informs independently, without prior user request.
[0164] IvoAI: The John Smith job has been completed. Now the objects have to be cleaned for 3 minutes in the PrograPrint Clean. Finally, the objects must be finished with the program “ProArt Print Splint” in the PrograPrint Cure.
[0165] ˜
[0166] With the simple PrograPrint Clean, the user positions the building platform with the objects in the holder, sets it for 3 minutes and starts the device.
[0167] ˜
[0168] The PrograPrint Cure recognizes the user's approach and informs independently, without prior user request.
[0169] IvoAI: For the John Smith job the “ProArt Print Splint” program was opened.
[0170] User: Okay
[0171] The user only has to put the building platform with the objects in the finishing space and start the program.
[0172] In
[0173] Both dental devices can communicate with each other via a Bluetooth connection 32. This makes it possible, for example, to optimize the work to be carried out and also to coordinate the timing.
[0174] Both dental devices 10 and 10A are connected to the Internet via a router 34 such as a Fritzbox. These connections can be realized via WiFi or LAN or in any suitable way.
[0175] These modules include the codeword module 18, which in this embodiment of the invention is formed separately from the speech recognition module 14. Accordingly, the codeword module 18 includes a separate microphone 32. Alternatively, it is also possible to use only a single microphone 16 and to connect it to either the codeword module 18 or the speech recognition module 14 by means of an analog switch.
[0176] The microphone 32 is connected to an analog/digital converter 34, which converts the output signal of the microphone 32 into a digital signal in a manner known per se.
[0177] The sampling rate of the converter 34 is significantly lower than that of the analog/digital converter used in the speech recognition module 14. For example, the ratio of the sampling rates of the two converters may be 1:2, 1:4 or even 1:10. As a result, the data stream output by the converter 34 is significantly smaller than in the speech recognition module 14. The data is then processed in a manner known per se, e.g. via FFT using a Viterbi algorithm.
[0178] In the illustrated embodiment example, a recognition 36 is performed according to the hidden Markov model. This accesses a database 38 which comprises only a small vocabulary compared to the corresponding database in the speech recognition module 14.
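The division of labor sketched above, a low-rate converter feeding a recognizer that consults only a small vocabulary, can be illustrated as follows. The sampling rates, the code word itself, and the string-based matching are simplifying assumptions; an actual implementation would use the FFT front end and hidden-Markov-model recognition described in the text.

```python
# Illustrative sketch of the two-converter arrangement: the codeword
# module samples at a fraction of the main module's rate, so its data
# stream is correspondingly smaller.
MAIN_RATE_HZ = 16000          # assumed rate of the speech recognition module
RATIO = 4                     # e.g. 1:4, one of the ratios named above
CODEWORD_RATE_HZ = MAIN_RATE_HZ // RATIO

# Tiny database compared to the full recognition vocabulary;
# the code word "ivoai" is borrowed from the dialog examples.
CODEWORD_VOCABULARY = {"ivoai"}

def downsample(samples, ratio=RATIO):
    """Keep every ratio-th sample; stands in for the low-rate A/D converter."""
    return samples[::ratio]

def codeword_recognized(transcribed_word):
    """Stand-in for the HMM recognizer consulting the small database 38."""
    return transcribed_word.lower() in CODEWORD_VOCABULARY
```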
[0179] The codeword module 18 is switched on as soon as the dental device 10 is switched on.
[0180] In contrast, the speech recognition module 14 is switched off at this time.
[0181] The output signal of the codeword module 18 is fed to the control device 20. It assumes only one of two states, namely active and inactive.
[0182] The “active” signal is output when the codeword module 18 has recognized the codeword. Once the control device 20 receives the “active” signal, it turns on the speech recognition module 14.
[0183] Once the speech recognition module 14 is powered up, it monitors and analyzes the signals received through the microphone 16.
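The activation chain, in which the two-state output of the codeword module gates the power state of the speech recognition module, can be sketched as a minimal state machine (the class and method names are illustrative, not from the patent):

```python
class ControlDevice:
    """Minimal sketch: switches the speech recognition module on when the
    codeword module reports 'active', as described above."""

    def __init__(self):
        self.speech_module_on = False   # switched off at power-up

    def on_codeword_signal(self, state):
        # The codeword module emits only two states: 'active'/'inactive'.
        if state == "active":
            self.speech_module_on = True

    def hears_control_word(self, word):
        # A control word reaches the recognizer only while it is powered on.
        return self.speech_module_on
```

A control word spoken before the code word therefore falls on a deactivated module and triggers nothing.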
[0184] Conversely, this means that a word corresponding to a particular control command of the dental device will not trigger an action if it is spoken in the vicinity of the dental device before the code word has been said.
[0185] This behavior is highly desirable because it prevents a particular action of the dental device from being inadvertently triggered by the flow of speech in the environment.
[0186] In the speech recognition module 14, the received speech is continuously analyzed. If a word corresponding to a control command is recognized, a query, for example a request for confirmation, is output via the output module 22, if necessary. If the control command requires no further user input, it is transmitted by the control device 20 to the function module 40, which also comprises a rights module 41 for checking whether the recognized user has the rights to perform the desired function corresponding to the control command.
[0187] If the answer is in the affirmative, the function is permitted and forwarded to a heater 42 as an exemplary part of a dental device 10 configured as a dental furnace.
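The check performed by the rights module before a function is forwarded can be sketched as follows (the user names, rights sets, and function codes are invented for illustration):

```python
# Illustrative sketch of the rights module 41: a recognized command is
# forwarded only if the recognized user holds the matching right.
# User names and function codes are invented for illustration.
RIGHTS = {
    "alice": {"HEAT_ON", "HEAT_OFF_OPEN_HOOD"},
    "bob":   {"HEAT_OFF_OPEN_HOOD"},
}

def execute_if_permitted(user, function_code):
    """Return the function code to forward (e.g. to the heater),
    or None if the user lacks the right and the function is refused."""
    if function_code in RIGHTS.get(user, set()):
        return function_code
    return None
```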
[0188] A speaker 24 and a display device 44 are connected to the output module 22. Through these, an interaction with the user takes place.
[0189] As a further means of interaction with the user, the control device 20 is connected to a camera 48 via an image recognition module 46.
[0190] The camera 48 is directed at the user, for example, so that identity recognition can be performed via the image recognition module 46.
[0191] However, the user may also perform gestures that the camera 48 can recognize. In this case, the image recognition module 46 is equipped for gesture recognition.
[0192] Further, the camera 48 may also be fed with a code that is recognized by the image recognition module 46. For example, a block of dental restorative material to be fired in the dental furnace may be provided with a QR code. This is held in front of the camera 48, which recognizes it and feeds it to the control device 20. The control device 20 then automatically selects the appropriate firing program for the block.
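The automatic program selection from a recognized block code amounts to a lookup from QR payload to firing program. In this sketch the QR payload strings are invented; the program names are taken from the dialog examples earlier in the description:

```python
# Illustrative sketch: a QR code on a material block maps to the firing
# program that the control device 20 opens automatically.
# The QR payload strings are invented for illustration.
FIRING_PROGRAMS = {
    "EMAX-PRESS": "IPS e.max Press layering technique - Washbrand",
    "EMAX-CAD":   "Impulse/MO/Crystallization",
}

def select_program(qr_payload):
    """Return the firing program for a recognized block code, or None."""
    return FIRING_PROGRAMS.get(qr_payload.strip().upper())
```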
[0193] The image recognition module 46 can be used in addition to the speech recognition module 14. It may also be used alternatively. The codeword module 18 can also activate the image recognition module 46 instead of the speech recognition module 14, or both.
[0194] For the codeword module 18, the module LD3320 from the company Beaster can be used, for example, or existing ASR software on a Raspberry Pi. For the image recognition module 46, the module NPD101 from Symbiant can be used, for example.
[0195] It is to be understood that instead of the dental milling machines shown here, other dental devices suitable for cooperation can also be used. An example of this is a dental stereolithography printer and an end-finishing device.
[0196] It is also possible to equip only one of the devices with the inventive intelligence and then to let the devices work according to the master-slave principle.
[0197] Examples of hardware components that may be used to perform the operations described in this application where appropriate include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, subtractors, multipliers, dividers, integrators, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers.
[0198] A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. Hardware components implemented by a processor or computer may execute instructions or software, such as an operating system (OS) and one or more software applications that run on the OS, to perform the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of the instructions or software. For simplicity, the singular term “processor” or “computer” may be used in the description of the examples described in this application, but in other examples multiple processors or computers may be used, or a processor or computer may include multiple processing elements, or multiple types of processing elements, or both. For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or a processor and a controller, and one or more other hardware components may be implemented by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may implement a single hardware component, or two or more hardware components. 
A hardware component may have any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (SISD) multiprocessing, single-instruction multiple-data (SIMD) multiprocessing, multiple-instruction single-data (MISD) multiprocessing, and multiple-instruction multiple-data (MIMD) multiprocessing. The methods described herein may be encoded as executable instructions embodied in a computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processing device, cause the processing device to perform at least a portion of the methods described herein. The above examples are exemplary only, and thus are not intended to limit in any way the definition and/or meaning of the term processor and processing device.
[0199] Where used herein, the term “non-transitory” is a limitation on the computer-readable storage medium itself, that is, it is tangible and not a signal, as opposed to a limitation on the persistence of data storage. A non-transitory computer-readable storage medium does not necessarily store information permanently. Random-access memory (which may be volatile, non-volatile, dynamic, static, etc.), read-only memory, flash memory, memory caches, and any other tangible computer-readable storage medium, whether synchronous or asynchronous, are all examples of such a medium.
[0200] Although the invention is illustrated above, partly with reference to some preferred embodiments, it must be understood that numerous modifications and combinations of different features of the embodiments can be made. All of these modifications lie within the scope of the appended claims.