Interactive and Customizable Dolls

20260027484 · 2026-01-29

    Abstract

    An interactive doll can include an audio device. The interactive doll can be associated with a computing device. In some embodiments, the computing device includes an interactive doll app associated with the interactive doll. In some embodiments, the interactive doll is customizable. In some embodiments, the doll utilizes AI and/or machine learning.

    Claims

    1. A doll comprising: a body; an audio device; and a controller associated with the audio device, wherein the controller provides an audio output through the audio device mimicking a particular voice.

    2. The doll of claim 1, wherein the audio device comprises: a speaker; and a microphone, wherein the controller is associated with a computing device, and wherein said doll utilizes artificial intelligence.

    3. The doll of claim 1, wherein the audio device includes the controller.

    4. The doll of claim 1, wherein the audio device includes a speaker.

    5. The doll of claim 1, wherein the audio device includes a microphone.

    6. The doll of claim 1, wherein the audio device is removably located in the body.

    7. The doll of claim 1, wherein the audio device includes a power source.

    8. The doll of claim 7, wherein the power source is a rechargeable battery.

    9. The doll of claim 1, wherein the audio device captures a voice recording.

    10. The doll of claim 1, wherein the controller is associated with a computing device, and wherein said doll utilizes artificial intelligence.

    11. A computer-readable medium comprising a computer-executable instruction that when executed by a processor, causes the processor to: receive an audio file including a voice; and generate an audio output mimicking the voice.

    12. The computer-readable medium of claim 11, wherein the audio file includes at least one of a voice recording, a voicemail, and a voice memo.

    13. The computer-readable medium of claim 11, wherein the audio output includes reading a book in the mimicked voice; having a conversation in the mimicked voice; and/or playing a game in the mimicked voice.

    14. The computer-readable medium of claim 11, further comprising a second computer-executable instruction that when executed by said processor, causes said processor to interact with a user to generate a story.

    15. The computer-readable medium of claim 14, wherein the audio output is reading the generated story in the mimicked voice; and wherein said processor utilizes artificial intelligence.

    16. The computer-readable medium of claim 11, further comprising a second computer-executable instruction that when executed by said processor, causes the processor to provide the audio output through an audio device associated with a doll.

    17. A system, comprising: a doll; a processor; and a non-transitory computer-readable medium comprising a computer-executable instruction that when executed by the processor, causes the processor to: receive an audio file including a voice; and generate an audio output mimicking the voice, wherein the audio file is at least one of a voice recording, a voicemail, and a voice memo.

    18. The system of claim 17, wherein the audio output includes reading a book in the mimicked voice; having a conversation in the mimicked voice; and/or playing a game in the mimicked voice.

    19. The system of claim 17, further comprising a second computer-executable instruction that when executed by said processor, causes the processor to interact with a user to generate a story, wherein the audio output is reading the generated story in the mimicked voice and wherein said system utilizes artificial intelligence.

    20. The system of claim 17, further comprising a second computer-executable instruction that when executed by said processor, causes said processor to provide the audio output through an audio device associated with said doll.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0022] FIG. 1 is a front view of an embodiment of an interactive doll.

    [0023] FIG. 2 is a back view of the interactive doll of FIG. 1.

    [0024] FIG. 3 is an isometric view of an embodiment of an audio device.

    [0025] FIG. 4 is a block diagram of an embodiment of a computing device.

    [0026] FIG. 5 is a block diagram of an embodiment of an interactive doll mobile app.

    [0027] FIG. 6A is a plan view of an embodiment of the interactive doll mobile app launched on a computing device.

    [0028] FIG. 6B is a plan view of the embodiment of the interactive doll mobile app of FIG. 6A launched on another embodiment of a computing device.

    [0029] FIG. 7 is a schematic view of a system including the audio device of FIG. 3 and the computing device of FIG. 6A.

    [0030] FIG. 8 is an isometric view of an embodiment of a kit including an interactive doll, an audio device, and a book.

    DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENT(S)

    [0031] A more detailed description of the device, systems, and methods in accordance with the present disclosure is set forth below. It should be understood that the description below of specific devices, systems, and methods is intended to be exemplary, and not exhaustive of all possible variations or applications. Thus, the scope of the disclosure is not intended to be limiting and should be understood to encompass other variations or embodiments.

    [0032] FIGS. 1 and 2 illustrate an embodiment of interactive doll 10. In some embodiments, interactive doll 10 can include body 12 and audio device 14 (shown in FIG. 3). In some embodiments, interactive doll 10 can be associated with a computing device 16. For instance, in some embodiments, audio device 14 can be associated with computing device 16. In some embodiments, audio device 14 and computing device 16 can be interconnected via wired connections, wireless connections, or a combination of wired and wireless connections.

    [0033] In some embodiments, body 12 can resemble a human or a humanoid character. For instance, in some embodiments, body 12 can include head 13 and limbs 15 such as arms and legs. In some embodiments, interactive doll 10 can include clothing, such as, but not limited to, a shirt and/or pants/shorts. In some embodiments, doll 10 can include accessories, such as, but not limited to, a hat, shoes, jewelry, glasses, gloves, a scarf, a bandana, and other suitable accessories. In some embodiments, doll 10 can be dressed in varying outfits by a user. In some embodiments, doll 10 includes a permanently attached outfit. In some embodiments, interactive doll 10 can include a military uniform.

    [0034] In some embodiments, doll 10 can be made of a soft material. In some embodiments, doll 10 is made of a plush material. In some embodiments, doll 10 can include a fabric outer surface. In some embodiments, the outer surface can be a suitable soft material, such as, but not limited to, felt. In some embodiments, body 12 can include an inner cavity configured to house stuffing. In some embodiments, doll 10 can be made, at least in part, from plush. Plush can be made of, among other things, a rich fabric of silk, cotton, wool, and/or a combination of these. In some embodiments, plush has a long soft nap.

    [0035] In some embodiments, interactive doll 10 can be made to resemble a particular person. In some embodiments, interactive doll 10 can be made to resemble a person in the military. For instance, in some embodiments, the interactive doll 10 can be made to resemble a family member, such as a parent, or friend in the military. In some embodiments, this customization involves modifying various features on the doll, including but not limited to, eye shape, eye color, hair style, hair color, lip shape, lip color, ear shape, skin color, the presence or absence of one or more limbs, outfits, and/or accessories.

    [0036] In some embodiments, the user customization of doll 10 is aided by the use of an application. In some embodiments, a user is able to load a picture of an individual into a system and the system suggests a doll based on the image. In some embodiments, the system is able to combine multiple images (such as images from different views) to generate its recommended doll. In some embodiments, the system includes an application that can be used to take a 3D image of a user.

    [0037] In some embodiments, a user is prompted to indicate which uniform the doll should wear from preselected options. In some embodiments, the doll can be configured to wear the uniform a family member or friend would wear. For example, in some embodiments, the doll's uniform can be a military uniform, a medical uniform, or another distinct uniform.

    [0038] In some embodiments, doll 10 includes opening 18 allowing access to an inner cavity. In some embodiments, as shown in FIG. 2, opening 18 is located on the back of body 12. The location of opening 18 on doll 10 can vary without departing from the scope of the disclosure. In some embodiments, opening 18 can be placed and held in a closed configuration. In some embodiments, opening 18 can be placed in a closed configuration using hook and loop strips, buttons, snaps, ties, zippers, and/or with other suitable fastening means.

    [0039] In some embodiments, audio device 14 can be removably placed into doll 10. For example, in some embodiments, audio device 14 can be located in the inner cavity of body 12. In some embodiments, audio device 14 can include a housing 20. In some embodiments, housing 20 can be a two-piece housing. For example, in some embodiments, housing 20 can include first piece 21a and second piece 21b attached to one another. In some embodiments, housing 20 can include a base unit with a lid. In some embodiments, housing 20 can be a single, unitary piece.

    [0040] In some embodiments, audio device 14 can include a controller 22. In some embodiments, controller 22 can be located within housing 20. In some embodiments, controller 22 can be an external controller associated with audio device 14. In some embodiments, controller 22 is configured to receive information from computing device 16. In some embodiments, controller 22 includes a memory configured to store information/data. In some embodiments, controller 22 can include a transceiver module configured to receive and send data via a wireless or wired connection. In some embodiments, controller 22 is configured to transmit and receive information/data via connections such as Wi-Fi, Ethernet, Bluetooth, NFC, RFID, fiber optics, cellular, infrared, or other optical communications, or the like. Controller 22 can send and receive data via other suitable wired or wireless connections without departing from the scope of the disclosure. In some embodiments, controller 22 can be a microcontroller (such as an ESP32 microcontroller).
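The disclosure does not specify a wire format for the data controller 22 exchanges with computing device 16 over these connections; as one illustrative sketch (the opcodes and framing below are assumptions, not part of the disclosure), a length-prefixed message layout could look like this:

```python
import struct

# Hypothetical message types for app-to-controller traffic; purely illustrative.
MSG_AUDIO_CHUNK = 0x01
MSG_PLAY_COMMAND = 0x02

def frame_message(msg_type: int, payload: bytes) -> bytes:
    """Prefix a payload with a 1-byte type and a 4-byte big-endian length."""
    return struct.pack(">BI", msg_type, len(payload)) + payload

def parse_message(data: bytes) -> tuple:
    """Return (msg_type, payload, remaining_bytes) from a framed byte stream."""
    msg_type, length = struct.unpack(">BI", data[:5])
    return msg_type, data[5:5 + length], data[5 + length:]
```

Length-prefixed framing lets the controller reassemble messages regardless of whether the underlying transport is Bluetooth, Wi-Fi, or a wired serial link.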

    [0041] In some embodiments, audio device 14 can include speaker 25. In some embodiments, speaker 25 is located on the bottom of audio device 14. In some embodiments, audio device 14 can include a power source. For instance, in some embodiments, the power source can be a battery. In some embodiments, the battery can be rechargeable. In some embodiments, when audio device 14 includes a rechargeable battery, audio device 14 can include a charging port 24. In some embodiments, the battery can be recharged wirelessly. In some embodiments, audio device 14 can include a power switch 26. In some embodiments, power switch 26 can be an on/off toggle switch. Other suitable power switches 26 can be used, such as, but not limited to, a power button, without departing from the scope of the disclosure.

    [0042] In some embodiments, audio device 14 can include a microphone 28. In some embodiments, audio device 14 can be configured to capture audio recordings via microphone 28. In some embodiments, audio recordings can be stored with controller 22. For instance, in some embodiments, audio recordings can be stored in the memory of controller 22. In some embodiments, audio recordings can be transmitted to computing device 16 and stored therein. In some embodiments, a user can press and hold a recording button 30 and speak into microphone 28 to record audio. In some embodiments, the user can release button 30 to stop recording.
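The press-and-hold recording flow described above can be modeled as a small state machine: samples from microphone 28 are captured only while recording button 30 is held. This is a minimal sketch, not the controller's actual firmware:

```python
class PushToTalkRecorder:
    """Minimal press/release model of the recording flow: audio samples
    are buffered only while the record button is held down."""

    def __init__(self):
        self.recording = False
        self.buffer = []

    def button_down(self):
        self.recording = True
        self.buffer = []  # start a fresh recording on each press

    def button_up(self):
        self.recording = False
        return list(self.buffer)  # finished clip, ready to store or transmit

    def on_sample(self, sample):
        if self.recording:
            self.buffer.append(sample)
```

The returned clip could then be kept in the controller's memory or transmitted to computing device 16, as the paragraph above describes.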

    [0043] In some embodiments, audio device 14 can include an indicator light 32. In some embodiments, indicator light 32 is an LED indicator light. In some embodiments, indicator light 32 can provide a user a visual representation of the status of audio device 14. For instance, in some embodiments, as shown in FIG. 3, indicator light 32 can be configured to illuminate when audio device 14 is wirelessly connected to computing device 16. In some embodiments, indicator light 32 can flash to show that audio device 14 is ready to be wirelessly paired/tethered to computing device 16. In some embodiments, indicator light 32 can be configured to flash and/or illuminate in different colors to show that audio device 14 is in a particular status such as, but not limited to, powered on and/or that audio device 14 has failed to connect to computing device 16.
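The status-to-light behavior above amounts to a lookup from device state to an LED color and blink mode. The specific colors and patterns below are illustrative assumptions; the disclosure only says that different statuses get different illumination:

```python
# Illustrative LED behaviors; the actual colors/patterns are not specified.
LED_PATTERNS = {
    "pairing": ("blue", "flash"),      # ready to pair with the computing device
    "connected": ("blue", "solid"),    # wirelessly connected
    "powered_on": ("green", "solid"),
    "connect_failed": ("red", "flash"),
}

def led_for_status(status: str) -> tuple:
    """Return the (color, mode) a firmware loop would drive for a status."""
    return LED_PATTERNS.get(status, ("off", "solid"))
```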

    [0044] In some embodiments, audio device 14 can include an audio amplifier, at least one speaker, a boost converter for powering the speakers, a battery connector/jacket enclosure, and/or a button.

    [0045] In some embodiments, interactive doll 10 is configured to produce sound. In some embodiments, interactive doll 10 is configured to speak to a user. In some embodiments, interactive doll 10 is configured to read to a user. In some embodiments, audio output from interactive doll 10 is configured to be produced by audio device 14.

    [0046] In some embodiments, interactive doll 10 is configured to produce a wide range of sounds, including speech, music, and sound effects. The sound production capabilities can be implemented using digital audio processing techniques and high-quality miniature speakers integrated into audio device 14. In some embodiments, interactive doll 10 utilizes text-to-speech technology to dynamically generate speech, allowing for more flexible and varied verbal interactions with the user.

    [0047] The speaking functionality of interactive doll 10 can include pre-programmed phrases, responses triggered by user actions or inputs, and dynamically generated speech based on artificial intelligence algorithms. This enables interactive doll 10 to engage in simple conversations, answer questions, and provide verbal feedback during play. In some embodiments, the speech can be customized to match different personalities, accents, or languages, enhancing the doll's versatility and appeal to diverse users.

    [0048] In some embodiments, interactive doll 10 can access a library of digital books or stories stored in its memory or streamed from a connected device. In some embodiments, the reading function can include features such as adjustable reading speed, different character voices, and interactive elements where the doll asks questions or prompts user participation during the story. In some embodiments, this reading capability transforms interactive doll 10 into an educational tool that can help improve literacy skills and foster a love for reading in young users.
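The adjustable reading speed described above can be sketched as a simple pacing calculation: given a words-per-minute setting, each word of the story is assigned a playback start time. This is a minimal model (the function name and parameters are illustrative):

```python
def word_schedule(text: str, words_per_minute: float):
    """Return (word, start_time_seconds) pairs for paced read-aloud playback,
    a simple model of the adjustable reading-speed feature."""
    interval = 60.0 / words_per_minute  # seconds between word onsets
    return [(w, round(i * interval, 3)) for i, w in enumerate(text.split())]
```

A slower setting (e.g., 80 wpm for an early reader) simply widens the interval between word onsets.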

    [0049] In some embodiments, audio device 14 can incorporate advanced audio processing capabilities. This can include, but is not limited to, features such as voice modulation to mimic different characters, spatial audio effects to create immersive soundscapes, and adaptive volume control based on ambient noise levels.

    [0050] In some embodiments, computing device 16 can be a device capable of receiving, generating, storing, processing, and/or providing information associated with interactive doll 10 described herein. In some embodiments, computing device 16 is configured to execute a mobile app. In some embodiments, computing device 16 can be a smartphone, as shown in FIG. 6A. In some embodiments, computing device 16 can be a tablet computer, as shown in FIG. 6B. In some embodiments, computing device 16 can be a desktop computer, a laptop computer, a handheld computer, or other smart device. In some embodiments, computing device 16 is configured to include interactive doll app 34.

    [0051] Turning to FIG. 4, the figure is a block diagram illustrating an embodiment of the components of computing device 16. In some embodiments, computing device 16 can include processor 36, memory 38, input component 40, output component 42, and/or communication interface 44. In some embodiments, computing device 16 can also include a separate storage component 46. In some embodiments, the components (i.e., processor 36, memory 38, input component 40, output component 42, communication interface 44, and/or storage component 46) can be coupled to one another with a bus. For example, in some embodiments, the bus can include a component that permits communication among the components of computing device 16.

    [0052] In some embodiments, processor 36 can include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), etc.), a microprocessor, and/or other processing component (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) that interprets and/or executes instructions.

    [0053] In some embodiments, memory 38 can include random access memory (RAM), read only memory (ROM), and/or another type of dynamic or static storage device (e.g., a flash memory, a magnetic memory, an optical memory, etc.) that stores information and/or instructions for use by processor 36. In some embodiments, memory 38 is a secure digital memory card. In some embodiments, memory 38 can execute interactive doll app 34.

    [0054] In some embodiments, input component 40 can include a component that permits computing device 16 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, etc.).

    [0055] In some embodiments, output component 42 can include a component that provides output information from computing device 16 such as, but not limited to, a display or a speaker.

    [0056] In some embodiments, communication interface 44 can include a transceiver-like component (e.g., a transceiver, a separate receiver and transmitter, etc.) that enables computing device 16 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface 44 can permit computing device 16 to receive information from another device and/or provide information to another device. For example, in some embodiments, communication interface 44 can include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi interface, a cellular network interface, a Bluetooth interface, and/or other suitable interface.

    [0057] In some embodiments, computing device 16 can perform one or more processes described herein. Computing device 16 can perform these processes in response to processor 36 executing software instructions stored by a computer-readable medium, such as memory 38 and/or storage component 46. A computer-readable medium is defined herein as a non-transitory memory device. A memory device includes memory space within a single physical storage device or memory space spread across multiple physical storage devices.

    [0058] Software instructions may be read into memory 38 and/or storage component 46 from another computer-readable medium or from another device via communication interface 44. When executed, software instructions stored in memory 38 and/or storage component 46 can cause processor 36 to perform one or more processes described herein.

    [0059] In some embodiments, computing device 16 can include additional components, fewer components, different components, or differently arranged components. Additionally, or alternatively, a set of components (e.g., one or more components) of computing device 16 can perform one or more functions described as being performed by another set of components of computing device 16.

    [0060] Turning to FIG. 5, the figure is a block diagram illustrating an embodiment of interactive doll app 34. FIG. 6A and FIG. 6B illustrate an embodiment of interactive doll app 34 launched on computing device 16. In some embodiments, interactive doll app 34 can be a non-transitory computer-readable medium comprising computer-executable instructions that when executed by a processor, cause the processor to receive an audio file including a voice and generate an audio output mimicking the voice. In some embodiments, a user can use interactive doll app 34 to instruct controller 22 in audio device 14 to generate output (e.g., sound, vibrations, or other output). In some embodiments, interactive doll app 34 can include a library component 35. In some embodiments, library component 35 can store downloaded books and/or user-generated stories. In some embodiments, interactive doll 10 is configured to audibly read a book or story selected by a user via interactive doll app 34.

    [0061] In some embodiments, interactive doll 10 can be configured as an educational tool, capable of teaching a user various subjects and skills. The teaching functionality of interactive doll 10 can be implemented through a combination of pre-programmed content and artificial intelligence algorithms that enable dynamic, personalized learning experiences.

    [0062] In some embodiments, interactive doll 10 can utilize its audio device 14 and controller 22 to present educational content, ask questions, and process user responses. The doll's teaching capabilities can cover a wide range of subjects, including but not limited to basic arithmetic, vocabulary, science facts, historical information, and/or social skills.

    [0063] In some embodiments, to enhance the effectiveness of its teaching, interactive doll 10 can employ adaptive learning techniques. In some embodiments, by analyzing a user's responses to its questions through the microphone 28 and processing this information via the controller 22 or the associated computing device 16, the doll can adjust the difficulty level, pace, and content of its teaching. For instance, if a user consistently answers math questions correctly, interactive doll 10 may introduce more challenging problems. Conversely, if a user struggles with a particular concept, the doll can provide additional explanations or simplify the material.
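The adaptive adjustment described above (harder problems after consistent correct answers, easier material after repeated misses) can be sketched as a streak-based rule. This is one simple policy among many; the thresholds and bounds below are illustrative assumptions:

```python
def adjust_difficulty(level: int, recent_results: list,
                      streak: int = 3, lo: int = 1, hi: int = 5) -> int:
    """Raise the level after `streak` consecutive correct answers and
    lower it after `streak` consecutive misses; otherwise hold steady.
    A minimal model of the adaptive pacing described above."""
    window = recent_results[-streak:]
    if len(window) == streak and all(window):
        return min(level + 1, hi)   # user is cruising: step up
    if len(window) == streak and not any(window):
        return max(level - 1, lo)   # user is struggling: step down
    return level
```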

    [0064] In some embodiments, interactive doll 10 can facilitate language learning. In some embodiments, the doll can be programmed to speak in multiple languages, allowing it to serve as an interactive language tutor. In some embodiments, the functionality can be especially useful for children learning a second language or for families in multilingual environments. In some embodiments, interactive doll 10 can engage in simple conversations, teach vocabulary, and even help with pronunciation in various languages.

    [0065] In some embodiments, interactive doll 10 can offer personalized suggestions and instructions based on a user's responses and progress. This can include recommending specific activities or books from the library component 35 of the interactive doll app 34, or providing tailored advice on how to improve in certain areas. In some embodiments, the doll's ability to offer personalized guidance is enhanced by its artificial intelligence capabilities, which can analyze patterns in the user's learning and behavior over time.

    [0066] In some embodiments, by combining these educational features with its interactive and customizable nature, interactive doll 10 can serve as an engaging and effective learning companion, adapting to the user's needs and providing a unique, personalized educational experience.

    [0067] In some embodiments, interactive doll app 34 can include a storage component 37. In some embodiments, storage component 37 can be configured to store uploaded or transmitted information and data. In some embodiments, a user may upload an audio file including a voice to interactive doll app 34. In some embodiments, the audio file can be a voice recording, a voicemail stored on a computing device, a voice memo, and/or other suitable audio file including a voice. In some embodiments, the audio file can be a recording made with audio device 14 or computing device 16. In some embodiments, storage component 37 can store multiple audio files. In some embodiments, interactive doll 10 can be configured to play an uploaded audio file through audio device 14.

    [0068] In some embodiments, interactive doll app 34 can also include a generative voice component 39. For example, in some embodiments, generative voice component 39 can include a generative voice artificial intelligence (AI) system. In some embodiments, generative voice component 39 is a non-transitory computer-readable storage medium configured to execute instructions based on a rules-based model. In some embodiments, generative voice component 39 is a non-transitory computer-readable storage medium configured to execute instructions based on machine learning models.

    [0069] In some embodiments, interactive doll app 34 can be downloaded to computing device 16. In some embodiments, as shown in FIG. 7, computing device 16 and/or audio device 14 can be associated with a server 50. In some embodiments, server 50 can include a computing system executing an implementation of interactive doll app 34. In some embodiments, the server 50 and computing device 16 can communicate with one another to execute the functions of interactive doll app 34. In some embodiments, some components of interactive doll app 34 can be located on server 50. For example, library component 35, storage component 37, and/or generative voice component 39 can be located on server 50. In some embodiments, generative voice component 39 is located on computing device 16, allowing the user to access the generative voice component 39 without a network connection to server 50.
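Because the paragraph above allows generative voice component 39 to live either on computing device 16 or on server 50, the app needs some rule for choosing a backend at runtime. One illustrative policy (the preference order is an assumption; the disclosure permits either arrangement) is to prefer the on-device copy and fall back to the server:

```python
def resolve_voice_backend(local_available: bool, network_up: bool) -> str:
    """Prefer the on-device generative voice component when installed,
    falling back to the server-hosted copy only when the network is up."""
    if local_available:
        return "on_device"   # works without a connection to server 50
    if network_up:
        return "server"
    return "unavailable"
```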

    [0070] In some embodiments, generative voice component 39 is configured to generate an audio output to mimic particular voices through audio device 14. In some embodiments, generative voice component 39 is configured to mimic a particular voice based on an audio file uploaded to interactive doll app 34. For instance, in some embodiments, a user can upload an audio file (e.g., a voice memo, recording, voicemail, or other audio recording) and generative voice component 39 can be configured to mimic the voice provided in the upload.

    [0071] In some embodiments, generative voice component 39 utilizes advanced machine learning algorithms, such as deep neural networks, to analyze and synthesize voice characteristics. This process involves extracting key features from the uploaded audio file, including pitch, tone, rhythm, and/or speech patterns. In some embodiments, the AI model then uses these features to generate a synthetic voice that closely mimics the original.
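Of the features listed above, pitch is the easiest to illustrate. A production system would use autocorrelation or a neural pitch tracker, but a crude zero-crossing estimate (purely a teaching sketch, not the disclosed method) conveys the idea: a periodic tone of frequency f crosses zero roughly 2f times per second.

```python
import math

def estimate_pitch_hz(samples: list, sample_rate: int) -> float:
    """Crude pitch estimate from the zero-crossing rate of a clean tone."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

def sine_tone(freq: float, seconds: float, sample_rate: int) -> list:
    """Generate a pure sine tone for testing the estimator."""
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
```

Real recorded speech is far noisier than a pure tone, which is why the paragraph above points to deep neural networks for extracting these features robustly.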

    [0072] In some embodiments, the voice mimicking capability can be applied to various functions of interactive doll 10. For example, the doll can read stories from library component 35 using the voice of a parent or loved one, providing a comforting and familiar experience for the child. In some embodiments, generative voice component 39 can adapt the mimicked voice to different emotional states or speaking styles, allowing for more dynamic and engaging interactions.

    [0073] In some embodiments, generative voice component 39 can be integrated with other features of interactive doll 10, such as its educational functions. In some embodiments, this allows the doll to deliver personalized lessons or provide encouragement in a voice that is meaningful to the user. For instance, a child learning a new language could hear vocabulary words pronounced by a family member's voice, potentially enhancing engagement and retention.

    [0074] In some embodiments, the voice mimicking feature includes safeguards to prevent misuse. This can include user authentication requirements, limitations on the types of phrases that can be generated, and/or clear indications when the doll is using a synthesized voice versus pre-recorded audio.

    [0075] In some embodiments, the technology behind generative voice component 39 can be continuously updated and improved through machine learning techniques, allowing for more accurate and natural-sounding voice mimicry over time. In some embodiments, the feature enhances the personalization capabilities of interactive doll 10, creating a unique and emotionally resonant play experience for each user.

    [0076] In some embodiments, generative voice component 39 is configured to interact with a user while mimicking a particular voice. For example, in some embodiments, generative voice component 39 is configured to read a selected downloaded book from interactive doll app 34 while mimicking a particular voice. In some embodiments, the mimicked voice can be that of a famous individual, such as an actor, a politician, or a singer.

    [0077] In some embodiments, generative voice component 39 is configured to respond to a user while mimicking a particular voice. For instance, in some embodiments, generative voice component 39 is configured to respond to prompts from a user through microphone 28 in audio device 14. In some embodiments, microphone 28 is an I2S MEMS (micro-electromechanical systems) microphone. In some embodiments, generative voice component 39 is configured to generate interactive stories with a user. For instance, in some embodiments, generative voice component 39 is configured to provide prompts via audio device 14 to create a story with a user. In some embodiments, the prompts can be conversational questions designed to encourage a back-and-forth conversation with interactive doll 10. The conversation can be recorded via microphone 28 and stored in audio device 14, computing device 16, and/or server 50. In some embodiments, the recordings can be stored in storage component 37 and accessed by generative voice component 39. In some embodiments, artificial intelligence can be used to create visualizations for these stories. In some embodiments, these visualizations can be displayed on a smart device.
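The back-and-forth story building above can be sketched as a session that alternates doll prompts with recorded child replies. The prompt wording below is invented for illustration; the disclosure leaves prompt content open:

```python
class StorySession:
    """Accumulates alternating doll prompts and child replies into a
    transcript: a minimal model of the conversational story building
    described above. Prompt text is illustrative only."""

    PROMPTS = [
        "Who is our hero today?",
        "Where does the adventure begin?",
        "What happens next?",
    ]

    def __init__(self):
        self.turns = []

    def next_prompt(self) -> str:
        # Repeat the open-ended final prompt once the openers are used up.
        return self.PROMPTS[min(len(self.turns), len(self.PROMPTS) - 1)]

    def add_reply(self, reply: str):
        self.turns.append(reply)

    def transcript(self) -> str:
        return " ".join(self.turns)
```

The accumulated transcript is the kind of record that could be stored in storage component 37 and later read back, or visualized, as the paragraph describes.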

    [0078] In some embodiments, a user can select a story via interactive doll app 34 for it to be read in a particular voice. In some embodiments, generative voice component 39 can edit the story. In some embodiments, generative voice component 39 can analyze a user's tone, pitch, cadence, and other voice characteristics to recognize a user's emotions. In some embodiments, the generative voice component 39 can form prompts based on a user's analyzed emotion.

    [0079] In some embodiments, a user can select a particular voice for interactive doll 10 to use. For example, in some embodiments, the user can store multiple voices in storage component 37 and select which one of the voices for generative voice component 39 to use. In some embodiments, generative voice component 39 can be configured to provide different voices for different characters in a story/book.
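The multi-voice selection above reduces to two mappings: stored voice profiles, and a cast list assigning each story character a voice. A minimal sketch (names and the "narrator" default are illustrative assumptions):

```python
class VoiceRegistry:
    """Stores uploaded voice profiles and maps story characters to them,
    modeling the per-character voice selection described above."""

    def __init__(self):
        self._voices = {}   # voice name -> opaque voice-profile identifier
        self._cast = {}     # story character -> voice name

    def store_voice(self, name: str, profile_id: str):
        self._voices[name] = profile_id

    def assign(self, character: str, voice_name: str):
        if voice_name not in self._voices:
            raise KeyError(f"no stored voice named {voice_name!r}")
        self._cast[character] = voice_name

    def voice_for(self, character: str, default: str = "narrator") -> str:
        return self._cast.get(character, default)
```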

    [0080] In some embodiments, interactive doll 10 can be configured to be associated with one or more other interactive dolls 10, enabling multi-doll interactions and collaborative play experiences. This feature allows users to engage with multiple interactive dolls 10 simultaneously, creating a more dynamic and immersive play environment. For instance, multiple interactive dolls 10 can be programmed to participate in multi-player audio games with the user, fostering social interaction and group play.

    [0081] The multi-doll association capability can be implemented through wireless communication protocols, such as Bluetooth or Wi-Fi, allowing the dolls to exchange information and coordinate their actions. This interconnectivity enables various interactive scenarios, such as synchronized storytelling, where each doll plays a different character, or educational games that require input from multiple dolls.
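The turn-taking that synchronized storytelling implies can be sketched in simplified form. The Python sketch below is purely illustrative and is not the patent's implementation: the class, method, and role names are assumptions, and a real system would exchange these lines over the Bluetooth or Wi-Fi link rather than in-process.

```python
# Illustrative sketch of turn-based synchronized storytelling between
# paired dolls, where each doll voices one character of a shared script.

class StoryDoll:
    def __init__(self, name, character):
        self.name = name
        self.character = character      # role this doll plays in the story
        self.peers = []                 # other dolls joined to the session

    def pair(self, other):
        # Stand-in for a wireless pairing handshake between two dolls.
        self.peers.append(other)
        other.peers.append(self)

    def speak(self, line):
        # Stand-in for audio output through the doll's speaker.
        return f"{self.character}: {line}"

def synchronized_story(dolls, script):
    # script: list of (character, line); each doll speaks only its own lines.
    by_character = {d.character: d for d in dolls}
    return [by_character[who].speak(line) for who, line in script]

knight = StoryDoll("doll-A", "Knight")
dragon = StoryDoll("doll-B", "Dragon")
knight.pair(dragon)

script = [("Knight", "Who guards this bridge?"),
          ("Dragon", "I do, brave traveler!")]
print(synchronized_story([knight, dragon], script))
```

In practice the script dispatch would run on whichever device coordinates the session (a doll or the companion app), with each line sent to the owning doll for playback.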

    [0082] In some embodiments, the interactive doll app 34 can be expanded to support multi-doll configurations, allowing users to manage and customize interactions between multiple dolls. This could include features like assigning different personalities or roles to each doll, creating custom multi-doll scenarios, or even facilitating remote play between dolls in different locations.

    [0083] The multi-doll functionality can also enhance the educational aspects of interactive doll 10. For example, language learning exercises could involve multiple dolls conversing in different languages, or math games could use multiple dolls to represent different numerical concepts. This collaborative approach to learning can make educational activities more engaging and effective for users.

    [0084] Furthermore, the ability to interact with multiple dolls simultaneously can provide opportunities for emotional and social development. Users can practice social skills, empathy, and conflict resolution by mediating interactions between multiple dolls, each potentially representing different personalities or perspectives.

    [0085] In some embodiments, interactive doll 10 can be configured with advanced sleep monitoring capabilities, leveraging its audio device 14 and artificial intelligence algorithms to enhance a user's sleep experience. In some embodiments, the doll's microphone 28 can be utilized to detect ambient sounds and analyze sleep patterns, which can be particularly beneficial for child users. In some embodiments, if the doll detects noises indicative of restlessness or waking, it can trigger an alert through the interactive doll app 34 installed on a parent or guardian's computing device 16, providing real-time updates on the child's sleep status.

    [0086] In some embodiments, the sleep monitoring feature can be further enhanced by incorporating machine learning algorithms that learn and adapt to a user's specific sleep patterns over time. In some embodiments, this allows interactive doll 10 to predict when a user might be about to wake up, based on factors such as movement sounds, changes in breathing patterns, or environmental cues. In some embodiments, upon detecting potential wake-up signals, the doll can proactively initiate sleep-inducing functions. For instance, in some embodiments, it can activate its speaker 25 to produce soothing white noise, play calming music, or even recite gentle bedtime stories in a familiar voice using the generative voice component 39. In some embodiments, these proactive measures aim to guide the user back into a restful sleep state, potentially reducing sleep disruptions and improving overall sleep quality.
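As a rough illustration of the restlessness check described above, the sketch below compares a rolling window of microphone sound levels against a quiet-room baseline and triggers a soothing action when sustained noise suggests the child may be waking. The baseline, margin, window size, and action name are assumptions made for the example, not values from the disclosure.

```python
# Illustrative sketch: sustained sound above a learned baseline over a
# rolling window triggers a proactive soothing response.

from collections import deque

class SleepMonitor:
    def __init__(self, baseline_db=30.0, margin_db=10.0, window=5):
        self.baseline_db = baseline_db      # learned quiet-room sound level
        self.margin_db = margin_db          # excess over baseline that counts as restless
        self.levels = deque(maxlen=window)  # most recent sound-level samples

    def observe(self, level_db):
        """Record one microphone sample; return a soothing action if warranted."""
        self.levels.append(level_db)
        if len(self.levels) == self.levels.maxlen and all(
            lv > self.baseline_db + self.margin_db for lv in self.levels
        ):
            return "play_white_noise"   # could also alert the parent app
        return None

monitor = SleepMonitor()
quiet = [31, 29, 32, 30, 31]
restless = [45, 47, 44, 46, 48]
print([monitor.observe(db) for db in quiet + restless][-1])
```

A learned model would replace the fixed baseline and margin, adapting to the individual user's sleep patterns over time as the paragraph above describes.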

    [0087] In some embodiments, interactive doll 10 can be programmed to serve as an intelligent alarm clock, offering a customizable wake-up experience. In some embodiments, the wake-up function can be tailored to the user's preferences and can employ a combination of sensory stimuli to gradually rouse the user from sleep. In some embodiments, this can include subtle tactile feedback, such as gentle vibrations emanating from the doll's body, which can be particularly effective for users who are sensitive to sudden auditory stimuli. In some embodiments, the doll can also utilize its audio capabilities to provide a range of wake-up sounds, from nature-inspired ambient noises to personalized voice messages created using the generative voice component 39.

    [0088] In some embodiments, for visual stimulation, the doll's indicator light 32 or additional LED lights integrated into its design could be programmed to simulate a sunrise effect, gradually increasing in brightness to mimic natural daybreak. In some embodiments, this multi-sensory approach to waking allows for a more natural and less jarring transition from sleep to wakefulness, potentially contributing to improved mood and alertness upon waking.
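The sunrise effect described above amounts to a gradual brightness ramp on the doll's lights. A minimal sketch follows; the linear ramp, step count, and 8-bit brightness range are assumptions for illustration, and a real device might use a perceptually tuned curve instead.

```python
# Illustrative sketch of a sunrise-simulation ramp: brightness increases
# step by step from dim to full over the wake-up window.

def sunrise_levels(steps=15, max_brightness=255):
    """Return the LED brightness value for each step of the ramp."""
    return [round(max_brightness * (i + 1) / steps) for i in range(steps)]

levels = sunrise_levels()
print(levels[0], levels[-1])   # dim start, full brightness at wake time
```

On hardware, each level would be written to the LED driver at a fixed interval (e.g., once per minute over a fifteen-minute window) so full brightness coincides with the alarm time.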

    [0089] In some embodiments, interactive doll 10 can be used to help a user develop and/or maintain a routine.

    [0090] In some embodiments, interactive doll 10 can be made to resemble a particular person. In some embodiments, interactive doll 10 can be made to resemble a person in the military. For instance, in some embodiments, the interactive doll 10 can be made to resemble a family member, such as a parent, or a friend who is in the military. In this instance, a user can upload an audio file including the family member's or friend's voice to interact with the doll as described herein. By configuring the doll 10 to interact in a particular person's voice, the user can be comforted by hearing that particular person's voice.

    [0091] In some embodiments, interactive doll 10 is able to mimic a person based on their writings and/or recordings. For example, in some embodiments, interactive doll 10 can be configured, using artificial intelligence, to act, sound, and/or respond like a particular historical individual.

    [0092] In some embodiments, interactive doll 10 can leverage advanced natural language processing and machine learning algorithms to analyze and synthesize the linguistic patterns, vocabulary, and writing style of a specific individual based on their written works or recorded speeches. In some embodiments, this capability allows the doll to generate responses and engage in conversations that closely mimic the manner of speaking or writing of the chosen person.

    [0093] In some embodiments, for historical figures, the AI system can be trained on a body of their writings, speeches, letters, and/or other available textual or audio sources. In some embodiments, this training enables interactive doll 10 to emulate the individual's characteristic expressions, idioms, and/or rhetorical devices. For instance, the doll could be programmed to speak in the style of William Shakespeare, using Elizabethan English and incorporating poetic elements typical of his works.

    [0094] In some embodiments, the AI system can be designed to incorporate knowledge of the historical context, personal experiences, and known opinions of the individual being mimicked. In some embodiments, this allows interactive doll 10 to provide responses that are not only stylistically accurate but also contextually appropriate and consistent with the historical figure's worldview.

    [0095] In some embodiments, this feature can be used for educational purposes, allowing users to engage in simulated conversations with historical figures, enhancing their understanding of history, literature, or other subjects. For example, a student studying the American Civil War could interact with a doll mimicking Abraham Lincoln, gaining insights into his thoughts and decision-making processes.

    [0096] In some embodiments, the system can also be configured to adapt its language complexity based on the user's age or comprehension level, ensuring that the interactions remain accessible and educational for a wide range of users. Additionally, in some embodiments, safeguards can be implemented to ensure, or at least increase the likelihood, that the content generated by the AI remains appropriate and aligned with educational objectives.

    [0097] In some embodiments, interactive doll 10 can use voice recognition software to distinguish and/or identify different users, such as siblings in a family, and interact with them appropriately. In some embodiments, interactive doll 10 can interact with each user based on their particular preferences. In some embodiments, interactive doll 10 can determine a user's preferences via artificial intelligence.

    [0098] In some embodiments, the voice recognition software utilized by interactive doll 10 employs advanced machine learning algorithms to analyze and differentiate between various vocal characteristics, such as pitch, tone, and speech patterns. In some embodiments, this enables the doll to create and maintain individual user profiles, storing information about each user's preferences, interaction history, and learning progress.
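The profile-matching flow described above can be illustrated with a deliberately simplified sketch: each enrolled user is summarized by a single vocal feature (average pitch), and a new utterance is assigned to the nearest enrolled profile within a tolerance. The function names, feature choice, and tolerance are assumptions; production speaker identification uses much richer embeddings than a single pitch value.

```python
# Illustrative sketch of enrolling user voice profiles and matching a new
# utterance to the closest profile, or to no one if nothing is near enough.

def enroll(profiles, name, avg_pitch_hz):
    """Store a user's summary vocal feature in the profile table."""
    profiles[name] = avg_pitch_hz

def identify(profiles, avg_pitch_hz, tolerance_hz=30.0):
    """Return the closest enrolled user, or None if no profile is close."""
    if not profiles:
        return None
    name, pitch = min(profiles.items(), key=lambda kv: abs(kv[1] - avg_pitch_hz))
    return name if abs(pitch - avg_pitch_hz) <= tolerance_hz else None

profiles = {}
enroll(profiles, "older_sibling", 220.0)    # illustrative pitch values
enroll(profiles, "younger_sibling", 300.0)
print(identify(profiles, 290.0))
```

Once a speaker is identified, the doll would load that user's stored preferences, interaction history, and learning progress before responding.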

    [0099] In some embodiments, the ability to distinguish between users allows interactive doll 10 to provide a personalized experience for each individual. For instance, when interacting with siblings, the doll can adjust its language complexity, topic selection, and even its personality to suit each child's age, interests, and developmental stage. In some embodiments, this feature is particularly beneficial in households with multiple children, as it ensures that each child receives age-appropriate and engaging interactions tailored to their specific needs.

    [0100] In some embodiments, the artificial intelligence system employed by interactive doll 10 to determine user preferences goes beyond simple data collection. In some embodiments, it utilizes predictive analytics and pattern recognition to continuously refine its understanding of each user's likes, dislikes, learning style, and emotional responses. In some embodiments, this AI-driven approach allows the doll to anticipate user needs and proactively suggest activities or topics that align with the user's evolving interests.

    [0101] For example, in some embodiments, if the AI system detects that a user consistently engages more with science-related content, it might gradually introduce more advanced scientific concepts or suggest hands-on experiments that complement the user's interests. Similarly, in some embodiments, if a user shows a preference for storytelling, the doll might offer more interactive narrative experiences or encourage the user to create their own stories.

    [0102] In some embodiments, the AI system also takes into account contextual factors such as time of day, recent interactions, and even external events (if connected to the internet) to further personalize its interactions. In some embodiments, this could mean adjusting its energy level to match the user's daily routine, offering comforting interactions during stressful times, or incorporating current events into educational activities.

    [0103] In some embodiments, the doll's ability to learn and adapt to each user's preferences extends to its voice mimicry capabilities. In some embodiments, over time, it can learn which voices or speech patterns a user responds to and adjust its audio output accordingly, whether it's mimicking a loved one's voice or adopting a particular accent or speaking style that the user finds engaging.

    [0104] In some embodiments, this level of personalization and adaptability can enhance the user's experience and improve the educational and developmental benefits of interacting with interactive doll 10. In some embodiments, by tailoring its approach to each individual user, the doll can support learning, emotional growth, and social skill development in a way that resonates with a child's unique personality and needs.

    [0105] In some embodiments, interactive doll 10 can take into account the time and/or location in determining appropriate responses. For example, in some embodiments, interactive doll 10 can wish a user a Merry Christmas around Christmas. In some embodiments, interactive doll 10 can comment that a user must be visiting another location if interactive doll 10 is at a new location, such as if a user brings interactive doll 10 on vacation. In some embodiments, interactive doll 10 can provide relevant comments about the new location.

    [0106] In some embodiments, interactive doll 10 incorporates advanced context-awareness capabilities, utilizing a combination of internal sensors, GPS technology, and/or internet connectivity to enhance its interactive experience. In some embodiments, the doll's ability to recognize and respond to temporal and spatial contexts adds a layer of sophistication to its interactions, making them more relevant and engaging for the user.

    [0107] In some embodiments, the time-awareness feature can be implemented through an internal clock system synchronized with the user's local time zone. In some embodiments, this allows interactive doll 10 to offer timely greetings, reminders, and activities. For instance, beyond wishing "Merry Christmas" during the holiday season, the doll might suggest seasonal activities, sing themed songs, or share holiday-specific stories. Similarly, it could offer "Good morning" greetings at the start of the day, propose after-school activities in the afternoon, or initiate bedtime routines in the evening.
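The clock-driven greeting selection described above reduces to a lookup against the local date and time. The sketch below is a minimal illustration; the hour thresholds, holiday table, and greeting strings are assumptions chosen to mirror the examples in the text, not values from the disclosure.

```python
# Illustrative sketch: choose a contextual greeting from the doll's
# synchronized local clock and calendar.

import datetime

def greeting_for(now: datetime.datetime) -> str:
    """Return a greeting appropriate to the current local date and time."""
    if (now.month, now.day) == (12, 25):    # holiday check takes priority
        return "Merry Christmas!"
    if now.hour < 12:
        return "Good morning!"
    if now.hour < 18:
        return "How was school today?"
    return "Time to get ready for bed!"

print(greeting_for(datetime.datetime(2025, 12, 25, 9, 0)))
print(greeting_for(datetime.datetime(2025, 6, 1, 8, 0)))
```

A fuller implementation would draw the holiday table and routine times from per-family settings in the companion app rather than hard-coding them.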

    [0108] In some embodiments, location awareness can be achieved through GPS functionality integrated into the doll or by syncing with the user's mobile device via the interactive doll app 34. In some embodiments, when the doll detects a significant change in location, it can adapt its responses and interactions accordingly. For example, if brought to a beach vacation, interactive doll 10 might initiate conversations about marine life, suggest beach-appropriate games, or even offer basic information about ocean safety.
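Detecting a "significant change in location" as described above can be sketched as comparing the great-circle distance between the doll's home position and its current GPS fix against a threshold. The haversine formula below is standard; the 50 km threshold and the example coordinates are assumptions for illustration.

```python
# Illustrative sketch: flag a significant location change by comparing
# the great-circle (haversine) distance from home against a threshold.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_new_location(home, current, threshold_km=50.0):
    """True if the current fix is far enough from home to count as travel."""
    return haversine_km(*home, *current) > threshold_km

home = (40.7128, -74.0060)      # illustrative home coordinates (New York)
beach = (26.1224, -80.1373)     # illustrative vacation spot (Fort Lauderdale)
print(is_new_location(home, beach))   # True
print(is_new_location(home, home))    # False
```

On a positive result, the doll would then fetch location-appropriate content, such as the marine-life conversation starters mentioned above.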

    [0109] In some embodiments, the doll's AI system can be programmed to access and process information about new locations from its connected database or the internet. In some embodiments, this allows interactive doll 10 to provide educational content about the geography, history, or culture of the new location. For instance, if taken to Paris, the doll might share interesting facts about the Eiffel Tower, suggest French phrases to learn, or discuss famous artworks in the Louvre.

    [0110] In some embodiments, the context-awareness feature can extend to recognizing different environments within the user's regular routine. For example, it might detect when it's in a car (based on movement patterns and GPS data) and suggest travel games or provide entertainment during long journeys.

    [0111] In some embodiments, parents or guardians can customize the doll's location-based responses through the interactive doll app 34, allowing them to set parameters for what kind of location-specific information the doll can share, ensuring age-appropriate and parent-approved content.

    [0112] In some embodiments, this contextual awareness enhances the interactive and educational capabilities of interactive doll 10, providing a more immersive and responsive play experience that adapts to the user's changing environment and daily routines.

    [0113] In some embodiments, such as shown in FIG. 8, interactive doll 10 can be provided as a kit 60. In some embodiments, kit 60 includes a package 62 including doll 10, audio device 14, and book 64. In some embodiments, book 64 can include a personalized story about a particular person, for example, but not limited to, a family member or friend. In some embodiments, book 64 can also be provided as an automatically available/downloadable book in the library component 35 of interactive doll app 34. In some embodiments, kit 60 can include other accessories such as a charging cable. In some embodiments, kit 60 can include other clothing and/or accessories for doll 10.

    [0114] Specific examples of systems, methods and apparatus have been described herein for purposes of illustration. These are only examples. The technology provided herein can be applied to systems other than the example systems described above. Many alterations, modifications, additions, omissions, and permutations are possible within the practice of these inventions. These inventions include variations on described embodiments that would be apparent to the skilled addressee, including variations obtained by: replacing features, elements and/or acts with equivalent features, elements and/or acts; mixing and matching of features, elements and/or acts from different embodiments; combining features, elements and/or acts from embodiments as described herein with features, elements and/or acts of other technology; and/or omitting features, elements and/or acts from described embodiments.

    [0115] In some embodiments, the application can be a mobile-based, cloud-based, server-based, and/or online-based application. In some embodiments, the devices can be a smartphone, tablet, laptop, smartwatch, or personal computer. In some embodiments, the application can be hosted by a commercially available platform. In some embodiments, the application can be embodied in a non-transitory computer-readable storage medium.

    [0116] A computer-readable medium having program code recorded thereon for execution on a computer of the methods disclosed above is also provided.

    [0117] Unless the context clearly requires otherwise, throughout the description and the claims:

    [0118] comprise, comprising, and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of including, but not limited to;

    [0119] connected, coupled, or variants thereof, mean connection or coupling, either direct or indirect, permanent or non-permanent, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof;

    [0120] herein, above, below, and words of similar import, when used to describe this specification, shall refer to this specification as a whole, and not to any particular portions of this specification;

    [0121] or, in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.

    [0122] It is understood that the process is not limited to the particular methodology and/or protocols described herein, as these can vary as persons familiar with the technology involved here will recognize. It is also to be understood that the terminology used herein is used for the purpose of describing particular embodiments only, and is not intended to limit the scope of the process.

    [0123] Unless defined otherwise, technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which the process pertains. The embodiments of the process and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments and/or detailed in the following description. It should be noted that features of one embodiment can be employed with other embodiments as the skilled artisan would recognize, even if not explicitly stated herein.

    [0124] Any numerical value ranges recited herein include all values from the lower value to the upper value in increments of one unit, provided that there is a separation of at least two units between any lower value and any higher value. As an example, if it is stated that the concentration of a component or value of a process variable such as, for example, size, angle size, pressure, time and the like, is, for example, from 1 to 98, specifically from 20 to 80, more specifically from 30 to 70, it is intended that values such as 15 to 85, 22 to 68, 43 to 51, 30 to 32, and the like, are expressly enumerated in this specification. For values which are less than one, one unit is considered to be 0.0001, 0.001, 0.01 or 0.1 as appropriate. These are only examples of what is specifically intended and all possible combinations of numerical values between the lowest value and the highest value are to be treated in a similar manner.

    [0125] While particular elements, embodiments and applications of the present inventions have been shown and described, it will be understood that the inventions are not limited thereto, since modifications can be made without departing from the scope of the present disclosure, particularly in light of the foregoing teachings.