Generative Scent Design System
20210138415 · 2021-05-13
Inventors
CPC Classification (all within PERFORMING OPERATIONS; TRANSPORTING)
B01F33/85
B01F23/711
B01F33/841
B01F35/92
B01F35/71805
B01F33/848
B65B43/52
B01F33/846
B65C3/065
B01F33/8442
B01F2101/14
B67B3/14
International classification
Abstract
A generative scent design system may be used to create unique and custom scents (fragrances, perfumes) in real time based upon input from a user. The system may also be utilized for creating other unique and custom formulations of beverages, alcohols, juices, medications, lotions, shampoos, and other products. The generative scent design system may have an input receiver, an input processor, a plurality of scents, a plurality of dispensers, a conveyor belt, a plurality of motion sensors, a container, a label, a cap, and at least one sound output device.
Claims
1. A generative scent design system comprising: a frame; an input receiver; an input processor; a plurality of scents; a plurality of dispensers; a conveyor belt; a plurality of motion sensors; and, a container; wherein the plurality of dispensers, the conveyor belt, and the plurality of motion sensors are attached to the frame; wherein the input receiver receives data; wherein the data is selected from the group consisting of questionnaire answers, user-entered data, social-media based data, biometric feedback, stock exchange based data, weather based data, personal emotion based data, sports based data, sound based data, smell based data, sensor based data, image based data, and combinations thereof; wherein the input processor calculates the data to determine a formula containing an amount of each of the plurality of scents; wherein the amount of each of the plurality of scents are dispensed from the plurality of dispensers into the container; wherein the container is transported on the conveyor belt to allow the container to be movably positioned to receive each of the plurality of scents from each of the plurality of dispensers; and, wherein the plurality of motion sensors guide the container on the conveyor belt.
2. The generative scent design system of claim 1, wherein the conveyor belt comprises: a puck; and, a plurality of cleats; wherein the puck is configured to hold the container; and, wherein the plurality of cleats is configured to hold the puck.
3. The generative scent design system of claim 1 further comprises: a label printer; wherein the input processor generates information for a label; and, wherein the label printer prints the label.
4. The generative scent design system of claim 3 further comprises: a label applicator; wherein the label applicator affixes the label to the container.
5. The generative scent design system of claim 1 further comprises a dispenser manifold; wherein the dispenser manifold is configured to hold the plurality of dispensers to allow the plurality of dispensers to dispense the plurality of scents simultaneously into the container.
6. The generative scent design system of claim 1, wherein the plurality of dispensers are vacuum flexible containers.
7. The generative scent design system of claim 1 further comprises a capping system; wherein the capping system secures a cap on the container.
8. The generative scent design system of claim 1 further comprises a sound output device; wherein the input processor calculates the data to generate sounds; and, wherein the sound output device outputs the sounds.
9. The generative scent design system of claim 1 further comprises a plurality of exit stations.
10. The generative scent design system of claim 1 further comprises a container dispenser; wherein the container dispenser dispenses the container onto the conveyor belt.
11. The generative scent design system of claim 1 further comprises a visual output device.
12. The generative scent design system of claim 1, wherein the plurality of scents are perfume ingredients.
13. The generative scent design system of claim 1, wherein the plurality of scents are beverage ingredients selected from the group consisting of alcoholic drink ingredients, non-alcoholic drink ingredients and combinations thereof.
14. The generative scent design system of claim 1, wherein the plurality of scents are liquid personal products ingredients.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
[0023] The advantages and features of the present invention will be better understood as the following description is read in conjunction with the accompanying drawings, wherein:
[0039] For clarity purposes, all reference numerals may not be included in every figure.
DETAILED DESCRIPTION OF THE INVENTION
[0040] The figures illustrate a generative scent design system 100 comprising an input receiver 120, an input processor 130, a plurality of scents 140, a plurality of scent dispensers 150, a conveyor belt 160, a plurality of motion sensors 170, a container 180, a container dispenser, a label 192, a cap 210, at least one sound output device 220, and at least one visual output device 240.
[0041] As illustrated in
[0042] On the label 192 is a specific code representing a specific generation (or formulation), as illustrated in
[0044] In different embodiments, the scent dispensers 150 may contain other liquids, for example, different juices, alcoholic beverages, flavors, health supplements, and health and beauty products and ingredients.
[0048] In one embodiment of the invention, the scents may be described according to their characteristics or features in several categories (“Feature Categories”). Exemplary Feature Categories are illustrated in the following table. As illustrated in the table, the Feature Categories may be represented by a numeric value, text, color picker, geographical coordinates, or a combination of the foregoing.
TABLE-US-00001 Example Feature Categories Describing Scents
Temporal (Most long-lasting to least long-lasting): Numeric value scale
Energy (Most diffusive to least diffusive): Numeric value scale
Perceptual (Most sharp to least sharp/more round): Numeric value scale
Harmonic (Most pleasant/harmonic to most disruptive): Numeric value scale
Color: Text, numeric value or color picker (e.g., Blue)
Texture: Text (e.g., Cotton)
Emotion/Mood: Text (e.g., Scared)
Associations with locations, life situations, events, feelings: Text
Season: Text
Weather: Text, numeric value
Natural/Unnatural: Numeric value scale
Sensations: Text or numeric value
Olfactive territories/families (e.g., citrus, green, floral, woody, oriental, musk, etc., and combinations thereof): Text
Memories: Text and coordinates
Biometric data on the responses to ingredients or compounds
Price and regulatory data, CAS number
Origin; Naturals/Synthetic; Molecule family; List of opposites: Text, numeric value
[0049] The Feature Categories depend on the type of input data. Some Feature Categories can be applied to multiple types of input data. For example, the Temporal Feature Categories (describing, e.g., the lastingness of input data) can be applied to sound (audio), visual input (light, colors, etc.), and others.
[0050] The value (e.g., numeric, text, color, etc.) of the Feature Categories is calculated by the input processor 130 based on measurement or analysis of various input data parameters. For example, the Feature Categories for sound input may be characterized by the parameters as shown in the following table:
TABLE-US-00002 Sound Feature Category table
Sound Feature Category: Measured Features/Parameters
Temporal (Life span: Lastingness/Volatility of scent; Scale - Numeric value): Total Energy, Loudness, Spectral Decrease
Energy (Physical presence/Diffusion of scent; Scale - Numeric value): Spectral Spread, Spectral Skewness, Perceptual Spectral Variation
Harmonic (Stylistic: Pleasant versus disruptive; Scale - Numeric value): Harmonic Energy, Noise Energy, Noisiness, Inharmonicity
Perceptual (Shape & Aesthetics: linear, sharp, round liquid; Scale - Numeric value): Perceptual Spectral Centroid, Sharpness, Spectral Flatness, Harmonic Energy
[0051] The individual scents may be categorized according to the Feature Categories in a relationship such that a particular scent will correlate to a particular description for a Feature Category. For example, a particular scent may correlate to a particular value for the Temporal Feature Category. Within the scope of this invention, the Feature Categories are referred to as Scent Descriptors in their association with scents. The following table illustrates Scent Descriptors (Feature Categories) for sound input data with their associated scents. The scents are ordered as described in the table (i.e., the top scent represents the "Most" end of the scale).
TABLE-US-00003 Scents Categories for Sound Input Data (each list ordered from "Most" to "Least")
Temporal (Most long-lasting to least long-lasting): Ether, Animal, Woody, Floral, Soil, Wet, Greens, Luminous, Zest
Energy (Most diffusive to least diffusive): Floral, Wet, Soil, Woody, Zest, Animal, Greens, Luminous, Ether
Perceptual (Most sharp to least sharp/most round): Soil, Woody, Luminous, Wet, Greens, Zest, Floral, Animal, Ether
Harmonic (Most pleasant/harmonic to most disruptive): Floral, Ether, Zest, Animal, Greens, Woody, Wet, Soil, Luminous
[0052] The input receiver 120 can receive input data from a user, from the surroundings, from another device, or from its own stored data. For example, a user can provide input by typing, scanning a document, uploading a file to the system, speaking into a microphone, and various other methods. The input receiver 120 can also collect input data from the surroundings, for example, noise and light levels, music, radio frequencies, etc. The input data can also be provided to the input receiver 120 via another device, such as a mobile device via wireless communications, or from a network or internet location that contains the data. The input processor 130 may be a computer with peripheral devices, such as a display, keyboard, touchpad, stylus, and other peripheral devices. The input receiver 120 may also comprise various instrumentation for receiving, sensing, measuring, or detecting the input data, such as microphones, temperature sensors, light/dark sensors, color sensors, radio frequency sensors, spectral analyzers, sound frequency analyzers, vision systems and cameras, face recognition, text recognition, voice recognition, image recognition, biometric sensors, and numerous others. In some embodiments, one device may act both as an input receiver 120 and a visual output device 240; for example, a monitor that has touch-screen capabilities.
[0053] In an embodiment for an autonomous generative scent creation process, the input receiver 120 can also receive input data on its own from previously created generations (or formulations) of scent. Such an embodiment may be configured to continuously generate new scent formulations without external input, based on internally provided input data.
[0054] The input data may be questionnaire answers, chosen price ranges, chosen ingredients (e.g., specific scents, categories of scents, Naturals or Synthetic, etc.), user-entered data, social-media based data, biometric feedback, financial data, stock exchange-based data, weather-based data, personal/emotion-based data, sports-based data, sound-based data, scent(s)-based data, sensor-based data, image-based data, and combinations thereof. The user may utilize a mobile application to generate the data. For example, the mobile application may have a questionnaire to which the user provides answers. The answers are then transmitted to the input processor 130. Additionally, the user may input data directly into the input processor 130. Alternatively, the input processor 130 may receive data in the form of social-media based data, biometric feedback, stock exchange-based data, weather-based data, personal emotion-based data, sports-based data, sound-based data, scent(s)-based data, sensor-based data, or image-based data.
[0055] The input processor 130 ingests the input data and analyzes it. For example, for sound input data, the input processor 130 can measure various parameters that describe the sound ("Sound Descriptors") such as Total Energy, Loudness, Spectral Decrease, Spectral Spread, Spectral Skewness, Perceptual Spectral Variation, Harmonic Energy, Noise Energy, Noisiness, Inharmonicity, Perceptual Spectral Centroid, Sharpness, Spectral Flatness, and others. For sound input data, the input processor 130 may also analyze the context of a song. For visual input data (e.g., image(s), video(s), surrounding(s), etc.), the input processor 130 may analyze the data for the presence and amount of different colors, hue, darkness and lightness, luminosity, what is in the scene, the presence and number of people, whether the image is of an urban or natural environment, and various other indicators ("Visual Descriptors"). For people (whether in an image or the surroundings), the input processor 130 may analyze the facial expression and emotions, and assess and assign a value (e.g., on a sliding scale) for gender, ethnicity, race, age, etc. ("Personal Descriptors"). For text input, the input processor 130 may analyze the source, the context, and any known associations with it.
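The description names the Sound Descriptors but gives no formulas. As an illustrative sketch only (not the patented implementation), a few of them can be computed from a mono waveform using standard signal-processing definitions; the function name and the use of RMS as a loudness proxy are assumptions.

```python
import numpy as np

def sound_descriptors(signal, sample_rate):
    """Compute a handful of the Sound Descriptors named in [0055] from a
    mono waveform, using common textbook definitions (assumed, since the
    text does not specify exact formulas)."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    power = spectrum / (spectrum.sum() + 1e-12)          # normalized weights

    total_energy = float(np.sum(signal ** 2))
    loudness = float(np.sqrt(np.mean(signal ** 2)))      # RMS as loudness proxy
    centroid = float(np.sum(freqs * power))              # spectral centroid
    spread = float(np.sqrt(np.sum(((freqs - centroid) ** 2) * power)))
    # Spectral flatness: geometric mean over arithmetic mean of the spectrum
    flatness = float(np.exp(np.mean(np.log(spectrum + 1e-12))) /
                     (np.mean(spectrum) + 1e-12))
    return {"total_energy": total_energy, "loudness": loudness,
            "spectral_centroid": centroid, "spectral_spread": spread,
            "spectral_flatness": flatness}
```

For a pure 440 Hz tone the centroid lands near 440 Hz and the flatness near zero, while broadband noise yields a flatness closer to one.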
[0056] Based upon the analysis of the input data, the input processor 130 creates a description of the input data. The description may be numeric, text, or both. For example, for sound input data, the input processor 130 will assign a numeric value to several categories that describe the features of the sound input data. Such categories may be 1) Temporal Features, 2) Energy Features, 3) Perceptual Features, and 4) Harmonic Features. The numeric value assigned to each category of features will be based on the analysis of the appropriate Sound Descriptors representative of each feature category, as set forth in the Sound Feature Category table. Also as set forth in the table, the numeric value represents the level to which each feature is present in the sound input data. For example, the numeric value for the Temporal Features category will be representative of the sound input data on a scale of Most Long-Lasting to Least Long-Lasting (e.g., a high number may represent a long-lasting sound, while a low number represents a short sound, or vice versa).
[0057] Similarly, for an image (or other visual) input data, the input processor 130 creates a description of the input data by assigning a numeric value to several feature categories based on the Visual Descriptors, and on Personal Descriptors if people are present. Those features categories may include Brightness, Hue, Color Palette, Contrast, People, Nature, and if people are present, Emotion.
[0058] In addition to, or instead of, numeric values, the input processor 130 may assign text descriptors to the input data. For example, the text descriptors may include descriptive words, such as "bright," "blue," "fast," "allegro," "warm," "emotional," "sad," "green," "grey," "sunny," "forest," "wild," "disharmony," "melodic," and numerous others. The input processor 130 may also associate additional text descriptors with the exemplary text descriptors in the previous sentence based on the input. For example, the "grey" descriptor may be associated with the additional descriptors "dull" and/or "risk avoiding."
[0059] Based on the analysis performed by the input processor 130, the algorithm correlates the input data descriptors to the Scent Descriptors (i.e., Feature Categories) and creates a "recipe" (also referred to as a formulation, or generation) for mixing the different scents (single ingredients, or compounds). Based on the description (numeric, text, or other) of the Feature Categories, the algorithm selects the different scents and the amount of each scent to dispense. For example, for long-lasting sound input data (e.g., in the Temporal Feature Category), the algorithm may select the "Ether" scent, and an amount based on a pre-programmed algorithm. Based on the Harmonic, Perceptual, and Energy Feature Categories for the same sound, the algorithm may select different amounts of the following scents: Woody, Greens, Ether, Wet, Soil, Zest, Animal, Floral, and Luminous, resulting in a recipe as illustrated in
[0060] The input processor 130 and algorithm may operate as illustrated in the flow-chart in the following figure (also shown as
[0061] In the preceding algorithm, input data audio files are selected and analyzed according to the sound Feature Categories illustrated in the Example Feature Categories Describing Scents table, above. The analysis results in a configuration for each Feature Category. In one example, each Feature Category configuration consists of a "pool," an "index," and "drops." The configurations for each Feature Category are combined into a single configuration, which is then saved as a new generation (or formulation) of scent.
[0062] A system embodying the algorithm illustrated in the figure above may select (randomly or otherwise) several (e.g., 3) input data audio files from a number of existing pre-stored audio files (e.g., in one embodiment, 450). The existing audio files are divided into pools of a smaller number of files (e.g., 50 files each). Each of the pools is associated with a specific scent dispenser 150 containing a particular scent.
[0063] For each of the Sound Descriptors the algorithm may perform the following steps:
[0064] 1) Determine from which pool to select a file for each Sound Descriptor. This is the “pool” value in the configuration.
[0065] 2) Select a file from the chosen pool. This is the “index” value.
[0066] 3) Calculate the number of drops in the scent formula for each Sound Descriptor.
[0067] In one example, the process for the creation of a generation of fragrance starts by selecting 3 input audio files randomly. More or fewer input audio files may also be selected. Alternatively, the audio files may be selected by a user, or may be received by the input receiver 120 (e.g., as files, or through a microphone).
[0068] To select the pool for each Sound Descriptor, the algorithm operates as follows. The algorithm calculates the mean of the Sound Descriptor for each of the input audio files. This calculation results in one file having the highest mean value, one file having the lowest mean value, and one file having a value in between the highest and the lowest. The difference between the highest and the lowest value is divided by a predetermined number. In this example, the predetermined number is 9, corresponding to the number of Sound Descriptors or to the number of scents in each Scents Category for Sound Input Data, illustrated above. If the middle value is below the median, the algorithm chooses the first whole number below the median on this scale of 9. If the middle value is above the median, the algorithm chooses the first whole number above the median on this scale of 9. This number determines from which pool the algorithm will select a file for a particular Sound Descriptor. The algorithm repeats this process of selecting a pool for each Sound Descriptor. Each pool is associated with a specific scent dispenser 150 (or scent).
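The pool-selection step can be sketched as follows. The paragraph is ambiguous about what "the median" refers to on the nine-step scale; this sketch (with the assumed function name select_pool) treats it as the midpoint of that scale and clamps the result to a valid pool number.

```python
import math

def select_pool(means, n_pools=9):
    """Choose the pool (and thus the scent dispenser 150) for one Sound
    Descriptor, given the per-file means of that descriptor.

    The span between the highest and lowest mean is divided into
    n_pools steps; the middle mean is snapped to a whole number on
    that scale: down if it falls below the scale's midpoint (read here
    as "the median"), up if it falls above.
    """
    lo, mid, hi = sorted(means)
    if hi == lo:                        # degenerate case: all files agree
        return (n_pools + 1) // 2
    step = (hi - lo) / n_pools          # width of one step on the scale of 9
    position = (mid - lo) / step        # where the middle mean lands (0..9)
    if position < n_pools / 2:
        pool = math.floor(position)     # first whole number below the midpoint
    else:
        pool = math.ceil(position)      # first whole number above the midpoint
    return max(1, min(n_pools, pool))   # clamp to a valid pool number
```

A middle mean near the low end thus selects a low-numbered pool, and one near the high end a high-numbered pool.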
[0069] To select the index (a number corresponding to, e.g., a file within the chosen pool of files) for each Sound Descriptor, the algorithm operates as follows. The algorithm calculates the median value of the Sound Descriptor for each of the input audio files. The algorithm then subtracts the lowest median value from the highest median value for each Sound Descriptor and divides the number of files by the result, so that the result of the division will provide a scale in which the highest median value will correspond to the highest possible index (i.e., file number) and the lowest median will correspond to the lowest index (i.e., lowest file number, e.g., 0 or 1). To determine the scale, for example, the algorithm may determine the straight line on a Cartesian (e.g., X, Y) coordinate system defined by the X, Y number pairs (highest median, highest index) and (lowest median, lowest index). In the next step, the algorithm calculates a new median ("Median.new") of the previously calculated median values. In the example with three median values (i.e., three input audio files), Median.new will be the middle value. Next, the algorithm determines the index (file number) to which Median.new corresponds by mapping Median.new to the scale calculated above (in which the highest median corresponds to the highest index, and the lowest median corresponds to the lowest index). The resulting number represents the index, corresponding to a file in the pool.
[0070] To select the number of drops (e.g., the amount of the particular scent determined by the pool, above) for each Sound Descriptor, the algorithm operates as follows. The algorithm calculates the mean (value z) of the means (as calculated above) for each Sound Descriptor. Next, the algorithm maps z onto a scale of the number of files in the pool (e.g., 50), bounded by what could have been the maximum and minimum values for this Sound Descriptor. The algorithm subtracts z from the chosen index (e.g., audio file number) calculated above, and converts the resulting number to an absolute number. The resulting absolute number, x, represents the number of drops of a scent for each Sound Descriptor.
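A minimal sketch of the drop-count calculation follows, assuming the descriptor's theoretical minimum and maximum are supplied as parameters (the text does not say where these bounds come from):

```python
def select_drops(means, index, pool_size=50,
                 descriptor_min=0.0, descriptor_max=1.0):
    """Compute the drop count x for one Sound Descriptor.

    z, the mean of the per-file means, is mapped onto a 0..pool_size
    scale using the descriptor's assumed min/max bounds; the absolute
    distance between that mapped value and the chosen index is x.
    """
    z = sum(means) / len(means)                         # mean of the means
    span = descriptor_max - descriptor_min
    z_scaled = (z - descriptor_min) / span * pool_size  # z on the pool scale
    return abs(index - z_scaled)                        # absolute number x
```

For example, with per-file means of 1, 2, and 3 on a 0..10 descriptor range and a chosen index of 25 in a 50-file pool, z maps to 10 and x is 15 drops.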
[0071] After calculating the configuration for each Sound Descriptor by determining the pool, index, and drops as described above, the algorithm combines the individual configurations. The algorithm adds the x (i.e., drops) values for all Sound Descriptors and calculates the percentage per Sound Descriptor within the formula of the currently generated fragrance (i.e., generation). Because each pool is associated with a specific scent dispenser 150, the drops associated with each pool (i.e., scent) are calculated as a percentage of the total amount of drops for the formulation. This percentage is calculated into an absolute amount of volume of ingredient (e.g., scents) per scent dispenser 150 for each Scent Descriptor so that the desired quantity is being compounded in the correct ratio.
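The combination step described above can be sketched as converting per-scent drop totals into fractions of the formula and then into absolute dispensing volumes; the bottle volume is an assumed parameter, not specified in the text:

```python
def combine_formula(drops_by_scent, bottle_volume_ml=50.0):
    """Combine per-descriptor drop counts into a dispensing formula.

    Each pool is associated with one scent dispenser 150, so the drop
    count per scent is expressed as a fraction of the total and then
    converted to an absolute volume for the chosen bottle size.
    """
    total = sum(drops_by_scent.values())
    if total == 0:
        raise ValueError("formula contains no drops")
    return {scent: drops / total * bottle_volume_ml
            for scent, drops in drops_by_scent.items()}
```

The resulting per-scent volumes sum to the bottle volume, so the quantities are compounded in the correct ratio.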
[0072] The input processor 130 and algorithm can also be programmed to correlate the input data to the scents according to the following flow chart (also shown as
[0073] The amount of each of the plurality of scents 140 are dispensed from the plurality of scent dispensers 150 into the container 180. The container 180 is transported on the conveyor belt 160 to allow the container 180 to be movably positioned to receive each of the plurality of scents 140 from each of the plurality of scent dispensers 150. The plurality of motion sensors 170 guide the container 180 on the conveyor belt 160. The input processor 130 generates information for the label 192 and the unique code. The label 192 is affixed to the container 180. The cap 210 is secured to the container 180.
[0074] The following chart (also shown as
[0075] In another embodiment, the system can allow a user to convert scent to specific sound. In this embodiment, the input processor 130 calculates the data to generate sounds. The at least one sound output device 220 outputs the sounds. The input processor 130 translates scent properties to sound properties. The scent properties include (1) Life Span, (2) Physical Presence, (3) Stylistic, and (4) Shape/Aesthetics. Life Span is the lastingness or volatility of the scent. Life Span may be translated to the sound properties (a) Total Energy, (b) Loudness, and (c) Spectral Decrease. Physical Presence is the diffusion of the scent. Physical Presence may be translated to the sound properties (a) Spectral Spread, (b) Spectral Skewness, and (c) Perceptual Spectral Variation. Stylistic is the pleasantness of the scent compared to its disruptiveness. Stylistic may be translated to the sound properties (a) Harmonic Energy, (b) Noise Energy, (c) Noisiness, and (d) Inharmonicity. Shape/Aesthetics is the shape of the scent, such as linear, sharp or round liquid. Shape/Aesthetics may be translated to the sound properties (a) Perceptual Spectral Centroid, (b) Sharpness, (c) Spectral Flatness, and (d) Harmonic Energy. The input processor 130 outputs sound through the sound output device 220 based upon the sound properties that are translated based upon the scent properties.
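The scent-to-sound property mapping enumerated in this paragraph can be captured as a lookup table. The property names are taken directly from the text; the SCENT_TO_SOUND structure, the function name, and the direct copying of values are illustrative assumptions (the text does not specify the translation function, and note that Harmonic Energy appears under two scent properties, so a later property overwrites its value here):

```python
# Scent properties mapped to the sound properties listed in [0075].
SCENT_TO_SOUND = {
    "Life Span": ["Total Energy", "Loudness", "Spectral Decrease"],
    "Physical Presence": ["Spectral Spread", "Spectral Skewness",
                          "Perceptual Spectral Variation"],
    "Stylistic": ["Harmonic Energy", "Noise Energy",
                  "Noisiness", "Inharmonicity"],
    "Shape/Aesthetics": ["Perceptual Spectral Centroid", "Sharpness",
                         "Spectral Flatness", "Harmonic Energy"],
}

def sound_targets(scent_properties):
    """Translate scent property scores into target values for the
    associated sound properties (direct copy is an assumed placeholder
    for the unspecified translation function)."""
    targets = {}
    for scent_prop, value in scent_properties.items():
        for sound_prop in SCENT_TO_SOUND.get(scent_prop, []):
            targets[sound_prop] = value
    return targets
```

The sound output device 220 would then synthesize audio matching these target sound properties.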
[0076] In one embodiment of the invention, as illustrated in
[0077] In one embodiment of the invention, as illustrated in
[0078] In one embodiment of the invention, as illustrated in
[0079] In one embodiment of the invention, a capping system may be used that may account for different sizes and shapes of caps 210. A funneling gate system may provide the right cap 210 from a plurality of cap-magazines. The cap-magazines may be easily exchanged to refill with caps 210, or to swap to the desired cap size. While a gripper arm connected to a linear actuator guides the cap 210 toward the container 180, a mechanically opening funnel makes sure the dip tube of the mist sprayer cap 210 enters the opening of the container 180 before it opens up to drive the head of the cap 210 on top of the container 180. The cap 210 may be a mist sprayer cap with a dip tube as described above and as illustrated in
[0080] In one embodiment of the invention, a crimping tool may be used to attach the cap 210 to the container 180 in a watertight way. This may be extended with a sleevepress and/or a system that attaches a closure on top of the cap's mist sprayer.
[0081] In one embodiment of the invention, a plurality of exit stations 230 are installed on the generative scent design system 100. The conveyor belt 160 may guide the container 180 to the desired exit station 230. The exit station 230 is equipped with an actuator that may take the container 180 off of the conveyor belt 160. This allows the invention to be used by a plurality of users. The exit stations 230 may be outfitted with output devices, such as displays, to provide information to the users.
[0082] While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes, omissions, and/or additions may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims. Moreover, unless specifically stated any use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another.