DETERMINATION OF THE RHEOLOGICAL BEHAVIOR OF A FLUID

20220178805 · 2022-06-09

Abstract

The present disclosure relates to the determination of the rheological behavior of a fluid.

Claims

1. A method for determining at least one rheological property of a fluid, the method comprising: acquiring a sequence of images of the fluid in motion, transmitting the sequence of images to a prediction model as an input signal for determining the rheological property, wherein the prediction model is trained by history and/or calibration data to predict a relationship between visible features of the fluid in motion and at least a rheological property of the fluid, and receiving as an output from the prediction model the rheological property.

2. A system comprising: an image acquisition unit for acquiring a sequence of images of a fluid in motion, and a computing system comprising: a memory; and a processing unit in communication with the memory and configured with processor-executable instructions to: receive a sequence of images captured by the image acquisition unit, feed the sequence of images into a prediction model, wherein the prediction model is trained by history and/or calibration data to predict a relationship between visible features of the fluid in motion and at least a rheological property of the fluid, receive as an output from the prediction model a rheological property, and output the rheological property.

3. A system for controlling a process for producing a product having at least one desired product property, the system comprising: an image acquisition unit for capturing a sequence of images of the product in a fluid state or of a fluid precursor of the product, the fluid product or fluid precursor being in motion; a prediction unit arranged to receive at least one sequence of images from the image acquisition unit and to determine at least one property parameter representative of a rheological property of the fluid product or fluid precursor based on the at least one sequence of images; a control unit arranged to compare the property parameter with a set point and to determine control output data representative of a mismatch between the property parameter and the set point; and an actuating unit arranged to receive said control output data and to change at least one process condition that affects said at least one product property in response to said received control output data.

4. A method of controlling a process for producing a product having at least one desired product property, the method comprising the steps: acquiring a sequence of images of the product in a fluid state or of a fluid precursor of the product, the fluid product or fluid precursor being in motion; feeding the sequence of images as an input to a prediction model; receiving from the prediction model at least one property parameter representative of a rheological property of the fluid product or fluid precursor; comparing the property parameter with a set point; determining control output data representative of a mismatch between the property parameter and the set point; and changing at least one process condition that affects said at least one product property in response to said received control output data.

5. A non-transitory computer readable medium storing one or more programs, the one or more programs configured to determine at least one rheological property of a fluid, the one or more programs comprising instructions that, when executed by a processor, cause the processor to: acquire a sequence of images of the fluid in motion, transmit the sequence of images to a prediction model as an input signal for determining the rheological property, wherein the prediction model is trained by history and/or calibration data to predict a relationship between visible features of the fluid in motion and at least a rheological property of the fluid, and receive as an output from the prediction model the rheological property.

6. A non-transitory computer readable medium storing one or more programs, the one or more programs configured to control a process for producing a product having at least one desired product property, the one or more programs comprising instructions that, when executed by a processor, cause the processor to: acquire a sequence of images of the product in a fluid state or of a fluid precursor of the product, the fluid product or fluid precursor being in motion; feed the sequence of images as an input to a prediction model; receive from the prediction model at least one property parameter representative of a rheological property of the fluid product or fluid precursor; compare the property parameter with a set point; determine control output data representative of a mismatch between the property parameter and the set point; and change at least one process condition that affects said at least one product property in response to said received control output data.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0089] The disclosure will now be described, by way of example only, with reference to the accompanying drawings, in which:

[0090] FIG. 1 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0091] FIG. 2 illustrates a detailed computing system for determining rheological behavior of a fluid, according to some embodiments;

[0092] FIG. 3 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0093] FIG. 4 shows a vessel which is equipped with a window, according to some embodiments;

[0094] FIG. 5 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0095] FIG. 6 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0096] FIG. 7 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0097] FIG. 8 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0098] FIG. 9 shows a system for determining rheological behavior of a fluid, according to some embodiments;

[0099] FIG. 10 shows a flow chart of a method of determining rheological behavior of a fluid, according to some embodiments;

[0100] FIG. 11 shows a flow chart of a method of determining rheological behavior of a fluid, according to some embodiments;

[0101] FIG. 12 shows a flow chart of a method of determining rheological behavior of a fluid, according to some embodiments;

[0102] FIG. 13 illustrates various layers within a convolutional neural network (CNN), according to some embodiments;

[0103] FIG. 14 illustrates computation stages within a convolutional layer of a CNN, according to some embodiments;

[0104] FIG. 15 illustrates a recurrent neural network, according to some embodiments; and

[0105] FIG. 16 illustrates training and deployment of a neural network, according to some embodiments.

DETAILED DESCRIPTION

[0106] FIG. 1 shows a system for determining rheological behavior of a fluid, according to some embodiments. The system comprises an image acquisition unit (3) and a computing system (6). The computing system comprises a processing unit (61), a memory (62), and an output unit (63) for outputting information.

[0107] The image acquisition unit (3) can be used for capturing a sequence of images of a fluid in motion. The processing unit (61) is configured with processor-executable instructions (stored in the memory (62))

[0108] to receive the sequence of images captured by the image acquisition unit,

[0109] to determine at least one rheological property of the fluid by feeding the sequence of images into a prediction model, wherein the prediction model is trained by history and/or calibration data to predict the relationship between visible features of the fluid in motion and at least a rheological property of the fluid, and

[0110] to cause the output unit to output the rheological property.
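By way of illustration only, the logic carried out by the processing unit (61) could be sketched as follows. This is a minimal sketch assuming a trained PyTorch-style model; the function name, tensor layout and the way the model is obtained are hypothetical and not part of the disclosure.

```python
# Minimal sketch of the processing-unit logic described above.
# Assumes a trained PyTorch model; model class and image-loading are
# hypothetical placeholders, not part of the disclosure.
import torch


def predict_rheological_property(model: torch.nn.Module,
                                 image_sequence: torch.Tensor) -> torch.Tensor:
    """Feed a sequence of images (T x C x H x W) into the prediction model
    and return the predicted rheological property (e.g. a viscosity value)."""
    model.eval()
    with torch.no_grad():
        # Add a batch dimension: 1 x T x C x H x W
        prediction = model(image_sequence.unsqueeze(0))
    return prediction.squeeze(0)
```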

[0111] FIG. 2 illustrates a computing system (6) according to some embodiments of the present disclosure in more detail. Generally, a computing system of exemplary implementations of the present disclosure may be referred to as a computer and may comprise, include, or be embodied in one or more fixed or portable electronic devices. The computer may include one or more of each of a number of components such as, for example, a processing unit (61) connected to a memory (62) (e.g., storage device).

[0112] The processing unit (61) may be composed of one or more processors alone or in combination with one or more memories. The processing unit is generally any piece of computer hardware that is capable of processing information such as, for example, data, computer programs and/or other suitable electronic information. The processing unit is composed of a collection of electronic circuits some of which may be packaged as an integrated circuit or multiple interconnected integrated circuits (an integrated circuit at times more commonly referred to as a “chip”). The processing unit may be configured to execute computer programs, which may be stored onboard the processing unit or otherwise stored in the memory (62) (of the same or another computer).

[0113] The processing unit (61) may be a number of processors, a multi-core processor or some other type of processor, depending on the particular implementation. Further, the processing unit may be implemented using a number of heterogeneous processor systems in which a main processor is present with one or more secondary processors on a single chip. In some embodiments, the processing unit may be a symmetric multi-processor system containing multiple processors of the same type. In some embodiments, the processing unit may be embodied as or otherwise include one or more ASICs, FPGAs or the like. Thus, although the processing unit may be capable of executing a computer program to perform one or more functions, the processing unit of various embodiments may be capable of performing one or more functions without the aid of a computer program. In some embodiments, the processing unit may be appropriately programmed to perform functions or operations according to example implementations of the present disclosure.

[0114] The memory (62) is generally any piece of computer hardware that is capable of storing information such as, for example, data, computer programs (e.g., computer-readable program code (70)) and/or other suitable information either on a temporary basis and/or a permanent basis. The memory may include volatile and/or non-volatile memory, and may be fixed or removable. Examples of suitable memory include random access memory (RAM), read-only memory (ROM), a hard drive, a flash memory, a thumb drive, a removable computer diskette, an optical disk, a magnetic tape or some combination of the above. Optical disks may include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), DVD or the like. In various instances, the memory may be referred to as a computer-readable storage medium. The computer-readable storage medium is a non-transitory device capable of storing information, and is distinguishable from computer-readable transmission media such as electronic transitory signals capable of carrying information from one location to another. Computer-readable medium as described herein may generally refer to a computer-readable storage medium or computer-readable transmission medium.

[0115] In addition to the memory (62), the processing unit (61) may also be connected to one or more interfaces for displaying, transmitting and/or receiving information. The interfaces may include one or more communications interfaces and/or one or more user interfaces. The communications interface(s) may be configured to transmit and/or receive information, such as to and/or from other computer(s), network(s), database(s) or the like. The communications interface may be configured to transmit and/or receive information by physical (wired) and/or wireless communications links. The communications interface(s) may include interface(s) (66) to connect to a network, using technologies such as cellular telephone, Wi-Fi, satellite, cable, digital subscriber line (DSL), fiber optics and the like. In some examples, the communications interface(s) may include one or more short-range communications interfaces (67) configured to connect devices using short-range communications technologies such as NFC, RFID, Bluetooth, Bluetooth LE, ZigBee, infrared (e.g., IrDA) or the like.

[0116] The user interfaces may include an output unit (63) such as a display. The display may be configured to present or otherwise display information to a user, suitable examples of which include a liquid crystal display (LCD), light-emitting diode display (LED), plasma display panel (PDP) or the like. The user input interface(s) (64) may be wired or wireless, and may be configured to receive information from a user into the computing system (6), such as for processing, storage and/or display. Suitable examples of user input interfaces include a microphone, image or video capture device (image acquisition unit), keyboard or keypad, joystick, touch-sensitive surface (separate from or integrated into a touchscreen) or the like. In some embodiments, the user interfaces may include automatic identification and data capture (AIDC) technology (65) for machine-readable information. This may include barcode, radio frequency identification (RFID), magnetic stripes, optical character recognition (OCR), integrated circuit card (ICC), and the like. The user interfaces may further include one or more interfaces for communicating with peripherals such as printers and the like.

[0117] As indicated above, program code instructions may be stored in memory, and executed by a processing unit that is thereby programmed, to implement functions of the systems, subsystems, tools and their respective elements described herein. As will be appreciated, any suitable program code instructions may be loaded onto a computer or other programmable apparatus from a computer-readable storage medium to produce a particular machine, such that the particular machine becomes a means for implementing the functions specified herein. These program code instructions may also be stored in a computer-readable storage medium that can direct a computer, processing unit or other programmable apparatus to function in a particular manner to thereby generate a particular machine or particular article of manufacture. The instructions stored in the computer-readable storage medium may produce an article of manufacture, where the article of manufacture becomes a means for implementing functions described herein. The program code instructions may be retrieved from a computer-readable storage medium and loaded into a computer, processing unit or other programmable apparatus to configure the computer, processing unit or other programmable apparatus to execute operations to be performed on or by the computer, processing unit or other programmable apparatus.

[0118] Retrieval, loading and execution of the program code instructions may be performed sequentially such that one instruction is retrieved, loaded and executed at a time. In some embodiments, retrieval, loading and/or execution may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Execution of the program code instructions may produce a computer-implemented process such that the instructions executed by the computer, processing circuitry or other programmable apparatus provide operations for implementing functions described herein.

[0119] Execution of instructions by a processing unit, or storage of instructions in a computer-readable storage medium, supports combinations of operations for performing the specified functions. In this manner, a computing system (6) may include a processing unit (61) and a computer-readable storage medium or memory (62) coupled to the processing circuitry, where the processing circuitry is configured to execute computer-readable program code (70) stored in the memory. It will also be understood that one or more functions, and combinations of functions, may be implemented by special purpose hardware-based computer systems and/or processing circuitry which perform the specified functions, or combinations of special purpose hardware and program code instructions.

[0120] FIG. 3 shows a system for determining rheological behavior of a fluid, according to some embodiments. An image acquisition unit (3) captures images of a fluid (1) in motion. The fluid (1) is moved by an agitator (5) in a vessel (2). A light source (4) illuminates the fluid (1) in motion.

[0121] The image acquisition unit (3) may be connected to a computing system (6) so that the image acquisition unit (3) is able to transmit one or more sequences of captured images to the computing system (6).

[0122] FIG. 4 shows a vessel (2) which is equipped with a window (7), according to some embodiments. The fluid (1) can be observed through the window (7). An image acquisition unit can be located outside the vessel (2) and can be configured to capture a sequence of images of the fluid (1) in motion through the window (7). The fluid can be illuminated by one or more light sources installed inside and/or outside the vessel (2).

[0123] FIG. 5 shows a system for determining rheological behavior of a fluid, according to some embodiments. An image acquisition unit (3) captures images of a fluid (1) in motion. The fluid (1) is moved by an agitator (5) in a vessel (2). A light source (4) illuminates the fluid (1) in motion. The vessel (2) is equipped with baffles (8, 8′) which are located inside the vessel (2). The image acquisition unit (3) is adjusted so that it captures the characteristic motion of the fluid (1) at a baffle (8).

[0124] The image acquisition unit (3) may be connected to a computing system (6) so that the image acquisition unit (3) is able to transmit one or more sequences of captured images to the computing system (6).

[0125] FIG. 6 shows a system for determining rheological behavior of a fluid, according to some embodiments. An image acquisition unit (3) captures images of a fluid (1) in a vessel (2). A light source (4) illuminates the fluid (1). The fluid (1) is moved by a first agitator (5) and a second agitator (5′). The image acquisition unit (3) is adjusted so that it captures the characteristic motion of the fluid (1) at the second agitator (5′).

[0126] The image acquisition unit (3) may be connected to a computing system (6) so that the image acquisition unit (3) is able to transmit one or more sequences of images to the computing system (6).

[0127] FIG. 7 shows a system for determining rheological behavior of a fluid, according to some embodiments. An image acquisition unit (3) captures images of a fluid (1) in a vessel (2). A light source (4) illuminates the fluid (1). The fluid (1) is conveyed through a conduit (9) onto an inclined plane (10). The fluid flows down the inclined plane. The image acquisition unit (3) is adjusted so that it captures the characteristic motion of the fluid (1) down the inclined plane.

[0128] The image acquisition unit (3) may be connected to a computing system (6) so that the image acquisition unit (3) is able to transmit one or more sequences of images to the computing system (6).

[0129] FIG. 8 shows a system for determining rheological behavior of a fluid, according to some embodiments. An image acquisition unit (3) captures images of a fluid (1) in motion. The fluid (1) is moved by an agitator (5) in a vessel (2). A light source (4) illuminates the fluid (1) in motion.

[0130] The image acquisition unit (3) may be connected to a computing system (6) so that the image acquisition unit (3) is able to transmit one or more sequences of images to the computing system (6).

[0131] The vessel (2) may be equipped with two sensors (11, 11′), one sensor (11) being located above the fluid (1), the other sensor (11′) being located within the fluid (1).

[0132] The sensors may be used to collect measurement data representative of the conditions during the generation of the sequence of images, such as the temperature of the fluid in motion, and the pressure applied to the fluid.

[0133] The sensors may be connected to the computing system (6) so that the sensors are able to transmit measurement data to the computing system (6).

[0134] The measurement data may be used to train the prediction model and/or to determine one or more rheological properties by using the trained prediction model.
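By way of illustration only, such measurement data could be supplied to the model alongside the image sequence. The sketch below assumes a hypothetical PyTorch-style model that accepts a second input carrying the process conditions; the names and units are illustrative assumptions, not part of the disclosure.

```python
import torch


def predict_with_conditions(model: torch.nn.Module,
                            image_sequence: torch.Tensor,
                            temperature_c: float,
                            pressure_bar: float) -> torch.Tensor:
    """Combine the image sequence with sensor readings (temperature, pressure)
    and pass both to the prediction model as a second input."""
    conditions = torch.tensor([[temperature_c, pressure_bar]], dtype=torch.float32)
    model.eval()
    with torch.no_grad():
        return model(image_sequence.unsqueeze(0), conditions)
```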

[0135] FIG. 9 shows a system for determining rheological behavior of a fluid, according to some embodiments. The system is configured to control a process for producing a product having at least one desired product property. The system comprises an image acquisition unit (3), a computing system (6), and actuating means (14).

[0136] The image acquisition unit (3) may be used for capturing one or more sequences of images of the product in a fluid state or of a fluid precursor of the product, the fluid product or fluid precursor (1) being agitated by an agitator (5) in a vessel (2).

[0137] In some embodiments, the computing system (6) may serve (at least) two different purposes. In some embodiments, it may act as a prediction unit (12) which may be arranged to receive at least one sequence of images from the image acquisition unit (3) and to determine at least one property parameter representative of a rheological property of the fluid product or fluid precursor (1) on the basis of the at least one sequence of images. In some embodiments, the computing system (6) may act as control means (13) which may be arranged to compare the property parameter with a set point and to determine control output data representative of the mismatch between the property parameter and the set point.

[0138] The actuating means (14) may be arranged to receive said control output data and to change at least one process condition (e.g. the temperature) that affects said at least one product property in response to said received control output data.

[0139] FIG. 10 shows a flow chart of a method (100) of determining rheological behavior of a fluid, according to some embodiments. The method (100) may comprise the following steps:

[0140] (110) acquiring a sequence of images of a fluid in motion,

[0141] (120) transmitting the sequence of images to a prediction model as an input signal for determining a rheological property, wherein the prediction model is trained by history and/or calibration data to predict the relationship between visible features of the fluid in motion and at least a rheological property of the fluid, and

[0142] (130) receiving as an output from the prediction model the rheological property.

[0143] FIG. 11 shows a flow chart of a method (200) of determining rheological behavior of a fluid, according to some embodiments. The method (200) may comprise the following steps:

[0144] (210) receiving a sequence of images of a fluid in motion captured by an image acquisition unit,

[0145] (220) feeding the sequence of images into a prediction model, wherein the prediction model is trained by history and/or calibration data to predict the relationship between visible features of the fluid in motion and at least a rheological property of the fluid,

[0146] (230) receiving as an output from the prediction model a rheological property,

[0147] (240) outputting the rheological property.

[0148] FIG. 12 shows a flow chart of a method (300) of determining rheological behavior of a fluid, according to some embodiments. The method (300) may comprise the following steps:

[0149] (310) acquiring a sequence of images of a product in a fluid state or of a fluid precursor of the product, the fluid product or fluid precursor being in motion;

[0150] (320) feeding the sequence of images as an input to a prediction model;

[0151] (330) receiving from the prediction model at least one property parameter representative of a rheological property of the fluid product or fluid precursor;

[0152] (340) comparing the property parameter with a set point;

[0153] (350) determining control output data representative of the mismatch between the property parameter and the set point; and

[0154] (360) changing at least one process condition that affects said at least one product property in response to said received control output data.
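By way of illustration only, one iteration of such a control loop could be sketched as follows. The prediction model, the scalar property output and the simple proportional correction are illustrative assumptions, not the claimed implementation.

```python
import torch


def control_step(model: torch.nn.Module,
                 image_sequence: torch.Tensor,
                 set_point: float,
                 gain: float = 0.1) -> float:
    """One pass of the control loop: predict the rheological property,
    compare it with the set point, and return a control output (here a
    simple proportional correction) for the actuating unit."""
    model.eval()
    with torch.no_grad():
        property_parameter = model(image_sequence.unsqueeze(0)).item()
    mismatch = set_point - property_parameter
    control_output = gain * mismatch  # e.g. an adjustment of the temperature
    return control_output
```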

[0155] FIGS. 13 and 14 illustrate a convolutional neural network, according to some embodiments. FIG. 13 illustrates various layers within a CNN. As shown in FIG. 13, an exemplary CNN can receive input (80) describing the red, green, and blue (RGB) components of an image. The input (80) can be processed by multiple convolutional layers (e.g., convolutional layer (81), convolutional layer (82)). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers (83). Neurons in a fully connected layer have full connections to all activations in the previous layer. The output from the fully connected layers (83) can be used to generate an output result (84) from the network.
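By way of illustration only, a comparable layer stack could be expressed as follows. This is a minimal PyTorch sketch with arbitrary layer sizes and an assumed 64×64 RGB input resolution, not the network of the disclosure.

```python
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Two convolutional layers followed by fully connected layers,
    mirroring the structure sketched in FIG. 13."""

    def __init__(self, num_outputs: int = 1):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, kernel_size=3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(32 * 16 * 16, 64),
                                nn.ReLU(),
                                nn.Linear(64, num_outputs))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB images, shape (N, 3, 64, 64)
        return self.fc(self.conv2(self.conv1(x)))
```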

[0156] In some embodiments, the activations within the fully connected layers (83) can be computed using matrix multiplication instead of convolution.

[0157] In some embodiments, the convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers (83). Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. When the convolutional layers are sparsely connected, the output of the convolution of a field (instead of the respective state value of each of the nodes in the field) is input to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to process large images.

[0158] FIG. 14 illustrates exemplary computation stages within a convolutional layer of a CNN, according to some embodiments. Input (91) to a convolutional layer (92) of a CNN can be processed in three stages of the convolutional layer (92). The three stages can include a convolution stage (93), a detector stage (94), and a pooling stage (95). The convolution layer (92) can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification or regression value.

[0159] According to some embodiments, in the convolution stage (93), the convolutional layer (92) can perform several convolutions in parallel to produce a set of linear activations. The convolution stage (93) can include an affine transformation, which is any transformation that can be expressed as a sum of a linear transformation and a translation. In some embodiments, affine transformations may include rotations, translations, scaling, and/or combinations of these transformations. In some embodiments, the convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. In some embodiments, the neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage (93) may define a set of linear activations that are processed by successive stages of the convolutional layer (92).
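By way of illustration only, the dot-product computation of the convolution stage can be sketched as follows, assuming a single channel, stride 1 and no padding.

```python
import numpy as np


def convolve2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Each output value is the dot product between the kernel weights and
    the local input region the neuron is connected to (valid convolution,
    stride 1, no padding)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```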

[0160] In some embodiments, the linear activations can be processed by a detector stage (94). In the detector stage (94), each linear activation may be processed by a non-linear activation function. The non-linear activation function increases the non-linear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. In some embodiments, the non-linear activation function may be the rectified linear unit (ReLU), which uses an activation function defined as f(x)=max(0, x), such that the activation is thresholded at zero.

[0161] In some embodiments, the pooling stage (95) uses a pooling function that replaces the output of the convolutional layer with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage (95), including max pooling, average pooling, and L2-norm pooling. In some embodiments, a CNN implementation may not include a pooling stage. In some embodiments, the pooling stage may be replaced by an additional convolution stage having an increased stride relative to previous convolution stages.
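By way of illustration only, the detector (ReLU) and max-pooling stages described above can be sketched as follows, continuing the single-channel toy assumptions used earlier.

```python
import numpy as np


def relu(x: np.ndarray) -> np.ndarray:
    """Detector stage: f(x) = max(0, x), thresholding activations at zero."""
    return np.maximum(0.0, x)


def max_pool2d(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Pooling stage: replace each non-overlapping size x size block with its
    maximum, summarising nearby outputs and adding local translation invariance."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```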

[0162] In some embodiments, the output from the convolutional layer (92) can then be processed by the next layer (96). The next layer (96) can be an additional convolutional layer or one of the fully connected layers (83). In some embodiments, the first convolutional layer (81) of FIG. 13 can output to the second convolutional layer (82), while the second convolutional layer can output to a first layer of the fully connected layers (83).

[0163] FIG. 15 illustrates a recurrent neural network, according to some embodiments. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. The illustrated RNN can be described as having an input layer (101) that receives an input vector, hidden layers (102) to implement a recurrent function, a feedback mechanism (103) to enable a ‘memory’ of previous states, and an output layer (104) to output a result. The RNN operates based on time-steps.

[0164] The state of the RNN at a given time step is influenced by the previous time step via the feedback mechanism (103). For a given time step, the state of the hidden layers (102) is defined by the previous state and the input at the current time step. An initial input (x_1) at a first time step can be processed by the hidden layer (102). A second input (x_2) can be processed by the hidden layer (102) using state information that is determined during the processing of the initial input (x_1). In some embodiments, a given state can be computed as s_t = f(Ux_t + Ws_(t-1)), where U and W are parameter matrices. The function f is generally a nonlinear function such as the hyperbolic tangent function (tanh) or a variant of the rectifier function f(x)=max(0, x). However, the specific mathematical function used in the hidden layers (102) can vary depending on the specific implementation details of the RNN.
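By way of illustration only, the recurrent update s_t = f(Ux_t + Ws_(t-1)) can be sketched as follows, using tanh for f and arbitrary, illustrative dimensions.

```python
import numpy as np


def rnn_step(x_t: np.ndarray, s_prev: np.ndarray,
             U: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One recurrent update: s_t = tanh(U @ x_t + W @ s_prev)."""
    return np.tanh(U @ x_t + W @ s_prev)


# Hypothetical dimensions: 8-dimensional inputs, 16-dimensional hidden state.
rng = np.random.default_rng(0)
U = rng.standard_normal((16, 8)) * 0.1
W = rng.standard_normal((16, 16)) * 0.1
state = np.zeros(16)
for x_t in rng.standard_normal((5, 8)):  # a sequence of 5 input vectors
    state = rnn_step(x_t, state, U, W)
```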

[0165] FIG. 16 illustrates training and deployment of a neural network, according to some embodiments. Once a given network has been structured for a task, the neural network is trained using a training dataset (1102).

[0166] In some embodiments, the initial weights may be chosen randomly or by pre-training using a deep belief network to start the training process. The training cycle can then be performed in either a supervised or unsupervised manner. Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset (1102) includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework (1104) can adjust the weights that control the untrained neural network (1106). The training framework (1104) can provide tools to monitor how well the untrained neural network (1106) is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural network (1108). The trained neural network (1108) can then be deployed to implement any number of machine learning operations. A sequence of images of a new fluid (1112) can be input into the trained neural network (1108) to determine at least one rheological property.
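By way of illustration only, such a supervised training cycle could be sketched as follows. The optimizer, loss function and dataset interface are illustrative assumptions, not those of the disclosure; the dataset is assumed to yield image sequences paired with measured rheological properties.

```python
import torch
import torch.nn as nn


def train(model: nn.Module, dataset, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Supervised training: compare predictions against known rheological
    properties, propagate the error back and adjust the weights."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for image_sequence, measured_property in dataset:  # paired input/output
            optimizer.zero_grad()
            prediction = model(image_sequence)
            loss = loss_fn(prediction, measured_property)
            loss.backward()   # propagate errors back through the network
            optimizer.step()  # adjust the weights
    return model
```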