OBJECT IMAGING WITHIN STRUCTURES

20240134041 · 2024-04-25

Abstract

A method and system of imaging at least one passive object within a surrounding structure is provided. The surrounding structure has multiple surfaces. The method includes: transmitting an ultrasonic signal into the surrounding structure using an array of ultrasonic transmitters and receiving reflections from the passive object using an array of ultrasonic receivers. The method also includes steering the ultrasonic signal such that it includes at least one reflection off a surrounding structure surface using stored data relating to a position of at least one of the surfaces.

Claims

1. A method of imaging at least one passive object within a surrounding structure having a plurality of surfaces, the method comprising: transmitting an ultrasonic signal into the surrounding structure using an array of ultrasonic transmitters; receiving reflections from the passive object using an array of ultrasonic receivers; steering the ultrasonic signal such that it includes at least one reflection off a surrounding structure surface using stored data relating to a position of at least one of said surfaces.

2-3. (canceled)

4. The method of claim 1, comprising using the ultrasonic transmitter array and/or the ultrasonic receiver array to estimate position(s) of the surrounding structure surface(s) prior to the steering.

5. The method of claim 1, comprising updating the surrounding structure surface information during imaging or between episodes of imaging.

6. (canceled)

7. The method of claim 1, comprising simulating, for one or more reflections of the ultrasonic signal from the array, an estimated received signal for the passive object and comparing the reflections which comprise an actual received signal against the estimated received signal.

8. The method of claim 7, comprising basing the estimated received signal on a simulated image from past characteristics of the surrounding structure, a past image of the passive object in the surrounding structure, or a preliminary image of the passive object.

9. The method of claim 7, comprising determining the accuracy of the estimated signal by comparing the estimated received signal with the actual received signal.

10. The method of claim 7, comprising performing a gradient search to compare the estimated received signal with the actual received signal.

11. The method of claim 1, comprising steering the transmitted signal based on characteristics of the passive object.

12. (canceled)

13. The method of claim 1, comprising imaging using a single steered ultrasonic signal.

14. The method of claim 1, comprising using the array to transmit multiple ultrasonic signals in different directions.

15. The method of claim 14, comprising transmitting the multiple ultrasonic signals simultaneously.

16. The method of claim 1, comprising modifying the shape of the transmitted ultrasonic signal to match that of the passive object by focussing the energy of the beam predominantly onto the passive object.

17-18. (canceled)

19. The method of claim 1, comprising creating a visual representation of the passive object.

20. (canceled)

21. The method of claim 1, comprising using compressed sensing and/or sparsity methods.

22. The method of claim 1, comprising calculating a Doppler shift of the ultrasonic signal and using said Doppler shift for said imaging.

23. A system arranged to image at least one passive object within a surrounding structure having a plurality of surfaces, the system comprising: an array of ultrasonic transmitters arranged to transmit an ultrasonic signal into the surrounding structure; and an array of ultrasonic receivers arranged to receive reflections from the passive object; wherein the system is arranged to steer the ultrasonic signal using stored data relating to a position of at least one of said surfaces such that the ultrasonic signal includes at least one reflection off a surrounding structure surface.

24. The system of claim 23, having a single array comprising separate transmitters and receivers therein.

25-26. (canceled)

27. The system of claim 23, wherein the receiver array comprises Micro-Electro-Mechanical System microphones.

28. (canceled)

29. The system of claim 23, wherein the receiver array is a microphone array having a peak response in the audible frequency range; and the transmitter array has a spacing between the transmitters equivalent to a half-wavelength of a sound wave in the ultrasonic frequency range.

30. A device for imaging at least one passive object, the device comprising: an array of ultrasonic transmitters, arranged to transmit an ultrasonic signal, wherein a pair of adjacent transmitters of said array has a spacing equivalent to a half-wavelength of a sound wave in the ultrasonic frequency range; an array of microphones arranged to receive reflections from the passive object, wherein the microphones have a peak response in the audible frequency range; wherein the device is arranged to determine an image of said object using said reflections.

Description

[0071] Certain embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:

[0072] FIG. 1 is a block diagram of an ultrasound system for transmitting and receiving ultrasonic signals;

[0073] FIG. 2 is a view of a rectangular array of PMUTs for use in the system of FIG. 1;

[0074] FIG. 3 is a schematic diagram of imaging an object in a room using direct reflections from the object and reflections from the walls;

[0075] FIG. 4 is a simplified diagram of imaging a single reflector with a single transmitter and a single receiver;

[0076] FIG. 5 is a simplified diagram of imaging the reflector with the transmitter and receiver of FIG. 4 with two reflective paths;

[0077] FIG. 6 is a simplified diagram of imaging the reflector with the transmitter and receiver of FIG. 4 with multiple reflective paths;

[0078] FIG. 7 is a simplified diagram of imaging a reflector with multiple transmitters and multiple receivers;

[0079] FIG. 8 is a simplified diagram of imaging multiple reflectors with multiple transmitters and multiple receivers;

[0080] FIG. 9 is a schematic diagram of an ultrasound imaging system used for imaging a complex object in a room using beam steering to image the near field;

[0081] FIG. 10 is a schematic diagram of imaging a complex object in a room using indirect reflections from the walls;

[0082] FIG. 11 is a schematic diagram of an ultrasound imaging system used for imaging a complex object in a room using beam steering to image the near field where there is lots of empty space;

[0083] FIG. 12 is a schematic diagram of an ultrasound imaging system used for imaging a complex object in a room using beam steering to image a larger near field where there is empty space and reflections;

[0084] FIG. 13 is a schematic diagram of imaging an occluded object in a room;

[0085] FIG. 14 is a schematic diagram of imaging an occluded object in a room where a direct path is blocked;

[0086] FIG. 15 is a schematic diagram of imaging an occluded object in a room using indirect reflections;

[0087] FIG. 16 is a flowchart illustrating a method of imaging an object in a room, as shown in FIGS. 4-6;

[0088] FIG. 17 is a flowchart illustrating a modified method of imaging an object in a room, as shown in FIGS. 4-6;

[0089] FIG. 18 is a flowchart illustrating a method of optimising the sharpness of an image;

[0090] FIG. 19 shows a conference room where an array of ultrasonic transducers and microphones are used to obtain sound from a specific person in the room;

[0091] FIG. 20 shows a living room where an array of ultrasonic transducers and speakers are used to direct sound towards a specific person in the room;

[0092] FIG. 21 shows an ultrasonic array used to image refuse in a container;

[0093] FIG. 22 shows a café where an array of ultrasonic transducers are used to determine the location of people in the café;

[0094] FIG. 23 shows a robot gripper arm where an ultrasonic array is used for detection of an object and the shape of the enclosure; and

[0095] FIG. 24 shows an embodiment of the invention where an array of ultrasonic transmitters is retrofitted to a device having a built-in microphone array.

[0096] FIG. 1 shows a highly simplified schematic block diagram of the typical components of an ultrasound imaging system 2 for transmitting and receiving ultrasonic signals used for imaging a passive object in accordance with the invention described herein.

[0097] The imaging system 2 comprises an ultrasonic array 4. The ultrasonic array 4 comprises a plurality of piezoelectric micromachined ultrasonic transducers (PMUTs) 6; the array 4 is shown in further detail in FIG. 2. The system 2 includes a CPU 8 having a memory 10 and a battery 12 which will typically power all components of the system.

[0098] The imaging system 2 may, for example, be affixed to a wall of a room, and the ultrasonic array 4 configured to transmit an ultrasonic signal into the room using the PMUTs 6. As will be explained in further detail below, the ultrasonic array 4 will receive reflections from any objects in the room. The ultrasonic array 4 may then steer the ultrasonic beam to ensure the reflections include at least one reflection off a wall of the room, when the locations of the walls are known.

[0099] FIG. 2 shows a rectangular array 4 of PMUTs 6. Each PMUT 6 comprises a square silicon die 14 onto which an ultrasonic transmitter 16 and an ultrasonic receiver 18 are formed.

[0100] The transmitter 16 is circular and located in the centre of the die. The receiver 18 is much smaller than the transmitter 16 and is located in the unused space in each corner of the die. Other numbers of receivers may be provided; they could be located elsewhere or more than one could be located in each corner. The transmitter could be differently shaped or located and/or multiple transmitters could be provided.

[0101] The individual dies 14 are tessellated together in a mutually abutting relationship on a common substrate (not shown) to form the array. The dies 14 are half a wavelength wide, such that the centre-centre spacings 20 of the transmitters 16 in both the X and Y directions are also half a wavelength. The receivers 18 in the respective corners of adjacent dies form respective 2×2 mini arrays 22. These mini arrays 22 are also separated by half a wavelength.

[0102] Although only six dies 14 are shown in FIG. 2, in exemplary embodiments, there may be many dies 14 in one or both dimensions of the array 4.

[0103] In operation, the ultrasonic array 4 emits a steered ultrasonic beam. Determined phase adjustments are applied to the signals from respective transmitters 16 or receivers 18 to allow them to act as a coherent array, e.g. for beamforming. Beam steering may be used on the transmitted ultrasonic signal, the reflected ultrasonic signal, or both. In order to steer the transmitted ultrasonic signal, the determined phase adjustments are added to the signal transmitted by each transmitter 16 in the array 4 such that the resultant transmitted ultrasonic signals undergo interference, resulting in an overall signal which is transmitted in a desired direction. The received, reflected ultrasonic signal may be steered in a similar way. Determined phase adjustments may be applied to the received signals from all directions to determine the reflected signal from a single direction in the surrounding structure.
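The phase adjustments described above can be sketched for a uniform linear array in air; the element count, carrier frequency and steering angle below are illustrative assumptions, not values from the application.

```python
import numpy as np

def steering_delays(num_elements, pitch_m, angle_deg, c=343.0):
    """Per-element firing delays (seconds) that steer a uniform linear
    array towards angle_deg (0 = broadside), for speed of sound c."""
    positions = np.arange(num_elements) * pitch_m
    return positions * np.sin(np.radians(angle_deg)) / c

def steering_phases(num_elements, pitch_m, angle_deg, freq_hz, c=343.0):
    """The equivalent per-element phase adjustments (radians) at a
    single carrier frequency."""
    return 2.0 * np.pi * freq_hz * steering_delays(
        num_elements, pitch_m, angle_deg, c)

# Example: 8 transmitters at half-wavelength pitch, 40 kHz carrier,
# steering 30 degrees off broadside.
wavelength = 343.0 / 40_000.0
phases = steering_phases(8, wavelength / 2.0, 30.0, 40_000.0)
# Adjacent elements then differ in phase by pi*sin(30 deg) = pi/2.
```

The same phase profile, applied with opposite sign to the received channels before summation, implements receive-side steering.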

[0104] Most standard beamforming algorithms benefit from half-wavelength spacing of the ultrasonic elements 16, 18, as this enables each incoming wave front to be discernible from other incoming wave fronts with a different angle or wavenumber, in turn preventing the problem of grating lobes. Classical beamforming methods that benefit from half-wavelength (or tighter) spacing include (weighted) delay-and-sum beamformers, adaptive beamformers such as MVDR/Capon, direction-finding methods like MUSIC and ESPRIT, and blind source separation approaches like DUET, as well as wireless communication methods and ultrasonic imaging methods with additional constraints such as entropy or information maximisation.
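The half-wavelength criterion fixes the physical pitch once the carrier frequency is chosen. A quick check, assuming sound in air at 343 m/s and a hypothetical 40 kHz carrier:

```python
def half_wavelength_pitch(freq_hz, c=343.0):
    """Element pitch (metres) meeting the half-wavelength criterion,
    which keeps grating lobes out of the visible region for any
    steering angle."""
    return c / (2.0 * freq_hz)

pitch = half_wavelength_pitch(40_000.0)   # ~4.3 mm at 40 kHz in air
```

Any tighter pitch also satisfies the criterion; wider pitches admit grating lobes at large steering angles.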

[0105] FIG. 3 is a schematic diagram of imaging an object 24 in a room 26 using direct reflections 30 from the object 24 and indirect reflections 34 which travel via the walls 28. An ultrasound imaging system 2 comprising an ultrasonic array 4 of transmitters and receivers as described above with reference to FIG. 2 is affixed to a wall 28 of the room 26.

[0106] The locations of the walls 28 may be determined using LIDAR scanning, or a CAD drawing of the room which is input to a CPU. Alternatively, the array 4 is used to determine the locations of the walls 28 when the room 26 is empty. The ultrasonic transmitters 16 in the array 4 emit ultrasonic signals which are reflected by the walls 28 of the room 26. These reflected signals are received by the receivers 18 in the array. The CPU then processes the data relating to the transmitted and reflected signals to determine the locations of the walls 28 which the signals were reflected from.

[0107] Once the locations of the walls 28 have been determined, the imaging system 2 is used to image the object 24 in the room. A first beam 30 is directed into the near field and reflects off the object 24. The reflected beam 30 is a band limited Dirac pulse 32 which is received by the receivers 18 in the array 4, and provides limited information about the portion of the object which is in the line of sight of the transmitters and receivers in the array. Other signals, such as chirps/frequency sweeps, or other coded signals could be used, combined with suitable processing post-reception, such as pulse-compression techniques.

[0108] In order to gain further information about the dimensions and location of the object 24, a second beam 34 is then directed towards a wall 28a of the room 26. This beam 34 is reflected off the first wall 28a towards the back wall 28b. The beam 34 is then reflected towards the object 24, and the beam 34 is then further reflected off the object 24 back to the array 4. As with the first beam 30, the reflected second beam 34 is a band limited Dirac pulse 36 which is also received by the receivers 18 in the array 4.

[0109] As shown in the time-domain signal traces on the right of FIG. 3, the first reflected pulse 32 is received earlier than the second reflected pulse 36 as the first beam 30 travels a shorter distance than the second beam 34. In order to determine the location of the object 24 in the room 26, the received signals 32, 36 are processed by the CPU 8 which then uses this information, along with the known dimensions of the room 26 to determine the location of the object 24.
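The time-of-flight arithmetic implied here is straightforward; a sketch with assumed round-trip times and speed of sound (the 12 ms and 20 ms values are illustrative, not from the application):

```python
# Assumed speed of sound in air; any consistent value works for the sketch.
SPEED_OF_SOUND = 343.0  # m/s

def path_length(round_trip_s, c=SPEED_OF_SOUND):
    """Total acoustic path length implied by a pulse's round-trip time."""
    return round_trip_s * c

# Direct pulse 32: array -> object -> array, so the object range
# is half the total path length.
direct_path = path_length(0.012)       # e.g. pulse received after 12 ms
object_range = direct_path / 2.0       # about 2.06 m

# The later pulse 36 travelled via two walls; with the wall positions
# known, its longer path constrains the object position further.
indirect_path = path_length(0.020)     # e.g. pulse received after 20 ms
```

The direct path alone only fixes a range arc; intersecting it with the constraint from the known wall geometry and the indirect path narrows the object's position.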

[0110] The calculations below provide further detail on the processing performed by the CPU 8 on the received signals 32, 36 in order to determine the location of the object 24.

[0111] Firstly, consider the hypothetical and simplified scenario where there is a single reflector 74, a transmitter 70 and a receiver 72, as shown in FIG. 4. Then, assuming a bandlimited Dirac pulse δ(t) transmitted from the transmitter 70, the receive signal is


$$y(t) = \alpha \cdot l_1 \cdot \delta(t - \tau_1)$$

[0112] Where α represents the reflective strength of the target at the specified grid position, l_1 is the path loss (the longer the path, the larger the loss), and δ(t − τ_1) is the originally transmitted Dirac pulse, time-delayed by the delay factor τ_1. The path loss l_1 can be explicitly computed based on the wave propagation model, i.e. for a spherical wave in 3D it will typically be 1 divided by the travel distance squared.

[0113] The received samples y(t) can be put into a vector of length L (i.e. containing L samples) according to the equation:

[00001] $$\mathbf{y}(t) = \begin{bmatrix} y(t) \\ y(t+1) \\ y(t+2) \\ \vdots \\ y(t+L-1) \end{bmatrix}$$

assuming the signals have been sampled at the receiver from points t to t+L−1.

[0114] Multiple reflective paths are shown in FIG. 5. The receive signal therefore becomes


$$y(t) = \alpha \cdot l_1 \cdot \delta(t - \tau_1) + \alpha \cdot l_2 \cdot \delta(t - \tau_2)$$

where there are now two different path losses, l_1 and l_2.

[0115] More generally, there may be several different echoic paths, as illustrated by FIG. 6. The receive signal y(t) therefore becomes

[00002] $$y(t) = \sum_{r=1}^{Q} \alpha \cdot l_r \cdot \delta(t - \tau_r) = \alpha \sum_{r=1}^{Q} l_r \cdot \delta(t - \tau_r)$$

which may also be represented as

[00003] $$y(t) = \alpha \sum_{r \in S} l_r \cdot \delta(t - \tau_r)$$

[0116] S is a set of path index integers, e.g. S = {1, 2, 3, 4, 5, . . . }, representing the varying echoic path indexes, sorted in order of path length. S is referred to as the echoic index set.

[0117] Next, if there are several transmitters 70 and receivers 72, as shown in FIG. 7, subscripts are introduced on the receive signal y, to make sure the ij'th transmitter/receiver pair 70/72 is represented. The time delays τ, the path losses l and the echoic index sets will also be indexed accordingly, as they too depend on the physical positioning of the transmitters and receivers relative to the hypothetical reflective point. The equation therefore becomes

[00004] $$y_{ij}(t) = \alpha \sum_{r \in S_{ij}} l_{ijr} \cdot \delta(t - \tau_{ijr})$$

[0118] The number of hypothetical reflective grid points may then be increased, as shown in FIG. 8. FIG. 8 shows only 6 points for the sake of visibility, but in practice the whole grid may be included. For each transmitter/receiver pair 70, 72, this means that the echoes from each of these points are summed up to give the overall received signal, i.e.

[00005] $$y_{ij}(t) = \sum_{k=1}^{P} \alpha_k \sum_{r \in S_{ijk}} l_{ijrk} \cdot \delta(t - \tau_{ijrk})$$

[0119] α_k is the strength of the k'th hypothetical reflector, for the 1st to the P'th reflector under consideration. The path losses l_ijrk, the time delays τ_ijrk and the echoic index sets now depend on the positions of the transmitters 70, receivers 72 and reflectors 74, and on the echoic path number. This may be rewritten in matrix/vector form by defining:

[00006] $$\mathbf{y}_{ij}(t) = \begin{bmatrix} y_{ij}(t) \\ y_{ij}(t+1) \\ y_{ij}(t+2) \\ \vdots \\ y_{ij}(t+L-1) \end{bmatrix}$$

[0120] And using the definition

[00007] $$d_{ijk}(t) = \sum_{r \in S_{ijk}} l_{ijrk} \cdot \delta(t - \tau_{ijrk})$$

[0121] The matrix D.sub.ij(t) is then defined as

[00008] $$D_{ij}(t) = \begin{bmatrix} d_{ij1}(t) & d_{ij2}(t) & \cdots & d_{ijP}(t) \\ d_{ij1}(t+1) & d_{ij2}(t+1) & \cdots & d_{ijP}(t+1) \\ \vdots & \vdots & & \vdots \\ d_{ij1}(t+L-1) & d_{ij2}(t+L-1) & \cdots & d_{ijP}(t+L-1) \end{bmatrix}$$

[0122] Where L is a suitable window length for the number of samples in the vector y_ij(t), and also the number of rows in D_ij(t). This therefore gives the set of equations


$$\mathbf{y}_{ij}(t) = D_{ij}(t)\,\boldsymbol{\alpha}$$

[0123] Where i = 1, …, N and j = 1, …, Q, where N is the number of transmitters 70, and Q the number of receivers 72. Multiple transmit-receive pairs 70, 72 can be used to better estimate the vector of reflective coefficients α, by stacking these equations and removing the time dependence temporarily for notational convenience:

[00009] $$\begin{bmatrix} \mathbf{y}_{11} \\ \vdots \\ \mathbf{y}_{NQ} \end{bmatrix} = \begin{bmatrix} D_{11} \\ \vdots \\ D_{NQ} \end{bmatrix} \boldsymbol{\alpha}$$

[0124] Or more generally, y = Dα, or, if additive noise is to be incorporated, y = Dα + n, where n is a vector of additive noise. It should be clear from the above that the more echoic paths there are, the taller each subblock D_ij becomes, and therefore the better conditioned the equation system becomes. In other terms, the echoic multipath situation helps improve the solvability of the equation and, in the presence of noise, improves the SNR. This equation set can be solved in any number of suitable ways, including least squares, weighted least squares, and various techniques incorporating knowledge of the noise characteristics, such as its spatio-temporal distributions.
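The stacked system y = Dα + n can be solved by ordinary least squares. A minimal numpy sketch with a synthetic dictionary; the matrix sizes, reflector positions and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stacked system: 120 sample rows (over all transmitter/
# receiver pairs and time samples), 8 hypothetical grid points.
rows, P = 120, 8
D = rng.standard_normal((rows, P))

# Ground truth: two reflective grid points, the rest empty.
alpha_true = np.zeros(P)
alpha_true[1], alpha_true[5] = 0.9, 0.4
y = D @ alpha_true + 0.001 * rng.standard_normal(rows)   # y = D a + n

# Least-squares estimate of the reflective coefficients.
alpha_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
```

With more rows than columns and low noise, the estimate recovers the reflectivities closely; the extra rows contributed by additional echoic paths play exactly the conditioning role described above.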

[0125] FIG. 9 is a schematic diagram showing use of the ultrasound imaging system 2 for imaging a complex-shaped object 38 in the room 26 using beam steering to image the near field 42.

[0126] The array 4 emits a steered ultrasonic beam 40 which is focused in the near field 42. The beam 40 may also be steered in post-processing of the reflected signal to obtain a steered received signal. The beam 40 is reflected from the front of the complex object 38 back towards the array 4 where the reflected beam is received. However, this only provides information about the side of the object which is close to, and facing the array 4.

[0127] Once sufficient data has been gathered using direct reflections from the object 38, in order to image the remainder of the object, the array 4 steers the ultrasonic beam towards the walls 28 of the room 26, away from the shortest path as shown in FIG. 10.

[0128] FIG. 10 is a schematic diagram of imaging the complex object 38 in the room 26 using indirect reflections from the walls 28. The beam 44 is directed towards a wall 28. The beam 44 is reflected from the wall 28 towards the object 38. This, in effect, means the wall 28 acts as an ultrasonic emitter, directing the beam 44 towards the object 38 to be imaged.

[0129] The beam 44 will reflect from the object 38 along a different path (not shown) towards the wall 28, and from there back to the ultrasonic array 4. The time delay in this beam being reflected back to the array 4, along with the predetermined locations of the walls, is used by the CPU to gain further information about the size, shape and location of the object 38.

[0130] In open acoustic scenes, such as that of FIG. 11, which shows the same object 38 being imaged as in FIGS. 9 and 10, there is typically a lot of empty space in the scene, i.e. positions that cause no reflection, and for which the corresponding reflective coefficient α_k in the vector α is naturally zero. This is in contrast with medical ultrasound imaging, where reflections will be obtained from multiple layers within the body. For a dense sampling grid, there are typically many more columns in the matrix D, defined previously, than there are rows, and so it is always possible to find some solution to a problem such as


$$\min_{\boldsymbol{\alpha}} \; \| \mathbf{y} - D\boldsymbol{\alpha} \|_2^2$$

[0131] This is one way of solving the above-mentioned problem of an open acoustic scene. However, given the dimensions of D (assuming it is made up from a tightly spaced grid of hypothetical reflectors), there will typically also be infinitely many such solutions α, and so it makes sense to try to pin down the most physically likely of those. One approach for this is the compressive sensing approach, where one instead tries to solve


$$\min_{\boldsymbol{\alpha}} \; \| \boldsymbol{\alpha} \|_1 \quad \text{subject to} \quad \mathbf{y} = D\boldsymbol{\alpha} \qquad \text{(Eq. A)}$$

i.e. to find the solution to the problem that has the smallest L1-norm. This is frequently a good approximation to the best L0-norm solution, which is the solution with the fewest non-zero coefficients. A high number of zeros reflects the underlying hypothesis that the scene is largely full of zeros, or non-reflective points, i.e. empty space.
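Eq. A is commonly approximated by its penalised form, min ½‖y − Dα‖² + λ‖α‖₁, which can be attacked with iterative soft-thresholding (ISTA). The numpy sketch below illustrates the sparsity idea on a synthetic open scene; it is one standard solver, not the application's own, and all sizes and reflector positions are assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.01, n_iter=1000):
    """Minimise 0.5*||y - D a||^2 + lam*||a||_1 by iterative
    soft-thresholding, promoting the mostly-empty scenes expected
    in open acoustic environments."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # inverse Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * D.T @ (y - D @ a), step * lam)
    return a

# Synthetic open scene: 200 grid points, only three of them reflective,
# observed through far fewer samples than grid points.
rng = np.random.default_rng(1)
D = rng.standard_normal((60, 200))
D /= np.linalg.norm(D, axis=0)           # unit-norm columns
alpha_true = np.zeros(200)
alpha_true[[10, 50, 120]] = [1.0, -0.7, 0.5]
y = D @ alpha_true

alpha_hat = ista(D, y)
```

Despite the underdetermined system (60 equations, 200 unknowns), the L1 penalty concentrates the estimate on the few genuinely reflective grid points.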

[0132] More generally, the dimensions of the equation system can be such that the number of coefficients in α representing the entire acoustic scene is in the hundreds of thousands or more, so any dimension reduction will drastically save compute time and complexity. To this end, if some of the coefficients in α can be known or computed before others, using means simpler than a general, large inversion, both time/CPU resource consumption and accuracy can be improved. The equation may be subdivided into


$$\mathbf{y} = D\boldsymbol{\alpha} = D_u\boldsymbol{\alpha}_u + D_k\boldsymbol{\alpha}_k$$

[0133] Where D_u α_u is the part governing the unknown coefficients of α (here the u subscript denotes unknown), and D_k α_k governs the known coefficients of α (the k subscript denoting known). Then a new equation system can be obtained:


$$\mathbf{y} - D_k\boldsymbol{\alpha}_k = D_u\boldsymbol{\alpha}_u$$

[0134] Which can be solved for α_u, which will have fewer dimensions than the original problem, where all of α was to be estimated. Approaches to (easily) obtaining some coefficients include the following. First, in FIG. 11, a pulse is transmitted from the transmitter 16 and received by the receiver 18 in the array 4. If the samples in the first sampling period of K samples (illustrated by the cut-off circle 42) are all zero or close to zero, then clearly all the coefficients representing possible reflectors within this boundary must be zero, as illustrated. This provides an initial set of known coefficients α_k which can be used to reduce the dimensions. The exact number of zeros in this example is 22.
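The substitution y − D_k α_k = D_u α_u can be sketched with a toy system; the sizes and which grid points are "known empty" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
rows, P = 40, 12
D = rng.standard_normal((rows, P))

# True scene: the first five grid points are empty (coefficient zero).
alpha = rng.standard_normal(P)
alpha[:5] = 0.0
y = D @ alpha

# Early all-zero samples have revealed those five points are empty, so
# their coefficients are known and move to the right-hand side.
known, unknown = np.arange(5), np.arange(5, P)
alpha_k = np.zeros(len(known))

# Reduced system  y - D_k a_k = D_u a_u : only 7 unknowns instead of 12.
rhs = y - D[:, known] @ alpha_k
alpha_u, *_ = np.linalg.lstsq(D[:, unknown], rhs, rcond=None)
```

The reduced solve returns exactly the remaining coefficients, at lower cost than inverting the full system.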

[0135] Next, referring to FIG. 12, the receive sampling window 42 is stretched a little further, and the first few incoming echoes occur within this window, i.e. non-zero samples. Since the location of the reflector 38 is not yet known, there is at this point an arc of potential locations for the reflector, indicated by x's. Note that there are still only a modest number of unknowns in this equation system (29 x's), at least compared with all the samples being included, i.e. relaxing the limit on the sampling period. Note that this cut-off can happen in the time domain, by ignoring later-arriving samples; in the impulse response domain, where the impulse response is estimated using a suitable coded output signal followed by pulse (de-)compression at the receive side; or in any other suitable domain.

[0136] Now, utilizing the previously known coefficients α_k, a new equation system can be created with 29 unknowns, and those can be estimated. Then, there are 29+22=51 known/estimated coefficients that can be utilized as more samples are obtained in the cut-off approach. Overall, a sequence of estimators is driven, each with lower dimensions than the full imaging problem, to gradually create a full image of the scene. Any estimation step can utilize any of the aforementioned techniques, including compressed sensing, to obtain physically plausible estimates of the acoustic scene.

[0137] Of course, it is not essential to use Eq. A above. Any other suitable method utilizing the sparsity of the scene could be employed, using norms other than L1/L0, or other measures of sparsity, such as information-theoretic approaches optimizing properties like the distribution of coefficients, e.g. super-Gaussian distribution properties. Bayesian approaches, such as Bayesian Sparse Regression, could also be employed; see e.g. https://arxiv.org/abs/1403.0735.

[0138] The direct path reflections shown in FIGS. 9, 11 and 12 result in the portion of the room 26 which is behind the object 38 relative to the array 4 being occluded. FIG. 13 is a schematic diagram of imaging an occluded object 46 in the room 26. As is clear from FIG. 13, due to the object 38 between the array 4 and the occluded object 46, the beam 48 which would be direct from the array 4 to the occluded object 46 cannot be used to image the occluded object 46, as it would instead be reflected from the first object 38.

[0139] Therefore, in order to image the occluded object, an ultrasonic beam 50 is directed towards a wall 28, the location of which is known. The beam 50 is reflected off the wall 28 directly towards the occluded object 46, without being reflected off the first object 38. The beam 50 will therefore be reflected from the occluded object 46 back to the wall 28 along a different path (not shown), and to the array 4, where the received echoes are analysed by the CPU 8 to image the objects 38, 46. The indirect ultrasonic reflections therefore allow for imaging of objects in the room which are occluded from line of sight imaging from the array by other objects in the room.

[0140] The calculations below provide further modifications on the processing described above which is performed by the CPU 8 on the received signals 40, 44, 50 in order to determine the location of the objects 38, 46. These modified calculations remove the occluded paths 50 from the data set in order to reduce the computational load on the CPU 8.

[0141] The general model, y = Dα, does not incorporate effects such as occlusions; it simply assumes that sound propagates unhindered through all the reflective voxels. Referring back to the equation

[00010] $$y_{ij}(t) = \alpha \sum_{r \in S_{ij}} l_{ijr} \cdot \delta(t - \tau_{ijr})$$

this problem can be managed by using knowledge of the first pixels/reflectors to effectively rule out potential echoic paths in the set S_ij. Referring now to FIG. 14, the potentially reflective point 46 has an acoustic two-way path 48 which is effectively blocked by the front-lying reflectors in the cluster 38. Hence, this echoic path is now removed from every relevant set S_ij for each relevant transmitter/receiver pair in the array 4 (i.e. for those pairs for which the path is blocked). This approach mitigates the problem of occlusions and the potential errors in the coefficients of α that would follow from not dealing with them, and it also reduces the overall computational load by reducing the number of echoic paths represented as columns in D.
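A crude geometric test for ruling out a blocked path might look as follows, modelling the front-lying cluster as a circle; the helper name, coordinates and radius are assumptions for illustration only:

```python
import numpy as np

def path_blocked(src, dst, occluder_centre, occluder_radius):
    """True if the straight segment src -> dst passes within
    occluder_radius of occluder_centre (a crude stand-in for the
    front-lying reflector cluster)."""
    src, dst, c = (np.asarray(p, dtype=float)
                   for p in (src, dst, occluder_centre))
    d = dst - src
    t = np.clip(np.dot(c - src, d) / np.dot(d, d), 0.0, 1.0)
    closest = src + t * d                  # nearest point on the segment
    return bool(np.linalg.norm(c - closest) < occluder_radius)

array_pos, hidden_obj = (0.0, 0.0), (4.0, 0.0)
cluster, radius = (2.0, 0.0), 0.5          # occluding cluster midway
wall_point = (2.0, 3.0)                    # reflection point on a side wall

# The direct two-way path is blocked by the cluster...
direct_blocked = path_blocked(array_pos, hidden_obj, cluster, radius)

# ...but the indirect path via the wall clears it on both legs,
# so it can stay in the echoic index set.
via_wall_blocked = (path_blocked(array_pos, wall_point, cluster, radius)
                    or path_blocked(wall_point, hidden_obj, cluster, radius))
```

Paths flagged as blocked would simply have their entries dropped from S_ij, i.e. their columns removed from D before inversion.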

[0142] Finally, knowledge of previously (sequentially) estimated reflectors can be used to steer the acoustic beam in certain directions and away from others. In FIG. 15, the transmit and/or receive array 4 is configured to focus sound in the beam pattern 42 and in the direction 49. This sets the system up for imaging the hidden object 46. However, as sound returns from this emission, it will be observed that, before any echo from the object 46 arrives, many of the samples are 0 or close to 0, meaning that the coefficients in the sector can be set to 0. This is illustrated by the 0 elements in FIG. 15. Again, knowledge of the acoustic scene is increased using beam-steering techniques, reducing computational complexity and reducing errors.

[0143] FIG. 16 is a flowchart illustrating a method of imaging an object 38, 46 in the room 26, as shown in FIGS. 9-13. At step 52, the near field to the array 4 is imaged using beam forming. A beam 40 is directed towards the object 38 to be imaged, e.g. as shown in FIG. 9. In order to improve the nearfield imaging, at step 54, a beam 44 is steered away from the shortest path and towards a wall 28. The beam 44 is then reflected from the wall 28, such that the wall acts as a transmitter emitting the beam 44 towards the object 38 to be imaged.

[0144] The reflected beams 40, 44, which may be described as band limited Dirac pulses, are input into the equations described above, and the equation y = Dα is inverted to determine α, which describes the reflectivity at all grid points and can therefore be used to provide an image of the object 38.

[0145] At step 56, this inverse equation is modified to remove blocked paths, such as path 48 shown in FIG. 13. This reduces the computational load, as the number of calculations which must be carried out by the CPU 8 is reduced. At step 58, the modified inverse equation is solved, therefore obtaining images of any objects 38 in the room 26, as well as any occluded objects 46.

[0146] FIG. 17 is a flowchart illustrating a modified method of imaging an object 38, 46 in the room 26, as shown in FIGS. 9-13. Steps 60 and 62 describe the same method as steps 52 and 54 in FIG. 16, where the near field to the array 4 is imaged using beam forming, and then, in order to improve the nearfield imaging, a beam 44 is steered away from the shortest path and towards a wall 28. The beam 44 is then reflected from the wall 28, such that the wall acts as a transmitter emitting the beam 44 towards the object 38 to be imaged.

[0147] At step 64, the equation y=D? is solved for the nearfield reflected beams 40, 44. This gives information relating to the location of the object 38, and the beam steering is therefore modified in order to further image the object 38. Through an iterative procedure of steering a beam, receiving the reflected signal, determining information about the object 38, and modifying the direction of the beam, extensive information about the object 38 location and shape may be obtained.

[0148] As with the method described in FIG. 16, the inverse equation is then modified to remove blocked paths, such as path 48 shown in FIG. 13. This reduces the computational load, as the number of calculations which must be carried out by the CPU 8 is reduced. At step 68, the modified inverse equation is solved, thereby obtaining detailed images of any objects 38 in the room 26 through the iterative method of steps 62 and 64, as well as any occluded objects 46.

[0149] Referring back to FIG. 10, it is clear that the length of the path 44 could be incorrectly computed, perhaps as a result of a mis-estimation of the exact position or angle of the wall 28. Going forward with computing the reflective coefficients in the area 38 may then give a wrong result. In practice, the typical result will be a smearing-out of the image, because a new reflective coefficient will likely have to be given a positive value to account for the observed reflection via the path 44. This means that the sharpness of the overall image can be used as a criterion with which to optimize the estimated positions of the enclosure or, alternatively, to re-compute the correct acoustic path length, which could be affected by things like turbulence. Measures such as image sharpness (see https://ieeexplore.ieee.org/document/6783859), or the ratio between low reflectivity values (close to 0) and high reflectivity values, can be used to compute such sharpness. This enclosure-updating approach can be particularly useful when the enclosure is known to change, such as for a robot gripper arm, as will be described below with reference to FIG. 23.

[0150] Referring to FIG. 18 now, at step 61 an initial image is computed using an initial set of parameters derived from the current assumption of the location of the surrounding structure (walls, ceilings, floors, objects etc.), using the calculated times-of-flight for both direct reflections and indirect reflections. At step 63 the image sharpness is computed, and at step 65 a new set of enclosure parameters is generated. This could be done randomly, as a perturbation to the current parameter set, or, as the algorithm proceeds in iterations, it could be based on previous guesses of the parameters and their associated image sharpness scores. In step 67 the new image is computed and its sharpness assessed. In step 69, the sharpness score is matched against a criterion. This could be an absolute criterion, e.g. a fixed threshold as to what is determined to be good enough (or not), or it could be a dynamic one which is computed or set based on how well previous estimates scored, i.e. a local optimum criterion. In step 71, once the threshold has been met, the program exits and returns both the optimized image and the updated enclosure parameters.
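The FIG. 18 loop can be illustrated with a toy sketch. Here compute_image and the sharpness score are hypothetical stand-ins (a real implementation would re-run the acoustic imaging with each candidate enclosure parameter set); only the propose/score/accept structure is the point:

```python
import numpy as np

rng = np.random.default_rng(6)

def compute_image(wall_pos):
    """Toy stand-in for re-imaging with candidate enclosure parameters: the
    image gets smeared out as the wall-position error grows (true wall at 3.0)."""
    blur = abs(wall_pos - 3.0)
    img = np.exp(-np.arange(64) / (1.0 + 10.0 * blur))
    return img / img.sum()

def sharpness(img):
    """Simple sharpness score: fraction of near-zero pixels (cf. the ratio of
    low reflectivity values to high ones discussed above)."""
    return (img < 0.01).mean()

params = 2.0                                  # initial wall-position guess
best = (sharpness(compute_image(params)), params)
for _ in range(200):                          # steps 63-69: perturb, score, accept
    cand = best[1] + rng.normal(scale=0.2)    # random perturbation of the parameters
    score = sharpness(compute_image(cand))
    if score > best[0]:                       # keep the sharper image's parameters
        best = (score, cand)
```

With a smooth sharpness landscape, the accepted parameters drift towards the true wall position; the exit criterion of step 69 would replace the fixed iteration count used here.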

[0151] FIG. 19 shows an array of ultrasonic transducers 75 and microphones 76 used to obtain sound from a specific person 78 in the room 80. Given the position of the target person 78 in the room, p=[x, y, z], and the positions x.sub.1, x.sub.2, x.sub.3, . . . x.sub.N of all the microphones used to capture audible sound, the expected time of flight between the target 78 at position p and each of the microphones in the array 76 can be computed, via the formula s=v*t (distance equals speed times time), as

[00011] τ.sub.i=(1/c)|x.sub.i−p|

where c (or v) is the speed of sound.
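A minimal numerical sketch of this computation (the positions and the speed of sound are hypothetical illustration values):

```python
import numpy as np

c = 343.0                              # speed of sound in air (m/s)
p = np.array([2.0, 1.5, 1.2])          # assumed target position [x, y, z] (m)
mics = np.array([[0.0, 0.0, 1.0],      # assumed microphone positions x_1 ... x_N
                 [4.0, 0.0, 1.0],
                 [2.0, 3.0, 2.4]])

# tau_i = (1/c)|x_i - p|: expected time of flight to each microphone
tau = np.linalg.norm(mics - p, axis=1) / c
```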

[0152] The microphones 76 can be placed anywhere in the room. The location of microphones 76 can be computed using any suitable means. The ultrasonic array 75 may be used to determine the position of the speaker 78, and/or microphones 76 using ultrasound.

[0153] Assuming the target person 78 is the only active audio source in the room, the received signals y.sub.1(t), y.sub.2(t), y.sub.3(t), . . . y.sub.N(t) can be expressed as


y.sub.i(t)=s(t−τ.sub.i)+n(t)

[0154] Where s(t) is the spoken word, i.e. the sound produced by the target person, and n(t) is the sensor noise. An alternative way of expressing this is


y.sub.i(t)=s(t)*δ(t−τ.sub.i)+n(t)

[0155] Where δ(t) is the Dirac delta function. Both equations essentially say that each microphone receives an appropriately time-delayed version of the sound output from the target person. For simplicity of explanation, no attenuation terms have been included, but they can readily be incorporated, as will be appreciated by those skilled in the art.

[0156] A straightforward way to recover the signal-of-interest s(t) is by delaying-and-summing, i.e.

[00012] ŝ(t)=Σ.sub.i=1.sup.N y.sub.i(t)*δ(t+τ.sub.i)=Σ.sub.i=1.sup.N [s(t)*δ(t−τ.sub.i)+n(t)]*δ(t+τ.sub.i)=Σ.sub.i=1.sup.N s(t)*δ(t−τ.sub.i)*δ(t+τ.sub.i)+n(t)*δ(t+τ.sub.i)=N·s(t)+Σ.sub.i=1.sup.N n(t)*δ(t+τ.sub.i)

[0157] Where the first part becomes an amplification of the source s(t) (added up N times), and the second part becomes a sum of incoherent noise components, i.e. the parts of the noise component that do not sum up constructively. The overall result is an amplification of the signal-to-noise ratio via delay-and-sum beamforming. In the frequency domain, this could be expressed as:


Y.sub.i(ω)=S(ω)D.sub.i(ω)+N(ω)

[0158] Where D.sub.i(ω) is the phase delay associated with the time delay τ.sub.i at the specific frequency ω. Note that D.sub.i(ω) has unit modulus (i.e. it only phase-delays the signal; it does not amplify or attenuate it, in accordance with the assumption explained above). In the frequency domain, the delay-and-sum recovery strategy thus becomes:

[00013] Ŝ(ω)=Σ.sub.i=1.sup.N Y.sub.i(ω)D.sub.i(ω)*

[0159] Where the effect of D.sub.i(ω)* is to cancel out the effect of D.sub.i(ω), once again giving an amplification of the signal relative to the noise. This gives rise to the term phased array, i.e. the phase information in some or all frequency bands is used constructively to recover the signal of interest. Note also the case of an interfering signal being added to the mix, i.e.


Y.sub.i(ω)=S(ω)D.sub.i(ω)+Z(ω)F.sub.i(ω)+N(ω)

[0160] If Z(ω) is the interfering signal originating at some other location q and being delayed towards each of the microphones 76 via the individual phase delay represented as F.sub.i(ω), then the same delay-and-sum strategy would also serve to reduce the effect of the interfering signal in the output result relative to the signal of interest, i.e. the strategy would use the phase knowledge to improve the signal-to-noise-and-interference ratio.
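The delay-and-sum recovery above can be checked with a small simulation. The sample rate, burst shape, integer sample delays and noise level below are arbitrary illustration choices (integer delays make δ(t±τ.sub.i) a simple array shift):

```python
import numpy as np

fs = 16000                                   # assumed sample rate (Hz)
t = np.arange(2048) / fs
s = np.sin(2 * np.pi * 440 * t) * np.exp(-((t - 0.05) ** 2) / 1e-4)  # source burst

rng = np.random.default_rng(0)
delays = [12, 25, 40, 7]                     # assumed integer sample delays tau_i
# Each microphone sees a delayed copy of s plus independent sensor noise n(t)
ys = [np.roll(s, d) + 0.5 * rng.standard_normal(s.size) for d in delays]

# Delay-and-sum: advance each channel by its own delay, then average
shat = sum(np.roll(y, -d) for y, d in zip(ys, delays)) / len(ys)

def corr(a, b):
    """Normalised correlation, used here as a simple quality measure."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

The aligned average correlates with the clean source noticeably better than any single noisy channel, i.e. the incoherent noise terms do not add up constructively.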

[0161] Other, more sophisticated techniques exist for signal source enhancement. Some take into account the positions and/or statistical acoustic properties of an interfering source, i.e. they do not simply smear it out to reduce its impact, as in the above example. Minimum Variance Distortionless Response (MVDR), or Capon, beamforming is but one example.

[0162] Moreover, if the acoustic transfer functions, or impulse responses, from each source 78 to each microphone 76 are known, better results may be obtained, because impulse responses can take into account not merely the direct path of the sound from the person 78 towards each of the microphones 76, but also any subsequent echo coming from the sound impinging on a wall 82, ceiling or other object. Letting H.sub.ij(ω) denote, in the frequency domain, the frequency response from source j, j=1, . . . Q, to microphone number i, then we have, assuming S.sub.j(ω) to be the source signal from the j'th source:

[00014] Y.sub.i(ω)=[Σ.sub.j=1.sup.Q H.sub.ij(ω)S.sub.j(ω)]+N.sub.i(ω)

[0163] This can be put into vector-matrix notation by stacking the successive microphone inputs in a vector as:


Y(ω)=H(ω)S(ω)+N(ω)

[0164] Here H(ω)={H.sub.ij(ω)}, S(ω)=[S.sub.1(ω), . . . S.sub.Q(ω)].sup.T, Y(ω)=[Y.sub.1(ω), . . . Y.sub.N(ω)].sup.T and N(ω)=[N.sub.1(ω), . . . N.sub.N(ω)].sup.T. A similar formulation exists in the time domain, where the time-domain impulse responses h.sub.ij(t) (which are convolved with the source signals s.sub.j(t)) build up a block Toeplitz matrix system.

[0165] One can now compute an estimate of the sources as:


Ŝ(ω)=H.sup.+(ω)Y(ω)

[0166] Where H.sup.+(ω) is a suitable inverse matrix of H(ω). This could be a Moore-Penrose inverse, a regularised inverse to match the noise level, such as Tikhonov regularization, or a generalized inverse utilizing knowledge of the noise characteristics, such as a Bayesian estimator. Whether used in the time or frequency domain, any of the following techniques can equally well be used:

[0167] Minimum Mean Square Error (MMSE) receiver strategies, Blind Source Separation or Independent Component Analysis, Blind Source Separation approaches utilizing statistical properties related to the signal-of-interest, Sparse methods such as Bayesian models with Gaussian Mixture Models or L1-based regularization methods such as in compressed sensing, or any other suitable technique that utilizes phase information.
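A one-frequency-bin sketch of the regularised inverse mentioned above (the matrix H, source vector and noise level are random hypothetical stand-ins; a real H(ω) would come from measured or computed room impulse responses):

```python
import numpy as np

rng = np.random.default_rng(2)
n_mics, n_src = 6, 2                       # assumed microphone and source counts
H = rng.standard_normal((n_mics, n_src)) + 1j * rng.standard_normal((n_mics, n_src))
S = rng.standard_normal(n_src) + 1j * rng.standard_normal(n_src)
noise = 0.01 * (rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics))
Y = H @ S + noise                          # Y(w) = H(w)S(w) + N(w) at one bin

# Tikhonov-regularised inverse: H+ = (H^H H + lam I)^-1 H^H
lam = 1e-4                                 # regularisation weight, assumed small
H_plus = np.linalg.solve(H.conj().T @ H + lam * np.eye(n_src), H.conj().T)
S_hat = H_plus @ Y
```

In practice lam would be matched to the measured noise level, trading noise amplification against estimation bias.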

[0168] In practice, this means that in accordance with embodiments of the invention audio capture can be improved in two important ways. First, the location of the person 78 in the room 80, i.e. the position p, can be estimated. Moreover, a statistical map of his or her range of movements and likely positions can be computed (even if he or she is not speaking) so that the audio signal processing can be optimized for this purpose. Secondly, the location of the walls 82 and ceilings can be used to compute the impulse response functions H(ω) above, which is what enables the sound to be focused using the ceilings and walls 82 and/or other reflective items. So the information captured in the ultrasound domain can usefully be employed in the audio domain.

[0169] Turning now to transmission, such as in a directed hi-fi sound reproduction system as shown in FIG. 20, it is similarly assumed that the locations of the loudspeakers 84 are known (as for the microphones above) and that the location of the target 78 is also known. Then, the time-delays used above can be used to define each output signal s.sub.j(t) to be output from loudspeaker j as:


s.sub.j(t)=s(t)*δ(t+τ.sub.j)

[0170] In which case, the signal received at the position of the target person 78 would be:

[00015] y(t)=Σ.sub.j=1.sup.N s.sub.j(t)*δ(t−τ.sub.j)=Σ.sub.j=1.sup.N s(t)*δ(t+τ.sub.j)*δ(t−τ.sub.j)=Σ.sub.j=1.sup.N s(t)*δ(t)=Σ.sub.j=1.sup.N s(t)=N·s(t)

i.e. an amplification of the signal at the focus point p where the person 78 is. If the person 78 moved to another location p′, then there would not be the same amplification, because the terms δ(t+τ.sub.j)*δ(t−τ.sub.j) would be replaced by δ(t+τ.sub.j)*δ(t−τ′.sub.j) for some delays τ′.sub.j, which would generally not combine to become δ(t).

[0171] Instead, the effect would be a smearing out of the outputs and effective lowering of the N-time amplification observed at p. A parallel argument can be made in the frequency domain, making it apparent that the system is relying on phase delays of the transmit signals to obtain the local focussing effect.

[0172] Also on the transmit side, it is possible to utilize detailed knowledge of the impulse response function to create even better focussing utilizing reflectors like walls 82 and ceiling or other large objects. For instance, if h.sub.ij(t) is the impulse response between each transmitter j and each target i, then the sound received at each target i can be jointly modelled as:

[00016] [y.sub.1(t), . . . , y.sub.Q(t)].sup.T=[H.sub.11, . . . , H.sub.1N; . . . ; H.sub.Q1, . . . , H.sub.QN][s.sub.1(t), . . . , s.sub.N(t)].sup.T

[0173] Or


y=Hs

[0174] The matrices H.sub.ij are the aforementioned Toeplitz matrices containing the impulse responses h.sub.ij(t) as their shifted rows, s.sub.j(t) is the sampled vector of samples output from speaker j, and y.sub.i(t) the sound that is received at the i'th target location, for i=1, . . . Q.

[0175] The speakers 84 can be placed anywhere in the room. The location of speakers 84 can be computed using any suitable means. The ultrasonic array 75 may be used to determine the position of the user 78, and/or speakers 84 using ultrasound as previously described herein.

[0176] It is now possible to select transmit signals {s.sub.j(t)} so that the received signals become the desired ones, i.e. so that a specific sound is observed at some location i and an entirely different sound at location j, even though the original transmit signals {s.sub.j(t)} all contain mixes of those specific sounds. One straightforward example is to let s=H.sup.+y, where H.sup.+ denotes the Moore-Penrose inverse of H. More sophisticated techniques capable of dealing with noise robustness can be envisaged too, as explained above for the receive/sound capture scenario. Note that in the above, the entire impulse response, i.e. not just the direct time-of-flight path, can be utilized for audio focussing.
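A toy numerical instance of s=H.sup.+y (here H is a small random matrix standing in for the stacked block-Toeplitz impulse-response system, and the targets are a single spike at one location and silence at the other; all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_targets, n_speakers = 2, 5              # assumed: 2 target points, 5 loudspeakers
H = rng.standard_normal((n_targets, n_speakers))  # toy stand-in mixing matrix

y = np.array([1.0, 0.0])                  # desired: sound at target 1, silence at 2

# Moore-Penrose solution: the minimum-norm s with H s = y (H has full row rank)
s = np.linalg.pinv(H) @ y
```

With more speakers than target points the desired field is reproduced exactly; with many closely spaced target points it is only approximated, as discussed below.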

[0177] In some situations the exact position of the person 78 onto which the sound is to be focused may not be known, i.e. there is uncertainty connected with the position p for that person 78, or there may be multiple persons 78 present. In the receive scenario of FIG. 19, different beam-forming or inverse matrix computations may be utilised to get the optimum sound capture, but in the transmit case shown in FIG. 20, once the sound is transmitted, that opportunity is gone. So, referring to the equation y=Hs above, this may be dealt with by providing multiple target points placed close to one another, or in two or more groups of cluster points. The number of target points, and therefore the number of row blocks in the matrix H above, can be in the hundreds or thousands, with the net effect of making a broader focussing zone. The complexity of the inverse problem does not typically change dramatically, as the matrix H is typically pre-multiplied by its transpose H.sup.T before inversion of the product H.sup.TH.

[0178] Again, as with the audio receiving situation of FIG. 19, this means that the invention as claimed can be used to map both the room 80 and the movements of the person or persons 78, and by combining the two, obtain a vastly improved overall audio experience. As with the receiver case of FIG. 19, the invention can be used to create a statistical map of the whereabouts of the person(s) 78, and use this information for optimization of the audio steering in FIG. 20.

[0179] The imaging approach where ultrasound is used to map an environment by utilizing reflections from the enclosure 86 is shown in FIG. 21. The same concepts as were used to steer the audio propagation for sound transmission and reception in FIGS. 19 and 20, can now be used to steer the ultrasound used for imaging in certain directions or towards certain positions, and away from others. In this example, a container 86 includes an ultrasonic array 88, which is used to image the dimensions of the container 86, as well as how full the container 86 is, in this scenario, with refuse 90.

[0180] Referring back to the equation y=Hs, the (stacked) transmit signals held in s may be chosen in such a way that a desired signal set in the (stacked) vector y is at least approximately obtained. The problem of choosing the sources s may be reformulated as:

[00017] min.sub.s‖y−Hs‖.sub.F.sup.2=Σ.sub.k=1.sup.Q‖y.sub.k−H.sub.ks‖.sub.2.sup.2=Σ.sub.k=1.sup.Q(y.sub.k−H.sub.ks).sup.T(y.sub.k−H.sub.ks)

[0181] Where H.sub.k denotes the k'th block row of the matrix H, i.e. H.sub.k=[H.sub.k1, . . . , H.sub.kN]. Weightings can be introduced to the right-hand term, i.e. to create a weighted cost function

[00018] f(s)=Σ.sub.k=1.sup.Q(y.sub.k−H.sub.ks).sup.TW.sub.k(y.sub.k−H.sub.ks)

[0182] Where the matrices {W.sub.k} are typically diagonal matrices with positive entries. By choosing these weight matrices carefully, certain points in time and space can be designated where there should not be any energy. For instance, for a specific hypothetical point k with an associated target signal y.sub.k, set y.sub.k=0 and the associated W.sub.k=αI, where α is a large positive number.

[0183] At the same time, another vector y.sub.l can be chosen, for l≠k, which is a zero-padded spike or sinc signal, with a suitable weight matrix W.sub.l. It may also be desirable to take less account of energy that arrives at a certain point after a given time, but to take greater account of the fact that there is no energy at this or other points early on. This is equivalent to steering energy away from an object, see FIG. 21. This can be accomplished by choosing the target vector y.sub.k=0 as explained, but letting W.sub.k=αD, where the matrix D is a diagonal matrix with diagonal elements equal to 1 for the first K samples (say, the first 500 samples) and 0 after that. In effect, this means that no energy is desired for the first 500 samples picked up at that location, but after that it does not matter. This can be seen as a reasonable compromise, because given all the reflections in the scene it is very hard to create a permanent null or zero at a given point. However, it is possible to steer the ultrasound with a directivity pattern so that, initially at least, there is no directional energy in a certain sector. As shown in FIG. 21, the signals 92 are steered toward the walls of the container 86, or towards the refuse 90, rather than being directed towards empty space in the container which will not provide any useful imaging information.
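The weighted criterion has the closed-form minimiser s=(Σ.sub.kH.sub.k.sup.TW.sub.kH.sub.k).sup.−1Σ.sub.kH.sub.k.sup.TW.sub.ky.sub.k, which the sketch below uses to drive a spike at one point while keeping the first K samples at a second point quiet. The blocks H.sub.k here are random stand-ins for the true Toeplitz blocks, and α and K are the weighting constants discussed above:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_s = 40, 30                           # hypothetical: samples per target, source length
H1 = rng.standard_normal((T, n_s))        # toy H_k blocks (stand-ins for Toeplitz blocks)
H2 = rng.standard_normal((T, n_s))

y1 = np.zeros(T); y1[T // 2] = 1.0        # target point 1: a spike
y2 = np.zeros(T)                          # target point 2: silence (a null)

alpha, K = 1e4, 20                        # heavy weight on the first K samples only
W1 = np.eye(T)
W2 = alpha * np.diag((np.arange(T) < K).astype(float))   # W_k = alpha * D

# Weighted least squares: s = (sum H_k^T W_k H_k)^-1 (sum H_k^T W_k y_k)
A = H1.T @ W1 @ H1 + H2.T @ W2 @ H2
b = H1.T @ W1 @ y1 + H2.T @ W2 @ y2
s = np.linalg.solve(A, b)

early_null = np.linalg.norm((H2 @ s)[:K])  # heavily penalised, ends up near zero
late_leak = np.linalg.norm((H2 @ s)[K:])   # unweighted after K samples, free to be large
```

The early samples at the nulled point come out orders of magnitude quieter than the later, unweighted ones, mirroring the "quiet for the first 500 samples, then it doesn't matter" compromise.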

[0184] FIG. 22 shows another exemplary embodiment of the invention in the form of a café, where an array of ultrasonic transducers 94 is used to determine the location of people 96 in the room 98. Reflections of the transmitted ultrasonic signal from the wall 100 enable an obscured person 96a to be imaged, even though they are not in the direct line of sight of the array 94. It may be useful to monitor what goes on in the room 98, as new customers 96 move in, out and around. For example, the distances between customers may be monitored to ensure they remain a certain distance apart, e.g. 2 m under Covid-19 guidelines. The ultrasonic transducer 94 may therefore be used to monitor that guidelines are being adhered to by customers. In FIG. 22, the staff member 96b behind the counter 102 is imaged directly by the ultrasonic transducer 94. However, in some examples, once the dimensions of the room 98 and the locations of fixed objects in the room 98, such as the counter 102, have been determined, the area behind the counter 102 may not need to be imaged, as it is known that any person behind the counter 102 will be staff 96b, whose movements and location do not need to be monitored.

[0185] FIG. 23 shows another embodiment of the invention in the form of a robot gripper arm 104 with an ultrasonic transducer array 106. The robotic gripper 104 is being controlled to pick up a pencil 108. The robotic gripper 104 changes shape as it closes around the pencil 108. The ultrasonic array 106 is used both to determine the location of the surrounding structure (in this case the robot gripper 104 itself) and the location of the pencil 108. The ultrasonic array 106 therefore regularly updates the information relating to the positions of the robot gripper 104 hand and fingers, to improve the imaging of the pencil 108 as the robot gripper 104 changes shape, as described above with reference to FIG. 18. Nearfield reflections 110 are used to image the pencil 108 being picked up by the gripper 104 as the gripper 104 moves towards the pencil 108 whilst changing its shape.

[0186] FIG. 24 shows an embodiment of the invention where an array of ultrasonic transmitters 122 is retrofitted to a device having a built-in array of MEMS microphones 124. In this example, the device is a voice-controlled smart speaker 120. The smart speaker 120 includes the microphones 124 spaced around the top part of the device, the array of ultrasonic transmitters 122 located in the centre of the top of the device, and a CPU 126 for processing received signals from the microphones 124 and controlling the transmitter array 122. The voice-controlled smart speaker 120 may be used, as described in the foregoing description, to acoustically image objects within a surrounding structure. The microphones 124 each have a peak response in the frequency range of typical speech, e.g. between 50 Hz and 500 Hz. As the microphones 124 also have the capability to capture ultrasonic signals, a dedicated ultrasonic receiver array does not also need to be retrofitted. This helps the retrofitted component to be small and suitable for a wider range of devices. The transmitter array 122 is especially compact as it has a spacing equivalent to a half-wavelength of a sound wave in the ultrasonic frequency range, which helps to optimise the transmitter array 122 for ultrasonic beamforming.
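For reference, the half-wavelength spacing works out as follows (40 kHz is an assumed, typical ultrasonic operating frequency; the actual design frequency of such an array may differ):

```python
c = 343.0                  # speed of sound in air at room temperature (m/s)
f = 40000.0                # assumed ultrasonic operating frequency (Hz)
wavelength = c / f         # about 8.6 mm
spacing = wavelength / 2   # half-wavelength element spacing, about 4.3 mm
```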

[0187] The Applicant has also appreciated that the received signals in accordance with any of the foregoing aspects or embodiments of the invention can be processed to take into account Doppler information. This may enhance imaging performance even further.

[0188] There are several ways in which Doppler information can be used to enhance imaging performance. The following mathematics illustrates one way in which Doppler can explicitly be accounted for during processing.

[0189] Returning to the equation:


y(t)=Σ.sub.l σ.sub.l·δ(t−τ.sub.l)

[0190] Where it is assumed that a Dirac pulse has been transmitted and has been received at a receiver as a time series y(t).

[0191] More typically and as mentioned earlier in this application, coded signals may be used. Let x(t) be the bandlimited, linear output signal, which may for instance be a chirp signal.

[0192] Then y(t) can be obtained through the following:


y.sub.0(t)=h(t)*x(t)+n(t),


y(t)=x(−t)*y.sub.0(t)=x(−t)*h(t)*x(t)+x(−t)*n(t)

[0193] If x(−t)*n(t)=n.sub.2(t):


y(t)=h(t)*x(−t)*x(t)+n.sub.2(t)

[0194] If x(−t)*x(t)=δ.sub.B(t), where δ.sub.B(t) is the bandlimited version of the Dirac impulse, within the frequency band B defined by the signal x(t):


y(t)=h(t)*δ.sub.B(t)+n.sub.2(t)

[0195] Now, if a signal x(t) is transmitted and it bounces off a moving object, a major effect will be that of effectively stretching or compressing the transmit signal x(t) upon reception. This can be thought of in a slightly different way: the object staying still, but the transmit signal x(t) being stretched, or scaled in time so that it is now x(kt), where k is a constant positive number, typically close to 1.

[0196] This gives:


y.sub.0(t)=h(t)*x(kt)+n(t),


y(t)=x(−t)*y.sub.0(t)=x(−t)*h(t)*x(kt)+x(−t)*n(t)=h(t)*x(−t)*x(kt)+n.sub.2(t)

[0197] However, the property x(−t)*x(t)=δ.sub.B(t) is now missing. This mismatch can be taken advantage of, to construct a set of x(−t) replacements which will focus the signal processing and subsequent image generation process onto objects with a specific Doppler shift only.
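The loss of the matched-filter property can be seen numerically: cross-correlating a chirp with a slightly time-scaled copy of itself gives a weaker, smeared peak compared with the matched autocorrelation. All parameters below (sample rate, chirp band, duration, scale factor k) are hypothetical illustration values:

```python
import numpy as np

fs = 192000                                # assumed sample rate (Hz)
T = 0.02                                   # chirp duration (s)
f0, f1 = 20000.0, 40000.0                  # assumed ultrasonic chirp band (Hz)

def chirp(ts):
    """Linear chirp x evaluated at times ts."""
    return np.sin(2 * np.pi * (f0 * ts + 0.5 * (f1 - f0) / T * ts ** 2))

t = np.arange(int(T * fs)) / fs
x = chirp(t)                               # transmitted signal x(t)

k = 1.01                                   # Doppler time-scaling factor, close to 1
t_k = np.arange(int(T * fs / k)) / fs
x_k = chirp(k * t_k)                       # received echo x(kt) off a moving object
x_k = np.pad(x_k, (0, x.size - x_k.size))  # zero-pad to a common length

# x(-t)*x(t) gives the sharp bandlimited pulse; correlating against x(kt) does not
matched = np.correlate(x, x, mode="full")
mismatched = np.correlate(x_k, x, mode="full")
```

The degraded peak for the scaled echo is exactly the mismatch that the family of filters described next is designed to exploit.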

[0198] Now, to filter out and separate objects with a certain Doppler shift, a family of functions {x̃.sub.l(t)} can be designed, which approximately satisfy the criterion:

[00019] x̃.sub.l(−t)*x(k·t)=δ.sub.B(t) for k=l; 0 for k≠l    Equation (*)

[0199] Then, a single slice of the imaging problem can be created by pre-convolving the received signal by any of the signals in the family. For instance:


ỹ.sub.l(t)=x̃.sub.l(−t)*y.sub.0(t)=x̃.sub.l(−t)*h(t)*x(k·t)+x̃.sub.l(−t)*n(t)

[0200] If x̃.sub.l(−t)*n(t)=n.sub.3(t):


ỹ.sub.l(t)=h(t)*x̃.sub.l(−t)*x(k·t)+n.sub.3(t)

and x̃.sub.l(−t)*x(k·t)=δ.sub.B(t) for k=l, 0 otherwise.

[0201] By picking the right Doppler speed-related function x̃.sub.k(−t), the objects in the scene with a specific Doppler shift can effectively be captured, while filtering others out. Imaging can then be continued, assuming that the output driving signal was in fact the bandlimited Dirac signal δ.sub.B(t).

[0202] The family of functions held in Equation (*) can be derived in any number of ways. One specific way is to (a) resample the function x(k·t) with different values of k, to generate a family of vectors x.sub.k (the transmitter index i is momentarily skipped, since with a single transmitter it takes a common value). Then each of those vectors is used to generate an associated Toeplitz matrix X.sub.k with the vector x.sub.k as its (flipped) rows.

[0203] Then vector approximations of the filters x̃.sub.k(−t) can be computed as vectors h.sub.k. This is achieved by setting up the requirements:

[00020] X.sub.rh.sub.k=d for r=k; 0 for r≠k

[0204] Where d is a vector of zeros with the exception of the centre element which is 1, or alternatively, d represents a sampled, bandlimited version of the Dirac function limited to the frequency band of interest. More specifically, the following function can be minimised:

[00021] min.sub.{h.sub.k}Σ.sub.r=1.sup.KΣ.sub.k=1.sup.K‖X.sub.rh.sub.k−w.sub.rk‖.sub.2.sup.2    Equation (+)

[0205] Where w.sub.rk is 0 if r≠k, is a sampled Dirac vector if r=k, and K is the number of relevant Doppler speed indices. There are also separation strategies other than filtering: for instance, deconvolution approaches could be used, the optimization problem above could be solved using other norms, or deep learning approaches could be used to design optimal filters.
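A least-squares sketch of this filter design (the chirp parameters, filter length and set of Doppler scale factors k are hypothetical; each X.sub.k is the convolution matrix built from the resampled transmit signal, and the filter for k=1.0 is designed to pass its own speed and reject the others):

```python
import numpy as np

def conv_matrix(v, m):
    """Toeplitz matrix C such that C @ h == np.convolve(v, h) for len(h) == m."""
    n = len(v) + m - 1
    C = np.zeros((n, m))
    for j in range(m):
        C[j:j + len(v), j] = v
    return C

fs = 48000
t = np.arange(256) / fs
f0, B = 10000.0, 6000.0                   # assumed chirp start frequency and bandwidth

def chirp(k):                             # x(k*t): Doppler-scaled transmit chirp
    return np.sin(2 * np.pi * (f0 * k * t + 0.5 * B / t[-1] * (k * t) ** 2))

ks = [0.9, 1.0, 1.1]                      # assumed family of Doppler scale factors
m = 96                                    # filter length (taps)
X = [conv_matrix(chirp(k), m) for k in ks]

# Design h for k = 1.0: want X_1 h ~ d (a spike) and X_0 h ~ 0, X_2 h ~ 0
n = X[0].shape[0]
d = np.zeros(n); d[n // 2] = 1.0
A = np.vstack(X)                          # stack the X_r blocks
b = np.concatenate([np.zeros(n), d, np.zeros(n)])
h, *_ = np.linalg.lstsq(A, b, rcond=None)

match = np.abs(X[1] @ h).max()            # response at the matching Doppler speed
reject = np.abs(X[0] @ h).max()           # response at a mismatched speed
```

The designed filter responds more strongly to its own Doppler slice than to the neighbouring ones, approximating criterion (*).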

[0206] More sophisticated filtering or deconvolution strategies could also be employed by assuming that only a few Doppler shifts are present at the same time, for example that most objects are static and only a few are moving at relatively high and known speeds. This eases the pressure on the criterion (+), because the filter h.sub.k does not have to be orthogonal to all the other filters in the family, only to those whose speeds match specific subsets of the filter family.

[0207] The following equation could then be solved:

[00022] min.sub.{h.sub.k}Σ.sub.r∈SΣ.sub.k∈S‖X.sub.rh.sub.k−w.sub.rk‖.sub.2.sup.2

[0208] Where S is a subset of relevant speed indices, with |S|<K. The criterion will then be better fulfilled, getting closer to the design goal set up in (*).

[0209] Multiple other strategies for steering both transmit and receive beams exist in the literature, see e.g. Demi, L., Practical guide to ultrasound beam forming: beam pattern and image reconstruction analysis, Applied Sciences, 2018, 8, 1544.

[0210] It will be appreciated by those skilled in the art that the invention has been illustrated by describing one or more specific embodiments thereof, but is not limited to these embodiments; many variations and modifications are possible, within the scope of the accompanying claims. For example, the CPU may not be local to the imaging system and may instead be an external hub used for work-sharing, with data sent between the imaging system and hub via Bluetooth signals.