Method and apparatus for an advanced radio system
11252396 · 2022-02-15
Assignee
Inventors
CPC classification
H04N13/275
ELECTRICITY
H04N13/254
ELECTRICITY
International classification
H04N13/254
ELECTRICITY
H04N13/275
ELECTRICITY
G01S13/42
PHYSICS
Abstract
An advanced communication system is provided. The advanced communication system comprises generating a first signal with a polyphase coding based on a DFT spread OFDM with at least one CAZAC sequence in accordance with a configuration condition, applying a digital transmit beamforming to the generated first signal, converting the first signal to analog from digital, modulating the converted first signal with an energy source, emitting, using at least one energy emit element, the modulated first signal, detecting a second signal comprising at least a portion of the emitted first signal that is reflected from at least one object in a scene in a field-of-view, demodulating the detected second signal, converting the second signal to digital from analog, converting the converted second signal to a computational image, and generating a 3D image by applying coherent detection to the computational image. The first signal comprises a polyphase sequence.
Claims
1. An advanced communication system comprising: a processor; and a sensor comprising a digital circuit and a transceiver (XCVR), the digital circuit operably connected to the processor, the digital circuit configured to: generate a first signal including a polyphase sequence based on a configuration condition, apply a digital transmit beamforming to the generated first signal, map a multi input multi output (MIMO) codeword to the generated first signal that is transformed by the digital transmit beamforming, and generate at least one layer by applying layer mapping to the generated first signal that is mapped by the MIMO codeword; and the XCVR operably connected to the processor, the XCVR configured to: emit the first signal via an array, wherein the first signal is modulated by the XCVR before emitting via the array, and detect a second signal comprising at least a portion of the emitted first signal that is reflected from at least one object illuminated by an imaging sensor, wherein the digital circuit is further configured to generate images based on the second signal.
2. The advanced communication system of claim 1, wherein the XCVR is further configured to: convert the first signal to an analog signal; modulate the converted first signal with an energy source; demodulate the detected second signal; and convert the detected second signal to a digital signal.
3. The advanced communication system of claim 1, wherein the digital circuit is further configured to convert the second signal to a computational image.
4. The advanced communication system of claim 1, wherein the processor is configured to: identify at least one pseudo-noise (PN) sequence for sensing the at least one object; and map the at least one PN sequence to at least one transmit beam in a frequency and time domain.
5. The advanced communication system of claim 1, wherein: the processor is configured to: generate user data, identify at least one PN sequence to transmit the user data, and multiplex the at least one PN sequence to the user data in a frequency and time domain, and the XCVR is further configured to transmit the user data multiplexed by the at least one PN sequence through at least one transmit beam.
6. The advanced communication system of claim 1, wherein the MIMO codeword is mapped before converting the first signal to an analog signal.
7. The advanced communication system of claim 1, wherein the processor is configured to: apply analog beamforming to the generated first signal that is applied by the digital transmit beamforming before emitting the first signal; and apply the analog beamforming to the detected second signal comprising the at least a portion of the emitted first signal that is reflected from the at least one object.
8. The advanced communication system of claim 1, wherein the digital circuit is further configured to: multiply a phase value to the second signal to perform a phase shift, wherein the second signal is a converted signal; and generate an image of the at least one object by applying a two dimensional (2D) fast Fourier transform to the second signal that is multiplied by the phase value.
9. A method of an advanced communication system, the method comprising: generating a first signal including a polyphase sequence based on a configuration condition; applying a digital transmit beamforming to the generated first signal; mapping a multi input multi output (MIMO) codeword to the generated first signal that is transformed by the digital transmit beamforming; generating at least one layer by applying layer mapping to the generated first signal that is mapped by the MIMO codeword; emitting the first signal via an array, wherein the first signal is modulated before emitting via the array; detecting a second signal comprising at least a portion of the emitted first signal that is reflected from at least one object illuminated by an imaging sensor; and generating images based on the second signal.
10. The method of claim 9, further comprising: converting the first signal to an analog signal; modulating the converted first signal with an energy source; demodulating the detected second signal; and converting the detected second signal to a digital signal.
11. The method of claim 9, further comprising converting the second signal to a computational image.
12. The method of claim 9, further comprising: identifying at least one pseudo-noise (PN) sequence for sensing the at least one object; and mapping the at least one PN sequence to at least one transmit beam in a frequency and time domain.
13. The method of claim 9, further comprising: generating user data; identifying at least one PN sequence to transmit the user data; multiplexing the at least one PN sequence to the user data in a frequency and time domain; and transmitting the user data multiplexed by the at least one PN sequence through at least one transmit beam.
14. The method of claim 9, wherein the MIMO codeword is mapped before converting the first signal to an analog signal.
15. The method of claim 9, further comprising: applying analog beamforming to the generated first signal that is applied by the digital transmit beamforming before emitting the first signal; and applying the analog beamforming to the detected second signal comprising the at least a portion of the emitted first signal that is reflected from the at least one object.
16. The method of claim 9, further comprising: multiplying a phase value to the second signal to perform a phase shift, wherein the second signal is a converted signal; and generating an image of the at least one object by applying a two dimensional (2D) fast Fourier transform to the second signal that is multiplied by the phase value.
17. A non-transitory computer-readable medium comprising program code, that when executed by at least one processor, causes an electronic device to: generate a first signal including a polyphase sequence based on a configuration condition; apply a digital transmit beamforming to the generated first signal; map a multi input multi output (MIMO) codeword to the generated first signal that is transformed by the digital transmit beamforming; generate at least one layer by applying layer mapping to the generated first signal that is mapped by the MIMO codeword; emit the first signal via an array, wherein the first signal is modulated before emitting via the array; detect a second signal comprising at least a portion of the emitted first signal that is reflected from at least one object illuminated by an imaging sensor; and generate images based on the second signal.
18. The non-transitory computer-readable medium of claim 17, further comprising program code, that when executed by the at least one processor, causes the electronic device to: identify at least one pseudo-noise (PN) sequence for sensing the at least one object; and map the at least one PN sequence to at least one transmit beam in a frequency and time domain.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
DETAILED DESCRIPTION
(14) The present disclosure provides a 3D imaging sensor that generates a 3D image of a scene within a field of view (FoV). The 3D imaging sensor generates a transmit signal that is reflected or backscattered by various objects of the scene. The backscattered or reflected signal is received and processed by the 3D imaging sensor to generate the 3D image of the scene, comprising objects and structures in the field of view with respect to the 3D imaging sensor. The 3D imaging sensor uses computational imaging comprising an image formation algorithm to generate the 3D image of the scene.
(15) In one embodiment, the 3D imaging sensor comprises a digital imaging module coupled to a transceiver (XCVR) circuit, which is coupled to an array wherein said array has one or more energy emitter elements and one or more energy detector elements.
(16) In such embodiment, the array is a 2D array with each array element (e.g., energy emitter and energy detector elements) having an (x, y) coordinate defining the position of said array element.
(17) The 3D imaging sensor has a transmit path starting from the digital imaging module extending through a transceiver circuit to the array, and a receive path starting at the array extending through the transceiver circuit to the digital imaging circuit.
(18) The 3D imaging sensor of the present disclosure generates a transmit signal that is emitted by one or more energy emitter elements of the array. The 3D imaging sensor detects the transmit signals reflected or backscattered from objects, structures, or items of a scene that are within the field of view (FoV) of the 3D imaging sensor allowing generating 3D images of the scene using digital beam forming operations and one or more image formation algorithms. Objects, structures, or other items, which reflect energy transmitted by the 3D imaging sensor are said to be illuminated.
(19) The objects within the FoV of the 3D imaging sensor and from which the transmit signal is reflected are thus illuminated by the 3D imaging sensor of the present disclosure. The FoV is a portion of space within which an object, structure or item is located so that the FoV can be illuminated by the 3D imaging sensor of the present disclosure. Objects within the FoV may, however, be obscured by other objects in the FoV. The scene comprises objects, structures, and other items that can be illuminated by the 3D imaging sensor.
(20) In the present disclosure, the transmit path within the digital imaging circuit comprises a sequence generator for generating pseudo noise (PN) sequences, a waveform generator for generating orthogonal digital waveforms and a digital transmit beam former for performing digital beam forming operations on PN-sequence-modulated orthogonal digital waveforms.
(21) The transmit path continues in the XCVR circuit, which comprises a digital to analog converter (DAC) circuit, and a modulator having a first input coupled to an energy source, and a second input for receiving an analog signal generated from the DAC wherein said analog signal comprises the digitally beam formed PN sequence modulated orthogonal digital waveform converted to an analog signal by the DAC.
(22) The modulator further has an output providing a transmit signal that is coupled to one or more energy emitter elements of the array. Thus, the transmit signal is the energy from the energy source being modulated by the analog signal from the DAC. The transmit signal emitted by an array of the 3D imaging sensor of the present disclosure is caused to illuminate a scene, viz., objects, structures or other items in the FoV of the 3D imaging sensor of the present disclosure. The transmit signals that are caused to illuminate a scene experience a resultant phase shift due to frequency translation, time delays, and various phase shifts.
(23) Further, in such embodiment of the present disclosure, the receive path of the transceiver comprises an energy detector circuit configured to receive energy detected by one or more energy detector elements of the array. In particular, the receive path detects energy transmitted from the transmit path (i.e., the transmit signal) that is reflected (or backscattered) from an illuminated scene (i.e., objects, structures or items) in the FoV of the 3D imaging sensor of the present disclosure.
(24) The energy detector circuit is coupled to energy detector elements of the array. The output of the energy detector circuit is coupled to a demodulator (not shown) whose purpose is to obtain a baseband signal from the received reflected signal.
(25) It will be readily understood that the operations of energy detection and demodulation may be performed in one module or circuit, or may be implemented as two separate circuits. The output of the demodulator provides the received baseband signal. The received baseband signal is then applied to an analog to digital converter (ADC) for providing a received digital signal. The receive path continues to the digital imaging circuit, which comprises a computational imaging circuit coupled to a coherent detector (i.e., a correlation detector) to process the received digital signal to generate 3D images of a scene (i.e., objects, structures and other items) being illuminated by the 3D imaging sensor of the present disclosure.
(26) The computational imaging circuit performs at least an image formation algorithm that reduces or substantially eliminates the resultant phase shift experienced by the transmit signals backscattered or reflected from a scene being illuminated by the 3D imaging sensor, and generates a 3D image of the scene through the use of a 2D fast Fourier transform (FFT) of the reflectivity density of the reflected or backscattered transmit signal.
(27) As mentioned above, these reflected or backscattered transmit signals are finally received by the energy detector elements of the array of the 3D imaging sensor. These received signals experience a resultant phase shift due to frequency shifts (or Doppler shifts), time delays, and various phase shifts due to interaction with the objects, structures, or other items of the scene.
(28) The resultant phase shift is also a result of the relative speed between the scene being illuminated and the array. Also, the type of reflections and backscattering experienced by these signals at various target points of objects, structures or other items may be due to environmental conditions and the relative smoothness of the surface of the targets of objects and structures of a scene being illuminated. All or some of these aforementioned factors may contribute to the resultant phase shift experienced by the transmit signal that is reflected from a scene in the field-of-view and received by one or more energy detector elements of the array of the 3D imaging sensor of the present disclosure. In the present disclosure, the terms “speed” and “velocity” are used interchangeably.
(29) The digital imaging circuit performs computational imaging operations such as an image formation algorithm to determine the target reflectivity, which is the fraction of a signal (e.g., electromagnetic or optical signal incident to the target) that is reflected from the target. The digital imaging circuit thus uses the image formation algorithm to calculate voxels (e.g., volume pixels) having coordinates (x, y, r) to generate a 3D image of a scene being illuminated by the 3D imaging sensor of the present disclosure. The (x, y, r) coordinates are calculated using a 2D FFT of the reflectivity density ρ, which is the reflected or backscattered signal (from a target point of an object) per infinitesimal volume dζdηdr. The reflectivity density of the target is thus modeled as a function of three variables, (ζ, η, r), as will be discussed below.
(30) The image formation algorithm also makes adjustments to the resultant phase shift experienced by the transmit signals reflected or backscattered by objects, structures, or other items of a scene. The adjustments reduce or significantly eliminate the resultant phase shift experienced by the transmit signals after the transmit signals were emitted by an energy emitting element of the array, to a scene, and reflected or backscattered by the scene. The reflected or backscattered transmit signals are then received by one or more energy detector elements of the array.
(31) A value for the coordinate r associated with each (x, y) set of coordinates is also calculated by the image formation algorithm by performing a 2D FFT of the reflectivity density of a target from which a signal transmitted by the 3D imaging sensor is reflected. Thus, for each value of r calculated, i.e., r = R_1, R_2, R_3, …, R_N, for a particular (x, y) coordinate, there is a corresponding voxel (x, y, R_1), (x, y, R_2), …, (x, y, R_N) that can be computed by the 3D imaging sensor of the present disclosure, thus generating a 3D image of a scene.
(32) The coordinate r represents a distance between the corresponding energy detector element (element detecting the reflected transmit signal) having coordinates (x, y) and a target point of a scene being illuminated by the transmit signals emitted by the array. The transmitted signal is reflected (or backscattered) by the target point and is then detected by one or more energy detector elements of the array having a coordinate of (x, y).
(33) For that particular set of coordinates, the 3D imaging sensor of the present disclosure calculates the r value for different values of r (r = R_1, r = R_2, r = R_3, …) in the process of generating a 3D image of the scene being illuminated. The resulting voxels thus have coordinates (x, y, R_1), (x, y, R_2), …, (x, y, R_N), where N is an integer equal to 1 or greater.
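The voxel computation described in paragraphs (29)-(33) can be sketched numerically as follows. This is a minimal illustration, assuming a simplified model in which the resultant phase shift is undone by a precomputed per-element correction term; the function and variable names are illustrative, not taken from the disclosure.

```python
import numpy as np

def form_3d_image(received, phase_corr, ranges):
    """Form voxels (x, y, r) from per-range 2D detector samples.

    received:   complex array of shape (N_range, Ny, Nx) -- one 2D
                snapshot of detector-element samples per range bin
    phase_corr: complex array of shape (Ny, Nx) -- unit-magnitude terms
                that undo the resultant phase shift (assumed known here)
    ranges:     the range values R_1, ..., R_N
    """
    voxels = {}
    for r, snapshot in zip(ranges, received):
        # Undo the resultant phase shift, then take the 2D FFT of the
        # phase-corrected samples to recover the (x, y) image plane.
        image = np.fft.fftshift(np.fft.fft2(snapshot * phase_corr))
        voxels[float(r)] = np.abs(image)  # reflectivity magnitude per (x, y)
    return voxels

# Tiny example: a strong, uniform return appears only at range bin R = 2.0,
# so its energy concentrates at the centered DC bin of the 2D FFT.
rng = np.random.default_rng(0)
received = rng.standard_normal((3, 8, 8)) * 0.01 + 0j
received[1] += 1.0
vox = form_3d_image(received, np.ones((8, 8)), ranges=[1.0, 2.0, 3.0])
```

Each key of `vox` is one range slice R_k; stacking the slices yields the voxel grid (x, y, R_1), …, (x, y, R_N) described above.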
(34) The term “couple” as used herein refers to a path (including a waveguide or optical fiber path), arrangement of media, device(s), equipment, electrical or electronic components, modules, or any combination thereof that facilitates the flow of a signal or information from one portion of the sensor to another portion of the sensor or to another portion of a system outside of the sensor. A portion can be “an origin point” and the other portion can be a “destination point.” The path may be an actual physical path (e.g., electrical, electronic, optic, electromagnetic, waveguide path) or may be a logical path implemented through a data structure that allows information stored at certain memory locations to be retrieved by direct or indirect addressing.
(35) A “direct couple” between two points, or points that are “directly coupled” to each other means that there are no intervening systems or equipment or other obstacle existing in the path of the signals that would significantly affect the characteristics of the signals traveling from a first point to a second point or from an origin point to a destination point.
(36) In another embodiment of the present disclosure, the 3D imaging sensor comprises a transmitter, a receiver, and an array coupled to the transmitter and receiver, said array having one or more energy emitter elements and energy detector elements wherein the array is configured to emit a transmit signal generated by the transmitter.
(37) In such embodiment, the transmit signal comprises a digitally beam formed orthogonal digital waveform modulated by a multiple input multiple output (MIMO) processed frequency domain PN sequence, said digitally beam formed orthogonal digital waveform is converted to an analog waveform signal caused to modulate an energy source resulting in a modulated signal (i.e., the modulated energy) that is then analog beam formed to obtain the transmit signal applied to the one or more energy emitter elements of the array.
(38) The operation of analog beam forming comprises applying a signal directly to an element of the array to provide a certain phase to the element. The phase of that element does not change until the signal (e.g., voltage, current) is no longer applied.
(39) Continuing with this embodiment, the receiver is configured to perform operations using computational imaging comprising at least an image formation algorithm to generate 3D images of a scene being illuminated by the 3D imaging sensor of the present disclosure. The image formation algorithm first makes adjustments to resultant phase shift experienced by signals transmitted from the 3D imaging sensor and reflected or backscattered by the scene. Further, the image formation algorithm performs a 2D FFT of the reflectivity density of the reflected signals (from the scene) to generate a 3D image of the scene.
(41) Referring to
(42) The 3D imaging sensor generates a transmit signal that is emitted by the energy emitter elements (not shown in
(43) The image formation algorithm generates 3D images of the objects by performing a 2D FFT operation on the reflectivity density of the reflected transmitted signal. The adjustments affect at least one of phase shifts, time delay shifts, or frequency shifts due to Doppler, experienced by the reflected or backscattered transmit signal. Thus, the reflected or backscattered transmit signal experiences a resultant phase shift from a combination of the time delays, frequency shifts, and other phase shifts.
(44) In one embodiment, the transmit path within the digital imaging circuit 102 comprises a sequence generator 102A for generating PN sequences, a waveform generator 102B for generating orthogonal digital waveforms and a digital transmit beam former 102C for performing digital beam forming operations on PN-sequence-modulated orthogonal digital waveforms. The digitally beam formed PN sequence modulated orthogonal digital waveforms are applied to digital to analog converter (DAC) 104B.
(45) Still referring to
(46) In one embodiment, the receive path of the transceiver 104 comprises an energy detector circuit 104E configured to receive and sum energy detected by one or more energy detector elements of the array 106. In particular, the receive path detects energy transmitted from the transmit path that is reflected (or backscattered) from objects in the FoV of the 3D imaging sensor of the present disclosure. The objects are at a distance from the 3D imaging processor 100 of
(47) The energy detector circuit 104E is coupled to the energy detector elements (not shown) of the array 106. The energy detector circuit 104E also performs the operation of demodulating the received signal to a baseband signal. The output of the energy detector circuit 104E is a received baseband signal, which is applied to an analog to digital converter (ADC) 104D for providing a received baseband digital signal to the digital image circuit 102. The receive path continues to the digital image circuit 102, which comprises a computational imaging circuit 102E coupled to a coherent detector 102D (i.e., a correlation detector) to detect coordinates of volume pixels (voxels) used to generate 3D images of the objects.
(48) The computational imaging circuit 102E performs at least one image formation algorithm for making adjustments to phase shifts (or a resultant phase shift) experienced by received signals originally transmitted by the transmitter of the 3D imaging sensor. The at least one image formation algorithm uses the reflectivity density of the reflected signals to generate 3D images of objects in the field-of-view being illuminated by the transmitter of the 3D imaging processor of the present disclosure.
(49) It will be understood that all of the circuitry and/or modules shown in
(50) It will be readily understood that any of the modules and/or circuits in the digital imaging circuit 102 or the XCVR circuit 104 may be part of the memory of the processor 110 to allow the processor to control, perform or direct any and all operations performed by one or more of the circuits 102A-E, 104A-E, and 108A-B. The processor 110 may operate as a digital signal processor or a microprocessor or both. The processor 110 may configure the signal format described in
(51) The tracking circuit 108A serves as a storage unit for the targets detected by the coherent detector circuit 102D where the output of the coherent detector circuit 102D in
(52) For example, the post processing circuit 108B can be designed and configured to detect when interference from other signals, including signals from other 3D imaging sensors, occur. When an interference is detected by the post processing circuit 108B, the processor 110 can operate or cause a sequence generator 102A to generate a different type of sequence or different type of combination of sequences to reduce the probability of interference between different users of 3D imaging sensors.
(53) For example, the sequence generator 102A can be designed and/or configured to generate PN sequences (or other sequences) having different formats. The sequences can be assigned randomly, follow a pre-determined pattern, or can be changed adaptively depending on the measured interference level. After CFAR detection by a threshold test, a radar tracking algorithm initiates target tracking. Well-known algorithms such as the Kalman filter are used to track multiple targets based on the position and velocity of the target.
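The CFAR threshold test mentioned above can be sketched as a basic cell-averaging CFAR (one common variant; the disclosure does not specify which CFAR scheme is used, so this is an illustrative choice with hypothetical parameter values):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: declare a detection when a cell's power
    exceeds the scaled mean of its training cells, with guard cells
    around the cell under test excluded from the average."""
    n = len(power)
    hits = []
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training cells on both sides of the cell under test.
        cells = np.r_[power[lo:max(0, i - guard)],
                      power[min(n, i + guard + 1):hi]]
        if cells.size and power[i] > scale * cells.mean():
            hits.append(i)
    return hits

noise = np.ones(64)      # flat noise floor for illustration
noise[30] = 50.0         # strong target return
print(ca_cfar(noise))    # -> [30]
```

The detections produced by such a threshold test would then seed the tracking algorithm (e.g., a Kalman filter per target) referenced in paragraph (53).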
(57) Referring temporarily to
(58) To reduce or eliminate interference with other image sensors, the processor 110 may cause sequence generator 102A (of
(59) Referring back to
(61) Referring now to
(62) In the embodiment of
(63) The array 226 of
(65) The front view of the array 226 is shown in
(66) The transmit signal comprises a digitally beam formed orthogonal digital waveform (output of digital beam former TX 216). Prior to being digitally beam formed, the orthogonal digital waveform is generated by the combination of resource element (RE) mapping circuits 212_1, …, 212_L coupled to corresponding inverse fast Fourier transform (IFFT) cyclic prefix (CP) circuits 214_1, …, 214_L. Also, said orthogonal digital waveform is modulated by a MIMO processed frequency domain PN sequence (i.e., output of MIMO pre-coding circuit 210).
(67) Thus, the digitally beam formed orthogonal digital waveform is obtained by applying the orthogonal digital waveform to the digital beam former 216. The digitally beam formed orthogonal digital waveform is converted to an analog waveform by DAC 218 (i.e., signal at output of DAC 218). The resulting analog waveform is applied to an input of modulator 222 to modulate an energy source 220 (applied to another input of modulator 222 as shown) resulting in a modulated analog signal that is analog beam formed by analog beam former 224A to obtain the transmit signal (output of analog beam former 224A) applied to the one or more energy emitter elements of the array. The one or more energy emitter elements of the array 226 thus emit the transmit signals applied to them.
(68) The modulator 222 of
(69) Still referring to
(70) N is an integer equal to 1 or greater. The DFT circuit/module 204 performs a discrete Fourier transform on a time domain sequence to convert said sequence to a frequency domain sequence. A time domain PN sequence obtained from a CAZAC sequence is one example of a PN sequence that is generated by the PN sequence generator 202. A CAZAC sequence is a type of PN sequence having the constant amplitude zero auto-correlation (CAZAC) property. The u-th root Zadoff-Chu sequence, which is a CAZAC sequence, is given by the equation:

(71) x_u(n) = e^(−jπun(n+1)/N_ZC), 0 ≤ n ≤ N_ZC − 1,

where N_ZC is the length of the Zadoff-Chu sequence. The frequency domain PN sequence is then MIMO processed.
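The u-th root Zadoff-Chu sequence and its CAZAC property can be verified numerically. The sketch below uses the standard odd-length form x_u(n) = e^(−jπun(n+1)/N_ZC) with illustrative parameter values (u = 25, N_ZC = 139), which are not taken from the disclosure:

```python
import numpy as np

def zadoff_chu(u, n_zc):
    """u-th root Zadoff-Chu sequence of odd length n_zc (requires
    gcd(u, n_zc) = 1 for the CAZAC property to hold)."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

zc = zadoff_chu(u=25, n_zc=139)

# Constant amplitude: every element has unit magnitude.
assert np.allclose(np.abs(zc), 1.0)

# Zero circular autocorrelation at every nonzero lag -- the CAZAC property.
for lag in range(1, 139):
    assert abs(np.vdot(zc, np.roll(zc, lag))) < 1e-9
```

The constant amplitude keeps the transmit power flat across the sequence, and the zero autocorrelation is what makes the coherent (correlation) detector in the receive path effective.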
(73) Referring to
(74) Still referring to
(75) Thus, for the layer mapping shown in
(76) A MIMO processing is therefore performed on the frequency domain sequence by first performing a code word mapping operation (see circuit 206 of
(77) Still referring to
(78) In one example, the 4-element sequence block shown under the code word mapping operation is assumed to have been subject to a code word mapping operation. The remaining MIMO operations are also shown. As shown, the operation of layer mapping fragments the sequence into a number of layers (L, here L=2) after the order of the sequence elements has been altered (i.e., the sequence has been scrambled). The pre-coding operation follows, whereby certain operations are performed on the sequence elements for certain layers and not for other layers.
(79) Also, the pre-coding operation determines the particular subset of energy emitter elements (when transmitting a signal) and energy detector elements (when receiving a reflected signal) of the array 226 that are to be energized. In the example being discussed, for one of the L precoding layers, a complex conjugate operation on the scrambled sequence elements was performed. For the other layer, the sequence elements were scrambled but the sequence elements were not subjected to any other operation.
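The codeword-to-layer-to-precoding chain of paragraphs (76)-(79) can be sketched as follows for the L=2 example, where one layer is conjugated and the other passes through unchanged. The interleaved layer split and the element values are illustrative assumptions; the disclosure does not specify the scrambling pattern.

```python
import numpy as np

def layer_map(codeword, num_layers=2):
    """Fragment a (scrambled) codeword block into L layers; here the
    split interleaves elements across layers (an assumed pattern)."""
    return [codeword[l::num_layers] for l in range(num_layers)]

def precode(layers):
    """Example precoding from the two-layer case: apply a complex
    conjugate to one layer, pass the other layer through unchanged."""
    return [np.conj(layers[0]), layers[1]]

codeword = np.array([1 + 1j, 2 - 1j, -1 + 2j, -2 - 1j])  # 4-element block
layers = layer_map(codeword)   # two layers of two elements each
tx = precode(layers)           # layer 0 conjugated, layer 1 unchanged
```

Each precoded layer then feeds its own RE mapping and IFFT/CP stage, and the precoding pattern also selects which subset of array elements is energized, as paragraph (79) notes.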
(80) Referring back to
(81) As shown, the orthogonal sequence generated is a well-known orthogonal waveform (OFDM), whereby RE mapping (circuits 212_0, 212_1, …, 212_L) followed by a cyclic prefix IFFT (circuits 214_1, 214_2, …, 214_L) is performed for each of the L layers; L is an integer equal to 1 or greater.
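One layer of this DFT-spread OFDM chain (DFT precoding of the PN sequence, RE mapping, IFFT, cyclic prefix) can be sketched as below. The FFT size, subcarrier offset, and CP length are hypothetical values chosen for illustration:

```python
import numpy as np

def dft_s_ofdm_symbol(seq, n_fft=64, first_sc=8, cp_len=16):
    """One DFT-spread-OFDM symbol: DFT-precode the time-domain PN
    sequence, map it onto contiguous resource elements, take the
    IFFT, and prepend a cyclic prefix."""
    freq = np.fft.fft(seq) / np.sqrt(len(seq))      # DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + len(seq)] = freq       # RE mapping
    time = np.fft.ifft(grid) * np.sqrt(n_fft)       # OFDM modulation (IFFT)
    return np.concatenate([time[-cp_len:], time])   # cyclic prefix

# Drive the chain with a short Zadoff-Chu PN sequence (u = 3, N_ZC = 13).
n = np.arange(13)
seq = np.exp(-1j * np.pi * 3 * n * (n + 1) / 13)
sym = dft_s_ofdm_symbol(seq)
```

The cyclic prefix is simply a copy of the symbol tail, which is what makes the waveform robust to the time delays discussed elsewhere in the disclosure.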
(82) The MIMO processed PN sequence modulated OFDM sequence is applied to a digital beam former 216, which combines phase adjusted orthogonal sequence elements in a manner consistent with the array energy emitter elements being used to emit a transmit signal in a certain direction. In addition to applying various phase shifts to the OFDM waveform, the digital beam former 216 is operated such that the proper energy emitting elements of array 226 are energized in accordance with a desired target within the FoV of the imaging sensor that is to be illuminated.
(83) It is noted that each of the L layers is associated with a particular energy emitting element or a particular group of energy emitting elements of the array 226. The antenna array consists of multiple sub-arrays, each with phase shifts between the antenna elements of the sub-array. That is, the digital beam former 216 selects how much phase adjustment is to be made for each of the orthogonal sequence elements of the different layers corresponding to the antenna sub-array. In this manner, certain energy emitter elements of the array will experience more phase change than others. Such phase mapping by the DBF TX circuit 216 may affect the direction, size, shape, and amplitude of the beam illuminating a particular area of the FoV of the array 226.
(84) The output of the DBF TX circuit 216 is applied to DAC 218 whose output modulates energy source 220 via the modulator 222. The output of the modulator 222 is applied to an analog beam forming circuit 224A. The analog beam forming circuit 224A controls the amount of energy to be applied to each energy emitting element of each of the L layers and provides a constant phase offset to each of the transmit signals being emitted by the array 226.
(87) Referring to
(88) Still referring to
(89) Thus, each of the L digital beam formers (216_1, 216_2, …, 216_L) applies M unique phase shifts (i.e., ϕ) to each of B sub-array mappings for L layers and uniquely combines these mappings to generate B beams. Consequently, there are L×B beams, each of which is converted to an analog signal by a DAC 218_1, 218_2, …, 218_L that is then applied to a modulator 222_1, 222_2, …, 222_L. As discussed above, L is the number of layers, B is the number of beams, and M is the number of phase shifts, ϕ, per sub-array mapping. In general, the value of B×M does not exceed the total number of antenna elements in the array.
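The per-element phase shifts that steer one sub-array beam can be sketched as below, assuming a uniform linear sub-array with half-wavelength spacing; this is a generic beamforming illustration, not the disclosure's specific phase mapping, and all parameter values are hypothetical.

```python
import numpy as np

def steer_weights(m_elements, theta_deg, spacing=0.5):
    """Per-element phase weights steering a uniform linear sub-array
    of m_elements toward angle theta (spacing in wavelengths)."""
    m = np.arange(m_elements)
    return np.exp(-2j * np.pi * spacing * m * np.sin(np.radians(theta_deg)))

def beam_pattern(weights, theta_deg, spacing=0.5):
    """Normalized array response of the weighted sub-array at theta."""
    m = np.arange(len(weights))
    steering = np.exp(2j * np.pi * spacing * m * np.sin(np.radians(theta_deg)))
    return abs(np.dot(weights, steering)) / len(weights)

# Steer an 8-element sub-array toward 20 degrees: the response peaks
# in the steered direction and falls off elsewhere.
w = steer_weights(8, theta_deg=20.0)
```

In the architecture above, each of the L layers would apply its own set of M such phase shifts to B sub-array mappings, producing the L×B beams.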
(90) The outputs of the modulators are applied to an analog beam former 224A which is then coupled to one or more energy emitter elements of the array (not shown in
(91) It should be noted that communication systems that use OFDM transmit data by multiplexing the data in both time and frequency. Thus, any transmission is specifically identified by the intersection of a defined time period and a defined frequency band; see
(92) The transmitted signal may be a PN sequence being used to sense the scenes and generate 3D images of such scenes through the use of an image formation algorithm as discussed throughout this disclosure. The PN sequence may be multiplexed such that consecutive PN sequences forming beams may be transmitted along with the data of the communication system as shown in
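A toy sketch of such time-frequency multiplexing follows; the grid dimensions, the symbol position of the PN sequence, and the QPSK data content are all illustrative assumptions:

```python
import numpy as np

# Toy OFDM resource grid: rows = subcarriers (frequency), columns = symbols (time).
N_SC, N_SYM = 12, 14                       # assumed grid dimensions
grid = np.zeros((N_SC, N_SYM), dtype=complex)

# Hypothetical allocation: a sensing PN sequence occupies the first symbol,
# and communication data fills the remaining time-frequency resources.
n = np.arange(N_SC)
grid[:, 0] = np.exp(1j * np.pi * 5 * n**2 / N_SC)   # PN (CAZAC-style) column
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
grid[:, 1:] = np.random.default_rng(1).choice(qpsk, size=(N_SC, N_SYM - 1))

# Each resource element is identified by a (frequency, time) intersection.
print(grid.shape)
```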
(93) Referring back to
(94) The received signal, after having been beam formed by ABF 224B, is applied to an energy detector and demodulator circuit 228 which stores a sum of the energy received and demodulates the received signal to a baseband signal taking into account the relative speed, v, between the 3D imaging sensor of the present disclosure and the scene being illuminated by the 3D imaging sensor.
(95) The circuit 228 performs the operations of received signal energy detection and demodulation. The received signal may be a signal transmitted by the 3D imaging sensor that is reflected by an object, structure or other item in a scene. The demodulation uses a local signal having frequency f_0, which is equal to the carrier frequency of the signal transmitted by the 3D imaging sensor 200 to illuminate a scene within the field-of-view of the 3D imaging sensor.
(96) The local signal is shown being applied to the circuit 228, but more particularly said local signal is being applied to the demodulator portion of the circuit 228. Variables of the mathematical expression for the local signal include c, which is the speed of light, t, which stands for time and v, which is the velocity at which a scene is moving relative to the imaging sensor of the present disclosure (or the velocity at which the sensor is moving relative to the scene). The output of the circuit 228 is an analog baseband signal that is converted to a digital form by the ADC 230, which transfers said digital signal to a computational imaging circuit 232.
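The velocity-aware demodulation described above can be sketched as follows, assuming a simple two-way Doppler model in which the local oscillator frequency is offset by approximately 2v/c; the carrier frequency, sample rate, and speed below are arbitrary assumptions:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def demodulate(rx, t, f0, v):
    """Mix the received signal to baseband with a local oscillator whose
    frequency is adjusted for the two-way Doppler shift (~2*v/c)."""
    f_local = f0 * (1 + 2 * v / C)
    return rx * np.exp(-1j * 2 * np.pi * f_local * t)

# Assumed values: sample rate, carrier frequency, relative speed.
fs, f0, v = 1e6, 77e9, 30.0
t = np.arange(64) / fs
rx = np.exp(1j * 2 * np.pi * f0 * (1 + 2 * v / C) * t)  # ideal Doppler-shifted return
bb = demodulate(rx, t, f0, v)
print(np.allclose(bb, 1.0))  # ideally demodulated to DC
```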
(97) The computational imaging circuit 232 performs at least an image formation algorithm for making adjustments to the resultant phase shift experienced by transmitted signals (transmitted by the 3D imaging sensor of the present disclosure) reflected or backscattered from objects, structures or items of a scene. The image formation algorithm also generates voxels for generating 3D images of objects, structures or other items being illuminated (with transmit signals) by the transmitter of the 3D imaging sensor of the present disclosure. The transmit signals are reflected or backscattered by the objects, structure or other items.
(98)
(99) Referring to
(100) Although not shown, it is understood that the signal being reflected (or backscattered) by the object shown in
(101) As shown in
(102) In particular, the transmit signal experiences time delays, frequency shifts and phase shifts. Without taking these various phase shifts, frequency shifts, and time delays into account, the resulting voxel (e.g., volume pixel) would be calculated to, and placed at, the wrong location in the 3D image being generated because of the resultant phase shift. A resulting image having one or more slightly misplaced voxels would be unclear, blurry and generally not sharp.
(103) Consequently, the computational imaging circuit 232 (of
(104) Each of the transmit signals emitted by an emitter element of the array 226, after having been reflected or backscattered, experiences a resultant phase shift depending on the range, R. The range is the distance from the array emitter element to the target being illuminated. Thus, the range is a function of the location of the array emitter element emitting the transmit signal. The round-trip time for each of the transmit signals is estimated using coherent processing. As can be seen in
(105) For a 2D array or for a 1D array that operates as a virtual 2D array, the location can be expressed in terms of an (x, y) coordinate system. The range R can be expressed in terms of the target location r(S) where S represents the emitter element position.
(106) The resultant phase shift experienced by the reflected or backscattered transmit signal and therefore the phase correction that is to be made to the reflected transmit signal is given by the following formula:
(107)

C(S) = e^{j\frac{4\pi r(S)}{\lambda}}

where λ is the wavelength of the reflected transmit signal and r(S) is the range from the emitter element at position S to the target point.
(108) The computational imaging performed by the circuit 232 applies a phase correction whose value is calculated using C(S). The phase correction is the inverse of the resultant phase shift experienced by the transmit signal. Multiple factors contribute to the resultant phase shift experienced by each transmit signal. For example, the resultant phase shift can be the result of any one or any combination of frequency translation, time delay and actual phase shift experienced by the transmit signal.
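A minimal numerical sketch of such a phase correction follows, assuming the resultant round-trip phase shift takes the common form e^{-j4πr/λ} so that the correction is its inverse e^{+j4πr/λ}; the wavelength and range values are arbitrary assumptions:

```python
import numpy as np

def phase_correction(r, wavelength):
    """Inverse of an assumed round-trip phase shift e^{-j*4*pi*r/lambda}
    experienced by a transmit signal reflected from range r."""
    return np.exp(1j * 4 * np.pi * r / wavelength)

wavelength, r = 0.004, 1.2345          # assumed wavelength (m) and range (m)
echo_phase = np.exp(-1j * 4 * np.pi * r / wavelength)   # modeled resultant shift
corrected = echo_phase * phase_correction(r, wavelength)
print(np.allclose(corrected, 1.0))     # correction cancels the modeled shift
```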
(109) The computation of C(S) is simplified when the object being illuminated by the 3D imaging sensor of the present disclosure is located in the “far field” of the array. As will be explained infra, a “far field” target is located at a distance r that meets certain requirements. For ease of description, it is assumed that the transmit and receive arrays are the same, but this is not a requirement for the image formation algorithm. As long as the geometry of the transmit and receive arrays are known, this “far field” calculation can also be used for co-located or non-co-located transmit-receive arrays.
(110) The object and a reflection of a signal to the array 226 is shown in
(111) The reflectivity density of the target point can be modeled as ρ(ζ, η, R). The reflectivity density is the reflection of the signal from the target point per infinitesimal volume dζ dη dR. Thus, the reflectivity density is defined over the entire 3D region where an object target point is located. To calculate the total reflection in the area around the target point, a triple or 3D integral of the signal from all reflections in the volume occupied by the target point is performed. The transmit signal (originating from the transmitter of the 3D imaging sensor having the array 226) is modeled as S(t) = e^{j(2\pi f_0 t + \phi_0)}, where f_0 is the carrier frequency and \phi_0 is a constant phase offset.
(112) For simplicity of description and for a simpler derivation, the constant phase offset \phi_0 is assumed to be equal to 0; i.e., \phi_0 = 0.
(113) The return signal from the target at (ζ, η, R) for a particular instant of time, t, is obtained as:
(114)

s_r(t) = \iiint \rho(\zeta, \eta, R)\, e^{j 2\pi f_0 \left(t - \frac{2r}{c}\right)}\, d\zeta\, d\eta\, dR \qquad (1)
(115) where r = \sqrt{(x-\zeta)^2 + (y-\eta)^2 + (R+vt)^2} is the distance from the array 226 to the target. Thus, as seen in the above equation for r, r is a function of several variables (including x, y, ζ, η), i.e., a function of transmitter element coordinates and target point coordinates. In the far field, where R^2 \gg (vt)^2 + (x-\zeta)^2 + (y-\eta)^2, r can be approximated as
(116)

r \approx (R + vt) + \frac{(x-\zeta)^2 + (y-\eta)^2}{2(R + vt)}
and c is the speed of light. The term "far field" thus means that the distance R satisfies the relationship R^2 \gg (vt)^2 + (x-\zeta)^2 + (y-\eta)^2.
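The quality of a far-field approximation of r can be checked numerically; the sketch below assumes the common binomial expansion r ≈ (R+vt) + [(x−ζ)² + (y−η)²]/(2(R+vt)) and arbitrary geometry values:

```python
import math

# Assumed geometry: aperture coordinates (x, y), target offsets (zeta, eta),
# range R, relative speed v, and time t.
x, y, zeta, eta = 0.05, -0.03, 0.2, 0.1
R, v, t = 50.0, 10.0, 1e-3

exact = math.sqrt((x - zeta)**2 + (y - eta)**2 + (R + v*t)**2)
approx = (R + v*t) + ((x - zeta)**2 + (y - eta)**2) / (2 * (R + v*t))
print(abs(exact - approx))  # error is negligible compared with R
```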
(117) Equation (1) can be written as
(118)

s_r(t) \approx e^{j 2\pi f_0 t}\, e^{-j\frac{4\pi (R + vt)}{\lambda}}\, e^{-j\frac{2\pi (x^2 + y^2)}{\lambda R}} \iint \rho(\zeta, \eta, R)\, e^{j\frac{4\pi (x\zeta + y\eta)}{\lambda R}}\, d\zeta\, d\eta

where the quadratic target-phase term e^{-j 2\pi(\zeta^2 + \eta^2)/(\lambda R)} is absorbed into \rho(\zeta, \eta, R),
which is a scaled version of the Fourier Transform of the reflectivity density ρ(ζ, η, R), i.e.,
(119)

s_r(t) \approx e^{j 2\pi f_0 t}\, e^{-j\frac{4\pi (R + vt)}{\lambda}}\, e^{-j\frac{2\pi (x^2 + y^2)}{\lambda R}}\, P\!\left(\frac{2x}{\lambda R}, \frac{2y}{\lambda R}, R\right) \qquad (2)
(120) P is a notation for the Fourier transform of the reflectivity density and λ is the wavelength of the signal. Thus, as shown in equation 2, the image at distance R is obtained by demodulating the received signal (using the circuit 228 of
(121)

e^{j\frac{2\pi (x^2 + y^2)}{\lambda R}}
(using the circuit 232 having multiplier 232A of
(122) Referring back to
(123) The output of the circuit 232 is transferred to a 2D FFT circuit 234, which converts said time domain received signal to the frequency domain, allowing frequency domain coherent detection of said received signal with the use of a pre-stored time domain sequence in a lookup table 252 that is converted to a frequency domain sequence by the DFT circuit 240. The time domain sequence in the lookup table 252 is the same time domain sequence used to construct the transmit signal that is now reflected (or backscattered) from an object in the field-of-view.
(124) A complex conjugate circuit 238 performs the complex conjugate operation on the frequency domain version of the sequence from the DFT circuit 240 to allow coherent detection with the use of multiplier 236 (multiplying two frequency domain sequences). The output of the multiplier 236 is then converted to the time domain by the IFFT circuit 242 to obtain a magnitude of the range R or squared magnitude of R (e.g., the circuit 244) for the received transmit signal and for the adjustments to be made to the (x, y) coordinates of the energy detector array element that received the reflected (or backscattered) transmit signal.
(125) The detected magnitude of the signal is then compared to a threshold by the threshold circuit 246 to determine the reflectivity density and the range value, r, (e.g., r=R) to the object from the received signal. The threshold is set such that the false alarm probability is maintained at a set statistical level, to avoid false positive images. The location (in x, y coordinates) and the distance, r, from the transmit point are thus known.
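The frequency-domain coherent detection chain described above (FFT, conjugate multiply, IFFT, magnitude, threshold) can be sketched as follows; the reference sequence, its length, and the threshold value are illustrative assumptions:

```python
import numpy as np

def detect_range_bin(received, reference, threshold):
    """Frequency-domain coherent detection: multiply the received spectrum by
    the conjugate spectrum of the stored reference, IFFT back to the time
    domain, and threshold the correlation magnitude to locate the echo delay."""
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(reference)))
    mag = np.abs(corr)
    peak = int(np.argmax(mag))
    return peak if mag[peak] > threshold else None

N, u = 64, 5
n = np.arange(N)
ref = np.exp(-1j * np.pi * u * n**2 / N)   # even-length Chu-style CAZAC reference
echo = np.roll(ref, 9)                      # reflected signal delayed by 9 samples
print(detect_range_bin(echo, ref, threshold=N / 2))  # → 9
```

Because the CAZAC reference has a flat power spectrum, the circular correlation collapses to a single peak at the echo delay.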
(126) A tracking circuit 250 serves as a storage unit for the targets detected by the circuits 242, 244, and 246 (where the threshold may be set based on a constant false-alarm rate (CFAR) criterion) and for the obtained images, and it stores various versions of the same image for further processing if needed. A post processing circuit 248 monitors the quality of the demodulated baseband signal from the computational imaging circuit 232 to determine if the type of processing being performed needs to be adjusted to obtain improved signal quality.
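A minimal cell-averaging CFAR sketch follows; the training/guard cell counts and the scale factor are illustrative assumptions, and the disclosure does not specify a particular CFAR variant:

```python
import numpy as np

def ca_cfar_threshold(power, idx, n_train=8, n_guard=2, scale=4.0):
    """Cell-averaging CFAR: the threshold for cell idx is a scaled mean of the
    surrounding training cells (guard cells excluded), which holds the
    false-alarm probability roughly constant as the noise level varies."""
    left = power[max(0, idx - n_guard - n_train):idx - n_guard]
    right = power[idx + n_guard + 1:idx + n_guard + 1 + n_train]
    return scale * np.concatenate([left, right]).mean()

rng = np.random.default_rng(2)
power = rng.exponential(1.0, 128)   # noise-only power profile
power[64] = 40.0                    # a strong target return
thr = ca_cfar_threshold(power, 64)
print(power[64] > thr)
```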
(127) In one example, the post processing circuit 248 can be designed and configured to detect when interference from other signals, including signals from other 3D imaging sensors, occurs. When interference is detected by the post processing circuit 248, the processor (not shown in
(128) In one example, the sequence generator 202 can be designed and/or configured to generate PN sequences (or other sequences) having different formats. The sequence can be assigned randomly, follow a pre-determined pattern, or can be changed adaptively depending on the measured interference level.
(129)
(130) Referring now to
(131) Similarly, the processor 110 of
(132) In step 500 of the method performed by the 3D imaging sensor of the present disclosure, a PN modulated MIMO processed digital waveform is generated. The PN sequence generator circuit 202 has stored therein various formats for PN sequences. One example of a PN sequence is the Zadoff-Chu sequence discussed supra. The Zadoff-Chu sequence is a constant amplitude zero autocorrelation (CAZAC) type of sequence and various formats of the Zadoff-Chu sequence can be stored in the sequence generator 202 of
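The CAZAC properties of a Zadoff-Chu sequence can be verified numerically; the root and length below are arbitrary choices satisfying gcd(u, N) = 1:

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N with root u, gcd(u, N) = 1."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

zc = zadoff_chu(u=5, N=63)

# CAZAC property 1: constant amplitude.
print(np.allclose(np.abs(zc), 1.0))
# CAZAC property 2: zero periodic autocorrelation at all non-zero lags
# (computed via the power spectrum).
acorr = np.fft.ifft(np.abs(np.fft.fft(zc))**2)
print(np.allclose(np.abs(acorr[1:]), 0.0, atol=1e-8))
```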
(133) A discrete Fourier transform (DFT) of the time domain sequence is performed to generate a frequency domain PN sequence. As shown in
(134) The output of DFT 204 is applied to the MIMO processing circuits. The MIMO processing operations comprise codeword mapping (the circuit 206) followed by a layer mapping (the circuit 208) and precoding (the circuit 210). The output of circuit 210 is thus a PN sequence modulated MIMO processed digital waveform.
(135) In step 502, the PN sequence modulated MIMO processed digital waveform is applied to an orthogonal sequence generator. The orthogonal sequence generator shown in
(136) In one example, such orthogonal digital waveform generators comprise orthogonal CP-less OFDM, filter bank multi-carrier (FBMC) waveform generators, generalized frequency division multiplexing (GFDM) waveform generators, and resource spread multiple access (RSMA) waveform generators.
(137) The generated orthogonal digital waveform is modulated by the PN sequence modulated and MIMO processed digital waveform from the MIMO processing output (i.e., from output of pre-coding circuit 210 of
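A minimal CP-less DFT-spread OFDM sketch consistent with the steps above follows; the FFT size, subcarrier offset, and block length are illustrative assumptions:

```python
import numpy as np

def dft_s_ofdm_symbol(block, n_fft=64, first_sc=8):
    """CP-less DFT-spread OFDM: DFT-precode the input block, map it onto a
    contiguous set of subcarriers, then IFFT to the time domain."""
    spread = np.fft.fft(block) / np.sqrt(len(block))     # DFT precoding
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_sc:first_sc + len(block)] = spread        # subcarrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)            # OFDM modulation

n = np.arange(12)
pn_block = np.exp(-1j * np.pi * 5 * n**2 / 12)           # CAZAC-style PN block
symbol = dft_s_ofdm_symbol(pn_block)
print(len(symbol))  # one n_fft-sample time-domain symbol
```

With the unitary scalings used here, the block energy is preserved through precoding, mapping, and modulation.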
(138) In step 504, the digitally beam formed digital waveform is used at an input of a modulator (e.g., modulator 222 of
(139) In one embodiment, the PN sequence modulated and MIMO processed digital waveform may be applied directly to a digital beam former instead of being applied first to an orthogonal digital waveform generator (i.e., an orthogonal sequence generator). In another embodiment, for both
(140) In step 506, the transmit signal is emitted by one or more energy emitter elements of the array of the 3D imaging sensor of the present disclosure. The transmit signal may be analog beam formed prior to being emitted by the array. In one embodiment, the analog beam former applies a non-varying phase shift onto the transmit signal prior to emission. The 3D imaging sensor may be illuminating a scene, in which case the emitted transmit signal, or at least a portion thereof, may be backscattered or reflected back to the array of the 3D imaging sensor. The backscatter or reflection is due to objects or structures in the FoV of the array on which the emitted transmit signal impinges. The array receiving the reflected transmit signal may be the same array used to emit the transmit signal, or it may be a separate array configured to detect energy in various frequency bands or wavelength ranges.
(141) In step 508, the reflected or backscattered transmit signal is detected by the array, and in particular by at least one energy detector element of the array. The detecting array may be the same array used to emit the transmitted signal, or it may be a separate array configured with energy detector elements. The reflected energy may be detected and analog beam formed by the 3D imaging sensor; the receiving analog beam former applies a non-varying phase shift to the received reflected transmit signal.
(142) In step 510, the received signal is converted to a baseband signal form through a demodulation operation and then converted to a digital signal by ADC (e.g., ADC 230 of
(143) Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
(144) None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claims scope. The scope of patented subject matter is defined only by the claims. Moreover, none of the claims are intended to invoke 35 U.S.C. § 112(f) unless the exact words “means for” are followed by a participle.