DIFFERENTIAL SENSING FOR JOINT COMMUNICATIONS AND SENSING

20250293756 · 2025-09-18

    Inventors

    Cpc classification

    International classification

    Abstract

    A device may estimate targets and angle-of-arrival (AoA) and angle-of-departure (AoD) pairs for the targets based on a sensing-aware beam-forming (SABF) signal and a sensing-aware beam-nulling (SABN) signal received from another device. The device may perform post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs, and may estimate path loss values for paths of the targets. The device may determine positions of the targets based on the AoA and AoD pairs, and may perform one or more actions based on the positions of the targets. The device may receive communication pilots from the other device, and may determine transmitter calibration coefficients for the other device and receiver calibration coefficients for the device based on the communication pilots.

    Claims

    1. A method, comprising: estimating, by a device, targets and angle-of-arrival (AoA) and angle-of-departure (AoD) pairs for the targets based on a sensing-aware beam-forming (SABF) signal and a sensing-aware beam-nulling (SABN) signal received from another device; performing, by the device, post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs; estimating, by the device, path loss values for paths of the targets; determining, by the device, positions of the targets based on the AoA and AoD pairs; and performing, by the device, one or more actions based on the positions of the targets.

    2. The method of claim 1, wherein estimating the targets and the AoA and AoD pairs for the targets comprises: calculating an eigenvalue decomposition of difference of autocorrelation matrices based on the SABF signal and the SABN signal; estimating a total quantity of positive eigenvalues of the eigenvalue decomposition of difference; calculating angular spectrums for the AoAs based on the total quantity of positive eigenvalues; and estimating the AoA and AoD pairs based on the angular spectrums for the AoAs.

    3. The method of claim 1, wherein performing, by the device, post-processing of the AoA and AoD pairs comprises one or more of: eliminating outliers for the AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs; eliminating a line-of-sight path for the AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs; or merging targets with substantially similar AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs.

    4. The method of claim 1, wherein determining the positions of the targets based on the AoA and AoD pairs comprises: utilizing a triangulation technique to determine the positions of the targets based on the AoA and AoD pairs.

    5. The method of claim 1, wherein performing the one or more actions comprises one or more of: monitoring traffic based on the positions of the targets; identifying one or more parking spots based on the positions of the targets; detecting one or more vehicles based on the positions of the targets; or detecting one or more pedestrians crossing one or more streets based on the positions of the targets.

    6. The method of claim 1, wherein performing the one or more actions comprises one or more of: counting a quantity of people within an area based on the positions of the targets; detecting one or more unidentified drones based on the positions of the targets; or detecting a presence of people within one or more geo-fenced areas of a factory based on the positions of the targets.

    7. The method of claim 1, wherein performing the one or more actions comprises one or more of: tracking one or more moving objects in a factory based on the positions of the targets; or preventing collisions between one or more of autonomous vehicles, robots, or people.

    8. A device, comprising: one or more memories; and one or more processors, coupled to the one or more memories, configured to: estimate targets and angle-of-arrival (AoA) and angle-of-departure (AoD) pairs for the targets based on a sensing-aware beam-forming (SABF) signal and a sensing-aware beam-nulling (SABN) signal received from another device; perform post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs; estimate path loss values for paths of the targets; utilize a triangulation technique to determine positions of the targets based on the AoA and AoD pairs; and perform one or more actions based on the positions of the targets.

    9. The device of claim 8, wherein the one or more processors are further configured to: receive communication pilots from the other device; determine transmitter calibration coefficients for the other device and receiver calibration coefficients for the device based on the communication pilots; and calibrate a receiver antenna array of the device with the receiver calibration coefficients.

    10. The device of claim 9, wherein the one or more processors are further configured to: provide the transmitter calibration coefficients to the other device to cause the other device to calibrate a transmitter antenna array of the other device with the transmitter calibration coefficients.

    11. The device of claim 9, wherein the transmitter calibration coefficients and the receiver calibration coefficients are determined based on a line-of-sight path between a transmitter antenna array of the other device and the receiver antenna array.

    12. The device of claim 8, wherein the SABF signal is generated based on a transmit precoder matrix that is determined based on an ideal sensing beam pattern that maximizes a signal power in a transmitter target angular region.

    13. The device of claim 8, wherein the SABN signal is generated based on a transmit precoder matrix determined based on an ideal sensing beam pattern that minimizes a signal power in a transmit target angular region.

    14. The device of claim 8, wherein the one or more processors are further configured to: cause performance of a beam sweeping operation to scan a set of transmit target angular regions and to detect the targets.

    15. A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: estimate an angle-of-arrival (AoA) and angle-of-departure (AoD) pair for a target based on a sensing-aware beam-forming (SABF) signal and a sensing-aware beam-nulling (SABN) signal received from another device; perform post-processing of the AoA and AoD pair to modify an estimation accuracy for the AoA and AoD pair; estimate a path loss value for a path of the target; determine a position of the target based on the AoA and AoD pair; and perform one or more actions based on the position of the target.

    16. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to estimate the AoA and AoD pair for the target, cause the device to: calculate an eigenvalue decomposition of difference of autocorrelation matrices based on the SABF signal and the SABN signal; estimate a total quantity of positive eigenvalues of the eigenvalue decomposition of difference; calculate an angular spectrum for the AoA based on the total quantity of positive eigenvalues; and estimate the AoA and AoD pair based on the angular spectrum for the AoA.

    17. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to perform post-processing of the AoA and AoD pair, cause the device to: eliminate outliers for the AoA and AoD pair to modify the estimation accuracy for the AoA and AoD pair; eliminate a line-of-sight path for the AoA and AoD pair to modify the estimation accuracy for the AoA and AoD pair; or merge the target with substantially similar AoA and AoD pairs to modify the estimation accuracy for the AoA and AoD pair.

    18. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to determine the position of the target based on the AoA and AoD pair, cause the device to: utilize a triangulation technique to determine the position of the target based on the AoA and AoD pair.

    19. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to: receive communication pilots from the other device; determine transmitter calibration coefficients for the other device and receiver calibration coefficients for the device based on the communication pilots; and calibrate a receiver antenna array of the device with the receiver calibration coefficients.

    20. The non-transitory computer-readable medium of claim 19, wherein the one or more instructions further cause the device to: provide the transmitter calibration coefficients to the other device to cause the other device to calibrate a transmitter antenna array of the other device with the transmitter calibration coefficients.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0005] FIGS. 1A-1I are diagrams of an example associated with differential sensing for joint communications and sensing.

    [0006] FIG. 2 is a diagram of an example environment in which systems and/or methods described herein may be implemented.

    [0007] FIG. 3 is a diagram of example components of one or more devices of FIG. 2.

    [0008] FIG. 4 is a flowchart of an example process for differential sensing for joint communications and sensing.

    DETAILED DESCRIPTION

    [0009] The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.

    [0010] In a joint communications and sensing (JCAS) system, the same radio frequency (RF) signal is used for communicating with user equipments (UEs) and for sensing nearby targets (e.g., objects, people, and/or the like). A JCAS system may include sensing modes, such as a base station monostatic sensing mode, a base station bistatic sensing mode, a base station-to-UE bistatic sensing mode, a UE-to-base station bistatic sensing mode, a UE monostatic sensing mode, and a UE bistatic sensing mode. In general, a JCAS transmitter (Tx) and sensing receiver (Rx) are at the same location in monostatic sensing modes, which allows for rapid and easy information exchange between the JCAS transmitter and sensing receiver. On the other hand, there is strong self-interference between the JCAS transmitter and sensing receiver due to signal coupling from the JCAS transmitter to the sensing receiver, as they operate with the same time and frequency resources. The bistatic sensing mode does not suffer from self-interference, but an instantaneous information exchange may not be possible between the JCAS transmitter and sensing receiver, as they are located separately.

    [0011] Position estimation (e.g., via a delay calculation and/or an angle-of-departure (AoD)/angle-of-arrival (AoA) calculation), velocity estimation (e.g., via a Doppler frequency shift calculation), and radar cross section (RCS) estimation (e.g., via a path loss calculation) can be performed as sensing operations at the sensing receiver. The delay calculation requires a high bandwidth for good precision, which may be problematic when only a small portion of a bandwidth is allocated. Furthermore, a complexity of a cross-correlation operation (e.g., required for the delay calculation) is high when the bandwidth is high. The Doppler frequency shift calculation needs to be very accurate to estimate velocity with high precision, and is sensitive to frequency errors between local oscillators of the JCAS transmitter and sensing receiver. Furthermore, current communication waveforms may not enable an accurate Doppler calculation. It is also difficult to achieve very good frequency synchronization between the JCAS transmitter and sensing receiver in the bistatic sensing mode.

    [0012] To estimate the AoAs of multiple targets, a fixed transmit beam pattern (e.g., an omnidirectional beam or a directional beam designed according to target AoD information) can be adjusted at the JCAS transmitter, and super-resolution algorithms, such as multiple signal classification (MUSIC), can be applied to a received signal autocorrelation matrix at the sensing receiver. However, the number of AoAs that can be estimated is bounded by a rank of the transmitted signal. In JCAS systems, the rank becomes the number of data streams transmitted to UEs, which may be limited depending on the scenario. In general, there is a strong line-of-sight path between the JCAS transmitter and sensing receiver, which makes target position estimation more difficult because reflected signal paths from targets have less power compared to the line-of-sight path. Finally, instantaneous information exchange between the JCAS transmitter and sensing receiver may not be available due to the bistatic sensing mode.

    [0013] Therefore, current techniques for sensing targets in a JCAS system consume computing resources (e.g., processing resources, memory resources, communication resources, and/or the like), networking resources, and/or the like associated with generating poor position estimates due to a limited number of estimated AoAs, generating poor position estimates due to the strong line-of-sight path between the JCAS transmitter and sensing receiver, decreasing resources for communications due to position estimating, performing incorrect actions based on poor position estimates, and/or the like.

    [0014] Some implementations described herein relate to a base station (e.g., a sensing receiver) that provides differential sensing for joint communications and sensing. For example, the sensing receiver may estimate targets and AoA and AoD pairs for the targets based on a sensing-aware beam-forming (SABF) signal and a sensing-aware beam-nulling (SABN) signal received from a transmitter base station. The sensing receiver may perform post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs, and may estimate path loss values for paths of the targets. The sensing receiver may determine positions of the targets based on the AoA and AoD pairs, and may perform one or more actions based on the positions of the targets. The sensing receiver may receive communication pilots from the transmitter base station, and may determine transmitter calibration coefficients for another base station and receiver calibration coefficients for the base station based on the communication pilots. The sensing receiver may calibrate a receiver antenna array of the sensing receiver with the receiver calibration coefficients, and may provide the transmitter calibration coefficients to the transmitter base station to cause the transmitter base station to calibrate a transmitter antenna array of the transmitter base station with the transmitter calibration coefficients.

    [0015] In this way, the sensing receiver provides differential sensing for joint communications and sensing. For example, a transmitter base station of a JCAS system may utilize sweeping for transmitter sensing beams with sensing-aware beam-forming and beam-nulling operations. A sensing receiver of the JCAS system may apply an eigenspace analysis to a difference of autocorrelation matrices obtained from the sensing-aware beam-forming and beam-nulling operations. The sensing receiver may detect targets and may estimate an AoA and AoD pair for each detected target. The sensing receiver may perform a path loss estimation for each target path (e.g., which can be used to estimate RCS values), and may estimate a position of each detected target. The sensing receiver may also utilize communication pilots (e.g., demodulation reference signals (DMRS)), received from the transmitter base station, to calculate calibration coefficients for antenna arrays of the sensing receiver and the transmitter base station. Thus, the sensing receiver conserves computing resources, networking resources, and/or the like that would otherwise have been consumed by generating poor position estimates due to a limited number of estimated AoAs, generating poor position estimates due to the strong line-of-sight path between the JCAS transmitter and sensing receiver, decreasing resources for communications due to position estimating, performing incorrect actions based on poor position estimates, and/or the like.

    [0016] FIGS. 1A-1I are diagrams of an example 100 associated with differential sensing for joint communications and sensing. As shown in FIGS. 1A-1I, example 100 includes a transmitter (Tx) base station, a sensing receiver (Rx), UEs (e.g., UE 1, UE 2, and UE 3), a core network, and targets (e.g., target 1 and target 2). Further details of the base station, the UEs, and the core network are provided elsewhere herein.

    [0017] FIG. 1A depicts an example environment with three UEs and sensing paths between the transmitter base station and the sensing receiver. The environment of FIG. 1A may include a bistatic JCAS system where a downlink communication signal (e.g., a data signal and a pilot signal) is also used for sensing purposes. The sensing receiver may be separate from the transmitter base station in order to perform sensing operations (e.g., target detection, AoA estimation, position estimation, and/or the like). A dedicated connection (e.g., either cable or wireless and via the core network) may be provided between the transmitter base station and the sensing receiver for information exchange. As used herein, matrices and vectors are denoted by uppercase and lowercase bold letters; $\mathrm{tr}(\cdot)$, $\mathbb{E}[\cdot]$, $\|\cdot\|$, and $(\cdot)^{+}$ are the trace, expectation, Frobenius ($\ell_2$) norm, and pseudoinverse operators, respectively; $\mathbb{C}$ and $\mathbb{R}$ denote the sets of complex and real numbers, respectively; $[A]_{i,j}$ denotes the entry at the $i$-th row and the $j$-th column of the matrix $A$; $\exp(A)$ denotes the matrix satisfying $[\exp(A)]_{i,j} = e^{[A]_{i,j}}$ for all entries; $A \succeq 0$ indicates that the matrix $A$ is Hermitian and positive semi-definite; $I_n$ denotes the $n \times n$ identity matrix; and $\mathrm{vec}(A)$ is a column vector obtained by concatenating the columns of the matrix $A$ in consecutive order.

    [0018] In one example, there may be $K$ single-antenna UEs that simultaneously receive data from the transmitter base station, which has $N_t \geq K$ antennas, at the same time/frequency resource block in multi-user mode. Each UE may receive $T$ data samples in a resource block over which the communication channel is assumed to be constant. In this case, a signal received by the $k$-th UE can be written as:

    [00001]  $y_k = \sum_{\ell=1}^{K} (h_k^T w_\ell)\, s_\ell + z_k$,

    where $y_k \in \mathbb{C}^{T \times 1}$ is the received signal vector, $h_k \in \mathbb{C}^{N_t \times 1}$ is the channel vector between the transmitter base station and the $k$-th UE, $w_k \in \mathbb{C}^{N_t \times 1}$ is the precoder vector designed by the transmitter base station for the $k$-th UE, $s_k \in \mathbb{C}^{T \times 1}$ is the intended information samples for the $k$-th UE, and $z_k \in \mathbb{C}^{T \times 1}$ is the noise samples at the $k$-th UE receiver. It may be assumed that $z_k \sim \mathcal{CN}(0, \sigma_k^2 I_T)$, where $\sigma_k^2$ is an average noise power at the $k$-th UE, and that $\mathbb{E}[s_k s_\ell^H] = (1/T) I_T$ if $k = \ell$ and $\mathbb{E}[s_k s_\ell^H] = 0$ if $k \neq \ell$. The $1/T$ scaling ensures unity power for each block with $T$ samples. After defining augmented matrices, the received signal of all UEs may be calculated as:

    [00002]  $Y = HX + Z = HWS + Z$,

    where the matrices $Y \in \mathbb{C}^{K \times T}$, $H \in \mathbb{C}^{K \times N_t}$, $W \in \mathbb{C}^{N_t \times K}$, $S \in \mathbb{C}^{K \times T}$, and $Z \in \mathbb{C}^{K \times T}$ are formed so that the $k$-th rows of $Y$, $H$, $S$, and $Z$ are $y_k^T$, $h_k^T$, $s_k^T$, and $z_k^T$, respectively, and the $k$-th column of $W$ is $w_k$ for all $k$. For a perfect channel state information (CSI) case, it may be assumed that the channel matrix $H$ is perfectly known by the transmitter base station. In this case, the signal model may be written as:
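    The stacked signal model above can be illustrated with a short numerical sketch. The dimensions, random channel, seed, and power normalization below are illustrative assumptions, not values taken from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_t, T = 3, 8, 64  # UEs, transmit antennas, samples per block (illustrative)

# Random channel H (K x N_t), precoder W (N_t x K), symbols S (K x T)
H = (rng.standard_normal((K, N_t)) + 1j * rng.standard_normal((K, N_t))) / np.sqrt(2)
W = rng.standard_normal((N_t, K)) + 1j * rng.standard_normal((N_t, K))
W /= np.linalg.norm(W)  # normalize total transmit power
# E[s_k s_k^H] = (1/T) I_T, so each row of S has unity power over the block
S = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2 * T)
sigma = 0.01
Z = sigma * (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)

Y = H @ W @ S + Z  # row k of Y is the k-th UE's received block y_k^T
```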

    [00003]  $y_k = \sum_{\ell=1}^{K} (h_k^T w_\ell)\, s_\ell + z_k = \underbrace{(h_k^T w_k)\, s_k}_{\text{desired}} + \underbrace{\sum_{\ell \neq k} (h_k^T w_\ell)\, s_\ell}_{\text{interference}} + \underbrace{z_k}_{\text{noise}}.$

    A signal-to-interference-plus-noise ratio (SINR) for each user can be expressed as:

    [00004]  $\mathrm{SINR}_k = \dfrac{|h_k^T w_k|^2}{\mathbb{E}\!\left[\left|\sum_{\ell \neq k} (h_k^T w_\ell)\, s_\ell + z_k\right|^2\right]} = \dfrac{|h_k^T w_k|^2}{\sum_{\ell \neq k}^{K} |h_k^T w_\ell|^2 + \sigma_k^2}, \quad \forall k.$
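    The per-user SINR expression can be evaluated directly once $H$ and $W$ are fixed. As a minimal sketch, a zero-forcing precoder (one concrete design choice assumed here, not mandated by the disclosure) drives the interference term to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N_t = 3, 8
H = (rng.standard_normal((K, N_t)) + 1j * rng.standard_normal((K, N_t))) / np.sqrt(2)
sigma2 = np.full(K, 1e-3)  # per-UE noise powers sigma_k^2 (illustrative)

# Zero-forcing precoder: H @ W = I_K, so cross-user interference vanishes
W = np.linalg.pinv(H)

G = np.abs(H @ W) ** 2            # G[k, l] = |h_k^T w_l|^2
signal = np.diag(G)               # desired-signal powers
interference = G.sum(axis=1) - signal
sinr = signal / (interference + sigma2)
sinr_db = 10 * np.log10(sinr)
```

    With zero-forcing and $\sigma_k^2 = 10^{-3}$, each user's SINR is simply $1/\sigma_k^2$, i.e., 30 dB.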

    [0019] The sensing receiver may include an antenna array with $N_r$ elements, and there may be $L$ targets separated in space. An $i$-th target may have an AoD $\theta_{t,i}$, an AoA $\theta_{r,i}$, and a path gain $\alpha_i \in \mathbb{C}$ that includes a path loss, an amplitude drop due to reflection from the target, and a phase shift due to delay and frequency shift. A narrowband signal model may be utilized, where a propagation delay for the signal that travels between the transmitter base station and the sensing receiver is much smaller than a multiplicative inverse of a bandwidth of the signal. Under the narrowband signal assumption, the delay in the transmitted signal can be modeled as a phase shift, which is absorbed into the $\alpha_i$ coefficient. The AoD is measured with respect to the transmit antenna array at the transmitter base station, and the AoA is measured with respect to the receiver antenna array at the sensing receiver. A line-of-sight (LoS) path (e.g., path 0) may exist between the transmitter base station and the sensing receiver, with an AoD and AoA pair $(\theta_{t,0}, \theta_{r,0})$ and a path gain $\alpha_0$ that is not related to any target reflection. FIG. 1A shows an example scenario with three UEs, two targets, an LoS path (e.g., Path 0), and two reflection paths (e.g., Path 1 and Path 2).

    [0020] The received signal at the sensing receiver can be written as:

    [00005]  $Y_r = \sum_{i=0}^{L} \alpha_i\, a_r^{*}(\theta_{r,i})\, a_t^{H}(\theta_{t,i})\, W S + Z_r$,

    where $Z_r$ includes the noise samples at the sensing receiver, $a_t(\theta_{t,i}) = \exp(j 2\pi B_t u(\theta_{t,i}))$ and $a_r(\theta_{r,i}) = \exp(j 2\pi B_r u(\theta_{r,i}))$ are array steering vectors for the transmit antenna array with the $N_t$ elements and the receiver antenna array with the $N_r$ elements, respectively, $B_t \in \mathbb{R}^{N_t \times 3}$ and $B_r \in \mathbb{R}^{N_r \times 3}$ are matrices including the antenna positions (normalized by the wavelength) of the transmit and receiver antenna arrays, respectively, and $u(\theta) = [\sin\theta \ \ \cos\theta \ \ 0]^T$ is a unit vector for an azimuth angle $\theta$. It may be assumed that both the transmitter base station and the sensing receiver have a uniform linear array (ULA) with a half-wavelength spacing placed along the x-axis, and hence the transmit and receiver array steering vectors can be expressed as:

    [00006]  $a_t(\theta_{t,i}) = [1 \ \ e^{j\pi\sin\theta_{t,i}} \ \ e^{j2\pi\sin\theta_{t,i}} \ \cdots \ e^{j(N_t-1)\pi\sin\theta_{t,i}}]^T, \quad a_r(\theta_{r,i}) = [1 \ \ e^{j\pi\sin\theta_{r,i}} \ \ e^{j2\pi\sin\theta_{r,i}} \ \cdots \ e^{j(N_r-1)\pi\sin\theta_{r,i}}]^T$,
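    The half-wavelength ULA steering vector above is straightforward to compute; the helper name and the example angle below are illustrative assumptions:

```python
import numpy as np

def ula_steering(theta_rad: float, n_elem: int) -> np.ndarray:
    """Steering vector of a half-wavelength ULA on the x-axis:
    a(theta) = [1, e^{j*pi*sin(theta)}, ..., e^{j*(n_elem-1)*pi*sin(theta)}]^T."""
    return np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta_rad))

a = ula_steering(np.deg2rad(30.0), 8)  # angle of 30 degrees, 8 elements
```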

    which are functions of the azimuth angle only, by the symmetry of the ULA in elevation angles. The ULA assumption may simplify the analysis, and implementations described herein may be directly generalized to any array geometry. Since $\mathbb{E}[S S^H] = I_K$ and $\mathbb{E}[Z_r Z_r^H] = \sigma_r^2 I_{N_r}$, it may be determined that

    [00007]  $R_y = \mathbb{E}[Y_r Y_r^H] = A R A^H + \sigma_r^2 I_{N_r}$,

    where $R_y$ is an autocorrelation matrix of the received signal, and

    [00008]  $R = W W^H, \quad A = \sum_{i=0}^{L} A_i, \quad A_i = \alpha_i\, a_r^{*}(\theta_{r,i})\, a_t^{H}(\theta_{t,i}), \ \forall i.$

    The autocorrelation matrix $R_y$ may include information related to target AoDs/AoAs and path gains.
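    A minimal sketch of how $R_y = A R A^H + \sigma_r^2 I_{N_r}$ is assembled from the path matrices $A_i$ follows. The path angles, gains, dimensions, and the `ula_steering` helper are illustrative assumptions:

```python
import numpy as np

def ula_steering(theta, n):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

N_t, N_r, K = 8, 8, 3
rng = np.random.default_rng(2)
W = rng.standard_normal((N_t, K)) + 1j * rng.standard_normal((N_t, K))
R = W @ W.conj().T  # R = W W^H

# Paths: LoS (i = 0) plus two target reflections, each (AoD, AoA, gain alpha_i)
paths = [(np.deg2rad(0.0), np.deg2rad(0.0), 1.0),
         (np.deg2rad(25.0), np.deg2rad(-40.0), 0.3),
         (np.deg2rad(-35.0), np.deg2rad(50.0), 0.2)]
# A_i = alpha_i * a_r*(aoa) a_t^H(aod); the outer product of the conjugates gives a_r* a_t^H
A = sum(g * np.outer(ula_steering(aoa, N_r).conj(), ula_steering(aod, N_t).conj())
        for aod, aoa, g in paths)

sigma_r2 = 1e-3
R_y = A @ R @ A.conj().T + sigma_r2 * np.eye(N_r)  # expectation-form autocorrelation
```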

    [0021] The following terminology may be utilized herein for parameters: $N_t$ is a number of antennas at the transmitter base station, $N_r$ is a number of antennas at the sensing receiver, $K$ is a number of single-antenna UEs, $T$ is a number of data samples received by each UE, $y_k$ is a signal vector received by a $k$-th UE, $h_k$ is a channel vector between the transmitter base station and the $k$-th UE, $w_k$ is a precoder vector designed by the transmitter base station for the $k$-th UE, $s_k$ is an intended information sample vector for the $k$-th UE, $z_k$ is a noise sample vector at the $k$-th UE receiver, $\sigma_k^2$ is an average noise power at the $k$-th UE, $Y$ is a signal matrix received by all UEs, $H$ is a channel matrix between the transmitter base station and all UEs, $H_0$ is a channel matrix between the transmitter base station and the sensing receiver, $W$ is a precoder matrix designed by the transmitter base station for all UEs, $S$ is an intended information sample matrix for all UEs, $Z$ is a noise sample matrix at all UEs, $X$ is a transmit signal matrix, $Y_r$ is a signal matrix received by the sensing receiver, $Z_r$ is a noise sample matrix at the sensing receiver, $R_y$ is an autocorrelation matrix of the received signal at the sensing receiver, $\alpha_i$ is a path gain for an $i$-th path, $a_t(\theta_{t,i})$ is an array steering vector for the antenna array at the transmitter base station at AoD $\theta_{t,i}$, $a_r(\theta_{r,i})$ is an array steering vector for the antenna array at the sensing receiver at AoA $\theta_{r,i}$, $W_{t,1}$ is a precoder matrix designed by the transmitter base station for all UEs by maximizing a signal power at a specific direction, $W_{t,2}$ is a precoder matrix designed by the transmitter base station for all UEs by minimizing the signal power at a specific direction, $Y_{r,1}$ is a signal matrix received by the sensing receiver when the transmitter base station uses $W_{t,1}$, $Y_{r,2}$ is a signal matrix received by the sensing receiver when the transmitter base station uses $W_{t,2}$, $S_1$ is an intended information sample matrix for all UEs when the transmitter base station uses $W_{t,1}$, $S_2$ is an intended information sample matrix for all UEs when the transmitter base station uses $W_{t,2}$, $Z_{r,1}$ is a noise sample matrix at the sensing receiver when the transmitter base station uses $W_{t,1}$, $Z_{r,2}$ is a noise sample matrix at the sensing receiver when the transmitter base station uses $W_{t,2}$, $R_{y,1}$ is an autocorrelation matrix calculated at the sensing receiver when the transmitter base station uses $W_{t,1}$, $R_{y,2}$ is an autocorrelation matrix calculated at the sensing receiver when the transmitter base station uses $W_{t,2}$, $R_{y,b}$ is a difference of autocorrelation matrices calculated at the sensing receiver when the transmitter base station uses $W_{t,1}$ and $W_{t,2}$, obtained at a $b$-th beam sweeping stage, $\lambda_i$ is an $i$-th eigenvalue of $R_{y,1} - R_{y,2}$, $e_i$ is an eigenvector corresponding to the $i$-th eigenvalue of $R_{y,1} - R_{y,2}$, and $\theta_i$ is an $i$-th angle in a set of angles for beam sweeping.

    [0022] As shown in FIG. 1B, and by reference number 105, the transmitter base station may determine a first ideal sensing beam pattern to maximize a signal power in a transmit target angular region. For example, sensing performance depends on a beam pattern created by the transmitter base station. The beam pattern for the transmitted signal is given by:


    $P(\theta) = a_t^{H}(\theta)\, \mathbb{E}[X X^H]\, a_t(\theta) = a_t^{H}(\theta)\, W W^H\, a_t(\theta)$,

    where $\theta$ is the azimuth angle. To jointly optimize communication and sensing performances, one approach is to design a transmit precoder that maximizes the signal power at the UE locations while directing the signal towards a target point to detect whether there is any target at a point of interest. To do this, two sensing precoders $W_{0,1}$ and $W_{0,2}$ may be calculated, and then final transmit precoders $W_{t,1}$ and $W_{t,2}$ are designed to jointly optimize communication and sensing performances. The transmitter base station may determine an ideal sensing beam pattern $P_1(\theta)$ to maximize the signal power in a transmit target angular region. Given a transmit target angular region $[\theta_1, \theta_2]$ for azimuth angles, the ideal directional beam pattern satisfies $P_1(\theta) = 1$ for $\theta \in [\theta_1, \theta_2]$ and zero elsewhere.
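    The beam pattern $P(\theta)$ above can be evaluated on an angular grid for any candidate precoder. As a sketch, a single beam steered at the center of an assumed region $[20°, 40°]$ stands in for a jointly optimized precoder (all values below are illustrative assumptions):

```python
import numpy as np

def ula_steering(theta, n):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

N_t = 16
theta1, theta2 = np.deg2rad(20.0), np.deg2rad(40.0)  # transmit target angular region

# Stand-in precoder: one beam toward the region center (a real design would
# jointly optimize communication and sensing performance)
w = ula_steering(0.5 * (theta1 + theta2), N_t)[:, None] / np.sqrt(N_t)

# P(theta) = a_t^H(theta) W W^H a_t(theta), evaluated on a grid of azimuth angles
thetas = np.deg2rad(np.linspace(-90.0, 90.0, 361))
P = np.array([np.real(ula_steering(t, N_t).conj() @ (w @ w.conj().T) @ ula_steering(t, N_t))
              for t in thetas])

in_region = (thetas >= theta1) & (thetas <= theta2)
```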

    [0023] As further shown in FIG. 1B, and by reference number 110, the transmitter base station may determine a second ideal sensing beam pattern to minimize the signal power in the transmit target angular region. For example, the transmitter base station may determine an ideal sensing beam pattern $P_2(\theta)$ to minimize the signal power in a transmit target angular region defined similarly by $[\theta_1, \theta_2]$ for azimuth angles. In this case, the ideal beam pattern satisfies $P_2(\theta) = 0$ for $\theta \in [\theta_1, \theta_2]$ and one elsewhere.

    [0024] As shown in FIG. 1C, and by reference number 115, the transmitter base station may generate an SABF signal based on a first transmit precoder matrix determined based on the first ideal beam pattern. For example, the transmitter base station may design a first transmit precoder matrix $W_{t,1}$ using the first ideal sensing beam pattern $P_1(\theta)$. Any JCAS transmit precoder design method can be used to design the first transmit precoder matrix $W_{t,1}$, provided that the designed beam pattern obtained for $W_{t,1}$ has a significant power difference at the transmit target angular region. The transmitter base station may generate an SABF signal transmission using the first transmit precoder matrix $W_{t,1}$, where the signal power in the transmit target angular region is maximized.

    [0025] As further shown in FIG. 1C, and by reference number 120, the transmitter base station may generate an SABN signal based on a second transmit precoder matrix determined based on the second ideal beam pattern. For example, the transmitter base station may design a second transmit precoder matrix $W_{t,2}$ using the second ideal sensing beam pattern $P_2(\theta)$. Any JCAS transmit precoder design method can be used to design the second transmit precoder matrix $W_{t,2}$, provided that the designed beam pattern obtained for $W_{t,2}$ has a significant power difference at the transmit target angular region. The transmitter base station may generate an SABN signal using the second transmit precoder matrix $W_{t,2}$, where the signal power in the transmit target angular region is minimized. The SABF signal and the SABN signal may be optimized jointly for communications and sensing and may have non-negligible gains outside the transmit target angular region for communication purposes.
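    One simple way to realize the nulling behavior (chosen here purely for illustration; the disclosure allows any JCAS precoder design method) is to project a communication precoder onto the orthogonal complement of steering vectors sampled across the transmit target angular region:

```python
import numpy as np

def ula_steering(theta, n):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

N_t, K = 16, 3
rng = np.random.default_rng(3)
W_comm = rng.standard_normal((N_t, K)) + 1j * rng.standard_normal((N_t, K))

# Sample steering vectors across an assumed region [20, 40] degrees and
# project the communication precoder onto their orthogonal complement
region = np.deg2rad(np.linspace(20.0, 40.0, 5))
A_reg = np.stack([ula_steering(t, N_t) for t in region], axis=1)  # N_t x 5
P_null = np.eye(N_t) - A_reg @ np.linalg.pinv(A_reg)  # orthogonal-complement projector
W_t2 = P_null @ W_comm  # nulled precoder: a_t^H(theta) W_t2 ~ 0 inside the region

a_c = ula_steering(np.deg2rad(30.0), N_t)  # steering vector at the region center
leak_before = np.linalg.norm(a_c.conj() @ W_comm)
leak_after = np.linalg.norm(a_c.conj() @ W_t2)
```

    Outside the region the projected precoder retains most of its gain, which matches the requirement that the SABN signal keep non-negligible power for communication purposes.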

    [0026] As shown in FIG. 1D, and by reference number 125, the sensing receiver may estimate targets and AoA and AoD pairs for the targets based on the SABF signal and the SABN signal. For example, a difference of autocorrelation matrices obtained at the sensing receiver may be utilized when the transmitter base station uses the first transmit precoder matrix $W_{t,1}$ (e.g., the SABF signal) and the second transmit precoder matrix $W_{t,2}$ (e.g., the SABN signal). The received signal in the two cases can be expressed as:

    [00009]  $Y_{r,1} = \sum_{i=0}^{L} \alpha_i\, a_r^{*}(\theta_{r,i})\, a_t^{H}(\theta_{t,i})\, W_{t,1} S_1 + Z_{r,1}, \quad Y_{r,2} = \sum_{i=0}^{L} \alpha_i\, a_r^{*}(\theta_{r,i})\, a_t^{H}(\theta_{t,i})\, W_{t,2} S_2 + Z_{r,2}$,

    where $S_1$ and $S_2$ are transmitted communication signals and $Z_{r,1}$ and $Z_{r,2}$ are noise samples at the sensing receiver when the first and second transmit precoder matrices $W_{t,1}$ and $W_{t,2}$ are used, respectively. It may be assumed that transmissions with the first and second transmit precoder matrices $W_{t,1}$ and $W_{t,2}$ are performed in a short enough time interval so that the AoDs ($\theta_{t,i}$), AoAs ($\theta_{r,i}$), and path gains ($\alpha_i$) remain the same. Using the determination that

    [00010] $\mathbb{E}[S_1 S_1^H] = \mathbb{E}[S_2 S_2^H] = I_K, \quad \mathbb{E}[Z_{r,1} Z_{r,1}^H] = \mathbb{E}[Z_{r,2} Z_{r,2}^H] = \sigma_r^2 I_{N_r},$ provides: $R_{y,1} = \mathbb{E}[Y_{r,1} Y_{r,1}^H] = A R_1 A^H + \sigma_r^2 I_{N_r}, \quad R_{y,2} = \mathbb{E}[Y_{r,2} Y_{r,2}^H] = A R_2 A^H + \sigma_r^2 I_{N_r},$ where $R_1 = W_{t,1} W_{t,1}^H$, $R_2 = W_{t,2} W_{t,2}^H$, $A = \sum_{i=0}^{L} A_i$, and $A_i = \alpha_i a_r^*(\theta_{r,i}) a_t^H(\theta_{t,i})$ for all $i$.

    [0027] $R_{y,1}$ and $R_{y,2}$ are received signal autocorrelation matrices calculated at the sensing receiver when the first and second transmit precoder matrices $W_{t,1}$ and $W_{t,2}$ are used, respectively. When the transmitter base station precoder is designed with the SABN operation, signals in the direction of a target AoD are cancelled:


    $a_t^H(\theta_{t,i}) W_{t,2} \approx 0, \quad i \in \mathcal{S},$

    where $\mathcal{S}$ is the set of indices of targets whose AoDs are in the transmit target angular region $[\theta_1, \theta_2]$. Accordingly,

    [00011] $\left(\sum_{i \in \mathcal{S}} A_i\right) R_2 \approx 0.$

    Hence, it is determined that

    [00012] $R_{y,1} - R_{y,2} = \left(\sum_{i \in \mathcal{S}} A_i + \sum_{i \notin \mathcal{S}} A_i\right)(R_1 - R_2) A^H \approx \left(\sum_{i \in \mathcal{S}} A_i\right) R_1 A^H + \left(\sum_{i \notin \mathcal{S}} A_i\right)(R_1 - R_2) A^H.$

    [0028] It may be assumed that the first transmit precoder matrix $W_{t,1}$ keeps the signal power outside the target direction region at levels similar to those achieved by the second transmit precoder matrix $W_{t,2}$, in order to obtain similar communication performance. Hence, it may be determined that

    [00013] $\left(\sum_{i \notin \mathcal{S}} A_i\right) R_1 \left(\sum_{i \notin \mathcal{S}} A_i\right)^H \approx \left(\sum_{i \notin \mathcal{S}} A_i\right) R_2 \left(\sum_{i \notin \mathcal{S}} A_i\right)^H.$

    As a result, it is found that

    [00014] $R_{y,1} - R_{y,2} \approx \left(\sum_{i \in \mathcal{S}} A_i\right) R_1 A^H + \left(\sum_{i \notin \mathcal{S}} A_i\right)(R_1 - R_2)\left(\sum_{i \in \mathcal{S}} A_i\right)^H \approx \left(\sum_{i \in \mathcal{S}} A_i\right) R_1 \left(\sum_{i \in \mathcal{S}} A_i\right)^H + \left(\sum_{i \in \mathcal{S}} A_i\right) R_1 \left(\sum_{i \notin \mathcal{S}} A_i\right)^H + \left(\sum_{i \notin \mathcal{S}} A_i\right) R_1 \left(\sum_{i \in \mathcal{S}} A_i\right)^H.$

    The rank of the matrix $\sum_{i \in \mathcal{S}} A_i$ is at most $|\mathcal{S}|$, as it is a sum of $|\mathcal{S}|$ rank-1 matrices. Assuming that the antenna array geometry at the sensing receiver is adjusted properly and the target AoAs are not very close to each other, the subspaces generated by the array steering vectors of the target AoAs (and the LoS path) become independent, and the rank of $\sum_{i \in \mathcal{S}} A_i$ becomes exactly $|\mathcal{S}|$. The ranks of the matrices $(\sum_{i \in \mathcal{S}} A_i) R_1 (\sum_{i \in \mathcal{S}} A_i)^H$, $(\sum_{i \in \mathcal{S}} A_i) R_1 (\sum_{i \notin \mathcal{S}} A_i)^H$, and $(\sum_{i \notin \mathcal{S}} A_i) R_1 (\sum_{i \in \mathcal{S}} A_i)^H$ may be equal to $|\mathcal{S}|$. Therefore, there exists a full-rank matrix $T \in \mathbb{C}^{N_t \times N_t}$ with $R_1 = T T^H$ satisfying

    [00015] $\left(\sum_{i \in \mathcal{S}} A_i\right) R_1 \left(\sum_{i \in \mathcal{S}} A_i\right)^H = \left(\sum_{i \in \mathcal{S}} A_i\right) T T^H \left(\sum_{i \in \mathcal{S}} A_i\right)^H, \quad \left(\sum_{i \in \mathcal{S}} A_i\right) R_1 \left(\sum_{i \notin \mathcal{S}} A_i\right)^H = \left(\sum_{i \in \mathcal{S}} A_i\right) T T^H \left(\sum_{i \notin \mathcal{S}} A_i\right)^H, \quad \left(\sum_{i \notin \mathcal{S}} A_i\right) R_1 \left(\sum_{i \in \mathcal{S}} A_i\right)^H = \left(\sum_{i \notin \mathcal{S}} A_i\right) T T^H \left(\sum_{i \in \mathcal{S}} A_i\right)^H.$

    [0029] The sensing receiver may utilize the following proposition: Let $U \in \mathbb{C}^{N \times M}$ and $V \in \mathbb{C}^{N \times M}$ be two matrices with $N \geq 2M$, and let $U U^H + U V^H + V U^H = E \Lambda E^H$ be an eigenvalue decomposition, where $E = [e_1\ e_2\ \ldots\ e_N]$ is the matrix of eigenvectors and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$ is the matrix of eigenvalues with $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_N$. Then it may be determined that

    [00016] $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_M \geq \lambda_{M+1} = \lambda_{M+2} = \ldots = \lambda_{N-M} = 0 \geq \lambda_{N-M+1} \geq \lambda_{N-M+2} \geq \ldots \geq \lambda_N.$

    It may further be determined that

    [00017] $U^H e_i e_i^H U = V^H e_i e_i^H V = 0, \quad i = M+1, M+2, \ldots, N-M,$ and $U^H e_i e_i^H U \approx V^H e_i e_i^H V, \quad i = N-M+1, N-M+2, \ldots, N.$

    The proposition can be proven using linear algebra techniques.
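    The proposition's eigenvalue pattern can also be checked numerically. The sketch below (with illustrative sizes $N=8$, $M=2$, not part of the original disclosure) builds $UU^H + UV^H + VU^H$ from random matrices and confirms the predicted inertia ($M$ positive, $N-2M$ zero, $M$ negative eigenvalues) and the orthogonality of the zero-eigenvalue eigenvectors to $U$ and $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 2  # illustrative sizes with N >= 2M

U = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
V = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# Hermitian matrix from the proposition: U U^H + U V^H + V U^H
S = U @ U.conj().T + U @ V.conj().T + V @ U.conj().T

lam, E = np.linalg.eigh(S)          # eigh returns ascending order
lam, E = lam[::-1], E[:, ::-1]      # re-sort: lam_1 >= ... >= lam_N

# Predicted inertia: M positive, N - 2M zero, M negative eigenvalues
assert np.all(lam[:M] > 1e-9)
assert np.allclose(lam[M:N - M], 0.0, atol=1e-9)
assert np.all(lam[N - M:] < -1e-9)

# Zero-eigenvalue eigenvectors are orthogonal to the columns of U and V
E0 = E[:, M:N - M]
assert np.allclose(U.conj().T @ E0, 0.0, atol=1e-8)
assert np.allclose(V.conj().T @ E0, 0.0, atol=1e-8)
print("proposition verified numerically")
```

For generic full-column-rank $[U\ V]$, the result follows from Sylvester's law of inertia applied to the middle matrix of the factorization $[U\ V]\,\begin{pmatrix} I & I \\ I & 0 \end{pmatrix}\,[U\ V]^H$.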

    [0030] Let $U = (\sum_{i \in \mathcal{S}} A_i) T$ and $V = (\sum_{i \notin \mathcal{S}} A_i) T$, and let the eigenvalue decomposition of $R_{y,1} - R_{y,2} \approx U U^H + U V^H + V U^H$ be $E \Lambda E^H$, where $E = [e_1\ e_2\ \ldots\ e_{N_r}]$ is the matrix of eigenvectors and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_{N_r})$ is the matrix of eigenvalues with $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{N_r}$. Using the proposition, it is determined that

    [00018] $U^H e_i e_i^H U = V^H e_i e_i^H V = 0, \quad |\mathcal{S}|+1 \leq i \leq N_r - |\mathcal{S}|,$ and $U^H e_i e_i^H U \approx V^H e_i e_i^H V, \quad N_r - |\mathcal{S}| + 1 \leq i \leq N_r.$

    It may be determined that

    [00019] $U^H e_j e_j^H U = T^H \left(\sum_{i \in \mathcal{S}} A_i\right)^H e_j e_j^H \left(\sum_{i \in \mathcal{S}} A_i\right) T = T^H \left(\sum_{i \in \mathcal{S}} \alpha_i a_r^*(\theta_{r,i}) a_t^H(\theta_{t,i})\right)^H e_j e_j^H \left(\sum_{i \in \mathcal{S}} \alpha_i a_r^*(\theta_{r,i}) a_t^H(\theta_{t,i})\right) T,$

    where $T^H a_t(\theta_{t,i}) \neq 0$ for $i \in \mathcal{S}$, as the transmit precoding maximizes the beam pattern gain for targets inside the transmit target angular region ($i \in \mathcal{S}$). As a result, it may be determined that $a_r^T(\theta_{r,i}) e_j e_j^H a_r^*(\theta_{r,i}) = 0$ for all $|\mathcal{S}|+1 \leq j \leq N_r - |\mathcal{S}|$ and $i \in \mathcal{S}$. As $V^H e_j e_j^H V = 0$ is also satisfied, the same result is true for $i \notin \mathcal{S}$, i.e., $a_r^T(\theta_{r,i}) e_j e_j^H a_r^*(\theta_{r,i}) = 0$ for all $|\mathcal{S}|+1 \leq j \leq N_r - |\mathcal{S}|$ and $i \notin \mathcal{S}$.

    [0031] In terms of AoA estimation, checking the peaks of

    [00020] $f(\theta) = \frac{1}{a_r^T(\theta)\, e_j e_j^H\, a_r^*(\theta)} \quad \text{for } |\mathcal{S}|+1 \leq j \leq N_r - |\mathcal{S}|$

    may provide AoA estimates of all targets. However, it would not be possible to detect the AoDs of the targets, as the AoAs found are related not only to $i \in \mathcal{S}$ but also to $i \notin \mathcal{S}$. To solve this problem, the eigenvectors related to negative eigenvalues may be used, i.e., $e_j$ for $N_r - |\mathcal{S}| + 1 \leq j \leq N_r$. As shown by the proposition, $U^H e_j e_j^H U \approx V^H e_j e_j^H V$ for $N_r - |\mathcal{S}| + 1 \leq j \leq N_r$. Therefore, in the function

    [00021] $\frac{1}{a_r^T(\theta)\, e_j e_j^H\, a_r^*(\theta)},$

    the peaks related to AoAs for $i \in \mathcal{S}$ will be larger compared to the peaks related to AoAs for $i \notin \mathcal{S}$. As a result, it becomes possible to distinguish the AoAs for the target transmit sensing beam pattern, and hence the (AoD, AoA) pairs can be found.

    [0032] An angular spectrum

    [00022] $f(\theta) = \frac{1}{a_r^T(\theta)\, e_j e_j^H\, a_r^*(\theta)} \quad \text{for } |\mathcal{S}|+1 \leq j \leq N_r - |\mathcal{S}|$

    provides strong peaks only at target AoAs whose AoDs are inside the target angular region of a transmit sensing beam. In other words, even if there are four targets with different AoA values and a LoS path at an AoA of 90°, a resulting angular spectrum only has a single peak for each transmit target angular region. Using a projection onto the eigenvectors with negative eigenvalues helps suppress peaks at target AoAs whose AoDs are outside the transmit target angular region of interest. Thus, the proposition enables the sensing receiver to accurately detect the (AoD, AoA) pairs of all targets.

    [0033] The sensing receiver may utilize the following operations to detect AoD and AoA pairs for a given transmit target angular region $[\theta_t - \delta/2,\ \theta_t + \delta/2]$, where $\delta$ denotes the region width (e.g., a region to be used for transmit precoding design). The sensing receiver may evaluate the eigenvalue decomposition of the difference of autocorrelation matrices $\Delta R_y = R_{y,1} - R_{y,2}$, and may estimate the total number $i_0$ of positive eigenvalues of $\Delta R_y$. The eigenvalues of $\Delta R_y$ may be $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{N_r}$. As one of the detection methods, the sensing receiver may find the largest $i$ with $1 \leq i \leq N_r/2$ satisfying

    [00023] $\lambda_i > \gamma_1 \left| \sum_{j=i}^{N_r} \lambda_j \right|$

    for a pre-determined threshold value $\gamma_1$ to estimate $i_0$. The sensing receiver may form the matrix of eigenvectors $G = [e_{i_0+1}\ e_{i_0+2}\ \ldots\ e_{N_r}]$ corresponding to the eigenvalues $0 \geq \lambda_{i_0+1} \geq \lambda_{i_0+2} \geq \ldots \geq \lambda_{N_r}$. The sensing receiver may calculate the angular spectrum

    [00024] $f(\theta) = \frac{1}{a_r^T(\theta)\, G G^H\, a_r^*(\theta)}$

    for all potential AoAs and may identify the $i_0$ largest peaks of $f(\theta)$. To avoid overestimating the correct number of peaks, the sensing receiver may also compare the peak values with a pre-determined threshold, i.e., choose a peak at the angle $\hat{\theta}_{r,i}$ only if $f(\hat{\theta}_{r,i}) > \gamma_2$. After the peak detection, a number ($L_0 \leq i_0$) of AoA angles $\{\hat{\theta}_{r,i}\}_{i=1}^{L_0}$ may be detected by the sensing receiver. For each AoA detected, the corresponding AoD is $\theta_t$, as the transmit precoding maximizes the signal power at that angle with the SABF operation. In other words, the AoD and AoA pairs detected are $\{(\hat{\theta}_{t,i}, \hat{\theta}_{r,i})\}_{i=1}^{L_0}$, where $\hat{\theta}_{t,i} = \theta_t$ for all $i = 1, 2, \ldots, L_0$. Using the eigenvalues $\lambda_i$ of $\Delta R_y$ and the peak values $f(\hat{\theta}_{r,i})$, the sensing receiver may assign a weight to each detected AoD and AoA pair. There can be different options for weight assignment. One example method may include assigning $\lambda_i$ to the pair $(\hat{\theta}_{t,i}, \hat{\theta}_{r,i})$ for all $i = 1, 2, \ldots, L_0$. As another option, a weighted sum of $\lambda_i$ and $f(\hat{\theta}_{r,i})$ may be utilized to determine a weight $w_i$ for the pair $(\hat{\theta}_{t,i}, \hat{\theta}_{r,i})$.
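    A compact numerical sketch of these detection operations follows, under simplifying assumptions that are not part of the original text: half-wavelength ULAs, a toy SABF precoder with beams at the 60° target AoD and a 140° communication direction, a toy SABN precoder obtained by projecting out the 60° steering vector, and a simple positive-eigenvalue count in place of the threshold rule above. The spectrum should peak near the AoA of the in-region target (50°) while the out-of-region target's AoA (110°) is suppressed:

```python
import numpy as np

Nt = Nr = 16  # illustrative ULA sizes

def steer(N, deg):
    # Half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(deg)))

# Two single-bounce paths: the AoD-60-deg path is inside the transmit target
# angular region, the AoD-120-deg path is outside it
aods, aoas, gains = [60.0, 120.0], [50.0, 110.0], [1.0, 0.8]
A = sum(g * np.outer(steer(Nr, p).conj(), steer(Nt, q).conj())
        for q, p, g in zip(aods, aoas, gains))

# Toy SABF precoder: beams at the target AoD (60 deg) and a communication
# direction (140 deg); toy SABN precoder: the 60-deg direction projected out
W1 = np.stack([steer(Nt, 60.0), steer(Nt, 140.0)], axis=1) / np.sqrt(Nt)
a0 = steer(Nt, 60.0)
W2 = W1 - np.outer(a0, a0.conj() @ W1) / Nt

R1, R2 = W1 @ W1.conj().T, W2 @ W2.conj().T
dRy = A @ (R1 - R2) @ A.conj().T     # receiver noise cancels in the difference

lam, E = np.linalg.eigh(dRy)
lam, E = lam[::-1], E[:, ::-1]       # descending eigenvalues
i0 = int(np.sum(lam > 0.1 * lam[0]))  # simplified positive-eigenvalue count
G = E[:, i0:]                         # non-positive eigenspace

grid = np.arange(0.0, 180.0, 0.25)
f = np.array([1.0 / np.linalg.norm(G.conj().T @ steer(Nr, t).conj()) ** 2
              for t in grid])
aoa_hat = grid[np.argmax(f)]
print(i0, aoa_hat)  # one dominant positive eigenvalue; AoA peak near 50 deg
```

The precoders and the 0.1 eigenvalue threshold are purely illustrative stand-ins for the SABF/SABN designs and the $\gamma_1$ rule described in the text.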

    [0034] As shown in FIG. 1E, and by reference number 130, the transmitter base station may perform a beam sweeping operation to scan a set of transmit angular regions and to detect the targets. For example, to detect all targets and their AoD and AoA pairs, a beam sweeping operation can be performed at the transmitter base station. Depending on the number of antennas $N_t$ at the transmitter base station, the width of the transmit target angular region can be determined and a suitable resolution for beam sweeping may be selected. For example, for $N_t = 16$, the beam sweeping can be performed with a 5° resolution. As beam sweeping is performed, the transmit beam pattern changes accordingly, with a high gain at a desired angle, and there are side lobes for each sweeping angle that are used to maintain good communication quality for UEs. For each angle swept, the transmit precoding (e.g., with the SABF and SABN operations) and a receiver algorithm to detect (AoD, AoA, weight) triples may be performed. Pre-defined transmit target angular regions (e.g., regions centered at the angles 10°, 15°, . . . , 170°, each with a 5° width) can be swept to detect the (AoD, AoA, weight) triples for all targets. A total number of beam sweeping stages (e.g., the number of transmit target angular regions) may be determined.

    [0035] As shown in FIG. 1F, and by reference number 135, the sensing receiver may perform post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs. For example, after determining the AoD and AoA values of each target, the sensing receiver may perform post-processing to obtain more accurate results. For example, the sensing receiver may perform outlier elimination, elimination of the LoS path, merging of targets with close AoD and AoA information, and/or the like. For outlier elimination, the sensing receiver may calculate rays (e.g., defined by the AoDs and AoAs) with starting points at either the transmitter base station (for AoDs) or the sensing receiver (for AoAs). If the two rays for a target do not intersect, then the corresponding pair can be marked as an outlier (e.g., a false detection) and can be removed. In some cases, especially when the LoS path is very strong compared to other reflected paths, the LoS path can be detected as a target path. In such cases, using the AoD and AoA information about the LoS path, the sensing receiver may remove the paths (e.g., associated with AoD and AoA pairs) close to the LoS path.

    [0036] In some scenarios, a single target can be detected as multiple targets with close AoD and AoA information. In such scenarios, the sensing receiver may utilize a clustering method (e.g., k-means clustering) to cluster targets with close enough AoD and AoA information. When merging multiple targets, the assigned weights can be used to determine a final AoD and AoA pair. In one example, a weighted sum of close AoD and AoA angles can be calculated to determine an angle value after merging. The merging operation may be needed when there is a target in the middle of two neighboring transmit target angular regions. For example, assume that beam sweeping is performed at AoDs 10°, 15°, . . . , 170° with a 5° beamwidth and there is a target with an AoD at 32.5°. In this case, the same target can be detected in the transmit target angular regions centered at 30° and 35°. Using the weights calculated in these two cases, merging can be performed and an AoD estimation can be accurately determined.
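    The merging step can be sketched as follows, using a simple greedy angular-distance rule in place of k-means; the detections, weights, and the 5° gap are hypothetical values chosen to mirror the example above:

```python
import numpy as np

def merge_pairs(pairs, weights, max_gap_deg=5.0):
    # Greedily merge (AoD, AoA) detections whose angles are both within
    # max_gap_deg, replacing each cluster by its weight-averaged pair
    pairs, weights = np.asarray(pairs, float), np.asarray(weights, float)
    merged, used = [], np.zeros(len(pairs), bool)
    for i in range(len(pairs)):
        if used[i]:
            continue
        close = ~used & np.all(np.abs(pairs - pairs[i]) <= max_gap_deg, axis=1)
        w = weights[close] / weights[close].sum()
        merged.append((w[:, None] * pairs[close]).sum(axis=0))
        used |= close
    return np.array(merged)

# One target at AoD 32.5 deg seen from the sweep regions centered at 30 deg
# and 35 deg (hypothetical detections), plus an unrelated second target
detections = [[30.0, 48.2], [35.0, 48.6], [90.0, 120.0]]
merged = merge_pairs(detections, weights=[1.0, 1.0, 2.0])
print(merged)  # the first two rows merge to about (32.5, 48.4)
```

With equal weights, the two neighboring-region detections average to the true 32.5° AoD, as described in the text.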

    [0037] As shown in FIG. 1G, and by reference number 140, the sensing receiver may estimate path loss values for paths of the targets based on the AoA and AoD pairs. For example, to obtain information about the RCS values of each target, a path loss for each target reflection path can be estimated by the sensing receiver. The estimation of path loss values can be performed by estimating $\alpha_i$ for all $i = 0, 1, 2, \ldots, L$. At each beam sweeping stage, the difference of autocorrelation matrices can be written as


    $\Delta R_{y,b} = A\, \Delta R_b\, A^H, \quad b = 1, 2, \ldots, B,$

    where $\Delta R_{y,b}$ is the difference of autocorrelation matrices obtained at the b-th beam sweeping stage, and $\Delta R_b = W_{t,1,b} W_{t,1,b}^H - W_{t,2,b} W_{t,2,b}^H$ is the difference of transmit signal autocorrelation matrices for the b-th beam sweeping stage. The last equation can be written as

    [00025] $\Delta R_{y,b} = \left(\sum_{i=0}^{L} \alpha_i a_r^*(\theta_{r,i}) a_t^H(\theta_{t,i})\right) \Delta R_b \left(\sum_{j=0}^{L} \alpha_j a_r^*(\theta_{r,j}) a_t^H(\theta_{t,j})\right)^H = \sum_{i=0}^{L} \sum_{j=0}^{L} \alpha_i \alpha_j^*\, a_r^*(\theta_{r,i})\, a_t^H(\theta_{t,i})\, \Delta R_b\, a_t(\theta_{t,j})\, a_r^T(\theta_{r,j}).$

    Therefore, it is determined that

    [00026] $\mathrm{vec}(\Delta R_{y,b}) = \sum_{i=0}^{L} \sum_{j=0}^{L} \alpha_i \alpha_j^*\, \mathrm{vec}\left(a_r^*(\theta_{r,i})\, a_t^H(\theta_{t,i})\, \Delta R_b\, a_t(\theta_{t,j})\, a_r^T(\theta_{r,j})\right).$ Let $\alpha = [\alpha_0\ \alpha_1\ \ldots\ \alpha_L]^T$, and define $A_{eq} \in \mathbb{C}^{N_r^2 B \times (L+1)^2}$ and $b_{eq} \in \mathbb{C}^{N_r^2 B \times 1}$ satisfying $[A_{eq}]_{(b-1)N_r^2+k,\ i(L+1)+j+1} = \left[\mathrm{vec}\left(a_r^*(\theta_{r,j})\, a_t^H(\theta_{t,j})\, \Delta R_b\, a_t(\theta_{t,i})\, a_r^T(\theta_{r,i})\right)\right]_k$ for all $k = 1, 2, \ldots, N_r^2$, $b = 1, 2, \ldots, B$, $i = 0, 1, 2, \ldots, L$, $j = 0, 1, 2, \ldots, L$, and $[b_{eq}]_{(b-1)N_r^2+k} = [\mathrm{vec}(\Delta R_{y,b})]_k$ for all $k = 1, 2, \ldots, N_r^2$, $b = 1, 2, \ldots, B$.

    Then it may be determined that


    $A_{eq}\, \mathrm{vec}(\alpha \alpha^H) = b_{eq}.$

    [0038] Once the AoD and AoA values are estimated, and the difference of transmit autocorrelation matrices $\Delta R_b$ is known by the sensing receiver, the matrix $A_{eq}$ and the vector $b_{eq}$ can be calculated by the sensing receiver. The AoD and AoA estimation is performed at the sensing receiver, and $\Delta R_b$ may include statistical (e.g., not rapidly changing) information about the signal transmitted by the transmitter base station, and hence can be delivered to the sensing receiver via a dedicated link (e.g., via a transport network or the core network). The transfer of $\Delta R_b$ from the transmitter base station to the sensing receiver may be accomplished in several ways. For example, the entries of $\Delta R_b$ may be transferred directly. Since the matrix is complex and Hermitian with $N_t^2$ entries, only the diagonal elements ($N_t$ real numbers) and the lower (or upper) off-diagonal elements ($(N_t^2 - N_t)/2$ complex numbers) need to be transferred. This process transfers $N_t^2$ real numbers. If the information is to be submitted for different subbands, then the total number may be multiplied by the number of subbands. In another example, partial information about $\Delta R_b$ may be transferred. In some cases, where the structure of $\Delta R_b$ is known by the receiving base station, only the partial information need be transmitted.
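    The $N_t^2$ real-number count can be illustrated directly. The sketch below serializes a Hermitian matrix into exactly $N_t^2$ real values and reconstructs it on the receiving side; the packing order is an assumption for illustration, not a format specified by the text:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt = 4

X = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
Rb = X @ X.conj().T                    # a Hermitian matrix for illustration

# Pack: Nt real diagonal entries plus (Nt^2 - Nt)/2 complex strictly-lower
# entries -> exactly Nt^2 real numbers
il = np.tril_indices(Nt, k=-1)
payload = np.concatenate([np.diag(Rb).real, Rb[il].real, Rb[il].imag])
assert payload.size == Nt ** 2

# Unpack on the receiving side
n_off = (Nt * Nt - Nt) // 2
low = payload[Nt:Nt + n_off] + 1j * payload[Nt + n_off:]
R = np.zeros((Nt, Nt), complex)
R[il] = low                            # strictly lower triangle
R = R + R.conj().T + np.diag(payload[:Nt])
print(np.allclose(R, Rb))              # round trip succeeds
```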

    [0039] To find a solution of the equation $A_{eq}\, \mathrm{vec}(\alpha \alpha^H) = b_{eq}$, the sensing receiver may find a least-squares solution of $A_{eq} x = b_{eq}$ as $x = A_{eq}^{+} b_{eq}$, and may form a square matrix $\Gamma$ using $x$ so that $[\Gamma]_{i+1,j+1} = [x]_{j(L+1)+i+1}$ for all $i = 0, 1, \ldots, L$ and $j = 0, 1, \ldots, L$. The sensing receiver may determine a rank-1 estimate of $\Gamma$ using its principal eigenvalue $\lambda$ and eigenvector $e$, $\hat{\alpha} = \sqrt{\lambda}\, e$, where the final estimate of $\alpha$ is $\hat{\alpha}$.
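    The least-squares recovery of the path gains can be sketched as follows, with toy dimensions and random Hermitian stand-ins for the $\Delta R_b$ matrices (the angles, gains, and sizes are illustrative, not from the original text). The recovered gains match the true ones up to the inherent common phase ambiguity of the rank-1 factorization:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, Nr, L, B = 8, 4, 1, 3            # L+1 = 2 paths, B sweep stages (toy sizes)

def steer(N, deg):
    return np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(deg)))

aods, aoas = [40.0, 100.0], [70.0, 130.0]
alpha = np.array([1.0 + 0.5j, 0.3 - 0.2j])   # true path gains
at = [steer(Nt, t) for t in aods]
arc = [steer(Nr, t).conj() for t in aoas]    # a_r^*(theta_r,i)

# Random Hermitian stand-ins for Delta-R_b, one per sweep stage
dRb = []
for _ in range(B):
    W = rng.standard_normal((Nt, 2)) + 1j * rng.standard_normal((Nt, 2))
    dRb.append(np.outer(W[:, 0], W[:, 0].conj()) - np.outer(W[:, 1], W[:, 1].conj()))

# Assemble A_eq x = b_eq with x = vec(alpha alpha^H), vec = column stacking
Amat = sum(a * np.outer(ac, av.conj()) for a, ac, av in zip(alpha, arc, at))
A_eq = np.zeros((Nr * Nr * B, (L + 1) ** 2), complex)
b_eq = np.zeros(Nr * Nr * B, complex)
for b in range(B):
    dRy = Amat @ dRb[b] @ Amat.conj().T      # noiseless Delta-R_y,b
    b_eq[b * Nr * Nr:(b + 1) * Nr * Nr] = dRy.ravel(order='F')
    for i in range(L + 1):
        for j in range(L + 1):
            s = at[j].conj() @ dRb[b] @ at[i]        # a_t^H(j) dR_b a_t(i)
            M = s * np.outer(arc[j], arc[i].conj())  # a_r^*(j) a_r^T(i)
            A_eq[b * Nr * Nr:(b + 1) * Nr * Nr, i * (L + 1) + j] = M.ravel(order='F')

x, *_ = np.linalg.lstsq(A_eq, b_eq, rcond=None)
Gamma = x.reshape((L + 1, L + 1), order='F')         # should equal alpha alpha^H
w, V = np.linalg.eigh(0.5 * (Gamma + Gamma.conj().T))
alpha_hat = np.sqrt(w[-1]) * V[:, -1]                # rank-1, phase-ambiguous
print(np.abs(alpha_hat))     # magnitudes match |alpha|
```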

    [0040] As shown in FIG. 1H, and by reference number 145, the sensing receiver may determine positions of the targets based on the AoA and AoD pairs. For example, in a bistatic setup, once the AoDs and AoAs of the targets are estimated, the sensing receiver may calculate the positions of the targets using a triangulation technique, by intersecting the related rays. The ray equations can be written in a two-dimensional coordinate system as

    [00027] $(y_i - y_t)\cos\theta_{t,i} = (x_i - x_t)\sin\theta_{t,i}, \quad i = 1, 2, \ldots, L, \qquad (y_i - y_r)\cos\theta_{r,i} = (x_i - x_r)\sin\theta_{r,i}, \quad i = 1, 2, \ldots, L,$

    where the transmitter base station and the sensing receiver have coordinates $(x_t, y_t)$ and $(x_r, y_r)$, respectively, and the i-th target has coordinates $(x_i, y_i)$. It may be determined that

    [00028] $\begin{pmatrix} -\sin\theta_{t,i} & \cos\theta_{t,i} \\ -\sin\theta_{r,i} & \cos\theta_{r,i} \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} = \begin{pmatrix} y_t \cos\theta_{t,i} - x_t \sin\theta_{t,i} \\ y_r \cos\theta_{r,i} - x_r \sin\theta_{r,i} \end{pmatrix}.$

    The solution position may be determined as

    [00029] $\begin{pmatrix} x_i \\ y_i \end{pmatrix} = \frac{1}{\sin(\theta_{r,i} - \theta_{t,i})} \begin{pmatrix} \cos\theta_{r,i} & -\cos\theta_{t,i} \\ \sin\theta_{r,i} & -\sin\theta_{t,i} \end{pmatrix} \begin{pmatrix} y_t \cos\theta_{t,i} - x_t \sin\theta_{t,i} \\ y_r \cos\theta_{r,i} - x_r \sin\theta_{r,i} \end{pmatrix}.$ If $\sin(\theta_{r,i} - \theta_{t,i}) = 0$,

    the two rays do not intersect. Such a case should not happen, since the outlier elimination process may remove such AoD and AoA pairs, as described above. The method described considers two dimensions (e.g., height information may not be included) and may omit the curvature of the earth. However, the sensing receiver may generalize the method by defining rays on the earth's surface in three dimensions. This may be beneficial when elevation angle information is also available (e.g., using an antenna array different from a ULA) and the distance between the transmitter base station and the sensing receiver is large.
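    The triangulation step can be sketched as follows; the transmitter, receiver, and target coordinates are hypothetical, and angles are measured from the x-axis, matching the ray equations of this paragraph:

```python
import numpy as np

def triangulate(tx, rx, aod_deg, aoa_deg):
    # Intersect the AoD ray from the transmitter with the AoA ray at the
    # receiver (2-D bistatic triangulation); angles measured from the x-axis
    t, r = np.deg2rad(aod_deg), np.deg2rad(aoa_deg)
    if abs(np.sin(r - t)) < 1e-9:
        return None                    # parallel rays: an outlier pair
    M = np.array([[-np.sin(t), np.cos(t)],
                  [-np.sin(r), np.cos(r)]])
    rhs = np.array([tx[1] * np.cos(t) - tx[0] * np.sin(t),
                    rx[1] * np.cos(r) - rx[0] * np.sin(r)])
    return np.linalg.solve(M, rhs)

# Sanity check: place a target, derive its true angles, then recover it
tx, rx = np.array([0.0, 0.0]), np.array([100.0, 0.0])
target = np.array([40.0, 30.0])
aod = np.degrees(np.arctan2(*(target - tx)[::-1]))  # arctan2(dy, dx)
aoa = np.degrees(np.arctan2(*(target - rx)[::-1]))
print(triangulate(tx, rx, aod, aoa))   # recovers approximately (40, 30)
```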

    [0041] As further shown in FIG. 1H, and by reference number 150, the sensing receiver may perform one or more actions based on the positions of the targets. For example, the sensing receiver may perform traffic monitoring based on the positions of the targets, may identify parking spots in busy city streets based on the positions of the targets, may detect pedestrians crossing streets based on the positions of the targets, may count a number of people within a local area based on the positions of the targets, may detect unidentified drones or other flying objects based on the positions of the targets, may detect a presence of people within geo-fenced areas of a factory based on the positions of the targets, may provide accurate localization and tracking of large passive objects in a factory based on the positions of the targets, may provide collision avoidance between autonomous vehicles or other mobile robots and people based on the positions of the targets, and/or the like.

    [0042] As shown in FIG. 1I, and by reference number 155, the sensing receiver may receive a transmitted signal (e.g., the SABF signal and the SABN signal). For example, in some scenarios, due to hardware impairments and mutual coupling, there may be unwanted phase/gain misalignments between the antenna array elements of the transmitter base station and/or the sensing receiver. The LoS path between the transmitter base station and the sensing receiver may be utilized for a calibration that eliminates the phase/gain misalignments. A principal subspace of a measured channel between the transmitter base station and the sensing receiver can be compared with an ideal LoS channel (e.g., calculated using ideal transmit and receiver array steering vectors) to jointly calibrate the antenna arrays of the transmitter base station and the sensing receiver. In some implementations, the sensing receiver may utilize communication pilot symbols (e.g., provided in the SABF signal and/or the SABN signal) to estimate a channel $H_0$ between the transmitter base station and the sensing receiver.

    [0043] As further shown in FIG. 1I, and by reference number 160, the sensing receiver may receive communication pilots from the transmitter base station. For example, the sensing receiver may receive, via the core network, the communication pilots (e.g., demodulation reference signals) from the transmitter base station. In some implementations, the communication pilots may include information about the difference of the transmit autocorrelation matrices $\Delta R_b$, described above.

    [0044] As further shown in FIG. 1I, and by reference number 165, the sensing receiver may determine transmitter and receiver calibration coefficients for transmitter and receiver antenna arrays based on the communication pilots. For example, the channel matrix $H_0$ may include both the LoS path and the target reflection paths. To extract the LoS path component, the sensing receiver may evaluate a singular value decomposition of $H_0$:


    $H_0 = U_{H,0}\, S_{H,0}\, V_{H,0}^H,$

    where the diagonal elements of $S_{H,0}$ are sorted in descending order. The sensing receiver may calculate the vectors $h_{r,0}$ and $h_{t,0}$ as the first columns of $U_{H,0}$ and $V_{H,0}$, respectively. The sensing receiver may calculate ideal transmit and receiver array steering vectors $a_{t,0}(\theta_{t,0})$ and $a_{r,0}(\theta_{r,0})$ using the AoD ($\theta_{t,0}$) and AoA ($\theta_{r,0}$) of the LoS path between the transmitter base station and the sensing receiver. The sensing receiver may calculate calibration vectors $c_t \in \mathbb{C}^{N_t \times 1}$ for the transmitter arrays and $c_r \in \mathbb{C}^{N_r \times 1}$ for the receiver arrays by element-wise division operations:

    [00030] $[c_t]_i = \frac{N_t\, [h_{t,0}]_i}{[a_{t,0}(\theta_{t,0})]_i}, \quad i = 1, 2, \ldots, N_t, \qquad [c_r]_i = \frac{N_r\, [h_{r,0}]_i^*}{[a_{r,0}(\theta_{r,0})]_i}, \quad i = 1, 2, \ldots, N_r.$

    The calibration vectors may correspond to the calibration coefficients for the transmitter and receiver antenna arrays.
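    A numerical sketch of this calibration flow follows, under an assumed impairment model that is not part of the original text: per-element gain/phase errors multiplying ideal half-wavelength ULA steering vectors, and a LoS-only channel. Up to one common complex scale (the usual singular-vector phase ambiguity), the computed $c_t$ and $c_r$ track the transmit and receive impairments:

```python
import numpy as np

rng = np.random.default_rng(5)
Nt, Nr = 8, 8

def steer(N, deg):
    return np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(deg)))

# Ideal LoS steering vectors plus unknown per-element gain/phase errors
# (an assumed impairment model for illustration)
at0, ar0 = steer(Nt, 75.0), steer(Nr, 105.0)
gt = rng.uniform(0.9, 1.1, Nt) * np.exp(1j * rng.uniform(-0.3, 0.3, Nt))
gr = rng.uniform(0.9, 1.1, Nr) * np.exp(1j * rng.uniform(-0.3, 0.3, Nr))

# LoS-only channel seen through the impaired arrays: a_r'^* a_t'^H
H0 = np.outer((gr * ar0).conj(), (gt * at0).conj())

U, S, Vh = np.linalg.svd(H0)
hr0, ht0 = U[:, 0], Vh[0].conj()       # first columns of U_H,0 and V_H,0

ct = Nt * ht0 / at0                    # element-wise division as in the text
cr = Nr * hr0.conj() / ar0

# Each calibration vector matches the impairments up to one common scale
print(np.max(np.abs(ct / gt - (ct / gt)[0])),
      np.max(np.abs(cr / gr - (cr / gr)[0])))   # both near zero
```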

    [0045] As further shown in FIG. 1I, and by reference number 170, the sensing receiver may calibrate the receiver antenna array with the receiver calibration coefficients. For example, the sensing receiver may utilize the calibration vectors to calibrate the receiver antenna array by element-wise multiplication of corresponding elements of the receiver array steering vector.

    [0046] As further shown in FIG. 1I, and by reference number 175, the sensing receiver may provide the transmitter calibration coefficients to the transmitter base station. For example, the sensing receiver may provide the calibration coefficients (e.g., the calibration vectors used to calibrate the transmitter antenna array) to the transmitter base station via the core network (or via a transport network).

    [0047] As further shown in FIG. 1I, and by reference number 180, the transmitter base station may calibrate the transmitter antenna array with the transmitter calibration coefficients. For example, the transmitter base station may utilize the calibration vectors to calibrate the transmitter antenna array by element-wise multiplication of corresponding elements of the transmitter array steering vector.

    [0048] In this way, the sensing receiver provides differential sensing for joint communications and sensing. For example, a transmitter base station of a JCAS system may utilize sweeping for transmitter sensing beams with sensing-aware beam-forming and beam-nulling operations. A sensing receiver of the JCAS system may apply an eigenspace analysis to a difference of autocorrelation matrices obtained from the sensing-aware beam-forming and beam-nulling operations. The sensing receiver may detect targets and may estimate an AoA and AoD pair for each detected target. The sensing receiver may perform a path loss estimation for each target path (e.g., which can be used to estimate RCS values), and may estimate a position of each detected target. The sensing receiver may also utilize communication pilots (e.g., demodulation reference signals), received from the transmitter base station, to calculate calibration coefficients for the antenna arrays of the sensing receiver and the transmitter base station. Thus, the sensing receiver conserves computing resources, networking resources, and/or the like that would otherwise have been consumed by generating poor position estimates due to a limited number of estimated AoAs, generating poor position estimates due to the strong line-of-sight path between the JCAS transmitter and the sensing receiver, diverting resources from communications to position estimation, performing incorrect actions based on poor position estimates, and/or the like.

    [0049] As indicated above, FIGS. 1A-1I are provided as an example. Other examples may differ from what is described with regard to FIGS. 1A-1I. The number and arrangement of devices shown in FIGS. 1A-1I are provided as an example. In practice, there may be additional devices, fewer devices, different devices, or differently arranged devices than those shown in FIGS. 1A-1I. Furthermore, two or more devices shown in FIGS. 1A-1I may be implemented within a single device, or a single device shown in FIGS. 1A-1I may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) shown in FIGS. 1A-1I may perform one or more functions described as being performed by another set of devices shown in FIGS. 1A-1I.

    [0050] FIG. 2 is a diagram of an example environment 200 in which systems and/or methods described herein may be implemented. As shown in FIG. 2, the environment 200 may include a base station 210, a UE 220, and/or a core network 230. Devices and/or elements of the environment 200 may interconnect via wired connections and/or wireless connections.

    [0051] The base station 210 may support, for example, a cellular radio access technology (RAT). The base station 210 may include one or more base stations (e.g., base transceiver stations, radio base stations, node Bs, eNodeBs (eNBs), gNodeBs (gNBs), base station subsystems, cellular sites, cellular towers, access points, transmit receive points (TRPs), radio access nodes, macrocell base stations, microcell base stations, picocell base stations, femtocell base stations, or similar types of devices) and other network entities that can support wireless communication for a UE 220. The base station 210 may transfer traffic between a UE 220 (e.g., using a cellular RAT), one or more base stations (e.g., using a wireless interface or a backhaul interface, such as a wired backhaul interface), and/or a core network. The base station 210 may provide one or more cells that cover geographic areas.

    [0052] In some implementations, the base station 210 may perform scheduling and/or resource management for a UE 220 covered by the base station 210 (e.g., a UE 220 covered by a cell provided by the base station 210). In some implementations, the base station 210 may be controlled or coordinated by a network controller, which may perform load balancing, network-level configuration, and/or other operations. The network controller may communicate with the base station 210 via a wireless or wireline backhaul. In some implementations, the base station 210 may include a network controller, a self-organizing network (SON) module or component, or a similar module or component. In other words, the base station 210 may perform network control, scheduling, and/or network management functions (e.g., for uplink, downlink, and/or sidelink communications of a UE 220 covered by the base station 210).

    [0053] The UE 220 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information, as described elsewhere herein. The UE 220 may include a communication device and/or a computing device. For example, the UE 220 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, an IoT device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.

    [0054] The core network 230 may include one or more wired and/or wireless networks. For example, the core network 230 may include a cellular network (e.g., a sixth generation (6G) network, a fifth generation (5G) network, a fourth generation (4G) network, a long-term evolution (LTE) network, a third generation (3G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, and/or a combination of these or other types of networks. The core network 230 enables communication among the devices of the environment 200. In some implementations, the core network 230 may include an example architecture of a 5G next generation (NG) core network included in a 5G wireless telecommunications system, a 6G core network included in a 6G wireless telecommunications system, and/or the like.

    [0055] The number and arrangement of devices and networks shown in FIG. 2 are provided as an example. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2. Furthermore, two or more devices shown in FIG. 2 may be implemented within a single device, or a single device shown in FIG. 2 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of the environment 200 may perform one or more functions described as being performed by another set of devices of the environment 200.

    [0056] FIG. 3 is a diagram of example components of a device 300, which may correspond to the base station 210 and/or the UE 220. In some implementations, the base station 210 and/or the UE 220 may include one or more devices 300 and/or one or more components of the device 300. As shown in FIG. 3, the device 300 may include a bus 310, a processor 320, a memory 330, an input component 340, an output component 350, and a communication component 360.

    [0057] The bus 310 includes one or more components that enable wired and/or wireless communication among the components of the device 300. The bus 310 may couple together two or more components of FIG. 3, such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling. The processor 320 includes a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor 320 is implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor 320 includes one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein.

    [0058] The memory 330 includes volatile and/or nonvolatile memory. For example, the memory 330 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). The memory 330 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). The memory 330 may be a non-transitory computer-readable medium. The memory 330 stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of the device 300. In some implementations, the memory 330 includes one or more memories that are coupled to one or more processors (e.g., the processor 320), such as via the bus 310.

    [0059] The input component 340 enables the device 300 to receive input, such as user input and/or sensed input. For example, the input component 340 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. The output component 350 enables the device 300 to provide output, such as via a display, a speaker, and/or a light-emitting diode. The communication component 360 enables the device 300 to communicate with other devices via a wired connection and/or a wireless connection. For example, the communication component 360 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.

    [0060] The device 300 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., the memory 330) may store a set of instructions (e.g., one or more instructions or code) for execution by the processor 320. The processor 320 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 320, causes the one or more processors 320 and/or the device 300 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, the processor 320 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.

    [0061] The number and arrangement of components shown in FIG. 3 are provided as an example. The device 300 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3. Additionally, or alternatively, a set of components (e.g., one or more components) of the device 300 may perform one or more functions described as being performed by another set of components of the device 300.

    [0062] FIG. 4 is a flowchart of an example process 400 for differential sensing for joint communications and sensing. In some implementations, one or more process blocks of FIG. 4 may be performed by a device (e.g., the base station 210). In some implementations, one or more process blocks of FIG. 4 may be performed by another device or a group of devices separate from or including the device. Additionally, or alternatively, one or more process blocks of FIG. 4 may be performed by one or more components of the device 300, such as the processor 320, the memory 330, the input component 340, the output component 350, and/or the communication component 360.

    [0063] As shown in FIG. 4, process 400 may include estimating targets and AoA and AoD pairs for the targets based on an SABF signal and an SABN signal received from another device (block 410). For example, the device may estimate targets and AoA and AoD pairs for the targets based on an SABF signal and an SABN signal received from another device, as described above. In some implementations, estimating the targets and the AoA and AoD pairs for the targets includes calculating an eigenvalue decomposition of difference of autocorrelation matrices based on the SABF signal and the SABN signal, estimating a total quantity of positive eigenvalues of the eigenvalue decomposition of difference, calculating angular spectrums for the AoAs based on the total quantity of positive eigenvalues, and estimating the AoA and AoD pairs based on the angular spectrums for the AoAs.
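The estimation steps recited above (difference of autocorrelation matrices, positive-eigenvalue counting, angular spectrum) can be sketched in a few lines. This is an illustrative MUSIC-style interpretation, not the disclosed implementation: the 1e-6 eigenvalue threshold, the half-wavelength uniform linear array model, and the function name `estimate_aoas` are assumptions for the sake of the example.

```python
import numpy as np

def estimate_aoas(y_sabf, y_sabn, n_antennas, angle_grid):
    """Differential sensing sketch: subtract the autocorrelation matrices of
    the SABF and SABN snapshots, count the positive eigenvalues (one per
    target), and scan a MUSIC-style angular spectrum using the remaining
    eigenvectors as the noise subspace."""
    # Sample autocorrelation matrices (rows = antennas, columns = snapshots).
    r_sabf = y_sabf @ y_sabf.conj().T / y_sabf.shape[1]
    r_sabn = y_sabn @ y_sabn.conj().T / y_sabn.shape[1]

    # Eigenvalue decomposition of the difference of the autocorrelation
    # matrices; eigh returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(r_sabf - r_sabn)

    # Total quantity of (significantly) positive eigenvalues = target count.
    n_targets = int(np.sum(vals > 1e-6 * np.max(np.abs(vals))))

    # Noise subspace: eigenvectors not paired with a positive eigenvalue.
    noise = vecs[:, : n_antennas - n_targets]

    # Angular spectrum over candidate AoAs (half-wavelength uniform array).
    spectrum = []
    for theta in angle_grid:
        a = np.exp(1j * np.pi * np.arange(n_antennas) * np.sin(theta))
        spectrum.append(1.0 / (np.linalg.norm(noise.conj().T @ a) ** 2 + 1e-12))
    return n_targets, np.asarray(spectrum)
```

Peaks of the returned spectrum give candidate AoAs; a symmetric pass over the transmit steering vectors would yield the AoD half of each pair.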

    [0064] In some implementations, the SABF signal is generated based on a transmit precoder matrix that is determined based on an ideal sensing beam pattern that maximizes a signal power in a transmit target angular region. In some implementations, the SABN signal is generated based on a transmit precoder matrix determined based on an ideal sensing beam pattern that minimizes a signal power in the transmit target angular region.
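One way to derive such a precoder is a least-squares fit of the array response to the ideal beam pattern, as in the following sketch. The least-squares formulation, the 0/1 ideal pattern, and the uniform-linear-array model are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def sensing_precoder(n_tx, region, mode="sabf", grid=None):
    """Least-squares sketch of a sensing-aware precoder: fit the array
    response to an ideal pattern that is 1 inside the transmit target
    angular region and 0 outside (SABF), or the complement (SABN)."""
    if grid is None:
        grid = np.linspace(-np.pi / 2, np.pi / 2, 181)
    # Steering matrix for a half-wavelength uniform linear array.
    A = np.exp(1j * np.pi * np.outer(np.arange(n_tx), np.sin(grid)))
    inside = (grid >= region[0]) & (grid <= region[1])
    desired = inside.astype(float) if mode == "sabf" else (~inside).astype(float)
    # Weight vector minimizing || A^H w - desired ||_2, then unit-normalized.
    w, *_ = np.linalg.lstsq(A.conj().T, desired, rcond=None)
    return w / np.linalg.norm(w)
```

The SABF weights concentrate power in the target region while the SABN weights place a null there, which is what makes the autocorrelation difference in the estimation step informative.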

    [0065] As further shown in FIG. 4, process 400 may include performing post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs (block 420). For example, the device may perform post-processing of the AoA and AoD pairs to modify estimation accuracies for the AoA and AoD pairs, as described above. In some implementations, performing post-processing of the AoA and AoD pairs includes one or more of eliminating outliers for the AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs, eliminating a line-of-sight path for the AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs, or merging targets with substantially similar AoA and AoD pairs to modify the estimation accuracies for the AoA and AoD pairs.
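The merging step in particular is easy to picture. The sketch below uses a greedy running-mean cluster with a fixed angular tolerance; both the tolerance value and the clustering rule are illustrative choices, not the disclosed post-processing:

```python
def merge_similar_pairs(pairs, tol=0.05):
    """Post-processing sketch: merge estimated (AoA, AoD) pairs that fall
    within an angular tolerance of an existing cluster, replacing each
    cluster with its running mean."""
    merged = []  # entries: (mean_aoa, mean_aod, count)
    for aoa, aod in pairs:
        for i, (m_aoa, m_aod, cnt) in enumerate(merged):
            if abs(aoa - m_aoa) < tol and abs(aod - m_aod) < tol:
                # Fold the new pair into the cluster's running mean.
                merged[i] = ((m_aoa * cnt + aoa) / (cnt + 1),
                             (m_aod * cnt + aod) / (cnt + 1),
                             cnt + 1)
                break
        else:
            merged.append((aoa, aod, 1))
    return [(a, d) for a, d, _ in merged]
```

Outlier elimination and line-of-sight-path elimination would similarly filter the pair list before the positions are computed.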

    [0066] As further shown in FIG. 4, process 400 may include estimating path loss values for paths of the targets (block 430). For example, the device may estimate path loss values for paths of the targets, as described above.

    [0067] As further shown in FIG. 4, process 400 may include determining positions of the targets based on the AoA and AoD pairs (block 440). For example, the device may determine positions of the targets based on the AoA and AoD pairs, as described above. In some implementations, determining the positions of the targets based on the AoA and AoD pairs includes utilizing a triangulation technique to determine the positions of the targets based on the AoA and AoD pairs.
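A minimal two-dimensional version of that triangulation intersects the departure ray from the transmitter with the arrival ray traced back from the receiver. The shared global bearing convention and the 2-D geometry are simplifying assumptions for illustration:

```python
import numpy as np

def triangulate(p_tx, p_rx, aod, aoa):
    """Triangulation sketch: the target lies where the transmit ray with
    bearing aod (from p_tx) meets the receive ray with bearing aoa (from
    p_rx). Angles are bearings in radians in a shared 2-D frame."""
    d_tx = np.array([np.cos(aod), np.sin(aod)])
    d_rx = np.array([np.cos(aoa), np.sin(aoa)])
    # Solve p_tx + t * d_tx = p_rx + s * d_rx as a 2x2 linear system.
    A = np.column_stack([d_tx, -d_rx])
    t, _s = np.linalg.solve(A, np.asarray(p_rx, float) - np.asarray(p_tx, float))
    return np.asarray(p_tx, float) + t * d_tx
```

With multiple AoA/AoD pairs, running this per pair yields one position per target.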

    [0068] As further shown in FIG. 4, process 400 may include performing one or more actions based on the positions of the targets (block 450). For example, the device may perform one or more actions based on the positions of the targets, as described above. In some implementations, performing the one or more actions includes one or more of monitoring traffic based on the positions of the targets, identifying one or more parking spots based on the positions of the targets, detecting one or more vehicles based on the positions of the targets, or detecting one or more pedestrians crossing one or more streets based on the positions of the targets.

    [0069] In some implementations, performing the one or more actions includes one or more of counting a quantity of people within an area based on the positions of the targets, detecting one or more unidentified drones based on the positions of the targets, or detecting a presence of people within one or more geo-fenced areas of a factory based on the positions of the targets. In some implementations, performing the one or more actions includes one or more of tracking one or more moving objects in a factory based on the positions of the targets, or preventing collisions between one or more of autonomous vehicles, robots, or people.

    [0070] In some implementations, process 400 includes receiving communication pilots, determining transmitter calibration coefficients for the other device and receiver calibration coefficients for the device based on the communication pilots, and calibrating a receiver antenna array of the device with the receiver calibration coefficients. In some implementations, process 400 includes providing the transmitter calibration coefficients to the other device to cause the other device to calibrate a transmitter antenna array of the other device with the transmitter calibration coefficients. In some implementations, the transmitter calibration coefficients and the receiver calibration coefficients are determined based on a line-of-sight path between a transmitter antenna array of the other device and the receiver antenna array. In some implementations, process 400 includes causing performance of a beam sweeping operation to scan a set of transmit target angular regions and to detect the targets.
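Because the pilot responses over a known line-of-sight path factor into a product of per-antenna transmit and receive gains, the two coefficient sets can be separated from a single pilot sweep. The sketch below does this by normalizing against antenna 0 on each side; that normalization convention, the noiseless model, and the rank-1 gain structure are all illustrative assumptions:

```python
import numpy as np

def calibration_coeffs(y, h_los):
    """Calibration sketch over a known LoS channel: y[j, i] is the response
    at receive antenna j to a unit pilot on transmit antenna i, and h_los is
    the ideal LoS response. Then y / h_los = outer(c_rx, c_tx), so relative
    coefficients follow from one row and one column."""
    g = y / h_los                 # g[j, i] = c_rx[j] * c_tx[i]
    c_rx = g[:, 0] / g[0, 0]      # receiver coefficients relative to antenna 0
    c_tx = g[0, :] / g[0, 0]      # transmitter coefficients relative to antenna 0
    return c_tx, c_rx
```

The device would then apply `c_rx` to its own receiver antenna array and feed `c_tx` back to the other device, matching the exchange described above.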

    [0071] Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.

    [0072] The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations.

    [0073] As used herein, the term component is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.

    [0074] As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, and/or the like.

    [0075] Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set.

    [0076] No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles "a" and "an" are intended to include one or more items and may be used interchangeably with "one or more." Further, as used herein, the article "the" is intended to include one or more items referenced in connection with the article "the" and may be used interchangeably with "the one or more." Furthermore, as used herein, the term "set" is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with "one or more." Where only one item is intended, the phrase "only one" or similar language is used. Also, as used herein, the terms "has," "have," "having," or the like are intended to be open-ended terms. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise. Also, as used herein, the term "or" is intended to be inclusive when used in a series and may be used interchangeably with "and/or," unless explicitly stated otherwise (e.g., if used in combination with "either" or "only one of").

    [0077] In the preceding specification, various example embodiments have been described with reference to the accompanying drawings. It will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.