G01S13/723

SYSTEMS AND METHODS FOR DETECTING CARRIED OBJECTS TO ADAPT VEHICLE ACCESS

Systems, methods, and other embodiments described herein relate to adapting vehicle access by detecting a person carrying an object. In one embodiment, a method includes detecting a person near a vehicle for gaining access. The method also includes scanning the person for an object using a radar of the vehicle, wherein information from the radar indicates densities of the person and the object. Upon detecting the object using the densities, the method further includes adapting the access to a compartment of the vehicle.
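The detect-then-adapt flow above might be sketched as follows. The density values, the threshold, and the compartment actions are illustrative assumptions, not details from the abstract.

```python
def adapt_access(person_density: float, object_density: float,
                 density_gap: float = 0.3) -> str:
    """Pick a vehicle access action from radar-derived densities.

    Hypothetical sketch: if the radar return attributed to a candidate
    object has a density that clearly separates from the person's,
    treat the person as carrying something and adapt compartment access.
    """
    carrying = abs(object_density - person_density) > density_gap
    if carrying:
        return "open_trunk"   # hands likely occupied: open the cargo compartment
    return "unlock_door"      # default access behavior

print(adapt_access(person_density=1.0, object_density=1.6))  # open_trunk
```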

THREE DIMENSIONAL OBJECT TRACKING USING COMBINATION OF RADAR SPEED DATA AND TWO DIMENSIONAL IMAGE DATA
20230091774 · 2023-03-23 ·

Methods and systems include, in at least one aspect: determining an optical model of an object in flight using two dimensional image data obtained from a camera, determining a radar model of the object in flight using radar data obtained from a radar device, combining the radar model with the optical model to produce three dimensional location information of the object in flight in three dimensional space, comparing the three dimensional location information of the object in flight with data representing an expected ball launch, and rejecting (or verifying) the object as an actual ball launch in response to the three dimensional location information of the object in flight differing (or not differing) from the data representing the expected ball launch by a threshold amount.
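A minimal sketch of the combine-and-verify steps, assuming the camera supplies an angular direction and the radar supplies range along that ray; the function names and the 0.5 m threshold are assumptions for illustration.

```python
import math

def combine_models(radar_range, azimuth, elevation):
    """Fuse a radar-derived range with a camera-derived direction
    into a 3-D location (x, y, z) for the object in flight."""
    x = radar_range * math.cos(elevation) * math.cos(azimuth)
    y = radar_range * math.cos(elevation) * math.sin(azimuth)
    z = radar_range * math.sin(elevation)
    return (x, y, z)

def verify_launch(track_xyz, expected_xyz, threshold=0.5):
    """Verify (True) or reject (False) the track as an actual ball launch,
    depending on whether the 3-D location differs from the expected
    launch by more than the threshold amount."""
    return math.dist(track_xyz, expected_xyz) <= threshold
```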

SYSTEMS AND METHODS FOR HIGH VELOCITY RESOLUTION HIGH UPDATE RATE RADAR FOR AUTONOMOUS VEHICLES
20230085887 · 2023-03-23 ·

An autonomous vehicle (AV) includes a radar sensor system and a computing system that computes velocities of an object in a driving environment of the AV based upon radar data that is representative of radar returns received by the radar sensor system. The AV can be configured to compute a first velocity of the object based upon first radar data that is representative of the radar return from a first time to a second time. The AV can further be configured to compute a second velocity of the object based upon second radar data that includes at least a portion of the first radar data and further includes additional radar data representative of a radar return received subsequent to the second time. The AV can further be configured to control one of a propulsion system, a steering system, or a braking system to effectuate motion of the AV based upon the computed velocities.
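The growing-window scheme above (a second velocity computed from the first window's radar data plus later returns) can be sketched with a least-squares slope over range samples; the sample data and names are assumptions for illustration, not the patent's implementation.

```python
def fit_velocity(times, ranges):
    """Least-squares slope of range over time = radial velocity estimate."""
    n = len(times)
    t_mean = sum(times) / n
    r_mean = sum(ranges) / n
    num = sum((t - t_mean) * (r - r_mean) for t, r in zip(times, ranges))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# First velocity: returns received from a first time to a second time.
times = [0.0, 0.1, 0.2, 0.3]
ranges = [50.0, 49.8, 49.6, 49.4]   # object closing at 2 m/s
v1 = fit_velocity(times, ranges)

# Second velocity: reuses at least a portion of the first radar data plus
# a return received subsequent to the second time.
times += [0.4]
ranges += [49.2]
v2 = fit_velocity(times, ranges)
```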

AUTOMATIC CROSS-SENSOR CALIBRATION USING OBJECT DETECTIONS

Certain aspects of the present disclosure provide techniques for sensor calibration. First sensor data is received from a first sensor and second sensor data is received from a second sensor, where the first sensor data and the second sensor data each indicate detected objects in a space. The first sensor data is transformed using a first transformation profile to convert the first sensor data to a coordinate frame of the second sensor data. The first transformation profile is refined based on a difference between the transformed first sensor data and the second sensor data.
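One minimal way to realize the refinement step, assuming a translation-only transformation profile and detections already matched between the two sensors; a real profile would typically also carry rotation and scale.

```python
def apply_profile(points, profile):
    """Transform sensor-A detections into sensor-B's coordinate frame."""
    tx, ty = profile
    return [(x + tx, y + ty) for x, y in points]

def refine_profile(profile, detections_a, detections_b):
    """Refine the profile based on the difference between the transformed
    first sensor data and the second sensor data: nudge the translation
    by the mean residual between matched detections."""
    transformed = apply_profile(detections_a, profile)
    n = len(transformed)
    dx = sum(bx - ax for (ax, _), (bx, _) in zip(transformed, detections_b)) / n
    dy = sum(by - ay for (_, ay), (_, by) in zip(transformed, detections_b)) / n
    return (profile[0] + dx, profile[1] + dy)
```

For a pure translation offset this converges in a single step; with rotation in play, the same residual-driven loop would be iterated.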

Radar-tracked object velocity and/or yaw

Some radar sensors may provide a Doppler measurement indicating a velocity of an object relative to a velocity of the radar sensor. Techniques for determining a two-or-more-dimensional velocity from one or more radar measurements associated with an object may comprise determining a data structure that comprises a yaw assumption and a set of weights to tune the influence of the yaw assumption. Determining the two-or-more-dimensional velocity may further comprise using the data structure as part of a regression algorithm to determine a velocity and/or yaw rate associated with the object.
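The yaw-assumption regression can be sketched as a weighted least-squares fit over Doppler returns: each return constrains the velocity component along its azimuth, and the yaw assumption enters as a weighted pseudo-observation. The two-unknown simplification (no yaw-rate term) and all names are assumptions for illustration.

```python
import math

def solve_velocity(dopplers, azimuths, yaw_assumption, yaw_weight):
    """Weighted least squares for (vx, vy) from Doppler returns.

    Each return i contributes a row: d_i = vx*cos(a_i) + vy*sin(a_i).
    The yaw assumption contributes a weighted pseudo-observation saying
    the velocity component perpendicular to the assumed heading is ~0;
    yaw_weight tunes how strongly that assumption influences the fit.
    """
    rows = [(math.cos(a), math.sin(a), d) for a, d in zip(azimuths, dopplers)]
    rows.append((yaw_weight * math.sin(yaw_assumption),
                 -yaw_weight * math.cos(yaw_assumption), 0.0))
    # Normal equations for the two unknowns
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    vx = (s22 * b1 - s12 * b2) / det
    vy = (s11 * b2 - s12 * b1) / det
    return vx, vy
```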

Method for Determining Spin of a Projectile
20230082660 · 2023-03-16 ·

A method for estimating a spin of a projectile, the method comprising obtaining a first data series representing a radial velocity of a projectile over time in accordance with a radar signal reflected from the projectile, subtracting a center velocity of the first data series from the first data series to form a second data series representing a variation of the radial velocity of the projectile around the center velocity over time, dividing the second data series into respective time intervals, estimating, for each of the time intervals of the second data series, a frequency of the variation of the radial velocity of the projectile around the center velocity, and determining a spin of the projectile based on the estimated frequencies of the variation of the radial velocity of the projectile.
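The pipeline in the abstract (subtract the center velocity, split the variation into intervals, estimate a frequency per interval, combine) can be sketched as below. Zero-crossing counting stands in for the per-interval frequency estimation; a real implementation might use an FFT peak instead, and all names are illustrative.

```python
import math

def estimate_spin(radial_velocity, sample_rate, n_intervals=4):
    """Estimate projectile spin (Hz) from a radial-velocity data series."""
    # Second data series: variation around the center velocity
    center = sum(radial_velocity) / len(radial_velocity)
    variation = [v - center for v in radial_velocity]
    # Divide into respective time intervals and estimate a frequency in each
    size = len(variation) // n_intervals
    freqs = []
    for i in range(n_intervals):
        chunk = variation[i * size:(i + 1) * size]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
        # each full oscillation cycle produces two zero crossings
        freqs.append(crossings * sample_rate / (2 * len(chunk)))
    # Determine the spin from the per-interval estimates
    return sum(freqs) / len(freqs)

# Synthetic check: a 47 Hz spin modulation on a 30 m/s center velocity
rate = 1000
series = [30.0 + 2.0 * math.sin(2 * math.pi * 47 * n / rate) for n in range(rate)]
spin_hz = estimate_spin(series, rate)
```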

METHOD AND ELECTRONIC DEVICE FOR PROVIDING PERSONALIZED RESPONSE SUGGESTIONS TO NOTIFICATION EVENT

Embodiments herein provide a method for providing personalized response suggestions to a user for a notification event using an electronic device. The method includes detecting the notification event associated with a user. The method includes authenticating the presence of the user for the notification event based on Ultra-Wide Band (UWB) signal data obtained from the electronic device and/or one or more IoT devices. The method includes determining the current activity of the user based on the UWB signal data in response to a successful authentication. The method includes correlating the current activity with a plurality of activities performed by the user in the past, and with a past interaction pattern of the user in connection with events substantially similar in nature to the notification event. The method includes generating one or more auto-response suggestions for the notification event as a result of the correlation.

SYSTEM AND METHOD FOR OBJECT MONITORING, LOCALIZATION, AND CONTROLLING

The present invention is a system and method to monitor, localize, and control objects in an implicitly defined geofenced area and to determine object position and orientation (vehicle only) by capturing the object's image, 3D data, or position using camera, LiDAR, and/or RADAR sensors that are installed on structures mounted on the ground. The sensors capture the image, 3D data points, and distance of the surface points, which are processed to ultimately obtain 3D data of the surface points of the object. The 3D data points from different sensors are then combined, or fused, by a controller to obtain a single set of 3D points, called fusion data, under one coordinate system such as the GPS coordinate system. The single set of 3D points is then processed by the controller using a deep neural network and/or other algorithms to obtain the position and orientation of the object. Additionally, the controller or sensors can send current and desired future object positions and orientations to controllable objects. The controller and/or sensors can send site image data to a scene marking device and receive marked image data for geofenced monitoring of objects. The controller or sensors send an alert to devices if objects are detected, or abnormal behavior of objects is detected, within the geofenced area.
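The fusion step (combining per-sensor 3D points into a single set under one coordinate system) might look like the sketch below, assuming each sensor's extrinsics into the common frame are known; the names are illustrative, and a real system would also deduplicate and timestamp points.

```python
def fuse_points(sensor_clouds, extrinsics):
    """Combine 3-D points from multiple sensors into one set ("fusion data")
    under a single coordinate system.

    sensor_clouds: {sensor_id: [(x, y, z), ...]}
    extrinsics:    {sensor_id: (R, t)} with R a 3x3 rotation and t a
                   3-vector mapping sensor coordinates to the common frame.
    """
    fused = []
    for sensor_id, points in sensor_clouds.items():
        R, t = extrinsics[sensor_id]
        for p in points:
            q = tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                      for i in range(3))
            fused.append(q)
    return fused
```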

VEHICLE CONTROL DEVICE, VEHICLE, VEHICLE CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

A vehicle control device is mountable on a vehicle. The vehicle control device includes: a processor; and a memory storing instructions that, when executed by the processor, cause the vehicle control device to perform operations including: acquiring detection information obtained by detecting an obstacle around the vehicle; performing collision determination of evaluating a possibility of collision with the obstacle; generating, based on the detection information, information on an approaching object that is an obstacle approaching the vehicle and information on a detection point indicating an obstacle that does not move; estimating a position of a shielding object based on the information on the detection point; evaluating, based on the position of the shielding object and the information on the approaching object, a ghost likelihood indicating a possibility that the approaching object is a ghost; and excluding, based on the ghost likelihood, the approaching object from the collision determination.
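The ghost-likelihood evaluation could be sketched with a simple heuristic: a multipath ghost typically appears on the far side of a shielding object along the sensor's line of sight, so an approaching object that lines up behind the shield scores high. The geometry test and threshold here are illustrative assumptions, not the patent's actual evaluation.

```python
import math

def ghost_likelihood(approach_pos, shield_pos, sensor_pos=(0.0, 0.0)):
    """Score (0..1) how well an approaching object lines up behind a
    shielding object as seen from the sensor."""
    ax, ay = approach_pos[0] - sensor_pos[0], approach_pos[1] - sensor_pos[1]
    sx, sy = shield_pos[0] - sensor_pos[0], shield_pos[1] - sensor_pos[1]
    ra, rs = math.hypot(ax, ay), math.hypot(sx, sy)
    alignment = (ax * sx + ay * sy) / (ra * rs)   # cos of bearing difference
    behind = ra > rs                              # farther than the shield
    return alignment if behind and alignment > 0 else 0.0

def filter_collision_candidates(objects, shield_pos, threshold=0.95):
    """Exclude high-ghost-likelihood objects from collision determination."""
    return [o for o in objects if ghost_likelihood(o, shield_pos) < threshold]
```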

DEEP LEARNING METHOD OF DETERMINING GOLF CLUB PARAMETERS FROM BOTH RADAR SIGNAL AND IMAGE DATA

An example method of modeling a portion of a golf club and a golf swing includes scanning the golf club to obtain scanning information, training a convolutional neural network using the scanning information, using at least one camera to obtain a series of images, converting the series of images into parameterized motion representations, using at least one radar to obtain a radar signal, converting the radar signal into time-frequency images, inputting the parameterized motion representations and the time-frequency images into the convolutional neural network, receiving golf club parameters and golf swing parameters as an output of the convolutional neural network, and generating a visual model of the golf club and the golf swing in a virtual space using the golf club parameters and the golf swing parameters.
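The radar-signal-to-time-frequency-image conversion step can be illustrated with a naive short-time DFT; a short-time Fourier transform is one common choice for such conversion, and the window/hop parameters here are assumptions, not the patent's.

```python
import cmath
import math

def spectrogram(signal, window=32, hop=16):
    """Convert a 1-D radar signal into a time-frequency image:
    rows = time frames, columns = frequency-bin magnitudes."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = signal[start:start + window]
        frame = []
        for k in range(window // 2):
            coeff = sum(x * cmath.exp(-2j * cmath.pi * k * n / window)
                        for n, x in enumerate(chunk))
            frame.append(abs(coeff))
        frames.append(frame)
    return frames

# A pure tone at 4 cycles per window should peak in frequency bin 4
tone = [math.cos(2 * math.pi * 4 * n / 32) for n in range(64)]
image = spectrogram(tone)
```

Images like these, alongside the parameterized motion representations from the camera, would then form the convolutional network's input.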