Method and system for data optimization
Embodiments can provide an intelligent vehicle that determines that the vehicle interior comprises multiple occupants; identifies the occupants; based on the driving behavior of the vehicle, creates a composite occupant profile associated with the group of plural occupants, the composite occupant profile comprising the identities of the plural occupants and group preferences for various vendor products or services; and, based on the composite occupant profile and received inputs from a user interface, an automatic vehicle location system, and a plurality of sensors in the vehicle, performs one or more actions, such as: (a) proposing one or more vendor products or services for the group of occupants; (b) publishing the vendor products or services selected by the group of occupants, via a social network, to associated or selected associates of the occupants in the group; and (c) presenting advertisement information from a vendor server associated with the proposed or selected vendor products or services to one or more of the occupants in the group.
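The group-preference portion of such a composite occupant profile could be sketched as a merge of per-occupant preference sets. This is a minimal illustration only; the function and field names below are invented and do not come from the abstract:

```python
from collections import Counter

def composite_profile(occupant_prefs):
    """Merge per-occupant preference sets into a composite group profile.

    occupant_prefs: dict mapping occupant identity -> set of preferred
    product/service categories. Returns (identities, group_preferences),
    where group preferences are the categories shared by every occupant,
    falling back to majority picks when nothing is shared by all.
    """
    counts = Counter()
    for prefs in occupant_prefs.values():
        counts.update(prefs)
    n = len(occupant_prefs)
    shared = {c for c, k in counts.items() if k == n}
    majority = {c for c, k in counts.items() if k > n / 2}
    return sorted(occupant_prefs), sorted(shared or majority)

ids, prefs = composite_profile({
    "alice": {"coffee", "sushi"},
    "bob": {"coffee", "pizza"},
    "carol": {"coffee", "sushi"},
})
```

Here all three occupants share "coffee", so the composite profile proposes coffee vendors to the group.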
MULTI-MODAL SENSOR CALIBRATION FOR IN-CABIN MONITORING SYSTEMS AND APPLICATIONS
In various examples, calibration techniques for interior depth sensors and image sensors for in-cabin monitoring systems and applications are provided. An intermediary coordinate system may be generated using calibration targets distributed within an interior space to reference 3D positions of features detected by both depth-perception and optical image sensors. Rotation-translation transforms may then be determined: a first transform (H1) between the depth-perception sensor's 3D coordinate system and the 3D intermediary coordinate system, and a second transform (H2) between the optical image sensor's 2D coordinate system and the intermediary coordinate system. A third transform (H3) between the depth-perception sensor's 3D coordinate system and the optical image sensor's 2D coordinate system can be computed as a function of H1 and H2. The calibration targets may comprise a structural substrate that includes one or more fiducial point markers and one or more motion targets.
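The H1/H2/H3 composition can be sketched numerically. Note this is a simplified illustration, not the patented method: a real H2 would involve a camera projection from 3D to 2D, whereas here both transforms are modeled as 4x4 rigid (rotation-translation) transforms so the composition is a plain matrix product:

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# H1: depth-sensor frame -> intermediary frame (estimated from fiducial targets)
H1 = rigid(np.eye(3), [0.1, 0.0, 0.5])

# H2: camera frame -> intermediary frame (simplified here to a rigid transform)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
H2 = rigid(Rz, [0.0, 0.2, 0.5])

# H3: depth-sensor frame -> camera frame, obtained without a direct
# depth-to-camera calibration: x_cam = inv(H2) @ H1 @ x_depth.
H3 = np.linalg.inv(H2) @ H1
```

By construction, mapping a depth-frame point through H3 and then H2 lands on the same intermediary-frame point as mapping it through H1 directly, which is the consistency property the intermediary coordinate system provides.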
Systems and methods for controlling vehicles with navigation markers
Systems, methods, and computer-readable media are disclosed for controlling one or more vehicles with the use of navigation markers positioned on or integrated into a ground surface. A vehicle, such as an autonomous vehicle, may include a light detection assembly, which may include a light emitter, an optical filter, an optical sensor, and an analog-to-digital converter, and optionally may include a lens. The light emitter may emit light towards the ground surface, which may illuminate the navigation marker and cause the navigation marker to emit light that passes through the optical filter and is ultimately sensed by the optical sensor. The vehicle may determine that the light was emitted by the navigation marker and, in response, perform a predetermined action.
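The detection step could plausibly reduce to thresholding the filtered sensor readings against ambient light. The function names and the threshold rule below are hypothetical, not taken from the disclosure:

```python
def detect_marker(adc_samples, ambient_baseline, threshold=3.0):
    """Decide whether filtered light originates from a navigation marker.

    adc_samples: readings from the optical sensor's A/D converter while the
    emitter is on. A marker is assumed present when the mean reading exceeds
    the ambient baseline by a factor of `threshold` (hypothetical rule).
    """
    mean = sum(adc_samples) / len(adc_samples)
    return mean > ambient_baseline * threshold

def on_marker(action):
    """Stand-in for the vehicle's predetermined action (slow, stop, reroute)."""
    return f"executing: {action}"

result = None
if detect_marker([410, 395, 402], ambient_baseline=100):
    result = on_marker("reduce_speed")
```

The optical filter's role in this sketch is implicit: it is what justifies comparing the filtered samples against a single ambient baseline rather than modeling the full spectrum.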
Vehicle autonomous collision prediction and escaping system (ACE)
Embodiments herein relate to an autonomous vehicle or self-driving vehicle. The system can determine a collision avoidance path by: 1) predicting the behavior/trajectory of other moving objects (and identifying stationary objects); 2) given the driving trajectory (issued by the autonomous driving system) or predicted driving trajectory (of a human driver), calculating the probability of a collision between the vehicle and one or more objects; and 3) finding a path that minimizes the collision probability.
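Steps 2 and 3 can be sketched with a deliberately crude probability model. The exponential-distance proxy below is an assumption for illustration; the patent does not specify how the probability is computed:

```python
import math

def collision_probability(path, obstacle_trajectories, scale=1.0):
    """Crude proxy: probability rises as the minimum predicted distance
    between the ego path and any obstacle trajectory shrinks.

    path / trajectories: lists of (x, y) waypoints, one per timestep.
    """
    d_min = min(
        math.dist(p, o)
        for traj in obstacle_trajectories
        for p, o in zip(path, traj)
    )
    return math.exp(-d_min / scale)

def escape_path(candidate_paths, obstacle_trajectories):
    """Step 3: pick the candidate path with the lowest collision probability."""
    return min(
        candidate_paths,
        key=lambda path: collision_probability(path, obstacle_trajectories),
    )

safe = escape_path(
    [[(0, 1), (1, 1), (2, 1)], [(0, 5), (1, 5), (2, 5)]],
    [[(0, 0), (1, 0), (2, 0)]],
)
```

The second candidate stays five units from the predicted obstacle trajectory versus one unit for the first, so it is selected as the escape path.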
Control system for moving object
A controller causes a calculation device to perform calculation that determines an operation of a moving object and generates digital signals that define the operations of actuators. The generated digital signals are output to a digital signal transmission path by a signal bus control IC. ICs attached to the actuators obtain digital signals that define the operations of the actuators from the digital signal transmission path and generate control signals for the actuators based on the operations defined by the digital signals.
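The publish/subscribe shape of that signal path can be sketched loosely in software. The class and field names are invented analogies for the bus control IC and the actuator-attached ICs, not anything specified in the abstract:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActuatorCommand:
    """A digital signal defining one actuator's operation (hypothetical frame)."""
    actuator_id: int
    operation: str
    value: float

class SignalBus:
    """Stand-in for the digital signal transmission path."""
    def __init__(self):
        self._frames = []

    def publish(self, cmd):
        # signal-bus control IC side: place the frame on the shared path
        self._frames.append(cmd)

    def read(self, actuator_id):
        # actuator-attached IC side: pick up only this actuator's frames
        return [f for f in self._frames if f.actuator_id == actuator_id]

bus = SignalBus()
bus.publish(ActuatorCommand(1, "steer", 0.12))
bus.publish(ActuatorCommand(2, "brake", 0.40))
steer_cmds = bus.read(1)  # the steering IC sees only its own frames
```

The key property mirrored here is that the calculation device never addresses actuators directly; it only emits frames, and each actuator-side IC filters the shared path for the operations assigned to it.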
Method for assisted or at least semi-automated driving of a motor vehicle
The present disclosure relates to a method of driving a motor vehicle. The method includes determining position data relating to at least one of a current position or a predicted future position of the motor vehicle, detecting, using at least one sensor device, surroundings data relating to a surrounding environment of the motor vehicle, determining at least one driving intervention based on the surroundings data, and controlling at least one vehicle system of the motor vehicle to execute the determined at least one driving intervention. The at least one driving intervention is executed by a software module that is selected based on the surroundings data, the software module being selected from a plurality of software modules based on the position data. Each software module is configured to execute the at least one driving intervention.
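The two-stage selection (position data narrows the candidate modules, surroundings data picks the one to execute) can be sketched as a small dispatch table. The area names and module names are hypothetical placeholders:

```python
# Hypothetical registry: position data maps to a candidate module set.
MODULES_BY_AREA = {
    "highway": ["acc_module", "lane_keep_module"],
    "urban":   ["intersection_module", "pedestrian_module"],
}

def select_module(position_area, surroundings):
    """Narrow candidates by position, then pick the module by surroundings.

    position_area: label derived from current/predicted position data.
    surroundings: dict of sensor-derived facts about the environment.
    """
    candidates = MODULES_BY_AREA[position_area]
    if surroundings.get("pedestrians") and "pedestrian_module" in candidates:
        return "pedestrian_module"
    # default: first candidate for this position bucket
    return candidates[0]
```

A highway position never yields the pedestrian module regardless of surroundings, which reflects the claim that the plurality of selectable modules is itself determined by the position data.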
CONTROL SYSTEMS, CONTROL METHODS AND CONTROLLERS FOR AN AUTONOMOUS VEHICLE
Systems and methods are provided for controlling an autonomous vehicle (AV). A feature map generator module generates a feature map (FM). Based on the FM, a perception map generator module generates a perception map (PM). A scene understanding module selects from a plurality of sensorimotor primitive modules (SPMs), based on the FM, a particular combination of SPMs to be enabled and executed for the particular driving scenario (PDS). Each SPM maps information from either the FM or the PM to a vehicle trajectory and speed profile (VTSP) for automatically controlling the AV to cause the AV to perform a specific driving maneuver. Each SPM of the particular combination addresses a sub-task in a sequence of sub-tasks that address the PDS, and each is retrieved from memory and executed to generate a corresponding VTSP.
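The scene-understanding/primitive pipeline can be sketched as a selector over a registry of callables. All names here are illustrative stand-ins, and the primitives are trivial lambdas rather than real maneuver planners:

```python
def plan(feature_map, perception_map, scene_understanding, primitives):
    """Enable the SPMs that scene understanding selects for the scenario and
    run each one to get a trajectory-and-speed profile for its sub-task."""
    enabled = scene_understanding(feature_map)  # ordered sub-task names
    return [primitives[name](feature_map, perception_map) for name in enabled]

# Hypothetical SPMs: each maps map data to a (trajectory, speed) profile.
primitives = {
    "follow_lead": lambda fm, pm: ("keep_lane", 22.0),
    "lane_change": lambda fm, pm: ("shift_left", 25.0),
}

profiles = plan(
    feature_map={"lanes": 3},
    perception_map={"lead_vehicle": True},
    scene_understanding=lambda fm: ["follow_lead", "lane_change"],
    primitives=primitives,
)
```

The ordering of the returned profiles mirrors the sequence of sub-tasks: first hold the lane behind the lead vehicle, then perform the lane change.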
Method for guiding a vehicle system in a fully automated manner, and motor vehicle
The invention relates to a method for operating a motor vehicle system which is designed to guide the motor vehicle in different driving situation classes in a fully automated manner. The method includes ascertaining a current driving situation class from multiple specified driving situation classes using at least some of the driving situation data, wherein each driving situation class is assigned at least one analysis function. The method further includes retrieving configuration parameter sets assigned to the analysis functions of the current driving situation class from a database and producing, by configuring configuration objects using the retrieved configuration parameter sets, analysis units which carry out the analysis functions and which have not yet been provided.
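The "produce only units not yet provided" behavior is essentially a caching factory keyed by analysis function. The sketch below assumes a database shaped as class-to-parameter-set mappings; that shape and all names are illustrative:

```python
class AnalysisUnitFactory:
    """Produce analysis units for the current driving-situation class,
    configuring only those not already provided (hypothetical sketch)."""

    def __init__(self, database):
        self._db = database   # situation class -> {analysis function: params}
        self._units = {}      # analysis units already provided, by function

    def units_for(self, situation_class):
        params = self._db[situation_class]
        for function, cfg in params.items():
            if function not in self._units:
                # configure a new unit only when none exists yet
                self._units[function] = {"function": function, "config": cfg}
        return [self._units[f] for f in params]

db = {
    "highway": {"lane_tracking": {"lookahead_m": 80}},
    "parking": {"obstacle_scan": {"range_m": 5}},
}
factory = AnalysisUnitFactory(db)
highway_units = factory.units_for("highway")
```

Asking for the same situation class twice returns the already-configured units rather than rebuilding them, which is the point of checking what "has not yet been provided".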
High fidelity simulations for autonomous vehicles based on retro-reflection metrology
Aspects and implementations of the present disclosure address shortcomings of existing technology by enabling autonomous vehicle simulations based on retro-reflection optical data. The subject matter of this specification can be implemented in, among other things, a method that involves initiating a simulation of an environment of an autonomous driving vehicle, the simulation including a plurality of simulated objects, each having an identification of a material type of the respective object. The method can further involve accessing simulated reflection data based on the plurality of simulated objects and retro-reflectivity data for the material types of the simulated objects, and determining, using an autonomous vehicle control system for the autonomous vehicle, a driving path relative to the simulated objects, the driving path based on the simulated reflection data.
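The material-type lookup at the heart of that simulation can be sketched as a table of retro-reflectivity coefficients applied per object. The table values and the inverse-square falloff are illustrative assumptions, not metrology data from the disclosure:

```python
# Hypothetical retro-reflectivity by material type: the fraction of emitted
# light returned back toward a lidar-style emitter.
RETRO_REFLECTIVITY = {"road_sign": 0.9, "asphalt": 0.05, "vehicle_paint": 0.2}

def simulated_return(intensity, material, distance):
    """Simulated reflection signal for one object: emitted intensity scaled
    by the material's retro-reflectivity and an inverse-square falloff."""
    return intensity * RETRO_REFLECTIVITY[material] / distance ** 2

returns = {
    obj: simulated_return(100.0, material, dist)
    for obj, (material, dist) in {
        "sign_1": ("road_sign", 10.0),
        "road":   ("asphalt", 5.0),
    }.items()
}
```

The control-system-under-test would then consume `returns` in place of real sensor data when determining a driving path, which is what makes material-aware retro-reflectivity matter for simulation fidelity.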
Sensor surface object detection methods and systems
Methods, devices, and systems of a sensor surface object detection system are provided. Output from sensors of a vehicle may be used to describe an environment around the vehicle. In the event that a sensor is obstructed by dirt, debris, or detritus, the sensor may not sufficiently describe the environment for autonomous control operations. The sensor surface object detection system may receive output from the sensors of the vehicle to determine whether any of the sensors are obstructed. The determination may be made by comparing the output of one sensor to another, determining whether the output of a sensor is within a predetermined threshold, or comparing characteristics of multiple sensor outputs to one another. When a sensor is determined to be obstructed, the system may send a command to a cleaning system to automatically remove the obstruction.
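The cross-sensor comparison strategy can be sketched as an outlier check against the median reading. The tolerance value and sensor names are hypothetical, and a real system would compare richer characteristics than a single scalar per sensor:

```python
def obstructed_sensors(readings, tolerance=0.5):
    """Flag sensors whose output deviates from the cross-sensor median by
    more than `tolerance` (a hypothetical agreement threshold).

    readings: dict mapping sensor name -> scalar summary of its output,
    e.g. a measured range that all sensors should roughly agree on.
    """
    values = sorted(readings.values())
    median = values[len(values) // 2]
    return {name for name, v in readings.items() if abs(v - median) > tolerance}

def maybe_clean(readings):
    """Issue a cleaning command for each sensor judged obstructed."""
    return [f"clean {name}" for name in sorted(obstructed_sensors(readings))]

commands = maybe_clean({"front_lidar": 10.1, "front_cam": 9.9, "left_cam": 2.0})
```

Here the left camera disagrees badly with the other two sensors, so only it receives a cleaning command; the agreeing majority is trusted to describe the environment.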