System and method for automatically detecting key behaviors by vehicles
RE049650 · 2023-09-12
Assignee
Inventors
- Jiajun Zhu (Palo Alto, CA, US)
- David I. Ferguson (San Francisco, CA, US)
- Dmitri A. Dolgov (Los Altos, CA, US)
CPC classification
- B60W60/00276
- B60W2050/0095
- B60W30/16
- B60W60/0015
- B60W30/00
- B60W2556/50
All codes fall under PERFORMING OPERATIONS; TRANSPORTING.
International classification
- B60W30/00
- B60W30/16
Abstract
Aspects of the disclosure relate generally to detecting discrete actions by traveling vehicles. The features described improve the safety, use, driver experience, and performance of autonomously controlled vehicles by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle. Specifically, an autonomous vehicle is capable of detecting and tracking nearby vehicles and is able to determine when these nearby vehicles have performed actions of interest by comparing their tracked movements with map data.
Claims
.[.1. A method comprising: controlling, by one or more computing devices, an autonomous vehicle in accordance with a first control strategy; receiving, by the one or more computing devices, sensor data indicating a detection of a first object; classifying, by the one or more computing devices, the first object based on the sensor data; accessing, by the one or more computing devices, behavior data based on a classification of the first object, wherein the behavior data identifies potential actions of the first object that are to result in a change in control strategy, and wherein at least one of the potential actions identified in the behavior data is the action of changing from traveling on a first road element to travelling on a second road element; determining, by the one or more computing devices, that the first object has performed an action identified in the behavior data; and based on the determination, altering the control strategy of the autonomous vehicle by the one or more computing devices..].
.[.2. The method of claim 1, wherein the first road element is a first lane of traffic on a roadway, and wherein the second road element is a second lane of traffic on the roadway..].
.[.3. The method of claim 1, wherein altering the control strategy of the autonomous vehicle comprises positioning the autonomous vehicle relative to the first object in a predefined manner..].
.[.4. The method of claim 1, wherein altering the control strategy of the autonomous vehicle comprises having the autonomous vehicle change from travelling in a first lane of traffic to travelling in a second lane of traffic..].
.[.5. The method of claim 1, wherein altering the control strategy of the autonomous vehicle comprises altering at least one of a position, heading, speed, and acceleration of the autonomous vehicle..].
.[.6. The method of claim 1, wherein the first object is classified as a vehicle..].
.[.7. The method of claim 6, wherein determining that the first object has performed an action identified in the behavior data further comprises determining that the first object has changed from travelling on the first road element to travelling on the second road element..].
.[.8. A method comprising: controlling, by one or more computing devices, an autonomous vehicle; receiving, by the one or more computing devices, sensor data indicating a position of a first object external to the autonomous vehicle; classifying, by the one or more computing devices, the first object based on the sensor data; accessing, by the one or more computing devices, map data having a plurality of road elements; comparing the sensor data with the map data; identifying, by the one or more computing devices, that the first object is travelling on a first road element from the plurality of road elements; determining, by the one or more computing devices, that based on the comparison of the sensor data with the map data, the first object has travelled from the first road element to a second road element; and altering, by the one or more computing devices, at least one of a position, heading, speed, and acceleration of the autonomous vehicle based on the determination that the first object has travelled from the first road element to the second road element..].
.[.9. The method of claim 8, wherein the first road element is a first lane of traffic on a roadway, and wherein the second road element is a second lane of traffic on the roadway..].
.[.10. The method of claim 8, wherein altering at least one of the position, heading, speed, and acceleration of the autonomous vehicle comprises positioning the autonomous vehicle relative to the first object in a predefined manner..].
.[.11. The method of claim 8, further comprising: receiving, by the one or more computing devices, a request to navigate between a first location and a second location; and autonomously navigating, by the one or more computing devices, the autonomous vehicle along a path between the first location and a second location; and wherein altering at least one of a position, heading, and speed of the autonomous vehicle, occurs while the autonomous vehicle is travelling along the path..].
.[.12. The method of claim 8, further comprising: determining a relative position of the autonomous vehicle with each of the one or more nearby vehicles, and wherein associating each of the one or more nearby vehicles with a road graph element is based on the relative position..].
.[.13. A system for controlling an autonomous vehicle, the system comprising: one or more sensors for detecting one or more vehicles in an autonomous vehicle's surroundings; and one or more processors configured to: control an autonomous vehicle in accordance with a first control strategy; receive sensor data indicating a detection of a first object; classify the first object based on the sensor data; access behavior data based on a classification of the first object, wherein the behavior data identifies potential actions of the first object that are to result in a change in control strategy, and wherein at least one of the potential actions of the first object is the first object changing from traveling on a first road element to travelling on a second road element; determine that the first object has performed an action identified in the behavior data; and alter the control strategy of the autonomous vehicle based on the determination..].
.[.14. The system of claim 13, wherein the first road element is a first lane of traffic on a roadway, and wherein the second road element is a second lane of traffic on the roadway..].
.[.15. The system of claim 13, wherein altering the control strategy of the autonomous vehicle comprises positioning the autonomous vehicle relative to the first object in a predefined manner..].
.[.16. The system of claim 13, wherein altering the control strategy of the autonomous vehicle comprises having the autonomous vehicle change from travelling in a first lane of traffic to travelling in a second lane of traffic..].
.[.17. The system of claim 13, wherein altering the control strategy of the autonomous vehicle comprises altering at least one of a position, heading, speed, and acceleration of the autonomous vehicle..].
.[.18. The system of claim 13, wherein the sensor data includes information relating to at least one of a position, heading, speed, and acceleration of the first object..].
.[.19. The system of claim 18, wherein determining that the first object has performed an action identified in the behavior data further comprises determining that the first object has changed from travelling on the first road element to travelling on the second road element..].
.[.20. The system of claim 13, wherein the one or more processors are further configured to: receive a request for navigation between a first location and a second location; and autonomously navigate the autonomous vehicle along a path between the first location and a second location; and wherein altering the control strategy occurs while the autonomous vehicle travels along the path..].
.Iadd.21. A method comprising: receiving, by one or more sensors of a vehicle configured to operate in a fully autonomous mode, sensor data of an external environment of the vehicle; filtering, by one or more computing devices of the vehicle, the received sensor data to identify one or more actions of interest of an object in the external environment, the filtering including filtering the received sensor data to only include instances where the object has performed an action of interest; determining, by the one or more computing devices, whether the vehicle cannot currently operate in the fully autonomous mode based on the identified one or more actions of interest; and upon determining by the one or more computing devices that the vehicle cannot currently operate in the fully autonomous mode, the one or more computing devices altering a control strategy; wherein altering the control strategy includes either (i) delaying changing to the fully autonomous mode, or (ii) ceding control of the vehicle to a remote operator. .Iaddend.
.Iadd.22. The method of claim 21, wherein the object is another vehicle. .Iaddend.
.Iadd.23. The method of claim 21, wherein delaying changing to the fully autonomous mode includes the one or more computing devices scanning the external environment to determine whether there are any obstacles affecting an ability of the vehicle to avoid a collision. .Iaddend.
.Iadd.24. The method of claim 21, wherein delaying changing to the fully autonomous mode includes the one or more computing devices requiring a driver of the vehicle to control steering or accelerating before entering into the fully autonomous mode. .Iaddend.
.Iadd.25. The method of claim 21, wherein ceding control of the vehicle to the remote operator includes sending the received sensor data to a remote party associated with the remote operator. .Iaddend.
.Iadd.26. The method of claim 25, further comprising transmitting data or imagery to a remote computing device in conjunction with ceding control to the remote operator. .Iaddend.
.Iadd.27. The method of claim 21, wherein the identified one or more actions of interest include another vehicle changing lanes. .Iaddend.
.Iadd.28. The method of claim 21, wherein the identified one or more actions of interest include another vehicle changing its route. .Iaddend.
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION
(9) Aspects of the disclosure relate generally to detecting instances when a vehicle has performed a discrete action of interest. In particular, a device implementing the disclosed system is capable of detecting surrounding vehicles using one or more sensors. The device may then determine when the surrounding vehicles have performed one of several predefined actions by comparing the sensor data with stored road graph data. The system described below may be implemented as part of an autonomous vehicle. In turn, the autonomous vehicle may react to the behavior of nearby objects in a way that decreases the likelihood of an accident and increases the efficiency of travel.
(10) As shown in
(11) The memory 130 stores information accessible by processor 120, including instructions 132 and data 134 that may be executed or otherwise used by the processor 120. The memory 130 may be of any type capable of storing information accessible by the processor, including a computer-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories. Systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media.
(12) The instructions 132 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. For example, the instructions may be stored as computer code on the computer-readable medium. In that regard, the terms “instructions” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computer language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods and routines of the instructions are explained in more detail below.
(13) The data 134 may be retrieved, stored or modified by processor 120 in accordance with the instructions 132. For instance, although the system and method is not limited by any particular data structure, the data may be stored in computer registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files. The data may also be formatted in any computer-readable format. By further way of example only, image data may be stored as bitmaps comprised of grids of pixels that are stored in accordance with formats that are compressed or uncompressed, lossless (e.g., BMP) or lossy (e.g., JPEG), and bitmap or vector-based (e.g., SVG), as well as computer instructions for drawing graphics. The data may comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, references to data stored in other areas of the same memory or different memories (including other network locations) or information that is used by a function to calculate the relevant data.
(14) The processor 120 may be any conventional processor, such as commercially available CPUs. Alternatively, the processor may be a dedicated device such as an ASIC or FPGA. Although
(15) In various of the aspects described herein, the processor may be located remote from the vehicle and communicate with the vehicle wirelessly. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others by a remote processor, including taking the steps necessary to execute a single maneuver.
(16) Computer 110 may include all of the components normally used in connection with a computer, such as a central processing unit (CPU), memory 130 (e.g., RAM and internal hard drives) storing data 134 and instructions such as a web browser, an electronic display 142 (e.g., a monitor having a screen, a small LCD touch-screen or any other electrical device that is operable to display information), user input (e.g., a mouse, keyboard, touch screen and/or microphone), as well as various sensors (e.g., a video camera) for gathering explicit (e.g., a gesture) or implicit (e.g. “the person is asleep”) information about the states and desires of a person.
(17) The vehicle may also include a geographic position component 144 in communication with computer 110 for determining the geographic location of the device. For example, the position component may include a GPS receiver to determine the device's latitude, longitude and/or altitude position. Other location systems such as laser-based localization systems, inertial-aided GPS, or camera-based localization may also be used to identify the location of the vehicle. The location of the vehicle may include an absolute geographical location, such as latitude, longitude, and altitude as well as relative location information, such as location relative to other cars immediately around it which can often be determined with less noise than absolute geographical location.
(18) The device may also include other features in communication with computer 110, such as an accelerometer, gyroscope or another direction/speed detection device 146 to determine the direction and speed of the vehicle or changes thereto. By way of example only, acceleration device 146 may determine its pitch, yaw or roll (or changes thereto) relative to the direction of gravity or a plane perpendicular thereto. The device may also track increases or decreases in speed and the direction of such changes. The device's location and orientation data as set forth herein may be provided automatically to the user, computer 110, other computers and combinations of the foregoing.
(19) The computer 110 may control the direction and speed of the vehicle by controlling various components. By way of example, if the vehicle is operating in a completely autonomous mode, computer 110 may cause the vehicle to accelerate (e.g., by increasing fuel or other energy provided to the engine), decelerate (e.g., by decreasing the fuel supplied to the engine or by applying brakes) and change direction (e.g., by turning the front two wheels).
(20) As shown in
(21) Computer 110 may use visual or audible cues to indicate whether computer 110 is obtaining valid data from the various sensors, whether the computer is partially or completely controlling the direction or speed of the car or both, whether there are any errors, etc. Vehicle 101 may also include a status indicating apparatus, such as status bar 230, to indicate the current status of vehicle 101. In the example of
(22) In one example, computer 110 may be an autonomous driving computing system capable of communicating with various components of the vehicle. Returning to
(24) The vehicle may include components for detecting objects external to the vehicle such as other vehicles, obstacles in the roadway, traffic signals, signs, trees, etc. The detection system may include lasers, sonar, radar, cameras or any other detection devices. For example, if the vehicle is a small passenger car, the car may include a laser mounted on the roof or other convenient location. In one aspect, the laser may measure the distance between the vehicle and the object surfaces facing the vehicle by spinning on its axis and changing its pitch. The vehicle may also include various radar detection units, such as those used for adaptive cruise control systems. The radar detection units may be located on the front and back of the car as well as on either side of the front bumper. In another example, a variety of cameras may be mounted on the car at distances from one another which are known so that the parallax from the different images may be used to compute the distance to various objects which are captured by 2 or more cameras. These sensors allow the vehicle to understand and potentially respond to its environment in order to maximize safety for passengers as well as objects or people in the environment.
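The parallax computation described above follows the standard pinhole stereo relation Z = f · B / d, where f is the focal length, B the baseline between the cameras, and d the disparity. A minimal sketch follows; the camera parameters and the helper name are illustrative assumptions, not values from the disclosure.

```python
def stereo_distance(focal_length_px, baseline_m, disparity_px):
    """Estimate the distance to an object captured by two cameras a
    known baseline apart, using Z = f * B / d. The parameter values in
    the example below are illustrative, not from the patent."""
    if disparity_px <= 0:
        # A zero disparity would mean the object appears in the same
        # place in both images, i.e. it is effectively at infinity.
        raise ValueError("object must appear shifted between the two images")
    return focal_length_px * baseline_m / disparity_px

# Example: cameras mounted 0.5 m apart, 700 px focal length, and an
# object shifted 10 px between the two images is roughly 35 m away.
distance_m = stereo_distance(700.0, 0.5, 10.0)
```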
(25) Many of these sensors provide data that is processed by the computer in real-time, that is, the sensors may continuously update their output to reflect the environment being sensed at or over a range of time, and continuously or as-demanded provide that updated output to the computer so that the computer can determine whether the vehicle's then-current direction or speed should be modified in response to the sensed environment.
(27) The vehicle may also include various radar detection units, such as those used for adaptive cruise control systems. The radar detection units may be located on the front and back of the car as well as on either side of the front bumper. As shown in the example of
(28) In another example, a variety of cameras may be mounted on the vehicle. The cameras may be mounted at predetermined distances so that the parallax from the images of 2 or more cameras may be used to compute the distance to various objects. As shown in
(29) Each sensor may be associated with a particular sensor field in which the sensor may be used to detect objects.
(32) In another example, an autonomous vehicle may include sonar devices, stereo cameras, a localization camera, a laser, and a radar detection unit each with different fields of view. The sonar may have a horizontal field of view of approximately 60 degrees for a maximum distance of approximately 6 meters. The stereo cameras may have an overlapping region with a horizontal field of view of approximately 50 degrees, a vertical field of view of approximately 10 degrees, and a maximum distance of approximately 30 meters. The localization camera may have a horizontal field of view of approximately 75 degrees, a vertical field of view of approximately 90 degrees and a maximum distance of approximately 10 meters. The laser may have a horizontal field of view of approximately 360 degrees, a vertical field of view of approximately 30 degrees, and a maximum distance of 100 meters. The radar may have a horizontal field of view of 60 degrees for the near beam, 30 degrees for the far beam, and a maximum distance of 200 meters.
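The sensor fields enumerated above can be represented as simple range/field-of-view pairs, and a detection tested for membership in a given field. The numbers below are taken from the example in the preceding paragraph; the centered-on-boresight geometry and the function itself are illustrative simplifications.

```python
# Horizontal field of view (degrees) and maximum range (meters) for
# each sensor, per the example above. Vertical extent is omitted in
# this simplified sketch.
SENSOR_FIELDS = {
    "sonar": (60.0, 6.0),
    "stereo_overlap": (50.0, 30.0),
    "localization_camera": (75.0, 10.0),
    "laser": (360.0, 100.0),
    "radar_near": (60.0, 200.0),
}

def in_field(sensor, bearing_deg, distance_m):
    """Return True if an object at the given bearing (measured from the
    sensor's boresight) and distance lies inside the sensor's field."""
    fov_deg, max_range_m = SENSOR_FIELDS[sensor]
    return abs(bearing_deg) <= fov_deg / 2 and distance_m <= max_range_m
```

For example, an object 5 m ahead and 20 degrees off-axis falls inside the sonar field, while the same bearing at 7 m is beyond the sonar's maximum distance.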
(33) The sensors described may be used to identify, track and predict the movements of pedestrians, bicycles, other vehicles, or objects in the roadway. For example, the sensors may provide the location and shape information of objects surrounding the vehicle to computer 110, which in turn may identify the object as another vehicle. The object's current movement may also be determined by the sensor (e.g., the component is a self-contained speed radar detector) or by the computer 110 based on information provided by the sensors (e.g., by comparing changes in the object's position data over time).
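Deriving an object's motion from changes in its position data over time, as the paragraph above describes, can be sketched as follows. This is a two-sample sketch; a real tracker would filter noise over many samples, and the coordinate convention here is an assumption.

```python
import math

def estimate_motion(p0, p1, dt):
    """Estimate an object's speed (m/s) and heading (degrees, 0 = east,
    counterclockwise positive) from two (x, y) positions taken dt
    seconds apart, as computer 110 might when a sensor reports position
    only. A minimal illustrative sketch."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / dt          # straight-line distance / time
    heading = math.degrees(math.atan2(dy, dx))
    return speed, heading

# An object moving from (0, 0) to (3, 4) in one second is doing 5 m/s
# at roughly 53 degrees north of east.
speed, heading = estimate_motion((0.0, 0.0), (3.0, 4.0), 1.0)
```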
(34) The computer may change the vehicle's current path and speed based on the presence of detected objects. For example, the vehicle may automatically slow down if its current speed is 50 mph and it detects, by using its cameras and using optical-character recognition, that it will shortly pass a sign indicating that the speed limit is 35 mph. Yet further, if the computer determines that an object is obstructing the intended path of the vehicle, it may maneuver the vehicle around the obstruction.
(35) In accordance with one aspect, the autonomous vehicle's computer system 110 may identify when another detected vehicle has performed a particular action of interest.
(36) The position and movement data for the detected vehicles 510-550 may be stored in database 137 of the autonomous driving computer system, as shown in
(37) For example, database 138 may include a set of actions or behaviors of interest, such as the vehicle changing lanes or routes, and instructions 132 may allow for computer system 110 to identify when a detected vehicle has performed one or more of the behaviors of interest. In particular, computer system 110 may access the recorded position and movement stored in database 137, as well as a road graph of the environment stored in database 136. By combining both sets of data, computer system 110 may then determine when one or more of the key behaviors have occurred.
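Combining recorded positions with a road graph to detect a lane-change behavior of interest can be sketched as follows. The flat dictionary road graph and the lane boundaries are illustrative assumptions, not the actual format of databases 136 and 137.

```python
# Hypothetical road graph for a straight road segment: each lane
# element maps to the lateral x-range (meters) it occupies.
ROAD_GRAPH = {
    "lane_1": (0.0, 3.5),
    "lane_2": (3.5, 7.0),
}

def associate_lane(x):
    """Associate a detected vehicle's lateral position with a road
    graph element, or None if it lies outside the mapped roadway."""
    for lane, (lo, hi) in ROAD_GRAPH.items():
        if lo <= x < hi:
            return lane
    return None

def detect_lane_changes(positions):
    """Given successive lateral positions of one tracked vehicle,
    return the lane transitions it performed (behaviors of interest)."""
    changes = []
    lanes = [associate_lane(x) for x in positions]
    for prev, cur in zip(lanes, lanes[1:]):
        if prev and cur and prev != cur:
            changes.append((prev, cur))
    return changes
```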
(38) Returning to
(39) In this way, vehicle 101 may associate and track all surrounding vehicles with a particular road graph element, such as a lane of travel or intersection. For example, dashed line 630 in
(40) Vehicle 101 may also filter the data collected for vehicle 510 so that it only contains instances where vehicle 510 has performed an action of interest. As provided by dotted line 630 on map 600, vehicle 510 changes its heading around point 640, as it begins to travel from a north-west direction to a more south-west direction. While vehicle 101 will collect data regarding vehicle 510's change in heading, computer 110 will also determine that the change in heading does not correspond to an action of interest, as vehicle 510 merely travels along the same road graph element. Vehicle 101 may, in turn, exclude the data corresponding to vehicle 510's change in heading at point 640 from being recorded as an action of interest.
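The filtering rule described above, where a heading change within the same road graph element is discarded while a change of element is kept, can be sketched as follows. The observation tuple format is an illustrative assumption.

```python
def filter_actions_of_interest(observations):
    """Keep only observations where the tracked vehicle moved to a
    different road graph element, discarding e.g. a heading change at
    a bend in the same road (like vehicle 510 at point 640). Each
    observation is (timestamp, road_element, heading_deg); this format
    is assumed for illustration."""
    kept = []
    for prev, cur in zip(observations, observations[1:]):
        if prev[1] != cur[1]:   # road element changed -> of interest
            kept.append(cur)
    return kept
```

Applied to a track that bends within one element before crossing into another, only the element transition survives the filter.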
(41) In another embodiment, autonomous vehicle 101 may transport itself, passengers, and/or cargo between two locations by following a route. For example, a driver may input a destination and activate an autonomous mode of the vehicle. In response, the vehicle's computer 110 may calculate a route using a map, its current location, and the destination. Based on the route (or as part of the route generation), the vehicle may determine a control strategy for controlling the vehicle along the route to the destination. In accordance with one embodiment, computer system 110 may control the autonomous vehicle 101 to take particular actions in response to the actions of the surrounding objects that have been identified as performing a behavior of interest. For example, by changing lanes as provided by arrow B2 of
(42) As another example, vehicle 520 may come to a stop for a period of time before making the left-hand turn designated by arrow A2. Computer system 110 may identify this action as a behavior of interest, depending on which road element vehicle 520 is travelling on. Specifically, if vehicle 520 is determined to be in a left-hand turn lane, vehicle 101 may not identify the fact that vehicle 520 has stopped as a behavior of interest. However, if vehicle 520 was travelling one lane over to the right, the fact that it has stopped could indicate that there is a backup ahead. Accordingly, vehicle 101 may adjust its control strategy based on which road graph element (e.g., lane) a vehicle is currently travelling on.
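The lane-dependent interpretation of a stopped vehicle described above can be sketched as a small decision function; the element-type labels and return values are hypothetical stand-ins for whatever the behavior data in database 138 would encode.

```python
def interpret_stop(road_element_type):
    """Decide whether a nearby vehicle having stopped is a behavior of
    interest, per the example above: a stop in a left-turn lane is
    expected, while a stop in a through lane may indicate a backup.
    Labels and return values are illustrative assumptions."""
    if road_element_type == "left_turn_lane":
        return None                      # expected; no strategy change
    if road_element_type == "through_lane":
        return "possible_backup_ahead"   # consider slowing or changing lanes
    return "unclassified_stop"           # fall back to cautious handling
```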
(43) Flow diagram 700 of
(44) For each vehicle that has been detected, it may be determined whether the vehicle can be associated with a particular road graph element (e.g., a road, lane of traffic, intersection, or other map element) contained in the map data (Block 730). For example, based on a detected vehicle's state information, computer 110 may determine that the detected vehicle is travelling within a particular lane of traffic represented in the road graph. Computer 110 may then track the detected vehicles as they travel along the associated road graph element, (Block 735) and may determine when one of the detected vehicles has performed a behavior of interest (Block 740). Based on determining that a detected vehicle has performed an action of interest, such as a lane change, computer 110 may then alter the control strategy of autonomous vehicle 101 (Block 745). Blocks 715 through 745 may then be repeated until autonomous vehicle 101 has reached its destination or the autonomous control has otherwise terminated (Block 750). In this way, vehicle 101 may further alter the control strategy upon any of the detected vehicles performing an action of interest.
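Blocks 715 through 745 described above can be sketched as a loop over sensor frames; the callback parameters are hypothetical stand-ins for the map data and behavior data, and the frame format is an illustrative assumption.

```python
def control_loop(sensor_frames, associate, is_behavior_of_interest):
    """Minimal sketch of blocks 715-745: for each sensor frame,
    associate each detected vehicle with a road graph element, track
    element transitions, and record when the control strategy should
    be altered. Returns (frame_index, vehicle_id) pairs."""
    last_element = {}
    strategy_changes = []
    for t, detections in enumerate(sensor_frames):        # detect (715/720)
        for vehicle_id, position in detections.items():
            element = associate(position)                 # associate (730)
            prev = last_element.get(vehicle_id)           # track (735)
            if prev and element and prev != element:
                if is_behavior_of_interest(prev, element):  # determine (740)
                    strategy_changes.append((t, vehicle_id))  # alter (745)
            last_element[vehicle_id] = element
    return strategy_changes
```

For example, feeding in two frames in which a tracked vehicle crosses from one lane element to another yields a single strategy-change event.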
(45) Vehicle 101 may include one or more user input devices that enable a user to provide information to the autonomous driving computer 110. For example, a user, such as a passenger, may input a destination (e.g., 123 Oak Street) into the navigation system using touch screen 217 or button inputs 219. In another example, a user may input a destination by speaking it. In that regard, the computer system may extract the destination from a user's spoken command.
(46) The various systems described above may be used by the computer to operate the vehicle and maneuver from one location to another. For example, a user may enter destination information into the navigation system, either manually or audibly. The vehicle may determine its location to a few inches based on a combination of the GPS receiver data, the sensor data, as well as the detailed map information. In response, the navigation system may generate a route between the present location of the vehicle and the destination.
(47) When the driver is ready to relinquish some level of control to the autonomous driving computer, the user may activate the computer. The computer may be activated, for example, by pressing a button or by manipulating a lever such as gear shifter 220. Rather than taking control immediately, the computer may scan the surroundings and determine whether there are any obstacles or objects in the immediate vicinity which may prohibit or reduce the ability of the vehicle to avoid a collision. In this regard, the computer may require that the driver continue controlling the vehicle manually or with some level of control (such as the steering or acceleration) before entering into a fully autonomous mode.
(48) Once the vehicle is able to maneuver safely without the assistance of the driver, the vehicle may become fully autonomous and continue to the destination. The driver may continue to assist the vehicle by controlling, for example, steering or whether the vehicle changes lanes, or the driver may take control of the vehicle immediately in the event of an emergency.
(49) The vehicle may continuously use the sensor data to identify objects, such as traffic signals, people, other vehicles, and other objects, in order to maneuver the vehicle to the destination and reduce the likelihood of a collision. The vehicle may use the map data to determine where traffic signals or other objects should appear and take actions, for example, by signaling turns or changing lanes. Once the vehicle has arrived at the destination, the vehicle may provide audible or visual cues to the driver, for example, by displaying “You have arrived” on one or more of the electronic displays.
(50) The vehicle may be only partially autonomous. For example, the driver may select to control one or more of the following: steering, acceleration, braking, and emergency braking.
(51) The vehicle may also have one or more user interfaces that allow the driver to indicate his or her driving style. For example, the vehicle may include a dial which controls the level of risk or aggressiveness that a driver would like the computer to use when controlling the vehicle. For example, a more aggressive driver may want to change lanes more often to pass cars, drive in the left lane on a highway, maneuver the vehicle closer to the surrounding vehicles, and drive faster than less aggressive drivers. A less aggressive driver may prefer for the vehicle to take more conservative actions, such as driving at or somewhat below the speed limit, avoiding congested highways, or avoiding populated areas in order to increase the level of safety. By manipulating the dial, the thresholds used by the computer to calculate whether to pass another car, drive closer to other vehicles, increase speed and the like may change. In other words, changing the dial may affect a number of different settings used by the computer during its decision making processes. A driver may also be permitted, via the user interface 225, to change individual settings that relate to the driver's preferences. In one embodiment, insurance rates for the driver or vehicle may be based on the style of the driving selected by the driver.
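The effect of the dial on the computer's thresholds can be sketched as a simple mapping; the specific parameters, ranges, and linear interpolation below are illustrative assumptions rather than settings named in the disclosure.

```python
def thresholds_for_dial(dial):
    """Map an aggressiveness dial setting in [0, 1] to a few of the
    decision thresholds the computer might consult, as described
    above. All parameter names and ranges are illustrative."""
    if not 0.0 <= dial <= 1.0:
        raise ValueError("dial must be between 0 and 1")
    return {
        # time gap (seconds) to keep behind surrounding vehicles
        "following_gap_s": 3.0 - 1.5 * dial,
        # offset (mph) relative to the speed limit when cruising
        "speed_offset_mph": -5.0 + 10.0 * dial,
        # minimum speed advantage (mph) before passing another car
        "pass_threshold_mph": 10.0 - 7.0 * dial,
    }
```

Turning the dial up shrinks the following gap and the passing threshold while raising the cruising speed, mirroring the behavior described for a more aggressive driver.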
(52) Aggressiveness settings may also be modified to reflect the type of vehicle and its passengers and cargo. For example, if an autonomous truck is transporting dangerous cargo (e.g., chemicals or flammable liquids), its aggressiveness settings may be less aggressive than a car carrying a single driver—even if the aggressiveness dials of both such a truck and car are set to “high.” Moreover, trucks traveling across long distances over narrow, unpaved, rugged or icy terrain may be placed in a more conservative mode in order to reduce the likelihood of a collision or other incident.
(53) In another example, the vehicle may include sport and non-sport modes which the user may select or deselect in order to change the aggressiveness of the ride. By way of example, while in “sport mode”, the vehicle may navigate through turns at the maximum speed that is safe, whereas in “non-sport mode”, the vehicle may navigate through turns at the maximum speed which results in g-forces that are relatively imperceptible by the passengers in the car.
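The "non-sport mode" limit on perceptible g-forces can be expressed with the standard lateral-acceleration relation a = v² / r, giving a maximum turn speed of v = √(a · r). The comfort and sport thresholds below are illustrative assumptions, not values stated in the disclosure.

```python
import math

G = 9.81  # standard gravity, m/s^2

def max_turn_speed(radius_m, lateral_g_limit):
    """Maximum speed through a turn of the given radius that keeps
    lateral acceleration (v^2 / r) within the given limit, expressed
    in g. Derived from v_max = sqrt(a_lat * r); the g limits used in
    the example below are assumed values."""
    return math.sqrt(lateral_g_limit * G * radius_m)

# For a 50 m radius turn, a barely perceptible 0.2 g limit allows
# roughly 9.9 m/s, while an assumed 0.6 g "sport mode" limit allows
# roughly 17.2 m/s.
comfort_speed = max_turn_speed(50.0, 0.2)
sport_speed = max_turn_speed(50.0, 0.6)
```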
(54) The vehicle's characteristics may also be adjusted based on whether the driver or the computer is in control of the vehicle. For example, when a person is driving manually the suspension may be made fairly stiff so that the person may “feel” the road and thus drive more responsively or comfortably, while, when the computer is driving, the suspension may be made somewhat softer so as to save energy and make for a more comfortable ride for passengers.
(55) The driver may also select to have his or her vehicle communicate with other devices. As shown in
(56) Vehicle 101 may also receive updated map or object data via network 820. For example, server 810 may provide vehicle 101 with new data relating to object classifications and behavior model information. Computer system 110, of
(57) As the number and usage of these autonomous vehicles increases, various sensors and features may be incorporated into the environment to increase the perception of the vehicle. For example, low-cost beacon transmitters may be placed on road signs, traffic signals, roads or other highway infrastructure components in order to improve the computer's ability to recognize these objects, their meaning, and state. Similarly, these features may also be used to provide additional information to the vehicle and driver, such as whether the driver is approaching a school or construction zone. In another example, magnets, RFID tags or other such items may be placed in the roadway to delineate the location of lanes, to identify the ground speed of the vehicle, or to increase the accuracy of the computer's location determination of the vehicle.
(58) Autonomous vehicles may also be controlled remotely. For example, if the driver is asleep, the sensor data may be sent to a third party so that the vehicle may continue to have a responsive operator. While delay and latency may make this type of telemetry driving difficult, it may for example be used in emergency situations or where the vehicle has gotten itself stuck. The vehicle may send data and images to a central office and allow a third party to remotely drive the vehicle for a short period until the emergency has passed or the vehicle is no longer stuck.
(59) As these and other variations and combinations of the features discussed above can be utilized without departing from the invention as defined by the claims, the foregoing description of exemplary embodiments should be taken by way of illustration rather than by way of limitation of the invention as defined by the claims. It will also be understood that the provision of examples of the invention (as well as clauses phrased as “such as,” “e.g.”, “including” and the like) should not be interpreted as limiting the invention to the specific examples; rather, the examples are intended to illustrate only some of many possible aspects.