Patent classifications
B60R1/22
ACCESSORY CONTROL SYSTEM FOR VEHICLE
An accessory control system for a vehicle includes a rearview mirror assembly having a display with a touch screen and a processor, and a control panel assembly connected with the rearview mirror assembly, the control panel assembly having a control panel and a plurality of switch modules in communication with a plurality of corresponding accessory modules; wherein the processor receives instructions and sends the instructions to the control panel, causing the control panel to activate the switch modules and, in response to the activation of the switch modules, to activate the corresponding accessory modules.
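The processor-to-panel-to-switch-to-accessory chain described above can be sketched as follows. This is a minimal illustration only; all class and method names are invented for the example and do not come from the patent.

```python
# Illustrative sketch of the described control flow: the mirror's
# processor forwards a touch-screen instruction to the control panel,
# which activates the matching switch module, which in turn activates
# its corresponding accessory module. Names are assumptions, not the
# patented implementation.

class AccessoryModule:
    def __init__(self, name):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True


class SwitchModule:
    """A switch module wired to one corresponding accessory module."""
    def __init__(self, accessory):
        self.accessory = accessory
        self.closed = False

    def activate(self):
        self.closed = True
        # Activation of the switch activates the accessory module.
        self.accessory.activate()


class ControlPanel:
    def __init__(self, switches):
        self.switches = switches  # instruction name -> SwitchModule

    def handle(self, instruction):
        self.switches[instruction].activate()


class MirrorProcessor:
    """Receives touch-screen instructions and relays them to the panel."""
    def __init__(self, panel):
        self.panel = panel

    def receive_instruction(self, instruction):
        self.panel.handle(instruction)


fog = AccessoryModule("fog_lamp")
processor = MirrorProcessor(ControlPanel({"fog_lamp": SwitchModule(fog)}))
processor.receive_instruction("fog_lamp")
print(fog.active)  # True
```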
CROWD-SOURCED 3D POINTS AND POINT CLOUD ALIGNMENT
Systems and methods are provided for vehicle navigation. In one implementation, a host vehicle-based sparse map feature harvester system may include at least one processor programmed to receive a plurality of images captured by a camera onboard the host vehicle as the host vehicle travels along a road segment in a first direction, wherein the plurality of images are representative of an environment of the host vehicle; detect one or more semantic features represented in one or more of the plurality of images, the one or more semantic features each being associated with a predetermined object type classification; identify at least one position descriptor associated with each of the detected one or more semantic features; identify three-dimensional feature points associated with one or more detected objects represented in at least one of the plurality of images; receive position information, for each of the plurality of images, wherein the position information is indicative of a position of the camera when each of the plurality of images was captured; and cause transmission of drive information for the road segment to an entity remotely-located relative to the host vehicle, wherein the drive information includes the identified at least one position descriptor associated with each of the detected one or more semantic features, the identified three-dimensional feature points, and the position information.
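The harvesting steps in the abstract (detect semantic features, collect 3D feature points, attach per-image camera positions, and bundle everything into drive information for transmission) can be sketched as below. The detector and descriptor inputs are stand-in data structures chosen for this example; only the shape of the payload is illustrated.

```python
# Hedged sketch of the sparse-map harvesting step. Each frame dict is an
# assumed stand-in for the real detection outputs:
#   'semantic' -> list of (object_type, position_descriptor)
#   'points3d' -> list of (x, y, z) feature points on detected objects
#   'cam_pose' -> (x, y, z) camera position when the image was captured

def harvest_drive_info(frames):
    """Bundle drive information for one road segment, to be sent to a
    remotely located entity."""
    semantic_features = []
    feature_points = []
    camera_positions = []
    for frame in frames:
        for obj_type, descriptor in frame["semantic"]:
            semantic_features.append({"type": obj_type,
                                      "descriptor": descriptor})
        feature_points.extend(frame["points3d"])
        camera_positions.append(frame["cam_pose"])
    return {
        "semantic_features": semantic_features,
        "feature_points": feature_points,
        "camera_positions": camera_positions,
    }

frames = [
    {"semantic": [("speed_limit_sign", (12.0, 3.1))],
     "points3d": [(4.0, 0.5, 1.2)],
     "cam_pose": (0.0, 0.0, 1.5)},
    {"semantic": [("lane_mark", (13.5, 3.0))],
     "points3d": [(4.1, 0.6, 1.2), (4.3, 0.4, 1.1)],
     "cam_pose": (1.0, 0.0, 1.5)},
]
info = harvest_drive_info(frames)
print(len(info["feature_points"]))  # 3
```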
Methods and Systems for Providing Remote Assistance to an Autonomous Vehicle
Example embodiments relate to techniques for providing remote assistance to an autonomous vehicle. A computing device may receive location information from a vehicle while the vehicle is autonomously navigating a path in an environment. Based on the location information, the computing device may display a representation of the environment of the vehicle that conveys lane information for the path and subsequently receive an input selecting a lane in the path. The input modifies an availability of the lane in the path during subsequent navigation by the vehicle. The computing device may then provide navigation instructions to the vehicle based on the availability of the lane in the path.
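The lane-availability mechanism can be illustrated with a short sketch: the operator's input toggles a lane's availability, and subsequent navigation instructions are derived from the lanes that remain available. Function names and the instruction strings are invented for this example.

```python
# Illustrative sketch (names assumed, not from the patent): a remote
# operator's selection modifies a lane's availability, and navigation
# instructions are then derived from the updated availability map.

def apply_operator_selection(lane_availability, selected_lane):
    """Toggle the availability of the selected lane; return the update."""
    updated = dict(lane_availability)
    updated[selected_lane] = not updated[selected_lane]
    return updated

def navigation_instruction(lane_availability, current_lane):
    """Stay in the current lane if available, else move to the first
    available lane; otherwise wait for further assistance."""
    if lane_availability.get(current_lane):
        return f"continue in {current_lane}"
    for lane, available in lane_availability.items():
        if available:
            return f"change to {lane}"
    return "stop and wait for assistance"

lanes = {"lane_1": True, "lane_2": True}
lanes = apply_operator_selection(lanes, "lane_1")  # operator closes lane_1
print(navigation_instruction(lanes, "lane_1"))     # change to lane_2
```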
TARGET-BASED SENSOR CALIBRATION
The subject disclosure relates to techniques for calibrating two or more sensors based on determining a curvature of a surface of a target. A process of the disclosed technology can include steps of receiving sensor data captured by the two or more sensors, wherein the sensor data includes multiple views of one or more targets in a scene, identifying the one or more targets based on the sensor data, and determining a curvature of a surface of each of the one or more targets based on the sensor data. The process can further include performing a calibration of the two or more sensors based on the curvature of the surface of each of the one or more targets in the sensor data. Systems and machine-readable media are also provided.
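A toy version of the curvature step: fit a circle to each sensor's view of a round target and recover the curvature 1/r, a target property that both sensors should agree on after calibration. This uses a standard algebraic (Kasa) least-squares circle fit, simplified to 2D; it is an illustration of the idea, not the patented procedure.

```python
# Toy curvature estimation for a round calibration target, seen from two
# sensor poses. The Kasa fit solves x^2 + y^2 = a*x + b*y + c in least
# squares; the circle has centre (a/2, b/2) and r^2 = c + (a/2)^2 + (b/2)^2.
import math

import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit. Returns (cx, cy, r)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = pts[:, 0] ** 2 + pts[:, 1] ** 2
    a, bb, c = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = a / 2.0, bb / 2.0
    r = math.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

def curvature(points):
    """Curvature of the fitted target surface, 1/r."""
    return 1.0 / fit_circle(points)[2]

# Two sensors view the same cylindrical target (radius 0.5 m) from
# different poses; their curvature estimates should match, and any
# disagreement would indicate a calibration error.
theta = np.linspace(0.0, math.pi, 20)
view_a = np.column_stack([0.5 * np.cos(theta) + 2.0,
                          0.5 * np.sin(theta) + 1.0])
view_b = np.column_stack([0.5 * np.cos(theta) - 1.0,
                          0.5 * np.sin(theta) + 3.0])
print(round(curvature(view_a), 3), round(curvature(view_b), 3))  # 2.0 2.0
```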
SYSTEMS AND METHODS FOR AGRICULTURAL OPERATIONS
A method for an agricultural operation can include presenting a captured image of a field on a display of an electronic device. The image can be captured by an imager operably coupled with the electronic device. The method can further include detecting a geographic position of the electronic device, an imager direction of the imager, and a tilt orientation of the electronic device. The method also can include determining a field of view based on the geographic position, the tilt orientation, and the imager direction. Further, the method can include identifying whether one or more portions of the field within the field of view is a processed segment of the field or an unprocessed segment of the field. Lastly, the method can include visually augmenting the captured image with graphics based at least in part on the identification of the one or more portions of the field.
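The field-of-view determination and the processed/unprocessed identification can be sketched in simplified form. The geometry is flattened to a 2D horizontal wedge (tilt is omitted for brevity), and all names are illustrative assumptions.

```python
# Hedged 2D sketch: from the device's geographic position and imager
# direction, build a horizontal viewing wedge and label each visible
# field portion as processed or unprocessed, so the captured image can
# be augmented with matching graphics. Tilt handling is omitted.
import math

def in_field_of_view(device_pos, heading_deg, half_angle_deg, point):
    """True if `point` lies within the imager's horizontal wedge."""
    dx = point[0] - device_pos[0]
    dy = point[1] - device_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the bearing difference into (-180, 180].
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= half_angle_deg

def classify_portions(device_pos, heading_deg, half_angle_deg,
                      portions, processed):
    """Label each field portion inside the field of view."""
    labels = {}
    for name, center in portions.items():
        if in_field_of_view(device_pos, heading_deg, half_angle_deg, center):
            labels[name] = "processed" if name in processed else "unprocessed"
    return labels

portions = {"north_strip": (0.0, 10.0), "east_strip": (10.0, 0.0)}
labels = classify_portions((0.0, 0.0), 90.0, 30.0, portions, {"north_strip"})
print(labels)  # {'north_strip': 'processed'}
```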
Image system for a vehicle
A system comprises: image capture devices associated with a host vehicle and configured to capture image data indicative of an environment of the host vehicle; sensors associated with the host vehicle and configured to capture object data indicative of the presence of an object in a vicinity of the host vehicle; and a processor communicatively coupled to the image capture devices and the sensors to: receive the captured image data and captured object data; aggregate the object data captured by each sensor; determine, in dependence on the aggregated object data, a geometrical parameter of a virtual projection surface; generate a virtual projection surface in dependence on the geometrical parameter; determine, in dependence on the captured image data, an image texture; and map the image texture onto the generated virtual projection surface.
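The geometry step in this abstract, deriving a geometrical parameter of the virtual projection surface from aggregated object data and then generating the surface, can be sketched as below. The example assumes a common "bowl" style surround-view surface whose rim radius shrinks toward the nearest detected object; the names and the circular-rim simplification are illustrative, not the patented design.

```python
# Sketch of the described pipeline, simplified to 2D: detections from
# all sensors are aggregated, the nearest object distance becomes the
# geometrical parameter (the bowl-rim radius), and a polygonal rim of
# the virtual projection surface is generated from it. The bowl shape
# and the names here are assumptions for illustration.
import math

def aggregate_nearest_distance(sensor_detections, default=10.0):
    """sensor_detections: per-sensor lists of object distances (metres)."""
    distances = [d for per_sensor in sensor_detections for d in per_sensor]
    return min(distances) if distances else default

def generate_projection_rim(radius, n_vertices=8):
    """Vertices of the projection-surface rim, centred on the vehicle."""
    return [(radius * math.cos(2.0 * math.pi * k / n_vertices),
             radius * math.sin(2.0 * math.pi * k / n_vertices))
            for k in range(n_vertices)]

detections = [[7.5, 12.0], [], [9.1]]  # three sensors around the vehicle
radius = aggregate_nearest_distance(detections)
rim = generate_projection_rim(radius)
print(radius, len(rim))  # 7.5 8
```

In a full implementation, the camera image texture would then be mapped onto the generated surface; only the geometry-parameter step is shown here.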