Patent classifications
B60R2300/60
Determining position data
Aspects described herein relate to methods and systems for obtaining geographic position data for features captured through a device that uses a relative coordinate system, such as an optical sensor on a vehicle travelling along a route. The relative positions of the features are transformed into geographic positions by first establishing a spatial relationship between the relative coordinate system and a geographic coordinate system. Once this spatial relationship is known, it can be used to match a number of the features captured in the relative coordinate system to features having a known position within the geographic coordinate system, preferably features whose geographic positions have been accurately measured. The matched features then serve as tie points between the relative and geographic coordinate systems and are used to transform the remaining, unmatched features in the relative coordinate system to positions in the geographic coordinate system.
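The tie-point step described above amounts to fitting a transform between the two coordinate systems from matched point pairs and then applying it to the unmatched features. A minimal sketch, assuming a 2-D similarity transform (scale, rotation, translation) estimated by least squares (Umeyama/Procrustes); the function names and the specific transform model are illustrative, not taken from the patent:

```python
import numpy as np

def estimate_similarity(rel_pts, geo_pts):
    """Fit scale s, rotation R, translation t mapping relative-frame
    tie points onto their known geographic counterparts (least squares)."""
    rel = np.asarray(rel_pts, float)
    geo = np.asarray(geo_pts, float)
    mu_r, mu_g = rel.mean(axis=0), geo.mean(axis=0)
    r0, g0 = rel - mu_r, geo - mu_g
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(g0.T @ r0)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (r0 ** 2).sum()     # least-squares scale
    t = mu_g - s * R @ mu_r
    return s, R, t

def to_geographic(pts, s, R, t):
    """Transform unmatched relative-frame points into geographic coordinates."""
    return s * (np.asarray(pts, float) @ R.T) + t

# Tie points: a unit square in the relative frame and its geographic
# counterpart (rotated 90 degrees, scaled by 2, shifted).
rel = [[0, 0], [1, 0], [1, 1], [0, 1]]
geo = [[10, 20], [10, 22], [8, 22], [8, 20]]
s, R, t = estimate_similarity(rel, geo)
mapped = to_geographic([[0.5, 0.5]], s, R, t)   # an "unmatched" feature
```

In practice the same idea extends to 3-D and to transforms with more degrees of freedom, but the tie-point-then-transform structure is unchanged.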
Vehicle monitoring device
A vehicle monitoring device mounted on a vehicle includes an imaging unit that captures images of a side surface of the vehicle body, the area behind the vehicle, and the area diagonally behind the vehicle; a storage unit that stores a reference image defining a reference state of the vehicle body side surface; and an image comparison unit that detects an abnormal state of the vehicle body by comparing the images captured by the imaging unit with the reference image read from the storage unit.
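The comparison step can be as simple as differencing the captured image against the stored reference and counting pixels that changed significantly. A minimal sketch; the thresholds and the plain absolute-difference test are illustrative assumptions, not values from the patent:

```python
import numpy as np

def detect_abnormality(captured, reference, diff_threshold=30, min_pixels=50):
    """Flag an abnormal body state when enough pixels of the captured
    side-surface image differ from the stored reference image."""
    # Widen to int16 so the subtraction of uint8 images cannot wrap around.
    diff = np.abs(captured.astype(np.int16) - reference.astype(np.int16))
    changed = int((diff > diff_threshold).sum())
    return changed >= min_pixels, changed

# Reference: a uniform gray "body panel"; captured: the same panel with
# a simulated 10x10 dark scratch region.
reference = np.full((100, 100), 128, dtype=np.uint8)
captured = reference.copy()
captured[40:50, 40:50] = 20
abnormal, n_changed = detect_abnormality(captured, reference)
```

A production system would first align the images and compensate for lighting, but the compare-against-reference structure is the same.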
DATA PROCESSING METHODS, DEVICES, AND APPARATUSES, AND MOVABLE PLATFORMS
The present disclosure provides a method, a device, and an apparatus for processing data, as well as a movable platform. The method is applied to a movable platform that includes a sensor, and includes: collecting, through the sensor, environmental data in at least two directions around the movable platform; performing three-dimensional reconstruction of the environmental data in the at least two directions to obtain three-dimensional environmental information of the environment surrounding the movable platform; and displaying the three-dimensional environmental information on a display apparatus. The present disclosure may assist a driver in driving and improve the driving safety of the movable platform.
Method Of Using Camera For Both Internal And External Monitoring Of A Vehicle
A camera has a field of view encompassing a portion of an environment outside of a motor vehicle and a portion of an environment inside a passenger compartment of the motor vehicle. An electronic processor is communicatively coupled to the camera and receives captured images from the camera. The electronic processor separates first extracted images from the captured images. The first extracted images correspond to the portion of the environment outside of the motor vehicle. The electronic processor transmits the first extracted images to a first software application for monitoring the environment outside of the motor vehicle. The electronic processor separates second extracted images from the captured images. The second extracted images correspond to the portion of the environment inside the passenger compartment. The electronic processor transmits the second extracted images to a second software application for monitoring human beings in the passenger compartment.
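The separate-and-route flow described above can be sketched by cropping each frame into its exterior and interior regions and handing each crop to its own application. The fixed column boundary and the callback-style "applications" are illustrative assumptions; a real system would use a calibrated region map:

```python
import numpy as np

def split_frame(frame, boundary_col):
    """Separate one captured frame into its exterior and interior portions."""
    exterior = frame[:, :boundary_col]     # first extracted image
    interior = frame[:, boundary_col:]     # second extracted image
    return exterior, interior

def route(frame, boundary_col, exterior_app, interior_app):
    """Send each extracted image to its monitoring application."""
    exterior, interior = split_frame(frame, boundary_col)
    return exterior_app(exterior), interior_app(interior)

# A blank 640x480 frame; columns 0-399 look outside, 400-639 into the cabin.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
ext_result, int_result = route(
    frame, 400,
    exterior_app=lambda img: ("outside", img.shape),
    interior_app=lambda img: ("cabin", img.shape),
)
```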
Parking Assistance Device and Method
A parking assistance device includes a camera configured to capture a rear-view image of a vehicle; a plurality of sensors configured to sense obstacles located around the vehicle; a controller configured to generate a parking guide line that guides the vehicle into a target parking space and to assist the driver in parking based on the separation distance between the parking guide line and a predicted entrance trajectory corresponding to the steering angle of the vehicle; and a display configured to match the rear-view image of the vehicle with the parking guide line and display them together.
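A predicted trajectory for a given steering angle is commonly derived from the kinematic bicycle model, whose turning radius is R = wheelbase / tan(steering angle); the separation distance is then the minimum distance from the guide line to that path. A minimal sketch under that assumption; the model choice and all numbers are illustrative, not from the patent:

```python
import math

def predicted_trajectory(wheelbase, steering_angle, arc_length, steps=100):
    """Sample points along the circular arc implied by the kinematic
    bicycle model, in the vehicle frame (x forward, y left)."""
    if abs(steering_angle) < 1e-9:                      # straight ahead
        return [(arc_length * i / steps, 0.0) for i in range(steps + 1)]
    R = wheelbase / math.tan(steering_angle)            # turning radius
    return [(R * math.sin(s / R), R * (1.0 - math.cos(s / R)))
            for s in (arc_length * i / steps for i in range(steps + 1))]

def separation(trajectory, guide_point):
    """Minimum distance between the predicted trajectory and a point
    on the parking guide line."""
    gx, gy = guide_point
    return min(math.hypot(x - gx, y - gy) for x, y in trajectory)

# Zero steering: the predicted path runs straight for 5 m; a guide-line
# point sits 1 m to the side.
traj = predicted_trajectory(wheelbase=2.7, steering_angle=0.0, arc_length=5.0)
dist = separation(traj, (3.0, 1.0))
```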
Systems and methods for displaying autonomous vehicle environmental awareness
The disclosed computer-implemented method may include displaying vehicle environment awareness. In some embodiments, a visualization system may display an abstract representation of a vehicle's physical environment via a mobile device and/or a device embedded in the vehicle. For example, the visualization may use a voxel grid to represent the environment and may alter characteristics of shapes in the grid to increase their visual prominence when the sensors of the vehicle detect that an object is occupying the space represented by the shapes. In some embodiments, the visualization may gradually increase and reduce the visual prominence of shapes in the grid to create a soothing wave effect. Various other methods, systems, and computer-readable media are also disclosed.
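The gradual rise and decay of visual prominence described above can be sketched as a per-voxel value that ramps toward 1.0 while the cell is sensed as occupied and decays slowly otherwise. The rate constants and 2-D grid are illustrative assumptions standing in for the full voxel grid:

```python
import numpy as np

def update_prominence(prominence, occupied, rise=0.2, fall=0.05):
    """Step each cell's visual prominence toward 1.0 when occupied and
    toward 0.0 otherwise, with asymmetric rates for a smooth wave effect."""
    target = occupied.astype(float)
    step = np.where(occupied, rise, fall)               # per-cell rate
    delta = target - prominence
    move = np.sign(delta) * np.minimum(np.abs(delta), step)
    return np.clip(prominence + move, 0.0, 1.0)

# A 4x4 slice of the grid; the vehicle's sensors report one occupied cell.
grid = np.zeros((4, 4))
occ = np.zeros((4, 4), dtype=bool)
occ[1, 2] = True
for _ in range(5):                                      # five display frames
    grid = update_prominence(grid, occ)
```

Rendering would map each prominence value to a shape's size or opacity; the asymmetry between `rise` and `fall` is what produces the gradual, soothing fade rather than an abrupt blink.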
Image processor and method for virtual viewpoint synthesis in a vehicle-based camera system
An image processor includes a storage portion that stores projection plane vertex information, in which coordinate information for each vertex of a divided projection plane is linked to identification information assigned to that divided projection plane, together with coordinate transformation information for the coordinate information; an information control portion that specifies all of the divided projection planes constituting the projection region of a stereoscopic image from a virtual viewpoint, acquires the identification information of each specified divided projection plane, and acquires the coordinate information of its vertices based on the acquired identification information; and a display control portion that transforms the coordinate information of each vertex of the acquired divided projection planes based on the coordinate transformation information in the storage portion, and creates the stereoscopic image based on the image signal output from the photographing device and the transformed coordinate information.
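The lookup-then-transform flow can be sketched as a table keyed by plane identifiers whose vertices are run through a stored transformation. The dictionary layout, the 4x4 homogeneous transform, and the toy coordinates are illustrative assumptions about how such a storage portion might be organized:

```python
import numpy as np

def build_stereoscopic_vertices(visible_plane_ids, vertex_table, transform):
    """Gather the stored vertices of every divided projection plane inside
    the virtual viewpoint's projection region, then apply the stored
    coordinate transformation (a 4x4 homogeneous matrix here)."""
    out = {}
    for plane_id in visible_plane_ids:                      # identification info
        verts = np.asarray(vertex_table[plane_id], float)   # Nx3 vertices
        homo = np.hstack([verts, np.ones((len(verts), 1))]) # Nx4 homogeneous
        out[plane_id] = (homo @ transform.T)[:, :3]
    return out

# Two divided planes of a projection surface (toy coordinates).
vertex_table = {
    "plane_0": [[0, 0, 0], [1, 0, 0], [1, 1, 0]],
    "plane_1": [[1, 0, 0], [2, 0, 1], [2, 1, 1]],
}
# Coordinate transformation information: translate everything up by 0.5.
T = np.eye(4)
T[2, 3] = 0.5
planes = build_stereoscopic_vertices(["plane_0"], vertex_table, T)
```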
Immersive display of motion-synchronized virtual content
A VR system for vehicles may implement methods that address problems with vehicles in motion that can result in motion sickness for passengers. The VR system may provide virtual views that match visual cues with the physical motions that a passenger experiences. The VR system may provide immersive VR experiences by replacing the view of the real world with virtual environments. Active vehicle systems and/or vehicle control systems may be integrated with the VR system to provide physical effects with the virtual experiences. The virtual environments may be altered to accommodate a passenger upon determining that the passenger is prone to or is exhibiting signs of motion sickness.
Generating virtual images based on captured image data
Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate, for the virtual camera, a virtual image depicting objects within the input image according to the desired pose, based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
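With a depth sensor available, the core of virtual-view generation is reprojecting the 3-D points recovered from the input image into a camera at the desired pose. A minimal pinhole-model sketch; the intrinsics, pose, and function names are illustrative assumptions, not the patent's method:

```python
import numpy as np

def reproject(points_cam, K, R_virtual, t_virtual):
    """Project 3-D points (in the physical camera's frame, e.g. recovered
    from the depth sensor) into a virtual camera at a desired pose.
    R_virtual/t_virtual map physical-camera coordinates into the
    virtual-camera frame; K is the pinhole intrinsic matrix."""
    p_virt = points_cam @ R_virtual.T + t_virtual   # change of pose
    uvw = p_virt @ K.T                              # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]                 # normalize to pixels

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Virtual camera shifted 0.1 m to the right of the physical camera.
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 2.0]])   # a point 2 m straight ahead
uv = reproject(pts, K, R, t)
```

The epipolar relation in the abstract constrains where each input pixel can land in the virtual image; with dense depth, that search collapses to the direct reprojection shown here.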
3D display system for camera monitoring system
A system in a vehicle for generating and displaying three-dimensional images may comprise a first imager having a first field of view; a second imager having a second field of view at least partially overlapping the first field of view, the second imager disposed at a distance from the first imager; and an image signal processor in communication with the first and second imagers, wherein the image signal processor is configured to generate an image having a three-dimensional appearance from the data from the first and second imagers. The first and second imagers may be disposed on a vehicle and may be configured to capture a scene, and the scene may be exterior to the vehicle.
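The three-dimensional appearance from two offset imagers rests on stereo triangulation: a feature's apparent shift between the two views (disparity) and the known separation (baseline) give its depth as Z = f * B / d. A minimal sketch with illustrative numbers, not parameters from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth from the pixel disparity between two imagers
    separated by a known baseline: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature seen 40 px apart by imagers mounted 0.3 m apart,
# with an 800 px focal length.
z = depth_from_disparity(focal_px=800.0, baseline_m=0.3, disparity_px=40.0)
```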