SYSTEM AND METHOD TO IMPROVE MULTI-CAMERA MONOCULAR DEPTH ESTIMATION USING POSE AVERAGING
A method for multi-camera monocular depth estimation using pose averaging is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes determining a multi-camera pose consistency constraint (PCC) loss associated with the multi-camera rig of the ego vehicle. The method further includes adjusting the multi-camera photometric loss according to the multi-camera PCC loss to form a multi-camera PCC photometric loss. The method also includes training a multi-camera depth estimation model and an ego-motion estimation model according to the multi-camera PCC photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the trained multi-camera depth estimation model and the ego-motion estimation model.
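The loss combination in this abstract can be sketched in code. The following is an illustrative interpretation, not the patented implementation: the pose consistency constraint is modeled here as penalizing each camera's estimated ego-motion translation for deviating from the rig-wide average, and that penalty adjusts the summed photometric loss. All function names, the choice of translation-only poses, and the weighting are assumptions.

```python
# Hypothetical sketch of a multi-camera PCC photometric loss: each camera on
# the rig produces a photometric loss and an ego-motion estimate; a pose
# consistency constraint (PCC) term penalizes per-camera deviation from the
# averaged rig pose. Translation-only poses are used for brevity.

def average_translation(poses):
    """Average the translation components of per-camera ego-motion estimates."""
    n = len(poses)
    return tuple(sum(p[i] for p in poses) / n for i in range(3))

def pcc_loss(poses):
    """Sum of squared distances from each camera's translation to the rig average."""
    avg = average_translation(poses)
    return sum(sum((p[i] - avg[i]) ** 2 for i in range(3)) for p in poses)

def pcc_photometric_loss(photometric_losses, poses, weight=0.1):
    """Adjust the summed multi-camera photometric loss by the weighted PCC loss."""
    return sum(photometric_losses) + weight * pcc_loss(poses)
```

In a real training loop this scalar would be backpropagated through both the depth and ego-motion networks; a full treatment would average rotations as well.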
SCALE-AWARE DEPTH ESTIMATION USING MULTI-CAMERA PROJECTION LOSS
A method for scale-aware depth estimation using multi-camera projection loss is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes training a scale-aware depth estimation model and an ego-motion estimation model according to the multi-camera photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the scale-aware depth estimation model and the ego-motion estimation model. The method also includes planning a vehicle control action of the ego vehicle according to the 360° point cloud of the scene surrounding the ego vehicle.
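The point-cloud prediction step named above can be illustrated with a standard pinhole back-projection: once depth is scale-aware (metric), each pixel unprojects to a 3D point, and concatenating per-camera clouds yields the surround point cloud. This is a generic sketch under assumed pinhole intrinsics, not the patented method itself.

```python
# Hedged sketch of point-cloud prediction from a metric depth map using a
# pinhole camera model. fx, fy are focal lengths and (cx, cy) the principal
# point, all illustrative; depth_map is rows of per-pixel depths in meters.

def backproject(depth_map, fx, fy, cx, cy):
    """Unproject a per-pixel depth map into 3D camera-frame points."""
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero) depths
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

Repeating this per camera and transforming each cloud by that camera's extrinsics would assemble the 360° cloud the planner consumes.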
SELF-OCCLUSION MASKS TO IMPROVE SELF-SUPERVISED MONOCULAR DEPTH ESTIMATION IN MULTI-CAMERA SETTINGS
A method for self-supervised depth and ego-motion estimation is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes generating a self-occlusion mask by manually segmenting self-occluded areas of images captured by the multi-camera rig of the ego vehicle. The method further includes multiplying the multi-camera photometric loss with the self-occlusion mask to form a self-occlusion masked photometric loss. The method also includes training a depth estimation model and an ego-motion estimation model according to the self-occlusion masked photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the depth estimation model and the ego-motion estimation model.
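The masking step described above is an elementwise multiplication: pixels covered by the ego vehicle's own body (mask value 0) contribute nothing to the photometric loss. A minimal sketch, with illustrative names:

```python
# Minimal sketch of self-occlusion masking: multiply a per-pixel photometric
# loss map elementwise by a binary mask (0 where the vehicle body occludes
# the camera, 1 elsewhere), then average over the visible pixels only.

def apply_self_occlusion_mask(loss_map, mask):
    """Elementwise product of a per-pixel loss map and a 0/1 occlusion mask."""
    return [
        [l * m for l, m in zip(loss_row, mask_row)]
        for loss_row, mask_row in zip(loss_map, mask)
    ]

def masked_mean_loss(loss_map, mask):
    """Mean photometric loss over unmasked (visible) pixels."""
    masked = apply_self_occlusion_mask(loss_map, mask)
    total = sum(sum(row) for row in masked)
    count = sum(sum(row) for row in mask)
    return total / count if count else 0.0
```

Per the abstract, the mask itself comes from manual segmentation of the self-occluded areas, so it is computed once per rig rather than per frame.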
Flexible hub for handling multi-sensor data
A hub that receives sensor data streams and then distributes the data streams to the various systems that use the sensor data. A demultiplexer (demux) receives the streams, filters out undesired streams and provides desired streams to the proper multiplexer (mux) or muxes of a series of muxes. Each mux combines received streams and provides an output stream to a respective formatter or output block. The formatter or output block is configured based on the destination of the mux output stream, such as an image signal processor, a processor, memory or external transmission. The output block reformats the received stream to a format appropriate for the recipient and then provides the reformatted stream to that recipient.
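The demux → mux → formatter pipeline described above can be sketched as a small router. This is a toy interpretation, assuming one destination per stream; class and method names are invented for illustration.

```python
# Illustrative sketch of the hub's stages: a demux groups desired sensor
# streams by destination (dropping unrouted ones), each mux combines its
# group, and an output block reformats the result for its recipient
# (e.g. image signal processor, memory, or external transmission).

class SensorHub:
    def __init__(self, routes, formatters):
        self.routes = routes          # stream name -> destination name
        self.formatters = formatters  # destination name -> reformat function

    def distribute(self, streams):
        """Demux: group desired streams by destination; filter out the rest."""
        grouped = {}
        for name, data in streams.items():
            dest = self.routes.get(name)
            if dest is not None:
                grouped.setdefault(dest, []).append(data)
        return grouped

    def output(self, streams):
        """Mux + output block: combine each destination's streams, then reformat."""
        return {
            dest: self.formatters[dest](combined)
            for dest, combined in self.distribute(streams).items()
        }
```

A hardware hub would of course operate on continuous streams rather than dictionaries, but the routing logic is the same shape.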
SYSTEM AND METHOD FOR PROVIDING A DESIRED VIEW FOR A VEHICLE OCCUPANT
A system, method and computer program product for providing a desired camera view of interest for a vehicle occupant. The method includes obtaining, by at least a first interior view camera, at least a first interior view of the face and/or the head of the vehicle occupant; determining, based on the at least first interior view, at least any of a gaze direction and a head direction of the vehicle occupant for understanding where the vehicle occupant is looking; determining that the view of interest for the vehicle occupant is, at least partly, within the at least first exterior view of the at least first exterior view camera; and displaying the at least first exterior view to the vehicle occupant via at least a first display.
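The central determination in this abstract, whether the occupant's view of interest falls within an exterior camera's view, can be sketched as an angular test between the gaze direction and the camera's optical axis. The 2D unit-vector representation and half-FOV threshold are simplifying assumptions for illustration.

```python
import math

# Hypothetical sketch: the occupant's gaze direction (from the interior view
# camera) and an exterior camera's optical axis are 2D unit vectors; the gaze
# is "within" the camera's view if the angle between them is at most half
# the camera's horizontal field of view.

def angle_between(u, v):
    """Angle in degrees between two 2D unit vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    dot = max(-1.0, min(1.0, dot))  # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(dot))

def gaze_in_camera_view(gaze_dir, camera_axis, fov_deg):
    """True if the gaze direction lies within the camera's horizontal FOV."""
    return angle_between(gaze_dir, camera_axis) <= fov_deg / 2.0
```

When this test passes for some exterior camera, that camera's view would be the one selected for display.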
Providing visual references to prevent motion sickness in vehicles
Systems and methods to provide visual references to passengers in vehicles to prevent motion sickness. The system can include a controller and one or more projectors and/or displays. The controller can detect movement of a vehicle and project images within the vehicle that comport with the detected movement. The system can include a projector to project images on the interior of the vehicle. The system can include one or more displays to display images inside the vehicle. The controller can receive data from one or more cameras, accelerometers, navigation units, magnetometers, and other components to detect the motion of the vehicle. The system can display visual references on the dashboard, door panels, and other interior surfaces to complete the view of passengers, or provide other visual reference, to prevent motion sickness.
AR mobility and method of controlling AR mobility
A method of controlling augmented reality (AR) mobility according to embodiments may include generating, by a camera, image data by photographing one or more users, extracting information about the one or more users from the image data, calculating a reference point for projection of an AR object based on the location information about the users, and displaying the AR object on a display based on the calculated reference point. An apparatus for controlling AR mobility according to embodiments may include a camera configured to generate image data by photographing one or more users, a controller configured to extract information about the one or more users from the image data, a calibrator configured to calculate a reference point for projection of an AR object based on the location information about the users, and a display configured to display the AR object on a display based on the calculated reference point.
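The reference-point calculation above only requires that the point be derived from the users' location information; a natural minimal choice, used purely for illustration, is the centroid of the detected user positions:

```python
# Hedged sketch of the calibrator's reference-point step: take the centroid
# of the (x, y) user locations extracted from the camera image as the
# projection reference point for the AR object. The centroid is an
# illustrative assumption, not the patented computation.

def reference_point(user_locations):
    """Centroid of (x, y) user locations in image coordinates."""
    n = len(user_locations)
    x = sum(p[0] for p in user_locations) / n
    y = sum(p[1] for p in user_locations) / n
    return (x, y)
```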
Calibration of a surround view camera system
A method for automatic generation of calibration parameters for a surround view (SV) camera system is provided. The method includes capturing a video stream from each camera comprised in the SV camera system, wherein each video stream captures two calibration charts in a field of view of the camera generating the video stream; displaying the video streams in a calibration screen on a display device coupled to the SV camera system, wherein a bounding box is overlaid on each calibration chart; detecting feature points of the calibration charts; displaying the video streams in the calibration screen with the bounding box overlaid on each calibration chart and detected feature points overlaid on respective calibration charts; computing calibration parameters based on the feature points and platform-dependent parameters comprising data regarding size and placement of the calibration charts; and storing the calibration parameters in the SV camera system.
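The parameter-computation step can be illustrated in simplified form: given detected chart feature points in image coordinates and their known ground-plane positions (from the platform-dependent chart size and placement data), fit the transform between them. Real SV calibration recovers full camera poses and lens distortion; the 2D affine fit below is a deliberately reduced stand-in, and all names are illustrative.

```python
# Hedged sketch: fit a 2D affine transform x' = a*x + b*y + c,
# y' = d*x + e*y + f from three image-to-ground correspondences of chart
# feature points, solving the two 3x3 linear systems by Cramer's rule.

def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(image_pts, world_pts):
    """Recover [a, b, c, d, e, f] from three point correspondences."""
    A = [[x, y, 1.0] for x, y in image_pts]
    det = _det3(A)
    params = []
    for coord in (0, 1):  # solve separately for the x' and y' coefficients
        rhs = [p[coord] for p in world_pts]
        for col in range(3):  # Cramer's rule: replace one column with rhs
            M = [row[:] for row in A]
            for r in range(3):
                M[r][col] = rhs[r]
            params.append(_det3(M) / det)
    return params

def apply_affine(params, pt):
    """Map an image point through the fitted affine transform."""
    a, b, c, d, e, f = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)
```

With more than three correspondences per chart, a least-squares fit would replace the exact solve; the stored calibration parameters play the role of `params` here.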
CAMERA RING STRUCTURE FOR AUTONOMOUS VEHICLES
The technology relates to autonomous vehicles that use a perception system to detect objects and features in the vehicle's surroundings. A camera assembly having a ring-type structure is provided that gives the perception system an overall 360° field of view around the vehicle. Image sensors are arranged in camera modules around the assembly to provide a seamless panoramic field of view. One subsystem has multiple pairs of image sensors positioned to provide the overall 360° field of view, while another subsystem provides a set of image sensors generally facing toward the front of the vehicle to provide enhanced object identification. The camera assembly may be arranged in a housing located on top of the vehicle. The housing may include other sensors such as LIDAR and radar. The assembly includes a chassis and top and base plates, which may provide EMI protection from other sensors disposed in the housing.