Patent classifications
H04N23/90
VISION DISPLAY SYSTEM FOR VEHICLE
A vision display system for a vehicle includes a rear backup camera, a driver side camera, a passenger side camera, a front camera, and a display system including a video display screen. Rear backup video images derived from image data captured by the rear backup camera may be displayed by the video display screen for no more than ten seconds after the vehicle transmission is shifted out of reverse gear. Upon the engine being first started after initial ignition on of the vehicle, priority may be given to display of rear backup video images by the video display screen. Within two seconds after the vehicle transmission is shifted into reverse gear, rear backup video images may be displayed by the video display screen.
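The timing rules in this abstract (display within two seconds of entering reverse, persist no more than ten seconds after leaving it) can be sketched as a small controller. The class name, method names, and explicit time arguments below are illustrative assumptions, not taken from the patent:

```python
REVERSE_LINGER_S = 10.0  # rear video persists at most 10 s after leaving reverse

class RearDisplayController:
    """Hypothetical sketch of the abstract's display-priority timing."""

    def __init__(self):
        self.in_reverse = False
        self.reverse_exit_time = None

    def on_gear_change(self, gear, now):
        # Track entry into and exit from reverse; record the exit time
        # so the rear video can linger briefly after the shift.
        was_reverse = self.in_reverse
        self.in_reverse = (gear == "R")
        if was_reverse and not self.in_reverse:
            self.reverse_exit_time = now

    def show_rear_video(self, now):
        # While in reverse the rear image has priority; after shifting
        # out, it stays up for no more than REVERSE_LINGER_S seconds.
        if self.in_reverse:
            return True
        if self.reverse_exit_time is not None:
            return (now - self.reverse_exit_time) <= REVERSE_LINGER_S
        return False
```

The two-second display latency after shifting into reverse is a responsiveness requirement on the video pipeline rather than on this gating logic, so it does not appear as a branch here.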
LATERAL FIREARM CAMERA
In combination with a hand-held firearm comprising a barrel having a longitudinal axis and a muzzle disposed at a distal end of the barrel, the improvement including a secondary viewing device with a housing coupled to the barrel of the firearm and having a first camera retained by, and disposed on a first side of, the housing of the secondary viewing device, disposed substantially adjacent to the muzzle of the firearm, and directed in a viewing orientation substantially orthogonal to the longitudinal axis of the barrel of the firearm for viewing targets lateral to the firearm, the secondary viewing device and the firearm operably configured to have simultaneous and omnidirectional rotation with one another by a user.
System for image compositing including training with synthetic data
Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network to improve the accuracy or usefulness of the depth map, and thereby the quality of the compositing.
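The core compositing step described here — using a per-pixel depth value to decide which source is visible — amounts to a z-buffer test between the live-action and computer-generated layers. A minimal sketch, assuming aligned images and depth maps where smaller values mean closer to the camera (function and array names are illustrative):

```python
import numpy as np

def composite(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per-pixel depth test: the nearer surface wins, as in a z-buffer.

    live_rgb, cg_rgb: HxWx3 colour arrays.
    live_depth, cg_depth: HxW depth maps (smaller value = closer).
    """
    # Broadcast the HxW boolean mask over the colour channels.
    mask = (live_depth <= cg_depth)[..., None]
    return np.where(mask, live_rgb, cg_rgb)
```

In practice the learned network's job is to make `live_depth` accurate enough that this per-pixel test places CG elements correctly in front of or behind live-action subjects.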
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Provided is an information processing apparatus, including a position estimating unit configured to estimate a position of a second imaging apparatus on the basis of a first captured image captured by a first imaging apparatus whose position is specified and a second captured image captured at a time corresponding to the first captured image by the second imaging apparatus serving as a position estimation target.
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
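The pass/abort rule in this abstract reduces to a small decision function over the lane states inferred from the images. The function below is an illustrative sketch of that rule only; the signature, lane encoding, and return values are assumptions, not the patent's interface:

```python
def pass_decision(user_lane, target_lane, target_entering_user_lane):
    """Sketch of the abstract's rule: pass only while the target vehicle
    is in a different lane; abort if it begins entering the user's lane."""
    if target_entering_user_lane:
        return "abort"       # target is encroaching: abort the pass
    if target_lane != user_lane:
        return "pass"        # target is in a different lane: pass is enabled
    return "hold"            # same lane, not yet encroaching: do not pass
```

Upstream of this, the lane constraints determined from the captured images would supply the lane assignments and the encroachment flag.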
Image processing
Apparatus comprises a camera configured to capture images of a user in a scene; a depth detector configured to capture depth representations of the scene, the depth detector comprising an emitter configured to emit a non-visible signal; a mirror arranged to reflect at least some of the non-visible signal emitted by the emitter to one or more features within the scene that would otherwise be occluded by the user and to reflect light from the one or more features to the camera; a pose detector configured to detect a position and orientation of the mirror relative to at least one of the camera and depth detector; and a scene generator configured to generate a three-dimensional representation of the scene in dependence on the images captured by the camera and the depth representations captured by the depth detector and the pose of the mirror detected by the pose detector.
METHOD OF AUTOMATIC POSITIONING OF A SEAT
A method of automatically positioning a seat in an apparatus comprising two cameras located on either side of the seat, each in a position able to acquire images of the face of a user seated on the seat. The seat comprises at least one motor, each motor acting on a position of the seat along a predefined axis. The method comprises, for each camera: obtaining a position of a predefined image zone in which at least one eye of a user of the apparatus should be located; acquiring an image of a user seated on the seat; detecting at least one eye of the seated user in the acquired image; and obtaining a relative position between each detected eye and the predefined zone. Using each relative position obtained, at least one motor is actuated until each predefined zone contains at least one eye of the seated user.
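The actuation step — drive the motor until the detected eye lies inside the predefined image zone — can be sketched as a simple closed-loop control step along one axis. The proportional rule and all names below are illustrative assumptions; the patent does not specify the controller:

```python
def seat_adjust_step(eye_y, zone_top, zone_bottom, gain=1.0):
    """One control iteration for a single axis: return a signed motor
    command that moves the detected eye toward the predefined image zone.
    Returns 0.0 once the eye is inside the zone (motor stops)."""
    if zone_top <= eye_y <= zone_bottom:
        return 0.0                      # eye already inside zone: stop
    center = (zone_top + zone_bottom) / 2.0
    return gain * (center - eye_y)      # signed correction toward zone center
```

Running this once per acquired frame for each camera, and stopping when every zone contains an eye, mirrors the loop described in the abstract.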
DEVICE FOR ASSISTING THE PILOTING OF A ROTORCRAFT, AN ASSOCIATED DISPLAY, AND A CORRESPONDING METHOD OF ASSISTING PILOTING
A device for assisting the piloting of a rotorcraft during an approach stage preceding a stage of landing on a rotorcraft landing area. Such a device includes in particular a camera for taking a plurality of images of the environment of the rotorcraft along a line of sight, looking at least along a forward direction Dx of the rotorcraft, and processor means for identifying, in at least one image from among said plurality of images, at least one looked-for landing area.
IMAGING DEVICE AND METHOD FOR GENERATING AN UNDISTORTED WIDE VIEW IMAGE
Certain aspects of the technology disclosed herein involve combining images to generate a wide view image of a surrounding environment. Images can be recorded using a stand-alone imaging device having wide angle lenses and/or normal lenses. Images from the imaging device can be combined using methods described herein. In an embodiment, a pixel correspondence between a first image and a second image can be determined, based on a corresponding overlap area associated with the first image and the second image. Corresponding pixels in the corresponding overlap area associated with the first image and the second image can be merged based on a weight assigned to each of the corresponding pixels.
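The weighted merge of corresponding pixels in the overlap area can be sketched as a feathered blend. The linear left-to-right weight ramp below is one common choice and an assumption on my part; the abstract only says a weight is assigned to each corresponding pixel:

```python
import numpy as np

def merge_overlap(img_a, img_b):
    """Blend the overlap region of two aligned images with linearly
    ramped per-pixel weights (feathering).

    img_a, img_b: HxWx3 arrays covering the same overlap area, with
    img_a contributing fully at the left edge and img_b at the right.
    """
    h, w, _ = img_a.shape
    w_a = np.linspace(1.0, 0.0, w)[None, :, None]  # weight falls off left-to-right
    w_b = 1.0 - w_a                                # complementary weight for img_b
    return img_a * w_a + img_b * w_b
```

A smooth ramp like this hides exposure differences and small misalignments at the seam better than a hard cut between the two source images.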