Patent classifications
G06T2200/32
Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
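The abstract's pipeline (identify a virtual camera model, generate a set of matrices from it, apply the matrices to the 3D content, overlay the result on the camera image) reads like a standard pinhole projection. The sketch below assumes that reading; the focal lengths, pose, and 3D points are illustrative values, not taken from the disclosure.

```python
import numpy as np

def camera_matrices(fx, fy, cx, cy, rotation, translation):
    """Build the intrinsic (K) and extrinsic ([R|t]) matrices of a pinhole camera."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([rotation, translation.reshape(3, 1)])  # 3x4 pose matrix
    return K, Rt

def project_points(points_3d, K, Rt):
    """Apply the matrix set to 3D world points, returning 2D pixel coordinates."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # Nx4 homogeneous
    cam = (K @ Rt @ homo.T).T                                    # Nx3 camera space
    return cam[:, :2] / cam[:, 2:3]                              # divide by depth

# Illustrative camera: identity pose, 500 px focal length, principal point (320, 240).
K, Rt = camera_matrices(500.0, 500.0, 320.0, 240.0, np.eye(3), np.zeros(3))
pixels = project_points(np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]]), K, Rt)
```

The projected pixel coordinates are where 3D overlay content would be drawn on the real-world image.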
METHOD AND APPARATUS FOR AUTOMATED PLACEMENT OF A SEAM IN A PANORAMIC IMAGE DERIVED FROM MULTIPLE CAMERAS
A method, apparatus, and computer program product are provided to generate a panoramic view derived from multiple cameras and to automatically place a seam in that panoramic view in a computationally efficient manner. In one example method, images captured by at least two cameras are received. Each camera has a different, but partially overlapping, field of view. The method determines a seam location and scale factor to be used when combining the images so as to minimize errors at the seam between the two images. In some example implementations, the seam location and scale factor may be recalculated in response to a manual or automatic trigger. In some additional example implementations, motion associated with an image element near a seam location is detected, and the seam location is moved in the direction opposite that of the motion.
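One computationally cheap way to choose such a seam (not necessarily the patent's exact criterion) is to score each candidate seam column in the overlap region by the pixel-wise difference between the two images and pick the minimum. The grayscale values below are made up for illustration.

```python
def best_seam_column(overlap_a, overlap_b):
    """Return the overlap column whose pixel differences are smallest.

    overlap_a, overlap_b: equally sized 2D lists (rows of grayscale values)
    from the two cameras, covering only the shared field of view.
    """
    cols = len(overlap_a[0])
    best_col, best_cost = 0, float("inf")
    for c in range(cols):
        cost = sum(abs(row_a[c] - row_b[c])
                   for row_a, row_b in zip(overlap_a, overlap_b))
        if cost < best_cost:
            best_col, best_cost = c, cost
    return best_col, best_cost

# Toy overlap where the two cameras agree best in column 1.
a = [[10, 20, 30],
     [10, 20, 30]]
b = [[14, 20, 35],
     [13, 21, 36]]
seam, cost = best_seam_column(a, b)
```

The motion-aware refinement described in the abstract would then nudge the chosen column opposite the detected motion direction before blending.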
SYSTEM AND METHOD FOR ADAPTIVE PANORAMIC IMAGE GENERATION
An imaging system for adaptively generating panoramic images and methods for manufacturing and using same are provided. The system includes an imaging device configured to capture digital images at a plurality of image capture positions. The system further includes a processor configured to identify an overlapping portion of first and second images captured at respective first and second image capture positions, determine a stitching position quality measure for a plurality of stitching positions in the overlapping portion of the first and second images, and select a stitching position based on the determined stitching position quality measures of the plurality of stitching positions of the first and second images. The processor is also configured to stitch the first and second images together at the selected stitching position to generate a panoramic image and determine a third image capture position based on the stitching position quality measure.
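A minimal sketch of the two decisions the abstract describes: scoring candidate stitching positions with a quality measure, and using the best score to adapt the next capture position. The inverse-SSD quality measure and the "increase overlap when quality is poor" policy are assumptions for illustration, not the patent's actual measures.

```python
def stitch_quality(img_a, img_b, position):
    """Quality of stitching at a candidate column: inverse of the summed
    squared differences along that column (higher is better)."""
    ssd = sum((row_a[position] - row_b[position]) ** 2
              for row_a, row_b in zip(img_a, img_b))
    return 1.0 / (1.0 + ssd)

def select_position_and_next_step(img_a, img_b, good_enough=0.5):
    """Pick the best-scoring stitching position; if even that score is poor,
    flag that the next capture should overlap more (a hypothetical policy)."""
    qualities = [stitch_quality(img_a, img_b, p) for p in range(len(img_a[0]))]
    best = max(range(len(qualities)), key=qualities.__getitem__)
    step = "normal" if qualities[best] >= good_enough else "increase_overlap"
    return best, qualities[best], step

# Toy overlap: the two captures agree exactly in column 0.
a = [[5, 9], [5, 9]]
b = [[5, 1], [5, 2]]
best, quality, step = select_position_and_next_step(a, b)
```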
IMAGING DEVICE AND METHOD FOR GENERATING AN UNDISTORTED WIDE VIEW IMAGE
Certain aspects of the technology disclosed herein involve combining images to generate a wide view image of a surrounding environment. Images can be recorded using a stand-alone imaging device having wide-angle lenses and/or normal lenses. Images from the imaging device can be combined using methods described herein. In an embodiment, a pixel correspondence between a first image and a second image can be determined, based on a corresponding overlap area associated with the first image and the second image. Corresponding pixels in the corresponding overlap area associated with the first image and the second image can be merged based on a weight assigned to each of the corresponding pixels.
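The weighted merge of corresponding pixels can be sketched as a linear feather blend, where the weight ramps from one image to the other across the overlap. The linear ramp is an assumption; the patent only requires that each corresponding pixel pair carry some weight.

```python
def merge_overlap_row(row_a, row_b):
    """Merge corresponding pixels from one overlap row with position-based
    weights: pixels near image A's side keep more of A, and vice versa."""
    width = len(row_a)
    merged = []
    for i, (pa, pb) in enumerate(zip(row_a, row_b)):
        w = i / (width - 1)              # weight ramps 0 -> 1 across the overlap
        merged.append((1 - w) * pa + w * pb)
    return merged

# A bright strip from image A feathered into a dark strip from image B.
merged = merge_overlap_row([100, 100, 100], [0, 0, 0])
```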
THREE-DIMENSIONAL STABILIZED 360-DEGREE COMPOSITE IMAGE CAPTURE
Many embodiments can comprise a system. The system can comprise a processor and a memory coupled to the processor. The memory can include instructions that, when executed by the processor, cause the processor to: determine a direction of gravity in each image of a sequence of images around an object; estimate a center of mass of the object in each image of the sequence of images using the direction of gravity and dimensions of the object; stabilize each image in the sequence of images using the center of mass; and generate a 360 degree display of the object using each image in the stabilized sequence of images. Other embodiments are disclosed herein.
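The stabilization step can be sketched in two parts matching the abstract: estimate the object's center of mass from the gravity direction and the object's known dimensions, then translate each frame so that point sits at a fixed frame position. The "top of object plus half the height along gravity" estimate is a hypothetical simplification.

```python
def estimate_center_of_mass(gravity_px, top_px, object_height_px):
    """Hypothetical estimate: walk from the object's top along the gravity
    direction by half the object's height (dimensions assumed known)."""
    gx, gy = gravity_px
    return (top_px[0] + gx * object_height_px / 2,
            top_px[1] + gy * object_height_px / 2)

def stabilizing_shift(center_of_mass, frame_size):
    """Translation that moves the estimated center of mass to the frame center,
    so the object stays fixed across the 360 degree sequence."""
    return (frame_size[0] / 2 - center_of_mass[0],
            frame_size[1] / 2 - center_of_mass[1])

# Gravity points straight down in image coordinates; object top at (320, 100).
com = estimate_center_of_mass((0, 1), (320, 100), 200)
shift = stabilizing_shift(com, (640, 480))
```

Applying the per-frame shift to every image in the sequence yields the stabilized set from which the 360 degree display is generated.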
Method and system for assistance in guiding an endovascular instrument
A system for assisting in guiding an endovascular instrument in vascular structures of an anatomical region of interest of a patient. This system includes an imaging device for capturing three-dimensional images of parts of the body of a patient, a programmable device and a viewing unit. The imaging device captures partially superposed fluoroscopic images of the region, and the programmable device forms a first augmented image, representative of a complete panorama of bones of the region, and cooperates with the imaging device to obtain a second augmented image including a representation of the vascular structures of the region. The imaging device captures a current fluoroscopic image of a part of the region, and the programmable device registers the current fluoroscopic image with respect to the first augmented image and locates and displays, on the viewing unit, an image region corresponding to the current fluoroscopic image in the second augmented image.
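Registering the current fluoroscopic image against the bone panorama amounts to locating a small image inside a larger one. A brute-force sum-of-squared-differences search is the simplest stand-in for whatever registration method the patent actually uses; the tiny intensity grids below are fabricated.

```python
def locate_in_panorama(panorama, patch):
    """Find where a current image (patch) best matches within a larger
    panorama by exhaustive sum-of-squared-differences search."""
    ph, pw = len(patch), len(patch[0])
    best, best_cost = (0, 0), float("inf")
    for y in range(len(panorama) - ph + 1):
        for x in range(len(panorama[0]) - pw + 1):
            cost = sum((panorama[y + dy][x + dx] - patch[dy][dx]) ** 2
                       for dy in range(ph) for dx in range(pw))
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best

# The 2x2 patch appears verbatim at row 1, column 1 of the panorama.
pan = [[0, 0, 0, 0, 0],
       [0, 7, 8, 0, 0],
       [0, 9, 6, 0, 0],
       [0, 0, 0, 0, 0]]
loc = locate_in_panorama(pan, [[7, 8], [9, 6]])
```

The located coordinates would then index the same region in the second augmented image, which contains the vascular structures to display.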
Apparatus and system for virtual camera configuration and selection
A system and method for virtual camera configuration and selection. For example, one embodiment of a system comprises: a decode subsystem comprising circuitry to concurrently decode a plurality of video streams captured by cameras at an event to generate decoded video streams from a perspective of corresponding virtual cameras (VCAMs); video evaluation logic to apply at least one video quality metric to determine a quality value for the decoded video streams or a subset thereof, and to rank the decoded video streams based, at least in part, on the quality values associated with the decoded video streams; preview logic to provide the decoded video streams or modified versions thereof to one or more computing devices accessible to one or more video production team members and to further provide the quality values and/or the rank generated by the video quality evaluation logic; stream selection hardware logic to select a subset of the plurality of decoded video streams based on input from the one or more video production team members; and transcoder hardware logic to transcode the subset of the plurality of decoded video streams for live transmission over a public or private network.
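The rank-and-select portion of this pipeline can be sketched as scoring each decoded stream with a quality metric, sorting, and keeping the top-k for transcoding. The mean-luma metric and the stream names below are placeholders; the patent leaves the actual video quality metric open.

```python
def rank_streams(streams, metric, top_k):
    """Score each decoded stream, rank descending by quality value, and
    return the full ranking plus the top-k stream ids for transcoding."""
    scored = sorted(((metric(frames), sid) for sid, frames in streams.items()),
                    reverse=True)
    ranked = [sid for _, sid in scored]
    return ranked, ranked[:top_k]

def mean_luma(frames):
    """Hypothetical metric: mean pixel value as a stand-in for a real
    video quality measure."""
    flat = [p for frame in frames for row in frame for p in row]
    return sum(flat) / len(flat)

# Three virtual-camera streams, each a list of tiny grayscale frames.
streams = {
    "vcam_a": [[[50, 60], [55, 65]]],
    "vcam_b": [[[120, 130], [125, 135]]],
    "vcam_c": [[[90, 80], [85, 95]]],
}
ranked, selected = rank_streams(streams, mean_luma, top_k=2)
```

In the system described, the ranking would be shown to the production team as a preview, with their input overriding the automatic selection.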
Non-linear color correction
Systems and methods are disclosed for non-linear color correction. The method includes receiving an input image from an image sensor, converting the input image from a red, green, blue (RGB) color space format to an alternate color space format, determining localized hue correction parameters for a selected color in the alternate color space, determining localized saturation correction parameters for a selected hue in the alternate color space, applying the localized hue correction parameters and the localized saturation correction parameters to the input image to generate an output image, and storing, displaying, or transmitting the output image based on at least the localized hue correction parameters and the localized saturation correction parameters.
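A localized hue correction in an alternate color space can be sketched as: convert RGB to HSV, shift hue only for pixels near a selected hue, convert back. The Gaussian falloff, its width, and the parameter names are illustrative assumptions, not the patent's actual correction model (hue wrap-around is also ignored for brevity).

```python
import colorsys
import math

def localized_hue_correction(rgb, target_hue, shift, width=0.05):
    """Shift hue only for pixels whose hue lies near target_hue; the Gaussian
    weight keeps the correction localized to the selected color."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    weight = math.exp(-((h - target_hue) ** 2) / (2 * width ** 2))
    h = (h + weight * shift) % 1.0               # full shift only near target_hue
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))
```

A red pixel (hue 0.0) at the selected hue is shifted fully, while a blue pixel far from it is left essentially untouched.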
Image stitching device and image stitching method
An image stitching method includes: receiving a first image and a second image; determining that both the first image and the second image include a target object; obtaining a first brightness value and a second brightness value, the first brightness value being a brightness value of the target object in the first image, and the second brightness value being a brightness value of the target object in the second image; adjusting a brightness value of the first image and a brightness value of the second image according to the first brightness value and the second brightness value, so as to obtain a first image to be stitched and a second image to be stitched; and stitching the first image to be stitched and the second image to be stitched to obtain a first stitched image.
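The brightness-matching step can be sketched as: measure the shared target object's brightness in each image, apply a gain so the two measurements agree, then stitch. Reducing stitching to horizontal concatenation and using bounding-box regions are simplifications for illustration.

```python
def object_brightness(img, region):
    """Mean brightness of the target object's bounding box (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def match_and_stitch(img_a, img_b, region_a, region_b):
    """Scale image B so the shared target object has equal brightness in both
    images, then stitch by horizontal concatenation (a simplification)."""
    gain = object_brightness(img_a, region_a) / object_brightness(img_b, region_b)
    adjusted_b = [[p * gain for p in row] for row in img_b]
    return [row_a + row_b for row_a, row_b in zip(img_a, adjusted_b)]

# Image B saw the same object at half the brightness of image A.
a = [[100, 100], [100, 100]]
b = [[50, 50], [50, 50]]
stitched = match_and_stitch(a, b, (0, 2, 0, 2), (0, 2, 0, 2))
```

Anchoring the gain to a target object present in both images, rather than to global image statistics, is what keeps the adjustment consistent across the seam.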