Patent classifications
G06T3/18
Integrated vision-based and inertial sensor systems for use in vehicle navigation
A navigation system useful for providing speed, heading, and other navigational data to a drive system of a moving body, e.g., a vehicle body or a mobile robot, to navigate through a space. The navigation system integrates an inertial navigation system, e.g., a unit or system based on an inertial measurement unit (IMU), with a vision-based navigation unit or system such that the inertial navigation system can provide real-time navigation data and the vision-based navigation system can provide periodic, but more accurate, navigation data that is used to correct the inertial navigation system's output. The navigation system was designed with the goal of providing low-effort integration of inertial and video data. The methods and devices used in the new navigation system address problems associated with high-accuracy dead-reckoning systems (such as a typical vision-based navigation system) and enhance performance with low-cost IMUs.
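The correction scheme the abstract describes can be illustrated with a minimal sketch: high-rate IMU dead reckoning blended with slower, more accurate vision fixes. This is a hypothetical one-dimensional complementary-filter example, not the patented method; the rates, the blend weight `ALPHA`, and the function names are illustrative assumptions.

```python
# Hypothetical sketch: high-rate inertial dead reckoning corrected by
# periodic vision fixes (a simple complementary blend, not the patent's
# actual filter).

IMU_RATE_HZ = 100   # inertial updates per second (assumed)
ALPHA = 0.8         # weight given to each vision fix (assumed)

def fuse(imu_velocities, vision_fixes):
    """Dead-reckon position from IMU velocities, blending in vision fixes.

    imu_velocities: velocity samples (m/s) arriving at IMU_RATE_HZ
    vision_fixes:   dict mapping sample index -> absolute position fix (m)
    """
    dt = 1.0 / IMU_RATE_HZ
    position = 0.0
    track = []
    for i, v in enumerate(imu_velocities):
        position += v * dt                       # inertial dead reckoning
        if i in vision_fixes:                    # periodic vision correction
            position = (1 - ALPHA) * position + ALPHA * vision_fixes[i]
        track.append(position)
    return track
```

With no vision fixes the position is pure dead reckoning and drifts with any velocity bias; each fix pulls the estimate most of the way back toward the vision-derived position, which is the division of labor the abstract describes.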
Optical system with dynamic distortion using freeform elements
A method for designing an optical system that provides a reliable, robust, and successful realization of a distortion variation function is presented. In a preferred embodiment, the proposed distortion variation optical system includes at least two non-symmetrical elements that move in the transverse direction. The proposed freeform lens contains two transmissive refractive surfaces. The freeform elements designed with this method preferably have one flat surface and one non-symmetrical freeform surface. The two plano surfaces are preferably made to face each other, so that a miniature camera can be offered. The sag of the non-symmetrical freeform surface is used to produce variable optical power when the two freeform elements undergo a relative movement in the transverse direction. Using this method, an optical system with active distortion, a smaller form factor, and better imaging quality can be obtained.
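The variable-power mechanism described here resembles the classic Alvarez-lens principle: a cubic freeform surface paired with its complement yields a purely quadratic combined thickness when the pair is shifted transversely, i.e. a lens whose power scales with the shift. The surface profile, coefficient `A`, and refractive index below are illustrative assumptions, not the patent's design data.

```python
# Hedged sketch of the Alvarez-lens principle (assumed, not the patent's
# actual surfaces): z = A*(x*y**2 + x**3/3) paired with its inverse.

def combined_sag(A, d, x, y):
    """Thickness of element 1 shifted to x+d minus element 2 at x-d.

    Expanding the cubics gives A*(2*d*(x**2 + y**2) + 2*d**3/3):
    a paraboloid plus a constant piston term, i.e. a lens.
    """
    z = lambda u, v: A * (u * v**2 + u**3 / 3.0)
    return z(x + d, y) - z(x - d, y)

def alvarez_power(A, d, n=1.5):
    """Optical power induced by the relative transverse shift 2*d.

    Equating the quadratic optical-path term (n-1)*2*A*d*r**2 with a
    thin lens's r**2/(2*f) gives power P = 4*A*(n-1)*d.
    """
    return 4 * A * (n - 1) * d
```

Because the induced power is linear in the lateral shift `d`, a small transverse actuation range suffices to sweep the distortion/power function, which is consistent with the miniature-camera goal stated above.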
FLOW METER HAVING A VALVE
A flow control apparatus includes a body, a door, a drip chamber holster, and a valve. The door is pivotally coupled to the body. The drip chamber holster is coupled to one of the body or the door to retain a drip chamber. The valve is disposed within the body and is configured to receive a fluid line coupled to the drip chamber when the door is in an open position. The valve is operatively coupled to the door to secure the fluid line within the valve when the door is in a closed position.
METHODS, DEVICES AND COMPUTER PROGRAM PRODUCTS FOR GENERATING 3D MODELS
A method of generating a 3D model may include receiving a plurality of 2D images of a physical object captured from a respective plurality of viewpoints in a 3D scan of the physical object in a first process. The method may include receiving a first process 3D mesh representation of the physical object and calculating respective second process estimated position and/or orientation information for each one of the respective plurality of viewpoints of the plurality of 2D images. The method may include generating a second process 3D mesh representation of the physical object using the plurality of 2D images, the second process estimated position and/or orientation information, and the first process 3D mesh representation of the physical object. The method may include generating a 3D model of the physical object by applying surface texture information from the plurality of 2D images to the second process 3D mesh representation of the physical object.
METHODS AND SYSTEM FOR EFFICIENT PROCESSING OF GENERIC GEOMETRIC CORRECTION ENGINE
An apparatus and method for geometrically correcting a distorted input frame and generating an undistorted output frame. The apparatus includes an external memory block that stores the input frame, a counter block to compute output coordinates of the output frame for a region based on a block size of the region, a back mapping block to generate input coordinates corresponding to each of the output coordinates, a bounding module to compute input blocks corresponding to each of the input coordinates, a buffer module to fetch data corresponding to each of the input blocks, an interpolation module to interpolate data from the buffer module, and a display module that receives the interpolated data for each of the regions and stitches an output image. The method includes determining the size of the output block based on magnification data.
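The back-mapping and interpolation stages above can be sketched per pixel: for each output coordinate, compute the corresponding distorted input coordinate, then interpolate the input frame there. This hypothetical example uses a single-coefficient radial model and bilinear interpolation, and omits the block-based bounding/buffer machinery the patent's engine is built around.

```python
# Minimal backward-mapping sketch (assumed radial distortion model k1;
# the patented engine works block-by-block, which this per-pixel
# version omits for clarity).

def undistort(src, width, height, k1, cx, cy):
    """src: 2D list of input pixel values; returns the corrected grid."""
    out = [[0.0] * width for _ in range(height)]
    for yo in range(height):
        for xo in range(width):
            # back-map: output coordinate -> distorted input coordinate
            dx, dy = xo - cx, yo - cy
            r2 = dx * dx + dy * dy
            xi = cx + dx * (1 + k1 * r2)
            yi = cy + dy * (1 + k1 * r2)
            # bilinear interpolation in the input frame
            x0, y0 = int(xi), int(yi)
            if 0 <= x0 < width - 1 and 0 <= y0 < height - 1:
                fx, fy = xi - x0, yi - y0
                out[yo][xo] = (
                    src[y0][x0] * (1 - fx) * (1 - fy)
                    + src[y0][x0 + 1] * fx * (1 - fy)
                    + src[y0 + 1][x0] * (1 - fx) * fy
                    + src[y0 + 1][x0 + 1] * fx * fy
                )
    return out
```

The bounding module in the claimed apparatus serves exactly the role hidden inside this loop: given a block of output coordinates, it determines which rectangle of input pixels the interpolation will touch, so the buffer module can fetch only that rectangle from external memory.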
Image inpainting based on multiple image transformations
Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
VOLUMETRIC CAPTURE OF OBJECTS WITH A SINGLE RGBD CAMERA
A method includes receiving a first image including color data and depth data, determining a viewpoint associated with an augmented reality (AR) and/or virtual reality (VR) display displaying a second image, receiving at least one calibration image including the object in the first image, the object being in a different pose compared to its pose in the first image, and generating the second image based on the first image, the viewpoint, and the at least one calibration image.
Method and apparatus for generating projection-based frame with 360-degree image content represented by triangular projection faces assembled in triangle-based projection layout
A projection-based frame is generated according to an omnidirectional video frame and a triangle-based projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the triangle-based projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via a triangle-based projection of the viewing sphere. One side of a first triangular projection face has contact with one side of a second triangular projection face, and one side of a third triangular projection face has contact with another side of the second triangular projection face. One image content continuity boundary exists between one side of the first triangular projection face and one side of the second triangular projection face, and another image content continuity boundary exists between one side of the third triangular projection face and another side of the second triangular projection face.
Devices, systems, and methods for under vehicle surveillance
A device includes: a board; a camera coupled to the board, wherein the camera is configured to image an undercarriage based on the undercarriage moving over the camera; and a light source coupled to the board, wherein the light source is configured to illuminate the undercarriage based on the undercarriage moving over the light source.
VISUAL STYLIZATION ON STEREOSCOPIC IMAGES
In accordance with implementations of the subject matter described herein, a solution for visual stylization of stereoscopic images is proposed. In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. Through the solution, the disparity between the two source images of a stereoscopic image is taken into account when performing the visual style transfer, thereby maintaining a stereoscopic effect in the stereoscopic image consisting of the target images.