Patent classifications
G06T7/74
TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING
Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to an embodiment of the present technology includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
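The estimator, correction-amount calculation, and registration determination described above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the 5-degree angular range, the nearest-object correction rule, and the minimum-object-count registration parameter are all assumptions introduced here.

```python
import math

def angle_between(u, v):
    # Angle in radians between two 3-D direction vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def propose_calibration(gaze_est, objects, max_angle_deg=5.0, min_objects=1):
    """Find objects within an angular range around the estimated gaze
    vector, derive a correction vector toward the closest one, and
    decide (here: by a simple object-count parameter) whether the
    (estimate, correction) pair should be registered as calibration
    data. All thresholds are illustrative."""
    in_range = [o for o in objects
                if math.degrees(angle_between(gaze_est, o)) <= max_angle_deg]
    if len(in_range) < min_objects:
        return None  # registration determination: do not register
    target = min(in_range, key=lambda o: angle_between(gaze_est, o))
    correction = tuple(t - g for t, g in zip(target, gaze_est))
    return {"estimate": gaze_est, "correction": correction}
```

A real system would derive the parameter from object saliency or size rather than a bare count, but the shape of the decision is the same.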
MAP INFORMATION UPDATE METHOD, LANDMARK GENERATION METHOD, AND FEATURE POINT DISTRIBUTION ADJUSTMENT METHOD
A map information update method includes: (a) obtaining map information; (b) obtaining landmark observed positions indicating positions of one or more landmarks in a captured image; (c) adding that includes (i) generating added map information by adding information pertaining to the landmark observed positions to the map information, and (ii) updating the map information obtained in (a) to the added map information; (d) predicting that includes (i) calculating predicted map information based on the map information updated in (c), by using a neural network inference engine that has been trained, and (ii) updating the map information to the predicted map information; and updating information that includes (i) calculating updated map information based on the map information updated in (d), by using a gradient method, and (ii) updating the map information to the updated map information.
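The (c) add / (d) predict / (e) update loop can be sketched for one-dimensional landmark positions. The trained neural inference engine of step (d) is replaced by an identity stub, and the gradient method of the final step descends a plain squared-distance cost; both simplifications are assumptions for illustration only.

```python
def add_predict_update(map_positions, observed, predict_fn=lambda s: s,
                       lr=0.25, steps=40):
    """Sketch of the update cycle on 1-D landmark positions.
    predict_fn stands in for the trained neural network inference
    engine of step (d); the gradient step (e) minimizes
    sum_i (p_i - o_i)^2 by descent."""
    # (c) adding: start from the current map estimate
    state = list(map_positions)
    # (d) predicting: trained-network inference (identity stub here)
    state = predict_fn(state)
    # (e) updating: gradient method, using d/dp (p - o)^2 = 2 (p - o)
    for _ in range(steps):
        state = [p - lr * 2.0 * (p - o) for p, o in zip(state, observed)]
    return state
```

With this learning rate each residual halves per step, so the positions converge toward the observed landmark positions.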
AIRCRAFT DOOR CAMERA SYSTEM FOR DOCKING ALIGNMENT MONITORING
A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that a ground surface is within the field of view of the camera during taxiing of the aircraft. A display device is disposed within an interior of the aircraft. A processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera for docking guidance by identifying, within the captured image data, a region on the ground surface corresponding to an alignment fiducial indicating a parking location for the aircraft, determining, based on the region of the captured image data corresponding to the alignment fiducial indicating the parking location, a relative location of the aircraft with respect to the alignment fiducial, and outputting an indication of the relative location of the aircraft to the alignment fiducial.
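The relative-location determination above can be approximated by taking the centroid of the detected fiducial region and comparing it to the image center. This sketch assumes a simple pixel-offset proxy; a real system would project through the camera model to get metric offsets.

```python
def relative_offset(fiducial_pixels, image_width, image_height):
    """Centroid of the detected alignment-fiducial region, expressed
    as a horizontal/vertical offset from the image center -- a crude
    proxy for the aircraft's misalignment with the parking location."""
    xs = [x for x, _ in fiducial_pixels]
    ys = [y for _, y in fiducial_pixels]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    dx = cx - image_width / 2.0   # > 0: fiducial right of center
    dy = cy - image_height / 2.0  # > 0: fiducial lower in the frame
    return dx, dy

def guidance(dx, tol=2.0):
    """Illustrative indication output from the lateral offset."""
    if dx > tol:
        return "steer right"
    if dx < -tol:
        return "steer left"
    return "aligned"
```

The tolerance value and the guidance strings are placeholders, not from the patent.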
SYSTEM AND METHOD FOR CALIBRATING A TIME DIFFERENCE BETWEEN AN IMAGE PROCESSOR AND AN INERTIAL MEASUREMENT UNIT BASED ON INTER-FRAME POINT CORRESPONDENCE
Systems and methods are used for calibrating a time difference between an image signal processor (ISP) and an inertial measurement unit (IMU) of an image capture device. An image capture device includes a lens, an image sensor, an IMU, and an ISP. The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a time difference between the ISP and the IMU. The ISP performs a calibration using the calibration parameters.
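One plausible reading of the calibration step is a cross-correlation search: the rotation rate implied by matched key points is correlated with the gyro rate at candidate lags, and the best-correlated lag approximates the ISP-IMU time difference. The abstract does not specify this method, so the sketch below is an assumption.

```python
import math

def estimate_time_offset(flow_rate, gyro_rate, dt, max_lag=10):
    """Estimate the ISP-IMU time difference by cross-correlating a
    key-point-derived rotation-rate signal with the IMU gyro signal.
    Returns best_lag * dt. Names and signature are illustrative."""
    def corr(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db) if da > 0 and db > 0 else 0.0

    def score(lag):
        # Compare flow_rate[i] against gyro_rate[i + lag].
        if lag >= 0:
            return corr(flow_rate, gyro_rate[lag:])
        return corr(flow_rate[-lag:], gyro_rate)

    best = max(range(-max_lag, max_lag + 1), key=score)
    return best * dt
```

For a gyro trace delayed by three samples relative to the flow trace, the search recovers a lag of 3, i.e. an offset of three sample periods.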
SYSTEM AND METHOD FOR GENERATING VIRTUAL PSEUDO 3D OUTPUTS FROM IMAGES
A method for generating virtual pseudo three-dimensional 360-degree outputs from 2D images of an object 102 is provided. An image viewer plane of the object 102 in the 3D image to be rendered on a user device 108 is detected using an augmented reality technique. The image viewer plane is placed facing the user device 108, rendering ‘Image 0’, and movement coordinates of the user device 108 with respect to the image viewer plane are detected to calculate the virtual pseudo 3D image to be displayed at a given angle of view by interpolating between two consecutive virtual pseudo 3D images. The image viewer plane is changed with respect to the movement of the user device 108 to change the virtual pseudo 3D image and the interpolated virtual pseudo 3D image on the plane, and that image is displayed as an augmented reality object in real time on the user device 108.
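The selection of the two consecutive pre-rendered views and the interpolation weight between them can be sketched as a mapping from viewing angle to a view-pair index. The 36-view ring (one view per 10 degrees) is an assumed configuration, not taken from the patent.

```python
def view_for_angle(angle_deg, num_views=36):
    """Map a viewing angle to the pair of consecutive pre-rendered
    views bracketing it, plus a blend weight t in [0, 1) for linear
    interpolation between them. Wraps around at 360 degrees."""
    step = 360.0 / num_views
    a = angle_deg % 360.0
    i = int(a // step)           # lower bracketing view
    t = (a - i * step) / step    # interpolation weight toward view i+1
    return i, (i + 1) % num_views, t
```

The renderer would then blend view i and view i+1 with weights (1 - t) and t as the device moves relative to the image viewer plane.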
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
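The viewpoint-adaptive color weighting can be sketched by weighting each camera's pixel color by how well its vantage direction aligns with the user-selected viewpoint. Cosine weighting with negative-alignment clamping is an assumption here; the patent's geometric relationship may be more elaborate.

```python
import math

def blend_colors(viewpoint, camera_dirs, colors):
    """Blend per-camera pixel colors, weighting each camera by the
    cosine of the angle between its vantage direction and the
    user-selected viewpoint (cameras facing away get weight 0)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u))
    weights = [max(0.0, dot(viewpoint, c) / (norm(viewpoint) * norm(c)))
               for c in camera_dirs]
    total = sum(weights) or 1.0
    return tuple(sum(w * col[k] for w, col in zip(weights, colors)) / total
                 for k in range(3))
```

A camera looking along the chosen viewpoint dominates the blend; an orthogonal camera contributes nothing.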
Image positioning system and image positioning method based on upsampling
An image positioning system based on upsampling and a method thereof are provided. The image positioning method fetches a region image covering a target from a wide region image, determines a rough position of the target, executes an upsampling process on the region image based on a neural network data model to obtain a super-resolution region image, maps the rough position onto the super-resolution region image, and analyzes the super-resolution region image to determine a precise position of the target. The presently disclosed example can significantly improve positioning efficiency and effectively reduce the required hardware cost.
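The coordinate bookkeeping in this pipeline, cropping, mapping the rough position into super-resolution coordinates, refining there, and mapping back, can be sketched as follows. The refinement function is left as a caller-supplied stub, since the abstract does not specify the analysis step.

```python
def refine_position(rough_in_wide, crop_origin, scale, refine_fn):
    """Translate a rough target position in the wide image into the
    cropped region, scale it into super-resolution coordinates, refine
    it there (refine_fn, e.g. a local peak search -- a placeholder),
    and map the precise result back to wide-image coordinates."""
    # wide image -> super-resolution region image
    rx = (rough_in_wide[0] - crop_origin[0]) * scale
    ry = (rough_in_wide[1] - crop_origin[1]) * scale
    # analysis on the super-resolution image yields a precise position
    px, py = refine_fn((rx, ry))
    # super-resolution region image -> wide image
    return (crop_origin[0] + px / scale, crop_origin[1] + py / scale)
```

Note that a 2-pixel correction found at 4x super-resolution corresponds to only half a pixel in the wide image, which is where the precision gain comes from.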
Object detection using multiple three dimensional scans
One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans with one another and use that transform to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
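The transform that brings the second scan into the first scan's coordinate system can be sketched in its simplest form: a least-squares translation estimated from corresponding points. A full pipeline would estimate a rigid or similarity transform (rotation and possibly scale as well); translation-only is an assumption to keep the sketch short.

```python
def estimate_translation(src, dst):
    """Least-squares translation aligning scan-2 points (src) to
    corresponding scan-1 points (dst). Stand-in for the full transform
    between the two scans' coordinate systems."""
    n = len(src)
    return tuple(sum(d[k] - s[k] for s, d in zip(src, dst)) / n
                 for k in range(3))

def apply_transform(t, points):
    """Map scan-2 points into the single (scan-1) coordinate system."""
    return [tuple(p[k] + t[k] for k in range(3)) for p in points]
```

Once both scans live in the single coordinate system, augmented content anchored to the object is placed consistently regardless of which lighting condition the detector later matches.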
Apparatus and methods for augmented reality vehicle condition inspection
Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle, an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured, and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of a same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
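The image-analyzer comparison step can be illustrated with a deliberately crude metric: the fraction of pixels whose grayscale difference from the reference image exceeds a threshold. The threshold and the flat pixel-list representation are assumptions; a production system would register the images and use a far more robust comparison.

```python
def damage_score(inspection, reference, threshold=30):
    """Fraction of pixels whose grayscale difference from the
    reference vehicle image exceeds a threshold -- a crude stand-in
    for the apparatus's inspection-image comparison. Inputs are
    equal-length flat lists of 0-255 grayscale values."""
    diffs = [abs(a - b) for a, b in zip(inspection, reference)]
    flagged = sum(1 for d in diffs if d > threshold)
    return flagged / len(diffs)
```

A score near 0 suggests the vehicle part matches the reference condition; larger scores flag regions for closer inspection.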