
Generating Two-Dimensional Views with Gridline Information
20220309752 · 2022-09-29

An example computing system is configured to extract gridline information from a two-dimensional drawing file and determine, for the gridline information, first coordinate information that is based on a first datum. The computing system converts the first coordinate information into second coordinate information that is based on a second datum, where the second coordinate information is used by a three-dimensional drawing file. The computing system is also configured to receive a request to generate a two-dimensional view of the three-dimensional drawing file, where the two-dimensional view includes an intersection of two meshes within the three-dimensional drawing file. The computing system generates the two-dimensional view of the three-dimensional drawing file and adds, to the generated two-dimensional view, (i) at least one gridline corresponding to the gridline information and (ii) dimensioning information involving the at least one gridline and at least one of the two meshes.
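The abstract does not disclose how the datum conversion is carried out. A minimal sketch, assuming the two datums differ only by a translation (plus an optional uniform scale, both names hypothetical), could look like:

```python
def convert_coordinates(points, first_datum, second_datum, scale=1.0):
    """Re-express 2-D points given relative to first_datum as coordinates
    relative to second_datum, assuming the datums differ by a translation
    and an optional uniform scale (an assumption, not the patent's method)."""
    converted = []
    for x, y in points:
        # Position of the point in a shared world frame.
        wx = first_datum[0] + x * scale
        wy = first_datum[1] + y * scale
        # Re-express relative to the second datum.
        converted.append((wx - second_datum[0], wy - second_datum[1]))
    return converted
```

For example, a gridline endpoint at (1, 2) relative to a 2D file origin at (0, 0) becomes (-9, -8) relative to a 3D file origin at (10, 10).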

System for video super resolution using semantic components

A method for increasing the resolution of a series of low-resolution frames of a video sequence to a series of high-resolution frames includes receiving the series of low-resolution frames. The system determines a first plurality of semantically relevant key points of a first low-resolution frame of the series and a second plurality of semantically relevant key points of a second low-resolution frame of the series. The system temporally processes the first plurality of key points based upon the second plurality of key points to determine a more temporally consistent set of key points for the first frame.
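The abstract leaves the temporal processing unspecified. One simple, hypothetical realization is to blend each key point with its counterpart in the neighboring frame to suppress frame-to-frame jitter:

```python
def temporally_smooth(kp_first, kp_second, alpha=0.5):
    """Blend each (x, y) key point of the first frame with its counterpart
    in the second frame (simple weighted average) to obtain a more
    temporally consistent set of key points. The blend weight alpha is an
    illustrative choice, not taken from the patent."""
    return [
        (alpha * x1 + (1 - alpha) * x2, alpha * y1 + (1 - alpha) * y2)
        for (x1, y1), (x2, y2) in zip(kp_first, kp_second)
    ]
```

A real system would likely gate the blend on matching confidence so genuinely moving key points are not over-smoothed.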

SAMPLE IMAGING VIA TWO-PASS LIGHT-FIELD RECONSTRUCTION

Methods and systems for sample imaging via two-pass light-field reconstruction. In an exemplary method, a light-field image of a sample may be captured in a light-field plane. The light-field image may be forward-projected computationally to each of a plurality of z-planes in object space to generate a set of forward-projected z-plane images. The same xy-region in object space may then be backward-projected computationally from each z-plane image to the light-field plane and compared with the captured light-field image, to determine a respective degree of correspondence between the backward-projected xy-region from each z-plane image and the light-field image. For each different xy-region, at least one of the forward-projected z-plane images may be selected to contribute data for that xy-region in a 2D or 3D object-space image of the sample.
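The selection step reduces to picking, per xy-region, the z-plane whose backward projection best matches the captured data. A minimal sketch, using sum-of-squared-differences as the correspondence measure (the patent does not name a specific metric), could be:

```python
def best_z_plane(lightfield_region, backprojected_regions):
    """Return the index of the z-plane whose backward-projected region most
    closely matches the captured light-field region. Regions are flat lists
    of pixel values; sum-of-squared-differences is an assumed metric."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = [ssd(lightfield_region, r) for r in backprojected_regions]
    # Lowest SSD = highest degree of correspondence.
    return scores.index(min(scores))
```

Running this per xy-region effectively assigns each region a best-focus depth, which is what lets the method assemble a 2D or 3D object-space image.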

TRUNCATED SQUARE PYRAMID GEOMETRY AND FRAME PACKING STRUCTURE FOR REPRESENTING VIRTUAL REALITY VIDEO CONTENT
20170280126 · 2017-09-28

Techniques and systems are described for mapping 360-degree video data to a truncated square pyramid shape. A 360-degree video frame can include 360-degrees' worth of pixel data, and thus be spherical in shape. By mapping the spherical video data to the planes provided by a truncated square pyramid, the total size of the 360-degree video frame can be reduced. The planes of the truncated square pyramid can be oriented such that the base of the truncated square pyramid represents a front view and the top of the truncated square pyramid represents a back view. In this way, the front view can be captured at full resolution, the back view can be captured at reduced resolution, and the left, right, up, and bottom views can be captured at decreasing resolutions. Frame packing structures can also be defined for 360-degree video data that has been mapped to a truncated square pyramid shape.
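The size saving can be illustrated with a rough area budget. The geometry below is an illustrative model, not the patent's exact packing: the front face edge is normalized to 1, the back face edge is a parameter, and each side face is treated as a trapezoid of height (1 - back_edge) / 2:

```python
def tsp_total_area(back_edge=0.5):
    """Approximate packed area of a truncated-square-pyramid layout.
    front face: edge 1 (full resolution); back face: edge back_edge
    (reduced resolution); four side faces: trapezoids with parallel
    edges 1 and back_edge and an assumed height of (1 - back_edge) / 2.
    A cube map needs 6 full faces, so compare the result against 6."""
    front = 1.0
    back = back_edge ** 2
    side = (1.0 + back_edge) / 2 * (1.0 - back_edge) / 2
    return front + back + 4 * side
```

With back_edge = 0.5 this model gives a total area of 2.0, i.e. about one third of a cube map's six full-resolution faces, which is the kind of reduction the truncated-pyramid mapping is meant to deliver.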

Image processing device and system, image processing method, and medium
09773296 · 2017-09-26

The present invention is directed to an image processing device including: a unit configured to extract, as target data, a set of signal values of pixels included in a target region including a target pixel in an input image; a unit configured to obtain reference data for each of a plurality of reference pixels corresponding to the target pixel; a unit configured to generate transformation reference data by transforming the reference data; a unit configured to determine a weight for each of a plurality of transformation reference pixels based on the target data and the reference data; and a unit configured to generate an output pixel corresponding to the target pixel by using a signal value calculated based on the transformation reference pixels and the weights.
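The weighting-and-averaging stage resembles non-local-means filtering. A minimal sketch in that spirit (the Gaussian weight on patch distance and the parameter h are assumptions, not the patent's formula), skipping the transformation step for brevity:

```python
import math

def filter_pixel(target_patch, reference_patches, reference_values, h=10.0):
    """Weight each reference pixel by the similarity of its patch to the
    target patch, then return the weighted average of the reference
    pixels' signal values (non-local-means style sketch)."""
    weights = []
    for patch in reference_patches:
        dist2 = sum((a - b) ** 2 for a, b in zip(target_patch, patch))
        weights.append(math.exp(-dist2 / (h * h)))  # similar patch -> large weight
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, reference_values)) / total
```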

EFFICIENT COMMUNICATION INTERFACE FOR CASTING INTERACTIVELY CONTROLLED VISUAL CONTENT
20170269895 · 2017-09-21

System, method, and computer program product embodiments for efficiently casting interactively-controlled visual content displayed on a first display screen to a second display screen. In an embodiment, the computing device sends the visual content displayed on the first display screen to a multimedia device for displaying on the second display screen. Upon receipt of an instruction that visually manipulates how the visual content is displayed on the first display screen, the computing device generates a command representative of the received instruction. The command may specify a positional relationship between the center of the first display screen and the visual content displayed on the first display screen. The computing device then sends the command to the multimedia device, causing the second display screen to display the visual content according to the positional relationship.
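The efficiency comes from sending a small command instead of re-streaming frames. A hypothetical command payload encoding the positional relationship (field names are illustrative, not from the patent) might be built like this:

```python
def make_cast_command(screen_w, screen_h, content_x, content_y, zoom=1.0):
    """Build a compact command describing where the visual content sits
    relative to the center of the first display, so the receiving
    multimedia device can reproduce the same framing on the second
    screen without receiving new pixel data."""
    return {
        "dx": content_x - screen_w / 2,   # horizontal offset from center
        "dy": content_y - screen_h / 2,   # vertical offset from center
        "zoom": zoom,                     # current magnification
    }
```

On a pan or pinch gesture, only this few-byte dictionary travels to the multimedia device, which applies the same offset and zoom to its local copy of the content.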

APPARATUS, METHOD AND STORAGE MEDIUM FOR CORRECTING PAGE IMAGE
20170262163 · 2017-09-14

When a touch operation is performed with one finger, it is judged to be a single-point operation on one control point of a mesh image constituted by Bézier curves, and deformation processing is performed in which the corresponding control point is moved in accordance with the movement of the one touching finger. On the other hand, when a touch operation is performed with a plurality of fingers, it is judged to be a multi-point operation on all control points of the mesh image, and deformation processing is performed in which all the control points are moved in accordance with the movements of the plurality of fingers while the linearity of the mesh image is maintained.
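The one-finger/many-finger dispatch can be sketched as follows. This is a simplified model, assuming the single-point case moves the nearest control point and the multi-point case translates every control point by the average finger displacement (one way to keep the mesh's linearity; the patent does not pin down the exact rule):

```python
def deform(control_points, touches):
    """touches: list of ((start_x, start_y), (end_x, end_y)) per finger.
    One finger: move only the control point nearest the touch start.
    Several fingers: translate every control point by the average
    displacement, so the mesh moves rigidly and stays linear."""
    if len(touches) == 1:
        (sx, sy), (ex, ey) = touches[0]
        dx, dy = ex - sx, ey - sy
        idx = min(range(len(control_points)),
                  key=lambda i: (control_points[i][0] - sx) ** 2 +
                                (control_points[i][1] - sy) ** 2)
        pts = list(control_points)
        pts[idx] = (pts[idx][0] + dx, pts[idx][1] + dy)
        return pts
    # Multi-point: average displacement of all touches, applied uniformly.
    dx = sum(e[0] - s[0] for s, e in touches) / len(touches)
    dy = sum(e[1] - s[1] for s, e in touches) / len(touches)
    return [(x + dx, y + dy) for x, y in control_points]
```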

IMAGE FEATURE COMBINATION FOR IMAGE-BASED OBJECT RECOGNITION
20170263019 · 2017-09-14

Methods, systems, and articles of manufacture to improve image recognition searching are disclosed. In some embodiments, a first document image of a known object is used to generate one or more other document images of the same object by applying one or more techniques for synthetically generating images. The synthetically generated images correspond to different variations in conditions under which a potential query image might be captured. Extracted features from an initial image of a known object and features extracted from the one or more synthetically generated images are stored, along with their locations, as part of a common model of the known object. In other embodiments, image recognition search effectiveness is improved by transforming the location of features of multiple images of a same known object into a common coordinate system. This can enhance the accuracy of certain aspects of existing image search/recognition techniques including, for example, geometric verification.
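Mapping feature locations into a common coordinate system can be sketched with a similarity transform (scale, rotation, translation). The patent may well use richer transforms such as homographies; this is a minimal illustrative version:

```python
import math

def to_common_frame(features, scale, angle, tx, ty):
    """Map feature (x, y) locations from one image into a shared model
    coordinate system via a similarity transform, so features extracted
    from several images of the same known object can be merged into a
    single common model."""
    c, s = math.cos(angle), math.sin(angle)
    return [(scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)
            for x, y in features]
```

Once all images' features live in one frame, geometric verification can compare feature positions directly instead of per-image.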

GRAPHICS PROCESSING SYSTEMS

A graphics processing system, and a method of operating a graphics processing system, that generate “spacewarped” frames for display are disclosed. Motion vectors are used to determine the motion of objects appearing in rendered application frames. The so-determined motion is then used to generate “spacewarped” versions of the rendered application frames.
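The core of spacewarping is extrapolating a rendered frame along its motion vectors rather than re-rendering. A toy forward-warping sketch (nearest-pixel splatting, with later writes overwriting earlier ones as a crude occlusion rule; the fractional timestep dt is an assumption):

```python
def spacewarp(frame, motion, dt=0.5):
    """Extrapolate a rendered frame dt of a frame-interval forward by
    moving each pixel along its per-pixel (dx, dy) motion vector.
    frame: 2-D list of values; motion: 2-D list of (dx, dy) tuples."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]
            nx = min(w - 1, max(0, round(x + dx * dt)))  # clamp to frame
            ny = min(h - 1, max(0, round(y + dy * dt)))
            out[ny][nx] = frame[y][x]  # later writes win (crude occlusion)
    return out
```

A production system would also fill the disocclusion holes this forward warp leaves behind; that step is omitted here.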

Iterative multi-directional image search supporting large template matching

Systems and methods which provide iterative multi-directional image searching are described. Embodiments utilize a multi-directional searching pattern for defining one or more searching areas within a source image. A searching area defines a region of the source image in which a template image is searched in the multiple directions of the multi-directional searching pattern. The location of the searching area within the source image may be updated iteratively, such as based upon motion vectors derived from the searching, until a template image match position is identified within the source image. Embodiments transform the template image and the searching area to 1D representations corresponding to each direction of the multi-directional image search. Embodiments accommodate rotation and scale variance of the subject (e.g., object of interest) of the template image within the source image.
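The 1D-representation idea can be sketched with projection profiles: collapse the 2-D images to 1-D profiles along a direction (here, row sums for the vertical direction) and match the profiles with a sliding comparison. The sum-of-absolute-differences score is an illustrative choice, not necessarily the patent's:

```python
def project_rows(img):
    """Collapse a 2-D image (list of rows) to a 1-D profile of row sums."""
    return [sum(row) for row in img]

def best_offset(profile, template_profile):
    """Slide the 1-D template profile along the search-area profile and
    return the offset with the smallest sum of absolute differences."""
    n, m = len(profile), len(template_profile)
    best, best_score = 0, float("inf")
    for off in range(n - m + 1):
        score = sum(abs(profile[off + i] - template_profile[i])
                    for i in range(m))
        if score < best_score:
            best, best_score = off, score
    return best
```

Repeating this per direction of the multi-directional pattern yields a motion estimate per axis, which is what drives the iterative update of the searching area.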