Patent classifications
G06T3/4007
Font rendering method and apparatus, and computer-readable storage medium
A font rendering method, an apparatus, and a computer-readable storage medium are provided. The font rendering method includes: receiving a character string to be rendered and reading a font file; parsing the font file to obtain text glyph information and template information corresponding to the character string; generating an animation unit according to the text glyph information and the template information; and rendering the animation unit. With this font rendering method, template effects can be customized for different font effects and different row/column layouts, and mixed text sizes, customized text positions, customized text colors, and special effects are supported. The result is better suited to scene matching and picture-text composition, more fully expresses semantic priority, strengthens the visual impact of the text, and highlights both the focus of the language and the artistic conception of the text.
Method and apparatus for enhanced anti-aliasing filtering on a GPU
A method for reducing aliasing artefacts in an output image may include obtaining a plurality of input images captured by a plurality of cameras, each camera having a different field of view of an environment surrounding a vehicle, wherein the plurality of input images are mapped to the output image to represent the environment from a predefined virtual point of view. The method may further include, for each pixel position in the output image, obtaining a first pixel density value corresponding to a first output pixel position in the output image; and, upon determining that the first pixel density value is higher than a threshold, calculating a first output brightness value corresponding to the first output pixel position based at least on a plurality of brightness values corresponding to a plurality of neighboring pixels of a corresponding position in a first input image of the plurality of input images.
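The density-gated filtering described above can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation: `out_to_in`, the box-average kernel, and the neighborhood radius are all assumptions.

```python
import numpy as np

def antialias_pixel(input_img, density_map, out_to_in, threshold=1.0, radius=1):
    """For each output pixel whose pixel-density value exceeds the
    threshold, average brightness over a neighborhood of the mapped
    position in the input image; otherwise sample the input directly.

    input_img   : 2-D brightness array of one camera's input image.
    density_map : per-output-pixel density values (same shape as output).
    out_to_in   : (H, W, 2) array mapping each output pixel to an
                  (row, col) position in the input image (an assumption;
                  the patent maps cameras to a virtual point of view).
    """
    h, w = density_map.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            iy, ix = out_to_in[y, x]  # mapped position in the input image
            if density_map[y, x] > threshold:
                # high density: blend a neighborhood to suppress aliasing
                y0, y1 = max(iy - radius, 0), min(iy + radius + 1, input_img.shape[0])
                x0, x1 = max(ix - radius, 0), min(ix + radius + 1, input_img.shape[1])
                out[y, x] = input_img[y0:y1, x0:x1].mean()
            else:
                # low density: direct sampling is sufficient
                out[y, x] = input_img[iy, ix]
    return out
```

The gate matters because neighborhood averaging blurs; applying it only where the input-to-output pixel density is high confines the cost and the blur to regions that would otherwise alias.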
DECODING APPARATUS, ENCODING APPARATUS, DECODING METHOD, ENCODING METHOD, AND PROGRAM
A decoding apparatus includes: an obtainment unit that obtains low-frame-rate images, i.e., moving images at a low frame rate, together with weights, where a high frame rate, a medium frame rate, and a low frame rate have been determined in advance in descending order; and a decoding unit that generates a third frame of medium-frame-rate images (moving images at the medium frame rate) by compositing, based on the weights, a first frame and a second frame that are chronologically contiguous in the low-frame-rate images. The low-frame-rate images and the weights are derived in advance so as to minimize the degree of deviation between a plurality of frames of the high-frame-rate moving images in a preset period and a plurality of frames of the medium-frame-rate images in the same period.
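The compositing step reduces to a weighted blend of two contiguous low-frame-rate frames. A minimal sketch, assuming a single scalar weight per generated frame (the patent derives the weights offline by minimizing deviation from the high-frame-rate sequence):

```python
import numpy as np

def composite_medium_frame(frame_a, frame_b, weight):
    """Generate one medium-frame-rate frame by a weighted blend of two
    chronologically contiguous low-frame-rate frames.

    `weight` is the blending coefficient for `frame_a`; how it is
    derived (the deviation-minimizing optimization) is outside this
    sketch.
    """
    return weight * frame_a + (1.0 - weight) * frame_b
```

Because the weights are fixed ahead of time, the decoder's work per generated frame is a single multiply-add over the frame buffers.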
SPATIAL AND TEMPORAL UPSAMPLING TECHNIQUES FOR SIMULATED SENSOR DATA
A sensor simulation system may generate sensor data for use in simulations by rendering two-dimensional views of a three-dimensional simulated environment. In various examples, the sensor simulation system uses sensor dependency data to determine specific views to be re-rendered at different times during the simulation. The sensor simulation system also may generate unified views with multi-sensor data at each region (e.g., pixel) of the two-dimensional view for consumption by different sensor types. A hybrid technique may be used in some implementations in which rasterization is used to generate a view, after which ray tracing is used to align the view with a particular sensor. Spatial and temporal upsampling techniques also may be used, including depth-aware and velocity-aware analyses for simulated objects, to improve view resolution and reduce the frequency of re-rendering views.
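The depth-aware upsampling mentioned above can be illustrated with a small sketch. This is one plausible reading, assuming nearest-source selection with a depth tolerance; the function and parameter names are not from the source.

```python
import numpy as np

def depth_aware_upsample(values, depths, factor=2, depth_tol=0.5):
    """Depth-aware spatial upsampling of a rendered view: each output
    pixel averages only those low-resolution neighbors whose depth is
    close to that of its nearest source pixel, so values are not
    blended across depth discontinuities (object silhouettes).

    `values` and `depths` are low-resolution 2-D arrays; `depth_tol`
    is an assumed similarity threshold.
    """
    lh, lw = values.shape
    out = np.empty((lh * factor, lw * factor), dtype=float)
    for y in range(lh * factor):
        for x in range(lw * factor):
            sy, sx = y // factor, x // factor   # nearest source pixel
            ref = depths[sy, sx]
            acc, n = 0.0, 0
            # average the 3x3 source neighborhood, gated by depth similarity
            for ny in range(max(sy - 1, 0), min(sy + 2, lh)):
                for nx in range(max(sx - 1, 0), min(sx + 2, lw)):
                    if abs(depths[ny, nx] - ref) <= depth_tol:
                        acc += values[ny, nx]
                        n += 1
            out[y, x] = acc / n
    return out
```

The depth gate is what makes the technique "depth-aware": plain bilinear upsampling would smear foreground sensor returns into the background at object edges, which matters when the view feeds simulated lidar or radar.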
Rapid estimation of effective illuminance patterns for projected light fields
Apparatus and methods are provided that employ one or more of a variety of techniques for reducing the time required to display high resolution images on a high dynamic range display having a light source layer and a display layer. In one technique, the image resolution is reduced, an effective luminance pattern is determined for the reduced resolution image, and the resolution of the effective luminance pattern is then increased to the resolution of the display layer. In another technique, the light source layer's point spread function is decomposed into a plurality of components, and an effective luminance pattern is determined for each component. The effective luminance patterns are then combined to produce a total effective luminance pattern. Additional image display time reduction techniques are provided.
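The first technique (reduce resolution, compute the effective luminance pattern, then upsample) can be sketched as below. Box downsampling, a brute-force PSF convolution, and nearest-neighbour upsampling are assumptions made for illustration; the patent does not specify these kernels.

```python
import numpy as np

def effective_luminance_lowres(image, psf, factor=4):
    """Estimate an effective luminance pattern at reduced resolution,
    then scale it back up to the display layer's resolution.

    image  : 2-D target image at display-layer resolution
             (dimensions assumed divisible by `factor`).
    psf    : small 2-D point spread function of the light source layer.
    factor : resolution-reduction factor (an assumption).
    """
    h, w = image.shape
    # reduce resolution by box-averaging factor x factor blocks
    low = image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # effective luminance at low resolution: 2-D convolution with the PSF
    pad = psf.shape[0] // 2
    padded = np.pad(low, pad, mode='edge')
    eff = np.zeros_like(low)
    for dy in range(psf.shape[0]):
        for dx in range(psf.shape[1]):
            eff += psf[dy, dx] * padded[dy:dy + low.shape[0], dx:dx + low.shape[1]]
    # increase resolution back up to the display layer
    return np.repeat(np.repeat(eff, factor, axis=0), factor, axis=1)
```

The saving comes from convolving at the reduced resolution: the PSF convolution, the dominant cost, shrinks by roughly `factor**2`, while the final upsampling is cheap.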
Information processing apparatus, information processing method, and program
There is provided an information processing apparatus, an information processing method, and a program with which highly accurate depth information can be acquired. The information processing apparatus includes an interpolation image generation unit, a difference image generation unit, and a depth calculation unit. The interpolation image generation unit generates an interpolation image on the basis of a first normal image and a second normal image among the first normal image, a pattern image irradiated with infrared pattern light, and the second normal image, the interpolation image corresponding to a time at which the pattern image is captured. The difference image generation unit generates a difference image between the interpolation image and the pattern image. The depth calculation unit calculates depth information by using the difference image.
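The interpolation and difference steps can be sketched as follows; the linear interpolation weight `alpha` (standing in for the pattern frame's capture time between the two normal frames) is an assumption.

```python
import numpy as np

def pattern_difference(normal_1, normal_2, pattern_img, alpha=0.5):
    """Temporally interpolate two normal images to the pattern frame's
    capture time, then subtract to isolate the projected IR pattern.

    `alpha` is the assumed interpolation weight toward `normal_2`
    (0.5 = pattern captured midway between the normal frames). The
    resulting difference image is what the depth calculation consumes.
    """
    interp = (1.0 - alpha) * normal_1 + alpha * normal_2
    return pattern_img - interp
```

Subtracting an interpolation image rather than a single neighboring frame compensates for scene motion between captures, so the difference image contains mostly the projected pattern rather than motion artifacts.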
Generation of synthetic high-elevation digital images from temporal sequences of high-elevation digital images
Implementations relate to detecting/replacing transient obstructions from high-elevation digital images, and/or to fusing data from high-elevation digital images having different spatial, temporal, and/or spectral resolutions. In various implementations, first and second temporal sequences of high-elevation digital images capturing a geographic area may be obtained. These temporal sequences may have different spatial, temporal, and/or spectral resolutions (or frequencies). A mapping may be generated of the pixels of the high-elevation digital images of the second temporal sequence to respective sub-pixels of the first temporal sequence. A point in time for which a synthetic high-elevation digital image of the geographic area is to be generated may be selected. The synthetic high-elevation digital image may then be generated for the selected point in time based on the mapping and other data described herein.
Multi-level lookup tables for control point processing and histogram collection
Multiple lookup tables (LUTs) storing different numbers of control point values are used to process pixels within different blocks of an image, such as after image processing using tone mapping and/or tone control, and/or to collect histogram information or implement 3D LUTs. First control point values stored within a first LUT are applied against pixels of a given block of an image to produce a distorted image block. Second control point values stored within a second lookup table are applied against a pixel of the distorted image block to produce a processed pixel. The second LUT is one of a plurality of second LUTs and stores fewer values than the first LUT. A processed image is produced using the processed pixel. The processed image is then output for further processing or display.
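The two-level control-point scheme can be sketched with piecewise-linear lookups. Evenly spaced control points over [0, 1] and per-block selection of the second LUT are assumptions for illustration.

```python
import numpy as np

def apply_control_points(pixel, lut):
    """Piecewise-linear lookup: `lut` holds control point values at
    evenly spaced input positions over [0, 1]."""
    n = len(lut) - 1
    pos = np.clip(pixel, 0.0, 1.0) * n
    i = int(min(pos, n - 1))          # index of the left control point
    frac = pos - i                    # fractional position between points
    return (1 - frac) * lut[i] + frac * lut[i + 1]

def two_level_process(pixel, first_lut, second_luts, block_id):
    """Two-level processing: the first (larger) LUT is applied to a
    pixel of a given image block to produce a distorted value; a
    smaller second LUT, one of several (selected here by `block_id`,
    an assumption), refines it into the processed pixel."""
    distorted = apply_control_points(pixel, first_lut)
    return apply_control_points(distorted, second_luts[block_id])
```

Keeping many small second LUTs instead of one large per-block table is the point of the hierarchy: the coarse first LUT captures the shared tone curve, so each second LUT needs only a few control points for its block's residual correction.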
Image processing apparatus, image processing method, and image processing system
The disclosure proposes an image processing apparatus for rendering a maximum intensity projection image: only voxels having a high brightness value in the three-dimensional volume data are extracted as objects to be rendered, and the brightness values of these voxels are used for the corresponding pixels.
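The described rendering reduces to a thresholded maximum intensity projection (MIP). A minimal sketch, assuming an axis-aligned projection and a simple brightness threshold as the "high brightness" criterion (the patent's exact voxel-selection rule is not specified here):

```python
import numpy as np

def max_intensity_projection(volume, axis=0, threshold=0.0):
    """Render a MIP image along one axis of a 3-D volume: voxels at or
    below `threshold` are excluded from rendering, and each output
    pixel takes the maximum remaining brightness along its ray.
    Rays with no qualifying voxel fall back to 0.
    """
    masked = np.where(volume > threshold, volume, -np.inf)
    proj = masked.max(axis=axis)
    return np.where(np.isfinite(proj), proj, 0.0)
```

Restricting the projection to high-brightness voxels is what makes this cheap: low-brightness voxels can be skipped entirely, since they can never win the per-ray maximum.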