Patent classifications
G06T3/053
METHODS AND SYSTEMS FOR PROCESSING IMAGES TO PERFORM AUTOMATIC ALIGNMENT OF ELECTRONIC IMAGES
Systems and methods are disclosed for aligning a two-dimensional (2D) design image to a 2D projection image of a three-dimensional (3D) design model. One method comprises receiving a 2D design document, the 2D design document comprising a 2D design image, and receiving a 3D design file comprising a 3D design model, the 3D design model comprising one or more design elements. The method further comprises generating a 2D projection image based on the 3D design model, the 2D projection image comprising a representation of at least a portion of the one or more design elements, generating a projection barcode based on the 2D projection image, and generating a drawing barcode based on the 2D design image. The method further comprises aligning the 2D projection image and the 2D design image by comparing the projection barcode and the drawing barcode.
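As a rough illustration of the alignment step in this abstract — assuming the "barcode" is a 1D projection profile of a binarized image (an assumed reading; the abstract does not fix the encoding) — generating barcodes and comparing them to find the alignment offset might be sketched as:

```python
import numpy as np

def image_barcode(img, axis=0):
    """Collapse a binarized 2D image into a 1D 'barcode' by summing
    pixel intensities along one axis (a projection profile)."""
    binary = (img > img.mean()).astype(float)
    return binary.sum(axis=axis)

def best_offset(projection_code, drawing_code):
    """Slide one barcode over the other and return the lag that
    maximizes their cross-correlation."""
    corr = np.correlate(drawing_code, projection_code, mode="full")
    return int(np.argmax(corr)) - (len(projection_code) - 1)

# Toy example: the drawing image is the projection shifted right by 3 pixels.
proj = np.zeros((8, 20))
proj[:, 5:8] = 1.0
draw = np.roll(proj, 3, axis=1)
dx = best_offset(image_barcode(proj, axis=0), image_barcode(draw, axis=0))  # dx == 3
```

A real implementation would compute barcodes for both axes (and possibly for multiple rotations) to recover a full 2D alignment; the 1D profile comparison above only recovers a horizontal shift.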
Method and apparatus for filtering 360-degree video boundaries
A video system for encoding or decoding 360-degree virtual reality (360VR) video is provided. The system performs filtering operations to reduce coding artifacts and discontinuities in a projection image of an omnidirectional image. The video system identifies first and second edges of the projection image. The first and second edges are physically correlated as a common edge in the omnidirectional image but not physically connected in the projection image. The system computes a set of filtered pixels based on a first set of pixels near the first edge and a second set of pixels near the second edge.
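To illustrate the seam-filtering idea — assuming, for example, two cube-map faces whose adjoining edges form one physical edge in the omnidirectional image (the face layout and filter taps here are illustrative assumptions) — the filtered pixels could be computed as:

```python
import numpy as np

def filter_seam(face_a, face_b, taps=(0.25, 0.5, 0.25)):
    """Smooth across a seam: face_a's right edge and face_b's left edge
    are physically the same edge in the omnidirectional image but are
    not connected in the projection image. Returns filtered border
    columns for both faces."""
    # Stitch a 4-pixel-wide strip: two columns from each side of the seam.
    strip = np.hstack([face_a[:, -2:], face_b[:, :2]]).astype(float)
    # Apply a simple 3-tap horizontal low-pass filter across the strip,
    # so pixels on either side of the seam are blended together.
    w = np.array(taps)
    filtered = np.apply_along_axis(
        lambda r: np.convolve(r, w, mode="same"), 1, strip)
    return filtered[:, :2], filtered[:, 2:]
```

The point of the sketch is that the filter support spans the seam, so the first set of pixels (near the first edge) and the second set (near the second edge) jointly determine the filtered values, reducing the visible discontinuity.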
SYSTEMS AND METHODS FOR FUSING IMAGES
A method performed by an electronic device is described. The method includes obtaining a first image from a first camera, the first camera having a first focal length and a first field of view. The method also includes obtaining a second image from a second camera, the second camera having a second focal length and a second field of view disposed within the first field of view. The method further includes aligning at least a portion of the first image and at least a portion of the second image to produce aligned images. The method additionally includes fusing the aligned images based on a diffusion kernel to produce a fused image. The diffusion kernel indicates a threshold level over a gray level range. The method also includes outputting the fused image. The method may be performed for each of a plurality of frames of a video feed.
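One possible reading of the fusion step — treating the "diffusion kernel" as a smooth thresholding function over the gray-level range that yields per-pixel blend weights (an assumed interpretation; the abstract does not define the kernel's form) — could be sketched as:

```python
import numpy as np

def diffusion_weight(gray, threshold=128.0, softness=20.0):
    """Map gray levels to fusion weights via a smooth (sigmoid) step,
    acting as a threshold level over the gray-level range."""
    return 1.0 / (1.0 + np.exp(-(gray - threshold) / softness))

def fuse(wide_crop, tele, threshold=128.0):
    """Blend the aligned telephoto image into the matching crop of the
    wide-angle image, weighted by the kernel evaluated on the telephoto
    gray levels. Both inputs are assumed already aligned."""
    w = diffusion_weight(tele.astype(float), threshold)
    return w * tele + (1.0 - w) * wide_crop
```

Since the second field of view lies inside the first, the alignment step amounts to locating the telephoto image within the wide image before blending; the names `wide_crop` and `tele` above are hypothetical.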
Interactive electronically presented map
The present invention provides computerized systems and methods for providing an electronically presented interactive area representation, such as a map, and information associated therewith. A user can select text, imagery, or other information presented on the map and associated with one or more items or locations, causing presentation of information relating to the associated one or more items or locations, such as appropriate contact information or a hyperlink to an appropriate Web site. Additionally or alternatively, a user can input or select, based on a query or otherwise, information relating to one or more items or locations associated with text, imagery, or other information presented on the map, causing presentation of an indication of one or more locations of the associated text, imagery, or other information on the map. A magnifier feature allowing internal navigation within the map can be provided. Additionally, animated images can appear to move over the map.
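The bidirectional lookup this abstract describes — selection on the map yields associated information, and a query yields locations to indicate on the map — could be sketched minimally (the place names, positions, and URLs below are hypothetical):

```python
# Hypothetical item registry: each map item has a position and associated info.
places = {
    "cafe": {"pos": (3, 7), "info": "https://example.com/cafe"},
    "library": {"pos": (9, 2), "info": "https://example.com/library"},
}

def info_for_selection(name):
    """User selects text/imagery on the map -> present associated info,
    such as contact information or a hyperlink."""
    return places[name]["info"]

def locations_for_query(query):
    """User enters a query -> indicate the matching locations on the map."""
    return [p["pos"] for name, p in places.items() if query in name]
```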
Terminal and controlling method thereof
Disclosed are a terminal and an operating method thereof. The present invention includes obtaining an input for selecting at least one video, displaying a polyhedron presenting a preview image of the selected at least one video on each of a plurality of faces, obtaining an input for selecting at least one of the plurality of faces of the displayed polyhedron, and outputting a video corresponding to the selected face.
METHOD AND APPARATUS FOR MANAGING A WIDE VIEW CONTENT IN A VIRTUAL REALITY ENVIRONMENT
A method for managing wide view content in a virtual reality (VR) environment and an apparatus therefor are provided. The method includes receiving content covering 360 degrees or a viewing angle wider than the viewing angle of a user of the VR device, and displaying, on the VR device, a first view presenting at least one portion of the content covering the viewing angle of the user on a first view area of a display of the VR device, and a second view presenting the content covering 360 degrees or the wider viewing angle using convex projection on a second view area of the display.
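The relationship between the two views — the first view shows only the slice of the 360-degree content inside the user's viewing angle, while the second view presents the whole range — can be illustrated by computing that slice as a longitude interval (the function and parameter names are illustrative):

```python
def visible_range(yaw_deg, fov_deg=90):
    """Portion of 360-degree content shown in the first (main) view,
    as a [start, end) interval of longitudes that wraps at 360."""
    start = (yaw_deg - fov_deg / 2) % 360
    end = (yaw_deg + fov_deg / 2) % 360
    return start, end
```

The second view would render all 360 degrees (here via a convex projection, per the abstract) and could highlight the `visible_range` interval so the user sees where the first view sits within the full content.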
Upscaling Lower Resolution Image Data for Processing
In an example method and system, image data is provided to an image processing module. Image data is read from memory into a down-scaler, which down-scales the image data to a first resolution; the down-scaled data is stored in a first buffer. A region of image data that the image processing module will request is predicted, and image data corresponding to at least part of the predicted region is stored in a second buffer at a second resolution, higher than the first. When a request for image data is received, it is determined whether image data corresponding to the requested image data is in the second buffer; if so, the image data is provided to the image processing module from the second buffer. If not, image data from the first buffer is up-scaled, and the up-scaled image data is provided to the image processing module.
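The request-serving logic — prefer the prefetched high-resolution second buffer, fall back to up-scaling the low-resolution first buffer — might be sketched as follows (nearest-neighbour up-scaling and the region layout are illustrative assumptions):

```python
import numpy as np

def covers(region, request):
    """True if the buffered region fully contains the requested region.
    Regions are (y0, y1, x0, x1) in full-resolution coordinates."""
    ry0, ry1, rx0, rx1 = region
    y0, y1, x0, x1 = request
    return ry0 <= y0 and y1 <= ry1 and rx0 <= x0 and x1 <= rx1

def serve_request(request_region, second_buffer, first_buffer, scale):
    """Serve an image-data request: prefer the prefetched high-resolution
    second buffer; otherwise up-scale from the low-resolution first buffer."""
    y0, y1, x0, x1 = request_region
    region, data = second_buffer
    if region is not None and covers(region, request_region):
        ry0, _, rx0, _ = region
        return data[y0 - ry0:y1 - ry0, x0 - rx0:x1 - rx0]
    # Miss: fetch the covering low-resolution tile from the first buffer
    # and up-scale it by nearest-neighbour replication.
    low = first_buffer[y0 // scale:(y1 + scale - 1) // scale,
                       x0 // scale:(x1 + scale - 1) // scale]
    up = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)
    return up[y0 % scale:y0 % scale + (y1 - y0),
              x0 % scale:x0 % scale + (x1 - x0)]
```

On a miss, a real system would also trigger re-prediction so the next nearby request can be served from the second buffer at full quality.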
Construction machinery
A construction machine includes image processing means coupled in communication with a monitor. The image processing means is configured to receive images from a plurality of cameras; synthesize the images into a synthetic bird's eye view image; divide the synthetic bird's eye view image into four sections that are defined by a first section line and a second section line, the four sections including a first section, a second section, a third section, and a fourth section; generate a conversion image from the synthetic bird's eye view image by increasing a size of the first section and decreasing a size of each of the second section and the third section; and selectively display the conversion image on the monitor.
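The conversion step — moving the two section lines so the first section grows while the second and third shrink, keeping the overall image size — could be sketched on a 2D image as follows (nearest-neighbour resizing and the quadrant layout are illustrative assumptions):

```python
import numpy as np

def nn_resize(block, new_h, new_w):
    """Nearest-neighbour resize of a 2D block to (new_h, new_w)."""
    rows = np.arange(new_h) * block.shape[0] // new_h
    cols = np.arange(new_w) * block.shape[1] // new_w
    return block[np.ix_(rows, cols)]

def convert_image(bird, row_line, col_line, new_row, new_col):
    """Divide the synthetic bird's-eye view into four sections by a
    horizontal and a vertical section line, then move the lines to
    (new_row, new_col): with new_row > row_line and new_col > col_line,
    the first (top-left) section grows while the second and third
    shrink; the output keeps the original image size."""
    h, w = bird.shape[:2]
    s1 = nn_resize(bird[:row_line, :col_line], new_row, new_col)      # grows
    s2 = nn_resize(bird[:row_line, col_line:], new_row, w - new_col)  # shrinks
    s3 = nn_resize(bird[row_line:, :col_line], h - new_row, new_col)  # shrinks
    s4 = nn_resize(bird[row_line:, col_line:], h - new_row, w - new_col)
    return np.vstack([np.hstack([s1, s2]), np.hstack([s3, s4])])
```

Enlarging one section effectively magnifies the area of interest around the machine (for example, the working direction) on the monitor without changing the display resolution.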