Patent classifications
G06T2210/22
SYSTEM AND METHOD OF CONTROLLING CONSTRUCTION MACHINERY
A control system for construction machinery includes an upper camera installed in a driver cabin of a rear vehicle body to photograph the front of the driver cabin, a lower camera installed in a front vehicle body rotatably connected to the rear vehicle body to photograph the front of the front vehicle body, an angle information detection portion configured to detect information on a refraction angle of the front vehicle body with respect to the rear vehicle body, an image processing device configured to synthesize first and second images captured from the upper and lower cameras, and configured to determine a position of a transparency processing area in the synthesized image according to the refraction angle information and transparency-process at least one of the first and second images in the transparency processing area, and a display device configured to display the synthesized image transparency-processed by the image processing device.
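As a rough illustration of the idea in this abstract (not the patent's actual implementation), the transparency-processing area can be shifted horizontally as a function of the articulation (refraction) angle, and the lower-camera image alpha-blended into the upper-camera image inside that area. All function names, the maximum-angle parameter, and the offset mapping below are hypothetical.

```python
def blend_region_offset(refraction_angle_deg, image_width, max_angle_deg=40.0):
    """Map the articulation (refraction) angle to a horizontal pixel offset
    for the transparency-processing area (hypothetical linear mapping)."""
    frac = max(-1.0, min(1.0, refraction_angle_deg / max_angle_deg))
    return int(frac * image_width / 4)

def transparency_blend(upper_row, lower_row, region_start, region_end, alpha=0.5):
    """Alpha-blend the lower-camera pixels into the upper-camera pixels
    inside [region_start, region_end); outside it, keep the upper image."""
    out = list(upper_row)
    for x in range(max(0, region_start), min(len(out), region_end)):
        out[x] = int(alpha * upper_row[x] + (1 - alpha) * lower_row[x])
    return out
```

A straight vehicle (angle 0) leaves the blend area centered; steering shifts it, so the transparency-processed region stays in front of the articulated front body.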
Mobile terminal and control method thereof
Disclosed is a mobile terminal that provides an augmented reality navigation screen in a state of being held in a vehicle, the mobile terminal including: at least one camera configured to obtain a front image; a display; and at least one processor configured to calibrate the front image, and to drive an augmented reality navigation application so that the augmented reality navigation screen including at least one augmented reality (AR) graphic object and the calibrated front image is displayed on the display.
IMAGE PROCESSING APPARATUS AND METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An image processing apparatus includes a processor configured to: display an image on a display device; calculate, for each of small regions set in the image, a degree of importance based on characteristics of the image; and display a degree-of-importance map on the display device in such a manner that the degree-of-importance map is superimposed on a subject region of the image, the degree-of-importance map visually representing a relative relationship between the degrees of importance of the small regions.
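A minimal sketch of the per-region degree-of-importance map described above, using intensity variance as a stand-in for the unspecified "characteristics of the image" and normalizing so the map represents the relative relationship between regions. The tiling scheme and the variance criterion are assumptions, not the patent's method.

```python
def importance_map(gray, tile):
    """Split a grayscale image (2D list) into tile x tile small regions,
    score each by intensity variance, and normalize relative to the peak."""
    h, w = len(gray), len(gray[0])
    scores = []
    for ty in range(0, h, tile):
        row_scores = []
        for tx in range(0, w, tile):
            vals = [gray[y][x]
                    for y in range(ty, min(ty + tile, h))
                    for x in range(tx, min(tx + tile, w))]
            mean = sum(vals) / len(vals)
            row_scores.append(sum((v - mean) ** 2 for v in vals) / len(vals))
        scores.append(row_scores)
    peak = max(max(r) for r in scores) or 1.0  # avoid division by zero on flat images
    return [[s / peak for s in r] for r in scores]
```

The normalized map could then be rendered as a heat overlay superimposed on the subject region of the image.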
Scene crop via adaptive view-depth discontinuity
A method, apparatus, and system provide the ability to crop a three-dimensional (3D) scene. The 3D scene is acquired and includes multiple 3D images (with each image from a view angle of an image capture device) and a depth map for each image. The depth values in each depth map are sorted. Multiple initial cutoff depths are determined for the scene based on the view angles of the images (in the scene). A cutoff relaxation depth is determined based on a jump between depth values. A confidence map is generated for each depth map and indicates whether each depth value is above or below the cutoff relaxation depth. The confidence maps are aggregated into an aggregated model. A bounding volume is generated out of the aggregated model. Points are cropped from the scene based on the bounding volume.
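The pipeline above can be sketched in a few steps: relax an initial cutoff depth to the nearest large jump in the sorted depth values, build a per-view confidence map, and aggregate the maps by voting. This is an illustrative reconstruction under assumed parameter names, not the patented algorithm itself.

```python
def relaxed_cutoff(depths, initial_cutoff, min_jump):
    """Relax the initial cutoff to the first large jump between consecutive
    sorted depth values at or beyond the cutoff (a depth discontinuity)."""
    s = sorted(depths)
    for a, b in zip(s, s[1:]):
        if a >= initial_cutoff and b - a >= min_jump:
            return a
    return initial_cutoff

def confidence_map(depth_map, cutoff):
    """True where the depth is at or below the cutoff (likely subject,
    not distant background)."""
    return [[d <= cutoff for d in row] for row in depth_map]

def aggregate(maps, min_votes):
    """Keep a pixel only if enough views agree it lies inside the cutoff."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(m[y][x] for m in maps) >= min_votes for x in range(w)]
            for y in range(h)]
```

The aggregated mask would then seed a bounding volume, and scene points outside that volume are cropped.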
User Profile Picture Generation Method and Electronic Device
A user profile picture generation method and an electronic device are provided. In a process in which a user searches for a profile picture among a plurality of thumbnails displayed in a user interface, when the user selects a thumbnail (for example, by a tap operation on the thumbnail), the electronic device displays an original picture corresponding to the thumbnail and displays a crop box in the original picture. The electronic device may generate a profile picture of the user based on the crop box. The crop box includes a human face region in the original picture, and the composition manner of the human face region in the crop box is the same as the composition manner of the human face region in the original picture.
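One way to keep the composition manner identical, sketched below under assumed names: place the crop box so that the face center sits at the same relative position inside the crop as inside the original picture. The clamping behavior at image borders is an assumption.

```python
def crop_box_same_composition(img_w, img_h, face, crop_size):
    """Place a square crop box so the face center keeps the same relative
    position inside the crop as inside the original. face = (x, y, w, h)."""
    fx = face[0] + face[2] / 2  # face center
    fy = face[1] + face[3] / 2
    rel_x, rel_y = fx / img_w, fy / img_h  # composition of the original
    left = fx - rel_x * crop_size
    top = fy - rel_y * crop_size
    # clamp so the crop box stays inside the picture
    left = max(0, min(left, img_w - crop_size))
    top = max(0, min(top, img_h - crop_size))
    return int(left), int(top), crop_size, crop_size
```

A centered face stays centered in the crop; a face placed by the rule of thirds keeps that placement in the generated profile picture.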
Non-uniform stereo rendering
Examples of the disclosure describe systems and methods for recording augmented reality and mixed reality experiences. In an example method, an image of a real environment is received via a camera of a wearable head device. A pose of the wearable head device is estimated, and a first image of a virtual environment is generated based on the pose. A second image of the virtual environment is generated based on the pose, wherein the second image of the virtual environment comprises a larger field of view than a field of view of the first image of the virtual environment. A combined image is generated based on the second image of the virtual environment and the image of the real environment.
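A hedged sketch of why the wider field of view helps: when compositing the recorded real image with the virtual render, the extra margin lets the compositor crop a sub-region matching the camera's FOV, optionally shifted by a pose change between render and capture. The function and its linear angle-to-pixel mapping are illustrative assumptions.

```python
def crop_to_fov(wide, wide_fov_deg, target_fov_deg, yaw_offset_deg=0.0):
    """Crop a wider-FOV render (2D list) down to the target FOV, optionally
    shifted horizontally by a yaw offset (assumes a linear deg-per-pixel map)."""
    h, w = len(wide), len(wide[0])
    deg_per_px = wide_fov_deg / w
    target_w = int(target_fov_deg / deg_per_px)
    target_h = int(h * target_w / w)  # keep the aspect ratio
    cx = w / 2 + yaw_offset_deg / deg_per_px
    left = max(0, min(int(cx - target_w / 2), w - target_w))
    top = (h - target_h) // 2
    return [row[left:left + target_w] for row in wide[top:top + target_h]]
```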
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
An image processing method includes: performing target object detection on an initial image to obtain an object detection result, and performing image saliency detection on the initial image to obtain a saliency detection result; cropping the initial image based on the object detection result and the saliency detection result to obtain a corresponding cropped image; acquiring an image template for indicating an image style, and acquiring layer information corresponding to the image template; and adding the layer information to the cropped image based on the image template to obtain a target image corresponding to the image style indicated by the image template.
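A minimal sketch of combining the two detection results into a single crop, assuming each result is reduced to a bounding box; the union-plus-margin rule and all names are hypothetical, not the patent's cropping logic.

```python
def union_box(a, b):
    """Smallest box covering both the detected object and the salient region.
    Boxes are (left, top, right, bottom)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def crop_region(object_box, saliency_box, img_w, img_h, margin=0.1):
    """Merge the object-detection and saliency boxes, pad by a relative
    margin, and clamp to the image bounds."""
    l, t, r, b = union_box(object_box, saliency_box)
    mx, my = (r - l) * margin, (b - t) * margin
    return (max(0, int(l - mx)), max(0, int(t - my)),
            min(img_w, int(r + mx)), min(img_h, int(b + my)))
```

The resulting cropped image would then receive the template's layer information (text, borders, badges) to produce the styled target image.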
Systems and methods for automatic formatting of images for media assets based on user profile
Systems and methods are provided herein for personalizing images that correspond to a media asset identifier by using user profile information. As an example, the television series “Community” has several actors, such as Joel McHale, Chevy Chase, and Ken Jeong. Poster art developed by an editor of “Community” may include an image that portrays each of Joel McHale, Chevy Chase, and Ken Jeong. In order to personalize the image, control circuitry may determine which actor(s) the user prefers, and crop out only those actors in the poster art to create a personalized image. As an example, if the user prefers Joel McHale, control circuitry may crop out the portrayal of Joel McHale and use only that portion of the image to display next to other text describing “Community.”
INTELLIGENT ZOOMING METHOD AND ELECTRONIC DEVICE USING THE SAME
An intelligent zooming method and an electronic device using the same are provided. The intelligent zooming method includes the following steps. A text paragraph corresponding to detected text is merged, automatically arranged according to the text paragraph and a text magnification box, and enlarged in the text magnification box according to a text magnification ratio. A block group containing a block and other blocks connected thereto is merged; a block magnification ratio is adjusted according to the block group and a block magnification box, and the block group in the block magnification box is enlarged according to the block magnification ratio. A picture is cropped to obtain an object; a picture magnification ratio is adjusted according to the object and a picture magnification box, and the object in the picture magnification box is enlarged according to the picture magnification ratio.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus includes a control unit configured to generate display control information used as information regarding display control of a display image corresponding to scene information indicating a scene of a seminar.