Patent classifications
G06T3/005
Replacing imagery of garments in an existing apparel collection with laser-finished garments
A system allows a user to create new designs for apparel and preview these designs before manufacture. Software and lasers are used in finishing apparel to produce a desired wear pattern or other design. The system swaps garments in a digital asset for garments that are designed using the system. The wear pattern is created by a laser using a laser input file. Generating the preview image comprises combining first and second contributions to obtain a combined value for a pixel at a given pixel location of the preview image.
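The per-pixel combination step can be illustrated with a minimal sketch. The abstract does not say what the two contributions are, so this example assumes, purely for illustration, that they are a base-garment image and a laser wear-pattern image blended with a hypothetical weight `alpha`:

```python
import numpy as np

def combine_contributions(base, wear, alpha):
    """Combine a first (base-garment) contribution and a second
    (wear-pattern) contribution into one preview value per pixel.
    The weighted blend is an assumption, not the patent's formula."""
    base = base.astype(np.float64)
    wear = wear.astype(np.float64)
    combined = (1.0 - alpha) * base + alpha * wear
    return np.clip(combined, 0, 255).astype(np.uint8)

# 2x2 grayscale example: dark base denim vs. laser-lightened wear pattern
base = np.array([[40, 40], [40, 40]], dtype=np.uint8)
wear = np.array([[200, 40], [40, 200]], dtype=np.uint8)
preview = combine_contributions(base, wear, alpha=0.5)  # 120 where lightened
```

Each preview pixel is computed independently, so the same function applies unchanged to full-resolution garment images.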
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
An information processing apparatus is configured to: paste a full-spherical panoramic image along an inner wall of a virtual three-dimensional sphere; calculate an arrangement position for arranging a planar image closer to a center point of the virtual three-dimensional sphere than the inner wall, in such an orientation that a line-of-sight direction from the center point to the inner wall and a perpendicular line of the planar image are parallel to each other, the planar image being obtained by pasting an embedding image to be embedded in the full-spherical panoramic image onto a two-dimensional plane; and display a display image on a display unit. The display image is a two-dimensional image viewed from the center point in the line-of-sight direction in a state in which the full-spherical panoramic image is pasted along the inner wall of the virtual three-dimensional sphere and the planar image is arranged at the arrangement position.
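A minimal sketch of the arrangement-position calculation, under an assumed parameterization (not from the patent) in which the planar image is placed on the line-of-sight ray at a fraction `depth_ratio` of the sphere's radius, with its normal parallel to the viewing direction:

```python
import numpy as np

def arrangement_position(view_dir, sphere_radius, depth_ratio=0.5):
    """Place the planar image strictly inside the sphere, on the
    line-of-sight ray from the center, facing the viewer.
    `depth_ratio` < 1 keeps it closer to the center than the inner wall."""
    d = np.asarray(view_dir, dtype=np.float64)
    d = d / np.linalg.norm(d)                      # unit line-of-sight direction
    position = d * (sphere_radius * depth_ratio)   # point between center and wall
    normal = d                                     # plane perpendicular to the ray
    return position, normal

pos, n = arrangement_position([0.0, 0.0, 1.0], sphere_radius=10.0)
# the plane sits 5 units from the center, 5 units inside the wall
```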
Three-dimensional preview of laser-finished apparel
A system allows a user to create new designs for apparel and preview these designs before manufacture. Software and lasers are used in finishing apparel to produce a desired wear pattern or other design. The user's preview may be based upon a two-dimensional image of a wear pattern in a laser input file; from this image and a set of two-dimensional images of a base garment, the system creates a three-dimensional view of the base garment with the wear pattern.
INFORMATION PROCESSING DEVICE, METHOD, AND PROGRAM
An information processing device according to an embodiment includes a control unit that performs: control of switching a display coordinate system of display content from a first display coordinate system to a second display coordinate system, the display content displayed on a surface of a real object by a display unit, depending on a state of an input operation of changing a position or an angle of the display content; and control of changing a display position or a display angle of the display content in accordance with the second display coordinate system and causing the display unit to display input assist display corresponding to axes of the second display coordinate system in a case where the display coordinate system is switched from the first display coordinate system to the second display coordinate system.
Technique to Change Garments Within an Existing Image
A system automatically generates apparel collection imagery from user-provided imagery. The user-provided imagery includes images of people wearing one or more garments. The system uses segmenting analysis to analyze the user-provided imagery and identify locations of the garments. From the locations of the garments, the system can determine which garments from an apparel collection can be used to replace those in the user-provided imagery. The system uses pose estimation on the user-provided imagery and modifies a preview image of a replacement garment from the collection. This modified replacement garment image is used to replace the garment in the user-provided imagery.
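The final compositing step can be sketched as follows, assuming segmentation and pose estimation have already produced a garment mask and a pose-adjusted replacement image (the function name and toy data are illustrative, not from the patent):

```python
import numpy as np

def replace_garment(user_img, replacement_img, garment_mask):
    """Paste the (pose-adjusted) replacement garment into the user image
    at the pixels that segmentation marked as garment.
    Only the compositing step is shown; segmentation and pose
    estimation are assumed to have run already."""
    out = user_img.copy()
    out[garment_mask] = replacement_img[garment_mask]
    return out

# Toy 4x4 grayscale example: rows 1-2 were segmented as the garment
user = np.full((4, 4), 50, dtype=np.uint8)
replacement = np.full((4, 4), 220, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, :] = True
result = replace_garment(user, replacement, mask)  # garment rows become 220
```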
Systems and methods for generating panorama image
The present disclosure relates to image processing systems and methods. The method may include obtaining a first image and a second image. The first image may be captured by a first camera lens of a panorama device and the second image may be captured by a second camera lens of the panorama device. The method may also include performing an interpolation based on a center of the first image to obtain a first rectangular image, and performing an interpolation based on a center of the second image to obtain a second rectangular image. The method may further include generating a fused image based on the first rectangular image and the second rectangular image, and mapping the fused image to a spherical panorama image.
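The two-step pipeline (interpolate each fisheye image around its center into a rectangular image, then fuse) can be sketched as below. The abstract does not specify the interpolation or fusion methods, so this sketch substitutes nearest-neighbor polar sampling and simple averaging as stand-ins:

```python
import numpy as np

def fisheye_to_rect(img, out_h, out_w):
    """Unwrap a circular fisheye image around its center into a
    rectangular image by nearest-neighbor polar sampling (a simplified
    stand-in for the center-based interpolation in the abstract)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    rect = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    for v in range(out_h):
        r = max_r * (v + 0.5) / out_h          # radius from the image center
        for u in range(out_w):
            theta = 2 * np.pi * u / out_w       # azimuth angle around the center
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            rect[v, u] = img[y, x]
    return rect

def fuse(rect_a, rect_b):
    """Fuse the two rectangular images by simple averaging (illustrative)."""
    return ((rect_a.astype(np.uint16) + rect_b.astype(np.uint16)) // 2).astype(np.uint8)

a = np.full((64, 64), 100, dtype=np.uint8)  # fisheye frame from the first lens
b = np.full((64, 64), 200, dtype=np.uint8)  # fisheye frame from the second lens
pano_strip = fuse(fisheye_to_rect(a, 32, 128), fisheye_to_rect(b, 32, 128))
```

The final mapping of the fused image onto a spherical panorama is the inverse of this unwrapping and is omitted for brevity.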
Method and apparatus for providing virtual clothing wearing service based on deep-learning
A method and apparatus provide a virtual clothing wearing service based on deep-learning. A virtual clothing wearing server based on deep-learning includes a communicator configured to receive a user image and a virtual wearing clothing image; a memory configured to store a program including first and second deep-learning models; and a processor configured to generate a virtual wearing person image of virtually dressing the virtual wearing clothing on a user. The program is configured to: generate, by the first deep-learning model, a transformed virtual wearing clothing image by transforming the virtual wearing clothing image in accordance with a body of the user in the user image, based on the user image and the virtual wearing clothing image; and generate, by the second deep-learning model, the virtual wearing person image by dressing the transformed virtual wearing clothing on the body of the user, based on the user image and the transformed virtual wearing clothing image.
Interactive projection system and interactive display method of projection system
The disclosure provides an interactive projection system and an interactive display method of the projection system. The interactive projection system includes a biologically-modeled housing, a first projection device disposed inside the biologically-modeled housing, an interactive sensing device, and a controller. The biologically-modeled housing includes a light-transmissive curved projection area. The first projection device projects a first physiological image onto the curved projection area. The interactive sensing device obtains an input signal. The controller is electrically connected to the first projection device and the interactive sensing device to identify the input signal and obtain user instructions. The controller controls the first projection device to project a second physiological image onto the curved projection area according to the user instructions. The first physiological image and the second physiological image display physiological features corresponding to the position of the curved projection area on the biologically-modeled housing.
DISTORTION CORRECTION FOR NON-FLAT DISPLAY SURFACE
One embodiment provides a method, including: identifying, using data obtained from at least one sensor associated with an information handling device, a multi-planar orientation of a non-flat display surface of the information handling device and a spatial orientation of the information handling device with respect to a user's gaze position; determining, using a processor and based on the identifying, a distortion of at least one object displayed on the non-flat display surface; and adjusting at least one aspect of the non-flat display surface to correct the distortion. Other aspects are described and claimed.
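The distortion adjustment can be illustrated with a simplified one-axis model: a display facet tilted away from the user's gaze appears foreshortened by the cosine of the tilt angle, so content can be pre-stretched by the inverse. The function and its parameters are assumptions for illustration, not the patent's method:

```python
import math

def compensation_scale(panel_tilt_deg, gaze_tilt_deg):
    """Stretch factor that undoes foreshortening when a display facet is
    tilted relative to the user's gaze (simplified 1-D pinhole model).
    At 0 degrees of relative tilt no correction is needed; at 60 degrees
    the apparent width is halved, so content is stretched 2x."""
    angle = math.radians(abs(panel_tilt_deg - gaze_tilt_deg))
    return 1.0 / math.cos(angle)

flat = compensation_scale(0, 0)     # facet faces the viewer: scale 1.0
tilted = compensation_scale(60, 0)  # 60-degree relative tilt: scale 2.0
```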
Methods and systems for processing images to perform automatic alignment of electronic images
Systems and methods are disclosed for aligning a two-dimensional (2D) design image to a 2D projection image of a three-dimensional (3D) design model. One method comprises receiving a 2D design document, the 2D design document comprising a 2D design image, and receiving a 3D design file comprising a 3D design model, the 3D design model comprising one or more design elements. The method further comprises generating a 2D projection image based on the 3D design model, the 2D projection image comprising a representation of at least a portion of the one or more design elements, generating a projection barcode based on the 2D projection image, and generating a drawing barcode based on the 2D design image. The method further comprises aligning the 2D projection image and the 2D design image by comparing the projection barcode and the drawing barcode.
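The barcode-and-compare idea can be sketched as follows. The patent does not define how the barcodes are constructed, so this sketch uses a thresholded column-density profile as the signature and circular-shift matching as the comparison, both as illustrative choices:

```python
import numpy as np

def image_barcode(img, axis=0):
    """Reduce an image to a 1-D 'barcode': per-column ink density,
    binarized against its mean (one illustrative signature choice)."""
    profile = img.astype(np.float64).sum(axis=axis)
    return (profile > profile.mean()).astype(np.uint8)

def best_shift(code_a, code_b):
    """Alignment offset of code_b against code_a, scored by agreement
    at every circular shift (illustrative comparison method)."""
    scores = [np.sum(code_a == np.roll(code_b, s)) for s in range(len(code_a))]
    return int(np.argmax(scores))

# Synthetic example: the 2D design image is the projection shifted right by 3
proj = np.zeros((10, 20), dtype=np.uint8)
proj[:, 5:8] = 255                      # a vertical design element
drawing = np.roll(proj, 3, axis=1)      # same element, misaligned
shift = best_shift(image_barcode(drawing), image_barcode(proj))  # recovers 3
```

Collapsing each image to a 1-D signature before comparing makes the alignment search linear in image width rather than a full 2-D registration.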