Patent classifications
G06T11/80
Automatic Crop and Fill Boundary Generation
A system and method for extending bounds of straightened and enlarged images is described. A user interface of an image editing application exposes an image to a user for editing. The user positions the image in the user interface and the image editing application generates a frame in the user interface to identify boundaries of a final image in the user interface. The image editing application then automatically determines an empty area within the frame, applies a fill operation to the empty area within the frame, and updates the image in the user interface to reflect results of the fill operation.
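The fill step described in this abstract can be sketched as a simple nearest-value fill, assuming the empty area inside the frame is given as a boolean mask (the function name and mask convention are illustrative, not from the patent; production editors would use content-aware fill or inpainting here):

```python
import numpy as np

def fill_empty_area(image, empty_mask):
    """Fill pixels flagged empty by repeatedly copying values from
    already-filled 4-neighbours (a minimal nearest-value fill sketch)."""
    img = image.astype(float).copy()
    filled = ~empty_mask
    while not filled.all():
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            # np.roll wraps at the borders; acceptable for a sketch,
            # since the filled interior dominates the frame
            neighbour_filled = np.roll(filled, (dy, dx), axis=(0, 1))
            neighbour_vals = np.roll(img, (dy, dx), axis=(0, 1))
            take = ~filled & neighbour_filled
            img[take] = neighbour_vals[take]
            filled = filled | take
    return img
```

Applied to a straightened image, the mask would mark the corner wedges left empty by the rotation, and each wedge pixel inherits the nearest image content.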
DEVICE AND METHOD FOR TRANSFORMING A FACIAL IMAGE INTO A SET OF RECOGNIZABLE EMOTICONS
A set of recognizable custom emoticons representing a subject's face is created and can be used in electronic communications as conventional emoticons are used. A set of unique emoticons is derived from a single digital true image of the subject's face such that others familiar with the subject are likely to recognize each emoticon in the set as representing the subject. The subject's true facial image is modified to reflect the cartoonish style of emoticons. Optionally, some facial features are replaced to create a set of emoticons, each having a distinct facial expression, for example, sad, happy, surprised, or frightened. Optionally, other features, for example, glasses, hats, or facial hair, can be added to enhance the emoticons. The resultant set of emoticons is then made available to be sent and received on all communication mediums where emoticons are currently used.
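The "cartoonish style" modification step can be illustrated with intensity posterization, assuming an 8-bit grayscale input (the helper name and the choice of four levels are illustrative; the patent does not specify the stylization algorithm):

```python
import numpy as np

def cartoonify(image, levels=4):
    """Quantize 0-255 intensities into a few flat bands, giving the
    flat-colour look typical of emoticons. `levels` controls how
    cartoonish the result is; each band maps to its midpoint value."""
    step = 256 // levels
    return (image // step) * step + step // 2
```

The feature-replacement step (swapping mouths or eyes to produce sad/happy/surprised variants) would then operate on regions of this stylized image.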
MOBILE TERMINAL DEVICE, METHOD, AND STORAGE MEDIUM FOR DISPLAYING CAPTURED IMAGES
A positioning unit identifies first positional information of a mobile terminal device. A communication unit acquires second positional information identified by another mobile terminal device. An imaging unit captures an image of the surrounding environment. An acquisition unit acquires information related to a status dependent on the environment surrounding the other mobile terminal device. A display unit displays, in the captured image, an icon indicating the other mobile terminal device, and displays the acquired information in association with that icon.
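Placing the other-device icon inside the captured image comes down to mapping the bearing between the two positions into a screen column. A minimal sketch, assuming flat local coordinates (x east, y north), a known camera heading, and a linear bearing-to-pixel mapping (all names and the mapping itself are assumptions, not from the patent):

```python
import math

def icon_screen_x(own_xy, other_xy, heading_deg, fov_deg, width_px):
    """Horizontal pixel at which to draw the other device's icon,
    or None when the other device lies outside the camera's field of view."""
    dx = other_xy[0] - own_xy[0]
    dy = other_xy[1] - own_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))          # 0 deg = due north
    rel = (bearing - heading_deg + 180) % 360 - 180     # relative to camera axis
    if abs(rel) > fov_deg / 2:
        return None                                     # outside the frame
    return width_px / 2 + rel / (fov_deg / 2) * (width_px / 2)
```

The status information acquired from the other device would then be rendered as a label anchored at the returned column.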
Image processing apparatus, image processing method, and computer-readable recording medium
An image processing apparatus includes an exaggeration unit configured to perform, on an original image including a hand-drawn element, an exaggeration process that expands the hand-drawn element to generate an exaggerated image; and a reduction unit configured to reduce the exaggerated image to generate a reduced image of a predetermined size smaller than the size of the original image.
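The exaggerate-then-reduce pipeline can be sketched with binary morphological dilation followed by block-max downsampling, assuming the hand-drawn element is a boolean stroke mask (the specific 3×3 kernel and 2×2 downsample are illustrative choices):

```python
import numpy as np

def dilate(strokes):
    """3x3 binary dilation: thin hand-drawn lines grow one pixel
    in every direction (the 'exaggeration' step)."""
    p = np.pad(strokes, 1)
    out = np.zeros_like(strokes)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + strokes.shape[0], dx:dx + strokes.shape[1]]
    return out

def reduce_by_half(img):
    """2x2 block-max downsample: the 'reduction' step. Taking the max
    keeps the thickened strokes visible at the smaller size."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Without the dilation step, a one-pixel stroke could vanish entirely under downsampling, which is exactly the problem the expansion addresses.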
System and method for face capture and matching
According to an example, a face capture and matching system may include a memory storing machine readable instructions to receive captured images of an area monitored by an image capture device, and detect one or more faces in the captured images. The memory may further store machine readable instructions to track movement of the one or more detected faces in the area monitored by the image capture device, and based on the one or more tracked detected faces, select one or more images from the captured images to be used for identifying the one or more tracked detected faces. The memory may further store machine readable instructions to select one or more fusion techniques to identify the one or more tracked detected faces using the one or more selected images. The face capture and matching system may further include a processor to implement the machine readable instructions.
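The image-selection and fusion steps can be sketched as follows, assuming each tracked capture carries a quality score and each selected image yields a match score against a gallery identity (the `quality` key, the scoring model, and the two fusion techniques shown are assumptions for illustration):

```python
def select_frames(track, k=3):
    """Keep the k highest-quality captures of one tracked face;
    each capture is a dict with an assumed 'quality' key."""
    return sorted(track, key=lambda f: f["quality"], reverse=True)[:k]

def fuse(scores, technique="mean"):
    """Combine the selected images' match scores against one identity.
    'max' trusts the single best view; 'mean' averages across views."""
    return max(scores) if technique == "max" else sum(scores) / len(scores)
```

Tracking a face across frames is what makes the selection step possible: without track identity, there is no per-face pool of captures to choose the best images from.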
Creative GAN generating music deviating from style norms
A method and system for generating music uses artificial intelligence to analyze existing musical compositions and then creates a musical composition that deviates from the learned styles. Known musical compositions created by humans are presented in digitized form along with a style designator to a computer for analysis, including recognition of musical elements and association of particular styles. A music generator generates a draft musical composition for similar analysis by the computer. The computer ranks the draft musical composition for correlation with known musical elements and known styles. The music generator modifies the draft musical composition using an iterative process until the resulting musical composition is recognizable as music but is distinctive in style.
Methods and apparatus for providing a digital illustration system
A non-transitory processor-readable medium storing code representing instructions to be executed by a processor to receive a set of data elements associated with a user-defined content having a content type. The processor interpolates the set of data elements to produce a first set of content data based on a filter domain associated with the user-defined content. The processor further refines the first set of content data based, at least in part, on the content type to produce a second set of content data. The processor also sends a signal representing the second set of content data such that the user-defined content is displayed based on the second set of content data.
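For a stroke-type content, the interpolate-then-refine pipeline can be sketched as linear resampling of the sparse input points followed by a smoothing pass (the helper names, the linear interpolation, and the 3-point moving average are illustrative; the actual filter domain would vary with the content type):

```python
def densify(points, steps=4):
    """Interpolation step: linearly resample a sparse stroke so each
    input segment becomes `steps` evenly spaced sub-segments."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        for i in range(steps):
            t = i / steps
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    out.append(points[-1])
    return out

def smooth(points):
    """Refinement step: 3-point moving average with endpoints fixed,
    softening corners in the densified stroke before display."""
    if len(points) < 3:
        return points
    mids = [((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
            for a, b, c in zip(points, points[1:], points[2:])]
    return [points[0]] + mids + [points[-1]]
```

The second set of content data in the abstract would correspond to the smoothed, densified point list that the display signal is built from.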