Patent classifications
G06V40/167
Miniature autonomous robotic blimp
A blimp includes a circular disk-shaped envelope filled with a lighter-than-air gas. A gondola is affixed to an underside of the envelope and is disposed directly below the center point of the circle defined by the intersection of the envelope and the horizontal plane. The gondola includes: a horizontally-disposed elongated circuit board that functions as a structural member of the gondola; and a vertical member extending upwardly from the circuit board and having a top that is attached to the underside of the envelope. A thrusting mechanism is affixed to the gondola and is configured to generate thrust. An electronics suite is disposed on and electrically coupled to the circuit board and includes a blimp processor configured to generate control signals that control the thrusting mechanism. A battery is affixed to the gondola and provides power to the electronics suite and the thrusting mechanism.
Face quality of captured images
The disclosure pertains to techniques for image processing. One such technique comprises a method for image selection, comprising: obtaining a sequence of images; detecting a first face in one or more images of the sequence of images; determining a first location for the detected first face in each of the images having the detected first face; generating a heat map based on the first location of the detected first face in each of the images of the sequence of images; determining a face quality score for the detected first face for each of the one or more images having the detected first face; determining a peak face quality score for the detected first face based in part on the face quality scores and the generated heat map; and selecting a first image of the sequence of images corresponding to the peak face quality score for the detected first face.
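The selection pipeline described above can be sketched in a few lines. This is a minimal illustration, not the patented method: the abstract does not specify how the heat map is built or how it combines with the quality scores, so this sketch assumes Gaussian accumulation of `(x, y)` face centers and a simple heat-weighted score.

```python
import numpy as np

def select_best_face_image(face_locations, quality_scores, frame_shape, sigma=20.0):
    """Pick the frame whose detected face has the peak quality score,
    weighted by a heat map of face locations across the sequence.

    face_locations: list of (x, y) face centers, one per frame with a detection
    quality_scores: list of per-frame face quality scores, same length
    frame_shape:    (height, width) of the frames
    """
    h, w = frame_shape
    # Build a heat map by accumulating a Gaussian bump at each face location.
    heat = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    for (x, y) in face_locations:
        heat += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    heat /= heat.max()

    # Weight each frame's quality score by the heat value at its face location,
    # favouring frames where the face sits in its "usual" position.
    weighted = [q * heat[int(y), int(x)]
                for (x, y), q in zip(face_locations, quality_scores)]
    return int(np.argmax(weighted))
```

Here a frame with a slightly lower raw quality score can still win if its face lies where the face was seen most often, which is the intuition behind combining the heat map with the scores.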
COUNTERFEIT IMAGE DETECTION
A computer includes a processor and a memory, the memory including instructions to be executed by the processor to acquire a first image with a visible and near-infrared (NIR) light camera and acquire a second image with an infrared camera. The instructions can include further instructions to determine whether the second image includes a live human face by comparing a first infrared profile included in the second image with a second infrared profile included in a third image previously acquired with the infrared camera; and, when the second image includes the live human face, output the first image.
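The liveness gate above can be sketched as follows. The abstract does not say how the two infrared profiles are compared, so this sketch assumes a normalized cross-correlation between the captured IR image and a stored live-face reference; the function names are illustrative only.

```python
import numpy as np

def is_live_face(ir_image, reference_profile, threshold=0.9):
    """Compare a captured IR image against a reference live-face IR profile
    using normalized cross-correlation (an assumed, illustrative metric)."""
    a = ir_image.astype(float).ravel()
    b = reference_profile.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.dot(a, b)) / a.size >= threshold

def counterfeit_filter(visible_image, ir_image, reference_profile):
    """Output the visible-light image only when the IR image matches a live face."""
    return visible_image if is_live_face(ir_image, reference_profile) else None
```

A printed photograph held in front of the camera has an essentially flat infrared signature, so its profile fails to correlate with the warm, structured profile of a live face and the visible-light image is withheld.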
Method for simulating the realistic rendering of a makeup product
Disclosed is a method for simulating the rendering of a make-up product on the face of a subject, using a database of reference images including, for each reference individual, images of the face with and without the makeup product. The method includes: acquiring an image of the subject face without makeup; processing the image to extract, for each spatial area of each spatial frequency range of the image, first color feature values of the spatial area; determining, among the database of reference images, reference individuals having, when wearing no makeup, color feature values similar to the first color feature values of the subject; determining, from the first color feature values of the subject, and from color feature values of the reference individuals with and without the makeup product, second color feature values; and generating a modified image of the subject face based on the second color feature values.
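The feature-transfer step at the heart of this method can be sketched on a single feature vector. This is a simplified stand-in: the patent operates per spatial area and per spatial frequency range, while the sketch below assumes one flat color-feature vector per face and a k-nearest-neighbour match (`k` and the Euclidean metric are assumptions).

```python
import numpy as np

def simulate_makeup_features(subject, db_no_makeup, db_with_makeup, k=3):
    """Estimate the subject's with-makeup color features from the k reference
    individuals whose bare-faced features are closest to the subject's.

    subject:                     (d,) color-feature vector, no makeup
    db_no_makeup, db_with_makeup: (n, d) per-reference feature vectors
    """
    # Find the k reference individuals most similar to the subject when bare-faced.
    dists = np.linalg.norm(db_no_makeup - subject, axis=1)
    nearest = np.argsort(dists)[:k]
    # Apply those references' average makeup-induced feature shift.
    shift = (db_with_makeup[nearest] - db_no_makeup[nearest]).mean(axis=0)
    return subject + shift
```

The returned "second color feature values" would then drive the rendering of the modified image, per spatial area and frequency band in the full method.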
Electronic device and method for processing image having human object and providing indicator indicating a ratio for the human object
An electronic device according to various embodiments of the present invention comprises: a camera module including a lens positioned on one side of the electronic device; a display positioned on the other side of the electronic device; and a processor electrically coupled to the camera module and the display, wherein the processor is configured to: acquire a preview image through the camera module; determine whether a human object corresponding to a person is included in the preview image; determine a photographing mode of the electronic device based at least on the determination and on a type and change of at least one object included in the preview image; and display, through the display, at least one indicator associated with the determined photographing mode. Various other embodiments are possible.
ELECTRONIC APPARATUS AND CONTROL METHOD
An electronic apparatus includes a memory which temporarily stores image data of an image captured by an imaging device, and a processor which processes the image data stored in the memory. The processor processes image data of plural images captured by the imaging device at predetermined time intervals and stored in the memory, detects face areas with faces captured therein from among the plural images based on first-resolution image data and second-resolution image data, and determines whether or not the face areas are consecutively detected from the plural images. Further, when determining that the state has changed between a state where face areas are consecutively detected and a state where face areas are not consecutively detected while performing processing to detect the face areas based on the first-resolution image data, the processor detects face areas from among the plural images based on the second-resolution image data.
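The control flow implied by this abstract can be sketched as a per-frame loop. One reading, assumed here, is that detection normally runs on the cheaper first-resolution data and falls back to the second (presumably higher) resolution for a frame whenever the consecutive-detection state flips; the detector callables are placeholders.

```python
def detect_with_resolution_switch(frames, detect_lowres, detect_highres):
    """Run low-resolution face detection per frame; on a change in the
    detected/not-detected state, re-check that frame at the second resolution."""
    results = []
    prev_detected = None
    for frame in frames:
        faces = detect_lowres(frame)
        detected = bool(faces)
        if prev_detected is not None and detected != prev_detected:
            # Consecutive-detection state changed: verify at the second resolution.
            faces = detect_highres(frame)
            detected = bool(faces)
        results.append(faces)
        prev_detected = detected
    return results
```

This keeps the expensive second-resolution pass off the steady-state path and spends it only on the ambiguous frames where a face appears or disappears.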
Mediating apparatus and method, and computer-readable recording medium thereof
Provided are a mediating apparatus and a mediating method, and a computer-readable recording medium thereof. The mediating method includes: receiving a plurality of images from a first user; generating at least one new image by referring to the plurality of received images; extracting a feature of a face included in the at least one generated new image; searching for a second user corresponding to the feature that has been extracted; and providing the first user with information about the second user.
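The mediating flow can be sketched end to end with toy stand-ins. The abstract does not say how the new image is generated or what the face feature is; the sketch below assumes a pixel average as the "generated" image and an intensity histogram as the feature, both purely illustrative.

```python
import numpy as np

def find_match(first_user_images, second_user_db, bins=8):
    """Generate a composite from the first user's images, extract a feature,
    and return the second user whose stored feature is nearest.

    first_user_images: list of same-shape grayscale arrays
    second_user_db:    dict user_id -> stored feature vector
    """
    # Stand-in "generated new image": the pixel-wise mean of the inputs.
    composite = np.mean(np.stack(first_user_images), axis=0)
    # Stand-in face feature: a normalized intensity histogram.
    feature = np.histogram(composite, bins=bins, range=(0, 255), density=True)[0]
    # Search the second-user database for the nearest stored feature.
    return min(second_user_db,
               key=lambda uid: np.linalg.norm(second_user_db[uid] - feature))
```

In the actual method both the generation and the feature extraction would be learned models; the structure of the search, however, is as above.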
BIOMETRIC SYSTEM
A biometric authentication system comprising headwear comprising a plurality of biosensors each configured to sample muscle activity so as to obtain a respective time-varying signal; a data store for storing a data set representing characteristic muscle activity for one or more users; and a processor configured to process the time-varying signals from the biosensors in dependence on the stored data set so as to determine a correspondence between a time-varying signal and characteristic muscle activity of one of the one or more users, and in dependence on the determined correspondence, authenticate the time-varying signals as being associated with that user.
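The matching step of this system can be sketched as follows. The abstract leaves the signal-matching method open, so this sketch assumes per-channel normalized correlation against each user's stored template and a fixed acceptance threshold; both are illustrative choices.

```python
import numpy as np

def authenticate(signals, templates, threshold=0.85):
    """Match multi-channel muscle-activity signals against per-user templates.

    signals:   (channels, samples) array from the headwear biosensors
    templates: dict user_id -> (channels, samples) characteristic activity
    Returns the best-matching user id, or None if no user passes the threshold.
    """
    def corr(a, b):
        # Normalized correlation of two time-varying signals.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    best_user, best_score = None, threshold
    for user, tpl in templates.items():
        score = np.mean([corr(s, t) for s, t in zip(signals, tpl)])
        if score >= best_score:
            best_user, best_score = user, score
    return best_user
```

Authentication succeeds only when the live signals correspond to some stored characteristic activity; otherwise the signals are rejected as not associated with any enrolled user.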
Feature based abstraction and meshing
Methods for CAD operations and corresponding systems (2800) and computer-readable mediums (2826) are disclosed herein. A method includes receiving (502) a model (600) of a part to be manufactured, wherein the model includes a plurality of original faces (102, 104, 106, 112, 114). The method includes classifying (510) each face in the model according to a relative face curvature, according to classifications that include at least a high-curvature classification (702). The method includes classifying (514) any sliver faces (102, 104, 106, 112, 114) and narrow blend faces (402, 404, 406, 408) of the plurality of faces. The method includes merging (516) contiguous faces (702) in each classification. The method includes detecting (518) special faces (1002, 1012) of the plurality of faces. The method includes restoring (520) original faces in the high-curvature classification except for the special faces (1002, 1012). The method includes processing (522) shared edges of the restored original faces to produce merged faces (802). The method includes merging together (524) any merged faces that produce a locally narrow face (302) or an isthmus (202). The method includes storing (526) a modified model of the part to be manufactured.
Object detection device, object detection method, and program
An object detection device for detecting a target object from an image, includes: a first detection unit configured to detect a plurality of candidate regions in which the target object exists from the image; a region integration unit configured to determine one or more integrated regions based on the plurality of candidate regions detected by the first detection unit; a selection unit configured to select at least a part of the integrated regions; and a second detection unit configured to detect the target object from the selected integrated region using a detection algorithm different from a detection algorithm used by the first detection unit.