Patent classifications
G06T2207/30196
APPARATUS AND METHOD FOR CLASSIFYING CLOTHING ATTRIBUTES BASED ON DEEP LEARNING
Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
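The final-classification step described above can be sketched as follows. The abstract does not specify how the two branch outputs are combined, so averaging the per-branch probabilities, as well as the function names, is an illustrative assumption:

```python
import math

def softmax(logits):
    """Convert raw attribute logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_classifications(first_logits, second_logits):
    """Combine the first (full-image) and second (mask-based) branch
    outputs into a final attribute decision.

    Averaging the per-branch probabilities is one simple fusion rule;
    the abstract leaves the exact combination unspecified.
    """
    p1 = softmax(first_logits)
    p2 = softmax(second_logits)
    final = [(a + b) / 2.0 for a, b in zip(p1, p2)]
    return final.index(max(final)), final
```

For example, if the first branch favors attribute 0 and the second branch favors attribute 1, the fused decision follows whichever branch is more confident.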
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, AND METHOD FOR PROCESSING INFORMATION
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data from an observation space. The processor detects a detection target included in the observation data. The processor maps coordinates of the detected detection target to coordinates of a detection target in a virtual space, tracks a position and a velocity of a material point representing the detection target in the virtual space, and maps coordinates of the tracked material point in the virtual space to coordinates in a display space. The processor sequentially observes the size of the detection target in the display space and estimates the size of the detection target at the present time on the basis of the observed value of the size at the present time and the estimated value of the size at a past time. The output interface outputs output information based on the coordinates of the material point mapped to the display space and the estimated size of the detection target.
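The size-estimation step admits a simple recursive-filter reading: blend the present observation with the past estimate. The weight `alpha` is a hypothetical parameter, since the abstract states only that present observed values and past estimated values are combined:

```python
def estimate_size(observed, prev_estimate, alpha=0.3):
    """Estimate the present size of a detection target.

    Exponential blending of the current observation with the previous
    estimate is one plausible realization of combining present observed
    values with past estimated values; `alpha` is illustrative.
    """
    if prev_estimate is None:
        return observed  # first frame: no past estimate to blend with
    return alpha * observed + (1 - alpha) * prev_estimate
```

Applied frame by frame, the estimate smooths out jitter in the observed bounding-box size while still following genuine size changes.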
METHODS AND SYSTEMS FOR OBTAINING A SCALE REFERENCE AND MEASUREMENTS OF 3D OBJECTS FROM 2D PHOTOS
Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application being executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens, which assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space. The 3D model is used to generate one or more measurements of the object.
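Once a reference of known physical length is identified in a captured image, a scale factor reduces to a simple ratio. This minimal sketch assumes such a reference is available; the abstract does not name one, so the function names and units are illustrative:

```python
def scale_factor(known_length_mm, measured_px):
    """Millimetres per pixel, from a reference of known physical length
    whose extent in the image is measured in pixels."""
    return known_length_mm / measured_px

def measure(length_px, factor):
    """Convert a pixel-space measurement to physical units using the
    previously derived scale factor."""
    return length_px * factor
```

For instance, a 100 mm reference spanning 200 pixels yields a factor of 0.5 mm/px, so a 340-pixel edge of the target object measures 170 mm.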
APPARATUS AND SYSTEM FOR DISPENSING COSMETIC MATERIAL
A system is provided that includes a mobile user device (300) that executes an application and determines and transmits a recipe for generating a target cosmetic material that is based on a combination of a plurality of separate ingredients that are associated with the user. The system includes a dispensing device (100) configured to receive the transmitted recipe from the mobile user device (300) and dispense each of the plurality of separate ingredients onto a common dispensing surface such that, when the dispensed amounts of each of the plurality of separate ingredients are blended on the dispensing surface, the target cosmetic material is achieved.
INFORMATION PROCESSING DEVICE, PROGRAM, AND METHOD
An information processing device includes a control unit configured to track an object across images input in time series, using a tracking result obtained by performing tracking in units of a tracking region corresponding to a specific part of the object.
QUANTITATIVE DYNAMIC MRI (QDMRI) ANALYSIS AND VIRTUAL GROWING CHILD (VGC) SYSTEMS AND METHODS FOR TREATING RESPIRATORY ANOMALIES
A method of analyzing thoracic insufficiency syndrome (TIS) in a subject by performing quantitative dynamic magnetic resonance imaging (QdMRI) analysis. The QdMRI analysis includes performing four-dimensional (4D) image construction of a TIS subject's thoracic cavity. The 4D image includes a sequence of two-dimensional (2D) images of the TIS subject's thoracic cavity over a respiratory cycle of the TIS subject. The QdMRI analysis also includes segmenting a region of interest (ROI) within the 4D image, determining TIS measurements within the ROI, comparing the TIS measurements to normal measurements determined from ROIs in 4D images of the thoracic cavities of normal subjects that are not afflicted by TIS, and outputting quantitative markers indicating deviation of the thoracic cavity of the TIS subject relative to the thoracic cavities of the normal subjects.
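One plausible way to output quantitative markers of deviation from the normal cohort is a per-measurement z-score. The measurement names and cohort format below are illustrative assumptions, not taken from the abstract:

```python
import statistics

def quantitative_markers(tis_measurements, normal_cohort):
    """Express each TIS measurement as a z-score relative to the
    corresponding values measured in normal subjects.

    `tis_measurements` maps a measurement name to the TIS subject's
    value; `normal_cohort` maps the same name to a list of values from
    normal subjects. Z-scores are one common deviation marker; the
    method itself does not fix the marker definition.
    """
    markers = {}
    for name, value in tis_measurements.items():
        ref = normal_cohort[name]
        mu = statistics.mean(ref)
        sigma = statistics.stdev(ref)
        markers[name] = (value - mu) / sigma
    return markers
```

A marker of -2.0, for example, indicates a measurement two standard deviations below the normal-cohort mean.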
QUEUE ANALYSIS APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A queue analysis apparatus (2000) estimates a position and an orientation of each object (20) included in a target image (10). The target image (10) is generated by a camera (50) that captures the object (20). Based on the position and orientation estimated for each object (20) included in a queue region (30), i.e., a region of the target image (10) that represents a queue, the queue analysis apparatus (2000) generates a queue line (40) that expresses the queue as a linear shape.
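Reducing a queue region to a linear shape can be sketched as a least-squares line fit through the estimated object positions. The patent also uses each object's orientation, which this sketch omits, and the closed-form fit assumes the queue is not perfectly vertical; both simplifications are assumptions:

```python
def fit_queue_line(positions):
    """Fit a straight queue line y = a + b*x through estimated object
    positions, given as (x, y) pairs in image coordinates.

    Ordinary least squares is one way to express a queue as a linear
    shape; it degenerates for perfectly vertical queues.
    """
    n = len(positions)
    sx = sum(x for x, _ in positions)
    sy = sum(y for _, y in positions)
    sxx = sum(x * x for x, _ in positions)
    sxy = sum(x * y for x, y in positions)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b
```

Four people standing along the diagonal of the image, for example, yield a line with slope 1 through the origin.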
AIRCRAFT DOOR CAMERA SYSTEM FOR DOCKING ALIGNMENT MONITORING
A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that a ground surface is within the field of view of the camera during taxiing of the aircraft. A display device is disposed within an interior of the aircraft. A processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera for docking guidance by identifying, within the captured image data, a region on the ground surface corresponding to an alignment fiducial indicating a parking location for the aircraft, determining, based on the region of the captured image data corresponding to the alignment fiducial indicating the parking location, a relative location of the aircraft with respect to the alignment fiducial, and outputting an indication of the relative location of the aircraft to the alignment fiducial.
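Determining a relative location from the fiducial region can be sketched as a signed pixel offset converted to physical units. The assumption that the image centre column corresponds to zero lateral offset (i.e., that the camera axis is aligned with the door centreline) is illustrative, not stated in the abstract:

```python
def lateral_offset(fiducial_centroid_x, image_width, metres_per_pixel):
    """Signed lateral offset of the aircraft from the alignment fiducial.

    Positive values mean the fiducial appears right of the image centre,
    negative values left; `metres_per_pixel` is a hypothetical ground
    calibration factor.
    """
    return (fiducial_centroid_x - image_width / 2.0) * metres_per_pixel
```

A fiducial centroid at pixel column 700 in a 1280-pixel-wide frame, with 1 cm per pixel on the ground, indicates the parking mark is 0.6 m to the right.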
ELECTRONIC DEVICE FOR TRACKING OBJECTS
Systems, methods, and non-transitory media are provided for tracking operations using data received from a wearable device. An example method can include determining a first position of a wearable device in a physical space; receiving, from the wearable device, position information associated with the wearable device; determining a second position of the wearable device based on the received position information; and tracking, based on the first position and the second position, a movement of the wearable device relative to an electronic device.
SYSTEMS AND METHODS FOR IDENTIFYING INCLINED REGIONS
Systems and methods for identifying inclined regions are provided. In one aspect, a method is provided that includes receiving shadow data for at least one first ground object in a first region, wherein each first ground object is depicted in one overhead image of the first region, wherein the shadow data comprises a length of the respective first ground object as identified from the respective overhead image; receiving shadow data for at least one second comparable ground object in a second region, wherein each second ground object is depicted in one overhead image of the second region, wherein the shadow data comprises a length of the respective second ground object as identified from the respective overhead image; calculating a statistical measure describing the variability of the shadow lengths between objects in the first region and the second region; comparing the statistical measure to a predetermined threshold; and based on the comparison, identifying that the first region is inclined relative to the second region.
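The comparison step can be sketched with the coefficient of variation of the pooled shadow lengths as the statistical measure. The abstract fixes neither the statistic nor the threshold, so both are assumptions here:

```python
import statistics

def region_is_inclined(first_lengths, second_lengths, threshold=0.1):
    """Decide whether the first region is inclined relative to the second.

    For comparable ground objects on terrain at the same inclination,
    shadow lengths in overhead imagery should be similar; high pooled
    variability suggests a relative incline. The coefficient of
    variation and the 0.1 threshold are illustrative choices.
    """
    pooled = list(first_lengths) + list(second_lengths)
    cv = statistics.stdev(pooled) / statistics.mean(pooled)
    return cv > threshold
```

Identical shadow lengths in both regions produce zero variability (flat relative terrain), while systematically longer shadows in one region push the statistic over the threshold.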