Patent classifications
G06V20/35
Imaging system, server, imaging device, imaging method, program, and recording medium having a function of assisting capturing of an image coincident with preference of a user
An image is analyzed, a scene of the image is recognized, imaging information regarding the conditions under which the image was captured is acquired, and reproduction information regarding reproduction of the image on a display is acquired. For each scene, an image coincident with a preference of a user of the imaging device is decided from among images having the same scene based on the reproduction information, and a preference parameter table is created in which the scene and the imaging information of the imaging device used when the preference-coincident image was captured are stored in association with each other. Imaging information associated with a scene coincident with a scene of an image to be captured next by the user is selected from the preference parameter table, and an image is captured using the selected imaging information.
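The table-building and lookup steps above can be sketched as follows; the class name `PreferenceTable` and the use of a reproduction (viewing) count as the preference score are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the preference parameter table described above.
# Scoring preference by reproduction count is an assumption for illustration.
class PreferenceTable:
    def __init__(self):
        # scene -> (best reproduction score seen, imaging info of that image)
        self.table = {}

    def update(self, scene, imaging_info, reproduction_score):
        """Keep, per scene, the imaging info of the most-reproduced image."""
        best = self.table.get(scene)
        if best is None or reproduction_score > best[0]:
            self.table[scene] = (reproduction_score, imaging_info)

    def lookup(self, scene):
        """Return stored imaging info for the scene about to be captured."""
        entry = self.table.get(scene)
        return entry[1] if entry else None

table = PreferenceTable()
table.update("sunset", {"iso": 100, "exposure": 1 / 60}, reproduction_score=3)
table.update("sunset", {"iso": 200, "exposure": 1 / 30}, reproduction_score=7)
print(table.lookup("sunset"))  # imaging info of the preferred sunset image
```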
SYSTEM AND METHOD FOR TEACHING SMART DEVICES TO RECOGNIZE AUDIO AND VISUAL EVENTS
Exemplary embodiments are directed to a method and apparatus for training a smart device to recognize events in an indoor or outdoor venue. The smart device can execute program code for generating a specialized user interface. The smart device can record a target event in the venue or control another smart device to record the target event. The smart device can process the recording of the target event to generate an event signature. A unique tag for the recorded event can be generated, and the event signature and the unique tag can be transmitted to cloud storage. The smart device can receive event recordings from other smart devices in the venue and compare the event recordings to the event signatures in cloud storage. The smart device can generate at least one user or device prompt regarding the target event when the event recording matches an event signature.
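The signature-generation and matching steps can be illustrated as below; the abstract does not specify the signature function, so a normalized-correlation score over fixed-length feature vectors is assumed here purely for the sketch.

```python
# Illustrative event-signature matching; the correlation threshold and the
# feature representation are assumptions, not the patent's implementation.
import math

def event_signature(samples):
    """Reduce a recording to a unit-norm feature vector."""
    norm = math.sqrt(sum(s * s for s in samples)) or 1.0
    return [s / norm for s in samples]

def matches(recording, signature, threshold=0.9):
    """Compare a new recording's signature against a stored signature."""
    sig = event_signature(recording)
    score = sum(a * b for a, b in zip(sig, signature))
    return score >= threshold

stored = event_signature([0.1, 0.9, 0.4, 0.2])   # tagged target event
print(matches([0.1, 0.9, 0.4, 0.2], stored))      # same event -> True
print(matches([0.9, 0.1, 0.1, 0.8], stored))      # different event -> False
```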
Event-assisted autofocus methods and apparatus implementing the same
A focus method and an image sensing apparatus are disclosed. The method includes capturing, by a plurality of event sensing pixels, event data of a targeted scene, wherein the event data indicates which pixels of the event sensing pixels have changes in light intensity, accumulating the event data for a predetermined time interval to obtain accumulated event data, determining whether a scene change occurs in the targeted scene according to the accumulated event data, obtaining one or more interest regions in the targeted scene according to the accumulated event data in response to the scene change, and providing at least one of the one or more interest regions for a focus operation. The image sensing apparatus comprises a plurality of image sensing pixels, a plurality of event sensing pixels, and a controller configured to perform said method.
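The accumulate-then-detect flow described in the method can be sketched as follows; the thresholds and the per-pixel region selection are assumptions for illustration, since the abstract leaves them unspecified.

```python
# Hedged sketch of event accumulation, scene-change detection, and
# interest-region extraction; threshold values are illustrative.
def accumulate_events(event_frames):
    """Sum per-pixel event counts over the predetermined time interval."""
    acc = [[0] * len(event_frames[0][0]) for _ in event_frames[0]]
    for frame in event_frames:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                acc[y][x] += v
    return acc

def scene_changed(acc, change_threshold=4):
    """Declare a scene change when total accumulated events exceed a bound."""
    return sum(map(sum, acc)) > change_threshold

def interest_regions(acc, pixel_threshold=2):
    """Return (y, x) pixels active enough to guide the focus operation."""
    return [(y, x) for y, row in enumerate(acc)
            for x, v in enumerate(row) if v >= pixel_threshold]

frames = [[[0, 1], [0, 0]], [[0, 1], [0, 1]], [[0, 1], [0, 1]]]
acc = accumulate_events(frames)
if scene_changed(acc):
    print(interest_regions(acc))  # regions offered for the focus operation
```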
Automatic image selection for online product catalogs
Disclosed are systems, methods, and non-transitory computer-readable media for automatic image selection for online product catalogs. An image selection system gathers feature data for images of an item included in listings posted to an online marketplace. The image selection system uses the feature data as input to a machine learning model to determine probability scores indicating an estimated probability that each image is suitable to represent the item. The machine learning model is trained based on a set of training images of the item that have been labeled to indicate whether they are suitable to represent the item. The image selection system compares the probability scores and selects an image to represent the item as a stock image based on the comparison.
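The score-and-select step can be sketched with a toy model; the logistic scorer and the feature names are stand-ins for the trained machine learning model the abstract mentions, not its actual form.

```python
# Minimal sketch of probability-based stock-image selection; the logistic
# model, weights, and (sharpness, clutter) features are assumed.
import math

def suitability_score(features, weights, bias=0.0):
    """Logistic score estimating how suitable an image is for the item."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

weights = [2.0, -1.5]  # assumed learned weights for (sharpness, clutter)
listing_images = {
    "img_a": [0.9, 0.2],
    "img_b": [0.4, 0.8],
    "img_c": [0.8, 0.1],
}
scores = {name: suitability_score(f, weights)
          for name, f in listing_images.items()}
stock_image = max(scores, key=scores.get)  # highest-probability image wins
print(stock_image)
```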
Determining an item that has confirmed characteristics
In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items is retrieved. A set of characteristics is determined, the set of characteristics being predicted to match with the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches with the object depicted in the image is received from the client device.
Electronic apparatus and control method thereof
An electronic apparatus is provided. The electronic apparatus includes a camera, a storage, and a processor configured to store an image photographed by the camera and metadata of the image in the storage. The processor is further configured to identify whether first information related to the image is obtainable and, based on the first information not being obtainable, generate metadata related to the first information based on second information and store the generated metadata as metadata of the image.
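The fallback logic can be sketched as below; treating GPS data as the "first information" and recognized objects as the "second information" is an assumption for illustration, as is the toy `infer_location` helper.

```python
# Sketch of the metadata fallback described above. GPS as first information
# and recognized objects as second information are assumed examples.
def infer_location(objects):
    """Toy inference of a place label from recognized objects (assumed)."""
    if "eiffel_tower" in objects:
        return "Paris"
    return "unknown"

def build_metadata(image):
    meta = {"timestamp": image["timestamp"]}
    if image.get("gps") is not None:        # first information obtainable
        meta["location"] = image["gps"]
    else:                                   # generate from second information
        meta["location"] = infer_location(image["objects"])
    return meta

photo = {"timestamp": "2023-05-01T10:00", "gps": None,
         "objects": ["eiffel_tower", "person"]}
print(build_metadata(photo))  # metadata generated without GPS data
```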
Electronic device correcting meta information of image and operating method thereof
Disclosed is an electronic device which includes a processor and a memory that stores instructions and at least one image. The instructions, when executed by the processor, cause the electronic device to: classify the at least one image into at least one image group, based on meta information of the at least one image; identify tag information about at least one object of first images in a first image group of the at least one image group; identify place information about the first images, based on the tag information; and correct meta information of the first images, based on the identified place information.
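The group-then-correct flow can be sketched as follows; grouping by capture date and the tag-to-place mapping are illustrative assumptions, since the abstract does not define either.

```python
# Illustrative sketch: group images by meta information (date, assumed),
# identify a place from the group's object tags, correct each image's meta.
from collections import defaultdict

TAG_TO_PLACE = {"colosseum": "Rome", "statue_of_liberty": "New York"}  # assumed

def correct_meta(images):
    groups = defaultdict(list)
    for img in images:                      # classify into image groups
        groups[img["meta"]["date"]].append(img)
    for group in groups.values():
        tags = {t for img in group for t in img["tags"]}  # tag information
        place = next((TAG_TO_PLACE[t] for t in tags if t in TAG_TO_PLACE), None)
        if place:
            for img in group:               # correct meta information
                img["meta"]["place"] = place
    return images

photos = [
    {"meta": {"date": "2024-06-01"}, "tags": ["colosseum"]},
    {"meta": {"date": "2024-06-01"}, "tags": ["pizza"]},
]
correct_meta(photos)
print(photos[1]["meta"]["place"])  # corrected via the group's tags
```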
Wearable Multimedia Device and Cloud Computing Platform with Application Ecosystem
Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for a wearable multimedia device and cloud computing platform with an application ecosystem for processing multimedia data captured by the wearable multimedia device. In an embodiment, a method comprises: receiving, by one or more processors of a cloud computing platform, context data from a wearable multimedia device, the wearable multimedia device including at least one data capture device for capturing the context data; creating a data processing pipeline with one or more applications based on one or more characteristics of the context data and a user request; processing the context data through the data processing pipeline; and sending output of the data processing pipeline to the wearable multimedia device or other device for presentation of the output.
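Assembling a pipeline from applications matched to the context data's characteristics can be illustrated minimally as below; the registry keys and the application functions are assumptions, not the platform's actual ecosystem.

```python
# Minimal sketch of characteristic-driven pipeline assembly; the registry
# contents and step functions are illustrative placeholders.
APP_REGISTRY = {
    "audio": [lambda d: d + ["transcribed"]],
    "image": [lambda d: d + ["labeled"]],
}

def build_pipeline(characteristics):
    """Select applications matching the context data's characteristics."""
    steps = []
    for c in characteristics:
        steps.extend(APP_REGISTRY.get(c, []))
    return steps

def run_pipeline(steps, context_data):
    for step in steps:
        context_data = step(context_data)
    return context_data  # output sent back to the wearable or other device

pipeline = build_pipeline(["image"])
print(run_pipeline(pipeline, ["raw_capture"]))  # ['raw_capture', 'labeled']
```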
IMAGE PROCESSING SYSTEM
The present invention discloses a system and method for image processing and for recognizing the scene of an image. The system utilizes a multi-mode scalable network system and a regrouping pipeline. It is an AI-based system which uses a neural network and includes a pre-processing unit, a processing unit, and a post-processing unit. The system uses optical information recorded by the camera of a mobile device to extract and analyze the content of an image, such as a photo or video clip. Based on the retrieved information, a label is assigned that best describes the scene of the image.
IMAGE DISPOSITIONING USING MACHINE LEARNING
Provided is a method, computer program product, and system for predicting image sharing decisions using machine learning. A processor may receive a set of annotated images and an associated text input from each user of a plurality of users. The processor may train, using the set of annotated images and the associated text input from each user, a neural network model to output an image sharing decision that is specific to a user.
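The per-user training step can be illustrated with a toy model; a single perceptron stands in for the neural network in the abstract, and the binary feature encoding of the annotated images is assumed.

```python
# Toy per-user sharing-decision model; a perceptron is an illustrative
# stand-in for the neural network, and features are assumed annotations.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (feature_vector, share_label) pairs for one user."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict_share(w, b, x):
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# features: (contains_people, is_food_photo) -- assumed annotations;
# this user shares only photos without people in them.
data = [([1, 0], 0), ([0, 1], 1), ([1, 1], 0), ([0, 0], 1)]
w, b = train_perceptron(data)
print(predict_share(w, b, [0, 1]))  # prints 1: share a food photo, no people
```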