Patent classifications
G06V20/00
MEASURING AND MONITORING SKIN FEATURE COLORS, FORM AND SIZE
Kits, diagnostic systems and methods are provided which measure the distribution of colors of skin features by comparison to calibrated colors that are co-imaged with the skin feature. The colors on the calibration template (calibrator) are selected to represent the expected range of feature colors under various illumination and capturing conditions. The calibrator may also comprise features with different forms and sizes for calibrating geometric parameters of the skin features in the captured images. Measurements may be enhanced by monitoring changes in the distribution of colors over time, by measuring two- and three-dimensional geometric parameters of the skin feature, and by associating the data with medical diagnostic parameters. Thus, simple means for skin diagnosis and monitoring are provided which simplify and improve current dermatologic diagnostic procedures.
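The calibration idea above (mapping colors observed under unknown illumination back to the known reference colors of the co-imaged template) might be sketched as a least-squares color correction. The patch colors and the 3x3 linear model below are illustrative assumptions, not details from the patent:

```python
import numpy as np

# Known reference colors of the calibration template (RGB), and the colors
# actually observed for those patches in the captured image. All values are
# invented for this sketch.
reference = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 255]], dtype=float)
observed = np.array([[200, 30, 20], [20, 210, 40], [30, 20, 190], [220, 225, 215]], dtype=float)

def fit_color_correction(observed, reference):
    """Fit a 3x3 linear map M (least squares) so that observed @ M ~ reference."""
    M, *_ = np.linalg.lstsq(observed, reference, rcond=None)
    return M

def calibrate(pixels, M):
    """Map captured pixel colors into the calibrated color space."""
    return np.clip(pixels @ M, 0, 255)

M = fit_color_correction(observed, reference)
skin_pixels = np.array([[180, 90, 70]], dtype=float)  # an example skin-feature pixel
print(calibrate(skin_pixels, M))
```

Applying the fitted map to the calibration patches themselves should move them closer to their reference colors, which gives a quick sanity check of the fit.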
Management and display of object-collection data
An object identification and collection method is disclosed. The method includes receiving a pick-up path that identifies a route in which to guide an object-collection system over a target geographical area to pick up objects, determining a current location of the object-collection system relative to the pick-up path, and guiding the object-collection system along the pick-up path over the target geographical area based on the current location. The method further includes capturing images in a direction of movement of the object-collection system along the pick-up path, identifying a target object in the images, tracking movement of the target object through the images, determining that the target object is within range of an object picker assembly on the object-collection system based on the tracked movement of the target object, and instructing the object picker assembly to pick up the target object.
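The pick-up loop described above (guide along the path, track detections, trigger the picker when an object comes into range) could be sketched roughly as follows. The `ObjectPicker` class, its `reach` parameter, and the world-coordinate detections are hypothetical stand-ins for the disclosed assemblies:

```python
import math

class ObjectPicker:
    """Hypothetical picker assembly with a fixed reach (illustrative only)."""
    def __init__(self, reach=1.5):
        self.reach = reach
        self.picked = []

    def pick_up(self, obj_id):
        self.picked.append(obj_id)

def guide_and_collect(pickup_path, detections, picker):
    """Follow the pick-up path waypoint by waypoint; at each position, check
    the tracked detections and trigger the picker for objects in range.

    pickup_path: list of (x, y) waypoints defining the route.
    detections:  dict obj_id -> (x, y) last tracked position.
    """
    collected = []
    for current in pickup_path:  # assume the platform reaches each waypoint
        for obj_id, pos in list(detections.items()):
            if math.dist(current, pos) <= picker.reach:
                picker.pick_up(obj_id)
                collected.append(obj_id)
                del detections[obj_id]  # stop tracking once picked up
    return collected

picker = ObjectPicker()
path = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
tracked = {"ball": (5.5, 0.5), "rock": (20.0, 20.0)}
print(guide_and_collect(path, tracked, picker))  # → ['ball']
```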
Systems and methods for dynamic image category determination
Disclosed are systems and methods for dynamically determining categories for images. A computer-implemented method may include training a neural network to receive an input image and determine one or more image categories associated with the input image; obtaining a set of images associated with a user; determining, using the trained neural network, one or more image categories associated with each image included in the obtained set of images; determining one or more dominant image categories associated with the user based on the determined image categories for the obtained set of images; and determining an image editing user interface for the user based on the determined one or more dominant image categories.
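The "dominant image categories" step might amount to counting, per category, how many of the user's images the trained network assigned it to, then keeping the categories above some share threshold. The `min_share` value below is an invented parameter for illustration; the classifier itself is out of scope:

```python
from collections import Counter

def dominant_categories(per_image_categories, min_share=0.3):
    """Given the categories predicted for each of a user's images, return the
    categories whose share of images meets min_share.

    per_image_categories: list of lists, one list of category labels per image.
    """
    n_images = len(per_image_categories)
    # Count each category at most once per image.
    counts = Counter(cat for cats in per_image_categories for cat in set(cats))
    return sorted(cat for cat, c in counts.items() if c / n_images >= min_share)

# e.g. a user whose library is mostly portraits and food shots:
images = [["portrait"], ["portrait", "outdoor"], ["food"], ["portrait", "food"]]
print(dominant_categories(images))  # → ['food', 'portrait']
```

A category-specific editing interface could then be selected by keying a UI configuration off the returned category list.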
Video generation method and apparatus, electronic device, and computer readable medium
Disclosed are a video generation method and apparatus, an electronic device, and a computer readable medium. A specific embodiment of the method comprises: obtaining video footage and audio footage, the video footage comprising picture footage; determining music points of the audio footage, the music points being used for dividing the audio footage into a plurality of audio clips; using the video footage to generate a video clip for each audio clip, to obtain a plurality of video clips, with each video clip having the same duration as its corresponding audio clip; and splicing the plurality of video clips according to the times at which their corresponding audio clips appear in the audio footage, then adding the audio footage as the audio track to obtain a composite video.
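The splitting-and-splicing arithmetic could be sketched like this: the music points partition the audio track into consecutive clips, and one video clip of matching duration is cut per audio clip. Timestamps in seconds and sequential cutting from the source footage are simplifying assumptions of the sketch:

```python
def audio_clip_durations(music_points, total_duration):
    """Music points split the audio track into consecutive clips; return each
    clip's duration in seconds."""
    bounds = [0.0] + sorted(music_points) + [total_duration]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def make_video_clips(footage_duration, durations):
    """Cut one video clip per audio clip, with matching duration, taken
    sequentially from the source footage (a simplification of the method)."""
    clips, start = [], 0.0
    for d in durations:
        clips.append((start, start + d))        # (in-point, out-point)
        start = (start + d) % footage_duration  # loop the footage if it runs out
    return clips

durations = audio_clip_durations([2.0, 5.0], total_duration=8.0)
print(durations)                      # → [2.0, 3.0, 3.0]
print(make_video_clips(30.0, durations))
```

Concatenating the cut clips in order and muxing the full audio track back in would then yield the composite video.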
SELECTIVE CONTENT INSERTION INTO AREAS OF MEDIA OBJECTS
One or more computing devices, systems, and/or methods for selective content insertion into areas of media objects are provided. For example, a media object (e.g., an image or video) is selected for composition with content, such as where a message, interactive content, a hyperlink, or another type of content is overlaid or embedded into the media object to create a composite media object. The content is added into an area of the media object that is selectively identified to reduce occlusion and/or improve visual cohesiveness between the content and the media object (e.g., an area with a similar or complementary color and an adequate size that is sparse in salient visual features such as a soccer player, a ball, or another entity). In this way, the content may be added into the area of the media object to create a composite media object to provide to users.
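The occlusion-reducing area selection might be approximated by sliding a content-sized window over a saliency map and keeping the quietest window. Using an edge-strength map as the saliency proxy is an assumption of this sketch, not the patent's stated method:

```python
import numpy as np

def best_insertion_area(saliency, box_h, box_w):
    """Scan an (H, W) saliency/edge-strength map with a sliding window and
    return the top-left corner of the window with the least visual detail,
    a stand-in for the 'reduce occlusion' criterion."""
    H, W = saliency.shape
    best, best_score = (0, 0), float("inf")
    for y in range(0, H - box_h + 1):
        for x in range(0, W - box_w + 1):
            score = saliency[y:y + box_h, x:x + box_w].sum()
            if score < best_score:
                best, best_score = (y, x), score
    return best

edges = np.ones((6, 8))
edges[1:4, 4:8] = 0.0                     # a quiet region in the top-right
print(best_insertion_area(edges, 3, 4))   # → (1, 4)
```

A fuller scoring function would also weigh color similarity between the candidate area and the content, as the abstract suggests.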
Independently procurable item compliance information
Systems and methods electronically provide information regarding digital rules related to a potential relationship instance. Users often wish to know which digital rules apply to a specified item before engaging in a relationship instance with a host entity regarding the item. The systems and methods described herein allow a computing facility to identify an item and receive resource information related to the item and the digital rules applicable to the item.
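At its simplest, the lookup described above could be a mapping from an item (or its category) to the digital rules that apply to it; the categories and rule names below are invented for illustration:

```python
# Illustrative rule store: which digital rules apply to which item category.
RULES_BY_CATEGORY = {
    "electronics": ["import-duty", "e-waste-recycling"],
    "food": ["labeling", "import-duty"],
}

def rules_for_item(item_category):
    """Return the digital rules applicable to an item before the user enters
    a relationship instance involving it."""
    return RULES_BY_CATEGORY.get(item_category, [])

print(rules_for_item("food"))  # → ['labeling', 'import-duty']
```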
Wearable Multimedia Device and Cloud Computing Platform with Application Ecosystem
Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for a wearable multimedia device and cloud computing platform with an application ecosystem for processing multimedia data captured by the wearable multimedia device. In an embodiment, a method comprises: receiving, by one or more processors of a cloud computing platform, context data from a wearable multimedia device, the wearable multimedia device including at least one data capture device for capturing the context data; creating a data processing pipeline with one or more applications based on one or more characteristics of the context data and a user request; processing the context data through the data processing pipeline; and sending output of the data processing pipeline to the wearable multimedia device or other device for presentation of the output.
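The pipeline-creation step (choosing applications by the context data's characteristics and the user's request, then chaining them) might look roughly like this; the registry layout and the two toy applications are assumptions of the sketch:

```python
def build_pipeline(context_type, user_request, registry):
    """Assemble a processing pipeline by picking, from an application registry,
    the apps that declare support for this context-data type and task.

    registry: list of dicts like {"name", "accepts", "tasks", "fn"}.
    """
    return [app["fn"] for app in registry
            if context_type in app["accepts"] and user_request in app["tasks"]]

def run_pipeline(pipeline, data):
    """Feed the context data through each stage in order."""
    for stage in pipeline:
        data = stage(data)
    return data

# Two toy applications, a transcriber followed by a summarizer (illustrative):
registry = [
    {"name": "transcribe", "accepts": {"audio"}, "tasks": {"summarize"},
     "fn": lambda d: d + " [transcribed]"},
    {"name": "summarize", "accepts": {"audio"}, "tasks": {"summarize"},
     "fn": lambda d: d + " [summarized]"},
]
print(run_pipeline(build_pipeline("audio", "summarize", registry), "clip-001"))
# → clip-001 [transcribed] [summarized]
```

The pipeline output would then be sent back to the wearable device (or another device) for presentation.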
MULTI-DOMAIN CONVOLUTIONAL NEURAL NETWORK
In one embodiment, an apparatus comprises a memory and a processor. The memory is to store visual data associated with a visual representation captured by one or more sensors. The processor is to: obtain the visual data associated with the visual representation captured by the one or more sensors, wherein the visual data comprises uncompressed visual data or compressed visual data; process the visual data using a convolutional neural network (CNN), wherein the CNN comprises a plurality of layers, wherein the plurality of layers comprises a plurality of filters, and wherein the plurality of filters comprises one or more pixel-domain filters to perform processing associated with uncompressed data and one or more compressed-domain filters to perform processing associated with compressed data; and classify the visual data based on an output of the CNN.
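The multi-domain layer could be pictured as routing input through filters matched to its domain: pixel-domain kernels for uncompressed data and compressed-domain (e.g., DCT-coefficient) kernels for compressed data. The naive convolution and the fixed weights below are illustrative; a real CNN would learn its filters:

```python
import numpy as np

def conv2d(x, kernel):
    """Naive valid 2-D convolution, sufficient for this sketch."""
    kh, kw = kernel.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

# One pixel-domain filter and one compressed-domain filter (weights invented).
PIXEL_FILTER = np.array([[1.0, 0.0], [0.0, -1.0]])
DCT_FILTER = np.array([[0.5, 0.5], [0.5, 0.5]])

def multi_domain_layer(visual_data, compressed):
    """Route the input through the filter matching its domain: pixel-domain
    filters for uncompressed data, compressed-domain filters otherwise."""
    kernel = DCT_FILTER if compressed else PIXEL_FILTER
    return np.maximum(conv2d(visual_data, kernel), 0.0)  # conv + ReLU

x = np.arange(9.0).reshape(3, 3)
print(multi_domain_layer(x, compressed=False).shape)  # → (2, 2)
```

Stacking several such layers and ending in a classifier head would mirror the described classification of visual data from either domain.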