G06V10/24

GEO-SPATIAL CONTEXT FOR FULL-MOTION VIDEO

A method and a system for generating a composite video feed for a geographical area are disclosed. A video of the geographical area, captured by a camera of an aerial platform, is received. The video includes metadata indicative of location information, which is used to identify the coordinates of the geographical area. An image that is adjacent to the geographical area is received from a geographical information system and is transformed according to the metadata. The coordinates of the geographical area are used to determine an area within the image. The video is embedded in the area by matching the area with the coordinates of the geographical area, where the edges of the video correspond to the boundaries of the area. A composite video feed, including the video embedded within the image, is generated, and a video player displays the composite video feed.
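The core of the embedding step is a mapping from geographic coordinates to pixel positions in the base image. A minimal sketch of that idea follows, assuming a north-up base image with a simple linear latitude/longitude-to-pixel mapping and nearest-neighbour resampling; the function names and bounds convention are illustrative, not taken from the patent.

```python
import numpy as np

def geo_to_pixel(lat, lon, bounds, shape):
    """Map a (lat, lon) coordinate to a (row, col) pixel in a north-up
    base image covering the given geographic bounds.
    bounds = (lat_min, lat_max, lon_min, lon_max); shape = (rows, cols)."""
    lat_min, lat_max, lon_min, lon_max = bounds
    rows, cols = shape
    row = int((lat_max - lat) / (lat_max - lat_min) * (rows - 1))
    col = int((lon - lon_min) / (lon_max - lon_min) * (cols - 1))
    return row, col

def embed_frame(base, frame, frame_bounds, base_bounds):
    """Paste a video frame into the base image so the frame's edges land
    on the pixel region matching its geographic footprint."""
    # Top-left corner of the footprint: (lat_max, lon_min).
    r0, c0 = geo_to_pixel(frame_bounds[1], frame_bounds[2],
                          base_bounds, base.shape[:2])
    # Bottom-right corner: (lat_min, lon_max).
    r1, c1 = geo_to_pixel(frame_bounds[0], frame_bounds[3],
                          base_bounds, base.shape[:2])
    out = base.copy()
    h, w = r1 - r0 + 1, c1 - c0 + 1
    # Nearest-neighbour resample of the frame onto the target region.
    rr = np.arange(h) * frame.shape[0] // h
    cc = np.arange(w) * frame.shape[1] // w
    out[r0:r1 + 1, c0:c1 + 1] = frame[np.ix_(rr, cc)]
    return out
```

A real system would additionally apply the perspective transform implied by the platform metadata; this sketch only shows the coordinate matching.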
METHOD OF IDENTIFYING SIMILARS IN CLOTHING IMAGES

Described herein are a system and a computer-implemented method for finding similars for a selected clothing image among a set of clothing images in an electronic catalog of an online store serving online customers. An object detection model is applied to extract the clothing section within the clothing images to create a preprocessed image. A first machine learning model is applied on the preprocessed image(s) to convert the colors and textures of said preprocessed image into a first set of vector representations. A second machine learning model is applied on the preprocessed image(s) to convert the shapes of said preprocessed image into a second set of vector representations. Operations of mapping nearest vectors, matching attributes, sorting and ranking are performed, and thereafter similar images are displayed to the online customer.
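The "mapping nearest vectors, matching attributes, sorting and ranking" pipeline can be sketched as follows, assuming embedding vectors have already been produced by the two models; the cosine-similarity ranking and the attribute-bonus weight are illustrative assumptions, not details from the claims.

```python
import numpy as np

def find_similars(query_vec, catalog_vecs, catalog_attrs, query_attrs, top_k=3):
    """Rank catalog items by cosine similarity of their embedding vectors,
    boosting items whose attributes (e.g. sleeve type, neckline) match."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    sims = c @ q                                   # map nearest vectors
    matches = np.array([len(query_attrs & a) for a in catalog_attrs])
    score = sims + 0.1 * matches                   # attribute-matching bonus
    order = np.argsort(-score)                     # sort and rank
    return order[:top_k]
```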

Device state interface
11711495 · 2023-07-25

A system and method for visually automated interface integration that includes collecting image data; detecting a device interface source in the image data; processing the image data associated with the device interface source into an extracted interface representation; and exposing at least one access interface to the extracted interface representation.

Face recognition method, terminal device using the same, and computer readable storage medium

A backlight face recognition method, a terminal device using the same, and a computer readable storage medium are provided. The method includes: performing a face detection on each original face image in an original face image sample set to obtain a face frame corresponding to the original face image; capturing the corresponding original face images from the original face image sample set, and obtaining new face images containing background pixels corresponding to the captured original face images; preprocessing all the obtained new face images to obtain a backlight sample set and a normal lighting sample set; and training a convolutional neural network using the backlight sample set and the normal lighting sample set until the convolutional neural network reaches a preset stopping condition. The trained convolutional neural network improves the accuracy of face recognition against complex backgrounds and strong light.
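The step of obtaining a new face image that keeps background pixels around the detected face frame can be sketched as a margin-expanded crop. This is an illustrative reading of that step, with an assumed relative-margin parameter; the patent does not specify the expansion rule.

```python
import numpy as np

def crop_with_background(image, box, margin=0.3):
    """Expand a detected face box by a relative margin so the crop keeps
    surrounding background pixels, clamped to the image borders.
    box = (x, y, w, h) in pixels."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(image.shape[1], x + w + dx)
    y1 = min(image.shape[0], y + h + dy)
    return image[y0:y1, x0:x1]
```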

METHOD AND SYSTEM FOR EXTRACTING SENTIMENTS OR MOOD FROM ART IMAGES

A method for extracting sentiments or mood from art images includes: receiving at least one of the art images as an input image; preprocessing the input image; extracting features from the preprocessed input image, the extracting including predicting a color label corresponding to a dominant perceptual color detected from the preprocessed input image, detecting a dominant subject from the preprocessed input image, detecting low-level image features from the preprocessed input image, and extracting mood feature information based on description information included in the input image; classifying the extracted features into a plurality of mood/sentiments classes, using an artificial neural network; and predicting at least one of a mood or a sentiment that is present in the input image based on the dominant perceptual color and the plurality of mood/sentiments classes.
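The dominant-perceptual-color step can be sketched as assigning every pixel to its nearest color in a named palette and taking the most frequent label. The palette and nearest-neighbour rule here are assumptions for illustration; the patent's perceptual color model is not specified in the abstract.

```python
import numpy as np

def dominant_color_label(image, palette):
    """Assign every pixel to its nearest palette color and return the
    label of the color that covers the most pixels.
    palette: dict mapping label -> (r, g, b)."""
    labels = list(palette)
    centers = np.array([palette[k] for k in labels], dtype=float)
    pixels = image.reshape(-1, 3).astype(float)
    # Euclidean distance from each pixel to each palette color.
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(labels))
    return labels[int(counts.argmax())]
```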

METHOD FOR UPDATING 3-DIMENSIONAL MAP USING IMAGE, AND ELECTRONIC DEVICE FOR SUPPORTING SAME
20230230318 · 2023-07-20 ·

An electronic device is provided. The electronic device includes a camera module, a communication circuit, a memory, and a processor, wherein the processor may execute a first application using the camera module, obtain a first image through the camera module while the first application operates, recognize an object of a specified type in the first image, obtain location information of where the first image is obtained, determine a first virtual point corresponding to the location information on a three-dimensional (3D) virtual map, obtain a second image corresponding to the first image at the first virtual point, and update data of the 3D virtual map on the object based on a comparison between the first image and the second image.
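The final comparison between the captured first image and the second image obtained at the virtual point reduces, in the simplest reading, to a change-detection test. A minimal sketch under that assumption, using a mean-absolute-difference threshold (the actual comparison metric is not disclosed in the abstract):

```python
import numpy as np

def needs_map_update(captured, rendered, threshold=0.2):
    """Compare a camera image with the view rendered from the same
    virtual point on the 3D map; flag an update when the mean absolute
    pixel difference exceeds a threshold (images normalised to [0, 1])."""
    diff = np.abs(captured.astype(float) - rendered.astype(float)).mean()
    return diff > threshold
```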

WATER AREA OBJECT DETECTION SYSTEM AND MARINE VESSEL
20230228576 · 2023-07-20 ·

A water area object detection system includes a first imager to image an object around a hull, a second imager provided on the hull such that an imaging direction of the second imager is the same or substantially the same as an imaging direction of the first imager and operable to image the object around the hull, and a controller configured or programmed to perform a control to create a water area map around the hull based on images captured by the first imager and the second imager. The second imager is spaced apart in an upward-downward direction of the hull from the first imager, and the first imager is spaced apart in the imaging direction from the second imager so as not to overlap the second imager in the upward-downward direction perpendicular to the imaging direction.
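Two imagers spaced apart along a known baseline permit a standard stereo range estimate: distance = focal length × baseline / disparity. A sketch under that textbook assumption (the abstract does not state the system's actual ranging method):

```python
def object_distance(focal_px, baseline_m, disparity_px):
    """Estimate the distance to an object seen by two vertically spaced
    imagers: distance = focal_length * baseline / disparity, with the
    focal length and disparity in pixels and the baseline in metres."""
    if disparity_px <= 0:
        raise ValueError("object must appear displaced between the imagers")
    return focal_px * baseline_m / disparity_px
```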

SYSTEMS AND METHODS FOR INSPECTING PIPELINES USING A PIPELINE INSPECTION ROBOT
20230228694 · 2023-07-20 ·

Systems and methods for robotic inspection of above-ground pipelines are disclosed. Embodiments may include a robotic crawler having a plurality of motors that are individually controllable for improved positioning on the pipeline to facilitate image acquisition. Embodiments may also include mounting systems to house and carry imaging equipment configured to capture image data simultaneously from a plurality of angles. Such mounting systems may be adjustable to account for different sizes of pipes (e.g., 2-40+ inches), and may be configured to account for traversing various pipe support structures. Still further, mounting systems may include quick-release members to allow for removal and re-mounting of imaging equipment when traversing support structures. In other aspects, embodiments may be directed toward control systems for the robotic crawler which assist in the navigation and image capture capabilities of the crawler.

Gesture recognition systems
11703951 · 2023-07-18

A method and apparatus for performing gesture recognition. In one embodiment of the invention, the method includes the steps of receiving one or more raw frames from one or more cameras, each of the one or more raw frames representing a time sequence of images, determining one or more regions of the one or more received raw frames that comprise highly textured regions, segmenting the one or more determined highly textured regions in accordance with textured features thereof to determine one or more segments thereof, determining one or more regions of the one or more received raw frames that comprise other than highly textured regions, and segmenting the one or more determined other than highly textured regions in accordance with color thereof to determine one or more segments thereof. One or more of the segments are then tracked through the one or more raw frames representing the time sequence of images.
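The split between highly textured regions (segmented by texture features) and the remainder (segmented by color) can be sketched by thresholding local intensity variance per block. The block size and variance threshold are illustrative assumptions; the patent does not specify its texture measure.

```python
import numpy as np

def split_by_texture(gray, block=4, var_threshold=100.0):
    """Label each block of a grayscale image as highly textured (1) or
    not (0) by thresholding its pixel-intensity variance, mirroring the
    two-path segmentation (texture features vs. color)."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block          # trim to whole blocks
    tiles = gray[:h, :w].reshape(h // block, block, w // block, block)
    var = (tiles.transpose(0, 2, 1, 3)
                .reshape(h // block, w // block, -1)
                .var(axis=2))
    return (var > var_threshold).astype(int)
```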