Patent classifications
G06V10/273
SEMI-AUTOMATIC IMAGE DATA LABELING METHOD, ELECTRONIC APPARATUS, AND STORAGE MEDIUM
Disclosed are a semi-automatic image data labeling method, an electronic apparatus, and a non-transitory computer-readable storage medium. The semi-automatic image data labeling method may include: displaying a to-be-labeled image, the to-be-labeled image comprising a selected area and an unselected area; acquiring a coordinate point of the unselected area and a first range value; executing a GrabCut algorithm based on the acquired coordinate point of the unselected area and the first range value, and obtaining a binarized image divided by the GrabCut algorithm; executing an edge tracking algorithm on the binarized image to acquire current edge coordinates; updating a local coordinate set based on the acquired current edge coordinates; and updating the selected area of the to-be-labeled image based on the updated local coordinate set.
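The update loop of this abstract can be sketched in Python. The GrabCut segmentation itself is treated as a given binarized mask (in practice, e.g., OpenCV's `cv2.grabCut`), and the edge-tracking step is simplified to collecting foreground pixels that touch the background; function names here are illustrative, not the patent's.

```python
def edge_coordinates(mask):
    """Boundary pixels of a binary mask (1 = foreground): any foreground
    pixel with a background or out-of-bounds 4-neighbour."""
    h, w = len(mask), len(mask[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or mask[ny][nx] == 0:
                    edges.add((x, y))
                    break
    return edges

def update_selection(selected, mask):
    """Merge the mask's foreground into the selected area and return the
    local coordinate set derived from the current edge coordinates."""
    local = edge_coordinates(mask)
    selected |= {(x, y) for y, row in enumerate(mask)
                        for x, v in enumerate(row) if v == 1}
    return selected, local
```

Each click on the unselected area would produce a new mask, and the selection grows incrementally by repeating `update_selection`.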
Aerial item delivery availability
Disclosed are systems and methods to determine and rank large areas encompassing many parcels (e.g., neighborhoods, cities, towns) for aerial item delivery availability, without the use of image data of the areas. In some implementations, publicly available two-dimensional parcel maps that indicate parcel boundaries and outlines of structures on those parcels may be obtained and processed. For example, parcels within the area may be processed to determine deliverable area shapes, such as rectangles, within the parcel, excluding the area of the structure. A determination is then made as to whether one or more of the deliverable area shapes exceed a deliverable area threshold. If one or more of the deliverable area shapes of the parcel exceed the threshold, the parcel is considered to be available for aerial item delivery. This processing may be done for all parcels within an area or all parcels of customers of a service within the area (or any other selection criteria). Likewise, this processing may be done for multiple different areas and the areas may be ranked based on the overall determined availability of aerial item delivery to parcels within those areas.
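The threshold check and the area ranking described above can be sketched as follows; deliverable shapes are reduced to width/height rectangle pairs, and the scoring (fraction of available parcels) is an assumed ranking metric for illustration.

```python
def parcel_available(deliverable_rects, threshold):
    """True if any deliverable rectangle's area meets the threshold.

    deliverable_rects: iterable of (width, height) pairs for the
    rectangles found within the parcel, excluding structure footprints.
    """
    return any(w * h >= threshold for w, h in deliverable_rects)

def rank_areas(areas, threshold):
    """Rank areas by the fraction of their parcels available for
    aerial delivery. `areas` maps area name -> list of parcels, each
    parcel being a list of (width, height) deliverable rectangles."""
    scored = []
    for name, parcels in areas.items():
        available = sum(parcel_available(rects, threshold)
                        for rects in parcels)
        scored.append((name, available / len(parcels)))
    return sorted(scored, key=lambda t: t[1], reverse=True)
```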
Barrier detection for support structures
A method of barrier detection in an imaging controller includes: obtaining an image of a support structure configured to support a plurality of items on a support surface extending between a shelf edge and a shelf back; extracting frequency components representing pixels of the image; based on the extracted frequency components, identifying a barrier region of the image, the barrier region containing a barrier adjacent to the shelf edge; and detecting at least one empty sub-region within the barrier region, wherein the empty sub-region is free of items between the barrier and the shelf back.
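As a minimal sketch of the frequency step: a barrier such as a wire fence produces strong high-frequency content in the pixel rows crossing it, so a region can be flagged by its high-frequency energy. The naive 1-D DFT below stands in for whatever transform an implementation would actually use (the abstract does not specify one), and the cutoff is an assumed parameter.

```python
import cmath

def high_freq_energy(signal, cutoff):
    """Total energy of DFT components with index >= `cutoff`
    (naive O(n^2) DFT over a 1-D pixel signal)."""
    n = len(signal)
    energy = 0.0
    for k in range(cutoff, n - cutoff):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        energy += abs(coeff) ** 2
    return energy
```

A row crossing a periodic barrier (alternating intensities) scores far higher than a row over a flat shelf surface, which is the basis for identifying the barrier region.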
STORE MONITORING SYSTEM, STORE MONITORING APPARATUS, STORE MONITORING METHOD AND RECORDING MEDIUM
A store monitoring system includes: an imaging apparatus; and a store monitoring apparatus that monitors a display shelf in a store by using a captured image captured by the imaging apparatus, the store monitoring apparatus including: an acquisition unit that sequentially obtains the captured image from the imaging apparatus; and an image generation unit that extracts from the captured image a shelf image in which the display shelf appears, by eliminating from the captured image a non-shelf image in which the display shelf does not appear, and that generates a front shelf image, obtained as if the display shelf were imaged from the front, by correcting a distortion of the shelf image.
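The distortion correction that produces a front shelf image is conventionally a planar homography mapping the four shelf corners seen in the captured image to a front-facing rectangle. The pure-Python sketch below solves for that 3x3 transform (the abstract does not name the method; the homography is an assumption, and real systems would use e.g. OpenCV's `cv2.getPerspectiveTransform`).

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 perspective transform mapping four src points to four dst
    points (h33 fixed to 1, standard DLT formulation)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    """Apply homography H to (x, y) with perspective division."""
    d = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)
```

Warping every pixel of the shelf image through this transform yields the front shelf image.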
Image processing apparatus, imaging apparatus, image processing method, and image processing program
A portable terminal includes an extraction unit and a display control unit. The extraction unit extracts a part of a display image displayed on a touch panel display, as an extraction image used as a compositing target image. The display control unit, in a case where a plurality of regions having different areas are included in the extraction image used as the compositing target image, removes at least one of the plurality of regions in order of area based on an operation amount of an operation input by a user.
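The removal-in-order-of-area behaviour can be sketched as connected-component labeling followed by clearing the smallest regions, where the number removed stands in for the user's operation amount. This is an illustrative reading, not the patent's implementation.

```python
def components(mask):
    """4-connected components of a binary grid, as lists of (x, y) sets."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and (sx, sy) not in seen:
                stack, comp = [(sx, sy)], set()
                seen.add((sx, sy))
                while stack:
                    x, y = stack.pop()
                    comp.add((x, y))
                    for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                        if (0 <= nx < w and 0 <= ny < h and mask[ny][nx]
                                and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                comps.append(comp)
    return comps

def remove_smallest(mask, count):
    """Clear the `count` smallest regions, mimicking stepwise removal
    in order of area as the user's operation amount increases."""
    for comp in sorted(components(mask), key=len)[:count]:
        for x, y in comp:
            mask[y][x] = 0
    return mask
```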
METHOD AND APPARATUS FOR IMPROVING OBJECT IMAGE
Provided are a method and an apparatus for restoring an object image, capable of restoring an image naturally by detecting positions of landmarks of an object in a bounding-box detected from an input image, performing warping to align the object at a central position or a reference position on the basis of the landmarks, improving the image using a learning model learned from the aligned object image, performing inverse warping for rotating the improved object image in an original direction or at an original angle, and inserting the inversely-warped object image into the input image. In addition, provided are a method and an apparatus for restoring an object image, capable of detecting positions of landmarks of an object in a bounding-box detected from an input image, performing pose estimation for a side object on the basis of the landmarks, and improving an image using a learning model learned from a side object image corresponding to the pose estimation result.
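The warp/inverse-warp pair described above can be illustrated with a rotation that levels the eye line of a face: align by the landmark angle, enhance, then rotate back by the negated angle. Landmark names and the choice of a pure rotation (rather than a full affine or piecewise warp) are simplifying assumptions.

```python
import math

def rotation(angle, cx, cy):
    """2x3 affine matrix rotating by `angle` radians about (cx, cy)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, cx - c * cx + s * cy],
            [s,  c, cy - s * cx - c * cy]]

def apply(T, x, y):
    """Apply a 2x3 affine transform to a point."""
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

def align_angle(left_eye, right_eye):
    """Rotation angle that makes the eye line horizontal (the warp);
    negating it gives the inverse warp back to the original pose."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return -math.atan2(dy, dx)
```

In the full pipeline, the aligned crop is passed through the learned enhancement model before the inverse rotation re-inserts it into the input image.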
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR TRAINING IMAGE PROCESSING MODEL
An image processing model can more accurately process a face image in which the face is occluded, while reducing computation, improving the operation speed of a processing device, and reducing training time and costs. A predicted recognition result of a sample face image and occlusion indication information based on an image processing model is obtained. The occlusion indication information indicates an image feature of a face occlusion area of the sample face image. A recognition error based on the predicted recognition result and a target recognition result is also obtained. A classification error is obtained based on the occlusion indication information and a target occlusion pattern corresponding to the sample face image. An occlusion pattern of the sample face image indicates a position and a size of the face occlusion area. A model parameter of the image processing model is updated based on the recognition error and the classification error.
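The training update combines the two errors into one objective before stepping the model parameters; the sketch below assumes a simple weighted sum (the weight `alpha` and the plain SGD step are illustrative choices the abstract does not specify).

```python
def combined_loss(recognition_error, classification_error, alpha=0.5):
    """Joint objective: recognition loss plus a weighted
    occlusion-pattern classification loss (alpha is assumed)."""
    return recognition_error + alpha * classification_error

def sgd_step(params, grads, lr=0.01):
    """One gradient-descent update of the model parameters, driven by
    the gradient of the combined loss."""
    return [p - lr * g for p, g in zip(params, grads)]
```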
Electronic apparatus and object information recognition method by using touch data thereof
An electronic apparatus and an object information recognition method by using touch data thereof are provided. Touch sensing is performed in the case where no object touches a touch panel to obtain a specific background frame through the touch panel. A current touch sensing frame is obtained through the touch panel. Touch background data of a plurality of first frame cells in the specific background frame is respectively subtracted from touch raw data of a plurality of second frame cells in the current touch sensing frame to obtain a background removal frame including a plurality of cell values. The background removal frame is transformed into a touch sensing image. The touch sensing image is inputted to a trained neural network model to recognize object information of a touch object.
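The background-removal and image-conversion steps are straightforward per-cell arithmetic, sketched below; the linear 0..255 scaling is one plausible way to turn cell values into a grayscale touch-sensing image.

```python
def remove_background(frame, background):
    """Subtract per-cell touch background data from the raw data of the
    current touch sensing frame."""
    return [[raw - bg for raw, bg in zip(frame_row, bg_row)]
            for frame_row, bg_row in zip(frame, background)]

def to_image(cells, lo=None, hi=None):
    """Linearly scale cell values into 0..255 grayscale pixels, ready to
    feed a trained neural network model."""
    flat = [v for row in cells for v in row]
    lo = min(flat) if lo is None else lo
    hi = max(flat) if hi is None else hi
    span = (hi - lo) or 1  # avoid division by zero on flat frames
    return [[round(255 * (v - lo) / span) for v in row] for row in cells]
```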
IMAGE PROCESSING FOR SEPARATION OF ADJACENT OBJECTS
Disclosed is image processing to discriminate imaged objects that are adjacent or overlapping. Non-empty cells of the image that contain portions of the objects, and empty cells that lack any portions of the objects, are all determined. A global convex hull is defined to surround the non-empty cells of the image. Voids, including at least a first void and a second void, are found within the global convex hull, each being composed of contiguous empty cells and having a corresponding void boundary. A separation line is defined based on a first separation line endpoint along the void boundary of the first void and a second separation line endpoint along the void boundary of the second void, to separate two of the objects in the image. An output may be produced that includes indicia of at least portions of distinct boundaries of the objects in the image based on the separation line.
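The hull-and-void stage can be sketched as follows: compute the global convex hull of the non-empty cells (Andrew's monotone chain), then collect the empty cells falling inside it as void candidates. The void grouping and separation-line endpoint selection are omitted for brevity.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o)."""
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, p):
    """True if p lies inside or on a CCW-ordered convex hull."""
    return all(cross(hull[i], hull[(i + 1) % len(hull)], p) >= 0
               for i in range(len(hull)))

def void_cells(non_empty, all_cells):
    """Empty cells that fall within the global convex hull; contiguous
    runs of these form the voids whose boundaries anchor the separation
    line between adjacent objects."""
    hull = convex_hull(non_empty)
    occupied = set(non_empty)
    return [c for c in all_cells
            if c not in occupied and inside_hull(hull, c)]
```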
OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND PROGRAM
An object detection device (1) includes an object detection unit (2) that detects an object from an image including the object by neural computation using a CNN. The object detection unit (2) includes: a feature amount extraction unit (2a) that extracts a feature amount of the object from the image; an information acquisition unit (2b) that obtains a plurality of object rectangles indicating candidates for the position of the object on the basis of the feature amount and obtains information and a certainty factor of a category of the object for each of the object rectangles; and an object tag calculation unit (2c) that calculates, for each of the object rectangles, an object tag indicating which object in the image the object rectangle is linked to, on the basis of the feature amount. The object detection device (1) further includes an excess rectangle suppression unit (4) that separates a plurality of object rectangles for which a category of the object is the same into a plurality of groups according to the object tags, and deletes an excess object rectangle in each of the separated groups on the basis of the certainty factor.
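The suppression step reads as a tag-grouped variant of non-maximum suppression: within each (category, object tag) group, only the most certain rectangle survives. A minimal sketch, with an assumed box tuple layout:

```python
def suppress_excess(boxes):
    """Keep, per (category, object tag) group, only the rectangle with
    the highest certainty factor; all others are excess and deleted.

    Each box is (category, tag, certainty, rect).
    """
    best = {}
    for category, tag, certainty, rect in boxes:
        key = (category, tag)
        if key not in best or certainty > best[key][2]:
            best[key] = (category, tag, certainty, rect)
    return list(best.values())
```

Because grouping is by object tag rather than by rectangle overlap, two heavily overlapping instances of the same category survive as long as their tags differ.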