G06V10/759

Automatic nuclear segmentation

In an embodiment, a plurality of superpixels are determined in a digital image. For each of the superpixels, any superpixels located within a search radius from the superpixel are identified, and, for each unique local combination between the superpixel and any identified superpixels located within the search radius from the superpixel, a local score for the local combination is determined. One of a plurality of global sets of local combinations with an optimum global score is identified based on the determined local scores.
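The neighbour search and scoring above can be sketched as follows. This is a minimal illustration, not the patented method: the distance-based local score, the disjointness constraint on the global set, and the brute-force enumeration are all assumptions for the sake of a runnable example.

```python
from itertools import combinations
from math import dist

def local_combinations(centroids, radius):
    """For each superpixel (given by its centroid), enumerate the
    superpixels within the search radius and assign each local
    combination a score (here, hypothetically, closer is better)."""
    scores = {}
    for i, c in enumerate(centroids):
        for j, d in enumerate(centroids):
            if j != i and dist(c, d) <= radius:
                scores[(i, j)] = 1.0 / (1.0 + dist(c, d))
    return scores

def best_global_set(scores, k):
    """Pick the global set of k local combinations with the optimum
    (here: maximum) summed local score, under the assumed constraint
    that each superpixel joins at most one combination."""
    best, best_score = None, float("-inf")
    for subset in combinations(scores, k):
        used = [s for pair in subset for s in pair]
        if len(used) != len(set(used)):
            continue  # superpixel reused; not a valid global set
        total = sum(scores[p] for p in subset)
        if total > best_score:
            best, best_score = subset, total
    return best, best_score
```

A real implementation would replace the exhaustive enumeration with a combinatorial optimizer, since the number of global sets grows quickly with the number of superpixels.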

Product recognition apparatus and control method for product recognition apparatus

According to one embodiment, an article recognition apparatus includes an image acquisition unit, a recognition unit, a region detection unit, a storage unit, and a determination unit. The recognition unit recognizes each of the articles. The region detection unit determines article region information. The storage unit stores article information including a reference value for the article region information. The determination unit determines that an unrecognized article exists if the article region information of each article recognized by the recognition unit does not match its stored reference value.
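The determination step can be sketched as a simple comparison of detected region values against stored references. The use of region area as the region information, and the tolerance threshold, are assumptions for illustration only:

```python
def unrecognized_article_exists(recognized, reference, tolerance=0.1):
    """Return True if any recognized article's detected region value
    deviates from its stored reference value by more than the given
    tolerance fraction, suggesting an unrecognized article (e.g. an
    item hidden behind or stacked on a recognized one).

    recognized: {article_id: detected_region_value}
    reference:  {article_id: stored_reference_value}
    """
    for article_id, detected in recognized.items():
        expected = reference[article_id]
        if abs(detected - expected) / expected > tolerance:
            return True
    return False
```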

Classifying a video stream using a self-attention-based machine-learning model

In one embodiment, a method includes accessing a stream of F video frames, where each of the F video frames includes N patches that are non-overlapping, generating an initial embedding vector for each of the N×F patches in the F video frames, generating a classification embedding by processing the generated N×F initial embedding vectors using a self-attention-based machine-learning model that computes a temporal attention and a spatial attention for each of the N×F patches, and determining a class of the stream of video frames based on the generated classification embedding.
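The divided temporal/spatial attention over the N · F patch embeddings can be sketched with NumPy. Single-head attention, mean pooling for the classification embedding, and the untrained random classifier head are all simplifications assumed for illustration; a real model would use learned multi-head projections and a trained head.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Single-head scaled dot-product self-attention."""
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def classify_stream(patches, n_classes, rng):
    """patches: (F, N, D) array of initial patch embeddings.
    Temporal attention attends over the F axis for each patch
    position; spatial attention attends over the N axis for each
    frame. The pooled result feeds a hypothetical linear head."""
    F, N, D = patches.shape
    # Temporal attention: N sequences of length F.
    t = patches.transpose(1, 0, 2)           # (N, F, D)
    t = attend(t, t, t).transpose(1, 0, 2)   # back to (F, N, D)
    # Spatial attention: F sequences of length N.
    s = attend(t, t, t)                      # (F, N, D)
    pooled = s.mean(axis=(0, 1))             # classification embedding
    w = rng.standard_normal((D, n_classes))  # untrained head (illustration)
    return int(np.argmax(pooled @ w))
```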

Object recognition method and apparatus, electronic device, and readable storage medium

An object recognition method is provided. The method includes: detecting an occlusion region of an object in an image, to obtain a binary image; obtaining occlusion binary image blocks; querying a mapping relationship between occlusion binary image blocks and binary masks included in a binary mask dictionary, to obtain binary masks corresponding to the occlusion binary image blocks; synthesizing the binary masks obtained for the occlusion binary image blocks, to obtain a binary mask corresponding to the binary image; and determining a matching relationship between the image and a prestored object image based on the binary mask corresponding to the binary image, a feature of the prestored object image, and a feature of the to-be-recognized image.
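The block-wise dictionary lookup and mask synthesis can be sketched as follows. The block representation (tuples of 0/1 rows) and the dictionary contents are assumptions; the patent's dictionary would map occlusion patterns to learned masks.

```python
def synthesize_mask(occlusion_image, block_size, mask_dict):
    """Split the binary occlusion image (list of rows of 0/1) into
    block_size x block_size blocks, map each block through the
    binary-mask dictionary, and stitch the returned masks into one
    binary mask the size of the input image. Image dimensions are
    assumed to be multiples of block_size."""
    h, w = len(occlusion_image), len(occlusion_image[0])
    mask = [[0] * w for _ in range(h)]
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            block = tuple(tuple(occlusion_image[by + dy][bx + dx]
                                for dx in range(block_size))
                          for dy in range(block_size))
            replacement = mask_dict[block]   # dictionary lookup
            for dy in range(block_size):
                for dx in range(block_size):
                    mask[by + dy][bx + dx] = replacement[dy][dx]
    return mask
```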

Knowledge-based object localization in images for hardware assurance

Embodiments of the present disclosure provide methods, apparatus, systems, and computer program products for using an image of an integrated circuit (IC) including a plurality of cells to locate one or more target cells within the IC. Accordingly, in various embodiments, a footprint for each cell of the plurality of cells is encoded to transform the image of the IC into a two-dimensional string matrix. A string search algorithm is then applied on each encoded dopant region found in the two-dimensional string matrix using an encoded target layout cell to identify one or more candidate regions of interest within the image. Finally, a mask window is slid over each candidate region of interest while performing matching using match criteria to identify any target cells in the one or more target cells that are located within the candidate region of interest.
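The encoding and string-search stages can be sketched as below. The one-character-per-cell encoding and the plain substring search are simplifications assumed for illustration; the cell names are hypothetical.

```python
def encode_ic_image(cell_grid, encoding):
    """Encode each cell footprint as a character, turning the IC
    image into a list of row strings (the 2-D string matrix)."""
    return ["".join(encoding[cell] for cell in row) for row in cell_grid]

def find_candidates(string_matrix, target):
    """Run a substring search over each row of the string matrix for
    the encoded target layout cell, returning (row, column) positions
    of candidate regions of interest."""
    hits = []
    for y, row in enumerate(string_matrix):
        x = row.find(target)
        while x != -1:
            hits.append((y, x))
            x = row.find(target, x + 1)
    return hits
```

The candidate positions returned here would then seed the sliding mask-window matching described in the abstract.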

Aircraft door camera system for wing monitoring

A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that a leading edge of a wing of the aircraft is within the field of view of the camera. A display device is disposed within an interior of the aircraft. A processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera to predict a likelihood of foreign object collision with the leading edge of the wing, or to detect damage or deformation to the leading edge.

Automatic orientation correction for captured images
20240312173 · 2024-09-19

In some implementations, a device may receive an image of a document, the image depicting a reference feature associated with the document, the reference feature including at least one of: a face of a person, a machine-readable code, or a text field. The device may identify a rotational angle of the reference feature as depicted in the image based on comparing the reference feature as depicted in the image to one or more orientation parameters of the reference feature associated with a display orientation associated with the document. The device may rotate the image of the document by an angle to obtain an orientated image of the document, the angle being based on the rotational angle of the reference feature as depicted in the image. The device may provide the orientated image of the document for display.
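The correction step can be sketched for the special case of 90-degree rotations. Restricting the detected feature angle to multiples of 90 degrees, and representing the image as a list of rows, are assumptions for illustration; the described device would handle arbitrary angles.

```python
def rotate_90_cw(image):
    """Rotate a row-major image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def correct_orientation(image, feature_angle):
    """Rotate the captured image so the reference feature returns to
    its expected display orientation. feature_angle is the detected
    clockwise rotation of the feature in degrees, assumed here to be
    a multiple of 90."""
    turns = (-feature_angle // 90) % 4   # number of CW turns to undo it
    for _ in range(turns):
        image = rotate_90_cw(image)
    return image
```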

Image correction method, terminal device and storage medium

An image correction method, a terminal device and a non-transitory computer readable storage medium are provided. The method includes: extracting human face attributes of an image; acquiring, from target regions, a first region having a human face correction attribute; acquiring, from the target regions, a second region having a human face protection attribute; and performing image correction on the human face in the first region, and performing pixel compensation, according to background pixels of the image, on a blank region generated by the image correction in the first region.
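The correction-then-compensation flow can be sketched in one dimension. The inward pixel shift standing in for the face correction, and the single representative background value used to fill blanks, are simplifications assumed for illustration:

```python
def slim_face(row, region, shift):
    """Toy 'face correction' on one image row: pixels inside the
    correction region [lo, hi) are shifted inward by `shift`,
    leaving blank pixels (None) at the region's outer edge."""
    lo, hi = region
    out = list(row)
    for x in range(lo, hi - shift):
        out[x] = row[x + shift]
    for x in range(hi - shift, hi):
        out[x] = None                # blank region left by the warp
    return out

def compensate(row, background):
    """Pixel compensation: fill each blank pixel from the image's
    background (here a single representative background value)."""
    return [background if px is None else px for px in row]
```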

Method and apparatus for detecting body

Embodiments of the present application disclose a method and apparatus for detecting a body. A particular embodiment of the method comprises: acquiring a set of candidate body image regions in a target image; for each candidate body image region in the set: acquiring position information and confidences of candidate body key points in the candidate body image region; determining the candidate body key points within a body contour according to body contour information in the candidate body image region and the acquired position information; and determining a confidence score of the candidate body image region according to a sum of the confidences of the candidate body key points within the body contour; and determining a body image region from the set of candidate body image regions according to their confidence scores.
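The scoring and selection steps can be sketched as follows. Approximating the body contour by an axis-aligned bounding box is an assumption for illustration; the described method uses actual contour information.

```python
def point_in_box(point, box):
    """box = (x0, y0, x1, y1); inclusive containment test."""
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def score_region(keypoints, contour_box):
    """Sum the confidences of the candidate key points that fall
    inside the body contour (approximated here by a bounding box).
    keypoints are ((x, y), confidence) pairs."""
    return sum(conf for pt, conf in keypoints if point_in_box(pt, contour_box))

def best_region(candidates):
    """candidates: {region_id: (keypoints, contour_box)}; return the
    candidate region with the highest confidence score."""
    return max(candidates, key=lambda r: score_region(*candidates[r]))
```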

User interface to select field of view of a camera in a smart glass

A wearable device for use in immersive reality applications is provided. The wearable device has a frame including an eyepiece to provide a forward-image to a user, a first forward-looking camera mounted on the frame, the first forward-looking camera having a field of view within the forward-image, a sensor configured to receive a command from the user, the command indicative of a region of interest within the field of view, and an interface device to indicate to the user that the field of view of the first forward-looking camera is aligned with the region of interest. Methods of use of the device, and a memory storing instructions and a processor to execute the instructions to cause the device to perform the methods, are also provided.