G06V10/00

ESCAPE DETECTION AND MITIGATION FOR AQUACULTURE
20230000061 · 2023-01-05

Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for escape detection and mitigation for aquaculture. In some implementations, a method includes obtaining one or more images that depict one or more fish within a population of fish that are located within an enclosure; providing, to one or more detection models configured to classify fish that are depicted within the images as likely being a member or as likely not being a member of a type of fish, the one or more images; generating, as a result of providing the one or more images to the one or more detection models, a value that reflects a quantity of fish that are depicted in the images that are likely a member of the type of fish; and detecting a condition based at least on the value.
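The counting-and-condition step described above can be sketched as follows. This is an illustrative assumption, not the patented implementation: the model is reduced to per-fish membership scores, and the threshold, expected-population parameter, and function names are all hypothetical.

```python
# Hypothetical sketch: count fish classified as likely members of a target
# type, and detect a condition (e.g. a possible escape) from that value.
def detect_escape_condition(scores, threshold=0.5, min_expected=10):
    """scores: per-detected-fish membership scores from a detection model.
    Returns (count of likely members, whether the condition is detected)."""
    count = sum(1 for s in scores if s >= threshold)  # quantity of likely members
    condition = count < min_expected                  # value-based condition check
    return count, condition

count, alert = detect_escape_condition([0.9, 0.8, 0.2, 0.95], min_expected=4)
```

In this toy run, three of four detections score as likely members, which is below the expected four, so the condition is flagged.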

Systems and methods for improving visual search using summarization feature
11715294 · 2023-08-01

Systems that search databases of videos or images to identify similar products in a given video or image of a product are disclosed. The content of the given video is represented by a feature vector used to measure the given video's similarity to either a video or an image. When the system is deployed to recognize particular fashion items in videos, some such videos are taken in uncontrolled settings, and as a result, may have low resolution, poor contrast, minimal focus, motion blur, or low lighting. By recognizing and removing poor-quality video frames from the image recognition pipeline, associating products across video frames to form tracklets of each product, and enriching the feature representation of each item for the best retrieval results by fusing information from multiple video frames depicting the item, the system addresses the aforementioned shortcomings.
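The frame-filtering and feature-fusion idea can be sketched as below. This is a simplified assumption, not the patent's method: quality scores, averaging as the fusion operation, and cosine similarity for retrieval are all stand-ins.

```python
# Hypothetical sketch: fuse per-frame feature vectors of a tracklet after
# dropping poor-quality frames, then compare by cosine similarity.
import math

def fuse_tracklet(frames, quality_threshold=0.5):
    """frames: list of (quality_score, feature_vector) for one tracklet.
    Averages the vectors of frames whose quality passes the threshold."""
    kept = [f for q, f in frames if q >= quality_threshold]
    dim = len(kept[0])
    return [sum(f[i] for f in kept) / len(kept) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

The fused vector can then be matched against catalog image features with the same similarity measure, which is what lets one representation serve both video-to-video and video-to-image retrieval.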

Information processing apparatus, information processing method, and storage medium
11716448 · 2023-08-01

An information processing apparatus includes a comparison unit configured to compare an image capturing condition for a collation target object with an image capturing condition for each of a plurality of image capturing apparatuses, a selection unit configured to select an image capturing apparatus to be collated from among the plurality of image capturing apparatuses based on a result of the comparison by the comparison unit, and a collation unit configured to collate information about an object captured by the image capturing apparatus to be collated with information about the collation target object.
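The comparison-and-selection units might be sketched as a nearest-condition search. This is an assumption for illustration only: the condition keys, the absolute-difference distance, and the function name are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: select the camera whose image capturing conditions
# best match those of the collation target, then collate against it.
def select_camera(target_cond, cameras):
    """target_cond: dict of condition name -> numeric value for the target.
    cameras: dict of camera_id -> condition dict with the same keys.
    Returns the camera id with the smallest total condition difference."""
    def distance(cond):
        return sum(abs(cond[k] - target_cond[k]) for k in target_cond)
    return min(cameras, key=lambda cid: distance(cameras[cid]))
```

Collation would then run only against objects captured by the selected camera, which is the point of comparing conditions first: matching is attempted under the most similar viewing conditions.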

Image coding apparatus for coding tile boundaries

An image decoding apparatus obtains pieces of coded data that are included in a bitstream and generated by coding tiles. Tile boundary independence information is further obtained from the bitstream, with the tile boundary independence information indicating whether each of the boundaries between the tiles is one of a first boundary or a second boundary. The pieces of coded data are decoded to generate image data of the tiles. Image data of a first tile is generated by decoding a first code string included in first coded data with reference to decoding information of a decoded tile when the tile boundary independence information indicates the first boundary, and by decoding the first code string without referring to the decoding information of the decoded tile when the tile boundary independence information indicates the second boundary.
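The per-boundary branch can be sketched as below. This is a toy assumption, not the codec itself: tiles are opaque strings, the decoder is an injected callable, and only the reference-or-not control flow from the abstract is modeled.

```python
# Hypothetical sketch: decode each tile with or without reference to a
# previously decoded tile, depending on per-boundary independence flags.
def decode_tiles(coded_tiles, boundary_is_dependent, decode):
    """coded_tiles: list of coded strings in decoding order.
    boundary_is_dependent[i]: True when the boundary before tile i is a
    'first boundary' (decode with reference to the previous decoded tile);
    False for a 'second boundary' (decode independently).
    decode(code, ref): application-supplied decoder; ref may be None."""
    decoded = []
    for i, code in enumerate(coded_tiles):
        ref = decoded[i - 1] if i > 0 and boundary_is_dependent[i] else None
        decoded.append(decode(code, ref))
    return decoded
```

Independent ("second") boundaries are what make tiles decodable in parallel; dependent ("first") boundaries trade that for better prediction across the boundary.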

Systems and methods for detecting patterns within video content

A method of reducing false positives and identifying relevant true alerts in a video management system includes analyzing images to look for patterns indicating changes between subsequent images. When a pattern indicating changes between subsequent images is found, the video management system solicits from a user an indication of whether the pattern belongs to one of two or more predefined categories. The patterns indicating changes between subsequent images are saved for subsequent use. Subsequent images received from the video camera are analyzed to look for patterns indicating changes between subsequent images. When a pattern indicating changes between subsequent images is detected by the video management system, the video management system compares the pattern indicating changes between subsequent images to those previously categorized into one of the two or more predefined categories. Based on the comparison, the video management system may provide an alert to the user.
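The compare-against-categorized-patterns step could look like the sketch below. This is an illustrative assumption: representing a change pattern as a set of changed regions and matching by Jaccard overlap are stand-ins the patent does not specify.

```python
# Hypothetical sketch: compare a newly detected change pattern against
# user-categorized patterns and alert only for alert-worthy categories.
def classify_pattern(pattern, labeled_patterns, alert_categories):
    """pattern: set of changed-region ids detected between subsequent images.
    labeled_patterns: list of (stored pattern set, user-assigned category).
    Returns (best matching category or None, whether to alert the user)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    best_cat, best_sim = None, 0.0
    for stored, cat in labeled_patterns:
        sim = jaccard(pattern, stored)
        if sim > best_sim:
            best_cat, best_sim = cat, sim
    return best_cat, best_cat in alert_categories
```

Patterns the user has previously labeled as benign (e.g. headlights sweeping a wall) then suppress alerts, which is how the method reduces false positives while keeping the true ones.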

FEDERATED LEARNING FOR CONNECTED CAMERA APPLICATIONS IN VEHICLES

Vehicles and related systems and methods are provided for classifying detected objects in a location-dependent manner using localized models in a federated learning environment. A method involves obtaining sensor data for a detected object external to the vehicle from a sensor of the vehicle, obtaining location data associated with the detected object, obtaining a local classification model associated with an object type, assigning the object type to the detected object based on an output of the local classification model as a function of the sensor data and the location data, and initiating an action at the vehicle responsive to assigning the object type to the detected object.
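The localized-model lookup and action dispatch might be sketched as below. Everything here is an assumption for illustration: the coarse grid-cell region key, the model as a plain callable, and the action map are hypothetical, not the patent's design.

```python
# Hypothetical sketch: pick a location-specific classification model in a
# federated setup, classify, and dispatch an action at the vehicle.
def classify_with_local_model(sensor_data, location, local_models, actions):
    """local_models: dict of region key -> model fn(sensor_data) -> type.
    The region key here is just a coarse (lat, lon) grid cell (an assumption).
    Returns (assigned object type, action to initiate at the vehicle)."""
    region = (round(location[0]), round(location[1]))  # coarse localization
    model = local_models[region]
    object_type = model(sensor_data)
    return object_type, actions.get(object_type, "no-op")
```

In a federated setting, each region's model would be trained on data from vehicles in that region and aggregated without centralizing raw sensor data; only the per-region model weights move.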

Method of controlling encoding of display data
11699212 · 2023-07-11

Systems and methods of encoding display data include performing a part of a first predetermined transform algorithm on at least a first part of a first frame of display data, and analyzing a light level to determine whether a different transform algorithm would be more suitable for encoding a second part of the first frame of the display data. If it is determined that a different transform algorithm would be more suitable for encoding, the second part of the first frame of the display data is encoded using the different transform algorithm to generate an encoded first frame. If it is determined that a different transform algorithm would not be more suitable for encoding, the second part of the first frame of the display data is encoded using the first predetermined transform algorithm to generate the encoded first frame.
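The transform-switching control flow can be sketched as follows. This is a hedged simplification: using a numeric light level against a fixed threshold, and strings as stand-ins for frame parts and transforms, are assumptions, not the claimed analysis.

```python
# Hypothetical sketch: choose between two transform algorithms for the later
# parts of a frame based on an analyzed light level (threshold is an assumption).
def encode_frame(parts, light_level, transform_a, transform_b, dark_threshold=0.3):
    """parts: ordered parts of one frame of display data.
    Always encodes the first part with transform_a; for the remaining parts,
    switches to transform_b when the light level suggests it is more suitable."""
    encoded = [transform_a(parts[0])]
    better = transform_b if light_level < dark_threshold else transform_a
    encoded.extend(better(p) for p in parts[1:])
    return encoded
```

The point of the scheme is that the suitability decision is made mid-frame, so a single encoded frame can mix transforms chosen per region.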

Machine-learning based gesture recognition using multiple sensors

A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
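The two-sensor inference path might be sketched as below. The sensor names, concatenation as the fusion step, and the action map are illustrative assumptions only; the model is any callable trained on both output types.

```python
# Hypothetical sketch: feed two differently-typed sensor outputs into one
# trained model that predicts a gesture, then dispatch a predetermined action.
def predict_gesture(accel, audio, model, action_map):
    """accel, audio: stand-ins for sensor outputs of two different types.
    model: callable trained on both types; returns a gesture label.
    Returns (predicted gesture, the device action mapped to it, if any)."""
    features = list(accel) + list(audio)  # simple concatenation of both inputs
    gesture = model(features)
    return gesture, action_map.get(gesture)
```

Training on both types jointly is what lets the model resolve gestures that either sensor alone would confuse, e.g. distinguishing a tap from a bump.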

Blood velocity measurement using correlative spectrally encoded flow cytometry

A spectrally encoded flow cytometry (SEFC) technique for imaging blood in the microcirculation. Since the dependency of one of the axes of the image on time prevents effective quantification of essential clinical parameters, the optical path in an SEFC system is split into two parallel imaging lines, followed by data analysis for recovering the flow speed from the multiplexed data. The data analysis may be auto-correlation of a pair of images obtained from a sequence of images of the imaged blood vessel.
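The correlative analysis can be sketched as a discrete cross-correlation over the two imaging lines. This is a didactic assumption: 1-D sample sequences, a brute-force lag search, and the speed formula `separation / (lag × dt)` stand in for the actual image auto-correlation.

```python
# Hypothetical sketch: recover flow speed from two parallel imaging lines by
# cross-correlating their signals and converting the best lag to a velocity.
def flow_speed(line_a, line_b, line_separation, sample_dt):
    """line_a, line_b: intensity samples from the two imaging lines, one
    sample per time step. A cell crossing line_a reappears on line_b after
    some lag; speed = line_separation / (best_lag * sample_dt)."""
    n = len(line_a)
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n):
        corr = sum(line_a[i] * line_b[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return line_separation / (best_lag * sample_dt)
```

Splitting the optical path into two lines is what restores a time axis the single-line SEFC image lacks: the lag between the lines is a pure time measurement, so the speed falls out of the known line separation.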

Recurrent deep neural network system for detecting overlays in images
11551435 · 2023-01-10

In one aspect, an example method includes a processor (1) applying a feature map network to an image to create a feature map comprising a grid of vectors characterizing at least one feature in the image and (2) applying a probability map network to the feature map to create a probability map assigning a probability to the at least one feature in the image, where the assigned probability corresponds to a likelihood that the at least one feature is an overlay. The method further includes the processor determining that the probability exceeds a threshold, and responsive to the processor determining that the probability exceeds the threshold, performing a processing action associated with the at least one feature.
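The thresholding step over the probability map can be sketched as follows. This is a hedged reduction: the two networks collapse to a precomputed feature grid and a probability callable, and the cell-coordinate output format is an assumption.

```python
# Hypothetical sketch: apply a probability map over grid features and select
# the cells whose overlay probability exceeds a threshold for processing.
def detect_overlays(feature_grid, prob_net, threshold=0.8):
    """feature_grid: 2-D grid of feature vectors from a feature map network.
    prob_net: maps one feature vector to an overlay probability.
    Returns (row, col) cells whose probability exceeds the threshold."""
    hits = []
    for r, row in enumerate(feature_grid):
        for c, vec in enumerate(row):
            if prob_net(vec) > threshold:  # likely an overlay: take action
                hits.append((r, c))
    return hits
```

Each returned cell would then drive the follow-up processing action, e.g. masking or extracting the overlay region of the image.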