G06V10/20

Digital Image Ordering using Object Position and Aesthetics
20230051564 · 2023-02-16

Digital image ordering based on object position and aesthetics is leveraged in a digital medium environment. According to various implementations, an image analysis system is implemented to identify visual objects in digital images and determine aesthetics attributes of the digital images. The digital images can then be arranged in a way that prioritizes digital images that include relevant visual objects and that exhibit optimum visual aesthetics.
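
The prioritization described above can be sketched as a combined ranking score. This is a minimal illustration, not the patent's method: the field names, the relevance measure, and the blending weight are all assumptions.

```python
# Hypothetical sketch: rank images by object relevance blended with a
# precomputed aesthetics score. Weights and fields are assumptions.

def order_images(images, relevant_objects, relevance_weight=0.6):
    """Sort images so those containing relevant objects and exhibiting
    high aesthetics scores come first."""
    def score(img):
        # Fraction of the relevant objects detected in this image.
        detected = set(img["objects"]) & set(relevant_objects)
        relevance = len(detected) / max(len(relevant_objects), 1)
        # Blend relevance with an aesthetics score in [0, 1].
        return relevance_weight * relevance + (1 - relevance_weight) * img["aesthetics"]
    return sorted(images, key=score, reverse=True)

images = [
    {"name": "a.jpg", "objects": ["dog"], "aesthetics": 0.9},
    {"name": "b.jpg", "objects": ["dog", "ball"], "aesthetics": 0.4},
    {"name": "c.jpg", "objects": ["tree"], "aesthetics": 0.8},
]
ordered = order_images(images, relevant_objects=["dog", "ball"])
```

With the assumed weight of 0.6, an image containing every relevant object outranks a more aesthetic image containing only some of them.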

METHOD AND SYSTEM FOR AUTOMATIC PRE-RECORDATION VIDEO REDACTION OF OBJECTS
20230046913 · 2023-02-16

A system and a method for automatic video redaction are provided herein. The method may include: receiving an input video comprising a sequence of frames captured by a camera, wherein the input video includes live video obtained directly from the camera, wherein recordation of the video directly from the camera is disabled; performing visual analysis of the input video to detect portions of the frames of the input video in which one of a plurality of predefined objects or a descriptor thereof is detected; generating a redacted input video by replacing the portions of the frames with new portions of another visual content; and recording the redacted input video on a data storage device, wherein the generating of the redacted input video is carried out by a computer processor after the input video is captured by the camera and before the recording of the redacted input video on the data storage device.
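
The key ordering constraint, redaction happening between capture and recording, can be sketched as a simple pipeline. The detector and replacement content below are toy stand-ins, not the patent's visual analysis.

```python
# Hypothetical sketch of pre-recordation redaction: frames are redacted
# between capture and storage, so the raw video is never recorded.

def redact_frame(frame, detect_regions, replacement=0):
    """Replace detected regions (row/col bounding boxes) in a frame."""
    redacted = [row[:] for row in frame]  # copy; the original is discarded
    for (r0, r1, c0, c1) in detect_regions(frame):
        for r in range(r0, r1):
            for c in range(c0, c1):
                redacted[r][c] = replacement
    return redacted

def record(stream, detect_regions, storage):
    # Every frame passes through redaction before it reaches storage.
    for frame in stream:
        storage.append(redact_frame(frame, detect_regions))

# Toy detector: flag any pixel with value 1 as a region to redact.
def detect_ones(frame):
    return [(r, r + 1, c, c + 1)
            for r, row in enumerate(frame)
            for c, v in enumerate(row) if v == 1]

storage = []
record([[[0, 1], [1, 0]]], detect_ones, storage)
```

Only the redacted frames ever reach `storage`, mirroring the abstract's requirement that direct recordation from the camera is disabled.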

CONSTRUCTION OF ENVIRONMENT VIEWS FROM SELECTIVELY DETERMINED ENVIRONMENT IMAGES
20230051775 · 2023-02-16

A computing system may include a client device and a server. The client device may be configured to access a stream of image frames that depict an environment, determine, from the stream of image frames, environment images that satisfy selection criteria, and transmit the environment images to the server. The server may be configured to receive the environment images from the client device, construct a spatial view of the environment based on position data included with the environment images, and navigate the spatial view, including by receiving a movement direction and progressing from a current environment image depicted for the spatial view to a next environment image based on the movement direction.

SELF-SUPERVISED LEARNING FRAMEWORK TO GENERATE CONTEXT SPECIFIC PRETRAINED MODELS

Systems and methods for self-supervised representation learning as a means to generate context-specific pretrained models include selecting data from a set of available data sets; selecting a pretext task from domain-specific pretext tasks; selecting a target-problem-specific network architecture based on a user selection from available choices or a customized model per user preference; and generating a pretrained model for the selected network architecture using the selected data obtained from the set of available data sets and a pretext task as obtained from the domain-specific pretext tasks.
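
The selection pipeline can be expressed as a small configuration-driven sketch. The registries and names below are invented; the abstract only specifies that data, a pretext task, and an architecture are chosen and combined into a pretrained model.

```python
# Hypothetical sketch: registries of data sets, domain-specific pretext
# tasks, and architectures; all entries here are illustrative stand-ins.

DATASETS = {"xray": ["img_a", "img_b"], "retail": ["img_c"]}
PRETEXT_TASKS = {"rotation": lambda x: f"rot({x})",
                 "jigsaw": lambda x: f"jig({x})"}
ARCHITECTURES = ["resnet50", "vit_small", "custom"]

def build_pretrained(dataset, pretext, architecture):
    """Combine the three user selections into a pretrained model stub."""
    if architecture not in ARCHITECTURES:
        raise ValueError("unknown architecture")
    task = PRETEXT_TASKS[pretext]
    # "Training" here just records which pretext targets the model saw.
    seen = [task(x) for x in DATASETS[dataset]]
    return {"arch": architecture, "pretext": pretext, "trained_on": seen}

model = build_pretrained("xray", "rotation", "resnet50")
```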

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

There is provided with an information processing apparatus. An approximate discrimination unit discriminates an approximate type of an object from a first captured image obtained by capturing the object to which identification information is added. A setting unit sets, based on the approximate type of the object, an image capturing condition for capturing an image to obtain the identification information. A detail discrimination unit identifies the identification information from a second captured image obtained by capturing the object under the image capturing condition and discriminates a detailed type of the object based on a result of the identification.
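
The two-stage flow, coarse discrimination, condition setting, then detailed discrimination, can be sketched as follows. The condition table and the discriminator stubs are illustrative assumptions, standing in for the patent's units.

```python
# Hypothetical two-stage discrimination sketch.

# Approximate type -> capture condition suited to reading the ID info.
CAPTURE_CONDITIONS = {
    "metal_part": {"exposure": "short", "zoom": 2.0},
    "cardboard_box": {"exposure": "long", "zoom": 1.0},
}

def discriminate(first_image, approximate_cls, capture, detail_cls):
    # 1. Coarse discrimination from the first captured image.
    rough = approximate_cls(first_image)
    # 2. Set an image-capturing condition based on the approximate type.
    condition = CAPTURE_CONDITIONS[rough]
    # 3. Recapture under that condition and read the identification info.
    second_image = capture(condition)
    return detail_cls(second_image)

result = discriminate(
    "img1",
    approximate_cls=lambda img: "metal_part",
    capture=lambda cond: f"img2@zoom{cond['zoom']}",
    detail_cls=lambda img: "bolt-M8" if "zoom2.0" in img else "unknown",
)
```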

Adaptive model updates for dynamic and static scenes

In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. In response to determining that the region is static, the computing system may detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. In response to detecting a change in the region, the computing system may update the first 3D model of the region.
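
The cost-saving idea, checking static regions with a coarser model and refreshing the full model only on change, can be sketched with a 1D depth profile. The thresholded mean-difference tests below are simplifying assumptions standing in for the abstract's comparisons.

```python
# Hypothetical sketch: static-region detection and coarse change checks
# on a 1D depth profile; thresholds and downsampling are assumptions.

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def downsample(depths, factor=2):
    # Lower-resolution model: average adjacent samples.
    return [sum(depths[i:i + factor]) / factor
            for i in range(0, len(depths), factor)]

def update_cycle(model, second_depths, third_depths, tol=0.05):
    """If the region proved static in the second period, check only a
    coarse model against new depths; rebuild the full model on change."""
    static = mean_abs_diff(model, second_depths) < tol
    if static:
        coarse_model = downsample(model)
        changed = mean_abs_diff(coarse_model, downsample(third_depths)) >= tol
        if changed:
            model = third_depths[:]  # refresh the high-resolution model
    return model, static

model = [1.0, 1.0, 2.0, 2.0]
model, static = update_cycle(model,
                             second_depths=[1.0, 1.01, 2.0, 2.0],
                             third_depths=[1.5, 1.5, 2.5, 2.5])
```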

Computer-implemented interfaces for identifying and revealing selected objects from video

A computer-implemented visual interface for identifying and revealing objects from video-based media provides visual cues to enable users to interact with video-based media. Objects in videos are inferred and identified based upon automatic interpretations of the video and/or audio that is associated with the video. The automatic interpretations may be performed by a computer-implemented neural network. The computer-implemented visual interface is integrated with the video to enable users to interact with the identified objects. User interactions with the visual interface may be through either touch or non-touch means. Information is delivered to users that is based upon the identified objects, including in augmented or virtual reality-based form, responsive to user interactions with the computer-implemented visual interface.

In-phase (I) and quadrature (Q) imbalance estimation in a radar system

A radar system is provided that includes transmission signal generation circuitry, a transmit channel coupled to the transmission generation circuitry to receive a continuous wave test signal, the transmit channel configurable to output a test signal based on the continuous wave signal in which a phase angle of the test signal is changed in discrete steps within a phase angle range, a receive channel coupled to the transmit channel via a feedback loop to receive the test signal, the receive channel including an in-phase (I) channel and a quadrature (Q) channel, a statistics collection module configured to collect energy measurements of the test signal output by the I channel and the test signal output by the Q channel at each phase angle, and a processor configured to estimate phase and gain imbalance of the I channel and the Q channel based on the collected energy measurements.
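
A textbook version of the estimation step can be simulated in a few lines: sweep the test tone's phase through a full circle, collect channel statistics, and recover the Q channel's gain and phase errors. This is a simplified estimator under stated assumptions (ideal I channel, energy sums for gain, a cross term for phase), not the patent's circuit.

```python
import math

# Hedged sketch: simulate an I/Q receive pair with gain and phase
# imbalance on Q, then estimate both from swept-phase statistics.

def iq_samples(gain_err, phase_err, steps=64):
    """Test tone stepped through a full circle; Q carries the imbalance."""
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        i = math.cos(theta)
        q = gain_err * math.sin(theta + phase_err)
        yield i, q

def estimate_imbalance(samples):
    e_i = e_q = cross = 0.0
    n = 0
    for i, q in samples:
        e_i += i * i          # I-channel energy
        e_q += q * q          # Q-channel energy
        cross += i * q        # I/Q correlation (used for the phase term)
        n += 1
    gain = math.sqrt(e_q / e_i)           # energy ratio -> gain imbalance
    phase = math.asin(cross / (gain * n / 2))
    return gain, phase

gain, phase = estimate_imbalance(iq_samples(gain_err=1.1, phase_err=0.05))
```

Because the discrete sums of sin² and cos² over a uniform full-circle sweep are exactly N/2, the estimator recovers the injected imbalances up to floating-point error.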

Plant group identification

A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
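
The pixel-labeling, treatment, and mapping steps can be sketched with a stub model. The model, the treatment rule, and the map format below are illustrative assumptions, not the patent's machine-learned system.

```python
# Hypothetical sketch: label pixels with plant groups, build a location
# map, and pick treatment targets. The "model" is a trivial stub.

def identify_and_treat(image, model, treat_groups):
    """Label each pixel, collect per-group pixel locations, and decide
    which locations the treatment mechanism should target."""
    plant_map = {}                       # plant group -> pixel locations
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            group = model(pixel)
            if group is not None:
                plant_map.setdefault(group, []).append((r, c))
    targets = [loc for grp in treat_groups for loc in plant_map.get(grp, [])]
    return plant_map, targets

# Stub model: the pixel value encodes the plant group directly.
model = {0: None, 1: "crop", 2: "weed"}.get
image = [[0, 1], [2, 2]]
plant_map, targets = identify_and_treat(image, model, treat_groups=["weed"])
```

Accumulating `plant_map` across many images would yield the field-level plant identification map the abstract describes.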

Information processing device and recognition support method
11580720 · 2023-02-14

In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone.
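
One way to picture the environment-acquisition step: a marker of known appearance is placed in the zone, and the difference between its known and captured appearance describes how any target there will be reproduced. The brightness-ratio measure below is an illustrative assumption, not the patent's definition of recognition environment information.

```python
# Hypothetical sketch: derive recognition environment info from the
# image region where a known-brightness marker was detected.

def environment_info(image, marker_region, true_brightness=200):
    """Compare the marker's captured brightness with its known value to
    characterize how targets in this zone are reproduced."""
    (r0, r1, c0, c1) = marker_region
    pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    observed = sum(pixels) / len(pixels)
    return {"brightness_factor": observed / true_brightness}

# A dim capture of a marker known to have brightness 200.
image = [[100, 100], [100, 100]]
info = environment_info(image, (0, 2, 0, 2))
```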