G06V10/753

Similarity determining method and device, network training method and device, search method and device, and electronic device and storage medium
12229220 · 2025-02-18

A similarity determination method and device, a network training method and device, a search method and device, an electronic device, and a storage medium are provided. The data similarity determination method includes: acquiring first data of a first object, the first data including first sub-data of a first modality; mapping the first sub-data to a first semantic representation in a semantic comparison space, where the semantic comparison space enables computation of a similarity between a semantic representation obtained by mapping data of the first modality into the space and a semantic representation obtained by mapping data of a second modality into the space; acquiring second data of a second object, the second data including second sub-data of the second modality; mapping the second sub-data to a second semantic representation in the semantic comparison space; and calculating a similarity between the first data and the second data based on at least the first semantic representation and the second semantic representation.
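As a sketch of the idea, the shared semantic comparison space can be illustrated with two hypothetical learned projections (`W_image`, `W_text` and the 512/300/128 dimensions are illustrative, not from the patent) that map features of different modalities into one space where a similarity is well defined:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity is well defined because both inputs live in the same
    # semantic comparison space, regardless of their original modality.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Hypothetical learned projections (in practice these would be trained
# encoder networks): 512-d image features and 300-d text features are
# both mapped into a shared 128-d semantic comparison space.
W_image = rng.normal(size=(128, 512))
W_text = rng.normal(size=(128, 300))

image_feature = rng.normal(size=512)  # first data, first modality
text_feature = rng.normal(size=300)   # second data, second modality

z1 = W_image @ image_feature  # first semantic representation
z2 = W_text @ text_feature    # second semantic representation
similarity = cosine_similarity(z1, z2)
```

The point of the shared space is that `z1` and `z2` are directly comparable even though the raw features have different dimensionalities and modalities.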

Methods for Detecting Vehicle Following Distance

Systems, methods, models, and model training data for determining vehicle positioning, and in particular for identifying tailgating, are discussed. Simulated training images showing vehicles following other vehicles, under various conditions, are generated using a virtual environment. Models are trained to determine the following distance between two vehicles. Trained models are used to detect tailgating based on the determined distance between two vehicles. Tailgating detection results are output to warn a driver or to provide a report on driver behavior. Following distance over time is determined, and simplified following-distance data is generated for use at a management device.
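The detection-and-reporting step might look like the following sketch, where the distance values, the ego speed, the two-second rule, and the window size are all illustrative assumptions rather than details from the patent:

```python
import numpy as np

# Hypothetical per-frame following distances (metres) produced by a
# trained distance model; values, speed, and thresholds are illustrative.
distances = np.array([30.0, 25.0, 18.0, 9.0, 7.5, 8.0, 14.0, 22.0, 28.0, 31.0])
speed_mps = 8.0                # ego vehicle speed
threshold = 2.0 * speed_mps    # flag gaps under a 2-second rule of thumb

tailgating = distances < threshold  # per-frame tailgating detection
report = {
    "min_distance_m": float(distances.min()),
    "tailgating_fraction": float(tailgating.mean()),
}

# Simplified following-distance data for the management device:
# mean distance over windows of 5 frames instead of the raw series.
simplified = distances.reshape(-1, 5).mean(axis=1)
```

The `report` dictionary stands in for the driver-behavior report, and `simplified` for the reduced time series sent to the management device.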

Method and apparatus for extracting a fingerprint of video having a plurality of frames

A method for extracting a fingerprint of a video having a plurality of frames includes obtaining a plurality of pixel value matrices from each of the plurality of frames, calculating maximum values of average pixel values in each axis of the plurality of pixel value matrices for each of the plurality of frames, and calculating the fingerprint of the video based on a temporal correlation of the maximum values calculated for the plurality of frames.
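One plausible reading of this fingerprint computation, sketched below with the grid size, the sign-of-difference temporal encoding, and all names being assumptions rather than details from the patent:

```python
import numpy as np

def frame_signature(frame, grid=4):
    # Split the frame into a grid of pixel-value matrices (blocks), and
    # for each block take the maximum of the average pixel values along
    # each axis (max over column means, and max over row means).
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    sig = []
    for i in range(grid):
        for j in range(grid):
            block = frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            sig.append(block.mean(axis=0).max())  # max of column averages
            sig.append(block.mean(axis=1).max())  # max of row averages
    return np.array(sig)

def video_fingerprint(frames):
    sigs = np.stack([frame_signature(f) for f in frames])
    # Temporal correlation of the per-frame maxima: the sign of each
    # frame-to-frame change gives a compact binary fingerprint.
    return (np.diff(sigs, axis=0) > 0).astype(np.uint8)

rng = np.random.default_rng(1)
frames = rng.integers(0, 256, size=(8, 64, 64)).astype(float)
fp = video_fingerprint(frames)
```

Sign-of-difference codes of this kind are robust to global brightness and scaling changes, which is the usual motivation for correlating maxima over time rather than storing raw pixel statistics.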

Object tracking integration method and integrating apparatus
12333742 · 2025-06-17

An object tracking integration method and an integrating apparatus are provided. In the method, one or more first images and one or more second images are obtained. The first image is captured from a first capturing apparatus, and the second image is captured from a second capturing apparatus. One or more target objects in the first image and in the second image are detected. A detection result of the target object in the first image and a detection result of the target object in the second image are matched. The detection result of the target object is updated according to a matching result between the detection results of the first image and the second image. Accordingly, the accuracy of the association and the monitoring range may be improved.
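A minimal sketch of the cross-camera matching step, assuming detections from both capturing apparatuses have already been projected into a shared coordinate frame (the greedy centroid matching and the `max_dist` threshold are illustrative; real systems typically also use appearance features):

```python
import numpy as np

def match_detections(dets_a, dets_b, max_dist=1.0):
    # Greedily associate each detection from the first camera with the
    # nearest unused detection from the second camera, accepting the
    # pair only if the centroid distance is within max_dist.
    pairs, used_b = [], set()
    for i, a in enumerate(dets_a):
        dists = [np.linalg.norm(a - b) if j not in used_b else np.inf
                 for j, b in enumerate(dets_b)]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))
            used_b.add(j)
    return pairs

# Detection centroids from two cameras in a shared ground-plane frame.
cam1 = np.array([[0.0, 0.0], [5.0, 5.0]])
cam2 = np.array([[5.1, 4.9], [0.2, -0.1]])
matches = match_detections(cam1, cam2)
```

Each matched pair can then be merged into a single updated detection result, which is what widens the effective monitoring range.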

Method and system or device for recognizing an object in an electronic image

A method is provided for machine vision and image analysis for recognizing an object in an electronic image captured with the aid of an optical sensor. A reference image of the object to be recognized is trained during a learning phase and compared with the image of the scene during a working phase. The pattern comparison between the object and the scene takes place with the aid of a modified census transform, using a determination of maxima which must exceed a threshold value for a positive statement on a degree of correspondence.
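The modified census transform itself is a standard building block; a small sketch (the window size, the matching score, and all names are illustrative, not the patent's specific modification) might be:

```python
import numpy as np

def modified_census(patch):
    # Modified census transform: compare every pixel of the 3x3 window
    # against the window mean (rather than the centre pixel), giving a
    # 9-bit binary descriptor.
    bits = (patch.ravel() > patch.mean()).astype(np.uint16)
    return int(bits @ (1 << np.arange(9)))

def census_image(img):
    # Dense transform over all 3x3 windows of a grayscale image.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = modified_census(img[y:y + 3, x:x + 3])
    return out

# Pattern comparison: fraction of positions with identical census codes;
# a positive statement requires this score to exceed a threshold.
rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(8, 8)).astype(float)
score = float((census_image(reference) == census_image(reference)).mean())
is_match = score > 0.9
```

Comparing against the window mean instead of the centre pixel makes the descriptor usable even when the centre value is noisy, which is the usual motivation for the "modified" variant.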

INFORMATION PROCESSING DEVICE, AND DETECTION METHOD

An information processing device includes an acquisition unit and a detection unit. The acquisition unit of the information processing device acquires an image of a stranded wire. The detection unit of the information processing device detects, based on the image of the stranded wire, a length at which images of the stranded wire show a same pattern.

Dynamic adaptation of images for projection, and/or of projection parameters, based on user(s) in environment
12477089 · 2025-11-18

Implementations relate to dynamic adaptation of images for projection by a projector, based on one or more properties of user(s) that are in an environment with the projector. The projector can be associated with an automated assistant client of a client device. In some versions of those implementations, a pose of a user in the environment is determined and, based on the pose, a base image for projecting onto a surface is warped to generate a transformed image. The transformed image, when projected onto a surface and viewed from the pose of the user, mitigates perceived differences relative to the base image. The base image (on which the transformed image is based) can optionally be generated in dependence on a distance of the user. Some implementations additionally or alternatively relate to dynamic adaptation of projection parameters (e.g., a location for projection, a size of projection) based on one or more properties of user(s) that are in an environment with the projector.
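The pose-dependent warp can be sketched with a standard four-point homography solved by the direct linear transform; the corner coordinates and the choice of DLT are illustrative assumptions, not details from the patent:

```python
import numpy as np

def homography(src, dst):
    # Direct linear transform: solve for the 3x3 matrix H mapping four
    # source points to four destination points. H is defined up to scale,
    # so we take the SVD null vector and normalise by H[2, 2].
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    # Apply H to a 2-D point in homogeneous coordinates.
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

# Corners of the base image, and where they must land so that the
# projection looks undistorted from the user's pose (coordinates are
# illustrative).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 5), (95, 0), (100, 100), (0, 95)]
H = homography(src, dst)
```

Warping every pixel of the base image through `H` produces the transformed image that, viewed from the user's pose, mitigates the perceived distortion.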

Diffusion-Based Network Traffic Generation

An implementation may involve: providing, to an image diffusion model, a prompt that describes characteristics of network traffic; receiving, from the image diffusion model, an image representing the network traffic, wherein the image comprises a matrix of pixel values representing packets of the network traffic in a presence-based format; transforming the pixel values into a trace of the network traffic, wherein the trace encodes at least packet header values of the packets; applying, to the trace, protocol compliance rules that relate to the packet header values; and outputting the trace in a binary format.
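A toy version of the pixel-to-trace transformation, in which the presence-pixel convention, the field layout, and the TTL compliance rule are all invented for illustration:

```python
import struct
import numpy as np

# Hypothetical "presence-based" image: each row is one packet slot; a
# leading presence pixel of 255 marks a real packet, and the remaining
# pixels encode header fields as values in 0..255.
image = np.array([
    [255, 64, 17, 0, 80],   # present, ttl=64, proto=17 (UDP), port hi/lo
    [255, 0, 6, 1, 187],    # present, ttl=0 (non-compliant), proto=6 (TCP)
    [0, 0, 0, 0, 0],        # presence pixel unset: no packet in this row
], dtype=np.uint8)

trace = []
for row in image:
    if row[0] != 255:
        continue  # skip rows that encode no packet
    ttl, proto = int(row[1]), int(row[2])
    port = (int(row[3]) << 8) | int(row[4])
    # Protocol compliance rule (illustrative): TTL must be at least 1.
    ttl = max(ttl, 1)
    trace.append(struct.pack("!BBH", ttl, proto, port))

binary = b"".join(trace)  # trace output in a binary format
```

Real traces would carry full IP/TCP/UDP headers; the point of the sketch is the pipeline shape: pixel matrix → header values → compliance fix-ups → binary output.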

SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM

A signal processing circuit processes event signals generated by an event-based vision sensor (EVS). The signal processing circuit includes a memory configured to store program code and a processor configured to execute operations according to the program code. The operations include detecting at least one line segment or curve formed by a set of positions of the event signals generated in a block obtained by dividing a detection area of the EVS, and correcting at least one of a first line segment or first curve detected in a first block, or a second line segment or second curve detected in a second block adjacent to the first block, in such a manner that a first endpoint of the first line segment or first curve overlaps with a second endpoint of the second line segment or second curve.
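The endpoint correction across adjacent blocks might be sketched as follows, with the midpoint-snapping rule and the tolerance being illustrative assumptions rather than the patent's specific correction:

```python
import numpy as np

def snap_endpoints(seg1, seg2, tol=2.0):
    # Correct two line segments detected in adjacent blocks so that the
    # nearest pair of endpoints overlaps exactly: both endpoints are
    # moved to their midpoint when they lie within tol of each other.
    seg1, seg2 = np.array(seg1, float), np.array(seg2, float)
    pairs = [(i, j) for i in range(2) for j in range(2)]
    i, j = min(pairs, key=lambda p: np.linalg.norm(seg1[p[0]] - seg2[p[1]]))
    if np.linalg.norm(seg1[i] - seg2[j]) <= tol:
        mid = (seg1[i] + seg2[j]) / 2
        seg1[i] = mid
        seg2[j] = mid  # the two endpoints now coincide
    return seg1, seg2

# A segment ending near the block boundary at x ~= 16, and its
# continuation in the adjacent block, offset by detection noise.
a, b = snap_endpoints([(4, 10), (15.6, 10.2)], [(16.3, 9.9), (28, 10)])
```

Snapping the shared endpoint stitches the per-block detections back into one continuous line across the divided detection area.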