G06V40/103

Systems and methods for collaborative location tracking and sharing using augmented reality
11710285 · 2023-07-25

Disclosed is a location tracking system and associated methods for precisely locating a target device with a recipient device via different forms of location tracking and augmented reality. The recipient device receives a first position of the target device over a data network. The recipient device is moved according to the first position until the target device is in Ultra-WideBand (“UWB”) signaling range of the recipient device. The recipient device then measures a distance and direction of the target device relative to the recipient device based on Time-of-Flight (“ToF”) measurements generated from the UWB signaling. The recipient device determines a second position of the target device based on the distance and direction of the target device, and generates an augmented reality view with a visual reference at a particular position in images of a captured scene that corresponds to the second position of the target device.
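The ranging and refinement steps could be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the one-way ToF model, and the planar geometry are all assumptions.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(tof_seconds: float) -> float:
    """One-way Time-of-Flight to range: distance = c * t."""
    return C * tof_seconds

def refine_position(recipient_xy, distance_m, bearing_rad):
    """Second (refined) position of the target: offset the recipient's
    position by the UWB-measured distance along the measured direction."""
    rx, ry = recipient_xy
    return (rx + distance_m * math.cos(bearing_rad),
            ry + distance_m * math.sin(bearing_rad))
```

The refined position would then anchor the visual reference drawn into the augmented reality view of the captured scene.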

Method for predicting direction of movement of target object, vehicle control method, and device

A method for predicting a direction of movement of a target object, a method for training a neural network, a smart vehicle control method, a device, an electronic apparatus, a computer readable storage medium, and a computer program. The method for predicting a direction of movement of a target object comprises: acquiring an apparent orientation of a target object in an image captured by a camera device, and acquiring a relative position relationship between the target object in the image and the camera device in three-dimensional space (S100); and determining, according to the apparent orientation of the target object and the relative position relationship, a direction of movement of the target object relative to a traveling direction of the camera device (S110).
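One plausible reading of steps S100 and S110 is that the apparent (image-space) orientation is corrected by the viewing-ray angle derived from the target's position relative to the camera. The sketch below illustrates that idea only; the names and the angle convention are assumptions, not taken from the patent.

```python
import math

def ray_angle_deg(x_offset_m: float, z_depth_m: float) -> float:
    """Angle between the camera's optical axis and the ray to the target,
    from the target's lateral offset and depth relative to the camera."""
    return math.degrees(math.atan2(x_offset_m, z_depth_m))

def movement_direction_deg(apparent_orientation_deg, x_offset_m, z_depth_m):
    """Movement direction relative to the camera's travel direction:
    the apparent orientation corrected by the viewing-ray angle."""
    return (apparent_orientation_deg + ray_angle_deg(x_offset_m, z_depth_m)) % 360.0
```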

MOVING OBJECT DETECTION DEVICE, IMAGE PROCESSING DEVICE, MOVING OBJECT DETECTION METHOD, AND INTEGRATED CIRCUIT

A moving object detection device includes: an image capturing unit with which a vehicle is equipped, and which is configured to obtain captured images by capturing views in a travel direction of the vehicle; a setting unit configured to set, for each of frames that are the captured images, a movement vanishing point at which movement of a stationary object in the captured images due to the vehicle traveling does not occur; a calculation unit configured to calculate, for each of unit regions of the captured images, a first motion vector indicating movement of an image in the unit region; and a detection unit configured to detect a moving object present in the travel direction, based on the movement vanishing points set by the setting unit and the first motion vectors calculated by the calculation unit.
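For a forward-moving vehicle, stationary scenery flows radially away from the movement vanishing point, so a unit region whose motion vector deviates from that radial direction is a moving-object candidate. A minimal sketch of that test, with assumed names and an assumed angle threshold:

```python
import math

def is_moving(region_center, motion_vec, vanishing_point, angle_thresh_deg=20.0):
    """Flag a unit region as a moving object when its first motion vector
    deviates from the radial direction away from the movement vanishing point."""
    rx = region_center[0] - vanishing_point[0]
    ry = region_center[1] - vanishing_point[1]
    mx, my = motion_vec
    norm = math.hypot(rx, ry) * math.hypot(mx, my)
    if norm == 0.0:
        return False  # no motion, or region at the vanishing point
    cos_angle = max(-1.0, min(1.0, (rx * mx + ry * my) / norm))
    return math.degrees(math.acos(cos_angle)) > angle_thresh_deg
```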

METHOD FOR CO-SEGMENTATING THREE-DIMENSIONAL MODELS REPRESENTED BY SPARSE AND LOW-RANK FEATURE
20180012361 · 2018-01-11

Presently disclosed is a method for co-segmenting three-dimensional models represented by sparse and low-rank features, comprising: pre-segmenting each three-dimensional model of a three-dimensional model class to obtain patches for each model; constructing a histogram over the patches of each model to obtain a patch feature vector for each model; applying a sparse and low-rank representation to each patch feature vector to obtain a representation coefficient and a representation error for each model; determining a confident representation coefficient for each model according to its representation coefficient and representation error; and clustering the confident representation coefficients to co-segment the models.
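Only the histogram step lends itself to a short sketch; the sparse and low-rank solve is omitted here. All names and the normalization choice below are illustrative assumptions.

```python
def patch_histogram(values, n_bins=8, lo=0.0, hi=1.0):
    """L1-normalized histogram of a per-patch scalar descriptor,
    usable as a patch feature vector."""
    counts = [0] * n_bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]
```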

VISION-ASSIST DEVICES AND METHODS OF CALIBRATING VISION-ASSIST DEVICES

Vision-assist devices and methods for calibrating a position of a vision-assist device worn by a user are disclosed. In one embodiment, a method of calibrating a vision-assist device includes capturing a calibration image using at least one capturing device of the vision-assist device, obtaining at least one attribute of the calibration image, and comparing the at least one attribute of the calibration image with a reference attribute. The method further includes determining an adjustment of the at least one image sensor based at least in part on the comparison of the at least one attribute of the calibration image with the reference attribute, and providing an output corresponding to the determined adjustment of the vision-assist device.
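The compare-and-adjust loop could look like the following sketch, where the calibration attribute is taken to be a horizon-tilt angle; the attribute choice, names, and tolerance are assumptions for illustration only.

```python
def calibration_adjustment(measured_horizon_deg, reference_horizon_deg=0.0, tol_deg=1.0):
    """Compare a calibration-image attribute (here, horizon tilt) with its
    reference value; return the corrective adjustment, or None if within tolerance."""
    delta = reference_horizon_deg - measured_horizon_deg
    return None if abs(delta) <= tol_deg else delta
```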

REGION SELECTION FOR IMAGE MATCH
20180012102 · 2018-01-11

The accuracy of an image matching process can be improved by determining relevant swatch regions of the images, where those regions contain representative patterns of the items of interest represented in those images. Various processes examine a set of visual cues to determine at least one candidate object region, and then collate these regions to determine one or more representative swatch images. For apparel items, this can include locating regions such as an upper body region, torso region, clothing region, foreground region, and the like. Processes such as regression analysis or probability mapping can be used on the collated region data (along with confidence and/or probability values) to determine the appropriate swatch regions.
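Collating candidate regions with their confidence values could be as simple as a confidence-weighted average of bounding boxes, sketched below; the box format and weighting scheme are assumptions, not the patented process.

```python
def collate_regions(candidates):
    """Confidence-weighted average of candidate boxes (x0, y0, x1, y1, conf),
    yielding one representative swatch region."""
    total = sum(c[4] for c in candidates)
    return tuple(sum(c[i] * c[4] for c in candidates) / total for i in range(4))
```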

Methods and Systems for Person Detection in a Video Feed

The various embodiments described herein include methods, devices, and systems for providing event alerts. In one aspect, a method includes: (1) obtaining a video feed, the video feed comprising a plurality of images; and, (2) for each image, analyzing the image to determine whether the image includes a person, the analyzing including: (a) determining that the image includes a potential instance of a person by analyzing the image at a first resolution; (b) in accordance with the determination that the image includes the potential instance, denoting a region around the potential instance; (c) determining whether the region includes an instance of the person by analyzing the region at a second resolution, greater than the first resolution; and (d) in accordance with a determination that the region includes the instance of the person, determining that the image includes the person.
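Steps (a) through (d) form a coarse-to-fine cascade, which could be sketched as below. The detector callables, padding, and return convention are illustrative assumptions.

```python
def detect_person(image, coarse_detector, fine_detector, pad=16):
    """Two-stage cascade: scan at a first (low) resolution for potential
    instances, then confirm each padded region at a second, higher resolution."""
    for (x0, y0, x1, y1) in coarse_detector(image):
        region = (max(0, x0 - pad), max(0, y0 - pad), x1 + pad, y1 + pad)
        if fine_detector(image, region):
            return True
    return False
```

The cascade saves work because the expensive high-resolution analysis runs only on the few regions the cheap low-resolution pass flags.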

System and method for object tracking and metric generation
11710316 · 2023-07-25

Disclosed herein is a system and method directed to object tracking and metric generation using a plurality of cameras. The system includes the plurality of cameras disposed around a playing surface in a mirrored configuration, where the plurality of cameras are time-synchronized. The system further includes logic that, when executed by a processor, causes performance of operations including: obtaining a sequence of images from the plurality of cameras, continuously detecting an object in image pairs at successive points in time, wherein each image pair corresponds to a single point in time, continuously determining a location of the object within the playing space through triangulation of the object within each image pair, detecting a player and the object within each image of a subset of image pairs of the sequence of images, identifying a sequence of interactions between the object and the player, and storing the sequence of interactions.
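For a rectified camera pair in the mirrored configuration, triangulation reduces to the standard stereo relation Z = f·B/d. The sketch below assumes rectified images and pixel-space disparity; it is an illustration of the geometry, not the system's actual pipeline.

```python
def triangulate_depth(x_left_px, x_right_px, focal_px, baseline_m):
    """Depth of the object from a time-synchronized image pair:
    Z = focal_length * baseline / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity
```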

Information processing apparatus, information processing method, and program
11710347 · 2023-07-25

An information processing apparatus (100) includes an acquisition unit (122) and a display processing unit (130). The acquisition unit acquires: a first image from which person region feature information, covering regions of a retrieval target person other than the face, is extracted; a second image for which a collation result with the person region feature information indicates a match and in which a facial region is detected; and result information indicating a collation result between face information stored in a storage unit and face information extracted from the facial region. The display processing unit displays at least two of the first image, the second image, and the result information on an identical screen.

SYSTEMS, PROCESSES AND DEVICES FOR OCCLUSION DETECTION FOR VIDEO-BASED OBJECT TRACKING
20180012078 · 2018-01-11

Processes, systems, and devices for occlusion detection for video-based object tracking (VBOT) are described herein. Embodiments process video frames to compute histogram data and depth level data for the object to detect a subset of video frames for occlusion events and generate output data that identifies each video frame of the subset of video frames for the occlusion events. Threshold measurement values are used to attempt to reduce or eliminate false positives to increase processing efficiency.
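The histogram-plus-threshold idea could be sketched as follows, using histogram intersection as the similarity measure; the measure, names, and threshold value are assumptions for illustration, not the claimed method.

```python
def occlusion_frames(frame_hists, ref_hist, sim_thresh=0.6):
    """Flag frames whose object histogram falls below a similarity
    threshold against the reference appearance as occlusion events."""
    def intersection(h1, h2):
        # Histogram intersection: sum of bin-wise minima of two L1-normalized histograms.
        return sum(min(a, b) for a, b in zip(h1, h2))
    return [i for i, h in enumerate(frame_hists) if intersection(h, ref_hist) < sim_thresh]
```

Tuning the threshold trades missed occlusions against the false positives the patent aims to reduce.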