Patent classifications
G06T2207/30228
METHOD AND APPARATUS WITH OBJECT TRACKING USING DYNAMIC FIELD OF VIEW
A method with object tracking includes: determining a first target tracking state by tracking a target from a first image frame with a first field of view (FoV); determining a second FoV based on the first FoV and the first target tracking state; and generating a second target tracking result by tracking the target from a second image frame with the second FoV.
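The FoV-update step described above can be sketched as follows. This is a minimal illustration, not the patent's actual method: the function name, the confidence threshold, and the zoom factors are all assumptions chosen to show the idea of widening the FoV when the tracking state is weak and narrowing it when tracking is reliable.

```python
def next_fov(current_fov, confidence, min_fov=20.0, max_fov=90.0):
    """Hypothetical FoV update: widen the FoV when tracking confidence
    is low (the target may be leaving the frame), narrow it when the
    target is tracked reliably. All constants are illustrative."""
    if confidence < 0.5:
        fov = current_fov * 1.5   # zoom out to re-acquire the target
    else:
        fov = current_fov * 0.9   # zoom in for finer tracking
    return max(min_fov, min(max_fov, fov))
```

The second frame would then be processed with `next_fov(first_fov, first_confidence)` as its field of view.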
Techniques for object tracking
A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
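The performance-profile step can be illustrated with a toy computation over a tag's position samples. The function name, the sample format `(t, x, y)`, and the choice of metrics (total distance, peak speed) are assumptions for illustration only; the patent does not specify these.

```python
import math

def performance_profile(positions):
    """positions: list of (t, x, y) samples for one monitored object.
    Returns a toy performance profile: total distance covered and
    peak speed between consecutive samples."""
    total, peak = 0.0, 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(positions, positions[1:]):
        d = math.hypot(x1 - x0, y1 - y0)  # distance between samples
        total += d
        dt = t1 - t0
        if dt > 0:
            peak = max(peak, d / dt)      # instantaneous speed
    return {"distance": total, "peak_speed": peak}
```

In the described system, the `(x, y)` positions would come from fusing the tag's sensory data with the matched video.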
SYSTEMS AND METHODS FOR CROWDSOURCED VIDEO ORCHESTRATION
A system described herein may provide a technique for the real-time determination of events, objects, focal points, or the like to be captured by one or more cameras in a multi-camera environment. Such determination may be based on “crowdsourced” data from multiple User Equipment (“UEs”). The crowdsourced data may include positioning and/or pose information associated with UEs. The positioning information for a given UE may include location information, and the pose information may include an azimuth angle, magnetic declination, or other suitable information indicating where a particular physical facet of the UE is facing. For example, the pose information may be used to indicate or infer where a camera of the UE is pointed. One or more actuatable cameras may be displaced, rotated, etc. to capture video at one or more identified crowdsourced focal points.
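One way the crowdsourced positioning and pose data could be reduced to a focal point is a least-squares intersection of the UEs' viewing rays. This 2D sketch is an assumption about how such a focal point might be estimated; the patent does not prescribe this computation.

```python
import math

def crowd_focal_point(observers):
    """observers: list of (x, y, azimuth_rad) — a UE's position and the
    heading its camera faces. Returns the 2D point minimizing the summed
    squared distance to all viewing rays (least-squares intersection)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for x, y, az in observers:
        dx, dy = math.cos(az), math.sin(az)
        nx, ny = -dy, dx                      # normal to the ray direction
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * nx * x + nx * ny * y
        b2 += nx * ny * x + ny * ny * y
    det = a11 * a22 - a12 * a12               # 2x2 normal equations
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

An actuatable camera could then be rotated toward the returned point.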
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing apparatus acquires a moving image to be processed, detects an object from each of images included in the moving image, determines a position of a region of interest in each of the images included in the moving image, based on a result of the detection, derives a cutting locus for the moving image based on a locus corresponding to movement of the position of the region of interest and a reference position for a cutting-out region, the cutting locus being a locus corresponding to movement of a position of the cutting-out region, and generates a cut-out image from the cutting-out region identified based on the cutting locus in each of the images included in the moving image.
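The derivation of a cutting locus from the per-frame region-of-interest positions can be sketched with a simple smoothing filter. The use of an exponential moving average here is an illustrative assumption; the patent describes the locus only in terms of the movement of the region of interest and a reference position.

```python
def cutting_locus(roi_centers, alpha=0.3):
    """Smooth the per-frame region-of-interest centers into a crop path.
    An exponential moving average is one illustrative smoothing choice;
    alpha controls how quickly the crop window follows the ROI."""
    locus = [roi_centers[0]]
    for x, y in roi_centers[1:]:
        px, py = locus[-1]
        locus.append((px + alpha * (x - px), py + alpha * (y - py)))
    return locus
```

Each cut-out image would then be extracted from a fixed-size region centered on the corresponding locus point.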
Fitness and sports applications for an autonomous unmanned aerial vehicle
Sports and fitness applications for an autonomous unmanned aerial vehicle (UAV) are described. In an example embodiment, a UAV can be configured to track a human subject using perception inputs from one or more onboard sensors. The perception inputs can be utilized to generate values for various performance metrics associated with the activity of the human subject. In some embodiments, the perception inputs can be utilized to autonomously maneuver the UAV to lead the human subject to satisfy a performance goal. The UAV can also be configured to autonomously capture images of a sporting event and/or make rule determinations while officiating a sporting event.
ALIGNMENT OF 3D GRAPHICS EXTENDING BEYOND FRAME IN AUGMENTED REALITY SYSTEM WITH REMOTE PRESENTATION
Augmented reality systems provide graphics over views from a mobile device for both in-venue and remote viewing of a sporting or other event. A server system can provide a transformation between the coordinate system of a mobile device (mobile phone, tablet computer, head mounted display) and a real world coordinate system. Requested graphics for the event are displayed over a view of an event. In a tabletop presentation, video of the event can be displayed with augmented reality graphics overlays at a remote location.
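Applying the server-provided transformation between the device and real-world coordinate systems amounts to a homogeneous coordinate transform. The row-major 4x4 matrix representation below is an assumption for illustration; the patent does not specify the transform's encoding.

```python
def device_to_world(transform, p):
    """Apply a 4x4 homogeneous transform (row-major nested lists,
    assumed here to be server-provided) that maps a point p = (x, y, z)
    from device coordinates into real-world coordinates."""
    x, y, z = p
    out = []
    for row in transform[:3]:                 # last row is (0, 0, 0, 1)
        out.append(row[0] * x + row[1] * y + row[2] * z + row[3])
    return tuple(out)
```

Graphics anchored in world coordinates can then be drawn over the device's view using the inverse of this transform.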
Multi-camera homogeneous object trajectory alignment
A first plurality of images is obtained via an image capture device. A first set of pixels in a first image of the first plurality of images is identified based on specified criteria. A first set of coordinates associated with the first set of pixels is determined. A second set of coordinates is generated based on the first set of coordinates. A second set of pixels in a second image of the first plurality of images is identified, based on the specified criteria and a proximity to the second set of coordinates. A first trajectory between the first set of pixels and the second set of pixels is generated. The first trajectory is determined to correspond to a second trajectory associated with a second plurality of images obtained via a second image capture device, and the first trajectory and the second trajectory are outputted.
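The "proximity to the second set of coordinates" step can be sketched as predict-then-associate: extrapolate the object's position, then pick the nearest detection within a gate. The linear velocity prediction and the distance gate are illustrative assumptions, not the patented procedure.

```python
import math

def track_step(prev_pos, velocity, detections, max_dist=5.0):
    """Predict the next position from a constant-velocity model, then
    pick the detection closest to the prediction within max_dist.
    Returns None if no detection falls inside the gate."""
    pred = (prev_pos[0] + velocity[0], prev_pos[1] + velocity[1])
    best, best_d = None, max_dist
    for d in detections:
        dist = math.hypot(d[0] - pred[0], d[1] - pred[1])
        if dist <= best_d:
            best, best_d = d, dist
    return best
```

Chaining such steps across frames yields the per-camera trajectory that is then matched against the second camera's trajectory.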
IMAGE PROCESSING METHOD, APPARATUS, AND STORAGE MEDIUM
An image processing method and apparatus, an electronic device and a storage medium are provided. In the method, scene images corresponding to a game area at different viewpoints during a gaming stage are acquired to obtain a plurality of groups of scene images, where each group of scene images includes at least one frame of scene image corresponding to the game area at the same viewpoint; object analysis is performed on the plurality of groups of scene images to obtain object analysis data corresponding to the game area; and the object analysis data is written into a general-purpose object data structure, which provides fields for storing various types of data related to the object analysis data.
Image processing apparatus, control method of image processing apparatus, and non-transitory computer-readable storage medium
An image processing apparatus comprising: an image obtaining unit configured to obtain images based on capturing by a plurality of image capturing apparatuses; a position obtaining unit configured to obtain information representing a predetermined position to which the plurality of image capturing apparatuses are directed; a region setting unit configured to set, based on the information obtained by the position obtaining unit, a region to estimate a three-dimensional shape of an object; and an estimation unit configured to estimate, in the region set by the region setting unit, the three-dimensional shape of the object based on the images obtained by the image obtaining unit.
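The region-setting step can be illustrated by constructing a bounding volume around the predetermined point the cameras are directed at. The axis-aligned box and the single radius parameter are illustrative assumptions about the region's shape; the patent only requires that the region be set from the obtained position information.

```python
def estimation_region(target, radius):
    """Axis-aligned bounding box of the given radius around the
    predetermined point (x, y, z) the cameras are directed at — the
    region inside which the 3D shape is estimated. Returns the box as
    (min_corner, max_corner)."""
    x, y, z = target
    return ((x - radius, y - radius, z - radius),
            (x + radius, y + radius, z + radius))
```

Restricting shape estimation to this region avoids wasting computation on space none of the cameras are aimed at.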
METHOD AND SYSTEM FOR ABSOLUTE POSITIONING OF AN OBJECT
A system and method for determining an absolute position of an object in an area is presented. The system includes a server having a processor, and a plurality of camera nodes coupled to the server. Each node includes a camera that acquires images of the object and area. The server receives image data from a camera, detects the object within an approximate location by image analysis techniques, and determines a relative position of the object in pixel coordinates. The processor then detects stationary markers proximate to the relative location of the object, determines an absolute position of the detected markers relative to known markers to define an absolute position of the marker, and determines an absolute location of the object in relation to the absolute location of the detected marker. This absolute position of the object is provided to an official to accurately locate the object in the area.
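The final step, going from the object's pixel position to an absolute position via a detected stationary marker, can be sketched as follows. The flat marker list, the nearest-marker choice, and the single pixel-to-meters scale factor are simplifying assumptions; a real system would use a full camera calibration rather than a uniform scale.

```python
import math

def absolute_position(obj_px, markers, scale):
    """markers: list of ((px, py), (X, Y)) pairs — each stationary
    marker's pixel coordinates and known absolute (world) coordinates.
    The object's absolute position is taken as the nearest marker's
    world position plus the scaled pixel offset from that marker."""
    (mpx, mpy), (MX, MY) = min(
        markers,
        key=lambda m: math.hypot(m[0][0] - obj_px[0], m[0][1] - obj_px[1]))
    return (MX + (obj_px[0] - mpx) * scale,
            MY + (obj_px[1] - mpy) * scale)
```

The resulting absolute coordinates are what the system would report to an official to locate the object in the area.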