G06T2207/30228

SYSTEM AND METHOD FOR OBJECT TRACKING AND METRIC GENERATION
20220391620 · 2022-12-08

Disclosed herein is a system and method directed to object tracking and metric generation using a plurality of cameras. The system includes the plurality of cameras disposed around a playing surface in a mirrored configuration, where the plurality of cameras are time-synchronized. The system further includes logic that, when executed by a processor, causes performance of operations including: obtaining a sequence of images from the plurality of cameras, continuously detecting an object in image pairs at successive points in time, wherein each image pair corresponds to a single point in time, continuously determining a location of the object within the playing space through triangulation of the object within each image pair, detecting a player and the object within each image of a subset of image pairs of the sequence of images, identifying a sequence of interactions between the object and the player, and storing the sequence of interactions.
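The triangulation step described above can be sketched with the standard linear (DLT) method: each camera in a time-synchronized pair contributes two rows to a homogeneous system whose solution is the object's 3-D position. The camera matrices and detections below are illustrative, not taken from the patent.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Triangulate a 3-D point from one time-synchronized image pair.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ extrinsics).
    uv1, uv2 : pixel coordinates (u, v) of the detected object in each image.
    Solves the linear DLT system A x = 0 via SVD.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

def project(P, X):
    """Project a 3-D point into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras disposed around the playing surface.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 unit on x
point = np.array([0.2, 0.3, 5.0])                            # ground-truth ball position
est = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(est, 6))  # recovers [0.2, 0.3, 5.0]
```

With noise-free detections the DLT solution is exact; repeating it per image pair yields the continuously determined object location.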

Fitness And Sports Applications For An Autonomous Unmanned Aerial Vehicle

Sports and fitness applications for an autonomous unmanned aerial vehicle (UAV) are described. In an example embodiment, a UAV can be configured to track a human subject using perception inputs from one or more onboard sensors. The perception inputs can be utilized to generate values for various performance metrics associated with the activity of the human subject. In some embodiments, the perception inputs can be utilized to autonomously maneuver the UAV to lead the human subject to satisfy a performance goal. The UAV can also be configured to autonomously capture images of a sporting event and/or make rule determinations while officiating a sporting event.
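Deriving performance-metric values from perception inputs might look like the following minimal sketch, which turns timestamped position estimates of the tracked subject into distance and speed figures (the sample format and metric names are assumptions, not from the abstract).

```python
import math

def performance_metrics(track):
    """Derive simple performance metrics from timestamped position estimates.

    `track` is a list of (t_seconds, x_m, y_m) samples such as a visual
    tracker might produce. Returns total distance covered and average speed.
    """
    distance = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        distance += math.hypot(x1 - x0, y1 - y0)
    elapsed = track[-1][0] - track[0][0]
    return {"distance_m": distance, "avg_speed_mps": distance / elapsed}

track = [(0.0, 0.0, 0.0), (1.0, 3.0, 4.0), (2.0, 3.0, 10.0)]
metrics = performance_metrics(track)
print(metrics)  # {'distance_m': 11.0, 'avg_speed_mps': 5.5}
```

A leading UAV could compare such metrics against a performance goal to decide whether to speed up or slow down.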

Creating multi-camera panoramic projections

One embodiment provides a method, including: obtaining, from each of two or more panoramic cameras, panoramic video, wherein each of the two or more panoramic cameras is located at a different physical location within an event environment; compositing the panoramic video obtained from the two or more panoramic cameras into a single video; and streaming the composited panoramic video to one or more end users, wherein each of the one or more end users provides commands to manipulate the streamed composited panoramic video, resulting in a different view of the streamed composited panoramic video for the corresponding end user based on the provided commands.
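The per-user view manipulation can be illustrated with a toy example: a pan command selects a yaw angle, which maps to a column window of an equirectangular frame, wrapping at the 360-degree seam. The projection and command model are illustrative assumptions, not the patent's method.

```python
import numpy as np

def extract_view(frame, yaw_deg, fov_deg=90):
    """Cut a viewer-controlled horizontal window out of an equirectangular frame.

    `frame` is H x W (x channels); `yaw_deg` selects the view direction and
    `fov_deg` its width. Columns wrap around at the 360-degree seam, so each
    end user can look in a different direction within the same stream.
    """
    h, w = frame.shape[:2]
    center = int((yaw_deg % 360) / 360 * w)
    half = int(fov_deg / 360 * w) // 2
    cols = [(center + dx) % w for dx in range(-half, half)]
    return frame[:, cols]

pano = np.arange(360).reshape(1, 360)       # 1-row frame, one column per degree
view = extract_view(pano, yaw_deg=0, fov_deg=90)
print(view.shape, view[0, 0], view[0, -1])  # (1, 90) 315 44
```

The wraparound indexing is what lets a view centred on the seam (yaw 0 here) pull columns from both ends of the frame.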

Control apparatus, control method, and recording medium
11483470 · 2022-10-25

A control apparatus configured to control an image-capturing apparatus executes focus adjustment of the image-capturing apparatus in response to detection of a specific marker pattern from an image of an area to be focused by the image-capturing apparatus in an image acquired by image-capturing of the image-capturing apparatus.
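The trigger logic — run focus adjustment only when a specific marker pattern appears in the area to be focused — can be sketched as below. The naive template search and the controller interface are illustrative stand-ins for a real marker detector and focus drive.

```python
def marker_detected(region, pattern):
    """Naive template search: True if `pattern` occurs anywhere in `region`.

    Both are small 2-D lists of pixel values; a production system would use a
    robust marker detector instead of exact matching.
    """
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(region) - ph + 1):
        for c in range(len(region[0]) - pw + 1):
            if all(region[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return True
    return False

class CameraController:
    """Executes focus adjustment only when the marker appears in the focus area."""
    def __init__(self):
        self.focus_runs = 0

    def on_frame(self, focus_area, pattern):
        if marker_detected(focus_area, pattern):
            self.focus_runs += 1  # stand-in for driving the focus mechanism

ctrl = CameraController()
pattern = [[1, 0], [0, 1]]
ctrl.on_frame([[0, 0, 0], [0, 0, 0]], pattern)  # no marker -> no focus run
ctrl.on_frame([[1, 0, 0], [0, 1, 0]], pattern)  # marker present -> focus runs
print(ctrl.focus_runs)  # 1
```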

Mobile entity tracking device and method for tracking mobile entity

A mobile entity tracking device is provided with: a video receiving unit for receiving a moving-image frame of a ball game captured by each of a plurality of cameras present at different positions; a mobile entity candidate extraction unit for extracting a candidate for a mobile entity using a plurality of moving-image frames; a mobile entity selection unit for displaying candidates for a mobile entity and accepting selection, by a user, of the mobile entity to be tracked; and a mobile entity tracking unit for tracking the mobile entity that is the object to be tracked. When the mobile entity selection unit accepts selection, by a user, of the mobile entity to be tracked, the mobile entity tracking unit corrects the object to be tracked to the mobile entity selected by the user.
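The interplay between automatic tracking and user correction can be sketched as a tracker whose target is overridden whenever the user selects a candidate; the nearest-neighbour update below is a placeholder for the device's actual tracking logic, and the candidate format is an assumption.

```python
class MobileEntityTracker:
    """Tracks one entity per frame; a user selection corrects the target.

    Candidates are (label, x, y) detections extracted from the multi-camera
    moving-image frames. Tracking here is nearest-neighbour to the last known
    position, purely for illustration.
    """
    def __init__(self):
        self.target_pos = None

    def select(self, candidate):
        # User picked a displayed candidate: correct the object to be tracked.
        self.target_pos = candidate[1:]

    def update(self, candidates):
        if self.target_pos is None:
            return None
        x0, y0 = self.target_pos
        best = min(candidates, key=lambda c: (c[1] - x0) ** 2 + (c[2] - y0) ** 2)
        self.target_pos = best[1:]
        return best[0]

tracker = MobileEntityTracker()
tracker.select(("ball", 10.0, 10.0))  # user picks the ball among the candidates
tracked = tracker.update([("ball", 11.0, 10.5), ("shoe", 40.0, 2.0)])
print(tracked)  # ball
```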

METHODS AND SYSTEMS FOR ANALYZING AND PRESENTING EVENT INFORMATION

Methods, systems, graphical user interfaces (GUIs), and computer-readable media for presenting GUI elements generated based on information associated with an event are generally described. An event information presentation system may be configured to present GUI elements generated based on substantially real-time event information associated with a live event, such as a sporting event. Illustrative event information may include object movement and location information for objects such as event participants (for instance, players) and articles (for instance, a football for a football game event). The event information may be interpreted based on activity categories to automatically differentiate, organize, highlight, or the like the event information in order to generate relevant and meaningful GUI elements.
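Interpreting event information based on activity categories could be as simple as the grouping sketch below; the category names and event schema are invented for illustration.

```python
from collections import defaultdict

# Illustrative mapping from event type to activity category (not from the patent).
ACTIVITY_RULES = {
    "pass": "offense",
    "run": "offense",
    "tackle": "defense",
    "timeout": "game_flow",
}

def organize_events(events):
    """Group real-time event records by activity category for GUI presentation."""
    grouped = defaultdict(list)
    for event in events:
        grouped[ACTIVITY_RULES.get(event["type"], "other")].append(event)
    return dict(grouped)

feed = [{"type": "pass", "player": 12},
        {"type": "tackle", "player": 54},
        {"type": "run", "player": 22}]
by_category = organize_events(feed)
print(sorted(by_category))            # ['defense', 'offense']
print(len(by_category["offense"]))    # 2
```

Each category bucket could then drive a distinct GUI element, e.g. a highlighted offensive-plays panel.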

System and method for providing golf information for golfer
11660522 · 2023-05-30

A system for providing golf information, the system including a green photographing unit for photographing an image of an area including a green on a golf course and communicatively connected to a communication network to transmit green image data to the outside, a golfer terminal that is carried by a golfer on the golf course and communicatively connected to the communication network to transmit golfer position data to the outside, and an information providing server that is communicatively connected to the communication network to generate golf information using the green image data and the golfer position data and transmit the generated golf information to the golfer terminal.
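One kind of golf information the server could generate from the two data sources is distance to the pin: a pin location derived from the green image combined with the golfer's position data. The coordinates, field names, and use of the haversine formula are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def golf_info(pin_latlon, golfer_latlon):
    """Combine the pin position (derived from green image data) with the
    golfer position data into a distance-to-pin report."""
    d = haversine_m(*pin_latlon, *golfer_latlon)
    return {"distance_to_pin_m": round(d, 1)}

info = golf_info((37.0000, 127.0000), (37.0010, 127.0000))
print(info)  # about 111 m: one thousandth of a degree of latitude
```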

METHOD OF CALIBRATING CAMERAS
20230162398 · 2023-05-25

A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capturing the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation, if needed, to obtain improved scene capturing after the further calibration.
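The further-calibration comparison reduces to measuring how far the stationary reference object's image has drifted between the reference frame and a captured frame. A minimal sketch, with an illustrative pixel tolerance:

```python
def calibration_offset(ref_pos, live_pos, tol_px=2.0):
    """Compare the reference object's position in a live frame against its
    position in the reference frame recorded before capture.

    Returns the (dx, dy) pixel offset and whether the camera's degrees of
    freedom need adapting (the tolerance is an illustrative choice).
    """
    dx = live_pos[0] - ref_pos[0]
    dy = live_pos[1] - ref_pos[1]
    needs_adjustment = (dx * dx + dy * dy) ** 0.5 > tol_px
    return dx, dy, needs_adjustment

stable = calibration_offset((640, 360), (640, 361))
drifted = calibration_offset((640, 360), (648, 366))
print(stable)   # (0, 1, False): within tolerance, no adaptation
print(drifted)  # (8, 6, True): adapt the camera pose
```

A per-camera offset like this could feed whatever actuation or re-solve of the extrinsics the formation supports.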

Image processing apparatus, image processing method, and storage medium
11657514 · 2023-05-23

An object of the present invention is to extract an area of a foreground object with high accuracy. The present invention is an image processing apparatus including: a target image acquisition unit configured to acquire a target image that is a target of extraction of a foreground area; a reference image acquisition unit configured to acquire a plurality of reference images including an image whose viewpoint is different from that of the target image; a conversion unit configured to convert a plurality of reference images acquired by the reference image acquisition unit based on a viewpoint corresponding to the target image; and an extraction unit configured to extract a foreground area of the target image by using data relating to a degree of coincidence of a plurality of reference images converted by the conversion unit.
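The coincidence idea can be sketched as follows: given reference images already converted (warped) to the target viewpoint, pixels where the references coincide give a trustworthy background estimate, and target pixels that deviate from it are labelled foreground. The median/standard-deviation voting rule and thresholds below are illustrative, not the patent's exact extraction method.

```python
import numpy as np

def extract_foreground(target, warped_refs, diff_thresh=20, agree_thresh=10):
    """Foreground mask from a target image and reference images already
    warped to the target viewpoint.

    Where the warped references coincide (low spread), their median serves as
    a background estimate; target pixels deviating from it are foreground.
    """
    refs = np.stack(warped_refs).astype(float)
    background = np.median(refs, axis=0)
    coincidence = refs.std(axis=0) < agree_thresh          # references agree here
    deviates = np.abs(target.astype(float) - background) > diff_thresh
    return coincidence & deviates

bg = np.full((4, 4), 100)
target = bg.copy()
target[1:3, 1:3] = 200                  # a 2x2 foreground object
refs = [bg, bg + 2, bg - 2]             # consistent background from 3 viewpoints
mask = extract_foreground(target, refs)
print(int(mask.sum()))  # 4 foreground pixels
```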

System and method for robust model-based camera tracking and image occlusion removal
11606512 · 2023-03-14

A system and method for model-based camera tracking and image occlusion removal for a camera viewing a sports field (or other scene) includes receiving a synthesized data set comprising at least one empty field image of the field, the empty field image with at least one occlusion graphic, and camera parameters corresponding to the empty field image; training a neural network model to estimate the empty field image and the corresponding camera parameters by providing the model with an input training image comprising the empty field image with the occlusion graphic, and providing the model with model output targets comprising the empty field image and the corresponding camera parameters; receiving, by the neural network model, a live input image comprising a view of the field with live occlusions; and providing, by the neural network model, using trained model parameters, estimated live camera parameters or an estimated empty field image associated with the live input image.
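The synthesized data set lends itself to a simple construction: composite an occlusion graphic onto an empty field image to form the model input, and keep the clean image plus its camera parameters as the training targets. The array shapes, paste operation, and parameter names below are illustrative assumptions, not the patent's pipeline.

```python
import numpy as np

def make_training_pair(empty_field, occlusion, top_left, camera_params):
    """Build one synthesized training sample.

    Model input: the empty-field image with an occlusion graphic pasted in.
    Targets: the clean empty-field image and its camera parameters.
    """
    model_input = empty_field.copy()
    r, c = top_left
    h, w = occlusion.shape
    model_input[r:r + h, c:c + w] = occlusion
    return model_input, (empty_field, camera_params)

field = np.zeros((6, 8), dtype=np.uint8)        # stand-in empty field image
player = np.full((2, 2), 255, dtype=np.uint8)   # stand-in occlusion graphic
x, (target_img, target_params) = make_training_pair(
    field, player, (2, 3), {"pan": 0.1, "tilt": -0.2})
print(int(x.sum()), int(target_img.sum()))  # 1020 0
```

Repeating this over many occlusion placements and camera poses yields (input, target) pairs from which the network learns to recover the empty field and the camera parameters behind a live, occluded view.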