G06T7/20

Single-pass object scanning

Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images by assessing the images for motion-based defects based on device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
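The frame-selection step can be sketched as follows. The blur model, the `Frame` type, and all thresholds and constants here are illustrative assumptions, not the patent's actual implementation: each frame is scored for expected motion blur from device rotation rate and object depth, and only frames under a blur budget are kept for reconstruction.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    image_id: int
    angular_velocity: float  # rad/s from the device IMU (assumed available)
    mean_depth_m: float      # mean depth to the object, in meters

def predicted_blur_px(frame: Frame, exposure_s: float = 0.01,
                      focal_px: float = 1500.0) -> float:
    # Rotational blur dominates handheld capture: pixels traversed during
    # the exposure is roughly angular velocity * exposure * focal length.
    rotational = frame.angular_velocity * exposure_s * focal_px
    # Closer objects amplify apparent motion; a simple 1/depth weighting.
    return rotational * (1.0 + 1.0 / max(frame.mean_depth_m, 0.1))

def select_frames(frames, max_blur_px=2.0):
    # Keep only frames whose predicted blur stays under the budget.
    return [f for f in frames if predicted_blur_px(f) <= max_blur_px]
```

A slow, distant frame passes the filter while a fast, close one is rejected, which matches the abstract's idea of discarding motion-defective captures before meshing.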

Automated video editing
11581019 · 2023-02-14

A method of generating a modified video file using one or more processors is disclosed. The method comprises detecting objects that are represented in an original video file using computer vision object-detection techniques; determining object motion characteristics for the detected objects; based on a specific object motion characteristic for a specific detected object meeting certain requirements, selecting a corresponding audio or visual effect; and applying the corresponding effect to the original video file to create the modified video file.
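A minimal sketch of the motion-characteristic and effect-selection steps, assuming an object tracker has already produced a centroid track per detected object. The characteristic definitions, thresholds, and effect names are hypothetical stand-ins for whatever "certain requirements" the claims cover.

```python
import math

def motion_characteristics(track):
    """Mean speed (px/frame) and total heading change (deg) of a centroid track."""
    speeds, headings = [], []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy))
        headings.append(math.degrees(math.atan2(dy, dx)))
    total_turn = sum(abs(b - a) for a, b in zip(headings, headings[1:]))
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return mean_speed, total_turn

def select_effect(mean_speed, total_turn,
                  speed_thresh=40.0, turn_thresh=120.0):
    # Hypothetical mapping from motion characteristics to an effect name;
    # returns None when no requirement is met.
    if mean_speed > speed_thresh:
        return "speed_lines_visual"
    if total_turn > turn_thresh:
        return "whoosh_audio"
    return None
```

The selected effect name would then drive a compositing or audio-mixing pass over the original video file.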

Swing analysis system that calculates a rotational profile
11577142 · 2023-02-14

A system that measures a swing of equipment (such as a bat or golf club) with inertial sensors, and analyzes sensor data to create a rotational profile. Swing analysis may use a two-lever model, with a body lever from the center of rotation to the hands, and an equipment lever from the hands to the sweet spot of the equipment. The rotational profile may include graphs of rates of change of the angle of the body lever and of the relative angle between the body lever and the equipment lever, and a graph of the centripetal acceleration of the equipment. These three graphs may provide insight into players' relative performance. The timing and sequencing of swing stages may be analyzed by partitioning the swing into four phases: load, accelerate, peak, and transfer. Swing metrics may be calculated from the centripetal acceleration curve and the equipment/body rotation rate curves.
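The centripetal-acceleration curve at the heart of the rotational profile follows from standard rotational kinematics, a_c = ω²r. The sketch below assumes gyroscope rotation-rate samples and a fixed equipment-lever length; the metric names and sampling scheme are illustrative, not the patent's defined metrics.

```python
def centripetal_acceleration(omega_rad_s: float, lever_m: float) -> float:
    """a_c = omega^2 * r for a lever of length r rotating at rate omega."""
    return omega_rad_s ** 2 * lever_m

def swing_metrics(omega_samples, lever_m, dt):
    # Build the centripetal-acceleration curve from rotation-rate samples,
    # then report its peak value and the time at which the peak occurs.
    accel = [centripetal_acceleration(w, lever_m) for w in omega_samples]
    peak_i = max(range(len(accel)), key=accel.__getitem__)
    return {"peak_accel_m_s2": accel[peak_i],
            "time_to_peak_s": peak_i * dt}
```

In the two-lever model, the same computation could be applied separately to the body-lever and equipment-lever rotation rates to compare their timing, which is the kind of sequencing the load/accelerate/peak/transfer partition captures.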

Temporal information prediction in autonomous machine applications

In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in the field of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to the current physical environment of the ego-vehicle.

Color-sensitive virtual markings of objects
11582312 · 2023-02-14

Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
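The color-detection and marking-generation steps can be sketched as follows, assuming upstream vision code has already located the implement tip and sampled pixels around it. Both helper functions, the averaging scheme, and the bounding-box test are simplifying assumptions standing in for the claimed detection and correspondence steps.

```python
def dominant_color(pixels):
    # Average RGB over pixels sampled near the detected implement tip;
    # a hypothetical stand-in for the color-detection step.
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

def mark_object(tip_positions, object_bbox, color):
    # Keep only tip locations that fall on the object, and emit a virtual
    # marking (position + color) at each corresponding location.
    x0, y0, x1, y1 = object_bbox
    return [{"pos": (x, y), "color": color}
            for (x, y) in tip_positions
            if x0 <= x <= x1 and y0 <= y <= y1]
```

A rendering layer would then draw the returned markings onto the object's representation in the detected color, so a red pen produces red virtual strokes.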
