
Image processing apparatus, image capturing apparatus, control method, and storage medium
11595572 · 2023-02-28

An image processing apparatus obtains distance information and subject information with respect to a captured image; specifies, based on the obtained distance information, a first subject region in which images of subjects that exist in a predetermined distance range are distributed in the captured image; specifies, based on the subject information and independently of the obtained distance information, a second subject region that includes a region from which a predetermined subject has been detected in the captured image; and determines a target region for which image processing is executed with reference to at least one of the first subject region and the second subject region. A subject region varies depending on at least one of an image capture condition of the captured image and a state of the captured image.
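A minimal sketch of the region-combination step described above, assuming regions are modeled as axis-aligned boxes and the target region is their union. All function and variable names here are illustrative, not taken from the patent.

```python
# Hypothetical sketch: combine a distance-based subject region with a
# detection-based subject region to pick the target region for processing.
# Boxes are (x0, y0, x1, y1) in pixel coordinates.

def region_from_distance(depth_map, near, far):
    """Bounding box of pixels whose depth falls in [near, far] metres."""
    coords = [(x, y) for y, row in enumerate(depth_map)
              for x, d in enumerate(row) if near <= d <= far]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))

def target_region(distance_region, detection_region):
    """Union of the two candidate boxes (either may be None)."""
    boxes = [b for b in (distance_region, detection_region) if b]
    if not boxes:
        return None
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

depth = [[1.0, 2.0, 5.0],
         [1.2, 2.1, 5.5],
         [9.0, 9.0, 9.0]]
d_region = region_from_distance(depth, 1.0, 3.0)  # subjects 1–3 m away
print(target_region(d_region, (1, 0, 2, 1)))      # union with detected box
```

The claim only requires "reference to at least one of" the two regions, so an intersection or a priority rule would be an equally valid combination policy; the union is just one concrete choice.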

Switching between object detection and data transfer with a vehicle radar

In one embodiment, a method includes determining an operational status of a vehicle including a radar antenna. The operational status is related to autonomous-driving operations of the vehicle in an environment. The method includes determining an expected amount of signaling resources associated with the radar antenna to be utilized by the vehicle while the vehicle performs the autonomous-driving operations, based at least on the operational status of the vehicle and the environment. The method includes determining to switch one or more of the signaling resources associated with the radar antenna from a first mode to a second mode based on the expected amount of signaling resources to be utilized by the radar antenna while the vehicle performs the autonomous-driving operations. The method includes causing the one or more of the signaling resources associated with the radar antenna to switch from the first mode to the second mode.
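The switching decision above can be sketched as a small policy: estimate the fraction of radar resources the current driving operation needs, and flip any surplus channels from detection to data transfer. The utilization table, channel model, and mode names are assumptions for illustration only.

```python
# Illustrative sketch of radar mode switching based on expected utilization.
# Values are invented; the patent does not specify thresholds.

EXPECTED_UTILIZATION = {          # fraction of radar resources needed
    "highway_cruise": 0.4,        # sparse environment, fewer detections
    "urban_driving": 0.9,         # dense environment, detection-heavy
    "parked": 0.1,
}

def resources_to_switch(operational_status, total_channels):
    """Channels that can move from object detection to data transfer."""
    needed = EXPECTED_UTILIZATION.get(operational_status, 1.0)
    spare = max(0.0, 1.0 - needed)
    return int(spare * total_channels)

def switch_modes(channels, n):
    """Flip the last n channels from 'detect' to 'transfer'."""
    return channels[:len(channels) - n] + ["transfer"] * n

channels = ["detect"] * 8
n = resources_to_switch("highway_cruise", len(channels))
print(switch_modes(channels, n))
```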

Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
11593996 · 2023-02-28

Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
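The matrix step above can be illustrated with a simple pinhole camera model: build an intrinsic matrix for the virtual camera, then project camera-frame 3D content to pixel coordinates where it can be drawn as an overlay. The intrinsic values are illustrative, not from the patent.

```python
# Sketch: project 3D overlay content into the image plane of a virtual
# camera model (pinhole approximation).
import numpy as np

def make_matrix(fx, fy, cx, cy):
    """Intrinsic matrix of a simple virtual camera model."""
    return np.array([[fx, 0, cx],
                     [0, fy, cy],
                     [0,  0,  1]], dtype=float)

def project(points_3d, K):
    """Project Nx3 camera-frame points to pixel coordinates."""
    pts = np.asarray(points_3d, dtype=float)
    uvw = pts @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

K = make_matrix(fx=500, fy=500, cx=320, cy=240)
overlay_px = project([[0.0, 0.0, 2.0], [1.0, 0.5, 4.0]], K)
print(overlay_px)
```

A real pipeline would also include an extrinsic (world-to-camera) matrix and lens distortion; the abstract's "set of matrices" presumably covers those as well.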

System and method for movement detection
11593950 · 2023-02-28

Systems and methods for movement detection are provided. In one example embodiment, a computer-implemented method includes obtaining image data and range data representing a scene external to an autonomous vehicle, the image data including at least a first image and a second image that depict the scene. The method includes identifying a set of corresponding image features from the image data, the set of corresponding image features including a first feature in the first image having a correspondence with a second feature in the second image. The method includes determining a respective distance for each of the first feature and the second feature based at least in part on the range data. The method includes determining a velocity associated with a portion of a scene represented by the set of corresponding image features based at least in part on the respective distance for the first feature and the second feature.
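The velocity step reduces to a difference of range measurements over the inter-frame interval; a sketch of the radial component, with invented values:

```python
# Sketch of the velocity estimate: a feature matched across two frames has
# a range from the range data in each frame; divide the displacement by
# the inter-frame time. Values are purely illustrative.

def feature_velocity(dist_first, dist_second, dt):
    """Radial velocity (m/s) of the scene portion between two frames."""
    return (dist_second - dist_first) / dt

# Feature seen at 30.0 m in frame 1 and 28.5 m in frame 2, 0.1 s apart:
v = feature_velocity(30.0, 28.5, 0.1)
print(v)   # negative: the scene portion is approaching
```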

Network and system for pose and size estimation
11593957 · 2023-02-28

A network for category-level 6D pose and size estimation, including a 3D-OCR module for 3D Orientation-Consistent Representation, a GeoReS module for Geometry-constrained Reflection Symmetry, and a MPDE module for Mirror-Paired Dimensional Estimation; wherein the 3D-OCR module and the GeoReS module are incorporated in parallel; the 3D-OCR module receives a canonical template shape including canonical category-specific keypoints; the GeoReS module receives an original input depth observation including pre-processed predicted category labels and potential masks of the target instances; the MPDE module receives the output from the GeoReS module as well as the original input depth observation; and the network outputs the estimation results based on the output of the MPDE module, the output of the 3D-OCR module, as well as the canonical template shape. Also provided are corresponding systems and methods.

System for generating a three-dimensional scene of a physical environment

A system configured to assist a user in scanning a physical environment in order to generate a three-dimensional scan or model. In some cases, the system may include an interface to assist the user in capturing data usable to determine a scale or depth of the physical environment and to perform a scan in a manner that minimizes gaps.

Image-based drive-thru management system

The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a drive-thru area. The processing device determines a vehicle disposed within the drive-thru area based on the image data. The processing device receives order data with a pending meal order. The processing device determines a first association between the vehicle and the pending meal order based on the image data. The processing device determines a meal delivery procedure associated with the vehicle based on the association between the vehicle and the pending meal order. The processing device may perform the meal delivery procedure. The processing device may provide the meal delivery procedure for display on a graphical user interface (GUI).
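One plausible way to realize the vehicle-to-order association is by queue position: the vehicle nearest the pickup window maps to the oldest pending order. This is a hypothetical sketch; the field names and the ordering heuristic are assumptions, not from the patent.

```python
# Hypothetical sketch: pair detected vehicles (front of queue first) with
# pending orders (oldest first).

def associate_orders(vehicles, pending_orders):
    """Return (plate, order_id) pairs by matching queue order to order age."""
    by_position = sorted(vehicles, key=lambda v: v["queue_position"])
    by_time = sorted(pending_orders, key=lambda o: o["placed_at"])
    return [(v["plate"], o["order_id"]) for v, o in zip(by_position, by_time)]

vehicles = [{"plate": "ABC123", "queue_position": 2},
            {"plate": "XYZ789", "queue_position": 1}]
orders = [{"order_id": 41, "placed_at": 1005},
          {"order_id": 40, "placed_at": 1000}]
print(associate_orders(vehicles, orders))
```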

GROUND PLANE ADJUSTMENT IN A VIRTUAL REALITY ENVIRONMENT

An HMD device is configured to vertically adjust the ground plane of a rendered virtual reality environment that has varying elevations to match the flat real-world floor, so that the device user can move around to navigate and explore the environment while always being properly located on the virtual ground rather than above or underneath it. Rather than continuously adjusting the virtual reality ground plane, which can introduce cognitive dissonance discomfort to the user, the HMD device establishes, when the user is not engaged in some form of locomotion (e.g., walking), a threshold radius around the user within which virtual ground plane adjustment is not performed. The user can make movements within the threshold radius without the HMD device shifting the virtual terrain. When the user moves past the threshold radius, the device performs an adjustment as needed to match the ground plane of the virtual reality environment to the real-world floor.
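The threshold-radius policy can be sketched as a small state machine: keep an anchor point, and only re-level the virtual ground when the user's horizontal displacement from the anchor exceeds the radius. Class name, radius value, and the offset convention are illustrative assumptions.

```python
# Sketch of the threshold-radius ground-plane policy described above.
import math

class GroundPlaneAdjuster:
    def __init__(self, threshold_radius=1.5):
        self.threshold_radius = threshold_radius
        self.anchor = (0.0, 0.0)   # last position where we adjusted
        self.plane_offset = 0.0    # current vertical correction

    def update(self, user_xy, terrain_height):
        """Re-anchor and adjust only when outside the threshold radius."""
        dx = user_xy[0] - self.anchor[0]
        dy = user_xy[1] - self.anchor[1]
        if math.hypot(dx, dy) > self.threshold_radius:
            self.anchor = user_xy
            self.plane_offset = -terrain_height  # bring virtual ground to floor
            return True                          # adjustment performed
        return False                             # within radius: no shift

adjuster = GroundPlaneAdjuster(threshold_radius=1.5)
print(adjuster.update((0.5, 0.5), terrain_height=0.3))  # small move: no shift
print(adjuster.update((2.0, 0.0), terrain_height=0.3))  # beyond radius: shift
```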

METHOD AND SYSTEM FOR MEASURING GEOMETRIC PARAMETERS OF THROUGH HOLES
20180003477 · 2018-01-04

A method of measuring geometric parameters of through holes in a thin substrate includes acquiring images of select sub-volumes of the substrate using an optical system having a depth of field greater than a thickness of the substrate. The acquired images are processed to determine the desired geometric parameters.

Sequential Diffractive Pattern Projection

The present disclosure relates to structured illumination. The teachings thereof may be embodied in devices for reconstruction of a three-dimensional surface of an object by means of structured illumination for projection of measurement patterns onto the object. For example, a device may include: a projector unit for diffractive projection of a measurement pattern comprising a plurality of measurement points onto the surface; an acquisition unit for acquiring the measurement pattern from the surface; and a computer unit for reconstruction of the surface from a respective distortion of the measurement pattern. All possible positions of measurement elements are contained in the measurement pattern in repeating groups, in which a respective combination of measurement points represents a respective location in the measurement pattern.