H04N13/271

CLUSTER RESOURCE MANAGEMENT IN DISTRIBUTED COMPUTING SYSTEMS

Techniques are provided for managing resources among clusters of computing devices in a computing system. Resource reassignment messages are generated to indicate that servers are reassigned in response to compute loads exceeding or falling below certain thresholds. Techniques also include establishing communications with the reassigned servers to assign compute loads without physically relocating the servers from one cluster to another.
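The threshold-driven reassignment described above can be sketched as follows. This is a minimal illustration, not the patented method: the class and function names (`Cluster`, `reassignment_messages`) and the threshold values are hypothetical.

```python
# Hypothetical sketch of threshold-based, logical server reassignment
# between clusters. All names and thresholds are illustrative.

HIGH_LOAD = 0.85   # load fraction above which a cluster needs extra servers
LOW_LOAD = 0.25    # load fraction below which a cluster can lend servers

class Cluster:
    def __init__(self, name, servers, load):
        self.name = name        # cluster identifier
        self.servers = servers  # list of server IDs
        self.load = load        # current compute load, 0.0-1.0

def reassignment_messages(clusters):
    """Generate reassignment messages when loads cross thresholds.

    Servers are reassigned by message only; nothing is physically moved.
    """
    donors = [c for c in clusters if c.load < LOW_LOAD and c.servers]
    needy = sorted((c for c in clusters if c.load > HIGH_LOAD),
                   key=lambda c: c.load, reverse=True)
    messages = []
    for target in needy:
        if not donors:
            break
        donor = donors[0]
        server = donor.servers.pop()
        messages.append({"server": server,
                         "from": donor.name,
                         "to": target.name})
        if not donor.servers:
            donors.pop(0)
    return messages

a = Cluster("A", ["s1", "s2"], load=0.10)
b = Cluster("B", [], load=0.95)
msgs = reassignment_messages([a, b])
print(msgs)  # one message reassigning a server from A to B
```

The key point the abstract makes is that the message, not the hardware, carries the reassignment: the overloaded cluster then opens communications with the named server and assigns it work in place.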

Augmented reality based management of a representation of a smart environment

A capability for managing a representation of a smart environment is presented herein. The capability for managing a representation of a smart environment is configured to support augmented reality (AR)-based management of a representation of a smart environment, which may include AR-based generation of a representation of the smart environment, AR-based alignment of the representation of the smart environment with the physical reality of the smart environment, and the like.

High resolution depth map computation using multiresolution camera clusters for 3D image generation
09729857 · 2017-08-08

Techniques for generating 3D images using multi-resolution camera clusters are described. In one example embodiment, the method includes disposing a multi-resolution camera set including a central camera, having a first resolution, and multiple camera clusters, having one or more resolutions that are different from the first resolution, positioned substantially surrounding the central camera. Images are then captured using the camera set. A high-resolution depth map is then computed from the captured images using a hierarchical approach. The 3D image is then generated using the computed high-resolution depth map.
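The "hierarchical approach" to depth computation generally means coarse-to-fine estimation: solve for disparity on downsampled images, then upsample the estimate and refine it with a narrow search at each finer level. A minimal sketch of that idea, assuming rectified grayscale image pairs and a naive per-pixel matcher (the function names and the simple absolute-difference cost are illustrative, not the patent's algorithm):

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling; assumes even image dimensions
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def match_row(left, right, prior, radius):
    """Per-pixel horizontal disparity, searching +/-radius around a prior."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            best, best_cost = prior[y, x], np.inf
            p = int(prior[y, x])
            for d in range(p - radius, p + radius + 1):
                xr = x - d
                if 0 <= xr < w:
                    cost = abs(left[y, x] - right[y, xr])
                    if cost < best_cost:
                        best_cost, best = cost, d
            disp[y, x] = best
    return disp

def hierarchical_disparity(left, right, levels=3, radius=2):
    """Coarse-to-fine disparity: estimate at the coarsest level, then
    upsample (doubling values) and refine at each finer level."""
    pyramid = [(left, right)]
    for _ in range(levels - 1):
        l, r = pyramid[-1]
        pyramid.append((downsample(l), downsample(r)))
    disp = np.zeros(pyramid[-1][0].shape)
    for l, r in reversed(pyramid):
        if disp.shape != l.shape:
            disp = np.kron(disp, np.ones((2, 2))) * 2  # upsample estimate
        disp = match_row(l, r, disp, radius)
    return disp
```

The payoff of the hierarchy is that each level only searches a small window around the upsampled coarse estimate, so large disparities are found cheaply at low resolution and only refined at full resolution.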

THREE-DIMENSIONAL POINT CLOUD GENERATION USING MACHINE LEARNING
20220311987 · 2022-09-29

An example method for training a machine learning model is provided. The method includes receiving training data collected by a three-dimensional (3D) imager, the training data comprising a plurality of training sets. The method further includes generating, using the training data, a machine learning model from which a disparity map can be inferred from a pair of images that capture a scene where a light pattern is projected onto an object.

Method for estimating distance, and system and computer-readable medium for implementing the method
09729861 · 2017-08-08

A method for estimating a distance between a first target and a second target in an image is to be implemented using a distance estimation system that includes a processor module. In the method, the processor module is programmed to: generate an image depth map associated with the image; generate first position information associated with a first position which corresponds to the first target in the image, and second position information associated with a second position which corresponds to the second target in the image; and compute an estimate of a distance between the first target and the second target based on at least the image depth map, the first position information, and the second position information.
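The computation the abstract describes — combining a depth map with two image positions to estimate a real-world distance — can be sketched by back-projecting each pixel to a 3D point using a pinhole camera model. This is an illustrative reconstruction, not the patented implementation; the intrinsics parameters (`fx`, `fy`, `cx`, `cy`) are assumed known from calibration.

```python
import math

def estimate_distance(depth, p1, p2, fx, fy, cx, cy):
    """Estimate the distance between two targets at pixel positions
    p1 and p2, given a depth map and pinhole camera intrinsics
    (fx, fy: focal lengths in pixels; cx, cy: principal point)."""
    def backproject(u, v):
        z = depth[v][u]            # depth at pixel (u, v)
        x = (u - cx) * z / fx      # standard pinhole back-projection
        y = (v - cy) * z / fy
        return (x, y, z)
    a = backproject(*p1)
    b = backproject(*p2)
    return math.dist(a, b)         # Euclidean distance between 3D points
```

With a flat depth map and unit intrinsics, two targets at pixels (0, 0) and (3, 4) come out exactly 5 units apart, which is a convenient sanity check of the back-projection.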

Object detection and tracking

Various embodiments enable a primary user to be identified and tracked using stereo association and multiple tracking algorithms. For example, a face detection algorithm can be run on each image captured by a respective camera independently. Stereo association can be performed to match faces between cameras. If the faces are matched and a primary user is determined, a face pair is created and used as the first data point in memory for initializing object tracking. Further, features of a user's face can be extracted and the change in position of these features between images can determine what tracking method will be used for that particular frame.
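The two mechanisms above — stereo association of face detections across cameras, and per-frame selection of a tracking method from feature displacement — can be sketched as follows. Both functions, their thresholds, and the (x, y) face-center representation are illustrative assumptions, not the patent's design; the stereo matcher assumes rectified cameras so corresponding faces share roughly the same row.

```python
import math

def match_faces(faces_left, faces_right, max_y_diff=10):
    """Stereo association: pair (x, y) face centers from two cameras
    whose vertical positions agree (rectified-stereo assumption)."""
    pairs = []
    used = set()
    for fl in faces_left:
        for j, fr in enumerate(faces_right):
            if j not in used and abs(fl[1] - fr[1]) <= max_y_diff:
                pairs.append((fl, fr))  # face pair: first tracking data point
                used.add(j)
                break
    return pairs

def choose_tracker(prev_features, curr_features, motion_threshold=15.0):
    """Select a tracking method for this frame from the largest feature
    displacement: big motion -> re-run detection, else light tracking."""
    disp = max(math.dist(p, c)
               for p, c in zip(prev_features, curr_features))
    return "detect" if disp > motion_threshold else "optical_flow"
```

The design point is that detection runs independently per camera, and only the cross-camera pairing step turns detections into a trackable primary user; the per-frame method switch then trades accuracy for speed based on how fast the face is moving.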

Electronic device and method for applying image effect to images obtained using image sensor

Electronic devices and methods for processing images are provided. The method includes obtaining a first image and a second image through a first image sensor, extracting depth information from at least one third image obtained through a second image sensor, applying the extracted depth information to the obtained first image and displaying the first image, and applying the extracted depth information to the obtained second image.
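A common instance of "applying depth information as an image effect" is depth-dependent blur: pixels whose depth exceeds a threshold are blurred while nearer pixels stay sharp. The sketch below is one illustrative reading of such an effect, not the patented method; the naive box blur and the single-channel arrays are simplifying assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k x k box blur with edge padding (single-channel image)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_effect(img, depth, threshold):
    """Blur pixels whose depth exceeds threshold (background bokeh),
    leaving foreground pixels untouched."""
    blurred = box_blur(img)
    mask = depth > threshold
    out = img.astype(float).copy()
    out[mask] = blurred[mask]
    return out
```

This mirrors the abstract's structure: depth is extracted once (here, taken as given) and then applied to each captured image as a per-pixel effect.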
