Techniques for Producing Three-Dimensional Models from One or More Two-Dimensional Images
20220392165 · 2022-12-08
Inventors
CPC classification
G06N7/01, G06T2200/08, G06V20/70, G06T19/20, G06V10/7715, G06T17/20, G06T7/143, G06T2207/20016, G06F17/16, G06T7/187 (all PHYSICS)
International classification
G06T17/20, G06T3/40, G06T7/187, G06V10/22, G06V10/74, G06V10/77 (all PHYSICS)
Abstract
Described are techniques for producing a three-dimensional model of a scene from one or more two-dimensional images. The techniques include receiving, by a computing device, one or more two-dimensional digital images of a scene, the image including plural pixels; applying the received image data to a scene generator/scene understanding engine that produces from the one or more digital images a metadata output that includes depth prediction data for at least some of the plural pixels in the two-dimensional image and that produces metadata for controlling a three-dimensional computer model engine; and outputting the metadata to a three-dimensional computer model engine to produce a three-dimensional digital computer model of the scene depicted in the two-dimensional image.
Claims
1. A method comprises: receiving by a computing device one or more two-dimensional digital images of a scene, the image including plural pixels; applying the received image data to a scene generator/scene understanding engine that produces from the one or more digital images a metadata output that includes depth prediction data for at least some of the plural pixels in the two-dimensional image and that produces metadata for controlling a three-dimensional computer model engine; and outputting the metadata to a three-dimensional computer model engine to produce a three-dimensional digital computer model of the scene depicted in the two-dimensional image.
2. The method of claim 1 wherein receiving further comprises: receiving, with the one or more images, reference measurements associated with objects depicted in the one or more images.
3. The method of claim 1 wherein applying the received image to the scene generator/scene understanding engine further comprises: identifying objects within the image scene; and applying labels to the identified objects.
4. The method of claim 3 wherein identifying objects further comprises: extracting each labeled object's region; and determining and outputting pixel corner coordinates, height and width, and confidence values into the metadata output to provide specific instructions for a 3D modeling engine to produce the 3D model.
5. The method of claim 3 further comprises: generating, with the metadata, statistical information on the identified objects within the image scene.
6. The method of claim 1 further comprises: inferring depths of pixels in the image.
7. The method of claim 6 wherein inferring depths of pixels in the image comprises: transforming the input image by a superpixel segmentation that combines small homogenous regions of pixels into superpixels that function as single inputs.
8. The method of claim 7 wherein inferring depths of pixels in the image further comprises: determining a penalty function for superpixels by: determining unary values over each of the superpixels; determining pairwise values over each of the superpixels; and determining a combination of the unary and the pairwise values.
9. The method of claim 8 wherein the unary processing returns a depth value for a single superpixel and the pairwise processing communicates with neighboring superpixels having similar appearance to produce similar depths for those neighboring superpixels.
10. The method of claim 8 wherein the unary processing for a single superpixel is determined by: inputting the single superpixel into a fully convolutional neural net that produces a convolutional map that has been up-sampled to the original image size; applying the up-sampled convolutional map and the superpixel segmentation over the original input image to a superpixel average pooling layer to produce feature vectors; and inputting the feature vectors to a fully connected output layer to produce a unary output for the superpixel.
11. The method of claim 8 wherein the pairwise processing for a single superpixel is determined by: collecting similar feature vectors from all neighboring superpixel patches adjacent to the single superpixel; cataloguing unique feature vectors of the superpixel and neighboring superpixel patches into collections of similar and unique features; and inputting the collections into a fully connected layer that outputs a vector of similarities between the neighboring superpixel patches and the single superpixel.
12. The method of claim 11 wherein the unary output is fed into a conditional random fields graph model to produce an output depth map that contains information relating to the distance of surfaces of scene objects from a reference point.
13. The method of claim 3 wherein the depth prediction engine processes input digital pixel data through a pre-trained convolutional neural network to produce a depth map.
14. A method comprises: producing by one or more computer systems a spatial depth prediction of pixels in an image by: transforming pixels in an image into a segmented representation by applying small homogenous regions of pixels to a segmentation of the image to produce superpixel representations of the homogenous regions; and processing the segmented representation by determining pairwise energy potentials and unary energy potentials to produce a depth map.
15. The method of claim 14 wherein pairwise potential processing comprises: matching neighboring superpixels; calculating differences of feature vectors; calculating a fully connected output layer; calculating a conditional random fields graph model loss layer; and producing an output depth map for the input image.
16. The method of claim 14 wherein the unary energy potential processing comprises: calculating a superpixel segmentation from a superpixel map and an output from a convolutional map of up-sampled data; calculating a superpixel average pool; and calculating a fully connected output layer producing an output depth map for the input image.
17. A computer implemented method of producing a three-dimensional computer model shell comprises: producing metadata that instruct a three-dimensional modelling engine to produce a spatial shell and to place objects within the spatial shell, with placed objects being placeholders that are swapped for other objects; producing the shell by: producing a scaffold mesh; producing a bounding box; producing a clean final mesh; and producing the three-dimensional model as a shell.
18. The method of claim 17 wherein object placement comprises: calculating the original coordinate position of bounding boxes; transforming object positions and orientations based on original coordinates; and placing the objects into the three-dimensional computer model shell.
Description
DESCRIPTION OF DRAWINGS
DETAILED DESCRIPTION
[0034] Described are systems, methods, and computer program products to generate three-dimensional (‘3D’) models from ‘as built’ environments as captured in a single image or, in some embodiments, in plural images. The images used are two-dimensional (‘2D’) images that capture 3D ‘as built’ environments.
[0035] Referring to
[0036] The server 101 (or the client device 102 in the other embodiments) has the following capabilities: the capability to receive/take images (pictures); the capability to process image data; and the capability to execute other processes that will be discussed below. Examples of devices that satisfy these minimal capabilities include computers (portable or desktop), tablet computer devices, smartphones, and personal digital assistant devices.
[0037] In the discussion below, the computing device 101 is a server computer 101 and the user device 102 communicates with the server computer 101. The description below will focus on a smartphone as the user device 102, however it is understood that this is but one non-limiting example. The term “smartphone” is used to describe a mobile phone device that executes an advanced mobile operating system. The smartphone has hardware and a mobile operating system with features of personal computer hardware and operating systems along with features required for mobile or handheld operation, such as those functions needed for use of the smartphone as a cell phone and includes GPS (global position system) navigation. The smartphone executes applications (apps) such as a media player, as well as browsers, and other apps. Smartphones typically can access the Internet and have a touchscreen user interface.
[0038] The computing device 101, i.e., server computer 101, includes one or more processors 104, memory 106 coupled to the one or more processors, and storage 108. The system 100 may include a display 110, user interfaces 112, e.g., keypads, etc., and I/O interfaces 114, e.g., ports, etc., all coupled via a bus 116.
[0039] In memory 106, a server process 120 translates a 2D image into a 3D model. Process 120 includes an image processing module 122 that receives an input image from, e.g., a camera of the device 100 or from a storage device, and pre-processes the image for use with a scene generator engine/machine learning service, i.e., a transformation service 124 that includes functions as discussed below. The transformation service 124, upon processing of the image, returns metadata (not shown) that are provided to a 3D modelling engine 128 to produce a 3D model output. Typically, the transformation service 124 is a server side process. The user device 102 includes image capture and upload client side processes 119.
[0040] Referring now to
[0041] For example, the reference measurement data can be provided as an input by the user as a client side process 120a (i.e., the user provides reference measurements of the height and width of a door in the image). Alternatively, the reference measurement data can be provided as a server side process 120b by a scene understanding service 124a that recognizes a door, refrigerator or any other item in the image, and that accesses a database (not shown) of dimensional specifications pertaining to the recognized item. The database can be provided by a data source, e.g., on the Internet, etc.
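As a rough illustration of how a reference measurement could anchor real-world scale, the sketch below derives a units-per-pixel factor from an object of known size, such as a door. The function names and the 80-inch/400-pixel values are hypothetical and not taken from the specification:

```python
def scale_factor(reference_height_units, reference_height_pixels):
    """Units (e.g., inches) per pixel, from a reference object of known height."""
    return reference_height_units / reference_height_pixels

def pixels_to_units(pixel_length, factor):
    """Convert a measurement in pixels to real-world units."""
    return pixel_length * factor

# Hypothetical example: a door about 80 inches tall spans 400 pixels.
factor = scale_factor(80.0, 400)            # 0.2 inches per pixel
width_inches = pixels_to_units(250, factor) # a 250-pixel span -> 50.0 inches
```

Any object the scene understanding service recognizes with known dimensional specifications (a refrigerator, a standard door) could supply the reference in the same way.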
[0042] The uploaded image 123a with reference measurement(s) 123b are input to the transformation service 124. The transformation service 124 includes the scene understanding service 124a and a depth prediction service 124b that produce metadata 126 for input into a 3D modeling engine 128 to produce a 3D model output, and/or into a 2D representation engine 130 to produce a 2D representation output, such as an image, floor plan, elevation, and/or section views. Using the metadata 126 from the transformation service 124, the 3D modelling engine 128 produces its output. Typically, the metadata 126 will be in a format corresponding to a format used by the 3D modeling engine 128. Any commercially available 3D modeling engine could be used, with the format of the metadata compliant with the format of instructions for the particular 3D modeling engine that is used.
[0043] The metadata output 126 is first fed to, or used to call, Application Programming Interfaces (APIs) for the particular 3D modeling engine 128 or the 2D engine 130. The API will use the calculated metadata as input to configure the particular 3D modeling engine 128 or the 2D engine 130 to produce a 3D model or a 2D representation.
[0044] Alternatively, the metadata are fed to a formatter 131 or a formatter 133 that formats the data into input or ‘instructions’ (not shown) that can be directly input to control the 3D modeling engine 128 or the 2D engine 130. Formatters 131 and 133 can be part of the process 120 (as shown), can be a separate process, or can be part of the 3D modeling engine 128 or the 2D engine 130. The exact nature of the formatters 131 and 133 would depend on specific requirements of the 3D modeling engine 128 and/or the 2D engine 130.
[0045] The metadata produced by process 120 are descriptive/numerical material, whereas the instructions or ‘metadata instructions’ of the API are procedural and prescriptive. The metadata are descriptive or numerical ‘data’ used subsequently within procedural operations, but they do not inherently specify procedure. For example, a 3D object shape can be defined by metadata as vectors or as pixels, but the metadata does not specify how it will be used to render the shape.
[0046] In some implementations, rather than or in addition to producing metadata for conversion by the modeling engine into the 3D model, the process 102a can produce a numerical output description from the metadata. The metadata can be analyzed to produce numerical outputs, e.g., statistical data 129, for use in applications such as statistical analysis of visual input data, such as images. One example is to identify a population density of people and objects in a room and their relative relationship amongst each other. The movement of people and/or objects can help to identify and to predict how a built space will and can be occupied, based on previous information captured by the machine learning service 120.
[0047] Referring now to
[0048] Referring now to
[0049] The server side processing 120b receives the user uploaded image(s) 123a with reference measurement(s) 123b. The image 123a is sent to server 102 and saved in a database 156. The image in its original state, State 0 (
[0050] The Pixel Location A is input to the depth prediction service 124b to enable the depth prediction service 124b to determine a pixel window or pixel location B 172 and calculate an average depth in pixels 174. The resulting metadata 126 from the depth prediction service 124b is posted to the database 156. The server 102 queries for all the processed results derived from undergoing process 120 to produce a file, accompanying the metadata 126, which serves as ‘instructions’ to configure the 3D modeling engine 128 (see
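The "average depth in pixels 174" step can be pictured as a mean over a rectangular pixel window (pixel location B). The sketch below is an illustrative assumption about that computation, not the patented implementation, and the tiny depth map is hypothetical:

```python
def average_depth(depth_map, top, left, height, width):
    """Mean depth over a rectangular pixel window within a depth map.

    depth_map is a 2D list of per-pixel depth values; (top, left) is the
    window's upper-left corner, with the given height and width in pixels.
    """
    window = [depth_map[r][c]
              for r in range(top, top + height)
              for c in range(left, left + width)]
    return sum(window) / len(window)

# Hypothetical 3x3 depth map; average over the top-left 2x2 window.
depth_map = [[1.0, 2.0, 3.0],
             [4.0, 5.0, 6.0],
             [7.0, 8.0, 9.0]]
average_depth(depth_map, 0, 0, 2, 2)  # -> 3.0
```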
[0051] Referring now to
[0052] Aspects of the CNN implementation will now be described. A CNN or “convolutional neural network” is a type of deep neural network commonly applied to analyzing visual imagery. CNNs are regularized versions of multilayer perceptrons, e.g., fully connected networks where each neuron in one layer is connected to all neurons in a succeeding layer. This characteristic of “full-connectedness” makes CNNs prone to ‘overfitting’ data. In order to address overfitting, “regularization” of a CNN assembles complex patterns using smaller and simpler patterns.
[0053] The transformation service 124 identifies 184 objects within the image scene and applies labels 185a (classification). The transformation service 124 extracts each labeled object's region within the image scene by producing a rectangular window around such labeled objects. From this rectangular window the transformation service 124 determines and outputs pixel corner coordinates 185b, height and width and confidence values 185b that are generated into metadata 126 that includes specific instructions for the 3D modeling engine (
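A minimal sketch of how a labeled object's rectangular window might be turned into the metadata fields described above (pixel corner coordinates, height, width, confidence) follows. The record layout and the coordinate values are hypothetical, chosen only to echo the "chair" example appearing later in this description:

```python
def detection_to_metadata(label, top_left, bottom_right, confidence):
    """Build a metadata record from a labeled object's rectangular window.

    top_left and bottom_right are (x, y) pixel coordinates of the window.
    """
    (x0, y0), (x1, y1) = top_left, bottom_right
    return {
        "label": label,
        "corners": [[x0, y0], [x1, y0], [x0, y1], [x1, y1]],
        "width": x1 - x0,    # bounding window width in pixels
        "height": y1 - y0,   # bounding window height in pixels
        "confidence": confidence,
    }

# Hypothetical detection: a chair in a 56x148-pixel window.
detection_to_metadata("chair", (82, 220), (138, 368), 0.99)
```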
[0054] Referring now to
[0055] Referring now to
[0056] To predict depth of an image, an ‘energy’ (e.g., a penalty) function is determined. This energy function is determined as a combination of unary 225a and pairwise 225b potentials over the superpixels. Unary processing 225a returns a depth value for a single superpixel, while the pairwise processing 225b communicates with neighboring superpixels having similar appearance to produce similar depths for those neighboring superpixels. As used herein, unary refers to an operation with only one operand, i.e., a single input, in this instance a single superpixel.
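A toy version of such an energy function can make the unary/pairwise split concrete. The squared-penalty form below is an assumption for illustration (the specification does not fix the functional form); it penalizes deviation from each superpixel's CNN-predicted depth (unary) and depth differences between similar-looking neighbors (pairwise):

```python
def energy(depths, unary, neighbors, weight=1.0):
    """E(d) = sum_i (d_i - z_i)^2 + weight * sum_(i,j) (d_i - d_j)^2

    depths: candidate depth per superpixel.
    unary: per-superpixel depth z_i predicted by the unary (CNN) branch.
    neighbors: (i, j) index pairs of adjacent, similar-appearance superpixels.
    """
    e = sum((d - z) ** 2 for d, z in zip(depths, unary))
    e += weight * sum((depths[i] - depths[j]) ** 2 for i, j in neighbors)
    return e

# Two neighboring superpixels with similar appearance prefer similar depths:
energy([2.0, 2.0], unary=[2.0, 2.0], neighbors=[(0, 1)])  # -> 0.0
energy([1.0, 3.0], unary=[2.0, 2.0], neighbors=[(0, 1)])  # -> 6.0 (penalized)
```

Minimizing this energy over all superpixel depths is what drives neighboring, similar superpixels toward similar depth values.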
[0057] The unary potential processing 225a for a single superpixel is determined as follows: a superpixel is input to a fully convolutional neural net 226 that produces 228 a convolutional map that has been up-sampled (as defined below) to the original image size. The fully convolutional neural net (CNN), as mentioned above, is a conventional machine learning architecture effective in image recognition and classification. While convolutional neural nets are a conventional machine learning solution to those skilled in the art, the production of a superpixel and its use as input into a CNN is generally non-conventional. The up-sampling process applies to a data sample (e.g., a pixel in this case) a filter or series of filters to artificially increase the pixel in scale. The outputs of the convolutional map up-sampling are H×L matrices (that represent height and length), which are converted into n×1 one-dimensional arrays 231, e.g., the superpixel segmentation.
[0058] The up-sampled convolutional map(s) 228 as well as the superpixel segmentation 231 over the original input image are fed into a superpixel average pooling layer 232, where a pooling process is performed for each superpixel region. As used herein, “pooling” refers to a down-sampling process where the number of parameters is down-sampled for computational efficiency in the network and controlled “overfitting.” The outputs of the superpixel average pooling layer 232 are feature vectors that are used as input to a fully connected output layer to produce a unary output.
[0059] The fully connected output layer (not shown) is a neural layer in the CNN, typically the last layer, where each neuron in that layer is connected to all neurons in the prior superpixel average pooling layer. This layer produces the unary output. Thus, the CNN typically takes small features from the image and uses these small features as input to compute and learn different types of regions of the images to classify. In this case, the CNN is taking a superpixel as input.
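The superpixel average pooling step can be illustrated with a small, library-free sketch: average the per-pixel feature vectors of the up-sampled convolutional map within each superpixel region. The feature map, segmentation labels, and values below are hypothetical stand-ins for real CNN output:

```python
def superpixel_average_pool(feature_map, segmentation):
    """Average per-pixel feature vectors within each superpixel region.

    feature_map[r][c] is a feature vector for pixel (r, c);
    segmentation[r][c] is the superpixel id that pixel belongs to.
    Returns one averaged feature vector per superpixel id.
    """
    sums, counts = {}, {}
    for feat_row, seg_row in zip(feature_map, segmentation):
        for feat, sp in zip(feat_row, seg_row):
            acc = sums.setdefault(sp, [0.0] * len(feat))
            for k, v in enumerate(feat):
                acc[k] += v
            counts[sp] = counts.get(sp, 0) + 1
    return {sp: [v / counts[sp] for v in acc] for sp, acc in sums.items()}

# Hypothetical 2x2 image, 1-D features, two superpixels (ids 0 and 1).
features = [[[1.0], [3.0]],
            [[5.0], [7.0]]]
segments = [[0, 0],
            [1, 1]]
superpixel_average_pool(features, segments)  # -> {0: [2.0], 1: [6.0]}
```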
[0060] The pairwise function processing 225b for a single superpixel is determined as follows: similar feature vectors are collected from all neighboring superpixel patches (areas of superpixels that are adjacent, e.g., neighbors, to the single superpixel). Neighbors of a superpixel are those superpixels that are adjacent to, e.g., within a distance value of 1 to n of, the single superpixel, where “n” is empirically determined. Unique feature vectors of the superpixel and neighboring superpixel patches, such as color histograms, textures, and other feature types 242, are identified and catalogued. The results of these collections of similar and unique features are an output that is fed into the fully connected layer (or shared parameters) 244, which outputs a vector of similarities between the neighboring superpixel patches and the single superpixel.
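One plausible way to turn feature-vector differences into a vector of similarities over neighboring patches is sketched below. The L1-distance form is an assumption for illustration (the specification leaves the similarity measure to the learned fully connected layer), and the feature values are hypothetical color-histogram-like numbers:

```python
def pairwise_similarity(center_feat, neighbor_feats):
    """Similarity of a superpixel to each neighboring patch.

    Uses 1 / (1 + L1 distance) between feature vectors, so identical
    features score 1.0 and dissimilar features approach 0.0.
    """
    sims = []
    for nf in neighbor_feats:
        dist = sum(abs(a - b) for a, b in zip(center_feat, nf))
        sims.append(1.0 / (1.0 + dist))
    return sims

# Hypothetical 2-D features: first neighbor identical, second very different.
pairwise_similarity([0.2, 0.8], [[0.2, 0.8], [0.9, 0.1]])  # -> [1.0, ~0.417]
```

High-similarity neighbors are exactly the pairs the pairwise potential then pushes toward similar depth values.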
[0061] The unary output is fed into the Conditional Random Fields graph model, or “CRF” Loss Layer, in order to produce the output depth map 248. The fully connected layer output 244, in conjunction with the unary terms 234, serves as input into the CRF Loss Layer so as to minimize the negative log-likelihood 246 for producing an output depth map 248. CRF is a graph model, familiar to those skilled in the art, used for prediction (e.g., image de-noising); it is used in this description to model relations between neighboring superpixels. The output depth map is an image or image channel that contains information relating to the distance of the surfaces of scene objects from a reference viewpoint.
[0062] Referring now to
[0063] Metadata 126 are a compilation of input data for the 3D modeling engine 128 to produce the foregoing outputs. The compilation of the metadata 126 includes, by way of example: a Uniform Resource Locator (“URL”), i.e., a link to the output depth map, and the file path to the original image uploaded by the user; a URL link to relevant 3D objects on the server 102; the average depth or placement of the 3D objects measured in pixels, which is converted into 3D World Coordinates; object labels; the “confidence” or degree-of-accuracy values 185 calculated by the computer regarding the degree of confidence in the classification and labeling of a scene object; and the height and width of a bounding window in which the labeled object is located within a scene.
[0064] Metadata example—illustrative data structure representation.
TABLE-US-00001
File path to original image uploaded by user; calculated URL to output depth map

Object | URL link | average depth (pixels) | label (semantic label) | confidence value (e.g., a numeric or percent value) | height of bounding window (pixels) | width of bounding window (pixels)
1 | * | * | * | * | * | *
2 | * | * | * | * | * | *
...
n | * | * | * | * | * | *
[0065] Pseudo-code examples for metadata sets are set out below. The metadata contains a URL link to the depth map computed by the service 10 and the file path to the original image uploaded by the user.
[0066] The objects_data field contains the URL of the 3d object on the server, the average depth or placement of that object in pixels, which is then converted into 3d world coordinates.
[0067] “Bottom_Left” is the origin of the 3d object; “classification” is the object label, “confidence” is a measure of the degree of accuracy with which the computer vision algorithm has classified an object; and, “height” and “width” refer to the window size of the classified object in an uploaded image.
[0068] The metadata file 126 in Example 1 below contains:
TABLE-US-00002
image data: "img_data": {
  depth data: "depth_data": null,
  depth data file name: "depth_filepath": "306-depth.png",
  a file path: "filepath": "306.png",
  a picture id: "picture_id": 306,
  reference units: "reference_units": "inches",
and, for objects, semantic data: depth image; coordinates; labels; confidence values; object location values
[0069] Consider the item “chair”. The file has the location of the structure, “address”: “https://hostalabs.com”; its “ave_depth” as “2.811230421066284” inches; its location as “bottom_left” with World coordinates [82, 368]; a color, “color name”: “dimgray”; a confidence value, “confidence”: 0.9925103187561035, which corresponds to the confidence in the classification of the item and its determined parameters; the item's “est_height” as “23.339647726281814” inches; a “pixel_height” of 148 and a “pixel_width” of 56; and a color palette, “rgb color”: [139, 112, 93] values.
TABLE-US-00003
EXAMPLE 1
{
  "img_data": {
    "depth_data": null,
    "depth_filepath": "306-depth.png",
    "filepath": "306.png",
    "picture_id": 306,
    "reference_units": "inches",
    "semantic_data": {
      "bed": [
        {
          "address": "https://hostalabs.com",
          "ave_depth": 2.7333521842956543,
          "bottom_left": [207, 396],
          "color name": "silver",
          "confidence": 0.8699102401733398,
          "est_height": 35.22271524362479,
          "pixel_height": 234,
          "pixel_width": 318,
          "rgb color": [191, 192, 190]
        }
      ],
      "chair": [
        {
          "address": "https://hostalabs.com",
          "ave_depth": 2.811230421066284,
          "bottom_left": [82, 368],
          "color name": "dimgray",
          "confidence": 0.9925103187561035,
          "est_height": 23.339647726281814,
          "pixel_height": 148,
          "pixel_width": 56,
          "rgb color": [139, 112, 93]
        }
      ],
      "house": [
        {
          "ave_depth": 3.2572555541992188,
          "bottom_left": [5, 412],
          "color name": "darkgray",
          "confidence": 0.8631517887115479,
          "est_height": 53.081768782844975,
          "pixel_height": 405,
          "pixel_width": 545,
          "rgb color": [168, 163, 146]
        }
      ],
      "microwave": [
        {
          "address": "https://hostalabs.com",
          "ave_depth": 4.044846534729004,
          "bottom_left": [76, 217],
          "color name": "darkgray",
          "confidence": 0.9977370500564575,
          "est_height": 16.444784844696372,
          "pixel_height": 40,
          "pixel_width": 61,
          "rgb color": [150, 159, 168]
        }
      ],
      "tv": [
        {
          "address": "https://hostalabs.com",
          "ave_depth": 1.3527330160140991,
          "bottom_left": [0, 305],
          "color name": "black",
          "confidence": 0.9985752105712891,
          "est_height": 33.44170615321548,
          "pixel_height": 157,
          "pixel_width": 48,
          "rgb color": [36, 34, 28]
        }
      ]
    },
    "user": "samp_user",
    "wall_data": {
      "number_of_points": 5,
      "points": [
        [53, 6, 3.1678709983825684],
        [134, 59, 4.941289901733398],
        [148, 69, 5.741237640380859],
        [167, 67, 5.656834602355957],
        [504, 38, 2.6623311042785645]
      ]
    }
  }
}
TABLE-US-00004
EXAMPLE 2
{
  "img_data": {
    "depth_image": "http://ec2-52-91-230-86.compute-1.amazonaws.com/instance/29-depth.png",
    "filepath": "29.png",
    "objects_data": [
      {
        "address": "https://dl.dropbox.com/s/ii5pren6gpz1n6p/glasscup.obj?dl=0",
        "ave_depth": 111.14193548387097,
        "bottom_left": [240, 400],
        "classification": "cup",
        "confidence": 0.18342258036136627,
        "height": 20,
        "width": 31
      },
      {
        "address": "https://dl.dropbox.com/s/q9xvoca4kybeop0/sink2.obj?dl=0",
        "ave_depth": 49.61813186813187,
        "bottom_left": [192, 224],
        "classification": "sink",
        "confidence": 0.1300784796476364,
        "height": 14,
        "width": 78
      },
      {
        "address": "https://dl.dropbox.com/s/n5fhe9gl8dd04xl/bottle.obj?dl=0",
        "ave_depth": 144.66666666666666,
        "bottom_left": [169, 139],
        "classification": "bottle",
        "confidence": 0.21171382069587708,
        "height": 21,
        "width": 9
      },
      {
        "address": null,
        "ave_depth": 81.10242914979757,
        "bottom_left": [181, 307],
        ...
      }
    ]
  }
}
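Metadata files of the kind shown in Example 1 can be consumed programmatically. The sketch below parses a trimmed-down fragment (using only fields that appear in Example 1) and filters detected objects by confidence; the helper function name and the threshold are hypothetical:

```python
import json

# A trimmed-down fragment in the shape of Example 1's metadata.
metadata = json.loads("""
{ "img_data": { "reference_units": "inches",
  "semantic_data": { "chair": [ {
      "ave_depth": 2.811230421066284,
      "bottom_left": [82, 368],
      "confidence": 0.9925103187561035,
      "est_height": 23.339647726281814,
      "pixel_height": 148, "pixel_width": 56 } ] } } }
""")

def objects_above_confidence(meta, threshold):
    """Labels of detected objects whose confidence exceeds a threshold."""
    semantic = meta["img_data"]["semantic_data"]
    return [label for label, objs in semantic.items()
            if any(o["confidence"] > threshold for o in objs)]

objects_above_confidence(metadata, 0.9)  # -> ["chair"]
```

A 3D modeling engine's API wrapper or formatter could apply the same kind of traversal to extract placement and scale data from the full file.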
[0070] Referring now to
[0076] The server provides the metadata file data to a 3D modeling engine (3D engine) 282. The 3D engine 282 uses the metadata 126 to produce a 3D model representation, e.g., Shell, of the interior space 284 as well as retrieve 3D objects from the server 102 for placement 288 into three-dimensional space, where such objects contain an option to swap object prototypes 288.
[0077] To produce the geometric shell of an interior space 284, a compatible scale converter is used to perform an image conversion 290 to produce depth values that the 3D engine system can interpret. The depth values are used to extrude the depth image into a volumetric mesh that serves as a scaffold mesh 294 for a final mesh 296. The scaffold mesh 294 is used to calculate a bounding box 294 around itself that will be used to generate the clean final mesh 296 that becomes the 3D shell output 298.
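The bounding-box step over the scaffold mesh can be sketched as an axis-aligned min/max over mesh vertices. The function and the tiny set of extruded vertices below are illustrative assumptions, not the engine's actual mesh representation:

```python
def bounding_box(vertices):
    """Axis-aligned bounding box of a mesh's (x, y, z) vertices.

    Returns the (min corner, max corner) pair enclosing all vertices,
    which a modeling engine could use to clip and clean the final mesh.
    """
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical vertices extruded from a 2x2 depth image.
verts = [(0, 0, 2.5), (1, 0, 2.7), (0, 1, 3.0), (1, 1, 2.9)]
bounding_box(verts)  # -> ((0, 0, 2.5), (1, 1, 3.0))
```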
[0078] To place 3D representative items in the interior shell, a system for 3D object retrieval and placement instructions 286 is produced for the 3D modeling engine 282. Each 3D object contains a coordinate point defined as its origin position as Start X and Start Y 300 for transfer to a destined (transformed) position and orientation in 3D coordinate space 302. The foregoing data is provided by the metadata 126 that contain the calculated average-depth-pixel-and-size-factor conversions for scaling the model proportionally.
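The placement step above (origin Start X, Start Y plus an average pixel depth, scaled into 3D coordinate space) might be sketched as follows. The linear mapping and the units-per-pixel factor are assumptions for illustration; real engines would also apply the transformed orientation:

```python
def place_object(start_x, start_y, ave_depth_pixels, units_per_pixel):
    """Map an object's 2D origin and average pixel depth to 3D world
    coordinates (x, y, z), scaled by a units-per-pixel conversion factor."""
    return (start_x * units_per_pixel,
            start_y * units_per_pixel,
            ave_depth_pixels * units_per_pixel)

# Hypothetical values: origin (82, 368) px, 50 px average depth, 0.2 units/px.
place_object(82, 368, 50.0, 0.2)  # -> approximately (16.4, 73.6, 10.0)
```

Because every object uses the same conversion factor derived from the metadata, the model stays proportionally scaled as objects are placed or swapped.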
[0079] Referring to
[0080] The user submits the preprocessed image (or the image) to the server 102 (
[0081] The input image is processed by the scene generator 124 that outputs the 3D model from the 3D modeling engine 128. An output file including the 3D model can be input into a design UI system 320, where external design components and inventory 322 are integrated into the design UI system. The state (or snapshot) 324 of each design change can be kept on the server 102, and social components 326 can also be integrated into the system.
[0082] Referring now to
[0083] Referring now to
[0084] Referring now to
[0085] Referring now to
[0086] Referring now to
[0087] Referring now to
[0088] The uploaded image 123a with reference measurement(s) 123b are input to the transformation service 124′. The transformation service 124′ includes sub-system components: the scene understanding service 124a, a depth prediction sub-system component 124b, and image segmentation 124c. Data from the image segmentation component 124c are used in corner point computations 352. Data from the depth prediction component 124b and the corner point computations 352 are used with measurement computations 352 to produce computed measurement data from which the metadata 126 are produced. The metadata are input, via an API or formatter, into the 3D engine 128 to produce an output 3D model (not shown), or the metadata are input, via an API or formatter, into the 2D engine 130 to produce a 2D representation output (not shown), such as an image, floor plan, elevation, and/or section views. In addition, or alternatively, the metadata can be analyzed to produce statistical output 129.
[0089] Memory stores program instructions and data used by the processor of the system. The memory may be a suitable combination of random access memory and read-only memory, may host suitable program instructions (e.g., firmware or operating software) and configuration and operating data, and may be organized as a file system or otherwise. The program instructions stored in the memory may further store software components allowing network communications and establishment of connections to the data network. The software components may, for example, include an internet protocol (IP) stack, as well as driver components for the various interfaces, including the interfaces and the keypad. Other software components suitable for establishing a connection and communicating across the network will be apparent to those of ordinary skill.
[0090] Servers may include one or more processing devices (e.g., microprocessors), a network interface, and a memory (all not illustrated). The server may physically take the form of a rack mounted card and may be in communication with one or more operator devices. The processor of each server acts as a controller and is in communication with, and controls overall operation of, each server. The processor may include, or be in communication with, the memory that stores processor executable instructions controlling the overall operation of the server. Software may include a suitable Internet protocol (IP) stack and applications/clients.
[0091] Each server may be associated with an IP address and port(s) by which it communicates with the user devices to handle off loaded processing. The servers may be computers, thin-clients, or the like.
[0092] All or part of the processes described herein and their various modifications (hereinafter referred to as “the processes”) can be implemented, at least in part, via a computer program product, i.e., a computer program tangibly embodied in one or more tangible, non-transitory physical hardware storage devices that are computer and/or machine-readable storage devices for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
[0093] Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the processes. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
[0094] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
[0095] Tangible, physical hardware storage devices that are suitable for embodying computer program instructions and data include all forms of non-volatile storage, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks and volatile computer memory, e.g., RAM such as static and dynamic RAM, as well as erasable memory, e.g., flash memory.
[0096] In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other actions may be provided, or actions may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Likewise, actions depicted in the figures may be performed by different entities or consolidated.
[0097] Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the processes, computer programs, described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.
[0098] Other implementations not specifically described herein are also within the scope of the following claims.