Iterative determination of decline curve transition in unconventional reservoir modelling
10634815 · 2020-04-28
Assignee
Inventors
CPC classification
E21B2200/20
FIXED CONSTRUCTIONS
E21B43/00
FIXED CONSTRUCTIONS
G06F17/18
PHYSICS
E21B41/00
FIXED CONSTRUCTIONS
International classification
G06F17/18
PHYSICS
G01V99/00
PHYSICS
Abstract
Apparatus and associated methods relate to a computerized system for predicting the quantity of oil and/or gas production at an oil site, where a prediction curve for oil and/or gas data transitions from a first fitted curve (e.g., a hyperbolic decline curve) to a second fitted curve (e.g., an exponential decline curve) at a transition point, the transition point being determined by progressively/iteratively identifying curvature changes in the first fitted curve over an initial time period by comparing a running list of terminal decline rates (Dmin) with a predetermined curvature threshold, and setting the occurrence of the transition point at the point where the rate of change of the terminal decline rate is less than the predetermined curvature threshold. In an illustrative example, the second fitted curve may use the value of Dmin that minimizes the deviation between successive forecasts.
Claims
1. A computer-implemented method for predicting oil and/or gas production at an oil and gas production site, the method comprising: receiving input from a user identifying a database (230) containing information about an oil-gas extraction site (OGES), the information originating from sensors at the OGES; transmitting production data from the database (230) to a prediction engine (240); fitting a first monotonically decreasing function (F.sub.1) to the production data over a first time period to generate a first production forecast, using the prediction engine (240); iteratively comparing, using the prediction engine (240): (1) a predicted production value (Y.sub.pred,i) that is a function of the first monotonically decreasing function, with (2) an empirical production value (Y.sub.emp,i), to iteratively determine whether the empirical production value is less than the predicted production value; if the empirical production value is less than the predicted production value (Y.sub.emp,i<Y.sub.pred,i), then iteratively determining a constant-rate decline rate (D.sub.min,i) of the first monotonically decreasing function that matches the empirical production value; iteratively comparing, using the prediction engine (240): (1) a difference between successive constant-rate decline rates (Δ.sub.i=D.sub.min,i−D.sub.min,i-1), with (2) a predetermined error minimization threshold (D.sub.thresh), to iteratively determine whether the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined error minimization threshold, wherein the predetermined error minimization threshold (D.sub.thresh) is a user-defined parameter; if the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined error minimization threshold (Δ.sub.i<D.sub.thresh), then determining a transition point (T) where the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined error minimization 
threshold; and, generating a second production forecast over a second time period as a function of a second monotonically decreasing function (F.sub.2) different from the first monotonically decreasing function (F.sub.1), using the prediction engine (240), wherein a combined production forecast comprises the first production forecast and the second production forecast, the first production forecast transitioning to the second production forecast at the transition point (T) in the combined production forecast.
2. The method of claim 1, wherein iteratively comparing the predicted production value (Y.sub.pred,i) with the empirical production value (Y.sub.emp,i) comprises comparing a predicted N-day moving average across N predicted production values with an empirical N-day moving average across N empirical production values.
3. The method of claim 2, wherein N=30 days.
4. The method of claim 1, wherein iteratively determining the constant-rate decline rate (D.sub.min,i) comprises goal seeking the constant-rate decline rate.
5. The method of claim 1, further comprising: resetting a terminal decline rate (D.sub.min) to a predetermined minimum rate value after each iteration of iteratively comparing the difference between successive constant-rate decline rates (.sub.i) with the predetermined error minimization threshold (D.sub.thresh).
6. The method of claim 5, wherein the predetermined minimum rate value is about 1%.
7. The method of claim 1, wherein the first monotonically decreasing function (F.sub.1) comprises a hyperbolic Arps function.
8. The method of claim 7, wherein the second monotonically decreasing function (F.sub.2) comprises an exponential Arps function.
9. The method of claim 8, wherein the terminal decline rate of the hyperbolic Arps function equals the constant decline rate of the Arps exponential function at the determined transition point (T).
10. A computer-implemented method for predicting oil and/or gas production at an oil and gas production site, the method comprising: receiving input from a user identifying a database (230) containing information about an oil-gas extraction site (OGES), the information originating from sensors at the OGES; transmitting production data from the database (230) to a prediction engine (240); fitting a first monotonically decreasing function (F.sub.1) to the production data over a first time period to generate a first production forecast, using the prediction engine (240); iteratively comparing, using the prediction engine (240): (1) a predicted production value (Y.sub.pred,i) that is a function of the first monotonically decreasing function, with (2) an empirical production value (Y.sub.emp,i), to iteratively determine whether the empirical production value is less than the predicted production value; if the empirical production value is less than the predicted production value (Y.sub.emp,i<Y.sub.pred,i), then iteratively updating at least one parameter of the first monotonically decreasing function to match the empirical production value; iteratively comparing, using the prediction engine (240): (1) a difference (Δ.sub.i) between successive values of the updated at least one parameter, with (2) a predetermined error minimization threshold to determine, for each iteration, whether the difference (Δ.sub.i) between the successive values of the updated at least one parameter is less than the predetermined error minimization threshold; if the difference (Δ.sub.i) is less than the predetermined error minimization threshold, then determining a transition point (T) where the difference between successive parameter values (Δ.sub.i) is less than the predetermined error minimization threshold; and, generating a second production forecast over a second time period as a function of a second monotonically decreasing function (F.sub.2) different from the first monotonically decreasing function 
(F.sub.1), using the prediction engine (240), wherein a combined production forecast comprises the first production forecast and the second production forecast, the first production forecast transitioning to the second production forecast at the transition point (T) in the combined production forecast.
11. The method of claim 10, wherein the first monotonically decreasing function (F.sub.1) comprises a hyperbolic Arps function.
12. The method of claim 11, wherein the second monotonically decreasing function (F.sub.2) comprises an exponential Arps function.
13. The method of claim 10, wherein the updated at least one parameter of the first monotonically decreasing function comprises a b exponent (b.sub.exp) of a hyperbolic function, and wherein the difference (Δ.sub.i) between successive values of the updated at least one parameter comprises the difference between successive b exponent values (Δ.sub.i=b.sub.exp,i−b.sub.exp,i-1).
14. The method of claim 10, wherein the updated parameter of the first monotonically decreasing function comprises a constant-rate decline rate (D.sub.min,i), and wherein the difference (Δ.sub.i) between successive values of the updated at least one parameter comprises the difference between successive constant-rate decline rates (Δ.sub.i=D.sub.min,i−D.sub.min,i-1).
15. The method of claim 14, wherein the updated at least one parameter of the first monotonically decreasing function further comprises a b exponent (b.sub.exp) of a hyperbolic function, and wherein the difference (Δ.sub.i) between successive values of the updated at least one parameter comprises the difference between successive b exponent values (Δ.sub.i=b.sub.exp,i−b.sub.exp,i-1).
16. The method of claim 10, wherein iteratively comparing the predicted production value (Y.sub.pred,i) with the empirical production value (Y.sub.emp,i) comprises comparing a predicted N-day moving average across N predicted production values with an empirical N-day moving average across N empirical production values.
17. A computer-implemented method for predicting oil and/or gas production at an oil and gas production site, the method comprising: receiving input from a user identifying a database (230) containing information about an oil-gas extraction site (OGES), the information originating from sensors at the OGES; transmitting production data from the database (230) to a prediction engine (240); fitting a first monotonically decreasing function (F.sub.1) to the production data over a first time period to generate a first production forecast, using the prediction engine (240); iteratively comparing, using the prediction engine (240): (1) a predicted production value (Y.sub.pred,i) that is a function of the first monotonically decreasing function, with (2) an empirical production value (Y.sub.emp,i), to iteratively determine whether the empirical production value is less than the predicted production value; if the empirical production value is less than the predicted production value (Y.sub.emp,i<Y.sub.pred,i), then iteratively determining a constant-rate decline rate (D.sub.min,i) of the first monotonically decreasing function that matches the empirical production value by applying a Newton-Raphson bisection method to optimizingly determine the constant-rate decline rate (D.sub.min,i); iteratively comparing, using the prediction engine (240): (1) a difference between successive constant-rate decline rates (Δ.sub.i=D.sub.min,i−D.sub.min,i-1), with (2) a predetermined error minimization threshold (D.sub.thresh), to iteratively determine whether the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined error minimization threshold; if the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined error minimization threshold (Δ.sub.i<D.sub.thresh), then determining a transition point (T) where the difference between successive constant-rate decline rates (Δ.sub.i) is less than the predetermined 
error minimization threshold; generating a second production forecast over a second time period as a function of a second monotonically decreasing function (F.sub.2) different from the first monotonically decreasing function (F.sub.1), using the prediction engine (240), wherein a combined production forecast comprises the first production forecast and the second production forecast, the first production forecast transitioning to the second production forecast at the transition point (T) in the combined production forecast.
18. The computer-implemented method of claim 17, wherein iteratively comparing the predicted production value (Y.sub.pred,i) with the empirical production value (Y.sub.emp,i) comprises comparing a predicted N-day moving average across N predicted production values with an empirical N-day moving average across N empirical production values.
19. The computer-implemented method of claim 17, wherein the first monotonically decreasing function (F.sub.1) comprises a hyperbolic Arps function.
20. The computer-implemented method of claim 19, wherein the second monotonically decreasing function (F.sub.2) comprises an exponential Arps function.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(6) Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
(8) Fitted to the set of empirical data points 102 are oil and/or gas prediction curves 104, 104A, 104B, 104C, 104D, and 104E. An initial prediction curve 104 may be used as a seed to initialize an oil and/or gas prediction algorithm. Intermediate curves 104A-D are used in iterative calculations for determining the final prediction curve 104E. The initial and/or intermediate prediction curves 104, 104A-E may be formed of two fitted segments defined by two exemplary decline curve functions (e.g., a hyperbolic Arps function followed by an exponential Arps function). For example, the final prediction curve 104E may be formed of a first fitted segment 106 and a second fitted segment 108. In this sense, the curves 104, 104A-E may be regarded as piece-wise defined functions. In this exemplary depiction, the first segment 106 is defined by a first decline curve function (e.g., a hyperbolic Arps function), while the second segment 108 is defined by a second decline curve function (e.g., an exponential Arps function).
(9) The first segment 106 transitions to the second segment 108 at an optimally determined transition point T. The transition point T may be determined by iteratively comparing a difference between successive decline rates of the intermediate curves 104A-104D to a predetermined minimum decline rate threshold. By progressively and iteratively identifying curvature changes in the empirical data 102 over an initial time period, various computer-implemented oil and/or gas prediction processes disclosed herein may reliably and objectively identify the point in time (or point in cumulative oil/gas production) where oil and/or gas production at a given oil and gas site transitions from a first decline curve function (e.g., hyperbolic Arps function) to a second decline curve function (e.g., exponential Arps function).
(10) As shown in the exemplary depiction of
(11) Next, the process will iterate to a next/second x-axis value X.sub.2 (in this case, 31,000 CUME, as shown in
(12) At this point, the process will then take a difference Δ.sub.1 between the first decline rate R.sub.1 and the second decline rate R.sub.2 (Δ.sub.1=R.sub.1−R.sub.2), and compare this difference to a predetermined decline curvature threshold D.sub.thresh. In this case, Δ.sub.1=R.sub.1−R.sub.2=63.2%−61.0%=2.2%. Therefore, assuming a D.sub.thresh of 0.2%, Δ.sub.1=2.2%>0.2%=D.sub.thresh. Because Δ.sub.1>D.sub.thresh, the process continues/iterates to a next/third x-axis value X.sub.3.
(13) In this example, a next/third x-axis value X.sub.3=32,000 CUME, as shown in
(14) At this point, and similar to above, the process will then take a difference Δ.sub.2 between the second decline rate R.sub.2 and the third decline rate R.sub.3 (Δ.sub.2=R.sub.2−R.sub.3), and compare this difference to the predetermined decline curvature threshold D.sub.thresh. In this case, Δ.sub.2=R.sub.2−R.sub.3=61.0%−59.0%=2.0%. Therefore, Δ.sub.2=2.0%>0.2%=D.sub.thresh. Because Δ.sub.2>D.sub.thresh, the process continues/iterates to a next/fourth x-axis value X.sub.4 (not shown).
(15) The above process will iterate/loop through each successive pair of curves and associated terminal decline rates, until Δ.sub.n<D.sub.thresh. In this illustrative example, an n.sup.th x-axis value X.sub.n=56,000 CUME is associated with an actual production value of 95 BOPD, as shown in
(16) An (n+1).sup.th x-axis value X.sub.n+1=57,000 CUME is associated with an actual production value of 91 BOPD, as shown in
(17) At this point, and similar to above, the process will then take a difference Δ.sub.n between the n.sup.th decline rate R.sub.n and the (n+1).sup.th decline rate R.sub.n+1 (Δ.sub.n=R.sub.n−R.sub.n+1), and compare this difference to the predetermined decline curvature threshold D.sub.thresh. In this case, Δ.sub.n=R.sub.n−R.sub.n+1=42.9%−42.8%=0.1%. Therefore, Δ.sub.n=0.1%<0.2%=D.sub.thresh. Because Δ.sub.n<D.sub.thresh, the process determines that the n.sup.th fitted curve 104E is the final fitted curve that transitions from the first fitted segment 106 to the second fitted segment 108 at transition point T having an x-axis value equal to X.sub.n. A computer-implemented method that uses the predetermined threshold as a decline curvature condition for a transition from a first fitted segment to a second fitted segment may advantageously yield more accurate and realistic predictions of oil and/or gas production for a given oil and gas site (as detailed in
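The iteration of paragraphs (12)-(17) can be sketched in Python. This is an illustrative sketch only, not the claimed implementation: `find_transition` is a hypothetical helper name, and the rate list abbreviates the worked example (the iterations between 59.0% and 42.9% are omitted).

```python
# Illustrative sketch: scan successive terminal decline rates R_k and stop
# at the first pair whose difference falls below the decline curvature
# threshold D_thresh, as in the worked example above.
def find_transition(rates, d_thresh):
    """Return (k, delta) at the first pair with R_k - R_{k+1} < d_thresh,
    or None if every difference stays at or above the threshold."""
    for k in range(len(rates) - 1):
        delta = rates[k] - rates[k + 1]
        if delta < d_thresh:
            return k, delta
    return None

# Abbreviated example data (percent); intermediate iterations omitted.
rates = [63.2, 61.0, 59.0, 42.9, 42.8]
k, delta = find_transition(rates, d_thresh=0.2)
print(k, round(delta, 1))  # stops at the (42.9, 42.8) pair with delta 0.1
```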
(18) In some implementations, the process may use multiple data points (both predicted and empirical) for determining a terminal decline rate R.sub.k. For example, the process may take a 30-day moving average of predicted data points (e.g., as predicted in any of the curves 104, 104A-E), and compare it to an associated 30-day moving average of empirical data points (e.g., a subset of data points 102). A moving average approach may advantageously smooth out the empirical data, and consequently, smooth out the differences Δ.sub.k's.
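The moving-average smoothing described above may be sketched as follows. This is a hedged sketch, not the patented code: a 3-sample window and invented BOPD values stand in for the 30-day average over actual field data.

```python
# Trailing moving average over the last `window` samples; used to smooth
# both the predicted and the empirical production series before comparison.
def moving_average(values, window):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

daily_bopd = [100, 98, 99, 97, 96, 95]        # hypothetical empirical BOPD
print(moving_average(daily_bopd, window=3))   # smoothed series
```

The smoothed series is shorter than the input by `window - 1` points, since the first full window only closes at the `window`-th sample.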
(19) In a preferred embodiment, the first segment 106 may be characterized by a hyperbolic function according to the hyperbolic Arps Equation:
(20) q=q.sub.0(1+bD.sub.i t).sup.−1/b
(21) while the second segment 108 may be characterized by an exponential function according to the exponential Arps Equation (with the parameter b set to zero):
q=q.sub.0e.sup.−Dt
(22) The exponential form of the Arps Equation generally possesses a steeper decline versus the hyperbolic form for the same values of q.sub.0 and D.sub.i. The effective decline rate (D) is a constant only for an exponential decline. In contrast, the effective decline rate decreases with time for a hyperbolic decline. In an exemplary case where the first segment 106 is a hyperbolic Arps function and the second segment an exponential Arps function, the exponential decline rate of the exponential Arps function may be equal to the terminal decline rate of the hyperbolic Arps function. In such applications, a combined fitted curve (e.g., curve 104E) may advantageously yield more conservative estimates for oil and gas production sites (particularly in late-stage wells) that are more in line with actual/empirical production data.
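The combined piece-wise forecast described in paragraph (22) can be sketched under the two Arps forms above. This is a minimal sketch, not the patented method: the well parameters `q0`, `di`, `b`, and the transition time `T` are invented, and the exponential segment's decline rate is set equal to the hyperbolic instantaneous (terminal) decline rate at the transition.

```python
import math

def arps_hyperbolic(t, q0, di, b):
    # Hyperbolic Arps: q(t) = q0 * (1 + b*Di*t)^(-1/b)
    return q0 * (1.0 + b * di * t) ** (-1.0 / b)

def instantaneous_decline(t, di, b):
    # Effective decline rate of the hyperbolic curve: D(t) = Di / (1 + b*Di*t)
    return di / (1.0 + b * di * t)

def combined_forecast(t, q0, di, b, t_switch):
    # Hyperbolic segment up to the transition, exponential segment after it,
    # with rate and decline matched at t_switch so the curve is continuous.
    if t <= t_switch:
        return arps_hyperbolic(t, q0, di, b)
    q_t = arps_hyperbolic(t_switch, q0, di, b)
    d_t = instantaneous_decline(t_switch, di, b)
    return q_t * math.exp(-d_t * (t - t_switch))

q0, di, b, T = 500.0, 0.8, 1.1, 3.0  # hypothetical well parameters
```

Because the exponential segment inherits both the rate and the decline at T, the combined curve is continuous there while declining more steeply than the hyperbolic curve would afterward, giving the more conservative late-stage estimate noted above.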
(24) The interface 215 communicates data and information to and from a data management engine 225, which controls the flow of data and information within the prediction system 220. The data management engine 225 is configured to send data to/receive data from an oil and gas database 230 and a prediction engine 240. The oil and gas database 230 stores historical and/or real-time data about oil and gas production from, for example, oil wells 205. The prediction engine 240 uses the data in the databases to make predictions using iterative decline threshold analysis algorithms.
(25) The prediction engine 240 includes at least one processor 245, non-volatile memory (NVM) 250, random-access memory 255, and an interface 260. The interface 260 transmits data to, and receives data from, the data management engine 225. The interface 260 communicates with the processor 245, which executes one or more pre-programmed sets of instructions that may be stored in a data store. In the depicted example, the data store is illustrated as the nonvolatile memory 250 (e.g., P1 and P2). The processor 245 also is operably connected and configured to employ the random-access memory (RAM) 255. The programs stored in the nonvolatile memory 250 may include pre-programmed implementations of the methods described within this disclosure (such as the method 400 in
(26) When the processor 245 executes the set of pre-programmed instructions stored in nonvolatile memory 250 (and the set of pre-programmed instructions in the engines 265 and 270), it communicates this information to the interface 260, which relays the information back to the data management engine 225. The interface 215 then takes this information relayed to the data management engine 225 and communicates it to a user interface 275. The user interface 275 can display the various analytical tools to a user (e.g., decline curve analysis tools). The user interface 275 can also receive input from a user, which can be translated into instructions for the processor 245 to implement (by sending it through the interfaces 215 and 260).
(28) For example, a graph 300A depicted in
(29) The graph 300B depicted in
(30) A graph 300C depicted in
(32) At step 430, the process optimizingly varies the terminal decline rate to determine an optimal terminal decline rate D.sub.min,i, such that the prediction curve achieves a predicted value that matches the empirical production value for that value of i (e.g., such that the predicted BOPD on a specific day is equal to an empirical BOPD on that same day). Next, at step 435, the process records the determined rate D.sub.min,i. Next, at step 440, the process compares a difference between successive values of D.sub.min,i to a predetermined curvature threshold D.sub.thresh. If the difference Δ.sub.i=D.sub.min,i−D.sub.min,i-1 is greater than the predetermined threshold D.sub.thresh, then the process continues to step 425 where the loop counter i is incremented to i=i+1, and then the process resumes at step 415. Note that for i=1, step 440 may be skipped and the process may go directly to step 425 (as there is no difference Δ.sub.0 since only one value of D.sub.min,i has been determined at i=1). If at step 440, the difference Δ.sub.i=D.sub.min,i−D.sub.min,i-1 is less than the predetermined threshold D.sub.thresh, then the process continues to step 445.
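Steps 415-445 may be sketched as below, with two loud simplifications: the first prediction curve is reduced to a pure exponential q(t) = q0·e^(−d·t) rather than the full hyperbolic fit, and plain bisection stands in for the goal-seek of step 430. All names and data are hypothetical.

```python
import math

def goal_seek_dmin(q0, t, q_emp, lo=1e-4, hi=2.0, iters=60):
    """Bisect for the decline rate d such that q0*exp(-d*t) matches q_emp
    (a stand-in for the goal-seek of step 430)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if q0 * math.exp(-mid * t) > q_emp:
            lo = mid          # predicted too high -> need a faster decline
        else:
            hi = mid
    return (lo + hi) / 2.0

def transition_scan(q0, empirical, d_thresh):
    """Loop over empirical points; return (i, D_min_i) at the first
    iteration where the change in D_min drops below d_thresh."""
    d_prev = None
    for i, (t, q_emp) in enumerate(empirical, start=1):
        d_i = goal_seek_dmin(q0, t, q_emp)
        if d_prev is not None and abs(d_i - d_prev) < d_thresh:
            return i, d_i
        d_prev = d_i
    return None

q0 = 100.0
d_true = [0.50, 0.40, 0.35, 0.33, 0.329]      # hypothetical implied rates
empirical = [(i, q0 * math.exp(-d * i)) for i, d in enumerate(d_true, start=1)]
print(transition_scan(q0, empirical, d_thresh=0.01))
```

The goal-seek recovers each implied rate exactly, so the scan stops at the fifth point, where the successive rates 0.33 and 0.329 differ by less than the 0.01 threshold.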
(33) At step 445, the process generates a final/combined prediction curve that includes the first prediction curve (having a D.sub.min value of D.sub.min,i for the current value of i), and a second prediction curve. The first prediction curve transitions to the second prediction curve at transition point T occurring at the current value of i (which may be associated with a specific date/time and/or CUME production value). Therefore, the final/combined fitted curve may be piece-wise defined by the first prediction curve for all j<i, and the second prediction curve for all j>i.
(34) In various implementations, the steps 415-430 may use moving average values as inputs, as opposed to predicted/empirical values on a single specific day/CUME value. For example, the process may use a predicted M-day moving average of predicted production values, and compare this predicted M-day moving average to an associated empirical M-day moving average. The number M may, for example, be about 2 days, 5 days, 7 days, 10 days, 20 days, 30 days, 60 days, or about 90 days or more. A moving average approach may advantageously smooth out the empirical data, and consequently, smooth out the differences Δ.sub.k's. The parameter M may be a user-defined parameter, in some embodiments, such that the user may advantageously adjust a moving average window size to fit the specific application.
(35) In various implementations, the threshold D.sub.thresh may be a user-defined value. For example, a user may pre-set D.sub.thresh to be about 0.1%, 0.5%, 1%, 2%, 3%, 5%, or about 10% or more. By using a user-defined tolerance for D.sub.thresh, the process 400 may advantageously allow a user to tighten or loosen the decline curve transition point conditions to suit a wide range of empirical data scenarios.
(36) In some examples, an additional step may be performed that requires the difference Δ.sub.i to be less than the threshold D.sub.thresh for a predetermined number of iterations before going to step 445. For example, a (user) pre-defined value of i.sub.thresh may be an additional conditional comparison, such that after step 440, the process 400 determines for how many iterations the difference Δ.sub.i is less than the threshold D.sub.thresh. Once the process 400 has looped through step 440 a user-predetermined i.sub.thresh number of times determining that the difference Δ.sub.i is less than the threshold D.sub.thresh, the process may then finally transition to step 445, in at least some implementations. In various examples, the value i.sub.thresh may be about 2, 3, 5, 10, or about 20.
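The optional i.sub.thresh condition above can be sketched as a run-length check. A sketch only: the delta stream and both thresholds are illustrative, and the paragraph does not specify whether the run must be consecutive, which this sketch assumes.

```python
# Require the delta to stay below d_thresh for i_thresh consecutive
# iterations before accepting the transition point.
def confirm_transition(deltas, d_thresh, i_thresh):
    """Return the 0-based index at which the delta has been below d_thresh
    for i_thresh consecutive iterations, or None."""
    run = 0
    for i, delta in enumerate(deltas):
        run = run + 1 if delta < d_thresh else 0
        if run >= i_thresh:
            return i
    return None

deltas = [2.2, 2.0, 0.15, 0.3, 0.1, 0.05, 0.08]   # percent, hypothetical
print(confirm_transition(deltas, d_thresh=0.2, i_thresh=3))
```

Note that the single dip to 0.15% does not trigger the transition, since the following 0.3% resets the run; this is the robustness the extra condition buys.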
(37) In various examples, the value of D.sub.min may be reset every time the process 400 loops through step 425. For example, at each step 415, the process 400 may reset the D.sub.min value of the first prediction curve to a (user) predetermined de minimis terminal decline rate. In various examples, the de minimis decline rate may be about 0.1%, 0.2%, 0.5%, 1%, 1.5%, 2%, or about 5%. By continually resetting the value of D.sub.min, the process 400 may substantially ensure that the first prediction curve exhibits a variable-rate decline (e.g., a hyperbolic decline determined by a hyperbolic Arps function).
(39) A computer-implemented process 500 starts at step 505 with the process generating a first prediction curve over a first time period. The first prediction curve may be a hyperbolic curve with a variable decline rate, in some embodiments. The first prediction curve is constructed by fitting to actual empirical production data during the first time period. Next, at step 510 the process initializes a loop counter i to i=1. The loop counter at i=1 may represent a specific x-axis point (e.g., time or CUME) where the iterative process begins (see, e.g.,
(40) At step 530, the process optimizingly varies the b exponent of a hyperbolic function to determine an optimal b exponent (b.sub.exp,i), such that the prediction curve achieves a predicted value that matches the empirical production value for that value of i (e.g., such that the predicted BOPD on a specific day is equal to an empirical BOPD on that same day). Next, at step 535, the process records the determined parameter value b.sub.exp,i. Next, at step 540, the process compares a difference between successive values of b.sub.exp,i to a predetermined b exponent threshold b.sub.thresh. If the difference Δ.sub.i=b.sub.exp,i−b.sub.exp,i-1 is greater than the predetermined threshold b.sub.thresh, then the process continues to step 525 where the loop counter i is incremented to i=i+1, and then the process resumes at step 515. Note that for i=1, step 540 may be skipped and the process may go directly to step 525 (as there is no difference Δ.sub.0 since only one value of b.sub.exp,i has been determined at i=1). If at step 540, the difference Δ.sub.i=b.sub.exp,i−b.sub.exp,i-1 is less than the predetermined threshold b.sub.thresh, then the process continues to step 545.
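The goal-seek of step 530 on the b exponent may likewise be sketched with bisection (an assumption; the disclosure does not prescribe bisection for this step). The sketch exploits the fact that, at fixed t, the hyperbolic Arps value increases monotonically with b; `q0`, `di`, and the observation are invented.

```python
import math

def arps_hyperbolic(t, q0, di, b):
    # Hyperbolic Arps: q(t) = q0 * (1 + b*Di*t)^(-1/b)
    return q0 * (1.0 + b * di * t) ** (-1.0 / b)

def goal_seek_bexp(q0, di, t, q_emp, lo=0.01, hi=4.0, iters=60):
    """Bisect for b such that the hyperbolic curve passes through (t, q_emp).
    Larger b flattens the decline, so the predicted value at fixed t
    increases monotonically with b."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if arps_hyperbolic(t, q0, di, mid) < q_emp:
            lo = mid          # predicted too low -> need a larger b
        else:
            hi = mid
    return (lo + hi) / 2.0

q0, di, t_obs = 500.0, 0.8, 3.0
q_emp = arps_hyperbolic(t_obs, q0, di, 1.1)   # synthetic observation at b=1.1
b_recovered = goal_seek_bexp(q0, di, t_obs, q_emp)
print(round(b_recovered, 3))  # recovers the b used to generate the point
```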
(41) At step 545, the process generates a final/combined prediction curve that includes the first prediction curve (having a b.sub.exp value of b.sub.exp,i for the current value of i), and a second prediction curve. The first prediction curve transitions to the second prediction curve at transition point T occurring at the current value of i (which may be, for example, associated with a specific date/time and/or CUME production value). Therefore, the final/combined fitted curve may be piece-wise defined by the first prediction curve for all j<i, and the second prediction curve for all j>i.
(42) In various implementations, the steps 515-530 may use moving average values as inputs, as opposed to predicted/empirical values on a single specific day/CUME value. For example, the process may use a predicted M-day moving average of predicted production values, and compare this predicted M-day moving average to an associated empirical M-day moving average. The number M may, for example, be about 2 days, 5 days, 7 days, 10 days, 20 days, 30 days, 60 days, or about 90 days or more. A moving average approach may advantageously smooth out the empirical data, and consequently, smooth out the differences Δ.sub.k's. The parameter M may be a user-defined parameter, in some embodiments, such that the user may advantageously adjust a moving average window size to fit the specific application.
(43) In various implementations, the threshold b.sub.thresh may be a user-defined value. For example, a user may pre-set b.sub.thresh to be about 0.1%, 0.5%, 1%, 2%, 3%, 5%, or about 10% or more. By using a user-defined tolerance for b.sub.thresh, the process 500 may advantageously allow a user to tighten or loosen the decline curve transition point conditions to suit a wide range of empirical data scenarios.
(44) In some examples, an additional step may be performed that requires the difference Δ.sub.i to be less than the threshold b.sub.thresh for a predetermined number of iterations before going to step 545. For example, a (user) pre-defined value of i.sub.thresh may be an additional conditional comparison, such that after step 540, the process 500 determines for how many iterations the difference Δ.sub.i is less than the threshold b.sub.thresh. Once the process 500 has looped through step 540 a user-predetermined i.sub.thresh number of times determining that the difference Δ.sub.i is less than the threshold b.sub.thresh, the process may then finally transition to step 545, in at least some implementations. In various examples, the value i.sub.thresh may be about 2, 3, 5, 10, or about 20.
(45) Although various embodiments have been described with reference to the Figures, other embodiments are possible. For example, although hyperbolic and exponential Arps functions may be used in some embodiments, in various implementations, other types of decline curves/functions may be used. In at least one implementation, a process may use a Duong decline curve as a first or a second predicted curve. In various examples, the phrase "terminal exponential decline rate" may be referred to as a "constant-rate decline rate." In various implementations, the phrase "decline curvature threshold" may be referred to as a "decline constancy threshold."
(46) With reference to
(47) Although some embodiments may be described in terms of production output as a function of time (e.g., days), other functional relationships are possible. For example, some implementations may be understood in terms of production output as a function of cumulative output metrics (e.g., CUME).
(48) In one exemplary aspect, a method may include, if the empirical production value is less than the predicted production value (Y.sub.emp,i<Y.sub.pred,i), then iteratively determining updated parameters of a first monotonically decreasing function to generate a second monotonically decreasing function that matches the empirical production value. Then, the method may include iteratively comparing: (1) a difference between successive parameters (e.g., D.sub.min, b.sub.exp), with (2) a predetermined constancy threshold (D.sub.thresh, b.sub.thresh), to iteratively determine whether the difference between successive parameter values is less than the predetermined constancy threshold. If the difference between successive parameter values (Δ.sub.i, e.g., Δ.sub.i=D.sub.min,i−D.sub.min,i-1 or Δ.sub.i=b.sub.exp,i−b.sub.exp,i-1) is less than the predetermined constancy threshold (Δ.sub.i<D.sub.thresh or Δ.sub.i<b.sub.thresh), the method may include determining a transition point (T) where the difference between successive parameter values (Δ.sub.i) is less than the predetermined constancy threshold.
(49) In some implementations, the iterative comparison method may include a hybrid comparison comprising a function of more than one parameter. For example, the iterative method may include optimizingly varying both the constant decline rate D.sub.min and the b exponent b.sub.exp. In some embodiments, each iteration may include a search to simultaneously solve for optimal values of both parameters that would make the predicted production value for that iteration substantially match (e.g., within a predetermined tolerance) the empirical production value for that iteration. In some examples, a first parameter may be optimizingly varied on a first schedule, and the second parameter may be optimizingly varied on a second iteration schedule. In some such examples, the first and second schedules may alternate every predetermined number of iterations. In some implementations, the first schedule and the second schedule may be different from each other. By way of example and not limitation, the first schedule may call for optimizingly varying the first parameter every 5 iterations, and the second schedule may call for optimizingly varying the second parameter on iterations during which the first parameter is not being varied.
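The alternating two-parameter schedule described above can be sketched as a simple iteration plan. This is a hedged sketch of the scheduling idea only; the period of 5 comes from the example in the paragraph, and the parameter names are the document's.

```python
# Vary D_min on every 5th iteration and b_exp on all others, per the
# example schedule above; the goal-seek itself is not modeled here.
def hybrid_schedule(n_iters, period=5):
    """Yield which parameter to vary on each iteration (1-based)."""
    for i in range(1, n_iters + 1):
        yield "D_min" if i % period == 0 else "b_exp"

print(list(hybrid_schedule(10)))  # D_min appears on iterations 5 and 10
```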
(50) In some embodiments that may include a hybrid comparison, the iteration may terminate upon, for example, (1) the change in the first parameter being less than a first predetermined constancy threshold, and (2) the change in the second parameter being less than a second predetermined constancy threshold. The first and second predetermined constancy thresholds, in some examples, may be different from each other.
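The alternating-schedule, dual-threshold loop of paragraphs (49)–(50) can be sketched as below. The solver callbacks `update_dmin` and `update_bexp` are a hypothetical interface standing in for whatever per-iteration matching search an implementation uses; the schedule and thresholds follow the example in the text.

```python
def hybrid_iterate(update_dmin, update_bexp, d_thresh, b_thresh,
                   d0, b0, period=5, max_iter=1000):
    """Alternate between varying D_min and b_exp: D_min is re-solved on
    every `period`-th iteration, b_exp on all other iterations.  The loop
    terminates when the most recent change in EACH parameter is below its
    own constancy threshold.  Returns (d_min, b_exp, iterations_used)."""
    dmin, bexp = d0, b0
    d_delta = b_delta = float("inf")
    for i in range(max_iter):
        if i % period == 0:
            new = update_dmin(i, dmin, bexp)   # hypothetical solver call
            d_delta, dmin = abs(new - dmin), new
        else:
            new = update_bexp(i, dmin, bexp)   # hypothetical solver call
            b_delta, bexp = abs(new - bexp), new
        if d_delta < d_thresh and b_delta < b_thresh:
            return dmin, bexp, i
    return dmin, bexp, max_iter
```

Because each parameter's change is tracked separately, the loop cannot exit until both parameters have individually settled, matching the two-condition termination described in paragraph (50).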
(51) In various implementations, a computer-implemented process, which may include one or more operations of the computer-implemented processes 400, 500 described with reference to
(52) Some aspects of embodiments may be implemented as a computer system. For example, various implementations may include digital and/or analog circuitry, computer hardware, firmware, software, or combinations thereof. Apparatus elements can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by a programmable processor; and methods can be performed by a programmable processor executing a program of instructions to perform functions of various embodiments by operating on input data and generating an output. Some embodiments may be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and/or at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
(53) Suitable processors for the execution of a program of instructions include, by way of example and not limitation, both general and special purpose microprocessors, which may include a single processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including, by way of example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and, CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). In some embodiments, the processor and the memory can be supplemented by, or incorporated in hardware programmable devices, such as FPGAs, for example.
(54) In some implementations, each system may be programmed with the same or similar information and/or initialized with substantially identical information stored in volatile and/or non-volatile memory. For example, one data interface may be configured to perform auto configuration, auto download, and/or auto update functions when coupled to an appropriate host device, such as a desktop computer or a server.
(55) In some implementations, one or more user-interface features may be custom configured to perform specific functions. An exemplary embodiment may be implemented in a computer system that includes a graphical user interface and/or an Internet browser. To provide for interaction with a user, some implementations may be implemented on a computer having a display device, such as an LCD (liquid crystal display) monitor for displaying information to the user, a keyboard, and a pointing device, such as a mouse or a trackball by which the user can provide input to the computer.
(56) In various implementations, the system may communicate using suitable communication methods, equipment, and techniques. For example, the system may communicate with compatible devices (e.g., devices capable of transferring data to and/or from the system) using point-to-point communication in which a message is transported directly from a source to a receiver over a dedicated physical link (e.g., fiber optic link, infrared link, ultrasonic link, point-to-point wiring, daisy-chain). The components of the system may exchange information by any form or medium of analog or digital data communication, including packet-based messages on a communication network. Examples of communication networks include, e.g., a LAN (local area network), a WAN (wide area network), MAN (metropolitan area network), wireless and/or optical networks, and the computers and networks forming the Internet. Other implementations may transport messages by broadcasting to all or substantially all devices that are coupled together by a communication network, for example, by using omni-directional radio frequency (RF) signals. Still other implementations may transport messages characterized by high directivity, such as RF signals transmitted using directional (i.e., narrow beam) antennas or infrared signals that may optionally be used with focusing optics. Still other implementations are possible using appropriate interfaces and protocols such as, by way of example and not intended to be limiting, USB 2.0, FireWire, ATA/IDE, RS-232, RS-422, RS-485, 802.11 a/b/g/n, Wi-Fi, WiFi-Direct, Li-Fi, BlueTooth, Ethernet, IrDA, FDDI (fiber distributed data interface), token-ring networks, or multiplexing techniques based on frequency, time, or code division. Some implementations may optionally incorporate features such as error checking and correction (ECC) for data integrity, or security measures, such as encryption (e.g., WEP) and password protection.
(57) In various embodiments, a computer system may include non-transitory memory. The memory may be connected to the one or more processors, which may be configured for storing data and computer readable instructions, including processor executable program instructions. The data and computer readable instructions may be accessible to the one or more processors. The processor executable program instructions, when executed by the one or more processors, may cause the one or more processors to perform various operations.
(58) A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, advantageous results may be achieved if the steps of the disclosed techniques were performed in a different sequence, or if components of the disclosed systems were combined in a different manner, or if the components were supplemented with other components. Accordingly, other implementations are within the scope of the following claims.