1. Well Log Data Inversion Using Radial Basis Function Network. Kou-Yuan Huang, Li-Sheng Weng, Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan, kyhuang@cs.nctu.edu.tw; and Liang-Chi Shen, Department of Electrical & Computer Engineering, University of Houston, Houston, TX.
12. Review of well log data inversion. Lin, Gianzero, and Strickland used the least-squares technique, 1984. Dyos used maximum entropy, 1987. Martin, Chen, Hagiwara, Strickland, Gianzero, and Hagan used a 2-layer neural network, 2001. Goswami, Mydur, Wu, and Heliot used a robust technique, 2004. Huang, Shen, and Chen used a higher-order perceptron, IEEE IGARSS, 2008.
14. Hush and Horne, 1993, used the RBF network for function approximation.
17. Properties of RBF. The RBF network is a supervised training model. The 1st layer uses the K-means clustering algorithm to determine the K nodes. The activation function of the 2nd layer is linear: f(s) = s, f'(s) = 1. The 2nd layer uses the Widrow-Hoff learning rule.
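The 1st-layer setup above can be sketched as follows: K-means picks the K centers from the training patterns. This is a minimal illustration; the two-blob data, K = 2, and the deterministic initialization are assumptions for the example, not values from the paper.

```python
import numpy as np

def kmeans(X, K, iters=50):
    # deterministic init: take K evenly strided patterns as starting centers
    centers = X[::len(X) // K][:K].copy()
    for _ in range(iters):
        # assign each pattern to its nearest center (squared Euclidean distance)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        # move each center to the mean of its assigned patterns
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

# two made-up Gaussian blobs standing in for training patterns
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, (20, 2)) for m in (0.0, 1.0)])
centers, labels = kmeans(X, K=2)
```

The resulting centers become the Gaussian nodes m_i of the 1st layer; a common choice for each width σ_i is the spread of the patterns assigned to that cluster.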
21. Output of the 1st layer: response of the Gaussian basis function, $o_i = \exp\left(-\frac{(\mathbf{x}-\mathbf{m}_i)^T(\mathbf{x}-\mathbf{m}_i)}{2\sigma_i^2}\right)$.
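The Gaussian response above can be computed directly; the centers and widths below are made up for illustration.

```python
import numpy as np

def rbf_layer(x, centers, sigmas):
    # squared distances (x - m_i)^T (x - m_i), one per center
    d2 = ((x - centers) ** 2).sum(axis=1)
    # Gaussian basis response o_i = exp(-d2 / (2 sigma_i^2))
    return np.exp(-d2 / (2.0 * sigmas ** 2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([0.5, 0.5])
o = rbf_layer(np.array([0.0, 0.0]), centers, sigmas)
# o[0] = exp(0) = 1; o[1] = exp(-2 / 0.5) = exp(-4)
```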
22. Training in the 2nd layer: Widrow-Hoff learning rule. Error function $E = \frac{1}{2}\sum_{j=1}^{J}(d_j - o_j)^2$. Use the gradient descent method to adjust the weights: $\Delta w_{ji}(t) = w_{ji}(t+1) - w_{ji}(t) = -\eta \frac{\partial E}{\partial w_{ji}} = \eta (d_j - o_j) f'_j(s_j)\, o_i = \eta (d_j - o_j)\, o_i$, since $f(s) = s$ and $f'(s) = 1$.
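The Widrow-Hoff update above, Δw_ji = η (d_j − o_j) o_i, can be sketched for a single training pattern. The sizes, the fixed hidden responses `h` (which would come from the Gaussian layer), and the targets are illustrative assumptions.

```python
import numpy as np

H, J, eta = 4, 2, 0.2                  # hidden nodes, output nodes, learning rate
W = np.zeros((J, H))                   # weights w_ji, zero-initialized
h = np.array([0.2, 0.5, 0.1, 0.8])    # 1st-layer Gaussian outputs o_i (assumed)
d = np.array([0.3, 0.7])              # desired outputs d_j (assumed)

for _ in range(2000):
    o = W @ h                          # linear activation: f(s) = s, so o_j = s_j
    # gradient-descent step on E = 1/2 sum_j (d_j - o_j)^2
    W += eta * np.outer(d - o, h)
```

Because the 2nd layer is linear, this is ordinary least-mean-squares training and the outputs converge to the targets for this single pattern.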
35. Perceptron training in the 2nd layer. Activation function at the 2nd layer: sigmoidal, $o_j = f(s_j) = \frac{1}{1+e^{-s_j}}$. Error function $E = \frac{1}{2}\sum_{j=1}^{J}(d_j - o_j)^2$. Delta learning rule (Rumelhart, Hinton, and Williams, 1986): use the gradient descent method to adjust the weights: $\Delta w_{ji}(t) = w_{ji}(t+1) - w_{ji}(t) = -\eta \frac{\partial E}{\partial w_{ji}} = \eta (d_j - o_j) f'_j(s_j)\, o_i$.
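The delta rule above differs from Widrow-Hoff only in the f'_j(s_j) factor, which for the sigmoid is f'(s) = f(s)(1 − f(s)). A minimal single-pattern sketch (sizes, inputs, and targets are assumptions):

```python
import numpy as np

H, J, eta = 4, 2, 0.5
W = np.zeros((J, H))                   # weights w_ji
h = np.array([0.2, 0.5, 0.1, 0.8])    # inputs o_i from the previous layer (assumed)
d = np.array([0.25, 0.75])            # desired outputs d_j (assumed)

for _ in range(5000):
    o = 1.0 / (1.0 + np.exp(-(W @ h)))            # sigmoidal activation o_j = f(s_j)
    # delta rule: eta * (d_j - o_j) * f'(s_j) * o_i, with f'(s) = o (1 - o)
    W += eta * np.outer((d - o) * o * (1 - o), h)
```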
46. Generalized delta learning rule (Rumelhart, Hinton, and Williams, 1986). Adjust the weights between the 2nd layer and the 3rd layer: $w_{kj}(t+1) = w_{kj}(t) + \Delta w_{kj}(t)$, where $\Delta w_{kj}(t) = \eta (d_k - o_k) f'_k(s_k)\, o_j = \eta \delta_k o_j$ and $\delta_k = (d_k - o_k) f'_k(s_k)$. Adjust the weights between the 1st layer and the 2nd layer: $\Delta w_{ji}(t) = \eta \left(\sum_{k=1}^{K} \delta_k w_{kj}\right) f'_j(s_j)\, o_i = \eta \delta_j o_i$, where $\delta_j = \left(\sum_{k=1}^{K} \delta_k w_{kj}\right) f'_j(s_j)$. Adjust the weights with a momentum term: $\Delta w_{kj}(t) = \eta \delta_k(t) o_j(t) + \beta \Delta w_{kj}(t-1)$ and $\Delta w_{ji}(t) = \eta \delta_j(t) o_i(t) + \beta \Delta w_{ji}(t-1)$.
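The generalized delta rule with momentum can be sketched end to end for a three-layer network. η = 0.6 and β = 0.4 match the parameter setting used later in the experiments; the layer sizes, input, and targets are illustrative assumptions.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
I, J, K = 3, 5, 2                      # layer sizes (assumed for illustration)
eta, beta = 0.6, 0.4                   # learning rate and momentum coefficient
Wkj = rng.normal(0, 0.5, (K, J))       # 2nd-to-3rd-layer weights
Wji = rng.normal(0, 0.5, (J, I))       # 1st-to-2nd-layer weights
dWkj = np.zeros_like(Wkj)
dWji = np.zeros_like(Wji)
x = rng.random(I)                      # one input pattern (assumed)
d = np.array([0.2, 0.8])               # desired outputs (assumed)

for _ in range(5000):
    oj = sigmoid(Wji @ x)                          # hidden outputs, f'(s) = o (1 - o)
    ok = sigmoid(Wkj @ oj)                         # network outputs
    dk = (d - ok) * ok * (1 - ok)                  # delta_k = (d_k - o_k) f'_k(s_k)
    dj = (Wkj.T @ dk) * oj * (1 - oj)              # delta_j = sum_k delta_k w_kj f'_j(s_j)
    dWkj = eta * np.outer(dk, oj) + beta * dWkj    # momentum: + beta * dW(t-1)
    dWji = eta * np.outer(dj, x) + beta * dWji
    Wkj += dWkj
    Wji += dWji
```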
56. Experiments on simulated well log data. In the simulation, there are 31 well logs. Professor Shen at the University of Houston performed the theoretical calculation. Each well log has the apparent conductivity (Ca) as the input and the true formation conductivity (Ct) as the desired output. Well logs #1~#25 are for training; well logs #26~#31 are for testing.
63. Input data length and # of training patterns from 25 training well logs
64. Optimal cluster number of training patterns. Example: PFS vs. K for input data length 10. For input N = 10, the optimal cluster number K is 27.
65. Optimal cluster number of training patterns in 10 cases. Set up 10 two-layer RBF models and compare the testing errors of the 10 models to select the optimal RBF model.
67. Parameter setting in the experiment. Parameters in RBF training: learning rate $\eta$: 0.6; momentum coefficient $\beta$: 0.4 (in the 3-layer RBF); maximum iterations: 20,000; error threshold: 0.002. Define the mean absolute error (MAE), where $P$ is the number of patterns and $K$ is the number of output nodes: $\mathrm{MAE} = \frac{1}{PK}\sum_{p=1}^{P}\sum_{k=1}^{K}\left|d_{pk} - o_{pk}\right|$.
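The MAE defined above is straightforward to compute; the desired and output matrices below are made-up numbers for illustration.

```python
import numpy as np

def mae(D, O):
    # MAE = (1 / (P * K)) * sum over patterns p and output nodes k of |d_pk - o_pk|
    P, K = D.shape
    return np.abs(D - O).sum() / (P * K)

D = np.array([[0.1, 0.2], [0.3, 0.4]])   # desired outputs d_pk (assumed)
O = np.array([[0.1, 0.3], [0.3, 0.2]])   # network outputs o_pk (assumed)
# absolute differences: 0, 0.1, 0, 0.2 -> MAE = 0.3 / 4 = 0.075
```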
69. Inversion testing using 10-27-10 two-layer RBF Inverted Ct of log #26 by network 10-27-10 (MAE= 0.051753). Inverted Ct of log #27 by network 10-27-10 (MAE= 0.055537).
70. Inverted Ct of log #28 by network 10-27-10 (MAE= 0.041952). Inverted Ct of log #29 by network 10-27-10 (MAE= 0.040859).
71. Inverted Ct of log #30 by network 10-27-10 (MAE= 0.047587). Inverted Ct of log #31 by network 10-27-10 (MAE= 0.050294).
86. Inversion testing using 10-27-9-10 three-layer RBF. Inverted Ct of log 26 by network 10-27-9-10 (MAE= 0.041526). Inverted Ct of log 27 by network 10-27-9-10 (MAE= 0.059158).
87. Inverted Ct of log 28 by network 10-27-9-10 (MAE= 0.046744) Inverted Ct of log 29 by network 10-27-9-10 (MAE= 0.043017)
88. Inverted Ct of log 30 by network 10-27-9-10 (MAE= 0.046546) Inverted Ct of log 31 by network 10-27-9-10 (MAE= 0.042763)
89. Testing error of each well log using 10-27-9-10 three-layer RBF model Average error: 0.046625
90. Average testing error of each three-layer RBF model in simulation. Experiments use RBFs with different numbers of hidden nodes. The 10-27-9-10 model gets the smallest average error in testing, so it is selected for the real-data application.
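The model selection above amounts to picking the architecture with the smallest average testing MAE. The 0.046625 value for 10-27-9-10 is from the slides; the other architectures' errors below are invented placeholders for illustration.

```python
# average testing MAE per candidate three-layer model
# (only the 10-27-9-10 value is from the slides; the rest are illustrative)
avg_test_error = {
    "10-27-8-10": 0.049,
    "10-27-9-10": 0.046625,
    "10-27-10-10": 0.048,
}
best = min(avg_test_error, key=avg_test_error.get)   # smallest average error wins
```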
103. Select the 10-27-9-10 optimal RBF model for real data inversion. After convergence in training, input 10 real data points to the RBF model to get 10 output data points; then input the 10 data points of the next segment to get the next 10 output points, and so on.
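The segment-by-segment procedure above can be sketched as a loop over non-overlapping 10-sample windows. Here `invert` is a hypothetical stand-in for the trained 10-27-9-10 RBF forward pass (an identity map for illustration), and the input log is made up.

```python
import numpy as np

def invert(segment):
    # placeholder for the trained 10-27-9-10 RBF model's forward pass;
    # identity mapping used here purely for illustration
    return segment

def invert_log(ca, seg_len=10):
    # feed the log to the network 10 samples at a time, concatenating the outputs
    ct = []
    for start in range(0, len(ca) - seg_len + 1, seg_len):
        ct.extend(invert(ca[start:start + seg_len]))
    return np.array(ct)

ca = np.linspace(0.0, 1.0, 40)   # a made-up apparent-conductivity log Ca
ct = invert_log(ca)              # estimated true-formation conductivity Ct
```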
114. The 3-layer RBF gives better inversion than the 2-layer RBF because more layers can perform more nonlinear mapping. In the simulation, the optimal 3-layer model is 10-27-9-10; it achieves the smallest average mean absolute error in testing. The trained 10-27-9-10 RBF model is applied to the real well log data inversion, and the result is acceptable and good, showing that the RBF model can work on well log data inversion. Errors differ between experiments because the initial weights of the network differ, but the ranking or percentage of the errors can still be used to compare RBF performance.