What I might have done in a recent hackathon
and why domain expertise matters in machine learning...
Recently, my colleague Prof. Michael Pyrcz (@GeostatsGuy) and I hosted a data science hackathon open to students at The University of Texas at Austin, with support from The Hildebrand Department of Petroleum and Geosystems Engineering. Given the COVID pandemic, we hosted the event virtually over two weekends in April 2021. The first weekend consisted of a workshop followed by an explanation of the problem dataset; the students then broke into teams and worked on their solutions, which were ultimately submitted for scoring, code review, and presentation judging by a panel of industry experts the following weekend.
This was a really fun experience for me personally, and the feedback from all involved has been incredible. Given that I have been teaching data science skills and challenging students in my classes with difficult coding exercises for years, it was great to see them showcase their capabilities.
I'd encourage you to take a look at the detailed problem statement and datasets linked to above, but briefly, the challenge was: given petrophysical data from well logs and spatial maps interpreted from seismic on 73 oil wells that have been in production for some time, predict the cumulative 3-year production for 10 recently drilled wells that have well log data but have not yet been put into production. In addition to their "best estimate", the students were asked to provide 100 additional realizations so that an uncertainty model could be evaluated. The objective scoring consisted of a comparison of the mean squared error between each team's "best estimate" and the true 3-year production (which we knew, but withheld from the students), as well as a scoring of their uncertainty model with the "goodness measure" proposed by Deutsch (1996).
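To make the scoring concrete, here is a minimal sketch of the MSE portion of the objective (the arrays are hypothetical placeholders, not the actual hackathon data):

```python
import numpy as np

# Hypothetical best estimates and withheld true values for 10 wells (MSTB)
best_estimate = np.array([120.0, 95.5, 130.2, 88.7, 110.0,
                          102.3, 97.8, 125.6, 90.1, 115.4])
true_production = np.array([118.0, 99.0, 128.5, 85.0, 112.3,
                            100.0, 101.2, 122.0, 93.5, 110.8])

# Mean squared error of the "best estimate" against the truth
mse = np.mean((best_estimate - true_production) ** 2)
```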
I thought it would be a fun exercise to consider what I might have done to complete this challenge myself. In the spirit of the hackathon, I'll limit my time working on the project to what I can accomplish in a single day. Of course, since I am already familiar with the dataset, I have a head start over the teams in the competition who generally spent a few hours just investigating the data the first day. But given my short timeline, I won't be training any complicated neural network architectures or really any complicated machine learning models at all. I want to see how I can use my domain expertise to engineer meaningful features to get an answer quickly. I also have a surprise in store with respect to how I handled the uncertainty model. So let's get started...
Feature Imputation
First, like all work in Python, I'll start with the module imports used during this work.
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib import colors
from scipy.optimize import curve_fit
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.decomposition import PCA
from sklearn.preprocessing import (PolynomialFeatures, MaxAbsScaler,
MinMaxScaler, StandardScaler, RobustScaler)
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_absolute_percentage_error
sns.set(rc={'text.usetex' : True})
And I'll read in the well log datasets from both the wells that have been in production and what we call the preproduction wells, i.e. those for which we are trying to predict the cumulative production after 3 years.
production_wells_df = pd.read_csv('wellbore_data_producer_wells.csv',
index_col='Well_ID')
preproduction_wells_df = pd.read_csv('wellbore_data_preproduction_well.csv',
index_col='Well_ID')
Now we'll combine those two dataframes into a single dataframe.
wells_df = pd.concat([production_wells_df, preproduction_wells_df])
wells_df.head()
And inspect some overall statistics on the dataset.
wells_df.describe().T
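It's also useful to tally the missing values per column directly; a minimal sketch using a toy stand-in for wells_df (in the notebook you would call this on wells_df itself):

```python
import numpy as np
import pandas as pd

# Toy stand-in for wells_df with some missing entries
df = pd.DataFrame({'Porosity, fraction': [0.10, np.nan, 0.20],
                   'Permeability, mD': [np.nan, np.nan, 50.0]})

# Null count per column, largest first
missing_counts = df.isnull().sum().sort_values(ascending=False)
```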
We can see that there is quite a bit of missing data. Of particular concern is the large number of missing permeability values, as permeability will be one of the strongest indicators of production. So our first task is to impute the missing features, working especially toward a good model for permeability. To start, let's plot some of the provided map data. Here we are showing the spatial distributions of the facies proportions in the reservoir. There is a fault in the reservoir; we know its location, but nothing else about it. The white circles represent wells we have production data for, the red circles are the wells whose production we are trying to predict, and the fault is indicated by the red line.
production_well_locations = \
(production_wells_df[['X, m', 'Y, m']]
.groupby(production_wells_df.index)
.mean())
preproduction_well_locations = \
(preproduction_wells_df[['X, m', 'Y, m']]
.groupby(preproduction_wells_df.index)
.mean())
px = production_well_locations['X, m'].to_numpy()[::1]
py = -production_well_locations['Y, m'].to_numpy() + 10000
ppx = preproduction_well_locations['X, m'].to_numpy()[::1]
ppy = -preproduction_well_locations['Y, m'].to_numpy() + 10000
fx = np.linspace(0, 1e4, num=5)
fy = fx - 1750
fig = plt.figure(constrained_layout=True, figsize=(8,5))
ax = fig.subplot_mosaic(
[['sand', 'sandy shale', 'colorbar'],
['shale', 'shaly sand', 'colorbar']],
gridspec_kw={"width_ratios": [1, 1, 0.1]})
cmap = cm.get_cmap('viridis', 100)
bounds = np.linspace(0,1.0,num=100)
vals = bounds
norm = colors.BoundaryNorm(bounds, cmap.N)
ax['sand'].imshow(np.load('2d_sand_propotion.npy'), cmap=cmap, norm=norm,
extent=[0, 10000, 0, 10000])
ax['sand'].scatter(px, py, color='w', marker='o', alpha=0.5)
ax['sand'].scatter(ppx, ppy, color='r', marker='o', alpha=0.5)
ax['sand'].plot(fx, fy, color='r')
ax['sand'].set_ylim([0,1e4])
ax['sand'].set_title('Sand')
ax['sand'].set_aspect(1)
ax['sand'].grid(None)
ax['sandy shale'].imshow(np.load('2d_sandy_shale_propotion.npy'), cmap=cmap,
norm=norm, extent=[0, 10000, 0, 10000])
ax['sandy shale'].scatter(px, py, color='w', marker='o', alpha=0.5)
ax['sandy shale'].scatter(ppx, ppy, color='r', marker='o', alpha=0.5)
ax['sandy shale'].plot(fx, fy, color='r')
ax['sandy shale'].set_ylim([0,1e4])
ax['sandy shale'].set_title('Sandy shale')
ax['sandy shale'].set_aspect(1)
ax['sandy shale'].grid(None)
ax['shale'].imshow(np.load('2d_shale_propotion.npy'), cmap=cmap, norm=norm,
extent=[0, 10000, 0, 10000])
ax['shale'].scatter(px, py, color='w', marker='o', alpha=0.5)
ax['shale'].scatter(ppx, ppy, color='r', marker='o', alpha=0.5)
ax['shale'].plot(fx, fy, color='r')
ax['shale'].set_ylim([0,1e4])
ax['shale'].set_title('Shale')
ax['shale'].set_aspect(1)
ax['shale'].grid(None)
im = ax['shaly sand'].imshow(np.load('2d_shaly_sand_propotion.npy'), cmap=cmap,
norm=norm, extent=[0, 10000, 0, 10000])
ax['shaly sand'].scatter(px, py, color='w', marker='o', alpha=0.5)
ax['shaly sand'].scatter(ppx, ppy, color='r', marker='o', alpha=0.5)
ax['shaly sand'].plot(fx, fy, color='r')
ax['shaly sand'].set_ylim([0,1e4])
ax['shaly sand'].set_title('Shaly sand')
ax['shaly sand'].set_aspect(1)
ax['shaly sand'].grid(None)
cbar = fig.colorbar(im, cax=ax['colorbar'], boundaries=bounds, values=vals);
cbar.set_ticks([0.0, 0.25, 0.5, 0.75, 1.0])
cbar.set_ticklabels([r'$0$', r'$\frac{1}{4}$', r'$\frac{1}{2}$',
r'$\frac{3}{4}$', r'$1$'])
It doesn't look like the fault produces any offset or discontinuity in the spatial facies information, which is good news, so we'll proceed with imputing the missing facies using spatial location information. First, we'll subset the dataframe and replace the string facies labels with numerical ones, so 0 will indicate Sandstone, 1 will indicate Sandy shale, and so on...
facies_df = (wells_df[['X, m', 'Y, m', 'Depth, m', 'Rock facies']]
.replace({'Sandstone': 0,
'Sandy shale': 1,
'Shale': 2,
'Shaly sandstone': 3})
)
missing_facies = facies_df['Rock facies'].isnull()
facies_df[~missing_facies].head()
Now we'll build a k-nearest neighbors classifier and do a little hyperparameter tuning with scikit-learn's built-in GridSearchCV.
parameters = {'weights': ('uniform', 'distance'),
'n_neighbors':[4, 6, 8, 10]}
knn = KNeighborsClassifier()
gcv = GridSearchCV(knn, parameters, cv=KFold(random_state=2, shuffle=True))
gcv.fit(facies_df.loc[~missing_facies, 'X, m':'Depth, m'],
facies_df.loc[~missing_facies, 'Rock facies'])
gcv.best_params_
Using the hyperparameter settings above, we can now predict (and impute) the missing facies values.
facies_df.loc[missing_facies, 'Rock facies'] = \
gcv.predict(facies_df.loc[missing_facies, 'X, m':'Depth, m'])
wells_df['Rock facies'] = facies_df['Rock facies'].astype('int')
wells_df.describe().T.loc[['X, m', 'Y, m', 'Rock facies']]
Given that we'd expect rocks of the same facies to have a similar density and acoustic impedance, we'll impute those missing features with the averages from each facies.
wells_df['Density, g/cm3'] = \
(wells_df.groupby('Rock facies')['Density, g/cm3']
.apply(lambda df: df.fillna(df.mean()))
.reset_index()
.set_index(['Well_ID'])
)
wells_df['Acoustic Impedance, kg/s-m^2'] = \
(wells_df.groupby('Rock facies')['Acoustic Impedance, kg/s-m^2']
.apply(lambda df: df.fillna(df.mean()))
.reset_index()
.set_index(['Well_ID'])
)
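As an aside, the groupby/apply/reset_index pattern above can be written more compactly with groupby().transform, which returns a result aligned to the original index; a sketch on toy data:

```python
import numpy as np
import pandas as pd

# Toy data: fill missing densities with the mean of each facies group
df = pd.DataFrame({'Rock facies': [0, 0, 1, 1],
                   'Density, g/cm3': [2.0, np.nan, 2.6, np.nan]})

group_means = df.groupby('Rock facies')['Density, g/cm3'].transform('mean')
df['Density, g/cm3'] = df['Density, g/cm3'].fillna(group_means)
```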
Now we'll subset the dataframe and use the features shown to impute the missing porosity values using polynomial regression.
missing_porosity = wells_df['Porosity, fraction'].isnull()
porosity_df = (wells_df.loc[:, 'X, m':'Density, g/cm3']
.drop('Permeability, mD', axis=1))
porosity_df.head()
We'll set up a pipeline and use GridSearchCV again to find the best hyperparameters.
scalers = [MaxAbsScaler(), MinMaxScaler(), StandardScaler(), RobustScaler()]
pipe = Pipeline([
('scaler', StandardScaler()),
('pca', PCA()),
('poly', PolynomialFeatures()),
('reg', LinearRegression())
])
params = {
'scaler': scalers,
'poly__degree': [1, 2, 3],
'pca__n_components': [1, 2, 3, 4, 5, 6],
}
gcv = GridSearchCV(pipe, params, cv=KFold(random_state=5, shuffle=True))
gcv.fit(porosity_df[~missing_porosity].drop('Porosity, fraction', axis=1),
porosity_df.loc[~missing_porosity, 'Porosity, fraction'])
gcv.best_params_
The best parameters are shown above. Now we'll use this model to impute the missing porosity.
porosity_df.loc[missing_porosity, 'Porosity, fraction'] = \
gcv.predict(porosity_df[missing_porosity].drop('Porosity, fraction', axis=1))
Below we'll plot the imputed and given porosities. Nothing looks too strange here: none of the imputed values are outliers with respect to the ranges of the given data.
fig, ax = plt.subplots()
ax.scatter(porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 0)]['Depth, m'],
porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 0)]['Porosity, fraction'],
color='r', facecolors='none', label='Sandstone')
ax.scatter(porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 0)]['Depth, m'],
porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 0)]['Porosity, fraction'],
color='r', label='Sandstone (Imputed)')
ax.scatter(porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 1)]['Depth, m'],
porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 1)]['Porosity, fraction'],
color='b', facecolors='none', label='Sandy shale')
ax.scatter(porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 1)]['Depth, m'],
porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 1)]['Porosity, fraction'],
color='b', label='Sandy shale (Imputed)')
ax.scatter(porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 2)]['Depth, m'],
porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 2)]['Porosity, fraction'],
color='g', facecolors='none', label='Shale')
ax.scatter(porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 2)]['Depth, m'],
porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 2)]['Porosity, fraction'],
color='g', label='Shale (Imputed)')
ax.scatter(porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 3)]['Depth, m'],
porosity_df[~missing_porosity &
(porosity_df['Rock facies'] == 3)]['Porosity, fraction'],
color='k', facecolors='none', label='Shaly sand')
ax.scatter(porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 3)]['Depth, m'],
porosity_df[missing_porosity &
(porosity_df['Rock facies'] == 3)]['Porosity, fraction'],
color='k', label='Shaly sand (Imputed)')
ax.set_xlabel('Depth, m')
ax.set_ylabel('Porosity, fraction')
ax.legend(bbox_to_anchor=(1.5, 1), loc='upper right', ncol=1);
We can also look at the distribution of the imputed porosities and compare to the given values. The imputation preserves the bimodal distribution of the original data.
fig, ax = plt.subplots(1, 2)
sns.distplot(porosity_df.loc[~missing_porosity, 'Porosity, fraction'], ax=ax[0])
ax[0].set_title('Given values')
sns.distplot(porosity_df.loc[missing_porosity, 'Porosity, fraction'], ax=ax[1]);
ax[1].set_title('Imputed values');
Now we'll add this imputed data to our wells_df.
wells_df['Porosity, fraction'] = porosity_df['Porosity, fraction']
To impute the missing permeabilities, we'll use some knowledge of petrophysics to do a little feature engineering. There is a widely used correlation between porosity and permeability called the Kozeny-Carman relationship, which models permeability as
$$ \kappa \sim \frac{\phi^3}{(1-\phi)^2} $$
where $\kappa$ is the permeability and $\phi$ is the porosity. We can quickly take a look at this relationship using seaborn's built-in regplot function, i.e.
missing_perms = wells_df['Permeability, mD'].isnull()
ϕ = wells_df.loc[~missing_perms, 'Porosity, fraction'].to_numpy()
fig, ax = plt.subplots()
sns.regplot(ϕ ** 3 / (1 - ϕ) ** 2,
wells_df.loc[~missing_perms, 'Permeability, mD'], ax=ax);
ax.set_xlabel(r'$\frac{\phi^3}{(1 - \phi)^2}$');
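For a feel of how sharply this feature grows with porosity, here is the Kozeny-Carman term evaluated at a few illustrative porosities (toy values, not from the dataset):

```python
import numpy as np

# Kozeny-Carman feature: phi^3 / (1 - phi)^2
phi = np.array([0.1, 0.2, 0.3])
kc = phi ** 3 / (1 - phi) ** 2
# Doubling porosity from 0.1 to 0.2 increases the feature roughly tenfold
```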
While not the best model, we can use this data to condition our more complex prediction to come. First we find the slope and intercept of the blue line above.
fphi = lambda ϕ, m, kappa0: m * ϕ ** 3 / (1 - ϕ) ** 2 + kappa0
popt, _ = curve_fit(fphi, ϕ, wells_df.loc[~missing_perms, 'Permeability, mD'])
Now we'll use the model to create a feature we'll call 'KC permeability, mD'.
perm_df = wells_df.loc[:, 'X, m':'Density, g/cm3']
perm_df['KC permeability, mD'] = fphi(wells_df['Porosity, fraction'],
popt[0], popt[1])
Using the data shown above, we'll build a model to impute the missing permeabilities. Again using GridSearchCV to tune the hyperparameters, we have
scalers = [MaxAbsScaler(), MinMaxScaler(), StandardScaler(), RobustScaler()]
pipe = Pipeline([
('scaler', StandardScaler()),
('pca', PCA()),
('poly', PolynomialFeatures()),
('reg', LinearRegression())
])
params = {
'scaler': scalers,
'poly__degree': [2, 3, 4],
'pca__n_components': [1, 2, 3, 4, 5, 6, 7],
}
gcv = GridSearchCV(pipe, params, cv=KFold(random_state=5, shuffle=True))
gcv.fit(perm_df[~missing_perms].drop('Permeability, mD', axis=1),
perm_df.loc[~missing_perms, 'Permeability, mD'])
gcv.best_params_
With these parameters, we can predict the missing permeabilities.
perm_df.loc[missing_perms, 'Permeability, mD'] = \
gcv.predict(perm_df[missing_perms].drop('Permeability, mD', axis=1))
Visualizing the results of the prediction against the given data, this model appears to perform well.
fig, ax = plt.subplots()
ax.scatter(perm_df[~missing_perms &
(perm_df['Rock facies'] == 0)]['Porosity, fraction'],
perm_df[~missing_perms &
(perm_df['Rock facies'] == 0)]['Permeability, mD'],
color='r', facecolors='none', label='Sandstone')
ax.scatter(perm_df[missing_perms &
(perm_df['Rock facies'] == 0)]['Porosity, fraction'],
perm_df[missing_perms &
(perm_df['Rock facies'] == 0)]['Permeability, mD'],
color='r', label='Sandstone (Imputed)')
ax.scatter(perm_df[~missing_perms &
(perm_df['Rock facies'] == 1)]['Porosity, fraction'],
perm_df[~missing_perms &
(perm_df['Rock facies'] == 1)]['Permeability, mD'],
color='b', facecolors='none', label='Sandy shale')
ax.scatter(perm_df[missing_perms &
(perm_df['Rock facies'] == 1)]['Porosity, fraction'],
perm_df[missing_perms &
(perm_df['Rock facies'] == 1)]['Permeability, mD'],
color='b', label='Sandy shale (Imputed)')
ax.scatter(perm_df[~missing_perms &
(perm_df['Rock facies'] == 2)]['Porosity, fraction'],
perm_df[~missing_perms &
(perm_df['Rock facies'] == 2)]['Permeability, mD'],
color='g', facecolors='none', label='Shale')
ax.scatter(perm_df[missing_perms &
(perm_df['Rock facies'] == 2)]['Porosity, fraction'],
perm_df[missing_perms &
(perm_df['Rock facies'] == 2)]['Permeability, mD'],
color='g', label='Shale (Imputed)')
ax.scatter(perm_df[~missing_perms &
(perm_df['Rock facies'] == 3)]['Porosity, fraction'],
perm_df[~missing_perms &
(perm_df['Rock facies'] == 3)]['Permeability, mD'],
color='k', facecolors='none', label='Shaly sand')
ax.scatter(perm_df[missing_perms &
(perm_df['Rock facies'] == 3)]['Porosity, fraction'],
perm_df[missing_perms &
(perm_df['Rock facies'] == 3)]['Permeability, mD'],
color='k', label='Shaly sand (Imputed)')
ax.set_xlabel('Porosity, fraction')
ax.set_ylabel('Permeability, mD')
ax.legend(bbox_to_anchor=(1.5, 1), loc='upper right', ncol=1);
The 'KC permeability, mD' feature is redundant now, so we'll drop it from the dataframe.
perm_df.drop('KC permeability, mD', axis=1, inplace=True)
Feature engineering
Since the fault runs right through both the produced wells and the wells we are trying to predict production for, let's engineer a few features related to the fault. First we'll engineer a feature called 'Distance to fault', the perpendicular distance from each well to the fault. We also don't know whether the fault compartmentalizes the reservoir in any way, so we'll create a feature called 'left/right fault' to indicate which side of the fault a well lies on.
# Perpendicular distance to the fault line X + Y = 11750 (data coordinates)
perm_df['Distance to fault'] = \
    np.abs(perm_df['X, m'] + perm_df['Y, m'] - 11750) / np.sqrt(2)
# Which side of the fault the well lies on (+1 or -1)
perm_df['left/right fault'] = \
    np.sign(11750 - perm_df['X, m'] - perm_df['Y, m'])
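As a sanity check on the geometry: the map plot flips the Y axis (py = -Y + 10000) and draws the fault as fy = fx - 1750, so in data coordinates the fault is the line X + Y = 11750, and any point on that line should be at zero distance. A minimal sketch (the helper function is just for illustration):

```python
import numpy as np

# Perpendicular distance from (x, y) to the fault line x + y = 11750
def distance_to_fault(x, y):
    return np.abs(x + y - 11750.0) / np.sqrt(2.0)

on_fault = distance_to_fault(5000.0, 6750.0)     # point on the fault
off_fault = distance_to_fault(11750.0, 11750.0)  # point off the fault
```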
Because all of the prediction wells are "near" the fault, we'll create another boolean feature that gives some importance to wells near the fault. We'll define "near" as any well that is closer to the fault than the mean distance of the preproduction (i.e. prediction) wells.
preproduction_wells = perm_df.index.isin(preproduction_wells_df
.index.to_numpy())
mean_distance = perm_df.loc[preproduction_wells, 'Distance to fault'].mean()
perm_df['close to fault'] = np.where(
    perm_df['Distance to fault'] < mean_distance, 1, 0)
Given that we anticipate "similar" wells to have "similar" production, we'll use KMeans clustering to find similar wells in our dataframe.
scalers = [MaxAbsScaler(), MinMaxScaler(), StandardScaler(), RobustScaler()]
pipe = Pipeline([
('scaler', StandardScaler()),
('cluster', KMeans())
])
params = {
'scaler': scalers,
'cluster__n_clusters': [10, 12, 14],
}
gcv = GridSearchCV(pipe, params, cv=KFold(random_state=3, shuffle=True))
gcv.fit(perm_df.loc[~preproduction_wells])
gcv.best_params_
And use the model to predict which cluster the preproduction wells would fall into.
perm_df['Cluster'] = gcv.predict(perm_df)
Now we'll read in the production histories and merge them with our working dataframe.
production_history_df = pd.read_csv('production_history.csv',
index_col='Well_ID')
production_df = perm_df.merge(production_history_df,
how='left',
left_index=True,
right_index=True)
production_df.head()
Now we'll compute the average 1, 2, and 3 year productions for each cluster and assign a new feature with this average value as the expected average production at each well location.
avg_production_df = (production_df
.groupby('Cluster')
.mean()
.loc[:, 'Cumulative oil production (1 yr), MSTB' \
:'Cumulative Water production (3 yr), MSTB']
.reset_index()
.rename(columns={'Cumulative oil production (1 yr), ' \
'MSTB':'Avg. oil prod (1 yr)',
'Cumulative oil production (2 yr), ' \
'MSTB':'Avg. oil prod (2 yr)',
'Cumulative oil production (3 yr), ' \
'MSTB':'Avg. oil prod (3 yr)',
'Cumulative Water production (1 yr), ' \
'MSTB':'Avg. water prod (1 yr)',
'Cumulative Water production (2 yr), ' \
'MSTB':'Avg. water prod (2 yr)',
'Cumulative Water production (3 yr), ' \
'MSTB':'Avg. water prod (3 yr)'})
)
Merging these averaged production values into our working dataframe, we have the final dataframe that we'll use to make predictions.
final_df = (production_df
.reset_index()
.merge(avg_production_df,
how='left',
on='Cluster')
.set_index(['Well_ID'])
)
final_df.head()
Predictions
In order to hyperparameter tune our forthcoming models, we're going to use the "goodness" measure of Deutsch (1996), i.e.

$$ \mbox{goodness} = 1 - \int_0^1 \left(3 a(p) - 2 \right) \left(\overline{\xi(p)} - p \right) \mathrm{d}p $$

where $\overline{\xi(p)}$ is the fraction of true values falling within the symmetric $p$-probability interval of the realizations and $a(p)$ is an indicator equal to 1 when $\overline{\xi(p)} \geq p$ and 0 otherwise, in addition to the mean_absolute_percentage_error from scikit-learn to score our models. The idea will be to use the BaggingRegressor to create an ensemble of models that we average over to create our "best estimate", while also using each individual estimator as a realization for our uncertainty model. We'll score our model over each fold in the cross-validation using GridSearchCV. Credit goes to Honggeun Jo for creating the original version of this function, which I only slightly modified here.
def goodness_score(y_true, y_realizations):
goodness_score_array = []
# Define upper/lower boundary of "within-percentile" ranges
list_percentile_lower = 50 - 5 * np.arange(0, 11, dtype=np.int32)
list_percentile_upper = 50 + 5 * np.arange(0, 11, dtype=np.int32)
for i in range(11): # 0%, 10%, 20%, 30%, ... 100% percentiles ranges
num_within = 0 # Counts for predictions within the range
for (j, realization) in enumerate(y_realizations.T):
min_ = np.percentile(realization, list_percentile_lower[i])
max_ = np.percentile(realization, list_percentile_upper[i])
if y_true[j] > min_ and y_true[j] < max_:
num_within += 1
goodness_score_array.append(num_within)
goodness_score_upNdown = (np.array(goodness_score_array) -
np.arange(0,11, dtype=np.double))
a_interval_index = [1 if goodness_score_array[i+1] >= i+1
else 0 for i in range(10)]
goodness_score_ = 1
for i in range(10):
if a_interval_index[i] == 1:
goodness_score_ -= 0.5 * goodness_score_upNdown[i+1] / 45
else:
goodness_score_ -= -goodness_score_upNdown[i+1] / 55
return np.abs(goodness_score_)
def scorer(estimator, X, y):
mape = 1 - mean_absolute_percentage_error(y[:,-1], estimator.predict(X)[:, -1])
pipe = estimator[:-1]
Xt = pipe.transform(X)
realizations = []
features = estimator[-1].estimators_features_
for (i, e) in enumerate(estimator[-1].estimators_):
realizations.append(e.predict(Xt[:, features[i]])[:, -1])
realizations = np.array(realizations)
goodness = goodness_score(y[:,-1], realizations)
return 0.5 * mape + 0.5 * goodness
Separating our predictor and response feature labels.
response_features = ['Cumulative Water production (1 yr), MSTB',
'Cumulative Water production (2 yr), MSTB',
'Cumulative Water production (3 yr), MSTB',
'Cumulative oil production (1 yr), MSTB',
'Cumulative oil production (2 yr), MSTB',
'Cumulative oil production (3 yr), MSTB']
predictor_features = final_df.columns[~final_df.columns
.isin(response_features)].to_numpy()
Since there are several data samples from each well, but we are only asked to report a prediction for the entire well, there are several options for how to proceed. The simplest is to average all the samples over each well, and we'll use this approach in the interest of time.
avg_final_df = final_df.groupby(final_df.index).mean()
Getting a boolean indexer for our training wells (the wells that have been in production).
train_wells = ~avg_final_df.index.isin(preproduction_wells_df.index)
Now we'll set up a pipeline and pass that to GridSearchCV for hyperparameter tuning. I iterated this a few times to narrow down the final set of parameters to search, so that the final notebook/solution runs in a timely manner. Here I'm using 7 folds in the k-fold cross-validation because that leaves about 10 wells in each test set, similar to the number we need to predict.
scalers = [MaxAbsScaler(), RobustScaler()]
pipe = Pipeline([
('scaler', StandardScaler()),
('pca', PCA()),
('poly', PolynomialFeatures()),
('bag', BaggingRegressor(base_estimator=LinearRegression(),
n_estimators=100, n_jobs=-1, random_state=7)),
])
params = [{
'scaler': scalers,
'pca__n_components': [8, 9],
'poly__degree': [1, 2],
'bag__max_samples': [0.75, 1.0],
'bag__max_features': [0.75, 1.0],
'bag__bootstrap': [True, False],
'bag__base_estimator': [LinearRegression()]
}]
gcv = GridSearchCV(pipe, params, scoring=scorer,
cv=KFold(n_splits=7, shuffle=True, random_state=17)
)
gcv.fit(avg_final_df.loc[train_wells, predictor_features].to_numpy(),
avg_final_df.loc[train_wells, response_features].to_numpy())
print(gcv.best_score_)
gcv.best_params_
The "best score" and final set of hyperparameters are shown above. Now we'll write our best estimate and realizations to the solution file.
solution_df = pd.read_csv('solution.csv', index_col='Well_ID')
pipe = gcv.best_estimator_
#best estimate
solution_df['Prediction, MSTB'] = \
pipe.predict(avg_final_df.loc[~train_wells, predictor_features])[:, -1]
#realizations
Xt = pipe[:-1].transform(avg_final_df.loc[~train_wells, predictor_features])
features = pipe[-1].estimators_features_
for (i, estimator) in enumerate(pipe[-1].estimators_):
pt = estimator.predict(Xt[:, features[i]])[:, -1]
solution_df[f'R{i+1}, MSTB'] = pt
#write to file
solution_df.to_csv('solution.csv')
Submitting the solution file for scoring, we can generate the following plots. First we have the accuracy plot which compares the predictions to the actual 3 yr. production for the 10 wells.
from score import (create_accuracy_plot_and_return_mse,
create_realizations_plots,
create_goodness_plot_and_return_goodness_score)
truth_data = np.load('True_for_predrill_3yr.npy')
mse = create_accuracy_plot_and_return_mse(solution_df, truth_data);
print(f'MSE: {mse}')
Below the uncertainty models are shown along with the true values.
create_realizations_plots(solution_df, truth_data)
And finally, the goodness plot. The goodness measure we used in our scorer function above is related to the integral of the black dashed line with respect to the diagonal red line, where the area below the red line is weighted twice the area above it.
goodness = create_goodness_plot_and_return_goodness_score(solution_df,
truth_data);
print(f'Goodness: {goodness}')
Finally, here is my rank in the competition among the other teams (note: this reflects only the objective part of the scoring; in the actual hackathon there were additional scores for code quality and presentations, so the rankings of the other teams in this table do not reflect the actual final outcome of the competition). Not bad for a day's work!
from IPython.display import Image
Image('results_table.png', width=500)
The final message here is that domain expertise, which leads to good imputation and feature engineering strategies, is far more useful than fancy machine learning models. It's equally important to understand exactly the quantities of interest you are looking for, so that you can evaluate (i.e. score) your models well. In the end, I "won" the competition with a linear regression model.