mloptimizer.aux.alg_wrapper#

Module Contents#

Classes#

CustomXGBClassifier

A class to wrap the xgboost classifier.

Functions#

generate_model([learning_rate, layer_1, layer_2, dropout_rate_1, dropout_rate_2])

class mloptimizer.aux.alg_wrapper.CustomXGBClassifier(base_score=0.5, booster='gbtree', eval_metric='auc', eta=0.077, gamma=18, subsample=0.728, colsample_bylevel=1, colsample_bytree=0.46, max_delta_step=0, max_depth=7, min_child_weight=1, seed=1, alpha=0, reg_lambda=1, scale_pos_weight=4.43, obj=None, feval=None, num_boost_round=50)[source]#

Bases: sklearn.base.BaseEstimator

A class to wrap the xgboost classifier.

base_score#

The initial prediction score of all instances, global bias.

Type:

float, optional (default=0.5)

booster#

Which booster to use: gbtree, gblinear, or dart. gbtree and dart use tree-based models, while gblinear uses linear functions.

Type:

string, optional (default="gbtree")

eval_metric#

Evaluation metric for validation data. If unset, a default metric is assigned according to the objective (rmse for regression, error for classification, and mean average precision for ranking).

Type:

string, optional (default="auc")

eta#

Step size shrinkage used in update to prevent overfitting.

Type:

float, optional (default=0.077)

gamma#

Minimum loss reduction required to make a further partition on a leaf node of the tree.

Type:

float, optional (default=18)

subsample#

Subsample ratio of the training instances.

Type:

float, optional (default=0.728)

colsample_bylevel#

Subsample ratio of columns for each depth level of the tree.

Type:

float, optional (default=1)

colsample_bytree#

Subsample ratio of columns when constructing each tree.

Type:

float, optional (default=0.46)

max_delta_step#

Maximum delta step we allow each tree’s weight estimation to be.

Type:

int, optional (default=0)

max_depth#

Maximum depth of a tree.

Type:

int, optional (default=7)

min_child_weight#

Minimum sum of instance weight (hessian) needed in a child.

Type:

int, optional (default=1)

seed#

Random number seed.

Type:

int, optional (default=1)

alpha#

L1 regularization term on weights.

Type:

float, optional (default=0)

reg_lambda#

L2 regularization term on weights.

Type:

float, optional (default=1)

scale_pos_weight#

Controls the balance of positive and negative weights; useful for imbalanced classes.

Type:

float, optional (default=4.43)

obj#

Customized objective function.

Type:

callable, optional (default=None)

feval#

Customized evaluation function.

Type:

callable, optional (default=None)

num_boost_round#

Number of boosting iterations.

Type:

int, optional (default=50)
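
A minimal construction sketch (the hyperparameter values are illustrative, and the get_params call assumes the class follows the usual scikit-learn convention of storing constructor arguments as attributes, which its sklearn.base.BaseEstimator base implies):

>>> from mloptimizer.aux.alg_wrapper import CustomXGBClassifier
>>> clf = CustomXGBClassifier(eta=0.1, max_depth=5, num_boost_round=100)
>>> clf.get_params()["max_depth"]
5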

fit(X, y)[source]#

Fit the model according to the given training data.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – The training input samples.

  • y (array-like of shape (n_samples,)) – The target values (class labels).

Returns:

self – The fitted estimator.

Return type:

object
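
A hedged fit sketch on synthetic data (shapes follow the documented parameters; binary labels are assumed, consistent with the auc and scale_pos_weight defaults; the variable names are illustrative):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> X = rng.random((100, 4))      # 100 samples, 4 features
>>> y = rng.integers(0, 2, 100)   # binary class labels
>>> clf = CustomXGBClassifier().fit(X, y)  # fit returns self, so calls can be chained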

predict(X)[source]#

Predict class labels for samples in X.

Parameters:

X (array-like of shape (n_samples, n_features)) – The input samples.

Returns:

preds – The predicted classes.

Return type:

array-like of shape (n_samples,)
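
Continuing the sketch above, one label per sample comes back, matching the documented return type:

>>> preds = clf.predict(X)
>>> preds.shape
(100,)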

predict_proba(X)[source]#

Predict class probabilities for samples in X.

Parameters:

X (array-like of shape (n_samples, n_features)) – The input samples.

Returns:

p – The predicted probabilities.

Return type:

array-like of shape (n_samples,)
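
Note the documented return shape: a single probability per sample, (n_samples,), rather than the (n_samples, n_classes) array most scikit-learn classifiers return from predict_proba, so downstream code expecting the two-column convention would need to adapt:

>>> p = clf.predict_proba(X)
>>> p.shape
(100,)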

predict_z(X)[source]#

Predict z values for samples in X.

Parameters:

X (array-like of shape (n_samples, n_features)) – The input samples.

Returns:

zs – The predicted z values.

Return type:

array-like of shape (n_samples,)
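
The section does not define "z values". A plausible reading (an assumption, not confirmed by the source) is the raw margin score before the logistic transform, in which case the probabilities above would be recovered as 1 / (1 + exp(-z)):

>>> zs = clf.predict_z(X)
>>> # ASSUMPTION: if z is the raw margin, then p == 1 / (1 + np.exp(-zs))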

mloptimizer.aux.alg_wrapper.generate_model(learning_rate=0.01, layer_1=100, layer_2=50, dropout_rate_1=0, dropout_rate_2=0)[source]#
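
The source gives only the signature: a factory taking a learning rate, two layer sizes, and two dropout rates. As a hedged sketch (not the library's actual implementation), such a factory could build a two-hidden-layer Keras binary classifier; the relu activations, sigmoid output, binary cross-entropy loss, and Adam optimizer below are all assumptions:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

def generate_model(learning_rate=0.01, layer_1=100, layer_2=50,
                   dropout_rate_1=0, dropout_rate_2=0):
    # Two dense hidden layers, each followed by (possibly zero-rate) dropout.
    model = Sequential([
        Dense(layer_1, activation="relu"),
        Dropout(dropout_rate_1),
        Dense(layer_2, activation="relu"),
        Dropout(dropout_rate_2),
        Dense(1, activation="sigmoid"),  # ASSUMPTION: single-output binary classifier
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model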