GridSearchCV scoring parameter: using scoring='f1' or scoring=None (by default uses accuracy) gives the same result
Date : March 29 2020, 07:55 AM
I think the author didn't choose this example very well. I may be missing something here, but min_samples_split=1 doesn't make sense to me: isn't it the same as setting min_samples_split=2, since you can't split a single sample? Essentially, it's a waste of computational time.
parameters = {
    'clf__max_depth': list(range(2, 30)),
    'clf__min_samples_split': (2,),
    'clf__min_samples_leaf': (1,)
}
Best score: 0.878
Best parameters set:
clf__max_depth: 15
clf__min_samples_leaf: 1
clf__min_samples_split: 2
precision recall f1-score support
0 0.98 0.99 0.99 716
1 0.92 0.89 0.91 104
avg / total 0.98 0.98 0.98 820
> Best score: 0.967
Best parameters set:
clf__max_depth: 6
clf__min_samples_leaf: 1
clf__min_samples_split: 2
precision recall f1-score support
0 0.98 0.99 0.98 716
1 0.93 0.85 0.88 104
avg / total 0.97 0.97 0.97 820
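For reference, a minimal sketch of how the two scoring settings are passed (the pipeline below is an illustrative placeholder, not the original code). With scoring=None, GridSearchCV falls back to the estimator's .score method, which is accuracy for classifiers, while scoring='f1' optimizes the binary F1 of the positive class:
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

pipeline = Pipeline([('clf', DecisionTreeClassifier())])
parameters = {
    'clf__max_depth': list(range(2, 30)),
    'clf__min_samples_split': (2,),
    'clf__min_samples_leaf': (1,)
}

gs_accuracy = GridSearchCV(pipeline, parameters, cv=5, scoring=None)  # default: estimator's accuracy
gs_f1 = GridSearchCV(pipeline, parameters, cv=5, scoring='f1')        # binary F1 on the positive class
# gs_accuracy.fit(X_train, y_train); gs_f1.fit(X_train, y_train)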
|
Setting up scoring similar to Temple Run (endless runner game)
Date : March 29 2020, 07:55 AM
If you know the increments, you have more than half the battle. All you need now is something in Swift that gives constant increments, similar to a Java timer, if that means anything to you. Either way, the score is drawn to the screen, and you need something that keeps incrementing it until the player dies. In pseudocode, you want something like do { score++; } while (dead != true);
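A minimal sketch of that idea, shown in Python purely for illustration (in a real Swift/SpriteKit game you would use a repeating timer or the update loop instead of sleep; all names here are placeholders):
import time

score = 0
increment = 1        # points added per tick
tick_seconds = 0.1   # how often the score goes up
dead = False

while not dead:
    score += increment          # constant increment, like do { score++ } while (!dead)
    time.sleep(tick_seconds)    # stand-in for a repeating timer
    if score >= 50:             # placeholder: the real game sets dead when the player crashes
        dead = True

print("Final score:", score)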
|
Can I pass a parameter to a Magnitude scoring function in Azure Search?
Date : March 29 2020, 07:55 AM
No, it is not possible to do magnitude boosting based on relative values of a field across documents. This feature is intended for situations where you statically know the ranges that you want to boost (for example, when boosting based on a rating field with a fixed scale).
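For illustration only, this is roughly what a static magnitude boost looks like in an index definition (the profile name and rating field are hypothetical; check the exact JSON against the Azure Search documentation):
"scoringProfiles": [
  {
    "name": "boostByRating",
    "functions": [
      {
        "type": "magnitude",
        "fieldName": "rating",
        "boost": 2.0,
        "interpolation": "linear",
        "magnitude": {
          "boostingRangeStart": 0,
          "boostingRangeEnd": 5,
          "constantBoostBeyondRange": false
        }
      }
    ]
  }
]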
|
How to create a customized scoring function in scikit-learn for scoring a set of instances based on their individual properties
Tag : python , By : jaredsmiller
Date : March 29 2020, 07:55 AM
I found a way to solve the problem by going down the path of the second proposed answer: passing a PseudoInteger to scikit-learn that has all the same properties as a normal integer when compared or used in mathematical operations. However, it also acts as a wrapper for the int, so instance variables (such as the cost of an instance) can be stored on it. As already stated in the question, this causes scikit-learn to recognize that the values inside the passed label array are in fact of type object rather than int. So I just replaced the test in the type_of_target(y) method of scikit-learn's multiclass.py at line 273 to return 'binary' even though it doesn't pass the test, so that scikit-learn treats the whole problem (as it should be) as a binary classification problem. Lines 269-273 in the type_of_target(y) method in multiclass.py now look like:
# Invalid inputs
if y.ndim > 2 or (y.dtype == object and len(y) and
                  not isinstance(y.flat[0], string_types)):
    # return 'unknown'  # [[[1, 2]]] or [obj_1] and not ["label_1"]
    return 'binary'  # Sneaky, modified to force binary classification.
The full code then looks like this:
import sklearn
import sklearn.model_selection
import sklearn.base
import sklearn.metrics
import numpy as np
import sklearn.tree
import sklearn.feature_selection
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import make_scorer  # public import path; sklearn.metrics.scorer was removed in newer versions
class PseudoInt(int):
    # Behaves like an integer, but is able to store instance variables
    pass
def grid_search(x, y_normal, x_amounts):
    # Change the label set to a np array containing pseudo ints with the costs associated with the instances
    y = np.empty(len(y_normal), dtype=PseudoInt)
    for index, value in y_normal.items():
        new_int = PseudoInt(value)
        new_int.cost = x_amounts.loc[index]  # Here the cost is added to the label
        y[index] = new_int
    # Normal train test split
    x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(x, y, test_size=0.2)
    # Classifier
    clf = sklearn.tree.DecisionTreeClassifier()
    # Custom scorer with the cost function below (lower cost is better)
    cost_scorer = make_scorer(cost_function, greater_is_better=False)
    # Define pipeline
    pipe = Pipeline([('clf', clf)])
    # Grid search grid with any hyper parameters or other settings
    # (the pipeline step is named 'clf', so its parameters are prefixed with clf__)
    param_grid = [
        {'clf__criterion': ['gini', 'entropy']}
    ]
    # Grid search and pass the custom scorer function
    gs = GridSearchCV(estimator=pipe,
                      param_grid=param_grid,
                      scoring=cost_scorer,
                      n_jobs=1,
                      cv=5,
                      refit=True)
    # Run grid search and refit with best hyper parameters
    gs = gs.fit(x_train.values, y_train)
    print("Best Parameters: " + str(gs.best_params_))
    print('Best score: ' + str(gs.best_score_))  # negated cost, since greater_is_better=False
    # Predict with retrained model (with best parameters)
    y_test_pred = gs.predict(x_test.values)
    # Get scores (also cost score)
    get_scores(y_test, y_test_pred)
def get_scores(y_test, y_test_pred):
    print("Getting scores")
    print("SCORES")
    precision = sklearn.metrics.precision_score(y_test, y_test_pred)
    recall = sklearn.metrics.recall_score(y_test, y_test_pred)
    f1_score = sklearn.metrics.f1_score(y_test, y_test_pred)
    accuracy = sklearn.metrics.accuracy_score(y_test, y_test_pred)
    print("Precision " + str(precision))
    print("Recall " + str(recall))
    print("Accuracy " + str(accuracy))
    print("F1_Score " + str(f1_score))
    print("COST")
    cost = cost_function(y_test, y_test_pred)
    print("Cost Savings " + str(-cost))
    print("CONFUSION MATRIX")
    cnf_matrix = sklearn.metrics.confusion_matrix(y_test, y_test_pred)
    cnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
    print(cnf_matrix)
def cost_function(y_test, y_test_pred):
    """
    Calculates total cost based on TP, FP, TN, FN and the cost of a certain instance
    :param y_test: Has to be an array of PseudoInts containing the cost of each instance
    :param y_test_pred: Any array of PseudoInts or ints
    :return: Returns total cost
    """
    cost = 0
    for index in range(len(y_test)):
        # print(index)
        y = y_test[index]
        y_pred = y_test_pred[index]
        x_amt = y.cost
        if y == 0 and y_pred == 0:
            cost -= x_amt  # Correct classification reduces cost by x_amt
        elif y == 0 and y_pred == 1:
            cost += x_amt  # Wrong classification adds cost
        elif y == 1 and y_pred == 0:
            cost += x_amt + 5  # Wrong classification adds cost and fee
        elif y == 1 and y_pred == 1:
            cost += 0  # No cost
        else:
            raise ValueError("No cost could be assigned to the instance: " + str(index))
        # print("Cost: " + str(cost))
    return cost
Instead of editing multiclass.py in the scikit-learn source, the same effect can be achieved by overriding the function at runtime:
import sklearn.utils.multiclass
def return_binary(y):
    # Force every target passed to type_of_target to be treated as binary
    return "binary"
sklearn.utils.multiclass.type_of_target = return_binary
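For completeness, a minimal usage sketch. The data below is made up (random features, binary labels and a per-instance amount standing in for the cost), and it assumes one of the two type_of_target workarounds above is in place so that the object-dtype label array is accepted:
import numpy as np
import pandas as pd
rng = np.random.RandomState(0)
# Hypothetical inputs: grid_search expects pandas objects sharing the same index
x = pd.DataFrame(rng.randn(200, 3), columns=['f1', 'f2', 'f3'])
y_normal = pd.Series(rng.randint(0, 2, size=200))
x_amounts = pd.Series(rng.uniform(1, 100, size=200))
grid_search(x, y_normal, x_amounts)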
|
Custom scoring on GridSearchCV with fold dependent parameter
Date : March 29 2020, 07:55 AM
As I understand it, the scoring values are pairs (value, group), but the estimator should not work with the group. So cut the group off inside a wrapper, but leave it visible to the scorer. A simple estimator wrapper (it may need some polishing for full compliance):
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin, TransformerMixin, clone
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import check_estimator
# from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit
from sklearn.metrics import make_scorer
class CutEstimator(BaseEstimator):
    def __init__(self, base_estimator):
        self.base_estimator = base_estimator

    def fit(self, X, y):
        self._base_estimator = clone(self.base_estimator)
        self._base_estimator.fit(X, y[:, 0].ravel())
        return self

    def predict(self, X):
        return self._base_estimator.predict(X)
#check_estimator(CutEstimator(LogisticRegression()))
def my_score(y, y_pred):
    return np.sum(y[:, 1])
param_grid = {'base_estimator__C': [0.2, 0.5]}
X = np.random.randn(30, 3)
y = np.random.randint(3, size=(X.shape[0], 1))
g = np.ones_like(y)
gs = GridSearchCV(CutEstimator(LogisticRegression()), param_grid, cv=3,
                  scoring=make_scorer(my_score), return_train_score=True
                  ).fit(X, np.hstack((y, g)))
print(gs.cv_results_['mean_test_score'])   # 10 per candidate: each test fold has 30/3 = 10 samples
print(gs.cv_results_['mean_train_score'])  # 20 per candidate: each train fold has 30 - 30/3 = 20 samples
[ 10. 10.]
[ 20. 20.]
Another option, if you need a group-aware splitter such as GroupShuffleSplit: precompute the splits, store each fold's group values in a global dict keyed by a hash of that fold's label array, and let the scorer look them up:
param_grid = {'C': [0.2, 0.5]}
X = np.random.randn(30, 3)
y = np.random.randint(3, size=(X.shape[0]))
g = np.random.randint(3, size=(X.shape[0]))
cv = GroupShuffleSplit(3, random_state=100)
groups_info = {}
for a, b in cv.split(X, y, g):
    groups_info[hash(y[b].tobytes())] = g[b]  # test fold groups
    groups_info[hash(y[a].tobytes())] = g[a]  # train fold groups
def my_score(y, y_pred):
    global groups_info
    g = groups_info[hash(y.tobytes())]  # recover the groups belonging to this fold
    return np.sum(g)

gs = GridSearchCV(LogisticRegression(), param_grid, cv=cv,
                  scoring=make_scorer(my_score), return_train_score=True,
                  ).fit(X, y, groups=g)
print(gs.cv_results_['mean_test_score'])
print(gs.cv_results_['mean_train_score'])
|