Linear regression model in TensorFlow can't learn bias
Date : March 29 2020, 07:55 AM
I hope this helps you. The main problem is that you are feeding just one sample at a time to the model. This makes your optimizer very unstable, which is why you have to use such a small learning rate. I suggest feeding more samples in each step. If you insist on feeding one sample at a time, consider using an optimizer with momentum, such as tf.train.AdamOptimizer(learning_rate). That way you can increase the learning rate and still reach convergence.
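To illustrate the batch-size point, here is a minimal NumPy sketch (made-up data, plain gradient descent standing in for the TF model): averaging the gradient over a mini-batch lets you use a much larger learning rate than single-sample updates while still recovering the bias.

```python
import numpy as np

# Hypothetical data: y = 3x + 2 plus noise; the goal is to recover the bias (2.0).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.05, size=256)

def train(batch_size, lr, steps=2000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=batch_size)
        xb, yb = X[idx, 0], y[idx]
        err = w * xb + b - yb
        # Gradient of the mean squared error w.r.t. w and b, averaged over the batch
        w -= lr * 2.0 * np.mean(err * xb)
        b -= lr * 2.0 * np.mean(err)
    return w, b

w1, b1 = train(batch_size=1, lr=0.01)    # one sample at a time needs a small lr
w32, b32 = train(batch_size=32, lr=0.1)  # a mini-batch tolerates a 10x larger lr
```

Here the mini-batch run converges to roughly w = 3, b = 2 despite the much larger learning rate, which matches the answer's point about single-sample updates being unstable.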
|
Softmax logistic regression: Different performance by scikit-learn and TensorFlow
Date : March 29 2020, 07:55 AM
seems to work fine The problem turned out to be silly: I just needed more epochs and a smaller learning rate (and for efficiency I switched to AdamOptimizer). The results are now equal, although the TF implementation is much slower.
(1681,)
(1681, 2)
SCI-KITLEARN RESULTS:
Accuracy: 0.822129684711
Precision: 0.837883361162
Recall: 0.784522522208
F1: 0.806251963817
TENSORFLOW RESULTS:
Accuracy: 0.82213
Precision: 0.837883361162
Recall: 0.784522522208
F1: 0.806251963817
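For reference, the four metrics reported above can be reproduced from confusion-matrix counts. A minimal NumPy sketch with made-up labels (not the data from the question):

```python
import numpy as np

# Hypothetical binary labels and predictions, just to illustrate the metric definitions.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives:  4
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives: 1
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives: 1
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives:  4

accuracy = (tp + tn) / len(y_true)                  # 0.8
precision = tp / (tp + fp)                          # 0.8
recall = tp / (tp + fn)                             # 0.8
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

Computing the metrics the same way for both models is a quick sanity check that any remaining difference comes from the models, not from the evaluation code.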
|
Tensorflow linear regression result does not match Numpy/SciKit-Learn
Date : March 29 2020, 07:55 AM
Hope this helps you fix your issue. I just compared the results from tensorflow and numpy. Since you used dtype=tf.float32 for X and y, I will use np.float32 for the numpy example as follows:

X_numpy = housing_data_plus_bias.astype(np.float32)
y_numpy = housing.target.reshape(-1, 1).astype(np.float32)

with tf.Session() as sess:
    XTX_value = tf.matmul(XT, X).eval()   # X and XT come from your TF graph
    XTX_numpy = X_numpy.T.dot(X_numpy)
    np.allclose(XTX_value, XTX_numpy, rtol=1e-06)  # True
    np.allclose(XTX_value, XTX_numpy, rtol=1e-07)  # False
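The same float32 effect can be seen without the housing data or TF at all. A self-contained NumPy sketch with a made-up design matrix: float32 carries roughly 7 significant decimal digits, so X^T X computed in float32 matches the float64 result only up to a modest relative tolerance.

```python
import numpy as np

# Hypothetical design matrix standing in for housing_data_plus_bias.
rng = np.random.default_rng(0)
X64 = rng.uniform(0, 100, size=(1000, 5))
X32 = X64.astype(np.float32)

XTX_64 = X64.T.dot(X64)                   # full float64 precision
XTX_32 = X32.T.dot(X32).astype(np.float64)  # computed in float32, then widened

print(np.allclose(XTX_32, XTX_64, rtol=1e-4))  # True: within float32 precision
print(np.allclose(XTX_32, XTX_64, rtol=1e-9))  # False: beyond float32 precision
```

So a mismatch between the TF (float32) and numpy/scikit-learn (float64) normal-equation results at tight tolerances is expected numerical behavior, not a bug.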
|
What is the advantage of using tensorflow instead of scikit-learn for doing regression?
Date : March 29 2020, 07:55 AM
This might help you. If the only thing you are doing is regression, scikit-learn is good enough and will definitely do your job. TensorFlow is more of a deep learning framework for building deep neural networks. There are people using TensorFlow for regression, perhaps out of personal interest or because they think TensorFlow is more famous or "advanced".
|
Scikit-Learn vs Keras (Tensorflow) for multinomial logistic regression
Date : March 29 2020, 07:55 AM
Hope one of those helps. I tried to run your examples and noticed a couple of potential sources:

- The test set is incredibly small, only 45 instances. This means that to get from an accuracy of .89 to .96, the model only needs to predict three more instances correctly, so due to randomness in training your Keras results can oscillate quite a bit.
- As explained by @meowongac https://stackoverflow.com/a/59643522/1467943, you're using a different optimizer. Note that scikit-learn's algorithm sets its learning rate automatically; for SGD in Keras, tweaking the learning rate and/or the number of epochs could lead to improvements.
- scikit-learn quietly uses L2 regularization by default.
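The L2 point can be sketched in plain NumPy (made-up data, gradient descent standing in for both libraries, penalty strength chosen arbitrarily): adding an L2 term to the log-loss gradient, as scikit-learn does by default, shrinks the learned weights relative to unpenalized Keras-style SGD.

```python
import numpy as np

# Hypothetical linearly separable 2-class data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)

def fit(l2, lr=0.1, steps=3000):
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid probabilities
        grad = X.T @ (p - y) / len(y) + l2 * w    # log-loss gradient + L2 term
        w -= lr * grad
    return w

w_plain = fit(l2=0.0)  # Keras-style SGD on plain log loss
w_l2 = fit(l2=0.1)     # scikit-learn-style penalized objective

print(np.linalg.norm(w_l2) < np.linalg.norm(w_plain))  # True: weights are shrunk
```

On separable data like this, the unpenalized weights keep growing, so the two objectives genuinely converge to different solutions; that alone can account for small accuracy gaps between the libraries.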
|