Zora Hirbodvash
2 min read · Mar 1, 2021


A quick summary of built-in methods for training and evaluation in TensorFlow

When we work with a data set, it should be split into two parts: a training set and a test set. Typically, about 80% of the data goes into the training set and the remaining 20% into the test set. The training set is used to fit the model, while the test set is used to evaluate it and to check its predictions. TensorFlow provides built-in APIs for training and validation.
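As a minimal sketch of that 80/20 split (the sample counts and feature dimension here are made-up toy values), we can shuffle the indices and slice:

```python
import numpy as np

# Toy data set: 100 samples with 4 features each (illustrative values).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# Shuffle the indices, then take the first 80% for training
# and the remaining 20% for testing.
indices = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = indices[:split], indices[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(X_train.shape, X_test.shape)  # (80, 4) (20, 4)
```

Shuffling before slicing matters: if the data are ordered (for example, by class), a plain slice would give training and test sets with different distributions.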

First, we start with the method fit(), which is used for supervised learning. Calling fit() tunes the model's parameters with a gradient-based training loop: it slices the data into batches of a given size and iterates over the entire data set for a specified number of epochs. In other words, fit() trains the model on the given inputs. Another built-in method is predict(), which generates output predictions for input samples. Finally, evaluate() measures the model's performance on the test set.
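The three methods can be seen together in a small sketch. The model and data below are toy choices for illustration (a one-weight linear model learning y = 2x), not a recommended architecture:

```python
import numpy as np
import tensorflow as tf

# Toy regression problem: learn y = 2x from noisy samples.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 1)).astype("float32")
y = 2.0 * X[:, 0] + rng.normal(scale=0.05, size=100).astype("float32")

# 80/20 split into training and test sets.
X_train, y_train = X[:80], y[:80]
X_test, y_test = X[80:], y[80:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# fit() slices the data into batches and iterates over the whole
# training set once per epoch, updating the weights by gradient descent.
model.fit(X_train, y_train, batch_size=16, epochs=20, verbose=0)

# evaluate() returns the loss (and any metrics) on the held-out test set.
test_loss = model.evaluate(X_test, y_test, verbose=0)

# predict() generates outputs for new input samples.
preds = model.predict(X_test, verbose=0)
print(test_loss, preds.shape)
```

After training, the test loss should be close to zero, since the underlying relationship is exactly linear.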

There are also many built-in methods for optimizers, losses, and metrics. Among the most common optimizers are SGD() and Adam(). According to the official TensorFlow documentation, Adam is a stochastic gradient descent (SGD) method that computes individual adaptive learning rates for different parameters from estimates of the first- and second-order moments of the gradients. The loss function is what the optimizer minimizes during training; MeanSquaredError() is one of the most popular losses. A metric, by contrast, is used to evaluate the model's performance: it only reports information about the model and plays no role in the optimization itself. AUC(), Precision(), and Recall() are among the most common metrics.
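The optimizer, loss, and metrics are wired together in compile(). The sketch below uses a binary-classification set-up (random toy data, so the metric values are not meaningful) because AUC(), Precision(), and Recall() apply to classification; for a regression problem, MeanSquaredError() would be passed as the loss in exactly the same way:

```python
import numpy as np
import tensorflow as tf

# Random toy binary-classification data (illustrative only).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 4)).astype("float32")
y = rng.integers(0, 2, size=(100, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The optimizer minimizes the loss; the metrics only report on
# performance and take no part in the optimization.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[
        tf.keras.metrics.AUC(),
        tf.keras.metrics.Precision(),
        tf.keras.metrics.Recall(),
    ],
)

model.fit(X, y, batch_size=16, epochs=2, verbose=0)

# evaluate() now returns [loss, auc, precision, recall].
results = model.evaluate(X, y, verbose=0)
print(len(results))
```

Note that swapping the metrics list changes only what is reported during fit() and evaluate(); training itself is driven entirely by the loss.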
