The Learning Rate Range Test                                                                                                                                                                                                                                                                                                                      
    Learning rates are important when configuring a neural network. But choosing one is not easy, as there is no single best learning rate: the right value depends on your dataset.
    Now, how do you choose one? Should it be a fixed one, or should you use learning rate decay? And once you know how you'll choose one, how do you do so objectively? They're all interesting questions, and we'll answer each of them in this blog post.
    Today, we'll look at multiple things. In this blog post, we'll cover what a learning rate is and why it matters, whether to use a fixed or a decaying learning rate, and how to find a good value empirically with the Learning Rate Range Test, implemented in Keras.
    Are you ready? Let's go!
    Let's take a look at the high-level supervised machine learning process:
    Training such models goes through a simple, sequential and cyclical process:
    1. The features are fed forward through the model, which generates predictions.
    2. These predictions are compared with the targets, which represent the ground truth for the    features. That is, they are the actual    classes in the classification scenario above.  
    3. The difference between the predictions and the actual    targets can be captured in the loss value. Depending on your    machine learning problem,     you can choose from a wide range of loss functions.  
    4. Based on the loss value, the model computes how it can be improved, i.e., it computes gradients using backpropagation.
    5. Based on these gradients, an optimizer (such as gradient    descent or an    adaptive optimizer) will adapt the model accordingly.  
    6. The process starts again. Likely, and hopefully, the model    performs slightly better this time.  
    Once you're happy with the end results, you stop the machine learning process, and you have a model that can hopefully be used in production.
    Now, if we wish to understand the concept of the Learning Rate    Range Test in more detail, we must take a look at model    optimizers. In particular, we should study the concept of a    learning rate.  
    When specifying an optimizer, it's usually possible to configure the learning rate. For example, take the Adam optimizer in Keras (Keras, n.d.):
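    As a sketch, defining Adam with its default learning rate in tf.keras looks roughly like this (the exact set of keyword arguments may differ slightly between Keras versions):

```python
from tensorflow.keras.optimizers import Adam

# Adam with its default learning rate of 0.001; the other arguments shown
# are the commonly documented defaults.
optimizer = Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07
)
```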
    Indeed, here, the learning rate can be set with learning_rate - and it is set to    0.001 by default.  
    Now, what is a learning rate? If our goal is to study the Learning Rate Range Test, it's critical to understand the concept of a learning rate, isn't it? 😛
    Let's go back to step 4 of the machine learning process outlined above: computing gradients with backpropagation.
    I always compare optimizing a model with walking down a    mountain.  
    The mountain represents the loss landscape, or how the loss    value changes with respect to the particular model state, and    your goal is to walk to the valley, where loss is lowest.  
    This analogy can be used to understand what backpropagation    does and why you need learning rates to control it.  
    Essentially, I like to see backpropagation as a step-computer. While you walk down the mountain, you obviously take steps towards your goal. However, you don't want to miss out on possible shortcuts towards the valley. This requires you to take smaller steps.
    Now this is why learning rates are useful: while backpropagation will likely compute relatively large steps, you wish to slow down your descent to allow yourself to look around more thoroughly. Perhaps you'll indeed find that path that brings you to the valley in a shorter amount of time!
    So, while backpropagation is a step-computer, the learning rate will allow you to control the size of your steps. While you'll take longer to arrive, you might do so more efficiently after all. Especially when the valley is very narrow, you might no longer overstep it because your steps are too large.
    This analogy also perfectly explains why the learning rate in    the Adam example above was set to learning_rate = 0.001: while it uses    the computed gradient for optimization, it    makes it 1,000 times smaller first, before using it to change    the model weights with the optimizer.  
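    As a rough sketch of that idea, here is a plain gradient descent update (not Adam's full update rule; the weights and gradient are made-up numbers, for illustration only):

```python
import numpy as np

learning_rate = 0.001

# Hypothetical weights and gradient, for illustration only.
weights = np.array([0.5, -1.2, 0.8])
gradient = np.array([2.0, -0.5, 1.5])

# The gradient is scaled by the learning rate before it changes the weights,
# so each step is 1,000 times smaller than the raw gradient would suggest.
weights = weights - learning_rate * gradient
```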
    Let's now build in a small intermezzo: the concepts of overfitting and underfitting, and checking for them by using validation and test loss.
    Often, before you train a model with all your data, you'll first evaluate your choices with hold-out techniques or K-fold Cross Validation. These generate a split between training data and testing data, which you'll need, because you have to decide when the model is good enough.
    And good enough is the precise balance between a model that can still improve and one that has adapted too closely to your training data.
    In the first case, which is called underfitting, your model can still improve in a predictive sense. By feeding more samples and optimizing further, it's likely to improve and show better performance over time.
    However, when you do so for too long, the model will overfit, or adapt too much to your dataset and its idiosyncrasies. As your dataset is a sample drawn from the true population you wish to train for, you face differences between the sample and population means and variances, by definition. If your model is over-adapted to your training set, it's likely that these differences get in the way when you want to use it for new data from the population. And likely, this will occur when you use your model in production.
    You'll therefore always have to strike a balance between the model's predictive performance and the model's ability to generalize. This is a very intricate balance that can often only be found in a small interval of your training iterations.
    Fortunately, it's possible to detect overfitting using a plot of your loss value (Smith, 2018). Always take your validation or test loss for this. Use your test loss if you don't split your training data into true training and validation data (which is the case if you're simply evaluating models with e.g. K-fold Cross Validation). Use validation loss if you evaluate models and train the final one at once (requiring training, validation and testing data). In both cases, you ensure that you use data that the model has not seen before, avoiding that you, as a student, mark your own homework.
    This is especially useful when you are using e.g. TensorBoard,    where you can inspect progress in real-time.  
    However, it's also possible to generate a plot once your training process finishes. Such diagrams make things crisply clear:
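    As an aside, a small helper like the one below can generate such a plot; it assumes a Keras History object, e.g. the return value of model.fit with validation data (the function name is my own):

```python
import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training vs. validation loss from a Keras History object,
    e.g. the return value of model.fit(..., validation_split=0.2)."""
    plt.plot(history.history['loss'], label='training loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()
```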
    In the first part of the training process, the model's predictive performance is clearly improving. Hence, it is underfit during that stage, and additional epochs can improve model performance.
    However, after about the 20th epoch, validation loss starts increasing again, while (you must assume this) training loss still decreases. This means that while the model gets better and better at predicting the training data, it is getting worse at predicting the validation data. Hence, after the 20th epoch, overfitting starts to occur.
    While you can reduce the impact of overfitting or delay it with regularizers and Dropout, it's clear that for this model and corresponding configuration, the optimum is achieved at the 20th epoch. What's important to understand here is that this optimum emerges given the model architecture and configuration! If you changed the architecture, or configured it differently, you might e.g. delay overfitting or achieve even lower validation loss minima. That's why training neural networks is more of an art than a science.
    As the choice of learning rate impacts the loss significantly, it's good to have a clear picture of what overfitting and underfitting are, and how you can spot them on a plot. Let's now take a look at choosing a learning rate.
    Which learning rate to choose? What options do I have?  
    Good questions.  
    Let's now take a look at two ways of setting a learning rate: a fixed learning rate, and a decaying learning rate.
    Let's take a look at the Adam optimizer implementation for Keras again (Keras, n.d.):
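    For instance, attached to a model it could look roughly like this (the model, loss and metrics here are hypothetical stand-ins; only the constant learning rate matters for this point):

```python
from tensorflow.keras.optimizers import Adam

# 'model' is a stand-in for whatever Keras model you are training.
# One constant value, learning_rate = 0.001, is used in every epoch.
model.compile(
    optimizer=Adam(learning_rate=0.001),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
```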
    Here, the learning rate is set as a constant. It's a fixed value which is used in every epoch.
    Unfortunately, this doesn't produce an optimal learning process.
    Let's take a look at two other models that we trained for another blog post:
    The model in orange clearly produces a low loss rapidly, and    much faster than the model in blue. However, we can also    observe some overfitting to occur after approximately the 10th    epoch. Not so weird, given the fact that we trained for ten    times longer than strictly necessary.  
    Now, the rapid descent of the loss value and the increasingly    slower pace of falling down are typical for machine learning    settings which use optimizers like gradient descent or adaptive    ones.  
    Why is this the case? And why is this important for a learning    rate?  
    Let's dig a little bit deeper.
    Supervised machine learning models work with model weights: on initialization, models are    configured to accept certain input data, and they create    weight vectors in which they can store the numeric patterns    they observe. Eventually, they multiply these vectors with the    input vectors during training and production usage.  
    Now, when you start training, it's often best practice to initialize your weight vectors randomly, or by using approaches adapted to your model.
    For the forward pass (step 1 of the 6 steps outlined at the start), you can imagine that multiplying your input data with random weights will produce very poor results. Indeed, loss is likely high during the first few epochs. However, in this stage, it's also possible to make large steps towards accurate weights and hence adequate loss values. That's why you see loss descend so rapidly during the first few iterations of a supervised ML training process: the model moves towards a global loss minimum very fast.
    However, as you walk down that loss mountain, the number of possible steps that can still be taken goes down as a function of the number of steps you have already taken. This is also true for loss landscapes in neural networks: once you get close to the global loss minimum (should it exist), the room for improvement gets tighter and tighter. For this reason, loss balances out (or even gets worse, i.e. overfitting!) over time.
    This pattern, of loss values that initially decrease substantially but balance out later on, has an important consequence for our learning rate:
    We don't want it to be static.
    As we recall, the learning rate essentially tells the model    how much of the gradient to use during    optimization. Remember that with learning_rate = 0.001 only 1/1000th    of the computed gradient is used.  
    For the latter part of the training process, this would be fine, as there's no point in taking large steps there. Instead, you want to take small ones in order to truly find the global minimum, without overshooting it every time. You might even want to use lower learning rate values here.
    However, for the first part of the training process, such low learning rates are problematic. Here, you would actually benefit from large learning rates, for the simple reason that you can afford taking large steps during the first few epochs. A small fixed learning rate will thus unnecessarily slow down your learning process, or even make finding a global minimum in time impossible!
    Hence, a static learning rate is, in my opinion, not really a good idea when training a neural network.
    Now, of course, you can choose to use a static learning rate    that lies somewhere between the large and small ones.    However, is this really a solution, especially when better    solutions are available?  
    Let's now introduce the concept of a decaying learning rate. Along the way, we'll also begin to discover why the Learning Rate Range Test can be useful.
    Instead of a fixed learning rate, wouldn't it be good if we could reduce it over time?
    Indeed, this seems to be an approach to reducing the negative impact of a fixed learning rate. By using a so-called decay scheme, which decides how the learning rate decays over time, you can exert control over the learning rate for an arbitrary epoch.
    There are many decay schemes available; here are four examples: linear decay, step decay, exponential decay and time decay.
    Linear decay allows you to start with a large learning rate, decay it pretty rapidly, and then keep it at a fixed, smaller value. Together with step decay, which keeps your learning rate fixed for a set number of epochs before dropping it, these schemes are not smooth.
    It's also possible to use exponential and time decay, which are in fact smooth. With exponential decay, your learning rate decays rapidly at first and more slowly over time, but smoothly. Time decay is like a diesel engine: a slow start, with great performance once the car has velocity, balancing out when its maximum is reached.
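    As an illustration, tf.keras ships with such schedules out of the box; here is a minimal sketch using ExponentialDecay (the numbers are arbitrary, chosen only for demonstration):

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

# Start at 0.01 and multiply the learning rate by 0.96 every 1,000 steps.
lr_schedule = ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=1000,
    decay_rate=0.96
)

# The schedule can be passed wherever a constant learning rate would go.
optimizer = Adam(learning_rate=lr_schedule)
```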
    While each has its benefits, they raise a wide range of new questions:
    These are all important questions, and the list goes on and on. It's impractical if not impossible to train your whole architecture every time such a question pops up, just to compare. Neither is a grid search a practical option, as it is expensive (Smith, 2018). However, especially with respect to the first two questions, there is another way: the Learning Rate Range Test (Smith, 2018).
    Let's take a look at what it is and what it does!
    With the Learning Rate Range Test, it's possible to find an estimate of the optimal learning rate quite quickly and accurately. Smith (2018) gives a perfect introduction to the topic:
      It is relatively straight-forward: in a test run, one starts      with a very small learning rate, for which one runs the model      and computes the loss on the validation data. One does this      iteratively, while increasing the learning rate exponentially      in parallel. One can then plot their findings into a diagram      representing loss at the y axis and the learning rate at the      x axis. The x value representing the lowest y value, i.e. the      lowest loss, represents the optimal learning rate for the      training data.    
    However, he also argues that  
      The learning rate at this extrema is the largest value that      can be used as the learning rate for the maximum bound with      cyclical learning rates but a smaller value will be necessary      when choosing a constant learning rate or the network will      not begin to converge.    
    Therefore, we'll simply pick a value just a tiny bit to the left of the loss minimum.
    One such Learning Rate Range Test could, theoretically, yield    the following plot:  
    It's a real plot, generated with a ConvNet tested on MNIST data.
    We see the steepest loss descent at a learning rate of roughly 10^-1.95: in the first plot, the descent is steepest there. The second plot confirms this, as it displays the lowest loss delta, i.e. where the negative change in loss value (= improvement) was largest for a given change in learning rate. By consequence, we would choose this learning rate.
    Now that we know what the LR Range Test is, it's time to implement it with Keras. Fortunately, that's not a difficult thing to do!
    Let's take a look.
    We need a few dependencies if we wish to run this example successfully. Before you continue, make sure that you have them installed. The dependencies are as follows: TensorFlow 2.x (which includes Keras), the keras-lr-finder package, Matplotlib and NumPy.
    Now, keep your command prompt open, and generate a new file,    e.g. touch lr-finder.py.    Open this file in a code editor, and you're ready to code.  
    The first thing I always do is to import everything we need:  
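    Roughly, and assuming the TensorFlow 2.x flavour of Keras plus the keras-lr-finder package (imported as keras_lr_finder), the imports could look like this:

```python
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.utils import to_categorical
from keras_lr_finder import LRFinder
```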
    Next, we set the configuration for our test scenario. We'll use batches of 250 samples for testing. Our images are 28 x 28 pixels and are one-channeled, as the MNIST dataset is grayscale. The number of classes equals 10, while we'll test for 5 epochs (unless one of the abort conditions, such as a loss value that goes through the roof, occurs before then). Our estimated start learning rate is 10^-4, while we stop at 10. When generating a plot of our test results, we use a moving average of 20 loss values for smoothing the line, to make our results more interpretable.
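    In code, that configuration could look like this (the variable names are my own):

```python
# Test configuration (names are illustrative).
batch_size = 250
img_width, img_height, img_num_channels = 28, 28, 1
no_classes = 10
no_epochs = 5
start_lr = 1e-4      # learning rate at which the test starts
end_lr = 1e1         # learning rate at which the test stops
moving_average = 20  # window used to smooth the loss deltas in the plots
```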
    The next things we do are related to the dataset:  
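    A sketch of those steps, building on the configuration variables above:

```python
# Load the MNIST dataset.
(input_train, target_train), (input_test, target_test) = mnist.load_data()

# Reshape to (samples, width, height, channels) and scale to [0, 1].
input_train = input_train.reshape(-1, img_width, img_height, img_num_channels)
input_train = input_train.astype('float32') / 255.0
input_test = input_test.reshape(-1, img_width, img_height, img_num_channels)
input_test = input_test.astype('float32') / 255.0

# One-hot encode the targets.
target_train = to_categorical(target_train, no_classes)
target_test = to_categorical(target_test, no_classes)
```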
    Then, we specify the model architecture. It's not the most important thing for today, but here it is. It's a simple ConvNet using Max Pooling:
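    A sketch of such an architecture; the exact layer sizes are my own choice and not essential to the test:

```python
# A simple ConvNet with Max Pooling; just enough to run the test.
model = Sequential([
    Conv2D(32, kernel_size=(3, 3), activation='relu',
           input_shape=(img_width, img_height, img_num_channels)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(no_classes, activation='softmax')
])
```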
    Now, here's the interesting part. We specified the model architecture in our previous step, so we can now decide which tests we want to perform. For the sake of simplicity, we specify only two, but you can test as many as you'd like:
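    A sketch of those test definitions, together with the containers we will fill later (both are my own naming):

```python
# The tests to perform: (optimizer instance, label used in the plots).
tests = [
    (SGD(), 'SGD optimizer'),
    (Adam(), 'Adam optimizer'),
]

# Containers for the results of each test.
learning_rates_per_test = []
losses_per_test = []
loss_changes_per_test = []
labels_per_test = []
```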
    As you can see, the tests that we will perform today will find the best learning rate for the traditional SGD optimizer, and also for the Adam one. What's great is that by plotting them together (that's what we will do later), we can even compare the performance of the optimizers given this architecture. We can thus also answer the question: which optimizer produces the lowest loss?
    Then, we perform the tests. For every test, we specify the test_optimizer to be used as well as the label, and compile the model with that particular optimizer. This is followed by instantiating the Learning Rate Range Test through LRFinder, and performing the actual test using the training data and the configuration we specified above.
    Once a test has finished, which may be because we have completed all epochs, because loss becomes NaN or because loss becomes too large, we take the learning_rates, the losses and the loss_changes and store them in containers. However, before storing the loss changes, we smooth them using the moving_average that we defined before. Credits for the smoothing part of the code go to the keras-lr-finder package.
    After smoothing, we store the learning rates per step, as well as the test losses and the labels, in the containers we specified before. This ensures that all tests are performed the way we want them to be, as shown in the sketch below.
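    Putting those steps together, the loop could look roughly like this. The smoothing helper is my own, and I assume the LRFinder class records the learning rates and losses as lrs and losses, as in the keras-lr-finder package; argument and attribute names may differ between versions:

```python
def smooth(values, window):
    """Simple moving average, inspired by the smoothing code in keras-lr-finder."""
    return np.convolve(values, np.ones(window) / window, mode='valid')

for test_optimizer, label in tests:

    # Compile the model with the optimizer under test.
    model.compile(
        loss='categorical_crossentropy',
        optimizer=test_optimizer,
        metrics=['accuracy']
    )

    # Instantiate and run the Learning Rate Range Test.
    # Depending on the LRFinder version, you may want to rebuild the model
    # between tests so that each test starts from fresh weights.
    lr_finder = LRFinder(model)
    lr_finder.find(input_train, target_train,
                   start_lr=start_lr, end_lr=end_lr,
                   batch_size=batch_size, epochs=no_epochs)

    # Take the recorded learning rates and losses, derive the loss changes,
    # and smooth the changes before storing them.
    learning_rates = lr_finder.lrs
    losses = lr_finder.losses
    loss_changes = smooth(np.diff(losses), moving_average)

    learning_rates_per_test.append(learning_rates)
    losses_per_test.append(losses)
    loss_changes_per_test.append(loss_changes)
    labels_per_test.append(label)
```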
    Now that we have the outcomes, we can visualize them! 🙂 We'll use Matplotlib for doing so, and we'll create two plots: one for the loss deltas and one for the actual loss values.
    For each, we iterate over the containers and draw a line for each test with plt.plot; in our case, this places the two tests on top of each other in one figure. This is followed by plot configuration - for example, we set the x axis to logarithmic scale - and finally by a popup that visualizes the end result.
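    A sketch of that visualization code, assuming the containers filled in the loop above:

```python
# Plot 1: smoothed loss deltas per learning rate.
for i in range(len(tests)):
    # Differencing and smoothing shorten the series, so align the x values.
    plt.plot(learning_rates_per_test[i][:len(loss_changes_per_test[i])],
             loss_changes_per_test[i], label=labels_per_test[i])
plt.xscale('log')
plt.xlabel('Learning rate (log scale)')
plt.ylabel('Loss delta')
plt.title('Loss delta per learning rate')
plt.legend()
plt.show()

# Plot 2: raw loss values per learning rate.
for i in range(len(tests)):
    plt.plot(learning_rates_per_test[i], losses_per_test[i],
             label=labels_per_test[i])
plt.xscale('log')
plt.xlabel('Learning rate (log scale)')
plt.ylabel('Loss')
plt.title('Loss per learning rate')
plt.legend()
plt.show()
```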
    All right, you should now have a model that runs!  
    Open up that terminal again, cd to the folder where your    .py file is located (if    you're not already there), and run e.g. python lr-finder.py. You should see    the epochs begin, and once they finish, two plots similar to    these ones should pop up sequentially:  
    The results are very clear: for this training setting, Adam performs substantially better. We can observe that it reaches a lower loss value compared to SGD (first plot), and that it does so in a much shorter time (second plot - the negative delta occurs at a lower learning rate). Likely, this is because Adam performs adaptive, per-parameter updates, whereas SGD does not. If we had to choose between these two optimizers, it would clearly be Adam with a learning rate of 10^-3.95.
    Full code is available at my Github    repository.  
    In this blog post, we looked at the Learning Rate Range Test for finding the best learning rate for your neural network, empirically.
    This was done by looking at the concept of a learning rate    before moving to Python code. What is a learning rate? Why is    it useful? And how to configure it objectively? Do I need a    fixed or a decaying learning rate? Those are all questions that    we answered in the first part of this blog post.  
    In the second part, we introduced the Learning Rate Range Test: a method based on Smith (2018) that allows us to empirically determine the best learning rate for the model and the compile settings that you specify. It even allows us to compare multiple settings at once, and to see which learning rate is best!
    In the third and final part, we used the keras-lr-finder package to implement    the Learning Rate Range Test. With blocks of Python code, we    explained each step of doing so - and why we set that    particular step. This should allow you to use the Learning Rate    Range Test in your own projects too.  