

The Matrix Calculus You Need For Deep Learning

(Terence is a tech lead at Google and ex-Professor of computer/data science in University of San Francisco's MS in Data Science program. You might know Terence as the creator of the ANTLR parser generator. For more material, see Jeremy's fast.ai courses and University of San Francisco's Data Institute in-person version of the deep learning course.)

Please send comments, suggestions, or fixes to Terence. Printable version (this HTML was generated from markup using bookish). A Chinese version is also available (content not verified by us).

This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks. We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep learning in practice; rather, this material is for those who are already familiar with the basics of neural networks and wish to deepen their understanding of the underlying math. Don't worry if you get stuck at some point along the way; just go back and reread the previous section, and try writing down and working through some examples. And if you're still stuck, we're happy to answer your questions in the Theory category at forums.fast.ai. Note: there is a reference section at the end of the paper summarizing all the key matrix calculus rules and terminology discussed here.

Topics covered include:
- Introduction to vector calculus and partial derivatives
- Derivatives of vector element-wise binary operators
- The gradient of the neural network loss function
- The gradient with respect to the weights
- The derivative with respect to the bias

Most of us last saw calculus in school, but derivatives are a critical part of machine learning, particularly deep neural networks, which are trained by optimizing a loss function. Pick up a machine learning paper or the documentation of a library such as PyTorch and calculus comes screeching back into your life like distant relatives around the holidays. And it's not just any old scalar calculus that pops up; you need differential matrix calculus, the shotgun wedding of linear algebra and multivariate calculus.

Well, maybe need isn't the right word; Jeremy's courses show how to become a world-class deep learning practitioner with only a minimal level of scalar calculus, thanks to leveraging the automatic differentiation built into modern deep learning libraries.
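To make that concrete, here is a minimal sketch of what automatic differentiation does for you; the model, input, target, and learning rate are all illustrative, not from the paper. PyTorch computes the derivative of the loss with respect to each parameter, and one gradient-descent step follows:

```python
import torch

# A tiny "model": one weight w and one bias b (illustrative values).
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)

x = torch.tensor(3.0)       # a single input
target = torch.tensor(7.0)  # the value we want the model to produce

# Forward pass and a squared-error loss.
loss = (w * x + b - target) ** 2

# Autograd computes d(loss)/dw and d(loss)/db; no hand calculus required.
loss.backward()
print(w.grad, b.grad)  # tensor(-3.) tensor(-1.)

# One gradient-descent step: nudge each parameter against its gradient.
lr = 0.1
with torch.no_grad():
    w -= lr * w.grad
    b -= lr * b.grad
```

Under the hood, backward() is applying the same scalar and matrix calculus rules this paper sets out to explain.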
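The "matrix" part enters when inputs and outputs are vectors: the derivative of a vector function with respect to a vector argument is a Jacobian matrix. A small sketch (again our illustration; the function f is made up) using torch.autograd.functional.jacobian shows the diagonal Jacobian of an element-wise operation, a case treated later in the paper:

```python
import torch
from torch.autograd.functional import jacobian

def f(x):
    # Element-wise square: f_i(x) = x_i * x_i.
    return x * x

x = torch.tensor([1.0, 2.0, 3.0])

# J[i][j] = d f_i / d x_j, which is 2*x_i on the diagonal and 0 elsewhere.
print(jacobian(f, x))
# tensor([[2., 0., 0.],
#         [0., 4., 0.],
#         [0., 0., 6.]])
```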
