Welcome to Part 2: Deep Learning from the Foundations, which shows how to build a state-of-the-art deep learning model from scratch. It takes you all the way from the foundations of implementing matrix multiplication and backpropagation, through high-performance mixed-precision training, to the latest neural network architectures and learning techniques, and everything in between. It covers many of the most important academic papers that form the foundations of modern deep learning, using “code-first” teaching, where each method is implemented from scratch in Python and explained in detail (in the process, we’ll discuss many important software engineering techniques too).

Before starting this part, you need to have completed Part 1: Practical Deep Learning for Coders. The first five lessons use Python, PyTorch, and the fastai library; the last two lessons use Swift for TensorFlow, and are co-taught with Chris Lattner, the original creator of Swift, clang, and LLVM.

The purpose of Deep Learning from the Foundations is, in some ways, the opposite of Part 1. This time, we’re not learning practical things that we will use right away, but foundations that we can build on. This is particularly important nowadays because this field is moving so fast. In this new course, we will learn to implement a lot of things that are inside the fastai and PyTorch libraries. In fact, we’ll be reimplementing a significant subset of the fastai library! Along the way, we will practice implementing papers, which is an important skill to master when making state-of-the-art models.

In the remainder of this post I’ll provide a quick summary of some of the topics you can expect to cover in this course. If this sounds interesting, click on lesson 8 in the “Part 2” section of the sidebar on the left. And if you have any questions along the way (or just want to chat with other students), there’s a very active forum for the course.

Lesson 8: Matrix multiplication; forward and backward passes

Our main goal is to build up to a complete system that can train ImageNet to a world-class result, both in terms of accuracy and speed, so we’ll need to cover a lot of territory. Step 1 is matrix multiplication! We’ll gradually refactor and accelerate our first, pure-Python matrix multiplication, and in the process learn about broadcasting and Einstein summation. We’ll then use this to create a basic neural net forward pass, including a first look at how neural networks are initialized (a topic we’ll go into in great depth in the coming lessons).
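To make that refactoring path concrete, here is a minimal sketch of the three stages: a naive triple loop, a broadcast version, and Einstein summation. This is not the course notebook; it uses NumPy rather than PyTorch for brevity, and the function names are my own, but the progression of ideas is the same.

```python
import numpy as np

def matmul_naive(a, b):
    # Stage 1: pure-Python triple loop -- correct but very slow.
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br, "inner dimensions must match"
    c = np.zeros((ar, bc))
    for i in range(ar):
        for j in range(bc):
            for k in range(ac):
                c[i, j] += a[i, k] * b[k, j]
    return c

def matmul_broadcast(a, b):
    # Stage 2: broadcasting removes the two inner loops.
    # a[i, :, None] has shape (ac, 1); multiplied against b with shape
    # (ac, bc) it broadcasts to (ac, bc), and summing over axis 0
    # computes sum_k a[i, k] * b[k, j] for every j at once.
    c = np.zeros((a.shape[0], b.shape[1]))
    for i in range(a.shape[0]):
        c[i] = (a[i, :, None] * b).sum(axis=0)
    return c

def matmul_einsum(a, b):
    # Stage 3: Einstein summation -- the repeated index k is summed over.
    return np.einsum('ik,kj->ij', a, b)

a = np.random.randn(5, 3)
b = np.random.randn(3, 4)
expected = a @ b
assert np.allclose(matmul_naive(a, b), expected)
assert np.allclose(matmul_broadcast(a, b), expected)
assert np.allclose(matmul_einsum(a, b), expected)
```

Each stage computes the identical result; the speedups come purely from moving the loops out of Python and into vectorized library code.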
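The basic forward pass built on top of matrix multiplication can also be sketched briefly. This is a simplified illustration, again in NumPy rather than PyTorch, with my own helper names; it shows the linear-then-ReLU structure and a scaled ("Kaiming-style") initialization, which divides by the square root of the fan-in so activation variance stays stable from layer to layer, a topic the lessons explore in depth.

```python
import numpy as np

def init_layer(n_in, n_out, rng):
    # Scaled init: dividing by sqrt(n_in) keeps the variance of
    # x @ w roughly equal to the variance of x.
    w = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
    b = np.zeros(n_out)
    return w, b

def relu(x):
    # Zero out negative activations.
    return np.clip(x, 0.0, None)

def forward(x, params):
    # Linear -> ReLU for each hidden layer, plain linear output layer.
    *hidden, last = params
    for w, b in hidden:
        x = relu(x @ w + b)
    w, b = last
    return x @ w + b

rng = np.random.default_rng(0)
# Hypothetical sizes: 784 inputs (e.g. flattened 28x28 images),
# one hidden layer of 50 units, a single output.
params = [init_layer(784, 50, rng), init_layer(50, 1, rng)]
out = forward(rng.standard_normal((64, 784)), params)
print(out.shape)  # one output per example in the batch of 64
```

The layer sizes here are illustrative only; the point is that the whole forward pass reduces to matrix multiplications plus elementwise nonlinearities.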