Inside Deep Learning, Video Edition by Edward Raff is your go-to guide for mastering deep learning with the PyTorch framework. This video series helps you understand and implement deep learning techniques to solve real-world data problems. You'll learn to choose the right components, train and evaluate models, and fine-tune them for maximum performance. The content is presented in clear, simple language, making complex concepts easy to grasp even if you're new to the field.
Deep learning can seem like a mysterious black box, but it doesn’t have to be. This video series breaks down how models and algorithms work, giving you the confidence to understand and explain your projects. You don’t need to be a math expert or a professional data scientist to follow along. The videos offer practical insights and explanations that make deep learning accessible to everyone.
Through plain language, annotated code, and practical examples, the series explains core deep learning concepts. You'll explore various neural network types without getting bogged down in complex math. Each solution is designed to run on readily available GPU hardware, so you can apply what you learn immediately. By the end of the series, you'll have the knowledge and skills to use deep learning effectively in your own projects.
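Since the series emphasizes running every solution on readily available GPU hardware, here is a minimal sketch of the standard PyTorch device-selection pattern; the tensor and variable names are illustrative, not taken from the course materials:

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move data (or a model, via model.to(device)) to the chosen device
# before computing on it, so all operations run on the same hardware.
x = torch.randn(3, 4).to(device)
y = (x * 2).sum()

print(f"running on {device.type}, result shape: {tuple(y.shape)}")
```

The same two lines (`torch.device(...)` plus `.to(device)`) let code written on a laptop CPU run unchanged on a GPU, which is why this pattern appears in most PyTorch tutorials.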
Table of Contents for Inside Deep Learning, Video Edition by Edward Raff:
- Part 1. Foundational methods
  - Chapter 1. The mechanics of learning
    - The world as tensors
    - Automatic differentiation
    - Optimizing parameters
    - Loading dataset objects
    - Summary
  - Chapter 2. Fully connected networks
    - Building our first neural network
    - Classification problems
    - Better training code
    - Training in batches
    - Summary
  - Chapter 3. Convolutional neural networks
    - What are convolutions?
    - How convolutions benefit image processing
    - Putting it into practice: Our first CNN
    - Adding pooling to mitigate object movement
    - Data augmentation
    - Summary
  - Chapter 4. Recurrent neural networks
    - RNNs in PyTorch
    - Improving training time with packing
    - More complex RNNs
    - Summary
  - Chapter 5. Modern training techniques
    - Learning rate schedules
    - Making better use of gradients
    - Hyperparameter optimization with Optuna
    - Summary
  - Chapter 6. Common design building blocks
    - Normalization layers: Magically better convergence
    - Skip connections: A network design pattern
    - 1 × 1 Convolutions: Sharing and reshaping information in channels
    - Residual connections
    - Long short-term memory RNNs
    - Summary
- Part 2. Building advanced networks
  - Chapter 7. Autoencoding and self-supervision
    - Designing autoencoding neural networks
    - Bigger autoencoders
    - Denoising autoencoders
    - Autoregressive models for time series and sequences
    - Summary
  - Chapter 8. Object detection
    - Transposed convolutions for expanding image size
    - U-Net: Looking at fine and coarse details
    - Object detection with bounding boxes
    - Using the pretrained Faster R-CNN
    - Summary
  - Chapter 9. Generative adversarial networks
    - Mode collapse
    - Wasserstein GAN: Mitigating mode collapse
    - Convolutional GAN
    - Conditional GAN
    - Walking the latent space of GANs
    - Ethics in deep learning
    - Summary
  - Chapter 10. Attention mechanisms
    - Adding some context
    - Putting it all together: A complete attention mechanism with context
    - Summary
  - Chapter 11. Sequence-to-sequence
    - Machine translation and the data loader
    - Inputs to Seq2Seq
    - Seq2Seq with attention
    - Summary
  - Chapter 12. Network design alternatives to RNNs
    - Averaging embeddings over time
    - Pooling over time and 1D CNNs
    - Positional embeddings add sequence information to any model
    - Transformers: Big models for big data
    - Summary
  - Chapter 13. Transfer learning
    - Transfer learning and training with CNNs
    - Learning with fewer labels
    - Pretraining with text
    - Summary
  - Chapter 14. Advanced building blocks
    - Improved residual blocks
    - MixUp training reduces overfitting
    - Summary
- Appendix. Setting up Colab
Who is this course for?
- Designed for Python programmers with basic machine learning skills.