🛠️ Workshop: A Tour of PyTorch 2.0
Learn about PyTorch 2.0 with a deep dive into the technology stack behind the new
TorchInductor compiler. The new compiler stack reduces training times across a wide range of workloads while remaining fully backward compatible. Bring your laptop, or connect to a remote GPU-powered system to run the examples.
🛠️ Workshop: PyTorch Distributed Training on AWS
Learn how to efficiently scale your training workloads to multiple instances with Amazon SageMaker. SageMaker manages your compute, storage, and networking infrastructure; simply bring your PyTorch code and learn how to distribute training across a large number of CPUs and GPUs.
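Under the hood, distributed PyTorch training on SageMaker typically launches a script that uses `torch.distributed` and `DistributedDataParallel` (DDP) on each worker. The single-process sketch below (world size 1, CPU `gloo` backend, and a toy `nn.Linear` model chosen for illustration) shows the core DDP pattern that scales to many ranks when a launcher such as SageMaker sets the rank and world size for each process.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# A real launcher (e.g. SageMaker or torchrun) sets these per process;
# here we hard-code a single-process group for a runnable sketch.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 2)      # toy model for illustration
ddp_model = DDP(model)             # gradients are all-reduced across ranks

out = ddp_model(torch.randn(8, 4))
loss = out.sum()
loss.backward()                    # backward triggers the gradient all-reduce

dist.destroy_process_group()
```

With more than one rank, each process runs this same script on a shard of the data, and DDP keeps the model replicas in sync after every backward pass.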