This is the archived website for the Spring 2025 offering of CS336.
The latest offering is here.

Logistics

  • Lectures: Tuesday/Thursday 3:00-4:20pm in NVIDIA Auditorium
  • Office hours:
    • Tatsu Hashimoto (Gates 364): Fridays at 3-4pm
    • Percy Liang (Gates 350): Fridays at 11am-12pm
    • Marcel Rød (Gates 415): Mondays 11am-12pm, Wednesdays 11am-12pm
    • Neil Band (Gates 358): Mondays 4-5pm, Tuesdays 5-6pm
    • Rohith Kuditipudi (Gates 358): Mondays 10-11am, Wednesdays 10-11am
  • Contact: Students should ask all course-related questions in public Slack channels. All announcements will also be made in Slack. For personal matters, email cs336-spr2425-staff@lists.stanford.edu.

Content

What is this course about?

Language models are the cornerstone of modern natural language processing (NLP) applications, opening up a new paradigm in which a single general-purpose system addresses a range of downstream tasks. As the fields of artificial intelligence (AI), machine learning (ML), and NLP continue to grow, a deep understanding of language models is becoming essential for scientists and engineers alike. This course is designed to give students a comprehensive understanding of language models by walking them through the entire process of developing their own. Drawing inspiration from operating systems courses that build an entire operating system from scratch, we will lead students through every aspect of language model creation, including data collection and cleaning for pretraining, transformer model construction, model training, and evaluation before deployment.

Prerequisites

Note that this is a 5-unit, very implementation-heavy class, so please budget enough time for it.


Coursework

Assignments

  • Assignment 1: Basics [leaderboard]
    • Implement all of the components (tokenizer, model architecture, optimizer) necessary to train a standard Transformer language model.
    • Train a minimal language model.
  • Assignment 2: Systems [leaderboard]
    • Profile and benchmark the model and layers from Assignment 1 using advanced tools, and optimize attention with your own Triton implementation of FlashAttention-2.
    • Build a memory-efficient, distributed version of the Assignment 1 model training code.
  • Assignment 3: Scaling
    • Understand the function of each component of the Transformer.
    • Query a training API to fit a scaling law to project model scaling.
  • Assignment 4: Data [leaderboard]
    • Convert raw Common Crawl dumps into usable pretraining data.
    • Perform filtering and deduplication to improve model performance.
  • Assignment 5: Alignment and Reasoning RL
    • Apply supervised finetuning and reinforcement learning to train LMs to reason when solving math problems.
    • Optional Part 2: implement and apply safety alignment methods such as DPO.
All (currently tentative) deadlines are listed in the schedule.
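To give a flavor of Assignment 1's tokenizer component: byte-pair encoding (BPE) training repeatedly finds the most frequent adjacent symbol pair and merges it into a new symbol. A minimal sketch of one such step in plain Python (the helper names here are illustrative, not the assignment's API):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair, new_symbol):
    """Replace each occurrence of `pair` with `new_symbol` (left-to-right, non-overlapping)."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_symbol)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# One BPE training step on a toy sequence.
tokens = list("abababcd")
pair = most_frequent_pair(tokens)        # ('a', 'b') occurs 3 times
tokens = merge_pair(tokens, pair, "ab")
print(tokens)  # ['ab', 'ab', 'ab', 'c', 'd']
```

A real tokenizer operates on bytes, handles pretokenization, and iterates this step until a target vocabulary size is reached.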
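For Assignment 3, fitting a scaling law typically means fitting a power law L(N) = a · N^(−b) to (scale, loss) pairs, which reduces to linear regression in log-log space. A minimal sketch with made-up constants (real analyses use noisier data and more robust fitting):

```python
import math

def fit_power_law(ns, losses):
    """Fit L(N) = a * N**(-b) by least squares in log-log space; returns (a, b)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(l) for l in losses]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic runs whose loss follows 400 * N^{-0.076} (made-up constants).
ns = [1e7, 1e8, 1e9, 1e10]
losses = [400 * n ** -0.076 for n in ns]
a, b = fit_power_law(ns, losses)
print(round(a, 1), round(b, 3))  # 400.0 0.076
```

On exact data the regression recovers the constants; with real training runs you would also account for noise and outliers before extrapolating.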
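Assignment 4's deduplication can be illustrated at its simplest as exact dedup by content hashing. A minimal sketch; a real pipeline would add fuzzy dedup (e.g. MinHash over n-grams), which this only hints at:

```python
import hashlib

def dedup_exact(docs):
    """Drop documents that are byte-identical after whitespace normalization."""
    seen, kept = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

docs = ["Hello  world", "Hello world", "Another page"]
print(dedup_exact(docs))  # ['Hello  world', 'Another page']
```

Exact dedup only catches identical copies; near-duplicate web pages (boilerplate, mirrors) need approximate methods on top.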

GPU compute for self-study

If you are following along at home, you can access GPU compute from a cloud provider to complete the assignments.
Here are a few options (prices for a single H100 80GB GPU on June 6, 2025):

For convenience and to save money, we recommend debugging correctness of your implementation on CPU first and then using GPU(s) (with the count recommended in the assignments) for completing training runs (A1, A4, A5) or benchmarking GPU operations (A2).
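The CPU-first workflow above pairs naturally with a small timing harness. A minimal CPU-only sketch (GPU benchmarking in Assignment 2 additionally requires synchronization, e.g. torch.cuda.synchronize(), because CUDA kernels launch asynchronously; that is omitted here):

```python
import time

def benchmark(fn, *, warmup=3, iters=10):
    """Return mean seconds per call of `fn`, excluding warmup iterations."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Example: time a cheap pure-Python workload.
mean_s = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_s * 1e6:.1f} us/iter")
```

Warmup iterations matter even on CPU (caches, allocator state); on GPU they also absorb one-time kernel compilation costs.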

Honor code

Like all other classes at Stanford, we take the student Honor Code seriously. Please respect the following policies:
  • Collaboration: Study groups are allowed, but students must understand and complete their own assignments, and hand in one assignment per student. If you worked in a group, please put the names of the members of your study group at the top of your assignment. Please ask if you have any questions about the collaboration policy.
  • AI tools: Prompting LLMs such as ChatGPT is permitted for low-level programming questions or high-level conceptual questions about language models, but using them to solve the problems directly is prohibited. We strongly encourage you to disable AI autocomplete (e.g., Cursor Tab, GitHub Copilot) in your IDE when completing assignments (non-AI autocomplete, e.g., of function names, is totally fine). We have found that AI autocomplete makes it much harder to engage deeply with the content.
  • Existing code: Implementations of many of the things you will build exist online. The handouts we provide will be self-contained, so you will not need to consult third-party code to produce your own implementation. Thus, you should not look at any existing code unless otherwise specified in the handouts.

Submitting coursework

  • All coursework is submitted via Gradescope by the deadline. Do not submit coursework via email.
  • If anything goes wrong, please ask a question in Slack or contact a course assistant.
  • You can submit as many times as you'd like until the deadline: we will only grade the last submission.
  • Partial work is better than not submitting any work.

Late days

  • Each student has 6 late days to use. A late day extends the deadline by 24 hours.
  • You can use up to 3 late days per assignment.

Regrade requests

If you believe that the course staff made an objective error in grading, you may submit a regrade request on Gradescope within 3 days after the grades are released.

Sponsor

We would like to thank Together AI for sponsoring the compute for this class.


Schedule

#  | Date           | Description                             | Course Materials | Deadlines
1  | Tues April 1   | Overview, tokenization (Percy)          | lecture_01.py    | Assignment 1 out [code] [preview] [leaderboard]
2  | Thurs April 3  | PyTorch, resource accounting (Percy)    | lecture_02.py    |
3  | Tues April 8   | Architectures, hyperparameters (Tatsu)  | lecture 3.pdf    |
4  | Thurs April 10 | Mixture of experts (Tatsu)              | lecture 4.pdf    |
5  | Tues April 15  | GPUs (Tatsu)                            | lecture 5.pdf    | Assignment 1 due; Assignment 2 out [code] [preview] [leaderboard]
6  | Thurs April 17 | Kernels, Triton (Tatsu)                 | lecture_06.py    |
7  | Tues April 22  | Parallelism (Tatsu)                     | lecture 7.pdf    |
8  | Thurs April 24 | Parallelism (Percy)                     | lecture_08.py    |
9  | Tues April 29  | Scaling laws (Tatsu)                    | lecture 9.pdf    | Assignment 3 out [code] [preview]
   | Wed April 30   |                                         |                  | Assignment 2 due
10 | Thurs May 1    | Inference (Percy)                       | lecture_10.py    |
11 | Tues May 6     | Scaling laws (Tatsu)                    | lecture 11.pdf   | Assignment 3 due; Assignment 4 out [code] [preview] [leaderboard]
12 | Thurs May 8    | Evaluation (Percy)                      | lecture_12.py    |
13 | Tues May 13    | Data (Percy)                            | lecture_13.py    |
14 | Thurs May 15   | Data (Percy)                            | lecture_14.py    |
15 | Tues May 20    | Alignment - SFT/RLHF (Tatsu)            | lecture 15.pdf   |
16 | Thurs May 22   | Alignment - RL (Tatsu)                  | lecture 16.pdf   |
   | Fri May 23     |                                         |                  | Assignment 4 due; Assignment 5 out [code] [preview]
17 | Tues May 27    | Alignment - RL (Percy)                  | lecture_17.py    |
18 | Thurs May 29   | Guest lecture by Junyang Lin            |                  |
19 | Tues June 3    | Guest lecture by Mike Lewis             |                  |
   | Fri June 6     |                                         |                  | Assignment 5 due