Large Language Models (LLMs) have demonstrated astonishing capabilities, but out of the box they are simply powerful next-token predictors. They don’t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).

You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.

![[Pasted image 20250730232756.png]]

This post will decode that diagram, piece by piece. We’ll explore the “why” behind each component, moving from high-level concepts to the deep technical reasoning that makes this process work.

## Translating RL to a Conversation

In the RLHF framing, the state is the prompt plus the tokens generated so far, each action is the choice of the next token, and the reward model scores the completed response. Per-token advantages are then estimated with Generalized Advantage Estimation (GAE), a discounted sum of temporal-difference residuals:

$$\hat{A}_t = \sum_{l=0}^{\infty} (\gamma \lambda)^l \, \delta_{t+l}, \quad \text{where } \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$
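To make the recursion concrete, here is a minimal NumPy sketch of GAE computed in a single backward pass over one response. The function name, the toy reward/value arrays, and the single-episode bootstrap convention are illustrative assumptions, not code from this post:

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for a single trajectory.

    rewards: shape (T,)   -- per-step rewards r_t
    values:  shape (T+1,) -- value estimates V(s_t), including a
             bootstrap value for the state after the final step
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    # Walk backwards: A_t = delta_t + gamma * lam * A_{t+1},
    # which unrolls to the discounted sum of TD residuals above.
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Toy example: reward arrives only at the final token, as in RLHF,
# where the reward model scores the completed response.
rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.2, 0.4, 0.7, 0.0])  # last entry is the bootstrap V
print(gae_advantages(rewards, values))
```

Computing the advantages backwards lets each step reuse the running accumulator, so the formally infinite sum collapses to a single O(T) pass over the response.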