diff --git a/404.html b/404.html
index 933633f..4b34eb0 100644
--- a/404.html
+++ b/404.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/about/index.html b/about/index.html
index 6c25567..2dd50a1 100644
--- a/about/index.html
+++ b/about/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/categories/index.html b/categories/index.html
index aadb1e0..758f6b6 100644
--- a/categories/index.html
+++ b/categories/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/index.html b/index.html
index 9af4a03..1a7dd19 100644
--- a/index.html
+++ b/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/index.xml b/index.xml
index 39f8809..a0834be 100644
--- a/index.xml
+++ b/index.xml
@@ -1,5 +1,5 @@
-Eric X. Liu's Personal Page/Recent content on Eric X. Liu's Personal PageHugoenSun, 03 Aug 2025 01:47:39 +0000A Deep Dive into PPO for Language Models/posts/a-deep-dive-into-ppo-for-language-models/Sun, 03 Aug 2025 01:47:10 +0000/posts/a-deep-dive-into-ppo-for-language-models/<p>Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).</p>
-<p>You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.</p>T5 - The Transformer That Zigged When Others Zagged - An Architectural Deep Dive/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 03 Aug 2025 01:47:10 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
+Eric X. Liu's Personal Page/Recent content on Eric X. Liu's Personal PageHugoenSun, 03 Aug 2025 02:37:56 +0000A Deep Dive into PPO for Language Models/posts/a-deep-dive-into-ppo-for-language-models/Sun, 03 Aug 2025 02:36:44 +0000/posts/a-deep-dive-into-ppo-for-language-models/<p>Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).</p>
+<p>You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.</p>T5 - The Transformer That Zigged When Others Zagged - An Architectural Deep Dive/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 03 Aug 2025 02:36:44 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
 <p>But to truly understand the field, we must look at the pivotal models that explored different paths. Google&rsquo;s T5, or <strong>Text-to-Text Transfer Transformer</strong>, stands out as one of the most influential. It didn&rsquo;t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.</p>Some useful files/posts/useful/Mon, 26 Oct 2020 04:14:43 +0000/posts/useful/<ul>
 <li><a href="https://ericxliu.me/rootCA.pem" class="external-link" target="_blank" rel="noopener">rootCA.pem</a></li>
 <li><a href="https://ericxliu.me/vpnclient.ovpn" class="external-link" target="_blank" rel="noopener">vpnclient.ovpn</a></li>
diff --git a/posts/a-deep-dive-into-ppo-for-language-models/index.html b/posts/a-deep-dive-into-ppo-for-language-models/index.html
index 8e59e1b..f5bd92d 100644
--- a/posts/a-deep-dive-into-ppo-for-language-models/index.html
+++ b/posts/a-deep-dive-into-ppo-for-language-models/index.html
@@ -1,10 +1,10 @@
 A Deep Dive into PPO for Language Models · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/posts/index.html b/posts/index.html
index 0a0fe93..0193e08 100644
--- a/posts/index.html
+++ b/posts/index.html
@@ -7,4 +7,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/posts/index.xml b/posts/index.xml
index 66f896c..c720390 100644
--- a/posts/index.xml
+++ b/posts/index.xml
@@ -1,5 +1,5 @@
-Posts on Eric X. Liu's Personal Page/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenSun, 03 Aug 2025 01:47:39 +0000A Deep Dive into PPO for Language Models/posts/a-deep-dive-into-ppo-for-language-models/Sun, 03 Aug 2025 01:47:10 +0000/posts/a-deep-dive-into-ppo-for-language-models/<p>Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).</p>
-<p>You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.</p>T5 - The Transformer That Zigged When Others Zagged - An Architectural Deep Dive/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 03 Aug 2025 01:47:10 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
+Posts on Eric X. Liu's Personal Page/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenSun, 03 Aug 2025 02:37:56 +0000A Deep Dive into PPO for Language Models/posts/a-deep-dive-into-ppo-for-language-models/Sun, 03 Aug 2025 02:36:44 +0000/posts/a-deep-dive-into-ppo-for-language-models/<p>Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).</p>
+<p>You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.</p>T5 - The Transformer That Zigged When Others Zagged - An Architectural Deep Dive/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 03 Aug 2025 02:36:44 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
 <p>But to truly understand the field, we must look at the pivotal models that explored different paths. Google&rsquo;s T5, or <strong>Text-to-Text Transfer Transformer</strong>, stands out as one of the most influential. It didn&rsquo;t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.</p>Some useful files/posts/useful/Mon, 26 Oct 2020 04:14:43 +0000/posts/useful/<ul>
 <li><a href="https://ericxliu.me/rootCA.pem" class="external-link" target="_blank" rel="noopener">rootCA.pem</a></li>
 <li><a href="https://ericxliu.me/vpnclient.ovpn" class="external-link" target="_blank" rel="noopener">vpnclient.ovpn</a></li>
diff --git a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
index 78910f4..5058e82 100644
--- a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
+++ b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
@@ -1,10 +1,10 @@
 T5 - The Transformer That Zigged When Others Zagged - An Architectural Deep Dive · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/posts/useful/index.html b/posts/useful/index.html
index 443f227..d444194 100644
--- a/posts/useful/index.html
+++ b/posts/useful/index.html
@@ -10,4 +10,4 @@
 One-minute read
  •
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 2e334a2..192e074 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-/posts/a-deep-dive-into-ppo-for-language-models/2025-08-03T01:47:39+00:00weekly0.5/2025-08-03T01:47:39+00:00weekly0.5/posts/2025-08-03T01:47:39+00:00weekly0.5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T01:47:39+00:00weekly0.5/posts/useful/2020-10-26T04:47:36+00:00weekly0.5/about/2020-06-16T23:30:17-07:00weekly0.5/categories/weekly0.5/tags/weekly0.5
\ No newline at end of file
+/posts/a-deep-dive-into-ppo-for-language-models/2025-08-03T02:37:56+00:00weekly0.5/2025-08-03T02:37:56+00:00weekly0.5/posts/2025-08-03T02:37:56+00:00weekly0.5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T02:37:56+00:00weekly0.5/posts/useful/2020-10-26T04:47:36+00:00weekly0.5/about/2020-06-16T23:30:17-07:00weekly0.5/categories/weekly0.5/tags/weekly0.5
\ No newline at end of file
diff --git a/tags/index.html b/tags/index.html
index 5d58ecb..cf6b4df 100644
--- a/tags/index.html
+++ b/tags/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[38bbe8c]
\ No newline at end of file
+[73f53ff]
\ No newline at end of file