diff --git a/404.html b/404.html
index cd5b6f1..c7b51f0 100644
--- a/404.html
+++ b/404.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/about/index.html b/about/index.html
index 3f03cc3..276d199 100644
--- a/about/index.html
+++ b/about/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/categories/index.html b/categories/index.html
index 848c70b..eac792e 100644
--- a/categories/index.html
+++ b/categories/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/index.html b/index.html
index 063163c..cb4f664 100644
--- a/index.html
+++ b/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/index.xml b/index.xml
index c199819..1c7d677 100644
--- a/index.xml
+++ b/index.xml
@@ -1,12 +1,4 @@
-Eric X. Liu's Personal Page/Recent content on Eric X. Liu's Personal PageHugoenWed, 20 Aug 2025 06:04:36 +0000Quantization in LLMs/posts/quantization-in-llms/Tue, 19 Aug 2025 00:00:00 +0000/posts/quantization-in-llms/<p>The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.</p>Transformer's Core Mechanics/posts/transformer-s-core-mechanics/Tue, 19 Aug 2025 00:00:00 +0000/posts/transformer-s-core-mechanics/<p>The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of &ldquo;channels&rdquo; to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.</p>
-<h3 id="1-the-channel-a-foundational-view-of-d_model">
- 1. The &ldquo;Channel&rdquo;: A Foundational View of <code>d_model</code>
- <a class="heading-link" href="#1-the-channel-a-foundational-view-of-d_model">
- <i class="fa-solid fa-link" aria-hidden="true" title="Link to heading"></i>
- <span class="sr-only">Link to heading</span>
- </a>
-</h3>
-<p>In deep learning, a &ldquo;channel&rdquo; can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model&rsquo;s primary embedding dimension, commonly referred to as <code>d_model</code>.</p>Breville Barista Pro Maintenance/posts/breville-barista-pro-maintenance/Sat, 16 Aug 2025 00:00:00 +0000/posts/breville-barista-pro-maintenance/<p>Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.</p>
+Eric X. Liu's Personal Page/Recent content on Eric X. Liu's Personal PageHugoenWed, 20 Aug 2025 06:28:39 +0000Quantization in LLMs/posts/quantization-in-llms/Tue, 19 Aug 2025 00:00:00 +0000/posts/quantization-in-llms/<p>The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.</p>Breville Barista Pro Maintenance/posts/breville-barista-pro-maintenance/Sat, 16 Aug 2025 00:00:00 +0000/posts/breville-barista-pro-maintenance/<p>Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.</p>
 <h4 id="understanding-the-two-primary-maintenance-cycles">
 <strong>Understanding the Two Primary Maintenance Cycles</strong>
 <a class="heading-link" href="#understanding-the-two-primary-maintenance-cycles">
@@ -33,6 +25,14 @@
 <p><strong>The Problem:</strong> Many routing mechanisms, especially &ldquo;Top-K routing,&rdquo; involve a discrete, hard selection process. A common function is <code>KeepTopK(v, k)</code>, which selects the top <code>k</code> scoring elements from a vector <code>v</code> and sets others to $-\infty$ or $0$.</p>An Architectural Deep Dive of T5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 01 Jun 2025 00:00:00 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
 <p>But to truly understand the field, we must look at the pivotal models that explored different paths. Google&rsquo;s T5, or <strong>Text-to-Text Transfer Transformer</strong>, stands out as one of the most influential. It didn&rsquo;t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.</p>Mastering Your Breville Barista Pro: The Ultimate Guide to Dialing In Espresso/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/Thu, 01 May 2025 00:00:00 +0000/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/<p>Are you ready to transform your home espresso game from good to genuinely great? The Breville Barista Pro is a fantastic machine, but unlocking its full potential requires understanding a few key principles. This guide will walk you through the systematic process of dialing in your espresso, ensuring every shot is delicious and repeatable.</p>
-<p>Our overarching philosophy is simple: <strong>isolate and change only one variable at a time.</strong> While numbers are crucial, your palate is the ultimate judge. Dose, ratio, and time are interconnected, but your <strong>grind size</strong> is your most powerful lever.</p>Some useful files/posts/useful/Mon, 26 Oct 2020 04:14:43 +0000/posts/useful/<ul>
+<p>Our overarching philosophy is simple: <strong>isolate and change only one variable at a time.</strong> While numbers are crucial, your palate is the ultimate judge. Dose, ratio, and time are interconnected, but your <strong>grind size</strong> is your most powerful lever.</p>Transformer's Core Mechanics/posts/transformer-s-core-mechanics/Tue, 01 Apr 2025 00:00:00 +0000/posts/transformer-s-core-mechanics/<p>The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of &ldquo;channels&rdquo; to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.</p>
+<h3 id="1-the-channel-a-foundational-view-of-d_model">
+ 1. The &ldquo;Channel&rdquo;: A Foundational View of <code>d_model</code>
+ <a class="heading-link" href="#1-the-channel-a-foundational-view-of-d_model">
+ <i class="fa-solid fa-link" aria-hidden="true" title="Link to heading"></i>
+ <span class="sr-only">Link to heading</span>
+ </a>
+</h3>
+<p>In deep learning, a &ldquo;channel&rdquo; can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model&rsquo;s primary embedding dimension, commonly referred to as <code>d_model</code>.</p>Some useful files/posts/useful/Mon, 26 Oct 2020 04:14:43 +0000/posts/useful/<ul>
 <li><a href="/rootCA.crt" >rootCA.pem</a></li>
 </ul>About/about/Fri, 01 Jun 2018 07:13:52 +0000/about/
\ No newline at end of file
diff --git a/posts/breville-barista-pro-maintenance/index.html b/posts/breville-barista-pro-maintenance/index.html
index 611ef69..deeb116 100644
--- a/posts/breville-barista-pro-maintenance/index.html
+++ b/posts/breville-barista-pro-maintenance/index.html
@@ -25,4 +25,4 @@ Understanding the Two Primary Maintenance Cycles Link to heading The Breville Ba
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
index fe403b8..0161a7f 100644
--- a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
+++ b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
@@ -20,4 +20,4 @@ Our overarching philosophy is simple: isolate and change only one variable at a
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
index 641776c..a285d76 100644
--- a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
+++ b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
@@ -18,4 +18,4 @@ The answer lies in creating a universal language—a bridge between the continuo
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/index.html b/posts/index.html
index 39ba520..fb09eda 100644
--- a/posts/index.html
+++ b/posts/index.html
@@ -1,8 +1,7 @@
 Posts · Eric X. Liu's Personal Page
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/index.xml b/posts/index.xml
index 0a5e7ca..568980d 100644
--- a/posts/index.xml
+++ b/posts/index.xml
@@ -1,12 +1,4 @@
-Posts on Eric X. Liu's Personal Page/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenWed, 20 Aug 2025 06:04:36 +0000Quantization in LLMs/posts/quantization-in-llms/Tue, 19 Aug 2025 00:00:00 +0000/posts/quantization-in-llms/<p>The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.</p>Transformer's Core Mechanics/posts/transformer-s-core-mechanics/Tue, 19 Aug 2025 00:00:00 +0000/posts/transformer-s-core-mechanics/<p>The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of &ldquo;channels&rdquo; to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.</p>
-<h3 id="1-the-channel-a-foundational-view-of-d_model">
- 1. The &ldquo;Channel&rdquo;: A Foundational View of <code>d_model</code>
- <a class="heading-link" href="#1-the-channel-a-foundational-view-of-d_model">
- <i class="fa-solid fa-link" aria-hidden="true" title="Link to heading"></i>
- <span class="sr-only">Link to heading</span>
- </a>
-</h3>
-<p>In deep learning, a &ldquo;channel&rdquo; can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model&rsquo;s primary embedding dimension, commonly referred to as <code>d_model</code>.</p>Breville Barista Pro Maintenance/posts/breville-barista-pro-maintenance/Sat, 16 Aug 2025 00:00:00 +0000/posts/breville-barista-pro-maintenance/<p>Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.</p>
+Posts on Eric X. Liu's Personal Page/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenWed, 20 Aug 2025 06:28:39 +0000Quantization in LLMs/posts/quantization-in-llms/Tue, 19 Aug 2025 00:00:00 +0000/posts/quantization-in-llms/<p>The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.</p>Breville Barista Pro Maintenance/posts/breville-barista-pro-maintenance/Sat, 16 Aug 2025 00:00:00 +0000/posts/breville-barista-pro-maintenance/<p>Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.</p>
 <h4 id="understanding-the-two-primary-maintenance-cycles">
 <strong>Understanding the Two Primary Maintenance Cycles</strong>
 <a class="heading-link" href="#understanding-the-two-primary-maintenance-cycles">
@@ -33,6 +25,14 @@
 <p><strong>The Problem:</strong> Many routing mechanisms, especially &ldquo;Top-K routing,&rdquo; involve a discrete, hard selection process. A common function is <code>KeepTopK(v, k)</code>, which selects the top <code>k</code> scoring elements from a vector <code>v</code> and sets others to $-\infty$ or $0$.</p>An Architectural Deep Dive of T5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/Sun, 01 Jun 2025 00:00:00 +0000/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/<p>In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the &ldquo;decoder-only&rdquo; model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.</p>
 <p>But to truly understand the field, we must look at the pivotal models that explored different paths. Google&rsquo;s T5, or <strong>Text-to-Text Transfer Transformer</strong>, stands out as one of the most influential. It didn&rsquo;t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.</p>Mastering Your Breville Barista Pro: The Ultimate Guide to Dialing In Espresso/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/Thu, 01 May 2025 00:00:00 +0000/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/<p>Are you ready to transform your home espresso game from good to genuinely great? The Breville Barista Pro is a fantastic machine, but unlocking its full potential requires understanding a few key principles. This guide will walk you through the systematic process of dialing in your espresso, ensuring every shot is delicious and repeatable.</p>
-<p>Our overarching philosophy is simple: <strong>isolate and change only one variable at a time.</strong> While numbers are crucial, your palate is the ultimate judge. Dose, ratio, and time are interconnected, but your <strong>grind size</strong> is your most powerful lever.</p>Transformer's Core Mechanics/posts/transformer-s-core-mechanics/Tue, 01 Apr 2025 00:00:00 +0000/posts/transformer-s-core-mechanics/<p>The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of &ldquo;channels&rdquo; to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.</p>
+<h3 id="1-the-channel-a-foundational-view-of-d_model">
+ 1. The &ldquo;Channel&rdquo;: A Foundational View of <code>d_model</code>
+ <a class="heading-link" href="#1-the-channel-a-foundational-view-of-d_model">
+ <i class="fa-solid fa-link" aria-hidden="true" title="Link to heading"></i>
+ <span class="sr-only">Link to heading</span>
+ </a>
+</h3>
+<p>In deep learning, a &ldquo;channel&rdquo; can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model&rsquo;s primary embedding dimension, commonly referred to as <code>d_model</code>.</p>Some useful files/posts/useful/Mon, 26 Oct 2020 04:14:43 +0000/posts/useful/<ul>
 <li><a href="/rootCA.crt" >rootCA.pem</a></li>
 </ul>
\ No newline at end of file
diff --git a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
index 469b177..e7554f6 100644
--- a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
+++ b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
@@ -44,4 +44,4 @@ The Top-K routing mechanism, as illustrated in the provided ima
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/page/2/index.html b/posts/page/2/index.html
index 0eb4f1b..3275a75 100644
--- a/posts/page/2/index.html
+++ b/posts/page/2/index.html
@@ -5,4 +5,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/ppo-for-language-models/index.html b/posts/ppo-for-language-models/index.html
index 4f52ea8..8b204e7 100644
--- a/posts/ppo-for-language-models/index.html
+++ b/posts/ppo-for-language-models/index.html
@@ -23,4 +23,4 @@ where δ_t = r_t + γV(s_{t+1}) - V(s_t)

  • γ (gam
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/quantization-in-llms/index.html b/posts/quantization-in-llms/index.html
index f2ffa62..4fadd1a 100644
--- a/posts/quantization-in-llms/index.html
+++ b/posts/quantization-in-llms/index.html
@@ -7,4 +7,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
index 3f68631..a6e87d7 100644
--- a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
+++ b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
@@ -59,4 +59,4 @@ nvidia-smi failed to communicate with the NVIDIA driver modprobe nvidia → “K
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/supabase-deep-dive/index.html b/posts/supabase-deep-dive/index.html
index 7250acf..02f8fe9 100644
--- a/posts/supabase-deep-dive/index.html
+++ b/posts/supabase-deep-dive/index.html
@@ -90,4 +90,4 @@ Supabase enters this space with a radically different philosophy: transparency.
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
index 33a3cb4..318766d 100644
--- a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
+++ b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
@@ -30,4 +30,4 @@ But to truly understand the field, we must look at the pivotal models that explo
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/posts/transformer-s-core-mechanics/index.html b/posts/transformer-s-core-mechanics/index.html
index 772fe43..626fced 100644
--- a/posts/transformer-s-core-mechanics/index.html
+++ b/posts/transformer-s-core-mechanics/index.html
@@ -8,10 +8,10 @@
 In deep learning, a “channel” can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model’s primary embedding dimension, commonly referred to as d_model.">
\ No newline at end of file
diff --git a/posts/useful/index.html b/posts/useful/index.html
index 3e68911..6832f79 100644
--- a/posts/useful/index.html
+++ b/posts/useful/index.html
@@ -9,4 +9,4 @@ One-minute read
 • [16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index ff573bb..6401c0b 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-/2025-08-20T06:04:36+00:00weekly0.5/posts/2025-08-20T06:04:36+00:00weekly0.5/posts/quantization-in-llms/2025-08-20T06:02:35+00:00weekly0.5/posts/transformer-s-core-mechanics/2025-08-20T06:04:36+00:00weekly0.5/posts/breville-barista-pro-maintenance/2025-08-20T06:04:36+00:00weekly0.5/posts/secure-boot-dkms-and-mok-on-proxmox-debian/2025-08-14T06:50:22+00:00weekly0.5/posts/how-rvq-teaches-llms-to-see-and-hear/2025-08-08T17:36:52+00:00weekly0.5/posts/supabase-deep-dive/2025-08-04T03:59:37+00:00weekly0.5/posts/ppo-for-language-models/2025-08-20T06:04:36+00:00weekly0.5/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/2025-08-03T06:02:48+00:00weekly0.5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T03:41:10+00:00weekly0.5/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/2025-08-03T04:20:20+00:00weekly0.5/posts/useful/2025-08-03T08:37:28-07:00weekly0.5/about/2020-06-16T23:30:17-07:00weekly0.5/categories/weekly0.5/tags/weekly0.5
\ No newline at end of file
+/2025-08-20T06:28:39+00:00weekly0.5/posts/2025-08-20T06:28:39+00:00weekly0.5/posts/quantization-in-llms/2025-08-20T06:02:35+00:00weekly0.5/posts/breville-barista-pro-maintenance/2025-08-20T06:04:36+00:00weekly0.5/posts/secure-boot-dkms-and-mok-on-proxmox-debian/2025-08-14T06:50:22+00:00weekly0.5/posts/how-rvq-teaches-llms-to-see-and-hear/2025-08-08T17:36:52+00:00weekly0.5/posts/supabase-deep-dive/2025-08-04T03:59:37+00:00weekly0.5/posts/ppo-for-language-models/2025-08-20T06:04:36+00:00weekly0.5/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/2025-08-03T06:02:48+00:00weekly0.5/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T03:41:10+00:00weekly0.5/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/2025-08-03T04:20:20+00:00weekly0.5/posts/transformer-s-core-mechanics/2025-08-20T06:28:39+00:00weekly0.5/posts/useful/2025-08-03T08:37:28-07:00weekly0.5/about/2020-06-16T23:30:17-07:00weekly0.5/categories/weekly0.5/tags/weekly0.5
\ No newline at end of file
diff --git a/tags/index.html b/tags/index.html
index e6289b7..a54965e 100644
--- a/tags/index.html
+++ b/tags/index.html
@@ -4,4 +4,4 @@
 2016 - 2025 Eric X. Liu
-[16732da]
\ No newline at end of file
+[69c2890]
\ No newline at end of file