diff --git a/404.html b/404.html
index 7560813..b232c3b 100644
--- a/404.html
+++ b/404.html
@@ -1,7 +1,7 @@
-Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/about/index.html b/about/index.html
index 3ec98a7..9415763 100644
--- a/about/index.html
+++ b/about/index.html
@@ -1,4 +1,4 @@
-About · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/authors/index.html b/authors/index.html
index e61c29c..ca31fea 100644
--- a/authors/index.html
+++ b/authors/index.html
@@ -1,7 +1,7 @@
-Authors · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/categories/index.html b/categories/index.html
index 24f611d..2db886a 100644
--- a/categories/index.html
+++ b/categories/index.html
@@ -1,7 +1,7 @@
-Categories · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/index.html b/index.html
index 446d99c..608517b 100644
--- a/index.html
+++ b/index.html
@@ -1,8 +1,8 @@
-Eric X. Liu's Personal Page
avatar

Eric X. Liu

  • +
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/index.xml b/index.xml
index 8d79e8f..06b7a5b 100644
--- a/index.xml
+++ b/index.xml
@@ -1,9 +1,9 @@
-Eric X. Liu's Personal Pagehttps://ericxliu.me/Recent content on Eric X. Liu's Personal PageHugoenSat, 27 Dec 2025 21:18:10 +0000Abouthttps://ericxliu.me/about/Fri, 19 Dec 2025 22:46:12 -0800https://ericxliu.me/about/<img src="https://ericxliu.me/images/about.jpeg" alt="Eric Liu" width="300" style="float: left; margin-right: 1.5rem; margin-bottom: 1rem; border-radius: 8px;"/>
+Eric X. Liu's Personal Pagehttps://ericxliu.me/Recent content on Eric X. Liu's Personal PageHugoenSat, 27 Dec 2025 22:05:12 +0000From Gemini-3-Flash to T5-Gemma-2 A Journey in Distilling a Family Finance LLMhttps://ericxliu.me/posts/technical-deep-dive-llm-categorization/Sat, 27 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/technical-deep-dive-llm-categorization/<p>Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and &ldquo;wait, was this dinner or <em>vacation</em> dinner?&rdquo; questions.</p>
+<p>For years, I relied on a rule-based system to categorize our credit card transactions. It worked&hellip; mostly. But maintaining <code>if &quot;UBER&quot; in description and amount &gt; 50</code> style rules is a never-ending battle against the entropy of merchant names and changing habits.</p>Abouthttps://ericxliu.me/about/Fri, 19 Dec 2025 22:46:12 -0800https://ericxliu.me/about/<img src="https://ericxliu.me/images/about.jpeg" alt="Eric Liu" width="300" style="float: left; margin-right: 1.5rem; margin-bottom: 1rem; border-radius: 8px;"/>
 <p>Hi, I&rsquo;m <strong>Eric Liu</strong>.</p>
 <p>I am a <strong>Staff Software Engineer and Tech Lead Manager (TLM)</strong> at <strong>Google</strong>, based in Sunnyvale, CA.</p>
 <p>My work focuses on <strong>Infrastructure Performance and Customer Engineering</strong>, specifically for <strong>GPUs and TPUs</strong>. I lead teams that bridge the gap between cutting-edge AI hardware and the latest ML models (like Gemini), ensuring optimal performance and reliability at Google Cloud scale. I thrive in the ambiguous space where hardware constraints meet software ambition—whether it&rsquo;s debugging race conditions across thousands of chips or designing API surfaces for next-gen models.</p>The Convergence of Fast Weights, Linear Attention, and State Space Modelshttps://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/Fri, 19 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/<p>Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (&ldquo;Fast Weights&rdquo;) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).</p>
-<p>This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.</p>From Gemini-3-Flash to T5-Gemma-2 A Journey in Distilling a Family Finance LLMhttps://ericxliu.me/posts/technical-deep-dive-llm-categorization/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/technical-deep-dive-llm-categorization/<p>Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and &ldquo;wait, was this dinner or <em>vacation</em> dinner?&rdquo; questions.</p>
-<p>For years, I relied on a rule-based system to categorize our credit card transactions. It worked&hellip; mostly. But maintaining <code>if &quot;UBER&quot; in description and amount &gt; 50</code> style rules is a never-ending battle against the entropy of merchant names and changing habits.</p>vAttentionhttps://ericxliu.me/posts/vattention/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/vattention/<p>Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While <strong>PagedAttention</strong> (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.</p>
+<p>This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.</p>vAttentionhttps://ericxliu.me/posts/vattention/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/vattention/<p>Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While <strong>PagedAttention</strong> (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.</p>
 <h4 id="the-status-quo-pagedattention-and-software-tables">
 The Status Quo: PagedAttention and Software Tables
 <a class="heading-link" href="#the-status-quo-pagedattention-and-software-tables">
diff --git a/posts/benchmarking-llms-on-jetson-orin-nano/index.html b/posts/benchmarking-llms-on-jetson-orin-nano/index.html
index 3468e93..c89eec7 100644
--- a/posts/benchmarking-llms-on-jetson-orin-nano/index.html
+++ b/posts/benchmarking-llms-on-jetson-orin-nano/index.html
@@ -1,4 +1,4 @@
-Why Your Jetson Orin Nano's 40 TOPS Goes Unused (And What That Means for Edge AI) · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/breville-barista-pro-maintenance/index.html b/posts/breville-barista-pro-maintenance/index.html
index e97bc94..fdb9d47 100644
--- a/posts/breville-barista-pro-maintenance/index.html
+++ b/posts/breville-barista-pro-maintenance/index.html
@@ -1,4 +1,4 @@
-Breville Barista Pro Maintenance · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
index e0a54c9..74fdc36 100644
--- a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
+++ b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html
@@ -1,4 +1,4 @@
-Mastering Your Breville Barista Pro: The Ultimate Guide to Dialing In Espresso · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html b/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html
index d3cc223..3fe75b4 100644
--- a/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html
+++ b/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html
@@ -1,4 +1,4 @@
-Flashing Jetson Orin Nano in Virtualized Environments · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
index 1008eae..e495a1f 100644
--- a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
+++ b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html
@@ -1,4 +1,4 @@
-Beyond Words: How RVQ Teaches LLMs to See and Hear · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/index.html b/posts/index.html
index 40bd9c6..98bf375 100644
--- a/posts/index.html
+++ b/posts/index.html
@@ -1,8 +1,8 @@
-Posts · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/index.xml b/posts/index.xml
index f4b9681..1073892 100644
--- a/posts/index.xml
+++ b/posts/index.xml
@@ -1,6 +1,6 @@
-Posts on Eric X. Liu's Personal Pagehttps://ericxliu.me/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenSat, 27 Dec 2025 21:18:10 +0000The Convergence of Fast Weights, Linear Attention, and State Space Modelshttps://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/Fri, 19 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/<p>Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (&ldquo;Fast Weights&rdquo;) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).</p>
-<p>This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.</p>From Gemini-3-Flash to T5-Gemma-2 A Journey in Distilling a Family Finance LLMhttps://ericxliu.me/posts/technical-deep-dive-llm-categorization/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/technical-deep-dive-llm-categorization/<p>Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and &ldquo;wait, was this dinner or <em>vacation</em> dinner?&rdquo; questions.</p>
-<p>For years, I relied on a rule-based system to categorize our credit card transactions. It worked&hellip; mostly. But maintaining <code>if &quot;UBER&quot; in description and amount &gt; 50</code> style rules is a never-ending battle against the entropy of merchant names and changing habits.</p>vAttentionhttps://ericxliu.me/posts/vattention/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/vattention/<p>Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While <strong>PagedAttention</strong> (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.</p>
+Posts on Eric X. Liu's Personal Pagehttps://ericxliu.me/posts/Recent content in Posts on Eric X. Liu's Personal PageHugoenSat, 27 Dec 2025 22:05:12 +0000From Gemini-3-Flash to T5-Gemma-2 A Journey in Distilling a Family Finance LLMhttps://ericxliu.me/posts/technical-deep-dive-llm-categorization/Sat, 27 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/technical-deep-dive-llm-categorization/<p>Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and &ldquo;wait, was this dinner or <em>vacation</em> dinner?&rdquo; questions.</p>
+<p>For years, I relied on a rule-based system to categorize our credit card transactions. It worked&hellip; mostly. But maintaining <code>if &quot;UBER&quot; in description and amount &gt; 50</code> style rules is a never-ending battle against the entropy of merchant names and changing habits.</p>The Convergence of Fast Weights, Linear Attention, and State Space Modelshttps://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/Fri, 19 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/<p>Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (&ldquo;Fast Weights&rdquo;) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).</p>
+<p>This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.</p>vAttentionhttps://ericxliu.me/posts/vattention/Mon, 08 Dec 2025 00:00:00 +0000https://ericxliu.me/posts/vattention/<p>Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While <strong>PagedAttention</strong> (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.</p>
 <h4 id="the-status-quo-pagedattention-and-software-tables">
 The Status Quo: PagedAttention and Software Tables
 <a class="heading-link" href="#the-status-quo-pagedattention-and-software-tables">
diff --git a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
index 35ec354..7742fb8 100644
--- a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
+++ b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html
@@ -1,4 +1,4 @@
-Mixture-of-Experts (MoE) Models Challenges & Solutions in Practice · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html b/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html
index 950bb4d..bede2c4 100644
--- a/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html
+++ b/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html
@@ -1,4 +1,4 @@
-OpenWrt: Fix WireGuard Connectivity with MWAN3 by Excluding the VPN Endpoint · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/page/2/index.html b/posts/page/2/index.html
index 2787eef..321bab5 100644
--- a/posts/page/2/index.html
+++ b/posts/page/2/index.html
@@ -1,4 +1,4 @@
-Posts · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/posts/ppo-for-language-models/index.html b/posts/ppo-for-language-models/index.html
index 770d873..bfaa7de 100644
--- a/posts/ppo-for-language-models/index.html
+++ b/posts/ppo-for-language-models/index.html
@@ -1,4 +1,4 @@
-A Deep Dive into PPO for Language Models · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/quantization-in-llms/index.html b/posts/quantization-in-llms/index.html
index 34450e0..008bd11 100644
--- a/posts/quantization-in-llms/index.html
+++ b/posts/quantization-in-llms/index.html
@@ -1,4 +1,4 @@
-Quantization in LLMs · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
index ab98506..0f029f2 100644
--- a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
+++ b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html
@@ -1,4 +1,4 @@
-Fixing GPU Operator Pods Stuck in Init: Secure Boot, DKMS, and MOK on Proxmox + Debian · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/supabase-deep-dive/index.html b/posts/supabase-deep-dive/index.html
index 611b20b..fb34026 100644
--- a/posts/supabase-deep-dive/index.html
+++ b/posts/supabase-deep-dive/index.html
@@ -1,4 +1,4 @@
-Supabase Deep Dive: It's Not Magic, It's Just Postgres · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
index d4f408d..11e83a6 100644
--- a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
+++ b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html
@@ -1,4 +1,4 @@
-An Architectural Deep Dive of T5 · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/technical-deep-dive-llm-categorization/index.html b/posts/technical-deep-dive-llm-categorization/index.html
index 72ff7b8..8255f3e 100644
--- a/posts/technical-deep-dive-llm-categorization/index.html
+++ b/posts/technical-deep-dive-llm-categorization/index.html
@@ -1,10 +1,10 @@
-From Gemini-3-Flash to T5-Gemma-2 A Journey in Distilling a Family Finance LLM · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html b/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html
index 1bc4ad3..278a501 100644
--- a/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html
+++ b/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html
@@ -1,4 +1,4 @@
-The Convergence of Fast Weights, Linear Attention, and State Space Models · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/transformer-s-core-mechanics/index.html b/posts/transformer-s-core-mechanics/index.html
index cbebc79..ea96fa7 100644
--- a/posts/transformer-s-core-mechanics/index.html
+++ b/posts/transformer-s-core-mechanics/index.html
@@ -1,4 +1,4 @@
-Transformer's Core Mechanics · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/unifi-vlan-migration-to-zone-based-architecture/index.html b/posts/unifi-vlan-migration-to-zone-based-architecture/index.html
index 5aee0b4..cd18fa5 100644
--- a/posts/unifi-vlan-migration-to-zone-based-architecture/index.html
+++ b/posts/unifi-vlan-migration-to-zone-based-architecture/index.html
@@ -1,4 +1,4 @@
-UniFi VLAN Migration to Zone-Based Architecture · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/useful/index.html b/posts/useful/index.html
index c1815ce..c971bf5 100644
--- a/posts/useful/index.html
+++ b/posts/useful/index.html
@@ -1,4 +1,4 @@
-Some useful files · Eric X. Liu's Personal Page
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/posts/vattention/index.html b/posts/vattention/index.html
index e66611b..1593d92 100644
--- a/posts/vattention/index.html
+++ b/posts/vattention/index.html
@@ -1,4 +1,4 @@
-vAttention · Eric X. Liu's Personal Page[cd4cace]
\ No newline at end of file
+[9ffc2bb]
\ No newline at end of file
diff --git a/series/index.html b/series/index.html
index 440892e..f4460ef 100644
--- a/series/index.html
+++ b/series/index.html
@@ -1,7 +1,7 @@
-Series · Eric X. Liu's Personal Page
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 94f6324..22a4c48 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-https://ericxliu.me/about/2025-12-20T09:52:07-08:00weekly0.5https://ericxliu.me/2025-12-27T21:18:10+00:00weekly0.5https://ericxliu.me/posts/2025-12-27T21:18:10+00:00weekly0.5https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/2025-12-19T21:21:55+00:00weekly0.5https://ericxliu.me/posts/technical-deep-dive-llm-categorization/2025-12-27T21:18:10+00:00weekly0.5https://ericxliu.me/posts/vattention/2025-12-19T21:21:55+00:00weekly0.5https://ericxliu.me/posts/benchmarking-llms-on-jetson-orin-nano/2025-10-04T20:41:50+00:00weekly0.5https://ericxliu.me/posts/flashing-jetson-orin-nano-in-virtualized-environments/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/openwrt-mwan3-wireguard-endpoint-exclusion/2025-10-02T08:34:05+00:00weekly0.5https://ericxliu.me/posts/unifi-vlan-migration-to-zone-based-architecture/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/quantization-in-llms/2025-08-20T06:02:35+00:00weekly0.5https://ericxliu.me/posts/breville-barista-pro-maintenance/2025-08-20T06:04:36+00:00weekly0.5https://ericxliu.me/posts/secure-boot-dkms-and-mok-on-proxmox-debian/2025-08-14T06:50:22+00:00weekly0.5https://ericxliu.me/posts/how-rvq-teaches-llms-to-see-and-hear/2025-08-08T17:36:52+00:00weekly0.5https://ericxliu.me/posts/supabase-deep-dive/2025-08-04T03:59:37+00:00weekly0.5https://ericxliu.me/posts/ppo-for-language-models/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/2025-08-03T06:02:48+00:00weekly0.5https://ericxliu.me/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T03:41:10+00:00weekly0.5https://ericxliu.me/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/2025-08-03T04:20:20+00:00weekly0.5https://ericxliu.me/posts/transformer-s-core-mechanics/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/useful/2025-08-03T08:37:28-07:00weekly0.5https://ericxliu.me/authors/weekly0.5https://ericxliu.me/categories/weekly0.5https://ericxliu.me/series/weekly0.5https://ericxliu.me/tags/weekly0.5
\ No newline at end of file
+https://ericxliu.me/2025-12-27T22:05:12+00:00weekly0.5https://ericxliu.me/posts/technical-deep-dive-llm-categorization/2025-12-27T22:05:12+00:00weekly0.5https://ericxliu.me/posts/2025-12-27T22:05:12+00:00weekly0.5https://ericxliu.me/about/2025-12-20T09:52:07-08:00weekly0.5https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/2025-12-19T21:21:55+00:00weekly0.5https://ericxliu.me/posts/vattention/2025-12-19T21:21:55+00:00weekly0.5https://ericxliu.me/posts/benchmarking-llms-on-jetson-orin-nano/2025-10-04T20:41:50+00:00weekly0.5https://ericxliu.me/posts/flashing-jetson-orin-nano-in-virtualized-environments/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/openwrt-mwan3-wireguard-endpoint-exclusion/2025-10-02T08:34:05+00:00weekly0.5https://ericxliu.me/posts/unifi-vlan-migration-to-zone-based-architecture/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/quantization-in-llms/2025-08-20T06:02:35+00:00weekly0.5https://ericxliu.me/posts/breville-barista-pro-maintenance/2025-08-20T06:04:36+00:00weekly0.5https://ericxliu.me/posts/secure-boot-dkms-and-mok-on-proxmox-debian/2025-08-14T06:50:22+00:00weekly0.5https://ericxliu.me/posts/how-rvq-teaches-llms-to-see-and-hear/2025-08-08T17:36:52+00:00weekly0.5https://ericxliu.me/posts/supabase-deep-dive/2025-08-04T03:59:37+00:00weekly0.5https://ericxliu.me/posts/ppo-for-language-models/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/2025-08-03T06:02:48+00:00weekly0.5https://ericxliu.me/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/2025-08-03T03:41:10+00:00weekly0.5https://ericxliu.me/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/2025-08-03T04:20:20+00:00weekly0.5https://ericxliu.me/posts/transformer-s-core-mechanics/2025-10-02T08:42:39+00:00weekly0.5https://ericxliu.me/posts/useful/2025-08-03T08:37:28-07:00weekly0.5https://ericxliu.me/authors/weekly0.5https://ericxliu.me/categories/weekly0.5https://ericxliu.me/series/weekly0.5https://ericxliu.me/tags/weekly0.5
\ No newline at end of file
diff --git a/tags/index.html b/tags/index.html
index 1cf3308..9327f0b 100644
--- a/tags/index.html
+++ b/tags/index.html
@@ -1,7 +1,7 @@
-Tags · Eric X. Liu's Personal Page
\ No newline at end of file