eric
2025-10-02 07:23:17 +00:00
parent 811c80144e
commit 4808a62cd0
22 changed files with 32 additions and 32 deletions

@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Eric X. Liu's Personal Page</title><link>/</link><description>Recent content on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 23 Sep 2025 06:20:36 +0000</lastBuildDate><atom:link href="/index.xml" rel="self" type="application/rss+xml"/><item><title>UniFi VLAN Migration to Zone-Based Architecture</title><link>/posts/unifi-vlan-migration-to-zone-based-architecture/</link><pubDate>Mon, 22 Sep 2025 00:00:00 +0000</pubDate><guid>/posts/unifi-vlan-migration-to-zone-based-architecture/</guid><description>&lt;p&gt;Embarking on a network migration to a properly segmented VLAN architecture is a rite of passage for any serious home lab or small business operator. The goal is clear: improve security and organization by separating traffic. However, the path from a flat network to a segmented one is often paved with subtle but critical configuration details that can lead to hours of frustrating troubleshooting.&lt;/p&gt;
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Eric X. Liu's Personal Page</title><link>/</link><description>Recent content on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 02 Oct 2025 07:22:54 +0000</lastBuildDate><atom:link href="/index.xml" rel="self" type="application/rss+xml"/><item><title>UniFi VLAN Migration to Zone-Based Architecture</title><link>/posts/unifi-vlan-migration-to-zone-based-architecture/</link><pubDate>Mon, 22 Sep 2025 00:00:00 +0000</pubDate><guid>/posts/unifi-vlan-migration-to-zone-based-architecture/</guid><description>&lt;p&gt;Embarking on a network migration to a properly segmented VLAN architecture is a rite of passage for any serious home lab or small business operator. The goal is clear: improve security and organization by separating traffic. However, the path from a flat network to a segmented one is often paved with subtle but critical configuration details that can lead to hours of frustrating troubleshooting.&lt;/p&gt;
&lt;p&gt;This article documents that journey. It details the pitfalls encountered, the core networking concepts that were essential to understand, and the best practices that ultimately led to a stable, secure, and logical network design built on a zone-based firewall model.&lt;/p&gt;</description></item><item><title>Quantization in LLMs</title><link>/posts/quantization-in-llms/</link><pubDate>Tue, 19 Aug 2025 00:00:00 +0000</pubDate><guid>/posts/quantization-in-llms/</guid><description>&lt;p&gt;The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.&lt;/p&gt;</description></item><item><title>Breville Barista Pro Maintenance</title><link>/posts/breville-barista-pro-maintenance/</link><pubDate>Sat, 16 Aug 2025 00:00:00 +0000</pubDate><guid>/posts/breville-barista-pro-maintenance/</guid><description>&lt;p&gt;Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.&lt;/p&gt;
&lt;h4 id="understanding-the-two-primary-maintenance-cycles"&gt;
&lt;strong&gt;Understanding the Two Primary Maintenance Cycles&lt;/strong&gt;
@@ -16,7 +16,7 @@
&lt;p&gt;The answer lies in creating a universal language—a bridge between the continuous, messy world of pixels and audio waves and the discrete, structured world of language tokens. One of the most elegant and powerful tools for building this bridge is &lt;strong&gt;Residual Vector Quantization (RVQ)&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Supabase Deep Dive: It's Not Magic, It's Just Postgres</title><link>/posts/supabase-deep-dive/</link><pubDate>Sun, 03 Aug 2025 00:00:00 +0000</pubDate><guid>/posts/supabase-deep-dive/</guid><description>&lt;p&gt;In the world of Backend-as-a-Service (BaaS), platforms are often treated as magic boxes. You push data in, you get data out, and you hope the magic inside scales. While this simplicity is powerful, it can obscure the underlying mechanics, leaving developers wondering what&amp;rsquo;s really going on.&lt;/p&gt;
&lt;p&gt;Supabase enters this space with a radically different philosophy: &lt;strong&gt;transparency&lt;/strong&gt;. It provides the convenience of a BaaS, but it&amp;rsquo;s built on the world&amp;rsquo;s most trusted relational database: PostgreSQL. The &amp;ldquo;magic&amp;rdquo; isn&amp;rsquo;t a proprietary black box; it&amp;rsquo;s a carefully assembled suite of open-source tools that enhance Postgres, not hide it.&lt;/p&gt;</description></item><item><title>A Deep Dive into PPO for Language Models</title><link>/posts/ppo-for-language-models/</link><pubDate>Sat, 02 Aug 2025 00:00:00 +0000</pubDate><guid>/posts/ppo-for-language-models/</guid><description>&lt;p&gt;Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&amp;rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).&lt;/p&gt;
&lt;p&gt;You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.
-&lt;img src="/images/ppo-for-language-models/7713bd3ecf27442e939b9190fa08165d.png" alt="S3 File"&gt;&lt;/p&gt;</description></item><item><title>Mixture-of-Experts (MoE) Models Challenges &amp; Solutions in Practice</title><link>/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</guid><description>&lt;p&gt;Mixture-of-Experts (MoEs) are neural network architectures that allow different parts of the model (called &amp;ldquo;experts&amp;rdquo;) to specialize in different types of inputs. A &amp;ldquo;gating network&amp;rdquo; or &amp;ldquo;router&amp;rdquo; learns to dispatch each input (or &amp;ldquo;token&amp;rdquo;) to a subset of these experts. While powerful for scaling models, MoEs introduce several practical challenges.&lt;/p&gt;
+&lt;img src="http://localhost:4998/attachments/image-3632d923eed983f171fba4341825273101f1fc94.png?client=default&amp;amp;bucket=obsidian" alt="S3 File"&gt;&lt;/p&gt;</description></item><item><title>Mixture-of-Experts (MoE) Models Challenges &amp; Solutions in Practice</title><link>/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</guid><description>&lt;p&gt;Mixture-of-Experts (MoEs) are neural network architectures that allow different parts of the model (called &amp;ldquo;experts&amp;rdquo;) to specialize in different types of inputs. A &amp;ldquo;gating network&amp;rdquo; or &amp;ldquo;router&amp;rdquo; learns to dispatch each input (or &amp;ldquo;token&amp;rdquo;) to a subset of these experts. While powerful for scaling models, MoEs introduce several practical challenges.&lt;/p&gt;
&lt;h3 id="1-challenge-non-differentiability-of-routing-functions"&gt;
1. Challenge: Non-Differentiability of Routing Functions
&lt;a class="heading-link" href="#1-challenge-non-differentiability-of-routing-functions"&gt;