From 2d7d143cbfa7deb0fb671c1d3031bfbb2d7986ab Mon Sep 17 00:00:00 2001 From: eric Date: Sat, 20 Dec 2025 17:22:08 +0000 Subject: [PATCH] deploy: 2bb856b1e7c4243d32c232f0ae0d5e6f5ff5c68d --- 404.html | 4 ++-- about/index.html | 4 ++-- categories/index.html | 4 ++-- index.html | 4 ++-- posts/benchmarking-llms-on-jetson-orin-nano/index.html | 4 ++-- posts/breville-barista-pro-maintenance/index.html | 4 ++-- .../index.html | 4 ++-- .../index.html | 4 ++-- posts/how-rvq-teaches-llms-to-see-and-hear/index.html | 4 ++-- posts/index.html | 4 ++-- .../index.html | 4 ++-- posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html | 4 ++-- posts/page/2/index.html | 4 ++-- posts/ppo-for-language-models/index.html | 4 ++-- posts/quantization-in-llms/index.html | 4 ++-- posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html | 4 ++-- posts/supabase-deep-dive/index.html | 4 ++-- .../index.html | 4 ++-- .../index.html | 4 ++-- posts/transformer-s-core-mechanics/index.html | 4 ++-- .../index.html | 4 ++-- posts/useful/index.html | 4 ++-- posts/vattention/index.html | 4 ++-- tags/index.html | 4 ++-- 24 files changed, 48 insertions(+), 48 deletions(-) diff --git a/404.html b/404.html index 6470b78..956476c 100644 --- a/404.html +++ b/404.html @@ -1,7 +1,7 @@ Eric X. Liu's Personal Page

404

Page Not Found

Sorry, this page does not exist.
You can head back to the homepage.

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/about/index.html b/about/index.html index fffd3fd..d5163a6 100644 --- a/about/index.html +++ b/about/index.html @@ -9,10 +9,10 @@ I am a Staff Software Engineer and Tech Lead Manager (TLM) at Google, based in S My work focuses on Platforms Performance and Customer Engineering, specifically for GPUs and TPUs. I lead teams that bridge the gap between cutting-edge AI hardware and the latest ML models (like Gemini), ensuring optimal performance and reliability at Google Cloud scale. Beyond the code, I maintain this “digital garden” where I document my projects and learnings. It serves as my second brain, capturing everything from technical deep dives to random musings.">

About

Hi, I’m Eric Liu.

I am a Staff Software Engineer and Tech Lead Manager (TLM) at Google, based in Sunnyvale, CA.

My work focuses on Platforms Performance and Customer Engineering, specifically for GPUs and TPUs. I lead teams that bridge the gap between cutting-edge AI hardware and the latest ML models (like Gemini), ensuring optimal performance and reliability at Google Cloud scale.

Beyond the code, I maintain this “digital garden” where I document my projects and learnings. It serves as my second brain, capturing everything from technical deep dives to random musings.

Personal Interests +

About

Hi, I’m Eric Liu.

I am a Staff Software Engineer and Tech Lead Manager (TLM) at Google, based in Sunnyvale, CA.

My work focuses on Platforms Performance and Customer Engineering, specifically for GPUs and TPUs. I lead teams that bridge the gap between cutting-edge AI hardware and the latest ML models (like Gemini), ensuring optimal performance and reliability at Google Cloud scale.

Beyond the code, I maintain this “digital garden” where I document my projects and learnings. It serves as my second brain, capturing everything from technical deep dives to random musings.

Personal Interests Link to heading

I’m a tinkerer at heart, whether digital or physical:

  • Homelab: Kubernetes, Proxmox, and self-hosted services. I love over-engineering my home network.
  • DIY & Jeep: Maintaining and modifying my Jeep, and general DIY projects.
  • Cooking: Experimenting with new recipes and techniques.

Welcome to my corner of the internet.

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/categories/index.html b/categories/index.html index ecc310d..f0f7bfa 100644 --- a/categories/index.html +++ b/categories/index.html @@ -1,7 +1,7 @@ Categories · Eric X. Liu's Personal Page
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/index.html b/index.html index 9c8142f..54bd1ec 100644 --- a/index.html +++ b/index.html @@ -1,7 +1,7 @@ Eric X. Liu's Personal Page
avatar

Eric X. Liu

Software & Performance Engineer @Google

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/benchmarking-llms-on-jetson-orin-nano/index.html b/posts/benchmarking-llms-on-jetson-orin-nano/index.html index c48ac1a..447bec6 100644 --- a/posts/benchmarking-llms-on-jetson-orin-nano/index.html +++ b/posts/benchmarking-llms-on-jetson-orin-nano/index.html @@ -10,7 +10,7 @@ After running 66 inference tests across seven different language models ranging After running 66 inference tests across seven different language models ranging from 0.5B to 5.4B parameters, I discovered something counterintuitive: the device’s computational muscle sits largely idle during single-stream LLM inference. The bottleneck isn’t computation—it’s memory bandwidth. This isn’t just a quirk of one device; it’s a fundamental characteristic of single-user, autoregressive token generation on edge hardware—a reality that shapes how we should approach local LLM deployment.">
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/breville-barista-pro-maintenance/index.html b/posts/breville-barista-pro-maintenance/index.html index cbdbc84..f312c3b 100644 --- a/posts/breville-barista-pro-maintenance/index.html +++ b/posts/breville-barista-pro-maintenance/index.html @@ -10,7 +10,7 @@ The Breville Barista Pro has two distinct, automated maintenance procedures: the Understanding the Two Primary Maintenance Cycles Link to heading The Breville Barista Pro has two distinct, automated maintenance procedures: the Cleaning (Flush) Cycle and the Descale Cycle. It is important to understand that these are not interchangeable, as they address different types of buildup within the machine.">

Breville Barista Pro Maintenance

Proper maintenance is critical for the longevity and performance of a Breville Barista Pro espresso machine. Consistent cleaning not only ensures the machine functions correctly but also directly impacts the quality of the espresso produced. This guide provides a detailed, technical breakdown of the essential maintenance routines, from automated cycles to daily upkeep.

Understanding the Two Primary Maintenance Cycles @@ -25,4 +25,4 @@ Understanding the Two Primary Maintenance Cycles Link to heading The Breville Ba 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html index ece69ec..b373215 100644 --- a/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html +++ b/posts/espresso-theory-application-a-guide-for-the-breville-barista-pro/index.html @@ -3,7 +3,7 @@ Our overarching philosophy is simple: isolate and change only one variable at a Our overarching philosophy is simple: isolate and change only one variable at a time. While numbers are crucial, your palate is the ultimate judge. Dose, ratio, and time are interconnected, but your grind size is your most powerful lever.">

Mastering Your Breville Barista Pro: The Ultimate Guide to Dialing In Espresso

Are you ready to transform your home espresso game from good to genuinely great? The Breville Barista Pro is a fantastic machine, but unlocking its full potential requires understanding a few key principles. This guide will walk you through the systematic process of dialing in your espresso, ensuring every shot is delicious and repeatable.

Our overarching philosophy is simple: isolate and change only one variable at a time. While numbers are crucial, your palate is the ultimate judge. Dose, ratio, and time are interconnected, but your grind size is your most powerful lever.

Let’s dive in!


Part 1: The Foundation — Dose (The Weight of Dry Coffee) @@ -20,4 +20,4 @@ Our overarching philosophy is simple: isolate and change only one variable at a 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html b/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html index 6d507e2..857c589 100644 --- a/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html +++ b/posts/flashing-jetson-orin-nano-in-virtualized-environments/index.html @@ -14,7 +14,7 @@ Flashing NVIDIA Jetson devices remotely presents unique challenges when the host machine is virtualized. This article documents the technical challenges, failures, and eventual success of flashing a Jetson Orin Nano Super developer kit using NVIDIA SDK Manager in various virtualized environments, specifically focusing on QEMU/KVM virtual machines and LXC containers on Proxmox VE.">
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html index 89047ff..3fca861 100644 --- a/posts/how-rvq-teaches-llms-to-see-and-hear/index.html +++ b/posts/how-rvq-teaches-llms-to-see-and-hear/index.html @@ -3,7 +3,7 @@ The answer lies in creating a universal language—a bridge between the continuo The answer lies in creating a universal language—a bridge between the continuous, messy world of pixels and audio waves and the discrete, structured world of language tokens. One of the most elegant and powerful tools for building this bridge is Residual Vector Quantization (RVQ).">

Beyond Words: How RVQ Teaches LLMs to See and Hear

Large Language Models (LLMs) are masters of text, but the world is not made of text alone. It’s a symphony of sights, sounds, and experiences. The ultimate goal for AI is to understand this rich, multi-modal world as we do. But how do you teach a model that thinks in words to understand a picture of a sunset or the melody of a song?

The answer lies in creating a universal language—a bridge between the continuous, messy world of pixels and audio waves and the discrete, structured world of language tokens. One of the most elegant and powerful tools for building this bridge is Residual Vector Quantization (RVQ).

This article dives deep into RVQ, exploring how it turns raw data into meaningful semantic IDs and how these IDs, in turn, unlock multi-modal understanding in LLMs.
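
As a rough sketch of the core mechanism (purely illustrative: the codebooks below are assumed to be already trained, whereas real systems learn them jointly with VQ-VAE-style objectives), RVQ quantizes a vector in stages, with each stage encoding whatever residual the previous stage left behind:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Quantize successive residuals; each stage picks its nearest codeword."""
    residual = np.asarray(x, dtype=np.float32).copy()
    ids = []
    for cb in codebooks:                      # cb: (K, D) array of codewords
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        ids.append(idx)                       # the growing "semantic ID" tuple
        residual -= cb[idx]                   # pass the leftover error to the next stage
    return ids

def rvq_decode(ids, codebooks):
    """Reconstruct by summing the selected codeword from every stage."""
    return sum(cb[i] for i, cb in zip(ids, codebooks))
```

Each additional stage refines the approximation, which is why a short tuple of small codebook indices can stand in for a high-dimensional embedding.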

What is Residual Vector Quantization? The Art of Smart Compression @@ -18,4 +18,4 @@ The answer lies in creating a universal language—a bridge between the continuo 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/index.html b/posts/index.html index 6b45aa9..1b788e9 100644 --- a/posts/index.html +++ b/posts/index.html @@ -1,6 +1,6 @@ Posts · Eric X. Liu's Personal Page
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html index dbf953e..e354832 100644 --- a/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html +++ b/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/index.html @@ -11,7 +11,7 @@ Many routing mechanisms, especially “Top-K routing,” involve a discr 1. Challenge: Non-Differentiability of Routing Functions Link to heading The Problem: Many routing mechanisms, especially “Top-K routing,” involve a discrete, hard selection process. A common function is KeepTopK(v, k), which selects the top k scoring elements from a vector v and sets others to $-\infty$ or $0$.">

Mixture-of-Experts (MoE) Models Challenges & Solutions in Practice

Mixture-of-Experts (MoE) models are neural network architectures that allow different parts of the model (called “experts”) to specialize in different types of inputs. A “gating network” or “router” learns to dispatch each input (or “token”) to a subset of these experts. While powerful for scaling models, MoEs introduce several practical challenges.
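
For intuition, the KeepTopK routing step described in this post can be sketched in a few lines (a hedged illustration over hypothetical per-expert scores, not any particular framework's router implementation):

```python
import numpy as np

def keep_top_k(scores, k):
    """Keep the k largest router scores; mask the rest to -inf before the softmax."""
    masked = np.full(scores.shape, -np.inf)
    top = np.argpartition(scores, -k)[-k:]
    masked[top] = scores[top]
    return masked

def router_gates(scores, k=2):
    logits = keep_top_k(np.asarray(scores, dtype=np.float64), k)
    gates = np.exp(logits - logits.max())     # masked experts contribute exactly 0
    return gates / gates.sum()
```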

1. Challenge: Non-Differentiability of Routing Functions @@ -44,4 +44,4 @@ The Top-K routing mechanism, as illustrated in the provided ima 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html b/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html index 257407d..5ba6930 100644 --- a/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html +++ b/posts/openwrt-mwan3-wireguard-endpoint-exclusion/index.html @@ -7,7 +7,7 @@ When using WireGuard together with MWAN3 on OpenWrt, the tunnel can fail to establish or flap when the peer’s IP is routed into the tunnel itself. This is a classic routing bootstrap problem: WireGuard wants to route 0.0.0.0/0 into the tunnel, but the UDP packets to the peer’s public endpoint also get captured, so they never reach the Internet to bring the tunnel up.">
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/page/2/index.html b/posts/page/2/index.html index 6ba2c20..4178e95 100644 --- a/posts/page/2/index.html +++ b/posts/page/2/index.html @@ -1,6 +1,6 @@ Posts · Eric X. Liu's Personal Page
\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/ppo-for-language-models/index.html b/posts/ppo-for-language-models/index.html index fd3db49..00df420 100644 --- a/posts/ppo-for-language-models/index.html +++ b/posts/ppo-for-language-models/index.html @@ -4,7 +4,7 @@ You may have seen diagrams like the one below, which outlines the RLHF training You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.">

A Deep Dive into PPO for Language Models

Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don’t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).

You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows. @@ -25,4 +25,4 @@ where δ_t = r_t + γV(s_{t+1}) - V(s_t)
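
That δ_t term drives the Generalized Advantage Estimation recursion used in the PPO stage. Here is a minimal sketch, assuming a single finished trajectory with V treated as 0 beyond the last step and illustrative γ, λ values:

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Backward pass over delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    advantages = [0.0] * len(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        next_value = values[t + 1] if t + 1 < len(values) else 0.0
        delta = rewards[t] + gamma * next_value - values[t]
        gae = delta + gamma * lam * gae       # discounted sum of future deltas
        advantages[t] = gae
    return advantages
```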

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/quantization-in-llms/index.html b/posts/quantization-in-llms/index.html index 393be62..4c95867 100644 --- a/posts/quantization-in-llms/index.html +++ b/posts/quantization-in-llms/index.html @@ -1,10 +1,10 @@ Quantization in LLMs · Eric X. Liu's Personal Page

Quantization in LLMs

The burgeoning scale of Large Language Models (LLMs) has necessitated a paradigm shift in their deployment, moving beyond full-precision floating-point arithmetic towards lower-precision representations. Quantization, the process of mapping a wide range of continuous values to a smaller, discrete set, has emerged as a critical technique to reduce model size, accelerate inference, and lower energy consumption. This article provides a technical overview of quantization theories, their application in modern LLMs, and highlights the ongoing innovations in this domain.

The Fundamentals of Quantization

At its core, quantization seeks to represent model weights and activations using fewer bits. Three primary approaches form the theoretical foundation:

  1. K-Means-based Quantization (Non-uniform): This method clusters floating-point weights into a predefined number of groups, and each weight is then replaced by the centroid of its assigned cluster. This compresses storage well, since only a small “codebook” of centroids and the integer indices need to be stored, but its direct computational benefit during inference is limited unless specialized lookup-table hardware is employed.

  2. Linear (Affine) Quantization: The most prevalent form, linear quantization maps a floating-point range to a fixed integer range using a simple linear transformation: r = S * (q - Z). Here, r is the real value, q is the quantized integer, S is the scale factor, and Z is the zero-point (offset). This approach directly enables integer arithmetic, which is significantly faster and more energy-efficient on modern hardware.

  3. Binary and Ternary Quantization (Extreme Low-Bit): These push quantization to its limits by constraining weights and/or activations to only two (e.g., +1, -1) or three (e.g., +1, 0, -1) values. While offering maximal compression and enabling bitwise operations instead of multiplications, they often incur substantial accuracy degradation for complex LLMs. For instance, BinaryConnect enabled training deep neural networks with binary weights, showing near state-of-the-art results on image classification tasks. XNOR-Net further extended this by binarizing both weights and inputs, achieving significant speedups and memory savings. Ternary Weight Networks (TWNs) and Trained Ternary Quantization (TTQ) improve upon binary methods by introducing a zero value or learnable scaling factors, respectively, mitigating some accuracy loss.
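
To make the linear (affine) mapping from item 2 concrete, here is a minimal NumPy sketch of asymmetric 8-bit quantization and dequantization (the bit-width, min/max calibration, and rounding choices are illustrative assumptions rather than any library's API):

```python
import numpy as np

def affine_quantize(r, num_bits=8):
    """Map a float tensor r to integers q such that r is approximately S * (q - Z)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    r_min, r_max = float(r.min()), float(r.max())
    S = max(r_max - r_min, 1e-8) / (qmax - qmin)        # scale
    Z = int(round(qmin - r_min / S))                    # zero-point
    q = np.clip(np.round(r / S) + Z, qmin, qmax).astype(np.uint8)
    return q, S, Z

def affine_dequantize(q, S, Z):
    return S * (q.astype(np.float32) - Z)

w = np.random.randn(4, 4).astype(np.float32)
q, S, Z = affine_quantize(w)
print(np.abs(w - affine_dequantize(q, S, Z)).max())     # worst-case rounding error
```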

Quantization Strategies: Bridging Accuracy and Efficiency

The practical application of quantization involves distinct strategies:

  1. Post-Training Quantization (PTQ): This approach applies quantization to an already trained, full-precision model without any further training or fine-tuning.

    • Quantization Granularity: The precision of quantization can vary across a model.
      • Per-Tensor Quantization applies a single scale and zero-point to an entire tensor.
      • Per-Channel Quantization assigns unique scale and zero-point parameters to each output channel of a layer, crucial for handling diverse value distributions.
      • Group Quantization provides an intermediate granularity, where scales and zero-points are applied to smaller groups of weights within a channel or layer. This balances fine-grained control with hardware efficiency.
    • Dynamic Range Clipping (Calibration): A critical aspect of PTQ is determining the optimal range (r_min, r_max) for quantization, especially for activations, which often exhibit outliers. Methods include:
      • Min-Max: Simply using the observed minimum and maximum values.
      • Exponential Moving Averages (EMA): Tracking ranges using a smoothed average during a calibration run.
      • Kullback-Leibler (KL) Divergence Minimization: Selecting clipping thresholds that minimize the information loss between the original and quantized distributions.
      • Mean Square Error (MSE) Minimization: Optimizing scale and zero-point parameters to minimize the reconstruction error. Adaptive rounding techniques, such as AdaRound, further refine this by optimizing rounding decisions for individual weights.
  2. Quantization-Aware Training (QAT): This method integrates the quantization process directly into the training or fine-tuning loop. By simulating the effects of low-precision arithmetic during training, the model learns to be robust to quantization noise. The Straight-Through Estimator (STE) is commonly used to approximate gradients for the non-differentiable quantization operations, enabling backpropagation. QAT generally yields higher accuracy than PTQ, particularly for aggressive low-bit quantization.
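
A minimal PyTorch-style sketch of how QAT simulates quantization in the forward pass while the Straight-Through Estimator lets gradients bypass the non-differentiable rounding (scale and zero-point handling is deliberately simplified here):

```python
import torch

def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Forward pass: quantize-dequantize so the network trains against INT8-like noise.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    x_dq = (q - zero_point) * scale
    # Straight-Through Estimator: gradients flow through as if this op were the identity.
    return x + (x_dq - x).detach()
```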

Emerging Techniques for Modern LLMs

The scale and complexity of LLMs necessitate advanced quantization strategies:

  1. One-Shot Post-Training Quantization (e.g., GPTQ, AWQ): These techniques aim to achieve near-QAT accuracy with PTQ’s convenience, requiring only a small, unlabelled calibration dataset and no full retraining. GPTQ quantizes weights layer-by-layer by minimizing output MSE, leveraging Hessian-aware information. AWQ identifies and scales “important” weights based on activation magnitudes before quantization. These methods have been instrumental in enabling 4-bit LLM inference on consumer-grade hardware.

  2. Sparsity-Quantization Hybrid (e.g., SpQR): These approaches combine sparsity with quantization to achieve even greater compression. SpQR isolates a small fraction of hard-to-quantize outlier weights in a sparse, higher-precision format and quantizes the remaining weights to very low bit-widths.

  3. Quantization for Efficient Fine-tuning (e.g., QLoRA): QLoRA quantizes the base LLM weights (e.g., to 4-bit) and freezes them, then fine-tunes only small, low-rank adapter modules in full precision. This drastically reduces the memory requirements for fine-tuning large models on limited hardware.

  4. Hardware-Optimized Quantization Formats: Beyond bit-width, specialized floating-point formats and efficient kernels are being developed. MXFP4 (Microscaling FP4), NVIDIA’s FP8 (E4M3/E5M2), and GGUF’s K-quants are examples of block-wise floating-point formats and hierarchical quantization schemes optimized for high performance on modern accelerators like NVIDIA’s Blackwell GPUs. These formats offer superior dynamic range compared to fixed-point integers at very low bit-widths.

Multi-Level Scaling in Group Quantization: A Deeper Dive

Modern group quantization approaches often employ multi-level scaling to achieve an optimal balance between precision and compression. Consider a generalized formula for reconstructing a real value r from a quantized value q:

r = (q - z) * s_l0 * s_l1 * ...

where z is the zero-point (often 0 for symmetric quantization), and s_l0, s_l1 are scale factors at different hierarchical levels. The “Effective Bit Width” reflects the average number of bits per weight after accounting for both the quantized value and its associated scales.

Let’s dissect a representative table of such schemes:

Quantization Approach | Data Type (q) | L0 Group Size | L0 Scale Data Type | L1 Group Size | L1 Scale Data Type | Effective Bit Width
Per-Channel Quant     | INT4          | Per Channel   | FP16               | -             | -                  | 4
VSQ                   | INT4          | 16            | UINT4              | Per Channel   | FP16               | 4 + 4/16 = 4.25
MX4                   | S1M2          | 2             | E1M0               | 16            | E8M0               | 3 + 1/2 + 8/16 = 4
MX6                   | S1M4          | 2             | E1M0               | 16            | E8M0               | 5 + 1/2 + 8/16 = 6
MX9                   | S1M7          | 2             | E1M0               | 16            | E8M0               | 8 + 1/2 + 8/16 = 9
  • Data Types Explanation:

    • INT4: Standard 4-bit integer.
    • UINT4: 4-bit unsigned integer.
    • FP16: 16-bit floating-point number.
    • S1M2: A custom 3-bit floating-point-like format (1 sign bit, 2 mantissa bits), with its exponent effectively derived from shared scales.
    • S1M4: A custom 5-bit format (1 sign bit, 4 mantissa bits).
    • S1M7: A custom 8-bit format (1 sign bit, 7 mantissa bits).
    • E1M0: A custom 1-bit exponent-only floating-point scale (1 exponent bit, 0 mantissa bits).
    • E8M0: A custom 8-bit exponent-only floating-point scale (8 exponent bits, 0 mantissa bits).
  • Row-by-Row Analysis:

    1. Per-Channel Quant: This represents a baseline. Each individual value (q) is stored as a 4-bit integer. A single 16-bit FP16 scale (s_l0) is applied per channel. Since a channel contains many weights, the overhead of the 16-bit scale is amortized, making the effective bit width approximately 4 bits per weight.
    2. VSQ (Per-Vector Scaled Quantization): This scheme introduces a two-level scaling hierarchy. The core quantized value (q) is a 4-bit integer. A finer-grained 4-bit unsigned integer scale (s_l0 in UINT4) is applied to groups of 16 quantized values. A coarser 16-bit FP16 scale (s_l1) is applied per channel. The effective bit width is calculated as: (4 bits for q) + (4 bits for s_l0 / 16 elements) = 4 + 0.25 = 4.25 bits/weight. The FP16 s_l1 scale overhead per channel is negligible, hence not included in the fraction.
    3. MX4 (Mixed-Precision with Microexponents, 4-bit effective): This is a key example of specialized floating-point quantization. The base quantized value (q) uses a compact 3-bit S1M2 format. A 1-bit E1M0 scale (s_l0) is applied to very small groups of 2 q values. A coarser 8-bit E8M0 scale (s_l1) is applied to groups of 16 q values. The effective bit width is: (3 bits for q) + (1 bit for s_l0 / 2 elements) + (8 bits for s_l1 / 16 elements) = 3 + 0.5 + 0.5 = 4 bits/weight. This allows for a wider dynamic range, typical of floating-point numbers, while maintaining a very low average bit-width.
    4. MX6: Similar to MX4, but uses a 5-bit S1M4 format for q. The effective bit width becomes: 5 + 0.5 + 0.5 = 6 bits/weight, offering higher precision at the cost of a slight increase in size.
    5. MX9: Uses an 8-bit S1M7 format for q. The effective bit width is: 8 + 0.5 + 0.5 = 9 bits/weight, providing near-INT8 precision while retaining the floating-point-like dynamic range benefits.
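
As a quick sanity check, the “Effective Bit Width” column reduces to a one-line amortization formula (a sketch of the bookkeeping only; group sizes and scale widths are taken straight from the table above):

```python
def effective_bits(q_bits, l0_scale_bits=0, l0_group=1, l1_scale_bits=0, l1_group=1):
    # Bits per weight = payload bits + per-group scale bits amortized over each group.
    return q_bits + l0_scale_bits / l0_group + l1_scale_bits / l1_group

print(effective_bits(4, l0_scale_bits=4, l0_group=16))   # VSQ -> 4.25
print(effective_bits(3, 1, 2, 8, 16))                    # MX4 -> 4.0
print(effective_bits(5, 1, 2, 8, 16))                    # MX6 -> 6.0
print(effective_bits(8, 1, 2, 8, 16))                    # MX9 -> 9.0
```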

These multi-level, mixed-precision, floating-point quantization schemes represent a significant advancement, enabling LLMs to run efficiently on diverse hardware while maintaining high accuracy, especially for managing the ubiquitous outlier values in LLM activations and weights.

Current Trends and Future Outlook

The field of LLM quantization is characterized by rapid innovation.

  • Linear (Affine) Quantization remains the foundational principle, with most advancements focusing on refining its application.
  • Per-channel and especially Group/Block-wise Quantization are indispensable for LLMs due to their heterogeneous weight distributions.
  • Post-Training Quantization (PTQ), particularly advanced one-shot methods like GPTQ and AWQ, is highly relevant for efficient deployment of LLMs without the extensive resources required for QAT.
  • Quantization-Aware Training (QAT) is the benchmark for achieving peak accuracy at very low bit-widths, particularly when PTQ falls short.
  • Mixed-Precision Quantization is crucial for balancing accuracy and efficiency across the massive, varying layers of LLMs.
  • Hardware-optimized quantization formats (like MXFP4, FP8) represent a significant step towards co-designing models and silicon for maximum performance.

Conversely, methods like pure K-means quantization (where computation requires fetching float centroids) and general-purpose binary/ternary quantization are less commonly adopted as primary strategies for high-accuracy LLM inference. They face larger accuracy gaps and lack the broad hardware acceleration enjoyed by optimized integer and block-floating-point operations. The overall trajectory is a continuous push toward lower effective bit-widths, driven by clever scaling strategies, specialized data formats, and a hardware-aware approach to model optimization.


References

Courbariaux, M., Bengio, Y., & David, J. P. (2015). BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations. NeurIPS Proceedings.

Dai, S., Venkatesan, R., Ren, H., Zimmer, B., Dally, W. J., & Khailany, B. (2021). VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference. arXiv preprint arXiv:2102.04503.

Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016). XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. European Conference on Computer Vision (ECCV).

Zhu, C., Han, S., Mao, H., & Dally, W. J. (2017). Trained Ternary Quantization. International Conference on Learning Representations (ICLR).

Migacz, S. (2017). 8-bit Inference with TensorRT. NVIDIA GTC Presentation.

Krishnamoorthi, R. (2018). Quantizing Deep Convolutional Networks for Efficient Inference: A Whitepaper. arXiv preprint arXiv:1806.08342.

Li, F., Liu, B., Wang, X., Zhang, B., & Yan, J. (2016). Ternary Weight Networks. arXiv preprint arXiv:1605.04711.

Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., & Kalenichenko, D. (2018). Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Nagel, M., van Baalen, T., Blankevoort, T., & Louizos, C. (2019). Data-Free Quantization Through Weight Equalization and Bias Correction. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).

Han, S., Mao, H

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html index 800eb9a..5413f16 100644 --- a/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html +++ b/posts/secure-boot-dkms-and-mok-on-proxmox-debian/index.html @@ -7,7 +7,7 @@ That message is the tell: Secure Boot is enabled and the kernel refuses to load nvidia-smi failed to communicate with the NVIDIA driver modprobe nvidia → “Key was rejected by service” That message is the tell: Secure Boot is enabled and the kernel refuses to load modules not signed by a trusted key.">

Fixing GPU Operator Pods Stuck in Init: Secure Boot, DKMS, and MOK on Proxmox + Debian

I hit an issue where all GPU Operator pods on one node were stuck in Init after migrating from Legacy BIOS to UEFI. The common error was NVIDIA components waiting for “toolkit-ready,” while the toolkit init container looped with:

  • nvidia-smi failed to communicate with the NVIDIA driver
  • modprobe nvidia → “Key was rejected by service”

That message is the tell: Secure Boot is enabled and the kernel refuses to load modules not signed by a trusted key.

Environment @@ -59,4 +59,4 @@ nvidia-smi failed to communicate with the NVIDIA driver modprobe nvidia → “K 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/supabase-deep-dive/index.html b/posts/supabase-deep-dive/index.html index 49fe791..24cad48 100644 --- a/posts/supabase-deep-dive/index.html +++ b/posts/supabase-deep-dive/index.html @@ -3,7 +3,7 @@ Supabase enters this space with a radically different philosophy: transparency. Supabase enters this space with a radically different philosophy: transparency. It provides the convenience of a BaaS, but it’s built on the world’s most trusted relational database: PostgreSQL. The “magic” isn’t a proprietary black box; it’s a carefully assembled suite of open-source tools that enhance Postgres, not hide it.">

Supabase Deep Dive: It's Not Magic, It's Just Postgres

In the world of Backend-as-a-Service (BaaS), platforms are often treated as magic boxes. You push data in, you get data out, and you hope the magic inside scales. While this simplicity is powerful, it can obscure the underlying mechanics, leaving developers wondering what’s really going on.

Supabase enters this space with a radically different philosophy: transparency. It provides the convenience of a BaaS, but it’s built on the world’s most trusted relational database: PostgreSQL. The “magic” isn’t a proprietary black box; it’s a carefully assembled suite of open-source tools that enhance Postgres, not hide it.

This deep dive will deconstruct that suite. We will move beyond the basics to explore the architectural patterns, security models, and development workflows that allow you to build robust, scalable applications. We will cover:

  • The Supabase Blueprint: A procedural guide to designing your application.
  • The Pillars of Supabase: A detailed look at Auth, Storage, Functions, and Realtime.
  • Transactional Realtime: How Supabase guarantees data consistency in a live environment.
  • Best Practices: The practical knowledge you need before writing a single line of code.

The Guiding Philosophy: Your Database as the Source of Truth @@ -90,4 +90,4 @@ Supabase enters this space with a radically different philosophy: transparency. 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html index 9317209..bee8e8d 100644 --- a/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html +++ b/posts/t5-the-transformer-that-zigged-when-others-zagged-an-architectural-deep-dive/index.html @@ -3,7 +3,7 @@ But to truly understand the field, we must look at the pivotal models that explo But to truly understand the field, we must look at the pivotal models that explored different paths. Google’s T5, or Text-to-Text Transfer Transformer, stands out as one of the most influential. It didn’t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.">

An Architectural Deep Dive of T5

In the rapidly evolving landscape of Large Language Models, a few key architectures define the dominant paradigms. Today, the “decoder-only” model, popularized by the GPT series and its successors like LLaMA and Mistral, reigns supreme. These models are scaled to incredible sizes and excel at in-context learning.

But to truly understand the field, we must look at the pivotal models that explored different paths. Google’s T5, or Text-to-Text Transfer Transformer, stands out as one of the most influential. It didn’t just introduce a new model; it proposed a new philosophy. This article dives deep into the architecture of T5, how it fundamentally differs from modern LLMs, and the lasting legacy of its unique design choices.

The Core Philosophy: Everything is a Text-to-Text Problem @@ -30,4 +30,4 @@ But to truly understand the field, we must look at the pivotal models that explo 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html b/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html index 9f31665..6cfe6b8 100644 --- a/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html +++ b/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/index.html @@ -3,7 +3,7 @@ This article explores the mathematical equivalence between Hinton’s concept of This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.">

The Convergence of Fast Weights, Linear Attention, and State Space Models

Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (“Fast Weights”) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).

This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.
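
The recurrence at the center of that equivalence fits in a few lines (an unnormalized sketch for illustration; Mamba, RWKV, and gated linear-attention variants add decay, gating, and normalization on top):

```python
import numpy as np

def fast_weight_attention(queries, keys, values):
    """Linear-attention recurrence: S_t = S_{t-1} + v_t k_t^T, y_t = S_t q_t."""
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_v, d_k))            # the "fast weight" / associative memory matrix
    outputs = []
    for q, k, v in zip(queries, keys, values):
        S = S + np.outer(v, k)          # write: bind the current key to its value
        outputs.append(S @ q)           # read: retrieve with the current query
    return np.stack(outputs)
```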

1. The Standard Transformer Bottleneck @@ -26,4 +26,4 @@ This article explores the mathematical equivalence between Hinton’s concept of 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/transformer-s-core-mechanics/index.html b/posts/transformer-s-core-mechanics/index.html index 69eaca2..c780609 100644 --- a/posts/transformer-s-core-mechanics/index.html +++ b/posts/transformer-s-core-mechanics/index.html @@ -10,7 +10,7 @@ In deep learning, a “channel” can be thought of as a feature dimensi 1. The “Channel”: A Foundational View of d_model Link to heading In deep learning, a “channel” can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model’s primary embedding dimension, commonly referred to as d_model.">

Transformer's Core Mechanics

The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of “channels” to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.

1. The “Channel”: A Foundational View of d_model @@ -36,4 +36,4 @@ In deep learning, a “channel” can be thought of as a feature dimensi 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/unifi-vlan-migration-to-zone-based-architecture/index.html b/posts/unifi-vlan-migration-to-zone-based-architecture/index.html index 4169539..f20ecbf 100644 --- a/posts/unifi-vlan-migration-to-zone-based-architecture/index.html +++ b/posts/unifi-vlan-migration-to-zone-based-architecture/index.html @@ -3,7 +3,7 @@ This article documents that journey. It details the pitfalls encountered, the co This article documents that journey. It details the pitfalls encountered, the core networking concepts that were essential to understand, and the best practices that ultimately led to a stable, secure, and logical network design built on a zone-based firewall model.">

UniFi VLAN Migration to Zone-Based Architecture

Embarking on a network migration to a properly segmented VLAN architecture is a rite of passage for any serious home lab or small business operator. The goal is clear: improve security and organization by separating traffic. However, the path from a flat network to a segmented one is often paved with subtle but critical configuration details that can lead to hours of frustrating troubleshooting.

This article documents that journey. It details the pitfalls encountered, the core networking concepts that were essential to understand, and the best practices that ultimately led to a stable, secure, and logical network design built on a zone-based firewall model.

Lesson 1: Demystifying the Native VLAN @@ -28,4 +28,4 @@ This article documents that journey. It details the pitfalls encountered, the co 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/useful/index.html b/posts/useful/index.html index d8c22cc..02819b8 100644 --- a/posts/useful/index.html +++ b/posts/useful/index.html @@ -2,11 +2,11 @@ rootCA.pem ">

Some useful files

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/posts/vattention/index.html b/posts/vattention/index.html index b77f3bf..3ceb93e 100644 --- a/posts/vattention/index.html +++ b/posts/vattention/index.html @@ -10,7 +10,7 @@ Prior to PagedAttention, systems allocated contiguous memory for the maximum pos The Status Quo: PagedAttention and Software Tables Link to heading Prior to PagedAttention, systems allocated contiguous memory for the maximum possible context length, leading to severe fragmentation and wasted memory. PagedAttention addressed this by chunking the KV cache into non-contiguous blocks, managed by a software-defined “page table” (the Block Table) [1].">

vAttention

Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While PagedAttention (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.
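
For context, here is a toy sketch of the per-sequence software block table that PagedAttention-style systems maintain, and that every attention kernel must dereference (the block size and bookkeeping are assumptions for illustration, not vLLM's actual data structures):

```python
class BlockTable:
    """Minimal logical-to-physical mapping for a paged KV cache (illustrative only)."""
    def __init__(self, block_size=16):
        self.block_size = block_size
        self.num_tokens = 0
        self.blocks = []                          # ids of physical KV-cache blocks

    def append_token(self, alloc_block):
        # Grab a fresh physical block only when the current one is full.
        if self.num_tokens % self.block_size == 0:
            self.blocks.append(alloc_block())
        self.num_tokens += 1

    def physical_slot(self, pos):
        # Kernels follow this indirection instead of contiguous pointer arithmetic.
        return self.blocks[pos // self.block_size], pos % self.block_size
```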

The Status Quo: PagedAttention and Software Tables @@ -31,4 +31,4 @@ The GPU TLB hierarchy is sensitive to page sizes.

  • 4KB Pages:< 2016 - 2025 Eric X. Liu -[3f9f80d]

\ No newline at end of file +[2bb856b] \ No newline at end of file diff --git a/tags/index.html b/tags/index.html index 11fc113..12f9e13 100644 --- a/tags/index.html +++ b/tags/index.html @@ -1,7 +1,7 @@ Tags · Eric X. Liu's Personal Page
\ No newline at end of file +[2bb856b] \ No newline at end of file