deploy: 3f9f80d24f
@@ -1,4 +1,7 @@
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Eric X. Liu's Personal Page</title><link>https://ericxliu.me/</link><description>Recent content on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 19 Dec 2025 21:21:55 +0000</lastBuildDate><atom:link href="https://ericxliu.me/index.xml" rel="self" type="application/rss+xml"/><item><title>The Convergence of Fast Weights, Linear Attention, and State Space Models</title><link>https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/</link><pubDate>Fri, 19 Dec 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/</guid><description><p>Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (&ldquo;Fast Weights&rdquo;) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).</p>
<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Eric X. Liu's Personal Page</title><link>https://ericxliu.me/</link><description>Recent content on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Fri, 19 Dec 2025 23:02:31 -0800</lastBuildDate><atom:link href="https://ericxliu.me/index.xml" rel="self" type="application/rss+xml"/><item><title>About</title><link>https://ericxliu.me/about/</link><pubDate>Fri, 19 Dec 2025 22:46:12 -0800</pubDate><guid>https://ericxliu.me/about/</guid><description><p>Hi, I&rsquo;m <strong>Eric Liu</strong>.</p>
<p>I am a <strong>Staff Software Engineer and Tech Lead Manager (TLM)</strong> at <strong>Google</strong>, based in Sunnyvale, CA.</p>
<p>My work focuses on <strong>Platforms Performance and Customer Engineering</strong>, specifically for <strong>GPUs and TPUs</strong>. I lead teams that bridge the gap between cutting-edge AI hardware and the latest ML models (like Gemini), ensuring optimal performance and reliability at Google Cloud scale.</p>
<p>Beyond the code, I maintain this &ldquo;digital garden&rdquo; where I document my projects and learnings. It serves as my second brain, capturing everything from technical deep dives to random musings.</p></description></item><item><title>The Convergence of Fast Weights, Linear Attention, and State Space Models</title><link>https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/</link><pubDate>Fri, 19 Dec 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/the-convergence-of-fast-weights-linear-attention-and-state-space-models/</guid><description><p>Modern Large Language Models (LLMs) are dominated by the Transformer architecture. However, as context windows grow, the computational cost of the Transformer’s attention mechanism has become a primary bottleneck. Recent discussions in the AI community—most notably by Geoffrey Hinton—have highlighted a theoretical link between biological memory mechanisms (&ldquo;Fast Weights&rdquo;) and efficient engineering solutions like Linear Transformers and State Space Models (SSMs).</p>
<p>This article explores the mathematical equivalence between Hinton’s concept of Fast Weights as Associative Memory and the recurrence mechanisms found in models such as Mamba and RWKV.</p></description></item><item><title>vAttention</title><link>https://ericxliu.me/posts/vattention/</link><pubDate>Mon, 08 Dec 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/vattention/</guid><description><p>Large Language Model (LLM) inference is memory-bound, primarily due to the Key-Value (KV) cache—a store of intermediate state that grows linearly with sequence length. Efficient management of this memory is critical for throughput. While <strong>PagedAttention</strong> (popularized by vLLM) became the industry standard by solving memory fragmentation via software, recent research suggests that leveraging the GPU’s native hardware Memory Management Unit (MMU) offers a more performant and portable solution.</p>
<h4 id="the-status-quo-pagedattention-and-software-tables">
The Status Quo: PagedAttention and Software Tables
@@ -75,4 +78,4 @@ Many routing mechanisms, especially &ldquo;Top-K routing,&rdquo; involve
</h3>
<p>In deep learning, a &ldquo;channel&rdquo; can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs, the analogous concept is the model&rsquo;s primary embedding dimension, commonly referred to as <code>d_model</code>.</p></description></item><item><title>Some useful files</title><link>https://ericxliu.me/posts/useful/</link><pubDate>Mon, 26 Oct 2020 04:14:43 +0000</pubDate><guid>https://ericxliu.me/posts/useful/</guid><description><ul>
<li><a href="https://ericxliu.me/rootCA.crt" >rootCA.pem</a></li>
</ul></description></item><item><title>About</title><link>https://ericxliu.me/about/</link><pubDate>Fri, 01 Jun 2018 07:13:52 +0000</pubDate><guid>https://ericxliu.me/about/</guid><description/></item></channel></rss>
</ul></description></item></channel></rss>