eric
2026-01-10 20:11:08 +00:00
parent 4355096bdc
commit 07438a27e9
35 changed files with 55 additions and 55 deletions

@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Eric X. Liu's Personal Page</title><link>https://ericxliu.me/posts/</link><description>Recent content in Posts on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 08 Jan 2026 18:13:13 +0000</lastBuildDate><atom:link href="https://ericxliu.me/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Why I Downgraded Magisk to Root My Pixel 2 XL</title><link>https://ericxliu.me/posts/rooting-pixel-2-xl-for-reverse-engineering/</link><pubDate>Wed, 07 Jan 2026 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/rooting-pixel-2-xl-for-reverse-engineering/</guid><description>&lt;p&gt;For the past few weeks, I&amp;rsquo;ve been stuck in a stalemate with my EcoFlow Bluetooth Protocol Reverse Engineering Project. I have the HCI snoop logs, I have the decompiled APK, and I have a strong suspicion about where the authentication logic is hiding. But suspicion isn&amp;rsquo;t proof.&lt;/p&gt;
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Eric X. Liu's Personal Page</title><link>https://ericxliu.me/posts/</link><description>Recent content in Posts on Eric X. Liu's Personal Page</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 10 Jan 2026 20:10:48 +0000</lastBuildDate><atom:link href="https://ericxliu.me/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Why I Downgraded Magisk to Root My Pixel 2 XL</title><link>https://ericxliu.me/posts/rooting-pixel-2-xl-for-reverse-engineering/</link><pubDate>Wed, 07 Jan 2026 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/rooting-pixel-2-xl-for-reverse-engineering/</guid><description>&lt;p&gt;For the past few weeks, I&amp;rsquo;ve been stuck in a stalemate with my EcoFlow Bluetooth Protocol Reverse Engineering Project. I have the HCI snoop logs, I have the decompiled APK, and I have a strong suspicion about where the authentication logic is hiding. But suspicion isn&amp;rsquo;t proof.&lt;/p&gt;
&lt;p&gt;Static analysis has its limits. I found the &amp;ldquo;smoking gun&amp;rdquo; function—a native method responsible for encrypting the login payload—but understanding &lt;em&gt;how&lt;/em&gt; it constructs that payload within a strict 13-byte limit purely from assembly (ARM64) was proving to be a headache.&lt;/p&gt;</description></item><item><title>Why Your "Resilient" Homelab is Slower Than a Raspberry Pi</title><link>https://ericxliu.me/posts/debugging-authentik-performance/</link><pubDate>Fri, 02 Jan 2026 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/debugging-authentik-performance/</guid><description>&lt;p&gt;In the world of self-hosting, there are many metrics for success: 99.9% uptime, sub-second latency, or a perfect GitOps pipeline. But for those of us running &amp;ldquo;production&amp;rdquo; at home, there is only one metric that truly matters: &lt;strong&gt;The Wife Acceptance Factor (WAF)&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;My detailed Grafana dashboards said everything was fine. But my wife said the SSO login was &amp;ldquo;slow sometimes.&amp;rdquo; She was right. Debugging it took me down a rabbit hole of connection pooling, misplaced assumptions, and the harsh reality of running databases on distributed storage.&lt;/p&gt;</description></item><item><title>How I Got Open WebUI Talking to OpenAI Web Search</title><link>https://ericxliu.me/posts/open-webui-openai-websearch/</link><pubDate>Mon, 29 Dec 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/open-webui-openai-websearch/</guid><description>&lt;p&gt;OpenAI promised native web search in GPT5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue &lt;a href="https://github.com/BerriAI/litellm/issues/13042" class="external-link" target="_blank" rel="noopener"&gt;#13042&lt;/a&gt; tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT5 traffic through the Responses API and mask every &lt;code&gt;web_search_call&lt;/code&gt; before the UI ever sees it.&lt;/p&gt;
&lt;p&gt;This post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself.&lt;/p&gt;</description></item><item><title>From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM</title><link>https://ericxliu.me/posts/technical-deep-dive-llm-categorization/</link><pubDate>Sat, 27 Dec 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/technical-deep-dive-llm-categorization/</guid><description>&lt;p&gt;Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and &amp;ldquo;wait, was this dinner or &lt;em&gt;vacation&lt;/em&gt; dinner?&amp;rdquo; questions.&lt;/p&gt;
@@ -66,7 +66,7 @@
&lt;p&gt;The answer lies in creating a universal language—a bridge between the continuous, messy world of pixels and audio waves and the discrete, structured world of language tokens. One of the most elegant and powerful tools for building this bridge is &lt;strong&gt;Residual Vector Quantization (RVQ)&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Supabase Deep Dive: It's Not Magic, It's Just Postgres</title><link>https://ericxliu.me/posts/supabase-deep-dive/</link><pubDate>Sun, 03 Aug 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/supabase-deep-dive/</guid><description>&lt;p&gt;In the world of Backend-as-a-Service (BaaS), platforms are often treated as magic boxes. You push data in, you get data out, and you hope the magic inside scales. While this simplicity is powerful, it can obscure the underlying mechanics, leaving developers wondering what&amp;rsquo;s really going on.&lt;/p&gt;
&lt;p&gt;Supabase enters this space with a radically different philosophy: &lt;strong&gt;transparency&lt;/strong&gt;. It provides the convenience of a BaaS, but it&amp;rsquo;s built on the world&amp;rsquo;s most trusted relational database: PostgreSQL. The &amp;ldquo;magic&amp;rdquo; isn&amp;rsquo;t a proprietary black box; it&amp;rsquo;s a carefully assembled suite of open-source tools that enhance Postgres, not hide it.&lt;/p&gt;</description></item><item><title>A Deep Dive into PPO for Language Models</title><link>https://ericxliu.me/posts/ppo-for-language-models/</link><pubDate>Sat, 02 Aug 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/ppo-for-language-models/</guid><description>&lt;p&gt;Large Language Models (LLMs) have demonstrated astonishing capabilities, but out-of-the-box, they are simply powerful text predictors. They don&amp;rsquo;t inherently understand what makes a response helpful, harmless, or aligned with human values. The technique that has proven most effective at bridging this gap is Reinforcement Learning from Human Feedback (RLHF), and at its heart lies a powerful algorithm: Proximal Policy Optimization (PPO).&lt;/p&gt;
&lt;p&gt;You may have seen diagrams like the one below, which outlines the RLHF training process. It can look intimidating, with a web of interconnected models, losses, and data flows.
&lt;img src="http://localhost:4998/attachments/image-3632d923eed983f171fba4341825273101f1fc94.png?client=default&amp;amp;bucket=obsidian" alt="S3 File"&gt;&lt;/p&gt;</description></item><item><title>Mixture-of-Experts (MoE) Models Challenges &amp; Solutions in Practice</title><link>https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</guid><description>&lt;p&gt;Mixture-of-Experts (MoEs) are neural network architectures that allow different parts of the model (called &amp;ldquo;experts&amp;rdquo;) to specialize in different types of inputs. A &amp;ldquo;gating network&amp;rdquo; or &amp;ldquo;router&amp;rdquo; learns to dispatch each input (or &amp;ldquo;token&amp;rdquo;) to a subset of these experts. While powerful for scaling models, MoEs introduce several practical challenges.&lt;/p&gt;
&lt;img src="https://ericxliu.me/images/ppo-for-language-models/7713bd3ecf27442e939b9190fa08165d.png" alt="S3 File"&gt;&lt;/p&gt;</description></item><item><title>Mixture-of-Experts (MoE) Models Challenges &amp; Solutions in Practice</title><link>https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>https://ericxliu.me/posts/mixture-of-experts-moe-models-challenges-solutions-in-practice/</guid><description>&lt;p&gt;Mixture-of-Experts (MoEs) are neural network architectures that allow different parts of the model (called &amp;ldquo;experts&amp;rdquo;) to specialize in different types of inputs. A &amp;ldquo;gating network&amp;rdquo; or &amp;ldquo;router&amp;rdquo; learns to dispatch each input (or &amp;ldquo;token&amp;rdquo;) to a subset of these experts. While powerful for scaling models, MoEs introduce several practical challenges.&lt;/p&gt;
&lt;h3 id="1-challenge-non-differentiability-of-routing-functions"&gt;
1. Challenge: Non-Differentiability of Routing Functions
&lt;a class="heading-link" href="#1-challenge-non-differentiability-of-routing-functions"&gt;