<!doctype html><html lang=en><head><title>From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM · Eric X. Liu's Personal Page</title><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=color-scheme content="light dark"><meta http-equiv=Content-Security-Policy content="upgrade-insecure-requests; block-all-mixed-content; default-src 'self'; child-src 'self'; font-src 'self' https://fonts.gstatic.com https://cdn.jsdelivr.net/; form-action 'self'; frame-src 'self' https://www.youtube.com https://disqus.com; img-src 'self' https://referrer.disqus.com https://c.disquscdn.com https://*.disqus.com; object-src 'none'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com/ https://cdn.jsdelivr.net/; script-src 'self' 'unsafe-inline' https://www.google-analytics.com https://cdn.jsdelivr.net/ https://pagead2.googlesyndication.com https://static.cloudflareinsights.com https://unpkg.com https://ericxliu-me.disqus.com https://disqus.com https://*.disqus.com https://*.disquscdn.com https://unpkg.com; connect-src 'self' https://www.google-analytics.com https://pagead2.googlesyndication.com https://cloudflareinsights.com ws://localhost:1313 ws://localhost:* wss://localhost:* https://links.services.disqus.com https://*.disqus.com;"><meta name=author content="Eric X. Liu"><meta name=description content='Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and “wait, was this dinner or vacation dinner?” questions.
For years, I relied on a rule-based system to categorize our credit card transactions. It worked… mostly. But maintaining if "UBER" in description and amount > 50 style rules is a never-ending battle against the entropy of merchant names and changing habits.'><meta name=keywords content="software engineer,performance engineering,Google engineer,tech blog,software development,performance optimization,Eric Liu,engineering blog,mountain biking,Jeep enthusiast,overlanding,camping,outdoor adventures"><meta name=twitter:card content="summary"><meta name=twitter:title content="From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM"><meta name=twitter:description content='Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and “wait, was this dinner or vacation dinner?” questions.
For years, I relied on a rule-based system to categorize our credit card transactions. It worked… mostly. But maintaining if "UBER" in description and amount > 50 style rules is a never-ending battle against the entropy of merchant names and changing habits.'><meta property="og:url" content="https://ericxliu.me/posts/technical-deep-dive-llm-categorization/"><meta property="og:site_name" content="Eric X. Liu's Personal Page"><meta property="og:title" content="From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM"><meta property="og:description" content='Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and “wait, was this dinner or vacation dinner?” questions.
For years, I relied on a rule-based system to categorize our credit card transactions. It worked… mostly. But maintaining if "UBER" in description and amount > 50 style rules is a never-ending battle against the entropy of merchant names and changing habits.'><meta property="og:locale" content="en"><meta property="og:type" content="article"><meta property="article:section" content="posts"><meta property="article:published_time" content="2025-12-27T00:00:00+00:00"><meta property="article:modified_time" content="2026-01-08T18:13:13+00:00"><link rel=preload href=/fonts/fa-solid-900.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-brands-400.woff2 as=font type=font/woff2 crossorigin><link rel=canonical href=https://ericxliu.me/posts/technical-deep-dive-llm-categorization/><link rel=preload href=/fonts/fa-brands-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-regular-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-solid-900.woff2 as=font type=font/woff2 crossorigin><link rel=stylesheet href=/css/coder.min.4b392a85107b91dbdabc528edf014a6ab1a30cd44cafcd5325c8efe796794fca.css integrity="sha256-SzkqhRB7kdvavFKO3wFKarGjDNRMr81TJcjv55Z5T8o=" crossorigin=anonymous media=screen><link rel=stylesheet href=/css/coder-dark.min.a00e6364bacbc8266ad1cc81230774a1397198f8cfb7bcba29b7d6fcb54ce57f.css integrity="sha256-oA5jZLrLyCZq0cyBIwd0oTlxmPjPt7y6KbfW/LVM5X8=" crossorigin=anonymous media=screen><link rel=icon type=image/svg+xml href=/images/favicon.svg sizes=any><link rel=icon type=image/png href=/images/favicon-32x32.png sizes=32x32><link rel=icon type=image/png href=/images/favicon-16x16.png sizes=16x16><link rel=apple-touch-icon href=/images/apple-touch-icon.png><link rel=apple-touch-icon sizes=180x180 href=/images/apple-touch-icon.png><link rel=manifest href=/site.webmanifest><link rel=mask-icon href=/images/safari-pinned-tab.svg color=#5bbad5><script async 
src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-3972604619956476" crossorigin=anonymous></script><script type=application/ld+json>{"@context":"http://schema.org","@type":"Person","name":"Eric X. Liu","url":"https:\/\/ericxliu.me\/","description":"Software \u0026 Performance Engineer at Google","sameAs":["https:\/\/www.linkedin.com\/in\/eric-x-liu-46648b93\/","https:\/\/git.ericxliu.me\/eric"]}</script><script type=application/ld+json>{"@context":"http://schema.org","@type":"BlogPosting","headline":"From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM","genre":"Blog","wordcount":"1355","url":"https:\/\/ericxliu.me\/posts\/technical-deep-dive-llm-categorization\/","datePublished":"2025-12-27T00:00:00\u002b00:00","dateModified":"2026-01-08T18:13:13\u002b00:00","description":"\u003cp\u003eRunning a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and \u0026ldquo;wait, was this dinner or \u003cem\u003evacation\u003c\/em\u003e dinner?\u0026rdquo; questions.\u003c\/p\u003e\n\u003cp\u003eFor years, I relied on a rule-based system to categorize our credit card transactions. It worked\u0026hellip; mostly. But maintaining \u003ccode\u003eif \u0026quot;UBER\u0026quot; in description and amount \u0026gt; 50\u003c\/code\u003e style rules is a never-ending battle against the entropy of merchant names and changing habits.\u003c\/p\u003e","author":{"@type":"Person","name":"Eric X. Liu"}}</script></head><body class="preload-transitions colorscheme-auto"><div class=float-container><a id=dark-mode-toggle class=colorscheme-toggle><i class="fa-solid fa-adjust fa-fw" aria-hidden=true></i></a></div><main class=wrapper><nav class=navigation><section class=container><a class=navigation-title href=https://ericxliu.me/>Eric X. Liu's Personal Page
</a><input type=checkbox id=menu-toggle>
<label class="menu-button float-right" for=menu-toggle><i class="fa-solid fa-bars fa-fw" aria-hidden=true></i></label><ul class=navigation-list><li class=navigation-item><a class=navigation-link href=/posts/>Posts</a></li><li class=navigation-item><a class=navigation-link href=https://chat.ericxliu.me>Chat</a></li><li class=navigation-item><a class=navigation-link href=https://git.ericxliu.me/user/oauth2/Authenitk>Git</a></li><li class=navigation-item><a class=navigation-link href=https://coder.ericxliu.me/api/v2/users/oidc/callback>Coder</a></li><li class=navigation-item><a class=navigation-link href=/about/>About</a></li><li class=navigation-item><a class=navigation-link href=/>|</a></li><li class=navigation-item><a class=navigation-link href=https://sso.ericxliu.me>Sign in</a></li></ul></section></nav><div class=content><section class="container post"><article><header><div class=post-title><h1 class=title><a class=title-link href=https://ericxliu.me/posts/technical-deep-dive-llm-categorization/>From Gemini-3-Flash to T5-Gemma-2: A Journey in Distilling a Family Finance LLM</a></h1></div><div class=post-meta><div class=date><span class=posted-on><i class="fa-solid fa-calendar" aria-hidden=true></i>
<time datetime=2025-12-27T00:00:00Z>December 27, 2025
</time></span><span class=reading-time><i class="fa-solid fa-clock" aria-hidden=true></i>
7-minute read</span></div></div></header><div class=post-content><p>Running a family finance system is surprisingly complex. What starts as a simple spreadsheet often evolves into a web of rules, exceptions, and “wait, was this dinner or <em>vacation</em> dinner?” questions.</p><p>For years, I relied on a rule-based system to categorize our credit card transactions. It worked… mostly. But maintaining <code>if "UBER" in description and amount > 50</code> style rules is a never-ending battle against the entropy of merchant names and changing habits.</p><p>Recently, I decided to modernize this stack using Large Language Models (LLMs). This post details the technical journey from using an off-the-shelf commercial model to distilling that knowledge into a small, efficient local model (<code>google/t5gemma-2-270m</code>) that runs on my own hardware while maintaining high accuracy.</p><h2 id=phase-1-the-proof-of-concept-with-commercial-llms>Phase 1: The Proof of Concept with Commercial LLMs
<a class=heading-link href=#phase-1-the-proof-of-concept-with-commercial-llms><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>My first step was to replace the spaghetti code of regex rules with a prompt. I used <strong>Gemini-3-Flash</strong> (via <code>litellm</code>) as my categorization engine.</p><p>The core challenge was context. A transaction like <code>MCDONALDS</code> could be:</p><ul><li><strong>Dining</strong>: A quick lunch during work.</li><li><strong>Travel-Dining</strong>: A meal while on a road trip.</li></ul><p>To solve this, I integrated my <strong>private Google Calendar</strong> (via <code>.ics</code> export). The prompt doesn’t just see the transaction; it sees <em>where I was</em> and <em>what I was doing</em> on that day.</p><h3 id=the-god-prompt>The “God Prompt”
<a class=heading-link href=#the-god-prompt><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>The system prompt was designed to return strict JSON, adhering to a schema of Categories (e.g., <code>Dining</code>, <code>Travel</code>, <code>Bills</code>) and Sub-Categories (e.g., <code>Travel</code> -> <code>Accommodation</code>).</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-json data-lang=json><span style=display:flex><span>{
</span></span><span style=display:flex><span> <span style=color:#7ee787>"Category"</span>: <span style=color:#a5d6ff>"Travel"</span>,
</span></span><span style=display:flex><span> <span style=color:#7ee787>"Travel Category"</span>: <span style=color:#a5d6ff>"Dining"</span>,
</span></span><span style=display:flex><span> <span style=color:#7ee787>"Reasoning"</span>: <span style=color:#a5d6ff>"User is on 'Trip: 34TH ARCH CANYON 2025', distinguishing this from regular dining."</span>
</span></span><span style=display:flex><span>}
</span></span></code></pre></div><p>This worked well. The “Reasoning” field even gave me explanations for why it flagged something as <code>Entertainment</code> vs <code>Shopping</code>. But relying on an external API for every single transaction felt like overkill for a personal project, and I wanted to own the stack.</p><h2 id=phase-2-distilling-knowledge>Phase 2: Distilling Knowledge
<a class=heading-link href=#phase-2-distilling-knowledge><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>I wanted to train a smaller model to mimic Gemini’s performance. But I didn’t want to manually label thousands of transactions.</p><h3 id=consistency-filtering>Consistency Filtering
<a class=heading-link href=#consistency-filtering><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>I had a massive CSV of historical transactions (years of data). However, that data was “noisy”—some manual labels were outdated or inconsistent.</p><p>I built a <strong>Distillation Pipeline</strong> (<code>distill_reasoning.py</code>) that uses the Teacher Model (Gemini) to re-label the historical data. But here’s the twist: I only added a data point to my training set if the <strong>Teacher’s prediction matched the Historical Ground Truth</strong>.</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python><span style=display:flex><span><span style=color:#8b949e;font-style:italic># Pseudo-code for consistency filtering</span>
</span></span><span style=display:flex><span>teacher_pred <span style=color:#ff7b72;font-weight:700>=</span> gemini<span style=color:#ff7b72;font-weight:700>.</span>categorize(transaction)
</span></span><span style=display:flex><span>historical_label <span style=color:#ff7b72;font-weight:700>=</span> row[<span style=color:#a5d6ff>'Category'</span>]
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span style=color:#ff7b72>if</span> teacher_pred<span style=color:#ff7b72;font-weight:700>.</span>category <span style=color:#ff7b72;font-weight:700>==</span> historical_label:
</span></span><span style=display:flex><span> <span style=color:#8b949e;font-style:italic># High confidence sample!</span>
</span></span><span style=display:flex><span> training_data<span style=color:#ff7b72;font-weight:700>.</span>append({
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>"input"</span>: format_transaction(transaction),
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>"output"</span>: teacher_pred<span style=color:#ff7b72;font-weight:700>.</span>to_json()
</span></span><span style=display:flex><span> })
</span></span><span style=display:flex><span><span style=color:#ff7b72>else</span>:
</span></span><span style=display:flex><span> <span style=color:#8b949e;font-style:italic># Discard: Either history is wrong OR teacher hallucinated.</span>
</span></span><span style=display:flex><span> log_fail(transaction)
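# The loop above can be fleshed out into a small runnable helper. This is a
# sketch under assumptions: `teacher` stands in for the Gemini-3-Flash call
# (via litellm), and rows are assumed to carry "Description"/"Category" keys.

```python
import json

def consistency_filter(rows, teacher):
    """Keep a row only when the teacher's category matches the historical label."""
    kept, dropped = [], []
    for row in rows:
        # Hypothetical teacher call: returns a dict like {"Category": "Dining", ...}
        pred = teacher(row["Description"])
        if pred["Category"] == row["Category"]:
            # Agreement: high-confidence training sample
            kept.append({"input": row["Description"], "output": json.dumps(pred)})
        else:
            # Disagreement: either history or teacher is wrong, so discard
            dropped.append(row)
    return kept, dropped
```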
</span></span></code></pre></div><p>This filtered out the noise, leaving me with ~2,000 high-quality, “verified” examples where both the human (me, years ago) and the AI agreed.</p><h2 id=phase-3-training-the-little-guy>Phase 3: Training the Little Guy
<a class=heading-link href=#phase-3-training-the-little-guy><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>For the local model, I chose <strong>google/t5gemma-2-270m</strong>. This is a Seq2Seq model, which fits the “Text-to-JSON” task perfectly, and it’s tiny (270M parameters), meaning it can run on almost anything.</p><h3 id=the-stack>The Stack
<a class=heading-link href=#the-stack><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><ul><li><strong>Library</strong>: <code>transformers</code>, <code>peft</code>, <code>bitsandbytes</code></li><li><strong>Technique</strong>: <strong>LoRA</strong> (Low-Rank Adaptation). I targeted all linear layers (<code>q_proj</code>, <code>k_proj</code>, <code>v_proj</code>, etc.) with <code>r=16</code>.</li><li><strong>Optimization</strong>: <code>AdamW</code> with linear decay.</li></ul><h3 id=pitfall-1-the-loss-is-0-initial-panic>Pitfall #1: The “Loss is 0” Initial Panic
<a class=heading-link href=#pitfall-1-the-loss-is-0-initial-panic><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>My first training run showed a loss of exactly <code>0.000</code> almost immediately. In deep learning, if it looks too good to be true, it’s a bug.
It turned out to be a syntax error in the arguments I passed to the <code>Trainer</code> (or rather, to my custom loop). Once fixed, the loss looked “healthy”—starting high and decaying noisily.</p><h3 id=pitfall-2-stability-vs-noise>Pitfall #2: Stability vs. Noise
<a class=heading-link href=#pitfall-2-stability-vs-noise><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>The loss curve was initially extremely erratic. The batch size on my GPU was limited (Physical Batch Size = 4).
<strong>The Fix</strong>: I implemented <strong>Gradient Accumulation</strong> (accumulating over 8 steps) to simulate a batch size of 32. This smoothed out the optimization landscape significantly.
</p><h3 id=pitfall-3-overfitting>Pitfall #3: Overfitting
<a class=heading-link href=#pitfall-3-overfitting><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>With a small dataset (~2k samples), overfitting is a real risk. I employed a multi-layered defense strategy:</p><ol><li><strong>Data Quality First</strong>: The “Consistency Filtering” phase was the most critical step. By discarding ambiguous samples where the teacher model disagreed with history, I prevented the model from memorizing noise.</li><li><strong>Model Regularization</strong>:<ul><li><strong>LoRA Dropout</strong>: I set <code>lora_dropout=0.1</code>, randomly dropping 10% of the trainable adapter connections during training to force robust feature learning.</li><li><strong>Gradient Clipping</strong>: We capped the gradient norm at <code>1.0</code>. This prevents the “exploding gradient” problem and keeps weight updates stable.</li><li><strong>AdamW</strong>: Using the AdamW optimizer adds decoupled weight decay, implicitly penalizing overly complex weights.</li></ul></li></ol><p>I also set up a rigorous evaluation loop (10% validation split, eval every 50 steps) to monitor the <code>Train Loss</code> vs <code>Eval Loss</code> in real-time. The final curves showed them tracking downwards together, confirming generalization.</p><h2 id=phase-4-results-and-the-travel-edge-case>Phase 4: Results and The “Travel” Edge Case
<a class=heading-link href=#phase-4-results-and-the-travel-edge-case><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>The distilled model is surprisingly capable. It learned the JSON schema very well. Although I included a regex fallback in the inference script as a safety net, the model generates valid JSON the vast majority of the time.</p><h3 id=head-to-head-local-model-vs-gemini-flash>Head-to-Head: Local Model vs Gemini-Flash
<a class=heading-link href=#head-to-head-local-model-vs-gemini-flash><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>I ran a blind evaluation on 20 random unseen transactions.</p><ul><li><strong>Gemini-3-Flash Accuracy</strong>: 90% (18/20)</li><li><strong>Local T5-Gemma-2 Accuracy</strong>: 85% (17/20)</li></ul><p>The gap is surprisingly small. In fact, the local model sometimes outperformed the API because it was fine-tuned on <em>my</em> specific data distribution.</p><p><strong>Win for Local Model:</strong></p><blockquote><p><strong>Transaction</strong>: <code>XX RANCH #1702</code>
<strong>Local Prediction</strong>: <code>Groceries</code> (Correct)
<strong>API Prediction</strong>: <code>Gas</code> (Incorrect)
<strong>Local Reasoning</strong>: “XX RANCH refers to a well-known supermarket chain.”
<strong>API Reasoning</strong>: “XX RANCH is a known convenience store and gas station chain.”
<strong>Analysis</strong>: The local model “knows” (from training data) that XX Ranch is an Asian grocery store I frequent, whereas the general-purpose API assumed it was a gas station based on the name pattern.</p></blockquote><p><strong>Win for API (World Knowledge):</strong></p><blockquote><p><strong>Transaction</strong>: <code>LOVE'S #0792</code>
<strong>Local Prediction</strong>: <code>Dining</code> (Hallucination)
<strong>API Prediction</strong>: <code>Travel-Gas</code> (Correct)
<strong>Local Reasoning</strong>: “Love’s is a well-known restaurant chain, which falls under the Dining category.”
<strong>API Reasoning</strong>: “Love’s is a well-known gas station chain, and the transaction occurred during a trip to Moab, categorizing it as travel-related fuel.”
<strong>Analysis</strong>: The API knows “Love’s” is a major gas station chain. The small local model lacks this world knowledge and hallucinates it as a restaurant, highlighting the pure “Knowledge Gap” between a 270M and a 70B+ model. Additionally, Gemini Flash has <strong>Google Search grounding</strong> enabled, allowing it to verify real-world entities in real-time—a capability our isolated local model intrinsically lacks.</p></blockquote><h3 id=surprise-win-json-stability>Surprise Win: JSON Stability
<a class=heading-link href=#surprise-win-json-stability><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>One pleasant surprise was the <strong>format adherence</strong>. I initially feared I’d need constrained generation tools like <code>outlines</code> or a simplified schema for a 270M parameter model. However, the distilled T5-Gemma model followed the complex JSON schema (including nested fields) with near-perfect reliability, proving that specific structure can be learned effectively through fine-tuning alone.</p><h3 id=key-lesson-the-noisy-ground-truth-trap>Key Lesson: The “Noisy Ground Truth” Trap
<a class=heading-link href=#key-lesson-the-noisy-ground-truth-trap><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h3><p>Since this is a <strong>distillation (SFT)</strong> pipeline, not Reinforcement Learning, the model has no way to “unlearn” bad habits via negative rewards. It relies entirely on the quality of the teacher’s reasoning.</p><blockquote><p><strong>Transaction</strong>: <code>[TRAVEL] SWEETHOME KITCHEN</code>
<strong>Local Prediction</strong>: <code>Dining</code>
<strong>API Prediction</strong>: <code>Travel-Dining</code>
<strong>Local Reasoning</strong>: “The description ‘SWEETHOME KITCHEN’ indicates a restaurant or dining establishment, which falls under the Dining category.”
<strong>API Reasoning</strong>: “The transaction is for a kitchen/restaurant and occurred while the user was traveling to Pfeiffer Big Sur SP, making it a travel-related dining expense.”</p></blockquote><p>In this case, the API correctly used the calendar context (“User is in Big Sur”). The local model missed this link. This highlights that simply having the data isn’t enough—the <em>reasoning</em> in the training set must explicitly force the model to look at the context, or it will revert to simple pattern matching (Kitchen = Dining).</p><h2 id=conclusion>Conclusion
<a class=heading-link href=#conclusion><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>We often think we need 70B parameter models for everything. This project shows that for a specific, well-defined task with consistent formatting, a <strong>270M parameter model</strong>—fine-tuned on high-quality, distilled data—can punch way above its weight class.</p><p>The key was <strong>data quality over quantity</strong>. By using the commercial model to “verify” my historical data, I created a dataset that was cleaner than either source alone.</p></div><footer><div id=disqus_thread></div><script>window.disqus_config=function(){},function(){if(["localhost","127.0.0.1"].indexOf(window.location.hostname)!=-1){document.getElementById("disqus_thread").innerHTML="Disqus comments not available by default when the website is previewed locally.";return}var t=document,e=t.createElement("script");e.async=!0,e.src="//ericxliu-me.disqus.com/embed.js",e.setAttribute("data-timestamp",+new Date),(t.head||t.body).appendChild(e)}(),document.addEventListener("themeChanged",function(){document.readyState=="complete"&&DISQUS.reset({reload:!0,config:disqus_config})})</script></footer></article><link rel=stylesheet href=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/katex.min.css integrity=sha384-vKruj+a13U8yHIkAyGgK1J3ArTLzrFGBbBc0tDp4ad/EyewESeXE/Iv67Aj8gKZ0 crossorigin=anonymous><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/katex.min.js integrity=sha384-PwRUT/YqbnEjkZO0zZxNqcxACrXe+j766U2amXcgMg5457rve2Y7I6ZJSm2A0mS4 crossorigin=anonymous></script><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/contrib/auto-render.min.js integrity=sha384-+VBxd3r6XgURycqtZ117nYw44OOcIax56Z4dCRWbxyPt0Koah1uHoK0o4+/RRE05 crossorigin=anonymous onload='renderMathInElement(document.body,{delimiters:[{left:"$$",right:"$$",display:!0},{left:"$",right:"$",display:!1},{left:"\\(",right:"\\)",display:!1},{left:"\\[",right:"\\]",display:!0}]})'></script></section></div><footer class=footer><section class=container>©
2016 -
2026
Eric X. Liu
<a href="https://git.ericxliu.me/eric/ericxliu-me/commit/f7528b3">[f7528b3]</a></section></footer></main><script src=/js/coder.min.6ae284be93d2d19dad1f02b0039508d9aab3180a12a06dcc71b0b0ef7825a317.js integrity="sha256-auKEvpPS0Z2tHwKwA5UI2aqzGAoSoG3McbCw73gloxc="></script><script defer src=https://static.cloudflareinsights.com/beacon.min.js data-cf-beacon='{"token": "987638e636ce4dbb932d038af74c17d1"}'></script></body></html>