diff --git a/about/index.html b/about/index.html
index ea651b2..fdd39cd 100644
--- a/about/index.html
+++ b/about/index.html
@@ -1,7 +1,7 @@
-About · Eric X. Liu's Personal Page
\ No newline at end of file
+[4c1d048]
\ No newline at end of file
diff --git a/categories/index.html b/categories/index.html
index 22bc69c..af59783 100644
--- a/categories/index.html
+++ b/categories/index.html
@@ -1,7 +1,7 @@
-Categories · Eric X. Liu's Personal Page
\ No newline at end of file
+[4c1d048]
\ No newline at end of file
diff --git a/index.html b/index.html
index 8a7413d..d633d5b 100644
--- a/index.html
+++ b/index.html
@@ -1,7 +1,7 @@
-Eric X. Liu's Personal Page
\ No newline at end of file
+[4c1d048]
\ No newline at end of file
diff --git a/posts/benchmarking-llms-on-jetson-orin-nano/index.html b/posts/benchmarking-llms-on-jetson-orin-nano/index.html
index 5588238..ac72952 100644
--- a/posts/benchmarking-llms-on-jetson-orin-nano/index.html
+++ b/posts/benchmarking-llms-on-jetson-orin-nano/index.html
@@ -8,7 +8,7 @@
NVIDIA’s Jetson Orin Nano promises impressive specs: 1024 CUDA cores, 32 Tensor Cores, and 40 TOPS of INT8 compute performance packed into a compact, power-efficient edge device. On paper, it looks like a capable platform for running Large Language Models locally. But there’s a catch—one that reveals a fundamental tension in modern edge AI hardware design.
After running 66 inference tests across seven different language models ranging from 0.5B to 5.4B parameters, I discovered something counterintuitive: the device’s computational muscle sits largely idle during single-stream LLM inference. The bottleneck isn’t computation—it’s memory bandwidth. This isn’t just a quirk of one device; it’s a fundamental characteristic of single-user, autoregressive token generation on edge hardware—a reality that shapes how we should approach local LLM deployment.">
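The memory-bandwidth claim above can be checked with a back-of-the-envelope roofline: in single-stream autoregressive decoding, every weight must be streamed from DRAM once per generated token, so peak memory bandwidth divided by model size gives a hard ceiling on tokens per second. The sketch below is illustrative only; the 68 GB/s figure is the Orin Nano's published LPDDR5 spec (an assumption here, not a measurement from the post), and the model sizes are hypothetical examples, not the seven models actually benchmarked.

```python
# Hedged sketch: bandwidth-bound ceiling on single-stream decode speed.
# Assumes one full pass over the weights per token (ignores KV cache,
# activations, and any compute cost, so real throughput is lower).

def tokens_per_sec_ceiling(params_billions: float,
                           bytes_per_param: float,
                           bandwidth_gb_s: float) -> float:
    """Roofline estimate: memory bandwidth / model footprint in GB."""
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# 68 GB/s is the Orin Nano LPDDR5 spec value (assumption, not measured).
BANDWIDTH_GB_S = 68.0

# Hypothetical model sizes spanning the post's 0.5B-5.4B range, at 4-bit
# quantization (0.5 bytes per parameter).
for params in (0.5, 3.0, 5.4):
    ceiling = tokens_per_sec_ceiling(params, 0.5, BANDWIDTH_GB_S)
    print(f"{params}B @ 4-bit: <= {ceiling:.0f} tok/s")
```

Note how the ceiling falls linearly with model size regardless of how many TOPS the device offers, which is exactly why the Tensor Cores sit idle in this workload.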