---
title: "How I Built a Blog Agent that Writes About Itself"
date: 2026-01-16
draft: false
---

I've been spending a lot of time "vibe coding" in the Antigravity IDE lately. It's an incredible flow state—intense, iterative, and fast. But it has a major flaw: the context is ephemeral. Once the session is over, that rich history of decisions, wrong turns, and "aha!" moments is locked away in an opaque, internal format.

I wanted to capture that value. I wanted a system that could take my chaotic coding sessions and distill them into structured, technical blog posts (like the one you're reading right now).

But getting the data out turned into a much deeper rabbit hole than I expected.

## The Challenge: Check the Database?

My first instinct was simple: *It's an Electron app, so there's probably a SQLite database.*

I found it easily enough at `~/Library/Application Support/Antigravity/User/globalStorage/state.vscdb`. But when I opened it up, I hit a wall. The data wasn't plain text; it was stored in the `ItemTable` under keys like `antigravityUnifiedStateSync.trajectorySummaries` as Base64-encoded strings.

Decoding them revealed raw Protobuf wire formats, not JSON.
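
In code, that first probe looks roughly like this (a minimal sketch assuming the `better-sqlite3` package; the path and key are the real ones from above):

```js
// Sketch: pull the Base64-encoded trajectory summaries out of the state DB.
const Database = require('better-sqlite3');
const os = require('os');
const path = require('path');

const dbPath = path.join(
  os.homedir(),
  'Library/Application Support/Antigravity/User/globalStorage/state.vscdb'
);
const db = new Database(dbPath, { readonly: true });

const row = db
  .prepare('SELECT value FROM ItemTable WHERE key = ?')
  .get('antigravityUnifiedStateSync.trajectorySummaries');

// Base64-decode the value: what comes out is raw Protobuf wire format.
const raw = Buffer.from(row.value, 'base64');
console.log(raw.length, 'bytes of schema-less Protobuf');
```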

The "Wire-Walking" Dead End

I spent a few hours writing a Python script to "wire-walk" the Protobuf data without a schema (the core of the walker is sketched after the list below). I managed to extract some human-readable strings, but it was a mess:

1. **Missing Context:** I got fragments of text, but the user prompts and cohesive flow were gone.
2. **Encryption:** The actual conversation files (ending in `.pb`) in `~/.gemini/antigravity/conversations/` were encrypted.
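
My original walker was Python, but the idea fits in a few lines of Node: parse tag/value pairs, print anything that looks like UTF-8 text, and recurse into anything that parses as a nested message.

```js
// Schema-less Protobuf "wire walk" (a Node sketch of the same idea as my
// Python script). Wire types: 0 = varint, 1 = fixed64, 2 = length-delimited,
// 5 = fixed32. Length-delimited fields may be strings OR nested messages.
function wireWalk(buf, depth = 0) {
  let i = 0;
  const readVarint = () => {
    let n = 0n, shift = 0n;
    for (;;) {
      const b = buf[i++];
      if (b === undefined) throw new Error('truncated varint');
      n |= BigInt(b & 0x7f) << shift;
      if ((b & 0x80) === 0) return n;
      shift += 7n;
    }
  };
  while (i < buf.length) {
    const tag = readVarint();
    const field = Number(tag >> 3n);
    const wireType = Number(tag & 7n);
    if (wireType === 0) readVarint();        // varint scalar: skip
    else if (wireType === 1) i += 8;         // fixed64: skip
    else if (wireType === 5) i += 4;         // fixed32: skip
    else if (wireType === 2) {               // length-delimited
      const len = Number(readVarint());
      const chunk = buf.subarray(i, i + len);
      i += len;
      const text = chunk.toString('utf8');
      if (text.length > 0 && /^[\x20-\x7e\s]*$/.test(text)) {
        console.log(`${'  '.repeat(depth)}#${field}: ${text}`);
      } else {
        try { wireWalk(chunk, depth + 1); } catch { /* opaque bytes */ }
      }
    } else throw new Error(`unexpected wire type ${wireType}`);
  }
}
```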

It turns out Antigravity uses Electron's `safeStorage` API, which interfaces directly with the macOS Keychain. Without the app's private key (which is hardware-bound), that data is effectively random noise. I even tried using Frida to hook `safeStorage.decryptString()`, but macOS SIP (System Integrity Protection) and code signing shut that down immediately.
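
For context, this is the Electron API in question. It only works from inside the app's own signed main process, which is exactly the wall I hit:

```js
// Electron's safeStorage API (main-process only). The encryption key is
// held by the OS keychain on the app's behalf, so a script running outside
// the signed app can never make this decryptString call succeed.
const { safeStorage } = require('electron');

const blob = safeStorage.encryptString('hello'); // -> encrypted Buffer
const text = safeStorage.decryptString(blob);    // -> 'hello'
```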

I was stuck. I couldn't decrypt the local files, and I couldn't parse the database effectively.

## The Breakthrough: Living Off the Land

When you can't break the front door, look for the side entrance. I realized I wasn't the only one trying to read this state—the official extensions had to do it too.

I started poking around the source code of the `vscode-antigravity-cockpit` extension, specifically a file named `local_auth_importer.ts`. That's where I found the golden ticket.

The extension doesn't decrypt the local files. Instead, it reads a specific key from the SQLite database: `jetskiStateSync.agentManagerInitState`.

When I decoded field #6 of this Protobuf structure, I found an `OAuthTokenInfo` message. It contained the user's active `accessToken` and `refreshToken`.
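
Pulling that message out needs only a small variant of the walker above. The inner layout of `OAuthTokenInfo` isn't published, so in practice I printed its nested strings and eyeballed which were the tokens (`db` and `wireWalk` come from the earlier snippets):

```js
// Extract one length-delimited field (#6 = OAuthTokenInfo) from a buffer.
function getField(buf, wanted) {
  let i = 0;
  const readVarint = () => {
    let n = 0n, shift = 0n;
    for (;;) {
      const b = buf[i++];
      n |= BigInt(b & 0x7f) << shift;
      if ((b & 0x80) === 0) return n;
      shift += 7n;
    }
  };
  while (i < buf.length) {
    const tag = readVarint();
    const field = Number(tag >> 3n);
    const wireType = Number(tag & 7n);
    if (wireType === 2) {
      const len = Number(readVarint());
      const chunk = buf.subarray(i, i + len);
      i += len;
      if (field === wanted) return chunk;
    } else if (wireType === 0) readVarint();
    else if (wireType === 1) i += 8;
    else if (wireType === 5) i += 4;
  }
  return null;
}

const state = Buffer.from(
  db.prepare('SELECT value FROM ItemTable WHERE key = ?')
    .get('jetskiStateSync.agentManagerInitState').value,
  'base64'
);
const tokenInfo = getField(state, 6); // raw OAuthTokenInfo message
wireWalk(tokenInfo);                  // prints the accessToken / refreshToken strings
```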

## Shifting Strategy: Don't Crack It, Join It

This changed everything. I didn't need to reverse-engineer the local storage encryption; I just needed to impersonate the IDE.

By "piggybacking" on this existing auth mechanism, I could extract a valid OAuth token directly from the local state. But I still needed the endpoints.

Instead of guessing, I opened the Developer Tools inside Antigravity itself (it is Electron, after all). With network tracing enabled in the Chrome DevTools, I triggered an export manually and caught the request in the act.

I saw the exact call to `exa.language_server_pb.LanguageServerService/ConvertTrajectoryToMarkdown`.

It was perfect. By sending a gRPC-over-HTTP request to this endpoint using the stolen token, the server—which does have access to the unencrypted history—returned a perfectly formatted Markdown document of my entire coding session.
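
Replaying it from Node is then a single authenticated POST. Only the service path is known to be real; the host, request body, and response field below are placeholders you'd lift from your own DevTools capture:

```js
// Replay the captured call with the token lifted from the SQLite state.
// HOST and the request/response field names are placeholders — copy the
// exact values from the DevTools network capture.
const HOST = 'https://<host-from-devtools-capture>';

async function exportTrajectory(accessToken, trajectoryId) {
  const res = await fetch(
    `${HOST}/exa.language_server_pb.LanguageServerService/ConvertTrajectoryToMarkdown`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json', // gRPC-over-HTTP, JSON encoding
      },
      body: JSON.stringify({ trajectoryId }), // illustrative field name
    }
  );
  const data = await res.json();
  return data.markdown; // illustrative response shape
}
```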

## The Architecture: The Blog-Agent

Once I had the data extraction solved, building the rest of the "blog-agent" was straightforward. I built a Node.js stack to automate the pipeline:

- **Backend:** An Express server handles the routing, session imports, and post generation.
- **Frontend:** A clean EJS interface to list sessions, view summaries, and "publish" them to the filesystem.
- **Storage:** A local SQLite database (`data/sessions.sqlite`) acts as a cache. (I learned my lesson: always cache your LLM inputs.)
- **The Brain:** I use the OpenAI SDK (pointing to a LiteLLM proxy) to interface with gemini-3-flash. I wrote a map-reduce style prompt that first extracts technical decisions from the raw conversation log, then synthesizes them into a narrative (sketched after this list).
- **Persistence:** The final posts are saved with YAML front matter into a `generated_posts/` directory.
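
The generation step itself is short. Here is a sketch of the map-reduce pass, with the prompts abbreviated; the model and proxy setup are as described above, and the environment-variable names are mine:

```js
// Map-reduce post generation: extract decisions per chunk, then synthesize.
const OpenAI = require('openai');

const client = new OpenAI({
  baseURL: process.env.LITELLM_BASE_URL, // the LiteLLM proxy
  apiKey: process.env.LITELLM_API_KEY,
});

async function generatePost(chunks) {
  // Map: pull the technical decisions out of each slice of the raw log.
  const notes = [];
  for (const chunk of chunks) {
    const r = await client.chat.completions.create({
      model: 'gemini-3-flash',
      messages: [
        { role: 'system', content: 'List the key technical decisions in this coding-session excerpt.' },
        { role: 'user', content: chunk },
      ],
    });
    notes.push(r.choices[0].message.content);
  }

  // Reduce: turn the accumulated notes into one narrative draft.
  const r = await client.chat.completions.create({
    model: 'gemini-3-flash',
    messages: [
      { role: 'system', content: 'Synthesize these notes into a technical blog post.' },
      { role: 'user', content: notes.join('\n\n') },
    ],
  });
  return r.choices[0].message.content;
}
```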

## Key Insights

- **Don't Fight the OS:** Trying to break macOS Keychain/SIP encryption is a losing battle for a weekend project.
- **Follow the Tokens:** Applications often store auth tokens in less-secure places (like plain SQLite or weaker encryption) than the user content itself.
- **Extensions are Open Books:** If an app has extensions, their source code is often the best documentation for the internal API.

In a satisfyingly self-referential loop, this very article was generated by the blog-agent itself, analyzing the "vibe coding" session where I built it.

## References

- `server.js`: The Express server and API implementation.
- `services/antigravity.js`: The client for the Antigravity gRPC-over-HTTP API.
- `vscode-antigravity-cockpit`: The extension that leaked the auth logic.