<!doctype html><html lang=en><head><title>How I Got Open WebUI Talking to OpenAI Web Search · Eric X. Liu's Personal Page</title><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=color-scheme content="light dark"><meta http-equiv=Content-Security-Policy content="upgrade-insecure-requests; block-all-mixed-content; default-src 'self'; child-src 'self'; font-src 'self' https://fonts.gstatic.com https://cdn.jsdelivr.net/; form-action 'self'; frame-src 'self' https://www.youtube.com https://disqus.com; img-src 'self' https://referrer.disqus.com https://c.disquscdn.com https://*.disqus.com; object-src 'none'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com/ https://cdn.jsdelivr.net/; script-src 'self' 'unsafe-inline' https://www.google-analytics.com https://cdn.jsdelivr.net/ https://pagead2.googlesyndication.com https://static.cloudflareinsights.com https://unpkg.com https://ericxliu-me.disqus.com https://disqus.com https://*.disqus.com https://*.disquscdn.com https://unpkg.com; connect-src 'self' https://www.google-analytics.com https://pagead2.googlesyndication.com https://cloudflareinsights.com ws://localhost:1313 ws://localhost:* wss://localhost:* https://links.services.disqus.com https://*.disqus.com;"><meta name=author content="Eric X. Liu"><meta name=description content="OpenAI promised native web search in GPT5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue #13042 tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT5 traffic through the Responses API and mask every web_search_call before the UI ever sees it.
This post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself."><meta name=keywords content="software engineer,performance engineering,Google engineer,tech blog,software development,performance optimization,Eric Liu,engineering blog,mountain biking,Jeep enthusiast,overlanding,camping,outdoor adventures"><meta name=twitter:card content="summary"><meta name=twitter:title content="How I Got Open WebUI Talking to OpenAI Web Search"><meta name=twitter:description content="OpenAI promised native web search in GPT5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue #13042 tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT5 traffic through the Responses API and mask every web_search_call before the UI ever sees it.
This post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself."><meta property="og:url" content="https://ericxliu.me/posts/open-webui-openai-websearch/"><meta property="og:site_name" content="Eric X. Liu's Personal Page"><meta property="og:title" content="How I Got Open WebUI Talking to OpenAI Web Search"><meta property="og:description" content="OpenAI promised native web search in GPT5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue #13042 tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT5 traffic through the Responses API and mask every web_search_call before the UI ever sees it.
This post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself."><meta property="og:locale" content="en"><meta property="og:type" content="article"><meta property="article:section" content="posts"><meta property="article:published_time" content="2025-12-29T00:00:00+00:00"><meta property="article:modified_time" content="2025-12-29T07:15:58+00:00"><link rel=preload href=/fonts/fa-solid-900.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-brands-400.woff2 as=font type=font/woff2 crossorigin><link rel=canonical href=https://ericxliu.me/posts/open-webui-openai-websearch/><link rel=preload href=/fonts/fa-brands-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-regular-400.woff2 as=font type=font/woff2 crossorigin><link rel=preload href=/fonts/fa-solid-900.woff2 as=font type=font/woff2 crossorigin><link rel=stylesheet href=/css/coder.min.4b392a85107b91dbdabc528edf014a6ab1a30cd44cafcd5325c8efe796794fca.css integrity="sha256-SzkqhRB7kdvavFKO3wFKarGjDNRMr81TJcjv55Z5T8o=" crossorigin=anonymous media=screen><link rel=stylesheet href=/css/coder-dark.min.a00e6364bacbc8266ad1cc81230774a1397198f8cfb7bcba29b7d6fcb54ce57f.css integrity="sha256-oA5jZLrLyCZq0cyBIwd0oTlxmPjPt7y6KbfW/LVM5X8=" crossorigin=anonymous media=screen><link rel=icon type=image/svg+xml href=/images/favicon.svg sizes=any><link rel=icon type=image/png href=/images/favicon-32x32.png sizes=32x32><link rel=icon type=image/png href=/images/favicon-16x16.png sizes=16x16><link rel=apple-touch-icon href=/images/apple-touch-icon.png><link rel=apple-touch-icon sizes=180x180 href=/images/apple-touch-icon.png><link rel=manifest href=/site.webmanifest><link rel=mask-icon href=/images/safari-pinned-tab.svg color=#5bbad5><script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-3972604619956476" crossorigin=anonymous></script><script type=application/ld+json>{"@context":"http://schema.org","@type":"Person","name":"Eric X. Liu","url":"https:\/\/ericxliu.me\/","description":"Software \u0026 Performance Engineer at Google","sameAs":["https:\/\/www.linkedin.com\/in\/eric-x-liu-46648b93\/","https:\/\/git.ericxliu.me\/eric"]}</script><script type=application/ld+json>{"@context":"http://schema.org","@type":"BlogPosting","headline":"How I Got Open WebUI Talking to OpenAI Web Search","genre":"Blog","wordcount":"1087","url":"https:\/\/ericxliu.me\/posts\/open-webui-openai-websearch\/","datePublished":"2025-12-29T00:00:00\u002b00:00","dateModified":"2025-12-29T07:15:58\u002b00:00","description":"\u003cp\u003eOpenAI promised native web search in GPT5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue \u003ca href=\u0022https:\/\/github.com\/BerriAI\/litellm\/issues\/13042\u0022 class=\u0022external-link\u0022 target=\u0022_blank\u0022 rel=\u0022noopener\u0022\u003e#13042\u003c\/a\u003e tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT5 traffic through the Responses API and mask every \u003ccode\u003eweb_search_call\u003c\/code\u003e before the UI ever sees it.\u003c\/p\u003e\n\u003cp\u003eThis post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself.\u003c\/p\u003e","author":{"@type":"Person","name":"Eric X. 
Liu"}}</script></head><body class="preload-transitions colorscheme-auto"><div class=float-container><a id=dark-mode-toggle class=colorscheme-toggle><i class="fa-solid fa-adjust fa-fw" aria-hidden=true></i></a></div><main class=wrapper><nav class=navigation><section class=container><a class=navigation-title href=https://ericxliu.me/>Eric X. Liu's Personal Page
</a><input type=checkbox id=menu-toggle>
<label class="menu-button float-right" for=menu-toggle><i class="fa-solid fa-bars fa-fw" aria-hidden=true></i></label><ul class=navigation-list><li class=navigation-item><a class=navigation-link href=/posts/>Posts</a></li><li class=navigation-item><a class=navigation-link href=https://chat.ericxliu.me>Chat</a></li><li class=navigation-item><a class=navigation-link href=https://git.ericxliu.me/user/oauth2/Authenitk>Git</a></li><li class=navigation-item><a class=navigation-link href=https://coder.ericxliu.me/api/v2/users/oidc/callback>Coder</a></li><li class=navigation-item><a class=navigation-link href=/about/>About</a></li><li class=navigation-item><a class=navigation-link href=/>|</a></li><li class=navigation-item><a class=navigation-link href=https://sso.ericxliu.me>Sign in</a></li></ul></section></nav><div class=content><section class="container post"><article><header><div class=post-title><h1 class=title><a class=title-link href=https://ericxliu.me/posts/open-webui-openai-websearch/>How I Got Open WebUI Talking to OpenAI Web Search</a></h1></div><div class=post-meta><div class=date><span class=posted-on><i class="fa-solid fa-calendar" aria-hidden=true></i>
<time datetime=2025-12-29T00:00:00Z>December 29, 2025
</time></span><span class=reading-time><i class="fa-solid fa-clock" aria-hidden=true></i>
6-minute read</span></div></div></header><div class=post-content><p>OpenAI promised native web search in GPT-5, but LiteLLM proxy deployments (and by extension Open WebUI) still choke on it—issue <a href=https://github.com/BerriAI/litellm/issues/13042 class=external-link target=_blank rel=noopener>#13042</a> tracks the fallout. I needed grounded answers inside Open WebUI anyway, so I built a workaround: route GPT-5 traffic through the Responses API and mask every <code>web_search_call</code> before the UI ever sees it.</p><p>This post documents the final setup, the hotfix script that keeps LiteLLM honest, and the tests that prove Open WebUI now streams cited answers without trying to execute the tool itself.</p><h2 id=why-open-webui-broke>Why Open WebUI Broke
<a class=heading-link href=#why-open-webui-broke><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><ol><li><strong>Wrong API surface.</strong> <code>/v1/chat/completions</code> still rejects <code>type: "web_search"</code> with <code>Invalid value: 'web_search'. Supported values are: 'function' and 'custom'.</code> (A minimal repro follows this list.)</li><li><strong>LiteLLM tooling gap.</strong> The OpenAI TypedDicts in <code>litellm/types/llms/openai.py</code> only allow <code>Literal["function"]</code>. Even if the backend call succeeded, streaming would crash when it saw a new tool type.</li><li><strong>Open WebUI assumptions.</strong> The UI eagerly parses every tool delta, so when LiteLLM streamed the raw <code>web_search_call</code> chunk, the UI tried to execute it, failed to parse the arguments, and aborted the chat.</li></ol><p>Fixing all three required touching both the proxy configuration and the LiteLLM transformation path.</p>
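<p>Failure (1) is easy to reproduce straight against OpenAI, before the proxy is even involved. A minimal sketch, assuming the <code>requests</code> library; the key is a placeholder, and the exact error body may differ slightly:</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python># Repro of failure (1): Chat Completions rejects the web_search tool type.
# The API key is a placeholder; swap in your own to run this.
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer &lt;OPENAI_API_KEY&gt;"},
    json={
        "model": "gpt-5.2",
        "messages": [{"role": "user", "content": "What changed in the news today?"}],
        "tools": [{"type": "web_search"}],
    },
    timeout=60,
)
print(resp.status_code)                 # expect 400
print(resp.json()["error"]["message"])
# Invalid value: 'web_search'. Supported values are: 'function' and 'custom'.</code></pre></div>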
<h2 id=step-1--route-gpt5-through-the-responses-api>Step 1: Route GPT-5 Through the Responses API
<a class=heading-link href=#step-1--route-gpt5-through-the-responses-api><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>LiteLLM&rsquo;s Responses bridge activates whenever the backend model name starts with <code>openai/responses/</code>. I added a dedicated alias, <code>gpt-5.2-search</code>, that hardcodes the Responses API plus web search metadata. Existing models (reasoning, embeddings, TTS) stay untouched.</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-yaml data-lang=yaml><span style=display:flex><span><span style=color:#8b949e;font-style:italic># proxy-config.yaml (sanitized)</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#7ee787>model_list</span>:<span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span>- <span style=color:#7ee787>model_name</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>gpt-5.2-search</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>litellm_params</span>:<span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>model</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>openai/responses/openai/gpt-5.2</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>api_key</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>&lt;OPENAI_API_KEY&gt;</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>reasoning_effort</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>high</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>merge_reasoning_content_in_choices</span>:<span style=color:#6e7681> </span><span style=color:#79c0ff>true</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>tools</span>:<span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span>- <span style=color:#7ee787>type</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>web_search</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>user_location</span>:<span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>type</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>approximate</span><span style=color:#6e7681>
</span></span></span><span style=display:flex><span><span style=color:#6e7681> </span><span style=color:#7ee787>country</span>:<span style=color:#6e7681> </span><span style=color:#a5d6ff>US</span><span style=color:#6e7681>
</span></span></span></code></pre></div><p>Any client (Open WebUI included) can now request <code>model: "gpt-5.2-search"</code> over the standard <code>/v1/chat/completions</code> endpoint, and LiteLLM handles the Responses API hop transparently.</p><h2 id=step-2--mask-web_search_call-chunks-inside-litellm>Step 2 Mask <code>web_search_call</code> Chunks Inside LiteLLM
<h2 id=step-2--mask-web_search_call-chunks-inside-litellm>Step 2: Mask <code>web_search_call</code> Chunks Inside LiteLLM
<a class=heading-link href=#step-2--mask-web_search_call-chunks-inside-litellm><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>Even with the right API, LiteLLM still needs to stream deltas Open WebUI can digest. My <a href=https://ericxliu.me/hotfix.py class=external-link target=_blank rel=noopener>hotfix.py</a> script copies the LiteLLM source into <code>/tmp/patch/litellm</code>, then rewrites two files. This script runs as part of the Helm release&rsquo;s init hook so I can inject fixes directly into the container filesystem at pod start. That saves me from rebuilding and pushing new images every time LiteLLM upstream changes (or refuses a patch), which is critical while waiting for a fix for issue #13042 to land. I&rsquo;ll try to upstream the fix, but this is admittedly hacky, so timelines are uncertain.</p><ol><li><strong><code>openai.py</code> TypedDicts</strong>: extend the tool chunk definitions to accept <code>Literal["web_search"]</code>.</li><li><strong><code>litellm_responses_transformation/transformation.py</code></strong>: intercept every streaming item and short-circuit anything with <code>type == "web_search_call"</code>, returning an empty assistant delta instead of a tool call.</li></ol><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python><span style=display:flex><span><span style=color:#8b949e;font-style:italic># Excerpt from hotfix.py</span>
</span></span><span style=display:flex><span>tool_call_chunk_original <span style=color:#ff7b72;font-weight:700>=</span> (
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>&#39;class ChatCompletionToolCallChunk(TypedDict): # result of /chat/completions call</span><span style=color:#79c0ff>\n</span><span style=color:#a5d6ff>&#39;</span>
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>&#39; id: Optional[str]</span><span style=color:#79c0ff>\n</span><span style=color:#a5d6ff>&#39;</span>
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>&#39; type: Literal[&#34;function&#34;]&#39;</span>
</span></span><span style=display:flex><span>)
</span></span><span style=display:flex><span>tool_call_chunk_patch <span style=color:#ff7b72;font-weight:700>=</span> tool_call_chunk_original<span style=color:#ff7b72;font-weight:700>.</span>replace(
</span></span><span style=display:flex><span> <span style=color:#a5d6ff>&#39;Literal[&#34;function&#34;]&#39;</span>, <span style=color:#a5d6ff>&#39;Literal[&#34;function&#34;, &#34;web_search&#34;]&#39;</span>
</span></span><span style=display:flex><span>)
</span></span><span style=display:flex><span><span style=color:#ff7b72;font-weight:700>...</span>
</span></span><span style=display:flex><span><span style=color:#ff7b72>if</span> tool_call_chunk_original <span style=color:#ff7b72;font-weight:700>in</span> content:
</span></span><span style=display:flex><span> content <span style=color:#ff7b72;font-weight:700>=</span> content<span style=color:#ff7b72;font-weight:700>.</span>replace(tool_call_chunk_original, tool_call_chunk_patch, <span style=color:#a5d6ff>1</span>)
</span></span></code></pre></div><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python><span style=display:flex><span>added_block <span style=color:#ff7b72;font-weight:700>=</span> <span style=color:#a5d6ff>&#34;&#34;&#34; elif output_item.get(&#34;type&#34;) == &#34;web_search_call&#34;:
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> # Mask the call: Open WebUI should never see tool metadata
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> action_payload = output_item.get(&#34;action&#34;)
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> verbose_logger.debug(
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;Chat provider: masking web_search_call (added) call_id=</span><span style=color:#a5d6ff>%s</span><span style=color:#a5d6ff> action=</span><span style=color:#a5d6ff>%s</span><span style=color:#a5d6ff>&#34;,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> output_item.get(&#34;call_id&#34;),
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> action_payload,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> )
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> return ModelResponseStream(
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> choices=[
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> StreamingChoices(
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> index=0,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> delta=Delta(content=&#34;&#34;),
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> finish_reason=None,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> )
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> ]
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> )
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff>&#34;&#34;&#34;</span>
</span></span></code></pre></div><p>These patches ensure LiteLLM never emits a <code>tool_calls</code> delta for <code>web_search</code>. Open WebUI only receives assistant text chunks, so it happily renders the model response and the inline citations the Responses API already provides.</p>
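<p>Because the patch works by string-replacing LiteLLM source, the behavior it installs is easy to miss. Distilled into a self-contained sketch, with plain dicts standing in for LiteLLM&rsquo;s <code>ModelResponseStream</code>/<code>StreamingChoices</code>/<code>Delta</code> types, the masking rule is tiny:</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python># Simplified view of the patched branch: swallow web_search_call items and
# emit an empty assistant text delta so the client only ever sees text chunks.
def mask_web_search_call(output_item: dict) -> dict | None:
    """Collapse web_search_call items to an empty assistant delta."""
    if output_item.get("type") != "web_search_call":
        return None  # not a search event; normal transformation applies
    return {"choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": None}]}

# Even an in-progress search event becomes invisible to the client:
masked = mask_web_search_call(
    {"type": "web_search_call", "call_id": "ws_1", "status": "in_progress"}
)
assert masked == {"choices": [{"index": 0, "delta": {"content": ""}, "finish_reason": None}]}</code></pre></div>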
<h2 id=step-3--prove-it-with-curl-and-open-webui>Step 3: Prove It with cURL (and Open WebUI)
<a class=heading-link href=#step-3--prove-it-with-curl-and-open-webui><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><p>I keep a simple smoke test (<code>litellm_smoke_test.sh</code>) that hits the public ingress with and without streaming. Only the secrets are placeholders here; the structure is otherwise verbatim.</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-bash data-lang=bash><span style=display:flex><span><span style=color:#8b949e;font-weight:700;font-style:italic>#!/usr/bin/env bash
</span></span></span><span style=display:flex><span>set -euo pipefail
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>echo <span style=color:#a5d6ff>&#34;Testing non-streaming...&#34;</span>
</span></span><span style=display:flex><span>curl <span style=color:#a5d6ff>&#34;https://api.ericxliu.me/v1/chat/completions&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -H <span style=color:#a5d6ff>&#34;Authorization: Bearer &lt;LITELLM_MASTER_KEY&gt;&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -H <span style=color:#a5d6ff>&#34;Content-Type: application/json&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -d <span style=color:#a5d6ff>&#39;{
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;model&#34;: &#34;gpt-5.2-search&#34;,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;messages&#34;: [{&#34;role&#34;: &#34;user&#34;, &#34;content&#34;: &#34;Find the sunset time in Tokyo today.&#34;}]
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> }&#39;</span>
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span>echo -e <span style=color:#a5d6ff>&#34;\n\nTesting streaming...&#34;</span>
</span></span><span style=display:flex><span>curl <span style=color:#a5d6ff>&#34;https://api.ericxliu.me/v1/chat/completions&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -H <span style=color:#a5d6ff>&#34;Authorization: Bearer &lt;LITELLM_MASTER_KEY&gt;&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -H <span style=color:#a5d6ff>&#34;Content-Type: application/json&#34;</span> <span style=color:#79c0ff>\
</span></span></span><span style=display:flex><span> -d <span style=color:#a5d6ff>&#39;{
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;model&#34;: &#34;gpt-5.2-search&#34;,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;stream&#34;: true,
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> &#34;messages&#34;: [{&#34;role&#34;: &#34;user&#34;, &#34;content&#34;: &#34;What is the weather in NYC right now?&#34;}]
</span></span></span><span style=display:flex><span><span style=color:#a5d6ff> }&#39;</span>
</span></span></code></pre></div><p>Each request now returns grounded answers with citations (<code>url_citation</code> annotations) via Open WebUI, and the SSE feed never stalls because the UI isn&rsquo;t asked to interpret tool calls.</p>
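<p>The same check can be automated. A sketch using the official <code>openai</code> Python client pointed at the proxy (the key is a placeholder); the assertion fails if a tool call ever leaks through the mask:</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python># Streaming check: the proxy should emit only text deltas for gpt-5.2-search,
# never tool_calls. The key is a placeholder; point base_url at your proxy.
from openai import OpenAI

client = OpenAI(base_url="https://api.ericxliu.me/v1", api_key="&lt;LITELLM_MASTER_KEY&gt;")

stream = client.chat.completions.create(
    model="gpt-5.2-search",
    messages=[{"role": "user", "content": "What is the weather in NYC right now?"}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:  # e.g. a trailing usage-only chunk
        continue
    delta = chunk.choices[0].delta
    assert delta.tool_calls is None, "web_search_call leaked through the mask"
    print(delta.content or "", end="", flush=True)</code></pre></div>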
<h2 id=lessons--pitfalls>Lessons & Pitfalls
<a class=heading-link href=#lessons--pitfalls><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><ul><li><strong>The Responses API is non-negotiable (and syntax-sensitive).</strong> <code>/v1/chat/completions</code> still rejects <code>web_search</code>. Always test against <code>/v1/responses</code> directly before wiring LiteLLM into the loop. Furthermore, the syntax for <code>reasoning</code> is different: while Chat Completions uses the top-level <code>reasoning_effort</code> parameter, the Responses API requires a nested object: <code>"reasoning": {"effort": "medium"}</code>.</li><li><strong>The Native Model Trap.</strong> Models like <code>gpt-5-search-api</code> exist and support web search via standard Chat Completions, but they are often less flexible—for instance, rejecting <code>reasoning_effort</code> entirely. Routing a standard model through LiteLLM&rsquo;s Responses bridge offers more control over formatting and fallbacks.</li><li><strong>Magic strings control routing.</strong> LiteLLM has hardcoded logic (deep in <code>main.py</code>) that only triggers the Responses-to-Chat bridge if the backend model name starts with <code>openai/responses/</code>. Without that specific prefix, LiteLLM bypasses its internal transformation layer entirely, leading to cryptic 404s or &ldquo;model not found&rdquo; errors.</li><li><strong>Synthesized Sovereignty: The Call ID Crisis.</strong> Open WebUI is a &ldquo;well-behaved&rdquo; OpenAI client, yet it often omits the <code>id</code> field in <code>tool_calls</code> when sending assistant messages back to the server. LiteLLM&rsquo;s Responses bridge initially exploded with a <code>KeyError: 'id'</code> because it assumed an ID would always be present. The fix: synthesizing predictable IDs like <code>auto_tool_call_N</code> on the fly to satisfy the server-side schema (see the sketch after this list).</li><li><strong>The Argument Delta Void.</strong> In streaming mode, the Responses API sometimes skips sending <code>response.function_call_arguments.delta</code> entirely if the query is simple. If the proxy only waits for deltas, the client receives an empty <code>{}</code> for tool arguments. The solution is to fall back and synthesize the <code>arguments</code> string from the <code>action</code> payload (e.g., <code>output_item['action']['query']</code>) when deltas are missing; this is also sketched below.</li><li><strong>Streaming State Machines are Fragile.</strong> Open WebUI is highly sensitive to the exact state of a tool call. If it sees a <code>web_search_call</code> with <code>status: "in_progress"</code>, its internal parser chokes, assuming it&rsquo;s an uncompleted &ldquo;function&rdquo; call. These intermediate state chunks must be intercepted and handled before they reach the UI.</li><li><strong>Defensive Masking is the Final Boss.</strong> To stop Open WebUI from entering an infinite client-side loop (thinking it needs to execute a tool it doesn&rsquo;t have), LiteLLM must &ldquo;mask&rdquo; the <code>web_search_call</code> chunks. By emitting empty content deltas instead of tool chunks, we hide the server-side search mechanics from the UI, allowing it to stay focused on the final answer.</li></ul><p>With those guardrails in place, GPT-5&rsquo;s native web search works end-to-end inside Open WebUI, complete with citations, without waiting for LiteLLM upstream fixes.</p>
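<p>The two synthesis fixes from the list above, the predictable call IDs and the argument fallback, fit in a few lines each. A sketch with plain dicts rather than LiteLLM&rsquo;s internal types; the function names are mine, not the patch&rsquo;s:</p><div class=highlight><pre tabindex=0 style=color:#e6edf3;background-color:#0d1117;-moz-tab-size:4;-o-tab-size:4;tab-size:4><code class=language-python data-lang=python>import json

# Fallback 1: Open WebUI may omit "id" on tool_calls it echoes back; the
# bridge synthesizes predictable IDs so its server-side schema checks pass.
def ensure_tool_call_ids(tool_calls: list[dict]) -> list[dict]:
    for n, call in enumerate(tool_calls):
        call.setdefault("id", f"auto_tool_call_{n}")
    return tool_calls

# Fallback 2: when no function_call_arguments deltas arrive, rebuild the
# arguments string from the output item's action payload.
def arguments_from_action(output_item: dict, streamed_args: str) -> str:
    if streamed_args:
        return streamed_args
    action = output_item.get("action") or {}
    return json.dumps({"query": action.get("query", "")})

assert ensure_tool_call_ids([{"type": "function"}])[0]["id"] == "auto_tool_call_0"
assert arguments_from_action({"action": {"query": "weather nyc"}}, "") == '{"query": "weather nyc"}'</code></pre></div>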
<h2 id=references>References
<a class=heading-link href=#references><i class="fa-solid fa-link" aria-hidden=true title="Link to heading"></i>
<span class=sr-only>Link to heading</span></a></h2><ul><li><a href=https://docs.litellm.ai/docs/proxy/openai_responses class=external-link target=_blank rel=noopener>LiteLLM Documentation - OpenAI Responses API Bridge</a></li><li><a href=https://platform.openai.com/docs/api-reference/responses class=external-link target=_blank rel=noopener>OpenAI Documentation - Responses API</a></li><li><a href=https://github.com/BerriAI/litellm/issues/13042 class=external-link target=_blank rel=noopener>LiteLLM GitHub Issue #13042</a></li><li><a href=https://docs.openwebui.com/ class=external-link target=_blank rel=noopener>Open WebUI Documentation</a></li><li><a href=https://ericxliu.me/hotfix.py class=external-link target=_blank rel=noopener>The hotfix.py Script</a></li></ul></div><footer><div id=disqus_thread></div><script>window.disqus_config=function(){},function(){if(["localhost","127.0.0.1"].indexOf(window.location.hostname)!=-1){document.getElementById("disqus_thread").innerHTML="Disqus comments not available by default when the website is previewed locally.";return}var t=document,e=t.createElement("script");e.async=!0,e.src="//ericxliu-me.disqus.com/embed.js",e.setAttribute("data-timestamp",+new Date),(t.head||t.body).appendChild(e)}(),document.addEventListener("themeChanged",function(){document.readyState=="complete"&&DISQUS.reset({reload:!0,config:disqus_config})})</script></footer></article><link rel=stylesheet href=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/katex.min.css integrity=sha384-vKruj+a13U8yHIkAyGgK1J3ArTLzrFGBbBc0tDp4ad/EyewESeXE/Iv67Aj8gKZ0 crossorigin=anonymous><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/katex.min.js integrity=sha384-PwRUT/YqbnEjkZO0zZxNqcxACrXe+j766U2amXcgMg5457rve2Y7I6ZJSm2A0mS4 crossorigin=anonymous></script><script defer src=https://cdn.jsdelivr.net/npm/katex@0.16.4/dist/contrib/auto-render.min.js integrity=sha384-+VBxd3r6XgURycqtZ117nYw44OOcIax56Z4dCRWbxyPt0Koah1uHoK0o4+/RRE05 crossorigin=anonymous onload='renderMathInElement(document.body,{delimiters:[{left:"$$",right:"$$",display:!0},{left:"$",right:"$",display:!1},{left:"\\(",right:"\\)",display:!1},{left:"\\[",right:"\\]",display:!0}]})'></script></section></div><footer class=footer><section class=container>©
2016 -
2025
Eric X. Liu
<a href="https://git.ericxliu.me/eric/ericxliu-me/commit/f1178d3">[f1178d3]</a></section></footer></main><script src=/js/coder.min.6ae284be93d2d19dad1f02b0039508d9aab3180a12a06dcc71b0b0ef7825a317.js integrity="sha256-auKEvpPS0Z2tHwKwA5UI2aqzGAoSoG3McbCw73gloxc="></script><script defer src=https://static.cloudflareinsights.com/beacon.min.js data-cf-beacon='{"token": "987638e636ce4dbb932d038af74c17d1"}'></script></body></html>