llm-openrouter 0.6

Simon Willison released llm-openrouter 0.6 with a new refresh command that lets users update the cached list of available models on demand instead of waiting for cache expiration, enabling faster access to newly listed models like Kimi 2.6.
Modelwire context
Analyst take

The refresh command is a small UX fix, but the underlying pressure it addresses is real: OpenRouter's model catalog is moving fast enough that a stale cache is now a meaningful friction point for developers trying to evaluate new arrivals like Kimi 2.6 the day they land.
This story is largely disconnected from the OpenAI-heavy coverage that dominated the week of April 15-16, including the Agents SDK update and the Codex expansions covered here. Those stories were about vertically integrated toolchains where OpenAI controls both the model and the developer surface. Willison's plugin points in the opposite direction: toward a routing layer that abstracts away which lab's model you're actually calling. As more capable models from non-OpenAI sources (Kimi 2.6 being one example) appear on OpenRouter, the value of that abstraction layer grows, and so does the importance of keeping the local model list current.
Watch whether llm-openrouter adds automatic refresh triggers tied to OpenRouter's API changelog or webhook events within the next two major releases. If it does, that would signal the catalog-freshness problem is chronic enough to warrant infrastructure rather than a manual command.
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting. How we write it.
Mentions: Simon Willison · llm-openrouter · OpenRouter · Kimi 2.6
Modelwire summarizes — we don’t republish. The full article lives on simonwillison.net. If you’re a publisher and want a different summarization policy for your work, see our takedown page.