The gpt-oss-20B endpoint is live on Lightning, and it’s not just fast, it’s efficient. While we sit in the top 3 for raw speed, we lead on end-to-end response time vs. price on Artificial Analysis.

✅ Excellent latency & TTFT tradeoff for an MoE model
✅ Best-in-class energy efficiency & cost per token
✅ Throughput (tokens/sec) competing with dense 32B-class baselines

And it’s not just this model: Lightning’s Model APIs give you access to top open- and closed-source models from Anthropic, OpenAI, Google, and more. Manage routing, memory, and benchmarking in one place, and switch models with a single line of code (see the sketch below).

Run our endpoints today, or deploy your own on Lightning’s stack.

Try it → https://lnkd.in/ewd-8Jhc
Benchmarks → https://lnkd.in/e93A3GZn
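
To make “switch models with a single line of code” concrete, here’s a minimal sketch assuming an OpenAI-compatible chat endpoint. The base URL, model identifiers, and environment variable are illustrative placeholders, not Lightning’s documented values.

# Minimal sketch: single-line model switching against an
# OpenAI-compatible endpoint. Base URL, model names, and the
# env var are placeholders for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-models-api.lightning.ai/v1",  # placeholder URL
    api_key=os.environ["LIGHTNING_API_KEY"],                # placeholder env var
)

MODEL = "gpt-oss-20b"  # swap this one line to route to a different model

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize MoE inference tradeoffs."}],
)
print(response.choices[0].message.content)

Changing MODEL is the only edit needed to point the same request at a different model; the rest of the call stays identical.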