Azure AI Foundry has partnered with Fireworks AI to provide ultra-fast inference for the latest open-weight models directly within the Foundry Open Models Catalog.
You can now deploy models such as DeepSeek V3.2, DeepSeek-R1, Kimi K2.5, MiniMax M2.5, and gpt-oss-120b directly from the catalog onto Fireworks' highly optimized serverless inference engine.
This gives enterprises access to the newest open-weight models with the same security, role-based access control (RBAC), and SLA guarantees as first-party Azure models.
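Once deployed, these serverless models are typically reachable through an OpenAI-style chat-completions endpoint. The sketch below is illustrative only: the endpoint URL, API version, and deployment name are placeholder assumptions, not values from this announcement, so substitute your own deployment details.

```python
import json
import os
import urllib.request

# Placeholder values -- replace with your own Foundry deployment details.
ENDPOINT = os.environ.get("FOUNDRY_ENDPOINT", "https://<your-resource>.services.ai.azure.com")
API_KEY = os.environ.get("FOUNDRY_API_KEY", "<your-api-key>")
MODEL = "deepseek-v3"  # hypothetical deployment name

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def call_model(prompt: str) -> str:
    """POST the payload to the deployment and return the reply text."""
    payload = build_chat_request(MODEL, prompt)
    req = urllib.request.Request(
        # Assumed path/api-version; check your deployment's endpoint docs.
        f"{ENDPOINT}/models/chat/completions?api-version=2024-05-01-preview",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and "FOUNDRY_ENDPOINT" in os.environ:
    # Only makes a network call when real credentials are configured.
    print(call_model("Summarize the benefits of serverless inference."))
```

Because the wire format follows the familiar chat-completions shape, existing OpenAI-compatible client code can usually be pointed at a Foundry serverless deployment with only the base URL, key, and model name changed.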