Imagine a future where software writes and runs itself — not in theory, but live in production. That’s exactly what AI startup Riza is building. The company just emerged from stealth with $2.7 million in funding to turn that vision into reality, allowing large language models (LLMs) to write and execute code autonomously in real time.
This early-stage round was led by Matrix Partners, with additional support from 43. The funding will help Riza grow its team, develop new tools for AI-powered code generation, and improve secure execution for untrusted or dynamically generated code.
Real-Time AI Coding Without the Risk
Riza isn’t just another code generator. It’s building what it calls “AI-first infrastructure” — a secure, developer-friendly way for LLMs to execute code safely using a sandboxed WebAssembly (WASM) environment. The goal? Let AI agents write and run their tools without risking the security of production environments.
At the heart of this approach is a simple but powerful idea: give developers and AI systems the ability to spin up isolated environments that can handle Python, JavaScript, and other languages without compromising host infrastructure. The platform shields core systems from untrusted code, dramatically reducing setup time and operational overhead while improving reliability and responsiveness.
This allows developers to skip manual reviews and complex infrastructure, freeing them to focus on building rather than babysitting code execution.
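To make the idea concrete, here is a rough sketch of what handing untrusted, LLM-generated code to an isolated execution service could look like. The endpoint, request fields, and response shape below are illustrative assumptions for this article, not Riza's documented API.

```python
# Hypothetical sketch: sending LLM-generated code to a sandboxed execution
# service instead of running it on the host. The URL, auth header, and
# response fields are placeholders, not a real product API.
import requests

llm_generated_code = """
import json
print(json.dumps({"total": sum(range(10))}))
"""

resp = requests.post(
    "https://sandbox.example.com/v1/execute",   # placeholder endpoint
    headers={"Authorization": "Bearer SANDBOX_API_KEY"},
    json={
        "language": "python",        # the sandbox could accept JavaScript, etc.
        "code": llm_generated_code,  # untrusted code never touches the host
        "timeout_ms": 5000,          # hard limit so runaway code cannot hang
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("stdout"))     # e.g. {"total": 45}
```

The key design point is the boundary: the application only ever exchanges source code and captured output with the sandbox, so nothing the model writes executes inside the production process.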
Born From a Weekend Hack, Backed by Industry Vets
Riza was founded by Andrew Benton and Kyle Gray — seasoned engineers who previously worked at Twilio, Stripe, and Retool. The startup’s origin story began with a casual Slack message from a former coworker struggling to safely run LLM-generated code.
That weekend, Benton and Gray built a working prototype: a secure, sandboxed WebAssembly runtime that could isolate untrusted code without sacrificing speed or flexibility. That prototype evolved into Riza's core infrastructure.
Now, their product has matured into a scalable platform used in development, CI pipelines, and production systems. And it’s seeing serious adoption: Riza-powered systems handled more than 850 million code execution requests in March 2025 alone.
Introducing “Just-in-Time Programming”
Riza is also pioneering a new development concept it calls Just-in-Time Programming. Instead of writing code ahead of time and waiting for human review, developers can let AI generate and run code on the fly. This shift unlocks faster workflows — but it also introduces serious security concerns.
That’s where Riza’s sandboxed runtime shines. It lets companies embrace Just-in-Time Programming without exposing critical infrastructure. One Riza customer, for example, uses an LLM to automatically write scripts that gather and analyze data from multiple sources. The AI then builds charts and visualizations directly into customer-facing reports — all safely executed in Riza’s secure environment.
This approach bridges the gap between the creative potential of generative AI and the real-world demands of production systems. It allows AI agents not only to suggest solutions but to implement them, instantly.
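In practice, the loop behind that kind of workflow is short: ask the model for a script, run the script somewhere isolated, and feed the captured output into the report. The sketch below illustrates the pattern with hypothetical helper functions. It runs the generated code in a separate interpreter process purely for illustration; a plain subprocess is not a security boundary, and a production setup would hand the code to a hardened sandbox such as a WASM runtime.

```python
# Minimal sketch of the "Just-in-Time Programming" loop described above.
# `generate_analysis_code` stands in for an LLM call; `run_isolated` stands in
# for a real sandboxed executor.
import json
import subprocess
import sys

def generate_analysis_code(task: str) -> str:
    """Stand-in for an LLM call that returns Python source for `task`."""
    return (
        "import json\n"
        "rows = [{'region': 'EU', 'revenue': 120}, {'region': 'US', 'revenue': 200}]\n"
        "summary = {r['region']: r['revenue'] for r in rows}\n"
        "print(json.dumps(summary))\n"
    )

def run_isolated(code: str, timeout_s: int = 5) -> str:
    """Run generated code in a separate interpreter and capture its stdout.

    Illustration only: a real deployment would submit the code to a hardened
    sandbox (for example a WASM runtime) rather than a local subprocess.
    """
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout_s, check=True,
    )
    return proc.stdout

if __name__ == "__main__":
    code = generate_analysis_code("summarize revenue by region")
    output = run_isolated(code)              # generated script never runs in-process
    print("chart data for the report:", json.loads(output))
```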
Building the Next Layer of AI Infrastructure
As generative AI becomes deeply embedded in software workflows, the need for intelligent, real-time code execution infrastructure is only growing. Investors are taking note.
Matrix partner Patrick Malatack, who led the round, believes this is a foundational shift: “We’re watching AI agents evolve into primary users of cloud infrastructure. But they come with their own needs — and today’s systems weren’t built for them. Riza’s team has created iconic developer tools before. Now they’re doing it again, building the compute layer for AI-first software.”
With its platform now generally available and customer interest growing, Riza is poised to become a key player in the next wave of developer tooling. Its real-time execution engine gives AI the power not just to write software — but to run it, safely and autonomously.