RunSybil Raises $40M to Automate Ethical Hacking
RunSybil just raised $40 million to build AI agents that hack your software before the bad guys do. The round was led by Khosla Ventures, with the Anthology Fund, S32, Conviction, and Elad Gil piling in — plus a murderer's row of angel investors including Jeff Dean, Nikesh Arora, and Amit Agarwal. The message is blunt: human penetration testers can't keep up, and autonomous AI agents are coming for their jobs.
The Thesis: Continuous Hacking, Zero Humans
Founded in 2023 by Ari Herbert-Voss — OpenAI's first security research hire — and Vlad Ionescu, who ran Meta's offensive security red team, RunSybil has built an AI agent called Sybil that performs continuous, autonomous penetration testing on live applications. Not static code analysis. Not pre-deployment scanning. This thing probes running software like an attacker would — discovering endpoints, chaining vulnerabilities, testing authentication boundaries, and finding paths to sensitive data.
That distinction matters enormously. Traditional application security tools operate upstream: they scan code, flag known patterns, and generate mountains of alerts that security teams then have to triage. RunSybil operates downstream, on production systems, doing what a human pen tester would do — except it never sleeps, never takes a vacation, and doesn't charge $500 an hour.
It's a black-box approach. Sybil doesn't need access to source code. It reasons like an adversary, dynamically exploring systems to find exploitable weaknesses that static analysis would miss entirely. If you've ever watched a skilled red teamer work, you know the magic isn't in finding individual bugs — it's in chaining seemingly minor issues into catastrophic attack paths. That's what RunSybil is automating.
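To make the chaining idea concrete, here's a minimal, purely illustrative sketch — not RunSybil's implementation, and the "app" here is a hard-coded dict rather than a live system. It shows the shape of the idea: a black-box probe discovers endpoints by following paths leaked in responses, then notices that two individually minor facts (an info leak, a missing auth check) compose into a critical finding.

```python
# Toy black-box probe over a simulated app. Real agents work over HTTP
# against running software; every name and endpoint here is made up.
SIMULATED_APP = {
    "/api/users/1":     {"auth_required": True,  "leaks": ["/internal/export"], "sensitive": False},
    "/internal/export": {"auth_required": False, "leaks": [],                   "sensitive": True},
}

def discover(app, seeds):
    """Explore the attack surface by following endpoints leaked in responses."""
    seen, frontier = set(), list(seeds)
    while frontier:
        ep = frontier.pop()
        if ep in seen or ep not in app:
            continue
        seen.add(ep)
        frontier.extend(app[ep]["leaks"])  # e.g. paths exposed in error messages
    return seen

def assess(app, endpoints):
    """Chain findings: a leak (low) that reveals an unauthenticated
    sensitive endpoint (critical) is the classic two-step attack path."""
    findings = []
    for ep in sorted(endpoints):
        meta = app[ep]
        if meta["sensitive"] and not meta["auth_required"]:
            findings.append(("critical", f"unauthenticated access to {ep}"))
        elif meta["leaks"]:
            findings.append(("low", f"{ep} discloses internal paths"))
    return findings

eps = discover(SIMULATED_APP, ["/api/users/1"])
report = assess(SIMULATED_APP, eps)
```

Neither finding is alarming on its own; the leak is noise until the discovery step proves it points at unauthenticated sensitive data. That composition is what static scanners, which look at one pattern at a time, tend to miss.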
Why This Round Is Significant
The investor list here reads like a deliberate signal. Khosla Ventures leading tells you the bet is on technical ambition — Vinod Khosla has publicly described RunSybil as being "on the edge" of the AI security frontier. The Anthology Fund — Menlo Ventures' $100 million initiative in partnership with Anthropic — participating signals that the people building frontier models believe autonomous offensive security is a legitimate, near-term application of agentic AI.
And then there's Jeff Dean. Google's chief scientist doesn't angel invest casually. His presence alongside leaders from OpenAI, Palo Alto Networks, Stripe, and Google suggests the industry's most technically sophisticated people believe RunSybil is onto something real.
A $40 million raise isn't seed-stage money; it's a statement that RunSybil has product-market fit and needs to scale. The company says the funding will go toward accelerating customer development, deepening engineering investment, expanding security research, and fueling go-to-market efforts.
The Client List Tells the Story
RunSybil's current customer base is revealing. On the startup side: Cursor, Notion, Turbopuffer, Baseten, and Thinking Machines Lab. These are AI-native companies shipping code at ferocious speed — exactly the kind of organizations that can't afford to wait weeks for a manual pen test engagement. On the enterprise side: major financial institutions and Fortune 500 companies. The company claims customers have found critical vulnerabilities that traditional methods missed entirely.
This dual traction is the strongest signal in the whole announcement. Startups buy RunSybil because they're moving too fast for legacy security. Enterprises buy it because the supply of elite human pen testers is pathetically small relative to the attack surface they need to cover. Both segments have the same core problem: security testing is too slow, too expensive, and too infrequent.
The Uncomfortable Question: Does This Replace Pen Testers?
Let's not dance around it. Yes, RunSybil is gunning for the work currently done by penetration testers, bug bounty hunters, and internal red teams. The company's own framing is explicit — Sybil provides a "skill set that is notoriously difficult to find." Translation: there aren't enough humans who can do this work, and AI agents can fill the gap faster and cheaper.
The cybersecurity industry has a well-documented talent shortage. There are roughly 3.5 million unfilled cybersecurity positions globally, and offensive security specialists represent the sharpest end of that scarcity. A top pen tester might cost $300–$600 per hour. A firm engagement for a mid-complexity application runs $20,000–$100,000 and happens maybe once or twice a year. Meanwhile, your engineering team is shipping code daily.
The math doesn't work. It never has. RunSybil's bet is that AI agents can provide continuous coverage at a fraction of the cost — not as a nice-to-have supplement, but as the primary offensive security layer.
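Spelling out that math with the article's own figures — engagement length and working days are assumptions, the rest comes from the numbers above:

```python
# Back-of-the-envelope coverage math. Assumed: a manual engagement
# covers ~2 working weeks; ~250 working days of continuous shipping/year.
engagements_per_year = 2        # "once or twice a year"
days_per_engagement = 10        # assumption: ~2 working weeks
shipping_days = 250             # assumption: working days per year

tested_days = engagements_per_year * days_per_engagement
coverage = tested_days / shipping_days

# Cost side, from the article's per-engagement range:
annual_cost_low = engagements_per_year * 20_000
annual_cost_high = engagements_per_year * 100_000

print(f"Days under active offensive testing: {tested_days}/{shipping_days} ({coverage:.0%})")
print(f"Annual spend: ${annual_cost_low:,}-${annual_cost_high:,}")
```

Under those assumptions, even a generous twice-a-year schedule leaves the application untested on more than nine out of ten shipping days, at $40,000–$200,000 a year. That gap is the product.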
"AI is reshaping how companies operate and develop software. Continuous, automated offensive testing isn't optional anymore — it's the baseline."
That's the philosophical core of RunSybil's pitch, and it's hard to argue with. The question isn't whether AI will automate pen testing. It's whether RunSybil's agent is good enough to do it reliably on complex, real-world systems. Early customer results suggest it is.
The Founder Advantage
Co-founder pedigree matters in cybersecurity more than in almost any other domain. You can't fake offensive security expertise: either you've broken real systems or you haven't. Herbert-Voss was OpenAI's first security research hire. Ionescu ran offensive operations at Meta, one of the largest attack surfaces on the planet. Together, they've built a team that understands both the AI and the hacking, a combination that is vanishingly rare.
This isn't another wrapper on GPT-4 with a security prompt. RunSybil's agents are purpose-built, with an understanding of web application architecture, authentication flows, and vulnerability chaining that requires deep, specialized research. The Anthology Fund's involvement — remember, that's the early-stage vehicle Menlo Ventures runs in partnership with Anthropic — suggests the underlying AI work is genuinely differentiated.
What Comes Next
With $40 million in the bank and a client roster spanning AI startups to Fortune 500 banks, RunSybil is positioned to define the autonomous offensive security category. The company is actively hiring across engineering, research, and customer-facing roles, which signals aggressive expansion.
The broader implication is clear: the cybersecurity industry is about to get the same agentic AI disruption that's already reshaping software development, customer support, and data analysis. Manual, periodic pen testing is a relic of a slower era. In a world where code ships continuously, security testing needs to be continuous too.
RunSybil isn't just building a better scanner. It's building an autonomous attacker that works for the defense. That's a fundamentally different product, and $40 million says the smartest money in tech agrees.
Want to stay ahead of AI's transformation of cybersecurity? Subscribe to the Ultrathink newsletter for sharp analysis of the funding rounds, product launches, and technical shifts reshaping the industry.
This article was ultrathought.