The framing that has dominated security operations conversations for the better part of a decade (build versus buy, internal SOC versus managed service) is no longer sufficient. That is not an observation vendors typically make, because dissolving the frame also dissolves the sale. It takes an independent analyst to say it plainly.
Oliver Rochford, Lead Analyst at Cyberfuturists and a former Research Director at Gartner, Securonix, and Tenable, has published a whitepaper that does exactly that. Security Operations at the Nexus of AI, developed in collaboration with Daylight Security, argues that AI has made the tool-versus-service distinction irrelevant as a primary decision criterion, and that organizations still using it as their main frame are asking the wrong question.
The right question, in Rochford’s framing, concerns operating models and governance. Not what technology an organization uses, but who acts on the decisions that technology produces, who owns those decisions when they are wrong, and whether the organization can even see the decisions being made on its behalf.
Grounded in Practitioner Reality
What makes the report distinctive is its grounding in practitioner reality. Rochford draws on more than 4,000 career engagements with security leaders, SOC operators, and vendors. The appendix includes case studies built from interviews with two practitioners who had deployed AI SOC platforms in production, not pilots, a detail the report notes is itself diagnostic, given that the Gartner Hype Cycle for Security Operations estimates AI SOC adoption at one to five percent of organizations.
Both practitioners described meaningful capability gains. A CISO at a European mobility company with a dedicated SecOps team described AI enabling his team to correlate HR data, LinkedIn activity, and authentication logs across franchise locations to identify systematic credential sharing between branches. It was the kind of investigation his team would never have had the capacity to run at scale on what was, by alert-priority standards, a relatively low-risk signal. A sole security practitioner at a US-based conservation nonprofit replaced a $60,000 per year pass-through MDR, one that ingested data but performed no meaningful correlation, with an AI platform that cost half as much and delivered investigations with context and attribution already assembled.
Both also arrived, independently, at the same position on autonomous response: no. One framed it as enterprise risk management. The other framed it as an accountability principle, noting that you cannot hold a computer accountable. The architectures differed. The conclusion was identical.
Three Structural Shifts
Rochford’s framework for understanding the current AI SOC landscape is organized around three shifts. The first is economic: AI has made it possible for AI-native MDR providers to deliver expert-grade, personalized security operations, a level of service previously available only to organizations at the high end of the market, at prices the rest of the market can afford. This only holds, however, for providers who built AI into their operational core rather than layering it over existing architecture. The distinction between AI-augmented and AI-native MDRs is a significant part of the report’s analytical contribution.
The second shift is in the governance question itself. The traditional MDR decision asked who runs the SOC. The AI-era question asks who owns decisions when machines are making them. Most evaluation frameworks, the report observes, have not caught up. They still emphasize alert volume, response SLAs, and analyst certifications, none of which address how AI-driven decisions are made, audited, or contested.
The third shift is structural. When an organization deploys an AI SOC tool, it is not making a technology decision in isolation. It is committing to a specific operating model, one with embedded governance assumptions, compounding workflow dependencies, and switching costs that are often invisible at purchase time but substantial to unwind later. The report’s phrasing is precise: deploying an AI SOC tool is not just a technology decision. It is a whole operating model commitment.
The Three Operating Models
Against that backdrop, the report presents three operating model options. The internal AI SOC suits mature teams with strong detection engineering and the distinct expertise required to govern probabilistic AI systems rather than rule-based tools. The AI-enabled MDR suits organizations that lack dedicated SOC capacity or that prefer to purchase governance capability alongside operational capability. The hybrid model serves organizations navigating uneven maturity across different security functions, with the primary caveat that hybrid failures almost always trace back to ambiguous ownership at the boundaries between internally and externally managed functions.
Daylight Security appears throughout the report as an example of AI-native MDR design. Its architecture centers on a knowledge graph encoding organizational context (assets, relationships, and behavioral norms), which it uses to evaluate events and derive one of three verdicts: benign, suspicious, or ambiguous. High-confidence verdicts resolve automatically. Lower-confidence events go to human analysts with a full evidence package that includes the observable artifacts behind the AI’s classification. The architecture makes explicit design choices about what should not be automated, retaining data loss prevention decisions and ambiguous contextual judgments as human responsibilities.
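The routing this implies is simple enough to sketch. The following is a minimal, illustrative version: the three verdict states and the human-only carve-outs come from the report's description of the architecture, while the threshold value and every identifier are assumptions, not Daylight's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    BENIGN = "benign"
    SUSPICIOUS = "suspicious"
    AMBIGUOUS = "ambiguous"

@dataclass
class Assessment:
    verdict: Verdict
    confidence: float                                   # 0.0-1.0, produced by the AI layer
    evidence: list[str] = field(default_factory=list)   # observable artifacts behind the classification

HUMAN_ONLY_CATEGORIES = {"dlp"}   # data loss prevention stays a human call by design
AUTO_RESOLVE_THRESHOLD = 0.90     # illustrative cutoff, not a figure from the report

def route(category: str, assessment: Assessment) -> str:
    """Return 'auto-resolve' or 'escalate' for an assessed event."""
    if category in HUMAN_ONLY_CATEGORIES:
        return "escalate"        # never automated, regardless of confidence
    if assessment.verdict is Verdict.AMBIGUOUS:
        return "escalate"        # ambiguous contextual judgments go to analysts
    if assessment.confidence >= AUTO_RESOLVE_THRESHOLD:
        return "auto-resolve"    # high-confidence verdicts close automatically
    return "escalate"            # low confidence: analyst receives the full evidence package
```

The design choice worth noticing is that the human-only check comes first: no confidence score, however high, can route a data loss prevention decision past an analyst.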
The Questions Procurement Processes Skip
For CISOs navigating vendor conversations, the report provides an evaluation checklist built around six dimensions: decision ownership, explainability, control boundaries, failure behavior, AI supply chain, and adaptability over time. The questions it suggests are deliberately different from the ones that dominate most RFP processes. Who is accountable when the AI makes an incorrect decision? Do your SLAs explicitly cover AI-driven decisions? If your foundation model provider changes its pricing or deprecates an API, what happens to my contract? Can I see the reasoning behind a suppressed alert after the fact?
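For teams that track RFP responses programmatically, the six dimensions translate naturally into a rubric. In the sketch below, the dimensions are the report's and the questions marked (report) are quoted from it; the remaining questions and the scaffold itself are illustrative placeholders, not the report's wording.

```python
# Six dimensions from the report; questions marked (report) are its examples,
# the others are illustrative placeholders, not the report's wording.
CHECKLIST = {
    "decision ownership": "Who is accountable when the AI makes an incorrect decision?",        # (report)
    "explainability":     "Can I see the reasoning behind a suppressed alert after the fact?",  # (report)
    "control boundaries": "Which response actions can run without a human in the loop?",        # placeholder
    "failure behavior":   "What happens to coverage if the AI pipeline degrades or goes down?", # placeholder
    "AI supply chain":    "If your foundation model provider changes its pricing or deprecates "
                          "an API, what happens to my contract?",                               # (report)
    "adaptability":       "How do detections keep pace as my environment changes?",             # placeholder
}

def gaps(vendor_responses: dict[str, str]) -> list[str]:
    """Return the dimensions a vendor response left unanswered."""
    return [dim for dim in CHECKLIST if not vendor_responses.get(dim, "").strip()]
```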
The AI supply chain section is particularly forward-looking. Most AI-enabled MDRs depend on third-party foundation models and cloud AI services. API pricing changes have already forced several AI-native security vendors to renegotiate mid-contract economics. Upstream model updates have altered detection behavior in downstream applications without warning. An organization that does not ask about its MDR’s supply chain dependencies is inheriting risks that are not visible in the service agreement but are very much present in the operational reality.
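That risk also suggests a practical control. The sketch below assumes the organization keeps a frozen corpus of previously adjudicated alerts and can replay them through whatever callable wraps the triage model; it is one possible mitigation, not something the report prescribes.

```python
import json

def verdict_drift(baseline_path: str, classify) -> list[dict]:
    """Replay a frozen alert corpus through the current model and report
    verdicts that no longer match the recorded baseline.

    `baseline_path` points to a JSONL file of {"alert": ..., "verdict": ...}
    records captured when the pipeline was last validated; `classify` is
    whatever callable wraps the triage model today.
    """
    drifted = []
    with open(baseline_path) as f:
        for line in f:
            record = json.loads(line)
            current = classify(record["alert"])
            if current != record["verdict"]:
                drifted.append({"alert": record["alert"],
                                "was": record["verdict"],
                                "now": current})
    return drifted
```

Run after every upstream model or prompt change, a check like this turns silently altered detection behavior into a reviewable diff.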
The report does not tell organizations which operating model to choose. It tells them how to choose deliberately rather than by default, and why the difference between those two paths is more consequential than most current security conversations acknowledge.



