In January of 2019, I was asked to run a large RevOps team that didn’t exist.
Less than a year earlier, my startup had been acquired and I was enjoying the ride at a high-growth company. Now the acquirer was itself being acquired by its largest competitor—a 4,000-person public company that was still largely controlled by a PE firm. Their board had decided they needed new energy, so they were putting the acquired execs in charge of GTM for the larger company.
Now my former CEO—and newly established President—was asking me to run RevOps. My first response: “I don’t think you want me to do that.” I’d been a founder and a software engineer but I hadn’t run RevOps before. He insisted. I relented. I now realize he saw something in me that was necessary for success: a high pain threshold.1
Oh, and one more thing: there’d never been RevOps here before. My team would be an amalgamation of a bunch of small teams cobbled together from across the company.
So that’s how I found myself with a team of 50 supporting 600 sellers. That’s also how I learned that I was inheriting a tech “stack”2 that included 11 different Salesforce orgs. This was the result of years of PE-driven acquisitions and an alarming lack of concern for any integration extending beyond the P&L.
The COO (who I reported to) and CFO both pulled me aside to tell me that—oh by the way—the CRM data is terrible and could I do something about that too?
So we rolled up our sleeves and made progress even if it wasn’t always pretty. We did the usual RevOps things: cleaning data, implementing workflows, redesigning process, evolving ROE, managing territories, building a BI infrastructure, administering comp plans, etc, etc.
We also got strategic. We brought together the GTM leadership for a weekly meeting that created actual alignment. We led a lot of organizational transformation after the acquisition including developing and adopting new strategies like dynamic books. We even made progress on reducing those Salesforce orgs by actually merging GTM motions, tech stack AND data.
I’m extremely proud of what we achieved. By assembling a bunch of small dispersed groups into a single RevOps team with a large span of control and by doing a bunch of hard work, we made things better. It was, in many ways, delivering on the promise of RevOps itself.
But lately I’m wondering about the future of RevOps.
A natural language interface for the CRO
We were super lucky. We had the full support of the President, a big team spanning multiple functions, and a reporting line to the COO, which freed us up to act more independently than if we’d rolled into the GTM org.
Most RevOps teams don’t have those advantages. They’re chronically understaffed and operate inside the GTM org—there’s no opportunity or leeway to be strategic. Instead they’re the technical execution arm of the sales leadership. RevOps teams take directives and operate as a thin human layer on top of the sprawl of data, apps and spreadsheets—a sort of natural language interface to GTM tech for the CRO.
That’s a precarious spot. Earlier this month, HubSpot and OpenAI announced that HubSpot would be the first CRM that ChatGPT users could connect to using Deep Research.

This ChatGPT integration is powered by HubSpot’s MCP server. As I shared in my post about agents, MCP is a sort of USB port for connecting apps to AI models. It’s supported by all the major AI vendors, not just OpenAI. There’s real momentum around this technology, so it’s reasonable to predict that many other GTM vendors will follow HubSpot’s lead here.
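For the curious, here’s roughly what that looks like from the client side. This is a minimal sketch using the MCP Python SDK; the server package, tool name and arguments are all assumptions for illustration, since each CRM vendor ships its own MCP server with its own tools.

```python
# Minimal sketch of an MCP client session (the "USB port" in action).
# The server package ("@example/crm-mcp-server") and the tool name/arguments
# are hypothetical; a real CRM vendor's MCP server defines its own.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="npx", args=["-y", "@example/crm-mcp-server"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the CRM exposes to the model.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Ask for data the way an AI model would: a tool call, not a report request.
            result = await session.call_tool(
                "search_deals",
                arguments={"stage": "Negotiation", "inactive_days": 60},
            )
            print(result.content)

asyncio.run(main())
```

In practice it’s the AI model, not a human, issuing those tool calls; the point is that the plumbing is now standardized across vendors.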
Combine powerful reasoning models (like OpenAI’s o3 which recently got an 80% price cut) with universal access to data and applications via MCP, and you’ve got all the ingredients you need to replace a RevOps team that’s only operating as a natural language interface to the tech stack.
Now, RevOps teams are an unusually technical bunch as far as GTM orgs go—it’s a big part of their job. You’d imagine they’d be rushing to disrupt themselves and be big users of AI. In some cases they are, but for a lot of teams it’s not happening.
According to the latest ICONIQ State of GTM Report, RevOps teams are actually lagging behind their GTM peers in AI adoption.

As you can see, 18% of companies reported no adoption of AI at all in RevOps. On average only 39% of RevOps FTEs use AI. Compare that to more than 50% for Marketing, BDR and AEs.
Looking more closely, there’s a pronounced split in RevOps that doesn’t appear for other teams: 22% of RevOps teams are super-adopters vs 18% non-adopters. It appears that some RevOps orgs are going fully AI-native (e.g. adopting GTM Engineering) but most are either a) doing nothing or b) just going through the motions.
I see three interpretations of this:
Cobbler’s shoes3: RevOps is spending all their time evaluating and implementing AI for other teams instead of themselves.
Tech mismatch: The current state of LLM-based AI is much better at text synthesis and production than analysis and execution.
Strategic shortfall: RevOps teams in most orgs are reactive and conservative—focused on troubleshooting, fulfilling requests and incremental improvements.
These aren’t mutually exclusive, and the real cause could be a mix of all three. Let’s look at each one a little deeper.
Cobbler’s shoes
The Cobbler’s Shoes interpretation is the one ICONIQ favors. RevOps isn’t deploying their own AI because they’re too busy enabling everyone else. In theory, this could correct itself naturally. Once the rest of the teams are all stood up with AI, then RevOps can adopt more of it themselves.
When I originally posted about RevOps AI Adoption on LinkedIn, Joe Ort replied (as he often does) with a thoughtful comment:
In my head, RevOps is deploying for the benefit of other teams. It's not that they aren't doing anything but their personal workflow isn't changing.
Just as selling RevOps products to RevOps is a struggle as vendors, getting time spent on internal projects vs projects that benefit the GTM teams is a hard sell.
While I agree this is a reasonable explanation, it feels like a trap. In my experience, RevOps often struggles to sell the rest of the organization on how improving their internal processes benefits the whole GTM team. They fall back on “time saved” or “this won’t scale” as justifications without any real ROI.
This is short-sighted. An AI-enabled, radically more effective RevOps could, for example, more intelligently design territories—leading to massive ROI. Or they could use reasoning models + MCP to more quickly diagnose pipeline problems which could save next quarter. There’s a lot more here than just the classic “stop using spreadsheets” pitch.
These processes are the foundation of any GTM motion. If they get better, it benefits the entire GTM org.
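To make the pipeline-diagnosis example less abstract, here’s a minimal sketch with a CSV export standing in for live MCP access; the model name, file name and column names are all assumptions for illustration, not a prescription.

```python
# Hypothetical sketch: ask a reasoning model to diagnose pipeline risk.
# "pipeline_export.csv", its columns, and the model name are assumptions.
from pathlib import Path
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()
pipeline_csv = Path("pipeline_export.csv").read_text()

prompt = (
    "You're helping a RevOps team diagnose pipeline risk for next quarter.\n"
    "Columns: opportunity_id, segment, stage, amount, created_date, "
    "last_activity_date, close_date.\n"
    "Identify stages where deals are stalling, segments with thin coverage, "
    "and the three issues most likely to hurt next quarter's number.\n\n" + pipeline_csv
)

response = client.chat.completions.create(
    model="o3-mini",  # any reasoning-capable model; named here only as an example
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```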
Tech mismatch
The whole idea that RevOps should adopt AI is based on the assumption that AI actually helps with the tasks that RevOps needs to do. That brings me to the tech mismatch argument.
Perhaps one reason AI adoption lags in RevOps is that it doesn’t actually help with the kinds of data analysis and execution work those teams have to do.
I’m sympathetic to that argument. The first ~2 years of LLM-based AI (ChatGPT launched on November 30th, 2022) didn’t help much here. That wave of AI was much better at summarizing and synthesizing text than it was at analyzing data and taking action. I remember my early attempt to use GPT 3.5 to do some basic analysis on a spreadsheet and getting worse-than-useless answers.
These limitations meant the initial highest and best use of AI for RevOps was a much better approach to data scraping and enrichment that could synthesize large amounts of text into structured data points suitable for CRM.
That’s changing. Models can write code to perform analysis of things like tabular data. In the past 6 months we’ve started to have access to widely available reasoning models that can do multi-step tasks.4 It’s only been in the last two months that MCP integrations have become readily available to help those models get context and take useful action.
And of course, non-developers can now vibe code their own internal tools.
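To give a concrete sense of what I mean by an internal tool, here’s a hedged sketch; the file name and columns are assumptions, but it’s the kind of stalled-deal report a RevOps analyst could now vibe code in an afternoon instead of maintaining a spreadsheet.

```python
# Sketch of a vibe-coded internal tool: flag stalled open pipeline from a CRM export.
# "pipeline_export.csv" and its column names are assumptions for illustration.
import pandas as pd

deals = pd.read_csv(
    "pipeline_export.csv",
    parse_dates=["created_date", "last_activity_date"],
)

# Keep only open deals, then compute how long each has gone without a touch.
open_deals = deals[~deals["stage"].isin(["Closed Won", "Closed Lost"])].copy()
open_deals["days_since_activity"] = (
    pd.Timestamp.today() - open_deals["last_activity_date"]
).dt.days

# Anything untouched for 30+ days, rolled up by owner and stage.
stalled = open_deals[open_deals["days_since_activity"] >= 30]
report = (
    stalled.groupby(["owner", "stage"])
    .agg(deal_count=("opportunity_id", "count"), at_risk_amount=("amount", "sum"))
    .sort_values("at_risk_amount", ascending=False)
)
print(report.to_string())
```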
Data access, task completion and vibe coding probably represent the core technologies necessary to open up new AI use cases for RevOps teams.
Of course, they’re also the things that could mean we don’t need RevOps at all.
Strategic shortfall
The more sinister possibility for RevOps is that they’re not adopting these new capabilities because they’re stuck in a rut of reactivity that now endangers their very existence.
The “too busy chopping wood to sharpen the axe” trap is a self-perpetuating cycle. These teams descend further into day-to-day tasks and so they can’t offer any strategic assistance. CROs learn they’re not going to get strategic assistance so they don’t include RevOps in those conversations. Instead they just give them tasks to do. And on and on it goes.
There’s a special class of this issue that Jason Linkswiler brought up on LinkedIn:
There is also a 4, which is 'we need to fix and clean our RevOps stack before we can start AI related initiatives'.
He frames it as #4 but I actually think it fits just fine here. I see way too many RevOps teams that deflect adopting any improvements until they’ve fixed everything—the data, the systems, all of it.
It’s a lot like saying we don’t just need to sharpen the axe, we need to stop chopping wood for as long as it takes to build ourselves a chainsaw. That’s a lot of cold nights without a fire. But after that (when or if it happens), just think of how scalable our wood chopping will be…
I desperately wanted to do that when I discovered my 11 unintegrated Salesforce orgs but I knew it wasn’t an option. There was too much EBITDA to be had! Too much shareholder value to create! Besides, the software engineer in me knew the “big rebuild” is something you should never do. It’ll always take too long and never provide the leap you’re looking for. In the meantime everything else suffers.
Instead you have to figure out how to steadily improve and evolve so you can make progress no matter what.
RevOps teams simply can’t allow the desire to fix something once and for all to be the reason to do nothing at all. That line of thinking means they’ll never adopt something as important as AI.
I think this is what’s happening with AI laggards in the ICONIQ report. They’re bogged down by the day-to-day work while daydreaming of the big rebuild that can never come.
Teams that fall into this trap are the ones most at risk of being fully replaced by AI—not just having their numbers reduced. Why bother having humans be a natural language interface when AI does it just fine? After all, they’re not moving things forward strategically anyway.
Whither RevOps?
The last 6 months of AI development have put all the ingredients in place for a new phase of RevOps. Models can reason and analyze. They can reach into systems to get the data they need and take necessary actions. The technology is new and immature, but it’s here. Right now.
RevOps leaders: If you’ve been holding back on AI because you don’t think it’s helpful for your needs—reevaluate ASAP. Then, work with your stakeholders to be ruthless about prioritization. What can you defer today so you can sharpen the axe? Just make sure you don’t fall into the “big rebuild” trap and start using bad data or a sprawling tech stack as a reason to do nothing.
Sales leaders: Give your RevOps teams the benefit of the doubt when they’re asking to sharpen the axe with AI. Understand that the time to do this has to come from somewhere. Some tasks may need to be reprioritized. It may also require some budget you could spend elsewhere. But it’s an investment in every other process getting better for your team.
In the same way that AI is fundamentally changing the role and effectiveness of the sales rep, it will do the same for RevOps—only even more extreme. There will always be some role for humans talking to humans in the sales process. If your RevOps team is just helping humans talk to machines? Well, that may not be around much longer.
1. He also had to make a “synergy” plan work, but who’s counting?
2. I suppose “sprawl” is more appropriate. Just figuring out how to even access the Salesforce orgs was a tough job.
3. Apparently there are a lot of variations on the idea that when someone does a thing for a living they don’t really do that thing for themselves or their loved ones.
4. Google Gemini 1.5 Pro came out in early 2024 but didn’t gain “Deep Research” until December. OpenAI announced o3 in December of 2024 and shipped o3-mini in January and their version of Deep Research in February. In between, there was DeepSeek R1, which briefly shook everything up. AI has a lot going on.