Agent Enablement is the New Sales Enablement
RevOps and sales enablement should lead the charge.
In 2026, RevOps will change dramatically. The end is the same: provide the platform that enables GTM teams to succeed. The means will need to change thanks to AI agents. (Which are finally real, btw). Counterintuitively, I think it’ll mean RevOps orgs are less focused on tech stack and more focused on enablement.
I’ve been in the trenches (er, the terminal) trying to remake Gradient Works into an agent-first company over the past few weeks. I’m learning what it takes to make agents a reality, even at a relatively small scale. It’s different than I expected. The technology is the easy part. The hard part is the enablement—not of the people using the agents, but of the agents themselves.
I haven’t seen too much use of the term “agent enablement”[1] so I’ll provide my own definition. It means providing AI agents with appropriate access to business systems, sufficient information about their business context and proper skills for their job. Agents without access, context and skills are just a neat party trick with a penchant for telling convincing lies.
Think about a task like “prospect these 10 companies”. Whoever is doing that needs secure access to tools (LinkedIn Sales Nav, Salesforce), context (ICP, personas, market knowledge) and skills (the right sequence of steps to fully research a prospect). A rep without all that would fail. So will an agent.
The benefits of AI agents are such that most forward-looking GTM teams will at least attempt an agent-first transformation over the next 12 months. RevOps is well positioned to lead this charge, but it’ll require collapsing the boundaries between the “tech-first” mindset of most ops teams and the “people-first” mindset of most enablement teams.
Go back to the access, context and skills example above. Only the access part is even remotely technical—relating to APIs/MCPs/CLIs, information security and data governance.
Context and skills, on the other hand, are all about synthesizing large amounts of information into digestible documentation. This has historically been the domain of sales training, sales enablement and product marketing.
Now all three factors—access, context and skills—need to be delivered seamlessly together and packaged properly for agents to use them. It’s enough to make one think it’s impractical to keep RevOps and enablement separate.[2]
Let’s look at how agents are likely to work within human teams and then consider some practical issues for enabling them to be effective.
Two agentic modes
After a couple years of hype, a highly effective agent model that combines a local “agentic loop” with a powerful LLM equipped to use tools and skills has finally emerged.
This model has already transformed software engineering. The rest of knowledge work—including GTM—is next. Claude Code and OpenAI Codex are quickly moving beyond code. OpenClaw, which uses the same basic architecture as coding agents, has positioned itself as a general purpose agent from the beginning.[3]
So, if your job involves a computer, it’s about to change. Radically. Plenty of others have pontificated about whether this is horribly bad or just “normally” disruptive. I won’t do that, but I will make a prediction.
I believe we’ll work with agents in two basic modes:
Assistant - This agent occupies space on your desktop. It’s integrated with the systems you care about and you chat with it to get tasks done. Think “find time for Bill and me to meet” and “analyze last month’s closed-lost reasons and see how they changed vs the prior month”. It learns how you want to work, uses tools on your behalf and generally makes you better at your job. In short, assistants augment humans.
Teammate - These agents run autonomously. They have specialized knowledge, they engage with multiple other people/agents, make plans and execute long-running tasks. They have their own environment, their own identities and their own tools. They require management and direction. In short, these agents replace humans.
Note that these are basically just AI versions of regular jobs. And for that, I owe you an apology. About a year ago I wrote AI Needs its Sewing Machine Era. I argued[4] that the best way to build AI systems was by coming up with a wholly new way of looking at the problem, not by copying what humans do.
I was very wrong.[5] The agent model that’s working is the one that’s most similar to how a human works: give it a computer, give it some training, give it access to software tools and give it the ability to get guidance as needed from knowledgeable humans.
And that “help agents work how humans work” metaphor turns out to be a big unlock.
How to enable agents
Humans rarely do their jobs—GTM or otherwise—by mindlessly and precisely following flow charts. We learn the tools of our trade, we develop domain expertise and we learn to apply techniques that have been proven to work.
This is where a lot of the early “agent” systems failed. Early models weren’t very good at maintaining context and making decisions about what to do next. This limitation meant that most of the effort went towards tight guidance around steps the agents could take and sequences they would follow. Ultimately, they weren’t all that different from building out step-by-step workflows.
Modern agentic models (e.g. Opus 4.6 or GPT 5.2) are good at holding a lot of information at once and making decisions about what to do next based on an understanding of what they’ve done so far and what they’re ultimately trying to achieve. They just need good tools, good information and reasonable guardrails. In short, they’re a lot like a smart and industrious human.
Access
Access principles for agents are the same as for people: give them the most secure, least privileged access to the systems and information they need to do their job. They get what they need but with limited ability to expose sensitive data, damage your systems or otherwise mess stuff up.
Let’s look at a few different access models: APIs, MCPs and CLIs.
Agents are pretty good at interacting directly with APIs. They can take your request and turn it into code they execute to call the API. While it often works, it’s bad for security because you’ll need to hand over sensitive data like API keys directly to the model.
Most teams use MCPs to provide access to their systems. Both Claude Code and OpenAI Codex support MCP out of the box and there are lots of MCP servers out there. Two words of caution:
MCP can be a “token hog”. It stuffs a bunch of information into the agent’s context window (sometimes about things you’re not using). This can crowd out your more important task-specific information and cost you more money.
MCP has its own pretty challenging security issues.
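If you do go the MCP route, scoping what you connect helps with both caveats. As a sketch, Claude Code can read project-level MCP servers from a `.mcp.json` file; the server name and package below are placeholders, not a real integration:

```json
{
  "mcpServers": {
    "crm": {
      "command": "npx",
      "args": ["-y", "@example/crm-mcp-server"],
      "env": {
        "CRM_API_KEY": "${CRM_API_KEY}"
      }
    }
  }
}
```

Referencing the key as an environment variable, rather than pasting it into a prompt, at least keeps the credential itself out of the model’s context.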
Finally, there are command-line interfaces (CLIs). These are simple text commands that “wrap” various remote functionality. Models are very good at using these. In fact, much of what the coding models do is based on using CLI commands. We’ve primarily gone down this path at Gradient Works, using bespoke CLIs[6] to access our systems. They don’t use many tokens, we can customize access to meet our exact needs and we can embed our own security measures.
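To make that concrete, here’s a minimal sketch of what a bespoke CLI wrapper might look like in Python. The `crmctl` name, its single command and the in-memory “CRM” are all invented for illustration; the point is that the wrapper, not the model, holds the credentials and decides which fields and operations exist at all:

```python
# Hypothetical sketch of a bespoke CLI ("crmctl") for agent CRM access.
# A real version would call your CRM's API; credentials would live in the
# wrapper's environment, never in the agent's context window.
import argparse
import json

# Least-privilege by construction: only these fields are ever returned,
# and destructive operations simply aren't exposed as commands.
ALLOWED_FIELDS = {"name", "industry", "owner"}

# Stand-in for a real CRM backend (illustration only).
FAKE_CRM = {
    "acct-001": {"name": "Acme Corp", "industry": "Manufacturing",
                 "owner": "bill@example.com", "api_key": "SECRET"},
}

def get_account(account_id: str) -> dict:
    """Return only allowlisted fields so sensitive data never reaches the agent."""
    record = FAKE_CRM.get(account_id)
    if record is None:
        raise SystemExit(f"unknown account: {account_id}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="crmctl")
    sub = parser.add_subparsers(dest="command", required=True)
    get = sub.add_parser("get-account", help="fetch an account (redacted)")
    get.add_argument("account_id")
    return parser

def main(argv=None) -> None:
    args = build_parser().parse_args(argv)
    if args.command == "get-account":
        print(json.dumps(get_account(args.account_id), indent=2))

if __name__ == "__main__":
    main()
```

An agent would then run something like `crmctl get-account acct-001` and get back clean JSON with the sensitive fields already stripped.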
I’ll add one bonus option: browser use. Some services (cough LinkedIn cough) aren’t friendly to programmatic access. The agents can directly interact with the UI in the browser, but it’s slow and costs a lot of tokens.
The two best routes are MCPs and CLIs. They’ll work for both assistant and teammate use cases. Some browser use may be appropriate for assistant agents. For example, sales reps might be able to automate a bunch of Sales Nav clicks.
As with human system access, security is one of the most annoying things about agent access. It’ll be very tempting to throw the gates wide open and figure it out later. Don’t do this. It won’t seem so hot when an agent deletes a bunch of companies out of HubSpot or you suddenly see a user’s API key making weird requests to one of your systems. Just ask Drift.
Context
Think of context as the background information an agent needs to inform how it does its job—before even attempting to execute a specific task.
Imagine you want an agent to assess whether a customer is at risk for churn. To do an effective job, it would need to know lots of things such as:
What is your company and what does it do?
What products and services do you offer?
What do customers generally do with your products?
What constitutes good engagement from a customer?
What kinds of companies are likely to be good customers?
What do we know about how that customer has implemented your product?
What kinds of issues have they experienced recently?
Who are the key stakeholders at that customer?
Has anything changed recently with that customer’s business?
How do you identify a customer in your systems?
How do you know what a customer spends or when they renew?
These are contextual questions that range from semi-existential (e.g. “Who do I work for?” and “What do we do here?”) to hyper-specific (“What’s the last update we got from Customer X at the meeting on Thursday?”).
We take it for granted that employees know these things (though maybe we shouldn’t). Of course they know what the company does. They went through onboarding! Of course they know about the latest product offerings. They were in the launch meeting for the last feature! Of course they know the situation with the customer, they get all the support emails!
You’ve got to make all this information accessible to your agents if you want them to be effective. The state of the art for doing this seems to be—I kid you not—writing a bunch of neatly organized text files in Markdown format.
A big part of enabling agents is gathering this data wherever it lives (videos, PowerPoints, etc.), extracting it and turning it into clean documentation. Luckily, AI is pretty good at helping you bootstrap this. It helps if this data can be easily shared across agents and versioned as it changes.[7]
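For instance, the churn-risk questions above might boil down to a small set of plain Markdown files. Everything here—the company, the file name, the engagement criteria—is a hypothetical sketch of the shape such a file can take:

```markdown
<!-- context/company.md (hypothetical example) -->
# Who we are
Acme Analytics sells a usage-analytics platform to mid-market SaaS companies.

# What good engagement looks like
- Weekly active admin users
- Dashboards viewed by at least three teams
- Support tickets resolved without escalation

# How to identify a customer in our systems
- Salesforce Account records with Type = "Customer"
- Renewal date and ARR live on the related Contract object
```

Nothing fancy: headings the agent can scan, answers to the questions it will otherwise guess at.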
This area is where RevOps and sales enablement can make an awesome combination. RevOps can help automate the context gathering (e.g. fetching call transcripts) and sales enablement can help shape it into usable context (e.g. extracting key themes).
Skills
Skills are a special kind of context that gives an agent the specific knowledge it needs to carry out certain tasks. Common examples are reading PDFs or doing data analysis with SQL. So far at Gradient Works, we’ve made skills for things like navigating our Salesforce, building analytics dashboards, researching people on LinkedIn and triaging email.
Like just about every other part of agent knowledge, skills are just Markdown files in a specific format. That means they’re easy to write, manage, share and version. They’re just plain old words that describe how to do something.
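The exact format depends on your framework. Claude Code’s Agent Skills, for example, use a `SKILL.md` file with a short YAML frontmatter followed by plain instructions. The skill name and steps below are hypothetical:

```markdown
---
name: churn-risk-assessment
description: Assess whether a customer is at risk of churn using CRM and support data
---

# Churn risk assessment

1. Read the company context docs for what good engagement looks like.
2. Pull the account's usage, support tickets and renewal date from the CRM.
3. Compare recent engagement to the customer's historical baseline.
4. Write a short risk summary (low/medium/high) with the evidence behind it.
```

The frontmatter tells the agent when the skill applies; the body is the same kind of step-by-step guidance you’d give a new hire.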
You can enlist domain experts to write skills or you can use the agent to help you. This kind of skill “bootstrapping” with the agent is the most effective way I’ve found to make high-quality skills. You can go about it in two ways:
Jot down some notes, ask the AI to expand on them and then work with it to make adjustments.
Work with the agent to go through a task, prompting it to do each step and correcting anything it does wrong. Once you’re done, ask it to write up a skill that describes what you just did. I did this with my email triage skill and it worked great.
Skills interact with access and context quite a bit. For example, much of our internal Salesforce skill is information about how to properly access Salesforce using our CLI tools.
Just like with humans, task-specific skills should be informed by the larger context. Going back to the example in the previous section, you might end up building a Churn Risk Assessment skill, but it will necessarily build on context about your company generally and the customer being assessed specifically.
This is another area ripe for collaboration between RevOps and sales enablement. Every playbook or prospecting training is a potential skill in the making.
Wrapping up
You wouldn’t expect an employee to do great work on day one—no matter how smart they are. Of course they need access to your internal systems. But you don’t just give them a Salesforce login and stop there.
They also need time to get up to speed. They need to go through onboarding and training. They need to shadow more experienced folks. All that effort builds context and skills. It’s enablement.
All of the above is true for agents. People understand the access part. They set up a couple MCP connections and then expect their agents to do good work. They won’t. You’ve (sort of) solved access but not context and skills.
Like people, agents need access to systems, onboarding, training and coaching to do a good job. Unlike people, they want this in a technical wrapper of MCPs and Markdown files.
That’s the opportunity for RevOps to partner with sales enablement and lead this next phase of agentic adoption. The way forward is a unified team of technologists and trainers—one that understands that documentation is the new automation and that agent enablement is the new sales enablement. Your agents and their human teammates will thank you.
1. The closest analog is context engineering, but that’s more deeply concerned with the technical aspects of managing the actual agent context window. Most of us aren’t at that level of refinement. We’re just trying to do better than one-shotting our way to an answer without providing much context at all. Also, the latest agent models and frameworks are pretty good about figuring out what to pay attention to from a sea of files and tokens.
2. It seems that enablement reports into RevOps only about a quarter of the time.
3. It went viral in late January 2026 after its initial release in November 2025. In AI, “since the beginning” doesn’t really carry the temporal heft it does in other fields.
4. Sewing machines, you see, don’t sew the same way humans do. I’ve always thought it was a brilliant first-principles approach to problem solving. Same with planes. You may have noticed they don’t flap their wings like birds. People who tried to make machines sew like humans or fly like birds failed miserably.
5. Or possibly pre-right. Maybe this post will be embarrassing in a year’s time. There’s no way today’s approach is the final architecture for all this stuff.
6. Developed with Claude Code, naturally.
7. The jury’s still out on this but it sure seems like a GitHub repository is a good place to start.


