AI Competency | Essay 3
Essay 2 established why root-level AI is the true competitive moat; the real question is why it’s so hard for most organizations to commit to building it. It’s not because they lack ambition or resources, but because the internal conditions required for root-level AI don’t receive the sustained investment they need.
This essay focuses on the structural hurdles that prevent organizations from building real AI competency, and the foundational investments that create it. These are prerequisite to any enterprise AI strategy, whether the firm intends to build or buy. With agentic AI accelerating and the workforce shifting toward a blend of human and AI agents, these investments are no longer optional preparation. They are the foundation of the next era of business.
Why Organizations Fail to Build AI Competency
The failure to build root-level AI is not a technology problem; it is a structural one. It shows up as five persistent friction points: the funding paradox, the weight of legacy systems, organizational friction, regulatory constraints, and talent scarcity. Each friction pushes organizations toward leaf-level AI: AI implemented at the end product without integration at the root. A conversational agent on a website that is disconnected from the customer service helpline is a typical example.
The Funding Paradox. You need a commercial win to fund the competency, but you need the competency to win. Root-level AI demands long-term capital commitments that rarely survive a CFO’s ROI timelines. So capital flows to leaf-level projects instead: chatbots, recommendation engines, tools that show quarterly returns but are easily duplicated by any competitor with the same off-the-shelf model. The funding paradox doesn’t just starve the root, it feeds the leaves, creating the illusion of progress while the structural gap widens.
The Weight of Legacy Systems. The typical enterprise doesn’t have an AI problem; it has a data problem. Legacy systems were built for rule-based processing, not for probabilistic inference, which requires a wide variety of data transformed at high speed. Upgrading them means converting a narrow road into an eight-lane freeway while traffic keeps moving. So the organization routes around the problem, deploying a standalone forecasting tool on one data stream rather than building the unified pipeline that would allow a single demand signal to feed pricing, logistics, and merchandising simultaneously. The leaf is hydrated while the root remains parched.
Organizational Friction. Traditional structures incentivize siloing data and hoarding talent. When every business unit owns its own model and data, there is no root, only disconnected leaves. The deeper failure is that root-level AI requires real-time feedback loops across functions. When organizations make decisions in silos, the outcomes don’t surface across functions for weeks, if ever. Rewiring these information flows is a fundamentally different ask than adopting new software. As Jack Dorsey argues in “From Hierarchy to Intelligence,” the structures most firms rely on were never designed for this.
Regulatory Constraints. In regulated industries, liability frameworks often make leaf-level AI the only legally viable option. A hospital can deploy a tool that flags conditions for physician review. It cannot easily deploy a system that routes patients autonomously based on probabilistic triage, even if outcomes would improve.
Talent Scarcity. Compounding this, most organizations lack the internal machine learning expertise to architect a proprietary intelligence layer, even where regulation permits one.
The irony is that the organizations that most need root-level AI are often the least structurally capable of building it. The data they need is already in their systems. The conditions to use it are not.
Four Investments That Create AI Competency
For 250 years, automation followed a deterministic script: if X, then Y. That era is ending.
Deterministic systems produce hindsight: this part failed, reorder it. Probabilistic systems produce foresight within a confidence interval: there is a 78% chance this part will fail within 14 days, given current usage patterns. The difference sounds subtle. Organizationally, it is seismic. A deterministic metric requires someone to execute. A probabilistic recommendation requires someone to decide whether to act on a 78% confidence level, and to be accountable for that judgment. Decision rights, escalation protocols, meeting structures, even the way performance is evaluated, all of these were designed for a world where the system tells you what happened, not what is likely to happen. Most organizations have not begun to reckon with this, and the emergence of a blended human-agent workforce compounds the challenge further.
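The shift from executing a metric to deciding on a confidence level can be made concrete. Below is a minimal sketch in Python of an escalation protocol of the kind described above; the thresholds, names, and action labels are all illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch of an escalation protocol for probabilistic
# recommendations. Thresholds and labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "reorder part A-113"
    confidence: float  # model confidence, 0.0 to 1.0

def route(rec: Recommendation,
          auto_threshold: float = 0.90,
          review_threshold: float = 0.70) -> str:
    """Decide who owns the judgment call at a given confidence level."""
    if rec.confidence >= auto_threshold:
        return "execute"       # high confidence: act by default, log for audit
    if rec.confidence >= review_threshold:
        return "human_review"  # the 78% case: a person decides, and is accountable
    return "escalate"          # low confidence: escalate or gather more data

print(route(Recommendation("replace part", 0.78)))  # -> human_review
```

The point of the sketch is that the thresholds themselves are organizational decisions: someone must own them, and someone must be accountable for the 78% case.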
Evans et al. argue in “Agentic AI and the Next Intelligence Explosion” that the path to more powerful AI runs not through building a single colossal oracle but through composing richer social systems that are neither purely human nor purely machine.
Organizations that want to participate in these systems will need internal AI competency. Four foundational investments create the conditions for AI competency required to compete in the agentic AI era: Unified Data, Scalable Compute, Organization-wide AI Fluency, and Functional Governance.
Unified Data is the anchor. It starts first and runs longest, because every other priority depends on what data exists, where it lives, and how it flows. While the unified data strategy and architecture are being defined, two parallel tracks begin at lower intensity: compute infrastructure discovery, assessing what exists and where the gaps are; and domain-expert AI literacy, building foundational fluency so that people are not encountering probabilistic systems for the first time when those systems go live. Once the unified data strategy is set and underway, scalable compute and organization-wide AI fluency formalize. Functional governance starts last, not because it matters least, but because it needs real inputs from the other three to be designed well.
1. Unified Data
The defensible advantage of the agentic era is not the model, it is the data the model runs on. Legacy firms possess decades of proprietary operational history that no competitor can replicate, but that data is largely trapped, siloed, inconsistently labeled, and architecturally inaccessible to probabilistic systems.
The first priority is a data strategy that treats data as a primary asset, not a byproduct of operations. This means unified pipelines that allow a single demand signal, not a forecast living in one team’s spreadsheet, to be queried by pricing, logistics, merchandising, and planning simultaneously. It means cleaned and labeled historical records that a model can actually train on, not raw transaction logs that require six months of engineering before they’re usable. It means an architecture where a single proprietary data root feeds multiple model and agentic applications, so that every new use case compounds the value of the investment rather than starting from scratch.
This work is foundational and unglamorous. It is also the one investment that compounds. Better data produces better models, which generate better data. Data is the anchor because it is the slowest to build, the most consequential if built wrong, and the investment that unlocks everything else.
2. Scalable Compute
Running probabilistic models at enterprise scale is a fundamentally different infrastructure problem than hosting a SaaS application. The question is not whether a firm has cloud access; it is whether the architecture can support real-time inference, edge processing for physical operations, and the data throughput that agentic workflows demand.
For a logistics firm, this means the difference between a model that optimizes routes in a nightly batch and one that reroutes in real time as weather, traffic, vehicle breakdowns, and delivery windows shift. The first is a reporting tool. The second is an operational capability. The infrastructure required for each is categorically different, and the decision to build for one or the other is difficult and expensive to reverse once made.
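The gap between the two modes can be sketched in code. The fragment below is a hypothetical illustration of the contrast, not a real routing system; the event types, function names, and data shapes are assumptions.

```python
# Hypothetical contrast: batch reporting vs. event-driven operations.
# Event types and function names are illustrative assumptions.

def nightly_batch_route(orders: list[str]) -> dict[str, str]:
    """Reporting-tool mode: one optimization pass over yesterday's data."""
    return {order: f"route planned overnight for {order}" for order in orders}

def on_event(event: dict, current_routes: dict[str, str]) -> dict[str, str]:
    """Operational-capability mode: reroute the moment conditions change."""
    if event["type"] in {"weather", "traffic", "breakdown", "window_change"}:
        for order in event["affected_orders"]:
            current_routes[order] = f"rerouted in real time after {event['type']}"
    return current_routes

# A breakdown during the day invalidates the overnight plan for order-2.
routes = nightly_batch_route(["order-1", "order-2"])
routes = on_event({"type": "breakdown", "affected_orders": ["order-2"]}, routes)
```

The batch function can run on almost any infrastructure; the event handler only matters if the pipeline can deliver the event, run inference, and push the new route before the truck reaches the next intersection. That throughput requirement is what makes the two architectures categorically different.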
Compute begins in discovery mode alongside the early data work: mapping current infrastructure, assessing what workloads the target architecture will demand, and understanding the cost structure at different levels of capability. It formalizes into architectural commitments once the data strategy is far enough along to define the data flows the infrastructure must support.
3. Organization-wide AI Fluency
The goal is not to hire a hundred machine learning engineers; it is to ensure every function in the organization has people who understand how to work alongside, direct, and audit AI systems.
It starts immediately with literacy. The frontline manager needs to understand what it means when a system offers a recommendation at 82% confidence and needs clear protocols for when to act on it and when to escalate. The CFO needs to evaluate AI investment on probabilistic rather than deterministic ROI logic. The return on a data pipeline is not a revenue line in Q3, it is a compounding reduction in decision latency across every function that it touches. The general counsel needs to understand what auditability means in practice, not just in policy. This foundational work begins alongside data and compute discovery, building the organizational muscle before the systems demand it.
The deeper work, training people on specific workflows and building override judgment, formalizes once the data and compute layers provide real systems to practice against.
It also means identifying the internal orchestrators, people with deep domain expertise who can work with model-based predictions or direct agentic systems toward high-value outcomes. These are not the AI specialists. They are the supply chain veteran who knows which routing exceptions matter, the underwriter who understands which risk factors the model is likely to miss, the merchandiser who can tell the difference between signal and noise in customer behavior data. These are the most valuable people in the organization-wide literacy investment. Find them early and build around them.
4. Functional Governance
The default instinct is to treat governance as a prerequisite. The instinct is understandable. Manage risk early, establish guardrails, avoid regulatory exposure. But governance designed before the data architecture is defined, the compute constraints are understood, and the organization has enough fluency to articulate domain-specific risks will produce frameworks that are either too generic to be useful or too rigid to survive contact with real systems.
Mike Tang’s article on agentic governance, “The Governance Paradigm That Wasn’t Built for Agents,” identifies the three elements that real-world frameworks are converging around: real-time behavioral monitoring, intervention before risky actions reach production systems, and auditable records that tie every intervention to explicit policies and accountable humans. These are not theoretical requirements. They are the operational minimum.
Governance in the agentic era covers ground that most existing frameworks don’t touch. When an agent routes a shipment based on a probabilistic cost optimization, who reviews that decision and under what threshold? At what confidence level does a recommendation become a default action versus a suggestion requiring approval? When an agent trained on supply chain data makes a pricing recommendation that affects margin, which function owns the outcome?
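One of these questions, tying an agent's action to an explicit policy and an accountable human, can be sketched as an audit record. Everything below (the policy names, fields, thresholds, and owners) is a hypothetical illustration under the auditable-records element described above, not an implementation from any of the cited articles.

```python
# Hypothetical sketch of an auditable governance record: every agent
# decision that touches a governed policy is logged with the policy it
# triggered and the accountable human. All names are illustrative.

import json
from datetime import datetime, timezone

# Assumed policy registry: each governed action maps to a threshold and an owner.
POLICIES = {
    "pricing_change": {"max_autonomous_margin_impact": 0.02, "owner": "pricing_lead"},
    "shipment_reroute": {"min_confidence_for_default": 0.85, "owner": "logistics_lead"},
}

def audit_record(agent_id: str, decision: str, policy_key: str,
                 intervened: bool) -> str:
    """Build the record tying a decision to a policy and an accountable human."""
    policy = POLICIES[policy_key]
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "decision": decision,
        "policy": policy_key,
        "accountable_human": policy["owner"],
        "intervened": intervened,
    })

record = audit_record("routing-agent-7", "reroute shipment 4411",
                      "shipment_reroute", intervened=False)
```

The sketch makes the dependency argument visible: the registry cannot be filled in until the data architecture defines which decisions exist, and the owners cannot be named until the organization is fluent enough to know who should carry that accountability.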
These questions cannot be answered well in the abstract. They require a data architecture that defines what data flows where, compute infrastructure that determines what’s technically feasible in terms of logging and auditability, and people with enough fluency to articulate the domain-specific scenarios where AI decisions carry real risk. This is why governance trails the other three priorities. Governance built around real operational inputs will be specific, durable, and capable of evolving as AI generates new data, including multi-modal data that compounds the governance challenge with every deployment cycle.
The mitigation for regulated industries is not to build governance first but to define the governance questions early: determining what will need to be governed and where regulatory exposure is highest. The answers come once there is enough operational substance to make them durable.
Getting the long-term investment commitment right is what separates an AI program from sustainable AI competency. The firms that will define the next era of industry are not necessarily the ones with the largest AI budgets or the most advanced models. They are the ones investing now in the organizational conditions that allow intelligence to function at scale. That work should have begun yesterday.
The next essay takes on a question that stops most companies before they start: does strategy come before competency, or is competency the strategy?
Thank you for reading my essay. If you found it insightful, please comment, share, and like.
About the Author: Meghna Sinha is Chief AI Officer and Co-founder of Kai Roses, Inc. Kai Roses helps organizations build AI competency that compounds. Contact Meghna at www.kairoses.com to discuss building AI competency for your organization.
