At some point in the last two years, someone on your L&D team likely adopted a new AI tool. Then someone else did. Then someone found something better for video, and another person had strong feelings about assessments. Whether your team is two people or twenty, none of those decisions were wrong. Nobody set out to build a fragmented system. It just ended up that way, one decision at a time. And that’s what makes it so hard to see.
Most L&D teams, regardless of size, are now using more than three tools to produce a single piece of training content. Each owned by someone, each producing something, none of them talking to each other. Content gets made faster than it did before. It also gets lost, duplicated, and revised more than anyone wants to admit.
This is not a technology problem. The tools work. What’s missing is the leadership decision to govern content production as a system rather than as a set of individual tool choices. The longer that decision is deferred, the more expensive the consequences: rework, compliance exposure, and the quiet erosion of trust in what the L&D function actually produces.
The industry has spent two years talking about AI adoption as though adding capability were the same thing as building capacity. It isn’t. Speed at the point of creation is worth very little if the process around it can’t hold the output together, and every gap between tools creates overhead somewhere else. For a small L&D team, that overhead falls on the same people who are supposed to be building. For larger functions in regulated industries such as pharma, financial services, or manufacturing, those gaps don’t stay internal: compliance and legal teams find them, and by then the absence of an audit trail is no longer just an L&D problem.
What makes this harder to fix than it sounds is that “human in the loop” – the standard reassurance that someone reviews AI output before it goes live – turns out to mean very different things depending on how the system underneath it is built. When AI generates content that can’t be traced back to its source, reviewers have two choices: rubber-stamp it and hope, or read every line against the source document from scratch. Neither is a real quality process. The problem isn’t human oversight; it’s that oversight only functions when there’s something transparent to oversee.
This is where the organisations pulling ahead are doing something structurally different. They don’t have more sophisticated AI; they have fewer handoffs. Content starts in one place, moves through one connected workflow, and arrives at the LMS without being re-entered, re-formatted, or re-verified across platforms that were never designed to work together. When every piece of AI-generated content links back to the source material it drew from, review becomes targeted rather than exhaustive. The SME isn’t reconstructing context; they’re confirming connections they can already see. For a small team, that’s the difference between a sustainable workflow and a constant scramble. For a larger one, it’s what makes scaling actually possible.
The teams that have moved to this model consistently see faster production, less rework, and content they can stand behind when someone asks where it came from. More importantly, they’ve made a decision that most L&D leaders are still avoiding: that adding another tool to a broken architecture doesn’t fix the architecture.
The real leadership question isn’t which AI tools to adopt next. It’s whether the system those tools sit inside is built to produce content you can govern, trace, and scale. Most aren’t.
The ideal system is one where a single source produces every format you need, where review is built into the workflow rather than added at the end, and where nothing gets lost between tools because there are no gaps between them. That’s the architecture worth building toward, and it’s what SkillFLO was designed around, if you’d like to see it in practice.