Discussion of AI is all around us, but in my experience, practical guidance rooted in specific use cases is surprisingly rare. After spending months deep in the weeds of a massive documentation migration with AI as my assistant, I've learned some hard-won lessons that I think others could benefit from.
If you work in content engineering, technical documentation, or are simply curious about how AI holds up in a complex, real-world project, here's my take on what worked and what didn't.
Project Context
I'm a DITA Information Architect on the Information Experience team at Splunk. DITA, short for Darwin Information Typing Architecture, is an open, XML-based standard for structuring and managing technical content.
We recently wrapped up the migration of three large documentation sites into a single help portal, powered by a DITA-based component content management system (CCMS). The timeline was tight, and nearly all of the resources were internal. The migrations were complex and critical to the business, requiring careful planning and execution.
I initially planned only to support the migration of the smaller, unversioned site. When that went well, I was asked to lead the much larger second migration. (The third site was handled by another team.) Together, these two migrations meant grappling with roughly 30,000 HTML files, two very different site architectures, and the challenge of customizing an existing Python migration script to fit the content at hand, while also putting processes in place for writers to review and clean up their content.
I want to be clear that AI didn't complete this project for me. It enabled me to work faster and more efficiently, though only because I did the planning, architecting, and troubleshooting. Used effectively, AI became a power tool that dramatically sped up delivery, but it never replaced the need for expertise or oversight.
Throughout this project, I used the then-current GPT-4 models through an internal Cisco chat-based deployment. These days, I work more in editor-based tools such as GitHub Copilot. Still, the lessons I learned should apply to the present (mid-2025) state of the art, with a few caveats that I mention where relevant.
How I used AI effectively
Prompting
One lesson I learned early on was to treat prompts the way I approach technical documentation: clear, consistent, and comprehensive. Before consulting the AI, I'd sketch out what needed to happen, then break it down into granular steps and write a prompt that left as little to the imagination as possible.
If I wasn't sure about the solution, I'd use the AI as a brainstorming partner first, then follow up with a precise prompt for implementation.
Iterative development
The migration automation wasn't a single script but became a series of Python tools that crawl navigation trees, fetch HTML, convert it to DITA XML, split topics into smaller pieces, map content, and handle version diffs. Each script started small, then grew as I layered in features.
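To make that concrete, here's a minimal sketch of one early pipeline stage: fetch a page and wrap its title and paragraphs in a bare-bones DITA topic. It assumes the requests and BeautifulSoup libraries, and the element mapping is a deliberately simplified stand-in for the real conversion logic.

```python
# Minimal sketch of a fetch-and-convert stage. The real scripts handled
# far more (navigation crawling, topic splitting, version diffs).
import requests
from bs4 import BeautifulSoup
from xml.sax.saxutils import escape

def html_to_dita_topic(url: str, topic_id: str) -> str:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    # Use the first <h1> as the topic title, falling back to the ID.
    h1 = soup.find("h1")
    title = escape(h1.get_text(strip=True)) if h1 else topic_id

    # Convert each paragraph into a DITA <p> element.
    paragraphs = [
        f"    <p>{escape(p.get_text(strip=True))}</p>"
        for p in soup.find_all("p")
    ]

    return (
        f'<topic id="{topic_id}">\n'
        f"  <title>{title}</title>\n"
        "  <body>\n" + "\n".join(paragraphs) + "\n  </body>\n"
        "</topic>"
    )
```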
I quickly learned that asking AI to rewrite a large script all at once was a recipe for bugs and confusion. Instead, I added functionality in small, well-defined increments. Each feature or fix got its own prompt and its own GitLab commit. This made it easy to roll back when something went sideways and to track exactly what each change accomplished.
Debugging
Even with good prompts, AI-generated code rarely worked perfectly on the first try – especially as the scripts grew in size. My greatest debugging tool was print statements. When the output wasn't what I expected, I'd sprinkle print statements throughout the logic to trace what was happening. Sometimes I'd ask the AI to re-explain the code line by line, which often revealed subtle logical errors or edge cases I hadn't considered.
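Here's a sketch of the kind of print-based tracing I mean, using an invented topic-splitting helper (the function and field names are illustrative, not the project's actual code).

```python
# Instrument each decision the code makes so a wrong assumption surfaces
# quickly. The splitting rule here is a simplified, hypothetical example.
def split_topic(sections: list[dict], max_len: int = 4000) -> list[list[dict]]:
    chunks, current, size = [], [], 0
    for sec in sections:
        sec_len = len(sec["text"])
        print(f"section={sec['id']!r} len={sec_len} running_total={size}")
        if current and size + sec_len > max_len:
            print(f"  -> closing chunk with {len(current)} sections")
            chunks.append(current)
            current, size = [], 0
        current.append(sec)
        size += sec_len
    if current:
        chunks.append(current)
    print(f"total chunks: {len(chunks)}")
    return chunks
```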
Importantly, this wasn't just about fixing bugs, it was also about learning. My Python skills grew immensely through this process, as I forced myself to truly understand every line the AI generated. If I didn't, I'd inevitably pay the price later when a small tweak broke something downstream.
These days, I lean on an AI-powered integrated development environment (IDE) to accelerate debugging. But the principle is unchanged: don't skip instrumentation and verification. If the AI can't debug for you, fall back on print statements and your own ability to trace the problem to its source. And always double-check any AI-generated code.
AI as an implementer, not inventor
This project taught me that AI is fantastic at taking a well-defined idea and turning it into working code. But if you ask it to design an architecture or invent a migration strategy from scratch, it will probably let you down. My most productive workflow was to (1) design the process myself, (2) describe it in detail, (3) let the AI handle the implementation and boilerplate, and (4) review, test, and refine the AI output.
Version control
I can't stress enough the importance of version control, even for simple scripts. Every time I added a feature or fixed a bug, I made a commit. When a bug appeared days later, I could walk back through my history and pinpoint where things broke. Sure, this is basic software engineering, but when you're working with AI, it's even more critical. The pace of change increases, and your own memory of each modification is inevitably less exhaustive.
The net effect of these practices was speed without chaos. We delivered far faster than we could have otherwise, and the quality of the output significantly reduced post-migration cleanup.
Where AI fell short
As valuable as AI was, it had many shortcomings. The cracks started to show as the scripts grew in size and complexity:
- Context limits: When scripts got longer, the AI lost track of earlier code sections. It could add new standalone features, but integrating new logic into existing, interdependent code? That often failed unless I spelled out exactly where and how to make changes. I should note that today's newer models with larger context windows might reduce some of the issues I ran into with the migration scripts. But I believe it's still important to be as specific as possible about which sections need to be updated and with what logic.
- Failure to find a working implementation: I found that sometimes the AI simply couldn't solve the problem as defined in the prompt. If I asked for a change and it failed three or four times, that was usually a signal to step back and try something different – whether that meant prompting for an alternative approach or writing the code myself.
- System understanding: Certain bugs or edge cases required a solid understanding of our systems, like how the CCMS handles ID values, or how competing case-sensitivity rules across systems could trip things up. This is a crucial area where AI couldn't help me.
What I’d do in another way subsequent time
Right here’s my recommendation, if I needed to do it another time:
- Plan core libraries and conventions early: Resolve in your stack, naming schemes, and file construction on the outset and embrace them in each immediate. Inconsistencies right here led to time wasted refactoring scripts midstream. That mentioned, working in an editor-based device that’s conscious of your total pipeline will assist to maintain your libraries constant from the outset.
- Sanitize the whole lot: File names, IDs, casing, and different seemingly minor particulars may cause main downstream issues. Embrace this steerage in your prompting boilerplate.
- Account for customized content material: Don’t assume all docs comply with the identical patterns and positively don’t assume the AI understands the nuances of your content material. Discover out early the place the outliers are. This upfront work will prevent time in the long term.
- Doc the complicated stuff: For any logic that takes various minutes to grasp, write down an intensive rationalization you possibly can refer again to later. There have been instances I needed to re-analyze sophisticated components of the scripts weeks later, when an in depth observe would have set me again on the right track.
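As an example of the sanitization point above, here's a minimal helper of the kind I mean. The exact rules (ASCII-only, lowercase, hyphens, no leading digit) are illustrative assumptions; match them to your own CCMS's constraints.

```python
# A minimal ID/file-name sanitizer. The normalization rules below are
# illustrative assumptions, not the actual project's logic.
import re
import unicodedata

def sanitize_id(raw: str) -> str:
    # Strip accents, then drop any remaining non-ASCII characters.
    ascii_text = (
        unicodedata.normalize("NFKD", raw)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Lowercase and collapse anything non-alphanumeric into single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_text.lower()).strip("-")
    # XML IDs can't start with a digit, so prefix if needed.
    return slug if slug and not slug[0].isdigit() else f"id-{slug}"

print(sanitize_id("Configure Splunk: Indexes & Búckets (v9.1)"))
# -> configure-splunk-indexes-buckets-v9-1
```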
One non-AI tip: keep copies of your source and converted markup in a repository even after importing the converted content into your production tooling. I promise that you'll need to refer back to them.
AI as a partner, not a replacement
Reflecting on the project, I can emphatically say that AI didn't replace my critical thinking. Instead, it amplified my skills, helping me work at a speed and scale that would have been difficult to achieve alone, while streamlining the post-migration cleanup. But anytime I leaned too heavily on AI without careful planning, I wasted time and had to backtrack.
The real value came from pairing my domain knowledge and critical thinking with AI's ability to iterate quickly and implement. Used thoughtfully, AI helped me deliver a project that became a career milestone.
If you're facing your own daunting migration, or just want to get more out of AI in your workflow, I hope these lessons save you some pain, and maybe even inspire you to take on a challenge you might have thought was too big to tackle.
