AI in IT: What Actually Works and What's Just Hype
Every vendor pitch in 2026 starts the same way: “Our AI-powered solution will revolutionize your workflow.” Every conference talk promises the same thing. Every LinkedIn post from someone with “AI Evangelist” in their title says we’re six months away from full automation.
I’ve been using AI tools in production IT work for over a year now. Daily. Not in a lab. Not in a demo. In real infrastructure, with real deadlines and real consequences when things break.
Here’s what I’ve actually learned.
The Things AI Is Genuinely Good At
Let me start with the positive, because the positive is real. AI has made me measurably faster at specific parts of my job.
Code Generation for Known Patterns
When I need a Terraform module that follows Azure Verified Module conventions, or a CI/CD pipeline that does the standard build-test-deploy dance, AI handles that beautifully. It’s seen thousands of these patterns. It knows the boilerplate. It writes it faster than I can type it.
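To make that concrete, here’s the shape of that boilerplate. This is a minimal sketch, not an actual AVM module, and the generic variable names are mine:

```hcl
# Minimal sketch of the boilerplate AI generates well: standard inputs,
# one tagged resource, one output. Illustrative only, not a real AVM module.
variable "name" {
  type        = string
  description = "Name of the resource group."
}

variable "location" {
  type        = string
  description = "Azure region to deploy into."
}

variable "tags" {
  type        = map(string)
  description = "Tags applied to all resources."
  default     = {}
}

resource "azurerm_resource_group" "this" {
  name     = var.name
  location = var.location
  tags     = var.tags
}

output "resource_group_id" {
  description = "Resource ID of the created resource group."
  value       = azurerm_resource_group.this.id
}
```

None of this is hard. All of it is typing. That’s exactly the work you want to hand off.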
I wrote about this in detail in my experience with Claude Code. For about 70% of my daily coding work, AI is a genuine force multiplier. Not 10x. More like 2-3x. But 2-3x is enormous when compounded over months.
Cross-Referencing Documentation
“Make this Terraform resource compliant with CIS benchmark NS-2 while following AVM conventions.” That query would take me 30 minutes of doc-reading. AI does it in seconds because it can hold multiple documentation sources in context simultaneously.
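To give you a feel for the output, here’s roughly what comes back for a storage account. A hedged sketch: the control is about network-level restrictions, and the exact attribute names depend on your azurerm provider version:

```hcl
# Sketch of benchmark-driven network hardening on a storage account.
# Verify attribute names against your azurerm provider version.
resource "azurerm_storage_account" "this" {
  name                     = var.name
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # Block public network access; reach the account over private endpoints.
  public_network_access_enabled = false
  min_tls_version               = "TLS1_2"

  network_rules {
    default_action = "Deny"
    bypass         = ["AzureServices"]
  }

  tags = var.tags
}
```

The value isn’t the snippet itself. It’s that the model cross-referenced the benchmark, the naming conventions, and the provider schema in a single pass.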
Catching Inconsistencies
AI reads every line. Humans skim. When you’re reviewing a 500-line Terraform plan, AI catches the variable naming inconsistency on line 347 that you’d miss because you were thinking about lunch.
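A made-up example of the kind of slip I mean, in a codebase that’s otherwise consistent:

```hcl
# Hypothetical example: one camelCase variable in a snake_case codebase.
variable "storage_account_name" {
  type = string
}

variable "storageAccountTier" { # AI flags this; a hungry human skims past it
  type = string
}
```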
Writing First Drafts
Boilerplate documentation, PR descriptions, incident reports. Anything where the structure is standard and the content is predictable. AI writes a solid first draft that I edit, rather than me starting from a blank page.
The Things AI Is Confidently Bad At
Here’s where it gets interesting, because AI doesn’t just fail at these things. It fails while sounding completely sure of itself.
Understanding Your Specific Environment
AI doesn’t know that your team renamed the staging subscription last month. It doesn’t know that Henk from networking “temporarily” opened port 443 to the wrong endpoint six months ago and forgot about it. It doesn’t know that your Azure DevOps pipeline times out on Tuesdays because of a scheduled compliance scan.
Context is everything in IT operations, and context is exactly what AI lacks. I’ve seen AI confidently suggest fixes for problems that don’t exist in our environment, while completely missing the actual cause, because finding it requires knowledge of recent changes that aren’t in any documentation.
Debugging Novel Problems
If the problem has been solved before and documented somewhere, AI can probably help. If the problem is genuinely new (a unique combination of your infrastructure, your configuration, and your specific failure mode), AI guesses. And its guesses sound authoritative, which makes them dangerous.
I’ve written more about this in why troubleshooting as a skill is disappearing. The danger isn’t that AI gives wrong answers. It’s that people stop questioning those answers.
Architecture Decisions
“Should we use AKS or Container Apps for this workload?” AI will give you a comparison table. It’ll list pros and cons. It’ll sound very reasonable. But it can’t weigh the factors that actually matter: your team’s skill level, your budget constraints, your compliance requirements, your timeline, and whether the architect who’d maintain it is leaving next quarter.
Architecture is judgment. AI is pattern matching. Those aren’t the same thing.
Security Review
AI can find known vulnerability patterns. It can’t evaluate whether your specific security posture makes sense for your threat model. It doesn’t understand that the “best practice” it’s suggesting would break your SOC’s monitoring setup, or that the encryption standard it recommends isn’t compatible with your compliance framework.
The Things Nobody Talks About
The Quality Problem
The volume of AI-generated code has exploded. That’s not inherently good or bad. It depends entirely on whether the human reviewing it can tell good output from bad. I wrote about this in The AI Slop Problem. We’re producing more code, reviewed less carefully, deployed faster. That’s not productivity. That’s a technical debt factory running at full speed.
The Skills Gap It Creates
Junior engineers who start their careers with AI as a crutch develop differently than those who had to struggle through problems manually. They ship faster but understand less. When AI gives them the wrong answer - and it will - they don’t have the foundational knowledge to recognize it.
This isn’t hypothetical. I’ve interviewed candidates with certifications who couldn’t explain basic concepts when the conversation went off-script. The certification problem and the AI problem are related: both create a surface-level appearance of competence without the depth to back it up.
The Cost Nobody Mentions
AI API calls cost money. Running Claude or GPT-4 for a team of 10 engineers isn’t free. I’ve seen monthly AI bills exceed the cost of the junior developer the AI was supposed to replace. Nobody includes this in the “AI saves 40% of developer time” calculations.
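A quick back-of-envelope with made-up but plausible numbers: 10 engineers, each burning through roughly 2 million tokens a day at a blended rate of $10 per million tokens, works out to 10 × 2M × 22 working days × $10/M ≈ $4,400 a month. Swap in your own usage and pricing, but do the multiplication before you sign the contract.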
The Vendor Lock-In
Your team builds workflows around Claude. Anthropic changes their API. Or their pricing. Or their rate limits. Now what? The same vendors selling you “AI-powered DevOps” today will sell you “migration services” tomorrow when they change their model.
What I Actually Recommend
After a year of daily production use, here are my honest recommendations for IT teams:
Use AI for acceleration, not automation. Let it write the first draft. Let it catch the obvious mistakes. Let it help you search documentation faster. But keep a human in the loop for every decision that matters. I applied this exact philosophy in my Azure FinOps dashboard build on Agent Framework 1.0: the agents suggest and analyze, but destructive actions like applying tags go through a human approval gate.
Invest in fundamentals. The engineers who benefit most from AI are the ones who already understand the systems they’re working with. AI makes good engineers faster. It makes bad engineers more dangerous.
Don’t trust AI with context-dependent decisions. If the answer depends on knowing your specific environment, your team, or your history, do it yourself. AI doesn’t have your context and it won’t tell you that.
Track actual ROI. Don’t just count “time saved.” Count the time spent fixing AI mistakes. Count the API costs. Count the debugging time when AI-generated code fails in production. The net benefit is real, but it’s smaller than the marketing says.
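As a formula, with every term measured rather than estimated: net monthly benefit ≈ (hours saved × loaded hourly rate) − API spend − (hours spent reviewing, fixing, and debugging AI output × that same rate). The “AI saves 40% of developer time” pitch only ever shows you the first term.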
Keep debugging skills alive. When AI can’t solve the problem - and there will always be problems AI can’t solve - you need engineers who can trace a packet through a firewall, read a stack trace, and correlate timestamps across systems. Those skills atrophy when you stop practicing them.
The Bottom Line
AI in IT is not the revolution vendors promise. It’s also not the threat that doomers predict. It’s a powerful tool that makes competent engineers more productive and incompetent engineers more confidently wrong.
The engineers who’ll thrive are the ones who use AI as a tool while maintaining the skills to work without it. The ones who accept the output when it’s right, catch it when it’s wrong, and always understand why.
That requires experience. Real experience. The kind you get from working at a small company where everything is your problem, not from watching AI generate solutions you don’t understand.
The hype will calm down. The tools will get better. And the fundamentals will still be the fundamentals. That’s not a prediction. That’s a pattern I’ve seen in every technology hype cycle since I traded my espresso machine for a terminal.