AI software engineering works in two directions. The first is applying software engineering discipline to AI systems — bringing the rigour that AI development often skips. The second is using AI agents to do software engineering work, treated as a serious practice rather than a casual one.
Both require the same underlying thing: treating AI as a systems problem, not a demo problem.
The gap between an AI demo and an AI system in production is a software engineering problem. Demos don't need observability, error handling, data freshness guarantees, or graceful degradation. Production systems do.
This means wiring model outputs into operational data flows with the same care you'd apply to any critical pipeline — versioning, testing, monitoring, and knowing what to do when the model returns something unexpected. An AI feature that works 95% of the time is a liability if the 5% isn't handled.
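As a minimal sketch of what "handling the 5%" can look like: wrap the model call so that malformed or out-of-schema output degrades to a safe default instead of propagating downstream. Everything here is illustrative — `call_model`, the category schema, and the fallback shape are assumptions, not any particular API.

```python
import json

# Hypothetical model call: in a real system this would hit your inference
# endpoint. Here it returns a canned response so the sketch runs standalone.
def call_model(prompt: str) -> str:
    return '{"category": "billing", "confidence": 0.93}'

ALLOWED_CATEGORIES = {"billing", "shipping", "other"}
FALLBACK = {"category": "other", "confidence": 0.0, "degraded": True}

def classify(prompt: str) -> dict:
    """Wrap the model call so the failure case is handled, not hoped away."""
    try:
        raw = call_model(prompt)
        parsed = json.loads(raw)          # model may return non-JSON
        category = parsed["category"]     # ...or omit required fields
        confidence = float(parsed["confidence"])
        if category not in ALLOWED_CATEGORIES or not 0.0 <= confidence <= 1.0:
            return FALLBACK               # schema-valid but semantically wrong
        return {"category": category, "confidence": confidence, "degraded": False}
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return FALLBACK                   # graceful degradation, never a crash

result = classify("Why was I charged twice?")
```

The point of the wrapper is that downstream code only ever sees a value with a known shape; monitoring the rate of `degraded: True` results then becomes the observability signal the paragraph calls for.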
Agentic development — using coding agents to write, refactor, and reason about software — is a genuine shift in how software gets built. But there is a large gap between pasting code into a chat window and working systematically with an AI agent.
The discipline is in the context: structured prompts, persistent memory, tool integration, and verification habits. An agent with good context produces better work than a better model with poor context. That's a software engineering insight, not a prompting tip.
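One way to make that discipline concrete is to treat the agent's context as a first-class artifact with a fixed structure, rather than an ad-hoc chat transcript. The sketch below assumes a made-up `AgentContext` shape — the class, section order, and tool description format are illustrative, not any framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Context as an artifact: structured, ordered, and persisted across sessions."""
    system_prompt: str
    memory: list = field(default_factory=list)   # persistent project conventions
    tools: list = field(default_factory=list)    # descriptions of callable tools

    def remember(self, fact: str) -> None:
        """Record a decision or convention so later sessions inherit it."""
        self.memory.append(fact)

    def render(self) -> str:
        """Assemble the prompt the agent actually sees, in a stable order."""
        sections = [
            "## Instructions\n" + self.system_prompt,
            "## Project memory\n" + "\n".join(f"- {m}" for m in self.memory),
            "## Available tools\n" + "\n".join(f"- {t}" for t in self.tools),
        ]
        return "\n\n".join(sections)

ctx = AgentContext(system_prompt="Refactor only; never change public APIs.")
ctx.remember("Tests live under tests/; run them with pytest.")
ctx.tools.append("run_tests: execute the test suite and report failures")
prompt = ctx.render()
```

Because the context is assembled from versioned pieces, it can be diffed, tested, and reviewed like any other part of the system — which is what separates working systematically with an agent from pasting code into a chat window.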