Sometime in mid-2023, a friend of mine — a data scientist with five years of experience training gradient-boosted models at a fintech company — messaged me: "I think my job title is wrong. I spend 80% of my time calling OpenAI's API and building RAG pipelines now. Am I still an ML engineer?"
He wasn't being dramatic. Something had genuinely shifted under his feet. The models he used to spend months training were now available as an API call. The data pipelines he'd painstakingly built to feed training loops were being replaced by vector databases and retrieval chains. The math hadn't changed, but the job had.
That question — "Am I an ML engineer or an AI engineer?" — is one that thousands of people in the industry are quietly asking themselves right now. And the answer matters more than you'd think, because behind the label sits a real divergence in skills, career trajectory, salary, and what you'll actually be doing at your desk on a Tuesday morning.
Here's what I found when I tried to untangle it.
Where This All Started: One Blog Post That Named a Profession
On June 30, 2023, Shawn "swyx" Wang published "The Rise of the AI Engineer" on Latent Space. It became his most-read post ever, and for good reason — it put a name on something everyone was feeling but nobody had articulated.
The core thesis was simple: LLMs had created an entirely new layer of abstraction, and the people building on top of that layer were doing fundamentally different work from the people building the models themselves. An ML engineer trains a model. An AI engineer uses a model to build a product. Both are engineering. They're not the same engineering.
Andrej Karpathy — who has as much credibility on this topic as anyone alive — responded on Twitter: "I've so far called this role 'Prompt Engineer' but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe 'AI Engineer' is ~usable." He added something that turned out to be prophetic: "In numbers, there's probably going to be significantly more AI Engineers than there are ML engineers. One can be quite successful in this role without ever training anything."
Three years later, the data has proven him right.
The Numbers — Because Everything Else Should Be Read Through This Lens
LinkedIn's Jobs on the Rise 2026 report ranked AI Engineer as the #1 fastest-growing job in the United States. Not #5. Not "one of the fastest." Number one.
The numbers behind that ranking are staggering. AI Engineer positions grew 143.2% in 2025, making it the single fastest-growing role title in tech. The World Economic Forum, citing LinkedIn data, reports that AI has already created 1.3 million new roles — titles like AI Engineer, Forward-Deployed Engineer, and Data Annotator that barely existed three years ago. The share of AI/ML jobs in the tech job market went from 10% to 50% between 2023 and 2025.
Meanwhile, Stack Overflow's 2025 Developer Survey found that 84% of developers are using or planning to use AI tools — up from 76% in 2024 and 70% in 2023. That's not a trend. That's saturation.
And the U.S. Bureau of Labor Statistics still projects software developer employment to grow 17.9% through 2033 — much faster than average. The jobs aren't disappearing. They're transforming.
So What Actually Is the Difference?
The cleanest framing I've found comes from Chip Huyen's conversation with Gergely Orosz on The Pragmatic Engineer. Huyen — author of AI Engineering, the most-read book on O'Reilly's platform since its launch — describes the distinction this way:
AI engineering is application-first. You start by building the product — the chatbot, the search system, the document processor — using existing foundation models via API. You prototype fast. You only consider training or fine-tuning a custom model much, much later, if ever.
ML engineering is model-first. You spend weeks or months building, training, and optimizing a model on your specific data. Then you build the application around it. The model is the product, or at least the core of it.
That might sound like a subtle distinction. It's not. It changes everything — what you learn, what tools you use, what your day looks like, and frankly, what kind of person thrives in each role.
The AI Engineer's Workbench
If you're an AI engineer in 2026, your daily tools look like this: LangChain or LlamaIndex for orchestrating LLM calls. Vector databases like Pinecone, Weaviate, or Chroma for storing embeddings. RAG pipelines that retrieve relevant context before each LLM call. Prompt engineering — or, as Karpathy now prefers to call it, "context engineering": "the delicate art and science of filling the context window with just the right information for the next step."
You're spending your time on system architecture, API integration, evaluation frameworks, and deployment. You're debugging why the retrieval step is pulling irrelevant documents, not why the gradient is exploding. Your background is often full-stack development — and you got here by pivoting, not by getting a PhD.
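To make the "retrieve, then prompt" loop concrete, here is a minimal sketch of a single RAG step in plain Python. Everything in it is illustrative: the three-dimensional "embeddings", the in-memory index, and the helper names are made up, and a real system would use an embedding model plus a vector database like the ones named above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

def build_prompt(question, context_docs):
    """Stuff the retrieved context into the prompt before the LLM call."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy in-memory "vector database"; vectors are hand-made for the example.
index = [
    {"text": "Refunds are processed within 5 business days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Our office is closed on public holidays.",      "vec": [0.0, 0.2, 0.9]},
    {"text": "Refund requests require an order number.",      "vec": [0.8, 0.3, 0.1]},
]

query_vec = [1.0, 0.2, 0.0]  # pretend embedding of "How do refunds work?"
docs = retrieve(query_vec, index, k=2)
prompt = build_prompt("How do refunds work?", docs)
```

The debugging task mentioned above ("why is retrieval pulling irrelevant documents?") lives almost entirely in the `retrieve` step: bad embeddings, a bad similarity metric, or a bad `k`.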
The ML Engineer's Workbench
An ML engineer's day looks fundamentally different. You're in PyTorch or TensorFlow, building model architectures. You're writing data pipelines that clean, transform, and feature-engineer datasets. You understand why a learning rate of 3e-4 works and 3e-3 doesn't. You know what a vanishing gradient is, not from a textbook, but because you've debugged one at 2am.
Your tools: scikit-learn, XGBoost, PyTorch, TensorFlow, MLflow, Kubeflow. Your background: statistics, math, often a graduate degree. Your competitive advantage isn't that you can call an API — it's that you can build what's behind the API.
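That learning-rate intuition can be shown on a toy problem. The sketch below runs plain gradient descent on a made-up one-dimensional loss whose curvature is chosen so that the exact rates mentioned above behave differently: 3e-4 converges, 3e-3 diverges. Real training is vastly higher-dimensional, but the failure mode is the same.

```python
def grad(w):
    """Gradient of the toy loss L(w) = 500 * (w - 3)**2, minimized at w = 3."""
    return 1000.0 * (w - 3.0)

def train(lr, steps=50, w0=0.0):
    """Vanilla gradient descent: repeatedly step against the gradient."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Each step multiplies the error (w - 3) by (1 - 1000 * lr):
#   lr = 3e-4 -> factor  0.7, error shrinks every step
#   lr = 3e-3 -> factor -2.0, error doubles in magnitude every step
w_good = train(lr=3e-4)  # settles near the minimum at w = 3
w_bad = train(lr=3e-3)   # overshoots further each step and blows up
```

The same arithmetic is why optimizer and schedule choices dominate so much of an ML engineer's debugging time: a factor of ten in the learning rate is the difference between convergence and an exploding loss.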
As one comparison puts it: the AI engineer is the architect who designs the building. The ML engineer builds the engine that powers it. Both are essential. Neither can do the other's job well.
The Salary Question Everyone's Actually Here For
Let's be honest — most people reading a "vs" article want to know who makes more money. The answer is: it's complicated, and anyone who gives you a simple number is lying.
According to Vettio's 2026 analysis, which aggregates data across major compensation databases:
- AI Engineer median base: $134,000–$145,000 nationally. Senior level: $155,000–$203,000. Tech hubs (San Jose): $206,000+.
- ML Engineer median base: $149,000–$159,000 nationally. Senior level: $168,000–$220,000+. Tech hubs: $187,000–$260,000+.
At first glance, ML engineers earn more, and on average they do: roughly 10–15% higher base salary. That makes sense: ML engineering requires deeper specialization, and the supply of people who can actually train production models is smaller.
But here's where it gets interesting. Specialized AI engineers working on generative AI and agent-based systems are seeing base salaries between $175,000 and $250,000, because companies are racing to deploy autonomous agents and can't find enough people who know how to do it well.
The ceiling is higher for ML engineers at top labs. Senior ML researchers at OpenAI, Anthropic, and Google DeepMind command total compensation packages that can exceed $500,000–$900,000. But those are the NBA-equivalent salaries — the top 0.1% of the field.
For the median practitioner? A 21% wage premium comes from combining technical and soft skills — regardless of which title you hold. The premium isn't in the label. It's in the combination of depth and breadth.
The "Is ML Engineering Dead?" Debate
I need to address this directly, because I've seen this take on Twitter, Reddit, and Hacker News at least a hundred times in the last year. The argument goes: "Why would anyone train a model from scratch when GPT-4 exists? ML engineering is dead."
It's wrong. Here's why.
LLMs can't do everything. Try running a fraud detection system on GPT-4. A single inference call costs orders of magnitude more than a gradient-boosted tree that runs in single-digit milliseconds. When your payment processor needs to evaluate 10,000 transactions per second with sub-50ms latency, you don't call an API — you run XGBoost on a GPU.
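To see why, consider what inference in a tree ensemble actually is: a handful of comparisons and additions per transaction, all in local memory, with no network hop. The sketch below uses three hand-written decision stumps with made-up features and thresholds; a trained XGBoost model is the same idea with hundreds of deeper trees.

```python
# Each stump: (feature_index, threshold, value_if_below, value_if_at_or_above).
# The thresholds and values are invented for illustration; in a real system
# they come out of training a gradient-boosted model on labeled fraud data.
STUMPS = [
    (0, 500.0, -0.2, 0.8),  # amount over $500 pushes the score up
    (1, 3.0,   -0.1, 0.6),  # more than 3 transactions this hour is suspicious
    (2, 0.5,   -0.3, 0.9),  # billing-country mismatch flag is suspicious
]

def score(transaction):
    """Score one transaction: one comparison and one addition per tree.
    This is why latency stays in the microsecond range, not the seconds
    an LLM API round-trip can take."""
    s = 0.0
    for feat, thresh, below, above in STUMPS:
        s += below if transaction[feat] < thresh else above
    return s

normal = score([40.0, 1.0, 0.0])   # small amount, low velocity, same country
risky = score([900.0, 7.0, 1.0])   # large amount, high velocity, mismatch
```

Scale that to 10,000 transactions per second and the argument makes itself: the whole ensemble evaluates in a tight loop over numbers already in memory.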
Recommendation systems at scale don't use LLMs. Netflix, Spotify, Amazon — their recommendation engines are built on collaborative filtering, matrix factorization, and custom neural architectures trained on billions of interaction events. An LLM is a generalist. A recommendation model trained on your specific user behavior data is a specialist. The specialist wins every time.
Structured data still runs the world. Most enterprise data is tabular — rows and columns in databases. For structured data, traditional ML algorithms like Random Forest, XGBoost, and LightGBM consistently outperform deep learning approaches while being faster, cheaper, and more interpretable.
Someone still has to build the foundation models. The next GPT, the next Claude, the next Gemini — those are being built by ML engineers and researchers. The AI engineer ecosystem is downstream of this work. If ML engineering died, AI engineering would die with it.
What has changed is the ratio. Karpathy's prediction was right: there are now significantly more AI engineers than ML engineers. But being outnumbered doesn't mean being dead. It means more specialized, more senior, and arguably more important than ever.
The Prompt Engineering Controversy
I can't write about AI engineering without touching the third rail that is prompt engineering.
The skepticism is real, and it's not just trolling. A Hacker News thread captured the sentiment perfectly: "'Prompt engineering' isn't real. What are you engineering? You're throwing s*** at a wall." Another popular thread asked, simply: "Is prompt engineering just snake oil?"
I understand the frustration. When someone writes "Senior Prompt Engineer" on their LinkedIn and their job is asking ChatGPT questions slightly more cleverly than the average person, it's hard not to be cynical.
But here's the thing — that's not what production AI engineering looks like. The actual work involves designing system prompts that reliably produce structured output, building evaluation harnesses that catch regressions, orchestrating multi-step chains with error handling and retry logic, and managing context windows that can make or break a product.
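Here is a rough sketch of what that glue looks like in practice: validating structured output and retrying with backoff. The `call_llm` stub is a hypothetical stand-in for a real API call; in this toy version it always returns well-formed JSON, so the retry path never actually fires.

```python
import json
import time

def call_llm(prompt):
    """Hypothetical stand-in for a real model API call; returns a raw string."""
    return '{"sentiment": "positive", "confidence": 0.93}'

def parse_or_none(raw, required_keys):
    """Accept the reply only if it is valid JSON containing the expected keys."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not all(k in data for k in required_keys):
        return None
    return data

def structured_call(prompt, required_keys, retries=3, backoff=0.1):
    """Retry with exponential backoff until the model returns parseable output."""
    for attempt in range(retries):
        result = parse_or_none(call_llm(prompt), required_keys)
        if result is not None:
            return result
        time.sleep(backoff * 2 ** attempt)  # wait longer after each failure
    raise RuntimeError("model never returned valid structured output")

reply = structured_call("Classify: 'Great product!'", ["sentiment", "confidence"])
```

Multiply this pattern by every step in a multi-stage chain, add an evaluation harness on top, and you have the unglamorous core of production AI engineering.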
Karpathy himself acknowledged this evolution. In mid-2025, he publicly endorsed the term "context engineering" over "prompt engineering": "People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step."
MIT Technology Review covered this shift, noting that 2025 saw "a loose, vibes-based approach give way to a systematic approach to managing how AI systems process context."
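A toy version of that systematic approach: given relevance-scored snippets and a token budget, greedily pack the most relevant ones into the context window. The 4-characters-per-token estimate and the snippets are illustrative only; a production system would use the model's actual tokenizer.

```python
def estimate_tokens(text):
    """Crude token estimate: roughly 4 characters per token (a rule of
    thumb for English text, not an exact count)."""
    return max(1, len(text) // 4)

def pack_context(snippets, budget):
    """Greedily keep the highest-scored snippets that fit in the budget."""
    chosen, used = [], 0
    for score, text in sorted(snippets, reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen, used

# (relevance score, snippet) pairs; scores would come from a retriever.
snippets = [
    (0.9, "Refund policy: refunds are issued within 5 business days."),
    (0.7, "Shipping: orders ship within 24 hours of payment."),
    (0.2, "Company history: founded in 2012 in a garage."),
]
context, used = pack_context(snippets, budget=30)
```

With a budget of 30 tokens, the two relevant snippets fit and the irrelevant one is dropped, which is the whole game: the right information in the window, nothing else.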
So is prompt engineering "real engineering"? The naming was bad. The work is real. And it's evolving so fast that arguing about the label is mostly a waste of time.
Who Gets Hired — And What They Actually Need
One of the most persistent myths in AI is that you need a PhD. Let me kill that with data.
Anthropic — the company behind Claude — has approximately 50% of its technical staff without PhDs. Their careers page explicitly states: "If you have done interesting independent research, written an insightful blog post, or made substantial contributions to open-source software, put that at the TOP of your resume." Not your degree. Your work.
OpenAI runs a Residency Program — a 6-month program specifically for career-changers from physics, mathematics, neuroscience, or software engineering who have strong fundamentals but no formal ML experience. They pay $220,000 annualized. The message is clear: they'd rather take someone brilliant from an adjacent field than someone mediocre with a PhD in ML.
According to LinkedIn's data, the average AI engineer has 3.7 years of experience. The most common prior roles? Software engineer, data scientist, and full-stack engineer. It's not a role people are born into — it's one they pivot to.
That said, the requirements diverge sharply depending on which path you take:
AI Engineering — no PhD required. Strong software engineering skills are more important than research chops. You need to know how to build production systems, integrate APIs, manage infrastructure. The skillset overlaps heavily with senior backend or full-stack engineering.
ML Engineering (at top labs) — a PhD helps but isn't mandatory if you have equivalent research output. What's non-negotiable is deep mathematical maturity: linear algebra, calculus, probability theory, and optimization. You can't fake this. Either you understand why the Adam optimizer works, or you don't.
ML Research — this is where PhDs genuinely matter. If you want to be the person inventing new architectures at DeepMind or writing papers at Anthropic's alignment team, a graduate degree (or exceptional independent research) is essentially required.
The uncomfortable truth: only 18% of tech postings in Q2 2025 were open to candidates with one year or less of experience. The entry ramp is getting steeper for everyone — not because companies are cruel, but because the baseline competency threshold has risen dramatically.
The Career Path Decision Tree
After reading hundreds of job postings, salary reports, and career guides, here's my honest framework for thinking about this:
Choose AI engineering if:
- You come from software engineering, full-stack, or backend development
- You want to build products quickly using existing models
- You're comfortable with APIs, system design, and deployment
- You'd rather ship something this month than optimize a loss function for six months
- You don't have (or don't want) a graduate degree in math or CS
Choose ML engineering if:
- You genuinely enjoy mathematics — not tolerate it, enjoy it
- You want to work on problems where calling an API isn't good enough
- You care about model performance at the margin — making something 2% more accurate matters to you
- You're interested in domains like fraud detection, recommendation systems, autonomous vehicles, or drug discovery
- You're comfortable with longer iteration cycles and deeper specialization
Consider doing both if:
- You're already an ML engineer who wants to stay relevant as the industry shifts
- You're in a smaller company where one person wears both hats
- You want maximum career optionality
That 21% wage premium for combining technical and soft skills? It applies doubly to people who can bridge both worlds — who can train a custom model when the foundation model isn't good enough, but also build the production system around it.
What I Actually Think
After three weeks of research, here's where I land:
The AI engineer role is real, important, and here to stay. It's not a fad. It's not "just prompt engineering." It's a genuine new discipline created by a genuine technological shift. When LinkedIn ranks it #1 and the World Economic Forum tracks 1.3 million new roles in the category, the market has spoken.
ML engineering is not dead. It's more specialized. The number of people who need to train models from scratch has shrunk. The number of people who should hasn't — it's just that the bar has gone up. If you're an ML engineer working on problems where LLMs can't help (latency-critical systems, structured data, domain-specific models), your skills are more valuable now, not less. You're just competing in a smaller, more elite pool.
The biggest career risk is being stuck in the middle. A software engineer who "kind of" knows prompt engineering but can't architect a production RAG system. An ML engineer who "kind of" knows PyTorch but hasn't shipped a model since 2022. The market rewards depth in either direction. It doesn't reward vague familiarity with both.
The "prompt engineering isn't real" crowd is fighting the wrong battle. Whether you call it prompt engineering, context engineering, or "the stuff you do between the API call and the user," the work is real and it's growing. The naming is a sideshow. Prompt engineering evolved into AI engineering the same way webmastering evolved into frontend development. The early name sounded silly. The discipline that grew from it didn't.
The education debate is increasingly irrelevant. Anthropic hires people without PhDs for technical roles. OpenAI pays $220K/year for career-changers through its Residency. The pattern is clear: what you've built matters more than what degree you hold. This will only accelerate as more AI engineers come from software engineering backgrounds rather than research labs.
The real question isn't "AI engineer or ML engineer?" It's: "What problem am I solving, and which approach solves it better?" Sometimes you need a custom model. Usually you don't. Knowing when to use which is the actual skill — and the people who have it will always be in demand, regardless of what's on their business card.
If You're Making the Decision Right Now
Here's my pragmatic advice for someone standing at this fork:
- If you're a software engineer wanting to get into AI — start with AI engineering. Build a RAG pipeline. Deploy an LLM-powered feature. Ship something real. You can go deeper into ML later if the math calls to you.
- If you're a data scientist feeling the ground shift — you probably already have the statistical foundation for ML engineering. The question is whether you want to go deeper into model building or broader into system integration. Both are valid.
- If you're a student choosing a path — learn software engineering fundamentals first. Seriously. The most common prior role for AI engineers is software engineer. You can't orchestrate LLM calls if you can't write production code.
- If you're an ML engineer worried about relevance — don't panic. Learn the AI engineering stack (RAG, LangChain, vector DBs) to broaden your toolkit. But don't abandon your depth. The world needs people who can train models. It just needs fewer of them — and they'd better be good.
- Wherever you are — build in public. Anthropic literally tells applicants to put open-source contributions at the top of their resume. A deployed project beats a credential every time.
One Last Thing
In June 2023, swyx wrote that AI engineering was "a new kind of engineering." Three years later, it's not new anymore. It's mainstream. It's the fastest-growing job title in the country.
But ML engineering didn't die when AI engineering was born. It contracted, specialized, and became more rigorous. The people who were training models because it was trendy moved on. The people who were training models because they were good at it are still here — and they're more valuable than ever.
The real story isn't AI engineering versus ML engineering. It's the same story that plays out every time a new abstraction layer emerges in computing. Some people build the layer. Others build on top of it. We need both. We always have.
The only bad choice is standing still.
Sources
- Latent Space — "The Rise of the AI Engineer" by swyx (June 30, 2023)
- Andrej Karpathy — Tweet endorsing "AI Engineer" terminology
- Andrej Karpathy — "Context engineering" over "prompt engineering"
- LinkedIn — Jobs on the Rise 2026: 25 Fastest-Growing Roles in the U.S.
- World Economic Forum — AI Has Already Added 1.3 Million Jobs (LinkedIn data)
- Software Oasis — Growth in AI Job Postings: 2025 Statistics & Industry Trends
- Stack Overflow — 2025 Developer Survey: AI Section
- U.S. Bureau of Labor Statistics — Software Developers Occupational Outlook
- U.S. Bureau of Labor Statistics — AI Impacts in Employment Projections
- The Pragmatic Engineer — AI Engineering with Chip Huyen
- O'Reilly — AI Engineering: Building Applications with Foundation Models (Chip Huyen)
- Vettio — AI Engineer vs ML Engineer: Demand, Salaries, and Career Growth
- Axiom Recruit — AI Engineer Compensation 2026
- Skillify — AI Engineer vs ML Engineer: Salary, Skills & Career Fit
- DataExpert — AI Engineering Career Path: Complete Guide for 2026
- DataExec — Breaking Into AI in 2026: What Anthropic, OpenAI, and Meta Actually Hire For
- Firecrawl — Best Hands-On Resources to Learn AI Engineering in 2026
- MIT Technology Review — From Vibe Coding to Context Engineering
- Rajiv Pant — Why Prompt Engineering Is Legitimate Engineering
- DEV.to — Prompt Engineering is Dead, Long Live AI Engineering
- Hacker News — "Prompt engineering isn't real" debate
- NVIDIA — XGBoost Glossary
- System Design Handbook — ML System Design: A Complete Guide (2026)
- Helpware — Application of Machine Learning in 2026
- Final Round AI — Software Engineering Job Market Outlook for 2026
- Anthropic — Careers Page
I'm Ismat, and I build BirJob — Azerbaijan's job aggregator. If this article helped you think about your career, you can support my work at birjob.com/support.
