After attending Google Cloud’s recent Startup School: Agentic AI webinar [1], which introduced the Agent Development Kit (ADK), I was reminded of a conversation with a startup founder where I first explained agentic AI: systems that reason, plan, and collaborate autonomously.

He asked: “Wait, how is this any different from our FM/LLM-backed worker‑based microservice app?”

It’s a great question. Workers backed by LLMs can reason within task boundaries. Especially when they receive enriched context from upstream—via stateful queues or embedded payloads—they begin to behave more intelligently. The line starts to blur.

I illustrated the difference between microservices and agentic architectures using a financial-services trading platform as the business use case [2]:

• FM/LLM-backed microservices power order entry, risk analysis, compliance checks, and settlement. Each service scales independently, integrates via APIs, and excels at throughput, but remains reactive and requires external orchestration.
• Agentic AI deploys an autonomous trading agent that continuously monitors market conditions, initiates orders when strategy thresholds are met, adapts risk parameters on the fly, and invokes downstream microservices for settlement, all while maintaining context across trades and collaborating with other agents.
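To make the contrast concrete, here is a minimal, illustrative-only sketch in Python. The reactive services do nothing until called; the agent loop monitors a feed, decides when to act, orchestrates the service calls itself, and keeps memory across trades. All names (`risk_service`, `settlement_service`, `trading_agent`) are hypothetical stand-ins, not a real trading API:

```python
def risk_service(order):
    """Reactive microservice: sits idle until something calls it."""
    return order["qty"] <= 100  # toy risk check

def settlement_service(order):
    """Reactive microservice: executes settlement on request."""
    return f"settled {order['qty']} {order['symbol']}"

def trading_agent(market_feed, threshold=0.8):
    """Proactive agent: monitors conditions, initiates orders when the
    strategy threshold is met, and invokes downstream services itself."""
    memory = []  # context maintained across trades
    for tick in market_feed:
        if tick["signal"] >= threshold:      # strategy threshold met
            order = {"symbol": tick["symbol"], "qty": 50}
            if risk_service(order):          # agent orchestrates the call
                memory.append(settlement_service(order))
    return memory

# Only the first tick clears the threshold, so one trade settles.
print(trading_agent([{"symbol": "ACME", "signal": 0.9},
                     {"symbol": "ACME", "signal": 0.3}]))
```

The key difference is *who drives*: in the microservice version, some external orchestrator would have to call `risk_service` and `settlement_service` in order; here the agent loop owns that decision.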

FM/LLM-backed microservice apps are still reactive. They do not initiate tasks, plan across workflows, or collaborate natively.
Agentic AI, by contrast, is proactive. Agents decide what to do, how to do it, and with whom. They orchestrate workflows, adapt dynamically, and maintain memory across interactions. They are designed for autonomy, not just execution.

These architectures aren’t mutually exclusive. They complement each other, and the sweet spot is a hybrid architecture:
• Agents handle planning and delegation
• Workers execute tasks with FM/LLM intelligence
• Queues decouple and scale the flow while carrying enough context to enable reasoning-aware execution

This pattern preserves agentic reasoning while embracing production-grade throughput. It is especially relevant in Google ADK, NVIDIA NIM workflows, and real-world RAG pipelines.
The webinar didn’t dive into this comparison, but it sparked the reflection. If you’re exploring scalable AI workflows, feel free to connect. Always happy to exchange ideas and learn from others in the space.

Sources:
[1] Startup School: Agentic AI
[2] From microservices to AI agents: The evolution of application architecture

Written by Jonathan Wong at LinkedIn. Originally published as Agentic AI vs FM/LLM-Backed Microservices: A Real-World Reflection
