# Azure Container Apps vs Azure Functions: When to Use What
Choosing the right compute platform on Azure is one of those decisions that seems straightforward until you're three months into a project and realize you picked wrong. I've been on both sides of this: migrating a Functions-based event processor to Container Apps when we hit scaling limits, and moving an over-engineered Container Apps deployment back to Functions when we realized we were paying for idle compute we didn't need.
Azure Container Apps and Azure Functions both fall under the "serverless" umbrella, but they solve fundamentally different problems. Functions is event-driven compute optimized for short-lived, discrete operations. Container Apps is a managed container platform that gives you more control over the runtime while still abstracting away infrastructure. The overlap in capabilities has grown significantly, which makes the decision harder.
This post walks through a practical decision framework based on real production workloads. No marketing fluff, just trade-offs I've observed firsthand across a dozen projects.
## The Comparison at a Glance
Before diving into the details, here's the side-by-side comparison I wish someone had given me two years ago:
| Feature | Azure Functions | Azure Container Apps |
|---|---|---|
| Execution model | Event-triggered, short-lived | Long-running containers |
| Max execution time | 10 min (Consumption), unlimited (Premium/Dedicated) | Unlimited |
| Scale-to-zero | Yes (Consumption plan) | Yes (with KEDA rules) |
| Cold start | 1-10 seconds (Consumption) | 5-30 seconds (depending on image size) |
| Scale unit | Function instance | Container replica |
| Max scale-out | 200 instances (Consumption) | 300 replicas per revision |
| Custom containers | Yes (Premium/Dedicated plans) | Native |
| Networking | VNet integration (Premium+) | VNet integration (built-in) |
| State management | External (Durable Functions) | Dapr integration or external |
| Pricing model | Per-execution + GB-s | Per vCPU-s + per GiB-s |
| Min cost (idle) | $0 (Consumption) | $0 (scale-to-zero enabled) |
| Built-in triggers | 20+ (HTTP, Queue, Timer, Cosmos, etc.) | HTTP, KEDA scalers |
| Language support | C#, JS/TS, Python, Java, PowerShell (Go/Rust via custom handlers) | Any (containerized) |
| Observability | Application Insights (built-in) | Azure Monitor, Dapr observability |
| Deployment | ZIP deploy, CLI, CI/CD | Container images, CI/CD |
## When Azure Functions Is the Right Choice
Functions excels when your workload is genuinely event-driven and each execution is a discrete, independent operation. Here's what that looks like in practice:
```csharp
// Classic Functions use case — processing messages from a queue
// (_validator, _orderService, and _notificationService are injected dependencies)
[Function("ProcessOrderEvent")]
public async Task Run(
    [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")]
    OrderEvent orderEvent,
    FunctionContext context)
{
    var logger = context.GetLogger("ProcessOrderEvent");
    logger.LogInformation("Processing order {OrderId}", orderEvent.OrderId);

    // Validate
    var validation = await _validator.ValidateAsync(orderEvent);
    if (!validation.IsValid)
    {
        logger.LogWarning("Invalid order {OrderId}: {Errors}",
            orderEvent.OrderId,
            string.Join(", ", validation.Errors));
        return; // Dead-letter handling configured on the queue
    }

    // Process
    await _orderService.FulfillAsync(orderEvent);

    // Notify
    await _notificationService.SendAsync(
        orderEvent.CustomerId,
        $"Order {orderEvent.OrderId} confirmed");
}
```
It's common to see teams reach for Container Apps when Functions would have been simpler and cheaper. Ask yourself: does your workload have natural trigger points? Is each execution independent? Does it complete in under 5 minutes? If the answer to all three is yes, Functions is almost certainly the right choice.
Where Functions shines:
- Queue/message processing with unpredictable volumes
- HTTP APIs with spiky traffic and long idle periods
- Scheduled jobs (timer triggers replace a lot of cron infrastructure)
- Event-driven workflows with Durable Functions
- Cosmos DB change feed processing
- File processing (Blob triggers)
The Consumption plan's pay-per-execution model is hard to beat for workloads that are idle 80% of the time. I've run production Functions that process millions of events per month for under $20.
## When Azure Container Apps Wins
Container Apps becomes the better choice when you need more control over the runtime, have long-running processes, or your application doesn't fit neatly into the event-trigger model:
```yaml
# container-app.yaml — a background processing service
properties:
  managedEnvironmentId: /subscriptions/.../managedEnvironments/prod
  configuration:
    activeRevisionsMode: Single
    ingress:
      external: true
      targetPort: 8080
      transport: http
    dapr:
      enabled: true
      appId: order-processor
      appPort: 8080
    secrets:
      - name: db-connection
        value: "Server=..."
  template:
    containers:
      - image: myregistry.azurecr.io/order-processor:v2.1
        name: order-processor
        resources:
          cpu: 1.0
          memory: 2Gi
        env:
          - name: ConnectionStrings__Database
            secretRef: db-connection
    scale:
      minReplicas: 0
      maxReplicas: 10
      rules:
        - name: queue-scaling
          custom:
            type: azure-servicebus
            metadata:
              queueName: orders
              messageCount: "50"
            auth:
              - secretRef: sb-connection
                triggerParameter: connection
```
Here's what worked for us: we had a data processing pipeline that needed to run Python ML models alongside .NET orchestration code. Functions couldn't handle the Python dependency chain cleanly, and the processing took 15-30 minutes per batch. Container Apps let us package everything into a single container with exact dependency versions and scale based on queue depth.
Where Container Apps shines:
- Microservices that need to run continuously
- Applications with complex dependency trees or non-.NET runtimes
- Long-running background processing (minutes to hours)
- Services that need Dapr for service-to-service communication
- APIs with consistent traffic that would be cheaper on reserved compute
- Multi-container applications (sidecars)
## A Practical Decision Matrix
After going through this decision several times, here's a flowchart that works:
1. Is it a simple event-triggered operation (< 5 min)?
   - Yes → Azure Functions (Consumption)
   - No → continue
2. Does it need to run longer than 10 minutes?
   - Yes → Container Apps
   - No → continue
3. Is it an HTTP API with predictable, steady traffic?
   - Yes → Container Apps (more cost-effective at scale)
   - No → continue
4. Does it need custom system dependencies or non-standard runtimes?
   - Yes → Container Apps
   - No → continue
5. Is the traffic extremely spiky with long idle periods?
   - Yes → Azure Functions (Consumption or Flex)
   - No → Container Apps
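If you prefer code to flowcharts, the same five questions can be encoded as a tiny helper. This is purely illustrative; the function and argument names are my own, not from any Azure SDK:

```python
def pick_platform(short_event_op: bool,
                  runs_over_10_min: bool,
                  steady_http_api: bool,
                  custom_runtime: bool,
                  spiky_traffic: bool) -> str:
    """Walk the five questions in order; first match wins."""
    if short_event_op:                 # Q1: discrete event op under ~5 min
        return "Azure Functions (Consumption)"
    if runs_over_10_min:               # Q2: exceeds Consumption time limits
        return "Container Apps"
    if steady_http_api:                # Q3: predictable traffic favors fixed compute
        return "Container Apps"
    if custom_runtime:                 # Q4: custom deps / non-standard runtimes
        return "Container Apps"
    if spiky_traffic:                  # Q5: long idle periods favor per-execution billing
        return "Azure Functions (Consumption or Flex)"
    return "Container Apps"            # default when nothing points at Functions
```

Usage: `pick_platform(False, True, False, False, False)` returns `"Container Apps"` for a long-running batch job, mirroring step 2 of the flowchart.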
There's one nuance that trips people up: the Flex Consumption plan for Functions blurs the line significantly. It offers VNet integration, larger instance sizes, and faster scaling while maintaining the event-driven model. If you're leaning toward Container Apps solely for VNet support or more memory, evaluate Flex Consumption first.
## Scaling Behavior: Where the Differences Matter
The scaling models are fundamentally different, and this is where most production surprises show up.
Functions scales by adding instances, with each instance processing messages up to a configured concurrency limit (for queue triggers) or a request concurrency target (for HTTP). The platform manages this automatically:
```jsonc
// host.json — controlling Functions concurrency
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8,
      "prefetchCount": 0
    }
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```
Container Apps scales by adding replicas of your container, and you control the scaling rules via KEDA:
```yaml
scale:
  minReplicas: 1
  maxReplicas: 30
  rules:
    - name: http-scaling
      http:
        metadata:
          concurrentRequests: "100"
    - name: cpu-scaling
      custom:
        type: cpu
        metadata:
          type: Utilization
          value: "70"
```
The critical difference: Container Apps gives you multiple scaling dimensions (HTTP concurrency, CPU, memory, custom metrics). Functions gives you automatic scaling based on trigger backlog. For queue processing, Functions' built-in scaling is more responsive — it monitors queue length and scales proactively. With Container Apps, you're configuring KEDA rules yourself and tuning thresholds.
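To see what those KEDA thresholds actually do, it helps to know that KEDA's core formula is approximately `desiredReplicas = ceil(metricValue / threshold)`, clamped to the min/max bounds. Here's a Python sketch of that math; it's an approximation for intuition, not KEDA's actual implementation:

```python
import math

def desired_replicas(queue_length: int, threshold: int,
                     min_replicas: int, max_replicas: int) -> int:
    """Approximate KEDA scaling: ceil(metric / threshold), clamped to [min, max]."""
    if queue_length <= 0:
        return min_replicas  # minReplicas: 0 means scale to zero when idle
    wanted = math.ceil(queue_length / threshold)
    return max(min_replicas, min(wanted, max_replicas))

# With the Service Bus rule shown earlier (messageCount: "50", 0..10 replicas):
desired_replicas(0, 50, 0, 10)     # 0  — queue empty, scale to zero
desired_replicas(120, 50, 0, 10)   # 3  — ceil(120 / 50)
desired_replicas(5000, 50, 0, 10)  # 10 — capped at maxReplicas
```

This is why threshold tuning matters: a `messageCount` of 50 means a backlog of 5,000 messages saturates your replica cap, and anything beyond that just queues up.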
## Cost Comparison: Real Numbers
Here's a cost breakdown from an actual workload — an API that handles 5 million requests per month with an average execution time of 200ms:
```text
Azure Functions (Consumption), ignoring monthly free grants:
  Executions: 5,000,000 × $0.20/million            = $1.00
  Compute:    5,000,000 × 0.2 s × 256 MB
              = 250,000 GB-s × $0.000016/GB-s      ≈ $4.00
  Total: ~$5/month

Azure Functions (Flex Consumption):
  Baseline: ~$8/month (one always-ready instance)
  Compute:  ~$12/month (on-demand scaling)
  Total: ~$20/month

Azure Container Apps:
  1 replica × 0.5 vCPU × 1 GiB × 730 hours         ≈ $36/month at 24/7
  Scale-to-zero helps, but steady traffic keeps the
  replica warm ~18 hrs/day, so roughly 75% of that:
  Total: ~$28/month
```
For this workload, Functions Consumption wins on cost by a wide margin. The crossover point depends on execution weight as much as request volume: at 200 ms and 256 MB per request, per-execution billing stays cheaper well into the tens of millions of requests per month, while heavier executions (a second or more at 512 MB) can make a couple of fixed Container Apps replicas cheaper in the single-digit millions. Run the arithmetic for your own workload shape before deciding.
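The arithmetic for your own crossover point is simple. This sketch hard-codes the rates from the breakdown above ($0.20 per million executions, $0.000016 per GB-s, ~$36/month for a 0.5 vCPU / 1 GiB replica); treat them as list-price approximations that ignore free grants, and check current Azure pricing before relying on them:

```python
CONSUMPTION_PER_MILLION = 0.20    # $ per million executions (assumed list price)
CONSUMPTION_PER_GB_S = 0.000016   # $ per GB-second (assumed list price)

def consumption_cost(executions: int, avg_seconds: float, memory_gb: float) -> float:
    """Monthly Functions Consumption bill, ignoring free grants."""
    per_execution = executions / 1_000_000 * CONSUMPTION_PER_MILLION
    gb_seconds = executions * avg_seconds * memory_gb
    return per_execution + gb_seconds * CONSUMPTION_PER_GB_S

def break_even_executions(avg_seconds: float, memory_gb: float,
                          fixed_monthly: float) -> float:
    """Executions/month where Consumption matches a fixed monthly compute cost."""
    cost_per_execution = (CONSUMPTION_PER_MILLION / 1_000_000
                          + avg_seconds * memory_gb * CONSUMPTION_PER_GB_S)
    return fixed_monthly / cost_per_execution

# The 5M-request API from the breakdown above (200 ms, 256 MB):
consumption_cost(5_000_000, 0.2, 0.25)    # ≈ $5.00/month

# Light executions cross over late against one ~$36/month replica:
break_even_executions(0.2, 0.25, 36.0)    # ≈ 36 million executions/month
# Heavy executions (1 s, 512 MB) cross over much earlier:
break_even_executions(1.0, 0.5, 36.0)     # ≈ 4.4 million executions/month
```

The takeaway matches the point above: the crossover is driven by per-execution compute (seconds × GB) as much as by raw request volume.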
## Hybrid Patterns: Using Both
In practice, most of my production architectures use both services. Here's a pattern that works well:
- Container Apps for the core API layer — services with steady traffic, complex dependencies, and long-running operations
- Azure Functions for event processing — queue handlers, change feed processors, scheduled jobs, and webhook receivers
```text
[Client] → [API Gateway]
                ↓
       [Container Apps] ←→ [Dapr / Service Bus]
         Core API                  ↓
         Auth Service       [Azure Functions]
         ML Pipeline          Queue Processor
                              Timer Jobs
                              Webhook Handler
```
This gives you the best of both worlds: Container Apps handles the workloads that benefit from persistent compute and complex runtimes, while Functions handles the event-driven glue that connects everything together.
## Key Takeaways
- Start with Functions for new event-driven workloads. The Consumption plan's zero cost at idle and automatic scaling are hard to beat.
- Move to Container Apps when you outgrow Functions — whether that's execution time limits, dependency complexity, or cost at scale.
- Don't overlook Flex Consumption — it fills many of the gaps that previously pushed teams toward Container Apps.
- Use both in the same architecture. They complement each other perfectly.
- Measure actual costs for your traffic patterns. The crossover point between per-execution and fixed compute pricing depends entirely on your workload shape.
The worst architectural decision is the one you make based on a blog post without measuring your own workload. Use the free tier of both services, deploy your actual code, simulate production traffic, and let the numbers guide you.