
Azure Cost Optimization for .NET Applications: Save 40% on Your Cloud Bill

April 11, 2026 · 9 min read

Last year, I helped a mid-sized fintech company cut their Azure bill from $47,000 per month to $28,000 — a 40% reduction — without degrading performance or removing any services. The changes weren't revolutionary. There was no secret Azure pricing hack. It was a combination of right-sizing, commitment-based discounts, intelligent autoscaling, and fixing a few surprisingly expensive oversights that most teams don't think to look for.

The uncomfortable truth about cloud costs is that most organizations are overspending by 30-50%, and the waste accumulates silently. Nobody notices that the dev environment is running D-series VMs 24/7, or that Application Insights is ingesting 50GB of telemetry per day because someone left verbose logging on, or that blob storage from a migration three years ago is sitting in hot tier costing $400/month when it could be in archive tier for $4.

In this post, I'll share the exact techniques and dollar amounts from real optimizations I've done. Every strategy here is something I've implemented in production — no theoretical advice, just things that actually moved the needle.

Right-Sizing: The Low-Hanging Fruit

Right-sizing is the single most impactful optimization for most teams, and it's free. Azure Advisor already tells you which VMs and App Service plans are over-provisioned — most teams just never look.

Start with this Azure CLI command to find underutilized VMs:

# Get VM utilization recommendations from Azure Advisor
az advisor recommendation list \
  --category Cost \
  --query "[?shortDescription.problem=='Right-size or shutdown underutilized virtual machines'].{
    VM: resourceMetadata.resourceId,
    Savings: extendedProperties.savingsAmount,
    CurrentSKU: extendedProperties.currentSku,
    RecommendedSKU: extendedProperties.targetSku
  }" \
  --output table

In my experience, the biggest wins come from App Service plans. I've seen teams running a single API on a P3v3 plan (roughly $600/month) when a P1v3 ($120/month) handles the load comfortably. That's $480/month saved from one change.
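Once you've validated under load that the smaller SKU can handle your traffic, the downsize itself is a one-line CLI call. A sketch — the plan and resource group names here are placeholders for your own:

```shell
# Downsize an over-provisioned App Service plan from P3v3 to P1v3
az appservice plan update \
  --name my-api-plan \
  --resource-group my-rg \
  --sku P1V3
```

Resize during a low-traffic window and watch memory and response times for a few days before locking in the change with a reservation.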

For .NET applications specifically, check your garbage collection settings. Server GC (<ServerGarbageCollection>true</ServerGarbageCollection>) uses more memory but improves throughput. If you've right-sized your App Service plan down, make sure your app's memory behavior matches:

// In your Program.cs — configure GC for memory-constrained environments.
// For smaller App Service plans, workstation GC
// (<ServerGarbageCollection>false</ServerGarbageCollection>) might be more appropriate.
if (Environment.GetEnvironmentVariable("WEBSITE_MEMORY_LIMIT_MB") is string memLimit
    && int.TryParse(memLimit, out var limitMb) && limitMb < 1024)
{
    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
}

// Monitor memory pressure in your health checks
builder.Services.AddHealthChecks()
    .AddCheck("memory", () =>
    {
        var allocated = GC.GetTotalMemory(false);
        var threshold = 800 * 1024 * 1024L; // 800MB
        return allocated < threshold
            ? HealthCheckResult.Healthy($"Memory: {allocated / 1024 / 1024}MB")
            : HealthCheckResult.Degraded($"Memory pressure: {allocated / 1024 / 1024}MB");
    });

Reserved Instances and Savings Plans

If you have workloads that run 24/7 (and most production systems do), you're leaving money on the table with pay-as-you-go pricing. Here's the math:

Resource          Pay-as-you-go   1-Year Reserved     3-Year Reserved     Monthly Savings
D4s_v5 VM         $280/mo         $175/mo (37% off)   $112/mo (60% off)   $105-168
SQL Database S3   $150/mo         $100/mo (33% off)   $75/mo (50% off)    $50-75
Redis Cache C2    $168/mo         $110/mo (35% off)   $78/mo (54% off)    $58-90

For the fintech project I mentioned, reserved instances for their three production databases and six VMs saved $8,400/month alone. That's over $100,000/year from a purchasing decision that takes 30 minutes.

Azure Savings Plans are even more flexible — they apply a commitment-based discount across compute services regardless of region, size, or operating system. For teams that frequently resize VMs or move between services, savings plans are usually a better fit than reserved instances.

# Check your commitment-based discount opportunities
az consumption reservation recommendation list \
  --scope "Shared" \
  --look-back-period "Last30Days" \
  --query "[?properties.term=='P1Y'].{
    Resource: properties.scope,
    RecommendedQuantity: properties.recommendedQuantity,
    CostWithoutReservation: properties.costWithNoReservedInstances,
    CostWithReservation: properties.totalCostWithReservedInstances
  }" \
  --output table

Spot VMs: 90% Savings for the Right Workloads

Spot VMs let you use Azure's spare capacity at up to 90% discount, but they can be evicted with 30 seconds' notice. That makes them perfect for batch processing, CI/CD agents, and data processing — workloads that can handle interruption.

I've used spot VMs for:

  • Nightly data processing — ETL jobs that run for 2-3 hours. Even if evicted, they checkpoint and resume.
  • Build agents — Azure DevOps self-hosted agents on spot VMs. A build that gets evicted just retries.
  • Load testing — spin up 20 D8s_v5 spot instances for load testing at $0.07/hour instead of $0.38/hour.
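Provisioning a spot VM is just two extra flags on the usual create command. A sketch with placeholder names; `--max-price -1` means "pay up to the pay-as-you-go rate, never evict on price":

```shell
# Spot build agent — evicted VMs are deallocated rather than deleted,
# so they can be restarted when capacity returns
az vm create \
  --resource-group my-rg \
  --name spot-build-agent \
  --image Ubuntu2204 \
  --size Standard_D8s_v5 \
  --priority Spot \
  --eviction-policy Deallocate \
  --max-price -1
```

Use `--eviction-policy Delete` instead for truly disposable workloads like load-test fleets, so you don't keep paying for the deallocated disks.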

Here's how to configure a .NET background worker that handles spot VM eviction gracefully:

public class SpotAwareBackgroundService : BackgroundService
{
    private readonly ILogger<SpotAwareBackgroundService> _logger;
    private readonly ICheckpointStore _checkpointStore;

    public SpotAwareBackgroundService(
        ILogger<SpotAwareBackgroundService> logger,
        ICheckpointStore checkpointStore)
    {
        _logger = logger;
        _checkpointStore = checkpointStore;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Register for spot eviction notification
        _ = MonitorEvictionAsync(stoppingToken);

        var lastCheckpoint = await _checkpointStore.GetLastCheckpointAsync();
        _logger.LogInformation("Resuming from checkpoint: {Checkpoint}", lastCheckpoint);

        await foreach (var batch in GetBatches(lastCheckpoint, stoppingToken))
        {
            await ProcessBatchAsync(batch, stoppingToken);
            await _checkpointStore.SaveCheckpointAsync(batch.LastId);
        }
    }

    private async Task MonitorEvictionAsync(CancellationToken ct)
    {
        using var client = new HttpClient();
        // Azure IMDS rejects requests without this header
        client.DefaultRequestHeaders.Add("Metadata", "true");
        while (!ct.IsCancellationRequested)
        {
            try
            {
                // Azure IMDS endpoint signals eviction 30 seconds ahead
                var response = await client.GetAsync(
                    "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01",
                    ct);
                var events = await response.Content
                    .ReadFromJsonAsync<ScheduledEvents>(cancellationToken: ct);

                if (events?.Events.Any(e => e.EventType == "Preempt") == true)
                {
                    _logger.LogWarning("Spot eviction detected — saving checkpoint");
                    await _checkpointStore.FlushAsync();
                    Environment.Exit(0);
                }
            }
            catch (Exception ex) when (ex is not OperationCanceledException)
            {
                _logger.LogDebug("IMDS check failed (not running on Azure?): {Error}", 
                    ex.Message);
            }
            await Task.Delay(TimeSpan.FromSeconds(5), ct);
        }
    }
}

Application Insights: The Hidden Cost Driver

Here's something that surprised me the first time I audited a client's Azure bill: Application Insights was their third-highest cost at $2,100/month. They were ingesting 45GB of telemetry per day because every service was configured with default sampling rates and verbose logging in production.

Fix this with adaptive sampling and ingestion controls:

builder.Services.AddApplicationInsightsTelemetry(options =>
{
    options.EnableAdaptiveSampling = true;
});

// Fine-tune sampling to reduce volume while keeping important data
builder.Services.Configure<TelemetryConfiguration>(config =>
{
    var samplingProcessor = config.DefaultTelemetrySink.TelemetryProcessors
        .OfType<AdaptiveSamplingTelemetryProcessor>()
        .FirstOrDefault();

    if (samplingProcessor != null)
    {
        samplingProcessor.MaxTelemetryItemsPerSecond = 5;  // Default is 5, but check yours
        samplingProcessor.ExcludedTypes = "Event";  // Keep custom events unsampled
    }
});

// Filter out noisy health check telemetry
builder.Services.AddApplicationInsightsTelemetryProcessor<HealthCheckFilter>();

public class HealthCheckFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    
    public HealthCheckFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request && 
            (request.Url?.AbsolutePath?.StartsWith("/health") == true ||
             request.Url?.AbsolutePath?.StartsWith("/alive") == true))
        {
            return;  // Drop health check telemetry
        }
        _next.Process(item);
    }
}

That health check filter alone saved one client $340/month. Their Kubernetes liveness and readiness probes were hitting every pod every 10 seconds, generating millions of request telemetry items monthly.

Also, set daily caps and alerts:

# Set a daily ingestion cap of 5GB
az monitor app-insights component billing update \
  --app my-app-insights \
  --resource-group my-rg \
  --cap 5

Storage Tier Optimization

Storage costs accumulate quietly. I've found significant savings by implementing lifecycle management policies:

{
  "rules": [
    {
      "name": "move-to-cool-after-30-days",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["logs/", "exports/", "backups/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
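To put a policy like this into effect, save it to a file and attach it to the storage account. A sketch — the account, group, and file names are placeholders:

```shell
# Attach the lifecycle management policy to a storage account
az storage account management-policy create \
  --account-name mystorageacct \
  --resource-group my-rg \
  --policy @lifecycle-policy.json
```

The policy engine runs roughly once a day, so don't expect tier changes to show up on the bill immediately.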

Here's the cost difference in real numbers for 1TB of blob storage:

  • Hot tier: ~$20.80/month
  • Cool tier: ~$10.00/month (52% savings)
  • Archive tier: ~$1.80/month (91% savings)

For .NET applications writing to blob storage, make sure you're setting the access tier at upload time for data you know won't be accessed frequently:

var blobClient = containerClient.GetBlobClient($"exports/{fileName}");

await blobClient.UploadAsync(stream, new BlobUploadOptions
{
    AccessTier = AccessTier.Cool,  // Don't default to hot for export files
    HttpHeaders = new BlobHttpHeaders
    {
        ContentType = "application/json"
    }
});

Autoscaling Done Right

Autoscaling saves money by running fewer instances during off-peak hours. But poorly configured autoscaling can cost you more through thrashing or under-provisioning. Here's the pattern I use for App Service autoscaling:

# Scale in to 2 instances when CPU stays low for 10 minutes
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-scale-settings \
  --condition "CpuPercentage < 40 avg 10m" \
  --scale to 2

# Scale out when the HTTP request queue backs up
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-scale-settings \
  --condition "HttpQueueLength > 50 avg 5m" \
  --scale out 2
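The rules above are reactive — they respond to metrics. If you also want a true time-based floor, like two instances overnight, attach a scheduled autoscale profile. A sketch with placeholder names and timezone:

```shell
# Recurring weekday profile: hold 2 instances from 8 PM to 7 AM
az monitor autoscale profile create \
  --resource-group my-rg \
  --autoscale-name my-scale-settings \
  --name night-profile \
  --count 2 \
  --timezone "Eastern Standard Time" \
  --start 20:00 \
  --end 07:00 \
  --recurrence week mon tue wed thu fri
```

Metric rules from the default profile can be copied onto the scheduled profile with `--copy-rules` if you still want burst capacity at night.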

For AKS workloads, combine the Horizontal Pod Autoscaler with the cluster autoscaler, and consider KEDA for event-driven scaling:

# Scale based on Azure Service Bus queue depth
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "10"
        connectionFromEnv: SERVICE_BUS_CONNECTION

This scales your order processor based on actual queue depth. When the queue is empty, you run a single replica. When Black Friday hits and the queue backs up, you scale to 20 replicas automatically. The cluster autoscaler adds nodes as needed and removes them when demand drops.

Key Takeaways

Here's the cost optimization checklist I run through with every Azure engagement:

  1. Right-size everything — Use Azure Advisor recommendations. This alone typically saves 15-25%. Check App Service plans, VM SKUs, and database tiers.
  2. Commit to reservations — For anything running 24/7, 1-year reservations save 30-40%. Three-year saves 50-60%. Do the math — it almost always makes sense.
  3. Use spot VMs for interruptible work — Build agents, batch processing, and load testing at 60-90% discount.
  4. Tame Application Insights — Enable adaptive sampling, filter health checks, set daily caps. This can save $500-2,000/month for medium-sized applications.
  5. Tier your storage — Implement lifecycle policies. Most blob data is accessed rarely after 30 days.
  6. Autoscale with intention — Scale down at night, scale based on meaningful metrics, not just CPU.
  7. Kill dev/test resources on schedule — Use Azure DevTest Labs or simple automation to shut down non-production environments outside business hours. This saved one team $3,200/month.
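For the dev/test shutdowns in item 7, the built-in auto-shutdown feature covers the simple VM case — a sketch with placeholder names; note `--time` is specified in UTC:

```shell
# Power off a dev VM every day at 19:00 UTC
az vm auto-shutdown \
  --resource-group my-rg \
  --name dev-vm-01 \
  --time 1900
```

Auto-shutdown only handles the evening side; pair it with an Automation runbook or Logic App if you want VMs started again before business hours.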

The 40% savings target isn't aspirational — it's achievable for most organizations that haven't done a thorough cost review. Start with the quick wins (right-sizing, reservations, dev/test schedules), then work through the more involved optimizations (Application Insights tuning, storage tiering, autoscaling). Track your progress in Azure Cost Management and celebrate the wins. There's nothing quite like saving your company $15,000/month and having the dashboard to prove it.


Ajit Gangurde

Software Engineer II at Microsoft | 15+ years in .NET & Azure