Unless you’ve been living without an internet connection for the last five years, you’ve probably heard of Datadog—one of the most widely used observability platforms. It provides monitoring, security, and analytics for cloud-scale applications, but as powerful as it is, Datadog’s pricing can be complex and unpredictable. Many organizations face unexpected end-of-month invoices that lead to budget overruns and financial headaches, such as Coinbase, which reportedly spent $65M on Datadog in 2022.
Datadog’s revenue reached $2.7B in 2024, a 26% increase over 2023. This underlines that companies are spending more and more on Datadog and that it can significantly impact their overall IT spend. As highlighted in the State of FinOps 2025 report, FinOps practitioners are now looking to manage SaaS costs such as Datadog, Snowflake, and Databricks alongside their cloud costs (Cloud+).
Moreover, Datadog has acquired many companies in recent years, adding new products and further pricing complexity.
In this guide, we’ll break down how Datadog’s pricing works, why costs can spiral out of control, and—most importantly—how to optimize your Datadog spend using FinOps best practices and Holori’s cost management tools.
Understanding Datadog’s Pricing Model

Datadog’s pricing structure is divided into three primary categories:
- Host-Based Pricing – Charged per monitored instance, such as APM, Infrastructure Monitoring, and CSM.
- Volume-Based Pricing – Applied to services like Logs and Synthetics, where costs depend on data ingestion and execution volume.
- User-Based Pricing – Costs are based on the number of active users for services like Incident Management and CI Visibility.
Additionally, there are hybrid pricing models for products like containers, custom metrics, and indexing, which include both host-based and usage-based billing.
To add to the complexity, most products offer Pro and Enterprise tiers. The Pro tier covers essential functionality, while Enterprise unlocks advanced features like anomaly detection, SLO tracking, or extended retention. This means your final bill is shaped not only by what you monitor but also how deeply you want to monitor it.
Last but not least, Datadog offers both on-demand and commitment-based pricing, and the option you choose has a huge impact on your final bill.
Breakdown of Datadog’s Pricing Model
Datadog Product Pricing and Key Cost Factors
Product | Base Pricing (on-demand) | Billing Model |
---|---|---|
Infrastructure Monitoring | $15 per host/month | Per host |
APM (Application Performance) | $31 per host/month | Per host |
Log Management | $0.10 per GB ingested | Per GB ingested |
Real User Monitoring (RUM) | $2 per 10,000 sessions | Per session |
Synthetic Monitoring | $5 per 10,000 API tests; $12 per 1,000 browser tests | Per test execution |
Network Performance | $5 per host/month | Per host |
Security Monitoring | $0.20 per GB analyzed | Per GB |
Database Monitoring | $21 per host/month | Per host |
Continuous Profiler | $8 per host/month | Per host |
Serverless Monitoring | $0.50 per million invocations | Per function invocation |
Incident Management | $15 per user/month | Per user |
CI Visibility | $5 per 25,000 test runs | Per CI test run |
Datadog’s pricing structure falls into three primary categories, each with distinct calculation methods and optimization opportunities:
1. Host-Based Pricing
Host-based pricing applies to core Datadog services including:
- Network Performance Monitoring
- Infrastructure Monitoring
- APM (Application Performance Monitoring)
- Cloud Security Management (CSM)
What makes host-based pricing particularly challenging is Datadog’s billing calculation method. Rather than charging based on average usage, Datadog bills on the 99th percentile of your hourly usage across the month, effectively a high-watermark model. This means a sustained traffic spike can significantly impact your monthly bill.
For example, if your infrastructure normally runs 100 hosts but an autoscaling event temporarily scales it to 150 hosts for a dozen hours during a traffic peak, you’ll be billed for roughly 150 hosts for the entire month, since only the top 1% of hours (about seven in a typical month) is excluded. This billing approach frequently catches teams off guard, especially those running dynamic environments with variable workloads.
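To make the high-watermark calculation concrete, here is a minimal Python sketch; the hourly host counts and the $15/host rate are illustrative assumptions, not a real bill:

```python
# Minimal sketch of Datadog's 99th-percentile (high-watermark) host billing.
# Hourly host counts and the $15/host rate are illustrative assumptions.
import statistics

hours_in_month = 730
baseline_hosts = 100

# Normal usage for most of the month, plus a 12-hour autoscaling spike to 150 hosts.
hourly_host_counts = [baseline_hosts] * (hours_in_month - 12) + [150] * 12

p99_hosts = statistics.quantiles(hourly_host_counts, n=100)[98]  # 99th percentile
print(f"Billable hosts (p99): {p99_hosts:.0f}")
print(f"Monthly cost at $15/host: ${p99_hosts * 15:,.0f}")
print(f"Cost based on average usage would be: ${statistics.mean(hourly_host_counts) * 15:,.2f}")
```

Even though the average host count barely moves, the 12-hour spike pushes the billable p99 value up to 150 hosts for the whole month.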
2. Volume-Based Pricing
Volume-based pricing applies to services like:
- Log Management
- Synthetics API Tests
- Synthetics Browser Tests
The challenge with volume-based pricing is that data volumes can grow exponentially without proper controls. For example, enabling debug-level logging in production can increase log volume by 5-10x overnight. Similarly, improperly configured tracing can generate millions of unnecessary spans, quickly inflating costs.
In practice, log volumes commonly grow by 200-300% year over year in organizations without active management, partly due to increased instrumentation and partly due to application growth.
3. User-Based Pricing
User-based pricing applies to services such as:
- Incident Management
User-based pricing is relatively straightforward, but it can still lead to unexpected costs when new team members are onboarded or when accounts aren’t properly deprovisioned after employees leave the organization.
4. Commitment-Based Discounts
Datadog offers significant discounts (typically 15-40%) for annual commitments. However, these commitments require careful capacity planning:
- Annual Commitments: Pre-purchasing monitoring capacity for 12 months
- Multi-Year Commitments: Longer-term agreements with steeper discounts
- Usage-Based Commitments: Discounted rates for committed spend amounts
For large organizations, the difference between on-demand and commitment pricing can amount to hundreds of thousands of dollars annually. However, overcommitting can lead to paying for unused capacity, while undercommitting results in paying premium rates for usage above your commitment level.
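As a rough illustration of that trade-off, the sketch below compares on-demand spend to a committed rate; the 25% discount, the $15/host rate, and the host counts are assumptions for the example, since actual discounts are negotiated with Datadog:

```python
# Rough sketch comparing on-demand vs. committed infrastructure-host spend.
# The 25% commitment discount is an illustrative assumption, not a published rate.
ON_DEMAND_RATE = 15.0          # $/host/month, on-demand list price
COMMIT_DISCOUNT = 0.25         # assumed annual-commitment discount
COMMIT_RATE = ON_DEMAND_RATE * (1 - COMMIT_DISCOUNT)

def monthly_cost(actual_hosts: int, committed_hosts: int) -> float:
    """Pay the committed rate up to the commitment, and on-demand rates above it."""
    committed_part = committed_hosts * COMMIT_RATE
    overage = max(0, actual_hosts - committed_hosts) * ON_DEMAND_RATE
    return committed_part + overage

for committed in (0, 400, 500, 600):
    print(f"Commit {committed:>3} hosts, run 500: ${monthly_cost(500, committed):,.2f}/month")
```

Under these assumptions, committing to 400 hosts while running 500 still beats pure on-demand, while committing to 600 means paying for 100 hosts you never use.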
The Overlooked Cost Traps in Datadog Pricing

1. Unchecked Log Verbosity
One of the most common cost drivers is excessive logging. A single Java microservice with DEBUG logging enabled can generate 5-10GB of logs daily. High-cardinality tags in structured logging can increase storage and indexing costs by 3-5x. Access logs for high-traffic APIs can consume enormous amounts of storage.
Many teams experience significant cost reductions simply by moving from DEBUG to INFO level logging in production, often cutting log volumes by 70-80%.
2. Tag Explosion
Improper tagging strategy often leads to “tag explosion” or “high cardinality” issues. Kubernetes deployments using pod IDs as tags can create millions of unique combinations. Using user IDs as tags creates unlimited cardinality. Including build numbers or timestamps as tags multiplies metric counts.
Each unique tag combination creates a separate time series in Datadog, exponentially increasing costs. For example, tagging with 10 different dimensions where each has 10 possible values theoretically creates 10 billion potential combinations.
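A quick back-of-the-envelope calculation shows how fast this multiplies; the dimension names and cardinalities below are hypothetical:

```python
# Back-of-the-envelope estimate of potential custom-metric time series.
# Dimension names and cardinalities are hypothetical examples.
from math import prod

tag_cardinalities = {
    "env": 3,            # prod, staging, dev
    "service": 50,
    "pod_id": 2000,      # high cardinality: unique per pod
    "version": 40,
    "region": 5,
}

potential_series = prod(tag_cardinalities.values())
print(f"Potential time series per metric: {potential_series:,}")

# Dropping pod_id and collapsing version to its major number (say 4 values)
trimmed = prod({"env": 3, "service": 50, "version": 4, "region": 5}.values())
print(f"After removing high-cardinality tags: {trimmed:,}")
```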
3. Redundant Monitoring
Organizations frequently implement overlapping monitoring tools. Many teams monitor the same services with both APM and custom metrics. Others run synthetic tests against endpoints already covered by health checks. Some maintain duplicate dashboards across teams with similar but slightly different metrics.
Consolidation of these monitoring approaches can often reduce costs by 15-20% without losing visibility.
4. Excessive Retention Periods
Many organizations keep log data far longer than necessary. Production logs are often kept for 30+ days when only 7-14 days provide operational value. Error logs may be retained for 90+ days when most investigations happen within 48 hours. Audit logs are sometimes retained in high-cost Datadog storage when they could be archived to cheaper long-term storage.
Right-sizing retention periods to match actual usage patterns can yield immediate savings of 20-40% on log management costs.
5. Inefficient Synthetic Testing
Common inefficiencies in synthetic testing include running tests every minute when every 5-10 minutes would provide sufficient coverage, creating separate tests for each API endpoint instead of consolidated test paths, and duplicating test coverage across development, staging, and production environments.
By optimizing test frequency and consolidating test paths, synthetic monitoring costs can often be reduced by 30-50%.
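A simple calculation shows why test frequency matters so much; the $5 per 10,000 API test runs comes from the pricing table above, while the number of tests is a hypothetical example:

```python
# Estimate monthly API-test cost at different run frequencies.
# $5 per 10,000 API test runs comes from the on-demand pricing table above;
# the number of tests is a hypothetical example.
COST_PER_10K_API_TESTS = 5.0
MINUTES_PER_MONTH = 60 * 24 * 30

def monthly_api_test_cost(num_tests: int, interval_minutes: int) -> float:
    runs = num_tests * MINUTES_PER_MONTH / interval_minutes
    return runs / 10_000 * COST_PER_10K_API_TESTS

for interval in (1, 5, 10):
    cost = monthly_api_test_cost(num_tests=30, interval_minutes=interval)
    print(f"30 API tests every {interval} min: ${cost:,.2f}/month")
```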
Datadog Cost Management Best Practices
1. Implement Robust Monitoring Governance
Establish a clear governance framework for your Datadog implementation. Define monitoring standards that clarify what should and shouldn’t be monitored based on service criticality. Create a structured approach to tagging that prevents cardinality issues. Establish appropriate log levels for different environments. Require justification for enabling high-volume data collection through approval workflows.
Organizations with formal monitoring governance typically spend 30-40% less on Datadog than those without clearly defined standards.
2. Leverage Datadog’s Usage Attribution
Datadog offers usage attribution through tags, allowing you to track which teams or services generate the most monitoring costs. This enables you to identify cost outliers and inefficient monitoring practices, and implement chargeback or showback models to drive accountability.
For example, adding a team: tag to all resources allows you to generate usage reports broken down by team, making cost visibility much more actionable.
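As a sketch, once you export usage attribution data (for example as a CSV), you can roll it up per team; the file name and column names below are hypothetical, so adapt them to the export format you actually use:

```python
# Roll up exported Datadog usage-attribution data by team.
# "usage_attribution.csv" and its column names are hypothetical; adapt them
# to whatever export you pull from Datadog.
import csv
from collections import defaultdict

cost_by_team = defaultdict(float)

with open("usage_attribution.csv", newline="") as f:
    for row in csv.DictReader(f):
        team = row.get("team") or "untagged"
        cost_by_team[team] += float(row["estimated_cost_usd"])

for team, cost in sorted(cost_by_team.items(), key=lambda kv: -kv[1]):
    print(f"{team:<20} ${cost:,.2f}")
```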
3. Set Up Usage Alerts
Create proactive alerts for unexpected usage patterns. Daily or weekly notifications when log ingestion exceeds thresholds can help identify problems early. Alerts when new high-cardinality metrics appear can prevent tag explosion. Notifications when host counts spike above normal levels can signal potential billing surprises.
These early warning systems can help catch runaway costs before they impact your monthly bill.
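One way to automate this is to create a metric monitor on Datadog’s estimated usage metrics through the Monitors API; the sketch below assumes the log-ingestion usage metric is available on your account, and the metric name, threshold, and notification handle are assumptions to adapt:

```python
# Sketch: create a Datadog monitor that alerts when daily log ingestion
# crosses a threshold. The metric name, threshold, and @-handle are
# assumptions; check Datadog's estimated-usage-metrics docs for your account.
import os
import requests

monitor = {
    "name": "Daily log ingestion above threshold",
    "type": "metric alert",
    "query": "sum(last_1d):sum:datadog.estimated_usage.logs.ingested_bytes{*} > 500000000000",
    "message": "Log ingestion exceeded ~500 GB in the last 24h. Check recent deployments. @slack-finops",
    "options": {"notify_no_data": False},
}

resp = requests.post(
    "https://api.datadoghq.com/api/v1/monitor",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    json=monitor,
)
resp.raise_for_status()
print("Created monitor", resp.json()["id"])
```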
4. Implement Regular Cost Reviews
Establish a cadence for reviewing and optimizing Datadog costs. Weekly reviews of high-level usage metrics help maintain awareness of trends. Monthly deep dives into cost drivers provide opportunities for targeted optimization. Quarterly optimization initiatives targeting specific products or teams create a structured approach to continuous improvement.
Organizations that conduct regular cost reviews typically identify 15-25% in savings opportunities every quarter.
5. Use Specialized Cost Management Tools
While Datadog provides basic usage reporting, specialized tools offer deeper insights. With Holori you can manage your Datadog costs alongside those of your other vendors. This makes it easier to allocate costs, set up budgets, create alerts, and provide a strong FinOps workflow to every stakeholder in your company.
Key Optimization Techniques to Lower Datadog Costs
1. Implement Intelligent Log Filtering
Create log filtering strategies that reduce volumes at the source. In Datadog this is typically done with Agent-side processing rules (which drop logs before ingestion) or index exclusion filters (which drop logs before indexing). The pseudocode below illustrates the kind of rules to apply; it is not literal Datadog syntax:
// Illustrative filtering rules. In practice, implement these as Agent
// processing rules or index exclusion filters.
// Rules like these can reduce log volume by 40-60% in many cases.
if ($message contains "health check") {
  discard();  // drop health-check noise
}
if ($status == "2xx" && $path contains "/api/heartbeat") {
  discard();  // drop successful heartbeat calls
}
if ($level == "DEBUG" && $env == "production") {
  discard();  // never keep DEBUG logs from production
}
By filtering logs before they are ingested or indexed, you can dramatically reduce costs without losing valuable insights.
2. Optimize APM with Strategic Sampling
Implement intelligent sampling for high-volume services. Error-based sampling ensures you always trace errors while sampling successful requests. Latency-based sampling captures traces for slow transactions while sampling normal ones. Path-based sampling applies different sampling rates to different API endpoints based on their importance.
For example, sampling 10% of normal traffic while capturing 100% of errors can reduce APM costs by 80-90% while maintaining visibility into issues.
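The sketch below shows the general shape of such a policy as application-side logic; it is not Datadog’s tracer configuration, and in practice you would express these rules through your tracing library’s sampling settings:

```python
# Illustrative head-sampling policy: always keep errors and slow traces,
# keep 10% of everything else. In practice, express this through your
# tracing library's sampling configuration rather than hand-rolled code.
import random

ERROR_KEEP_RATE = 1.0      # always keep traces that contain an error
SLOW_KEEP_RATE = 1.0       # always keep traces slower than the threshold
DEFAULT_KEEP_RATE = 0.10   # sample 10% of ordinary, healthy traces
SLOW_THRESHOLD_MS = 500

def should_keep_trace(has_error: bool, duration_ms: float) -> bool:
    if has_error:
        return random.random() < ERROR_KEEP_RATE
    if duration_ms >= SLOW_THRESHOLD_MS:
        return random.random() < SLOW_KEEP_RATE
    return random.random() < DEFAULT_KEEP_RATE
```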
3. Right-Size Retention Periods
Adjust retention periods based on actual usage patterns. General logs often have a default retention of 30 days, but an optimized retention of 7-14 days could yield 50-75% savings. Error logs typically default to 30 days, but adjusting to 14-30 days based on actual investigation patterns can save up to 50%. APM traces with a 15-day default can often be reduced to 3-7 days for 50-80% savings. Metrics with 15-month retention can usually be reduced to 3-6 months for 60-80% savings.
Implementing a tiered retention strategy—keeping recent data in Datadog while archiving older data to cheaper storage—can provide the best of both worlds.
4. Implement Tag Management
Establish strict controls on tag usage. Limit high-cardinality dimensions like IDs, timestamps, and version numbers. Create allowlists for approved tags to prevent tag sprawl. Periodically audit and clean up unused or redundant tags.
For example, instead of tagging with exact version numbers like version:2.5.7234, use major versions like version:2 to dramatically reduce cardinality.
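Here is a tiny sketch of that normalization, assuming the official datadog Python client with a local Agent running DogStatsD; the metric name is hypothetical:

```python
# Normalize the version tag to its major component before emitting metrics.
# Assumes the official "datadog" Python client and a local DogStatsD Agent;
# the metric name is hypothetical.
from datadog import statsd

def emit_request_metric(full_version: str) -> None:
    major = full_version.split(".")[0]          # "2.5.7234" -> "2"
    statsd.increment("myapp.requests", tags=[f"version:{major}"])

emit_request_metric("2.5.7234")
```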
5. Optimize Host Monitoring with Agent Configuration
Fine-tune the Datadog Agent configuration to reduce data collection:
# Example datadog.yaml configuration to reduce data volume
logs_enabled: true
logs_config:
  processing_rules:
    - type: exclude_at_match
      name: "exclude_debug_logs"
      pattern: "level=debug"
    - type: exclude_at_match
      name: "exclude_health_checks"
      pattern: "health_check"
apm_config:
  enabled: true
  max_traces_per_second: 50
  max_events_per_second: 100
process_config:
  enabled: false # Disable process monitoring if not needed
Proper agent configuration reduces data collection at the source, so you don’t pay to ingest data you don’t need.
Datadog Alternatives
Datadog’s pricing places it at the premium end of the observability market. Compared to alternatives, the costs can be significant:
Solution | Pros | Cons | Relative Cost |
---|---|---|---|
Datadog | Comprehensive, integrated, mature | Expensive, complex pricing | High |
New Relic | All-inclusive pricing, predictable | Less specialized tooling | Medium |
Elastic Stack | Open-source core, flexible | Complex to manage, hidden costs | Medium-Low |
Prometheus + Grafana | Open-source, highly customizable | Requires significant expertise | Low |
OpenTelemetry + CNCF Tools | Open standards, vendor-neutral | Immature integration, requires expertise | Low |
While Datadog’s costs are higher, many organizations find the integrated platform and ease of use justify the premium. The key is managing these costs effectively rather than treating them as fixed or uncontrollable expenses.
Bringing Observability to Datadog Costs (Ironic, Isn’t It?)
In 2025, observability has become a critical part of IT budgets, sometimes matching or exceeding compute costs for data-intensive applications. Treating observability as a strategic investment rather than an unmanageable expense is essential for maintaining both operational excellence and cost efficiency.
Successful observability cost management requires prioritizing value over volume, focusing on collecting data that drives actual insights and actions. Right-sizing monitoring to match service criticality ensures you’re not overpaying for non-critical systems. Automating cost controls with guardrails prevents runaway costs before they happen. Making monitoring costs visible to the teams that generate them drives accountability and cost awareness. And treating cost management as an ongoing process, not a one-time project, ensures the savings are sustainable.
By applying these principles, organizations can enjoy the benefits of comprehensive observability through Datadog while keeping costs reasonable and predictable.
With tools like Holori’s FinOps platform, which is specifically designed to manage Datadog costs, even large-scale implementations can be optimized without sacrificing observability coverage. Think of Holori as an observability platform for cost monitoring!
Try Holori now for free: https://app.holori.com/
