Optimize Azure Monitor

Azure Monitor is a powerful tool for tracking and analyzing the performance of your Azure resources and applications. To make the most of Azure Monitor while keeping costs in check, it's important to follow best practices for cost optimization. This guide offers a comprehensive set of recommendations that will help you manage and optimize your Azure Monitor usage effectively.

Cost Optimization Recommendations

Combined Operational and Security Data

Impact: Medium

Combining operational and security data in the same Log Analytics workspace can lead to higher costs if you enable Microsoft Sentinel.

When Microsoft Sentinel is enabled in a Log Analytics workspace, all data in that workspace is subject to its pricing model. Combining operational and security data in one workspace could increase costs if you're not using Sentinel to analyze both types of data. Carefully assess the cost benefits of consolidating data versus maintaining separate workspaces.

Pricing Tier

Impact: Medium

Choosing the wrong pricing tier for your Log Analytics workspaces can result in higher costs than necessary.

Log Analytics workspaces default to a pay-as-you-go pricing model. However, if you consistently collect a large volume of data, you can save money by opting for a commitment tier, which offers reduced per-GB rates in exchange for committing to a minimum daily ingestion volume. You can save even more with dedicated cluster pricing, which lets multiple workspaces in the same region share a single commitment tier.
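
To see whether a commitment tier is worth it, compare your average daily billable ingestion against the tier entry points (the lowest commitment tier currently starts at 100 GB per day). A minimal sketch follows, assuming the azure-monitor-query and azure-identity packages and a placeholder workspace ID; the Usage table reports billable volume in MB.

```python
# Sketch: estimate average daily billable ingestion for a workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

# The Usage table records billable volume per table per hour, in MB.
QUERY = """
Usage
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000.0 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = avg(DailyGB)
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=31))

# Assumes the query succeeds fully; a single row with the average comes back.
for row in result.tables[0].rows:
    print(f"Average daily billable ingestion: {row[0]:.1f} GB")
```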

Data Retention

Impact: High

Retaining data longer than necessary can increase storage costs.

Log Analytics workspaces store data for a default retention period, but you can adjust this to suit your needs. Carefully consider how long you need to retain data based on compliance and analysis requirements. Using long-term retention, which allows data retention for up to twelve years, can help reduce costs if you access data infrequently.
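
As an illustration, the sketch below adjusts per-table retention through the ARM Tables API, keeping 30 days of interactive retention and leaving the rest of a two-year total retention window in lower-cost long-term retention. The subscription, resource group, workspace, and table names are placeholders, and the api-version shown is an assumption to verify against the current Tables reference.

```python
# Sketch: set interactive and total retention on one table via the ARM REST API.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"   # placeholders
RESOURCE_GROUP = "<resource-group>"
WORKSPACE = "<workspace-name>"
TABLE = "AppTraces"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{WORKSPACE}"
    f"/tables/{TABLE}?api-version=2022-10-01"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
body = {
    "properties": {
        "retentionInDays": 30,        # interactive (analytics) retention
        "totalRetentionInDays": 730,  # interactive + long-term retention
    }
}

resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["properties"])
```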

Basic Logs for Infrequently Accessed Tables

Impact: High

Using Basic Logs for tables that are not queried often can save on ingestion costs.

Configuring tables for Basic Logs lowers ingestion costs, but it also limits available features and adds charges for log queries. If certain tables are infrequently queried and not used for alerting, Basic Logs can be an economical choice.
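
The sketch below switches a single table to the Basic plan using the same Tables API as the retention example above; the table name is illustrative, and only certain tables support the Basic plan, so check the supported-tables list first.

```python
# Sketch: move an infrequently queried table from the Analytics plan to Basic.
import requests
from azure.identity import DefaultAzureCredential

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace-name>/tables/ContainerLogV2?api-version=2022-10-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.patch(
    url,
    json={"properties": {"plan": "Basic"}},  # default plan is "Analytics"
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```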

Limit Data Collection

Impact: Medium

Collecting unnecessary data increases ingestion and storage costs.

The volume of data you collect directly impacts Azure Monitor costs. Be selective about what data is collected for monitoring purposes. Weigh the trade-offs between the granularity of your data collection and its cost implications. For instance, a higher sampling rate provides more detailed insights but also increases costs.
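
One common way to make that trade-off explicit is sampling. The sketch below, assuming the Azure Monitor OpenTelemetry Distro for Python (azure-monitor-opentelemetry), keeps roughly a quarter of traces; the connection string is a placeholder.

```python
# Sketch: reduce trace ingestion volume by sampling in the Azure Monitor distro.
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    # Can also be supplied via the APPLICATIONINSIGHTS_CONNECTION_STRING env var.
    connection_string="<application-insights-connection-string>",  # placeholder
    sampling_ratio=0.25,  # ingest roughly 25% of traces
)
```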

Regularly Analyze Collected Data

Impact: Low

Regular data analysis helps identify potential cost-saving opportunities.

Use Log Analytics insights to periodically assess your data collection trends. Identifying anomalies or patterns in data collection can reveal opportunities for cost optimization. Regular data analysis helps you stay proactive and avoid unnecessary cost escalations.
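
As a starting point, the sketch below (azure-monitor-query and azure-identity assumed, workspace ID as a placeholder) lists the tables that drive the most billable ingestion, which is usually where anomalies and savings show up first.

```python
# Sketch: rank tables by billable ingestion over the last 30 days.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
Usage
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000.0 by DataType
| top 10 by IngestedGB desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace("<workspace-guid>", QUERY, timespan=timedelta(days=30))

for row in result.tables[0].rows:
    print(f"{row[0]}: {row[1]:.1f} GB")
```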

Implement Proactive Alerts

Impact: Low

Alerts can help you avoid unexpected costs and make the most of optimization opportunities.

Set up alerts to notify you of excessive usage, and take advantage of Azure Advisor's cost recommendations. Azure Advisor analyzes your Log Analytics workspaces and provides actionable insights to optimize your costs. Proactively addressing recommendations can help mitigate future cost increases.

Collect Essential Resource Log Data

Impact: Low

Unnecessary resource log data increases ingestion costs.

Be selective when configuring diagnostic settings for resource logs, and only collect the data that is essential for monitoring and compliance. For resource logs sent to tables that support workspace transformations, you can also filter out unneeded records at ingestion time, further reducing ingestion costs.
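
For illustration, the fragment below sketches the part of a workspace transformation data collection rule that does the filtering. The stream and property names follow the documented DCR schema as best understood here, the filter expression is only an example to adapt to the columns your resource logs actually emit, and a complete rule also needs a location and the rest of the resource envelope.

```python
# Hedged fragment: the filtering portion of a workspace transformation DCR,
# expressed as the JSON-like payload sent to the dataCollectionRules ARM API.
workspace_transformation = {
    "properties": {
        "destinations": {
            "logAnalytics": [
                {
                    "workspaceResourceId": "<workspace-resource-id>",  # placeholder
                    "name": "destWorkspace",
                }
            ]
        },
        "dataFlows": [
            {
                # Workspace transformations target one table via a
                # "Microsoft-Table-<TableName>" stream.
                "streams": ["Microsoft-Table-AzureDiagnostics"],
                "destinations": ["destWorkspace"],
                # Example filter: drop informational records at ingestion time.
                "transformKql": "source | where Level != 'Informational'",
            }
        ],
    }
}
```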

Optimize Alert Frequency

Impact: Low

Alert frequency can influence costs.

  • Activity log, service health, and resource health alerts are free—use them as much as possible.
  • Log search alerts incur costs based on how frequently rules are evaluated. Set these alerts to balance timely notifications and cost optimization.
  • Metric alerts that monitor multiple resources can be expensive. Consider reducing the scope of alerts or using log search alert rules to monitor large groups of resources more cost-effectively.

Azure Monitor Agent Migration

Impact: Medium

Using the older Log Analytics agent limits your data filtering options.

The Azure Monitor agent provides more granular filtering options and flexible configuration for data collection. Migrating from the Log Analytics agent to the Azure Monitor agent can help optimize data collection and reduce costs by offering more control over what data is collected.

Filter Unneeded Data From Agents

Impact: Medium

Collecting unnecessary data from agents increases ingestion costs.

  • Filter agent data: Configure agents to send only the necessary data for analysis and alerting.
  • Optimize VM insights: Disable collection of data you don’t need, particularly if certain features or data are not in use.
  • Adjust performance counter polling frequency: Reducing polling frequency can lower data ingestion costs without sacrificing essential monitoring functionality; the sketch after this list shows both an event filter and a lower polling interval in a data collection rule.
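
The hedged sketch below shows the data sources section of an Azure Monitor agent data collection rule that applies both ideas: an XPath filter that keeps only Critical, Error, and Warning Windows System events, and a one-minute performance counter sampling interval. Property names follow the documented DCR schema; the counters and names are illustrative, and a complete rule also needs destinations and data flows.

```python
# Hedged fragment: the dataSources block of a data collection rule payload.
dcr_data_sources = {
    "windowsEventLogs": [
        {
            "name": "systemErrorsOnly",
            "streams": ["Microsoft-Event"],
            # XPath filter: Level 1 = Critical, 2 = Error, 3 = Warning.
            "xPathQueries": ["System!*[System[(Level=1 or Level=2 or Level=3)]]"],
        }
    ],
    "performanceCounters": [
        {
            "name": "basicPerf",
            "streams": ["Microsoft-Perf"],
            "samplingFrequencyInSeconds": 60,  # poll once a minute
            "counterSpecifiers": [
                "\\Processor(_Total)\\% Processor Time",
                "\\Memory\\Available Bytes",
            ],
        }
    ],
}
```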

Avoid Duplicate Data Collection

Impact: Medium

Duplicate data collection increases costs unnecessarily.

Ensure that multiple agents or data collection rules aren't duplicating data. If migrating from the Log Analytics agent to the Azure Monitor agent, verify that you’re not collecting the same data from both agents during the transition.

Migrate SCOM to Azure Monitor SCOM Managed Instance

Impact: Low

Maintaining an on-premises SCOM environment can be costly.

Consider migrating your System Center Operations Manager (SCOM) environment to Azure Monitor SCOM Managed Instance. This migration eliminates the need for on-premises management servers and reduces infrastructure costs.

Disable Console Logging in Production

Impact: High

Leaving the Console Logging Provider enabled in production can generate excessive log data and incur high costs.

Ensure that the default Console Logging Provider is disabled in production environments to prevent unnecessary logging and associated costs.
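
The recommendation above is phrased in .NET terms (the Console Logging Provider). A rough Python analogue, assuming a standard logging setup, is to attach the console handler only outside production so stdout noise isn't generated and ingested alongside the telemetry you actually need; the environment variable name is an assumption.

```python
# Sketch: console logging only in non-production environments.
import logging
import os

root = logging.getLogger()
root.setLevel(logging.INFO)

# In production, rely on the handler/exporter that ships telemetry to
# Azure Monitor rather than writing everything to the console as well.
if os.getenv("ENVIRONMENT", "development") != "production":
    root.addHandler(logging.StreamHandler())
```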

Reduce Log Verbosity

Impact: Medium

Verbose logs can significantly increase ingestion and storage costs.

Configure applications to collect only logs at the “Warning” level or higher, rather than the default “Information” level. This helps reduce the volume of logs and the associated storage costs.
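
A minimal sketch with the Azure Monitor OpenTelemetry Distro for Python: raise the logger level so only Warning and above is recorded and exported. The chatty library logger shown is just an example.

```python
# Sketch: ingest only Warning-and-above log records.
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()  # reads APPLICATIONINSIGHTS_CONNECTION_STRING

# Raise the root logger from the default Information/INFO level to WARNING.
logging.getLogger().setLevel(logging.WARNING)
# Chatty framework loggers can be raised even further if needed.
logging.getLogger("urllib3").setLevel(logging.ERROR)
```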

Filter Out Unnecessary Traces

Impact: Medium

Noisy traces from sources like health checks contribute to unnecessary data ingestion and costs.

Configure instrumentation to exclude traces from endpoints that generate noisy data, such as health checks, to reduce the overall data volume.
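
One way to do this with the OpenTelemetry Flask instrumentation is the excluded_urls option (several other HTTP instrumentations accept the same option, and the OTEL_PYTHON_EXCLUDED_URLS environment variable sets it globally); the paths below are assumptions about where your probes live.

```python
# Sketch: stop health probes and metrics scrapes from producing spans.
from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)

# Comma-separated patterns; matching requests generate no telemetry at all.
FlaskInstrumentor().instrument_app(app, excluded_urls="healthz,livez,metrics")

@app.route("/healthz")
def healthz():
    return "ok"
```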

Use Metrics for Alerting

Impact: Medium

Alerting on traces can be more expensive than using metrics.

Whenever possible, use OpenTelemetry metrics for alerting instead of traces. Metrics are more cost-effective for monitoring and alerting on application behavior.
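
A minimal sketch of that approach with the OpenTelemetry metrics API and the Azure Monitor distro: record a counter for the condition you care about and alert on the resulting custom metric instead of querying traces. The meter and metric names are illustrative.

```python
# Sketch: emit a custom metric that is cheap to alert on.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics

configure_azure_monitor()  # reads APPLICATIONINSIGHTS_CONNECTION_STRING

meter = metrics.get_meter("checkout")
failed_payments = meter.create_counter(
    "payments.failed", description="Number of failed payment attempts"
)

def handle_payment_error(reason: str) -> None:
    # Attributes keep enough context for alerting without storing full traces.
    failed_payments.add(1, {"reason": reason})
```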

Explore Lower-Cost Log Storage Tiers

Impact: Low

Choosing the right log storage tier can result in significant savings.

Consider switching to the Basic Logs tier for logs that don't require the full functionality of the default analytics tier. Basic Logs provide a lower cost per GB and can be a cost-effective choice for less frequently accessed log data.

Utilize Trace-Based Log Sampling

Impact: Low

Log sampling can reduce log ingestion volume and costs.

Use trace-based log sampling in the Azure Monitor OpenTelemetry Distro to sample logs in step with their associated traces, dropping the logs that belong to traces you don't keep while ensuring critical logs are not missed. This technique helps reduce ingestion volume and optimize costs.