Dynamic Sample
Synopsis
Dynamically adjusts sampling rates based on event volume.
Schema
```yaml
- dynamic_sample:
    sample_mode: <string>
    sample_group_key: <string>
    sample_period_sec: <numeric>
    minimum_events: <numeric>
    max_sampling_rate: <numeric>
    exclude_filters: <string[]>
    tag: <string>
    description: <text>
    if: <script>
    ignore_failure: <boolean>
    ignore_missing: <boolean>
    on_failure: <processor[]>
    on_success: <processor[]>
```
Configuration
The following fields are used to define the processor:
| Field | Required | Default | Description |
|---|---|---|---|
| `sample_mode` | N | `"logarithmic"` | Algorithm used to calculate the sampling rate (`"logarithmic"` or `"square_root"`) |
| `sample_group_key` | N | `"{{host}}"` | Template expression used to group events for sampling (can reference event fields) |
| `sample_period_sec` | N | `30` | Time period in seconds for measuring event volume and adjusting rates |
| `minimum_events` | N | `30` | Minimum number of events within a period before sampling starts |
| `max_sampling_rate` | N | `100` | Cap on the calculated sampling rate (e.g., `100` means sampling never thins beyond 1 in 100 events) |
| `exclude_filters` | N | - | List of conditions that exclude matching events from sampling |
| `tag` | N | - | Identifier for logging |
| `description` | N | - | Explanatory note |
| `if` | N | - | Condition to run |
| `ignore_failure` | N | `false` | Continue processing if sampling fails |
| `ignore_missing` | N | `false` | Skip if referenced fields don't exist |
| `on_failure` | N | - | Error handling processors |
| `on_success` | N | - | Success handling processors |
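For orientation, here is a configuration that spells out the documented defaults explicitly (equivalent to omitting every one of these fields):

```yaml
- dynamic_sample:
    sample_mode: logarithmic      # or square_root
    sample_group_key: "{{host}}"  # template evaluated per event
    sample_period_sec: 30         # measurement window in seconds
    minimum_events: 30            # below this volume nothing is sampled
    max_sampling_rate: 100        # rate is never pushed beyond 1 in 100
```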
Sampling Modes
The processor supports two algorithms for calculating sampling rates:

- **Logarithmic** (`logarithmic`): Uses the natural log of the event count to determine the sampling rate.
  - The rate grows slowly with volume, so reduction stays comparatively gentle even for large groups.
  - Formula: `sampling_rate = ceiling(log(event_count))`
- **Square Root** (`square_root`): Uses the square root of the event count to determine the sampling rate.
  - The rate grows faster than in logarithmic mode, giving more aggressive reduction at high volumes.
  - Formula: `sampling_rate = ceiling(sqrt(event_count))`
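As a worked comparison (assuming `log` denotes the natural log), take a group that produced 10,000 events in one period:

```
logarithmic:  sampling_rate = ceiling(log(10000))  = ceiling(9.21)  = 10   -> keep ~1 in 10
square_root:  sampling_rate = ceiling(sqrt(10000)) = ceiling(100.0) = 100  -> keep 1 in 100
```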
Details
Provides intelligent event sampling that automatically adjusts sampling rates based on event volume. The processor monitors event frequency within defined groups and time periods, then dynamically calculates appropriate sampling rates to maintain data representativeness while reducing volume.
This adaptive approach ensures that during high-volume periods, sampling becomes more aggressive to prevent overwhelming downstream systems, while during low-volume periods, more or all events are preserved. The processor supports different sampling algorithms and can exclude specific events from sampling.
The dynamic sampler adds two metadata fields to sampled events:
- `_vmetric.sampled`: Shows the current sampling rate (e.g., `"10:1"`)
- `_vmetric.sample_group`: Contains the evaluated sample group key
These fields can be used for analysis or referenced in downstream processors.
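For illustration, a sampled event might carry metadata like the following (values are examples):

```yaml
_vmetric.sampled: "10:1"        # current sampling rate for this group
_vmetric.sample_group: "web-01" # evaluated sample_group_key, e.g. from "{{host}}"
```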
Sampling inherently discards data, so use it with caution for critical events. Always use `exclude_filters` to preserve important events such as errors, alerts, or security incidents that require 100% retention regardless of volume.
Examples
Basic
Applying simple host-based sampling, which begins once a host sends more than 50 events per minute:
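A minimal sketch under those assumptions; the one-minute window and 50-event threshold map to `sample_period_sec` and `minimum_events`:

```yaml
- dynamic_sample:
    sample_group_key: "{{host}}"  # one volume counter per host
    sample_period_sec: 60         # measure volume over one-minute windows
    minimum_events: 50            # sampling kicks in above 50 events per window
```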
Service-Based with Exclusions
Sampling by service with critical events excluded, so all error and critical events are preserved while normal traffic is sampled:
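A sketch assuming script-style conditions (as used by the `if` field) and illustrative field names `service` and `severity`:

```yaml
- dynamic_sample:
    sample_group_key: "{{service}}"  # sample each service independently
    exclude_filters:
      - "severity == 'error'"        # never sample errors
      - "severity == 'critical'"     # never sample critical alerts
```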
Multi-Dimensional
Grouping by multiple attributes, so that high-volume containers are sampled independently of others:
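A sketch assuming the events carry `namespace` and `container` fields (names are illustrative); the composite key gives every container its own volume counter:

```yaml
- dynamic_sample:
    sample_group_key: "{{namespace}}-{{container}}"  # one counter per container
    sample_period_sec: 30                            # documented default window
```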
Using Metadata Enrichment
Tracking the sampling rate in the event data, so that sampled events carry the metadata needed to interpret them correctly downstream:
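A sketch using a hypothetical downstream `set` processor inside `on_success` to copy the sampler's metadata into a regular event field; substitute whatever enrichment processor your pipeline actually provides:

```yaml
- dynamic_sample:
    sample_group_key: "{{host}}"
    tag: "adaptive_sampler"             # identifier for logging
    on_success:
      - set:                            # hypothetical enrichment processor
          field: labels.sample_rate     # illustrative target field
          value: "{{_vmetric.sampled}}" # e.g. "10:1"
```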