Version: 1.5.1

Recover

Control Flow Pipeline

Synopsis

Recovers from errors raised by previous processors, allowing the pipeline to continue execution. It acts like a try/catch block in a programming language.

Schema

- recover:
    description: <text>
    if: <script>
    ignore_failure: <boolean>
    on_failure: <processor[]>
    on_success: <processor[]>
    tag: <string>

Configuration

The following fields are used to define the processor:

| Field | Required | Default | Description |
|---|---|---|---|
| description | N | - | Explanatory note |
| if | N | - | Condition to run |
| ignore_failure | N | false | Continue processing if operation fails |
| on_failure | N | - | See Handling Failures |
| on_success | N | - | See Handling Success |
| tag | N | - | Identifier |
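The optional fields can be combined on a single processor. The sketch below is illustrative only: the `set` processor and the `logEntry.severity` field are assumptions, not part of this page's schema.

```yaml
- recover:
    description: "Clear any error raised by earlier processors"
    if: "logEntry.severity != 'critical'"   # only recover for non-critical events
    ignore_failure: false
    tag: "recover_noncritical"
    on_success:
      - set:                                # illustrative follow-up processor
          field: recovered
          value: true
    on_failure:
      - set:
          field: recovered
          value: false
```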

Details

The recover processor clears error states from previous processors, allowing the pipeline to continue execution as if no error occurred. It functions similarly to exception handling in programming languages, where errors are caught and handled gracefully.

Error Recovery Mechanism:

  • Clears any error state from processors in the current on_failure chain
  • Allows the pipeline to continue with subsequent processors
  • Does not modify event data—purely handles error state
  • Can be conditional using the if field

Common Use Cases:

  • Parsing Failures: Recover when optional parsing fails (e.g., grok patterns, date formats)
  • Enrichment Failures: Continue when external enrichment services are unavailable
  • Validation Failures: Proceed with default values when validation fails
  • Optional Processing: Make certain processing steps non-critical
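The validation use case might look like the following sketch. The `validate` processor and its fields are hypothetical; only `recover` is documented on this page.

```yaml
- validate:                      # hypothetical validation processor
    field: port
    type: integer
    on_failure:
      - set:
          field: port
          value: 514             # fall back to a default value
      - recover:
          description: "Proceed with the default port when validation fails"
```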
note

The recover processor doesn't modify any data or fields. It serves purely as an error recovery mechanism to prevent pipeline termination due to previous failures.

This processor is most commonly used in error handling chains, at the end of optional processing sequences, or when you want to ensure that certain errors don't prevent the pipeline from completing.

warning

Using recover will suppress all previous errors in the pipeline execution. Make sure this is the intended behavior before using this processor, as it can mask legitimate issues that should be addressed.

Examples

Basic Error Recovery

Recovering from pipeline errors...

- set:
    field: user_id
    value: "{{invalid.field}}"
    on_failure:
      - set:
          field: user_id
          value: "unknown"
      - recover:
          description: "Continue despite field extraction failure"

prevents pipeline failure:

{
  "user_id": "unknown"
}

Conditional Recovery

Recovering only under specific conditions...

- enrich:
    field: ip_address
    target: geo_data
    on_failure:
      - recover:
          if: "logEntry.source == 'internal'"
          description: "Skip geo-enrichment for internal IPs"

recovers based on condition:

{
  "ip_address": "10.0.0.1",
  "source": "internal"
}

Optional Processing Chain

Making an entire processing chain optional...

- if:
    condition: "logEntry.type == 'experimental'"
    processors:
      - experimental_processor:
          field: data
      - recover:
          description: "Experimental processing is optional"
          tag: "optional_processing"
- set:
    field: processed
    value: true

continues regardless of experimental processor outcome:

{
  "type": "experimental",
  "processed": true
}

Multiple Error Handlers

Using recover after multiple fallback attempts...

- grok:
    pattern: "%{COMMONAPACHELOG}"
    field: message
    on_failure:
      - grok:
          pattern: "%{COMBINEDAPACHELOG}"
          field: message
          on_failure:
            - set:
                field: parse_status
                value: "failed"
            - recover:
                description: "Continue with unparsed log"

recovers after all parsing attempts fail:

{
  "message": "unparseable log format",
  "parse_status": "failed"
}

External Service Fallback

Recovering from external service failures...

- virustotal:
    field: file_hash
    target: threat_info
    api_key: "{{secrets.vt_key}}"
    on_failure:
      - set:
          field: threat_check_status
          value: "service_unavailable"
      - recover:
          description: "Continue without threat intelligence"
          tag: "external_service_recovery"

continues when external API is unavailable:

{
  "file_hash": "abc123...",
  "threat_check_status": "service_unavailable"
}