Version: 1.4.0

Recover

Control Flow Pipeline

Synopsis

Terminates the pipeline successfully, ignoring any previous errors.

Schema

- recover:
    description: <text>
    if: <script>
    ignore_failure: <boolean>
    ignore_missing: <boolean>
    on_failure: <processor[]>
    on_success: <processor[]>
    tag: <string>

Configuration

The following fields are used to define the processor:

| Field          | Required | Default | Description                                       |
|----------------|----------|---------|---------------------------------------------------|
| description    | N        | -       | Explanatory note                                  |
| if             | N        | -       | Condition to run                                  |
| ignore_failure | N        | false   | Continue processing if operation fails            |
| ignore_missing | N        | false   | Skip processing if referenced field doesn't exist |
| on_failure     | N        | -       | See Handling Failures                             |
| on_success     | N        | -       | See Handling Success                              |
| tag            | N        | -       | Identifier                                        |
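As a sketch of how these fields fit together, the following configuration populates every optional field at once; the script, field names, and values are illustrative, not required:

```yaml
- recover:
    description: "Swallow errors from the optional enrichment step"  # explanatory note
    if: "logEntry.env == 'dev'"       # illustrative condition: only recover in dev
    ignore_failure: false
    ignore_missing: false
    on_failure:                       # runs if the recover step itself fails
      - set:
          field: recover_status
          value: "failed"
    on_success:                       # runs after a successful recovery
      - set:
          field: recover_status
          value: "ok"
    tag: "dev_recovery"               # identifier for tracing this processor
```

In practice most uses set only description and tag, as in the examples below.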

Details

Rescues the pipeline from errors raised by previous processors. Placed at the end of an on_failure chain, it discards the pending error so the pipeline terminates successfully instead of failing. This is used for scenarios where you want to prevent failures from propagating.

The processor acts as a boundary, catching any previous failures and treating them as successful completions. This allows pipelines to continue processing subsequent log entries without being halted by errors in earlier processors.

note

The recover processor doesn't modify any data or fields. It serves purely as an error recovery mechanism to prevent pipeline termination due to previous failures.

This processor is most commonly used in error handling chains, at the end of optional processing sequences, or when you want to ensure that certain errors don't prevent the pipeline from completing.

warning

Using recover will suppress all previous errors in the pipeline execution. Make sure this is the intended behavior before using this processor, as it can mask legitimate issues that should be addressed.
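One way to mitigate this risk is to record that an error was suppressed before recovering, so the failure remains visible downstream. A minimal sketch, assuming a hypothetical json parse step and the set processor used in the examples below:

```yaml
- json:
    field: payload                      # hypothetical processor that may fail to parse
    on_failure:
      - set:
          field: recovered_error        # leave an audit trail of the suppressed error
          value: "payload_parse_failed"
      - recover:
          description: "Suppress parse failure, but keep it traceable"
          tag: "audited_recovery"
```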

Examples

Basic Error Recovery

Recovering from pipeline errors...

- set:
    field: user_id
    value: "{{invalid.field}}"
    on_failure:
      - set:
          field: user_id
          value: "unknown"
      - recover:
          description: "Continue despite field extraction failure"

prevents pipeline failure:

{
  "user_id": "unknown"
}

Conditional Recovery

Recovering only under specific conditions...

- enrich:
    field: ip_address
    target: geo_data
    on_failure:
      - recover:
          if: "logEntry.source == 'internal'"
          description: "Skip geo-enrichment for internal IPs"

recovers based on condition:

{
  "ip_address": "10.0.0.1",
  "source": "internal"
}

Optional Processing Chain

Making an entire processing chain optional...

- if:
    condition: "logEntry.type == 'experimental'"
    processors:
      - experimental_processor:
          field: data
      - recover:
          description: "Experimental processing is optional"
          tag: "optional_processing"
- set:
    field: processed
    value: true

continues regardless of experimental processor outcome:

{
  "type": "experimental",
  "processed": true
}

Multiple Error Handlers

Using recover after multiple fallback attempts...

- grok:
    pattern: "%{COMMONAPACHELOG}"
    field: message
    on_failure:
      - grok:
          pattern: "%{COMBINEDAPACHELOG}"
          field: message
          on_failure:
            - set:
                field: parse_status
                value: "failed"
            - recover:
                description: "Continue with unparsed log"

recovers after all parsing attempts fail:

{
  "message": "unparseable log format",
  "parse_status": "failed"
}

External Service Fallback

Recovering from external service failures...

- virustotal:
    field: file_hash
    target: threat_info
    api_key: "{{secrets.vt_key}}"
    on_failure:
      - set:
          field: threat_check_status
          value: "service_unavailable"
      - recover:
          description: "Continue without threat intelligence"
          tag: "external_service_recovery"

continues when external API is unavailable:

{
  "file_hash": "abc123...",
  "threat_check_status": "service_unavailable"
}