
Return

Control Flow Pipeline

Synopsis

Finalizes processing and prevents further pipeline execution.

Schema

- return:
    description: <text>
    if: <script>
    ignore_failure: <boolean>
    ignore_missing: <boolean>
    on_failure: <processor[]>
    on_success: <processor[]>
    tag: <string>

Configuration

The following fields are used to define the processor:

| Field | Required | Default | Description |
|---|---|---|---|
| description | N | - | Explanatory note |
| if | N | - | Condition to run |
| ignore_failure | N | false | Continue processing if operation fails |
| ignore_missing | N | false | Skip processing if referenced field doesn't exist |
| on_failure | N | - | See Handling Failures |
| on_success | N | - | See Handling Success |
| tag | N | - | Identifier |
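
A minimal sketch of how these fields combine is shown below. The field names follow the schema above, while the logEntry.prefiltered condition and the early_exit tag are hypothetical placeholders.

# Sketch only: logEntry.prefiltered and the early_exit tag are placeholders.
- return:
    description: "Stop the pipeline for events that were already pre-filtered"
    if: "logEntry.prefiltered == true"
    ignore_failure: true
    tag: early_exit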

Details

Terminates pipeline execution immediately, preventing any subsequent processors from running. This processor is used to implement early termination logic, conditional exit points, and optimization paths that skip unnecessary processing.

When the return processor executes, it stops the current pipeline and marks the processing as complete. The data processed up to this point is preserved and passed to the next stage.

note

The return processor immediately stops pipeline execution. Any processors defined after a return will not be executed unless the return is conditional and its condition is not met.

This processor is commonly used to implement business logic that requires an early exit, such as filtering, error handling, or skipping unnecessary processing.

warning

A return processor with on_success or on_failure chains executes those chains before terminating the pipeline. This allows cleanup operations or additional processing to run before exit.
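
For instance, an on_failure chain can record that the exit step itself went wrong before the pipeline stops. The sketch below assumes the set processor and logEntry condition syntax used in the examples that follow; the exit_error field name is hypothetical.

# Sketch: records a failure of the return step itself before the pipeline
# terminates; exit_error is a placeholder field name.
- return:
    if: "logEntry.status == 'done'"
    description: "Exit once processing is marked done"
    on_failure:
      - set:
          field: exit_error
          value: "return step failed"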

Examples

Basic Early Exit

Stopping pipeline execution early...

- set:
    field: processed
    value: true
- return:
    description: "Exit after initial processing"
- set:
    field: additional_processing
    value: true

skips remaining processors:

{
  "processed": true
}

Conditional Return

Exiting based on field conditions...

- return:
    if: "logEntry.skip_processing == true"
    description: "Skip processing when flag is set"
- set:
    field: full_processing
    value: true
- enrich:
    field: user_id
    target: user_details

returns early when condition matches:

{
  "skip_processing": true
}

Error Handling Return

Using return for error scenarios...

- validate:
    field: required_field
    exists: true
    on_failure:
      - set:
          field: error
          value: "Missing required field"
      - return:
          description: "Exit due to validation failure"
- set:
    field: validation_passed
    value: true

exits pipeline on validation failure:

{
  "error": "Missing required field"
}

Return with Cleanup

Performing cleanup before exit...

- return:
    description: "Exit with cleanup operations"
    on_success:
      - set:
          field: exit_timestamp
          value: "{{now}}"
      - set:
          field: pipeline_status
          value: "completed_early"

executes cleanup before termination:

{
  "exit_timestamp": "2024-01-15T10:30:00Z",
  "pipeline_status": "completed_early"
}

Performance Optimization

Optimizing pipeline for specific data types...

- return:
    if: "logEntry.log_level == 'DEBUG' && logEntry.environment == 'production'"
    description: "Skip debug logs in production"
    on_success:
      - set:
          field: filtered_out
          value: true
- enrich:
    field: log_data
    target: enriched_data
- complex_analysis:
    field: enriched_data

skips expensive processing for debug logs:

{
  "log_level": "DEBUG",
  "environment": "production",
  "filtered_out": true
}

Multi-Path Exit Strategy

Different exit points for different scenarios...

- return:
    if: "logEntry.priority == 'low'"
    description: "Quick exit for low priority events"
    on_success:
      - set:
          field: processing_level
          value: "minimal"
- return:
    if: "logEntry.priority == 'medium'"
    description: "Standard processing complete"
    on_success:
      - set:
          field: processing_level
          value: "standard"
- set:
    field: processing_level
    value: "full"
- advanced_processing:
    field: data

exits at different points based on priority:

{
  "priority": "medium",
  "processing_level": "standard"
}