Processors
Processors are the building blocks of log processing pipelines: each one performs a specific operation on log data as it flows through the system, from simple field modifications to complex transformations, enrichment, and routing.
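Most of the processors below share the same basic shape: a document (a map of fields) goes in, an optional condition gates execution, and a modified document comes out, or nothing, which drops the event. The following Python sketch is purely illustrative and is not this product's actual API:

```python
from typing import Callable, Optional

Document = dict  # a log entry as a field/value map

class Processor:
    """Illustrative base shape: run() transforms a document or drops it."""

    def __init__(self, condition: Optional[Callable[[Document], bool]] = None):
        self.condition = condition  # optional gate, like an `if` clause

    def run(self, doc: Document) -> Optional[Document]:
        # Skip this processor when its condition is not met.
        if self.condition and not self.condition(doc):
            return doc
        return self.apply(doc)

    def apply(self, doc: Document) -> Optional[Document]:
        raise NotImplementedError

def run_pipeline(processors: list[Processor], doc: Document) -> Optional[Document]:
    """Apply processors in order; a None result drops the document."""
    for p in processors:
        doc = p.run(doc)
        if doc is None:
            return None
    return doc
```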
AI
AI processors send content to external AI services for analysis, classification, and generation tasks, making it possible to extract insight and meaning from unstructured data. A hedged sketch of one such processor follows the list below.
Anthropic
Processes content with Anthropic's Claude API
Azure OpenAI
Processes content with the Azure OpenAI API
OpenAI
Uses OpenAI's API for content analysis
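To make this concrete, here is a hypothetical sketch of what an AI processor might do internally, classifying a log message with Anthropic's Python SDK. The model name, prompt, and target field are assumptions; in practice these processors are configured in the pipeline rather than hand-coded:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify_log(doc: dict) -> dict:
    """Ask Claude to label a log message and store the label in a new field."""
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",  # assumed model choice
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": f"Classify this log line as ERROR, WARNING, or INFO. "
                       f"Reply with one word.\n\n{doc['message']}",
        }],
    )
    doc["ai.classification"] = response.content[0].text.strip()
    return doc
```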
Control Flow
Control Flow processors manage the execution paths and logic within pipelines: they direct how documents move through the system, handle conditional processing, and organize pipeline structure. A sketch of conditional routing follows the list below.
Comment
Adds an explanatory comment
Contains
Checks for the presence of a value
Date Index Name
Generates time-based index names
Fail
Raises a failure when a condition is met
Final
Terminates a pipeline
Foreach
Applies processors to array elements
Pipeline
Executes another pipeline
Reroute
Directs documents to specific destinations
Script
Executes scripts
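The following illustrative Python sketch combines three control-flow ideas from this list: a Drop-style early exit, a Foreach over an array field, and conditional rerouting. All field and destination names are assumptions:

```python
def foreach(doc: dict, field: str, fn) -> dict:
    """Apply fn to every element of an array field, like a Foreach processor."""
    doc[field] = [fn(item) for item in doc.get(field, [])]
    return doc

def reroute(doc: dict, destination: str) -> dict:
    """Tag the document with a destination, like a Reroute processor."""
    doc["_destination"] = destination
    return doc

def control_flow(doc: dict) -> dict | None:
    if doc.get("status") == "debug":
        return None                        # Drop-style early exit
    doc = foreach(doc, "tags", str.lower)  # Foreach over an array field
    if doc.get("severity") == "critical":
        doc = reroute(doc, "alerts")       # Conditional rerouting
    return doc
```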
Enrich
Enrichment processors add context to log data from external sources: geographic information, DNS lookups, domain intelligence, and lookup tables, making events more valuable for analysis. A sketch of table-based enrichment follows the list below.
Attachment
Extracts content and metadata from attachments
Circle
Converts circles to polygons
DNS Lookup
Performs and caches DNS lookups
Enrich
Enriches documents using lookup tables and SQL queries
Geo Grid
Converts geo-grid definitions to shapes
GeoIP
Adds geographic information based on IP addresses
Lookup
Enriches documents using lookup tables
Network Direction
Determines network traffic direction
Registered Domain
Extracts domain components
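As an illustration of the Lookup idea, this sketch enriches documents from a CSV table. The file name and column names are assumptions:

```python
import csv

def load_lookup(path: str, key_field: str) -> dict:
    """Load a CSV lookup table keyed by one of its columns."""
    with open(path, newline="") as f:
        return {row[key_field]: row for row in csv.DictReader(f)}

# Assumed inventory file with columns "host", "owner", "environment".
HOSTS = load_lookup("hosts.csv", "host")

def enrich_host(doc: dict) -> dict:
    """Copy matching lookup columns into the document, like a Lookup processor."""
    match = HOSTS.get(doc.get("host", ""))
    if match:
        doc["host.owner"] = match["owner"]
        doc["host.environment"] = match["environment"]
    return doc
```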
Filter
Filter processors drop, clean, or rewrite data based on specific criteria, so that only relevant, well-formed content continues through the pipeline. A sketch of drop-and-substitute filtering follows the list below.
Drop
Conditionally stops processing a document
Gsub
Replaces text using regular expressions
HTML Strip
Removes HTML tags
Regex Filter
Filters events using regular expressions
Trim
Removes leading and trailing whitespace
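A minimal sketch of Drop, Gsub, HTML Strip, and Trim behavior in plain Python, assuming a noise pattern for health-check events:

```python
import re

NOISE = re.compile(r"health[-_ ]?check", re.IGNORECASE)

def filter_and_clean(doc: dict) -> dict | None:
    msg = doc.get("message", "")
    if NOISE.search(msg):
        return None                                    # Drop: discard noise
    msg = re.sub(r"<[^>]+>", "", msg)                  # HTML Strip-style removal
    doc["message"] = re.sub(r"\s+", " ", msg).strip()  # Gsub + Trim
    return doc
```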
Mutate
Mutation processors modify existing fields and values: appending values, converting data types, parsing dates, and manipulating strings, keeping data consistent for further processing or analysis. A sketch chaining several mutations follows the list below.
Append
Appends values to fields
Bytes
Converts human-readable byte sizes to numbers
Community ID
Computes a community ID hash
Compact
Removes empty fields from documents
Convert
Converts values between types
Date
Parses dates from date fields
Lowercase
Converts strings to lowercase
Remove
Removes fields
Rename
Renames fields
Set
Sets the value of a field
Sort
Sorts values in a field
Split
Splits a string on a separator
Uppercase
Converts strings to uppercase
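A sketch chaining several of these mutations on one document; the field names are assumptions:

```python
from datetime import datetime, timezone

def mutate(doc: dict) -> dict:
    """Chain typical mutations: rename, convert, split, lowercase, set."""
    if "msg" in doc:
        doc["message"] = doc.pop("msg")                  # Rename
    doc["status_code"] = int(doc.get("status_code", 0))  # Convert to integer
    doc["tags"] = doc.get("tags", "").split(",")         # Split on a separator
    doc["host"] = doc.get("host", "").lower()            # Lowercase
    doc["processed_at"] = datetime.now(timezone.utc).isoformat()  # Set
    return doc
```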
Parse
Parsing processors turn raw or semi-structured input into structured fields by applying patterns and rules to a wide range of formats and message types. A sketch of named-capture and key-value extraction follows the list below.
CEF
Parses CEF messages
CSV
Parses CSV data
Dissect
Parses data using pre-defined patterns
Grok
Extracts fields with patterns
JSON
Parses JSON data
KV
Extracts key-value pairs
LEEF
Parses LEEF messages
Level
Extracts log levels from messages
Pattern
Extracts structured patterns from log messages
Regex Extract
Extracts fields with named capture groups
Syslog
Parses syslog messages
URI Parts
Parses URI strings into fields
URL Decode
Decodes URL-encoded strings
User Agent
Parses user agent strings
XML
Parses XML into maps
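A sketch of the Regex Extract and KV ideas, assuming a simple access-log message shape:

```python
import re

# Assumed message shape: "GET /index.html 200 512"
ACCESS = re.compile(
    r"(?P<method>[A-Z]+) (?P<path>\S+) (?P<status>\d{3}) (?P<bytes>\d+)"
)

def parse_access(doc: dict) -> dict:
    """Extract fields with named capture groups, like a Regex Extract processor."""
    m = ACCESS.search(doc.get("message", ""))
    if m:
        doc.update(m.groupdict())
    return doc

def parse_kv(doc: dict, field: str = "message") -> dict:
    """Split 'key=value key2=value2' text into fields, like a KV processor."""
    for pair in doc.get(field, "").split():
        if "=" in pair:
            key, value = pair.split("=", 1)
            doc[key] = value
    return doc
```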
Security
Security processors protect sensitive information: they encrypt and decrypt fields, generate document fingerprints, and mask or redact data while keeping it usable for analysis. A sketch of hash-based masking follows the list below.
Decrypt
Decrypts AES-encrypted field values
Encrypt
Encrypts string values using AES, with optional compression
Fingerprint
Generates hashes to sign documents
Mask
Masks sensitive data with hashes
Redact
Redacts sensitive data
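A sketch of hash-based masking in the spirit of the Mask processor, assuming email addresses are the sensitive values:

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(doc: dict) -> dict:
    """Replace email addresses with a short, stable SHA-256 digest."""
    def digest(m: re.Match) -> str:
        return "email:" + hashlib.sha256(m.group().encode()).hexdigest()[:12]
    doc["message"] = EMAIL.sub(digest, doc.get("message", ""))
    return doc

# mask_emails({"message": "user alice@example.com logged in"}) replaces the
# address with a digest, so identical emails still correlate across events.
```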
Threat Intelligence
Threat Intelligence processors query external security services for context about potential threats, incorporating real-time intelligence about suspicious indicators into your events. A sketch of a cached indicator lookup follows the list below.
AlienVault
Retrieves threat intelligence from AlienVault
Cloudflare Intel
Retrieves intelligence from Cloudflare's API
IP Quality Score
Enriches data with IP Quality Score
VirusTotal
Enriches data with VirusTotal threat intelligence
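A sketch of a cached indicator lookup. The endpoint, response fields, and document fields are hypothetical; each real provider has its own API and authentication:

```python
from functools import lru_cache

import requests

# Hypothetical intel endpoint; real providers (AlienVault, VirusTotal, ...)
# each use their own URL scheme and credentials.
INTEL_URL = "https://intel.example.com/v1/ip/{ip}"

@lru_cache(maxsize=4096)
def lookup_ip(ip: str) -> dict:
    """Fetch and cache reputation data for an IP address."""
    resp = requests.get(INTEL_URL.format(ip=ip), timeout=5)
    resp.raise_for_status()
    return resp.json()

def add_threat_intel(doc: dict) -> dict:
    ip = doc.get("source.ip")
    if ip:
        intel = lookup_ip(ip)
        doc["threat.score"] = intel.get("score")          # assumed response field
        doc["threat.malicious"] = intel.get("malicious")  # assumed response field
    return doc
```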
Transform
Transform processors change how data is structured: expanding and flattening nested fields, combining array elements, moving fields, and normalizing field names. A sketch of dot expansion follows the list below.
Dot Expander
Expands dot notation field names into nested object structures
Dot Nester
Flattens nested objects into dot notation fields
Join
Combines array elements
Move
Changes field locations
Normalize
Converts field names between formats
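A sketch of the Dot Expander idea, turning dotted field names into nested objects:

```python
def dot_expand(doc: dict) -> dict:
    """Turn {"a.b.c": 1} into {"a": {"b": {"c": 1}}}, like a Dot Expander."""
    out: dict = {}
    for key, value in doc.items():
        node = out
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # create intermediate objects
        node[leaf] = value
    return out

# dot_expand({"http.request.method": "GET"})
# -> {"http": {"request": {"method": "GET"}}}
```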