Azure OpenAI
Synopsis
Enriches events by sending field content to the Azure OpenAI API for analysis and storing the response in the event.
Schema
```yaml
azureai:
  - field: <ident>
  - api_key: <string>
  - resource_name: <string>
  - deployment_name: <string>
  - target_field: <ident>
  - description: <text>
  - if: <script>
  - ignore_failure: <boolean>
  - ignore_missing: <boolean>
  - temperature: <number>
  - max_tokens: <number>
  - api_version: <string>
  - developer_msg: <string>
  - on_failure: <processor[]>
  - on_success: <processor[]>
  - tag: <string>
```
Configuration
| Field | Required | Default | Description |
|---|---|---|---|
| `field` | Y | `message` | Field containing the content to analyze |
| `api_key` | Y | - | Azure OpenAI API key |
| `resource_name` | Y | - | Azure OpenAI resource name |
| `deployment_name` | Y | - | Azure model deployment name |
| `target_field` | N | `field` | Field in which to store the API response; defaults to the source field |
| `description` | N | - | Explanatory note |
| `if` | N | - | Condition that must hold for the processor to run |
| `ignore_failure` | N | `false` | Continue if the API call fails |
| `ignore_missing` | N | `false` | Continue if the source field doesn't exist |
| `temperature` | N | `0.7` | Response randomness (0-1) |
| `max_tokens` | N | `1000` | Maximum response length in tokens |
| `api_version` | N | `2024-02-15-preview` | Azure OpenAI API version |
| `developer_msg` | N | Default system message | System message providing context |
| `on_failure` | N | - | See Handling Failures |
| `on_success` | N | - | See Handling Success |
| `tag` | N | - | Identifier |
Details
This processor is useful for automated analysis, context enrichment, and intelligent processing of log data. It uses the Azure OpenAI API to analyze field content, and supports system context messages for customized analysis, temperature control for response variation, and token limits for managing response length.

API responses are cached to improve performance, but each API call still adds latency to event processing.

The resource name and deployment name must match your Azure setup: invalid names cause processing errors, so make sure your Azure OpenAI resource has the required model deployments.

Developer messages help guide the AI's analysis, and lower temperature values produce more focused responses.

Consider rate limits and costs for high-volume processing: token limits control response length and cost, and long input texts may exceed the model's token limits.

API keys must be stored and accessed securely.
Examples
Basic
Analyzing a Cisco device log adds the AI's analysis to the event.
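A minimal configuration might look like the following sketch; the resource name, deployment name, target field, and secret-handling mechanism are illustrative placeholders, not values from this documentation:

```yaml
- azureai:
    field: message                        # source field with the raw log line
    api_key: "${AZURE_OPENAI_API_KEY}"    # placeholder - load from a secret store
    resource_name: my-openai-resource     # must match your Azure resource
    deployment_name: gpt-4o               # must match an existing model deployment
    target_field: ai_analysis             # response is stored here
```

With this configuration the processor sends the content of `message` to the deployment and writes the model's response to `ai_analysis`, leaving the source field untouched.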
Precision
Fine-tuning the analysis parameters produces more focused security insights.
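A sketch of such a configuration, with illustrative placeholder values, could combine a low temperature, a token cap, and a task-specific developer message:

```yaml
- azureai:
    field: message
    api_key: "${AZURE_OPENAI_API_KEY}"    # placeholder - load from a secret store
    resource_name: my-openai-resource     # placeholder
    deployment_name: gpt-4o               # placeholder
    target_field: security_insights
    temperature: 0.2                      # well below the 0.7 default for focused output
    max_tokens: 300                       # cap response length and cost
    developer_msg: "You are a security analyst. Identify indicators of compromise in this log entry."
```

Lowering `temperature` makes responses more deterministic, while `max_tokens` bounds both the response size and the per-event cost.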
Conditionals
Analyzing only critical errors limits API calls to the events that need them.
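A sketch using the `if` option; the condition expression below is an assumption, since the exact script syntax depends on the pipeline's condition language, and the other values are placeholders:

```yaml
- azureai:
    if: "logEntry.level == 'critical'"    # hypothetical condition expression
    field: message
    api_key: "${AZURE_OPENAI_API_KEY}"    # placeholder - load from a secret store
    resource_name: my-openai-resource     # placeholder
    deployment_name: gpt-4o               # placeholder
    target_field: ai_analysis
    ignore_missing: true                  # skip events without a message field
```

Gating the processor this way keeps latency and API costs down by skipping events that don't warrant analysis.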
Custom API
Using a specific API version produces results specific to that version.
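Pinning the version explicitly might look like this; all values other than `api_version` are illustrative placeholders:

```yaml
- azureai:
    field: message
    api_key: "${AZURE_OPENAI_API_KEY}"    # placeholder - load from a secret store
    resource_name: my-openai-resource     # placeholder
    deployment_name: gpt-4o               # placeholder
    api_version: "2024-02-15-preview"     # pin a specific Azure OpenAI API version
    target_field: ai_analysis
```

Pinning `api_version` keeps behavior stable if the default version changes, at the cost of having to update the value yourself when the pinned version is retired.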