Version: 1.5.1

Microsoft Sentinel Integration

VirtualMetric Director supports Microsoft Sentinel integration through two different approaches: automatic discovery and manual configuration. Choose the method that best fits your environment and requirements.

Prerequisites

Both integration approaches require:

  • an Azure subscription with permissions to create resources
  • a Log Analytics workspace with Microsoft Sentinel enabled

Autodiscovery Setup

VirtualMetric Director provides an autodiscovery feature for Microsoft Sentinel integration. This enables automatic detection and configuration of Data Collection Rules (DCRs) and their associated streams, simplifying the setup process and providing dynamic updates as your Sentinel environment changes.

Open a terminal with administrative access and navigate to <vm_root>. Then type the following command and press Enter.

C:\vmetric-director -sentinel -autodiscovery

Follow the on-screen prompts to complete the setup process. For detailed step-by-step instructions, refer to Microsoft Sentinel Overview.

Manual Setup

Manual integration requires step-by-step configuration of Microsoft Sentinel components. This approach provides full control over the integration process and is ideal for environments with specific configuration needs.

Service Principal Setup

Create a service principal for DataStream authentication:

  1. Navigate to Azure Active Directory > App registrations
  2. Select New registration
  3. Enter DataStream as the application name
  4. Select Accounts in this organizational directory only
  5. Click Register
  6. Record the Application (client) ID and Directory (tenant) ID
  7. Go to Certificates & secrets > New client secret
  8. Create a secret and record the Client secret value
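
If you prefer to script these steps, a roughly equivalent sketch with the Azure CLI is shown below; the DataStream name matches step 3, and the placeholder IDs are illustrative.

# Register the application and print its Application (client) ID
az ad app create --display-name DataStream --query appId -o tsv

# Create a service principal for the registered application
az ad sp create --id <application-client-id>

# Create a client secret; record the "password" value from the output
az ad app credential reset --id <application-client-id> --append

# Print the Directory (tenant) ID of the signed-in account
az account show --query tenantId -o tsv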

Data Collection Endpoint Setup

  1. Navigate to Azure Portal > Monitor > Data Collection Endpoints
  2. Select Create
  3. Configure the DCE:
    Name: datastream-dce
    Resource group: Select your resource group
    Region: Same region as your Log Analytics workspace
  4. Click Review + create > Create
  5. Record the Logs Ingestion endpoint URL
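
The same endpoint can also be created from the command line; a sketch using the Azure CLI follows (the commands come from the monitor-control-service extension, and the names and placeholders are illustrative).

# Create the Data Collection Endpoint in the same region as the workspace
az monitor data-collection endpoint create --name datastream-dce --resource-group <your-resource-group> --location <workspace-region> --public-network-access "Enabled"

# Read back the Logs Ingestion endpoint URL
az monitor data-collection endpoint show --name datastream-dce --resource-group <your-resource-group> --query logsIngestion.endpoint -o tsv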

Data Collection Rule Creation

  1. Navigate to Monitor > Data Collection Rules
  2. Select Create
  3. Configure basic settings:
    Rule name: datastream-dcr
    Resource group: Same as your DCE
    Region: Same as your DCE
    Platform Type: Windows or Linux, based on your data sources
  4. In the Resources tab, add your Log Analytics workspace
  5. In the Collect and deliver tab, configure the data source:
    Data source type: Custom Text Logs or Windows Event Logs
    Data source name: DataStreamLogs
    File pattern: Configure based on your log sources
  6. Configure Destination:
    Destination type: Azure Monitor Logs
    Destination: Your Log Analytics workspace
    Table: Create or select the target table
  7. Click Review + create > Create
  8. Record the DCR Immutable ID
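
The rule can also be created from the command line once the rule definition (data sources, destinations, and data flows) has been prepared as JSON; a hedged sketch with the Azure CLI, where the file name dcr.json and the placeholders are illustrative:

# Create the Data Collection Rule from a JSON definition
az monitor data-collection rule create --name datastream-dcr --resource-group <your-resource-group> --location <workspace-region> --rule-file dcr.json

# Read back the DCR Immutable ID recorded in step 8
az monitor data-collection rule show --name datastream-dcr --resource-group <your-resource-group> --query immutableId -o tsv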

Required Permissions

Director needs the following permissions for Microsoft Sentinel integration.

note

If you used the Automation tool with App Registration, these permissions are already configured.

  • For Data Collection - For each DCR whose name is prefixed with vmetric:

    1. Navigate to the DCR in Azure Portal
    2. Go to Access Control (IAM)
    3. Select Add > Add role assignment
    4. Assign the following permissions:
      Role: Monitoring Metrics Publisher
      Assignee: Your Managed Identity or Application
  • For Autodiscovery - To enable the DCR autodiscovery feature:

    1. Navigate to the Resource Group containing your DCRs
    2. Go to Access Control (IAM)
    3. Select Add > Add role assignment
    4. Assign the following permissions:
      Role: Monitoring Reader
      Assignee: Your Managed Identity or Application
      important

      The Monitoring Reader role should be assigned at the Resource Group level only. Assigning this role at the Subscription level is not recommended since it is not required for the functionality to work, and it increases the autodiscovery scan duration.
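
Both role assignments above can also be made from the command line; a sketch with the Azure CLI, assuming a service principal and illustrative placeholders:

# Monitoring Metrics Publisher on each vmetric-prefixed DCR (data collection)
az role assignment create --role "Monitoring Metrics Publisher" --assignee <client-or-object-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>

# Monitoring Reader at the Resource Group level only (autodiscovery)
az role assignment create --role "Monitoring Reader" --assignee <client-or-object-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>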

Configuration

Basic Configuration

targets:
  - name: sentinel
    type: sentinel
    properties:
      tenant_id: "<your-tenant-id>"
      client_id: "<your-client-id>"
      client_secret: "<your-client-secret>"
      endpoint: "/subscriptions/.../dataCollectionEndpoints/<your-dce-name>" # Use Resource ID

Filtered Streams

You can filter the autodiscovered streams that you intend to use:

targets:
  - name: sentinel
    type: sentinel
    properties:
      tenant_id: "<your-tenant-id>"
      client_id: "<your-client-id>"
      client_secret: "<your-client-secret>"
      endpoint: "/subscriptions/.../dataCollectionEndpoints/<your-dce-name>"
      streams:
        - name: "Custom-WindowsEvent"
        - name: "Custom-SecurityEvent"

Cache Configuration

You can optionally adjust the cache timeout (in seconds):

targets:
  - name: sentinel
    type: sentinel
    properties:
      endpoint: "/subscriptions/.../dataCollectionEndpoints/<your-dce-name>"
      cache:
        timeout: 300

Verification

After completing the setup and assigning permissions:

  1. Wait a few minutes for Azure RBAC to propagate (can take up to 30 minutes)
  2. Start Director with your configuration and monitor the startup logs for connection status
  3. Check the logs for any permission-related or configuration errors
  4. Verify data appears in your Log Analytics workspace table (a query sketch follows below)
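
One way to spot-check ingestion from the command line is a quick workspace query with the Azure CLI; the table name below is only a placeholder for whatever target table your DCR writes to, and the command may prompt you to install the log-analytics extension.

# Query the last hour of data in the target table
az monitor log-analytics query --workspace <workspace-customer-id> --analytics-query "Custom_DataStream_CL | where TimeGenerated > ago(1h) | take 10"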

Troubleshooting

For autodiscovery issues:

  • Verify that all role assignments are properly configured
  • Ensure the identity has the correct access scope
  • Check that Azure RBAC changes have propagated

For manual configuration issues:

  • Verify the DCE endpoint URL is correct
  • Confirm the DCR Immutable ID matches your configuration
  • Ensure the service principal has proper permissions on both the DCR and Log Analytics workspace

How It Works

Resource ID-Based Discovery

Instead of manually configuring the Data Collection Endpoint (DCE) URL, you can provide the DCE Resource ID. For example:

/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionEndpoints/<dce-name>

When using a Resource ID, Director will discover all DCRs associated with the specified DCE, and collect detailed stream information including table names, table schemas (column definitions), and stream configurations.
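
If you need to look up that Resource ID, one option is the Azure CLI, as sketched below (the DCE name matches the manual setup example and is otherwise illustrative).

# Print the full Resource ID of the DCE for use as the endpoint value
az monitor data-collection endpoint show --name datastream-dce --resource-group <your-resource-group> --query id -o tsv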

Caching Mechanism

The default cache duration is 5 minutes. The cache is automatically invalidated when the configuration file (sentinel.yml) is modified or the cache timeout is reached.

Dynamic Updates

The autodiscovery feature continuously adapts to changes in your Sentinel environment, enabling automatic detection of new DCRs, recognition of table schema changes, and discovery and integration of custom tables and columns.

Phantom Field Prevention

Microsoft Sentinel has moved to DCR-based log ingestion and manual schema management. This change, while powerful, can lead to phantom fields: data fields that are ingested and billed even though they are not part of the table schema, leaving them inaccessible for querying while still incurring storage costs.

For a comprehensive understanding of phantom fields, see Sentinel Phantom Fields by ManagedSentinel.

Common scenarios that cause phantom fields include log splitting with mismatched schemas, temporary fields in transformations, duplicate fields emerging from improper field mapping, and schema modifications without proper cleanup.

Director's autodiscovery feature includes a built-in phantom field prevention mechanism based on the following:

Schema Validation - Automatically discovers table schemas from DCRs, validates each field against the known schema, and discards fields not present in the table schema.

Dynamic Field Mapping - Fields that exist in the schema or are required are kept while others are discarded.

Cost Optimization - Prevents unnecessary data ingestion, thereby reducing storage costs while maintaining data accessibility.

The guiding principles here are:

  • For schema management, regularly review table schemas, update schemas when adding new fields, and use autodiscovery to validate field usage.

  • For field mapping, let autodiscovery handle field validation, define critical fields explicitly when needed, and monitor for any dropped fields in logs.

  • For cost monitoring, track ingestion volumes, monitor field usage patterns, and verify data accessibility.

The most salient reason for preventing phantom fields is to reduce their impact on cost. Some environments show up to 65% of table data as phantom fields.