How to Get Started

This guide shows you the minimum setup required to start using Scoutflo for AI‑powered incident investigation.

You will:

  1. Connect at least one Kubernetes cluster

  2. Connect your cloud provider

  3. Connect at least one observability source

  4. Run your first investigation

All detailed connection steps live in the Integrations page to avoid duplication.


Quick checklist

You are “set up” when all of these are true:

  • At least one Kubernetes cluster is connected and the Scoutflo Agent is Running

  • Your cloud provider connection (for example AWS) shows as Enabled

  • At least one observability source (for example Sentry or ELK) is connected

  • You have run your first investigation

ℹ️ If you are not sure how to complete any of the steps below, follow the links to the corresponding section in the Integrations page and come back here when done.


Step 1 – Connect a Kubernetes cluster

Scoutflo needs a view of your runtime environment to investigate incidents.

At a high level you will:

  1. Install the Scoutflo Agent in your cluster via Helm (a command sketch follows this list)

  2. Verify the agent is Running and has read‑only access

  3. Confirm that services and pods start appearing in Scoutflo
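
If it helps to see what these steps usually look like, here is a minimal sketch. The chart repository URL, chart name, namespace, service account, and apiKey value are placeholders rather than Scoutflo's real ones; the Integrations page has the exact commands.

```bash
# Placeholder repo URL and chart name: substitute the exact values from the
# "Kubernetes (Helm Agent)" section of the Integrations page.
helm repo add scoutflo https://charts.scoutflo.example.com
helm repo update
helm install scoutflo-agent scoutflo/agent \
  --namespace scoutflo --create-namespace \
  --set apiKey="<your-scoutflo-api-key>"

# Verify the agent pod is Running
kubectl get pods -n scoutflo

# Spot-check that the agent's (placeholder) service account is read-only:
# it should be able to list pods but not delete them.
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:scoutflo:scoutflo-agent   # expect "yes"
kubectl auth can-i delete pods --all-namespaces \
  --as=system:serviceaccount:scoutflo:scoutflo-agent   # expect "no"
```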

What the agent sends (you can preview the same data with kubectl, as sketched below):

  • Cluster topology (services, pods, deployments, HPA)

  • Node‑level metrics (CPU, memory pressure)

  • Kubernetes events (restarts, failures, CrashLoopBackOff, etc.)
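
To get a feel for the data involved, the same information is visible with plain kubectl from any read-only kubeconfig (standard Kubernetes tooling, nothing Scoutflo-specific):

```bash
# Cluster topology: services, pods, deployments, and HPAs across namespaces
kubectl get services,pods,deployments,hpa --all-namespaces

# Node-level resource usage (requires metrics-server in the cluster)
kubectl top nodes

# Recent Kubernetes events: restarts, failures, CrashLoopBackOff, etc.
kubectl get events --all-namespaces --sort-by=.lastTimestamp | tail -n 20
```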

➡️ Do this now: Follow “Kubernetes (Helm Agent)” in the Integrations page for the exact Helm commands, RBAC requirements, and troubleshooting steps.


Step 2 – Connect your cloud provider

Scoutflo uses cloud integrations to understand:

  • Where your workloads actually run (regions, instances, clusters)

  • What infra‑level changes happened around an incident, such as deployments, scaling, and config changes (a CloudTrail example follows below)
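
As an illustration of the second point, this is the kind of change history you would otherwise dig out of CloudTrail by hand. The command below is plain AWS CLI rather than anything Scoutflo-specific, and the time window is only an example:

```bash
# List write (mutating) API calls recorded by CloudTrail around an incident window.
# Adjust the times to the incident you are investigating.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ReadOnly,AttributeValue=false \
  --start-time 2024-05-01T12:00:00Z \
  --end-time 2024-05-01T14:00:00Z \
  --max-results 50
```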

For AWS, this is typically done by:

  1. Creating a Cloud Connection in the Scoutflo UI

  2. Approving a pre‑configured CloudFormation stack that grants read‑only IAM access

  3. Waiting for the connection status to show Enabled in Scoutflo (a CLI sanity check is sketched below)
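
Once the stack has been approved, a rough way to sanity-check it from the AWS CLI looks like this. The stack and role names are placeholders; use the names actually created in your account:

```bash
# Confirm the CloudFormation stack finished successfully
aws cloudformation describe-stacks \
  --stack-name scoutflo-readonly-access \
  --query 'Stacks[0].StackStatus'        # expect "CREATE_COMPLETE"

# Confirm the role the stack created only carries read-only policies
aws iam list-attached-role-policies --role-name ScoutfloReadOnlyRole
```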

➡️ Do this now: Follow “Cloud provider integrations (AWS)” in the Integrations page to:

  • Open Settings → Cloud Connections

  • Create a new Cloud Connection

  • Approve the CloudFormation stack

  • Verify the connection


Step 3 – Connect your observability stack

To explain why something broke, Scoutflo needs signals such as:

  • Application errors and exceptions

  • Logs and traces

  • Performance metrics and slow transactions

Typical observability integrations (quick connectivity checks are sketched after this list):

  • Sentry – for errors, exceptions, performance issues, and breadcrumbs

  • ELK – for centralized logs and search

  • (Others can be added following the same pattern)
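
Before wiring these into Scoutflo, it can be worth sanity-checking the credentials you plan to use. These are plain Sentry and Elasticsearch API calls, nothing Scoutflo-specific; the token and URL variables are placeholders, and self-hosted Sentry would use your own base URL instead of sentry.io:

```bash
# Sentry: list the projects the auth token can see
curl -s -H "Authorization: Bearer $SENTRY_AUTH_TOKEN" \
  https://sentry.io/api/0/projects/ | head

# Elasticsearch (ELK): confirm the cluster is reachable and healthy
curl -s -u "$ELASTIC_USER:$ELASTIC_PASSWORD" \
  "$ELASTICSEARCH_URL/_cluster/health?pretty"
```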

Once connected, these signals are correlated with:

  • Kubernetes events and pod health

  • Cloud‑level changes and deployments

➡️ Do this now:

  • For errors and performance data, follow “Sentry integration” in the Integrations page

  • For logs, follow “ELK integration” in the Integrations page

ℹ️ You do not need every integration on day one. Connecting one cluster, one cloud account, and one observability tool is enough to start seeing value.


Step 4 – Run your first investigation

Once your cluster, cloud, and at least one observability source are connected, you are ready to run your first AI‑powered investigation.

Recommended first incident (a kubectl sketch for spotting one follows this list):

  • A recent CrashLoopBackOff

  • A noticeable latency spike on a critical API

  • A resource exhaustion event (CPU or memory)
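
If you need a candidate, plain kubectl is a quick way to spot restart loops or other recent warning events (again, nothing Scoutflo-specific):

```bash
# Pods currently stuck in a restart loop
kubectl get pods --all-namespaces | grep CrashLoopBackOff

# Recent warning events (BackOff, OOMKilling, FailedScheduling, ...)
kubectl get events --all-namespaces \
  --field-selector type=Warning --sort-by=.lastTimestamp | tail -n 20
```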

High‑level flow:

  1. In Scoutflo, navigate to the affected service or alert

  2. Start an Investigation from the incident or alert

  3. Wait for the analysis to complete

  4. Review:

    • The root‑cause summary

    • Correlated logs, metrics, traces, K8s events, and infra changes

    • Suggested next steps / remediation actions

Where to go next

Once the basics are wired:

  • Learn how Automated Root Cause Analysis uses these integrations

  • Set up Automated Playbooks & Orchestration for recurring issues

  • Feed postmortems and runbooks into Knowledge Base & Incident History

For any new data source or tool you want to plug in, go to the Integrations page and follow the relevant section.
