How to Get Started
This guide shows you the minimum setup required to start using Scoutflo for AI-powered incident investigation.
You will:
Connect at least one Kubernetes cluster
Connect your cloud provider
Connect at least one observability source
Run your first investigation
All detailed connection steps live in the Integrations page to avoid duplication.
Quick checklist
You are “set up” when all of these are true:
At least one Kubernetes cluster is connected and reporting services and pods
Your cloud provider connection shows Enabled
At least one observability source (Sentry, ELK, or similar) is connected
You have run your first investigation end to end
If you are not sure how to complete any of the steps below, follow the links to the corresponding section in the Integrations page and come back here when done.
Step 1 – Connect a Kubernetes cluster
Scoutflo needs a view of your runtime environment to investigate incidents.
At a high level you will:
Install the Scoutflo Agent in your cluster (via Helm)
Verify the agent is Running and has read-only access
Confirm that services and pods start appearing in Scoutflo
What the agent sends:
Cluster topology (services, pods, deployments, HPA)
Node-level metrics (CPU, memory pressure)
Kubernetes events (restarts, failures, CrashLoopBackOff, etc.)
⚡️ Do this now: Follow “Kubernetes (Helm Agent)” in the Integrations page for the exact Helm commands, RBAC requirements, and troubleshooting steps.
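Once the agent is installed, a quick sanity check can confirm it is Running and that the kinds of cluster events it reports are visible. The sketch below uses the official Kubernetes Python client; the scoutflo namespace and app=scoutflo-agent label are assumptions, so substitute whatever your Helm install actually created.

```python
# Minimal post-install sanity check.
# The namespace and label selector are assumptions -- use the values
# from your own Helm installation.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

# 1. Is the agent pod Running?
pods = v1.list_namespaced_pod("scoutflo", label_selector="app=scoutflo-agent")
for pod in pods.items:
    print(f"{pod.metadata.name}: {pod.status.phase}")

# 2. Sample of the Kubernetes events (restarts, failures, CrashLoopBackOff)
#    that the agent will surface to Scoutflo.
events = v1.list_event_for_all_namespaces(limit=10)
for ev in events.items:
    print(f"{ev.involved_object.kind}/{ev.involved_object.name}: {ev.reason}")
```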
Step 2 – Connect your cloud provider
Scoutflo uses cloud integrations to understand:
Where your workloads actually run (regions, instances, clusters)
What infra-level changes happened around an incident (deployments, scaling, config changes)
For AWS, this is typically done by:
Creating a Cloud Connection in the Scoutflo UI
Approving a pre-configured CloudFormation stack that grants read-only IAM access
Waiting for the connection status to turn Enabled in Scoutflo
Scoutflo does not need production write access. The Cloud Connection should be scoped to read-only permissions.
⚡️ Do this now: Follow “Cloud provider integrations (AWS)” in the Integrations page to:
Open Settings → Cloud Connections
Create a new Cloud Connection
Approve the CloudFormation stack
Verify the connection
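If you prefer to double-check the AWS side from code rather than the console, a boto3 sketch along these lines confirms the CloudFormation stack completed and that the role it created carries only read-only policies. The stack and role names below are placeholders, not Scoutflo-defined values; use the names generated when you approved the stack.

```python
# Read-only verification of the Cloud Connection resources in AWS.
# Stack and role names are placeholders -- use the ones created when
# you approved the CloudFormation stack from the Scoutflo UI.
import boto3

cfn = boto3.client("cloudformation")
iam = boto3.client("iam")

stack = cfn.describe_stacks(StackName="scoutflo-cloud-connection")["Stacks"][0]
print("Stack status:", stack["StackStatus"])  # expect CREATE_COMPLETE

role_name = "ScoutfloReadOnlyRole"
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    print("Attached policy:", policy["PolicyName"])  # should be read-only policies only
```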
Step 3 – Connect your observability stack
To explain why something broke, Scoutflo needs signals such as:
Application errors and exceptions
Logs and traces
Performance metrics and slow transactions
Typical observability integrations:
Sentry – for errors, exceptions, performance issues, and breadcrumbs
ELK – for centralized logs and search
(Others can be added following the same pattern)
Once connected, these signals are correlated with:
Kubernetes events and pod health
Cloud-level changes and deployments
⚡️ Do this now:
For errors and performance data, follow “Sentry integration” in the Integrations page
For logs, follow “ELK integration” in the Integrations page
You do not need every integration on day one. Connecting one cluster, one cloud account, and one observability tool is enough to start seeing value.
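Before wiring these tools into Scoutflo, it can help to confirm your credentials work with a couple of read-only API calls. The sketch below is only an illustration: the Sentry org and project slugs, the ELK host, and the tokens are placeholders, and the endpoints shown are the standard Sentry and Elasticsearch APIs, not Scoutflo-specific ones.

```python
# Read-only credential checks for a typical Sentry + ELK setup.
# Org/project slugs, host, credentials, and token below are placeholders.
import requests

sentry = requests.get(
    "https://sentry.io/api/0/projects/your-org/your-project/issues/",
    headers={"Authorization": "Bearer YOUR_SENTRY_TOKEN"},
    timeout=10,
)
print("Sentry issues API:", sentry.status_code)  # 200 means the token can read issues

es = requests.get(
    "https://your-elk-host:9200/_cluster/health",
    auth=("readonly_user", "YOUR_PASSWORD"),
    timeout=10,
)
print("Elasticsearch health:", es.json().get("status"))  # green / yellow / red
```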
Step 4 – Run your first investigation
Once your cluster, cloud, and at least one observability source are connected, you are ready to run your first AI-powered investigation.
Recommended first incident:
A recent CrashLoopBackOff (one way to find a candidate is sketched after this list)
A noticeable latency spike on a critical API
A resource exhaustion event (CPU or memory)
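If you are not sure whether you have a recent CrashLoopBackOff to use, a short sketch like this one (again using the Kubernetes Python client) lists candidate pods; pick one and start the investigation from the matching service in Scoutflo.

```python
# Find pods currently stuck in CrashLoopBackOff as candidate first incidents.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"(restarts: {cs.restart_count})")
```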
High-level flow:
In Scoutflo, navigate to the affected service or alert
Start an Investigation from the incident or alert
Wait for the analysis to complete
Review:
The root-cause summary
Correlated logs, metrics, traces, K8s events, and infra changes
Suggested next steps / remediation actions
You are “successfully onboarded” when you can point Scoutflo at a real or test incident and get a single, coherent explanation of what happened and why, without manually jumping between dashboards.
Where to go next
Once the basics are wired:
Learn how Automated Root Cause Analysis uses these integrations
Set up Automated Playbooks & Orchestration for recurring issues
Feed postmortems and runbooks into Knowledge Base & Incident History
For any new data source or tool you want to plug in, go to the Integrations page and follow the relevant section.