# AI Integrations

Terrateam's hooks and workflow steps give you access to full plan and apply results via `$TERRATEAM_RESULTS_FILE`. Pair this with any LLM API to get AI-powered feedback directly in your pull requests: summarizing changes, flagging risks, diagnosing errors, and more.
## How It Works

- Terrateam writes operation results to `$TERRATEAM_RESULTS_FILE` (JSON) after plan or apply operations complete
- A hook or workflow step script reads that file, extracts relevant data, and sends it to an LLM provider
- The script prints the LLM response to stdout
- Setting `capture_output: true` and `visible_on: always` makes the response appear in the PR comment
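The flow above can be sketched as a minimal `ai-feedback.sh` skeleton. Note that the JSON shape and the `success` field below are hypothetical placeholders so the sketch runs outside Terrateam; the real results schema may differ.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in results file so the sketch runs outside Terrateam; in a real hook,
# Terrateam sets TERRATEAM_RESULTS_FILE for you. The JSON shape is hypothetical.
RESULTS_FILE="${TERRATEAM_RESULTS_FILE:-/tmp/terrateam-results.json}"
if [ ! -f "$RESULTS_FILE" ]; then
  printf '{"success": true}' > "$RESULTS_FILE"
fi

# 1. Read the results JSON
RESULTS=$(cat "$RESULTS_FILE")

# 2. (Here you would send $RESULTS to an LLM API and capture the reply.)

# 3. Print to stdout; capture_output: true puts this in the PR comment
SUMMARY="Plan succeeded: $(echo "$RESULTS" | jq -r '.success')"
echo "$SUMMARY"
```

The real scripts in the examples below replace step 2 with a `curl` call to the provider of your choice.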
## Use Cases

### Plan Summary

After a successful plan, ask the LLM to summarize what will change, flag potential risks, and estimate blast radius. This gives reviewers a quick, plain-English overview without reading raw Terraform output.
### Plan Error Analysis

When a plan fails, send the error output to the LLM for an explanation of the root cause and suggested fixes. This is especially useful for teams where not every reviewer is a Terraform expert.
### Apply Summary

After a successful apply, confirm what was deployed, highlight any warnings in the output, and note any resources that may need post-deployment verification.
### Apply Error Analysis

When an apply fails, have the LLM diagnose the root cause (partial state issues, provider errors, permission problems) and suggest remediation steps.
### Security & Compliance Review

Ask the LLM to review plan output for security concerns: open security groups, public S3 buckets, missing encryption, overly permissive IAM policies, and similar issues.
## Configuration

There are two approaches to running AI feedback: hooks and workflow steps. Choose based on your needs.
### Hooks Approach

Hooks run once after all dirspaces complete. The `$TERRATEAM_RESULTS_FILE` contains aggregated results across all dirspaces, making this ideal for a single summary comment.
```yaml
hooks:
  plan:
    post:
      - type: run
        cmd: ['bash', '${TERRATEAM_ROOT}/scripts/ai-feedback.sh']
        capture_output: true
        run_on: always
        visible_on: always
        ignore_errors: true
```

For apply feedback, add the same under `hooks.apply.post`:

```yaml
hooks:
  apply:
    post:
      - type: run
        cmd: ['bash', '${TERRATEAM_ROOT}/scripts/ai-feedback.sh']
        capture_output: true
        run_on: always
        visible_on: always
        ignore_errors: true
```
### Workflow Approach

Workflow steps run per dirspace. Use this when you want individual AI feedback for each directory/workspace combination.

```yaml
workflows:
  - tag_query: ""
    plan:
      - type: init
      - type: plan
      - type: run
        cmd: ['bash', '${TERRATEAM_ROOT}/scripts/ai-feedback.sh']
        capture_output: true
        run_on: always
        visible_on: always
        ignore_errors: true
```
### Which to Choose?

|  | Hooks | Workflow Steps |
|---|---|---|
| Scope | Once after all dirspaces | Per dirspace |
| Results | Aggregated across all dirspaces | Single dirspace |
| Best for | Overall summary | Per-directory feedback |
| API calls | One per operation | One per dirspace |
## Example Scripts

The following examples show how to send `$TERRATEAM_RESULTS_FILE` to an LLM provider. The results file contains everything (plan/apply output, success/failure status, cost estimation data), so you can pass it directly and let the LLM interpret it. Store your script at `scripts/ai-feedback.sh` in your repository.
### Anthropic (Claude)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Requires ANTHROPIC_API_KEY as a GitHub Secret
RESULTS=$(cat "$TERRATEAM_RESULTS_FILE")

RESPONSE=$(curl -s https://api.anthropic.com/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d "$(jq -n --arg results "$RESULTS" '{
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    system: "You are a Terraform expert. Analyze the following Terrateam results JSON. Summarize the changes, flag any risks or errors, and highlight anything that deserves reviewer attention. Be concise.",
    messages: [{role: "user", content: $results}]
  }')")

echo "$RESPONSE" | jq -r '.content[0].text'
```
### OpenAI (ChatGPT)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Requires OPENAI_API_KEY as a GitHub Secret
RESULTS=$(cat "$TERRATEAM_RESULTS_FILE")

RESPONSE=$(curl -s https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d "$(jq -n --arg results "$RESULTS" '{
    model: "gpt-4o",
    max_tokens: 1024,
    messages: [
      {role: "system", content: "You are a Terraform expert. Analyze the following Terrateam results JSON. Summarize the changes, flag any risks or errors, and highlight anything that deserves reviewer attention. Be concise."},
      {role: "user", content: $results}
    ]
  }')")

echo "$RESPONSE" | jq -r '.choices[0].message.content'
```
### Google (Gemini)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Requires GOOGLE_API_KEY as a GitHub Secret
RESULTS=$(cat "$TERRATEAM_RESULTS_FILE")

RESPONSE=$(curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg results "$RESULTS" '{
    system_instruction: {parts: [{text: "You are a Terraform expert. Analyze the following Terrateam results JSON. Summarize the changes, flag any risks or errors, and highlight anything that deserves reviewer attention. Be concise."}]},
    contents: [{parts: [{text: $results}]}]
  }')")

echo "$RESPONSE" | jq -r '.candidates[0].content.parts[0].text'
```
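Whichever provider you use, the final `jq` extraction prints `null` when the API returns an error body instead of a completion. A small guard can surface the error message instead; this sketch uses the Gemini extraction path, and the error response shape is illustrative.

```shell
# Guard against provider errors: if the expected completion field is missing,
# print the error message instead of "null". Response shapes are illustrative.
RESPONSE='{"error": {"message": "API key not valid"}}'
TEXT=$(echo "$RESPONSE" | jq -r '.candidates[0].content.parts[0].text // empty')
if [ -z "$TEXT" ]; then
  OUT="AI feedback unavailable: $(echo "$RESPONSE" | jq -r '.error.message // "unknown error"')"
else
  OUT="$TEXT"
fi
echo "$OUT"
```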
## Storing API Keys

Store your LLM provider API key as a GitHub Secret in your repository:
- Go to your repository **Settings** → **Secrets and variables** → **Actions**
- Click **New repository secret**
- Add your key (e.g., `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_API_KEY`)
- Set the `AI_PROVIDER` environment variable to match your provider
You can reference secrets directly in your Terrateam configuration — they are available as environment variables during workflow execution. For environment-specific keys, use GitHub Environments to scope secrets to specific environments like production or staging.
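If you keep a single script for several providers, the `AI_PROVIDER` variable can select which request format to use. A minimal sketch; the values `anthropic`, `openai`, and `google` are a naming convention for your own script, not a Terrateam requirement.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dispatch on AI_PROVIDER; defaults to anthropic for this sketch.
AI_PROVIDER="${AI_PROVIDER:-anthropic}"
case "$AI_PROVIDER" in
  anthropic) CHOSEN="Anthropic request format" ;;
  openai)    CHOSEN="OpenAI request format" ;;
  google)    CHOSEN="Gemini request format" ;;
  *)         CHOSEN="unsupported provider: $AI_PROVIDER" ;;
esac
echo "$CHOSEN"
```

Each branch would then build the matching payload from the provider examples above.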
## Tips & Best Practices

- Use `run_on: always` to get AI feedback on both success and failure. This is especially valuable for error analysis.
- Use `visible_on: always` so the AI response always appears in the PR comment, regardless of operation outcome.
- Use `ignore_errors: true` so LLM API failures don't block your Terraform workflow. AI feedback is helpful but should never prevent a plan or apply from completing.
- Truncate large plans before sending to the LLM. Cap the payload (e.g., with a `MAX_CHARS` limit of around 30,000 characters) and adjust based on your provider's token limits.
- Customize prompts per environment: use stricter security-focused prompts for production directories and lighter summaries for development.
- Use `jq` for safe JSON construction. The example scripts use `jq -n` to build request payloads, which properly escapes special characters in Terraform output.
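The truncation tip can be implemented with `head -c`. In this sketch, `MAX_CHARS` is a script-local convention (not a Terrateam setting), and the oversized payload is a stand-in for real plan output:

```shell
#!/usr/bin/env bash
set -euo pipefail

MAX_CHARS="${MAX_CHARS:-30000}"

# Stand-in for a large results payload (40,000 characters)
RESULTS=$(printf 'x%.0s' $(seq 1 40000))

# Keep only the first MAX_CHARS characters before sending to the LLM
TRUNCATED=$(printf '%s' "$RESULTS" | head -c "$MAX_CHARS")
echo "payload length: ${#TRUNCATED}"
```

Truncating the tail of a plan keeps the resource headers and diffs that appear first; raise or lower `MAX_CHARS` to fit your provider's context window.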