AI-Powered Analysis

gitlab-summary integrates with GitHub Copilot to provide intelligent analysis of failed CI/CD jobs, helping you understand failures faster and fix them more effectively.


Prerequisites

GitHub Copilot Subscription

You need an active GitHub Copilot subscription. Any plan works:

  • Copilot Individual
  • Copilot Business
  • Copilot Enterprise

GitHub CLI

Install the GitHub CLI, authenticate, and add the Copilot extension:

  # Install (macOS)
  brew install gh

  # Install (Windows)
  winget install --id GitHub.cli

  # Install (Linux - Debian/Ubuntu)
  sudo apt install gh

  # Authenticate
  gh auth login

  # Install the Copilot extension for the GitHub CLI
  gh extension install github/gh-copilot

Verify Setup

  # Check GitHub CLI
  gh --version

  # Check Copilot access
  gh copilot --version

How AI Analysis Works

1. Failure Detection

When a job fails, gitlab-summary automatically marks it for potential analysis.

2. Trigger Analysis

From the dashboard:

  1. Click on a failed pipeline
  2. View the job list
  3. Click “Analyze with AI” on any failed job

3. Analysis Process

What Happens:

  1. Job log is retrieved from GitLab
  2. Log is sent to GitHub Copilot with context
  3. AI analyzes the failure
  4. Results are displayed and cached
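
For reference, step 1 corresponds to GitLab's job trace endpoint. gitlab-summary fetches the log for you, but you can retrieve the same data manually (the host, project ID, and job ID below are placeholders):

  # Fetch a job's raw log from GitLab
  curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "https://gitlab.example.com/api/v4/projects/123/jobs/789/trace"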

Analysis Includes:

  • Root cause identification
  • Specific error messages explained
  • Recommended fixes
  • Context from logs and job metadata

4. Caching

Every completed analysis is stored locally, so it doesn't have to be regenerated.

Benefits:

  • Instant access to previous analyses
  • No repeated API calls
  • History preserved for team reference

Cache Location: ~/.gitlab-summary/ai-analysis-cache.json
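
Because the cache is a plain JSON file, you can inspect it directly from the command line, for example with jq (the file's exact structure may differ between versions):

  # Pretty-print the analysis cache (requires jq)
  jq . ~/.gitlab-summary/ai-analysis-cache.json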


Using AI Analysis

From Pipeline Details

  1. Open pipeline: Click any pipeline row in Projects or Overview
  2. Find failed job: Look for red “failed” status
  3. Analyze: Click “Analyze with AI” button
  4. Wait: Analysis takes 5-15 seconds
  5. View results: AI Analysis tab opens automatically

πŸ“Έ Screenshot placeholder: dashboard-pipeline-details.png Description: Pipeline details dialog showing job list with status icons, durations, and “Analyze with AI” button highlighted on a failed job

AI Analysis Display

Analysis Structure:

  πŸ” Root Cause Analysis

The build failed due to a missing npm package dependency.

πŸ“‹ Details:
- Error: "Cannot find module 'lodash'"
- Location: src/utils/helpers.js:3
- Stage: test
- Duration: 45 seconds

πŸ’‘ Recommended Fix:
1. Install the missing package:
   npm install lodash --save

2. Or remove the import if unused:
   Remove: import _ from 'lodash'

3. Update package.json with correct dependency version

πŸ”§ Additional Context:
The error occurred during the test stage when attempting to 
import lodash. This suggests the package was used in code but 
not declared in package.json dependencies.
  

πŸ“Έ Screenshot placeholder: dashboard-ai-analysis.png Description: AI Analysis tab displaying intelligent failure summary with root cause, details, recommended fixes, and additional context sections


Follow-up Questions

Ask Additional Questions

After initial analysis, you can ask follow-up questions for deeper understanding.

πŸ“Έ Screenshot placeholder: dashboard-ai-followup.png Description: Follow-up question interface showing conversation history with user questions and AI responses in chat-style format

Example Questions:

  • “How can I reproduce this locally?”
  • “What version of this package should I use?”
  • “Are there any related issues I should check?”
  • “What’s the best practice here?”

How to Ask:

  1. Scroll to bottom of AI Analysis tab
  2. Type question in text area
  3. Click “Ask Follow-up”
  4. View conversational response

Conversation History

Preservation:

  • All follow-up Q&A is saved
  • View complete conversation in AI History page
  • Context maintained across questions

Example Conversation:

  You: How can I fix this locally?

  AI: To fix this locally:
  1. Run: npm install lodash --save
  2. Verify: npm list lodash
  3. Test: npm test

  You: What if I don't want to use lodash?

  AI: If you want to avoid lodash:
  1. Replace lodash methods with native JavaScript
  2. For _.debounce, use setTimeout manually
  3. For _.cloneDeep, use structuredClone()

AI Analysis Indicators

Project Level

Badge: “X AI” where X = number of analyzed jobs

Shows projects with AI-analyzed failures at a glance.

Pipeline Level

Icon: Small AI indicator on pipeline rows

Visible in project expansions and pipeline lists.

Job Level

Badge: “AI Analyzed” on job cards

Shows which specific jobs have been analyzed.


Managing AI Analyses

View All Analyses

AI History Page:

  • Access from left navigation
  • Shows all cached analyses
  • Paginated (20 per page)
  • Expandable cards with full content

Re-analyze

When to Re-analyze:

  • Updated AI prompt
  • Want fresh perspective
  • Additional context available

How:

  1. Open cached analysis
  2. Click “Re-analyze” button
  3. Confirm action
  4. New analysis replaces old one

Delete Analysis

When to Delete:

  • No longer relevant
  • Wrong job analyzed
  • Cleaning up cache

How:

  1. Open cached analysis
  2. Click “Delete” button
  3. Confirm deletion
  4. Analysis removed from cache
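
If you'd rather clear every analysis at once, you can delete the cache file itself; this assumes gitlab-summary recreates the file the next time an analysis runs:

  # Remove the entire AI analysis cache (assumption: the app recreates it)
  rm ~/.gitlab-summary/ai-analysis-cache.json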

Customizing AI Prompt

Default Prompt

The default system prompt instructs AI to:

  • Identify root cause
  • Provide actionable fixes
  • Be concise and clear
  • Focus on relevant log excerpts

Custom Prompt

Access Settings:

  1. Click gear icon in top bar
  2. View/edit “System Prompt” text area
  3. Modify as needed
  4. Click “Save”

Custom Prompt Ideas:

More Detailed:

  You are a CI/CD troubleshooting expert. Analyze the failed job log
  and provide:
  1. Detailed root cause with line numbers
  2. Step-by-step fix instructions
  3. Links to relevant documentation
  4. Related best practices
  5. Prevention strategies for the future

Team-Specific:

  Analyze this CI/CD failure for our team. Consider:
  - Our tech stack: Node.js, PostgreSQL, Docker
  - Our coding standards from CONTRIBUTING.md
  - Common issues in our codebase
  - Fixes that align with our architecture

Concise:

  Briefly explain why this job failed and how to fix it. One paragraph maximum.

Privacy & Data

What Gets Sent to GitHub Copilot

Included:

  • Job log output
  • Job name and stage
  • Project ID (not name)
  • Pipeline ID
  • Your custom system prompt

NOT Included:

  • Project source code
  • Commit diffs
  • Other job logs
  • Team member information
  • GitLab credentials

Data Storage

Local Cache Only:

  • Analyses stored on your machine
  • No cloud storage by gitlab-summary
  • You control the cache file

GitHub/Microsoft:

  • Log data sent for analysis is processed by GitHub Copilot
  • Retention and usage of that data are governed by GitHub's Copilot data policies; review them before analyzing logs that may contain sensitive output

Best Practices

When to Use AI Analysis

βœ… Good Use Cases:

  • Complex build failures
  • Unfamiliar error messages
  • Time-sensitive production issues
  • Learning from failures
  • Onboarding new team members

❌ Skip AI For:

  • Obvious typos (you can spot them faster)
  • Known issues with documented fixes
  • Infrastructure outages (not code-related)
  • Timeout errors without logs

Getting Better Results

Provide Context in Prompts:

  Analyze this test failure. Context: We recently upgraded from Node 18 to Node 20.

Ask Specific Follow-ups:

  ❌ "Can you explain more?"
βœ… "Which specific line in the test is causing the assertion failure?"
  

Iterate:

  • First analysis gives general overview
  • Follow-ups dive deeper
  • Re-analyze after gathering more info

Troubleshooting AI Features

“AI analysis failed”

Causes:

  • GitHub CLI not authenticated
  • No Copilot subscription
  • Network issues
  • Rate limiting

Solutions:

  # Check authentication status
  gh auth status

  # Re-authenticate if needed
  gh auth login

  # Check Copilot status
  gh copilot --version

  # Test manually
  gh copilot explain "npm install failed"

Empty or Generic Responses

Causes:

  • Insufficient log data
  • Generic error message
  • Very short logs

Solutions:

  • Check full job log in GitLab
  • Provide more context in follow-up
  • Customize system prompt for more detail

Analysis Not Caching

Causes:

  • Permissions on cache file
  • Disk space

Solutions:

  # Check cache file
  ls -la ~/.gitlab-summary/ai-analysis-cache.json

  # Fix permissions
  chmod 644 ~/.gitlab-summary/ai-analysis-cache.json

  # Check disk space
  df -h ~

API Integration

For programmatic access to AI analysis:

Analyze Job:

  curl -X POST http://localhost:5100/api/ai/analyse \
    -H "Content-Type: application/json" \
    -d '{
      "projectId": 123,
      "pipelineId": 456,
      "jobId": 789,
      "jobName": "test",
      "stage": "test"
    }'

Get Cached Analysis:

  curl "http://localhost:5100/api/ai/cache?projectId=123&jobId=789"
  

See API Reference for complete documentation.
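
As a sketch of how the two endpoints compose, the script below triggers an analysis and then reads the cached result back. It assumes gitlab-summary is running on the default port (5100) and reuses the request shape shown above:

  #!/usr/bin/env bash
  # Trigger an AI analysis for a failed job, then fetch the cached result.
  set -euo pipefail

  BASE="http://localhost:5100"

  # Request analysis of job 789 in pipeline 456 of project 123
  curl -s -X POST "$BASE/api/ai/analyse" \
    -H "Content-Type: application/json" \
    -d '{"projectId": 123, "pipelineId": 456, "jobId": 789, "jobName": "test", "stage": "test"}'

  # Read the cached analysis back
  curl -s "$BASE/api/ai/cache?projectId=123&jobId=789"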


Examples

Example 1: Build Failure

Log Excerpt:

  npm ERR! code ENOENT
  npm ERR! syscall open
  npm ERR! path /app/package.json
  npm ERR! errno -2

AI Analysis:

  Root Cause: Missing package.json file in working directory

  The npm install command failed because it couldn't find package.json
  in the expected location (/app/). This typically happens when:

  1. The working directory is incorrect
  2. The repository wasn't cloned properly
  3. A previous step failed to set up the workspace

  Recommended Fix:
  1. Check your CI configuration for the correct working directory
  2. Ensure the git clone step completed successfully
  3. Verify package.json exists in the repository root

Example 2: Test Failure

Log Excerpt:

  ● Authentication › should return 401 for invalid token

  expect(received).toBe(expected)

  Expected: 401
  Received: 500

AI Analysis:

  Root Cause: Authentication test expecting 401 but receiving 500

  The test expects an unauthorized (401) response for an invalid token,
  but the API is returning a server error (500). This indicates:

  1. The authentication middleware isn't catching invalid tokens properly
  2. An unhandled exception is occurring during auth validation
  3. Error handling needs improvement

  Recommended Fix:
  1. Check the auth middleware for try/catch blocks
  2. Add proper error handling for token validation
  3. Review server logs for the actual exception
  4. Ensure token validation returns 401 rather than throwing an error

See Also