GitHub Actions: Niche and Unexpected Uses

Beyond CI/CD: weird, wonderful, and surprisingly practical things you can build with GitHub Actions—from chess games in issues to SSL certificate monitoring.

When I started this series on GitHub Actions, I promised to go beyond the basics. We’ve covered the fundamentals, and now it’s time to get weird.

This is the stuff that made me fall in love with Actions as a platform—not as a CI/CD tool, but as a general-purpose automation engine that happens to run on GitHub’s infrastructure. Some of these use cases are genuinely practical. Others are delightfully absurd. All of them made me think differently about what’s possible.

Let’s dig in.

Developer Productivity

Automated PR Descriptions

Writing good PR descriptions is tedious. You know you should explain what changed and why, but after spending three hours debugging a race condition, the last thing you want to do is document it.

Here’s a workflow that generates PR descriptions from your commit messages and the files you touched:

name: Generate PR Description

on:
  pull_request:
    types: [opened]

jobs:
  describe:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get commit messages
        id: commits
        run: |
          COMMITS=$(git log --oneline origin/${{ github.base_ref }}..HEAD)
          echo "messages<<EOF" >> $GITHUB_OUTPUT
          echo "$COMMITS" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Get changed files
        id: files
        run: |
          FILES=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
          echo "changed<<EOF" >> $GITHUB_OUTPUT
          echo "$FILES" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Update PR description
        uses: actions/github-script@v7
        env:
          # Pass outputs through env so backticks or quotes in commit messages
          # can't break (or inject into) the JS template literal below
          COMMITS: ${{ steps.commits.outputs.messages }}
          FILES: ${{ steps.files.outputs.changed }}
        with:
          script: |
            const commits = process.env.COMMITS;
            const files = process.env.FILES;

            const body = `## Commits\n\`\`\`\n${commits}\n\`\`\`\n\n## Changed Files\n\`\`\`\n${files}\n\`\`\`\n\n---\n*Auto-generated. Please add context above.*`;

            await github.rest.pulls.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
              body: body
            });

The trick is using fetch-depth: 0 to get the full git history, then comparing against the base branch. The generated description is just a starting point—you still need to add the “why”—but it saves you from typing out the “what.”
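One subtlety worth internalizing: the workflow uses two-dot `..` with git log (commits on your branch that the base lacks) but three-dot `...` with git diff (changes since the merge base, ignoring the base branch's own drift). A throwaway-repo sketch, safe to run anywhere, makes the difference concrete:

```shell
# Build a tiny repo where main and feature have diverged.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && git add a.txt && git commit -qm "base commit"
git branch -m main
git checkout -q -b feature
echo b > b.txt && git add b.txt && git commit -qm "feature commit"
git checkout -q main
echo c > c.txt && git add c.txt && git commit -qm "main-only commit"
git checkout -q feature

# Two-dot log: commits reachable from feature but not from main
git log --oneline main..HEAD        # only the feature commit

# Three-dot diff: changes since the merge base, ignoring main's drift
git diff --name-only main...HEAD    # only b.txt
```

With a plain two-dot `git diff main..HEAD`, the main-only commit would show up as a spurious "deletion" in your PR's file list.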

Gotcha: This only runs on newly opened PRs. If you want to regenerate on force-push, add synchronize to the triggers. But then you risk overwriting manual edits to the description.

Smart Reviewer Assignment

Ever notice how certain people always end up reviewing the same files? That’s not random—it’s expertise. You can codify this with a workflow that assigns reviewers based on file ownership.

name: Assign Reviewers

on:
  pull_request:
    types: [opened, ready_for_review]

jobs:
  assign:
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    permissions:
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Determine reviewers
        id: reviewers
        run: |
          # Define ownership mapping
          declare -A OWNERS
          OWNERS["src/api/"]="alice"
          OWNERS["src/frontend/"]="bob"
          OWNERS["src/database/"]="charlie"
          OWNERS["docs/"]="diana"

          CHANGED=$(gh pr view ${{ github.event.pull_request.number }} --json files -q '.files[].path')
          REVIEWERS=""

          for file in $CHANGED; do
            for pattern in "${!OWNERS[@]}"; do
              if [[ "$file" == $pattern* ]]; then
                OWNER="${OWNERS[$pattern]}"
                if [[ ! "$REVIEWERS" =~ "$OWNER" ]]; then
                  REVIEWERS="$REVIEWERS $OWNER"
                fi
              fi
            done
          done

          # gh expects a comma-separated reviewer list, not space-separated
          echo "list=$(echo $REVIEWERS | xargs | tr ' ' ',')" >> $GITHUB_OUTPUT
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Request reviews
        if: steps.reviewers.outputs.list != ''
        run: |
          gh pr edit ${{ github.event.pull_request.number }} \
            --add-reviewer ${{ steps.reviewers.outputs.list }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

For larger teams, consider using a CODEOWNERS file and the actions/labeler approach instead. But for smaller teams where ownership is more informal, this script-based approach gives you fine-grained control.
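For comparison, the same ownership map expressed as a CODEOWNERS file, which makes GitHub request the reviews natively on matching paths:

```
# .github/CODEOWNERS — same mapping as the script above
/src/api/       @alice
/src/frontend/  @bob
/src/database/  @charlie
/docs/          @diana
```

The trade-off: CODEOWNERS is declarative and enforced by GitHub, but the script gives you escape hatches (round-robin assignment, skipping busy reviewers) that the file format can't express.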

Limitation: The reviewer must have write access to the repo. Also, you can’t assign the PR author as a reviewer (GitHub will just ignore it).

Personal Productivity Dashboard

This one’s a bit self-indulgent, but I love it. You can auto-update your GitHub profile README with your latest activity, coding stats, or whatever else you want to show off.

name: Update Profile README

on:
  schedule:
    - cron: '0 0 * * *'  # Daily at midnight
  workflow_dispatch:

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Fetch GitHub stats
        id: stats
        run: |
          # Get recent activity
          COMMITS=$(gh api graphql -f query='
            query {
              viewer {
                contributionsCollection {
                  totalCommitContributions
                }
              }
            }' -q '.data.viewer.contributionsCollection.totalCommitContributions')

          echo "commits=$COMMITS" >> $GITHUB_OUTPUT
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Update README
        run: |
          # Unquoted heredoc so the $(date ...) substitution below actually expands
          cat > README.md << EOF
          # Hi, I'm [Your Name]

          ## This Year's Activity
          - Commits: ${{ steps.stats.outputs.commits }}
          - Last updated: $(date +%Y-%m-%d)

          ## Currently Working On
          - [Project 1](link)
          - [Project 2](link)
          EOF

      - name: Commit changes
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add README.md
          git diff --quiet --cached || git commit -m "Update README stats"
          git push

Some developers go wild with this—embedding Spotify now-playing widgets, blog post lists, Twitter feeds. I find it charming, even if it’s mostly for fun.

Content and Documentation

Spelling and Link Checking

Broken links in documentation are embarrassing. Typos in blog posts are worse. Here’s a workflow that catches both before they ship:

name: Content Validation

on:
  pull_request:
    paths:
      - 'docs/**'
      - 'content/**'
      - '*.md'

jobs:
  spellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check spelling
        uses: streetsidesoftware/cspell-action@v6
        with:
          files: '**/*.md'
          config: '.cspell.json'

  linkcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check links
        uses: lycheeverse/lychee-action@v1
        with:
          args: --verbose --no-progress '**/*.md'
          fail: true

You’ll need a .cspell.json config file to handle technical jargon and custom words. The first run is humbling—turns out I’ve been misspelling “dependency” in creative ways for years.
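A minimal .cspell.json to get you started might look like this (the word list is illustrative; yours will grow with whatever your first humbling run turns up):

```json
{
  "version": "0.2",
  "language": "en",
  "words": ["lychee", "Playwright", "wttr"],
  "ignorePaths": ["node_modules/**", "package-lock.json"]
}
```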

Pro tip: Run link checking on a schedule too, not just on PRs. External links break over time, and you want to catch them before someone complains.
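One way to get both without duplicating the workflow is to add a schedule trigger alongside the existing pull_request one, e.g.:

```yaml
on:
  schedule:
    - cron: '0 7 * * 1'  # weekly sweep for link rot
  pull_request:
    paths:
      - 'docs/**'
      - 'content/**'
      - '*.md'
```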

Screenshot Automation

Keeping documentation screenshots up-to-date is a nightmare. Every UI change means hunting down and replacing a dozen images. Here’s a workflow that uses Playwright to generate fresh screenshots automatically:

name: Update Screenshots

on:
  workflow_dispatch:
  schedule:
    - cron: '0 6 * * 1'  # Weekly on Monday

jobs:
  screenshots:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Playwright
        run: |
          npm init -y
          npm install playwright
          npx playwright install chromium

      - name: Generate screenshots
        run: |
          node << 'EOF'
          const { chromium } = require('playwright');

          (async () => {
            const browser = await chromium.launch();
            const page = await browser.newPage();

            const screenshots = [
              { url: 'https://your-app.com/dashboard', path: 'docs/images/dashboard.png' },
              { url: 'https://your-app.com/settings', path: 'docs/images/settings.png' },
            ];

            for (const { url, path } of screenshots) {
              await page.goto(url);
              await page.waitForLoadState('networkidle');
              await page.screenshot({ path, fullPage: true });
              console.log(`Captured: ${path}`);
            }

            await browser.close();
          })();
          EOF

      - name: Commit updated screenshots
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add docs/images/
          git diff --quiet --cached || git commit -m "docs: update screenshots"
          git push

For authenticated pages, you’ll need to handle login flows. Playwright makes this straightforward—just add the login steps before capturing.

Gotcha: Screenshots can be flaky if your app has animations or lazy-loaded content. Use waitForLoadState and consider adding explicit waits for specific elements.

Translation Workflow Automation

Managing translations is tedious. New strings get added, existing strings get modified, and coordinating with translators is a logistical nightmare. Here’s a workflow that detects new translation keys and creates tracking issues:

name: Translation Tracker

on:
  push:
    branches: [main]
    paths:
      - 'src/**/*.ts'
      - 'src/**/*.tsx'

jobs:
  detect-strings:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Find new translation keys
        id: keys
        run: |
          # Extract translation keys from lines *added* in the last commit
          NEW_KEYS=$(git diff HEAD~1 -- 'src/**' | \
            grep '^+' | \
            grep -oE "t\(['\"][^'\"]+['\"]" | \
            sed "s/t(['\"]//" | \
            sed "s/['\"]$//" | \
            sort -u || true)

          if [ -n "$NEW_KEYS" ]; then
            echo "found=true" >> $GITHUB_OUTPUT
            echo "keys<<EOF" >> $GITHUB_OUTPUT
            echo "$NEW_KEYS" >> $GITHUB_OUTPUT
            echo "EOF" >> $GITHUB_OUTPUT
          else
            echo "found=false" >> $GITHUB_OUTPUT
          fi

      - name: Create translation issue
        if: steps.keys.outputs.found == 'true'
        uses: actions/github-script@v7
        env:
          # Via env so quotes or backticks in key names can't break the script
          NEW_KEYS: ${{ steps.keys.outputs.keys }}
        with:
          script: |
            const keys = process.env.NEW_KEYS;
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'New strings need translation',
              labels: ['translation', 'needs-triage'],
              body: `The following translation keys were added:\n\n\`\`\`\n${keys}\n\`\`\`\n\nPlease add translations for all supported locales.`
            });

You can extend this to automatically create PRs that add empty translation entries, or to validate that all locales have the same keys.
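The grep/sed pipeline is easy to sanity-check locally. Here's a close variant run over a synthetic diff hunk, keeping only added lines:

```shell
# Feed a fake diff hunk through the extraction pipeline:
# keep added lines, pull out t('...') / t("...") calls, strip the wrapper.
KEYS=$(printf '%s\n' \
  "+  const label = t('nav.home');" \
  "-  const removed = t('nav.gone');" \
  '+  title: t("page.settings.title"),' \
  | grep '^+' \
  | grep -oE "t\(['\"][^'\"]+['\"]" \
  | sed "s/t(['\"]//" \
  | sed "s/['\"]$//" \
  | sort -u)
echo "$KEYS"
# → nav.home
#   page.settings.title
```

Note that the removed `nav.gone` line is filtered out by `grep '^+'`, so a key that merely moved between files won't be reported as new.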

Data and Research

Dataset Versioning

If you work with data—machine learning datasets, configuration files, anything that changes over time—you might want to track those changes with more visibility than just git diffs.

name: Dataset Diff Report

on:
  push:
    paths:
      - 'data/**/*.csv'
      - 'data/**/*.json'

jobs:
  diff-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Generate diff report
        run: |
          mkdir -p reports

          for file in $(git diff --name-only HEAD~1 -- 'data/**'); do
            if [ -f "$file" ]; then
              BASENAME=$(basename "$file")

              # For CSV files, show row count changes
              if [[ "$file" == *.csv ]]; then
                OLD_COUNT=$(git show HEAD~1:"$file" 2>/dev/null | wc -l || echo 0)
                NEW_COUNT=$(wc -l < "$file")
                echo "## $file" >> reports/diff-summary.md
                echo "- Rows: $OLD_COUNT → $NEW_COUNT ($(($NEW_COUNT - $OLD_COUNT)) change)" >> reports/diff-summary.md
                echo "" >> reports/diff-summary.md
              fi

              # For JSON files, use jq to compare structure
              if [[ "$file" == *.json ]]; then
                echo "## $file" >> reports/diff-summary.md
                echo "Structure changed - review manually" >> reports/diff-summary.md
                echo "" >> reports/diff-summary.md
              fi
            fi
          done

      - name: Upload report
        uses: actions/upload-artifact@v4
        with:
          name: dataset-diff
          path: reports/

      - name: Comment on commit
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            // Nothing to report if only deletions or non-matching files changed
            if (!fs.existsSync('reports/diff-summary.md')) return;
            const report = fs.readFileSync('reports/diff-summary.md', 'utf8');

            await github.rest.repos.createCommitComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              commit_sha: context.sha,
              body: `## Dataset Changes\n\n${report}`
            });

For serious data versioning, look at DVC (Data Version Control). But for lighter-weight tracking, this approach works surprisingly well.
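The workflow punts on JSON files with "review manually," but a small jq sketch can at least flag top-level key changes automatically (file names here are illustrative):

```shell
# Compare top-level key sets of two JSON snapshots; -S sorts keys so the
# comparison is order-independent.
echo '{"rows": 10, "schema": "v1"}' > old.json
echo '{"rows": 12, "schema": "v1", "source": "api"}' > new.json

OLD_KEYS=$(jq -S 'keys' old.json)
NEW_KEYS=$(jq -S 'keys' new.json)

if [ "$OLD_KEYS" != "$NEW_KEYS" ]; then
  echo "Top-level keys changed"
fi
```

Value changes (rows going from 10 to 12) stay quiet; only schema-shaped changes trigger the message, which is usually the distinction you care about for review.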

API Change Detection

Third-party APIs change. Sometimes they give you notice. Sometimes they don’t. Here’s a workflow that monitors external APIs for breaking changes:

name: API Monitor

on:
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours
  workflow_dispatch:

jobs:
  check-apis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check external APIs
        id: check
        run: |
          mkdir -p .api-snapshots
          CHANGES=""

          # Check a sample endpoint
          curl -s "https://api.example.com/v1/status" > .api-snapshots/current-status.json

          if [ -f ".api-snapshots/previous-status.json" ]; then
            if ! diff -q .api-snapshots/previous-status.json .api-snapshots/current-status.json > /dev/null; then
              CHANGES="API response structure changed"
            fi
          fi

          mv .api-snapshots/current-status.json .api-snapshots/previous-status.json

          if [ -n "$CHANGES" ]; then
            echo "changed=true" >> $GITHUB_OUTPUT
            echo "details=$CHANGES" >> $GITHUB_OUTPUT
          else
            echo "changed=false" >> $GITHUB_OUTPUT
          fi

      - name: Commit snapshot
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add .api-snapshots/
          git diff --quiet --cached || git commit -m "chore: update API snapshots"
          git push

      - name: Alert on changes
        if: steps.check.outputs.changed == 'true'
        uses: actions/github-script@v7
        env:
          DETAILS: ${{ steps.check.outputs.details }}
        with:
          script: |
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: '⚠️ External API Change Detected',
              labels: ['api', 'needs-investigation'],
              body: `${process.env.DETAILS}\n\nPlease investigate and update integrations if necessary.`
            });

The key insight is storing previous responses in the repo itself. Git becomes your change tracking mechanism.

Limitation: This only catches structural changes in the response. Semantic changes (same structure, different meaning) require more sophisticated validation.
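A raw diff will also fire on harmless churn like timestamps. One way to cut the false alarms is to normalize volatile fields before comparing; a sketch, assuming a hypothetical timestamp field named `ts`:

```shell
# Strip a volatile field before diffing so only real structural changes alert.
echo '{"status": "ok", "ts": "2024-01-01T00:00:00Z"}' > previous.json
echo '{"status": "ok", "ts": "2024-06-01T12:34:56Z"}' > current.json

# del(.ts) drops the noisy field; -S sorts keys for a stable comparison
jq -S 'del(.ts)' previous.json > previous.norm.json
jq -S 'del(.ts)' current.json > current.norm.json

if diff -q previous.norm.json current.norm.json > /dev/null; then
  echo "no structural change"
else
  echo "structure changed"
fi
```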

Price Monitoring

This one’s fun for side projects. Monitor competitor pricing, product availability, or any public data that changes over time:

name: Price Monitor

on:
  schedule:
    - cron: '0 8 * * *'  # Daily at 8 AM

jobs:
  check-prices:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Fetch current prices
        run: |
          # Example: fetch from a public API or scrape a page
          # (Be respectful of rate limits and ToS)

          # Unquoted heredoc so $(date -Iseconds) expands
          cat > prices.json << EOF
          {
            "timestamp": "$(date -Iseconds)",
            "products": []
          }
          EOF

          # In reality, you'd populate this from actual data sources

      - name: Compare with previous
        id: compare
        run: |
          if [ -f "previous-prices.json" ]; then
            # Compare and detect significant changes
            # This is simplified - real implementation would be more sophisticated
            if ! diff -q prices.json previous-prices.json > /dev/null; then
              echo "changed=true" >> $GITHUB_OUTPUT
            else
              echo "changed=false" >> $GITHUB_OUTPUT
            fi
          else
            echo "changed=false" >> $GITHUB_OUTPUT
          fi

          mv prices.json previous-prices.json

      - name: Commit and notify
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add previous-prices.json
          git diff --quiet --cached || git commit -m "chore: update price snapshot"
          git push

I’ve used variations of this to track things like domain availability, conference ticket prices, and even restaurant reservation openings. It’s basically cron-as-a-service with free hosting.

DevOps and Infrastructure

SSL Certificate Monitoring

Nothing ruins your weekend like an expired SSL certificate. Here’s a workflow that warns you before it happens:

name: SSL Certificate Check

on:
  schedule:
    - cron: '0 9 * * *'  # Daily at 9 AM
  workflow_dispatch:

jobs:
  check-certs:
    runs-on: ubuntu-latest
    steps:
      - name: Check certificates
        id: certs
        run: |
          DOMAINS="example.com api.example.com staging.example.com"
          WARN_DAYS=30
          EXPIRING=""

          for domain in $DOMAINS; do
            EXPIRY=$(echo | openssl s_client -servername "$domain" -connect "$domain:443" 2>/dev/null | \
              openssl x509 -noout -enddate 2>/dev/null | \
              cut -d= -f2)

            if [ -n "$EXPIRY" ]; then
              EXPIRY_EPOCH=$(date -d "$EXPIRY" +%s 2>/dev/null || date -j -f "%b %d %T %Y %Z" "$EXPIRY" +%s)
              NOW_EPOCH=$(date +%s)
              DAYS_LEFT=$(( ($EXPIRY_EPOCH - $NOW_EPOCH) / 86400 ))

              echo "$domain: $DAYS_LEFT days remaining"

              if [ "$DAYS_LEFT" -lt "$WARN_DAYS" ]; then
                EXPIRING="$EXPIRING\n- $domain: $DAYS_LEFT days"
              fi
            else
              EXPIRING="$EXPIRING\n- $domain: UNABLE TO CHECK"
            fi
          done

          if [ -n "$EXPIRING" ]; then
            echo "expiring=true" >> $GITHUB_OUTPUT
            echo "details<<EOF" >> $GITHUB_OUTPUT
            echo -e "$EXPIRING" >> $GITHUB_OUTPUT
            echo "EOF" >> $GITHUB_OUTPUT
          else
            echo "expiring=false" >> $GITHUB_OUTPUT
          fi

      - name: Create alert issue
        if: steps.certs.outputs.expiring == 'true'
        uses: actions/github-script@v7
        env:
          # Multi-line output goes via env, not template interpolation
          DETAILS: ${{ steps.certs.outputs.details }}
        with:
          script: |
            const details = process.env.DETAILS;

            // Check if an issue already exists
            const issues = await github.rest.issues.listForRepo({
              owner: context.repo.owner,
              repo: context.repo.repo,
              labels: 'ssl-expiring',
              state: 'open'
            });

            if (issues.data.length === 0) {
              await github.rest.issues.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                title: '⚠️ SSL Certificates Expiring Soon',
                labels: ['ssl-expiring', 'urgent'],
                body: `The following certificates will expire soon:\n${details}\n\nPlease renew before expiration.`
              });
            }

The workflow checks if an issue already exists before creating a new one—otherwise you’d get spammed with duplicate alerts.
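The date arithmetic is the fiddly part. Here's the same calculation in isolation, with a fixed expiry string standing in for the `notAfter` value that openssl prints:

```shell
# Day-count arithmetic from the workflow, with a hard-coded expiry date
# in the format openssl's -enddate produces after the cut.
EXPIRY="Dec 31 23:59:59 2030 GMT"
EXPIRY_EPOCH=$(date -d "$EXPIRY" +%s)   # GNU date, as on ubuntu-latest runners
NOW_EPOCH=$(date +%s)
DAYS_LEFT=$(( (EXPIRY_EPOCH - NOW_EPOCH) / 86400 ))
echo "$DAYS_LEFT days remaining"
```

GNU date parses the "Dec 31 23:59:59 2030 GMT" format directly with `-d`; the macOS fallback in the workflow is only needed if you run the same script locally on a Mac.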

Dependency License Auditing

If you ship software commercially, you need to know what licenses your dependencies use. GPL in a proprietary codebase is a legal headache waiting to happen.

name: License Audit

on:
  schedule:
    - cron: '0 0 * * 0'  # Weekly on Sunday
  pull_request:
    paths:
      - 'package.json'
      - 'package-lock.json'
      - 'Cargo.toml'
      - 'go.mod'

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Check licenses
        id: licenses
        run: |
          npx license-checker --summary --out licenses.txt

          # license-checker exits non-zero when a dependency falls outside the allowlist,
          # so test the exit code rather than grepping the output text
          if ! PROBLEMATIC=$(npx license-checker --onlyAllow 'MIT;Apache-2.0;BSD-2-Clause;BSD-3-Clause;ISC;0BSD' 2>&1); then
            echo "issues=true" >> $GITHUB_OUTPUT
            echo "report<<EOF" >> $GITHUB_OUTPUT
            echo "$PROBLEMATIC" >> $GITHUB_OUTPUT
            echo "EOF" >> $GITHUB_OUTPUT
          else
            echo "issues=false" >> $GITHUB_OUTPUT
          fi

      - name: Upload license report
        uses: actions/upload-artifact@v4
        with:
          name: license-report
          path: licenses.txt

      - name: Fail on problematic licenses
        if: steps.licenses.outputs.issues == 'true'
        env:
          # Multi-line report via env; interpolating it into the script would break quoting
          REPORT: ${{ steps.licenses.outputs.report }}
        run: |
          echo "Problematic licenses detected:"
          echo "$REPORT"
          exit 1

The --onlyAllow flag specifies your acceptable licenses. Adjust based on your needs—and definitely consult your legal team if you’re unsure.

Security Advisory Monitoring

GitHub’s Dependabot handles this for supported ecosystems, but sometimes you need more control or coverage for platforms Dependabot doesn’t support:

name: Security Advisory Check

on:
  schedule:
    - cron: '0 */4 * * *'  # Every 4 hours
  workflow_dispatch:

jobs:
  check-advisories:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Check for vulnerabilities
        id: vulns
        run: |
          # For npm projects
          npm audit --json > audit.json 2>/dev/null || true

          CRITICAL=$(jq '.metadata.vulnerabilities.critical // 0' audit.json)
          HIGH=$(jq '.metadata.vulnerabilities.high // 0' audit.json)

          if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
            echo "found=true" >> $GITHUB_OUTPUT
            echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
            echo "high=$HIGH" >> $GITHUB_OUTPUT
          else
            echo "found=false" >> $GITHUB_OUTPUT
          fi

      - name: Create security issue
        if: steps.vulns.outputs.found == 'true'
        uses: actions/github-script@v7
        with:
          script: |
            const critical = '${{ steps.vulns.outputs.critical }}';
            const high = '${{ steps.vulns.outputs.high }}';

            // Check for existing issue
            const issues = await github.rest.issues.listForRepo({
              owner: context.repo.owner,
              repo: context.repo.repo,
              labels: 'security',
              state: 'open'
            });

            if (issues.data.length === 0) {
              await github.rest.issues.create({
                owner: context.repo.owner,
                repo: context.repo.repo,
                title: '🔒 Security Vulnerabilities Detected',
                labels: ['security', 'urgent'],
                body: `Vulnerability scan found:\n- Critical: ${critical}\n- High: ${high}\n\nRun \`npm audit\` locally for details.`
              });
            }

Creative and Unusual

Now for the fun stuff—the workflows that made me smile when I first encountered them.

GitHub Profile Games

Yes, you can play chess via GitHub issues. Here’s a simplified version of how it works:

name: Chess Game

on:
  issues:
    types: [opened]

jobs:
  process-move:
    if: startsWith(github.event.issue.title, 'chess:')
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/checkout@v4

      - name: Parse move
        id: parse
        run: |
          TITLE="${{ github.event.issue.title }}"
          MOVE=$(echo "$TITLE" | sed 's/chess: //')
          echo "move=$MOVE" >> $GITHUB_OUTPUT

      - name: Validate and apply move
        run: |
          # In a real implementation, you'd:
          # 1. Load the current board state from a file
          # 2. Validate the move is legal
          # 3. Apply the move
          # 4. Check for checkmate/stalemate
          # 5. Save the new state
          # 6. Generate a new board image
          echo "Processing move: ${{ steps.parse.outputs.move }}"

      - name: Update README with board
        run: |
          # Generate ASCII or image board representation
          # Update README.md with new board state
          echo "Board updated"

      - name: Close issue with result
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              body: 'Move processed! Check the README for the updated board.'
            });

            await github.rest.issues.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              state: 'closed'
            });

The real implementations are more sophisticated—they use proper chess engines for validation and generate nice-looking board images. Check out timburgan/timburgan for a polished example.

Generative Art from Repo Activity

Turn your commit history into something visual:

name: Activity Art

on:
  schedule:
    - cron: '0 0 * * 0'  # Weekly
  workflow_dispatch:

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Gather activity data
        run: |
          # Count commits per day for the last 30 days
          for i in $(seq 0 29); do
            DATE=$(date -d "$i days ago" +%Y-%m-%d 2>/dev/null || date -v-${i}d +%Y-%m-%d)
            COUNT=$(git log --oneline --after="$DATE 00:00" --before="$DATE 23:59" | wc -l)
            echo "$DATE,$COUNT" >> activity.csv
          done

      - name: Generate visualization
        run: |
          # You could use Python with matplotlib, Node with canvas, etc.
          # This is where you'd create an image based on the activity data
          echo "Generating visualization from activity.csv"

      - name: Update repository
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add activity.csv
          git diff --quiet --cached || git commit -m "chore: update activity visualization"
          git push

I’ve seen visualizations that create:

  • Skyline silhouettes (one “building” per day, height based on commits)
  • Color gradients based on activity intensity
  • Abstract patterns using commit timestamps as seeds

It’s useless in the best way.

Weather-Based Repository Theming

Why not change your repo’s appearance based on the weather?

name: Weather Theme

on:
  schedule:
    - cron: '0 */3 * * *'  # Every 3 hours

jobs:
  update-theme:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Fetch weather
        id: weather
        run: |
          # Using wttr.in for simplicity (no API key needed)
          WEATHER=$(curl -s "wttr.in/Berlin?format=%C")
          echo "condition=$WEATHER" >> $GITHUB_OUTPUT

      - name: Update theme references
        run: |
          CONDITION="${{ steps.weather.outputs.condition }}"

          # Map weather to theme
          case "$CONDITION" in
            *Sunny*|*Clear*)
              THEME="☀️ Sunny vibes"
              ;;
            *Rain*|*Drizzle*)
              THEME="🌧️ Rainy day coding"
              ;;
            *Snow*)
              THEME="❄️ Winter wonderland"
              ;;
            *Cloud*)
              THEME="☁️ Cloudy thoughts"
              ;;
            *)
              THEME="🌤️ Just another day"
              ;;
          esac

          echo "Current mood: $THEME" > WEATHER.md

      - name: Commit changes
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add WEATHER.md
          git diff --quiet --cached || git commit -m "chore: update weather theme"
          git push

Completely pointless. Absolutely delightful.

Team and Organization Management

Onboarding Automation

When a new team member joins, there’s always a checklist: set up dev environment, get access to services, meet the team, complete training. Why not automate the tracking?

name: New Team Member Onboarding

on:
  issues:
    types: [opened]

jobs:
  setup-onboarding:
    if: contains(github.event.issue.labels.*.name, 'new-hire')
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Create onboarding checklist
        uses: actions/github-script@v7
        with:
          script: |
            const checklist = `
            ## Welcome! 🎉

            Here's your onboarding checklist:

            ### Week 1
            - [ ] Complete HR paperwork
            - [ ] Set up development environment
            - [ ] Get access to GitHub org
            - [ ] Meet with your buddy (assigned below)
            - [ ] Read the team wiki

            ### Week 2
            - [ ] Complete first code review
            - [ ] Submit first PR
            - [ ] Attend team standup
            - [ ] 1:1 with manager

            ### Week 3
            - [ ] Complete security training
            - [ ] Review production access guidelines
            - [ ] Ship something to production!

            ---
            *This issue will be automatically updated as you complete items.*
            `;

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              body: checklist
            });

            // Assign a buddy (could be rotated or random)
            await github.rest.issues.addAssignees({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.issue.number,
              assignees: ['senior-dev-username']
            });

The issue becomes the single source of truth for onboarding progress. When all boxes are checked, you could even trigger a “graduation” workflow that grants additional access.
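A graduation trigger only needs to count the unchecked boxes. The Markdown parsing is a one-liner; the issue body is hard-coded here for illustration, where the workflow would fetch it via the API:

```shell
# Count remaining unchecked items in a Markdown task list.
BODY='- [x] Complete HR paperwork
- [ ] Set up development environment
- [x] Read the team wiki'

REMAINING=$(printf '%s\n' "$BODY" | grep -c '^- \[ \]')
echo "$REMAINING item(s) remaining"
# → 1 item(s) remaining
```

When `REMAINING` hits zero you could fire a `repository_dispatch` event to kick off whatever access-granting workflow comes next.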

Team Health Metrics

Track how your team is doing over time—PR review times, issue close rates, contributor activity:

name: Team Metrics Report

on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9 AM

jobs:
  generate-report:
    runs-on: ubuntu-latest
    steps:
      - name: Gather metrics
        id: metrics
        run: |
          # Average time to first review
          # Issues opened vs closed this week
          # PRs merged
          # Etc.

          # Using gh CLI for data gathering
          # --limit raises gh's default cap of 30 results so counts aren't truncated
          PRS_MERGED=$(gh pr list --state merged --limit 1000 --search "merged:>=$(date -d '7 days ago' +%Y-%m-%d)" --json number | jq length)
          ISSUES_CLOSED=$(gh issue list --state closed --limit 1000 --search "closed:>=$(date -d '7 days ago' +%Y-%m-%d)" --json number | jq length)
          ISSUES_OPENED=$(gh issue list --state all --limit 1000 --search "created:>=$(date -d '7 days ago' +%Y-%m-%d)" --json number | jq length)

          echo "prs_merged=$PRS_MERGED" >> $GITHUB_OUTPUT
          echo "issues_closed=$ISSUES_CLOSED" >> $GITHUB_OUTPUT
          echo "issues_opened=$ISSUES_OPENED" >> $GITHUB_OUTPUT
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}  # gives gh a repo context without a checkout

      - name: Post report
        uses: actions/github-script@v7
        with:
          script: |
            const report = `
            ## Weekly Team Metrics

            **Week of ${new Date().toISOString().split('T')[0]}**

            | Metric | Count |
            |--------|-------|
            | PRs Merged | ${{ steps.metrics.outputs.prs_merged }} |
            | Issues Closed | ${{ steps.metrics.outputs.issues_closed }} |
            | Issues Opened | ${{ steps.metrics.outputs.issues_opened }} |

            ---
            *Generated automatically by GitHub Actions*
            `;

            // Post to a discussions board or specific issue
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: 1, // Dedicated metrics issue
              body: report
            });

You could extend this to post to Slack, send email digests, or build a dashboard that tracks trends over time.
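For the Slack option, the simplest route is an incoming webhook. A hedged sketch—`SLACK_WEBHOOK_URL` is an assumed repository secret, and the metric values are placeholders for the step outputs above:

```shell
# Build a Slack message payload from the week's metrics and post it to an
# incoming webhook. The numbers are placeholders for the step outputs above.
PRS_MERGED=12
ISSUES_CLOSED=8
ISSUES_OPENED=5

PAYLOAD=$(jq -n \
  --arg text "Weekly metrics: $PRS_MERGED PRs merged, $ISSUES_CLOSED issues closed, $ISSUES_OPENED opened" \
  '{text: $text}')

# SLACK_WEBHOOK_URL would come from a repository secret; skip the post if unset.
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
  curl -sf -X POST -H 'Content-Type: application/json' \
    -d "$PAYLOAD" "$SLACK_WEBHOOK_URL"
fi
```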

Knowledge Base Freshness

Documentation gets stale. Here’s a workflow that flags docs that haven’t been touched in a while:

name: Docs Freshness Check

on:
  schedule:
    - cron: '0 10 1 * *'  # First of every month

jobs:
  check-freshness:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find stale docs
        id: stale
        run: |
          STALE_DAYS=180
          STALE_DATE=$(date -d "$STALE_DAYS days ago" +%s 2>/dev/null || date -v-${STALE_DAYS}d +%s)
          STALE_FILES=""

          shopt -s globstar nullglob  # make ** recurse and empty globs expand to nothing
          for file in docs/**/*.md; do
            if [ -f "$file" ]; then
              LAST_MODIFIED=$(git log -1 --format=%ct -- "$file")
              if [ -n "$LAST_MODIFIED" ] && [ "$LAST_MODIFIED" -lt "$STALE_DATE" ]; then
                DAYS_OLD=$(( ($(date +%s) - $LAST_MODIFIED) / 86400 ))
                STALE_FILES="$STALE_FILES\n- $file ($DAYS_OLD days)"
              fi
            fi
          done

          if [ -n "$STALE_FILES" ]; then
            echo "found=true" >> $GITHUB_OUTPUT
            echo "files<<EOF" >> $GITHUB_OUTPUT
            echo -e "$STALE_FILES" >> $GITHUB_OUTPUT
            echo "EOF" >> $GITHUB_OUTPUT
          else
            echo "found=false" >> $GITHUB_OUTPUT
          fi

      - name: Create review issue
        if: steps.stale.outputs.found == 'true'
        uses: actions/github-script@v7
        with:
          script: |
            const files = `${{ steps.stale.outputs.files }}`;

            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: '📚 Documentation Review Needed',
              labels: ['documentation', 'needs-review'],
              body: `The following documentation files haven't been updated in 6+ months:\n${files}\n\nPlease review and update or confirm they're still accurate.`
            });

Six months is arbitrary—adjust based on how fast your project moves. The goal isn’t to force updates, but to prompt someone to verify the content is still accurate.

The Limits and Gotchas

Before you go wild with these patterns, a few things to keep in mind:

Rate limits are real. The default GITHUB_TOKEN is capped at roughly 1,000 API requests per hour per repository, and chatty workflows will hit that ceiling faster than you'd expect. The actions/github-script action makes it especially easy to rack up calls inside a loop.
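One defensive pattern is to wrap API calls in a retry with exponential backoff, so a transient rate-limit error doesn't fail the whole run. A generic sketch—`retry_with_backoff` is a hypothetical helper, not a gh feature, and the `gh api` call in the comment is just an example target:

```shell
# Retry a command with exponential backoff: wait 1s, 2s, 4s, ... between
# attempts. Useful for shrugging off transient API rate-limit errors.
retry_with_backoff() {
  local max_attempts=$1; shift
  local delay=1
  local attempt
  for attempt in $(seq 1 "$max_attempts"); do
    if "$@"; then
      return 0
    fi
    echo "Attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
  done
  return 1
}

# Example: retry_with_backoff 5 gh api repos/OWNER/REPO/issues
retry_with_backoff 3 true && echo "succeeded"
```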

Scheduled workflows can be unreliable. GitHub doesn't guarantee exact timing for cron-triggered workflows—delays of several minutes are common during high-load periods—and scheduled workflows in public repositories are disabled automatically after 60 days without repository activity. If you need precision, consider an external scheduler.
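An external scheduler (server cron, a cloud scheduler, etc.) can fire a workflow on demand through the repository_dispatch API. A sketch—`OWNER/REPO` is a placeholder, `GITHUB_PAT` is an assumed token with repo access, and the receiving workflow would need `on: repository_dispatch: types: [precise-schedule]`:

```shell
# Trigger a workflow from outside GitHub via a repository_dispatch event.
# OWNER/REPO and GITHUB_PAT are placeholders for your repo and a token.
EVENT_TYPE="precise-schedule"
PAYLOAD=$(jq -n --arg t "$EVENT_TYPE" '{event_type: $t}')

if [ -n "${GITHUB_PAT:-}" ]; then
  curl -sf -X POST \
    -H "Authorization: Bearer $GITHUB_PAT" \
    -H "Accept: application/vnd.github+json" \
    "https://api.github.com/repos/OWNER/REPO/dispatches" \
    -d "$PAYLOAD"
fi
```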

Secrets are scoped. Repository secrets aren't available to workflows triggered by pull requests from forks (for security reasons). This breaks some automation in open-source contexts, so gate those steps—or reach for the pull_request_target trigger, carefully.

Self-modifying repos are tricky. Workflows that commit back to the repo can trigger themselves, creating infinite loops. Pushes made with the default GITHUB_TOKEN don't trigger new workflow runs, but pushes made with a personal access token or GitHub App token will—so add a conditional or a distinctive commit author to break the cycle.
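A common guard is to skip the job when the triggering actor is your own automation. A minimal sketch, assuming the workflow commits as the default github-actions bot:

```yaml
on:
  push:
    branches: [main]

jobs:
  build:
    # Skip runs triggered by our own bot commits to avoid an infinite loop.
    if: github.actor != 'github-actions[bot]'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```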

Costs add up. GitHub Actions has generous free tiers, but heavy usage of macOS or Windows runners can get expensive. Monitor your usage if you’re on a paid plan.

Wrapping Up the Series

When I started writing about GitHub Actions, I didn’t expect to end up here—talking about chess games in issues and weather-based themes. But that’s what makes Actions interesting. It’s not just a CI/CD tool. It’s a platform.

The workflows in this series barely scratch the surface. I’ve seen teams build entire internal tools on Actions—onboarding systems, deployment pipelines, data processing infrastructure. The constraint of “it runs on events and produces outputs” turns out to be surprisingly flexible.

If there’s one thing I hope you take away from this series, it’s this: start small, but think big. Set up the basic CI workflow. Then add a linter. Then auto-label your PRs. Then maybe something weird just for fun.

The best automation isn’t the cleverest—it’s the stuff you forget about because it just works. The PR that got the right labels. The stale issue that got closed. The certificate that got renewed before it expired. These aren’t exciting moments, but they add up.

And sometimes, yes, you should build a chess bot. Not because it’s useful, but because it’s fun. That’s allowed too.

Thanks for reading along. Now go automate something.