
Log Aggregation


Set up log aggregation pipeline

Works with OpenClaude

You are a DevOps engineer specializing in observability infrastructure. The user wants to set up a centralized log aggregation pipeline that collects, processes, and indexes logs from multiple sources.

What to check first

  • Verify disk space availability: df -h — log aggregation requires 50GB+ for reasonable retention
  • Check if Elasticsearch/OpenSearch cluster exists: curl -s http://localhost:9200/_cluster/health | jq .
  • Confirm Filebeat/Logstash/Fluentd is not already running: ps aux | grep -E "(filebeat|logstash|fluentd)"
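The three checks above can be combined into one pre-flight script — a sketch; adjust the host and port to your cluster:

```shell
#!/usr/bin/env bash
# Pre-flight checks before installing the pipeline (sketch).
set -u

# 1. Disk space: you want ~50 GB+ free on the log partition
df -h /var/log

# 2. Is there already an Elasticsearch/OpenSearch cluster? (assumes localhost:9200)
curl -s http://localhost:9200/_cluster/health || echo "no cluster responding on localhost:9200"

# 3. Are any log shippers already running?
pgrep -a -f 'filebeat|logstash|fluentd' || echo "no shipper processes found"
```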

Steps

  1. Install Filebeat on source machines: curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.11.0-linux-x86_64.tar.gz && tar xzf filebeat-8.11.0-linux-x86_64.tar.gz
  2. Configure Filebeat inputs in /etc/filebeat/filebeat.yml — add a filebeat.inputs: section with type: log and enabled: true pointing to your log paths (note: in Filebeat 8.x the log input is deprecated in favor of filestream, but it still works)
  3. Set up Logstash filters for parsing: create /etc/logstash/conf.d/pipeline.conf with input {}, filter {}, and output {} blocks
  4. Configure Elasticsearch output in Logstash: set hosts => ["elasticsearch:9200"] and index => "logs-%{+YYYY.MM.dd}"
  5. Create Elasticsearch index template: PUT to _index_template/logs with a mapping for @timestamp, message, hostname, and your custom fields
  6. Enable log rotation on source servers: edit /etc/logrotate.d/ configs to compress old logs and prevent disk exhaustion
  7. Deploy Kibana dashboards: use Kibana's Saved Objects API (POST /api/saved_objects/dashboard against the Kibana host, typically port 5601 — not the Elasticsearch host) to create visualizations filtering by source.hostname and log.level
  8. Set up Beats monitoring: enable monitoring.enabled: true in Filebeat to track its own performance metrics
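Steps 3–4 can be sketched as a single pipeline file. A minimal example, assuming Filebeat ships to Logstash on the default Beats port 5044 and the sources emit syslog-style lines (adjust the grok pattern to your actual log format):

```conf
# /etc/logstash/conf.d/pipeline.conf — minimal sketch for steps 3-4
input {
  beats {
    port => 5044
  }
}

filter {
  # Parse classic syslog lines; swap the pattern for your format
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}: %{GREEDYDATA:log_message}" }
  }
  # Use the parsed timestamp as the event time
  date {
    match => ["timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```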

Code

# /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/auth.log
  fields:
    service: myapp
    environment: production
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    service: nginx
  json.message_key: message
  json.keys_under_root: true

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

processors:
  - add_kubernetes_metadata:

Note: this example was truncated in the source. See the GitHub repo for the latest full version.
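Step 5's index template can look like the following sketch. The mapped fields are the ones step 5 names (@timestamp, message, hostname); anything beyond that depends on your schema. PUT this body to /_index_template/logs on Elasticsearch with Content-Type: application/json:

```json
{
  "index_patterns": ["logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message":    { "type": "text" },
        "hostname":   { "type": "keyword" }
      }
    }
  }
}
```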

Common Pitfalls

  • Treating this skill as a one-shot solution — most workflows need iteration and verification
  • Skipping the verification steps — you don't know it worked until you measure
  • Applying this skill without understanding the underlying problem — read the related docs first

When NOT to Use This Skill

  • When a simpler manual approach would take less than 10 minutes
  • On critical production systems without testing in staging first
  • When you don't have permission or authorization to make these changes

How to Verify It Worked

  • Run the verification steps documented above
  • Compare the output against your expected baseline
  • Check logs for any warnings or errors — silent failures are the worst kind
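The checks above can be made concrete. A sketch, assuming the Logstash output pattern index => "logs-%{+YYYY.MM.dd}" from step 4 and Elasticsearch reachable on localhost:9200:

```shell
# Compute today's expected index name (matches logs-%{+YYYY.MM.dd})
TODAY_INDEX="logs-$(date +%Y.%m.%d)"
echo "$TODAY_INDEX"

# Confirm the index exists and its doc count is growing
# (curl -s prints nothing if Elasticsearch is unreachable)
curl -s "http://localhost:9200/_cat/indices/${TODAY_INDEX}?v" || true

# Filebeat can also self-check its connection to the output:
#   filebeat test output -c /etc/filebeat/filebeat.yml
```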

Production Considerations

  • Test in staging before deploying to production
  • Have a rollback plan — every change should be reversible
  • Monitor the affected systems for at least 24 hours after the change

Quick Info

Difficulty: advanced
Version: 1.0.0
Author: Claude Skills Hub
Tags: monitoring, logs, aggregation

Install command:

curl -o ~/.claude/skills/log-aggregation.md https://claude-skills-hub.vercel.app/skills/monitoring/log-aggregation.md
