
[FEATURE] Add Container Metrics on running containers in OpenRMF #83

Closed
Cingulara opened this issue Jan 22, 2020 · 2 comments
Assignees
Labels
enhancement New feature or request

Comments

@Cingulara
Owner

Is your feature request related to a problem? Please describe.
We need to show metrics on running containers inside the application. This is different from how the APIs or MSGs are instrumented; these are system-level metrics that something like cAdvisor can provide.

Describe the solution you'd like
Show all metrics for the containers running inside Docker or inside K8s.

Additional context
Possibly use something like https://github.com/google/cadvisor linked to Prometheus.
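
As a rough sketch of what that would give us: cAdvisor exports per-container series such as container_cpu_usage_seconds_total and container_memory_working_set_bytes, which could then be pulled out of whatever Prometheus instance ends up scraping it (localhost:9090 below is just a placeholder for a local setup), e.g.:

    # per-container CPU usage rate over the last 5 minutes
    curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=rate(container_cpu_usage_seconds_total[5m])'
    # per-container working-set memory
    curl -s 'http://localhost:9090/api/v1/query' --data-urlencode 'query=container_memory_working_set_bytes'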

@Cingulara Cingulara added the enhancement New feature or request label Jan 22, 2020
@Cingulara Cingulara self-assigned this Jan 22, 2020
@Cingulara
Owner Author

Cingulara commented Mar 26, 2020

global:
  scrape_interval: 30s # By default, scrape targets every 30 seconds.

scrape_configs:
  # The job name is added as a label job=<job_name> to any timeseries scraped from this config.
  - job_name: 'nats-server'
    static_configs:
      - targets: ['natspromexporter:7777']

  - job_name: 'docker'
    static_configs:
      - targets: ['docker.for.mac.host.internal:9323']

  - job_name: 'prometheus'
    static_configs:
      - targets: ['docker.for.mac.localhost:9090']

  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
      - targets:
        - cadvisor:8080

  - job_name: node-exporter
    scrape_interval: 5s
    static_configs:
      - targets:
        - node-exporter:9100
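
Since the Prometheus container in the compose file below is started with --web.enable-lifecycle, the config above can be reloaded and spot-checked without restarting the container. Assuming Prometheus is published on localhost:9090, something like:

    # reload prometheus.yml after edits
    curl -X POST http://localhost:9090/-/reload
    # spot-check that the cadvisor and node-exporter targets are healthy
    curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'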

@Cingulara
Owner Author

# Elastic Stack persistence
elasticserver:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
  restart: always
  container_name: elasticserver
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    - discovery.type=single-node
    - logger.level=DEBUG
  volumes:
    - esdata01:/usr/share/elasticsearch/data
  networks:
    - xxxxxxxxxxx

# NATS message server for publish/subscribe and request/reply messages
natsserver:
  image: nats:2.1.2-linux
  container_name: natsserver
  command: -m 8222
  restart: always
  ports:
    - 4222
    - 6222
    - 8222
  networks:
    - xxxxxxxxxxx

natspromexporter:
  image: synadia/prometheus-nats-exporter:latest
  command: -varz -connz -subz http://natsserver:8222
  restart: always
  container_name: natspromexporter
  ports:
    - 7777
  networks:
    - xxxxxxxxxxx

# Metrics Servers
node-exporter:
  image: prom/node-exporter:latest
  container_name: node-exporter
  restart: unless-stopped
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
  command:
    - '--path.procfs=/host/proc'
    - '--path.rootfs=/rootfs'
    - '--path.sysfs=/host/sys'
    - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
  expose:
    - 9100
  networks:
    - xxxxxxxxxxx
  labels:
    org.label-schema.group: "monitoring"

cadvisor:
  image: google/cadvisor:latest
  container_name: cadvisor
  restart: always
  ports:
    - 9080:8080
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
  networks:
    - xxxxxxxxxxx
  labels:
    org.label-schema.group: "monitoring"

prometheus:
  image: prom/prometheus
  container_name: prometheus
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--web.enable-lifecycle'
  restart: always
  ports:
    - 9090:9090
  volumes:
    - prometheus-data-volume:/prometheus # persist the data
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  networks:
    - xxxxxxxxxxx
  depends_on:
    - cadvisor
    - natspromexporter
  labels:
    org.label-schema.group: "monitoring"

grafana:
  image: grafana/grafana
  container_name: grafana
  #command:
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=1qaz2WSX3edc4RFVgr@fana
  restart: always
  volumes:
    - grafana-data-volume:/var/lib/grafana # persist the data
  ports:
    - 3000:3000
  networks:
    - xxxxxxxxxxx
  depends_on:
    - prometheus
  labels:
    org.label-schema.group: "monitoring"

jaeger:
  image: jaegertracing/all-in-one:latest
  container_name: jaeger
  restart: always
  ports:
    - "5778:5778/tcp"
    - "6831:6831/udp"
    - "16686:16686" # Query Service and UI Metrics
    - "16687:16687"
    - "14271:14271" # Agent Metrics
  networks:
    - xxxxxxxxxxx
  labels:
    org.label-schema.group: "monitoring"

# put all the volume listings here for persistent data
volumes:
  prometheus-data-volume:
  grafana-data-volume:

networks:
  xxxxxxxxxxxx:
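
A quick way to sanity-check this stack once it is up (assuming the compose fragment above is saved next to the prometheus.yml from the earlier comment) might be:

    # bring up the monitoring pieces
    docker-compose up -d cadvisor node-exporter prometheus grafana
    # cAdvisor raw metrics are published on host port 9080 above
    curl -s http://localhost:9080/metrics | grep '^container_memory_usage_bytes' | head
    # Grafana is on http://localhost:3000; add Prometheus as a data source at http://prometheus:9090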

@Cingulara Cingulara mentioned this issue May 28, 2020