Kafka is primarily used to build real-time streaming data pipelines and applications that adapt to the data streams. It combines messaging, storage, and stream processing to allow storage and analysis of both historical and real-time data.

For this setup, we are using the Bitnami Kafka Helm charts to start the Kafka server/cluster.

Method 1 - Native Way

Supported by Prometheus since the beginning. To set up an exporter the native way, the Prometheus config needs to be updated to add the target. A sample configuration:

```yaml
# scrape_config job
scrape_configs:
  - job_name: kafka
    scrape_interval: 45s
    scrape_timeout: 30s
    metrics_path: "/metrics"
    static_configs:
      - targets:
          - <kafka-exporter-address>:<port> # placeholder: the target was omitted in the original
```

Method 2 - Service Discovery

This method is applicable for Kubernetes deployments only. A default scrape config can be added to the prometheus.yaml file, and an annotation can be added to the exporter service. With this, Prometheus will automatically start scraping the data from the services with the mentioned path.

prometheus.yaml:

```yaml
- job_name: kubernetes-services
  scrape_interval: 15s
  scrape_timeout: 10s
  kubernetes_sd_configs:
    - role: service
  relabel_configs:
    # Example relabel to scrape only endpoints that have the
    # prometheus.io/scrape: "true" annotation.
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Honor the prometheus.io/path: "/scrape/path" annotation.
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Honor the prometheus.io/port: "80" annotation.
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: (.+)(?::\d+)?;(\d+)
      replacement: $1:$2
```

Exporter service annotations:

```yaml
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
```

Method 3 - Prometheus Operator

The Prometheus operator supports an automated way of scraping data from the exporters by setting up a ServiceMonitor Kubernetes object. For reference, a sample service monitor for Kafka can be found here.

Add/update the Prometheus operator's selectors. By default, the Prometheus operator comes with empty selectors, which will select every service monitor available in the cluster for scraping the data.

To check your Prometheus configuration:

```shell
kubectl get prometheus -n <namespace> -o yaml
```

A sample output will look like this:

```yaml
ruleNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kps
```
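The annotations used in Method 2 live on the exporter's Service object. As a sketch only, a complete Service for a Kafka exporter might look like the following; the name, labels, and port 9308 are illustrative assumptions, not taken from the original setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-exporter              # assumed name, for illustration
  annotations:
    prometheus.io/scrape: "true"    # opt this Service in to scraping (matches the keep rule)
    prometheus.io/path: /metrics    # rewrites __metrics_path__
    prometheus.io/port: "9308"      # rewrites the scrape port in __address__
spec:
  selector:
    app: kafka-exporter             # assumed pod label
  ports:
    - name: metrics
      port: 9308                    # assumed exporter port
      targetPort: 9308
```

Once this Service is applied, the service-discovery job from Method 2 should pick it up on the next discovery cycle without any change to the Prometheus config.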
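For Method 3, since the linked sample service monitor is not reproduced above, here is a hedged sketch of what a ServiceMonitor for a Kafka exporter could look like, wired to the `release: kps` selector shown in the sample output; the metadata name and the Service labels it selects are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kafka-exporter              # assumed name, for illustration
  labels:
    release: kps                    # must match serviceMonitorSelector.matchLabels
spec:
  selector:
    matchLabels:
      app: kafka-exporter           # assumed label on the exporter's Service
  endpoints:
    - port: metrics                 # assumed port *name* on the Service
      path: /metrics
      interval: 30s
```

The key points are that `metadata.labels` must satisfy the Prometheus object's `serviceMonitorSelector`, and `spec.selector` plus the endpoint port name must match the exporter's Service definition.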