Description
Describe the bug
It seems that the alertmanager_url configured in the ruler section of the chart's config template is incorrect:
alertmanager_url: dnssrvnoa+http://_http-metrics._tcp.{{ template "mimir.fullname" . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}/alertmanager
With that value, the ruler logs the following error:
alertmanager=http://<cluster-ip>.mimir-alertmanager-headless.monitoring.svc.cluster.local.:8080/alertmanager/api/v2/alerts count=1 msg="Error sending alert" err="bad response status 404 Not Found"
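For anyone reproducing this, the rendered value the ruler actually receives can be confirmed directly. A minimal sketch, assuming the release is named mimir in the monitoring namespace and the chart renders the config into a ConfigMap named mimir-config under the key mimir.yaml (adjust if your install stores the config in a Secret instead):

# Print the alertmanager_url from the rendered Mimir config.
# Resource and key names are assumptions; adjust to your install.
kubectl -n monitoring get configmap mimir-config \
  -o jsonpath='{.data.mimir\.yaml}' | grep alertmanager_url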
To Reproduce
Steps to reproduce the behavior:
- Deploy the mimir-distributed chart with the alertmanager and ruler enabled in multi-tenant mode
- Deploy a rule that fires immediately (a sketch of such a rule follows this list)
- With the default configuration template, the ruler prints the error above, as it cannot deliver the alert to the alertmanager's http-metrics endpoint.
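For the second step, a minimal sketch of an always-firing rule uploaded with mimirtool; the --address and --id values are placeholders (point --address at whatever routes to the ruler API in your install, e.g. the nginx/gateway service, and --id at the tenant you use):

# vector(1) always returns a sample, so the alert fires on the first evaluation.
cat > always-firing.yaml <<'EOF'
namespace: test
groups:
  - name: always-firing
    rules:
      - alert: AlwaysFiring
        expr: vector(1)
        labels:
          severity: info
        annotations:
          summary: Test alert that fires immediately
EOF

# Placeholder address and tenant ID; adjust to your environment.
mimirtool rules load always-firing.yaml \
  --address=http://mimir-nginx.monitoring.svc:80 \
  --id=tenant-1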
Expected behavior
I'd expect the mimir-distributed chart to connect the ruler deployment to the alertmanager instance(s) correctly out of the box.
I'm not 100% sure what the correct template would be. I'm also not sure why the ruler is posting to api/v2/alerts
when the Mimir docs state the correct path is api/v1/alerts:
https://grafana.com/docs/mimir/latest/operators-guide/reference-http-api/
Environment
- Infrastructure: Kubernetes
- Deployment tool: helm
Additional Context
From my preliminary tests: when I port-forward to the alertmanager and query the API with Postman, POSTing to /api/v1/alerts
with an OrgID header works and the alert is properly sent to the upstream alertmanager API, but POSTing to /api/v2/alerts
(which is what the ruler does) does not work and returns a 404 error.
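The same test expressed with curl, roughly equivalent to the Postman requests; the service name and port follow the error message above, while the tenant ID and alert payload are placeholders:

# Port-forward to the alertmanager (runs in the background).
kubectl -n monitoring port-forward svc/mimir-alertmanager-headless 8080:8080 &

# v1 path with a tenant header: accepted.
curl -i -X POST http://localhost:8080/api/v1/alerts \
  -H 'X-Scope-OrgID: tenant-1' \
  -H 'Content-Type: application/json' \
  -d '[{"labels":{"alertname":"Test"}}]'

# v2 path, which is what the ruler uses: returns 404 Not Found.
curl -i -X POST http://localhost:8080/api/v2/alerts \
  -H 'X-Scope-OrgID: tenant-1' \
  -H 'Content-Type: application/json' \
  -d '[{"labels":{"alertname":"Test"}}]'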