Ingester does not start properly when using OTEL image #1250

Closed
kevinearls opened this issue Oct 13, 2020 · 3 comments

kevinearls (Contributor) commented Oct 13, 2020

Attachment: ingester.log

I used a modified version of our simple-streaming example (attached) to try to run the OTEL ingester. The ingester pod will start up, and the log will eventually say:

2020-10-12T12:33:45.982Z INFO service/service.go:252 Everything is ready. Begin running and processing data.
2020-10-12T12:33:45.984Z INFO kafkareceiver/kafka_receiver.go:152 Starting consumer group {"component_kind": "receiver", "component_type": "kafka", "component_name": "kafka", "partition": 0}

This makes it look like it's ready. However, the pod never goes into a ready state, and a minute or so later it logs: Received signal from OS {"signal": "terminated"}

The full log is attached. I'm not sure if this is a bug or if we just need additional configuration changes when using the OTEL ingester.

@pavolloffay Do you have any thoughts on this? @objectiser mentioned that Joe or someone from logz may have tested the OTEL collector and ingester. Do you know anything about that?

kevinearls (Contributor, Author) commented:

simple-streaming-otel.yaml is here: https://gist.github.com/kevinearls/d81ed041e17331de8e022b6f92f21a90
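
For reference, a streaming Jaeger CR that swaps in the OTEL ingester image looks roughly like the sketch below; the image tag, broker address, and storage settings are placeholders, so see the gist above for the actual file.

# Rough sketch only; see the gist for the real simple-streaming-otel.yaml.
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-streaming
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092  # placeholder broker address
  ingester:
    # Placeholder image name for the OTEL-based ingester build
    image: jaegertracing/jaeger-opentelemetry-ingester:latest
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200  # placeholder storage endpoint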

pavolloffay (Member) commented:

This part of the logs is interesting: it went to the ready state and then received a terminated signal.

	Health Check state change	{"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "ready"}
2020-10-12T12:33:45.982Z	INFO	service/service.go:252	Everything is ready. Begin running and processing data.
2020-10-12T12:33:45.984Z	INFO	kafkareceiver/kafka_receiver.go:152	Starting consumer group	{"component_kind": "receiver", "component_type": "kafka", "component_name": "kafka", "partition": 0}
2020-10-12T12:34:56.926Z	INFO	service/service.go:265	Received signal from OS	{"signal": "terminated"}
2020-10-12T12:34:56.927Z	INFO	service/service.go:432	Starting shutdown...
2020-10-12T12:34:56.930Z	INFO	healthcheck/handler.go:128	Health Check state change	{"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "unavailable"}

I am not sure what is going on. You can try running it locally with a local Kafka; the commands I used are in open-telemetry/opentelemetry-collector#1410
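
For a local run, a minimal config.yaml with the Kafka receiver and the health_check extension could look like this sketch (broker, topic, and the logging exporter are assumptions, not the exact setup from that issue):

# Minimal sketch of a local config.yaml for testing the Kafka receiver.
# Broker, topic, and exporter choices are assumptions for illustration.
extensions:
  health_check: {}          # serves readiness checks, on port 13133 by default
receivers:
  kafka:
    brokers: ["localhost:9092"]
    topic: jaeger-spans
    protocol_version: 2.0.0
processors:
  batch: {}
exporters:
  logging: {}               # just prints received spans
service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [kafka]
      processors: [batch]
      exporters: [logging]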

kevinearls (Contributor, Author) commented:

@pavolloffay Not a bug; I was just using the wrong health check port. Sorry!
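
For anyone who hits the same symptom: the OTEL image serves its health check through the health_check extension, which listens on port 13133 by default, so a readiness probe still pointed at the legacy Jaeger admin port will never succeed and the kubelet eventually terminates the pod. A sketch of a matching probe (values illustrative):

# Illustrative readiness probe for the ingester pod; the port must match
# the OTEL health_check extension (13133 by default), not the legacy
# Jaeger admin port.
readinessProbe:
  httpGet:
    path: /
    port: 13133
  initialDelaySeconds: 5
  periodSeconds: 10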
