5xx errors in load generator #1800
@agardnerIT are all pods running?
I've turned some off (like OpenSearch, Jaeger, Grafana, and the accounting service, because it keeps crashing), but other than that, yes, they are.
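For anyone reproducing this, components can be toggled off when installing via the Helm chart. A minimal sketch, assuming the chart's value keys look like recent versions (sub-charts such as Jaeger/Grafana/OpenSearch toggled at the top level, demo services under `components`); exact key names may differ by chart version:

```shell
# Minimal sketch: install/upgrade the demo with optional components disabled.
# Value keys are assumptions based on recent chart versions and may differ.
helm upgrade --install my-otel-demo open-telemetry/opentelemetry-demo \
  --set jaeger.enabled=false \
  --set grafana.enabled=false \
  --set opensearch.enabled=false \
  --set components.accounting.enabled=false
```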
I deployed this today.
Sorry @agardnerIT, this question was for @puckpuck 😅. Locally, we have one issue with …
Side note: it would be good to document which services can be disabled and what effect (if any) disabling them has on the core demo / use cases. For example, I've turned off my accounting service, but I have no idea whether that matters to the "core system" or not. Perhaps a new column / table on this page, like the sketch below:
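An illustrative sketch of what such a table could look like (the "Impact when disabled" entries are assumptions for illustration, not verified behavior):

| Service | Safe to disable? | Impact when disabled (assumed) |
| --- | --- | --- |
| accounting | Probably | Orders from Kafka are no longer consumed |
| grafana | Yes | No dashboards; traces/metrics still collected |
| frontend | No | Core demo UI stops working |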
I (probably) have the same problem, though with local Docker rather than Kubernetes. When I open http://localhost:8080 I get an error, while http://localhost:8080/jaeger/ui works fine and http://localhost:8080/feature/ works fine as well. Next to the standard setup, I have https://github.com/cbos/observability-toolkit coupled with the demo setup.
The IP address it tries to connect to is odd: 52.140.110.196. This is what I have in the Envoy proxy (http://localhost:10000/clusters):
Some of the IP addresses are not resolved to local addresses, but to remote/public addresses.
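For anyone checking the same thing, the Envoy admin interface lists the address each upstream cluster resolved to. A minimal sketch, assuming the admin port is published on localhost:10000 as in the demo's default compose setup:

```shell
# Dump Envoy's upstream clusters; each endpoint line contains the
# address a cluster resolved to (should be container-network IPs locally).
curl -s http://localhost:10000/clusters
# Narrow it to a single upstream, e.g. the frontend:
curl -s http://localhost:10000/clusters | grep frontend
```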
#1768 changed this version from 1.30 to 1.32. cc: @puckpuck. I don't know all the details, but at least this fixed my problem; my frontend (and other services) now resolve to local addresses:
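A quick way to verify which Envoy version the proxy is actually running after such a change; the `frontend-proxy` service name is an assumption based on the demo's compose file:

```shell
# Print the Envoy version inside the running frontend-proxy container.
docker compose exec frontend-proxy envoy --version
```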
@agardnerIT and @cbos, what distro of k8s are you running this in? I've tried it out today with a local …
Bug Report
Symptom
Lots and lots of 5xx errors.
What is the expected behavior?
No 5xx errors.
What is the actual behavior?
Lots of 5xx errors from the load generator (see Symptom above).
Reproduce
Could you provide the minimum required steps to reproduce the issue you're seeing?
Additional Context
curl from on-cluster pod
It works:
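For reference, a minimal sketch of how an on-cluster curl check like this can be run; the `frontend-proxy` service name and port 8080 are assumptions based on the demo's Kubernetes defaults:

```shell
# Start a throwaway curl pod and request the frontend through the proxy
# from inside the cluster (service name/port assumed from demo defaults).
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://frontend-proxy:8080/
```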