
5xx errors in load generator #1800

Open
agardnerIT opened this issue Nov 28, 2024 · 9 comments · May be fixed by #1806
Labels
bug Something isn't working

Comments

@agardnerIT

agardnerIT commented Nov 28, 2024

Bug Report

helm list -A
my-otel-demo    ......   deployed    opentelemetry-demo-0.33.4     1.12.0

Symptom

Lots and lots of 5xx errors.

Failed to load resource: the server responded with a status of 503 (Service Unavailable)
Failed to load resource: net::ERR_CONNECTION_REFUSED
Failed to load resource: net::ERR_CONNECTION_REFUSED
Failed to load resource: net::ERR_CONNECTION_REFUSED
Failed to load resource: net::ERR_CONNECTION_REFUSED
Failed to load resource: the server responded with a status of 503 (Service Unavailable)

What is the expected behavior?

No 5xx errors


What is the actual behavior?

(See the 5xx errors listed under Symptom above.)

Reproduce

Could you provide the minimum required steps to reproduce the issue you're seeing?

We will close this issue if:

  • The steps you provide are complex.
  • We cannot reproduce the behavior you're reporting.

Additional Context

kubectl -n <REDACTED> describe pod/my-otel-demo-loadgenerator-867c949bd7-dcgtt
Name:             my-otel-demo-loadgenerator-867c949bd7-dcgtt
Namespace:        <REDACTED>
Priority:         0
Service Account:  my-otel-demo
Start Time:       Thu, 28 Nov 2024 14:16:51 +1000
Labels:           app.kubernetes.io/component=loadgenerator
                  app.kubernetes.io/instance=my-otel-demo
                  app.kubernetes.io/name=my-otel-demo-loadgenerator
                  opentelemetry.io/name=my-otel-demo-loadgenerator
                  pod-template-hash=867c949bd7
Annotations:      <none>
Status:           Running
IP:               <REDACTED>
IPs:
  IP:           <REDACTED>
Controlled By:  ReplicaSet/my-otel-demo-loadgenerator-867c949bd7
Containers:
  loadgenerator:
    Container ID:   containerd://344df4921a5b3b84577b3a85bed54f9c6db1ef1547c5f46a47c6798fbc24912b
    Image:          ghcr.io/open-telemetry/demo:1.12.0-loadgenerator
    Image ID:       ghcr.io/open-telemetry/demo@sha256:85c9935ff31b7ab575903fbd0b56a3161ec13e508966df25dc68fcfe7af5ec98
    Port:           8089/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Nov 2024 14:16:52 +1000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1500Mi
    Requests:
      memory:  1500Mi
    Environment:
      OTEL_SERVICE_NAME:                                   (v1:metadata.labels['app.kubernetes.io/component'])
      OTEL_COLLECTOR_NAME:                                my-otel-demo-otelcol
      OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE:  cumulative
      LOCUST_WEB_PORT:                                    8089
      LOCUST_USERS:                                       10
      LOCUST_SPAWN_RATE:                                  1
      LOCUST_HOST:                                        http://my-otel-demo-frontendproxy.<REDACTED>.svc.cluster.local:8080
      LOCUST_HEADLESS:                                    false
      LOCUST_AUTOSTART:                                   true
      LOCUST_BROWSER_TRAFFIC_ENABLED:                     true
      PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION:             python
      FLAGD_HOST:                                         my-otel-demo-flagd
      FLAGD_PORT:                                         8013
      OTEL_EXPORTER_OTLP_ENDPOINT:                        http://$(OTEL_COLLECTOR_NAME):4317
      OTEL_RESOURCE_ATTRIBUTES:                           service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo,service.version=1.12.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qzcxc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-qzcxc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m14s  default-scheduler  Successfully assigned <REDACTED>/my-otel-demo-loadgenerator-867c949bd7-dcgtt to <REDACTED>
  Normal  Pulled     4m14s  kubelet            Container image "ghcr.io/open-telemetry/demo:1.12.0-loadgenerator" already present on machine
  Normal  Created    4m14s  kubelet            Created container loadgenerator
  Normal  Started    4m14s  kubelet            Started container loadgenerator
kubectl -n <REDACTED> get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                     AGE
my-otel-demo-adservice               ClusterIP   <REDACTED>   <none>        8080/TCP                                                                    169m
my-otel-demo-cartservice             ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-checkoutservice         ClusterIP   <REDACTED>     <none>        8080/TCP                                                                    169m
my-otel-demo-currencyservice         ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-emailservice            ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-flagd                   ClusterIP   <REDACTED>     <none>        8013/TCP,4000/TCP                                                           169m
my-otel-demo-frontend                ClusterIP   <REDACTED>   <none>        8080/TCP                                                                    169m
my-otel-demo-frontendproxy           ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-imageprovider           ClusterIP   <REDACTED>    <none>        8081/TCP                                                                    169m
my-otel-demo-kafka                   ClusterIP   <REDACTED>     <none>        9092/TCP,9093/TCP                                                           169m
my-otel-demo-loadgenerator           ClusterIP   <REDACTED>     <none>        8089/TCP                                                                    169m
my-otel-demo-otelcol                 ClusterIP   <REDACTED>   <none>        6831/UDP,14250/TCP,14268/TCP,8888/TCP,4317/TCP,4318/TCP,9464/TCP,9411/TCP   169m
my-otel-demo-paymentservice          ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-productcatalogservice   ClusterIP   <REDACTED>   <none>        8080/TCP                                                                    169m
my-otel-demo-quoteservice            ClusterIP   <REDACTED>   <none>        8080/TCP                                                                    169m
my-otel-demo-recommendationservice   ClusterIP   <REDACTED>     <none>        8080/TCP                                                                    169m
my-otel-demo-shippingservice         ClusterIP   <REDACTED>    <none>        8080/TCP                                                                    169m
my-otel-demo-valkey                  ClusterIP  <REDACTED>      <none>        6379/TCP                                                                    169m

curl from an on-cluster pod

It works:

$ kubectl -n <REDACTED> run mycurlpod --image=curlimages/curl -i --tty -- sh
~ $ curl http://my-otel-demo-frontendproxy.<REDACTED>.svc.cluster.local:8080
<!DOCTYPE html><html><head><meta charSet="utf-8"/><meta name.....
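
For comparison, the same URL could also be checked from inside the load generator pod itself. A sketch only: it assumes Python is available there (it should be, since the load generator image runs Locust), and a 5xx response will surface as an HTTPError rather than a printed status, which is still informative for a quick check:

$ kubectl -n <REDACTED> exec -it deploy/my-otel-demo-loadgenerator -- \
    python -c "import os, urllib.request; print(urllib.request.urlopen(os.environ['LOCUST_HOST']).status)"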
agardnerIT added the bug (Something isn't working) label Nov 28, 2024
@julianocosta89
Member

@agardnerIT are all pods running?

@agardnerIT
Author

I've turned some off (like OpenSearch, Jaeger and Grafana, and the accounting service because it keeps crashing), but other than that, yes they are.

components:
  grafana:
    enabled: false
  opensearch:
    enabled: false
  jaeger:
    enabled: false
  prometheus:
    enabled: false
  accountingService:
    enabled: false
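
(For reference, an override like the one above would typically be applied with something along these lines; the chart repo alias open-telemetry and the values file name my-values.yaml are assumptions:)

$ helm upgrade my-otel-demo open-telemetry/opentelemetry-demo \
    -n <REDACTED> -f my-values.yaml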
$ kubectl -n <REDACTED> get pods
NAME                                                        READY   STATUS      RESTARTS        AGE
my-otel-demo-adservice-6f4f57766f-nq8bj                     1/1     Running     0               6h27m
my-otel-demo-cartservice-6bfc654788-l64ft                   1/1     Running     0               6h27m
my-otel-demo-checkoutservice-5cdc66f5cc-kdckm               1/1     Running     0               6h27m
my-otel-demo-currencyservice-cfd644bbf-hcljk                1/1     Running     0               6h27m
my-otel-demo-emailservice-5955c8ddfd-mf9h2                  1/1     Running     0               6h27m
my-otel-demo-flagd-5b9f48f7b5-ft7rf                         2/2     Running     15              6h27m
my-otel-demo-frauddetectionservice-6895d4c998-5c5d4         1/1     Running     0               6h27m
my-otel-demo-frontend-65c644ffdf-ntvsq                      1/1     Running     0               3h50m
my-otel-demo-frontendproxy-fbb8588cf-c8s2m                  1/1     Running     0               3h50m
my-otel-demo-imageprovider-65dd7698c9-bkfk7                 1/1     Running     0               6h27m
my-otel-demo-kafka-6678c45b5c-6x588                         1/1     Running     1 (59m ago)     6h27m
my-otel-demo-loadgenerator-867c949bd7-b5bsn                 1/1     Running     0               3h24m
my-otel-demo-otelcol-7cb964855d-sd2p9                       1/1     Running     0               3h55m
my-otel-demo-paymentservice-69fb7df989-mnfwd                1/1     Running     0               6h27m
my-otel-demo-productcatalogservice-6f4b457bfd-82t4v         1/1     Running     0               3h52m
my-otel-demo-quoteservice-7d4f6f9666-2ppx2                  1/1     Running     0               6h27m
my-otel-demo-recommendationservice-9f4496497-72hs8          1/1     Running     0               6h27m
my-otel-demo-shippingservice-695d794b8d-4wrkc               1/1     Running     0               6h27m
my-otel-demo-valkey-6cf4dcccbf-nxbcv                        1/1     Running     0               6h27m
mycurlpod                                                   1/1     Running     1 (3h12m ago)   3h18m

@julianocosta89
Member

@puckpuck any ideas here?
I'm starting to suspect PR #1785.

Have you redeployed your running demo in K8s after that was merged?

@agardnerIT
Author

> Have you redeployed your running demo in K8s after that was merged?

Have I? I deployed this today.

@julianocosta89
Member

> Have you redeployed your running demo in K8s after that was merged? Have I?
>
> I deployed this today.

Sorry @agardnerIT, this question was for @puckpuck 😅.
He also has the demo running with Helm.

Locally, we have one issue with accountingservice, but yours is new.

@agardnerIT
Author

agardnerIT commented Nov 29, 2024

Side note: It would be good to document which services can be disabled and what effect (if any) that has on the core demo / use cases.

For example, I've turned off my accounting service - but I have no idea whether that matters to the "core system" or not.

Perhaps a new column / table on this page like:

| service | core service | technology |
| --- | --- | --- |
| accountingservice | | .NET |
| grafana | 🔴 | ... |

Note: Turning off core services may affect the demo system. Non-core services are "safe" to disable and cause minimal disruption to the core use cases for the demo.

@cbos

cbos commented Nov 29, 2024

I (probably) have the same problem, but with local Docker rather than Kubernetes.

When I open http://localhost:8080 I get this:
upstream connect error or disconnect/reset before headers. reset reason: connection timeout

http://localhost:8080/jaeger/ui works fine, and http://localhost:8080/feature/ works as well.
http://localhost:8080/grafana does not work, and neither does the normal shop.

Besides the standard setup, I have https://github.com/cbos/observability-toolkit coupled with the demo setup.
In the Loki logs I see this:

2024-11-29 14:06:28.036 | [2024-11-29T13:06:28.036Z] "POST /api/checkout HTTP/1.1" 503 UF upstream_reset_before_response_started{connection_timeout} - "-" 385 91 4999 - "-" "python-requests/2.31.0" "4f9ddd89-e990-9549-9c85-77562419236d" "frontend-proxy:8080" "52.140.110.196:8080" frontend - 172.22.0.26:8080 172.22.0.25:44032 - - 

The IP address it tries to connect to is weird: 52.140.110.196

This is what I have in the Envoy proxy (http://localhost:10000/clusters):

flagservice::observability_name::flagservice
flagservice::default_priority::max_connections::1024
flagservice::default_priority::max_pending_requests::1024
flagservice::default_priority::max_requests::1024
flagservice::default_priority::max_retries::3
flagservice::high_priority::max_connections::1024
flagservice::high_priority::max_pending_requests::1024
flagservice::high_priority::max_requests::1024
flagservice::high_priority::max_retries::3
flagservice::added_via_api::false
flagservice::172.22.0.4:8013::cx_active::0
flagservice::172.22.0.4:8013::cx_connect_fail::0
flagservice::172.22.0.4:8013::cx_total::0
flagservice::172.22.0.4:8013::rq_active::0
flagservice::172.22.0.4:8013::rq_error::0
flagservice::172.22.0.4:8013::rq_success::0
flagservice::172.22.0.4:8013::rq_timeout::0
flagservice::172.22.0.4:8013::rq_total::0
flagservice::172.22.0.4:8013::hostname::flagd
flagservice::172.22.0.4:8013::health_flags::healthy
flagservice::172.22.0.4:8013::weight::1
flagservice::172.22.0.4:8013::region::
flagservice::172.22.0.4:8013::zone::
flagservice::172.22.0.4:8013::sub_zone::
flagservice::172.22.0.4:8013::canary::false
flagservice::172.22.0.4:8013::priority::0
flagservice::172.22.0.4:8013::success_rate::-1
flagservice::172.22.0.4:8013::local_origin_success_rate::-1
frontend::observability_name::frontend
frontend::default_priority::max_connections::1024
frontend::default_priority::max_pending_requests::1024
frontend::default_priority::max_requests::1024
frontend::default_priority::max_retries::3
frontend::high_priority::max_connections::1024
frontend::high_priority::max_pending_requests::1024
frontend::high_priority::max_requests::1024
frontend::high_priority::max_retries::3
frontend::added_via_api::false
frontend::52.140.110.196:8080::cx_active::3
frontend::52.140.110.196:8080::cx_connect_fail::425
frontend::52.140.110.196:8080::cx_total::428
frontend::52.140.110.196:8080::rq_active::0
frontend::52.140.110.196:8080::rq_error::480
frontend::52.140.110.196:8080::rq_success::0
frontend::52.140.110.196:8080::rq_timeout::0
frontend::52.140.110.196:8080::rq_total::0
frontend::52.140.110.196:8080::hostname::frontend
frontend::52.140.110.196:8080::health_flags::healthy
frontend::52.140.110.196:8080::weight::1
frontend::52.140.110.196:8080::region::
frontend::52.140.110.196:8080::zone::
frontend::52.140.110.196:8080::sub_zone::
frontend::52.140.110.196:8080::canary::false
frontend::52.140.110.196:8080::priority::0
frontend::52.140.110.196:8080::success_rate::-1
frontend::52.140.110.196:8080::local_origin_success_rate::-1
opentelemetry_collector_grpc::observability_name::opentelemetry_collector_grpc
opentelemetry_collector_grpc::default_priority::max_connections::1024
opentelemetry_collector_grpc::default_priority::max_pending_requests::1024
opentelemetry_collector_grpc::default_priority::max_requests::1024
opentelemetry_collector_grpc::default_priority::max_retries::3
opentelemetry_collector_grpc::high_priority::max_connections::1024
opentelemetry_collector_grpc::high_priority::max_pending_requests::1024
opentelemetry_collector_grpc::high_priority::max_requests::1024
opentelemetry_collector_grpc::high_priority::max_retries::3
opentelemetry_collector_grpc::added_via_api::false
opentelemetry_collector_grpc::172.22.0.9:4317::cx_active::10
opentelemetry_collector_grpc::172.22.0.9:4317::cx_connect_fail::0
opentelemetry_collector_grpc::172.22.0.9:4317::cx_total::10
opentelemetry_collector_grpc::172.22.0.9:4317::rq_active::0
opentelemetry_collector_grpc::172.22.0.9:4317::rq_error::0
opentelemetry_collector_grpc::172.22.0.9:4317::rq_success::791
opentelemetry_collector_grpc::172.22.0.9:4317::rq_timeout::0
opentelemetry_collector_grpc::172.22.0.9:4317::rq_total::791
opentelemetry_collector_grpc::172.22.0.9:4317::hostname::otelcol
opentelemetry_collector_grpc::172.22.0.9:4317::health_flags::healthy
opentelemetry_collector_grpc::172.22.0.9:4317::weight::1
opentelemetry_collector_grpc::172.22.0.9:4317::region::
opentelemetry_collector_grpc::172.22.0.9:4317::zone::
opentelemetry_collector_grpc::172.22.0.9:4317::sub_zone::
opentelemetry_collector_grpc::172.22.0.9:4317::canary::false
opentelemetry_collector_grpc::172.22.0.9:4317::priority::0
opentelemetry_collector_grpc::172.22.0.9:4317::success_rate::-1
opentelemetry_collector_grpc::172.22.0.9:4317::local_origin_success_rate::-1
imageprovider::observability_name::imageprovider
imageprovider::default_priority::max_connections::1024
imageprovider::default_priority::max_pending_requests::1024
imageprovider::default_priority::max_requests::1024
imageprovider::default_priority::max_retries::3
imageprovider::high_priority::max_connections::1024
imageprovider::high_priority::max_pending_requests::1024
imageprovider::high_priority::max_requests::1024
imageprovider::high_priority::max_retries::3
imageprovider::added_via_api::false
imageprovider::172.22.0.12:8081::cx_active::0
imageprovider::172.22.0.12:8081::cx_connect_fail::0
imageprovider::172.22.0.12:8081::cx_total::0
imageprovider::172.22.0.12:8081::rq_active::0
imageprovider::172.22.0.12:8081::rq_error::0
imageprovider::172.22.0.12:8081::rq_success::0
imageprovider::172.22.0.12:8081::rq_timeout::0
imageprovider::172.22.0.12:8081::rq_total::0
imageprovider::172.22.0.12:8081::hostname::imageprovider
imageprovider::172.22.0.12:8081::health_flags::healthy
imageprovider::172.22.0.12:8081::weight::1
imageprovider::172.22.0.12:8081::region::
imageprovider::172.22.0.12:8081::zone::
imageprovider::172.22.0.12:8081::sub_zone::
imageprovider::172.22.0.12:8081::canary::false
imageprovider::172.22.0.12:8081::priority::0
imageprovider::172.22.0.12:8081::success_rate::-1
imageprovider::172.22.0.12:8081::local_origin_success_rate::-1
jaeger::observability_name::jaeger
jaeger::default_priority::max_connections::1024
jaeger::default_priority::max_pending_requests::1024
jaeger::default_priority::max_requests::1024
jaeger::default_priority::max_retries::3
jaeger::high_priority::max_connections::1024
jaeger::high_priority::max_pending_requests::1024
jaeger::high_priority::max_requests::1024
jaeger::high_priority::max_retries::3
jaeger::added_via_api::false
jaeger::172.22.0.3:16686::cx_active::2
jaeger::172.22.0.3:16686::cx_connect_fail::0
jaeger::172.22.0.3:16686::cx_total::2
jaeger::172.22.0.3:16686::rq_active::0
jaeger::172.22.0.3:16686::rq_error::0
jaeger::172.22.0.3:16686::rq_success::6
jaeger::172.22.0.3:16686::rq_timeout::0
jaeger::172.22.0.3:16686::rq_total::6
jaeger::172.22.0.3:16686::hostname::jaeger
jaeger::172.22.0.3:16686::health_flags::healthy
jaeger::172.22.0.3:16686::weight::1
jaeger::172.22.0.3:16686::region::
jaeger::172.22.0.3:16686::zone::
jaeger::172.22.0.3:16686::sub_zone::
jaeger::172.22.0.3:16686::canary::false
jaeger::172.22.0.3:16686::priority::0
jaeger::172.22.0.3:16686::success_rate::-1
jaeger::172.22.0.3:16686::local_origin_success_rate::-1
loadgen::observability_name::loadgen
loadgen::default_priority::max_connections::1024
loadgen::default_priority::max_pending_requests::1024
loadgen::default_priority::max_requests::1024
loadgen::default_priority::max_retries::3
loadgen::high_priority::max_connections::1024
loadgen::high_priority::max_pending_requests::1024
loadgen::high_priority::max_requests::1024
loadgen::high_priority::max_retries::3
loadgen::added_via_api::false
loadgen::20.61.97.129:8089::cx_active::0
loadgen::20.61.97.129:8089::cx_connect_fail::0
loadgen::20.61.97.129:8089::cx_total::0
loadgen::20.61.97.129:8089::rq_active::0
loadgen::20.61.97.129:8089::rq_error::0
loadgen::20.61.97.129:8089::rq_success::0
loadgen::20.61.97.129:8089::rq_timeout::0
loadgen::20.61.97.129:8089::rq_total::0
loadgen::20.61.97.129:8089::hostname::loadgenerator
loadgen::20.61.97.129:8089::health_flags::healthy
loadgen::20.61.97.129:8089::weight::1
loadgen::20.61.97.129:8089::region::
loadgen::20.61.97.129:8089::zone::
loadgen::20.61.97.129:8089::sub_zone::
loadgen::20.61.97.129:8089::canary::false
loadgen::20.61.97.129:8089::priority::0
loadgen::20.61.97.129:8089::success_rate::-1
loadgen::20.61.97.129:8089::local_origin_success_rate::-1
grafana::observability_name::grafana
grafana::default_priority::max_connections::1024
grafana::default_priority::max_pending_requests::1024
grafana::default_priority::max_requests::1024
grafana::default_priority::max_retries::3
grafana::high_priority::max_connections::1024
grafana::high_priority::max_pending_requests::1024
grafana::high_priority::max_requests::1024
grafana::high_priority::max_retries::3
grafana::added_via_api::false
grafana::20.189.171.131:3000::cx_active::0
grafana::20.189.171.131:3000::cx_connect_fail::0
grafana::20.189.171.131:3000::cx_total::0
grafana::20.189.171.131:3000::rq_active::0
grafana::20.189.171.131:3000::rq_error::0
grafana::20.189.171.131:3000::rq_success::0
grafana::20.189.171.131:3000::rq_timeout::0
grafana::20.189.171.131:3000::rq_total::0
grafana::20.189.171.131:3000::hostname::grafana
grafana::20.189.171.131:3000::health_flags::healthy
grafana::20.189.171.131:3000::weight::1
grafana::20.189.171.131:3000::region::
grafana::20.189.171.131:3000::zone::
grafana::20.189.171.131:3000::sub_zone::
grafana::20.189.171.131:3000::canary::false
grafana::20.189.171.131:3000::priority::0
grafana::20.189.171.131:3000::success_rate::-1
grafana::20.189.171.131:3000::local_origin_success_rate::-1
flagdui::observability_name::flagdui
flagdui::default_priority::max_connections::1024
flagdui::default_priority::max_pending_requests::1024
flagdui::default_priority::max_requests::1024
flagdui::default_priority::max_retries::3
flagdui::high_priority::max_connections::1024
flagdui::high_priority::max_pending_requests::1024
flagdui::high_priority::max_requests::1024
flagdui::high_priority::max_retries::3
flagdui::added_via_api::false
flagdui::172.22.0.10:4000::cx_active::0
flagdui::172.22.0.10:4000::cx_connect_fail::0
flagdui::172.22.0.10:4000::cx_total::8
flagdui::172.22.0.10:4000::rq_active::0
flagdui::172.22.0.10:4000::rq_error::0
flagdui::172.22.0.10:4000::rq_success::14
flagdui::172.22.0.10:4000::rq_timeout::0
flagdui::172.22.0.10:4000::rq_total::14
flagdui::172.22.0.10:4000::hostname::flagdui
flagdui::172.22.0.10:4000::health_flags::healthy
flagdui::172.22.0.10:4000::weight::1
flagdui::172.22.0.10:4000::region::
flagdui::172.22.0.10:4000::zone::
flagdui::172.22.0.10:4000::sub_zone::
flagdui::172.22.0.10:4000::canary::false
flagdui::172.22.0.10:4000::priority::0
flagdui::172.22.0.10:4000::success_rate::-1
flagdui::172.22.0.10:4000::local_origin_success_rate::-1
opentelemetry_collector_http::observability_name::opentelemetry_collector_http
opentelemetry_collector_http::default_priority::max_connections::1024
opentelemetry_collector_http::default_priority::max_pending_requests::1024
opentelemetry_collector_http::default_priority::max_requests::1024
opentelemetry_collector_http::default_priority::max_retries::3
opentelemetry_collector_http::high_priority::max_connections::1024
opentelemetry_collector_http::high_priority::max_pending_requests::1024
opentelemetry_collector_http::high_priority::max_requests::1024
opentelemetry_collector_http::high_priority::max_retries::3
opentelemetry_collector_http::added_via_api::false
opentelemetry_collector_http::172.22.0.9:4318::cx_active::0
opentelemetry_collector_http::172.22.0.9:4318::cx_connect_fail::0
opentelemetry_collector_http::172.22.0.9:4318::cx_total::0
opentelemetry_collector_http::172.22.0.9:4318::rq_active::0
opentelemetry_collector_http::172.22.0.9:4318::rq_error::0
opentelemetry_collector_http::172.22.0.9:4318::rq_success::0
opentelemetry_collector_http::172.22.0.9:4318::rq_timeout::0
opentelemetry_collector_http::172.22.0.9:4318::rq_total::0
opentelemetry_collector_http::172.22.0.9:4318::hostname::otelcol
opentelemetry_collector_http::172.22.0.9:4318::health_flags::healthy
opentelemetry_collector_http::172.22.0.9:4318::weight::1
opentelemetry_collector_http::172.22.0.9:4318::region::
opentelemetry_collector_http::172.22.0.9:4318::zone::
opentelemetry_collector_http::172.22.0.9:4318::sub_zone::
opentelemetry_collector_http::172.22.0.9:4318::canary::false
opentelemetry_collector_http::172.22.0.9:4318::priority::0
opentelemetry_collector_http::172.22.0.9:4318::success_rate::-1
opentelemetry_collector_http::172.22.0.9:4318::local_origin_success_rate::-1

Some of the hostnames resolve not to local addresses but to remote/public addresses.
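
A quick way to spot this without scanning the whole dump (assuming the Envoy admin interface is exposed on localhost:10000, as above) is to filter the clusters output for the resolved endpoint lines, for example:

$ curl -s http://localhost:10000/clusters | grep -E '^(frontend|loadgen|grafana)::[0-9]'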

@cbos

cbos commented Nov 29, 2024

Changing src/frontendproxy/Dockerfile fixed my problem: I changed the FROM line to envoyproxy/envoy:v1.31-latest.
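
In other words, roughly this change (only the relevant line shown; the real Dockerfile may pin the tag differently, e.g. via a build argument):

# src/frontendproxy/Dockerfile
FROM envoyproxy/envoy:v1.31-latest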

#1768 changed this version from 1.30 to 1.32. cc: @puckpuck
The Envoy proxy 1.32.0 release notes mention something about internal_address_config:
https://www.envoyproxy.io/docs/envoy/v1.32.0/version_history/v1.32/v1.32.0.html
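
I haven't verified this, but if that behavior change is the cause, the docs suggest it can be configured explicitly via internal_address_config on the HTTP connection manager. A minimal sketch; the CIDR range below is just the Docker network seen in the logs above and is an assumption:

# sketch: inside the http_connection_manager typed_config of the frontendproxy's Envoy config
internal_address_config:
  cidr_ranges:
    - address_prefix: 172.22.0.0
      prefix_len: 16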

I don't know all the details, but at least the Dockerfile change fixed my problem; my frontend (and the other services) now resolve to local addresses:

frontend::observability_name::frontend
frontend::default_priority::max_connections::1024
frontend::default_priority::max_pending_requests::1024
frontend::default_priority::max_requests::1024
frontend::default_priority::max_retries::3
frontend::high_priority::max_connections::1024
frontend::high_priority::max_pending_requests::1024
frontend::high_priority::max_requests::1024
frontend::high_priority::max_retries::3
frontend::added_via_api::false
frontend::172.22.0.24:8080::cx_active::21
frontend::172.22.0.24:8080::cx_connect_fail::0
frontend::172.22.0.24:8080::cx_total::61
frontend::172.22.0.24:8080::rq_active::0
frontend::172.22.0.24:8080::rq_error::0
frontend::172.22.0.24:8080::rq_success::1185
frontend::172.22.0.24:8080::rq_timeout::0
frontend::172.22.0.24:8080::rq_total::1185
frontend::172.22.0.24:8080::hostname::frontend
frontend::172.22.0.24:8080::health_flags::healthy
frontend::172.22.0.24:8080::weight::1
frontend::172.22.0.24:8080::region::
frontend::172.22.0.24:8080::zone::
frontend::172.22.0.24:8080::sub_zone::
frontend::172.22.0.24:8080::canary::false
frontend::172.22.0.24:8080::priority::0
frontend::172.22.0.24:8080::success_rate::-1
frontend::172.22.0.24:8080::local_origin_success_rate::-1

julianocosta89 linked a pull request (#1806) on Dec 1, 2024 that will close this issue
@julianocosta89
Member

@agardnerIT and @cbos, which distro of k8s are you running this on?

I tried it out today with a local minikube and everything started fine 🤔
