Bindings forwarder implementation #476
Conversation
log.Error(err, "failed to upgrade connection", "errorCode", resp.ErrorCode, "errorMessage", resp.ErrorMessage)
return err
OOC: Does returning errors from the forwarder lead to CrashLoopBackoff or a NotReady state via its health checks?
Not currently. The challenge I see is that if a single port forwarder is returning errors but the rest are fine, what do we do? Since our services all use the same selectors for targeting the forwarder pods, marking them NotReady when just one forwarder has an issue would knock all port forwarders offline.
I would think that having 1 Pod marked as NotReady would tell the k8s Service to update the EndpointSlice, which would then have our client services stop talking to the Pods that are NotReady.
I think that this is what we want, no?
We chatted in Slack about this. This is a hard problem to solve because we don't want 1 BoundEndpoint's port to take the whole Pod offline because that would affect other ports in that Pod too.
Instead we talked about ways we could check the statuses of individual Pod IPs via headless Services, but came to the conclusion that, because we're relying on k8s' Service network routing, we may not be able to solve this exact case using the proxy forwarder implementation.
So we will come back to this question post launch.
* chore: Copy in proto code. This will likely live in ngrok-go long term, but we need to start consuming it now
* feat: Add additional printer columns
* feat: Add bindings connection handler to wire it all up
What
This adds a bindings forwarder implementation, so that when a client connects to the given port, we open a connection to the bindings ingress endpoint, pass along the header, and then join the net.Conns together (see the sketch below).
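At a high level, joining the two net.Conns can be sketched roughly like this. This is an illustrative sketch rather than the exact code in this PR; the `join` helper name is made up, and it assumes the dial to the bindings ingress endpoint and the header exchange have already succeeded:

```go
package forwarder

import (
	"io"
	"net"
)

// join pipes bytes in both directions between the client connection and the
// upstream connection to the bindings ingress endpoint, closing both sides
// once either direction finishes.
func join(client, upstream net.Conn) {
	defer client.Close()
	defer upstream.Close()

	done := make(chan struct{}, 2)
	go func() {
		io.Copy(upstream, client) // client -> bindings ingress endpoint
		done <- struct{}{}
	}()
	go func() {
		io.Copy(client, upstream) // bindings ingress endpoint -> client
		done <- struct{}{}
	}()
	<-done // returning runs the deferred Closes, which unblocks the other copy
}
```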
How
Adds additional printer columns to the KubernetesOperator CRD. This now gives additional columns when doing -o wide. Ex: kubectl get kubernetesoperators.ngrok.k8s.ngrok.com -A -o wide
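For context on how printer columns are typically wired up: they're declared as kubebuilder markers on the CRD's Go type and show up in kubectl output (priority=1 columns only with -o wide). The sketch below is an assumption about the shape, not this PR's exact markers; the column names, JSONPaths, and stub Spec/Status types are illustrative.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Spec/Status are stubbed here only so the sketch stands on its own.
type KubernetesOperatorSpec struct{}
type KubernetesOperatorStatus struct{}

// The printcolumn markers below are illustrative; the real column names and
// JSONPaths are whatever this PR added to the KubernetesOperator CRD.
// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.registrationStatus",priority=1
type KubernetesOperator struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   KubernetesOperatorSpec   `json:"spec,omitempty"`
	Status KubernetesOperatorStatus `json:"status,omitempty"`
}
```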
Breaking Changes
No