
Bindings forwarder implementation #476

Merged
merged 3 commits into ngrok:main from bindings-forwarder-implementation on Oct 30, 2024

Conversation

jonstacks
Collaborator

What

This adds a bindings forwarder implementation: when a client connects to the given port, we open a connection to the bindings ingress endpoint, pass along the header, and then join the net.Conns together.
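
For illustration only, a minimal sketch of the "join the net.Conns together" step (not this PR's actual forwarder code; the package and function names are placeholders):

```go
package forwarder

import (
	"io"
	"net"
)

// join is a minimal sketch (not this PR's code) of "joining the net.Conns
// together": copy bytes in both directions until one side closes, then tear
// both connections down.
func join(client, upstream net.Conn) {
	done := make(chan struct{}, 2)

	go func() {
		_, _ = io.Copy(upstream, client) // client -> bindings ingress endpoint
		done <- struct{}{}
	}()
	go func() {
		_, _ = io.Copy(client, upstream) // bindings ingress endpoint -> client
		done <- struct{}{}
	}()

	<-done // one direction finished; close both sides so the other copy unblocks
	_ = client.Close()
	_ = upstream.Close()
}
```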

How

  • In order to send the header information to mux, we pull in pg_agent so that we have the definitions for ConnRequest and ConnResponse. In the future, this will most likely be included as part of ngrok-go, IMO.
  • Added additional printer columns to KubernetesOperator, so -o wide now shows more information, e.g. kubectl get kubernetesoperators.ngrok.k8s.ngrok.com -A -o wide
  • Swapped out the example handler for one that dials the ingress endpoint using the cert from the KubernetesOperator, exchanges the binding header, and then joins the connections together (see the sketch after this list).
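
As a rough, hedged sketch of the handler in the last bullet: dial the bindings ingress endpoint over TLS with the operator's client cert, exchange the binding header via ConnRequest/ConnResponse, and hand both connections to a join like the one sketched above. Everything beyond the ConnRequest/ConnResponse type names and the ErrorCode/ErrorMessage fields (the Header field, the JSON placeholder encoding, the function name) is an assumption, not the PR's actual code:

```go
package forwarder

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net"
)

// ConnRequest/ConnResponse stand in for the types pulled in with the proto
// code. Only ErrorCode/ErrorMessage are mentioned in this PR; the Header
// field and the string types are assumptions for the sketch.
type ConnRequest struct{ Header string }

type ConnResponse struct {
	ErrorCode    string
	ErrorMessage string
}

// handleClientConn sketches the flow described above: dial the ingress
// endpoint with the operator's client cert, exchange the binding header, and
// join the connections. It is not the PR's implementation.
func handleClientConn(client net.Conn, ingressAddr, header string, cert tls.Certificate) error {
	upstream, err := tls.Dial("tcp", ingressAddr, &tls.Config{
		Certificates: []tls.Certificate{cert},
	})
	if err != nil {
		return fmt.Errorf("dial bindings ingress endpoint: %w", err)
	}

	// Placeholder wire format (JSON) so the sketch is self-contained; the real
	// encoding comes from the ConnRequest/ConnResponse proto definitions.
	if err := json.NewEncoder(upstream).Encode(ConnRequest{Header: header}); err != nil {
		upstream.Close()
		return err
	}
	var resp ConnResponse
	if err := json.NewDecoder(upstream).Decode(&resp); err != nil {
		upstream.Close()
		return err
	}
	if resp.ErrorCode != "" {
		upstream.Close()
		return fmt.Errorf("failed to upgrade connection: %s (%s)", resp.ErrorCode, resp.ErrorMessage)
	}

	join(client, upstream) // see the earlier join sketch
	return nil
}
```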

Breaking Changes

No

@jonstacks jonstacks self-assigned this Oct 29, 2024
@jonstacks jonstacks requested a review from a team as a code owner October 29, 2024 22:01
@github-actions github-actions bot added the area/controller and area/helm-chart labels on Oct 29, 2024
cmd/api/main.go (review thread resolved)
internal/controller/bindings/forwarder_controller.go (review thread outdated, resolved)
Comment on lines +217 to +216
log.Error(err, "failed to upgrade connection", "errorCode", resp.ErrorCode, "errorMessage", resp.ErrorMessage)
return err

Contributor

OOC: Does returning errors from the forwarder lead to CrashLoopBackOff or a NotReady state via its health checks?

Collaborator Author

Not currently. The challenge I see is that if a single port forwarder is returning errors, but the rest are fine, what do we do? Since our services all use the same selectors for targeting the forwarder pods, marking the pods NotReady because just one forwarder has an issue would knock all of the port forwarders offline.

Contributor

I would think that having 1 Pod marked as NotReady would tell the k8s Service to update its EndpointSlice, which would then have our client services stop talking to the Pods that are NotReady.

I think that this is what we want, no?

Contributor

We chatted in Slack about this. It's a hard problem to solve: we don't want one BoundEndpoint's port to take the whole Pod offline, since that would also affect the other ports served by that Pod.

Instead we talked about ways we could check the status of individual Pod IPs via headless Services, but came to the conclusion that, because we're relying on k8s' Service network routing, we may not be able to solve this exact case with the proxy forwarder implementation.

So we will come back to this question post-launch.
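
For context on the headless-Service idea above, a hedged sketch of what per-Pod checks could look like; the DNS name, port, and probe are hypothetical, and this is not something the PR implements:

```go
package forwarder

import (
	"context"
	"fmt"
	"net"
	"time"
)

// checkForwarderPods sketches the headless-Service idea discussed above (and
// deferred): a headless Service resolves to the individual Pod IPs, so each
// forwarder Pod can be probed directly rather than through the Service's
// load-balanced routing. The DNS name and port are hypothetical.
func checkForwarderPods(ctx context.Context) error {
	ips, err := net.DefaultResolver.LookupHost(ctx, "forwarder-headless.ngrok-operator.svc.cluster.local")
	if err != nil {
		return err
	}
	for _, ip := range ips {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "9090"), 2*time.Second)
		if err != nil {
			fmt.Printf("pod %s not reachable: %v\n", ip, err)
			continue
		}
		_ = conn.Close()
		fmt.Printf("pod %s reachable\n", ip)
	}
	return nil
}
```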

internal/controller/bindings/forwarder_controller.go (review thread outdated, resolved)
@jonstacks jonstacks force-pushed the bindings-forwarder-implementation branch from 18297ad to a678688 on October 30, 2024 18:52
@jonstacks jonstacks force-pushed the bindings-forwarder-implementation branch from a678688 to f4addac on October 30, 2024 18:59
@jonstacks jonstacks merged commit 867460e into ngrok:main Oct 30, 2024
8 checks passed
@jonstacks jonstacks deleted the bindings-forwarder-implementation branch October 30, 2024 19:25
hjkatz pushed a commit that referenced this pull request Nov 4, 2024
* chore: Copy in proto code

This will likely live in ngrok-go long term, but we need to start consuming it now

* feat: Add additional printer columns

* feat: Add bindings connection handler to wire it all up