Tags · linkerd/linkerd2-proxy
v2.145.0

* Controller clients of components with more than one replica could fail to drive all connections to completion. This could result in timeouts showing up in logs, but would not have prevented proxies from communicating with controllers. linkerd/linkerd2#6146
* linkerd#992 made the `l5d-dst-override` header required for ingress-mode proxies. This behavior has been reverted so that requests without this header are forwarded to their original destination. (A sketch of setting this header follows this list.)
* OpenCensus trace spans for HTTP requests no longer include query parameters.
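For context, an ingress controller names the logical (Kubernetes service) destination of a request by injecting the `l5d-dst-override` header. A minimal sketch using the Rust `http` crate; the service name and port are hypothetical, and this is not the proxy's own code:

```rust
use http::Request;

fn main() {
    // Hypothetical destination: a `web` service in an `emojivoto` namespace.
    // An ingress-mode proxy routes by this header; as of v2.145.0, requests
    // without it fall back to their original destination instead of failing.
    let req = Request::builder()
        .uri("http://web.emojivoto.svc.cluster.local:8080/api/vote")
        .header("l5d-dst-override", "web.emojivoto.svc.cluster.local:8080")
        .body(())
        .unwrap();

    assert!(req.headers().contains_key("l5d-dst-override"));
}
```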
v2.144.0

This release adds an `l5d-client-id` header on mutually-authenticated inbound requests so that applications can discover the client's identity. This header is omitted on requests from unauthenticated connections.
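A minimal sketch of how an application might read this header, again using the Rust `http` crate. The function name and the example identity string are illustrative assumptions, not part of the proxy's API:

```rust
use http::Request;

/// Extracts the peer's mTLS identity from the `l5d-client-id` header, if
/// present. Returns `None` for requests that arrived over an
/// unauthenticated connection, where the proxy omits the header.
fn client_identity<B>(req: &Request<B>) -> Option<&str> {
    req.headers()
        .get("l5d-client-id")
        .and_then(|value| value.to_str().ok())
}

fn main() {
    // Illustrative identity string for a hypothetical workload.
    let id = "web.emojivoto.serviceaccount.identity.linkerd.cluster.local";
    let req = Request::builder()
        .uri("/orders")
        .header("l5d-client-id", id)
        .body(())
        .unwrap();

    assert_eq!(client_identity(&req), Some(id));
}
```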
v2.143.0

This release simplifies internals so that endpoint-forwarding logic is completely distinct from the handling of load-balanced services. The ingress-mode outbound proxy has been simplified to *require* the `l5d-dst-override` header and to fail non-HTTP communication. This ensures that the ingress-mode proxy does not unexpectedly revert to insecure communication. Finally, a recently introduced regression caused all proxy logs to be output with ANSI control characters; logs are now output in plaintext by default.
v2.141.1

This release cherry-picks several fixes back to the v2.141.0 proxy release (which shipped in Linkerd stable-2.10.1):

- Fixes a task leak that could be triggered when clients disconnect while a service is in failfast.
- Improves admin server protocol detection so that error messages are more descriptive about the underlying problem.
- Fixes panics found in fuzz testing. These panics were extremely unlikely to occur in practice and would require very specific configuration overrides to be triggered.
v2.142.0

This release primarily improves protocol detection error messages in the admin server so that logs clearly indicate when the client expected a different TLS server identity than that of the running proxy. A number of internal improvements have been made, especially eliminating some potential runtime panics detected by oss-fuzz. It is not expected that these panics could be triggered in typical cluster configurations.
v2.141.0

This release fixes a caching issue in the outbound proxy's "ingress mode" that could cause the incorrect client to be used for requests. This caching has been fixed so that clients cannot be incorrectly reused across logical destinations.
v2.140.0

This release fixes two issues:

1. The inbound proxy could break non-meshed TLS connections when the initial ClientHello message was larger than 512 bytes or when the entire message was not received in the first data packet of the connection. TLS detection has been fixed to ensure that the entire message is preserved in these cases. (A sketch of the record-length check follows this list.)
2. The admin server could emit warnings about HTTP detection failing in some innocuous situations, such as when the socket closes before a request is sent. These situations are now handled gracefully without logging warnings.
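To illustrate why a fixed-size or single-packet peek breaks detection: the first TLS record's header declares how many bytes the ClientHello occupies, so a detector must keep buffering until that many bytes have arrived. A minimal sketch in Rust, assuming plain byte parsing; this is not the proxy's actual implementation:

```rust
/// Returns the total length of the first TLS record if the buffered prefix
/// contains it completely, or `None` if more bytes must be read first.
fn complete_record_len(buf: &[u8]) -> Option<usize> {
    // TLS record header: content type (1 byte), version (2), body length (2).
    if buf.len() < 5 {
        return None;
    }
    let body_len = u16::from_be_bytes([buf[3], buf[4]]) as usize;
    let total = 5 + body_len;
    if buf.len() < total {
        None // Keep buffering: the ClientHello spans more data than we have.
    } else {
        Some(total)
    }
}

fn main() {
    // A handshake record header declaring a 700-byte ClientHello body.
    let mut hello = vec![0x16, 0x03, 0x01, 0x02, 0xBC]; // 0x02BC = 700
    hello.extend(std::iter::repeat(0u8).take(700));

    // Peeking only 512 bytes is not enough to see the whole message...
    assert_eq!(complete_record_len(&hello[..512]), None);
    // ...but once the full record is buffered, detection can proceed.
    assert_eq!(complete_record_len(&hello), Some(705));
}
```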
v2.139.0

This release includes several stability improvements, following initial feedback from the stable-2.10.0 release:

* The control plane proxies no longer emit warnings about the resolution stream ending. This error was innocuous.
* The proxy's logging infrastructure has been updated to avoid including client addresses in cached logging spans; client addresses are still included in warning logs. This should reduce memory pressure in high-connection environments.
* The proxy could infinitely retry failed requests to the destination controller when it returned a FailedPrecondition, indicating an unexpected cluster state. These errors are now handled gracefully.
v2.138.0

This release fixes an issue where non-HTTP streams could hang due to TLS buffering. Buffered data is now flushed more aggressively to prevent TCP streams from getting "stuck" in the proxy.
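As an illustration of the failure mode: when a proxy forwards opaque TCP through a buffered writer (such as a TLS stream), bytes can sit in the write buffer while the peer waits on them. A minimal sketch of the flush-after-write pattern using Tokio, an assumption for illustration rather than the proxy's actual forwarding code:

```rust
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

/// Forwards bytes from `src` to `dst`, flushing after every write so that
/// request/response-style TCP protocols are never left waiting on data
/// sitting in an intermediate (e.g. TLS) buffer.
async fn forward<R, W>(mut src: R, mut dst: W) -> std::io::Result<()>
where
    R: AsyncRead + Unpin,
    W: AsyncWrite + Unpin,
{
    let mut buf = [0u8; 8192];
    loop {
        let n = src.read(&mut buf).await?;
        if n == 0 {
            return dst.shutdown().await; // EOF: propagate the close.
        }
        dst.write_all(&buf[..n]).await?;
        dst.flush().await?; // Without this, a TLS writer may buffer indefinitely.
    }
}
```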
v2.137.0

This release fixes several stability issues identified in pre-release testing:

* linkerd/linkerd2#5871 reported that the outbound proxy would not tear down client connections when communicating with a defunct endpoint (especially when communicating with headless services). Now, dispatch timeouts trigger server-side connection teardown so that clients have an opportunity to re-resolve the destination. (A sketch of this timeout pattern follows this list.)
* The ingress-mode outbound proxy did not properly share load balancers for connections targeting multiple endpoints in the same logical service. Now, when the `l5d-dst-override` header is set, the ingress-mode proxy correctly reuses load balancers independently of the original destination address.
* The proxy's server could panic when `accept(2)` returned an error. This case is now handled gracefully and logged as a warning.
* The inbound proxy included a redundant cache that has been removed.
* Diagnostic logging has been improved, especially for TCP forwarding.
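A hypothetical sketch of the dispatch-timeout idea using Tokio: if dispatching a connection to an endpoint takes too long, the error propagates and the accepted socket is dropped, prompting the client to reconnect and re-resolve. The function name and shape are illustrative, not the proxy's API:

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Hypothetical sketch: bound how long dispatching a connection to an
/// endpoint may take. On timeout, the error propagates to the accept loop,
/// which drops the accepted socket so the client can reconnect and
/// re-resolve the destination.
async fn dispatch_with_timeout<F>(dispatch: F, wait: Duration) -> std::io::Result<()>
where
    F: std::future::Future<Output = std::io::Result<()>>,
{
    timeout(wait, dispatch).await.unwrap_or_else(|_elapsed| {
        Err(std::io::Error::new(
            std::io::ErrorKind::TimedOut,
            "dispatch timed out; tearing down client connection",
        ))
    })
}
```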