Tags: acarroll/linkerd2-proxy
v2.93.0 This release introduces a per-endpoint authority-override feature. This is driven by the destination controller and is needed to support multi-cluster gateways.
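As a rough illustration of the idea (not the proxy's actual API), overriding an endpoint's authority amounts to rewriting the request URI's authority before the request is forwarded. The helper below is a hypothetical sketch using the `http` crate; the function name and the HTTP-scheme fallback are assumptions for illustration.

```rust
// Hypothetical sketch: apply a per-endpoint authority override to an
// outbound request by rewriting the URI's authority component.
use http::{
    uri::{Authority, Scheme, Uri},
    Request,
};

fn override_authority<B>(req: &mut Request<B>, authority: &Authority) {
    let mut parts = req.uri().clone().into_parts();
    parts.authority = Some(authority.clone());
    // Absolute-form URIs require a scheme; default to HTTP if unset
    // (an assumption made purely for this sketch).
    if parts.scheme.is_none() {
        parts.scheme = Some(Scheme::HTTP);
    }
    if let Ok(uri) = Uri::from_parts(parts) {
        *req.uri_mut() = uri;
    }
}
```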
v2.92.0 This release includes a new protocol detection timeout, which prevents clients from consuming resources indefinitely when they do not send any data. Additionally, the proxy's admin server now exposes a `/live` endpoint for liveness checks, and a feature has been added to enrich tracing metadata with labels and values loaded from a file.
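A minimal sketch of what a protocol-detection timeout looks like, assuming a Tokio TCP stream; the function name and the fall-back behavior are illustrative, not the proxy's implementation:

```rust
// If the client sends no bytes within the window, stop waiting instead of
// holding the connection (and its resources) open indefinitely.
use std::time::Duration;
use tokio::{io::AsyncReadExt, net::TcpStream, time};

async fn detect_with_timeout(
    mut stream: TcpStream,
    window: Duration,
) -> std::io::Result<Option<Vec<u8>>> {
    let mut buf = vec![0u8; 1024];
    match time::timeout(window, stream.read(&mut buf)).await {
        // Data arrived in time: hand the prefix to protocol detection.
        Ok(Ok(n)) => Ok(Some(buf[..n].to_vec())),
        Ok(Err(e)) => Err(e),
        // No data before the deadline: fall back (e.g., treat as opaque TCP).
        Err(_elapsed) => Ok(None),
    }
}
```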
v2.91.0 This release fixes a bug introduced in v2.89.0 that could cause spurious timeouts for inbound proxies that handle HTTP requests for many distinct domains.
v2.90.0 This release restores the `route_actual_response_total` metric, which is needed for `linkerd stat -o wide`.
v2.89.0 This release builds on changes in the prior release to ensure that balancers process updates eagerly. Cache capacity limitations have been removed, and services now fail eagerly rather than making all requests wait for the timeout to expire. Also, a bug was fixed in the way the `LINKERD2_PROXY_LOG` environment variable is parsed.
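For the log-parsing piece, the sketch below shows the general shape of parsing a directive string such as `LINKERD2_PROXY_LOG` (e.g. `warn,linkerd=info`), using `tracing_subscriber`'s `EnvFilter` as a stand-in for the proxy's own parser; the fall-back behavior is an assumption made for illustration.

```rust
// Illustrative only: parse log directives from the environment and fall
// back to a default filter when the directives are invalid.
use tracing_subscriber::EnvFilter;

fn init_logging() {
    let directives =
        std::env::var("LINKERD2_PROXY_LOG").unwrap_or_else(|_| "warn".to_string());
    // Invalid directives are the kind of input the parsing fix addresses;
    // here we simply fall back to a default filter.
    let filter = EnvFilter::try_new(directives)
        .unwrap_or_else(|_| EnvFilter::new("warn"));
    tracing_subscriber::fmt().with_env_filter(filter).init();
}
```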
v2.88.0 This release includes a significant internal change to how backpressure is handled in the proxy. These changes fix a class of bugs related to discovery staleness, and "dispatch timeout" errors should now be rarer.
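As a rough illustration of the backpressure idea (making no claims about the proxy's internals), a bounded channel makes a producer wait for a slow consumer instead of queuing updates without limit, which is how results can otherwise go stale behind an unbounded buffer:

```rust
// Minimal backpressure demonstration with a bounded Tokio channel.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Capacity 8: once 8 updates are queued, `send` awaits until the
    // consumer catches up, propagating backpressure to the producer.
    let (tx, mut rx) = mpsc::channel::<u32>(8);

    tokio::spawn(async move {
        for update in 0..32 {
            // Suspends here whenever the buffer is full.
            tx.send(update).await.expect("receiver dropped");
        }
    });

    while let Some(update) = rx.recv().await {
        println!("processed update {update}");
    }
}
```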
v2.87.0 This release comprises many internal changes that are not expected to have any user-facing impact. There is one user-facing change: the inbound router's default capacity has been increased from 100 to 10K to accommodate environments that have a high cardinality of virtual hosts served by a single pod.
v2.86.0 This release includes the results from continued profiling & performance analysis. In addition to modifying internals to prevent unwarranted memory growth, we've introduced new metrics to aid in debugging and diagnostics: a new `request_errors_total` metric exposes the number of requests that receive synthesized responses due to proxy errors; and a suite of `stack_*` metrics expose proxy internals that can help us identify unexpected behavior.
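For illustration only, a counter like `request_errors_total` can be thought of as being labeled by error kind and incremented whenever the proxy synthesizes an error response; the sketch below uses the `prometheus` crate as a stand-in rather than the proxy's own metrics machinery, and the helper names are hypothetical.

```rust
// Illustrative counter registration and increment, not the proxy's code.
use prometheus::{register_int_counter_vec, IntCounterVec};

fn request_errors() -> IntCounterVec {
    register_int_counter_vec!(
        "request_errors_total",
        "Requests that received synthesized responses due to proxy errors",
        &["kind"]
    )
    .expect("metric registration")
}

fn record_error(errors: &IntCounterVec, kind: &str) {
    // Bump the counter for the given error kind, e.g. "dispatch timeout".
    errors.with_label_values(&[kind]).inc();
}
```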
lock: Generalize to protect a guarded value (linkerd#431) We used Tokio's Lock in the router's cache implementation, though we know it can leak under contention. This change generalizes the Lock to return a guarded value (like Tokio's lock) and simplifies the Lock's state management: the Lock may no longer hold a value, nor can it fail. The `lock::Service` implementation now holds a `Result<Service, ServiceError>` so that lock services may still broadcast the inner service's failure.
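A simplified sketch of that design, assuming `tokio::sync::Mutex` as the guard-returning lock: the shared slot holds a `Result`, so once the inner service fails, every handle observes the same broadcast error. The type and method names are illustrative, not the proxy's `lock::Service`.

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// The slot holds Ok(service) while the service is healthy, or Err(error)
// after a failure so that all handles observe the same broadcast error.
struct SharedService<S, E> {
    inner: Arc<Mutex<Result<S, E>>>,
}

impl<S, E> Clone for SharedService<S, E> {
    fn clone(&self) -> Self {
        Self { inner: self.inner.clone() }
    }
}

impl<S, E> SharedService<S, E> {
    fn new(service: S) -> Self {
        Self { inner: Arc::new(Mutex::new(Ok(service))) }
    }

    // Acquiring the lock yields a guard over the Result, so callers either
    // use the live service or see the recorded failure.
    async fn with<T>(&self, f: impl FnOnce(&mut Result<S, E>) -> T) -> T {
        let mut guard = self.inner.lock().await;
        f(&mut *guard)
    }
}
```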
v2.85.0 This release fixes a bug in the proxy's logging subsystem that could cause the proxy to consume memory until the process is OOMKilled, especially when the proxy was configured to log diagnostic information. The proxy also now properly emits `grpc-status` headers when signaling proxy errors to gRPC clients. This release also upgrades the proxy's Rust version, updates the `http` crate dependency to address RUSTSEC-2019-0033 and RUSTSEC-2019-0034, and patches the `prost` crate dependency to address RUSTSEC-2020-0002.
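As a hedged sketch of the gRPC error-signaling convention (not the proxy's code): gRPC clients read the status from a `grpc-status` header or trailer on an HTTP 200 response rather than from the HTTP status code, so a synthesized error response looks roughly like the following, where status code 14 is UNAVAILABLE.

```rust
// Illustrative construction of a synthesized gRPC error response.
use http::{header::HeaderValue, Response, StatusCode};

fn grpc_unavailable<B: Default>() -> Response<B> {
    let mut rsp = Response::new(B::default());
    *rsp.status_mut() = StatusCode::OK; // gRPC errors still use HTTP 200
    rsp.headers_mut()
        .insert("grpc-status", HeaderValue::from_static("14"));
    rsp.headers_mut()
        .insert("content-type", HeaderValue::from_static("application/grpc"));
    rsp
}
```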