diff --git a/README.md b/README.md
index 73c020b7..60a477b7 100644
--- a/README.md
+++ b/README.md
@@ -6,13 +6,13 @@
 RWKV twitter: https://twitter.com/BlinkDL_AI (latest news)
 
 RWKV discord: https://discord.gg/bDSBUMeFpc (9k+ members)
 
-RWKV-7 "Goose" is the best **linear-time & constant-space (no kv-cache) & attention-free** architecture on this planet at this moment, suitable for both LLM and multimodal applications, and more (check [rwkv.com](https://rwkv.com) for examples).
+RWKV-7 "Goose" is the best **linear-time & constant-space (no kv-cache) & attention-free** architecture on this planet at this moment, suitable for both LLM and multimodal applications, and more (see [rwkv.com](https://rwkv.com)).
 
-It is a [meta-in-context learner](https://raw.githubusercontent.com/BlinkDL/RWKV-LM/main/RWKV-v7.png), test-time-training its state on the context via in-context gradient descent at every token, and 100% RNN.
+RWKV-7 is a [meta-in-context learner](https://raw.githubusercontent.com/BlinkDL/RWKV-LM/main/RWKV-v7.png), test-time-training its state on the context via in-context gradient descent at every token, while remaining 100% RNN.
 
 RWKV is a [Linux Foundation AI project](https://lfaidata.foundation/projects/rwkv/), so totally free. RWKV runtime is [already in Windows & Office](https://x.com/BlinkDL_AI/status/1831012419508019550).
 
-You are welcome to ask the RWKV community (such as [RWKV discord](https://discord.gg/bDSBUMeFpc) for advice on upgrading your attention-based models to rwkv7-based models :)
+You are welcome to ask the RWKV community (such as the [RWKV discord](https://discord.gg/bDSBUMeFpc)) for advice on upgrading your attention/SSM-based models to RWKV-7-based models :)
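The "constant-space (no kv-cache)" and "in-context gradient descent at every token" claims in the new text can be made concrete with a toy sketch. The NumPy snippet below is a hypothetical illustration, not the actual RWKV-7 kernel: it keeps a fixed-size fast-weight state `S` and updates it with one gradient-descent step per token on the loss 0.5·||S k − v||² (the classic delta rule, of which RWKV-7's real state update is a generalization), so memory stays constant however long the context is. The names `d`, `k`, `v`, and `lr` are illustrative, not from the RWKV codebase.

```python
import numpy as np

# Toy sketch only (hypothetical; not the real RWKV-7 kernels). The recurrent
# state is a fixed-size fast-weight matrix S, updated by one gradient-descent
# step per token on the loss 0.5 * ||S @ k - v||^2 (the classic delta rule).
# Since S has constant shape, there is no kv-cache and per-token cost is O(1).

d = 8                           # hypothetical head dimension
rng = np.random.default_rng(0)
S = np.zeros((d, d))            # state size is independent of context length

for _ in range(16):             # process 16 "tokens"
    k = rng.standard_normal(d)  # key derived from the current token
    v = rng.standard_normal(d)  # value the state should associate with k
    lr = 0.1                    # step size of the per-token in-context update
    # grad of 0.5 * ||S @ k - v||^2 w.r.t. S is outer(S @ k - v, k):
    S -= lr * np.outer(S @ k - v, k)

print(S.shape)                  # (8, 8): the state never grows with context
print(S @ k)                    # read the state with the most recent key
```

An attention layer would instead store every past `k` and `v` (the kv-cache), growing linearly with context; here the whole context is compressed into the constant-size state `S`, which is what makes a 100% RNN formulation possible.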