fix(redis): move more into kvstore #2789
base: master
Conversation
export const redisURL = process.env['NANGO_REDIS_URL'];
let kvStore: RedisKVStore | InMemoryKVStore | undefined;
export async function createKVStore() {
I would rename createKVStore to something like getKVStore or similar, since it can now return a memoized instance and doesn't always create a new one.
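A minimal sketch of the suggested rename, assuming the memoization the reviewer describes; the in-memory store shown here is a stand-in so the example is self-contained, not the PR's actual RedisKVStore:

```typescript
// Hypothetical sketch: a "get" prefix signals that callers may receive
// a shared, memoized instance rather than a freshly created one.
class InMemoryKVStore {
    private data = new Map<string, string>();
    async get(key: string): Promise<string | null> {
        return this.data.get(key) ?? null;
    }
    async set(key: string, value: string): Promise<void> {
        this.data.set(key, value);
    }
}

let kvStore: InMemoryKVStore | undefined;

// Creates the store on first call, then returns the cached instance.
export async function getKVStore(): Promise<InMemoryKVStore> {
    if (!kvStore) {
        kvStore = new InMemoryKVStore();
    }
    return kvStore;
}
```

Since every caller receives the same instance, state written through one reference is visible through another.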
this.client.on('error', (err) => {
    console.error(`Redis (kvstore) error: ${err}`);
});
is removing the logging on error on purpose?
let redisClient: RedisClientType | undefined;
export function createRedisClient({ url = redisURL, cache = true }: { url?: string; cache?: boolean } = {}): RedisClientType {
It is not super obvious what the cache parameter is actually doing unless you look at the body of the function.
What about having 2 functions instead: createRedisClient, which always creates a new client, and getRedisClient, which returns the memoized one if it exists or calls createRedisClient if not?
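The two-function split the reviewer proposes could look like the following sketch; FakeClient stands in for RedisClientType so the example is self-contained and does not depend on a real Redis connection:

```typescript
// Illustrative only: FakeClient replaces the real RedisClientType.
interface FakeClient {
    id: number;
}

let nextId = 0;
let memoizedClient: FakeClient | undefined;

// Always creates a fresh client; no hidden caching behavior.
export function createRedisClient(): FakeClient {
    return { id: nextId++ };
}

// Returns the shared client, creating it via createRedisClient on first use.
export function getRedisClient(): FakeClient {
    if (!memoizedClient) {
        memoizedClient = createRedisClient();
    }
    return memoizedClient;
}
```

The benefit is that each function's name fully describes its behavior, so callers no longer need a `cache` flag or a look at the function body.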
@@ -17,28 +16,27 @@ const enum MessageType {
export type WebSocketClientId = string;
export class Redis {
export class RedisPubSub {
better name 👍
public override async publish(channel: string, message: string) {
public async publish(channel: string, message: string) {
no override anymore?
@@ -1,3 +1,3 @@
import { ENVS, parseEnvs } from '@nangohq/utils';

export const envs = parseEnvs(ENVS.required({ ORCHESTRATOR_SERVICE_URL: true }));
export const envs = parseEnvs(ENVS.required({ ORCHESTRATOR_SERVICE_URL: true, NANGO_REDIS_URL: true }));
This is making Redis mandatory for the enterprise setup, no? Are we ok with that? Is it going to break existing enterprise customers? cc @khaliqgant
Ah, that's why I asked on Slack; I thought it was there also for enterprise.
As I said in the PR, I think our usage of PG locks can be problematic, so I'm not sure keeping them is wise. Hopefully we can set up a Redis for enterprise.
The locking would still work with an in-memory kvstore if there is a single instance, no?
They are not using Redis, fwiw.
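The question above — whether a kvstore-backed lock still coordinates work on a single instance — can be sketched with a set-if-absent acquire. All names here are hypothetical, not the PR's actual Locking code:

```typescript
// Hypothetical lock built on a kvstore set-if-absent primitive (like Redis SET NX).
// With an in-memory store this only coordinates within one process,
// which is exactly the single-instance case discussed above.
class InMemoryStore {
    private data = new Map<string, string>();
    // Returns true only if the key was not already present.
    async setIfAbsent(key: string, value: string): Promise<boolean> {
        if (this.data.has(key)) return false;
        this.data.set(key, value);
        return true;
    }
    async delete(key: string): Promise<void> {
        this.data.delete(key);
    }
}

class Lock {
    constructor(private store: InMemoryStore) {}
    // Non-blocking: returns false if someone else holds the lock.
    async tryAcquire(name: string, owner: string): Promise<boolean> {
        return this.store.setIfAbsent(`lock:${name}`, owner);
    }
    async release(name: string): Promise<void> {
        await this.store.delete(`lock:${name}`);
    }
}
```

Swapping InMemoryStore for a Redis-backed store with the same interface would extend the coordination across instances without changing the Lock class.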
Describe your changes
While investigating the db pool exhaustion, I realized we were using db locks for operations that can be user-generated. While that in itself is not an issue, it becomes one when many of them are waiting, since we have a limited number of connections.
Ideally we should be using something that can scale, like Redis. Fortunately, we already have everything in place, except the code still lives in shared. This is another step in that direction: no new logic is added in this PR, just code moving.
- Move FeatureFlags to the kvstore package
- Copy Locking to the kvstore package
- Allow reuse of the Redis connection pool: right now each KVStore instantiation creates a new connection, which could lead to issues later and also increases memory/CPU usage in Node
- Move runner flags into jobs so they can use kvstore: we don't want kvstore to be imported by shared