Running the Whisper STT model locally would remove the first step that currently depends on OpenAI's API. However, this specific model (whisper-large-v2) is computationally expensive, so multi-GPU support would help alleviate the issue.
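For reference, a minimal single-GPU sketch of the local step using the openai-whisper package; the audio file name is just a placeholder:

```python
# Minimal sketch: transcribe locally with openai-whisper instead of the API.
# Assumes a CUDA GPU; "meeting.wav" is a placeholder path.
import whisper

# large-v2 needs roughly 10 GB of VRAM on a single GPU.
model = whisper.load_model("large-v2", device="cuda")

result = model.transcribe("meeting.wav")
print(result["text"])
```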
Actually, I'm experimenting with WhisperX right now because my experience with whisper.cpp isn't great. It might have multi-GPU support; I'll check. Even if it doesn't, it should be fine, because I've heard it's incredibly fast.
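Roughly what I mean, going by the WhisperX README (paths, device, and batch size here are just placeholder assumptions):

```python
# Sketch of the same transcription step with WhisperX (batched faster-whisper backend).
# Assumes the whisperx package is installed; "meeting.wav" is a placeholder path.
import whisperx

device = "cuda"
model = whisperx.load_model("large-v2", device, compute_type="float16")

audio = whisperx.load_audio("meeting.wav")
result = model.transcribe(audio, batch_size=16)

# WhisperX returns timestamped segments rather than one text blob.
for segment in result["segments"]:
    print(segment["text"])
```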