Error while running locally using run-cli.sh #45
Hi @ved-asole, can you give some more details, like what you did to run this? Did you just check out the code and run run-cli.sh?
As per the documentation, I used the commands below:
Thanks, I recently added support for multiple JVMs, which messed this up. I'll fix it. In the meantime, use the code from the latest release tag.
Thanks for the update. I will use the older v0.2.1 version for now.
@tjake I'm not sure where I can get the older v0.2.1 version. Could you please suggest how I can get it?
Getting the below error while running v0.2.1. Command:
You need to build the native extensions, which requires make and gcc. I realize this is all a PITA. I'm going to add release artifacts for all the platforms when I get a chance.
The easiest way to "just run a model" is with the langchain4j integration. Try this: https://github.com/langchain4j/langchain4j-examples/tree/main/jlama-examples
I want to run the models locally. Will that work with the langchain4j integration?
Yes, it uses the Maven artifacts, which are pre-built.
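For anyone else landing here, a minimal sketch of what the linked jlama-examples do is below. The Maven coordinates, model id, and exact builder/method names are assumptions based on the langchain4j-jlama module and may differ between versions; check the linked examples repo for the current API.

```java
// Minimal sketch: run a model locally through langchain4j's Jlama integration.
// Assumes the langchain4j-jlama artifact is on the classpath, e.g.
//   dev.langchain4j:langchain4j-jlama:<version>   (assumed coordinates)
// Class and method names below are assumptions; verify against the linked examples.

import dev.langchain4j.model.jlama.JlamaChatModel;

public class JlamaLocalExample {
    public static void main(String[] args) {
        // The model is downloaded from Hugging Face and cached locally on first use.
        // "tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4" is a placeholder model id.
        JlamaChatModel model = JlamaChatModel.builder()
                .modelName("tjake/TinyLlama-1.1B-Chat-v1.0-Jlama-Q4")
                .build();

        // Older langchain4j versions expose generate(String); newer ones use chat(String).
        String answer = model.generate("Say hello in one short sentence.");
        System.out.println(answer);
    }
}
```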
I got the below error while using it:
Which model are you loading, and what kind of CPU are you loading it on?
Looking at the line, it seems to be a non-ARM AVX CPU, which I can support but which currently needs a code change. I can fix that.
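If it helps with diagnosing this kind of report, a quick way to see what SIMD width the JVM reports for the CPU is to print the preferred species from the Panama Vector API. This is a generic diagnostic sketch, not project code; it assumes a JDK with the jdk.incubator.vector module enabled.

```java
// Diagnostic sketch: print the preferred SIMD vector width reported by the JVM.
// Compile and run with: --add-modules jdk.incubator.vector
// Roughly: 128 bits ~ ARM NEON/SSE, 256 bits ~ AVX2, 512 bits ~ AVX-512.

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorWidthCheck {
    public static void main(String[] args) {
        VectorSpecies<Float> preferred = FloatVector.SPECIES_PREFERRED;
        System.out.println("Preferred float species: " + preferred);
        System.out.println("Vector bit size: " + preferred.vectorBitSize());
        System.out.println("os.arch: " + System.getProperty("os.arch"));
    }
}
```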