Use local model file but have an error "Not allowed to load local resource: file:///D:/model/gemma-2b-it-gpu-int4.bin" #5597

Open
akau16 opened this issue Aug 30, 2024 · 2 comments
Labels
platform:javascript (MediaPipe Javascript issues), stat:awaiting googler (Waiting for Google Engineer's Response), task:LLM inference (Issues related to MediaPipe LLM Inference Gen AI setup), type:support (General questions)

Comments

akau16 commented Aug 30, 2024

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

Firebase Hosting

MediaPipe Tasks SDK version

No response

Task name (e.g. Image classification, Gesture recognition etc.)

/llm_inference /js/

Programming Language and version (e.g. C++, Python, Java)

HTML, JavaScript

Describe the actual behavior

Cannot access the local model file.

Describe the expected behaviour

Can access the model file.

Standalone code/steps you may have used to try to get what you need

When I run llm_inference on localhost, it can access a model file such as "gemma-2b-it-gpu-int4.bin" located in the project folder. But when I run llm_inference on Firebase Hosting, it cannot access the model file on my device and shows "Not allowed to load local resource: file:///D:/model/gemma-2b-it-gpu-int4.bin".
I looked this up and found: 'In standard HTML and JavaScript, it is not possible to directly specify to read files with a specific path on the local machine. This is due to browser security restrictions designed to protect user privacy and prevent malicious websites from automatically accessing the local file system'.

However, when I tried your sample in MediaPipe Studio (https://mediapipe-studio.webapps.google.com/studio/demo/llm_inference), I could click 'Choose a model file', select a model file on my device, and it ran fine. I would like to ask how it does that? Thank you!

Other info / Complete Logs

No response
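
[Editor's note] Regarding the question above about how MediaPipe Studio can use a model file from the user's device: a hosted page cannot read an arbitrary file:/// path, but it can read a file the user explicitly picks through a file input, and a blob URL created from that file can then be passed as the model path. The sketch below illustrates that pattern under stated assumptions: the element id, the CDN wasm path, the prompt, and the use of a bundler or import map to resolve '@mediapipe/tasks-genai' are all illustrative, not values taken from this issue.

    <!-- Hypothetical file picker; the id "model-file" is an assumption for this sketch. -->
    <input type="file" id="model-file" accept=".bin">

    <script type="module">
      // Assumes a bundler or import map resolves this package name.
      import {FilesetResolver, LlmInference} from '@mediapipe/tasks-genai';

      document.getElementById('model-file').addEventListener('change', async (event) => {
        const file = event.target.files[0];
        if (!file) return;

        // A blob: URL refers to the user-selected file, so the browser permits the
        // request that a file:///D:/... path would block.
        const modelUrl = URL.createObjectURL(file);

        // The wasm location is illustrative; use whichever path the project already loads from.
        const genaiFileset = await FilesetResolver.forGenAiTasks(
            'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm');

        const llmInference = await LlmInference.createFromOptions(genaiFileset, {
          baseOptions: {modelAssetPath: modelUrl},
        });

        console.log(await llmInference.generateResponse('Hello'));
      });
    </script>

With a file input (or drag-and-drop), the page never sees the D:/ path itself; it only receives the contents of the file the user chose, which is why the same model works in the Studio demo but not via a hard-coded local path.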

@kuaashish added the platform:javascript, task:LLM inference, and type:support labels Aug 30, 2024
kuaashish (Collaborator) commented Aug 30, 2024

Hi @akau16,

Could you please review the Stack Overflow thread https://stackoverflow.com/questions/5074680/chrome-safari-errornot-allowed-to-load-local-resource-file-d-css-style and try the suggested solution? Let us know if you still need further assistance.

Thank you!!

@kuaashish added the stat:awaiting response label Aug 30, 2024

akau16 commented Aug 31, 2024

Hi @kuaashish,

Thanks for your kind reply. I think my problem is a little different from that one. Below is my code:

    LlmInference.createFromOptions(genaiFileset, {
      baseOptions: {modelAssetPath: 'D:/model/gemma-2b-it-gpu-int4.bin'},
      .....

and it shows the error "Not allowed to load local resource: file:///D:/model/gemma-2b-it-gpu-int4.bin".
How can I resolve this problem?
Thank you!
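
[Editor's note] For comparison, here is a hedged sketch of what the options from the snippet above could look like when the model is fetched over HTTP instead of from a local Windows path. The '/models/' location is an illustrative assumption and would need to be wherever the .bin file is actually deployed, for example alongside the site on Firebase Hosting; an object URL from a user-selected file, as sketched under the original report, would also work.

    // Sketch: modelAssetPath must be a URL the page is allowed to fetch,
    // such as a path served by the same site or a full https:// URL,
    // not a local file-system path like D:/model/...
    const llmInference = await LlmInference.createFromOptions(genaiFileset, {
      baseOptions: {
        modelAssetPath: '/models/gemma-2b-it-gpu-int4.bin',  // assumed deploy location
      },
      // ...remaining options as in the snippet above
    });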

@google-ml-butler bot removed the stat:awaiting response label Aug 31, 2024
@kuaashish added the stat:awaiting googler label Sep 24, 2024
@schmidt-sebastian removed their assignment Nov 8, 2024