This project is a website that demonstrates emerging Web AI demos which can run on Intel AI PCs. The AI tasks are executed on the client side through WebAssembly and WebGPU.
- Node.js 16+
- npm
Install the necessary dependencies:

```shell
npm install
```
This website itself is a static page with no dependency on any server APIs. However, the models needed by each AI task must be downloaded to the user's browser.
To better adapt to various deployment environments, we provide two modes for model downloading:
- Remote mode (download models from Hugging Face)

  In this mode, the browsers of the end users (who access this web page) fetch model files from Hugging Face. This mode is useful when the end users can access Hugging Face easily, or when the hosting server can't store large files.
- Hosting mode (download models from the hosting server)

  In this mode, the required models are downloaded to the hosting server in advance, and the end users' browsers fetch model files from the server hosting this web page. This mode is useful when the end users don't have access to Hugging Face, or their network is too slow to download large files from it.

  In Hosting mode, only the models are deployed on the hosting server; the end users' browsers still need to fetch some other resources (e.g. some wasm files) through a CDN. This means Hosting mode is not suitable for purely offline environments.
Note: LLM-Gemma uses the gemma-2b-it-gpu-int4 model, which must be downloaded and loaded manually before inference. See this for more details.
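The two modes differ only in where the browser fetches model files from. A minimal sketch of how a model URL could be resolved per mode (the function name, base URL, and paths here are hypothetical illustrations, not the project's actual code):

```javascript
// Hypothetical illustration of the two download modes; not the project's code.
const HF_BASE = "https://huggingface.co"; // Remote mode origin (assumption)

function resolveModelUrl(mode, modelPath) {
  if (mode === "remote") {
    // Remote mode: the end user's browser fetches model files from Hugging Face
    return `${HF_BASE}/${modelPath}`;
  }
  // Hosting mode: models were placed on the hosting server at build time,
  // so the browser fetches them from the page's own origin
  return `/models/${modelPath}`;
}

console.log(resolveModelUrl("remote", "briaai/RMBG-1.4/resolve/main/model.onnx"));
console.log(resolveModelUrl("hosting", "briaai/RMBG-1.4/resolve/main/model.onnx"));
```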
The build phase bundles the source code. If using Hosting mode, it also downloads the required models and puts them into the correct directories.

```shell
# Remote mode
npm run prod:use-remote-models

# Hosting mode
npm run prod
```

In Hosting mode, this may take a long time since model files (~3 GB) are downloaded.
HTTPS is required since some samples use WebGPU, and WebGPU is only available in a secure context. We provide an npm script to generate an SSL certificate based on openssl. Install openssl if you don't have it on your system.
- Linux

  ```shell
  # install openssl
  sudo apt-get install libssl-dev
  # generate `cert.pem` and `key.pem`
  npm run generate-ssl
  ```
- Windows

  openssl is bundled with Git, so if you have Git installed you can directly generate `cert.pem` and `key.pem` with the following command in Git Bash:

  ```shell
  openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
  ```
Then move the `cert.pem` and `key.pem` into the project root directory.
You can also set up an HTTPS server with other solutions (like Express or Caddy) and set `./dist` as the root directory.
```shell
npm run startup
```

Once started, open the browser and navigate to https://localhost:8080 or https://your-server-ip:8080.
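Since WebGPU is only exposed in secure contexts, a page served this way can check for support before running a WebGPU sample. A minimal sketch (the helper is our illustration; in a real page you would pass the browser's `navigator`):

```javascript
// Illustrative helper: true if a navigator-like object exposes WebGPU.
// navigator.gpu only appears in secure contexts (HTTPS or localhost).
function supportsWebGPU(nav) {
  return nav != null && typeof nav === "object" && "gpu" in nav;
}

// In the browser: supportsWebGPU(navigator)
// Simulated here for both outcomes:
console.log(supportsWebGPU({ gpu: {} })); // true  (WebGPU available)
console.log(supportsWebGPU({}));          // false (sample should fall back or warn)
```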
Some samples in this repository are based on modifications of examples from Transformers.js (Apache-2.0) and MediaPipe (Apache-2.0).
| Sample | Source | Model | Model License |
|---|---|---|---|
| Background Removal | Transformers.js | RMBG-1.4 | bria-rmbg-1.4 |
| Image to text | Transformers.js | ViT-GPT2 | Apache-2.0 |
| Question Answering | Transformers.js | DistilBERT | Apache-2.0 |
| Summarization | Transformers.js | DistilBART CNN | Apache-2.0 |
| Phi3 WebGPU | Transformers.js | Phi-3-mini-4k | MIT |
| LLM Gemma | MediaPipe | Gemma-2B | Gemma |