Ho Kei Cheng, Alexander Schwing
University of Illinois Urbana-Champaign
You can download the dataset from Google Drive. If you want to prepare your own dataset, just follow the same structure.
-- datasets
   |-- image
   |   |-- Scene_1
   |   |   |-- 00000.jpg
   |   |   |-- 00001.jpg
   |   |-- Scene_2
   |   |   |-- 00000.jpg
   |   |   |-- 00001.jpg
   |-- gt
   |   |-- Scene_1
   |   |   |-- 00000.png
   |   |   |-- 00001.png
   |   |-- Scene_2
   |   |   |-- 00000.png
   |   |   |-- 00001.png
   |-- annotations
   |   |-- Scene_1
   |   |   |-- 00000.png   [only the first frame is enough]
   |   |-- Scene_2
   |   |   |-- 00000.png
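If you prepare your own dataset, a matching directory skeleton can be created like this (the "datasets" root and Scene_1 are placeholder names taken from the structure above):
mkdir -p datasets/image/Scene_1 datasets/gt/Scene_1 datasets/annotations/Scene_1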
Also, if you run the application through Docker, create an output directory at the project root; it will be mounted to store the segmented images.
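For example, assuming the directory is simply called "output" (the exact name may differ in your setup):
mkdir output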
1. Create a "saves" directory and download the pre-trained model as shown below, making sure the checkpoint ends up inside "saves".
wget https://github.com/hkchengrex/XMem/releases/download/v1.0/XMem-s012.pth
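If you prefer a single step, the directory can be created and the checkpoint downloaded straight into it (wget's -P flag sets the download directory):
mkdir -p saves && wget -P saves https://github.com/hkchengrex/XMem/releases/download/v1.0/XMem-s012.pth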
2. Install the dependencies.
pip install -r requirements.txt
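Optionally, you can install the dependencies into a fresh virtual environment first and run the pip command above inside it; this is not required by the project, just a common setup:
python -m venv venv && source venv/bin/activate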
3. Run the following line to start the evaluation:
python eval.py --d17_path dataset_root --split test --model ./saves/XMem-s012.pth --output ./output_path
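For instance, with the directory layout and the "saves" / "output" names used above (these example paths are assumptions; adjust them to your setup):
python eval.py --d17_path ./datasets --split test --model ./saves/XMem-s012.pth --output ./output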
You can run a Docker container with the following command:
docker build -t xmem . && \
docker run -it --rm -v RGB_IMAGE_PATH:/images \
-v GT_IMAGE_PATH:/gt \
-v ANNOTATIONS_PATH:/annotations \
--gpus all --shm-size=2gb xmem
For example, this is the exact command that is run locally:
docker build -t xmem . && \
docker run -it --rm -v C:\Users\ge79pih\tmo_data\tmo\tmo_dataset:/images \
-v C:\Users\ge79pih\tmo_data\tmo\tmo_gt:/gt \
-v C:\Users\ge79pih\tmo_data\tmo\tmo_annotations:/annotations \
--gpus all --shm-size=2gb xmem
The skeleton of this project is taken from the official XMem implementation. If you want more details about the workflow, please refer to the official repository.