# LCFCN - ECCV 2018

## Where are the Blobs: Counting by Localization with Point Supervision

[[Paper]](https://arxiv.org/abs/1807.09856) [[Video]](https://youtu.be/DHKD8LGvX6c)
Turn your segmentation model into a landmark detection model using the LCFCN loss. By training on point-level annotations only, it learns to output a blob for each object instance.

## Usage

```
pip install git+https://github.com/ElementAI/LCFCN
```

```python
from lcfcn import lcfcn_loss

# compute per-pixel logits using any segmentation model
logits = seg_model.forward(images)

# compute the LCFCN loss given 'points' as an H x W mask
loss = lcfcn_loss.compute_lcfcn_loss(logits, points)
loss.backward()
```
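The `points` argument above is an H x W mask with a 1 at each annotated object location and 0 elsewhere. A minimal, dependency-free sketch of building such a mask from a list of (row, col) point annotations (the helper name is illustrative, not part of the lcfcn API):

```python
def build_point_mask(height, width, point_coords):
    """Return an H x W binary mask with a 1 at each annotated point."""
    mask = [[0] * width for _ in range(height)]
    for row, col in point_coords:
        mask[row][col] = 1
    return mask

# three annotated objects in a 4 x 5 image
points = build_point_mask(4, 5, [(0, 1), (2, 3), (3, 0)])

# with point supervision, the ground-truth count is the number of ones
count = sum(sum(row) for row in points)
print(count)  # 3
```

In practice the mask would be a tensor on the same device as `logits`; this sketch only illustrates the annotation format.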

## Experiments

### 1. Install dependencies

```
pip install -r requirements.txt
```

This command installs pydicom and the [Haven library](https://github.com/ElementAI/haven), which helps manage the experiments.

### 2. Download Datasets

- Shanghai Dataset

  ```
  wget -O shanghai_tech.zip https://www.dropbox.com/s/fipgjqxl7uj8hd5/ShanghaiTech.zip?dl=0
  ```

- Trancos Dataset

  ```
  wget http://agamenon.tsc.uah.es/Personales/rlopez/data/trancos/TRANCOS_v3.tar.gz
  ```

<!--
#### Model
- Shanghai: `curl -L https://www.dropbox.com/sh/pwmoej499sfqb08/AABY13YraHYF51yw62Zc1w0-a?dl=0`
- Trancos: `curl -L https://www.dropbox.com/sh/rms4dg5autwtpnf/AADQBOr1ruFsptbqG_uPt_zCa?dl=0`
-->

#### 2.2 Run training and validation

```
python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
```

- `<datadir>` is where the dataset is located.
- `<savedir_base>` is where the experiment weights and results will be saved.
- `-e trancos` specifies the Trancos training hyper-parameters defined in [`exp_configs.py`](exp_configs.py).
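`exp_configs.py` maps each experiment group name (the value passed to `-e`) to a list of hyper-parameter dictionaries. A hypothetical sketch of that structure (the field names and values here are illustrative, not the repository's actual configuration):

```python
# EXP_GROUPS maps an experiment group name to a list of hyper-parameter
# dicts; trainval.py runs one training job per dict in the selected group.
EXP_GROUPS = {
    "trancos": [
        {
            "dataset": {"name": "trancos"},
            "model": {"base": "lcfcn"},
            "batch_size": 1,
            "max_epoch": 100,
            "lr": 1e-5,
        }
    ]
}

# '-e trancos' selects this list of configurations
configs = EXP_GROUPS["trancos"]
print(len(configs))  # 1
```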

### 3. Results

#### 3.1 Launch Jupyter from the terminal

```
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter notebook
```

#### 3.2 Run the following from a Jupyter cell

```python
from haven import haven_jupyter as hj
from haven import haven_results as hr

# path to where the experiments were saved
savedir_base = <savedir_base>

# filter the experiments
filterby_list = [('dataset.name', 'trancos')]

# get the experiments
rm = hr.ResultManager(savedir_base=savedir_base,
                      filterby_list=filterby_list,
                      verbose=0)

# dashboard variables
legend_list = ['model.base']
title_list = ['dataset', 'model']
y_metrics = ['val_mae']

# launch the dashboard
hj.get_dashboard(rm, vars(), wide_display=True)
```
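`filterby_list` keeps only the experiments whose saved config matches every (dotted key, value) pair. A rough, dependency-free sketch of that filtering idea (an illustration, not Haven's actual implementation):

```python
def filter_exps(exp_configs, filterby_list):
    """Keep configs whose nested value at each 'dotted.key' equals the given value."""
    kept = []
    for cfg in exp_configs:
        match = True
        for dotted_key, value in filterby_list:
            node = cfg
            for part in dotted_key.split("."):
                node = node.get(part, {}) if isinstance(node, dict) else {}
            if node != value:
                match = False
                break
        if match:
            kept.append(cfg)
    return kept

exps = [
    {"dataset": {"name": "trancos"}, "model": {"base": "fcn8"}},
    {"dataset": {"name": "shanghai"}, "model": {"base": "fcn8"}},
]
print(len(filter_exps(exps, [("dataset.name", "trancos")])))  # 1
```

Here `'dataset.name'` walks the nested config, so only the Trancos experiments survive the filter.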

This script outputs the following dashboard.
## Citation

If you find the code useful for your research, please cite: