
Commit eb5c390 ("new")

1 parent: cb082a6

40 files changed: +1067, -1620 lines

.gitignore (new file, +14)

@@ -0,0 +1,14 @@
+__pycache__
+*.pyc
+*.pt
+*.out
+
+__pycache__/
+.vscode/
+*.ckpt
+
+results/*.ipynb
+results/*.ipynb*
+src/datasets/synbols
+usr_configs.py
+tmp.png

README.md (+66, -40)

@@ -1,74 +1,100 @@
 # LCFCN - ECCV 2018
+
 ## Where are the Blobs: Counting by Localization with Point Supervision
 [[Paper]](https://arxiv.org/abs/1807.09856)[[Video]](https://youtu.be/DHKD8LGvX6c)

-## Requirements
-
-- Pytorch version 0.4 or higher.
+Turn your segmentation model into a landmark detection model using the lcfcn loss. It can learn to output predictions like the one in the following image by training on point-level annotations only.
+This script outputs the following dashboard
+![](results/shanghai.png)

-## Description
-Given a test image, the trained model outputs blobs in the image, then counts the number of predicted blobs (see Figure below).
+## Usage

-![Shanghai test image](figures/shanghai.png)
+```
+pip install git+https://github.com/ElementAI/LCFCN
+```

-## Test on single image
+```python
+from lcfcn import lcfcn_loss

-We test a trained ResNet on a Trancos example image as follows:
+# compute per-pixel logits using any segmentation model
+logits = seg_model.forward(images)

-```
-python main.py -image_path figures/test.png \
-               -model_path checkpoints/best_model_trancos_ResFCN.pth \
-               -model_name ResFCN
+# compute lcfcn loss given 'points' as H x W mask
+loss = lcfcn_loss.compute_lcfcn_loss(logits, points)
+loss.backward()
 ```
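
For concreteness, the usage snippet above can be dropped into a full training step roughly as follows. This is a minimal sketch rather than code from this commit: the toy convolutional network, the tensor shapes, and the assumption that `points` marks object centers with non-zero pixels are illustrative guesses that should be checked against the `lcfcn_loss` documentation.

```python
import torch
import torch.nn as nn
from lcfcn import lcfcn_loss

# Stand-in segmentation network; any model that returns per-pixel logits works.
seg_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),  # 2 channels (background / object) is an assumption
)
opt = torch.optim.Adam(seg_model.parameters(), lr=1e-4)

images = torch.rand(1, 3, 128, 128)    # dummy RGB batch of one image
points = torch.zeros(128, 128).long()  # H x W point mask; non-zero pixels mark annotated objects (assumed format)
points[40, 60] = 1
points[90, 30] = 1

logits = seg_model(images)                            # per-pixel logits, shape (1, 2, 128, 128)
loss = lcfcn_loss.compute_lcfcn_loss(logits, points)  # the loss call shown in the snippet above
loss.backward()
opt.step()
```

The only call taken from the README itself is `compute_lcfcn_loss(logits, points)`; everything else is scaffolding to make the example self-contained.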

-The expected output is shown below, and the output image will be saved in the same directory as the test image.

-Trancos test image | Trancos predicted image
-:-------------------------:|:-------------------------:
-![Trancos test image](figures/test.png) | ![Trancos pred image](figures/test.png_blobs_count:32.png)

+## Experiments

-## Running the saved models
+### 1. Install dependencies

-1. Download the checkpoints,
 ```
-bash checkpoints/download.sh
+pip install -r requirements.txt
 ```
-For the shanghai model, download the checkpoint from this link:
+This command installs pydicom and the [Haven library](https://github.com/ElementAI/haven) which helps in managing the experiments.

-https://drive.google.com/file/d/1N75fun1I1XWh1LuKmi60QXF2SgCPLLLQ/view?usp=sharing

-2. Output the saved results,
+### 2. Download Datasets

-```
-python main.py -m summary -e trancos
-```
+- Shanghai Dataset
+
+```
+wget -O shanghai_tech.zip https://www.dropbox.com/s/fipgjqxl7uj8hd5/ShanghaiTech.zip?dl=0
+```
+- Trancos Dataset
+```
+wget http://agamenon.tsc.uah.es/Personales/rlopez/data/trancos/TRANCOS_v3.tar.gz
+```
+<!--
+#### Model
+- Shanghai: `curl -L https://www.dropbox.com/sh/pwmoej499sfqb08/AABY13YraHYF51yw62Zc1w0-a?dl=0`
+- Trancos: `curl -L https://www.dropbox.com/sh/rms4dg5autwtpnf/AADQBOr1ruFsptbqG_uPt_zCa?dl=0` -->

-3. Re-evaluate the saved model,
+#### 2.2 Run training and validation

 ```
-python main.py -m test -e trancos
+python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
 ```

+- `<datadir>` is where the dataset is located.
+- `<savedir_base>` is where the experiment weights and results will be saved.
+- `-e trancos` specifies the trancos training hyper-parameters defined in [`exp_configs.py`](exp_configs.py).
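
As a rough, hypothetical illustration of the experiment group that `-e trancos` selects: the Haven library typically reads a dictionary of named experiment lists from `exp_configs.py`, and the dashboard snippet later in this README filters on `dataset.name` and plots `model.base` and `val_mae`, which suggests nested dictionaries along these lines. Every key and value below other than `dataset.name` is an assumption; the real hyper-parameters live in the repository's `exp_configs.py`.

```python
# Hypothetical sketch of an exp_configs.py entry; values are placeholders,
# not the repository's actual hyper-parameters.
EXP_GROUPS = {}

EXP_GROUPS['trancos'] = [
    {
        'dataset': {'name': 'trancos'},  # matches the ('dataset.name', 'trancos') filter used later
        'model': {'base': 'lcfcn'},      # assumed value for the 'model.base' legend key
        'batch_size': 1,                 # assumed
        'max_epoch': 100,                # assumed
        'optimizer': 'adam',             # assumed
        'lr': 1e-5,                      # assumed
    }
]
```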

-## Training the models from scratch
-
-To train the model,
+### 3. Results
+#### 3.1 Launch Jupyter from terminal

 ```
-python main.py -m train -e trancos
+> jupyter nbextension enable --py widgetsnbextension --sys-prefix
+> jupyter notebook
 ```

+#### 3.2 Run the following from a Jupyter cell
+```python
+from haven import haven_jupyter as hj
+from haven import haven_results as hr
+
+# path to where the experiments got saved
+savedir_base = '<savedir_base>'
+
+# filter exps
+filterby_list = [('dataset.name', 'trancos')]
+# get experiments
+rm = hr.ResultManager(savedir_base=savedir_base,
+                      filterby_list=filterby_list,
+                      verbose=0)
+# dashboard variables
+legend_list = ['model.base']
+title_list = ['dataset', 'model']
+y_metrics = ['val_mae']
+
+# launch dashboard
+hj.get_dashboard(rm, vars(), wide_display=True)
+```

-## Benchmark
-
-| Method | Trancos | Pascal |
-|--------|---------|--------|
-| ResFCN | 3.39    | 0.31   |
-| Paper  | 3.32    | 0.31   |
-
-
-
+This script outputs the following dashboard
+![](results/dashboard_trancos.png)

 ## Citation
 If you find the code useful for your research, please cite:

Deleted files:

- checkpoints/download.sh (-25)
- datasets/__init__.py (-5)
- datasets/download/pascal_download.sh (-2)
- datasets/download/penguins_download.sh (-20)
- datasets/download/shanghai_download.sh (-2)
- datasets/download/trancos_download.sh (-2)
- datasets/pascal.py (-165)
