Windows (MS Visual Studio):
- A static library will be written to the "lib" directory.
- The executables can be found in the "bin" directory.


----------------------------------------------------------
Running the various programs:
----------------------------------------------------------
* On Linux or Mac: ./bin/program_name --help
* On Windows: bin\Debug\program_name --help
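
For example, to list the options for the annotation tool (on Linux or Mac):
> ./bin/annotate --help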


----------------------------------------------------------
Mini tutorial:
----------------------------------------------------------
(Follow these steps in order, from 1 to 10.)

1. Create "annotations.yaml":
---------
Usage:
> ./annotate [-v video] [-m muct_dir] [-d output_dir]
- video: video containing frames to annotate.
- muct_dir: directory containing "muct-landmarks/muct76-opencv.csv", the pre-annotated MUCT dataset (http://www.milbo.org/muct/).
- output_dir: directory that will contain the annotation file and the annotated images (images are only saved when using -v)
Example:
> mkdir muct
> ./annotate -m ${MY_MUCT_DIR}/ -d muct/
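
You can also annotate frames from your own video instead of the MUCT data ("my_video.avi" and "my_annotations/" below are placeholder names):
> mkdir my_annotations
> ./annotate -v my_video.avi -d my_annotations/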

2. Visualise annotations:
----------
Usage:
> ./visualize_annotations annotation_file
Example:
> ./visualize_annotations muct/annotations.yaml
Keys:
- 'p': show next image and annotations
- 'o': show previous image and annotations
- 'f': show flipped image and annotations
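
The viewer works the same way on annotations created from your own video in step 1 (assuming the placeholder "my_annotations/" directory from that example):
> ./visualize_annotations my_annotations/annotations.yaml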

3. Train shape model:
----------
Usage:
> ./train_shape_model annotation_file shape_model_file [-f fraction_of_variation] [-k maximum_modes] [--mirror]
- annotation_file: generated by "annotate"
- shape_model_file: output YAML file containing trained shape model
- fraction_of_variation: A fraction between 0 and 1 specifying the amount of variation to retain
- maximum_modes: A cap on the number of modes the shape model can have
- mirror: Use mirrored images as samples (only use this if symmetry points were specified in "annotate")
Example:
> ./train_shape_model muct/annotations.yaml muct/shape_model.yaml
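
The optional flags can be combined for finer control; the values below are illustrative, not recommendations:
> ./train_shape_model muct/annotations.yaml muct/shape_model.yaml -f 0.99 -k 20
Add --mirror only if symmetry points were specified in "annotate".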

4. Visualise shape model:
-----------
Usage:
> ./visualize_shape_model shape_model
- shape_model: generated using "train_shape_model"
Example:
> ./visualize_shape_model muct/shape_model.yaml

5. Train patch detectors:
--------------
Usage:
> ./train_patch_model annotation_file shape_model_file patch_model_file [-w face_width] [-p patch_size] [-s search_window_size] [--mirror]
- annotation_file: generated by "annotate"
- shape_model_file: generated by "train_shape_model"
- patch_model_file: output YAML file containing trained patch model
- face_width: How many pixels-wide the reference face is
- patch_size: How many pixels-wide the patches are in the reference face image
- search_window_size: How many pixels-wide the search region is
- mirror: Use mirrored images as samples (only use this if symmetry points were specified in "annotate")
Example:
> ./train_patch_model muct/annotations.yaml muct/shape_model.yaml muct/patch_model.yaml
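
The reference face, patch and search-window sizes can be tuned with the optional flags; the pixel values below are illustrative, not recommendations:
> ./train_patch_model muct/annotations.yaml muct/shape_model.yaml muct/patch_model.yaml -w 100 -p 11 -s 11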

6. Visualise patch detectors:
------------
Usage:
> ./visualize_patch_model patch_model [-w face_width]
- patch_model: generated using "train_patch_model"
- face_width: Width of face to visualise patches on
Example:
> ./visualize_patch_model muct/patch_model.yaml
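
To draw the patches on a larger reference face, pass an explicit width (the value below is illustrative):
> ./visualize_patch_model muct/patch_model.yaml -w 200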

7. Build face detector:
------------
Usage:
> ./train_face_detector detector_file annotation_file shape_model_file detector_model_file [-f min_frac_of_pts_in_det_rect] [--mirror]
- detector_file: pre-trained OpenCV cascade face detector (look in the data directory of the OpenCV package)
- annotation_file: generated using "annotate"
- shape_model_file: generated using "train_shape_model"
- detector_model_file: output YAML file containing face detector model
- min_frac_of_pts_in_det_rect: Minimum fraction of points inside the detection window for a sample to be considered an inlier for training
- mirror: Use mirrored images as samples (only use this if symmetry points were specified in "annotate")
Example:
> ./train_face_detector ${MY_OPENCV_DIR}/data/lbpcascades/lbpcascade_frontalface.xml muct/annotations.yaml muct/shape_model.yaml muct/detector.yaml
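
To be stricter about which training samples count as inliers, pass an explicit fraction (the value below is illustrative):
> ./train_face_detector ${MY_OPENCV_DIR}/data/lbpcascades/lbpcascade_frontalface.xml muct/annotations.yaml muct/shape_model.yaml muct/detector.yaml -f 0.9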

8. Visualise face detector:
------------
Usage:
> ./visualize_face_detector detector [video_file]
- detector: generated using "train_face_detector"
- video_file: Optional video to test results on. Default is to use webcam
Example:
> ./visualize_face_detector muct/detector.yaml
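
To test on a pre-recorded video instead of the webcam ("my_video.avi" is a placeholder name):
> ./visualize_face_detector muct/detector.yaml my_video.avi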

9. Train face tracker:
-----------
Usage:
> ./train_face_tracker shape_model_file patch_model_file face_detector_file face_tracker_file
- shape_model_file: generated using "train_shape_model"
- patch_model_file: generated using "train_patch_model"
- face_detector_file: generated using "train_face_detector"
- face_tracker_file: output YAML file containing face tracker model
Example:
> ./train_face_tracker muct/shape_model.yaml muct/patch_model.yaml muct/detector.yaml muct/tracker.yaml
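
For reference, if everything was kept under "muct/" as in the examples above, the complete training pipeline up to this point is:
> ./annotate -m ${MY_MUCT_DIR}/ -d muct/
> ./train_shape_model muct/annotations.yaml muct/shape_model.yaml
> ./train_patch_model muct/annotations.yaml muct/shape_model.yaml muct/patch_model.yaml
> ./train_face_detector ${MY_OPENCV_DIR}/data/lbpcascades/lbpcascade_frontalface.xml muct/annotations.yaml muct/shape_model.yaml muct/detector.yaml
> ./train_face_tracker muct/shape_model.yaml muct/patch_model.yaml muct/detector.yaml muct/tracker.yaml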

10. Test face tracker:
----------
Usage:
> ./visualize_face_tracker tracker [video_file]
- tracker: generated using "train_face_tracker"
- video_file: Optional video to test tracker on. Default is to use webcam
Example:
> ./visualize_face_tracker muct/tracker.yaml
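
As with the face detector, a pre-recorded video can be supplied instead of the webcam ("my_video.avi" is a placeholder name):
> ./visualize_face_tracker muct/tracker.yaml my_video.avi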
