This repo is for eye-tracking research.
We are preparing the dataset for free public release.
## generate-dataset.py

Generate an `.npz` data package for training from the MPIIFaceGaze dataset.

- The MPIIFaceGaze dataset is arranged in folders, one per subject, each containing the image data for different days.
- The subject's `p%02d.txt` file lists the complete path of each image file and the corresponding gaze point on the screen.
- The screen size parameters in the code should be modified according to the values in the `screenSize.mat` file in the `Calibration` folder.
- The saved `.npz` file contains two fields:
  - `faceData`: nSamples * rows * cols * channels
  - `eyeTrackData`: nSamples * v_grids * h_grids * 3 (relative x, y, and probability p)

```
$ python generate-dataset.py <folder (e.g., ./p00)> <save.npz>
```
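The `eyeTrackData` layout above can be sketched as follows. This is an illustration, not the repo's exact code: the grid sizes (9 * 15), the `encode_gaze` helper name, and the one-hot-cell-plus-offset convention are assumptions inferred from the field description (relative x, y, and probability p).

```python
import numpy as np

def encode_gaze(gx, gy, screen_w, screen_h, v_grids=9, h_grids=15):
    """Encode a screen gaze point (pixels) into a v_grids x h_grids x 3 grid.

    The cell containing the gaze point gets p = 1 and stores the
    sub-cell offset (relative x, y in [0, 1)); all other cells stay 0.
    Grid sizes here are assumed defaults, not the repo's values.
    """
    label = np.zeros((v_grids, h_grids, 3), dtype=np.float32)
    # Fractional grid coordinates of the gaze point.
    fx = gx / screen_w * h_grids
    fy = gy / screen_h * v_grids
    col = min(int(fx), h_grids - 1)
    row = min(int(fy), v_grids - 1)
    label[row, col] = (fx - col, fy - row, 1.0)  # relative x, relative y, p
    return label

label = encode_gaze(gx=640, gy=512, screen_w=1280, screen_h=800)
```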
## i2g_g_v1.0.py

Previous code for model training; uses a MATLAB data file as input.

- The `.mat` file should contain the same two fields as described above for `generate-dataset.py`.

```
$ python i2g_g_v1.0.py <data.mat (v7.3)> | tee <logfile>.txt
```
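A v7.3 `.mat` file is an HDF5 container, so it can be read with `h5py` (plain `scipy.io.loadmat` only handles older formats). A minimal loading sketch, assuming the fields carry the same names as the `.npz` layout; the helper name is hypothetical:

```python
import numpy as np
import h5py  # v7.3 .mat files are HDF5 under the hood

def load_mat_v73(path):
    """Load the two training fields from a v7.3 .mat file.

    Field names mirror the .npz layout described above. Note that
    MATLAB stores arrays column-major, so axes may need transposing
    before feeding the model.
    """
    with h5py.File(path, "r") as f:
        face = np.array(f["faceData"])
        gaze = np.array(f["eyeTrackData"])
    return face, gaze
```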
## i2g_train.py

Train the model with the `.npz` dataset generated by `generate-dataset.py` and save the model as an `.h5` file. Optionally, training can start from a pre-trained model.

```
$ python i2g_train.py <data.npz> <save.h5> [<pre-trained model.h5>]
```
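A minimal sketch of loading the `.npz` dataset before training. The field names come from the `generate-dataset.py` description above; the helper name, validation fraction, and shuffling are assumptions for illustration, not the repo's defaults.

```python
import numpy as np

def load_training_data(npz_path, val_fraction=0.1, seed=0):
    """Load the arrays saved by generate-dataset.py and split off a
    validation set (shuffled with a fixed seed for reproducibility)."""
    data = np.load(npz_path)
    face, gaze = data["faceData"], data["eyeTrackData"]
    idx = np.random.default_rng(seed).permutation(len(face))
    n_val = int(len(face) * val_fraction)
    val, train = idx[:n_val], idx[n_val:]
    return (face[train], gaze[train]), (face[val], gaze[val])
```

With Keras-style models, the optional pre-trained `.h5` argument would typically be restored with `keras.models.load_model(...)` before calling `fit` on the training split.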
## train_whole_and_face_roi.py

Train the model with the `.npz` dataset generated by `generate-dataset-v5.0.py` and save the model as an `.h5` file. Optionally, training can start from a pre-trained model.

```
$ python train_whole_and_face_roi.py <data.npz> <save.h5> [<pre-trained model.h5>]
```
## predict-gaze.py

Predict the gaze point with a trained `.h5` model and display both the original image and the predicted gaze point, given a `.txt` file list.

- The settings are the same as for the MPIIFaceGaze data generation.

```
$ python predict-gaze.py <model.h5> <filelist.txt (e.g., p00.txt)>
```
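Turning a v_grids * h_grids * 3 network output back into a screen coordinate can be sketched as below. The decoding convention (take the highest-probability cell, then add its relative offset) is an assumption mirroring the `eyeTrackData` field description, and the helper name is hypothetical.

```python
import numpy as np

def decode_gaze(grid, screen_w, screen_h):
    """Recover a screen gaze point (pixels) from a v_grids x h_grids x 3
    prediction: pick the cell with the highest p, then add that cell's
    stored relative (x, y) offset and rescale to screen coordinates."""
    v_grids, h_grids, _ = grid.shape
    # Index of the most probable cell in the p channel.
    row, col = np.unravel_index(np.argmax(grid[..., 2]), (v_grids, h_grids))
    rel_x, rel_y = grid[row, col, 0], grid[row, col, 1]
    gx = (col + rel_x) / h_grids * screen_w
    gy = (row + rel_y) / v_grids * screen_h
    return float(gx), float(gy)
```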
- Python code should follow the Google Python Style Guide.