When I run s3_preprocess_features.py, it creates an empty features_X.csv file.
The output reads as:
Number of samples = 128
Raw feature length = 36
Number of classes = 2
Classes: ['wavingg' 'shaking']
(128, 36)
Extracting time-serials features ...
0/128,
a new video clip starts, reset the feature generator
X.shape = (0,), len(Y) = 0
Writing features and labels to disk ...
Save features to: /home/usr/Desktop/activity_Recognition/Realtime-Action-Recognition/src/../data_proc/features_X.csv
Save labels to: /home/usr/Desktop/activity_Recognition/Realtime-Action-Recognition/src/../data_proc/features_Y.csv
Can you please help me understand why the preprocessing step is not producing any features?
@Yaffa16 Just to confirm: step 2 returns some data, but step 3 is empty, correct? It looks like the skeletons are missing too many parts, so in step 3 (the preprocessing step) all of them are discarded.
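One quick way to check this hypothesis is to count the (0.0, 0.0) joint pairs in each row of the step-2 output. The sketch below is a diagnostic, not the repository's actual filtering code; it assumes the row layout visible in the skeleton_info.txt excerpt in this thread (5 metadata fields followed by 18 x/y joint pairs, with zeros marking undetected joints):

```python
# Diagnostic sketch (assumption: not the repo's actual preprocessing code).
# Each row of skeleton_info.txt appears to be 5 metadata fields followed by
# 18 (x, y) joint pairs, where (0.0, 0.0) marks an undetected joint.

NUM_META = 5     # frame indices, action label, image path (per the excerpt)
NUM_JOINTS = 18  # OpenPose-style 18-joint skeleton (assumption)

def count_missing_joints(row):
    """Return how many of the 18 joints in one row are (0.0, 0.0)."""
    coords = row[NUM_META:]
    pairs = zip(coords[0::2], coords[1::2])
    return sum(1 for x, y in pairs if x == 0.0 and y == 0.0)

# First row from the skeleton_info.txt excerpt:
row = [1, 1, 1, "wavingg", "wavingg_07-22-11-21-37-691/00001.jpg",
       0.4176829268292683, 0.41983695652173914, 0.43597560975609756,
       0.6195652173913043, 0.2926829268292683, 0.5869565217391305,
       0.0, 0.0, 0.0, 0.0, 0.6067073170731707, 0.6236413043478262,
       0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
       0.0, 0.0, 0.0, 0.0, 0.39939024390243905, 0.3831521739130435,
       0.45426829268292684, 0.37907608695652173, 0.0, 0.0,
       0.5030487804878049, 0.3953804347826087]

print(count_missing_joints(row), "of", NUM_JOINTS, "joints missing")
# → 11 of 18 joints missing
```

With 11 of 18 joints undetected in this very first frame, a strict completeness filter in step 3 would plausibly drop the row; if most rows look like this, the filter would discard all 128 samples, which matches `X.shape = (0,)` in the log.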
I am trying to train the model on my own data.
The output of src/s2_put_skeleton_txts_to_a_single_txt.py is not empty.
It shows the skeleton_info.txt file as:
[[1, 1, 1, "wavingg", "wavingg_07-22-11-21-37-691/00001.jpg", 0.4176829268292683, 0.41983695652173914, 0.43597560975609756, 0.6195652173913043, 0.2926829268292683, 0.5869565217391305, 0.0, 0.0, 0.0, 0.0, 0.6067073170731707, 0.6236413043478262, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.39939024390243905, 0.3831521739130435, 0.45426829268292684, 0.37907608695652173, 0.0, 0.0, 0.5030487804878049, 0.3953804347826087], [1, 1, 2, "wavingg", "wavingg_07-22-11-21-37-691/00002.jpg", 0.4146341463414634, ......