forked from AILAB-CEFET-RJ/atmoseer
out.txt
Input dimensions of the data matrix: 15
last_layer.bias = Parameter containing:
tensor([ 0.0819, 0.0156, -0.1376, 0.1214, 0.0068], requires_grad=True)
target_average = tensor([1.0000e+00, 9.2844e-02, 6.7718e-03, 5.0610e-04, 4.9266e-05])
last_layer.bias = Parameter containing:
tensor([1.0000e+00, 9.2844e-02, 6.7718e-03, 5.0610e-04, 4.9266e-05],
requires_grad=True)
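The lines above show the final layer's bias being overwritten with `target_average`, i.e. the empirical frequency of each (cumulative) class in the training targets. This is a common trick for imbalanced ordinal classification: the network starts out predicting the base rates instead of having to learn them. A minimal pure-Python sketch of how such averages could be computed, assuming a cumulative ordinal encoding (the helper names and label encoding are assumptions, not taken from the repository):

```python
def ordinal_encode(label, num_classes):
    # Cumulative ("ordinal") encoding: component k is 1 iff label >= k.
    # E.g. label 2 with 5 classes -> [1.0, 1.0, 1.0, 0.0, 0.0].
    return [1.0 if label >= k else 0.0 for k in range(num_classes)]

def target_average(labels, num_classes):
    # Column-wise mean of the encoded labels; component k estimates P(y >= k).
    encoded = [ordinal_encode(y, num_classes) for y in labels]
    n = len(encoded)
    return [sum(row[k] for row in encoded) / n for k in range(num_classes)]

# Toy labels: mostly class 0, a few higher classes (imbalanced, as in the log).
labels = [0] * 90 + [1] * 8 + [2] * 2
avg = target_average(labels, num_classes=5)
print(avg)  # first component is always 1.0; later components shrink toward 0
```

The first component is 1.0 by construction (every label is >= 0), matching the leading `1.0000e+00` in the logged `target_average` tensor.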
OrdinalClassificationNet(
(feature_extractor): Sequential(
(0): Conv1d(15, 16, kernel_size=(3,), stride=(1,), padding=(3,))
(1): ReLU(inplace=True)
(2): Dropout1d(p=0.5, inplace=False)
)
(classifier): Sequential(
(0): Linear(in_features=112, out_features=50, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=50, out_features=5, bias=True)
(3): Sigmoid()
)
)
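The classifier's `in_features=112` is determined by the conv layer's output shape: `Conv1d(15, 16, kernel_size=3, padding=3)` produces 16 channels of length `L_out`, and `16 * L_out = 112` implies `L_out = 7`, which in turn implies an input window of 3 time steps (the window length is inferred here, not stated in the log). A small sketch using the standard Conv1d output-length formula:

```python
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    # PyTorch Conv1d output length:
    # floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

channels_out = 16
l_in = 3  # inferred input window length (assumption)
l_out = conv1d_out_len(l_in, kernel_size=3, padding=3)
print(l_out, channels_out * l_out)  # 7 112
```

This kind of check is handy when a flatten-then-linear layer raises a shape mismatch: recompute `channels * L_out` rather than guessing `in_features`.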
- Setting up optimizer: Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
eps: 1e-08
foreach: None
lr: 3e-06
maximize: False
weight_decay: 0
)
- Creating data loaders.
- Moving data and parameters to cuda:0.
- Fitting model... [ 1/8000] train_loss: 1167.09178 valid_loss: 1188.25951
Validation loss decreased (inf --> 1188.259509). Saving model ...
[ 2/8000] train_loss: 1159.41425 valid_loss: 1180.93165
Validation loss decreased (1188.259509 --> 1180.931653). Saving model ...
[ 3/8000] train_loss: 1151.38789 valid_loss: 1172.97184
Validation loss decreased (1180.931653 --> 1172.971844). Saving model ...
[ 4/8000] train_loss: 1142.56177 valid_loss: 1164.10781
Validation loss decreased (1172.971844 --> 1164.107806). Saving model ...
[ 5/8000] train_loss: 1132.85579 valid_loss: 1154.10887
Validation loss decreased (1164.107806 --> 1154.108868). Saving model ...
[ 6/8000] train_loss: 1121.71555 valid_loss: 1142.67249
Validation loss decreased (1154.108868 --> 1142.672485). Saving model ...
[ 7/8000] train_loss: 1109.34057 valid_loss: 1129.69371
Validation loss decreased (1142.672485 --> 1129.693707). Saving model ...
[ 8/8000] train_loss: 1095.15341 valid_loss: 1115.02543
Validation loss decreased (1129.693707 --> 1115.025433). Saving model ...
[ 9/8000] train_loss: 1079.46213 valid_loss: 1098.73264
Validation loss decreased (1115.025433 --> 1098.732635). Saving model ...
[ 10/8000] train_loss: 1061.79685 valid_loss: 1080.65704
Validation loss decreased (1098.732635 --> 1080.657043). Saving model ...
[ 11/8000] train_loss: 1042.59509 valid_loss: 1060.54470
Validation loss decreased (1080.657043 --> 1060.544699). Saving model ...
[ 12/8000] train_loss: 1021.15436 valid_loss: 1038.00896
Validation loss decreased (1060.544699 --> 1038.008957). Saving model ...
[ 13/8000] train_loss: 997.30274 valid_loss: 1012.80159
Validation loss decreased (1038.008957 --> 1012.801590). Saving model ...
[ 14/8000] train_loss: 971.59575 valid_loss: 985.01531
Validation loss decreased (1012.801590 --> 985.015308). Saving model ...
[ 15/8000] train_loss: 943.84648 valid_loss: 954.94968
Validation loss decreased (985.015308 --> 954.949683). Saving model ...
[ 16/8000] train_loss: 914.34703 valid_loss: 923.08728
Validation loss decreased (954.949683 --> 923.087280). Saving model ...
[ 17/8000] train_loss: 883.02437 valid_loss: 890.03624
Validation loss decreased (923.087280 --> 890.036240). Saving model ...
[ 18/8000] train_loss: 850.88199 valid_loss: 856.13437
Validation loss decreased (890.036240 --> 856.134369). Saving model ...
[ 19/8000] train_loss: 817.76536 valid_loss: 821.54584
Validation loss decreased (856.134369 --> 821.545844). Saving model ...
[ 20/8000] train_loss: 783.27143 valid_loss: 786.46676
Validation loss decreased (821.545844 --> 786.466757). Saving model ...
[ 21/8000] train_loss: 748.77796 valid_loss: 751.14131
Validation loss decreased (786.466757 --> 751.141312). Saving model ...
[ 22/8000] train_loss: 714.43061 valid_loss: 715.87056
Validation loss decreased (751.141312 --> 715.870563). Saving model ...
[ 23/8000] train_loss: 679.53212 valid_loss: 680.92294
Validation loss decreased (715.870563 --> 680.922943). Saving model ...
[ 24/8000] train_loss: 646.22463 valid_loss: 646.67488
Validation loss decreased (680.922943 --> 646.674881). Saving model ...
[ 25/8000] train_loss: 612.88898 valid_loss: 613.34336
Validation loss decreased (646.674881 --> 613.343359). Saving model ...
[ 26/8000] train_loss: 581.15125 valid_loss: 581.18337
Validation loss decreased (613.343359 --> 581.183368). Saving model ...
[ 27/8000] train_loss: 549.98101 valid_loss: 550.31610
Validation loss decreased (581.183368 --> 550.316096). Saving model ...
[ 28/8000] train_loss: 520.91777 valid_loss: 520.93268
Validation loss decreased (550.316096 --> 520.932680). Saving model ...
[ 29/8000] train_loss: 491.90691 valid_loss: 493.11408
Validation loss decreased (520.932680 --> 493.114081). Saving model ...
[ 30/8000] train_loss: 465.86816 valid_loss: 466.93592
Validation loss decreased (493.114081 --> 466.935916). Saving model ...
[ 31/8000] train_loss: 441.40827 valid_loss: 442.47712
Validation loss decreased (466.935916 --> 442.477116). Saving model ...
[ 32/8000] train_loss: 418.40047 valid_loss: 419.71220
Validation loss decreased (442.477116 --> 419.712204). Saving model ...
[ 33/8000] train_loss: 396.88765 valid_loss: 398.57272
Validation loss decreased (419.712204 --> 398.572723). Saving model ...
[ 34/8000] train_loss: 377.32586 valid_loss: 378.93554
Validation loss decreased (398.572723 --> 378.935538). Saving model ...
[ 35/8000] train_loss: 358.86552 valid_loss: 360.84236
Validation loss decreased (378.935538 --> 360.842355). Saving model ...
[ 36/8000] train_loss: 341.71449 valid_loss: 344.37979
Validation loss decreased (360.842355 --> 344.379793). Saving model ...
[ 37/8000] train_loss: 326.12762 valid_loss: 329.43843
Validation loss decreased (344.379793 --> 329.438432). Saving model ...
[ 38/8000] train_loss: 312.50191 valid_loss: 315.96290
Validation loss decreased (329.438432 --> 315.962903). Saving model ...
[ 39/8000] train_loss: 300.17501 valid_loss: 303.89313
Validation loss decreased (315.962903 --> 303.893133). Saving model ...
[ 40/8000] train_loss: 288.82426 valid_loss: 293.11485
Validation loss decreased (303.893133 --> 293.114851). Saving model ...
[ 41/8000] train_loss: 278.64353 valid_loss: 283.49497
Validation loss decreased (293.114851 --> 283.494968). Saving model ...
[ 42/8000] train_loss: 267.68435 valid_loss: 275.00098
Validation loss decreased (283.494968 --> 275.000976). Saving model ...
[ 43/8000] train_loss: 260.27083 valid_loss: 267.47750
Validation loss decreased (275.000976 --> 267.477496). Saving model ...
[ 44/8000] train_loss: 251.88078 valid_loss: 260.86574
Validation loss decreased (267.477496 --> 260.865740). Saving model ...
[ 45/8000] train_loss: 245.48147 valid_loss: 255.08604
Validation loss decreased (260.865740 --> 255.086035). Saving model ...
[ 46/8000] train_loss: 238.86384 valid_loss: 250.04313
Validation loss decreased (255.086035 --> 250.043130). Saving model ...
[ 47/8000] train_loss: 233.89637 valid_loss: 245.65707
Validation loss decreased (250.043130 --> 245.657074). Saving model ...
[ 48/8000] train_loss: 228.25081 valid_loss: 241.87214
Validation loss decreased (245.657074 --> 241.872140). Saving model ...
[ 49/8000] train_loss: 224.58804 valid_loss: 238.60679
Validation loss decreased (241.872140 --> 238.606786). Saving model ...
[ 50/8000] train_loss: 220.70823 valid_loss: 235.80303
Validation loss decreased (238.606786 --> 235.803033). Saving model ...
[ 51/8000] train_loss: 216.37682 valid_loss: 233.39751
Validation loss decreased (235.803033 --> 233.397511). Saving model ...
[ 52/8000] train_loss: 214.13555 valid_loss: 231.37668
Validation loss decreased (233.397511 --> 231.376677). Saving model ...
[ 53/8000] train_loss: 210.82609 valid_loss: 229.66407
Validation loss decreased (231.376677 --> 229.664072). Saving model ...
[ 54/8000] train_loss: 208.81451 valid_loss: 228.22115
Validation loss decreased (229.664072 --> 228.221146). Saving model ...
[ 55/8000] train_loss: 206.51740 valid_loss: 227.01063
Validation loss decreased (228.221146 --> 227.010627). Saving model ...
[ 56/8000] train_loss: 205.18482 valid_loss: 225.99917
Validation loss decreased (227.010627 --> 225.999166). Saving model ...
[ 57/8000] train_loss: 201.94092 valid_loss: 225.15832
Validation loss decreased (225.999166 --> 225.158320). Saving model ...
[ 58/8000] train_loss: 201.47846 valid_loss: 224.46023
Validation loss decreased (225.158320 --> 224.460232). Saving model ...
[ 59/8000] train_loss: 199.51876 valid_loss: 223.88483
Validation loss decreased (224.460232 --> 223.884835). Saving model ...
[ 60/8000] train_loss: 197.88080 valid_loss: 223.41258
Validation loss decreased (223.884835 --> 223.412582). Saving model ...
[ 61/8000] train_loss: 197.35481 valid_loss: 223.02215
Validation loss decreased (223.412582 --> 223.022152). Saving model ...
[ 62/8000] train_loss: 196.03673 valid_loss: 222.69644
Validation loss decreased (223.022152 --> 222.696439). Saving model ...
[ 63/8000] train_loss: 195.59994 valid_loss: 222.42900
Validation loss decreased (222.696439 --> 222.428999). Saving model ...
[ 64/8000] train_loss: 194.83756 valid_loss: 222.20820
Validation loss decreased (222.428999 --> 222.208202). Saving model ...
[ 65/8000] train_loss: 194.11020 valid_loss: 222.01828
Validation loss decreased (222.208202 --> 222.018282). Saving model ...
[ 66/8000] train_loss: 193.03170 valid_loss: 221.85742
Validation loss decreased (222.018282 --> 221.857421). Saving model ...
[ 67/8000] train_loss: 192.11821 valid_loss: 221.71864
Validation loss decreased (221.857421 --> 221.718636). Saving model ...
[ 68/8000] train_loss: 192.32473 valid_loss: 221.60631
Validation loss decreased (221.718636 --> 221.606307). Saving model ...
[ 69/8000] train_loss: 191.82731 valid_loss: 221.50341
Validation loss decreased (221.606307 --> 221.503415). Saving model ...
[ 70/8000] train_loss: 191.38897 valid_loss: 221.41502
Validation loss decreased (221.503415 --> 221.415024). Saving model ...
[ 71/8000] train_loss: 190.32599 valid_loss: 221.33057
Validation loss decreased (221.415024 --> 221.330574). Saving model ...
[ 72/8000] train_loss: 189.98166 valid_loss: 221.25624
Validation loss decreased (221.330574 --> 221.256242). Saving model ...
[ 73/8000] train_loss: 189.70923 valid_loss: 221.19642
Validation loss decreased (221.256242 --> 221.196421). Saving model ...
[ 74/8000] train_loss: 189.31760 valid_loss: 221.14436
Validation loss decreased (221.196421 --> 221.144356). Saving model ...
[ 75/8000] train_loss: 187.51579 valid_loss: 221.08010
Validation loss decreased (221.144356 --> 221.080100). Saving model ...
[ 76/8000] train_loss: 188.76265 valid_loss: 221.04039
Validation loss decreased (221.080100 --> 221.040391). Saving model ...
[ 77/8000] train_loss: 188.49626 valid_loss: 220.99875
Validation loss decreased (221.040391 --> 220.998746). Saving model ...
[ 78/8000] train_loss: 187.89425 valid_loss: 220.96813
Validation loss decreased (220.998746 --> 220.968135). Saving model ...
[ 79/8000] train_loss: 187.79085 valid_loss: 220.93316
Validation loss decreased (220.968135 --> 220.933157). Saving model ...
[ 80/8000] train_loss: 188.04346 valid_loss: 220.90737
Validation loss decreased (220.933157 --> 220.907368). Saving model ...
[ 81/8000] train_loss: 186.78565 valid_loss: 220.87238
Validation loss decreased (220.907368 --> 220.872381). Saving model ...
[ 82/8000] train_loss: 187.06971 valid_loss: 220.84968
Validation loss decreased (220.872381 --> 220.849680). Saving model ...
[ 83/8000] train_loss: 187.38856 valid_loss: 220.84619
Validation loss decreased (220.849680 --> 220.846186). Saving model ...
[ 84/8000] train_loss: 186.50198 valid_loss: 220.81998
Validation loss decreased (220.846186 --> 220.819985). Saving model ...
[ 85/8000] train_loss: 186.65134 valid_loss: 220.80665
Validation loss decreased (220.819985 --> 220.806647). Saving model ...
[ 86/8000] train_loss: 186.10312 valid_loss: 220.76991
Validation loss decreased (220.806647 --> 220.769911). Saving model ...
[ 87/8000] train_loss: 186.28247 valid_loss: 220.75814
Validation loss decreased (220.769911 --> 220.758142). Saving model ...
[ 88/8000] train_loss: 186.47653 valid_loss: 220.74611
Validation loss decreased (220.758142 --> 220.746115). Saving model ...
[ 89/8000] train_loss: 186.07126 valid_loss: 220.73392
Validation loss decreased (220.746115 --> 220.733916). Saving model ...
[ 90/8000] train_loss: 186.49681 valid_loss: 220.72916
Validation loss decreased (220.733916 --> 220.729158). Saving model ...
[ 91/8000] train_loss: 185.91303 valid_loss: 220.71497
Validation loss decreased (220.729158 --> 220.714965). Saving model ...
[ 92/8000] train_loss: 185.63906 valid_loss: 220.69587
Validation loss decreased (220.714965 --> 220.695866). Saving model ...
[ 93/8000] train_loss: 186.23780 valid_loss: 220.69680
EarlyStopping counter: 1 out of 1000
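The `EarlyStopping counter: 1 out of 1000` line indicates patience-based early stopping: training halts only after 1000 consecutive epochs without a new best validation loss, and the counter resets to zero whenever the loss improves (as it does again on the next epoch). A minimal sketch of that bookkeeping, with the class name and interface assumed rather than taken from the repository:

```python
class EarlyStopping:
    def __init__(self, patience=1000):
        self.patience = patience
        self.counter = 0
        self.best_loss = float("inf")
        self.stop = False

    def step(self, valid_loss):
        if valid_loss < self.best_loss:
            # New best: this is where the model checkpoint would be saved.
            self.best_loss = valid_loss
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                self.stop = True

stopper = EarlyStopping(patience=3)
for loss in [10.0, 9.0, 9.5, 9.4, 9.3]:  # one improvement, then a plateau
    stopper.step(loss)
print(stopper.counter, stopper.stop)  # 3 True
```

With a patience of 1000 against an 8000-epoch budget, the run above tolerates very long plateaus before giving up, which fits the slowly decreasing validation losses in this log.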
[ 94/8000] train_loss: 186.07539 valid_loss: 220.69079
Validation loss decreased (220.695866 --> 220.690793). Saving model ...
[ 95/8000] train_loss: 185.08168 valid_loss: 220.65661
Validation loss decreased (220.690793 --> 220.656609). Saving model ...
[ 96/8000] train_loss: 185.07447 valid_loss: 220.62811
Validation loss decreased (220.656609 --> 220.628107). Saving model ...
[ 97/8000] train_loss: 185.68714 valid_loss: 220.61778
Validation loss decreased (220.628107 --> 220.617781). Saving model ...
[ 98/8000] train_loss: 184.98536 valid_loss: 220.59437
Validation loss decreased (220.617781 --> 220.594375). Saving model ...
[ 99/8000] train_loss: 185.26646 valid_loss: 220.58101
Validation loss decreased (220.594375 --> 220.581008). Saving model ...
[ 100/8000] train_loss: 185.40181 valid_loss: 220.56570
Validation loss decreased (220.581008 --> 220.565703). Saving model ...
[ 101/8000] train_loss: 185.05352 valid_loss: 220.53766
Validation loss decreased (220.565703 --> 220.537663). Saving model ...
[ 102/8000] train_loss: 185.22825 valid_loss: 220.52465
Validation loss decreased (220.537663 --> 220.524652). Saving model ...
[ 103/8000] train_loss: 184.42743 valid_loss: 220.48395
Validation loss decreased (220.524652 --> 220.483951). Saving model ...
[ 104/8000] train_loss: 184.72587 valid_loss: 220.46119
Validation loss decreased (220.483951 --> 220.461191). Saving model ...
[ 105/8000] train_loss: 184.70656 valid_loss: 220.43450
Validation loss decreased (220.461191 --> 220.434496). Saving model ...
[ 106/8000] train_loss: 184.86956 valid_loss: 220.40901
Validation loss decreased (220.434496 --> 220.409010). Saving model ...
[ 107/8000] train_loss: 183.85362 valid_loss: 220.36973
Validation loss decreased (220.409010 --> 220.369727). Saving model ...
[ 108/8000] train_loss: 184.56939 valid_loss: 220.33493
Validation loss decreased (220.369727 --> 220.334929). Saving model ...
[ 109/8000] train_loss: 184.99839 valid_loss: 220.31593
Validation loss decreased (220.334929 --> 220.315934). Saving model ...
[ 110/8000] train_loss: 184.48426 valid_loss: 220.29062
Validation loss decreased (220.315934 --> 220.290616). Saving model ...
[ 111/8000] train_loss: 185.22451 valid_loss: 220.27230
Validation loss decreased (220.290616 --> 220.272299). Saving model ...
[ 112/8000] train_loss: 184.88271 valid_loss: 220.24355
Validation loss decreased (220.272299 --> 220.243554). Saving model ...
[ 113/8000] train_loss: 184.17742 valid_loss: 220.21072
Validation loss decreased (220.243554 --> 220.210718). Saving model ...
[ 114/8000] train_loss: 184.41522 valid_loss: 220.17680
Validation loss decreased (220.210718 --> 220.176797). Saving model ...
[ 115/8000] train_loss: 184.34198 valid_loss: 220.14464
Validation loss decreased (220.176797 --> 220.144644). Saving model ...
[ 116/8000] train_loss: 183.84747 valid_loss: 220.10672
Validation loss decreased (220.144644 --> 220.106719). Saving model ...
[ 117/8000] train_loss: 184.06636 valid_loss: 220.07266
Validation loss decreased (220.106719 --> 220.072665). Saving model ...
[ 118/8000] train_loss: 184.13449 valid_loss: 220.03806
Validation loss decreased (220.072665 --> 220.038062). Saving model ...
[ 119/8000] train_loss: 183.72206 valid_loss: 219.99958
Validation loss decreased (220.038062 --> 219.999584). Saving model ...
[ 120/8000] train_loss: 183.46468 valid_loss: 219.95137
Validation loss decreased (219.999584 --> 219.951368). Saving model ...
[ 121/8000] train_loss: 183.87176 valid_loss: 219.91643
Validation loss decreased (219.951368 --> 219.916425). Saving model ...
[ 122/8000] train_loss: 184.19734 valid_loss: 219.88783
Validation loss decreased (219.916425 --> 219.887827). Saving model ...
[ 123/8000] train_loss: 183.68383 valid_loss: 219.84720
Validation loss decreased (219.887827 --> 219.847197). Saving model ...
[ 124/8000] train_loss: 183.79355 valid_loss: 219.80760
Validation loss decreased (219.847197 --> 219.807603). Saving model ...
[ 125/8000] train_loss: 184.21460 valid_loss: 219.77999
Validation loss decreased (219.807603 --> 219.779990). Saving model ...
[ 126/8000] train_loss: 183.76783 valid_loss: 219.74330
Validation loss decreased (219.779990 --> 219.743299). Saving model ...
[ 127/8000] train_loss: 184.03101 valid_loss: 219.70985
Validation loss decreased (219.743299 --> 219.709849). Saving model ...
[ 128/8000] train_loss: 183.53350 valid_loss: 219.67190
Validation loss decreased (219.709849 --> 219.671899). Saving model ...
[ 129/8000] train_loss: 183.90728 valid_loss: 219.63785
Validation loss decreased (219.671899 --> 219.637853). Saving model ...
[ 130/8000] train_loss: 183.83117 valid_loss: 219.60225
Validation loss decreased (219.637853 --> 219.602246). Saving model ...
[ 131/8000] train_loss: 183.79297 valid_loss: 219.56589
Validation loss decreased (219.602246 --> 219.565887). Saving model ...
[ 132/8000] train_loss: 183.61822 valid_loss: 219.52799
Validation loss decreased (219.565887 --> 219.527988). Saving model ...
[ 133/8000] train_loss: 183.62758 valid_loss: 219.48972
Validation loss decreased (219.527988 --> 219.489722). Saving model ...
[ 134/8000] train_loss: 183.29080 valid_loss: 219.44793
Validation loss decreased (219.489722 --> 219.447928). Saving model ...
[ 135/8000] train_loss: 183.25681 valid_loss: 219.40590
Validation loss decreased (219.447928 --> 219.405902). Saving model ...
[ 136/8000] train_loss: 183.46476 valid_loss: 219.36167
Validation loss decreased (219.405902 --> 219.361668). Saving model ...
[ 137/8000] train_loss: 183.47600 valid_loss: 219.32846
Validation loss decreased (219.361668 --> 219.328462). Saving model ...
[ 138/8000] train_loss: 183.44563 valid_loss: 219.29249
Validation loss decreased (219.328462 --> 219.292493). Saving model ...
[ 139/8000] train_loss: 182.89999 valid_loss: 219.24188
Validation loss decreased (219.292493 --> 219.241880). Saving model ...
[ 140/8000] train_loss: 182.86268 valid_loss: 219.19391
Validation loss decreased (219.241880 --> 219.193909). Saving model ...
[ 141/8000] train_loss: 183.04053 valid_loss: 219.14954
Validation loss decreased (219.193909 --> 219.149542). Saving model ...
[ 142/8000] train_loss: 183.09077 valid_loss: 219.11003
Validation loss decreased (219.149542 --> 219.110029). Saving model ...
[ 143/8000] train_loss: 183.28651 valid_loss: 219.07543
Validation loss decreased (219.110029 --> 219.075427). Saving model ...
[ 144/8000] train_loss: 182.99838 valid_loss: 219.03520
Validation loss decreased (219.075427 --> 219.035197). Saving model ...
[ 145/8000] train_loss: 182.87323 valid_loss: 218.99303
Validation loss decreased (219.035197 --> 218.993029). Saving model ...
[ 146/8000] train_loss: 182.93135 valid_loss: 218.95590
Validation loss decreased (218.993029 --> 218.955905). Saving model ...
[ 147/8000] train_loss: 182.87521 valid_loss: 218.91537
Validation loss decreased (218.955905 --> 218.915369). Saving model ...
[ 148/8000] train_loss: 182.87937 valid_loss: 218.87083
Validation loss decreased (218.915369 --> 218.870832). Saving model ...
[ 149/8000] train_loss: 182.86719 valid_loss: 218.83226
Validation loss decreased (218.870832 --> 218.832258). Saving model ...
[ 150/8000] train_loss: 183.22229 valid_loss: 218.80301
Validation loss decreased (218.832258 --> 218.803015). Saving model ...
[ 151/8000] train_loss: 183.40664 valid_loss: 218.77413
Validation loss decreased (218.803015 --> 218.774127). Saving model ...
[ 152/8000] train_loss: 183.07574 valid_loss: 218.74013
Validation loss decreased (218.774127 --> 218.740126). Saving model ...
[ 153/8000] train_loss: 182.42096 valid_loss: 218.69291
Validation loss decreased (218.740126 --> 218.692911). Saving model ...
[ 154/8000] train_loss: 182.82654 valid_loss: 218.65476
Validation loss decreased (218.692911 --> 218.654755). Saving model ...
[ 155/8000] train_loss: 182.73238 valid_loss: 218.61027
Validation loss decreased (218.654755 --> 218.610268). Saving model ...
[ 156/8000] train_loss: 182.78934 valid_loss: 218.57006
Validation loss decreased (218.610268 --> 218.570060). Saving model ...
[ 157/8000] train_loss: 182.50939 valid_loss: 218.52557
Validation loss decreased (218.570060 --> 218.525573). Saving model ...
[ 158/8000] train_loss: 183.07425 valid_loss: 218.48938
Validation loss decreased (218.525573 --> 218.489375). Saving model ...
[ 159/8000] train_loss: 182.37559 valid_loss: 218.44014
Validation loss decreased (218.489375 --> 218.440137). Saving model ...
[ 160/8000] train_loss: 182.54683 valid_loss: 218.39508
Validation loss decreased (218.440137 --> 218.395080). Saving model ...
[ 161/8000] train_loss: 182.21330 valid_loss: 218.34489
Validation loss decreased (218.395080 --> 218.344893). Saving model ...
[ 162/8000] train_loss: 182.41355 valid_loss: 218.30733
Validation loss decreased (218.344893 --> 218.307331). Saving model ...
[ 163/8000] train_loss: 182.64646 valid_loss: 218.27356
Validation loss decreased (218.307331 --> 218.273563). Saving model ...
[ 164/8000] train_loss: 182.09320 valid_loss: 218.22783
Validation loss decreased (218.273563 --> 218.227825). Saving model ...
[ 165/8000] train_loss: 181.97811 valid_loss: 218.17073
Validation loss decreased (218.227825 --> 218.170729). Saving model ...
[ 166/8000] train_loss: 182.24425 valid_loss: 218.13054
Validation loss decreased (218.170729 --> 218.130539). Saving model ...
[ 167/8000] train_loss: 182.66197 valid_loss: 218.09483
Validation loss decreased (218.130539 --> 218.094826). Saving model ...
[ 168/8000] train_loss: 182.22358 valid_loss: 218.05064
Validation loss decreased (218.094826 --> 218.050636). Saving model ...
[ 169/8000] train_loss: 182.34729 valid_loss: 218.01420
Validation loss decreased (218.050636 --> 218.014197). Saving model ...
[ 170/8000] train_loss: 182.49573 valid_loss: 217.97669
Validation loss decreased (218.014197 --> 217.976685). Saving model ...
[ 171/8000] train_loss: 182.42913 valid_loss: 217.94500
Validation loss decreased (217.976685 --> 217.945004). Saving model ...
[ 172/8000] train_loss: 182.06789 valid_loss: 217.89984
Validation loss decreased (217.945004 --> 217.899841). Saving model ...
[ 173/8000] train_loss: 182.59659 valid_loss: 217.87602
Validation loss decreased (217.899841 --> 217.876022). Saving model ...
[ 174/8000] train_loss: 181.59246 valid_loss: 217.81930
Validation loss decreased (217.876022 --> 217.819305). Saving model ...
[ 175/8000] train_loss: 181.74139 valid_loss: 217.76743
Validation loss decreased (217.819305 --> 217.767429). Saving model ...
[ 176/8000] train_loss: 182.65408 valid_loss: 217.74394
Validation loss decreased (217.767429 --> 217.743939). Saving model ...
[ 177/8000] train_loss: 182.21697 valid_loss: 217.70880
Validation loss decreased (217.743939 --> 217.708804). Saving model ...
[ 178/8000] train_loss: 181.99826 valid_loss: 217.67079
Validation loss decreased (217.708804 --> 217.670790). Saving model ...
[ 179/8000] train_loss: 181.98459 valid_loss: 217.62652
Validation loss decreased (217.670790 --> 217.626517). Saving model ...
[ 180/8000] train_loss: 182.03381 valid_loss: 217.58822
Validation loss decreased (217.626517 --> 217.588222). Saving model ...
[ 181/8000] train_loss: 181.83129 valid_loss: 217.54938
Validation loss decreased (217.588222 --> 217.549385). Saving model ...
[ 182/8000] train_loss: 182.00973 valid_loss: 217.51460
Validation loss decreased (217.549385 --> 217.514600). Saving model ...
[ 183/8000] train_loss: 182.04072 valid_loss: 217.47850
Validation loss decreased (217.514600 --> 217.478500). Saving model ...
[ 184/8000] train_loss: 181.80721 valid_loss: 217.43917
Validation loss decreased (217.478500 --> 217.439167). Saving model ...
[ 185/8000] train_loss: 181.92709 valid_loss: 217.40301
Validation loss decreased (217.439167 --> 217.403015). Saving model ...
[ 186/8000] train_loss: 181.43618 valid_loss: 217.35507
Validation loss decreased (217.403015 --> 217.355066). Saving model ...
[ 187/8000] train_loss: 182.23617 valid_loss: 217.32360
Validation loss decreased (217.355066 --> 217.323603). Saving model ...
[ 188/8000] train_loss: 181.61868 valid_loss: 217.28088
Validation loss decreased (217.323603 --> 217.280876). Saving model ...
[ 189/8000] train_loss: 181.74635 valid_loss: 217.24288
Validation loss decreased (217.280876 --> 217.242875). Saving model ...
[ 190/8000] train_loss: 181.50408 valid_loss: 217.20439
Validation loss decreased (217.242875 --> 217.204388). Saving model ...
[ 191/8000] train_loss: 181.60832 valid_loss: 217.16474
Validation loss decreased (217.204388 --> 217.164737). Saving model ...
[ 192/8000] train_loss: 181.49335 valid_loss: 217.12882
Validation loss decreased (217.164737 --> 217.128819). Saving model ...
[ 193/8000] train_loss: 181.64694 valid_loss: 217.07851
Validation loss decreased (217.128819 --> 217.078511). Saving model ...
[ 194/8000] train_loss: 181.53661 valid_loss: 217.04124
Validation loss decreased (217.078511 --> 217.041240). Saving model ...
[ 195/8000] train_loss: 181.61589 valid_loss: 216.99820
Validation loss decreased (217.041240 --> 216.998197). Saving model ...
[ 196/8000] train_loss: 181.91879 valid_loss: 216.97303
Validation loss decreased (216.998197 --> 216.973034). Saving model ...
[ 197/8000] train_loss: 181.06126 valid_loss: 216.92744
Validation loss decreased (216.973034 --> 216.927443). Saving model ...
[ 198/8000] train_loss: 181.61523 valid_loss: 216.89010
Validation loss decreased (216.927443 --> 216.890103). Saving model ...
[ 199/8000] train_loss: 181.34018 valid_loss: 216.84689
Validation loss decreased (216.890103 --> 216.846888). Saving model ...
[ 200/8000] train_loss: 180.91844 valid_loss: 216.80522
Validation loss decreased (216.846888 --> 216.805220). Saving model ...
[ 201/8000] train_loss: 181.65071 valid_loss: 216.77033
Validation loss decreased (216.805220 --> 216.770331). Saving model ...
[ 202/8000] train_loss: 181.27975 valid_loss: 216.73190
Validation loss decreased (216.770331 --> 216.731902). Saving model ...
[ 203/8000] train_loss: 181.28121 valid_loss: 216.69573
Validation loss decreased (216.731902 --> 216.695731). Saving model ...
[ 204/8000] train_loss: 181.20682 valid_loss: 216.64744
Validation loss decreased (216.695731 --> 216.647443). Saving model ...
[ 205/8000] train_loss: 181.20877 valid_loss: 216.60986
Validation loss decreased (216.647443 --> 216.609864). Saving model ...
[ 206/8000] train_loss: 181.12220 valid_loss: 216.56785
Validation loss decreased (216.609864 --> 216.567846). Saving model ...
[ 207/8000] train_loss: 181.03939 valid_loss: 216.52586
Validation loss decreased (216.567846 --> 216.525863). Saving model ...
[ 208/8000] train_loss: 181.76379 valid_loss: 216.49445
Validation loss decreased (216.525863 --> 216.494453). Saving model ...
[ 209/8000] train_loss: 180.67689 valid_loss: 216.45108
Validation loss decreased (216.494453 --> 216.451075). Saving model ...
[ 210/8000] train_loss: 180.47773 valid_loss: 216.39718
Validation loss decreased (216.451075 --> 216.397182). Saving model ...
[ 211/8000] train_loss: 181.22738 valid_loss: 216.36810
Validation loss decreased (216.397182 --> 216.368099). Saving model ...
[ 212/8000] train_loss: 181.01587 valid_loss: 216.32031
Validation loss decreased (216.368099 --> 216.320311). Saving model ...
[ 213/8000] train_loss: 180.81092 valid_loss: 216.28571
Validation loss decreased (216.320311 --> 216.285710). Saving model ...
[ 214/8000] train_loss: 181.00699 valid_loss: 216.25857
Validation loss decreased (216.285710 --> 216.258569). Saving model ...
[ 215/8000] train_loss: 180.57687 valid_loss: 216.21103
Validation loss decreased (216.258569 --> 216.211031). Saving model ...
[ 216/8000] train_loss: 180.80934 valid_loss: 216.16131
Validation loss decreased (216.211031 --> 216.161308). Saving model ...
[ 217/8000] train_loss: 180.91712 valid_loss: 216.13290
Validation loss decreased (216.161308 --> 216.132897). Saving model ...
[ 218/8000] train_loss: 180.52286 valid_loss: 216.08829
Validation loss decreased (216.132897 --> 216.088290). Saving model ...
[ 219/8000] train_loss: 181.29424 valid_loss: 216.06019
Validation loss decreased (216.088290 --> 216.060192). Saving model ...
[ 220/8000] train_loss: 181.03767 valid_loss: 216.02449
Validation loss decreased (216.060192 --> 216.024492). Saving model ...
[ 221/8000] train_loss: 180.79989 valid_loss: 215.98899
Validation loss decreased (216.024492 --> 215.988995). Saving model ...
[ 222/8000] train_loss: 180.27539 valid_loss: 215.94623
Validation loss decreased (215.988995 --> 215.946233). Saving model ...
[ 223/8000] train_loss: 180.56069 valid_loss: 215.89821
Validation loss decreased (215.946233 --> 215.898206). Saving model ...
[ 224/8000] train_loss: 180.82112 valid_loss: 215.87471
Validation loss decreased (215.898206 --> 215.874712). Saving model ...
[ 225/8000] train_loss: 180.87470 valid_loss: 215.83264
Validation loss decreased (215.874712 --> 215.832642). Saving model ...
[ 226/8000] train_loss: 180.75504 valid_loss: 215.79772
Validation loss decreased (215.832642 --> 215.797719). Saving model ...
[ 227/8000] train_loss: 180.61419 valid_loss: 215.75423
Validation loss decreased (215.797719 --> 215.754226). Saving model ...
[ 228/8000] train_loss: 180.32220 valid_loss: 215.70768
Validation loss decreased (215.754226 --> 215.707681). Saving model ...
[ 229/8000] train_loss: 180.59679 valid_loss: 215.67349
Validation loss decreased (215.707681 --> 215.673491). Saving model ...
[ 230/8000] train_loss: 180.74883 valid_loss: 215.64214
Validation loss decreased (215.673491 --> 215.642145). Saving model ...
[ 231/8000] train_loss: 180.55258 valid_loss: 215.59336
Validation loss decreased (215.642145 --> 215.593361). Saving model ...
[ 232/8000] train_loss: 180.59674 valid_loss: 215.55956
Validation loss decreased (215.593361 --> 215.559565). Saving model ...
[ 233/8000] train_loss: 180.50842 valid_loss: 215.52544
Validation loss decreased (215.559565 --> 215.525438). Saving model ...
[ 234/8000] train_loss: 179.95665 valid_loss: 215.46960
Validation loss decreased (215.525438 --> 215.469598). Saving model ...
[ 235/8000] train_loss: 180.53363 valid_loss: 215.43924
Validation loss decreased (215.469598 --> 215.439243). Saving model ...
[ 236/8000] train_loss: 180.43785 valid_loss: 215.40083
Validation loss decreased (215.439243 --> 215.400830). Saving model ...
[ 237/8000] train_loss: 180.18222 valid_loss: 215.34515
Validation loss decreased (215.400830 --> 215.345147). Saving model ...
[ 238/8000] train_loss: 180.66889 valid_loss: 215.31482
Validation loss decreased (215.345147 --> 215.314818). Saving model ...
[ 239/8000] train_loss: 180.54221 valid_loss: 215.28135
Validation loss decreased (215.314818 --> 215.281347). Saving model ...
[ 240/8000] train_loss: 180.11380 valid_loss: 215.23648
Validation loss decreased (215.281347 --> 215.236478). Saving model ...
[ 241/8000] train_loss: 180.57488 valid_loss: 215.19884
Validation loss decreased (215.236478 --> 215.198837). Saving model ...
[ 242/8000] train_loss: 180.27380 valid_loss: 215.16018
Validation loss decreased (215.198837 --> 215.160180). Saving model ...
[ 243/8000] train_loss: 179.95979 valid_loss: 215.11464
Validation loss decreased (215.160180 --> 215.114644). Saving model ...
[ 244/8000] train_loss: 180.33822 valid_loss: 215.07582
Validation loss decreased (215.114644 --> 215.075823). Saving model ...
[ 245/8000] train_loss: 180.24148 valid_loss: 215.04510
Validation loss decreased (215.075823 --> 215.045102). Saving model ...
[ 246/8000] train_loss: 179.97260 valid_loss: 214.99861
Validation loss decreased (215.045102 --> 214.998608). Saving model ...
[ 247/8000] train_loss: 179.97906 valid_loss: 214.95332
Validation loss decreased (214.998608 --> 214.953316). Saving model ...
[ 248/8000] train_loss: 180.29787 valid_loss: 214.92621
Validation loss decreased (214.953316 --> 214.926213). Saving model ...
[ 249/8000] train_loss: 180.29218 valid_loss: 214.88498
Validation loss decreased (214.926213 --> 214.884978). Saving model ...
[ 250/8000] train_loss: 180.16689 valid_loss: 214.84573
Validation loss decreased (214.884978 --> 214.845733). Saving model ...
[ 251/8000] train_loss: 179.87613 valid_loss: 214.80355
Validation loss decreased (214.845733 --> 214.803549). Saving model ...
[ 252/8000] train_loss: 180.33417 valid_loss: 214.76185
Validation loss decreased (214.803549 --> 214.761849). Saving model ...
[ 253/8000] train_loss: 179.85850 valid_loss: 214.72391
Validation loss decreased (214.761849 --> 214.723913). Saving model ...
[ 254/8000] train_loss: 179.77773 valid_loss: 214.67654
Validation loss decreased (214.723913 --> 214.676538). Saving model ...
[ 255/8000] train_loss: 180.17773 valid_loss: 214.65365
Validation loss decreased (214.676538 --> 214.653646). Saving model ...
[ 256/8000] train_loss: 180.07477 valid_loss: 214.60360
Validation loss decreased (214.653646 --> 214.603603). Saving model ...
[ 257/8000] train_loss: 179.70256 valid_loss: 214.56700
Validation loss decreased (214.603603 --> 214.566995). Saving model ...
[ 258/8000] train_loss: 180.33629 valid_loss: 214.53076
Validation loss decreased (214.566995 --> 214.530762). Saving model ...
[ 259/8000] train_loss: 179.53131 valid_loss: 214.48356
Validation loss decreased (214.530762 --> 214.483559). Saving model ...
[ 260/8000] train_loss: 179.41419 valid_loss: 214.42598
Validation loss decreased (214.483559 --> 214.425980). Saving model ...
[ 261/8000] train_loss: 179.84478 valid_loss: 214.39384
Validation loss decreased (214.425980 --> 214.393842). Saving model ...
[ 262/8000] train_loss: 179.35351 valid_loss: 214.33793
Validation loss decreased (214.393842 --> 214.337933). Saving model ...
[ 263/8000] train_loss: 179.52114 valid_loss: 214.28758
Validation loss decreased (214.337933 --> 214.287576). Saving model ...
[ 264/8000] train_loss: 178.95700 valid_loss: 214.22582
Validation loss decreased (214.287576 --> 214.225818). Saving model ...
[ 265/8000] train_loss: 179.77772 valid_loss: 214.19825
Validation loss decreased (214.225818 --> 214.198254). Saving model ...
[ 266/8000] train_loss: 179.53817 valid_loss: 214.14405
Validation loss decreased (214.198254 --> 214.144054). Saving model ...
[ 267/8000] train_loss: 179.48780 valid_loss: 214.10148
Validation loss decreased (214.144054 --> 214.101477). Saving model ...
[ 268/8000] train_loss: 179.79304 valid_loss: 214.07491
Validation loss decreased (214.101477 --> 214.074907). Saving model ...
[ 269/8000] train_loss: 179.60546 valid_loss: 214.03173
Validation loss decreased (214.074907 --> 214.031727). Saving model ...
[ 270/8000] train_loss: 179.13227 valid_loss: 213.98897
Validation loss decreased (214.031727 --> 213.988970). Saving model ...
[ 271/8000] train_loss: 179.98437 valid_loss: 213.96566
Validation loss decreased (213.988970 --> 213.965664). Saving model ...
[ 272/8000] train_loss: 179.29926 valid_loss: 213.91541
Validation loss decreased (213.965664 --> 213.915409). Saving model ...
[ 273/8000] train_loss: 179.08566 valid_loss: 213.85816
Validation loss decreased (213.915409 --> 213.858157). Saving model ...
[ 274/8000] train_loss: 179.07434 valid_loss: 213.80458
Validation loss decreased (213.858157 --> 213.804576). Saving model ...
[ 275/8000] train_loss: 179.01653 valid_loss: 213.75130
Validation loss decreased (213.804576 --> 213.751299). Saving model ...
[ 276/8000] train_loss: 179.24394 valid_loss: 213.71680
Validation loss decreased (213.751299 --> 213.716803). Saving model ...
[ 277/8000] train_loss: 179.00466 valid_loss: 213.66609
Validation loss decreased (213.716803 --> 213.666086). Saving model ...
[ 278/8000] train_loss: 179.08099 valid_loss: 213.62085
Validation loss decreased (213.666086 --> 213.620853). Saving model ...
[ 279/8000] train_loss: 179.28253 valid_loss: 213.58150
Validation loss decreased (213.620853 --> 213.581503). Saving model ...
[ 280/8000] train_loss: 179.04732 valid_loss: 213.54384
Validation loss decreased (213.581503 --> 213.543839). Saving model ...
[ 281/8000] train_loss: 178.95775 valid_loss: 213.50695
Validation loss decreased (213.543839 --> 213.506948). Saving model ...
[ 282/8000] train_loss: 179.18793 valid_loss: 213.46167
Validation loss decreased (213.506948 --> 213.461674). Saving model ...
[ 283/8000] train_loss: 178.88979 valid_loss: 213.42375
Validation loss decreased (213.461674 --> 213.423752). Saving model ...
[ 284/8000] train_loss: 178.90089 valid_loss: 213.37178
Validation loss decreased (213.423752 --> 213.371785). Saving model ...
[ 285/8000] train_loss: 179.22863 valid_loss: 213.33336
Validation loss decreased (213.371785 --> 213.333356). Saving model ...
[ 286/8000] train_loss: 178.86962 valid_loss: 213.28554
Validation loss decreased (213.333356 --> 213.285539). Saving model ...
[ 287/8000] train_loss: 178.91536 valid_loss: 213.24116
Validation loss decreased (213.285539 --> 213.241158). Saving model ...
[ 288/8000] train_loss: 179.13452 valid_loss: 213.21085
Validation loss decreased (213.241158 --> 213.210854). Saving model ...
[ 289/8000] train_loss: 178.68875 valid_loss: 213.16748
Validation loss decreased (213.210854 --> 213.167484). Saving model ...
[ 290/8000] train_loss: 178.29470 valid_loss: 213.10107
Validation loss decreased (213.167484 --> 213.101065). Saving model ...
[ 291/8000] train_loss: 178.95440 valid_loss: 213.05705
Validation loss decreased (213.101065 --> 213.057053). Saving model ...
[ 292/8000] train_loss: 178.44930 valid_loss: 213.00201
Validation loss decreased (213.057053 --> 213.002013). Saving model ...
[ 293/8000] train_loss: 178.85286 valid_loss: 212.96234
Validation loss decreased (213.002013 --> 212.962341). Saving model ...
[ 294/8000] train_loss: 178.57023 valid_loss: 212.91568
Validation loss decreased (212.962341 --> 212.915675). Saving model ...
[ 295/8000] train_loss: 179.13406 valid_loss: 212.87777
Validation loss decreased (212.915675 --> 212.877773). Saving model ...
[ 296/8000] train_loss: 178.45210 valid_loss: 212.83263
Validation loss decreased (212.877773 --> 212.832634). Saving model ...
[ 297/8000] train_loss: 178.02992 valid_loss: 212.78478
Validation loss decreased (212.832634 --> 212.784777). Saving model ...
[ 298/8000] train_loss: 178.29117 valid_loss: 212.73778
Validation loss decreased (212.784777 --> 212.737780). Saving model ...
[ 299/8000] train_loss: 178.19895 valid_loss: 212.67685
Validation loss decreased (212.737780 --> 212.676847). Saving model ...
[ 300/8000] train_loss: 178.92670 valid_loss: 212.64377
Validation loss decreased (212.676847 --> 212.643765). Saving model ...
[ 301/8000] train_loss: 178.75944 valid_loss: 212.60208
Validation loss decreased (212.643765 --> 212.602078). Saving model ...
[ 302/8000] train_loss: 178.47027 valid_loss: 212.54230
Validation loss decreased (212.602078 --> 212.542305). Saving model ...
[ 303/8000] train_loss: 178.45126 valid_loss: 212.48862
Validation loss decreased (212.542305 --> 212.488622). Saving model ...
[ 304/8000] train_loss: 178.70270 valid_loss: 212.45571
Validation loss decreased (212.488622 --> 212.455711). Saving model ...
[ 305/8000] train_loss: 178.28910 valid_loss: 212.40312
Validation loss decreased (212.455711 --> 212.403116). Saving model ...
[ 306/8000] train_loss: 178.30568 valid_loss: 212.34436
Validation loss decreased (212.403116 --> 212.344358). Saving model ...
[ 307/8000] train_loss: 178.25084 valid_loss: 212.29458
Validation loss decreased (212.344358 --> 212.294581). Saving model ...
[ 308/8000] train_loss: 178.61951 valid_loss: 212.26040
Validation loss decreased (212.294581 --> 212.260400). Saving model ...
[ 309/8000] train_loss: 178.54102 valid_loss: 212.22896
Validation loss decreased (212.260400 --> 212.228958). Saving model ...
[ 310/8000] train_loss: 178.17149 valid_loss: 212.18442
Validation loss decreased (212.228958 --> 212.184419). Saving model ...
[ 311/8000] train_loss: 177.78556 valid_loss: 212.12625
Validation loss decreased (212.184419 --> 212.126250). Saving model ...
[ 312/8000] train_loss: 178.32835 valid_loss: 212.08956
Validation loss decreased (212.126250 --> 212.089560). Saving model ...
[ 313/8000] train_loss: 177.65013 valid_loss: 212.02592
Validation loss decreased (212.089560 --> 212.025925). Saving model ...
[ 314/8000] train_loss: 178.54789 valid_loss: 211.98048
Validation loss decreased (212.025925 --> 211.980476). Saving model ...
[ 315/8000] train_loss: 178.14361 valid_loss: 211.94663
Validation loss decreased (211.980476 --> 211.946629). Saving model ...
[ 316/8000] train_loss: 178.19499 valid_loss: 211.90223
Validation loss decreased (211.946629 --> 211.902234). Saving model ...
[ 317/8000] train_loss: 178.60877 valid_loss: 211.86490
Validation loss decreased (211.902234 --> 211.864903). Saving model ...
[ 318/8000] train_loss: 177.50183 valid_loss: 211.79663
Validation loss decreased (211.864903 --> 211.796630). Saving model ...
[ 319/8000] train_loss: 177.86190 valid_loss: 211.74506
Validation loss decreased (211.796630 --> 211.745060). Saving model ...
[ 320/8000] train_loss: 177.47188 valid_loss: 211.68692
Validation loss decreased (211.745060 --> 211.686921). Saving model ...
[ 321/8000] train_loss: 177.54728 valid_loss: 211.61439
Validation loss decreased (211.686921 --> 211.614391). Saving model ...
[ 322/8000] train_loss: 177.72405 valid_loss: 211.57401
Validation loss decreased (211.614391 --> 211.574007). Saving model ...
[ 323/8000] train_loss: 177.82900 valid_loss: 211.54035
Validation loss decreased (211.574007 --> 211.540349). Saving model ...
[ 324/8000] train_loss: 177.64112 valid_loss: 211.48053
Validation loss decreased (211.540349 --> 211.480534). Saving model ...
[ 325/8000] train_loss: 177.46972 valid_loss: 211.43537
Validation loss decreased (211.480534 --> 211.435365). Saving model ...
[ 326/8000] train_loss: 177.42430 valid_loss: 211.36873
Validation loss decreased (211.435365 --> 211.368731). Saving model ...
[ 327/8000] train_loss: 177.87798 valid_loss: 211.31991
Validation loss decreased (211.368731 --> 211.319911). Saving model ...
[ 328/8000] train_loss: 177.07610 valid_loss: 211.25991
Validation loss decreased (211.319911 --> 211.259913). Saving model ...
[ 329/8000] train_loss: 177.54110 valid_loss: 211.20449
Validation loss decreased (211.259913 --> 211.204490). Saving model ...
[ 330/8000] train_loss: 177.83905 valid_loss: 211.17201
Validation loss decreased (211.204490 --> 211.172009). Saving model ...
[ 331/8000] train_loss: 177.34026 valid_loss: 211.10227
Validation loss decreased (211.172009 --> 211.102274). Saving model ...
[ 332/8000] train_loss: 177.03062 valid_loss: 211.04296
Validation loss decreased (211.102274 --> 211.042958). Saving model ...
[ 333/8000] train_loss: 177.20694 valid_loss: 210.98554
Validation loss decreased (211.042958 --> 210.985543). Saving model ...
[ 334/8000] train_loss: 177.86353 valid_loss: 210.94929
Validation loss decreased (210.985543 --> 210.949294). Saving model ...
[ 335/8000] train_loss: 177.38873 valid_loss: 210.89706
Validation loss decreased (210.949294 --> 210.897065). Saving model ...
[ 336/8000] train_loss: 177.44119 valid_loss: 210.86007
Validation loss decreased (210.897065 --> 210.860075). Saving model ...
[ 337/8000] train_loss: 177.42215 valid_loss: 210.80463
Validation loss decreased (210.860075 --> 210.804628). Saving model ...
[ 338/8000] train_loss: 177.04739 valid_loss: 210.75511
Validation loss decreased (210.804628 --> 210.755113). Saving model ...
[ 339/8000] train_loss: 177.31118 valid_loss: 210.70231
Validation loss decreased (210.755113 --> 210.702311). Saving model ...
[ 340/8000] train_loss: 177.44006 valid_loss: 210.66279
Validation loss decreased (210.702311 --> 210.662785). Saving model ...
[ 341/8000] train_loss: 176.75230 valid_loss: 210.59796
Validation loss decreased (210.662785 --> 210.597958). Saving model ...
[ 342/8000] train_loss: 177.04289 valid_loss: 210.53318
Validation loss decreased (210.597958 --> 210.533176). Saving model ...
[ 343/8000] train_loss: 176.61545 valid_loss: 210.47936
Validation loss decreased (210.533176 --> 210.479359). Saving model ...
[ 344/8000] train_loss: 177.50842 valid_loss: 210.43549
Validation loss decreased (210.479359 --> 210.435489). Saving model ...
[ 345/8000] train_loss: 176.52142 valid_loss: 210.36584
Validation loss decreased (210.435489 --> 210.365836). Saving model ...
[ 346/8000] train_loss: 176.84634 valid_loss: 210.31446
Validation loss decreased (210.365836 --> 210.314463). Saving model ...
[ 347/8000] train_loss: 177.20913 valid_loss: 210.27563
Validation loss decreased (210.314463 --> 210.275633). Saving model ...
[ 348/8000] train_loss: 176.44155 valid_loss: 210.20899
Validation loss decreased (210.275633 --> 210.208987). Saving model ...
[ 349/8000] train_loss: 177.11062 valid_loss: 210.16310
Validation loss decreased (210.208987 --> 210.163096). Saving model ...
[ 350/8000] train_loss: 176.76719 valid_loss: 210.09932
Validation loss decreased (210.163096 --> 210.099321). Saving model ...
[ 351/8000] train_loss: 177.14897 valid_loss: 210.04733
Validation loss decreased (210.099321 --> 210.047330). Saving model ...
[ 352/8000] train_loss: 176.84423 valid_loss: 209.99802
Validation loss decreased (210.047330 --> 209.998024). Saving model ...
[ 353/8000] train_loss: 176.54531 valid_loss: 209.95251
Validation loss decreased (209.998024 --> 209.952514). Saving model ...
[ 354/8000] train_loss: 176.67278 valid_loss: 209.89464
Validation loss decreased (209.952514 --> 209.894637). Saving model ...
[ 355/8000] train_loss: 176.69357 valid_loss: 209.83839
Validation loss decreased (209.894637 --> 209.838390). Saving model ...
[ 356/8000] train_loss: 176.17781 valid_loss: 209.76790
Validation loss decreased (209.838390 --> 209.767900). Saving model ...
[ 357/8000] train_loss: 176.76479 valid_loss: 209.71800
Validation loss decreased (209.767900 --> 209.717995). Saving model ...
[ 358/8000] train_loss: 176.48051 valid_loss: 209.67015
Validation loss decreased (209.717995 --> 209.670153). Saving model ...
[ 359/8000] train_loss: 176.81632 valid_loss: 209.62598
Validation loss decreased (209.670153 --> 209.625981). Saving model ...
[ 360/8000] train_loss: 176.47092 valid_loss: 209.56845
Validation loss decreased (209.625981 --> 209.568451). Saving model ...
[ 361/8000] train_loss: 176.29653 valid_loss: 209.50490
Validation loss decreased (209.568451 --> 209.504903). Saving model ...
[ 362/8000] train_loss: 176.03370 valid_loss: 209.43872
Validation loss decreased (209.504903 --> 209.438720). Saving model ...
[ 363/8000] train_loss: 176.33868 valid_loss: 209.39556
Validation loss decreased (209.438720 --> 209.395557). Saving model ...
[ 364/8000] train_loss: 176.57904 valid_loss: 209.34566
Validation loss decreased (209.395557 --> 209.345656). Saving model ...
[ 365/8000] train_loss: 176.36403 valid_loss: 209.27690
Validation loss decreased (209.345656 --> 209.276896). Saving model ...
[ 366/8000] train_loss: 176.26272 valid_loss: 209.23237
Validation loss decreased (209.276896 --> 209.232366). Saving model ...
[ 367/8000] train_loss: 176.37746 valid_loss: 209.18557
Validation loss decreased (209.232366 --> 209.185566). Saving model ...
[ 368/8000] train_loss: 176.08830 valid_loss: 209.12910
Validation loss decreased (209.185566 --> 209.129104). Saving model ...
[ 369/8000] train_loss: 176.22023 valid_loss: 209.07698
Validation loss decreased (209.129104 --> 209.076980). Saving model ...
[ 370/8000] train_loss: 175.95671 valid_loss: 209.02520
Validation loss decreased (209.076980 --> 209.025203). Saving model ...
[ 371/8000] train_loss: 176.31102 valid_loss: 208.97578
Validation loss decreased (209.025203 --> 208.975782). Saving model ...
[ 372/8000] train_loss: 176.13141 valid_loss: 208.91611
Validation loss decreased (208.975782 --> 208.916106). Saving model ...
[ 373/8000] train_loss: 175.66592 valid_loss: 208.86414
Validation loss decreased (208.916106 --> 208.864139). Saving model ...
[ 374/8000] train_loss: 176.17379 valid_loss: 208.79629
Validation loss decreased (208.864139 --> 208.796290). Saving model ...
[ 375/8000] train_loss: 176.12281 valid_loss: 208.74731
Validation loss decreased (208.796290 --> 208.747309). Saving model ...
[ 376/8000] train_loss: 175.79561 valid_loss: 208.68998
Validation loss decreased (208.747309 --> 208.689985). Saving model ...
[ 377/8000] train_loss: 176.01023 valid_loss: 208.63742
Validation loss decreased (208.689985 --> 208.637419). Saving model ...
[ 378/8000] train_loss: 175.63093 valid_loss: 208.56346
Validation loss decreased (208.637419 --> 208.563460). Saving model ...
[ 379/8000] train_loss: 175.70890 valid_loss: 208.51942
Validation loss decreased (208.563460 --> 208.519417). Saving model ...
[ 380/8000] train_loss: 175.75790 valid_loss: 208.46073
Validation loss decreased (208.519417 --> 208.460725). Saving model ...
[ 381/8000] train_loss: 175.76424 valid_loss: 208.39682
Validation loss decreased (208.460725 --> 208.396822). Saving model ...
[ 382/8000] train_loss: 175.62778 valid_loss: 208.33696
Validation loss decreased (208.396822 --> 208.336960). Saving model ...
[ 383/8000] train_loss: 176.18630 valid_loss: 208.29195
Validation loss decreased (208.336960 --> 208.291950). Saving model ...
[ 384/8000] train_loss: 175.57295 valid_loss: 208.23630
Validation loss decreased (208.291950 --> 208.236295). Saving model ...
[ 385/8000] train_loss: 175.43979 valid_loss: 208.18457
Validation loss decreased (208.236295 --> 208.184567). Saving model ...
[ 386/8000] train_loss: 175.98336 valid_loss: 208.13859
Validation loss decreased (208.184567 --> 208.138589). Saving model ...
[ 387/8000] train_loss: 175.55063 valid_loss: 208.06840
Validation loss decreased (208.138589 --> 208.068398). Saving model ...
[ 388/8000] train_loss: 175.64234 valid_loss: 208.00934
Validation loss decreased (208.068398 --> 208.009341). Saving model ...
[ 389/8000] train_loss: 175.56542 valid_loss: 207.95464
Validation loss decreased (208.009341 --> 207.954640). Saving model ...
[ 390/8000] train_loss: 175.75233 valid_loss: 207.90323
Validation loss decreased (207.954640 --> 207.903232). Saving model ...
[ 391/8000] train_loss: 175.35429 valid_loss: 207.84159
Validation loss decreased (207.903232 --> 207.841586). Saving model ...
[ 392/8000] train_loss: 174.97830 valid_loss: 207.78399
Validation loss decreased (207.841586 --> 207.783987). Saving model ...
[ 393/8000] train_loss: 175.19758 valid_loss: 207.72346
Validation loss decreased (207.783987 --> 207.723463). Saving model ...
[ 394/8000] train_loss: 175.18805 valid_loss: 207.66022
Validation loss decreased (207.723463 --> 207.660218). Saving model ...
[ 395/8000] train_loss: 175.41652 valid_loss: 207.59982
Validation loss decreased (207.660218 --> 207.599817). Saving model ...
[ 396/8000] train_loss: 175.27749 valid_loss: 207.55201
Validation loss decreased (207.599817 --> 207.552007). Saving model ...
[ 397/8000] train_loss: 175.31003 valid_loss: 207.48943
Validation loss decreased (207.552007 --> 207.489433). Saving model ...
[ 398/8000] train_loss: 175.10030 valid_loss: 207.41472
Validation loss decreased (207.489433 --> 207.414717). Saving model ...
[ 399/8000] train_loss: 174.96548 valid_loss: 207.35854
Validation loss decreased (207.414717 --> 207.358541). Saving model ...
[ 400/8000] train_loss: 174.85779 valid_loss: 207.29094
Validation loss decreased (207.358541 --> 207.290936). Saving model ...
[ 401/8000] train_loss: 174.91787 valid_loss: 207.21801
Validation loss decreased (207.290936 --> 207.218007). Saving model ...
[ 402/8000] train_loss: 174.97387 valid_loss: 207.15560
Validation loss decreased (207.218007 --> 207.155600). Saving model ...
[ 403/8000] train_loss: 175.18447 valid_loss: 207.11634
Validation loss decreased (207.155600 --> 207.116340). Saving model ...
[ 404/8000] train_loss: 174.71028 valid_loss: 207.04064
Validation loss decreased (207.116340 --> 207.040637). Saving model ...
[ 405/8000] train_loss: 174.59136 valid_loss: 206.98771
Validation loss decreased (207.040637 --> 206.987708). Saving model ...
[ 406/8000] train_loss: 174.83659 valid_loss: 206.91925
Validation loss decreased (206.987708 --> 206.919253). Saving model ...
[ 407/8000] train_loss: 175.11721 valid_loss: 206.86547
Validation loss decreased (206.919253 --> 206.865473). Saving model ...
[ 408/8000] train_loss: 174.34772 valid_loss: 206.79286
Validation loss decreased (206.865473 --> 206.792861). Saving model ...
[ 409/8000] train_loss: 174.81227 valid_loss: 206.72742
Validation loss decreased (206.792861 --> 206.727423). Saving model ...
[ 410/8000] train_loss: 174.94727 valid_loss: 206.69617
Validation loss decreased (206.727423 --> 206.696173). Saving model ...
[ 411/8000] train_loss: 174.58979 valid_loss: 206.63350
Validation loss decreased (206.696173 --> 206.633498). Saving model ...
[ 412/8000] train_loss: 174.07361 valid_loss: 206.54477
Validation loss decreased (206.633498 --> 206.544770). Saving model ...
[ 413/8000] train_loss: 174.79069 valid_loss: 206.49884
Validation loss decreased (206.544770 --> 206.498837). Saving model ...
[ 414/8000] train_loss: 174.18617 valid_loss: 206.42181
Validation loss decreased (206.498837 --> 206.421809). Saving model ...
[ 415/8000] train_loss: 174.83341 valid_loss: 206.37918
Validation loss decreased (206.421809 --> 206.379176). Saving model ...
[ 416/8000] train_loss: 173.97218 valid_loss: 206.31454
Validation loss decreased (206.379176 --> 206.314542). Saving model ...
[ 417/8000] train_loss: 174.15706 valid_loss: 206.23451
Validation loss decreased (206.314542 --> 206.234515). Saving model ...
[ 418/8000] train_loss: 174.54356 valid_loss: 206.18001
Validation loss decreased (206.234515 --> 206.180008). Saving model ...
[ 419/8000] train_loss: 174.44482 valid_loss: 206.13136
Validation loss decreased (206.180008 --> 206.131360). Saving model ...
[ 420/8000] train_loss: 174.73963 valid_loss: 206.07806
Validation loss decreased (206.131360 --> 206.078063). Saving model ...
[ 421/8000] train_loss: 174.33417 valid_loss: 206.01604
Validation loss decreased (206.078063 --> 206.016041). Saving model ...
[ 422/8000] train_loss: 174.26271 valid_loss: 205.95845
Validation loss decreased (206.016041 --> 205.958449). Saving model ...
[ 423/8000] train_loss: 173.95047 valid_loss: 205.89189
Validation loss decreased (205.958449 --> 205.891893). Saving model ...
[ 424/8000] train_loss: 173.62923 valid_loss: 205.81104
Validation loss decreased (205.891893 --> 205.811040). Saving model ...
[ 425/8000] train_loss: 174.15109 valid_loss: 205.75464
Validation loss decreased (205.811040 --> 205.754645). Saving model ...
[ 426/8000] train_loss: 174.00376 valid_loss: 205.69123
Validation loss decreased (205.754645 --> 205.691227). Saving model ...
[ 427/8000] train_loss: 174.08586 valid_loss: 205.64670
Validation loss decreased (205.691227 --> 205.646699). Saving model ...
[ 428/8000] train_loss: 174.23710 valid_loss: 205.57719
Validation loss decreased (205.646699 --> 205.577192). Saving model ...
[ 429/8000] train_loss: 173.99561 valid_loss: 205.51693
Validation loss decreased (205.577192 --> 205.516931). Saving model ...
[ 430/8000] train_loss: 173.23209 valid_loss: 205.45001
Validation loss decreased (205.516931 --> 205.450014). Saving model ...
[ 431/8000] train_loss: 173.97885 valid_loss: 205.40782
Validation loss decreased (205.450014 --> 205.407824). Saving model ...
[ 432/8000] train_loss: 173.69159 valid_loss: 205.33208
Validation loss decreased (205.407824 --> 205.332084). Saving model ...
[ 433/8000] train_loss: 173.77141 valid_loss: 205.25818
Validation loss decreased (205.332084 --> 205.258180). Saving model ...
[ 434/8000] train_loss: 173.62345 valid_loss: 205.18132
Validation loss decreased (205.258180 --> 205.181323). Saving model ...
[ 435/8000] train_loss: 173.68342 valid_loss: 205.13621
Validation loss decreased (205.181323 --> 205.136214). Saving model ...
[ 436/8000] train_loss: 174.04642 valid_loss: 205.10483
Validation loss decreased (205.136214 --> 205.104833). Saving model ...
[ 437/8000] train_loss: 173.45832 valid_loss: 205.04202
Validation loss decreased (205.104833 --> 205.042018). Saving model ...
[ 438/8000] train_loss: 173.72082 valid_loss: 204.98351
Validation loss decreased (205.042018 --> 204.983508). Saving model ...
[ 439/8000] train_loss: 173.33888 valid_loss: 204.90920
Validation loss decreased (204.983508 --> 204.909198). Saving model ...
[ 440/8000] train_loss: 173.54136 valid_loss: 204.86174
Validation loss decreased (204.909198 --> 204.861737). Saving model ...
[ 441/8000] train_loss: 173.49295 valid_loss: 204.78317
Validation loss decreased (204.861737 --> 204.783170). Saving model ...
[ 442/8000] train_loss: 173.39856 valid_loss: 204.73800
Validation loss decreased (204.783170 --> 204.738001). Saving model ...
[ 443/8000] train_loss: 173.25997 valid_loss: 204.68861
Validation loss decreased (204.738001 --> 204.688605). Saving model ...
[ 444/8000] train_loss: 173.65767 valid_loss: 204.63878
Validation loss decreased (204.688605 --> 204.638783). Saving model ...
[ 445/8000] train_loss: 173.73425 valid_loss: 204.57428
Validation loss decreased (204.638783 --> 204.574277). Saving model ...
[ 446/8000] train_loss: 173.05851 valid_loss: 204.51311
Validation loss decreased (204.574277 --> 204.513110). Saving model ...
[ 447/8000] train_loss: 173.34034 valid_loss: 204.45889
Validation loss decreased (204.513110 --> 204.458886). Saving model ...
[ 448/8000] train_loss: 173.37048 valid_loss: 204.38660
Validation loss decreased (204.458886 --> 204.386597). Saving model ...
[ 449/8000] train_loss: 172.87192 valid_loss: 204.31172
Validation loss decreased (204.386597 --> 204.311722). Saving model ...
[ 450/8000] train_loss: 172.95885 valid_loss: 204.25523
Validation loss decreased (204.311722 --> 204.255231). Saving model ...
[ 451/8000] train_loss: 173.01064 valid_loss: 204.17274
Validation loss decreased (204.255231 --> 204.172738). Saving model ...
[ 452/8000] train_loss: 173.11921 valid_loss: 204.11944
Validation loss decreased (204.172738 --> 204.119440). Saving model ...
[ 453/8000] train_loss: 173.02433 valid_loss: 204.06838
Validation loss decreased (204.119440 --> 204.068379). Saving model ...
[ 454/8000] train_loss: 172.45848 valid_loss: 203.99762
Validation loss decreased (204.068379 --> 203.997622). Saving model ...
[ 455/8000] train_loss: 173.02613 valid_loss: 203.94495
Validation loss decreased (203.997622 --> 203.944950). Saving model ...
[ 456/8000] train_loss: 173.22632 valid_loss: 203.87073
Validation loss decreased (203.944950 --> 203.870735). Saving model ...
[ 457/8000] train_loss: 173.06139 valid_loss: 203.83440
Validation loss decreased (203.870735 --> 203.834399). Saving model ...
[ 458/8000] train_loss: 173.12859 valid_loss: 203.78308
Validation loss decreased (203.834399 --> 203.783080). Saving model ...
[ 459/8000] train_loss: 172.68052 valid_loss: 203.68964
Validation loss decreased (203.783080 --> 203.689638). Saving model ...
[ 460/8000] train_loss: 172.85702 valid_loss: 203.62469
Validation loss decreased (203.689638 --> 203.624691). Saving model ...
[ 461/8000] train_loss: 172.55873 valid_loss: 203.56312
Validation loss decreased (203.624691 --> 203.563116). Saving model ...
[ 462/8000] train_loss: 172.87167 valid_loss: 203.49665
Validation loss decreased (203.563116 --> 203.496650). Saving model ...
[ 463/8000] train_loss: 172.41215 valid_loss: 203.41777
Validation loss decreased (203.496650 --> 203.417766). Saving model ...
[ 464/8000] train_loss: 172.55821 valid_loss: 203.36974
Validation loss decreased (203.417766 --> 203.369741). Saving model ...
[ 465/8000] train_loss: 172.50604 valid_loss: 203.32263
Validation loss decreased (203.369741 --> 203.322625). Saving model ...
[ 466/8000] train_loss: 172.76872 valid_loss: 203.28434
Validation loss decreased (203.322625 --> 203.284340). Saving model ...
[ 467/8000] train_loss: 172.54961 valid_loss: 203.20531
Validation loss decreased (203.284340 --> 203.205314). Saving model ...
[ 468/8000] train_loss: 172.83908 valid_loss: 203.14592
Validation loss decreased (203.205314 --> 203.145921). Saving model ...
[ 469/8000] train_loss: 172.12089 valid_loss: 203.06562
Validation loss decreased (203.145921 --> 203.065615). Saving model ...
[ 470/8000] train_loss: 172.69715 valid_loss: 203.01625
Validation loss decreased (203.065615 --> 203.016250). Saving model ...
[ 471/8000] train_loss: 171.91448 valid_loss: 202.93920
Validation loss decreased (203.016250 --> 202.939201). Saving model ...
[ 472/8000] train_loss: 172.18240 valid_loss: 202.88217
Validation loss decreased (202.939201 --> 202.882169). Saving model ...
[ 473/8000] train_loss: 172.52964 valid_loss: 202.83287
Validation loss decreased (202.882169 --> 202.832872). Saving model ...
[ 474/8000] train_loss: 172.26982 valid_loss: 202.75659
Validation loss decreased (202.832872 --> 202.756593). Saving model ...
[ 475/8000] train_loss: 172.14118 valid_loss: 202.71049
Validation loss decreased (202.756593 --> 202.710493). Saving model ...
[ 476/8000] train_loss: 172.21133 valid_loss: 202.62879
Validation loss decreased (202.710493 --> 202.628790). Saving model ...
[ 477/8000] train_loss: 172.27560 valid_loss: 202.57149
Validation loss decreased (202.628790 --> 202.571491). Saving model ...
[ 478/8000] train_loss: 171.79819 valid_loss: 202.50778
Validation loss decreased (202.571491 --> 202.507783). Saving model ...
[ 479/8000] train_loss: 171.80932 valid_loss: 202.45825
Validation loss decreased (202.507783 --> 202.458250). Saving model ...
[ 480/8000] train_loss: 171.91435 valid_loss: 202.39113
Validation loss decreased (202.458250 --> 202.391128). Saving model ...
[ 481/8000] train_loss: 171.38160 valid_loss: 202.30161
Validation loss decreased (202.391128 --> 202.301609). Saving model ...
[ 482/8000] train_loss: 171.66085 valid_loss: 202.23816
Validation loss decreased (202.301609 --> 202.238165). Saving model ...
[ 483/8000] train_loss: 171.65038 valid_loss: 202.17261
Validation loss decreased (202.238165 --> 202.172611). Saving model ...
[ 484/8000] train_loss: 171.81399 valid_loss: 202.10225
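The lines above follow a common checkpoint-on-improvement pattern: each epoch reports a training and validation loss, and whenever the validation loss drops below the best value seen so far, the model is saved. A minimal sketch of a loop producing this log format is below; it is an illustration, not the repository's actual training code, and the function name, the `losses` input, and the `save_fn` callback are all hypothetical stand-ins (a real loop would compute losses from data and call something like `torch.save`).

```python
def train_with_checkpointing(losses, n_epochs, save_fn):
    """Sketch (hypothetical, not the repo's code) of a loop that emits
    log lines like the ones above: one per-epoch loss line, plus a
    'Saving model' line whenever validation loss improves."""
    best_valid = float("inf")  # best validation loss seen so far
    log = []
    for epoch, (train_loss, valid_loss) in enumerate(losses, start=1):
        log.append(
            f"[ {epoch}/{n_epochs}] "
            f"train_loss: {train_loss:.5f} valid_loss: {valid_loss:.5f}"
        )
        if valid_loss < best_valid:
            # Improvement: checkpoint the model and update the best value.
            log.append(
                f"Validation loss decreased "
                f"({best_valid:.6f} --> {valid_loss:.6f}). Saving model ..."
            )
            save_fn()
            best_valid = valid_loss
    return log
```

In practice this is often combined with a patience counter, so training stops after a fixed number of epochs without improvement rather than running all `n_epochs`.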