
Model Reproducibility Issue #10

Open
RR0810 opened this issue Aug 23, 2024 · 4 comments

Comments

@RR0810

RR0810 commented Aug 23, 2024

Hello, when I used the command sh scripts/SparseTSF/etth1.sh to reproduce the results from the paper, I found that the results for prediction lengths 336 and 720 differ significantly from those reported in the paper, while the results for the first two lengths are within a reasonable range. Could you please help me understand why this discrepancy might be occurring?
[screenshots of the reproduced results]

@lss-1138
Owner

Hello, the results you reran appear to be correct and consistent with ours. We recently fixed a longstanding bug in this code framework (see the description in TFB), which caused the last batch of data to be dropped during the testing phase. This bug mainly affected performance on small datasets such as ETTh1 and ETTh2.
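
For illustration, a minimal sketch of what such a drop_last fix typically looks like, assuming a PyTorch DataLoader; build_loader and flag are illustrative names, not the repository's actual code. Dropping the final (possibly incomplete) batch is harmless for training but skews evaluation on small test sets like ETTh1/ETTh2:

    from torch.utils.data import DataLoader

    def build_loader(dataset, batch_size, flag):
        # Only drop the last incomplete batch while training;
        # during 'test' (and usually 'val') every sample must be evaluated.
        drop_last = (flag == 'train')
        return DataLoader(
            dataset,
            batch_size=batch_size,
            shuffle=(flag == 'train'),
            drop_last=drop_last,
        )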

@UP-programmer

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Why does this problem occur?

@yvdeshuai

    (quoting @UP-programmer's error message and question above)

I ran into this problem as well. I think it is probably because the author ran the code on Linux, where the data loader can use multiple child processes; on Windows this fails because the fork system call is not available.
Workaround: in run_longExp.py, change the default value of num_workers to 0. This resolves the error, but it makes data loading slow. The relevant line is:
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
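
As an alternative to setting num_workers to 0, the idiom suggested by the error message itself also works: on Windows the worker processes are started with spawn, which re-imports the main module, so the script body must sit behind an if __name__ == '__main__' guard. A minimal sketch, assuming the argument parsing and training calls of run_longExp.py are wrapped in a main() function (an illustrative name, not the repository's actual structure):

    import argparse
    from multiprocessing import freeze_support

    def main():
        parser = argparse.ArgumentParser(description='SparseTSF long-term forecasting')
        parser.add_argument('--num_workers', type=int, default=10,
                            help='data loader num workers')
        args = parser.parse_args()
        # ... build the experiment and call train / test here ...

    if __name__ == '__main__':
        freeze_support()  # only needed when frozen into an executable; harmless otherwise
        main()

With this guard in place, num_workers > 0 can stay enabled on Windows, so data loading does not slow down.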

@UP-programmer

    (quoting the error message and @yvdeshuai's workaround above)

It is indeed this problem. After I made the change, the error no longer appears, but it runs very slowly on the Traffic dataset.
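
If keeping num_workers at 0 is too slow for large datasets such as Traffic, one hedged option (assuming the main-module guard above is in place so multiple workers are usable on Windows) is to keep a few workers and reuse them across epochs; the values below are illustrative, not the repository's defaults:

    from torch.utils.data import DataLoader

    def build_fast_loader(dataset, batch_size):
        return DataLoader(
            dataset,
            batch_size=batch_size,
            shuffle=True,
            num_workers=4,            # a few workers instead of 0
            persistent_workers=True,  # reuse spawned workers across epochs
            pin_memory=True,          # faster host-to-GPU transfer
        )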
