[Inductor] [CPU] performance regression with TORCHINDUCTOR_FREEZING=1 #104952
I have double-checked against the latest test results; the 4 models below still have regressions. cc @zxd1997066, please help find the guilty commit for those 4 models. These are FP32 static default wrapper tests; the first three models are tested with multiple threads, while the last one, tnt_s_patch16_224, is single-thread.
@zxd1997066 Please help to find the guilty commit for each regression, so we can take a look.
I cannot reproduce the 2023-07-06 nightly result on my side for these 4 models.
@zxd1997066 @chuanqi129 will check the performance data from before the regression.
Update: without freezing the performance is good, and with freezing it is bad; but with the 2024-01-29 nightly (890d8e6), performance is bad both with and without freezing.
Thanks. @chuanqi129 @zxd1997066, For
It is hard to say, since it is a very early report. But per my verification, TORCHINDUCTOR_FREEZING=1 and TORCHINDUCTOR_FREEZING=0 make a difference on the same commit 13763f5. BTW, when using TORCHINDUCTOR_FREEZING=0, tnt_s_patch16_224, functorch_dp_cifar10, and Background_Matting show performance regressions with the latest PyTorch; tnt_s_patch16_224 and functorch_dp_cifar10 share the same suspected guilty commit 7e098f9, while Background_Matting's guilty commit is 7c97c94. I will submit separate issues for them.
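For reference, the freezing-on vs. freezing-off comparison described above can be reproduced along these lines. This is a hedged sketch using the upstream `benchmarks/dynamo` harness in the pytorch/pytorch repo; the exact flags and model name (tnt_s_patch16_224, from the report) are assumptions based on the standard TIMM benchmark runner, not commands quoted from this thread.

```shell
# Sketch: compare Inductor CPU inference performance with freezing off vs. on,
# assuming a checkout of pytorch/pytorch with the dynamo benchmark harness.
cd pytorch

# Baseline: freezing disabled
TORCHINDUCTOR_FREEZING=0 python benchmarks/dynamo/timm_models.py \
  --performance --float32 --inference --inductor --device cpu \
  --only tnt_s_patch16_224

# Suspected regressing configuration: freezing enabled
TORCHINDUCTOR_FREEZING=1 python benchmarks/dynamo/timm_models.py \
  --performance --float32 --inference --inductor --device cpu \
  --only tnt_s_patch16_224
```

For the single-thread case mentioned above, the run would additionally be pinned to one core (e.g. via `taskset` and `OMP_NUM_THREADS=1`).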
Closing this issue, as the regressions are now tracked in new issues grouped by guilty commit, as above.
🐛 Describe the bug
There are 6 performance regressions from #93531 (comment).
SW information:
Versions
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519