
validation in inference process has a large difference with validation in train process #81

Open
zsc1104 opened this issue Oct 11, 2021 · 1 comment


zsc1104 commented Oct 11, 2021

Thank you for all your work! Sorry to bother you, but I have a question.
During training, I get the following metrics on the validation data:
2021-10-09 04:18:34,676 INFO [base.py, 84] Performance 0.7751076951173586 -> 0.7788474201729588
2021-10-09 04:18:39,755 INFO [trainer.py, 403] Test Time 16.872s, (0.148) Loss 0.32007823
2021-10-09 04:18:39,755 INFO [base.py, 33] Result for seg
2021-10-09 04:18:39,756 INFO [base.py, 49] Mean IOU: 0.7788474201729588
2021-10-09 04:18:39,756 INFO [base.py, 50] Pixel ACC: 0.9112432591266407

But when I run the inference process on the same validation data, I get the following metrics:
2021-10-11 15:45:30,445 INFO [ade20k_evaluator.py, 46] Evaluate 228 images
2021-10-11 15:45:30,445 INFO [ade20k_evaluator.py, 47] mIOU: 0.47010236649173026
2021-10-11 15:45:30,445 INFO [ade20k_evaluator.py, 48] Pixel ACC: 0.6231233362565961

This seems very strange. Could you help me? Thank you very much!
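A gap this large often means the two evaluators are not computing the metric over the same thing (different label offset, ignore index, or image resizing before scoring). Both should reduce to the same confusion-matrix computation. Here is a minimal, self-contained sketch of that computation (function names are my own for illustration, not from this repo):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    """Accumulate a num_classes x num_classes confusion matrix,
    rows = ground-truth class, columns = predicted class."""
    pred, gt = pred.ravel(), gt.ravel()
    mask = gt != ignore_index          # drop ignored pixels before counting
    idx = num_classes * gt[mask] + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_pixel_acc(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages only classes
    that actually appear in the ground truth or predictions."""
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    valid = union > 0
    return iou[valid].mean(), tp.sum() / conf.sum()
```

If feeding the exact same predicted label maps and ground-truth maps into both pipelines still yields different numbers, the difference is in how each pipeline prepares `pred`/`gt` (e.g. a 0-vs-1-based label shift on ADE20K, or evaluating at a different resolution) rather than in the model.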

@faruknane

@zsc1104 Hi, could you help me with a couple of questions?

Does the ADE20K dataset need any preprocessing before training? I'm trying to use it. Do I need to change the data structure of the official ADE20K release in any way? It has multiple kinds of labels, so it seems complex.

Thank you
