I use SMU instead of SiLU in YOLOv5, but the loss shows up as NaN.
Could you please tell me the possible reason? Or is it normal for this to happen in the early epochs?
@mzzjuve Thanks for the information you shared. I suppose you used alpha=0.25 and mu=100000. Instead, I recommend initializing alpha at 0.01 and mu at 2.0 or 2.5 (with mu as a trainable parameter) for SMU, then rerunning your experiments. In my experience, these initializations give better results, and the loss should not be NaN with these parameter values. Please let me know if you still get NaN.
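For reference, the suggested initialization can be sketched as a small PyTorch module, since YOLOv5 is PyTorch-based. This is a minimal sketch using the SMU formulation `f(x) = ((1+α)x + (1−α)x·erf(μ(1−α)x)) / 2`, with `mu` registered as a trainable parameter as recommended above; the class name and defaults here are illustrative and may differ from the repo's actual implementation:

```python
import torch
import torch.nn as nn


class SMU(nn.Module):
    """Smooth Maximum Unit (sketch).

    f(x) = ((1 + alpha) * x + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2
    """

    def __init__(self, alpha: float = 0.01, mu: float = 2.5):
        super().__init__()
        self.alpha = alpha
        # mu is a trainable parameter, as recommended in the reply above.
        self.mu = nn.Parameter(torch.tensor(float(mu)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.alpha
        return ((1 + a) * x + (1 - a) * x * torch.erf(self.mu * (1 - a) * x)) / 2
```

To swap it into YOLOv5, you would replace the `nn.SiLU()` activations in the model definition with `SMU()`; with alpha=0.01 and mu=2.5 the output stays close to `x` for positive inputs and close to `alpha * x` for negative inputs, so it behaves like a smooth leaky rectifier.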
Thank you for your timely reply. The problem was solved after I modified the parameters; I'll keep training. Thank you for your excellent work!