When I ran the e2e benchmarks and tested correctness on Desktop with the webgl backend and default parameters, I found that the comparison between actual and expected float results used 0.1 as the epsilon.
Thank you so much for reporting and locating this bug! I have sent a fix for it, but changing epsilon from 0.1 to 0.001 causes some WebGL test failures, so I may need to fix them before merging this fix.
The e2e test chooses the value of `epsilon` according to the float precision: `0.1` for float16 and `0.001` for float32.
https://github.com/tensorflow/tfjs/blob/master/tfjs-core/src/test_util.ts#L37
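For reference, the selection at that line is roughly the following (paraphrased from tfjs-core's test_util.ts; the constant and function names should be checked against the linked source):

```ts
import * as tf from '@tensorflow/tfjs-core';

// Tolerances paraphrased from tfjs-core/src/test_util.ts.
const TEST_EPSILON_FLOAT16 = 1e-1;  // used when the backend reports 16-bit floats
const TEST_EPSILON_FLOAT32 = 1e-3;  // used when the backend reports 32-bit floats

// Pick the comparison tolerance from the active backend's reported precision.
function testEpsilon(): number {
  return tf.backend().floatPrecision() === 32 ? TEST_EPSILON_FLOAT32 :
                                                TEST_EPSILON_FLOAT16;
}
```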
The definition of the `floatPrecision()` function on the webgl backend is shown below. I debugged it and found that the `WEBGL_RENDER_FLOAT32_ENABLED` flag is always true, so the function always returns 16. Is there a conflict?
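Here is a simplified sketch of that method. It is a reconstruction rather than the verbatim tfjs source (the actual implementation is `MathBackendWebGL.floatPrecision()` in tfjs-backend-webgl, and the function name below is made up), but it captures the behaviour described: the underflow probe that can report 32 only runs when the flag is false, so a true flag falls through to 16.

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical reconstruction of the webgl backend's floatPrecision() logic;
// see tfjs-backend-webgl/src/backend_webgl.ts for the actual method.
function floatPrecisionSketch(): 16 | 32 {
  if (!tf.env().getBool('WEBGL_RENDER_FLOAT32_ENABLED')) {
    // Empirical probe: upload a tiny value and check that it survives the
    // round trip, which indicates real 32-bit float textures.
    const underflowCheckValue =
        tf.tidy(() => tf.abs(tf.scalar(1e-8)).dataSync()[0]);
    if (underflowCheckValue > 0) {
      return 32;
    }
  }
  // When WEBGL_RENDER_FLOAT32_ENABLED is true, the probe is skipped and the
  // function falls through to 16, which is the behaviour reported above.
  return 16;
}
```

If that is indeed the flow, the conflict is that a backend with float32 textures enabled still reports 16-bit precision, which is why the comparison ends up using the 0.1 epsilon.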