In my application, I have a datastream with a defined bit format. This stream should feed into the neural network, and the network's output should also have a defined bitwidth so that it can enter the next stage (which also already exists).
What I have figured out so far is that I can fix the input bitwidth by using a Signature layer, but I have no clue how to fix the output's bitwidth. I'm wondering whether I have overlooked this part in the documentation.
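For reference, a minimal sketch of how I fix the input side (the import path, the `Signature` constructor arguments, and the bitwidths below are illustrative assumptions, not a confirmed API):

```python
from tensorflow import keras
from HGQ.layers import Signature, HDense  # layer names assumed

inp = keras.layers.Input(shape=(16,))
# Mark the incoming stream as already quantized: signed fixed-point,
# 12 bits in total with 4 integer bits (example values for my datastream).
x = Signature(keep_negative=1, bits=12, int_bits=4)(inp)
out = HDense(8)(x)
model = keras.Model(inp, out)
```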
Sorry for the late reply. GitHub signed me out for some reason...
Fixing the output bitwidth of the datastream can only be partially done: the fractional part can be fixed by setting the corresponding weight to be non-trainable, while the integer part will still adapt during training.
While I would not expect this to be a significant issue during training, some manual effort will be required to fix the output size of the output pipe (e.g., overriding result_t in the converted hls4ml model).
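For example, a minimal sketch of such an override using hls4ml's per-layer config, assuming a converted Keras model `model`; the layer name `'dense_out'` and the precision string are placeholders for your own model and target bitwidth:

```python
import hls4ml

# Build a per-layer config; granularity='name' exposes each layer by name.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Pin the result type of the final layer so the output pipe has a fixed
# width instead of the precision inferred during conversion. Depending on
# the hls4ml version, 'Precision' is either a single string or a dict with
# a 'result' entry; adapt as needed.
config['LayerName']['dense_out']['Precision'] = 'ap_fixed<16,6>'

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls_prj'
)
```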
In the next release of this library, finer control will be possible, including fixing the quantizer sizes.