Truncated response when generating

Hey there! I’ve noticed a few instances of AI Decompilation output being truncated early. Is this a known issue?

Example:
File hash: 4308ed406a8de64eb3ca4e6accc27d10794e960a4631590cab569cf734f3d3e9

We currently enforce a maximum input and output length in the early experimental models, but we’ll be upgrading the context length in the next release (hopefully in December). That release should also bring much better performance and Windows support :muscle:

Glad to hear this! Thanks!