☆ Yσɠƚԋσʂ ☆ to [email protected] • English • 1 year ago

1-bit LLM performs similarly to full-precision Transformer LLMs with the same model size and training tokens but is much more efficient in terms of latency, memory, throughput, and energy consumption.

arxiv.org
Q*Bert Reynolds • 1 year ago

Says 1-bit then goes on to describe inputs as -1, 0, or 1. That’s 2-bit. Am I missing something here?
@[email protected] • 1 year ago

It’s actually 1.58 bits, weirdly. The addition of 0 here was the significant change/improvement in this experiment. The paper isn’t too dense and has some decent tables that explain things fairly accessibly.
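A quick back-of-the-envelope check of that 1.58 figure (a minimal sketch, not from the thread or the paper's code): with three possible weight values {-1, 0, +1}, the information content per weight is log2(3), which is where the "1.58-bit" name comes from.

```python
# Sketch: why ternary weights {-1, 0, +1} work out to ~1.58 bits per weight.
# Information per weight = log2(number of possible values).
import math

num_values = 3  # the three allowed weight values: -1, 0, +1
bits_per_weight = math.log2(num_values)
print(f"{bits_per_weight:.3f} bits per weight")  # ~1.585
```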