Comment on: Microsoft just open-sourced bitnet.cpp, a 1-bit LLM inference framework. It lets you run 100B-parameter models on a local CPU without GPUs, claiming up to 6.17x faster inference and 82.2% less energy use on CPUs.

Arghblarg@lemmy.ca 6 days ago

Is it still probabilistic slop or does the model understand what it’s doing and verify primary sources? If not, yay for burning the planet more slowly I guess, but still no thanks.
