Comment on: Microsoft just open-sourced bitnet.cpp, a 1-bit LLM inference framework. It lets you run 100B-parameter models on your local CPU without GPUs, with up to 6.17x faster inference and 82.2% less energy use on CPUs.
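For anyone wondering what "1-bit" means here: BitNet-style models round each weight to -1, 0, or +1 (so really about 1.58 bits) with a per-tensor scale, which turns matrix multiplies into mostly additions and subtractions. Below is a rough illustrative sketch of that idea, not bitnet.cpp's actual code; the function names and the absmean scaling shown are assumptions for illustration.

```python
# Illustrative sketch only (hypothetical helpers, not the bitnet.cpp API):
# quantize weights to {-1, 0, +1} with an absmean scale, then use them
# in a matmul. Inside the matmul, the ternary weights mean the work is
# mostly adds/subtracts, which is why CPU-only inference gets cheap.
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Round a weight matrix to {-1, 0, +1} and return it with its scale."""
    gamma = np.abs(w).mean()                          # absmean scale
    w_q = np.clip(np.round(w / (gamma + eps)), -1, 1)
    return w_q.astype(np.int8), gamma

def ternary_matmul(x: np.ndarray, w_q: np.ndarray, gamma: float):
    """Approximate x @ w using the ternary weights and stored scale."""
    return (x @ w_q) * gamma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)
    x = rng.normal(size=(8, 256)).astype(np.float32)
    w_q, gamma = ternary_quantize(w)
    err = np.abs(ternary_matmul(x, w_q, gamma) - x @ w).mean()
    print("mean absolute error of ternary approximation:", err)
```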
No, that would require AI (Actual Intelligence).
Dojan@pawb.social 6 days ago
We already have that. It’s like, you put the AI in your brain. You are the AI.