Foundry Local enables local execution of Small Language Models using the hardware on your device.
Desktop GUI for the BitLlama LLM inference engine, with Soul learning and model management
Pure Rust LLM inference engine with 1.58-bit ternary support and Test-Time Training
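The "1.58-bit" figure comes from representing each weight as one of three values {-1, 0, +1}, which costs log2(3) ≈ 1.58 bits. A minimal Rust sketch of this style of ternary quantization, assuming the absmean scaling rule popularized by BitNet b1.58 (function and variable names here are illustrative, not taken from the engine above):

```rust
/// Quantize a slice of f32 weights to ternary values {-1, 0, +1}
/// plus a single f32 scale, using absmean scaling:
/// scale = mean(|w|), q_i = clamp(round(w_i / scale), -1, 1).
fn quantize_ternary(weights: &[f32]) -> (Vec<i8>, f32) {
    // Absmean scale, guarded against an all-zero input.
    let scale = weights.iter().map(|w| w.abs()).sum::<f32>() / weights.len() as f32;
    let scale = scale.max(f32::EPSILON);
    // Round each scaled weight to the nearest ternary value.
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (q, scale)
}

fn main() {
    let w = [0.9, -1.2, 0.05, 0.0, -0.4];
    let (q, scale) = quantize_ternary(&w);
    // Dequantized value of weight i is q[i] as f32 * scale.
    println!("quantized: {:?}, scale: {}", q, scale);
}
```

Because every quantized weight is -1, 0, or +1, matrix multiplies against such weights reduce to additions, subtractions, and skips, which is what makes CPU-only local inference attractive for these models.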