Foundry Local

Foundry Local enables local execution of small language models (SLMs) using your device's own hardware.

BitLlama Desktop

Desktop GUI for the BitLlama LLM inference engine, with Soul learning and model management.

BitLlama

Pure-Rust LLM inference engine with 1.58-bit ternary weight support and Test-Time Training.

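To illustrate what "1.58-bit ternary" means: each weight is constrained to one of three values, {-1, 0, +1}, and log2(3) ≈ 1.58 bits per weight. A minimal sketch in Rust of absmean ternary quantization in the style of BitNet b1.58 (this is an illustrative example, not BitLlama's actual API; the function name and scaling choice are assumptions):

```rust
// Illustrative sketch, not BitLlama's real interface: absmean ternary
// quantization as popularized by 1.58-bit ("BitNet b1.58"-style) models.
// Each weight maps to {-1, 0, +1}, plus one per-tensor f32 scale.

/// Quantize a weight slice to ternary values and return the scale used.
fn quantize_ternary(weights: &[f32]) -> (Vec<i8>, f32) {
    // Scale gamma = mean absolute value of the tensor (guard against zero).
    let gamma = weights.iter().map(|w| w.abs()).sum::<f32>()
        / weights.len().max(1) as f32;
    let gamma = if gamma == 0.0 { 1.0 } else { gamma };
    // Divide by the scale, round to the nearest integer, clip to [-1, 1].
    let q = weights
        .iter()
        .map(|w| (w / gamma).round().clamp(-1.0, 1.0) as i8)
        .collect();
    (q, gamma)
}

fn main() {
    let w = [0.9_f32, -0.05, 0.4, -1.2];
    let (q, gamma) = quantize_ternary(&w);
    // Dequantized value of weight i is q[i] as f32 * gamma.
    println!("quantized: {:?}, scale: {}", q, gamma);
}
```

The payoff of ternary weights is that matrix multiplication reduces to additions and subtractions (multiplying by -1, 0, or +1), which is what makes CPU-only inference engines in this space practical.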