I installed Llama 3 8B on my Mac Mini. It’s not a great LLM, but tbh, it works the way I expect it to in Ollama.
It’s unusable when called via the API right now…I’ll keep working on molding it, but I think it won’t be too bad at handling smaller tasks, which should help preserve my tokens/rate limits elsewhere.
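For reference, here’s a minimal sketch of how I’m calling it locally. This assumes Ollama’s default port (11434) and the `llama3:8b` model tag; `stream` is off so the full response comes back in one JSON object:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3:8b") -> bytes:
    """Serialize a non-streaming generate request for Ollama's HTTP API."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Something like `generate("Summarize this commit message: ...")` is the kind of smaller task I have in mind.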