Ollama setup and my use cases
After successfully installing Ventura on my old Mac (see previous post), I went straight to installing Ollama and trying out models. I followed this YouTube tutorial, which I found really good: https://www.youtube.com/watch?v=GWB9ApTPTv4. The Ollama documentation is also great: https://github.com/ollama/ollama/blob/main/README.md. So far, I have tried running Llama 3.2 and CodeGemma 7B. After getting used to ChatGPT, this feels like a significant slowdown. However, given the specific use cases I have in mind, it might still suit me. ...
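For anyone following along, the basic workflow is just a few CLI commands. This is a minimal sketch assuming Ollama is installed and its background service is running; the model tags shown are examples, not necessarily the exact ones I pulled:

```shell
# Download a model from the Ollama library (tag is an example)
ollama pull llama3.2

# Start an interactive chat, or pass a one-off prompt directly
ollama run llama3.2 "Explain recursion in one sentence."

# List the models currently installed locally
ollama list
```

On older hardware it is worth starting with the smallest tag a model offers, since memory, not disk space, is usually the bottleneck.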