Ollama setup and my use cases
After I successfully installed Ventura on my old Mac (see previous post), I went straight to installing Ollama and trying out models. I followed this YouTube tutorial: https://www.youtube.com/watch?v=GWB9ApTPTv4 and I can say it is really good. The Ollama documentation is great as well: https://github.com/ollama/ollama/blob/main/README.md. So far I have tried running Llama 3.2 7B and CodeGemma 7B. After getting used to ChatGPT, this feels painfully slow. But given that I have specific use cases in mind, it might still suit me well. ...
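For reference, the basic workflow is just a couple of commands; a minimal sketch (model names here are assumptions — substitute whichever models you actually pulled):

```shell
# Download a model from the Ollama library (run once per model)
ollama pull llama3.2

# Start an interactive chat session in the terminal
ollama run llama3.2

# Ollama also serves a local REST API (port 11434 by default),
# so you can send one-off prompts without the interactive shell:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
```

The REST API is what makes the slow local models tolerable for my use cases: I can script batch jobs against it and let them run in the background instead of waiting at an interactive prompt.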