I love my mid-2012 MacBook Pro. It served me well until I bought the one with the M1 chip. I waited a long time to upgrade because each new line of MacBooks somehow didn't seem worth it. I was really frustrated when Apple decided to solder the RAM and then started removing ports, and it hit absolute rock bottom when they introduced the useless Touch Bar, removed the USB-A and HDMI ports, had problems with the display cable, and shipped an awful keyboard.

I’ve been using my M1 MacBook Pro since 2020 and it is an excellent machine, although I still miss USB-A and would prefer user-replaceable RAM. Well, I guess that's the price you pay for efficiency.

However, I’ve now decided it is time to dust off my old MacBook Pro and put it to good use. What is good use? Well, it is 2025 - time to experiment with local LLMs! I’ve been using ChatGPT, Copilot, and Claude for a while now, but the idea of running a small model locally has been haunting me. Yes, I could run one on my M1 Mac, and it would be faster and better there, but I have other tasks for that machine, some of which are quite demanding - like data analysis and programming. With a dedicated machine, however, I get the opportunity to keep a model running constantly.

OK, I want to run LLMs. But what is the best way to get started? Ollama! Unfortunately, Ollama does not support older versions of macOS. Hence, I’ve decided to upgrade my Mac to Ventura. We’ll see.
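For reference, once the machine is on a supported macOS, getting started with Ollama comes down to a few shell commands. This is just a sketch: the Homebrew formula and the `llama3.2:1b` model tag reflect what Ollama publishes today, but check the current docs before copying.

```shell
# Install Ollama on macOS (there's also a standalone app on ollama.com)
brew install ollama

# Start the local server, then pull a small model
# that should fit in an older machine's RAM
ollama serve &
ollama pull llama3.2:1b

# One-off prompt from the terminal
ollama run llama3.2:1b "Why are local LLMs useful?"

# Or query the local HTTP API (Ollama listens on port 11434 by default)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Hello!",
  "stream": false
}'
```

The HTTP API is the part that matters for a dedicated always-on machine: any other computer on the network can point at it instead of running a model itself.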

I downloaded the OpenCore Legacy Patcher, created a bootable USB installer, followed the steps, watched the tutorial https://www.youtube.com/watch?v=D8djeFJ1czU and … Error! But I’ve already erased the startup disk! Bummer. Well, I’ll try again and see if it gets me anywhere.