I love my mid-2012 MacBook Pro; it served me well until I bought the one with the M1 chip. I waited a long time to upgrade because each new line of MacBooks somehow didn’t seem worth it. It really frustrated me when Apple started soldering the RAM and removing ports, and things reached an absolute low point when they introduced the useless Touch Bar, dropped USB-A and HDMI, had problems with the display cable, and shipped an awful keyboard.

I’ve been using my M1 MacBook Pro since 2020, and it is an excellent machine, although I still miss USB-A and would prefer user-replaceable RAM. Well, I guess that’s the price you pay for efficiency.

Nowadays, however, I’ve decided it’s time to dust off my old MacBook Pro and put it to good use. What counts as good use? Well, it’s 2025 - time to experiment with local LLMs! I’ve been using ChatGPT, Copilot, and Claude for a while now, but the idea of running a small model locally has been haunting me. Yes, I could run one on my M1 Mac, and it would be faster and better there, but I have other, sometimes quite demanding tasks for that machine - like data analysis and programming. With a dedicated machine, I get the opportunity to have a model running constantly.

Okay, I want to run LLMs. But what’s the best way to get started? Ollama! Unfortunately, Ollama does not support older versions of macOS, so I’ve decided to upgrade my Mac to Ventura. Since Apple never officially supported Ventura on a mid-2012 machine, that means patching the installer with OpenCore Legacy Patcher. We’ll see.
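
Once the machine is upgraded and Ollama is installed, getting a model talking is pleasantly simple. Here is a minimal sketch using Ollama’s official Python client - assuming the Ollama server is running and a small model such as llama3.2 has already been pulled with `ollama pull llama3.2`:

```python
# Minimal sketch using the official `ollama` Python client (pip install ollama).
# Assumes the Ollama server is running locally and that a small model
# (llama3.2 here) has already been pulled with `ollama pull llama3.2`.
import ollama

response = ollama.chat(
    model="llama3.2",  # any model tag you have pulled locally works here
    messages=[
        {"role": "user", "content": "In one sentence, what is a local LLM?"},
    ],
)

# The reply text lives under message.content in the response.
print(response["message"]["content"])
```

And because the old Mac is meant to run constantly, the same client can talk to it from another machine: `ollama.Client(host="http://<old-mac-ip>:11434")` points the calls at the Ollama server over the network.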

I downloaded OpenCore Legacy Patcher, built a bootable installer on a USB drive, followed the steps, watched the tutorial https://www.youtube.com/watch?v=D8djeFJ1czU, and… encountered an error! But I’d already erased the startup disk! Bummer. Well, I’ll try again and see if it gets me somewhere.