So far, running LLMs has required substantial computing resources, mainly GPUs. Run locally, a simple prompt to a typical LLM takes, on an average Mac, ...
Note that the kernel will correspond to the default pixi environment of the workspace. If you need to use another environment, see the "Pixi environments and IDE features for Python buffers" section ...
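For instance, a sketch of launching a kernel under a non-default pixi environment from the command line, assuming an environment named `gpu` has been defined in the workspace's `pixi.toml` (`gpu` is a hypothetical name here):

```shell
# Run Jupyter under a specific pixi environment instead of the
# workspace default. "gpu" is a hypothetical environment name;
# replace it with one defined in your pixi.toml.
pixi run --environment gpu jupyter lab
```

The `--environment` (short form `-e`) flag tells pixi which named environment to activate before running the command.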