So far, running LLMs has required substantial computing resources, mainly GPUs. Run locally, a simple prompt to a typical LLM takes, on an average Mac, ...
Note that the kernel will correspond to the default pixi environment of the workspace. If you need to use another environment, see the "Pixi environments and IDE features for Python buffers" section ...