It’s not as good, but running small LLMs locally can work. I’ve been messing around with ollama, which makes it drop-dead simple to try out different models.
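For example, once the ollama server is running it exposes a local HTTP API on port 11434. Here is a minimal Python sketch of asking it a quick question, assuming you have already pulled a model (the model name "llama3" is just an example):

```python
# Minimal sketch: query a locally running ollama server via its HTTP API.
# Assumes `ollama serve` is running and a model has already been fetched
# with e.g. `ollama pull llama3`.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # default ollama endpoint
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]  # generated text

print(ask_local_llm("How do I reverse a list in Python?"))
```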
You won’t be running any model as powerful as ChatGPT, but for quick “stack overflow replacement”-style questions I find it’s usually good enough.
And before you write off the idea of local models completely, some recent studies indicate that current models could be made orders of magnitude smaller with the same level of capability. Think Moore’s law, but for shrinking the number of connections a model needs. I do believe we’ll be able to run GPT-3.5-level models on consumer-grade hardware in the very near future. (Of course, by then GPT-7 may be running the world, but we live in hope.)
GPT4All is another good local option. It runs on CPU, but you can enable GPU acceleration. Some models even run on my crappy dual-core laptop.
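GPT4All also ships Python bindings, so a CPU-only setup can be as simple as the sketch below (the model filename is just an example; it gets downloaded on first use):

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # small example model, runs on CPU
with model.chat_session():
    reply = model.generate("Explain what a mutex is in one paragraph.", max_tokens=200)
    print(reply)
```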
SkyGPT