Compare Llama 1 to the current state-of-the-art local AIs. They're on a completely different level.
My grandparents on one side of the family left me nothing, and the other side left two weeks' rent. I know the direct descendants come first, but at least give the grandkids 15% or something; it would have helped so much. We're all working twice as hard to afford half the lifestyle our parents had.
At my last WFH job my daily setup was Firefox, Sublime Text, Slack (an Electron app), GitHub Desktop (also Electron), and three terminals, one running a local dev server. It all ran fine.
Am I the only one who still has no problems with 8GB? Not that I wouldn't be happy with more, but I can't remember the last time I even thought about RAM usage.
“AI, how do I do <obscure thing> in <complex programming framework>”
“Here is some <language> code. Please fix any errors: <paste code here>”
These save me hours of work on a regular basis, and I don't even use the paid tier of ChatGPT for it. Especially the first one, because I used to have to read half the documentation to answer that kind of question. Results are accurate 80% of the time, and the other 20% are close enough that I can fix them in a few minutes. I'm not in some obscure AI-related field; any programmer can benefit from stuff like this.