stabby_cicada@slrpnk.net to solarpunk memes@slrpnk.net · 24 hours ago
one more shitty techbro fad down (slrpnk.net)
NigelFrobisher@aussie.zone · 6 hours ago
What's actually going to kill LLMs is when the sweet VC money runs out and the vendors have to start charging what it actually costs to run.
PriorityMotif@lemmy.world · 2 hours ago
You can run it on your own machine. It won't work on a phone yet, but I guarantee chip manufacturers are working on a custom SoC right now that will be able to run a rudimentary local model.
TherapyGary@lemmy.blahaj.zone · 1 hour ago
You can already run 3B LLMs on cheap phones using MLCChat, it's just hella slow
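For a sense of why a 3B model is about the limit on a cheap phone: a rough back-of-envelope sketch of the weight memory at common quantization levels (weights only; the KV cache and activations add more on top, and the specific bit-widths here are illustrative assumptions, not MLCChat's exact settings):

```python
# Approximate memory footprint of a model's weights alone,
# for a given parameter count and weight-only quantization level.
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

for bits in (16, 8, 4):
    print(f"3B params at {bits}-bit: ~{weights_gb(3, bits):.1f} GB")
# 3B params at 16-bit: ~6.0 GB
# 3B params at 8-bit: ~3.0 GB
# 3B params at 4-bit: ~1.5 GB
```

At 4-bit quantization a 3B model's weights fit in roughly 1.5 GB, which squeezes into a budget phone's RAM; the slowness comes from memory bandwidth and compute, not capacity.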
Victor Gnarly@lemmy.world · edited 4 hours ago
This isn't the case. Midjourney doesn't receive any VC money since it has no investors, and this ignores genned imagery made locally off your own rig.