There are absolutely people who believe that if you tell ChatGPT not to make mistakes, the output is more accurate 😩… it’s things like this that make me kinda hate what Apple and Steve Jobs did by making tech more accessible to the masses
Well, you can get it to output better math by telling it to take a breath first (rough sketch below). It’s stupid, but LLMs got trained on human data, so it’s only fair that they mimic human output
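For anyone who hasn’t seen the trick, here’s a minimal sketch of what “telling it to take a breath” looks like in practice: the same question asked plain and with the breath/step-by-step prefix. This assumes the official OpenAI Python client and an `OPENAI_API_KEY` in the environment; the model name and the example question are just placeholders, not a rigorous comparison.

```python
# Minimal sketch: same math question, with and without the "take a deep breath"
# prefix. Assumes the official OpenAI Python client (v1 API) and OPENAI_API_KEY
# set in the environment; "gpt-4o-mini" is just a placeholder model name.
from openai import OpenAI

client = OpenAI()

QUESTION = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

plain = ask(QUESTION)
breath = ask("Take a deep breath and work on this problem step-by-step.\n\n" + QUESTION)

print("Plain prompt:\n", plain)
print("\nWith the 'take a deep breath' prefix:\n", breath)
```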
While I’ve mostly avoided LLMs so far, it seems like that should actually work a bit. LLMs are imitating us, and if you warn a human to be extra careful they will (usually) try to be more careful, so an LLM should have internalised that behaviour. That doesn’t mean they’ll be much more accurate, though. Maybe they’d be less likely to output humanlike mistakes on purpose? That wouldn’t help much with the LLM-specific mistakes they make all on their own, though.