Their own caption says it was ChatGPT, and I don’t believe that can be run locally. Either way, one of the many issues with LLMs is that they come across like a person and are convincing even when hallucinating. Couple that with the human tendency to take people at their word, and you have a perfect environment for being manipulated. Going by the linked thread, in one instance the AI included a quote that wasn’t the exact quote. You could argue the actual comment wasn’t that different from it, but that’s more like confirmation bias. Even asking an AI not to comment on anything and just distill the provided content down to the most important quotes would be affected by selection bias, but that’s not even how they used the model. They literally asked it to find what they were looking for.
Right, and the LLM output isn’t something I would’ve used either. The only thing I am noting as important is that the LLM stuff was an extra and unnecessary step, if that makes sense.
I don’t know anything about that banned user, but perhaps it’s that they didn’t want to look like the main characters of this community who essentially ban “just because” and needed something they believed would be convincing?
Sure, that’s a plausible way of looking at it.