I don’t think that’s a problem with the model itself, but with the fact that it was heavily censored and lobotomized to achieve maximum political correctness so they could avoid another Tay incident.
It makes sense that they do that, since the media and randoms on the internet treat everything ChatGPT and Bing Chat say as if it were as valid as info from OpenAI and MS official spokespersons.