While ChatGPT successfully answered a variety of questions raised by testers, some responses were noticeably off. In fact, Stack Overflow—a question-and-answer website for programmers—stopped allowing users to share answers from ChatGPT, saying that it's "harmful to the site and to users who are asking or looking for correct answers."
Beyond the issue of spreading incorrect information, the tool could also be used to articulate problematic ideas and, as with all AI tools, spread biases (偏见) drawn from the pool of data on which it was trained. Typing a prompt involving a CEO, for example, could elicit a response that assumes the individual is white and male.
"While we've made efforts to make the model refuse unsuitable requests, it will sometimes respond to harmful instructions or exhibit biased behavior," OpenAI, the company that created ChatGPT, said on its website. "We're using the Moderation API to warn or stop certain types of unsafe content, but it still has some false negatives and positives for now. We're eager to collect user feedback (反馈) to aid our ongoing work to improve this system. "
Still, Lian Jye Su, a research director at the market research company ABI Research, warns that the chatbot operates "without understanding the context of the language."
"It is very easy for ChatGPT to give plausible-sounding (听起来合理) but incorrect or senseless answers," he said. "It guessed when it was supposed to explain and sometimes responded to harmful instructions or exhibited biased behavior. It also lacks regional and country-specific understanding. "
While ChatGPT is free to use, it does limit the number of questions a user can ask before having to pay. When Elon Musk, a co-founder of OpenAI, recently asked Altman on Twitter about the average cost per ChatGPT chat, Altman said: "We will have to monetize (货币化) it somehow at some point; the compute costs are eye-watering."