In the book Consciousness Explained, the cognitive scientist Daniel Dennett describes a kind of fish that wanders through the sea looking for a suitable rock to make its home for life. On finding one, the fish no longer needs its brain, so it eats it. Humanity is unlikely to adopt such an eating habit, but there is a worrying trend of people dumbing themselves down by becoming overly dependent on "intelligent" machines, especially now that ChatGPT has arrived.
Within a week of its launch in November 2022, ChatGPT had drawn 1 million registered users. Its allure is obvious: ChatGPT can produce jokes, write undergraduate essays and create computer code from a short writing prompt.
But this impression of intelligence is false. Computers have become more capable, yet they lack genuine thinking, which develops in humans through constant social practice. ChatGPT does not know what it is doing; it is unable to say how or why it produced a response; and it cannot tell whether it is making sense. So why all the fuss? Google's new AI-powered search tool, Bard, was released in March 2023, making its ambition obvious in its promotional video. The profit-driven competition to fill our daily lives with artificial intelligence is becoming increasingly fierce.
Humans have a long track record of turning a blind eye to the risks of new breakthroughs. Web companies want their users to think extremely highly of their AI tools, encouraging humanity to believe these tools far exceed human cognitive competence. As we know, the remarkable human mental powers owe much to the rise of civilization through art and agriculture. No one knows what will happen to such technologies if the software engineers of the future turn themselves into software programs. Maybe the danger is not machines being treated like humans, but humans being treated like machines.