OpenAI's ChatGPT is the latest hot topic in AI/ML (Artificial Intelligence and Machine Learning). We were amazed by ChatGPT's ability to process conversations in natural language. ChatGPT can generate human-like conversational answers to almost any question, in whatever language you ask it, and most of its responses are sensible and meaningful, sometimes even inspiring or creative. Also, even if you ask the same question twice, the answers will not be exactly the same. Because ChatGPT can also give ill-advised, incorrect, biased, or otherwise unacceptable answers, people debate whether it can be used in the real world.
A conventional computer, as we have always known it, executes operations defined by logical program code. As long as we provide the same input (such as text or numbers), we get the same answer every time; there is no randomness in the result.
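As a toy illustration of this determinism (the function name and numbers below are made up purely for this sketch, not taken from any particular system):

```python
# A conventional program is deterministic: the same input always
# produces the same output, with no randomness involved.
def add_tax(price: float, rate: float = 0.08) -> float:
    return round(price * (1 + rate), 2)

print(add_tax(100.0))  # 108.0 on every run, on every machine
print(add_tax(100.0))  # 108.0 again -- identical result
```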
But AI technology is different. AI is built on machine learning algorithms, and machine learning pursues high-precision estimates, but it can hardly achieve 100% correctness. For example, a facial recognition program may accurately recognize millions of faces and still fail to recognize, or misidentify, someone. Another example: accidents involving self-driving cars seem inevitable.
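The numbers below are hypothetical, but they show why "very accurate" is not the same as "never wrong": at large volumes, even a small error rate leaves a large absolute number of mistakes.

```python
# Hypothetical figures, purely to illustrate the scale of the problem.
faces_scanned = 10_000_000   # assumed number of recognition requests
accuracy = 0.999             # assumed model accuracy (99.9%)

expected_errors = faces_scanned * (1 - accuracy)
print(f"Expected misidentifications: {expected_errors:,.0f}")  # about 10,000
```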
In addition, ChatGPT is not only based on standard machine learning algorithms; it also applies generative artificial intelligence (Generative AI) techniques, embedding human-like "creative" capability and randomness into the system.
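A toy sketch of why a generative system's answers vary: rather than returning one fixed string, it samples from a probability distribution over possible continuations. The candidate answers and weights below are invented for illustration; a real model samples over tokens, not whole sentences.

```python
import random

# Invented candidate answers with assumed probabilities.
candidates = [
    ("Air travel is statistically very safe.", 0.5),
    ("Flying carries a small but nonzero risk.", 0.3),
    ("Most experts consider commercial flights safe.", 0.2),
]

texts, weights = zip(*candidates)

def sample_answer() -> str:
    # Sampling introduces randomness: repeated calls can return different answers.
    return random.choices(texts, weights=weights, k=1)[0]

print(sample_answer())  # may differ from run to run
print(sample_answer())
```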
So, how much error should we tolerate in real-world applications of AI systems? (You can also try asking ChatGPT this question!)
This is a typical question of weighing risks, costs, and benefits in a particular application and situation, much like deciding how much to trust a person. AI is not human, but we can assess its capabilities, and the benefits and risks it poses to us. For example, would you rather hire a taxi with a driver than ride in a self-driving car on a trip?
In fact, humans are already familiar with this kind of justification problem, even in the extreme case of very high risk and cost: aviation safety. When traveling by air, no one is immune to the risk of a plane crash. According to an MIT study, between 2008 and 2017 there was 1 death for every 7.9 million passenger boardings. We know there is a risk of death, yet we still accept flying as a means of travel. In the US, statistics show that traveling on major commercial airlines is far safer than traveling by car.
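As a quick back-of-the-envelope conversion of the cited figure (the statistic comes from the study referenced below; the rest is simple arithmetic):

```python
# 1 death per 7.9 million passenger boardings, expressed as a probability.
risk_per_boarding = 1 / 7_900_000
print(f"Risk of death per boarding: {risk_per_boarding:.1e}")  # about 1.3e-07
```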
We can take a similar approach to assessing the use of artificial intelligence, although it may involve more complex topics (such as the ethical, legal, security, and cultural considerations of different regions, countries, nations, or communities), and AI/ML technologies are still evolving, so there is more uncertainty. But we can apply risk management methodology and take control measures to reduce risk. For example, when deploying a customer service chatbot, we can provide a channel for customer complaints or appeals, and processing that feedback loop can further improve the accuracy of, and satisfaction with, the AI system; after all, the AI system is itself a learning machine. A rough sketch of such a control measure follows.
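This is a minimal sketch of the idea, not a real product design: a customer service bot that escalates low-confidence answers to a human channel and records complaints or appeals for later review and retraining. All names, thresholds, and messages here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical cut-off below which the bot hands the case to a human.
CONFIDENCE_THRESHOLD = 0.7

@dataclass
class SupportBot:
    feedback_log: list = field(default_factory=list)

    def answer(self, question: str, draft: str, confidence: float) -> str:
        # Use the automated draft only when the model is confident enough.
        if confidence < CONFIDENCE_THRESHOLD:
            return self.escalate(question)
        return draft

    def escalate(self, question: str) -> str:
        # Hand off to a human agent instead of risking a bad automated reply.
        return f"Forwarding to a human agent: {question!r}"

    def record_feedback(self, question: str, answer: str, satisfied: bool) -> None:
        # Complaints and appeals feed the next training / evaluation cycle.
        self.feedback_log.append({"q": question, "a": answer, "ok": satisfied})

bot = SupportBot()
print(bot.answer("Where is my refund?", draft="It was sent yesterday.", confidence=0.4))
bot.record_feedback("Where is my refund?", "It was sent yesterday.", satisfied=False)
```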
The most challenging problem in the near term, at least for many teaching professionals, is how to determine whether the homework submitted by a student was done on his or her own or "created" by ChatGPT! Or, in the presence of AI tools like ChatGPT, what changes should educators make to current educational theories and systems?
Related Reading
(1) Plane Crash Statistics
(2) MIT News – Study: Commercial air travel is safer than ever
(3) OpenAI ChatGPT
Chinese Translation