
<prologue>
I started a blog called “The Baby Boomer Generation’s Miscellaneous Blog”(Dankai-sedai no garakutatyou:団塊世代の我楽多(がらくた)帳) in July 2018, about a year before I fully retired. More than six years have passed since then, and the number of articles has increased considerably.
So, in order to make them accessible to people who don’t understand Japanese, I decided to translate my past articles into English and publish them.
It may sound a bit exaggerated, but I would like to make this my life’s work.
It should be noted that haiku and waka (Japanese short fixed form poems) are quite difficult to translate into English, so some parts are written in Japanese.
If you are interested in haiku or waka and would like to know more, please read introductory or specialized books on haiku or waka written in English.
I also write many articles about the Japanese language. I would be happy if these inspire more people to want to learn Japanese.
My blog: 団塊世代の我楽多(がらくた)帳 (The Baby Boomer Generation’s Miscellaneous Blog, where a baby boomer shares trivia and interesting stories)
My X (formerly Twitter) account: 団塊世代の我楽多帳 (@historia49)
The progress of AI has been remarkable. We are now in an era in which robots are replacing humans not only for “simple” tasks, but also for “fairly complex” tasks.
However, I think it is dangerous to put too much trust in AI. AI is neither the hand of God nor almighty. In this post, I would like to consider these dangers and limitations of “AI.”
1. accidents involving “self-driving cars”
In the U.S. state of California, in March 2018, the driver of a Tesla Model X electric car was killed after crashing while using the “Autopilot” self-driving feature.
Also in March 2018, in Arizona, an Uber Technologies “self-driving car” struck and killed a pedestrian while in “self-driving mode,” even though a safety driver was behind the wheel.
I have long had some simple questions: if self-driving cars detect danger, can they actually avoid it? If a self-driving car and an ordinary car face a dangerous situation, can we correctly predict how the self-driving car will behave? And in the event of an accident, who bears the blame? These, I thought, were the real issues.
When two people walk toward each other from opposite directions, each trying to decide which way to step aside, they sometimes keep dodging to the same side over and over.
With cars, that kind of head-on encounter is less likely, but if two vehicles approach each other at right angles at a crossing with no traffic light, a collision could occur if their decisions about which should stop are the exact opposite of each other.
It is also questionable to what extent “self-driving cars” will be able to react and judge correctly amid the complex flow of traffic on the road. On highways, just as with ordinary cars, the slightest error in judgment can lead to a major accident.
And although we speak of “automated driving” as if it were one single thing, if the standards governing how a vehicle avoids danger differ from one manufacturer of “automated cars” to another, it seems to me that the risk of accidents will only increase.
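As a thought experiment, the coordination problem above can be sketched in a few lines of Python. This is purely illustrative and assumes nothing about any manufacturer’s actual control logic: two vehicles meet at an unsignalled crossing, and each independently chooses whether to yield or to go. Without a shared rule their choices can mirror each other round after round, while a common convention settles the matter at once.

```python
import random

def meet_once(shared_rule: bool) -> int:
    """Count the rounds until the two vehicles' choices stop conflicting."""
    rounds = 0
    while True:
        rounds += 1
        if shared_rule:
            # A common convention (e.g. "yield to the vehicle on the right")
            # assigns different actions immediately.
            a, b = "yield", "go"
        else:
            # Each vehicle guesses on its own.
            a = random.choice(["yield", "go"])
            b = random.choice(["yield", "go"])
        if a != b:  # one yields, the other goes: the conflict is resolved
            return rounds
        # Both yielded (deadlock) or both went (collision risk); try again.

def average_rounds(shared_rule: bool, trials: int = 10_000) -> float:
    return sum(meet_once(shared_rule) for _ in range(trials)) / trials

print("without a shared rule:", average_rounds(False), "rounds on average")
print("with a shared rule:   ", average_rounds(True), "rounds on average")
```

The toy model says nothing about real sensing or control, but it does show why a common standard for danger avoidance matters: uncoordinated, mirror-image decisions are exactly what makes two polite pedestrians keep sidestepping into each other.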
2. misdiagnosis by “medical robots”
The endoscopic surgical robot “Da Vinci” manufactured by Intuitive Surgical, Inc. of the U.S. is said to be quite excellent.
On the other hand, however, it has no sense of touch (tactile feedback), which makes the suture difficult to manipulate, and in some cases the suture thread may be torn.
In August 2016, the Institute of Medical Science at the University of Tokyo announced that IBM’s Watson artificial intelligence had identified the correct type of a patient’s rare leukemia in about ten minutes, helping to save the patient’s life. The patient had initially been diagnosed with acute myeloid leukemia and was receiving anticancer drug treatment, but it was completely ineffective. According to the institute, this was the first case in Japan in which AI saved a patient’s life.
On the other hand, there have been reports of a number of “dangerous and inaccurate treatment recommendations” when IBM’s AI “Watson” was used to diagnose cancer patients.
I once had a very bad experience with an “outrageous misdiagnosis” by an inexperienced young doctor just before my high school entrance exam.
Although we lump them all together as “medical robots,” they vary widely in quality and capability, and even a high-performance “medical robot” will find it very difficult to make a “correct diagnosis.” I feel that we cannot trust a “medical robot” to diagnose correctly until it has been put through a great many diagnostic trials and has learned from a great many failures.
If the “medical robot” were to make a series of misdiagnoses, it would be a complete disaster and we would lose all hope.
Even human doctors differ in “ability” and “experience,” which leads to “misdiagnoses” and “medical errors,” and I do not think these will disappear in the future. Even so, I think it is safer for now to treat “medical robots” as a source of a “second opinion” or a “third opinion.”
Just as “new drugs” are approved only after a thorough series of “clinical trials,” “new technologies” such as AI should be used only after fully recognizing their “risks” and “limitations,” testing them in a variety of cases, and fully confirming their safety.
In addition, I think it is necessary to ensure that “human control” always remains in place as a backup. In that sense, the revision of the Road Traffic Law that allows drivers to use smartphones in AI-driven cars (under limited conditions) seems to me an extremely dangerous one.
3. will too much reliance on AI result in the loss of “professional human beings”?
If you watch a bank teller these days, you will see that all cash is counted by machine. In the old days, tellers would fan out a stack of 10,000-yen bills and count them “horizontally” with deft hands, or flip through them “vertically” at blinding speed. I worry whether today’s tellers could count a large number of bills by hand if the machines broke down or a disaster put them out of action.
I am also concerned that if bank loan officers rely too heavily on AI analysis, more of them will become unable to analyze financial statements and other corporate documents by themselves, or to spot window dressing in the accounts.
On the other hand, in the world of medicine, too much reliance on “da Vinci” and other “medical robots” may lead to a shortage of surgeons who can operate with their own hands. I hope this is a “groundless fear.”