AI Coding Bots (Public Board)

by FSK, Wednesday, March 11, 2026, 23:34 (6 hours, 33 minutes ago) @ JoFrance
edited by FSK, Thursday, March 12, 2026, 00:16

Cory Doctorow had a nice article on this.

https://doctorow.medium.com/https-pluralistic-net-2025-12-05-pop-that-bubble-u-washington-8b6b75abc28e

When you have AI writing code with a human reviewer, the reviewer's job is not really to catch bugs and do quality control. The human's job is to be the person who takes the blame when the AI's work inevitably fails.

He gives an analogy. A radiologist reviewing x-rays for cancer might do 100 per hour. The proper use of AI would be to have the AI check the radiologist's work: if the AI and the radiologist disagree, the radiologist carefully re-reviews that x-ray. That would be using AI correctly, to reduce the error rate.

That is not what happens in practice. Instead, the AI does the first analysis, and the radiologist is responsible for checking the AI's work. The radiologist is now expected to do 1000 per hour instead of 100. Of course, he can't do a proper review at that rate. But the radiologist signs the paperwork, so he, and not the AI, carries the liability if something goes wrong. They fire 90% of the radiologists, so the one who remains is happy just to keep his job and can't complain. If he does object, they'll simply hire one of the 90% who were just fired to replace him. The radiologist's job is to take the blame when something goes wrong, not to supervise the AI.

He had another great quote: the AI isn't good enough to take your job, but the AI salesman is good enough to convince your boss to fire you and replace you with an AI, which is all that ultimately matters.

