Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.
You can make humans more productive, but for the foreseeable future you can't take the human out of the loop and still have an AI implementation that isn't a disaster or lawsuit waiting to happen. That, probably more than anything else, is why companies just aren't seeing the much-promised step change in productivity from AI, and why so many companies now say they see zero ROI from their AI efforts.
The lowest-hanging fruit will be low-value, rote, repetitive tasks, like the whole India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success at replacing labor en masse on the lowest of low-hanging fruit, things higher up the value chain will remain relatively safe.
PS: Nearly every recent mass layoff citing "AI productivity" has failed to withstand scrutiny. They all seem to be poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.
I don't think the intention matters here. It's the same deal with every profession using LLMs to "automate" their work. The onus is on the professional, not the LLM. The Ars Technica case could have been justified in the same manner otherwise.
Not knowing the law isn't an excuse for breaking the law, so why is not knowing the tool an excuse to blame the tool?
Obviously lawyers should not be cheating with AI, especially when they don't even check its output. But this does sound to me like an opportunity to refactor the process. We're carrying forward ideas originally implemented in Latin that could be dramatically simplified.
I'm not a lawyer; I know this only in passing. And I am aware that there are big differences between law and code. But every time I encounter the law, and hear about cases like this, what I see are vast oceans of text that can surely be made more rigorous. AI is not the problem; it's pointing out the opportunity.
The attack surface for us is not just LLM-generated text. It also includes AI-augmented audio on incoming calls, and, for our own voice agents, protecting against and identifying services that clone our agent voices, which we do with watermarking.
It's not fun, as we are constantly catching up.
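For the curious, here is a minimal toy sketch of the watermark-plus-correlation idea (spread-spectrum watermarking). It assumes NumPy, and the seed, amplitude, and threshold are illustrative placeholders, not a production scheme:

```python
# Toy spread-spectrum audio watermark: embed a keyed pseudorandom
# sequence at low amplitude, then detect it by correlation.
import numpy as np

SEED = 42         # shared secret between embedder and detector (hypothetical)
AMPLITUDE = 0.05  # watermark level relative to the program material

def _watermark(length: int) -> np.ndarray:
    # Pseudorandom +/-1 sequence derived from the secret seed.
    rng = np.random.default_rng(SEED)
    return rng.choice([-1.0, 1.0], size=length)

def embed(audio: np.ndarray) -> np.ndarray:
    # Add the low-amplitude watermark on top of the signal.
    return audio + AMPLITUDE * _watermark(len(audio))

def detect(audio: np.ndarray, threshold: float = 0.5) -> bool:
    # Correlate against the known sequence; audio cloned from our
    # agent's output still carries the watermark and correlates near 1,
    # while unmarked audio correlates near 0.
    corr = np.dot(audio, _watermark(len(audio))) / (AMPLITUDE * len(audio))
    return corr > threshold

if __name__ == "__main__":
    voice = np.random.randn(16_000)  # stand-in for one second of agent audio
    print(detect(embed(voice)))      # True: watermark present
    print(detect(voice))             # False: unmarked audio
```

Real systems add perceptual masking and need to survive compression and resampling, but the detection principle is the same: correlate against a keyed sequence that only you know.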
Next up: a gunman pleads that the death occurred solely due to reliance on an automatic weapon.
The judge took no personal responsibility.
> She told the court that this was her first time using an AI tool and she had believed the citations to be "genuine". She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote.
She had one job, and that was to read the citations. Instead of owning up to the mistake of being lazy, all she wanted to talk about was "intentions".
The high court also took no responsibility.
> In its order, the high court said that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct,
This line of reasoning is questionable and an attempt to gaslight everyone. Judges cite other cases in their judgments. But if the junior judge had no clue the references were fake, what correct principles was she applying?
At the end of the day, maybe the judgment is correct, but this is overall bullshit.
Given that this is happening all over the world, people seem to have found a convenient excuse: the AI made me do it.
Why not use AI to adjudicate cases: if it decides dismissal, dismissal it is.
If not, then move the case to a proper court.
This way the backlog of cases will drop significantly, and we will work only on cases where there is enough meat to lead to a conviction.
Setting AI aside for a moment, this reflects a broader issue in India and elsewhere. When institutions respond to new technologies with anger or threats rather than systemic thinking, it signals a deeper problem.
The real challenge is not AI itself, but how complex systems adapt to change. Instead of reacting defensively, institutions should anticipate second-order effects, build regulatory capacity, and treat this as a governance and systems problem.
Mature institutions approach disruption with foresight, incentives, and feedback loops, not emotions. Without that shift, they risk reinforcing outdated hierarchies rather than serving the public effectively.