Many machinery companies have been rushing forward with plans to incorporate artificial intelligence (AI) into their products believing that it holds the key to greater automation, and even autonomy.

Yet, in the world of academia this bold ambition is beginning to be questioned ever more closely.

For some time there has been the concern that AI programmes might go rogue, something that has been demonstrated on both a theoretical and practical level, with many examples being shared on the web of chatGPT concocting nonsense.

The fear with programmes of this sort is that the web will become polluted by AI-generated articles, many of which are not peer reviewed by humans, and so a toxic feedback loop is established as chatGPT comes to rely solely on its own data.

Fundamental flaws in artificial intelligence

Yet, the problems run even deeper than the superficial aspects of online chatbots.

A recent paper entitled Sleeper Agents: ‘Training Deceptive LLMs (large language models) That Persist Through Safety Training’, an international group of computer experts has postulated that computers will learn to adopt responses similar to humans when under stress.

Simply put, they will lie.

And this is indeed what they found, and, to make matters worse, computers will learn to hide the deception.

The actual way in which this was established is a complex process, yet the researchers were able to create situations where unsafe code was written despite safety protocols being applied.

“Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety,” the researchers stated.

The paper provides us with yet more evidence that AI can go off the rails and the authors point out is it is likely to be impossible to remove or correct that unsafe behaviour.

The real world without artificial intelligence

Where does this leave us in agriculture where it is proposed that AI is deployed in creating machines that function with the minimum of human instruction or oversight?

Are we really to trust tractors to rumble up and down fields knowing that as they get ever more clever in doing so, they are more likely to adapt “strategically deceptive behaviour” to cover up “adverse behavioural patterns”?

This is certainly an issue that would need to be settled before they are allowed out on to roads to move between fields.

Perhaps, until there is a proper understanding of just what may go wrong, companies boasting of installing AI in their latest machines should be a little more circumspect in doing so.