The AI’s Guide to the Galaxy: We Need a Debate on the Subject of Self-Learning Algorithms


Artificial intelligence (AI) and robotics are termed “disruptive technologies”, which sounds somewhat dangerous and fraught with risk. But according to the Fraunhofer Institute for Production Technology, disruptive technologies are simply “innovations which replace a successful, existing technology, product or service or oust it from the market”. German society currently views such disruptive developments with great scepticism. In industries as diverse as energy technology and car manufacturing, the acceptance of technological innovations seems guided by the philosophy of “never touch a running system”.

Yes, But…

The German “Yes, but” mentality also shapes the debate on autonomous driving. A survey by the Friedrich-Naumann Foundation for Freedom found that only 30% of the population can imagine using a self-driving car from 2025 onwards. This scepticism stems from a number of unanswered questions, including:

  • In case of an accident, who is liable: the human or the AI? And if it is the AI, who bears the responsibility: the manufacturer? The programmers?

  • How should the AI behave in conflict situations? This poses deep ethical questions. Must the AI weigh different lives against each other in the case of an unavoidable accident? Should the AI base its decisions on a specific moral standard? If so, who develops that standard: the manufacturer, or society through public discourse?

  • What happens to the collected data? Where is it stored? Who can access it?

All these questions are critically relevant to autonomous driving. This is illustrated by the debate around a fatal accident in Arizona, where a self-driving test car hit a female pedestrian. A safety driver was behind the wheel at the time, but did not intervene to prevent the accident. Just like human drivers, self-driving cars will sometimes get into situations where collisions are unavoidable, for example when pedestrians step out of the shadows and onto the road without paying attention to the traffic. For this reason, a range of stakeholders should work on these questions and develop solutions in which all road users share the responsibility for traffic safety. This is especially important in inner-city traffic.

Trial and Error

Fundamentally, AI should be seen not only as a disruptive technology, but also as an enabling or supporting one. The benefits of self-driving vehicles are self-evident: they help prevent traffic accidents, boost individual mobility, and lower fuel consumption. By collecting data and linking information about road users and infrastructure, AI also enables anticipatory driving.

Automated driving is commonly broken down into five levels. At its most basic (level 1), the driver is supported by assistance systems such as adaptive cruise control. Level 3, highly automated driving, allows the driver to take their hands off the steering wheel, but requires them to be able to resume control at any time; the highest level (level 5) is driverless driving.

At present, we find ourselves at the stage of partially automated driving (level 2). This means, for example, that software can assist with parking, but the driver’s hands have to stay on the wheel at all times. The technology for driverless driving already exists, but it still requires extensive real-world testing and research before deployment.
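For readers who want the classification spelled out, the five levels can be summarised in a few lines of Python. This is only an illustrative sketch: the text does not describe level 4, so its name below is an assumption based on the common German five-level scheme, and the helper function merely encodes the “driver must be able to take over” rule stated above.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    ASSISTED = 1             # driver assistance systems support the driver
    PARTIALLY_AUTOMATED = 2  # e.g. parking aids; hands stay on the wheel
    HIGHLY_AUTOMATED = 3     # hands off, but the driver must be able to resume control
    FULLY_AUTOMATED = 4      # assumption: the system drives itself in defined situations
    DRIVERLESS = 5           # no human driver required at all


def driver_must_be_ready(level: AutomationLevel) -> bool:
    """Up to and including level 3, a human must be able to take over."""
    return level <= AutomationLevel.HIGHLY_AUTOMATED


current = AutomationLevel.PARTIALLY_AUTOMATED  # where the text places us today
print(current.name, driver_must_be_ready(current))  # PARTIALLY_AUTOMATED True
```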

This technology must be introduced gradually, through trial and error. Exploring AI will be an enormous task for society as a whole. As of now, we do not have all the information needed to answer every question regarding autonomous driving. However, if we approach the issue with an open mind and a forward-looking attitude, these questions can be answered in a society-wide discourse. The real disruptive potential lies in the algorithms that already influence our lives, such as Facebook’s news feed algorithm.

“With Great Power Comes Great Responsibility”

When decisions and responsibilities are delegated to a higher-level system, there is always the danger of a creeping disempowerment of the individual. This makes every citizen’s individual responsibility all the more important. It is vital to remember the role humans should play in an automated society. The use of AI creates a tension between personal responsibility and the desire to be relieved of it. For this reason, algorithms should not be black boxes:

  • Their purpose and goal must be clear to all users.

  • They must be explainable, to the extent that the reason for a particular result can be verified (a minimal sketch of this idea follows this list).

  • Algorithms must be secure. They must not do anything they are not meant to do. Cybersecurity and data protection play an important role here.
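To make the explainability requirement concrete, here is a minimal sketch in Python. The scenario (a credit pre-check), the 0.40 threshold and all names are invented for illustration; the point is simply that every result the algorithm produces carries a human-readable reason that can be verified afterwards.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    result: bool
    reason: str  # why this result was produced (explainability)
    purpose: str = "illustrative credit pre-check"  # what the algorithm is for (transparency)


def pre_check(income: float, debt: float) -> Decision:
    """A rule-based stand-in for an opaque model: every outcome names its cause."""
    if income <= 0:
        return Decision(False, "no verifiable income")
    ratio = debt / income
    if ratio > 0.40:  # hypothetical threshold, chosen only for the example
        return Decision(False, f"debt-to-income ratio {ratio:.2f} exceeds 0.40")
    return Decision(True, f"debt-to-income ratio {ratio:.2f} within limit")


print(pre_check(3000, 1500).reason)  # debt-to-income ratio 0.50 exceeds 0.40
```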

Last but not least, human rights must be built into AI. At the same time, it must be understood that the existence of AI will change our values. Basic digital civil rights, such as the right to informational self-determination, must be protected. The General Data Protection Regulation provides a framework for such rights. It is also possible that in future there will be a need for particularly “data-saving” AI applications: such applications may even become successful business models.

Humankind has always found itself caught between risk and innovation, and perhaps this is precisely where creativity and motivation flourish. AI must be approached with an open mind. If we keep both its risks and opportunities in sight, and if we accept the challenge without fear, the disruption can become the greatest opportunity for innovation.

Christine Frohn