
Steering clear of our worst nightmares of artificial intelligence

OPINION: A giant leap for mankind, without doubt, but will artificial intelligence be a friend or foe? The answer may lie in involving the public more closely in scrutinizing AI's problematic aspects.


Artificial intelligence (AI) is increasingly used in many public and private spheres of human life. Fields such as law enforcement, financial markets, social media, and autonomous machines and robotics have made great strides in recent years through a new machine-learning paradigm: deep learning.

This raises concerns about human autonomy, agency, fairness, and justice. Among them are very practical fears, for instance that systems may unintentionally discriminate against certain groups of people, as well as more philosophical ones, given that deep learning systems are notoriously hard to debug.

AI developers, media, and civil society need to engage and work together to overcome the poor transparency and accountability of new AI technologies.

The good, the bad or the ugly?

Like other innovations, in fields such as nuclear power or bioengineering, AI can both offer smart solutions and leave us with tricky problems. On the one hand, AI delivers improved organisational performance and decisions and can, for example, help diagnose and treat poorly understood diseases.


On the other hand, as Elon Musk put it some years back: “AI is far more dangerous than nukes.” Like humans, AI systems can fail to achieve their intended goals. This can be because the training data they use is biased, or because their recommendations and decisions yield unintended and negative consequences.

Inclusive discussions and careful considerations

How can we foster artificial intelligence that is not harmful but beneficial to human life?

Communication plays a crucial role in addressing this question. Research in both business and technology ethics has emphasized the value of public engagement and deliberation for shaping responsible innovation. This means a thoughtful and inclusive form of discussion that seeks whatever consensus is attainable. Ideally, ethical businesses engage with local actors, governments, and civil society, using such deliberation to foster better understanding and more responsible innovation processes.

However, for reasons of self-interest, power imbalances, and information advantages, businesses are unlikely to resolve the challenges of AI deliberatively on their own.

Is it possible to shine light into black boxes?

Even if private-sector innovators were committed to communicating with stakeholders to ensure fair and responsible innovation, AI as a technology complicates such efforts because of its opacity, that is, its poor transparency, explainability, and accountability.

This is because machine-learning algorithms may, for practical purposes, be inaccessible and beyond our ability to understand. This holds not only for laypersons but often also, at least in everyday practice, for the organisations that own and deploy them, and even for system programmers and specialists.

However, the opacity of AI must not serve as an excuse to shield AI from scrutiny in public discourse.

To show pathways forward, we first relate the ideal requirements for deliberation to the specific conditions of AI opacity. These requirements comprise, for instance, the principle that every voice should, within reasonable limits, be given the opportunity to be heard. To make this work in practice, we highlight the responsibilities of key actors such as AI developers, the media, NGOs, and activists. Through such deliberative exploration and evaluation of AI implementations, AI can become more transparent and accountable.

Open democratic discussions needed

We advocate bridging the gap between the experts who have the technical knowledge to inspect AI and the public at large who may be affected by it. Journalism and activism can play a translating role, provided that role is performed well and with a deliberative stance.

To overcome common deficits in AI innovation, AI developers should openly share technological shortcomings and invite the media and engaged citizens to carry the debate forward. Together, this should strengthen the bottom-up identification, problematization, and interpretation of AI in practice, making progress in this domain more responsible in the long run.


Reference:

Buhmann, A. and Fieseler, C. (2021). "Towards a deliberative framework for responsible innovation in artificial intelligence." Technology in Society 64: 101475. https://doi.org/10.1016/j.techsoc.2020.101475 (open access)
