RAZVAN CHIOREAN g a l l e r y


BCS Hybrid - Calculemus (Let us calculate): What world is AI giving us?

Compliance, AI

Steve Torrance, Chichester Lecture Theatre

Abstract

Gottfried Wilhelm Leibniz (1646–1716), the German philosopher, mathematician and scientific innovator, advocated a universal language and a formal calculus of reasoning. He believed that these tools could resolve any dispute through calculation: hence his famous invitation, "Calculemus" (let us calculate). Leibniz's philosophy, rooted in medieval rational theology, paved the way for modern symbolic logic and eventually led to the development of computing languages, ICT and AI.

In the modern digital world, especially with the rise of generative AI technologies, tech corporations often use the public as test subjects when releasing new products to the market. While publicly funded research projects, such as those within the EU's Horizon programs, have strict ethical requirements like prior informed consent, these principles are often overlooked when new ICT technologies move from research into commercialization. This raises the question: how can we increase democratic participation in the planning and implementation of digital technologies, particularly AI, so that their impact on our social structures is carefully considered and guided by ethical principles?

Steve Torrance has a long-standing association with the University of Sussex, where he has served as a lecturer and visiting senior research fellow for many years. In addition to his academic work, he contributes to the field of ethics as a research reviewer for the European Union, assessing ICT projects with a focus on artificial intelligence regulatory frameworks.

Torrance was invited to speak by BCS, The Chartered Institute for IT, and the lecture was attended by faculty members and industry professionals. Steve's expertise and experience in the field made a valuable contribution to the event.

AI and Ethical Participation

Publicly funded research projects, such as the European Union's Horizon programs, have recognized the importance of ethical requirements and strive to uphold them. These projects often prioritize principles like prior informed consent, ensuring that individuals understand and willingly participate in research activities. However, as ICT technologies transition from the realm of research to commercialization, ethical considerations can sometimes be overlooked or undervalued by private corporations seeking market dominance.

To address this challenge, it is crucial to enhance democratic participation in the planning and implementation of digital technologies, especially AI. By incorporating diverse perspectives and engaging stakeholders from various backgrounds, we can ensure that the development and deployment of AI systems are guided by ethical principles and considerate of their broader impact on society.

One approach to increasing democratic participation is to establish multidisciplinary committees or regulatory bodies composed of experts in AI, ethics, law, philosophy, sociology, and other relevant fields. These bodies can collaborate with technology companies, researchers, policymakers and civil society organizations to develop guidelines and frameworks that emphasize ethical considerations, transparency and accountability in AI development and deployment.

Public awareness campaigns and educational initiatives can play a crucial role in fostering an informed society that actively engages in discussions surrounding AI and its societal implications. By promoting public understanding of AI technologies, their capabilities and potential risks, individuals can make more informed decisions and actively participate in shaping the policies and regulations governing AI.

Furthermore, encouraging open-source development and collaborative innovation can democratize the AI landscape. Open-source frameworks and platforms provide opportunities for individuals and communities to contribute, audit and improve AI systems collectively. This approach not only fosters transparency and inclusivity but also helps mitigate the risks associated with AI technologies being controlled solely by a few powerful entities.

To ensure a future vision for AI and machine learning that aligns with democratic values and ethical principles, ongoing dialogue and collaboration between technology developers, policymakers, researchers and the public are essential. Ethical considerations and public input should be integrated into the design, development, deployment and monitoring of AI systems. Regular audits, impact assessments and independent evaluations can help ensure that AI technologies meet the required standards of fairness, transparency, accountability and respect for individual rights.

Looking ahead, as AI continues to evolve, it is crucial to remain vigilant and adaptable in addressing the ethical challenges that may arise. Ongoing research, discussions and policy frameworks should remain dynamic, incorporating the latest insights, societal values and future visions to shape AI and machine learning for the benefit of humanity. By embracing a collaborative and inclusive approach, we can pave the way for a future in which AI technologies are ethically grounded and transparent, and empower individuals and communities rather than compromising their well-being.

IMAGE: Steve Torrance, Chichester Lecture Theatre, University of Sussex

CREDIT: PHOTO Razvan Chiorean g a l l e r y
