Facebook has announced a new partnership with the Technical University of Munich (TUM) to support the creation of an independent AI ethics research center. The Institute for Ethics in Artificial Intelligence, supported by an initial $7.5 million grant from Facebook over five years, will help advance the growing field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.
Artificial intelligence offers an immense opportunity to benefit people and communities around the world. But as AI technology increasingly impacts people and society, the academics, industry stakeholders and developers driving these advances need to act responsibly and ensure that AI treats people fairly, protects their safety, respects their privacy, and works for them.
At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do, from the data labels we use, to the individual algorithms we build, to the systems they are a part of. We’re developing new tools like Fairness Flow, which can help generate metrics for evaluating whether certain models contain unintended biases. We also work with groups like the Partnership on AI, of which Facebook is a founding member, and the AI4People initiative. However, AI poses complex problems that industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics.
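The post does not describe how Fairness Flow works internally, but the general idea of generating bias metrics for a model can be illustrated with a short sketch. The example below is hypothetical and is not Fairness Flow itself: it compares a classifier's positive-prediction rate and false positive rate across demographic groups, two common group-fairness checks; the function name and data are made up for illustration.

```python
# Hypothetical group-fairness check (NOT Facebook's Fairness Flow).
# Compares two common metrics across demographic groups:
#   - positive prediction rate (demographic parity)
#   - false positive rate (error-rate balance)

from collections import defaultdict

def group_fairness_report(y_true, y_pred, groups):
    """Compute per-group positive rate and false positive rate.

    y_true : ground-truth labels (0 or 1), one per example
    y_pred : model predictions   (0 or 1), one per example
    groups : group identifier (e.g. "A", "B"), one per example
    """
    stats = defaultdict(lambda: {"n": 0, "pos_pred": 0, "neg": 0, "fp": 0})
    for label, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        s["n"] += 1
        s["pos_pred"] += pred          # predicted positive
        if label == 0:
            s["neg"] += 1              # actual negatives
            s["fp"] += pred            # false positives
    report = {}
    for group, s in stats.items():
        report[group] = {
            "positive_rate": s["pos_pred"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
    return report

# Toy usage: a large gap between groups flags a potential unintended bias
# worth investigating; it is a signal, not proof, of unfairness.
if __name__ == "__main__":
    y_true = [1, 0, 1, 0, 0, 1, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(group_fairness_report(y_true, y_pred, groups))
```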
The Technical University of Munich is one of the top-ranked universities worldwide in the field of artificial intelligence, with work extending from fundamental research, to applications in fields like robotics and machine intelligence, to the study of the social implications of AI. The Institute for Ethics in Artificial Intelligence will leverage TUM’s outstanding academic expertise, resources and global network to pursue rigorous ethical research into the questions evolving technologies raise.
The Institute will also benefit from Germany’s position at the forefront of the conversation surrounding ethical frameworks for AI — including the creation of government-led ethical guidelines on autonomous driving — and its work with European institutions on these issues.
Institute Overview
Drawing on expertise across academia and industry, the Institute will conduct independent, evidence-based research to provide insight and guidance for society, industry, legislators and decision-makers across the private and public sectors. The Institute will address issues that affect the use and impact of artificial intelligence, such as safety, privacy, fairness and transparency.
Through its work, the Institute will seek to contribute to the broader conversation surrounding ethics and AI, pursuing research that can provide tangible frameworks, methodologies and algorithmic approaches, and advising AI developers and practitioners on ethical best practices for addressing real-world challenges.
To help meet the need for thoughtful and groundbreaking academic research in these areas, Facebook looks forward to supporting the Institute and to offering an industry perspective on academic research proposals, helping make that research more actionable and impactful.
Operational Model
The independent Institute will be led by TUM Professor Dr. Christoph Lütge, who holds degrees in business informatics and philosophy and has served as the Peter Löscher Endowed Chair of Business Ethics at TUM since 2010. Working with a diverse advisory board of representatives from academia, civil society and industry, the Institute will identify specific research questions and convene researchers focused on AI ethics and governance-related issues.
“At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy,” Dr. Lütge said. “Our evidence-based research will address issues that lie at the interface of technology and human values. Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction.”
While Facebook has provided initial funding, the Institute will explore other funding opportunities from additional partners and agencies. Facebook may also share insights, tools, and industry expertise related to issues such as addressing algorithmic bias, in order to help Institute researchers focus on real-world problems that manifest at scale.
The Institute will also pursue opportunities to publish research and work with other experts in the field; organize conferences, symposia, and workshops; and launch educational activities with other leading institutions in common areas of interest.
Realizing AI’s huge potential for good while balancing its risks is a global effort, and it will not be accomplished overnight. The Institute is an exciting step forward in our continued commitment to partnering with academic institutions, governments, NGOs, advocacy and industry groups, and others who are working to advance AI in a safe and responsible way.
Source: https://newsroom.fb.com/news/2019/01/tum-institute-for-ethics-in-ai/