In the past few years, AI technology has become practical and ubiquitous. Yet, as a relatively new technology, society has not yet developed the trust or social norms needed for its use. Many people exhibit algorithm aversion, reinforced by headline news stories of embarrassing ethical and reputational AI failures. Because of their technical complexity, AI systems tend to be designed and managed by people with a narrow engineering focus. This article describes how human values, human rights, and behavioural science can be at odds with the engineering-focused practices that are common in AI, and then develops a new set of best practices, informed by these disciplines, that can create value for all stakeholders.
Find the Q&A here: Q&A on 'Challenges in a Digital Era'