Categories
- DATA SCIENCE / AI
- AFIR / ERM / RISK
- ASTIN / NON-LIFE
- BANKING / FINANCE
- DIVERSITY & INCLUSION
- EDUCATION
- HEALTH
- IACA / CONSULTING
- LIFE
- PENSIONS
- PROFESSIONALISM
- THOUGHT LEADERSHIP
- MISC
ICA LIVE: Workshop "Diversity of Thought" #14
Italian National Actuarial Congress 2023 - Plenary Session with Frank Schiller
Italian National Actuarial Congress 2023 - Parallel Session on "Science in the Knowledge"
Italian National Actuarial Congress 2023 - Parallel Session with Lutz Wilhelmy, Daniela Martini and International Panelists
Italian National Actuarial Congress 2023 - Parallel Session with Kartina Thompson, Paola Scarabotto and International Panelists
Deep learning models have been very successful in machine learning applications, often outperforming classical statistical models such as linear regression or generalized linear models. On the other hand, deep learning models are often criticized for being neither explainable nor amenable to variable selection. There are two ways of dealing with this problem: either we apply post-hoc model interpretability methods, or we design specific deep learning architectures that allow for easier interpretation and explanation. This paper builds on our previous work on the LocalGLMnet architecture, which provides an interpretable deep learning architecture. In the present paper, we show how group LASSO regularization (and other regularization schemes) can be implemented within the LocalGLMnet architecture so that we obtain feature sparsity for variable selection. We benchmark our approach against the recently developed LassoNet of Lemhadri et al. [11].
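The abstract itself contains no code, but the two ingredients it names can be sketched concretely. Below is a minimal, illustrative sketch assuming TensorFlow/Keras: a LocalGLMnet-style model whose network outputs one regression attention beta_j(x) per feature and predicts via the GLM-style inner product <beta(x), x>, plus a group LASSO regularizer. The function names (build_local_glm_net, GroupLasso), the layer sizes, the link function, and in particular the grouping choice (penalizing each input feature's first-layer weight row) are assumptions for illustration, not the paper's exact scheme.

```python
import tensorflow as tf
from tensorflow import keras


class GroupLasso(keras.regularizers.Regularizer):
    """Group LASSO penalty on a Dense kernel of shape (n_inputs, n_units).

    Each kernel row holds all outgoing weights of one input feature, so
    penalizing row-wise L2 norms can shrink an entire feature's weights
    to zero, i.e. perform variable selection (illustrative grouping).
    """

    def __init__(self, strength=1e-3):
        self.strength = strength

    def __call__(self, kernel):
        # L2 norm per input feature (row); the epsilon keeps the
        # gradient finite when a group is exactly zero
        group_norms = tf.sqrt(tf.reduce_sum(tf.square(kernel), axis=1) + 1e-12)
        return self.strength * tf.reduce_sum(group_norms)

    def get_config(self):
        return {"strength": self.strength}


def build_local_glm_net(n_features, hidden=(64, 32), strength=1e-3):
    """LocalGLMnet-style sketch: a feed-forward net maps x to attentions
    beta(x); the prediction is <beta(x), x> through an inverse link."""
    inputs = keras.Input(shape=(n_features,), name="x")
    z = inputs
    first = True
    for units in hidden:
        reg = GroupLasso(strength) if first else None  # regularize first layer only
        z = keras.layers.Dense(units, activation="tanh",
                               kernel_regularizer=reg)(z)
        first = False
    # one attention weight beta_j(x) per input feature
    betas = keras.layers.Dense(n_features, name="betas")(z)
    # GLM skip connection: sum_j beta_j(x) * x_j
    dot = keras.layers.Dot(axes=1)([betas, inputs])
    # exponential inverse link (e.g. Poisson claim counts); swap as needed
    outputs = keras.layers.Dense(1, activation="exponential")(dot)
    return keras.Model(inputs, outputs)
```

Under these assumptions, the model trains like any Keras model, e.g. model = build_local_glm_net(n_features=40) followed by model.compile(optimizer="adam", loss="poisson"); features whose penalized weight groups are driven to zero effectively drop out of the regression.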
Outcomes
– Introduction to explainable models, such as the LocalGLMnet
– Understanding of regularization techniques within machine learning