Lasso Regularisation Within the LocalGLMnet Architecture

  • joan joan
  • uploaded December 6, 2022

Deep learning models have been very successful in the application of machine learning methods, often out-performing classical statistical models such as linear regression models or generalized linear models. On the other hand, deep learning models are often criticized for being neither explainable nor amenable to variable selection. There are two different ways of dealing with this problem: either we use post-hoc model interpretability methods, or we design specific deep learning architectures that allow for easier interpretation and explanation. This paper builds on our previous work on the LocalGLMnet architecture, which provides an interpretable deep learning architecture. In the present paper, we show how group LASSO regularization (and other regularization schemes) can be implemented within the LocalGLMnet architecture so that we obtain feature sparsity for variable selection. We benchmark our approach against the recently developed LassoNet of Lemhadri et al. [11].
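The abstract does not spell out the mechanics, but the core idea of group LASSO for feature sparsity can be illustrated with a minimal NumPy sketch. Here `B` stands for a matrix of per-observation regression attentions (rows are observations, columns are features), loosely in the spirit of the LocalGLMnet; the function names and the proximal-step formulation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def group_lasso_penalty(B, lam=1.0):
    """Group-lasso penalty lam * sum_j ||B[:, j]||_2 on the attention
    matrix B (n observations x q features): the Euclidean norm groups
    each feature's column, so the penalty pushes whole columns to zero."""
    return lam * np.sum(np.sqrt(np.sum(B ** 2, axis=0)))

def prox_group_lasso(B, lam=1.0):
    """Proximal operator (block soft-thresholding) of the penalty:
    columns whose norm falls below lam are set exactly to zero,
    which is what yields feature sparsity for variable selection."""
    norms = np.sqrt(np.sum(B ** 2, axis=0))
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return B * scale

# Toy attentions: column 0 is informative, column 1 is near-zero noise.
B = np.array([[ 2.0,  0.10],
              [ 1.0, -0.10],
              [-2.0,  0.05]])
B_sparse = prox_group_lasso(B, lam=0.5)  # noise column is zeroed out
```

In a training loop, such a proximal step (or an added penalty term in the loss) would be applied to the network's attention outputs; the paper benchmarks this kind of regularization against LassoNet.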

Outcomes

– Introduction to explainable models, such as the LocalGLMnet

– Understanding of regularization techniques within machine learning

Categories: DATA SCIENCE / AI

