Deep Neural Networks as Dynamical Systems by Dr Andrew Corbett (IDSAI)
Interpretable AI is key to understanding the predictions of machine learning models.
An Institute for Data Science and Artificial Intelligence seminar
Date: 29 November 2022
Time: 13:00 to 14:00
Place: Streatham Court Old B
Hybrid delivery using Zoom.
The most interpretable model, perhaps, is a linear fit, in which input features are ascribed a measure of relevance through their corresponding parameter. As one constructs deep neural networks by stacking thousands (or millions) of parameters into convoluted non-linear layers, interpretability goes out the window—whilst accuracy and performance become state of the art. Recently, these deep architectures have been realised through the lens of dynamical systems; forward passes are solutions to ordinary differential equations. This opens up a centuries-old tool kit of mathematical techniques which we shall endeavour to explore in an accessible way over the course of this talk.
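The residual-network/ODE correspondence mentioned above can be sketched in a few lines: a stack of residual blocks h ← h + ε·f(h) is exactly the explicit Euler scheme for the differential equation dh/dt = f(h). This is only an illustrative sketch, not material from the talk; the layer `f`, its weights, and the step size are all hypothetical.

```python
import numpy as np

# Hypothetical residual layer: f(h) = tanh(W h + b).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
b = np.zeros(4)

def f(h):
    """A simple non-linear layer, standing in for one residual block's update."""
    return np.tanh(W @ h + b)

def forward_pass(h0, depth=100, eps=0.01):
    """A stack of residual blocks h <- h + eps * f(h).

    This is the explicit Euler discretisation of dh/dt = f(h):
    as depth grows and eps shrinks (with depth * eps fixed),
    the forward pass converges to the ODE's solution at t = depth * eps.
    """
    h = h0
    for _ in range(depth):
        h = h + eps * f(h)  # one residual block == one Euler step
    return h

h_deep = forward_pass(np.ones(4))
```

Refining the discretisation (doubling the depth while halving ε) leaves the output nearly unchanged, which is the sense in which the deep network approximates a continuous-time dynamical system.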
Delivery and Registration:
The seminar will be delivered in person and on Zoom. Registration closes on Tuesday, 29 November 2022 at 09:00 (GMT). If you miss the registration deadline, please contact IDSAI.
Whilst we appreciate the flexibility that hybrid delivery brings, we encourage you to come along in person, where there will be tea and coffee afterwards.
If you have any queries, please contact IDSAI.
This forms part of the IDSAI Research Seminar Series for 2022-2023.