
Machine Learning and Project - 2025 entry

MODULE TITLE: Machine Learning and Project
MODULE CODE: COMM422Z
CREDIT VALUE: 30
MODULE CONVENER: Dr Fabrizio Costa (Coordinator)
DURATION: TERM 1, 2, 3
DURATION: WEEKS 11
Number of Students Taking Module (anticipated)
DESCRIPTION - summary of the module content

Machine learning has emerged mainly from computer science and artificial intelligence, and draws on methods from a variety of related subjects including statistics, applied mathematics and more specialised fields such as pattern recognition and neural computation. Applications include image and speech analysis, medical imaging, bioinformatics, and exploratory data analysis in natural science and engineering. This module will provide you with a thorough grounding in the theory and application of machine learning, pattern recognition, classification, categorisation and concept acquisition. It is therefore particularly suitable for Computer Science, Mathematics and Engineering students, and for any students with some experience of probability and programming.

Research Project: in this module you will work on a research problem in an area related to your programme of study, applying the tools and techniques that you have learned throughout the modules of the programme. This is an independent project, supervised by an expert from the relevant area, and it culminates in a dissertation written in the form of a research paper describing your research and its results.

AIMS - intentions of the module
In this data-driven era, modern technologies are generating massive and high-dimensional datasets. This module aims to give you an understanding of computational methods used in modern data analysis.
 
In particular, this module aims to impart knowledge and understanding of machine learning methods, from basic pattern-analysis techniques to state-of-the-art research topics, and to give you experience of data-model development in practical workshops. Neural networks, Bayesian methods and kernel-based algorithms will be introduced for extracting knowledge from large datasets of patterns (data mining techniques). Recent developments in techniques and algorithms for big-data analysis will also be addressed.
INTENDED LEARNING OUTCOMES (ILOs) (see assessment section below for how ILOs will be assessed)

On successful completion of this module you should be able to:

Module Specific Skills and Knowledge

1. Apply advanced and complex principles of statistical machine learning to a variety of data analysis problems
2. Analyse novel pattern recognition and classification problems; establish statistical models for them and write software to solve them

Discipline Specific Skills and Knowledge

3. State the importance and difficulty of establishing a principled probabilistic model for pattern recognition
4. Apply a number of complex and advanced mathematical and numerical techniques to a wide range of problems and domains

Personal and Key Transferable / Employment Skills and Knowledge

5. Identify the compromises and trade-offs which must be made when translating theory into practice
6. Conduct small individual research projects

 

SYLLABUS PLAN - summary of the structure and academic content of the module

1.      Introductory Material

a.      Practical motivation for Machine Learning (applications in science, industry, society).

b.      Paradigms of learning: supervised, unsupervised, semi-supervised, reinforcement.

c.      Core tasks:

i.     Classification (predicting categories).

ii.     Regression (predicting continuous values).

d.      Contrast with rule-based systems; advantages and challenges of data-driven methods.

2.      Error and Loss Functions

a.      Role of loss functions in model training and evaluation.

b.      Common loss functions:

i.     Regression: squared error, absolute error.

ii.     Classification: cross-entropy, hinge loss.

c.      Empirical risk minimisation.

d.      Training error vs. generalisation error.
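
The loss functions and empirical risk minimisation idea above can be sketched in a few lines of Python. This is an illustrative sketch only; helper names such as `empirical_risk` are our own, not part of the module materials:

```python
import math

def squared_error(y_true, y_pred):
    """Squared error loss for regression."""
    return (y_true - y_pred) ** 2

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy; y_true is 0 or 1, p_pred a predicted probability."""
    p_pred = min(max(p_pred, eps), 1 - eps)  # clip to avoid log(0)
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

def empirical_risk(loss, ys, preds):
    """Empirical risk: the average loss over a finite training sample."""
    return sum(loss(y, p) for y, p in zip(ys, preds)) / len(ys)
```

Minimising this sample average stands in for minimising the unobservable generalisation error; the gap between the two is what the later topics on regularisation and model selection try to control.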

3.      Maximum Likelihood and Maximum a Posteriori Estimate

a.      MLE: principle and worked examples (e.g. Gaussian parameters).

b.      MAP: incorporation of priors; Bayesian interpretation.

c.      Relationship to regularisation (MAP ↔ ridge/lasso regression).

d.      Applications in probabilistic modelling.
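
For a concrete instance of both estimates, consider a 1-D Gaussian with known noise variance. A hedged sketch (helper names are our own; the MAP formula is the standard precision-weighted average of prior mean and sample mean):

```python
def gaussian_mle(xs):
    """MLE for a Gaussian: sample mean and (biased, divide-by-n) variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

def gaussian_map_mean(xs, prior_mu=0.0, prior_var=1.0, noise_var=1.0):
    """MAP estimate of the mean under a N(prior_mu, prior_var) prior and
    known noise variance: a precision-weighted average that shrinks the
    sample mean towards the prior mean."""
    n = len(xs)
    x_bar = sum(xs) / n
    precision = 1 / prior_var + n / noise_var
    return (prior_mu / prior_var + n * x_bar / noise_var) / precision
```

The shrinkage towards `prior_mu` is the same mechanism that makes MAP estimation with a Gaussian prior equivalent to ridge regression, as noted in (c).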

4.      Bias–Variance Tradeoff

a.      Error decomposition: bias, variance, irreducible noise.

b.      Intuition: underfitting vs. overfitting.

c.      Role of model complexity and dataset size.

d.      Graphical illustrations and practical implications.
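
The decomposition can be demonstrated numerically by repeatedly sampling data and measuring an estimator's squared bias and variance. A sketch (illustrative only; the "shrunk mean" is a deliberately biased estimator we introduce to show the trade-off):

```python
import random

def simulate_estimator(estimator, true_mu=2.0, n=10, trials=20000, seed=0):
    """Monte Carlo estimate of bias^2 and variance for an estimator of
    the mean of N(true_mu, 1), over many independent samples of size n."""
    rng = random.Random(seed)
    ests = [estimator([rng.gauss(true_mu, 1.0) for _ in range(n)])
            for _ in range(trials)]
    mean_est = sum(ests) / trials
    bias_sq = (mean_est - true_mu) ** 2
    variance = sum((e - mean_est) ** 2 for e in ests) / trials
    return bias_sq, variance

def sample_mean(xs):
    return sum(xs) / len(xs)          # unbiased, higher variance

def shrunk_mean(xs):
    return 0.5 * sum(xs) / len(xs)    # biased towards 0, lower variance

b1, v1 = simulate_estimator(sample_mean)
b2, v2 = simulate_estimator(shrunk_mean)
```

Shrinking trades an increase in squared bias for a four-fold reduction in variance; which side wins depends on the true parameter and the sample size.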

5.      Regularisation

a.      Purpose: controlling complexity, improving generalisation.

b.      Techniques:

i.     Weight penalties: L1 (lasso), L2 (ridge).

ii.     Early stopping.

iii.     Dropout and data augmentation.

c.      Tuning the regularisation strength (bias–variance control).
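
For the weight-penalty case, the effect of the regularisation strength is easiest to see in one dimension, where ridge regression has a one-line closed form. A sketch (our own simplification: no intercept term; `lam` is the L2 penalty weight):

```python
def ridge_1d(xs, ys, lam):
    """1-D ridge regression without intercept: minimises
    sum((y - w*x)^2) + lam * w^2, whose closed-form solution is
    w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)
```

As `lam` grows the weight is shrunk towards zero (more bias, less variance); `lam = 0` recovers ordinary least squares.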

6.      Decision Trees and Ensemble Methods

a.      Decision tree construction (splitting criteria: Gini, entropy).

b.      Pros and cons (interpretability vs. instability).

c.      Ensemble learning strategies:

i.     Bagging: bootstrap aggregation, Random Forests.

ii.     Boosting: AdaBoost, Gradient Boosting.

iii.     Stacking and blending.

d.      Impact on bias and variance.
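
The splitting criteria in (a) can be written down directly. A sketch in plain Python (helper names are our own):

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum over classes of p_k^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy in bits: -sum over classes of p_k * log2(p_k)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_gain(parent, left, right, criterion=gini):
    """Impurity reduction from splitting parent labels into left/right children;
    tree construction greedily picks the split maximising this gain."""
    n = len(parent)
    child = (len(left) / n) * criterion(left) + (len(right) / n) * criterion(right)
    return criterion(parent) - child
```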

7.      Support Vector Machines and Large Margin Classification

a.      Geometric view: maximum-margin hyperplanes.

b.      Support vectors and their role.

c.      Soft margins and slack variables.

d.      Kernel trick: linear vs. nonlinear SVMs (RBF, polynomial kernels).

e.      Scalability and computational issues.
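
A minimal soft-margin linear SVM can be trained by subgradient descent on the regularised hinge loss. This sketch (toy data and hyperparameters are our own choices; real implementations use dedicated solvers such as SMO) shows the objective from (c) in action:

```python
def train_linear_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """Linear soft-margin SVM via subgradient descent on
    lam * ||w||^2 + hinge loss max(0, 1 - y * (w.x + b)), labels in {-1, +1}."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:   # inside the margin: hinge term contributes a subgradient
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:            # correctly classified with margin: only the L2 shrinkage
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

pts = [(0.0, 0.0), (0.0, 1.0), (2.0, 2.0), (2.0, 3.0)]
ys = [-1, -1, 1, 1]
w, b = train_linear_svm(pts, ys)
```

Replacing the inner products with kernel evaluations is what the kernel trick in (d) does; the structure of the optimisation stays the same.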

8.      Deep Neural Networks, Convolutional Architectures, and Gradient-based Optimisation

a.      Feedforward neural networks and universal approximation.

b.      Backpropagation and stochastic gradient descent.

c.      Optimisers: momentum, learning rate schedules.

d.      Convolutional Neural Networks:

i.     Local receptive fields and weight sharing.

ii.     Hierarchical feature learning.

iii.     Applications in computer vision, speech, NLP.

e.      Challenges: vanishing/exploding gradients, overfitting, need for large datasets.
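
In the simplest case, a single logistic neuron, backpropagation reduces to one application of the chain rule: the gradient of the cross-entropy loss with respect to the pre-activation is just `p - y`. A sketch of stochastic gradient descent on such a unit (toy OR data of our own; deep networks repeat the same chain rule layer by layer):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(points, labels, lr=0.5, epochs=2000):
    """SGD on cross-entropy for one logistic neuron. Backpropagation here
    is the chain rule collapsed to one step: dL/dz = p - y."""
    w, b = [0.0] * len(points[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = p - y   # gradient of the loss w.r.t. the pre-activation
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# learn logical OR
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 1, 1, 1]
w, b = train_logistic(pts, ys)
```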

9.      Generative Methods

a.      Goal: modelling full data distributions.

b.      Classical methods: Gaussian Mixture Models, Hidden Markov Models.

c.      Modern methods:

i.     Variational Autoencoders (VAEs).

ii.     Generative Adversarial Networks (GANs).

iii.     Normalising Flows.

iv.     Diffusion Models.

d.      Applications: image and text synthesis, protein/drug generation.
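
Of the classical methods, the Gaussian Mixture Model fitted by Expectation-Maximisation is compact enough to sketch. Below is a 1-D, two-component version (our own simplification; the initial parameters and the variance floor are arbitrary choices):

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(xs, mu=(-1.0, 1.0), var=(1.0, 1.0), pi=(0.5, 0.5), iters=50):
    """EM for a 1-D two-component Gaussian mixture."""
    mu, var, pi = list(mu), list(var), list(pi)
    for _ in range(iters):
        # E-step: soft responsibilities of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: responsibility-weighted re-estimation of the parameters
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
            pi[k] = nk / len(xs)
    return mu, var, pi
```

The E-step assigns soft responsibilities; the M-step re-estimates each component's mean, variance and mixing weight from them. The modern methods in (c) replace this tractable mixture with learned neural density models.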

10.   Reinforcement Learning

a.      Learning paradigm: agent, environment, states, actions, rewards.

b.      Exploration vs. exploitation tradeoff.

c.      Markov Decision Processes (MDPs).

d.      Value-based methods: Q-learning, Deep Q-Networks (DQN).

e.      Policy-based methods: Actor–Critic, PPO.

f.       Applications: robotics, autonomous systems, games, recommendation engines.
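
Tabular Q-learning on a toy deterministic chain MDP illustrates items (a)-(d) together. A sketch (the environment and hyperparameters are our own; reward 1 is given only on reaching the rightmost, terminal state):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a deterministic chain: states 0..n-1,
    actions left (0) and right (1); the rightmost state is terminal."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: explore vs. exploit
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap on the greedy value of the next state
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
policy = [max((0, 1), key=lambda act: q[s][act]) for s in range(4)]  # greedy policy
```

Deep Q-Networks replace the table `q` with a neural network approximator; the update rule keeps the same bootstrapped target.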

LEARNING AND TEACHING
LEARNING ACTIVITIES AND TEACHING METHODS (given in hours of study time)
Scheduled Learning & Teaching Activities: 60
Guided Independent Study: 240
Placement / Study Abroad: 0
DETAILS OF LEARNING ACTIVITIES AND TEACHING METHODS
Category | Hours of study time | Description
Scheduled Learning & Teaching activities | 60 | Asynchronous online learning activities, skill-based exercises and practical work
Guided independent study | 200 | Project work
Guided independent study | 40 | Background reading and self-study, including preparation for online content, reflection on taught material, wider reading and completion of assessments

 

ASSESSMENT
FORMATIVE ASSESSMENT - for feedback and development purposes; does not count towards module grade
Form of Assessment | Size of Assessment (e.g. duration/length) | ILOs Assessed | Feedback Method
Practical exercises: weekly assigned work/exercises/forum discussion at the end of each sub-session and end-of-week activities | 1 hour per week | All | Written feedback summarising performance and key areas for improvement

 

SUMMATIVE ASSESSMENT (% of credit)
Coursework: 100
Written Exams: 0
Practical Exams: 0
DETAILS OF SUMMATIVE ASSESSMENT
Form of Assessment | % of Credit | Size of Assessment (e.g. duration/length) | ILOs Assessed | Feedback Method
Project task (practical work and report) | 70 | Code notebook and 4-page report | All | Written
Coursework | 30 | Code notebook | All | Written

 

DETAILS OF RE-ASSESSMENT (where required by referral or deferral)
Original Form of Assessment | Form of Re-assessment | ILOs Re-assessed | Time Scale for Re-assessment
Project | Project (70%) | All | Referral/deferral period
Coursework | Coursework (30%) | All | Referral/deferral period

 

RE-ASSESSMENT NOTES
RESOURCES
INDICATIVE LEARNING RESOURCES - The following list is offered as an indication of the type and level of information that you are expected to consult. Further guidance will be provided by the Module Convener.

Basic reading:

  • Shawe-Taylor, J. and Cristianini, N. Kernel Methods for Pattern Analysis. Cambridge University Press, 2006, 521813972
  • Bishop, C. Pattern Recognition and Machine Learning. Springer, 2007, 978-0387310732
  • Webb, A. Statistical Pattern Recognition, 2nd ed. Wiley, 2002, 0-470-84513-9
  • Murphy, K. Machine Learning: A Probabilistic Perspective, 1st ed. MIT Press, 2012, 978-0-262-018029
  • Hastie, T., Tibshirani, R. and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. Springer, 2009, 978-0387848570
  • Barber, D. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012, 978-0-521-51814-7

Web based and Electronic Resources:

  • ELE.

Other Resources:

 

Reading list for this module:

There are currently no reading list entries found for this module.

CREDIT VALUE: 30
ECTS VALUE: 15
PRE-REQUISITE MODULES: None
CO-REQUISITE MODULES: None
NQF LEVEL (FHEQ): 7
AVAILABLE AS DISTANCE LEARNING: Yes
ORIGIN DATE: Tuesday 30th September 2025
LAST REVISION DATE: Wednesday 1st October 2025
KEY WORDS SEARCH: None Defined

Please note that all modules are subject to change. Please get in touch if you have any questions about this module.