
Study information

Large Language Models and Applications - 2025 entry

MODULE TITLE: Large Language Models and Applications
CREDIT VALUE: 15
MODULE CODE: COMM117
MODULE CONVENER: Unknown
DURATION: TERM 1 2 3
DURATION: WEEKS 11
Number of Students Taking Module (anticipated): 40
DESCRIPTION - summary of the module content

Large Language Models (LLMs) have enabled powerful applications across many domains. In this module, you will learn the key technologies, architectures, and training and evaluation methods of LLMs, for example GPT and BERT models. You will also learn about practical use cases and emerging trends in LLM applications. You will be able to analyse real-world problems and formulate effective solutions using LLMs, such as chatbots and machine translation. You will attend lecture and lab sessions, where you will learn to apply and analyse LLMs for your chosen application(s). This module is suitable for Computer Science, Mathematics and Engineering students, and for any students with experience in programming and machine learning.
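As an illustration of the kind of application referred to above (a minimal sketch only; the library, model checkpoint and input sentence are assumptions, not part of the module specification), a machine-translation pipeline can be run in a few lines of Python:

```python
# Minimal machine-translation sketch using the Hugging Face "transformers"
# library. The checkpoint "t5-small" and the input sentence are illustrative
# assumptions; the labs may use different models and frameworks.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("Large language models enable powerful applications.")
print(result[0]["translation_text"])  # French translation of the input
```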

Pre-requisite modules: COMM113 Deep Learning

AIMS - intentions of the module

This module aims to provide you with the knowledge and skills to understand, analyse and apply LLMs, including, for example, their architectures, training techniques and practical applications. You will study key topics such as transformer models, GPT and BERT models, and fine-tuning. You will also examine the use of LLMs in real-world scenarios, for example text generation, text summarisation and machine translation. You will work with relevant AI frameworks and tools, such as LLM APIs, to develop LLM applications. Additionally, you will engage with the ethical challenges and best practices involved in deploying LLMs. By the end of the module, you will have gained hands-on experience in developing, applying and evaluating LLMs across a variety of domains.
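As a hedged sketch of what working with an LLM API can look like (the provider, model name and prompts below are assumptions; any API covered in the labs could be substituted), a single chatbot turn might be implemented as:

```python
# Illustrative chatbot turn using the OpenAI Python client (openai>=1.0).
# The model name and prompts are assumptions for demonstration only; an
# API key is expected in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever API/model the labs use
    messages=[
        {"role": "system", "content": "You are a concise teaching assistant."},
        {"role": "user", "content": "Explain what a transformer is in two sentences."},
    ],
)
print(response.choices[0].message.content)
```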

INTENDED LEARNING OUTCOMES (ILOs) (see assessment section below for how ILOs will be assessed)

On successful completion of this module you should be able to:

Module Specific Skills and Knowledge:

  1. Explain the key technologies, architectures and training methods of large language models (LLMs), including, for example, GPT and BERT models.

  2. Formulate relevant real-world challenges as problems that can be effectively addressed using large language models.

  3. Fine-tune LLMs for various NLP tasks, for example, text generation, text summarisation and machine translation.

  4. Critically evaluate the performance of LLMs and their applications to a range of NLP tasks.

Discipline Specific Skills and Knowledge:

  1. Evaluate the compromises and trade-offs that must be made when translating theory into practice.

Personal and Key Transferable/ Employment Skills and Knowledge:

  1. Effectively communicate insights and evaluations drawn from research papers and technical reports. 

SYLLABUS PLAN - summary of the structure and academic content of the module
Concepts and Theoretical Foundations
Introduction and history of LLMs.
Fundamentals of autoregressive generative models.
 
Implementation and Practical Techniques
For example, GPT architecture and training; BERT pre-training and fine-tuning; Fine-tuning LLMs for domain-specific applications.
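By way of a hedged illustration of the fine-tuning workflow listed above (a sketch only; the dataset, checkpoint and hyperparameters are assumptions rather than coursework requirements), a BERT classifier can be fine-tuned with the Hugging Face libraries roughly as follows:

```python
# Sketch: fine-tuning BERT for binary sentiment classification with the
# Hugging Face "transformers" and "datasets" libraries. Dataset, checkpoint
# and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="bert-imdb", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=encoded["test"].select(range(500)))
trainer.train()
print(trainer.evaluate())  # reports evaluation loss on the held-out sample
```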
 
Analysis and Evaluation
Quantitative and qualitative evaluation techniques.
Ablation studies.
Real-world case studies and advanced applications.
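As an example of the quantitative evaluation techniques covered here (a minimal sketch; the metric library and example sentences are assumptions), corpus-level BLEU for a translation system can be computed as:

```python
# Sketch: computing corpus-level BLEU with the Hugging Face "evaluate"
# library (sacrebleu backend). The prediction and reference sentences are
# made-up examples for demonstration.
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["The cat sits on the mat."]
references = [["The cat is sitting on the mat."]]
print(bleu.compute(predictions=predictions, references=references)["score"])
```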
LEARNING AND TEACHING
LEARNING ACTIVITIES AND TEACHING METHODS (given in hours of study time)
Scheduled Learning & Teaching Activities: 33
Guided Independent Study: 117
Placement / Study Abroad: 0
DETAILS OF LEARNING ACTIVITIES AND TEACHING METHODS
Category | Hours of study time | Description
Scheduled Learning & Teaching activities | 22 | Lectures
Scheduled Learning & Teaching activities | 11 | Workshops/tutorials
Guided independent study | 60 | Coursework preparation and completion
Guided independent study | 57 | Wider reading and self-study

 

ASSESSMENT
FORMATIVE ASSESSMENT - for feedback and development purposes; does not count towards module grade
Form of Assessment | Size of the assessment (e.g. duration/length) | ILOs assessed | Feedback method
Practical Exercises | 10 | All | Answers to exercises and oral feedback

 

SUMMATIVE ASSESSMENT (% of credit)
Coursework: 100% | Written Exams: 0% | Practical Exams: 0%
DETAILS OF SUMMATIVE ASSESSMENT
Form of Assessment | % of credit | Size of the assessment (e.g. duration/length) | ILOs assessed | Feedback method
Continuous assessment 1 | 30 | 18 hours | All | Written
Continuous assessment 2 | 70 | 42 hours | All | Written

 

DETAILS OF RE-ASSESSMENT (where required by referral or deferral)
Original form of assessment | Form of re-assessment | ILOs re-assessed | Time scale for re-assessment
Continuous assessment 1 | Continuous assessment 1 | All | Referral/deferral period
Continuous assessment 2 | Continuous assessment 2 | All | Referral/deferral period

 

RE-ASSESSMENT NOTES

Reassessment will be by coursework/quiz in the failed or deferred element only. For referred candidates, the module mark will be capped at 50%. For deferred candidates, the module mark will be uncapped.

RESOURCES
INDICATIVE LEARNING RESOURCES - The following list is offered as an indication of the type & level of
information that you are expected to consult. Further guidance will be provided by the Module Convener

Basic reading:

  • Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep Learning. Cambridge, MA: MIT Press.

  • Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., Avila, R. et al., 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774.

Web-based and electronic resources: 

 

Reading list for this module:

There are currently no reading list entries found for this module.

CREDIT VALUE: 15
ECTS VALUE: 7.5
PRE-REQUISITE MODULES: COMM113
CO-REQUISITE MODULES:
NQF LEVEL (FHEQ): 7
AVAILABLE AS DISTANCE LEARNING: No
ORIGIN DATE: Monday 11th November 2024
LAST REVISION DATE: Wednesday 6th August 2025
KEY WORDS SEARCH: Large language models, generative AI, natural language processing

Please note that all modules are subject to change; please get in touch if you have any questions about this module.