Large Language Models and Applications - 2025 entry
| MODULE TITLE | Large Language Models and Applications | CREDIT VALUE | 15 |
|---|---|---|---|
| MODULE CODE | COMM117 | MODULE CONVENER | Unknown |
| DURATION: TERM | 1 | 2 | 3 |
|---|---|---|---|
| DURATION: WEEKS | 11 | | |
| Number of Students Taking Module (anticipated) | 40 |
|---|---|
Large Language Models (LLMs) have enabled powerful applications across many domains. In this module, you will learn the key technologies, architectures, and training and evaluation methods of LLMs, such as GPT and BERT. You will also learn about practical use cases and emerging trends in LLM applications. You will be able to analyse real-world problems and formulate effective solutions using LLMs, for applications such as chatbots and machine translation. You will attend lecture and lab sessions, where you will learn to apply and analyse LLMs for your chosen application(s). This module is suitable for Computer Science, Mathematics and Engineering students, and for any students with experience in programming and machine learning.
Pre-requisite modules: COMM113 Deep Learning
This module aims to provide you with the knowledge and skills to understand, analyse and apply LLMs, covering, for example, their architectures, training techniques and practical applications. You will study key topics such as transformer models, GPT and BERT, and fine-tuning. You will also examine the use of LLMs in real-world scenarios, for example text generation, text summarisation and machine translation. You will work with relevant AI frameworks and tools, such as LLM APIs, to develop LLM applications. Additionally, you will engage with ethical challenges and best practices in deploying LLMs. By the end of the module, you will have gained hands-on experience in developing, applying and evaluating LLMs across a variety of domains.
On successful completion of this module you should be able to:
Module Specific Skills and Knowledge:
- Explain the key technologies, architectures and training methods of large language models (LLMs), including, for example, GPT and BERT models.
- Formulate relevant real-world challenges as problems that can be effectively addressed using large language models.
- Fine-tune LLMs for various NLP tasks, for example, text generation, text summarisation and machine translation.
- Critically evaluate the performance of LLMs and their applications to a range of NLP tasks.
Discipline Specific Skills and Knowledge:
- Evaluate the compromises and trade-offs that must be made when translating theory into practice.
Personal and Key Transferable/ Employment Skills and Knowledge:
- Effectively communicate insights and evaluations drawn from research papers and technical reports.
| Scheduled Learning & Teaching Activities | 33 | Guided Independent Study | 117 | Placement / Study Abroad | 0 |
|---|---|---|---|---|---|

| Category | Hours of study time | Description |
|---|---|---|
| Scheduled Learning & Teaching activities | 22 | Lectures |
| Scheduled Learning & Teaching activities | 11 | Workshops/tutorials |
| Guided independent study | 60 | Coursework preparation and completion |
| Guided independent study | 57 | Wider reading and self-study |
| Form of Assessment | Size of the assessment e.g. duration/length | ILOs assessed | Feedback method |
|---|---|---|---|
| Practical Exercises | 10 | All | Answers to exercises and oral feedback |
| Coursework | 100 | Written Exams | 0 | Practical Exams | 0 |
|---|---|---|---|---|---|
| Form of Assessment | % of credit | Size of the assessment e.g. duration/length | ILOs assessed | Feedback method |
|---|---|---|---|---|
| Continuous assessment 1 | 30 | 18 hours | All | Written |
| Continuous assessment 2 | 70 | 42 hours | All | Written |
| Original form of assessment | Form of re-assessment | ILOs re-assessed | Time scale for re-assessment |
|---|---|---|---|
| Continuous assessment 1 | Continuous assessment 1 | All | Referral/deferral period |
| Continuous assessment 2 | Continuous assessment 2 | All | Referral/deferral period |
Reassessment will be by coursework/quiz in the failed or deferred element only. For referred candidates, the module mark will be capped at 50%. For deferred candidates, the module mark will be uncapped.
The following list is offered as an indication of the type and level of information that you are expected to consult. Further guidance will be provided by the Module Convener.
Basic reading:
- Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep Learning. Cambridge, MA: MIT Press.
- Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S. and Avila, R., 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774.
Web-based and electronic resources:
- ELE
- Jurafsky, D. and Martin, J.H., Speech and Language Processing, Chapter 10
Reading list for this module:
| CREDIT VALUE | 15 | ECTS VALUE | 7.5 |
|---|---|---|---|
| PRE-REQUISITE MODULES | COMM113 |
|---|---|
| CO-REQUISITE MODULES | |
| NQF LEVEL (FHEQ) | 7 | AVAILABLE AS DISTANCE LEARNING | No |
|---|---|---|---|
| ORIGIN DATE | Monday 11th November 2024 | LAST REVISION DATE | Wednesday 6th August 2025 |
| KEY WORDS SEARCH | Large language models, generative AI, natural language processing |
|---|---|
Please note that all modules are subject to change. Please get in touch if you have any questions about this module.


