AI and Society

Module title: AI and Society
Module code: PHLM020
Academic year: 2025/6
Credits: 30
Module staff
Duration: 11 weeks

Number of students taking module (anticipated)

10

Module description

Advances in artificial intelligence, automation, and machine learning have the potential to influence every aspect of modern life, and they bring with them both significant opportunities and significant risks. How should we conceptualise the nature and influence of AI as it becomes a more pervasive part of our society, and how should we understand and mitigate its dangers?

In this module, you will learn about the theoretical and historical foundations of artificial intelligence, and reflect upon its present and future capacities for learning, agency, consciousness, and semantic understanding. AI systems may play the role of teacher, carer, therapist, or friend; they may propagate news, misinformation, or propaganda; and they may replicate creative, military, or economic decision-making. How these artificial agents are to be integrated into 21st Century life, and how we are to accommodate them in our theory and practice, are urgent issues for contemporary society.

This module has no pre-requisites and is suitable for students without a specialist background in AI, mathematics, or computer science. 

Module aims - intentions of the module

This module aims to equip you with the knowledge and skills to develop and defend your own position on the fundamental nature of AI, and to evaluate the complex opportunities and risks it raises for modern society. The module introduces you to key theoretical questions surrounding artificial intelligence, machine learning, and automation, alongside associated technologies such as LLMs, chatbots, digital avatars, autonomous vehicles, and robotics. The social and ethical contexts in which these technologies arise will be examined, including local institutions such as schools and workplaces, and wider geopolitical settings in online and offline space. The module considers current debates over the moral status of artificial agents, and engages with recent scholarly literature in practical ethics, epistemology, and the philosophy and social science of technology.

Intended Learning Outcomes (ILOs)

ILO: Module-specific skills

On successfully completing the module you will be able to...

  • 1. Evaluate the influence of AI technologies on modern society, including their role in work, education, healthcare, news, warfare, and interpersonal relationships.
  • 2. Critically evaluate the differing costs and benefits associated with the use of AI technologies from the perspectives of users, designers, and regulators.

ILO: Discipline-specific skills

On successfully completing the module you will be able to...

  • 3. Critically reflect on the ethical considerations associated with the use of AI technologies in formal and informal social contexts
  • 4. Display a comprehensive and critical understanding of key contributions to scholarship on AI and its place in society

ILO: Personal and key skills

On successfully completing the module you will be able to...

  • 5. Effectively communicate complex ideas using written and verbal methods appropriate to the intended audience.
  • 6. Demonstrate cognitive skills of critical and reflective thinking
  • 7. Demonstrate effective independent study and research skills

Syllabus plan

While the precise content of the module will vary from year to year, it is expected that the syllabus will cover the following topics:

  • AI, social media, and the flow of information;
  • AI in the context of therapy and care;
  • AI in the context of interpersonal relationships;
  • AI and warfare;
  • AI, moral agency, and moral patiency;
  • AI, deepfakes, and democracy;
  • AI, superintelligence, and existential risk;
  • AI, automation, labour, and capital;
  • AI, creativity, and intellectual property;
  • AI decision-making and the right to explanation. 

Learning activities and teaching methods (given in hours of study time)

Scheduled Learning and Teaching Activities: 22
Guided independent study: 278
Placement / study abroad: 0

Details of learning activities and teaching methods

Scheduled Learning and Teaching (22 hours): 11 x 2-hour lectures and discussion (2 hours/week)
Guided Independent Study (178 hours): Background reading
Guided Independent Study (100 hours): Coursework preparation and writing

Formative assessment

Form of assessment: Essay plan (1500 words)
ILOs assessed: 1-7
Feedback method: Oral and written comments

Summative assessment (% of credit)

Coursework: 100
Written exams: 0
Practical exams: 0

Details of summative assessment

Form of assessment: Written essay (5000 words)
% of credit: 100
ILOs assessed: 1-7
Feedback method: Written comments

Details of re-assessment (where required by referral or deferral)

Original form of assessment: Written essay (5000 words)
Form of re-assessment: Written essay (5000 words)
ILOs re-assessed: 1-7
Timescale for re-assessment: August/September reassessment period

Indicative learning resources - Basic reading

  • Chalmers, D. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17, 7–65.
  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221.
  • Dung, L. (2023). How to deal with risks of AI suffering. Inquiry, 1–29. https://doi.org/10.1080/0020174X.2023.2238287
  • Eggert, L. (2025). Autonomised harming. Philosophical Studies, 182, 1–24. https://doi.org/10.1007/s11098-023-01990-y
  • Harris, K. R. (2024). AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Philosophy & Technology, 37(1), 7.
  • Mallory, F. (2023). Fictionalism about Chatbots. Ergo: An Open Access Journal of Philosophy, 10(38). https://doi.org/10.3998/ergo.4668
  • Moosavi, P. (2023). Will intelligent machines become moral patients? Philosophy and Phenomenological Research, 109(1), 95–116.
  • Rini, R. (2020). Deepfakes and the Epistemic Backstop. Philosophers' Imprint, 20(24), 1–16.
  • Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1).
  • Vredenburgh, K. (2022). The Right to Explanation. The Journal of Political Philosophy, 30(2), 209–229.

Key words search

Artificial intelligence; digital society; ethics; cognition

Credit value: 30
Module ECTS

15

Module pre-requisites

None

Module co-requisites

None

NQF level (module)

7

Available as distance learning?

No

Origin date

01/04/2025

Last revision date

01/04/2025