Data-Driven Hybrid Motion–Force Control for Robust Human–Manipulator Interaction Lancaster University – in collaboration with United Kingdom National Nuclear Laboratory (UKNNL)
We invite applications for a fully funded PhD studentship at Lancaster University’s School of Engineering, in partnership with United Kingdom National Nuclear Laboratory (UKNNL). This exciting project will develop novel data-driven, robust, and adaptive control methods for human–robot interaction and teleoperation, with direct applications in nuclear robotics, hazardous environment manipulation, and beyond.
Project Overview
Teleoperation is a critical enabler for safe and efficient operation in hazardous environments such as nuclear decommissioning. However, current industrial solutions suffer from limitations under uncertainty, time delays, and noisy sensing.
This PhD project will design and experimentally validate a hybrid motion–force control framework that ensures precise end-effector positioning while maintaining robust and adaptive force regulation under real-world conditions. Research will include:
- Development of nonlinear robust adaptive controllers and disturbance observers.
- Design of bilateral teleoperation schemes that enhance transparency and stability under communication delays.
- Integration of data-driven approaches for force estimation and safety.
- Experimental validation on industrial robotic platforms at the UKNNL Hot Robotics Facility.
The project provides the opportunity to work on cutting-edge robotics challenges with significant industrial impact, supported by state-of-the-art facilities at both Lancaster University and UKNNL.
Supervisory Team
- Dr Allahyar Montazeri (Lead Supervisor, School of Engineering, Lancaster University; Data Science Institute Member)
- Professor Plamen Angelov (Co-Supervisor, School of Computing and Communications, Lancaster University; Data Science Institute Member)
Training and Development
- The successful candidate will receive a tailored training programme including:
- Hands-on training with ROS2, MATLAB/Simulink, and CoppeliaSim.
- Access to world-class robotics laboratories and facilities.
- Opportunities to engage with national and international conferences, workshops, and training events.
- Insight into the nuclear sector through industrial collaboration with UKNNL.
Funding
- Duration: 4 years (3.5 years EPSRC Doctoral Landscape Award + 0.5 years UKNNL extension)
- Coverage: UKRI minimum stipend, tuition fees for Home students, and a research training support grant.
- Additional support for consumables, maintenance, and travel.
Eligibility
- Open to UK Home students only, due to clearance requirements for UKNNL facilities.
- Applicants should have (or expect to obtain) a First or Upper Second-Class degree (or equivalent) in Engineering, Control, Robotics, Computer Science, or a related discipline.
- Strong mathematical and programming skills (MATLAB, Python, or C++) are highly desirable.
Application Process
Applicants should submit:
- A full CV.
- A one-page cover letter outlining their motivation and suitability for the project.
- Reference letter from two academics commenting on the candidate abilities.
Applications will be considered on a rolling basis until the position is filled, with an expected start date of January 2025.
Closing Date – 1st December
For informal enquiries, please contact Dr Allahyar Montazeri
Details
Start Date: As soon as possible
Deadline for application: Open (it is recommended you apply as soon as possible)
Interview: Rolling
Description
If you’re interested in protecting AI from rapidly emerging cyber threats and securing a technology that will define the coming decades, this PhD studentship is for you.
We are seeking candidates to join our AI security group at Lancaster University, and become part of this rapidly growing research field.
The adoption of Artificial Intelligence (AI) and prominent technologies such as Generative AI, LLMs, and Agentic AI systems is rapidly accelerating across both research and industry.
While there is considerable research activity on the application of AI for security, there has been less attention towards the security of AI itself. AI security focuses on addressing cyber security risks against the AI systems against a wide plethora of cyber attacks, spanning prompt injection, data leakage, jailbreaking, bypassing guardrails, model backdoors, and more. The emergence of such AI risks has drawn the attention of every nation and major business, however existing cyber security tools and methods are ineffective within AI systems due to the intrinsically random, complex, and opaque nature of neural networks. To date, how to secure today’s and tomorrow’s AI models and systems remains unsolved.
This project would provide you the skill and training necessary to become a researcher specializing in AI security – an area that is increasingly sought after in academia and industry.
Research Areas
Topics of interest you could pursue include:
- Discover new types of cyber attacks / security vulnerabilities in AI and GenAI
- Create defence systems and countermeasures against AI cyber attacks
- Design run-time detection systems for prompt injection and jailbreaking
- Explore different cyber attack modalities (i.e. malicious instructions in images/audio)
- Build and develop cutting-edge LLM guardrails and firewalls
- Investigate hidden security characteristics within neural networks
- Identify ShadowAI – malicious AI systems hidden within an organization
- Uncover backdoor attacks and model hijacking within ML artefacts
What We Offer
- A 3.5-year fully funded PhD studentship (including both tuition and stipend).
- Access to a large-scale GPU data centre entirely dedicated to our research lab.
- Comprehensive training in cutting-edge AI technology and cyber security techniques.
- Employment opportunities at Mindgard (https://mindgard.ai/), an award-winning AI security company founded at our lab, and now based in the heart of London.
- Collaboration opportunities with Nvidia, Mindgard, GCHQ’s National Cyber Security Centre, and NetSPI, amongst others.
- Opportunity to travel to conferences internationally to present your research.
Our Research Lab
We are among the few labs globally specializing in AI security. You will be part of a new cohort of PhD students joining an established team of scientists and engineers. Founded in 2016, the research lab led by Professor Peter Garraghan is internationally renowned in AI systems and security, publishing over 70 research papers, securing over £14M in external grant funding, the formation of Mindgard, and all research students to date securing positions in academia or industry R&D labs upon graduation.
About You
- We highly value people who are kind, curious and believe in making a difference.
- A good background in Computer Science, ideally a BSc in Computer science (or equivalent) with a 2:1 classification and above.
- Interest in Artificial Intelligence, Cyber Security, Distributed Systems, or a combination of the above.
- Highly motivated, and capable of working both independently and as part of a team.
- Good communication, technical knowledge, and writing skills.
Get in Touch
These positions are available now, thus candidates are strongly recommended to apply as early as possible.
For informal enquiries about these positions, please contact and share your CV with Professor Peter Garraghan. To apply, please visit our school PhD opportunities page, which includes guidance on submission, and a link to the submission system.
Details
Academic Requirements: First-class or 2.1 (Hons) degree, or master’s degree (or equivalent) in an appropriate subject
Recently, we have seen a transformative change in the use of artificial intelligence (AI) technology in many aspects of our lives. In our personal lives, we have access to services and tools that make use of AI in creative and useful ways and – similarly – in a professional setting, AI is being used to enable major changes to the way business is conducted. Some propose that we are at the beginning of a journey in which AI will fundamentally change the way our societies and businesses function.
The concept of AI has been around for several decades and can take many forms. A recent US National Institute for Standards and Technology (NIST) document (NIST AI 100-2e2023), which examines AI attacks, defines two main classes: (i) predictive; and (ii) generative AI. The former is concerned with predicting classes of data (e.g. for anomaly detection); whereas the latter is used to generate content, often using large language models (LLMs). In general, this is not a new technology. However, the recent rapid acceleration of the use of AI has emerged because of new generative models and abundant access to task-specific compute capabilities.
Inspired by this trend, the nuclear sector is exploring the use AI and its capabilities to support a variety of functions. For example, it can be used to enable efficiencies in business process execution, supporting staff with a variety of decision-making tasks using AI-enabled assistants. Moreover, AI can be used to support other functions in a nuclear setting such as those related to physical security, materials inspection, and automated and autonomous robotics and control. A comprehensive review of the uses of AI in the nuclear sector has been produced by the International Atomic Energy Agency (IAEA)[1].
An emerging area of application of AI is to support efficient, safe and secure use of operational technology (OT). This can take many forms, including using machine learning models to optimize control strategies without the need to develop mathematical models of a target system, supporting predictive maintenance to ensure maintenance activities are realized in a cost-effective and safe manner, enabling autonomous operations, and using various forms of machine learning to predict and classify anomalous system behaviour. OT systems typically support business and – in some cases – safety critical functions; therefore, the correct operation of OT that incorporates AI is of the utmost importance.
Nuclear is the most heavily regulated sector in the world. This is because of the uniquely severe consequences of the failure of functions on nuclear safety and security. Failures can result in major environmental disasters and loss of life. In this setting, the use of AI should be approached in a consequence and risk-informed manner. An important way to manage risks that stem from errant AI behaviour is to realize so-called guardrails. Guardrails take many forms and can be described in this context as socio-technical measures to protect the function of systems from the errant behaviour of artificial intelligence. Example guardrails include policies that mandate that humans are integral to decision making that is supported by AI or physical controls (safety interlocks, etc.) that prevent an AI-supported system from causing an accident. It is worth noting that guardrails will likely play an important role in gaining regulatory approval for the use of AI to support safety-relevant functions in nuclear.
Whilst chosen guardrails may be suitable at the genesis of a system, there are potential longitudinal socio-technical effects that might degrade their performance. These effects emerge because of different forms of “drift” associated with a system and its use. Example types of drift include organizational change (e.g. changes in policy), shifts in the criticality of functions and associated systems, changes in regulatory assurance requirements, and generational shifts in staff experience and knowledge, e.g. caused by AI-supported autonomy. These changes may be slow and occur over extended periods, making them difficult to detect. The result is a failure or sub-optimal use of guardrails to effectively mitigate errant AI behaviour.
The aim of this PhD proposal is to investigate a framework that supports risk-informed decisions to be made about the choice of guardrails for ensuring the safe and secure operation of nuclear functions, which include systems that have an AI component. Specifically, the project will focus on case studies that incorporate AI for improving the security and efficiency of OT in the nuclear sector. This framework should consider the characteristics of the guardrails (e.g. their cost, flexibility, scrutability, and effectiveness) along with how they are affected by longitudinal drift. The intention is to take a systems view, in line with work by Leveson et al.[2] who argue that traditional models of failure causality (the fault, error, failure chain) are inadequate for understanding the causes of failures. Rather, that a more complex view of the system in its context, which include changes in the way systems are operated over time, is better suited to this task.
Supervisor: Professor Paul Smith, School of Computing and Communications, Lancaster University
This is a 42-month funded project, including fees and an enhanced stipend.
Entry Requirements
Applicants must have a Master’s degree and/or a minimum of a 2:1 in their bachelor’s degree in computer science or a related field.
Applicants must be resident in the UK during the period of study; they may need to travel to collect data during their studies and will need to obtain security clearance. It is expected the primary fieldwork site will be in Cumbria.
How to Apply
Applications should be submitted via Phillip Satchell, the postgraduate coordinator in the School of Computing and Communications
You must provide an up-to-date CV, and two references. We also request a written statement of purpose (explaining why you want to undertake this project, why you have the requisite skills). A further piece of research/assignment work, dissertation section, or publication is also recommended to be submitted.
Applicants can contact Professor Paul Smith to discuss their applications
[1] https://www.iaea.org/publications/15198/artificial-intelligence-for-accelerating-nuclear-applications-science-and-technology
[2] http://sunnyday.mit.edu/
We invite applications for a fully funded PhD studentship at Lancaster University in collaboration with SP Electricity North West (SP ENWL). This is an exciting opportunity to develop next-generation methods for attack surface mapping, exploring how data science, AI, and cybersecurity techniques can be used to produce more accurate and reliable tools that support decision-makers in their analysis of large-scale modern digital infrastructure, such as power grids.
PhD Overview
As society becomes increasingly reliant on digital infrastructure, it is critical that decision-makers at organisational and national levels understand the resilience of their systems. Analysts use Attack Surface Mapping (ASM) to identify their internet-connected digital assets and associated vulnerabilities. This allows them to understand how robust the infrastructure is, plan mitigation strategies, and support recovery post-attack.
This PhD will leverage data science, AI, and cybersecurity techniques to develop the next generation of ASM tools. Research will include:
- Fusing multiple ASM tools and pieces of open-source information to give more accurate understanding of attack surfaces than the current state-of-the-art tools can provide.
- Developing techniques to measure and interpret the uncertainty of ASM results, giving practitioners confidence in their analysis.
- Investigating how AI automation can safely and effectively improve the ASM process.
This PhD is in collaboration with SP Electricity North West, with a crucial focus on securing digital infrastructure across their network and enabling the secure deployment of innovative new services as they digitise their operations. Furthermore, this project aligns with ongoing work the team are carrying out with the UK’s National Cybersecurity Centre (NCSC); as such, there is a real opportunity for your research to make an impact.
Supervisory Team
- Dr Edward Austin (School of Computing and Communications)
- Professor Nicholas Race (School of Computing and Communications)
- Dr Xiandong Ma (School of Engineering)
Training and Development
The successful candidate will receive a tailored training programme including:
- Support using, and access to, ASM tools such as Shodan and Censys.
- Opportunities to engage with national and international conferences, workshops, and training events.
- Insight into the power sector through industrial collaboration with SP ENWL.
Funding
- A 3.5-year UKRI-funded studentship, including a stipend (currently £20,780 per year) and full tuition fees for Home students.
- An additional research training grant (£1,000 per year) for consumables, maintenance, and travel to events/conferences.
Eligibility
- Applicants should have (or expect to obtain) a First or Upper Second-Class degree (or equivalent) in Computer Science, Data Science, or Cybersecurity. Applicants from other disciplines with a substantial mathematical component are also encouraged to apply.
- There is no expectation that a candidate will be proficient in all areas of data science, cybersecurity, computer networking and AI tooling. However, candidates should be aware that this PhD will have a substantial cybersecurity component.
Application Process
Applicants should submit:
- A cover letter outlining their motivation and suitability.
- A CV outlining skills and experience.
Applications will be considered on a rolling basis until the position is filled. The expected start dates are either April 2026 or October 2026.
Contact Information
Please contact Professor Nicholas Race (n.race@lancaster.ac.uk) and Dr Edward Austin (e.austin@lancaster.ac.uk)