What is Unsecurities Lab?
Unsecurities Lab is an interdisciplinary research method that uses immersive artworks to engage specialist teams with complex and emerging cyber-physical challenges.
Developed by Lancaster University in partnership with Abandon Normal Devices and embedded within Security Lancaster, the Lab convenes small, specialist groups of researcher-practitioners to think together in conditions that stimulate innovative and entangled thinking.
Contemporary artworks often deal with system complexity, world building, non-human agency, and unexpected consequences. These are themes that resonate closely with current challenges in security research.
Unsecurities Lab supports foresight and security sensemaking under post-truth, posthuman conditions and grows a community able to “forecast better” by improving the quality and resilience of judgement when evidence itself is unknowable.
By working with immersive media, Unsecurities Lab enables participants to explore threats associated with emerging cyber-physical realities that are not easily represented in technical models or disciplinary frameworks, including the unintended consequences of climate technologies and other agential, and perhaps autonomous, systems. Through close engagement with the artworks and with each other, participants surface tensions, reconsider the assumptions of their own disciplines, and reflect on different configurations of agency, responsibility, and reparative culture.
Each Lab takes place in Lancaster’s 180° Data Immersion Suite, where a selected artwork acts as a shared stimulus for collaborative analysis. The artworks are speculative and immersive, designed to prompt focused discussion on a given cyber-physical challenge. Participants’ structured dialogues are captured using AI transcription tools and analysed for further research and policy development.
These conversations contribute to an evolving Planetary Threat-and-Repair Archive: a record of emerging concepts, cross-sector tensions, and provisional models for adaptation.
Unsecurities Lab is doing two things at once:
- First, it is building an evidence base about how foresight and security sensemaking actually work under post-truth, posthuman conditions. The workshops generate analysable traces (recorded dialogue, artefacts, and protocol outputs) that show where assumptions form, where trust collapses, how disagreement is handled, and what stabilisation measures emerge. This creates a dataset that can be compared across cycles and against other methods (for example, prompting an AI versus prompting artists versus conventional expert framing) to test which processes produce the most transparent, actionable, and trustworthy anticipations.
- Second, it is a cohort skills intervention. Participants leave with strengthened reflexes for operating in high-uncertainty environments: practical capabilities in cross-disciplinary sensemaking, communicating uncertainty, interrogating provenance and mediation, recognising model drift and representational risk, and translating complex, distributed agency into workable protocols. In this way the Lab grows a community able to exercise sound, resilient judgement when evidence itself is unknowable for reasons of opacity or complexity.