The Ada Lovelace Institute is hiring a Senior Researcher to lead research into industry practices and emerging technologies. Amongst other projects, this researcher will lead a series of projects exploring the effectiveness of responsible AI and ethics/accountability practices and demystifying emerging technologies, including:
- AI auditing practices as a method for assessing and inspecting algorithmic systems
- Impact assessments as a method for identifying and documenting the potential risks of algorithmic systems
- Transparency mechanisms like transparency standards or datasheets
- Emerging generative AI governance mechanisms, like red-teaming or bias testing
- The societal impacts of emerging technologies, such as general-purpose AI and synthetic data
This role is an excellent opportunity to oversee a series of projects that will explore the practical, on-the-ground experiences of responsible AI practitioners and produce research that will feed into contemporary AI legislative and policy debates.
Working with the Associate Director, the Senior Researcher will be responsible for developing and executing a research agenda that explores the practices industry can implement to improve accountability and demystifies the limitations, opportunities, and potential societal impacts of emerging technologies.
There are three potential projects this role may immediately oversee:
- A project to explore lessons learned from a local government’s attempt to require algorithmic bias audits of employment tools.
- A project with a law firm to study how a third-party agency can develop and implement algorithmic auditing practices.
- A project exploring generative AI governance approaches.
The deadline for applications is 9:00 BST on Tuesday 20 June 2023. Find out more and apply via the website.