AI Safety Opportunities

Fellowships, jobs, research programs, and other opportunities in AI safety. Find your next role in making AI go well.

Open Opportunities


job · ai-safety

Arcadia Impact Business Operations Manager

Arcadia Impact

Closes 8 April 2026

London, UK

Arcadia Impact is hiring a Business Operations Manager to support their AI safety and governance training and research programmes. The role covers finance (budget tracking, accounting, invoicing), legal and compliance (contracts, charity law, visa sponsorship), internal systems (CRM, data management, infosec), and general operations improvement. Candidates should have 2–3 years' experience in finance, accounting, or a related field, with strong attention to detail and an optimisation mindset. Salary is £50,000–£55,000 plus 3% pension. The role is primarily in-person from their London office, starting June 2026. UK visa sponsorship is available. Benefits include a £5,000 annual professional development budget, 25 days paid leave plus UK bank holidays, and free office meals and snacks.

job · ai-safety · early-career

Arcadia Impact Operations Associate

Arcadia Impact

Closes 8 April 2026

London, UK

Arcadia Impact is hiring an Operations Associate — an entry-level generalist operations role supporting their AI safety and governance programmes. Responsibilities span people operations (employee documentation, HR systems, hiring processes), programme support (recruitment, participant queries, efficiency improvements), and general operations (documentation, AI tools for productivity, metrics tracking). Ideal candidates have some operations experience (even informal, like running student groups), a generalist mindset, and strong communication skills. Salary is £44,000–£46,000 plus 3% pension. The role is primarily in-person from their London office, starting June 2026. UK visa sponsorship is available. Benefits include a £5,000 annual professional development budget, 25 days paid leave plus UK bank holidays, and free office meals and snacks. Selection process: application → interview → work task → offer (expected mid-May).

fellowship · ai-safety · governance · technical · paid

ERA Summer Fellowship 2026

ERA Cambridge

Closes 12 April 2026

Cambridge, UK

ERA Cambridge is running its Summer 2026 Fellowship — a 10-week, fully funded research programme in Cambridge, UK, from 6 July to 11 September 2026. The programme brings together approximately 30 fellows from around the world to work on concrete research projects with mentorship from expert AI safety and governance researchers. The fellowship has three tracks: AI Governance, Technical AI Governance, and Technical AI Safety. ERA is particularly interested in fellows who bridge technical and policy research within a single project. Research topics span a wide range — from studying models that scheme under differential oversight to designing tamper-evident hardware for international AI treaties. Fellows receive a competitive stipend, with meals during working hours, transport, visas, and lodging all covered. The programme also hosts 30+ events over the fellowship period. ERA welcomes talented individuals at any career stage — researchers, entrepreneurs, and policymakers — who are motivated to contribute to AI safety and governance research.

funding · ai-safety · governance

SFF 2026 Main Round

Survival and Flourishing Fund

Closes 22 April 2026

Global (for-profits: US/UK/Canada/Australia)

The SFF-2026 S-Process Main Round is a major grant round distributing an estimated $14–28M across three tracks. SFF funds organisations working on humanity's long-term survival and flourishing, with strong alignment to AI safety and governance work. Applicants may apply to the Main Round plus at most one themed round. A Speculation Grant (automatically submitted with the rolling application) is required for guaranteed eligibility.

IP requirement: by default, grantees must release their work as open-source, open-access (CC-BY), and under permissive software licences (MIT and Apache 2.0).

Tracks:

- Main Track ($10–20M, 6 recommenders): broad-based funding for survival and flourishing initiatives, including AI safety, existential risk mitigation, biosecurity, and policy/institutional work.
- Freedom Track ($2–4M, 3 recommenders): protecting meaningful freedom of speech and individual liberties (privacy, property, association), and maintaining sovereignty for self-governing territories. Focused on avoiding concentrations of authority and supporting AI uses that strengthen freedom.
- Fairness Track ($2–4M, 3 recommenders): empowering the global majority, addressing monopolistic practices, defusing conflicts arising from unfair discrimination, and fostering inclusivity in AI governance and access.

Past Opportunities

fellowship · ai-safety · technical · early-career

LASR Labs Summer 2026 Fellowship

LASR Labs

Closed 30 March 2026

London, UK

LASR Labs (London AI Safety Research Labs) is running its Summer 2026 cohort — a 13-week, full-time, in-person technical AI safety research programme based at the London Initiative for Safe AI. The programme runs from July to October 2026. Participants work in teams of three to four, supervised by an experienced AI safety researcher, to take a research project from proposal to publication. The programme produces an academic-style paper and accompanying blog post per team. Previous supervisors have come from Google DeepMind, UK AISI, and leading UK universities. LASR has a strong publication track record: four out of five groups in the 2023 cohort had papers accepted at NeurIPS workshops or ICLR, and all five papers from Summer 2024 were accepted at NeurIPS workshops. Alumni have gone on to work at UK AISI, Apollo Research, Leap Labs, and Open Philanthropy. Participants receive an £11,000 stipend plus food, office space, and travel. The programme is designed for people looking to join technical AI safety teams in the next year, or those hoping to publish in academia. Typical applicants have ML engineering experience, strong quantitative skills, and research ability — a PhD in a relevant field is common but not required. Research areas include: science of deep learning, multi-agent systems and collusion, alignment theory in RL, deception in LLMs, interpretability, scalable oversight, capability evals, and AI control.

Know of an opportunity?

Help the ANZ AI safety community by letting us know about fellowships, jobs, and programs we should list here.

Get in touch