AI Safety Opportunities

Fellowships, jobs, research programs, and other opportunities in AI safety. Find your next role in making AI go well.

Open Opportunities


course, ai-safety, governance, technical

Introduction to Digital Minds (Cambridge Digital Minds)

Cambridge Digital Minds

Closes 15 April 2026

Online

Introduction to Digital Minds is a free 8-week online course from Cambridge Digital Minds exploring the perceptions and plausibility of digital minds — and what this may mean for ethics, governance, and society. The course covers theories of consciousness, testing for mental states in AI systems, interactions with AI safety and ethics, practical frameworks for working under uncertainty, and approaches to governing digital minds. The course is designed for researchers, engineers, policymakers, lawyers, and anyone with some grounding in AI who wants to engage seriously with these questions. Participants should have a background equivalent to completing a BlueDot course or similar. The time commitment is approximately 2 hours of reading plus 1.5 hours of facilitated group discussion per week. Applications close 15 April 2026. The first cohort starts the week of 25 May and runs through the week of 13 July, with introductory calls the week of 18 May. Spots are limited this round.

funding, ai-safety, governance

SFF 2026 Main Round

Survival and Flourishing Fund

Closes 22 April 2026

Global (for-profits: US/UK/Canada/Australia)

The SFF-2026 S-Process Main Round is a major grant round distributing an estimated $14-28M across three tracks. SFF funds organisations working on humanity's long-term survival and flourishing, with strong alignment to AI safety and governance work. Applicants may apply to the Main Round plus at most one themed round. A Speculation Grant (automatically submitted with the rolling application) is required for guaranteed eligibility. IP requirement: grantees are by default obliged to release their work as open-source, open-access (CC-BY), and under permissive software licences (MIT and Apache 2.0).

Main Track ($10-20M, 6 recommenders): Broad-based funding for survival and flourishing initiatives, including AI safety, existential risk mitigation, biosecurity, and policy/institutional work.

Freedom Track ($2-4M, 3 recommenders): Protecting meaningful freedom of speech, individual liberties (privacy, property, association), and maintaining sovereignty for self-governing territories. Focused on avoiding concentrations of authority and supporting AI uses that strengthen freedom.

Fairness Track ($2-4M, 3 recommenders): Empowering the global majority, addressing monopolistic practices, defusing conflicts arising from unfair discrimination, and fostering inclusivity in AI governance and access.

fellowship, biosecurity, ai-safety, technical, paid

AIxBio Research Fellowship Summer 2026

ERA Cambridge

Closes 27 April 2026

Cambridge, UK

ERA Cambridge, in partnership with the Cambridge Biosecurity Hub, is running the AIxBio Research Fellowship — a 10-week, fully-funded research programme in Cambridge, UK, from July 6 to September 11, 2026. The fellowship focuses specifically on mitigating biosecurity risks amplified by frontier AI, sitting at the intersection of AI capabilities and biological risk. Fellows work on independent research projects with expert mentors matched to their area, supported by dedicated research managers. Example research directions include metagenomic surveillance and anomaly detection, capability evaluations of AI systems relevant to CBRN risk, adversarial robustness and red-teaming for sensitive queries, know-your-customer tooling for synthesis providers, guardrails for biodesign software, and machine unlearning to reduce retained hazardous knowledge. The programme also features skill-building workshops, talks from leading researchers and policymakers, and an AIxBio symposium at the end for disseminating research. Fellows receive a competitive stipend, lodging in Cambridge, meals during work hours, and full visa and travel support. The fellowship welcomes researchers from around the world with backgrounds in AI/ML, biology, policy, or related fields. Applications close April 27, 2026.

fellowship, ai-safety, fieldbuilding, paid

Generator Residency Summer 2026

Kairos

Closes 27 April 2026

Berkeley, California (in-person)

The Generator Residency is a three-month program run by Kairos and Constellation for people looking to build capacity and infrastructure across the AI safety ecosystem. The Summer 2026 cohort runs from June 15 to August 28, with possible extensions through November 2026. Residents pitch, build, and ship projects while receiving mentorship and support transitioning into full-time roles. The residency is based at Constellation's Berkeley office and is open to any background — technical, operations, policy, engineering, design, or writing. It includes a $6,000/month stipend, covered housing and travel, and project budgets for execution. Cohort size is 15-30 residents. J-1 visa sponsorship is available for international applicants. Example projects include organising workshops and conferences, building recruiting pipelines, creating travel grant programs, developing strategic awareness tools, and running AI communications fellowships. Over 80% of past residents have transitioned to full-time roles in AI safety. Applications close April 27, 2026.

fellowship, ai-safety, technical, governance, fieldbuilding

Astra Fellowship September 2026

Constellation

Closes 3 May 2026

Berkeley, California (in-person)

The Astra Fellowship is a fully funded, 5-month in-person program at Constellation's Berkeley research centre, where 300+ AI safety professionals work every week. Fellows pair with mentors across empirical AI safety research, strategy, and governance. The fellowship includes a monthly stipend, approximately $15,000/month in compute expenses for research fellows, visa support for international applicants, and placement services into AI safety organisations. Over 80% of the first cohort now work full-time in AI safety at organisations including Redwood Research, METR, Anthropic, OpenAI, and Google DeepMind. Constellation is also partnering with OpenAI to host OpenAI Safety Fellows alongside Astra Fellows. Applications for the September 2026 cohort close 3 May 2026. Prior AI safety experience is not required — the program is looking for people with technical or domain expertise who are motivated to reduce risks from advanced AI. Astra also offers incubation support for fellows who want to start their own organisations.

Past Opportunities

fellowship, ai-safety, governance, technical, paid

ERA Summer Fellowship 2026

ERA Cambridge

Closed 12 April 2026

Cambridge, UK

ERA Cambridge is running its Summer 2026 Fellowship — a 10-week, fully-funded research programme in Cambridge, UK, from July 6 to September 11, 2026. The programme brings together approximately 30 fellows from around the world to work on concrete research projects with mentorship from expert AI safety and governance researchers. The fellowship has three tracks: AI Governance, Technical AI Governance, and Technical AI Safety. ERA is particularly interested in fellows who bridge technical and policy research under a single project. Research topics span a wide range — from studying models that scheme under differential oversight to designing tamper-evident hardware for international AI treaties. Fellows receive a competitive stipend, with meals during working hours, transport, visas, and lodging all covered. The programme also hosts 30+ events over the fellowship period. ERA welcomes talented individuals at any career stage — researchers, entrepreneurs, and policymakers — who are motivated to contribute to AI safety and governance research.

job, ai-safety

Arcadia Impact Business Operations Manager

Arcadia Impact

Closed 8 April 2026

London, UK

Arcadia Impact is hiring a Business Operations Manager to support their AI safety and governance training and research programmes. The role covers finance (budget tracking, accounting, invoicing), legal and compliance (contracts, charity law, visa sponsorship), internal systems (CRM, data management, infosec), and general operations improvement. Candidates should have 2–3 years' experience in finance, accounting, or a related field, with strong attention to detail and an optimisation mindset. Salary is £50,000–£55,000 plus 3% pension. The role is primarily in-person from their London office, starting June 2026. UK visa sponsorship is available. Benefits include a £5,000 annual professional development budget, 25 days paid leave plus UK bank holidays, and free office meals and snacks.

job, ai-safety, early-career

Arcadia Impact Operations Associate

Arcadia Impact

Closed 8 April 2026

London, UK

Arcadia Impact is hiring an Operations Associate — an entry-level generalist operations role supporting their AI safety and governance programmes. Responsibilities span people operations (employee documentation, HR systems, hiring processes), programme support (recruitment, participant queries, efficiency improvements), and general operations (documentation, AI tools for productivity, metrics tracking). Ideal candidates have some operations experience (even informal, like running student groups), a generalist mindset, and strong communication skills. Salary is £44,000–£46,000 plus 3% pension. The role is primarily in-person from their London office, starting June 2026. UK visa sponsorship is available. Benefits include a £5,000 annual professional development budget, 25 days paid leave plus UK bank holidays, and free office meals and snacks. Selection process: application → interview → work task → offer (expected mid-May).

fellowship, ai-safety, technical, early-career

LASR Labs Summer 2026 Fellowship

LASR Labs

Closed 30 March 2026

London, UK

LASR Labs (London AI Safety Research Labs) is running its Summer 2026 cohort — a 13-week, full-time, in-person technical AI safety research programme based at the London Initiative for Safe AI. The programme runs from July to October 2026. Participants work in teams of three to four, supervised by an experienced AI safety researcher, to take a research project from proposal to publication. The programme produces an academic-style paper and accompanying blog post per team. Previous supervisors have come from Google DeepMind, UK AISI, and leading UK universities. LASR has a strong publication track record: four out of five groups in the 2023 cohort had papers accepted at NeurIPS workshops or ICLR, and all five papers from Summer 2024 were accepted at NeurIPS workshops. Alumni have gone on to work at UK AISI, Apollo Research, Leap Labs, and Open Philanthropy. Participants receive an £11,000 stipend plus food, office space, and travel. The programme is designed for people looking to join technical AI safety teams in the next year, or those hoping to publish in academia. Typical applicants have ML engineering experience, strong quantitative skills, and research ability — a PhD in a relevant field is common but not required. Research areas include: science of deep learning, multi-agent systems and collusion, alignment theory in RL, deception in LLMs, interpretability, scalable oversight, capability evals, and AI control.

Know of an opportunity?

Help the ANZ AI safety community by letting us know about fellowships, jobs, and programs we should list here.

Get in touch