Public Security Policy Researcher
Description:
• Conduct rigorous policy research mapping the landscape of current and proposed AI governance approaches, identifying gaps and fragilities in existing frameworks
• Assess societal vulnerabilities to advanced AI systems across critical infrastructure, physical safety, information ecosystems, and social institutions
• Develop comprehensive AI disaster scenario plans adaptable to different jurisdictional contexts and governance structures
• Research how to translate complex technical concepts and risk scenarios into clear, actionable policy recommendations for decision-makers
• Draft compelling policy briefs, reports, and blog posts communicating AI governance challenges to diverse audiences
• Develop realistic simulation materials and tabletop exercises that stress-test emergency response protocols for AI system failures
• Identify early warning indicators and potential cascade failures in AI governance structures
• Map institutional dependencies and coordination challenges in cross-jurisdictional disaster response
• Analyze existing civil defense and disaster response frameworks to assess how well they can, or cannot, be adapted to novel AI threats
• Support PSP advocacy initiatives and strategic relationship development with policymakers
Requirements:
• Master's degree in Public Policy, Public Administration, Political Science, International Relations, Science and Technology Studies, or related field; OR bachelor's degree with substantial relevant professional experience
• Demonstrated ability to conduct thorough policy analysis and produce high-quality written outputs for both academic and policy audiences
• Exceptional writing abilities with experience translating complex technical concepts into accessible language for non-specialist audiences
• Strong grasp of policy processes, political systems, and strategic approaches to effecting policy change
• Working knowledge of international relations frameworks and governance structures relevant to emerging technology management
• Demonstrated understanding of AI safety concerns, systemic risks, and the sociotechnical implications of increasingly capable AI systems
• Self-directed with ability to manage complex projects in a remote team environment while maintaining high attention to detail
• Experience in crisis response planning, disaster preparedness, or emergency management
• Background in cybersecurity, biosecurity, critical infrastructure protection, or other relevant risk domains
• Understanding of organizational response limits during acute crises
• Familiarity with both technical and governance aspects of AI alignment challenges
• Experience conducting vulnerability assessments of complex sociotechnical systems
• Knowledge of civil defense planning, continuity of government operations, or supply chain resilience
• Background in multi-stakeholder coordination across government agencies, private sector, and civil society
• Some coursework, experience, or interest in the hard sciences
• Experience with scenario planning methodologies and strategic foresight techniques
Benefits:
• Good benefits package for U.S.-based employees