About The Role
We’re looking for security-savvy power users to help stress-test, evaluate, and harden AI systems. You’ll probe for vulnerabilities, craft adversarial prompts, and provide expert feedback that directly improves AI safety and robustness.
• Organization: Alignerr
• Type: Hourly Contract
• Compensation: $15–$75/hour
• Location: Remote
• Commitment: 10–40 hours/week
What You’ll Do
• Conduct red‑teaming exercises to identify security weaknesses in AI systems
• Craft adversarial prompts and edge‑case scenarios to test model guardrails
• Evaluate AI outputs for safety, bias, and policy compliance
• Document vulnerabilities, exploits, and unexpected behaviors in structured reports
• Collaborate with engineering teams to recommend mitigations and improvements
• Stay current on emerging AI security threats, jailbreak techniques, and best practices
• Help define and refine security evaluation rubrics and testing protocols
Who You Are
• Strong understanding of cybersecurity concepts, threat modeling, or penetration testing
• Hands‑on experience with AI/ML systems, LLMs, or prompt engineering
• Creative, analytical thinker who enjoys breaking things to make them better
• Excellent written communication and documentation skills
• Comfortable working independently on task‑based, asynchronous assignments
• Familiarity with the OpenClaw ecosystem or similar open‑source AI platforms is a plus
• Background in infosec, ethical hacking, or AI safety research is a plus but not required
Why Join Us
• Work on the cutting edge of AI security with top research labs
• Directly shape the safety and reliability of AI products used by millions
• Freelance perks: full autonomy, flexible schedule, and global collaboration
• Build expertise in one of the fastest‑growing domains in tech
• Potential for ongoing work, expanded scope, and contract extension
Application Process (Takes 10–15 min)
• Submit your resume
• Complete a short screening
• Project matching and onboarding