# Algorithm Engineer, LLM Safety First – AI Safety
## Responsibilities:
- Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows
- Define and enforce safety, security, and compliance policies across applications
- Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs
- Implement privacy and PII protection: redaction, obfuscation, minimisation, and data residency controls
- Build red-teaming pipelines, automated safety tests, and risk monitoring tools
- Continuously improve guardrails to address new attack vectors, policies, and regulations
- Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks
- Collaborate with Product, Compliance, Security, Data, and Support to ship safe features
## Requirements:
- Master’s or PhD in Machine Learning, AI, Computer Science, or a related field
- Research track record (ICLR, NeurIPS, ACL, ICML) is a plus
- Hands-on experience building LLM/agent guardrails (policy design, refusal rules, filtering, permissions)
- Practical experience with hallucination mitigation and safety evaluation
- Proven ability to ship AI safety frameworks to production
- Strong coding skills in Python (Java a plus); expertise in PyTorch/TensorFlow/JAX
- Understanding of privacy, PII handling, data governance, and risk frameworks
- Interest in crypto, Web3, and financial systems
- Self-driven, with strong ownership and delivery skills
- Excellent communication and collaboration abilities
**The crypto industry is evolving rapidly, offering new opportunities in blockchain, Web3, and remote crypto roles; don’t miss your chance to be part of it.**
Apply to this job