AI Security Engineer

Doha Tax Free2 days agoFull-time External
Negotiable
Overview We're seeking a technically focused AI Security Engineer to design, implement, and manage the security of AI and ML systems across data, model, and deployment layers. The role combines deep expertise in cybersecurity, DevSecOps, and applied machine learning, with hands-on experience building resilient, privacy-preserving, and production-safe AI solutions. Key Responsibilities • Model Security & Hardening: Implement adversarial training, gradient masking, watermarking, and integrity verification to protect ML models. • Privacy & Data Protection: Apply differential privacy, secure aggregation, and federated learning techniques to safeguard sensitive data. • MLOps & Infrastructure: Secure containerised and cloud-native ML environments (Kubernetes, Docker, Terraform, MLflow, Vault, CI/CD). • Secure Deployment: Harden inference APIs with encryption, rate-limiting, authentication, and runtime monitoring. • Threat Modelling & Adversarial Testing: Conduct ATT&CK-for-ML-aligned threat modelling and red-team style testing for adversarial, poisoning, and prompt-injection attacks. • Monitoring & Observability: Implement drift detection, performance telemetry, and anomaly detection using Prometheus, Grafana, or ELK. • Cross-Functional Collaboration: Work closely with data scientists, ML engineers, and security teams to embed secure design principles across the AI lifecycle. Technical Stack & Tools • Languages & Frameworks: Python, Bash, Go, PyTorch, TensorFlow, Hugging Face Transformers, ONNX • Cloud & DevSecOps: AWS (SageMaker, ECR, IAM), Azure ML, GCP Vertex AI, GitHub Actions, Terraform, Vault • Automation & Integration: , Zapier, n8n, Power Automate, LangChain, Dialogflow, Rasa • Monitoring & Security Ops: Prometheus, Grafana, ELK Stack, Vault, Kubernetes security controls Preferred Experience • 4–7 years' experience in security engineering, DevSecOps, or data security • 2–4 years' hands-on experience securing ML or LLM workloads in production environments • Exposure to adversarial ML, LLM security (prompt injection, data leakage testing), and privacy-preserving techniques • Familiarity with cloud-native ML tooling (MLflow, Kubeflow, Vertex AI, SageMaker) • Strong understanding of AI governance, compliance, and secure model deployment frameworks Soft Skills • Analytical and structured problem-solving • Excellent stakeholder communication across security and data teams • Ability to translate complex technical risk into business impact • Curiosity and a continuous learning mindset in fast-evolving AI security domains Notes on Experience Expectations AI security as a discipline has rapidly evolved since ~2018. Candidates with a strong foundation in cybersecurity and cloud engineering, and 2–5 years of hands-on AI/ML security work, will be well-suited for this role — even if they do not meet the longer "AI experience" requirements literally. Overview We're seeking a technically focused AI Security Engineer to design, implement, and manage the security of AI and ML systems across data, model, and deployment layers. The role combines deep expertise in cybersecurity, DevSecOps, and applied machine learning, with hands-on experience building resilient, privacy-preserving, and production-safe AI solutions. Key Responsibilities • Model Security & Hardening: Implement adversarial training, gradient masking, watermarking, and integrity verification to protect ML models. • Privacy & Data Protection: Apply differential privacy, secure aggregation, and federated learning techniques to safeguard sensitive data. • MLOps & Infrastructure: Secure containerised and cloud-native ML environments (Kubernetes, Docker, Terraform, MLflow, Vault, CI/CD). • Secure Deployment: Harden inference APIs with encryption, rate-limiting, authentication, and runtime monitoring. • Threat Modelling & Adversarial Testing: Conduct ATT&CK-for-ML-aligned threat modelling and red-team style testing for adversarial, poisoning, and prompt-injection attacks. • Monitoring & Observability: Implement drift detection, performance telemetry, and anomaly detection using Prometheus, Grafana, or ELK. • Cross-Functional Collaboration: Work closely with data scientists, ML engineers, and security teams to embed secure design principles across the AI lifecycle. Technical Stack & Tools • Languages & Frameworks: Python, Bash, Go, PyTorch, TensorFlow, Hugging Face Transformers, ONNX • Cloud & DevSecOps: AWS (SageMaker, ECR, IAM), Azure ML, GCP Vertex AI, GitHub Actions, Terraform, Vault • Automation & Integration: , Zapier, n8n, Power Automate, LangChain, Dialogflow, Rasa • Monitoring & Security Ops: Prometheus, Grafana, ELK Stack, Vault, Kubernetes security controls Preferred Experience • 4–7 years' experience in security engineering, DevSecOps, or data security • 2–4 years' hands-on experience securing ML or LLM workloads in production environments • Exposure to adversarial ML, LLM security (prompt injection, data leakage testing), and privacy-preserving techniques • Familiarity with cloud-native ML tooling (MLflow, Kubeflow, Vertex AI, SageMaker) • Strong understanding of AI governance, compliance, and secure model deployment frameworks Soft Skills • Analytical and structured problem-solving • Excellent stakeholder communication across security and data teams • Ability to translate complex technical risk into business impact • Curiosity and continuous learning mindset in fast-evolving AI security domains Notes on Experience Expectations AI security as a discipline has rapidly evolved since ~2018. Candidates with a strong foundation in cybersecurity and cloud engineering, and 2–5 years of hands-on AI/ML security work, will be well-suited for this role — even if they do not meet the longer "AI experience" requirements literally.