Role name: Security Engineer/AI Agent Builder
Work site: New York, NY; Pittsburgh, PA; Lake Mary, FL
Duration: 12+ Months
Job Description:
Senior-level role requiring 8+ years of experience.
The Security Engineer / AI Agent Builder is responsible for designing, securing, and deploying agentic AI systems that operate safely within enterprise environments. This role blends security engineering, threat modeling, and applied AI system development—ensuring that intelligent agents operate reliably, securely, and in alignment with organizational policies.
Secure Agentic AI System Design
• Architect security controls (identity, network, runtime isolation, sandboxing, policy enforcement) for agent-based AI systems.
• Develop AI‑specific threat models addressing agent misbehavior, adversarial prompts, data leakage, model tampering, and supply‑chain risks.
• Evaluate third‑party AI tools, APIs, and agent frameworks for security compliance and risk.
Build & Deploy AI Agents
• Design and implement autonomous AI agents using LLMs, APIs, orchestration frameworks, and multi‑agent systems.
• Build agent behavior logic including tool‑use, routing, planning, fallbacks, and guardrails.
• Prototype AI agents and iterate on them in production, refining reliability, safety, and output quality based on real‑world usage.
Production‑Grade Security Engineering
• Develop security monitoring pipelines for agent executions and automate detection of anomalous or harmful agent behavior.
• Implement secure MLOps practices—including model lineage tracking, training data protection, and integrity controls.
• Perform vulnerability assessments, penetration testing, and red‑teaming of AI agents and underlying infrastructure.
Cross‑Functional Collaboration
• Work closely with AI research, product, engineering, cloud, and cybersecurity teams to ensure agents are performant, safe, and compliant.
• Translate business workflows into agent behaviors through scoping, discovery sessions, and requirements definition.
Standards, Governance & Best Practices
• Establish secure development standards for agentic AI systems and contribute to enterprise AI governance frameworks.
• Publish internal best practices for agent security, including prompt‑security guidelines, LLM threat mitigation, and safe‑tooling patterns.
Required Skills & Qualifications
Technical Skills:
• 8+ years in cybersecurity engineering, application security, or cloud security.
• Hands‑on experience with LLMs, AI/ML pipelines, vector databases, orchestration frameworks (AutoGen, CrewAI, LangGraph, etc.).
• Strong programming background (Python required; Java/C++ a plus).
• Expertise in threat modeling, identity & access management, secure API design, and network segmentation.
• Familiarity with adversarial ML, model robustness testing, data poisoning defenses, and model evaluation.
• Experience deploying secure workloads in AWS/Azure/GCP.
Preferred Skills:
• Experience building autonomous agents or multi‑agent systems.
• Knowledge of AI governance, safety, and responsible AI frameworks.
• Background in cryptography, secure CI/CD pipelines, MLOps, and privacy‑preserving ML.
Regards,
Raahul Bansiwaal
linkedin.com/in/rahul-b-14b5a4168
Office: (201) 479-2186 EXT: 444
rahulb@net2source.com
www.net2source.com
270 Davidson Ave, Suite 704, Somerset, NJ 08873, USA
Knowledge is Power.