Rust Developer - AI Training

New York | Full-time | External
Compensation: Negotiable
Work Mode: Remote
Engagement Type: Independent Contractor
Schedule: Full-Time or Part-Time Contract
Language Requirement: Fluent English

Role Overview

We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. This project focuses on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels. The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.

What You'll Do

- Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
- Conduct fact-checking using trusted public sources and authoritative references
- Conduct accuracy testing by executing code and validating outputs using appropriate tools
- Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
- Assess code quality, readability, algorithmic soundness, and explanation quality
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are

- You hold a BS, MS, or PhD in Computer Science or a closely related field
- You have significant real-world experience in software engineering or related technical roles
- You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)
- You can independently solve HackerRank or LeetCode problems at the Medium and Hard levels
- You have contributed to well-known open-source projects, including merged pull requests
- You have significant experience using LLMs while coding and understand their strengths and failure modes
- You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws

Nice-to-Have Specialties

- Prior experience with RLHF, model evaluation, or data annotation work
- A track record in competitive programming
- Experience reviewing code in production environments
- Familiarity with multiple programming paradigms or ecosystems
- Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

- You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
- Your feedback improves the correctness, robustness, and clarity of AI coding outputs
- You deliver reproducible evaluation artifacts that strengthen model performance