Machine Learning Engineer

Chicago · Full-time
1.4m - 1.8m / yr
Staff Machine Learning Engineer, Oncology Foundation Model

Passionate about precision medicine and advancing the healthcare industry? Recent advancements in underlying technology have finally made it possible for AI to impact clinical care in a meaningful way. Tempus's proprietary platform connects an entire ecosystem of real-world evidence to deliver real-time, actionable insights to physicians, providing critical information about the right treatments for the right patients, at the right time.

We are seeking an experienced and highly skilled Staff Machine Learning Engineer with deep expertise in large-scale multimodal model systems engineering to join our dynamic AI team. You will play a pivotal role in designing, building, and optimizing the foundational data infrastructure that powers Tempus's most advanced generative AI models. Your work will directly enable the training and deployment of robust, production-ready multimodal systems that analyze complex data types (such as genomics, pathology images, radiology scans, and clinical notes) to improve patient care, optimize clinical workflows, and accelerate life-saving medical research. This is a critical, high-impact position for driving the practical application of cutting-edge AI to revolutionize healthcare.

Focus

Your primary focus will be to architect, build, and maintain the critical data infrastructure supporting our large multimodal generative models. This includes managing the entire lifecycle of vast datasets, from ingestion and processing of diverse training data to the integration and retrieval of extensive knowledge sources used to augment model capabilities. You will be building the data backbone that enables our AI to learn from Tempus's rich real-world evidence.

Key Responsibilities

As a technical leader in this space, you will:
• Architect and build sophisticated data processing workflows responsible for ingesting, processing, and preparing multimodal training data that integrate seamlessly with large-scale distributed ML training frameworks and infrastructure (GPU clusters).
• Develop strategies for efficient, compliant data ingestion from diverse sources, including internal databases, third-party APIs, public biomedical datasets, and Tempus's proprietary data ecosystem.
• Utilize, optimize, and contribute to frameworks specialized for large-scale ML data loading and streaming (e.g., MosaicML Streaming, Ray Data, Hugging Face Datasets); a streaming sketch follows this list.
• Collaborate closely with infrastructure and platform teams to leverage and optimize cloud-native services (primarily GCP) for performance, cost-efficiency, and security.
• Engineer efficient connectors and data loaders for accessing and processing information from diverse knowledge sources, such as knowledge graphs, internal structured databases, biomedical literature repositories (e.g., PubMed), and curated ontologies (see the connector sketch below).
• Optimize data storage for efficient large-scale training and knowledge access.
• Orchestrate, monitor, and troubleshoot complex data workflows using tools such as Airflow and Kubeflow Pipelines (see the DAG sketch below).
• Establish robust monitoring, logging, and alerting systems for data pipeline health, data drift detection, and data quality assurance, providing feedback loops for continuous improvement (see the drift-check sketch below).
• Analyze and optimize data I/O performance bottlenecks considering storage systems, network bandwidth, and compute resources.
• Actively manage and seek optimizations for the costs associated with storing and processing massive datasets in the cloud.
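To make the data-loading responsibility concrete, here is a minimal sketch of the shard-and-stream pattern that MosaicML Streaming supports; the bucket paths, column schema, and sample source are hypothetical placeholders, not Tempus's actual pipeline.

```python
# Hypothetical shard-and-stream sketch using MosaicML Streaming.
from streaming import MDSWriter, StreamingDataset
from torch.utils.data import DataLoader

# Placeholder schema: one pathology image plus its clinical note per sample.
COLUMNS = {"slide_png": "png", "clinical_note": "str", "patient_id": "str"}

def write_shards(samples, out="gs://example-bucket/mds/train"):
    # MDSWriter packs samples into compressed shards suited to cloud streaming.
    with MDSWriter(out=out, columns=COLUMNS, compression="zstd") as writer:
        for sample in samples:  # each sample is a dict matching COLUMNS
            writer.write(sample)

def make_loader(batch_size=32):
    # StreamingDataset fetches shards lazily and shuffles across them, so a
    # GPU cluster never needs the full dataset on local disk.
    dataset = StreamingDataset(
        remote="gs://example-bucket/mds/train",
        local="/tmp/mds-cache",
        shuffle=True,
        batch_size=batch_size,
    )
    return DataLoader(dataset, batch_size=batch_size, num_workers=8)
```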
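For the knowledge-source connectors, a minimal PubMed client built on NCBI's public E-utilities endpoints; the query term and plain-text abstract handling are illustrative choices only.

```python
# Hypothetical PubMed connector sketch using NCBI's public E-utilities API.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(term: str, retmax: int = 20) -> list[str]:
    # esearch returns PubMed IDs (PMIDs) matching a query term.
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def fetch_abstracts(pmids: list[str]) -> str:
    # efetch retrieves the records for the given PMIDs as plain text.
    resp = requests.get(
        f"{EUTILS}/efetch.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids),
                "rettype": "abstract", "retmode": "text"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    pmids = search_pubmed("oncology foundation model")  # illustrative query
    print(fetch_abstracts(pmids[:5]))
```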
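For orchestration, a minimal Airflow sketch of an extract, validate, and shard workflow; the task bodies are stubs, and the schedule, tags, and URIs are assumptions for illustration.

```python
# Hypothetical Airflow DAG sketch: extract -> validate -> shard (stub tasks).
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False,
     tags=["multimodal-data"])
def multimodal_ingest():
    @task
    def extract() -> list[str]:
        # Stub: discover newly landed source files and return their URIs.
        return ["gs://example-bucket/raw/batch-001.parquet"]

    @task
    def validate(uris: list[str]) -> list[str]:
        # Stub: schema and quality checks run here; failures raise and alert.
        return uris

    @task
    def shard(uris: list[str]) -> None:
        # Stub: convert validated files into training shards.
        print(f"sharding {len(uris)} files")

    shard(validate(extract()))

multimodal_ingest()
```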
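And for drift detection, a toy check that compares a live feature sample against a training-time reference with a two-sample Kolmogorov-Smirnov test from SciPy; the threshold and alert hook are placeholder choices, not a prescribed production design.

```python
# Toy drift check: two-sample KS test between reference and live features.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference: np.ndarray, current: np.ndarray,
                alpha: float = 0.01) -> bool:
    # A small p-value suggests the live distribution has shifted away from
    # the training-time reference.
    stat, p_value = ks_2samp(reference, current)
    drifted = p_value < alpha
    if drifted:
        # Placeholder alert; production code would emit a metric or page.
        print(f"drift detected: KS={stat:.3f}, p={p_value:.2e}")
    return drifted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(0.0, 1.0, size=5_000)   # reference distribution
    live = rng.normal(0.3, 1.0, size=5_000)  # shifted live sample
    check_drift(ref, live)
```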
Required Skills And Experience
• Master's degree in Computer Science, Artificial Intelligence, Software Engineering, or a related field, with a strong academic background and a focus on AI data engineering.
• Proven track record (8+ years of industry experience) in designing, building, and operating large-scale data pipelines and infrastructure in a production environment.
• Strong experience working with massive, heterogeneous datasets (TBs+) and…