Senior Data/Software Engineer (Backend)

Senior Data/Software Engineer - Backend | Hybrid (SF / Santa Monica / Glendale / Seattle) | Contract / W2 Only

*There are no Corp-to-Corp options or Visa Sponsorship available for this position*

Optomi, in partnership with a leading enterprise in the entertainment and media industry, is seeking a Senior Data/Software Engineer for a hybrid role based in San Francisco, Santa Monica, Glendale, or Seattle. This engineer will join a data platforms organization responsible for designing and scaling large-scale, cloud-based data systems that support real-time and batch analytics for high-impact, data-driven products.

This role sits within a data engineering team focused on building next-generation big data platforms. You will work closely with analytics engineers, data scientists, product managers, and platform teams to develop highly reliable, scalable data pipelines and infrastructure.

What the right candidate will enjoy:
• Long-term contract with the potential to extend and/or convert!
• Hands-on ownership of large-scale data platforms and pipelines!
• Working with modern big data technologies in a cloud-native environment!
• High-impact data work supporting business-critical analytics and products!
• Collaborative team culture with strong technical ownership!

Experience of the right candidate:
• Bachelor’s degree in Computer Science, Information Technology, or a related STEM field.
• 5+ years of professional experience in data engineering or backend engineering.
• Strong programming experience in Scala, Python, and/or Java.
• 3+ years of hands-on experience with big data technologies such as Apache Spark, Hive, Airflow, Kafka, and AWS big data services.
• Solid understanding of distributed data systems, data modeling, and data architecture.
• Experience building scalable, reliable, cloud-based data pipelines and platforms.
• Strong communication skills and experience collaborating across data and engineering teams.
Preferred Qualifications:
• Experience working with large-scale datasets (terabyte to petabyte range).
• Hands-on experience with cloud infrastructure and orchestration tools such as Terraform, Kubernetes (K8s), Spinnaker, and IAM.
• Experience with batch and streaming data pipelines.
• Familiarity with data lake and data warehouse architectures.
• Exposure to Spring/Java–based services or API development.

Responsibilities of the right candidate:
• Design, build, and maintain large-scale batch and streaming data pipelines.
• Develop and optimize Spark- and Hive-based data processing jobs.
• Build and support cloud-based big data infrastructure on AWS.
• Ensure data quality, reliability, performance, and observability across platforms.
• Partner with data scientists, analytics teams, and product stakeholders to deliver data solutions.
• Improve platform scalability, performance, and cost efficiency.
• Contribute to engineering best practices including CI/CD, automated testing, and code reviews.

What we’re looking for:
• Proven experience building and operating large-scale data platforms in cloud environments.
• Strong sense of ownership over data systems and pipelines.
• Ability to design, troubleshoot, and optimize complex distributed data workflows.
• Passion for data, scalability, and modern data technologies.
• Agile mindset and commitment to continuous improvement.