Location: Sunnyvale, CA (day 1 onsite; hybrid role)
Duration: 1 year
Experience/Skills Required:
1. Bachelor's degree in Computer Science, Information Technology, or a related field, and 5 years of experience in computer programming, software development, or a related area
2. 3+ years of solid Java and 2+ years of experience in the design, implementation, and support of big data solutions in Hadoop using Hive, Spark, Drill, Impala, and HBase
3. Hands-on experience with Unix, GCP, and relational databases; experience with @Scale is a plus
4. Strong communication and problem-solving skills
• What are the top 3 skills needed/required?
Google Cloud Platform work, Spark coding experience, Java coding experience
• What skills and/or experience would separate the top candidate?
o What makes a candidate profile stand out to you?
Experience building ETL jobs on big data in a cloud environment; compliance experience
• What will this person’s day-to-day responsibilities be?
Write ETL jobs, develop systems for analytical workloads, optimize jobs on Google Cloud Platform, Spark coding, and debugging (a representative sketch follows below)
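For illustration only, a minimal Scala/Spark sketch of the kind of ETL work described above; all bucket paths, table names, and columns are hypothetical placeholders, not details from this posting:

// Minimal Spark ETL sketch in Scala. Paths, tables, and columns
// below are hypothetical, not from the posting.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CustomerEventsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-events-etl")
      .getOrCreate()

    // Read raw events from a (hypothetical) GCS bucket.
    val raw = spark.read.parquet("gs://example-bucket/raw/customer_events/")

    // Basic cleanup and a daily aggregation per customer.
    val daily = raw
      .filter(col("event_ts").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy(col("customer_id"), col("event_date"))
      .agg(count("*").as("event_count"))

    // Write partitioned output back to the data lake.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("gs://example-bucket/curated/customer_events_daily/")

    spark.stop()
  }
}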
• What is the project this person will be working on?
o How will they contribute to the project?
Customer data lake compliance and state-level compliance support
• Additional Job Details:
o Experience in a programming language (Java, Scala)
o Experience using distributed architectures such as Hadoop and Spark
• Preferred Qualifications:
o Knowledge of data engineering and machine learning
o Understanding of privacy compliance regulations such as CPRA
o Experience working with large data sets in the GCP cloud
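For a sense of the privacy-compliance work referenced above (CPRA, customer data lake compliance), here is a hedged sketch of a deletion-request job in Scala/Spark; the paths, tables, and join key are assumptions for illustration, not details from this posting:

// Sketch of a privacy-compliance style job: removing records for
// customers with verified deletion requests (e.g., under CPRA).
// All paths, tables, and the customer_id key are hypothetical.
import org.apache.spark.sql.SparkSession

object DeletionRequestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cpra-deletion-sketch")
      .getOrCreate()

    // Current snapshot of a (hypothetical) data lake table.
    val profiles = spark.read.parquet("gs://example-bucket/curated/customer_profiles/")

    // Verified deletion requests, keyed by customer_id (assumed schema).
    val requests = spark.read.parquet("gs://example-bucket/compliance/deletion_requests/")

    // Keep only rows with no matching deletion request.
    val retained = profiles.join(requests, Seq("customer_id"), "left_anti")

    // Rewrite to a new location; a production job would more likely
    // target a table format with ACID delete support than do a blind
    // overwrite of the source path.
    retained.write
      .mode("overwrite")
      .parquet("gs://example-bucket/curated/customer_profiles_rewritten/")

    spark.stop()
  }
}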