Responsibilities:
- Develop and maintain data pipelines implementing ETL processes in a big data environment
- Take responsibility for Hadoop development and implementation
- Apply hands-on experience in Python, Spark, PySpark, and Hive
- Understanding of data warehousing and data modeling techniques
- Knowledge of data lake architecture
Requirements:
- Bachelor's degree in Computer Science or related field
- 5 years of experience with big data technologies such as Hadoop, Spark, Hive, and Pig
- Proficiency in programming languages such as Java, Python, and Scala
- Knowledge of data modeling and data warehousing
- Excellent problem-solving and analytical skills
- Strong written and verbal communication skills
Employee Status: Full-Time Employee
Shift: Day Job
Travel: No
Job Posting: Feb 02, 2023
About Cognizant