What makes Cognizant a unique place to work? The combination of rapid growth and an international, innovative environment! This creates many opportunities for people like YOU - people with an entrepreneurial spirit who want to make a difference in this world. At Cognizant, together with your colleagues from all around the world, you will collaborate on creating solutions for the world's leading companies and help them become more flexible, more innovative, and more successful. This is your chance to be part of the success story.
Position Summary:
• Build robust batch-processing applications that produce high-quality, consistent, structured data for consumer and business product transactions, using Apache open-source technologies to meet financial crime compliance regulatory requirements. Data-scanning capabilities built on the batch ingestion framework will be used to detect financial fraud, supporting leadership in the Financial Crime domain in Australia.
• Apply systems thinking to data content architecture design, building a metrics layer that handles massive-scale data and robust data models that produce high-quality, consistent, structured data on public cloud services.
• Strong customer-complaints domain knowledge, enabling the handling of high complaint volumes and the generation of reports that help relevant teams understand customer needs and issues and deliver quality service.
• High proficiency in modern batch data-processing technologies, with the ability to translate complex business requirements into processing capabilities including ingestion, enrichment, and transformation across cloud services.
• Build the business's data-collection systems and processing pipelines for internal and external data from source systems such as Oracle, SQL Server, and IBM DB2, to facilitate deeper analysis and reporting into customer needs.
Mandatory Skills:
• 12+ years of professional experience as a data engineer, with extensive knowledge of the banking, financial services, insurance, and mortgage domains.
• 10+ years of hands-on experience with the Hadoop stack, including HDFS, YARN, MapReduce, HBase, Hive, Sqoop, Pig, Phoenix, ZooKeeper, Flume, Oozie, and Hue.
• 5+ years of hands-on experience designing and implementing streaming-centric applications using modern frameworks including Spark Structured Streaming, Kafka, Sqoop, Oozie, Databricks, and Snowflake.
• 5+ years of experience with NoSQL databases such as HBase, MongoDB, and Phoenix.
• Strong experience with code management and build tools such as Git, Bitbucket, GitHub, SVN, Maven, and SBT.
• Hands-on experience with CI/CD deployment tools such as Jenkins, Bamboo, and Azure DevOps.
• Proficient in relevant programming languages such as Java, Scala, Python, and SQL.
• Proficient in Azure cloud services including ADLS Gen2, HDInsight, Azure SQL, Azure Data Factory, Cosmos DB, Azure Blob Storage, and Azure Databricks.
• Proficiency in streaming architecture patterns within a modern data architecture.
• Proficiency in analyzing business requirements for data architecture with both SQL and NoSQL databases.
• Working knowledge of data warehousing and data lakes in both on-premises and cloud environments.
Roles and Responsibilities:
• Design and implement automated, large-scale, high-performance distributed data-processing systems (batch and streaming) to drive business growth and improve the product experience.
• Work in a variety of settings to build systems that collect, manage, and convert raw data (including but not limited to customer, product, financial, and operational data) into usable information for business stakeholders, data scientists, and business analysts to interpret.
• Assist in the development of data governance policies and procedures to protect customers' personal data and help the business adhere to government regulations and security standards.
• Evangelize high-quality software engineering practices toward building data infrastructure and pipelines at scale.
• Lead data engineering projects, architecting solutions and ensuring pipelines are reliable, efficient, testable, and maintainable.
• Design data content models for optimal storage and retrieval that meet critical product and business requirements.
• Collaborate with business stakeholders, data scientists, and software engineers to understand data needs and develop solutions to meet those needs.
• Design and implement data models that are optimized for performance and scalability.
• Monitor data quality and take corrective action when necessary.
• Stay up to date on new technologies and approaches in the data engineering field and recommend ways to improve the data architecture.
• Mentor junior members of the team and provide guidance on best practices.
• Participate in code reviews and provide feedback on design and implementation.
• Help troubleshoot production issues when they arise.
Salary Range: >$100,000
Next Steps:
If you would like to express interest in this role, please click on the APPLY button now. Thank you for your interest in this opportunity with us. For a complete list of opportunities with Cognizant, visit http://www.cognizant.com/careers
Cognizant is committed to providing Equal Employment Opportunities. The successful candidate will be required to undergo a background check.
#LI-CTSAPAC