• Develop and optimize Python-based ETL pipelines for batch and incremental data migration.
• Load data extracted from COBOL Enscribe/SQL MP sources (delivered as CSV files) into PostgreSQL.
• Implement data transformations, cleansing, validation, and enrichment using Python, SQL, and Pandas.
• Design and tune high-performance SQL queries, indexing strategies, and database procedures.
• Map legacy data models to target schemas and resolve mapping exceptions with technical teams.
• Create automated validation and reconciliation scripts to ensure data quality and accuracy.
• Support QA with test cases, test data, and verification workflows.
• Devise and automate backup, restoration, and purge strategies that conform to compliance and regulatory requirements.
• Deploy ETL components via CI/CD and manage migration waves, dry runs, rollbacks, and cutovers.
• Produce clear documentation and communicate effectively with stakeholders.
• Strong Python development skills (Pandas, NumPy, SQLAlchemy, pyodbc, custom ETL frameworks).
• Expert SQL: joins, window functions, CTEs, partitioning, indexing.
• Experience with ETL tools (Airflow, Informatica, Glue, Talend, SSIS, etc.).
• Hands-on data migration experience with large datasets (>100M rows).
• Solid understanding of RDBMS concepts, modelling, constraints, and referential integrity.
• Strong analytical, problem-solving, communication, and documentation skills.
• Ability to work under tight timelines during migration and cutover periods.
Education: Bachelor’s or Master’s in Computer Science, Information Systems, Data Engineering, or a related field.
Cloud/ETL/Python/DB certifications are a plus.