Job Description
Job title: ETL Developer- Data Pipeline
Company: MUFG
Job description: About MUFG Global Service (MGS)
MUFG Bank, Ltd. is Japan’s premier bank, with a global network spanning more than 40 markets. Outside of Japan, the bank offers an extensive scope of commercial and investment banking products and services to businesses, governments, and individuals worldwide. MUFG Bank’s parent, Mitsubishi UFJ Financial Group, Inc. (MUFG), is one of the world’s leading financial groups. Headquartered in Tokyo and with over 360 years of history, the Group has about 120,000 employees and offers services including commercial banking, trust banking, securities, credit cards, consumer finance, asset management, and leasing. The Group aims to be the world’s most trusted financial group by collaborating closely across its operating companies, flexibly responding to all the financial needs of its customers, serving society, and fostering shared and sustainable growth for a better world. MUFG’s shares trade on the Tokyo, Nagoya, and New York stock exchanges.
Position details
We are seeking a Data Integration/ETL Developer with hands-on experience designing and implementing data warehouses, data lakes, and data marts for large financial institutions using Python, Informatica, AWS, and related database technologies such as DB2 and Oracle. In this role you will focus on enabling data aggregation from disparate sources and systems. Responsibilities include scrubbing data; writing test scripts; performing quality control on data and data storage; enabling data access; identifying or developing solutions to funnel data into a single platform; and developing and documenting processes for data validation and normalization, archiving, and backup.
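To give a flavor of the validation and normalization work described above, here is a minimal Python sketch; the column names and rules are illustrative assumptions, not an actual MUFG schema.

```python
import pandas as pd

# Hypothetical business columns expected from an upstream feed.
REQUIRED_COLUMNS = ["account_id", "trade_date", "amount", "currency"]

def validate_and_normalize(raw: pd.DataFrame) -> pd.DataFrame:
    """Scrub a raw extract before it is loaded into a warehouse staging area."""
    # Fail fast if the source feed is missing expected columns.
    missing = [c for c in REQUIRED_COLUMNS if c not in raw.columns]
    if missing:
        raise ValueError(f"Source feed is missing columns: {missing}")

    df = raw.copy()
    # Normalize types and formats so downstream joins behave consistently.
    df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["currency"] = df["currency"].str.strip().str.upper()

    # Drop rows that failed conversion and de-duplicate on the business key.
    df = df.dropna(subset=["trade_date", "amount"])
    df = df.drop_duplicates(subset=["account_id", "trade_date"])
    return df
```

In practice this kind of rule would live in an Informatica mapping or a Python pre-load step; the sketch only illustrates the validate-then-normalize pattern.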
Roles and Responsibilities
- Develop and maintain complex ETL mappings, workflows, and Unix shell scripts in a normalized/denormalized data warehouse and data mart environment, based on technical specifications and other supporting documentation, using Informatica PowerCenter, Unix shell scripts, Python, advanced SQL, and Autosys.
- Implement processes and logic to extract, transform, and distribute data across one or more data stores from a wide variety of sources (a minimal sketch of this pattern follows this list).
- Optimize the data integration platform to maintain performance as data volumes grow.
- Handle data analysis requests, internal controls, performance tuning, and impact analysis requests; additionally, work on new AWS cloud-based Enterprise Data Platform build-out tasks as assigned.
- Convert physical data integration models and other design specifications into source code.
- Ensure the high quality and optimal performance of data integration systems in order to meet business requirements.
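As referenced above, the sketch below shows an incremental extract/transform/load step in Python. SQLite stands in for Oracle or DB2 purely for portability, and the table and column names are hypothetical; the actual pipelines in this role are built in Informatica PowerCenter and scheduled via Autosys.

```python
import sqlite3

def copy_incremental(src_path: str, tgt_path: str, high_water_mark: str) -> int:
    """Extract new rows from a source store, apply a light transform, and load them."""
    src = sqlite3.connect(src_path)
    tgt = sqlite3.connect(tgt_path)
    try:
        # Extract: only rows newer than the last successful load (incremental pull).
        rows = src.execute(
            "SELECT txn_id, account_id, amount, txn_date "
            "FROM source_txns WHERE txn_date > ?",
            (high_water_mark,),
        ).fetchall()

        # Transform: round amounts to two decimal places.
        transformed = [
            (txn_id, account_id, round(amount, 2), txn_date)
            for txn_id, account_id, amount, txn_date in rows
        ]

        # Load: upsert into the target data mart table inside one transaction.
        with tgt:
            tgt.executemany(
                "INSERT OR REPLACE INTO mart_txns "
                "(txn_id, account_id, amount, txn_date) VALUES (?, ?, ?, ?)",
                transformed,
            )
        return len(transformed)
    finally:
        src.close()
        tgt.close()
```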
Job Requirements:
- Bachelor’s Degree (or foreign equivalent degree) in Information Technology, Information Systems, Computer Science, Software Engineering, or a related field. Experience in the financial services or banking industry is preferred.
- 5+ years of hands-on expertise in developing Informatica ETL mappings and workflows in complex large-scale data warehouse and business intelligence projects
- 5+ years of experience designing relational and dimensional databases using Oracle, SQL Server, and DB2, and writing and optimizing complex SQL queries that join across multiple tables and union multiple datasets, required (see the sketch after this list).
- 5+ years of Unix shell scripting experience, including script runner behaviors, scripted fields, listeners, etc.
- 2+ years of ETL experience (preferably Informatica).
- 3+ years of hands-on experience in Autosys Enterprise Scheduler.
- Experience in end-to-end design and build of near-real-time and batch data pipelines
- Familiarity with data architecture, data integration, data governance, and data lineage concepts. Good understanding of Data Warehouse concepts and relational databases.
- Experience working in Agile environments
- Exposure to Business Intelligence/Reporting tools such as Tableau, Cognos, and Business Objects is a plus.
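As a small illustration of the join-and-union style SQL called out above, the sketch below combines current and archived position rows and aggregates them by branch. The table and column names are hypothetical, and SQLite is used only to keep the example self-contained.

```python
import sqlite3

# Hypothetical query: union current and archived fact rows,
# then join the result to an account dimension table.
POSITIONS_BY_BRANCH = """
SELECT d.branch_name,
       SUM(f.market_value) AS total_market_value
FROM (
    SELECT account_id, market_value FROM positions_current
    UNION ALL
    SELECT account_id, market_value FROM positions_archive
) AS f
JOIN dim_account AS d
  ON d.account_id = f.account_id
GROUP BY d.branch_name
ORDER BY total_market_value DESC;
"""

def positions_by_branch(db_path: str):
    """Run the aggregate against a reporting database."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(POSITIONS_BY_BRANCH).fetchall()
```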
Expected salary:
Location: Bangalore, Karnataka
Job date: Fri, 02 Feb 2024 08:57:48 GMT