Support data extraction, transformation, and loading (ETL) processes, ensuring data accuracy and integrity using tools such as Python, SSIS, and Apache NiFi.
Assist in designing, coding, and optimizing stored procedures and other database objects. Participate in performance tuning and data consistency efforts (SQL, Python, SSIS).
Modify, test, and deploy .NET and Java applications, including JAR installations on production servers.
Follow predefined steps for application and data migrations across on-prem and AWS environments.
Create and maintain documentation for data mapping, transformation rules, approvals, and deployment procedures.
Work closely with developers, architects, and system administrators to troubleshoot issues and support smooth operational processes.
Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Prior experience (internship or academic project) with database systems, data warehouses, or ETL processes is preferred.
Proficiency in SQL and database concepts.
Exposure to Python or other scripting languages.
Familiarity with .NET or Java programming.
Understanding of version control systems (e.g., Git, Bitbucket).
Familiarity with Apache Kafka is a plus.