We’re looking for a seasoned ETL Developer IV to lead the design and execution of scalable ETL solutions in a cloud-native environment. This role focuses on building and optimizing data pipelines using AWS services, ensuring high performance, reliability, and automation across data platforms.
Responsibilities
- Design, develop, and optimize ETL processes supporting enterprise-scale data solutions.
- Build and maintain data pipelines using AWS services (Glue, S3, Redshift, Lambda, etc.).
- Analyze complex data systems and propose efficient technical solutions.
- Manage ETL job scheduling and ensure operational compliance.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
- Perform root cause analysis and provide production support for data environments.
- Create technical designs and functional specifications.
- Implement best practices for code quality, monitoring, and automation.
Required Skills
- 6+ years of ETL development experience with strong SQL skills.
- 3–4 years of hands-on Python or PySpark development in AWS Glue environments (a representative example appears below).
- Proficient in AWS data services such as Glue, S3, Redshift, Lambda, Step Functions, RDS, and CloudWatch, as well as Apache Iceberg table formats.
- Experience with data architecture, big data processing, and observability tools.
- Strong problem-solving skills and accountability in project delivery.
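Representative Pipeline Example
To give a sense of the day-to-day work, below is a minimal sketch of the kind of AWS Glue PySpark job this role would build and maintain. The database, table, and bucket names (sales_db, raw_orders, s3://example-curated-bucket/orders/) are placeholders for illustration only, not references to an actual environment.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw dataset registered in the Glue Data Catalog (placeholder names).
raw_orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="raw_orders",
)

# Light transformation: select, rename, and retype columns.
curated_orders = ApplyMapping.apply(
    frame=raw_orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_ts", "timestamp"),
        ("amount", "double", "order_amount", "double"),
    ],
)

# Write curated output to S3 as Parquet for downstream consumption.
glue_context.write_dynamic_frame.from_options(
    frame=curated_orders,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```

In practice, jobs like this are scheduled via Glue triggers or Step Functions, monitored through CloudWatch, and feed curated data into Redshift or Iceberg tables, which maps directly to the responsibilities listed above.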