This role involves leading cloud transformation initiatives, modernizing analytics platforms, and improving the agility of data delivery. You will apply hands-on expertise in Snowflake and AWS technologies to guide the adoption of cloud solutions, establish best practices, and drive the development of data engineering architecture.
Primary Responsibilities
- Define and implement Data Engineering architecture strategy, best practices, and roadmap.
- Develop ETL pipelines using Python, Snowflake, and IDMC (Informatica's Intelligent Data Management Cloud); see the first sketch after this list.
- Manage batch and streaming data processing using Kafka (a streaming sketch follows this list).
- Design and build data flows, mapping source systems and their process flows.
- Assemble large, complex data sets that meet both functional and non-functional business requirements.
- Conduct code reviews, assist in optimization, and troubleshoot issues.
- Administer Snowflake features such as resource monitors, role-based access control (RBAC), virtual warehouse sizing, query performance tuning, zero-copy cloning, and Time Travel (illustrated in the administration sketch after this list).
- Work with AWS services such as S3 for data management.
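To make the pipeline responsibility concrete, here is a minimal sketch of a Python-to-Snowflake load step, assuming the snowflake-connector-python package; the account, credentials, stage, and table names are all placeholders, not details from this posting.

```python
import snowflake.connector

# Placeholder connection parameters; real values would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",           # hypothetical service user
    password="...",            # never hard-code credentials in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load staged files (e.g. landed in S3 behind an external stage) into a raw table.
    cur.execute(
        "COPY INTO raw_orders FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Transform the raw rows into a curated table.
    cur.execute("""
        INSERT INTO curated.orders
        SELECT order_id, customer_id, TO_DATE(order_ts) AS order_date, amount
        FROM raw_orders
        WHERE amount IS NOT NULL
    """)
finally:
    conn.close()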
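For the streaming responsibility, a minimal consumer sketch using the kafka-python package; the topic name, broker address, and batch size are hypothetical, and the warehouse flush is left as a commented stub.

```python
import json
from kafka import KafkaConsumer  # kafka-python package

# Hypothetical topic and broker; deserialization assumes JSON-encoded events.
consumer = KafkaConsumer(
    "orders.events",
    bootstrap_servers=["broker:9092"],
    group_id="etl-loader",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

batch = []
for message in consumer:
    batch.append(message.value)
    # Micro-batch: flush to the warehouse every 500 events.
    if len(batch) >= 500:
        # load_to_snowflake(batch)  # stub: would reuse the connection pattern above
        batch.clear()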
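And for the Snowflake administration items, a sketch of representative statements issued through the same connector; every object name is a placeholder, and statements such as the resource monitor require elevated (e.g. ACCOUNTADMIN) privileges.

```python
import snowflake.connector

# Same connection pattern as the first sketch; parameters are placeholders.
conn = snowflake.connector.connect(account="my_account", user="admin_user", password="...")

ADMIN_STATEMENTS = [
    # Resize a virtual warehouse to match workload demand.
    "ALTER WAREHOUSE ETL_WH SET WAREHOUSE_SIZE = 'LARGE'",
    # Zero-copy clone a database for a test environment (no storage duplicated).
    "CREATE DATABASE analytics_test CLONE analytics",
    # Time Travel: query a table as it existed one hour ago.
    "SELECT COUNT(*) FROM curated.orders AT(OFFSET => -3600)",
    # Cap credit usage with a resource monitor.
    "CREATE RESOURCE MONITOR etl_monitor WITH CREDIT_QUOTA = 100 "
    "TRIGGERS ON 100 PERCENT DO SUSPEND",
    # RBAC: grant read access on the curated schema to an analyst role.
    "GRANT SELECT ON ALL TABLES IN SCHEMA curated TO ROLE analyst",
]

with conn.cursor() as cur:
    for stmt in ADMIN_STATEMENTS:
        cur.execute(stmt)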
Required Experience
- 12+ years of experience in Data Engineering.
- Proficiency in Python and in developing ETL pipelines in Snowflake.
- Experience with IDMC and AWS technologies.
- Strong understanding of Snowflake’s advanced features and data management techniques.