The Lead Engineer is responsible for the development, integration, and implementation of the processes, procedures, software tools, and custom software that support both the Delivery Engineering teams and multiple development teams.

Responsibilities
Design and implement scalable data pipelines to ingest, process, and transform large datasets.
Build and maintain data warehouses and data lakes using tools such as Apache Hadoop, Amazon S3, and Google BigQuery.
Develop and maintain ETL (Extract, Transform, Load) processes to move data between systems; a brief illustrative sketch follows this list.
Develop and maintain automated testing and deployment pipelines using containerization and orchestration tools such as Docker and Kubernetes.
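For illustration only, a minimal sketch of the kind of ETL work described above; the source file, field names, and target table here are hypothetical examples, not part of this role's actual stack.

    import csv
    import sqlite3

    # Extract: read raw records from a CSV export (hypothetical source file).
    def extract(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Transform: normalize field names and derive an integer amount in cents.
    def transform(rows):
        return [
            {
                "order_id": row["OrderID"].strip(),
                "total_cents": int(float(row["Amount"]) * 100),
            }
            for row in rows
        ]

    # Load: write the transformed records into a target table
    # (SQLite stands in for a real warehouse here).
    def load(rows, db_path="warehouse.db"):
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id TEXT PRIMARY KEY, total_cents INTEGER)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO orders (order_id, total_cents) "
            "VALUES (:order_id, :total_cents)",
            rows,
        )
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        load(transform(extract("orders.csv")))

In practice the same extract/transform/load shape scales up to Spark jobs and cloud warehouse targets such as BigQuery; the sketch only shows the structure of the work, not the tooling this team uses.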
Required Qualifications/Skills
Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience in software development, with a focus on data engineering and DevOps.
Strong knowledge of big data technologies such as Apache Hadoop, Apache Spark, and NoSQL databases.
Experience with cloud-based services such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
Strong understanding of Linux operating systems and scripting languages such as Bash or Python.
Experience with ETL tools such as Informatica PowerCenter or Talend.