As a Data Engineer, you will build scalable, high-performance big data applications using Scala and Apache Spark. You will design, develop, and maintain Spark applications that meet security, performance, and scalability standards. Your role covers application architecture, writing and debugging code, and working with the tools and libraries of the Scala ecosystem.
Primary Responsibilities
- Design and develop Spark applications in Scala within the Hadoop ecosystem, adhering to customer requirements and industry best practices (a minimal sketch follows this list).
- Debug, profile, and optimize existing code for maximum speed, scalability, and efficiency.
- Conduct testing to ensure applications are bug-free, secure, and compliant with industry standards.
- Collaborate with cross-functional teams, including designers, testers, and system engineers, throughout the application development lifecycle.
- Keep applications scalable and current with evolving Spark and ecosystem releases.
- Manage version control and deployments through CI/CD pipelines.
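
For context, below is a minimal sketch of the kind of Spark batch application this role involves. The job name, input paths, column names, and output location are illustrative assumptions, not details of an actual project:

```scala
// Minimal Spark batch job in Scala: read raw events, aggregate, write results.
// All paths and column names are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventAggregationJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventAggregationJob")
      .getOrCreate()

    // Read raw event data from HDFS (placeholder path)
    val events = spark.read.parquet("hdfs:///data/raw/events")

    // Count events per user per day
    val dailyCounts = events
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy(col("user_id"), col("event_date"))
      .agg(count("*").as("event_count"))

    // Write results partitioned by date for efficient downstream reads
    dailyCounts.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/daily_event_counts")

    spark.stop()
  }
}
```

A job like this would typically be packaged with sbt and submitted to a YARN cluster via spark-submit.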
Required Skills
- Proficiency in Scala and Spark for Big Data applications.
- Strong experience with the Hadoop ecosystem (e.g., HDFS, YARN, Hive).
- Expertise in debugging and optimizing code for performance (see the join-optimization sketch after this list).
- Familiarity with CI/CD practices for code deployment and version control.
- Collaborative mindset and effective communication skills across teams.
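
As one concrete illustration of the debugging and optimization work described above, the sketch below shows a common Spark performance fix: broadcasting a small lookup table so a join avoids shuffling the large side. The paths, table names, and the assumption that `countries` is small enough to broadcast are all hypothetical:

```scala
// Replacing a shuffle-heavy join with a broadcast join in Spark (Scala).
// Dataset paths and the "small lookup table" assumption are illustrative.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinOptimizationExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("JoinOptimizationExample")
      .getOrCreate()

    val transactions = spark.read.parquet("hdfs:///data/transactions")  // large fact table
    val countries    = spark.read.parquet("hdfs:///data/ref/countries") // small lookup table

    // The broadcast hint ships a copy of `countries` to every executor,
    // so `transactions` is joined in place instead of being shuffled.
    val enriched = transactions.join(broadcast(countries), Seq("country_code"))

    enriched.write.mode("overwrite").parquet("hdfs:///data/enriched/transactions")
    spark.stop()
  }
}
```

Tuning work like this usually starts from the Spark UI: a join stage dominated by shuffle read/write is the typical signal that a broadcast hint or repartitioning is worth trying.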