Senior Java Spark Developer

Long Finch Technologies

Austin, TX

Posted On: Jul 02, 2025

Job Overview

Job Type

Contract - Corp-to-Corp

Experience

10 - 25 Years

Salary

Depends on Experience

Work Arrangement

On-Site

Travel Requirement

0%

Required Skills

  • Apache Spark
  • Java
  • Microservices
  • HDFS
  • Kafka
  • Java Developer
Job Description

Job Title: Senior Java Spark Developer
Location: Austin, TX (local candidates only)
Experience Required: 9+ years

Job Summary:

We are seeking a Senior Java Spark Developer with expertise in Java, Apache Spark, and the Cloudera Hadoop ecosystem to design and develop large-scale data processing applications. The ideal candidate will have strong hands-on experience in Java-based Spark development, distributed computing, and performance optimization for big data workloads.

Key Responsibilities:

Java & Spark Development:

  1. Develop, test, and deploy Java-based Apache Spark applications for large-scale data processing.
  2. Optimize and fine-tune Spark jobs for performance, scalability, and reliability.
  3. Implement Java-based microservices and APIs for data integration.

Big Data & Cloudera Ecosystem:

  1. Work with Cloudera Hadoop components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
  2. Design and implement high-performance data storage and retrieval solutions.
  3. Troubleshoot and resolve performance bottlenecks in Spark and Cloudera platforms.

Software Development & Deployment:

  1. Implement version control (Git) and CI/CD pipelines (Jenkins, GitLab) for Spark applications.
  2. Deploy and maintain Spark applications in cloud or on-premises Cloudera environments.

Required Skills & Experience:

  • 8+ years of experience in application development, with a strong background in Java and Big Data processing.
  • Strong hands-on experience in Java, Apache Spark, and Spark SQL for distributed data processing.
  • Proficiency in Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
  • Experience building and optimizing ETL pipelines for large-scale data workloads.
  • Hands-on experience with SQL & NoSQL databases like HBase, Hive, and PostgreSQL.
  • Strong knowledge of data warehousing concepts, dimensional modeling, and data lakes.
  • Proven ability to troubleshoot and optimize Spark applications for high performance.
  • Familiarity with version control tools (Git, Bitbucket) and CI/CD pipelines (Jenkins, GitLab).
  • Exposure to real-time data streaming technologies such as Kafka, Flume, and NiFi, and workflow orchestration tools such as Oozie.
  • Strong problem-solving skills, attention to detail, and ability to work in a fast-paced environment.

Job ID: LF250006


Posted By

Tanishq Trivedi