Overview
Job ID: 8036
Experience: 4 - 6 years
Positions: 2
Job Summary
This role provides staff augmentation support; resources will assist with non-core activities.
Job Title: Data Engineer - Spark / GCP
Location: Remote / Hybrid (as per Target office policy)
Experience Level: 4-6 years
Start Date: Immediate

About the Role
We are seeking a highly skilled Data Engineer (Contractor) to support the development and optimization of data pipelines within our Google Cloud Platform (GCP) environment.
The ideal candidate will have strong experience building Spark pipelines (Scala preferred),
working with BigQuery, and writing efficient Unix scripts to support large-scale data processing.
You will collaborate with data engineering teams to design, build, and maintain high-performance
data pipelines that enable analytics, reporting, and business insights.

Key Responsibilities
Design, develop, and deploy data ingestion and transformation pipelines using Apache Spark (Scala).
Work within Google Cloud Platform (GCP) components including Dataproc, BigQuery & Cloud Storage.
Write complex SQL queries for data analysis, validation, and transformation in BigQuery.
Develop Unix shell scripts for automation, data movement, and pipeline orchestration.
Optimize and troubleshoot data pipelines for performance, scalability, and reliability.
Collaborate with data engineers, analysts, and business stakeholders to understand requirements and ensure data quality.
Contribute to code reviews, documentation, and CI/CD integration of data workflows.
Required Skills & Qualifications
4-6 years of hands-on experience in data engineering or related roles.
Proven experience developing Spark applications in Scala.
Strong experience working in Google Cloud Platform (GCP) ecosystem — including Dataproc, BigQuery, Cloud Storage.
Proficient in SQL (especially BigQuery SQL dialects).
Strong experience with Unix/Linux scripting for data automation.
Familiarity with version control (Git) and CI/CD processes.
Excellent problem-solving and debugging skills.
Strong communication and documentation abilities.
Nice-to-Have Skills
Experience with Python for data processing or automation.
Knowledge of data governance, data quality, or metadata management best practices.
Education
Bachelor’s degree in Computer Science, Engineering, or a related technical discipline (or equivalent work experience).