Key Responsibilities:
- Lead the design and implementation of big data solutions using Spark, PySpark, Hive, and Scala.
- Collaborate with cross-functional teams to develop scalable data architectures and pipelines.
- Optimize data processing workflows for high performance and reliability.
- Mentor and guide junior engineers, fostering a culture of continuous learning and improvement.
- Ensure data security and compliance with industry standards.

Minimum Qualifications:
- Bachelor's degree in Technology (B.Tech) or a related field.
- Proven experience with big data technologies such as Spark, PySpark, Hive, and Scala.
- Strong analytical and problem-solving skills.

Preferred Qualifications:
- Experience with Kafka for real-time data processing.
- Demonstrated leadership in managing technology projects.
- Ability to work effectively in a collaborative team environment.
- Track record of delivering high-quality data solutions.

Good-to-have skills: Flask, Pandas, Data Analysis, Spring Boot, Microservices