Senior PySpark Developer
Job Description
About The Job
At Minutes to Seconds, we match people who have great skills with tailor-fitted jobs to achieve well-deserved success. We know how to match people to the right job roles to create that perfect fit. This changes the dynamics of business success and catalyzes the growth of individuals. Our aim is to provide both our candidates and clients with great opportunities and the ideal fit every time. We have partnered with the best people and the best businesses in Australia in order to achieve success on all fronts. We're passionate about doing an incredible job for our clients and job seekers.
Our success is determined by the success of individuals at the workplace.
We would love the opportunity to work with YOU!!
Minutes to Seconds is looking for a Senior PySpark Developer for a contract position.
Requirements
5+ years of experience in PySpark, including Hadoop, SQL, and other big data technologies. Databricks knowledge is a plus.
We are seeking a skilled PySpark Developer to join our dynamic team. The ideal candidate will have a strong background in big data processing and analytics using PySpark. Additionally, knowledge of Databricks is highly desirable. The PySpark Developer will be responsible for designing, implementing, and optimizing data pipelines and ensuring data quality and performance.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions.
- Optimize and tune PySpark jobs for performance and scalability.
- Implement data quality checks and ensure data integrity.
- Troubleshoot and resolve issues related to data processing and performance.
- Work with Databricks to manage and optimize data workflows (nice to have).
- Develop and maintain documentation for data pipelines and processes.
- Stay updated with the latest trends and technologies in big data and analytics.
Nice to Have:
- Experience with Databricks for managing and optimizing data workflows.
- Knowledge of data streaming frameworks such as Kafka.
Experience Range:
5 - 8 years
Educational Qualifications:
B.Tech/B.E.
Skills Required:
PySpark, Databricks, Hadoop, SQL, Kafka, Data Engineering.
Looking for an immediate joiner.
Please send your resume to [Confidential Information].