Data Architect - Pune
Job Description
Experience: 8 to 13 years
Locations: Pune, Bangalore, Gurgaon
Job Role: Data Architect / Big Data Architect
About the Role:
We are seeking a highly skilled Big Data Architect / Data Architect with expertise in PySpark to design, implement, and optimize large-scale data solutions. The ideal candidate will have extensive experience in big data technologies, data architecture, and cloud-based platforms, ensuring efficient data processing and analytics.
Key Responsibilities:
Architect and develop scalable, high-performance data platforms using PySpark.
Design and implement data lake, data warehouse, and real-time data processing solutions.
Lead the development of end-to-end data pipelines, ensuring data integrity and performance.
Collaborate with data engineers, analysts, and business stakeholders to define data strategies.
Optimize ETL workflows and manage big data ecosystems (Hadoop, Spark, Databricks, etc.).
Ensure data governance, security, and compliance with industry standards.
Provide technical leadership and mentor junior team members.
Required Skills & Qualifications:
8 to 13 years of experience in big data architecture and data engineering.
Expertise in PySpark, Hadoop, Spark, and distributed computing frameworks.
Strong knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra).
Experience with cloud-based data platforms (AWS, Azure, GCP).
Proficiency in data modeling, data warehousing, and ETL design.
Strong problem-solving and analytical skills with experience in high-volume data processing.
Preferred Qualifications:
Experience with Kafka, Airflow, and real-time data processing.
Exposure to machine learning and AI-driven analytics.
Certification in AWS/Azure/GCP Big Data solutions is a plus.