Data Engineer
Weekday Bangalore
This role is for one of Weekday's clients.
We are looking for a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. You will work with both structured and unstructured data, ensuring seamless data ingestion, transformation, and storage to support analytics, machine learning, and business intelligence initiatives. You will collaborate closely with data scientists, analysts, and software engineers to develop high-performance data solutions.
Requirements
Key Responsibilities:
- Design, develop, and manage ETL/ELT pipelines for processing large-scale datasets efficiently.
- Work with SQL and NoSQL databases to ensure optimized data storage and retrieval.
- Develop and maintain data lakes and data warehouses using cloud-based solutions such as AWS, GCP, or Azure.
- Implement data quality, integrity, and governance best practices for validation and monitoring.
- Optimize data workflows and tune performance to enhance query speed and system efficiency.
- Collaborate with cross-functional teams to integrate data solutions into various applications and services.
- Implement real-time and batch data processing using tools like Apache Spark, Kafka, or Flink.
- Work with cloud-based data services (BigQuery, Redshift, Snowflake) to build scalable and cost-effective solutions.
- Automate data pipeline deployment using CI/CD and infrastructure-as-code tools.
- Monitor and troubleshoot data pipeline issues to minimize downtime and ensure reliability.
Required Skills & Qualifications:
- 3+ years of experience in data engineering, data architecture, or related fields.
- Strong proficiency in Python, SQL, and scripting for data processing.
- Hands-on experience with big data frameworks such as Apache Spark, Hadoop, or Flink.
- Experience with ETL tools like Apache Airflow, dbt, or Talend.
- Knowledge of cloud platforms (AWS, GCP, Azure) and their data services (Redshift, BigQuery, Snowflake, etc.).
- Familiarity with data modeling, indexing, and query optimization techniques.
- Experience with real-time data streaming using Kafka, Kinesis, or Pub/Sub.
- Proficiency in Docker and Kubernetes for deploying data pipelines.
- Strong problem-solving and analytical skills, with a focus on performance optimization.
- Understanding of data security, governance, and compliance best practices.
Preferred Qualifications:
- Experience integrating machine learning workflows into data engineering pipelines.
- Knowledge of Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation.
- Familiarity with graph databases and time-series databases.
iPivot Bangalore
Job Description
We are hiring for Azure Data Engineer and AWS Data Engineer roles with Databricks and Redshift experience.
Experience: 6+ Years
Location: Bengaluru (hybrid model).
Notice period: immediate to 30 days.
Required Skills:
Proficient in Databricks...
Bangalore
Job Description
As a Data Engineer with expertise in PySpark, Databricks, and Microsoft Azure, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines and processing systems. You...
Amazon Bangalore
years of data engineering experience
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with SQL
• Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles...