Data Engineer I, SCOT - AIM
Amazon Bangalore
SCOT's Automated Inventory Management (AIM) team seeks talented individuals passionate about solving complex problems and driving impactful business decisions for our executives. The AIM team owns critical Tier 1 metrics for Amazon Retail stores, providing key insights to improve store health monitoring.
We focus on enhancing selection, product availability, inventory efficiency, and inventory readiness to fulfill customer orders (FastTrack) while enabling accelerated delivery Speed & Fulfillment Worldwide. This improves both Customer Experience (CX) and Long-Term Free Cash Flow (LTFCF) outcomes.
Our approach involves creating standardized, scalable, and automated systems and tools to identify and reduce supply chain defects in our systems and inputs, while driving operational leverage and scaling.
As a Data Engineer, you will analyze large-scale business data, solve real-world problems, and develop metrics and business cases to delight our customers worldwide. You will work closely with Scientists, Engineers, and Product Managers to build scalable, high-impact products, architect data pipelines, and transform data into actionable insights to manage the business at scale.
We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you.
Key job responsibilities
- Develop data products and build, optimize, and maintain reliable data pipelines for extracting, transforming, and loading (ETL) large datasets from diverse sources.
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, AWS Redshift, and OLAP technologies; model data and metadata for ad hoc and pre-built reporting.
- Work with product and tech teams to build robust, scalable data integration (ETL) pipelines using SQL, Python, and Spark (a minimal sketch follows this list).
- Monitor and improve data pipeline performance, ensuring low latency and high availability.
- Automate repetitive data engineering tasks to streamline workflows and improve efficiency.
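To make the stack above concrete, here is a minimal, illustrative PySpark ETL sketch of the kind of pipeline this role builds. The S3 paths, column names (event_id, event_ts, asin, units), and the daily aggregation are hypothetical placeholders for illustration only, not AIM's actual schema or data model.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_inventory_etl").getOrCreate()

# Extract: read raw inventory events (placeholder S3 path).
raw = spark.read.parquet("s3://example-bucket/raw/inventory_events/")

# Transform: deduplicate events, then aggregate units per ASIN per day.
daily = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("asin", "event_date")
       .agg(F.sum("units").alias("units_on_hand"))
)

# Load: write date-partitioned output for downstream Redshift/OLAP consumption.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_inventory/"))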
About the team
Supply Chain Optimization Technologies (SCOT) is a complex group of systems designed to make the best decisions for forecasting, buying, placing, and shipping inventory. Functionally, these teams work together to drive in-stock, drive placement, drive inventory removal, and manage the customer experience.
Basic qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
Preferred qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage