Mastercard

Global payments and technology company

Sr. Data Engineer (Big Data & Analytics Engineering) at Mastercard

Role: Data Engineering
Level: Senior
Location: Pune, India
Work: On-site
Type: Full-time
Posted: 1 day ago

About the role

Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary

Sr. Data Engineer (Big Data & Analytics Engineering)


About Mastercard

Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere, by making transactions safe, simple, smart, and accessible. Through secure data, trusted networks, partnerships, and innovation, we enable individuals, financial institutions, governments, and businesses to realize their greatest potential.
Our culture is defined by our Decency Quotient (DQ), guiding how we work, collaborate, and create impact, inside and outside our company. With a presence across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.


About the Role:

The Sr. Data Engineer will design, build, and operate scalable data pipelines and curated datasets that power analytics products, reporting, and advanced modeling. Working closely with the Lead and cross-functional partners (Product, Data Science, and Platform teams), this role focuses on reliability, performance, data quality, and governance across batch and (where applicable) streaming workloads.

Key Responsibilities:

  • Build and maintain robust ETL/ELT pipelines for ingestion, transformation, and aggregation of large-scale datasets on Hadoop and enterprise data platforms.
  • Develop high-performance data processing jobs using PySpark/Spark, Python, and SQL (including engines such as Impala where applicable).
  • Partner with Product and Analytics stakeholders to translate requirements into reusable, governed data models (facts/dimensions, curated layers, and semantic-ready datasets).
  • Implement and automate data quality checks, reconciliation, lineage documentation, and monitoring to ensure trust in downstream analytics and AI use cases.
  • Optimize pipeline performance and cost through partitioning strategies, columnar file formats (Parquet, ORC, Delta), compute tuning, caching, and efficient query patterns.
  • Contribute to CI/CD for data workflows (testing, code reviews, deployment automation), promoting engineering best practices and maintainable codebases.
  • Support data governance, privacy, and security requirements (PII handling, access controls, auditability) in collaboration with platform and risk partners.
  • Collaborate with data scientists to publish analysis-ready and ML-ready datasets, including feature generation and repeatable data preparation processes.
  • Troubleshoot production issues, participate in on-call/operational rotations, and drive root-cause fixes to improve reliability.
  • Communicate data platform capabilities, limitations, and trade-offs clearly to technical and non-technical stakeholders.
  • Strong problem-solving skills with the ability to debug complex distributed data issues independently.
  • Clear written and verbal communication with both technical engineers and non-technical business stakeholders.
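One of the responsibilities above, automated data-quality checks with reconciliation, can be sketched in plain Python. This is an illustrative assumption, not Mastercard's implementation: the row shape, column names, and the `CheckResult` helper are invented for the example, and a production version would typically run equivalent checks inside the Spark pipeline itself.

```python
from dataclasses import dataclass


@dataclass
class CheckResult:
    """Outcome of one data-quality rule (hypothetical helper)."""
    name: str
    passed: bool
    failing_rows: int


def check_not_null(rows, column):
    """Fail if any row has a null/missing value in `column`."""
    failing = sum(1 for r in rows if r.get(column) is None)
    return CheckResult(f"not_null:{column}", failing == 0, failing)


def check_unique(rows, column):
    """Fail if `column` contains duplicate values."""
    seen, failing = set(), 0
    for r in rows:
        value = r.get(column)
        if value in seen:
            failing += 1
        seen.add(value)
    return CheckResult(f"unique:{column}", failing == 0, failing)


# Toy input: one null amount and one duplicated transaction id.
rows = [
    {"txn_id": 1, "amount": 10.0},
    {"txn_id": 2, "amount": None},
    {"txn_id": 2, "amount": 5.0},
]
results = [check_not_null(rows, "amount"), check_unique(rows, "txn_id")]
for res in results:
    status = "PASS" if res.passed else f"FAIL ({res.failing_rows} failing rows)"
    print(res.name, status)
```

A pipeline would typically gate promotion of a dataset to its curated layer on all such checks passing, and publish the results for monitoring.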

All About You:

Technical Skills & Experience:

  • Strong hands-on experience building production-grade data pipelines on big data platforms, including the Hadoop ecosystem (HDFS, Hive, Impala, YARN, Oozie) and/or cloud data platforms.
  • Proficiency in PySpark and Python, and strong SQL skills across distributed and relational data stores.
  • Experience with orchestration/integration tools such as Apache Airflow, Apache NiFi, Azure Data Factory, Pentaho, or Talend.
  • Solid understanding of data modeling, incremental processing patterns (CDC, SCD Type 1/2), and building curated datasets for analytics and reporting.
  • Experience with cloud services (Azure/AWS/GCP) for data lakes, compute, and storage is preferred.
  • Proficiency in columnar and open table formats: Parquet, ORC, Delta Lake, Apache Iceberg, or Apache Hudi.
  • Strong knowledge of distributed computing patterns: partitioning, bucketing, broadcast joins, shuffle optimization.
  • Working knowledge of DevOps/CI-CD practices: version control (Git), automated testing, release pipelines, and observability.
  • Strong problem-solving skills with the ability to debug complex data issues and communicate clearly with technical and non-technical stakeholders.
  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
  • 5+ years of relevant experience in data engineering or big data analytics engineering (flexible based on depth of expertise).
GenAI / LLM Data Enablement (Preferred):

  • Experience preparing curated, governed datasets (including semi-structured/unstructured data) for AI/GenAI consumption, with attention to privacy, quality, and reproducibility.
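The SCD Type 2 pattern named in the skills list above can be illustrated with a small, hypothetical in-memory sketch: changed dimension rows are closed out and a new current version is appended, preserving history. The table layout, column names (`valid_from`, `valid_to`, `is_current`), and the `scd2_apply` helper are assumptions for illustration; a real pipeline would usually express this as a MERGE in Spark SQL or a similar engine.

```python
from datetime import date


def scd2_apply(dim_rows, updates, key, tracked, as_of):
    """Apply SCD Type 2: close out changed current rows, append new versions.

    dim_rows: existing dimension rows (dicts), mutated copies returned.
    updates:  incoming records keyed by `key`.
    tracked:  columns whose change triggers a new row version.
    """
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for upd in updates:
        existing = current.get(upd[key])
        changed = existing is None or any(
            existing[col] != upd[col] for col in tracked
        )
        if not changed:
            continue  # no tracked attribute changed; keep the current row
        if existing is not None:
            # Close out the previous version of this key.
            existing["is_current"] = False
            existing["valid_to"] = as_of
        # Append the new current version with an open-ended validity window.
        out.append({**upd, "valid_from": as_of,
                    "valid_to": None, "is_current": True})
    return out


# Toy dimension: one customer whose city changes.
dim = [{"cust_id": 1, "city": "Pune",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, [{"cust_id": 1, "city": "Mumbai"}],
                 key="cust_id", tracked=["city"], as_of=date(2024, 6, 1))
```

After the call, the original row is closed out (`is_current` False, `valid_to` set) and a new current row for Mumbai is appended, so point-in-time queries remain answerable.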

Corporate Security Responsibility

All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:

  • Abide by Mastercard’s security policies and practices;

  • Ensure the confidentiality and integrity of the information being accessed;

  • Report any suspected information security violation or breach; and

  • Complete all periodic mandatory security training in accordance with Mastercard’s guidelines.

Required skills

Data engineering

ETL

Big data

Data pipelines

Analytics engineering


About Mastercard

Mastercard: a financial network that processes payments between banks and cardholders

Employees: 10,001+
Headquarters: Purchase
Valuation: $360B

Reviews

3.8 overall (10 reviews)

Work-life balance: 2.8
Compensation: 4.1
Culture: 4.2
Career: 3.4
Management: 3.1

72% recommend to a friend

Pros

Great team culture and supportive colleagues

Excellent benefits and compensation

Training and development opportunities

Cons

Work-life balance challenges and long hours

High pressure and stress during peak times

Management issues and lack of direction

Salary Ranges

51 data points, spanning Junior/L3 to Director

Junior/L3 · Data Engineer (5 reports): $137,800 total per year
Base: $106,000
Stock: -
Bonus: -
Reported range: $107,900 - $166,918

Interview experience

3 interviews reported

Difficulty: 3.3 / 5
Duration: 14-28 weeks
Offer rate: 33%
Experience: Positive 33%, Neutral 34%, Negative 33%

Interview process

1. Application Review
2. Recruiter Screen
3. Technical Phone Screen
4. Behavioral Interview
5. Super Day/Final Round
6. Offer

Common questions

Coding/Algorithm

Technical Knowledge

Behavioral/STAR

System Design

Past Experience