
Lead Data Engineer - Data Modeling

JPMorgan Chase

Plano, TX, United States · On-site · Full-time · 2w ago

Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

  • As a Lead Data Engineer at JPMorgan Chase within the Enterprise Technology CTO SRE & Support team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

You are a technical builder with strong data modeling instincts who will build the data backbone for an operational learning capability in a complex support and SRE environment. You will connect and model data from incidents, RCA outputs, problem records, support tickets, customer signals, and related telemetry to surface recurring patterns, identify systemic drivers, and produce actionable handoffs to prevention and readiness teams. The role goes beyond dashboards: it requires workflow-aware data modeling, pragmatic delivery, and comfort working with heterogeneous, imperfect operational data. Partnering closely with leaders across Support, SRE, and Engineering, you will deliver lightweight, durable data products that strengthen institutional learning, improve executive visibility, and enable proactive reliability improvements in a blameless, learning-oriented environment. Success demands hands-on technical depth, comfort with ambiguity, and the judgment to start with minimally sufficient solutions that evolve through use.

Job responsibilities

  • Design and implement a minimum viable data model that links incident, RCA, problem, ticketing, customer signals, and observability data for the review function.

  • Build and maintain robust pipelines and transformations that expose repeat patterns, operational toil themes, and systemic issue categories across sources.

  • Develop lightweight, workflow-supporting data products that turn operational events into actionable learning and clear handoffs for downstream owners.

  • Partner with support, SRE, and operational leaders to define required data fields, taxonomies, classifications, and handoff structures that make review outputs actionable and measurable.

  • Design mechanisms to distinguish one-off incidents from recurring classes of failure or avoidable demand, enabling detection of recurrence and informed prioritization.

  • Establish practical data quality standards, field definitions, and lightweight governance (e.g., lineage, stewardship, access) for operational learning datasets across multiple sources.

  • Safeguard blameless review practices by ensuring outputs promote learning and improvement rather than punitive reporting; embed blameless learning norms into data and workflow design.

  • Translate loosely defined operational problems into structured datasets, dashboards, and decision-support tools with clear business and engineering value.

  • Document data models, assumptions, transformation logic, and operating procedures to support maintainability, transparency, and long-term scale.

  • Build solutions that can start manual or semi-manual and progressively automate as process maturity grows, integrating with enterprise systems (e.g., ServiceNow, Jira) over time.

  • Create decision-useful reporting, visualizations, and leadership-ready views on repeated high-impact issues, emerging pain themes, action status, and systemic trends, including service health metrics (e.g., MTTD, MTTR) to support prioritization, backlog visibility, ownership/SLA tracking, and escalation of repeated high-impact patterns without creating reporting overhead.
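To make the last two responsibilities concrete, here is a minimal sketch of how MTTD/MTTR and recurrence detection might be computed over incident records. This is illustrative only: the field names (`opened_at`, `detected_at`, `resolved_at`, `fingerprint`) and the metric definitions (MTTD as occurrence-to-detection, MTTR as detection-to-resolution) are assumptions for the example, not the posting's actual schema or the firm's definitions.

```python
from collections import Counter
from datetime import datetime
from statistics import mean

# Hypothetical incident records; all fields and values are illustrative.
incidents = [
    {"id": "INC-1", "opened_at": "2024-01-01T00:00", "detected_at": "2024-01-01T00:10",
     "resolved_at": "2024-01-01T02:10", "fingerprint": "db-conn-pool-exhaustion"},
    {"id": "INC-2", "opened_at": "2024-01-05T08:00", "detected_at": "2024-01-05T08:05",
     "resolved_at": "2024-01-05T09:05", "fingerprint": "db-conn-pool-exhaustion"},
    {"id": "INC-3", "opened_at": "2024-01-09T12:00", "detected_at": "2024-01-09T12:30",
     "resolved_at": "2024-01-09T13:00", "fingerprint": "cert-expiry"},
]

def _minutes(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# MTTD: mean time from occurrence to detection (one common definition).
mttd = mean(_minutes(i["opened_at"], i["detected_at"]) for i in incidents)
# MTTR: mean time from detection to resolution (definitions vary by org).
mttr = mean(_minutes(i["detected_at"], i["resolved_at"]) for i in incidents)

# Recurrence: group by a failure fingerprint; more than one occurrence
# marks a recurring class of failure rather than a one-off incident.
counts = Counter(i["fingerprint"] for i in incidents)
recurring = {fp for fp, n in counts.items() if n > 1}
```

In practice the fingerprint would come from the classification/taxonomy work described above rather than being hand-assigned, and the metrics would be computed per service or per category to support prioritization.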

Required qualifications, capabilities, and skills

  • Formal training or certification and 5+ years of experience in professional data engineering roles in cloud-based environments.

  • Data engineering in operational domains: Proven experience building models and pipelines with SQL/Python across heterogeneous incident, ticketing, RCA, and telemetry sources; comfortable with imperfect or partial data.

  • Data quality and pragmatic governance: Field normalization, standards, and lineage practices that scale across sources without slowing delivery.

  • Blameless workflow design: Ability to design data and workflow outputs that support learning and improvement rather than punitive reporting.

  • Investigative rigor: Ability to reconstruct precise event timelines across systems and maintain strong evidence integrity in operational analyses.

  • Evidence integrity: Experience producing auditable, versioned datasets and reproducible analyses; clearly separates facts, interpretations, and hypotheses in artifacts and reviews.

  • Classification design: Experience designing taxonomies and controlled vocabularies that enable consistent classification and actionability across operational data.

  • Enterprise workflow integration: Integrates with enterprise platforms (e.g., ticketing/incident systems) and defines data fields, handoffs, and action-tracking structures that convert review outputs into owned, trackable work.

  • Incremental delivery mindset: Starts with minimally sufficient solutions and iterates toward greater automation; adapts under pressure and navigates evolving requirements while keeping stakeholders aligned.

  • Structured synthesis: Clear documentation of assumptions and logic; conducts structured, non-leading SME/operator interviews and synthesizes qualitative inputs into structured data.

  • Decision-useful reporting: Builds executive- and operator-facing dashboards and decision-support views tightly linked to prioritization, ownership, governance decisions, and measurable outcomes rather than volume reporting.
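The "classification design" qualification above can be illustrated with a small sketch of a controlled vocabulary that maps free-text ticket categories onto a fixed taxonomy. The taxonomy terms and synonyms here are invented for illustration; a real vocabulary would be defined with support and SRE partners, as the responsibilities section describes.

```python
# Hypothetical controlled vocabulary: canonical category -> known synonyms.
# All terms are illustrative, not an actual operational taxonomy.
TAXONOMY = {
    "database": {"db timeout", "connection pool", "deadlock"},
    "certificates": {"cert expired", "tls handshake"},
    "capacity": {"disk full", "oom", "cpu saturation"},
}

def classify(raw_category: str) -> str:
    """Map a free-text category onto the controlled vocabulary.

    Unknown values fall through to 'unclassified' so gaps in the
    taxonomy stay visible instead of being silently absorbed.
    """
    norm = raw_category.strip().lower()
    for canonical, synonyms in TAXONOMY.items():
        if norm == canonical or norm in synonyms:
            return canonical
    return "unclassified"
```

For example, `classify("DB Timeout ")` normalizes to the `database` category, while an unmapped value returns `unclassified`, surfacing where the vocabulary needs extension.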

Preferred qualifications, capabilities, and skills

  • Direct experience with SRE, incident/problem management, RCA methods and techniques, service health metrics (e.g., MTTD, MTTR), and post-incident reviews.

  • Applied use of LLMs/agents, RAG, anomaly detection, or automated runbooks to accelerate evidence collection, summarization, and action routing in review workflows.

  • Familiarity with structured methods used in high-reliability investigations (e.g., Bowtie/AcciMap/STPA), peer review/checklists, cross-source corroboration, cognitive bias mitigation (e.g., confirmation, hindsight, outcome bias), and evidence-handling practices such as immutable log retention, event timestamping, query capture, and “docket”-style evidence packages suitable for leadership reviews and audits.

  • Experience with modern cloud data platforms and workflow orchestration (e.g., warehouses/lakehouses, streaming, Airflow/Prefect/dbt) and integration with systems like ServiceNow or Jira.

  • Background in financial services or other regulated, large-scale operating models; comfort with data privacy, retention, and access controls.

  • Designs metrics and feedback loops to evaluate the impact of corrective actions/safety recommendations and reduce recurrence over time.

Certifications/education may include Lean/Six Sigma, SRE, reliability/safety or RCA-focused training, or equivalent practical credentials.


About JPMorgan Chase

JPMorgan Chase & Co. is an American multinational banking institution headquartered in New York City and incorporated in Delaware. It is the largest bank in the United States, and the world's largest bank by market capitalization as of 2025.

Employees: 300,000+
Headquarters: New York City
Company valuation: $500B

Reviews

3.8 (10 reviews)

Work-life balance: 3.2
Compensation: 4.1
Culture: 3.8
Career growth: 3.0
Management: 2.5

65% would recommend to a friend

Pros

Good benefits and compensation

Supportive and collaborative environment

Flexible work arrangements

Cons

Long hours and heavy workload

Management issues and lack of direction

High stress during peak times

Salary data

41 data points

Levels: Junior/L3 · Mid/L4 · Senior/L5

Junior/L3 · Analytics Solutions Associate (1 report)

Total compensation: $139,000
Base salary: $107,000
Stock: -
Bonus: -

Interview experience

5 interviews

Difficulty: 3.0 / 5
Duration: 14-28 weeks
Offer rate: 40%

Experience: Positive 20% · Neutral 80% · Negative 0%

Interview process

1. Application Review
2. HireVue Video Interview
3. Recruiter Screen
4. Superday/Panel Interview
5. Final Interview
6. Offer

Frequently asked questions

Behavioral/STAR

Technical Knowledge

Culture Fit

Past Experience

Case Study