Hiring
As a Data Engineer for the WORKS team, you will play a critical role in the modernization and re-platforming of the WORKS Data Warehouse. This position is part of a strategic initiative to transition from legacy data structures (Oracle Database) to a modern Data Mart on Google Cloud infrastructure (BigQuery and Cloud Run). You will not just be "moving data"; you will be a key architect in reducing years of technical debt by refactoring complex table structures, eliminating redundancies, and building our first formal ETL processes to ensure a high-performance, cost-effective data environment.
Required Skills and Experience:
- Bachelor's Degree in Computer Science, Computer Engineering, or a related field.
- English proficiency (written and verbal).
- Data Engineering or Database Development experience (3–5 years minimum).
- Excellent communication skills, with the ability to articulate complex technical concepts to global stakeholders.
- Advanced proficiency in SQL and PL/SQL, with a strong ability to read and interpret complex legacy stored procedures and view logic.
- Proven experience in Data Modeling, specifically designing Data Marts and optimizing schemas for analytical workloads.
- Familiarity with SAP Business Objects (Universes/Web Intelligence), with the ability to navigate and extract transformation logic from the semantic layer.
- Experience with Google Cloud Platform (GCP) and BigQuery.
- Experience with ETL tools (such as Cloud Data Fusion, dbt, or Dataform).
- Experience with relational database management systems (RDBMS) such as Oracle or PostgreSQL.
- Willingness to challenge the status quo to eliminate redundant table structures and unnecessary joins.
- Ability to work in a dynamic environment, handling multiple assignments and prioritizing work appropriately.
- Strong collaboration skills and the ability to work across regions (US, Mexico, India).
- Attention to detail and a strong "detective" mindset for solving data redundancy problems.
- Experience working with Agile methodologies (Scrum, Kanban).
Preferred Skills and Experience:
- Domain knowledge of the vehicle order-to-delivery process.
- Specific experience with BigQuery SQL and performance tuning (partitioning/clustering).
- Experience migrating from SAP Business Objects to Power BI (data layer focus).
- Knowledge of Unix shell scripting.
- Experience with GitHub or other version control tools.
- Knowledge of Unix/Autosys to understand legacy job scheduling.
Responsibilities:
- Reverse Engineering & Logic Extraction: Analyze legacy Oracle data structures and SAP Business Objects Universes to identify and document the "hidden" ETL logic, complex joins, and calculated measures currently used for business reporting.
- Data Mart Design: Design and implement a clean, high-performance Data Mart in BigQuery (star/snowflake schema) that eliminates the redundancies and "spaghetti joins" of the legacy environment.
- Source-to-Target Mapping: Map upstream interfaces and Oracle data structures to the new BigQuery environment, ensuring data integrity and consistency during the transition.
- ETL Development: Design, develop, and implement high-quality ETL pipelines that automate data movement and transformation, replacing manual or nonexistent processes.
- Technical Debt Reduction: Actively simplify the data architecture by consolidating redundant tables and optimizing query paths for cloud-native performance.
- Data Model Enablement: Build and maintain the core data layer (tables, views, and curated datasets) that will enable other Software Engineers to successfully transition reports from SAP Business Objects to Power BI.
- Documentation: Create clear technical documentation of the new Data Mart schema and the logic used to transform legacy data.
- Collaboration: Work closely with a global team of developers (located in Mexico, the US, and India) and business stakeholders to simplify the WORKS data architecture and ensure data availability and integrity for reporting and analytics.
- Ensure on-time delivery using Agile practices such as pair programming and automated testing for data pipelines.
- Conduct code and design reviews to ensure adherence to data standards, patterns, and architecture principles.
- Perform and participate in load/volume testing to ensure the new platform can handle global scale.
About Ford

The Ford Motor Company is an American multinational automobile manufacturer headquartered in Dearborn, Michigan, United States. It was founded by Henry Ford and incorporated on June 16, 1903.