* Design, implement, and maintain secure and scalable data ingestion pipelines from a wide variety of source systems, including SAP, Salesforce, SharePoint, APIs, and manufacturing platforms (including legacy systems).
* Build and enhance metadata-driven services that enable discoverability, access governance, and operational transparency of enterprise data.
* Serve as a technical expert and cross-functional enabler for structured and unstructured data acquisition, quality, and compliance.
* Establish and maintain holistic data quality management, monitoring, and reporting.
* Contribute to a global data engineering team delivering to all major business domains.
* Drive ingestion and metadata service implementation for over 100 enterprise data sources.
* Collaborate across business IT, cybersecurity, infrastructure, and architecture teams to ensure secure and sustainable delivery.
## Main Tasks
▪ Build and maintain Python- or Scala-based extraction services (e.g., Debezium Server, custom APIs, rclone).
• Implement CDC, delta, and event-based patterns.
• Implement push-based HTTP and Kerberos-authenticated DLT delivery.
• Establish, operate, and troubleshoot extraction from SAP using tools like Theobald Xtract Universal.
• Integrate with systems such as Salesforce, SharePoint, and other API- or file-based endpoints.
▪ Establish and maintain a business-friendly, web-accessible data catalog application, with dataset profiles, metadata, and usability features.
▪ Integrate dataset discoverability, preview/exploration options, and lineage information using Unity Catalog as a backend metadata system.
▪ Design and implement structured access request workflows including request submission, approval chains, audit trail, and enablement triggers.
• Perform design reviews with Cybersecurity.
• Ensure documentation and compliance for all interfaces and data ingress points.
• Manage audit and traceability requirements.
• Collaborate closely with IT and business users to translate requirements into scalable technical patterns.
• Serve as technical escalation point for complex source integration.
▪ Define and implement a multi-layered data quality framework, including unit-level, integration-level, and cross-pipeline validation rules.
▪ Establish centralized and version-controlled storage of DQ rules, with integration into orchestration and CI/CD pipelines.
▪ Implement automated DQ monitoring with severity levels (Critical, High, Medium, Low) and enable flagging, filtering, and quarantining logic at relevant stages of the pipeline.
▪ Collaborate with source system owners and business stakeholders to define meaningful and actionable DQ thresholds.
## Qualifications
- Degree in Computer Science, Data Engineering, or a related field. Azure or Databricks certification is a plus.
- 5–8 years in data engineering, with hands-on experience ingesting structured and unstructured enterprise data into modern cloud platforms.
- Proven implementation of source-system ingestion frameworks, metadata automation, and compliance-controlled interfaces.
- Leadership experience is not required; however, mentoring junior developers, leading implementation workstreams, and contributing to engineering standards and code quality improvement initiatives are a plus.
- Comfortable working across geographies and time zones; collaborates effectively with global teams and enterprise stakeholders.
## Additional Information
The well-being of our employees is important to us. That's why we offer exciting career prospects and support you in achieving a good work-life balance with additional benefits such as:
- Training opportunities
- Mobile and flexible working models
- Sabbaticals
and much more...
Sound interesting? Click here to find out more.
Diversity, Inclusion & Belonging are important to us and make our company strong and successful. We offer equal opportunities to everyone, regardless of age, gender, nationality, cultural background, disability, religion, ideology, or sexual orientation.
Ready to drive with Continental? Take the first step and fill in the online application.