

AI DevOps Engineer

Ingersoll Rand

Tlalnepantla, Ciudad de México, MX, 54070 · On-site · Full-time · 3w ago

Required Skills

GCP

Ingersoll Rand is committed to achieving workforce diversity reflective of our communities. We are an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

Role Summary

Enable and scale Ingersoll Rand’s GenAI program by designing, building, and operating the production infrastructure that powers AI-driven applications across the enterprise. This role focuses on DevOps, cloud infrastructure, CI/CD, observability, and platform reliability for GenAI systems built on LLM APIs and Snowflake-native capabilities.

Own the operational lifecycle of LLM-powered systems including prompt versioning, model configuration, cost controls, and production reliability across Snowflake-native and API-based GenAI platforms.
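Prompt versioning and model configuration can be made concrete with a small registry sketch. This is a minimal, hypothetical illustration (the `PromptRegistry` class, prompt names, and model identifiers are invented for this example, not part of the posting); a production system would back it with a database and audit trail.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    """One immutable prompt release: template plus its model configuration."""
    name: str
    version: int
    template: str
    model: str
    max_output_tokens: int

class PromptRegistry:
    """In-memory registry; keyed by (prompt name, version)."""
    def __init__(self):
        self._store = {}

    def register(self, pv: PromptVersion):
        self._store[(pv.name, pv.version)] = pv

    def latest(self, name: str) -> PromptVersion:
        versions = [v for (n, v) in self._store if n == name]
        return self._store[(name, max(versions))]

registry = PromptRegistry()
registry.register(PromptVersion("summarize", 1, "Summarize: {text}", "gpt-4o", 512))
registry.register(PromptVersion("summarize", 2, "Summarize concisely: {text}", "gpt-4o", 256))
current = registry.latest("summarize")
```

Pinning applications to an explicit `(name, version)` pair rather than a mutable prompt string is what makes rollback and cost attribution tractable.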

You will work closely with AI engineers and application developers to turn prototypes into secure, reliable, observable, and scalable AI applications, ensuring smooth integration with enterprise systems and data platforms. This is a DevOps and platform engineering role with a strong focus on production-grade AI systems.

The Core Challenge

GenAI teams can build powerful applications quickly using LLM APIs—but productionizing them at enterprise scale is hard. Challenges include environment consistency, secure data access, observability, cost control, CI/CD automation, and reliable integrations with core business systems.

This role bridges that gap by providing standardized infrastructure, deployment pipelines, and operational frameworks so AI teams can move fast without sacrificing reliability, security, or governance.

Key Responsibilities

GenAI Platform & Infrastructure

  • Design, build, and maintain cloud infrastructure to host GenAI applications using GCP and Snowflake container services

  • Support Snowflake-based AI workflows including data ingestion, Cortex Agents, Cortex Analyst, and Cortex Search

  • Define standardized, reusable infrastructure patterns for AI applications across development, staging, and production environments

  • Implement cost-aware infrastructure patterns (warehouse sizing, service isolation, token budgeting) for GenAI workloads

  • Explore, build, and support proof-of-concept initiatives to evaluate emerging GenAI and MLOps platforms and architectures, focusing on deployment, orchestration, monitoring, and governance of LLM-based systems.
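The "token budgeting" pattern above can be sketched as a hard per-service cap. This is an assumed illustration (the `TokenBudget` class and the limits are hypothetical, not from the posting); real systems would persist usage and reset it daily.

```python
class TokenBudget:
    """Tracks per-service daily token spend against a hard cap,
    so one runaway GenAI workload cannot exhaust shared spend."""
    def __init__(self, daily_limit: int):
        self.daily_limit = daily_limit
        self.used = 0

    def try_consume(self, tokens: int) -> bool:
        """Reserve tokens if the budget allows; refuse the call otherwise."""
        if self.used + tokens > self.daily_limit:
            return False
        self.used += tokens
        return True

    def remaining(self) -> int:
        return self.daily_limit - self.used

# Each GenAI service gets its own isolated budget (service isolation).
budget = TokenBudget(daily_limit=100_000)
accepted = budget.try_consume(60_000)   # fits within the cap
rejected = budget.try_consume(50_000)   # would exceed it, so refused
```

Refusing (or queueing) over-budget calls at the platform layer keeps cost control out of each application team's hands.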

CI/CD & Automation

  • Build and maintain CI/CD pipelines using GitHub for AI applications and platform services

  • Automate infrastructure provisioning and environment configuration using Infrastructure-as-Code

  • Enable safe, repeatable deployments with versioning, rollback, and environment promotion strategies
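The environment-promotion and rollback strategy in the bullets above can be sketched as follows. All names here (`ReleaseTracker`, the `dev`/`staging`/`prod` chain, version strings) are illustrative assumptions; in practice this logic lives in the CI/CD pipeline itself.

```python
ENVIRONMENTS = ["dev", "staging", "prod"]

class ReleaseTracker:
    """Records which versions ran in each environment and enforces
    dev -> staging -> prod promotion with one-step rollback."""
    def __init__(self):
        self.history = {env: [] for env in ENVIRONMENTS}

    def deploy(self, env: str, version: str):
        self.history[env].append(version)

    def promote(self, version: str, to_env: str):
        """Refuse promotion unless the version ran in the previous stage."""
        idx = ENVIRONMENTS.index(to_env)
        if idx > 0 and version not in self.history[ENVIRONMENTS[idx - 1]]:
            raise ValueError(f"{version} was never validated in {ENVIRONMENTS[idx - 1]}")
        self.deploy(to_env, version)

    def rollback(self, env: str) -> str:
        """Drop the current version and return the previous one."""
        self.history[env].pop()
        return self.history[env][-1]

tracker = ReleaseTracker()
tracker.deploy("dev", "v1.0")
tracker.promote("v1.0", "staging")
tracker.promote("v1.0", "prod")
tracker.deploy("dev", "v1.1")
tracker.promote("v1.1", "staging")
tracker.promote("v1.1", "prod")
previous = tracker.rollback("prod")   # revert prod to the prior release
```

Keeping per-environment history is what makes rollback a constant-time operation instead of a redeploy-from-scratch.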

Observability & Reliability

  • Implement observability for GenAI systems using Langfuse and Snowflake observability tools to continuously improve AI system reliability and usefulness

  • Monitor application health, latency, usage, errors, and cost using dashboards, alerts, and runbooks to support reliable production operations
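The health/latency/cost monitoring described above reduces to aggregating a few per-call metrics. The sketch below is generic and deliberately does not claim any specific Langfuse API; the class, field names, and numbers are invented for illustration.

```python
class LLMCallMetrics:
    """Aggregates per-call latency, token usage, cost, and errors so
    dashboards and alerts can read p95 latency, error rate, and spend."""
    def __init__(self):
        self.calls = []

    def record(self, latency_ms: float, total_tokens: int,
               cost_usd: float, error: bool = False):
        self.calls.append({"latency_ms": latency_ms, "tokens": total_tokens,
                           "cost_usd": cost_usd, "error": error})

    def p95_latency_ms(self) -> float:
        """Nearest-rank p95 over recorded latencies."""
        latencies = sorted(c["latency_ms"] for c in self.calls)
        return latencies[int(0.95 * (len(latencies) - 1))]

    def error_rate(self) -> float:
        return sum(c["error"] for c in self.calls) / len(self.calls)

    def total_cost_usd(self) -> float:
        return sum(c["cost_usd"] for c in self.calls)

metrics = LLMCallMetrics()
for i in range(9):                     # nine healthy calls
    metrics.record(latency_ms=200 + 10 * i, total_tokens=800, cost_usd=0.002)
metrics.record(latency_ms=1500, total_tokens=800,
               cost_usd=0.002, error=True)   # one slow, failed call
```

Alert rules then become simple thresholds over these aggregates (e.g. page when `error_rate()` exceeds an agreed budget).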

Cloud & Container Operations

  • Manage containerized workloads across GCP and Snowflake containers

  • Ensure secure networking, secrets management, access controls, and environment isolation

  • Optimize performance, scalability, and cost for AI application workloads
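Environment isolation for secrets can be sketched as a namespaced lookup. This is a self-contained illustration using environment variables (the `get_secret` helper and key names are hypothetical); a production platform would typically read from a managed store such as GCP Secret Manager instead.

```python
import os

class SecretNotConfigured(RuntimeError):
    """Raised when a required secret is absent for the target environment."""

def get_secret(name: str, env: str) -> str:
    # Secrets are namespaced per environment (dev/staging/prod) so a
    # misconfigured service cannot silently read another environment's
    # credentials.
    key = f"{env.upper()}_{name}"
    value = os.environ.get(key)
    if value is None:
        raise SecretNotConfigured(f"missing secret {key}")
    return value

os.environ["PROD_LLM_API_KEY"] = "dummy-for-illustration"
prod_key = get_secret("LLM_API_KEY", "prod")
```

Failing loudly on a missing secret, rather than falling back to a default, is what keeps environments genuinely isolated.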

Enterprise Integrations

  • Support and operationalize integrations between GenAI applications and enterprise systems such as SAP, Salesforce, SharePoint, and other internal/external platforms

  • Ensure reliability, security, and observability of API-based and event-driven integrations
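Reliable API-based integrations usually hinge on retry-with-backoff for transient upstream failures. The sketch below is a generic pattern, not any specific system's client; `TransientError`, `call_with_backoff`, and the flaky call are all invented for illustration.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable failure (timeout, HTTP 429/503)."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry fn on transient errors with exponential backoff plus jitter.
    `sleep` is injectable so tests do not actually wait."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# A fake integration call that fails twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("upstream timeout")
    return "ok"

result = call_with_backoff(flaky_call, sleep=lambda s: None)
```

Jitter on the delay avoids synchronized retry storms when many services hit the same degraded upstream at once.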

Collaboration & Enablement

  • Partner closely with AI engineers, data engineers, and IT teams to remove operational blockers

  • Provide documentation, templates, and best practices that enable teams to deploy and operate independently

  • Contribute to standards for security, reliability, and governance across the GenAI platform

Required Qualifications

  • 3+ years in DevOps, platform engineering, or software infrastructure roles; 1-2+ years specifically with ML/AI infrastructure or MLOps

  • Experience operating LLM-based applications in production, including prompt management, cost monitoring, and reliability practices

  • Strong experience with CI/CD pipelines (GitHub Actions preferred)

  • Hands-on experience with containerized applications (Docker; Kubernetes or managed container platforms)

  • Experience operating workloads on GCP or similar cloud platforms

  • Proficiency with Infrastructure-as-Code tools (Terraform or equivalent)

  • Strong scripting skills (Python and/or Bash)

  • Experience implementing monitoring, logging, and observability for production systems

  • Experience supporting API-based applications and integrations

  • Ability to troubleshoot and operate complex distributed systems

  • Strong communication skills and ability to collaborate across technical and business teams

  • Fluent in English (written and spoken)

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, IT, or related field (or equivalent experience)

Preferred Qualifications

  • Experience with Snowflake, including data ingestion pipelines and Snowflake-native applications

  • Familiarity with GenAI application architectures (RAG, agents, prompt orchestration, API-based LLM usage)

  • Experience with Langfuse or similar AI observability tools

  • Experience integrating enterprise systems (SAP, Salesforce, SharePoint, etc.)

  • Experience with data versioning tools (DVC, Pachyderm, LakeFS)

  • Knowledge of vector databases and LLM infrastructure (Pinecone, Weaviate, Milvus, Chroma)

  • Cloud or MLOps certifications (AWS Machine Learning Specialty, AWS Solutions Architect, Kubernetes CKA/CKAD, Azure AI Engineer, GCP ML Engineer)

  • Manufacturing or industrial IoT experience

  • Experience with compliance and governance frameworks for AI/ML systems

What This Role IS

  • Infrastructure engineer who enables AI teams to move faster through automation and robust tooling

  • Systems thinker who balances reliability, scalability, and cost efficiency

  • Bridge between AI innovation and production operations who translates complex requirements into practical solutions

  • Continuous learner who keeps current with the rapidly evolving AI-Ops ecosystem and cloud-native technologies

Ingersoll Rand Inc. (NYSE:IR), driven by an entrepreneurial spirit and ownership mindset, is dedicated to helping make life better for our employees, customers and communities. Customers lean on us for our technology-driven excellence in mission-critical flow creation and industrial solutions across 40+ respected brands where our products and services excel in the most complex and harsh conditions. Our employees develop customers for life through their daily commitment to expertise, productivity and efficiency. For more information, visit www.IRCO.com.


About Ingersoll Rand

Ingersoll Rand Inc. is an American multinational company that provides flow creation and industrial products. The company was formed in February 2020 through the spinoff of the industrial segment of Ingersoll-Rand plc and its merger with Gardner Denver.

Employees: 10,001+
Headquarters: Davidson
Company valuation: $15.2B

Reviews

3.8 · 10 reviews

Work-life balance: 2.8
Compensation: 3.2
Company culture: 4.2
Career growth: 3.1
Management: 3.9

72% would recommend to a friend

Pros

Supportive management and leadership

Great team culture and teamwork

Good benefits and training programs

Cons

Heavy workload and long working hours

Fast-paced and stressful environment

Limited growth opportunities

Salary Range

20 data points

Senior/L5 · Enterprise Identity and Access Management Lead (1 report)

Total compensation: $139,689
Base salary: $107,453
Stock: -
Bonus: -

Interview Experience

46 interviews

Difficulty: 3.0 / 5
Duration: 14-28 weeks
Offer rate: 38%
Experience: Positive 69% · Neutral 21% · Negative 10%

Interview Process

1. Phone Screen
2. Technical Interview
3. Hiring Manager
4. Team Fit

Common Interview Questions

Technical skills

Past experience

Team collaboration

Problem solving