
NVIDIA

Pioneering accelerated computing and AI

Solutions Architect - DevOps

Function: DevOps
Level: Mid-level
Location: Australia, Remote
Work mode: Remote
Type: Full-time
Posted: 1 month ago

NVIDIA is looking for a Senior Cloud Infrastructure and DevOps Solutions Architect to join its ANZ team, focused on working with our Neo Clouds and their customers on stand-up and operational excellence. Ideally, the successful candidate will be located in Sydney or Melbourne. Enterprise, government, and academic research groups around the world use NVIDIA products to redefine deep learning and data analytics and to power data workloads. Be part of the crew developing many of the largest and fastest AI/HPC systems in the world! We are looking for someone who can work on a dynamic, customer-focused team, which requires excellent interpersonal skills. This role interacts with Neo Clouds, customers, partners, and various departments to analyze, define, implement, and operate large-scale AI operational projects. The scope of these efforts spans system building, Kubernetes-based platforms, automation, hardware, and networking.

What you'll be doing:

  • Maintain large scale computational and AI infrastructure, focusing on monitoring, logging, workload orchestration (Kubernetes and Linux job schedulers).

  • Optimize scalable, production-ready Kubernetes-based container platforms coordinated with enterprise-grade networking and storage.

  • Serve as a key technical resource: develop, refine, and document standard methodologies and operational guidelines to be shared with internal teams.

  • Perform end-to-end troubleshooting across the stack, from bare metal and the operating system through the software stack, container platform, networking, and storage.

  • Support Enterprise, Research & Development activities and engage in POCs/POVs to validate new features, architectures, and upgrade approaches.

  • Deploy monitoring solutions for servers, network, and storage, with a focus on optimising service performance and availability to meet requirements and SLAs.

  • Develop tooling to automate deployment and management of large-scale infrastructure environments, to automate operational monitoring and alerting, and to enable self-service consumption of resources.

  • Create and deliver high-quality documentation, including runbooks, onboarding materials, and best-practice guides for customers and internal teams.

  • Become the technical leader for assigned customer accounts, providing strategic guidance on DevOps and platform architecture and influencing long-term infrastructure and operations decisions.

What we need to see:

  • BS/MS/PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or related fields, with 5+ years of professional experience in managing scalable cloud environments and automation engineering roles.

  • Kubernetes & AI/ML Workloads: Extensive experience with Kubernetes for container orchestration, resource scheduling, scaling, and integration with HPC environments.

  • Cloud & HPC Expertise: Proven understanding of networking fundamentals (the TCP/IP stack), data center architectures, and hands-on experience managing HPC/AI clusters, including deployment, optimization, and troubleshooting.

  • Hardware & Software Knowledge: Familiarity with HPC and AI technologies (CPUs, GPUs, high-speed interconnects) and supporting software stacks.

  • Linux & Storage Systems: Deep knowledge of Linux (Red Hat/CentOS, Ubuntu), OS-level security, and protocols (TCP, DHCP, DNS). Experience with storage solutions such as Lustre, GPFS, ZFS, XFS, and emerging Kubernetes storage technologies.

  • Automation & Observability: Proficiency in Python and Bash scripting, configuration management, and Infrastructure-as-Code tools (e.g., Ansible, Terraform). Experience with observability stacks (Grafana, Loki, Prometheus) for monitoring, logging, and building fault-tolerant systems.

  • Solution Architecture & Customer Engagement: Strong background in crafting scalable solutions and providing consultative support to customers.

Ways to stand out from the crowd:

  • Knowledge of CI/CD pipelines for software deployment and automation.

  • Solid hands-on knowledge of Kubernetes and container-based microservices architectures.

  • Experience with GPU-focused hardware and software (e.g., NVIDIA DGX, CUDA, GPU Operator).

  • Background with RDMA-based fabrics (InfiniBand or RoCE) in HPC or AI environments.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


About NVIDIA

NVIDIA (Public)

A computing platform company operating at the intersection of graphics, HPC, and AI.

Employees: 10,001+
Headquarters: Santa Clara
Valuation: $4.57T

Reviews

Overall: 4.4 (10 reviews)
Work-life balance: 2.8
Compensation: 4.5
Company culture: 4.2
Career growth: 4.3
Management: 3.8
Recommend rate: 78%

Pros

Cutting-edge technology and innovation

Excellent compensation and benefits

Great team culture and collaboration

Cons

High pressure and expectations

Poor work-life balance and long hours

Fast-paced environment leading to burnout

Salary Range

79 data points · Levels: Junior/L3, Mid/L4, Senior/L5

Junior/L3 · Analyst (7 reports)

Total compensation: $170,275
Base salary: $130,981
Stock: -
Bonus: -
$155,480 – $234,166

Interview Reviews

5 reviews · Difficulty: 3.0 / 5

Interview Process

1. Application Review
2. Recruiter Screen
3. Technical Phone Screen
4. Onsite/Virtual Interviews
5. Team Matching
6. Offer

Common Question Types

Coding/Algorithm

System Design

Behavioral/STAR

Technical Knowledge

Past Experience