Overview
Microsoft Azure AI Inference platform is the next-generation cloud business positioned to address the growing AI market. We are on the verge of an AI revolution and have a tremendous opportunity to empower our partners and customers to harness the full power of AI responsibly. We offer a fully managed AI Inference platform to accelerate the research, development, and operations of AI-powered intelligent solutions at scale. This team owns the hosting, optimization, and scaling of the inference stack for all Azure AI Foundry models, including the latest from OpenAI, Grok, DeepSeek, and other OSS models.
Do you want to join a team entrusted with serving all internal and external ML workloads and solve real-world inference problems for state-of-the-art large language models (LLMs) and multi-modal Gen AI models from OpenAI and other model providers? We already serve billions of inferences per day across the most cutting-edge AI scenarios in the industry. You will join the AI Core Inferencing team, influencing the overall product, driving new features and platform capabilities from preview to General Availability, and tackling many exciting problems at the intersection of AI and Cloud.
We’re looking for a passionate Software Engineer 2 to drive the design, optimization, and scaling of our inference systems. In this role, you’ll lead engineering efforts to ensure our largest models run efficiently in high-throughput, low-latency environments. You will get to work on and influence multiple levels of the AI Inference data plane stack.
We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities:
- Design and implement core inference infrastructure for serving frontier AI models in production.
- Identify and drive improvements to end-to-end inference performance and efficiency of state-of-the-art LLMs and Gen AI models from OpenAI, Anthropic, and xAI hosted on AI Foundry.
- Design and implement efficient load scheduling and balancing strategies by leveraging key insights and features of the model and workload.
- Scale the platform to support growing inference demand and maintain high availability.
- Deliver critical capabilities required to serve the latest Gen AI models, such as GPT-5, Realtime audio, and Sora, and enable fast time to market for them.
- Drive generic features that cater to the needs of customers such as GitHub, M365, Microsoft AI, and third-party companies.
- Collaborate with internal and external partners.
- Embody Microsoft's Culture and Values.
Qualifications:
Required / Minimum Qualifications:
- Bachelor’s degree in Computer Science or a related technical field AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, or Golang, OR equivalent experience.
Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check. This position requires passing the Microsoft Cloud Background Check upon hire or transfer and every two years thereafter.
Preferred Qualifications:
- Technical background with a solid foundation in software engineering principles, distributed computing, and system architecture.
- Experience working on high-scale, reliable online systems.
- Experience with real-time online services requiring low latency and high throughput.
- Experience working with Layer 7 (L7) network proxies and gateways.
- Knowledge of network architecture and concepts, including HTTP and TCP protocols, authentication, and session management.
- Knowledge of and experience with OSS technologies such as Docker and Kubernetes, and with programming in C++, Golang, or equivalent languages.
- Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers.
- Ability to independently lead projects.
#AIPLATFORM
#AzureAI
#CoreAI
#GenAI
#AIInference
Software Engineering IC3 - The typical base pay range for this role across the U.S. is USD $100,600 - $199,000 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $131,400 - $215,400 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
