As a core member of the team, you will play a pivotal role in leading a high-performing team to build a suite of optimized kernels and highly optimized inference stacks for a variety of state-of-the-art transformer models (e.g., Llama-3, Llama-4, Deepseek-R1, Qwen-3, and Stable Diffusion 3). You will be responsible for managing and scaling this team to pioneer novel model mapping strategies while co-designing inference-time algorithms (e.g., speculative and parallel decoding, and prefill-decode disaggregation).
Job Responsibilities:
Architect Best-in-Class Inference Performance on Sohu: Deliver continuous batching throughput exceeding B200 by ≥10x on priority workloads
Develop Best-in-Performance Inference Mega Kernels: Develop complex, fused kernels that increase chip utilization and reduce inference latency, and validate these optimizations through benchmarking and regression testing in production pipelines
Architect Model Mapping Strategies: Develop system-level optimizations using a mix of techniques such as tensor parallelism and expert parallelism for optimal performance
Build Scalable Team and Roadmap: Grow and retain a team of high-performing inference optimization engineers
Cross-Functional Performance Alignment: Ensure inference-stack and performance goals are aligned with the software infrastructure, GTM, and hardware teams for future generations of our hardware
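The model mapping work above leans on techniques like tensor parallelism. As a minimal, hedged sketch of the idea (pure Python standing in for multi-device execution; real stacks shard weights across accelerators and all-gather with collectives), column-parallel tensor parallelism splits a weight matrix column-wise so each device computes a slice of the output:

```python
def matmul(x, w):
    """Multiply x (m x k) by w (k x n), both given as lists of lists."""
    return [[sum(xi * wij for xi, wij in zip(row, col)) for col in zip(*w)] for row in x]

def shard_columns(w, num_shards):
    """Split weight matrix w column-wise into num_shards equal pieces."""
    cols = list(zip(*w))
    per = len(cols) // num_shards
    return [[list(r) for r in zip(*cols[i * per:(i + 1) * per])] for i in range(num_shards)]

def tensor_parallel_matmul(x, w, num_shards):
    # Each "device" multiplies its column shard, producing a slice of the output.
    partials = [matmul(x, shard) for shard in shard_columns(w, num_shards)]
    # Concatenate the output slices along the column axis (the all-gather step).
    return [sum((p[i] for p in partials), []) for i in range(len(x))]

x = [[1.0, 2.0], [3.0, 4.0]]
w = [[1.0, 0.0, 2.0, 1.0], [0.0, 1.0, 1.0, 2.0]]
# Sharded execution reproduces the unsharded result exactly.
assert tensor_parallel_matmul(x, w, 2) == matmul(x, w)
```

The same decomposition applies per-layer in a transformer, which is why interconnect bandwidth (for the gather/reduce steps) shows up alongside compute in mapping decisions.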
Requirements:
Experience designing and optimizing GPU kernels for deep learning using CUDA and assembly (ASM)
Experience with low-level programming to maximize performance for AI operations, leveraging tools like Composable Kernel (CK), CUTLASS, and Triton for multi-GPU and multi-platform performance
Deep fluency with transformer inference architecture, optimization levers, and full-stack systems (e.g., vLLM, custom runtimes)
History of delivering tangible performance wins on GPU hardware or custom AI accelerators
Solid understanding of roofline models of compute throughput, memory bandwidth, and interconnect performance
Experience running large-scale AI workloads on heterogeneous compute clusters, optimizing for efficiency and scalability
Scopes projects crisply, sets aggressive but realistic milestones, and drives technical decision-making across the team
Anticipates blockers and shifts resources proactively
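The roofline model mentioned above reduces to one line: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch, using illustrative numbers rather than any vendor's specs:

```python
def roofline_flops(peak_tflops: float, bandwidth_tbps: float, intensity_flop_per_byte: float) -> float:
    """Attainable TFLOP/s for a kernel with the given arithmetic intensity.

    Below the "ridge point" the kernel is memory-bound (limited by
    bandwidth * intensity); above it, compute-bound (limited by peak FLOP/s).
    """
    return min(peak_tflops, bandwidth_tbps * intensity_flop_per_byte)

# Hypothetical accelerator: 1000 TFLOP/s peak, 3 TB/s HBM bandwidth.
# An elementwise op (~0.25 FLOP/byte) is bandwidth-limited; a large GEMM
# (~400 FLOP/byte) hits the compute roof.
print(roofline_flops(1000, 3, 0.25))  # 0.75 (memory-bound)
print(roofline_flops(1000, 3, 400))   # 1000 (compute-bound)
```

This is why kernel fusion matters for inference: fusing ops raises arithmetic intensity, pushing memory-bound workloads toward the compute roof.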
Nice to have:
Experience with implementation of state-of-the-art reasoning and chain-of-thought models at production scale
Experience with implementation of newer AI compute operations on hardware (e.g., flash attention, long-context attention variants and alternatives)
Analyzed and implemented strategies such as KV-cache offloading for efficient compute resource management
Familiarity with linear algebra (e.g., matrix decomposition, alternative bases for vector spaces, and matrix rank and its implications)
Managed lean, high-performing engineering teams and drove execution on timelines with high quality outcomes
What we offer:
Medical, dental, and vision packages with generous premium coverage
$500 per month credit for waiving medical benefits
Housing subsidy of $2k per month for those living within walking distance of the office
Relocation support for those moving to San Jose (Santana Row)
Various wellness benefits covering fitness, mental health, and more