Description AWS's Trainium and Inferentia chips power the world's largest machine learning clusters. Our team builds virtual platforms - full-system C++ models of these custom SoCs - that let software teams start development months before...
Description AWS's Trainium chips power the world's largest machine learning training clusters. Our team builds the C++ and SystemC functional models of these custom SoCs - virtual platforms that let software teams start development months...
Description The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainiu...
Description AWS Machine Learning accelerators are at the forefront of AWS innovation. The Trainium chip delivers industry-leading ML inference and training performance at the lowest cost in the cloud. This is enabled by edge software stac...
Location: Seattle, WA | 19/03/2026 01:03:37 AM | Salary: Not specified | Company: Amazon
Description The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation and one of several AWS tools used for building Generative AI on AWS. The Inferentia chip delivers best-in-class ML inference performance at ...
Location: Seattle, WA | 08/03/2026 01:03:31 AM | Salary: Not specified | Company: Amazon
Description We're seeking a Safety Innovation Laboratory Lab Manager to join the Global Safety Engineering team... oriented background in the management of research equipment, managing laboratory safety programs, running lab facilities...
Location: Kent, WA | 18/02/2026 03:02:47 AM | Salary: Not specified | Company: Amazon
Description The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainiu...
Description The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation. The Inferentia chip delivers best-in-class ML inference performance at the lowest cost in the cloud. Trainium delivers the best-in-class ML tra...
Description Custom SoCs (Systems on Chips) are the brains behind AWS's Machine Learning servers. Our team builds C++ and SystemC functional models of these custom-designed accelerator SoCs for use by AWS internal teams to significantly left-...
Description Custom silicon chips live at the heart of AWS Machine Learning servers, and this team builds the backend software that runs these servers. We're looking for someone to lead our SoC (System on Chip) device-driver / HAL (Hardwar...