Description AWS's Trainium chips power the world's largest machine learning training clusters. Our team builds the C++ and SystemC functional models of these custom SoCs - virtual platforms that let software teams start development months...
Description The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation and among the key tools for building Generative AI on AWS. The Inferentia chip delivers best-in-class ML inference performance at ...
Location:
Seattle, WA | 08/03/2026 01:03:31 AM | Salary: Not specified | Company: Amazon
Description The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium...
Description Custom SoCs (System on Chips) are the brains behind AWS's Machine Learning servers. Our team builds C++ & SystemC functional models of these custom-designed accelerator SoCs for use by AWS internal teams to significantly left-...
Description Custom silicon chips live at the heart of AWS Machine Learning servers, and this team builds the backend software that runs these servers. We're looking for someone to lead our SoC (System on Chip) device-driver / HAL (Hardware Abstraction Layer)...
Description AWS Trainium servers are complex supercomputers, with both hardware and software built entirely in-house from the ground up. We're looking for someone to lead our SoC (System on a Chip) Hardware Abstraction Layer (HAL) team. ...