Microsoft Foundry on Windows is reshaping how developers build and deploy AI experiences.
Together with Windows ML, it supports seamless on-device deployment of custom models that run efficiently across CPU, GPU, and NPU. This gives developers a unified and flexible platform for bringing advanced AI features directly into Windows applications.
At #MSIgnite, Andrew Leader and Anastasiya Tarnouskaya shared how Windows ML streamlines the AI workflow, making it easier to optimize, fine-tune, and deploy models across a wide range of hardware.
Discover more from #MSIgnite: https://lnkd.in/e2Du3-7P
It'd be ideal if your apps could seamlessly run across CPU, GPU, and NPU across the wide breadth of Windows devices. The reality, though, is that this gets complex if your app wants to leverage the latest from Intel, AMD, Qualcomm, and NVIDIA. You'd have to include the SDKs for each of these IHVs in your app, which could increase your app size by hundreds of megabytes. And if you want to ship with only the necessary dependencies, your installer becomes complicated: you have to detect what device it's running on and pull down the correct dependencies. This is where Windows ML comes in. Windows ML provides a single, system-wide copy of the IHV-specific execution providers, so your app doesn't have to carry them but still gets native-like performance. That copy also updates automatically, so your app will run on the latest devices without you recompiling and redistributing it. Windows ML is built on the popular and familiar ONNX Runtime, which means there's already a wide ecosystem of models and tools that are compatible with Windows ML.
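To make the dependency problem concrete, here's a minimal, hypothetical Python sketch of the logic each app's installer would otherwise have to implement itself: detect the available hardware and pick the matching vendor runtime. Every name here (`VENDOR_SDKS`, `pick_runtime`, the package names) is illustrative, not part of any real SDK or of Windows ML; the point is that Windows ML moves this per-app selection and update burden into the OS.

```python
# Hypothetical sketch of the per-app dependency logic Windows ML removes.
# Without a system-wide copy of execution providers, each installer would
# have to map detected hardware to the right vendor runtime on its own.

# Illustrative vendor-runtime table (package names are made up).
VENDOR_SDKS = {
    "qualcomm_npu": "qnn-runtime",
    "intel_npu": "openvino-runtime",
    "nvidia_gpu": "tensorrt-runtime",
    "amd_gpu": "vitis-runtime",
    "cpu": "default-cpu-runtime",
}

# Preference order: NPUs first, then GPUs, then the CPU fallback.
PREFERENCE = ["qualcomm_npu", "intel_npu", "nvidia_gpu", "amd_gpu", "cpu"]

def pick_runtime(detected: set) -> str:
    """Return the vendor runtime to install for the detected hardware."""
    for accelerator in PREFERENCE:
        if accelerator in detected:
            return VENDOR_SDKS[accelerator]
    return VENDOR_SDKS["cpu"]  # always keep a CPU fallback

if __name__ == "__main__":
    # e.g. a Snapdragon laptop exposing an NPU plus the CPU
    print(pick_runtime({"qualcomm_npu", "cpu"}))  # qnn-runtime
    # e.g. a desktop exposing only an NVIDIA GPU plus the CPU
    print(pick_runtime({"nvidia_gpu", "cpu"}))    # tensorrt-runtime
```

Multiply this by every IHV SDK, every driver update, and every new device generation, and the appeal of a single system-managed copy of the execution providers is clear.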