ExecuTorch 1.0, officially launched on October 22, enables the deployment of AI applications on a wide range of edge devices, including mobile, desktop, and embedded systems. The framework targets multiple hardware backends, including CPUs, GPUs, and NPUs, and runs on platforms such as iOS and Android. By enabling on-device AI deployment, ExecuTorch sharply reduces cloud dependency, allowing for faster, near real-time responses while enhancing privacy by keeping data local.
Developers can use ExecuTorch for a range of workloads, including Large Language Models (LLMs), vision-language models, image segmentation, and audio processing, without model conversion or code rewrites. Already deployed in services such as Instagram and WhatsApp, it powers on-device AI features for billions of users. Earlier approaches required converting models between frameworks, which could introduce problems such as numerical inconsistency; ExecuTorch removes that bottleneck, letting developers build optimized AI applications with familiar PyTorch tools. The 1.0 release, which builds on the beta version released last year, marks a significant step forward for the on-device AI ecosystem.
👉 Read the original: CIO Magazine