Introduction to local AI models (with AMD's Anush Elangovan)
James Governor sits down with Anush Elangovan, VP of AI at AMD, to dig into the fast-evolving world of local LLMs, edge hardware, and the future of AI-powered developer experience.
They cover:
Why local LLMs are exploding in adoption
Privacy, data sovereignty, and on-device intelligence
Running 120B-parameter models on AMD Strix Halo laptops
The software stack Anush uses: ROCm, Llama.cpp, Ollama, ComfyUI, PyTorch & more
How AMD is approaching AI-driven developer tools, coding agents, and predictive ops
The role of custom silicon, inference-optimized chips, and next-gen laptops
What ROCm actually is (and why developers should care)
If you're curious about the future of on-device AI, developer workflows, or AMD's AI strategy, this episode is a must-listen.
This podcast was sponsored by AMD.