David vonThenen, DigitalOcean, Senior Developer Advocate II, AI/ML
The future of IoT and Edge devices lies in seamlessly integrating voice-assisted AI interfaces with cloud capabilities. This session provides a hands-on exploration of how to architect efficient, voice-driven solutions using Large Language Models (LLMs) while balancing the demands of edge computing and cloud resources.

Key takeaways include:
- Making informed decisions about where to run GPU-intensive AI processes, weighing edge performance against cloud scalability, privacy, and security (a simple placement sketch follows below).
- Designing modular, voice-first AI architectures that adapt to diverse IoT/Edge use cases.
- Practical demonstrations of designs that perform voice-driven AI tasks in hybrid edge-cloud environments.

Attendees will gain practical insight into choosing which workloads to deploy on resource-constrained edge devices versus the cloud, leveraging open-source tools and frameworks. By the end of this session, you’ll leave equipped to build smarter, voice-enabled IoT/Edge systems with architectures that maximize both resource efficiency and innovation.
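The first takeaway, deciding where GPU-intensive inference runs, can be framed as a simple placement policy. The sketch below is a minimal, hypothetical illustration rather than session code: the WorkloadProfile fields, thresholds, and memory heuristic are assumptions, showing how an edge gateway might route a voice request either to a small on-device model or to a cloud-hosted LLM based on privacy, latency, and model-size constraints.

```python
from dataclasses import dataclass

# Hypothetical sketch: route a voice-driven AI request to the edge or the cloud.
# Field names and thresholds are illustrative assumptions, not session material.

@dataclass
class WorkloadProfile:
    model_params_b: float   # model size in billions of parameters
    max_latency_ms: int     # end-to-end latency budget for the voice response
    contains_pii: bool      # does the audio/transcript carry private data?
    edge_gpu_mem_gb: float  # accelerator memory available on the edge device

def choose_placement(w: WorkloadProfile) -> str:
    """Return 'edge' or 'cloud' for a single voice request."""
    # Rough fp16 rule of thumb: ~2 GB of memory per billion parameters.
    fits_on_edge = w.model_params_b * 2 <= w.edge_gpu_mem_gb
    # Privacy-sensitive audio stays on the device whenever the model fits.
    if w.contains_pii and fits_on_edge:
        return "edge"
    # Tight latency budgets favor local inference; otherwise lean on cloud scale.
    if w.max_latency_ms < 300 and fits_on_edge:
        return "edge"
    return "cloud"

if __name__ == "__main__":
    wake_word = WorkloadProfile(model_params_b=0.5, max_latency_ms=150,
                                contains_pii=True, edge_gpu_mem_gb=4)
    summarize = WorkloadProfile(model_params_b=70, max_latency_ms=2000,
                                contains_pii=False, edge_gpu_mem_gb=4)
    print(choose_placement(wake_word))   # -> edge
    print(choose_placement(summarize))   # -> cloud
```

In practice the same decision would also weigh network availability, cost, and model quality, but even a small policy like this makes the edge-versus-cloud trade-off explicit and testable.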