{"type":"rich","version":"1.0","provider_name":"Transistor","provider_url":"https://transistor.fm","author_name":"Practical AI","title":"Stellar inference speed via AutoNAS","html":"<iframe width=\"100%\" height=\"180\" frameborder=\"no\" scrolling=\"no\" seamless src=\"https://share.transistor.fm/e/e07c06a1\"></iframe>","width":"100%","height":180,"duration":2535,"description":"Yonatan Geifman of Deci makes Daniel and Chris buckle up, and takes them on a tour of the ideas behind his amazing new inference platform. It enables AI developers to build, optimize, and deploy blazing-fast deep learning models on any hardware. Don’t blink or you’ll miss it!\n\nSponsors:\nRudderStack – Smart customer data pipeline made for developers. Connect your whole customer data stack. Warehouse-first, open source Segment alternative.\nSignalWire – Build what’s next in communications with video, voice, and messaging APIs powered by elastic cloud infrastructure. Try it today at signalwire.com and use code SHIPIT for $25 in developer credit.\nFastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com\n\nFeaturing:\nYonatan Geifman – Website, GitHub, X\nChris Benson – Website, GitHub, LinkedIn, X\nDaniel Whitenack – Website, GitHub, X\n\nShow Notes:\nDeci\nAn Introduction to the Inference Stack and Inference Acceleration Techniques\nDeci and Intel Collaborate to Optimize Deep Learning Inference on Intel’s CPUs\nDeciNets: A New Efficient Frontier for Computer Vision Models\nWhite paper\n\nUpcoming Events: Register for upcoming webinars here!","thumbnail_url":"https://img.transistorcdn.com/Ox7ZlyiQOhdDa4Qy1MnJH5WFoksAetrzb40Jo1pePFs/rs:fill:0:0:1/w:400/h:400/q:60/mb:500000/aHR0cHM6Ly9pbWct/dXBsb2FkLXByb2R1/Y3Rpb24udHJhbnNp/c3Rvci5mbS8wMTZi/ZWJmNWIwNDdmYTcw/NGJjMTExZjNjZmYy/M2ZjNS5wbmc.webp","thumbnail_width":300,"thumbnail_height":300}