Hello,
I've started working on a project with a mobile robot platform based on a Raspberry Pi 3B+ (quad-core 1.4 GHz, 1 GB RAM) running Raspbian. I am researching ways to run a vision model (e.g. LLaVA) on the on-board cameras to perform scene understanding. I am considering a cloud solution but would prefer local deployment. I don't have experience with AI HATs, so I am wondering whether one is a good candidate for this application. LLaVA models range from 7 to 35 billion parameters, which makes them computationally expensive: even with 8-bit quantisation I would need roughly 7 to 35 GB of VRAM on a GPU. How does an AI HAT fit into this? Thanks in advance for any input; it's all new to me, so any guidance will be much appreciated.
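To make the 7–35 GB figure concrete, here is a quick back-of-envelope sketch of weight memory versus quantisation bit width (weights only; it ignores activations, KV cache, and runtime overhead, so real requirements are somewhat higher):

```python
# Rough estimate of weight storage for LLaVA-class models under
# quantisation. 8-bit = 1 byte per parameter, so N billion params
# need roughly N GB just for the weights.

def weight_memory_gb(params_billion: float, bits: int = 8) -> float:
    """Approximate weight storage in GB for a model with the given
    parameter count (in billions) and quantisation bit width."""
    bytes_per_param = bits / 8
    return params_billion * bytes_per_param

for size in (7, 13, 35):
    print(f"{size}B params @ 8-bit: ~{weight_memory_gb(size):.0f} GB")
    print(f"{size}B params @ 4-bit: ~{weight_memory_gb(size, bits=4):.0f} GB")
```

Even at 4-bit, the smallest 7B model needs ~3.5 GB for weights alone, well beyond the Pi 3B+'s 1 GB of RAM.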
Statistics: Posted by kinberra — Fri Aug 02, 2024 11:56 am