Counted the LLMs on my Pi5; I now have 26. Not all of them work, and running even one uses a lot of CPU grunt.
Was thinking of a cluster of Pi5s, each running a different LLM?
But just about any NPU/GPU is going to be faster than the Pi5's ARM cores.
How would you make a super-cheap cluster of LLMs, and on what hardware?
A Pi5 running a smart, fast LLM is nearly usable.
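The "each Pi5 serves a different model" idea could be sketched as a tiny router that maps model names to nodes. This is only a sketch under assumptions: the hostnames, port, and model names below are hypothetical, and it assumes each Pi runs an OpenAI-compatible server (e.g. llama.cpp's llama-server) so every node exposes the same API path.

```python
# Hypothetical sketch: one Pi5 per model, each running an
# OpenAI-compatible LLM server (e.g. llama.cpp's llama-server).
# Hostnames, port, and model names are made up for illustration.

NODES = {
    "phi-2":      "http://pi5-a.local:8080",
    "tinyllama":  "http://pi5-b.local:8080",
    "qwen2-1.5b": "http://pi5-c.local:8080",
}

def endpoint_for(model: str) -> str:
    """Return the completion endpoint for the node serving `model`."""
    base = NODES[model]  # raises KeyError if no node serves this model
    return f"{base}/v1/completions"
```

A front-end box (or one of the Pis) would then POST each prompt to `endpoint_for(model)`, so the cluster looks like a single multi-model server from the outside.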
Statistics: Posted by Gavinmc42 — Thu May 09, 2024 9:52 pm