this post was submitted on 20 May 2024
65 points (98.5% liked)
Funhole
It's pretty good for AI tasks at the edge, e.g. fully local image recognition.
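For example, a fully local classifier can stay really small. Here's a rough sketch in Python using ONNX Runtime - the model file (mobilenetv2.onnx), the labels file, and the 224x224 input shape are just assumptions for illustration, not tied to any particular board:

```python
# Rough sketch of a fully local image-recognition pipeline.
# Assumptions: a MobileNetV2 classifier exported to "mobilenetv2.onnx",
# a 224x224 RGB input, and a "labels.txt" with one class name per line.
import numpy as np
import onnxruntime as ort
from PIL import Image

def classify(image_path: str) -> str:
    # Preprocess: resize, scale to [0, 1], normalize, reorder to NCHW.
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    x = x.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

    # Inference runs entirely on-device; no network calls involved.
    sess = ort.InferenceSession("mobilenetv2.onnx")
    input_name = sess.get_inputs()[0].name
    logits = sess.run(None, {input_name: x})[0]

    labels = open("labels.txt").read().splitlines()
    return labels[int(np.argmax(logits))]

print(classify("cat.jpg"))
```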
I'm edging right now just thinking about it
Ahh gotcha, similar to Google's Coral? Neat.
I've recently been looking into locally hosting some LLMs for various purposes, but I haven't specced out hardware yet. Any good resources you can recommend?
Kind of - it's a standalone system with the AI hardware integrated on the board, kinda like a Google Coral paired with a Raspberry Pi.
Not really, sorry - I haven't gone too deep into LLMs beyond simple use cases. I've only really used llama.cpp myself.
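If it helps, here's roughly what my simple use cases look like through the llama-cpp-python bindings - just a sketch, assuming you've already downloaded a GGUF model locally (the path below is a placeholder):

```python
# Rough sketch of simple local llama.cpp usage via the llama-cpp-python bindings.
# The model path is a placeholder; point it at any GGUF file you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.gguf",  # placeholder path, not a real file
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm(
    "Q: What is edge computing in one sentence? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"].strip())
```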
A dedicated NVIDIA GPU in a random x86 PC is a lot faster and more price-efficient than a Jetson.
If it isn't about the form factor, the Jetson is not a great contender.