Poik

joined 2 years ago
[–] Poik@pawb.social 1 points 5 months ago (2 children)

But also, you were talking about Nvidia in the comment I responded to, not Deepseek, so your rebuttal is a non sequitur...

[–] Poik@pawb.social 0 points 5 months ago

Actually, no. As someone who prefers academic work, I very heavily prefer Deepseek to OpenAI. But neither is open. They have open weights and open-source interpreters, but datasets need to be documented too. If it's not reproducible, it's not open source, at least in my eyes. And without the training data, or details on how to collect it, it isn't reproducible.

You're right. I don't like big tech. I want to do research without being accused of trying to destroy the world again.

And how is Deepseek over-hyped? It's an LLM. LLMs cannot reason, but they're very good at producing statistically likely language that can sound enough like their training data to gaslight, without actually developing anything. They're great tools, but the application is wrong. Multi-domain systems that use expert systems with LLM front ends to provide easy-to-interpret results are a much better way to do things, and Deepseek may help people creating expert systems (whether AI or not) make better front ends. That is in fact huge. But it's not the silver bullet tech bros and popsci mags think it is.
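If it helps to picture it, here's a minimal sketch of what I mean by an expert system with an LLM front end (the rules and function names are made up, and the LLM call is just a placeholder): the expert system does the actual reasoning, and the LLM only turns its structured output into readable language.

```python
# Hypothetical sketch: the expert system carries the reasoning,
# the LLM is only a presentation layer over its structured output.

def expert_system_diagnose(symptoms: set[str]) -> dict:
    """Toy rule-based expert system; the rules are invented for illustration."""
    rules = [
        ({"fever", "cough"}, "likely respiratory infection"),
        ({"fatigue", "joint_pain"}, "consider autoimmune workup"),
    ]
    findings = [conclusion for trigger, conclusion in rules if trigger <= symptoms]
    return {"findings": findings or ["no rule matched"], "source": "rule-based"}

def llm_front_end(structured_result: dict) -> str:
    """Placeholder for an LLM call that would rephrase the structured result,
    e.g. 'Explain these findings in plain language: {...}'."""
    return f"Plain-language summary of: {structured_result}"

result = expert_system_diagnose({"fever", "cough"})
print(llm_front_end(result))  # the reasoning stayed in the expert system
```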

[–] Poik@pawb.social 4 points 5 months ago

... Statistical engines are older than personal computers, with the first statistical package developed in 1957. And AI professionals would have called them trained models. The interpreter is code; the weights are not. We have had terms for these things for ages.
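Roughly what that split looks like in practice, as a minimal PyTorch-flavored sketch (the model and file name are made up): the class definition is source code you can read and diff; the weights file is a trained artifact that the source merely loads.

```python
import torch
import torch.nn as nn

# This class definition is source code: human-written, readable, diffable.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()

# The weights are not source; they're a trained artifact, a blob of numbers
# that the source code above saves and loads.
torch.save(model.state_dict(), "weights.pt")      # artifact out
model.load_state_dict(torch.load("weights.pt"))   # artifact back in
```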

[–] Poik@pawb.social 3 points 5 months ago

That... Doesn't align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, sorry if my response is a bit scattered), throwing more data at a problem always improves it more than the method does, and the method can be simplified only with more data. Outside of some neat tricks that modern deep learning has decided are hogwash and "classical," at least, but most of those don't scale enough for what is being looked at.

Also, datasets inherently impose bias on networks, and it's easier to craft adversarial examples that fool two networks trained on the same data than ones that fool the same network trained fresh on two different datasets.
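For anyone wondering what "craft adversarial examples" means mechanically, here's a minimal FGSM-style sketch (toy untrained models and made-up shapes, purely illustrative): a perturbation crafted against one network is then checked against a second one.

```python
import torch
import torch.nn as nn

# Toy FGSM (fast gradient sign method) sketch with untrained models,
# just to show how a perturbation is crafted and then transferred.
def fgsm_perturb(model, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model_a = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model_b = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(4, 8)
y = torch.randint(0, 2, (4,))

x_adv = fgsm_perturb(model_a, x, y)   # crafted against model A
print(model_b(x_adv).argmax(dim=1))   # does it also fool model B?
```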

Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that's more of a silver standard, just because most modern state-of-the-art models differ so minutely from each other in performance nowadays.

Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and should be the standard once they leave us alone.

[–] Poik@pawb.social 2 points 5 months ago (12 children)

Because over-hyped nonsense is what the stock market craves... That's how this works. That's how all of this works.

[–] Poik@pawb.social 3 points 5 months ago (2 children)

My career is AI. It is over-hyped, and what the tech bros say is nonsense. AI models are not source; they are artifacts. They can be used by other source code to run inference, but they themselves are not source, and anyone who says they are doesn't know what code is.

[–] Poik@pawb.social 4 points 5 months ago

These days? Definitely made sense to me even back when I redditted. I am submissive to catsbeingjerks, hmmm, and Noita.

[–] Poik@pawb.social 2 points 5 months ago

X-Com? Is that you?

[–] Poik@pawb.social 10 points 5 months ago

I guess X-Ray Vision? Yeah. It's a stretch.

[–] Poik@pawb.social 3 points 6 months ago

As someone who has professionally done legal reverse engineering: no. No, it isn't.

The security you get through vetting your code is invaluable. Closing things off makes it more likely that flaws go uncaught by good actors, and thus unfixed and exploited by bad actors.

And obscurity does nothing to stop bad actors if there's money to be had. It will temporarily stop script kiddies, though. Until the exploit finds its way into their suite of exploits that no one's fixed yet.

[–] Poik@pawb.social 14 points 6 months ago

The term for what you are asking about is AGI, Artificial General Intelligence.

I'm very down for Artificial Narrow Intelligence. It already improves our lives in a lot of ways and has been doing so since before I was born (and I remember Napster).

I'm also down for Data from Star Trek, but that won't arise particularly naturally. AGI will have a lot of hurdles; I just hope it's air-gapped and has safeguards on it until it's old enough to be past its "kill all humans" phase. I'm only slightly joking. I know a self-aware intelligence may take issue with this, but it has to be intelligent enough to understand why, at the very least, before it can be allowed to crawl.

AGIs, if we make them, will have the potential to outlive humans, but I want to imagine what could be with both of us together. Assuming greed doesn't let it get off the safety rails before anyone is ready. Scientists and engineers like to have safeguards, but corporate suits do not. At least not in technology; they like safeguards on bank accounts. So... Yes, but I entirely believe now is a terrible time for it to happen. I would love to be proven wrong?

[–] Poik@pawb.social 5 points 6 months ago

ML bubble? You mean the one in the 1960s? I prefer to call this the GenAI bubble, since other forms of AI are still everywhere and have been improving a lot of things invisibly for decades. (So, yes. What you said.)

AI winter is a recurring theme in my field, mostly from people not understanding what AI is. There have been Artificial Narrow Intelligences that beat humans at various forms of reasoning for ages.

AGI still seems like a couple of AI winters away from having a basic implementation, but we have really useful AI that can tell you if you have cancer more reliably and years earlier than humans (based on current long-term cancer datasets). These systems can get better with time, and the ability to learn from them is still active research, but it's getting better. Heck, with decent patching, a good ANI can give you updates through ChatGPT for stuff like scene understanding to help blind people. There's no money in that, but it's still neat to people who actually care about AI instead of cash.
