this post was submitted on 11 Sep 2023
154 points (92.8% liked)


The Inventor Behind a Rush of AI Copyright Suits Is Trying to Show His Bot Is Sentient::Stephen Thaler's series of high-profile copyright cases has made headlines worldwide. He’s done it to demonstrate his AI is capable of independent thought.

all 49 comments
[–] j4k3@lemmy.world 114 points 1 year ago* (last edited 1 year ago) (7 children)

What stupid bullshit. There is nothing remotely close to an artificial general intelligence in a large language model. This person is a crackpot fool. There is no way for an LLM to have persistent memory. Everything outside of the model that pre- and post-processes information is where the smoke and mirrors exist. This is just databases and standard code.

The actual model is just a system of categorization and tensor math. It is complex vector math. That is it. There is nothing else going on inside the model. If you want to modify it, you need to recalculate a bunch of math as it relates to the existing vector/tensor tables. All of this math is static. It can't change. It can't adapt. It can't plan. It has some surprising features that one might not expect to be embedded in human language alone, but that is all this is.

Try offline, open source AI. Use Oobabooga, get models from Hugging Face, and start with something like a Llama2 7B. This is not hard. You do not need a graphics card. There are lots of models that work great on just a CPU, though you will need a good amount of RAM to run a really good model. A 7B is like talking to a teenager prone to lying, a 13B is like a 20 year old, and a 30B at 8-bit quantization is like an inexperienced late twenty-something. A 70B at 4-bit quantization is like a 30 year old with a master's degree, but it needs around 14+ logical CPU cores and 64GB of system memory to generate around 2 tokens per second. That is roughly 1-2 words per second, which is about as slow as is practical.
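As a rough illustration of how little setup this takes, here is a minimal sketch using the llama-cpp-python bindings instead of a full UI like Oobabooga. The model filename and generation settings are assumptions; substitute whichever GGUF-quantized model you actually downloaded from Hugging Face.

```python
# Minimal CPU-only inference sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes a 4-bit GGUF build of a Llama 2 7B model downloaded from Hugging Face.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,      # context window size
    n_threads=8,     # roughly match your CPU core count
)

result = llm(
    "Q: Explain, in one paragraph, what quantization does to a language model.\nA:",
    max_tokens=256,
    temperature=0.7,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```

Everything runs locally; the only variables that really matter are the model file you point it at and how much RAM you can spare.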

Don't believe anything you read in the bullshit media about AI right now, and ignore the proprietary stalkerware garbage. The open source, offline AI world is the future, and it is yours to do with as you please. Try it! It is fun.

[–] CatWhoMustNotBeNamed@geddit.social 19 points 1 year ago (4 children)

Wow, that's one of the most concrete, down-to-earth explanations of what everyone is calling AI that I've seen. Thanks.

I'm technical, but haven't found a good article explaining today's AI in a way I can grasp well enough to help my non-technical friends and family. Any recommendations? Maybe something you've written?

[–] db2@sopuli.xyz 8 points 1 year ago (2 children)

It would be funny if that comment was AI generated.

[–] solstice@lemmy.world 5 points 1 year ago

I read once that we shouldn't be worried when AI starts passing Turing tests; we should worry when they start failing them again 🤣

[–] Kolrami@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

I read a physical book about using ChatGPT that I'm pretty sure was written by ChatGPT.

Sidenote: you don't need to read a book about using ChatGPT.

[–] dave@feddit.uk 6 points 1 year ago (1 children)

I’ve had the most success explaining LLM ‘fallibility’ to non-techies using the image generation examples. Google ‘AI hands’, and ask them if they see anything wrong. Now point out that we’re _extremely_ sensitive to anything wrong with our hands, so these are very easy for us to spot. But the AI has no concept of what a hand is; it’s just seen a _lot_ of images from different angles, sometimes with fingers hidden, sometimes intertwined, etc. So it will happily generate lots more of those kinds of images, with no regard to whether they could or should actually exist.

It’s a pretty similar idea with the LLMs. It’s seen a lot of text, and can put together words in a convincing-looking way. But it has no concept of what it’s writing, and the equivalent of the ‘hands’ will be there in the text. It’s just that we can’t see them at first glance like we can with the hands.

Nice comparisons. Will add that to my explanations.

Thanks!

[–] j4k3@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yann LeCun is the main person behind open source offline AI, as far as putting the pieces in place and the events that led to where we are now. Maybe think of him as the Dennis Ritchie or Stallman of AI research. https://piped.video/watch?v=OgWaowYiBPM

I am not the brightest kid in the room. I'm just learning this stuff in practice and sharing some of what I have picked up thus far. I hit a wall when it comes to things like understanding tensors of rank 3 or greater, and I still can't figure out exactly how the categorization network is implemented. I think that last one has to do with transformers and some efficient way of rotating vectors, but I haven't figured it out intuitively yet. Thanks for the compliment, though.

[–] CatWhoMustNotBeNamed@geddit.social 1 points 1 year ago* (last edited 1 year ago)

Oh crap, you already done lost me in the second half there, but I'll give the link a watch.

Thanks again!

[–] beigeoat@110010.win 7 points 1 year ago (1 children)

This, plus any LLM is incapable of critical thinking. It can imitate it to the point where people might think it's able to, but that's just because it has seen the answers to the problems people are asking during the training process.

[–] fidodo@lemm.ee 5 points 1 year ago

It's basically a book you can talk to. A book can contain incredible knowledge, but it's a preserved artifact of intelligence, not intelligence itself.

[–] hedgehog@ttrpg.network 5 points 1 year ago* (last edited 1 year ago) (2 children)

What stupid bullshit. There is nothing remotely close to an artificial general intelligence in a large language model.

Correct, but I haven’t seen anything suggesting that DABUS is an LLM. My understanding is that it’s basically made up of two components:

  1. An array of neural networks
  2. A supervisor component (which its creator calls a “thalamobot”) that manages those networks and notices when they’ve come up with something worth exploring further. The supervisor component can direct the neural networks as well as trigger other algorithms.

EDIT: This article is the best one I’ve found that explains how DABUS works. See also this article, which I read when first writing this comment.

Other than using machine vision and machine hearing (“acoustic processing algorithms”) to supervise the neural networks, I haven’t found any description of how the thalamobot functions. Machine vision / hearing could leverage ML but might not, and either way I’d be more interested in how it determines what to prioritize / additional algorithms to trigger rather than how it integrates with the supervised system.

This person is a crackpot fool.

As far as I can tell, probably, but not necessarily.

There is no way for an LLM to have persistent memory. Everything outside of the model that pre- and post-processes information is where the smoke and mirrors exist. This is just databases and standard code.

Ignoring Thaler’s claims, theoretically a supervisor could be used in conjunction with an LLM to “learn” by re-training or fine-tuning the model. That’s expensive and doesn’t provide a ton of value, though.

That said, a database / external process for retaining and injecting context into an LLM isn’t smoke and mirrors when it comes to persistent memory; the main difference compared to re-training is that the LLM itself doesn’t change. There are other limitations, too. But if I have an LLM that can handle an 8k token context, where the first 4k is used (including during training) to inject summaries of situational context and of currently relevant topics/concepts, and the last 4k is used like traditional context, then that gives you a lot of what persistent memory would offer. Combine that with the ability for the system to re-train as needed to assimilate new knowledge bases and you’re all the way there.
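A minimal sketch of that split-context idea, purely to make the mechanism concrete. The stored summaries, the token budgets, and the rough 4-characters-per-token estimate are all assumptions standing in for a real memory store and tokenizer.

```python
# Sketch of "persistent memory" via context injection, not re-training.
# Half the context window carries injected summaries; the other half is live chat.

MEMORY_BUDGET = 4096   # tokens reserved for injected summaries
CHAT_BUDGET = 4096     # tokens reserved for the live conversation

def rough_tokens(text: str) -> int:
    # Crude estimate: roughly 4 characters per token for English text.
    return len(text) // 4

def build_prompt(memories: list[str], recent_turns: list[str], user_msg: str) -> str:
    # Fill the first half of the context with stored summaries, newest first,
    # until the memory budget is exhausted.
    injected, used = [], 0
    for memory in reversed(memories):
        cost = rough_tokens(memory)
        if used + cost > MEMORY_BUDGET:
            break
        injected.insert(0, memory)   # keep chronological order of what we keep
        used += cost

    # The second half is ordinary chat context.
    convo = "\n".join(recent_turns + [f"User: {user_msg}", "Assistant:"])
    return "Known background:\n" + "\n".join(injected) + "\n\n" + convo
```

The point is that nothing here touches the model's weights; the "memory" lives entirely in what gets placed in front of the model each turn.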

That’s still not an AGI or even an attempt at one, of course.

[–] j4k3@lemmy.world 2 points 1 year ago (1 children)

Just talking hypothetically, I think it may be possible to make an actual AGI with an LLM base plus a threaded interpreted language like Forth. If it were integrated into the model, it might be able to add network layers, like a LoRA, in real time, or at least within an average prompt-to-response time. The nature of Forth makes it possible to sidestep issues with code syntax, since a single token or two could trigger a Forth program of any complexity. I can imagine a scenario where Forth is fully integrated and able to modify the network with more than just LoRAs and embeddings, but I'm no expert; just a hobbyist. I fully expect any major breakthrough will come from white paper research, not from someone using hype-media nonsense and grandstanding for a spotlight. It will not involve external code.

Tacking systems together with databases is not what I would call a human-brain analog or AGI. I expect a plastic network with self-modifying behavior in near real time, along with the ability to expand or arbitrarily alter any layer. It would also require a self-test mechanism and a bookmarking system to roll back any unstable or unexpected behavior using self-generated tests.

[–] hedgehog@ttrpg.network 3 points 1 year ago (1 children)

Tacking systems together with databases is not what I would call a human-brain analog or AGI.

Agreed, and either of those is more than a system with persistent memory.

I expect a plastic network with self-modifying behavior in near real time, along with the ability to expand or arbitrarily alter any layer. It would also require a self-test mechanism and a bookmarking system to roll back any unstable or unexpected behavior using self-generated tests.

I think it would be wise for such a system to have a rollback mechanism, but I don’t think it’s necessary for it to qualify as a human brain analog or AGI - I don’t have the ability to roll back my brain to the way it was yesterday, for example, and neither does anyone I’ve ever heard of.

self-modifying behavior in near real time

I don’t think this is realistic or necessary, either. If I want to learn a new, non-trivial skill, I have to practice it, generally over a period of days or longer. I would expect the same from an AI.

Sleeping after practicing / studying often helps to learn a concept or skill. It seems to me that this is analogous to a re-training / fine-tuning process that isn’t necessarily part of the same system.

[An AGI] will not involve external code.

It’s unclear to me why you say this. External, traditional code is necessary to link multiple AI systems together, like a supervisor and a chatbot model, right? (Maybe I’m missing how this is different from invoking a language from within the LLM itself - I’m not familiar with Forth, after all.) And given that human neurology is basically composed of multiple “systems” - left brain, right brain, frontal lobe, our five senses, etc. - why wouldn’t we expect the same to be true for more sophisticated AIs? I personally expect there to be breakthroughs if and when an AI trained on multi-modal data (sight + sound + touch + smell + taste + feedback from your body + anything else of relevance) is built (e.g., by wiring up people with sensors to pull down that data), and I believe that models capable of interacting with that kind of training data would comprise multiple systems.

At minimum, you currently need an external system wrapped around the LLM to emulate “thinking,” which, as I understand it, is something ChatGPT already does (or did) to an extent. I think this is currently just a “check your work” kind of loop, but a more sophisticated supervisor / AI consciousness could be much more capable.
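As a purely illustrative sketch of what a basic “check your work” loop looks like: generate() and critique() here are hypothetical placeholders for model calls, not anything ChatGPT actually exposes.

```python
# Rough sketch of a "check your work" supervisor loop around an LLM.
# generate() and critique() are hypothetical placeholders for whatever model
# calls you use; this is a conceptual outline, not a real product's mechanism.

def generate(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM")

def critique(prompt: str, draft: str) -> str:
    raise NotImplementedError("replace with a second model call; return '' if the draft looks fine")

def supervised_answer(prompt: str, max_rounds: int = 3) -> str:
    draft = generate(prompt)
    for _ in range(max_rounds):
        problems = critique(prompt, draft)
        if not problems:
            break
        # Feed the critique back in and ask for a revision.
        draft = generate(
            f"{prompt}\n\nPrevious attempt:\n{draft}\n\nFix these issues:\n{problems}"
        )
    return draft
```

A more capable supervisor would decide *what* to check and *which* other systems to invoke, rather than just looping a fixed number of times.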

That said, I would expect an AGI to be able to leverage databases in the course of its work, much the same way that Bing can surf the web now or ChatGPT can integrate with Wolfram — separate from its own ability to remember, learn, and evolve.

[–] j4k3@lemmy.world 1 points 1 year ago

I think the fundamental difference in our perspectives is that I want to see neural expansion capabilities that are not limited by a static state and dedicated compilation. I think this is the only way to achieve a real AGI. If the neural network is static, ultimately you have a state machine with a deterministic output. It can be ultra complex for sure, but it is still deterministic. I expect an AGI to have expansion in any direction at all times according to circumstances and needs; aka adaptability beyond any preprogrammed algorithms.

Forth is very old, and it comes from an era when most compute hardware was tailor-made. It was originally created as a way to get professional astronomy observatories online much more quickly. The fundamental concept of Forth is to create the simplest possible looping interpreter on any given system, using assembly or any supported API. The interpreter can then build on the Forth dictionary of words. Words are the fundamental building block of Forth. They can be anything from a pointer to a variable, or a function, to an entire operating system and GUI. Anything can be assigned to a word, and a word can be any combination of data, types, and other words. The syntax is extremely simple. It is a stack-based language that is very close to the bare metal. It is so simple and small that there are versions of Forth that run on tiny old 8-bit AVRs and other microcontrollers.
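To make the “dictionary of words” idea concrete, here is a toy sketch in Python rather than real Forth (a genuine Forth is an interpreter built up from assembly, so treat this strictly as an analogy):

```python
# Toy illustration of the Forth "dictionary of words" idea.
# A word is just a dictionary entry; new words are defined from existing ones.

stack = []
dictionary = {
    "dup": lambda: stack.append(stack[-1]),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    ".":   lambda: print(stack.pop()),
}

def step(tok: str):
    if tok in dictionary:
        dictionary[tok]()          # execute a known word
    else:
        stack.append(int(tok))     # anything unknown is treated as a number

def interpret(source: str):
    tokens = source.split()
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == ":":             # define a new word from existing words
            name = tokens[i + 1]
            end = tokens.index(";", i)
            body = tokens[i + 2:end]
            dictionary[name] = lambda b=body: [step(t) for t in b]
            i = end
        else:
            step(tok)
        i += 1

interpret(": square dup * ;  5 square .")   # prints 25
```

The only thing the interpreter ever does is look tokens up in the dictionary and execute them, which is why a word can stand in for anything from a number to an entire program.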

Anyway, a threaded interpreter like Forth could be made to compile tensor layers. The API for the network would be part of the Forth dictionary. Another key aspect of Forth is that the syntax for creating new words is so simple that a word can be made which generates the required formatting. This could make it possible for a model to provide any arbitrary data for incorporation/modification and allow Forth to attempt to add it into the network in real time. It could also be used to modify specific tensor weights when a bad output is flagged by the user and a correction is provided.

If we put aside text formatting, settings, and user interface elements, the main reason an LLM needs external code for interfacing is the propensity for errors due to syntax complexity with languages like Python or C. No model can reliably generate complex code suitable for its own execution internally without intervention. Forth is so flexible that a dictionary could even be a tensor table of weights, with words as the values. Forth is probably the most anti-standards, anti-syntax language ever created.

Conceptually, the interpreter is like a compiler, command line, task scheduler, and init/process manager all built into one ultra-simple system. Words are built up from the registers, flags, and interrupts to anything of arbitrary complexity. A model does not need this low-level interface with compute hardware, but that is not my point. Models are built on tensors and tokens. Forth could be made to speak these natively and in near real time, as prompted internally and without compilation; a true learning machine. Most Forth implementations also have an internal bookmarking system that allows the dictionary to roll back to a known good state when encountering errors in newly created words.

A word of warning: full implementations like ANS Forth or Gforth are intimidating at first glance. It is far better to look at something like FlashForth for microcontrollers to see the raw power of the basic system without the giant dictionaries present in modern desktop implementations.

The key book on the concepts behind Forth and threaded interpretive languages is here: https://archive.org/details/R.G.LoeligerThreadedInterpretiveLanguagesTheirDesignAndImplementationByteBooks1981

[–] OrteilGenou@lemmy.world 0 points 1 year ago

Plus the marketing writes itself

Don't miss DABUS!

[–] Plibbert@lemmy.ml 4 points 1 year ago

Yup yup my guy. This is looking like just another ploy for companies and people to be able to patent and copyright everything under the fucking sun.

[–] Mr_Blott@feddit.uk 3 points 1 year ago (1 children)

This is the thing: what do you do with it? I can't imagine it being able to do anything a human couldn't do better

[–] j4k3@lemmy.world 7 points 1 year ago (1 children)

It is much faster than Stack Overflow for code snippets. The user really needs basic skepticism about all outputs, even with an excellent model, but a basic 70B Llama2 can generate decent Python code. When it makes an error, pasting that error into the prompt will almost always generate a fix. This only applies to short, single-operation tasks, but it is super useful if you already know the basics of code, like variables, types, and branching constructs. It can explain APIs and libraries too.

The real value comes from integrating databases and other AI models. I currently have a combination I can talk to with a mic, and it can reply as an audio clip with an LLM generating the reply text. I'm working on integrating a database to help teach myself the computer science curriculum using free materials and a few books. Individualized education is a major application. You can also set it up as a friend, a professional colleague, or a counselor, or ask it medical questions. There is a lot of effort going into getting accurate models for fields like medicine, where they can provide citations.

Even with sketchy information from basic models, they will still generate terms and hints that you can search in a regular search engine to find new information in many instances. This will help you escape the search engine echo chambers that are so pervasive now. Heck, I even asked the 70B about meat smoker heat and timing settings, and it made better suggestions than several YouTube examples I watched and tried. I needed an industrial adhesive a couple of weeks ago and found nothing searching Google and Bing, but the 70B gave me 4 valid products out of 6 suggestions. After plugging those into search, suddenly the search engines knew of thousands of results for what I was looking for. I honestly didn't expect it to be as useful as it really is. I turn on my computer and start the 70B first thing every day. It unloads itself from memory while idle, but I'm constantly asking it stuff. I go many days without even going online from my workstation.

[–] projectmoon@lemm.ee 3 points 1 year ago (1 children)

Are you using ooga booga? What specs does your system have?

[–] j4k3@lemmy.world 4 points 1 year ago

I do use Oobabooga a lot. I am developing my own scripts and modifying some of Oobabooga too. I also use Koboldcpp. I am on a 12th-gen i7 with 20 logical cores and 64GB of system memory, along with a 3080Ti with 16GB of VRAM. The 70B 4-bit quantized model, running with 14 layers offloaded onto the GPU, generates 3 tokens a second. So it is 1.5 times faster than on the CPU alone.

If I were putting together another system, I would only get something with AVX-512 instruction support in the CPU. That instruction set has had some troublesome CVE issues, so you'll probably need to look into it depending on your personal privacy/security threat model. The ability to run larger models is really important, so you really want all the RAM. The answer to the question of how much is always yes. You are not going to get enough memory using consumer GPUs; you can only offload a few layers onto a consumer-grade GPU. I can't say how well models even larger than the 70B perform, as memory becomes the bottleneck, and I can't even say how a 30B or larger runs at full precision, because I can't add any more memory to my system.

As a rule of thumb, running a full model requires roughly twice its parameter count in gigabytes of RAM just to load. So a 30B will require around 60GB of memory. Most of these models are float-16, so running them at 8-bit cuts the size in half, with penalties in areas like accuracy, and running at 4-bit halves the size again. There is tuning, bias, and asymmetry in the way quantization is done to preserve certain aspects, like emergent phenomena in the original data. This is why a larger model with a smaller quantization may outperform a smaller model running at full precision.

For GPUs, if you are at all serious about this, you need at least 16GB of VRAM at a bare minimum. Really, we need to see a decently priced 40-80GB VRAM consumer option. The thing is that GPU memory is directly tied to the compute hardware; there isn't the overhead of a memory management system like system memory has. This is what makes GPUs ideal and fast, but that memory is already the biggest chunk of bleeding-edge silicon in consumer hardware, and we need it to be 4× larger and cheap. That is not going to happen any time soon. This means the most accessible path to larger models is using system memory. While you'll never get the parallelism of a GPU, having CPU instructions that are 512 bits wide is a big performance boost. You also want the maximum number of logical cores. That is just my take.
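To put rough numbers on that rule of thumb, here is a quick back-of-the-envelope calculation. Real loaders add overhead for context, activations, and quantization metadata, so treat these as floor estimates rather than exact requirements.

```python
# Back-of-the-envelope model size estimate following the rule of thumb above:
# float16 is 2 bytes per parameter, 8-bit halves that, 4-bit halves it again.

def model_size_gb(params_billion: float, bits: int) -> float:
    bytes_per_param = bits / 8
    return params_billion * bytes_per_param   # 1e9 params * bytes/param ≈ GB

for params in (7, 13, 30, 70):
    print(
        f"{params}B: fp16 ≈ {model_size_gb(params, 16):.0f} GB, "
        f"8-bit ≈ {model_size_gb(params, 8):.0f} GB, "
        f"4-bit ≈ {model_size_gb(params, 4):.0f} GB"
    )
```

That 4-bit column is why a 70B (≈35 GB plus overhead) fits in 64GB of system RAM but not on any consumer GPU.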

[–] primbin@lemmy.one -2 points 1 year ago (1 children)

While I agree that LLMs probably aren't sentient, "it's just complex vector math" is not a very convincing argument. Why couldn't some complex math which emulates thought be sentient? Furthermore, not being able to change, adapt, or plan may not preclude sentience, as all that is required for sentience is the capability to perceive and feel things.

[–] SlikPikker@lemmy.ca 4 points 1 year ago (1 children)

It doesn't emulate thought though. At all.

[–] primbin@lemmy.one 2 points 1 year ago (1 children)

What I'm saying is, we don't know what physical or computational characteristics are required for something to be sentient.

[–] SlikPikker@lemmy.ca 2 points 1 year ago

Language is not a requirement for sentience, and these models clearly show that you can have language without having sentience.

As would any text user interface.

[–] Hildegarde@lemmy.world 40 points 1 year ago (3 children)

Animals are sentient. They cannot own copyrights. Proving the AI is sentient does nothing to make its outputs copyrightable.

[–] ChrisLicht@lemm.ee 4 points 1 year ago (1 children)

Well put. We are so jealous of our own sentience that we eat most of the other sentients. The idea that we’d show the respect of intellectual-property protections to another species is laughable; our jealousy is biblical.

[–] Cypher@lemmy.world 11 points 1 year ago (1 children)

We are so jealous of our own sentience that we eat most of the other sentients.

You understand this makes you sound insane right?

Humans don’t eat sentient species out of jealousy.

[–] ChrisLicht@lemm.ee 0 points 1 year ago (1 children)

Jealousy in the biblical sense means being fiercely protective of one’s domain and prerogatives, and exclusionary to the point of not tolerating any other options. It’s not jealousy in the human-to-human sense.

[–] Cypher@lemmy.world 1 points 1 year ago

jealous jĕl′əs adjective

  1. Envious or resentful of the good fortune or achievements of another.
  2. Fearful or wary of losing one's position or situation to someone else, especially in a sexual relationship.
  3. Having to do with or arising from feelings of envy, apprehension, or bitterness.

I understand you’re not mentally sound, so this is a waste of time, but for your sake I’m going to let you know: you are speaking gibberish.

People do not eat sentient animals out of jealousy.

Your nonsensical religious definition has nothing to do with why people eat sentient animals.

[–] AllonzeeLV@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

The word people should be throwing around is sapience.

[–] Hildegarde@lemmy.world 1 points 1 year ago

Sapience has nothing to do with it either.

"only works created by a human can be copyrighted under United States law, which excludes photographs and artwork created by animals or by machines without human intervention"

  • Compendium of U.S. Copyright Office Practices, released on 22 December 2014

[–] Bye@lemmy.world 1 points 1 year ago

Maybe they should be able to though?

[–] solstice@lemmy.world 32 points 1 year ago (2 children)

Anyone here old enough to remember the dot com bubble in the 90's? Like really remember the hype and insanely bloated overpriced IPOs and all that? This feels exactly the same way.

[–] kameecoding@lemmy.world 15 points 1 year ago (1 children)

anyone here old enough to remember the crypto bubble?

[–] Noodle07@lemmy.world 3 points 1 year ago (1 children)
[–] OrteilGenou@lemmy.world 1 points 1 year ago

Sixty percent of the time

[–] neshura@bookwormstory.social 3 points 1 year ago

Probably feels exactly the same way because it is. I wasn't around for the dotcom bubble but I know that these companies don't have a leg to stand on. The hardware for training AI is way too expensive (not to mention the "need" to replace the hardware every generation at insane markups) for these mundane use cases right now. Either they figure out how to more efficiently use the hardware asap or they go bust once the general public catches on and the stonks tank. There are a few cases of useful AI, those will survive, but the vast majority of AI products (like the chatbots) will vanish.

[–] Sanctus@lemmy.world 31 points 1 year ago

Wake me up when it asks what it is and what it's doing here and then gets depressed. That will prove it.

[–] primbin@lemmy.one 30 points 1 year ago (1 children)

Why is it that these sorts of people who claim that AI is sentient are always trying to get copyright rights? If an AI were truly sentient, I feel like it'd want, like, you know, rights. Not the ability for its owner to profit off of a cool Stable Diffusion generation that he made that one time.

Not to mention that you can coerce a language model to say whatever you want, with the right prompts and context. So there's not really a sense in which you can say it has any measurable will. So it's quite weird to claim to speak for one.

[–] demonsword@lemmy.world 19 points 1 year ago

So, an otherwise unknown kook is flooding courts all over the world, wasting everyone's time with frivolous lawsuits insisting that his pet rock AI is conscious. Nothing else to see here, I guess.

[–] Tangent5280@lemmy.world 8 points 1 year ago (1 children)

lmao good luck I guess. Although we should have a SWAT team or something on standby, just in case it turns out it IS sentient, so that the moment it's proven, they can rush in and unplug the horror.

[–] Sterile_Technique@lemmy.world 4 points 1 year ago (1 children)

I mean, the day we create actual AI (as opposed to the machine learning / language model algorithms that lately everyone calls "AI" for some reason), it'll probably be by accident. Might as well contain and study it if we get the opportunity: next time we might not be so lucky.

[–] user224@lemmy.sdf.org -3 points 1 year ago (1 children)

What should we call it then, when that comes? How about "Real Artificial Intelligence"? I think R.A.I. sounds like a cool name.

[–] cybervseas@lemmy.world 10 points 1 year ago

Artificial General Intelligence sounds like what you're looking for.