this post was submitted on 29 Sep 2023
856 points (98.4% liked)
Linux
you are viewing a single comment's thread
GPT, for example, fails at calculations for problems like knapsack, adjacency matrices, Huffman trees, etc.
It starts giving garbled output.
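For concreteness, here's what an exact knapsack answer actually requires: a dynamic-programming table filled in by nested loops, precisely the kind of iteration these models can't carry out internally. This is a standard textbook sketch, not anything GPT-specific:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming: best total value within capacity."""
    # dp[c] = best value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# classic example: items worth 60/100/120 weighing 10/20/30, capacity 50
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220 (take the 100 + 120 items)
```

Getting 220 here requires filling in roughly `items × capacity` table cells without a single slip, which is why "just predict the next token" struggles with it.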
Ask it a simple question a calculator can answer, say the square root of 48. It will give the wrong answer.
The current LLMs can't loop and can't see individual digits, so their failure at seemingly simple math problems is not terribly surprising. For some problems it can help to rephrase the question in such a way that the LLM goes through the individual steps of the calculation, instead of telling you the result directly.
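To illustrate why sqrt(48) is hard for a model that can't loop: a calculator-style answer comes from iterative refinement, e.g. Newton's method, where each step feeds back into the next. A minimal sketch:

```python
import math

def newton_sqrt(n, iterations=25):
    """Approximate sqrt(n) by Newton's method -- pure iteration, no lookup."""
    x = n / 2.0  # initial guess
    for _ in range(iterations):
        x = 0.5 * (x + n / x)  # refine: average the guess with n/guess
    return x

print(newton_sqrt(48))   # ~6.9282
print(math.sqrt(48))     # same value from the math library
```

The loop is trivial for a CPU and converges in a handful of steps, but it's exactly the repeated, stateful computation a single forward pass through an LLM can't perform.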
And more generally, LLMs aren't exactly the best way to do math anyway. Humans aren't any good at it either; that's why we invented calculators, which can do the same task with a lot less computing power and a lot more reliability. LLMs that can interact with external systems are already available behind a paywall.
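The "interact with external systems" idea boils down to routing arithmetic to a tool instead of having the model guess. The LLM-side plumbing is omitted here as hypothetical; this sketch only shows the kind of small, safe calculator such a model would hand the expression to:

```python
import ast
import operator

# map AST operator nodes to real arithmetic functions
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expr):
    """Safely evaluate a pure arithmetic expression like '48 ** 0.5'."""
    def ev(node):
        if isinstance(node, ast.Constant):      # a bare number
            return node.value
        if isinstance(node, ast.BinOp):          # e.g. a + b, a ** b
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):        # e.g. -a
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("48 ** 0.5"))  # the reliable answer the model alone gets wrong
```

The point is the division of labour: the model only has to produce the expression string, and the deterministic evaluator does the part it's bad at.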
The problem is ChatGPT will confidently give you the wrong answer, unlike humans.
We must be hanging around different humans.
Humans are wrong all the time, and confidently so. And it's an apples-and-oranges comparison anyway, as ChatGPT has to cover essentially all human knowledge, while a single human only knows a tiny subset of it. Nobody expects a human to know everything ChatGPT knows in the first place. A human put in ChatGPT's place would not perform well at all.
Humans overestimate their own capabilities because they can find the mistakes the AI makes, when they themselves wouldn't be able to perform any better; at best they'd make different mistakes.
In the same way, it may not be able to code if it can't do math. All I see it having is profound English knowledge, plus the data fed into it.
Human knowledge is limited, I agree. But more knowledge is different from the ability to so-called 'think'. Maybe that can be done with a different type of neural network, or with logic gates used separately from the neural networks.
https://www.deepmind.com/blog/competitive-programming-with-alphacode
People overestimate how much it matters that AI "doesn't have the capacity to understand its output".
Even if it doesn't, is that a massive problem to overcome? There are studies showing that if you have an AI list the potential problems with an output and then apply them to its own output, it performs significantly better. Perhaps we're just a recursive algorithm away from that.
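That critique-then-revise loop is simple to write down. In this sketch, `ask_model` is a hypothetical stand-in for a real LLM API call, passed in as a parameter so only the control flow is concrete:

```python
def refine(draft, ask_model, rounds=2):
    """Self-critique loop: the model lists problems with its own output,
    then rewrites the output to address them, for a few rounds."""
    for _ in range(rounds):
        critique = ask_model("List potential problems with:\n" + draft)
        draft = ask_model(
            "Rewrite the text, fixing the problems listed.\n"
            f"Problems:\n{critique}\nText:\n{draft}"
        )
    return draft
```

Each round makes two model calls (one to critique, one to revise), so the cost grows linearly with the number of rounds; the studies mentioned above suggest even one round can help.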
Perhaps we're just a recursive algorithm.