Awesome, I'll give these a spin and see how it goes. Much appreciated!
Good to know. I'd hate to buy a new machine strictly for running an LLM. It could be an excuse to pick up something like a Framework 16, but realistically, I don't see myself doing that. I think you might be right about using something like Open WebUI or LM Studio.
This is all new to me, so I'll have to do a bit of homework on this. Thanks for the detailed and linked reply!
I have an M2 MacBook Pro (Apple silicon) and would kind of like to replace Google's Gemini as my go-to LLM. I think I'd like to run something like Mistral, probably. I currently have Ollama and some version of Mistral running, but I almost never use it since it's on my laptop, not my phone.
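For anyone curious, querying it from a script is just a POST to Ollama's local API. A minimal sketch, assuming the default port 11434 and that the mistral model has already been pulled (the prompt is just a placeholder):

```python
# Minimal sketch: query a local Ollama server (default port 11434).
# Assumes `ollama pull mistral` has been run and Ollama is running.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```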
I'm not big on LLMs, but if I can find one I can run locally that helps me get off of Google Search and Gemini, that could be awesome. Currently I use a combo of Firefox, Qwant, Google Search, and Gemini for my daily needs. I'm not big on the direction Firefox is headed, I've heard there are arguments against Qwant, and using Gemini feels like the wrong answer for my beliefs and opinions.
I'm looking for something better without too much time being sunk into something I may only sort of like. Tall order, I know, but I figured I'd give you as much info as I can.
There are other ways to lower the amount of plastic in you. If you donate blood, you can measurably lower your PFAS levels; removing blood, which carries plastic through your whole body, will also lower your concentration of plastics. Because plastic is in the water, make sure you drink filtered water. Filters exist that will catch microplastics, and some advertise it. If you want to keep your levels low, avoid hydrophobic coatings that sit next to food for extended periods of time, and definitely don't heat food next to a hydrophobic coating. Think microwaving food in a container with a coating that will leach into the food. So bags of microwave popcorn should be avoided like the plague, unfortunately.
Source: Veritasium, skip to at least 50:15, but honestly I'd recommend watching the whole thing: https://youtu.be/SC2eSujzrUY
I don't doubt that it can perform addition in multiple ways. I would go as far as saying it can probably attempt addition in more ways than the average person, since it has probably been trained on a bunch of math. Can it perform it correctly? Sometimes. That's OK; people make mistakes all the time too. I don't take away from LLMs just because they make mistakes. The ability to do math in multiple ways is not evidence of thinking, though. It's evidence that it's been trained on at least a fair bit of math. I would say if you train it on a lot of math, it will attempt to do a lot of math. That's not thinking; that's just increased weighting on tokens related to math. If you were to train an LLM on nothing but math and texts about math, then ask it an art question, it would respond somewhat nonsensically with math. That's not thinking; that's just choosing the statistically most likely next token.
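To make "choosing the statistically most likely next token" concrete, here's a toy sketch. The vocabulary and scores are made up and nothing like a real model's scale, but the final step is the same:

```python
# Toy sketch of greedy next-token selection. A real LLM produces scores
# over ~100k tokens from billions of weights; this fakes 4 of them.
import math

vocab = ["4", "5", "fish", "blue"]
logits = [6.1, 2.3, -1.0, 0.2]   # made-up model scores after "2 + 2 = "

# softmax turns raw scores into a probability distribution
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# greedy decoding: pick the highest-probability token
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 2))   # -> "4" 0.97
```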
I had no idea about artificial neurons, TIL. I suppose that makes "neural networks" make more sense. In my readings on ML, they always seemed to go straight to the tensor and overlook the neuron. They would go over the functions used to populate the weights but never used that term. Now I know.
I would point out that I think you might be overly confident about the manner in which it was trained on addition. I'm open to being wrong here, but when you say "It was not trained to do trigonometry to solve addition problems", that suggests to me either you know how it was trained, or you are making assumptions about how it was trained. I would suggest that unless you work at one of these companies, you probably are not privy to their training data. This is not an accusation; I think that is probably a trade secret at this point. And as for the idea that nobody would train an LLM to do addition in this manner, I invite you to glance at the Wikipedia article on addition. Really, glance at literally any math topic on Wikipedia. I didn't notice any trigonometry in this entry, but I did find a discussion of finding the limits of logarithmic equations in the "Related operations" section: https://en.m.wikipedia.org/wiki/Addition. They also cite convolution as another way to add, in which they jump straight to calculus: https://en.m.wikipedia.org/wiki/Convolution
This is all to say, I would suggest that we don't know how they're training LLMs. We don't know what the training data is or exactly how it is being used. What we do know is that LLMs work on tokens and weights. The weights, and each token's statistical relevance to every other token, depend on the training data, which we don't have access to.
I know this is not the point, but up until now I've been fairly pedantic and tried to use the correct terminology, so I would point out that technically LLMs have "tensors", not "neurons". I get that tensors are designed to behave like neurons, and this is just me being pedantic. I know what you mean when you say neurons; I just wanted to clarify and be consistent. No shade intended.
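To illustrate the pedantry with a toy sketch (made-up numbers): the weights live in tensors, and the "neuron" is the computation you do with them:

```python
# The tensors are the containers holding the weights; the "neuron"
# is the operation (weighted sum + nonlinearity) computed from them.
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs (a rank-1 tensor)
W = np.array([0.8, 0.1, -0.4])   # one neuron's weights (also a tensor)
b = 0.2                          # bias

activation = np.tanh(W @ x + b)  # the "artificial neuron" itself
print(activation)
```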
I don't think you can disconnect how an LLM was trained from how it operates. If you train an LLM to use trigonometry to solve addition problems, I think you will find that the LLM will use trigonometry to solve addition problems. If you train an LLM only in Russian, it will speak Russian. I would suggest that regardless of what you train it on, it will choose the statistically most likely next token based on its training data.
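A toy bigram counter makes the point. The "corpora" here are made up and nothing like a real training set, but the principle is the same: change the training data and the statistically most likely next token changes with it:

```python
# Train the same bigram counter on two tiny made-up corpora and the
# most likely token after "uses" flips with the training data.
from collections import Counter, defaultdict

def train(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    # return the most frequent next token seen after `prev`
    return counts[prev].most_common(1)[0][0]

math_corpus = "addition uses sums addition uses carries addition uses sums"
trig_corpus = "addition uses sine addition uses cosine addition uses sine"

print(predict(train(math_corpus), "uses"))  # -> "sums"
print(predict(train(trig_corpus), "uses"))  # -> "sine"
```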
I would also suggest that we don't know the exact training data used for most LLMs, so as outsiders we can't say one way or the other how an LLM is being trained to do anything. We can try to extrapolate how the LLM was trained from posts like the one you linked, though. In general, if that is how the LLM is arriving at its next token, then the training data must be very heavily weighted in that manner.
My apologies, I was too vague. I'm saying that "thinking" by definition is not "statistics". Whereas monkeys, birds, and human babies all "think", LLMs use algorithms and "statistics". I also think that "statistics" not meaning the same thing as "thinking" is a valid argument. I would go further and say it's important that words have meaning. That is what I was attempting to convey. I'm happy to clear up anything I was unclear about.
I think you can make a strong argument that they don't think, rooted in the idea that words should mean something and that "statistics" and "thinking" don't mean the same thing. To me, that feels like a fairly valid argument.
As a foster parent, we get trained on how kids frequently get trafficked, and the number one place is anywhere parents feel their kids are safe and don't need close supervision. So anything kid-centric like Disney World, or family-centric like a church, is a prime target for predators. Roblox is a kid-centric place where parents don't closely watch their kids.
Roblox is a big enough company, and has been around long enough, that they should be doing something, because at this point they definitely know this happens. If you believe everything they claim on their website (https://corp.roblox.com/resource/child-safety), they are doing something. But as far as I can tell, there isn't a report or any way to validate that they are actually doing anything. You just have to trust that the publicly traded company is investing in a department that doesn't directly generate profits for its stockholders. You have to trust that this company is not giving in to quarterly pressure to increase profits and decrease costs around this function of their business.
I push back on the idea that just because something is designed for kids, it doesn't need to be made safe for kids. Roblox has designed something for kids, so they should do something to make it safe for kids, and parents should watch their kids.