Yes, I read what you posted and answered accordingly. I just didn't dumb it down enough. So let me dumb it down further.
Your main objection was the simplicity of the goal of LLMs: predicting the next word in a sequence. Somehow, this simplistic goal supposedly makes the system stupid.
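For concreteness, this is roughly what that "simple" objective looks like: a minimal numpy sketch of next-word prediction, where the only thing being minimized is the negative log-probability of the true next token. The vocabulary, context, and logits here are made up purely for illustration; a real LLM computes the logits from the context with billions of parameters.

```python
import numpy as np

# Toy vocabulary and a single training example (illustrative only).
vocab = ["the", "cat", "sat", "on", "mat"]
context = ["the", "cat", "sat", "on", "the"]   # input tokens
target = "mat"                                  # the "next word" to predict

# Pretend logits for the next position (random here; a real LLM
# would compute these from the context).
rng = np.random.default_rng(0)
logits = rng.normal(size=len(vocab))

# Softmax turns logits into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The entire training objective: maximize the probability of the true next
# token, i.e. minimize its negative log-likelihood (cross-entropy).
loss = -np.log(probs[vocab.index(target)])
print(f"p(next = '{target}') = {probs[vocab.index(target)]:.3f}, loss = {loss:.3f}")
```

That loss is all the training process ever "sees"; everything else the model ends up able to do is a side effect of getting better at it.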
In my reply, I first said that self-awareness emerges naturally as a system becomes more and more intelligent, and I explained why. I then went on to explain how a simplistic terminal goal has nothing to do with actual intelligence. Hence, no matter how stupid or simple a terminal goal is, if an intelligent system is challenged enough and given enough resources, it will develop sentience at some point.
Sorry that I came across as a troll. That was not my intent.
Lmao, this statement itself is a contradiction. You first say that "you can never know anything for sure" with regard to descriptive statements about reality. Then, in the same breath, you make a statement about the laws of logic (which, by the way, are descriptive statements about reality) and claim to be absolutely sure of it.
Serious answer though: the scientific method rests on a few axioms. Assuming those axioms are true, then yes, you can be absolutely sure about the nature of things.
You lack an understanding of how LLMs work. Please look at how neural networks specifically work: they do learn (during training, the weights are adjusted to capture patterns in the data), and they do have memory (the learned weights themselves, plus the context window at inference time). In fact, the sheer memory footprint of those weights is the biggest reason why you can't run ChatGPT on your smartphone.
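As a rough back-of-the-envelope sketch (the parameter count and phone RAM figures below are assumptions for illustration; OpenAI has not published ChatGPT's exact model sizes), here is why the weights alone blow past a smartphone's memory:

```python
# Back-of-the-envelope memory footprint of a large language model.
# Parameter count is an assumption for illustration (GPT-3 scale, ~175B).
params = 175e9            # number of learned weights
bytes_per_param = 2       # fp16: 2 bytes per weight

gigabytes = params * bytes_per_param / 1e9
print(f"~{gigabytes:.0f} GB just to hold the weights")   # ~350 GB

phone_ram_gb = 8          # typical smartphone RAM (assumption)
print(f"that's roughly {gigabytes / phone_ram_gb:.0f}x the RAM of an {phone_ram_gb} GB phone")
```

Even with aggressive quantization, a model at that scale stays far beyond what a phone can hold in memory.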
Untrue. Please learn how machine learning works.