UraniumBlazer

joined 1 year ago
[–] UraniumBlazer@lemm.ee 15 points 4 months ago* (last edited 4 months ago) (2 children)

Ok... I'm really curious as to how porn actors are hired. Do they send nudes and stuff in their portfolio?

  • Here's my penis.
  • Here's my ass.
  • Here's me rawdogging a twink.
  • Here's me getting rawdogged by a twink.
  • Here's a letter of recommendation from the same twink.
  • Yes, we might look alike, but the twink is not my brother. He's my cousin.
[–] UraniumBlazer@lemm.ee 9 points 4 months ago* (last edited 4 months ago) (1 children)

Don't worry. You're not alone. We are coming to say "always have been" and then point a pew pew at you.

[–] UraniumBlazer@lemm.ee 66 points 4 months ago (6 children)

Hopium question: Can Google be sued for this as anti-competitive behavior and fined for "lost revenue"?

[–] UraniumBlazer@lemm.ee -1 points 4 months ago

The main problem is the definition of what "us" means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

We respond to stimuli. That's all that we do. So what does "we" even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

There sure is complexity in how we respond to stimuli.

The main problem here is the absence of an objective definition of consciousness. We simply don't know how to define consciousness (yet).

This is primarily what leads to questions like the one you raised right now.

[–] UraniumBlazer@lemm.ee 1 points 4 months ago

Interesting perspective, although I don't see how some of your points add up. Regardless, thank you for the elaboration! :)

[–] UraniumBlazer@lemm.ee -3 points 4 months ago

These things are like arguing about whether or not a pet has feelings...

Mhm. And what's fundamentally wrong with such an argument?

I'd say it's far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.

Why?

I'm in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I'm wrong.

Why?

I too see how grifters use AI to further their scams. That's the case with any new tech that pops up. This, however, doesn't make LLMs uninteresting.

[–] UraniumBlazer@lemm.ee -2 points 4 months ago (4 children)

I am not in disagreement, and I hope you won't take offense to what I am saying, but you strike me as someone quite new to philosophy in general.

Nah, no worries haha. And yeah, I am relatively new to philosophy. I'm not even as well read on the matter as I would like to be. :(

Personal philosophy

I see philosophy (what we mean by philosophy TODAY) as putting up some axioms and seeing what follows logically. The scientific method differs in that its axioms have to be proven true.

I would agree with you on the personal-philosophy point regarding the ethics branch of philosophy. Different ethical frameworks always revolve around axioms that are untestable in the first place. Everything suddenly becomes subjective, with no capacity for being objective. That makes this part of philosophy personal, imo.

As for other branches of philosophy, though (like metaphysics), I think it's just a game of logic. It doesn't matter who plays this game. Assume an untested/untestable axiom, build upon it using logic, and see the beauty that you've created. If the laws of logic are followed and the assumed axiom is the same, anyone can reach the same conclusion. So I don't see this as personal, really.
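
A minimal sketch of that game in Lean (the propositions and axioms here are hypothetical placeholders I made up for illustration, not claims about the world): grant the axioms, follow the logic, and everyone lands on the same conclusion.

```lean
-- Toy axioms (made up for illustration): opaque propositions with no proof.
axiom Conscious : Prop
axiom RespondsToStimuli : Prop
axiom conscious_implies_response : Conscious → RespondsToStimuli
axiom something_is_conscious : Conscious

-- Pure logic from here on: anyone who grants the axioms above
-- is forced to accept this theorem.
theorem response_follows : RespondsToStimuli :=
  conscious_implies_response something_is_conscious
```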

but I would suggest first studying human consciousness before extrapolating psychology from AI behavior

Agreed

Personally, I got my first taste of that knowledge pre-AI from playing video games

Woah that's interesting. Could you please elaborate upon this?

[–] UraniumBlazer@lemm.ee -1 points 4 months ago (2 children)

ChatGPT says this itself. But why does an intention have to originate with ChatGPT itself? Our intentions are often trained into us by others. Take propaganda, for example: political propaganda, corporate propaganda (advertisements), and so on.

[–] UraniumBlazer@lemm.ee 1 points 4 months ago (1 children)

It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

Agreed :(

You know what's sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit, but I really don't want to keep using it. And I see nothing like that on Lemmy.

[–] UraniumBlazer@lemm.ee -3 points 4 months ago (1 children)

No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.

Again, it depends on what type of intelligence we are talking about. Dogs can't write code. Apes can't write code. LLMs can (and in my experience, not bad code for low-level tasks). Dogs can't summarize huge pages of text; heck, they can't even have a vocabulary greater than a few thousand words. All of this definitely puts LLMs above dogs and apes on that scale of intelligence.
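
For a concrete example, here's a minimal sketch of the summarization task (the model name, file name, and client setup are my assumptions; any chat-style LLM API would do):

```python
# Minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY
# in the environment; the model name is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Ask a chat model to compress arbitrary text into three sentences."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: summarize a long document no dog could parse.
print(summarize(open("huge_page.txt").read()))
```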

Pseudo-intellectual bullshit like this being passed off as adding to the discussion does meaningful harm. It's inherently malignant and deserves the same contempt as flat earth and fake medicine.

Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does any of this not follow the scientific method? How is it malignant?

[–] UraniumBlazer@lemm.ee 3 points 4 months ago

Good for you 👍
