UraniumBlazer

joined 2 years ago
[–] UraniumBlazer@lemm.ee 32 points 8 months ago (1 children)

Ew, the new one sucks. Why can't they spend the money they have on important stuff instead of changing logos every couple of years? Y'know, considering that their funding is going to dry up because of the Google antitrust case?

[–] UraniumBlazer@lemm.ee 1 points 8 months ago (3 children)

I don't oppose AI pictures at all. However, considering that all generative image models have been trained on human-generated data, it is only fair that these models and the art created by them be under copyleft licenses.

[–] UraniumBlazer@lemm.ee 15 points 8 months ago* (last edited 8 months ago) (2 children)

Y'know... I'm really curious as to how porn actors are hired. Do they send nudes and stuff in their portfolio?

  • Here's my penis.
  • Here's my ass.
  • Here's me rawdogging a twink.
  • Here's me getting rawdogged by a twink.
  • Here's a letter of recommendation from the same twink.
  • Yes, we might look alike, but the twink is not my brother. He's my cousin.
[–] UraniumBlazer@lemm.ee 9 points 8 months ago* (last edited 8 months ago) (1 children)

Don't worry. You're not alone. We are coming to say "always have been" and then point a pew pew at you.

[–] UraniumBlazer@lemm.ee 67 points 8 months ago (6 children)

Hopium question: Can Google be sued for this as anti-competitive behavior and fined for "lost revenue"?

[–] UraniumBlazer@lemm.ee -1 points 8 months ago

The main problem is the definition of what "us" means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

We respond to stimuli. That's all that we do. So what does "we" even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

There sure is complexity in how we respond to stimuli.

The main problem here is the absence of an objective definition of consciousness. We simply don't know how to define consciousness (yet).

This is primarily what leads to questions like the one you raised just now.

[–] UraniumBlazer@lemm.ee 1 points 8 months ago

Interesting perspective, although I don't see how some of your points add up. Regardless, thank you for the elaboration! :)

[–] UraniumBlazer@lemm.ee -3 points 8 months ago

> These things are like arguing about whether or not a pet has feelings...

Mhm. And what's fundamentally wrong with such an argument?

> I'd say it's far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.

Why?

> I'm in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I'm wrong.

Why?

I too see how grifters use AI to further their scams. That's the case with any new tech that pops up. This, however, doesn't make LLMs not interesting.

[–] UraniumBlazer@lemm.ee -2 points 8 months ago (4 children)

> I am not in disagreement, and I hope you won't take offense to what I am saying, but you strike me as someone quite new to philosophy in general.

Nah, no worries haha. And yeah, I am relatively new to philosophy. I'm not even as well read on the matter as I would like to be. :(

> Personal philosophy

I see philosophy (what we mean by philosophy TODAY) as putting up some axioms and seeing what follows from them logically. The scientific method differs in that these axioms have to be shown to be true.

I would agree with you on the personal philosophy point when it comes to the ethics branch of philosophy. Different ethical frameworks always revolve around axioms that are untestable in the first place. Everything suddenly becomes subjective, with no capacity for being objective. That makes this part of philosophy personal, imo.

As for other branches of philosophy though (like metaphysics), I think it's just a game of logic. It doesn't matter who plays this game. Assume an untested/untestable axiom, build upon it using logic, and see the beauty that you've created. If the laws of logic are followed and the assumed axiom is the same, anyone can reach the same conclusion. So I don't see this as personal, really.

> But I would suggest first studying human consciousness before extrapolating psychology from AI behavior.

Agreed

> Personally, I got my first taste of that knowledge, pre-AI, from playing video games.

Woah, that's interesting. Could you please elaborate on this?

[–] UraniumBlazer@lemm.ee -1 points 8 months ago (2 children)

ChatGPT says this itself. However, why does an intention have to originate from ChatGPT itself? Our intentions are often trained into us by others. Take the example of propaganda: political propaganda, corporate propaganda (advertisements), and so on.

[–] UraniumBlazer@lemm.ee 1 points 8 months ago (1 children)

> It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

Agreed :(

You know what's sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don't want to keep using it, though. But I see nothing like that on Lemmy.
