gohixo9650

joined 1 year ago
[–] gohixo9650@discuss.tchncs.de 2 points 9 months ago (2 children)

does it work for encrypted phones?

[–] gohixo9650@discuss.tchncs.de 2 points 9 months ago

yeah, I mostly commented because the fact that the game itself is so much smaller than the audio is impressive and funny at the same time

[–] gohixo9650@discuss.tchncs.de 7 points 9 months ago (2 children)

650MB CD media. The game itself was 40-50MB, and the remaining 500-600MB was the audio stored as uncompressed wav-style tracks (CD player compatible)
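
A rough back-of-the-envelope check (assuming standard Red Book CD audio: 44.1 kHz, 16-bit, stereo) shows why the audio dominates the disc:

```python
# Rough arithmetic for uncompressed CD audio (Red Book: 44.1 kHz, 16-bit, stereo).
bytes_per_second = 44_100 * 2 * 2            # sample rate * bytes per sample * channels
mb_per_minute = bytes_per_second * 60 / 1_000_000
print(f"~{mb_per_minute:.1f} MB per minute of audio")       # ~10.6 MB/min
print(f"~{600 / mb_per_minute:.0f} minutes fill ~600 MB")   # ~57 minutes
```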

[–] gohixo9650@discuss.tchncs.de 1 points 10 months ago

lmao after seeing your name I can't believe it was you all along

[–] gohixo9650@discuss.tchncs.de 6 points 10 months ago

I agree on the first part. However, this is from 2012, and in the meantime Linus himself realized and admitted that he was not proud of behaving like that, took real measures, and sought help in order to improve himself.

[–] gohixo9650@discuss.tchncs.de 4 points 10 months ago (3 children)

damn. I lost. Give me your bank details to send you your money.

[–] gohixo9650@discuss.tchncs.de 1 points 10 months ago

Didn’t say python because oh sweet Jesus the slowdown alone would grind the global economy to a halt if we were running all our banking software on Python XD

ah so we just need to persuade banks to switch to python. Noted

[–] gohixo9650@discuss.tchncs.de 2 points 10 months ago

this doesn't work. The AI still needs to know what CP is in order to create CP for negative use, so you first need to feed it CP. Here's a recent example of how OpenAI was labelling "bad text":

The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

source: https://time.com/6247678/openai-chatgpt-kenya-workers/
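
For illustration only, a minimal sketch (hypothetical, not OpenAI's actual pipeline) of the idea described above: train a detector on human-labeled snippets, then use it to filter output before it reaches the user. The example snippets and threshold here are made up.

```python
# Hypothetical sketch of a labeled-data toxicity filter (not OpenAI's real system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for the human-labeled snippets (1 = toxic, 0 = benign).
snippets = ["example of violent text", "a friendly greeting",
            "example of hateful text", "a neutral product review"]
labels = [1, 0, 1, 0]

# Learn to detect toxicity from the labeled examples.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(snippets, labels)

def filter_output(text: str, threshold: float = 0.5) -> str:
    """Pass text through only if the detector scores it below the toxicity threshold."""
    p_toxic = detector.predict_proba([text])[0][1]
    return text if p_toxic < threshold else "[filtered]"

print(filter_output("a friendly greeting"))
```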

[–] gohixo9650@discuss.tchncs.de 3 points 10 months ago

revisit the comment thread as someone has now posted a photo

[–] gohixo9650@discuss.tchncs.de 30 points 10 months ago (2 children)

Are there any predators smart enough to strategize like this?

it is the predators that build such passages. Have you ever seen any construction company building them? Even in the first photo, where one is under construction, there isn't a single human worker in sight

[–] gohixo9650@discuss.tchncs.de 2 points 10 months ago

yes, you're right about that. I failed to write it in a way that leaves room for exceptions. I meant something like: "of the people who migrate to mostly atheistic (or at least less religious) countries, those who continue being religious fanatics do so by choice." Because they are now in a place where, if they want to get rid of that culture, it is easier to do so.

Sure, there are people who migrate because they want to escape the oppression they experience in their home countries and decide to follow a completely different lifestyle, but they are not the majority. They do exist, though.

[–] gohixo9650@discuss.tchncs.de 61 points 10 months ago (2 children)

all religions are cancer. ALL. period. I can criticise any fanatic of any religion the same way I criticise the fanatics of the religion I grew up in and was brainwashed to follow. I was able to leave. For some people it may be more difficult because of the situation in their country. However, the people who migrate to mostly atheistic Western countries and continue being fanatics do so by choice.
