this post was submitted on 29 Jul 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] corbin@awful.systems 10 points 1 month ago

This orange thread is about San Francisco banning certain types of landlord rent collusion. I cannot possibly sneer better than the following in-thread comment explaining why this is worthwhile:

While I agree that the giant metal spikes we put on all the cars aren't the exclusive reason that cars are lethal, I would hope we both agree that cars are less lethal when we don't cover them in giant metal spikes.

[–] o7___o7@awful.systems 10 points 1 month ago (3 children)

Riffing on this fun subthread:

  • Will AI invent the Philosopher's Stone?
  • Can AI duck the Zuck?
  • Will AI divide by zero?
  • Can AI give you diarrhea?
  • Will AI make "fetch" happen?
  • Will AI have a second helping?
  • Does AI unlock the secret to making the Moon happy again?
  • Can AI stop eating after only one marshmallow?
[–] V0ldek@awful.systems 8 points 1 month ago (2 children)

Can AI give you diarrhea?

Already does make me nauseous, so...

[–] froztbyte@awful.systems 9 points 1 month ago

google continuing on their crusade to force ads down everyone’s eyeballs

not that it’s surprising, just “good” to see this playing out exactly as predicted

[–] BlueMonday1984@awful.systems 9 points 1 month ago (1 children)
[–] froztbyte@awful.systems 9 points 1 month ago

I’ve been under a rock with a bunch of life stuff so I only learned of this yesterday, and holy shit

those snapshots from the earnings call are wild

[–] froztbyte@awful.systems 9 points 1 month ago* (last edited 1 month ago)

a long AI sneer on gabi belle's channel

the comments also have some bangers. a quick selection:

My mom used to transcribe calls for companies as work, but then her work implemented AI to transcribe the calls, leading to work calls having sentences in them such as: “Thanks for watching!” “This venue has the best beat” and putting in random websites that DONT EXIST into the calls, when the caller said nothing like that.

imagine this in financial or medical services

had a survey from my university about how staff feel about AI implementation for our university. One question that concerned me was "how do you feel about AI being used for grading?"

The issue with AI grading assignments is that students will most likely get the mindset that if the professor or TA doesn't care enough to put in the effort to look at my work to give me feedback to do better, why the hell should I put in the effort and spend thousands to do this?

please come to our university, it costs 500k for a year and we pinky promise that the prof/TA won't get more than 10% of it, your learning is our "top priority"

My friend and I entered an art contest a few months ago and she lost to very obvious ai “art”. She spent months on a self portrait oil painting that even when putting my bias aside was amazingly beautiful and definitely deserved to win. There was ai competing in the charcoal category with me too but luckily someone looked at competing pieces before the judges actually scored anything and bombarded the hosts of the competition to remove the piece and they did eventually. It’s incredibly frustrating especially considering part of the first place prize was a scholarship that my friend definitely deserves and needs, and the contest hosts were very hesitant to remove the ai “art”.

it's weird how a chunk of the art world continues being bad at handling fakes

For additional context, training ChatGPT-3 took enough energy to propel the titanic at full speed for 37 hours. Or around 920 tons of coal.

just absolutely mental

[–] froztbyte@awful.systems 9 points 1 month ago (10 children)

more of an nsfw one: a post pointing and laughing at the fash. figured y'all may also enjoy reading it

[–] swlabr@awful.systems 9 points 1 month ago* (last edited 1 month ago)

Band I like accidentally purchases a poster design from a probable AI grifter. Here's their apology/acknowledgement post:

https://www.instagram.com/p/C98X_6huBOJ

The poster in question:

https://www.instagram.com/p/C95H8MQuVI6

I do not blame the band. I blame the AI person for being shit and passing off AI-generated shit as his own.

[–] sailor_sega_saturn@awful.systems 8 points 1 month ago* (last edited 1 month ago) (6 children)

Presented without comment: this utter crankery about hacking the Matrix (HN)

Given the highly speculative subject of this paper, we will attempt to give our work more gravitas by concentrating only on escape paths which rely on attacks similar to those we see in cybersecurity [37-39] research (hardware/software hacks and social engineering) and will ignore escape attempts via more esoteric paths such as: meditation [40], psychedelics (DMT [41-43], ibogaine, psilocybin, LSD) [44, 45], dreams [46], magic, shamanism, mysticism, hypnosis, parapsychology, death (suicide [47], near-death experiences, induced clinical death), time travel, multiverse travel [48], or religion.

Among the things they've already tried are torture, touching grass, and declining all cookies:

Unethical behavior, such as torture, doesn’t cause suffering reducing interventions from the simulators.

Breaking out of your routine, such as by suddenly traveling to a new location [199], doesn’t result in unexpected observations.

Saying "I no longer consent to being in a simulation" [200].

[–] gerikson@awful.systems 8 points 1 month ago

This idiocy has been patiently submitted 3 times before but finally broke containment

https://news.ycombinator.com/from?site=theseedsofscience.org

[–] dgerard@awful.systems 7 points 1 month ago

PURE PROMO: we're doing a Pivot to AI patrons’ video Q&A on Thursday August 8, at 16:00 UTC

also taking cryptocurrency questions

like you know I answer any of you guys' questions anyway, but this is in a much l33t3r format
