V0ldek

joined 1 year ago
[–] V0ldek@awful.systems 3 points 16 hours ago

They were still early

[–] V0ldek@awful.systems 5 points 18 hours ago (3 children)

OH NO, I have TERRIBLE NEWS my new FAVOURITE SHOW got CANCELLED (probably by the WOKE MOB)

[–] V0ldek@awful.systems 8 points 18 hours ago

Hey mate what do you think learning is. Like genuinely, if you were to describe the process of learning a subject to me.

[–] V0ldek@awful.systems 7 points 1 day ago* (last edited 1 day ago) (4 children)

Ok now I'm wasted af and this show rules, this is the best fucking thing ever, gather all your friends and watch this shit, they hired the Mooch and someone whose credential is that they're a YouTuber to not give money to people whose ideas are "what if X but crypto" like one of them is literally "what if water but there's an NFT on it"

[–] V0ldek@awful.systems 9 points 1 day ago (7 children)

The first Crypto Name they introduce is fucking Anthony "Unit of Time Measurement" Scaramucci and I got severe whiplash, I am not mentally ready to watch this, I need to refill my drug drawer

[–] V0ldek@awful.systems 10 points 1 day ago (2 children)

I wouldn’t argue with someone who said reasoning models are a substantial advance

Oh, I would.

I've seen people say stuff like "you can't disagree the models have rapidly advanced" and I'm just like yes I can, here: no they didn't. If you're claiming they advanced in any way please show me a metric by which you're judging it. Are they cheaper? Are they more efficient? Are they able to actually do anything? I want data, I want a chart, I want a proper experiment where the model didn't have access to the test data when it was being trained and I want that published in a reputable venue. If the advances are so substantial you should be able to give me like five papers that contain this stuff. Absent that I cannot help but think that the claim here is "it vibes better".

If they're an AGI believer then the bar is even higher, since in their dictionary an advancement would mean the models getting closer to AGI, at which point I'd love to see the metric by which they measure the distance of their current favourite model from AGI. They can't even properly define the latter in computer-scientific terms, only vibes.

I advocate for a strict approach: like a physicist dismissing any claim containing "quantum" but no maths, I will immediately dismiss any AI claims if you can't describe the metric you used to evaluate the model and isolate the changes between the old and new version to evaluate their efficacy. You know, the bog-standard shit you always put in the Experimental section of any CS systems paper.

[–] V0ldek@awful.systems 26 points 2 days ago (2 children)

A company that forces you to write a "Connect" every half-year where you reflect on your performance and Impact™ (click here for the definition of Impact™ in Microsoft® SharePoint™)

[–] V0ldek@awful.systems 3 points 2 weeks ago

There is only one True Word

[–] V0ldek@awful.systems 8 points 2 weeks ago (3 children)

Some of them are pretty spot on.

  • Internet Explorer - 9/10, explores the internet, nothing to argue about
  • Windows - 8/10, kinda simplistic but it does have windows
  • Word - 10/10, it is for words, short, to the point
[–] V0ldek@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago)

Quantum computing reality vs quantum computing in pop culture and marketing follows precisely the same line as quantum physics reality vs pop-science quantum physics.

  • Reality: Mostly boring multiplication of matrices, big engineering challenges, extremely interesting stuff if you're a nerd that loves the frontiers of human knowledge
  • Cranks: Literally magic, AntMan Quantummania was a documentary, give us all money
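The "boring multiplication of matrices" part is literal. As a minimal sketch (my own toy example, not from any linked material): applying a Hadamard gate to the |0⟩ state is nothing more than a 2×2 matrix times a 2-vector, in pure Python.

```python
import math

# Hadamard gate as a plain 2x2 matrix; |0> as a plain 2-vector.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
ket0 = [1.0, 0.0]

# One gate application = one matrix-vector multiplication.
state = [sum(H[i][j] * ket0[j] for j in range(2)) for i in range(2)]

# Measurement probabilities are the squared amplitudes of the state:
probs = [a * a for a in state]  # an equal superposition, ~[0.5, 0.5]
```

That's the whole trick, scaled up to matrices of size 2^n and with complex entries; the hard part is the engineering, not the magic.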
[–] V0ldek@awful.systems 1 point 2 weeks ago (1 child)

I think the end is way too generous. I don't think we deserve an end.

[–] V0ldek@awful.systems 24 points 2 weeks ago (15 children)

I've been thinking about this post for a full day now. It's truly bizarre, in a "I'd like to talk to this person and study their brain" kind of way.

Put aside the technical impossibility of LLMs acting as the agents he describes. That's small potatoes. The only thing that stays in my mind is this:

take 2 minutes to think of precisely the information I need

I can't even put into words the full nonsense of this statement. How do you think this would work? This is not how learning works. This is not how research works. This is not how anything works.

I can't understand this. Like yes, of course, sometimes there's this moment where you think "god I remember there was this particular chart I saw" or "how many people lived in Tokyo again?" or "I read exactly the solution to this problem on StackOverflow once". In the days of yore you'd write one Google query and you'd get it. Nowadays maybe you can find it on Wikipedia. Sure. But that doesn't actually take two minutes either, it's like an instant one-second thought of "oh I know I saw exactly this factoid somewhere". You don't read books for that though. Does this person think books are just sequences of facts you're supposed to memorise?

How on earth do you think of "precisely the information you need"? What does that mean? How many problems are there in your life where you know precisely what the solution would look like, and you just need an elaborate query through an encyclopedia to get it? Maybe this is useful if your entire goal is creating a survey of existing research into a topic, but that's a really small fraction of applications for reading a fucking book. How often do you precisely know what you don't know? Like genuinely. How can your curiosity be distilled into a precise, well-structured query? Don't you ever read something and go "oh, I never even thought about this", "I didn't know this was a problem", "I wouldn't have thought of this myself"? If not then what the fuck are you reading??

I am also presuming this is about purely non-fiction technical books, because otherwise this gets more nonsensical. Like what do you ask your agents for, "did they indeed take the hobbits to Isengard? Prepare a comprehensive review of conflicting points of view."

This single point presumes that none of the reasons for absorbing knowledge from other people is to use it in a creative way, get inspired by something, or just find out about something you didn't know you didn't know. It's something so alien to me, so detached from what I consider the human experience, I simply don't comprehend this. Is this a real person? What does the day-to-day life of this person look like? What goes on in their head when they read a book? What are we moving towards as a species?

 

This is a nice post, but it has such an annoying sentence right in the intro:

At the time I saw the press coverage, I didn’t bother to click on the actual preprint and read the work. The results seemed unsurprising: when researchers were given access to AI tools, they became more productive. That sounds reasonable and expected.

What? What about it sounds reasonable? What about it sounds expected given all we know about AI??

I see this all the time. Why do otherwise skeptical voices always feel the need to put in a weakening statement like this? "For sure, there are some legitimate uses of AI" or "Of course, I'm not claiming AI is useless" -- like, why are you not claiming that? You probably should be claiming that. All of this garbage is useless until proven otherwise! "AI does not increase productivity" is the null hypothesis! It's the only correct skeptical position! Why do you feel the need to extend the benefit of the doubt here? Like seriously, I cannot explain this in any way.

1
submitted 6 months ago* (last edited 6 months ago) by V0ldek@awful.systems to c/freeasm@awful.systems
 

I'm looking for recommendations of good blogs for programmers. I've been asked what I would recommend by younger folks a few times these past few months and I realised I don't really have a good list that I could just share with them.

What I'm interested in are blogs that don't focus specifically on any particular tech but more things like Coding Horror that are just for devs in general. They don't have to be for beginners. It'd also be interesting to see which of those are most popular in our little circle, so please upvote comments that contain recommendations you agree with.

I'm implicitly assuming stuff shared by folks here is going to be sensible, well-written blogs, and not some AI shill nonsense or other tech grift.

Note that I'm specifically interested in the text medium, podcasts or YT not so much.

 

An excellent post by Ludicity as per usual, but I need to vent two things.

First of all, I only ever worked in a Scrum team once and it was really nice. I liked having a Product Owner that was invested in the process and did customer communications, I loved having a Scrum Master that kept the meetings tight and followed up on Retrospective points, it worked like a well-oiled machine. Turns out it was a one-of-a-kind experience. I can't imagine having a stand-up for one hour without casualties involved.

A few months back a colleague (we're both PhD students at TU Munich) was taking the piss out of how you can enroll in a Scrum course as an elective for our doctoral school. He was in general making fun of the methodology but using words I've never heard before in my life. "Agile Testing". "Backlog Grooming". "Scrum of Scrums". I was like "dude, none of those words are in the bible", went to the Scrum Guide (which as far as I understood was the only document that actually defined what "Scrum" meant) and Ctrl+F-ed my point of literally none of that shit being there. Really, where the fuck does any of that come from? Is there a DLC to Scrum that I was never shown before? Was the person who first uttered "Scrumban" already drawn and quartered or is justice yet to be served?

Aside: the funniest part of that discussion was that our doctoral school has an exemption that carves out "credits for Scrum and Agile methodology courses" as being worthless towards your PhD, so at least someone sane is managing that.

Second point I wanted to make was that I was having a perfectly happy holiday and then I read the phrase "Agile 2" and now I am crying into an ice-cream bucket. God help us all. Why. Ludicity you fucking monster, there was a non-zero chance I would've gone through my entire life without knowing that existed, I hate you now.

 

Turns out software engineering cannot be easily solved with a ~~small shell script~~ large language model.

The author of the article appears to be a genuine ML engineer, although some of his takes aged like fine milk. He seems to be shilling Google a bit too much for my taste. However, the sneer content is good nonetheless.

First off, the "Devin solves a task on Upwork" demo is 1. cherry-picked, and 2. not even solved correctly.

Second, and this is the absolutely fantastic golden nugget here, to show off its "bug solving capability" it creates its own nonsensical bugs and then reverses them. It's the ideal corporate worker, able to appear busy by creating useless work for itself out of thin air.

It also takes over 6 hours to perform this task, which would be reasonable for an experienced software engineer, but an experienced software engineer's workflow doesn't include burning a small nuclear explosion's worth of energy while coding and then not actually solving the task. We don't drink that much coffee.

The next demo is a bait-and-switch again. In this case I think the author of the article fails to sneer as much as it deserves -- the task the AI solves is writing test cases for finding the Least Common Multiple modulo a number. Come on, that task is fucking trivial, all those tests are one-liners! It's famously much easier to verify modular arithmetic than it is to actually compute it. And it takes the AI an hour to do it!
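For the record, tests of that shape genuinely are one-liners. A hypothetical sketch in Python (the function name and the test values are mine, not taken from the demo):

```python
import math

def lcm_mod(a: int, b: int, m: int) -> int:
    # lcm(a, b) = a * b / gcd(a, b), then reduce modulo m
    return (a * b // math.gcd(a, b)) % m

# The entire "test suite" for a function like this is a handful of one-liners:
assert lcm_mod(4, 6, 10) == 2     # lcm(4, 6) = 12; 12 mod 10 = 2
assert lcm_mod(21, 6, 100) == 42  # lcm(21, 6) = 42
assert lcm_mod(12, 18, 7) == 1    # lcm(12, 18) = 36; 36 mod 7 = 1
```

Each check is a known input against a hand-computable output; there's nothing to design, which is exactly why an hour of AI time for this is so damning.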

It is a bit refreshing though that it didn't turn out DEVIN is just Dinesh, Eesha, Vikram, Ishani, and Niranjan working for $2/h from a slum in India.

 

I'm not sure if this fully fits into the TechTakes mission statement, but "CEO thinks it's a-okay to abuse certificate trust to sell data to advertisers" is, in my opinion, a great snapshot of the brain worms living inside those people's heads.

In short, Facebook wiretapped Snapchat by routing users' traffic through their VPN company, Onavo. Installing the Onavo app on your machine would add their certificates as trusted. Onavo would then intercept all communication to Snapchat and pretend the connection was TLS-secure by forging a Snapchat certificate and signing it with its own.

"Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them," Facebook CEO Mark Zuckerberg wrote in a 2016 email to Javier Olivan.

"Given how quickly they're growing, it seems important to figure out a new way to get reliable analytics about them," Zuckerberg continued. "Perhaps we need to do panels or write custom software. You should figure out how to do this."

Zuckerberg ordered his engineers to "think outside the box" to break TLS encryption in a way that would allow them to quietly sell data to advertisers.

I'm sure the brave programmers that came up with and implemented this nonsense were very proud of their service. Jesus fucking cinnamon crunch Christ.
