Blemgo

joined 1 year ago
[–] Blemgo@lemmy.world 3 points 3 days ago (1 children)

Possibilities are all possible outcomes of a certain scenario. In the example of a coin toss, that's heads or tails. However, they depend on how you define what you want to observe. For example, for a dice roll, you could define the possibilities as:

  • any number less than 5 is rolled
  • a 5 is rolled
  • a 6 is rolled

Probabilities are attached to possibilities. They define how likely an outcome is. For example, in an ideal coin toss, heads and tails each have a probability of 0.5 (or 50%).

With my 2nd example, the probabilities would be:

  • any number less than 5 is rolled: 4/6 (or 2/3, 0.666..., 66.666...%)
  • a 5 is rolled: 1/6 (or 0.1666..., 16.666...%)
  • a 6 is rolled: 1/6 (or 0.1666..., 16.666...%)

All probabilities must add up to 1.0 (or 100%); otherwise your possibilities either overlap or don't cover every outcome, which is generally not something you want.
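If you want to play with this, here's a quick Python sketch of the dice example above (the names are just for illustration), checking that the probabilities add up to 1:

```python
from fractions import Fraction

# Possibilities for a fair six-sided die, grouped as in the example above.
possibilities = {
    "any number less than 5 is rolled": Fraction(4, 6),  # rolls 1, 2, 3 or 4
    "a 5 is rolled": Fraction(1, 6),
    "a 6 is rolled": Fraction(1, 6),
}

# The groups don't overlap and cover every outcome, so they must sum to 1.
assert sum(possibilities.values()) == 1

for outcome, p in possibilities.items():
    print(f"{outcome}: {p} (~{float(p):.3f})")
```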


Plausibility is a bit trickier, as it also depends on your definition, namely a cutoff point. You could see the cutoff point as a limit on how much you want to risk. I'll only use the coin toss example for this. Say you will toss a coin 100 times. This means there are 2^100^ possible outcomes, but we will only examine 2 of them here:

  • you will get 100 times tails
  • you will get exactly as many tails as heads

Let's say the cutoff point is 0.01, i.e. 1%. This makes the first possibility implausible, as 1/(2^100^) is far lower than 0.01. The second possibility has a probability of C(100, 50)/2^100^, roughly 0.08, which is still greater than 0.01, and therefore plausible.
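A quick Python sketch of that cutoff check, just to make the numbers concrete (the exact 50/50 split uses the binomial coefficient C(100, 50)):

```python
from math import comb

tosses = 100
cutoff = 0.01  # our chosen plausibility threshold (1%)

# 100 tails in a row: one sequence out of 2^100.
p_all_tails = 1 / 2**tosses

# Exactly as many heads as tails: C(100, 50) out of 2^100 sequences.
p_even_split = comb(tosses, tosses // 2) / 2**tosses

for name, p in [("100 tails", p_all_tails), ("50/50 split", p_even_split)]:
    verdict = "plausible" if p >= cutoff else "implausible"
    print(f"{name}: p ~ {p:.3g} -> {verdict}")
```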

[–] Blemgo@lemmy.world 12 points 4 days ago (1 children)

Sphynx cats are also notorious for skin conditions, making them high maintenance in terms of vet visits, sadly.

But yeah, it would be cool to meet one in person.

[–] Blemgo@lemmy.world 3 points 6 days ago

But do you also sometimes leave out the AI for steps it usually does for you, like the conceptualisation or the implementation? Would you still be able to do these steps as efficiently as before you used AI? Would you be able to spot the mistakes the AI makes in these steps, even months or years down the line?

The main issue I have with AI being used in tasks is that it deprives you of applying logic to real-life scenarios, the thing we excel at. It would be better to use AI in the opposite direction to how you currently use it: to develop methods for viewing your work critically. After all, if there is one thing a lot of people are bad at, it's thorough critical thinking. We just suck at knowing all the edge cases and how to test for them.

Let the AI come up with unit tests, let it be the one that questions your work, in order to get a better perspective on it.

[–] Blemgo@lemmy.world 1 points 1 week ago (1 children)

How do you play the missions? I usually have almost enough for the next warbond by the time I've maxed out the last one. I did hear that this struggle usually happens when people don't look for POIs, which also means resources accumulate rather slowly.

Overall, the game encourages you not to beeline for primary objectives but rather to plan out a route, especially for side objectives, as they can often be further away. It helps a lot that crashed resource drops (or whatever they are called) have a beacon that flashes higher the further away you are from them.

[–] Blemgo@lemmy.world 1 points 2 weeks ago

I mean, Theranos was less a classic ethical nightmare than just a grift, separating suckers from their money. A more fitting example in the same vein would be Andrew Wakefield's "studies" claiming the MMR vaccine causes autism, which harmed actual children and spurred on the antivax movement.

[–] Blemgo@lemmy.world 1 points 2 weeks ago

Honestly, that's news to me. Mind linking it? Might be interesting to read about it.

[–] Blemgo@lemmy.world 1 points 2 weeks ago (4 children)

Funnily enough, the Stanford Prison experiment was pretty much just an act, with both parties encouraged to behave the way they did. It has since been discredited.

A better analogy would be the Milgram experiment(s). They were repeated often and broke certain ethical rules (e.g. not telling the test subjects the whole truth about the experiment), with some test subjects taking their own lives from the sheer realisation of what they had done, and yet the experiment's results still stand uncontested.

[–] Blemgo@lemmy.world 4 points 2 weeks ago

I think it can be both. However, they are no justification for why one should buy and like a game they clearly won't enjoy for various reasons. What's more, trying to "fix" a game can alter its impact on the player. There's a reason roguelikes/roguelites are so hard, and taking away the difficulty lessens the experience. That's also why most people, for example, won't use cheating tools in their single-player games apart from screwing around.

[–] Blemgo@lemmy.world 3 points 2 weeks ago

And that's why you need to use aluminium instead.

[–] Blemgo@lemmy.world 7 points 3 weeks ago (1 children)

Honestly, I think that this was a horrid read. It felt so unfocused, shallow and at times contradictory.

For example, at the top it talks about how software implementation has the highest AI adoption rate while code review/acceptance has the lowest, yet it never really explains why that is, apart from some shallow arguments (which I will come back to later), or how to integrate AI more there.

And it never reaches any depth, as every topic only gets grazed briefly before moving on to the next, to the point where the pitfalls of overusing AI (tech debt, security issues, etc.) are mentioned twice, with no acknowledgement of the earlier mention, and the article never explains how these issues arise nor shows any examples.

And what I think is the funniest contradiction: from the start, including the title, the article pushes for speed, yet near the end it discourages that very thinking, saying that pushing dev teams for faster development leads to corner cutting and that for better AI adoption one shouldn't focus on development speed. Make up your damn mind before writing the article!

[–] Blemgo@lemmy.world 3 points 3 weeks ago

I haven't watched the video yet, but I think TADC has unwillingly joined the "kids" content mill, which is probably what's being referenced.

Even Gooseworx dislikes how those content mill channels have abused TADC's popularity for their own profit, and neither she nor Glitch can do much about it.

[–] Blemgo@lemmy.world 50 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Funnily enough, Signal has circumvented the issue by marking their chat window as DRM content, making it invisible to Recall.
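If I understand their announcement correctly, this boils down to the same Windows flag that DRM'd video players set to exclude their window from screen capture; a rough sketch of that kind of call via Python's ctypes (the helper name and window handle here are just for illustration, not Signal's actual code):

```python
import ctypes

# Windows 10 2004+ flag: hide the window's contents from screenshots,
# recordings and (reportedly) Recall's captures.
WDA_EXCLUDEFROMCAPTURE = 0x00000011

def exclude_from_capture(hwnd: int) -> bool:
    """Ask Windows to blank this window out in any screen capture."""
    return bool(ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))
```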
