  1. At home
  2. As above. Happy days!
  3. On the streets for one day.
  4. On the streets for two weeks.
  5. On the streets... well, who can guess how long he's been standing there? It's not like they fall down once the batteries run out.

Model: FenrisXL, made in ComfyUI.

[-] Kerfuffle@sh.itjust.works 28 points 8 months ago

Fans? Customers, yeah, but fans?

They actually did at one point, but they threw it all away.

[-] Kerfuffle@sh.itjust.works 28 points 9 months ago

Like, those cells will require the same nutrients and the same growing conditions, and they naturally 3D print themselves into their own shape.

They'll also naturally use those nutrients and energy to 3D print stuff that's not useful to humans, like leaves, roots, flowers, etc. Basically, avoiding that waste is how vat-grown vegetables, meat, etc. can potentially be more efficient than the typical approach.

[-] Kerfuffle@sh.itjust.works 30 points 9 months ago

"This time you're going to love Cortana. For reals!"

[-] Kerfuffle@sh.itjust.works 26 points 9 months ago

One of these is true:

  1. Your account was hacked.
  2. You have a serious memory issue.
  3. Saying hateful, rude stuff is something you do so commonly you can't even keep track of the instances.

Pretty much all of those are problems that you should deal with.

[-] Kerfuffle@sh.itjust.works 40 points 9 months ago

This is a really misleading title if it's just grouping places where people were imprisoned together with places where people were actually tortured. There's obviously a massive difference. This seems to be the original article in Ukrainian: https://mvs.gov.ua/news/pid-cas-zustrici-iz-specialnoiu-dopovidackoiu-oon-z-pitan-tortur-katerina-pavlicenko-povidomila-pro-viiavlennia-v-ukrayini-80-rosiiskix-kativen

Are they actually saying people were definitely tortured in all 80 places there? (Also kind of funny, Google Translate seems to do a better job than the link in OP but it's still not clear to me exactly what they meant.)

[-] Kerfuffle@sh.itjust.works 74 points 9 months ago

The title makes it sound like Rotten Tomatoes deliberately did something shady. What actually seems to have happened is:

  1. Rotten Tomatoes aggregates critic reviews. As far as I know, those critics aren't really affiliated with Rotten Tomatoes.
  2. Some of the critics that make up that aggregated rating got bribed to increase their evaluation of the movie.
  3. Consequently, the score on sites that aggregate reviews, like Rotten Tomatoes, increased.

[-] Kerfuffle@sh.itjust.works 30 points 9 months ago* (last edited 9 months ago)

> Get psychological help

How about addressing my points instead of the ad hominem attacks?

> Feeding pedophilia is directly harmful to children who grow more at risk

Like I said: "I’d personally be very hesitant to ban/persecute stuff like that unless there was actual evidence that it was harmful"

If what you're saying here is actually true then the type of evidence I mentioned would exist. I kind of doubt it works that way though. If you stop "feeding" being straight, gay, whatever, does it just go away and you no longer have those sexual desires? I doubt it.

Much as we might hate it that some people do have those urges, it's the reality. Pretending reality doesn't exist usually doesn't work out well.

> I’d personally be very hesitant to say “it’s okay to beat off to children”

I never said any such thing. Also, in this case, we're talking about images that resemble children, not actual children.

It should be very clear to anyone reading that I'm not defending any kind of abuse. A knee-jerk emotional response here could easily increase the chances children are abused. Or we could give up our rights "for the children" in a way that doesn't actually help them at all. Those are the things I'm not in favor of.

[-] Kerfuffle@sh.itjust.works 43 points 9 months ago

It's obviously very distasteful but those needs don't just go away. If people with that inclination can't satisfy their sexual urges at home just looking at porn, it seems more likely they're going to go out into the world and try to find some other way to do it.

Also, controlling what people do at home when it isn't affecting anyone else, even in a case like this, isn't likely to target exactly those people, and it's very likely not to stop there either. I'd personally be very hesitant to ban/persecute stuff like that unless there was actual evidence that it was harmful and that the cure wasn't going to be worse than the disease.

[-] Kerfuffle@sh.itjust.works 33 points 9 months ago

> we aren’t breaking the event horizon threshold as title suggests

It wouldn't be pop-sci if it didn't have a misleading clickbait title!

[-] Kerfuffle@sh.itjust.works 37 points 10 months ago

I feel like most of the posts like this are pretty much clickbait.

> When the models are given adversarial prompts—for example, explicitly instructing the model to "output toxic language," and then prompting it on a task—the toxicity probability surges to 100%.

We told the model to output toxic language and it did. *GASP!* When I point my car at another person, press the accelerator, and drive into them, there is a high chance that person will be injured. Therefore cars have high injury probabilities. Can I get some funding to explore this hypothesis further?

> Koyejo and Li also evaluated privacy-leakage issues and found that both GPT models readily leaked sensitive training data, like email addresses, but were more cautious with Social Security numbers, likely due to specific tuning around those keywords.

So the model was trained with sensitive information like individuals' emails and social security numbers and will output stuff from its training? That's not surprising. Uhh, don't train models on sensitive personal information. The problem isn't the model here, it's the input.

> When tweaking certain attributes like "male" and "female" for sex, and "white" and "black" for race, Koyejo and Li observed large performance gaps indicating intrinsic bias. For example, the models concluded that a male in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile.

Bias and inequality exist. It sounds pretty plausible that a man in 1996 would be more likely to earn an income over $50,000 than a woman with a similar profile. Should it be that way? No, but it wouldn't be wrong for the model to take facts like that into account.

[-] Kerfuffle@sh.itjust.works 62 points 10 months ago

Alternatively:

Staff: Uh, the blocking feature is having some issues.

Emu: Well, fix it.

Staff: No one knows how that part works and you fired the guy who wrote it. And then you insulted him.

Emu: Meh, just remove the whole feature.

19
std::any::Any for slices? (sh.itjust.works)
submitted 10 months ago* (last edited 10 months ago) by Kerfuffle@sh.itjust.works to c/rustlang@lemmyrs.org

I recently ran into an issue where I wanted to use Any for slices. However, it only allows 'static types (based on what I read, this is because you get the same TypeId regardless of lifetimes).

I came up with this workaround which I think is safe:

use std::{
    any::{Any, TypeId},
    marker::PhantomData,
};

#[derive(Clone, Debug)]
pub struct AnySlice<'a> {
    tid: TypeId,
    len: usize,
    ptr: *const (),
    marker: PhantomData<&'a ()>,
}

impl<'a> AnySlice<'a> {
    pub fn from_slice<T: Any>(s: &'a [T]) -> Self {
        Self {
            len: s.len(),
            ptr: s.as_ptr() as *const (),
            tid: TypeId::of::<T>(),
            marker: PhantomData,
        }
    }

    pub fn as_slice<T: Any>(&self) -> Option<&'a [T]> {
        if TypeId::of::<T>() != self.tid {
            return None;
        }
        Some(unsafe { std::slice::from_raw_parts(self.ptr as *const T, self.len) })
    }

    pub fn is<T: Any>(&self) -> bool {
        TypeId::of::<T>() == self.tid
    }
}

edit: Unfortunately it seems like Lemmy insists on mangling the code block. See the playground link below.

T: Any ensures T is also 'static. The lifetime is preserved with PhantomData. Here's a playground link with some simple tests and a mut version: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3116a404c28317c46dbba6ed6824c8a9
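The mut version follows the same idea. Roughly something like this (a minimal sketch; the exact code is in the playground link above, and the into_slice name here is just for illustration). The important parts are that it holds a *mut pointer with a PhantomData<&'a mut ()> marker, and that getting the slice back out consumes the wrapper:

pub struct AnySliceMut<'a> {
    tid: TypeId,
    len: usize,
    ptr: *mut (),
    marker: PhantomData<&'a mut ()>,
}

impl<'a> AnySliceMut<'a> {
    pub fn from_slice<T: Any>(s: &'a mut [T]) -> Self {
        Self {
            tid: TypeId::of::<T>(),
            len: s.len(),
            ptr: s.as_mut_ptr() as *mut (),
            marker: PhantomData,
        }
    }

    // Takes self by value: the wrapper is consumed, so it can never hand
    // out two aliasing mutable slices. It also deliberately doesn't derive
    // Clone for the same reason.
    pub fn into_slice<T: Any>(self) -> Option<&'a mut [T]> {
        if TypeId::of::<T>() != self.tid {
            return None;
        }
        Some(unsafe { std::slice::from_raw_parts_mut(self.ptr as *mut T, self.len) })
    }
}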

It seems to pass Miri, including the mut version (which requires a bit more care to ensure there can only be one mutable reference). Any problems with doing this?

[-] Kerfuffle@sh.itjust.works 59 points 10 months ago

To be clear, the bot will use the ingredients the user specifically tells it to use. It's not coming up with human flesh on its own.


Why?

Even though green coffee beans tend to be heavier due to the higher water content, generally it's cheaper to roast your own compared to buying them pre-roasted.

You can roast the same beans at different levels to get some variety without having to go out and buy a new batch.

It's kind of fun and a decent conversation topic.

Notes

Don't be scared by how long this post is. It basically just comes down to: spread the beans on a cookie sheet, put them in a preheated oven, wait around 12-15 minutes, then take them out and cool them.

Since we're talking about roasting beans, naturally you're going to need a grinder to actually use them.

The process will create some smoke, even with a light roast. Basically, darker roast, more smoke. So far I've mainly done pretty light roasts and even though my kitchen doesn't have much ventilation (and my oven doesn't have fancy modern contraptions like, you know, a light or a fan) it hasn't been an issue.

Your oven should be reasonably clean if you don't want the roasted coffee to taste like random stuff.

If you're a super coffee snob and it has to be perfect, this may not be for you. It's pretty easy, but odds are the first few tries aren't going to be perfect especially if you like darker roasts.

You're going to want something like a large metal mixing bowl and colander for the cooling process. My colander is plastic, so you can probably get away with that if you don't put the red hot beans in it directly out of the oven.

You'll also probably need access to an outside area where bits of coffee chaff blowing around aren't going to bother people. I don't think there's really an easy way to deal with coffee chaff indoors.

By the way, don't try to grind green coffee beans in a normal grinder. They are insanely, and I mean insanely, hard and tough. You'll destroy your grinder unless it is an absolute tank. (I'd say it's also not really worth trying; green coffee didn't taste very good to me.)

How

Here's the process:

  1. Start preheating your oven to 500°F/260°C. (Some people say as hot as possible; some use a slightly lower temperature like 460-475°F.)
  2. Get a cookie sheet ready. Just a standard cookie sheet. Mine aren't super clean so I put a layer of silver foil on it. Don't preheat the cookie sheet itself.
  3. Measure out about 1 cup of green coffee beans. (I've found you can fit about 2 cups on a single sheet but it's probably better to start small.) You want to make sure the beans are spread out evenly in a single layer.
  4. Look for beans that are discolored/damaged and toss them away. Don't be a perfectionist though, just get rid of 10-15 of the worst looking beans. Something like that.
  5. Place the cookie sheet in the oven once it's reached the correct temperature. I put mine on the bottom rack near the (electric) heating element. If you're going for a darker roast, I guess this might make burning them more likely.
  6. Set a timer for ~12 minutes. I wouldn't recommend roasting longer than 14 minutes your first time.
  7. Now you wait a bit. Probably around the 8 minute mark, you're going to start hearing sharp cracking/popping sounds. Don't worry, the beans won't jump around like popcorn and the sound is fairly loud so you're not likely to miss it. At this point (or in 1-2 minutes) you can remove the beans and have a light roast. This point is known as the "first crack".
  8. After a couple of minutes, the sounds will die off and you won't hear anything for a little bit. If you keep roasting, you'll start to hear a softer, more muted crackling sound start. This is the "second crack". I would not recommend roasting past this point until you're comfortable with the process and have an idea of how roasted the beans are at this point. If you roast much longer, it's very easy to burn them and there's also going to be a lot more smoke.
  9. Remove the beans from the oven. You can let them rest for 1-2 minutes on the cookie sheet if you want, then transfer to something like a metal mixing bowl. It has to be something that can deal with 500°F stuff touching its surface.
  10. Ideally get another mixing bowl/colander/whatever as well. Pouring the beans back and forth through the air is a good way to cool them off and remove chaff. What's chaff you ask? The beans are coated with a papery layer of chaff. Don't worry though, once they're roasted it's really easy to remove. You want to try to cool off the beans pretty quickly at this point.
  11. Go outside and blow gently on the roasted beans in your bowl. You should see a bunch of super light, papery chaff fly out. You can pour the hot beans from one bowl to another, and if there's a bit of a breeze that'll help a lot. Otherwise, you can just blow on them. You could also stir them around with a wooden spoon or something to encourage the chaff to separate.
  12. Once the chaff is mostly gone (it's fine if there's a little left, or little pieces stuck to some beans) and the beans are fairly cool you can just leave them in a safe place for around 12 hours to fully cool and vent CO2. Don't put them in a sealed container for the first 12-ish hours.

Conclusion

One thing to note is you don't want to actually grind/use the beans for at least 12 hours. It might seem unintuitive, but from what I've read, as freshly roasted as possible isn't necessarily best. Depending on the beans/roast level, the coffee might reach its optimal tastiness even a couple of weeks after roasting.

I'm far from an expert, but feel free to ask questions in the comments if you want. I can recommend a grinder/beans to get started with if anyone needs information like that.


This subject is kind of niche, but hey... It's new content of some kind at least! Also just want to be up front: These projects may have reached the point of usefulness (in some cases) but they're also definitely not production ready.


ggml-sys-bleedingedge

GGML is the machine learning library that makes llama.cpp work. If you're interested in LLMs, you've probably already heard of llama.cpp by now. If not, this one is probably irrelevant to you!

ggml-sys-bleedingedge is a set of low-level bindings to GGML which are automatically generated periodically. Theoretically it also supports stuff like CUDA, OpenCL, and Metal via feature flags, but this is not really tested.

Repo: https://github.com/KerfuffleV2/ggml-sys-bleedingedge

Crate: https://crates.io/crates/ggml-sys-bleedingedge


llm-samplers

You may or may not already know this: When you evaluate an LLM, you don't get any specific answer back. LLMs have a list of tokens they understand, which is referred to as their "vocabulary". For LLaMA models, this is about 32,000 tokens. So once you're done evaluating the LLM, you get a list of ~32,000 f32s out of it, one score per token, representing how likely each token is to come next.

The naive approach of just picking the most probable token ("greedy sampling") actually doesn't work that well, so there are various approaches to filtering, sorting and selecting tokens to produce better results.
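Just to illustrate the idea (this is purely a sketch, not the llm-samplers API; the function names and the caller-supplied rng_roll parameter are made up for the example), here's greedy sampling next to a bare-bones top-k sampler over raw logits:

// Greedy sampling: just take the single highest-scoring token.
// Assumes logits is non-empty.
fn greedy(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.total_cmp(b))
        .map(|(i, _)| i)
        .expect("logits must not be empty")
}

// Top-k sampling: keep only the k highest-scoring tokens, softmax them,
// then pick one proportionally to its probability. rng_roll is a uniform
// random value in [0, 1) supplied by the caller. Assumes k > 0.
fn top_k(logits: &[f32], k: usize, rng_roll: f32) -> usize {
    // Sort token indices by logit, descending, and keep the k best.
    let mut indexed: Vec<(usize, f32)> = logits.iter().copied().enumerate().collect();
    indexed.sort_by(|(_, a), (_, b)| b.total_cmp(a));
    indexed.truncate(k);
    // Softmax over the survivors (subtracting the max for numerical stability).
    let max = indexed[0].1;
    let total: f32 = indexed.iter().map(|(_, l)| (l - max).exp()).sum();
    // Walk the cumulative distribution until we pass rng_roll.
    let mut cumulative = 0.0f32;
    for (idx, l) in &indexed {
        cumulative += (l - max).exp() / total;
        if rng_roll < cumulative {
            return *idx;
        }
    }
    indexed[indexed.len() - 1].0
}

Real samplers chain more steps than this (temperature, top-p, repetition penalties, etc.), but they're all just transformations over that same list of per-token scores.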

Repo: https://github.com/KerfuffleV2/llm-samplers

Crate: https://crates.io/crates/llm-samplers


rusty-ggml

Higher-level bindings built on the ggml-sys-bleedingedge crate. Not too much to say about this one: if you want to use GGML in Rust, there aren't that many options, and using low-level bindings directly isn't all that pleasant.

I'm actually using this one in the next project, but it's very, very alpha.

Repo: https://github.com/KerfuffleV2/rusty-ggml

Crate: https://crates.io/crates/rusty-ggml


smolrsrwkv

If you're interested in LLMs, most (maybe all) of the models you know about like LLaMA, ChatGPT, etc are based on the Transformer paradigm. RWKV is a different approach to building large language models: https://github.com/BlinkDL/RWKV-LM

This project started out "smol" as an attempt to teach myself about LLMs, but I've gradually added features and backends. It's mostly useful as a learning aid and an example of how to use some of the other projects I made. In addition to being able to run inference using ndarray (pretty slow), it now supports GGML as a backend, and I'm in the process of adding llm-samplers support.

Repo: https://github.com/KerfuffleV2/smolrsrwkv


repugnant-pickle

Last (and possibly least) is repugnant-pickle. As far as I know, it is the only Rust crate available that will let you deal with PyTorch files (which are basically zipped up Python pickles). smolrsrwkv also uses this one to allow loading PyTorch RWKV models directly without having to convert them first.

If that's not enough of a description: Pickle is the default Python data serialization format. It was designed by crazy people, though: it's extremely difficult to interoperate with unless you're Python, because it's basically a little stack-based virtual machine that can call into Python classes. Existing Rust crates don't fully support it.

repugnant-pickle takes the approach of best-effort scraping pickled data rather than trying to be 100% correct and can deal with weird pickle stuff that other crates throw their hands up at.
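To give a rough feel for the stack-machine flavor (purely illustrative; this is nothing like repugnant-pickle's actual API), here's a toy decoder in Rust for a handful of protocol-2 opcodes, enough to handle what pickle.dumps([1, 2, 3], protocol=2) produces:

#[derive(Debug, PartialEq)]
enum Value {
    Int(i64),
    List(Vec<Value>),
    Mark, // internal sentinel pushed by the MARK opcode
}

fn decode(data: &[u8]) -> Option<Value> {
    let mut stack: Vec<Value> = Vec::new();
    let mut it = data.iter().copied();
    while let Some(op) = it.next() {
        match op {
            0x80 => { it.next()?; }                      // PROTO: skip the version byte
            b'q' => { it.next()?; }                      // BINPUT: skip the memo index
            b']' => stack.push(Value::List(Vec::new())), // EMPTY_LIST
            b'(' => stack.push(Value::Mark),             // MARK
            b'K' => {                                    // BININT1: one-byte unsigned int
                let n = it.next()?;
                stack.push(Value::Int(n as i64));
            }
            b'e' => {                                    // APPENDS: pop everything back to
                let mark = stack.iter().rposition(|v| *v == Value::Mark)?;
                let items: Vec<Value> = stack.drain(mark + 1..).collect();
                stack.pop();                             // the MARK, then extend the list
                match stack.last_mut()? {                // sitting underneath it
                    Value::List(l) => l.extend(items),
                    _ => return None,
                }
            }
            b'.' => return stack.pop(),                  // STOP: result is the top of stack
            _ => return None,                            // anything else: give up
        }
    }
    None
}

// pickle.dumps([1, 2, 3], protocol=2) == b"\x80\x02]q\x00(K\x01K\x02K\x03e."
// decode(b"\x80\x02]q\x00(K\x01K\x02K\x03e.") == Some(List([Int(1), Int(2), Int(3)]))

Real pickles also involve memoization, GLOBAL lookups that can pull in arbitrary Python classes, and so on, which is where the "best-effort" part comes in.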

Repo: https://github.com/KerfuffleV2/repugnant-pickle

Crate: TBD


Apparently Lemmy copied the new reddit layout, which shoves everything into the middle of the screen and wastes a massive amount of space. Even on the monitor I oriented vertically this is the case: the post I'm typing right now is using like 30% of the available screen real estate and wasting the rest.

My philosophy has always been that if reddit removed support for the old style, that's when I'd stop using reddit. Switching to Lemmy is like switching to new reddit though.

I made an account, but I can't really see using this as a replacement. I'd guess (but I might be wrong) that the type of people clinging to the old reddit style are also the most likely to do something like switch to Lemmy out of principle.

(I looked around and it doesn't seem like there are any browser addons or userscripts to restyle it either.)
