tal

joined 1 year ago
[–] tal 3 points 2 weeks ago* (last edited 2 weeks ago)

Just realized that I flipped the ordering directive, which worked out because Flux apparently didn't understand it: I wound up with the desired ordering anyway. I'd intended to write "left to right".

[–] tal 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Not yet! One thing that AI image generation right now is not so good at is maintaining a consistent portrayal of a character from image to image, which is something you want for illustrating a story.

You might be able to do something like that with a 3D modeler to pose characters, generate a wireframe, and then feed that wireframe into ControlNet. Or if you have a huge corpus of existing images of a particular character portrayed in a particular way, you could maybe create new images with them in new situations. But without that, it's hard to go from a text description to many images that portray a character consistently. For one image, it works, and for some things, that's fine. But you'd have a hard time doing, say, a graphic novel that way.

I suspect that doing something like that is going to require having models that are actually working with 3D internal representations of the world, rather than 2D, at a bare minimum.

[–] tal 1 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

it starts flipping frames between the nodes and a different set of nodes.

Yeah, I don't know what would cause that. I use it in Firefox.

Maybe try opening it in Chromium, or in a private window to rule out addons (assuming your Firefox install is set up not to run addons in private windows)?

I'm still suspicious of resource consumption, either RAM or VRAM. I don't see another reason that you'd suddenly smack into problems when running ComfyUI.

I'm currently running ComfyUI and Firefox and some relatively-light other stuff, and I'm at 23GB RAM used (by processes, not disk caching), so I wouldn't expect that you'd be running into trouble on memory unless you've got some other hefty stuff going on. I run it on a 128GB RAM, 128GB paging NVMe machine, so I've got headroom, but I don't think that you'd need more than what you're running if you're generating stuff on the order of what I am.

goes investigating

Hmm. Currently all of my video memory (24GB) is being used, but I'm assuming that that's because Wayland is caching data or something there. I'm pretty sure that I remember having a lot of free VRAM at some point, though maybe that was in X.

considers

Let me kill off ComfyUI and see how much that frees up. Operating on the assumption that nothing immediately re-grabs the memory, that'd presumably give a ballpark for VRAM consumption.

tries

Hmm. That went down to 1GB for non-ComfyUI stuff like Wayland, so ComfyUI was eating all of that.

I don't know. Maybe it caches something.

experiments further

About 17GB (this number and others not excluding the 1GB for other stuff) while running, down to 15GB after the pass is complete. That was for a 1280x720 image, and I was loading the SwinIR upscaler; while not used, it might be resident in VRAM.

goes to set up a workflow without the upscaler to generate a 512x512 image

Hmm. 21GB while running. I'd guess that ComfyUI might be doing something to try to make use of all free VRAM, like, do more parallel processing.

Lemme try with a Stable Diffusion-based model (Realmixxl) instead of the Flux-based Newreality.

tries

About 10GB. Hmm.

kagis

https://old.reddit.com/r/comfyui/comments/1adhqgy/how_to_run_comfyui_with_mid_vram/

It sounds like ComfyUI also supports the --midvram and --lowvram flags, but that it's supposed to automatically select something reasonable based on your system. I dunno, haven't played with that myself.
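
To sketch how one might choose between those flags (flag names are from the linked thread, ComfyUI's actual flag set may differ, and the thresholds here are rough guesses from my own testing, not anything official):

```python
def pick_vram_flag(free_vram_gb: float) -> str:
    """Pick a ComfyUI launch flag from free VRAM, in GB.

    Flag names per the linked thread; thresholds are rough
    guesses from my testing, not official recommendations.
    """
    if free_vram_gb < 10:
        return "--lowvram"
    if free_vram_gb < 16:
        return "--midvram"
    return ""  # plenty of VRAM; let ComfyUI use its defaults

# e.g. on a 12GB card:
pick_vram_flag(12)  # '--midvram'
```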

tries --lowvram

I peak at about 14GB for ComfyUI at 512x512, was 13GB for most of generation.

tries 1280x720

Up to 15.7GB, down to 13.9GB after generation. No upscaling, just Newreality.

Hmm. So, based on that testing, I wouldn't be surprised if you're exhausting your VRAM running Flux on a GPU with 12GB. I'm guessing that it might run dry on cards below 16GB (keeping in mind that it looks like other stuff is consuming about 1GB for me). I don't think I have a way to simulate running the card with less VRAM than it physically has to see what happens.
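
To spell out the arithmetic I'm doing here (the ~1GB baseline and the peak numbers are just my measurements from above):

```python
def likely_fits(card_vram_gb: float, observed_peak_gb: float,
                other_usage_gb: float = 1.0) -> bool:
    """Back-of-envelope check: does an observed generation peak,
    plus whatever the desktop is already holding, fit on a card?"""
    return card_vram_gb >= observed_peak_gb + other_usage_gb

likely_fits(12, 15.7)  # False: the 1280x720 Flux peak I saw
likely_fits(24, 15.7)  # True
```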

Keep in mind that I have no idea what kind of memory management is going on here. It could be that pytorch purges stuff if it's running low and doesn't actually need that much, so these numbers are too conservative. Or it could be that you really do need that much.

Here's a workflow (it generates a landscape painting, something I did a while back) using a Stable Diffusion XL-based model, Realmixxl (note: model and webpage include NSFW content), which ran with what looked like a maximum VRAM usage of about 10GB on my system using the attached workflow prompt/settings. You don't have to use Realmixxl; if you have another model, you should be able to just choose that one instead. But maybe try running it and see if those problems go away? Because if that works without issues, that'd make me suspicious that you're running dry on VRAM.

realmixxx.json.xz.base64/Td6WFoAAATm1rRGBMDODKdlIQEWAAAAAAAAAJBbwA/gMqYGRl0APYKAFxwti8poPaQKsgzf7gNj HOV2cLGoVLRUIxJu+Mk99kmS9PZ9/aKzcWYFHurbdYORPrA+/NX4nRVi/aTnEFuG5YjSEp0pkaGI CQDQpU2cQJVvbLOVQkE/8eb+nYPjBdD/2F6iieDiqxnd414rxO4yDpswrkVGaHmXOJheZAle8f6d 3MWIGkQGaLsQHSly8COMYvcV4OF1aqOwr9aNIBr8MjflhnuwrpPIP0jdlp+CJEoFM9a21B9XUedf VMUQNT9ILtmejaAHkkHu3IAhRShlONNqrea4yYBfdSLRYELtrB8Gp9MXN63qLW582DjC9zsG5s65 tRHRfW+q7lbZxkOVt4B21lYlrctxReIqyenZ9xKs9RA9BXCV6imysPX4W2J3g1XwpdMBWxRan3Pq yX5bD9e4wehtqz0XzM38BLL3+oDneO83P7mHO6Cf6LcLWNzZlLcmpvaznZR1weft1wsCN1nbuAt5 PJVweTW+s1NEhJe+awSofX+fFMG8IfNGt5tGWU5W3PuthZlBsYD4l5hkRilB8Jf7lTL60kMKv9uh pXv5Xuoo9PPj2Ot2YTHJHpsf0jjT/N5Z65v9CDjsdh+gJ5ZzH8vFYtWlD1K6/rIH3kjPas23ERFU xoCcYk7R6uxzjZMfdxSy1Mgo6/VqC0ZX+uSzfLok1LLYA7RcppeyY4c/fEpcbOLfYCEr9V+bwI4F VDwzBENC412i8JTF8KzzqA0fPF8Q4MwAeBFuJjVq1glsFgYqTpihnw5jVc5UfALRSXS2vjQR78v5 XDmiK7EvUIinqDJjmCzV+dpnTbjBAURsZNgCU+IJEQeggVybB+DkjOGnr/iIjvaSylO3vu9kq3Kn Dhzd/kzXutPecPtterHkiPjJI+5a9nxJPMLMuAqmnsh2sk7LX6OWHssHhxd/b2O2Y4/Ej0WoIZlf GD2gOt57hHvTrQ/HaG1AA8wjbHsZXWW9MXbJtDxZbECLIYGfPW2tQCfBaqYlxGXurrhOfZlKPUVx K9etDItoDHdwUxeM4HbCdptOjcSWAYgjwcQ4gI3Ook/5YLRbL+4rIgOIwz643v/bMh2jiLa4MEYm 9a4O8GL4xED3kvRQSgt0SkkIRIHO1AJ042TQC450OKwEtfFgBpmFQ+utgUOObjA409jIEhMoOIeP kDqv62f0Uu4qojiX7+o8rrWp/knAkDoFWam1E3ZKEiCINRfmRAMqTsPr8Wq/TQZM5OKtMGBLK9LY GxLOOUBCahU5iNPm01X3STNRmQKtATPgqPcszNeLONnZqcWusbZKvwkCoX4Z75T+s+852oo65Li6 7WQ3SaDBrs47qXeUobVIOjlXO2ln2oRRtTRfbk7/gD6K6s5kBjPexHEEIGseJysilyHyT2+VMtSy cyET83Exi5KEnrtl7XgMI4GM1tDeaq6anNdW1VgXdS4ypk8xqHTpQgszuJCgh3ID5pfvbHzzX0A7 zC5A+4oGk98ihe0dJc+KLzigJuZLk7jX5A7sGkBtht7oKjH8qrYM//DbQXkZbI06h/FP+2aBz5/t U3zTsSHyoU3LwihFOj0TA+DKnZUnm4TJtX6ABQaJPTYwHgQJ/B77VI9A+RYf7qd9o4cGaGrLoOES QdGPFUOqO0vL9EkpTsxgCEjxApBIw1gTCiaBF8Dofp6vBrd1zY1mXP9p1UunaaFZtdmx/vrWkLXQ iO09P6waY+6daKtZ7i+3j0WGvBFHx32toDgd94wGWXBa+hfEEK3d6kq8eGRWJC+OEL9KgUrrN4ki vwPjGe/1DXxzPIvZrMP2BtoxO34E9VuvsnmofW3kZvtLZXC+97RznZ5nIpG4Vk+uOPs1ne/s1UD3 
x0vyTviwiK+lFIu5T3NdxFFssClZSDyFUkzCZUpbsLjcH3dzbwEDX4Vjq6rAz2IbXUGU6aTQ7RO1 Q1iUHaCqHZzNJEKKFcBaL/IGbmeOPUZJ7G3TbEYcMhBPtsmYSwNJkQ0cGj/KKqPF6fxpvNEt+QNh isgyNP+AuP0xxQgwXbxW2kO/3Y70L5+eNs2L8u0gJBHevYTAebv/mORBcNcs8hpFVZLOAahajv0h zj++ssD9BcgBTVMEC+knn0HjVaRjIW3UPsDApNjIsF7h06hWAGG79VGJb3mQ6PcwQAAAALZxoY8E al4jAAHqDKdlAABPKdovscRn+wIAAAAABFla
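
To decode the attached blob back into workflow JSON: strip the "realmixxx.json.xz.base64/" label at the front, base64-decode the rest, and xz-decompress. A sketch, shown round-tripping a stand-in payload rather than the real blob:

```python
import base64
import lzma

def decode_workflow(blob: str) -> bytes:
    """base64-decode then xz-decompress a pasted workflow blob,
    ignoring whatever line breaks the forum inserted."""
    return lzma.decompress(base64.b64decode("".join(blob.split())))

# Round trip with a stand-in payload:
payload = b'{"nodes": []}'
blob = base64.b64encode(lzma.compress(payload)).decode()
assert decode_workflow(blob) == payload
```

On the command line, something like `base64 -d realmixxx.json.xz.base64 | xz -d > realmixxx.json` should do the same thing.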

EDIT: Keep in mind that I'm not an expert on resource consumption here, haven't read material about what the requirements are, and there may be good material out there covering it. This is ad-hoc, five-minutes-or-so testing; my own solution was mostly to just throw hardware at the problem, so I haven't spent a lot of time optimizing workflows for VRAM consumption.

EDIT2: Some of the systems (Automatic1111 I know, dunno about ComfyUI) are also capable, IIRC, of running at reduced precision, which can reduce VRAM usage on some GPUs (though it will affect the output slightly and won't perfectly reproduce a workflow), so I'm not saying that the numbers I give are hard lower limits; it might be possible to configure a system to operate with less VRAM in other ways too. Like I said, I haven't spent a lot of time trying to drive down ComfyUI VRAM usage.

[–] tal 3 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

Are there any specifics as to what the major disagreement was on, or has been in the past? All the article has is:

The coalition leaders meeting was widely reported as a "make or break" meeting for the coalition, with Lindner, in particular, having hinted in the run-up that he was not too worried about the latter.

In his reaction to Scholz's scathing remarks, Lindner accused the chancellor of a "calculated break-up of the coalition" and his coalition partners of "not even accepting" the FDP's proposals for turning the economy around "as a basis for discussion".

Discord about how to revive an ailing economy

The coalition had been at odds for a while, with serious strains on the budget for 2025 and a disappointing performance by the German economy eliciting increasingly different suggestions on how to face and solve the problems.

So I'm assuming that Lindner wants more-economically-liberal policy than Scholz does?

Is there reason to believe that there's sufficient public support in elections to form a red-green coalition, or is it likely that the SPD and Greens would be out of government in a new election?

kagis

https://theweek.com/politics/german-economy-crisis-volkswagen

A snap election could be "disastrous for all three coalition parties," said Reuters. The SPD and the Greens have lost support since the 2021 election, and the FDP "could be ejected from parliament altogether." But the dispute involves fundamental differences: the FDP wants budget cuts, while the other two parties "agree that targeted government spending is needed to stimulate the economy," Reuters said.

That doesn't sound very good for them.

If they're out, and the AfD has been at record-high levels of support, does that mean maybe an incoming AfD government?

[–] tal 2 points 2 weeks ago (4 children)

Full Size

UI: ComfyUI

Model: STOIQNewrealityFLUXSD_F1DAlpha

A cute, adorable, loveable, happy cave spider.

The image is an illustration.

cute-cave-spider-workflow.json.xz.base64/Td6WFoAAATm1rRGBMC0DMpYIQEWAAAAAAAAALNZm2LgLEkGLF0APYKAFxwti8poPaQKsgzf7gNj HOV2cM7dN97GoIvoTF/iSMYEQoJZBJzI3UDxBXrl3MvcMb5e6h3lATBYjawf1klb5xXG0y783VSu rsL7JTqCcpc7ml6EBcQysA/lczXi4/z4NEU2MzJ6gMxLs5qhHCWRzV2fkgD1H3KHbl2of4CNliwR JNef6hoZP4HFMLybTP66HNwa0oszmGtepDwWHXSrlYVAso8imkQU11LM4qXGRXQKKo6EvjyTeYPG eZvQb4FxhvQIkQHLQWfLOyENpBMbl2mOgW0siOFXT72aS9mBHSeYP8AzU1giF+yk4IQh8W8UCh+k wncXTpG2M2mdiF0t0fMlAoOmMD0dDDSlpTIhgkEUgqjuFzi7YRnIgI9xJ4RbMQqrVgcEj4C8LWsP q8Pt9DnxVUmUQXwS04QFJpghbh6a73UsV1kjFiQZ9yo4RhR5ouTehVLXwyXn9E0/MW3nIZGdW6cW BqD/FwjDedfDigE0VsQAWD9OfH1I4pjkNePpmoHErOG2KQ9vec7Rd5iPWUIgxnBiNdg8Hczv3Ggy e02Chw1yzKXcTaIVW6tMxm5H62cyRRwy5a0Rn3KVLVQTyaZt2f+GplozjiLe3T1X8E7sKPoVWNsX cVGRPGouZdNO00VQlpmaKnpf9BXG+ujVGn0WCzsTBZZLqJNfGE5CwOMGqqib2L3pNFesoaku2U4n andtH2bHkiNNf1DpDmkAuNuGvmKRHfBXHVrU6+jcUbAjBZxe4kYsPP2+f5vJqNIWRPankSGF3+GF xjD4ntouwO3ruBHQlRMDf0Lcd6qy4ICW3OakgceBbk2vT42s9thrPuF779tKQ63RSN+nL/R9GyOb Tr7qEL71NSRqsK/hDhb2+lrc6p6LxPOPbHZJLDcLWunTtG8EayOjlE7K4iTB0AKloQg7xShjlDP8 ZQSTf3PwGLUBdJuv5aaqE+2payzFqCXxbp9849RL9f7mnarBlDj9E30qLRZZd9INv06Jk1OddXYY VGS2pDNDNMoFDmgywFxqd1ZWT4JZpPxUW1xUc/8P2FWs0W/o/EiCL99xDIa+LZn2UA8x5pDhZCbt +i3JcwbL8+XTDQHIdIp9RdN1eouuKtuhaklJlHcREEql6EQqldpf9htC6YtIZDdiJJViVSguOt7r iAcrVELFRGbvyrXkWzY7Jr7dt1nOGQrj9GDY3X2habyMXBD/3IIP3uwg05ZDXUVgYqV+fEYqAtE3 IuTiSDFjoLVr7IlByn99cUvfzMLsHud1K7EVi1E0DhNsf68mMyV177Fm3P4y2E4qc7XwrFAJ6GyP N5AVGcYSoozQ34UofUrpmmkU1ZDIQlLBZHICC7wGHkH+mINdXHB9/nOBpcJ/WwfZ5ruyjAh94GG7 mpvrskmGuWP3r2LgV8dPqf03Tnp7Jbe/NNUZKQ4cwh92/BP79cPY/T6iwPEZZS9ofebllBwwuM77 1kQ63ShrnI9yydhDKORd9qFNJSWd2/X/7VZv6gdczosU8YVaxq14UnVawJnkB4S4cN6Fvlwg9HBg IXDP9ITIH7lxG4iUY3gsEzUi8lgs6HYz5FMIYc82jfFnsnK3olAYB9cUPdE/K2nZKAHGRNoBmyvA pQC5dwZhurNZ5G32YRD8MVn4dRuGIllRRhukNVp1TUTebKqbgnsJZvZO1s2GNgtHlt/Dnn9l+jHy d8Ar4Y2gcxE0UobKgrY+1ONGwCWUxD4RZdGqZHdND88qsWd716JfxAwFcZ5r5WtzSkj6O4T+RC6E sTpxtctOPbh0pV+HiBIoaqkmRXmAzbwZeA9QXvJu6uPNXSUAXn6c9HfLlNabpdii7Gr31mNyNOtw 
BSxHDpWGyEvlXWpr3udsC4eypHZtRGl/iJsAAD0CzTvAZNWvDfbmkrAY8Ck7TEdvJ35uujJzDZVV X15wJ74T7ADWsjD/ZCTSjoq3ylt1nO1woSAhwpxdQS8S6rm8owXfQkWc4334/RIAGB6ZcrdNjRjg ORZw05NY5zdg9hE2fdy50SNVgvN2eIwTjp4bsfNt04z5te/h+InrVpvzFFjBL5SWLlOckqy7yoeB AHL0Slu0XJQhQb0aMfMGN8G46K1P5gAAvY1DGN46jOAAAdAMylgAAJlUxG+xxGf7AgAAAAAEWVo=

[–] tal 1 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

When a model is initially being loaded, I see slowdown, but once that has happened, I don't; I see the same with Automatic1111. I regularly do (non-GPU-using) stuff on another workspace while rendering and can't detect any slowdown.

So I don't know what might be the cause. Maybe memory exhaustion? A system that's paging like mad might do that, I guess.

As to an alternative, it depends on what you want to do.

If you've never done local GPU-based image generation, then Automatic1111 is probably the most-widely-used UI (albeit the oldest).

If you want to run Flux and Flux-derived models -- which I'm using to generate my above image -- I believe I recall reading that while Automatic1111 cannot run them -- and maybe that's changed, I have not been monitoring the situation -- the Forge UI can. But I've never used it, so I can't provide any real guidance on setup.

kagis

Yeah, looks like Automatic1111 can't do Flux:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16311

And it looks like Forge can indeed run Flux:

https://sandner.art/flux1-in-forge-ui-setup-guide-with-sdsdxl-tips/

If you're short of VRAM or RAM or something, though, I don't know if Forge will do better than ComfyUI. I think that I might at least try to diagnose what is causing the issue first, as there are some things that can be done to reduce resource usage, like generating images at lower resolution and relying more-heavily on tile-based upscaling. With at least some of the systems -- haven't played around with this in ComfyUI -- there are also command-line options that reduce VRAM usage in exchange for longer compute time, like --medvram or --lowvram in Automatic1111.

I don't think that there's a platform-agnostic way to see VRAM usage. I use a Radeon card on Linux, and there, the radeontop command will show VRAM usage. But I don't know what tools one would use in, say, Windows to look up the same numbers.

On Linux, top will show regular memory usage; you can hit "M" to sort by memory usage. I'm pretty out of date on Windows and MacOS -- probably Task Manager or mmc on Windows, and maybe top on MacOS as well? You may know better than I do if you're accustomed to those platforms.

I can maybe try to give some better suggestions if you can list the OS being used, what GPU you're running it on, and, if you can, how much VRAM and RAM the system has and how much of it is being used.

[–] tal 1 points 2 weeks ago* (last edited 2 weeks ago)

despite editing the .sh file to point to the older tarballed Python version as advised on Github, it still tells me it uses the most up to date one that's installed system wide and thus can't install pytorch.

Can you paste your commands and output?

If you want, maybe on !imageai@sh.itjust.works, since I think that people seeing how to get Automatic1111 set up might help others.

I've set it up myself, and I don't mind taking a stab at getting it working, especially if it might help get others over the hump to a local Automatic1111 installation.

[–] tal 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

venv nonsense

I mean, the fact that it isn't more invisible to the end user is, to me, annoying, and I wish that it could also include a version of Python, but I think that venv is pretty reasonable. It handles non-systemwide library versioning in a reasonably straightforward way, and once you know how to do it, it works the same way for every Python program.

Honestly, if there were just a frontend on venv that set up any missing environment and activated the venv, I'd be fine with it.

And I don't do much Python development, so this isn't from a "Python awesome" standpoint.

[–] tal 3 points 2 weeks ago* (last edited 2 weeks ago)

I called Hillary "Hillary", but that's to distinguish her from Bill Clinton, who I called "Clinton".

Honestly, you have to be very widely known before I'm going to use a single name for general purposes at all rather than a full name, so the set of people who have a chance of getting into the "one name club" is very small.

[–] tal 2 points 2 weeks ago* (last edited 2 weeks ago)

I call Trump "Trump" and Harris "Harris".

[–] tal 1 points 2 weeks ago* (last edited 2 weeks ago)

I think that if you're looking at the Presidential race in particular, you probably want to look specifically at turnout in swing states, where the vote could have been realistically shifted.

Probably a lot of post-mortems happening. I want to see some material from FiveThirtyEight on what shifted from 2020. In the runup to the election, for example, I remember reading that young, non-college-educated Black men polled dramatically more Republican in 2024 than in 2020. That suggests that division around education is becoming more important along party lines. A majority was still voting Democratic, but the shift was large, something crazy, like twenty percentage points. I remember reading another article in the runup saying that Trump had gained slightly among women, also kind of a surprise to me. Now that we've got voting data, though, we can look at county-level results and try to get an idea of which demographics actually shifted their votes and how.

[–] tal 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

autoplay

Submitted videos, which should be what is relevant here, don't autoplay.

I'll also add that:

  • While the community is not explicitly one devoted to combat videos, it should be pretty clear that that is part of what is here. If someone is going to browse such a community, then they may well get videos that contain soldiers being killed.

  • One argument might be "what if someone wants to browse All and click on videos and watch them and finds videos that have death in them morally-objectionable". I don't browse All -- I think that trying to whitelist makes much more sense than blacklisting -- but my personal view is that anyone who does so is implicitly accepting that they're going to get a firehose of content of all sorts. Some of that is going to be political statements that they don't agree with. Some is going to be content that might have language that one might find objectionable. Some might be images that one finds repulsive -- we have one person on !imageai@sh.itjust.works who rather famously likes making "gross" images. Some of it might just be offensive to various parties, like off-color jokes. A very considerable amount of it might be material that a parent might not want their six-year-old seeing, like discussions about sexuality or profanity. Some of it might be religiously-unacceptable to various groups.

    There are a very considerable number of things that some group, somewhere, might object to.

    I do not think that it is reasonable to repurpose NSFW to be a "might not personally like" flag that is placed on anything that anyone out there might potentially not want to see. The scope there is simply too broad. Every group somewhere has their own personal preferences, and has an incentive to try to convert "All" into a feed that matches their personal interests.

    I think that it's fine to recognize that someone, somewhere, might have different views than someone else, and that one day, adding a curation system with finer-grained classification of content to the Threadiverse -- perhaps with someone other than the submitter responsible for adding tags -- may be a good technical solution. My personal view is that "taglists" that users or groups can publish and other users can subscribe to, attaching classifications to the content of other users, are a good way to do this. That is, User A submits content. User B -- who might be a bot or a group of humans -- adds that submitted item to a list they publish, with a classification that might include a recommendation to hide it, to bring it to the attention of a user, or simply to attach a content tag. User C can choose whether or not to subscribe to User B's "tag feed", and their client decides how to interpret that feed; C maybe accepts a delay on content visibility to provide time for tagging to be applied to new content. Tag feeds could also attach tags to users or to communities. That's a scheme that I think might be workable, scales to an infinite number of content preferences, and permits fine-grained content classification, without anyone imposing their content preferences on anyone else or requiring submitters or moderators to understand and adapt to every form of content preference around the world.

    In the absence of such a solution, I am comfortable placing the burden on those who want a particular sort of content to do the filtration themselves, rather than just pushing that content to the other side of a wall for everyone else as well.

    I do not think that trying to repurpose NSFW for other content-filtering purposes is a reasonable approach. The intent of NSFW is to let people browse content in public or workplace environments. It is obviously not completely perfect -- there is no one set of precisely-identical global norms for what is acceptable in public. But there is enough overlap that I think that it's at least possible to talk about that as a concept.
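
The taglist scheme in the second bullet can be made concrete with a small sketch (all the names here -- TagFeed, visible_posts -- are hypothetical, just illustrating the data flow between users A, B, and C):

```python
from dataclasses import dataclass, field

@dataclass
class TagFeed:
    """User B's published feed: maps a post id to the tags B applied."""
    publisher: str
    tags: dict[str, set[str]] = field(default_factory=dict)

def visible_posts(post_ids: list[str], feeds: list[TagFeed],
                  hidden_tags: set[str]) -> list[str]:
    """User C's client: hide any post that a subscribed feed has
    tagged with something C chose to filter out."""
    shown = []
    for pid in post_ids:
        applied = set()
        for feed in feeds:
            applied |= feed.tags.get(pid, set())
        if not applied & hidden_tags:
            shown.append(pid)
    return shown

# C subscribes to B's feed and filters "combat-video":
feed = TagFeed("userB", {"post2": {"combat-video"}})
visible_posts(["post1", "post2"], [feed], {"combat-video"})  # ['post1']
```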

I will add one final note. While I personally do not think that repurposing the NSFW flag in this way is justified -- down that road just lies an infinite number of arguments with various groups who don't want to see various forms of content and want their preferences made the norm for everyone -- if the moderators here ultimately decide to do so, I would then advocate for a different change. Keep !ukraine@sopuli.xyz's NSFW flag off...but create a new sister community, !UkraineWarCombatVideos@sopuli.xyz. Move combat video content to that community, and flag that community NSFW, or at least require submitters there to flag a video NSFW if it contains death (or a close view of death, or whatever). Have each community link to the other in the sidebar. That keeps all the content in the former community other than combat video visible under the prior rules.

I think that there are many problems with this approach, starting with the "infinite groups with their own preferences who will make their own cases to alter the All feed", and then that there are plenty of news articles that contain non-war-video content and analysis but might also contain war videos...think The War Zone. Not to mention that the content on linked pages can change, something that The War Zone often does with embedded video updates. Many news sources do not engage in this form of censorship and are not going to bother segregating their own content. But it's at least a subset of the problems that the proposed "flag the whole Ukraine community NSFW" approach has.

 

Icelandic authorities said residents and emergency responders should be ready to evacuate Grindavík at short notice after a new and ongoing eruption on the Reykjanes Peninsula.

 

In desperate need of air defenses, Ukraine’s Soviet-designed SA-11 Buk mobile air defense systems have been outfitted with Sea Sparrows.

 

A software maker serving more than 10,000 courtrooms throughout the world hosted an application update containing a hidden backdoor that maintained persistent communication with a malicious website, researchers reported Thursday, in the latest episode of a supply-chain attack.

9
submitted 6 months ago* (last edited 6 months ago) by tal to c/fallout@lemmy.world
 

EDIT: Sorry, guys. Tried cross-posting someone's post and somehow got a non-crossposted text post. The original is at:

https://kbin.social/m/PCGaming/t/1051546

 

Ukrainian officials claim that the drone boat, fitted with Grad artillery rockets, has already been used in combat.

 

Scientists discovered that removing specific molecules from developing mice can completely reverse their sex from male to female.
