interdimensionalmeme

joined 3 years ago
[–] interdimensionalmeme@lemmy.ml 1 points 2 days ago (2 children)

Thanks, although what I'm looking for is a file that will make a KDE Plasma desktop with Chromium and LibreOffice installed. Something I can plop on a USB stick and hand to grandma so she can Linux up her obsoleted Win10 PC, without me having to explain what a NixOS is.

I understand she probably will not understand the nuances of an immutable-filesystem-based OS, but that's fine. I simply do not have the energy to tailor a file for each individual grandma that I have.

Thanks, that's exactly what I'm looking for.
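Something along these lines is what I mean; a rough, untested sketch of such a configuration.nix (option names shift between NixOS releases, and the user name is just an example):

```nix
{ config, pkgs, ... }:
{
  # Desktop: KDE Plasma behind the SDDM login screen
  services.xserver.enable = true;
  services.displayManager.sddm.enable = true;
  services.desktopManager.plasma6.enable = true;

  # The apps grandma actually needs
  environment.systemPackages = with pkgs; [
    chromium
    libreoffice
  ];

  # A normal user account
  users.users.grandma = {
    isNormalUser = true;
    extraGroups = [ "wheel" "networkmanager" ];
  };
}
```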

[–] interdimensionalmeme@lemmy.ml 1 points 3 days ago (4 children)

Ah, I thought it was all one file, and that maybe it would just be a matter of commenting out the system section or something.

But I do mean which apps are installed, what desktop environment; really the user interface, the end-user experience, is what I was wondering about.

So is there such a repository or not? Is every NixOS user writing the configuration file from scratch in Vim?

I figured there would be a repository where most users just pick a ready-made file and that's the end of it, without really having to learn the syntax of that file and write it by hand.

[–] interdimensionalmeme@lemmy.ml 3 points 3 days ago (7 children)

Probably, yes!

I meant the file that decides how everything works.

I figured that if you sent me that file from your system, I'd get your system exactly as you designed it.

I think it would be great to just load random ones of these from other people, to see how they like their systems.

And then I could pick and choose the aspects I like while also trying many radically different ways of doing things, as fast as my system can reboot!

You'd be surprised; many humans have simply no backbone, common sense, or self-respect, so I think they very probably still would, in large numbers. The proof is Facebook and Palantir.

Because capitalism (and religion before it) told us it would come in the future, as long as we worked as hard as possible in the present.

In the case of religion, this was after you died, until people figured out that was a little too convenient, a little too much of a blank cheque that leaves very little room for recourse if things don't turn out as advertised.

In capitalism, "deferred gratification" is sold as a virtue, a sign of good moral character; you are made responsible for your own happiness in a way that requires continual vigilance.

[–] interdimensionalmeme@lemmy.ml 1 points 3 days ago (1 children)

Yes, I find it baffling that this does not yet exist. I was installing Debian the other day, and the incessant one-question-at-a-time installation, with long delays between the questions, was aggravating, particularly since none of these questions really needed to be answered at the time.

Proxmox does it better, but still with annoying questions and limitations, like having a mandatory static IP address and making you enter an email address for notifications. This is all actually optional stuff, and it could all be dealt with after the install is completed.

All Firefox really needed to be, once Google took over everything, was a viable alternative, one that found a way to metabolize all this cash without damaging Google's own cash machine or threatening its actual dominance.

For Google, the pittance they give Firefox is a very cheap insurance policy against anti-trust legislation. Just like Intel with AMD, this shows how toothless liberal anti-trust law is: even if it were really being enforced, it cannot handle a token second player; it cannot handle controlled opposition that is credible and believable. So an actual thriving ecosystem doesn't need to exist; we just get duopolies instead of monopolies, but in practice we get ducked up the cloaca just the same.

[–] interdimensionalmeme@lemmy.ml 0 points 3 days ago (2 children)

By photo ID, I don't mean just any photo; I mean "photo ID" cryptographically signed by the state, certificates checked, database pinged, identity validated, the whole enchilada.

If rendering data for scrapers was really the problem, then the solution is simple: just offer downloadable dumps of the publicly available information. That would be extremely efficient and cost fractions of pennies in monthly bandwidth. Plus, the data would be far more usable for whatever they are using it for.

The problem is trying to have freely available data while the host maintains the ability to leverage this data later.

I don't think we can have both of these.

If you allow my SearXNG search scraper, then an AI scraper is indistinguishable from it.

If you mean "Google and DuckDuckGo are whitelisted", then Lemmy will only be searchable there, on those specific whitelisted hosts. And Google's search index is also an AI scraper bot.

 

Finally got a 3D printer, but the first thing I wanted to print... the model is $400 USD. It's a piece of machinery I repair at work. I just wanted to print it as a decoration for my toolbox, but that's almost a week's wages after taxes for me, so :( Maybe I can find it on the high seas?

 

Hi. So, I wipe all cookies on every restart of Firefox by default.

However, there are a very few cookies I would like to restore, and only into certain multi-account containers.

They are the session cookies for the few websites I log in to, because it's annoying to have to log in again on every restart.

But I still want to wipe every other cookie they store.

I tried to make a bookmarklet that can save the session cookie.

For example, this:

```javascript
javascript:(function() {
    function getCookie(name) {
        var value = "; " + document.cookie;
        var parts = value.split("; " + name + "=");
        if (parts.length == 2) return parts.pop().split(";").shift();
    }
    var cookieValue = getCookie('session_id');
    if (cookieValue) {
        var data = new Blob([`session_id=${cookieValue}`], {type: 'text/plain'});
        var a = document.createElement('a');
        a.style.display = 'none';
        document.body.appendChild(a);
        var url = window.URL.createObjectURL(data);
        a.href = url;
        a.download = 'session_id.txt';
        a.click();
        window.URL.revokeObjectURL(url);
    } else {
        alert('Cookie "session_id" does not exist.');
    }
})();
```

Unfortunately, unlike with regular cookies, this doesn't work: it reports that the cookie doesn't exist. (Presumably because session cookies are usually set with the HttpOnly flag, which hides them from document.cookie.)

I would then have made another bookmarklet which creates the cookie from that file.
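Something like this, perhaps (untested sketch; note it has the same limitation, since document.cookie cannot create HttpOnly cookies either):

```javascript
javascript:(function () {
  // Pick a file containing "name=value" and set it as a cookie on this page.
  var input = document.createElement("input");
  input.type = "file";
  input.onchange = function () {
    var reader = new FileReader();
    reader.onload = function () {
      document.cookie = reader.result.trim() + "; path=/; secure";
    };
    reader.readAsText(input.files[0]);
  };
  input.click();
})();
```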

What I really need is an add-on that lets me specify which cookies to save and restore, and in which multi-account container.
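If I were writing that add-on, I imagine the core of it would look something like this (a sketch against the WebExtensions cookies API; the KEEP list and container id are just example values, and the manifest would need the "cookies", "storage", and host permissions):

```javascript
// Which cookies should survive the wipe, and in which container ("cookie store").
const KEEP = [
  { domain: "github.com", name: "user_session", storeId: "firefox-container-1" },
];

// Before shutdown: stash the cookies we want to keep.
async function stash() {
  const saved = [];
  for (const filter of KEEP) {
    saved.push(...await browser.cookies.getAll(filter));
  }
  await browser.storage.local.set({ saved });
}

// On next startup: put them back into the right container.
async function restore() {
  const { saved = [] } = await browser.storage.local.get("saved");
  for (const c of saved) {
    await browser.cookies.set({
      url: "https://" + c.domain.replace(/^\./, "") + c.path,
      name: c.name,
      value: c.value,
      path: c.path,
      secure: c.secure,
      httpOnly: c.httpOnly,
      storeId: c.storeId,
      expirationDate: c.expirationDate,
    });
  }
}
```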

 

So, I have a pretty big games collection on Steam, and the majority of these games are offline single-player games.

And sometimes I fear just losing access to all of them!

I had quit piracy entirely in the 2010s during the golden age of streaming, but now I'm increasingly alienated from it all.

I could hunt down the crack for each and every single game I own, but I would like to know: is there a better way?

Could I just crack my offline Steam library and play my games offline forever?

 

Also, it seems kind of scary that this implies a future where so many people are in prison that their vote could actually tip the balance?

 

A user checking out one of these URLs does not want to filter only local posts on that instance.

On all instances, this URL should mean "show me all of /c/piracy on all federated instances".

If you really mean /c/piracy only on that instance, then add something to the URL.

The current convention breaks the most important aspect of federation and makes it a vestigial appendage.

The current way has users asking the question: /c/piracy, but on which instance?

So now they'll all join the same instance. You wouldn't post anywhere else, since no one would ever see it.

It's a recipe for centralization.

I think this is obvious to most users; we're dealing with "Voat with extra steps" here.

 

For example, let's take the website github.com

If you don't want to have to log in every time you restart the browser, you have to whitelist the entire website.

But you would only need to whitelist the following cookies to do so:

on .github.com
logged_in

on github.com
_device_id
user_session
__Host-user_session_same_site
_gh_sess

I found an add-on called "Cookie Quick Manager", which is a great way to inspect your cookies on a per-site basis, and it's container-aware.

In that add-on there is a "protect cookie" function.

Unfortunately, it only protects cookies from getting deleted by "Cookie Quick Manager" itself, not by Firefox's "delete data when Firefox is closed", but that would have been a fantastically convenient way to handle this.

I think what would make sense is the ability to append cookie names and container names to the "delete data when Firefox is closed" exception list.

So instead of just

https://github.com

You might be able to specify containername!cookiename@https://github.com

With both the containername part and the cookiename part being optional limits on the whitelisting.
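Parsing such an entry would be trivial. A quick sketch (the syntax itself is hypothetical, of course):

```javascript
// Parse a hypothetical "container!cookie@https://site" exception entry.
// Both the container part and the cookie part are optional.
function parseException(entry) {
  const m = entry.match(/^(?:([^!@]+)!)?(?:([^@]+)@)?(https?:\/\/.+)$/);
  if (!m) return null;
  const [, container, cookie, origin] = m;
  return { container: container ?? null, cookie: cookie ?? null, origin };
}

parseException("Work!user_session@https://github.com");
// -> { container: "Work", cookie: "user_session", origin: "https://github.com" }
parseException("https://github.com");
// -> { container: null, cookie: null, origin: "https://github.com" }
```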

Here is a mockup of what that might look like

 
 

This is a big problem. It creates the illusion that /c/cats on one particular instance is the real /c/cats.

This is the root of re-centralization and it must be pulled out.

 

Hi,

If you're like me, you're probably seeing a lot of stuff you've already seen in Jerboa.

On Reddit this didn't happen, because the site takes into account how many times a post was shown to you; the more you've seen it, the quicker it disappears from your version of the front page.

Now of course Jerboa could and should do this, but I think there are two opportunities to make this better than Reddit: first, putting the user squarely in control of the content discovery algorithm; second, soliciting user input, asking them to lend a hand in the social sorting algorithm that is voting.

So, a user voting should be a way to tell Jerboa "I've seen this", and it shouldn't show it on my feed anymore. To prevent bias, a neutral vote should be added.

Next is giving the user more explicit control of the algorithm. When you vote up or down, you're sorting for the community but also for yourself. Jerboa should take into account the user's voting patterns and recommend content based on what the user likes.

These voting patterns should be publicly exchanged in "out of band" communication. Jerboa could then use them to further help with content discovery in the following way:

"My user likes X,Y,Z, after consulting public voting patterns, we can see that most users who like X,Y,Z often also like A,B,C and dislike I,J,K"

This is how Netflix, YouTube and other algorithms find stuff you like.

The difference is that now this runs on your computer. You can see your algorithm's weights and edit them, place extra filters on them, and, most importantly, swap, export, and import sorting weights, exchange them with other users, craft them for specific usages, etc.
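The mechanics are simple enough; here is a rough sketch of the kind of user-user collaborative filtering I mean (everything here, data shapes included, is made up for illustration; votes are +1 or -1 per post):

```javascript
// Score unseen posts by how users with similar voting patterns voted on them.
// A voting pattern is an object mapping post id -> +1 (up) or -1 (down).
function similarity(a, b) {
  // Cosine similarity over sparse vote vectors.
  let dot = 0, na = 0, nb = 0;
  for (const post in a) {
    if (post in b) dot += a[post] * b[post];
    na += a[post] ** 2;
  }
  for (const post in b) nb += b[post] ** 2;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

function recommend(myVotes, publicVotes, candidatePosts) {
  const weight = {};
  for (const post of candidatePosts) {
    let score = 0;
    for (const theirVotes of publicVotes) {
      if (post in theirVotes) {
        score += similarity(myVotes, theirVotes) * theirVotes[post];
      }
    }
    weight[post] = score; // the editable, exportable "algorithm weight"
  }
  return [...candidatePosts].sort((x, y) => weight[y] - weight[x]);
}
```

Since the weights end up in a plain object, they can be inspected, edited, filtered, and exchanged exactly as described above.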

Plus, of course, basic functions like a chronological view that doesn't cheat or insert ads.

Algorithmic content discovery under user control is going to be the biggest user benefit of switching to Lemmy from a private, commercial, centralized platform. Our data will finally serve us!

 

For example, lemmy.ml/c/pics.

Would it be something like

lemmy.ml/c/pics!all

lemmy.ml/c/all/pics

lemmy.ml/all/pics

lemmy.ml/all/c/pics

all.lemmy.ml/c/pics

?

 