Shdwdrgn

joined 2 years ago
[–] Shdwdrgn@mander.xyz 6 points 3 weeks ago (2 children)

I suspect it's because there's no consistency in the brightness of vehicle lights. That's one of the reasons I think an incremental light bar would be better: there would be no variation between vehicles. You could even make it more informative by flashing the whole bar when you first brake, so someone behind you can more easily see how much of the bar is lit up.

[–] Shdwdrgn@mander.xyz 2 points 3 weeks ago

That could probably be implemented in most existing vehicles, and at least it would provide more information.

[–] Shdwdrgn@mander.xyz 1 points 3 weeks ago

They don't want to admit they've been screwing us over even though we all know it's happening. All these companies could have rolled out suitable internet speeds a decade earlier but they would rather limit everyone to the lowest common denominator so they don't have to admit just how terrible their equipment is in most locations.

I've gotta say, having city-owned fiber is great. Folks here don't have to wait weeks for Comcast to send out a tech who conveniently never shows up on the scheduled day, and customer service actually has a clue what they're talking about. This is how a public service should operate.

[–] Shdwdrgn@mander.xyz 5 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

I would say cable TV coax has quite a lot more capacity than the providers let on. In my city they offered up to 50mbps at over $100/month. Then they lost their lawsuit trying to prevent the city from installing its own fiber network, and suddenly the cable company decided they could offer 150mbps for around $75/month (with no equipment changes). Once the fiber network started becoming operational (offering 1gbps bidirectional for $50/month), the cable company decided they'd better also offer gigabit connection speeds, but once again they simply flipped a switch to increase your bandwidth. This capability has been in place for quite some time; they just didn't want to offer it, and their illegal "monopoly" gave them no incentive to provide competitive speeds.

*I say "monopoly" even though we technically also have DSL available in town. However when I asked one of the techs why DSL couldn't give me more than 896kps upload speed, I was told that the cable company had an arrangement with them which prevented the DSL from providing the speeds needed by businesses. After the lawsuit that broke up the state-wide bans on other providers, this practice was exposed and also broken up, so now the telco is able to max out their DSL speeds.

[–] Shdwdrgn@mander.xyz 51 points 3 weeks ago (29 children)

I still think rear signaling could be improved dramatically by using a wide third-brake light to show the intensity of braking.

For example -- I have seen some aftermarket turn signals which are bars the width of the vehicle, and show a "moving" signal starting in the center and then progressing towards the outer edge of the vehicle.

So now take that idea and apply it to braking. When you barely have your foot on the brake pedal, it would light a couple lights in the center of your brake signal. Press a little harder and now it's lighting up 1/4 of the lights from the center towards the outside edge of the vehicle. And when you're pressing the brake pedal to the floor, all of the lights are lit up from the center to the outside edges of the vehicle. The harder you press on the pedal, the more lights are illuminated.

Now you have an immediate indication of just how hard the person in front of you is braking. With the normal on/off brake signals, you don't know what's happening until moments later, as you work out how fast you are approaching that car. They could be casually slowing, or they could be locking up their wheels because of an accident in front of them.
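Just to make the mapping concrete, it's a simple proportional thing -- pedal travel maps straight to the number of lit segments on each side of center (toy numbers below, 16 segments per side):

# pedal position 0.0-1.0 -> how many of the 16 segments per side to light
awk -v pedal=0.6 -v segs=16 'BEGIN { printf "light %d of %d segments\n", int(pedal*segs + 0.5), segs }'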

[–] Shdwdrgn@mander.xyz 8 points 3 weeks ago (2 children)

And never forget about the I-D ten T error.

[–] Shdwdrgn@mander.xyz 3 points 1 month ago

Yikes, that sucks... but at least Linux is still usable.

[–] Shdwdrgn@mander.xyz 16 points 1 month ago

Hooray for no safety and hooray for death.

Isn't that the Musk philosophy? Kill anyone who gets in the way, and sue everyone else who survives his death traps?

[–] Shdwdrgn@mander.xyz 20 points 1 month ago (2 children)

If you want stability, you probably can't beat Debian, and you should be fairly used to the backend by now. I suspect getting the stylus working will just be a matter of figuring out which package currently provides access to it.

Before you wipe the laptop, I would recommend finding a command to list all the installed packages, then at least you'll have a reference to what was in place before. And if possible, maybe grab a backup of the /etc folder (or whatever might still be accessible) so you can reference the current configs on various packages to recreate whatever doesn't work by default.
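On Debian that can be as simple as something like this (a rough sketch -- adjust the filenames to taste):

# save the full package list, then archive the configs
dpkg --get-selections > ~/packages-before-wipe.txt
tar czf ~/etc-backup.tar.gz /etc

Later you can diff that list against the new install, or even feed it back through dpkg --set-selections if you decide you want everything restored wholesale.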

There are a number of lightweight desktops you can choose from. I personally like Mate, but maybe you can play around with others on the new system and purge the ones you don't like. And while you're swapping drives, check the memory slots, maybe you can drop another 8GB stick in there to give the whole system a boost.

[–] Shdwdrgn@mander.xyz 2 points 1 month ago

Maybe if the administration wasn't actively dismantling our national security, this wouldn't be as big a problem. Russians attacking our computer systems? Meh, we'll just stop looking and maybe they'll go away. And let's just make enemies of all our allies so they don't trust sharing security information with us -- what could go wrong? And for the cherry on top, let's threaten a military takeover of neighboring friendly countries, because "trust me, bro".

[–] Shdwdrgn@mander.xyz 5 points 1 month ago

This is just another manifestation of the alternate reality Trump lives in. He's so convinced that everything he does is golden, and he's surrounded himself with yes-men who hide the truth from him (like Gabbard telling agents they need to change their intelligence reports to make Trump happy), so every time reality intrudes on his fantasy world he lashes out as if companies are doing this just to make HIM look bad. No dumbass, they're doing it because you fucked things up so badly that the market cannot cover up your gross negligence.

[–] Shdwdrgn@mander.xyz 7 points 1 month ago

You might check if a simple CPU upgrade would get you there. I previously ran some 2005 PowerEdge servers that came with a Pentium D processor, and it cost me something like $8 on eBay to upgrade to a Xeon and start running KVM.

 

I've spent the past day working on my newest PowerEdge R620 acquisition, trying to nail down which startup checks I can do without. Google has shown me that everyone seems to be having similar issues regardless of brand or model. Gone are the days when a rack server could be fully booted in 90 seconds. A big part of my frustration has been with USB memory sticks inserted to update the firmware before I put this machine in production, easily driving times up to 15-20 minutes just to find out whether I have the right combination of BIOS/UEFI boot parameters for each individual drive image.

I currently have this machine down to 6:15 before it starts booting the OS, and a good deal of that time is spent sitting here watching it at the beginning, where it says it's testing memory but in fact hasn't actually started that process yet. It's a mystery what exactly it's even doing.

At this point I've turned off the Lifecycle Controller's scanning for new hardware, disabled boot from the internal SATA and PCI ports and from the NICs, and disabled memory testing... and I've run out of leads. I don't really see anything else available to turn off sensors and such. It's going to be a fixed server running a bunch of VMs, so there's no need for additional cards (although some day I may increase the RAM), and I don't really need it to scan for future changes at every boot.
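For anyone else on a Dell, I believe the inventory scan can also be toggled from the iDRAC side with racadm -- if memory serves it's something along these lines, though double-check the attribute name for your generation:

racadm set LifecycleController.LCAttributes.CollectSystemInventoryOnRestart Disabled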

Anyway, this all got me thinking... it might be fun to compare notes and see what others have done to improve their boot times, especially if you're also balancing your power usage (since I've read that allowing full CPU power during POST can have a small effect on the time). I'm sure different brands will have different specific techniques, but maybe there are some common areas we can all take advantage of? And sure, ideally our machines would never need to reboot, but many people run machines at home only while they're being used and deal with this issue daily, or want to get back online as quickly as possible after a power outage, so anything helps...

 

I have been struggling with this for over a month and still keep running into a brick wall. I am building a new firewall which has six network interfaces, and want to rename them to a known order (wan[0-1], and eth[0-3]). Since Bullseye has stopped honoring udev rules, I have created link files under /etc/systemd/network/ for each interface based on their MAC address. The two WAN interfaces seem to be working reliably but they're not actually plugged into anything yet (this may be an important but untested distinction).
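Each of my link files follows roughly this pattern (the MAC shown here is just a placeholder):

/etc/systemd/network/10-eth0.link:

[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth0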

What I've found is that I might get the interfaces renamed correctly when logging in from the keyboard, and this continues to work for multiple reboots. However if I SSH into the machine (which of course is my standard method of working on my servers) it seems to destroy systemd's ability to rename the interface on the next boot. I have played around with the order of the link file numbers to ensure the renumbering doesn't have the devices trying to step on each other, but to no avail. Fixing this problem seems to come down to three different solutions...

  • I can simply touch the eth*.link files and I'm back up after a reboot.
  • Sometimes I have to get more drastic, actually opening and saving each of the files (without making any changes). WHY these two methods give me different results, I cannot say.
  • When nothing else works, I simply rename one or more of the eth*.link files, giving them a different numerical order. So far it doesn't seem to matter which of the files I rename, but systemd sees that something has changed and re-reads them.

Another piece of information I ran across is that systemd does the interface renaming very early in the boot process, even before the filesystems are mounted, and that you need to run update-initramfs -u to create a new initrd.img file for grub. OK, sounds reasonable... however I would expect the boot behavior to be identical every time I reboot the machine, and not randomly stop working after I sign in remotely. I've also found that generating a new initrd.img does no good unless I also touch or change the link files first, so perhaps this is a false lead.
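So my post-SSH ritual currently looks like this (assuming the touch is even necessary, which is exactly the part I can't pin down):

touch /etc/systemd/network/*.link
update-initramfs -u
reboot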

This behavior just completely baffles me. Renaming interfaces based on MAC addresses should be an extremely simple task, and yet systemd is completely failing unless I change the link files every time I remote connect? Surely someone must have found a reliable way to change multiple interface names in the years since Bullseye was released?

Sorry, I know this is a rant against systemd and this whole "predictable" naming scheme, but all of this stuff worked just fine for the last 24 years that I've been running Linux servers; it's not something that should require any effort at all to set up. What do I need to change so that systemd does what it is configured to do, and why is something as simple as a remote connection enough to completely break it when I do get it to work? Please help save my sanity!

(I realize essential details are missing, but this post is already way too long -- ask what you need and I shall provide!)

tl;dr -- Systemd fails to rename network interfaces on the next boot if I SSH in and type 'reboot'

 

Your dreams and imagination evolved as a view into another universe. As with dreams as we currently understand them, you cannot decipher technical information -- no words in books, no details of how devices work -- so even if you can describe things you see from another place, you could not reproduce a working version.

Now how do you convince others that the things you are seeing are really happening, without being labeled insane? And how could you use this information to benefit yourself or others? Take a peek into the multiverse to see how other versions of yourself have solved these problems...

 

I have a self-hosted matrix-synapse server up and running on a Debian linux server, but before I open it up I want to at least get a captcha service in place to reduce spamming. The only module I've seen to handle this function appears to require setting up Google reCAPTCHA, but I would prefer to keep all of this entirely self-contained for the privacy of my users. Can anyone recommend a module that allows for a local captcha option? For that matter, can anyone recommend a captcha system that is compatible with matrix-synapse, is straightforward to set up, and uses basic preinstalled code bases like Perl or Python?

And while I'm here, I would also like to provide the option of registering with an email address, but I'm having trouble finding any clear how-to pages on this. It seems like that function might be built directly into matrix-synapse, but I'm just not finding anything helpful. Any suggestions?
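From what I've pieced together so far, it looks like it's driven by a couple of homeserver.yaml options along these lines -- untested on my end, so treat it as a sketch rather than a working config:

registrations_require_3pid:
  - email

email:
  smtp_host: localhost   # assuming a local MTA
  smtp_port: 25
  notif_from: "Your server <noreply@example.org>"

If someone can confirm whether that's all it takes (or what I'm missing), I'd appreciate it.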

I'm fairly new to matrix in general, but I have an initial setup running with the homeserver, Element web page, and an IRC bridge, so if I can just nail down the validation part of registrations I'll have what I think is a good starting point to launch from.

 

I was reading another article which discussed taking measurements of distant stars at 6-month intervals to create a 3D map of their relative positions and directions of movement. This got me to thinking... has anyone proposed 'dropping' stationary satellites along Earth's orbital path for continuous monitoring even when our planet is no longer in that spot? It seems like such an arrangement could provide constant monitoring of things that are happening on the far side of the sun, and they could each act as a relay to each other, bringing the signals back around to where we could receive them.

It could be fascinating to be able to constantly monitor the paths of known comets, or perhaps even to detect large meteors which are safely away from us now but might some day pose a threat. Studies like mapping star positions could rapidly expand with the availability of continuous data feeds, and I'm sure if such a tool were available, scientists would come up with a host of new experiments to try.

A couple other things also come to mind. First off is radio telescopes, which can gather more sensitive data by having sensors spaced further apart. Of course in this case they would only be able to peer in two directions unless you set up the array to rotate as a singular ring (which greatly increases the complexity). The other idea is that I know some phenomena are so large that it takes a huge array of telescopes or sensors to even detect them, and something this large could detect truly astounding low-frequency events. Throw in some gravitational-wave detectors and watch as the waves propagate through our solar system.

I'm just thinking there are a lot of possibilities here, and a lot more data could be collected if we could drop four or eight satellites along the way. I assume the idea has been proposed before; I just don't know if it's even feasible?

 

Turns out both grow in my area, and look identical to this when young. Yikes! So based on a post yesterday, I took this outside and sliced it in half. So far it looks promising (I think?) and I'm not dead yet.

This was found growing in a Colorado yard near the base of an elm tree, in an area where there are also rotting cottonwood roots. Altitude is right at 5000 feet. It wasn't my yard so I'm not sure how many days it may have been growing before I picked it today. I have put both halves in the fridge for now, is there any other information I can provide to help identify it?

A full size copy of the inside can be viewed here: http://sourpuss.net/projects/mycology/2023-08-13/IMG_7239.JPG

2
submitted 2 years ago* (last edited 2 years ago) by Shdwdrgn@mander.xyz to c/debian@lemmy.ml
 

I've been running systems up to Buster and have always had the 'quiet' option in the grub settings, which still showed the regular service startup messages (the colored ones showing [ok] and such, but not all the dmesg stuff). I just upgraded a server to Bullseye and there are zero messages being displayed now, except an immediate message about not being able to use IRQ 0. Worse, Google can't seem to find any information on this. If I remove the quiet option from grub then I see those service messages again, along with all the other stuff I don't need.

What is broken and how do I fix this issue? I assumed it would be safe to upgrade by now but this seems like a pretty big problem if I ever need to troubleshoot a system.

[Edit] In case anyone else finds this post searching for the same issue… Apparently the trick is that now you MUST install plymouth, even on systems that do not have a desktop environment. For whatever reason plymouth has taken over the job of displaying the text startup messages now. Keep your same grub boot parameters (quiet by itself, without the splash option) and you will get the old format of startup messages showing once again. It’s been working fine the old way for 20+ years but hey let’s change something just for the sake of confusing everyone.

[Edit 2] Thanks to marvin below, I now have a final solution that no longer requires plymouth to be installed. Edit /etc/default/grub and add systemd.show_status=true to GRUB_CMDLINE_LINUX_DEFAULT. In my case the full line is:

GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.show_status=true"

Don't forget to run update-grub after you save your changes.

 

I run my own email server, and a friend received a compromised laptop from work, which resulted in a spam attack from Russia yesterday. Turtle settings saved the day, with thousands of emails still in the queue when I saw the problem. However, it made me realize that everyone with an account on my server is local, doesn't travel, and has no need to send email from outside the country.

I found how to use the smtpd_discard_ehlo_keyword_address_maps setting in postfix to block a CIDR list of IPs, then found a maintained list of IPs by country code on GitHub. Cool so far, and a script to keep my local list updated was easy enough.
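For anyone wanting to try the same thing, the whole trick boils down to two pieces (the IP range below is just a documentation placeholder). In /etc/postfix/main.cf:

smtpd_discard_ehlo_keyword_address_maps = cidr:/etc/postfix/esmtp_access

And in /etc/postfix/esmtp_access, each blocked range gets the AUTH keyword discarded, so clients in those ranges never even get the chance to attempt a login:

# generated from the country lists; example entry
203.0.113.0/24  silent-discard, auth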

Now the question is, what countries should I be blocking? There are plenty of lists of the top hacking sources, but it's hard to block #2 (the US) when that's where I am located. But otherwise, does anyone have a list of countries they outright block from logging on to their servers? From the above Google searches I have 17 countries blocked so far, and in the first 30 minutes I already stopped login attempts from three of those countries, so it appears to be working.

Of course I could write a script to parse my logs to see who has already made attempts, but that's what services like fail2ban are for, and I'm just wondering if there are any countries in particular I should directly block? My list so far includes the following: ae bg br cn de hk id in ir iq il kp ng ru sa th vn

The question itself may not be that interesting, but I thought at the very least some folks might be interested in my experience and think about doing something similar themselves. I can post more details of what I did if there is any interest.
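To give a taste of the details, the refresh script is nothing fancy -- roughly this shape, with the download URL swapped for whichever maintained per-country list you settle on (the one below is a placeholder):

#!/bin/sh
# rebuild the postfix CIDR map from per-country lists, then reload
LIST_URL="https://example.org/country-blocks"   # placeholder -- substitute your source
OUT=/etc/postfix/esmtp_access
: > "$OUT.tmp"
for cc in ae bg br cn de hk id in ir iq il kp ng ru sa th vn; do
    curl -s "$LIST_URL/$cc.cidr" | sed 's/$/  silent-discard, auth/' >> "$OUT.tmp"
done
mv "$OUT.tmp" "$OUT"
postfix reload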

55
submitted 2 years ago* (last edited 2 years ago) by Shdwdrgn@mander.xyz to c/mycology@mander.xyz
 

First pics of my first pins. I cut slits in the bag on Sunday and saw the first pins appear yesterday morning, now they're growing fast. This clump is already a full inch (25mm) tall, and I have four openings in the bag that are all pinning. I've been misting them a couple times a day but now I'll be working from home until next Monday so I can try to spray them more often.

For anyone who hasn't seen my previous posts, I started out with a very small sample of spawn from ebay just over two months ago. I expanded that out in jars of rye berries and popcorn kernels, and then on July 4th I split a jar between two fruiting bags with pasteurized straw (I also have two bags of blue oysters and opened one of those on Sunday, but no pins from it yet).

This is my first time trying to grow mushrooms so I've been researching and asking questions every step of the way, but so far so good! I also have never tasted oysters before so that will be a new experience too. Now I just have to temper my impatience until it's time to harvest...

[Update] Adding a second pic this morning. This is about 12 hours later and they've grown significantly again. For reference, the bag is about the size of a sheet of paper.

[2nd update] It's been five days now since I opened the bag for fruiting. Here's a pic of what the mushrooms currently look like. From what I've read, I expected them to get MUCH larger than this, but with the upturned caps I really believe these are done growing and should have been harvested yesterday (note this image shows the largest clump of the group). Any thoughts?

 

My first oyster pins appeared today and I've been thinking about humidity control. I have the big tub I made my still air box from, and I've been wondering about using it to hold the two fruiting bags I have. I was concerned that maybe the X cuts wouldn't get enough fresh air if I covered them, but then I've been worried about keeping up the humidity. Now that I'm seeing some pinning, though, I'm feeling like the humidity is more important? I live in Colorado, which isn't quite desert, but the humidity in the house typically drops below 40% during the day (it's high right now because we've been getting some rain showers).

For reference, my SAB is a typical DIY, made from a large tub with just a couple hand-sized holes cut out. There's not a lot of airflow in that room anyway, and I'm not sure how much fresh air the mushrooms need once they start growing. Of course I realize they won't be able to stay in the SAB too long, I know they'll outgrow the available space, but I'm just thinking for the next few days, or however long it takes them to really fill in.

So, any thoughts on this? Should I close them up in the box or just leave them in open air?

 

I have Openfire set up with the monitoring service plugin which we have been using with Pidgin on the desktop. One of the things I've noticed is that when I sign in to another computer on the same account, I do not get a history of recent messages (which I thought the monitoring plugin was supposed to provide).

The other thing that doesn't seem to be working right is when I am logged in to two computers simultaneously (using the same account). I expect to see chat messages showing up on BOTH devices so I can go between machines, which again is something I thought the monitoring plugin was supposed to provide.

The settings I believe are related are under "Offline messages", which I have set to always store and retain for up to 30 days. Should I be looking for anything else?

I have been using Pidgin with XMPP on Google for years, so I know both the XMPP protocol and the Pidgin client are capable of handling this functionality. I've been digging around trying to find a solution, and see a lot of things claiming Pidgin is the culprit here, but those messages are a decade old. I can't seem to find any information on the subject for Openfire newer than about 2016.

I'm hoping there's a setting I need to change or another plugin I need to add to get both of these features working on my server? I really love the software otherwise but this seems like a really basic function that should just work, and I am hoping someone can point me to whatever I'm missing.

 

Just curious if any such communities exist here. I built a DIY weather station from 3D prints and an ESP8266, and I'm always looking for improvements on the design, but after a massive downpour yesterday I'm also looking for tips on more accurately calibrating my rain gauge.
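For context on the calibration itself: with a tipping-bucket gauge the math is simple -- millimeters of rain per tip is just the tip volume divided by the funnel's catch area -- and it's measuring those two numbers accurately that's the hard part. A quick back-of-envelope with made-up numbers:

# 5 ml per tip caught by a 55 cm^2 funnel opening:
# 5 cm^3 / 55 cm^2 = 0.0909 cm, i.e. about 0.91 mm of rain per tip
echo "scale=4; 5 / 55 * 10" | bc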
