kamstrup

joined 2 years ago
[–] kamstrup@programming.dev 0 points 6 days ago

X11 is "complete" in the sense that we have followed it to the end of the road. X11 has a series of well documented fundamental problems that does not make it suitable for a modern OS. I will not belabor them here (except to note that security in particulat in X11, is exceptionally weak for modern standars). These issues are unfixable because they are built into core assumptions and behaviours of all legacy apps.

At some point there has to be a switch. There simply is not the manpower to maintain 2 separate windowing systems. I am sure we would all want an army of devs working on these things, maintaining both stacks. But that is not the timeline we live in. The number of devs working on these things is very low.

Was it too early? I don't know. There will never be 1:1 feature parity with 30 years of legacy apps. I honestly believe that fixing things like a11y is gonna be much more tractable with only a single windowing system.

[–] kamstrup@programming.dev 17 points 2 weeks ago* (last edited 2 weeks ago) (10 children)

For someone who has not used Gnome in 14+ years you sure seem to know a lot about it...

X11 has effectively already been deprecated for years, seeing little to no development on it. No one should be surprised.

And still, there are SEVERAL Long Term Support distros out there that will support X11 for the coming years. Please stop pretending that stuff will start breaking. It will not.

[–] kamstrup@programming.dev 7 points 1 month ago

I find that my projects hosted on codeberg are heavily deranked or entirely missing on the top mainstream search engines. My github projects are almost always top 3.

So if it is a library someone might find useful, it has to go on gh. My personal toys can stay on cb.

[–] kamstrup@programming.dev 16 points 2 months ago (1 children)

At least we still have Skype (new), Skype for Enterprise, and Windows Skype

 

The Go team is working on a new garbage collector called Green Tea.

[–] kamstrup@programming.dev 4 points 3 months ago

Targeting vulnerable people based on metadata with any form of commercial intent is morally and ethically highly questionable! A vulnerable person is by definition extremely susceptible to exploitation. Assuming that companies are gonna act out of philanthropy and the goodness of their hearts seems a bit naive.

[–] kamstrup@programming.dev 1 points 7 months ago

Can't divulge too many details, but one example was when we had 2 options for solving a problem: 1. The "easy" way: store a bunch of small blobs to S3 while a job was running on an embedded device, or 2. The slightly tricky way: implement streaming of said data on the device (not as easy as it sounds).

We went with option 1, the easy one, because it was deemed better bang for the buck. I did some basic math showing that the bandwidth required to upload that many blobs to S3 within our time budget simply was not available on our uplink.

After we spent a month failing on option 1, it was clear that we had hit the predicted problem. Eventually we implemented option 2.
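
For flavour, a tiny sketch of what that napkin math looks like. All numbers (blob count, blob size, uplink speed, time budget) are made up for illustration, not from the real project:

```go
package main

import "fmt"

// Napkin math of the kind described above. Every figure here is
// hypothetical; the point is that the check takes minutes, not a month.
func main() {
	const (
		blobs        = 500_000    // small blobs produced per job (made up)
		blobSizeB    = 64 * 1024  // ~64 KiB each (made up)
		uplinkBitsPS = 10_000_000 // 10 Mbit/s uplink (made up)
		budgetSec    = 4 * 3600   // 4 hour time budget (made up)
	)

	totalBits := float64(blobs) * blobSizeB * 8
	needSec := totalBits / uplinkBitsPS // ignores per-request overhead, which only makes it worse

	fmt.Printf("upload needs %.0f s, budget is %d s -> feasible: %v\n",
		needSec, budgetSec, needSec <= budgetSec)
}
```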

[–] kamstrup@programming.dev 6 points 8 months ago (2 children)

Being comfortable with basic back-of-the-envelope math can be a huge benefit. (Full disclosure: I am a math major who is now a programmer.)

Over my career I have several examples of projects that saved weeks' worth of dev time because someone could predict the result with some basic calculations. I also have several examples where I showed people some basic math proving their idea was never gonna work, they didn't listen and did it anyway, and a month later the project had failed in exactly the way I predicted.

A popular (and wise) saying is that "Weeks of work can save you hours of meetings". I think the same is true for basic math. "Weeks of coding can save you minutes of calculation".

You can definitely have a successful programming career without great math skills. Math is a tool that can help you be more effective.

[–] kamstrup@programming.dev 5 points 8 months ago

Must include CHANGELOG...

The changelog:

  • misc fixes
  • pls work
  • fixe a typo
[–] kamstrup@programming.dev 2 points 8 months ago

Interesting observation! The simplest explanation would be that it is memory claimed by the Go runtime while parsing the incoming BSON from Mongo. You can try calling runtime.GC() 3 times after ingest and see if it changes your memory usage. Go does not return memory to the OS immediately, but this should do it.
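
Roughly what I mean, as a sketch (the helper names are just for illustration; debug.FreeOSMemory is an optional extra that hands freed pages back to the OS right away):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// reportHeap prints the heap currently in use, so you can compare
// before/after the forced collections.
func reportHeap(label string) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("%s: heap in use = %d MiB\n", label, ms.HeapInuse>>20)
}

func afterIngest() {
	reportHeap("before GC")
	for i := 0; i < 3; i++ {
		runtime.GC() // force full collections
	}
	debug.FreeOSMemory() // also return freed pages to the OS
	reportHeap("after GC")
}

func main() { afterIngest() }
```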

2 other options, a bit more speculative:

Go maps have been known to carry a bit of overhead, in particular small maps, even when calling make() with the correct capacity. That doesn't fit well with the memory profile you posted though, as I didn't see any map container memory in there...

More probable might be that map keys are duplicated. So if you have 100 maps with the key "hello", you have 100 copies of the string "hello" in memory. Ideally all 100 maps would share the same string instance. This often happens when parsing data from an incoming stream. You can either try to manually dedup the strings, see if the Mongo driver has an option for it, or use the new 'unique' package in Go 1.23.
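
Something along these lines with the unique package; just a sketch, where the "hello" keys stand in for whatever strings come out of your BSON:

```go
package main

import (
	"fmt"
	"unique"
)

func main() {
	// Pretend these came out of the BSON parser: the same key text
	// repeated across many maps, each copy its own allocation.
	rawKeys := []string{"hello", "hello", "hello"}

	// unique.Make canonicalizes the value; equal strings end up
	// sharing one interned instance behind a small handle.
	handles := make([]unique.Handle[string], 0, len(rawKeys))
	for _, k := range rawKeys {
		handles = append(handles, unique.Make(k))
	}

	// Handles compare cheaply, and Value() returns the shared string.
	fmt.Println(handles[0] == handles[1], handles[0].Value())
}
```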

 

In the original proof of concept for ranging over functions, iter.Pull was implemented via goroutines and channels, which had massive overhead.

When I dug in to see what the released code did, I was delighted to see that the Go devs implemented actual coroutines to power it, which is one of the only ways to get sensible performance out of this.
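
For reference, the public surface today is iter.Pull; a minimal usage sketch with a toy counting sequence:

```go
package main

import (
	"fmt"
	"iter"
)

func main() {
	// A push-style sequence, the kind you would range over directly.
	var seq iter.Seq[int] = func(yield func(int) bool) {
		for i := 0; i < 3; i++ {
			if !yield(i) {
				return
			}
		}
	}

	// Pull converts it into a next/stop pair; each call to next resumes
	// the coroutine running seq.
	next, stop := iter.Pull(seq)
	defer stop()

	for v, ok := next(); ok; v, ok = next() {
		fmt.Println(v)
	}
}
```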

Will the coro package be exposed as public API in the future? Here's to hoping ♥️

[–] kamstrup@programming.dev 4 points 11 months ago

There is manual memory management, so it seems closer to Zig

 

Go 1.22 will ship with "range over int" and experimental support for "range over func" 🥳
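
A quick sketch of both, with a toy iterator (range over func sits behind GOEXPERIMENT=rangefunc in 1.22):

```go
package main

import "fmt"

// A push-style iterator usable with range-over-func.
func evens(yield func(int) bool) {
	for i := 0; i < 10; i += 2 {
		if !yield(i) {
			return
		}
	}
}

func main() {
	// range over int
	for i := range 3 {
		fmt.Println(i)
	}

	// range over func
	for v := range evens {
		fmt.Println(v)
	}
}
```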
