o11c

[–] o11c@programming.dev 1 points 2 years ago (2 children)

That's misleading though, since it only cares about one side, and ignores e.g. the much faster development speed that dynamic linking can provide.

[–] o11c@programming.dev 3 points 2 years ago

Only if the library is completely shitty and breaks between minor versions.

If the library is that bad, it's a strong sign you should avoid it entirely since it can't be relied on to do its job.

[–] o11c@programming.dev 6 points 2 years ago (7 children)

Some languages don't even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, whole binary - looking at you, Rust!) at once. Do note that "static linking" has shades of meaning: it applies to "link multiple objects into a binary", but often that is excluded from the discussion in favor of just "use a .a instead of a .so".

Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don't care about security, so I'm talking about annoyance instead. Some realistic numbers here: dynamic linking might be "rebuild in 0.3 seconds" vs static linking "rebuild in 3 seconds" vs no linking "rebuild in 30 seconds".
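As a minimal sketch of the mechanism being traded off here (POSIX-only, using Python's stdlib ctypes; the choice of `abs` is just an illustrative symbol): resolving a function out of an already-loaded shared library at runtime is exactly the lookup that dynamic linking defers until load time, which is where the "slightly slower runtime" comes from - and why it can be paid once, outside hot paths.

```python
# Hedged sketch: look up a symbol from the C library through the
# dynamic loader at runtime, the same mechanism -lc uses at load time.
# POSIX-only; CDLL(None) means "the symbols already in this process".
import ctypes

libc = ctypes.CDLL(None)

# Declare the signature of the (illustrative) symbol we resolve.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

# The lookup cost was paid above, once; the call itself is now cheap.
print(libc.abs(-5))
```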

Dynamic linking is generally more reliable against long-term system changes. For example, it is impossible to run old statically-linked versions of bash 3.2 anymore on a modern distro (something about an incompatible locale format?), whereas the dynamically linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there's nothing wrong with RPATH if you're not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).

Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of "single source of truth". If you actually read the man pages for the tools you're using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.

Again, keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

The big question these days should not be "static or dynamic linking" but "dynamic linking with or without semantic interposition?" Apple's broken "two level namespaces" is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.

[–] o11c@programming.dev 10 points 2 years ago

As a practical matter it is likely to break somebody's unit tests.

If there's an alternative approach that you want people to use in their unit tests, go ahead and break it. If there isn't, but you're only doing such breakage rarely and it's reasonable for their unit tests to be updated in a way that works with both versions of your library, do it cautiously. Otherwise, only do it if you own the universe and you hate future debuggers.

[–] o11c@programming.dev 4 points 2 years ago

The thing is - I have probably seen hundreds of projects that use tabs for indentation ... and I've never seen a single one without tab errors. And that's ignoring e.g. the fact that tabs break diffs, or who knows how many other things.

Using spaces doesn't automatically mean a lack of errors but it's clearly easy enough that it's commonly achieved. The most common argument against spaces seems to boil down to "my editor inserts hard tabs and I don't know how to configure it".

[–] o11c@programming.dev 3 points 2 years ago

It's solving (and facing) some very interesting problems at a technical level ...

but I can't get over the dumb decision for how IO is done. It's $CURRENTYEAR; we have global constructors even if your platform really needs them (hint: it probably doesn't).

[–] o11c@programming.dev 6 points 2 years ago

Stop reinventing the wheel.

Major translation systems like gettext (especially the GNU variant) have decades of tooling built up for "merging" and all sorts of other operations.

Even if you don't want to use their binary format at runtime, their tooling is still worth it.
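For what "use their tooling" looks like in practice, here is a minimal sketch using Python's stdlib gettext bindings (the domain name "myapp" and the `locale/` directory are illustrative, not from the comment above). The .mo catalogs it loads are exactly what msgfmt produces, and msgmerge handles the "merging" of updated sources into existing translations.

```python
# Minimal sketch of consuming GNU gettext catalogs from Python.
# Assumes (illustratively) a catalog at locale/de/LC_MESSAGES/myapp.mo.
import gettext

# fallback=True returns a NullTranslations when no .mo file is found,
# so _() degrades to the untranslated string instead of raising.
t = gettext.translation("myapp", localedir="locale",
                        languages=["de"], fallback=True)
_ = t.gettext

print(_("Hello, world!"))  # translated if the catalog provides it
```

The catalog files themselves come from the usual pipeline: xgettext extracts strings to a .pot template, msgmerge folds the template into each language's .po file, and msgfmt compiles .po to .mo.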

[–] o11c@programming.dev 1 points 2 years ago (1 children)

and I already explained that Union is a thing.
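To make the Union alternative concrete, a small sketch (class names are made up for illustration): the full set of accepted types is written down explicitly, so a type checker can verify that dispatch over the union is exhaustive.

```python
# Illustrative sketch: accept a closed set of concrete types via Union
# instead of relying on structural (duck) typing.
from dataclasses import dataclass
from typing import Union
import math

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Union[Circle, Square]  # the accepted types are explicit

def area(shape: Shape) -> float:
    # Exhaustive dispatch over the union's members.
    if isinstance(shape, Circle):
        return math.pi * shape.radius ** 2
    return shape.side ** 2

print(area(Circle(1.0)))
print(area(Square(2.0)))
```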

[–] o11c@programming.dev 1 points 2 years ago (3 children)

That still doesn't explain why duck typing is ever a thing beyond "I'm too lazy to write extends BaseClass". There's simply no reason to want it.

[–] o11c@programming.dev 1 points 2 years ago (5 children)

Then - ignoring dunders that have weird rules - what, pray tell, is the point of protocols, other than backward compatibility with historical fragile ducks (at the cost of future backward compatibility)? Why are people afraid of using real base classes?

The fact that it is possible to subclass a Protocol is useless since you can't enforce subclassing, which is necessary for maintainable software refactoring, unless it's a purely internal interface (in which case the Union approach is probably still better).

That PEP link includes broken examples so it's really not worth much as a reference.

(for that matter, the Sequence interface is also broken in Python, in case you need another historical example of why protocols are a bad idea).
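The enforcement gap being argued about can be shown in a few lines (all class names here are made up for illustration): a Protocol is matched implicitly by any class with the right method shapes, while an abc.ABC base class makes the relationship explicit, so a refactor of the interface is forced onto every implementor.

```python
# Illustrative contrast: structural Protocol vs explicit ABC base class.
from abc import ABC, abstractmethod
from typing import Protocol, runtime_checkable

@runtime_checkable
class QuackerProtocol(Protocol):   # structural: matched implicitly
    def quack(self) -> str: ...

class Quacker(ABC):                # nominal: must be subclassed
    @abstractmethod
    def quack(self) -> str: ...

class Duck(Quacker):
    def quack(self) -> str:
        return "quack"

class Impostor:                    # never declared any relationship,
    def quack(self) -> str:        # yet satisfies the Protocol anyway
        return "honk"

def greet(q: QuackerProtocol) -> str:
    return q.quack()

# Both are accepted structurally...
print(greet(Duck()), greet(Impostor()))
# ...but only the explicit subclass relationship is enforceable:
print(isinstance(Duck(), Quacker))      # True
print(isinstance(Impostor(), Quacker))  # False
```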

[–] o11c@programming.dev 2 points 2 years ago

chunks: [AtomicPtr<…>; 64] appears before the explanation of why 64 works, and was confusing at first glance since this is completely different from the previous use of 64, which was arbitrary. I was expecting a variable-size array of fixed-size arrays at first (using something like an rwlock you can copy/grow the internal vector without blocking - if there was a writer, the last reader of the old allocation frees it).

Instead of separate flags, what about a single (fixed-size, if chunks are) atomic bitset? This would increase contention slightly, but that only happens briefly during growth, not during ordinary accesses. Many architectures actually have dedicated atomic bit operations, though sadly it's hard to get compilers to generate them.

The obvious API addition is for a single thread to push several elements at once, which can be done more efficiently.

[–] o11c@programming.dev 1 points 2 years ago

Aside: note that requests is sloppy there; it should use either raise ... from e to make the cause explicit, or from None to hide it. Default propagation is supposed to imply that the second exception was unexpected.
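The three behaviours can be sketched in a few lines (AppError is a made-up exception class for illustration): `from e` records an explicit cause, `from None` suppresses the context entirely, and the default leaves the original hanging around as implicit context.

```python
# Sketch of explicit vs suppressed exception chaining.
class AppError(Exception):
    pass

def explicit_cause():
    try:
        int("not a number")
    except ValueError as e:
        # Explicit: traceback reads "The above exception was the
        # direct cause of the following exception".
        raise AppError("bad input") from e

def hidden_cause():
    try:
        int("not a number")
    except ValueError:
        # Suppressed: __cause__ is None, __suppress_context__ is True.
        raise AppError("bad input") from None

try:
    explicit_cause()
except AppError as e:
    print(type(e.__cause__).__name__)  # ValueError

try:
    hidden_cause()
except AppError as e:
    print(e.__cause__, e.__suppress_context__)  # None True
```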
