7
submitted 1 day ago by saint@group.lt to c/bofh@group.lt

How We Built the Internet

Metadata

Highlights

The internet is a universe of its own.

The infrastructure that makes this scale possible is similarly astounding—a massive, global web of physical hardware, consisting of more than 5 billion kilometers of fiber-optic cable, more than 574 active and planned submarine cables spanning over 1 million kilometers in length, and a constellation of more than 5,400 satellites offering connectivity from low earth orbit (LEO).

“The Internet is no longer tracking the population of humans and the level of human use. The growth of the Internet is no longer bounded by human population growth, nor the number of hours in the day when humans are awake,” writes Geoff Huston, chief scientist at the nonprofit Asia Pacific Network Information Center.

As Shannon studied the structures of messages and language systems, he realized that there was a mathematical structure that underlay information. This meant that information could, in fact, be quantified.

Shannon noted that all information traveling from a sender to a recipient must pass through a channel, whether that channel be a wire or the atmosphere.

Shannon’s transformative insight was that every channel has a threshold—a maximum amount of information that can be delivered reliably to a receiver.
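
That threshold is the channel capacity; for a bandwidth-limited noisy channel it is given by the Shannon–Hartley formula C = B·log2(1 + S/N). A minimal Python sketch of the formula (the bandwidth and signal-to-noise figures below are illustrative, not from the article):

import math

def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: the maximum rate (bits/s) at which information
    can cross a noisy channel with arbitrarily low error probability."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative example: a 1 MHz channel with a 30 dB signal-to-noise ratio (a factor of 1000)
print(channel_capacity_bps(1e6, 1000))  # ~9.97 million bits per second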

Kleinrock approached AT&T and asked if the company would be interested in implementing such a system. AT&T rejected his proposal—most demand was still in analog communications. Instead, they told him to use the regular phone lines to send his digital communications—but that made no economic sense.

What was exceedingly clever about this suite of protocols was its generality. TCP and IP did not care which carrier technology transmitted their packets, whether it be copper wire, fiber-optic cable, or radio. And they imposed no constraints on what the bits could be formatted into—video, text, simple messages, or even web pages formatted in a browser.
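
That indifference is easy to demonstrate: a TCP socket carries an opaque byte stream, and the transport layer neither knows nor cares what the bytes encode. A small self-contained Python sketch over the loopback interface (the payloads are invented for the example):

import socket

# Three very different "applications" - made-up payloads; the transport layer sees only bytes.
payloads = [
    b"hello, plain text",
    bytes.fromhex("000001b3"),  # could be the start of a video bitstream
    b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",  # a web request
]

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # any free port on loopback
server.listen(1)

client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

for data in payloads:
    client.sendall(data)        # TCP imposes no format on the payload
    print(conn.recv(4096))      # the receiver gets the same opaque bytes back

client.close(); conn.close(); server.close()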

David Clark, one of the architects of the original internet, wrote in 1978 that “we should … prepare for the day when there are more than 256 networks in the Internet.”

Fiber was initially laid down by telecom companies offering high-quality cable television service to homes. The same lines would be used to provide internet access to these households. However, these service speeds were so fast that a whole new category of behavior became possible online. Information moved fast enough to make applications like video calling or video streaming a reality.

And while it may have been the government and small research groups that kickstarted the birth of the internet, its evolution thereafter was dictated by market forces, including service providers that offered cheaper-than-ever communication channels and users who primarily wanted to use those channels for entertainment.

In 2022, video streaming comprised nearly 58 percent of all Internet traffic. Netflix and YouTube alone accounted for 15 and 11 percent, respectively.

At the time, Facebook users in Asia or Africa had a completely different experience to their counterparts in the U.S. Their connection to a Facebook server had to travel halfway around the world, while users in the U.S. or Canada could enjoy nearly instantaneous service. To combat this, larger companies like Google, Facebook, Netflix, and others began storing their content physically closer to users through CDNs, or “content delivery networks.”
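
The gain from a CDN is mostly propagation delay: light in fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, so a round trip to a server halfway around the world costs on the order of 150 ms before any processing happens at all. A rough back-of-the-envelope sketch (the distances are illustrative):

SPEED_IN_FIBER_KM_S = 200_000  # light in glass travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone - no routing, queuing, or server time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(round_trip_ms(15_000))  # user to a far-away origin server (illustrative): ~150 ms
print(round_trip_ms(100))     # user to a nearby CDN edge node (illustrative):   ~1 ms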

Instead of simply owning the CDNs that host your data, why not own the literal fiber cable that connects servers from the United States to the rest of the world?

Most of the world’s submarine cable capacity is now either partially or entirely owned by a FAANG company—meaning Facebook (Meta), Amazon, Apple, Netflix, or Google (Alphabet).

Google, which owns a number of sub-sea cables across the Atlantic and Pacific, can deliver hundreds of terabits per second through its infrastructure.

In other words, these applications have become so popular that they have had to leave traditional internet infrastructure and operate their services within their own private networks. These networks not only handle the physical layer, but also create new transfer protocols, totally disconnected from IP or TCP. Data is transferred on their own private protocols, essentially creating digital fiefdoms.

SpaceX’s Starlink is already unlocking a completely new way of providing service to millions. Its data packets, which travel to users via radio waves from low earth orbit, may soon be one of the fastest and most economical ways of delivering internet access to a majority of users on Earth. After all, the distance from LEO to the surface of the Earth is just a fraction of the length of subsea cables across the Atlantic and Pacific oceans.
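
The same back-of-the-envelope math illustrates the distance argument: a hop up to a low earth orbit and back is roughly a thousand kilometers, and radio in vacuum propagates faster than light in fiber, while a transoceanic cable run is many thousands of kilometers. The altitude and cable length below are approximate assumptions, and a real Starlink path adds ground stations and routing on top of this:

C_VACUUM_KM_S = 300_000  # radio waves in space travel at ~c
C_FIBER_KM_S = 200_000   # light in fiber is about a third slower

leo_hop_km = 2 * 550             # assumption: up to a ~550 km orbit and back down
transoceanic_fiber_km = 10_000   # assumption: rough length of a long subsea route

print(leo_hop_km / C_VACUUM_KM_S * 1000)            # ~3.7 ms one way via LEO
print(transoceanic_fiber_km / C_FIBER_KM_S * 1000)  # ~50 ms one way in fiber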

What is next?

5
Incantations (josvisser.substack.com)
submitted 1 week ago by saint@group.lt to c/bofh@group.lt

Incantations

Metadata

Highlights

The problem with incantations is that you don’t understand in what exact circumstances they work. Change the circumstances, and your incantations might work, might not work anymore, might do something else, or, maybe worse, might do lots of damage. It is not safe to rely on incantations; you need to move to understanding.

13
submitted 1 week ago by saint@group.lt to c/science@beehaw.org

We can best view the method of science as the use of our sophisticated methodological toolbox

Metadata

Highlights

Scientific, medical, and technological knowledge has transformed our world, but we still poorly understand the nature of scientific methodology.

Scientific methodology has not been systematically analyzed using large-scale data and scientific methods themselves, as it is viewed as not easily amenable to scientific study.

This study reveals that 25% of all discoveries since 1900 did not apply the common scientific method (all three features)—with 6% of discoveries using no observation, 23% using no experimentation, and 17% not testing a hypothesis.

Empirical evidence thus challenges the common view of the scientific method.

This provides a new perspective to the scientific method—embedded in our sophisticated methods and instruments—and suggests that we need to reform and extend the way we view the scientific method and discovery process.

In fact, hundreds of major scientific discoveries did not use “the scientific method”, as defined in science dictionaries as the combined process of “the collection of data through observation and experiment, and the formulation and testing of hypotheses” (1). In other words, it is “The process of observing, asking questions, and seeking answers through tests and experiments” (2, cf. 3).

In general, this universal method is commonly viewed as a unifying method of science and can be traced back at least to Francis Bacon's theory of scientific methodology in 1620, which popularized the concept.

Science thus does not always fit the textbook definition.

Comparison across fields provides evidence that the common scientific method was not applied in making about half of all Nobel Prize discoveries in astronomy, economics and social sciences, and a quarter of such discoveries in physics, as highlighted in Fig. 2b. Some discoveries are thus non-experimental and more theoretical in nature, while others are made in an exploratory way, without explicitly formulating and testing a preestablished hypothesis.

We find that one general feature of scientific methodology is applied in making science's major discoveries: the use of sophisticated methods or instruments. These are defined here as scientific methods and instruments that extend our cognitive and sensory abilities—such as statistical methods, lasers, and chromatography methods. They are external resources (material artifacts) that can be shared and used by others—whereas observing, hypothesizing, and experimenting are, in contrast, largely internal (cognitive) abilities that are not material (Fig. 2).

Just as science has evolved, so should the classic scientific method—which is construed in such general terms that it would be better described as a basic method of reasoning used for human activities (non-scientific and scientific).

An experimental research design was not carried out when Einstein developed the law of the photoelectric effect in 1905 or when Franklin, Crick, and Watson discovered the double helix structure of DNA in 1953 using observational images developed by Franklin.

Direct observation was not made when for example Penrose developed the mathematical proof for black holes in 1965 or when Prigogine developed the theory of dissipative structures in thermodynamics in 1969. A hypothesis was not directly tested when Jerne developed the natural-selection theory of antibody formation in 1955 or when Peebles developed the theoretical framework of physical cosmology in 1965.

Sophisticated methods make research more accurate and reliable and enable us to evaluate the quality of research.

Applying observation and a complex method or instrument together is decisive, producing nearly all major discoveries (94%) and illustrating the central importance of the empirical sciences in driving discovery and science.

4
How much are your 9's worth? (hross.substack.com)
submitted 1 week ago by saint@group.lt to c/bofh@group.lt

How much are your 9's worth?

Metadata

Highlights

All nines are not created equal. Most of the time I hear an extraordinarily high availability claim (anything above 99.9%) I immediately start thinking about how that number is calculated and wondering how realistic it is.
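
A quick way to sanity-check such a claim is to convert the nines into the downtime they actually allow; a minimal sketch:

def allowed_downtime_minutes(availability_pct: float, period_days: int = 30) -> float:
    """Minutes of full outage a given availability permits per period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% -> {allowed_downtime_minutes(nines):.1f} min/month")
# 99.0%   -> 432.0 min/month (~7.2 hours)
# 99.9%   ->  43.2 min/month
# 99.99%  ->   4.3 min/month
# 99.999% ->   0.4 min/month (~26 seconds)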

Human beings are funny, though. It turns out we respond pretty well to simplicity and order.

Having a single number to measure service health is a great way for humans to look at a table of historical availability and understand if service availability is getting better or worse. It’s also the best way to create accountability and measure behavior over time…

… as long as your measurement is reasonably accurate and not a vanity metric.

Cheat #1 - Measure the narrowest path possible.

This is the easiest way to cheat a 9’s metric. Many nines numbers I have seen are various versions of this cheat code. How can we create a narrow measurement path?

Cheat #2 - Lump everything into a single bucket.

Not all requests are created equal.
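
One way out of the single bucket is to report availability per request class, so a broken critical path cannot hide behind high-volume, low-value traffic. A minimal sketch (the class names and counts are invented):

from collections import defaultdict

# (request_class, succeeded) pairs; classes and counts are illustrative
requests = ([("checkout", False)] * 10 + [("checkout", True)] * 90
            + [("search", True)] * 10_000 + [("static_asset", True)] * 10_000)

totals, good = defaultdict(int), defaultdict(int)
for cls, ok in requests:
    totals[cls] += 1
    good[cls] += ok

overall = 100 * sum(good.values()) / len(requests)
print(f"overall: {overall:.2f}%")                           # 99.95% - looks fine
for cls in totals:
    print(f"{cls}: {100 * good[cls] / totals[cls]:.1f}%")    # checkout: 90.0% - it is not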

Cheat #3 - Don’t measure latency.

This is an availability metric we’re talking about here, why would we care about how long things take, as long as they are successful?!
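
A common remedy is to count a request as good only if it both succeeded and returned within a latency budget; a sketch of that stricter definition (the tuple layout and the 500 ms budget are assumptions for the example):

LATENCY_BUDGET_MS = 500  # hypothetical threshold; pick one your users actually feel

def availability(requests):
    """requests: list of (status_code, latency_ms) tuples."""
    good = sum(1 for status, latency in requests
               if status < 500 and latency <= LATENCY_BUDGET_MS)
    return 100 * good / len(requests) if requests else 100.0

reqs = [(200, 120), (200, 4300), (503, 90), (200, 310)]
print(availability(reqs))  # 50.0 - the slow success counts against us too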

Cheat #4 - Measure total volume, not minutes.

Let’s get a little controversial.

In order to cheat the metric, we want to choose the calculation that looks the best: even though we might have been having a bad time for 3 hours (1 out of every 10 requests was failing), not every customer was impacted, so it wouldn’t be “fair” to count that time against us.
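
The two calculations can give very different answers for the same incident. Using the scenario above (3 bad hours in a 30-day month, 1 in 10 requests failing during the incident, and uniform traffic assumed for simplicity):

total_minutes = 30 * 24 * 60        # 43,200 minutes in the month
bad_minutes = 3 * 60                # the incident lasted 3 hours

# Time-based: any minute with elevated errors counts as fully "bad"
time_based = 100 * (total_minutes - bad_minutes) / total_minutes

# Volume-based: only the failed requests count
requests_per_minute = 1_000                                  # illustrative assumption
total_requests = total_minutes * requests_per_minute
failed_requests = bad_minutes * requests_per_minute * 0.10   # 1 in 10 failed
volume_based = 100 * (total_requests - failed_requests) / total_requests

print(f"time-based:   {time_based:.3f}%")    # 99.583%
print(f"volume-based: {volume_based:.3f}%")  # 99.958% - looks almost a whole nine better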

Building more specific models of customer paths is manual. It requires more effort and customization to build a model of customer behavior (read: engineering time). Sometimes we just don’t have people with the time or specialization to do this, or it will cost too much to maintain in the future.

We don’t have data on all of the customer scenarios. In this case we just can’t measure enough to be sure what our availability is.

Sometimes we really don’t care (and neither do our customers). Some of the pages we build for our websites are… not very useful. Sometimes spending the time to measure (or fix) these scenarios just isn’t worth the effort. It’s important to focus on important scenarios for your customers and not waste engineering effort on things that aren’t very important (this is a very good way to create an ineffective availability effort at a company).

Mental shortcuts matter. No matter how much education we attempt, it’s hard to change the perceptions of executives, engineers, etc. Sometimes it is better to pick the abstraction that helps people understand than the most accurate one.

Data volume and data quality are important to measurement. If we don’t have a good idea of which errors are “okay” and which are not, or we just don’t have that much traffic, some of these measurements become almost useless (what is the SLO of a website with 3 requests? does it matter?).

What is your way of cheating nines? ;)

11
submitted 1 week ago by saint@group.lt to c/war@group.lt

Cyber Conflict and Subversion in the Russia-Ukraine War

Metadata

Highlights

The Russia-Ukraine war is the first case of cyber conflict in a large-scale military conflict involving a major power.

Contrary to cyberwar fears, most cyber operations remained strategically inconsequential, but there are several exceptions: the AcidRain operation, the UKRTelecom disruption, the September 2022 power grid sabotage, and the catastrophic Kyivstar outage of 2023.

These developments suggest hacking groups are increasingly fusing cyber operations with traditional subversive methods to improve effectiveness.

The first exceptional case is AcidRain. This advanced malware knocked out satellite communication provided by Viasat’s K-SAT service across Europe the very moment the invasion commenced. Among the customers of the K-SAT service: Ukraine’s military. The operation that deployed this malware stands out not only because it shows a direct linkage to military goals but also because it could have plausibly produced a clear tactical, potentially strategic, advantage for Russian troops at a decisive moment.

The second exception is a cyber operation in March 2022 that caused a massive outage of UKRTelecom, a major internet provider in Ukraine. It took only a month to prepare yet caused significant damage. It cut off over 80 percent of UKRTelecom’s customers from the internet for close to 24 hours.

Finally, the potentially most severe challenge to the theory of subversion is a power grid sabotage operation in September 2022. The operation stands out not only because it used a novel technique but also because it took very little preparation. According to Mandiant, it required only two months of preparation and used what are called “living off the land” techniques, namely forgoing malware and using only existing functionality.

After all, why go through the trouble of finding vulnerabilities in complex networks and developing sophisticated exploits when you can take the easy route via an employee, or even direct network access?

11
Why We Love Music (greatergood.berkeley.edu)
submitted 2 weeks ago by saint@group.lt to c/music@beehaw.org

Some article from the past ;)

Why We Love Music

Metadata

Highlights

Using fMRI technology, they’re discovering why music can inspire such strong feelings and bind us so tightly to other people.

“A single sound tone is not really pleasurable in itself; but if these sounds are organized over time in some sort of arrangement, it’s amazingly powerful.”

There’s another part of the brain that seeps dopamine, specifically just before those peak emotional moments in a song: the caudate nucleus, which is involved in the anticipation of pleasure. Presumably, the anticipatory pleasure comes from familiarity with the song—you have a memory of the song you enjoyed in the past embedded in your brain, and you anticipate the high points that are coming.

During peak emotional moments in the songs identified by the listeners, dopamine was released in the nucleus accumbens, a structure deep within the older part of our human brain.

This finding suggested to her that when people listen to unfamiliar music, their brains process the sounds through memory circuits, searching for recognizable patterns to help them make predictions about where the song is heading. If music is too foreign-sounding, it will be hard to anticipate the song’s structure, and people won’t like it—meaning, no dopamine hit. But, if the music has some recognizable features—maybe a familiar beat or melodic structure—people will more likely be able to anticipate the song’s emotional peaks and enjoy it more. The dopamine hit comes from having their predictions confirmed—or violated slightly, in intriguing ways.

On the other hand, people tend to tire of pop music more readily than they do of jazz, for the same reason—it can become too predictable.

Her findings also explain why people can hear the same song over and over again and still enjoy it. The emotional hit off of a familiar piece of music can be so intense, in fact, that it’s easily re-stimulated even years later.

“Musical rhythms can directly affect your brain rhythms, and brain rhythms are responsible for how you feel at any given moment,” says Large.

“If I’m a performer and you’re a listener, and what I’m playing really moves you, I’ve basically synchronized your brain rhythm with mine,” says Large. “That’s how I communicate with you.”

He points to the work of Erin Hannon at the University of Nevada who found that babies as young as 8 months old already tune into the rhythms of the music from their own cultural environment.

“Liking is so subjective,” he says. “Music may not sound any different to you than to someone else, but you learn to associate it with something you like and you’ll experience a pleasure response.”

33
submitted 2 weeks ago by saint@group.lt to c/science@lemmy.world

Interesting findings

4

cross-posted from: https://group.lt/post/1926151

Don't read if you don't want to ruin your day.

The One About The Web Developer Job Market

Metadata

Highlights

Many organisations are also resorting to employee-hostile strategies to increase employee churn, such as forced Return-To-Office policies.

Finding a non-bullshit job is likely only going to get harder.

• Finding effective documentation, information, and training is likely to get harder, especially in specialised topics where LLMs are even less effective than normal.

as soon as you start to try to predict the second or third order consequences things very quickly get remarkably difficult.

In short, futurists are largely con artists.

you need to do something

Note: ah.. the famous "you need to do something"

This is not a one-off event but has turned into a stock-market-driven movement towards reducing the overall headcount of the tech industry.

What this means is that when the bubble ends, as all bubbles must, the job market is likely to collapse even further.

The stock market loves job cuts

Activist investors see it as an opportunity to lower developer compensation

Management believes they can replace most of these employees with LLM-based automation

Discovering whether it’s true or not is actually quite complicated as it, counter-intuitively, doesn’t depend on the degree of LLM functionality but instead depends entirely on what organisations, managers, and executives are using software projects for.

Since, even in the best case scenario of the most optimistic prediction of LLM power, you’re still going to need to structure a plan for the code and review it, the time spent on code won’t drop to zero. But if you believe in the best-case prediction, a 20-40% improvement in long term productivity sounds reasonable, if a bit conservative.

The alternate world-view, one that I think is much more common among modern management, is that the purpose of software development is churn.

None of these require that the software be free or even low on defects. The project doesn’t need to be accessible or even functional for a majority of the users. It just needs to look good when managers, buyers, and sales people poke at it.

The alternate world-view, which I think I can demonstrate is dominant in web development at least, is that software quality does not matter. Production, not productivity, is what counts. Up until now, the only way to get production and churn has been to focus on short-term developer experience, often at the expense of the long-term health of the project, but the innovation of LLMs is that now you can get more churn, more production, with fewer developers.

This means that it doesn’t matter who is correct or not in their estimate of how well these tools work.

You aren’t going to notice the issue as an end-user. From your perspective the system is working perfectly.

Experienced developers will edit out the issues without thinking about it, focusing on the time-saving benefit of generating the rest. Inexperienced developers won’t notice the issues and think they’ve just saved a lot of time, not realising they’ve left a ticking time bomb in the code base.

The training data favours specific languages such as Python and JavaScript.

• The same tool that enhances their productivity by 20-30% might also be outright harmful to a junior developer’s productivity, once you take the inevitable and eventual corrections and fixes into account.

From the job market perspective, all that improved and safe LLM-based coding tools would mean is more job losses.

Because manager world-views are more important than LLM innovation.

If the technology is what’s promised, the churn world-view managers will just get more production, more churn, with even fewer developers. The job market for developers will decline.

they will still use the tech to increase production, with fewer developers, because software quality and software project success isn’t what they’re looking for in software development. The job market for developers will decline.

The more progress you see in the automation, the fewer of us they’ll need. It isn’t a question of the nature of the improvement, but of the attitudes of management.

Even if that weren’t true, technical innovations in programming generally don’t improve project or business outcomes.

The odds of a project’s success are dictated by user research, design, process, and strategy, not the individual technological innovations in programming. Rapid-Application-Development tools, for example, didn’t shift outcomes in meaningful ways.

What matters is whether the final product works and improves the business it was made for. Business value isn’t solely a function of code defects. Technical improvements that address code defects are necessary, but not sufficient.

tech industry management is firmly convinced that less is more when it comes to employing either.

Whether you’re a bear or a bull on LLMs, we as developers are going to get screwed either way, especially if we’re web developers.

Most web projects shipped by businesses today are broken, but businesses rarely seem to care.

Most websites perform so badly that they don’t even finish loading on low-end devices, even when business outcomes directly correlate with website performance, such as in ecommerce or ad-supported web media.

The current state of web development is as if most Windows apps released every year simply failed to launch on 20-40% of all supported Windows installs.

If being plausible is all that matters, then that’s the literal, genuine, core strength of an LLM.

This is a problem for the job market because if all that matters to these organisations is being seen plausibly chasing cutting-edge technology – that the actual business outcomes don’t matter – then the magic of LLMs means that you don’t actually need that many developers to do that for much, much less money.

Web media is a major employer, both directly and indirectly, of web developers. If a big part of the web media industry is collapsing, then that’s an entire sector that isn’t hiring any of the developers laid off by Google, Microsoft, or the rest. And the people they aren’t hiring will still be on the job market competing with everybody else who wouldn’t have even applied to work in web media.

The scale of LLM-enabled spam production outstrips the ability of Bing or Google to counter it.

But it gets even worse as every major search engine provider on the market is all-in on replacing regular keyword search with chatbots and LLM-generated summaries that don’t drive any traffic at all to their sources.

It’s reasonable to expect that the job market is unlikely to ever fully bounce back, due to the collapse of web media alone.

Experience in Node or React is not a reliable signifier of an ability to work on successful Node or React projects because most Node or React projects aren’t even close to being successful from a business perspective. Lack of experience in Node or React – such as a background in other frameworks or vanilla JS – conversely isn’t a reliable signifier that the developer won’t be a successful hire.

Lower pay, combined with the information asymmetry about employer dysfunction, would then lead to more capable workers leaving the sector, either to run their own businesses – a generally dysfunctional web development sector is likely to have open market opportunities – or leave the industry altogether. This would exacerbate the job market’s dysfunctions even further, deepening the cycle.

The first thing to note is that, historically, whenever management adopts an adversarial attitude towards labour, the only recourse labour has is to unionise.

As employees, we have nothing to lose from unionising. That’s the first consequence of management deciding that labour is disposable.

Diversifying your skills has always been a good idea for a software developer. Learning a new language gives you insight into the craft of programming that is applicable beyond that language specifically.

But the market for developer training in general has collapsed.

Some of it is down to the job market. Why invest in training if tech cos aren’t hiring you anyway? Why invest in training your staff if you’re planning on replacing them with LLM tools anyway?

After all, as Amy Hoy wrote in 2016:

Running a biz is a lot less risky than having a job, because 1000 customers is a lot less fragile than 1 employer.

The tech industry has “innovated” itself into a crisis, but because the executives aren’t the ones out looking for jobs, they see the innovations as a success.

The rest of us might disagree, but our opinions don’t count for much.

But what we can’t do is pretend things are fine.

Because they are not.

Thoughts?

9

Highlights

COBOL remains crucial to businesses and institutions around the world.

It is estimated that $3 trillion in daily commerce flows through COBOL systems, while 95% of ATM swipes and 80% of in-person banking transactions rely on COBOL code.

when unemployment claims suddenly spiked due to the pandemic, these archaic systems could not keep up, which means that benefits are not being distributed.

The spike in unemployment claims exposed another new problem: there is no one around to repair these legacy systems.

Although a few universities still offer COBOL courses, the number of people studying it today is extremely small.

COBOL Cowboys’ business model is more akin to the gig economy rather than to that of the companies at which these industry veterans spent their careers. It is staffed with mostly older freelancers, everyone is an independent consultant, and there is no promise of any work. The company’s slogan is “not our first rodeo.”

“A lot of us want to spend time with our grandkids, but we also want to keep busy.”

Hinshaw was in contact with the state of New Jersey at the beginning of the current crisis, and quickly saw that the unemployment claims issue wasn’t a back-end problem. Every claim that was sent to the host (the back-end mainframe) was processed.

“They all have the same problem on the front end,” says Hinshaw, adding that these organizations’ Web sites were not designed to handle that kind of volume, while the back-end mainframes typically can.

IBM, which sold many of the mainframes on which COBOL systems run, has been scrambling to launch initiatives in order to meet the urgent need for COBOL programmers to address the overloaded unemployment systems.

While these measures should eventually help to alleviate the shortage in COBOL programming expertise, it is clear that the past approach of “if it isn’t broken, don’t fix it” has contributed to the current problem.

Are you learning COBOL already? ;)

7
Composite SLO (blog.alexewerlof.com)
submitted 1 month ago by saint@group.lt to c/bofh@group.lt

How to calculate SLO
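
For a chain of components that all have to work, the usual first-order estimate is to multiply their availabilities, assuming independent failures (a simplifying assumption, not necessarily the exact method in the linked post). A minimal sketch:

from math import prod

def composite_availability(availabilities_pct):
    """Availability of a serial chain of independent components, in percent."""
    return 100 * prod(a / 100 for a in availabilities_pct)

# Illustrative SLOs for a frontend, an API, and a database
print(composite_availability([99.95, 99.9, 99.99]))  # ~99.84 - worse than any single part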

24
submitted 1 month ago by saint@group.lt to c/bofh@group.lt

cross-posted from: https://feddit.it/post/7752642

A week of downtime, and all the servers were recovered only because the customer had a proper disaster recovery protocol and held backups somewhere else; otherwise Google would have deleted the backups too

Google Cloud's CEO says "it won't happen anymore"; it's insane that there's even the possibility of an "instant delete everything"

[-] saint@group.lt 16 points 2 months ago

i am all for normalizing raiding embassies for [put the cause you support] as well

[-] saint@group.lt 56 points 4 months ago

well this is probably PR, as there is no such system, nor can one be made, that has 100% uptime. not to mention the fact that network engineers rarely work with servers :)

[-] saint@group.lt 7 points 6 months ago

there is an open request for this, but seems that not being actively worked on: https://github.com/mastodon/mastodon/issues/18601

[-] saint@group.lt 11 points 6 months ago

first you should check the logs of the cloudflare tunnel - most likely it cannot access your docker network. if you are using the cloudflare container, it should use the same network as the Immich instance.

in short: find the tunnel log and see what is happening there.

[-] saint@group.lt 20 points 8 months ago

Matrix I. I skipped classes and watched it more than ten times in the cinema.

[-] saint@group.lt 8 points 10 months ago

not all users put their matrix username in Lemmy. also - at least on desktop, clicking "send secure message" brings up the matrix client for me (Element)

[-] saint@group.lt 39 points 10 months ago

That's my kind of people!

[-] saint@group.lt 12 points 10 months ago

Any observed impact to performance?

[-] saint@group.lt 9 points 10 months ago

not good, sometimes still trying to use it and get lost from time to time

[-] saint@group.lt 9 points 11 months ago

read books, play games, watch tv, walk the dog, love my wife, sleep

[-] saint@group.lt 8 points 11 months ago

pricing changes, i.e. - removing free tier and increasing other plan prices.

[-] saint@group.lt 11 points 1 year ago

yay! thank you all!

i have made a not-so-quick-but-dirty Dockerfile to build on arm64

# Single image for both build and runtime (simple, not minimal)
FROM rust:1.70.0
WORKDIR /app

COPY . .

# Embed the current git tag as the version string before compiling
RUN echo "pub const VERSION: &str = \"$(git describe --tag)\";" > "crates/utils/src/version.rs"
RUN cargo build --release

# Install the PostgreSQL client library the binary needs at runtime
RUN apt update && apt -y install libpq5
RUN cp /app/target/release/lemmy_server /app/lemmy

CMD ["/app/lemmy"]

later I am planning to improve it a bit, to make the image smaller if i can
