3
submitted 5 months ago by btp@kbin.social to c/tech@kbin.social

The Android phone maker says go ahead, fix your own phone.

The right-to-repair movement continues to gain steam as another big tech company shows its support for letting people fix their own broken devices.

Google endorsed Oregon right-to-repair legislation Thursday, calling it a “common sense repair bill” and saying it would be a “win for consumers.” This marks the first time the Android phone maker has officially backed any right-to-repair law.

“The ability to repair a phone, for example, empowers people by saving money on devices while creating less waste,” said Steven Nickel, devices and services director of operations for Google, in a blog post Thursday. “It also critically supports sustainability in manufacturing. Repair must be easy enough for anyone to do, whether they are technicians or do-it-yourselfers.”

Under the Oregon repair bill, manufacturers would be required to provide the replacement parts, software, physical tools, documentation, and schematics needed for repair to authorized repair providers or individuals. The legislation covers any digital electronics with a computer chip, although cars, farm equipment, medical devices, solar power systems, and any heavy or industrial equipment not sold to consumers are exempt from the bill.

Google has made strides in making its Pixel phones easier to fix. The company enabled a Repair Mode for the phones last month, which protects the data on a device while it’s being serviced. There’s also a diagnostic feature that helps determine whether your Pixel phone is working properly. That said, Google’s Pixel Watch is another story: the company said in October it will not provide parts to repair its smartwatch.

Apple jumped on the right-to-repair bandwagon back in October. The iPhone maker showed its support for a federal law to make it easier to repair its phones after years of being a staunch opponent.

133
submitted 6 months ago by btp@kbin.social to c/technology@lemmy.ml

The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.

The full text of the lawsuit can be found here.

52
submitted 6 months ago by btp@kbin.social to c/privacy@lemmy.ml

A controversial developer circumvented one of Mastodon's primary tools for blocking bad actors, all so that his servers could connect to Threads.

We’ve criticized the security and privacy mechanisms of Mastodon in the past, but this new development should be eye-opening. Alex Gleason, the former Truth Social developer behind Soapbox and Rebased, has come up with a sneaky workaround to how Authorized Fetch functions: if your domain is blocked for a fetch, just sign it with a different domain name instead.

Gleason was originally investigating Threads federation to determine whether a failure to fetch posts indicated a software compatibility issue or whether Threads had blocked his server. After checking some logs and experimenting, he came to a conclusion.

“Fellas,” Gleason writes, “I think threads.net might be blocking some servers already.”

What Alex found was that Threads attempts to verify domain names before allowing access to a resource, a very similar approach to what Authorized Fetch does in Mastodon.

You can see Threads fetching your own server by looking at the facebookexternalua user agent. Try this command on your server:

grep facebookexternalua /var/log/nginx/access.log

If you see matching entries there, that means Threads is attempting to verify your signatures and allow you to access its data.
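
For a slightly richer view than a one-line grep, a small script can summarize those fetches per day. This is a minimal sketch, assuming the default nginx access log path and the standard combined log format; adjust both for your own server.

# Minimal sketch: count Threads (facebookexternalua) fetches per day in an nginx access log.
# Assumes the default log path and "combined" log format; adjust for your server.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # assumed default path
UA_MARKER = "facebookexternalua"

per_day = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if UA_MARKER not in line:
            continue
        # Timestamps look like [10/Dec/2023:13:55:36 +0000]; keep just the date part.
        start = line.find("[")
        if start != -1:
            per_day[line[start + 1:start + 12]] += 1

for day, count in sorted(per_day.items()):
    print(f"{day}: {count} fetches")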

8
How Big is YouTube? (ethanzuckerman.com)
submitted 6 months ago by btp@kbin.social to c/technology@lemmy.ml

I got interested in this question a few years ago, when I started writing about the “denominator problem”. A great deal of social media research focuses on finding unwanted behavior – mis/disinformation, hate speech – on platforms. This isn’t that hard to do: search for “white genocide” or “ivermectin” and count the results. Indeed, a lot of eye-catching research does just this – consider Avaaz’s August 2020 report about COVID misinformation. It reports 3.8 billion views of COVID misinfo in a year, which sounds like a very big number. But it’s a numerator without a denominator – Facebook generates dozens or hundreds of views a day for each of its 3 billion users – so 3.8 billion views is actually a very small number when contextualized with a denominator.
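
To put a rough number on that, here is a quick back-of-the-envelope calculation; the per-user figure below is an assumption taken from the post’s “dozens or hundreds of views a day” phrasing, not a measured value.

# Back-of-the-envelope: 3.8 billion misinfo views against Facebook's total yearly views.
# The per-user views figure is an assumption for illustration, not measured data.
users = 3_000_000_000                     # ~3 billion Facebook users
views_per_user_per_day = 50               # "dozens" of views per day, assumed
misinfo_views_per_year = 3_800_000_000    # the figure reported by Avaaz

total_views_per_year = users * views_per_user_per_day * 365
share = misinfo_views_per_year / total_views_per_year
print(f"Total views per year: {total_views_per_year:.2e}")   # ~5.5e13
print(f"Misinfo share:        {share:.4%}")                   # ~0.007%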

The paper this post describes can be found here.
Abstract:

YouTube is one of the largest, most important communication platforms in the world, but while there is a great deal of research about the site, many of its fundamental characteristics remain unknown. To better understand YouTube as a whole, we created a random sample of videos using a new method. Through a description of the sample’s metadata, we provide answers to many essential questions about, for example, the distribution of views, comments, likes, subscribers, and categories. Our method also allows us to estimate the total number of publicly visible videos on YouTube and its growth over time. To learn more about video content, we hand-coded a subsample to answer questions like how many are primarily music, video games, or still images. Finally, we processed the videos’ audio using language detection software to determine the distribution of spoken languages. In providing basic information about YouTube as a whole, we not only learn more about an influential platform, but also provide baseline context against which samples in more focused studies can be compared.
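
The abstract does not spell out the sampling method here, so the following sketch only illustrates the general idea of estimating a platform’s size from uniformly random ID probes; the ID-space figures reflect the real format of YouTube video IDs, but the probe counts are made-up numbers, and the paper’s actual method may differ.

# Illustrative sketch: estimating a platform's total size by probing uniformly random IDs.
# Not necessarily the paper's method; the probe counts below are made-up numbers.
ID_ALPHABET_SIZE = 64                      # YouTube IDs use A-Z, a-z, 0-9, '-' and '_'
ID_LENGTH = 11
id_space = ID_ALPHABET_SIZE ** ID_LENGTH   # ~7.4e19 possible IDs

ids_probed = 10_000_000   # hypothetical number of random IDs checked
ids_found = 2             # hypothetical number that resolved to public videos

# Under uniform sampling, the observed hit rate scales up to an estimate of the whole platform.
estimated_total = ids_found / ids_probed * id_space
print(f"ID space size:    {id_space:.2e}")
print(f"Estimated videos: {estimated_total:.2e}")   # ~1.5e13 with these made-up numbers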

[-] btp@kbin.social 1 points 6 months ago

Checks and balances. Plus, the U.S. is a very large country, with a large population that has its own priorities and values. Local municipalities can also vary widely within state governments. The federal system allows these communities to self-determine, while also enacting a foundation of basic rights and government function.

[-] btp@kbin.social 14 points 6 months ago

First, a quick primer on the tech: ACR identifies what’s displayed on your television, including content served through a cable TV box, streaming service, or game console, by continuously grabbing screenshots and comparing them to a massive database of media and advertisements. Think of it as a Shazam-like service constantly running in the background while your TV is on.

All of this is in the second paragraph of the article.
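
To make the fingerprint-matching idea concrete, here is a minimal sketch using a simple perceptual hash; it is a generic illustration of the technique, not how any particular TV vendor’s ACR is implemented, and the file names are hypothetical.

# Minimal sketch of fingerprint-style frame matching, the rough idea behind ACR.
# Generic illustration only; real ACR systems are far more sophisticated.
from PIL import Image   # third-party: pip install Pillow

def average_hash(path: str) -> int:
    # Shrink a frame to 8x8 grayscale, then hash each pixel against the mean brightness.
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical reference database mapping known content fingerprints to titles.
reference_db = {average_hash("known_ad_frame.png"): "Car ad, fall 2023 campaign"}

captured = average_hash("tv_screenshot.png")   # hypothetical captured frame
for fingerprint, title in reference_db.items():
    if hamming(captured, fingerprint) <= 5:    # small distance suggests the same content
        print("Matched:", title)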

[-] btp@kbin.social 2 points 6 months ago

I'm gonna keep sticking around and posting regularly for the time being. Still really enjoying the experience and the communities that are still here.

[-] btp@kbin.social 23 points 7 months ago

A newly discovered trade-off in the way time-keeping devices operate on a fundamental level could set a hard limit on the performance of large-scale quantum computers, according to researchers from the Vienna University of Technology.

While the issue isn't exactly pressing, our ability to grow systems based on quantum operations from backroom prototypes into practical number-crunching behemoths will depend on how well we can reliably dissect the days into ever finer portions. This is a feat the researchers say will become increasingly challenging.

Whether you're counting the seconds with whispers of Mississippi or dividing them up with the pendulum-swing of an electron in atomic confinement, the measure of time is bound by the limits of physics itself.

One of these limits involves the resolution with which time can be split. Measures of any event shorter than 5.39 x 10^-44 seconds, for example, run afoul of theories on the basic functions of the Universe. They just don't make any sense, in other words.
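
That 5.39 x 10^-44 s figure is the Planck time, which follows directly from fundamental constants; a quick check (the constant values are standard CODATA approximations):

# Quick check of the 5.39e-44 s figure: the Planck time, sqrt(hbar * G / c**5).
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

planck_time = math.sqrt(hbar * G / c**5)
print(f"{planck_time:.3e} s")   # ~5.391e-44 s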

Yet even before we get to that hard line in the sands of time, physicists think there is a toll to be paid that could prevent us from continuing to measure ever smaller units.

Sooner or later, every clock winds down. The pendulum slows, the battery dies, the atomic laser needs resetting. This isn't merely an engineering challenge – the march of time itself is a feature of the Universe's progress from a highly ordered state to an entangled, chaotic mess in what is known as entropy.

"Time measurement always has to do with entropy," says senior author Marcus Huber, a systems engineer who leads a research group at the intersection of Quantum Information and Quantum Thermodynamics at the Vienna University of Technology.

In their recently published theorem, Huber and his team lay out the logic that connects entropy as a thermodynamic phenomenon with resolution, demonstrating that unless you've got infinite energy at your fingertips, your fast-ticking clock will eventually run into precision problems.

Or as the study's first author, theoretical physicist Florian Meier puts it, "That means: Either the clock works quickly or it works precisely – both are not possible at the same time."

This might not be a major problem if you want to count out seconds that won't deviate over the lifetime of our Universe. But for technologies like quantum computing, which rely on the temperamental nature of particles hovering on the edge of existence, timing is everything.

This isn't a big problem when the number of particles is small. As they increase in number, the risk that any one of them could be knocked out of its quantum critical state rises, leaving less and less time to carry out the necessary computations.

Plenty of research has gone into exploring the potential for errors in quantum technology caused by a noisy, imperfect Universe. This appears to be the first time researchers have looked at the physics of timekeeping itself as a potential obstacle.

"Currently, the accuracy of quantum computers is still limited by other factors, for example the precision of the components used or electromagnetic fields," says Huber.

"But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role."

It's likely other advances in quantum computing will improve stability, reduce errors, and 'buy time' for scaled-up devices to operate in optimal ways. But whether entropy will have the final say on just how powerful quantum computers can get, only time will tell.

This research was published in Physical Review Letters.

[-] btp@kbin.social 25 points 7 months ago

I think it kind of flies in the face of what Open Source Software should be. They're walling off code behind accounts in the Microsoft ecosystem.

[-] btp@kbin.social 35 points 7 months ago

"References illicit drugs" lol

[-] btp@kbin.social 2 points 7 months ago

Ah, okay, I see. Thanks for clearing that up.

[-] btp@kbin.social 1 points 7 months ago* (last edited 7 months ago)

I haven't read through all the rules proper yet, but it looks like the specific circumstance you're mentioning here has already been taken into account by the FCC. From the article:

Under the new rules, the FCC can fine telecom companies for not providing equal connectivity to different communities “without adequate justification,” such as financial or technical challenges of building out service in a particular area. The rules are specifically designed to address correlations between household income, race, and internet speed.

[-] btp@kbin.social 1 points 7 months ago

Never mind, I'm a big dummy. I see this one, at least.

[-] btp@kbin.social 2 points 8 months ago

Shhh. Just bask in its glory.

