this post was submitted on 12 Aug 2023

DEF CON Infosec super-band the Cult of the Dead Cow has released Veilid (pronounced vay-lid), an open source project applications can use to connect up clients and transfer information in a peer-to-peer decentralized manner.

The idea here is that apps – mobile, desktop, web, and headless – can find and talk to each other across the internet privately and securely without having to go through centralized and often corporate-owned systems. Veilid provides code for app developers to drop into their software so that their clients can join and communicate in a peer-to-peer community.

In a DEF CON presentation today, Katelyn "medus4" Bowden and Christien "DilDog" Rioux ran through the technical details of the project, which has apparently taken three years to develop.

The system, written primarily in Rust with some Dart and Python, takes aspects of the Tor anonymizing service and the peer-to-peer InterPlanetary File System (IPFS). If an app on one device connects to an app on another via Veilid, it shouldn't be possible for either client to know the other's IP address or location from that connectivity, which is good for privacy, for instance. The app makers can't get that info, either.

Veilid's design is documented here, and its source code is here, available under the Mozilla Public License Version 2.0.

"IPFS was not designed with privacy in mind," Rioux told the DEF CON crowd. "Tor was, but it wasn't built with performance in mind. And when the NSA runs 100 [Tor] exit nodes, it can fail."

Unlike Tor, Veilid doesn't run exit nodes. Each node in the Veilid network is equal, and if the NSA wanted to snoop on Veilid users like it does on Tor users, the Feds would have to monitor the entire network, which hopefully won't be feasible, even for the No Such Agency. Rioux described it as "like Tor and IPFS had sex and produced this thing."

"The possibilities here are endless," added Bowden. "All apps are equal, we're only as strong as the weakest node and every node is equal. We hope everyone will build on it."

Each copy of an app using the core Veilid library acts as a network node; it can communicate with other nodes and uses a 256-bit public key as its ID. There are no special nodes, and there's no single point of failure. The project supports Linux, macOS, Windows, Android, iOS, and web apps.
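As a rough sketch of that structure (nothing here is Veilid's real API; the class and field names are invented), every app instance is an equal peer whose identifier simply is its 256-bit public key:

```python
import secrets

# Hypothetical model of the article's claim: every app instance is an equal
# node, identified by a 256-bit public key. The random bytes below stand in
# for a real public key; Veilid's actual types are not shown here.
class Node:
    def __init__(self) -> None:
        self.node_id = secrets.token_bytes(32)  # 256 bits

    def id_hex(self) -> str:
        return self.node_id.hex()

a, b = Node(), Node()
assert len(a.node_id) == 32       # 256-bit identifier
assert a.node_id != b.node_id     # no special nodes, just unique IDs
```

Because the ID is a public key rather than an address handed out by a server, there is no registry to take down or subpoena.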

Veilid can talk over UDP and TCP, and connections are authenticated, timestamped, strongly end-to-end encrypted, and digitally signed to prevent eavesdropping, tampering, and impersonation. The cryptography involved has been dubbed VLD0, and uses established algorithms since the project didn't want to risk introducing weaknesses from "rolling its own," Rioux said.

This means XChaCha20-Poly1305 for encryption, Ed25519 for public-private-key authentication and signing, x25519 for DH key exchange, BLAKE3 for cryptographic hashing, and Argon2 for password hash generation. These could be switched out for stronger mechanisms if necessary in future.
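One way to picture that suite is as a role-to-algorithm table, which is what makes the "switched out for stronger mechanisms" point work: code asks for a role, not a specific cipher. A minimal sketch (the dict layout and the hypothetical "VLD1" successor are invented; only the VLD0 algorithm names come from the talk):

```python
# The VLD0 suite keyed by cryptographic role. A future, stronger suite
# (a hypothetical "VLD1") could be registered without touching callers.
CIPHER_SUITES = {
    "VLD0": {
        "aead": "XChaCha20-Poly1305",   # authenticated encryption
        "sign": "Ed25519",              # public-key signatures
        "kex": "x25519",                # Diffie-Hellman key exchange
        "hash": "BLAKE3",               # cryptographic hashing
        "pwhash": "Argon2",             # password hash generation
    },
}

def algorithm_for(suite: str, role: str) -> str:
    return CIPHER_SUITES[suite][role]

assert algorithm_for("VLD0", "kex") == "x25519"
```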

Files written to local storage by Veilid are fully encrypted, and encrypted table store APIs are available for developers. Keys for encrypting device data can be password protected.
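The password-protection step has a familiar shape: password plus salt in, key-wrapping key out. Here is a sketch of that shape only – Veilid uses Argon2, but since Python's standard library lacks it, PBKDF2-HMAC-SHA256 stands in, and the function name is invented:

```python
import hashlib
import secrets

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # PBKDF2 as a stdlib stand-in for Argon2: a deliberately slow,
    # salted derivation from password to a 256-bit key-wrapping key.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)

salt = secrets.token_bytes(16)
k1 = derive_wrapping_key("hunter2", salt)
k2 = derive_wrapping_key("hunter2", salt)
assert k1 == k2 and len(k1) == 32            # same inputs, same 256-bit key
assert derive_wrapping_key("hunter3", salt) != k1  # wrong password, wrong key
```

The derived key never touches disk; it only wraps the actual data-encryption key, so changing the password means re-wrapping one key, not re-encrypting everything.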

"The system means there's no IP address, no tracking, and no data collection – that's the biggest way that people are monetizing your internet use," Bowden said.

"Billionaires are trying to monetize those connections, and a lot of people are falling for that. We have to make sure this is available," Bowden continued. The hope is that applications will include Veilid and use it to communicate, so that users can benefit from the network without knowing all the above technical stuff: it should just work for them.

To demonstrate the capabilities of the system, the team built a Veilid-based secure instant-messaging app along the lines of Signal called VeilidChat, using the Flutter framework. Many more apps are needed.

If it takes off in a big way, Veilid could put a big hole in the surveillance capitalism economy. It's been tried before with mixed or poor results, though the Cult has a reputation for getting stuff done right. ®

[–] PottedPlant@lemm.ee 94 points 1 year ago (1 children)

Impressive design.

Implicit in the description is that the weakness would be monitoring the entire network – somehow, if that's even possible.

The more apps and nodes that run Veilid, the more private the system.

I look forward to adoption being vast and wide. The bigger the better.

But cue the 'but we need to protect the children' crowd trying to outlaw these protocols.

[–] PeleSpirit@lemmy.world 2 points 1 year ago (8 children)

What I don't understand about these projects is why can't we both have them and protect the children (child porn, child trafficking, etc.)? Is there a way to self police like the fediverse is starting to do by kicking those people out of the instance or no one will connect with them? I would love the privacy from corporations, not places where really shitty people can do really shitty things.

[–] Beryl@lemmy.world 68 points 1 year ago

It's simple, really: if you have a built-in back door to prevent child porn circulation, then you can use it for anything else, and it WILL eventually be used in other ways.

[–] nickwitha_k@lemmy.sdf.org 48 points 1 year ago (3 children)

What I don't understand about these projects is why can't we both have them and protect the children (child porn, child trafficking, etc.)?

The reason is that the "protect the children" thing is and always has been a bad faith excuse to expand or establish control over others. That's not to say that places like Tor don't have a problem with CSAM, but if that were the actual target, it would be addressed in the proposed laws and vigorously pursued. It never is.

Protecting children is always, at most, a token gesture in these laws, which expand censorship and surveillance of the population while demonstrating complete disregard for the harms and unnecessary risks they introduce, and generally also exempting those in power from being impacted.

[–] lateraltwo@lemmy.world 2 points 1 year ago

For a good example of how "extra policing and security" can go too far, go no further than the TSA and all the "terrorism" they protect us from

[–] raspberriesareyummy@lemmy.world 32 points 1 year ago* (last edited 1 year ago) (2 children)

I would argue it could be more efficient to protect children (and all victims) in our daily lives - show empathy towards others, and improve empathy in societies where necessary (yes, sadly, this is a lengthy process), to the point where no country will seem to be turning a blind eye towards abusers, and where people care & check on the kids they see in the neighborhood. This won't eliminate all the abuse, but online policing of contents is only fighting the symptoms, so the "offline approach" seems preferable. And surprise - if people are vigilant offline, the excuse for global surveillance goes away & ugly corporate capitalistic assholes need to find a new excuse.

[–] thisbenzingring@lemmy.sdf.org 17 points 1 year ago

The way they caught that horrible serial abuser in Australia recently is a good example of a detective using localized skills to find the needle in the haystack and identify a blanket in an abuse video.

[–] milkjug@lemmy.wildfyre.dev 4 points 1 year ago

Actually protecting people? Showing empathy?! Who do you think we are, demoncrats?

[–] echo64@lemmy.world 32 points 1 year ago* (last edited 1 year ago) (1 children)

For the same reason we don't allow government cameras in every public and private bathroom, even though they could stop really shitty people doing really shitty things.

Humans demand personal privacy and need avenues for it. Quite literal Big Brother is generally not felt to be something any society wants, even if it could eliminate the shitty people doing really shitty things.

It's not a tech problem. It's a societal one.

[–] Loulou@lemmy.mindoki.com 30 points 1 year ago

It was never about the children or fighting terrorism. To catch pedophiles or thwart attacks you have to have people "on the ground", not snoop on everything.

[–] boatswain@infosec.pub 8 points 1 year ago (1 children)

What I don't understand about these projects is why can't we both have them and protect the children

Think of this as closer to Signal than to a social media platform. It's a protocol, so there's nothing to say you couldn't build a social media site with it, but for now the demo app that I saw today is just chat. The parties involved share public keys with each other, and from then on, everything is encrypted so that only those people in the chat can read it.
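That "share public keys, then only the participants can read" model boils down to a key agreement. A toy illustration with textbook Diffie-Hellman over a small prime – deliberately insecure parameters, and not the x25519 exchange Veilid actually uses:

```python
import secrets

P = 4294967291  # small prime (2**32 - 5); fine for illustration, useless for security
G = 5

def keypair() -> tuple[int, int]:
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)     # (kept private, shared publicly)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
# An eavesdropper who sees only alice_pub and bob_pub learns neither secret.
shared_alice = pow(bob_pub, alice_priv, P)
shared_bob = pow(alice_pub, bob_priv, P)
assert shared_alice == shared_bob    # same key on both ends, never transmitted
```

Once both sides hold the shared secret, the encrypted traffic is opaque to everyone else on the network, which is exactly why there is no central point where a filter could be bolted on.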

With that model, censorship is not really feasible. If you're one of the parties in the conversation, you can say "guys, that's gross, stop" or send screenshots to the cops or whatever, but that's about it.

Ultimately, if the only way the Authorities have of acting against terrorism/pedophiles/etc is by infringing everyone in the country's right to privacy, they're doing a shit job and need to be replaced.

[–] guyrocket@kbin.social 8 points 1 year ago (3 children)

I think this is a great question, but I would ask it a little differently.

Is it possible for a p2p system to self police for things like cp?

Maybe no one knows how now. But maybe someone can figure it out eventually. Seems like a bit of a logical contradiction but I continue to be amazed at human creativity.

[–] treadful@lemmy.zip 7 points 1 year ago (1 children)

Yeah, they are contradictory concepts to an extent. Making an uncensorable and untraceable protocol means exactly that. Things like the Fediverse are not that and censorship can come through things like defederation and blocking.

That said, they exist on different layers. You could probably run a federated system on top of this protocol and still be able to filter out the illegal and offensive content. It doesn't mean that content just disappears, it just means you don't have to subject yourself to it.

[–] guyrocket@kbin.social 1 points 1 year ago

Interesting, especially running a federated system on top of the new protocol.

[–] linearchaos@lemmy.world 2 points 1 year ago

If it were just anonymous content in a public setting you could use crowd-based morality to filter it. Any hash with a 75% downvote rate gets blacklisted, that kind of thing. You'd have to account for bots and AI, which may not be possible.
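The thresholding rule itself is easy to state precisely (the 75% figure is the commenter's example; the function name is invented, and Sybil/bot resistance is the hard part this deliberately ignores):

```python
def is_blacklisted(downvotes: int, upvotes: int, threshold: float = 0.75) -> bool:
    # Crowd-moderation sketch: blacklist a content hash once the downvote
    # share of all votes reaches the threshold.
    total = downvotes + upvotes
    return total > 0 and downvotes / total >= threshold

assert is_blacklisted(75, 25)        # exactly at the 75% threshold
assert not is_blacklisted(50, 50)    # split vote stays visible
```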

But once you put private into the mix, you lose the crowd you'd need to vote for morality. Now you've got cases where MGM, Sony and BMG hire people to infiltrate the networks and shut down any post they deem unfit.

Privacy is the difference between the dark web and the public torrent scene.

[–] PeleSpirit@lemmy.world -1 points 1 year ago (1 children)

I mean, AI can tell what pictures I have on my computer by showing me ads that coincide, so surely a system could have AI find it and block it. That's one partial solution.

[–] jdaxe@infosec.pub 7 points 1 year ago (1 children)

Which AI is scanning your computer and how?

[–] yiliu@informis.land 6 points 1 year ago

That would require that users have access to other users' traffic, compromising security. After all, there's no reason the government or corporations couldn't operate many 'users'.