this post was submitted on 30 Jun 2024

retrocomputing

I've been playing with an idea that would involve running a machine over a delay-tolerant mesh network. The thing is, each packet is precious and needs to be pretty much self contained in that situation, while modern systems assume SSH-like continuous interaction with the user.

Has anyone heard of anything pre-existing that would work here? I figured if anyone would know about situations where each character is expensive, it would be you folks.

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago (1 children)

As long as you're using TCP (what SSH uses) or a similar protocol, you should be able to deal with a situation like that. You'd mainly need to ensure that your client and server are tuned to meet your needs. With TCP, every packet is considered important and if the receiver does not acknowledge receipt, the sender will resend.
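
For example, the kind of client-side tuning meant here, sketched in Python (the option values are purely illustrative, and the Linux-only keepalive knobs are guarded so the sketch stays portable):

```python
import socket

def make_patient_socket() -> socket.socket:
    """A TCP socket tuned to tolerate long idle gaps before giving up."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel to probe idle connections instead of silently dropping them.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs; the values are placeholders, not recommendations.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)   # seconds idle before first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 20)     # failed probes before reset
    return s
```

That said, kernel TCP timers top out well short of hours, so truly extreme delays would have to be handled at the application layer.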

[–] CanadaPlus@lemmy.sdf.org 4 points 4 months ago* (last edited 4 months ago) (1 children)

I'm not just talking a lot of latency, I'm talking snail-mail levels. Hours probably won't even be unusual, because hops will happen partly by sneakernet as people move around with their nodes. The concept is distributed burst radio for extreme censorship environments.

The point of the containers in the first place is to make as much as possible work offline, without the user having to be in the loop.

[–] nickwitha_k@lemmy.sdf.org 4 points 4 months ago (1 children)

Oh that's interesting. I might suggest looking at implementations of IP Over Avian Carrier (IPoAC). And I do mean that seriously. The idea started as an April Fools RFC but some people have actually implemented it. Basically, just using a different physical layer.

[–] CanadaPlus@lemmy.sdf.org 1 points 4 months ago (1 children)

Yeah, that's probably worth a look. Good suggestion. There are also delay-tolerant protocols for space and similar, but I don't know if any of them define an endpoint, as opposed to just a transport layer.

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago (1 children)

Indeed. I'd really suggest going for something based upon Internet Protocol, with any software that you need at endpoints to read and/or transmit. I might poke about at some ideas on the weekend (long holiday). What languages are you thinking to use?

[–] CanadaPlus@lemmy.sdf.org 2 points 4 months ago* (last edited 4 months ago) (1 children)

Probably Rust, although I'm not married to it. I'm just at the planning stage right now, though.

One open question is whether you can use a fairly standard transceiver like a Bluetooth chip, or whether you need an SDR. Obviously they weren't designed with this in mind, but maybe there's a profile that's close enough.

Packets should have a few kilobytes of payload so you can fit a post-quantum cryptographic artifact. Thankfully, even with a BCH code, it seems doable to fit that much in a 1-second burst in a standard amateur radio voice channel, for testing. (In actual clandestine use I'd expect you'd want to go as wide as the hardware can support.)
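
The back-of-the-envelope version of that, in Python. Every input is an assumption, not a measurement: channel width, spectral efficiency, and BCH code rate all depend on the hardware and SNR actually available.

```python
def burst_payload_bytes(bandwidth_hz: float, bits_per_sec_per_hz: float,
                        code_rate: float, burst_secs: float = 1.0) -> float:
    """Rough payload capacity of one burst after forward error correction."""
    raw_bits = bandwidth_hz * bits_per_sec_per_hz * burst_secs
    return raw_bits * code_rate / 8  # information bytes left after coding overhead

# A ~2.7 kHz voice channel, an optimistic 8 bit/s/Hz modulation,
# and a rate-3/4 code gives about 2 KB per 1-second burst:
payload = burst_payload_bytes(2700, 8, 0.75)  # 2025.0 bytes
```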

As envisioned there would be someone operating a hub, which might have actual network access through some means, and on which the containers run. They would send out runners to collect traffic from busy public spaces which might serve as hubs for burst activity, and dump outgoing packets, all without giving up any locations.

Accounts, each with their own small container, would be opened by sending in a public key, and further communication would then use a standard symmetric algorithm - except in testing, because encryption is an amateur radio no-no, so just signed cleartext there. The way I've been thinking about it, IDs would be derived from the signature fingerprint. I have a lightweight hash scheme in mind that would allow awarding credit for retransmitting packets in a way that couldn't be cheated.
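
The ID derivation could be as simple as hashing the key material and truncating - a sketch, where SHA-256 and the 8-byte truncation are arbitrary choices for illustration:

```python
import hashlib

def node_id(public_key: bytes, length: int = 8) -> str:
    """Derive a short pseudonymous ID from a public-key fingerprint.

    Any collision-resistant hash works; the truncation length trades
    per-packet overhead against collision probability.
    """
    return hashlib.sha256(public_key).hexdigest()[: length * 2]
```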

You'd want to have some ability to detect and move around jamming, or just other people's bursts. That's more hardware research, basically.

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago (2 children)

I've got a few things that I need to get done in the next few days (hopefully mostly sorted today) but you've got me rather intrigued with this as a puzzle. I'll see if I can get some time to sketch some thoughts out and maybe some high-level implementation of some bits in Python (it's faster to PoC things).

A few quick thoughts:

  • I think that an existing or novel protocol built on top of the Internet Protocol is likely the way to go. Following the OSI model, you can target Layer 4, with some simple stuff for higher layers. Client/Server (possibly the same binary) and associated automation should handle Layers 1-3 (translating between different carriers for Layers 1 and 2, and handling routing of data packets in Layer 3).

  • Message routing strategies and their impact on OpSec are worth considering. By this I mean: broadcast-only vs targeted-only vs both vs hybrid. All have trade-offs.

Broadcast-only: Makes it harder to know the intended destination of the message. Conversely, by being routed to either all known addresses or all approved addresses, it can be more vulnerable to interception by a compromised endpoint.

Targeted-only: May be harder to intercept, as the path a packet takes should hit fewer potential endpoints. Conversely, some form of addressing is necessary to know, at the least, the next hop in transit. This makes tracing the intended endpoint, as well as the network hops, much easier (e.g. running a traceroute).

Both: Gains the advantages and disadvantages of both approaches, depending on which mode the data is transmitted in. Ensuring that data is transmitted correctly becomes important, with implications for maintaining known-good versions of the client/server software to avoid unintentional or malicious misrouting.

Hybrid: Could take many forms but the one that comes to my mind is a multilevel hub and spoke architecture (I'll draw this out). Basically, you end up having 2-3 "modes" for a client/server: hub, spoke, and endpoint. One or more client/servers operating in a hub "mode" act like traditional servers, kinda like a bulletin board, holding packets for local delivery or transmission to another hub. Client/servers in the spoke mode act as hops between hubs. Client/servers in the endpoint mode are the actual intended destination (this could be combined with the spoke mode). To protect endpoint identity, the destination could be part of the encrypted data packet allowing an endpoint to attempt to decrypt packets received from a hub locally, making it harder to know which endpoint a message is intended for. This does still require greater visibility of hub addresses for routing.

  • Encryption of packets is vital. Supporting some modularity might be of value so as to allow simpler cryptography for the PoC, but the protocol should ensure that it is possible to break backward compatibility (normally NOT what you want to do for networking protocols, but avoiding an "it's an old code but still checks out" situation is more important).

  • Amateur radio should be avoided in both PoC and hypothetical "production" use cases. The ban on encryption is insurmountable there, and illegal use of encryption could lead to heightened visibility, because the FCC, historically, does not fuck around with illegal radio signals. This means all wireless should be below 1W in the US, in bands that are legal for unlicensed use.

  • Any physical layer that supports arbitrary data transfers should be possible. The implementation to support it would be part of the client/server. So, Bluetooth, 802.11, LoRa, sneakernet, and many others could hypothetically be supported. Again, though, this relies on the protocol stack being able to understand the medium, either directly or as translated by another component.

  • A web of trust may be a good approach for authentication and identity.
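
The "attempt to decrypt locally" idea from the hybrid mode could look something like this toy sketch. It only does recognition via an HMAC tag and leaves the payload in the clear for brevity; a real version would wrap everything in an AEAD cipher.

```python
import hashlib
import hmac
import os
from typing import Optional

def seal(recipient_key: bytes, payload: bytes) -> bytes:
    """Tag a packet so only the holder of recipient_key recognises it as theirs.
    Recognition only - the payload is NOT encrypted in this sketch."""
    nonce = os.urandom(16)
    tag = hmac.new(recipient_key, nonce + payload, hashlib.sha256).digest()
    return nonce + tag + payload

def try_open(my_key: bytes, packet: bytes) -> Optional[bytes]:
    """Trial-decryption step run by every endpoint that pulls from a hub:
    return the payload if the packet was addressed to us, else None."""
    nonce, tag, payload = packet[:16], packet[16:48], packet[48:]
    expected = hmac.new(my_key, nonce + payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

Every endpoint tries every packet; only the intended recipient gets a match, so the hub never learns who a message was for.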

[–] CanadaPlus@lemmy.sdf.org 3 points 4 months ago (1 children)

Darn, I have to go now. Apologies for the considerable latency there might be getting back to you on this, haha!

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago

All good. I'm going to likely have bursts between the holiday, house guests, and other projects (ex. converting a "dumb" digital bbq thermometer into a Prometheus endpoint).

[–] CanadaPlus@lemmy.sdf.org 2 points 4 months ago* (last edited 4 months ago) (1 children)

Alright, I'm back.

I was talking about amateur radio (in general) as a physical layer because I'm familiar with it, and know it can support short, wide-enough bursts with total radio silence in between. That's an important requirement because if you're loud continuously, in the "prod" case, jackboots with a yagi will show up and arrest you. Spies use fast, wide digital radio transmissions a bit like this in really locked-down countries, just not networked together in any way.

If more end-user hardware - or even a non-RF medium - would work, great, no issue. Like you said, there's no way to support too many assuming they're safe.

For routing, I would suggest that no incoming transmission (or "transmission", if it's really a hardwire connection) is ignored, but that when to rebroadcast is left up to the user, who will be able to assess risk and the likelihood of getting closer to the destination in a way no reasonable software could.

> Hybrid: Could take many forms but the one that comes to my mind is a multilevel hub and spoke architecture (I’ll draw this out). Basically, you end up having 2-3 “modes” for a client/server: hub, spoke, and endpoint. One or more client/servers operating in a hub “mode” act like traditional servers, kinda like a bulletin board, holding packets for local delivery or transmission to another hub. Client/servers in the spoke mode act as hops between hubs. Client/servers in the endpoint mode are the actual intended destination (this could be combined with the spoke mode). To protect endpoint identity, the destination could be part of the encrypted data packet allowing an endpoint to attempt to decrypt packets received from a hub locally, making it harder to know which endpoint a message is intended for. This does still require greater visibility of hub addresses for routing.

Yeah, so a hub just makes good sense - with such a modest network capacity relative to hardware capabilities, why not gather as much in one place as possible? Because one hub might get busted or just fall to some version of enshittification, it should be easy enough for a user to switch, but I think it's the best choice of central organising principle.

Other than anonymity, is there a reason to separate out spokes from endpoints? One thing I already have worked out is a system where the hub can keep track of who has helped transmit things (in a cheat-proof way), and could simply give credit for traffic moved, offsetting whatever cost there is to use it (ISPs aren't usually free to start with, and this one is a safety risk to operate). The bandwidth overhead is literally just a key ID (address) and a hash per hop.
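
For illustration, one way a per-hop "key ID plus hash" chain could work. This is a guess at a construction, not necessarily the scheme meant above, and a truly cheat-proof version would also need each hop signed:

```python
import hashlib

def extend_chain(head: bytes, relay_key_id: bytes) -> bytes:
    """Append one hop: the new head commits to the whole path so far
    plus this relay's key ID, so hops can't be silently reordered."""
    return hashlib.sha256(head + relay_key_id).digest()

def verify_chain(seed: bytes, key_ids: list, head: bytes) -> bool:
    """Hub-side check: recompute the chain from the packet's seed and
    the claimed relay list, then compare to the received head."""
    h = seed
    for kid in key_ids:
        h = extend_chain(h, kid)
    return h == head
```

The hub awards credit to each key ID only if the full chain verifies, so a relay can't claim hops it didn't carry without breaking the head hash.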

I figured switching keys frequently would be enough to ensure a degree of anonymity, since it's completely pseudonymous. We don't have a guarantee packets will arrive in order or in any reasonable timeframe, but if we did I'd suggest rolling through keys by count or timestamp.

> A web of trust may be a good approach for authentication and identity.

I don't really have anything to add there. Proving identity beyond just "I hold this key" is out of the scope of what I was considering. I'd probably go about it the same way I would over a more traditional network, if it came up.

Edit: Oh, and I'm not really sure how well this all dovetails into IP. If it can, that's great, of course.

[–] nickwitha_k@lemmy.sdf.org 1 points 4 months ago (1 children)

Will have to get back to you sometime this week - family took more time than anticipated. But I can lay out a few things:

> jackboots with a yagi will show up and arrest you.

Yeah. This is why I recommend avoiding it altogether. Regulatory agencies are all over the licensed spectrum even when they're just keeping hams and others in line. The tools to catch unlicensed operators are just too well-developed and proven to consider it practical for a transport layer, outside of things like natural disasters, where transmission in the clear isn't usually a concern.

> If more end-user hardware - or even a non-RF medium - would work, great, no issue. Like you said, there's no way to support too many assuming they're safe.

Exactly. If you're using RF as a physical layer, low-power in parts of the spectrum where encryption is both allowed and common is what you want.

> Yeah, so a hub just makes good sense - with such a modest network capacity relative to hardware capabilities, why not gather as much in one place as possible? Because one hub might get busted or just fall to some version of enshittification, it should be easy enough for a user to switch, but I think it's the best choice of central organising principle.

This is where IP may come in handy for Layer 3 (I'll come back to that).

> Other than anonymity, is there a reason to separate out spokes from endpoints? One thing I already have worked out is a system where the hub can keep track of who has helped transmit things (in a cheat-proof way), and could simply give credit for traffic moved, offsetting whatever cost there is to use it (ISPs aren't usually free to start with, and this one is a safety risk to operate). The bandwidth overhead is literally just a key ID (address) and a hash per hop.

No, I think you are right. Spokes and endpoints should be indistinguishable from the outside. There are a number of mechanisms that could be used, but one that comes to mind would be having the packets wrapped in encryption that is decryptable only by the intended receiver, and having the final "hop" as part of the encrypted packet header. That would bring up some funky cryptography needs that I'd have to dwell on a bit.

A spoke could literally be servers connected via a "regular" network between hubs, a BLE or LoRa data transmitter, a USB stick, etc. As long as Layers 2-3 are supported in the stack, it should work.

> I don't really have anything to add there. Proving identity beyond just "I hold this key" is out of the scope of what I was considering. I'd probably go about it the same way I would over a more traditional network, if it came up.

Might look into PGP/GPG. It could be a useful approach. Essentially, the idea being to be able to not take someone's word for who they are but rely on a consensus of trusted parties. Like PKI but not as centralized.
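
A minimal quorum check to illustrate the consensus idea. The data layout and the quorum of 2 are arbitrary; real PGP trust models also weight signatures and chain trust transitively:

```python
def is_trusted(key_id: str, signatures: dict, my_trusted: set,
               quorum: int = 2) -> bool:
    """Accept an identity only if at least `quorum` parties we already
    trust have signed its key - consensus rather than taking anyone's
    word for it. `signatures` maps key IDs to the set of signer IDs."""
    signers = signatures.get(key_id, set())
    return len(signers & my_trusted) >= quorum
```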

> Edit: Oh, and I'm not really sure how well this all dovetails into IP. If it can, that's great, of course.

I really think it can dovetail pretty well. Using IP would give a native way to route on traditional networks and make traffic more likely to blend in with existing traffic. Building a protocol on Layer 4 reduces implementation overhead by taking advantage of existing abstractions. Layer 3 doesn't need to know anything about the layers above or below it; it just needs to know which server is sending, which server is receiving, and the payload.
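
To make that concrete, a hypothetical fixed header for such a Layer-4 datagram, packed with Python's struct. The field names and widths are invented for illustration, not a proposed standard:

```python
import struct

# version, flags, hop count, 8-byte source and destination key-ID
# fingerprints, payload length - 24 bytes total, network byte order.
HEADER = struct.Struct("!BBH8s8sI")

def pack_packet(src_id: bytes, dst_id: bytes, payload: bytes,
                version: int = 1, flags: int = 0, hops: int = 0) -> bytes:
    """Serialize one datagram; IDs must be exactly 8 bytes."""
    return HEADER.pack(version, flags, hops, src_id, dst_id, len(payload)) + payload

def unpack_packet(data: bytes):
    """Parse header fields and slice out the payload."""
    version, flags, hops, src, dst, plen = HEADER.unpack_from(data)
    return (version, flags, hops, src, dst, data[HEADER.size:HEADER.size + plen])
```

A datagram like this rides inside an ordinary UDP packet, so existing Layer-3 routing carries it whenever a real network link happens to be available.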

[–] CanadaPlus@lemmy.sdf.org 1 points 4 months ago* (last edited 4 months ago)

> Will have to get back to you sometime this week - family took more time than anticipated. But I can lay out a few things:

No worries. I had just spent a bunch of time replying to another guy about this, and then had to pop into surgery and recover for days, which is why I felt bad and specifically mentioned it.

> Yeah. This is why I recommend avoiding it altogether. Regulatory agencies are too on top of the licensed spectrum when just worrying about keeping hams and others in line. The tools to catch unlicensed operators are just too well-developed and proven to consider it practical for a transport layer, outside of things like natural disaster and the like where transmission in the clear isn’t usually a concern.

Like I mentioned, this is inspired by an existing thing, so I know it's possible to not get caught if transmissions are kept very short, and done in a busy area. Definitely not recommending it, though; it's also just rude to fill up spectrum with massive cyphertexts if you don't have a good reason. Industry Canada (in my case) is one thing, but basic human decency comes first.

I hadn't actually thought of natural disasters. I suppose that could be a niche just because low-power transmitters are so much more common now. Above the physical layer it makes little difference as far as I can tell, so we can talk about that and not worry about the philosophy or practice of law-breaking.

> I really think it can dovetail pretty well. Using IP would give a native way to route on traditional networks and make traffic more likely to blend in with existing traffic. Building a protocol on Layer 4 reduces the implementation overhead by taking advantage of existing abstractions. Layer 3 doesn’t need to know anything about the layers above it or below it, it just needs to know which server is sending, which server is receiving, and the payload.

So would the hub just function as a local network, then? I can see what you mean by that. So basically, each container would get an IPv6 address, and could communicate with the outside world normally when a low-latency connection - like maybe via satellite constellation - is up.

> and having the final “hop” as part of the encrypted packet header.

Hah, is there an official term for the move from one node to another? I'm pretty sure I've heard a complete mix of things IRL.

You could do full-blown onion encryption if you wanted, assuming you know in advance the path your traffic will take (or at least the very end of it). If you don't, you pretty much just have to trust everyone to see what route your traffic did take in the end. Given that nodes are mobile, can change identities, and optimally only share encrypted traffic, does that sound like a huge risk? (Honest question)
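
The onion idea in miniature - a deliberately insecure XOR "cipher" built from a SHA-256 keystream stands in for a real AEAD, purely to show the layering:

```python
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream 'cipher'. NOT secure - illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def onion_wrap(payload: bytes, hop_keys: list) -> bytes:
    """Wrap for hops in path order: the first hop's layer ends up
    outermost, so each node along the path peels exactly one layer."""
    for key in reversed(hop_keys):
        payload = _keystream_xor(key, payload)
    return payload

def peel(layer_key: bytes, data: bytes) -> bytes:
    """Remove one layer (XOR is its own inverse in this toy)."""
    return _keystream_xor(layer_key, data)
```

The catch is exactly the one above: the sender must know every hop key in advance, which is a hard assumption when routes form opportunistically.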

I suppose in a disaster situation, you could just openly publish the GPS coordinates of the hub, and make a transmission strategy by as-the-crow-flies distance.
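
With published hub coordinates, a node could rank stored packets by the great-circle distance remaining - e.g. the standard haversine formula:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371

def km_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance in km between two GPS points.
    A node could rebroadcast packets whose hub is nearest first."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```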

> Might look into PGP/GPG. It could be a useful approach. Essentially, the idea being to be able to not take someone’s word for who they are but rely on a consensus of trusted parties. Like PKI but not as centralized.

I'm familiar as a user, but I'm not sure how few packets you could fit that into. You could definitely set your container to do a web of trust check over the normal internet, and just ask the other party to sign something with their published key.

Also, a bit off topic, has PGP/GPG already been adapted for post-quantum algorithms? You'd think it would be one of the first things to get set up.