matthewc

joined 1 year ago

Exactly what you said. It has always been about control.

I spin up a lot of Docker containers with large data sets locally.
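To make that concrete, here's a minimal sketch of the kind of compose file I mean; the image, command, and host path are placeholders, not anything I actually run:

```yaml
# Minimal sketch: bind-mount a large local dataset into a container
# so the data never leaves the machine. Paths are hypothetical.
services:
  worker:
    image: python:3.12-slim
    # Toy command just to prove the mount works: count files in the dataset.
    command: python -c "import os; print(len(os.listdir('/data')))"
    volumes:
      - /Volumes/data/big-dataset:/data:ro  # large local dataset, read-only
```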

[–] matthewc@lemmy.self-host.site 7 points 1 year ago (4 children)

Developer here. Completely depends on your workflow.

I went base model and the only thing I regret is not getting more RAM.

Speeds have been phenomenal when the binaries are native. Speeds have been good when the binaries are running through Rosetta.

The specs you’re wavering between are extremely workflow specific. You know if your workflow requires the 16 extra GPU cores. You know if your workflow requires another 64 GB of RAM.

Seriously. I installed VSCodium today.

[–] matthewc@lemmy.self-host.site 1 point 1 year ago (2 children)

I didn’t realize VS Code is open source. Good to know!

[–] matthewc@lemmy.self-host.site 3 points 1 year ago (1 children)

Use two providers on different networks. They can fill in the gaps for each other.


I have a DS920+ full of 16TB HDDs. I am considering adding a DX517. Is it possible to move my HDDs into the expansion unit and keep the existing RAID intact while installing SSDs into the original four bays? Does it make sense to put the HDDs in the expansion, given that it connects over a single eSATA cable? And does it make any sense to try to optimize like that at all, considering all of the NICs are 1 GbE?

I highly recommend storing your DB and pictrs directories on an SSD volume.
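On a Synology that just means pointing the bind mounts at whatever your SSD volume is. A minimal sketch, assuming the SSD volume is mounted at /volume2 (yours will differ) and the usual Lemmy compose service names:

```yaml
# Sketch only: service names follow the typical Lemmy compose setup;
# /volume2/ssd is a placeholder for your SSD volume's mount point.
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: changeme  # placeholder, set your own
    volumes:
      # Keep the database on the SSD volume.
      - /volume2/ssd/lemmy/postgres:/var/lib/postgresql/data
  pictrs:
    image: asonix/pictrs:0.5
    volumes:
      # Keep image storage on the SSD volume as well.
      - /volume2/ssd/lemmy/pictrs:/mnt
```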

[–] matthewc@lemmy.self-host.site 4 points 1 year ago (2 children)

I’m running on my NAS.

So far so good on my little one user instance as well.
