PostgreSQL is tricky to get right, and I can't fault anyone for wanting a different solution like RabbitMQ to work around it. One of the things I did back in the day, when dealing with high-write traffic where the data itself wasn't mission critical, was to set up a tmpfs on Linux with a specified amount of RAM to serve as a cache. It held a duplicate of the same data table that was stored on SSD/HDD, and I created a view combining the two that checked the cache first before querying the HDD/SSD.
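A minimal sketch of what that setup looked like, assuming a tmpfs mount at /mnt/pg_cache and made-up events table names (not my actual schema):

```sql
-- Assumes a tmpfs mount owned by the postgres user, e.g.:
--   mount -t tmpfs -o size=2g tmpfs /mnt/pg_cache
CREATE TABLESPACE ram_cache LOCATION '/mnt/pg_cache';

-- Durable copy on SSD/HDD (default tablespace)
CREATE TABLE events_disk (
    id         bigint PRIMARY KEY,
    payload    text,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Same schema duplicated in RAM
CREATE TABLE events_cache (
    LIKE events_disk INCLUDING ALL
) TABLESPACE ram_cache;

-- Combined view: writers hit events_cache, readers hit this view,
-- and cache rows shadow disk rows with the same id
CREATE VIEW events AS
SELECT * FROM events_cache
UNION ALL
SELECT d.*
FROM events_disk d
WHERE NOT EXISTS (SELECT 1 FROM events_cache c WHERE c.id = d.id);
```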
On insert/update statements, a trigger incremented a counter (a semaphore of sorts), and once it reached a certain value, it ran a partitioned check on the cache table, scanning for old rows that weren't in active use based on their timestamps and writing those out to HDD/SSD, along with any data that had been sitting in the cache long enough. Doing it this way, I was able to increase throughput more than 100-fold while still retaining the data in the database.
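A rough sketch of the flush side, reusing the hypothetical tables above; the statement-level trigger, the sequence-as-counter, the 10,000 threshold, and the 5-minute cutoff are all illustrative stand-ins for the actual partitioned check:

```sql
-- Shared counter standing in for the "semaphore"
CREATE SEQUENCE cache_write_counter;

CREATE OR REPLACE FUNCTION flush_cache() RETURNS trigger AS $$
BEGIN
    -- Every 10,000th write, move stale rows from RAM to disk
    IF nextval('cache_write_counter') % 10000 = 0 THEN
        WITH stale AS (
            -- Rows that have sat in the cache long enough
            DELETE FROM events_cache
            WHERE created_at < now() - interval '5 minutes'
            RETURNING *
        )
        INSERT INTO events_disk
        SELECT * FROM stale
        ON CONFLICT (id) DO UPDATE
            SET payload = EXCLUDED.payload,
                created_at = EXCLUDED.created_at;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cache_flush
AFTER INSERT OR UPDATE ON events_cache
FOR EACH STATEMENT EXECUTE FUNCTION flush_cache();
```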
Obviously, there are some additional risks incurred by doing this, like keeping your data in volatile memory, although that's less of a risk with ECC memory on servers. If the power goes out, whatever is stored in RAM is gone, so I assumed that in the cloud they'd have backup power and other measures in place to prevent that. They might have a network outage, but it's rare for servers to hard-fail.