this post was submitted on 11 Apr 2024
30 points (100.0% liked)
I meant this: https://www.marxists.org/encyclopedia/ (my bad!)
That's what I thought. But maybe you wanted an encyclopedia of every Marxist email list that existed in the year 2000.
What was Logseq's role in the scraping? Was the script itself in Logseq, or was it just converting to md?
PowerShell did all the scraping, using a module called PSParseHTML. I wrote my own markdown conversion logic for the paragraph elements. It dumped each term into a directory as an .md file formatted for Logseq.
Logseq was going to be a means of graphing the relationships between the terms based on where they are mentioned. I never got to that point because the volume of pages was too much for Logseq, and it would hard-lock on loading the directory.
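For anyone curious, the driver was roughly shaped like the sketch below. This is not the actual script: the index URL and the href filter are placeholders for the real page structure, and it assumes PSParseHTML's ConvertFrom-Html cmdlet with its AngleSharp engine.

```powershell
# Rough sketch only. Assumes PSParseHTML's ConvertFrom-Html returns an
# AngleSharp document; the index URL and href pattern are placeholders.
Import-Module PSParseHTML

$index = ConvertFrom-Html -Url 'https://www.marxists.org/encyclopedia/' -Engine AngleSharp

# Collect candidate links to individual term pages (the filter is a guess)
$termLinks = $index.QuerySelectorAll('a') |
    ForEach-Object { $_.GetAttribute('href') } |
    Where-Object { $_ -like '*.htm' } |
    Sort-Object -Unique

foreach ($href in $termLinks) {
    Start-Sleep -Milliseconds 500   # be polite to the archive
    # ...fetch the page, split it into header + paragraph chunks,
    # and write one Logseq-formatted .md file per term
}
```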
OK, sorry if I am telling you something you already know, but the classic tool for scraping a website is wget. Starting from one URL it can find all linked URLs and download them, or not, based on configuration. I have always found it pretty approachable; it was one of the earliest CLI tools I used. You can learn a little bit or a lot, and I think for this task you only need a little. That would get you a perfect local mirror of the site. GNU has the comprehensive docs, of course: https://www.gnu.org/software/wget/. But starting with some random tutorial might be the better choice.
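For example, a mirroring run along those lines could look like this: --mirror turns on recursion and timestamping, --no-parent keeps it under the encyclopedia path, and the wait flags keep the request rate polite.

```
wget --mirror --no-parent --convert-links --adjust-extension --wait=1 --random-wait https://www.marxists.org/encyclopedia/
```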
To get it to Markdown you'll have to add another step, perhaps Turndown or pandoc.
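For example, a quick pass over the mirrored tree could look like this (PowerShell, since that's what you've been using; the path is illustrative, and html/gfm are standard pandoc format names):

```powershell
# Convert every mirrored .html page to a Markdown file alongside it (illustrative)
Get-ChildItem -Path '.\www.marxists.org\encyclopedia' -Recurse -Filter *.html |
    ForEach-Object {
        $md = [System.IO.Path]::ChangeExtension($_.FullName, '.md')
        pandoc -f html -t gfm $_.FullName -o $md
    }
```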
Other projects to check out are HTTrack and, tangentially, ArchiveBox.
It's all good. I've heard of wget and used it in the past. I use PowerShell at work, so I'm more accustomed to it, and it's an object-oriented programming language that I have a good understanding of.
The module is based on AngleSharp, so I could do element queries for the DOM nodes I wanted and created some logic to move to the next paragraph under a given header until I reached the next header. Each header plus its paragraphs was collected and then written to the HDD as a markdown file.
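Heavily simplified, the walking logic looked something like the sketch below. The selectors, heading level, placeholder URL, and output folder are stand-ins rather than what the real script used, and it assumes ConvertFrom-Html hands back an AngleSharp document that supports QuerySelectorAll.

```powershell
# Sketch of the header-walking idea; not the real script.
Import-Module PSParseHTML

$termPageUrl = 'https://www.marxists.org/encyclopedia/terms/a.htm'   # placeholder URL
$doc = ConvertFrom-Html -Url $termPageUrl -Engine AngleSharp
New-Item -ItemType Directory -Force -Path '.\pages' | Out-Null

foreach ($header in $doc.QuerySelectorAll('h3')) {
    $term  = $header.TextContent.Trim()
    $lines = [System.Collections.Generic.List[string]]::new()

    # Walk forward through siblings, collecting paragraphs until the next header
    $node = $header.NextElementSibling
    while ($node -and $node.TagName -notmatch '^h\d$') {
        if ($node.TagName -eq 'p') {
            $lines.Add('- ' + $node.TextContent.Trim())   # Logseq wants top-level bullets
        }
        $node = $node.NextElementSibling
    }

    # One markdown file per term, with filesystem-unfriendly characters stripped
    if ($lines.Count -gt 0) {
        $fileName = ($term -replace '[^\w\- ]', '') + '.md'
        Set-Content -Path (Join-Path '.\pages' $fileName) -Value $lines
    }
}
```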
But thanks for the tool suggestions, it's good to know of alternatives.
Maybe you could break them up into chunks somehow.
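Something like this could shard the generated files into numbered subfolders of a few hundred each, to be loaded a batch at a time (the folder name and chunk size are arbitrary, and the path matches the earlier sketch rather than your real output directory):

```powershell
# Shard the generated .md files into numbered subfolders of 500 each (size is arbitrary)
$files     = @(Get-ChildItem -Path '.\pages' -Filter *.md)
$chunkSize = 500
for ($i = 0; $i -lt $files.Count; $i += $chunkSize) {
    $dir = Join-Path '.\pages' ('chunk-{0:D3}' -f [int]($i / $chunkSize))
    New-Item -ItemType Directory -Force -Path $dir | Out-Null
    $last = [Math]::Min($i + $chunkSize, $files.Count) - 1
    $files[$i..$last] | Move-Item -Destination $dir
}
```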