yogthos@lemmygrad.ml 6 points 6 days ago

a script to download all the images, courtesy of DeepSeek :)

#!/usr/bin/env bash
# Script to download multiple URLs from a text file with improved line handling
# Usage: ./download_urls.sh urls.txt [output_directory]

# Check if input file is provided
if [ -z "$1" ]; then
    echo "Error: Please provide a text file containing URLs"
    echo "Usage: $0 <input_file> [output_directory]"
    exit 1
fi

input_file="$1"
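# Default to ./downloads when no output directory is given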
output_dir="${2:-./downloads}"

# Check if input file exists
if [ ! -f "$input_file" ]; then
    echo "Error: Input file '$input_file' not found"
    exit 1
fi

# Create output directory if it doesn't exist
mkdir -p "$output_dir"

# Read and process valid URLs into an array
urls=()
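# '|| [[ -n "$line" ]]' keeps the final line even when the file lacks a trailing newline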
while IFS= read -r line || [[ -n "$line" ]]; do
    # Trim leading/trailing whitespace and remove CR characters
    trimmed_line=$(echo "$line" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' | tr -d '\r')
    
    # Skip empty lines after trimming
    [[ -z "$trimmed_line" ]] && continue
    
    # Validate URL format
    if [[ "$trimmed_line" =~ ^https?:// ]]; then
        urls+=("$trimmed_line")
    else
        echo "Skipping invalid URL: $trimmed_line"
    fi
done < "$input_file"

total_urls=${#urls[@]}

if [[ $total_urls -eq 0 ]]; then
    echo "Error: No valid URLs found in input file"
    exit 1
fi

echo "Starting download of $total_urls files to $output_dir"
current=1

# Download each URL from the array
for url in "${urls[@]}"; do
    # Extract filename from URL (dropping any query string) or generate a unique name
    filename=$(basename "$url")
    filename="${filename%%\?*}"
    if [[ -z "$filename" ]]; then
        # Nanosecond timestamp plus counter keeps generated names unique
        filename="file_$(date +%s%N)_${current}.download"
    fi

    echo "[$current/$total_urls] Downloading $url"
    
    # Download with curl including error handling
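    # -L follows redirects, --fail returns a non-zero exit on HTTP errors (e.g. 404),
    # and --progress-bar shows a compact progress indicator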
    if ! curl -L --progress-bar --fail "$url" -o "$output_dir/$filename"; then
        echo "Warning: Failed to download $url"
        rm -f "$output_dir/$filename" 2>/dev/null
    fi
    
    ((current++))
done

echo "Download complete. Files saved to $output_dir"