this post was submitted on 26 Sep 2024
59 points (88.3% liked)

Note: this article is from 08 September 2024.

top 6 comments
chrash0@lemmy.world 12 points 2 months ago

All programs were developed in Python language (3.7.6). In addition, freely available Python libraries of NumPy (1.18.1) and Pandas (1.0.1) were used to manipulate data, cv2 (4.4.0) and matplotlib (3.1.3) were used to visualize, and scikit-learn (0.24.2) was used to implement RF. SqueezeNet and Grad-CAM were realized using the neural network library PyTorch (1.7.0). The DL network was trained and tested using a DL server mounted with an NVIDIA GeForce RTX 3090 GPU, 24 Intel Xeon CPUs, and 24 GB main memory

it’s interesting that they’re using pretty modest hardware (i assume they mean 24 cores, not 24 CPUs) and fairly outdated dependencies. also, having their dependencies listed out like this is pretty adorable. it has academic-out-of-touch-not-a-software-dev vibes. makes you wonder how much further a project like this could go with decent technical support. like, all these talented engineers are using 10k times the power to work on generalist models like GPT that struggle at exactly these kinds of tasks, while promising they’ll work someday and trivializing them as “downstream tasks”. i think there’s definitely still room in machine learning for expert models; it sucks that they struggle to get proper support.
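(For readers curious what that stack actually does: below is a minimal Grad-CAM-over-SqueezeNet sketch in PyTorch, roughly matching the torch 1.7 / torchvision 0.8 era quoted above. The hooked layer, the random stand-in input, and the class selection are illustrative assumptions, not the authors' code.)

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.squeezenet1_1(pretrained=True).eval()

# capture the last conv feature map and its gradient via hooks
# (hooking features[-1] is an assumption; the paper doesn't say which layer)
store = {}

def hook(module, inputs, output):
    store["act"] = output
    output.register_hook(lambda g: store.__setitem__("grad", g))

model.features[-1].register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
logits = model(x)
model.zero_grad()
logits[0, logits.argmax()].backward()  # gradient of the top-class score

# Grad-CAM: channel weights = spatially averaged gradients, then ReLU the weighted sum
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```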

bamfic@lemmy.world 4 points 2 months ago

Appendix A of this paper is our requirements.txt
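(For what it's worth, the versions quoted above translate into roughly this pin file. This is a sketch, not the paper's actual appendix; note that cv2 ships on PyPI as opencv-python, whose exact four-segment patch release is a guess:)

```
# hypothetical requirements.txt reconstructed from the versions quoted in the thread
# (python itself, 3.7.6, has to be pinned outside this file, e.g. via pyenv or conda)
numpy==1.18.1
pandas==1.0.1
opencv-python==4.4.0.46   # paper says cv2 4.4.0; the last segment is a guess
matplotlib==3.1.3
scikit-learn==0.24.2
torch==1.7.0
```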

Miaou@jlai.lu 3 points 2 months ago

I'd say the opposite. Usually you barely get a requirements.txt at all; when you do, it's missing the versions (including for Python itself), and then you still have to hunt down the CUDA and CUDA driver versions yourself.
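(And even when package versions are given, the CUDA build usually isn't. For torch 1.7.0 the wheel tag could at least pin it, e.g. as below; cu110 is an assumption here, chosen as the contemporaneous build for an RTX 3090, since the quote only says torch 1.7.0 on that GPU:)

```
# hypothetical fully pinned install, including the CUDA build tag
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 \
    -f https://download.pytorch.org/whl/torch_stable.html
```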

Kalothar@lemmy.ca 9 points 2 months ago

Neat, and a great use for AI.

technocrit@lemmy.dbzer0.com 5 points 2 months ago (last edited 2 months ago)

There's no "AI" involved. The authors quickly retreat from their misleading title to the sightly less misleading "deep learning". Regardless of grifter terminology, we're actually talking about a machine computing the statistics of images. That might be good for patients. But it's got nothing to do with "artificial intelligence" or "seeing beyond humans".