this post was submitted on 27 Sep 2023
13 points (93.3% liked)

Comradeship // Freechat

2168 readers
94 users here now

Talk about whatever, respecting the rules established by Lemmygrad. Failing to comply with the rules will grant you a few warnings, insisting on breaking them will grant you a beautiful shiny banwall.

A community for comrades to chat and talk about whatever doesn't fit other communities

founded 3 years ago
MODERATORS
 

“What are the positives and negatives of using ChatGPT (and other AI) in post-secondary?”

This is a question I need to answer for an essay competition thing, and while I do have ideas myself and from my professor when I asked for his opinions, I was hoping if anyone here had some insights to add.

Is it ethical that I ask for your aid? I don’t want to overstep. I would not use anyone’s name/usernames at all in this essay, at most I will cite sources on the matter.

While I think my current ideas about the pros and cons are good (more cons than pros in my opinion) but I want to know if I missed anything.

If needed, I will add what ideas I’ve come up with so far but for now I’ll leave that out.

Edit: I was tempted to post this in the “Ask Lemmygrad” community but I think thats a more educational community about communism specifically so I’ll stick to asking here.

top 14 comments
sorted by: hot top controversial new old
[–] starkillerfish@lemmygrad.ml 13 points 1 year ago (2 children)

for me the arguments would be mostly negative because:

  1. using it does not train your research skills
  2. using it does not train your creative and academic writing skills
  3. it is often just wrong when synthesizing text

so to me those are major cons in an educational context some positives would perhaps be:

  1. it is useful as a phrase bank, as it can quickly give ideas on how to put words together.
  2. it is alright at giving direction when starting research, sort of like wikipedia

thats all i can think of so far

[–] elephantintheroom@lemmy.ml 7 points 1 year ago (1 children)

Don't forget about the privacy and copyright concerns. Scraping the internet for training data, copyrighted or not, and also logging every input for this purpose (and probably others).

A pretty significant con in my opinion.

[–] SpaceDogs@lemmygrad.ml 5 points 1 year ago (1 children)

Copyright is one of the cons I have written down but I never thought of the privacy issues with AI…

[–] elephantintheroom@lemmy.ml 7 points 1 year ago (1 children)

Most people don't. Convenience is more important than privacy for most people, can't blame you.

I'm just a paranoid tech geek. So this is usually the first and strongest concern for me.

[–] SpaceDogs@lemmygrad.ml 3 points 1 year ago (1 children)

Honestly, being in this forum has got me on the privacy paranoia train so I get it. Sometimes i forget how many aspects of life/the internet invade on privacy. Like now, I had no idea that AI like ChatGPT invaded privacy.

[–] elephantintheroom@lemmy.ml 3 points 1 year ago

I think AIs are one of the most privacy invading things right after social media platforms.

[–] SpaceDogs@lemmygrad.ml 1 points 1 year ago

These are all great, thank you for this!

[–] AlbigensianGhoul@lemmygrad.ml 7 points 1 year ago (2 children)

My preferred way of thinking about these chatbots is that they're effectively just on-demand peers with quick Google skills to chat. Just like humans they can be confidently wrong a lot or have incomplete information or presentation, but also just like humans they can help you explore your ideas and give you quck insight.

Besides all the technical cons (blatant disregard for copyright law and it being randomly racist sometimes), I don't think they're particularly bad. You just have to keep in mind that they're about as trustworthy as your local arrogant lab intern. Usually you're already required to source your claims on higher education work anyways.

Main issue right now is that the current favourite implementation seems to be specifically trained to almost never admit to not knowing something.

[–] relay@lemmygrad.ml 5 points 1 year ago

Main issue right now is that the current favourite implementation seems to be specifically trained to almost never admit to not knowing something.

Training data comes from Americans, so that makes sense.

[–] SpaceDogs@lemmygrad.ml 3 points 1 year ago

Just like humans they can be confidently wrong a lot or have incomplete information or presentation, but also just like humans they can help you explore your ideas and give you quck insight.

very very good stuff, thank you!

[–] bobs_guns@lemmygrad.ml 3 points 1 year ago (1 children)

For more general information on llms I recommend this blog. https://simonwillison.net/ The post about prompt injection is especially good.

[–] SpaceDogs@lemmygrad.ml 2 points 1 year ago

Oh this is going to be helpful since I don’t know much about LLMs.

[–] CannotSleep420@lemmygrad.ml 2 points 1 year ago* (last edited 1 year ago) (1 children)

It's been awhile since I skimmed it, but this article is a good source on the topic.

[–] SpaceDogs@lemmygrad.ml 2 points 1 year ago

I love getting reading material so thank you!