[-] DokPsy@infosec.pub 28 points 9 months ago

I don't mind the tool itself if you use it as such. I do mind when people use its output as the final product. See: the lawyer who used ChatGPT for a legal brief.

[-] XEAL@lemm.ee 5 points 9 months ago

The lawyer's fuck-up is what happens when someone doesn't know or understand the limitations of an LLM.

If you want a GPT model tailored and specialized for a specific task, you have to train it with custom data, fine-tune it, and tweak the model's parameters. You cannot do that from the ChatGPT web/app; you need a custom implementation coded in Python or some other language.
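
For illustration, a minimal sketch of what that can look like with OpenAI's Python SDK; the file name and dataset here are hypothetical placeholders:

```python
# Sketch: starting a fine-tuning job with OpenAI's Python SDK (v1.x style).
# Assumes OPENAI_API_KEY is set and "training_data.jsonl" already exists.
from openai import OpenAI

client = OpenAI()

# Upload the custom training data.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune on top of a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```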

[-] TechieDamien@lemmy.ml 2 points 9 months ago

There are some UIs that allow for fine-tuning (assuming you have an extremely high-end rig designed for ML), for example a ChatGPT alternative and a DALL-E alternative.

[-] XEAL@lemm.ee 2 points 9 months ago

Thanks. I have quite a powerful rig, but at the moment I work with OpenAI's API (GPT-3.5 Turbo) through a custom (but shitty) Python script with a simple Gradio web interface. However, I mostly stopped improving or updating it months ago. As long as I don't use LlamaIndex, the cost is quite low.
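
Something along these lines; a stripped-down sketch rather than my actual script, with the wiring simplified:

```python
# Sketch: a minimal Gradio front end over OpenAI's chat API.
import gradio as gr
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat(message, history):
    # Rebuild the conversation for the API from Gradio's (user, bot) history.
    messages = []
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response.choices[0].message.content

gr.ChatInterface(chat).launch()
```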

I already use Stable Diffusion WebUI, tho.

Also, the "fine-tuning" I was talking about is this: https://platform.openai.com/docs/guides/fine-tuning
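
The training data for that is chat-formatted JSONL, one conversation per line; a sketch of building one line of it (the example content is made up):

```python
# Sketch: appending one chat-formatted example to a fine-tuning dataset.
# The content here is invented; real data would come from your own domain.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a terse Bash assistant."},
        {"role": "user", "content": "How do I count files in a directory?"},
        {"role": "assistant", "content": "ls -1 | wc -l"},
    ]
}

with open("training_data.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```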

[-] TechieDamien@lemmy.ml 2 points 9 months ago

I am aware of what fine-tuning is. It is available from the Train tab while the base checkpoint is loaded, in both cases.

[-] uralsolo@hexbear.net 2 points 9 months ago* (last edited 9 months ago)

I also don't think the ChatGPT model, in its current form, is able to do anything that requires referencing case law or medical texts or whatever else. The way it works, by generating probabilities for certain words, is all wrong for tasks where the value of the output isn't subjective. You need the model to distinguish between fact and opinion, to cite sources for what it says, and to produce coherent cause-and-effect chains and formulate an argument: all things that no currently existing LLM is capable of, no matter how much you fine-tune it, because of how it works.
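
To make "generating probabilities for certain words" concrete, here is a sketch using a small open model via the transformers library (not ChatGPT itself, but the same underlying mechanism):

```python
# Sketch: all a causal LM computes is a probability distribution over the
# next token. Nothing in here checks facts or cites sources.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities for the next token only.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")
```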

[-] DokPsy@infosec.pub 1 points 9 months ago

I'm glad you understand my point. ChatGPT is not Google. It's a language model that will give you something that looks like the thing you asked it to provide. It can and will pull facts out of its recycle bin if they fit the cadence of what it expects the answer to look like.

[-] XEAL@lemm.ee 1 points 9 months ago* (last edited 9 months ago)

ChatGPT is not Google, but sometimes it can work as a glorified search engine or even compete with asking in forums.

I've lost count of how many times ChatGPT has produced Bash or Python code for what I needed. Yes, sometimes the code is wrong and/or requires tweaking, and sometimes I've had to resort to the documentation, but no one will answer as fast, at any time of day, as ChatGPT does, at least not for free.

[-] DokPsy@infosec.pub 1 points 9 months ago

It's a tool to aid in creating a product, not a tool that magics out a finished product. That's my point. Too many people use it as the latter instead of the former.

[-] XEAL@lemm.ee 1 points 9 months ago

100% agree.

Maybe, with lots of training, tweaking and testing, the latter could be achieved, but that's it.

[-] intensely_human@lemm.ee -3 points 9 months ago
[-] Carighan@lemmy.world 2 points 9 months ago

Have you seen that legal brief?

[-] intensely_human@lemm.ee -2 points 9 months ago

No. Communicate please and we can have a real conversation.

[-] Carighan@lemmy.world 2 points 9 months ago

The person you first replied to asked you to see the legal brief as an example of why they mind using the output as the finished product. You then asked for an explanation. To which I asked you, hey, have you actually looked at that example? You have not.

What exactly do you want here, other than be argumentative for combative reasons?

[-] DokPsy@infosec.pub 2 points 9 months ago

Letting a language model do the work of thinking is like building a house and using a circular saw to put nails in. It will do it, but you should not trust the results.

It is not Google. It can, will, and has made up facts, as long as they fit the expected format.

Not proofreading and fact-checking the output, at the very least, is beyond lazy and a terrible use of a tool. Using it to create the end product and using it as a tool in the creation of an end product are two very different things.
