PoisonedPrisonPanda

joined 2 years ago

If you call and I don’t pick up, leave a voicemail. If you don’t leave a voicemail, I assume it wasn’t important

This, a thousand times. I don't know when people decided to stop using voicemail.

But to be honest, I take it even further and don't return all calls, because:

priority 1 - a call that went unanswered, but with a voicemail.

priority 2 - an unanswered call, but with a message sent afterwards.

priority 3 - a message only.

An unanswered call on my side with no further information is something I simply forget about.

The grid indeed can't cope with the expansion of PV. The grid controllers are already running at their limit. As long as we don't expand the grid further, there's nothing more to gain from decentralizing.

[–] PoisonedPrisonPanda@discuss.tchncs.de 45 points 1 month ago (6 children)

it only enabled its operators access to encrypted communications...

What the heck? Isn't this much worse than simple microphone access?

If I had to guess, a combination of:

Don't forget the economic engine named Rene Benko, which is currently sputtering.

[–] PoisonedPrisonPanda@discuss.tchncs.de -1 points 1 month ago (1 children)

Well, indeed the devil is in the details.

But going with your story: yes, you are right in general. But the human input is already there.

But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

AI can already understand what stripes are, and can draw the connection that a zebra is a horse with stripes. Therefore the human input is already given. Brute-force learning will do the rest, simply because time is irrelevant and computation happens at a much faster rate.

Therefore I believe that in the future AI will enhance itself, because the input it has already received is sufficient to hone its skills.

I know that for now we are just talking about LLMs as black boxes that generate repetitive output (no creativity). But a second grader also has many skills that are sufficient to enlarge their knowledge, without needing everything to be taught by a human, in that sense.

I simply doubt this:

LLMs will get progressively less useful

Where will it get data about new programming languages or solutions to problems in new software?

On the other hand, you are right that AI will not understand abstractions of anything beyond its realm. But this does not mean it won't advance quickly in areas where it can draw conclusions.

And even in the case of new programming languages, I think a trained model will pick up the logic of the code, basically using the pattern-recognition skills it has already learned, and probably at a faster pace than a human can learn a new programming language.

[–] PoisonedPrisonPanda@discuss.tchncs.de -1 points 1 month ago (3 children)

Well, I doubt that very much. Take as an analogy the success of the chess AI that was left to train itself, compared to being trained...

Programmers as it turns out are very ‘eh, the code should explain itself to anyone with enough brains to look at it’ type of people

I cannot say how much I hate this.

It's even worse for old code, where proper variable names and underscores were forbidden. Impossible to get into someone else's head.

[–] PoisonedPrisonPanda@discuss.tchncs.de 4 points 1 month ago (3 children)

Puzzling Stack Exchange

Is this simply an aggregator?

People also blame ai, but if people are going to ai to ask the common already answered questions then… good!

Exactly!

While I am indeed worried about the "wasted" energy (that's a whole other topic), that's pretty much what AI is good for.

Isn't it more like the main driver of our prospering civilization?

Some might say that the shift toward desiring less is the downward path for an over-saturated humanity.

But let's not get too deep here.

17
database greenhorn (discuss.tchncs.de)
submitted 2 months ago* (last edited 2 months ago) by PoisonedPrisonPanda@discuss.tchncs.de to c/programming@programming.dev
 

hi my dears, I have an issue at work where we have to work with millions (~150 million) of product data points. We are using SQL Server because it was available in-house for development. However, with various tables growing beyond 10 million rows, the server becomes quite slow and waiting/buffer time exceeds 7000 ms/sec, which is tearing down our complete setup of various microservices that continuously read, write, and delete from the tables. All the Stack Overflow answers lead to: it's complex, read a 2000-page book.
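(For what it's worth, here is a minimal sketch of the kind of diagnosis I mean. I'm assuming plain SQL Server wait statistics here; I haven't verified this on our setup, so treat it as a guess at where to look, not a recipe.)

```sql
-- Rough sketch: list the top cumulative wait types since server start.
-- If PAGEIOLATCH_* waits dominate, the HDD is likely the real bottleneck,
-- not the queries themselves.
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```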

The thing is, my queries are not that complex. They simply go through the whole table to identify duplicates, which are then not processed further, because the processing takes time (which we thought would be the bottleneck). But the time saved by not processing duplicates now seems smaller than the time it takes to compare batches against the SQL table. The other culprit is that our server runs on an HDD, which at ~150 MB/s read and write is probably at its limit.
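To make this concrete, here is a minimal sketch of the direction I suspect an answer lies in. The table and column names are made up ("product_key" stands for whatever identifies a duplicate in our data); the idea is that an index on the duplicate key turns the full-table comparison into an index seek.

```sql
-- Hypothetical schema: dbo.products with a natural key column product_key.
-- A unique index makes the duplicate check a seek instead of a full scan.
-- (Creating it requires the existing data to already be duplicate-free.)
CREATE UNIQUE NONCLUSTERED INDEX ux_products_product_key
    ON dbo.products (product_key);

-- Insert only rows whose key is not already present, instead of
-- comparing whole batches against the full table in the application:
INSERT INTO dbo.products (product_key, payload)
SELECT s.product_key, s.payload
FROM staging.products_batch AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.products AS p
    WHERE p.product_key = s.product_key
);
```

Is that roughly the right direction, or is it naive given our table sizes?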

The question is: is there a wizard move to bypass any of my restrictions, or is a change in the setup and algorithm inevitable?

edit: I know that my question seems broad, but as I am new to database architecture I welcome any input and discussion, since the topic is a lifetime of know-how in itself. Thanks for all feedback.

 

Judging by this forum, you've hit a sore spot for employees here. :)

 

How would you invest 10k? What do you think regarding currency decline/risk for the dollar and euro? How has your investment horizon changed with a family?
