sisyphean

joined 2 years ago

Intelligence explosion arguments don’t require Platonism. They just require intelligence to exist in the normal fuzzy way that all concepts exist.

1
submitted 2 years ago* (last edited 2 years ago) by sisyphean@programming.dev to c/auai@programming.dev
 

At OpenAI, protecting user data is fundamental to our mission. We do not train our models on inputs and outputs through our API.

 

We’re rolling out custom instructions to give you more control over how ChatGPT responds. Set your preferences, and ChatGPT will keep them in mind for all future conversations.

@AutoTLDR

 

GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on four diverse tasks: 1) solving math problems, 2) answering sensitive/dangerous questions, 3) generating code, and 4) visual reasoning. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6%) but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4%). Interestingly, GPT-3.5 (June 2023) was much better than GPT-3.5 (March 2023) on this task. GPT-4 was less willing to answer sensitive questions in June than in March, and both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. Overall, our findings show that the behavior of the “same” LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality.

 

Introducing Llama 2 - The next generation of our open source large language model. Llama 2 is available for free for research and commercial use.

This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.

@AutoTLDR

 

16 Mar, 2023

Kagi Search is pleased to announce the introduction of three AI features into our product offering.

We’d like to discuss how we see AI’s role in search, what the challenges are, and our AI integration philosophy. Finally, we will go over the features we are launching today.

@AutoTLDR

 

This is a game that tests your ability to predict ("forecast") how well GPT-4 will perform at various types of questions. (In case you've been living under a rock these last few months, GPT-4 is a state-of-the-art "AI" language model that can solve all kinds of tasks.)

Many people speak very confidently about what capabilities large language models do and do not have (and sometimes even could or could never have). I get the impression that most people who make such claims don't even know what current models can do. So: put yourself to the test.

 

Increasingly powerful AI systems are being released at an increasingly rapid pace. This week saw the debut of Claude 2, likely the second most capable AI system available to the public. The week before, OpenAI released Code Interpreter, the most sophisticated mode of AI yet available. The week before that, some AIs got the ability to see images.

And yet not a single AI lab seems to have provided any user documentation. Instead, the only user guides out there appear to be Twitter influencer threads. Documentation-by-rumor is a weird choice for organizations claiming to be concerned about proper use of their technologies, but here we are.

@AutoTLDR

 

An AI-first notebook, grounded in your own documents, designed to help you gain insights faster.

@AutoTLDR

 

We are pleased to announce Claude 2, our new model. Claude 2 has improved performance, longer responses, and can be accessed via API as well as a new public-facing beta website, claude.ai. We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs, and has a longer memory. We have made improvements from our previous models on coding, math, and reasoning. For example, our latest model scored 76.5% on the multiple choice section of the Bar exam, up from 73.0% with Claude 1.3. When compared to college students applying to graduate school, Claude 2 scores above the 90th percentile on the GRE reading and writing exams, and similarly to the median applicant on quantitative reasoning.

@AutoTLDR

 

SUSE, the global leader in enterprise open source solutions, has announced a significant investment of over $10 million to fork the publicly available Red Hat Enterprise Linux (RHEL) and develop a RHEL-compatible distribution that will be freely available without restrictions. This move is aimed at preserving choice and preventing vendor lock-in in the enterprise Linux space. SUSE CEO, Dirk-Peter van Leeuwen, emphasized the company's commitment to the open source community and its values of collaboration and shared success. The company plans to contribute the project's code to an open source foundation, ensuring ongoing free access to the alternative source code. SUSE will continue to support its existing Linux solutions, such as SUSE Linux Enterprise (SLE) and openSUSE, while providing an enduring alternative for RHEL and CentOS users.

[–] sisyphean@programming.dev 17 points 2 years ago

Lol that’s like saying there’s too much porn on /r/gonewild

[–] sisyphean@programming.dev 1 points 2 years ago

LLMs can do a surprisingly good job even if the text extracted from the PDF isn't in the right reading order.

Another thing I've noticed is that figures are usually explained thoroughly in the text, so there is no need for the model to see them to generate a good summary. Human communication is very redundant, and we don't realize it.

[–] sisyphean@programming.dev 3 points 2 years ago (1 children)

If I remember correctly, the properties the API returns are comment_score and post_score.

[–] sisyphean@programming.dev 6 points 2 years ago (3 children)

Lemmy does have karma, it is stored in the DB, and the API returns it. It just isn’t displayed in the UI.
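Based on the field names mentioned in this thread (`post_score` and `comment_score`), a Reddit-style karma total could be computed client-side from the user's aggregate counts. A minimal sketch — the exact shape of the response object is an assumption, and the example values are hypothetical:

```python
# Sum a user's post and comment scores into a single "karma" number.
# The nested "counts" layout is assumed from the aggregate fields
# (post_score, comment_score) mentioned in the thread.

def karma(person_view: dict) -> int:
    """Return total karma from a parsed user-details response."""
    counts = person_view["counts"]
    return counts["post_score"] + counts["comment_score"]

# Hypothetical response fragment for illustration:
example = {"counts": {"post_score": 120, "comment_score": 345}}
```

Calling `karma(example)` on the fragment above would simply add the two scores, which is all a karma display in the UI would need to do.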

[–] sisyphean@programming.dev 2 points 2 years ago (2 children)

It only handles HTML currently, but I like your idea, thank you! I’ll look into implementing reading PDFs as well. One problem with scientific articles, however, is that they are often quite long and don’t fit into the model’s context. I would need to do recursive summarization, which would use many more tokens and could become pretty expensive. (Of course, the same problem occurs if a web page is too long; I just truncate it currently, which is a rather barbaric solution.)
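The recursive summarization mentioned above can be sketched as a map-reduce over chunks: summarize each piece that fits in the context window, then summarize the concatenated summaries. This is only an illustration of the idea — the `summarize` helper, the character-based chunking, and the size limits are all hypothetical stand-ins, not the bot's actual implementation:

```python
# Recursive (map-reduce) summarization sketch. `summarize` stands in
# for a real LLM call; here it just truncates so the sketch is runnable.

def chunk_text(text: str, max_chars: int) -> list[str]:
    """Split text into pieces that each fit the model's context."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call."""
    return text[:200]

def recursive_summarize(text: str, max_chars: int = 4000) -> str:
    # Base case: the whole text already fits in the context window.
    if len(text) <= max_chars:
        return summarize(text)
    # Map: summarize each chunk independently.
    partials = [summarize(c) for c in chunk_text(text, max_chars)]
    # Reduce: recurse on the concatenated partial summaries.
    return recursive_summarize(" ".join(partials), max_chars)
```

The cost concern is visible in the structure: a document of n chunks triggers n + 1 (or more) model calls instead of the single call that truncation needs.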

[–] sisyphean@programming.dev 1 points 2 years ago (2 children)
[–] sisyphean@programming.dev 2 points 2 years ago* (last edited 2 years ago)

TIL. Thank you! (Now I will ssh into all my VPSes and set this up!)

(cool username btw)

[–] sisyphean@programming.dev 1 points 2 years ago

I think the incentives are a bit different here. If we can keep the threadiverse nonprofit, and contribute to the maintenance costs of the servers, it might stay a much friendlier place than Reddit.

[–] sisyphean@programming.dev 1 points 2 years ago

We should do an AmA with her!

[–] sisyphean@programming.dev 1 points 2 years ago

Lemmy actually has a really good API. Moderation tools are pretty simple though.

[–] sisyphean@programming.dev 7 points 2 years ago* (last edited 2 years ago) (1 children)

Here people actually react to what I post and write. And they react to the best possible interpretation of what I wrote, not the worst. And even if we disagree, we can still have a nice conversation.

Does anyone have a good theory about why the threadiverse is so much friendlier? Is it only because it's smaller? Is it because of the kind of people a new platform like this attracts? Because there is no karma? Maybe something else?

[–] sisyphean@programming.dev 5 points 2 years ago (1 children)

Did I miss something? Or is this still about Beehaw?
