daredevil

joined 1 year ago
[–] daredevil@kbin.social 3 points 6 months ago

Get well soon, and thanks for the update.

[–] daredevil@kbin.social 2 points 8 months ago (1 children)

defaming them without due diligence, think about that before continuing

The irony here is unbelievable rofl you can't make this up. My previous statement was calling you childish and desperate for attention. Thanks for reminding me of that fact, so I can stop wasting my time. It is very clear you're not interested in a genuine and constructive conversation.

[–] daredevil@kbin.social 2 points 8 months ago* (last edited 8 months ago) (3 children)

It's not one week of inactivity, it has been going on for months

Looks at 2 months straight of kbin devlogs since October, when the man was having pretty significant personal issues

Not to mention he was recently sick, tended to financial issues and personal matters, and handled formalities relating to the project. This isn't even mentioning that he communicated all of this in the devlog magazine, or the fact that he has implemented suggestions multiple times at the request of the community to enhance QoL and has given users agency to make mod contributions.

You might want to take your own advice. This has also allowed me to revise my earlier statement. You people are actually insane.

[–] daredevil@kbin.social 4 points 8 months ago (1 children)

every post I see from them further paints them as very childish and desperate for attention.

[–] daredevil@kbin.social 5 points 8 months ago

I just use it to bring awareness to similar magazines/communities across the fediverse

[–] daredevil@kbin.social 4 points 8 months ago

Agreed, every post I see from them further paints them as very childish and desperate for attention.

[–] daredevil@kbin.social 2 points 8 months ago* (last edited 8 months ago)

Since @ernest is the owner of that magazine, I think moderator requests have to go through him. Unfortunately, he was dealing with a slight fever a while ago, and he was also handling financial planning and project formalities a while back. Hopefully things haven't gotten worse. For what it's worth, I think it's great that you're eager to contribute. There have definitely been some spam issues recently, and I hope a solution can be found soon. Maybe even something like temporarily quarantining posts with a sub-10% upvote-to-downvote ratio over a day/week until an admin approves them. Anyway, best of luck with modship.
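To make that threshold idea concrete, here is a minimal, purely hypothetical sketch, reading the ratio as the share of upvotes among all votes; the Post fields, the should_quarantine helper, and the 10%/24h numbers are illustrative assumptions, not anything that exists in kbin.

```python
# Hypothetical sketch of the quarantine idea above; the Post fields and the
# 10% / 24-hour thresholds are illustrative, not part of kbin.
from dataclasses import dataclass

@dataclass
class Post:
    upvotes: int
    downvotes: int
    age_hours: float

def should_quarantine(post: Post, threshold: float = 0.10,
                      window_hours: float = 24.0) -> bool:
    """Quarantine a post whose upvote share stays below the threshold
    for the whole observation window, pending admin review."""
    votes = post.upvotes + post.downvotes
    if post.age_hours < window_hours or votes == 0:
        return False  # too new, or no voting signal yet
    return post.upvotes / votes < threshold

print(should_quarantine(Post(upvotes=1, downvotes=30, age_hours=36)))  # True
```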

[–] daredevil@kbin.social 2 points 8 months ago

Came here to post because I've also seen The Legend of Zelda: Symphony of the Goddesses live. The poster for it is behind me at the moment. Great experience.

[–] daredevil@kbin.social 2 points 8 months ago

I've only felt the need to change distros once, from Linux Mint to EndeavourOS, because I wanted Wayland support. I realize there were ways to get Wayland working on Mint in the past, but I've already made the switch and gotten used to my current setup. I personally don't feel like I'm missing out by sticking to one distro, tbh. If you're enjoying Mint, I'd suggest sticking with it unless another distro fulfills a specific need you can't meet on Mint.

[–] daredevil@kbin.social 2 points 8 months ago* (last edited 8 months ago)

You could make a (private) collection for your subscribed magazines. Not exactly the feature you were asking for, but it's an option to curate your feed. On Firefox I have various collections bookmarked and tagged so accessibility is seamless.

[–] daredevil@kbin.social 1 points 9 months ago* (last edited 9 months ago) (1 children)

I imagine something like this

Duly noted, I missed a line of text. Won't try to help in the future

 
 

Terminal Trove showcases the best of the terminal. Discover a collection of CLI, TUI, and other developer tools at Terminal Trove.

 

Song of the Ancients / Devola (イニシエノウタ/デボル) · SQUARE ENIX MUSIC · Keiichi Okabe (岡部 啓一) · MONACA

NieR Gestalt & NieR Replicant Original Soundtrack

Released on: 2010-04-21

 

On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej Karpathy and Nvidia's Jim Fan. That means we're closer to having a ChatGPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.

Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral's models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context "weights" are the computer files that represent a trained neural network.)

Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta's much larger LLaMA 2 70B (70 billion parameter) large language model and that it matches or exceeds OpenAI's GPT-3.5 on certain benchmarks, as seen in the chart below.
[Chart: Mixtral 8x7B performance vs. LLaMA 2 70B and GPT-3.5, provided by Mistral.]

The speed at which open-weights AI models have caught up with OpenAI's top offering of a year ago has taken many by surprise. Pietro Schirano, the founder of EverArt, wrote on X, "Just incredible. I am running Mistral 8x7B instruct at 27 tokens per second, completely locally thanks to @LMStudioAI. A model that scores better than GPT-3.5, locally. Imagine where we will be 1 year from now."

LexicaArt founder Sharif Shameem tweeted, "The Mixtral MoE model genuinely feels like an inflection point — a true GPT-3.5 level model that can run at 30 tokens/sec on an M1. Imagine all the products now possible when inference is 100% free and your data stays on your device." To which Andrej Karpathy replied, "Agree. It feels like the capability / reasoning power has made major strides, lagging behind is more the UI/UX of the whole thing, maybe some tool use finetuning, maybe some RAG databases, etc."

Mixture of experts

So what does mixture of experts mean? As this excellent Hugging Face guide explains, it refers to a machine-learning model architecture where a gate network routes input data to different specialized neural network components, known as "experts," for processing. The advantage of this is that it enables more efficient and scalable model training and inference, as only a subset of experts are activated for each input, reducing the computational load compared to monolithic models with equivalent parameter counts.

In layperson's terms, a MoE is like having a team of specialized workers (the "experts") in a factory, where a smart system (the "gate network") decides which worker is best suited to handle each specific task. This setup makes the whole process more efficient and faster, as each task is done by an expert in that area, and not every worker needs to be involved in every task, unlike in a traditional factory where every worker might have to do a bit of everything.

OpenAI has been rumored to use a MoE system with GPT-4, accounting for some of its performance. In the case of Mixtral 8x7B, the name implies that the model is a mixture of eight 7 billion-parameter neural networks, but as Karpathy pointed out in a tweet, the name is slightly misleading because, "it is not all 7B params that are being 8x'd, only the FeedForward blocks in the Transformer are 8x'd, everything else stays the same. Hence also why total number of params is not 56B but only 46.7B."
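To make that architecture concrete, below is a minimal, illustrative sketch (in PyTorch, not Mistral's actual code) of a sparse MoE feed-forward layer with top-2 routing: a gate scores the experts for each token, only the two best-scoring experts run, and their outputs are mixed using the normalized gate weights. The layer sizes and names are assumptions for illustration only.

```python
# Minimal sketch of a sparse mixture-of-experts feed-forward layer with
# top-2 routing. Sizes and names are illustrative, not Mixtral's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The "gate network": scores each expert for every token.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # The "experts": independent feed-forward blocks. Only these are
        # replicated 8x; the rest of the Transformer block is shared.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.SiLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (n_tokens, d_model)
        scores = self.gate(x)                      # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # normalize over chosen experts
        out = torch.zeros_like(x)
        # Each token is processed only by its top-k experts; their outputs
        # are combined using the normalized gate weights.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 512)                       # 4 tokens, d_model = 512
print(MoEFeedForward()(tokens).shape)              # torch.Size([4, 512])
```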

Mixtral is not the first "open" mixture of experts model, but it is notable for its relatively small parameter count and strong performance. It's out now, available on Hugging Face and via BitTorrent under the Apache 2.0 license, and people have been running it locally using an app called LM Studio. Mistral also began offering beta access to an API for three levels of Mistral models on Monday.
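For readers who want to try the open weights directly, a rough sketch of loading them through the Hugging Face transformers library might look like the following. The model ID is the one Mistral published, but the memory requirements of the ~46.7B-parameter model are substantial, so in practice quantized builds via LM Studio or llama.cpp are the more common local route; treat this as an assumption-laden sketch rather than official instructions.

```python
# Rough sketch of loading the open Mixtral weights with Hugging Face
# transformers. Assumes enough GPU/CPU memory for fp16 weights; quantized
# builds (e.g. 4-bit) are usually needed on consumer hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Mixtral's instruct format wraps the prompt in [INST] ... [/INST] tags.
prompt = "[INST] Explain what a mixture-of-experts model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```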

 


16
submitted 10 months ago* (last edited 10 months ago) by daredevil@kbin.social to c/vgmusic@lemmy.world
 

Discovery helps reveal how a long-lost empire used multiculturalism to achieve political stability

3
Breaking Bread (media.kbin.social)
 
 

Resident Evil 4 Remake has been crowned PlayStation Game of the Year at The Golden Joysticks 2023 powered by Intel.

Capcom's third Resident Evil remake was released in March of this year and took players back to rural Spain to confront the mysterious, and deadly, Los Illuminados cult, 18 years after we originally did on the PlayStation 2. Fans clearly loved revisiting the classic survival horror game, as it managed to beat out the other games in the category, including Final Fantasy 16, Street Fighter 6, and Star Wars Jedi: Survivor.

The other Golden Joystick Awards 2023 nominees in this category can be found below:

  • Final Fantasy 16
  • Resident Evil 4 Remake (winner)
  • Street Fighter 6
  • Humanity
  • Armored Core 6: Fires of Rubicon
  • Star Wars Jedi: Survivor
 

Final Fantasy 7 Rebirth has won the Most Wanted Game category at the Golden Joystick Awards 2023 powered by Intel.

Due in February of next year, Square Enix's much-anticipated follow-up marks the second part of a planned three-part modern-day reimagining of its 1997 source material.

Hot on the heels of 2020's Final Fantasy 7 Remake, Final Fantasy 7 Rebirth extends the legendary story beyond Midgar – with a recent trailer teasing familiar spots such as Cid's Rocket Town, Red XIII's Cosmo Canyon, and the indelible Gold Saucer theme park.

Add flashes of an introspective Sephiroth, Jenova, Junon Harbor and that thoroughfare-dominating parade, and it's easy to see why people are looking forward to this one, and, indeed, why it's come out on top of this year's Golden Joysticks' Most Wanted category.

Throw in the teasiest of Emerald Weapon teasers, and… yeah, February 29, 2024 really can't come soon enough. Full credit to Final Fantasy 7 Rebirth for rising to the top of its 20-game-strong category.

Here's the full list of Most Wanted Game Golden Joystick 2023 nominees, and as you can see Final Fantasy 7 Rebirth beat 19 other games to come out on top:

  • Death Stranding 2
  • Star Wars Outlaws
  • Final Fantasy VII Rebirth (Winner)
  • Tekken 8
  • Vampire: The Masquerade - Bloodlines 2
  • S.T.A.L.K.E.R. 2: Heart of Chornobyl
  • Hades 2
  • Fable
  • Hollow Knight: Silksong
  • EVERYWHERE
  • Frostpunk 2
  • Ark 2
  • Metal Gear Solid Δ: Snake Eater
  • Persona 3 Reload
  • Bulwark: Falconeer Chronicles
  • Suicide Squad: Kill the Justice League
  • Pacific Drive
  • Black Myth: Wukong
  • Banishers: Ghosts of New Eden
  • Warhammer Age of Sigmar: Realms of Ruin


 

The first one that comes to mind is having to travel with an NPC whose walk/run speed doesn't match mine.

 

@Ernest has pushed an update which allows users to request ownership/moderation of abandoned magazines. Ghost/abandoned magazines were fairly prevalent after the initial wave of hype due to users either squatting magazine names or becoming inactive for other reasons. Now is your chance to get involved, if you were waiting to do so.

To request ownership/moderator privileges, scroll down to the "MODERATORS" section in the sidebar, click the icon of a hand pointing upwards, and make the request. Cheers, and thank you for your hard work, Ernest, as well as to the future mods.
