[-] TheOtherJake@beehaw.org 12 points 10 months ago* (last edited 10 months ago)

Endless Sky. The save game is a text file. Save a file on the mobile app (F-Droid) and on the PC (Flatpak), and note the last line of each. That last line is what you must swap to transfer the save file. It is the first game I have played practically on both. The game mechanics differ between the two, and you need to alter your strategy accordingly. On mobile, I travel with a ship set up for boarding pirate vessels and never target enemies directly; all of my guns are automatic turrets. I just use a fast ship and travel with a large group of fighters. It is more of a grind on mobile, but it can be used to build up resources and reserves. The game is much bigger than it first appears. You need to either check out a guide or explore very deep into the obscure pockets of the map.
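If anyone wants to script the swap, here is a minimal sketch of the idea in Python, assuming the saves are plain text and only the last line is platform-specific. The file paths are hypothetical; find your actual saves in the Flatpak and F-Droid data directories:

```python
from pathlib import Path

def transfer_save(src: str, dst: str) -> None:
    """Copy a save from one platform to another, keeping the
    destination file's original last line (hypothetical helper;
    the real save locations depend on your install)."""
    src_lines = Path(src).read_text().splitlines()
    dst_lines = Path(dst).read_text().splitlines()
    # Take everything from the source save except its last line,
    # then append the destination's platform-specific last line.
    merged = src_lines[:-1] + [dst_lines[-1]]
    Path(dst).write_text("\n".join(merged) + "\n")
```

Back up both saves before trying anything like this.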

[-] TheOtherJake@beehaw.org 16 points 11 months ago

I won't touch the proprietary junk. Big tech "free" usually means street corner data whore. I have a dozen FOSS models running offline on my computer though. I also have text-to-image and text-to-speech, am working on speech-to-text, and will probably build my Iron Man suit after that.

These things can't be trusted though. An LLM is just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM trustworthy, but they involve offline databases and prompting for direct citations; these are different from chat prompt structures.
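The citation idea can be sketched in a few lines: retrieve passages from a local store, then build a prompt that forces the model to answer only from them and cite which passage it used. Everything here (the naive keyword retrieval, the prompt wording) is a toy assumption, not how any particular tool does it:

```python
def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank document ids by naive keyword overlap with the query.
    A real system would use embeddings, but the shape is the same."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(docs[d].lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Assemble a prompt that restricts the model to cited sources."""
    ids = retrieve(query, docs)
    context = "\n".join(f"[{i}] {docs[i]}" for i in ids)
    return (
        "Answer ONLY from the sources below and cite them like [id].\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
```

The point is that the answer becomes checkable: every claim should trace back to an `[id]` you can look up offline.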

120
submitted 11 months ago* (last edited 11 months ago) by TheOtherJake@beehaw.org to c/lgbtq_plus@beehaw.org

Much love.

[-] TheOtherJake@beehaw.org 21 points 11 months ago

Oobabooga is the main GUI used to interact with models.

https://github.com/oobabooga/text-generation-webui

FYI, you need to find checkpoint models. In the chat-model space, naming can be ambiguous for a few reasons I'm not going to ramble about here. The main source of models is Hugging Face. Start with this model (or get the censored version):

https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML

First, let's break down the title.

  • This is a model based on Meta's Llama 2.
  • This is not "FOSS" in the GPL/MIT sense. The model has a license that is quite broad in scope, with the key stipulation that it cannot be used commercially in apps with more than 700 million monthly users.
  • Next, it was quantized by a popular user going by "TheBloke." I have no idea who this is IRL, but I imagine it is a pseudonym or corporate alias given how much content the account uploads to HF.
  • The model has 7 billion parameters and is fine-tuned for chat applications.
  • "Uncensored" means it will respond to most inputs as best it can. It can get NSFW, or talk about almost anything. In practice there are still some minor biases, likely from an overarching morality inherent to the datasets used, or it might be coded somewhere obscure.
  • The last part of the title says this is a GGML model, meaning it can run on CPU, GPU, or a split between the two.
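The same breakdown can be done mechanically. Here is a tiny parser for this particular naming pattern; other uploaders use different conventions, so this is illustrative only:

```python
def parse_model_name(name: str) -> dict:
    """Split a model name like 'llama2_7b_chat_uncensored-GGML'
    into its parts. Only handles this one underscore/dash pattern;
    Hugging Face naming is not standardized."""
    base, _, fmt = name.partition("-")
    family, size, tuning, censorship = base.split("_")
    return {"family": family, "size": size, "tuning": tuning,
            "censorship": censorship, "format": fmt}
```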

As for options on the landing page, or "model card":

  • You need to get one of the older-style models with "q(number)" as the quantization type. Do not get the ones that say "qK," as these won't work with the llama.cpp build you get with Oobabooga.
  • Look at the guide at the bottom of the model card that tells you how much RAM you need for each quantization type. If you have an Nvidia GPU with the CUDA API, enabling GPU layers makes the model run faster, and with quite a bit less system memory than the model card states.
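As a rough sanity check on those memory tables, the dominant cost is just parameter count times bits per weight. This back-of-envelope sketch uses an assumed ~1 GB flat overhead of my own, so treat the model card's table as the real reference:

```python
def approx_ram_gb(n_params_b: float, bits: int, overhead_gb: float = 1.0) -> float:
    """Ballpark memory for a quantized model: parameters (in
    billions) times bits per weight, converted to GB, plus a flat
    allowance for context and buffers. The overhead figure is an
    assumption, not from any model card."""
    return n_params_b * 1e9 * bits / 8 / 1e9 + overhead_gb

# e.g. a 7B model at q4: 7 * 0.5 + 1 = 4.5 GB
```

GPU offloading shrinks the system-RAM side of this because offloaded layers live in VRAM instead.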

The 7B models are roughly like having a conversation with your average teenager. Asking technical questions yielded around 50% accuracy in my experience. A 13B model got around 80%. The 30B WizardLM is around 90-95%. I'm still working on getting a 70B running on my computer. A lot of the larger models require compiling tools from source; they won't work directly with Oobabooga.

14

My main reason for playing with offline AI right now is to help me get further into the Computer Science curriculum on my own. (disabled/just curious)

I have seen a few AI chat characters with highly detailed prompts that attempt to keep the LLM boxed into a cosplay character. I would like to try creating fellow students in a learning curriculum. I haven't seen anything like this yet, but maybe someone here has, or has some helpful tips. I would like to prompt a character to not directly use programming knowledge from its base tokens and only use what is available in a LoRA, a large context, or a langchain database. I would like the experience of learning alongside someone I can talk out ideas with, when they have the same amount of information as myself. Like, I could grab all the information for a university lecture posted online and feed it to the AI, watch and read the material myself, and work through the quizzes, questioning anything I do not understand, with the answers restricted to my own internal context region.
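A character prompt along these lines is what I have in mind as a starting point. The template, the persona name, and the refusal line are all hypothetical and would need testing against a real model, since nothing actually forces an LLM to ignore its base knowledge:

```python
def fellow_student_prompt(persona: str, lecture_notes: str, question: str) -> str:
    """Build a 'fellow student' character prompt that tries to box
    the model into only the supplied course material. Purely a
    sketch; real effectiveness varies by model and needs iteration."""
    return (
        f"You are {persona}, a student working through the same course as me.\n"
        "You know NOTHING about programming beyond the notes below.\n"
        "If the notes don't cover something, say 'I haven't learned that yet.'\n"
        f"--- NOTES ---\n{lecture_notes}\n--- END NOTES ---\n"
        f"Me: {question}\n{persona}:"
    )
```

The notes section is where the lecture material (or retrieved chunks from a langchain store) would be pasted in.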

[-] TheOtherJake@beehaw.org 18 points 11 months ago

Hey there Lionir. Thanks for the post. Can the Beehaw team please look into copying or getting the creator of this bot to work here? https://lemmy.world/u/PipedLinkBot@feddit.rocks

I think the person that created that bot is somehow connected to the piped.video project. I know the whole privacy consciousness thing isn't for everyone, but this bot's posts are quite popular elsewhere on Lemmy.

FYI, the main reason to use piped.video links is that it is set up as an alternative front end for YT that automatically routes all users through a bunch of VPNs to help mitigate Alphabet's privacy abuses and manipulation.

[-] TheOtherJake@beehaw.org 15 points 11 months ago

Google is broken because AI is making it obsolete. I bet in 10 years Google will be a historical footnote.

103

I just got Oobabooga running for the first time with Llama 2, and have Automatic1111 and ComfyUI running for images. I am curious about ML too, but I don't know where to start with that one yet.

For the uninitiated, all of these tools are running offline open source (or mostly) models.

2
submitted 1 year ago by TheOtherJake@beehaw.org to c/chat@beehaw.org

In the USA the cultural atmosphere slows to a crawl between Christmas and New Year's. I couldn't care less about the holidays. I am curious whether the slowdown is entirely cultural, or whether there is some kind of inherent coupling where we all naturally slow down with the longest winter nights, in places with significantly shorter daylight hours.

I've worked night shifts doing hard manual labor. I'm well aware humans can adapt to any rhythm when required. I'm curious about the effects on people that do not have such rigid lifestyles.

[-] TheOtherJake@beehaw.org 13 points 1 year ago

I was a buyer for a chain of high-end bike shops for many years. Amazon really only sells junk products. Real quality brands of niche products can't support both Amazon and the typical brick-and-mortar business inventory structure. Like, I spent between $100k and $500k in preseason bike brand commitments for 3 stores. If any of those brands had decided to allow sales on Amazon, I would have dropped them immediately. Multiply this by every bike shop that exists. This is more than Amazon could compete with by a long shot. The issue is that every buyer in a shop knows what they are able to sell effectively and buys accordingly. I tailored my orders for every shop independently. It would be impossible for Amazon to predict and fund high-end bikes at this scale.

"So what," you say, "it's just bikes." No, it is not. The bike brands are usually part of a group of brands that includes several parts, clothing, and accessory lines. These are part of the preseason commitments with the bike brands too, so none of these are sold on Amazon either. This is the case with most things: the best, or even decent, stuff is not sold on Amazon.

The worst thing about Amazon is that they commingle all identical products in their warehouses. This makes it trivial for a seller to insert fake goods into a product pool, and it is completely untraceable back to them.

0

Ideal background material IMO

[-] TheOtherJake@beehaw.org 13 points 1 year ago

Steve hitched a ride up a mountain, got off at the top, watched the bus drive away, convinced he's conquering all the fops, too dumb for the bus; froze to death at the stop.

1
submitted 1 year ago* (last edited 1 year ago) by TheOtherJake@beehaw.org to c/food@beehaw.org

I don't want the super health food tree bark nonsense you give nonbelievers. I'm looking for better than those of any animal infidels. Don't hold back on me now!

[-] TheOtherJake@beehaw.org 11 points 1 year ago* (last edited 1 year ago)

The main difference will show if you have an Intel processor with the hybrid (asymmetric) core architecture, which Intel first shipped in the 10th-generation era with Lakefield and made mainstream with 12th-gen Alder Lake. A major reason Windows 11 was created is this asymmetrical core architecture.

One of the core parts of an operating system is the CPU scheduler. This is what juggles all the different things happening in the foreground and background to make the computer work properly. On the surface the CPU scheduler is rather simple as far as reading and understanding the code, but it is the kind of thing where a tiny change can have massive repercussions in unexpected ways. It maintains a delicate balance that is very easy to screw up.

One of the fundamental assumptions of the CPU scheduler used in W10 is that all of the cores in your computer are the same. Accommodating a much more complex architecture, with some faster and some slower cores, different spin-up rates from idle to max speed on the two types, and speed differences even between cores with adjacent threads, required a substantial rewrite of Windows, along with changes to cache management strategies. This still isn't fully publicly documented for W11. I just know how the scheduler changed in Linux, and I watched a conference talk where John Brown, the main Intel open source developer, mentioned that the hybrid-core asymmetry was the main trigger for W11.
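A toy illustration of why the identical-cores assumption matters: a scheduler that knows each core's speed can place work far better than one that treats them as equal. This greedy sketch (my own simplification, not Windows' or Linux's actual algorithm) assigns each task to the core that would finish it earliest:

```python
def makespan(tasks: list[float], core_speeds: list[float]) -> float:
    """Greedy longest-task-first scheduling across cores with
    different speeds; returns the time when the last core finishes.
    A toy model of why a scheduler must know per-core speed."""
    finish = [0.0] * len(core_speeds)
    for t in sorted(tasks, reverse=True):
        # Pick the core with the earliest completion time for this task.
        i = min(range(len(core_speeds)),
                key=lambda c: finish[c] + t / core_speeds[c])
        finish[i] += t / core_speeds[i]
    return max(finish)
```

A scheduler that assumes every core runs at speed 1 would happily park a heavy task on a slow E-core; with the speeds visible, the heavy work lands on the fast core and the small stuff fills in around it.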

[-] TheOtherJake@beehaw.org 25 points 1 year ago

Rule number one of buying a new car: get the dealer to disconnect the modem.

Cars should be entirely open source by government regulation. All software should be public, and the manufacturer should be required to host and maintain a public toolchain that can reproduce the software and any revisions made. All of this should also be mirrored by the Library of Congress and made publicly available as a second source indefinitely. This is about ownership. It is never okay to reserve digital rights. If I do not own everything, I am only renting from the real owner. Proprietary goods are theft of ownership. It really is that simple.

[-] TheOtherJake@beehaw.org 22 points 1 year ago

"Hi Karen, this is HR. You can now log anonymous complaints about IT by logging into this external website with your company credentials. We provide this for your security, because IT is able to monitor in-network communication."

[-] TheOtherJake@beehaw.org 19 points 1 year ago* (last edited 1 year ago)

They need to hit the final nail on the head. All smartphones sold in Europe must have fully documented and open source hardware, including the entire chipset, all peripherals, and the modem, with all registers and interfaces documented, the full API, and all programming documentation, along with a public toolchain that can reproduce the software as shipped with the device, updated with any changes made to future iterations as soon as the updated software is released.

This law would make these devices lifetime devices, if you choose; as in your lifetime. It would murder the disposable hardware culture, and it should happen now. Moore's law is dead. The race is over.

1
submitted 1 year ago by TheOtherJake@beehaw.org to c/food@beehaw.org

Tell me the details: what makes yours perfect, why, and your cultural influence if any. I mean, rice is totally different in Mexican, Chinese, Indian, Japanese, and Persian food, just to name a few. The spices and sauces matter, but they are not my main interest. I am really interested in the grain variety, and specifically how you prep, cook, and absolutely anything you do after. Don't skip the cultural details that you might otherwise presume everyone knows. If you know why some brand or region produces better ingredients, say so. I know it seems simple and mundane, but it really is not. I want to master your rice as you make it in your culture. Please tell me how.

So, how do you do rice?


TheOtherJake

joined 1 year ago