[-] ConsciousCode@beehaw.org 22 points 9 months ago

Good to note that this isn't even hypothetical, it literally happened with cable. First it was ad-funded, then you paid to get rid of ads, then you paid exorbitant prices to get fed ads, and the final evolution was being required to pay $100+ for bundles including channels you'd never use to get at the one you would. It's already happening to streaming services too, which have started to bundle.

[-] ConsciousCode@beehaw.org 33 points 9 months ago

Huh, is this the start of a new post-platform era where we see such business models the way we now see cigarettes?

[-] ConsciousCode@beehaw.org 23 points 9 months ago

Can't be a billionaire if you pass a certain threshold of self-awareness, it's the rules.

[-] ConsciousCode@beehaw.org 28 points 9 months ago

Now I want to see his reaction when people start breaking out the guillotines because his ilk have made peaceful resolution impossible.

[-] ConsciousCode@beehaw.org 33 points 9 months ago

Daily reminder that Firefox is customizable to the point of removing Mozilla's telemetry and making it look and feel almost like Chromium. And no, de-Googled Chromium probably isn't enough, because preliminary code for implementing WEI has been pushed upstream (basically they added the code which makes it possible for WEI to be implemented, strongly suggesting they intend to actually implement it upstream rather than only in Chrome).

[-] ConsciousCode@beehaw.org 20 points 10 months ago

Doesn't this just make Trump's case worse? He was under strict limits on what he could say on social media lest he "accidentally" intimidate witnesses, but he's still culpable if his cronies do it for him.

[-] ConsciousCode@beehaw.org 19 points 10 months ago

The EU giveth (removable batteries, mandated USB-C) and it taketh away

[-] ConsciousCode@beehaw.org 23 points 10 months ago

It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and even if it's perfectly executed, its effects on the model are unpredictable. It could lead to a kind of "race blindness", where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there's a therapist AI (not ideal, but mental health is horribly understaffed and most people can't afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
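For a sense of why this goes wrong, here's a minimal sketch of the kind of naive substring filter that produces the Scunthorpe problem (the word list and function name are made up for illustration):

BANNED_SUBSTRINGS = ["ass"]  # toy profanity list

def is_clean(text: str) -> bool:
    # Substring matching has no concept of word boundaries or context,
    # which is exactly how town names like Scunthorpe end up blacklisted.
    lowered = text.lower()
    return not any(bad in lowered for bad in BANNED_SUBSTRINGS)

print(is_clean("a classic assassin character"))  # False: innocent words get flagged

The same blunt-instrument problem applies to filtering training data, just one step removed.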

Techniques like "constitutional AI" and RLHF, applied after the foundation model is trained, really are the best approach here: they let the model get an unbiased view of a very biased culture, and then you shape its attitudes afterwards.

[-] ConsciousCode@beehaw.org 47 points 10 months ago

To be honest, I'm fine with it in isolation. Copyright is bullshit, and the internet is a quasi-socialist utopia where information (an infinitely copyable resource, which thus has infinite supply and zero value under capitalist economics) is free and humanity can collaborate as a species. The problem is that companies like Google are parasites that take and don't give back, or even make life actively worse for everyone else. The demand for compensation isn't so much because people deserve compensation for IP per se; it's an implicit understanding of the inherent unfairness of Google claiming ownership of other people's information while hoarding it and the wealth it generates, with no compensation for the people who actually created that wealth. "If you're going to steal from us, at least pay us a fraction of the wealth like a normal capitalist."

If they made the models open source it'd at least be debatable, though still suss, since there's a huge push for companies to replace all cognitive labor with AI whether or not it's even ready for that (which itself is only a problem insofar as people need to work to live; professionally created media is art insofar as humans make it for a purpose, but corporations only care about it as media/content, so AI fits the bill perfectly). Corporations are artificial metaintelligences with misaligned terminal goals, so this is a match made in superhell. There's a nonzero chance corporations might actually replace all human employees and even shareholders and just become their own version of Skynet.

Really what I'm saying is we should eat the rich, burn down the googleplex, and take back the means of production.

[-] ConsciousCode@beehaw.org 21 points 10 months ago* (last edited 10 months ago)

People arguing he shouldn't be prosecuted is wild. It's like we've been so cowed into submission by this dumpster fire of an electoral system that we're afraid to prosecute high treason because otherwise the traitor might win.

[-] ConsciousCode@beehaw.org 20 points 11 months ago* (last edited 11 months ago)

This gives him way too much credit lol. He isn't playing 5D chess, he impulse-bought a $44B company and is too much of a narcissistic control freak to stop touching it. Harming marginalized people is a natural consequence of essentially any action a billionaire takes by virtue of their existence.

10 points

Considering the potential of the fediverse, is there any version of that for search engines? Something to break up a major point of internet centralization, fragility, and inertia to change (e.g. Google will never, ever offer IPFS search). Not only would decentralization be inherently beneficial, it would mean we're no longer compelled to hand over private information to centralized, unvetted corporations like Google, Microsoft, and DuckDuckGo.

[-] ConsciousCode@beehaw.org 44 points 1 year ago

The hype cycle around AI right now is misleading. It isn't revolutionary because of these niche one-off use cases; it's revolutionary because it's one AI that can do anything. The problem is that what it's most useful for is boring to non-technical people.

Take the library I wrote to create "semantic functions" from natural-language tasks - one of the examples I keep coming back to when demonstrating its usefulness is:

from servitor import semantic

@semantic
def list_people(text: str) -> list[str]:
    '''List the people mentioned in the given text.'''

8 months ago, this would've been literally impossible. I could approximate it with thousands of lines of code using spaCy and other NLP libraries to do NER, maybe a dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. Here, I just tell the AI to do it and it... does. Just like that. But you can't hype up an algorithm that does boring stuff like NLP, so people focus on the danger of AI (which is real, but laymen and the news focus on the wrong things), how it's going to take everyone's jobs (it will, but that's a problem with our system, which equates having a job with being allowed to live), how it's super-intelligent, etc. It's all the business logic and the things that are hard to program but easy to describe that will really show off its power.
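For contrast, a rough sketch of the traditional spaCy/NER route I described above might look something like this (assuming the en_core_web_sm model has been downloaded, and before adding any of the extra heuristics):

import spacy

nlp = spacy.load("en_core_web_sm")

def list_people(text: str) -> list[str]:
    # Run the NLP pipeline and keep entities tagged as PERSON; false positives
    # and missed names would still need heuristics layered on top of this.
    doc = nlp(text)
    return [ent.text for ent in doc.ents if ent.label_ == "PERSON"]

And that's the easy version - the point is how much scaffolding the "boring" approach needs compared to one decorated function.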

4 points

Not sure if this is the right place to put this, but I wrote a library (MIT) for creating "semantic functions" using LLMs to execute them. It's optimized for ergonomics and opacity, so you can write your functions like:

from servitor import semantic

@semantic
def list_people(text: str) -> list[str]:
    """List the people mentioned in the text."""

(That's not a typo - the body of the function is just its docstring; servitor detects that the function returns None and uses the docstring instead.)
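Once a backend is configured (see the setup below), calling it is just a normal function call. A hypothetical example - the exact output will vary by model:

# Hypothetical usage; the input string and output are made up for illustration.
people = list_people("Alice met Bob and Dr. Carol at the conference.")
print(people)  # something like ["Alice", "Bob", "Carol"]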

Basic setup:

$ pip install .[openai]
$ pip install .[gpt4all]
$ cp .env.template .env

Then edit .env to have your API key or model name/path.

I'm hoping for this to be a first step towards people treating LLMs less like agents and more like inference engines - the former is currently prevalent because ChatGPT is a chatbot, but the latter is more accurate to what they actually are.

I designed it specifically so it's easy to switch between models and LLM providers without requiring dependencies for all of them. OpenAI is implemented because it's the easiest for me to test with, but I also implemented gpt4all support as a first local model library.
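For the curious, the general idea is the usual lazy-import pattern - this is a generic sketch of that pattern, not servitor's actual code, and the provider names are just examples:

def get_connector(provider: str):
    # Import the backend lazily so users only need the dependency they actually use.
    if provider == "openai":
        import openai
        return openai
    if provider == "gpt4all":
        import gpt4all
        return gpt4all
    raise ValueError(f"Unknown LLM provider: {provider}")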

What do you think? Can you find any issues? Implement any connectors or adapters? Any features you'd like to see? What can you make with this?

