danzabia

joined 1 week ago
[–] danzabia@infosec.pub 1 points 1 hour ago

I've been running Gemma3 4b locally on Ollama and it's useful. I'm thinking about applications where a multimodal model could receive video or sensor feeds (like a security cam, say).
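A minimal sketch of that idea, assuming Ollama is serving locally on its default port and a hypothetical `frame.jpg` grabbed from the camera — this just builds the request body for Ollama's `/api/chat` endpoint with one image attached:

```python
import base64

def build_chat_payload(image_path: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint
    with one base64-encoded image attached (multimodal input)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "gemma3:4b",
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                # Ollama expects images as base64 strings in this list
                "images": [image_b64],
            }
        ],
    }

# POST this as JSON to http://localhost:11434/api/chat
```

A real pipeline would grab frames on a timer (or on motion events) and send them one at a time, since a 4b model on consumer hardware won't keep up with full framerate video.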

[–] danzabia@infosec.pub 1 points 1 hour ago

Students are now prompting the AI to make it sound like a student wrote it, or putting it through an AI detector and changing the parts that are detected as being written by AI (adding typos or weird grammar, say). Even kids who write their own papers have to do the latter sometimes.

[–] danzabia@infosec.pub 1 points 1 hour ago

Then the student could just ask the AI to simulate a thesis defense and learn answers to the most likely questions.

The funny thing is, they would actually learn the material this way, through a kind of osmosis. I remember writing cheat sheets in college and finding I didn't need them by the end.

So there are potential use cases, but not if the university doesn't acknowledge it and continues asking for work that can be simply automated.

[–] danzabia@infosec.pub 2 points 1 hour ago

Perhaps some people can't afford it. I have the luxury of paying for weekly therapy, but it's probably one of my biggest line-item expenses.

[–] danzabia@infosec.pub 1 points 1 hour ago* (last edited 1 hour ago)

Yeah, it's like me never having alcohol before and walking into a frat party as a freshman. Sometimes it's better to come prepared.

[–] danzabia@infosec.pub 1 points 1 hour ago

People who track performance (like METR, a nonprofit) indicate that progress is, if anything, speeding up. Most people's use case is so simple they can't detect the difference. However, for cases like complex problem solving, agentic tasks, etc., you can in fact see significant progress happening. This should be concerning if you think the world isn't ready for labor displaced by LLMs.

[–] danzabia@infosec.pub 1 points 1 hour ago

I think this may be a skill issue on your part.

 

I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

[–] danzabia@infosec.pub 1 points 12 hours ago

The ad writes itself: NO ONEDRIVE.

[–] danzabia@infosec.pub 10 points 1 day ago (1 children)

I'm curious about the scientific consensus continually undershooting. At a certain point, if you're always updating in one direction, shouldn't you overcorrect a bit?

[–] danzabia@infosec.pub 6 points 1 day ago

Better that they cover it than not. And the recent ramp-up is worth reporting on; it's a new level of weaponization compared to previous administrations.

[–] danzabia@infosec.pub 1 points 2 days ago (1 children)
[–] danzabia@infosec.pub 2 points 4 days ago (1 children)

I'm going with Windows Me key.
