pe1uca

joined 2 years ago
[–] pe1uca@lemmy.pe1uca.dev 3 points 6 months ago (1 children)

Gameloft was a sister company, wasn't it? What's happened to them?
I've seen a few trailers for their games in some Nintendo Directs. Are they any good? Or have they followed a similar path to Ubisoft?

[–] pe1uca@lemmy.pe1uca.dev 23 points 7 months ago

I'd say being tracked only at the account level is one thing, and it's better than being tracked at the traffic level.

That way you know only your history on the site can be used, as opposed to whatever other fingerprinting sites might do at the browser, cookie, or IP level.

[–] pe1uca@lemmy.pe1uca.dev 3 points 7 months ago

Found the issue ^-^'
UFW also blocks traffic between Docker and the host.
I had to add these rules:

ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 80  
ufw allow proto tcp from 172.16.0.0/12 to 172.16.0.0/12 port 443  
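
The 172.16.0.0/12 range above assumes Docker's default address pools; you can double-check which subnet a given network actually uses with something like:

    docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'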
[–] pe1uca@lemmy.pe1uca.dev 1 points 7 months ago

Same problem here.
I tried a few values and got the same result: ping works but curl doesn't.

[–] pe1uca@lemmy.pe1uca.dev 28 points 7 months ago (2 children)

I wonder if Trudeau will make the same move as the Mexican president and tell the actual truth about the meeting.

[–] pe1uca@lemmy.pe1uca.dev 22 points 7 months ago (2 children)

Why not report it in the repo?

[–] pe1uca@lemmy.pe1uca.dev 6 points 7 months ago (1 children)

Not sure what you'd consider lightweight. I've been using https://github.com/jhj0517/Whisper-WebUI with faster-whisper.
The GPU integration has never worked well for me, but the CPU one works wonders.
You'll have to check if the models offer good results for those languages.

[–] pe1uca@lemmy.pe1uca.dev 2 points 7 months ago

The video on YT and the track played on YT Music are two different uploads; you can easily find the YT one by taking the ID from the YTM URL. So yeah, yt-dlp should get you only the song if you create a playlist with only songs instead of music videos.
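
For example, something along these lines should pull just the audio (placeholder ID, and assuming yt-dlp's usual audio-extraction flags):

    yt-dlp -x --audio-format mp3 'https://music.youtube.com/watch?v=VIDEO_ID'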

[–] pe1uca@lemmy.pe1uca.dev 2 points 7 months ago

Maybe FreshRSS with some extensions?
I saw a recent commit to fire an event when saving a favorite, so you could probably write an extension that sends the link to something like ArchiveBox for the pages you favorite.

I've only fiddled with an already-created extension, but they seem fairly simple, so creating your own should be easy.
Of course you can inject JS, so you could make it as complex as you want.
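
As a sketch of that last hop: once the extension has the favorited URL, handing it to ArchiveBox is a one-liner with its CLI (assuming a local install; the URL is just an example):

    archivebox add 'https://example.com/some-article'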

[–] pe1uca@lemmy.pe1uca.dev 4 points 7 months ago

For Invidious in FreshRSS, I use the YouTube extension to get the embedded video player. You just need to update this part of the code: https://github.com/FreshRSS/Extensions/blob/master/xExtension-YouTube/extension.php#L153-L163
It's easy, just replace it with this:

    public function getHtmlContentForLink(FreshRSS_Entry $entry, string $link): string
    {
        $domain = 'www.youtube.com';
        if ($this->useNoCookie) {
            $domain = 'www.youtube-nocookie.com';
        }
        // Override the domain with your Invidious instance
        // (this makes the no-cookie branch above irrelevant).
        $domain = 'invidious.personal.com';
        $params = 'quality=dash';
        // Rewrite the watch URL into an embed URL on the Invidious domain.
        $url = str_replace('//www.youtube.com/watch?v=', '//'.$domain.'/embed/', $link);
        $url = str_replace('http://', 'https://', $url);
        $url = $url . '?' . $params;

        return $this->getHtml($entry, $url);
    }

The only changes are overriding $domain with 'invidious.personal.com' and adding the quality=dash parameter.

Seems there's also https://github.com/tunbridgep/freshrss-invidious, but I haven't tried it.

[–] pe1uca@lemmy.pe1uca.dev 8 points 7 months ago (7 children)

That's a weird read considering I had to move to Wayland because X11 had severe screen tearing. I would have guessed Wayland had better support.

[–] pe1uca@lemmy.pe1uca.dev 5 points 7 months ago (5 children)

I don't think there are services like that, since usually this means deploying and destroying an instance, which takes a few minutes (if you just turn the instance off, you still get billed).
Probably the best option would be to keep a snapshot, which costs way less than the actual instance, and create an instance from it each day or so to run on the images added since it was last destroyed; a sketch of that cycle is below.
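
Something like this with DigitalOcean's doctl, purely as an illustration (other providers have equivalent CLIs; the name, size, and snapshot ID here are made up):

    # spin up a worker from the latest snapshot, run the job, tear it down
    doctl compute droplet create worker --image <snapshot-id> --size s-4vcpu-8gb --region nyc1 --wait
    # ... run the processing over ssh ...
    doctl compute droplet delete worker --force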

This is kind of what I do with my media collection: I process it on my main machine with a GPU, and then just serve it from a low-power one with Jellyfin.

 

I was trying to debug an issue I have connecting to a NAS, so I was checking the UFW logs and found out a lot of connections from my Chromecast HD (Android TV) are being blocked on different ports via the local IP.

Sometimes I use Jellyfin, but that's over Tailscale, so there shouldn't be any traffic over the local IP, just over Tailscale's IP.
And there shouldn't have been any traffic right now anyway, since I wasn't using it and didn't have Tailscale on.

The ports seem random; sometimes the same one is tried twice back to back, but then another random port is attempted.

After seeing this I enabled UFW on my daily machine, and the same type of logs showed up.

So, do you guys know what could be happening here?
Why is the Chromecast trying to access random ports on devices on the same network?

 

I've only ever used ufw, and just now I had to run this command to fix an issue with Docker:

    sudo iptables -I INPUT -i docker0 -j ACCEPT

I don't know why I had to run this to make curl work.

So, what exactly did I just do?
This is behind my home router, which already rejects input from WAN, so I'm guessing it's fine, right?

I'm asking since the image I'm running at home was previously running on a VPS, which has a public IP, and this makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learned Docker bypasses this if you publish ports like 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
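
For reference, the difference is just where the published port binds (the image name is a placeholder):

    # docker inserts its own iptables rules, so this bypasses ufw
    docker run -p 8080:8080 some-image
    # this binds to loopback only, so it's unreachable from other machines
    docker run -p 127.0.0.1:8080:8080 some-image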

 

I mean, the price of the product is the same, and I'm taking a loan for the duration of the credit but paying no interest?
What's the catch?
I can keep my money earning a bit of interest instead of handing it over right away (e.g. 12 monthly payments of $100 instead of $1,200 upfront leaves most of that money in savings for months), all without increasing the price of what I was already planning to buy. When or why wouldn't I choose 0% credit?

 

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9GB each. And then there are some at the same resolution that are 2.5GB.
I even have some in 1080p which are just 2GB.
I only have two movies in 4K: one is 3.4GB and the other is 36.2GB (I can't really tell the difference in detail since I don't have a 4K display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671MB each, the other set 191MB each (although with these the quality difference is noticeable while playing them, as opposed to the other files, where I only compared by extracting some frames).

What would you do? what's your target size for movies and series? What bitrate do you go for in which codec?

Not sure if it's considered blasphemy around here to talk about compromising quality for size, hehe, but I don't know where else to ask. I was planning on using these settings in ffmpeg, what do you think?
I tried it on an anime episode at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.

    ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
        -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 \
        -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0
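
If you want something more objective than eyeballing extracted frames, a VMAF score between output and source is one option (assuming your ffmpeg build includes libvmaf; the first input is the distorted file, the second the reference):

    ffmpeg -i output.mp4 -i 01.mp4 -lavfi libvmaf -f null -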

 

cross-posted from: https://lemmy.pe1uca.dev/post/1137911

I need to help audit a project from another team.
I got pointers on what's expected to be checked, but I don't have any templates for what an audit report should contain, which also means I'm not sure what the usual process is for conducting an internal audit.
I mean, I might as well read the whole repo, but maybe that's too much?

Any help or pointers on what I need to investigate to get started would be great!

 


cross-posted from: https://lemmy.pe1uca.dev/post/1136490

I'm checking this mini pc https://www.acemagic.com/products/acemagic-ad08-intel-core-i9-11900h-mini-pc

It says the M.2 and SATA ports are limited to 2TB, but I can't imagine why that would be the case.
Could there be a limit on the motherboard? On the CPU?
If, as seems most likely, the limit is imposed in software (Windows), it probably won't matter, since I'm planning to switch to Linux.

What I want to avoid is buying it and being unable to use an 8TB drive.

 


I started tinkering with frigate and saw the option to use a coral ai device to process the video feeds for object recognition.

So I started checking a bit more into what else could be done with the device, and everything listed on the site is related to human recognition (poses, faces, body parts) or voice recognition.

Somewhere I read that Stable Diffusion or LLMs are not an option, since they require a lot of RAM, which these kinds of devices lack.

What other good/interesting uses can these devices have? What are some of your deployed services using these devices for?

 

I have a few servers running some services using a custom domain I bought some time ago.
Each server has its own instance of caddy to handle a reverse proxy.
Only one of those servers can actually do the DNS challenge to generate the certificates, so I was manually copying the certificates to every other caddy instance that needed them and using the tls directive for that domain to read the files.

I just found there are two ways to automate this: shared storage, and on-demand certificates.
So here's what I did to make it work with each one; hope someone finds it useful.

Shared storage

This one is in theory straightforward: you just mount a folder which all caddy instances will use.
I went the sshfs route, so I created a user and added ACLs to allow the local caddy user and the new remote user to write to the storage.

# default ACLs so new files stay accessible to the local caddy user
setfacl -Rdm u:caddy:rwx,d:u:caddy:rwX,o:--- ./
# same for the remote user, plus access to the existing files
setfacl -Rdm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./
setfacl -Rm u:remote_user:rwx,d:u:remote_user:rwX,o:--- ./

Then, on the server which will use the data, I just mounted it:

remote_user@<main_caddy_host>:/path/to/caddy/storage /path/to/local/storage fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/remote_user/.ssh/id_ed25519,allow_other,default_permissions,uid=caddy,gid=caddy 0 0

And included the mount as the caddy storage

{
	storage file_system /path/to/local/storage
}
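
Since the fstab entry uses noauto with a systemd automount, the first access should trigger the mount, so a quick check is just listing the path:

    ls /path/to/local/storage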

On demand

This one requires a separate service, since caddy can't properly serve the file needed by the get_certificate directive.

We could run a service which reads the key and crt files and combines them directly on the main caddy instance, but I chose to serve the files from there and combine them on the server which needs them.

So, in my main caddy instance I have the following.
It restricts access to just my tailscale IP, and includes the /ask endpoint required by the on-demand configuration:

@certificate host cert.localhost
handle @certificate {
	@blocked not remote_ip <requester_ip>
	respond @blocked "Denied" 403

	@ask {
		path /ask*
		query domain=my.domain domain=jellyfin.my.domain
	}
	respond @ask "" 200

	@askDenied `path('/ask*')`
	respond @askDenied "" 404

	root * /path/to/certs
	@crt {
		path /cert.crt
	}
	handle @crt {
		rewrite * /wildcard_.my.domain.crt
		file_server
	}

	@key {
		path /cert.key
	}
	handle @key {
		rewrite * /wildcard_.my.domain.key
		file_server
	}
}

Then on the server which will use the certs, I run a small service for caddy to make the HTTP requests against.
This also shows another way to handle the /ask endpoint: wildcard certificates are not matched with *, caddy actually asks for each subdomain individually, so the example above can't handle something like domain=*.my.domain.

package main

import (
	"io"
	"net/http"
	"strings"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Approve any subdomain of my.domain, so wildcard requests work.
	e.GET("/ask", func(c echo.Context) error {
		if domain := c.QueryParam("domain"); strings.HasSuffix(domain, "my.domain") {
			return c.String(http.StatusOK, domain)
		}
		return c.String(http.StatusNotFound, "")
	})

	// Fetch the crt and key from the main caddy instance and serve
	// them concatenated, which is what get_certificate expects.
	e.GET("/cert.pem", func(c echo.Context) error {
		crtResponse, err := http.Get("https://cert.localhost/cert.crt")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer crtResponse.Body.Close()
		crtBody, err := io.ReadAll(crtResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		keyResponse, err := http.Get("https://cert.localhost/cert.key")
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}
		defer keyResponse.Body.Close()
		keyBody, err := io.ReadAll(keyResponse.Body)
		if err != nil {
			return c.String(http.StatusInternalServerError, "")
		}

		return c.String(http.StatusOK, string(crtBody)+string(keyBody))
	})

	e.Logger.Fatal(e.Start(":1323"))
}
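
Before pointing caddy at it, the helper can be sanity-checked with plain curl:

    curl -i 'http://localhost:1323/ask?domain=jellyfin.my.domain'   # expect 200
    curl -i 'http://localhost:1323/ask?domain=other.example.com'    # expect 404
    curl -s 'http://localhost:1323/cert.pem' | head                 # combined crt + key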

And in the Caddyfile, request the certificate from this service:

{
	on_demand_tls {
		ask http://localhost:1323/ask
	}
}

*.my.domain {
	tls {
		get_certificate http http://localhost:1323/cert.pem
	}
}
 

Seems the SSD sometimes heats up and its content disappears from the device, mostly on my router, sometimes on my laptop.
Do you know what I should configure to put the drive to sleep, or something similar, to reduce the heat?

I'm starting my datahoarder journey now that I replaced my internal NVMe SSD.

It's just a 500GB drive which I attached to my D-Link router running OpenWrt. I configured it with Samba, and everything worked fine when I finished the setup. I just have some media files on it, which I read from Jellyfin.

After a few days the content disappears. It's not a connection problem with the shared drive, since when I ssh into the router the files aren't shown either.
I need to physically remove the drive and connect it again.
When I do this I notice the drive is somewhat hot. Not scalding, just hot.

I also tried connecting it directly to my laptop running Ubuntu. There the drive sometimes remains cool and the data shows up without issue for days.
But sometimes it also heats up and the data disappears (even when the data was not being used, i.e. I hadn't configured Jellyfin to read from the drive).

I'm not sure how to make the SSD sleep for periods of time, or throttle it so it can cool off.
Any suggestion?

 

I started fiddling with my alias service and started wondering what approach other people might take.
Not necessarily the best option but what do you prefer? What are the pros and cons you see with each option?

Currently I'm using anonaddy and proton, so I have a few options to create aliases.

  • The limited shared-domain aliases (from my current subscription level).
    Probably the only option to not be tracked, if only it were unlimited; I'd just have to pay more for the service.
  • Unlimited aliases with a subdomain of the shared domain
    For example: baked6863.addy.io
  • Unlimited aliases with custom domain.
  • Unlimited aliases with subdomain in custom domain.
    This is different from the one above since the domain could be used for different things, not dedicated to email.
  • Catch-all with addy.
    The downside I've read is that people could spam any random word, and if the catch-all is later disabled, anyone who had one of those incorrect aliases wouldn't be able to reach me anymore.
  • Catch-all with proton.
    Proton limits how many real email addresses you have, so when you receive an email at an alias and want to reply, you'll be doing it from the catch-all address instead of the alias.

What do you think?
What option would you choose?
