pe1uca

joined 2 years ago
[–] pe1uca@lemmy.pe1uca.dev 6 points 1 year ago (1 children)

There's a rumor going around that some sites throttle their performance on non-Chromium browsers. I'm not sure if that also applies to ones like Opera or Brave, which are also Chromium-based but, I think, more heavily customized.

Another thing could be how you use each browser: how many tabs did you have open when you tried them? How many extensions? What other programs did you have open?

[–] pe1uca@lemmy.pe1uca.dev 12 points 1 year ago (2 children)

I'm just annoyed by the region issues; you'll get pretty biased results depending on which region you select.
If you search for something specific to one region while another is selected, you'll sometimes find empty results, which shows you won't get relevant results for a search if you don't properly select the region.

This is probably more obvious with non-technical searches. For example, my default region is canada-en, and if I try "instituto nacional electoral" I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and news. To me this means Kagi hides results from other regions instead of just boosting the selected region's ones.

[–] pe1uca@lemmy.pe1uca.dev 1 points 1 year ago

The problem is you want to get a high-level answer from a low-parameter model. It doesn't matter how much you switch models if you stick to low-parameter ones; you need to use big ones like those running in their data centers.

I've used 13B models with somewhat good results; I only tried Mixtral 8x7B once and the responses it gave were amazing.
But this was using llama.cpp, offloading some layers to the GPU, and just the base model, no training.

Also, how did you connect the LLM to your notes? Did you train a LoRA? Use embeddings? Or were your notes just fed in via the context?
IIRC the last two are basically the same and are limited by what your model accepts, usually 2048 tokens, which might be enough for one chat about a note, but not enough for large amounts of notes.
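For the context route, a quick sanity check is to estimate whether the notes even fit before handing them to the model. A minimal sketch, assuming llama.cpp's CLI and a crude four-characters-per-token heuristic (file names and the model path are placeholders):

```shell
# Toy stand-ins for real notes; the point is the size check.
printf 'meeting notes from monday\n' > notes.md
printf 'what did we decide?\n' > question.txt
cat notes.md question.txt > prompt.txt
# ~4 characters per token is a crude heuristic, not a real tokenizer.
tokens=$(( $(wc -c < prompt.txt) / 4 ))
echo "approx prompt tokens: $tokens"
# If it fits in -c 2048, feed the whole file as context with llama.cpp, e.g.:
#   ./llama-cli -m mistral-7b.Q4_K_M.gguf -c 2048 -ngl 20 -f prompt.txt
```

Anything past the context limit simply never reaches the model, which is the limitation mentioned above.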

[–] pe1uca@lemmy.pe1uca.dev 3 points 1 year ago (2 children)

It's regarding appropriate handling of user information.
I'm not sure it includes PII; basically it's a ticketing system.
The pointers I got are: the software should store the data securely and reliably, and it should be queryable so you can understand the updates the data has gone through.

[–] pe1uca@lemmy.pe1uca.dev 1 points 1 year ago

it just seems to redirect to an otherwise Internet accessible page.

I'm using Authelia with Caddy, but I'm guessing it could be similar: you need to configure the reverse proxy to expect the token the authentication service adds to each request and to redirect to sign-in when it's missing. This way all requests to the site are protected (of course, you'll need to be aware of APIs and similar non-UI requests).
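As a rough sketch of what that looks like in a Caddyfile (hostnames, ports, and the Authelia endpoint are from my memory of the docs, so double-check them against your setup):

```caddyfile
app.example.com {
    # Every request is checked against Authelia first;
    # unauthenticated users get redirected to the sign-in page.
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email
    }
    reverse_proxy app:8080
}
```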

I have to make an Internet accessible subdomain.

That's true, but you don't have to expose the actual services you're running. An easy solution would be to name it something else, especially if the people using it trust you.
Another would be to create a wildcard certificate; this way only you and those you share your site with will know the actual subdomain being used.
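For the wildcard route, a sketch in Caddy (it needs a DNS provider plugin for the ACME DNS challenge; the cloudflare module, token variable, and hostnames here are assumptions, swap in your own provider):

```caddyfile
*.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    @svc host secret-name.example.com
    handle @svc {
        reverse_proxy app:8080
    }
    # Anything else under the wildcard gets nothing useful.
    respond 404
}
```

Since the certificate only names `*.example.com`, the real subdomain never appears in certificate transparency logs.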

My advice comes from my personal setup, which is still all internal; I remotely access it via Tailscale. So, do you really need to make your site public to the internet?
Having it public is only worth it if you need to share it with multiple people; for just you or a few people it's not worth the hassle.

[–] pe1uca@lemmy.pe1uca.dev 2 points 1 year ago (1 children)

I've read advice against buying used storage unless you don't mind a higher risk of losing the data on it.

[–] pe1uca@lemmy.pe1uca.dev 1 points 1 year ago* (last edited 1 year ago)

Yesterday I started looking for mini pcs and found this post https://www.reddit.com/r/MiniPCs/comments/1afzkt5/2024_general_mini_pc_guide_usa/

They shared this link, which contains data on 2.8k machines; it helped me compare some of the options I was looking at and find new ones.
https://docs.google.com/spreadsheets/d/1SWqLJ6tGmYHzqGaa4RZs54iw7C1uLcTU_rLTRHTOzaA/edit

Sadly it doesn't contain data about the ThinkPad, but I might as well share it in case you're willing to consider other brands.

Edit: Oh, wait, I was thinking about a ThinkCentre, not a ThinkPad :P
Well, I'll leave this around in case someone finds it useful, hehe.

[–] pe1uca@lemmy.pe1uca.dev 60 points 1 year ago

It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you

As if it didn't happen already

[–] pe1uca@lemmy.pe1uca.dev 1 points 1 year ago

I'd say it depends on your threat model, it could be a valid option.
Still, how are you going to manage them? A password manager? Then you'd be facing the same question: should I keep all my accounts in a single password manager?

Maybe what you can do is use aliases; that way you never expose the actual account used to see your inbox, only the addresses used to send you emails.
But I tried this and some service providers don't handle custom email domains well (especially government and banking, which are slow to adopt new technology).

[–] pe1uca@lemmy.pe1uca.dev 5 points 1 year ago

I sort of did this for some movies to lessen the burden of on-the-fly encoding, since I already know what formats my devices support.
Just something to keep in mind: my devices only support HD, so I had a lot of wiggle room on quality.

Here's the command Jellyfin was running, which helped me start figuring out what I needed.

/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -canvas_size 1920x1080 -i file:"/mnt/peliculas/Harry-Potter/3.hp.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:0 -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 5605745 -bufsize 11211490 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -sc_threshold:v:0 0 -filter_complex "[0:3]scale=s=1920x1080:flags=fast_bilinear[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p[main];[main][sub]overlay=eof_action=endall:shortest=1:repeatlast=0" -start_at_zero -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693.m3u8"

From there I played around with several options and ended up with this command (it has several -map options since I was actually combining several files into one):

ffmpeg -y -threads 4 \
-init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
-i './Harry Potter/3.hp.mkv' \
-map 0:v:0 -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 \
-map 0:a:0 -map 0:a:1 \
-fps_mode passthrough -f mp4 ./hp-output/3.hp.mix.mp4

If you want to know other values for each option you can run ffmpeg -h encoder=h264_nvenc.

I don't have at hand all the sources where I learned what each option does, but here's what to keep in mind, to the best of my memory.
All of these comments are from the point of view of h264 with NVENC.
I assume you know how the video and stream number selectors work in ffmpeg.

  • Using GPU hardware acceleration produces a lower quality image at the same size/preset; it just takes less time to process.
  • You need to adjust the -preset, -profile and -level options to match your quality and processing-time needs.
  • -vf was to change the pixel format my original files had to a more common one.
  • The combination of the -rc and -cq options is what controls the variable bitrate (you have to set -b:v to zero, otherwise that value is used as a constant bitrate).

Try different combinations on small chunks of your files.
IIRC the options you need are -ss, -t and/or -to, to process just a chunk of the file instead of waiting hours for a full movie.
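For example, something along these lines (file names, timestamps, and the chunk length are placeholders) encodes only a 30-second sample starting two minutes in, so you can compare -cq values quickly:

```shell
# Cut a 30 s chunk starting at 00:02:00 and encode it with the
# same options as the full run; repeat with different -cq values
# and compare the resulting samples before committing to hours
# of processing.
ffmpeg -y -ss 00:02:00 -t 30 -i input.mkv \
  -map 0:v:0 -c:v h264_nvenc -preset:v p7 \
  -rc:v vbr -cq:v 26 -b:v 0 \
  -map 0:a:0 -c:a copy \
  sample-cq26.mp4
```

Putting -ss before -i makes the seek fast, since ffmpeg jumps to the keyframe instead of decoding everything up to that point.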


Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format

There's no need for a GPU or a big CPU to run these commands; the only cost is time.
Since we're talking about preprocessing the library, you don't need real-time encoding: your hardware can take one or two hours to process a 30-minute video and you'll still have the result, so you only need patience.

You can see Jellyfin uses -preset veryfast while I use -preset p7, which the documentation marks as slowest (best quality).
This is because Jellyfin only processes the video while you're watching it, so it needs to produce frames faster than your devices display them.
My command has no such constraint; I just run it, and whenever it finishes I'll have the files ready for when I want to watch them, with no need for an additional transcode.

[–] pe1uca@lemmy.pe1uca.dev 6 points 1 year ago (1 children)

I think you have two options:

  1. Use a reverse proxy; that way you can even have a different domain for each instead of a path. The configuration for this depends on your reverse proxy.
  2. You can change the config of your Pi-hole in /etc/lighttpd/conf-available/15-pihole-admin.conf. In there you can see the base URL being used and the other redirects it has. Just remember to check this file after each update, since it warns you it can be overwritten by that process.
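For option 1, a minimal sketch of what that looks like in a Caddyfile (hostnames and upstream ports are placeholders; other reverse proxies have equivalents):

```caddyfile
pihole.example.com {
    reverse_proxy pihole:80
}
files.example.com {
    reverse_proxy filebrowser:8080
}
```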
[–] pe1uca@lemmy.pe1uca.dev 10 points 1 year ago (6 children)

Are you sure your IP is only used by you?
AFAIK ISPs sometimes bundle the traffic of several users behind a few public IP addresses (CGNAT), so maybe the things you see are just someone else in your area going out from the same IP your ISP assigned.

But I'm not actually sure if this is how it works, I might be wrong.
