btrfs


founded 1 year ago
1
 
 

Josef Bacik, who was working on this, said:

I fixed up all your review comments, but yes we don't care about this internally anymore so it's been de-prioritized. I have to rebase onto the new stuff, re-run tests, fix any bugs that may have creeped in, but the current code addressed all of your comments. Once I get time to get back to this you'll have a new version in your inbox, but that may be some time.

(Note: This was back in April.)

2
submitted 1 month ago* (last edited 3 weeks ago) by gpstarman to c/btrfs@lemmy.ml
 
 

Let's say I made 10 snapshots on top of the base.

Now, can I delete snap no. 5? Will the snaps after 5 be affected?

Solved

Yes, you can delete a snapshot from the middle of the series. The underlying data won't be deleted until every snapshot (reference point) that still references it is deleted.

Note: if you delete the original file and also delete all the snapshots taken while the file still existed, the file is permanently gone.
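The answer can be sketched with the btrfs CLI; the mount point and snapshot names here are hypothetical:

```shell
# Hypothetical layout: snapshots snap1..snap10 of a subvolume, kept under
# /mnt/.snapshots. Deleting the 5th one leaves the others fully intact,
# because each snapshot independently references the shared extents:
sudo btrfs subvolume delete /mnt/.snapshots/snap5

# Snapshots 6-10 still resolve; extents are only freed once no snapshot
# (and no live subvolume) references them any more:
sudo btrfs subvolume list /mnt
```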

3
submitted 1 month ago* (last edited 1 month ago) by gpstarman to c/btrfs@lemmy.ml
 
 

Can I change the location of BTRFS snapshots? I installed CachyOS, and it automatically set up BTRFS subvolumes.

This is the layout 👇

ID   gen   parent  top level  path
258  1773  5       5          @root
259  1601  5       5          @srv
260  1789  5       5          @cache
261  1785  5       5          @tmp
262  1797  5       5          @log
263  26    377     377        var/lib/portables
264  26    377     377        var/lib/machines
265  1791  377     377        .snapshots
266  1427  378     378        @home/.snapshots
377  1797  5       5          @
378  1797  5       5          @home

According to Arch wiki https://wiki.archlinux.org/title/Snapper#Creating_a_new_configuration

Create a subvolume at /path/to/subvolume/.snapshots where future snapshots for this configuration will be stored. A snapshot's path is /path/to/subvolume/.snapshots/#/snapshot, where # is the snapshot number.

From which I understand that if I create a snapshot of /home (@home), it will be saved in /home/.snapshots (@home/.snapshots).

So CachyOS is configured to save snapshots to a separate subvolume.

But what I want to do is: instead of just saving them in a separate subvolume, I want the snapshots to be saved on a different btrfs partition. Maybe @home/.snapshots, but on a different partition.

Is that possible?
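Worth noting: a btrfs snapshot is a subvolume that shares extents with its source, so it has to live on the same filesystem as the subvolume it snapshots. Getting copies onto another partition is typically done with btrfs send/receive instead. A minimal sketch (device, mount point and snapshot name are placeholders):

```shell
# Take a read-only snapshot on the source filesystem (send requires read-only):
sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-backup

# Mount the second btrfs partition and ship the snapshot over to it:
sudo mount /dev/sdb1 /mnt/backup
sudo btrfs send /home/.snapshots/home-backup | sudo btrfs receive /mnt/backup
```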

4
 
 

Hi! I'm learning how to use btrfs and I need some advice.

On one of my desktops, I made the mistake of creating 2 partitions, one for / (root) and one for /home. Both are btrfs. I didn't know that I could use subvolumes so that they could share the same physical space.

My question: how can I merge the root and home btrfs partitions into a single partition that uses btrfs subvolumes?

I'm looking for something like that:

  • Partition1 (btrfs)
    • subvolume 1: @root (mounted to /)
    • subvolume 2: @home (mounted to /home)
  • Partition 2, 3, 4...

My current setup:

  • 1 physical hard drive (1 TB), shown as sda below
  • The partitions I want to merge are sda7 and sda8
  • That computer is an iMac also running MacOS so it has a few other partitions that I should not touch
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 931,5G  0 disk 
├─sda1   8:1    0   200M  0 part /boot/efi
├─sda2   8:2    0 371,1G  0 part 
├─sda3   8:3    0 619,9M  0 part 
├─sda4   8:4    0   600M  0 part 
├─sda5   8:5    0  1023M  0 part 
├─sda7   8:7    0 422,9G  0 part /home
└─sda8   8:8    0 135,1G  0 part /
sdb      8:16   1     0B  0 disk 
sr0     11:0    1  1024M  0 rom  
$ blkid
/dev/sda4: UUID="d970eea2-142b-3f1c-9650-5e496d1e9b4b" BLOCK_SIZE="4096" LABEL="Linux HFS+ ESP" TYPE="hfsplus" PARTLABEL="Linux HFS+ ESP" PARTUUID="eab00592-b96d-4ecb-b2e9-816c95eaf860"
/dev/sda2: UUID="6a26963c-eabb-3e42-9e8d-8677a8282b61" BLOCK_SIZE="4096" LABEL="DD Macintosh" TYPE="hfsplus" PARTLABEL="DD Mac" PARTUUID="8257316c-1fd7-4885-bf2b-7e99557acd85"
/dev/sda7: UUID="22f5e59e-8509-484a-92d5-e7dc03bb70cd" UUID_SUB="78a998f5-db55-4a31-9506-afe548ec8d5e" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="Mint Home" PARTUUID="20b6c31d-5e2d-416f-beb3-faa295af67df"
/dev/sda5: UUID="587b0093-4b64-468a-9a01-b933630d184b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a47a3fbf-534a-435d-8916-0f83edebf296"
/dev/sda3: UUID="d0c171e8-572d-39f9-8bdc-38f33744a19a" BLOCK_SIZE="4096" LABEL="Recovery HD" TYPE="hfsplus" PARTLABEL="Recovery HD" PARTUUID="c9d0673b-bb2e-4322-9bf9-c661f7de6856"
/dev/sda1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="d30954cb-b9b6-40fa-9202-a18cf146f7df"
/dev/sda8: UUID="7027382a-4369-4276-b916-9997c1007e5b" UUID_SUB="3e516464-dee1-4bda-9621-29591d54dc2d" BLOCK_SIZE="4096" TYPE="btrfs" PARTLABEL="Mint root" PARTUUID="f2fdb8cc-54fd-46c8-bf1f-85954c1dc363"
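Btrfs can't merge two filesystems in place, so the usual route is: copy one filesystem's contents into a subvolume on the other, then delete the emptied partition and grow the survivor. A rough outline from a live USB, using the device names from the lsblk output above (double-check everything and back up first; since sda8 is only ~135G while sda7 is ~423G, it is likely easier to keep sda7 and move the root into it, as sketched here):

```shell
# Mount the surviving filesystem (old /home) and the old root:
mkdir -p /mnt /mnt2
mount /dev/sda7 /mnt
mount /dev/sda8 /mnt2

# Create a root subvolume on the survivor and copy the old root into it
# (a plain copy; reflinks don't work across filesystems):
btrfs subvolume create /mnt/@
cp -a /mnt2/. /mnt/@/

# Remaining steps, in outline:
#  - move the existing /home contents into an @home subvolume on /mnt
#  - edit /mnt/@/etc/fstab: mount sda7 with subvol=@ at / and subvol=@home at /home
#  - point the bootloader at the new root
#  - delete sda8, grow sda7 over the freed space, then:
#      btrfs filesystem resize max /mnt
```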
5
 
 

Hi,

there are mostly stabilization, refactoring and cleanup changes. The rest are minor performance optimizations due to caching or lock contention reduction, and a few notable fixes.

Please pull, thanks.

Performance improvements:

  • minor speedup in logging when repeatedly allocated structure is preallocated only once, improves latency and decreases lock contention

  • minor throughput increase (+6%), reduced lock contention after clearing delayed allocation bits, applies to several common workload types

  • skip full quota rescan if a new relation is added in the same transaction

Fixes:

  • zstd fix for inline compressed file in subpage mode, updated version from the 6.8 time

  • proper qgroup inheritance ioctl parameter validation

  • more fiemap followup fixes after reduced locking done in 6.8

    • fix race when detecting delalloc ranges

Core changes:

  • more debugging code

    • added assertions for a very rare crash in raid56 calculation
    • tree-checker dumps page state to give more insights into possible reference counting issues
  • add sysfs knob for checksum calculation offloading; for now enabled under DEBUG only, to help determine a good heuristic for deciding between offloaded and synchronous checksumming, which depends on various factors (block group profile, device speed) and is not as clear as initially thought (checksum type)

  • error handling improvements, added assertions

  • more page to folio conversion (defrag, truncate), cached size and shift

  • preparation for more fine grained locking of sectors in subpage mode

  • cleanups and refactoring

    • include cleanups, forward declarations
    • pointer-to-structure helpers
    • redundant argument removals
    • removed unused code
    • slab cache updates, last use of SLAB_MEM_SPREAD removed
6
 
 

cross-posted from: https://programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot, inspired by Pid Eins' blog. I'll be using pam_mount for /home/user. I need to check the integrity of all partitions.

I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged '/' trying out timeshift, which was my fault.

Should I use zfs/btrfs for /home/user? As for root, I'm considering luks+(zfs/btrfs) so it can be restored to a blank state.
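For the "restorable to blank state" part, one common btrfs pattern is to snapshot a pristine root right after install and roll back to it by re-snapshotting. A hedged sketch (device and subvolume names are placeholders, not a recommended final layout):

```shell
# LUKS + btrfs with a pristine "blank" snapshot of the root subvolume:
cryptsetup luksFormat /dev/nvme0n1p2
cryptsetup open /dev/nvme0n1p2 cryptroot
mkfs.btrfs /dev/mapper/cryptroot
mount /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/@root
btrfs subvolume snapshot -r /mnt/@root /mnt/@blank   # pristine state

# Restoring to blank later (e.g. from an initramfs hook or live USB):
#   btrfs subvolume delete /mnt/@root
#   btrfs subvolume snapshot /mnt/@blank /mnt/@root
```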

7
 
 

Hello,

I have an issue with btrfs that I can't really make heads or tails of. I thought I'd try Lemmy even if the community is small here (I refuse to go back to the hard-R place).

Recently, my machine seemingly froze, after which I force-reset it. After booting back up, the journal shows that the scrub failed during the last boot. One interesting note: the tty1 console showed that the kernel had a SIGILL(?), but I didn't catch it, as switching VTs one more time froze the machine completely after switching back to X.

Kernel: Archlinux Linux novo 6.6.8-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 21 Dec 2023 19:01:01 +0000 x86_64 GNU/Linux

I only have a vague idea of what chunk maps are, and I have no idea how to figure out whether I lost data.

Running btrfs inspect-internal logical-resolve <logical> / gives ENOENT for every one of the unfound chunk-map logicals in the output.
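For reference, logical-resolve takes the logical address as an argument, and dump-tree can show which logical ranges the chunk tree actually covers; the address below is one of the failing ones from the kernel log in this post:

```shell
# Map a logical address back to file paths; ENOENT here means no file
# currently references that address (it may be metadata or free space):
sudo btrfs inspect-internal logical-resolve 173215744 /

# Dump the chunk tree to see which logical ranges have chunk maps at all:
sudo btrfs inspect-internal dump-tree -t chunk /dev/nvme0n1p4
```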

Any help highly appreciated :)

The dmesg output follows:

Jan 02 02:21:19 novo.vcpu kernel: BTRFS info (device nvme0n1p4): balance: start -dusage=30 -musage=30 -susage=30
Jan 02 02:21:19 novo.vcpu kernel: BTRFS info (device nvme0n1p4): relocating block group 2327056482304 flags system
Jan 02 02:21:19 novo.vcpu kernel: BTRFS info (device nvme0n1p4): found 5 extents, stage: move data extents
Jan 02 02:21:19 novo.vcpu kernel: BTRFS info (device nvme0n1p4): scrub: started on devid 1
Jan 02 02:21:19 novo.vcpu kernel: BTRFS info (device nvme0n1p4): balance: ended with status: 0
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173215744 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173215744 length 61440
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173215744 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173215744 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173219840 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173219840 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173223936 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173223936 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173228032 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173228032 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173232128 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173232128 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173236224 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173236224 length 4096
Jan 02 02:21:19 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173240320 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173240320 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173244416 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173244416 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173248512 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173248512 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173252608 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173252608 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173256704 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173256704 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173260800 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173260800 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173264896 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173264896 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173268992 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173268992 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173273088 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 173273088 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 173211648 on dev /dev/nvme0n1p4 physical 173211648
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297397248 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297397248 length 32768
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297397248 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297397248 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297401344 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297401344 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297405440 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297405440 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297409536 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297409536 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297413632 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297413632 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297417728 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297417728 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297421824 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297421824 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297425920 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 2297425920 length 4096
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:20 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 2297364480 on dev /dev/nvme0n1p4 physical 2297364480
Jan 02 02:21:21 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 4453298176 length 4096
Jan 02 02:21:21 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 4453298176 length 4096
Jan 02 02:21:21 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 4453298176 length 4096
Jan 02 02:21:21 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 4453298176 length 4096
Jan 02 02:21:21 novo.vcpu kernel: BTRFS error (device nvme0n1p4): fixed up error at logical 4453236736 on dev /dev/nvme0n1p4 physical 4453236736
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618599424 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618599424 length 12288
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618599424 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618599424 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618603520 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618603520 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618607616 length 4096
Jan 02 02:21:22 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 6618607616 length 4096
Jan 02 02:21:23 novo.vcpu kernel: kauditd_printk_skb: 9 callbacks suppressed
Jan 02 02:21:23 novo.vcpu kernel: audit: type=1100 audit(1704154883.589:1799): pid=555000 uid=1000 auid=1000 ses=1 msg='op=PAM:unix_chkpwd acct="vytautas" exe="/usr/bin/unix_chkpwd" hostname=? addr=? terminal=? res=success'
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749236224 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749236224 length 16384
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749236224 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749236224 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749240320 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749240320 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749244416 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749244416 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749248512 length 4096
Jan 02 02:21:23 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 8749248512 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910683136 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910683136 length 12288
Jan 02 02:21:24 novo.vcpu kernel: ------------[ cut here ]------------
Jan 02 02:21:24 novo.vcpu kernel: kernel BUG at include/linux/scatterlist.h:115!
Jan 02 02:21:24 novo.vcpu kernel: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
Jan 02 02:21:24 novo.vcpu kernel: CPU: 6 PID: 554890 Comm: btrfs Tainted: P           OE      6.6.8-arch1-1 #1 2ffcc416f976199fcae9446e8159d64f5aa7b1db
Jan 02 02:21:24 novo.vcpu kernel: Hardware name: LENOVO 81Q9/LNVNB161216, BIOS AUCN61WW 04/19/2022
Jan 02 02:21:24 novo.vcpu kernel: RIP: 0010:__blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel: Code: 24 4c 8b 4c 24 08 84 c0 4c 8b 5c 24 10 44 8b 44 24 18 49 8b 07 0f 84 fd fb ff ff 4c 8b 54 24 30 48 8b 54 24 38 e9 34 ff ff ff <0f> 0b 0f 0b e8 3b d8 7d 00 66 66 2e 0f 1f 84 00 00 00 00 00 90 90
Jan 02 02:21:24 novo.vcpu kernel: RSP: 0000:ffffc9000680b858 EFLAGS: 00010202
Jan 02 02:21:24 novo.vcpu kernel: RAX: ffff888169d5f300 RBX: 0000000000000000 RCX: d94b4edc3cf35329
Jan 02 02:21:24 novo.vcpu kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff888169d5f300
Jan 02 02:21:24 novo.vcpu kernel: RBP: 00000002b45f0000 R08: 0000000000000000 R09: 0000000000000080
Jan 02 02:21:24 novo.vcpu kernel: R10: ffff8881d18886f0 R11: ffff888111586900 R12: 0000000000001000
Jan 02 02:21:24 novo.vcpu kernel: R13: 0000000000005000 R14: ffff8881d18886f0 R15: ffffc9000680b930
Jan 02 02:21:24 novo.vcpu kernel: FS:  00007fad5b87a6c0(0000) GS:ffff88848ad80000(0000) knlGS:0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 02 02:21:24 novo.vcpu kernel: CR2: 000055c11731ed60 CR3: 0000000043c4c003 CR4: 0000000000770ee0
Jan 02 02:21:24 novo.vcpu kernel: PKRU: 55555554
Jan 02 02:21:24 novo.vcpu kernel: Call Trace:
Jan 02 02:21:24 novo.vcpu kernel:  
Jan 02 02:21:24 novo.vcpu kernel:  ? die+0x36/0x90
Jan 02 02:21:24 novo.vcpu kernel:  ? do_trap+0xda/0x100
Jan 02 02:21:24 novo.vcpu kernel:  ? __blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel:  ? do_error_trap+0x6a/0x90
Jan 02 02:21:24 novo.vcpu kernel:  ? __blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel:  ? exc_invalid_op+0x50/0x70
Jan 02 02:21:24 novo.vcpu kernel:  ? __blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel:  ? asm_exc_invalid_op+0x1a/0x20
Jan 02 02:21:24 novo.vcpu kernel:  ? __blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel:  ? __blk_rq_map_sg+0x102/0x4f0
Jan 02 02:21:24 novo.vcpu kernel:  ? mempool_alloc+0x86/0x1b0
Jan 02 02:21:24 novo.vcpu kernel:  nvme_prep_rq.part.0+0xad/0x840 [nvme 0902d60a773d6eac1c90b6dbf9fb606c9432cc33]
Jan 02 02:21:24 novo.vcpu kernel:  ? try_to_wake_up+0x2b7/0x640
Jan 02 02:21:24 novo.vcpu kernel:  nvme_queue_rqs+0xc0/0x290 [nvme 0902d60a773d6eac1c90b6dbf9fb606c9432cc33]
Jan 02 02:21:24 novo.vcpu kernel:  blk_mq_flush_plug_list.part.0+0x58f/0x5c0
Jan 02 02:21:24 novo.vcpu kernel:  ? queue_work_on+0x3b/0x50
Jan 02 02:21:24 novo.vcpu kernel:  ? btrfs_submit_chunk+0x3ae/0x520 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  __blk_flush_plug+0xf5/0x150
Jan 02 02:21:24 novo.vcpu kernel:  blk_finish_plug+0x29/0x40
Jan 02 02:21:24 novo.vcpu kernel:  submit_initial_group_read+0xa3/0x1b0 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  flush_scrub_stripes+0x219/0x260 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  scrub_stripe+0x53c/0x700 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  scrub_chunk+0xcb/0x130 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  scrub_enumerate_chunks+0x2f0/0x6c0 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  btrfs_scrub_dev+0x212/0x610 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  btrfs_ioctl+0x2d0/0x2640 [btrfs e817a81ee371f312ac9558f8778b637a23a2adc5]
Jan 02 02:21:24 novo.vcpu kernel:  ? folio_add_new_anon_rmap+0x45/0xe0
Jan 02 02:21:24 novo.vcpu kernel:  ? set_ptes.isra.0+0x1e/0xa0
Jan 02 02:21:24 novo.vcpu kernel:  ? cap_safe_nice+0x38/0x70
Jan 02 02:21:24 novo.vcpu kernel:  ? security_task_setioprio+0x33/0x50
Jan 02 02:21:24 novo.vcpu kernel:  ? set_task_ioprio+0xa7/0x130
Jan 02 02:21:24 novo.vcpu kernel:  __x64_sys_ioctl+0x94/0xd0
Jan 02 02:21:24 novo.vcpu kernel:  do_syscall_64+0x5d/0x90
Jan 02 02:21:24 novo.vcpu kernel:  ? __count_memcg_events+0x42/0x90
Jan 02 02:21:24 novo.vcpu kernel:  ? count_memcg_events.constprop.0+0x1a/0x30
Jan 02 02:21:24 novo.vcpu kernel:  ? handle_mm_fault+0xa2/0x360
Jan 02 02:21:24 novo.vcpu kernel:  ? do_user_addr_fault+0x30f/0x660
Jan 02 02:21:24 novo.vcpu kernel:  ? exc_page_fault+0x7f/0x180
Jan 02 02:21:24 novo.vcpu kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Jan 02 02:21:24 novo.vcpu kernel: RIP: 0033:0x7fad5b9e33af
Jan 02 02:21:24 novo.vcpu kernel: Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00
Jan 02 02:21:24 novo.vcpu kernel: RSP: 002b:00007fad5b879c80 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 02 02:21:24 novo.vcpu kernel: RAX: ffffffffffffffda RBX: 0000562c0a109450 RCX: 00007fad5b9e33af
Jan 02 02:21:24 novo.vcpu kernel: RDX: 0000562c0a109450 RSI: 00000000c400941b RDI: 0000000000000003
Jan 02 02:21:24 novo.vcpu kernel: RBP: 0000000000000000 R08: 00007fad5b87a6c0 R09: 0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: R10: 0000000000000000 R11: 0000000000000246 R12: fffffffffffffdb8
Jan 02 02:21:24 novo.vcpu kernel: R13: 0000000000000000 R14: 00007ffe13568550 R15: 00007fad5b07a000
Jan 02 02:21:24 novo.vcpu kernel:  
Jan 02 02:21:24 novo.vcpu kernel: Modules linked in: cdc_acm uinput udp_diag tcp_diag inet_diag exfat uas usb_storage uhid snd_usb_audio snd_usbmidi_lib snd_ump snd_rawmidi r8153_ecm cdc_ether usbnet r8152 mii xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink iptable_nat xt_addrtype iptable_filter br_netfilter ccm rfcomm snd_seq_dummy snd_hrtimer snd_seq snd_seq_device nft_reject_ipv4 nf_reject_ipv4 nft_reject nft_ct nft_masq nft_chain_nat nf_tables nf_nat_h323 nf_conntrack_h323 evdi(OE) nf_nat_pptp nf_conntrack_pptp nf_nat_tftp nf_conntrack_tftp nf_nat_sip nf_conntrack_sip nf_nat_irc nf_conntrack_irc nf_nat_ftp overlay nf_conntrack_ftp nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel bridge stp llc cmac algif_hash algif_skcipher af_alg bnep snd_ctl_led snd_soc_skl_hda_dsp snd_soc_intel_hda_dsp_common snd_soc_hdac_hdmi snd_sof_probes snd_hda_codec_hdmi snd_hda_codec_realtek
Jan 02 02:21:24 novo.vcpu kernel:  snd_hda_codec_generic ledtrig_audio snd_soc_dmic snd_sof_pci_intel_icl snd_sof_intel_hda_common soundwire_intel snd_sof_intel_hda_mlink soundwire_cadence snd_sof_intel_hda intel_tcc_cooling snd_sof_pci snd_sof_xtensa_dsp x86_pkg_temp_thermal intel_powerclamp snd_sof hid_sensor_accel_3d coretemp hid_sensor_trigger snd_sof_utils industrialio_triggered_buffer kvm_intel snd_soc_hdac_hda kfifo_buf snd_hda_ext_core hid_sensor_iio_common industrialio hid_sensor_custom snd_soc_acpi_intel_match snd_soc_acpi kvm snd_intel_dspcfg hid_sensor_hub snd_intel_sdw_acpi snd_hda_codec irqbypass uvcvideo crct10dif_pclmul snd_hda_core btusb joydev intel_ishtp_hid polyval_clmulni videobuf2_vmalloc snd_hwdep btrtl yoga_usage_mode(OE) mousedev uvc vboxnetflt(OE) polyval_generic vboxnetadp(OE) btintel soundwire_generic_allocation videobuf2_memops soundwire_bus gf128mul videobuf2_v4l2 btbcm i915 btmtk snd_soc_core ghash_clmulni_intel wacom vboxdrv(OE) videodev snd_compress hid_multitouch sha1_ssse3 usbhid bluetooth iTCO_wdt iwlmvm
Jan 02 02:21:24 novo.vcpu kernel:  videobuf2_common rapl drm_buddy ac97_bus intel_pmc_bxt ecdh_generic crc16 mc iTCO_vendor_support mei_pxp mei_hdcp intel_rapl_msr intel_cstate processor_thermal_device_pci_legacy mac80211 snd_pcm_dmaengine i2c_algo_bit spi_nor processor_thermal_device think_lmi libarc4 intel_uncore firmware_attributes_class iwlwifi processor_thermal_rfim mtd ttm pcspkr snd_pcm wmi_bmof processor_thermal_mbox i2c_i801 snd_timer i2c_smbus processor_thermal_rapl intel_ish_ipc drm_display_helper ucsi_acpi snd lenovo_ymc cec intel_lpss_pci soundcore intel_wmi_thunderbolt thunderbolt intel_ishtp mei_me intel_rapl_common cfg80211 typec_ucsi int340x_thermal_zone mei typec intel_lpss intel_gtt vfat idma64 roles intel_soc_dts_iosf i2c_hid_acpi fat i2c_hid ideapad_laptop sparse_keymap platform_profile rfkill soc_button_array int3400_thermal acpi_thermal_rel acpi_pad acpi_tad mac_hid i2c_dev coda sg crypto_user acpi_call(OE) fuse dm_mod loop nfnetlink ip_tables x_tables btrfs blake2b_generic xor raid6_pq libcrc32c crc32c_generic
Jan 02 02:21:24 novo.vcpu kernel:  crc32_pclmul crc32c_intel serio_raw sha512_ssse3 atkbd sha256_ssse3 libps2 aesni_intel vivaldi_fmap nvme crypto_simd cryptd nvme_core spi_intel_pci xhci_pci spi_intel nvme_common xhci_pci_renesas video i8042 wmi serio
Jan 02 02:21:24 novo.vcpu kernel: Unloaded tainted modules: nvidia(POE):1
Jan 02 02:21:24 novo.vcpu kernel: ---[ end trace 0000000000000000 ]---
Jan 02 02:21:24 novo.vcpu kernel: RIP: 0010:__blk_rq_map_sg+0x4dc/0x4f0
Jan 02 02:21:24 novo.vcpu kernel: Code: 24 4c 8b 4c 24 08 84 c0 4c 8b 5c 24 10 44 8b 44 24 18 49 8b 07 0f 84 fd fb ff ff 4c 8b 54 24 30 48 8b 54 24 38 e9 34 ff ff ff <0f> 0b 0f 0b e8 3b d8 7d 00 66 66 2e 0f 1f 84 00 00 00 00 00 90 90
Jan 02 02:21:24 novo.vcpu kernel: RSP: 0000:ffffc9000680b858 EFLAGS: 00010202
Jan 02 02:21:24 novo.vcpu kernel: RAX: ffff888169d5f300 RBX: 0000000000000000 RCX: d94b4edc3cf35329
Jan 02 02:21:24 novo.vcpu kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff888169d5f300
Jan 02 02:21:24 novo.vcpu kernel: RBP: 00000002b45f0000 R08: 0000000000000000 R09: 0000000000000080
Jan 02 02:21:24 novo.vcpu kernel: R10: ffff8881d18886f0 R11: ffff888111586900 R12: 0000000000001000
Jan 02 02:21:24 novo.vcpu kernel: R13: 0000000000005000 R14: ffff8881d18886f0 R15: ffffc9000680b930
Jan 02 02:21:24 novo.vcpu kernel: FS:  00007fad5b87a6c0(0000) GS:ffff88848ad80000(0000) knlGS:0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 02 02:21:24 novo.vcpu kernel: CR2: 000055c11731ed60 CR3: 0000000043c4c003 CR4: 0000000000770ee0
Jan 02 02:21:24 novo.vcpu kernel: PKRU: 55555554
Jan 02 02:21:24 novo.vcpu kernel: ------------[ cut here ]------------
Jan 02 02:21:24 novo.vcpu kernel: WARNING: CPU: 6 PID: 554890 at kernel/exit.c:818 do_exit+0x8e9/0xb20
Jan 02 02:21:24 novo.vcpu kernel: Modules linked in: cdc_acm uinput udp_diag tcp_diag inet_diag exfat uas usb_storage uhid snd_usb_audio snd_usbmidi_lib snd_ump snd_rawmidi r8153_ecm cdc_ether usbnet r8152 mii xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink iptable_nat xt_addrtype iptable_filter br_netfilter ccm rfcomm snd_seq_dummy snd_hrtimer snd_seq snd_seq_device nft_reject_ipv4 nf_reject_ipv4 nft_reject nft_ct nft_masq nft_chain_nat nf_tables nf_nat_h323 nf_conntrack_h323 evdi(OE) nf_nat_pptp nf_conntrack_pptp nf_nat_tftp nf_conntrack_tftp nf_nat_sip nf_conntrack_sip nf_nat_irc nf_conntrack_irc nf_nat_ftp overlay nf_conntrack_ftp nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel bridge stp llc cmac algif_hash algif_skcipher af_alg bnep snd_ctl_led snd_soc_skl_hda_dsp snd_soc_intel_hda_dsp_common snd_soc_hdac_hdmi snd_sof_probes snd_hda_codec_hdmi snd_hda_codec_realtek
Jan 02 02:21:24 novo.vcpu kernel:  snd_hda_codec_generic ledtrig_audio snd_soc_dmic snd_sof_pci_intel_icl snd_sof_intel_hda_common soundwire_intel snd_sof_intel_hda_mlink soundwire_cadence snd_sof_intel_hda intel_tcc_cooling snd_sof_pci snd_sof_xtensa_dsp x86_pkg_temp_thermal intel_powerclamp snd_sof hid_sensor_accel_3d coretemp hid_sensor_trigger snd_sof_utils industrialio_triggered_buffer kvm_intel snd_soc_hdac_hda kfifo_buf snd_hda_ext_core hid_sensor_iio_common industrialio hid_sensor_custom snd_soc_acpi_intel_match snd_soc_acpi kvm snd_intel_dspcfg hid_sensor_hub snd_intel_sdw_acpi snd_hda_codec irqbypass uvcvideo crct10dif_pclmul snd_hda_core btusb joydev intel_ishtp_hid polyval_clmulni videobuf2_vmalloc snd_hwdep btrtl yoga_usage_mode(OE) mousedev uvc vboxnetflt(OE) polyval_generic vboxnetadp(OE) btintel soundwire_generic_allocation videobuf2_memops soundwire_bus gf128mul videobuf2_v4l2 btbcm i915 btmtk snd_soc_core ghash_clmulni_intel wacom vboxdrv(OE) videodev snd_compress hid_multitouch sha1_ssse3 usbhid bluetooth iTCO_wdt iwlmvm
Jan 02 02:21:24 novo.vcpu kernel:  videobuf2_common rapl drm_buddy ac97_bus intel_pmc_bxt ecdh_generic crc16 mc iTCO_vendor_support mei_pxp mei_hdcp intel_rapl_msr intel_cstate processor_thermal_device_pci_legacy mac80211 snd_pcm_dmaengine i2c_algo_bit spi_nor processor_thermal_device think_lmi libarc4 intel_uncore firmware_attributes_class iwlwifi processor_thermal_rfim mtd ttm pcspkr snd_pcm wmi_bmof processor_thermal_mbox i2c_i801 snd_timer i2c_smbus processor_thermal_rapl intel_ish_ipc drm_display_helper ucsi_acpi snd lenovo_ymc cec intel_lpss_pci soundcore intel_wmi_thunderbolt thunderbolt intel_ishtp mei_me intel_rapl_common cfg80211 typec_ucsi int340x_thermal_zone mei typec intel_lpss intel_gtt vfat idma64 roles intel_soc_dts_iosf i2c_hid_acpi fat i2c_hid ideapad_laptop sparse_keymap platform_profile rfkill soc_button_array int3400_thermal acpi_thermal_rel acpi_pad acpi_tad mac_hid i2c_dev coda sg crypto_user acpi_call(OE) fuse dm_mod loop nfnetlink ip_tables x_tables btrfs blake2b_generic xor raid6_pq libcrc32c crc32c_generic
Jan 02 02:21:24 novo.vcpu kernel:  crc32_pclmul crc32c_intel serio_raw sha512_ssse3 atkbd sha256_ssse3 libps2 aesni_intel vivaldi_fmap nvme crypto_simd cryptd nvme_core spi_intel_pci xhci_pci spi_intel nvme_common xhci_pci_renesas video i8042 wmi serio
Jan 02 02:21:24 novo.vcpu kernel: Unloaded tainted modules: nvidia(POE):1
Jan 02 02:21:24 novo.vcpu kernel: CPU: 6 PID: 554890 Comm: btrfs Tainted: P      D    OE      6.6.8-arch1-1 #1 2ffcc416f976199fcae9446e8159d64f5aa7b1db
Jan 02 02:21:24 novo.vcpu kernel: Hardware name: LENOVO 81Q9/LNVNB161216, BIOS AUCN61WW 04/19/2022
Jan 02 02:21:24 novo.vcpu kernel: RIP: 0010:do_exit+0x8e9/0xb20
Jan 02 02:21:24 novo.vcpu kernel: Code: e9 35 f8 ff ff 48 8b bb 00 06 00 00 31 f6 e8 3e d9 ff ff e9 bd fd ff ff 4c 89 e6 bf 05 06 00 00 e8 0c 1b 01 00 e9 5f f8 ff ff <0f> 0b e9 8e f7 ff ff 0f 0b e9 4b f7 ff ff 48 89 df e8 91 34 12 00
Jan 02 02:21:24 novo.vcpu kernel: RSP: 0000:ffffc9000680bed8 EFLAGS: 00010286
Jan 02 02:21:24 novo.vcpu kernel: RAX: 0000000000000000 RBX: ffff88833e4f2700 RCX: 0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: RDX: 0000000000000001 RSI: 0000000000002710 RDI: ffff888100faf380
Jan 02 02:21:24 novo.vcpu kernel: RBP: ffff88820a0e1680 R08: 0000000000000000 R09: ffffc9000680b580
Jan 02 02:21:24 novo.vcpu kernel: R10: 0000000000000003 R11: ffffffff82eb9850 R12: 000000000000000b
Jan 02 02:21:24 novo.vcpu kernel: R13: ffff888100faf380 R14: ffffc9000680b7a8 R15: ffff88833e4f2700
Jan 02 02:21:24 novo.vcpu kernel: FS:  00007fad5b87a6c0(0000) GS:ffff88848ad80000(0000) knlGS:0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 02 02:21:24 novo.vcpu kernel: CR2: 000055c11731ed60 CR3: 0000000043c4c003 CR4: 0000000000770ee0
Jan 02 02:21:24 novo.vcpu kernel: PKRU: 55555554
Jan 02 02:21:24 novo.vcpu kernel: Call Trace:
Jan 02 02:21:24 novo.vcpu kernel:  
Jan 02 02:21:24 novo.vcpu kernel:  ? do_exit+0x8e9/0xb20
Jan 02 02:21:24 novo.vcpu kernel:  ? __warn+0x81/0x130
Jan 02 02:21:24 novo.vcpu kernel:  ? do_exit+0x8e9/0xb20
Jan 02 02:21:24 novo.vcpu kernel:  ? report_bug+0x171/0x1a0
Jan 02 02:21:24 novo.vcpu kernel:  ? handle_bug+0x3c/0x80
Jan 02 02:21:24 novo.vcpu kernel:  ? exc_invalid_op+0x17/0x70
Jan 02 02:21:24 novo.vcpu kernel:  ? asm_exc_invalid_op+0x1a/0x20
Jan 02 02:21:24 novo.vcpu kernel:  ? do_exit+0x8e9/0xb20
Jan 02 02:21:24 novo.vcpu kernel:  ? do_exit+0x70/0xb20
Jan 02 02:21:24 novo.vcpu kernel:  ? do_user_addr_fault+0x30f/0x660
Jan 02 02:21:24 novo.vcpu kernel:  make_task_dead+0x81/0x170
Jan 02 02:21:24 novo.vcpu kernel:  rewind_stack_and_make_dead+0x17/0x20
Jan 02 02:21:24 novo.vcpu kernel: RIP: 0033:0x7fad5b9e33af
Jan 02 02:21:24 novo.vcpu kernel: Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 18 48 8b 44 24 18 64 48 2b 04 25 28 00 00
Jan 02 02:21:24 novo.vcpu kernel: RSP: 002b:00007fad5b879c80 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jan 02 02:21:24 novo.vcpu kernel: RAX: ffffffffffffffda RBX: 0000562c0a109450 RCX: 00007fad5b9e33af
Jan 02 02:21:24 novo.vcpu kernel: RDX: 0000562c0a109450 RSI: 00000000c400941b RDI: 0000000000000003
Jan 02 02:21:24 novo.vcpu kernel: RBP: 0000000000000000 R08: 00007fad5b87a6c0 R09: 0000000000000000
Jan 02 02:21:24 novo.vcpu kernel: R10: 0000000000000000 R11: 0000000000000246 R12: fffffffffffffdb8
Jan 02 02:21:24 novo.vcpu kernel: R13: 0000000000000000 R14: 00007ffe13568550 R15: 00007fad5b07a000
Jan 02 02:21:24 novo.vcpu kernel:  
Jan 02 02:21:24 novo.vcpu kernel: ---[ end trace 0000000000000000 ]---
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910683136 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910683136 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910687232 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910687232 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910691328 length 4096
Jan 02 02:21:24 novo.vcpu kernel: BTRFS critical (device nvme0n1p4): unable to find chunk map for logical 10910691328 length 4096

8
 
 

The fscrypt work continues to steadily plod along; really hoping that there won't need to be many more versions of the patchset, especially seeing as a bunch of the non-BTRFS-specific work has already landed.

9
10
11
 
 

cross-posted from: https://feddit.uk/post/4577666

Was looking at how to set up snapper on Fedora 39 and came across the ever-knowledgeable Stephen's Tech Talks video. It covers balancing, setting up snapper, and sub-volume management in a really cool GUI tool.

Edit: updated the link as the GitHub page was apparently out of date, but it is in most repos

12
0
duperemove speedups (trofi.github.io)
submitted 1 year ago by Atemu@lemmy.ml to c/btrfs@lemmy.ml
13
 
 

Looks like it's v2 time.

The btrfs-progs-side patch is here.

14
15
 
 

Update:

With the native Manjaro installer I succeeded in encrypting my disk, but the encryption is below the btrfs layer (btrfs sits inside the encryption).

16
 
 

Just wanted to share some love for this filesystem.

I’ve been running a btrfs raid1 continuously for over ten years, on a motley assortment of near-garbage hard drives of all different shapes and sizes. None of the original drives are still in it, and that server is now on its fourth motherboard. The data has survived it all!

It’s grown to 6 drives now, and most recently survived the runtime failure of a SATA controller card that four of them were attached to. After replacing it, I was stunned to discover that the volume was uncorrupted and didn’t even require repair.

So knock on wood — I’m not trying to tempt fate here. I just want to say thank you to all the devs for their hard work, and add some positive feedback to the heap since btrfs gets way more than its fair share of flak, which I personally find to be undeserved. Cheers!

17
 
 

Hi,

btrfs-progs version 6.3.3 have been released. This is a bugfix release.

There are two bug fixes, the rest is CI work, documentation updates and some preparatory work. Due to no other significant changes queued, the release 6.4 will be most likely skipped.

Changelog:

  • add btrfs-find-root to btrfs.box
  • replace: properly enqueue if there's another replace running
  • other:
    • CI updates, more tests enabled, code coverage, badges
    • documentation updates
    • build warning fixes
18
19
 
 
This is a changeset adding encryption to btrfs. It is not complete; it
does not support inline data or verity or authenticated encryption. It
is primarily intended as a proof that the fscrypt extent encryption
changeset it builds on works.

As per the design doc refined in the fall of last year [1], btrfs
encryption has several steps: first, adding extent encryption to fscrypt
and then btrfs; second, adding authenticated encryption support to the
block layer, fscrypt, and then btrfs; and later adding potentially the
ability to change the key used by a directory (either for all data or
just newly written data) and/or allowing use of inline extents and
verity items in combination with encryption and/or enabling send/receive
of encrypted volumes. As such, this change is only the first step and is
unsafe.

This change does not pass a couple of encryption xfstests, because of
different properties of extent encryption. It hasn't been tested with
direct IO or RAID. Because currently extent encryption always uses inline
encryption (i.e. IO-block-only) for data encryption, it does not support
encryption of inline extents; similarly, since btrfs stores verity items
in the tree instead of in inline encryptable blocks on disk as other
filesystems do, btrfs cannot currently encrypt verity items. Finally,
this is insecure; the checksums are calculated on the unencrypted data
and stored unencrypted, which is a potential information leak. (This
will be addressed by authenticated encryption).
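
The leak can be illustrated with a toy model (not the btrfs implementation; the cipher below is a deliberately fake stand-in): checksums computed over plaintext and stored in the clear let anyone who can read the checksum tree detect that two extents hold identical data, even though their ciphertexts differ.

```python
import hashlib
import os
import zlib

def toy_encrypt(block: bytes, key: bytes, nonce: bytes) -> bytes:
    # Fake stream cipher (XOR with a hash-derived keystream); NOT real
    # crypto, just enough to show identical plaintexts yielding
    # distinct ciphertexts under distinct nonces.
    stream = hashlib.sha256(key + nonce).digest() * (len(block) // 32 + 1)
    return bytes(a ^ b for a, b in zip(block, stream))

plaintext = b"A" * 64
key = os.urandom(32)

# Two extents holding the same data, encrypted with different
# per-extent nonces (as inline encryption would do):
ct1 = toy_encrypt(plaintext, key, os.urandom(16))
ct2 = toy_encrypt(plaintext, key, os.urandom(16))
assert ct1 != ct2  # the ciphertexts reveal nothing on their own

# But checksums over the *unencrypted* data, stored unencrypted,
# match -- leaking that the two extents hold identical plaintext:
assert zlib.crc32(plaintext) == zlib.crc32(plaintext)
```

Checksumming (or directly authenticating) the ciphertext instead, as the planned authenticated encryption step would do, closes this channel.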

This changeset is built on two prior changesets to fscrypt: [2] and [3]
and should have no effect on unencrypted usage.

[1] https://docs.google.com/document/d/1janjxewlewtVPqctkWOjSa7OhCgB8Gdx7iDaCDQQNZA/edit?usp=sharing
[2]
https://lore.kernel.org/linux-fscrypt/cover.1687988119.git.sweettea-kernel@dorminy.me/
[3]
https://lore.kernel.org/linux-fscrypt/cover.1687988246.git.sweettea-kernel@dorminy.me
20
 
 
This changeset adds extent-based data encryption to fscrypt.
Some filesystems need to encrypt data based on extents, rather than on
inodes, due to features incompatible with inode-based encryption. For
instance, btrfs can have multiple inodes referencing a single block of
data, and moves logical data blocks to different physical locations on
disk in the background. 
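
A rough illustration of why the extent is the natural unit here (a hypothetical derivation, not the actual fscrypt scheme): if the per-block IV is tied to the extent rather than to the (inode, logical offset) pair, reflinked inodes can share ciphertext and relocation never forces re-encryption.

```python
import hashlib

def extent_seed(master_key: bytes, extent_id: int, block_index: int) -> bytes:
    # Hypothetical per-block derivation keyed by *extent*, not by
    # (inode, logical offset) as classic inode-based fscrypt uses.
    return hashlib.sha256(
        master_key
        + extent_id.to_bytes(8, "little")
        + block_index.to_bytes(8, "little")
    ).digest()

master = b"\x01" * 32

# Two inodes reflinking the same extent (id 42) derive the same seed
# for block 0, so the shared ciphertext decrypts from either file:
assert extent_seed(master, 42, 0) == extent_seed(master, 42, 0)

# Relocation changes only the extent's physical address, not its id,
# so no re-encryption is needed; distinct extents still diverge:
assert extent_seed(master, 42, 0) != extent_seed(master, 43, 0)
```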

As per discussion last year in [1] and later in [2], we would like to
allow the use of fscrypt with btrfs, with authenticated encryption. This
is the first step of that work, adding extent-based encryption to
fscrypt; authenticated encryption is the next step. Extent-based
encryption should be usable by other filesystems which wish to support
snapshotting or background data rearrangement also, but btrfs is the
first user. 

This changeset requires extent encryption to use inlinecrypt, as
discussed previously. There are two questionable parts: the
forget_extent_info hook is not yet in use by btrfs, as I haven't yet
written a test exercising a race where it would be relevant; and saving
the session key credentials just to enable v1 session-based policies is
perhaps less good than 

This applies atop [3], which itself is based on kdave/misc-next. It
passes most encryption fstests with suitable changes to btrfs-progs, but
not generic/580 or generic/595 due to different timing involved in
extent encryption. Tests and btrfs progs updates to follow.


[1] https://docs.google.com/document/d/1janjxewlewtVPqctkWOjSa7OhCgB8Gdx7iDaCDQQNZA/edit?usp=sharing
[2] https://lore.kernel.org/linux-fscrypt/80496cfe-161d-fb0d-8230-93818b966b1b@dorminy.me/T/#t
[3]
https://lore.kernel.org/linux-fscrypt/cover.1687988119.git.sweettea-kernel@dorminy.me/

21
1
submitted 1 year ago* (last edited 1 year ago) by Atemu@lemmy.ml to c/btrfs@lemmy.ml
 
 
btrfs quota groups (qgroups) are a compelling feature of btrfs that
allow flexible control for limiting subvolume data and metadata usage.
However, due to btrfs's high level decision to tradeoff snapshot
performance against ref-counting performance, qgroups suffer from
non-trivial performance issues that make them unattractive in certain
workloads. Particularly, frequent backref walking during writes and
during commits can make operations increasingly expensive as the number
of snapshots scales up. For that reason, we have never been able to
commit to using qgroups in production at Meta, despite significant
interest from people running container workloads, where we would benefit
from protecting the rest of the host from a buggy application in a
container running away with disk usage.

This patch series introduces a simplified version of qgroups called
simple quotas (squotas) which never computes global reference counts
for extents, and thus has similar performance characteristics to normal,
quotas disabled, btrfs. The "trick" is that in simple quotas mode, we
account all extents permanently to the subvolume in which they were
originally created. That allows us to make all accounting 1:1 with
extent item lifetime, removing the need to walk backrefs. However, this
sacrifices the ability to compute shared vs. exclusive usage. It also
results in counter-intuitive, though still predictable and simple,
accounting in the cases where an original extent is removed while a
shared copy still exists. Qgroups is able to detect that case and count
the remaining copy as an exclusive owner, while squotas is not. As a
result, squotas works best when the original extent is immutable and
outlives any clones.
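
The accounting rule, including the counterintuitive removed-original case, can be sketched in a few lines of toy Python (a model of the rule, not the kernel code):

```python
# Toy model of squota accounting: every extent is charged to its
# creating subvolume for as long as *any* reference to it exists.

class Extent:
    def __init__(self, size: int, creator: str):
        self.size = size
        self.creator = creator   # fixed at creation, never reassigned
        self.refs = {creator}    # subvolumes currently referencing it

def squota_usage(extents, subvol: str) -> int:
    return sum(e.size for e in extents if e.creator == subvol and e.refs)

ext = Extent(4096, creator="subvol_a")
ext.refs.add("snap_b")           # a snapshot shares the extent
extents = [ext]

assert squota_usage(extents, "subvol_a") == 4096
assert squota_usage(extents, "snap_b") == 0   # sharers are never charged

# Counterintuitive case: the original file is deleted, but the snapshot
# still holds the data -- subvol_a keeps paying for it.
ext.refs.discard("subvol_a")
assert squota_usage(extents, "subvol_a") == 4096
```

Qgroup-style accounting would instead re-attribute the surviving copy to the snapshot as exclusive usage at that point; squotas trade that precision for never walking backrefs.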

==Format Change==
In order to track the original creating subvolume of a data extent in
the face of reflinks, it is necessary to add additional accounting to
the extent item. To save space, this is done with a new inline ref item.
However, the downside of this approach is that it makes enabling squota
an incompat change, denoted by the new incompat bit SIMPLE_QUOTA. When
this bit is set and quotas are enabled, new extent items get the extra
accounting, and freed extent items check for the accounting to find
their creating subvolume. In addition, 1:1 with this incompat bit,
the quota status item now tracks a "quota enablement generation" needed
for properly handling the deletion of extents that predate enablement.
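
A toy sketch of why that generation matters (illustrative only; the names and structures below are made up, not the on-disk format): extents created before quota enablement carry no owner accounting, so freeing them must not un-charge anyone.

```python
# Hypothetical deletion-time check: compare the extent's creation
# generation against the recorded quota enablement generation.

ENABLE_GEN = 100  # stored in the quota status item at enable time

def on_extent_free(extent: dict, usage: dict) -> None:
    # Extents predating enablement have no owner accounting to undo.
    if extent["gen"] < ENABLE_GEN or "owner" not in extent:
        return
    usage[extent["owner"]] -= extent["size"]

usage = {"subvol_a": 8192}
old = {"gen": 50, "size": 4096}                       # predates enablement
new = {"gen": 120, "size": 4096, "owner": "subvol_a"} # has owner ref

on_extent_free(old, usage)
assert usage["subvol_a"] == 8192   # unchanged: nothing was ever charged
on_extent_free(new, usage)
assert usage["subvol_a"] == 4096   # charge correctly released
```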

==API==
Squotas reuse the API of qgroups. The only difference is that when you
enable quotas via `btrfs quota enable`, you pass the `--simple` flag.
Squotas will always report exclusive == shared for each qgroup. Squotas
deal with extent_item/metadata_item sizes and thus do not do anything
special with compression. Squotas also introduce auto inheritance for
nested subvols. The API is documented more fully in the documentation
patches in btrfs-progs.

==Testing methodology==
Using updated btrfs-progs and fstests (relevant matching patch sets to
be sent ASAP)
btrfs-progs: https://github.com/boryas/btrfs-progs/tree/squota-progs
fstests: https://github.com/boryas/fstests/tree/squota-test

I ran '-g auto' on fstests on the following configurations:
1a) baseline kernel/progs/fstests.
1b) squota kernel baseline progs/fstests.
2a) baseline kernel/progs/fstests. fstests configured to mkfs with quota
2b) squota kernel/progs/fstests. fstests configured to mkfs with squota

I compared 1a against 1b and 2a against 2b and detected no regressions.
2a/2b both exhibit regressions against 1a/1b which are largely issues
with quota reservations in various complicated cases. I intend to run
those down in the future, but they are not simple quota specific, as
they are already broken with plain qgroups.

==Performance Testing==
I measured the performance of the change using fsperf. I ran with 3
configurations using the squota kernel:
- plain mkfs
- qgroup mkfs
- squota mkfs
And added a new performance test which creates 1000 files in a subvol,
creates 100 snapshots of that subvol, then unshares extents in files in
the snapshots. I measured write performance with fio and btrfs commit
critical section performance side effects with bpftrace on
'wait_current_trans'.

The results for the test which measures unshare perf (unshare.py) with
qgroup and squota compared to the baseline:

group test results
unshare results
          metric              baseline       current        stdev            diff
========================================================================================
avg_commit_ms                     162.13        285.75          3.14     76.24%
bg_count                              16            16             0      0.00%
commits                           378.20           379          1.92      0.21%
elapsed                           201.40        270.40          1.34     34.26%
end_state_mount_ns           26036211.60   26004593.60    2281065.40     -0.12%
end_state_umount_ns             2.45e+09      2.55e+09   20740154.41      3.93%
max_commit_ms                     425.80           594         53.34     39.50%
sys_cpu                             0.10          0.06          0.06    -42.15%
wait_current_trans_calls         2945.60       3405.20         47.08     15.60%
wait_current_trans_ns_max       1.56e+08      3.43e+08   32659393.25    120.07%
wait_current_trans_ns_mean    1974875.35   28588482.55    1557588.84   1347.61%
wait_current_trans_ns_min            232           232         25.88      0.00%
wait_current_trans_ns_p50            718           740         22.80      3.06%
wait_current_trans_ns_p95     7711770.20      2.21e+08   17241032.09   2761.19%
wait_current_trans_ns_p99    67744932.29      2.68e+08   41275815.87    295.16%
write_bw_bytes                 653008.80     486344.40       4209.91    -25.52%
write_clat_ns_mean            6251404.78    8406837.89      39779.15     34.48%
write_clat_ns_p50             1656422.40    1643315.20      27415.68     -0.79%
write_clat_ns_p99               1.90e+08      3.20e+08       2097152     68.62%
write_io_kbytes                   128000        128000             0      0.00%
write_iops                        159.43        118.74          1.03    -25.52%
write_lat_ns_max                7.06e+08      9.80e+08   47324816.61     38.88%
write_lat_ns_mean             6251503.06    8406936.06      39780.83     34.48%
write_lat_ns_min                    3354          4648        616.06     38.58%

squota test results
unshare results
          metric              baseline       current        stdev            diff
========================================================================================
avg_commit_ms                     162.13        164.16          3.14      1.25%
bg_count                              16             0             0   -100.00%
commits                           378.20        380.80          1.92      0.69%
elapsed                           201.40        208.20          1.34      3.38%
end_state_mount_ns           26036211.60   25840729.60    2281065.40     -0.75%
end_state_umount_ns             2.45e+09      3.01e+09   20740154.41     22.80%
max_commit_ms                     425.80        415.80         53.34     -2.35%
sys_cpu                             0.10          0.08          0.06    -23.36%
wait_current_trans_calls         2945.60       2981.60         47.08      1.22%
wait_current_trans_ns_max       1.56e+08      1.12e+08   32659393.25    -27.86%
wait_current_trans_ns_mean    1974875.35    1064734.76    1557588.84    -46.09%
wait_current_trans_ns_min            232           238         25.88      2.59%
wait_current_trans_ns_p50            718           746         22.80      3.90%
wait_current_trans_ns_p95     7711770.20       1567.60   17241032.09    -99.98%
wait_current_trans_ns_p99    67744932.29   49880514.27   41275815.87    -26.37%
write_bw_bytes                 653008.80        631256       4209.91     -3.33%
write_clat_ns_mean            6251404.78    6476816.06      39779.15      3.61%
write_clat_ns_p50             1656422.40       1581056      27415.68     -4.55%
write_clat_ns_p99               1.90e+08      1.94e+08       2097152      2.21%
write_io_kbytes                   128000        128000             0      0.00%
write_iops                        159.43        154.12          1.03     -3.33%
write_lat_ns_max                7.06e+08      7.65e+08   47324816.61      8.38%
write_lat_ns_mean             6251503.06    6476912.76      39780.83      3.61%
write_lat_ns_min                    3354          4062        616.06     21.11%

And the same, but only showing results where the deviation was outside
of a 95% confidence interval for the mean (default significance
highlighting in fsperf):
qgroup test results
unshare results
          metric              baseline       current        stdev            diff
========================================================================================
avg_commit_ms                     162.13        285.75          3.14     76.24%
elapsed                           201.40        270.40          1.34     34.26%
end_state_umount_ns             2.45e+09      2.55e+09   20740154.41      3.93%
max_commit_ms                     425.80           594         53.34     39.50%
wait_current_trans_calls         2945.60       3405.20         47.08     15.60%
wait_current_trans_ns_max       1.56e+08      3.43e+08   32659393.25    120.07%
wait_current_trans_ns_mean    1974875.35   28588482.55    1557588.84   1347.61%
wait_current_trans_ns_p95     7711770.20      2.21e+08   17241032.09   2761.19%
wait_current_trans_ns_p99    67744932.29      2.68e+08   41275815.87    295.16%
write_bw_bytes                 653008.80     486344.40       4209.91    -25.52%
write_clat_ns_mean            6251404.78    8406837.89      39779.15     34.48%
write_clat_ns_p99               1.90e+08      3.20e+08       2097152     68.62%
write_iops                        159.43        118.74          1.03    -25.52%
write_lat_ns_max                7.06e+08      9.80e+08   47324816.61     38.88%
write_lat_ns_mean             6251503.06    8406936.06      39780.83     34.48%
write_lat_ns_min                    3354          4648        616.06     38.58%

squota test results
unshare results
          metric              baseline       current        stdev            diff
========================================================================================
elapsed                           201.40        208.20          1.34      3.38%
end_state_umount_ns             2.45e+09      3.01e+09   20740154.41     22.80%
write_bw_bytes                 653008.80        631256       4209.91     -3.33%
write_clat_ns_mean            6251404.78    6476816.06      39779.15      3.61%
write_clat_ns_p50             1656422.40       1581056      27415.68     -4.55%
write_clat_ns_p99               1.90e+08      1.94e+08       2097152      2.21%
write_iops                        159.43        154.12          1.03     -3.33%
write_lat_ns_mean             6251503.06    6476912.76      39780.83      3.61%

Particularly noteworthy are the massive regressions to
wait_current_trans in qgroup mode as well as the solid regressions to
bandwidth, iops and write latency. The regressions/improvements in
squotas are modest in comparison in line with the expectation. I am
still investigating the squota umount regression, particularly whether
it is in the umount's final commit and represents a real performance
problem with squotas.

Link: https://github.com/boryas/btrfs-progs/tree/squota-progs
Link: https://github.com/boryas/fstests/tree/squota-test
Link: https://github.com/boryas/fsperf/tree/unshare-victim
22
 
 

Hi,

there are mainly core changes, refactoring and optimizations. Performance is improved in some areas; overall there may be a cumulative improvement due to refactoring that removed lookups in the IO path or simplified IO submission tracking.

No merge conflicts. Please pull, thanks.

Core:

  • submit IO synchronously for fast checksums (crc32c and xxhash), remove high priority worker kthread

  • read extent buffer in one go, simplify IO tracking, bio submission and locking

  • remove additional tracking of redirtied extent buffers, originally added for zoned mode but actually not needed

  • track ordered extent pointer in bio to avoid rbtree lookups during IO

  • scrub, use recovered data stripes as cache to avoid unnecessary read

  • in zoned mode, optimize logical to physical mappings of extents

  • remove PageError handling, not set by VFS nor writeback

  • cleanups, refactoring, better structure packing

  • lots of error handling improvements

  • more assertions, lockdep annotations

  • print assertion failure with the exact line where it happens

  • tracepoint updates

  • more debugging prints

Performance:

  • speedup in fsync(), better tracking of inode logged status can avoid transaction commit

  • IO path structures track logical offsets in data structures and do not need to look them up

User visible changes:

  • don't commit transaction for every created subvolume, this can reduce time when many subvolumes are created in a batch

  • print affected files when relocation fails

  • trigger orphan file cleanup during START_SYNC ioctl

Notable fixes:

  • fix crash when disabling quota and relocation

  • fix crashes when removing roots from dirty list

  • fix transaction abort during relocation when converting from newer profiles not covered by fallback

  • in zoned mode, stop reclaiming block groups if filesystem becomes read-only

  • fix rare race condition in tree mod log rewind that can miss some btree node slots

  • with enabled fsverity, drop up-to-date page bit in case the verification fails

23
1
Btrfs progs release 6.3.2 (lore.kernel.org)
submitted 1 year ago* (last edited 1 year ago) by Atemu@lemmy.ml to c/btrfs@lemmy.ml
 
 

Changelog:

  • build: fix mkfs on big endian hosts
  • mkfs: don't print changed defaults notice with --quiet
  • scrub: fix wrong stats of processed bytes in background and foreground mode
  • convert: actually create free-space-tree instead of v1 space cache
  • print-tree: recognize and print CHANGING_FSID_V2 flag (for the metadata_uuid change in progress)
  • other:
    • documentation updates