jasonwcSo, "zfs send -i zfs-auto-snap_hourly-2014-12-26-0417 data/photos@zfs-auto-snap_hourly-2014-12-26-0517 | zfs recv Backups/photos" will send all incremental data between the specified snapshots.
jasonwcWill "zfs send -R -i -v data/photos@zfs-auto-snap_hourly-2014-12-26-0517 | zfs recv Backups/photos" also work?
jasonwcit seems "zfs send -I data/photos@zfs-auto-snap_hourly-2014-12-26-0417 data/photos@zfs-auto-snap_hourly-2014-12-26-0517 | zfs recv -d -F Backups/photos" is the command I want
eadrichi, can someone tell me how to send a dataset with all subdatasets and snapshots to another host
eadricI read the docs but... can't figure it out
eadricit skips the "sub"snapshots or gives me errors
lblume-R
eadricThat's what I thought but... no luck
eadricThe dataset is 2 T
eadricbut the send stops at 33 G
eadricWhich is in the toplevel dataset in this case
eadricok, sort of got it
eadricsnapshot -r first
lblumeheh, yes, it won't transfer snaps that don't exist :-D
eadricor that do exist in this case
eadricI removed all snaps and created new snapshots
eadricwhat I had was:
eadricpool/dataset/dataset
eadricwith snapshots of the last datasets
eadricand then snapshotted the "middle" dataset and tried to send that one
eadrictherefore the snapshot names are not the same
eadrichow do I send a dataset with snapshots already layered deeper in it?
lblume-R should do it
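A sketch of the sequence eadric ended up using, with placeholder names (pool/data for the parent dataset, otherhost and tank for the destination): the same-named snapshot has to exist on every child before -R can replicate them.
  # give every child dataset a snapshot with the same name
  zfs snapshot -r pool/data@migrate
  # -R sends the dataset, its children, their properties and snapshots
  zfs send -R pool/data@migrate | ssh otherhost zfs recv -d -F tank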
biaxmy zpool has 2 datasets. one block device... and well i intend to use this other one for samba. should i create a zvol and then put ext4 on it? or use zfs filesystem?
biaxwith zvol i use -V to specify size... but what about zfs filesystem?
Lalufuwhy not use zfs?
biaxk
bekksAnd why not use quotas? :)
biaxlearned
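What the quota suggestion amounts to, as a sketch with placeholder names and sizes: a zvol needs its size fixed up front with -V, whereas a plain filesystem can simply be capped with a quota.
  zfs create -V 500G tank/blockdev      # zvol: fixed-size block device
  zfs create -o quota=500G tank/samba   # filesystem: grows as needed, capped at 500G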
mohamedAziz17hello. I recently upgraded from wheezy to jessie and now I have a lot of problems. Yesterday I upgraded my system, but now when I try to install any package this is the output http://pastebin.com/XnKwqcuG I also tried apt-get -f install but there are a lot of errors. I also tried apt-get clean and then update and then upgrade, but same result. HELP please!
perfinioni dont see how that is related to ZFS?
jasonwcI'm having an issue sending a large dataset using the -R option with send/recv
jasonwchttp://pastebin.com/L8Y4dFNL
jasonwcIt sends about 1.7T and then fails stating "cannot receive incremental stream: invalid backup stream"
jasonwcI just did a scrub on the main pool a few days ago
jasonwcCould the snapshot itself be corrupted or is there another issue?
dasjoeAre you really recving on the same host?
jasonwcYes
jasonwcI created a separate pool to store my backups
jasonwcAlso, ZFS exhibits interesting behavior when the send operation fails
jasonwcThe data is still there as I can resume the send using zfs send -I from the existing snapshot to the latest one, but the directory structure does not show the dataset or .zfs under it.
jasonwcit failed again, this time on a different snapshot (same error). Any ideas?
jasonwchttp://pastebin.com/1vmRqMDG
dasjoejasonwc: stop the auto-snapshotting, try again?
dasjoejasonwc: it may be throwing away snapshots which are queued for sending
dasjoejasonwc: https://github.com/zfsonlinux/zfs/issues/1059
jasonwcActually, I'm pretty sure that's not the problem. When I run a large zfs send | recv I get emails such as these "cannot destroy snapshot data/backups@zfs-auto-snap_frequent-2014-12-26-1515: dataset is busy"
jasonwcBut I will definitely disable the cron job and test again.
jasonwcdasjoe, Is the problem in that bug report that the snapshots are being destroyed on the sending or receiving side? The bug report ends with "A similar bug was previously fixed with the zfs send operation, which marks dependent datasets as held until it completed. It seems to me that zfs receive should do the same thing - mark a just received dataset as held, until either the operation completes or the next dataset is received."
jasonwcI don't have auto-snapshotting on the receiving side
jasonwcSounds like the receiving side is deleting, causing the receive to fail
jasonwcJust re-read the bug report and found my answer: "Specifically, the system I was receiving on was using a timed zfs-auto-snapshot script." That isn't the case for me. I have auto snapshotting enabled on the sending pool but not the receiving. As the bug report suggests, no snapshots are destroyed while the send operation is in progress.
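The holds mentioned in the bug report can also be placed by hand. One possible workaround (not something taken from the bug report itself) is to put a user hold on the snapshots a long transfer depends on, so nothing can destroy them mid-send; the tag "keep" and the pool/ds@snap name are placeholders.
  zfs hold keep pool/ds@snap      # refuse zfs destroy on this snapshot while held
  zfs holds pool/ds@snap          # list existing holds
  zfs release keep pool/ds@snap   # drop the hold once the send has finished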
dasjoejasonwc: hm, too bad. Yours seems to be a new bug, then. Can you recv into a file without problems?
jasonwcSo, I just created a new snapshot and sent that (didn't use the -R option so there were no previous snapshots included). The 2.68T transfer completed successfully.
jasonwcI haven't had issues with the other datasets. Is it possible for a snapshot to be corrupt in a way that would cause the recv to fail?
dasjoeThat should not happen, but has happened before
dasjoeThere was a guy on zfs-discuss who managed to crash his box via recv
jasonwcWell, if so, it's not a data corruption issue as my last weekly scrub reported no errors, and I've now tried this several times, and I get no read errors.
jasonwcWell, just had the issue on another, much smaller dataset (52G)
dasjoeDurval confirmed the crashing, so *sometimes* a send stream may be unreceivable
jasonwcI'm going to try deleting snapshots to see if I can make it work
dasjoeAlthough that guy's stream came from FreeBSD, iirc
jasonwcI'm not sure how to read the error. It looks like the last snapshot sent successfully and there was an issue reading the next snapshot. But if I check the list of snapshots, it appears the prior snapshot never completed.
jasonwcIn this case, VMs@zfs-auto-snap_hourly-2014-11-01-0317
jasonwcAnd it worked
jasonwcI deleted that snapshot, destroyed Backups/VMs, and reissued the send/recv command
jasonwcThis time it succeeded
jasonwcDamn, I think you were right all along dasjoe
jasonwcIt was doing snapshots on the new Backup pool
jasonwcSo, your explanation was likely the reason for the problem
jasonwcI need to manually disable snapshots on that pool
jasonwcIt copied the properties from the source dataset
jasonwcI forgot that the auto-snapshot property is maintained
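For reference, the zfs-auto-snapshot scripts key off a user property that send -R copies along with everything else, which is why the backup pool started snapshotting itself. Assuming the commonly used com.sun:auto-snapshot property is what the installed scripts honor, the override looks like this.
  zfs set com.sun:auto-snapshot=false Backups
  zfs get -r com.sun:auto-snapshot Backups   # confirm children inherit the override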
jk4Has there been reports of issues with debian testing?
jk4has the appearance that zed isn't working
jk4symbol lookup error in libzpool
jk4but nothing else gives that error just zed
jk4this was probably a silly time to mention it since i don't have the drive with me to try things
jasonwcInteresting, I'm using Wheezy and zed works perfectly. I might have to install jessie in a VM and see if zed breaks.
dasjoejk4: I'm sure FransUrbo is interested in the precise error you're getting
jasonwcdasjoe, So, it appears the bug report you cited is likely impacting me as well. Cron emailed me to tell me that the frequent snapshots on the sending side could not be deleted as the dataset was in use, but there was no such warning on the receiving side, suggesting that they were in fact being pruned.
jasonwcI have since disabled auto-snapshotting on the receive side and am sending the 2.8T dataset with all its snapshots to see if that was in fact the problem
jasonwcI confirmed that it is no longer creating auto snapshots on the receive side
jk4linking error apparently
jk4"symbol lookup error: /lib/libzpool.so.2: undefined symbol: fnvlist_num_pairs"
jk4that's from zed
jk4no other tools seem to give any errors. they just report that no pools are found
jk4which.. makes sense
dasjoejk4: did you reboot after the last upgrade?
jk4yes, indeed
jk4first course of action is to turn it off and on again
dasjoejasonwc: you might want to update the ticket if you manage to find out something new, or to just say you're affected, too
jasonwcYeah, I will
dasjoejk4: "dkms status" lists spl and zfs built for the currently active kernel?
jasonwcI'm just verifying that was the bug I was seeing
jk4dasjoe: yes
jk4# uname -v
dasjoejk4: the error you're seeing *usually* indicates the userspace tools and kernel module being out of sync, "zpool status" or "zpool import" don't work?
jk4#1 SMP Debian 3.16.7-ckt2-1 (2014-12-08)
jk4they do work, but say no pools found
jk4# dkms status
jk4spl, 0.6.3, 3.16.0-4-amd64, x86_64: installed
jk4zfs, 0.6.3, 3.16.0-4-amd64, x86_64: installed
dasjoeThat's 3.16.0, not 3.16.7
dasjoeAlso, please use a pastebin to paste the full output :)
jk4that's what was built fresh last night on the same kernel
jk4full output of what?
dasjoe"dkms status"
jk4there is nothing else
dasjoeInteresting. How come your kernel is 3.16.7?
jk4that doesn't look like an issue because i figure that the same major revision would be okay
jk4not sure why the difference
jk4suppose i could purge the packages and then reinstall
jk4though i did do that a number of times
jk4is it worth purging the lib* packages?
jk4purged nvpair1
jk4will now start debian-zfs install
jk4be back in a few
jk4i get the same listing from dkms status
jk4same zed error
jk4well, i am more than happy to work with someone to get to the root of the issue
jk4i just don't know enough about how it all fits together to triage much myself
dasjoejk4: I suspect a mismatch between your kernel version and the modules, do you have your kernel's headers installed?
dasjoejk4: is that kernel self-built? It doesn't look like it's coming from the repos
dasjoejk4: what's the kernel package name? All I can find for Debian is that kernel intended for use in debian-installer, not in installed systems
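A few checks that would show whether the loaded module and the userspace libraries actually agree, which is the usual cause of an unresolved symbol like fnvlist_num_pairs; nothing here is specific to jk4's box.
  uname -r                      # kernel actually running
  dkms status                   # kernels spl/zfs were built against
  dmesg | grep -i "zfs:"        # version of the module that actually loaded
  dpkg -l | grep -E 'spl|zfs'   # versions of the installed packages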
DrLouZFS wizards:
dasjoeOoh, we're wizards
DrLouis there a way to undo an erroneous add to a datapool?
dasjoeNo, sorry
dasjoeDelphix is working on something, but it's not merged upstream yet
lblumeAh, that's what I must have heard about then
dasjoeYou can try to get your pool rolled back to a previous state, but the last time somebody in here tried that it didn't work
dasjoeDrLou: did you add the vdev just now? You might have luck by exporting the pool and trying a reimport with -T
DrLouwell, it’s a recent ‘add’ of a disk which was supposed to be a replacement for a faulted one.
DrLou‘just now’ in usage terms - machine has been dormant for 24 hrs or so…
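For context on the -T suggestion: it refers to importing the pool rewound to an earlier transaction group, which only works if nothing has overwritten the old state yet. This is a last-resort, largely undocumented path; the pool name and txg number below are placeholders.
  zpool export pool
  zpool import -o readonly=on -T <txg> pool   # <txg>: a transaction group from before the add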
jk4dasjoe: it's from the repos
dasjoeDHE just said the rollback window was very short, so it probably won't work
dasjoejk4: link? I see https://packages.debian.org/jessie/kernel-image-3.16.0-4-amd64-di but that's for debian-installer only
DrLoui can’t even get a history on the pool at the moment...
jk4https://packages.debian.org/jessie/linux-image-3.16.0-4-amd64
lblumeDrLou: Since you added to a mirror half, and if you can stop services, it might still be relatively easy to send the data from the side with the extra vdev to a new vdev on the new disk, then destroy the original pool, and reattach that disk to the new pool.
jk4if you look to the right.. you see my kernel shown
jk4i certainly didn't build it
jk4actually the heading of the page says "Package: linux-image-3.16.0-4-amd64 (3.16.7-ckt2-1) "
jk4that link is a dep of linux-image-amd64
DrLoulblume - am leaning toward something like this, but am nervous about the workflow…(!)
jk4https://packages.debian.org/jessie/linux-image-amd64
DrLouI _do_ have a fully-intact mirror-0 on this pool; shouldn’t that be my go-to?
lblumeDrLou: The trick is making sure beforehand that your backup tapes are available and readable.
DrLouno backup tapes on this machine - all the mirroring was the protection!
DrLouand, isn’t ‘disk3’ still a full copy of mirror-0?
lblumeACTION shakes head
DrLou‘shakes head’ at the lack of backup tapes?
lblumeYep. Mirroring's not an alternative
lblumeI think if you're careful, you're still good.
DrLouI’m trying to be careful - I’ll have to chalk this up to further edification on ZFS...
jasonwcdasjoe, That was indeed the problem
jasonwc zfs send -R data/backups@Test | pv | zfs recv Backups/backups2
jasonwc2.86TB 2:26:33 [ 342MB/s] [ <=> ]
jasonwccompleted successfully
lblumeDrLou: Try to get an extra copy of the data that you'll keep around until you are done
DrLou‘copy’ being zpool export/ import?
lblumezfs send, rsync, anything that puts the data on a different disk that you won't touch until finished with the current pool, just in case
DrLoutks for your help. Trying...
lblumeGot to sleep now, good luck, just take things slowly.
DrLouit’s a matter of (re) reading all the various docs, getting my nerve up!
dasjoejk4: right, sorry, didn't see that :) So, you got the .deb from zfsonlinux.org, installed it, then installed debian-zfs and it doesn't work?
jk4correct
jk4dasjoe: correct
jk4the pasted error is the only thing i've seen that looks improper
dasjoeRight, "dkms status" still lists the modules as installed? And they're loaded?
dasjoeIt'll take a while, I'll set up a VM
jk4dasjoe: i can attempt to do some debugging here if you have instructions
dasjoejk4: well, you tried reinstalling and everything. I'll just whip up a fresh VM and check whether the package is installable and working
p_lhttp://lwn.net/SubscriberLink/627419/0accdce2651e00cb/ <--- heh
dasjoejk4: can't reproduce what you're seeing
dasjoe# zpool status -x tank → "pool 'tank' is healthy"
dasjoejk4: what I'm seeing: http://paste.debian.net/138198/
DrLouok, I’ve foolishly added a disk _beside_ mirror-0 not _to_ it…
DrLouis there no way to a) remove it or b) re-build a mirror from its still-quite-happy partner?
dasjoeDrLou: you can't remove a vdev as of now
dasjoeDrLou: you can still copy your data off the pool and recreate it
DrLoudasjoe - yes, ok. tks. As I’m reading/understanding
DrLoudon’t have any free sata ports on this machine to make that easy...
DrLouso, can’t even remove the 4th drive (to erase it, making it the basis of newpool)
dasjoeNo, that's not possible. What does your pool look like right now? pastebin a "zpool status"
DrLouit’s an embarrassment, arising from a late-night frustrated attempt to replace a faulted 4th drive...
DrLouhttp://pastebin.com/Z7XxYR9a
DrLout3 and t5 _used_ to be mirror_1
dasjoeRight, you can't fit all your data on a single disk?
dasjoeAlso, I hope you've got good backups, as you're running without redundancy
DrLoui could (eventually) fit it all on a single disk - just can’t easily attach another one right now!
DrLouyes, I get that the mirror-1 is gone.
DrLouBut, as t3 was _half_ of mirror-1, and no meaningful data has been written in 24 hrs, do I have any options?
dasjoeYou could, if you had backups, detach one disk from the mirror, zpool create on that now empty disk, move your stuff over, destroy old pool, attach a mirror disk to new pool, then add the mirror pair to it
DrLoudamn, if only i could do that with t3 - half of mirror-1 (?? !!!)
DrLouwell, it’s time for me to put an HBA in this box anyway - more storage, and perhaps Thunderbolt
dasjoeAs said before, you can't remove top-level vdevs once they have been added. I'd probably detach from mirror-0 and use that detached disk, but I'd also have backups ;)
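dasjoe's suggested path, as a sketch only and assuming good backups exist elsewhere; pool and device names are placeholders.
  zpool detach pool mirror0-diskB          # free one healthy disk from mirror-0
  zpool create newpool mirror0-diskB       # build a temporary single-disk pool on it
  zfs snapshot -r pool@move
  zfs send -R pool@move | zfs recv -d -F newpool
  zpool destroy pool                       # only after verifying the copy
  zpool attach newpool mirror0-diskB mirror0-diskA   # re-mirror the new pool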
DrLouand that’s a series of zfs send/recvs - there isn’t a global ‘duplicate this pool’ kinda command, right?
cyberbootjedasjoe: you there?
dasjoeYes
dasjoecyberbootje: I sent you a query a while ago
cyberbootjethat's weird, didn't get anything
cyberbootjealso when i do a pm i get: "486: You must log in with services to message this user"
jk4i think i pasted the wrong part of the uname earlier
jk4# nm /lib/libnvpair.so.1
jk4nm: /lib/libnvpair.so.1: no symbols
jk4hmmp
jk4that seems wrong
jk4dasjoe: http://slexy.org/view/s21w1XXQG5
jk4this looks basically just like yours except that there's a line missing from dmesg
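One thing worth ruling out on the nm output above: plain nm reports "no symbols" on any stripped shared library, so it says nothing by itself. The dynamic symbol table is what the loader uses, and checking it would show whether the installed libnvpair really lacks the symbol zed is tripping over.
  nm -D /lib/libnvpair.so.1 | grep fnvlist_num_pairs      # is the symbol exported at all?
  objdump -T /lib/libzpool.so.2 | grep fnvlist_num_pairs  # UND here means libzpool expects it from elsewhere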
crocketHow do I remove a faulty drive from a RAIDZ pool?
crocketzpool replace faulty-drive?
crocketzpool replace pool faulty-drive?
crocketzpool offline & zpool replace?
bearfacezpool offline pool drive then zpool replace pool old-drive new-drive unless my tired tipsy state is playing tricks with me
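A sketch of that sequence with placeholder pool and device names; if the replacement disk goes into the same slot as the old one, zpool replace also accepts a single device argument.
  zpool offline tank sdX        # take the faulted disk out of service
  zpool replace tank sdX sdY    # resilver onto the new disk
  zpool status tank             # watch resilver progress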
crocketok
crocketYes, I can have control over my system.
jk4is a good situation to be in