dasjoe<veikko> if you want a stable filesystem, try it again 2-3 years from now ← I disagree, ZFS on Linux is stable
dasjoelolcat: did you get it to work by now? If not: dkms will help, after upgrading zfs-dkms
veikko< veikko> I appreciate <- I have to clarify that I don't appreciate it if you force her into anything, but I do appreciate it if somebody contributes to the project in general
veikkodasjoe, not for me.. I had used it for only a couple of hours when I got a serious I/O bug which froze the whole system
veikkostable for a filesystem means that you can trust your files to it and that is not the case with zfs atm
lolcatdasjoe: yes, apt-get install zfs-dkms zfsutils fixed it
dasjoeveikko: our opinions differ, then. It has been very stable for me and saved my data many times
dasjoelolcat: good. Did you find out why zfs-dkms was held back? Which distribution are you using?
lolcatveikko: zfsonlinux, zfs on freebsd is like a rock
veikkololcat, it's been supported longer on FreeBSD
lolcatdasjoe: ubuntu, apt works in mysterious ways, I have no idea
dasjoeZFS on Linux has been production-ready for quite a while now
lolcatdasjoe: give it a couple of years...
dasjoelolcat: stable or daily PPA? 14.04 or 14.10?
dasjoelolcat: no need to give it some years when I've been using it in production since 2011 :)
lolcat14.04, no idea what ppa
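A minimal sketch of the dkms check and rebuild being discussed here, assuming an Ubuntu 14.04 box with the ZoL PPA (standard apt/dkms tooling, not quoted from the conversation):
  dkms status                               # which spl/zfs module versions are built for which kernels
  apt-mark showhold                         # any packages apt is holding back
  apt-get install --reinstall zfs-dkms      # force a rebuild against the running kernel
  modprobe zfs && modinfo zfs | head -n 3   # confirm the module loads and which version it is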
lolcatdasjoe: You can use unstable software for production, but I wouldn't recommend it
dasjoelolcat: ZoL has been "ready for wide scale deployment" since 0.6.1, see https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-announce/ZXADhyOwFfA
lolcatdasjoe: yes, because someone writes it on the internet it must be true
lolcatUntil I can be sure a normal update won't render my filesystem dead I can't use it as my only filesystem
dasjoelolcat: *plonk*
dasjoeNot the FS' problem if you fail to read your upgrade log
lolcatdasjoe: ext4 never stops working after an update
lolcatYou don't have to read an upgrade log to be sure ext4 works after upgrading
cyberbootjedasjoe: i pinned the cpu load issue down to the fact that it's acting up big time when i use it on a zvol.... on the pool itself it's not that big of an issue it seems
cyberbootjeso now, is this a bug or am i doing something wrong...
rhaguhi, I am running Centos 7 and after a kernel update I had problems with it again. It just doesn't automatically build the modules and therefore I need to do import -f after every boot. The old kernel works though. I also saw another issue I really don't know how to investigate: http://s23.postimg.org/pnptq4by3/distorted_names.png i built this raid calling each drive by its name: ata-TOSHIBA_MQ... and now two of those names are distorted
dasjoerhagu: "import -f" has nothing to do with modules not being built automatically, dkms takes care of rebuilding modules on kernel upgrades whereas ZFS checks your hostid to find out whether it's been imported from a different system
rhaguok, at least the last part seems to happen due to a different problem: http://pastebin.com/0myh0x6q has anyone ever seen something like this?
dasjoerhagu: check the content of your /etc/hostid, it should be a binary file containing 4 bytes. "xxd /etc/hostid" displays them
rhagudasjoe I have no idea why kernel updates do not work for me, I would like to know
dasjoerhagu: did you manually trigger the module build after the last kernel update?
rhagudasjoe no such file
dasjoerhagu: "dmesg | grep hostid"
rhaguthe last time I downloaded the specific headers of the last kernel and then reinstalled zfs with yum
rhaguhttp://pastebin.com/Q7tmp2GA
dasjoerhagu: create a random hostid, then update your initrd. The first part: dd if=/dev/urandom of=/etc/hostid bs=4 count=1
rhaguhttps://github.com/zfsonlinux/zfs/issues/2800
ihrmIs there any way to change the case sensitivity of an already existing pool?
rhagudasjoe thanks for the help, what do your commands do? what is /etc/hostid?
dasjoerhagu: it's a file containing your system's hostid, which is usually derived from its MAC address. Anyway, ZFS makes use of the hostid to check whether the pool comes from a different system
rhaguok, I created it, what should I do now?
dasjoerhagu: it won't import a pool using "zpool import" unless the pool was exported from a system with the same hostid, but yours doesn't exist, so 0x00000000 is used. That value is invalid
rhaguI see so a reboot now should first automatically export the pools with the right hostid and then reimport them?
dasjoerhagu: update your initrd
rhaguok, how can I do that?
dasjoerhagu: I'm not using CentOS, but the internet™ suggests "man mkinitrd" or "man dracut"
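A minimal sketch of the fix dasjoe outlines, assuming CentOS 7 with dracut (the initramfs path is the usual default and just an assumption):
  dd if=/dev/urandom of=/etc/hostid bs=4 count=1            # create a random 4-byte hostid
  xxd /etc/hostid                                           # verify it is no longer all zeroes
  dracut -f /boot/initramfs-$(uname -r).img $(uname -r)     # rebuild the initramfs so it carries the new hostid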
rhagudo I need to do this every time there is a kernel update?
dasjoerhagu: no, dkms builds the modules automatically as long as the new kernel headers are installed
rhagudasjoe, ok I built the new initrd (at least I followed the mkinitrd example) Anything else?
dasjoerhagu: a reboot to try it? Also, you should look into fixing the weird distortion. Maybe your SATA controller is on its way out?
rhagudasjoe that is my next question, I have no idea how to check that out
dasjoerhagu: export the pool, remove one of the affected disks, plug into next box, check its name? :)
rhaguoh, yes :-D
rhaguI looked at lsscsi and the garbled ones are attached to different controllers as far as I can see: http://pastebin.com/wVzQCZvV
dasjoerhagu: they are on your host0 and host1, which may be your on-board ports
luckylinux<dasjoe>: I finished sending the snapshot with the right command (hopefully) however I find a strange thing: on the destination pool less space (2.99T) is required than on the original pool (3.43T) - compression is the same (lzjb). Is this normal?
dasjoeluckylinux: I had similar experiences, so I assume "yes"
luckylinux<dasjoe>: If that matters original pool is RAIDZ-2 of 6x3TB disks while destination pool is single disk of 4TB
dasjoeluckylinux: that's one factor, yeah. Is there any reason why you're using lzjb?
luckylinuxanyway this was by using "zfs send -R zdata@20141223 | zfs recv -Fu zarchive_23_12_2014". Should be alright now (mountpoints are different between pools this time ;) )
luckylinux<dasjoe>: this is my FreeBSD NAS ... as I recall at the time (FreeBSD 9.2) LZ4 wasn't available yet
rhagudasjoe the first seems to be my m1015 as there are 8 disks on the same controller and the second one seems to be my second m1015. I checked the drive on another pc, it can read the SMART values without a problem
luckylinux<dasjoe>: Does it seem alright then? I had two snapshots (one dating back 2 days) and I only sent this morning's snapshot using the command above. Still, everything (the complete pool as of this morning) should've been sent to the destination, right?
chungyraidz overhead is probably making most of the difference
dasjoerhagu: interesting. Maybe flash a new FW
chungyAlso, if compression wasn't always enabled, the received blocks will be recompressed with the present compression setting
rhagudasjoe first time this happened in 2 years
rhagunever before
luckylinux<chungy>: you mean extra blocks to compare checksums between parity drives and such?
dasjoeluckylinux: yeah, you sent a replication stream of "zdata" up to today's snapshot
luckylinux<dasjoe>: so this replicates the pool as of this morning. It doesn't matter that another snapshot of zdata had been made two days ago, right?
dasjoeluckylinux: that snapshot should be part of the stream
luckylinux<dasjoe>: the old one? Strangely enough I see it on the destination pool :D
luckylinux<dasjoe>: so the snapshot I made this morning actually snapshotted the snapshot I did two days ago and sent it along with the pool's content?
dasjoeluckylinux: not so strange, "man zfs" helps
dasjoe"-R Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved."
luckylinux<dasjoe>: ah ok thanks
luckylinux<dasjoe>: then disk usage differences are just overhead. I can finally shutdown the server for now ;)
luckylinux<dasjoe>: I was just a bit scared of such a big difference but since there are so many files in the pool RAIDZ2 overhead will be huge (see https://github.com/zfsonlinux/zfs/issues/548)
luckylinux<dasjoe>, <chungy>: thank you for your help. I'm going to sleep now -
rhagumhm dasjoe, removed and plugged them back in and they are recognised again ...
prometheanfirebehlendorf: thanks :D
dhoffendthis zvol performance is killing me ...
prometheanfiredhoffend: orly
dhoffendcopy stuff to zfs filesystem = no problem. copy stuff to zvol/ext4 is killing the server, Load is getting >100 before I kill the command.
behlendorfprometheanfire: n/p, thanks for reporting it. That configuration sees less testing than others.
prometheanfirebehlendorf: np, I need to update a few servers/laptops with new kernels soon so I'll probably be using modules for this set
prometheanfirechanged the title to more accurately reflect the bug too
dhoffendprometheanfire: do you experience the same or are you just making fun :-)
prometheanfiredhoffend: define bad?
prometheanfireand zpool setup is important too
dhoffendwell my zpool is 4x1TB SAS with raidz2 + 2x100GB SSD log(mirror) + 2x100GB SSD cache
dhoffendwhen I rsync or dd stuff from my old mdraid/lvm to zfs I get good performance when working with a default zfs filesystem
dasjoeprometheanfire: cyberbootje experiences a similar issue
dhoffendbut as soon as I work with zvolumes (which I use for kvm VMs) i get cpu soft lockup, memory is getting out-of-control and the load runs up to >100
dasjoebehlendorf: maybe interesting for you, too. Will zvol performance improve for 0.6.4? I haven't checked the commits nor the issue list
dhoffendi've just rebooted with those 2 spl kernel module options (reclaim=0, etc). The soft lockups don't come as fast, but the performance (or internal blocking, whatever it is) is still the same
prometheanfiredhoffend: raid10 like?
prometheanfireor raidz
dhoffendraidz2 atm
behlendorfdasjoe: There are some changes which may help, but nothing specifically targeted for it. Do you have a specific benchmark which shows the issue?
prometheanfirewell, that's going to be slow
prometheanfirebehlendorf: talking of ryao's fun changes?
dhoffendbut the performance isn't a problem when i'm using zfs filesystems ... only when putting IO pressure on ZFS volumes with internal partition tables and filesystems
prometheanfireoh, I also run a git version of zfs, so not running 0.6.3, so I have some of the perf improvements :P
dhoffendprometheanfire: well ... when I rsync stuff with bwlimit=20000 (20mbit) to a zvol it should not kill the server ... creating locks and push the load > 50
prometheanfireiirc, you can turn off write syncing on zvols and stuff
dasjoebehlendorf: cyberbootje reported (in here) that "zfs create -V 10G pool/testvol1; dd if=/dev/zero of=/dev/zvol/pool/testvol1 bs=1M count=4000" (no compression) makes his CPU usage go up to 80% on all 24 cores in htop. If he doesn't limit the count the box becomes unresponsive, if I remember correctly
dhoffendi removed my cache and log ssd's from the pool
prometheanfireand turn off replication too? don't think so but that'd help (might need that fancy rewrite code for adding drives for that)
prometheanfiredhoffend: version of zfs?
dhoffendstill the same .. it just takes longer cause
dhoffend0.6.3
dasjoebehlendorf: I couldn't reproduce that issue using dd to a file, but to a zvol shows similar behavior
prometheanfiredhoffend: from what package?
dhoffenddebian7 with zfsonlinux repo
prometheanfire0.6.4 has quite a few stability improvements on the way
prometheanfiredunno which specific 0.6.3 they package
dhoffendi've seen ticket regarding 0.6.4
prometheanfireI can say that when 0.6.3 was first released I was still having problems, now I'm not
dasjoebehlendorf: this is what I'm seeing using the default volblocksize: http://dasjoe.de/dump/iowatcher/testvol.svg and this is for 128K: http://dasjoe.de/dump/iowatcher/testvol-128.svg
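For comparison, a hedged sketch of how the two graphs above could be reproduced (pool and volume names are made up; the default volblocksize was 8K at the time):
  zfs create -V 10G pool/testvol-default
  zfs create -V 10G -o volblocksize=128K pool/testvol-128k
  zfs get volblocksize pool/testvol-default pool/testvol-128k
  dd if=/dev/zero of=/dev/zvol/pool/testvol-128k bs=1M count=4000   # repeat for the default volume and compare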
dhoffendprometheanfire: what version/package are you running?
prometheanfiredhoffend: git master from a couple of months ago
prometheanfiregentoo zfs-9999 :P
dhoffendah okay
prometheanfirebefore that I was using some development patches ryao was nice enough to share, but those have been merged now
dhoffendyeah ... i'm really looking forward to an upcoming version
prometheanfirebehlendorf: we get a christmas present?
prometheanfire:P
dhoffendprakashsurya: i guess with the number of open tickets for 0.6.4 ... this won't happen
prometheanfiredhoffend: lolololololol
dhoffendoops ... wrong autocomplete :-)
IRConan87 tickets, 2 days
IRConanI'm sure it can be done!
prometheanfireseems doable
dasjoeGo fix stuff, then? :)
prometheanfireya, sure ok
dhoffendwith 1 minute of copying a file into a zvol/ext4 I can get the system load up to 50. Now that I've stopped the command, the load goes up to 100 (a minute later) until everything has been written to the disks
dhoffend*waiting*
Shinigami-Samaoww
Shinigami-Samabest I can get my system up to is load 32
dhoffendthe host system is running on separate disks not connected to the zpool or anything else ... otherwise I would have killed myself every time one of the guest VMs had a higher load
prometheanfiredhoffend: I was using qcow files (openstack), that was fine
prometheanfireI know zvols still have some perf issues though
prometheanfireqcow on a cow filesystem lol
dhoffendi guess that's a possible solution
dasjoedhoffend: prometheanfire: I'm using raw images, not qcow2. Works like a charm and benefits nicely from lz4
dhoffendi guess that's the way to go if I want to get better performance
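A rough sketch of the raw-image-on-a-dataset setup dasjoe describes (dataset and image names are hypothetical; the lz4 property is the point):
  zfs create -o compression=lz4 tank/images
  qemu-img create -f raw /tank/images/vm1.img 40G
  # then point the KVM/libvirt disk at /tank/images/vm1.img instead of a zvol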
prometheanfireI need to write that zfs backend for nova
dhoffendnova?
prometheanfireopenstack
dhoffendah okay
prometheanfirethe compute piece of the puzzle
prometheanfirewhatever
prometheanfire:D
dhoffenddasjoe: so file based images have better performance than zvols? at least right now i guess
dhoffendsounds strange that files on top of a filesystem should be faster compared to "block devices"
dasjoedhoffend: I actually haven't tried ZVOLs, as I like being able to move and grow files
prometheanfireZFS's backend is object based, each block is an object
prometheanfireso files or not doesn't matter much
dhoffendpossible
prometheanfirethe fs side has had at least some optimization
prometheanfirethe zvol side not so much I think
ihrmAnyone here use plex?
prometheanfiredasjoe: ya, the only reason I don't like files is because of cow on cow
prometheanfireI might store cow on the backend and use flat on nova
prometheanfirethat would make me happy
prometheanfireihrm: xbmc
dasjoeprometheanfire: no reason to use cow :)
dhoffendzvol was just too nice to be true .. snapshotting block devices, cloning them, rolling back a volume, etc ... this is still possible with file based images .. but well ... it's just a different way
ihrmplex media server I should say
dasjoeprometheanfire: I'm actually using a qcow2 image that's backed by a raw one
dhoffenddasjoe: do you have all your images in one dataset? or a dataset for every disk image?
prometheanfiremost openstack stuff is qcow2
ihrmWell i just installed plex media server and it can see my main data set but it has no access to the folders within, any ideas why?
dasjoedhoffend: it's all in a single dataset right now, transitioning between servers
prometheanfireihrm: sounds like a perm issue, I'd ask in their channel
behlendorfprometheanfire: The best I can do for Christmas presents is a Fedora/EPEL point release based on 0.6.3 to address some build issues and a few important bug fixes.
behlendorfprometheanfire: It comes with a new FC21 repository.
prometheanfirebehlendorf: np, was just poking fun :D
ihrmThat's what I'm thinking, i had to change some permissions to get samba to work
behlendorfdasjoe: OK, is there an issue open for the zvol problem so we can track it?
itr2401QQ - is anyone here running netatalk 3.1.x & using AD auth / winbind serving out zfs zvol's? Non AD auth works without issue, AD auth fails on OSX - but PAM, Netatalk report auth OK ..
dhoffendbehlendorf: would you recommend using file-based images over zvols atm when using an older 0.6.3 zfs release (the debian one)? Cause I've got performance issues on zvols right now
prometheanfirebehlendorf: curious, what percentage of the zfs bugs do you think are dups?
behlendorfPersonally I've been using file based images over NFS for all my VMs. But that's just been a matter of convenience, I've found it works well. I haven't spent much time testing with zvols so I'm not sure I could say which is better.
behlendorfprometheanfire: Lots I suspect.
behlendorfBut it's hard to say until you carefully go through each one.
prometheanfireya, I was thinking 70%ish
prometheanfirebehlendorf: nfs4?
dasjoebehlendorf: https://github.com/zfsonlinux/zfs/issues/2272 looks similar, I'll update that ticket tomorrow
prometheanfirethat's what I use
behlendorfYup, krb5p NFSv4.
anticwi use zvols for VMs very heavily - so far it's working out fairly well
prometheanfireI need to use more krb auth with my nfs stuff, since I have both set up, just not talking to each other
itr2401ive been using zvols for 5 x VMs under Xen for 2 years - no issues, though with 0.6.3 / git I've noticed fragmentation increase over time. In the past year fragmentation has jumped around 1% each month to currently 12%, though I can't be entirely sure that it is just the Xen VMs doing it
itr2401behlendorf: Just wondering if you also had any chance to further investigate the rcu issue - https://github.com/zfsonlinux/zfs/issues/2564 - looks like someone else may have triggered it as well
gardarwhen I try zfs set sharesmb=on I get this error cannot share '***': smb add share failed
gardarand no further explanation
gardarany thoughts?
cyberbootjeso, interesting... moved my kvm image to /tank/image.raw and dd or load within a VM is not that bad for the cpu...
anticwi don't know how well sharenfs= and sharesmb= are supported ... i'm not even sure if people expect them to be at some point
anticw(though it seems perhaps we should)
gardaranticw: sharenfs works fine
gardarand from what I read sharesmb should work fine as well
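On ZoL, sharesmb= is typically implemented through Samba usershares, so a few things worth checking in gardar's situation (this is an assumption about the setup, and the dataset name is a placeholder):
  testparm -s -v 2>/dev/null | grep -i usershare   # smb.conf must allow usershares (usershare path, usershare max shares)
  net usershare info --long                        # shares ZFS has actually managed to register
  zfs get sharesmb tank/dataset                    # confirm the property value being applied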
cyberbootjei can safely say that it's better than having vm images on a zvol, but i expected more performance
anticwcyberbootje: vms on zvols work very well for me ... i've not compared against files recently but i'm not sure that would be faster
anticwtrim also works for zvols which is nice
cyberbootjeanticw: did you do "dd if=/dev/zero of=/tmp/test bs=1M count=1000" within that vm image and watched the cpu on the host node?
anticwcyberbootje: no
cyberbootjei'd say, try it and you will see what i mean
cyberbootjereally not normal
anticwbehlendorf: when locks are taken ... is there some way to record the 'owner' so that when things are blocked for too long we can get some idea of who has the lock and why?
anticwcyberbootje: tried it ... i don't see anything unexpected
cyberbootjeok, can you tell me what OS, version, zfs version, etc... ?
cyberbootjeso i can try that
anticwthis machine is debian 3.16.something (whatever is in testing). stock kernel. stock kvm. zfs v0.6.3-155_g7b2d78a
anticwcyberbootje: the guest seems fine, performance for writing 0s is 'good' (in as far as it doesn't do much of anything): 1048576000 bytes (1.0 GB) copied, 0.386119 s, 2.7 GB/s
behlendorfitr2401: Sorry, I haven't had a chance to seriously look at it yet.
cyberbootjeanticw: interesting... are you willing to create a new zvol and do a dd to that zvol directly ?
anticwjust did
anticwit's slower if the data set is larger (cache effects)
cyberbootjeanticw: ok, and if you do bs=1M and count=15000 ?
behlendorfanticw: Yes, and in fact this already happens for certain lock types.
cyberbootjemy load will go to 70
anticwcyberbootje: dd is a crappy test when used like that
cyberbootjesee link: http://tinypic.com/r/2n87vqw/8
behlendorfanticw: In your case the rrw lock does record the writer, but not all readers.
anticwwhat are you trying to determine here?
behlendorfAnyway, guys. I've got to run.
anticwbehlendorf: does that mean you have half a fix? :)
anticwbehlendorf: thanks!
itr2401behlendorf: no probs
behlendorfanticw: See the end of rrw_enter_write()
anticwbehlendorf: let me dig out the other stacks and update the bug
anticwsometimes i think it would be easier to debug this with cpus=2 or something
cyberbootjeanticw: well, i noticed the load on the host(zfs) go to 20 when i did a file copy within a vm test, then using dd i got horrible results where a normal disk is just fine...
anticwcyberbootje: your test: 15728640000 bytes (16 GB) copied, 20.1466 s, 781 MB/s
cyberbootjei was on the verge of using this in production, actually today but this prevented it
itr2401anticw / cyberbootje: To test a disk perf on a VM I use "/usr/bin/iozone -i 0 -i 1 -t 1 -s 128m -r 128k -t 10"
anticwthe host CPU barely noticed
anticwitr2401: i don't know about iozone, but dbench writes 0's so gets insanely fast silly numbers
cyberbootjeanticw: saw the picture? that is what i'm getting over and over
anticwcyberbootje: saw no picture sorry ... came in here late and didn't check scrollback
cyberbootjeanticw: now i'm very interested in how you created your pool and zvols... maybe i'm missing settings...
cyberbootje http://tinypic.com/r/2n87vqw/8 that one..
itr2401or if a Windows VM - Crystal DiskMark. If benchmarking the server itself, I use Intel's NAS Performance Toolkit
cyberbootjehere i did a dd from a .raw file to a zvol
cyberbootjenothing special
anticwcyberbootje: files on the fs probably have async write-out where zvols may not
anticwso maybe you create lots of RMW updates?
cyberbootjeno clue...
anticwcyberbootje: what io model do you use for the guest? and how is the guest fs laid out?
cyberbootjein this case i'm using 8 SSD disks, but i'm also having issues without raidz and with just 1 disk
cyberbootjeit's not only within a VM...
anticwis this gentoo?
cyberbootjedebian7 64 bit
cyberbootjewith debian-zfs and KVM/qemu (guests also debian7), but that last part doesn't matter; on the host node (zfs) the cpu usage is also high when doing a dd to a zvol
cyberbootjedoing a dd to the pool itself causes less load but there is still a lot of cpu activity i think...
anticware these sync updates?
cyberbootjehow can i check?
anticwmy guess is they are
anticw# zfs get sync pool/path/to/zvols
cyberbootjewhat am i looking for?
anticwjust disable sync and retest
anticwi'm not sure it's that useful in many cases (because it assumes hardware won't lie ... it does)
cyberbootjeyou mean zfs set sync=disabled tank ?
anticwyou can do for the entire thing or just what you wish to test
anticwthere is a hierarchy
cyberbootjeok, tried it...
cyberbootjeno luck, 24 threads to 95% usage
anticwwhich threads are in D ?
cyberbootjez_wr_iss a lot of them
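Two quick ways to answer "which threads are in D" (generic Linux tooling, not specific to this setup):
  ps -eo state,pid,comm | awk '$1 == "D"'              # tasks in uninterruptible sleep, e.g. z_wr_iss
  echo w > /proc/sysrq-trigger && dmesg | tail -n 50   # dump stack traces of blocked tasks (needs root)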
anticwis the host severely ram limited?
cyberbootje128GB ram, plenty free
anticwmount the zvol directly and try from the host
anticwsame outcome?
cyberbootjefor now it's only a test system so every resource is freely usable for the test, and the load still manages to go to 60
cyberbootjeyes did that, same issue
anticwis compression on?
cyberbootjeno, not now
anticwturn it on
cyberbootjetried it with and without
anticwleave it on
anticwit's useful
anticwand cheap
anticwand for 0s ... should reduce the IO to just about nothing
anticwif you still get a lot of IO ... then i would wonder what's being written out
cyberbootjeok, turned it on
cyberbootjestill the same
anticwtesting from the host?
cyberbootjewithin the VM
anticwignore the VM for now
anticwtest from the host
cyberbootjeok
anticwfewer variables
cyberbootjehopeless
cyberbootjesame issue
cyberbootjecompression is on
cyberbootjedd if=/dev/zero of=/dev/zvol/tank/testvol50 bs=1M count=1000 and the load on the vm goes directly to 4
cyberbootjemaybe worth saying... the disks are set up in raidz2, 8 x 240GB intel SSD, but i don't think that's the issue since a normal sas disk gives me the same problems
cyberbootjeand yes, already tried other hardware (same config) and totally different hardware, also the same...
anticwif you do this on a new zvol ... check the size of the zvol after mkfs ... then after doing this ... it shouldn't grow much
anticwdoes it?
anticwreally, writing 0s when compression is on will just cause holes to get created ... so the only writes are for fs metadata (bitmaps, inodes, etc) and in the zvol for the tree
anticwall of which should be pretty small
cyberbootjeafter mkfs ?
cyberbootjezfs create -V 100G tank/testvol80
cyberbootjethat's what i do
cyberbootjedo i need to do more?
anticw# zfs get written tank/testvol80
cyberbootjeok
cyberbootjevalue was 136K and after dd still the same
cyberbootjecompression is on, lz4
cyberbootjenow with sync=disabled, again the same issue
anticwso very little actual IO going on
cyberbootjebut heavy cpu
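A sketch of how to confirm that picture (almost no disk I/O but busy CPUs) while the dd runs, pool name taken from the conversation:
  zpool iostat tank 1   # per-second pool bandwidth; should stay near zero for compressed zeroes
  top -H                # per-thread view; the z_wr_iss threads account for the CPU burn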
anticwwhat kernel is this?
anticw# uname -r
cyberbootje3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
anticwand zfs from debian?
anticwvs building yourself?
cyberbootjeno, I followed the howto, updated sources.list and did an apt-get
anticwah ok ... i guess i'm just coming to the point where i can't think of anything obvious and simple and was going to suggest seeing if this happens with a more recent kernel+zfs
cyberbootjeit's a new system, installed today..
cyberbootjeactually installed it 6 times already to test out / rule out different things
cyberbootjei'm on the verge of just getting a working version and try installing that one even if it's an older kernel / older OS
cyberbootjewhat spl version do you have?
cyberbootje0.6.3-15 ?
anticw0.6.3-50_g917fef2
cyberbootjeoh my..
cyberbootjeos?
anticwdebian
cyberbootjeok what am i missing...
anticwthat version is an artifact of git describe
anticwand i likely have a couple of local tweaks
cyberbootjewell there is a difference...
anticwactually ... it's not ... it's just because my HEAD doesn't have a tag at present
cyberbootjecare to give me all the versions like, OS, kernel, zfs versions, spl, etc.. ?
cyberbootjeah ok
anticwi just did
anticwlinux-image-3.16.0-4-amd64 3.16.7-2
cyberbootjeit's in unstable that kernel right?
anticwtesting
anticwjessie to be specific
cyberbootjejust to be clear, with that setup you do not see any issue's... right?
anticwi do not
anticwwhat's more i've used this setup for well over a year on a spread of hardware and kernel versions
anticwfor me, zvols have worked extremely well
cyberbootjesorry for pushing but i'm trying to get as much info so i can give that a go and test if that does work
fearedblissrenihs: I also had problems with 0.6.3-r2 (coming from 0.6.3) in other ways. After I went back down to 0.6.3 (-r0) it was fine. So I'm holding out now if anything.
fearedblissihrm: I use plex
crocketHow do I make zed report scrub status when scrub completes?
lundmanFM_EREPORT_ZFS_SCRUB_FINISH
djdunnmy desktop stops responding during heavy i/o for a few seconds every few seconds, but i can change to say vt1 and everything seems to work ok there
crocketzed doesn't send email reports.
crocketHow can I debug it?
Sachirucrocket, get a system that will work for millions of years.
crocketSachiru, Whut
crocketSachiru, You make one for me
prometheanfirewhen zfs is built as a module, does it obey the kernel cmd line options for restrictions on arc size and the like, or does that need to be in the modules config file (in /etc)?
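For the module case the tunables are module parameters, so the usual place is a modprobe.d file; a minimal sketch assuming a 4 GiB cap (the value is just an example):
  # /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=4294967296
  # verify after the module is loaded:
  cat /sys/module/zfs/parameters/zfs_arc_max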
crocketWhy is zed not sending emails on zfs events?
crocketI modified /etc/zfs/zed.d/zed.rc accordingly.
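A few things worth checking for the zed email problem (zedlet names vary between releases, so treat this as a sketch):
  grep -i email /etc/zfs/zed.d/zed.rc   # e.g. ZED_EMAIL must be set and uncommented
  ls -l /etc/zfs/zed.d/                 # the scrub-finish zedlet has to be linked in here
  zed -Fv                               # run zed in the foreground with verbose output and watch it handle events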
mrjesterGood evening. Trying to import a pool created on an OmniOS box. Know of any workaround to get past this: "The pool uses the following feature(s) not supported on this system: com.delphix:hole_birth"
lundmanyour ZFS version has to support com.delphix:hole_birth
mrjesterYes. I understand that.
lundmanlatest O3X for osx has it :)
Phibszfs on osx, for the lolz
PhibsI'm glad they broke a feature based FS with com.delphix:hole_birth
Phibsfeature level that is
mrjesterWas attempting to migrate from OmniOS to ZoL, but this is a show stopper.
lundmanmaster on ZOL has it I thought
lundmanif you compile your own
PhibsI blame ryao
mrjesterhmm.. https://github.com/zfsonlinux/zfs/issues/2210 suggests that hole_birth was in place in 0.62
mrjesterAny idea on how stable GIT is?
SachiruVery
itr2401second that
SachiruTypically features don't get merged into head unless they are deemed "production-ready"
SachiruFor instance we already have working code to support 1 MB and larger block sizes, however they aren't merged into HEAD yet (IIRC) because it's not at least 99.99% stable.
mrjesterSo, before I switch to HEAD, do you know/confirm if the hole_birth feature is there?
mrjesterI see a commit around Oct 20 that suggests it might be.
mrjesterOtherwise, it is back to Omni until ZoL has it. :/
Sachirumrjester: You can check on the Open-ZFS website. http://open-zfs.org/wiki/Features
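To compare what a given ZoL build supports against what the pool uses, something like this (run the second command against the pool on the OmniOS side; <pool> is a placeholder):
  zpool upgrade -v                        # feature flags this zfs build knows about (look for hole_birth)
  zpool get all <pool> | grep feature@    # per-pool feature states: disabled, enabled or active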
mrjesterSorry for sounding dense if I am, but does that chart represent HEAD or just official releases?
SachiruAlso see ryao's blog post on the state of ZoL here: https://clusterhq.com/blog/state-zfs-on-linux/
SachiruOfficial release.
SachiruOne moment
SachiruChecking feature flags on my installation
SachiruAnd.. yeah
mrjesterGreat.
mrjesterThank you.
SachiruCode for hole_birth was pushed to repo in 0.6.2
SachiruBUT
SachiruNot merged into 0.6.2 or MAIN
SachiruDue to a few issues
SachiruIt is expected to be merged with 0.6.4
SachiruCurrently MAIN does *not* have hole_birth
SachiruWell, not so much issues
mrjesteroh? So, should be safe though, if I am not using the feature?
mrjesterMy pool just has the feature.
SachiruMore like "nobody has yet tested this with 100% rigor"
SachiruDue to lack of time
SachiruCan't say if it's safe, better ask one of the devs
SachiruBTW
SachiruJust did an apt-fast dist-upgrade today on Ubuntu 14.04, new ZFS packages just came in.
SachiruWhat are the changes?
mrjesterIs this spurious or do I need to fix something? lib/libzpool/Makefile.am:11: warning: source file '$(top_srcdir)/module/zcommon/zpool_prop.c' is in a subdirectory, lib/libzpool/Makefile.am:11: but option 'subdir-objects' is disabled
mrjesterFrom zfs autogen.sh
mrjesterNevermind. Future version warning
crockethell
mrjesterAny ideas on what is missing? http://pastie.org/9796882
mrjesterspl is loading.
dajhornmrjester, rebuild the SPL and this glitch should resolve.
dajhornmrjester, the subdir-objects warnings are caused by a deprecation in automake-1.14
mrjesterRebuilding. Thanks.
mrjester:D worked.
mrjesterInstructions here http://zfsonlinux.org/generic-deb.html need a little tweaking
mrjesterNeed to add autoconf and libtool to the apt-get install list
mrjesterAt least on a fresh ubuntu 14.04 server install.
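A hedged sketch of the generic deb build with mrjester's additions folded in (the exact dependency list follows the zfsonlinux.org instructions and may differ between Ubuntu releases):
  apt-get install build-essential gawk alien fakeroot linux-headers-$(uname -r) \
      zlib1g-dev uuid-dev libblkid-dev autoconf libtool
  # then, for spl and zfs in turn:
  ./autogen.sh && ./configure && make deb   # builds the kmod and utility .deb packages
  dpkg -i *.deb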
dajhornmrjester, welcome. I think that this glitch was caused by a kernel package upgrade that didn't bump the KBI number.
mrjesterI think in this case, it was due to the release package's spl kmod still being in memory.
dajhornmrjester, good to know.
mrjesterI purged the package before I started with this, but I don't think it killed the kmod.