<mrjester> Sachiru, dajhorn: Thanks for the help. My array is online and usable. Oh and FU Phibs. Just cause. ;)
<Sachiru> No problem.
<renihs> hmm interesting, if i run a fio (single threaded 4k random reads) on a zfs dataset i get roughly 50k iops, on a zvol with ext4 (same pool) i get like 330k iops hmm
<bekks> ZFS datasets do not support directio
<renihs> pretty sure this didnt use direct_io
<bekks> You'd have to set both primarycache and secondarycache to none, to mimic that behaviour.
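(A minimal sketch of the knobs bekks is describing, with pool/dataset as a placeholder name:)

    zfs set primarycache=none pool/dataset     # stop caching this dataset's data in the ARC
    zfs set secondarycache=none pool/dataset   # likewise for the L2ARC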
<renihs> fio --name=randread --rw=randread --pre_read=1 --norandommap --bs=4k --size=256m --runtime=30 --loops=1000
<renihs> no direct_io involved here
<renihs> taken from the micro benches on https://gist.github.com/brendangregg/7270ff9698c70d9e7496
<bekks> fio always tries to use directio
<renihs> hmm
<renihs> well, if i disable primary and secondary on the zvol
<bekks> And --size=256M implies that your benchmark will take place in your caches only.
<renihs> i get more iops
<renihs> yes
<bekks> Which is a pretty useless benchmark then.
<renihs> well, it should be comparable though
<renihs> since its in caches
<bekks> It is totally useless.
<renihs> its still confusing why there is a 10x diff for a zvol/dataset
<bekks> The result is "my RAM is faster than yours".
<renihs> both ram should be fast
<renihs> zvol/dataset i mean
<renihs> since its all just ram
<bekks> The cause is the different cache behaviour for datasets and zvols.
<renihs> yeah, seems like zvols cache a lot more
<renihs> according to that then, which is not accurate
<renihs> and its not about the test making sense or not, its about the difference between the two making no sense :)
<bekks> As long as your benchmark is pointless due to not actually benchmarking your disk, there is no difference in "it is pointless" and "it is pointless".
<bekks> I'd start with a size parameter larger than the ARC cache.
<Lalufu> significantly larger.
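(A hedged sketch of the run bekks and Lalufu are suggesting: the same fio job, but with a working set assumed to be well beyond the ARC, and without --pre_read, which only warms the cache. The 64g size is a placeholder; pick something larger than zfs_arc_max on the box. Since ZFS datasets do not support direct I/O, the larger size is what forces reads to actually hit the pool:)

    fio --name=randread --rw=randread --norandommap --bs=4k \
        --size=64g --runtime=60 --loops=1000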
IRConan"ARC cache"…
IRConansorry, I can't help pedantry
<renihs> i am trying to figure out why there is a significant difference between the dataset and vol
<renihs> since both are "in memory"
<renihs> i am not trying to benchmark the disk
<IRConan> problem is you don't really know the state of the ARC in each case
<renihs> besides, that system has way more memory than storage
<renihs> so even if i would pointlessly do that, to measure disk iops which i dont care about, its not possible
<bekks> If you are trying to benchmark your RAM, why are you benchmarking your ZFS cache then? Benchmark your RAM instead.
<renihs> again
<renihs> i am NOT
<renihs> i am trying to understand why there is a 10x diff in dataset/vol
<renihs> when its only just ram anyways
<bekks> You are comparing RAM benchmarks, with your size setting of 256M.
<IRConan> why don't you tell us what the real-world problem was rather than falling back onto a silly benchmark
<renihs> there is NO real world problem or anything
<renihs> do you guys have any idea why there is such a diff between vol and dataset or not?
<bekks> No real world problem, but just silly benchmarks. Ok.
<renihs> um, so you dont have any idea, ok
<bekks> You have no idea what you are doing there, obviously. Besides the point that you dont like the answers you get.
<renihs> bekks: you completely ignoring my question is a different thing
<renihs> i am just curious what would cause such a significant difference
<bekks> Constantly ignoring the SAME answers from DIFFERENT people is ignorant.
<IRConan> your question is whether there should be a perf difference between ARCs for zvol and zfs
<renihs> IRConan: it certainly looks that way
<renihs> bekks: please, can you just ignore me then instead of picking/pointing to irrelevant things YOU dont like?
<bekks> Of course. Merry christmas. Pick a present from the kill file.
<renihs> thanks, bye
<renihs> merry xmas
<renihs> IRConan: i would have expected it vice versa
dasjoe<Sachiru> Just did an apt-fast dist-upgrade today on Ubuntu 14.04, new ZFS packages just came in. <Sachiru> What are the changes? ← Check the PPA, click a package to see its changelog: https://launchpad.net/~zfs-native/+archive/ubuntu/stable/+packages?field.name_filter=&field.status_filter=published&field.series_filter=trusty
dasjoe<renihs> i am trying to figure out why there is a signficant difference between the dataset and vol ← The difference you are seeing is due to the page cache, which ext4 (on a zvol) makes use of but ZFS on its own can't
dasjoerenihs: I had the same question a few weeks ago
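(One way to see dasjoe's point on a ZFS on Linux box: the ARC and the Linux page cache are accounted separately, so you can watch which one a benchmark is actually filling. A rough sketch using the standard /proc locations:)

    # current ARC size in bytes
    awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
    # page cache usage -- this is what ext4 on the zvol populates
    grep '^Cached' /proc/meminfo
    # drop the page cache between runs (as root); the ARC is managed separately
    echo 3 > /proc/sys/vm/drop_caches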
<ryao> ECC is a security feature now: http://m.slashdot.org/story/211489
<bekks> :D
<dasjoe> Doesn't help against multi-bit flips
<dasjoe> Also, http://www.reddit.com/r/linux/comments/2q9sic/dram_vulnerability_due_to_charge_disturbance/cn4eeur
<ryao> dasjoe: Thanks
<compdoc> all rather silly
<thulle> dasjoe: running ECC RAM in lockstep protects against multi-bit flips quite well
<mrjester> Just to confirm, ZoL doesn't support naming SMB shares from the zfs command, does it? "cannot set property for 'array/images': 'sharesmb' cannot be set to invalid options"
<dasjoe> mrjester: FransUrbo has been working on improving sharesmb
<mrjester> Great. :)
<dasjoe> It's not merged yet, iirc
<mrjester> Running HEAD, so no, not merged.
<compdoc> I tried using sharesmb, but it kept naming shares in ways I didnt like. Like tank_share, etc. So I just use smb.conf to share
<dasjoe> mrjester: https://github.com/zfsonlinux/zfs/pull/1476
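(For reference, the smb.conf route compdoc mentions looks roughly like this; the share name and path are assumptions based on mrjester's array/images dataset being mounted at /array/images:)

    [images]
        path = /array/images
        read only = no
        browseable = yes

    # then have Samba reload its configuration, e.g.:
    smbcontrol all reload-config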
<dasjoe> cyberbootje: I'm sorry, I can't reproduce your issue
<dasjoe> cyberbootje: I'm seeing sensible load for dding to a 10G zvol, with or without compression
<cyberbootje> dasjoe: what is sensible? 10%, 20%?
<cyberbootje> dasjoe: i would like to install exactly what you have since i'm not able to get anywhere near normal values...
<dasjoe> cyberbootje: http://dasjoe.de/dump/iowatcher/testvol-20G-lz4.svg and http://dasjoe.de/dump/iowatcher/testvol-20G.svg
<dasjoe> cyberbootje: the command I used to get those charts: zfs create -V 20G -o compression=off atlantis-rpool/testvol; iowatcher -d /dev/zvol/atlantis-rpool/testvol -t dd -P -o testvol-20G.svg -p dd if=/dev/zero of=/dev/zvol/atlantis-rpool/testvol bs=1M
<dasjoe> cyberbootje: there are gaps in testvol-20G.svg because blktrace couldn't keep up
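(Presumably the lz4 chart came from the same invocation with compression switched on; a sketch, where the testvol-lz4 volume name is an assumption:)

    zfs create -V 20G -o compression=lz4 atlantis-rpool/testvol-lz4
    iowatcher -d /dev/zvol/atlantis-rpool/testvol-lz4 -t dd -P -o testvol-20G-lz4.svg \
        -p dd if=/dev/zero of=/dev/zvol/atlantis-rpool/testvol-lz4 bs=1M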