Draft¶
§ loose notes
Choosing the hardware¶
Disks! More disks!¶
This file system was designed first and foremost with servers in mind, with *disks* (lots of disks); wanting to run ZFS on a single hard drive borders on the ridiculous. The latest ZFS on Linux release (RC14) settles the question for good: a pool can only be built from whole disks, not partitions (citation needed).
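A minimal sketch of what the two layouts look like at pool-creation time; the device names (/dev/sdc and friends) are hypothetical and must be adapted:

```shell
# Hypothetical device names; run as root on a host with ZFS installed.
# Pool built from whole disks -- the recommended layout:
zpool create data raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Pool built from partitions -- accepted by older releases, but see the
# "ZFS on partitions" section below for the trouble it causes:
zpool create data raidz2 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
```

With whole disks, ZFS labels and partitions the devices itself, which is part of why mixing it with hand-made partitions gets messy.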
multipath for Kevin:
Sysadmin goodies¶
Holy cow! Copy-on-write!¶
Every ZFS operation is a copy-on-write transaction, so the on-disk state is always valid.
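One practical consequence: since blocks are never overwritten in place, snapshots are nearly free. A sketch, assuming a hypothetical pool `data` with a dataset `data/home`:

```shell
# Hypothetical pool/dataset names; run as root on a host with ZFS installed.
zfs snapshot data/home@before-upgrade  # instant: only new block pointers are written
# ... modify files: only changed blocks are copied, the snapshot keeps the old ones ...
zfs rollback data/home@before-upgrade  # atomically return to the snapshotted state
zfs destroy  data/home@before-upgrade  # drop the snapshot, freeing the old blocks
```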
It's troll Friday, anything goes!¶
Why btrfs is theoretically better, while ZFS is practically better
No real head-to-head here: one is usable now, with all the good stuff, but is not mainstream. The other will be usable soon, will be 1000% mainstream, and deserves to be watched closely.
btrfs: Pre-history
Imagine you are a Linux file system developer. It's 2007, and you are at the Linux Storage and File systems workshop. Things are looking dim for Linux file systems: Reiserfs, plagued with quality issues and an unsustainable funding model, has just lost all credibility with the arrest of Hans Reiser a few months ago. ext4 is still in development; in fact, it isn't even called ext4 yet. Fundamentally, ext4 is just a straightforward extension of a 30-year-old format and is light-years behind the competition in terms of features. At the same time, companies are clamping down on funding for Linux development; IBM's Linux division is coming to the end of its grace period and needs to show profitability now. Other companies are catching wind of an upcoming recession and are cutting research across the board. They want projects with time to results measured in months, not years.
Ever hopeful, the file systems developers are meeting anyway. Since the workshop is co-located with USENIX FAST '07, several researchers from academia and industry are presenting their ideas to the workshop. One of them is Ohad Rodeh. He's invented a kind of btree that is copy-on-write (COW) friendly [PDF]. To start with, btrees in their native form are wildly incompatible with COW. The leaves of the tree are linked together, so when the location of one leaf changes (via a write - which implies a copy to a new block), the link in the adjacent leaf changes, which triggers another copy-on-write and location change, which changes the link in the next leaf... The result is that the entire btree, from top to bottom, has to be rewritten every time one leaf is changed.
Rodeh's btrees are different: first, he got rid of the links between leaves of the tree - which also "throws out a lot of the existing b-tree literature", as he says in his slides [PDF] - but keeps enough btree traits to be useful. (This is a fairly standard form of btrees in file systems, sometimes called "B+trees".) He added some algorithms for traversing the btree that take advantage of reference counts to limit the amount of the tree that has to be traversed when deleting a snapshot, as well as a few other things, like proactive split and merge of interior nodes so that inserts and deletes don't require any backtracking. The result is a simple, robust, generic data structure which very efficiently tracks extents (groups of contiguous data blocks) in a COW file system. Rodeh successfully prototyped the system some years ago, but he's done with that area of research and just wants someone to take his COW-friendly btrees and put them to good use.
+ btrfs: A brief comparison with ZFS
algorithmic difference (https://lwn.net/Articles/342892/ , "btrfs: A brief comparison with ZFS")
- pros/cons: http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs ? (loïs: in my opinion a bad idea; if they are comparable, they are not competitors)
On the same hardware platform we observed, without really explaining it other than by their internal algorithms, one striking difference: btrfs is faster at writes, whereas ZFS is better at reads.
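A very rough way to reproduce this kind of sequential comparison on any mounted file system — a crude sketch, not a serious benchmark (a real one would use fio; here the page cache inflates the read figure):

```shell
# Crude sequential I/O probe on whatever file system holds $TMPDIR.
# Not rigorous: the page cache skews the read number upward.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fsync 2>&1 | tail -n 1  # write speed
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1                      # read speed
rm -f "$testfile"
```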
ZFS on partitions: no good!¶
root@ocean:~# zpool status
  pool: data
 state: UNAVAIL
status: One or more devices could not be used because the label is missing or
        invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        data          UNAVAIL      0     0     0  insufficient replicas
          raidz2-0    UNAVAIL      0     0     0  insufficient replicas
            sdc3      FAULTED      0     0     0  corrupted data
            sdd3      FAULTED      0     0     0  corrupted data
            sdh3      FAULTED      0     0     0  corrupted data
            sde3      FAULTED      0     0     0  corrupted data
            sdg3      ONLINE       0     0     0
            sdf3      ONLINE       0     0     0

root@ocean:~# zpool export data
root@ocean:~# zpool status
no pools available
root@ocean:~# zpool import data
cannot import 'data': pool may be in use from other system
use '-f' to import anyway
root@ocean:~# zpool import -f data
root@ocean:~# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 0h21m with 0 errors on Fri Mar  1 04:36:07 2013
config:

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            sdd3      ONLINE       0     0     0
            sde3      ONLINE       0     0     0
            sdc3      ONLINE       0     0     0
            sdh3      ONLINE       0     0     0
            sdg3      ONLINE       0     0     0
            sdf3      ONLINE       0     0     0
        spares
          sdb3        AVAIL
          sda3        AVAIL

errors: No known data errors
The first time, it's scary. Every time after that, it's just tedious.
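The `-f` dance can be avoided by cleanly releasing the pool before the disks change hands; a sketch, using the pool name from the transcript above:

```shell
# Run as root on a host with ZFS installed; 'data' is the pool name used above.
zpool export data   # marks every member device as no longer in use
# ... move the disks / reboot / reinstall ...
zpool import data   # now imports without complaint, no -f needed
```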
You can force a scrub, just to be sure:
root@ocean:~# zpool scrub data
[wait ~5 mn]
root@ocean:~# zpool status
  pool: data
 state: ONLINE
  scan: scrub in progress since Fri Mar  1 11:18:49 2013
        105G scanned out of 713G at 292M/s, 0h35m to go
        0 repaired, 14,71% done
config:

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            sdd3      ONLINE       0     0     0
            sde3      ONLINE       0     0     0
            sdc3      ONLINE       0     0     0
            sdh3      ONLINE       0     0     0
            sdg3      ONLINE       0     0     0
            sdf3      ONLINE       0     0     0
        spares
          sdb3        AVAIL
          sda3        AVAIL

errors: No known data errors
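Rather than scrubbing by hand, a periodic scrub can be scheduled; a sketch of a cron entry, assuming the pool name `data` (file path and schedule are illustrative, adjust to taste):

```shell
# /etc/cron.d/zfs-scrub (hypothetical file) -- scrub the pool 'data'
# every Sunday at 04:00; a scrub walks every block and verifies its checksum.
0 4 * * 0  root  /sbin/zpool scrub data
```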