
ZFS

From Wikipedia, the free encyclopedia

ZFS
Developer: Sun Microsystems
Full name: ZFS
Introduced: November 2005 (OpenSolaris)

Structures
Directory contents: Extensible hash table

Limits
Max file size: 16 exabytes (2^64 bytes)
Max number of files: 2^48
Max volume size: 16 exabytes (2^64 bytes)

Features
Forks: Yes (called extended attributes)
Attributes: POSIX
File system permissions: POSIX
Transparent compression: Yes
Transparent encryption: No
Supported operating systems: Sun Solaris, Apple Mac OS X 10.5

ZFS is a file system produced by Sun Microsystems for the Solaris Operating System, featuring high capacity, integration of the concepts of filesystem and volume management, a novel on-disk structure, lightweight filesystems, and easy storage pool management. ZFS is an open source project licensed under the Common Development and Distribution License (CDDL).

History

ZFS was designed and implemented by a team at Sun led by Jeff Bonwick. It was announced on September 14, 2004.[1] Source code for the final product was integrated into the main trunk of Solaris development on October 31, 2005[2] and released as part of build 27 of OpenSolaris on November 16, 2005. Sun announced that ZFS was integrated into the 6/06 update to Solaris 10 in June 2006, one year after the opening of the OpenSolaris community.[3]

The name originally stood for "Zettabyte File System", but is now a pseudo-initialism.[4]

Capacity

ZFS is a 128-bit file system, which means it can store 18 billion billion (18.4 × 10^18) times more data than current 64-bit systems. The limitations of ZFS are designed to be so large that they will never be encountered in practice. Project leader Bonwick said, "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans."[1]

Some theoretical limits in ZFS are:

  • 2^48 — Number of snapshots in any file system (2 × 10^14)
  • 2^48 — Number of files in any individual file system (2 × 10^14)
  • 16 exabytes (2^64 bytes) — Maximum size of a file system
  • 16 exabytes (2^64 bytes) — Maximum size of a single file
  • 16 exabytes (2^64 bytes) — Maximum size of any attribute
  • 3 × 10^23 petabytes — Maximum size of any zpool
  • 2^56 — Number of attributes of a file (actually constrained to 2^48 for the number of files in a ZFS file system)
  • 2^56 — Number of files in a directory (actually constrained to 2^48 for the number of files in a ZFS file system)
  • 2^64 — Number of devices in any zpool
  • 2^64 — Number of zpools in a system
  • 2^64 — Number of file systems in a zpool

As an example of how large these numbers are, if 1,000 files were created every second, it would take about 9,000 years to reach the limit of the number of files.
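
A quick back-of-the-envelope check of this figure, using the 2^48 file-count limit listed above (the numbers are illustrative):

    2^{48} \approx 2.81 \times 10^{14} \ \text{files}

    \frac{2.81 \times 10^{14} \ \text{files}}{1000 \ \text{files/s}} \approx 2.81 \times 10^{11} \ \text{s} \approx 8{,}900 \ \text{years}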

In reply to a question about filling up ZFS without boiling the oceans, Bonwick wrote:

Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogramme of matter confined to 1 litre of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc², the rest energy of 136 billion kg is 1.2 × 10^28 J. The mass of the oceans is about 1.4 × 10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4 × 10^6 J/kg × 1.4 × 10^21 kg = 3.4 × 10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.[5]

Storage pools

Unlike a traditional file system, which resides on a single device and thus requires a volume manager to use more than one device, ZFS is built on top of virtual storage pools called zpools. A pool is constructed from virtual devices (vdevs), each of which is either a raw device, a mirror (RAID 1) of one or more devices, or a RAID-Z group of two or more devices. The storage capacity of all vdevs is then available to all of the file systems in the zpool.

To limit the amount of space a file system can occupy, a quota can be applied, and to guarantee that space will be available to a specific file system, a reservation can be set.
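
As an illustration of how these pieces fit together, the sketch below creates a mirrored pool and then applies a quota and a reservation to two of its file systems; the pool name "tank", the devices, and the dataset names are hypothetical, while zpool and zfs are the standard Solaris administration commands.

    # create a pool backed by one mirrored vdev
    zpool create tank mirror c0t0d0 c0t1d0

    # create a couple of file systems inside the pool
    zfs create tank/home
    zfs create tank/home/alice
    zfs create tank/home/bob

    # limit one file system to 10 GB of pool space
    zfs set quota=10G tank/home/alice

    # guarantee that 5 GB will remain available to another file system
    zfs set reservation=5G tank/home/bob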

Copy-on-write transactional model

ZFS uses a copy-on-write, transactional object model. All block pointers within the filesystem contain a 256-bit checksum of the target block which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, and then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and an intent log is used when synchronous write semantics are required.

Snapshots and clones

The ZFS copy-on-write model has another powerful advantage: when ZFS writes new data, instead of releasing the blocks containing the old data, it can instead retain them, creating a snapshot version of the file system. ZFS snapshots are created very quickly, since all the data comprising the snapshot is already stored; they are also space efficient, since any unchanged data is shared among the file system and its snapshots.

Writable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist.
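
For example (dataset and snapshot names hypothetical), a snapshot and a writable clone derived from it can be created as follows; the clone initially shares all of its blocks with the snapshot:

    # read-only, point-in-time snapshot of a file system
    zfs snapshot tank/home/alice@monday

    # writable clone derived from that snapshot
    zfs clone tank/home/alice@monday tank/alice-test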

Dynamic striping

ZFS dynamically stripes data across all devices to maximize throughput. As additional devices are added to the zpool, the stripe width automatically expands to include them; thus all disks in a pool are used, which balances the write load across them.
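
As a sketch (device names hypothetical), adding another mirrored vdev to an existing pool immediately makes it part of the stripe for newly written data:

    # grow the pool "tank" with a second mirror; new writes are striped across both vdevs
    zpool add tank mirror c3t0d0 c3t1d0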

Variable block sizes

ZFS uses variable-sized blocks of up to 128 kilobytes. The currently available code allows the administrator to tune the maximum block size used as certain workloads do not perform well with large blocks. Automatic tuning to match workload characteristics is contemplated.

If compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk, using less storage and improving I/O throughput (though at the cost of increased CPU use for the compression and decompression operations).
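
Compression is a per-filesystem property; a minimal sketch (dataset name hypothetical):

    # enable transparent compression and check the resulting compression ratio
    zfs set compression=on tank/data
    zfs get compressratio tank/data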

Lightweight filesystem creation

In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or resize a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in other technologies.
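
For instance, creating nested file systems is a single short command each (names hypothetical); each inherits its properties from its parent and requires no separate volume sizing:

    zfs create tank/projects
    zfs create tank/projects/website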

Additional capabilities

  • Explicit I/O priority with deadline scheduling.
  • Globally optimal I/O sorting and aggregation.
  • Multiple independent prefetch streams with automatic length and stride detection.
  • Parallel, constant-time directory operations.
  • End-to-end checksumming, allowing data corruption detection and recovery (if the pool has redundancy).
  • Intelligent scrubbing and resilvering.[6]
  • Load and space usage sharing between disks in the pool.[7]
  • Ditto blocks: metadata is replicated inside the pool, two or three times (according to its importance).[8] If the pool has several devices, ZFS tries to place the replicas on different devices. So a pool without redundancy can lose data when bad sectors are encountered, but its metadata should be fairly safe even in this scenario.
  • The ZFS design (copy-on-write plus uberblocks) is safe when using disks with the write cache enabled, provided they honor the "cache flush" commands issued by ZFS. This feature provides safety and a considerable performance boost compared with other filesystems.
  • Following from the previous point, when entire disks are given to a ZFS pool, ZFS automatically enables their write cache. This is not done if ZFS only manages discrete slices of the disk, since it cannot know whether other slices are managed by filesystems that are not write-cache safe, such as UFS (and most others).

Cache management

ZFS also introduces the ARC (Adaptive Replacement Cache), a new method of cache management used in place of the traditional Solaris virtual memory page cache.

Limitations

ZFS lacks transparent encryption, à la NTFS, although an OpenSolaris project to add it is underway.[9]

ZFS does not support per-user or per-group quotas. Instead, it is possible to create user-owned filesystems, each with its own size limit. The low overhead of ZFS filesystems makes this practical even with many users (although, as noted under current implementation issues, it may slow system startup considerably). There is, however, no practical quota solution for file systems shared among several users (such as team projects), where the data cannot be separated per user, although one could be implemented on top of the ZFS stack.
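
A minimal sketch of the per-user-filesystem approach, continuing the hypothetical pool "tank" and the tank/home parent from the earlier example:

    # one lightweight filesystem per user, each with its own size limit
    for user in carol dave erin; do
        zfs create tank/home/$user
        zfs set quota=5G tank/home/$user
    done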

Capacity expansion is normally achieved by adding groups of disks as a new vdev (stripe, RAID-Z, RAID-Z2, or mirror). Newly written data will dynamically start to use all available vdevs. It is also possible to expand the array by iteratively swapping each drive in the array for a bigger drive and waiting for ZFS to heal itself; the healing time depends on the amount of stored information, not on the disk size. Taking a snapshot during the process should be avoided, as it causes the heal to be restarted.
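
The drive-swap approach looks roughly like the following, repeated once per disk (device names hypothetical); zpool status reports resilvering progress:

    # replace one member of the pool with a larger disk and wait for resilvering to finish
    zpool replace tank c1t2d0 c2t2d0
    zpool status tank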

It is currently not possible to reduce the number of vdevs in a pool or otherwise reduce pool capacity, although this capability is expected to be implemented in the near future.[citation needed]

It is not possible to add a disk to an existing RAID-Z or RAID-Z2 vdev; this feature appears very difficult to implement. Adding a disk to a RAID-Z group would in any case reduce the proportion of parity to data, degrading data protection.

Reconfiguring storage requires copying data offline, destroying the pool, and recreating the pool with the new policy.

Current implementation issues

The current ZFS implementation (Solaris 10 11/06) has some issues that administrators should be aware of before deploying it. These issues are not inherent to ZFS and may be resolved in future releases:

  • ZFS is currently not available as a root filesystem on Solaris 10, since there is no ZFS boot support. The ZFS Boot project recently added boot support to OpenSolaris, and it is available in recent builds of Solaris Nevada.[10] ZFS boot is currently (as of February 2007) planned for a Solaris 10 update in late 2007.
  • If a Solaris Zone is put on ZFS, the system cannot be upgraded — the OS will need to be reinstalled. This issue is planned to be addressed in a Solaris 10 update in 2007.
  • A file "fsync" will commit to disk all pending modifications on the filesystem. That is, an "fsync" on a file will flush out all deferred (cached) operations to the filesystem (not the pool) in which the file is located. This can make some fsync() slow when running alongside a workload which writes a lot of data to filesystem cache.[11]. The issue is currently fixed in Solaris Nevada.
  • New "vdev's" can be added to a storage pool, but they cannot be removed. A "vdev" can be exchanged for using a bigger new one, but it cannot be removed, in the process reducing the total pool storage size even if the pool has enough unused space. The ability to shrink a zpool is a work in progress, currently targeted for a Solaris 10 update in late 2007.
  • ZFS encourages creation of many filesystems inside the pool (for example, for quota control), but importing a pool with thousands of filesystems is a slow operation (can take minutes).
  • ZFS filesystem on-the-fly compression/decompression is single-threaded. So, only one CPU per zpool is used. The issue is now fixed in Solaris Nevada.
  • ZFS eats a lot of CPU when doing small writes (for example, a single byte). There are two root causes, currently being solved: a) Translating from znode to dnode is slower than necessary because ZFS doesn't use translation information it already has, and b) Current partial-block update code is very inefficient.[12]
  • ZFS Copy-on-Write operation can degrade on-disk file layout (file fragmentation) when files are modified, decreasing performance.
  • ZFS blocksize is configurable per filesystem, currently 128KB by default. If your workload reads/writes data in fixed sizes (blocks), for example a database, you should (manually) configure ZFS blocksize equal to the application blocksize, for better performance and to conserve cache memory and disk bandwidth.
  • ZFS only offlines a faulty harddisk if it can't be opened. Read/write errors or slow/timeouted operations are not currently used in the faulty/spare logic.
  • When listing ZFS space usage, the "used" column only shows non-shared usage. So if some of your data is shared (for example, between snapshots), you don't know how much is there. You don't know, for example, which snapshop deletion would give you more free space.
  • There is work in progress to provide automatic and periodic disk scrubbing, in order to provide corruption detection and early disk-rotting detection. Currently the data scrubbing must be done manually with "zpool scrub" command.
  • Current ZFS compression/decompression code is very fast, but compressratio is not comparable to gzip or similar algorithms. There is a project to add new compression modules to ZFS.[13]
  • When taking a snapshot while the zpool is scrubbing/resilvering, the process will be restarted from the beginning.[14]
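
Two of the workarounds above, tuning the block size for a fixed-record workload and running a manual scrub, look roughly like this (pool and dataset names hypothetical; 8K chosen to match a hypothetical 8 KB database page size):

    # match the filesystem record size to the application's I/O size
    zfs set recordsize=8K tank/db

    # start a manual integrity scrub of the whole pool; progress is shown by zpool status
    zpool scrub tank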

Platforms

ZFS is part of Sun's own Solaris operating system and is thus available on both SPARC and x86-based systems. Since the code for ZFS is open source, a port to other operating systems and platforms can be produced without Sun's involvement.

Nexenta OS, a complete GNU-based open source operating system built on top of the OpenSolaris kernel and runtime, includes a ZFS implementation, added in version alpha1.

Apple Computer is porting ZFS to its Mac OS X operating system, according to a post by a Sun employee on the opensolaris.org zfs-discuss mailing list and to previewed screenshots of the next version of Mac OS X.[15] As of Mac OS X 10.5 (Developer Seed 9A321), support for ZFS is included, but it lacks the ability to act as a root partition, as noted above. Attempts to format local drives using ZFS are also unsuccessful; this is a known bug.[16]

Porting ZFS to Linux is complicated by incompatibilities between the CDDL, the license its source is released under, and the GNU General Public License, which governs the Linux kernel. To work around this problem, the Google Summer of Code program is sponsoring a port of ZFS to Linux's FUSE system so that the filesystem will run in userspace instead.[17] However, running a file system outside the kernel on Linux can have a significant performance impact.[citation needed]

There are no plans to port ZFS to HP-UX or AIX.[18]

Matt Dillon started porting ZFS to DragonFly BSD as a plan for their 1.5 release,[19] and work is currently underway for a FreeBSD port as well, headed by developer Pawel Jakub Dawidek.[20] ZFS for FreeBSD will most likely first be seen in a 7.x release and will initially not have the full 128-bit support due to 64-bit limitations in UFS and some userland tools.[21]

Adaptive endianness

Pools and their associated ZFS file systems can be moved between different platform architectures, even between systems implementing different byte orders. The ZFS block pointer format allows for filesystem metadata to be stored in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness doesn't match the endianness of the system, the metadata is byte-swapped in memory.

This does not affect the stored data itself: as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness.
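
In practice, moving a pool between machines of different endianness is simply an export followed by an import (pool name hypothetical); the on-disk data needs no conversion step:

    # on the original host (for example, a big-endian SPARC system)
    zpool export tank

    # on the destination host (for example, a little-endian x86 system)
    zpool import tank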

References

  1. ^ a b ZFS: the last word in file systems. Sun Microsystems (September 14, 2004). Retrieved on 2006-04-30.
  2. ^ Jeff Bonwick (October 31, 2005). ZFS: The Last Word in Filesystems. Jeff Bonwick's Blog. Retrieved on 2006-04-30.
  3. ^ Sun Celebrates Successful One-Year Anniversary of OpenSolaris. Sun Microsystems (June 20, 2006).
  4. ^ Jeff Bonwick (2006-05-04). You say zeta, I say zetta. Jeff Bonwick's Blog. Retrieved on 2006-09-08.
  5. ^ Jeff Bonwick (September 25, 2004). 128-bit storage: are you high?. Sun Microsystems. Retrieved on 2006-07-12.
  6. ^ Smokin' Mirrors. Jeff Bonwick's Weblog (2006-05-02). Retrieved on 2007-02-23.
  7. ^ ZFS Block Allocation. Jeff Bonwick's Weblog (2006-11-04). Retrieved on 2007-02-23.
  8. ^ Ditto Blocks - The Amazing Tape Repellent. Flippin' off bits Weblog (2006-05-12). Retrieved on 2007-03-01.
  9. ^ OpenSolaris Project: ZFS on disk encryption support. OpenSolaris Project. Retrieved on 2006-12-13.
  10. ^ Latest ZFS add-ons. milek's blog (2007-03-28). Retrieved on 2007-03-29.
  11. ^ The Dynamics of ZFS. Roch Bourbonnais' Weblog (2006-06-21). Retrieved on 2007-02-19.
  12. ^ Implementing fbarrier() on ZFS. zfs-discuss (2007-02-13). Retrieved on 2007-02-13.
  13. ^ gzip for ZFS update. Adam Leventhal's Weblog (2007-01-31). Retrieved on 2007-03-09.
  14. ^ scrub/resilver has to start over when a snapshot is taken. OpenSolaris Bug Tracker (2005-10-30). Retrieved on 2007-03-14.
  15. ^ Porting ZFS to OSX. zfs-discuss (April 27, 2006). Retrieved on 2006-04-30.
  16. ^ Mac OS X 10.5 9A326 Seeded. InsanelyMac Forums (December 14, 2006). Retrieved on 2006-12-14.
  17. ^ Ricardo Correia (May 26, 2006). Announcing ZFS on FUSE/Linux. Retrieved on 2006-07-15.
  18. ^ Fast Track to Solaris 10 Adoption: ZFS Technology. Solaris 10 Technical Knowledge Base. Sun Microsystems. Retrieved on 2006-04-24.
  19. ^ Dillon, Matt (December 17, 2005). Plans for 1.5. Retrieved on 2006-04-24.
  20. ^ Dawidek, Pawel Jakub (August 22, 2006). Porting ZFS file system to FreeBSD. Retrieved on 2006-08-22.
  21. ^ Dawidek, Pawel Jakub (August 22, 2006). Porting ZFS file system to FreeBSD. Retrieved on 2007-03-03.
