Lustre (file system)

Initial release: December 16, 2003[1]
Stable release: 2.16.1 (latest major release)[2]; 2.15.5 (latest maintenance release, June 28, 2024)[3]
Preview release: 2.16.50 (November 10, 2024)
Written in: C
Operating system: Linux kernel
Available in: English
Type: Distributed file system
License: GPL v2, LGPL
Website: www.lustre.org
OpenSFS
Founded: 2010
Type: 501(c)(6)
Website: www.opensfs.org
Cluster File Systems, Inc.
Company type: Private
Founded: 2001
Founder: Peter J. Braam
Defunct: 2007
Successor: Sun Microsystems
Key people: Andreas Dilger, Eric Barton (HPC), Phil Schwan
Products: Lustre file system
Lustre
Developer(s): Whamcloud/DDN, HPE, ORNL, AWS, CEA, others
Variants: EXAScaler, ClusterStor, FEFS, FSx for Lustre
Introduced: December 2003, with Linux
Structures
Directory contents: Hash, interleaved hash with DNE in 2.7+
File types: file, directory, hardlink, symlink, block special, character special, socket, FIFO
Bootable: No
Limits
Min volume size: 32 MB
Max volume size: 700 PB (production),[4] over 16 EB (theoretical)
Max file size: 32 PB (ext4), 16 EB (ZFS)
File size granularity: 4 KB
Max no. of files: per Metadata Target (MDT), 4 billion files (ldiskfs backend) or 256 trillion files (ZFS backend),[5] with up to 128 MDTs per filesystem
Max filename length: 255 bytes
Max dirname length: 255 bytes
Max directory depth: 4096 bytes
Allowed filename characters: all bytes except NUL ('\0') and '/'; the names "." and ".." are reserved
Features
Dates recorded: modification (mtime), attribute modification (ctime), access (atime), delete (dtime), create (crtime)
Date range: 2^34 seconds (ext4), 2^64 seconds (ZFS)
Date resolution: 1 s
Forks: No
Attributes: 32bitapi, acl, checksum, flock, lazystatfs, localflock, lruresize, noacl, nochecksum, noflock, nolazystatfs, nolruresize, nouser_fid2path, nouser_xattr, user_fid2path, user_xattr
File system permissions: POSIX, POSIX.1e ACL, SELinux
Transparent compression: Yes (ZFS only)
Transparent encryption: Yes (network; storage with ZFS 0.8+; fscrypt with Lustre 2.14.0+)
Data deduplication: Yes (ZFS only)
Copy-on-write: Yes (ZFS only)
Other
Supported operating systems: Linux kernel

Lustre is a type of parallel distributed file system, generally used for large-scale cluster computing. The name Lustre is a portmanteau of Linux and cluster.[6] Lustre file system software is available under the GNU General Public License (version 2 only) and provides high-performance file systems for computer clusters ranging in size from small workgroup clusters to large-scale, multi-site systems. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100 fastest supercomputers in the world,[7][8][9] including the world's No. 1 ranked TOP500 supercomputer in November 2022, Frontier,[4] as well as previous top supercomputers such as Fugaku,[10][11] Titan[12] and Sequoia.[13]

Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on hundreds of servers, and tens of terabytes per second (TB/s) of aggregate I/O throughput.[14][15] This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as meteorology,[16][17] simulation, artificial intelligence and machine learning,[18][19] oil and gas,[20] life sciences,[21][22] rich media, and finance.[23] The I/O performance of Lustre has a widespread impact on these applications and has attracted broad attention.[24][25][26]

  1. ^ Cite error: The named reference 1.0rls was invoked but never defined.
  2. ^ "Release 2.16.1". Lustre Wiki. OpenSFS. November 15, 2024. Retrieved November 17, 2024.
  3. ^ "Lustre 2.15.5 released". lustre.org. 2024-06-28. Retrieved 2024-06-28.
  4. ^ a b Cite error: The named reference ornl_frontier was invoked but never defined.
  5. ^ Oracle Corporation / Intel Corporation (August 4, 2002). "Lustre* Software Release 2.x Operations Manual" (PDF). Instruction Manual. Intel. Retrieved June 25, 2024.
  6. ^ "Lustre Home". Archived from the original on March 31, 2001. Retrieved September 23, 2013.
  7. ^ "Lustre File System, Version 2.4 Released". Open Scalable File Systems. Retrieved 2014-10-18.
  8. ^ "Open-source Lustre gets supercomputing nod". Retrieved 2014-10-18.
  9. ^ "Xyratex Captures Oracle's Lustre". HPCWire. 21 February 2013. Retrieved 2014-10-18.
  10. ^ "Post-K (Fugaku) Information". Fujitsu. Archived from the original on 2020-06-08. Retrieved 2020-06-23.
  11. ^ "Supercomputer Fugaku" (PDF). Fujitsu.
  12. ^ "Titan System Overview". Oak Ridge National Laboratory. Archived from the original on 2018-02-13. Retrieved 2013-09-19.
  13. ^ Brian Behlendorf. "ZFS on Linux for Lustre" (PDF). Lawrence Livermore National Laboratory. Archived from the original (PDF) on 2014-10-31. Retrieved 2020-06-23.
  14. ^ "Orion: Frontier's Massive File System". insideHPC. 2023-04-03.
  15. ^ Andreas Dilger, Whamcloud (2019-06-20). "Lustre: The Next 20 Years" (PDF). HPCIODC.
  16. ^ "Cray to Provide NOAA with Two AMD-Powered Supercomputers". HPC Wire. 2020-02-24.
  17. ^ Julian Kunkel, DKRZ (2017-06-15). "Lustre at DKRZ" (PDF). OpenSFS.
  18. ^ Chris Mellor (2023-05-02). "Nvidia AI supercomputer shows its Lustre in Oracle cloud". Blocks and Files.
  19. ^ Julie Bernauer, Prethvi Kashinkunti, NVIDIA (2021-05-20). "Accelerating AI at-scale with Selene DGXA100 SuperPOD and Lustre Parallel Filesystem Storage" (PDF). OpenSFS.
  20. ^ Raj Gautam (2019-05-15). "Long distance Lustre Communication" (PDF). Exxon Mobil.
  21. ^ "Cambridge-1: A NVIDIA Success Story" (video). YouTube.
  22. ^ James Beal, Pavlos Antoniou, Sanger Institute (2021-05-20). "Update on Secure Lustre" (PDF). OpenSFS.
  23. ^ Steve Crusan, Brock Johnson, Hudson River Trading (2022-05-10). "Lustre in Finance" (PDF). OpenSFS.
  24. ^ Wang, Teng; Byna, Suren; Lockwood, Glenn K.; Snyder, Shane; Carns, Philip; Kim, Sunggon; Wright, Nicholas J. (May 2019). "A Zoom-in Analysis of I/O Logs to Detect Root Causes of I/O Performance Bottlenecks". 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). IEEE. pp. 102–111. doi:10.1109/CCGRID.2019.00021. ISBN 978-1-7281-0912-1. S2CID 195832257.
  25. ^ Gunasekaran, Raghul; Oral, Sarp; Hill, Jason; Miller, Ross; Wang, Feiyi; Leverman, Dustin (Nov 2015). "Comparative I/O workload characterization of two leadership class storage clusters" (PDF). Proceedings of the 10th Parallel Data Storage Workshop. ACM. pp. 31–36. doi:10.1145/2834976.2834985. ISBN 9781450340083. S2CID 15844745.
  26. ^ Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David; Mehrotra, Piyush; Biswas, Rupak (Dec 2012). "I/O performance characterization of Lustre and NASA applications on Pleiades". 2012 19th International Conference on High Performance Computing (PDF). IEEE. pp. 1–10. doi:10.1109/HiPC.2012.6507507. hdl:2060/20130001600. ISBN 978-1-4673-2371-0. S2CID 14323627.