| Original author(s) | Doug Cutting, Mike Cafarella |
|---|---|
| Developer(s) | Apache Software Foundation |
| Initial release | April 1, 2006[1] |
| Stable release | |
| Repository | Hadoop Repository |
| Written in | Java |
| Operating system | Cross-platform |
| Type | Distributed file system |
| License | Apache License 2.0 |
| Website | hadoop.apache.org |
Apache Hadoop (/həˈduːp/) is a collection of open-source software utilities for reliable, scalable, distributed computing. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which remains the common use,[3] though it has since also found use on clusters of higher-end hardware.[4][5] All the modules in Hadoop are designed with the fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.[6]
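As an illustration of the MapReduce programming model, the sketch below is based on the classic word-count job written against Hadoop's Java MapReduce API (`org.apache.hadoop.mapreduce`): the map phase emits a `(word, 1)` pair for each token in its input split, and the reduce phase sums the counts for each word. The input and output paths are placeholders supplied on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a JAR, a job like this would typically be submitted to a cluster with `hadoop jar wordcount.jar WordCount <input> <output>`; the framework handles splitting the input across nodes, scheduling map and reduce tasks, and re-executing tasks on machines that fail, in line with the failure-handling assumption described above.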