Flux is an open-source machine-learning software library and ecosystem written in Julia.[1][6] Its current stable release is v0.14.5.[4] It has a layer-stacking-based interface for simpler models, and emphasizes interoperability with other Julia packages rather than a monolithic design.[7] For example, GPU support is implemented transparently by CuArrays.jl.[8] This is in contrast to some other machine learning frameworks which are implemented in other languages with Julia bindings, such as TensorFlow.jl (the unofficial wrapper, now deprecated), and are thus more limited by the functionality present in the underlying implementation, which is often in C or C++.[9] Flux joined NumFOCUS as an affiliated project in December 2021.[10]
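A minimal sketch of the layer-stacking interface (the layer sizes and activation functions here are illustrative, not drawn from the cited sources):

    using Flux

    # Stack layers into a feed-forward model with Chain.
    model = Chain(
        Dense(28^2 => 32, relu),   # 784 input features to 32 hidden units
        Dense(32 => 10),           # 10 output classes
        softmax)

    x = rand(Float32, 28^2)   # a dummy input vector
    y = model(x)              # forward pass; y sums to 1 because of softmax

    # Moving the model to the GPU is a one-line change, e.g. model = gpu(model),
    # with the array work delegated transparently to the CUDA stack.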
Flux's focus on interoperability has enabled, for example, support for neural differential equations, by combining Flux.jl and DifferentialEquations.jl into DiffEqFlux.jl.[11][12]
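As a sketch of that combination (following the older Flux-based DiffEqFlux interface; exact call signatures vary between versions, and the solver choice and layer sizes are arbitrary assumptions):

    using Flux, DiffEqFlux, OrdinaryDiffEq

    # A Flux model defines the dynamics du/dt = f(u).
    dudt = Chain(Dense(2, 16, tanh), Dense(16, 2))

    # NeuralODE integrates those dynamics over tspan with an ODE solver,
    # while keeping the layer parameters trainable.
    tspan = (0.0f0, 1.0f0)
    n_ode = NeuralODE(dudt, tspan, Tsit5(), saveat = 0.1f0)

    u0 = Float32[2.0, 0.0]   # initial condition
    sol = n_ode(u0)          # solution at the requested save points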
Julia is a popular language in machine learning[17] and Flux.jl is its most highly regarded machine-learning repository[17] (Lux.jl is a more recent alternative that shares much of its code with Flux.jl). A demonstration[18] compiling Julia code to run on Google's tensor processing unit (TPU) received praise from Google Brain AI lead Jeff Dean.[19]
Flux has been used as a framework to build neural networks that work with homomorphically encrypted data without ever decrypting it.[20][21] This kind of application is envisioned to be central to privacy in future APIs that use machine-learning models.[22]
^ Roesch, Jared; Lyubomirsky, Steven; Kirisame, Marisa; Pollock, Josh; Weber, Logan; Jiang, Ziheng; Chen, Tianqi; Moreau, Thierry; Tatlock, Zachary (2019). "Relay: A High-Level IR for Deep Learning". arXiv:1904.08368 [cs.LG].
^ Besard, Tim; Foket, Christophe; De Sutter, Bjorn (2019). "Effective Extensible Programming: Unleashing Julia on GPUs". IEEE Transactions on Parallel and Distributed Systems. 30 (4). Institute of Electrical and Electronics Engineers (IEEE): 827–841. arXiv:1712.03112. doi:10.1109/tpds.2018.2872064. S2CID 11827394.
^Besard, Tim (2018). Abstractions for Programming Graphics Processors in High-Level Programming Languages (PhD). Ghent University.