Leaf - Machine Learning for Hackers
Our life is frittered away by detail. Simplify, simplify. - Henry David Thoreau
This short book teaches you how you can build machine learning applications (with Leaf).
Leaf is a Machine Intelligence Framework engineered by hackers, not scientists. It has a very simple API consisting of Layers and Solvers, with which you can build classical machine learning as well as deep learning and other fancy machine intelligence applications. Although Leaf is just a few months old, thanks to Rust and Collenchyma it is already one of the fastest machine intelligence frameworks available.
Leaf was inspired by the brilliant people behind TensorFlow, Torch, Caffe, Rust and numerous research papers and brings modularity, performance and portability to deep learning.
To make the most of the book, a basic understanding of the fundamental concepts of machine and deep learning is recommended. Good resources to get you from zero to almost-ready-to-build-machine-learning-applications:
And if you already have some experience, A 'brief' history of Deep Learning or The Glossary might prove informative.
Both machine and deep learning are really easy with Leaf.
Construct a Network by chaining Layers. Then optimize the network by feeding it examples. This is why Leaf's entire API consists of only two concepts: Layers and Solvers. Use layers to construct almost any kind of model: deep, classical, stochastic or hybrid, and use solvers to execute and optimize the model.
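As a first, hedged sketch of what this looks like in code: backend, net_cfg and classifier_cfg are placeholders here, standing in for objects that are built up step by step in the following chapters.
// Layers: a LayerConfig describes a layer; Layer::from_config turns it into an operable Layer.
let sigmoid = Layer::from_config(backend.clone(), &LayerConfig::new("sigmoid", LayerType::Sigmoid));
// Solvers: a SolverConfig describes how to optimize a network against an objective.
let mut solver_cfg = SolverConfig::default();
solver_cfg.network = LayerConfig::new("network", net_cfg);
solver_cfg.objective = LayerConfig::new("classifier", classifier_cfg);
let mut solver = Solver::from_config(backend.clone(), backend.clone(), &solver_cfg);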
This is already the entire API for machine learning with Leaf. To learn how this is possible and how to build machine learning applications, refer to chapters 2. Layers and 3. Solvers. Enjoy!
Benefits
Leaf was built with three concepts in mind: accessibility/simplicity, performance and portability. We want developers and companies to be able to run their machine learning applications anywhere: on servers, desktops, smartphones and embedded devices. Any combination of platform and computation language (OpenCL, CUDA, etc.) is a first class citizen in Leaf.
We coupled portability with simplicity, meaning you can deploy your machine learning applications to almost any machine and device with no code changes. Learn more at chapter 4. Backend or at the Collenchyma Github repository.
Contributing
Want to contribute? Awesome! We have instructions to help you get started.
Leaf has a near real-time collaboration culture, which happens at the Github repository and on the Leaf Gitter Channel.
API Documentation
Alongside this book you can also read the Rust API documentation if you would like to use Leaf as a crate, write a library on top of it or just want a more low-level overview.
License
Leaf is free for anyone for whatever purpose. Leaf is licensed under either the Apache License v2.0 or the MIT license, whichever strikes your fancy.
Layers
What is a Layer?
Layers are the only building blocks in Leaf. As we will see later on, everything is a layer. Even when we construct networks, we are still just working with layers composed of smaller layers. This makes the API clean and expressive.
A layer is like a function: given an input it computes an output. It could be some mathematical expression, like Sigmoid, ReLU, or a non-mathematical instruction, like querying data from a database, logging data, or anything in between. In Leaf, layers describe not only the interior 'hidden layers' but also the input and output layer.
Layers in Leaf are only slightly opinionated: they need to take an input and produce an output. This is required in order to successfully stack layers on top of each other to build a network. Other than that, a layer in Leaf can implement any behaviour.
Layers are constructed via the LayerConfig (/src/layer.rs), which makes creating even complex networks easy and manageable.
// construct the config for a fully connected layer with 500 nodes
let linear_1: LayerConfig = LayerConfig::new("linear1", LinearConfig { output_size: 500 });
A LayerConfig can be turned into an initialized, fully operable Layer (/src/layer.rs) with its from_config method.
// construct the config for a fully connected layer with 500 nodes
let linear_1: LayerConfig = LayerConfig::new("linear1", LinearConfig { output_size: 500 });
let linear_network_with_one_layer: Layer = Layer::from_config(backend, &linear_1);
Hurray! We just constructed a network with one layer. (In the following chapter we will learn how to create more powerful networks).
The from_config method initializes a Layer, which wraps the specific implementation (a struct that implements ILayer (/src/layer.rs)) in a worker field.
In the tiny example above, the worker field of the linear_network_with_one_layer is a Linear (/src/layers/common/linear.rs), because we constructed the linear_network_with_one_layer from a LinearConfig. The worker field introduces the specific behaviour of the layer.
In the following chapters we explore how to construct real-world networks, the layer lifecycle, and how to add new layers to the Leaf framework.
What can Layers do?
A layer can implement basically any behaviour: deep learning related like convolutions or LSTM, classical machine learning related like nearest neighbors or random forest, or utility related like logging or normalization. To make the behaviour of a layer more explicit, Leaf groups layers into one of five categories based on their (machine learning) functionality: Activation, Loss, Common, Utility and Container layers.
In practice, the groups are not really relevant; they mainly keep the file structure clean and simplify the explanation of what a layer is doing.
Activation Layers
Activation layers provide element-wise operations and return an output of the same size as the input. Activation layers can be seen as equivalent to nonlinear Activation Functions and are a fundamental piece in neural networks.
Examples of activation layers are Sigmoid, TanH or ReLU. All available activation layers can be found at src/layers/activation.
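As a small example (mirroring the Sigmoid config used later in this book), an activation layer is configured like any other layer; TanH and ReLU follow the same pattern.
// configure a Sigmoid activation layer; plain layer types like Sigmoid need no extra config struct
let sigmoid_cfg = LayerConfig::new("sigmoid", LayerType::Sigmoid);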
Loss Layers
Loss layers compare an output to a target value and assign a cost to minimize. Loss layers are often the last layer in a network.
Examples of loss layers are Hinge Loss, Softmax Loss or Negative Log Likelihood. All available loss layers can be found at src/layers/loss.
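A hedged sketch of configuring a loss layer; the NegativeLogLikelihoodConfig struct and its num_classes field are assumptions borrowed from the Leaf examples, not something introduced above.
// configure a Negative Log Likelihood loss layer for a 10-class problem
let nll_cfg = NegativeLogLikelihoodConfig { num_classes: 10 };
let loss_cfg = LayerConfig::new("nll", LayerType::NegativeLogLikelihood(nll_cfg));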
Common Layers
Common layers can differ in their connectivity and behavior. They are typically anything that is not an activation or loss layer.
Examples of common layers are fully-connected, convolutional, pooling, LSTM, etc. All available common layers can be found at src/layers/common.
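Common layers are configured through their own config structs; a brief sketch, reusing the configs that appear in chapter 2.2 Create a Network.
// a fully-connected (linear) layer with 500 output nodes
let linear_cfg = LayerConfig::new("linear1", LinearConfig { output_size: 500 });
// a max-pooling layer with a 2x2 filter and stride 2
let pooling_cfg = LayerConfig::new("pooling", PoolingConfig { mode: PoolingMode::Max, filter_shape: vec![2], stride: vec![2], padding: vec![0] });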
Utility Layers
Utility layers introduce all kinds of helpful functionality, which might not be directly related to machine learning and neural nets. These could be operations for normalizing, restructuring or transforming information, logging and debugging behavior, or data access. Utility layers follow the general behavior of a layer, just like the other types.
Examples of utility layers are Reshape, Flatten or Normalization. All available utility layers can be found at src/layers/utility.
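For example, the Reshape utility layer used in chapter 2.2 Create a Network is configured like this (batch_size is a placeholder variable):
// reshape flat 28x28 inputs into a 4-dimensional tensor (batch, channels, height, width)
let reshape_cfg = LayerConfig::new("reshape", ReshapeConfig::of_shape(&vec![batch_size, 1, 28, 28]));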
Container Layers
Container layers take LayerConfigs and connect them on initialization, which creates a "network". But as container layers are layers themselves, one can stack multiple container layers on top of each other and compose even bigger container layers. Container layers differ in how they connect the layers that they receive.
An example of a container layer is Sequential. All available container layers can be found at src/layers/container.
Why Layers?
The benefit of a layer-based design is that it allows for a very expressive setup that can represent, as far as we know, any machine learning algorithm. That makes Leaf a framework that can be used to construct practical machine learning applications combining different paradigms.
Other machine learning frameworks take a symbolic instead of a layered approach. For Leaf we decided against it, as we found it easier for developers to work with layers than with mathematical expressions. More complex algorithms like LSTMs are also harder to replicate in a symbolic framework. We believe that Leaf's layer approach strikes a great balance between expressiveness, usability and performance.
Layer Lifecycle
In chapter 2. Layers we saw how to construct a simple Layer from a LayerConfig. In this chapter, we take a closer look at what happens inside Leaf when initializing a Layer and when running its .forward and .backward methods. In the next chapter 2.2 Create a Network we apply our knowledge to construct deep networks with the container layer.
The most important methods of a Layer are initialization (::from_config), .forward and .backward. They basically describe the entire API, so let's take a closer look at what happens inside Leaf when these methods are called.
Initialization
A layer is constructed from a LayerConfig with the Layer::from_config method, which returns a fully initialized Layer.
let mut sigmoid: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("sigmoid", LayerType::Sigmoid));
let mut alexnet: Layer = Layer::from_config(backend.clone(), &LayerConfig::new("alexnet", LayerType::Sequential(cfg)));
In the example above, the first layer has a Sigmoid worker (LayerType::Sigmoid) and the second layer has a Sequential worker.
Although both ::from_config methods return a Layer, the behavior of that Layer depends on the LayerConfig it was constructed with. Layer::from_config internally calls the worker_from_config method, which constructs the specific worker defined by the LayerConfig.
fn worker_from_config(backend: Rc<B>, config: &LayerConfig) -> Box<ILayer<B>> {
    match config.layer_type.clone() {
        // more matches
        LayerType::Pooling(layer_config) => Box::new(Pooling::from_config(&layer_config)),
        LayerType::Sequential(layer_config) => Box::new(Sequential::from_config(backend, &layer_config)),
        LayerType::Softmax => Box::new(Softmax::default()),
        // more matches
    }
}
The layer-specific ::from_config (if available or needed) then takes care of initializing the worker struct, allocating memory for weights and so on.
If the worker is a container layer, its ::from_config takes care of initializing all the LayerConfigs it contains (which were added via its .add_layer method) and connecting them in the order they were provided.
Every .forward or .backward call that is made on the returned Layer is run by the internal worker.
Forward
The forward method of a Layer threads the input through the constructed network and returns the output of the network's final layer.
The .forward method does three things:
- Reshape the input data if necessary
- Sync the input/weights to the device where the computation happens. This step removes the need for the worker layer to care about memory synchronization.
- Call the forward method of the internal worker layer.
If the worker layer is a container layer, the .forward method takes care of calling the .forward methods of its managed layers in the right order.
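A hedged usage sketch, mirroring the solver code shown in chapter 3.1 Optimize Layers: net is assumed to be an initialized Layer and inp an input SharedTensor; wrapping the tensor in an Arc/RwLock handles shared ownership.
// wrap the input tensor so it can be shared between layers
let inp_lock = Arc::new(RwLock::new(inp));
// run a forward pass; the result is the output of the network's final layer
let output = net.forward(&[inp_lock.clone()]);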
Backward
The .backward method of a Layer works similarly to .forward, except that no reshaping of the input is needed. The .backward method computes the gradient with respect to the input as well as the gradient w.r.t. the parameters. However, the method only returns the input gradient, because that is all that is needed to compute the gradient of the entire network via the chain rule.
If the worker layer is a container layer, the .backward method takes care of calling the .backward_input and .backward_parameter methods of its managed layers in the right order.
Create a Network
In the previous chapters, we learned that in Leaf everything is built from layers and that the constructed result is itself a layer again, which means it can serve as a building block for something bigger. This is possible because a Layer can implement any behavior as long as it takes an input and produces an output.
In 2.1 Layer Lifecycle we have seen that a single LayerConfig can be turned via Layer::from_config into an actual Layer. But as Deep Learning relies on chaining multiple layers together, we need a Layer which implements this behavior for us.
Enter the container layers.
Networks via the Sequential layer
A Sequential Layer is a container layer. The config of a container layer has a special method called .add_layer, which takes one LayerConfig and adds it to an ordered list in the SequentialConfig.
When turning a SequentialConfig into a Layer by passing the config to Layer::from_config, the behavior of the Sequential worker is to initialize all the layers which were added via .add_layer and to connect them with each other. This means the output of one layer becomes the input of the next layer in the list.
The input of a sequential Layer becomes the input of the first layer in the sequential worker; the sequential worker then takes care of passing the input through all the layers, and the output of the last layer becomes the output of the Layer with the sequential worker. Therefore a sequential Layer fulfills the requirements of a Layer: take an input, return an output.
// short form for: &LayerConfig::new("net", LayerType::Sequential(cfg))
let mut net_cfg = SequentialConfig::default();
net_cfg.add_input("data", &vec![batch_size, 28, 28]);
net_cfg.add_layer(LayerConfig::new("reshape", ReshapeConfig::of_shape(&vec![batch_size, 1, 28, 28])));
net_cfg.add_layer(LayerConfig::new("conv", ConvolutionConfig { num_output: 20, filter_shape: vec![5], stride: vec![1], padding: vec![0] }));
net_cfg.add_layer(LayerConfig::new("pooling", PoolingConfig { mode: PoolingMode::Max, filter_shape: vec![2], stride: vec![2], padding: vec![0] }));
net_cfg.add_layer(LayerConfig::new("linear1", LinearConfig { output_size: 500 }));
net_cfg.add_layer(LayerConfig::new("sigmoid", LayerType::Sigmoid));
net_cfg.add_layer(LayerConfig::new("linear2", LinearConfig { output_size: 10 }));
net_cfg.add_layer(LayerConfig::new("log_softmax", LayerType::LogSoftmax));
// set up the sequential layer aka. a deep, convolutional network
let mut net = Layer::from_config(backend.clone(), &net_cfg);
As a sequential layer is like any other layer, we can use sequential layers as building blocks for larger networks. Important building blocks of a network can be grouped into a sequential layer and published as a crate for others to use.
// short form for: &LayerConfig::new("net", LayerType::Sequential(cfg))
let mut conv_net = SequentialConfig::default();
conv_net.add_input("data", &vec![batch_size, 28, 28]);
conv_net.add_layer(LayerConfig::new("reshape", ReshapeConfig::of_shape(&vec![batch_size, 1, 28, 28])));
conv_net.add_layer(LayerConfig::new("conv", ConvolutionConfig { num_output: 20, filter_shape: vec![5], stride: vec![1], padding: vec![0] }));
conv_net.add_layer(LayerConfig::new("pooling", PoolingConfig { mode: PoolingMode::Max, filter_shape: vec![2], stride: vec![2], padding: vec![0] }));
conv_net.add_layer(LayerConfig::new("linear1", LinearConfig { output_size: 500 }));
conv_net.add_layer(LayerConfig::new("sigmoid", LayerType::Sigmoid));
conv_net.add_layer(LayerConfig::new("linear2", LinearConfig { output_size: 10 }));
let mut net_cfg = SequentialConfig::default();
net_cfg.add_layer(LayerConfig::new("conv_net", conv_net));
net_cfg.add_layer(LayerConfig::new("linear", LinearConfig { output_size: 500 }));
net_cfg.add_layer(LayerConfig::new("log_softmax", LayerType::LogSoftmax));
// set up the 'big' network
let mut net = Layer::from_config(backend.clone(), &net_cfg);
Networks via other container layers
So far, there is only the sequential layer, but other container layers with slightly different behaviors are conceivable, for example a parallel or a concat layer in addition to the sequential layer.
How to 'train' or optimize the constructed network is the topic of chapter 3. Solvers.
Create a new Layer
A layer in Leaf can implement any behavior as long as it takes an input and produces an output. As Leaf is new, there are still many valuable layers that are not yet implemented. This is why this chapter shows how you can add new layers to Leaf.
A non-exhaustive list of steps to take in order to implement a new layer is given below. The Rust compiler is also very helpful in pointing out the necessary steps for implementing a new layer struct, and it might be beneficial to start the implementation of a new layer from a copy of an already existing layer.
- Decide to which of the five types the new layer belongs. This decides under which directory to put the layer implementation in the Leaf project.
- Create the Layer worker struct.
- Expose the Layer worker struct in the mod.rs of the layer type directory.
- Expose the Layer worker struct in the mod.rs of the /layers directory.
- Implement ILayer and its trait boundaries for the new Layer worker struct.
- Add the new layer to the LayerType in layer.rs and add the matching for .support_in_place and .worker_from_config (see the sketch after this list).
- If the new layer relies on a Collenchyma operation, also add the Collenchyma trait boundary.
- Add documentation and serialization to the new layer.
- (optional) Depending on how complex the layer is, you might also add tests and more advanced implementations for its .from_config, .reshape or other helper methods.
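As a hedged illustration of the LayerType/worker_from_config step, wiring up a hypothetical Dropout layer (the name and its DropoutConfig are assumptions, not an existing Leaf layer) would mean adding a new match arm alongside the existing ones shown in chapter 2.1 Layer Lifecycle.
// in layer.rs: add a variant to the LayerType enum, e.g. Dropout(DropoutConfig),
// and a matching arm in worker_from_config, mirroring the existing arms:
fn worker_from_config(backend: Rc<B>, config: &LayerConfig) -> Box<ILayer<B>> {
    match config.layer_type.clone() {
        // existing matches ...
        LayerType::Dropout(layer_config) => Box::new(Dropout::from_config(&layer_config)),
        // ...
    }
}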
Solvers
Solvers optimize the layer with a given objective. This might happen by updating the weights of the layer, which is the usual practice for Neural Networks but is not limited to this kind of learning.
A solver can have different learning (solving) policies. With Neural Networks, it is common to use a Stochastic Gradient Descent based approach like Adagrad, whereas for a classical regression the solving might be done via a maximum likelihood estimation.
Similar to Layers, we can construct a Solver (/src/solver/mod.rs) from a SolverConfig (/src/solver/mod.rs). When passing this SolverConfig (e.g. an Adagrad SolverConfig) to the Solver::from_config method, a Solver with the behavior of the config is returned.
The most characteristic features of the SolverConfig are its network and objective fields. These two fields expect one LayerConfig each. When passing the SolverConfig to the Solver::from_config method, the LayerConfigs of the network and objective fields are turned into initialized Layers and provided to the returned Solver.
// set up a Solver
let mut solver_cfg = SolverConfig { minibatch_size: batch_size, base_lr: learning_rate, momentum: momentum, .. SolverConfig::default() };
solver_cfg.network = LayerConfig::new("network", net_cfg);
solver_cfg.objective = LayerConfig::new("classifier", classifier_cfg);
let mut solver = Solver::from_config(backend.clone(), backend.clone(), &solver_cfg);
The now initialized Solver can be fed with data to optimize the network.
Optimize Layers
In the previous chapter 3. Solver, we learned what a solver is and what it does. In this chapter, we take a look at how to optimize a network via a Solver.
After its initialization, a Solver has two Layers, one for the network which will be optimized and one for the objective. The output of the network layer is used by the objective to compute the loss. The loss is then used by the Solver to optimize the network.
The Solver has a very simple API: .train_minibatch and .network. The optimization of the network is kicked off by the .train_minibatch method, which takes two input parameters: some data that is fed to the network and the expected target value for the network.
An SGD (Stochastic Gradient Descent) Solver would now compute the output of the network using the data as input, put the output together with the expected target value into the objective layer, and use the result, together with the gradient of the network, to optimize the weights of the network.
/// Train the network with one minibatch
pub fn train_minibatch(&mut self, mb_data: ArcLock<SharedTensor<f32>>, mb_target: ArcLock<SharedTensor<f32>>) -> ArcLock<SharedTensor<f32>> {
    // forward through network and objective (classifier)
    let network_out = self.net.forward(&[mb_data])[0].clone();
    let _ = self.objective.forward(&[network_out.clone(), mb_target]);
    // backward through objective and network to compute the gradients
    let classifier_gradient = self.objective.backward(&[]);
    self.net.backward(&classifier_gradient[0 .. 1]);
    // let the solver worker (e.g. SGD) compute and apply the weight update
    self.worker.compute_update(&self.config, &mut self.net, self.iter);
    self.net.update_weights(self.worker.backend());
    self.iter += 1;
    network_out
}
Using .train_minibatch is straightforward. We pass the data as well as the expected result of the network to the .train_minibatch method of the initialized Solver struct. A more detailed example can be found at the autumnai/leaf-examples repository.
let inp_lock = Arc::new(RwLock::new(inp));
let label_lock = Arc::new(RwLock::new(label));
// train the network!
let inferred_out = solver.train_minibatch(inp_lock.clone(), label_lock.clone());
If we don't want the network to be trained, we can use the .network method of the Solver to get access to the network. The Solver actually has two network methods: .network and .mut_network.
To run just the forward pass of the network without any optimization, we can run:
let inferred_out = solver.network().forward(inp_lock.clone());
Leaf ships with a confusion matrix, which is a convenient way to visualize the performance of the optimized network.
let inferred_out = solver.train_minibatch(inp_lock.clone(), label_lock.clone());
let mut inferred = inferred_out.write().unwrap();
let predictions = confusion.get_predictions(&mut inferred);
confusion.add_samples(&predictions, &targets);
println!("Last sample: {} | Accuracy {}", confusion.samples().iter().last().unwrap(), confusion.accuracy());
A more detailed example can be found at the autumnai/leaf-examples repository.
Multi-Device Optimization
Optimization of a layer over multiple devices is planned for the Leaf 0.3 release. Thanks to the decoupling of computation and representation through Collenchyma, multi-device optimization is fairly straightforward to implement.
Pull Requests are welcome :)
Distributed Optimization
The distributed optimization of networks will (very likely) be managed by a standalone crate on top of Leaf. Although distributed optimization will not be a core part of Leaf itself, we will cover the topic of distributed optimization with Leaf here in this chapter of the book.
Backend
Via the concept of a backend we can abstract over the platform on which we execute or optimize a network. The construction of a backend is trivial. The backend is passed to the Solver (one backend for the network and one for the objective). The Solver then executes all operations on the provided backend.
let backend = ::std::rc::Rc::new(Backend::<Cuda>::default().unwrap());
// set up solver
let mut solver_cfg = SolverConfig { minibatch_size: batch_size, base_lr: learning_rate, momentum: momentum, .. SolverConfig::default() };
solver_cfg.network = LayerConfig::new("network", net_cfg);
solver_cfg.objective = LayerConfig::new("classifier", classifier_cfg);
let mut solver = Solver::from_config(backend.clone(), backend.clone(), &solver_cfg);
The backend is a concept of Collenchyma, to which you can refer for now, until this chapter becomes more fleshed out.
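Since the backend abstraction comes from Collenchyma, the same solver code can run on a different device by swapping the backend construction. A hedged sketch, assuming Collenchyma's Native (host CPU) framework is available:
// construct a native (host CPU) backend instead of the CUDA backend shown above
let backend = ::std::rc::Rc::new(Backend::<Native>::default().unwrap());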
Glossary
Layer
In General
A layer is the highest-level building block in a (Deep) Neural Network. A layer is a container that usually receives weighted input, transforms it and returns the result as output to the next layer. A layer usually contains one type of function like ReLU, pooling, convolution etc. so that it can be easily compared to other parts of the network. The first and last layers in a network are called input and output layers, respectively, and all layers in between are called hidden layers.
In Leaf
In Leaf, a layer is very similar to the general understanding of a layer. A layer in Leaf, like a layer in a (Deep) Neural Network,
- is the highest-level building block
- needs to receive input, might transform it and needs to return the result
- should be uniform (it does one type of function)
In addition to what a Neural Network layer can do, a Leaf layer can implement any functionality, not only those related to Neural Networks like ReLU, pooling, LSTM, etc. For example, the Sequential layer in Leaf allows one to connect multiple layers, creating a network.
Network
In General
A network, also often called a Neural Network (NN) or Artificial Neural Network (ANN), is a subset of Machine Learning methods.
A non-exhaustive list of other Machine Learning methods: Linear Regression, SVM, Genetic/Evolution Algorithms, dynamic programming, deterministic algorithmic optimization methods.
In Leaf
In Leaf, a network means a graph (a connected set) of one or more layers. This network can consist of Artificial Neural Network methods, other Machine Learning methods or any other (not Machine Learning related) methods. As described in 2. Layers, a network in Leaf is actually a layer which connects other layers.
An initialized network is a network which is ready to be executed, meaning it is fully constructed, e.g. all necessary memory is allocated on the host or device.