AIfES 2 2.0.0
ailayer_dense_avr_pgm.h File Reference

Implementation of the Dense layer with parameters in PROGMEM for AVR controllers. More...

Go to the source code of this file.

Functions

ailayer_t *  ailayer_dense_f32_avr_pgm (ailayer_dense_f32_t *layer, ailayer_t *input_layer)
 Initializes and connects a Dense layer with the F32 AVR PGM implementation. More...
 
ailayer_t *  ailayer_dense_q7_avr_pgm (ailayer_dense_q7_t *layer, ailayer_t *input_layer)
 Initializes and connects a Dense layer with the Q7 AVR PGM implementation. More...
 
ailayer_t *  ailayer_dense_wt_q7_avr_pgm (ailayer_dense_q7_t *layer, ailayer_t *input_layer)
 Initializes and connects a Dense layer with the Q7 AVR PGM implementation (transposed weights). More...
 

Detailed Description

Implementation of the Dense layer with parameters in PROGMEM for AVR controllers.

Version
2.0alpha

AIfES is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.

AVR controller implementations of the Dense layer in the F32 and Q7 data types. For more information about the Dense layer, refer to ailayer_dense.h.

This implementation allows weights and biases to be read directly from the program memory (flash) of AVR controllers. This is useful if there are too many weights to fit into RAM.

Requires the avr/pgmspace.h library.
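
A minimal sketch of the PROGMEM mechanism this implementation builds on (assuming an AVR target compiled with avr-gcc / avr-libc; the names below are illustrative only and not part of the AIfES API):

#include <avr/pgmspace.h>
#include <stdint.h>

// The const array is placed in flash (program memory) instead of RAM.
const float weights_in_flash[] PROGMEM = {0.5f, -1.25f, 2.0f};

float read_weight(uint8_t i)
{
    // Flash cannot be dereferenced like RAM on AVR; pgm_read_float()
    // loads the value from program memory.
    return pgm_read_float(&weights_in_flash[i]);
}

The layer functions in this file perform such program-memory reads internally, so the weight and bias buffers only need to be declared with the PROGMEM attribute as shown in the examples below.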

Function Documentation

◆ ailayer_dense_f32_avr_pgm()

ailayer_t * ailayer_dense_f32_avr_pgm ( ailayer_dense_f32_t *  layer,
                                        ailayer_t *             input_layer 
                                      )

Initializes and connects a Dense layer with the F32 AVR PGM implementation.

The weights and bias have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as with ailayer_dense_f32_default().

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
const float weights_data_dense[] PROGMEM = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};
ailayer_dense_f32_t dense_layer = {
    .neurons = 3,
    .weights.data = (float *) weights_data_dense,
    .bias.data = (float *) bias_data_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
const float weights_data_dense[] PROGMEM = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};
ailayer_dense_f32_t dense_layer = AILAYER_DENSE_F32_M(3, weights_data_dense, bias_data_dense);

Example: Create the layer structure for automatic parameter distribution:
In C:

ailayer_dense_f32_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_f32_t dense_layer = AILAYER_DENSE_F32_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_f32_avr_pgm(&dense_layer, x);
Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
The (successfully) initialized layer structure.
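
For orientation, the example arrays in this section imply a layer with 2 inputs: a Dense layer needs inputs * neurons weight values and neurons bias values. A hedged sketch with the sizes written out explicitly (the 2-input shape is inferred from the 6 weight values; it is not stated in the example):

#include <avr/pgmspace.h>

#define DENSE_INPUTS  2
#define DENSE_NEURONS 3

// weights: one value per (input, neuron) pair -> 2 * 3 = 6 entries
const float weights_data_dense[DENSE_INPUTS * DENSE_NEURONS] PROGMEM = {
    -10.1164f, -8.4212f, 5.4396f,
     7.297f,   -7.6482f, -9.0155f
};
// bias: one value per neuron -> 3 entries
const float bias_data_dense[DENSE_NEURONS] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};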

◆ ailayer_dense_q7_avr_pgm()

ailayer_t * ailayer_dense_q7_avr_pgm ( ailayer_dense_q7_t *  layer,
                                       ailayer_t *            input_layer 
                                     )

Initializes and connects a Dense layer with the Q7 AVR PGM implementation.

The weights, bias and quantization parameters have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as with ailayer_dense_q7_default().
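
The quantization parameters follow the usual AIfES fixed-point convention (a hedged reading, but consistent with the numbers in the example below): a real value is recovered as (q - zero_point) / 2^shift. A small host-side sketch that checks this against the example's values:

#include <stdint.h>
#include <stdio.h>

// Hedged helper: map a stored integer back to a real value.
static float dequantize(int32_t q, uint16_t shift, int32_t zero_point)
{
    return (float)(q - zero_point) / (float)(1UL << shift);
}

int main(void)
{
    // Q7 weight -81 with shift 3, zero_point 0:   -81 / 8    = -10.125  (about -10.1164 in the F32 example)
    printf("%f\n", dequantize(-81, 3, 0));
    // Q31 bias -3036 with shift 10, zero_point 0: -3036 / 1024 = -2.9648 (about -2.9653 in the F32 example)
    printf("%f\n", dequantize(-3036, 10, 0));
    return 0;
}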

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, -67, 44, 58, -61, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = {
    .neurons = 3,
    .weights = {
        .tensor_params = (aimath_q7_params_t *) &weights_q_params_dense,
        .data = (int8_t *) weights_data_dense
    },
    .bias = {
        .tensor_params = (aimath_q31_params_t *) &bias_q_params_dense,
        .data = (int32_t *) bias_data_dense
    },
    .base.result.tensor_params = (aimath_q7_params_t *) &result_q_params_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, -67, 44, 58, -61, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_M(3,
                                                    weights_data_dense, &weights_q_params_dense,
                                                    bias_data_dense, &bias_q_params_dense,
                                                    &result_q_params_dense);

Example: Create the layer structure for automatic parameter distribution (parameter buffer must be in PROGMEM):
In C:

ailayer_dense_q7_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_q7_avr_pgm(&dense_layer, x);
Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
The (successfully) initialized layer structure.

◆ ailayer_dense_wt_q7_avr_pgm()

ailayer_t * ailayer_dense_wt_q7_avr_pgm ( ailayer_dense_q7_t *  layer,
                                          ailayer_t *            input_layer 
                                        )

Initializes and connects a Dense layer with the Q7 AVR PGM implementation (transposed weights).

This implementation is the same as ailayer_dense_q7_avr_pgm() but with a transposed weights matrix/tensor.
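
The difference is only the storage order of the weight buffer. Assuming the 2-input / 3-neuron shape used throughout this file (and that the non-transposed buffer is laid out as inputs x neurons), the same logical matrix is flattened in two ways; the values are taken from the Q7 examples of ailayer_dense_q7_avr_pgm() and of this function:

// ailayer_dense_q7_avr_pgm(): weights stored as (inputs x neurons), row by row
//   { -81, -67,  44,     // input 0 -> neuron 0, 1, 2
//      58, -61, -72 };   // input 1 -> neuron 0, 1, 2

// ailayer_dense_wt_q7_avr_pgm(): the transposed layout, (neurons x inputs)
//   { -81,  58,          // neuron 0 <- input 0, 1
//     -67, -61,          // neuron 1 <- input 0, 1
//      44, -72 };        // neuron 2 <- input 0, 1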

The weights, bias and quantization parameters have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as with ailayer_dense_q7_default().

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = {
    .neurons = 3,
    .weights = {
        .tensor_params = (aimath_q7_params_t *) &weights_q_params_dense,
        .data = (int8_t *) weights_data_dense
    },
    .bias = {
        .tensor_params = (aimath_q31_params_t *) &bias_q_params_dense,
        .data = (int32_t *) bias_data_dense
    },
    .base.result.tensor_params = (aimath_q7_params_t *) &result_q_params_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_M(3,
                                                    weights_data_dense, &weights_q_params_dense,
                                                    bias_data_dense, &bias_q_params_dense,
                                                    &result_q_params_dense);

Example: Create the layer structure for automatic parameter distribution (parameter buffer must be in PROGMEM):
In C:

ailayer_dense_q7_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_wt_q7_avr_pgm(&dense_layer, x);
Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
The (successfully) initialized layer structure.