AIfES 2 2.0.0
ailayer_dense_cmsis.h File Reference

Arm CMSIS implementation of the Dense layer for Arm Cortex processors. More...

Go to the source code of this file.

Functions

ailayer_t * ailayer_dense_f32_cmsis (ailayer_dense_t *layer, ailayer_t *input_layer)
    Initializes and connects a Dense layer with the F32 CMSIS implementation. More...
 
ailayer_t * ailayer_dense_wt_q7_cmsis (ailayer_dense_t *layer, ailayer_t *input_layer)
    Initializes and connects a Dense layer with the Q7 CMSIS implementation for a transposed weights tensor. More...
 

Detailed Description

Arm CMSIS implementation of the Dense layer for Arm Cortex processors.

Version
2.0alpha

AIfES is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.

Arm CMSIS implementations of the Dense layer for the F32, Q31 and Q7 data types. These implementations are specifically designed for Arm Cortex processors and take advantage of SIMD instructions. For more information about the Dense layer refer to ailayer_dense.h.
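The CMSIS layer functions are used in the same way as the default implementations: each ailayer_dense_*_cmsis() call takes a pointer to the previous layer (ailayer_t *) and returns a pointer to the newly connected layer, so a model is built by chaining such calls. A minimal sketch of this pattern (the input-layer setup is an assumption based on ailayer_input.h; names and fields may differ in your AIfES version):

#include "aifes.h"  // assumed umbrella header of AIfES

// Hypothetical input: batches of 1 sample with 2 features feeding a dense layer with 3 neurons.
uint16_t input_shape[] = {1, 2};
ailayer_input_f32_t input_layer = {
    .input_dim = 2,            // number of dimensions of the input shape
    .input_shape = input_shape
};
ailayer_dense_f32_t dense_layer = {
    .neurons = 3               // weights/bias can be filled as shown in the examples below
};

void build_model(void)
{
    ailayer_t *x;
    x = ailayer_input_f32_default(&input_layer);   // start of the layer chain
    x = ailayer_dense_f32_cmsis(&dense_layer, x);  // CMSIS dense layer appended to it
    // x now points to the dense layer and would be passed to the next layer
    // or registered as the model output.
}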

Function Documentation

◆ ailayer_dense_f32_cmsis()

ailayer_t * ailayer_dense_f32_cmsis (ailayer_dense_t *layer, ailayer_t *input_layer)

Initializes and connects a Dense layer with the F32 CMSIS implementation.

Example: Create the layer structure with pretrained weights:

// Use constant data only for inference. For training remove the const qualifier!!
const float weights_data_dense[] = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] = {-2.9653f, 2.3677f, -1.5968f};
ailayer_dense_f32_t dense_layer = {
    .neurons = 3,
    .weights.data = (float *) weights_data_dense,
    .bias.data = (float *) bias_data_dense
};
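The six weight values presumably describe a layer with two inputs and three neurons (one bias value per neuron). Assuming the row-major (inputs x neurons) layout of the default Dense layer in ailayer_dense.h, they would be read as:

// Assumed interpretation of the flat arrays above (2 inputs, 3 neurons,
// row-major inputs x neurons layout):
//
//             neuron 0     neuron 1    neuron 2
// input 0:   -10.1164f    -8.4212f     5.4396f
// input 1:     7.297f     -7.6482f    -9.0155f
//
// bias:       -2.9653f     2.3677f    -1.5968f   // one bias value per neuron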

Example: Create the layer structure for training:

ailayer_dense_f32_t dense_layer = {
    .neurons = 3
};
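With only .neurons set, the weights and bias memory is not provided by the layer struct itself; AIfES distributes it later together with the other trainable parameters of the model. A rough sketch of that step, assuming the parameter-memory helpers of the aialgo module (check aialgo_sequential_training.h for the exact names and signatures in your version):

#include <stdlib.h>
#include "aifes.h"  // assumed umbrella header of AIfES

// model->input_layer / model->output_layer are assumed to be set up already.
void allocate_trainable_parameters(aimodel_t *model)
{
    // Ask AIfES how much memory the trainable parameters (weights, biases, ...) need.
    uint32_t parameter_memory_size = aialgo_sizeof_parameter_memory(model);

    // Provide one block for all of them; on a microcontroller this would
    // typically be a static buffer instead of malloc().
    void *parameter_memory = malloc(parameter_memory_size);

    // Distribute the block to the layers, which sets dense_layer.weights.data etc.
    aialgo_distribute_parameter_memory(model, parameter_memory, parameter_memory_size);
}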

Example: Initialize and connect the layer:

x = ailayer_dense_f32_cmsis(&dense_layer, x);
Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
The (successfully) initialized layer structure.

◆ ailayer_dense_wt_q7_cmsis()

ailayer_t * ailayer_dense_wt_q7_cmsis (ailayer_dense_t *layer, ailayer_t *input_layer)

Initializes and connects a Dense layer with the Q7 CMSIS implementation for a transposed weights tensor.

Initializes and connects a Dense layer with the Q7 Arm CMSIS implementation for a transposed weights tensor.

The weights tensor has to be transposed for this implementation, like in ailayer_dense_wt_q7_default().
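To illustrate what a transposed weights tensor means here, assume a layer with 2 inputs and 3 neurons; the flat weights array is then stored neuron-major instead of input-major (this reading is inferred from the example values on this page, not from the implementation itself):

// Regular layout    (inputs  x neurons): { i0n0, i0n1, i0n2,  i1n0, i1n1, i1n2 }
// Transposed layout (neurons x inputs):  { n0i0, n0i1,  n1i0, n1i1,  n2i0, n2i1 }
//
// The Q7 example below ({-81, 58, -67, -61, 44, -72}) matches the transposed
// reading of the F32 weights shown earlier on this page (with shift = 3,
// e.g. -81 / 2^3 = -10.125, close to -10.1164f).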

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = {
    .neurons = 3,
    .weights = {
        .tensor_params = (aimath_q7_params_t *) &weights_q_params_dense,
        .data = (int8_t *) weights_data_dense
    },
    .bias = {
        .tensor_params = (aimath_q31_params_t *) &bias_q_params_dense,
        .data = (int32_t *) bias_data_dense
    },
    .base.result.tensor_params = (aimath_q7_params_t *) &result_q_params_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_M(3,
        weights_data_dense, &weights_q_params_dense,
        bias_data_dense, &bias_q_params_dense,
        &result_q_params_dense);
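To relate the quantization parameters above to real values: a stored integer q represents approximately (q - zero_point) / 2^shift, which is how the Q7/Q31 values in these snippets line up with the F32 example earlier on this page. The helper below is purely hypothetical and only illustrates that relation (see aimath_q7.h / aimath_q31.h for the authoritative definitions):

#include <stdio.h>
#include <stdint.h>

// Hypothetical helper: recover the approximate real value of a quantized number,
// assuming real ~ (q - zero_point) / 2^shift.
static float q_to_float(int32_t q, uint16_t shift, int32_t zero_point)
{
    return (float)(q - zero_point) / (float)(1 << shift);
}

int main(void)
{
    // Q7 weight, shift = 3:  -81   -> -10.125  (F32 example weight: -10.1164f)
    printf("%f\n", q_to_float(-81, 3, 0));
    // Q31 bias,  shift = 10: -3036 -> -2.9648  (F32 example bias:    -2.9653f)
    printf("%f\n", q_to_float(-3036, 10, 0));
    return 0;
}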

Example: Create the layer structure for automatic parameter distribution:
In C:

ailayer_dense_q7_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_wt_q7_cmsis(&dense_layer, x);
Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
The (successfully) initialized layer structure.