AIfES 2  2.0.0
aialgo_sequential_inference.h File Reference

Functions required for inference of models. More...

Go to the source code of this file.

Functions

uint32_t aialgo_sizeof_inference_memory (aimodel_t *model)
 Calculate the memory requirements for intermediate results of an inference. More...
 
uint32_t aialgo_sizeof_parameter_memory (aimodel_t *model)
 Calculate the memory requirements for the trainable parameters (like weights, bias, ...) of the model. More...
 
uint8_t aialgo_schedule_inference_memory (aimodel_t *model, void *memory_ptr, uint32_t memory_size)
 Assign the memory for intermediate results of an inference to the model. More...
 
void aialgo_distribute_parameter_memory (aimodel_t *model, void *memory_ptr, uint32_t memory_size)
 Assign the memory for the trainable parameters (like weights, bias, ...) of the model. More...
 
aitensor_t * aialgo_forward_model (aimodel_t *model, aitensor_t *input_data)
 Perform a forward pass on the model. More...
 
aitensor_t * aialgo_inference_model (aimodel_t *model, aitensor_t *input_data, aitensor_t *output_data)
 Perform an inference on the model / Run the model. More...
 
uint8_t aialgo_compile_model (aimodel_t *model)
 Initialize the model structure. More...
 
void aialgo_print_model_structure (aimodel_t *model)
 Print the layer structure of the model with the configured parameters. More...
 

Detailed Description

Functions required for inference of models.

Version
2.0alpha

AIfES is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

The functions cover memory allocation and scheduling, the calculation of the forward pass, and quantization for model inference.

Function Documentation

◆ aialgo_compile_model()

uint8_t aialgo_compile_model (aimodel_t *model)

Initialize the model structure.

Counts the number of layers and trainable parameters in a model as preparation for inference or training.

Parameters
*model  The model
Returns
0 if successful
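The intended call order can be sketched as follows. This is a hedged outline, assuming the layers of `model` have already been created and connected (layer construction is out of scope for this header); `setup_model` is a hypothetical helper name, not part of the AIfES API:

```c
#include <aifes.h>  // AIfES main header; include path may differ per platform

// Hypothetical setup routine; the model's input_layer and output_layer
// are assumed to be connected before this point.
uint8_t setup_model(aimodel_t *model)
{
    // Count layers and trainable parameters; must succeed before the
    // memory sizing, scheduling, and inference functions are used.
    uint8_t error = aialgo_compile_model(model);
    if (error != 0) {
        return error; // initialization failed
    }

    // Optional: inspect the configured layer structure
    aialgo_print_model_structure(model);
    return 0;
}
```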

◆ aialgo_distribute_parameter_memory()

void aialgo_distribute_parameter_memory (aimodel_t *model, void *memory_ptr, uint32_t memory_size)

Assign the memory for the trainable parameters (like weights, bias, ...) of the model.

Only use this function if the parameters are not pre-trained or configured manually. Afterwards, the memory has to be initialized (for example by initializing the weights).

The required memory size can be calculated with aialgo_sizeof_parameter_memory().

Parameters
*model  The model
*memory_ptr  Pointer to the memory block
memory_size  Size of the memory block (for error checking)
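The sizing and distribution calls are typically paired. A minimal sketch, assuming the model was already compiled with aialgo_compile_model() and that heap allocation is acceptable on the target (`setup_parameter_memory` is a hypothetical helper name):

```c
#include <aifes.h>  // AIfES main header; include path may differ per platform
#include <stdlib.h> // malloc

// Hypothetical helper: "model" is assumed to be compiled already
// with aialgo_compile_model().
void setup_parameter_memory(aimodel_t *model)
{
    // Query how many bytes the weights, biases, etc. need
    uint32_t parameter_memory_size = aialgo_sizeof_parameter_memory(model);

    // Allocate the block; on small MCUs a static buffer is common instead
    void *parameter_memory = malloc(parameter_memory_size);

    // Hand the block to the model; the size is passed for error checking
    aialgo_distribute_parameter_memory(model, parameter_memory, parameter_memory_size);

    // The memory is still uninitialized: load pre-trained weights into
    // the layers or run a weight initialization before any inference.
}
```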

◆ aialgo_forward_model()

aitensor_t * aialgo_forward_model (aimodel_t *model, aitensor_t *input_data)

Perform a forward pass on the model.

The result is stored in the result tensor of the output layer and a pointer to this is returned. This output result is stored in the inference memory and is only valid as long as the inference memory is valid. To get the output as a separate tensor, use aialgo_inference_model() instead.

Parameters
*model  The model
*input_data  Input data tensor with the same shape as the input_layer
Returns
Pointer to the output data of the forward pass (points to the result tensor of the output layer)

◆ aialgo_inference_model()

aitensor_t * aialgo_inference_model (aimodel_t *model, aitensor_t *input_data, aitensor_t *output_data)

Perform an inference on the model / Run the model.

Make sure to initialize the model (aialgo_compile_model()) and schedule the inference memory (for example with aialgo_schedule_inference_memory() or aialgo_schedule_training_memory()) before calling this function.

Example:

float input_data[] = {0.0f, 1.0f};
uint16_t input_shape[] = {1, 2};
aitensor_t input_tensor = {
    .dtype = aif32,
    .dim = 2,
    .shape = input_shape,
    .data = input_data
};
float output_data[1];
uint16_t output_shape[] = {1, 1};
aitensor_t output_tensor = {
    .dtype = aif32,
    .dim = 2,
    .shape = output_shape,
    .data = output_data
};
aialgo_inference_model(&model, &input_tensor, &output_tensor);
// The results are now in the output_tensor
Parameters
*model  The model
*input_data  Input data tensor with the same shape as the input_layer
*output_data  Empty tensor for the results of the inference, with the shape of your outputs
Returns
Pointer to the output_data tensor with the results

◆ aialgo_print_model_structure()

void aialgo_print_model_structure (aimodel_t *model)

Print the layer structure of the model with the configured parameters.

Parameters
*model  The model

◆ aialgo_schedule_inference_memory()

uint8_t aialgo_schedule_inference_memory (aimodel_t *model, void *memory_ptr, uint32_t memory_size)

Assign the memory for intermediate results of an inference to the model.

The required memory size can be calculated with aialgo_sizeof_inference_memory().

Parameters
*model  The model
*memory_ptr  Pointer to the memory block
memory_size  Size of the memory block (for error checking)
Returns
0 if successful
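A sketch pairing the sizing and scheduling calls, assuming a compiled model and a static buffer (typical on MCUs without a heap). The buffer size constant and the helper name `setup_inference_memory` are placeholders, not values from the AIfES API:

```c
#include <aifes.h>  // AIfES main header; include path may differ per platform

// Static buffer sized generously; the constant is a placeholder and
// must be chosen per model.
#define INFERENCE_MEMORY_SIZE 512
static uint8_t inference_memory[INFERENCE_MEMORY_SIZE];

// Hypothetical helper: "model" is assumed to be compiled already.
// Returns 0 on success, 1 if the static buffer is too small.
uint8_t setup_inference_memory(aimodel_t *model)
{
    uint32_t required = aialgo_sizeof_inference_memory(model);
    if (required > INFERENCE_MEMORY_SIZE) {
        return 1; // buffer too small for this model
    }
    // Returns 0 if successful
    return aialgo_schedule_inference_memory(model, inference_memory, INFERENCE_MEMORY_SIZE);
}
```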

◆ aialgo_sizeof_inference_memory()

uint32_t aialgo_sizeof_inference_memory (aimodel_t *model)

Calculate the memory requirements for intermediate results of an inference.

This memory is mainly for the result buffers of the layers.

Use aialgo_schedule_inference_memory() to assign the memory to the model.

Parameters
*model  The model
Returns
Required memory size in bytes

◆ aialgo_sizeof_parameter_memory()

uint32_t aialgo_sizeof_parameter_memory (aimodel_t *model)

Calculate the memory requirements for the trainable parameters (like weights, bias, ...) of the model.

Use aialgo_distribute_parameter_memory() to assign the memory to the model.

Parameters
*model  The model
Returns
Required memory size in bytes