This C function, `RTCD_SUF(compute_linear_)`, performs a linear matrix-vector computation.


```c
void RTCD_SUF(compute_linear_)(const LinearLayer *linear, float *out, const float *in)
{
   int i, M, N;
   const float *bias;
   celt_assert(in != out);
   bias = linear->bias;
   M = linear->nb_inputs;
   N = linear->nb_outputs;
   if (linear->float_weights != NULL) {
      if (linear->weights_idx != NULL)
         sparse_sgemv8x4(out, linear->float_weights, linear->weights_idx, N, in);
      else
         sgemv(out, linear->float_weights, N, M, N, in);
   } else if (linear->weights != NULL) {
      if (linear->weights_idx != NULL)
         sparse_cgemv8x4(out, linear->weights, linear->weights_idx, linear->scale, N, M, in);
      else
         cgemv8x4(out, linear->weights, linear->scale, N, M, in);
      /* Only use SU biases for integer matrices on SU archs. */
#ifdef USE_SU_BIAS
      bias = linear->subias;
#endif
   } else {
      RNN_CLEAR(out, N);
   }
   if (bias != NULL) {
      for (i = 0; i < N; i++) out[i] += bias[i];
   }
   if (linear->diag) {
      /* Diag is only used for GRU recurrent weights. */
      celt_assert(3*M == N);
      for (i = 0; i < M; i++) {
         out[i]     += linear->diag[i]*in[i];
         out[i+M]   += linear->diag[i+M]*in[i];
         out[i+2*M] += linear->diag[i+2*M]*in[i];
      }
   }
}
```

This C function, RTCD_SUF(compute_linear_), performs a linear matrix-vector computation for a neural network and appears to be part of recurrent-network processing. Here's a detailed breakdown of what it does:

  1. Initialization and Assertions:

    • Checks that the input pointer in is not the same as the output pointer out (celt_assert(in != out)); the computation reads from in while writing to out, so aliasing would corrupt the result.
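The aliasing restriction can be demonstrated with a naive dense matrix-vector product (a hypothetical dense_gemv sketch, not the project's optimized sgemv kernel): when out and in share storage, early writes clobber input values that later rows still need.

```c
#include <assert.h>

/* Naive row-major matrix-vector product: out[i] = sum_j W[i*cols + j] * in[j].
 * Illustrative only; the real sgemv/cgemv8x4 kernels are blocked and vectorized. */
static void dense_gemv(float *out, const float *W, int rows, int cols, const float *in)
{
    for (int i = 0; i < rows; i++) {
        float acc = 0.f;
        for (int j = 0; j < cols; j++)
            acc += W[i*cols + j] * in[j];
        out[i] = acc;
    }
}
```

With W all-ones and in = {1, 2}, the correct product is {3, 3}; calling with out == in overwrites in[0] before row 1 is computed and yields {3, 5} instead, which is exactly the failure mode the assertion rules out.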
  2. Variable Setup:

    • Retrieves relevant information from the LinearLayer structure:
      • bias: Pointer to the bias vector.
      • M: The number of inputs (linear->nb_inputs).
      • N: The number of outputs (linear->nb_outputs).
    • Determines which weight representation is available by checking the pointers linear->float_weights (float weights) and linear->weights (quantized integer weights).
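A struct layout consistent with these accesses can be sketched as follows; this is a hypothetical reconstruction inferred from the field names used in the function, and the real definition in the project's headers may differ in types and ordering.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of LinearLayer, inferred from the accesses in
 * compute_linear_; the project's actual header is authoritative. */
typedef struct {
    const float *bias;          /* float bias, added when non-NULL */
    const float *subias;        /* alternative bias for the SU integer path */
    const signed char *weights; /* quantized 8-bit weights (integer path) */
    const float *float_weights; /* float weights (take precedence if set) */
    const int *weights_idx;     /* block index used by the sparse kernels */
    const float *diag;          /* 3*M diagonal recurrent weights (GRU) */
    const float *scale;         /* rescaling data for the integer weights */
    int nb_inputs;              /* M */
    int nb_outputs;             /* N */
} LinearLayer;
```

A zero-initialized layer has all weight pointers NULL, which is the case where the function clears the output instead of multiplying.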
  3. Matrix-Vector Multiplication:

    • If floating-point weights (linear->float_weights) are available:
      • Uses sparse_sgemv8x4 when a sparsity index (linear->weights_idx) is present, and the dense sgemv otherwise, to multiply the weight matrix by the input vector.
    • If integer weights (linear->weights) are specified instead:
      • Uses either sparse_cgemv8x4 or cgemv8x4, again based on the presence of sparse indexing.
      • Additionally, scaling data (linear->scale) is passed so the quantized integer weights can be rescaled to the correct floating-point range.
      • A conditional directive (#ifdef USE_SU_BIAS) switches the bias pointer to linear->subias; per the code comment, these SU biases are only used with integer matrices on SU architectures.
    • If neither floating-point nor integer weights are provided, the output vector is cleared using the RNN_CLEAR function.
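The integer path can be illustrated with a scalar reference (a sketch under assumed conventions: row-major 8-bit weights and a per-output scale; the real cgemv8x4 kernel uses an interleaved 8x4 blocked layout and SIMD accumulation):

```c
#include <assert.h>

/* Scalar sketch of a rescaling integer matrix-vector product: 8-bit weights
 * are accumulated, then each output is multiplied by its scale factor.
 * Hypothetical layout, for illustration only. */
static void cgemv_ref(float *out, const signed char *w, const float *scale,
                      int rows, int cols, const float *in)
{
    for (int i = 0; i < rows; i++) {
        float acc = 0.f;
        for (int j = 0; j < cols; j++)
            acc += (float)w[i*cols + j] * in[j];
        out[i] = scale[i] * acc;
    }
}
```

The float path is the same computation without the rescaling step, which is why the function only needs to pick a kernel and can share the bias and diagonal stages afterward.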
  4. Bias Addition:

    • If a bias vector (bias) is defined:
      • Adds the bias values to the output vector for each output dimension.
  5. Diagonal Weight Adjustment (Optional):

    • If a diag parameter exists in the LinearLayer struct:
      • Its presence indicates diagonal recurrent weights (e.g., for GRU layers).
      • The function asserts that the output size is three times the input size (celt_assert(3*M == N)), one block of M outputs per GRU gate.
      • It adds an elementwise term diag[k]*in[i] into each of the three output blocks, so each gate gets its own diagonal contribution from the same input.
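The diagonal step can be sketched as a standalone helper (a minimal version, assuming out already holds the matrix product plus bias for the three stacked gate blocks):

```c
#include <assert.h>

/* Adds elementwise (diagonal) recurrent terms for three stacked GRU gate
 * blocks: out and diag have length 3*M, in has length M. */
static void add_diag_gru(float *out, const float *diag, const float *in, int M)
{
    for (int i = 0; i < M; i++) {
        out[i]       += diag[i]       * in[i];
        out[i + M]   += diag[i + M]   * in[i];
        out[i + 2*M] += diag[i + 2*M] * in[i];
    }
}
```

Storing only a diagonal per gate is much cheaper than three full M-by-M recurrent matrices, which is presumably why this special case exists alongside the general matrix paths.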

Summary:

This function computes the result of a linear transformation (matrix-vector multiplication) commonly used in neural networks, with optional biases and diagonal adjustments.

  • It supports various weight types (floating-point or integer).
  • It includes optimizations for sparse representations.
  • It has specific adjustments for architectures or GRU recurrent layers.

This is likely part of a larger framework for deep learning or recurrent neural network computations, designed with flexibility and performance considerations.
