August 25, 2025 at 08:22 PM

```python
def evaluate_model(loader):
    correct = 0
    total = 0
    with torch.no_grad():
        for batch in loader:
            inputs, labels = batch
            inputs = torch.stack(inputs)
            outputs = model(inputs)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == torch.tensor(labels)).sum().item()
    accuracy = correct / total
    return accuracy

accuracy = evaluate_model(test_loader)
print(f'Accuracy: {accuracy * 100:.2f}%')
```

This code defines a function named evaluate_model that computes the accuracy of a PyTorch model on a given dataset loader (e.g., test_loader). Here's what it does step by step:

  1. Initialize Counters:

    • The variables correct (to track correct predictions) and total (to track the total number of samples) are initialized to 0.
  2. Disable Gradient Calculation:

    • The torch.no_grad() context ensures that no gradients are calculated during the evaluation, which reduces memory usage and speeds up computations, as this is an inference phase (not training).
  3. Iterate Through the Data Loader:

    • The for batch in loader: loop iterates through each batch of the data loader (loader), which provides the inputs (e.g., images or features) and the corresponding ground truth labels.
  4. Batch Processing:

    • The variable inputs contains the data (e.g., input features), and labels contains the ground truth classifications.
    • inputs is converted into a tensor using torch.stack(inputs), assuming inputs is a list of tensors.
    • The model (model) is applied to the inputs to generate predictions (outputs).
  5. Get Predicted Classes:

    • The torch.max(outputs, 1) operation returns, for each sample in the batch, both the maximum logit along dimension 1 and its index. The indices are the predicted class labels and are stored in predicted; the maximum values themselves are discarded via the _ placeholder.
  6. Update Counters:

    • total is incremented by the number of labels in the current batch (labels.size(0)).
    • correct is incremented by the number of predictions that match the true labels (predicted == labels). Specifically, (predicted == torch.tensor(labels)).sum().item() counts the number of matching predictions in the batch.
  7. Compute Accuracy:

    • After iterating through all batches, the overall accuracy is computed as the ratio of correctly predicted samples to the total number of samples (accuracy = correct / total).
  8. Return Accuracy:

    • The function returns the computed accuracy.
  9. Evaluate and Print Accuracy:

    • After calling evaluate_model(test_loader), the accuracy of the model on the test dataset (test_loader) is stored in the accuracy variable.
    • The accuracy is then printed as a percentage formatted to two decimal places: Accuracy: XX.XX%.
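Steps 5 and 6 can be illustrated in isolation. The logits and labels below are hypothetical stand-ins for one batch of four samples over three classes:

```python
import torch

# Hypothetical logits for a batch of 4 samples and 3 classes.
outputs = torch.tensor([[2.0, 0.5, 0.1],
                        [0.2, 1.5, 0.3],
                        [0.1, 0.2, 3.0],
                        [1.0, 0.9, 0.8]])
labels = torch.tensor([0, 1, 2, 1])

# torch.max along dim=1 returns (max values, argmax indices);
# the indices are the predicted class labels.
_, predicted = torch.max(outputs, 1)
correct = (predicted == labels).sum().item()
total = labels.size(0)
print(predicted.tolist(), correct, total)  # [0, 1, 2, 0] 3 4
```

Here the last sample is misclassified (its largest logit is in column 0 while its label is 1), so 3 of the 4 predictions count as correct.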

Example Usage

This function evaluates a trained PyTorch model's classification accuracy on a dataset (e.g., a test set provided via test_loader). For the code to run, a suitable data loader (test_loader) and a trained model (model) must be defined or loaded beforehand.
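A sketch of what that setup might look like. The linear classifier and random data below are hypothetical stand-ins, not part of the original snippet; note that with a TensorDataset and the default collate function, each batch's inputs already arrive as a stacked tensor, so the torch.stack call from the original code is unnecessary here:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: a tiny linear classifier and random data.
model = nn.Linear(10, 3)   # 10 input features -> 3 classes
model.eval()               # evaluation mode (disables dropout, etc.)

features = torch.randn(32, 10)
targets = torch.randint(0, 3, (32,))
test_loader = DataLoader(TensorDataset(features, targets), batch_size=8)

def evaluate_model(loader):
    correct = 0
    total = 0
    with torch.no_grad():  # no gradients needed for inference
        for inputs, labels in loader:
            outputs = model(inputs)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return correct / total

print(f'Accuracy: {evaluate_model(test_loader) * 100:.2f}%')
```

Since the weights and data are random, the printed accuracy will hover around chance level (roughly 33% for 3 classes).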

Notes

  • This code assumes the variable model refers to a pre-trained PyTorch neural network, though it’s not defined in the snippet.
  • If labels is already a PyTorch tensor, wrapping it in torch.tensor(labels) is unnecessary (and recent PyTorch versions emit a warning when copy-constructing a tensor from a tensor); torch.as_tensor(labels) handles both the list and tensor cases cleanly.
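A minimal illustration of the second note, using torch.as_tensor as the type-agnostic alternative:

```python
import torch

predicted = torch.tensor([0, 1, 2])

# torch.as_tensor accepts either a Python list or an existing tensor
# (without copying a tensor input), so it is safe in both cases.
for labels in ([0, 1, 1], torch.tensor([0, 1, 1])):
    labels_t = torch.as_tensor(labels)
    matches = (predicted == labels_t).sum().item()
    print(matches)  # 2 in both cases
```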