

import numpy as np
from pyomo.environ import *
import random
from global_solver import optimizerSolve
from randomAlgorithm import randomAlgorithm
from greedyAlgorithm import greedyAlgorithm
from matchingAlgorithm import matchingAlgorithm
from evaluation import *
from tqdm import tqdm
#from p_tqdm import p_map
import os
import json
import pathlib

#########################
## Parameters
#########################

# Monte Carlo runs
runs = 200
# Number of Tx nodes
K = 5
# Number of Rx nodes
M = 5
# Number of switches
U = 3
# Numbers of requests per scenario: 0, 10, 20, 30, 40, 50
Ls = list(np.linspace(0, 50, 6, dtype='int'))

# Algorithms under comparison and the metrics used to evaluate them
algorithms = {'Optimal': optimizerSolve, 'Proposed': matchingAlgorithm}
evaluationFunctions = {'numberOfRequests': numberOfRequests}

results = {}

for run in tqdm(range(runs)):

    # Case 1: K = 5 Tx nodes, M = 5 Rx nodes
    for ind, L in enumerate(Ls):
        K = 5
        M = 5
        # Tx free EPR pairs
        TxFreePairs = np.ones((K, U))
        # Tx fidelities
        TxFidelities = np.ones((K, U))
        # Rx free EPR pairs
        RxFreePairs = np.ones((M, U))
        # Rx fidelities
        RxFidelities = np.ones((M, U))
        # Random generation of resources
        for uu in range(U):
            for kk in range(K):
                TxFreePairs[kk][uu] = random.randint(1, 5)
                TxFidelities[kk][uu] = random.uniform(0.83, 0.99)
            for mm in range(M):
                RxFreePairs[mm][uu] = random.randint(1, 5)
                RxFidelities[mm][uu] = random.uniform(0.83, 0.99)
        # Requests = (TxNode, RxNode, minFidelity)
        requests = []
        for rr in range(L):
            txNode = random.randint(0, K - 1)
            rxNode = random.randint(0, M - 1)
            #switch = random.randint(0, U - 1)
            #RxFreePairs[txNode][switch] += 1
            #TxFreePairs[rxNode][switch] += 1
            minFidelity = random.uniform(0.5, 0.8)
            requests.append((txNode, rxNode, minFidelity))
        # Run every algorithm on the same scenario and evaluate each metric
        for algorithm in algorithms:
            matching = algorithms[algorithm](K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)
            for metric in evaluationFunctions:
                results[run, 'case1', ind, algorithm, metric] = evaluationFunctions[metric](matching, K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)

    # Case 2: K = 7 Tx nodes, M = 5 Rx nodes
    for ind, L in enumerate(Ls):
        K = 7
        M = 5
        # Tx free EPR pairs
        TxFreePairs = np.ones((K, U))
        # Tx fidelities
        TxFidelities = np.ones((K, U))
        # Rx free EPR pairs
        RxFreePairs = np.ones((M, U))
        # Rx fidelities
        RxFidelities = np.ones((M, U))
        # Random generation of resources
        for uu in range(U):
            for kk in range(K):
                TxFreePairs[kk][uu] = random.randint(1, 5)
                TxFidelities[kk][uu] = random.uniform(0.83, 0.99)
            for mm in range(M):
                RxFreePairs[mm][uu] = random.randint(1, 5)
                RxFidelities[mm][uu] = random.uniform(0.83, 0.99)
        # Requests = (TxNode, RxNode, minFidelity)
        requests = []
        for rr in range(L):
            txNode = random.randint(0, K - 1)
            rxNode = random.randint(0, M - 1)
            #switch = random.randint(0, U - 1)
            #RxFreePairs[txNode][switch] += 1
            #TxFreePairs[rxNode][switch] += 1
            minFidelity = random.uniform(0.5, 0.8)
            requests.append((txNode, rxNode, minFidelity))
        # Run every algorithm on the same scenario and evaluate each metric
        for algorithm in algorithms:
            matching = algorithms[algorithm](K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)
            for metric in evaluationFunctions:
                results[run, 'case2', ind, algorithm, metric] = evaluationFunctions[metric](matching, K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)

    # Case 3: K = 3 Tx nodes, M = 5 Rx nodes
    for ind, L in enumerate(Ls):
        K = 3
        M = 5
        # Tx free EPR pairs
        TxFreePairs = np.ones((K, U))
        # Tx fidelities
        TxFidelities = np.ones((K, U))
        # Rx free EPR pairs
        RxFreePairs = np.ones((M, U))
        # Rx fidelities
        RxFidelities = np.ones((M, U))
        # Random generation of resources
        for uu in range(U):
            for kk in range(K):
                TxFreePairs[kk][uu] = random.randint(1, 5)
                TxFidelities[kk][uu] = random.uniform(0.83, 0.99)
            for mm in range(M):
                RxFreePairs[mm][uu] = random.randint(1, 5)
                RxFidelities[mm][uu] = random.uniform(0.83, 0.99)
        # Requests = (TxNode, RxNode, minFidelity)
        requests = []
        for rr in range(L):
            txNode = random.randint(0, K - 1)
            rxNode = random.randint(0, M - 1)
            #switch = random.randint(0, U - 1)
            #RxFreePairs[txNode][switch] += 1
            #TxFreePairs[rxNode][switch] += 1
            minFidelity = random.uniform(0.5, 0.8)
            requests.append((txNode, rxNode, minFidelity))
        # Run every algorithm on the same scenario and evaluate each metric
        for algorithm in algorithms:
            matching = algorithms[algorithm](K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)
            for metric in evaluationFunctions:
                results[run, 'case3', ind, algorithm, metric] = evaluationFunctions[metric](matching, K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests)

# Average over runs
resultsAveraged = {(ind, case, algorithm, metric): []
                   for ind in range(len(Ls))
                   for case in ['case1', 'case2', 'case3']
                   for algorithm in algorithms
                   for metric in evaluationFunctions}

for run in range(runs):
    for ind, L in enumerate(Ls):
        for case in ['case1', 'case2', 'case3']:
            for algorithm in algorithms:
                for metric in evaluationFunctions:
                    resultsAveraged[ind, case, algorithm, metric].append(results[run, case, ind, algorithm, metric])

for result in resultsAveraged:
    resultsAveraged[result] = np.mean(resultsAveraged[result])

# JSON object keys must be strings, so tuple keys are remapped into a list of records
def remap_keys(mapping):
    return [{'parameters': k, 'value': v} for k, v in mapping.items()]

output_file = open(os.path.join(os.path.dirname(__file__), 'resultsMultipleCases.json'), 'w+', encoding='utf-8')
json.dump(remap_keys(resultsAveraged), output_file)
output_file.write("\n")
output_file.close()

The provided code implements a simulation and evaluation framework for comparing request-matching algorithms across three node configurations ("cases") using a Monte Carlo method. Here's what the code does:


Purpose:

The code evaluates algorithms that assign requests between transmitting (Tx) and receiving (Rx) nodes, where each node holds a limited pool of resources (free EPR pairs) subject to fidelity constraints. The main focus is comparing optimizerSolve (labelled 'Optimal') against matchingAlgorithm (labelled 'Proposed') on the performance metrics defined in evaluationFunctions. The simulation sweeps several configurations of nodes, switches, and request counts over many Monte Carlo runs.
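
The algorithm and evaluation modules themselves are not shown. From the call sites, each algorithm receives (K, L, M, U, TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests) and returns a matching, which is then passed to each evaluation function together with the same arguments. As a minimal sketch only, assuming the matching is a per-request list in which None marks an unserved request (the real data structure lives in the unshown project modules), a numberOfRequests metric could look like this:

def numberOfRequests(matching, K, L, M, U,
                     TxFreePairs, TxFidelities,
                     RxFreePairs, RxFidelities, requests):
    # Hypothetical sketch, not the project's evaluation.py: count the
    # requests that received an assignment in the returned matching.
    return sum(1 for assignment in matching if assignment is not None)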


Explanation:

  1. Initialization:

    • The code imports several libraries (numpy, pyomo, random, tqdm, os, json) to handle numerical operations, optimization, progress reporting, and file output, plus the project modules global_solver, randomAlgorithm, greedyAlgorithm, matchingAlgorithm, and evaluation (randomAlgorithm, greedyAlgorithm, and pathlib are imported but unused).
    • A set of algorithms and evaluation functions is initialized:
      • algorithms: Maps the algorithm names 'Optimal' and 'Proposed' to their implementations, optimizerSolve and matchingAlgorithm.
      • evaluationFunctions: Maps the metric name 'numberOfRequests' to its evaluation function.
  2. Simulation Parameters:

    • Monte Carlo simulation (runs=200): The entire process is repeated 200 times to capture statistical performance.
    • Different configurations are defined:
      • Number of transmitting nodes (K).
      • Number of receiving nodes (M).
      • Number of switches (U).
      • Number of requests (Ls), swept over [0, 10, 20, 30, 40, 50] via np.linspace(0, 50, 6).
  3. Simulation Loop: For every Monte Carlo run:

    • The simulation is repeated for the following cases:
      • Case 1: K=5, M=5.
      • Case 2: K=7, M=5.
      • Case 3: K=3, M=5.
    • For each case and each request count L in Ls:
      1. Resource Initialization:
        • EPR pairs (TxFreePairs, RxFreePairs) for nodes and their fidelities (TxFidelities, RxFidelities) are initialized with random values: free-pair counts are drawn as integers between 1 and 5, and fidelities uniformly between 0.83 and 0.99.
        • TxFreePairs and RxFreePairs contain the number of free EPR pairs for each node-switch combination.
        • TxFidelities and RxFidelities define the fidelity of each resource.
      2. Request Generation:
        • A list of requests is created, where each request is a tuple (txNode, rxNode, minFidelity).
        • txNode and rxNode represent indices for transmitting and receiving nodes for the request.
        • minFidelity is a random value (uniform between 0.5 and 0.8) specifying the minimum fidelity required to serve the request.
      3. Algorithm Execution:
        • Each algorithm in algorithms is executed for the generated requests and resources.
        • The result of each algorithm is evaluated using the metrics in evaluationFunctions.
        • Results are stored in the results dictionary with a multi-dimensional key structure: (run, case, L index, algorithm, metric).
  4. Averaging Results:

    • The results are averaged over all Monte Carlo runs:
      • For each combination of case, request size (L), algorithm, and metric, values across all runs are aggregated.
      • These averaged results are stored in a dictionary resultsAveraged.
  5. Results Export:

    • The resultsAveraged dictionary is reformatted into a JSON-compatible structure using remap_keys.
    • This JSON file (resultsMultipleCases.json) is saved to the current script's directory, containing the final averaged results for all cases, algorithms, and metrics.
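
The remapping is needed because JSON object keys must be strings, while resultsAveraged is keyed by tuples. A small, self-contained illustration of remap_keys (the numbers are made up, not actual results):

import json

def remap_keys(mapping):
    # Same helper as in the script: turn tuple-keyed entries into records.
    return [{'parameters': k, 'value': v} for k, v in mapping.items()]

# Hypothetical entry with the same key layout: (L index, case, algorithm, metric).
example = {(2, 'case1', 'Proposed', 'numberOfRequests'): 17.3}

# json.dumps(example) would raise TypeError (tuple keys are not valid JSON keys);
# after remapping, each key tuple is serialized as a JSON list inside a record.
print(json.dumps(remap_keys(example)))
# [{"parameters": [2, "case1", "Proposed", "numberOfRequests"], "value": 17.3}]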

Key Features:

  • Monte Carlo Simulation: Performs statistical evaluations over multiple runs to reduce randomness and noise in the results.
  • Multiple Cases: Simulates different scenarios by varying the number of transmitting nodes, receiving nodes, and requests.
  • Dynamic Resource and Request Initialization:
    • Randomly initializes EPR pairs and fidelities for Tx and Rx nodes.
    • Generates requests with random transmitting/receiving nodes and fidelity constraints (see the sketch after this list).
  • Algorithm Comparison: Tests and evaluates multiple algorithms (optimizerSolve, matchingAlgorithm) against defined metrics.
  • Evaluation Metrics: Uses evaluationFunctions (e.g., numberOfRequests) to measure the performance of algorithms.
  • Results Aggregation and Export: Averages results across Monte Carlo runs and saves them to a JSON file.
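
The resource and request initialization is repeated verbatim for each of the three cases in the script. As a hedged refactoring sketch (the helper name generateScenario is my own, but the draws and ranges mirror the original code), the duplicated blocks could be collapsed into one function:

import random
import numpy as np

def generateScenario(K, M, U, L):
    # Draw free EPR pairs (1-5) and fidelities (0.83-0.99) per node/switch,
    # mirroring the inline initialization in the main script.
    TxFreePairs = np.ones((K, U))
    TxFidelities = np.ones((K, U))
    RxFreePairs = np.ones((M, U))
    RxFidelities = np.ones((M, U))
    for uu in range(U):
        for kk in range(K):
            TxFreePairs[kk][uu] = random.randint(1, 5)
            TxFidelities[kk][uu] = random.uniform(0.83, 0.99)
        for mm in range(M):
            RxFreePairs[mm][uu] = random.randint(1, 5)
            RxFidelities[mm][uu] = random.uniform(0.83, 0.99)
    # Requests = (TxNode, RxNode, minFidelity), as in the original loop.
    requests = [(random.randint(0, K - 1), random.randint(0, M - 1),
                 random.uniform(0.5, 0.8)) for _ in range(L)]
    return TxFreePairs, TxFidelities, RxFreePairs, RxFidelities, requests

The three case blocks would then reduce to calls such as generateScenario(5, 5, U, L), generateScenario(7, 5, U, L), and generateScenario(3, 5, U, L).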

Output:

  1. The output JSON file (resultsMultipleCases.json) contains the averaged performance metrics for all combinations of:
    • Cases (case1, case2, case3).
    • Numbers of requests (Ls).
    • Algorithms (Optimal, Proposed).
    • Evaluation metrics.
  2. Each dictionary entry in the JSON specifies:
    • Parameters: the key tuple (L index, case, algorithm, metric), serialized as a JSON list.
    • The averaged value of the metric across all Monte Carlo runs.
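
As a hedged sketch of how this file might be consumed afterwards (no analysis code is included in the source; the variable names and grouping below are illustrative assumptions), the flat records can be reloaded and regrouped into per-case, per-algorithm curves indexed by the L index:

import json
import os

# Load the averaged results written by the simulation script.
with open(os.path.join(os.path.dirname(__file__), 'resultsMultipleCases.json'), encoding='utf-8') as f:
    entries = json.load(f)

# Each entry is {'parameters': [L index, case, algorithm, metric], 'value': mean}.
curves = {}
for entry in entries:
    ind, case, algorithm, metric = entry['parameters']
    curves.setdefault((case, algorithm, metric), {})[ind] = entry['value']

# Example: the 'Proposed' curve for case1, ordered by the L index.
proposed_case1 = curves.get(('case1', 'Proposed', 'numberOfRequests'), {})
print([proposed_case1[i] for i in sorted(proposed_case1)])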

Potential Use Case:

The code appears tailored for evaluating matching or optimization algorithms under resource constraints in communication networks, most likely quantum networks: EPR pairs are entangled qubit pairs, the per-switch pair counts suggest entanglement shared between end nodes and switches, and the fidelity thresholds model the minimum entanglement quality each request requires.
