The provided code implements a simulation and evaluation framework for comparing different algorithms across three cases using a Monte Carlo method. Here's what the code does:
Purpose:
The code evaluates the performance of algorithms that assign requests between transmitting (Tx) and receiving (Rx) nodes using sets of resources (EPR pairs) subject to fidelity constraints. The main focus is comparing algorithms such as `optimizerSolve` and `matchingAlgorithm` against the performance metrics defined in `evaluationFunctions`. The simulation is run for different configurations of nodes, switches, and requests across multiple runs.
Explanation:
- Initialization:
  - The code imports several libraries (`numpy`, `pyomo`, `random`, and others) to handle numerical operations, optimization, and file manipulation.
  - A set of algorithms and evaluation functions is initialized:
    - `algorithms`: maps algorithm names (e.g., `'Optimal'` and `'Proposed'`) to their respective implementations, `optimizerSolve` and `matchingAlgorithm`.
    - `evaluationFunctions`: maps metric names such as `'numberOfRequests'` to their respective evaluation functions.
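The registry pattern described above can be sketched as follows. The function bodies here are placeholders so the snippet is self-contained; the real `optimizerSolve` and `matchingAlgorithm` are defined elsewhere in the project.

```python
# Stub implementations standing in for the project's real algorithms.
def optimizerSolve(requests, resources):
    return list(requests)   # placeholder: would return the computed assignment

def matchingAlgorithm(requests, resources):
    return list(requests)   # placeholder

def numberOfRequests(assignment):
    return len(assignment)  # example metric: how many requests were served

# Name-to-implementation registries, as described in the text.
algorithms = {
    'Optimal': optimizerSolve,
    'Proposed': matchingAlgorithm,
}
evaluationFunctions = {
    'numberOfRequests': numberOfRequests,
}
```

Keeping algorithms and metrics in dictionaries lets the simulation loop iterate over them generically, so adding a new algorithm or metric requires no change to the loop itself.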
- Simulation Parameters:
  - Monte Carlo simulation (`runs=200`): the entire process is repeated 200 times to capture statistical performance.
  - Different configurations are defined:
    - Number of transmitting nodes (`K`).
    - Number of receiving nodes (`M`).
    - Number of switches (`U`).
    - Numbers of requests (`Ls`) for the different cases.
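A minimal sketch of this parameter setup; `runs` and the per-case `K`/`M` values come from the text, while the values of `U` and `Ls` are assumptions for illustration only.

```python
runs = 200                      # Monte Carlo repetitions (from the text)
cases = {                       # K = Tx nodes, M = Rx nodes (from the text)
    'case1': {'K': 5, 'M': 5},
    'case2': {'K': 7, 'M': 5},
    'case3': {'K': 3, 'M': 5},
}
U = 2                           # number of switches (assumed value)
Ls = [5, 10, 15, 20]            # request counts to sweep (assumed values)
```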
- Simulation Loop: for every Monte Carlo run, the simulation is repeated for the following cases:
  - Case 1: `K=5`, `M=5`.
  - Case 2: `K=7`, `M=5`.
  - Case 3: `K=3`, `M=5`.
  For each combination of case and number of requests in `Ls`:
  - Resource Initialization:
    - EPR pairs (`TxFreePairs`, `RxFreePairs`) for the nodes and their fidelities (`TxFidelities`, `RxFidelities`) are initialized with random values.
    - `TxFreePairs` and `RxFreePairs` contain the number of free EPR pairs for each node-switch combination; `TxFidelities` and `RxFidelities` define the fidelity of each resource.
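The resource structures described above could be initialized along these lines. The shapes (one entry per node-switch pair) follow the text; the pair counts and fidelity ranges are assumptions, not values from the original code.

```python
import random

K, M, U = 5, 5, 2  # Tx nodes, Rx nodes, switches (U is an assumed value)

# One entry per node-switch combination: free EPR pair counts...
TxFreePairs = [[random.randint(1, 5) for _ in range(U)] for _ in range(K)]
RxFreePairs = [[random.randint(1, 5) for _ in range(U)] for _ in range(M)]
# ...and the fidelity of each resource (assumed range [0.5, 1.0]).
TxFidelities = [[random.uniform(0.5, 1.0) for _ in range(U)] for _ in range(K)]
RxFidelities = [[random.uniform(0.5, 1.0) for _ in range(U)] for _ in range(M)]
```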
  - Request Generation:
    - A list of requests is created, where each request is a tuple (`txNode`, `rxNode`, `minFidelity`).
    - `txNode` and `rxNode` are the indices of the transmitting and receiving nodes for the request; `minFidelity` is a random value specifying the minimum fidelity required by the request.
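Request generation can be sketched as below; the tuple layout follows the text, while the fidelity range is an assumption.

```python
import random

K, M = 5, 5   # Tx and Rx node counts for the current case
L = 10        # number of requests to generate for this sweep point

# Each request: (txNode index, rxNode index, minimum required fidelity).
requests = [
    (random.randrange(K), random.randrange(M), random.uniform(0.5, 1.0))
    for _ in range(L)
]
```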
  - Algorithm Execution:
    - Each algorithm in `algorithms` is executed on the generated requests and resources.
    - The result of each algorithm is evaluated using the metrics in `evaluationFunctions`.
    - Results are stored in the `results` dictionary under a multi-dimensional key `(run, case, L index, algorithm, metric)`.
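The inner run-evaluate-store step can be sketched as follows. The tuple-key layout matches the text; the lambda algorithms and the `len` metric are toy stand-ins so the snippet runs on its own.

```python
# Toy stand-ins for the real registries (see the text above).
algorithms = {
    'Optimal': lambda reqs: reqs,        # pretends to serve every request
    'Proposed': lambda reqs: reqs[:1],   # pretends to serve only one
}
evaluationFunctions = {'numberOfRequests': len}

results = {}
run, case, lIndex = 0, 'case1', 0        # current loop indices
requests = [(0, 1, 0.8), (2, 3, 0.9)]

for algName, alg in algorithms.items():
    assignment = alg(requests)           # execute the algorithm
    for metricName, evaluate in evaluationFunctions.items():
        # Multi-dimensional key: (run, case, L index, algorithm, metric).
        results[(run, case, lIndex, algName, metricName)] = evaluate(assignment)
```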
- Averaging Results:
  - The results are averaged over all Monte Carlo runs: for each combination of case, request size (`L`), algorithm, and metric, the values across all runs are aggregated.
  - The averaged results are stored in a dictionary `resultsAveraged`.
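The averaging step amounts to dropping the `run` dimension from the key and taking the mean over runs; a minimal sketch with two runs of toy data:

```python
runs = 2
# Toy results from two Monte Carlo runs of the same configuration.
results = {
    (0, 'case1', 0, 'Optimal', 'numberOfRequests'): 4,
    (1, 'case1', 0, 'Optimal', 'numberOfRequests'): 6,
}

resultsAveraged = {}
for (run, case, l, alg, metric), value in results.items():
    key = (case, l, alg, metric)         # same key minus the run index
    resultsAveraged.setdefault(key, 0.0)
    resultsAveraged[key] += value / runs # accumulate the per-run mean
```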
- Results Export:
  - The `resultsAveraged` dictionary is reformatted into a JSON-compatible structure using `remap_keys`.
  - The resulting JSON file (`resultsMultipleCases.json`) is saved to the script's directory and contains the final averaged results for all cases, algorithms, and metrics.
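JSON object keys must be strings, so tuple keys cannot be serialized directly; that is presumably what `remap_keys` addresses. This is an assumed implementation of the helper, not the project's actual one:

```python
import json

def remap_keys(mapping):
    # Turn {tuple_key: value} into a JSON-friendly list of key/value records
    # (assumed implementation of the helper named in the text).
    return [{'key': list(k), 'value': v} for k, v in mapping.items()]

resultsAveraged = {('case1', 0, 'Optimal', 'numberOfRequests'): 5.0}
payload = remap_keys(resultsAveraged)
jsonText = json.dumps(payload, indent=2)  # then written to resultsMultipleCases.json
```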
Key Features:
- Monte Carlo Simulation: performs statistical evaluation over multiple runs to reduce randomness and noise in the results.
- Multiple Cases: simulates different scenarios by varying the numbers of transmitting nodes, receiving nodes, and requests.
- Dynamic Resource and Request Initialization:
  - Randomly initializes EPR pairs and fidelities for Tx and Rx nodes.
  - Generates requests with random transmitting/receiving nodes and fidelity constraints.
- Algorithm Comparison: tests and evaluates multiple algorithms (`optimizerSolve`, `matchingAlgorithm`) against defined metrics.
- Evaluation Metrics: uses `evaluationFunctions` (e.g., `numberOfRequests`) to measure the performance of the algorithms.
- Results Aggregation and Export: averages results across Monte Carlo runs and saves them to a JSON file.
Output:
- The output JSON file (`resultsMultipleCases.json`) contains the averaged performance metrics for all combinations of:
  - Cases (`case1`, `case2`, `case3`).
  - Numbers of requests (`Ls`).
  - Algorithms (`Optimal`, `Proposed`).
  - Evaluation metrics.
- Each entry in the JSON specifies:
  - The parameters (the combination of case, request size, algorithm, and metric).
  - The averaged value of the metric across all Monte Carlo runs.
Potential Use Case:
The code appears tailored for evaluating matching or optimization algorithms in scenarios involving resource constraints in communication networks, possibly in quantum networking or wireless systems.