Note

You can download this example as a Jupyter notebook or try it out directly in Google Colab.

11a. Redispatch modeling in the ASSUME Framework#

Welcome to the ASSUME DSM Workshop!

This tutorial demonstrates how to model and simulate a redispatch mechanism using PyPSA as a plug-and-play module in the ASSUME framework. The model focuses on grid constraints: it identifies grid bottlenecks caused by the dispatch from the energy-only market (EOM) and resolves them using the redispatch algorithm.


Concept of Redispatch#

Because demand and generation are not located at the same places, electricity has to be transmitted from regions with surplus generation to regions with high demand. The transmission capacity limits how much electricity can be transferred at any point in time. If the capacity is insufficient to carry the required flows, generation has to be ramped down in the export regions and ramped up in the import regions. This is called redispatch. It operates alongside the spot markets to regulate grid flows and avoid congestion, and it is operated and controlled by the system operators (SOs).
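
The short sketch below is purely illustrative (it does not use ASSUME or PyPSA, and all numbers are assumed): it shows how a line limit turns an EOM dispatch into ramp-down and ramp-up volumes and an associated redispatch cost.

[ ]:
# Toy example (not ASSUME code): one line from a cheap export node to an expensive import node
line_capacity = 10  # MW that the line can carry

generation = {"north": 40, "south": 10}  # EOM dispatch in MW, ignoring the grid
demand = {"north": 20, "south": 30}

# Power that would have to flow north -> south to balance both nodes
required_flow = generation["north"] - demand["north"]  # 20 MW
overload = max(0, required_flow - line_capacity)  # 10 MW above the limit

# Redispatch: ramp down in the export region, ramp up in the import region
ramp_down_north = overload
ramp_up_south = overload

# Assumed redispatch prices in EUR/MWh
cost_up, cost_down = 50, 10
redispatch_cost = ramp_up_south * cost_up + ramp_down_north * cost_down
print(f"Overload: {overload} MW, estimated redispatch cost: {redispatch_cost} EUR")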

Objective#

The aim of redispatch is to resolve congestion at the lowest overall redispatch cost (starting up, shutting down, ramping up, and ramping down).

Structure in Redispatch model#

  • The redispatch model has the following structure:

    1. Ramping up of power plants

    2. Ramping down of power plants

    3. Ramping up/down of demand-side flexibilities

Key Sections#

  • Section 1: 3-node example for modeling Redispatch (Hands-on)

  • Section 2: 3-node example for modeling DSM Units (Demonstration)

  • Section 3: Germany-scale example for modeling Redispatch (Demonstration)

0. Install ASSUME#

0.1 Repository Setup#

This tutorial provides insights into the newest developments of ASSUME. To access them, we need to clone the specific branch that contains them:

[ ]:
import importlib.util

# Check whether notebook is run in google colab
IN_COLAB = importlib.util.find_spec("google.colab") is not None
[ ]:
if IN_COLAB:
    !git clone -b Redispatch_Workshop_iP --depth=1 https://github.com/assume-framework/assume.git assume-repo

0.2 Install ASSUME from the Cloned Branch#

Then we need to install ASSUME in this Colab session. Here we install the ASSUME core together with the network package via pip. The general installation instructions can be found here: https://assume.readthedocs.io/en/latest/installation.html. All required steps are executed here, and since we are working in Colab, creating a venv is not necessary.

[ ]:
if IN_COLAB:
    ! pip install -e ./assume-repo[network]
    # Colab currently has issues with pyomo version 6.8.2, causing the notebook to crash
    # Installing an older version resolves this issue. This should only be considered a temporary fix.
    !pip install pyomo==6.8.0

# Install some additional packages for plotting
!pip install plotly
!pip install cartopy
!pip install seaborn
!pip install shapely

Note: After installation, Colab may prompt you to restart the session due to dependency changes. To do so, click “Runtime” → “Restart session…” in the menu bar, then re-run the cells above.


0.3 Input Path Configuration#

We define the path to input files depending on whether you’re in Colab or working locally. This variable will be used to load configuration and scenario files throughout the tutorial.

[ ]:
colab_inputs_path = "assume-repo/examples/inputs"
local_inputs_path = "../inputs"

inputs_path = colab_inputs_path if IN_COLAB else local_inputs_path

0.4 Installation Check#

Use the following cell to ensure the installation was successful and that essential components are available. This test checks that the simulation engine can be imported before continuing.

[ ]:
try:
    from assume import World

    print("ASSUME framework is installed and functional.")
except ImportError as e:
    print("Failed to import essential components:", e)
    print(
        "Please review the installation instructions and ensure all dependencies are installed."
    )

Colab does not support Docker, so dashboard visualizations included in some ASSUME workflows will not be available. However, simulation runs can still be fully executed.

  • In Colab: Simulation runs and basic plotting are supported.

  • In local environments with Docker: Full access, including dashboards.

Study Cases#

Let’s also import some basic libraries that we will use throughout the tutorial.

[ ]:
import logging
import os
import urllib.request
import zipfile
from pathlib import Path

import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd
import yaml

# Function to display DataFrame in Jupyter
from IPython.display import display
from shapely.geometry import Point, box

from assume import World
from assume.scenario.loader_csv import load_scenario_folder
# from assume.units.dsm_load_shift import DSMFlex

Scenario 1: Redispatch (3-node Baseline)#

The grid infrastructure includes three main components:

  • Generators: produce electricity at the node where they are located.

  • Loads: consume electricity at their node.

  • Transmission lines: carry electricity between the nodes.

Each component is defined together with its operational constraints (such as power limits, efficiency, ramp rates, etc.).

Step 1: Define the nodes/buses#

[ ]:
timesteps = 100  # Number of timesteps in the scenario
[ ]:
# 1. Define meta-data for buses/nodes
buses_data = {
    "name": ["north", "east", "west"],
    "v_nom": ["380", "380", "380"],
    "x": ["9.9437675", "12.228830", "6.6495454"],
    "y": ["53.5560129", "51.3418814", "51.238554"],
}
buses = pd.DataFrame(buses_data)

print("Buses dataframe")
display(buses)

Step 2: Define the lines (transmission lines)#

[ ]:
# 2. Define meta-data for transmission lines
lines_data = {
    "name": ["Line_N_W", "Line_N_E", "Line_W_E"],
    "bus0": ["north", "north", "west"],
    "bus1": ["west", "east", "east"],
    "v_nom": ["380", "380", "380"],
    "s_nom": ["14", "14", "14"],
    "x": ["0.01", "0.01", "0.01"],
    "r": ["0.00001", "0.00001", "0.00001"],
}
lines = pd.DataFrame(lines_data)

print("Lines dataframe")
display(lines)

Step 3a: Define the Demand Units/Agents#

[ ]:
# 1. Define meta-data for demand units
demand_units_data = {
    "name": ["demand_north", "demand_east", "demand_west"],
    "technology": ["inflex_demand", "inflex_demand", "inflex_demand"],
    "bidding_EOM": [
        "demand_energy_naive",
        "demand_energy_naive",
        "demand_energy_naive",
    ],
    "bidding_redispatch": [
        "demand_energy_naive_redispatch",
        "demand_energy_naive_redispatch",
        "demand_energy_naive_redispatch",
    ],
    "max_power": [100000, 100000, 100000],  # Max capacity (could be MW)
    "min_power": [0, 0, 0],
    "node": ["north", "east", "west"],
    "unit_operator": ["eom_de", "eom_de", "eom_de"],
}
demand_units = pd.DataFrame(demand_units_data)

print("Demand units/Agents:")
display(demand_units)

Step 3b: Define the Demand Profile#

Now, create the demand time series for each agent.

[ ]:
index = pd.date_range("2023-01-01", periods=timesteps, freq="h")
demand_df = pd.DataFrame(
    {
        "datetime": index,
        "demand_north": [10] * timesteps,
        "demand_east": [10] * timesteps,
        "demand_west": [40] * timesteps,
    }
).set_index("datetime")

print("Inflexible Demand Profile (first 5 hours):")
display(demand_df.head())

Step 4a: Define the Powerplant Units/Agents#

[ ]:
# 1. Define meta-data for powerplant units
powerplant_units_data = {
    "name": ["Unit 1", "Unit 2", "Unit 3"],
    "technology": ["steam turbine", "steam turbine", "steam turbine"],
    "bidding_EOM": [
        "powerplant_energy_naive",
        "powerplant_energy_naive",
        "powerplant_energy_naive",
    ],
    "bidding_redispatch": [
        "powerplant_energy_naive_redispatch",
        "powerplant_energy_naive_redispatch",
        "powerplant_energy_naive_redispatch",
    ],
    "max_power": [31, 19, 30],  # Max capacity (could be MW)
    "min_power": [0, 0, 0],
    "efficiency": [1, 1, 1],
    "node": ["north", "east", "west"],
    "unit_operator": ["Operator 1", "Operator 2", "Operator 3"],
    "fuel_type": ["lignite", "hard coal", "natural gas"],
    "additional_cost": [10, 20, 50],
    "emission_factor": [0, 0, 0],  # Emission factor in kg CO2/kWh
}
powerplant_units = pd.DataFrame(powerplant_units_data)

print("Powerplant units/Agents:")
display(powerplant_units)

Step 4b: Define the Powerplant Availability Profile#

Now, create the availability time series for each power plant.

[ ]:
availability_df = pd.DataFrame(
    {
        "datetime": index,
        "Unit 1": [1] * timesteps,
        "Unit 2": [1] * timesteps,
        "Unit 3": [1] * timesteps,
    }
).set_index("datetime")

print("Availability Profile (first 5 hours):")
display(availability_df.head())

Step 5: Setting up Fuel Prices and Price Forecasts#

Here we define the fuel prices and price forecasts for the power plant units.

[ ]:
fuel_prices_data = {
    "fuel": ["lignite", "hard coal", "natural gas", "co2"],
    "price": [0, 0, 0, 0],
}
fuel_prices = pd.DataFrame(fuel_prices_data)

print("Fuel prices:")
display(fuel_prices)
[ ]:
fuel_prices_df = pd.DataFrame(
    {
        "datetime": index,
        "lignite": [0] * timesteps,
        "hard coal": [0] * timesteps,
        "natural gas": [0] * timesteps,
        "co2": [0] * timesteps,
    }
).set_index("datetime")

print("fuel prices profile(first 5 hours):")
display(fuel_prices_df.head())
[ ]:
forecasts_df = pd.DataFrame(
    {
        "datetime": index,
        "co2_price": [0] * timesteps,
        "electricity_price": [0] * timesteps,
        "electricity_price_flex": [0] * timesteps,
        "natural_gas_price": [0] * timesteps,
    }
).set_index("datetime")

print("Price forecast profile(first 5 hours):")
display(forecasts_df.head())

Step 6: Creating input Directory to save as CSV files#

First, we need to create the directory for the input files if it does not already exist. Then, we will save the DataFrames as CSV files in this directory.

[ ]:
# Define the input directory
input_dir = "inputs"
scenario = "scenario_1"
scenario_path = os.path.join(input_dir, scenario)

# Create the directory if it doesn't exist
os.makedirs(scenario_path, exist_ok=True)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path}/powerplant_units.csv", index=False)
availability_df.to_csv(f"{scenario_path}/availability_df.csv", index=True)
demand_units.to_csv(f"{scenario_path}/demand_units.csv", index=False)
demand_df.to_csv(f"{scenario_path}/demand_df.csv")
buses.to_csv(f"{scenario_path}/buses.csv", index=False)
lines.to_csv(f"{scenario_path}/lines.csv", index=False)
fuel_prices_df.to_csv(f"{scenario_path}/fuel_prices_df.csv", index=True)
forecasts_df.to_csv(f"{scenario_path}/forecasts_df.csv", index=True)

print(f"Input CSV files have been saved to the directory: {scenario_path}")

Step 7: Creating the Configuration YAML File#

For our simulation, we will define the configuration in a YAML format, which specifies the time range, market setup, and other parameters. This configuration will be saved as a config.yaml file.

Below is the creation of the configuration dictionary and saving it to a YAML file.

[ ]:
config = {
    "base": {
        "start_date": "2023-01-01 00:00",
        "end_date": "2023-01-02 23:00",
        "time_step": "1h",
        "save_frequency_hours": 24,
        "markets_config": {
            "EOM": {
                "start_date": "2023-01-01 00:00",
                "operator": "EOM_operator",
                "product_type": "energy",
                "products": [{"duration": "1h", "count": 24, "first_delivery": "24h"}],
                "opening_frequency": "24h",
                "opening_duration": "20h",
                "volume_unit": "MWh",
                "maximum_bid_volume": 100000,
                "maximum_bid_price": 3000,
                "minimum_bid_price": -500,
                "price_unit": "EUR/MWh",
                "market_mechanism": "pay_as_clear",
            },
            "redispatch": {
                "start_date": "2023-01-01 21:00",
                "operator": "network_operator",
                "product_type": "energy",
                "products": [{"duration": "1h", "count": 24, "first_delivery": "3h"}],
                "opening_frequency": "24h",
                "opening_duration": "2h",
                "volume_unit": "MWh",
                "maximum_bid_volume": 100000,
                "maximum_bid_price": 3000,
                "minimum_bid_price": -500,
                "price_unit": "EUR/MWh",
                "market_mechanism": "redispatch",
                "additional_fields": ["node", "min_power", "max_power"],
                "param_dict": {
                    "network_path": ".",
                    "solver": "highs",
                    "payment_mechanism": "pay_as_bid",
                    "backup_marginal_cost": 10000,
                },
            },
        },
    }
}

# Define the path for the config file
config_path = os.path.join(scenario_path, "config.yaml")

# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)

print(f"Configuration YAML file has been saved to '{config_path}'.")

Step 8: Running the Simulation#

Now that we have prepared the input files and configuration, we can proceed to run the simulation using the ASSUME framework. In this step, we will load the scenario and execute the simulation.

[ ]:
# before you import assume
import contextlib

# override the suppress_output context manager
import assume.common.utils as utils

utils.suppress_output = contextlib.nullcontext
[ ]:
# Define paths for input and output data
csv_path = "outputs/scenario_1"
study_case = "base"  # The study case we defined earlier

# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario,  # Scenario folder for our case
    study_case=study_case,  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")

Step 9: Visualization of congestion in the network#

A) Download a map of Germany to visualize congested lines#

[ ]:
# 1. URL of the NUTS shapefile archive
url = "https://gisco-services.ec.europa.eu/distribution/v2/nuts/download/ref-nuts-2013-01m.shp.zip"

# 2. Define the path where the zip file will be saved
cwd = Path.cwd()
scenario_rel = Path("inputs") / "scenario_1"
zip_name = "EU"
zip_path = scenario_rel / zip_name

# 3. Download into that file
urllib.request.urlretrieve(url, zip_path)
print("Download complete.")

# 4) Unzip everything into that same folder
with zipfile.ZipFile(zip_path, "r") as z:
    z.extractall(scenario_path)

# 5) Read the shapefile from the zip file
file_path = os.path.join(scenario_path, "NUTS_RG_01M_2013_4326_LEVL_0.shp.zip")
eu = gpd.read_file(file_path)

# 6) Filter the GeoDataFrame for Germany (country code 'DE')
germany_nuts3 = eu[eu["CNTR_CODE"] == "DE"]

# 7). Plot the map for Germany
germany_nuts3.plot(figsize=(5, 5), edgecolor="black")
plt.title("Germany")
plt.show()
[ ]:
# Convert buses DataFrame to a GeoDataFrame using longitude (x) and latitude (y)
geometry = [Point(xy) for xy in zip(buses["x"], buses["y"])]
buses_gdf = gpd.GeoDataFrame(buses, geometry=geometry, crs="EPSG:4326")

# Ensure the CRS matches between the NUTS3 shapefile and buses GeoDataFrame
buses_gdf = buses_gdf.to_crs(germany_nuts3.crs)
buses_gdf.head(2)
[ ]:
# Get the bounding box of the classified buses
bbox = buses_gdf.total_bounds  # [minx, miny, maxx, maxy]

# Calculate the center of the bounding box
center = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]

# Calculate the width and height of the bounding box with a margin
width = bbox[2] - bbox[0]
height = bbox[3] - bbox[1]
new_bbox = box(
    center[0] - width * 0.55,
    center[1] - height * 0.55,
    center[0] + width * 0.55,
    center[1] + height * 0.55,
)

# Convert the bounding box to a GeoSeries
new_bbox = gpd.GeoSeries([new_bbox], crs=buses_gdf.crs)

# Plot the map for Germany with buses classified by NUTS3 regions
fig, ax = plt.subplots(figsize=(6, 6))
germany_nuts3.plot(ax=ax, edgecolor="black")
buses_gdf.plot(ax=ax, color="black", label="Buses", markersize=200)

plt.title("Buses in Germany")
B) Read & preprocess the line_loading data to visualize it#
[ ]:
# Read the line_loading CSV file
line_loading = pd.read_csv("outputs/line_loading.csv")
line_loading.head(5)
[ ]:
# Preprocessing of buses and lines dataframe for visualization

# 1. Transpose the line_loading dataframe in order to visualize it well
ll = line_loading.melt(id_vars="snapshot", var_name="name", value_name="loading")
ll.head(5)

# 2. Merge the line loading data with the lines DataFrame to get bus coordinates
bus_coords = (
    buses_gdf[["name", "geometry"]]
    .rename(columns={"name": "bus_name"})
    .assign(long=lambda df: df.geometry.x, lat=lambda df: df.geometry.y)
)

# 3. Merge geocordinates from buses data in lines DataFrame
lines_visual = (
    lines
    # from end
    .merge(bus_coords[["bus_name", "long", "lat"]], left_on="bus0", right_on="bus_name")
    .rename(columns={"long": "x0", "lat": "y0"})
    .drop(columns="bus_name")
    # to end
    .merge(bus_coords[["bus_name", "long", "lat"]], left_on="bus1", right_on="bus_name")
    .rename(columns={"long": "x1", "lat": "y1"})
    .drop(columns="bus_name")
)
lines_visual
[ ]:
# Select your snapshot to visualize
snapshot = 0
ll_snap = ll[ll["snapshot"] == snapshot].set_index("name")["loading"]
[ ]:
ll_snap
[ ]:
# Plot the line_loading on Germany map

fig, ax = plt.subplots(figsize=(6, 6))
germany_nuts3.plot(ax=ax, edgecolor="#ADD8E6", color="#ADD8E6")
buses_gdf.plot(ax=ax, color="black", label="Buses", markersize=200)

for _, row in lines_visual.iterrows():
    # find the one row in ll matching this snapshot & line name
    mask = (ll["snapshot"] == snapshot) & (ll["name"] == row["name"])
    if mask.any():
        load = ll.loc[mask, "loading"].iloc[0]
    else:
        load = 0.0

    color = "red" if load > 1 else "blue"
    ax.plot(
        [row["x0"], row["x1"]],
        [row["y0"], row["y1"]],
        color=color,
        linewidth=2,
        alpha=1,
    )

Step 10: Visualization of redispatch amounts & estimation of redispatch cost#

A) Estimation of Redispatch cost from the market_orders output file#
[ ]:
scenario
[ ]:
# Read the market order csv file
market_orders = pd.read_csv(f"{csv_path}/{scenario}_{study_case}/market_orders.csv")
market_orders.head(5)
[ ]:
# fetch the market orders for the redispatch market_id only (use .copy() to avoid SettingWithCopyWarning)
redispatch_orders = market_orders[market_orders["market_id"] == "redispatch"].copy()
redispatch_orders.head(2)
[ ]:
# Redispatch cost is equal to the accepted_volume * accepted_price for a particular snapshot
redispatch_orders.loc[:, "redispatch_cost"] = (
    redispatch_orders["accepted_volume"].abs() * redispatch_orders["accepted_price"]
)

# Then group & sum as before
redispatch_costs = (
    redispatch_orders.groupby("start_time")["redispatch_cost"].sum().reset_index()
)
redispatch_costs.head(2)
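
As an optional quick check, you can also plot the estimated redispatch cost per snapshot directly from the redispatch_costs DataFrame computed above.

[ ]:
# Optional: visualize the estimated redispatch cost per snapshot
fig, ax = plt.subplots(figsize=(8, 3))
ax.bar(redispatch_costs["start_time"].astype(str), redispatch_costs["redispatch_cost"])
ax.set_xlabel("Snapshot (start_time)")
ax.set_ylabel("Redispatch cost [EUR]")
ax.set_title("Estimated redispatch cost per snapshot")
plt.xticks(rotation=90, fontsize=6)
plt.tight_layout()
plt.show()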

B) Plot a bar graph for redispatch by powerplants for one snapshot#

[ ]:
# Select the first available snapshot
snapshot = market_orders["start_time"].unique()[0]
filtered = market_orders[market_orders["start_time"] == snapshot]

# Pivot accepted_volume by unit_id and market_id
pivot = filtered.pivot_table(
    index="unit_id", columns="market_id", values="accepted_volume", aggfunc="sum"
).fillna(0)

# Plot stacked bar chart with annotations
fig, ax = plt.subplots(figsize=(8, 6))
bars = pivot.plot(kind="bar", stacked=True, ax=ax)
plt.axhline(0, color="black")
plt.xlabel("Unit ID")
plt.ylabel("Accepted Volume")
plt.title(f"Final dispatches for {snapshot}")
plt.legend(title="Market ID")

# Annotate each segment with its value
for container in bars.containers:
    for bar in container:
        height = bar.get_height()
        if height != 0:
            ax.annotate(
                f"{height:.1f}",
                xy=(bar.get_x() + bar.get_width() / 2, bar.get_y() + height / 2),
                ha="center",
                va="center",
            )

plt.tight_layout()
plt.show()
C) Plot redispatch by location on a map for one snapshot#
[ ]:
# fetch the market orders for the redispatch market_id only
redispatch_orders = market_orders[market_orders["market_id"] == "redispatch"]
# fetch latitudes and longitudes of the node in positive and negative redispatch orders
redispatch_orders = redispatch_orders.merge(
    buses_gdf[["name", "x", "y"]], left_on="node", right_on="name", how="left"
).drop(columns="name")
redispatch_orders.head(2)
[ ]:
snapshot = redispatch_orders["start_time"].unique()[0]
snapshot
[ ]:
# Separate redispatch orders by accepted_volume: positive values are positive redispatch, negative values are negative redispatch
positive_redispatch = redispatch_orders[redispatch_orders["accepted_volume"] > 0]
# Create a GeoDataFrame for positive redispatches
positive_gdf = gpd.GeoDataFrame(
    positive_redispatch,
    geometry=gpd.points_from_xy(positive_redispatch["x"], positive_redispatch["y"]),
    crs="EPSG:4326",
)
positive_gdf_snapshot = positive_gdf[positive_gdf["start_time"] == snapshot]
positive_gdf_snapshot = positive_gdf_snapshot[
    ["start_time", "accepted_volume", "geometry"]
]
positive_gdf_snapshot
[ ]:
negative_redispatch = redispatch_orders[redispatch_orders["accepted_volume"] < 0]
# Create a GeoDataFrame for negative redispatches
negative_gdf = gpd.GeoDataFrame(
    negative_redispatch,
    geometry=gpd.points_from_xy(negative_redispatch["x"], negative_redispatch["y"]),
    crs="EPSG:4326",
)
negative_gdf_snapshot = negative_gdf[negative_gdf["start_time"] == snapshot]
negative_gdf_snapshot = negative_gdf_snapshot[
    ["start_time", "accepted_volume", "geometry"]
]
negative_gdf_snapshot
[ ]:
# Get the bounding box of the classified buses
bbox = buses_gdf.total_bounds  # [minx, miny, maxx, maxy]

# Calculate the center of the bounding box
center = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]

# Calculate the width and height of the bounding box with a margin
width = bbox[2] - bbox[0]
height = bbox[3] - bbox[1]
new_bbox = box(
    center[0] - width * 0.55,
    center[1] - height * 0.55,
    center[0] + width * 0.55,
    center[1] + height * 0.55,
)

# Convert the bounding box to a GeoSeries
new_bbox = gpd.GeoSeries([new_bbox], crs=buses_gdf.crs)

# Plot the map for Germany with buses classified by NUTS3 regions
fig, ax = plt.subplots(figsize=(8, 6))
germany_nuts3.plot(ax=ax, color="#ADD8E6", edgecolor="black")

# Plot positive redispatch with size based on accepted volume
positive_gdf_snapshot.plot(
    ax=ax,
    color="blue",
    markersize=positive_gdf_snapshot["accepted_volume"] * 50,
    label="Positive Redispatch",
)
# Plot negative redispatch with size based on accepted volume
negative_gdf_snapshot.plot(
    ax=ax,
    color="red",
    markersize=-negative_gdf_snapshot["accepted_volume"] * 50,
    label="Negative Redispatch",
)

# plot legend
plt.legend(loc="lower right", fontsize="small", markerscale=0.5, frameon=True)
plt.title("Buses in Germany")

Step 11: Tasks#

Go back to Step 4a and try the following:

  1. Change the marginal cost (the additional_cost column) of the power plant at node east to 40 €/MWh.

  2. Add a wind power plant at the north node with the following parameters:

  • marginal cost = 0,

  • fuel_type = renewable,

  • technology = wind offshore,

  • availability = 1

  3. Add a wind power plant at the north node with the following parameters:

  • marginal cost = -50,

  • fuel_type = renewable,

  • technology = wind offshore,

  • availability = 1

Estimate the redispatch cost and compare the differences between these scenarios. A possible starting point for the wind unit is sketched in the cell below.
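
The cell below is one possible starting point for tasks 2 and 3, not a prescribed solution: it appends a wind unit to the powerplant_units table and gives it an availability of 1, reusing the bidding strategies of the existing units. The capacity of 50 MW and the operator name are assumptions you can change.

[ ]:
# Possible starting point for the wind power plant tasks (adapt values as needed)
wind_unit = {
    "name": "Wind North",
    "technology": "wind offshore",
    "bidding_EOM": "powerplant_energy_naive",
    "bidding_redispatch": "powerplant_energy_naive_redispatch",
    "max_power": 50,  # assumed capacity in MW
    "min_power": 0,
    "efficiency": 1,
    "node": "north",
    "unit_operator": "Operator 4",
    "fuel_type": "renewable",
    "additional_cost": 0,  # marginal cost: 0 for task 2, -50 for task 3
    "emission_factor": 0,
}
powerplant_units = pd.concat(
    [powerplant_units, pd.DataFrame([wind_unit])], ignore_index=True
)

# The new unit also needs an availability time series (here: always available)
availability_df["Wind North"] = 1

# Re-save the CSVs and re-run Steps 6-10 to compare the redispatch costs
powerplant_units.to_csv(f"{scenario_path}/powerplant_units.csv", index=False)
availability_df.to_csv(f"{scenario_path}/availability_df.csv", index=True)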

Scenario 2: Redispatch with Industrial DSM unit (3-node)#

Step 1: Add a DSM unit at node ‘west’#

One can use any DSM unit already modelled in ASSUME, or create a customized DSM unit and add it to the network.

Here is a demonstration of how such a unit can be added at any node in the EOM and considered in the redispatch mechanism via the bidding strategy steelplant_strategy.

[ ]:
# Industrial Heat Pump unit: Meta-Data

heat_pump_data = {
    "name": ["Heat_pump_unit_1"],
    "technology": ["Heat pump"],
    "bidding_EOM": [
        "industry_energy_optimization"
    ],  # Example: simple market bidding strategy
    "node": ["west"],
    "bidding_redispatch": ["steelplant_strategy"],
    "unit_operator": ["dsm_operator_1"],
    "cop": [1],  # Coefficient of performance of the heat pump
    "max_power": 5,  # Fuel type for the heat pump
    "min_power": 0,  # Minimum power output
    "ramp_up": 5,  # Ramp up rate in MW/min
    "ramp_down": 5,  # Ramp down rate in MW/min
    "demand": 240,  # Fuel type for the heat pump
    "cost_tolerance": 100,  # Cost tolerance for the heat pump
}
heat_pump = pd.DataFrame(heat_pump_data)

print("Heat pump Meta-Data Table:")
display(heat_pump)
[ ]:
# Define the input directory for scenario_2
input_dir = "inputs"
scenario = "scenario_2"
scenario_path = os.path.join(input_dir, scenario)

# Create the directory if it doesn't exist
os.makedirs(scenario_path, exist_ok=True)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path}/powerplant_units.csv", index=False)
availability_df.to_csv(f"{scenario_path}/availability_df.csv", index=True)
demand_units.to_csv(f"{scenario_path}/demand_units.csv", index=False)
demand_df.to_csv(f"{scenario_path}/demand_df.csv")
buses.to_csv(f"{scenario_path}/buses.csv", index=False)
lines.to_csv(f"{scenario_path}/lines.csv", index=False)
fuel_prices_df.to_csv(f"{scenario_path}/fuel_prices_df.csv", index=True)
forecasts_df.to_csv(f"{scenario_path}/forecasts_df.csv", index=True)
heat_pump.to_csv(f"{scenario_path}/industrial_dsm_units.csv", index=False)

print(f"Input CSV files have been saved to the directory: {scenario_path}")
[ ]:
config = {
    "base": {
        "start_date": "2023-01-01 00:00",
        "end_date": "2023-01-02 23:00",
        "time_step": "1h",
        "save_frequency_hours": 24,
        "markets_config": {
            "EOM": {
                "start_date": "2023-01-01 00:00",
                "operator": "EOM_operator",
                "product_type": "energy",
                "products": [{"duration": "1h", "count": 24, "first_delivery": "24h"}],
                "opening_frequency": "24h",
                "opening_duration": "20h",
                "volume_unit": "MWh",
                "maximum_bid_volume": 100000,
                "maximum_bid_price": 3000,
                "minimum_bid_price": -500,
                "price_unit": "EUR/MWh",
                "market_mechanism": "pay_as_clear",
            },
            "redispatch": {
                "start_date": "2023-01-01 21:00",
                "operator": "network_operator",
                "product_type": "energy",
                "products": [{"duration": "1h", "count": 24, "first_delivery": "3h"}],
                "opening_frequency": "24h",
                "opening_duration": "2h",
                "volume_unit": "MWh",
                "maximum_bid_volume": 100000,
                "maximum_bid_price": 3000,
                "minimum_bid_price": -500,
                "price_unit": "EUR/MWh",
                "market_mechanism": "redispatch",
                "additional_fields": ["node", "min_power", "max_power"],
                "param_dict": {
                    "network_path": ".",
                    "solver": "highs",
                    "payment_mechanism": "pay_as_bid",
                    "backup_marginal_cost": 10000,
                },
            },
        },
    }
}

# Define the path for the config file
config_path = os.path.join(scenario_path, "config.yaml")

# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)

print(f"Configuration YAML file has been saved to '{config_path}'.")
[ ]:
# Define paths for input and output data
csv_path = "outputs/scenario_2"
study_case = "base"  # The study case we defined earlier
# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario,  # Scenario folder for our case
    study_case=study_case,  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")

Step 2: Visualization of redispatch amounts & estimation of redispatch cost#

A) Estimation of Redispatch cost from the market_orders output file#
[ ]:
# Read the market order csv file
market_orders = pd.read_csv(f"{csv_path}/{scenario}_{study_case}/market_orders.csv")
market_orders.head(5)
[ ]:
# fetch the market orders for the redispatch market_id only (use .copy() to avoid SettingWithCopyWarning)
redispatch_orders = market_orders[market_orders["market_id"] == "redispatch"].copy()
redispatch_orders.head(2)
[ ]:
# Redispatch cost is equal to the accepted_volume * accepted_price for a particular snapshot
redispatch_orders.loc[:, "redispatch_cost"] = (
    redispatch_orders["accepted_volume"].abs() * redispatch_orders["accepted_price"]
)

# Then group & sum as before
redispatch_costs = (
    redispatch_orders.groupby("start_time")["redispatch_cost"].sum().reset_index()
)
redispatch_costs.head(2)
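
If Scenario 1 was also run above, the total estimated redispatch cost with and without the DSM unit can be compared as in the optional cell below; it assumes the default output locations used earlier in this notebook.

[ ]:
# Optional: compare total redispatch cost without (scenario 1) and with (scenario 2) the DSM unit
s1_orders = pd.read_csv("outputs/scenario_1/scenario_1_base/market_orders.csv")
s1_orders = s1_orders[s1_orders["market_id"] == "redispatch"].copy()
s1_orders["redispatch_cost"] = (
    s1_orders["accepted_volume"].abs() * s1_orders["accepted_price"]
)

total_cost_s1 = s1_orders["redispatch_cost"].sum()
total_cost_s2 = redispatch_orders["redispatch_cost"].sum()

print(f"Total redispatch cost, scenario 1 (no DSM): {total_cost_s1:.2f} EUR")
print(f"Total redispatch cost, scenario 2 (with DSM): {total_cost_s2:.2f} EUR")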
B) Plot a bar graph for redispatch by powerplants for one snapshot#
[ ]:
# Select the first available snapshot
snapshot = market_orders["start_time"].unique()[0]
filtered = market_orders[market_orders["start_time"] == snapshot]

# Pivot accepted_volume by unit_id and market_id
pivot = filtered.pivot_table(
    index="unit_id", columns="market_id", values="accepted_volume", aggfunc="sum"
).fillna(0)

# Plot stacked bar chart with annotations
fig, ax = plt.subplots(figsize=(8, 6))
bars = pivot.plot(kind="bar", stacked=True, ax=ax)
plt.axhline(0, color="black")
plt.xlabel("Unit ID")
plt.ylabel("Accepted Volume")
plt.title(f"Final dispatches for {snapshot}")
plt.legend(title="Market ID")

# Annotate each segment with its value
for container in bars.containers:
    for bar in container:
        height = bar.get_height()
        if height != 0:
            ax.annotate(
                f"{height:.1f}",
                xy=(bar.get_x() + bar.get_width() / 2, bar.get_y() + height / 2),
                ha="center",
                va="center",
            )

plt.tight_layout()
plt.show()

C) Plot redispatch by location on a map for one snapshot#

[ ]:
# fetch the market orders for the redispatch market_id only
redispatch_orders = market_orders[market_orders["market_id"] == "redispatch"]
# fetch latitudes and longitudes of the node in positive and negative redispatch orders
redispatch_orders = redispatch_orders.merge(
    buses_gdf[["name", "x", "y"]], left_on="node", right_on="name", how="left"
).drop(columns="name")
redispatch_orders.head(2)
[ ]:
snapshot = redispatch_orders["start_time"].unique()[0]
snapshot
[ ]:
# Separate redispatch orders by accepted_volume: positive values are positive redispatch, negative values are negative redispatch
positive_redispatch = redispatch_orders[redispatch_orders["accepted_volume"] > 0]
# Create a GeoDataFrame for positive redispatches
positive_gdf = gpd.GeoDataFrame(
    positive_redispatch,
    geometry=gpd.points_from_xy(positive_redispatch["x"], positive_redispatch["y"]),
    crs="EPSG:4326",
)
positive_gdf_snapshot = positive_gdf[positive_gdf["start_time"] == snapshot]
positive_gdf_snapshot = positive_gdf_snapshot[
    ["start_time", "accepted_volume", "geometry"]
]
positive_gdf_snapshot
[ ]:
negative_redispatch = redispatch_orders[redispatch_orders["accepted_volume"] < 0]
# Create a GeoDataFrame for negative redispatches
negative_gdf = gpd.GeoDataFrame(
    negative_redispatch,
    geometry=gpd.points_from_xy(negative_redispatch["x"], negative_redispatch["y"]),
    crs="EPSG:4326",
)
negative_gdf_snapshot = negative_gdf[negative_gdf["start_time"] == snapshot]
negative_gdf_snapshot = negative_gdf_snapshot[
    ["start_time", "accepted_volume", "geometry"]
]
negative_gdf_snapshot
[ ]:
# Get the bounding box of the classified buses
bbox = buses_gdf.total_bounds  # [minx, miny, maxx, maxy]

# Calculate the center of the bounding box
center = [(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2]

# Calculate the width and height of the bounding box with a margin
width = bbox[2] - bbox[0]
height = bbox[3] - bbox[1]
new_bbox = box(
    center[0] - width * 0.55,
    center[1] - height * 0.55,
    center[0] + width * 0.55,
    center[1] + height * 0.55,
)

# Convert the bounding box to a GeoSeries
new_bbox = gpd.GeoSeries([new_bbox], crs=buses_gdf.crs)

# Plot the map for Germany with buses classified by NUTS3 regions
fig, ax = plt.subplots(figsize=(8, 6))
germany_nuts3.plot(ax=ax, color="#ADD8E6", edgecolor="black")

# Plot positive redispatch with size based on accepted volume
positive_gdf_snapshot.plot(
    ax=ax,
    color="blue",
    markersize=positive_gdf_snapshot["accepted_volume"] * 50,
    label="Positive Redispatch",
)
# Plot negative redispatch with size based on accepted volume
negative_gdf_snapshot.plot(
    ax=ax,
    color="red",
    markersize=-negative_gdf_snapshot["accepted_volume"] * 50,
    label="Negative Redispatch",
)

# plot legend
plt.legend(loc="lower right", fontsize="small", markerscale=0.5, frameon=True)
plt.title("Buses in Germany")

Scenario 3: Redispatch for the whole of Germany#

  1. Run simulation for Germany:

  • Nodes: 835

  • Lines: 1088

[ ]:
log = logging.getLogger(__name__)

csv_path = "scenario3/outputs"
os.makedirs("local_db", exist_ok=True)

if __name__ == "__main__":
    db_uri = "sqlite:///local_db/assume_db.db"

    scenario = "example_05f"
    study_case = "base"

    # create world
    world = World(database_uri=db_uri, export_csv_path=csv_path)

    # then we load the scenario specified above from the respective input files
    load_scenario_folder(
        world,
        inputs_path=inputs_path,
        scenario=scenario,
        study_case=study_case,
    )

    # run the simulation
    world.run()

A) Read the line loading data#

[ ]:
# Read the line_loading CSV file
line_loading = pd.read_csv("outputs/line_loading.csv")
line_loading.head(5)
[ ]:
# Read the market order csv file
market_orders = pd.read_csv(f"{csv_path}/{scenario}_{study_case}/market_orders.csv")
market_orders.head(5)
[ ]:
# fetch the market orders for the redispatch market_id only (use .copy() to avoid SettingWithCopyWarning)
redispatch_orders = market_orders[market_orders["market_id"] == "redispatch"].copy()
redispatch_orders.head(2)
[ ]:
# Redispatch cost is equal to the accepted_volume * accepted_price for a particular snapshot
# Compute cost using absolute accepted_volume
redispatch_orders.loc[:, "redispatch_cost"] = (
    redispatch_orders["accepted_volume"].abs() * redispatch_orders["accepted_price"]
)

# Then group & sum as before
redispatch_costs = (
    redispatch_orders.groupby("start_time")["redispatch_cost"].sum().reset_index()
)
redispatch_costs.head(2)

B) Plot a bar graph for redispatch by powerplants for one snapshot#

[ ]:
# Select the first available snapshot
snapshot = market_orders["start_time"].unique()[0]
filtered = market_orders[market_orders["start_time"] == snapshot]

# Pivot accepted_volume by unit_id and market_id
pivot = filtered.pivot_table(
    index="unit_id", columns="market_id", values="accepted_volume", aggfunc="sum"
).fillna(0)

# Plot stacked bar chart
fig, ax = plt.subplots(figsize=(8, 6))
pivot.plot(kind="bar", stacked=True, ax=ax)

# Zero line, labels and title
ax.axhline(0, color="black")
ax.set_xlabel("")  # remove x-axis label text if desired
ax.set_ylabel("Accepted Volume")
ax.set_title(f"Final dispatches for {snapshot}")

# turn off the x-tick labels
ax.set_xticklabels([])  # remove the unit names
ax.set_ylim(-500, 1000)

# (re)enable legend if you want it
ax.legend(title="Market ID", bbox_to_anchor=(1.02, 1), loc="upper left")

plt.tight_layout()
plt.show()