10a. Modeling Demand-Side Units and Flexibility in the ASSUME Framework#

Note: You can download this example as a Jupyter notebook or try it out directly in Google Colab.

Welcome to the ASSUME DSM Workshop!

In this hands-on tutorial, we will explore how to model Demand-Side Units (DSU) and unlock their flexibility potential in electricity market simulations using the ASSUME framework.


Why Demand-Side Flexibility in Electricity Markets?#

In today’s electricity system, Demand-Side Management (DSM) units help maintain physical balance and grid stability, especially as growing renewable shares increase variability and uncertainty. At the same time, in the market context, DSM enables consumers to respond to price signals, actively manage their costs, and help prevent price spikes caused by scarcity or inflexibility.

Demand-side flexibility enables:

  • Dynamic adaptation to price signals: Units can increase or decrease consumption when electricity prices change, helping to flatten peaks and fill valleys in demand.

  • Grid congestion management: DSM can be activated as a targeted resource in congested network areas, helping system operators alleviate bottlenecks without costly grid reinforcements.

  • Efficient market operations: By shifting demand in response to market conditions, DSM units can help avoid costly price spikes and reduce the need for expensive reserve generation.

  • Integration of renewables: Flexible demand can absorb excess renewable energy when supply is high and reduce consumption when supply is tight, supporting grid stability.

  • New revenue streams for consumers: By offering flexibility as a market product (e.g., via reserve or capacity markets), DSM units can generate additional value.

In the ASSUME framework, DSM units are explicitly modeled to interact with market mechanisms: they submit bids, respond to electricity prices, and provide upward/downward regulation. This allows us to analyze both the technical and economic impact of flexibility in realistic market environments.


What is a Demand-Side Unit (DSU) in ASSUME?#

A DSU in ASSUME is a demand-side agent representing an industrial plant, building, or flexible consumer, modeled with:

  • Technology composition: (e.g., heat pumps, boilers, electrolyzers, battery storage)

  • Technical constraints: (e.g., rated/min/max power, storage capacity, ramp rates)

  • Market participation logic: Ability to submit bids, react to prices, and optimize their operation.

DSUs in ASSUME can act as:

  • Passive demand: Consuming electricity as per a fixed profile.

  • Active agents: Shaping their demand by responding to external signals, thus providing flexibility.


What Will We Learn in This Tutorial?#

We will guide you through the step-by-step modeling and simulation of DSUs, focusing on how flexibility is implemented and utilized in ASSUME. Specifically, you will:

  • Understand how DSUs can be modeled and implemented in ASSUME.

  • Explore various DSM unit types available in ASSUME (e.g., Building, Hydrogen Plant, Steam Plant) and their key modeling attributes.

  • Apply multiple flexibility measures to DSM units, including customizing your own.

  • Integrate market bidding strategies and see how agents can monetize flexibility.

  • Integrate flexibility into the market.

  • Simulate and analyze a real use case: investment decision-making for a flexible industrial plant under market uncertainty.


Key Sections#

  • Section 1: Demand vs. DSM Unit in ASSUME

  • Section 2: Overview of DSM Units in ASSUME (Building, Hydrogen Plant, Steam Plant, etc.)

  • Section 3: Modeling a Flexible DSM Agent

  • Section 4: Flexibility Measures: How They Work & How to Add Your Own

  • Section 5: From Flexibility to Market Bids: Connecting with Bidding Strategies

  • Section 6: Use Case: Investment Decision for a Hydrogen Production Plant Under Market Uncertainty

Ready? Let’s start unlocking demand-side flexibility in ASSUME!

Workshop Agenda#

  1. Demand Units vs. DSM Units in ASSUME

  2. DSM Units in the ASSUME Framework: What, How, and Why

  3. Hands-on: Building & Modeling a DSM Unit (Demo & Key Functions)

  4. Why Optimization? DSM as Agent-based Optimization Units

  5. Flexibility Measures in ASSUME: Overview & How to Implement

  6. Adding Your Own Flexibility Measure (Live Demo)

  7. Integrating Bidding Strategies

  8. Use Case: Investment Decision Under Uncertainty (Expected Utility Theory)

    • Tech configs: boiler only / boiler+HP / boiler+HP+storage

    • Scenarios: Low/Med/High CO₂ price

    • Simulate, calculate expected utility, select optimal config

  9. Q&A and Wrap-up

0. Install ASSUME#

First, we need to install ASSUME in this Colab environment. Here we install just the ASSUME core package via pip. The general installation instructions can be found at https://assume.readthedocs.io/en/latest/installation.html. All required steps are executed in this notebook, and since we are working in Colab, creating a virtual environment is not necessary.

[ ]:
import importlib.util

# Check whether notebook is run in google colab
IN_COLAB = importlib.util.find_spec("google.colab") is not None

if IN_COLAB:
    !pip install assume-framework
    # Colab currently has issues with pyomo version 6.8.2, causing the notebook to crash
    # Installing an older version resolves this issue. This should only be considered a temporary fix.
    !pip install pyomo==6.8.0

# Install some additional packages for plotting
!pip install plotly
!pip install cartopy
!pip install seaborn

Note: After installation, Colab may prompt you to restart the session due to dependency changes. To do so, click “Runtime” → “Restart session…” in the menu bar, then re-run the cells above.



0.1 Repository Setup#

To access predefined simulation scenarios, clone the ASSUME repository (Colab only):

[ ]:
if IN_COLAB:
    !git clone --depth=1 https://github.com/assume-framework/assume.git assume-repo

Local users may skip this step if input files are already available in the project directory.


0.2 Input Path Configuration#

We define the path to input files depending on whether you’re in Colab or working locally. This variable will be used to load configuration and scenario files throughout the tutorial.

[ ]:
colab_inputs_path = "assume-repo/examples/inputs"
local_inputs_path = "../inputs"

inputs_path = colab_inputs_path if IN_COLAB else local_inputs_path

0.3 Installation Check#

Use the following cell to ensure the installation was successful and that essential components are available. This test checks that the simulation engine can be imported before continuing.

[ ]:
try:
    from assume import World

    print("✅ ASSUME framework is installed and functional.")
except ImportError as e:
    print("❌ Failed to import essential components:", e)
    print(
        "Please review the installation instructions and ensure all dependencies are installed."
    )

Colab does not support Docker, so dashboard visualizations included in some ASSUME workflows will not be available. However, simulation runs can still be fully executed.

  • In Colab: Simulations and basic plotting are supported.

  • In local environments with Docker: Full access, including dashboards.

Let’s also import some basic libraries that we will use throughout the tutorial.

[ ]:
import os
from collections.abc import Callable

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pyomo.environ as pyo
import yaml

# Function to display DataFrame in Jupyter
from IPython.display import display

from assume import World
from assume.common.base import (
    BaseStrategy,
    MinMaxStrategy,
    SupportsMinMax,
)
from assume.common.forecaster import SteamgenerationForecaster
from assume.scenario.loader_csv import load_scenario_folder
from assume.strategies import DsmEnergyOptimizationStrategy
from assume.units.dsm_load_shift import DSMFlex

Section 1: Demand Unit vs. DSM Unit in ASSUME#

1.1 What is an Inflexible Demand Unit?#

In the ASSUME framework, the simplest agent you can create is an inflexible demand unit.
This represents a consumer (such as a city, industrial site, or region) that has a fixed demand profile for each time step, regardless of electricity price or market conditions.
These inflexible agents are typically used to model the “must-serve” demand in the system—the electricity that needs to be supplied, no matter what.
Unlike flexible DSM units, these agents cannot adjust their consumption in response to market signals or provide demand-side flexibility.

Key Characteristics#

  • Profile-based: Their demand for each time period is pre-defined and does not change during simulation.

  • No flexibility: Cannot participate in flexibility markets (such as load shifting or reserve).

  • Use case: Useful for setting a baseline system demand, or representing legacy/critical loads.


Step 1: Define Inflexible Demand Agents#

First, we create the agent meta-data table.
This registers each unit with attributes needed for simulation and market interaction.
[ ]:
# 1. Define meta-data for demand units
demand_units_data = {
    "name": ["demand_north", "demand_east"],
    "technology": ["inflex_demand", "inflex_demand"],
    "bidding_EOM": ["demand_energy_naive", "demand_energy_naive"],
    "max_power": [100000, 100000],  # Max capacity [MW]
    "min_power": [0, 0],
    "node": ["north", "east"],
    "unit_operator": ["eom_de", "eom_de"],
}
demand_units = pd.DataFrame(demand_units_data)

print("Demand Agent Meta-Data Table:")
display(demand_units)

Step 2: Define the Demand Profile#

Now, create the demand time series for each agent.
This table provides the “what” and “when” for each agent.
[ ]:
index = pd.date_range("2023-01-01", periods=24, freq="h")
demand_df = pd.DataFrame(
    {
        "datetime": index,
        "demand_north": [20] * 24,
        "demand_east": [10] * 24,
    }
).set_index("datetime")

print("Inflexible Demand Profile (first 5 hours):")
display(demand_df.head())

Step 3: Accessing Demand Profiles#

You can now access the demand for any agent and time step directly:

[ ]:
print("Demand for 'demand_north', first 5 hours:")
print(demand_df["demand_north"].head())

Summary:

  • The agent table defines WHO is in the simulation and their technical/market attributes.

  • The profile table (demand_df) defines WHAT these agents demand, and WHEN.

Up Next:
We’ll now explore how flexible DSM units are set up—showing how they add intelligence and adaptability to the simulation.

Example: Inflexible Load Profile for an Industrial Steam Plant#

Industrial-scale process plants, such as the paper production plant modeled below, can also be represented as inflexible demand units if they do not provide demand-side flexibility.
Below, we define an inflexible paper plant unit with a fixed hourly electricity demand profile.

[ ]:
# Inflexible Paper Plant: Meta-Data
steam_plant_data = {
    "name": ["paper_production_plant"],
    "technology": ["inflex_demand"],
    "bidding_EOM": [
        "powerplant_energy_naive"
    ],  # Example: simple market bidding strategy
    "max_power": [10],  # 5 MW typical electrolyser power
    "min_power": [2],  # 2 MW technical minimum
    "node": ["east_industrial_zone"],
    "unit_operator": ["paper_gmbh"],
}
steam_plant = pd.DataFrame(steam_plant_data)

print("Paper Plant Meta-Data Table:")
display(steam_plant)
[ ]:
# Inflexible Steam Plant Demand Profile (example: 24 hours)
index = pd.date_range("2023-01-01", periods=24, freq="h")
# Flat thermal demand of 7.25 MW in every hour
thermal_demand = [7.25] * 24
thermal_demand_df = pd.DataFrame(
    {
        "datetime": index,
        "steam_plant": thermal_demand,
    }
).set_index("datetime")
[ ]:
# Plot the plant's inflexible thermal load profile
plt.figure(figsize=(8, 4))
plt.plot(
    thermal_demand_df.index,
    thermal_demand_df["steam_plant"],
    marker="o",
    color="tab:blue",
    linewidth=2,
)
plt.title("Inflexible Thermal Demand Profile")
plt.ylabel("Power Demand (kW)")
plt.xlabel("Time")
plt.ylim(0, 10)
plt.grid(True, linestyle="--", alpha=0.6)
plt.tight_layout()
plt.show()

Explanation:

  • This plant has a rigid consumption profile (e.g., dictated by process requirements, market contracts, or regulatory rules).

  • It is treated as a “must-run” load: it cannot respond to prices or grid needs, so all electricity consumed must be provided by the market, regardless of system conditions.

  • This is the reference case before introducing DSM flexibility.


Next: Let’s see how to upgrade this unit to a flexible (DSM-enabled) steam plant, and what that means for system operation and electricity market participation.

1.2 From Inflexible Demand to Flexible DSM Units#

While inflexible agents simply draw power according to a schedule, DSM units in ASSUME are dynamic—they can optimize their consumption, react to electricity prices, and provide flexibility to the grid.

This allows for modular, reproducible modeling of large energy systems and fleets of flexible agents.

Key Characteristics of a Flexible DSM Unit:#

  • Technology Portfolio: Can be a combination of electrolyser, heat pump, battery, etc.

  • Operational Constraints: Minimum/maximum power, ramp rates, storage, and more.

  • Flexibility Logic: Allows shifting or reducing demand within operational and market rules.

  • Optimization-based Scheduling: Uses mathematical programming to find a cost-optimal schedule, rather than following a static load curve.

In ASSUME, all demand-side units, including DSM plants, are configured through tabular data (Pandas DataFrames or CSVs). Each row is a unit (or a technology within a plant), with columns specifying its:

  • ID, technology type, node, operator, bidding strategies, and all operational constraints.
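
For illustration, a single DSM plant row in such a table might look like the following (a minimal sketch; the column names mirror the industrial DSM table used later in Section 6, and the exact set of accepted columns depends on the unit type):

[ ]:
# Minimal sketch of a tabular DSM unit definition (illustrative column set)
dsm_unit_row = pd.DataFrame(
    {
        "name": ["steam_plant_dsm"],
        "unit_type": ["steam_plant"],
        "technology": ["boiler"],
        "bidding_EOM": ["industry_energy_optimization"],
        "unit_operator": ["paper_gmbh"],
        "flexibility_measure": ["cost_based_load_shift"],
        "cost_tolerance": [10],
        "demand": [174],  # MWh over the horizon
        "max_power": [10],  # MW
        "min_power": [2],  # MW
    }
)
display(dsm_unit_row)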

[ ]:
from assume.units.steam_generation_plant import SteamPlant

# ---- Step 1: Set up time index and price profile ----
time_index = pd.date_range(start="2023-01-01 00:00", periods=24, freq="h")
price_signal = [25] * 12 + [29] * 12  # €/MWh

# ---- Step 2: Create NaiveForecast object with price signal ----
forecaster = SteamgenerationForecaster(
    index=time_index,
    electricity_price=price_signal,  # assign new price signal for optimization
    fuel_prices={"natural_gas": [35] * 24},  # not used here, just required by API
    thermal_demand=[None] * 24,  # no fixed absolute demand (let optimizer decide)
)
[ ]:
# ---- Step 3: Define plant configuration (technologies) ----
components = {
    "boiler": {
        "max_power": 10,  # MW
        "min_power": 2,  # MW
        "ramp_up": 10,  # MW/h
        "ramp_down": 10,  # MW/h
        "efficiency": 0.9,
        "fuel_type": "electricity",
    },
}

# ---- Step 4: Create the SteamPlant DSM agent ----
paper_plant = SteamPlant(
    id="steam_plant_dsm",
    unit_operator="paper_gmbh",
    components=components,
    demand=174,
    forecaster=forecaster,
    flexibility_measure="cost_based_load_shift",  # Enable optimization-based DSM
    cost_tolerance=10,  # 10% allowed cost deviation for flexibility
    bidding_strategies={"eom": "industry_energy_optimization"},
)

# ---- Step 5: Solve for cost-optimal operation (without flexibility dispatch) ----
paper_plant.determine_optimal_operation_without_flex()
flex_profile = paper_plant.opt_power_requirement
[ ]:
# ---- Step 6: Plot the results ----
plt.figure(figsize=(8, 4))

# Primary y-axis: power demand
ax1 = plt.gca()
ax1.step(
    time_index,
    thermal_demand,
    where="mid",
    color="tab:blue",
    linewidth=2.5,
    linestyle="--",
    label="Inflexible Demand Profile",
)
ax1.step(
    time_index,
    flex_profile,
    where="mid",
    color="tab:green",
    linewidth=2,
    label="Flexible consumption",
)
ax1.set_xlabel("Hour")
ax1.set_ylabel("Power Input [MW]")
ax1.set_title("Paper Plant: DSM vs. Inflexible Demand")
ax1.grid(True)

# Legend for the left axis
lines_1, labels_1 = ax1.get_legend_handles_labels()

# Secondary y-axis: electricity price
ax2 = ax1.twinx()
ax2.plot(
    time_index, price_signal, "r-.", linewidth=2, label="Electricity Price (€/MWh)"
)
ax2.set_ylabel("Electricity Price (€/MWh)", color="red")
ax2.tick_params(axis="y", labelcolor="red")

# Legend for the right axis
lines_2, labels_2 = ax2.get_legend_handles_labels()
ax1.legend(lines_1 + lines_2, labels_1 + labels_2, loc="upper left")

plt.tight_layout()
plt.show()

Comparing Inflexible and Flexible Demand: The Value of DSM in Industrial Plants#

The figure above illustrates a fundamental difference between demand units and DSM units as implemented in ASSUME: traditional demand units are static, whereas DSM agents become active market participants.


Section 2: Overview of DSM Units in ASSUME#

ASSUME provides a modular agent-based framework to represent all kinds of demand-side flexibility in the power system—from industrial plants to residential buildings. DSM agents in ASSUME are highly configurable and extensible, supporting a broad range of technology options, assets, and operating strategies.


2.1 Modular Structure: Industrial Plants & Buildings#

ASSUME’s flexibility framework allows users to:

  • Directly create DSM agents (industrial, building, custom) with a desired configuration of technologies.

  • Populate each agent with an arbitrary combination of assets (e.g., electrolyser, furnace, heat pump, battery storage, PV, EV, thermal storage).

  • Quickly add, remove, or reconfigure assets using a component inventory.

  • Model complex process chains and interaction between technologies using the provided modular connection system.

Example: Modular Connections#

[ ]:
from pathlib import Path

from IPython.display import Image, display

image_path = Path("assume-repo/docs/source/img/dsm_integration.PNG")
alt_image_path = Path("../../docs/source/img/dsm_integration.PNG")

if image_path.exists():
    display(Image(image_path))
elif alt_image_path.exists():
    display(Image(alt_image_path))

Figure: Modular connection of technologies in an industrial plant or building agent.


DSM Agents as Pyomo Models#

Each DSM agent in ASSUME (whether it’s an industrial plant, building, or custom asset) is internally modeled as an optimization problem using Pyomo. This means:

  • All physical and economic constraints (e.g., energy balances, operational limits, ramp rates, market participation rules) are represented as explicit mathematical constraints.

  • The objective function (e.g., minimizing cost, maximizing profit, maximizing flexible capacity) is user-configurable and solved using state-of-the-art solvers (such as HiGHS, Gurobi, CBC).

  • DSM agents dynamically optimize their operational schedule at each simulation step, responding to electricity prices, market signals, and system requirements.

This mathematical core enables rigorous analysis of flexibility potential, operational feasibility, and market value for any DSM technology.
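
To make this concrete, here is a minimal, self-contained Pyomo sketch of the kind of scheduling problem a DSM agent solves internally (illustrative only; the real agents assemble far richer models from component blocks, and the example assumes the HiGHS solver is installed):

[ ]:
# Toy DSM scheduling LP: meet a daily energy requirement at minimum cost
import pyomo.environ as pyo

prices = [25] * 12 + [29] * 12  # €/MWh

m = pyo.ConcreteModel()
m.t = pyo.RangeSet(0, 23)
m.power = pyo.Var(m.t, bounds=(0, 10))  # MW, within plant limits

# Daily energy requirement, analogous to the "demand" parameter of a DSM unit
m.energy = pyo.Constraint(expr=sum(m.power[t] for t in m.t) == 174)
m.cost = pyo.Objective(
    expr=sum(prices[t] * m.power[t] for t in m.t), sense=pyo.minimize
)

pyo.SolverFactory("appsi_highs").solve(m)
print([round(pyo.value(m.power[t]), 1) for t in m.t])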


2.2 DSM Agent Characteristics#

ASSUME enables users to specify key attributes and constraints for each DSM agent, including:

  • Technology/process type (e.g., electrolyser, heat pump, boiler, battery, EV, PV)

  • Power and energy capacities (max/min)

  • Ramping rates, efficiencies, cost parameters

  • Operating strategies and flexibility measures

Each DSM agent is described by a set of parameters:

  • Technical attributes:

    • Maximum/minimum power or capacity

    • Efficiency and ramping rates

    • Storage duration (short-term, seasonal)

    • Process interconnections and sequencing

  • Market behavior:

    • Bidding strategy (e.g., for day-ahead, balancing, CRM, redispatch markets)

    • Flexibility measure (e.g., load shifting, price response, CRM block bidding)

  • Flexibility configuration:

    • Cost tolerance (how much extra cost is allowed for providing flexibility)

    • Participation constraints (e.g., minimum bid size, symmetry, reserve duration)

These characteristics are reflected in the data tables and configuration files used to build the agents.

Example: Agent Attributes#

[ ]:
image_path = Path("assume-repo/docs/source/img/Demand_Attribute.png")
alt_image_path = Path("../../docs/source/img/Demand_Attribute.png")

if image_path.exists():
    display(Image(image_path, width=600))
elif alt_image_path.exists():
    display(Image(alt_image_path, width=600))

Figure: Examples of demand agent characteristics and configurable attributes in ASSUME.


2.3 Technology Inventory & Asset Library#

ASSUME provides an inventory of pre-built technology modules in dst_components:

  • Industrial plants can be configured with multiple process assets, short/long-term storage, and auxiliary systems.

  • Buildings can combine heat pumps, boilers, thermal storage, PV, batteries, and electric vehicles (unidirectional or bidirectional).

This modular approach means you can quickly create a new agent type or expand an existing one by adding more technologies.

Example: Technology Options#

[ ]:
image_path = Path("assume-repo/docs/source/img/Industry.png")
alt_image_path = Path("../../docs/source/img/Industry.png")

if image_path.exists():
    display(Image(image_path, width=600))
elif alt_image_path.exists():
    display(Image(alt_image_path, width=600))
[ ]:
image_path = Path("assume-repo/docs/source/img/Building.png")
alt_image_path = Path("../../docs/source/img/Building.png")

if image_path.exists():
    display(Image(image_path, width=600))
elif alt_image_path.exists():
    display(Image(alt_image_path, width=600))

Figure: Example technology options for industrial and building DSM agents.


2.4 Extending the Framework#

The modular DSM agent class in ASSUME makes it easy to extend:

  • Add new technologies to the asset library (dst_components) with custom parameters.

  • Develop new agent types by subclassing the base agent and specifying new process chains.

  • Connect additional flexibility products (e.g., for different market participation) by adding attributes or strategy modules.

Example: Building Configuration File Snippet#

```python
# Snippet: Defining a building agent with PV, battery, EV, and heat pump
components = {
    "pv_plant": {"max_power": 10, "efficiency": 0.98},
    "battery_storage": {
        "capacity": 40,
        "max_power_charge": 5,
        "max_power_discharge": 5,
        "efficiency": 0.9,
    },
    "heat_pump": {"max_power": 8, "min_power": 2, "efficiency": 3.5},
    "electric_vehicle": {
        "max_power_charge": 7,
        "max_power_discharge": 7,
        "battery_capacity": 50,
    },
}
building = Building(
    id="B102",
    unit_operator="res_operator",
    components=components,
    is_prosumer="Yes",
    flexibility_measure="cost_based_load_shift",
)
```

Section 3: Creating a Custom Flexible DSM Agent in ASSUME#

ASSUME is designed for modularity: you can easily define your own agent class—industrial or building—by assembling any set of technologies from the component library (dst_components).

Below, we show how to create a minimal custom agent by:

  • Assigning key technical attributes

  • Integrating any two technology blocks

  • Specifying their connection (process logic)

  • Exposing a flexible structure for your own processes

All DSM agents in ASSUME are ultimately Pyomo models: you only need to describe the structure; ASSUME handles the optimization logic!

The industrial agent consists of several key components (e.g., furnace, heating unit, storage unit, grinding unit). These components consume electricity and can be modeled to react dynamically to market conditions.

In the ASSUME framework, a DSM agent is created by defining its characteristics, components, and objectives. Let’s start by defining the core characteristics of the agent:

[ ]:
# SPDX-FileCopyrightText: ASSUME Developers
#
# SPDX-License-Identifier: AGPL-3.0-or-later

import logging
from datetime import datetime

import pyomo.environ as pyo

from assume.common.base import SupportsMinMax
from assume.common.forecaster import Forecaster
from assume.units.dsm_load_shift import DSMFlex

logger = logging.getLogger(__name__)


class CustomIndustrialPlant(DSMFlex, SupportsMinMax):
    """
    Represents a paper and pulp plant in an energy system. This includes components like heat pumps,
    boilers, and storage units for operational optimization.

    Args:
        id (str): Unique identifier for the plant.
        unit_operator (str): The operator responsible for the plant.
        bidding_strategies (dict): A dictionary of bidding strategies that define how the plant participates in energy markets.
        forecaster (Forecaster): A forecaster used to get key variables such as fuel or electricity prices.
        components (dict, optional): A dictionary describing the components of the plant, such as heat pumps and boilers.
        objective (str, optional): The objective function of the plant, typically to minimize variable costs. Default is "min_variable_cost".
        flexibility_measure (str, optional): The flexibility measure used for the plant, such as "max_load_shift". Default is "max_load_shift".
        demand (float, optional): The production demand, representing how much product needs to be produced. Default is 0.
        cost_tolerance (float, optional): The maximum allowable increase in cost when shifting load. Default is 10.
        node (str, optional): The node location where the plant is connected within the energy network. Default is "node0".
        location (tuple[float, float], optional): The geographical coordinates (latitude, longitude) of the paper and pulp plant. Default is (0.0, 0.0).
        **kwargs: Additional keyword arguments to support more specific configurations or parameters.
    """

    required_technologies = []
    optional_technologies = []  # "heat_pump", "boiler", "thermal_storage"

    def __init__(
        self,
        id: str,
        unit_operator: str,
        bidding_strategies: dict,
        forecaster: Forecaster,
        components: dict[str, dict] = None,
        technology: str = "",
        objective: str = "",
        flexibility_measure: str = "",
        demand: float = 0,
        cost_tolerance: float = 10,
        congestion_threshold: float = 0,
        node: str = "node0",
        location: tuple[float, float] = (0.0, 0.0),
        **kwargs,
    ):
        super().__init__(
            id=id,
            unit_operator=unit_operator,
            technology=technology,
            components=components,
            bidding_strategies=bidding_strategies,
            forecaster=forecaster,
            node=node,
            location=location,
            **kwargs,
        )

Bringing ``dst_components`` into the Plant Process

In the plant, we use components such as heat pumps, boilers, and storage units to model the production and storage of process heat. These components are imported from dst_components.py and integrated into the plant’s process.

In this section, we will showcase how to model these components, define their characteristics, and integrate them into the overall process of the plant.

In the ASSUME framework, such components are modeled using Pyomo, a Python-based optimization modeling tool. The framework supports detailed modeling of each component by specifying its characteristics and operational constraints.

For each component, attributes such as rated power, ramp rates, and efficiency are defined. These attributes are essential for simulating the component’s behavior in the energy system.

Example: Boiler Model
The boiler is a central component of industrial steam generation. In this framework, the boiler is modeled with various characteristics, including power limits, operational efficiency, and ramp rates. These attributes ensure that it operates within its technical and economic boundaries.

Below is a demo of the technology:

[ ]:
class Boiler:
    def __init__(
        self,
        max_power: float,
        efficiency: float,
        time_steps: list[int],
        fuel_type: str = "electricity",
        min_power: float = 0.0,
        ramp_up: float | None = None,
        ramp_down: float | None = None,
        min_operating_steps: int = 0,
        min_down_steps: int = 0,
        initial_operational_status: int = 1,
        **kwargs,
    ):
        super().__init__()

        self.max_power = max_power
        self.efficiency = efficiency
        self.time_steps = time_steps
        self.fuel_type = fuel_type
        self.min_power = min_power
        self.ramp_up = max_power if ramp_up is None else ramp_up
        self.ramp_down = max_power if ramp_down is None else ramp_down
        self.min_operating_steps = min_operating_steps
        self.min_down_steps = min_down_steps
        self.initial_operational_status = initial_operational_status
        self.kwargs = kwargs

        if self.fuel_type not in ["electricity", "natural_gas", "hydrogen_gas"]:
            raise ValueError(
                "Unsupported fuel_type for a boiler. Choose 'electricity' or 'natural_gas' or 'hydrogen_gas'."
            )

    def add_to_model(
        self, model: pyo.ConcreteModel, model_block: pyo.Block
    ) -> pyo.Block:
        # Define parameters
        "Add your Parameters here"

        # Define variables
        "Add your Parameters here"

        # Define constraints
        "Add your Constraints here"

The initialize_process_sequence() function is responsible for defining how the different components of the DSM unit are connected to form a complete process. This function is the placeholder in which you model the process.

[ ]:
def initialize_process_sequence(self):
    # Per-time-step constraint (default)
    if not self.demand or self.demand == 0:

        @self.model.Constraint(self.model.time_steps)
        def direct_heat_balance(m, t):
            total_heat_production = 0
            if self.has_heatpump:
                # placeholder: fill in the block key, e.g. m.dsm_blocks["heat_pump"]
                total_heat_production += m.dsm_blocks[""].heat_out[t]
            if self.has_boiler:
                # placeholder: fill in the block key, e.g. m.dsm_blocks["boiler"]
                total_heat_production += m.dsm_blocks[""].heat_out[t]
            if self.has_thermal_storage:
                # placeholder: fill in the block key, e.g. m.dsm_blocks["thermal_storage"]
                storage_discharge = m.dsm_blocks[""].discharge[t]
                storage_charge = m.dsm_blocks[""].charge[t]
                return (
                    total_heat_production + storage_discharge - storage_charge
                    >= m.thermal_demand[t]
                )
            else:
                return total_heat_production >= m.thermal_demand[t]
    else:
        pass

def define_constraints(self):
    """
    Defines the constraints for the paper and pulp plant model.
    """
[ ]:
def calculate_marginal_cost(self, start: datetime, power: float) -> float:
    """
    Calculate the marginal cost of the unit based on the provided time and power.

    Args:
        start (datetime): The start time of the dispatch
        power (float): The power output of the unit

    Returns:
        float: The marginal cost of the unit
    """
    marginal_cost = 0

    if self.opt_power_requirement.at[start] > 0:
        marginal_cost = (
            self.variable_cost_series.at[start] / self.opt_power_requirement.at[start]
        )

    return marginal_cost
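
As a quick usage sketch (assuming the plant class implements this method and an optimization run has already populated opt_power_requirement and variable_cost_series):

[ ]:
# Query the marginal cost for the first hour of the optimized schedule
t0 = paper_plant.opt_power_requirement.index[0]
p0 = paper_plant.opt_power_requirement.at[t0]
print(f"Marginal cost at {t0}: {paper_plant.calculate_marginal_cost(t0, p0):.2f} €/MWh")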

Section 4: Flexibility Measures: How They Work & How to Add Your Own#

In the ASSUME framework, Demand-Side Flexibility allows agents to adjust their energy consumption in response to external signals. This flexibility is achieved by shifting loads or adjusting operations based on the agent’s predefined flexibility strategies.

ASSUME currently provides six flexibility measures:

  • electricity_price_signal: Reacts to the electricity price signal.

  • cost_based_load_shift: Maximum flexibility potential within a configured cost tolerance.

  • congestion_management_flexibility: Reacts to a grid congestion signal.

  • symmetric_flexible_block: Provides symmetric flexible blocks.

  • peak_load_shifting: Peak clipping, parameterised by the degree of curtailment.

  • renewable_utilisation: Load follows renewable production.

[ ]:
class DSMFlex:
    # Mapping of flexibility measures to their respective functions
    flexibility_map: dict[str, Callable[[pyo.ConcreteModel], None]] = {
        "electricity_price_signal": lambda self, model: self.electricity_price_signal(
            model
        ),
        "cost_based_load_shift": lambda self, model: self.cost_based_flexibility(model),
        "congestion_management_flexibility": lambda self,
        model: self.grid_congestion_management(model),
        "symmetric_flexible_block": lambda self, model: self.symmetric_flexible_block(
            model
        ),
        "peak_load_shifting": lambda self, model: self.peak_load_shifting_flexibility(
            model
        ),
        "renewable_utilisation": lambda self, model: self.renewable_utilisation(
            model,
        ),
    }

    def design_your_own_flex_measure(self, model):
        """
        Placeholder for custom flexibility measures.
        This method can be overridden to implement specific flexibility strategies.
        """
[ ]:
# Set up signals ----
price_signal_flex = [-1] * 5 + [10] * 5 + [40] * 5 + [10] * 9
# steam_plant_dsm_congestion_signal = [0.5]*9 + [0.9]*5 + [0.5]*10
# steam_plant_dsm_renewable_utilisation = [0]*6 + [5]*5 + [7]*10 + [2]*4
[ ]:
# ---- Step 2: Create NaiveForecast object with price signal ----
forecaster = SteamgenerationForecaster(
    index=time_index,
    electricity_price=price_signal,
    electricity_price_flex=price_signal_flex,  # assign new price signal
    thermal_demand=[None] * 24,
)

# ---- Step 3: Define plant configuration (technologies) ----
components = {
    "boiler": {
        "max_power": 10,  # MW
        "min_power": 2,  # MW
        "ramp_up": 10,  # MW/h
        "ramp_down": 10,  # MW/h
        "efficiency": 0.9,
        "fuel_type": "electricity",
    },
}

# ---- Step 4: Create the SteamPlant DSM agent ----
paper_plant_1 = SteamPlant(
    id="steam_plant_dsm",
    unit_operator="paper_gmbh",
    components=components,
    demand=174,
    forecaster=forecaster,
    flexibility_measure="symmetric_flexible_block",
    cost_tolerance=10,  # 10% allowed cost deviation for flexibility
    congestion_threshold=0.9,
    peak_load_cap=10,
    bidding_strategies={"eom": "industry_energy_optimization"},
)

# ---- Step 5: Solve for optimal operation with flexibility ----
paper_plant_1.determine_optimal_operation_with_flex()
flex_profile = paper_plant_1.flex_power_requirement
[ ]:
# ---- Step 6: Plot the results ----
plt.figure(figsize=(8, 4))

# Primary y-axis: power demand
ax1 = plt.gca()
ax1.step(
    time_index,
    thermal_demand,
    where="mid",
    color="tab:blue",
    linewidth=2.5,
    linestyle="--",
    label="Inflexible Demand Profile",
)
ax1.step(
    time_index,
    flex_profile,
    where="mid",
    color="tab:green",
    linewidth=2,
    label="Flexible consumption",
)
ax1.set_xlabel("Hour")
ax1.set_ylabel("Power Input [MW]")
ax1.set_title("Paper Plant: DSM vs. Inflexible Demand")
ax1.grid(True)

# Legend for the left axis
lines_1, labels_1 = ax1.get_legend_handles_labels()

# Secondary y-axis: the flexibility price signal driving the shift
ax2 = ax1.twinx()
ax2.plot(
    time_index, price_signal_flex, "r-.", linewidth=2, label="Electricity Price (€/MWh)"
)
ax2.set_ylabel("Electricity Price (€/MWh)", color="red")
ax2.tick_params(axis="y", labelcolor="red")

# Legend for the right axis
lines_2, labels_2 = ax2.get_legend_handles_labels()
ax1.legend(lines_1 + lines_2, labels_1 + labels_2, loc="upper left")

plt.tight_layout()
plt.show()

A common flexibility strategy for DSM units is to cap the load profile at a certain percentile (e.g., 90%). Any load exceeding this cap is shifted to periods of lower demand. This approach:

  • Reduces grid congestion and flattens the load curve.

  • Lowers network costs and can avoid peak-related tariffs.

  • Enables better integration with system constraints or market rules.

Example:
The plot below shows a load profile (blue) and the effect of capping at the 90th percentile (orange). The area above the red dashed line is “shifted” to later hours.
[ ]:
image_path = Path("assume-repo/docs/source/img/Peak load.png")
alt_image_path = Path("../../docs/source/img/Peak load.png")

if image_path.exists():
    display(Image(image_path))
elif alt_image_path.exists():
    display(Image(alt_image_path))
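
The arithmetic behind the cap can be sketched in a few lines (illustrative only; inside ASSUME the measure is enforced as Pyomo constraints, and the profile here is synthetic):

[ ]:
# Sketch: cap a synthetic load profile at its 90th percentile and shift the
# clipped energy into the hours with the most headroom (energy-conserving)
load = 5 + 4 * np.sin(np.linspace(0, 2 * np.pi, 24)) ** 2  # synthetic peaky profile [MW]
cap = np.percentile(load, 90)
shifted = np.minimum(load, cap)
surplus = load.sum() - shifted.sum()  # energy clipped above the cap

for i in np.argsort(shifted):  # refill lowest-load hours first, up to the cap
    add = min(cap - shifted[i], surplus)
    shifted[i] += add
    surplus -= add
    if surplus <= 0:
        break

print(f"Peak before: {load.max():.2f} MW, after: {shifted.max():.2f} MW")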

Another flexibility measure is to shift load in time to coincide with periods of high renewable (RE) generation (e.g., solar, wind). In this approach:

  • The load profile is adapted to follow a reference curve for RE availability.

  • Enables higher renewable integration and reduces curtailment.

  • Optimizes DSM unit operation in line with system and market incentives.

Example:
Below, the DSM-shifted load (green) tracks the renewable intensity signal (orange), while the original load (blue) does not. The yellow shaded area shows the RE-intensity window.
[ ]:
image_path = Path("assume-repo/docs/source/img/RE availability.png")
alt_image_path = Path("../../docs/source/img/RE availability.png")

if image_path.exists():
    display(Image(image_path))
elif alt_image_path.exists():
    display(Image(alt_image_path))
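
A minimal, energy-conserving sketch of this redistribution (synthetic signals; the shipped measure additionally respects component and ramping constraints):

[ ]:
# Sketch: redistribute a flat load in proportion to renewable availability,
# keeping the total daily energy constant
flat_load = np.full(24, 7.25)  # MW
re_signal = np.clip(np.sin(np.linspace(0, np.pi, 24)), 1e-6, None)  # toy availability
re_following = flat_load.sum() * re_signal / re_signal.sum()
assert np.isclose(flat_load.sum(), re_following.sum())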

Both flexibility measures above show how DSM units in ASSUME can:

  • Serve as a controllable resource for the power system,

  • Participate in markets,

  • Provide value by either capping peaks or synchronizing with renewable availability.

These strategies can be implemented, visualized, and analyzed using the ASSUME framework.

Section 5: Integrating Bidding Strategies with DSM Agents in ASSUME#

From Flexibility to Market Participation#

After optimizing the flexibility of a DSM unit, the next essential step is to make this flexibility profitable by participating in electricity markets. This is achieved through bidding strategies: algorithms that translate physical flexibility into concrete market offers.

In ASSUME, bidding strategies:

  • Enable DSM units to submit bids to electricity markets (e.g., day-ahead, balancing, or reserve).

  • Respect market requirements (block length, symmetry, minimum bid size, etc.).

  • Link the physical flexibility (from optimization) to actual, revenue-generating market products.


How Does It Work?#

  • Separation of concerns: The DSM agent determines “what is possible” physically; the bidding strategy decides “how to sell it in the market.”

  • Market products: Most reserve and flexibility markets operate on block-based products (e.g., 4-hour symmetric CRM blocks in Germany).

  • Plug & play: You can switch or test different strategies without rewriting the core agent logic.


[ ]:
from assume.strategies.naive_strategies import (
    CapacityHeuristicBalancingNegStrategy,
    # EnergyNaiveStrategy,
    CapacityHeuristicBalancingPosStrategy,
    # EnergyNaiveProfileStrategy,
    DsmEnergyNaiveRedispatchStrategy,
    DsmEnergyOptimizationStrategy,
    MinMaxStrategy,
)

bidding_strategies: dict[str, BaseStrategy] = {
    "industry_energy_optimization": DsmEnergyOptimizationStrategy,
    "industry_capacity_heuristic_balancing_pos": CapacityHeuristicBalancingPosStrategy,
    "industry_capacity_heuristic_balancing_neg": CapacityHeuristicBalancingNegStrategy,
    "industry_energy_naive_redispatch": DsmEnergyNaiveRedispatchStrategy,
}

Bidding into the Day-Ahead Market with DSM Units#

DSM agents in ASSUME can participate not only in reserve markets (CRM) but also directly in the energy market. In the Day-Ahead Market (DAM), bids are created by mapping the agent’s optimized operation profile to market orders.

  • Bid Volume: The optimized (scheduled) load at each time step.

  • Bid Price: The marginal cost of the unit at that time (or a user-defined price).

This approach enables DSM units to monetize their flexibility in the core energy market, not just as reserves.

Below, we demonstrate how to create DAM bids from a DSM plant’s optimized operation using a custom strategy (CustomDADSMStrategy) built on MinMaxStrategy.

[ ]:
class CustomDADSMStrategy(MinMaxStrategy):
    """
    Custom DAM strategy for DSM agent: bids at marginal cost plus a risk premium.
    """

    def calculate_bids(self, unit, market_config, product_tuples, **kwargs):
        # Ensure the optimized schedule is available
        if unit.optimisation_counter == 0:
            unit.determine_optimal_operation_with_flex()
            unit.optimisation_counter = 1

        bids = []
        risk_premium = 10  # €/MWh, e.g.

        for product in product_tuples:
            start = product[0]
            volume = unit.opt_power_requirement.at[start]
            try:
                price = unit.calculate_marginal_cost(start, volume) + risk_premium
            except AttributeError:
                price = 120  # fallback

            bids.append(
                {
                    "start_time": start,
                    "end_time": product[1],
                    "only_hours": product[2],
                    "price": price,
                    "volume": -volume,
                }
            )
        return bids


# 2. Define hourly products for the DAM (here: one per hour)
product_tuples = [(t, t + pd.Timedelta(hours=1), None) for t in paper_plant_1.index]

# 3. Instantiate the DAM strategy
da_strategy = CustomDADSMStrategy()

# 4. Calculate bids for the DAM
da_bids = da_strategy.calculate_bids(
    unit=paper_plant_1,
    market_config=None,
    product_tuples=product_tuples,
)
[ ]:
times = [bid["start_time"] for bid in da_bids]
volumes = [-bid["volume"] for bid in da_bids]
prices = [bid["price"] for bid in da_bids]

fig, ax1 = plt.subplots(figsize=(10, 4))

ax1.bar(times, volumes, width=0.04, color="skyblue", label="DAM Bid Volume [MW]")
ax1.set_ylabel("Bid Volume [MW]")
ax1.set_xlabel("Hour")
ax1.legend(loc="upper left")
ax1.set_title("Day-Ahead Market Bids from DSM Agent")

ax2 = ax1.twinx()
ax2.plot(times, prices, "r--", label="Bid Price [€/MWh]")
ax2.set_ylabel("Bid Price [€/MWh]", color="red")
ax2.legend(loc="upper right")

plt.tight_layout()
plt.show()
[ ]:
# # 1. Set up your DSM agent and run optimization (see previous sections)

# # 2. Define hourly products for the DAM (here: one per hour)
# product_tuples = [(t, t+pd.Timedelta(hours=1), None) for t in paper_plant_1.index]

# # 3. Instantiate the DAM strategy
# da_strategy = DsmEnergyOptimizationStrategy()

# # 4. Calculate bids for the DAM
# da_bids = da_strategy.calculate_bids(
#     unit=paper_plant_1,
#     market_config=None,      # (or appropriate market config if available)
#     product_tuples=product_tuples,
# )

Bidding into the Control Reserve Market (CRM) with DSM Units#

Example: Creating CRM Market Bids from a DSM Agent

Suppose we have a steam plant agent (paper_plant_1) with an optimized load profile from Section 4. We want to create symmetric CRM bids for all 4-hour windows.

[ ]:
# 1. Prepare 4-hour rolling blocks for CRM bidding
block_length = 4
product_tuples = []
for i in range(len(paper_plant_1.index) - block_length):
    product_tuples.append(
        (paper_plant_1.index[i], paper_plant_1.index[i + block_length], None)
    )

# Set max/min plant capacity attributes for CRM bidding (if not already set by flexibility measure)
paper_plant_1.max_plant_capacity = components["boiler"]["max_power"]
paper_plant_1.min_plant_capacity = components["boiler"]["min_power"]

# 2. Instantiate the positive CRM bidding strategy
strategy = CapacityHeuristicBalancingPosStrategy()

# 3. Calculate market bids using the agent's optimized flexibility
bids = strategy.calculate_bids(
    unit=paper_plant_1, market_config=None, product_tuples=product_tuples
)
[ ]:
# 1. Prepare per-hour bid availability (sum overlapping blocks)
timesteps = pd.date_range(start="2023-01-01 00:00", periods=24, freq="h")
block_length = 4
bid_volume_per_hour = np.zeros(len(timesteps))

# Fill per-hour bid volume from overlapping blocks
for bid in bids:
    start_idx = timesteps.get_loc(bid["start_time"])
    for offset in range(block_length):
        t_idx = start_idx + offset
        if t_idx < len(bid_volume_per_hour):
            bid_volume_per_hour[t_idx] += bid["volume"]

# 2. Plotting
plt.figure(figsize=(12, 5))

# Plot the flexible plant profile (assuming you have it as flex_profile)
plt.step(timesteps, flex_profile, where="mid", label="DSM Plant Load [MW]", linewidth=2)

# Plot the per-hour CRM bid volume as a filled area
plt.bar(
    timesteps,
    bid_volume_per_hour,
    width=0.04,
    alpha=0.4,
    color="tab:purple",
    label="CRM Bid Volume Available [MW]",
)

plt.xlabel("Hour")
plt.ylabel("Power [MW]")
plt.title("DSM Plant Flexibility and CRM Bids (per time step)")
plt.legend()
plt.grid(True, alpha=0.4)
plt.tight_layout()
plt.show()

Visualizing Bidding Intervals#

To get an intuition for when the agent can provide flexibility in the market, you can shade the periods of eligible bids on top of the agent’s power profile.
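
A minimal sketch (assuming, as in the cell above, that each bid dictionary exposes a start_time and that every block spans block_length hours):

[ ]:
# Shade the hours covered by eligible CRM bids on top of the load profile
fig, ax = plt.subplots(figsize=(10, 4))
ax.step(timesteps, flex_profile, where="mid", linewidth=2, label="DSM Plant Load [MW]")
for bid in bids:
    ax.axvspan(
        bid["start_time"],
        bid["start_time"] + pd.Timedelta(hours=block_length),
        color="tab:purple",
        alpha=0.08,
    )
ax.set_xlabel("Hour")
ax.set_ylabel("Power [MW]")
ax.set_title("Eligible CRM Bidding Intervals (shaded)")
ax.legend()
plt.tight_layout()
plt.show()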


What Does This Mean?#

  • The DSM agent is now able to convert its flexibility into actual market offers.

  • Bids are structured to comply with real market rules e.g., block size, minimum capacity, symmetry.

  • You can experiment with different strategies (day-ahead, balancing, symmetric/asymmetric, etc.) simply by swapping the bidding strategy class, as sketched below.
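
For example, producing negative balancing bids from the same agent only requires swapping the strategy object (using the classes imported earlier in this section):

[ ]:
# Swap in the negative balancing strategy without touching the agent itself
neg_strategy = CapacityHeuristicBalancingNegStrategy()
neg_bids = neg_strategy.calculate_bids(
    unit=paper_plant_1, market_config=None, product_tuples=product_tuples
)
print(f"Created {len(neg_bids)} negative balancing bids")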


In the next section, we’ll use these tools for investment decision analysis under market uncertainty!


6: Investment Decision-Making for a Flexible Industrial Plant under Market Uncertainty#

In this final section, we apply everything we’ve learned to a real-world use case. You will model a hydrogen production plant that must decide which flexible technology configuration to invest in, under market price uncertainty.

We’ll:

  • Create a mini electricity system with a mix of flexible and inflexible demand units, and several power plants (renewables + conventional).

  • Define two technology investment options for the plant:

    1. Electrolyser

    2. Electrolyser + Seasonal Hydrogen Storage

  • Simulate system operation under several market scenarios (e.g., different CO₂/electricity price trajectories).

  • Quantify costs and revenues for each scenario.

  • Apply expected utility theory (EUT) to select the most robust investment (a minimal computation sketch follows below).
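
As a preview of the EUT step, the computation boils down to weighting a utility of each scenario outcome by its probability (the numbers and the exponential utility form below are illustrative assumptions; the actual costs come from the simulations in this section):

[ ]:
# Illustrative expected-utility computation over CO₂ price scenarios
scenario_probs = {"low_co2": 0.3, "med_co2": 0.5, "high_co2": 0.2}  # assumed
annual_costs = {"low_co2": 1.2e6, "med_co2": 1.5e6, "high_co2": 2.1e6}  # €/a, assumed

def utility(cost, risk_aversion=1e-6):
    # Concave (risk-averse) utility: higher is better, large costs penalized
    return -np.exp(risk_aversion * cost)

expected_utility = sum(p * utility(annual_costs[s]) for s, p in scenario_probs.items())
print(f"Expected utility: {expected_utility:.3f}")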

Let’s get started!

[ ]:
# @title Setting Up Power Plant Units
from assume.units.steam_generation_plant import SteamPlant

# Define the list of power plants with their characteristics
powerplant_units_data = {
    "name": [
        "Wind onshore",
        "Solar",
        "Gas-fired power plant",
    ],
    "technology": [
        "wind_onshore",
        "solar",
        "combined cycle gas turbine",
    ],
    "bidding_EOM": [
        "powerplant_energy_naive",
        "powerplant_energy_naive",
        "powerplant_energy_naive",
    ],
    "fuel_type": [
        "renewable",
        "renewable",
        "natural gas",
    ],
    "emission_factor": [0, 0, 0.201],
    "max_power": [50, 50, 100],
    "min_power": [0, 0, 10],
    "efficiency": [1, 1, 0.33],
    "ramp_up": [None, None, 50],
    "ramp_down": [None, None, 50],
    "additional_cost": [0, 0, 10.3],
    "unit_operator": [
        "renewables_operator",
        "renewables_operator",
        "PP_operator",
    ],
}

# Create the DataFrame
powerplant_units = pd.DataFrame(powerplant_units_data)

# Define the list of demand units with their characteristics
demand_units_data = {
    "name": ["demand_EOM1", "demand_EOM2"],
    "technology": ["inflex_demand", "inflex_demand"],
    "bidding_EOM": ["demand_energy_naive", "demand_energy_naive"],
    "max_power": [70, 90],  # Max demand in MW
    "min_power": [0, 0],  # Min demand in MW
    "unit_operator": ["eom_de1", "eom_de1"],
}

# Create the DataFrame
demand_units = pd.DataFrame(demand_units_data)

# Define the hourly time range for the simulation horizon (2023-01-01 to 2023-01-05)
time_index = pd.date_range(start="2023-01-01 00:00", end="2023-01-05 00:00", freq="1h")

# Simulate demand data for 'demand_EOM1' and 'demand_EOM2' (example demand pattern)
# For simplicity, we'll create a fluctuating demand pattern using a sinusoidal function
demand_values1 = 60 + 10 * np.sin(np.linspace(0, 6 * np.pi, len(time_index)))
demand_values2 = 80 + 10 * np.sin(np.linspace(0, 6 * np.pi, len(time_index)))

# Create the DataFrame with both 'demand_EOM1' and 'demand_EOM2'
demand_df = pd.DataFrame(
    {
        "datetime": time_index,
        "demand_EOM1": demand_values1,
        "demand_EOM2": demand_values2,
    }
)

# Set 'datetime' as the index
demand_df.set_index("datetime", inplace=True)

# Define the list of industrial DSM units (for hydrogen plant) with their characteristics
plant_with_out_storage = {
    "name": ["A360"],
    "unit_type": ["hydrogen_plant"],
    "technology": ["electrolyser"],
    "bidding_EOM": ["industry_energy_optimization"],
    "unit_operator": ["dsm_operator_1"],
    "objective": ["min_variable_cost"],
    "flexibility_measure": [
        "cost_based_load_shift"
    ],  # congestion_management_flexibility
    "congestion_threshold": [0.8],  # 80% congestion threshold for DSM
    "cost_tolerance": [2],
    "demand": [500],  # MW
    # "fuel_type": ["electricity", ""],
    "max_power": [10],  # MW
    "min_power": [0],  # MW
    "ramp_up": [10],  # MW/hr
    "ramp_down": [10],  # MW/hr
    "min_operating_time": [2],
    "min_down_time": [0],
    "efficiency": [0.8],
    "start_price": [5],
}

# Create the DataFrame
industrial_dsm_plant_with_out_storage = pd.DataFrame(plant_with_out_storage)

# Define the input directory
input_dir = "inputs"
scenario = "tutorial_dsm_plant_with_out_storage"
scenario_path = os.path.join(input_dir, scenario)

# Create the directory if it doesn't exist
os.makedirs(scenario_path, exist_ok=True)

# Extend demand_df so it covers the full simulation horizon
demand_df_extended = pd.concat([demand_df] * 5)
demand_df_extended.index = pd.date_range(
    start="2023-01-01", periods=len(demand_df_extended), freq="1h"
)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path}/powerplant_units.csv", index=False)
demand_units.to_csv(f"{scenario_path}/demand_units.csv", index=False)
demand_df_extended.to_csv(f"{scenario_path}/demand_df.csv")
industrial_dsm_plant_with_out_storage.to_csv(
    f"{scenario_path}/industrial_dsm_units.csv", index=False
)

# Define fuel prices for the power plant units
fuel_prices = {
    "fuel": ["natural_gas", "co2"],
    "price": [35, 65],  # Example prices for uranium and CO2
}

# Convert the dictionary to a DataFrame and save as CSV
fuel_prices_df = pd.DataFrame(fuel_prices).T
fuel_prices_df.to_csv(f"{scenario_path}/fuel_prices_df.csv", index=True, header=False)

np.random.seed(42)

# Assume you have 'time_index' as your hourly time range
hours = time_index.hour

# Create solar profile (Gaussian bell around noon, zero at night)
solar_profile = 0.7 * np.exp(-0.5 * ((hours - 12) / 3.5) ** 2)
solar_profile += 0.01 * np.random.randn(len(time_index))  # add small noise
solar_profile = np.clip(solar_profile, 0, 1)

# Set solar to zero for night hours (before 06:00 and after 18:00)
is_daytime = (hours >= 6) & (hours <= 18)
solar_profile = np.where(is_daytime, solar_profile, 0.0)

# Wind profile as before
wind_profile = (
    0.5
    + 0.15 * np.sin(np.linspace(0, 6 * np.pi, len(time_index)))
    + 0.05 * np.random.randn(len(time_index))
)
wind_profile = np.clip(wind_profile, 0, 1)

# Build dataframe
availability_df = pd.DataFrame(
    {
        "datetime": time_index,
        "Wind Onshore": wind_profile,
        "Solar": solar_profile,
    }
)
availability_df.set_index("datetime", inplace=True)

# Save to CSV if needed
availability_df.to_csv(f"{scenario_path}/availability_df.csv")

# Preview
# display(availability_df.head(48))

# Define the time range for the forecast (matching the simulation horizon)

# Define the base price for the diurnal electricity price and amplitude for fluctuations
base_price = 10
price_amplitude = 20  # Max fluctuation range
hours_in_day = 24

# Use a sine wave to simulate the diurnal price cycle
electricity_price = base_price + price_amplitude * np.sin(
    2 * np.pi * (time_index.hour / hours_in_day)
)
# Generate a synthetic congestion signal (0–1), with some hours above the 0.8 threshold
congestion_signal = 0.5 + 0.3 * np.sin(
    np.linspace(0, 2 * np.pi, len(time_index))
)  # base sinusoidal
congestion_signal += 0.15 * np.random.randn(len(time_index))  # add a little noise
congestion_signal = np.clip(congestion_signal, 0, 1)

# Manually set a few hours above the threshold for demonstration
congested_hours = [12, 13, 14, 15, 35, 36, 50]
for h in congested_hours:
    if h < len(congestion_signal):
        congestion_signal[h] = np.random.uniform(0.85, 1.0)

forecasts_data = {
    "datetime": time_index,
    "price_EOM": electricity_price,  # Diurnal electricity price based on the sine wave
    "congestion_signal": congestion_signal,
}

# Create the DataFrame
forecasts_df = pd.DataFrame(forecasts_data)

# Set 'datetime' as the index
forecasts_df.set_index("datetime", inplace=True)

# Save the DataFrame as CSV
forecasts_df.to_csv(f"{scenario_path}/forecasts_df.csv")

# plt.plot(time_index, congestion_signal, color="red")
# plt.axhline(0.8, color="gray", ls="--", label="Congestion Threshold")
# plt.title("Congestion Signal (Synthetic Example)")
# plt.ylabel("Congestion Signal")
# plt.xlabel("Hour")
# plt.legend()
# plt.tight_layout()
# plt.show()

For our simulation, we will define the configuration in a YAML format, which specifies the time range, market setup, and other parameters. This configuration will be saved as a config.yaml file.

Below is the creation of the configuration dictionary and saving it to a YAML file.

[ ]:
# @title Configuring Market
# Define the configuration dictionary
config = {
    "Day_Ahead": {
        "start_date": "2023-01-01 00:00",
        "end_date": "2023-01-05 00:00",
        "time_step": "1h",
        "save_frequency_hours": 24,
        "markets_config": {
            "EOM": {
                "operator": "EOM_operator",
                "product_type": "energy",
                "products": [
                    {
                        "duration": "1h",  # Each product lasts for 1 hour
                        "count": 24,  # Number of products per day (24 hours)
                        "first_delivery": "1h",  # First delivery is 1 hour after the market opens
                    }
                ],
                "opening_frequency": "24h",  # Market opens once every 24 hours
                "opening_duration": "1h",  # Market stays open for 1 hour
                "volume_unit": "MWh",  # Market volume is measured in MWh
                "maximum_bid_volume": 100000,  # Maximum bid volume allowed
                "maximum_bid_price": 3000,  # Maximum allowed bid price
                "minimum_bid_price": -500,  # Minimum allowed bid price
                "price_unit": "EUR/MWh",  # Bid price unit is EUR per MWh
                "market_mechanism": "pay_as_clear",  # Market clears with pay-as-clear mechanism
            }
        },
    }
}

# Define the path for the config file
config_path = os.path.join(scenario_path, "config.yaml")

# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)

print(f"Configuration YAML file has been saved to '{config_path}'.")

Now that we have prepared the input files and configuration, we can proceed to run the simulation using the ASSUME framework. In this step, we will load the scenario and execute the simulation.

[ ]:
# Define paths for input and output data
csv_path = "outputs"

# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario,  # Scenario folder for our case
    study_case="Day_Ahead",  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")
[ ]:
output_dir = f"outputs/{scenario}_Day_Ahead"

# ---- Load the market_meta CSV file (to get clearing price) ----
market_meta = pd.read_csv(f"{output_dir}/market_meta.csv")

# Parse the datetime if needed
if "product_start_time" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start_time"])
elif "product_start" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start"])
else:
    # fallback
    market_meta["time"] = pd.to_datetime(
        market_meta.iloc[:, -2]
    )  # usually second last column

# Sometimes price is missing for some hours; ensure numeric
market_meta["price"] = pd.to_numeric(market_meta["price"], errors="coerce")
market_meta = market_meta.sort_values("time")

# ---- Calculate Average Clearing Price ----
average_clearing_price = market_meta["price"].mean()
print(f"Average Market Clearing Price: {average_clearing_price:.2f} €/MWh")

# ---- Save to summary DataFrame ----
result_summary = pd.DataFrame(
    {
        "Scenario": [scenario],
        "Average_Price": [average_clearing_price],
        # Add more KPIs if you want (e.g. total cost, total consumption etc.)
    }
)

# Save or append to results file
summary_path = f"outputs/{scenario}_Day_Ahead/summary_kpis.csv"
result_summary.to_csv(summary_path, index=False)
print(f"Saved summary KPI to {summary_path}")
[ ]:
input_dir = "inputs"
scenario_2 = "tutorial_dsm_plant_with_out_storage_CO2_volatile"
scenario_path_2 = os.path.join(input_dir, scenario_2)
os.makedirs(scenario_path_2, exist_ok=True)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path_2}/powerplant_units.csv", index=False)
demand_units.to_csv(f"{scenario_path_2}/demand_units.csv", index=False)
demand_df_extended.to_csv(f"{scenario_path_2}/demand_df.csv")
industrial_dsm_plant_with_out_storage.to_csv(
    f"{scenario_path_2}/industrial_dsm_units.csv", index=False
)
# Save to CSV if needed
availability_df.to_csv(f"{scenario_path_2}/availability_df.csv")
# Save the DataFrame as CSV
forecasts_df.to_csv(f"{scenario_path_2}/forecasts_df.csv")

fuel_prices = {
    "fuel": ["natural_gas", "co2"],
    "price": [35, 90],  # Example prices for uranium and CO2
}

# Convert the dictionary to a DataFrame and save as CSV
fuel_prices_df = pd.DataFrame(fuel_prices).T
fuel_prices_df.to_csv(f"{scenario_path_2}/fuel_prices_df.csv", index=True, header=False)
# Define the path for the config file
config_path = os.path.join(scenario_path_2, "config.yaml")
# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)
[ ]:
# Define paths for input and output data
csv_path = "outputs"

# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario_2,  # Scenario folder for our case
    study_case="Day_Ahead",  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")
[ ]:
output_dir = f"outputs/{scenario_2}_Day_Ahead"

# ---- Load the market_meta CSV file (to get clearing price) ----
market_meta = pd.read_csv(f"{output_dir}/market_meta.csv")

# Parse the datetime if needed
if "product_start_time" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start_time"])
elif "product_start" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start"])
else:
    # fallback
    market_meta["time"] = pd.to_datetime(
        market_meta.iloc[:, -2]
    )  # usually second last column

# Sometimes price is missing for some hours; ensure numeric
market_meta["price"] = pd.to_numeric(market_meta["price"], errors="coerce")
market_meta = market_meta.sort_values("time")

# ---- Calculate Average Clearing Price ----
average_clearing_price = market_meta["price"].mean()
print(f"Average Market Clearing Price: {average_clearing_price:.2f} €/MWh")

# ---- Save to summary DataFrame ----
result_summary = pd.DataFrame(
    {
        "Scenario": [scenario_2],
        "Average_Price": [average_clearing_price],
        # Add more KPIs if you want (e.g. total cost, total consumption etc.)
    }
)

# Save or append to results file
summary_path = f"outputs/{scenario_2}_Day_Ahead/summary_kpis.csv"
result_summary.to_csv(summary_path, index=False)
print(f"Saved summary KPI to {summary_path}")
[ ]:
# Second configuration: electrolyser paired with hydrogen seasonal storage
plant_with_storage = {
    "name": ["A360", "A360"],
    "unit_type": ["hydrogen_plant", "hydrogen_plant"],
    "technology": ["electrolyser", "hydrogen_seasonal_storage"],
    "bidding_EOM": ["industry_energy_optimization", "industry_energy_optimization"],
    "unit_operator": ["dsm_operator_1", "dsm_operator_1"],
    "objective": ["min_variable_cost", ""],
    "flexibility_measure": [
        "cost_based_load_shift",
        "",
    ],  # congestion_management_flexibility
    "congestion_threshold": [0.8, None],  # 80% congestion threshold for DSM
    "cost_tolerance": [2, None],
    "demand": [500, None],  # MW
    "max_power": [10, 10],  # MW
    "min_power": [0, 0],  # MW
    "ramp_up": [10, 10],  # MW/hr
    "ramp_down": [10, 10],  # MW/hr
    "min_operating_time": [2, None],
    "min_down_time": [0, None],
    "efficiency": [0.8, None],
    "start_price": [5, None],
    "capacity": [None, 200],
    "min_soc": [None, 0],
    "initial_soc": [None, 0],
    "storage_loss_rate": [None, 0],
    "charge_loss_rate": [None, 0],
    "discharge_loss_rate": [None, 0],
    "storage_type": [None, "short-term"],
}

# Create the DataFrame
industrial_dsm_plant_with_storage = pd.DataFrame(plant_with_storage)

input_dir = "inputs"
scenario_3 = "tutorial_dsm_plant_with_storage"
scenario_path_3 = os.path.join(input_dir, scenario_3)
os.makedirs(scenario_path_3, exist_ok=True)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path_3}/powerplant_units.csv", index=False)
demand_units.to_csv(f"{scenario_path_3}/demand_units.csv", index=False)
demand_df_extended.to_csv(f"{scenario_path_3}/demand_df.csv")
industrial_dsm_plant_with_storage.to_csv(
    f"{scenario_path_3}/industrial_dsm_units.csv", index=False
)
# Save to CSV if needed
availability_df.to_csv(f"{scenario_path_3}/availability_df.csv")
# Save the DataFrame as CSV
forecasts_df.to_csv(f"{scenario_path_3}/forecasts_df.csv")
fuel_prices = {
    "fuel": ["natural_gas", "co2"],
    "price": [35, 65],  # Example prices for natural gas and CO2
}

# Convert the dictionary to a DataFrame and save as CSV
fuel_prices_df = pd.DataFrame(fuel_prices).T
fuel_prices_df.to_csv(f"{scenario_path_3}/fuel_prices_df.csv", index=True, header=False)

# Define the path for the config file
config_path = os.path.join(scenario_path_3, "config.yaml")
# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)
[ ]:
# Define paths for input and output data
csv_path = "outputs"

# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario_3,  # Scenario folder for our case
    study_case="Day_Ahead",  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")
[ ]:
output_dir = f"outputs/{scenario_3}_Day_Ahead"

# ---- Load the market_meta CSV file (to get clearing price) ----
market_meta = pd.read_csv(f"{output_dir}/market_meta.csv")

# Parse the datetime if needed
if "product_start_time" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start_time"])
elif "product_start" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start"])
else:
    # fallback
    market_meta["time"] = pd.to_datetime(
        market_meta.iloc[:, -2]
    )  # usually second last column

# Sometimes price is missing for some hours; ensure numeric
market_meta["price"] = pd.to_numeric(market_meta["price"], errors="coerce")
market_meta = market_meta.sort_values("time")

# ---- Calculate Average Clearing Price ----
average_clearing_price = market_meta["price"].mean()
print(f"Average Market Clearing Price: {average_clearing_price:.2f} €/MWh")

# ---- Save to summary DataFrame ----
result_summary = pd.DataFrame(
    {
        "Scenario": [scenario_3],
        "Average_Price": [average_clearing_price],
        # Add more KPIs if you want (e.g. total cost, total consumption etc.)
    }
)

# Save or append to results file
summary_path = f"outputs/{scenario_3}_Day_Ahead/summary_kpis.csv"
result_summary.to_csv(summary_path, index=False)
print(f"Saved summary KPI to {summary_path}")
[ ]:
input_dir = "inputs"
scenario_4 = "tutorial_dsm_plant_with_storage_CO2_volatile"
scenario_path_4 = os.path.join(input_dir, scenario_4)
os.makedirs(scenario_path_4, exist_ok=True)

# Save the DataFrames to CSV files
powerplant_units.to_csv(f"{scenario_path_4}/powerplant_units.csv", index=False)
demand_units.to_csv(f"{scenario_path_4}/demand_units.csv", index=False)
demand_df_extended.to_csv(f"{scenario_path_4}/demand_df.csv")
industrial_dsm_plant_with_storage.to_csv(
    f"{scenario_path_4}/industrial_dsm_units.csv", index=False
)
# Save to CSV if needed
availability_df.to_csv(f"{scenario_path_4}/availability_df.csv")
# Save the DataFrame as CSV
forecasts_df.to_csv(f"{scenario_path_4}/forecasts_df.csv")

fuel_prices = {
    "fuel": ["natural_gas", "co2"],
    "price": [30, 50],  # Example prices for uranium and CO2
}

# Convert the dictionary to a DataFrame and save as CSV
fuel_prices_df = pd.DataFrame(fuel_prices).T
fuel_prices_df.to_csv(f"{scenario_path_4}/fuel_prices_df.csv", index=True, header=False)
# Define the path for the config file
config_path = os.path.join(scenario_path_4, "config.yaml")
# Save the configuration to a YAML file
with open(config_path, "w") as file:
    yaml.dump(config, file, sort_keys=False)
[ ]:
# Define paths for input and output data
csv_path = "outputs"

# Define the data format and database URI
# Use "local_db" for SQLite database or "timescale" for TimescaleDB in Docker

# Create directories if they don't exist
os.makedirs(csv_path, exist_ok=True)
os.makedirs("local_db", exist_ok=True)

# Choose the data format: either local SQLite database or TimescaleDB
data_format = "local_db"  # Options: "local_db" or "timescale"

# Set the database URI based on the selected data format
if data_format == "local_db":
    db_uri = "sqlite:///local_db/assume_db.db"  # SQLite database
elif data_format == "timescale":
    db_uri = "postgresql://assume:assume@localhost:5432/assume"  # TimescaleDB

# Create the World instance
world = World(database_uri=db_uri, export_csv_path=csv_path)

# Load the scenario by providing the world instance
# The path to the inputs folder and the scenario name (subfolder in inputs)
# and the study case name (which config to use for the simulation)
load_scenario_folder(
    world,
    inputs_path=input_dir,
    scenario=scenario_4,  # Scenario folder for our case
    study_case="Day_Ahead",  # The config we defined earlier
)

# Run the simulation
world.run()

print("Simulation has completed.")
[ ]:
output_dir = f"outputs/{scenario_4}_Day_Ahead"

# ---- Load the market_meta CSV file (to get clearing price) ----
market_meta = pd.read_csv(f"{output_dir}/market_meta.csv")

# Parse the datetime if needed
if "product_start_time" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start_time"])
elif "product_start" in market_meta.columns:
    market_meta["time"] = pd.to_datetime(market_meta["product_start"])
else:
    # fallback
    market_meta["time"] = pd.to_datetime(
        market_meta.iloc[:, -2]
    )  # usually second last column

# Sometimes price is missing for some hours; ensure numeric
market_meta["price"] = pd.to_numeric(market_meta["price"], errors="coerce")
market_meta = market_meta.sort_values("time")

# ---- Calculate Average Clearing Price ----
average_clearing_price = market_meta["price"].mean()
print(f"Average Market Clearing Price: {average_clearing_price:.2f} €/MWh")

# ---- Save to summary DataFrame ----
result_summary = pd.DataFrame(
    {
        "Scenario": [scenario_4],
        "Average_Price": [average_clearing_price],
        # Add more KPIs if you want (e.g. total cost, total consumption etc.)
    }
)

# Save or append to results file
summary_path = f"outputs/{scenario_4}_Day_Ahead/summary_kpis.csv"
result_summary.to_csv(summary_path, index=False)
print(f"Saved summary KPI to {summary_path}")

To select the optimal technology configuration for our DSM-enabled industrial plant, we apply Expected Utility Theory (EUT). EUT is a standard approach for investment decision-making under uncertainty, especially when the decision-maker is risk-averse.

Approach:

  • We simulate two configurations (“Electrolyser” and “Electrolyser+Storage”) under two market scenarios (Stable CO₂ and Volatile CO₂).

  • For each pair, we calculate the total annual electricity cost, scaling the 500 MWh simulation-period demand up to a full year (500 MWh × 73 = 36,500 MWh/year).

  • We use the average electricity price from each simulation to estimate the annual cost for each configuration-scenario pair.

  • We assume subjective probabilities for each market scenario (e.g., 0.4 for Stable, 0.6 for Volatile).

  • Profits are normalized to [0, 1] (higher profit = higher utility).

  • We use a risk-averse utility function (e.g., exponential: \(U(x) = 1 - e^{-\alpha x}\), \(\alpha > 0\)); see the worked expression after this list.

  • The configuration with the highest expected utility is selected for investment.
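
Concretely, if \(x_{c,s}\) denotes the normalized profit of configuration \(c\) in scenario \(s\), and \(p_s\) the subjective scenario probabilities, the expected utility is

\[EU(c) = \sum_{s} p_s \, U(x_{c,s}) = \sum_{s} p_s \left(1 - e^{-\alpha x_{c,s}}\right).\]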


Steps:

  1. Calculate the annual electricity cost, hydrogen revenue, and profit for each scenario/configuration.

  2. Normalize the profits and compute utilities.

  3. Compute the expected utility for each configuration.

  4. Select the configuration with the highest expected utility.

[ ]:
# ---- 1. List your scenario summary CSV paths ----
summary_files = {
    "Electrolyser, Stable CO2": "outputs/tutorial_dsm_plant_with_out_storage_Day_Ahead/summary_kpis.csv",
    "Electrolyser, Volatile CO2": "outputs/tutorial_dsm_plant_with_out_storage_CO2_volatile_Day_Ahead/summary_kpis.csv",
    "Electrolyser+Storage, Stable CO2": "outputs/tutorial_dsm_plant_with_storage_Day_Ahead/summary_kpis.csv",
    "Electrolyser+Storage, Volatile CO2": "outputs/tutorial_dsm_plant_with_storage_CO2_volatile_Day_Ahead/summary_kpis.csv",
}

# ---- 2. Read average prices ----
average_prices = {}
for key, path in summary_files.items():
    df = pd.read_csv(path)
    avg_price = df["Average_Price"].iloc[0]
    average_prices[key] = avg_price

# ---- 3. Set annual hydrogen production (MWh) ----
annual_mwh = 500 * 73  # adjust as appropriate for your plant

# ---- 4. Calculate annual costs ----
annual_costs = {k: v * annual_mwh for k, v in average_prices.items()}

# ---- 5. Calculate annual revenues ----
h2_sale_price = 300  # €/MWh
annual_revenue = {k: h2_sale_price * annual_mwh for k in average_prices}

# ---- 6. Calculate annual profit (revenue - cost) ----
annual_profit = {k: annual_revenue[k] - annual_costs[k] for k in average_prices}

# ---- 7. Organize all in a DataFrame ----
summary = pd.DataFrame(
    [
        {
            "Configuration": "Electrolyser",
            "Scenario": "Stable CO2",
            "Annual_Cost": annual_costs["Electrolyser, Stable CO2"],
            "Annual_Revenue": annual_revenue["Electrolyser, Stable CO2"],
            "Annual_Profit": annual_profit["Electrolyser, Stable CO2"],
        },
        {
            "Configuration": "Electrolyser",
            "Scenario": "Volatile CO2",
            "Annual_Cost": annual_costs["Electrolyser, Volatile CO2"],
            "Annual_Revenue": annual_revenue["Electrolyser, Volatile CO2"],
            "Annual_Profit": annual_profit["Electrolyser, Volatile CO2"],
        },
        {
            "Configuration": "Electrolyser+Storage",
            "Scenario": "Stable CO2",
            "Annual_Cost": annual_costs["Electrolyser+Storage, Stable CO2"],
            "Annual_Revenue": annual_revenue["Electrolyser+Storage, Stable CO2"],
            "Annual_Profit": annual_profit["Electrolyser+Storage, Stable CO2"],
        },
        {
            "Configuration": "Electrolyser+Storage",
            "Scenario": "Volatile CO2",
            "Annual_Cost": annual_costs["Electrolyser+Storage, Volatile CO2"],
            "Annual_Revenue": annual_revenue["Electrolyser+Storage, Volatile CO2"],
            "Annual_Profit": annual_profit["Electrolyser+Storage, Volatile CO2"],
        },
    ]
)

# ---- 8. Set scenario probabilities ----
prob_stable = 0.4
prob_volatile = 0.6
summary["Probability"] = summary["Scenario"].map(
    {"Stable CO2": prob_stable, "Volatile CO2": prob_volatile}
)

# ---- 9. Normalize profit for utility (higher profit = higher utility) ----
min_profit = summary["Annual_Profit"].min()
max_profit = summary["Annual_Profit"].max()
summary["Norm_Profit"] = (summary["Annual_Profit"] - min_profit) / (
    max_profit - min_profit
)

# ---- 10. Compute risk-averse utility ----
alpha = 3
summary["Utility"] = 1 - np.exp(-alpha * summary["Norm_Profit"])

# ---- 11. Expected utility for each config ----
expected_utilities = (
    (summary["Probability"] * summary["Utility"])
    .groupby(summary["Configuration"])
    .sum()
)

# ---- 12. Output results ----
print("Summary table:")
print(summary)
print("\nExpected utility for each configuration:")
print(expected_utilities)
print("\nBest option:", expected_utilities.idxmax())

Interpretation:

  • The configuration with the highest expected utility should be selected for investment.

  • This method accounts for both the uncertainty in CO₂ prices and the risk attitude of the decision maker.

  • Here, utility increases with profit, and the exponential form penalizes low-profit outcomes more strongly, reflecting risk aversion.

Note: You can adjust the probabilities and the risk-aversion parameter (alpha) to reflect different decision-maker preferences or scenario likelihoods, as sketched below.
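
As a quick robustness check (a sketch reusing the summary DataFrame from above), you can sweep alpha and see whether the preferred configuration changes:

[ ]:
# Recompute expected utilities for several risk-aversion levels
for a in [0.5, 1, 3, 5]:
    utilities = 1 - np.exp(-a * summary["Norm_Profit"])
    eu = (summary["Probability"] * utilities).groupby(summary["Configuration"]).sum()
    print(f"alpha = {a}: best option -> {eu.idxmax()}")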