Welcome to the RangL Pathways to Net Zero challenge repository!
To get started, read the challenge overview.
RangL uses the OpenAI Gym framework. To install the RangL environment on your local machine:

- If necessary, install the pip package manager (you can do this by running the `get-pip.py` Python script)
- Run `pip install -e .`
Then head into the `rangl/env_open_loop` folder and check out the README there.
Write Python code which returns your desired action at each timestep. The performance of this code (your agent) will determine your score in the challenge.
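For orientation, here is a minimal sketch of that loop. It assumes the environment follows the classic Gym `reset`/`step` API; the environment ID and the random placeholder policy are illustrative assumptions, not this repository's actual names:

```python
import gym

env = gym.make("reference_environment:rangl-v0")  # placeholder env ID; see the env README
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Your agent: map the current observation to an action.
    action = env.action_space.sample()  # random placeholder policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode score: {total_reward}")
```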
Design freely: you might use explicit rules, reinforcement learning, or some combination of these. The helper class `Evaluate` in `meaningful_agent_training/util.py` illustrates some basic approaches (sketched in code after this list):

- `min_agent`: gives your actions the smallest possible value at each step
- `max_agent`: gives your actions the largest possible value at each step
- `random_agent`: random actions at each step
- `RL_agent`: actions are provided by a previously trained RL agent
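As a rough illustration of what the first three strategies look like, here is a sketch assuming a continuous (`Box`) action space; the real implementations live in `meaningful_agent_training/util.py` and may differ:

```python
def min_action(env, obs):
    # Smallest permitted action at every step (cf. min_agent).
    return env.action_space.low

def max_action(env, obs):
    # Largest permitted action at every step (cf. max_agent).
    return env.action_space.high

def random_action(env, obs):
    # Uniformly random action at every step (cf. random_agent).
    return env.action_space.sample()
```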
The `Evaluate` class also provides benchmark agents drawn from the Integrated Energy Vision (IEV) study, which your agent should aim to beat:

- `breeze_agent`: implements actions corresponding to the Emerging scenario in the IEV study. These actions focus on deploying offshore wind
- `gale_agent`: implements the Progressive IEV scenario: higher offshore wind and a mix of blue and green hydrogen
- `storm_agent`: implements the Transformational IEV scenario: highest offshore wind, paired with green hydrogen
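Conceptually, each benchmark agent replays a fixed action schedule derived from its IEV scenario. A hedged sketch of that pattern (the schedule shape and values below are invented placeholders, not the study's numbers):

```python
import numpy as np

# Invented placeholder schedule: one action vector per timestep.
# The real scenario schedules are defined in the Evaluate class.
SCENARIO_SCHEDULE = np.tile([1.0, 0.5, 0.2], (20, 1))

def scenario_action(t, obs):
    # Ignore the observation and replay the precomputed plan.
    return SCENARIO_SCHEDULE[t]
```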
At each step, the RangL environment uses random noise to model real-world uncertainty. To evaluate an agent yourself, simply average its performance over multiple random seeds. To do this:
- Add your agent to the `Evaluate` helper class (the agents above are examples)
- Evaluate it just as in `meaningful_agent_training/evaluate.py` or `meaningful_agent_training/evaluate_standard_agents.py` (a sketch of the averaging loop follows this list)
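Here is a minimal sketch of seed-averaged evaluation, assuming the classic Gym API (the helper name and signature are hypothetical; use the repository's `Evaluate` class for actual scoring):

```python
import numpy as np

def mean_score(env, policy, seeds=range(10)):
    """Average an agent's episode reward over several seeds (sketch)."""
    totals = []
    for seed in seeds:
        env.seed(seed)  # classic Gym API; newer Gym passes seed to reset()
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, done, info = env.step(policy(env, obs))
            total += reward
        totals.append(total)
    return np.mean(totals)
```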
To get started with training RL agents, head to the `meaningful_agent_training` folder and check out the README.
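For a feel of what training can look like before opening that README, here is a minimal sketch using Stable-Baselines3; the choice of library, algorithm, and environment ID are assumptions for illustration, not this repository's confirmed setup:

```python
import gym
from stable_baselines3 import PPO

env = gym.make("reference_environment:rangl-v0")  # placeholder env ID
model = PPO("MlpPolicy", env, verbose=1)          # algorithm choice is illustrative
model.learn(total_timesteps=100_000)
model.save("my_rangl_agent")                      # reload later with PPO.load(...)
```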
First, test submitting a random agent to the competition by heading to the `random_agent_submission` folder and checking out its README. Then head to the `meaningful_agent_submission` folder to submit your competition entry.
The `evaluation` folder is used to generate the challenge's web front-end and is not relevant to agent development.