Welcome to Rog RL’s documentation!

Installation

Stable release

To install Rog RL, run this command in your terminal:

$ pip install rog_rl

This is the preferred method to install Rog RL, as it will always install the most recent stable release.

If you don’t have pip installed, the Python installation guide can walk you through the process.

From sources

The sources for Rog RL can be downloaded from the GitLab repo.

You can either clone the public repository:

$ git clone git://gitlab.aicrowd.com/rog-rl/rog-rl

Or download the tarball:

$ curl -OJL https://gitlab.aicrowd.com/rog-rl/rog-rl/tarball/master

Once you have a copy of the source, you can install it with:

$ python setup.py install

Usage

To use Rog RL in a project:

import rog_rl
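
A minimal sketch of a typical session, assuming the Gym-style API of rog_rl.env.RogSimEnv documented below (the action_space attribute is the standard gym.Env one and is an assumption here):

from rog_rl.env import RogSimEnv

env = RogSimEnv()          # default configuration
observation = env.reset()  # initial observation of the grid

done = False
while not done:
    # Sample a random action; a trained policy would go here instead
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)

env.close()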


rog_rl package

Submodules

rog_rl.agent module

class rog_rl.agent.DiseaseSimAgent(unique_id, model, prob_agent_movement=0.0, moore=True)[source]

Bases: mesa.agent.Agent

moore = True
move_to(new_position)[source]

Move the agent to a new location on the grid and perform other associated housekeeping tasks:

  • Update the global observation in the model

pos = None
prob_agent_movement = 0.0
process_state_transitions()[source]
random_move()[source]
set_state(new_state: rog_rl.agent_state.AgentState)[source]
step()[source]

A single step of the agent.

trigger_infection(prob_infection=1.0)[source]

Attempts to trigger an infection; returns True if the infection is triggered, and False otherwise.

rog_rl.agent_event module

class rog_rl.agent_event.AgentEvent(previous_state=<AgentState.SUSCEPTIBLE: 0>, new_state=<AgentState.SUSCEPTIBLE: 0>, update_timestep=-1)[source]

Bases: object

mark_as_executed()[source]

Mark that this event has been executed

mark_as_pending()[source]

Mark that the execution of this event is pending
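
For illustration, a small sketch of constructing and tracking an event, using the constructor signature shown above (the timestep value is arbitrary):

from rog_rl.agent_event import AgentEvent
from rog_rl.agent_state import AgentState

# Plan a SUSCEPTIBLE -> EXPOSED transition at timestep 10
event = AgentEvent(
    previous_state=AgentState.SUSCEPTIBLE,
    new_state=AgentState.EXPOSED,
    update_timestep=10,
)
event.mark_as_pending()   # execution of the event is pending
event.mark_as_executed()  # later, once the agent applies the transition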

rog_rl.agent_state module

class rog_rl.agent_state.AgentState[source]

Bases: enum.Enum

An enumeration of the disease states an agent can be in.

SUSCEPTIBLE = 0
EXPOSED = 1
INFECTIOUS = 2
SYMPTOMATIC = 3
RECOVERED = 4
VACCINATED = 5
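
A small sketch of working with the enumeration; that the integer values double as channel indices in the model’s (width, height, num_states) observation array is an assumption based on initialize_observation in rog_rl.model:

from rog_rl.agent_state import AgentState

channel = AgentState.INFECTIOUS.value  # 2; assumed channel index in the observation
print(AgentState(channel).name)        # "INFECTIOUS"

for state in AgentState:
    print(state.name, state.value)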

rog_rl.benchmark module

rog_rl.benchmark.performance_metrics(render_on=False)[source]
rog_rl.benchmark.profile(filename)[source]

rog_rl.cli module

Console script for rog_rl.

rog_rl.colors module

class rog_rl.colors.ANSI_COLOR_MAP[source]

Bases: object

BACK_BLACK = '\x1b[40m'
BACK_BLUE = '\x1b[44m'
BACK_CYAN = '\x1b[46m'
BACK_GREEN = '\x1b[42m'
BACK_MAGENTA = '\x1b[45m'
BACK_RED = '\x1b[41m'
BACK_RESET = '\x1b[49m'
BACK_WHITE = '\x1b[47m'
BACK_YELLOW = '\x1b[43m'
FORE_BLACK = '\x1b[30m'
FORE_BLUE = '\x1b[34m'
FORE_CYAN = '\x1b[36m'
FORE_GREEN = '\x1b[32m'
FORE_MAGENTA = '\x1b[35m'
FORE_RED = '\x1b[31m'
FORE_RESET = '\x1b[39m'
FORE_WHITE = '\x1b[37m'
FORE_YELLOW = '\x1b[33m'
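
These are plain ANSI escape sequences, so they can be concatenated directly into terminal output; a minimal sketch:

from rog_rl.colors import ANSI_COLOR_MAP as C

# Green foreground text, then reset the foreground color
print(C.FORE_GREEN + "recovered" + C.FORE_RESET)
# White-on-red text, resetting both background and foreground afterwards
print(C.BACK_RED + C.FORE_WHITE + " infectious " + C.BACK_RESET + C.FORE_RESET)
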
class rog_rl.colors.ColorMap(mode='rgb')[source]

Bases: object

get_color(d)[source]
class rog_rl.colors.Colors[source]

Bases: object

Reference: https://materialuicolors.co/ (Level: 600)

Can potentially use: https://github.com/secretBiology/SecretColors/

AMBER = (255, 179, 0)
BLUE = (30, 136, 229)
BLUE_GREY = (84, 110, 122)
BROWN = (109, 76, 65)
CYAN = (0, 172, 193)
DEEP_ORANGE = (244, 81, 30)
DEEP_PURPLE = (94, 53, 177)
GREEN = (67, 160, 71)
GREY = (117, 117, 117)
INDIGO = (57, 73, 171)
LIGHT_BLUE = (3, 155, 229)
LIGHT_GREEN = (124, 179, 66)
LIGHT_GREY = (234, 237, 237)
LIME = (192, 202, 51)
ORANGE = (251, 140, 0)
PINK = (216, 27, 96)
PURPLE = (142, 36, 170)
RED = (229, 57, 53)
TEAL = (0, 137, 123)
WHITE = (255, 255, 255)
YELLOW = (253, 216, 53)

rog_rl.contact_network module

class rog_rl.contact_network.ContactNetwork[source]

Bases: object

This keeps a record of all the “contacts” that happen in a single simulation

compute_R0()[source]

Returns the value of R0 based on all the registered infections

register_contact(agent_a, agent_b)[source]
register_infection_spread(agent_a, agent_b)[source]

Register the fact that agent_a infected agent_b
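
A rough sketch of the intended flow; the SimpleNamespace stand-ins are hypothetical, since what ContactNetwork needs from an agent object is not documented here (real code would pass DiseaseSimAgent instances):

from types import SimpleNamespace
from rog_rl.contact_network import ContactNetwork

agent_a = SimpleNamespace(unique_id=0)  # hypothetical stand-in for a DiseaseSimAgent
agent_b = SimpleNamespace(unique_id=1)

network = ContactNetwork()
network.register_contact(agent_a, agent_b)           # record that the two agents met
network.register_infection_spread(agent_a, agent_b)  # record that agent_a infected agent_b
print(network.compute_R0())                          # R0 over all registered infections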

rog_rl.disease_planner module

class rog_rl.disease_planner.DiseasePlannerBase(random=False)[source]

Bases: object

This class plans the schedule of different state transitions for a disease

get_disease_plan(base_timestep=0)[source]

Plans out the schedule of the state transitions for a particular agent using a particular disease model.

It returns a list of AgentEvent objects which have to be “executed” by the Agent at the right moment.

sample_disease_progression()[source]
class rog_rl.disease_planner.SEIRDiseasePlanner(latent_period_mu=8, latent_period_sigma=4, incubation_period_mu=20, incubation_period_sigma=12, recovery_period_mu=56, recovery_period_sigma=4, random=False)[source]

Bases: rog_rl.disease_planner.DiseasePlannerBase

This class plans the schedule of different state transitions for a disease

build_disease_plan(disease_progression, base_timestep=0)[source]
get_disease_plan(base_timestep=0)[source]

It returns a list of AgentEvent objects which have to be “executed” by the Agent at the right moment.

sample_disease_progression()[source]

Plans out the schedule of the state transitions for a particular agent using a particular disease model.

class rog_rl.disease_planner.SimpleSEIRDiseasePlanner(latent_period=2, incubation_period=5, recovery_period=14, random=False)[source]

Bases: rog_rl.disease_planner.SEIRDiseasePlanner

This class plans the schedule of different state transitions for a disease
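
A sketch of requesting a plan from the simple planner, per the signatures above (that AgentEvent exposes its constructor arguments as attributes is an assumption):

from rog_rl.disease_planner import SimpleSEIRDiseasePlanner

planner = SimpleSEIRDiseasePlanner(latent_period=2,
                                   incubation_period=5,
                                   recovery_period=14)
plan = planner.get_disease_plan(base_timestep=0)  # list of AgentEvent objects

for event in plan:
    # previous_state / new_state / update_timestep assumed stored as attributes
    print(event.previous_state, event.new_state, event.update_timestep)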

rog_rl.env module

class rog_rl.env.ActionType[source]

Bases: enum.Enum

An enumeration of the actions available to the agent.

STEP = 0
VACCINATE = 1
class rog_rl.env.RogSimEnv(config={})[source]

Bases: gym.core.Env

close()[source]

Override close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

dummy_env_step()[source]

Implements a fake env.step for faster integration testing with RL experiment frameworks

get_current_game_metrics(dummy_simulation=False)[source]

Returns a dictionary containing important game metrics

get_current_game_score()[source]

Returns the current game score

The game score is currently defined as the percentage of susceptibles left in the population.
initialize_renderer(mode='human')[source]
render(mode='human')[source]

This method provides the option to render the environment’s behavior to a window, which should be readable to the human eye if mode is set to ‘human’.

reset()[source]

Resets the environment to an initial state and returns an initial observation.

Note that this function should not reset the environment’s random number generator(s); random variables in the environment’s state should be sampled independently between multiple calls to reset(). In other words, each call of reset() should yield an environment suitable for a new episode, independent of previous episodes.

Returns:
observation (object): the initial observation.
seed(seed=None)[source]

Sets the seed for this env’s random number generator(s).

Note:
Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren’t accidental correlations between multiple generators.
Returns:
list<bigint>: the list of seeds used in this env’s random number generators. The first value in the list should be the “main” seed, or the value which a reproducer should pass to ‘seed’. Often, the main seed equals the provided ‘seed’, but this won’t be true if seed=None, for example.
set_renderer(renderer)[source]
step(action)[source]

Run one timestep of the environment’s dynamics. When end of episode is reached, you are responsible for calling reset() to reset this environment’s state.

Accepts an action and returns a tuple (observation, reward, done, info).

Args:
action (object): an action provided by the agent
Returns:
observation (object): agent’s observation of the current environment
reward (float): amount of reward returned after previous action
done (bool): whether the episode has ended, in which case further step() calls will return undefined results
info (dict): contains auxiliary diagnostic information (helpful for debugging, and sometimes learning)
update_renderer(mode='human')[source]

Updates the latest board state on the renderer
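
A hedged sketch of seeding and inspecting an episode with the methods above (the action_space attribute is the standard gym.Env one and is an assumption here):

from rog_rl.env import RogSimEnv

env = RogSimEnv()
env.seed(42)   # seed the env's random number generators for reproducibility
env.reset()

observation, reward, done, info = env.step(env.action_space.sample())
print(env.get_current_game_score())    # percentage of susceptibles left
print(env.get_current_game_metrics())  # dictionary of important game metrics
env.close()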

rog_rl.model module

class rog_rl.model.DiseaseSimModel(width=50, height=50, population_density=0.75, vaccine_density=0, initial_infection_fraction=0.1, initial_vaccination_fraction=0.0, prob_infection=0.2, prob_agent_movement=0.0, disease_planner_config={'incubation_period_mu': 20, 'incubation_period_sigma': 0, 'latent_period_mu': 8, 'latent_period_sigma': 0, 'recovery_period_mu': 56, 'recovery_period_sigma': 0}, max_timesteps=200, early_stopping_patience=14, toric=True, seed=None)[source]

Bases: mesa.model.Model

The model class holds the model-level attributes, manages the agents, and generally handles the global level of our model.

There is only one model-level parameter: how many agents the model contains. When a new model is started, we want it to populate itself with the given number of agents.

The scheduler is a special model component which controls the order in which agents are activated.

get_observation()[source]
get_population_fraction_by_state(state: rog_rl.agent_state.AgentState)[source]
get_scheduler()[source]
initialize_agents(infection_fraction, vaccination_fraction)[source]

Initializes the initial agents on the grid

initialize_contact_network()[source]

Initializes the contact network

initialize_datacollector()[source]

Sets up the initial datacollector

initialize_disease_planner()[source]

Initializes a disease planner that the Agents can use to “schedule” infection progressions

initialize_grid()[source]

Initializes the initial Grid

initialize_observation()[source]

Observation is an nd-array of shape (width, height, num_states), where each AgentState is marked in a separate channel for each of the cells

initialize_scheduler()[source]

Initializes the scheduler

is_running()[source]
propagate_infections()[source]

Propagates infection during a single simulation step

simulation_completion_checks()[source]
Simulation is complete if:

  • the timesteps have exceeded max_timesteps, or
  • the fraction of susceptible population is <= 0, or
  • the fraction of susceptible population has not changed over the last N timesteps

step()[source]

A model step. Used for collecting data and advancing the schedule

tick()[source]

A mirror of the internal step function, to help avoid confusion with the RL step() in RL codebases

vaccinate_cell(cell_x, cell_y)[source]

Vaccinates an agent at cell_x, cell_y, if present

Responds with (is_vaccination_successful, vaccination_response) of types (boolean, VaccinationResponse)
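
Putting these methods together, a sketch of driving the model directly (parameter values are arbitrary):

from rog_rl.model import DiseaseSimModel
from rog_rl.agent_state import AgentState
from rog_rl.vaccination_response import VaccinationResponse

model = DiseaseSimModel(width=20, height=20,
                        initial_infection_fraction=0.1, seed=42)

model.tick()  # advance the simulation by one step
print(model.get_population_fraction_by_state(AgentState.INFECTIOUS))

is_successful, response = model.vaccinate_cell(5, 5)
if response == VaccinationResponse.CELL_EMPTY:
    print("no agent at that cell")

observation = model.get_observation()  # (width, height, num_states) nd-array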

rog_rl.renderer module

class rog_rl.renderer.ANSIRenderer[source]

Bases: object

clear_screen()[source]
close()[source]
render(grid)[source]
render_grid(grid)[source]

Renders the Grid in ANSI

render_stats()[source]
setup(mode='ansi')[source]
setup_stats()[source]
update_stats(key, value)[source]
class rog_rl.renderer.Renderer(grid_size=(30, 30))[source]

Bases: object

close()[source]
convert_gym_color(color: rog_rl.colors.Colors)[source]
draw_cell(cell_x, cell_y, color=False)[source]
draw_grid(color)[source]
draw_standard_line(color, start_coord, end_coord)[source]
draw_standard_rect(color, rect_dims)[source]
draw_stats()[source]
get_cell_base(cell_x, cell_y)[source]
get_grid_height()[source]
get_grid_width()[source]
post_render(return_rgb_array=False)[source]

Part of this code is taken from https://github.com/openai/gym/blob/master/gym/envs/classic_control/rendering.py. The render method of the Viewer class clears the window, which also causes any text on the screen to be lost. Hence we copy the contents of the render function and modify it.

pre_render()[source]
prepare_render()[source]
setup(mode='human')[source]
setup_constants()[source]
setup_stats()[source]
update_stats(key, value)[source]

rog_rl.scheduler module

class rog_rl.scheduler.CustomScheduler(model: mesa.model.Model)[source]

Bases: mesa.time.RandomActivation

add(agent: mesa.agent.Agent) → None[source]

Add an Agent object to the schedule.

Args:
agent: An Agent to be added to the schedule. NOTE: The agent must have a step() method.
get_agent_count_by_state(state: rog_rl.agent_state.AgentState) → int[source]

Returns the current number of agents in a particular state.

get_agent_fraction_by_state(state: rog_rl.agent_state.AgentState) → int[source]

Returns the current fraction of agents in a particular state.

get_agents_by_state(state: rog_rl.agent_state.AgentState)[source]
remove(agent: mesa.agent.Agent) → None[source]

Remove all instances of a given agent from the schedule.

Args:
agent: An agent object.
update_agent_state_in_registry(agent: mesa.agent.Agent, previous_state: rog_rl.agent_state.AgentState) → None[source]
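
A sketch of querying the scheduler through a model, using only the methods above:

from rog_rl.model import DiseaseSimModel
from rog_rl.agent_state import AgentState

model = DiseaseSimModel(width=10, height=10, seed=7)
scheduler = model.get_scheduler()

print(scheduler.get_agent_count_by_state(AgentState.SUSCEPTIBLE))
print(scheduler.get_agent_fraction_by_state(AgentState.SUSCEPTIBLE))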

rog_rl.server module

Configure visualization elements and instantiate a server

rog_rl.server.agent_potrayal(agent)[source]
rog_rl.server.build_server(grid_width=50, grid_height=50)[source]
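
A sketch of building and launching the visualization server; that the return value is a mesa ModularServer with a launch() method is an assumption based on the mesa base classes used elsewhere in the package:

from rog_rl.server import build_server

server = build_server(grid_width=50, grid_height=50)
server.launch()  # assumed mesa ModularServer API; serves the visualization in a browser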

rog_rl.vaccination_response module

class rog_rl.vaccination_response.VaccinationResponse[source]

Bases: enum.Enum

An enumeration of the possible responses to a vaccination attempt.

VACCINATION_SUCCESS = 0
CELL_EMPTY = 1
AGENT_EXPOSED = 2
AGENT_INFECTIOUS = 3
AGENT_SYMPTOMATIC = 4
AGENT_RECOVERED = 5
AGENT_VACCINATED = 6
AGENT_VACCINES_EXHAUSTED = 7

rog_rl.visualization module

class rog_rl.visualization.CustomTextGrid(grid, converter=None)[source]

Bases: mesa.visualization.TextVisualization.TextGrid

grid = None
render(endl='\n')[source]

What to show when printed.

Module contents

Top-level package for Rog RL.

Contributing

Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

You can contribute in many ways:

Types of Contributions

Report Bugs

Report bugs at https://gitlab.aicrowd.com/rog-rl/rog-rl/issues.

If you are reporting a bug, please include:

  • Your operating system name and version.
  • Any details about your local setup that might be helpful in troubleshooting.
  • Detailed steps to reproduce the bug.

Fix Bugs

Look through the GitLab issues for bugs. Anything tagged with “bug” and “help wanted” is open to whoever wants to implement it.

Implement Features

Look through the GitLab issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it.

Write Documentation

Rog RL could always use more documentation, whether as part of the official Rog RL docs, in docstrings, or even on the web in blog posts, articles, and such.

Submit Feedback

The best way to send feedback is to file an issue at https://gitlab.aicrowd.com/rog-rl/rog-rl/issues.

If you are proposing a feature:

  • Explain in detail how it would work.
  • Keep the scope as narrow as possible, to make it easier to implement.
  • Remember that this is a volunteer-driven project, and that contributions are welcome :)

Get Started!

Ready to contribute? Here’s how to set up RogRL for local development.

  1. Fork the RogRL repo on GitLab.

  2. Clone your fork locally:

    $ git clone git@gitlab.aicrowd.com:your_name_here/RogRL.git
    
  3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:

    $ mkvirtualenv RogRL
    $ cd RogRL/
    $ python setup.py develop
    
  4. Create a branch for local development:

    $ git checkout -b name-of-your-bugfix-or-feature
    

    Now you can make your changes locally.

  5. When you’re done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:

    $ flake8 rog_rl tests
    $ python setup.py test   # or: pytest
    $ tox
    

    To get flake8 and tox, just pip install them into your virtualenv.

  6. Commit your changes and push your branch to GitLab:

    $ git add .
    $ git commit -m "Your detailed description of your changes."
    $ git push origin name-of-your-bugfix-or-feature
    
  7. Submit a pull request through the GitLab website.

Pull Request Guidelines

Before you submit a pull request, check that it meets these guidelines:

  1. The pull request should include tests.
  2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.
  3. The pull request should work for Python 3.5, 3.6, 3.7 and 3.8, and for PyPy. Check https://travis-ci.com/gitlab/spMohanty/RogRL/pull_requests and make sure that the tests pass for all supported Python versions.

Tips

To run a subset of tests:

$ pytest -k agent_state

Deploying

A reminder for the maintainers on how to deploy. Make sure all your changes are committed (including an entry in HISTORY.rst). Then run:

$ bump2version patch # possible: major / minor / patch
$ git push
$ git push --tags

Travis will then deploy to PyPI if tests pass.

Credits

Development Lead

Contributors

None yet. Why not be the first?

History

0.1.0 (2020-04-02)

  • First release on PyPI.
