Sensitivity analysis #113

Open · wants to merge 50 commits into base: development

Commits (50)
fcf304e  Adding sensitivity analysis folder and files, minus batch script (Aug 21, 2024)
1608c1d  Merged development into sensitivity-analysis (Aug 22, 2024)
692cf7a  Added beta distribution to initialization and made necessary fun conc… (Aug 22, 2024)
a651514  Not yet ready, in middle of changes for Snellius run. (Aug 23, 2024)
f5571da  fixed lazy path errors, now pushing to test on Snellius again (Aug 23, 2024)
31f8557  Batch script now runs on Snellius (Aug 26, 2024)
ce253ac  Changed some print statements (Aug 27, 2024)
cb71315  De-linting config.py (Aug 27, 2024)
8d1f9b0  I think I don't have the right lint roller, I am reverting it back to what worked for now (Aug 27, 2024)
390a62d  Added network metrics as node properties to make it possible to not s… (Sep 3, 2024)
f75280b  Altered pytest to match updated network metrics functions; removed a … (Sep 3, 2024)
6897cc5  Added further data collection options for any user who wants to save … (Sep 4, 2024)
dced6d9  Attempt to fix the build fail. Reverting with next commit if unsuccce… (Sep 4, 2024)
420ceea  Sorry about the build fail; I know it is probably for a reason, so I … (Sep 4, 2024)
7bf41ea  I am trying to resolve the build fail again. (Sep 4, 2024)
aceedaf  Now writes model_theta values to config. (Sep 9, 2024)
8d90489  initialize_model_properties is no longer part of the model object. Fu… (Sep 10, 2024)
e44b8e5  Trade money now trades money. PTM step now updates with theta of t-1… (Oct 10, 2024)
edb37d3  Attempt to fix pytest. (Oct 10, 2024)
debd316  Bellman equation NN update (Nov 18, 2024)
4546f4e  Added GPU availability condition to run file (Nov 18, 2024)
55c9112  added scheduled shock argument, fixed bug in model identifier (Nov 19, 2024)
9d8e17b  fixed typo in SBATCH time (Nov 19, 2024)
444bd7e  corrected environment name for Snellius (Nov 19, 2024)
ad49889  attempting fix of environment setup on Snellius (Nov 21, 2024)
8cbdad3  Correcting bash script file reference for Snellius (Nov 21, 2024)
c061769  Re-ordered global theta in initialization to fix seed responsiveness (Nov 21, 2024)
1ec12ec  Added profiling request to bash script (Nov 21, 2024)
c5aa698  Changed folder path (Nov 21, 2024)
9722c73  Added initialization of income and technology index; removed unused di… (Nov 25, 2024)
4189c7a  added null-variety experiment scripts (Jan 3, 2025)
6e7f825  changed conda source directory due to very annoying Snellius permissi… (Jan 7, 2025)
788d343  Revert "changed conda source directory due to very annoying Snellius …" (Jan 7, 2025)
66ab367  changed conda source directory due to very annoying Snellius permissi… (Jan 7, 2025)
eafb6e2  fix bug in no adaptation version of wealth consumption (Jan 7, 2025)
288b499  debugging on Snellius with no_adapt_run script (Jan 7, 2025)
f66eb24  Adding tool to check equal initialization for experimental setups. (Jan 13, 2025)
166fe94  Adding 4-panel plot tool. (Jan 16, 2025)
1653ee1  Added data processor. (Jan 19, 2025)
43cb124  added generic bash script that accepts python file as argument. (Jan 19, 2025)
eaecd0a  pushed updated codes actually used in Snellius (Jan 20, 2025)
f19c674  Added new data request for aggregated wealth and consumption in data_… (Jan 20, 2025)
a507737  Fixed consumption typo in data_processor.py (Jan 20, 2025)
250172d  testing i_a data output on snellius (Jan 20, 2025)
577a525  Adding seed-wise mutual information calculation and zero variance skip (Jan 21, 2025)
e6e1102  histogram data for timestep 25 (Jan 22, 2025)
f125340  histogram data for timestep 74 (Jan 22, 2025)
b1f6c6f  Trying to write directly to research drive from Snellius. (Jan 25, 2025)
35da21d  Trying to write directly to research drive from Snellius. (Jan 25, 2025)
8982eb3  Wasserstein distance added. (Jan 28, 2025)
Viewing changes from 1 commit: 8d1f9b0, "I think I don't have the right lint roller, I am reverting it back to what worked for now"
Victoria authored and committed Aug 27, 2024
commit 8d1f9b08d7f50835054bf5a9c2fc6460f0e6c632

dgl_ptm/dgl_ptm/config.py: 46 changes (20 additions & 26 deletions)
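In summary, this commit's diff restores `cls` as the first argument of every pydantic `field_validator` (the preceding de-linting commit had apparently swapped these for `self`), rejoins some wrapped lines, and adjusts docstrings; the functional behaviour of config.py is otherwise unchanged.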
@@ -18,13 +18,12 @@
 class MThetaDist(BaseModel):
     """Base class for m_theta distribution."""
     type: str = "multinomial"
-    parameters: list[int | float | list[int | float]] = [[0.02, 0.03, 0.05, 0.9],
-                                                         [0.7, 0.8, 0.9, 1]]
+    parameters: list[int | float | list[int | float]] = [[0.02, 0.03, 0.05, 0.9], [0.7, 0.8, 0.9, 1]]
     round: bool = False
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v, values):
+    def _convert_parameters(cls, v, values):
         if values.data["type"] == "multinomial":
             for i in v:
                 if not isinstance(i, list):
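The core fix of this commit is visible here: in Pydantic v2, `@field_validator` implicitly turns the decorated function into a classmethod, so `cls` (not `self`) is the correct first argument, and the second extra parameter (named `values` in this file) is a `ValidationInfo` object whose `.data` holds the fields already validated earlier in the class. A minimal, self-contained sketch of the same pattern; the class and the sum check below are illustrative, not code from this PR:

    # Illustrative only: shows why cls is correct in a pydantic v2 validator.
    from pydantic import BaseModel, field_validator

    class Dist(BaseModel):
        type: str = "multinomial"
        parameters: list[float] = [0.1, 0.9]

        @field_validator("parameters")
        def _check_parameters(cls, v, values):
            # @field_validator applies @classmethod under the hood, so the
            # first argument is the class itself, never an instance.
            if values.data["type"] == "multinomial" and abs(sum(v) - 1.0) > 1e-9:
                raise ValueError("multinomial parameters must sum to 1")
            return v

    print(Dist(parameters=[0.3, 0.7]).parameters)  # [0.3, 0.7]

A linter that does not know about this decorator will flag `cls` on an apparently ordinary method, which is consistent with the "wrong lint roller" commit message.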
@@ -38,8 +37,7 @@ def _convert_parameters(self, v, values):
 
 
 class SteeringParams(BaseModel):
-    """
-    Base class for steering parameters.
+    """Base class for steering parameters.
     These are the parameters used within each step of the model.
     """
     edata: list[str] = ["all"]
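The docstring edits in this hunk (and in the Config and from_yaml hunks below) move the summary onto the opening `"""` line, matching the pydocstyle "multi-line summary starts at the first line" convention (D212); this appears to be the part of the lint pass that was kept rather than reverted.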
@@ -72,19 +70,19 @@ class SteeringParams(BaseModel):
     data_collection_list: list[int] | None = None
 
     @field_validator("adapt_m")
-    def _convert_adapt_m(self, v):
+    def _convert_adapt_m(cls, v):
         return torch.tensor(v)
 
     @field_validator("adapt_cost")
-    def _convert_adapt_cost(self, v):
+    def _convert_adapt_cost(cls, v):
         return torch.tensor(v)
 
     @field_validator("tech_gamma")
-    def _convert_tech_gamma(self, v):
+    def _convert_tech_gamma(cls, v):
         return torch.tensor(v)
 
     @field_validator("tech_cost")
-    def _convert_tech_cost(self, v):
+    def _convert_tech_cost(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
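All four validators follow one pattern: the field is declared as a plain list (easy to write in a YAML config) and the validator converts it to a `torch.Tensor` after validation. The `validate_default = True` setting referenced by the trailing comment matters here, because pydantic does not run validators on default values unless asked to. A condensed sketch of the pattern, with an illustrative field name:

    # Condensed sketch (requires pydantic>=2 and torch; field name illustrative).
    import torch
    from pydantic import BaseModel, ConfigDict, field_validator

    class Params(BaseModel):
        adapt_m: list[float] = [0.0, 0.5, 0.9]

        @field_validator("adapt_m")
        def _convert_adapt_m(cls, v):
            return torch.tensor(v)

        # Without validate_default=True the default above would stay a plain
        # list, because validators only run on explicitly provided values.
        model_config = ConfigDict(validate_default=True)

    print(type(Params().adapt_m))  # <class 'torch.Tensor'>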
@@ -106,7 +104,7 @@ class AlphaDist(BaseModel):
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
@@ -121,7 +119,7 @@ class CapitalDist(BaseModel):
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
@@ -136,7 +134,7 @@ class LambdaDist(BaseModel):
     decimals: int | None = 1
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
@@ -151,7 +149,7 @@ class SigmaDist(BaseModel):
     decimals: int | None = 1
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
@@ -166,7 +164,7 @@ class TechnologyDist(BaseModel):
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return v if None in v else torch.tensor(v)
 
     # Make sure pydantic validates the default values
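TechnologyDist is the one exception to the straight list-to-tensor conversion: its parameters may contain None entries, which torch.tensor cannot represent, so such lists are passed through unchanged.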
@@ -181,7 +179,7 @@ class AThetaDist(BaseModel):
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
@@ -196,23 +194,19 @@ class SensitivityDist(BaseModel):
     decimals: int | None = None
 
     @field_validator("parameters")
-    def _convert_parameters(self, v):
+    def _convert_parameters(cls, v):
         return torch.tensor(v)
 
     # Make sure pydantic validates the default values
     model_config = ConfigDict(validate_default = True)
 
 
 class Config(BaseModel):
-    """
-    Base class for configuration parameters.
+    """Base class for configuration parameters.
     These are the parameters used by the overarching process.
     """
-    # because pydantic does not like underscores:
-    model_identifier: str = Field("test", alias='_model_identifier')
-    # Never used to influence processing. This value is meant purely to add a
-    # description to identify a parameter setting:
-    description: str = ""
+    model_identifier: str = Field("test", alias='_model_identifier') # because pydantic does not like underscores
+    description: str = "" # Never used to influence processing. This value is meant purely to add a description to identify a parameter setting.
     device: str = "cpu"
     seed: int = 42
     number_agents: PositiveInt = 100
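The `alias='_model_identifier'` works around the fact that pydantic reserves leading-underscore attribute names for private attributes: the field is declared without the underscore but can still be populated from configs that use the underscored key. A small illustrative sketch of how the alias behaves (pydantic v2 assumed):

    # Illustrative sketch of the alias workaround (pydantic>=2).
    from pydantic import BaseModel, ConfigDict, Field

    class Cfg(BaseModel):
        # protected_namespaces=() silences pydantic's warning about field
        # names that start with "model_".
        model_config = ConfigDict(protected_namespaces=())
        model_identifier: str = Field("test", alias="_model_identifier")

    print(Cfg().model_identifier)                           # test
    print(Cfg(_model_identifier="run-7").model_identifier)  # run-7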
@@ -244,8 +238,8 @@ class Config(BaseModel):
 
     @classmethod
     def from_yaml(cls, config_file):
-        """
-        Read configs from a config.yaml file.
+        """Read configs from a config.yaml file.
+
         If key is not found in config.yaml, the default value is used.
         """
         if not Path(config_file).exists():
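The body of `from_yaml` is collapsed in this view, so the following is only a hedged sketch of the documented behaviour (read the YAML file, fall back to the field defaults for missing keys). The docstring and the `Path` existence check are from the diff; the `yaml` usage and the error handling are assumptions, not the PR's actual code:

    # Hedged sketch; assumes PyYAML is installed.
    from pathlib import Path

    import yaml
    from pydantic import BaseModel

    class Config(BaseModel):
        device: str = "cpu"
        seed: int = 42

        @classmethod
        def from_yaml(cls, config_file):
            """Read configs from a config.yaml file.

            If key is not found in config.yaml, the default value is used.
            """
            if not Path(config_file).exists():
                raise FileNotFoundError(config_file)
            with open(config_file, encoding="utf-8") as f:
                data = yaml.safe_load(f) or {}
            # Missing keys fall back to the pydantic field defaults.
            return cls(**data)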