
Codebase for the Circular Economy Lifecycle Assessment and VIsualization (CELAVI) modeling framework.

Home Page: https://nrel.github.io/celavi/

License: GNU General Public License v3.0

Python 100.00%

celavi's Introduction

CELAVI and Circularity Futures

A circular economy emphasizes the efficient use of all resources (e.g., materials, land, water). Despite anticipated overall benefits to society, the transition to a circular economy is likely to create regional differences in impacts. Current tools are unable to fully evaluate these potential externalities, which will be important for informing research prioritization and regional decision making.

The Circular Economy Lifecycle Assessment and VIsualization (CELAVI) framework allows stakeholders to quantify and visualize potential regional and sectoral transfers of impacts that could result from transitioning to a circular economy, with a particular focus on energy materials. The framework uses system dynamics to model material flows for multiple circular economy pathways; pathway decisions are based on learning-by-doing and are implemented via the cost and strategic value of the different circular economy pathways. It uses network theory to track the spatial and sectoral flow of functional units across a graph, and discrete event simulation to step through time and evaluate lifecycle assessment data at each time step. The framework is designed to be flexible and scalable to accommodate multiple energy materials and multiple energy technologies. The primary goal of CELAVI is to help answer questions about how material flows and the environmental and economic impacts of energy systems might change if the circularity of energy systems increases.

Branch Structure and Management

This repo contains code development for the CELAVI codebase (local cost minimization determines material flows) and the Circularity Futures codebase (material flows determine local and system costs). Our long-term goal is to merge these two decision-making functionalities into a single unified model, but for now please consider these two codebases as separate.

  • For CELAVI development, see master, develop, and doc-pages, as well as any issue or bug fix branches off of these. Releases up through v1.3.2 correspond to CELAVI.
  • For Circularity Futures development, see dev-circfutures and issue/bug fix branches off of this branch ONLY. DO NOT merge any Circularity Futures development, issue, or bug fix branches back into master, as this will overwrite the latest working version of CELAVI.

Please maintain good repo hygiene by creating one branch per Git issue, using consistent branch naming schemes, tagging issues in commit messages, and writing explanatory commit messages.

macOS and Windows

On macOS, use Terminal to type the commands. On Windows, use the Anaconda Prompt. Before typing the commands, you will need to use the cd command (which does the same thing on macOS and Windows) to navigate to the root of the cloned repository. Most of the commands work the same on macOS and Windows; where they differ, this documentation calls those differences out.

Installation

This installation assumes you are using conda to create virtual environments. CELAVI requires packages from pip and conda, so there are two steps to the installation. These installation commands only need to be executed once, before CELAVI is run for the first time.

Installation step 1: pip install

From the command prompt, within the folder of your cloned repository, execute the following commands (these only need to be run once, at installation time):

conda create -n celavi python=3.8
conda activate celavi
pip install -e .

Running the package

From the root of the repo, type a command similar to the following. This will execute the costgraph. Note that the paths to the files will need to be changed for your particular folder structure.

python -m celavi --data [your path to the CELAVI data folder] --config [your path to the config file (yaml) in the CELAVI data folder]
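
For example, with hypothetical paths (assuming the data repository is cloned at ~/celavi-data and the config file is named config.yaml):

python -m celavi --data ~/celavi-data --config ~/celavi-data/config.yaml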

Guide for development

Code formatting and type checking

To ensure code consistency, we use MyPy for type checking and Black for code formatting. This table lists some information on these packages and how they are set up:

Package | What it does | Configuration file | URL for more information
MyPy | Provides optional type checking for variables in the code, to reduce errors that arise from type mismatches | mypy.ini | http://mypy-lang.org/
Black | Ensures code is consistently formatted | pyproject.toml | https://black.readthedocs.io/en/stable/

Manually executing code formatting and type checking

From the root of the repo, run the following commands

mypy celavi
black celavi --exclude sd_model.py

(Note: The --exclude option will ignore the automatically generated SD model from PySD)

If all checks pass, you will get status messages similar to the following:

The message from MyPy:

Success: no issues found in 1 source file

The message from Black:

All done! ✨ 🍰 ✨
1 file left unchanged.

Black has the useful feature that, if it finds a non-compliant file, it will fail with an error but will also reformat the file for you.

The first run of MyPy can be slow, since it needs to parse the files and put them into a cache for faster subsequent type checking.

Docstrings

To build the documentation from the docstrings in the code, run the following commands from the root of the repo.

On macOS:

cd docs
make html

On Windows:

cd docs
make.bat html

After these commands complete, the documentation can be found at docs/_build/html/index.html

Testing

This project uses pytest as the testing framework. To run all the tests, execute the following command from the root of the repo.

pytest celavi/tests

celavi's People

Contributors

akey7, eberlea, jwalzberg, rjhanes, tghoshnrel, tjlca


celavi's Issues

Float cannot be interpreted as an int

Problem

When I execute the celavi develop branch, I get the following error:

Traceback (most recent call last):
  File "/Users/akey/Projects/celavi/celavi/celavi/__main__.py", line 299, in <module>
    context = Context(
  File "/Users/akey/Projects/celavi/celavi/celavi/des.py", line 116, in __init__
    self.count_facility_inventories[step_facility_id] = FacilityInventory(
  File "/Users/akey/Projects/celavi/celavi/celavi/inventory.py", line 70, in __init__
    for _ in range(timesteps):
TypeError: 'float' object cannot be interpreted as an integer

I have isolated this to an issue where the number of timesteps is computed as a float rather than an integer, so it cannot be passed to range().

Recommended action:

Round the timesteps up to the next highest integer to ensure arrays can accommodate all timesteps.
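
A minimal sketch of the proposed fix, assuming the float value comes from the config-derived timestep count (the variable names and values are illustrative):

import math

timesteps = 600.5  # illustrative float, e.g., (end_year - start_year) * timesteps_per_year
timesteps = math.ceil(timesteps)  # round up so range() receives an int
inventory = [0] * timesteps  # arrays can now accommodate every timestep
for _ in range(timesteps):
    pass  # per-timestep setup runs without the TypeError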

Expose melted dataframes from diagnostic viz to write as csvs

  • Return melted dataframes from the plotting method
  • Write csvs from melted dataframes at end of each, rather than the wide dataframes.
  • Correct extra space in logging statement 2021-11-03 11:06:28.257875 In While loop pylca interface
  • Get rid of hard-coded blade_mass.csv filename in __main__.py

Output blade counts as a csv for data visualization

To facilitate data visualization, output the blade counts over each timestep for all FacilityInventories to a .csv file. See the comment attached to this ticket for a description of table format

  • Rename class from DiagnosticViz to DiagnosticVizAndDataframe
  • Memoize gather_cumulative_histories() (see the sketch after this list)
  • Write the resulting csv to the output folder.
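
A minimal memoization sketch for gather_cumulative_histories(), assuming its result does not change between calls within a run; the cache attribute and the placeholder computation are hypothetical:

class DiagnosticViz:
    def __init__(self):
        self._cumulative_histories = None  # hypothetical cache attribute

    def gather_cumulative_histories(self):
        # Compute once; return the cached result on subsequent calls.
        if self._cumulative_histories is None:
            self._cumulative_histories = {"blade_count": []}  # placeholder for the real aggregation
        return self._cumulative_histories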

placeholder for any ideas for speeding up model runs

General ideas

  • Reduce the number of calculations by identifying and avoiding duplicate operations
  • Reduce problem complexity at the Router and/or CostGraph level - see #186
  • Run the DES at the level of three components instead of individual components - that is, at the level of one turbine's worth of blades instead of one blade; see issue #63 for progress on this item
  • Make pathway selections by facility_id and store them, to avoid finding shortest pathways for every blade, which requires many calls to CostGraph methods (see the sketch after this list)
  • Run CostGraph and pylca every 2 or 5 years instead of annually. CostGraph has a config parameter that controls the run frequency; pylca needs this functionality added.
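
A minimal caching sketch for the per-facility pathway selection idea above; the shortest_paths call and its signature are assumptions:

# Hypothetical sketch: cache one pathway selection per facility_id so repeated
# blades at the same facility reuse it instead of re-calling CostGraph methods.
pathway_cache = {}

def select_pathway(cost_graph, facility_id, source_node):
    if facility_id not in pathway_cache:
        pathway_cache[facility_id] = cost_graph.shortest_paths(source_node)
    return pathway_cache[facility_id]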

Use config file to set input/output file locations and parameters in LCIA calculations

The goal is to have the LCIA calculations set up similarly to the DES and CostGraph, with all file locations, directories, and parameters set through the Config file and processed in main.py. Currently a lot of the LCIA filenames etc. are hardcoded.

Specific tasks that I see (incomplete list):

  • Remove os.chdir call on line 159 of main.py - it will be unnecessary when the following tasks are completed
  • Replace hard-coded input filenames with filenames pulled from config (such as line 34 in pylca_celavi_background_postprocess.py)
  • Replace hard-coded output filenames with filenames pulled from config (such as line 150 in des_interface.py) - for the output files, also make sure the files are being saved to the outputs directory and not the pylca_celavi_data directory

To keep the config file in a consistent format, I recommend adding filenames to the input_filenames and output_filenames groups already in the config file. If there are additional parameters, another group called lcia_parameters or similar can be added to the file.
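
A minimal sketch of the pattern, assuming config is the parsed yaml file; the lcia_shortcut_db key is hypothetical and stands in for whichever filenames get added to input_filenames:

import os
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

# Resolve an LCIA input path from the config instead of hardcoding it.
lci_dir = config['data_directories']['lci']
lcia_input = os.path.join(lci_dir, config['input_filenames']['lcia_shortcut_db'])  # hypothetical key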

Isolation of transportation tracker for HPC speed debugging

Issue explored - cause of the significant slowdown for HPC runs.

How - Starting with the Aug 18th repo version, the data and the code repo were updated step by step to isolate the transportation tracker as the main cause of the slowdown.

The latest version of the code without the transportation tracker is housed in this branch.

Differences -

Data repository - The version of the data repository that is currently working is this. Due to issues in the current commit to the data repo, this version was used in the HPC runs. It's from Sep 8, before the harmonization commits.

Code repository -

The differences in the code repository are highlighted in this pull request.

  1. main.py has no significant changes - only minor ones. (Please ignore.)
  2. costgraph.py has changes. Not sure if this is related to the transportation tracker. Using this version without the transportation tracker files and commits resulted in runtime errors, so I am guessing it's linked. @rjhanes Need your help on this.
  3. des.py and component.py have major changes - the transportation tracker. @akey7 Need your help on this.

Way forward -
This is my suggestion; please feel free to suggest different approaches:

  • Update the hpc_speed_debug branch with the addition of the transportation tracker, step by step, so that we can isolate the slowdown.
  • Have a session where we work on this. I would not recommend a live session, because re-initializing CostGraph and running takes time. However, CO and OH (a combination of states), which I found to have few turbines and all possible facility types, might be a good dataset for debugging this issue if we do have a live session.

Need error handling in pathway cost recording

If one of the sc_end facility type values is not present, skip recording the pathway costs.

See starting at line 318 in CostGraph:

            for i in self.sc_end:
                self.pathway_cost_history.append(
                    {
                        'year' : self.year,
                        'facility_id' : self.supply_chain.nodes[source]['facility_id'],
                        'region_id_1' : self.loc_df[
                            self.loc_df.facility_id == self.supply_chain.nodes[source]['facility_id']
                        ].region_id_1.values[0],
                        'region_id_2' : self.loc_df[
                            self.loc_df.facility_id == self.supply_chain.nodes[source]['facility_id']
                        ].region_id_2.values[0],
                        'region_id_3' : self.loc_df[
                            self.loc_df.facility_id == self.supply_chain.nodes[source]['facility_id']
                        ].region_id_3.values[0],
                        'region_id_4' : self.loc_df[
                            self.loc_df.facility_id == self.supply_chain.nodes[source]['facility_id']
                        ].region_id_4.values[0],
                        'eol_pathway_type' : i,
                        'eol_pathway_dist' : min(
                            [value for key,value in subdict.items()
                             if i in key]
                        ),
                        'bol_pathway_dist' :
                            nx.shortest_path_length(
                                self.supply_chain,
                                source='manufacturing_' + str(
                                    self.find_upstream_neighbor(
                                        node_id=int(str(source).split('_')[1]
                                                    ),
                                        crit='cost'
                                    )
                                ),
                                target=str(source),
                                weight='cost',
                                method='bellman-ford'
                            )
                    }
                )

Before appending, check that paths to i exist.
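
A minimal guard sketch, assuming subdict maps pathway keys to distances as in the loop above; the helper function and its arguments are hypothetical:

def record_pathway_costs(sc_end, subdict, append_record):
    # Only record costs for pathway types that actually have a path,
    # instead of letting min() fail on an empty sequence.
    for i in sc_end:
        matching = [value for key, value in subdict.items() if i in key]
        if not matching:
            continue  # no path to this facility type; skip this record
        append_record(i, min(matching))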

add non-cost criteria to the pathway decision model in CostGraph

We want to include:

  • Max distance cutoff - the parameter max_dist in CostGraph already exists. Pathways with more than max_dist km between facilities should get a cost penalty assigned so they are not chosen. MIGRATED TO ISSUE #191
  • Life cycle impact(s) passed into CostGraph from the DES
    • Users will need to define which impact(s) to use for the decision.
  • Single aggregated criterion based on weighted and normalized cost, distance, impact, and other criteria of interest.
    • This will require some additional CostGraph parameters, either named or kwargs, for the weights.
    • This method can be used to combine life cycle impacts into one indicator. There doesn't need to be a separate method for deciding based on multiple life cycle impacts.
    • The normalization might be done locally or with a standard algorithm that doesn't change with the CostGraph instance.

Also in this issue:

  • Replace the Dijkstra shortest-path algorithm, which cannot handle negative pathway costs, with the Bellman-Ford shortest-path algorithm, which can. (The Johnson algorithm also handles negative weights, but it does not have the single-source methods needed for CostGraph.)
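
A minimal networkx sketch of the single-source Bellman-Ford call, which tolerates negative edge weights as long as there are no negative cycles; the node names and costs are illustrative:

import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from(
    [('manufacturing_1', 'landfilling_2', 5.0),
     ('manufacturing_1', 'next use_3', -2.0)],  # negative cost: a revenue-generating pathway
    weight='cost',
)
# Bellman-Ford handles the negative weight, where Dijkstra cannot.
dists = nx.single_source_bellman_ford_path_length(G, 'manufacturing_1', weight='cost')
print(dists)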

add memory tracker/monitor to CostGraph

Keep track of memory being taken up by the network and maybe add some functionality to terminate with an error if X memory is exceeded. X should be a parameter that users can set, for execution on different machines / nodes.
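
A minimal monitoring sketch, assuming a POSIX system (the resource module is not available on Windows); max_memory_kb stands in for the proposed user-set parameter X:

import resource

def check_memory(max_memory_kb):
    # ru_maxrss is the process's peak resident set size, reported in
    # kilobytes on Linux (bytes on macOS).
    used_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if used_kb > max_memory_kb:
        raise MemoryError(f'CostGraph exceeded memory cap: {used_kb} kB > {max_memory_kb} kB')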

optimized LCIA model returns all-zero scaling vector for transportation final demand ... sometimes

Summary

The first time in a run with issue44 code on medium-data (see the next section for the exact run specification) that the LCIA model has transportation in the final demand vector, the solver finds an optimal solution where the scaling vector is all zeros. I tested reproducibility by running the develop code branch and did not see this error at the model year where it occurred with the issue44 code; after I switched back to the issue44 branch, I saw the error again, but 15 model years later. To get around the error, I added some checking that returns an empty DataFrame in the correct format and prints a warning. After the run completed, that warning showed up only once in the entire run.

I determined the source of the error by tracing back the original error (a ValueError on line 181 of pylca_opt_foreground.py) to the output of line 110 in pylca_opt_foreground.py (solver_optimization method):

    solution = pyomo_postprocess(None, model, results)

solution is all zeros. I double-checked model directly and found the same:

(Pdb) model.s.extract_values()
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0, 20: 0.0, 21: 0.0, 22: 0.0, 23: 0.0, 24: 0.0, 25: 0.0, 26: 0.0, 27: 0.0, 28: 0.0, 29: 0.0, 30: 0.0, 31: 0.0, 32: 0.0, 33: 0.0, 34: 0.0, 35: 0.0}

However, according to opt.solve(model), the model optimized correctly:

(Pdb) opt.solve(model)
{'Problem': [{'Name': 'unknown', 'Lower bound': 0.0, 'Upper bound': 0.0, 'Number of objectives': 1, 'Number of constraints': 37, 'Number of variables': 37, 'Number of nonzeros': 88, 'Sense': 'minimize'}], 'Solver': [{'Status': 'ok', 'Termination condition': 'optimal', 'Statistics': {'Branch and bound': {'Number of bounded subproblems': 0, 'Number of created subproblems': 0}}, 'Error rc': 0, 'Time': 0.050501346588134766}], 'Solution': [OrderedDict([('number of solutions', 0), ('number of solutions displayed', 0)])]}

The final demand vector contains only transportation:

(Pdb) F
0     0.000000
1     0.000000
2     0.000000
3     0.000000
4     0.000000
5     0.000000
6     0.000000
7     0.000000
8     0.000000
9     0.000000
10    0.000000
11    0.000000
12    0.000000
13    0.000000
14    0.000000
15    0.000000
16    0.000000
17    0.000000
18    0.000000
19    0.000000
20    0.000000
21    0.000000
22    0.000000
23    0.000000
24    0.000000
25    0.000000
26    0.000000
27    0.000000
28    0.000000
29    0.000000
30    0.000000
31    0.000000
32    0.000000
33    0.000000
34    0.000000
35    0.000697
Name: flow quantity, dtype: float64

When I saw the error again, the final demand value was different but the vector again contained only transportation.

The inputs to solver_optimization that most recently produced an all-zero scaling vector are as follows:
  • tech_matrix - attached as tech_matrix.xlsx
  • F - attached as F.xlsx
  • process:

['Acrylonitrile, at plant', 'Crude oil, extracted', 'Diesel, combusted in industrial boiler', 'Liquefied petroleum gas, combusted in industrial boiler', 'MMA, at plant', 'PAN, at plant', 'Portland cement, at plant', 'Propene, at plant', 'calcium carbonate transportation', 'calcium carbonate, at mine', 'carbon fiber reinforced polymer, at plant', 'carbon fiber, at plant', 'cement transportation', 'coal, at mine', 'coal, combusted in boiler', 'concrete, in use', 'electricity', 'epoxy, at plant', 'epoxy, supply', 'gasoline, combusted in boiler', 'glass fiber reinforced polymer, at plant', 'glass fiber reinforced polymer, coarse grinding', 'glass fiber reinforced polymer, coarse grinding onsite', 'glass fiber reinforced polymer, fine grinding', 'glass fiber reinforced polymer, landfilling', 'glass fiber reinforced polymer, rotor teardown', 'glass fiber reinforced polymer, segmenting', 'glass fiber, at plant', 'iron ore, resource', 'kaolin, supply', 'lime, at mine', 'natural gas, combusted in boiler', 'residual oil, combusted in boiler', 'sand and gravel, supply', 'steel, at plant', 'transportation, Transportation']

  • df_with_all_other_flows - attached as df_with_all_other_flows.xlsx

Run specs

Commit abdc415 on issue44 branch (contains latest updates from develop and additional functionality for issue #44 and issue #3) with medium-data branch at fd10bf8 and the config file below. I checked that the routes file is up to date and updated the CostGraph pickle just prior to this run, hence False values for run_routes and initialize_costgraph. The medium-data branch also has the updated foreground_process_inventory.csv file that has the transportation processes in it.

flags:
  compute_locations     : True  # if compute_locations is enabled (True), compute locations from raw input files (e.g., LMOP, US Wind Turbine Database)
  generate_step_costs   : True # set to False if supply chain costs for a facility type vary regionally
  run_routes            : False  # if run_routes is enabled (True), compute routing distances between all input locations
  use_computed_routes   : True  # if use_computed_routes is enabled, read in a pre-assembled routes file instead of generating a new one
  initialize_costgraph  : False  # create cost graph fresh or use an imported version
  enable_data_filtering : False  # If true, dataset will be filtered to the states below
  pickle_costgraph      : True  # save the newly initialized costgraph as a pickle file
  use_fixed_lifetime    : True # set to False to use Weibull distribution for lifetimes

scenario_parameters:
    start_year: 2000.0
    end_year: 2050.0
    timesteps_per_year: 12
    max_dist: 300 #km
    # If you specify enable_data_filtering = True above, you need to list the states to filter here.
    # Default behavior is not to pass any states through the filter.
    # If enable_data_filtering is False, this list is ignored.
    states_to_filter:
    - IA

data_directories:
    inputs: inputs/
    raw_locations: inputs/raw_location_data/
    us_roads: inputs/precomputed_us_road_network/
    preprocessing_output: preprocessing/
    lookup_tables: lookup_tables/
    lci: pylca_celavi_data/
    outputs: outputs/
    routing_output: preprocessing/routing_intermediate_files/

input_filenames:
    locs: locations_computed.csv
    step_costs: step_costs.csv
    fac_edges: fac_edges.csv
    transpo_edges: transpo_edges.csv
    route_pairs: route_pairs.csv
    avg_blade_masses: avgblademass.csv
    routes_custom: routes.csv
    routes_computed: routes_computed.csv
    transportation_graph: transportation_graph.csv
    node_locs: node_locations.csv
    power_plant_locs: uswtdb_v4_1_20210721.csv
    landfill_locs: landfilllmopdata.csv
    other_facility_locs: other_facility_locations_all_us.csv
    standard_scenario: StScen20A_MidCase_annual_state.csv
    lookup_facility_type: facility_type.csv
    lookup_step_costs: step_costs_default.csv
    turbine_data: number_of_turbines.csv

output_filenames:
    costgraph_pickle: netw.obj
    costgraph_csv: netw.csv
    
costgraph_parameters:
    sc_begin: manufacturing
    sc_end: 
    - landfilling
    - cement co-processing
    - next use
    cg_verbose: 2
    save_cg_csv: True
    finegrind_cumul_initial: 1.0
    finegrind_initial_cost: 161.0
    finegrind_revenue: 262.0
    finegrind_learnrate: -0.05
    finegrind_material_loss: 0.3
    coarsegrind_cumul_initial: 1.0
    coarsegrind_initial_cost: 122.0
    coarsegrind_learnrate: -0.05
    cg_update_timesteps: 12
# coprocessing revenue at default value

discrete_event_parameters:
    component_list:
    - nacelle
    - blade
    - tower
    - foundation
    seed: 13
    min_lifespan: 120 # Units: timesteps
    blade_weibull_L: 240
    blade_weibull_K: 2.2
    component_fixed_lifetimes:
      nacelle : 30
      blade : 20
      foundation : 50
      tower : 50

Notes on run specs

This error does NOT show up when running the develop branch at f771969 on the medium-data branch at the same commit, same config file.

Investigation (in chronological order)

The runner method in pylca_opt_foreground.py is returning an empty DataFrame for one LCIA calculation that involves only transportation. I detected the error because the code was throwing a ValueError on line 181 in pylca_opt_foreground, where res has its column names assigned. I put a set_trace just after line 180 in pylca_opt_foreground, inside an if statement so it only stopped the code when the ValueError was about to occur:

        res = runner(tech_matrix,F,yr,fac_id,stage,material,100000,process,df_with_all_other_flows)
        if len(res.columns) != 7:
            pdb.set_trace()
        res.columns = ['flow name','unit','flow quantity','year','facility_id','stage','material']

After putting in the set_trace, I found that the res value causing the error had only 3 columns and was empty. Attempting to change the column names with a list of length 7 was causing the ValueError.

I then investigated the values being passed to runner. I didn't see anything obviously wrong with the final demand numbers or the other arguments.

614 - 2022 - landfilling - glass fiber reinforced polymer shortcut calculations done
623 - 2022 - landfilling - glass fiber reinforced polymer shortcut calculations done
> c:\users\rhanes\github\celavi\celavi\pylca_celavi\pylca_opt_foreground.py(183)model_celavi_lci()
-> res.columns = ['flow name','unit','flow quantity','year','facility_id','stage','material']
(Pdb) res.columns
Index(['product', 'unit', 'value'], dtype='object')
(Pdb) res
Empty DataFrame
Columns: [product, unit, value]
Index: []
(Pdb) F
0     0.000000
1     0.000000
2     0.000000
3     0.000000
4     0.000000
5     0.000000
6     0.000000
7     0.000000
8     0.000000
9     0.000000
10    0.000000
11    0.000000
12    0.000000
13    0.000000
14    0.000000
15    0.000000
16    0.000000
17    0.000000
18    0.000000
19    0.000000
20    0.000000
21    0.000000
22    0.000000
23    0.000000
24    0.000000
25    0.000000
26    0.000000
27    0.000000
28    0.000000
29    0.000000
30    0.000000
31    0.000000
32    0.000000
33    0.000000
34    0.000000
35    0.000697
Name: flow quantity, dtype: float64
(Pdb) final_dem
                                                    0  ... flow quantity
0                                       Acrylonitrile  ...      0.000000
1                                                 MMA  ...      0.000000
2                                                 PAN  ...      0.000000
3                                   calcium carbonate  ...      0.000000
4                                        carbon fiber  ...      0.000000
5                     carbon fiber reinforced polymer  ...      0.000000
6                                    cement transport  ...      0.000000
7                                 cement_conventional  ...      0.000000
8                                                coal  ...      0.000000
9                                           coal, raw  ...      0.000000
10                                   concrete, in use  ...      0.000000
11                                          crude oil  ...      0.000000
12                                             diesel  ...      0.000000
13                                        electricity  ...      0.000000
14                                              epoxy  ...      0.000000
15                                      epoxy, supply  ...      0.000000
16                                           gasoline  ...      0.000000
17                                        glass fiber  ...      0.000000
18    glass fiber reinforced polymer, coarse grinding  ...      0.000000
19  glass fiber reinforced polymer, coarse grindin...  ...      0.000000
20      glass fiber reinforced polymer, fine grinding  ...      0.000000
21        glass fiber reinforced polymer, landfilling  ...      0.000000
22      glass fiber reinforced polymer, manufacturing  ...      0.000000
23     glass fiber reinforced polymer, rotor teardown  ...      0.000000
24         glass fiber reinforced polymer, segmenting  ...      0.000000
25                                           iron ore  ...      0.000000
26                                             kaolin  ...      0.000000
27                                               lime  ...      0.000000
28                                     lime transport  ...      0.000000
29                            liquefied petroleum gas  ...      0.000000
30                                        natural gas  ...      0.000000
31                                            propene  ...      0.000000
32                                       residual oil  ...      0.000000
33                                    sand and gravel  ...      0.000000
34                                              steel  ...      0.000000
35                     transportation, Transportation  ...     69.692424

[36 rows x 3 columns]
(Pdb) fac_id
'614'
(Pdb) stage
'Transportation'
(Pdb) material
'transportation'
(Pdb) process
['Acrylonitrile, at plant', 'Crude oil, extracted', 'Diesel, combusted in industrial boiler', 'Liquefied petroleum gas, combusted in industrial boiler', 'MMA, at plant', 'PAN, at plant', 'Portland cement, at plant', 'Propene, at plant', 'calcium carbonate transportation', 'calcium carbonate, at mine', 'carbon fiber reinforced polymer, at plant', 'carbon fiber, at plant', 'cement transportation', 'coal, at mine', 'coal, combusted in boiler', 'concrete, in use', 'electricity', 'epoxy, at plant', 'epoxy, supply', 'gasoline, combusted in boiler', 'glass fiber reinforced polymer, at plant', 'glass fiber reinforced polymer, coarse grinding', 'glass fiber reinforced polymer, coarse grinding onsite', 'glass fiber reinforced polymer, fine grinding', 'glass fiber reinforced polymer, landfilling', 'glass fiber reinforced polymer, rotor teardown', 'glass fiber reinforced polymer, segmenting', 'glass fiber, at plant', 'iron ore, resource', 'kaolin, supply', 'lime, at mine', 'natural gas, combusted in boiler', 'residual oil, combusted in boiler', 'sand and gravel, supply', 'steel, at plant', 'transportation, Transportation']
(Pdb) f_d
                        flow name  flow quantity
0  transportation, Transportation      69.692424
(Pdb) yr
2022

On the next run (same code and data commits and config parameters), I deleted the lca_db.csv file and added another set_trace after line 147 in runner:

    res.to_csv('intermediate_demand.csv',mode='a', header=False,index = False)
    if res.empty: pdb.set_trace()
    return res

It turns out that the output from the solver_optimization method is the empty DataFrame. I checked the arguments sent to this method and nothing looked obviously wrong:

(Pdb) tech_matrix
process                                             Acrylonitrile, at plant  ...  transportation, Transportation
product                                                                      ...                                
Acrylonitrile                                                         1.000  ...                             0.0
MMA                                                                   0.000  ...                             0.0
PAN                                                                   0.000  ...                             0.0
calcium carbonate                                                     0.000  ...                             0.0
carbon fiber                                                          0.000  ...                             0.0
carbon fiber reinforced polymer                                       0.000  ...                             0.0
cement transport                                                      0.000  ...                             0.0
cement_conventional                                                   0.000  ...                             0.0
coal                                                                 -0.019  ...                             0.0
coal, raw                                                             0.000  ...                             0.0
concrete, in use                                                      0.000  ...                             0.0
crude oil                                                             0.000  ...                             0.0
diesel                                                                0.000  ...                             0.0
electricity                                                          -0.111  ...                             0.0
epoxy                                                                 0.000  ...                             0.0
epoxy, supply                                                         0.000  ...                             0.0
gasoline                                                              0.000  ...                             0.0
glass fiber                                                           0.000  ...                             0.0
glass fiber reinforced polymer, coarse grinding                       0.000  ...                             0.0
glass fiber reinforced polymer, coarse grinding...                    0.000  ...                             0.0
glass fiber reinforced polymer, fine grinding                         0.000  ...                             0.0
glass fiber reinforced polymer, landfilling                           0.000  ...                             0.0
glass fiber reinforced polymer, manufacturing                         0.000  ...                             0.0
glass fiber reinforced polymer, rotor teardown                        0.000  ...                             0.0
glass fiber reinforced polymer, segmenting                            0.000  ...                             0.0
iron ore                                                              0.000  ...                             0.0
kaolin                                                                0.000  ...                             0.0
lime                                                                  0.000  ...                             0.0
lime transport                                                        0.000  ...                             0.0
liquefied petroleum gas                                               0.000  ...                             0.0
natural gas                                                           0.000  ...                             0.0
propene                                                              -1.116  ...                             0.0
residual oil                                                          0.000  ...                             0.0
sand and gravel                                                       0.000  ...                             0.0
steel                                                                 0.000  ...                             0.0
transportation, Transportation                                        0.000  ...                             1.0

[36 rows x 36 columns]
(Pdb) F
0     0.000000
1     0.000000
2     0.000000
3     0.000000
4     0.000000
5     0.000000
6     0.000000
7     0.000000
8     0.000000
9     0.000000
10    0.000000
11    0.000000
12    0.000000
13    0.000000
14    0.000000
15    0.000000
16    0.000000
17    0.000000
18    0.000000
19    0.000000
20    0.000000
21    0.000000
22    0.000000
23    0.000000
24    0.000000
25    0.000000
26    0.000000
27    0.000000
28    0.000000
29    0.000000
30    0.000000
31    0.000000
32    0.000000
33    0.000000
34    0.000000
35    0.000697
Name: flow quantity, dtype: float64
(Pdb) process
['Acrylonitrile, at plant', 'Crude oil, extracted', 'Diesel, combusted in industrial boiler', 'Liquefied petroleum gas, combusted in industrial boiler', 'MMA, at plant', 'PAN, at plant', 'Portland cement, at plant', 'Propene, at plant', 'calcium carbonate transportation', 'calcium carbonate, at mine', 'carbon fiber reinforced polymer, at plant', 'carbon fiber, at plant', 'cement transportation', 'coal, at mine', 'coal, combusted in boiler', 'concrete, in use', 'electricity', 'epoxy, at plant', 'epoxy, supply', 'gasoline, combusted in boiler', 'glass fiber reinforced polymer, at plant', 'glass fiber reinforced polymer, coarse grinding', 'glass fiber reinforced polymer, coarse grinding onsite', 'glass fiber reinforced polymer, fine grinding', 'glass fiber reinforced polymer, landfilling', 'glass fiber reinforced polymer, rotor teardown', 'glass fiber reinforced polymer, segmenting', 'glass fiber, at plant', 'iron ore, resource', 'kaolin, supply', 'lime, at mine', 'natural gas, combusted in boiler', 'residual oil, combusted in boiler', 'sand and gravel, supply', 'steel, at plant', 'transportation, Transportation']
(Pdb) df_with_all_other_flows
                                               process  ...                      stage
1                               carbon fiber, at plant  ...                 background
5                                        PAN, at plant  ...                 background
8                                        PAN, at plant  ...                 background
12                                       MMA, at plant  ...                 background
15                                       MMA, at plant  ...                 background
17                             Acrylonitrile, at plant  ...                 background
20                             Acrylonitrile, at plant  ...                 background
22                             Acrylonitrile, at plant  ...                 background
24                                   Propene, at plant  ...                 background
26                               glass fiber, at plant  ...                 background
31                               glass fiber, at plant  ...                 background
32                               glass fiber, at plant  ...                 background
33                               glass fiber, at plant  ...                 background
35                                     epoxy, at plant  ...                 background
37                                     epoxy, at plant  ...                 background
39                                     epoxy, at plant  ...                 background
40                                     epoxy, at plant  ...                 background
42                       gasoline, combusted in boiler  ...                 background
44   Liquefied petroleum gas, combusted in industri...  ...                 background
46                    natural gas, combusted in boiler  ...                 background
48                   residual oil, combusted in boiler  ...                 background
50                           coal, combusted in boiler  ...                 background
51                           coal, combusted in boiler  ...                 background
52                           coal, combusted in boiler  ...                 background
54                                  iron ore, resource  ...                 background
57                          calcium carbonate, at mine  ...                 background
59              Diesel, combusted in industrial boiler  ...                 background
61                                Crude oil, extracted  ...                 background
64                           Portland cement, at plant  ...                 background
66                             sand and gravel, supply  ...                 background
67                             sand and gravel, supply  ...                 background
68                             sand and gravel, supply  ...                 background
70                               cement transportation  ...                 background
71                               cement transportation  ...                 background
73                                      kaolin, supply  ...                 background
74                                      kaolin, supply  ...                 background
75                                      kaolin, supply  ...                 background
77                    calcium carbonate transportation  ...                 background
78                    calcium carbonate transportation  ...                 background
81                                       epoxy, supply  ...                 background
82                                       epoxy, supply  ...                 background
83                                       epoxy, supply  ...                 background
86                                       lime, at mine  ...                 background
96            glass fiber reinforced polymer, at plant  ...              manufacturing
119                     transportation, Transportation  ...             Transportation
131                                      coal, at mine  ...                 background
132                                      coal, at mine  ...                 background
144                                        electricity  ...  extraction and production
145                                        electricity  ...  extraction and production
146                                        electricity  ...  extraction and production
147                                        electricity  ...  extraction and production
148                                        electricity  ...  extraction and production
149                                        electricity  ...  extraction and production
150                                        electricity  ...  extraction and production
151                                        electricity  ...  extraction and production
153                                        electricity  ...  extraction and production
154                                        electricity  ...  extraction and production
155                                        electricity  ...  extraction and production

[58 rows x 9 columns] 

For the next run, I did not delete the lca_db.csv file (it didn't seem to make a difference) and added another set_trace after line 110 in solver_optimization:

    opt = SolverFactory("glpk")
    results = opt.solve(model)
    solution = pyomo_postprocess(None, model, results)
    pdb.set_trace()
    scaling_vector = pd.DataFrame()

Found that results is the following:

(Pdb) results
{'Problem': [{'Name': 'unknown', 'Lower bound': 0.0, 'Upper bound': 0.0, 'Number of objectives': 1, 'Number of constraints': 37, 'Number of variables': 37, 'Number of nonzeros': 88, 'Sense': 'minimize'}], 'Solver': [{'Status': 'ok', 'Termination condition': 'optimal', 'Statistics': {'Branch and bound': {'Number of bounded subproblems': 0, 'Number of created subproblems': 0}}, 'Error rc': 0, 'Time': 0.051374197006225586}], 'Solution': [OrderedDict([('number of solutions', 0), ('number of solutions displayed', 0)])]}

and solution (output of pyomo_postprocess method) is all zeros:

(Pdb) solution
      s
0   0.0
1   0.0
2   0.0
3   0.0
4   0.0
5   0.0
6   0.0
7   0.0
8   0.0
9   0.0
10  0.0
11  0.0
12  0.0
13  0.0
14  0.0
15  0.0
16  0.0
17  0.0
18  0.0
19  0.0
20  0.0
21  0.0
22  0.0
23  0.0
24  0.0
25  0.0
26  0.0
27  0.0
28  0.0
29  0.0
30  0.0
31  0.0
32  0.0
33  0.0
34  0.0
35  0.0

Because solution is all zeros, the output of solver_optimization (results_total) is an empty DataFrame.

I double checked the pyomo model itself:

(Pdb) model.s.extract_values()
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 0.0, 17: 0.0, 18: 0.0, 19: 0.0, 20: 0.0, 21: 0.0, 22: 0.0, 23: 0.0, 24: 0.0, 25: 0.0, 26: 0.0, 27: 0.0, 28: 0.0, 29: 0.0, 30: 0.0, 31: 0.0, 32: 0.0, 33: 0.0, 34: 0.0, 35: 0.0}

and it's returning all zeros for the optimal scaling vector.

clean up print statements, check for accuracy, and make timings consistent

Our debugging statements are in various formats, don't all refer to the same times, and may not reflect the current code functionality.

  • Line 344 of pylca_opt_background: print(str(time.time() - tim0) + ' ' + 'taken to do this run',flush=True) Run of what?
  • Line 120 of des_interface.py: print(str(facility_id) + ' - ' + str(year) + ' - ' + stage + ' - ' + material + ' shortcut calculations done',flush = True) I see printouts for the in use stage but not for any other processes
  • Line 258 of des.py: print(f'{datetime.now()}In While loop pylca interface',flush = True) delete?
  • Line 261 of des.py: print(str(time.time() - time0) + ' yield of env timeout pylca took these many seconds') delete?
  • Line 287 of des.py: print(str(time.time() - time0)+' For loop of pylca took these many seconds') delete?
  • Line 302 of des.py: print(f'{datetime.now()}In While loop update cost graph',flush = True) Reword for more information
  • Line 329 of des.py: print(f"{datetime.now()} Updated cost graph {year}: cum_mass_fine_grinding {cum_mass_fine_grinding}, cum_mass_coarse_grinding {cum_mass_coarse_grinding}, avg_blade_mass_kg {avg_blade_mass_kg}", flush=True)
  • Line 331 of pylca_opt_background: print(str(i) +' - '+j + ' - ' + k) Reword for more information
  • When a CostGraph object is read in from a pickle file, there are timings stored in that file that are printed out but do not correspond to the actual model run.

I propose two types of timing statement, with different formats so we can distinguish them:

  • CELAVI run timing: time = 0 when the run starts, and the timing of key events is expressed as the number of seconds since the run started (e.g., "CELAVI run started at 0 s", "CostGraph completed at 12 s", "DES beginning at 14 s", "CELAVI run finished at 2413 s").
  • Individual process timings: process_time = 0 when the process starts, and the time the individual process takes to complete is printed (e.g., "LCIA calculations for 2008 took 18 s", "Pathway selection for 2012 took 123 s").
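
A minimal sketch of the two proposed timing formats; the event labels are illustrative:

import time

RUN_START = time.time()

def log_run_event(message):
    # CELAVI run timing: seconds elapsed since the run started.
    print(f'{message} at {time.time() - RUN_START:.0f} s', flush=True)

def log_process_time(label, process_start):
    # Individual process timing: seconds one process took to complete.
    print(f'{label} took {time.time() - process_start:.0f} s', flush=True)

log_run_event('CELAVI run started')
t0 = time.time()
# ... one model year of LCIA calculations would run here ...
log_process_time('LCIA calculations for 2008', t0)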

Track separate materials within components.

  • Create a dictionary of masses of materials in each component. Keys are the names of the materials. Values are the masses of the materials.
  • At each stage of life, iterate over the materials in the dictionary and account for them in mass facility inventories.
  • Count facility inventories stay the same.
  • The possible items in the mass facility inventories will change from component types to material types.
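
A minimal sketch of the proposed data structure; the material names and masses are illustrative:

# Per-material masses for one component, keyed by material name (values in tonnes).
component_materials = {'glass fiber': 10.2, 'epoxy': 4.8, 'steel': 1.5}

# At each stage of life, iterate over the materials and account for them
# in the mass facility inventories.
mass_inventory = {}
for material, mass_tonnes in component_materials.items():
    mass_inventory[material] = mass_inventory.get(material, 0.0) + mass_tonnes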

implement disaggregation postprocessing on facility and transportation impacts

Current status

Inbound transportation to each facility is stored in t-km associated with the destination facility_id. No route information is attached to the transportation data.

Proposed solution

Postprocessing method - Associate transportation in t-km with both the source and destination facility_id values and create an output data structure with year, source facility_id, destination facility_id, and transportation in t-km (possibly also material type, when we get there). Merge this data structure with the routes input dataset and use the vmt by FIPS relative to total_vmt for each route to disaggregate the t-km values and associate transportation with each FIPS involved in a route.

Note: Ideally, all environmental impacts could be aggregated up to one (or all!) of the region_id_x columns, but because we don't require that those columns be filled in, it's not guaranteed to work completely. For now, stick with disaggregating the transportation calculations, because those will definitely have counties.

Results once this is done will combine point sources of emissions (facilities) with county-level emissions (transportation). We can shade in counties according to the amount of transportation emissions and overlay facility representations to combine the results.
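
A minimal pandas sketch of the proposed postprocessing merge; the column names are assumptions based on the description above:

import pandas as pd

# Hypothetical transportation records: t-km tied to source and destination facilities.
transport = pd.DataFrame({
    'year': [2020], 'source_facility_id': [1], 'destination_facility_id': [2], 't_km': [500.0],
})
# Hypothetical routes rows: one row per FIPS that a route passes through.
routes = pd.DataFrame({
    'source_facility_id': [1, 1], 'destination_facility_id': [2, 2],
    'fips': ['19153', '19013'], 'vmt': [60.0, 40.0], 'total_vmt': [100.0, 100.0],
})

merged = transport.merge(routes, on=['source_facility_id', 'destination_facility_id'])
# Disaggregate t-km to counties in proportion to each FIPS's share of the route's VMT.
merged['t_km_by_fips'] = merged['t_km'] * merged['vmt'] / merged['total_vmt']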

Join data to make map

Join final_lcia_results_to_des.csv to locations_computed.csv to create a csv file from which to generate a map. Join them on the facility_id column. This will allow making a map like the one below.
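
A minimal pandas sketch of the join; the file names and the facility_id column come from this issue:

import pandas as pd

lcia = pd.read_csv('final_lcia_results_to_des.csv')
locations = pd.read_csv('locations_computed.csv')

# Join on facility_id so each impact result gains location columns for mapping.
map_data = lcia.merge(locations, on='facility_id', how='left')
map_data.to_csv('lcia_results_with_locations.csv', index=False)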

(Example map attached: co_ia_map_10_12_2021)

avoid calling costgraph.shortest_paths method when blades are manufactured

CostGraph has a find_upstream_neighbor method that will connect a power plant facility with its closest manufacturing facility. When blades are created, a call to shortest_paths is used instead to connect the manufacturing and power plant facilities.

This upgrade would swap out the more intensive shortest_paths call with a call to find_upstream_neighbor when the blade is initialized (already implemented) and use information from that method to generate the first two steps of the blade's queue.

As a result, the shortest_paths method will only be called when a blade reaches end of life - i.e., once per blade, instead of twice as it currently runs.
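
A minimal sketch, assuming find_upstream_neighbor returns the manufacturing facility_id (as in the CostGraph excerpt shown earlier on this page); the queue format and node naming are assumptions:

# Hypothetical sketch: build the blade's first two queue steps from the
# cheaper find_upstream_neighbor call instead of calling shortest_paths.
def initial_queue(cost_graph, plant_node_id):
    manufacturer_id = cost_graph.find_upstream_neighbor(node_id=plant_node_id, crit='cost')
    return [f'manufacturing_{manufacturer_id}', f'in use_{plant_node_id}']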

Fine grinding needs to handle multiple materials

This issue is not apparent with the tiny-data data branch.

The problem is noted in the first stack trace.

Traceback (most recent call last):
  File "/home/tghosh/celavi/celavi/celavi/component.py", line 192, in eol_process
    self.context.cost_graph.finegrind_material_loss * self.mass_tonnes * distance,
TypeError: unsupported operand type(s) for *: 'float' and 'dict'

 

The above exception was the direct cause of the following exception:

 

Traceback (most recent call last):
  File "__main__.py", line 389, in <module>
    count_facility_inventories = context.run()
  File "/home/tghosh/celavi/celavi/celavi/des.py", line 431, in run
    self.env.run(until=int(self.max_timesteps))
  File "/home/tghosh/.conda-envs/celavi/lib/python3.8/site-packages/simpy/core.py", line 254, in run
    self.step()
  File "/home/tghosh/.conda-envs/celavi/lib/python3.8/site-packages/simpy/core.py", line 206, in step
    raise exc
TypeError: unsupported operand type(s) for *: 'float' and 'dict'
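
The traceback shows finegrind_material_loss (a float) being multiplied by self.mass_tonnes, which is now a dict of per-material masses. A minimal per-material sketch, with illustrative values:

finegrind_material_loss = 0.3  # from costgraph_parameters in the config
mass_tonnes = {'glass fiber reinforced polymer': 12.0, 'steel': 0.5}  # illustrative dict
distance = 50.0

# Apply the loss per material instead of multiplying a float by the whole dict.
losses = {
    material: finegrind_material_loss * mass * distance
    for material, mass in mass_tonnes.items()
}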


run DES at technology (turbine) level instead of component (blade) level

Separate issue created from list in #7

Within the DES, create one component to represent three blades - there's a for loop in main.py that creates three components per turbine. Remove this loop and update the component mass to be 3x blade mass.

Currently one component = one blade.

This will cut the DES run time to one third of the current run time.
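
A minimal sketch of the change; the component structure is hypothetical:

blade_mass = 4.0  # tonnes per blade, illustrative

# Before: a loop in main.py creates three blade components per turbine.
# After: one component represents all three blades at 3x blade mass.
components = [{'kind': 'blades', 'mass_tonnes': 3 * blade_mass}]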

match locations_computed filtering to number_of_turbines filtering

After the addition of filtering on turbines and integrating it within the compute-locations step, the only facility that still shows up as extra in number_of_turbines is 55871. While this is an easy fix, would you like to delve deeper into why this facility survived the filter?

@TJTapajyoti Commit 2767c0a may resolve this issue, can you confirm?

implement facility connection filtering algorithm

Creating a new issue from an item in #145

To discuss: Can we apply the max distance filter at the routing stage, such that the routes file doesn't include connections between facilities where total_vmt is too high? That would both speed up the CostGraph initialization and remove this whole problem of filtering at the CostGraph level. We could then add functionality that if the facility connection (edge) doesn't exist in the routes file, the edge gets deleted from CostGraph.

A complication with implementing the distance cap: it has to be applied based on which facility types are being connected, because there are almost certainly power plants farther than 300 km away from any blade manufacturing facility, and those facilities still need to be connected.

The cap should apply to these connections:

  • power plant to landfill
  • power plant to cement plant
  • cement plant to power plant
  • recycling facility to landfill
  • recycling facility to cement plant

which is basically all of them except manufacturing facility to power plant connections.
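
A minimal pandas sketch of applying the cap at the routing stage, with the manufacturing-to-power-plant exemption noted above; the column names and units are assumptions:

import pandas as pd

max_dist = 300.0  # km, from scenario_parameters
exempt_pairs = {('manufacturing facility', 'power plant')}

routes = pd.DataFrame({
    'source_facility_type': ['power plant', 'manufacturing facility'],
    'destination_facility_type': ['landfill', 'power plant'],
    'distance_km': [450.0, 450.0],  # hypothetical route distances
})

keep = routes.apply(
    lambda r: (r.source_facility_type, r.destination_facility_type) in exempt_pairs
    or r.distance_km <= max_dist,
    axis=1,
)
routes = routes[keep]  # over-distance connections never reach CostGraph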

Clean up main.py -- create separate config for filenames and folders?

Ported from https://github.nrel.gov/aeberle/celavi/issues/160

This functionality is being implemented in the configs branch.

Latest commit is 7290b0a, which parameterizes the CostGraph initialization and all filename/directory references with information from the yaml file. Testing is still underway; I will update once I've fully tested the current commit and yaml file.

Current yaml file reads as:

flags:
  compute_locations: False  # if compute_locations is enabled (True), compute locations from raw input files (e.g., LMOP, US Wind Turbine Database)
  run_routes: False  # if run_routes is enabled (True), compute routing distances between all input locations
  use_computed_routes: True  # if use_computed_routes is enabled, read in a pre-assembled routes file instead of generating a new one
  initialize_costgraph: True  # create cost graph fresh or use an imported version
  pickle_costgraph: True  # save the newly initialized costgraph as a pickle file

data_filtering:
  enable_data_filtering: False  # If true, dataset will be filtered to the states below
  
  # If you specify True above, you need to list the states to filter here.
  # Default behavior is not to pass any states through the filter.
  # If enable_data_filtering is False, this list is ignored.
  
  states_to_filter:
    - TX
    - IA
    - CO

scenario_parameters:
    start_year: 2000
    end_year: 2050
    timesteps_per_year: 12
    max_dist: 300

data_directories:
    inputs: inputs/
    raw_locations: inputs/raw_location_data/
    us_roads: inputs/precomputed_us_road_network/
    preprocessing_output: preprocessing/
    lookup_tables: lookup_tables/
    lci: pylca_celavi_data/
    outputs: outputs
    routing_output: preprocessing/routing_intermediate_files

input_filenames:
    locs: locations_computed.csv
    step_costs: step_costs.csv
    fac_edges: fac_edges.csv
    transpo_edges: transpo_edges.csv
    route_pair: route_pairs.csv
    avg_blade_masses: avgblademass.csv
    routes_custom: routes.csv
    routes_computed: routes_computed.csv
    transportation_graph: transportation_graph.csv
    node_locs: node_locations.csv
    power_plant_locs: uswtdb_v4_1_20210721.csv
    landfill_locs: landfilllmopdata.csv
    other_facility_locs: other_facility_locations_all_us.csv
    lookup_facility_type: facility_type.csv
    turbine_data: number_of_turbines.csv

output_filenames:
    costgraph_pickle: netw.obj
    costgraph_csv: netw.csv
    
costgraph_parameters:
    sc_begin: manufacturing
    sc_end: 
      - landfilling
      - cement co-processing
      - next use
    cg_verbose: 1
    save_cg_csv: True
    finegrind_cumul_initial: 1.0
    finegrind_initial_cost: 165.38
    finegrind_revenue: 242.56
    finegrind_learnrate: -0.05
    finegrind_material_loss: 0.3
    coarsegrind_cumul_initial: 1.0
    coarsegrind_initial_cost: 121.28
    coarsegrind_learnrate: -0.05
    cg_update_timesteps: 12

line 560 in CostGraph throws error on national data commit 892270e

The following error is produced when running the debugging-speed-runs branch against commit 892270e on the national_data branch:


The error reads index 0 is out of bounds for axis 0 with size 0, and it is thrown from the statement on line 560 of CostGraph.

Removing the [0] towards the end of line 560 should resolve the error.

more detailed algorithm for selecting facility pairs in Router

Current status

The input file route_pairs.csv is used to define pairs of facility_type values to connect using Router and whether out-of-state connections between pairs are allowed. Where out-of-state connections are allowed, Router will find routes between all combinations of facilities with the paired facility_type values. For instance, out-of-state connections are currently allowed between blade manufacturing facilities and power plant facilities. (This is because not all states have manufacturing facilities, and every power plant needs an upstream manufacturing facility.) For every blade manufacturing facility in the locations dataset, Router finds routes from that facility to every power plant in the locations dataset. As a result, our routes dataset contains many facility connections that, while technically allowable, are infeasible (inordinately long transportation distances) compared to other connections between facilities of the same type.

Proposed solution

Related to the maximum distance filtering part of #145. A maximum distance filter is applied at the Router level, such that if two facilities are too far apart, the route between them is not stored. This carries the risk that some vital connections may not be made (some power plants may not get assigned manufacturing facilities, for instance), so there would need to be an allowed exception that makes certain connections even when the maximum distance is exceeded. With this functionality in place, the distance-assigning logic in CostGraph could be extended so that edges without distances in the routes dataset are deleted from the network.

Iowa (and other states) speed test on local machines and HPC

This run will use every part of the functionality including generating the locations file, the turbine data file, the step_costs file, and the routes file, filtering down the datasets to include only Iowa facilities, and calculating transportation impacts along with the rest of the supply chain impacts.

Data branch: national_data at a50c717
Code branch: configs at 11f206f

Deleted the lca_db.csv file from pylca_celavi_data.

Config file:

(I set cg_verbose to 2 for more detailed monitoring but have set it to 1 below because CostGraph gives a lot of output)

flags:
  compute_locations: True  # if compute_locations is enabled (True), compute locations from raw input files (e.g., LMOP, US Wind Turbine Database)
  generate_step_costs: True # set to False if supply chain costs for a facility type vary regionally
  run_routes: True  # if run_routes is enabled (True), compute routing distances between all input locations
  use_computed_routes: True  # if use_computed_routes is enabled, read in a pre-assembled routes file instead of generating a new one
  initialize_costgraph: True  # create cost graph fresh or use an imported version
  enable_data_filtering: True  # If true, dataset will be filtered to the states below
  pickle_costgraph: True  # save the newly initialized costgraph as a pickle file
  use_fixed_lifetime: True # set to False to use Weibull distribution for lifetimes

scenario_parameters:
    start_year: 2000.0
    end_year: 2050.0
    timesteps_per_year: 12
    max_dist: 300 #km
    # If you specify enable_data_filtering = True above, you need to list the states to filter here.
    # Default behavior is not to pass any states through the filter.
    # If enable_data_filtering is False, this list is ignored.
    states_to_filter:
    - IA

data_directories:
    inputs: inputs/
    raw_locations: inputs/raw_location_data/
    us_roads: inputs/precomputed_us_road_network/
    preprocessing_output: preprocessing/
    lookup_tables: lookup_tables/
    lci: pylca_celavi_data/
    outputs: outputs/
    routing_output: preprocessing/routing_intermediate_files/

input_filenames:
    locs: locations_computed.csv
    step_costs: step_costs.csv
    fac_edges: fac_edges.csv
    transpo_edges: transpo_edges.csv
    route_pairs: route_pairs.csv
    avg_blade_masses: avgblademass.csv
    routes_custom: routes.csv
    routes_computed: routes_computed.csv
    transportation_graph: transportation_graph.csv
    node_locs: node_locations.csv
    power_plant_locs: uswtdb_v4_1_20210721.csv
    landfill_locs: landfilllmopdata.csv
    other_facility_locs: other_facility_locations_all_us.csv
    standard_scenario: StScen20A_MidCase_annual_state.csv
    lookup_facility_type: facility_type.csv
    lookup_step_costs: step_costs_default.csv
    turbine_data: number_of_turbines.csv

output_filenames:
    costgraph_pickle: netw.obj
    costgraph_csv: netw.csv
    
costgraph_parameters:
    sc_begin: manufacturing
    sc_end: 
    - landfilling
    - cement co-processing
    - next use
    cg_verbose: 1
    save_cg_csv: True
    finegrind_cumul_initial: 1.0
    finegrind_initial_cost: 165.38
    finegrind_revenue: 242.56
    finegrind_learnrate: -0.05
    finegrind_material_loss: 0.3
    coarsegrind_cumul_initial: 1.0
    coarsegrind_initial_cost: 121.28
    coarsegrind_learnrate: -0.05
    cg_update_timesteps: 12

discrete_event_parameters:
    component_list:
    - nacelle
    - blade
    - tower
    - foundation
    seed: 13
    min_lifespan: 120 # Units: timesteps
    blade_weibull_L: 240
    blade_weibull_K: 2.2
    component_fixed_lifetimes:
      nacelle : 30
      blade : 20
      foundation : 50
      tower : 50

Integrate visualization and plotting with results extraction from HPC

Initial comment from @TJTapajyoti

  1. Plot graphs using the object file from a CELAVI run. @akey
  2. Commit results successively to the GitHub results repo.
  3. Check to see if it's possible from the HPC directly.

Issues - Graphs currently made with plotly are not working. Need to use seaborn.

Response from @akey7

We still need to check whether plotly can save plots directly as png files. It might be able to, and if so, we should not use seaborn.

Response 2 from @akey7

I have confirmed that plotly will save plots with kaleido. We do not need to use seaborn.

Runtime of CELAVI code - Increasing successively. IOWA runs failing

Initial comment

Runtime for the CELAVI code is increasing with successive runs.

The Iowa run failed today because the first year, 2001, is taking more than 15000 seconds in the DES and SimPy.

Causes - unknown.

Potential solutions: revert back to the old code? Check that filtering is working properly?

The old code is saved in scratch (not deleted); we can compare from there.

Followup comment

Comparing the Aug 18 code with the current code:

  • Changes in main.py.
  • The US WTDB input file was changed.
