ladybug-tools / butterfly

🦋 A light Python API for creating and running OpenFOAM cases for CFD simulation.

Home Page: http://ladybug-tools.github.io/butterfly.html

License: GNU General Public License v3.0

Languages: C++ 1.43%, Python 60.19%, HTML 38.36%, Shell 0.03%

butterfly's People

Contributors

antoinedao, chriswmackey, mostapharoudsari, petebachant, sariths, stefan-buildsci, thinklikeanarchitect


butterfly's Issues

Tracking and plotting residuals while running the case

Hello,

This is in reference to discussion #21.

I will try to give my two cents on how we can go about visualizing residuals, and potentially any other information derived from function objects, within Butterfly. I think there are three possibilities, with different potential and ease of implementation depending on which operating system the user is running.
Note: I do not assume Butterfly is only for Windows, especially given the current transformation of LB (and, I am guessing, the whole suite of tools) into an independent entity.

I am listing the three possibilities below:

1. Using the pyFoam libraries and the pyFoamPlotRunner.py utility. This works like a charm in Linux, but I haven't had the chance to test it in Windows. If we can manage to set up pyFoam in Windows, this would be one of the easiest ways to plot residuals (and do much more). Since OF is actually running through a VM, this may well be possible. In that case, all Butterfly needs to do is edit the run command accordingly:

pyFoamPlotRunner.py mpirun -np 4 simpleFoam -parallel (for the parallel case)

pyFoamPlotRunner.py simpleFoam (for the single-processor case)

The PlotRunner utility automatically creates residual plots that update on every iteration. pyFoam also allows the user to save the plots as image files at the end!

2. In case pyFoam cannot work under the VM environment in Windows, we can replicate its functionality by following the terminal output (a "> log.txt" kind of thing) and extracting the necessary information from it (e.g. the value of each residual at each iteration), since the format of the output is consistent at every iteration; see the sketch after this list. We can then use some Windows application to plot that. For example, I know that simply opening the .log file in a browser works, so perhaps it could become a sort of visualization utility/code within a browser?

This is a bit like reinventing the wheel. However, it represents a generic way to visualize run-time information in Windows, which can be extremely valuable for visualizing information created by function objects.

3. The third way is a compromise between the two, or rather a more specific and probably more efficient way to go about no. 2. It pertains specifically to residuals and uses the residuals function object of OF. This function object writes out the residuals for each timestep to a file. We can then use that file to visualize the information.
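For option 2, a minimal Python sketch of what the log parsing could look like, assuming the standard "Solving for <field>, Initial residual = ..." lines that simpleFoam prints at every iteration; the file name and function are illustrative, not part of Butterfly:

import re
from collections import defaultdict

# Matches the initial residual that simpleFoam reports for each field.
RESIDUAL_RE = re.compile(r'Solving for (\w+), Initial residual = ([\d.eE+-]+)')

def parse_residuals(log_path):
    """Return {field: [residual, residual, ...]} in the order they appear in the log."""
    residuals = defaultdict(list)
    with open(log_path) as log:
        for line in log:
            match = RESIDUAL_RE.search(line)
            if match:
                field, value = match.groups()
                residuals[field].append(float(value))
    return residuals

res = parse_residuals('log.txt')  # 'log.txt' is just an example redirect target
for field, values in res.items():
    print(field, values[-1])  # latest residual per field; plotting is left to the caller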

I have to wonder whether a utility like a value tracker within GH could be used for the visualization of residuals, because that would be really cool!

Hope all this helps!

Kind regards,
Theodore.

Add 'duplicate' method to all the classes

This is critical to be able to use Butterfly for parametric runs in Grasshopper and Dynamo. Currently components share the same instance, which is fine as long as the user doesn't take the same case and use it in two different workflows.

This is not critical for beta testing but needs to be addressed before the first release.
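A minimal sketch of what such a duplicate method could look like, assuming the classes mostly carry plain-data attributes; the Case class here is illustrative, not the actual Butterfly implementation:

import copy

class Case(object):
    def __init__(self, name, values=None):
        self.name = name
        self.values = dict(values or {})

    def duplicate(self):
        """Return an independent copy so parametric runs don't share state."""
        return copy.deepcopy(self)

base = Case('study_room', {'endTime': 1000})
variant = base.duplicate()
variant.values['endTime'] = 2000  # leaves base.values untouched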

WindTunnel component: nDivXYZ

At the moment this input lets the user set the number of cells in each direction.
I think it's more user-friendly to use the cell size in each direction as an input.

This could be done easily by dividing the total X, Y, and Z lengths by the cell size given by the user and then using the (integer) result of each division as the input to nDivXYZ.
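A minimal sketch of that division, with illustrative names that are not taken from the Butterfly API:

def n_div_xyz(x_length, y_length, z_length, cell_size):
    """Number of divisions in each direction for a target cell size."""
    return (max(1, int(round(x_length / cell_size))),
            max(1, int(round(y_length / cell_size))),
            max(1, int(round(z_length / cell_size))))

print(n_div_xyz(200.0, 100.0, 60.0, 2.5))  # -> (80, 40, 24)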

If it weren't for recursive relationships (this information is currently generated as an output of the component), this could easily be done with native GH. But I think it should be straightforward to apply this.

Kind regards,
Theodore.

Butterfly Roadmap 3. From Theodore's workflow

3a. The next step of the workflow is to create the mesh by setting up the snappyHexMeshDict file. I'd like to note here that what follows applies to external simulations. I do not believe that SHM is good for internal meshing, as it complicates things a lot. I will suggest a different program (open source, tied to OF, and available on Windows) for internal cases. SHM looks frightening at first glance; it has almost 100 options. However, only a handful of these options can or should be adjusted; the rest can remain at their default values unless the user is experienced and wants to edit the dictionary by hand. The options that I usually change from simulation to simulation are:

  1. maxLocalCells, the maximum number of cells per processor, usually tied to the next option.
  2. maxGlobalCells, the overall cell limit for the resulting mesh (most of the time the final mesh ends up bigger than this). It defines when refinement stops, once the count passes the limit, and it is tied to the capacities of your computer, mainly RAM.
  3. minRefinementCells, when the refinement process stops. I usually have this down to 10, since I want refined meshes.
  4. nCellsBetweenLevels, defines the number of cells between the different levels of refinement in each region. Higher numbers make for a more gradual mesh, but I find that 1 is the best option here. Can be left at the default.
  5. refinementSurfaces { }, where we define the surfaces and regions to be refined, as well as the level of refinement. By level we mean how many times the original blockMesh cell is divided.
  6. locationInMesh, very important in defining what counts as the inside of your mesh. For external cases I usually put this somewhere high up, near the top of my blockMesh, and never on round coordinates like (100, 200, 50) but on something like (100.123 200.431 50.112). If this point happens to be 'inside' a geometry, then snappyHexMesh will do an internal meshing, and that would be wrong. Perhaps a check can exist here to tell the user the point they selected is inside an stl.
  7. The rest are mostly mesh-quality options. We can leave them at their defaults or create low, medium, and high quality templates. I will share some of mine soon.

As mentioned in 1b, after exporting our geometry we should end up with either one .stl file containing all regions of the model or several .stl files, one per region of the model. This file (or these files) is what we call in the geometry { } section of the snappyHexMeshDict. The convention of renaming each file, or each region within the single file, as described in 1b, becomes important here. In the refinementSurfaces section of the SHM dictionary, it is that specific name that we use in order to 'call' a region and assign it a specific refinement level (in my example below, in order to save cells, I refine the context and podium at a much lower level than my towers):

refinementSurfaces
{
    Development // the name of the single, joined .stl file
    {
        level (1 1); // the level of refinement for the WHOLE geometry
        regions // here we define the specific regions WITHIN the single stl
        {
            Context
            {
                level (2 2);
            }
            Podium
            {
                level (3 3);
            }
            Tower_1
            {
                level (5 5);
            }
            Tower_2
            {
                level (5 5);
            }
            Tower_3
            {
                level (5 5);
            }
        }
    }
}

3b. After we run snappyHexMesh, we will end up with a number of different folders, each containing a different version of the mesh. These depend on the first three options of the SHM dict:

• castellatedMesh
• snap
• addLayers

These represent the three different stages of mesh creation; a folder is created for each option set to true. In the castellatedMesh stage, the geometry is assigned the different refinement levels, the different regions are refined, and cells are added accordingly. In the snap stage, the geometry is snapped onto the blockMesh and then optimised, chiseled in a way. In the addLayers stage, layers of cells are added to the boundaries. For most of the external simulations that we would be doing, this is not necessary; it is also something that snappyHexMesh does badly. So I propose that the standard here is:

castellatedMesh true;
snap true;
addLayers false;

(Although I usually have addLayers set to true, but without actually adding layers, which adds a couple of extra quality controls to the process. I am unsure how this affects quality, though; it still needs to be tested more.)

The above means that two folders will be created in our case folder, each with a polyMesh inside. To run the simulation we need the last one, but OpenFOAM lets you export all of them so you can inspect and test your meshing step by step. The usual process, which you can see in the very useful bash scripts in the OpenFOAM tutorials (Allrun), is to copy the polyMesh from the last time folder into your constant/ folder and then delete the 1/ and 2/ folders from the main case folder. We could either use the bash scripts to do this or code it ourselves, since it mainly has to do with folder manipulation (see the sketch below). The final case has to have 0/, constant/ (with the last polyMesh inside), and system/ folders.
**NOTE: I like to have things clean and neat, so this is my workflow. OpenFOAM also lets you keep all the folders created by SHM and instead start solving from the last 'time directory', which would be the last folder from SHM. So most Allrun scripts just copy the '0' folder into the last SHM folder and run. I find this very messy, as it leaves things I don't want or need in my case folder. But it is easier, I guess.
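A minimal Python sketch of that clean-up, assuming the usual layout where snappyHexMesh writes numbered time folders (1/, 2/, ...) each containing a polyMesh; paths and behaviour are illustrative only:

import os
import shutil

def promote_latest_mesh(case_dir):
    """Copy the newest time folder's polyMesh into constant/ and delete the SHM time folders."""
    time_dirs = sorted(
        (d for d in os.listdir(case_dir)
         if d.isdigit() and int(d) > 0
         and os.path.isdir(os.path.join(case_dir, d, 'polyMesh'))),
        key=int)
    if not time_dirs:
        return
    latest = os.path.join(case_dir, time_dirs[-1], 'polyMesh')
    target = os.path.join(case_dir, 'constant', 'polyMesh')
    if os.path.isdir(target):
        shutil.rmtree(target)        # replace the blockMesh polyMesh
    shutil.copytree(latest, target)  # keep the snapped mesh
    for d in time_dirs:
        shutil.rmtree(os.path.join(case_dir, d))  # remove 1/, 2/, ...

promote_latest_mesh('.')  # run from inside the case folder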

3c. After the meshing process is done and the whole folder clean-up happens, we need to make sure that the new polyMesh/boundary file is in accordance with our 0 folder files. Every geometry we added to the mesh creates a boundaryField here (these are linked to the refinementSurfaces names used, which are linked to the .stl file names; you see the importance of renaming them in the first place to avoid errors). These are typically walls, and specific types of boundary conditions are added to them (I will also upload a few documents on this soon). But what is important in order for the thing to run is that every boundaryField in the polyMesh/boundary file is also present in our 0 folder files.

3d. After that we should be ready to run our simulation. To run a simulation, a few additional files are needed:

controlDict: defines the solver, when to save results, etc., and ALSO the function objects (i.e. run-time post-processing). I will also add a few incredible function objects for us to include in our tool. Function objects are the things that give OF its power and versatility.
fvSolution: the options for solving all the equations, the accuracies, and when to stop.
fvSchemes: the numerical schemes used to solve these equations.

**NOTE: I have left parallel computation outside of this document; we can discuss it in the future. It really changes nothing in the above.

For all of these I will again provide some standard values for each type of simulation, solver, and, more importantly, mesh quality. But of course, these will just be my input; more contributions are welcome here!

When all these are set, we simply call the solver (which is defined in controlDict) by name and run the simulation. In future documents I will go into more detail on these last parts, but I think setting up the case folders, which I tried to describe here, is the most important part at the moment.

Connecting CFD to EnergyPlus: the curious case of TemperaturePattern:UserDefined

Hello everyone,

Happy new year! I wish you all a happy, healthy, and creative 2016!

I know this one is way up the chain but I thought I'd start the year with something.

Based on my question in the HB/LB forums concerning the TemperaturePattern:UserDefined object of E+ (http://www.grasshopper3d.com/group/ladybug/forum/topics/roomair-temperaturepattern-userdefined-using-cfd-results-in-e) I am attaching my unmethours question link here:

https://unmethours.com/question/13707/e-and-cfd-roomairtemperaturepatternuserdefined/

From Aaron's answer it seems that the field "Room Air Modeling Type: User Defined" is what would enable the RoomAir:TemperaturePattern:UserDefined object. Of the various patterns included in E+, I believe this one (http://bigladdersoftware.com/epx/docs/8-4/input-output-reference/group-room-air-models.html#roomairtemperaturepatternnondimensionalheight) looks the closest to a CFD simulation, where it would be quite easy to output the (average) temperature stratification within a zone. Furthermore, if the output were generated with function objects, then the connection could easily be made by feeding the objects' output files to the simulation (perhaps even allowing for co-simulation in the case of transient simulations).

Kind regards,
Theodore.

Add initialConditions

turbulentKE and turbulentEpsilon will be set based on this calculation.

flowVelocity         (0 0 0);
pressure             0;
turbulentKE          0.5; // initial value based on calculation
turbulentEpsilon     0.01; // initial value based on calculation
#inputMode           merge;
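The issue does not spell out the calculation, so as a placeholder here is a minimal sketch using the common estimates k = 1.5 (I Uref)^2 and epsilon = Cmu^0.75 k^1.5 / l; whether Butterfly should use exactly these relations (and these default values) is an assumption on my part:

def initial_k_epsilon(u_ref, turbulence_intensity=0.1, length_scale=10.0, cmu=0.09):
    """Return (turbulentKE, turbulentEpsilon) from common textbook estimates."""
    k = 1.5 * (turbulence_intensity * u_ref) ** 2      # turbulentKE
    epsilon = cmu ** 0.75 * k ** 1.5 / length_scale    # turbulentEpsilon
    return k, epsilon

print(initial_k_epsilon(u_ref=2.8))  # the defaults above are illustrative assumptions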

merge .stl files

Currently .stl files are written as separate pieces. I need to merge them together.
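A minimal sketch of the concatenation, assuming ASCII .stl parts whose solids are already named (the renaming step is covered under the "Generate .stl file" issue below); file names are illustrative:

def merge_stl(paths, out_path):
    """Append all ASCII solids from `paths` into a single .stl file."""
    with open(out_path, 'w') as merged:
        for path in paths:
            with open(path) as part:
                merged.write(part.read().rstrip() + '\n')

merge_stl(['Context.stl', 'Podium.stl', 'Tower_1.stl'], 'Development.stl')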

Using python scripts to automate OpenFoam simulations

Hello everyone,

I found this script made by a researcher at Chalmers during his study on porous media. The background of the study and the specific model assumptions are not so important, I think. He actually uses the script to alternate between incompressible and compressible simulations on the fly; we can hopefully start with simpler things like calling applications and running them. The way he does it, the way he creates the Python script linking to the Python library we talked about in our first meeting (pyFoam), may shed some light for you more experienced coders on how Butterfly could interface with OpenFOAM in the same way. I hope it helps!

http://publications.lib.chalmers.se/records/fulltext/160199.pdf (page 53)

Kind regards,
Theodore.

create userobjects

For each component I need to

  • Add description for component, inputs and outputs. This is easy since I can take them from the code itself.
  • Add icons! This will be so much work. For now I will just use colored dots... let's make a 🌈

Categories:

0 :: Create

  1. butterfly (install, update)
  2. Create BFSurface
  3. Create BFTunnel
  4. Create Case

1 :: Boundary

  1. Inlet
  2. Outlet
  3. Wall
  4. Custom Boundary

2 :: BoundaryCondition

  1. calculated
  2. fixedValue
  3. zeroGradient
  4. kqWallFunction
  5. epsilonWallFunction
  6. nutkWallFunction
  7. custom

3 :: Mesh

  1. blockMesh
  2. snappyHexMesh
  3. checkMesh
  4. updateFvScheme

4 :: Solver

  1. simpleFoam

5 :: PostProcess

  1. loadMesh
  2. loadPoints
  3. loadProbes

6 :: Etc

  1. blockMeshDict
  2. snappyHexMeshDict
  3. controlDict
  4. residualControl
  5. probes
  6. Wind Tunnel Parameters
  7. purgeCase

Generate blockMeshDict

Again, this is from Theodore's workflow description. There is also an example GH file which produces a blockMeshDict; essentially, for this task that GH file needs to be translated into Python. The file can be found here: https://github.com/antonszilasi/Butterfly/blob/master/Butterfly%20roadmap%20step2%20block%20mesh%20dict%20example/Test.gh

  1. The second step of a simulation is to define our blockMesh. This is the imaginary boundary within which our geometry lies and is to be simulated. A blockMesh is nothing more than a 3D box, and as such all we need to define it is a minimum (x, y, z) coordinate and a maximum (x, y, z) coordinate. As I already posted on GitHub, the standard OpenFOAM process of creating the blockMesh from these coordinates is (the text below corresponds to the vertices part of the blockMeshDict):

(minX minY minZ)
(maxX minY minZ)
(maxX maxY minZ)
(minX maxY minZ)
(minX minY maxZ)
(maxX minY maxZ)
(maxX maxY maxZ)
(minX maxY maxZ)

So, in other words, from only two coordinates we can automatically build our vertices inside the blockMeshDict file with the simple algorithm above. However, I think that for the sake of completeness we should give our users two options here:

  1. Define a blockMesh by two coordinates (sometimes a person might already know this)
  2. Define a blockMesh by the characteristic length of the geometry

The second option is related to international standards and also to intuition concerning the needs of the geometry at hand. The characteristic length of the geometry is its maximum height. The European standards on blockMesh requirements, for example, are:

  1. On the upstream side: 5 x max height space before the geometry
  2. On the downstream side: 15x max height space after the geometry
  3. On the sides: 3x max height space around the geometry.

Of course, the above differ from region to region; for example, in the tropics maximum winds are much lower, so these distances can be greatly reduced. But the relationship of the blockMesh geometry to the modelled geometry is extremely important in correctly solving the physics of the model. So I believe we should give the user both the option to choose from standard blockMesh requirements (like the above) and the option to manually select the 'x times height' factors themselves.

I am guessing that this could be done by creating a bounding box around all the geometry and getting the length of any vertical edge. After that, it is just a matter of extending the box in all directions according to user-selected or standard height multiples. Then, as before, all we need in order to write the vertices part of the blockMeshDict is the min (x, y, z) and max (x, y, z) coordinates of the extended bounding box (see the sketch below).
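A minimal sketch of both steps, assuming wind blowing along +Y and using the 5H/15H/3H guideline above (the 5H top clearance is my own assumption); all names are illustrative:

def wind_tunnel_vertices(min_pt, max_pt, top=5.0, upstream=5.0, downstream=15.0, side=3.0):
    """Extend the geometry bounding box by multiples of its height and return the 8 vertices."""
    (min_x, min_y, min_z), (max_x, max_y, max_z) = min_pt, max_pt
    h = max_z - min_z  # characteristic length: maximum height of the geometry

    min_x, max_x = min_x - side * h, max_x + side * h
    min_y, max_y = min_y - upstream * h, max_y + downstream * h
    max_z = max_z + top * h  # the ground (min_z) stays where it is

    return [(min_x, min_y, min_z), (max_x, min_y, min_z),
            (max_x, max_y, min_z), (min_x, max_y, min_z),
            (min_x, min_y, max_z), (max_x, min_y, max_z),
            (max_x, max_y, max_z), (min_x, max_y, max_z)]

for v in wind_tunnel_vertices((0, 0, 0), (40, 60, 30)):
    print('(%g %g %g)' % v)  # paste-ready vertices block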

2b. As you have probably noticed, inside the blockMeshDict there are also the 6 standard boundaryFields created by OpenFOAM. These can be found in the constant/polyMesh/boundary file that is created after the blockMesh command is run.

These boundaries have to be automatically passed to our 0 folder for all external simulations. Originally, they have standard names in OpenFOAM (top, bottom, sides, front and back, etc.). But usually the user wants to rename them into something that makes sense; for example, I usually rename them to the wind directions of my model. That means we should both give the user the ability to rename these and then pass those changes on to the 0 folder. I'm guessing that with our classes this is already done this way.

WindTunnel component: wind speed

If memory serves right, we decided that this is the wind speed at a reference height of 10 m, right?

I think we need a tool tip here for the user to understand which wind speed is requested as an input.

Also, I think it's worth adding a reference-height input in case the user has wind data that were measured at a different reference height. Even though this won't come up often, since the EPW reference height is 10 m, we should at least give users access to it for flexibility. Of course, the default value can be set to 10 m.

Kind regards,
Theodore.

blockMeshDict component

It is not clear at the moment what the blockMeshDict component is supposed to be linked to. There is apparently no input for it in the blockMesh component. It also does not appear to have a case output, so it does not affect the current case setup (?). Or is it supposed to only help the user create a blockMesh dictionary?

I think we should either open an input in the blockMesh component or make it so that the blockMesh dictionary (and case) are updated when the component runs. This would probably require a case input on blockMeshDict.

Hope I haven't misunderstood the component.

Regards,
Theodore.

Geometry export workflow

Hello all,

Right now the main question that I have is about the best way to export the geometry to OpenFOAM.

@stefan-buildSCI sent me an example file of a room with two windows. I did a test to generate blockMeshDict from scratch. It is all good but creating multiple blocks is not really easy. It can get really hard for more complex geometries.

For this particular reason I think it might be a better idea to use a Grasshopper > STL > OpenFOAM workflow. I couldn't find a good example file showing how to do this from the command line. I can export the geometry to STL with no problem; STL to OpenFOAM is the question.

  1. What are your thoughts on this?
  2. Does anyone have a good example?

@mcneillj @stefan-buildSCI @TheodoreGalanos

Allow for saving, loading, and sharing of Butterfly cases

Hello everyone,

Just sharing an idea.

Allowing saving and loading of Butterfly cases would make sharing cases between Butterfly users incredibly easy!

The case output can be a file that can be loaded with a component similar to the HB components that do the same thing. When the case is loaded, all dictionaries and options up to that point are created.

That's all!

Kind regards,

Theodore.

Check default values for the case

@TheodoreGalanos, @stefan-buildSCI,

I uploaded a new folder study_room which is generated from scratch using butterfly.

I did a test, meshed it, and ran it successfully, but I'm not sure whether the results are necessarily accurate.

image

Can you please take a look and let me know your feedback? In particular, I need your input on the default values for boundary conditions, etc. It's quite easy to create a series of default values (similar to radianceParameters in Honeybee), but I don't have the expertise to come up with the values.

Add classes for OpenFoam dictionaries

The minimum necessary for now are:

  • constant/polyMesh/blockMeshDict
  • system/controlDict
  • system/snappyHexMeshDict
  • system/fvSchemes
  • system/fvSolution
  • constant/RASProperties
  • constant/transportProperties
  • 0/epsilon
  • 0/k
  • 0/nut
  • 0/p
  • 0/U

They will all be subclasses of FoamFile with default values. The user can update the values by calling .updateValues({'key': 'value'}) with a dictionary of new values. controlDict is already implemented and it is working pretty well.
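A minimal sketch of that idea, with illustrative attribute names and write format (not the actual Butterfly implementation):

class FoamFile(object):
    def __init__(self, name, location='system', default_values=None):
        self.name = name
        self.location = location
        self.values = dict(default_values or {})

    def updateValues(self, new_values):
        """Merge a {'key': 'value'} dictionary into the current values."""
        self.values.update(new_values)

    def to_openfoam(self):
        """Return the dictionary body as OpenFOAM-style 'key value;' lines."""
        return '\n'.join('%s    %s;' % (k, v) for k, v in self.values.items())

class ControlDict(FoamFile):
    def __init__(self):
        FoamFile.__init__(self, 'controlDict', default_values={
            'application': 'simpleFoam', 'startTime': '0',
            'endTime': '1000', 'writeInterval': '100'})

control_dict = ControlDict()
control_dict.updateValues({'endTime': '2000'})
print(control_dict.to_openfoam())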

OpenFOAM and Butterfly Installation Issues

Hi

I tried to install the latest version today. I find it difficult to execute the Docker distribution of OpenFOAM with Butterfly on Windows 7.

When testing the indoor and outdoor files I get the read errors seen here:
1_creates_batch_does_not_run_it_though

It seems like Butterfly is generating a batch file but not executing it. If I execute it myself and refresh GH, the blockMesh is created, as seen here:
2_creates_batch_does_not_run_it_though

I can repeat this process with snappyHexMesh and checkMesh (and read the meshes with the Butterfly_Load Mesh component), but when I manually execute run simpleFoam it misses multiple paths and thus simply does not run properly.

It might be that my Docker/OpenFOAM installation is off, but it seems to function OK, as I tested it before working with Butterfly:
3_the_docker_openfoam_seems_to_be_working

Thank you for making all this possible

Potential Butterfly components

Hello everyone,

Following the discussion started in issue #14, I am starting a new discussion concerning potential Butterfly components that do not necessarily have to do with the core API code that will connect it to OpenFOAM.

These are mostly about providing Butterfly with pre- and post-processing capabilities that, along with working within a 3D modelling program like Rhino and a visual algorithmic environment like GH, can really increase the value of the program.

I will try to post a reply to this thread for each component idea. In each reply I will try to include as many details as I can. Of course, these are original drafts and they can easily be changed, especially when the experience of people with coding skills comes in.

Please feel free to add any idea for a component that could be useful or any comments and ideas whatsoever.

Hoping this will be useful.

Kind regards,
Theodore.

Automate run process

Before running, check the folder, copy the latest mesh (or the one picked by the user) to the constant/polyMesh folder, and remove/rename the folders generated by snappyHexMesh.

Add ABLConditions class

This object should be initiated from the wind tunnel information:

Uref                 2.8; // wind velocity
Zref                 10;    // reference z value
z0                   uniform 1; // roughness
flowDir              (0 1 0); // velocity vector
zDir                 (0 0 1); // z direction (0 0 1) always for our cases
zGround              uniform 0; //min z value of the bounding box
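A minimal sketch of filling those entries from wind tunnel inputs; the helper and its defaults are illustrative, not the existing Butterfly API:

def abl_conditions(u_ref, z_ref=10.0, z0=1.0, flow_dir=(0, 1, 0), z_ground=0.0):
    """Return the ABLConditions entries as a {key: value-string} dictionary."""
    return {
        'Uref': '%g' % u_ref,                # wind velocity at Zref
        'Zref': '%g' % z_ref,                # reference height
        'z0': 'uniform %g' % z0,             # roughness
        'flowDir': '(%g %g %g)' % flow_dir,  # velocity vector
        'zDir': '(0 0 1)',                   # always (0 0 1) for our cases
        'zGround': 'uniform %g' % z_ground,  # min z value of the bounding box
    }

for key, value in abl_conditions(2.8).items():
    print('%-12s %s;' % (key, value))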

Running simplefoam error

Hello
In the outdoor airflow example, when I try to run the case, the analysis window opens and immediately closes, causing Grasshopper and Rhino to freeze.

It did work the first time I tried to run it, but only then (restarted, tried again...).
The problem is probably in my system; I wonder where to start looking.
Thanks,
H

TypeError in second run of the component

This is a strange error which I guess has something to do with Grasshopper caching the imported libraries, or with me making a mistake while importing the modules.

When I open the file it works fine (image 1), but on the second run it gives me a TypeError which doesn't look right, as it complains about the types even though the types are the same. My best guess is that two instances of the class are loaded and that's what causes the error (image 2).

If I reload the libraries and then comment them out, the component seems to work fine afterwards (image 3).

@piac, do you have any insight on this issue? I should add that I didn't have this problem before restructuring the folders, when all the libraries were at the same level. Now I have the Grasshopper libraries under the gh folder.

This also might help.

image

image

image

image

Wind Tunnel component

Via 7a7a615 Started from #16 by @TheodoreGalanos.

I started to put this together. The geometry side is working but I'm not sure about the boundary conditions. Still waiting for #21 before moving forward. Also, we need a strategy to automate meshing for this case.

image

Setting up ParaView inside OpenFOAM for Windows

I installed ParaView for Windows on my system, but when I type paraFoam it gives me the error below:

[ofuser@boot2docker building_test]$ paraFoam

FATAL ERROR: ParaView reader module libraries do not exist

Please build the reader module before continuing:
cd $FOAM_UTILITIES/postProcessing/graphics/PV4Readers
./Allwclean
./Allwmake

I couldn't figure out how to set the path to the installation folder! :| I hope someone else knows how we can do this.

RNGkEpsilon for outdoor studies

Hi,

I noticed we are using k-epsilon as our model of choice for outdoor studies (v0.0.2). Even though it is very easy for the user to assign a different model before he/she runs the simulation, I think choosing a good default is wise. From my experience, RNGkEpsilon behaves much better and is more accurate for outdoor flows.

Way to set this: RASModel RNGkEpsilon;

in the turbulenceProperties file

Add probes

This is one of the very useful OpenFOAM function objects. Here is an example that should be added to controlDict:

functions
{
    probes
    {
        // Where to load it from
        functionObjectLibs ("libsampling.so");

        type            probes;
        name            probes;        // Name of the directory for probe data
        outputControl   outputTime;    // Write at same frequency as fields
        outputInterval  5;
        fields          (p U);         // Fields to be probed
        probeLocations
        (
            // (x y z)
            (1e-06 0 0.01)
            (0.21 -0.20999 0.01)
            (0.21 0.20999 0.01)
            (0.21 0 0.01)
        );
    }
}
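A minimal sketch of generating the probeLocations block from a list of points, e.g. coming from Grasshopper; the function name is illustrative:

def probe_locations(points):
    """Format (x, y, z) points as the probeLocations entry shown above."""
    lines = ['probeLocations', '(']
    lines += ['    (%g %g %g)' % tuple(pt) for pt in points]
    lines.append(');')
    return '\n'.join(lines)

print(probe_locations([(1e-06, 0, 0.01), (0.21, -0.20999, 0.01)]))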

WindTunnelPar: gradXYZ input

I do not think that the gradXYZ will currently work (as it is intended in OF) with the input required at the moment.

The information OF requires for multi-grading is the following:

  • number of sections to be graded differently
  • percentage and distribution of cells across the sections
  • expansion ratio for each section

This information is required for each of the X, Y, and Z directions. To give an example, imagine we wish to grade our mesh in 3 sections: one before the geometry, one surrounding the geometry, and one after the geometry. This would mean we require a total of 6 vectors for X and Y, like so:

X-grading:

(0.2 0.4 1)
(0.6 0.2 1)
(0.2 0.4 1)

Y-grading:

(0.3 0.45 1)
(0.4 0.15 1)
(0.3 0.45 1)

The Z direction would depend on the user; below is an example of grading for the first 10 meters and for the rest of the geometry (in a 100-meter-high blockMesh):

(0.1 0.05 1)
(0.9 0.95 1.2)

Each vector represents: (percentage of the X, Y, or Z length, percentage of cells distributed in that section, expansion ratio).
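A minimal sketch of composing those vectors into the nested-list form that newer blockMesh versions accept for multi-grading (each direction is a list of (length fraction, cell fraction, expansion ratio) tuples); the function name is illustrative:

def grading_entry(sections):
    """Format one direction's grading, e.g. [(0.2, 0.4, 1), (0.6, 0.2, 1), (0.2, 0.4, 1)]."""
    if len(sections) == 1:
        return '%g' % sections[0][2]  # plain expansion ratio for simple grading
    return '(%s)' % ' '.join('(%g %g %g)' % s for s in sections)

x = [(0.2, 0.4, 1), (0.6, 0.2, 1), (0.2, 0.4, 1)]
y = [(0.3, 0.45, 1), (0.4, 0.15, 1), (0.3, 0.45, 1)]
z = [(0.1, 0.05, 1), (0.9, 0.95, 1.2)]
print('simpleGrading (%s %s %s);' % (grading_entry(x), grading_entry(y), grading_entry(z)))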

So in the typical case described above, the input would require 8 to 9 vectors. That's a bit hard to get in one go in Grasshopper. I would propose that we allow a multipleGradingPar component as an input to this one, while keeping the default value of simpleGrading (1 1 1).

P.S.: I can't remember if we were supposed to enable multiple grading with this input. If instead it is meant to represent the simple grading parameters, then renaming it to 'expansion ratio for XYZ' would also do it. Multiple grading would then be a new input for future development.

Kind regards,
Theodore.

checkMesh on latestTime only

Hi, I just noticed this behavior in our automatic schemes step.

Since we aren't deleting the previous times from SHM, the checkMesh command goes through all 3. This probably isn't a problem, since the component sets the schemes according to the last time result, but it wastes resources unnecessarily, especially on complex models.

A simple fix is to use this command for the component instead: checkMesh -latestTime

Kind regards,
Theodore.

Post processing: the return link from Butterfly to Honeybee

So I had a small discussion with Chris a few days back and I promised I would look into this. The whole thing circled around me wanting something and Chris being a cool guy and helping me do it. I wanted to add my CFD results to his comfort map workflow. We decided the first step would be to allow an average velocity (from CFD) to be used in the outdoor comfort calculations.

These values can be extracted in various ways in OpenFOAM. One way is post-processing through paraFoam. From what I have seen, there are a number of filter pipelines that can extract selections into tabulated format, which I'm guessing we can then use through Python to feed into our mesh in HB.

But I think I found a better way, at least a faster and more automated way that we can include in Butterfly itself. It is through the sampleDict utility (https://github.com/OpenFOAM/OpenFOAM-3.0.x/blob/master/applications/utilities/postProcessing/sampling/sample/sampleDict).

This utility gives a number of options on where to sample from (faces, cells, points, zones, patches, surfaces, etc.) and what to sample (temperature, velocity, pressure, etc.). It seems the most suitable, especially for comfort simulations where large and complex surfaces are usually in play. sampleDict allows us to extract these complex regions at just the last timestep of the simulation.

I only played around with it briefly; however, I can tell you the output formats it supports:

-csv
-ensight
-gnuplot
-jplot
-raw
-vtk
-xmgr

The most suitable seems to be csv, although that is just to my eyes and not based on coding requirements. The csv output looks something like this (an extract of velocity on a face):

x y z U_0 U_1 U_2
0 0.218 0 0 0 0
0.00217143 0.218 4.32392E-08 0.000148327 -0.0726747 0.000311878
0.00434286 0.218 4.31451E-08 -3.95837E-05 -0.0987746 0.000508977
0.00651429 0.218 4.3051E-08 -0.000285569 -0.0930806 0.000553867
0.00868571 0.218 0.000000043 -0.000442578 -0.0747404 0.000501759
0.0108571 0.218 4.28629E-08 -0.000568711 -0.0543903 0.000387795
0.0130286 0.218 4.27688E-08 -0.000667924 -0.0341085 0.000248181
0.0152 0.218 4.26747E-08 -0.000763246 -0.014714 0.00011728
0.0173714 0.218 4.25806E-08 -0.000851649 0.00256538 0.000017424
0.0195429 0.218 4.24865E-08 -0.000920814 0.015989 -4.53957E-05

It's quite a straightforward output, and it's an easy calculation to get the scalar wind velocities at these points (see the sketch below). Note that this is a face, so the points are actually quite close to each other. I still haven't used it on a surface, so I'm not sure how we can control the number of points (in order to align it with the mesh points component in HB). I will test more, especially with the other file formats, and will forward the files when I have them for testing.
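A minimal sketch of that calculation, assuming a sample file with an 'x y z U_0 U_1 U_2' header like the extract above (the delimiter handling is a guess, since the exact csv layout still needs checking); the file name is illustrative:

import math

def velocity_magnitudes(sample_path):
    """Return a list of ((x, y, z), |U|) tuples from a sampled velocity file."""
    results = []
    with open(sample_path) as sample:
        next(sample)  # skip the 'x y z U_0 U_1 U_2' header line
        for line in sample:
            if not line.strip():
                continue
            x, y, z, u0, u1, u2 = (float(v) for v in line.replace(',', ' ').split())
            results.append(((x, y, z), math.sqrt(u0 ** 2 + u1 ** 2 + u2 ** 2)))
    return results

for point, speed in velocity_magnitudes('U_face.csv')[:5]:
    print(point, speed)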

Kind regards,
Theodore.

Generate .stl file

Hi everyone, I am splitting Theodore's workflow description into tasks and assigning the tasks to people; please let me know your thoughts on this.

1a. The first step in any CFD simulation is the creation of the geometry to be modelled. This is why our environment (Rhino/GH) is ideal.

1b. After the user has created his geometries, he needs to export these as an .stl file. Again, this is something that Rhino does nicely. Bear in mind here that the format of the .stl file has to be ASCII. The exported .stl file is a collection of ASCII text with coordinates for every single vertex defining the geometry at hand. You will notice upon opening such a file that two lines are always similar in all .stls:

'solid OBJECT', at the first line of each file
'endsolid OBJECT', at the last line of each file

These two lines are extremely important because later on they will define the regions of our simulation model. What we need to do here, at least I find this correct and clean, is to change the word 'OBJECT' to the name of the .stl. So if I export a building as building.stl, I would change the word OBJECT to building (bear in mind this is case sensitive).

The importance of this is magnified in cases where we have more than one stl exported. Imagine a model with a development consisting of:

  1. 5-story podium
  2. 3 15-story residential towers on top
  3. Various buildings around (context geometry)

In this case, one would export at least 5 .stl files, one for each of the different parts of the geometry (podium, towers, context). After the export, and this is again my suggestion in order to be clean and neat, the different .stl files need to be concatenated. This is a simple copy-paste of each file into the previous one, merging them into one. Now, if the previous editing of the 'OBJECT' word in each file is not done, you can imagine that we get a Development.stl file that has 5 regions ALL called OBJECT. If this is the case, it becomes impossible, or very messy indeed, to define refinement levels for each different region in our snappyHexMeshDict.

So, for this reason and to sum up, we should export each .stl, rename OBJECT to the filename (or whatever the user likes), and then concatenate them into one stl (see the sketch below).
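A minimal sketch of the renaming step, assuming ASCII .stl files whose first and last lines are the 'solid OBJECT' / 'endsolid OBJECT' pair described above (the concatenation itself is sketched under the "merge .stl files" issue):

import os

def rename_solid(stl_path):
    """Rename the solid in an ASCII .stl after the file itself (case sensitive)."""
    region = os.path.splitext(os.path.basename(stl_path))[0]
    with open(stl_path) as f:
        lines = f.readlines()
    lines[0] = 'solid %s\n' % region
    lines[-1] = 'endsolid %s\n' % region
    with open(stl_path, 'w') as f:
        f.writelines(lines)

for name in ('Podium.stl', 'Context.stl'):  # example file names
    rename_solid(name)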
