
Comments (15)

FabianGabriel avatar FabianGabriel commented on August 23, 2024 1

Thanks! The file didn't have the "execute" permission, which I then added. Now it works.

Best regards,
Fabian

from active_flow_control_past_cylinder_using_drl.

FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,

I have some problems getting the job to work on the phoenix-cluster.
When I submit the command "sbatch jobscript cylinder2D_base" I get the following error:

Submitting case cylinder2D_base
/var/tmp/slurmd_spool/job1621901/slurm_script: line 26: ./Allrun.singularity: Permission denied

The test jobscript in the user manual works just fine.

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

Seems like the Allrun script has the wrong owner or incorrect permissions. Can you navigate into the test case, run ls -al, and post the output here? You need to be the owner of the file (and the file must be executable) to run the script. You can change the owner of a file using chown:

chown $(whoami) Allrun.singularity
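If you prefer to fix it programmatically, here is a small Python sketch that adds the missing user execute bit (the equivalent of chmod u+x; the file name is the one from the error message above):

```python
import os
import stat

def ensure_executable(path):
    """Add the user execute bit if it is missing (equivalent to chmod u+x)."""
    mode = os.stat(path).st_mode
    if not mode & stat.S_IXUSR:
        os.chmod(path, mode | stat.S_IXUSR)
    return bool(os.stat(path).st_mode & stat.S_IXUSR)

# ensure_executable("Allrun.singularity")
```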

Hope that helps.
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,

Now that I ran the first simulations I've come across a bit of a problem regarding the overall computation time:
The simulation times for Re = 100 were as follows (numbers taken from log.pimpleFoam):

"Mesh size"   Number of cells   Time        Mean Co (after 1 s)   Max Co (after 1 s)
25            4625              229.83 s    0.073249755           0.56789814
50            18500             601.55 s    0.073237097           0.56658645
100           74000             2889.3 s    0.073211338           0.55701695

These numbers are already much higher than in Darshan's report, even though the mesh sizes are roughly equivalent.

For higher Reynolds numbers the simulation time only increases further, so the estimated time at Re = 1000 and mesh size 100 would be about 8 h. That can't be right; it seems far too long.
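A rough fit over the numbers in the table above gives a feel for how runtime grows with cell count (a sketch; it assumes runtime scales as a power of the cell count, which holds only approximately, e.g. because finer meshes also require smaller time steps at a fixed Courant number):

```python
import math

# (cells, runtime in s) pairs for Re = 100, taken from the table above
data = [(4625, 229.83), (18500, 601.55), (74000, 2889.3)]

def scaling_exponent(n1, t1, n2, t2):
    """Fit runtime ~ C * N^p between two refinement levels."""
    return math.log(t2 / t1) / math.log(n2 / n1)

p01 = scaling_exponent(*data[0], *data[1])  # ~0.69
p12 = scaling_exponent(*data[1], *data[2])  # ~1.13
```

An exponent above one between the finer levels suggests the cost grows faster than linearly in the cell count, so extrapolations to finer meshes and higher Re should be made cautiously.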

What can I do to speed up this process?

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

Hi Fabian,
on how many cores did you run the simulation, and how did you decompose the domain? There is some headroom in the Courant number: you could almost double the time step to reach Co = 1. pimpleFoam can also handle Co > 1, but you should check the impact on the results. Does the number of cells you reported come from the checkMesh output?
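The time-step headroom can be sketched as follows (the Courant number scales linearly with the time step on a fixed mesh and flow field; the 1e-4 step below is illustrative only, not the actual deltaT of the case):

```python
def dt_for_target_co(dt, co_max, co_target=1.0):
    """Co scales linearly with dt on a fixed mesh and flow field,
    so the step that reaches a target Co is dt * co_target / co_max."""
    return dt * co_target / co_max

# with the max Co of ~0.57 reported for Re = 100, the step could grow
# by a factor of ~1.75 before hitting Co = 1 (illustrative dt):
new_dt = dt_for_target_co(1.0e-4, 0.57)  # ~1.75e-4
```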
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

I ran it on 10 cores, as you suggested in the initial issue message. For the decomposition I just changed the values in decomposeParDict. Result:

numberOfSubdomains  10;

method              hierarchical;

coeffs
{
    n               (10 1 1);
}

The number of cells came from the log.blockMesh output. checkMesh produces the same numbers though.

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

10 might have been a bit too much. With 5 subdivisions in x, you end up with almost square subdomains. It may also be worth checking up to how many processors you still get a speed-up.
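The reasoning behind the 5 subdivisions can be sketched numerically, assuming the standard Schaefer-Turek domain of 2.2 m x 0.41 m (an assumption; the 0.41 also appears in the inlet expression later in this thread):

```python
# Assumed domain size of the standard Schaefer-Turek 2D cylinder benchmark:
# length 2.2 m, height 0.41 m.
L, H = 2.2, 0.41

def subdomain_aspect(nx):
    """Width-to-height ratio of one subdomain for a hierarchical (nx 1 1) split."""
    return (L / nx) / H

a10 = subdomain_aspect(10)  # ~0.54: narrow vertical strips
a5 = subdomain_aspect(5)    # ~1.07: almost square
```

Subdomains close to square tend to minimize the processor-boundary faces per subdomain, which reduces communication overhead.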
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,
I hope you had a nice weekend.
I have now adjusted the thickness and run the simulation again. However, the results still do not match Darshan's simulation (200 is Darshan's simulation, the other ones are mine):
[figure: mesh_dp_cd (1)]
As you can see the values are much closer to Darshan's and to each other but still not perfect.
I can't really figure out why they still don't match up. Is there another normalisation variable that I have not yet changed?

Also, the "25" mesh size is bothering me a bit. It should be roughly comparable to Darshan's "100" mesh size (5325 vs. 5506 cells), but the results are vastly different.
Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

Hi Fabian,
the results look fine. Darshan's mesh had a very different topology than yours, so I would not expect the same convergence behavior. According to table 4 of this benchmarking article, the upper bound for the drag should be between 3.22 and 3.25. Eyeballing, I would say your results are almost perfect. The refinement in your setup is much more uniform than in Darshan's setup. The uniformity leads to lower discretization errors at the cost of a higher cell count. I would still add another refinement level to be sure. Don't worry too much about Darshan's results. The literature reference is more important.
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,
thanks, that's good to know. This is now the result with all 4 mesh sizes:
[figure: mesh_dp_cd (4)]
The values are like this:

"Mesh size"   Number of cells   Time                 Mean Co (after 4 s)   Max Co (after 4 s)
25            5325              232.09 s             0.092931529           0.596277
50            21250             762.14 s             0.093645126           0.61540178
100           85000             3566.77 s            0.093890705           0.62253628
200           340000            58581.3 s (~16 h)    0.094134363           0.62508912

I also ran the first simulation at a higher Reynolds number. I chose Re = 400, but the results are not what I expected:
[figure: mesh_dp_cd (2)]
Is the mesh size just too small here or is there another problem?
The blockMeshDict file is the same as in 100_50.
controlDict:

    deltaT     1.25e-4;
    magUInf    4.0;

setExprBoundaryFieldsDict:

    expression #{ 4*6*pos().y()*(0.41-pos().y())/(0.41*0.41)*$[(vector)vel.dir] #};

"Mesh size"   Number of cells   Time       Mean Co (after 4 s)   Max Co (after 4 s)
50            21250             2784.8 s   0.10366097            0.75172798
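As a cross-check of these settings (a sketch assuming the standard benchmark values d = 0.1 m for the cylinder diameter and nu = 1e-3 m^2/s for the kinematic viscosity, neither of which is stated explicitly in this thread):

```python
# Assumed benchmark values (not stated in the thread):
d, nu = 0.1, 1e-3

u_max = 6.0                 # peak of the parabolic inlet profile (the factor 6 in the expression)
u_mean = 2.0 / 3.0 * u_max  # a parabola's mean is 2/3 of its peak -> 4.0, matching magUInf

reynolds = u_mean * d / nu  # ~400, matching the target Re
```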

The max Courant number also fluctuates quite a bit: from about 0.75 at t = 4 s to about 0.68 at t = 4.019 s, repeating periodically.
The mean Courant number, on the other hand, stays almost constant.
Is that normal at higher speeds?

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

Hi Fabian,

thanks for the update. A couple of tips and suggestions:

  • comparing the time-dependent signals of lift and drag for various refinement levels is always a good first assessment; for a more rigorous, quantitative comparison, look at how min/max/mean of lift and drag change depending on the mesh density (the signals are hard to compare precisely if there is even a small phase shift)
  • if a simulation is not working as expected, the first two things to do are:
    • quick visual assessment of the flow field (p and U in your case) to see if there is something suspicious (extreme values, checkerboard patterns, unphysical behavior in general)
    • quick look at the log file: did the simulation converge? If you look at the fvSolution file, you see that the p-U coupling is residual-controlled (currently 0.0001) and that at most 50 coupling iterations may be executed; the solver should always converge before the maximum number of iterations is reached (except for the first few time steps; look for the message PIMPLE: converged in xx iterations); other suspicious signs in the log file are: linear solvers reaching 1000 iterations, the max Courant number going above one, or any other kind of warning message
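The log check above can be partially automated; a minimal sketch (the exact message format, "PIMPLE: converged in N iterations", is assumed from typical OpenFOAM output and may differ between versions):

```python
import re

def pimple_iteration_counts(log_text):
    """Collect the PIMPLE iteration count per time step from a pimpleFoam log.
    The message format is assumed from typical OpenFOAM output and may vary."""
    return [int(n) for n in re.findall(r"PIMPLE: converged in (\d+) iterations", log_text)]

# illustrative log excerpt, not real solver output:
sample = "PIMPLE: converged in 4 iterations\nPIMPLE: converged in 50 iterations\n"
counts = pimple_iteration_counts(sample)
hit_cap = [n for n in counts if n >= 50]  # reached the 50-iteration cap -> inspect convergence
```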

Your settings for Re=400 look correct. The 25-cell mesh is probably too coarse to converge. You can try a smaller time step (maybe half the current one), but the mesh is really very coarse and not a candidate for further simulations anyway. If the simulation did not converge, simply report that convergence behavior (e.g., "did not converge").

I hope this answers your question.
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,
I have now also run the simulation for Re=400 and mesh size 100. The results were very similar. I had already looked at the log.pimpleFoam file and did not notice anything unusual: the simulation converged in about 4 to 7 iterations, and the Courant number never exceeded 1. I also looked at the flow fields for U and p, but I didn't notice anything unusual there either; the flow fields hardly differ between mesh sizes 50 and 100. After your message, however, the p field caught my eye. Are these already the checkerboard patterns you mentioned?
[figure: 400_100_p]

Also the U-field here:

[figure: 400_100_U]

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

Hi Fabian,
the p and U fields look absolutely fine; you see the classical von Kármán vortex street. In my estimate, only the mesh with Nx=25 delivered strange results. The other curves in the drag plot above are fine as well.
Best, Andre


FabianGabriel avatar FabianGabriel commented on August 23, 2024

Hi Andre,

Sorry, the graphic in the post above was wrong. I never actually ran the size-25 simulation; that one was already the size-50 run. The others were left over from the Re=100 batch, which I accidentally didn't remove.
This is the correct one:
[figure: mesh_dp_cd_400]

Best Regards,
Fabian


AndreWeiner avatar AndreWeiner commented on August 23, 2024

I see. The results might be correct then, and the visualization is the problem. If you look at fig. 3 of the article by Romain Paris et al. (cloud folder, paris2021.pdf), you see that the frequency/Strouhal number as well as the drag increase with the Reynolds number. I also found this report, which may serve as a reference even though the setup is different (the confinement in the channel has a significant impact).

Another point is that you're plotting against the dimensionless (convective) time (eq. 2.2 in the report):
\tilde{t} = t U_average / d
I think Darshan forgot to divide by the diameter, and the average inlet velocity was unity in his case. However, his Reynolds number was constant. For you, the dimensionless time changes with the Reynolds number/inlet velocity.
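The effect can be sketched numerically (assuming the standard benchmark diameter d = 0.1 m, which is not stated explicitly in this thread):

```python
# Convective (dimensionless) time t* = t * U_mean / d; d = 0.1 m assumed.
d = 0.1

def t_star(t, u_mean):
    return t * u_mean / d

# the same 4 s of physical time covers four times as many convective units
# at the higher inlet velocity:
t_low = t_star(4.0, 1.0)   # Re = 100 case (U_mean = 1) -> 40 convective units
t_high = t_star(4.0, 4.0)  # Re = 400 case (U_mean = 4) -> 160 convective units
```

This is why plots at different Reynolds numbers span very different dimensionless-time windows even for the same physical end time.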

To sum up: look at a shorter time window, and your simulations should look fine.

Best, Andre

