boschresearch / pylife

a general library for fatigue and reliability

Home Page: https://pylife.readthedocs.io

License: Apache License 2.0

Shell 0.10% Python 99.52% Batchfile 0.18% Jupyter Notebook 0.20%
fatigue reliability engineering lifetime material mechanical-engineering education material-science

pylife's Introduction

pyLife – a general library for fatigue and reliability


pyLife is an Open Source Python library for state of the art algorithms used in lifetime assessment of mechanical components subjected to fatigue.

Purpose of the project

This library was originally compiled at Bosch Research to collect algorithms needed by different in-house software projects dealing with lifetime prediction and material fatigue on the component level. In order to further extend and scrutinize it, we decided to release it as Open Source. Read this article about pyLife's origin.

We therefore welcome collaboration not only from science and education but also from other commercial companies dealing with the topic. We also recommend this library to university teachers for educational purposes.

The company Viktor has set up a web application for Wöhler test analysis based on pyLife code.

Status

pyLife-2.0.3 has been released. For the time being, we hope not to introduce breaking changes. That does not mean the release is finished and perfect. We will make small improvements, especially to the documentation, in the upcoming months and release them as 2.0.x releases. Once we have noticeable feature additions, we will publish a 2.x.0 release. There is no ETA for that.

Contents

There are/will be the following subpackages:

  • stress everything related to stress calculation

    • equivalent stress
    • stress gradient calculation
    • rainflow counting
    • ...
  • strength everything related to strength calculation

    • failure probability estimation
    • S-N-calculations
    • local strain concept: FKM guideline nonlinear
    • ...
  • mesh FEM mesh related stuff

    • stress gradients
    • FEM-mapping
    • hotspot detection
  • util all the more general utilities

    • ...
  • materialdata analysis of material testing data

    • Wöhler (SN-curve) data analysis
  • materiallaws modeling material behavior

    • Ramberg Osgood
    • Wöhler curves
  • vmap an interface to VMAP
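To illustrate the kind of model found in materiallaws, here is a minimal Ramberg-Osgood sketch in plain NumPy (illustrative steel-like parameter values, not pyLife's own API):

```python
import numpy as np

def ramberg_osgood_strain(stress, E, K, n):
    """Total strain = elastic part + plastic part (Ramberg-Osgood).

    stress : stress amplitude(s), same unit as E and K (e.g. MPa)
    E      : Young's modulus
    K      : cyclic strength coefficient
    n      : cyclic strain hardening exponent
    """
    stress = np.asarray(stress, dtype=float)
    return stress / E + (stress / K) ** (1.0 / n)

# Example with illustrative values only
strain = ramberg_osgood_strain(400.0, E=206_000.0, K=1184.0, n=0.187)
```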

License

pyLife is open-sourced under the Apache-2.0 license. See the LICENSE file for details.

For a list of other open source components included in pyLife, see the file 3rd-party-licenses.txt.

pylife's People

Contributors

ajaust, alexander-maier, choeppler, dkreuter, johannes-mueller, kissgyongyver, m-wieler, maierbn, nae2rng, old2rng


pylife's Issues

Demos about mesh import not linked in the docs

The files import_mesh_ansys.nblink and import_mesh_vmap.nblink exist but they are not linked in the docs. Either both or at least one of them should be linked somewhere (cookbook or tutorials). nblink files that are not linked in the docs should probably not exist.

Doubly linked demo notebooks in the docs

There are a couple of FKM-related demo notebooks that are linked in tutorials.rst as well as in cookbook.rst. This leads to confusion because Sphinx recognizes them as the same file: clicking one of them on the Cookbook page takes you to the Tutorial page.

We should decide which notebook is linked from where.

Deprecation warning in CI

https://github.com/boschresearch/pylife/actions/runs/3378057075/jobs/5607765064#step:5:72

=============================== warnings summary ===============================
../../../../../opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/site-packages/pymc/aesaraf.py:49
  /opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/site-packages/pymc/aesaraf.py:49: DeprecationWarning: The module `aesara.sandbox.rng_mrg` is deprecated. Use the module `aesara.tensor.random` for random variables instead.
    from aesara.sandbox.rng_mrg import MRG_RandomStream as RandomStream

src/pylife/stress/rainflow/fourpoint.py:27
  /home/runner/work/pylife/pylife/src/pylife/stress/rainflow/fourpoint.py:27: DeprecationWarning: invalid escape sequence '\-'
    """Implements four point rainflow counting algorithm.

tests/core/test_broadcaster.py: 5 warnings
tests/materiallaws/test_woehlercurve.py: 1 warning
tests/strength/test_fatigue.py: 17 warnings
  /home/runner/work/pylife/pylife/src/pylife/core/broadcaster.py:209: FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)`
    df.loc[:, c] = self._obj[c]

tests/stress/test_timesignal.py::test_polyfit_gridpoints1
  /opt/hostedtoolcache/Python/3.10.8/x64/lib/python3.10/site-packages/_pytest/python.py:199: PytestReturnNotNoneWarning: Expected None, but tests/stress/test_timesignal.py::test_polyfit_gridpoints1 returned           x   id  time
  time                  
  0      0.00  0.0     0
  1      2.25  0.0     1
  2      4.50  0.0     2
  3      6.75  0.0     3
  4      9.00  0.0     4
  5     16.00  0.0     5
  6     25.00  0.0     6
  7     36.00  0.0     7
  8     49.00  0.0     8
  9     64.00  0.0     9
  10    81.00  0.0    10, which will be an error in a future version of pytest.  Did you mean to use `assert` instead of `return`?
    warnings.warn(

tests/vmap/test_vmap_export.py: 90 warnings
tests/vmap/test_vmap_import.py: 40 warnings
  /home/runner/work/pylife/pylife/src/pylife/vmap/vmap_import.py:409: FutureWarning: iteritems is deprecated and will be removed in a future version. Use .items instead.
    for element_id, node_ids in connectivity.iteritems():
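Most of these warnings have mechanical fixes. A sketch of the two simplest ones, using toy data rather than the actual pyLife code:

```python
import pandas as pd

connectivity = pd.Series([[1, 2, 3], [2, 3, 4]], index=[10, 11])

# FutureWarning fix: replace the deprecated .iteritems() with .items()
for element_id, node_ids in connectivity.items():
    pass

# DeprecationWarning fix: docstrings containing backslashes become raw strings
def four_point():
    r"""Implements four point rainflow counting algorithm.

    A ``\-`` sequence in a normal string literal is an invalid escape;
    prefixing the docstring with ``r`` avoids the warning.
    """
```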

How to understand the effect of TN and TS on the Wöhler curve?

Discussed in #75

Originally posted by XiongYuZeng June 27, 2024
Hi pyLife group:
I need some help understanding TN and TS, although I have read the documentation about them.
I realize that if I do not provide TN and TS, the default value 1 is used. It seems that if TN and TS are 1, I am not able to transform the Wöhler curve to another failure probability: the code woehler_curve_data.woehler.transform_to_failure_probability(new_failure_probability).to_pandas() gives me the same data no matter which value I assign to new_failure_probability. Why?

The help text states the following, about which I have a few questions.

TN: The scatter in cycle direction, (N_90/N_10)
If the key is missing it is assumed to be 1.0 or calculated from TS if given.

TS: The scatter in load direction, (S_90/S_10)
If the key is missing it is assumed to be 1.0 or calculated from TN if given.

Questions:

  1. What do (N_90/N_10) and (S_90/S_10) mean?
  2. What is a reasonable value range for TS and TN if I only have values for SD, ND, k_1 and k_2?

This needs to be improved in the docs.
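Why TN = 1 makes the transformation a no-op can be seen numerically. Assuming log-normal scatter where TN = N_90/N_10 (as in the quoted help text), the implied log10 standard deviation is log10(TN) divided by the spread between the 10 % and 90 % normal quantiles; a sketch (not pyLife's actual implementation):

```python
import numpy as np
from scipy.stats import norm

def shift_ND(ND_50, TN, failure_probability):
    """Shift the median endurance ND_50 to another failure probability,
    assuming log-normal scatter with TN = N_90 / N_10."""
    # log10 standard deviation implied by the 10 % / 90 % scatter span
    sigma_log = np.log10(TN) / (norm.ppf(0.9) - norm.ppf(0.1))
    return 10 ** (np.log10(ND_50) + norm.ppf(failure_probability) * sigma_log)

# With TN == 1 the scatter is zero, so every probability maps to the median:
print(shift_ND(1e7, TN=1.0, failure_probability=0.1))   # -> 10000000.0
print(shift_ND(1e7, TN=1.0, failure_probability=0.9))   # -> 10000000.0
# With TN > 1 the curve actually moves:
print(shift_ND(1e7, TN=5.0, failure_probability=0.1))
```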

Drop matplotlib dependency

We depend on matplotlib because the time signal handling uses matplotlib.mlab.psd, which calculates the power spectral density. There is functionality in scipy that claims to do the same thing.

Task

Check whether we can replace matplotlib.mlab.psd with scipy.signal.periodogram and thus drop the dependency on matplotlib.

Fix Woehler curve slope estimator

When evaluating the slope of the Wöhler curve, all fractures are currently used for its estimation. According to DIN 50100:2022-12 chapter 9.2.2 "Perlenschnurverfahren", only the fractures in the finite life region are to be used for estimating the slope.
It should also be checked whether the scatter in lifetime direction should be based on all fractures or only on the fractures in the finite life region.
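The restriction to finite-life fractures can be sketched as a masked log-log regression (illustrative data and code, not pyLife's actual estimator):

```python
import numpy as np

def estimate_slope(load, cycles, fractured, finite_life_mask):
    """Estimate the Woehler slope k from fractures in the finite life region,
    as DIN 50100:2022-12 ch. 9.2.2 ("Perlenschnurverfahren") prescribes."""
    use = np.asarray(fractured) & np.asarray(finite_life_mask)
    logS = np.log10(np.asarray(load)[use])
    logN = np.log10(np.asarray(cycles)[use])
    # Woehler line: log N = log ND - k * (log S - log SD)  ->  fit slope is -k
    slope, _ = np.polyfit(logS, logN, 1)
    return -slope

load     = [500., 450., 400., 350., 300.]
cycles   = [1e5, 2e5, 4e5, 9e5, 1e7]
fracture = [True, True, True, True, False]   # last specimen is a runout
finite   = [True, True, True, True, False]   # ...and outside the finite life region
k = estimate_slope(load, cycles, fracture, finite)
```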

Wrong result in simple damage calculation

I tried to calculate the damage for a simple synthetic signal, for which the damage calculation can be verified manually.
Did I make a mistake in using pyLife or is there a bug?

Pylife example

import numpy as np
import pandas as pd

import pylife.stress.timesignal as ts
import pylife.stress.rainflow as RF
import pylife.stress.rainflow.recorders as RFR

woehler_curve = pd.Series({
    'SD':  1000.,
    'ND':  1e7,
    'k_1': 5,
})

data_calc = np.sin(np.linspace(0,2_000*np.pi,200_000))*1_000 + np.sin(np.linspace(0,1_000*np.pi,200_000))*1_000

#Rainflow counting
rainflow_bins = 64
recorder   = RFR.FullRecorder()
detector   = RF.FKMDetector(recorder=recorder).process(data_calc)
rf_series  = detector.recorder.histogram(rainflow_bins)

#Damage calculation
damage_miner_elementary = woehler_curve.fatigue.miner_elementary().damage(rf_series.load_collective)

print(damage_miner_elementary.sum())
#0.0005812095296612522
# -> Manual calculation: 0.0008451272205597859

Calculating the damage manually

There are 500 cycles with an amplitude of 369.00016409606667 and 500 cycles with an amplitude of 1760.172592660727.

k=5
SD=1000
ND=1e7
LW=500

damage = LW/(ND*(1760.1725926607273/SD)**(-k)) + LW/(ND*(369.00016409606667/SD)**(-k))
print(damage)
#0.0008451272205597859

Damage calc

Issue #1:

  • Feature "Miner Original" needed to compare given SN-curve (with 2 slopes) with assumed lowest possible damage sum (set k2 = inf).
  • e.g. given k1>1,k2 =22, S_E>1, N_E>1, T_S>1:
    --> mat.fatigue.damage(LC_df) --> return D_given_mat (based on given SN)
    --> needed: mat.fatigue.miner_original().damage(LC_df) --> return D_Miner_original (where only slope k2 is changed and set k2 = inf)

Issue #2:
Re-running the same Jupyter notebook cell yields different damage values (here for Miner Haibach). The cell:
damage_miner_original = mat.fatigue.damage(LC_df)
damage_miner_elementary = mat.fatigue.miner_elementary().damage(LC_df)
damage_miner_haibach = mat.fatigue.miner_haibach().damage(LC_df)

It looks like the fatigue damage calculation functions change the slope k2 in place and return the changed value. Possible issue: the last call in the cell is damage_miner_haibach, which changes k2 to 2*k1 - 1; on re-running the cell, Miner original therefore yields the same damage sum as Miner Haibach. It would be great if the damage functions did not modify their input.
example: first run with k1 = 5, k2 = 9:
Miner Haibach: 0.0030593576374770533
Miner Elementar: 0.005205560532703841
Miner Original: 0.0030593576374770533

second run with k1 = 5, k2 = 22:
Miner Haibach: 0.0030593576374770533
Miner Elementar: 0.005205560532703841
Miner Original: 0.0029943003848670107

third run with k1 = 5, k2 = 22 (just re-run the cell):
Miner Haibach: 0.0030593576374770533
Miner Elementar: 0.005205560532703841
Miner Original: 0.0030593576374770533
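The re-run behaviour described above is the classic symptom of a method mutating shared state in place. A minimal reproduction of the pattern and its fix (toy code, not pyLife's actual implementation):

```python
import copy

class Fatigue:
    """Toy model of an accessor whose modifier method mutates shared data."""
    def __init__(self, data):
        self._data = data

    def miner_haibach(self):
        # Bug pattern: changing shared state in place leaks into later calls
        self._data['k_2'] = 2 * self._data['k_1'] - 1
        return self

    def miner_haibach_fixed(self):
        # Fix pattern: operate on a private copy, leave the caller's data alone
        clone = copy.deepcopy(self)
        clone._data['k_2'] = 2 * clone._data['k_1'] - 1
        return clone

data = {'k_1': 5, 'k_2': 22}
Fatigue(data).miner_haibach()
print(data['k_2'])        # 9  -- the caller's dict was changed

data = {'k_1': 5, 'k_2': 22}
Fatigue(data).miner_haibach_fixed()
print(data['k_2'])        # 22 -- unchanged
```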

Wöhler analyzes with zero or one pure fracture load level

Requirements

  • If there is exactly one pure fracture load level, the Wöhler analyzer must refuse the analysis: on the one hand we know that the Wöhler curve is not flat, on the other hand we cannot calculate a slope. That's why we raise an exception.
  • If there is no pure fracture load level at all, we assume k_1 = inf and TN = 1.0.

Loosen version constraints for numpy (currently ==1.23.5)

Issue

I recently encountered version conflicts between pylife and other packages due to the pinned numpy version.

Is there a reason why numpy is pinned to ==1.23.5?
If not, would it be possible to loosen the version constraints so that more recent numpy versions are supported?

The test suite passes with all Python/numpy combinations (except Python 3.8, which is only supported for numpy<1.25):

Python | numpy 1.24 | numpy 1.25    | numpy 1.26
3.8    | ✔️          | not supported | not supported
3.9    | ✔️          | ✔️             | ✔️
3.10   | ✔️          | ✔️             | ✔️
3.11   | ✔️          | ✔️             | ✔️

BTW: version 1.23 will reach its end of life on 24 Jun 2024.
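A relaxed constraint matching the tested combinations above could look like the following pyproject.toml fragment (a sketch only; the exact bounds are for the maintainers to decide):

```toml
# pyproject.toml (sketch): replace the pin "numpy==1.23.5" with a range
# that matches the tested Python/numpy matrix above.
[project]
dependencies = [
    "numpy>=1.24,<2.0; python_version >= '3.9'",
    "numpy<1.25; python_version == '3.8'",
]
```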

Wrong explanation for RunOut: it should be RunOut: False, not RunOut: True

Discussed in #76

Originally posted by Jaz939 June 27, 2024
Hi,
In the beginning, I would like to thank you for the great script.

When I reworked your demo script woehler_analyzer with the data set fatigue-data-fractures, I noticed that in the library script fatigue_data.py the boolean for RunOut is addressed wrongly. It is stated that RunOut is True; with that setting the finite zone and the infinite zone are wrong.
The correct value for RunOut is False. With that, the program works correctly.

Again thanks for the script.
Roman

BUG: Rainflow counting fails with a float index

Describe the bug
Rainflow counting fails when fed a pd.Series with a float index.

To Reproduce
Code snippet to reproduce the bug:

import pandas as pd
import pylife.stress.rainflow as RF

signal = pd.Series([0., 1., 0.], index=[1.0, 2.0, 3.0])

RF.FourPointDetector(recorder=RF.FullRecorder()).process(signal)

Expected result
Should work without error.

Observed result

Traceback (most recent call last):
  File "/tmp/rainflowtest.py", line 7, in <module>
    RF.FourPointDetector(recorder=RF.FullRecorder()).process(signal)
  File "/home/jmu3si/Devel/pylife/src/pylife/stress/rainflow/fourpoint.py", line 173, in process
    idx_2 = turns_index[residual_index.pop()]
            ~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
IndexError: index 2 is out of bounds for axis 0 with size 2
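The traceback suggests label-based versus positional indexing is involved. Until the detector handles float indexes, a workaround (under the assumption that the detectors only need the values) is to strip the index before processing; the underlying pandas pitfall can be shown without pyLife:

```python
import pandas as pd

signal = pd.Series([0., 1., 0.], index=[1.0, 2.0, 3.0])

# Likely pitfall: on a float-indexed Series, integer keys are looked up as
# labels, so plain-bracket positional access does not behave as expected.
try:
    signal[0]
    print("interpreted positionally")
except KeyError:
    print("position 0 was interpreted as label 0 -> KeyError")

# Workaround: hand the detector a plain array instead of the Series.
values = signal.to_numpy()        # or: signal.reset_index(drop=True)
assert values[0] == 0.0
```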

Cythonize rainflow counters

Our current rainflow implementations perform quite poorly because they are written in plain Python. To meet the demand for more performance that is sometimes raised, we should cythonize the modules and see whether we can speed them up.
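Before cythonizing, it may be worth measuring how much a NumPy-vectorized turning point extraction already buys, since it removes the per-sample Python loop (a sketch, under the assumption that the detectors can consume precomputed turning points):

```python
import numpy as np

def turning_points(samples):
    """Return indices of local extrema, i.e. where the sample-to-sample
    difference changes sign."""
    samples = np.asarray(samples, dtype=float)
    diffs = np.diff(samples)
    # A turning point sits between two differences of opposite sign.
    sign_change = diffs[:-1] * diffs[1:] < 0.0
    return np.flatnonzero(sign_change) + 1

signal = np.array([0., 1., 0., 1., -1., 1., 0.])
print(turning_points(signal))   # -> [1 2 3 4 5]
```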

Wrong detector class instantiation in rainflow fourpoint test

The file tests/stress/rainflow/test_fourpoint.py includes all tests of the four point detector, as its name indicates.

I found a three point class instantiation in the test case TestFourPointsTwoAmplitudesSplit, line 141 of tests/stress/rainflow/test_fourpoint.py.

class TestFourPointsTwoAmplitudesSplit(unittest.TestCase):
    def setUp(self):
        signal = np.array([0., 1., 0., 1., -1., 1., 0., 1., -1., 1., 0])
        self._fr = RFR.FullRecorder()
        self._dtor = RF.ThreePointDetector(recorder=self._fr).process(signal[:4]).process(signal[4:])

Should the last line be written as below?

self._dtor = RF.FourPointDetector(recorder=self._fr).process(signal[:4]).process(signal[4:])

RainflowCounterFKM - last and first sample point

Hi,

I get wrong results when using the stress.rainflow.RainflowCounterFKM class. The following input should result in one closed loop but does not:

import numpy as np

import pylife.stress.rainflow as rainflow

m = np.array([0., 100., 0., 100.])
rfc = rainflow.RainflowCounterFKM().process(m)
print(f"from={rfc.loops_from}", f"to={rfc.loops_to}")

Instead rfc.loops_from and rfc.loops_to are empty. Using the class rainflow.RainflowCounterThreePoint the result is the following: from=[100.0] to=[0.0]

The following code leads to the same result as rainflow.RainflowCounterThreePoint

import numpy as np

import pylife.stress.rainflow as rainflow

m = np.array([0., 100., 0., 100., 99.])
rfc = rainflow.RainflowCounterFKM().process(m)
print(f"from={rfc.loops_from}", f"to={rfc.loops_to}")

Is this a problem with the _get_new_turns method? The method rainflow.RainflowCounterThreePoint().process appends the last sample point, while stress.rainflow.RainflowCounterFKM().process does not.

turns_np = np.concatenate((residuals, self._get_new_turns(samples), samples[-1:]))

Add visuals to README

Is your feature request related to the way of working with pyLife, i.e. the API? If yes, please describe.
To make the README visually more appealing, consider adding visuals that could help.

Is your feature request about new functionality, new calculation model? If yes please describe
It does not refer to functionality; the request concerns the presentation of the repository.

Describe the solution you'd like
A start would be to add the logo of pyLife:
pyLife logo

And a GIF of the web-based application of VIKTOR (GIF can be requested)

Indicate if you or someone you know would volunteer to work on this
I volunteer to contribute.


Shutdown Bayesian Wöhler evaluation

The Bayesian Wöhler evaluator turns out to behave unstably in too many cases. Therefore we will shut it down. That means the code will remain in pyLife's code base, but importing it will raise an exception explaining the situation.

Incorrect autodoc links for fkm_nonlinear

The functions referenced in docs/strength/fkm_nonlinear are not found.

Python 3.11.4 (main, Jul  5 2023, 13:45:01) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pylife.strength.fkm_nonlinear
>>> pylife.strength.fkm_nonlinear.perform_fkm_nonlinear_assessment
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'pylife.strength.fkm_nonlinear' has no attribute 'perform_fkm_nonlinear_assessment'
>>> pylife.strength.fkm_nonlinear.for_material_group
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'pylife.strength.fkm_nonlinear' has no attribute 'for_material_group'
>>>

Support for elastic-plastic FE calculation

The FKM nonlinear guideline evaluates the elastic-plastic behaviour based on an elastic solution and a notch approximation relationship like Neuber or Beste/Seeger. In some cases the material response should instead be calculated directly by the finite element method, e.g. with strong nonlinearities such as contact. For this case it would be nice if pyLife could import the elastic-plastic result directly from the finite element calculation and use it.
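The notch approximation mentioned above can be sketched numerically: Neuber's rule (sigma * eps = sigma_e^2 / E) combined with the Ramberg-Osgood strain gives the local elastic-plastic stress from an elastic FE stress. A bisection sketch with illustrative parameter values:

```python
def neuber_local_stress(sigma_e, E, K, n, tol=1e-8):
    """Solve Neuber's rule  sigma * eps(sigma) = sigma_e**2 / E  by bisection,
    with the Ramberg-Osgood strain  eps = sigma/E + (sigma/K)**(1/n)."""
    target = sigma_e ** 2 / E

    def residual(sigma):
        return sigma * (sigma / E + (sigma / K) ** (1.0 / n)) - target

    lo, hi = 0.0, sigma_e          # the local stress is below the elastic one
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:    # residual grows monotonically with sigma
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative: elastic FE stress 600 MPa, steel-like material parameters
sigma_local = neuber_local_stress(600.0, E=206_000.0, K=1184.0, n=0.187)
```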

Rainflow binning gives wrong behaviour

Given slightly noisy data, the rainflow counting algorithm gives incorrect results due to the order of operations (counting before binning) and the way binning is performed.

import pylife
import pylife.stress.rainflow as RF
import pandas as pd
import numpy as np

df = pd.DataFrame({"time": np.array([ 1. ,  2. ,  3. ,  4. ,  4.1,  4.8,  5. ,  6. ,  7. ,  8. ,  9.1,
                                    10.1, 11. , 12. , 13. , 14. ]),
                    "value": np.array([2.2, 7.3, 4.2, 8.1, 8.4, 8.2, 2.1, 4.9, 4.2, 6. , 1. , 6.9, 4. ,
                           5. , 2. , 5. ])})
# this should have four cycles: 5-4, 7-4, 4-5, 2-6
rfc = RF.FourPointDetector(recorder=RF.LoopValueRecorder())
bins = np.linspace(1,8,8)
n_bins = len(bins)
rfc.process(df["value"])
res = rfc.recorder.matrix(bins = bins)
result_df = pd.DataFrame(res[0],columns=res[2][:n_bins-1])
result_df["from"] = res[1][:n_bins-1]
result_df = result_df.melt(var_name="to", id_vars = "from")
result_df[result_df["value"] >0]


**Expected result**

        from   to  value
    24   5.0  4.0    1.0
    27   7.0  4.0    1.0
    31   4.0  5.0    1.0
    36   2.0  6.0    1.0
First, a cycle from 4 to 4 doesn't make much sense in my opinion. For this reason binning should happen _before_ counting, not after.
Second, 4.9 (in "value") gets binned to 4 because of the way binning is done internally, which is by all means undesired.

**Observed result**

        from   to  value
    24   4.0  4.0    1.0
    27   7.0  4.0    1.0
    31   4.0  5.0    1.0
    36   2.0  6.0    1.0
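The "bin before counting" suggestion can be sketched with np.digitize: snapping every sample to the center of its bin before feeding the detector prevents near-equal values from landing in different bins (illustrative sketch, not pyLife's current implementation):

```python
import numpy as np

def prebin(values, bins):
    """Snap every sample to the center of its bin before rainflow counting."""
    values = np.asarray(values, dtype=float)
    centers = 0.5 * (bins[:-1] + bins[1:])
    idx = np.clip(np.digitize(values, bins) - 1, 0, len(centers) - 1)
    return centers[idx]

bins = np.linspace(1, 8, 8)              # unit-wide bins, as in the report
snapped = prebin([2.2, 7.3, 4.2, 8.1, 4.9], bins)
print(snapped)                           # -> [2.5 7.5 4.5 7.5 4.5]
```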

Environment (please complete the following information):

  • OS: Windows 10 with Ubuntu subsystem
  • How installed: pip, pypi artifactory
  • Version: 2.0.0a7

