pyscience-projects / pyevtk

PyEVTK (Python Export VTK) exports data to binary VTK files for visualization/analysis with packages like Paraview, VisIt, and Mayavi.
pyevtk's Introduction


PREAMBLE:

This package in its entirety belongs to Paulo Herrera, and it is currently hosted at:

https://github.com/paulo-herrera/PyEVTK

I've misappropriated (well, forked and repackaged really) this package in order to host it on PyPI and allow for its easy distribution and installation, as I use it a lot. I take no credit whatsoever for it.

My fork is hosted under:

https://github.com/pyscience-projects/pyevtk

This package is nowadays primarily maintained by René Fritze and Xylar Asay-Davis.

INTRODUCTION:

The EVTK (Export VTK) package allows exporting data to binary VTK files for visualization and data analysis with any of the visualization packages that support VTK files, e.g. Paraview, VisIt, and Mayavi. EVTK does not depend on any external library (e.g. VTK), so it is easy to install on different systems.

Since version 0.9 the package is composed only of a set of pure Python files, hence it is straightforward to install and run on any system where Python is installed. EVTK provides low- and high-level interfaces. While the low-level interface can be used to export data stored in any type of container, the high-level functions make it easy to export data stored in Numpy arrays.

INSTALLATION:

This package is being hosted on PyPI under:

https://pypi.python.org/pypi/PyEVTK

and can be installed with pip: pip install pyevtk

DOCUMENTATION:

This file, together with the examples included in the examples directory of the source tree, provides enough information to start using the package.

DESIGN GUIDELINES:

The design of the package considered the following objectives:

  1. Self-contained. The package does not require any external library with the exception of Numpy, which is becoming a standard package in many Python installations.

  2. Flexibility. It is possible to use EVTK to export data stored in any container and in any of the grid formats supported by VTK by using the low level interface.

  3. Ease of use. The high-level interface makes it very easy to export data stored in Numpy arrays. It provides functions to export most of the grid types supported by VTK: image data, rectilinear grids, and structured grids. It also includes a function to export point sets and associated data, which can be used to export results from particle and meshless numerical simulations.

  4. Performance. The aim of the package is to be used as a part of post-processing tools. Thus, good performance is important to handle the results of large simulations. However, latest versions give priority to ease of installation and use over performance.

REQUIREMENTS:

- Numpy. Tested with Numpy 1.11.3.

The package has been tested on:

- MacOSX 10.6 x86-64
- Ubuntu 10.04 x86-64 guest running on VMWare Fusion
- Ubuntu 12.04 x86-64 running Python Anaconda (3.4.3)
- Windows 7 x86-64 running Python Anaconda (3.4.3)

It is compatible with both Python 2.7 and Python 3.3. Since version 0.9, it is only compatible with VTK 6.0 and newer versions.

DEVELOPER NOTES:

It is useful to build and install the package into a temporary location, without touching the global Python site-packages directory, while developing. To do this, from the root directory, one can type:

1. python setup.py build --debug install --prefix=./tmp
2. export PYTHONPATH=./tmp/lib/python2.6/site-packages/:$PYTHONPATH

NOTE: you may have to change the Python version depending on the version installed on your system.

To test the package one can run some of the examples, e.g.: ./tmp/lib/python2.6/site-packages/examples/points.py

That should create a points.vtu file in the current directory.

SUPPORT:

I will continue releasing this package as open source, so it is free to be used in any kind of project. I will also continue providing support for simple questions and making incremental improvements as time allows. However, I also provide contract-based support for commercial or research projects interested in this package, or in topics related to data analysis and scientific programming with Python, Java, MATLAB/Octave, C/C++ or Fortran. For further details, please contact me at: [email protected].

NOTE: PyEVTK moved to GitHub. The new official page is this one (https://github.com/paulo-herrera/PyEVTK)

pyevtk's People

Contributors

anlavandier, bertta, davidlandry93, jontong, moritz-h, muellerseb, paulo-herrera, renefritze, renemilk, somada141, struoff, xylar


pyevtk's Issues

Enhancement: Time Series

How about adding time series capabilities?

I am using your package for writing out simulation data and it works pretty well, but what I had to implement myself was writing files that enclose a set of simulation results in a time series.
This has been supported in Paraview relatively recently (5.6 or so).
I am not sure if there are other ways of marking time in a file.

Let's say I have a simulation of 3 seconds with 31 "frames", i.e. one frame every 100 ms.
I write them out into a folder data/ (e.g. frame_000.vtr and so on) and place a file named data.vtr.series next to the data/ folder.

The code looks something like this:

import os, json

def save_vtr_series(out_dir, fname_prefix, series_dict):
    # Write the series dictionary to <out_dir>/<fname_prefix>.vtr.series as JSON
    fname = os.path.join(out_dir, fname_prefix + ".vtr.series")
    with open(fname, "w") as fh:
        json.dump(series_dict, fh, indent=4)

This way, I end up with a file named data.vtr.series which is essentially JSON that looks like this:

{
    "file-series-version": "1.0",
    "files": [
        {
            "name": "data/frame_000.vtr",
            "time": 0.0
        },
        {
            "name": "data/frame_001.vtr",
            "time": 9.999999999999999e-04
        },
... and so on ...
    ]
}

While I came across this vt{r,...}.series format somewhere (maybe on StackExchange), I can't find any usable documentation.

So, this seems to work for me and it might be useful for others. It has, however, a couple of drawbacks: The main one, imho, is that the series file can only be written out at the end of a simulation. Otherwise it needs to be updated, which is a relatively slow process.
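For the 31-frame example above, the series dictionary passed to save_vtr_series can be built in one comprehension. This is a sketch using the stated 100 ms spacing; the folder layout is the one from this issue:

```python
# Build the .vtr.series dictionary for 31 frames spaced 100 ms apart,
# following the data/frame_NNN.vtr naming used in this issue.
series_dict = {
    "file-series-version": "1.0",
    "files": [
        {"name": "data/frame_%03d.vtr" % i, "time": i * 0.1}
        for i in range(31)
    ],
}
# save_vtr_series(".", "data", series_dict) would then write ./data.vtr.series
```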

Invalid XML-files

pyevtk stores the data associated with the given mesh as appended data, given in a raw binary format.
Setting of raw:

encoding="raw"

Encoding as binary:
bin_pack = struct.pack(fmt, *dd)

This produces invalid XML files, and other software may be unwilling to read them (e.g. meshio, see: nschloe/meshio#786)

We should use base64-encoded data, where the binary data is encoded with ASCII characters, as suggested by VTK.

When using appended data with base64, we have to keep in mind that the offsets change, since they address the character in the base64-encoded string and not the binary offset as with raw data:

self.offset += (

meshio already implemented writing routines using base64 encoding. We should have a look there:
https://github.com/nschloe/meshio/blob/d9c05ae688858b5166630874f3ec875a43b8fd37/meshio/vtu/_vtu.py#L482
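The offset bookkeeping can be illustrated with a stdlib sketch. This is illustrative, not pyevtk or meshio code, and it assumes a 32-bit byte-count header encoded together with the payload (the exact layout depends on header_type and compression):

```python
import base64
import struct

def encode_block(values):
    """Base64-encode a block of float64 values with a UInt32 byte-count
    header, one common layout for uncompressed VTK XML binary data."""
    raw = struct.pack("<%dd" % len(values), *values)
    header = struct.pack("<I", len(raw))  # 4-byte length prefix
    return base64.b64encode(header + raw).decode("ascii")

block = encode_block([1.0, 2.0, 3.0])
raw_len = 4 + 3 * 8   # 28 bytes of header + payload
b64_len = len(block)  # 40 characters after base64 inflation (4 * ceil(28/3))
```

With raw appended data the next DataArray offset would advance by 28; with base64 it must advance by 40, the length of the encoded string.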

Dataset attributes bug

I noticed two bugs with the dataset attributes:

Cannot install from tarball

I'm preparing a conda-forge recipe for pyevtk at the moment, since I want to use it in a conda package for pymor. It turns out the current tarball does not include README.md, so setup.py from an sdist fails. I will fix this, add a check in CI, and would then create a 1.1.1 tag.

Sounds good, @somada141 ?

File generated by VtkGroup cannot be loaded into Paraview.

I am able to generate a perfectly good .vtu file using unstructuredGridToVTK. As a natural next step, I am using VtkGroup to combine multiple .vtu files into one .pvd (along the lines of this example). While it is able to generate the .pvd file without errors, the file cannot be read by Paraview (version 5.11.0). The following is the error:

ERROR: In vtkXMLReader.cxx, line 305
vtkXMLUnstructuredGridReader (0000015830D9A8E0): Error opening file C:/Users/vd6558/Downloads/Graphics/Raw_0_0.vtu

ERROR: In vtkExecutive.cxx, line 741
vtkCompositeDataPipeline (0000015841504AF0): Algorithm vtkXMLUnstructuredGridReader (0000015830D9A8E0) returned failure for request: vtkInformation (0000015852D91B30)
  Debug: Off
  Modified Time: 521679
  Reference Count: 1
  Registered Events: (none)
  Request: REQUEST_INFORMATION
  FORWARD_DIRECTION: 0
  ALGORITHM_AFTER_FORWARD: 1

structured grid normal reversed

I wrote a structured grid to a VTK file using both the VTK Python interface and pyevtk. The structured grid, when rendered with a contour in Paraview, appears dark using the file written with pyevtk. The attached image shows the VTK library output rendering on the left and the pyevtk output on the right.
Is there an easy fix for this?

[image: comparison of the two renderings]

Update PyPI version

Hello all, and thank you for creating such a helpful package: the only one, really, that doesn't need to pull in a whole libvtk. I would like to have a downstream Python library depend on pyevtk; however, the latest PyPI version is behind the latest GitHub release and conda version, which are, in turn, 22 commits behind master.

The last commits also add quite a few improvements that would be useful for downstream library developers (who publish their package to PyPI too). Would it be possible to update the PyPI version to 1.1.2 and perhaps publish the latest commits in a new release?

The official version of PyEVTK has moved

I wanted to bring to your attention that @paulo-herrera has moved the official site for PyEVTK to GitHub: https://github.com/paulo-herrera/PyEVTK

Given this move, it might be confusing to users if there is an official GitHub site and also this mirror site with modifications, particularly if this site is used for pypi and conda-forge feedstocks. I suggest you consider making pull requests with whatever changes you would like to make to the official site instead and maybe consider archiving this repo. Perhaps @paulo-herrera would be willing to add you as collaborators if there are significant contributions to be made, but that's obviously up to him.

I discuss this further in my proposed update to the conda-forge feedstock here:
conda-forge/pyevtk-feedstock#1

meshio

I believe much of pyevtk's functionality is captured in meshio (a project of mine). Perhaps worth checking out.

1.0.2 wheel for py3?

Could you upload that to PyPi too, @somada141 ?

When I get around to adding more tests than just importing submodules, we could switch to deploying tagged builds to PyPI from Travis builds. It's pretty straightforward.

Name for Blocks (Datasets) in Collection (Multi-Block Dataset)

I was able to use the pointsToVTK and VtkGroup functions to generate a multi-block data collection. However, for larger collections it becomes difficult to know which block inside the collection corresponds to which data.

After doing a bit of Internet searching and also trying it myself, I was able to find out how I can "name" the blocks (i.e. datasets) inside a collection, just by adding the "name" attribute (in this case by hand):

<?xml version="1.0"?>
<VTKFile type="Collection" version="0.1" byte_order="LittleEndian">
<Collection>
<DataSet timestep="0" group="" part="0" name="2.50mm" file="vtk/SSB_JetABC__XZ_Plane_x_loc=_2.50_mm.vtu"/>
<DataSet timestep="0" group="" part="0" name="" file="vtk/SSB_JetABC__XZ_Plane_x_loc=_15.00_mm.vtu"/>
<DataSet timestep="0" group="" part="0" file="vtk/SSB_JetABC__XZ_Plane_x_loc=_30.00_mm.vtu"/>
</Collection>
</VTKFile>

This results in the data hierarchy shown in the attached screenshot.

As you can see, for the first dataset the multi-block is named "2.50mm", as specified, and the other two are still named "Multi-block Dataset", regardless of whether the value for name was blank or not given.

I have already tried to insert it into the code; maybe you can then test it and hopefully merge it into a new tag. The changes would be the following:

Current code (pyevtk/pyevtk/vtk.py, lines 236 to 238 in 5cb3370):

def addFile(self, filepath, sim_time, group="", part="0"):
    ...
    self.xml.addAttributes(
        timestep=sim_time, group=group, part=part, file=filename
    )

Proposed change:

def addFile(self, filepath, sim_time, group="", part="0", name=""):
    ...
    self.xml.addAttributes(
        timestep=sim_time, group=group, part=part, file=filename, name=name
    )
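For completeness, a stdlib sketch shows the kind of collection file the change above would produce, matching the hand-edited XML. The helper and file names below are hypothetical, not pyevtk's API:

```python
import xml.etree.ElementTree as ET

def write_named_collection(path, entries):
    """entries: list of (name, file) tuples; writes a minimal .pvd
    collection file with a name attribute on each DataSet."""
    root = ET.Element("VTKFile", type="Collection", version="0.1",
                      byte_order="LittleEndian")
    coll = ET.SubElement(root, "Collection")
    for name, filepath in entries:
        ET.SubElement(coll, "DataSet", timestep="0", group="", part="0",
                      name=name, file=filepath)
    ET.ElementTree(root).write(path, xml_declaration=True, encoding="UTF-8")

write_named_collection("group.pvd", [("2.50mm", "vtk/plane_2.50mm.vtu")])
```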

Enhancement: export field data

None of the provided routines can handle field data.

Meshio, for example, is capable of that, but only structured meshes are supported there.

It would be nice to also export field_data passed as a dictionary in the high-level routines.

Would it be possible to just create a field data section with this low-level function:

def openData(self, nodeType, scalars=None, vectors=None, normals=None, tensors=None, tcoords=None):

?
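Field data in a VTK XML file lives in a <FieldData> section. A stdlib sketch of the kind of markup such a routine would have to emit (the array name and value are illustrative, and this is not pyevtk code):

```python
import xml.etree.ElementTree as ET

def field_data_xml(fields):
    """fields: dict mapping array names to lists of floats.
    Returns a <FieldData> element with one ascii DataArray per entry."""
    fd = ET.Element("FieldData")
    for name, values in fields.items():
        da = ET.SubElement(fd, "DataArray", type="Float64", Name=name,
                           NumberOfTuples=str(len(values)), format="ascii")
        da.text = " ".join(str(v) for v in values)
    return fd

snippet = ET.tostring(field_data_xml({"TimeValue": [1.5]}), encoding="unicode")
```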

Cheers, Sebastian

Add a conda recipe

Would you mind either uploading a source archive for 1.1.1 to pypi or making a github release from the tag, @somada141 ? For conda I'll need an archive online at a static url with a checksum.
