

pyshp's Issues

Python MemoryError exception on large shapefile

[Update] Please ignore/delete this. I was running a 32-bit version of Python, which can only address 2 GB of RAM.


I am new to PyShp (but not to Python) and am starting big by getting my dataset from http://edcintl.cr.usgs.gov/downloads/sciweb1/shared/topo/downloads/GMTED/GMTED_Metadata/GMTED2010_Spatial_Metadata.zip

It unpacks to 1.7 GB, with the .shp file being 1.2 GB.

With this code:

import shapefile
data = shapefile.Reader("data_file.shp")

instantiating the shapefile reader raises a MemoryError exception in PyShp. But I am using only 3% of my machine's 32 GB, so I don't understand it.

Is there any other approach that I can take? Can I process the file in chunks in Python? Or use some tool to split the file into chunks, then process each of them individually?
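As a generic sketch of the chunked approach (plain Python, not a pyshp API; the chunk size is a placeholder), a generator keeps only one piece of the file in memory at a time:

```python
def read_chunks(path, chunk_size=1 << 20):
    """Yield successive byte chunks so the whole file never sits in memory."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

For shapefiles specifically, splitting on raw byte boundaries would cut records in half, so a record-aware iteration (reading one shape or record at a time, if the library version supports it) is the safer route.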

More on supported field types in README

Currently there is no discussion in the README of which field types are supported, which character code represents which type, and what values each type stores. It would be useful to add a sentence or two on that.

Check that all shapes are of same type

One requirement of shapefiles is that they contain only shapes of the same type. As of right now PyShp does not check or ensure this, making it easy to create invalid shapefiles. The closest it comes is occasionally trying to (silently) force shape types to the type of the shapefile, but this can backfire and lead to unexpected shape contents (e.g. doing writer.point(...) on a Polygon shapefile).

I suggest dropping the silent type forcing, and instead checking and raising informative exceptions. Null geometries would of course have to be allowed mixed in.

How to validate shapefiles?

I find pyshp really valuable, thanks for your work. However, sometimes it creates shapefiles that I'm unable to open in certain GIS programs, namely MapInfo. The same files normally open just fine in QGIS. As MapInfo gives absolutely no information about what went wrong, I don't know where to start debugging, even though I'd be happy to contribute to pyshp.

Do you have any recommendation about how to diagnose the shapefiles?

Cannot create shapefile with NULL type

I tried to create a Shapefile of NULL type with several records. When I tried to save it I got this exception:

File "C:\Projects\Sources\Arboretum\src\shapefile.py", line 1030, in save
    self.saveShx(target)
File "C:\Projects\Sources\Arboretum\src\shapefile.py", line 993, in saveShx
    self.shapeType = self._shapes[0].shapeType
IndexError: list index out of range

The reason is this condition: if not self.shapeType: in the Writer.saveShp and Writer.saveShx methods. When the condition is changed to if self.shapeType is None: the Shapefile is saved successfully.

Spelling Errors in README

During a quality review of the last upload of pyshp in Debian, the following spelling errors were identified:
$ codespell --quiet-level=3
...
./README.txt:265: dicussed ==> discussed
./README.txt:490: seperate ==> separate
./README.txt:551: nmae ==> name

I would have gladly produced a pull request for such a minor fix, but the same errors occur in PKG-INFO, README.pdf & README.html. Assuming one of them is the master, how were the other files produced from it?

It would be great if only one of these files was distributed in the tarball, and the rest built from the master file at build time. Is that possible?

PyPI package: LICENSE.TXT missing

It looks like the LICENSE.TXT (and maybe some other file which should have made it to the tarball, e.g. changelog) is missing. I am referring to version 1.2.3. This makes it a bit difficult to comply with the license. Maybe this could be added again.

Python 2.7 UnicodeEncodeError when saving Chinese text in records

I am writing some CSV data into a shapefile of POINT type. One of the CSV fields is in Chinese and the pyshp Writer complains about the encoding failure. Any ideas?

Traceback (most recent call last):
  File "apmap2shapefile.py", line 31, in write2shp
    shpw.save(outfile)
  File "/usr/local/lib/python2.7/site-packages/pyshp-1.2.3-py2.7.egg/shapefile.py", line 1061, in save
    self.saveDbf(target)
  File "/usr/local/lib/python2.7/site-packages/pyshp-1.2.3-py2.7.egg/shapefile.py", line 1033, in saveDbf
    self.__dbfRecords()
  File "/usr/local/lib/python2.7/site-packages/pyshp-1.2.3-py2.7.egg/shapefile.py", line 916, in __dbfRecords
    value = str(value)[:size].ljust(size)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
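One workaround sketch for Python 2 writers: encode the unicode text to bytes yourself before handing it to the writer, so the library never falls back to the ASCII codec. The gbk codepage here is only an example; use whatever codepage your downstream GIS expects.

```python
# -*- coding: utf-8 -*-
text = u'\u4e2d\u6587'        # the Chinese string "中文"
encoded = text.encode('gbk')  # explicit codepage instead of implicit ASCII

# Pass `encoded` (bytes) to the record instead of the unicode object.
assert encoded.decode('gbk') == text
```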

exception when opening .shp/.dbf with no records

If you try to open a .dbf or .shp file with zero records, you get an exception:

error Traceback (most recent call last)
<ipython-input-2-d2d34bc0c430> in <module>()
      1 d = r'C:\Temp\shp_create_test'
----> 2 sf = shapefile.Reader(p.join(d, "prec_test.dbf"))

s:\source\git\pyshp\shapefile.py in __init__(self, *args, **kwargs)
    235         if len(args) > 0:
    236             if is_string(args[0]):
--> 237                 self.load(args[0])
    238                 return
    239         if "shp" in kwargs.keys():

s:\source\git\pyshp\shapefile.py in load(self, shapefile)
    281             self.__shpHeader()
    282         if self.dbf:
--> 283             self.__dbfHeader()
    284 
    285     def __getFileObj(self, f):

s:\source\git\pyshp\shapefile.py in __dbfHeader(self)
    478             raise ShapefileException("Shapefile dbf header lacks expected terminator. (likely corrupt?)")
    479         self.fields.insert(0, ('DeletionFlag', 'C', 1, 0))
--> 480         fmt,fmtSize = self.__recordFmt()
    481         self.__recStruct = Struct(fmt)
    482 

s:\source\git\pyshp\shapefile.py in __recordFmt(self)
    484         """Calculates the format and size of a .dbf record."""
    485         if not self.numRecords:
--> 486             self.__dbfHeader()
    487         fmt = ''.join(['%ds' % fieldinfo[2] for fieldinfo in self.fields])
    488         fmtSize = calcsize(fmt)

s:\source\git\pyshp\shapefile.py in __dbfHeader(self)
    462         numFields = (headerLength - 33) // 32
    463         for field in range(numFields):
--> 464             fieldDesc = list(unpack("<11sc4xBB14x", dbf.read(32)))
    465             name = 0
    466             idx = 0

error: unpack requires a string argument of length 32

Best way to extract shapes from a big shapefile?

Say, I have a big shapefile that contains all the countries of the world.

Then what is the best way to extract the shapes (records) belonging to each country, and group them into a new shapefile of their own?
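A generic grouping sketch in plain Python (the sample data and the country field index are made up for illustration): bucket shape/record pairs by the country attribute, then write each bucket out with its own fresh Writer.

```python
from collections import defaultdict

# (shape, record) pairs as a reader might yield them; data is illustrative.
features = [
    ("shape_fr_1", ("France", 551695)),
    ("shape_fr_2", ("France", 551695)),
    ("shape_jp_1", ("Japan", 377975)),
]

by_country = defaultdict(list)
for shape, record in features:
    by_country[record[0]].append((shape, record))  # record[0] = country name

# One output shapefile per bucket would then be written here.
print(sorted(by_country))
```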

`autoBalance = 1` does not balance

I don't have access to ArcMap, but a customer told me that the shapefile I made with pyshp had a mismatched count of shapes and records. If len(w._shapes) and len(w.records) are useful metrics before writing, I think I found a bug:

import shapefile
w = shapefile.Writer()
w.shapeType = shapefile.POLYLINE
w.autoBalance = 1
w.field('myfield', 'C', 10, 0)
w.poly(shapeType=shapefile.POLYLINE, parts=[[[1,2], [2,3]]])
w.record(**{})
assert len(w._shapes) == len(w.records) # fails for me
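For context, "balancing" here just means padding the shorter of the two lists until shape and record counts match. A minimal sketch of that idea (null_shape and blank_record are placeholders, not pyshp objects):

```python
def balance(shapes, records, null_shape=None, blank_record=("",)):
    """Pad whichever list is shorter until counts match."""
    while len(shapes) < len(records):
        shapes.append(null_shape)
    while len(records) < len(shapes):
        records.append(blank_record)

shapes, records = ["line1"], []
balance(shapes, records)
assert len(shapes) == len(records) == 1
```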

Mysterious 1.2.1 Release

Both the Google Code and GitHub repositories appear to have the source code for pyshp 1.2.0, however the download on PyPI contains 1.2.1 source code. Could you please push the latest code to the public repositories?

Bump the version number

Since 1.2.3 there have been good fixes pulled into master, but the version remains the same. For dependency-management reasons it would be great if the version were incremented to 1.2.4.

Python 3.4.3 error: invalid literal for int() with base 10: floatvalue

Version 1.2.1 20140507

The following line is throwing a stack track with the subject error message:

Line 501
value = int(value)

My quick fix is
value = int(float(value))

However there may be other areas where this type of conversion fails with python 3.

Reviewing my field definition: it is declared as an integer even though the incoming value is a float. I will change my field specification to a float.

It appears this line is 506 in the latest version.
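The failure mode is easy to reproduce in plain Python: int() rejects text containing a decimal point, while going through float() first accepts it (the reporter's quick fix):

```python
raw = "0.0"   # numeric dbf field content that comes back as text

try:
    int(raw)                  # int() cannot parse a decimal-point literal
    parsed_directly = True
except ValueError:
    parsed_directly = False

assert not parsed_directly
assert int(float(raw)) == 0   # the int(float(value)) workaround
```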

Copy shape object from ShapeRecord

    r = shapefile.Reader(r'...local\share\cartopy\shapefiles\natural_earth\cultural\10m_admin_1_states_provinces')

    w = shapefile.Writer()
    w.fields = r.fields[1:] # skip first deletion field

    for shaperec in r.iterShapeRecords():
        # if shaperec.record[8] == 'California':
        w.record(*shaperec.record)
        w.shape(shaperec.shape)

    w.save('shapefile/cal/test')

When trying to use the above code from the docs, I get the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    in ()
          7 #if shaperec.record[8] == 'California':
          8 w.record(*shaperec.record)
    ----> 9 w.shape(shaperec.shape)
         10
         11 w.save('shapefile/cal/test')

    C:\Program Files\Anaconda3\lib\site-packages\shapefile.py in shape(self, i)
       1053
       1054 def shape(self, i):
    -> 1055 return self._shapes[i]
       1056
       1057 def shapes(self):

    TypeError: list indices must be integers or slices, not _Shape

My goal here is to copy only a portion of an existing shapefile to a new one.

INTEGER VALUE OF 0 stack trace

    we.field('frame','N',9,0)

If the value in the field is non-zero the write succeeds, if the value is 0 the following stack trace occurs.

Traceback (most recent call last):
  File "lib\tc\tc.py", line 2809, in <module>
    run(sys.argv[1])
  File "lib\tc\tc.py", line 1666, in run
    td,kd=shape_to_tripdct(src,cfg)
  File "lib\tc\tc.py", line 1747, in shape_to_tripdct
    attributes = sf.records()
  File "C:\p\tc_64\lib\shapefile.py", line 535, in records
    r = self.__record()
  File "C:\p\tc_64\lib\shapefile.py", line 500, in __record
    value = int(value)
ValueError: invalid literal for int() with base 10: '0.0'
Press any key to continue . . .

unpack requires a bytes object of length 32 error

Hello,

When I do the following:

shp = r"D:\data\dummytest.shp"
reader = shapefile.Reader(shp)
print(reader.fields)

The following error occurs:

struct.error: unpack requires a bytes object of length 32

This error occurs in __dbfHeader(self).

In addition, it appears that the package is also returning the incorrect number of fields. There are 8 non-spatial columns and it says there are 7. I think it might be because the shapefile has FID before the spatial column, but it's only a guess. The shapefile is empty (no records) and it was created using ArcGIS Desktop.

I'm using v1.2.10 with python 3.4. Attached is the offending shapefile.

dummytest.zip

Update
It appears that if there are no records in the shapefile, the error will occur, but if you add a record, it will not throw this error. Hope this helps!
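For reference, the 32-byte field-descriptor layout implied by the "<11sc4xBB14x" format in these tracebacks can be round-tripped in plain Python; the field name and sizes below are illustrative:

```python
from struct import pack, unpack

# 11-byte name, 1-byte type char, 4 pad bytes, 1-byte size, 1-byte decimal
# count, 14 pad bytes: exactly 32 bytes per dbf field descriptor.
FMT = "<11sc4xBB14x"
descriptor = pack(FMT, b"STATE_FIPS\x00", b"C", 2, 0)
assert len(descriptor) == 32   # dbf.read(32) must yield exactly this much

name, ftype, size, decimal = unpack(FMT, descriptor)
print(name.rstrip(b"\x00"), ftype, size, decimal)
```

If the file ends before a full 32-byte descriptor (as in a truncated or empty dbf), unpack raises exactly the "requires a bytes object of length 32" error reported above.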

Fields don't align to record values

Hello,
I have a simple example:

    reader = shapefile.Reader(filename)
    fields = [field[0] for field in reader.fields]
    for r in reader.iterShapeRecords():
        print(dict(zip(fields, r.records)))
        break
    del reader

The record values are not lining up with the field names. Is there a way to ensure field values match up with field names?

For example, I have a STATE_FIPS field; it should be a numeric value, but it comes back as a string.

What is the proper way to do this?

Thanks
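One likely cause, sketched in plain Python: Reader.fields includes a leading DeletionFlag tuple, so zipping all field names against a record shifts everything by one. Skipping the first entry realigns them (the field list below is illustrative):

```python
# As pyshp's Reader.fields returns them: (name, type, size, decimal)
fields = [("DeletionFlag", "C", 1, 0),
          ("STATE_FIPS", "C", 2, 0),
          ("NAME", "C", 40, 0)]
record = ["06", "California"]

names = [f[0] for f in fields[1:]]   # skip the DeletionFlag entry
print(dict(zip(names, record)))
```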

How do I specify an integer attribute?

w.field('testattr', 'N')

This appears to create a double, with length 50, precision 0.

I was wondering what I need to specify to obtain an integer attribute.

Python 3.4.3 and shapefile.py 1.2.3
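In the dbf format an integer column is just an 'N' (numeric) field with a decimal count of 0, so passing explicit size and decimal arguments, e.g. w.field('testattr', 'N', 9, 0), should avoid the 50-wide default; many GIS readers treat a modest-width N field with 0 decimals as an integer. A plain-Python sketch of how such a field would serialize its values (the width 9 is an arbitrary choice):

```python
size, decimal = 9, 0     # an 'N' field declared as integer-valued

def serialize(value):
    """Right-justified fixed-width text, as dbf numeric fields store it."""
    if decimal == 0:
        return format(int(value), "d").rjust(size)
    return format(float(value), ".%df" % decimal).rjust(size)

assert serialize(42) == "       42"
assert len(serialize(0)) == size
```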

Python 3: make decoding shapefile fields more lenient...

This was reported in matplotlib/basemap#187 and subsequently merged into basemap. I have a basemap PR that deletes the inlined copy of pyshp and makes pyshp an external dependency. I just want to make sure that this patch has its chance in the "mainline" pyshp.

Edit: pyshp is now an external dependency in the development version of matplotlib/basemap.

assertion error with floats and improving the assertion error message?

I am encountering an assertion error when I write floats with a lot of precision. Decreasing the precision solves the issue; then again, this may just be 'my' field specification, so this is probably not a bug. But it might be useful for developers if the assertion printed out the field name and value.

    w.field('test','N',16,3)

This value was causing the issue:

    'test': 0.77569341788591528

When rounded there is no error:

    round(val, 4)

ERROR fragment

  File "c:\p\via\lib\shapefile.py", line 1043, in save
    self.saveDbf(target)
  File "c:\p\via\lib\shapefile.py", line 1015, in saveDbf
    self.__dbfRecords()
  File "c:\p\via\lib\shapefile.py", line 902, in __dbfRecords
    assert len(value) == size
AssertionError
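The arithmetic behind the assertion, sketched in plain Python: str() of the full-precision float is wider than the declared field size of 16, while formatting to the declared 3 decimals always fits. An error message including the field name and value could be built the same way.

```python
size, decimal = 16, 3      # the field declared with w.field('test','N',16,3)
value = 0.77569341788591528

# str() keeps full float precision, which can exceed the declared width:
print(len(str(value)), repr(str(value)))

# Formatting to the declared decimal count always fits the field:
formatted = format(value, ".%df" % decimal).rjust(size)
assert formatted.strip() == "0.776"
assert len(formatted) == size
```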

Unknown format code 'd' for object of type 'float'

Updating pyshp from 1.2.0 to 1.2.11 results in the following error when trying to write a shapefile: ValueError: Unknown format code 'd' for object of type 'float'

  python3.6/site-packages/shapefile.py, line 935, in __dbfRecords
    value = format(value, "d")[:size].rjust(size) # caps the size if exceeds the field size

_shapes.extend access

PyCharm code inspection flags this usage as an issue:

    Access to a protected member _shapes of a class
    This inspection warns if a protected member is accessed outside the class or a descendant of the class where it's defined.

Is there a better way to do this concatenation? Can this method be made public?

    r = shapefile.Reader(f)
    w.records.extend(r.records())
    w._shapes.extend(r.shapes())

Pip install of latest release is broken

~ pip2 install --upgrade pyshp
Collecting pyshp
  Downloading pyshp-1.2.8.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/zz/yg37r3x91_d5kkj0k8x8g0cm0000gn/T/pip-build-CZc6UF/pyshp/setup.py", line 6, in <module>
        long_description=open('README.md').read(),
    IOError: [Errno 2] No such file or directory: 'README.md'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/zz/yg37r3x91_d5kkj0k8x8g0cm0000gn/T/pip-build-CZc6UF/pyshp/

Bounding box : required argument is not a float.

__shapefileHeader

(Pdb) self.bbox()
[(27.832608133201113, 9.795471641760086), (27.83407345224767, 9.792505897009239), (28.999435663405407, 10.166337714425078), (29.00016322567532, 10.166267648796262)]

(Pdb) pack("<4d", *self.bbox())
*** struct.error: required argument is not a float
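The struct error reproduces in plain Python: pack("<4d", ...) wants four flat floats, but the bbox above is a list of (x, y) tuples. Flattening to (xmin, ymin, xmax, ymax) first makes the pack succeed (coordinates below are shortened from the report):

```python
from struct import error, pack, unpack

pairs = [(27.83, 9.79), (27.83, 9.79), (29.00, 10.17), (29.00, 10.17)]
try:
    pack("<4d", *pairs)          # tuples are not floats
except error as e:
    print(e)                     # required argument is not a float

xs = [x for x, y in pairs]
ys = [y for x, y in pairs]
bbox = [min(xs), min(ys), max(xs), max(ys)]
raw = pack("<4d", *bbox)         # four flat doubles pack fine
assert unpack("<4d", raw) == tuple(bbox)
```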

Documentation URL not valid

For more on this specification see: http://www. clicketyclick.dk/databases/xbase/format/index.html

This url has a space after "www. " causing a link failure when selected.

Bbox error when saving null geometries

Surprised this hasn't been reported earlier, but saving shapefiles containing at least one null geometry results in error (though maybe related to #22?). This because the method to calculate the bbox for the entire file attempts to use coordinate information for all shapes, but null shapes don't have coordinates:

  File "C:\Python27\lib\site-packages\shapefile.py", line 717, in bbox
    return self.__bbox(self._shapes)
  File "C:\Python27\lib\site-packages\shapefile.py", line 687, in __bbox
    px, py = list(zip(*s.points))[:2]
ValueError: need more than 0 values to unpack

The bbox method looks like this:

def __bbox(self, shapes, shapeTypes=[]):
    x = []
    y = []
    for s in shapes:
        shapeType = self.shapeType
        if shapeTypes:
            shapeType = shapeTypes[shapes.index(s)]
        px, py = list(zip(*s.points))[:2]
        x.extend(px)
        y.extend(py)
    return [min(x), min(y), max(x), max(y)]

Should be fairly easy to fix by simply skipping each shape of type NULL. The shapeTypes arg also seems to be unused and unnecessary, and the shape type can in any case be accessed directly from each shape object via s.shapeType. I'll throw together a PR at a later point.
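A sketch of the proposed fix (NULL = 0 per the shapefile spec; the Shape class below is a minimal stand-in for pyshp's _Shape, just for illustration): skip null shapes while accumulating coordinates.

```python
NULL = 0  # shapefile spec: shape type 0 is a null shape

def bbox(shapes):
    x, y = [], []
    for s in shapes:
        if s.shapeType == NULL:
            continue          # null shapes carry no coordinates
        px, py = list(zip(*s.points))[:2]
        x.extend(px)
        y.extend(py)
    return [min(x), min(y), max(x), max(y)]

class Shape:  # minimal stand-in for pyshp's _Shape
    def __init__(self, shapeType, points):
        self.shapeType, self.points = shapeType, points

shapes = [Shape(NULL, []), Shape(3, [(0.0, 1.0), (2.0, 5.0)])]
assert bbox(shapes) == [0.0, 1.0, 2.0, 5.0]
```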

StringIO example with a few more details

Similar to what is in the manual page, with a few more details.

    from io import BytesIO as StringIO
    import shapefile as sf

    wb = sf.Writer(shapeType=sf.POINT)

    wb.field('test', 'N', 16, 3)
    wb.record(test=100)
    wb.point(1.0, 1.0)

    wbshp = StringIO()
    wbshx = StringIO()
    wbdbf = StringIO()
    wb.saveShp(wbshp)
    wb.saveShx(wbshx)
    wb.saveDbf(wbdbf)

    with open('testdata.shp', 'wb') as f:
        f.write(wbshp.getvalue())
    with open('testdata.shx', 'wb') as f:
        f.write(wbshx.getvalue())
    with open('testdata.dbf', 'wb') as f:
        f.write(wbdbf.getvalue())

UnboundLocalError when writing large shp file (while calculating shpFileLength)

Thanks for the great PyShp library! However, I'm currently facing an error; I hope you can help me out? It happens when I try to write a pretty large .shp file. I tried with another file before and that worked just fine.

I have already read your suggestions here:
http://geospatialpython.com/2011/02/merging-lots-of-shapefiles-quickly.html

And I also already Googled around to see if a solution existed, but I couldn't find one myself. I can provide you with the test data files, as they are Creative Commons, and I can also provide a sample of the code I'm using.

I am using the latest pyshp version from pypi (1.2.1) on Python 2.7.6 (OS X 10.9.2).

Traceback (most recent call last):
  File "/Users/daniel/Projects/shapefileimport/app.py", line 55, in <module>
    hectopunten_output_writer.saveShp("Wegvakken")
  File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 995, in saveShp
    self.__shapefileHeader(self.shp, headerType='shp')
  File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 709, in __shapefileHeader
    f.write(pack(">i", self.__shpFileLength()))
  File "/Users/daniel/.virtualenvs/shapefileimport/lib/python2.7/site-packages/shapefile.py", line 617, in __shpFileLength
    size += nParts * 4
UnboundLocalError: local variable 'nParts' referenced before assignment

Handle correct size of e+ values

One thing that is still an issue with saving numeric values is that really small or really big numbers will be represented as an "e+" or "e-" type string when PyShp converts it to string before writing. This can cause the value to be larger than the allowed field length and/or raise an exception due to value length being different than the expected struct pack length.

Solution to this should be to put the value through format(value, "formatlength") instead of str(value).
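The problem and the proposed fix reproduce in plain Python (the format specs below are illustrative):

```python
value = 0.0000001

print(str(value))                 # '1e-07': scientific notation leaks in
assert "e" in str(value)

fixed = format(value, ".8f")      # pin the representation instead of str()
assert fixed == "0.00000010"

big = 1234567890123456789012.0
assert "e" in str(big)            # str() again emits an 'e+' form
assert "e" not in format(big, ".0f")
```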

Record Alignment Issue

Setup code

>>> import shapefile
>>> sf = shapefile.Reader("shapefiles/blockgroups")
>>> records = sf.records()

Record Alignment issue in pyshp version 1.2.9.

From the doctests
"Let's read the blockgroup key and the population for the 4th blockgroup:"

    >>> records[3][1:3]
    ['060750601001', 4715]

pyshp version 1.2.3, Python 2.7

In [9]: records[3][1:3]
Out[9]: [u'060750601001', 4715]

pyshp version 1.2.9, Python 2.7.12

In [15]: records[3][5:7]
Out[15]: ('060750601001', '     4715')

pyshp version 1.2.9, Python 3.5.2
(Each subsequent record has the fields moved over by 1 value.)

In [5]: records[3][5:7]
Out[5]: (b'060750601001', b'     4715')

In [10]: records[4][6:8]
Out[10]: (b'060750102001', b'      473')

In [11]: records[5][7:9]
Out[11]: (b'060750126001', b'     1137')

The length of sf.records() has changed...

That issue could be explained by the record alignment issue.

In all versions tests this works...

>>> sf.numRecords
663

pyshp 1.2.3

>>> len(records)
663

pyshp 1.2.9

>>> len(records)
678

There are a total of five (5) failing tests. This could solve one (1) of those failing tests.

EDIT Additional Info:

The "DeletionFlag" field is being put in the output of the records function caused by PR #62 (@karimbahgat). The record function works fine.

For instance:
pyshp 1.2.3, Python 2.7

In [35]: records[0][:5]
Out[35]: [0.96761, u'060750179029', 4531, 4682.7, 970]

pyshp 1.2.9, Python 2.7

In [49]: records[0][:5]
Out[49]: (' ', '           0.96761', '060750179029', '     4531', '    4682.7')

README.txt and README.md

Having both of these files causes confusion. They are not kept in sync with one another; for the most part the text version seems to have had the more extensive edits.

The Markdown version can be improved using guidance from the CommonMark spec to be made more readable from a text editor. Also, Markdown can be converted to many formats using pandoc or similar converter.

The doctest seems to work fine with the markdown version, but it does find about 25 fewer tests.

It seems like they were basically the same file on March 11, 2014.

I propose removing README.txt, retargeting the doctest to use README.md, and folding the changes made to README.txt into README.md.

The only issue I can see is that README.txt has the release version, author, and release date. This could be maintained in a separate file, perhaps a RELEASE.txt.

DBF only

The Reader class comment says "If one of the three files (.shp, .shx, .dbf) is missing no exception is thrown until you try to call a method that depends on that particular file." But if the .shp AND the .shx are missing, it throws an exception.
It would be nice if the library could open a .dbf alone: of course a shapefile with no .shp and no .shx is a poor one, but shapefile would still be very useful for analysing fields, comparing dbfs, and some other things. Other libraries like dbf do that, but they have their own problems (no "introspection" with dbf, for example).

Documentation appears to be out of date.

The documentation for shapefile.Writer says that it has an autobalance attribute and a Balance() method, neither of which seems to exist (they do exist in Editor though, I see).

Add Support for Shapefile Metadata

We'd like to add metadata to our shapefile.

From QGIS/ArcGIS, we are able to save information such as Description (Short Name, Title, Abstract, Keywords, URL), Attribution, and Properties.

Does this already exist? I couldn't find anything from a quick glance at the code and issue tracker.

Wrong signed_area calculation

  • The link referenced for the area algorithm is no longer available.
  • There is also a bug in the formula (the second node's coordinates are appended to xs, ys instead of the first node's; the polygon thus formed has a protruding edge).
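For reference, a standard shoelace implementation that closes the ring with the first node (an independent sketch, not pyshp's code):

```python
def signed_area(coords):
    """Shoelace formula: > 0 for counter-clockwise rings, < 0 for clockwise."""
    area = 0.0
    n = len(coords)
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]   # wraps back to the FIRST node
        area += x1 * y2 - x2 * y1
    return area / 2.0

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert signed_area(square_ccw) == 1.0
assert signed_area(list(reversed(square_ccw))) == -1.0
```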

pyshp 1.2.1 unable to write double values for an attribute

Using a double attribute for a field causes a save error.

e.save(srcpath)

  File "lib\shapefile.py", line 1042, in save
    self.saveDbf(target)
  File "lib\shapefile.py", line 1014, in saveDbf
    self.__dbfRecords()
  File "lib\shapefile.py", line 901, in __dbfRecords
    assert len(value) == size
AssertionError

Read only specified lat/long and plot on the basemap.

I hope you are aware that your pyshp module is the backbone of the readshapefile function from basemap (a matplotlib submodule). I am trying to plot some data on a basemap and am using one of the NE (world) shapefiles for plotting boundaries. For some reason the readshapefile function is only able to read the entire shapefile instead of reading just the lat/long extent specified in the basemap instance. This reading of the shapefile takes too much time, which eventually delays the plotting.

I have not found any solution to the problem of slowly reading shapefiles. I raised this issue on the matplotlib tracker (it was later moved to the basemap issue tracker). The relevant issue is matplotlib/basemap#263.

It would be great if you could solve this issue (either read the shapefile faster, or read only the data within a specified lat/long extent) at your end, and I am sure that the developers behind matplotlib/basemap will integrate it later.

Add encoding='' kwarg to shapefile.Reader and shapefile.Writer

The Natural Earth data stores its strings encoded as Windows-1252 (versions < 3.x) or UTF-8 (>= 3.x). Currently, shapefile.py tries to guess at what it should do in the b() and u() routines. For Python 3, utf-8 is assumed as the encoding, and for Python 2, no encode or decode operations are performed.

It's my opinion that the user should be able to specify the encoding for the DBF fields as a kwarg to Reader and Writer. If provided, b() should use that instead of utf-8 for the v.encode() operation, and it should do so for both python 2 and python 3. Likewise, u(v) should be modified to decode in a similar manner.

This would enable cartopy (a popular mapping module with built-in support for natural earth data) to provide the encoding parameter, returning "accurate" data to the user, regardless of natural earth data version or python version.
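A sketch of what encoding-aware u()/b() helpers could look like (the signatures are assumptions, not pyshp's current API):

```python
def b(v, encoding="utf-8"):
    """Encode text to bytes with a caller-chosen dbf codepage."""
    return v.encode(encoding) if isinstance(v, str) else v

def u(v, encoding="utf-8"):
    """Decode dbf bytes back to text with the same codepage."""
    return v.decode(encoding) if isinstance(v, bytes) else v

# Natural Earth < 3.x stores Windows-1252; 0xE9 is 'é' in that codepage:
print(u(b"caf\xe9", encoding="windows-1252"))
```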

Documentation on testing pyshp

This library seems like it was designed to be tested only during installation, since the README.txt (used by the doctest) and the test data folder shapefiles/ are not installed by setup.py.

If you aren't in the pyshp folder, running shapefile.test() from the command line doesn't end well:

$ python3 -c 'import shapefile; shapefile.test()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.4/dist-packages/shapefile.py", line 1186, in test
    doctest.testfile("README.txt", verbose=1)
  File "/usr/lib/python3.4/doctest.py", line 2039, in testfile
    encoding or "utf-8")
  File "/usr/lib/python3.4/doctest.py", line 221, in _load_testfile
    file_contents = package.__loader__.get_data(filename)
  File "<frozen importlib._bootstrap>", line 1623, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.4/dist-packages/README.txt'

Perhaps the test function should be renamed to _test, to show that it is a private function.

A little more documentation (in the README?) on testing pyshp would help the people contributing to the project.

shapefile.py fails to open a shapefile that has zero records.

Error when opening a shapefile that has zero records.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\ArcGIS10.3\lib\site-packages\shapefile.py", line 234, in __init__
    self.load(args[0])
  File "C:\Python27\ArcGIS10.3\lib\site-packages\shapefile.py", line 279, in load
    self.__dbfHeader()
  File "C:\Python27\ArcGIS10.3\lib\site-packages\shapefile.py", line 476, in __dbfHeader
    fmt,fmtSize = self.__recordFmt()
  File "C:\Python27\ArcGIS10.3\lib\site-packages\shapefile.py", line 482, in __recordFmt
    self.__dbfHeader()
  File "C:\Python27\ArcGIS10.3\lib\site-packages\shapefile.py", line 460, in __dbfHeader
    fieldDesc = list(unpack("<11sc4xBB14x", dbf.read(32)))
error: unpack requires a string argument of length 32

>>> shapefile.__version__
'1.2.10'

Add new feature

It seems that right now, when adding new features, save() will write the whole file once again rather than appending, right? Adding new features is extremely slow and unstable when the data size is huge.

Writing POINTZ does not work correctly

Hi,

Below is the test code:

    from shapefile import Writer as shpWriter, Reader as shpReader
    from shapefile import POINTZ, POLYLINEZ, POLYGONZ

    import os
    folder = os.path.dirname(os.path.abspath(__file__))

    outShp = shpWriter(POINTZ)
    outShp.field('id', 'N', '10')
    outShp.point(10, 10, 10)
    outShp.record(0)
    outPath = folder + os.sep + 'out.shp'
    outShp.save(outPath)

    shp = shpReader(outPath)
    shape = shp.shapes()[0]
    xy = shape.points[0]
    z = shape.z[0]
    print(xy, z)

This works correctly with pyshp 1.2.3 but raises an error with v1.2.11:

AttributeError: _Shape instance has no attribute 'z'
