nbtlib's Issues

Trouble parsing .mca region files

I am trying to read the NBT data from region files, but the output for every file seems to be an empty dictionary.

Here's an example of what I'm getting using the CLI, though the same thing happens when reading .mca files from Python code.

$ nbt -r r.-1.-1.mca --plain
{}

Can't read FTB Quests NBT

Error:

Traceback (most recent call last):
  File "/home/alice/bin/nbt", line 8, in <module>
    sys.exit(main())
    │        └ <function main at 0x7f05b71c3250><module 'sys' (built-in)>
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/cli.py", line 57, in main
    for tag in read(
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/cli.py", line 79, in read
    nbt_file = parse_nbt(f.read())
               │         └ <_io.TextIOWrapper name="/home/alice/.local/share/multimc/instances/Alice's Modpack/.minecraft/local/ftbquests/saved/2022-08-08-...
               └ <function parse_nbt at 0x7f05b735caf0>
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/literal/parser.py", line 105, in parse_nbt
    tag = parser.parse()
          └ <nbtlib.literal.parser.Parser object at 0x7f05b71d59f0>
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/literal/parser.py", line 167, in parse
    return handler()
           └ <bound method Parser.parse_compound of <nbtlib.literal.parser.Parser object at 0x7f05b71d59f0>>
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/literal/parser.py", line 213, in parse_compound
    for token in self.collect_tokens_until("CLOSE_COMPOUND"):
  File "/home/alice/.local/lib/python3.10/site-packages/nbtlib/literal/parser.py", line 206, in collect_tokens_until
    raise self.error(f"Expected comma but got {self.current_token.value!r}")
          │                                    └ <nbtlib.literal.parser.Parser object at 0x7f05b71d59f0>
          └ <nbtlib.literal.parser.Parser object at 0x7f05b71d59f0>
nbtlib.literal.parser.InvalidLiteral: Expected comma but got 'default_reward_team' at position 16

Command used: nbt -s data.snbt
File contents:

{
	version: 13
	default_reward_team: false
	default_consume_items: false
	default_autoclaim_rewards: "disabled"
	default_quest_shape: "circle"
	default_quest_disable_jei: false
	emergency_items_cooldown: 300
	drop_loot_crates: false
	loot_crate_no_drop: {
		passive: 4000
		monster: 600
		boss: 0
	}
	disable_gui: false
	grid_scale: 0.5d
	pause_game: false
	lock_message: ""
}

I believe the issue may be that there are no commas? Is there a way to convert it anyway?
FTB Quests decided to store all their data in files like these, and it would be useful if I could automate editing some of them.
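
A possible workaround (a rough preprocessing sketch, not an nbtlib feature) is to insert the missing commas before handing the text to parse_nbt. The regex assumes the FTB format only omits commas at line ends and never embeds newlines inside quoted strings:

import re
import nbtlib

def parse_ftb_snbt(text):
    """Parse FTB Quests-style SNBT, which separates entries with newlines
    instead of commas (rough workaround, not an official nbtlib feature)."""
    # Add a comma at the end of every value line, unless the next
    # non-whitespace character closes a compound or list.
    patched = re.sub(r"([^,{\[\s])[ \t]*\n(?=\s*[^}\]\s])", r"\1,\n", text)
    return nbtlib.parse_nbt(patched)

with open("data.snbt") as f:
    tag = parse_ftb_snbt(f.read())

print(tag["default_reward_team"])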

Overflow Error

Hello, I'm trying to load the NBT data extracted from a .mca file.
I already have a working program to extract the NBT data and decompress it, but when I try to load it I get an OverflowError:

Traceback (most recent call last):
  File "main.py", line 27, in <module>
    chunk_nbt_data = nbtlib.File().parse(io.BytesIO(decompressed_data))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 224, in parse
    return cls(read_numeric(cls.fmt, buff, byteorder))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 218, in __new__
    if cls.range and int(self) not in cls.range:
OverflowError: Python int too large to convert to C ssize_t

I also tried saving the extracted NBT bytes to a file, but I get the same result:

Traceback (most recent call last):
  File "main.py", line 30, in <module>
    chunk_nbt_data = nbtlib.load("tmp.nbt")
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/nbt.py", line 37, in load
    return File.from_buffer(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/nbt.py", line 96, in from_buffer
    self = cls.parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 496, in parse
    self[name] = cls.get_tag(tag_id).parse(buff, byteorder)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 224, in parse
    return cls(read_numeric(cls.fmt, buff, byteorder))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nbtlib/tag.py", line 218, in __new__
    if cls.range and int(self) not in cls.range:
OverflowError: Python int too large to convert to C ssize_t

I already searched but couldn't find a simple solution to fix this myself. If you need any additional information, let me know, and thank you for your help!

Help with Minecraft Region Data

Hi, I saw your comment here on extracting Minecraft chunk data to be processed by nbtlib. You said to "extract the nbt regions from the .mca files yourself and feed them to nbtlib manually". I am trying to write a script to process entity data from a region file, but I can't figure out how to extract the NBT regions from the file in Python. Would you be able to help me out?

Thanks.
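
For reference, a rough sketch of what extracting a single chunk can look like: read the chunk's offset from the region header, decompress the payload, and hand the bytes to nbtlib. It assumes the standard Anvil layout (4096-byte sectors, zlib- or gzip-compressed chunks) and does no error handling:

import io
import zlib
import nbtlib

def read_chunk(region_path, cx, cz):
    """Return the NBT of chunk (cx, cz) from a region file, or None if absent."""
    with open(region_path, "rb") as f:
        header = f.read(4096)                        # locations table
        index = 4 * ((cx & 31) + 32 * (cz & 31))
        entry = int.from_bytes(header[index:index + 4], "big")
        offset, sector_count = entry >> 8, entry & 0xFF
        if sector_count == 0:
            return None                              # chunk not generated
        f.seek(offset * 4096)
        length = int.from_bytes(f.read(4), "big")
        compression = f.read(1)[0]                   # 1 = gzip, 2 = zlib
        data = f.read(length - 1)
    if compression == 2:
        data = zlib.decompress(data)
    elif compression == 1:
        data = zlib.decompress(data, zlib.MAX_WBITS | 16)
    return nbtlib.File.parse(io.BytesIO(data))

chunk = read_chunk("r.-1.-1.mca", 0, 0)
print(chunk)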

Jython compatibility

Is there any way to use nbtlib with Jython?
Unfortunately, Jython is currently only compatible with Python 2.7 scripts, but we need Jython for a project and nbtlib is an important part of the whole thing.

Everything works fine with Python 3, which is good because our project basically consists of two parts: a standalone version and a plugin version for Spigot. However, the plugin version uses Jython, and that is where problems arise. Our script works fine until nbtlib has to be loaded.

This does not support the Minecraft Education Edition level.dat files.

This does not support the Minecraft Education Edition level.dat files out of the box. There are two main reasons:

  1. It doesn't understand the eight-byte header at the start of the file. The first four bytes look like a version or marker field (in my file it is an 8). The next four bytes are the length of the file in bytes, not counting the eight-byte header.
  2. The numbers in the file are all stored in little-endian format.

I was able to read my particular file by chopping off the first 8 bytes and changing this block of code in tag.py:

# Struct formats used to pack and unpack numeric values

BYTE = struct.Struct('<b')
SHORT = struct.Struct('<h')
USHORT = struct.Struct('<H')
INT = struct.Struct('<i')
LONG = struct.Struct('<q')
FLOAT = struct.Struct('<f')
DOUBLE = struct.Struct('<d')
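
For reference, a sketch that avoids patching tag.py, assuming a current nbtlib where parse() accepts a byteorder argument: skip the eight-byte header and parse the rest as little-endian.

import io
import nbtlib

# Skip the 8-byte Education Edition header (marker field + payload length),
# then parse the remaining bytes as little-endian NBT.
with open("level.dat", "rb") as f:
    f.read(8)
    data = f.read()

tag = nbtlib.File.parse(io.BytesIO(data), byteorder="little")
print(tag)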

A minor issue with list indexes.

if isinstance(index, (int, slice)):

There is a corner case that needs to be noted: the object used as the index of a list can be something other than an int or slice.

It may be any object with an __index__ method that returns an int. For example, uint8 from numpy is not a subclass of Python's built-in int, therefore it can't be used as an index of a ListTag (but it can be used as an index of an IntArrayTag).

I suggest adding an attribute to the Path object, e.g. "isNbtPath"; then we can use hasattr(index, "isNbtPath") to know whether the index is an NBT Path.
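
A sketch of one way to accept any __index__-compatible object (e.g. numpy.uint8) while still treating Path and slice specially; normalize_index is a hypothetical helper, not part of nbtlib:

import operator

from nbtlib import Path

def normalize_index(index):
    """Coerce anything with __index__ to int, leaving Path and slice alone."""
    if isinstance(index, (Path, slice)):
        return index
    try:
        return operator.index(index)   # int, numpy integers, any __index__
    except TypeError:
        return index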

`File` is not writing the trailing End tag

Now that you/we have "unwrapped" the spurious root Compound from File, it is now responsible for making sure the following are written:

  • its own tag ID
  • name (mostly empty, but still takes 2 bytes to say so)
  • its children
  • the trailing End tag

Writing the children and the End is done by Compound.write(), which correctly does so:

fileobj.write(self.end_tag)

But... self.end_tag is overwritten in File to be...

end_tag = b""

With this justification:

# We remove the inherited end tag as the end of nbt files is
# specified by the end of the file
end_tag = b""

This used to work when we had an extra Compound layer. Not anymore. And NBT Explorer, for example, does require that trailing byte (it doesn't even list the file if it doesn't have it), so perhaps Minecraft does too.

But, before filing this 1-line PR fix, I'd like to understand why File had this setting in the first place (since the initial commit!). Was it only a workaround because of the extra layer, or does it have other roles? Is the justification in that comment really valid? Would removing it have any consequence or side effect, letting it fall back to Compound's b"\x00"? Does parse() require it to be blank, perhaps for fault-tolerance purposes?

Please investigate possible consequences and side effects of this before merging the PR! I would love to hear some feedback from you.

serialize_tag, python 3.8

Now that we've come to serialize_tag, I did some testing and found something wrong:
str(nbtlib.Byte(4)) == 'Byte(4)b'
int.__str__(nbtlib.Byte(4)) == 'Byte(4)'

def serialize_numeric(self, tag):
    """Return the literal representation of a numeric tag."""
    str_func = int.__str__ if isinstance(tag, int) else float.__str__
    return str_func(tag) + tag.suffix

# probably due to Python 3.8?


Yes, it is.

float.__str__ is int.__str__ is object.__str__  # ==> True

Even int.__str__(4.0) is accepted now.


Python 3.8:

bpo-36793: Removed __str__ implementations from builtin types bool, int, float, complex and few classes from the standard library. They now inherit __str__() from object.

python/cpython@96aeaec

Originally posted by @ch-yx in #1 (comment)
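
A sketch of one possible fix (not necessarily what nbtlib ended up doing): __repr__ on the builtin base classes still returns the bare number in 3.8+, unlike __str__, which now falls back to object.__str__ for subclasses.

def serialize_numeric(self, tag):
    """Return the literal representation of a numeric tag."""
    # int.__repr__/float.__repr__ bypass the subclass __repr__ ('Byte(4)')
    # and give the plain value ('4'), even on Python 3.8+.
    str_func = int.__repr__ if isinstance(tag, int) else float.__repr__
    return str_func(tag) + tag.suffix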

Implement nbt paths?

First mentioned here. Link to wiki reference.

I think it would be neat if it could look a bit like this:

p = Path('foo.bar[0]."A [crazy name]!".baz')
some_compound_tag[p]

assert Path()['foo']['bar'][0]['A [crazy name]!']['baz'] == p
assert str(p) == 'foo.bar[0]."A [crazy name]!".baz'
assert p.parent == Path('foo.bar[0]')['A [crazy name]!']
assert p.parts == ('foo', 'bar', 0, 'A [crazy name]!', 'baz')
assert p.stem == Path('foo.bar[0]."A [crazy name]!"')
assert p.name == 'baz'

Ideally these would be immutable, and maybe inherit directly from str or tuple.
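
A minimal sketch of the tuple-based flavour (illustration only, not nbtlib's actual implementation; string parsing and quoting of keys with special characters are left out):

class Path(tuple):
    """Immutable path; subscripting extends the path instead of indexing it."""

    def __getitem__(self, key):
        return Path((*self, key))

    def __str__(self):
        out = ""
        for part in self:
            if isinstance(part, int):
                out += f"[{part}]"
            else:
                out += ("." if out else "") + str(part)
        return out

    @property
    def parts(self):
        return tuple(self)

    @property
    def name(self):
        return tuple.__getitem__(self, -1)

    @property
    def parent(self):
        return Path(tuple.__getitem__(self, slice(None, -1)))

p = Path()["foo"]["bar"][0]["baz"]
assert p.parts == ("foo", "bar", 0, "baz")
assert str(p) == "foo.bar[0].baz"
assert p.name == "baz"
assert str(p.parent) == "foo.bar[0]"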

Preserve type on List.copy() and slicing

I just noticed List.copy() returns a plain list, as it doesn't override the copy method, and was about to create a PR implementing it. Then I realized this is a deeper issue, so I need your opinion before moving on:

  • list.copy() == list[:], so if copy was changed to preserve type, should slicing be changed too? I.e., should List.__getitem__ use similar logic to __setitem__ and do something like this: if isinstance(index, slice): return self.__class__(super().__getitem__(index))?

  • It would be surprising if copy() preserved type but slicing didn't. On one hand, some might expect (and rely on) copy and slicing returning a plain list. On the other hand, it's awkward to write b = List[xxx](a.copy()) to preserve type and easy to do b = list(a.copy()) to guarantee a plain list.

  • In a general way, should the mutable Compound and List preserve type in slicing and subsets?

I could do some research to see what numpy.ndarray and other similar list subclasses do, and also if there's any principle suggested in collections.abc.MutableSequence.

Meanwhile, what's your opinion on this? Interested in a PR to implement copy() and related methods in Compound and List?
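
A sketch of the behaviour being discussed, using a plain list subclass for illustration rather than nbtlib's actual List:

class TypedList(list):
    def copy(self):
        # Preserve the subclass instead of returning a plain list.
        return self.__class__(super().copy())

    def __getitem__(self, index):
        result = super().__getitem__(index)
        if isinstance(index, slice):
            return self.__class__(result)
        return result

items = TypedList([1, 2, 3])
assert type(items.copy()) is TypedList   # preserved
assert type(items[1:]) is TypedList      # preserved
assert type(list(items)) is list         # explicit opt-out is still easy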

`Path` cannot add or init integers, only strings and other Paths

Path.__add__() (and __radd__()), as well as __init__(), accept str and Path, but not int. Considering it handles ints just fine in subscripts, why not add support for + too?

Currently we have:

Path("a")["b"]["c"] == Path("a.b.c")
"a" + Path("b") + "c" == Path("a.b.c")
Path("a")[1]["c"] == Path("a[1].c")
Path("[0]")[1]["c"] == Path("[0][1].c")

It could also allow the following:

"a" + Path("[1]") + "b" + 2 == Path("a[1].b[2]")
0 + Path("b") + 1 == Path("[0].b[1]")
0 + Path()[1] + 2 == Path("[0][1][2]")
"a" + Path(1)["b"] + Path(3)[4] + 5 + "c" == Path("a[1].b[3][4][5].c")

I could send a PR if you approve the idea.

Iterate NBT

Is it possible to iterate over the nbtlib.tag.Compound after nbtlib.load?

>>> nbtfile = nbtlib.load("C:\\Users\\WDAGUtilityAccount\\Desktop\\level.dat")
>>> type(nbtfile['Data'])
<class 'nbtlib.tag.Compound'>
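
For what it's worth, Compound is a dict subclass, so it iterates like a regular dict (a sketch, using the level.dat path from the snippet above):

import nbtlib

nbtfile = nbtlib.load("level.dat")
data = nbtfile["Data"]

for key in data:                      # keys, like a dict
    print(key)

for key, value in data.items():       # key/value pairs
    print(key, type(value).__name__)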

`nbtlib 2.0`

This tracks a number of issues that should be resolved before making 2.x releases stable. I think the recent feedback shows that nbtlib could benefit from a little revamp. Proper static typing is long overdue and I'd like to take some time to experiment with optimizations to speed everything up a bit.

Also, this could be the perfect occasion to revisit old issues. It would be interesting to return to the idea of nbtlib.contrib once the main issues have been addressed (#60). I'm also not sure about the status of the setup.py situation anymore (#54). And finally, I've always struggled with writing documentation; maybe we can come up with a better strategy, or find a way to break it down and make a proper roadmap for this (#16).

If anything else comes up I'll also add it in here.

Add `Path.from_parts()` or similar to allow tuple constructor like `pathlib.Path`

I have this NBT Explorer-like printing, and when re-implementing it using my generic tree walker I noticed a missing feature in Path that forced me to use a really inefficient constructor:

The generic tree walker yields a generic tuple containing each path component, similar to the pathlib.Path() API. To create an nbtlib.Path from this tuple, I had to resort to an expensive approach that iteratively (or recursively) creates a new Path for each component by concatenating the next one (def get_element(root, keys): return get_element(root[keys[0]], keys[1:])). By the way, this concatenation is what led me to #146, as each part can be an int (for List) or a str (for Compound).

I'm sure there are better, or at least easier, ways of doing this. Something like Path.from_accessors(), but skipping the parser by assuming each part is either an int or a str representing a single key.
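
A sketch of what such a helper could look like on top of the current subscript API (path_from_parts is a hypothetical helper, not part of nbtlib):

from nbtlib import Path

def path_from_parts(parts):
    """Build a Path from a tuple of accessors, skipping the string parser."""
    path = Path()
    for part in parts:
        path = path[part]     # str -> compound key, int -> list index
    return path

assert path_from_parts(("foo", "bar", 0, "baz")) == Path("foo.bar[0].baz")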

List subtype inference doesn't consider `List` as a possible subtype

It cannot parse this SNBT:
{c:[ [] , [[],[]] ]}

while Minecraft does allow it.

>>> nbtlib.parse_nbt("{c:[[] ,[[],[]] ]}")

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    nbtlib.parse_nbt("{c:[[] ,[[],[]] ]}")
  File "C:\Users\doctc\AppData\Local\Programs\Python\Python37\lib\site-packages\nbtlib\literal.py", line 53, in parse_nbt
    tag = parser.parse()
  File "C:\Users\doctc\AppData\Local\Programs\Python\Python37\lib\site-packages\nbtlib\literal.py", line 122, in parse
    return handler()
  File "C:\Users\doctc\AppData\Local\Programs\Python\Python37\lib\site-packages\nbtlib\literal.py", line 175, in parse_compound
    compound_tag[item_key] = self.parse()
  File "C:\Users\doctc\AppData\Local\Programs\Python\Python37\lib\site-packages\nbtlib\literal.py", line 122, in parse
    return handler()
  File "C:\Users\doctc\AppData\Local\Programs\Python\Python37\lib\site-packages\nbtlib\literal.py", line 209, in parse_list
    raise self.error(f'Item {str(item)!r} is not a '
nbtlib.literal.InvalidLiteral: Item '[[],[]]' is not a List[End] tag at position 15

Root creates a spurious Compound that is not in the actual NBT data

I've mentioned this a few years back, and now I'm familiar enough with NBT in binary form to raise this again, more confident that this is indeed a design bug in this awesome library.

Currently, when loading an NBT file like this (uncompressed for clarity):

00000000: 0a00 000b 0008 506f 7369 7469 6f6e 0000  ......Position..
00000010: 0002 ffff fff0 ffff ffef 0300 0b44 6174  .............Dat
00000020: 6156 6572 7369 6f6e 0000 0aaa 0900 0845  aVersion.......E
00000030: 6e74 6974 6965 730a 0000 0001 0900 064d  ntities........M
00000040: 6f74 696f 6e06 0000 0003 0000 0000 0000  otion...........
00000050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0000 0200 0648 6561 6c74 6800 0501 000c  .....Health.....
00000070: 496e 7675 6c6e 6572 6162 6c65 0002 0003  Invulnerable....
00000080: 4169 7201 2c01 0008 4f6e 4772 6f75 6e64  Air.,...OnGround
00000090: 0003 000e 506f 7274 616c 436f 6f6c 646f  ....PortalCooldo
000000a0: 776e 0000 0000 0900 0852 6f74 6174 696f  wn.......Rotatio
000000b0: 6e05 0000 0002 4191 f848 0000 0000 0500  n.....A..H......
000000c0: 0c46 616c 6c44 6973 7461 6e63 6500 0000  .FallDistance...
000000d0: 000a 0004 4974 656d 0800 0269 6400 0e6d  ....Item...id..m
000000e0: 696e 6563 7261 6674 3a73 616e 6401 0005  inecraft:sand...
000000f0: 436f 756e 7401 0009 0003 506f 7306 0000  Count.....Pos...
00000100: 0003 c06f e0cb b28e cfd4 404b e000 0000  ...o......@K....
00000110: 0000 c070 8aaa 4f56 97f6 0200 0b50 6963  ...p..OV.....Pic
00000120: 6b75 7044 656c 6179 0000 0200 0446 6972  kupDelay.....Fir
00000130: 6500 0008 0002 6964 000e 6d69 6e65 6372  e.....id..minecr
00000140: 6166 743a 6974 656d 0b00 0455 5549 4400  aft:item...UUID.
00000150: 0000 045c 04d8 3949 9646 96b4 0160 7235  ...\..9I.F...`r5
00000160: acc2 cf02 0003 4167 6513 e900 00         ......Age....

We have this result:

{
    "": {
        Position: [I; -16, -17], 
        DataVersion: 2730, 
        Entities: [
            {
                Motion: [0.0d, 0.0d, 0.0d], 
                Health: 5s, 
                Invulnerable: 0b, 
                Air: 300s, 
                OnGround: 0b, 
                PortalCooldown: 0, 
                Rotation: [18.246231079101562f, 0.0f], 
                FallDistance: 0.0f, 
                Item: {
                    id: "minecraft:sand", 
                    Count: 1b
                }, 
                Pos: [-255.02486541645942d, 55.75d, -264.6665795691073d], 
                PickupDelay: 0s, 
                Fire: 0s, 
                id: "minecraft:item", 
                UUID: [I; 1543821369, 1234585238, -1274978190, 900514511], 
                Age: 5097s
            }
        ]
    }
}

It looks like we have a Compound as root, and then another (unnamed) Compound inside it. But that's not true: the binary data clearly shows there is a single (unnamed) Compound at the beginning. So the actual parsing result should be:

{
    Position: [I; -16, -17], 
    DataVersion: 2730, 
    Entities: [
        {
            Motion: [0.0d, 0.0d, 0.0d], 
            Health: 5s, 
            ...
            UUID: [I; 1543821369, 1234585238, -1274978190, 900514511], 
            Age: 5097s
        }
    ]
}

Notice the root name is not represented in this case. And it does not matter, as a tag's name is parsed by (and belongs to) the tag's parent. A tag by itself has no idea about its own name (and tags in lists don't even have one).

Ok, the root tag of NBT data does have a name, even if 99% (all?) of real-world NBT files have it empty. But having a name does not make it a Compound with a single child named after itself. This is wrong! It forces some weird syntax to access the content:

tag = nbtlib.load("somefile.dat")
tag['']["DataVersion"]  # or tag.root["DataVersion"]

Instead of a much simpler (and correct) tag["DataVersion"].

If Root is a Compound, it should not require extra syntax to access its contents. No other tag requires this. If preserving the root name is important for saving/loading integrity, it should be stored elsewhere (Root.name, perhaps?).

If displaying this name whenever printing the root tag is needed (why would it be?), then I suggest this format:

"rootname": {
    Position: [I; -16, -17], 
    DataVersion: 2730, 
    Entities: [
        {
            Motion: [0.0d, 0.0d, 0.0d], 
            Health: 5s, 
            ...
            UUID: [I; 1543821369, 1234585238, -1274978190, 900514511], 
            Age: 5097s
        }
    ]
}

This does not imply there are two nested compounds at the beginning. Much cleaner, easier to use, and it correctly reflects the NBT data. You can even completely omit the name for empty names and start right away with { (as in my previous example).

This format could also be used to display names for non-compound root tags, a case that Minecraft seems not to use and nbtlib does not support, but given the NBT spec there is no technical limitation:

"Age": 5097s

Anyway, allowing the root to be a non-compound is a challenge for another day. But for now, removing the fake extra Compound would be very, very nice!

Improve `read_numeric()` to vastly increase `parse()` performance for all tags

When profiling NBT file loading, trying to optimize loading times, read_numeric() stands at the top by a large margin. Taking a closer look at it, it seems this is the culprit:

def get_format(fmt, string):
    """Return a dictionary containing a format for each byte order."""
    return {"big": fmt(">" + string), "little": fmt("<" + string)}

BYTE = get_format(Struct, "b")
SHORT = get_format(Struct, "h")
...
def read_numeric(fmt, fileobj, byteorder="big"):
    """Read a numeric value from a file-like object."""
    try:
        fmt = fmt[byteorder]
        return fmt.unpack(fileobj.read(fmt.size))[0]
        ...

And that is universally used in all tag classes using a similar pattern:

tag_id = read_numeric(BYTE, fileobj, byteorder)
length = read_numeric(INT, fileobj, byteorder)
tag = cls.get_tag(read_numeric(BYTE, fileobj, byteorder))
data = fileobj.read(read_numeric(INT, fileobj, byteorder) * item_type.itemsize)
...

The problem is: read_numeric creates a new Struct instance on every read. That is a very expensive operation. There should probably be a way to pre-build (or cache) such instances, so either read_numeric or get_format or even BYTE/INT... contain/return the same struct instances, while still keeping the ability to select byteorder on a per-call basis.
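
One generic way to pre-build and reuse Struct instances (a sketch of the caching idea, not the actual nbtlib code):

from functools import lru_cache
from struct import Struct

@lru_cache(maxsize=None)
def get_struct(string, byteorder="big"):
    """Return a shared Struct for the given format character and byte order."""
    prefix = ">" if byteorder == "big" else "<"
    return Struct(prefix + string)

def read_numeric(string, fileobj, byteorder="big"):
    """Read a numeric value using a cached Struct instance."""
    fmt = get_struct(string, byteorder)
    return fmt.unpack(fileobj.read(fmt.size))[0]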

I can submit a PR to fix this, and I'm sure reading (and writing) times will vastly improve. I'll do it in a way that does not change the API of any of the tag classes (i.e., keep the Compound.parse(cls, fileobj, byteorder="big") signature for all write/parse methods of all tags), and possibly keep the read_numeric() signature too (so no changes to the Tag classes at all). Most likely get_format() will change its signature and/or internal structure, and the underlying BYTE/INT/... constants will change their internal values, but I'll do my best to keep them byteorder-agnostic constants.

Is such improvement welcome?

Provide a `setup.py`, as Poetry does not (yet) cover all cases

TL;DR: Poetry does not (yet) provide any way to install a local package (cloned from GitHub) in editable mode (the equivalent of pip install -e .), and pip does not (yet) support editable mode with pyproject.toml. The solution, for the time being, is to provide a setup.py.

Long version:

So here's my use case: I'm using nbtlib as a dependency in my projects, which themselves might or might not use Poetry. For developing those projects, I can simply run pip3 install --user nbtlib and I'm done: a stable nbtlib is downloaded from PyPI, installed in my ~/.local/lib/python3.x/site-packages, and made globally available to any project that might use it (or installed in each project's virtualenv, same result).

But... I'd also like to contribute to nbtlib. So I clone its GitHub repo and install Poetry and pytest to run tests before creating PRs. So far so great. Now I want to use this modified version in my projects, so I need to install the local version (not a stable one from PyPI). And it must be in editable mode, so any further changes are automatically reflected in my projects.

Poetry? No such feature. It's meant for managing a package's dependencies, not for installing or using a package as a dependency. Pip? No luck: it does not support pip install -e . with a pyproject.toml:

01:15:27 rodrigo@desktop ~/work/minecraft/nbtlib master $ pip3 install --user -e .
ERROR: File "setup.py" not found. Directory cannot be installed in editable mode: /home/rodrigo/work/minecraft/nbtlib
(A "pyproject.toml" file was found, but editable mode currently requires a setup.py based build.)

Poetry is still too immature and lacks a lot of the features needed to cover all the use cases of setup.py, pip, etc. And pip is not modern enough to use pyproject.toml for all of its features.

If there's no better solution, I believe nbtlib should provide a setup.py, at least until both tools mature.
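
A minimal setup.py shim could look something like this (a sketch; the metadata here is placeholder and would need to be kept in sync with pyproject.toml):

# setup.py
from setuptools import find_packages, setup

setup(
    name="nbtlib",
    version="0.0.0.dev0",        # placeholder; the real version lives in pyproject.toml
    packages=find_packages(include=["nbtlib", "nbtlib.*"]),
    install_requires=["numpy"],  # nbtlib's array tags are backed by numpy
)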

Incomplete documentation for parsing raw nbt data

I have raw NBT data for chunks from a region file that I am not sure how to correctly parse using nbtlib. It seems the only way to parse raw NBT is to pass a file, and it would be better not to create a file for each chunk.

Here is the code I use to parse the region file and create the raw nbt data:
https://gist.github.com/nwesterhausen/527fb947d4432c1f40c06dca07cb9253

If I take the output of get_data(0,0) for one of my region files and save it as a binary file, I can open the binary file using nbtlib and it seems to parse correctly (no errors). I think this could be as easy as another entry point besides File or load.

I'd appreciate any thoughts on this.
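
For what it's worth, File.parse() accepts any file-like object, so raw bytes can be wrapped in a BytesIO instead of being written to disk (a sketch; "chunk.bin" stands in for wherever the decompressed chunk bytes come from):

import io
import nbtlib

with open("chunk.bin", "rb") as f:    # placeholder for the raw chunk payload
    data = f.read()

chunk = nbtlib.File.parse(io.BytesIO(data))
print(chunk)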

Long lists when reading a 'little' byteordered list are scrambled near the end

I don't know why this would be, but if I load this file with this library vs. a GUI tool (NBT Studio) I get different results; NBT Studio matches the expected format as opened in Minecraft Bedrock Edition. The last 2 ints in NBTfile[""]["structure"]["block_indices"][0] should be 70 according to the described format and NBT Studio, but I get 1 and 68.

I did page through the parser and I can't see where it would be broken.
test6.zip

How to use/initialize nbtlib.ByteArray?

I am trying to create a ByteArray tag. The documentation says the underlying type is a numpy array.

Which dtype do I have to use for the numpy array? How do I create/fill it with values? A short example would be very welcome here.

I am using the recommended 1.12.1 release of nbtlib on Windows 10 with Python 3.8.
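
For reference, a sketch of a couple of ways to construct one (assuming a recent nbtlib, where array tags accept anything numpy can convert to a signed 8-bit array):

import numpy as np
import nbtlib

a = nbtlib.ByteArray([1, 2, 3])                    # from a plain list of ints
b = nbtlib.ByteArray(np.zeros(16, dtype=np.int8))  # from a signed 8-bit numpy array
b[0] = 127                                         # values must fit in -128..127
print(a, len(b))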

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 101: invalid continuation byte

I got this error trying to open a player's .dat file which contained non-UTF-8 characters. I fixed it by changing 'utf-8' to 'ISO-8859-1' in tag.py:

def read_string(buff, byteorder='big'):
    """Read a string from a file-like object."""
    length = read_numeric(USHORT, buff, byteorder)
    return buff.read(length).decode('ISO-8859-1')

def write_string(value, buff, byteorder='big'):
    """Write a string to a file-like object."""
    data = value.encode('ISO-8859-1')
    write_numeric(USHORT, len(data), buff, byteorder)
    buff.write(data)

path

parse_nbt("[[{a:2}],[3b]]")[path("[].[].a")]

raises a TypeError instead of an IndexError.

I guess there may be more bugs like that... 😞

Add nbtlib.contrib package?

From #1 (comment).

It could be interesting to provide the necessary abstractions to deal with common nbt use-cases in a contrib package.

The package could include:

  • Something to read and edit Minecraft region files
  • Schemas that are currently available in the examples directory

servers.dat issues.

try:
    server = MinecraftServer.lookup(str(ip))
    status = server.status()
    with nbtlib.load('servers.dat') as serverDat:
        data = parse_nbt(serverDat)
        data[Path("''[].servers.''[]")] = Compound({
            'ip': String(str(ip)),
            'name': String(str(ip + ':25565 by Jaydenn#7935')),
            'icon': String(str(status.favicon))})
except Exception as e:
    print(e)
pass

So basically I'm trying to make a servers.dat generator: you input a server, and it adds it to the servers.dat file. My issue is that I can't figure out how to do this right; please help me.
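
For reference, a rough sketch of appending an entry directly to servers.dat (assuming nbtlib 2.x, where the loaded File behaves like the root Compound; on 1.x you would go through the wrapped root, e.g. serverDat['']). Note that the loaded file is already NBT, so there is no need to run it through parse_nbt:

import nbtlib
from nbtlib.tag import Compound, List, String

server_ip = "example.com"              # hypothetical input

server_list = nbtlib.load("servers.dat")
if "servers" not in server_list:
    server_list["servers"] = List[Compound]()
server_list["servers"].append(Compound({
    "ip": String(server_ip),
    "name": String("My Server"),
}))
server_list.save()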

Remaining support for 1.13

I saw that whitespace and case-agnostic prefixes were taken care of recently. 👍 I'll make note of any remaining 1.13 issues here, seeing as you're active and able to implement them in the official repo.

Remaining issues for 1.13 support:

  1. Something important to note is that whitespace was not valid (in SNBT) until 1.13, so you may wish to either (1) add a parameter to the parser or (2) create a separate branch for legacy (1.12 and prior) support (or vice-versa).
  2. Long arrays are now a thing (apparently part of the new region file format), so probably [L should be added to the list of tokens and parsed accordingly.

I think that takes care of everything but there may still be changes before 1.13 is fully released.
