
pybind11-stubgen's Introduction


About

Static analysis tools and IDEs usually struggle to understand Python binary extensions. pybind11-stubgen generates stubs for Python extensions to make them less opaque.

While the CLI tool includes tweaks targeting modules compiled specifically with pybind11, it should work well with modules built with other libraries too.

# Install
pip install pybind11-stubgen

# Generate stubs for numpy
pybind11-stubgen numpy

Usage

pybind11-stubgen [-h]
                 [-o OUTPUT_DIR]
                 [--root-suffix ROOT_SUFFIX]
                 [--ignore-invalid-expressions REGEX]
                 [--ignore-invalid-identifiers REGEX]
                 [--ignore-unresolved-names REGEX]
                 [--ignore-all-errors]
                 [--enum-class-locations REGEX:LOC]
                 [--numpy-array-wrap-with-annotated|
                  --numpy-array-use-type-var|
                  --numpy-array-remove-parameters]
                 [--print-invalid-expressions-as-is]
                 [--print-safe-value-reprs REGEX]
                 [--exit-code]
                 [--stub-extension EXT]
                 MODULE_NAME

pybind11-stubgen's People

Contributors

adam-urbanczyk, auscompgeek, chrlackner, cielavenir, haarigerharald, ixje, jochenhz, juliapoo, marcoffee, matthijsburgh, narottamroyal, ptosco, ringohoffman, sizmailov, tgpfeiffer, thetriplev, virtuald, yc7521


pybind11-stubgen's Issues

Issue with callable annotation

>>> print(fn.__doc__)
fn(a: int, b: int, c: module.C, callback: Optional[Union[Callable[[C, int], bool], Callable[..., int]]], *args) -> D

Documentation for fn

In stubgen it gets translated to

def fn(a: int, b: int, c: module.C, callback: typing.Optional[typing.Callable[[C, int, ...], bool]], *args) -> D:

Whereas it should stay as the original annotation.

how to use the setup.py generated file

This question may be a bit outside the scope of pybind11-stubgen, but an answer may be useful to others as well.

pybind11 allowed me to generate a file mypackage.cpython-310-x86_64-linux-gnu.so.

Applying pybind11-stubgen with the argument "mypackage" leads to the creation of a stubs folder containing an __init__.pyi and a setup.py file.

I cannot find how to use the setup.py file to install the stubs. When directly calling pip install ., I get the error error: package directory 'mypackage-stubs' does not exist. When reorganizing the folders and files so that such a directory exists and contains __init__.pyi, I get the error ERROR: Could not find a version that satisfies the requirement mypackage (which is indeed a requirement in the setup.py file).

mypackage itself is properly installed, i.e. calling "import mypackage" in a Python module works.

(I realize this may be more a packaging question than a pybind11-stubgen question)
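For what it's worth, the layout pip expects here is the PEP 561 stub-only convention: a directory literally named `mypackage-stubs` containing the `.pyi` files, listed in both `packages` and `package_data` of the setup script. A minimal sketch of that layout (`make_stub_package` is a hypothetical helper, not part of pybind11-stubgen):

```python
from pathlib import Path


def make_stub_package(root: Path, package: str, pyi_text: str = "") -> Path:
    """Create the PEP 561 stub-only layout: '<package>-stubs/__init__.pyi'
    next to a setup.py that lists the .pyi in package_data (setuptools
    ignores non-.py files otherwise)."""
    stubs_name = f"{package}-stubs"
    stubs_dir = root / stubs_name
    stubs_dir.mkdir(parents=True, exist_ok=True)
    (stubs_dir / "__init__.pyi").write_text(pyi_text, encoding="utf-8")
    setup_py = "\n".join([
        "from setuptools import setup",
        "setup(",
        f"    name={stubs_name!r},",
        "    version='0.0.1',",
        f"    packages=[{stubs_name!r}],",
        f"    package_data={{{stubs_name!r}: ['__init__.pyi']}},",
        ")",
        "",
    ])
    (root / "setup.py").write_text(setup_py, encoding="utf-8")
    return stubs_dir
```

Running `pip install .` from `root` should then install the stubs; an `install_requires` on `mypackage` itself can simply be dropped if the stubs are installed independently.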

Generate stubs before installing myModule

Hi, I'm new to Python stubs.

Today I compiled my C++ code into a Python module using pybind11.
If I understood how it works, to use pybind11-stubgen, myModule MUST already be installed in a Python environment, right?

If yes, I have to follow the steps below:

  • Compile my C++ code into a Python module (pybind11)
  • Install this module in the Python environment (e.g. using setup.py)
  • After that, run pybind11-stubgen, take the output files, and copy them to the folder where my module was installed

Right?

I would like to automate this process and generate the stubs BEFORE running my module installation setup.py.
If that's possible, it would let me run setup.py once and get both myModule and myModuleStubs installed at the same time.

Question

Is there a way to do that?

Thank you,
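One way to automate this without installing first: put the build directory on PYTHONPATH for a child process and run the generator from there before the install step. A sketch (assuming the installed package can be launched with `python -m pybind11_stubgen`; `stubgen_command` is a hypothetical helper):

```python
import os
import sys


def stubgen_command(build_dir: str, module: str, out_dir: str = "stubs"):
    """Build a (command, env) pair that runs pybind11-stubgen against a
    module that is compiled but not yet installed, by prepending the
    build directory to PYTHONPATH for the child process."""
    env = dict(os.environ)
    env["PYTHONPATH"] = build_dir + os.pathsep + env.get("PYTHONPATH", "")
    cmd = [sys.executable, "-m", "pybind11_stubgen", "-o", out_dir, module]
    return cmd, env
```

A custom setup.py build step could then call `subprocess.run(cmd, env=env, check=True)` before packaging, so the module and its stubs ship together.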

attached submodules aren't resolved

It would be understandable if you decided this was out of scope. But here are snippets of what I have:

Binding code:

py::class_<typename nt::NetworkTableInstance> cls_NetworkTableInstance;

auto nf = m.def_submodule("NotifyFlags");
cls_NetworkTableInstance.attr("NotifyFlags") = nf;

Resulting pyi:

class NetworkTablesInstance():
    NotifyFlags: module # value = <module '_pyntcore._ntcore.NotifyFlags'>

It seems like I would expect it to be like this instead:

import _pyntcore._ntcore.NotifyFlags

class NetworkTablesInstance():
    NotifyFlags: _pyntcore._ntcore.NotifyFlags

Docstring can trigger catastrophic backtracking in regex module

Hi

version: pybind11_stubgen-0.8.6-py3.7

I've tracked down a particular docstring in a large codebase that hangs pybind11-stubgen during the stub creation. I've simplified it to a (non-realistic) example such as below:

def foo(fn):
    """
    foo(fn: Any) -> None
 
    Use-case:
        foo(os.get_handle_inheritable, os.set_handle_inheritable)
    """
    pass

The use-case line includes a period and is longer than just a few characters, which I suspect triggers catastrophic backtracking in the regex that matches balanced parentheses.

Of course I can fix this by modifying the example, or prefacing the use-case with '>>>'. However, I don't have direct access to the code and believe that bad docstrings shouldn't break pybind11-stubgen.

I wonder if the regex matching function signatures could be made more robust?

Thanks!
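For the balanced-parentheses part specifically, a linear-time depth counter cannot backtrack, however pathological the docstring. A sketch of the idea (not the tool's actual code; `extract_args` is a hypothetical name):

```python
def extract_args(line: str, name: str):
    """Return the argument text of 'name(...)' in line, or None.

    A single left-to-right scan with a depth counter replaces the
    nested-repetition regex, so runtime is linear in the line length."""
    start = line.find(name + "(")
    if start < 0:
        return None
    i = start + len(name)  # index of the opening parenthesis
    depth = 0
    for j in range(i, len(line)):
        if line[j] == "(":
            depth += 1
        elif line[j] == ")":
            depth -= 1
            if depth == 0:
                return line[i + 1 : j]
    return None  # unbalanced parentheses
```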

Pybind11 enums are not correctly detected.

When exporting an enum using pybind11 enum_ template:

py::enum_<ENUM_TYPE> (m, "MyEnum")
.value("val1", VAL1)
.value("val2", VAL2)
.export_values();

the stub generator assumes it is a module, and the corresponding pyi file has:

import MyEnum

How do you want to classify enums? Using the Python Enum module?
Best
Christopher
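One way a generator could tell the two apart (a sketch; `looks_like_pybind_enum` is hypothetical): pybind11's `enum_` produces a class carrying a `__members__` mapping, whereas an actual submodule is a module object.

```python
import inspect


def looks_like_pybind_enum(obj) -> bool:
    """True for enum-like classes that expose a __members__ mapping.

    Checking this before emitting an `import X` line avoids
    misclassifying a bound enum as a module."""
    return inspect.isclass(obj) and isinstance(
        getattr(obj, "__members__", None), dict
    )
```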

Release 0.8.8 creates wheel with invalid version

With 0.8.8 I get a file named neo3vm_stubs-None-py3-none-any.whl, which errors as follows when trying to pip install it:

ERROR: Could not find a version that satisfies the requirement neo3vm==None (from neo3vm-stubs) (from versions: 0.8.3, 0.8.4)
ERROR: No matching distribution found for neo3vm==None

The stub generation output gives some warnings

/Users/erik/Documents/code/neo3vm-priv/venv/lib/python3.9/site-packages/setuptools/dist.py:501: UserWarning: The version specified ('None') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
  warnings.warn(
running bdist_wheel
running build
running build_py
package init file 'neo3vm-stubs/__init__.py' not found (or not a regular file)
running egg_info
/Users/erik/Documents/code/neo3vm-priv/venv/lib/python3.9/site-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: None is an invalid version and will not be supported in a future release
  warnings.warn(
creating neo3vm_stubs.egg-info
writing neo3vm_stubs.egg-info/PKG-INFO
writing dependency_links to neo3vm_stubs.egg-info/dependency_links.txt
writing requirements to neo3vm_stubs.egg-info/requires.txt
writing top-level names to neo3vm_stubs.egg-info/top_level.txt
writing manifest file 'neo3vm_stubs.egg-info/SOURCES.txt'
reading manifest file 'neo3vm_stubs.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'neo3vm_stubs.egg-info/SOURCES.txt'
copying neo3vm-stubs/__init__.pyi -> build/lib/neo3vm-stubs
/Users/erik/Documents/code/neo3vm-priv/venv/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
installing to build/bdist.macosx-11-x86_64/wheel
running install
running install_lib
creating build/bdist.macosx-11-x86_64/wheel
creating build/bdist.macosx-11-x86_64/wheel/neo3vm-stubs
copying build/lib/neo3vm-stubs/__init__.pyi -> build/bdist.macosx-11-x86_64/wheel/neo3vm-stubs
running install_egg_info
Copying neo3vm_stubs.egg-info to build/bdist.macosx-11-x86_64/wheel/neo3vm_stubs-None-py3.9.egg-info
running install_scripts
creating build/bdist.macosx-11-x86_64/wheel/neo3vm_stubs-None.dist-info/WHEEL
creating './neo3vm_stubs-None-py3-none-any.whl' and adding 'build/bdist.macosx-11-x86_64/wheel' to it
adding 'neo3vm-stubs/__init__.pyi'
adding 'neo3vm_stubs-None.dist-info/METADATA'
adding 'neo3vm_stubs-None.dist-info/WHEEL'
adding 'neo3vm_stubs-None.dist-info/top_level.txt'
adding 'neo3vm_stubs-None.dist-info/RECORD'
removing build/bdist.macosx-11-x86_64/wheel

For reference: release 0.8.7 produces the name neo3vm_stubs-0.8.4-py3-none-any.whl, which works fine.

Add 'type' to CLASS_NAME_BLACKLIST

For some reason an exception class had 'type' as a nested class. Here's the output:

class MyException(Exception, BaseException):
    class type():
        """
        type(object_or_name, bases, dict)
        type(object) -> the object's type
        type(name, bases, dict) -> a new type
        """
        class object():
            """
            The most base type
            """
            class type():
                pass
            pass
        class type():
            pass

Adding 'type' to CLASS_NAME_BLACKLIST seems like a reasonable thing to do, and removes type from the pyi output.

Thanks so much, this is pretty useful!

Avoid generating bad properties

Same problem as #33, but for properties instead!

    @property
    def m(self) -> std::mutex:
        """
        :type: std::mutex
        """

Looking at it... it seems like one should do the ast.parse trick in the PropertySignature constructor. However, I'm not sure if you want to keep using those globals in FunctionSignature? Maybe it would make more sense to have some kind of validator class for storing all of those globals?
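The ast.parse validation mentioned above can be as small as this (a sketch; `is_valid_annotation` is a hypothetical name):

```python
import ast


def is_valid_annotation(text: str) -> bool:
    """Return True if `text` parses as a Python expression.

    C++ leftovers such as 'std::mutex' fail to parse and can be
    rejected before they reach the generated stub."""
    try:
        ast.parse(text, mode="eval")
        return True
    except SyntaxError:
        return False
```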

WARNING - Generated stubs signature is degraded to `(*args, **kwargs) -> typing.Any` for

Hi, I got stub generation running perfectly using pybind11-stubgen. It works great.

Question

Now, I would just like to remove the warning below.
Could you help me?

[2022-08-18 20:57:31,607] {__init__.py:131} WARNING - Generated stubs signature is degraded to `(*args, **kwargs) -> typing.Any` for
[2022-08-18 20:57:31,607] {__init__.py:135} WARNING - def __init__(self: maialib.maiacore.Score, partsName: std::initializer_list<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, numMeasures: int = 20) -> None: ...
[2022-08-18 20:57:31,607] {__init__.py:136} WARNING -                                                          ^-- Invalid syntax
[2022-08-18 20:57:31,672] {__init__.py:957} INFO - Useful link: Avoiding C++ types in docstrings:
[2022-08-18 20:57:31,673] {__init__.py:958} INFO -       https://pybind11.readthedocs.io/en/latest/advanced/misc.html#avoiding-cpp-types-in-docstrings
/home/nyck/.local/lib/python3.8/site-packages/coverage/control.py:794: CoverageWarning: No data was collected. (no-data-collected)
  self._warn("No data was collected.", slug="no-data-collected")


Here is my Score class bind code

    // bindings to Score class
    py::class_<Score> cls(m, "Score");
    cls.def(py::init<const std::initializer_list<std::string>&, const int>(),
            py::arg("partsName"),
            py::arg("numMeasures") = 20,
            py::call_guard<py::scoped_ostream_redirect, py::scoped_estream_redirect>());

    cls.def(py::init<const std::vector<std::string>&, const int>(),
            py::arg("partsName"),
            py::arg("numMeasures") = 20,
            py::call_guard<py::scoped_ostream_redirect, py::scoped_estream_redirect>());

    cls.def(py::init<const std::string&>(),
            py::arg("filePath"),
            py::call_guard<py::scoped_ostream_redirect, py::scoped_estream_redirect>());


Thank you

Contribute all good stuff to mypy.stubgenc and freeze this repo

Once mypy.stubgenc (or another tool) can generate full-featured stubs for pybind11, there will be no need for this tool.

Things to ship:

  • function signatures
  • default argument parsing
  • attribute types
  • attribute values
  • nested classes
  • functions with positional-only arguments
  • pybind11-specific type transformations (e.g. iterator -> typing.Iterator)

skip signature degrading

I just upgraded to 0.6.2 and am now facing something like this

[2020-08-27 11:14:21,239] {__init__.py:55} ERROR - Generated stubs signature is degraded to `(*args, **kwargs) -> Any` for
[2020-08-27 11:14:21,240] {__init__.py:56} ERROR - def __init__(self: neo3vm.ArrayStackItem, reference_counter: neo3vm::ReferenceCounter) -> None: pass
[2020-08-27 11:14:21,240] {__init__.py:57} ERROR -                                                                    ^-- Invalid syntax

The general idea is cool, but it would be nice if there were a CLI flag to skip this downgrading. For example, I do a second pass on the generated stubs to fix these myself.
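Such a second pass can be sketched as a regex substitution that collapses C++ qualified names to their last component (hypothetical post-processing, not a flag the tool provides; it only yields a valid stub when the short name is actually exposed on the Python side):

```python
import re

# Matches a C++ qualified name such as `neo3vm::ReferenceCounter`
# or `std::chrono::system_clock` inside a generated signature.
CXX_QUALIFIED_NAME = re.compile(r"\b\w+(?:::\w+)+\b")


def strip_cxx_namespaces(signature: str) -> str:
    """Collapse each C++ qualified name to its last component so the
    signature parses as Python."""
    return CXX_QUALIFIED_NAME.sub(
        lambda m: m.group(0).rsplit("::", 1)[-1], signature
    )
```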

Missing argument names result in malformed *.pyi files

I'm working on a large module (https://github.com/CadQuery/OCP) and sometimes this kind of situation occurs. I'm OK with looking for a solution and opening a PR, but suggestions would be welcome.

Call signature:  OCP.HLRBRep.HLRBRep_Data.EdgeOfTheHidingFace(*args, **kwargs)
Type:            instancemethod
String form:     <instancemethod EdgeOfTheHidingFace at 0x7f1b140862b8>
Docstring:      
EdgeOfTheHidingFace(*args, **kwargs)
Overloaded function.

1. EdgeOfTheHidingFace(self: OCP.HLRBRep.HLRBRep_Data, E: int, ED: OCP.HLRBRep.HLRBRep_EdgeData) -> bool

Returns the true if the Edge <ED> belongs to the Hiding Face.

2. EdgeOfTheHidingFace(self: OCP.HLRBRep.HLRBRep_Data, : int, ED: OCP.HLRBRep.HLRBRep_EdgeData) -> bool

Returns the true if the Edge <ED> belongs to the Hiding Face.
Class docstring:
instancemethod(function)

Bind a function to a class.

Function default argument with bound enum

Unfortunately, pybind generates an unhelpful docstring for a function whose default argument is a bound enum:

enum class Color {Red};

PYBIND11_MODULE(foo, m) {
  pybind11::enum_<Color> (m, "Color").value("Red", Color::Red);
  m.def("bar", [](Color) {}, pybind11::arg("a") = Color::Red);
}
>>> help(foo.bar)
foo(...) method of builtins.PyCapsule instance
    bar(a: foo.Color = <Color.Red: 0>) -> None

To your credit, even though the default argument isn't even module qualified, your parsing doesn't trip over this. Unfortunately, it does classify the enum name Color as a module and adds it to the import statements at the top:

import foo
import typing
import Color

Arguably, the onus is on pybind to write valid docstrings. But this could be fixed in two lines here:

@@ -744,7 +744,8 @@ class ModuleStubsGenerator(StubsGenerator):
         for f in self.free_functions:  # type: FreeFunctionStubsGenerator
             result |= f.get_involved_modules_names()

-        return set(result) - {"builtins", 'typing', self.module.__name__}
+        classnames = set(C.klass.__name__ for C in self.classes)
+        return set(result) - {"builtins", 'typing', self.module.__name__} - classnames

     def to_lines(self):  # type: () -> List[str]

Would you consider adding such a hack to enable the use of bound enums in the current form or reject it in favor of upstream work?

Generating bad types for `py::bool_` and `py::float_`

I use

using PyBaseTypes = std::variant<py::bool_, py::int_, py::float_, py::str, py::bytes, py::object>;

Generated then:

typing.Union[bool_, int, float_, str, bytes, object]

I would expect:

typing.Union[bool, int, float, str, bytes, object]

I'm not quite sure if the fault is here or with pybind11.

pybind11: 2.9.0
pybind11-stubgen: 0.10.5
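Until this is fixed upstream, a post-processing pass over the generated annotations can map the pybind11 wrapper names to the builtins. A sketch (the alias table is an assumption about which names appear in practice):

```python
import re

# Assumed mapping from pybind11 wrapper type names, as they appear in
# docstring signatures, to the Python builtins they represent.
PYBIND_BUILTIN_ALIASES = {
    "bool_": "bool",
    "int_": "int",
    "float_": "float",
    "str_": "str",
}


def fix_builtin_aliases(annotation: str) -> str:
    """Rewrite py::bool_/py::int_/py::float_ leftovers to builtins."""
    return re.sub(
        r"\b(bool_|int_|float_|str_)\b",
        lambda m: PYBIND_BUILTIN_ALIASES[m.group(1)],
        annotation,
    )
```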

Is it possible to type the iterator?

This is probably more of a question and I don't know where else to ask it right now.

Is it possible to type the iterator?

I don't know if pybind11 doesn't currently offer that in general, if it's because of my possibly specific implementation, or if this package just hasn't implemented it so far.

class Value
{
public:
  virtual ~Value() = default;
};

class Array : public Value, public std::vector<Value*>
{
public:
  using std::vector<Value*>::vector;
....
};
....
  py::class_<Array, Value>(m, "Array", "")
....
    .def("__iter__", [](Array& array) {
      return py::make_iterator(array.begin(), array.end());
    }, py::keep_alive<0, 1>()); /* Keep vector alive while iterator is used */

Generates:

class Array(Value):
    def __iter__(self) -> typing.Iterator: ...

Is it possible to generate something like the following?

class Array(Value):
    def __iter__(self) -> typing.Iterator[Value]: ...

Docstring of an overloaded function is placed in the wrong place

When a function is overloaded, its documentation comments are placed in the wrong place.

For example, the C++ wrapper is below:

#include <iostream>
#include <pybind11/pybind11.h>
int add_int(int x, int y) {
    return x + y;
}
double add_double(double x, double y) {
    return x + y;
}

namespace py = pybind11;

PYBIND11_MODULE(python_example, m) {
    m.doc() = "pybind11 example plugin";
    m.def("add", &add_int, "Add x and y(int)", py::arg("x"), py::arg("y"));
    m.def("add", &add_double, "Add x and y(double)", py::arg("x"), py::arg("y"));
}

then, __init__.pyi below is generated:

"""pybind11 example plugin"""
from __future__ import annotations
import python_example
import typing

__all__ = [
    "add"
]


@typing.overload
def add(x: float, y: float) -> float:
    """
    Add x and y(int)

    Add x and y(double)
    """
@typing.overload
def add(x: int, y: int) -> int:
    pass

I expect it to be like below:

"""pybind11 example plugin"""
from __future__ import annotations
import python_example
import typing

__all__ = [
    "add"
]


@typing.overload
def add(x: float, y: float) -> float:
    """
    Add x and y(double)
    """
@typing.overload
def add(x: int, y: int) -> int:
    """
    Add x and y(int)
    """

help(add) prints the docstrings separately, so I think this is a pybind11-stubgen problem.

Is there a solution to fix it?

Strip extra info from numpy arrays types

At the moment numpy.ndarray[...] annotations are not supported by static analysis tools and produce warnings/errors.
Add a flag to strip extra info in brackets.

(follow up to #34)

error: Name "capsule" is not defined

I have a pybind11-based library that uses capsules (pybind11::capsule). When I try to use the stubs generated by pybind11-stubgen with mypy, I'm getting a lot of errors like the following:

stubs/_pymargo-stubs/__init__.pyi:318: error: Name "capsule" is not defined

This can be fixed by manually adding a "capsule" class to __init__.pyi, but it would be better if pybind11-stubgen could do it by itself.

Enums as default value make signature degrade to (*args, **kwargs) -> typing.Any

When I try to use an enum (previously defined via pybind11::enum_) as the default value for a pybind11::arg, pybind11-stubgen thinks it is a C++ type. Below is an example:

#include <pybind11/pybind11.h>


namespace py = pybind11;

enum TestEnum : int8_t {
  VALUE_1 = 0,
  VALUE_2 = 1,
  VALUE_3 = 2
};

PYBIND11_MODULE(test_module, m) {
  py::enum_<TestEnum>(m, "TestEnum")
    .value("VALUE_1", TestEnum::VALUE_1)
    .value("VALUE_2", TestEnum::VALUE_2)
    .value("VALUE_3", TestEnum::VALUE_3);

  m.def(
    "test_function",
    [] (TestEnum enum_val = TestEnum::VALUE_1) {},
    py::arg("enum_val") = TestEnum::VALUE_1
  );
}

After installing it as sdutil.test_module via pip and running pybind11-stubgen sdutil.test_module, I get the following errors:

[2020-10-13 23:25:26,067] {__init__.py:95} ERROR - Generated stubs signature is degraded to `(*args, **kwargs) -> typing.Any` for
[2020-10-13 23:25:26,067] {__init__.py:99} ERROR - def test_function(enum_val: sdutil.test_module.TestEnum = <TestEnum.VALUE_1: 0>) -> None: ...
[2020-10-13 23:25:26,067] {__init__.py:100} ERROR -                                                           ^-- Invalid syntax
[2020-10-13 23:25:26,067] {__init__.py:918} INFO - Useful link: Avoiding C++ types in docstrings:
[2020-10-13 23:25:26,067] {__init__.py:919} INFO -       https://pybind11.readthedocs.io/en/master/advanced/misc.html#avoiding-cpp-types-in-docstrings

When I cast TestEnum::VALUE_1 to int8_t, the stubs are generated, but when I call the function without enum_val parameter, pybind11 complains that the type is invalid.

Thanks in advance!

Error writing utf-8 pyi file on windows

Here's an example docstring:

.def("utf8_docstring", &::DocClass::utf8_docstring, release_gil(), py::doc(
    "Construct a Ramsete unicycle controller.\n"
    "\n"
    "Tuning parameter (b > 0 rad²/m²) for which larger values make\n"
    "\n"
    "convergence more aggressive like a proportional term.\n"
    "Tuning parameter (0 rad⁻¹ < zeta < 1 rad⁻¹) for which larger\n"
    "values provide more damping in response.")
  );

And the corresponding stubgen stack trace:

     File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\pybind11_stubgen\__init__.py", line 954, in main
      _module.write()
    File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\pybind11_stubgen\__init__.py", line 859, in write
      init_pyi.write("\n".join(self.to_lines()))
    File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\encodings\cp1252.py", line 19, in encode
      return codecs.charmap_encode(input,self.errors,encoding_table)[0]
  UnicodeEncodeError: 'charmap' codec can't encode character '\u207b' in position 6129: character maps to <undefined>
  error: Failed to generate .pyi file (see above, or set RPYBUILD_SKIP_PYI=1 to ignore) via ['C:\\hostedtoolcache\\windows\\Python\\3.7.9\\x64\\python.exe', '-m', 'robotpy_build.command.build_pyi']

If the output file encoding were set to utf-8 (which I think would be pretty reasonable, since Python source is always? utf-8 now), I believe this issue would be addressed. The default output file encoding on Windows is mbcs.
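The proposed fix amounts to passing an explicit encoding at the point where the .pyi is written (a sketch of the idea, not the project's actual write path):

```python
from pathlib import Path


def write_stub(path: Path, text: str) -> None:
    """Write the .pyi with an explicit encoding so the platform default
    (cp1252/mbcs on Windows) cannot reject characters like '⁻'."""
    path.write_text(text, encoding="utf-8")
```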

Unused imports

I noticed the following is always included in the generated stubs.

result += [
    "from numpy import float64",
    "_Shape = Tuple[int, ...]",
]

For example, in my case I don't use numpy, so this creates invalid stubs by default. The _Shape alias doesn't break anything, but I also don't need it. Is this a leftover, or why does it exist? Thanks

ModuleNotFoundError: No module named

Hi,

I have built a test .pyd library using pybind11 and I'm currently looking for a way to generate .pyi files to enable autocompletion with VS Code Pylance.

I have HFGt.pyd in the current folder and I'm running:
pybind11-stubgen -o doc_py HFGt.pyd
and get this error:

Traceback (most recent call last):
  File "c:\users\tasik\.julia\conda\3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\tasik\.julia\conda\3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\tasik\.julia\conda\3\Scripts\pybind11-stubgen.exe\__main__.py", line 7, in <module>
  File "c:\users\tasik\.julia\conda\3\lib\site-packages\pybind11_stubgen\__init__.py", line 915, in main
    _module = ModuleStubsGenerator(_module_name)
  File "c:\users\tasik\.julia\conda\3\lib\site-packages\pybind11_stubgen\__init__.py", line 666, in __init__
    self.module = importlib.import_module(module_or_module_name)
  File "c:\users\tasik\.julia\conda\3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'HFGt'

I also tried adding/removing suffixes but it didn't help.

My library depends on two DLL files that are also in the same folder, so I'm able to import my module and test it in debug mode. What am I missing?
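The traceback shows the tool importing the target with importlib, so the argument must be an importable module name: `HFGt`, not `HFGt.pyd`, run from (or with sys.path including) the directory that contains it. A hypothetical pre-flight check:

```python
import importlib
import sys


def can_stubgen(module_name: str, search_dir: str = ".") -> bool:
    """True if `module_name` is importable with `search_dir` on sys.path,
    which is the same condition pybind11-stubgen needs to succeed."""
    sys.path.insert(0, search_dir)
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False
    finally:
        sys.path.pop(0)
```

On Python 3.8+ on Windows, directories containing dependent DLLs may additionally need to be registered with os.add_dll_directory() before the import succeeds.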

Fix int_ and bool_ types

I discussed this somewhere before (can't recall where), but some signatures contain int_ and bool_ (for respectively returning py::int_ and py::bool_ from the binding side). The only logical and valid signature types would be int and bool. I currently do a second pass myself to convert these. Is it an idea to fix these here, or do you reckon it should be fixed on the pybind11 side?

import directive ignored

When a pybind11 module depends on another module, it's required to add a line like
py::module::import("mytype");

In a module function definition I add an object of a type from this module to the module:

py::module::import("mytype");
m.add_object("name", py::cast(new mytype::MyType()));

This seems to work from pybind11's point of view, but my stub file is now missing an import of module mytype.

Workaround:
generating a dummy class inheriting from MyType and adding it to the module
py::class_<Dummy, mytype::MyType>(m, "Dummy");
generates an import in the stub file.

Slowness with simple 'max' signature

Hello,
the signature regex seems to be getting stuck on a pretty simple signature from a pybind11-built module.

I'm not experienced with regex, but I can provide a simple example for you (requires pip install pebble):

I think this is sort of mentioned in a few other issues (specifically #51), but I think the max signature which pybind11 generates is slightly different.

If you don't think so, feel free to close this.

Thank you for your work on this library!


from pebble import ProcessPool
import re


def run_test(name, line):
    no_parentheses = r"[^()]*"
    parentheses_one_fold = r"({nopar}(\({nopar}\))?)*".format(nopar=no_parentheses)
    parentheses_two_fold = r"({nopar}(\({par1}\))?)*".format(par1=parentheses_one_fold, nopar=no_parentheses)
    parentheses_three_fold = r"({nopar}(\({par2}\))?)*".format(par2=parentheses_two_fold, nopar=no_parentheses)
    signature_regex = r"(\s*(?P<overload_number>\d+).)" \
                      r"?\s*{name}\s*\((?P<args>{balanced_parentheses})\)" \
                      r"\s*->\s*" \
                      r"(?P<rtype>[^\(\)]+)\s*".format(name=name,
                                                       balanced_parentheses=parentheses_three_fold)
    return re.match(signature_regex, line)

def main():
    test_subjects = (
            # typed
            ("max", "max( unsigned short int, unsigned short int )"),
            ("max", "max( long int, long int )"),
            ("max", "max(a: int, b: int)"),
            # example in docstring
            ("pss", 'myps = pss(banana, "HELLOBANANA", myproj::things::Thing1)'),
            ("pss", '    pss(banana, "HELLOBANANA", myproj::things::Thing1)'),
            )
    with ProcessPool(max_workers=5) as ppe:
        futures = []
        for name, line in test_subjects:
            futures.append(ppe.schedule(run_test, args=[name, line], timeout=5))
    for i, x in enumerate(futures):
        print(f"{x.exception}, {test_subjects[i][0]}\n{test_subjects[i][1]}")

if __name__ == "__main__":
    main()

yields:

<bound method Future.exception of <ProcessFuture at 0x7fe12e9a6e50 state=finished raised TimeoutError>>, max
max( unsigned short int, unsigned short int )
<bound method Future.exception of <ProcessFuture at 0x7fe12e9a6f70 state=finished returned NoneType>>, max
max( long int, long int )
<bound method Future.exception of <ProcessFuture at 0x7fe12e9b6070 state=finished returned NoneType>>, max
max(a: int, b: int)
<bound method Future.exception of <ProcessFuture at 0x7fe12e9b6130 state=finished returned NoneType>>, pss
myps = pss(banana, "HELLOBANANA", myproj::things::Thing1)
<bound method Future.exception of <ProcessFuture at 0x7fe12e9b61f0 state=finished raised TimeoutError>>, pss
    pss(banana, "HELLOBANANA", myproj::things::Thing1)

Generate less bad typestubs for invalid bindings?

I've got one like so:

    def registerFPGAButtonCallback(self, callback: std::function<void (wpi::StringRef, HAL_Value const*)>, initialNotify: bool) -> CallbackStore: 
        """
        registerFPGAButtonCallback(self: hal.simulation._simulation.RoboRioSim, callback: std::function<void (wpi::StringRef, HAL_Value const*)>, initialNotify: bool) -> hal.simulation._simulation.CallbackStore
        """

Now, clearly I forgot to include <pybind11/functional.h>. It seems like it would be a good idea to handle this in stubgen to make this sort of thing easier to find. Some possibilities:

  • Quote types if they have :: in them
  • Return some error code to indicate that the resulting stub has a problem

Derived class has methods defined in base class

class Base:
    doc: str = ""

    def foo(self):
        """foo(self) -> None"""
        pass


class Derived(Base):
    pass

results to

class Base():
    def foo(self) -> None: ...
    __annotations__: dict # value = {'doc': <class 'str'>}
    __dict__: mappingproxy # value = mappingproxy({'__module__': 'xxx', '__annotations__': {'doc': <class 'str'>}, 'doc': '', 'foo': <function Base.foo at 0x7f8c4e26c7b8>, '__dict__': <attribute '__dict__' of 'Base' objects>, '__weakref__': <attribute '__weakref__' of 'Base' objects>, '__doc__': None})
    __weakref__: getset_descriptor # value = <attribute '__weakref__' of 'Base' objects>
    doc = ''
    pass
class Derived(Base):
    def foo(self) -> None: ...
    __annotations__: dict # value = {'doc': <class 'str'>}
    __dict__: mappingproxy # value = mappingproxy({'__module__': 'xxx', '__doc__': None})
    __weakref__: getset_descriptor # value = <attribute '__weakref__' of 'Base' objects>
    doc = ''
    pass

License

Hi, I would like to build upon this code base. I would be willing to contribute features/improvements if I make some. Could you license it under some terms?
Best Christopher

missing docstrings with empty lines

Using C++11 raw string literals in docstrings seems not to be supported; it gives a warning about empty docstrings.

e.g.
.def("hello", &hello, R"-( help for function help )-");

output is not deterministic

Every time I run pybind11-stubgen the output file is different, i.e., the method order changes. In my CI runs I would like to check that the latest stubs for my pybind module are committed, ideally with something like pybind11-stubgen ... && git diff --exit-code, but that does not work if the file changes every time.

Would it be easily possible to fix the order in which methods are output?
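Determinism here usually just means sorting members before emitting them. A sketch of the idea (`emit_methods` is a hypothetical name, not the tool's API):

```python
def emit_methods(members: dict) -> list:
    """Emit stub lines in sorted name order so repeated runs produce
    byte-identical files regardless of attribute enumeration order."""
    return [
        f"def {name}{sig}: ..." for name, sig in sorted(members.items())
    ]
```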

Signatures may span multiple lines

When a function has default arguments, the repr() of those arguments appears in the function signature in the __doc__. Sometimes the repr() of a default argument may span more than one line; for example, if the default argument is a numpy matrix, its repr() can have multiple lines.

__init__(*args, **kwargs)
Overloaded function.

1. __init__(self: Pose, rotation: numpy.ndarray[float64[3, 3]]=array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]]), translation: numpy.ndarray[float64[3, 1]]=array([0., 0., 0.])) -> None

This seems to break the parsing code here, which enters an infinite loop.

For the particular case of numpy arrays, one can use

import numpy as np
np.set_string_function(lambda x: " ".join(np.array_repr(x).split()))

to avoid the multi-line output, but a more general solution would be to actually detect and properly parse signatures that span multiple lines.

My regex abilities are not good enough to provide a fix. Sorry and thanks for building such a useful tool!
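As a sketch of that more general fix, continuation lines could be folded back into a single signature line before the regex parsing runs (a hypothetical helper, not part of pybind11-stubgen; the heuristic for what starts a new signature is an assumption):

```python
import re

def fold_signature_lines(doc: str) -> list[str]:
    """Join docstring lines so each signature occupies a single line.

    A non-empty line that does not look like the start of a signature
    ("name(..." or a numbered overload "1. name(...") is treated as a
    continuation of the previous line, e.g. a multi-line default repr.
    """
    sig_start = re.compile(r"^\s*(?:\d+\.\s+)?\w+\(")
    folded: list[str] = []
    for line in doc.splitlines():
        if folded and line.strip() and not sig_start.match(line):
            folded[-1] += " " + line.strip()  # glue continuation onto signature
        else:
            folded.append(line)
    return folded

doc = (
    "1. __init__(self: Pose, rotation: ndarray = array([[1., 0., 0.],\n"
    "       [0., 1., 0.],\n"
    "       [0., 0., 1.]])) -> None"
)
print(fold_signature_lines(doc))
```

With this, the three-line overload above collapses into one parseable line.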

Unrecognized time_point type

error info:
Generated stubs signature is degraded to `(*args, **kwargs) -> typing.Any` for
[2022-08-05 11:28:03,390] {__init__.py:135} WARNING - def Try(self: PyQTP.Quote.OrderBookEvent) -> std::optional<std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > > >: ...
[2022-08-05 11:28:03,390] {__init__.py:136} WARNING -                                                                                                                                                           ^-- Invalid syntax

cpp code
std::optional<std::chrono::time_point<std::chrono::system_clock>> Try();

__entries and __members__ are untyped for enums

If I have an enumeration that's converted to a stub, the result looks something like

class Foo():
    class Status():
        """
        Members:

          COMPLETE

          PARTIAL

          INVALID
        """
        def __init__(self, arg0: int) -> None: ...
        def __int__(self) -> int: ...
        @property
        def name(self) -> str:
            """
            (self: handle) -> str

            :type: str
            """
        COMPLETE: mypkg.Foo.Status # value = Status.COMPLETE
        INVALID: mypkg.Foo.Status # value = Status.INVALID
        PARTIAL: mypkg.Foo.Status # value = Status.PARTIAL
        __entries: dict # value = {'COMPLETE': (Status.COMPLETE, None), 'PARTIAL': (Status.PARTIAL, None), 'INVALID': (Status.INVALID, None)}
        __members__: dict # value = {'COMPLETE': Status.COMPLETE, 'PARTIAL': Status.PARTIAL, 'INVALID': Status.INVALID}

but then mypy complains about

error: Implicit generic "Any". Use "typing.Dict" and specify generic parameters

for the untyped dict lines. Could we make this a typing.Dict with appropriate generic parameters instead?

Add optional annotation quotation

I am renaming the output files to be module.py in an attempt to use pylint. This mostly works, but I see some errors when there are forward declarations.

pyi
def registerUserDefinedAlgorithm(self, arg0: UserDefinedRegistrationInformation) -> None: ...

The error goes away if I add quotes like this:
def registerUserDefinedAlgorithm(self, arg0: 'UserDefinedRegistrationInformation') -> None: ...

pylint error
[E0601(used-before-assignment)Factory.registerUserDefinedAlgorithm] Using variable 'UserDefinedRegistrationInformation' before assignment
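Besides quoting each annotation, a general Python mechanism (not a pybind11-stubgen feature) that sidesteps forward-reference errors in a renamed `.py` file is postponed evaluation of annotations:

```python
from __future__ import annotations  # must be the first statement in the file

class Factory:
    # The forward reference is now fine: with postponed evaluation,
    # annotations stay strings and are not evaluated at definition time.
    def registerUserDefinedAlgorithm(
        self, arg0: UserDefinedRegistrationInformation
    ) -> None: ...

class UserDefinedRegistrationInformation:
    pass
```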

Some classes in the stub are imported as well

For three of the classes in my generated stub file, I also find import X lines at the top of that file. It looks like

import mypkg
import typing
import Bar
import Foo
import Hoge
import datetime
import numpy
__all__ = [
...
"Bar",
"Foo",
"Hoge",
...
]
...

mypy fails on the generated file with

error: Name 'Bar' already defined (possibly by an import)  [no-redef]

I can't find anything that makes these classes "special" or understand why only these three should be mentioned there.

subpackage shouldn't show up in `__all__`

pylint (as embedded in vscode anyways) flags this as an undefined variable.

If the subpackage is placed in its own package, it seems like having it in __all__ is unnecessary?

Add tests

Test stubs generation on

  • simple pybind11 project
  • pure python module

Failure when function parameter default value repr isn't valid python

[2020-09-03 01:39:32,372] {__init__.py:60} ERROR - Generated stubs signature is degraded to `(*args, **kwargs) -> Any` for
[2020-09-03 01:39:32,372] {__init__.py:65} ERROR - def __init__(self: wpilib.controller._controller.trajectory.TrapezoidProfile, constraints: wpilib.controller._controller.trajectory.TrapezoidProfile.Constraints, goal: wpilib.controller._controller.trajectory.TrapezoidProfile.State, initial: wpilib.controller._controller.trajectory.TrapezoidProfile.State = <wpilib.controller._controller.trajectory.TrapezoidProfile.State object at 0x7fc9b00663b0>) -> None: ...

If I'm understanding the implementation correctly, it looks like mypy's stubgen will set it to ...?

Tried it out, looks like I'm right:

class A:
    pass


def foo(x: int = 5):
    pass


def a(a: A = A()):
    pass

Resulting generated pyi

from typing import Any

class A: ...

def foo(x: int=...) -> Any: ...
def a(a: A=...) -> Any: ...

Fully qualified name for enum values seems not appropriate

Consider the following C++ code:

namespace pkg {
class Foo {
 public:
  enum FooStatus { kA, kB };
};

enum Bar { kC, kD };
}

and the corresponding pybind wrapper code

  py::class_<pkg::Foo> foo(m, "Foo");

  py::enum_<pkg::Foo::FooStatus>(foo, "FooStatus")
      .value("A", pkg::Foo::FooStatus::kA)
      .value("B", pkg::Foo::FooStatus::kB)
      .export_values();

  py::enum_<pkg::Bar>(m, "Bar")
      .value("C", pkg::Bar::kC)
      .value("D", pkg::Bar::kD)
      .export_values();

This leads to the following generated stub when run with pybind11-stubgen org.company.pkg:

class Bar():
    """
    Members:

      C

      D
    """
    def __init__(self, arg0: int) -> None: ...
    def __int__(self) -> int: ...
    @property
    def name(self) -> str:
        """
        (self: handle) -> str

        :type: str
        """
    C: org.company.pkg.Bar # value = Bar.C
    D: org.company.pkg.Bar # value = Bar.D
    __entries: dict # value = {'C': (Bar.C, None), 'D': (Bar.D, None)}
    __members__: dict # value = {'C': Bar.C, 'D': Bar.D}
    pass

class Foo():
    class FooStatus():
        """
        Members:

          A

          B
        """
        def __init__(self, arg0: int) -> None: ...
        def __int__(self) -> int: ...
        @property
        def name(self) -> str:
            """
            (self: handle) -> str

            :type: str
            """
        A: org.company.pkg.FooStatus # value = FooStatus.A
        B: org.company.pkg.FooStatus # value = FooStatus.B
        __entries: dict # value = {'A': (FooStatus.A, None), 'B': (FooStatus.B, None)}
        __members__: dict # value = {'A': FooStatus.A, 'B': FooStatus.B}
        pass
    A: org.company.pkg.FooStatus # value = FooStatus.A
    B: org.company.pkg.FooStatus # value = FooStatus.B
    pass

C: org.company.pkg.Bar # value = Bar.C
D: org.company.pkg.Bar # value = Bar.D

and the Python code

from .pkg import Foo, Bar

reveal_type(Foo.FooStatus.A)
reveal_type(Bar.C)

when processed with mypy shows

typetest.py:3: note: Revealed type is 'Any'
typetest.py:4: note: Revealed type is 'Any'

When we replace the lines

        A: org.company.pkg.FooStatus # value = FooStatus.A
        B: org.company.pkg.FooStatus # value = FooStatus.B

inside FooStatus with

        A: Foo.FooStatus # value = FooStatus.A
        B: Foo.FooStatus # value = FooStatus.B

and

    C: org.company.pkg.Bar # value = Bar.C
    D: org.company.pkg.Bar # value = Bar.D

with

    C: Bar # value = Bar.C
    D: Bar # value = Bar.D

then mypy shows

typetest.py:3: note: Revealed type is 'pkg.Foo.FooStatus'
typetest.py:4: note: Revealed type is 'pkg.Bar'

This makes me wonder whether we should not put the fully qualified name of the class behind the colon (in the case of nested enums it's not even correct), but only the __qualname__. I can try to attack this, but I would first like to understand the reasoning behind this.

On another note, why are the enum values repeated on one level above their definition? With the code above C and D become top-level entries of the module, this doesn't seem right.

Yet another related comment: When processing the stub above with mypy then it also complains that dict should not be used, but a typing.Dict[K, V] should be used instead.
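The `__qualname__`-based spelling suggested above can be seen with plain Python classes; this is the behavior the stub generator would rely on:

```python
class Foo:
    class FooStatus:
        pass

# __name__ is only the leaf name; __qualname__ carries the nesting path,
# which is the spelling a stub nested inside Foo would need.
print(Foo.FooStatus.__name__)      # FooStatus
print(Foo.FooStatus.__qualname__)  # Foo.FooStatus
```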

pip uninstall myModule - Not deleting entire myModule folder

Hi,

After I generate the Python stubs using pybind11-stubgen, I install them with a custom Python script that copies all generated stub files into the installed myModule directory (in my case: ~/.local/lib/python3.8/site-packages/myModule/).

All works well! No problems.

Problem

Now, if I try to uninstall this package using pip uninstall myModule, the command deletes all files inside the installed myModule directory, but leaves the root folder and the copied stub files behind.

Question

What do I need to do so that pip uninstall myModule deletes all files, including the myModule root directory?

Thank you,

My System:

  • OS: Ubuntu 20.04 LTS (Windows WSL 2.0)
  • Python 3.8

Question: How to use/validate ndarray[...] types?

At the moment, the code generated from pybind11-stubgen (0.6.0) has numpy types like

numpy.ndarray[numpy.float64, _Shape[4, 4]]
numpy.ndarray[numpy.int8]
numpy.ndarray[uint8]
numpy.ndarray[float64[3, 1], flags.writeable]

but how do I actually put them to use? I have the numpy-stubs (git+https://github.com/numpy/numpy-stubs.git@c49d2d6875971a669a166ea93ef998911af283a1) installed, but validating the above results in

error: "ndarray" expects no type arguments, but 2 given  [type-arg]

for the ndarray parameters and

error: Bad number of arguments for type alias, expected: 0, given: 2

on the same line (for _Shape I guess?).

At the moment I regex the additional type information away before feeding the stubs to mypy, but how is this actually supposed to be used?
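For what it's worth, NumPy 1.20+ ships `numpy.typing.NDArray`, which parameterizes arrays by dtype in a way mypy accepts without any regex stripping (a sketch using that API; shape parameters like `_Shape[4, 4]` have no equivalent there and are simply dropped):

```python
import numpy as np
import numpy.typing as npt

def transform(matrix: npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
    """Dtype is carried by the annotation; shapes are not checked by mypy."""
    return matrix @ matrix

identity = np.eye(3)
print(transform(identity))
```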
