
pytorch2timeloop-converter's People

Contributors

gilbertmike, jsemer, kyungmi-lee, nschiefer, nullplay, radomirbosak, tanner-andrulis


pytorch2timeloop-converter's Issues

Unsure how to use specifications

When attempting to use the AlexNet output with TimeloopFE, parsing fails on the problem's instance attribute:

Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1

So how am I supposed to actually use pytorch2timeloop-converter?

The full error is:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:144, in TypeSpecifier.cast_check_type(self, value, node, key)
    143 try:
--> 144     casted = self.cast(value)
    145 except Exception as exc:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:197, in TypeSpecifier.cast(self, value, _TypeSpecifier__node_skip_parse)
    196 else:
--> 197     value = self.callfunc(value)
    198 if not primitive:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:106, in TypeSpecifier.__init__.<locals>.callfunc(x, _TypeSpecifier__node_skip_parse)
    105     return x
--> 106 return rt(x, __node_skip_parse=__node_skip_parse)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/problem.py:143, in Instance.__init__(self, *args, **kwargs)
    142 def __init__(self, *args, **kwargs):
--> 143     super().__init__(*args, **kwargs)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1207, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1205 super().__init__(*args, **kwargs)
-> 1207 self.update(self._to_dict(args))
   1208 for a in args:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1238, in DictNode._to_dict(x)
   1237 for y in x:
-> 1238     result.update(DictNode._to_dict(y))
   1239 return result

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1241, in DictNode._to_dict(x)
   1240 else:
-> 1241     raise TypeError(f"Can not convert non-dict to dict: {x}")

TypeError: Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1

The above exception was the direct cause of the following exception:

ParseError                                Traceback (most recent call last)
File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:144, in TypeSpecifier.cast_check_type(self, value, node, key)
    143 try:
--> 144     casted = self.cast(value)
    145 except Exception as exc:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:197, in TypeSpecifier.cast(self, value, _TypeSpecifier__node_skip_parse)
    196 else:
--> 197     value = self.callfunc(value)
    198 if not primitive:

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:106, in TypeSpecifier.__init__.<locals>.callfunc(x, _TypeSpecifier__node_skip_parse)
    105     return x
--> 106 return rt(x, __node_skip_parse=__node_skip_parse)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/problem.py:24, in Problem.__init__(self, *args, **kwargs)
     23 def __init__(self, *args, **kwargs):
---> 24     super().__init__(*args, **kwargs)
     25     self.version: str = self["version"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1229, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1228 if not __node_skip_parse:
-> 1229     self._parse_elems()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:564, in Node._parse_elems(self)
    563 for k, check in self._get_index2checker().items():
--> 564     self._parse_elem(k, check)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:541, in Node._parse_elem(self, key, check, value_override)
    540 if check is not None:
--> 541     v = check.cast_check_type(v, self, key)
    543 if isinstance(v, Node):

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:181, in TypeSpecifier.cast_check_type(self, value, node, key)
    180     new_exc._last_non_node_exception = last_non_node_exception
--> 181     raise new_exc from exc
    183 # self.check_type(casted, node, key)

ParseError: Error calling cast function "Instance" for value "0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1" in Problem[instance]. 

Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1

The above exception was the direct cause of the following exception:

ParseError                                Traceback (most recent call last)
Cell In[5], line 1
----> 1 spec = tl.Specification.from_yaml_files(
      2     ARCH_PATH,
      3     COMPONENTS_PATH,
      4     MAPPER_PATH,
      5     #PROBLEM_PATH,
      6     ALEXNET_PATH,
      7     VARIABLES_PATH,
      8 )  # Gather YAML files into a Python object
      9 tl.call_mapper(spec, output_dir=f"{os.curdir}/outputs")  # Run the Timeloop mapper
     10 stats = open("outputs/timeloop-mapper.stats.txt").read()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/base_specification.py:179, in BaseSpecification.from_yaml_files(cls, *args, **kwargs)
    167 @classmethod
    168 def from_yaml_files(cls, *args, **kwargs) -> "Specification":
    169     """
    170     Create a Specification object from YAML files.
    171 
   (...)
    177         Specification: The created Specification object.
    178     """
--> 179     return super().from_yaml_files(*args, **kwargs)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1362, in DictNode.from_yaml_files(cls, jinja_parse_data, *files, **kwargs)
   1359             key2file[k] = f
   1360             rval[k] = v
-> 1362 c = cls(**rval, **kwargs)
   1363 logging.info(
   1364     "Parsing extra attributes %s", ", ".join([x[0] for x in extra_elems])
   1365 )
   1366 c._parse_extra_elems(extra_elems)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/v4/specification.py:61, in Specification.__init__(self, *args, **kwargs)
     59 assert "_required_processors" not in kwargs, "Cannot set _required_processors"
     60 kwargs["_required_processors"] = REQUIRED_PROCESSORS
---> 61 super().__init__(*args, **kwargs)
     62 self.architecture: arch.Architecture = self["architecture"]
     63 self.constraints: constraints.Constraints = self["constraints"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/base_specification.py:73, in BaseSpecification.__init__(self, *args, **kwargs)
     69 self.spec = self
     71 self._early_init_processors(**kwargs)  # Because processors define declare_attrs
---> 73 super().__init__(*args, **kwargs)
     74 TypeSpecifier.reset_id2casted()
     76 self.processors: ListNode = self["processors"]

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:1229, in DictNode.__init__(self, _DictNode__node_skip_parse, *args, **kwargs)
   1227         self[k] = default_unspecified_
   1228 if not __node_skip_parse:
-> 1229     self._parse_elems()

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:564, in Node._parse_elems(self)
    562 self.spec = parent.spec if parent is not None else Node.get_global_spec()
    563 for k, check in self._get_index2checker().items():
--> 564     self._parse_elem(k, check)

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:541, in Node._parse_elem(self, key, check, value_override)
    539 tag = Node._get_tag(v)
    540 if check is not None:
--> 541     v = check.cast_check_type(v, self, key)
    543 if isinstance(v, Node):
    544     v.tag = tag

File ~/.pyenv/versions/3.8.18/lib/python3.8/site-packages/timeloopfe/common/nodes.py:181, in TypeSpecifier.cast_check_type(self, value, node, key)
    175     new_exc = ParseError(
    176         f'Error calling cast function "{callname}" '
    177         f'for value "{value}" in {node.get_name()}[{key}]. '
    178         f"{self.removed_by_str()}{estr}"
    179     )
    180     new_exc._last_non_node_exception = last_non_node_exception
--> 181     raise new_exc from exc
    183 # self.check_type(casted, node, key)
    184 return casted

ParseError: Error calling cast function "Problem" for value "[{'instance': '0 <= features_0_G < 1 and 0 <= features_0_C < 3 and 0 <= features_0_M < 64 and 0 <= features_0_N < 2 and 0 <= features_0_P < 55 and 0 <= features_0_Q < 55 and 0 <= features_0_R < 11 and 0 <= features_0_S < 11', 'shape': {'data-spaces': [{'name': 'features_0_filter', 'projection': '[ features_0_G, features_0_C, features_0_M, features_0_R, features_0_S ]'}, {'name': 'x_out', 'projection': '[ features_0_N, features_0_G*3 + features_0_C, features_0_R + features_0_P*4,  features_0_S + features_0_Q*4 ]'}, {'name': 'features_0_out', 'projection': '[ features_0_N, features_0_G*64 + features_0_M, features_0_P, features_0_Q ]', 'read-write': True}], 'dimensions': ['features_0_G', 'features_0_C', 'features_0_M', 'features_0_R', 'features_0_S', 'features_0_N', 'features_0_P', 'features_0_Q'], 'name': 'features_0'}}, {'instance': '0 <= features_2_C < 64 and 0 <= features_2_N < 2 and 0 <= features_2_P < 27 and 0 <= features_2_Q < 27 and 0 <= features_2_R < 3 and 0 <= features_2_S < 3', 'shape': {'data-spaces': [{'name': 'features_0_out', 'projection': '[ features_2_N, features_2_C, features_2_R + features_2_P*2,  features_2_S + features_2_Q*2 ]'}, {'name': 'features_2_out', 'projection': '[ features_2_N, features_2_C, features_2_P, features_2_Q ]', 'read-write': True}], 'dimensions': ['features_2_C', 'features_2_R', 'features_2_S', 'features_2_N', 'features_2_P', 'features_2_Q'], 'name': 'features_2'}}, {'instance': '0 <= features_3_G < 1 and 0 <= features_3_C < 64 and 0 <= features_3_M < 192 and 0 <= features_3_N < 2 and 0 <= features_3_P < 27 and 0 <= features_3_Q < 27 and 0 <= features_3_R < 5 and 0 <= features_3_S < 5', 'shape': {'data-spaces': [{'name': 'features_3_filter', 'projection': '[ features_3_G, features_3_C, features_3_M, features_3_R, features_3_S ]'}, {'name': 'features_2_out', 'projection': '[ features_3_N, features_3_G*64 + features_3_C, features_3_R + features_3_P*1,  features_3_S + features_3_Q*1 ]'}, {'name': 'features_3_out', 'projection': '[ features_3_N, features_3_G*192 + features_3_M, features_3_P, features_3_Q ]', 'read-write': True}], 'dimensions': ['features_3_G', 'features_3_C', 'features_3_M', 'features_3_R', 'features_3_S', 'features_3_N', 'features_3_P', 'features_3_Q'], 'name': 'features_3'}}, {'instance': '0 <= features_5_C < 192 and 0 <= features_5_N < 2 and 0 <= features_5_P < 13 and 0 <= features_5_Q < 13 and 0 <= features_5_R < 3 and 0 <= features_5_S < 3', 'shape': {'data-spaces': [{'name': 'features_3_out', 'projection': '[ features_5_N, features_5_C, features_5_R + features_5_P*2,  features_5_S + features_5_Q*2 ]'}, {'name': 'features_5_out', 'projection': '[ features_5_N, features_5_C, features_5_P, features_5_Q ]', 'read-write': True}], 'dimensions': ['features_5_C', 'features_5_R', 'features_5_S', 'features_5_N', 'features_5_P', 'features_5_Q'], 'name': 'features_5'}}, {'instance': '0 <= features_6_G < 1 and 0 <= features_6_C < 192 and 0 <= features_6_M < 384 and 0 <= features_6_N < 2 and 0 <= features_6_P < 13 and 0 <= features_6_Q < 13 and 0 <= features_6_R < 3 and 0 <= features_6_S < 3', 'shape': {'data-spaces': [{'name': 'features_6_filter', 'projection': '[ features_6_G, features_6_C, features_6_M, features_6_R, features_6_S ]'}, {'name': 'features_5_out', 'projection': '[ features_6_N, features_6_G*192 + features_6_C, features_6_R + features_6_P*1,  features_6_S + features_6_Q*1 ]'}, {'name': 'features_6_out', 'projection': '[ features_6_N, features_6_G*384 + features_6_M, 
features_6_P, features_6_Q ]', 'read-write': True}], 'dimensions': ['features_6_G', 'features_6_C', 'features_6_M', 'features_6_R', 'features_6_S', 'features_6_N', 'features_6_P', 'features_6_Q'], 'name': 'features_6'}}, {'instance': '0 <= features_8_G < 1 and 0 <= features_8_C < 384 and 0 <= features_8_M < 256 and 0 <= features_8_N < 2 and 0 <= features_8_P < 13 and 0 <= features_8_Q < 13 and 0 <= features_8_R < 3 and 0 <= features_8_S < 3', 'shape': {'data-spaces': [{'name': 'features_8_filter', 'projection': '[ features_8_G, features_8_C, features_8_M, features_8_R, features_8_S ]'}, {'name': 'features_6_out', 'projection': '[ features_8_N, features_8_G*384 + features_8_C, features_8_R + features_8_P*1,  features_8_S + features_8_Q*1 ]'}, {'name': 'features_8_out', 'projection': '[ features_8_N, features_8_G*256 + features_8_M, features_8_P, features_8_Q ]', 'read-write': True}], 'dimensions': ['features_8_G', 'features_8_C', 'features_8_M', 'features_8_R', 'features_8_S', 'features_8_N', 'features_8_P', 'features_8_Q'], 'name': 'features_8'}}, {'instance': '0 <= features_10_G < 1 and 0 <= features_10_C < 256 and 0 <= features_10_M < 256 and 0 <= features_10_N < 2 and 0 <= features_10_P < 13 and 0 <= features_10_Q < 13 and 0 <= features_10_R < 3 and 0 <= features_10_S < 3', 'shape': {'data-spaces': [{'name': 'features_10_filter', 'projection': '[ features_10_G, features_10_C, features_10_M, features_10_R, features_10_S ]'}, {'name': 'features_8_out', 'projection': '[ features_10_N, features_10_G*256 + features_10_C, features_10_R + features_10_P*1,  features_10_S + features_10_Q*1 ]'}, {'name': 'features_10_out', 'projection': '[ features_10_N, features_10_G*256 + features_10_M, features_10_P, features_10_Q ]', 'read-write': True}], 'dimensions': ['features_10_G', 'features_10_C', 'features_10_M', 'features_10_R', 'features_10_S', 'features_10_N', 'features_10_P', 'features_10_Q'], 'name': 'features_10'}}, {'instance': '0 <= features_12_C < 256 and 0 <= features_12_N < 2 and 0 <= features_12_P < 6 and 0 <= features_12_Q < 6 and 0 <= features_12_R < 3 and 0 <= features_12_S < 3', 'shape': {'data-spaces': [{'name': 'features_10_out', 'projection': '[ features_12_N, features_12_C, features_12_R + features_12_P*2,  features_12_S + features_12_Q*2 ]'}, {'name': 'features_12_out', 'projection': '[ features_12_N, features_12_C, features_12_P, features_12_Q ]', 'read-write': True}], 'dimensions': ['features_12_C', 'features_12_R', 'features_12_S', 'features_12_N', 'features_12_P', 'features_12_Q'], 'name': 'features_12'}}, {'instance': '0 <= avgpool_C < 256 and 0 <= avgpool_N < 2 and 0 <= avgpool_P < 6 and 0 <= avgpool_Q < 6 and 0 <= avgpool_R < 1 and 0 <= avgpool_S < 1', 'shape': {'data-spaces': [{'name': 'features_12_out', 'projection': '[ avgpool_N, avgpool_C, avgpool_R + avgpool_P*1,  avgpool_S + avgpool_Q*1 ]'}, {'name': 'avgpool_out', 'projection': '[ avgpool_N, avgpool_C, avgpool_P, avgpool_Q ]', 'read-write': True}], 'dimensions': ['avgpool_C', 'avgpool_R', 'avgpool_S', 'avgpool_N', 'avgpool_P', 'avgpool_Q'], 'name': 'avgpool'}}, {'instance': '0 <= A < 2 and 0 <= B < 9216', 'shape': {'data-spaces': [{'name': 'avgpool_out', 'projection': '[ floor(B*1 + A*9216/9216)%2, floor(B*1 + A*9216/36)%256, floor(B*1 + A*9216/6)%6, floor(B*1 + A*9216/1)%6 ]'}, {'name': 'flatten_out', 'projection': '[ A, B ]', 'read-write': True}], 'dimensions': ['A', 'B'], 'name': 'flatten'}}, {'instance': '0 <= classifier_1_G < 1 and 0 <= classifier_1_C < 9216 and 0 <= classifier_1_M < 4096 and 0 <= classifier_1_N 
< 2 and 0 <= classifier_1_P < 1 and 0 <= classifier_1_Q < 1 and 0 <= classifier_1_R < 1 and 0 <= classifier_1_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_1_filter', 'projection': '[ classifier_1_G, classifier_1_C, classifier_1_M, classifier_1_R, classifier_1_S ]'}, {'name': 'flatten_out', 'projection': '[ classifier_1_N, classifier_1_G*9216 + classifier_1_C, classifier_1_R + classifier_1_P*1,  classifier_1_S + classifier_1_Q*1 ]'}, {'name': 'classifier_1_out', 'projection': '[ classifier_1_N, classifier_1_G*4096 + classifier_1_M, classifier_1_P, classifier_1_Q ]', 'read-write': True}], 'dimensions': ['classifier_1_G', 'classifier_1_C', 'classifier_1_M', 'classifier_1_R', 'classifier_1_S', 'classifier_1_N', 'classifier_1_P', 'classifier_1_Q'], 'name': 'classifier_1'}}, {'instance': '0 <= classifier_4_G < 1 and 0 <= classifier_4_C < 4096 and 0 <= classifier_4_M < 4096 and 0 <= classifier_4_N < 2 and 0 <= classifier_4_P < 1 and 0 <= classifier_4_Q < 1 and 0 <= classifier_4_R < 1 and 0 <= classifier_4_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_4_filter', 'projection': '[ classifier_4_G, classifier_4_C, classifier_4_M, classifier_4_R, classifier_4_S ]'}, {'name': 'classifier_1_out', 'projection': '[ classifier_4_N, classifier_4_G*4096 + classifier_4_C, classifier_4_R + classifier_4_P*1,  classifier_4_S + classifier_4_Q*1 ]'}, {'name': 'classifier_4_out', 'projection': '[ classifier_4_N, classifier_4_G*4096 + classifier_4_M, classifier_4_P, classifier_4_Q ]', 'read-write': True}], 'dimensions': ['classifier_4_G', 'classifier_4_C', 'classifier_4_M', 'classifier_4_R', 'classifier_4_S', 'classifier_4_N', 'classifier_4_P', 'classifier_4_Q'], 'name': 'classifier_4'}}, {'instance': '0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1', 'shape': {'data-spaces': [{'name': 'classifier_6_filter', 'projection': '[ classifier_6_G, classifier_6_C, classifier_6_M, classifier_6_R, classifier_6_S ]'}, {'name': 'classifier_4_out', 'projection': '[ classifier_6_N, classifier_6_G*4096 + classifier_6_C, classifier_6_R + classifier_6_P*1,  classifier_6_S + classifier_6_Q*1 ]'}, {'name': 'classifier_6_out', 'projection': '[ classifier_6_N, classifier_6_G*1000 + classifier_6_M, classifier_6_P, classifier_6_Q ]', 'read-write': True}], 'dimensions': ['classifier_6_G', 'classifier_6_C', 'classifier_6_M', 'classifier_6_R', 'classifier_6_S', 'classifier_6_N', 'classifier_6_P', 'classifier_6_Q'], 'name': 'classifier_6'}}]" in Specification[problem]. 

Can not convert non-dict to dict: 0 <= classifier_6_G < 1 and 0 <= classifier_6_C < 4096 and 0 <= classifier_6_M < 1000 and 0 <= classifier_6_N < 2 and 0 <= classifier_6_P < 1 and 0 <= classifier_6_Q < 1 and 0 <= classifier_6_R < 1 and 0 <= classifier_6_S < 1
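
For anyone hitting the same error: the converter emits each layer's instance as a constraint string, while timeloopfe's v4 Problem apparently expects a mapping from dimension names to bounds. A minimal, untested sketch of a workaround (the function name is mine, and it assumes every term has the form "0 <= dim < bound"):

import re

def instance_str_to_dict(instance: str) -> dict:
    # Turn "0 <= classifier_6_C < 4096 and ..." into
    # {"classifier_6_C": 4096, ...}.
    return {name: int(bound)
            for name, bound in re.findall(r"0 <= (\w+) < (\d+)", instance)}

Rewriting each layer's instance entry with this before building the Specification would, under that assumption, hand the v4 parser the dict it expects.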

Log instead of print messages

For code that uses pytorch2timeloop-converter, it is sometimes undesirable to have text printed to stdout.

This package prints messages like

converting nn.Conv2d and nn.Linear in out model ...

in about four places, e.g.

print("converting {} in {} model ...".format("nn.Conv2d" if not convert_fc else "nn.Conv2d and nn.Linear", model_name))

A better mechanism for these kinds of messages is the logging module, which can be turned on and off quite easily.

Other print statements found:

$ git grep -n 'print('
pytorch2timeloop/converter_pytorch.py:28:    print("converting {} in {} model ...".format("all", model_name))
pytorch2timeloop/converter_pytorch.py:52:    print("converting {} in {} model ...".format("nn.Conv2d" if not convert_fc else "nn.Conv2d and nn.Linear", model_name))
pytorch2timeloop/converter_pytorch.py:113:    print("conversion complete!\n")
pytorch2timeloop/utils/hooks.py:228:    print("unknown module type", module.__class__)
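
A minimal sketch of the suggested change (the wrapper announce_conversion is hypothetical; the message mirrors the print call at converter_pytorch.py:52):

import logging

logger = logging.getLogger("pytorch2timeloop")

def announce_conversion(model_name: str, convert_fc: bool) -> None:
    # Emitted at INFO level; library users can silence these messages with
    # logging.getLogger("pytorch2timeloop").setLevel(logging.WARNING)
    layers = "nn.Conv2d and nn.Linear" if convert_fc else "nn.Conv2d"
    logger.info("converting %s in %s model ...", layers, model_name)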

question

  File "site-packages/pytorch2timeloop-0.2-py3.7.egg/pytorch2timeloop/utils/interpreter.py", line 87, in run_node
    with self._set_current_node(n):
AttributeError: 'Converter' object has no attribute '_set_current_node'

Questions regarding the supported models.

Hi! Firstly, I want to thank you for your awesome work.

As a beginner with Accelergy, I ran into problems creating YAML files for different NN models.

The documentation states that this tool can support "certain transformers", but it seems to support only Conv and Linear layers. Did I miss something?
Also, would it be possible to support other PyTorch layer types, e.g., normalization, recurrent, and pooling layers?

AssertionError in converting Googlenet

I get the following error in converting Googlenet:

File "/home/pytorch2timeloop-converter/pytorch2timeloop/converter_pytorch.py", line 252, in extract_layer_data
"Different number of conv layers detected by filter and io"
AssertionError: Different number of conv layers detected by filter and io

The conversion script is as follows:

import torchvision.models as models
import pytorch2timeloop
net = models.googlenet()
input_shape = (3, 224, 224)
batch_size = 1
top_dir = 'workloads'
sub_dir = 'googlenet'
convert_fc = True
exception_module_names = []
pytorch2timeloop.convert_model(net, input_shape, batch_size, sub_dir, top_dir, convert_fc, exception_module_names)

Sample script for converting transformers

Hello,
I have been trying to convert transformer models (DistilBertModel) using the tool without any success.
Would it be possible to provide a sample script (similar to the one for AlexNet)?

Thanks & regards,
Siva
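
For reference, a sketch of the kind of script presumably being attempted, modeled on the GoogLeNet example above. Passing a DistilBertModel to convert_model, and the (seq_len,) token-ID input shape, are untested assumptions, and may be exactly where the conversion fails:

import pytorch2timeloop
from transformers import DistilBertModel

net = DistilBertModel.from_pretrained('distilbert-base-uncased')
input_shape = (128,)  # a sequence of 128 token IDs, not an image
batch_size = 1
top_dir = 'workloads'
sub_dir = 'distilbert'
convert_fc = True
exception_module_names = []
pytorch2timeloop.convert_model(net, input_shape, batch_size, sub_dir,
                               top_dir, convert_fc, exception_module_names)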

A bug related to isinstance and issubclass

I installed it with Python 3.7. Running the test cases under the test folder produces the following error message. In addition, installation failed when Python 3.6 was used.

Traceback (most recent call last):
  File "tst.py", line 34, in <module>
    pytorch2timeloop.convert_model(net, input_shape, batch_size, sub_dir, top_dir, convert_fc, exception_module_names)
  File "/mnt/raiddisk/jyue/Project/tmp/pytorch2timeloop-converter/pytorch2timeloop/converter_pytorch.py", line 79, in convert_model
    layer_data = _make_summary(model, sample_input, ignored_func=ignored_func)
  File "/mnt/raiddisk/jyue/Project/tmp/pytorch2timeloop-converter/pytorch2timeloop/converter_pytorch.py", line 114, in _make_summary
    converter.run(sample_input)
  File "/home/jyue/miniconda3/envs/myenv/lib/python3.7/site-packages/torch/fx/interpreter.py", line 130, in run
    self.env[node] = self.run_node(node)
  File "/mnt/raiddisk/jyue/Project/tmp/pytorch2timeloop-converter/pytorch2timeloop/utils/interpreter.py", line 88, in run_node
    original_args)
  File "/mnt/raiddisk/jyue/Project/tmp/pytorch2timeloop-converter/pytorch2timeloop/utils/interpreter.py", line 100, in call_module
    if isinstance(module, self.bypassed_modules):
  File "/home/jyue/miniconda3/envs/myenv/lib/python3.7/typing.py", line 716, in __instancecheck__
    return self.__subclasscheck__(type(obj))
  File "/home/jyue/miniconda3/envs/myenv/lib/python3.7/typing.py", line 724, in __subclasscheck__
    raise TypeError("Subscripted generics cannot be used with"
TypeError: Subscripted generics cannot be used with class and instance checks

While executing %features_0 : [#users=1] = call_module[target=features.0](args = (%x,), kwargs = {})

Original traceback:

None
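
For context, the failure can be reproduced in isolation: on Python 3.7, isinstance rejects subscripted generics such as typing.Tuple[...]. A minimal sketch (the particular modules in bypassed_modules are hypothetical):

from typing import Tuple
import torch.nn as nn

bypassed_modules = Tuple[nn.ReLU, nn.Dropout]  # subscripted generic
# isinstance(nn.ReLU(), bypassed_modules)      # raises TypeError on 3.7

# Plausible fix: isinstance expects a plain tuple of classes instead.
bypassed_modules = (nn.ReLU, nn.Dropout)
print(isinstance(nn.ReLU(), bypassed_modules))  # True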
