
mmtf-python's Introduction


The macromolecular transmission format (MMTF) is a binary encoding of biological structures.

This repository holds the Python 2- and 3-compatible API, along with the encoding and decoding libraries.

The MMTF Python API is available from pip:

pip install mmtf-python

Quick getting started:

  1. Get the data for a PDB structure and print the number of chains:

from mmtf import fetch
# Get the data for 4CUP
decoded_data = fetch("4CUP")
print("PDB Code: " + str(decoded_data.structure_id) + " has " + str(decoded_data.num_chains) + " chains")

  2. Show the charge information for the first group:

print("Group name: " + str(decoded_data.group_list[0]["groupName"]) + " has the following atomic charges: " + ",".join([str(x) for x in decoded_data.group_list[0]["formalChargeList"]]))

  3. Show how many bioassemblies it has:

print("PDB Code: " + str(decoded_data.structure_id) + " has " + str(len(decoded_data.bio_assembly)) + " bioassemblies")

mmtf-python's People

Contributors

abradle, danpf, dependabot[bot], josemduarte, kain88-de, marshuang80, peterjc, pwrose, richardjgowers, yuy079, zacharyrs


mmtf-python's Issues

\x00 for alt_loc_list

Hi,

I am using py3.5

In [15]: x = mmtf.fetch('4CUP')

In [16]: x.alt_loc_list[:10]
Out[16]:
['\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00',
 '\x00']
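Those NUL characters are expected: MMTF encodes alt_loc_list as single characters and uses a zero byte for an empty alternate-location field. A minimal sketch for normalizing them into empty strings (the helper name is mine, not part of the API):

```python
def normalize_alt_locs(alt_loc_list):
    # MMTF decodes an empty alternate-location field as the NUL
    # character; map those to "" for readability.
    return ["" if c == "\x00" else c for c in alt_loc_list]

print(normalize_alt_locs(["\x00", "A", "\x00"]))  # ['', 'A', '']
```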

Consistency across platforms - use bin type?

This is something you might not notice, but Python actually uses a different msgpack type when packing bytes than C++ and Java do (or whatever builds the MMTF files on the RCSB server). To make the output consistent, I think mmtf should pass use_bin_type=True when packing. However, the problem with this is that there is a slight difference between the Python 2 and Python 3 msgpack implementations (see below).

This brings up another question I had: was this limitation the reason you decided to use b'xxxx' strings for all dictionary keys?

The reason I ask is entirely downstream and something I shouldn't expect mmtf to handle, but, for example, JSON requires all keys to be strings, and it's annoying to recursively convert every dictionary key. From what I can tell, switching to standard strings would be simpler and would not change very much. Am I missing something obvious?

Reference:
https://github.com/msgpack/msgpack-python#string-and-binary-type
https://github.com/msgpack/msgpack/blob/master/spec.md#str-format-family
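As a side note on the keys question: when packing with use_bin_type=True, msgpack-python can also hand str keys back on decode via raw=False (available in recent msgpack-python releases), so bytes payloads and string keys stay distinct. A small sketch:

```python
import msgpack

# With use_bin_type=True, str and bytes are packed as distinct
# msgpack types (str vs bin), so they round-trip without ambiguity.
packed = msgpack.packb({"key": b"\x00\x01"}, use_bin_type=True)

# raw=False decodes msgpack str values back into Python str (keys included),
# while bin values stay bytes.
unpacked = msgpack.unpackb(packed, raw=False)
print(unpacked)  # {'key': b'\x00\x01'}
```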

#!/usr/bin/env python3.6

import msgpack
import struct

mylist = [1,2,3,4,22]
mylist = struct.pack(">{}i".format(len(mylist)), *mylist)

mydict = {}
mydict['test1'] = mylist
fin = msgpack.packb(mydict)
with open("test1", 'wb') as f:
    f.write(fin)

"""
==> test1 <==
00000000: 81a5 7465 7374 31b4 0000 0001 0000 0002  ..test1.........
00000010: 0000 0003 0000 0004 0000 0016 0a         .............
key header: 10000001 10100101
data header: 10110100
"""


mydict = {}
mydict[b'test2'] = mylist
fin = msgpack.packb(mydict)
with open("test2", 'wb') as f:
    f.write(fin)
"""
==> test2 <==
00000000: 81a5 7465 7374 32b4 0000 0001 0000 0002  ..test2.........
00000010: 0000 0003 0000 0004 0000 0016 0a         .............
key header: 10000001 10100101
data header: 10110100
"""


mydict = {}
mydict['test3'] = mylist
fin = msgpack.packb(mydict, use_bin_type=True)
with open("test3", 'wb') as f:
    f.write(fin)
"""
==> test3 <==
00000000: 81a5 7465 7374 33c4 1400 0000 0100 0000  ..test3.........
00000010: 0200 0000 0300 0000 0400 0000 160a       ..............
key header: 10000001 10100101
data header: 11000100
"""


mydict = {}
mydict[b'test4'] = mylist
fin = msgpack.packb(mydict, use_bin_type=True)
with open("test4", 'wb') as f:
    f.write(fin)
"""
==> test4 <==
00000000: 81c4 0574 6573 7434 c414 0000 0001 0000  ...test4........
00000010: 0002 0000 0003 0000 0004 0000 0016 0a    ...............
key header: 10000001 11000100
data header: 11000100
"""


"""
==> test_cpp <==
00000000: 81a5 7465 7374 31c4 1400 0000 0100 0000  ..test1.........
00000010: 0200 0000 0300 0000 0400 0000 160a       ..............
key header: 10000001 10100101
data header: 11000100
"""


"""
==> 4HHB.mmtf <==
00000000: de00 26ab 6d6d 7466 5665 7273 696f 6ea5  ..&.mmtfVersion.
00000010: 312e 302e 30ac 6d6d 7466 5072 6f64 7563  1.0.0.mmtfProduc
key header: 00100110 10101011
data header: 10100101

----

00002c10: 4b49 4e47 aa78 436f 6f72 644c 6973 74c5  KING.xCoordList.
00002c20: 256a 0000 000a 0000 12ab 0000 03e8 183c  %j.............<
data header: 11000101
"""

And the same script under Python 2.7:

#!/usr/bin/env python2.7

import msgpack
import struct

mylist = [1,2,3,4,22]
mylist = struct.pack(">{}i".format(len(mylist)), *mylist)

mydict = {}
mydict['test1'] = mylist
fin = msgpack.packb(mydict)
with open("test1", 'wb') as f:
    f.write(fin)

"""
==> test1 <==
00000000: 81a5 7465 7374 31b4 0000 0001 0000 0002  ..test1.........
00000010: 0000 0003 0000 0004 0000 0016 0a         .............
key header: 10000001 10100101
data header: 10110100
"""


mydict = {}
mydict[b'test2'] = mylist
fin = msgpack.packb(mydict)
with open("test2", 'wb') as f:
    f.write(fin)
"""
==> test2 <==
00000000: 81a5 7465 7374 32b4 0000 0001 0000 0002  ..test2.........
00000010: 0000 0003 0000 0004 0000 0016 0a         .............
key header: 10000001 10100101
data header: 10110100
"""


mydict = {}
mydict['test3'] = mylist
fin = msgpack.packb(mydict, use_bin_type=True)
with open("test3", 'wb') as f:
    f.write(fin)
"""
==> test3 <==
00000000: 81c4 0574 6573 7433 c414 0000 0001 0000  ...test3........
00000010: 0002 0000 0003 0000 0004 0000 0016 0a    ...............
key header: 10000001 11000100
data header: 11000100
"""


mydict = {}
mydict[b'test4'] = mylist
fin = msgpack.packb(mydict, use_bin_type=True)
with open("test4", 'wb') as f:
    f.write(fin)
"""
==> test4 <==
00000000: 81c4 0574 6573 7434 c414 0000 0001 0000  ...test4........
00000010: 0002 0000 0003 0000 0004 0000 0016 0a    ...............
key header: 10000001 11000100
data header: 11000100
"""


"""
==> test_cpp <==
00000000: 81a5 7465 7374 31c4 1400 0000 0100 0000  ..test1.........
00000010: 0200 0000 0300 0000 0400 0000 160a       ..............
key header: 10000001 10100101
data header: 11000100
"""


"""
==> 4HHB.mmtf <==
00000000: de00 26ab 6d6d 7466 5665 7273 696f 6ea5  ..&.mmtfVersion.
00000010: 312e 302e 30ac 6d6d 7466 5072 6f64 7563  1.0.0.mmtfProduc
key header: 00100110 10101011
data header: 10100101

----

00002c10: 4b49 4e47 aa78 436f 6f72 644c 6973 74c5  KING.xCoordList.
00002c20: 256a 0000 000a 0000 12ab 0000 03e8 183c  %j.............<
data header: 11000101
"""

Missing bonds

In PDB 6C94 there are bonds to the FE in the HEM residue from other residues (e.g. V16 and C448). There are CONECT records for these in the PDB file, but they are not present in the MMTF bond_atom_list. Are they stored somewhere else in the MMTF file? If not, what are the rules for omitting bonds?
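One way to check whether a particular inter-group bond made it into a decoded file is to scan bond_atom_list, which is a flat list of atom-index pairs; the helper below is a sketch (its name and shape are mine):

```python
def has_bond(bond_atom_list, i, j):
    # bond_atom_list is flat: [a0, b0, a1, b1, ...], one pair per bond.
    pairs = zip(bond_atom_list[::2], bond_atom_list[1::2])
    return any({a, b} == {i, j} for a, b in pairs)

print(has_bond([0, 1, 5, 2], 2, 5))  # True
print(has_bond([0, 1, 5, 2], 0, 2))  # False
```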

Download mmtf_reduced file

For the "get_raw_data_from_url" function in mmtf.api.default_api, I am wondering whether we could simply add a parameter to choose between downloading the full or the reduced MMTF file?
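The full and reduced files were served from parallel URL paths, so such a parameter would mostly amount to switching a path segment. A hypothetical helper (mmtf_url is my name; the URL pattern is the one the RCSB MMTF service used):

```python
def mmtf_url(pdb_id, reduced=False):
    # Hypothetical: the RCSB MMTF service exposed "full" and "reduced"
    # endpoints that differed only in this path segment.
    flavor = "reduced" if reduced else "full"
    return "https://mmtf.rcsb.org/v1.0/{}/{}".format(flavor, pdb_id.upper())

print(mmtf_url("4cup", reduced=True))
# https://mmtf.rcsb.org/v1.0/reduced/4CUP
```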

Downloading whole pdb - different coordinates

Hello,
I'm getting very slightly different coordinates when using the hadoop database...

import msgpack
import mmtf
from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName("local")
sc = SparkContext(conf=conf)
reader = sc.sequenceFile("folder_containing_extracted_hadoop/")
pose_3KMA = reader.lookup('3KMA')[0]
pose_3KMA = mmtf.api.default_api.ungzip_data(pose_3KMA)
pose_3KMA = msgpack.unpackb(pose_3KMA.read())

pose = mmtf.MMTFDecoder()
pose.decode_data(pose_3KMA)
print(pose.x_coord_list[0])

printed:

-28.8

but the real CA coordinate is

-28.761999

Am I missing a step?
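For reference, MMTF stores coordinates as fixed-point integers with a divisor of 1000 (0.001 Å precision), so -28.761999 should survive decoding as -28.762; a sketch of that decoding step:

```python
def decode_fixed_point(ints, divisor=1000.0):
    # MMTF coordinates are integers scaled by 1000, giving
    # 0.001 Angstrom precision on decode.
    return [i / divisor for i in ints]

print(decode_fixed_point([-28762]))  # [-28.762]
```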

Also a function containing something similar to the below code might be useful in this library, it's mostly something I just scraped together...

Weird behaviour for the `title` header field

Hi,

It seems mmtf-python has a bit of a bug with the optional title field in the header. Specifically, see lines 103-107 of mmtf_reader.py.

Firstly, the sys version check appears to do nothing. Secondly, I think title should default to None, given it is an optional field, the way rFree is handled. Currently this causes a crash in decoder_utils.py for files without the title field.
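A sketch of the defaulting behaviour being asked for (the field names follow the MMTF spec; the helper itself is mine, not mmtf-python's API):

```python
def get_optional(decoded_map, key, default=None):
    # Optional MMTF header fields such as b"title" or b"rFree" may be
    # absent; fall back to a default instead of raising KeyError.
    return decoded_map.get(key, default)

header = {b"mmtfVersion": b"1.0.0"}  # no b"title" present
print(get_optional(header, b"title"))  # None
```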

UniProtIds

Hi! Are the UniProtIds accessible with the Python API?
Thanks in advance.

MMTF Encoder

For the mmtf encoder, groupIdList should use RunLengthDelta encoding, which is type 8. However, groupIdList currently uses a type 4 encoding, which is FourByteToInt.

output_data[b"groupIdList"] = encode_array(self.group_id_list, 4, 0)

should be

output_data[b"groupIdList"] = encode_array(self.group_id_list, 8, 0)
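For context, type 8 (RunLengthDelta) first delta-encodes the values and then run-length encodes the deltas as flattened (value, count) pairs; a minimal sketch of the encoding:

```python
def run_length_delta_encode(values):
    # Delta-encode (first value kept as-is), then run-length encode
    # as flattened (value, count) pairs - MMTF codec type 8.
    if not values:
        return []
    deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
    out = []
    for d in deltas:
        if out and out[-2] == d:
            out[-1] += 1
        else:
            out.extend([d, 1])
    return out

# group IDs 1,2,3,4 then a jump to 10: deltas are [1, 1, 1, 1, 6]
print(run_length_delta_encode([1, 2, 3, 4, 10]))  # [1, 4, 6, 1]
```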

Output `mmtf` uses 64bit floats which violates the mmtf specification.

The specification defines the float type as 32-bit. Python floats are 64-bit, so when packing per the template they are dumped to the output file as doubles. Other parsers (e.g. mmtf-java) then try to load them as 32-bit floats and fail. We can overcome this easily by updating the msgpack.packb call to include use_single_float=True.

However, it seems mmtf-java also violates the standard and uses doubles (64-bit floats) for the ncsOperatorList, so the above change alone still produces output it cannot parse. Given mmtf-java is used for the RCSB files, we can assume they won't shift to 32-bit floats - it would break their parsing for even more files.

Additionally, the msgpack-python implementation does not support selecting doubles for only one field - msgpack/msgpack-python#326. Instead, you have to pack the biological assemblies list separately and then combine the two, as in the collapsed snippet below.

Code for packing separately.
# The mmtf standard expects everything as 32bit - hence use_single_float.
# Note the encode_data no longer includes bioAssemblyList.
main = msgpack.packb(self.encode_data(), use_bin_type=True, use_single_float=True)

# Assemblies need to be 64bit for Java compatibility.
assemblies = msgpack.packb(
    {"bioAssemblyList": self.bio_assembly},
    use_bin_type=True,
    use_single_float=False,
)

# In msgpack, the first three bytes of a map (over 15 elements) are `\xde\x12\x34`, where
# 1234 gives the map length.

# Our `main` map has 30-something elements, hence only the `\x34` matters.

# Get the new length indicator, prepended with the map indicator and a `\x00`.
new_map_length: bytes = b"\xde\x00" + chr(main[2] + 1).encode()

# Strip the first three bytes from `main` (the map indicator byte and two bytes for length).
main = main[3:]

# Strip the first byte from `assemblies` (it's less than 15 elements, has a single byte indicator).
assemblies = assemblies[1:]

# Finally put it all back together.
new_data = new_map_length + main + assemblies

For reference I have raised this issue in the mmtf-java repo too - rcsb/mmtf-java#53.

Python interface returns a wrong number of groups

This testcase:

from mmtf import fetch

decoded_data = fetch("4NHO")

print("numGroups="+str(len(decoded_data.group_list)))
for i in range(0, len(decoded_data.group_list)-1):
    print(decoded_data.group_list[i]["groupName"])

prints that there are 32 groups, when in reality 4NHO has 488 residues (groups): https://www.rcsb.org/structure/4NHO

numGroups=32
SO4
ASN
VAL
LYS
CYS
HOH
HG
LEU
MET
PRO
GOL
HIS
ARG
ILE
GLY
GLU
GLU
ASP
TRP
GLN
TYR
PHE
THR
CXS
VAL
SER
THR
SER
GLN
ALA
CSO

Version py36-mmtf-python-1.1.2.
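The count the reporter expected lives elsewhere: group_list holds only the unique group types, while group_type_list has one entry per residue and indexes into group_list. A self-contained sketch with toy data:

```python
def residue_names(group_list, group_type_list):
    # group_list: unique group *types*; group_type_list: one index per
    # residue. len(group_type_list) is the true residue count.
    return [group_list[t]["groupName"] for t in group_type_list]

# Toy data: two unique types, five residues.
types = [{"groupName": "ALA"}, {"groupName": "HOH"}]
print(residue_names(types, [0, 0, 1, 0, 1]))
# ['ALA', 'ALA', 'HOH', 'ALA', 'HOH']
```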

New release

Can we have a new release including the recent change "switching to msgpack-defaults for easier parsing"?

Method to select coordinates of given chain or group

I would like to be able to get a list of atom indices for a given chain ID. This is necessary for building biological assemblies. Currently, I am using the following code to assemble arrays with elements for each atom and indices of groups and chains, but it seems rather hacky. Is there a better way to do this, and if not, would this be useful to include?

import numpy

atom_group_ids = numpy.zeros(len(tf.x_coord_list), dtype="I")
atom_chain_ids = numpy.zeros(len(tf.x_coord_list), dtype="I")

group_max = 0
group_in_chain = 0
current_chain = 0
for group_id, group_type in enumerate(tf.group_type_list):
    group_size = len(tf.group_list[group_type]["elementList"])
    atom_group_ids[group_max:group_max + group_size] = group_id
    atom_chain_ids[group_max:group_max + group_size] = current_chain
    group_max += group_size
    group_in_chain += 1
    if group_in_chain >= tf.groups_per_chain[current_chain]:
        current_chain += 1
        group_in_chain = 0

Indices can then be obtained with:

numpy.argwhere(atom_chain_ids == 6)[:,0]

xxx_yyy_list is actually numpy array?

hi

In [11]: x = mmtf.fetch('4CUP')

In [12]: x.atom_id_list
Out[12]: array([   1,    2,    3, ..., 1105, 1106, 1107])

In [13]: type(x.atom_id_list)
Out[13]: numpy.ndarray

I know what you mean by "list" here, but in Python the name suggests the built-in list type.

two tests fail without network access

Two of the tests require network access; while one is indeed testing network access, the other would also work with local files. Could you include the PDB files that test_round_trip_list tries to fetch in mmtf/tests/testdatastore and switch to testing against local copies?

With the above change, I'd run the testsuite using:

nosetests -e test_fetch

or even with

python2 setup.py nosetests --no-network

if you added such option.

My use case is building mmtf-python as a package for Fedora, and the Fedora build system doesn't provide network access during the build process, for security and reproducibility.

Here's the log:

$ python2 setup.py nosetests
['mmtf', 'mmtf.codecs', 'mmtf.utils', 'mmtf.api', 'mmtf.converters', 'mmtf.tests', 'mmtf.codecs.encoders', 'mmtf.codecs.decoders']
running nosetests
running egg_info
writing requirements to mmtf_python.egg-info/requires.txt
writing mmtf_python.egg-info/PKG-INFO
writing top-level names to mmtf_python.egg-info/top_level.txt
writing dependency_links to mmtf_python.egg-info/dependency_links.txt
writing entry points to mmtf_python.egg-info/entry_points.txt
reading manifest file 'mmtf_python.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'mmtf_python.egg-info/SOURCES.txt'
..............E......E........
======================================================================
ERROR: test_fetch (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 194, in test_fetch
    decoded = fetch("4CUP")
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 69, in fetch
    decoder.decode_data(get_raw_data_from_url(pdb_id))
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 44, in get_raw_data_from_url
    response = urllib2.urlopen(request)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1230, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1200, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno -2] Name or service not known>

======================================================================
ERROR: test_round_trip_list (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 355, in test_round_trip_list
    self.round_trip(pdb_id)
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 300, in round_trip
    data_in = fetch(pdb_id)
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 69, in fetch
    decoder.decode_data(get_raw_data_from_url(pdb_id))
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 44, in get_raw_data_from_url
    response = urllib2.urlopen(request)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1230, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1200, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno -2] Name or service not known>

----------------------------------------------------------------------
Ran 30 tests in 0.118s

FAILED (errors=2)
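One way to support such offline builds is an explicit opt-out for the fetch-based tests; the NO_NETWORK environment variable below is hypothetical, not an existing mmtf-python option:

```python
import os
import unittest

def network_allowed():
    # Hypothetical opt-out: offline builds export NO_NETWORK=1 and the
    # suite skips fetch-based tests instead of erroring out.
    return not os.environ.get("NO_NETWORK")

class FetchTests(unittest.TestCase):
    @unittest.skipUnless(network_allowed(), "network access disabled")
    def test_fetch(self):
        pass  # would call mmtf.fetch("4CUP") here
```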

Entity list possibly incomplete

I tested the Biopython MMTF parser on the whole PDB and ran into errors on a few files. I think this is an issue on the mmtf side but am not sure. The files are: 1j6t, 1o2f, 1ts6, 1vrc, 2g10 and 2k9y.

For example fetch("1o2f") works without an error. But fetch("1o2f").entity_list gives

[{'chainIndexList': [0, 2, 5],
  'description': 'PTS SYSTEM, MANNITOL-SPECIFIC IIABC COMPONENT',
  'sequence': 'MANLFKLGAENIFLGRKAATKEEAIRFAGEQLVKGGYVEPEYVQAMLDREKLTPTYLGESIAVPHGTVEAKDRVLKTGVVFCQYPEGVRFGEEEDDIARLVIGIAARNNEHIQVITSLTNALDDESVIERLAHTTSVDEVLELLAGRK',
  'type': 'polymer'},
 {'chainIndexList': [1, 3, 6],
  'description': 'Phosphocarrier protein HPr',
  'sequence': 'MFQQEVTITAPNGLHTRPAAQFVKEAKGFTSEITVTSNGKSASAKSLFKLQTLGLTQGTVVTISAEGEDEQKAVEHLVKLMAELE',
  'type': 'polymer'}]

and fetch("1o2f").chain_name_list gives

[u'A', u'B', u'A', u'B', u'B', u'A', u'B', u'B']

So it appears the entity list is incomplete, as it is missing two of the chains. In the file these correspond to phosphate residues present only in models 2 and 3. This leads to KeyErrors in the chain_index_to_type_map function called by Biopython.
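A quick way to see which chain indices the entity list fails to cover (the helper name and shape are mine):

```python
def uncovered_chain_indices(entity_list, num_chains):
    # Chain indices that appear in no entity's chainIndexList - for
    # 1o2f the report above implies two such indices.
    covered = {i for e in entity_list for i in e["chainIndexList"]}
    return sorted(set(range(num_chains)) - covered)

toy_entities = [{"chainIndexList": [0, 2, 5]}, {"chainIndexList": [1, 3, 6]}]
print(uncovered_chain_indices(toy_entities, 8))  # [4, 7]
```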

mmtf/tests/testdatastore missing from PyPI 1.0.5 tarball

mmtf/tests/testdatastore is missing from the 1.0.5 tarball available from PyPI, so some tests are failing:

[mockbuild@f1c696a972e3481593111ec80af39a2a mmtf-python-1.0.5]$ python2 setup.py nosetests --processes=4
['mmtf', 'mmtf.codecs', 'mmtf.utils', 'mmtf.api', 'mmtf.converters', 'mmtf.tests', 'mmtf.codecs.encoders', 'mmtf.codecs.decoders']
running nosetests
running egg_info
writing requirements to mmtf_python.egg-info/requires.txt
writing mmtf_python.egg-info/PKG-INFO
writing top-level names to mmtf_python.egg-info/top_level.txt
writing dependency_links to mmtf_python.egg-info/dependency_links.txt
writing entry points to mmtf_python.egg-info/entry_points.txt
reading manifest file 'mmtf_python.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'mmtf_python.egg-info/SOURCES.txt'
.....................E.EE...EE
======================================================================
ERROR: test_decoder (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 180, in test_decoder
    decoded = parse("mmtf/tests/testdatastore/4CUP.mmtf")
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 78, in parse
    newDecoder.decode_data(msgpack.unpackb(open(file_path, "rb").read()))
IOError: [Errno 2] No such file or directory: 'mmtf/tests/testdatastore/4CUP.mmtf'

======================================================================
ERROR: test_gz_decoder (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 183, in test_gz_decoder
    decoded = parse_gzip("mmtf/tests/testdatastore/4CUP.mmtf.gz")
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 87, in parse_gzip
    newDecoder.decode_data(msgpack.unpackb(gzip.open(file_path, "rb").read()))
  File "/usr/lib64/python2.7/gzip.py", line 34, in open
    return GzipFile(filename, mode, compresslevel)
  File "/usr/lib64/python2.7/gzip.py", line 94, in __init__
    fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
IOError: [Errno 2] No such file or directory: 'mmtf/tests/testdatastore/4CUP.mmtf.gz'

======================================================================
ERROR: test_gzip_open (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 191, in test_gzip_open
    ungzip_data(open("mmtf/tests/testdatastore/4CUP.mmtf.gz","rb").read())
IOError: [Errno 2] No such file or directory: 'mmtf/tests/testdatastore/4CUP.mmtf.gz'

======================================================================
ERROR: test_round_trip (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 294, in test_round_trip
    data_in = parse_gzip("mmtf/tests/testdatastore/4CUP.mmtf.gz")
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 87, in parse_gzip
    newDecoder.decode_data(msgpack.unpackb(gzip.open(file_path, "rb").read()))
  File "/usr/lib64/python2.7/gzip.py", line 34, in open
    return GzipFile(filename, mode, compresslevel)
  File "/usr/lib64/python2.7/gzip.py", line 94, in __init__
    fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
IOError: [Errno 2] No such file or directory: 'mmtf/tests/testdatastore/4CUP.mmtf.gz'

======================================================================
ERROR: test_round_trip_list (mmtf.tests.codec_tests.ConverterTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/plugins/multiprocess.py", line 812, in run
    test(orig)
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 45, in __call__
    return self.run(*arg, **kwarg)
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 133, in run
    self.runTest(result)
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 151, in runTest
    test(result)
  File "/usr/lib64/python2.7/unittest/case.py", line 431, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib64/python2.7/unittest/case.py", line 367, in run
    testMethod()
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 355, in test_round_trip_list
    self.round_trip(pdb_id)
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/tests/codec_tests.py", line 300, in round_trip
    data_in = fetch(pdb_id)
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 69, in fetch
    decoder.decode_data(get_raw_data_from_url(pdb_id))
  File "/builddir/build/BUILD/mmtf-python-1.0.5/mmtf/api/default_api.py", line 44, in get_raw_data_from_url
    response = urllib2.urlopen(request)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1230, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1203, in do_open
    r = h.getresponse(buffering=True)
  File "/usr/lib64/python2.7/httplib.py", line 1121, in getresponse
    response.begin()
  File "/usr/lib64/python2.7/httplib.py", line 438, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python2.7/httplib.py", line 394, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib64/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/site-packages/nose/plugins/multiprocess.py", line 276, in signalhandler
    raise TimedOutException()
TimedOutException: 'test_round_trip_list (mmtf.tests.codec_tests.ConverterTests)'

----------------------------------------------------------------------
Ran 30 tests in 10.279s

FAILED (errors=5)
