
📛 A Python package for using ontologies, terminologies, and biomedical nomenclatures

Home Page: https://pyobo.readthedocs.io

License: MIT License

Python 97.11% HTML 2.89%
obo ontologies obofoundry biopragmatics bioinformatics bioregistry

pyobo's Introduction

PyOBO


Tools for biological identifiers, names, synonyms, xrefs, hierarchies, relations, and properties through the perspective of OBO.

Example Usage

Note! PyOBO is no-nonsense. This means that there are no repetitive prefixes in identifiers. It also means all identifiers are strings, no exceptions.

Note! The first time you run these, they have to download and cache all resources. We're not in the business of redistributing data, so all scripts should be completely reproducible. There are some AWS tools for hosting/downloading pre-compiled versions in pyobo.aws if you don't have time for that.

Note! PyOBO can perform grounding in a limited number of cases, but it is not a general solution for named entity recognition (NER) or grounding. It's suggested to check Gilda for a no-nonsense solution.
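For comparison, here's a minimal sketch of grounding with Gilda's top-level ground() function. Gilda is a separate package, so this is not part of the PyOBO API, and the attribute names below reflect Gilda's scored-match objects:

import gilda

# Gilda returns a list of scored matches, each carrying a term with a
# namespace, identifier, and standard name.
scored_matches = gilda.ground('Fusilade II')
for match in scored_matches:
    print(match.term.db, match.term.id, match.term.entry_name, match.score)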

Mapping Identifiers and CURIEs

Get mapping of ChEBI identifiers to names:

import pyobo

chebi_id_to_name = pyobo.get_id_name_mapping('chebi')

name = chebi_id_to_name['132964']
assert name == 'fluazifop-P-butyl'

Or, if you don't have time for two lines:

import pyobo

name = pyobo.get_name('chebi', '132964')
assert name == 'fluazifop-P-butyl'

Get reverse mapping of ChEBI names to identifiers:

import pyobo

chebi_name_to_id = pyobo.get_name_id_mapping('chebi')

identifier = chebi_name_to_id['fluazifop-P-butyl']
assert identifier == '132964'

Maybe you live in CURIE world and just want to normalize something like CHEBI:132964:

import pyobo

name = pyobo.get_name_by_curie('CHEBI:132964')
assert name == 'fluazifop-P-butyl'

Sometimes you accidentally get an old CURIE. It can be mapped to the current one using the alternative identifiers listed in the underlying OBO with:

import pyobo

# Look up DNA-binding transcription factor activity (go:0003700)
# based on an old id
primary_curie = pyobo.get_primary_curie('go:0001071')
assert primary_curie == 'go:0003700'

# If it's already the primary, it just gets returned
assert 'go:0003700' == pyobo.get_primary_curie('go:0003700')

Mapping Species

Some resources have species information for their terms. Get a mapping of WikiPathways identifiers to species (as NCBI taxonomy identifiers):

import pyobo

wikipathways_id_to_species = pyobo.get_id_species_mapping('wikipathways')

# Apoptosis (Homo sapiens)
taxonomy_id = wikipathways_id_to_species['WP254']
assert taxonomy_id == '9606'

Or, if you don't have time for two lines:

import pyobo

# Apoptosis (Homo sapiens)
taxonomy_id = pyobo.get_species('wikipathways', 'WP254')
assert taxonomy_id == '9606'

Grounding

Maybe you've got names/synonyms that you want to map back to ChEBI. Given the brand name Fusilade II of CHEBI:132964, PyOBO should be able to look up the identifier and its preferred label.

import pyobo

prefix, identifier, name = pyobo.ground('chebi', 'Fusilade II')
assert prefix == 'chebi'
assert identifier == '132964'
assert name == 'fluazifop-P-butyl'

# When failure happens...
prefix, identifier, name = pyobo.ground('chebi', 'Definitely not a real name')
assert prefix is None
assert identifier is None
assert name is None

If you're not really sure which namespace a name might belong to, you can try a few in a row (list the namespaces that cover the appropriate entity type first, to avoid false positives in case of conflicts):

import pyobo

# looking for phenotypes/pathways
prefix, identifier, name = pyobo.ground(['efo', 'go'], 'ERAD')
assert prefix == 'go'
assert identifier == '0030433'
assert name == 'ubiquitin-dependent ERAD pathway'

Cross-referencing

Get xrefs from ChEBI to PubChem:

import pyobo

chebi_id_to_pubchem_compound_id = pyobo.get_filtered_xrefs('chebi', 'pubchem.compound')

pubchem_compound_id = chebi_id_to_pubchem_compound_id['132964']
assert pubchem_compound_id == '3033674'

If you don't have time for two lines:

import pyobo

pubchem_compound_id = pyobo.get_xref('chebi', '132964', 'pubchem.compound')
assert pubchem_compound_id == '3033674'

Get xrefs from Entrez to HGNC. They're only available through HGNC, so you need to flip them:

import pyobo

hgnc_id_to_ncbigene_id = pyobo.get_filtered_xrefs('hgnc', 'ncbigene')
ncbigene_id_to_hgnc_id = {
    ncbigene_id: hgnc_id
    for hgnc_id, ncbigene_id in hgnc_id_to_ncbigene_id.items()
}
mapt_hgnc = ncbigene_id_to_hgnc_id['4137']
assert mapt_hgnc == '6893'

Since this is a common pattern, there's a keyword argument flip that does this for you:

import pyobo

ncbigene_id_to_hgnc_id = pyobo.get_filtered_xrefs('hgnc', 'ncbigene', flip=True)
mapt_hgnc_id = ncbigene_id_to_hgnc_id['4137']
assert mapt_hgnc_id == '6893'

If you don't have time for two lines (I admit this one is a bit confusing) and need to flip it:

import pyobo

hgnc_id = pyobo.get_xref('hgnc', '4137', 'ncbigene', flip=True)
assert hgnc_id == '6893'

Remap a CURIE based on a pre-defined priority list and Inspector Javert's Xref Database:

import pyobo

# Map to the best source possible
mapt_ncbigene = pyobo.get_priority_curie('hgnc:6893')
assert mapt_ncbigene == 'ncbigene:4137'

# Sometimes you know you're the best. Own it.
assert 'ncbigene:4137' == pyobo.get_priority_curie('ncbigene:4137')

Find all CURIEs mapped to a given one using Inspector Javert's Xref Database:

import pyobo

# Get a set of all CURIEs mapped to MAPT
mapt_curies = pyobo.get_equivalent('hgnc:6893')
assert 'ncbigene:4137' in mapt_curies
assert 'ensembl:ENSG00000186868' in mapt_curies

If you don't want to wait to build the database locally for pyobo.get_priority_curie and pyobo.get_equivalent, you can use the following code to download a release from Zenodo:

import pyobo.resource_utils

pyobo.resource_utils.ensure_inspector_javert()

Properties

Get properties, like SMILES. The semantics of these are defined on an OBO-by-OBO basis.

import pyobo

# I don't make the rules. I wouldn't have chosen this as the key for this property. It could be any string
chebi_smiles_property = 'http://purl.obolibrary.org/obo/chebi/smiles'
chebi_id_to_smiles = pyobo.get_filtered_properties_mapping('chebi', chebi_smiles_property)

smiles = chebi_id_to_smiles['132964']
assert smiles == 'C1(=CC=C(N=C1)OC2=CC=C(C=C2)O[C@@H](C(OCCCC)=O)C)C(F)(F)F'

If you don't have time for two lines:

import pyobo

smiles = pyobo.get_property('chebi', '132964', 'http://purl.obolibrary.org/obo/chebi/smiles')
assert smiles == 'C1(=CC=C(N=C1)OC2=CC=C(C=C2)O[C@@H](C(OCCCC)=O)C)C(F)(F)F'

Hierarchy

Check if an entity is in the hierarchy:

import pyobo

# check that go:0008219 ! cell death is an ancestor of go:0006915 ! apoptotic process
assert 'go:0008219' in pyobo.get_ancestors('go', '0006915')

# check that go:0070246 ! natural killer cell apoptotic process is a
# descendant of go:0006915 ! apoptotic process
apoptotic_process_descendants = pyobo.get_descendants('go', '0006915')
assert 'go:0070246' in apoptotic_process_descendants

Get the subhierarchy below a given node:

import pyobo

# get the descendant graph of go:0006915 ! apoptotic process
apoptotic_process_subhierarchy = pyobo.get_subhierarchy('go', '0006915')

# check that go:0070246 ! natural killer cell apoptotic process is a
# descendant of go:0006915 ! apoptotic process through the subhierarchy
assert 'go:0070246' in apoptotic_process_subhierarchy

Get a hierarchy with properties pre-loaded in the node data dictionaries:

import pyobo

prop = 'http://purl.obolibrary.org/obo/chebi/smiles'
chebi_hierarchy = pyobo.get_hierarchy('chebi', properties=[prop])

assert 'chebi:132964' in chebi_hierarchy
assert prop in chebi_hierarchy.nodes['chebi:132964']
assert chebi_hierarchy.nodes['chebi:132964'][prop] == 'C1(=CC=C(N=C1)OC2=CC=C(C=C2)O[C@@H](C(OCCCC)=O)C)C(F)(F)F'
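Since the hierarchy is a networkx graph, the usual networkx idioms apply. Continuing with the prop and chebi_hierarchy variables from the block above, this sketch collects the SMILES string for every node that has one loaded:

# Reuses `prop` and `chebi_hierarchy` from the previous example.
curie_to_smiles = {
    curie: data[prop]
    for curie, data in chebi_hierarchy.nodes(data=True)
    if prop in data
}
assert curie_to_smiles['chebi:132964'].startswith('C1(')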

Relations

Get all orthologies (ro:HOM0000017) between HGNC and MGI (note: this mapping is only one-way):

>>> import pyobo
>>> human_mapt_hgnc_id = '6893'
>>> mouse_mapt_mgi_id = '97180'
>>> hgnc_mgi_orthology_mapping = pyobo.get_relation_mapping('hgnc', 'ro:HOM0000017', 'mgi')
>>> assert mouse_mapt_mgi_id == hgnc_mgi_orthology_mapping[human_mapt_hgnc_id]

If you want to do it in one line, use:

>>> import pyobo
>>> human_mapt_hgnc_id = '6893'
>>> mouse_mapt_mgi_id = '97180'
>>> assert mouse_mapt_mgi_id == pyobo.get_relation('hgnc', 'ro:HOM0000017', 'mgi', human_mapt_hgnc_id)

Writing Tests that Use PyOBO

If you're writing your own code that relies on PyOBO and unit testing it (as you should) in a continuous integration setting, you've probably realized that loading all of the resources on each build is not so fast. In those scenarios, you can use some of the pre-built mocks like in the following:

import unittest
import pyobo
from pyobo.mocks import get_mock_id_name_mapping

mock_id_name_mapping = get_mock_id_name_mapping({
    'chebi': {
        '132964': 'fluazifop-P-butyl',
    },
})

class MyTestCase(unittest.TestCase):
    def test_get_name(self):
        with mock_id_name_mapping:
            # use functions directly, or use your functions that wrap them
            self.assertEqual('fluazifop-P-butyl', pyobo.get_name('chebi', '132964'))

Installation

PyOBO can be installed from PyPI with:

$ pip install pyobo

It can be installed in development mode from GitHub with:

$ git clone https://github.com/biopragmatics/pyobo.git
$ cd pyobo
$ pip install -e .

Curation of the Bioregistry

In order to normalize references and identify resources, PyOBO uses the Bioregistry. It used to be a part of PyOBO, but has since been externalized for more general reuse.

At src/pyobo/registries/metaregistry.json is the curated "metaregistry". This is a source of information that contains all sorts of fixes for missing/wrong information in MIRIAM, OLS, and OBO Foundry; entries that don't appear in any of them; additional synonym information for each namespace/prefix; rules for normalizing xrefs and CURIEs, etc.

Other entries in the metaregistry:

  • The "remappings"->"full" entry is a dictionary from strings that might follow xref: in a given OBO file that need to be completely replaced, due to incorrect formatting
  • The "remappings"->"prefix" entry contains a dictionary of prefixes for xrefs that need to be remapped. Several rules, for example, remove superfluous spaces that occur inside CURIEs or and others address instances of the GOGO issue.
  • The "blacklists" entry contains rules for throwing out malformed xrefs based on full string, just prefix, or just suffix.

Troubleshooting

The OBO Foundry seems to be pretty unstable with respect to the URLs to OBO resources. If you get an error like:

pyobo.getters.MissingOboBuild: OBO Foundry is missing a build for: mondo

Then you should check the corresponding page on the OBO Foundry (in this case, http://www.obofoundry.org/ontology/mondo.html) and update the url entry for that namespace in the Bioregistry.

pyobo's People

Contributors

bgyori, cthoyt, ddomingof, hrshdhgd


pyobo's Issues

Summary endpoint in xrefs service

Which prefixes can be directly mapped to which (also give provenance information about where these mappings came from)?

Man I just realized this is what OXO was supposed to be able to do but they never finished it
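A minimal sketch of the kind of summary being asked for, using a hypothetical in-memory xref table (the column names are illustrative, not an existing PyOBO structure):

import pandas as pd

# Hypothetical xref records: source prefix, target prefix, provenance.
xrefs = pd.DataFrame(
    [
        ('hgnc', 'ncbigene', 'hgnc'),
        ('hgnc', 'ensembl', 'hgnc'),
        ('chebi', 'pubchem.compound', 'chebi'),
    ],
    columns=['source_prefix', 'target_prefix', 'provenance'],
)

# For each (source, target) prefix pair, count mappings and list where they came from.
summary = (
    xrefs
    .groupby(['source_prefix', 'target_prefix'])['provenance']
    .agg(['count', 'unique'])
    .reset_index()
)
print(summary)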

Suspicious mappings for some genes

Playing around with some mappings, I'm seeing some strange results, and it makes me wonder if mappings to broader/narrower categories are the underlying issue. One example is hgnc:6880 (MAPK7) getting mapped to uniprot:Q02750 (MAP2K1).

  "uniprot:Q02750": [
    {
      "provenance": "hgnc",
      "source": "hgnc:6880",
      "target": "eccode:2.7.11.24"
    },
    {
      "provenance": "https://github.com/sorgerlab/famplex/raw/master/equivalences.csv",
      "source": "eccode:2.7.11.24",
      "target": "fplx:MAPK"
    },
    {
      "provenance": "https://github.com/sorgerlab/famplex/raw/master/equivalences.csv",
      "source": "fplx:MAPK",
      "target": "eccode:2.7.12.2"
    },
    {
      "provenance": "hgnc",
      "source": "eccode:2.7.12.2",
      "target": "hgnc:6840"
    },
    {
      "provenance": "hgnc",
      "source": "hgnc:6840",
      "target": "uniprot:Q02750"
    }
  ],

Is this result expected by default, and it's up to the user to e.g., exclude mappings via eccode? Or is this something that should be fixed in the mappings generation?

MetaNetX

Add mocks for testing in other packages

It's not so good to load entire ontologies, but code in other packages relies on this. Add some unit test mock magic to allow pyobo.get_name, pyobo.ground, and other top-level functions to get monkey-patched.

Add alt_id support

Alt IDs are already part of the node's data structure provided by obonet

  • Add alt_ids as list of references to Term data structure
  • Extract/normalize alt ids from obonet and load into terms (see the sketch below)
  • look into previous sources that might have provided alt ids
  • implications for user interface - any time you put in an identifier, should it get auto-upgraded?
  • implications for oohnana - should the alt ids get auto-upgraded?
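A minimal sketch of the extraction step, assuming obonet exposes multi-valued alt_id tags in the node data (the GO URL is just an example):

import obonet

# Node data in obonet graphs includes the raw OBO tags, including alt_id lists.
graph = obonet.read_obo('http://purl.obolibrary.org/obo/go.obo')

# Map each alternative identifier to its primary identifier.
alt_to_primary = {
    alt_id: node
    for node, data in graph.nodes(data=True)
    for alt_id in data.get('alt_id', [])
}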

Some FamPlex equivalences have missing namespace

Some of the target namespaces in the FamPlex equivalences file are not recognized and are replaced by None when processing the xrefs df. Example: MEDSCAN urn:agi-aopfc:0000458 is picked up as None:urn:agi-aopfc:0000458. Similarly, IP (InterPro) entries are picked up as None:IPR008349. Where should the mapping of these be fixed? In get_famplex_xrefs_df or normalize_prefix, or some other place?

Generate prioritized xref mapping

  1. load supersized xref database
  2. load priority list of namespaces
  3. for every term calculate the highest priority mapping based on everything it can possibly map to. If it maps to itself, exclude from the mapping (see the sketch after this list).
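A minimal sketch of those three steps, using hypothetical in-memory stand-ins for the xref database and the priority list (none of the names below are existing PyOBO APIs):

# Hypothetical xref index: CURIE -> set of equivalent CURIEs.
xref_index = {
    'hgnc:6893': {'ncbigene:4137', 'ensembl:ENSG00000186868'},
}

# Hypothetical priority list: earlier prefixes are preferred.
priority = ['ncbigene', 'hgnc', 'ensembl']


def prioritize(curie):
    """Return the highest-priority equivalent CURIE, or None if the input is already best."""
    candidates = xref_index.get(curie, set()) | {curie}

    def rank(candidate):
        prefix = candidate.split(':', 1)[0]
        return priority.index(prefix) if prefix in priority else len(priority)

    best = min(candidates, key=rank)
    return None if best == curie else best


assert prioritize('hgnc:6893') == 'ncbigene:4137'
assert prioritize('ncbigene:4137') is None  # maps to itself, so excluded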

Improve pattern curation and add regex checking to resolver

This should have been obvious since patterns are available for most namespaces. Regex checking isn't strictly necessary if the id mapping is loaded, but it can provide more information to users. It can also help catch cases where we would want to make an identifiers.org URL but there's no id->name mapping available already.
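As a minimal sketch, such a check could look like the following, with a hypothetical pattern and sample identifier of the kind the metaregistry is described as storing:

import re

# Hypothetical pattern for a purely numeric identifier space, plus a sample ID.
pattern = re.compile(r'\d+')
sample_id = '132964'

assert pattern.fullmatch(sample_id)
# A CURIE should fail the check, since identifiers are stored without prefixes.
assert not pattern.fullmatch('CHEBI:132964')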

Possible cases to handle:

  • some namespaces don't have patterns but do have identifiers... should warn me as the developer to get curating (pattern and sample ID should go in the metaregistry)
  • some namespaces have a pattern written in the metaregistry (ensure there's a sample ID)
  • Add optional field in metaregistry for decoy identifiers
  • some namespaces have a pattern just in MIRIAM

See also:

Get mesh to pubchem compound mappings

PubChem provides mappings from PubChem Compound identifiers to MeSH terms as a 2-column TSV at ftp://ftp.ncbi.nlm.nih.gov/pubchem/Compound/Monthly/2020-04-01/Extras/CID-MeSH (not a typo; they really don't give this file an extension).

Note that many (112K of 120K) of the mappings correspond to MeSH supplementary records (which start with a C).
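A minimal sketch of loading this file with pandas, assuming it really is a plain two-column, tab-separated table (the column names here are made up for illustration):

import pandas as pd

url = 'ftp://ftp.ncbi.nlm.nih.gov/pubchem/Compound/Monthly/2020-04-01/Extras/CID-MeSH'

# Headerless two-column TSV, per the description above.
df = pd.read_csv(url, sep='\t', header=None, names=['pubchem.compound_id', 'mesh'])
print(df.head())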

WikiData property mappings and xref import

  • Identify Wikidata properties that might be useful from queries like:
#Wikidata Properties related to biology
SELECT ?item ?itemLabel 
WHERE 
{
  ?item wdt:P31 wd:Q22988603.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}

and

#Wikidata Properties related to biology (Q22988603) that are also identifiers (Q19847637)
SELECT ?item ?itemLabel 
WHERE 
{
  ?item wdt:P31 wd:Q22988603.
  ?item wdt:P31 wd:Q19847637 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}

Do some curation to get them where they need to be, and maybe update these queries.

  • write a script to autodownload all mappings
SELECT ?wd ?xref
WHERE 
{
  ?wd ?p ?xref.
  BIND(wdt:XXX AS ?p)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}

or

SELECT ?wd ?xref
WHERE 
{
  ?wd ?p ?xref.
  VALUES ?p { wdt:P351 wdt:P486 ... }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
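A minimal sketch (plain Python with requests, not a PyOBO API) of running a query like the ones above against the public Wikidata SPARQL endpoint:

import requests

# Example: fetch a handful of NCBI Gene ID (P351) cross-references.
query = """
SELECT ?wd ?xref
WHERE {
  ?wd wdt:P351 ?xref .
}
LIMIT 10
"""

response = requests.get(
    'https://query.wikidata.org/sparql',
    params={'query': query, 'format': 'json'},
)
response.raise_for_status()
for binding in response.json()['results']['bindings']:
    print(binding['wd']['value'], binding['xref']['value'])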

Add miRBase Families

Data is available only in the database dump (see the sketch after this list):

  • Family table: ftp://mirbase.org/pub/mirbase/CURRENT/database_files/mirna_prefam.txt.gz
  • Family to miRNA table: ftp://mirbase.org/pub/mirbase/CURRENT/database_files/mirna_2_prefam.txt.gz
  • miRNA table: ftp://mirbase.org/pub/mirbase/CURRENT/database_files/mirna.txt.gz
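A minimal sketch of pulling these dumps with pandas, assuming the dump files are tab-separated and headerless (column names would have to come from the miRBase schema, so none are assigned here):

import pandas as pd

base_url = 'ftp://mirbase.org/pub/mirbase/CURRENT/database_files'

# pandas infers gzip compression from the .gz extension.
families = pd.read_csv(f'{base_url}/mirna_prefam.txt.gz', sep='\t', header=None)
family_to_mirna = pd.read_csv(f'{base_url}/mirna_2_prefam.txt.gz', sep='\t', header=None)

print(families.head())
print(family_to_mirna.head())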
