
mapping-commons / sssom


Simple Standard for Sharing Ontology Mappings

Home Page: https://mapping-commons.github.io/sssom/

License: BSD 3-Clause "New" or "Revised" License

sssom owl tsv mapping mapping-tools mapping-standards metadata-elements linkml obofoundry mapping-commons

sssom's People

Contributors

actions-user, ahwagner, anitacaron, caufieldjh, cmungall, cthoyt, gouttegd, hrshdhgd, jamesamcl, jmillanacosta, joeflack4, matentzn, nicolevasilevsky, nlharris, sierra-moxon, sujaypatil96


sssom's Issues

Canonical ordering

@cmungall: "I think it's important to have a canonical ordering, and for x_id to directly precede x_label"

from the spec:

Apart from the elements, **_[this table](sssom_metadata.tsv) defines the canonical order_** in which the elements should appear when serialised. This precludes spurious diffs in a git setting, which is an important concern for the continuous reviewing of mappings by curators and users. 

Table: https://github.com/matentzn/SSSOM/blob/master/sssom_metadata.tsv

How should unmapped elements be indicated?

I frequently want to be able to state: I have looked in object_source for a matching or close concept to subject_id but am confident there is nothing.

How should I state this?

Can we make object_id/name nullable (where we have a convention that '' in the TSV is null)?

Alternatively, we could have a CURIE/URI for NoMapping, analogous to owl:Thing.

@wdduncan we need this for the nmdc sssom

I'm also using a pseudo-sssom for cob-to-external where I have blank entries for not mapped
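A minimal sketch of how a consumer might read either convention; both the empty-string-as-null rule and the sssom:NoMapping CURIE are assumptions taken from this thread, not part of the current spec:

```python
NO_MAPPING = "sssom:NoMapping"  # hypothetical CURIE proposed above

def unmapped_subjects(rows):
    """Return subjects asserted to have no match in the object source,
    under either convention discussed: '' (null) in object_id, or an
    explicit NoMapping CURIE."""
    return [r["subject_id"] for r in rows
            if r.get("object_id", "") in ("", NO_MAPPING)]

rows = [
    {"subject_id": "HP:0000001", "object_id": "MP:0000001"},
    {"subject_id": "HP:0000002", "object_id": ""},
    {"subject_id": "HP:0000003", "object_id": "sssom:NoMapping"},
]
print(unmapped_subjects(rows))  # ['HP:0000002', 'HP:0000003']
```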

Add subject/object qualifier

We want to be able to say things like:

  • uberon:heart = mouse:heart // context= NCBITaxon:10090
  • depth_in_feet = depth // when units = UO:foot

I think it's overloading SSSOM to build in a full post-composition / OWL class expression mechanism, but it would still be useful to have qualifier fields for subject and object that have context-specific reliable interpretations

Proposal for accommodating complex mappings

Note: all comments in the following apply to both subject and object fields.

The normal mapping case is when one term in a source set (subject) is mapped to exactly one term in the target set (object).
There are, however, many cases where we need to map sets of terms (subject and/or object), for example:

UBERON:Eye+NCBITaxon:Xenopus->XAO:Eye
MP:adiposeTissuePhenotype+PATO:abnormal->HP:AbnormallyAdiposeTissue
MP:X due to DO:1 -> HP:Y due to MONDO:1

This can become a complicated mess, but I suggest the following:

  1. We allow pipe-separated term lists for both subject_id and object_id. These lists are considered in the order given.
  2. We introduce a new (optional) field called object_pattern which is, by default, none (which means everything in subject_id is considered to be a single identifier pertaining to one term). Now if someone wishes to create a complex mapping, they would write a complex expression like RO:001 some (%s and (RO:002 some %s)) or simply %s and %s (see how we did this in a different context using templates). The filler terms (%s) are filled one by one with terms from the pipe-separated list in subject_id, which materialises the expression, for example, as an owl_class_expression.
  3. We introduce a new (optional) field called object_pattern_type which, if NOT set, is interpreted as "class expression in Manchester syntax" (so there is no need to set it). This could be used in the future to accommodate other kinds of patterns as well (there are complex expressions, for example in the RBox, that are not class expressions, but maybe someone wants to use this to map non-OWL patterns as well).
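The placeholder filling in step 2 can be sketched as follows (a sketch only; the %s convention and example terms are taken from this proposal):

```python
def fill_pattern(pattern: str, id_list: str) -> str:
    """Fill each %s placeholder, left to right, with terms from the
    pipe-separated list in subject_id, materialising the expression."""
    for term in id_list.split("|"):
        pattern = pattern.replace("%s", term, 1)
    return pattern

expr = fill_pattern("%s and (RO:002 some %s)",
                    "MP:adiposeTissuePhenotype|PATO:abnormal")
print(expr)  # MP:adiposeTissuePhenotype and (RO:002 some PATO:abnormal)
```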

Does this make sense?

@cmungall @kshefchek @mellybelly @diatomsRcool @balhoff

Proposal: allow bijective functions to be applied to S and O prior to applying mapping predicate

There are many cases where equivalence/exact is not appropriate but we want to be more precise than close etc.

E.g.

  • uniprot gene-centric reference protein to a gene (cc @sierra-moxon)
  • two properties representing measurements in different units #52
  • species-neutral to species-specific (uberon, upheno)
  • mapping between chemical entities in a way that is stereochemically or charge neutral cc @balhoff

I propose to add two optional columns {sub,ob}ject_transform_function (SF, OF) such that a mapping is read as

SF(S) P OF(O)

E.g.

  1. encoded_by(uniprot:P12345) exactMatch HGNC:2345
  2. measurement_of(dwc:depth_in_meters) exactMatch measurement_of(foo:depth_in_cm)
  3. has_species_neutral_form(zfa:heart) exactMatch has_species_neutral_form(ma:heart)
  4. chebi:citric_acid exactMatch has_conjugate_acid(kegg:citrate)

However, this has the undesirable property of losing the 1:1 nature of exact/equivalent/etc.

It may be preferable to use 1:1 functions with the option to include an argument for building the function, e.g.

  1. no change
  2. measurement_in(m)(dwc:depth_in_meters) exactMatch measurement_of(cm)(foo:depth_in_cm)
  3. has_species_neutral_form(Drer)(zfa:heart) exactMatch has_species_neutral_form(Mmus)(ma:heart)
  4. no change

measurement_in(UNIT: u) is a function builder that returns a 1:1 function, e.g.

measurement_in(m) => F, F(10m) = 10, F(100cm) = 1, ...
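Such a builder can be sketched as a closure that returns a 1:1 normalising function; the scale factors and the choice of metres as the canonical unit are illustrative assumptions:

```python
def measurement_in(factor_to_canonical: float):
    """Function builder: returns a 1:1 function that normalises a
    measurement to a canonical unit via a scale factor (sketch)."""
    def f(value: float) -> float:
        return value * factor_to_canonical
    return f

from_metres = measurement_in(1.0)  # metres are the canonical unit here
from_cm = measurement_in(0.01)

print(from_metres(10.0), from_cm(1000.0))  # 10.0 10.0
```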

This preserves the 1:1 nature of exactMatch, and allows users who don't care about semantic precision to see all 1:1 mappings, while still preserving the precise semantics.

The exact specification of the functions may be out of scope for SSSOM itself, but could be handled by ROBOT templates.

This does complicate the mapping to OWL, particularly where a logical axiom type is used as the predicate. I am OK with simply saying these should not be translated without a force option

This may seem to be getting away from the Simple in SSSOM, but I would argue this keeps the mapping format simple and usable while dealing with genuinely tricky cases in a way that doesn't sacrifice precision.

Relationship to PROV-O

Not for the first version, but it would be interesting to think about how SSSOM relates to PROV-O. e.g. would mappings have prov:wasGeneratedBy properties? could ontology alignment be a prov:Activity?

Negated mappings and the standardisation of mapping predicate modifiers

This issue is a history of the discussion on how to handle negated mappings. After a lot of discussion and a final vote at #40 (comment), we've decided to go with adding an additional predicate modifier column to the SSSOM standard. This issue can be closed along with a pull request that realizes this update.

See draft solution in #99

Original issue text from @cmungall:

Similar to #38 we could allow predicates to be property expressions such as !owl:equivalentTo

Need to have metadata fields for the data model location of terms/code in both subject (source) and object (target)

In CCDH, we're trying to create a mapping from the NCI nodes' data dictionaries to NCIt codes that will be used in the common model. For example, in the NCI GDC (Genomic Data Commons) data model, the Aliquot entity has an attribute analyte_type, and "cfDNA" is a value in the enum of terms for that attribute. The attribute will be mapped to Specimen.analyte_type in the common data model (CDM). At the same time, the term 'cfDNA' is going to be mapped to NCIT:C128274. We need to record the field mapping (GDC.Aliquot.analyte_type -> CDM.Specimen.analyte_type) in the term mapping to provide context and also reference.

We were trying to use SSSOM:subject_match_field and SSSOM:object_match_field for this purpose but that seems to be incorrect. What metadata fields should we use?

add category information for subjects and objects

In some cases, it is important to know what type of entity the subject and/or object is; e.g., class, property, annotation (perhaps primitive data types too?). Can we add columns such as subject_category and object_category to capture this info?

cc @cmungall

sssom_metadata.tsv is confusing to the uninitiated

The actual format of SSSOM is kinda hard to grok in the docs

You have to go to
https://github.com/OBOFoundry/SSSOM/blob/master/SSSOM.md#tsv

which is very texty

In the text is buried a link to

https://github.com/OBOFoundry/SSSOM/blob/master/sssom_metadata.tsv

which is hard to interpret and a bit confusing. It looks like the first column in SSSOM is 'ID'. It took me a while to realize this was a ROBOT template.

Also, the sssom: prefixes are confusing; they aren't actually meant to be used in the header, are they?

I suggest a simple derived table that looks more like the original gdoc

also, I use this hacky script to make GH tables from TSVs; it may be helpful here:

#!/usr/bin/perl
# Hacky TSV -> GitHub markdown table converter.
use strict;
use warnings;

my $n = 0;
my $len;
my $hlen;
while (<>) {
    chomp;
    # strip a leading '#' from the header row, if present
    if ($n == 0 && m@^\#@) {
        s@^\#@@;
    }
    my @vals = split(/\t/, $_);
    # '|' is the markdown column separator, so replace embedded pipes
    @vals = map { s@\|@, @g; $_ } @vals;
    if (!defined $hlen) {
        $hlen = scalar(@vals);
    }
    # pad short rows out to the header width
    while (scalar(@vals) < $hlen) {
        push(@vals, '');
    }
    print '|' . join('|', @vals) . "|\n";
    my $nulen = scalar(@vals);
    if ($n > 0 && $len != $nulen) {
        print STDERR "MISMATCH: $len != $nulen\n";
    }
    $len = $nulen;
    if ($n == 0) {
        # emit the markdown header separator row
        @vals = map { "---" } @vals;
        print '|' . join('|', @vals) . "|\n";
    }
    $n++;
}

Help with understanding "context"

There were a couple of requests asking for the ability to denote something like a "mapping context". So far, we have assumed only the general case: a mapping either exists always or does not. This is an important requirement for realizing one of our core use cases, which is mixing and matching mappings from different mapping sets (and thereby effectively de-contextualising them). I would appreciate it if someone could send me a link or explain a bit what exactly a "context-dependent" mapping is and how a context is typically denoted.

Example JSON is not valid

The code examples in SSSOM.md are not valid JSON: they contain trailing commas and smart quotes.

Is there an implicit JSON-LD context? Making that explicit could double as an indication of which version of this standard was used.

I'm confused by "date": “2020-09-2020". It might be good to enforce a date format.

Although it's a bit of a pain, I suggest writing a script that executes/tests/validates the documentation against your tools, to ensure that the examples are always right.
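A sketch of such a check, assuming the examples live in fenced json blocks in SSSOM.md (the fence convention, function name, and sample document are assumptions):

```python
import json
import re

FENCE = "`" * 3  # markdown code fence

def invalid_json_blocks(markdown: str):
    """Find fenced JSON blocks that fail to parse; returns (index, error)
    pairs. A sketch of the suggested docs-validation script."""
    pattern = re.escape(FENCE + "json") + r"\n(.*?)" + re.escape(FENCE)
    errors = []
    for i, block in enumerate(re.findall(pattern, markdown, re.S)):
        try:
            json.loads(block)
        except json.JSONDecodeError as e:
            errors.append((i, str(e)))
    return errors

doc = (FENCE + 'json\n{"date": "2020-09-20"}\n' + FENCE + "\n"
       + FENCE + 'json\n{"date": "2020-09-20",}\n' + FENCE + "\n")
print(len(invalid_json_blocks(doc)))  # 1 (the trailing comma)
```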

Inverse mappings

I have seen some SSSOMs in the wild with a predicate field that contains a property expression, e.g. inverseOf(P). Should we standardize this?

Note I'd prefer to use SPARQL syntax, e.g. ^ for inverted predicate

See also biolink/biolink-model#440

The other option is to have an explicit field such as invert: boolean, but the danger is that this is ignored and we get the wrong semantics
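Whatever the serialisation, an explicit inverse table keeps the semantics checkable. A sketch: the predicate pairs are standard SKOS, but the helper itself is hypothetical:

```python
# Inverse pairs from SKOS; symmetric predicates invert to themselves.
INVERSES = {
    "skos:broadMatch": "skos:narrowMatch",
    "skos:narrowMatch": "skos:broadMatch",
    "skos:exactMatch": "skos:exactMatch",
    "skos:closeMatch": "skos:closeMatch",
}

def invert(mapping):
    """Invert an <s, p, o> mapping by swapping s and o and replacing
    p with its declared inverse (hypothetical helper)."""
    s, p, o = mapping
    return (o, INVERSES[p], s)

print(invert(("HP:0009124", "skos:broadMatch", "MP:0000003")))
```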

Constraints on TSV comments

Is it the case that no comment lines can appear once the TSV starts? Meaning, the comments must be in a contiguous block at the start of the file? This would make sense to me.

Also, is it the case that any comments must constitute valid YAML syntax after stripping the leading #? That's the impression I get from the spec, but I think it could be stated more directly.
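Under those two assumptions (a contiguous leading block, valid YAML after stripping the #), a reader can be sketched without a full YAML parser for the simple key: value case; the helper and example values are hypothetical:

```python
def read_header(text: str) -> dict:
    """Parse the contiguous leading '#' comment block of an SSSOM TSV as
    simple 'key: value' pairs (sketch; real files warrant a YAML parser).
    Parsing stops at the first non-comment line."""
    meta = {}
    for line in text.splitlines():
        if not line.startswith("#"):
            break  # comments must form a contiguous block at the top
        stripped = line.lstrip("#").strip()
        key, sep, value = stripped.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

example = ("#mapping_set_id: https://example.org/ms1\n"
           "#license: CC0\n"
           "subject_id\tobject_id\n")
print(read_header(example))
```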

Contribution to SSSOM

Hi Nico

I would like to contribute to building the new SSSOM format, which I have been discussing with Thomas Liener. You can reach me at ianharrowconsulting(at)gmail(dot)com to follow up. You will find me on LinkedIn and the Pistoia Alliance website (Manager for Ontology Mapping and FAIR Implementation projects).

Cheers, Ian

Allow multiple match types

So far, we have only allowed a single match_type per mapping, like LexicalMatch or LogicalMatch, with the idea that in case there are multiple ways a term maps, we just create multiple mappings. Some people did not like this idea: they want to be able to say that a single given mapping is both a LexicalMatch and a LogicalMatch. This issue gives the opportunity to discuss the matter, but I am inclined to grant this request and turn match_type into a |-separated list that means, strictly: "this mapping can be derived via multiple routes".

The main argument against this is that a bunch of metadata elements directly refer to the match_type (what would subject_match_field, for example, refer to if multiple match_types are chosen? All of the matches?). The main argument for it is that a user can immediately see the strong evidence for a mapping.

My personal sense of clarity still tends toward the single match_type, to avoid the confusion of how to interpret other metadata, but I can see the appeal for other users, and will therefore simply do this (multiple match_types) if there are no further arguments against.

Add two new fields: sub/obj category

When mapping ontologies that contain many different kinds of entities (e.g. NCIT) or even a few (GO), it can be useful to see the category of subject and object. This can be used to visually and computationally weed out false positives (e.g. vector-sensu-math vs vector-sensu-infectious-disease).

The source of categories may be domain-dependent. COB or biolink could be used for bio. It might still be useful to have a fixed level so that it is easy to compare like with like

How to accurately capture curation rules?

Curation rules can be complex, but let's start with the simpler ones we can already represent:

Domain expert decision is the default:

subject_id relation_id object_id match_type object_source
HP:001 owl:equivalentTo MP:001 sssom:HumanCuration mp.owl

The curation rule here is: a domain expert determined or confirmed the equivalence mapping by hand.

Automated tool:

subject_id relation_id object_id match_type object_source mapping_tool subject_match_field object_match_field
HP:001 owl:equivalentTo MP:001 sssom:LexicalMatch mp.owl logmap skos:exactSynonym rdfs:label

The curation rule is: a tool (logmap) determined the equivalence mapping by matching an exact synonym of the subject to a label of the object. (this could be further refined by adding preprocessing information, such as "stemming" to these fields).

@mellybelly and others, please add more concrete curation rules you encounter regularly when creating a mapping.

mapping_tool

From Hyeongsik:

SSSOM currently has a mapping_tool field, but it could be a bit more specific, similar to the agent information from web browsers, e.g., Mozilla/5.0. We just want to make sure that the same alignment results can be precisely reproduced using a specific version of an alignment tool, which could be helpful for testing and benchmarking different alignment tools for research purposes. At a minimum, I wish SSSOM would support separate fields or a predefined format for program id/name, version fields, build numbers, etc.

File mapping rules metadata idea

Starting a thread on how we might create a metadata structure for the rules used for the mapping. This would be for the whole file and could go in the header. It would require consideration of the match type field, since you will often have more than one match type for a given mapping. For example, I might only make exact mappings when there is both a lexical match as well as a match on a synonym, logical def, Xref, etc.

There should be a "rule" for each type of mapping present in the file. Maybe start with something like:

predicate_id include_match_type exclude_match_type
skos:exactMatch Lexical; Logical
skos:narrowMatch Synonym Lexical

other more complex examples of inclusion/exclusion criteria - exact string match on definition string, matching both primary label and at least one synonym and not matching synonyms on other terms, use of dbxrefs, etc.

Add a quickstart to the README

The docs are great but you have to read a lot to get the gist

I suggest adding an example table or two to the README, e.g.:

subject_id predicate_id object_id match_type subject_label object_label
HP:0009124 skos:exactMatch MP:0000003 Lexical Abnormal adipose tissue morphology abnormal adipose tissue morphology
HP:0008551 skos:exactMatch MP:0000018 Lexical Microtia small ears
HP:0000411 skos:exactMatch MP:0000021 Lexical Protruding ear prominent ears

best practice for mapping two measurement concepts that take different units

Example 1:

subject = "s:depth"
object = "o:depth"

conceptually these are the same, but subject is for a property whose range is a literal with implicit unit of meters and object is for a property whose range is a literal with implicit unit of cm

Example 2:

subject = "s:depth_in_meters"
object = "o:depth"

conceptually these are the same, but subject is for a property whose range is a literal with unit of meters and object is for a property whose range is a string conforming to "{number} {unit}" syntax

I would say in both cases it is incorrect to use skos:exactMatch, because the CURIE pairs denote different concepts, although the CURIEs are about the same concept. So we would instead fall back to skos:closeMatch. This is not wrong; however, it seems unsatisfactory, as the concepts are "conceptually closer" than other close matches.

Proposed recommendation:

  • skos:closeMatch is not wrong
  • define a new predicate is_about_identical_concepts. Formally: S is_about sc, O is_about oc, sc exactMatch oc.
  • add a new column intended for the unit case

cc @wdduncan @pbuttigieg

Representing data model elements and their values in SSSOM

Case:

we want to map literal values in a set of permissible values in a data model (say "M" in gcs.maritalstatus) to some ontology term, NCIT:Married. Now there might not be anything like an id for the permissible values in the data model. One solution could be to do something like _:b in the subject_id or object_id column to indicate this is a blank node, meaning that the label should be used for the mapping, but maybe we need something more general to allow for mapping arbitrary literals, as @wdduncan suggested in another thread as well.

From @hsolbrig:

We DO need a way to represent a specific string that appears in a given data element, but I see this as a case where we are imposing the constraints of OUR data model (RDF) on a pretty straightforward idea: string "M" in the marital status data element in model M means "Married" as defined by ontology … . I wonder whether this may be a case where we are overburdening the notion of an "ontology map". Note that using the actual map, if it is in RDF, is going to require the same amount of work whether the source nodes are represented as URIs or BNodes.

broadMatch and narrowMatch may have been reversed

Referring to the SKOS spec:

“A triple <A> skos:broader <B> asserts that <B>, the object of the triple, is a broader concept than <A>”


skos:broadMatch is a sub-property of skos:broader


As I read this, <X> skos:broadMatch <Y> says that <Y> is broader than <X>. The current documentation appears to say exactly the opposite?

Consider how to effectively reference subgraphs/subsets

subject_source (or object_source of course) reference the source in general, like http://purl.obolibrary.org/obo/hp.owl
subject_source_version references a particular version of the resource, like http://purl.obolibrary.org/obo/hp/releases/2020-03-27/hp-base.owl

We often want to say that a mapping set maps, say, all phenotypes in subject_source to all diseases in object_source, so we need to be able to express something like subject_subgraph. My feeling is that we should overload subject_source_version for this purpose and require the supplied link to be the actual subgraph of the source used to compute the mapping.

If we wanted to denote what the medical informatics community calls "value sets" ("specifies a set of codes drawn from one or more code systems, intended for use in a particular context"), we could actually define a new field called source_terms (terms being the neutral version of what the MI people call a value set, and the ontology world calls a signature of interest). The reason I am not convinced this is a good idea is that the mapping set itself (the mappings, I mean) sort of serves as a proxy for the value set: if you mention a term A in the subject_id column, obviously it was also part of the value set that was being mapped.

Maybe all that's needed is overloading subject_source_version to refer to the exact and complete input graph that was used to perform the mapping? Opinions welcome.

multiple fields mapping to one field

Do we have terms to handle the case when multiple fields map to a single field?

For example, it is common to see dates broken out into year, month, and day fields, e.g. collection_year, collection_month, collection_day. But these may map to a single field, e.g. collection_date.

cc @cmungall

How to specify default namespace - or should we not?

It seems like it should be possible to do
:0001 instead of HP:00001, or even 0001. What would be the concerns?

# curie_map:
#       MP: http://purl.obolibrary.org/obo/MP_
#       _: http://purl.obolibrary.org/obo/MYSPACE_
subject_id relation_id object_id match_type object_source mapping_tool subject_match_field object_match_field
001 owl:equivalentTo MP:001 sssom:LexicalMatch mp.owl logmap skos:exactSynonym rdfs:label
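A sketch of the expansion logic under that proposal; the '_' default-prefix convention is taken from the example above, and the helper name is hypothetical:

```python
CURIE_MAP = {
    "MP": "http://purl.obolibrary.org/obo/MP_",
    "_": "http://purl.obolibrary.org/obo/MYSPACE_",  # hypothetical default
}

def expand(curie: str, curie_map=CURIE_MAP, default="_") -> str:
    """Expand a CURIE; bare local IDs and ':'-prefixed IDs fall back to
    the default prefix (sketch of the proposal)."""
    prefix, sep, local = curie.partition(":")
    if not sep:
        prefix, local = default, curie
    elif not prefix:
        prefix = default
    return curie_map[prefix] + local

print(expand("MP:001"), expand(":0001"), expand("0001"))
```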

internationalization / language tags

From Hyeonsik:

It would be nice if SSSOM could support language tags with encoding information, e.g., en_US.UTF-8 for terms in English or zh_CN.UTF-8 for terms in Mandarin. Such information does not have to be repeated on every row in TSV files but could be included in headers if possible. In cases where language codes or encodings are not specified, we could assume en_US.UTF-8. This suggestion may not seem necessary to British or American users, but it is safe to assume that source ontologies can be encoded in many different languages and encodings.

New command: summarize coverage of mappings

I want to know what % of terms in ontologies S and O are

  1. directly covered
  2. directly + indirectly covered for some set of relations (e.g isa/partof ancestors)

this should be pretty trivial combining ontobio and sssom-py... but engineering-wise, how do we organise the dependencies? We want to avoid reciprocal dependencies.
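The direct-coverage part is indeed trivial; a sketch (function name hypothetical, indirect coverage via ancestor relations omitted):

```python
def direct_coverage(ontology_terms, mapped_terms) -> float:
    """% of terms in an ontology that appear in a mapping set's
    subject_id (or object_id) column."""
    terms = set(ontology_terms)
    return 100.0 * len(terms & set(mapped_terms)) / len(terms)

print(direct_coverage(["HP:1", "HP:2", "HP:3", "HP:4"],
                      ["HP:1", "HP:3", "X:9"]))  # 50.0
```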

mapping terms with different syntax for values

For the DWC-MIxS mappings (see repo), we are finding terms that map in the sense that they are about the same thing, but the specification for the syntax of the values is different. An example of this is depth. In DWC, the unit and the scalar are in separate fields, but in MIxS the scalar and unit are in one field (e.g. "3 meters"). See this ticket:

tdwg/gbwg#28

Do we want to develop mappings that capture differences in the value syntax? FWIW, I think DWC:depth should be mapped as a skos:exactMatch to MIxS:depth. The differing value syntax is an ETL/implementation issue. However, it might still be nice to have a vocabulary for capturing such differences.

cc @cmungall @raissameyer @pbuttigieg

Create better analysis of related work

Interested to be in the team

Hi All,
I am planning to work with a few ontologists on mapping schemas for various CDM mappings. I'd like to contribute to or coordinate with this group.
Thanks,
Asiyah

Define canonical sort order

For diff purposes

Suggested: alphanumeric order prioritized by either (1) default column ordering or (2) default column ordering but all ID fields first (to avoid spurious diffs if labels change)

(we will likely want a diff function in sssom-py, but we also want to maximize value of git diffs)

New metadata element: mapping_cardinality

Proposal: add a new field mapping_cardinality which can take the following values:
1:n, 1:1, n:1, 1:0, 0:1

These are in principle derivable from the mapping set, but turn out to be super useful for querying and analysis to begin with.

given a mapping <s,p,o>

  • 1:n: s is mapped to more than one o with relation p
  • 1:1: s is mapped to exactly 1 o with relation p
  • n:1: more than 1 s is mapped to the same o with relation p
  • 1:0: o is "sssom:NoMatch"
  • 0:1: s is "sssom:NoMatch"
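These labels are indeed derivable from the set itself; a sketch of the derivation (helper hypothetical; the 1:0/0:1 NoMatch cases are omitted, and a mapping that is both 1:n and n:1 comes out as n:n here):

```python
from collections import Counter

def mapping_cardinality(mappings):
    """Derive a cardinality label per <s, p, o> mapping by counting
    objects per (subject, predicate) and subjects per (predicate, object)."""
    objs_per_s = Counter((s, p) for s, p, o in mappings)
    subs_per_o = Counter((p, o) for s, p, o in mappings)
    labels = {}
    for s, p, o in mappings:
        left = "n" if subs_per_o[(p, o)] > 1 else "1"
        right = "n" if objs_per_s[(s, p)] > 1 else "1"
        labels[(s, p, o)] = f"{left}:{right}"
    return labels

m = [("A", "skos:exactMatch", "X"),
     ("A", "skos:exactMatch", "Y"),
     ("B", "skos:exactMatch", "X")]
print(mapping_cardinality(m)[("A", "skos:exactMatch", "Y")])  # 1:n
```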

More information on match types

Besides lexical mappings, what kinds of match types do you envision people asserting? I read through the SSSOM controlled vocabulary and didn't see anything about lexical mapping or other types there. It would be good to give examples of other types, and also to specify how this column should be used.

Semantic similarity example

This looks great. Can you give some examples of how the phenodigm class-score similarity output from OWLSim might look?

The OWLSim format from --sim-save-phenodigm-class-scores is (with column headers partly translated to SSSOM):

# subject_id	object_id	simj	IC	mica_id
HP_0002651	HP_0002651	1.0	8.829843768215113	HP_0002651;

Assuming this is within scope for SSSOM:

  1. Is there anywhere to add the algorithm name to describe how the semantic_similarity_score was calculated, e.g. Jaccard, Lin?
  2. In the sssom_metadata there is no field for mica_id, but there is a field for information_content_mica_score. Is this deliberate?
  3. Is the information_content_mica_score supposed to be normalised to 0-1 for the dataset?
sssom:semantic_similarity_score: A score between 0 and 1 to denote the semantic similarity, where 1 denotes equivalence. (TSV/RDF example: 0.8; Scope: L; Entity Type: owl:AnnotationProperty; Required: 0; Datatype: xsd:double; Sub-property of: sssom:metadata_element)
sssom:information_content_mica_score: A score between 0 and 1 to denote the information content of the most informative common ancestor, where 1 denotes the maximum level of informativeness. (TSV/RDF example: 0.3; Scope: L; Entity Type: owl:AnnotationProperty; Required: 0; Datatype: xsd:double; Sub-property of: sssom:metadata_element)
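For question 1, a worked sketch of one candidate algorithm, Jaccard similarity over ancestor sets (the simj column in the OWLSim output); the ancestor sets here are illustrative:

```python
def simj(ancestors_a, ancestors_b) -> float:
    """Jaccard similarity over (reflexive) ancestor sets: |A∩B| / |A∪B|."""
    a, b = set(ancestors_a), set(ancestors_b)
    return len(a & b) / len(a | b)

# A term compared with itself yields 1.0, as in the OWLSim row above.
print(simj({"HP_0002651", "HP_0000118"}, {"HP_0002651", "HP_0000118"}))  # 1.0
print(simj({"HP_1", "HP_2", "HP_3"}, {"HP_2", "HP_3", "HP_4"}))          # 0.5
```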

specifying multiple creators

In the examples there is only a single creator.

Can a list be provided using YAML list syntax? The LinkML model doesn't declare this as multivalued.
