
rescience's Introduction

The ReScience Journal

ReScience is a platinum open-access, peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research is reproducible. To achieve this goal, the whole editing chain is radically different from that of any traditional scientific journal. ReScience lives on GitHub, where each new implementation is made available together with comments, explanations and tests. Each submission takes the form of a pull request that is publicly reviewed and tested in order to guarantee that any researcher can re-use it. If you have ever replicated a computational result from the literature, ReScience is the perfect place to publish the new implementation. Reproducible Science is Good. Replicated Science is better.

The Editorial Board

Note: This repository contains the first volumes of ReScience. Articles were submitted as pull requests to ReScience-submissions and were merged upon acceptance, with a reference added to this repository. The new ReScience C workflow is a bit different and is based on two new repositories: one for submissions and another one containing the accepted articles. The entry point for ReScience remains its website.

Archives

rescience's People

Contributors

achabotl, benoit-girard, epogrebnyak, gitter-badger, hughparsonage, karthik, khinsen, marwahaha, oliviaguest, otizonaizit, pdebuyl, rougier, thierrymondeel, vnmabus


rescience's Issues

Do we need some conditions for reviewers? (no)

A message has been posted (by me) on Twitter saying that anyone with a PhD, or any PhD student, can be a reviewer.
What about master's or grad students? Is there any reason not to have them on board? Do we need any conditions at all?

I personally think that to do a proper review you need some knowledge of the scientific field, in order to appreciate to what extent the original paper has been replicated. This means a reviewer should be able to read and understand the original paper as well as the new one.

page numbers

Hi @rougier @khinsen

So, we do not have page numbers? Just asking because I am preparing the files for Henriques et al. I know it seems a lowly thought, but most citation styles have a slot for page numbers, journal typesetters (whatever that means by now) will ask, etc.

So the full reference is year, volume, issue? (+ names of authors to make papers distinct)

ORCID should be provided via OAuth not form field

ORCID iDs from authors, editors or reviewers should be provided by an authenticated OAuth2 call to the ORCID registry rather than by filling in a form. This not only provides a nicer user experience, but also avoids errors when entering the ORCID. You don't have to be an ORCID member to use this functionality. Client libraries for many languages are available (I maintain the Ruby library omniauth-orcid).
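For illustration, here is a minimal Python sketch of the flow being described, using the public ORCID OAuth2 endpoints; the client credentials and redirect URI are placeholders that would come from ReScience's own ORCID client registration, so treat this as an assumption-laden outline rather than a drop-in implementation.

import requests

AUTHORIZE_URL = "https://orcid.org/oauth/authorize"
TOKEN_URL = "https://orcid.org/oauth/token"
CLIENT_ID = "APP-XXXXXXXXXXXXXXXX"                    # placeholder client id
CLIENT_SECRET = "client-secret"                       # placeholder secret
REDIRECT_URI = "https://example.org/orcid/callback"   # placeholder callback

def authorization_url():
    # URL the author/reviewer visits to sign in with ORCID and grant access.
    return (f"{AUTHORIZE_URL}?client_id={CLIENT_ID}&response_type=code"
            f"&scope=/authenticate&redirect_uri={REDIRECT_URI}")

def exchange_code(code):
    # Exchange the authorization code for the authenticated ORCID iD.
    response = requests.post(
        TOKEN_URL,
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
              "grant_type": "authorization_code", "code": code,
              "redirect_uri": REDIRECT_URI},
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    payload = response.json()
    return payload["orcid"], payload.get("name")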

A form for authors of projects to request a rescience replication of their publication.

These authors would be those who would welcome replication of their work. This may be less intimidating, particularly if the replicator works in a niche field and/or is at an early stage of their career. It would also make up a list of works to start on in order to build up the ReScience portfolio. This is not to say that those who don't welcome replication shouldn't have their work replicated; it's just a start. This could be extended to 'nominations' for projects to be replicated.

typo

There's a typo, but unfortunately, I can't make a PR to fix it because it's in the wiki. :(

Anyway, on the submission page, you have:

Issue a pull request (PR) to ReScience with title "Review Request" and insert the followin text in the description:

followin needs a "g" at the end.

What about page numbers or equivalent identifiers

BibTeX requires, for a journal entry, that page numbers are provided, right? And publishers also require page numbers in the bibliography of the papers you write.

I admit that it can be a bit ridiculous to put page numbers on an electronic publication. However, electronic-only publications, like the Frontiers journals or Scholarpedia, have IDs that are used as page numbers. Shouldn't we do something like that?

Make the issues easily searchable

I looked at this (great!) initiative, but it is strange that there is no place where I can easily search through the content of the journal (I know, there is only one paper published, but nevertheless). I found the only paper through the LabCritics blog; there should be some sort of table of contents and, possibly, some search facility set up on the journal's home page. (I am not sure how this could be done with GitHub, though.)

Editorial board

This is the thread related to the formation of the editorial board. Ideally, the editorial board should cover every computational science, as well as the languages and the most common tools & libraries used in each specific domain.

  • Physical science
  • Life Science
  • Formal Science
  • Social Science
  • Engineering Science

Organize science domains

We can use the issue labels for specifying the domain of a submission. The list should probably be extended and revised regularly.

See http://en.wikipedia.org/wiki/Branches_of_science

  • Physical science
    • Physics
    • Chemistry
    • Earth Science
    • Ecology
    • Oceanography
    • Geology
    • Meteorology
  • Life Science
    • Biology
    • Neuroscience
    • Zoology
    • Botany
  • Formal Science
    • Computer Science
    • Mathematics
    • Statistics
    • Systems science
  • Social Science
    • Economics
    • Linguistics
    • Psychology
  • Engineering Science
  • etc.

We should restrict the list to computational domains only.
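If labels are used for domains, keeping the label set in sync could be scripted. Below is a minimal sketch using the GitHub REST API; the token, repository name and colours are placeholders, so this is an illustration of the idea rather than an agreed implementation.

import requests

GITHUB_TOKEN = "ghp_xxx"            # placeholder personal access token
REPO = "ReScience/ReScience"        # repository that receives the labels

DOMAINS = {                         # placeholder subset of the list above
    "Physics": "1f77b4",
    "Neuroscience": "2ca02c",
    "Computer Science": "9467bd",
    "Economics": "d62728",
}

def create_labels():
    for name, color in DOMAINS.items():
        response = requests.post(
            f"https://api.github.com/repos/{REPO}/labels",
            json={"name": name, "color": color},
            headers={"Authorization": f"token {GITHUB_TOKEN}",
                     "Accept": "application/vnd.github+json"},
        )
        # 201 means the label was created; 422 means it already exists.
        print(name, response.status_code)

if __name__ == "__main__":
    create_labels()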

Paper only in PDF?

I have found the (only) publication of the journal, Interaction between cognitive and motor…, but, to my great disappointment, the paper itself, i.e., the narrative content, is only available as a PDF. Based on the URL of the PDF file, I also got to the article's repository, but that repository only contains a PDF as the finished text. I would have expected to find, through github.io, a version in HTML, properly readable in a mobile environment, possibly directly linking into, say, IPython notebooks or whatever else, etc. This journal has the potential to show all the power of the Web for publishing...

ReScience Archives

I've created a ReScience Archives organization (https://github.com/ReScience-Archives) where each associate editor is an owner, so that they can create new repositories. This was not possible using team rights only.

This new organization might also be useful to avoid polluting the main organization repository.

scientific area outside the editorial board expertise

Hi all,
congrats for a very interesting effort.
My question is: whom should I contact if I would like to be part of the effort but my area is outside the existing expertise of the Editorial Board?
I am working in Databases (and Data Management in general), and with the explosion of data we have a lot of interesting cases that would fall under the journal's interest in re-science.
Please let me know,

Best regards,

Dimitris

Dimitris Kotzinos
Professor
Head MIDI team
Lab. ETIS (ENSEA/UCP/CNRS UMR 8051)
& Dept. Sciences Informatiques, Université de Cergy-Pontoise
2 av. Adolphe Chauvin
Site Saint Martin, bureau A561
95000 Pontoise
France
phone: +33 13425 2855
e-mail: [email protected]

Replication of preprints

I've been asked if we have a specific policy regarding the replication of preprints. There is already a preprint being replicated (ReScience/ReScience-submission#20) and I don't see any problem with preprint replication, but I might be missing something obvious. Any thoughts?

Submitting replication done as the paper reviewer

I have been trying to do implementations of methodological work I'm asked to peer-review for journals. This is not always possible but I have managed to do a couple and successfully reproduce the submitted results. I shared my reproduction as part of my review and the papers were accepted and published.

Is there any ethical issue with submitting my reproduction made during peer-review after the article has been accepted and published?

Reviewer application

If you want to become a reviewer for ReScience, please post your information here. The format is:

[name](github account link)  
Scientific expertise - Language expertise  
ORCID: [xxxx](https://orcid.org/xxxx)

You can have a look at http://rescience.github.io/board/ for ideas (stars are pointers to reviews; no need to add them).

Then you can watch the submission thread and propose yourself for review if a new submission falls into your expertise domain.

Criteria for publication

Under criteria for publication, it is written "You cannot submit the replication of your own research, nor the research of your close collaborators."

The first part is clear.

We have replicated a 20-year-old study in a modern language without seeing the original implementation and the precise algorithms. The first author of the paper wrote all the code and prepared the figures, but we do not know him. I have not met the other authors. The second and third authors will provide ideas on future directions of a related project but we will write the code and tools. Can we please submit our contribution?

Reproducible empirical research?

All,

I am wondering whether it would be appropriate to extend the ReScience family and create a second journal for reproduced experimental work, broadly spanning psychology and neuroscience. As you know, reproducible research in these fields has drawn a lot of attention, but publishers are not willing to change their policies and/or direct reviewers to be more accepting of submissions describing reproduced work. I believe there is a need for an outlet for such work, and the way ReScience works seems very suitable, especially in light of the fact that in both psychology and neuroscience, students are more incentivised to reproduce work than their PIs, who are judged on originality.

Due to the focus of my own remit, I would propose to focus broadly on psychology and neuroscience work, giving way to cross-disciplinary reviewing, but I imagine that if the journal gets overwhelmed with submissions, there might be a need to specialise and possibly divide into more specific journals.

Thoughts?

dr. Etienne Roesch
Associate Professor of Cognitive Science
University of Reading

Full affiliation for editors

Dear @ReScience/associate-editors @ReScience/editors,

We've applied for membership of the Open Access Scholarly Publishers Association, and for this we need the full affiliation (including department) for each of you; this will be displayed on the board page. We could also have a short text introducing yourself + a photo, just like for the JOSS board.

If you're OK with that (short text + photo), just add a 👍 here and reply with the relevant information.
Otherwise, I would still need your full affiliation.

Docker environment

Hello,

Should we encourage submitters to provide a Dockerfile alongside their submission in order to facilitate the work of the reviewers (no installation other than Docker, no package conflicts, no OS limitations, etc.)?

For more information: Docker

Submission test

Before the journal can be started, we need to test the submission process to check whether the procedure works as expected, and modify it if necessary.

  • Issue a pull request
  • Acknowledge the pull request
  • Start review
  • Accept the paper
  • Import the repo
  • Merge branch into the master (in the new repo)
  • Make a release
  • Get a DOI (a sketch of how this step might be automated is given below)
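As one possible way to handle the last step, here is a minimal Python sketch that deposits the article PDF on Zenodo and retrieves a DOI. The token, file name and metadata are placeholders, and whether Zenodo is the archive we settle on is itself an assumption.

import requests

ZENODO_TOKEN = "xxx"                                   # placeholder token
API = "https://zenodo.org/api/deposit/depositions"
params = {"access_token": ZENODO_TOKEN}

# 1. Create an empty deposition.
deposition = requests.post(API, params=params, json={}).json()
dep_id = deposition["id"]
bucket = deposition["links"]["bucket"]

# 2. Upload the article PDF.
with open("article.pdf", "rb") as fp:
    requests.put(f"{bucket}/article.pdf", data=fp, params=params)

# 3. Attach minimal metadata (placeholder values).
metadata = {"metadata": {
    "title": "[Re] Title of the replicated article",
    "upload_type": "publication",
    "publication_type": "article",
    "description": "ReScience article",
    "creators": [{"name": "Doe, Jane"}],
}}
requests.put(f"{API}/{dep_id}", params=params, json=metadata)

# 4. Publish; the response contains the newly minted DOI.
published = requests.post(f"{API}/{dep_id}/actions/publish", params=params).json()
print(published["doi"])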

Suggesting a paper to replicate?

There are two highly cited computational papers from the same author which I have been unable to reproduce, despite great effort. However, there are two problems:

  1. My attempts to reproduce them were in MATLAB, and I am less skilled with Python. As a busy postdoc, it would be impossible to find the time to learn Python well enough to attempt a replication in it.

  2. As a postdoc in the same field, I am hesitant to attach my name to a replication attempt if the results are deemed not reproducible.

This gives rise to two questions:

  1. Can I suggest papers for replication, so that someone with better Python skills can try it out?

  2. Can papers that fail to reproduce a result be submitted anonymously?

To be clear, I am completely open to the possibility that the results in those papers are reproducible and the mistake is mine. This is another reason I would be eager to suggest the paper for someone else to try.

I would be happy to donate to the ReScience project (if donations are accepted) or perform some needed service at ReScience in return for someone attempting a replication of the papers.

Article number request

This thread is for editors to register article numbers at the final stage of publication.

Post-publication maintenance

I have tried to reproduce the results of some of the papers published in ReScience, with the goal of using them as examples in a course on reproducible research. The idea is to let students re-run the code and compare the outcomes, without necessarily understanding much about the contents - the goal of the exercise is to show how computational work can be published reproducibly, and what reproduction actually involves with today's state of the art. But before telling students what to do, I prefer to try it myself.

So far I haven't fully succeeded with any of the three papers I have tried, which makes me wonder if we should encourage authors to provide some level of post-publication maintenance of their submissions. Technically this would be easy, as anyone can open an issue on a repository in ReScience-Archives to report a problem.

Any opinions?

RSS feed for new content

First, I think this is a great initiative!

One thing that seems to be lacking, and that I think should be added, is an RSS feed for new content.
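As a rough illustration of how such a feed could be generated from the journal's article list, here is a minimal Python sketch using the feedgen package; the article entries and output file name are placeholders, since the real list would come from the table of contents.

from feedgen.feed import FeedGenerator

# Placeholder article list; in practice this would be built from the
# journal's table of contents.
articles = [
    {"title": "Example replication", "url": "https://rescience.github.io/read/#example"},
]

fg = FeedGenerator()
fg.id("https://rescience.github.io/")
fg.title("ReScience")
fg.link(href="https://rescience.github.io/", rel="alternate")
fg.description("New articles published in ReScience")

for article in articles:
    entry = fg.add_entry()
    entry.id(article["url"])
    entry.title(article["title"])
    entry.link(href=article["url"])

# Write the feed next to the site's static pages.
fg.rss_file("feed.xml")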

Single algorithms as ReScience publications

When working on a large project, researchers need algorithmic tools whose development is not the purpose of the project itself. These tools often don't get shared properly, because of their small size and/or the fact that they don't really fit the classic publication scheme (intro-methods-results-discussion). As a result, they're never really re-used, improved on, or corrected. We believe that these computational blocks, when they have a clear methodological application, could have a place in the new ecosystem of publication ReScience is advocating. Additionally, it would really make sense practically, since ReScience is based on GitHub…

Here is an example. @dhimmel, when working on his project (http://thinklab.com/p/rephetio), needed an implementation of the Gerstein-Sonnhammer-Chothia algorithm. Unable to find an existing one, he used the collaborative research platform Thinklab to find a suitable collaborator (me) who developed the tool he needed. After proper testing with example datasets, the tool was incorporated into the workflow of the bigger project.
This tool is a well-defined, documented, tested, re-usable piece of code. It should be shared, tested again, compared, improved on, re-used, adapted to other languages; but there is no proper way to reference it, since it hasn't been published. ReScience could bridge this gap.
I would suggest using this example as a case study for our proposal.

Thank you, Daniel, for the initial proposal! (tweet)

Looking forward to some feedback.

Copyright status of equations

There is a pending question related to the re-use of equations of the original paper in the replication (ReScience/ReScience-submission#33). While fair use most certainly allows re-using some equations of the original paper, we don't know whether an author can re-use (i.e. copy) all the equations in the new paper in order to ease the reading.

@iampritishpatil was kind enough to ask the question on Stack Exchange, but there is no definitive answer yet.

Does anyone have an answer or pointers on this precise matter?

Partial replication of a paper

I'd like to bring up again the question of whether ReScience accepts partial replications of papers. This has been discussed to some extent in #19 and #18, but without a definite answer. I agree with the opinions stated over there that researchers are likely to replicate only the results they are interested in themselves.

As an example, consider the class of papers that introduce a new and improved algorithm for a known and solved problem. Typically, such a paper presents the proposed new algorithm and then compares it to state-of-the-art algorithms. A full replication of such a paper would include not only the implementation of the new algorithm, but also of all the others. Implementing the state-of-the-art algorithms is quite some overhead, most importantly because they are unlikely to be used by the person replicating the paper.

Hence my question: to which extent does the original paper have to be replicated to be accepted?

Practical vs. theoretical reproducibility

I see some open questions regarding which research is reproducible and which is not. Here are some hypothetical edge cases:

  • My code is platform-specific assembler and can be run only on a Blue Gene supercomputer.
  • My calculations are fully reproducible, but take millions of CPU hours to reproduce.
  • My calculations are not deterministic, and I'm not concerned with statistics, but with properties of one particular case (think about measuring/simulating, and studying a single instance of a storm).

In this light, the requirement that all tools must be open source seems questionable to me. As long as one specifies a version or git commit of a proprietary tool, the calculation is in theory perfectly verifiable. I'm not a fan of closed source either, but maybe politics should not play a role here? In the end, if someone uses a tool that is proprietary and no one can verify it, the work just won't get reviewed and accepted. So there is a natural incentive to go open source. But I don't see why a work should be automatically rejected if someone with the same version of the tool can verify it. Thoughts?

Submit the replication of my own research not allowed

I'd like to strongly suggest rethinking that restriction. I really like the idea of this journal, but I fear that this restriction will be a substantial limiting factor for the number of submissions and the usefulness of the journal.

A very obvious reason for this is that time is limited for all of us. Therefore, we usually never reproduce the results of an entire paper, but only the parts that are relevant to us. Would you then accept code that reproduces parts of that paper?

More importantly, I don't think that this restriction really promotes reproducible research. By only allowing reproduced results from people other than the authors, you are basically checking whether the methods part of a paper is sufficient to reproduce the results. While that should be the case, it is unfortunately only true in rare situations. There are various reasons for this:

  • Journals have word limitations.
  • Condensing a complex analysis into a few paragraphs in a methods part is a daunting task and it is very easy to fail.
  • The data has not been published.

The best reproduction of an analysis is its code. This is why code should always be published with a paper. Unfortunately, this is not common practice, and even when the code is published, I haven't seen a single instance in which it was reviewed. I thought that ReScience could fill this gap perfectly, and who would be better suited to supply it than the authors?

Let's say you allowed authors to publish their analysis. This would mean that they would have to provide the data (because you need to be able to run it), and you would have at least one running instance that reproduces the results. It might have mistakes and biases, but at least it is one entire running instance. Others who are interested in reproducing the paper could look at the code and, with the methods part of the paper, figure out what the authors did. If they find an error or a particular bias in the analysis, they could still submit an issue to the code repository. The time required to understand and reproduce the results, however, would be drastically reduced. I feel this would serve reproducibility much more than the hope that people will spend significant time reproducing an entire paper.

Reproducible study, replication of findings: usage in ReScience guidelines

To elaborate further on my long post regarding definitions ...
ReScience/ReScience-article#5 (comment)
... here is a quick review of usage in the ReScience guideline documents in this repo.

README.md

"ReScience is a peer-reviewed journal that targets computational research and encourages the explicit reproduction of already published research promoting new and open-source implementations in order to ensure the original research is reproducible."

The first use of “reproduction” here would have to be swapped to “replication” to be consistent with Claerbout/Donoho/Peng, but the use of “reproducible” at the end is not problematic: by replicating the original study and releasing all the (new) code and data, the research is now reproducible.

"If you ever reproduced [a] computational result from the literature… publish this new implementation.” ⟶ In talking of a new code implementation here, the scenario is likely that the original study did not publish code and data. The text would be consistent with Peng’s usage if it read instead “If you ever replicated a computational result …” (i.e.,replicate the findings using new code to obtain new data).

Note that the slogan of the journal is perfectly consistent with the Claerbout/Donoho/Peng usage: Reproducible Science is Good. Replicated Science is better. Peng (2011) says that “replication is the ultimate standard,” i.e., arriving at the same findings (with new code, new data). Reproducible research, he says, is a minimum standard.

author-guidelines.md

The suggested text for the submission description says:
"I request a review for the replication of the following paper:
References of the paper holding results you're replicating
I believed the original results have been faithfully replicated
as explained in the accompanying article."

Here, the use of “replication" is in line with the idea that an independent group collected new data to produce replication of the findings (results) from the original study. ⟶ consistent with Peng.

reviewer-guidelines.md

"The main criterion for acceptation is the actual replication of the research …” ⟶ implies that the study was replicated, new data was collected arriving at the same findings. Consistent usage. (though “acceptation” is wrong: should be “acceptance”)

"ensure the proposed submission is actually replicable.” ⟶ here, the term should be “reproducible” to be consistent with Claerbout/Donoho/Peng

"ReScience targets replication of original research” ⟶ no problem

(I did not go over the ReScience website, where I know that at least the FAQ has the "swapped" terminology.)

Makefile returns "pandoc-crossref: Error in $: mempty"

Hi, the Makefile for creating a tex file from the md manuscript returns the following errors:
pandoc-crossref: Error in $: mempty
pandoc: Error running filter /home/alexandra/.cabal/bin/pandoc-crossref
Filter returned error status 1
Makefile:10: recipe for target 'rescience-template.tex' failed
make: *** [rescience-template.tex] Error 83
Does anyone know what needs to be done to resolve this?
Thanks!
Edit: I have tried completely removing pandoc and pandoc-crossref and reinstalling them.

New submission thread

Dear @ReScience/reviewers, @ReScience/associate-editors,

Watching the submission repository might be too noisy for most of you. You should automatically receive new submission requests (and new submission requests only) from this thread. You can expect about one notification per month.

Don't answer in this issue or this will send notifications to all reviewers.

ORCID for all

Dear @ReScience/editors, @ReScience/reviewers

We would like to ask authors / reviewers / editors to communicate their ORCID iD. If you're OK with that, just add a thumbs up. Otherwise, please comment below.

You can enter your ORCID iD using this form.

Referencing and visibility of ReScience's articles

There appears to be a problem in the way Google Scholar recognizes the authors and the title of the first article published in ReScience (Topalidou and Rougier 2015): https://scholar.google.fr/scholar?hl=en&q=author%3Azito+author%3Akhamassi&btnG=&as_sdt=1%2C5&as_sdtp=
Google Scholar thinks that Tiziano, Benoît and I are the authors and that "Meropi Topalidou ... Nicolas Rougier ..." is the title of the article.
Apparently, Google Scholar extracted this information from the PDF that was uploaded on ResearchGate.

Do you think we could solve this by adding metadata to the PDF of each article published in ReScience so that Google Scholar parses it correctly?
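One low-effort option along these lines is to embed title and author metadata directly in the PDF's document information. Here is a minimal Python sketch using the pypdf package; the file names and field values are placeholders, and whether PDF metadata alone is enough for Google Scholar to index the article correctly remains an open question.

from pypdf import PdfReader, PdfWriter

reader = PdfReader("article.pdf")
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# Placeholder title and author string for the published article.
writer.add_metadata({
    "/Title": "[Re] Title of the replicated article",
    "/Author": "First Author, Second Author",
})

with open("article-with-metadata.pdf", "wb") as fp:
    writer.write(fp)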

Documentation

Finish the wiki and write all documentation:

  • Home
  • Editorial Board
  • Submit a paper
  • Author guidelines
  • Editor guidelines
  • Reviewer guidelines
  • Open Access License
  • Media & press
  • FAQ

Publication support

I am in the process of handling my first replication as editor (@rougier's ReScience/ReScience-submission#28). Reading the editor guidelines, I am a bit in doubt about the recommended procedure for inclusion into ReScience-Archives.
It says to:

  • Import the authors’ repository into the ReScience archives (https://github.com/ReScience-Archives) using the naming convention “Author(s)-YEAR”
  • Add a new remote (named rescience) in your local copy of the repository that points to the newly imported repository (the one on ReScience-Archives)
  1. Firstly, @rougier's repo is a fork of ReScience/ReScience-submission in which 'master' contains the boilerplate template and the branch 'rougier-2017' contains the replication. Is it really the intention to import the forked author repo, including the not-so-useful 'master' branch (and the seemingly outdated branch 'topalidou-rougier') into the new ReScience-Archives repo?
  2. In Add a new remote etc. I am a bit in doubt which repos are referred to. I assume your local copy of the repository is my local copy of rougier/ReScience-submission? And in this local repo I add a new remote pointing to the newly created ReScience-Archives/Rougier-2017?

Later, the editor guidelines state:

  • Merge the rescience branch into master
  • Push these changes onto the rescience remote repository
  1. Does the rescience branch correspond to the 'rougier-2017' branch in rougier/ReScience-submission? In that case, the merge would correspond to merging the 'rougier-2017' branch into 'master' in rougier/ReScience-submission - sort of overwriting the boilerplate template found there.
  2. Does Push these changes then refer to the resulting content of 'master' in rougier/ReScience-submission? Which should be pushed onto which branch in ReScience-Archives/Rougier-2017? This does not make much sense to me since ReScience-Archives/Rougier-2017 is assumed to already contain rougier/ReScience-submission by the initial import.

The following would make a lot more sense to me instead:

  1. Clone a local copy of rougier/ReScience-submission. The following takes place in that local copy; let us call it 'author-local'.
  2. Checkout the 'rougier-2017' branch.
  3. Fix what needs fixing (re-generate PDF etc.).
  4. Create a new public, empty repository ReScience-Archives/Rougier-2017.
  5. Add ReScience-Archives/Rougier-2017 as a new remote 'rescience' in the 'author-local' repo.
  6. Push the branch 'rougier-2017' to 'rescience/master'.

Surely, I must have missed the point somewhere. Please advise.

Line number in draft

I am currently reviewing a paper, and I think it would be more convenient to have line numbers printed on the submitted paper draft, so that reviewers can refer more easily to particular expressions or sentences in the text.

Request version information

Full version information could be requested in the paper or in the README of the code directory.

Example for Python

import sys
import numpy as np
import scipy

print("Platform:", sys.platform)
print("Python:", sys.version)
print("NumPy:", np.version.version)
print("SciPy:", scipy.version.version)

Output:

Platform: linux
Python: 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160914]
NumPy: 1.12.0b1
SciPy: 0.17.0

Example for C

pierre@pc$ uname -mosv
Linux #1 SMP Debian 3.16.7-ckt9-3 (2015-04-23) x86_64 GNU/Linux
pierre@pc$ gcc --version
gcc (Debian 6.2.0-5) 6.2.0 20160927
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The versions used by the reviewers could be appended.
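To make this easier for authors, the check could be wrapped in a small helper that prints the platform and the versions of the submission's dependencies, so the output can be pasted into the article or the code README. Below is a minimal Python sketch; the default package list is a placeholder and would mirror each submission's actual dependencies.

import importlib
import platform
import sys

def print_versions(packages=("numpy", "scipy", "matplotlib")):
    # Print the platform, interpreter and dependency versions in one go.
    print("Platform:", platform.platform())
    print("Python:", sys.version.replace("\n", " "))
    for name in packages:
        try:
            module = importlib.import_module(name)
            print(f"{name}: {getattr(module, '__version__', 'unknown')}")
        except ImportError:
            print(f"{name}: not installed")

if __name__ == "__main__":
    print_versions()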
