praw-dev / praw

Home Page: http://praw.readthedocs.io/

License: BSD 2-Clause "Simplified" License

Languages: Python 99.98%, Shell 0.02%
Topics: python, reddit, api, oauth, praw, reddit-api

PRAW's Introduction

PRAW: The Python Reddit API Wrapper

[Badges: latest PRAW version, supported Python versions, monthly PyPI downloads, GitHub Actions status, Coveralls coverage, OpenSSF Scorecard, Contributor Covenant, pre-commit, Black code style]

PRAW, an acronym for "Python Reddit API Wrapper", is a Python package that allows for simple access to Reddit's API. PRAW aims to be easy to use and internally follows all of Reddit's API rules. With PRAW there's no need to introduce sleep calls in your code. Give your client an appropriate user agent and you're set.

Installation

PRAW is supported on Python 3.8+. The recommended way to install PRAW is via pip.

pip install praw

To install the latest development version of PRAW run the following instead:

pip install --upgrade https://github.com/praw-dev/praw/archive/master.zip

For instructions on installing Python and pip see "The Hitchhiker's Guide to Python" Installation Guides.

Quickstart

Assuming you already have credentials for a script-type OAuth application, you can instantiate an instance of PRAW like so:

import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    password="PASSWORD",
    user_agent="USERAGENT",
    username="USERNAME",
)

With the reddit instance you can then interact with Reddit:

# Create a submission to r/test
reddit.subreddit("test").submit("Test Submission", url="https://reddit.com")

# Comment on a known submission
submission = reddit.submission(url="https://www.reddit.com/comments/5e1az9")
submission.reply("Super rad!")

# Reply to the first comment of a weekly top thread of a moderated community
submission = next(reddit.subreddit("mod").top(time_filter="week"))
submission.comments[0].reply("An automated reply")

# Output score for the first 256 items on the frontpage
for submission in reddit.front.hot(limit=256):
    print(submission.score)

# Obtain the moderator listing for r/test
for moderator in reddit.subreddit("test").moderator():
    print(moderator)

Please see PRAW's documentation for more examples of what you can do with PRAW.

Discord Bots and Asynchronous Environments

If you plan on using PRAW in an asynchronous environment (e.g., discord.py, asyncio), it is strongly recommended that you use Async PRAW. It is the official asynchronous version of PRAW; its usage is similar to PRAW's, and it has the same features.
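
For reference, here is a minimal sketch of the equivalent Async PRAW setup (this assumes the asyncpraw package is installed; the credential placeholders match the Quickstart above):

import asyncio

import asyncpraw


async def main():
    reddit = asyncpraw.Reddit(
        client_id="CLIENT_ID",
        client_secret="CLIENT_SECRET",
        password="PASSWORD",
        user_agent="USERAGENT",
        username="USERNAME",
    )
    # Unlike PRAW, most Async PRAW methods are coroutines and must be awaited.
    subreddit = await reddit.subreddit("test")
    await subreddit.submit("Test Submission", url="https://reddit.com")
    await reddit.close()  # close the underlying HTTP session


asyncio.run(main())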

PRAW Discussion and Support

If you are new to Python, or would otherwise consider yourself a Python beginner, please consider asking questions on the r/learnpython subreddit. There are wonderful people there who can help with general Python questions and simple PRAW-related ones.

Otherwise, there are a few official places to ask questions about PRAW:

r/redditdev is the best place on Reddit to ask PRAW-related questions. The subreddit is for all Reddit API related discussion, so please tag submissions with [PRAW]. Please search the subreddit first to see whether anyone has asked a similar question.

Real-time chat can be conducted via the PRAW Slack Organization (please create an issue if that invite link has expired).

Please do not directly message any of the contributors via Reddit, email, or Slack unless they have indicated otherwise. We strongly encourage everyone to help others with their questions.

Please file bugs and feature requests as issues on GitHub after first searching to ensure a similar issue was not already filed. If such an issue already exists please give it a thumbs up reaction. Comments to issues containing additional information are certainly welcome.

Note

This project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Documentation

PRAW's documentation is located at https://praw.readthedocs.io/.

History

August 2010: Timothy Mellor created a GitHub project called reddit_api.

March 2011: The Python package reddit was registered and uploaded to PyPI.

December 2011: Bryce Boe took over as maintainer of the reddit package.

June 2012: Bryce renamed the project PRAW and the repository was relocated to the newly created praw-dev organization on GitHub.

February 2016: Bryce began work on PRAW4, a complete rewrite of PRAW.

License

PRAW's source (v4.0.0+) is provided under the Simplified BSD License.

  • Copyright © 2016, Bryce Boe

Earlier versions of PRAW were released under GPLv3.

PRAW's People

Contributors

13steinj, abaisero, bakonydraco, bboe, cam-gerlach, crackedp0t, d0cr3d, damgaard, deimos, diceroll123, eleweek, jamiemagee, jarhill0, julian, kungming2, leviroth, lilspazjoekp, liudvikam, maybenetwork, michael-lazar, nemec, nmtake, pyprohly, pythoncoderas, sprt, tmelz, vallard192, voussoir, watchful1, zhifuge

PRAW's Issues

fetched modhash is between quotes

Hi,

The fetched modhash has " characters around it that should be removed. Example:

import reddit

r = reddit.Reddit(user_agent="my_cool_application")
r.login(user=USERNAME, password=PASSWORD)
modhash = r.modhash
print modhash    # "nb8gzf3..."

Patch in reddit.py at the end of def _fetch_modhash(self):

self.modhash = self.modhash.replace('"', '')

Laszlo

Caching of error messages defeats posting retries

https://github.com/reddit/reddit/issues/382

My bot was attempting to comment on a post and hit the rate limiter. I was rate limited for 9 minutes. 9 minutes later I was rate limited for 53 seconds. 53 seconds later I was rate limited for 202 milliseconds. All of that is as expected, although there is a pull request pending elsewhere regarding eliminating the millisecond instance.

This is where it got weird. My bot waited 1 second and tried again, and was again rate limited for 202 milliseconds. This repeated 30 times, over a period of 30 seconds, before it aborted. I think that this behavior is a bug, and bboe has indicated the bug is because the result was cached undesirably.
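
To illustrate the suspected failure mode, here is a hypothetical sketch of a memoizing request wrapper (this is not PRAW's actual cache code, and is_rate_limit_error is a made-up predicate); the fix is to never cache an error result:

_cache = {}


def is_rate_limit_error(response):
    # Hypothetical predicate: detect reddit's "try again in N ..." reply.
    return 'try again in' in response


def cached_request(url, fetch):
    # Hypothetical sketch, not PRAW's actual cache code.
    if url in _cache:
        return _cache[url]
    response = fetch(url)
    # Caching unconditionally would also store the rate-limit error body,
    # replaying the stale "202 milliseconds" reply on every retry
    # (the 30-second loop described above).
    if not is_rate_limit_error(response):
        _cache[url] = response
    return response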

adding comment stopped working

Hi,

Adding comments went well, then suddenly it stopped working. Here is my test code:

import reddit

r = reddit.Reddit(user_agent="my_cool_application")
r.login(user=USERNAME, password=PASSWORD)
post = r.get_submission_by_id(POST_ID)
post.add_comment('test comment')

It may be some temporary problem, I don't know.

Thanks,

Laszlo

Need new class for Messages

Getting messages from the Inbox object sometimes just returns JSON, because I haven't made an object for Messages yet.

httplib.IncompleteRead

The return response.read() statement in helpers._request occasionally raises httplib.IncompleteRead during extended sessions. This may be a network error on my end, or it could be on reddit's end.

I am testing a patch, and will submit a pull request if successful.
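
In the meantime, retrying the whole request on the client side is a workable mitigation. A rough sketch (Python 2 era, matching the report; the function name is hypothetical):

import httplib
import time


def request_with_retries(make_request, attempts=3):
    # make_request must re-issue the HTTP request and read the body;
    # a response object cannot simply be re-read after IncompleteRead.
    for attempt in range(attempts):
        try:
            return make_request()
        except httplib.IncompleteRead:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # brief backoff before retrying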

BaseReddit.get_content() assumes url_data will be a dict

  File "/usr/local/lib/python2.7/dist-packages/reddit/__init__.py", line 257, in get_content
    url_data['after'] = root[after_field]

This will fail if url_data is a list instead of a dict. urllib.urlencode allows url_data to be a mapping object OR a list of two-element tuples.
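
One possible fix is to normalize url_data before indexing it; a patch-style sketch (context assumed from the traceback above):

# Sketch: accept either a mapping or a list of two-element tuples,
# mirroring what urllib.urlencode allows.
if not isinstance(url_data, dict):
    url_data = dict(url_data)
url_data['after'] = root[after_field]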

return value of "submit" has changed

Hi,

I just noticed that the return value of the submit function changed. Example:

r = reddit.Reddit(user_agent="...")
r.login(user=cfg.USERNAME, password=cfg.PASSWORD)
res = r.submit(subreddit, title, url=url)

Now the return value is just this: {u'errors': []}. The previous return value was much more informative, it also included the URL of the submission on reddit. Do you know what happened?

Laszlo

Get list of subreddits?

I was hoping to find a method get_subreddits() which returned a list of all subreddit names. Is this possible?

Side note: in Examples.md, "r = reddit.Reddit" should be "r = reddit.Reddit()".
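
No such method existed at the time. As a stopgap, the site's own listing can be fetched directly; a hedged sketch (the /reddits.json endpoint is an assumption from the era, in the same style as the /reddits/mine/ URL quoted in a later report, and it returns only one page):

import json
import urllib2

request = urllib2.Request('http://www.reddit.com/reddits.json',
                          headers={'User-Agent': 'my_cool_application'})
listing = json.load(urllib2.urlopen(request))
# Only one page is returned; paging with the listing's 'after' field
# would be needed to approach "all" subreddits.
names = [child['data']['display_name']
         for child in listing['data']['children']]
print(names)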

HTTP 404 for get_submitted() call on deleted user

Here's an example of a post by a deleted user:

sub = r.get_submission(submission_id='rbw6o')
auth = sub.author 

Certain things seem to work okay, like auth.name, but list(auth.get_submitted()) raises a 404 error.

Perhaps this should be handled a little differently, like making auth.get_submitted() return None, or implementing some other way of detecting deleted users?
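
Until the library settles on a behavior, callers can guard the call themselves; a minimal sketch (urllib2.HTTPError is what these errors surface as, per the 429 traceback in a later report):

import urllib2

try:
    submissions = list(auth.get_submitted())
except urllib2.HTTPError as exc:
    if exc.code == 404:
        submissions = []  # treat a deleted account as having no submissions
    else:
        raise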

Tests tests tests

Every feature should have a test which verifies that it works as intended. Likewise, nothing should be added or modified unless a corresponding test is added that demonstrates it works properly.

Copyright and license ambiguity

Code currently is ambiguous as to:
a) who the copyright holder is and
b) what (if any) license or reuse is allowed

This makes usage of this code perilous, and it probably should be fixed (see this comment I made on Reddit: http://www.reddit.com/r/Python/comments/exknd/how_would_i_go_about_using_python_to_submit_links/c1buru3; and no downvotes, it's a real problem :-)

I'd recommend picking a license and adding it to the project. If you are having difficulty picking a license, I'd be glad to offer advice and/or suggestions based upon how you'd like your code to be reused (I'm an old-hat to this stuff).

Comment replies don't show up

The comment.replies list is empty for a comment that I know has replies. Code looks like this:

print r.get_redditor('roger_').get_comments().next().replies

NSFW issues

There are two issues:

  • Non-authenticated users must provide an over18 cookie in order to access NSFW pages
  • I suspect authenticated users must have the over18 field enabled in their settings in order to access NSFW pages

Ideally we will provide a method for obtaining the over18 cookie and either automatically perform this when an over18 exception is raised (if possible) or prompt the user for action.

KeyError: 'data' (MoreComments class, comments method)

File "C:\Python\Python27\lib\site-packages\reddit-1.2.5-py2.7.egg\reddit\objects.py", line 326, in comments
self._comments = response['data']['things']
KeyError: 'data'

I believe this is a result of the previous line:
response = self.reddit_session.request_json(url, params)
returning None or something other than a dict, for whatever reason.

My recommendation is to check for this occurrence and fail silently.
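
A sketch of that recommendation, framed around the two quoted lines from objects.py (the empty-list fallback is an assumption):

response = self.reddit_session.request_json(url, params)
if isinstance(response, dict) and 'data' in response:
    self._comments = response['data']['things']
else:
    self._comments = []  # fail silently, as recommended above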

User karma

I just started playing around with this wrapper today when I noticed something strange.

I attempted to get the user karma, based on the example given in the README:

ketralnis = r.get_redditor("ketralnis")
print ketralnis.link_karma, ketralnis.comment_karma

However I ran into an error when I ran my test program. Apparently link_karma and comment_karma aren't defined in the Redditor class?

EDIT: I did a search on old issues and came across this. I did the same test but a certain number of variables were not returned (has_mail, created, etc.)

Please correct me if I'm wrong, or doing something wrong, I was just curious as to what was happening.

Load comment tree from a reddit

From each specific thread page (e.g. http://www.reddit.com/r/programming/comments/d6r59/hey_rprog_i_made_a_python_wrapper_for_the_reddit/.json) the JSON array has two variables. The first has the info about the original post (submitter, title, etc.), and this appears to be what is returned by get_hot and similar requests. The second JSON variable is a tree of all the comments on that post. Is there a way to access this entire variable, preferably with a single request?
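
In the meantime, both top-level variables can be read from that .json URL in one request; a minimal sketch (Python 2 era; the user agent is a placeholder):

import json
import urllib2

url = ('http://www.reddit.com/r/programming/comments/d6r59/'
       'hey_rprog_i_made_a_python_wrapper_for_the_reddit/.json')
request = urllib2.Request(url, headers={'User-Agent': 'my_cool_application'})
data = json.load(urllib2.urlopen(request))
submission_listing, comment_listing = data  # the two top-level variables
for thing in comment_listing['data']['children']:
    print(thing['data'].get('body'))  # 'more' stubs have no body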

modhash solution

Hi,

In reddit.py, it is written in comments at function _fetch_modhash:

# TODO: find the right modhash url, this is only temporary

I have the following solution:

import json

import reddit

r = reddit.Reddit(user_agent="my_cool_application")
r.login(user=USERNAME, password=PASSWORD)
opener = r._opener

def get_modhash():
    url = 'http://www.reddit.com/api/me.json'
    data = json.load(opener.open(url))
    return data['data']['modhash'] if 'data' in data else False

print get_modhash()

It solves two things: the modhash is not in quotes (see my previous ticket), and it is extracted from JSON, which is more elegant than running a regex over the HTML source.

Laszlo

AttributeError: 'Subreddit' object has no attribute 'encode'

Hi, I was trying to write a script to delete comments for a user:

USER = "foo"
PASS = "bar"
USER_AGENT = "baz"

r = reddit.Reddit(user_agent=USER_AGENT)
r.login(user=USER, password=PASS)

for comment in r.get_redditor(USER).get_comments():
    comment.delete()

I can get the comments ok, but when I try to delete them, I get a stacktrace with:

AttributeError: 'Subreddit' object has no attribute 'encode'

Getting all comments from subreddit

I may be missing something, but I don't see a way of getting all comments for a specific subreddit (http://reddit.com/r/sub/comments). This is useful for much the same things as reddit.get_all_comments; you could pretty much just make that function take an optional subreddit argument (and add a corresponding call to the Subreddit class). In fact, while writing this I did just that, and it seems to be working. I'd create a pull request, but I suck at git, and it's quite trivial anyway.
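
For reference, a sketch of roughly what such a method could look like (all names and the get_content signature are assumptions modeled on code quoted elsewhere in these reports, not the library's actual API):

# Hypothetical addition to the Subreddit class.
def get_comments(self, limit=25):
    url = 'http://www.reddit.com/r/%s/comments.json' % self.display_name
    return self.reddit_session.get_content(url, limit=limit)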

Issues with Python 2.7.1?

Getting some strange behavior on Mac OS X Lion. Perhaps something went awry during install? (I had to sudo)

>>> r = reddit.Reddit(user_agent='my_cool_application')
>>> submissions = r.get_subreddit('opensource').get_hot(limit=5)
>>> list(submissions)
[<reddit.objects.Submission object at 0x1070ea0d0>, <reddit.objects.Submission object at 0x1070ea310>, <reddit.objects.Submission object at 0x1070eaa50>, <reddit.objects.Submission object at 0x1070eab90>, <reddit.objects.Submission object at 0x1070eab10>]
>>> submission = submissions.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>> 

Comments are Dictionaries?

This (or thereabouts) code:

import reddit

r = reddit.Reddit()
sr = r.get_subreddit("subreddit")
n = sr.get_hot(limit=100)
for p in n:
    for c in p.comments:
        c.body

will sometimes throw an error because c is a dict. Why is that?

Message.replies is dictionary instead of list of Message objects

In [23]: r.user.get_modmail().next().replies
Out[23]: 
{u'data': {u'after': None,
  u'before': None,
  u'children': [<reddit.objects.Message at 0x5055290>,
   <reddit.objects.Message at 0x50558b0>],
  u'modhash': u'xxxxxxxxxxxxxx'},
 u'kind': u'Listing'}

Not a huge problem, but this is inconsistent with Comment.replies.

MoreComments: Child comment retrieved before its parent

Reference URL

The above submission results in the following:

File "reddit/objects.py", line 486, in _fetch_morecomments
    tmp = self._comments_by_id[comment.parent_id].replies
KeyError: u't1_c3pv5gy'

Upon initial inspection, it is in fact the case that we fetch the child before its parent, though we do eventually fetch its parent. However, this submission also results in fetching the same comment more than once which is unexpected. Additional investigation is needed to learn why this occurs.

The ideal solution will guarantee we process the parent prior to its children so that we do not need to maintain an orphaned list.

Does not get mail from inbox

I'm 100% positive I have mail in my inbox. The library reports that my inbox is empty.
Here's what I'm doing:

import reddit

r = reddit.Reddit(user_agent='whatever')
r.login(user='myusername', password='password')
r.get_inbox()

Ability to report a submission

Hello!

There is one particular feature that seems to be missing from the API: the ability to report a submission.

To report a submission, one just needs to make a request to "api/report/". See here for more information.

Of course, if I just missed it and that it's indeed already possible to report a submission, you can completely ignore this issue! But I looked around in the code and couldn't find it, so I don't think it's available...

Thanks in advance!

EDIT: Just noticed how it's possible to attach your own code. Please disregard this particular issue. Sorry about that!

Cuts off post title

When I get a post title, if it is too long it gets cut off. Is there any way around this?

Arch Linux (feature request)

Hi,

I just packaged reddit_api for Arch Linux. It's available as "python2-reddit-git". It would be great if a Python 3 version of reddit_api was also available. Python 3 is default on Arch. If tarballs were released, there could even be python2-reddit and python-reddit packages, which, if popular, could become official Arch Linux packages in the future.

Thanks.

And also thanks for the nice examples in the documentation (the .md files), it made it easy to test and verify that the API can actually do something interesting. :)

Best regards,
Alexander Rødseth

Missing 'permalink' attribute in comments

I'm getting this traceback:

print comment.permalink
 File "C:\Program Files (x86)\Python\2.7\lib\site-packages\reddit-1.2.4-py2.7.egg\reddit\objects.py", line 58, in __getattr__
attr))

 AttributeError: <class 'reddit.objects.Comment'> has no attribute 'permalink'

(The comment ID in this case is: c30rjqb)

Not sure why the comment would have no permalink value. This also causes reply() to fail.

RateLimitExceeded.sleep_time asserts on singular "second" and any "milliseconds"

sleep_time doesn't handle "1 second" or "123 milliseconds" correctly; it probably also fails on "1 minute".

My attempt at a solution is as follows.

modified and new regular expressions:

MINUTES_RE = re.compile(r'(\d+) minutes?')
SECONDS_RE = re.compile(r'(\d+) seconds?')
MILLISECONDS_RE = re.compile(r'(\d+) milliseconds?')

new match condition for the new regex:

    match = self.MILLISECONDS_RE.search(self.message)
    if match:
        return 1
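
Put together, the whole method might look like the following sketch (the class framing and message handling are assumptions, not the library's actual code):

import re


class RateLimitExceeded(Exception):
    # Regular expressions from the proposal above.
    MINUTES_RE = re.compile(r'(\d+) minutes?')
    SECONDS_RE = re.compile(r'(\d+) seconds?')
    MILLISECONDS_RE = re.compile(r'(\d+) milliseconds?')

    def __init__(self, message):
        Exception.__init__(self, message)
        self.message = message

    @property
    def sleep_time(self):
        # Check milliseconds first so " seconds" never shadows it.
        if self.MILLISECONDS_RE.search(self.message):
            return 1  # round sub-second limits up to a full second
        match = self.MINUTES_RE.search(self.message)
        if match:
            return int(match.group(1)) * 60
        match = self.SECONDS_RE.search(self.message)
        if match:
            return int(match.group(1))
        raise AssertionError('unrecognized rate-limit message: %r'
                             % self.message)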

Subreddit Object does not check that it exists on creation.

If you call get_subreddit on a nonexistent subreddit, it creates the object regardless of whether or not the subreddit exists. To check whether a subreddit exists, you have to call an equivalent of r.get_subreddit("_df").get_hot(limit=1).next().

I can go ahead and stick that sanity check in there on creation, but first I'd like to confirm whether the lazy-fail behavior is a design decision or an oversight.

Submission.author occasionally returns unicode string instead of Redditor?

I'm occasionally getting this exception:

print post.author.name.encode('utf-8')
AttributeError: 'unicode' object has no attribute 'name'

I don't know the author or post right now since it happens after running for a long time, but I suspect the post or author may be deleted. Perhaps in this case it should raise an exception?
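
A defensive workaround in the meantime, assuming Python 2 as in the report (this sidesteps rather than fixes the underlying issue):

author = post.author
if isinstance(author, basestring):
    # The API sometimes hands back the raw name (possibly for deleted
    # accounts) instead of a Redditor object; skip the attribute access.
    print(u'[unresolved author: %s]' % author)
else:
    print(author.name.encode('utf-8'))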

HTTP Error 429: Unknown when fetching submissions

Hi,

I am the developer of http://rvytpl.appspot.com/. First of all, thank you for providing the Python library for the Reddit API. It's very convenient and easy to use.

I'm encountering a problem relatively frequently in this code:

import logging
import time

import reddit

reddit_api = reddit.Reddit(user_agent=USER_AGENT)
subreddit = reddit_api.get_subreddit('videos').get_top(limit=100)
new_entries = { }
for rank in range(100):
    if rank and (rank % 25 == 0):
        logging.info('Fetched %d entries, sleeping' % rank)
        time.sleep(60)
    entry = subreddit.next()
    new_entries[entry.id] = (rank, entry)

I'm trying to fetch the top 100 entries of the videos subreddit. I get this exception:

HTTP Error 429: Unknown
Traceback (most recent call last):
  File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__
    handler.post(*groups)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/admin.py", line 62, in post
    entry = subreddit.next()
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/__init__.py", line 231, in get_content
    page_data = self.request_json(page_url, url_data=url_data)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/decorators.py", line 110, in error_checked_func
    return_value = func(self, *args, **kwargs)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/__init__.py", line 265, in request_json
    response = self._request(page_url, params, url_data)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/__init__.py", line 165, in _request
    url_data)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/util.py", line 44, in __call__
    return self._cache.setdefault(key, self.func(*args, **kwargs))
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/decorators.py", line 100, in __call__
    return self.func(*args, **kwargs)
  File "/base/data/home/apps/s~rvytpl/1.356858274957999053/reddit/helpers.py", line 97, in _request
    response = reddit_session._opener.open(request)
  File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 387, in open
    response = meth(req, response)
  File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 498, in http_response
    'http', request, response, code, msg, hdrs)
  File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 425, in error
    return self._call_chain(*args)
  File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain
    result = func(*args)
  File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 506, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 429: Unknown

This only happens after 25, 50 and 75 entries are fetched. I've tried increasing the sleep time, to no avail. What could be going wrong?

Cheers,
Michael

LoggedInRedditor.my_reddits() error

When using r.user.my_reddits(), I get the error

Traceback (most recent call last):
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1518, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1506, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1504, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1264, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1262, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/john/.virtualenvs/rdoqdoq/lib/python2.7/site-packages/flask/app.py", line 1248, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/john/Projects/rdoqdoq/app.py", line 110, in login
    my_reddits.append(subreddits.next())
  File "/home/john/.virtualenvs/rdoqdoq/src/reddit/reddit/__init__.py", line 221, in get_content
    root = page_data[root_field]
TypeError: string indices must be integers

Lines 215-220 in __init__.py:

while fetch_all or content_found < limit:
    page_data = self.request_json(page_url, url_data=url_data)
    if root_field:
        root = page_data[root_field]
    else:
        root = page_data

Here's what I got when printing variables in the above snippet:

page_url => 'http://www.reddit.com/reddits/mine/'
url_data => {}
page_data => {}
root_field => 'data'

So... page_data is an empty dict but being indexed by root_field?

I can't figure out why page_data = self.request_json(page_url, url_data=url_data) is returning an empty dict.

I hope this makes sense.
