data4democracy / collect-social
Simply collect social media content
License: MIT License
Open call to anyone who would like to contribute more unit tests.
Bonus points if you want to get us set up to use [coverage.py](https://coverage.readthedocs.io/en/coverage-4.3.4/) and document the process for others.
Follow the setup instructions in the readme.
Submit a pull request if you have any suggested improvements or leave tips on anything you found confusing.
Need help with your first PR? Check out github-playground or reach out in #github-help
Come up with a standard list of tweet fields to save (if available). This is meant to be the "default" config list. The plan is to allow saving either the full tweet object or a targeted list of fields, but we want to identify the most relevant fields that would be useful in nearly every project.
So far people have mentioned coordinates and entities (which contains hashtags, user_mentions, and urls) as potentially useful fields that are not currently captured by discursive.
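To make the "default config list" idea concrete, here is a minimal sketch of what the targeted-field extraction could look like. `DEFAULT_FIELDS` and `extract_fields` are illustrative names, not part of the project; the field names themselves come from the raw tweet JSON.

```python
# Illustrative sketch of a "default" field config; extract_fields and
# DEFAULT_FIELDS are assumptions, not the project's actual API.
DEFAULT_FIELDS = ['id_str', 'text', 'created_at', 'coordinates', 'entities']

def extract_fields(tweet, fields=DEFAULT_FIELDS):
    """Keep only the configured subset of a raw tweet dict."""
    return {field: tweet.get(field) for field in fields}

tweet = {
    'id_str': '815726',
    'text': 'hello world',
    'created_at': 'Wed Feb 01 00:00:00 +0000 2017',
    'coordinates': None,
    'entities': {'hashtags': [], 'user_mentions': [], 'urls': []},
    'retweeted_status': {},  # not in the default list, so dropped below
}
slim = extract_fields(tweet)
```

Swapping `fields` for the full key set of the tweet would give the "save everything" behavior from the same code path.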
Placeholder for feature request: Optional setting to store full tweet object to S3.
I think it would make sense to dump the whole tweet to S3 and keep the fields of interest in the storage of choice (DB, Elasticsearch, etc.). Then if we realize we need more fields, we can just modify the parser and rerun it on the backed-up data (rather than having to deal with API limits, etc.).
Contingent on issue #2
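A minimal sketch of the S3 dump described above, assuming a boto3 S3 client; the `tweets/<id>.json` key scheme and the function name are made up for illustration.

```python
import json

def archive_tweet(s3_client, bucket, tweet):
    """Back up the full raw tweet to S3, keyed by tweet id.

    `s3_client` is assumed to be a boto3 S3 client (or anything exposing
    put_object); the bucket/key layout here is an assumption.
    """
    key = 'tweets/{}.json'.format(tweet['id_str'])
    s3_client.put_object(Bucket=bucket, Key=key, Body=json.dumps(tweet))
    return key
```

A later reparse would read these objects back instead of re-hitting the rate-limited API.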
Add 4chan source
poll_user
We currently have a standalone proof of concept for pulling down reddit data, which @wwymak graciously put together for us here. I propose we use that as a reference for creating reddit support within collect-social using the same model: get_posts, get_comments, etc.
Update Feb 1, 2017: Issue is unclaimed.
Next steps: create a reddit-integration branch and push the basic folder structure. Push all subsequent changes to this branch.

@kylerbrown has got us up and running with pytest and written the first few unit tests. We need documentation & links to external resources to show newcomers how to contribute tests along with their new code.
Save tweets and user profiles from Twitter to a SQLite db, similar to how the code below saves Facebook posts/users. Create a new folder in collect-social/twitter for the new code.

```python
from collect_social.facebook import get_posts

app_id = '<YOUR APP ID>'
app_secret = '<YOUR APP SECRET>'
connection_string = 'sqlite:///full-path-to-an-existing-database-file.sqlite'
page_ids = ['<A PAGE ID>']

get_posts.run(app_id, app_secret, connection_string, page_ids)
```

`facebook.get_posts` becomes `twitter.search_tweets` (search) or `twitter.stream_tweets` (streaming). Reuse `utils/setup_db` for the database setup.
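A minimal sketch of the SQLite side of the port. The table name, columns, and `save_tweets` function are assumptions for illustration; the real module would share the SQLAlchemy setup in `utils/setup_db` via `connection_string` rather than using raw sqlite3.

```python
import sqlite3

def save_tweets(db_path, tweets):
    """Persist a minimal set of tweet fields to SQLite.

    Schema and function name are illustrative only, not project code.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS tweet '
        '(id_str TEXT PRIMARY KEY, text TEXT, user_id_str TEXT)'
    )
    # INSERT OR IGNORE makes repeated collection runs idempotent per tweet id.
    conn.executemany(
        'INSERT OR IGNORE INTO tweet VALUES (?, ?, ?)',
        [(t['id_str'], t['text'], t['user']['id_str']) for t in tweets],
    )
    conn.commit()
    return conn
```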
The need for a "collection ID" has come up several times in discussions on slack.
This element would be used to track the initiative a piece of media was collected under. E.g., if we started collecting tweets from the Women's March, all those tweets would be tagged under that project/process, so we would know how/why a tweet was collected when analyzing a large population later. A pain point for analysts working with large data sets has been seeing various pieces of media but not knowing exactly why/how they were added to the data set. If we plan on streaming multiple sources to a single data store or archiving them together, it will be especially important to document and track this.
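One way the collection ID could work, sketched with an illustrative sqlite schema (table and function names are made up; nothing here is the project's actual storage layer):

```python
import sqlite3
import uuid

def start_collection(conn, name):
    """Register a collection run (e.g. a Women's March pull) and return its id.

    Schema is illustrative only.
    """
    conn.execute('CREATE TABLE IF NOT EXISTS collection (id TEXT PRIMARY KEY, name TEXT)')
    collection_id = str(uuid.uuid4())
    conn.execute('INSERT INTO collection VALUES (?, ?)', (collection_id, name))
    return collection_id

def save_media(conn, collection_id, media_id):
    """Every stored item carries the id of the collection it was gathered under."""
    conn.execute('CREATE TABLE IF NOT EXISTS media (id TEXT, collection_id TEXT)')
    conn.execute('INSERT INTO media VALUES (?, ?)', (media_id, collection_id))
```

An analyst can then join media back to collection to recover how and why any item entered the data set.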
The Twitter REST API has some pretty strict rate limits for requests, which is making testing very difficult for us! We need a way to work around these limits in tests.
One suggestion was to use vcr, which has also been used for testing tweepy.
Looking for someone to start helping with unit tests. If you would like to work on this and feel strongly about a testing framework, feel free to propose one; I would suggest pytest.
We can start small with just 2 or 3 basic tests and an update to the readme so future contributors can write their own tests.
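To show the shape a first pytest file could take, here is a toy example. The function under test is a stand-in written for this sketch, not actual collect-social code; pytest discovers the `test_*` functions automatically.

```python
# test_entities.py -- a minimal pytest-style example; extract_hashtags is a
# toy stand-in, not project code.

def extract_hashtags(tweet):
    """Pull hashtag texts out of a raw tweet dict, tolerating missing keys."""
    return [h['text'] for h in tweet.get('entities', {}).get('hashtags', [])]

def test_extract_hashtags():
    tweet = {'entities': {'hashtags': [{'text': 'womensmarch'}]}}
    assert extract_hashtags(tweet) == ['womensmarch']

def test_extract_hashtags_missing_entities():
    assert extract_hashtags({}) == []
```

Running `pytest` in the repo root would pick this file up; two or three tests in this style per module is a realistic starting point.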
Disqus is used on quite a few websites for commenting. We can query for items such as user accounts, posts, forums, etc. via their API. More details can be found at https://disqus.com/api/docs.
Official Python lib: https://github.com/disqus/disqus-python
They have an API, FTW. There'll undoubtedly be questions about how to structure the collection, but it probably shares 4chan/Reddit/forum characteristics, so let's all help out!
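As a starting point, here is a sketch of how a Disqus post pull could be parameterized. The endpoint and parameter names are taken from the Disqus API docs linked above, but should be verified there; the helper itself is hypothetical and kept free of HTTP so it stands alone.

```python
API_BASE = 'https://disqus.com/api/3.0'

def build_list_posts_request(api_key, forum, limit=25):
    """Build the URL and query params for the forums/listPosts endpoint.

    Endpoint path and parameter names follow the Disqus API docs linked
    above; double-check them there before relying on this sketch.
    """
    url = API_BASE + '/forums/listPosts.json'
    params = {'api_key': api_key, 'forum': forum, 'limit': limit}
    return url, params

# With requests, the call would look roughly like:
#   url, params = build_list_posts_request(key, 'someforum')
#   posts = requests.get(url, params=params).json()['response']
```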
poll_user
Uses the Twitter streaming API (see the tweepy streaming docs): `stream_tweets.py`

This function will be used to poll users over time. Users that have been identified as important nodes of a target community need to be polled periodically to monitor their activity.

```python
{
    'active': True,  # hypothetical placeholder value
    'bio': user.description,  # tweepy exposes the bio as `description`
    'location': user.location,
    'followers': user.followers_count,
    'following': user.friends_count,
    'image': user.profile_image_url
}
```
v0.1
This is a port of the getUserInfobyHandle functionality.