
collect-social's Introduction

Collect Social: Simply collect public social media content

Inactive

New development has moved to smtk

Maintainers: Maintainers have write access to the repository. They are responsible for reviewing pull requests, providing feedback and ensuring consistency.

Purpose:

Collecting social media data for analysis can be kind of a nuisance. This project aims to make the collection process as simple as possible by making common-sense assumptions about what most researchers need and how they like to work with their data. Collect-social sits on top of other Python libraries such as facepy (Facebook) and tweepy (Twitter). Our purpose is to take care of low-level details and provide a clean API for working across multiple platforms.

Philosophy:

Our goal is to make it as easy as possible for researchers to get up and running with new collections. We prioritize ease of use over maximum features, and at every decision point we carefully consider how a new feature will impact simplicity. A user should be able to use collect-social without prior knowledge of the underlying libraries and APIs. Based on our experience with those APIs, we try to make decisions that work well in the average case; if you want maximum control over your collection process, consider using the underlying libraries directly.

Roadmap:

  • Our current to-do list can be found here
  • Command line interface
  • Eventador integration.
  • Refactor the Facebook API to match Twitter's.
  • Save data to sqlite.
  • Flat file exports (JSON).
  • Option to upload file output to designated s3 bucket.

First class platforms

  • Facebook
  • Twitter

Platforms we will potentially support in future

These are platforms we will consider supporting in the future. They will not be built out until we are happy with our implementation of Facebook/Twitter support. If you are familiar with any of these platforms and would like to put together a proof of concept in a separate repository, we welcome your input.

  • Reddit
  • Disqus
  • voat.co
  • 4chan/8chan etc.

Getting Started

If you are looking to get started contributing please see our contributor guide. TODO - write guide

Installation:

Collect-social is built to run on Python 3.6.

git clone https://github.com/Data4Democracy/collect-social.git

Then install as a package using pip. This will allow you to import collect_social from any Python script.

cd collect-social
pip install .

Or you can use setuptools directly with

python setup.py install

Usage

TODO: Explain settings file

Facebook

If you haven't already, make sure to create a Facebook app with your Facebook developer account. This will give you an app id and app secret that you'll use to query Facebook's graph API.

Note that you'll only be able to retrieve content from public pages that allow API access.

Retrieving posts

Caution work in progress. This will change as part of our ongoing refactor. We do not suggest anyone use this for now.

You can retrieve posts using Facebook page ids. Note that the page id isn't the same as the page name in the URL. For example, Justin Bieber's page name is JustinBieber, but the page id is 67253243887. You can find a page's id by viewing the page's source HTML and doing a ctrl+f (find in page) for pageid. Here's a longer explanation.

from collect_social.facebook import get_posts

app_id = '<YOUR APP ID>'
app_secret = '<YOUR APP SECRET>'
connection_string = 'sqlite:////full/path/to/a/database-file.sqlite'
page_ids = ['<page id 1>','<page id 2>']

get_posts.run(app_id,app_secret,connection_string,page_ids)

This will run until it has collected all of the posts from each of the pages in your page_ids list. It will create post, page, and user tables in the sqlite database at the file passed in connection_string. The database will be created if it does not already exist. Note: the app_id, app_secret, and elements of the page_ids list are all strings and should be quoted (' ' or " ").

If you like, you can quickly check that the program succeeded by viewing the first 10 posts:

sqlite3 database-file.sqlite "SELECT message FROM post LIMIT 10"

Retrieving comments

Caution work in progress. This will change as part of our ongoing refactor. We do not suggest anyone use this for now.

This will retrieve all the comments (including threaded replies) for a list of posts. You can optionally provide a max_comments value, which is helpful if you're grabbing comments from the Facebook page of a public figure, where posts often get tens of thousands of comments.

from collect_social.facebook import get_comments

app_id = '<YOUR APP ID>'
app_secret = '<YOUR APP SECRET>'
connection_string = 'sqlite:////full/path/to/a/database-file.sqlite'
post_ids = ['<post id 1>','<post id 2>']

get_comments.run(app_id,app_secret,connection_string,post_ids,max_comments=5000)

This will create post, comment, and user tables in the sqlite database at the file passed in connection_string (creating the database if it does not already exist), assuming those tables don't already exist.

Retrieving reactions

Caution work in progress. This will change as part of our ongoing refactor. We do not suggest anyone use this for now.

Reactions are "likes" and all the other happy/sad/angry/whatever responses that you can add to a Facebook post without actually typing a comment. The reaction author_id and reaction_type are saved to an interaction table in your sqlite database.

from collect_social.facebook import get_reactions

app_id = '<YOUR APP ID>'
app_secret = '<YOUR APP SECRET>'
connection_string = 'sqlite:////full/path/to/a/database-file.sqlite'
post_ids = ['<post id 1>','<post id 2>']

get_reactions.run(app_id,app_secret,connection_string,post_ids,max_comments=5000)

Twitter

If you haven't already, make sure to create a Twitter app with your Twitter account. This will give you an access token, access token secret, consumer key, and consumer secret that will be required to query the Twitter API.

API

This assumes you have a list of Twitter accounts you'd like to use as seeds. This will build a network of those seeds and the accounts the seeds follow, and collect all tweets from 1/1/2016 to now for each of those accounts.

from collect_social.twitter.utils import setup_db, setup_seeds
from collect_social.twitter.get_profiles import run as run_profiles
from collect_social.twitter.get_friends import run as run_friends
from collect_social.twitter.get_tweets import run as run_tweets

# These you generate on developers.twitter.com
consumer_key = 'YOUR KEY'
consumer_secret = 'YOUR SECRET'
access_key = 'YOUR ACCESS KEY'
access_secret = 'YOUR ACCESS SECRET'

# Path to your sqlite file
connection_string = 'sqlite:///db.sqlite'

args = [consumer_key, consumer_secret, access_key, access_secret, connection_string]

# Assuming your seed accounts are in a file called `seeds.txt`. Put each screen name
# on its own line in the file
seeds = [l.strip() for l in open('seeds.txt').readlines()]

db = setup_db(connection_string)
setup_seeds(db, consumer_key, consumer_secret, access_key, access_secret, screen_names=seeds)

# get user profiles
run_profiles(*args)

# get everyone they follow
run_friends(*args)

# get profiles for newly added users
run_profiles(*args)

# get everyone's last 3200 tweets
run_tweets(*args)

Using the data

You now have a sqlite database with user, tweet, mention, url, hashtag, and connection tables, all of which can be pulled into a pandas.DataFrame.

import pandas as pd
import sqlite3

con = sqlite3.connect("db.sqlite")
df_tweet = pd.read_sql_query("SELECT * from tweet", con)
df_user = pd.read_sql_query("SELECT * from user", con)

collect-social's People

Contributors

abglassman, bkey, bstarling, ccarey, kylerbrown, metame, ottoman91


collect-social's Issues

Twitter: Get user followers

Requirements:

  • Function should be part of poll_user
  • Accepts a username, returns a list of people following this account
  • Include unit test
  • Submit PR to branch v0.1

Add basic twitter functionality

Goal:

Save tweets/user profiles from Twitter to a sqlite db, similar to how the code below saves Facebook posts/users. Create a new folder, collect-social/twitter, for the new code.

from collect_social.facebook import get_posts

app_id = '<YOUR APP ID>'
app_secret = '<YOUR APP SECRET>'
connection_string = 'sqlite:///full-path-to-an-existing-database-file.sqlite'
page_ids = ['<page id 1>','<page id 2>']

get_posts.run(app_id,app_secret,connection_string,page_ids)

Keep in mind:

  • Try to keep the API consistent across platforms, e.g. facebook.get_posts becomes twitter.search_tweets (search) or twitter.stream_tweets (streaming)
  • Use sane defaults so it works with minimal setup/config.
  • Consider whether functionality that can be shared across both platforms should be separated from the platform-specific code, e.g. utils/setup_db
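As a sketch of that separation, a shared `setup_db` helper might look like the following. This is a hypothetical illustration using the stdlib `sqlite3` module and a plain file path; it is not the project's actual `utils/setup_db`, and the table names and columns are illustrative only.

```python
import sqlite3

def setup_db(db_path):
    """Open (or create) a sqlite database and ensure shared tables exist.

    Hypothetical sketch: table names and columns are illustrative only.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS user (id TEXT PRIMARY KEY, name TEXT)")
    con.execute(
        "CREATE TABLE IF NOT EXISTS post "
        "(id TEXT PRIMARY KEY, user_id TEXT, message TEXT)")
    con.commit()
    return con

# Both the facebook and twitter collectors could then share this setup:
con = setup_db(":memory:")
tables = sorted(r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"))
# tables == ['post', 'user']
```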

Resources:

Twitter: Review API, capture most important fields

Goal:

Come up with a standard list of tweet fields to save (if available). This is meant to be the "default" config list. The plan is to allow saving of either the full tweet object or a targeted list of fields, but we want to identify the most relevant fields that would be useful in nearly every project.

So far people have mentioned coordinates and entities (contains hashtags, user_mentions, urls) as potentially useful fields that are not currently captured by discursive.
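To make this concrete, here is a hypothetical sketch of extracting a default field subset (including coordinates and flattened entities) from a full tweet object, represented as the dict the JSON API returns. The field names follow Twitter's API; the `DEFAULT_FIELDS` list itself is only an assumption, not a decided config.

```python
# Hypothetical default field list -- the actual list is what this issue
# is meant to decide.
DEFAULT_FIELDS = ["id_str", "created_at", "text", "coordinates"]

def extract_default_fields(tweet):
    """Keep the default top-level fields and flatten entities."""
    row = {field: tweet.get(field) for field in DEFAULT_FIELDS}
    entities = tweet.get("entities", {})
    row["hashtags"] = [h["text"] for h in entities.get("hashtags", [])]
    row["user_mentions"] = [m["screen_name"]
                            for m in entities.get("user_mentions", [])]
    row["urls"] = [u["expanded_url"] for u in entities.get("urls", [])]
    return row

tweet = {
    "id_str": "123",
    "text": "hello #data4democracy",
    "entities": {"hashtags": [{"text": "data4democracy"}]},
}
row = extract_default_fields(tweet)
# row["hashtags"] == ["data4democracy"]; missing fields come back as None
```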

Additional info:

Write unit tests

Looking for someone to start helping with unit tests. If you would like to work on this and feel strongly about a testing framework feel free to propose one. I would suggest pytest.

We can start small with just 2 or 3 basic tests and an update to the readme so future contributors can write their own tests.

Documentation: How to add unit tests

@kylerbrown has got us up and running with pytest and written the first few unit tests. We need documentation & links to external resources to show newcomers how to contribute tests along with their new code.

Twitter: Stream tweets by topic

Uses the Twitter stream API; see the tweepy streaming docs.

Requirements:

  • part of stream_tweets.py
  • Set up a stream listener to follow a list of topics
  • Should keep processing through errors on a single tweet.
  • Returns the FULL tweet object as tweets come in. Storage of the actual tweet will be handled by a backend service and is out of scope for this piece; simply returning the tweet object is sufficient
  • Submit PR to branch v0.1
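The error-tolerance requirement above can be sketched as a library-agnostic processing loop. In the real stream_tweets.py this logic would live inside a tweepy stream listener; the handler and stub tweets below are hypothetical stand-ins.

```python
def process_stream(tweet_iter, handle_tweet):
    """Pass each incoming tweet object to handle_tweet, skipping per-tweet
    errors so one bad tweet never stops the stream."""
    processed, errors = 0, 0
    for tweet in tweet_iter:
        try:
            handle_tweet(tweet)
            processed += 1
        except Exception:
            errors += 1  # log and keep going
    return processed, errors

# Stub handler standing in for the backend service (out of scope here):
collected = []

def handler(tweet):
    if tweet is None:
        raise ValueError("malformed tweet")
    collected.append(tweet)

stats = process_stream([{"id": 1}, None, {"id": 2}], handler)
# stats == (2, 1); collected holds the two good tweets
```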

Twitter: User following

Requirements:

  • Function should be part of poll_user
  • Accepts a username, returns a list of accounts the user follows
  • Include unit test
  • Submit PR to branch v0.1

Twitter: Poll user

Background

This function will be used to poll users over time. Users that have been identified as important nodes of a target community need to be polled periodically to monitor their activity.

Requirements

  • Function accepts a list of user handles
  • Function should return a dictionary of important user attributes.
'active':
'bio':
'location': user.location,
'followers': user.followers_count,
'following': user.friends_count,
'image': user.profile_image_url
  • PR should include unit tests (pytest)
  • PR should be merged with branch v0.1

This is a port of the getUserInfobyHandle functionality
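A minimal sketch of what the polling function could look like, assuming tweepy-style User objects with the attributes named above; the stub user below is hypothetical, and 'active' is omitted because its definition is not specified in the issue.

```python
from types import SimpleNamespace

def poll_users(users):
    """Return a dict of important attributes keyed by screen name.

    Sketch only: `users` stands in for tweepy User objects.
    """
    results = {}
    for user in users:
        results[user.screen_name] = {
            "bio": user.description,
            "location": user.location,
            "followers": user.followers_count,
            "following": user.friends_count,
            "image": user.profile_image_url,
        }
    return results

# Stub user object for illustration:
user = SimpleNamespace(screen_name="d4d", description="community bio",
                       location="US", followers_count=10, friends_count=5,
                       profile_image_url="img.png")
info = poll_users([user])
# info["d4d"]["followers"] == 10
```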

Placeholder store full tweet in s3

NOT READY FOR DEV

Placeholder for feature request: Optional setting to store full tweet object to S3.

I think it would make sense to dump the whole tweet to s3 and keep the fields of interest in the storage of choice (DB, Elasticsearch, etc.). Then if we realize we need more fields, we can just modify the parser and rerun it on the backed-up data (rather than deal with API limits, etc.)

Contingent on issue #2
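The archive-then-reparse idea can be sketched with a local JSON-lines buffer standing in for the S3 object; all names below are hypothetical, and the actual S3 upload is out of scope.

```python
import io
import json

def archive_and_extract(tweets, archive, fields=("id_str", "text")):
    """Write each full tweet object to `archive` (a stand-in for S3) and
    return only the fields of interest for the primary store."""
    rows = []
    for tweet in tweets:
        archive.write(json.dumps(tweet) + "\n")
        rows.append({f: tweet.get(f) for f in fields})
    return rows

def reparse(archive_text, fields):
    """Re-run extraction over archived raw data with a new field list,
    without hitting the API (or its rate limits) again."""
    return [{f: json.loads(line).get(f) for f in fields}
            for line in archive_text.splitlines()]

archive = io.StringIO()
rows = archive_and_extract([{"id_str": "1", "text": "hi", "lang": "en"}], archive)
# Later we decide we also need "lang" -- just reparse the archive:
more = reparse(archive.getvalue(), fields=("id_str", "lang"))
# more == [{"id_str": "1", "lang": "en"}]
```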

Add reddit functionality

We currently have a stand-alone proof of concept for pulling down reddit data, which @wwymak graciously put together for us here. I propose we use that as a reference for creating reddit support within collect-social using the same model: get_posts, get_comments, etc.

Update Feb 1, 2017: issue is unclaimed

Next steps:

  • Contact @bstarling or post below if you would like to work on this
  • Create a branch reddit-integration and push basic folder structure. Push all subsequent changes to this branch.

Collect from voat.co

They have an API, ftw. There will undoubtedly be questions about how to structure the collection, but it probably shares characteristics with 4chan/Reddit/forums, so let's all help out!

Add Collection/Source ID

The need for a "collection ID" has come up several times in discussions on slack.

This element would be used to track which initiative a piece of media was collected under. E.g., if we started collecting tweets from the Women's March, all those tweets would be tagged under that project/process, so we would know how and why a given tweet was collected when analyzing a large population later. A pain point for analysts working with large data sets has been seeing various pieces of media but not knowing exactly why or how they were added to the data set. If we plan on streaming multiple sources to a single data store, or archiving them together, it will be especially important to document and track this.

Thanks @ccarey and @superace for bringing this up.
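One way this could look in the sqlite schema is an extra collection_id column on every saved record; a hypothetical sketch (table layout and collection names are illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tweet (
    id TEXT PRIMARY KEY,
    text TEXT,
    collection_id TEXT  -- which initiative/process collected this record
)""")

def save_tweet(con, tweet, collection_id):
    con.execute("INSERT INTO tweet VALUES (?, ?, ?)",
                (tweet["id"], tweet["text"], collection_id))

save_tweet(con, {"id": "1", "text": "see you at the march"}, "womens-march")
save_tweet(con, {"id": "2", "text": "unrelated"}, "other-project")
con.commit()

# Analysts can later recover exactly why each tweet was collected:
march_ids = [r[0] for r in con.execute(
    "SELECT id FROM tweet WHERE collection_id = ?", ("womens-march",))]
# march_ids == ["1"]
```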

Twitter: Get tweets by handle

Requirements:

  • Function should be part of poll_user
  • Accepts a username, returns tweets in user timeline (within API limit)
  • Include unit test
  • Submit PR to branch v0.1
