thinkupllc / thinkup

ThinkUp gives you insights into your social networking activity on Twitter, Facebook, Instagram, and beyond.

Home Page: http://thinkup.com

License: GNU General Public License v3.0

CoffeeScript 0.16% Shell 0.56% CSS 1.49% PHP 89.74% Smarty 4.15% Python 0.07% JavaScript 1.12% HTML 2.72%

thinkup's Introduction

ThinkUp, social media insights engine

ThinkUp is a free, installable web application that gives you insights into your activity on social networks like Twitter, Facebook, and Instagram. Find out more at http://thinkup.com.

Support and Documentation

Refer to ThinkUp's documentation, or contact the ThinkUp community on the project mailing list for support.

License

ThinkUp's source code is licensed under the GNU General Public License, except for the external libraries listed below.

External Libraries

thinkup's People

Contributors

aaronkalair, aciidgh, adampash, amygdala, anildash, brandonroberts, capndesign, cdmoyer, cvi, cwarden, diazuwi, ecucurella, ekanshpreet, gboudreau, ginatrapani, j883376, jamesgallagher-ie, jmcpheron, markjaquith, mithaler, mwilkie, nilakshdas, olekvi, randi2kewl, randomecho, sarcas, suth, trevorbramble, unruthless, waxpancake


thinkup's Issues

Crawler off by one second

2009-09-03 14:00:03 | 1.34 MB | derekslenk | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/account/rate_limit_status.xml
2009-09-03 14:00:03 | 1.34 MB | derekslenk | CrawlerTwitterAPIAccessorOAuth:Parsing XML data from https://twitter.com/account/rate_limit_status.xml
2009-09-03 14:00:03 | 1.34 MB | derekslenk | CrawlerTwitterAPIAccessorOAuth:120 of 150 API calls left this hour; 0 for crawler until 14:00:04

Now at 3:00 it will run, and run fine, but then again at 4:00 it will be off by a second. Is this related to my setting the timezone to Eastern, or to the time on my server, or are other people seeing this error too?

Groups support

Capture all the lists the owner's Twitter users are on, display this somehow (tag cloud?)
Sort replies by list membership (show me only replies by people on "my best friends" list)

Here's a rundown of what has to be done:

  1. Create 3 new database tables: groups, group_owners, group_members. The groups table will be keyed on group_id, and its fields will represent all the metadata for individual groups, plus a network field that defaults to the value 'twitter'. The group_members table should be just two fields: group_id and member_user_id. The group_owners table should also be two fields: owner_user_id and group_id. The new table creation SQL should go in build_database.sql as well as in a file in the db_migrations folder.
  2. The API call the crawler should use is http://apiwiki.twitter.com/Twitter-REST-API-Method:-GET-lists. Add it to the array of API calls in /common/class.TwitterAPIAccessorOAuth.php in the prepAPI method. Then, add the list XML parsing case to that same class's parseXML method.
  3. Create a Group object and GroupDAO object in a new /common/class.Group.php file. You can use the class.Post.php file as an example.
  4. Add a fetchGroups method to the Crawler class which makes the API call and parses the XML, then uses the GroupDAO to save the data to the right tables.
  5. Add $crawler->fetchLists() to the /thinktank/common/plugins/twitter/twitter.php script in the twitter_crawl() method and voila! We'll be crawling lists.
  6. Write tests for the Group DAOs and the new TwitterCrawler method.

This sounds like a lot but it's not that bad if you look at the
existing code; it's exactly how every other type of data is retrieved
(posts, friends, followers, etc).

Something to keep in mind:
The growing/shrinking problem applies to groups as well as posts and favorites. I'd want ThinkTank to reflect the current state of a group, so maybe once every X days the crawler will have to wipe group data and recapture it.
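The steps above could be sketched roughly like this. Every class and method name here (Group, GroupDAO, fetchGroups, the DAO insert methods) comes from the plan or is invented for illustration; none of it is existing code:

```php
<?php
// Hypothetical sketch of steps 2-5, assuming an API accessor whose
// apiRequest/parseXML methods work like the existing crawler's.
class TwitterCrawlerGroupsSketch {
    public function fetchGroups($api, $group_dao, $owner_user_id) {
        // Step 2: call the GET-lists endpoint registered in prepAPI
        $xml = $api->apiRequest('user_lists', array('user_id' => $owner_user_id));
        // parseXML would return Group objects via the new parsing case
        $groups = $api->parseXML($xml);
        foreach ($groups as $group) {
            // Step 1's tables: groups, group_owners, group_members
            $group_dao->insertGroup($group); // metadata + network='twitter'
            $group_dao->insertGroupOwner($owner_user_id, $group->group_id);
            foreach ($group->member_user_ids as $member_id) {
                $group_dao->insertGroupMember($group->group_id, $member_id);
            }
        }
    }
}
```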

Highlight replies from friends (Gina)

In a post's reply listing, show replies by friends of the original poster first, then sort by follower count. This will require new fields in the tt_posts table, and is dependent on Issue #124.

Plan of Action:

  • Add two int fields to tt_posts: is_reply_by_friend and is_retweet_by_friend, with values 0 or 1.
  • Before a reply or a retweet is inserted into the database, set each value via a new boolean FollowDAO::isUserFriendOfAuthor($author_id, $user_id) method.
  • Sort all reply and retweet listings by is_reply/retweet_by_friend first, then by follower count.
  • Highlight the replies and retweets which are by friends of the original poster.
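The lookup in the second step might look like the sketch below. The tt_follows table name and its user_id/follower_id/active columns are assumptions for illustration, not the actual schema:

```php
<?php
// Sketch of the proposed FollowDAO::isUserFriendOfAuthor method,
// assuming a mysqli handle and a hypothetical tt_follows layout.
class FollowDAOSketch {
    private $db; // mysqli connection

    public function isUserFriendOfAuthor($author_id, $user_id) {
        $q = "SELECT 1 FROM tt_follows
              WHERE user_id = ? AND follower_id = ? AND active = 1";
        $stmt = $this->db->prepare($q);
        $stmt->bind_param('ii', $user_id, $author_id);
        $stmt->execute();
        $stmt->store_result();
        // true/false maps straight onto the 0/1 is_reply_by_friend field
        return $stmt->num_rows > 0;
    }
}
```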

Reorganize table structures for multi-service

Table structures should be generic enough to accommodate data from multiple services, from Twitter to Facebook to blogs, and should gain a source column that shows which service each entry came from.

What needs to be done per table:

tt_users:

  • RENAME tweet_count to post_count
  • RENAME last_status_id to last_post_id

tt_instances:

  • RENAME twitter_user_id to network_user_id
  • RENAME twitter_username to network_username
  • RENAME total_tweets_in_system and total_tweets_by_owner to total_posts_in_system and total_posts_by_owner
  • RENAME earliest_tweet_in_system to earliest_post_in_system
  • ADD column network default value 'twitter'

DONE:

  • add source column to tweets, follows, and users tables
  • add plug-ins table: id, name, description, is_active
  • add plugin_options table: plugin_id, option_name, option_value
  • add new replies table which looks like tweets table

tt_tweets:

  • RENAME table to tt_posts
  • RENAME status_id to post_id
  • RENAME tweet_text and tweet_html to post_text and post_html

tt_links:

  • RENAME status_id to post_id

tt_tweet_errors:

  • RENAME table to tt_post_errors
  • RENAME status_id to post_id
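A db_migrations script covering a few of the renames above might look like this sketch. The column types are placeholders (the real ALTER statements must repeat each column's actual definition), and only a subset of the renames is shown:

```php
<?php
// Illustrative migration sketch, not the actual db_migrations file.
// Column types below are guesses; a real migration must match the
// existing definitions exactly.
$migrations = array(
    "ALTER TABLE tt_users CHANGE tweet_count post_count INT NOT NULL DEFAULT 0",
    "ALTER TABLE tt_users CHANGE last_status_id last_post_id BIGINT",
    "ALTER TABLE tt_instances CHANGE twitter_user_id network_user_id BIGINT",
    "ALTER TABLE tt_instances ADD COLUMN network VARCHAR(20) NOT NULL DEFAULT 'twitter'",
    "RENAME TABLE tt_tweets TO tt_posts",
    "ALTER TABLE tt_posts CHANGE status_id post_id BIGINT",
);
foreach ($migrations as $sql) {
    // era-appropriate mysql_* API; assumes a connection is already open
    mysql_query($sql) or die(mysql_error());
}
```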

Secure / Remove phpinfo.php

I'd suggest you don't distribute a phpinfo file with the app; depending on how people have their systems set up, it can introduce security issues.

Also, you don't need to echo it; the function outputs the details itself. (The echo is what's causing the stray "1" at the bottom of the page, in case you're wondering.)
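To illustrate the echo point:

```php
<?php
// phpinfo() writes its report directly to the output buffer and
// returns TRUE on success; echoing that return value prints "1".
phpinfo();        // correct: the page renders with nothing extra
// echo phpinfo();  // wrong: appends "1" after the report
```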

Tweet filter rulesets: iTunes Smart Playlists for replies

A TT user should be able to define filtering rules for a set of replies, like "collapse all replies that contain the word 'kitten'" or "send all replies that mention 'iPhone' to the bottom of the list," and save a set of rules as a view, like an iTunes Smart Playlist.

Add regression tests for all existing and new code

Set up regression tests for DAOs and objects (which initialize a test DB with test data), for the web application front end, and for crawler consumption methods (which use mock API calls to test XML consumption).

DONE:
UserDAO

STARTED:
FollowDAO
Mock TwitterOAuth
front end/webapp calls
LinkDAO

TODO:
OwnerDAO
OwnerInstanceDAO
InstanceDAO
PostDAO

Add PHP version check in crawl.php

A common installation roadblock: users run crawl.php with PHP 4 instead of PHP 5 and get a cryptic error. Add a version check to crawl.php, and maybe a note to the README or installation guide, to help along the first crawler run.
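A possible check near the top of crawl.php; the exact minimum version string and message wording are assumptions:

```php
<?php
// Bail out early with a readable message instead of a PHP 4 parse
// or fatal error deep inside the crawler.
if (version_compare(PHP_VERSION, '5.0.0', '<')) {
    die("The crawler requires PHP 5 or higher; you are running "
        . PHP_VERSION . ".\n");
}
```

version_compare() itself exists as far back as PHP 4.1, so the check runs even on the old interpreters it is guarding against.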

Post list paging (Gina)

Done on public pages. Needs to be completed on private pages--though it's a little more complicated because of the Ajax loading of inline.view.php.

Update: This is dependent on Issue #167.

Refine plugin architecture

IMMEDIATE TODOS:

  • Detect which plugins are available
  • Add an enable/disable front-end for available plugins
  • Build plugin options interface so that configuration comes out of config.inc.php

BACKGROUND:
To make TT truly multi-service and enable users to build their own features into it, it's got to be pluggable. There should be allowances for both webapp plugins (data visualizations and custom listings, etc) as well as crawler plugins (new data sources like Facebook and Buzz).

As we build TT plugins, we will continue to refine the plugin architecture.

Here's an update on the current state of the pluggable architecture:

Files

  • All plugin files are now located in a single place, common/plugins/.
  • Each individual plugin has its own subfolder there, so the Twitter plugin is in common/plugins/twitter/.
  • Templates for rendering plugin-specific things on the frontend are in a templates subdirectory, ie, common/plugins/twitter/templates/.
  • Classes the plugin requires should be in a lib subdirectory, ie, common/plugins/twitter/lib/.
  • Right now the only webapp plugin that's working is the configuration screen.

Database

  • There are now tt_plugins and tt_plugin_options tables
  • By default, the Twitter plugin is inserted and set to active in the tt_plugins table.

Future plans:

My vision is that ThinkTank plugins will work like WordPress: you drop a folder into the common/plugins/ directory, and it gets listed in the webapp as "Inactive." Click a link to activate the plugin (insert its row into the db, set is_active to 1, run any relevant installation routines) and the crawler and webapp will execute the methods it registers from there on in.
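The activation flow described above might be sketched like this. The tt_plugins columns come from the database notes earlier in this issue; the install.php hook is purely hypothetical:

```php
<?php
// Sketch of WordPress-style plugin activation. Assumes an open
// mysql connection and a UNIQUE key on tt_plugins.name.
function activate_plugin($folder_name) {
    // Insert the row on first activation, or just flip is_active back on.
    $q = sprintf(
        "INSERT INTO tt_plugins (name, is_active) VALUES ('%s', 1)
         ON DUPLICATE KEY UPDATE is_active = 1",
        mysql_real_escape_string($folder_name)
    );
    mysql_query($q);

    // Run any installation routine the plugin ships (hypothetical hook).
    $install = "common/plugins/$folder_name/install.php";
    if (file_exists($install)) {
        require_once $install;
    }
}
```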

A hooks wishlist is underway here:
http://wiki.github.com/ginatrapani/thinktank/thinktank-plugin-hooks-wish-list

Capture and index retweets

There is an issue with ThinkTank regarding API retweets (by you): they don't get indexed, and the crawler burns off a lot of API calls trying to find the "remaining tweets" (retweets, which count as tweets in your profile).

Facebook crawler plugin

  • Enable FB Connect in ThinkTank
  • Fetch status updates and comments from FB for the user profile
  • Fetch status updates and comments from FB on an FB page

Add log rotation to the crawler

The log file gets awfully big after a while. It would be nice if the log rotated daily and a configuration option was introduced to control how many days old log files should be kept.
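One way to sketch this: date-stamped log files plus a cleanup pass on open. The function and option names ($days_to_keep, open_crawler_log) are invented for illustration:

```php
<?php
// Daily rotation sketch: one file per day, old files pruned on open.
// $days_to_keep would become the new configuration option.
function open_crawler_log($log_dir, $days_to_keep) {
    $cutoff = time() - ($days_to_keep * 86400);
    foreach (glob($log_dir . '/crawler-*.log') as $old_log) {
        if (filemtime($old_log) < $cutoff) {
            unlink($old_log); // past the retention window
        }
    }
    // Appending keeps all of today's crawls in a single dated file.
    return fopen($log_dir . '/crawler-' . date('Y-m-d') . '.log', 'a');
}
```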

README: DO NOT ENTER AN ISSUE UNTIL IT'S BEEN VERIFIED

ThinkTank is rapidly-changing alpha software that's not nearly in a complete state, and it's got lots of known bugs/security issues/TODO items. The best way to bring up an issue is on the mailing list first.

Please don't enter feature requests or bug reports here until you have joined the mailing list and submitted it there for verification and discussion.

http://groups.google.com/group/thinktankapp

Then, once it's been vetted, if the issue is a known bug, we can enter it here (with a link to the mailing list discussion in the issue notes). Thanks for your cooperation!

Mark a tweet as a Retweet

Display a drop-down of recent owner tweets before the Retweet, with the ability to save one as a RT of that tweet.

Also: write tests for this and mark as reply mechanism; proper success/error messages should be worked out.

In Reply To Back Button

If you click on a link to see the In Reply To from your dashboard, the back button on the left side doesn't work.

New plug-in: StatusNet

A great place to start on multi-service integration: StatusNet uses the Twitter API (somehow), and code reuse should make this one easy.

Optimize SQL

Some queries are slow; must optimize for good-sized databases.

More robust TT instance administrator controls

If someone is an administrator, on the Accounts page:

  • display which TT instance accounts own a Twitter account, and the last time they logged in, to surface inactive accounts that may be bogging down the crawler
  • add the ability to pause crawling of an account and start it again, ie, take it out of the active crawler queue if it's not being used
  • a user who hasn't logged in whose account is paused should have the ability to start it again.

Related: Issue #43 instance admins should be able to designate other users as admins

Redirect back from Twitter

After you authorize the app on Twitter and it redirects you back, I receive the following error:
Warning: Missing argument 4 for TwitterAPIAccessorOAuth::TwitterAPIAccessorOAuth(), called in /var/www/derekslenk.com/twitalytic/webapp/account/oauth.php on line 24 and defined in /var/www/derekslenk.com/twitalytic/common/class.TwitterAPIAccessorOAuth.php on line 10

Notice: Undefined variable: oauth_consumer_secret in /var/www/derekslenk.com/twitalytic/common/class.TwitterAPIAccessorOAuth.php on line 14

Remove network accounts

Right now you can add accounts, but not delete them. Add a "remove" button to delete the instance and owner_instance rows from the database. If two owners are associated with the instance, only delete the owner_instance row.
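The delete logic could be sketched as below. The DAO classes and the getOwnerCount method are invented names following the existing DAO naming style:

```php
<?php
// Sketch: only drop the instance row when no other owner still
// points at it; otherwise just sever this owner's link.
function remove_account($owner_id, $instance_id) {
    $oi_dao = new OwnerInstanceDAO();   // hypothetical DAOs
    $instance_dao = new InstanceDAO();

    // Always delete this owner's owner_instance row.
    $oi_dao->delete($owner_id, $instance_id);

    // If that was the last owner, delete the instance row too.
    if ($oi_dao->getOwnerCount($instance_id) == 0) {
        $instance_dao->delete($instance_id);
    }
}
```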

Crawler not crawling older tweets?

I've had the crawler running every hour for a couple days and while it still loads new tweets and replies it doesn't seem to be loading any more old stuff. The "System Progress" area shows this:

4% of Your Tweets Loaded (236 of 6,432)
100% of Your Followers Loaded (646 loaded)
65% of Your Friends Loaded (158 loaded)

Those numbers have stayed pretty much the same for the past two days.

I don't quite grok yet how the crawler works, but I'm wondering if what's happening is that the crawler is using up all its API calls checking out the people I follow and never getting around to archiving the rest of my tweets? There are 29,139 records in the tweets table but only 239 belong to me. But from the looks of it crawler.php should index my tweets before moving on to other tasks, so I'm not sure what's going on.

Thoughts?

Below is the entire log of the most recent crawl:

2009-08-25 09:05:06 | 1.16 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/account/rate_limit_status.xml
2009-08-25 09:05:06 | 1.16 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:Parsing XML data from https://twitter.com/account/rate_limit_status.xml
2009-08-25 09:05:06 | 1.16 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:150 of 150 API calls left this hour; 30 for crawler until 10:05:06
2009-08-25 09:05:07 | 1.16 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/users/show/swirlee.xml
2009-08-25 09:05:07 | 1.16 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:149 of 150 API calls left this hour; 29 for crawler until 10:05:06
2009-08-25 09:05:07 | 1.16 MB | swirlee | Crawler:Owner info set.
2009-08-25 09:05:08 | 1.17 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/swirlee.xml?count=200&since_id=3521244246&
2009-08-25 09:05:08 | 1.17 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:148 of 150 API calls left this hour; 28 for crawler until 10:05:06
2009-08-25 09:05:08 | 1.20 MB | swirlee | Crawler:3 tweet(s) found and 2 saved
2009-08-25 09:05:08 | 1.20 MB | swirlee | Crawler:More than Twitter cap of 3200 already in system, moving on.
2009-08-25 09:05:08 | 1.20 MB | swirlee | Crawler:238 in system; 6435 by owner
2009-08-25 09:05:09 | 1.56 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/replies.xml?count=200&
2009-08-25 09:05:09 | 1.56 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:147 of 150 API calls left this hour; 27 for crawler until 10:05:06
2009-08-25 09:05:09 | 2.54 MB | swirlee | TweetDAO:Reply found for 3536563209, ID: 3536581575; updating reply cache count
2009-08-25 09:05:09 | 2.55 MB | swirlee | TweetDAO:Reply found for 3536444769, ID: 3536533013; updating reply cache count
2009-08-25 09:05:09 | 2.55 MB | swirlee | TweetDAO:Reply found for 3536444769, ID: 3536508265; updating reply cache count
2009-08-25 09:05:09 | 2.69 MB | swirlee | Crawler:200 replies found and 4 saved
2009-08-25 09:05:09 | 2.69 MB | swirlee | Crawler:Retrieved newest replies; Reply archive loaded; Stopping reply fetch.
2009-08-25 09:05:09 | 1.38 MB | swirlee | Crawler:158 friends in system, 244 friends according to Twitter; Friend archive is not loaded
2009-08-25 09:05:12 | 1.56 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/friends/swirlee.xml?page=1&
2009-08-25 09:05:12 | 1.56 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:146 of 150 API calls left this hour; 26 for crawler until 10:05:06
2009-08-25 09:05:12 | 2.01 MB | swirlee | Crawler:Parsing XML. Page 1: 100 friends queued to update. 100 existing friends updated; 0 new friends inserted.
2009-08-25 09:05:15 | 2.19 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/friends/swirlee.xml?page=2&
2009-08-25 09:05:15 | 2.19 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:145 of 150 API calls left this hour; 25 for crawler until 10:05:06
2009-08-25 09:05:16 | 2.02 MB | swirlee | Crawler:Parsing XML. Page 2: 100 friends queued to update. 100 existing friends updated; 0 new friends inserted.
2009-08-25 09:05:18 | 2.10 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/friends/swirlee.xml?page=3&
2009-08-25 09:05:18 | 2.10 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:144 of 150 API calls left this hour; 24 for crawler until 10:05:06
2009-08-25 09:05:18 | 1.66 MB | swirlee | Crawler:Parsing XML. Page 3: 44 friends queued to update. 44 existing friends updated; 0 new friends inserted.
2009-08-25 09:05:18 | 1.66 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/friends/swirlee.xml?page=4&
2009-08-25 09:05:18 | 1.66 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:143 of 150 API calls left this hour; 23 for crawler until 10:05:06
2009-08-25 09:05:18 | 1.39 MB | swirlee | Crawler:Parsing XML. Page 4: 0 friends queued to update. 0 existing friends updated; 0 new friends inserted.
2009-08-25 09:05:18 | 1.38 MB | swirlee | Crawler:Follower archive marked as loaded
2009-08-25 09:05:18 | 1.38 MB | swirlee | Crawler:New follower count is 604 and system has 592; 12 new follows to load
2009-08-25 09:05:18 | 1.38 MB | swirlee | Crawler:Fetching follows via IDs
2009-08-25 09:05:19 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/followers/ids/swirlee.xml?page=1&
2009-08-25 09:05:19 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:142 of 150 API calls left this hour; 22 for crawler until 10:05:06
2009-08-25 09:05:22 | 1.65 MB | swirlee | Crawler:Page 1 has 723 follower IDs. 723 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:22 | 1.65 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/followers/ids/swirlee.xml?page=2&
2009-08-25 09:05:22 | 1.65 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:141 of 150 API calls left this hour; 21 for crawler until 10:05:06
2009-08-25 09:05:22 | 1.39 MB | swirlee | Crawler:Page 2 has 0 follower IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:23 | 1.38 MB | swirlee | Crawler:0 stray replied-to tweets to load.
2009-08-25 09:05:23 | 1.38 MB | swirlee | Crawler:0 unloaded follower details to load.
2009-08-25 09:05:23 | 1.38 MB | swirlee | Crawler:hythlae is friend most need of update
2009-08-25 09:05:23 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/hythlae.xml?count=200&since_id=3444205543&
2009-08-25 09:05:23 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:140 of 150 API calls left this hour; 20 for crawler until 10:05:06
2009-08-25 09:05:23 | 1.39 MB | swirlee | Crawler:1 tweet(s) found for hythlae and 1 saved
2009-08-25 09:05:24 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/20202481.xml?page=1&
2009-08-25 09:05:24 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:139 of 150 API calls left this hour; 19 for crawler until 10:05:06
2009-08-25 09:05:24 | 1.39 MB | swirlee | Crawler:Page 1 has 39 friend IDs. 39 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/20202481.xml?page=2&
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:138 of 150 API calls left this hour; 18 for crawler until 10:05:06
2009-08-25 09:05:25 | 1.39 MB | swirlee | Crawler:Page 2 has 0 friend IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:25 | 1.39 MB | swirlee | Crawler:veroniquec is friend most need of update
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/veroniquec.xml?count=200&since_id=3452713102&
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:137 of 150 API calls left this hour; 17 for crawler until 10:05:06
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/users/show/5420412.xml
2009-08-25 09:05:25 | 1.39 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:136 of 150 API calls left this hour; 16 for crawler until 10:05:06
2009-08-25 09:05:25 | 1.39 MB | swirlee | Crawler:Added/updated user veroniquec in database
2009-08-25 09:05:25 | 1.39 MB | swirlee | Crawler:0 tweet(s) found for veroniquec and 0 saved
2009-08-25 09:05:26 | 1.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/5420412.xml?page=1&
2009-08-25 09:05:26 | 1.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:135 of 150 API calls left this hour; 15 for crawler until 10:05:06
2009-08-25 09:05:39 | 2.22 MB | swirlee | Crawler:Page 1 has 2074 friend IDs. 2074 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:39 | 2.22 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/5420412.xml?page=2&
2009-08-25 09:05:39 | 2.22 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:134 of 150 API calls left this hour; 14 for crawler until 10:05:06
2009-08-25 09:05:39 | 1.40 MB | swirlee | Crawler:Page 2 has 0 friend IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:05:39 | 1.40 MB | swirlee | Crawler:socialthing is friend most need of update
2009-08-25 09:05:40 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/socialthing.xml?count=200&since_id=1631302512&
2009-08-25 09:05:40 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:133 of 150 API calls left this hour; 13 for crawler until 10:05:06
2009-08-25 09:05:41 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/users/show/5882252.xml
2009-08-25 09:05:41 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:132 of 150 API calls left this hour; 12 for crawler until 10:05:06
2009-08-25 09:05:41 | 1.40 MB | swirlee | Crawler:Added/updated user socialthing in database
2009-08-25 09:05:41 | 1.40 MB | swirlee | Crawler:0 tweet(s) found for socialthing and 0 saved
2009-08-25 09:05:41 | 1.48 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/5882252.xml?page=1&
2009-08-25 09:05:41 | 1.48 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:131 of 150 API calls left this hour; 11 for crawler until 10:05:06
2009-08-25 09:06:06 | 3.41 MB | swirlee | Crawler:Page 1 has 5000 friend IDs. 4999 existing follows updated; 1 new follows inserted.
2009-08-25 09:06:06 | 3.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/5882252.xml?page=2&
2009-08-25 09:06:06 | 3.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:130 of 150 API calls left this hour; 10 for crawler until 10:05:06
2009-08-25 09:06:06 | 1.40 MB | swirlee | Crawler:Page 2 has 0 friend IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:06:06 | 1.40 MB | swirlee | Crawler:FuelFrog is friend most need of update
2009-08-25 09:06:07 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/FuelFrog.xml?count=200&since_id=2884840254&
2009-08-25 09:06:07 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:129 of 150 API calls left this hour; 9 for crawler until 10:05:06
2009-08-25 09:06:08 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/users/show/14136755.xml
2009-08-25 09:06:08 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:128 of 150 API calls left this hour; 8 for crawler until 10:05:06
2009-08-25 09:06:08 | 1.40 MB | swirlee | Crawler:Added/updated user FuelFrog in database
2009-08-25 09:06:08 | 1.40 MB | swirlee | Crawler:0 tweet(s) found for FuelFrog and 0 saved
2009-08-25 09:06:08 | 1.43 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/14136755.xml?page=1&
2009-08-25 09:06:08 | 1.43 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:127 of 150 API calls left this hour; 7 for crawler until 10:05:06
2009-08-25 09:06:19 | 2.17 MB | swirlee | Crawler:Page 1 has 1975 friend IDs. 1975 existing follows updated; 0 new follows inserted.
2009-08-25 09:06:20 | 2.17 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/14136755.xml?page=2&
2009-08-25 09:06:20 | 2.17 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:126 of 150 API calls left this hour; 6 for crawler until 10:05:06
2009-08-25 09:06:20 | 1.40 MB | swirlee | Crawler:Page 2 has 0 friend IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:06:20 | 1.40 MB | swirlee | Crawler:johnmusser is friend most need of update
2009-08-25 09:06:20 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/johnmusser.xml?count=200&since_id=2935543364&
2009-08-25 09:06:20 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:125 of 150 API calls left this hour; 5 for crawler until 10:05:06
2009-08-25 09:06:21 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/users/show/14769689.xml
2009-08-25 09:06:21 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:124 of 150 API calls left this hour; 4 for crawler until 10:05:06
2009-08-25 09:06:21 | 1.40 MB | swirlee | Crawler:Added/updated user johnmusser in database
2009-08-25 09:06:21 | 1.40 MB | swirlee | Crawler:0 tweet(s) found for johnmusser and 0 saved
2009-08-25 09:06:21 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/14769689.xml?page=1&
2009-08-25 09:06:21 | 1.40 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:123 of 150 API calls left this hour; 3 for crawler until 10:05:06
2009-08-25 09:06:22 | 1.42 MB | swirlee | Crawler:Page 1 has 145 friend IDs. 145 existing follows updated; 0 new follows inserted.
2009-08-25 09:06:22 | 1.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/14769689.xml?page=2&
2009-08-25 09:06:22 | 1.42 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:122 of 150 API calls left this hour; 2 for crawler until 10:05:06
2009-08-25 09:06:22 | 1.40 MB | swirlee | Crawler:Page 2 has 0 friend IDs. 0 existing follows updated; 0 new follows inserted.
2009-08-25 09:06:22 | 1.40 MB | swirlee | Crawler:GrumpyTea is friend most need of update
2009-08-25 09:06:23 | 1.41 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/statuses/user_timeline/GrumpyTea.xml?count=200&since_id=3496491263&
2009-08-25 09:06:23 | 1.41 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:121 of 150 API calls left this hour; 1 for crawler until 10:05:06
2009-08-25 09:06:23 | 1.41 MB | swirlee | Crawler:5 tweet(s) found for GrumpyTea and 2 saved
2009-08-25 09:06:24 | 1.41 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:API request: https://twitter.com/friends/ids/15043768.xml?page=1&
2009-08-25 09:06:24 | 1.41 MB | swirlee | CrawlerTwitterAPIAccessorOAuth:120 of 150 API calls left this hour; 0 for crawler until 10:05:06
2009-08-25 09:06:25 | 1.43 MB | swirlee | Crawler:Page 1 has 56 friend IDs. 1 existing follows updated; 55 new follows inserted.
2009-08-25 09:06:26 | 1.39 MB | swirlee | InstanceDAO:Updated swirlee's system status.

Favorites (Amy)

Display them on a sub-tab somewhere, and also fill in Links from your Favorites area.

We'll need a new table for this, favorites, with user_id and status_id.

Here's a rundown of what has to be done:

  1. Create 1 new database table: favorites, with user_id, status_id, and network (default value 'twitter'). The new table creation SQL should go in build_database.sql as well as in a file in the db_migrations folder.
  2. The API call the crawler should use is http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-favorites. Add it to the array of API calls in /common/class.TwitterAPIAccessorOAuth.php in the prepAPI method. Then, add the favorites XML parsing case to that same class's parseXML method.
  3. Create a Favorite object and FavoriteDAO object in a new /common/class.Favorite.php file. You can use the class.Post.php file as an example.
  4. Add a fetchFavorites method to the /common/plugins/twitter/TwitterCrawler class which makes the API call and parses the XML, then uses the FavoriteDAO to save the data to the right tables.
  5. Add $crawler->fetchFavorites() to the /common/plugins/twitter/twitter.php script in the twitter_crawl() method and voila! We'll be crawling favorites.
  6. Write tests for the new DAOs and TwitterCrawler method.
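Step 4 above could be sketched like this. fetchFavorites and FavoriteDAO::addFavorite are names from the plan, and the 'favorites' API key is assumed to be what gets registered in prepAPI:

```php
<?php
// Hypothetical sketch of the crawler half of the Favorites plan.
function fetchFavorites($api, $favorite_dao, $user_id) {
    $xml = $api->apiRequest('favorites');
    foreach ($api->parseXML($xml) as $fav) {
        // favorites table from step 1: user_id, status_id, network
        $favorite_dao->addFavorite($user_id, $fav->status_id, 'twitter');
    }
}
```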

Bit.ly click counts inconsistently displayed

They show up sometimes and not others, depending on what page you're on. Might want to move this out of Javascript and do it on the backend, storing the click count in the database so every page refresh doesn't make Bit.ly API calls.

Exporting

I don't know if you finished implementing this, but when I hit export, I got the following fatal error:
Fatal error: Cannot use object of type Tweet as array in /var/www/derekslenk.com/twitalytic/webapp/templates_c/%%42^42F^42F8A467%%status.export.tpl.php on line 9

Create user email invitation system

Right now a TT instance is either open for registration by anyone who knows the URL, or closed.

Create an email invitation system where TT instance admins can fire off an invitation email message to someone with a link that will activate their account on the instance.

Steps to complete:

  • Add a "Invite users to try this ThinkUp installation" link in the ThinkUp accounts tab in an admin's settings area
  • Display a text field where the admin enters an email address
  • Register a user with that address but don't send the activation email; instead, display the contents of the email so the admin can copy and paste it, or have ThinkUp send it.
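The invitation step could work roughly like this; every function name and URL path here is an assumption for illustration:

```php
<?php
// Hedged sketch of generating an invite and the email the admin can copy.
function generate_invite_code() {
    return md5(uniqid(rand(), true)); // 32-char random token
}

function build_invite_link($base_url, $code) {
    // Hypothetical activation endpoint; the real path would differ.
    return rtrim($base_url, '/') . '/session/activate.php?invite=' . $code;
}

function build_invite_email($base_url, $code) {
    return "You've been invited to try this ThinkUp installation.\n"
         . "Click this link to activate your account:\n"
         . build_invite_link($base_url, $code);
}
```

The admin either pastes the output of `build_invite_email()` into their own mail client, or ThinkUp sends it directly.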

Footer Data

The footer data doesn't show anything except on the dashboard.

Initial User Registration

When you register an initial user, the first error you get is:

Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /var/www/derekslenk.com/twitalytic/webapp/session/register.php on line 35
Registration Successful! An activation code has been sent to your email address with an activation link...

The email still goes out, though. When I clicked the link in the email, I got the following error, and the user was not activated:
Table 'twitalytic.owners' doesn't exist

I have upgraded to the newest table structure; I think you missed an instance of the table in your code.
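That warning is the classic symptom of passing an unchecked mysql_query() result straight to mysql_num_rows(): when the query fails (here, because twitalytic.owners doesn't exist), mysql_query() returns false. A defensive sketch in the old mysql_* API this code uses (a fragment, not runnable outside a configured install):

```php
<?php
// Sketch only: check the query result before counting rows, and surface the
// real MySQL error instead of a warning plus a bogus "Successful!" message.
$result = mysql_query($sql);
if ($result === false) {
    die('Registration query failed: ' . mysql_error());
}
if (mysql_num_rows($result) > 0) {
    // ... address already registered; show an error instead of success text
}
```

With that in place, the missing-table bug would have surfaced at registration time rather than at activation time.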

Set the owner of a TT installation in config file

There will be 3 levels of access permissions in a TT instance:

  • user
  • admin
  • owner

In the config.inc.php file, you should be able to specify a TT installation "owner." (Possibly this should be an array so there can be multiple owners.) The owner has super-rights: all rights both of a user and an admin, plus the right to promote other users to admin.

Admins are set in the owners table, in the is_admin field. Admins can see all the Twitter accounts and user accounts on the system, and add/remove them from the public timeline, as well as remove and add them to the crawler queue.

Regular users can only see the Twitter accounts they authorized and control those.
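A minimal sketch of the three levels: only the level names and the is_admin field come from this issue; the 'owners' config key and the helper function are assumptions:

```php
<?php
// Hypothetical config.inc.php entry; an array allows multiple owners.
$THINKTANK_CFG['owners'] = array('gina@example.com');

function get_access_level($email, $cfg, $is_admin_in_db) {
    if (in_array($email, $cfg['owners'], true)) {
        return 'owner'; // all user + admin rights, plus promoting users to admin
    }
    if ($is_admin_in_db) {
        return 'admin'; // set via the is_admin field in the owners table
    }
    return 'user';      // sees and controls only their own authorized accounts
}
```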

Tweet deletions trip up the crawler

Right now the crawler compares the total number of tweets for a user saved in the ThinkUp database to the total number of tweets for a user reported by Twitter. If the numbers match, the crawler assumes it has all the tweets for a given user.

However, if a user has 10 tweets and the crawler gets all 10, then the user deletes a tweet and tweets again, to the crawler it looks like it has all 10 tweets--but it's actually missing the latest tweet and still storing the deleted one.

The crawler must not only compare the total number of tweets in ThinkUp to the total reported by Twitter, but also check whether the latest status ID in ThinkUp matches the latest one on Twitter.com.
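The check reduces to one predicate, sketched here with assumed names:

```php
<?php
// The archive is complete only when the counts match AND the newest status ID
// stored locally equals the newest one Twitter reports.
function archive_is_complete($local_count, $twitter_count,
                             $local_latest_id, $twitter_latest_id) {
    return ($local_count >= $twitter_count)
        && ($local_latest_id === $twitter_latest_id);
}
```

In the delete-then-tweet scenario above, the counts still match but the latest IDs differ, so the crawler knows to keep fetching.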

Saved keyword searches

Add the ability to save searches for certain kinds of content that may not be in an owner's stream, like a Twitter hashtag or a random keyword.

@username is done in this commit:
http://github.com/ginatrapani/thinkup/commit/1e3efaa0ef257ada87fc445a9afc5d31daea878c
Update: @username searches have been removed since the interface was too confusing: it wasn't clear when to click the Authorize button and when to just add a username. We'll have to rethink that one.

Also add the ability to save keyword searches within ThinkUp itself; evaluate Sphinx (http://sphinxsearch.com/), Solr, and MongoDB for this.

Port DAOs to PDO (Mark W, CVi & Gina)

Create a new PDO parent DAO and have all existing DAOs extend it.

The PDO port is in progress now. Here's who's doing what:

CVi:

  • Instance (complete)
  • Follow (complete) --TODO: Fix LeastLikely method bug, add test
  • Link (complete)

Mark W:

  • OwnerInstance (complete)
  • Plugin

Gina:

  • User (complete)
  • Post (complete)
  • Owner (complete)

Advantages:

  • Prepared statements are faster and more secure
  • Enables ThinkTank to use multiple datastore types

During this project, let's:

  • Complete DAO tests
  • Add docblocks to all interfaces and tests
  • Adopt more advanced coding best practices, like avoiding global THINKTANK_CFG calls

Mailing list discussion:

http://groups.google.com/group/thinktankapp/browse_thread/thread/9527d7b11ec72c54
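A rough sketch of what the shared parent could look like; the config keys, the optional 'dsn' override (handy for testing against SQLite), and the method names are assumptions, not the actual ThinkUp API:

```php
<?php
// Hedged sketch of a PDO parent DAO sharing one connection across child DAOs.
abstract class PDODAO {
    protected static $pdo = null;

    public function __construct($cfg) {
        if (self::$pdo === null) {
            $dsn = isset($cfg['dsn'])
                ? $cfg['dsn'] // test override, e.g. 'sqlite::memory:'
                : sprintf('mysql:host=%s;dbname=%s', $cfg['db_host'], $cfg['db_name']);
            self::$pdo = new PDO($dsn, $cfg['db_user'], $cfg['db_password']);
            self::$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        }
    }

    // All child DAOs funnel queries through prepared statements, which is
    // where the speed and security advantages listed above come from.
    protected function execute($sql, $binds = array()) {
        $stmt = self::$pdo->prepare($sql);
        $stmt->execute($binds);
        return $stmt;
    }
}

// Illustrative child DAO using the shared connection.
class ExampleDAO extends PDODAO {
    public function getOne($sql, $binds = array()) {
        return $this->execute($sql, $binds)->fetchColumn();
    }
}
```

Exceptions from ERRMODE_EXCEPTION replace the silent-failure patterns the old mysql_* code was prone to.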
