
twittersearch's Issues

PEP-8

TwitterSearch is great, but do you plan to provide a more PEP-8-ish API?

For instance, here's an example from the README:

from twitter_search import *

tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(['Guttenberg', 'Doktorarbeit'])
tso.set_language('de')
tso.set_count(7)
tso.set_include_entities(False)

try:
    ts = TwitterSearch(
        consumer_key = 'aaabbb',
        consumer_secret = 'cccddd',
        access_token = '111222',
        access_token_secret = '333444'
     )

    for tweet in ts.iter_search(tso):
        pass  # ...
except TwitterSearchException as e:
    print(e)

How to set a proxy IP?

Because of the national Great Firewall, I cannot get a response from the API. How can I route requests through a proxy IP?
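TwitterSearch does its HTTP traffic through the requests library, which honors the standard proxy environment variables, so one workaround is to set them before creating the TwitterSearch object. A sketch; the proxy address below is a placeholder:

```python
import os

# requests honors these environment variables, so all of TwitterSearch's
# API calls get routed through the proxy (address is a placeholder):
os.environ["HTTP_PROXY"] = "http://127.0.0.1:8080"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"
```

Setting the variables from inside the script keeps the proxy scoped to this process instead of the whole shell session.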

Search for a list of emoji

Is there any way to search for a list of emoji? I am trying to search for all the flag emoji, but I get the error Error 403: ('Forbidden: The request is understood, but', 'it has been refused or access is not allowed').

main.py

import flags
from TwitterSearch import *
import sys
import json

def is_flag_emoji(c):
    return "\U0001F1E6\U0001F1E8" <= c <= "\U0001F1FF\U0001F1FC" or c in ["\U0001F3F4\U000e0067\U000e0062\U000e0065\U000e006e\U000e0067\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0073\U000e0063\U000e0074\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0077\U000e006c\U000e0073\U000e007f"]



i = 0
data = {}

try:
    tso = TwitterSearchOrder() # create a TwitterSearchOrder object
    tso.set_keywords(flags.list) # let's define all words we would like to have a look for
    tso.set_language('en') # we want to see English tweets only
    tso.set_include_entities(False) # and don't give us all those entity information
    tso.set_count(20)

    # it's about time to create a TwitterSearch object with our secret tokens
    ts = TwitterSearch(
        consumer_key = '****',
        consumer_secret = '****',
        access_token = '****',
        access_token_secret = '****'
     )

    # this is where the fun actually starts :)
    for tweet in ts.search_tweets_iterable(tso):
        if i <= 20:
            # print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
            data[tweet['user']['screen_name']] = tweet['text']
            i += 1
        else:
            print(data)
            sys.exit(1)

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

flags.py

list = ["🇦🇫", "🇦🇽", "🇦🇱", "🇩🇿", "🇦🇸", "🇦🇩", "🇦🇴", "🇦🇮",
        # ... the remainder of the list (every regional-indicator flag,
        # plus the England/Scotland/Wales tag-sequence flags) was
        # mangled by an encoding error in the original paste ...
        "🇿🇲", "🇿🇼"]

KeyError: u'\ufeff'

Not sure if this is an error on my end, or something I don't understand.

Background:
I'm using a Ukrainian word list to mine tweets from Twitter for research. I have it saved as a cPickle file, which I load and can print in Python without any problems.

Problem:
I receive the following error and can't figure out what is throwing it. Any help would be appreciated.

Traceback (most recent call last):
  File "<pyshell#29>", line 1, in <module>
    execfile("twittersearchloc.py")
  File "twittersearchloc.py", line 25, in <module>
    for tweet in ts.search_tweets_iterable(tso):
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearch.py", line 204, in search_tweets_iterable
    self.search_tweets(order)
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearch.py", line 305, in search_tweets
    self._start_url = order.create_search_url()
  File "C:\Python27\lib\site-packages\twittersearch-1.0.1-py2.7.egg\TwitterSearch\TwitterSearchOrder.py", line 232, in create_search_url
    url += '+'.join([quote_plus(i) for i in self.searchterms])
  File "C:\Python27\lib\urllib.py", line 1310, in quote_plus
    return quote(s, safe)
  File "C:\Python27\lib\urllib.py", line 1303, in quote
    return ''.join(map(quoter, s))
KeyError: u'\ufeff'

How I tried to solve the problem:

I figured the non-ASCII format was throwing it off, but trying to decode or encode it into different formats didn't work.
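For what it's worth, u'\ufeff' is the Unicode byte-order mark (BOM), which typically sneaks into the first word when a list originates from a file saved as "UTF-8 with BOM". A minimal sketch of stripping it before the keywords reach TwitterSearchOrder (the helper name is made up):

```python
def strip_bom(words):
    # Remove a leading byte-order mark (U+FEFF) from each keyword so
    # urllib's quote_plus never sees a character it cannot map.
    return [w.lstrip(u"\ufeff") for w in words]

clean = strip_bom([u"\ufeff\u0441\u043b\u043e\u0432\u043e", u"\u043c\u043e\u0432\u0430"])
```

Running the pickled word list through this once before calling set_keywords should make the KeyError disappear.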

API limit information

Hello,

Is it possible to get the request quota?
I mean the maximum number of requests that can be made using the given credentials, together with the number of requests already made.

Thank you

Regards
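Twitter reports exactly this in the response headers of every search call (x-rate-limit-limit, x-rate-limit-remaining, x-rate-limit-reset). Assuming your TwitterSearch version exposes the last response's headers (newer releases have a get_metadata() helper; check your version), a sketch of reading them:

```python
def rate_limit_status(headers):
    # Twitter sends quota counters as HTTP headers on every response.
    return {
        "limit": int(headers["x-rate-limit-limit"]),
        "remaining": int(headers["x-rate-limit-remaining"]),
        "reset_epoch": int(headers["x-rate-limit-reset"]),
    }

# e.g. headers = ts.get_metadata()  # assumption: newer TwitterSearch versions
status = rate_limit_status({"x-rate-limit-limit": "180",
                            "x-rate-limit-remaining": "42",
                            "x-rate-limit-reset": "1500000000"})
```

reset_epoch is a Unix timestamp telling you when the quota window rolls over.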

Program randomly freezes

Hello... I'm using your library and I don't know why, but my program randomly freezes sometimes. My program is pretty simple and is pretty much just a copy of the code sample you provide at https://twittersearch.readthedocs.org/en/latest/index.html (actually, your code sample was also freezing when I tried it).

Could it have to do with the version of Python I'm using? (2.7.9)
I installed TwitterSearch through pip. I hope it's not some deadlock issue.

Here's what I've been running:

from TwitterSearch import *
from time import sleep
try:
    tso = TwitterSearchOrder() # create a TwitterSearchOrder object
    tso.set_keywords(['#vr', '-RT']) # let's define all words we would like to have a look for
    tso.set_language('en') # hell no German, I want English!
    tso.set_include_entities(False) # and don't give us all those entity information

    # it's about time to create a TwitterSearch object with our secret tokens
    ts = TwitterSearch(
        consumer_key = 'xxxx',
        consumer_secret = 'xxxx',
        access_token = 'xxxx',
        access_token_secret = 'xxxx'
     )

    # open file for writing
    text_file = open("#vrtest.txt", "w")

    # check when to stop
    iterations = 0
    max_tweets = 100000

    # callback function used to check if we need to pause the program
    def my_callback_closure(current_ts_instance): # accepts ONE argument: an instance of TwitterSearch
        queries, tweets_seen = current_ts_instance.get_statistics()

        if queries > 0 and (queries % 2) == 0: # trigger delay every other query
            print("\nQueries: " + str(queries) + " now sleeping, 1 minute.\n")
            sleep(60) # sleep for 60 seconds

    # this is where the fun actually starts :)
    for tweet in ts.search_tweets_iterable(tso, callback=my_callback_closure):

        current_line = "%s" % ( tweet['text'] )

        iterations = iterations + 1
        print( "i: " + str(iterations) + " - " + tweet['user']['screen_name'] + " tweeted: " + current_line )

        text_file.write(current_line.encode('utf-8', 'ignore') + "\n")

        # wait 1 second every 10 tweets
        if (iterations % 10 == 0):
            print("\nSleeping 1 second.\n")
            sleep(1)

        if (iterations >= max_tweets):
            break

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

finally:
    # close file
    text_file.close()

tso.SetGeocode flagging UK geocode as invalid number

I am trying to constrain the area I search for tweets in the UK, but am receiving an error response from the TwitterSearchOrder.py module.

C:\Users\cjadmin>C:\Users\cjadmin\Desktop\py\search.py
Traceback (most recent call last):
  File "C:\Users\cjadmin\Desktop\py\search.py", line 26, in <module>
    tso.setGeocode(53.409144,-2.147483,10,'mi') # Set location constraints with geocode
  File "C:\Python27\lib\site-packages\twittersearch-0.78.3-py2.7.egg\TwitterSearch\TwitterSearchOrder.py", line 138, in setGeocode
    raise TwitterSearchException(1005)
TwitterSearch.TwitterSearchException.TwitterSearchException: Error 1005: Invalid unit.

I've tried escaping the minus for the geocode but that also fails.

Are UK codes unsupported?

Find all tweets near me?

Hi, I am new to this. I have been able to get a script going using TwitterSearch by passing in a keyword or two, but can I mimic the "near me" function of the actual Twitter search (more) options?

I tried different variations like this:
tso.set_keywords([], or_operator = True) # let's define all words we would like to have a look for
tso.set_geocode(138.599, -34.93, 10, imperial_metric=True)

Any help would be appreciated.
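Two things worth checking: Twitter's geocode query parameter is formatted "latitude,longitude,radius" with a km/mi suffix, and set_geocode takes latitude first; 138.599 looks like Adelaide's longitude, so the coordinates above may be swapped. A sketch of building the raw parameter (the helper name is made up):

```python
def geocode_param(lat, lng, radius, unit="mi"):
    # Twitter's geocode search parameter: "latitude,longitude,radius"
    # with a unit suffix, e.g. "37.781157,-122.398720,1mi".
    return "%.6f,%.6f,%d%s" % (lat, lng, radius, unit)

param = geocode_param(-34.93, 138.599, 10)  # roughly Adelaide, 10-mile radius
```

If the parameter built from your numbers puts the latitude outside -90..90, the arguments are in the wrong order.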

Collect tweets from 03-20 Aug, 2014 for a particular location

Hello,

I am a PhD student. I am new to Python and am using TwitterSearch to collect tweets from 03-20 Aug for a particular location (geocode). I am trying to run the following piece of code.

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['protest', 'protested', 'protesting', 'riot', 'rioted',
                      'rioting', 'rally', 'rallied', 'rallying', 'marched',
                      'marching', 'strike', 'striked', 'striking'])
    tso.set_language('en') # we want to see English tweets only
    tso.set_geocode(37.00, -92.00, 100)
    tso.set_include_entities(False) # and don't give us all that entity information
    tso.set_since_id(367020906049835008)
    tso.set_max_id(501488707195924481)

    ts = TwitterSearch(
        consumer_key = '.............',                             # my access credentials
        consumer_secret = '.........................',
        access_token = '...................',
        access_token_secret = '.......................'
    )

    for tweet in ts.search_tweets_iterable(tso):
        user = tweet['user']['screen_name'].encode("ASCII", errors='ignore')
        text = tweet['text'].encode("ASCII", errors='ignore')
        time = tweet['created_at'].encode("ASCII", errors='ignore')
        print('@%s tweeted: %s on %s' % (user, text, time) + '\n')

except TwitterSearchException as e:
    print(e)

However, the above piece of code is not returning anything. Please help me in this regard. Can I collect old tweets for a particular location without any keyword?

Thanks in advance.

Regards,
Mohammed
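One likely culprit: the standard search API only serves roughly the last week of tweets, and tweet IDs encode their own creation time, so the requested window can be checked directly. A sketch using Twitter's documented snowflake epoch:

```python
TWITTER_EPOCH_MS = 1288834974657  # snowflake epoch, 2010-11-04

def snowflake_to_unix_ms(tweet_id):
    # The top bits of a tweet ID are milliseconds since the snowflake epoch.
    return (tweet_id >> 22) + TWITTER_EPOCH_MS

# The since_id above decodes to roughly August 2013 and the max_id to
# roughly August 2014, far outside the ~7-day window standard search serves.
since_ms = snowflake_to_unix_ms(367020906049835008)
max_ms = snowflake_to_unix_ms(501488707195924481)
```

Historical tweets that old are only reachable through Twitter's paid/archive products, not the standard search endpoint.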

Exclude retweets and replies

Add methods to TwitterSearchOrder for excluding retweets and replies.

There is currently a work-around for this:
tso.set_keywords(['yourKeywordHere', '-filter:retweets', '-filter:replies'])

AIOHTTP

Would there be any way to turn this easily into AIOHTTP instead of requests?

Truncated Tweets

Hi, any way to set tweet_mode to extended so I can access non-truncated tweets? Thanks.
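TwitterSearchOrder keeps its query parameters in an internal arguments dict (visible in the since_id/max_id bug report elsewhere on this page), so one unofficial workaround is to inject the parameter directly. This pokes at library internals rather than a documented API; the stand-in class below only mimics that one attribute:

```python
class OrderStub(object):
    # Stand-in mimicking TwitterSearchOrder's internal `arguments` dict;
    # with the real library you would mutate tso.arguments the same way.
    def __init__(self):
        self.arguments = {}

tso = OrderStub()
# Unofficial: ask the search endpoint for full-length tweet text.
tso.arguments.update({'tweet_mode': 'extended'})
```

Note that with tweet_mode=extended, Twitter returns the untruncated text under the 'full_text' key instead of 'text'.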

Bug in TwitterSearch when looking for more tweets

Line 115 of TwitterSearch:

self.__nextMaxID = min(self.__response['content']['statuses'], key=lambda i: i['id'])['id'] - 1

Since I've only just started looking at this, I'm essentially following your getting-started guide in the README for a keyword search (single search term: 'ECAWA'), and this line throws a ValueError since the argument to min() is an empty sequence.
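A defensive version of that line would only advance the max_id cursor when the page actually contained statuses. A sketch using the names from the quoted line:

```python
response = {'content': {'statuses': []}}  # an empty page, as in the report

statuses = response['content']['statuses']
next_max_id = None
if statuses:  # min() on an empty sequence raises ValueError, so guard it
    next_max_id = min(statuses, key=lambda i: i['id'])['id'] - 1
```

With the guard, an empty result page simply leaves the cursor unset instead of crashing the iteration.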

KeyError: 'search_metadata'

I seem to be having a problem with your Python lib.

Traceback (most recent call last):
  File "main.py", line 41, in <module>
    for tweet in ts.searchTweetsIterable(tso): # this is where the fun actually starts :)
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 32, in searchTweetsIterable
    self.searchTweets(order)
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 50, in searchTweets
    self.sentSearch(order.createSearchURL())
  File "/usr/local/lib/python2.7/dist-packages/TwitterSearch-0.1-py2.7.egg/TwitterSearch/TwitterSearch.py", line 40, in sentSearch
    if self.response['content']['search_metadata'].get('next_results'):
KeyError: 'search_metadata'

This is the error I'm getting using the sample code from your README (seen below):

from TwitterSearch import *

tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.setKeywords(['hullcompsci']) # can include multiple searches (e.g. for when GGJ or TTG is on): tso.setKeywords(['Guttenberg', 'Doktorarbeit'])
tso.setCount(100) # please dear Mr Twitter, give us 100 results per page (this is the default value, I know :P)
tso.setLanguage('en')
tso.setIncludeEntities(False) # and don't give us all those entity information (this is a default value too)

ts = TwitterSearch(
    consumer_key = 'censored',
    consumer_secret = 'censored',
    access_token = 'censored',
    access_token_secret = 'censored'
)

ts.authenticate()

counter = 0 # just a small counter
for tweet in ts.searchTweetsIterable(tso): # this is where the fun actually starts :)
    counter += 1
    print '@%s tweeted: %s' % (tweet['user']['screen_name'], tweet['text'])

print '*** Found a total of %i tweets' % counter

Thanks

Search stopping because search_metadata.next_results missing

Thanks for this library. Working very well.

This is more of a question on the twitter api I guess, but maybe you've encountered this before.
Every now and again, I find that the search (iterating via searchTweetsIterable) stops because the search_metadata.next_results item is completely missing from the Twitter response. Do you know of a good reason why this happens? I don't see anything about it in the API documentation, and it is not due to rate limiting.

If I manually run another search with my own max_id populated, I get another set of results, again with the search_metadata.next_results missing.
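When next_results vanishes, manual paging is still possible: take the smallest tweet ID seen so far, subtract one, and issue a fresh search with that value as max_id, which mirrors the cursoring described in Twitter's search docs. A sketch (the helper name is made up):

```python
def next_max_id(statuses):
    # Resume paging strictly below the oldest tweet already seen.
    return min(s['id'] for s in statuses) - 1

page = [{'id': 42}, {'id': 17}, {'id': 90}]
resume_at = next_max_id(page)
# then feed resume_at to the order's max_id setter and re-run the search
```

Looping this until a search returns zero statuses reproduces the pagination the missing next_results field would have given you.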

More bugs in TwitterSearchOrder

Hi, me again. Maybe I really should fork and pull. If I find another bug I will :)

70: self.argument.update( { 'since_id' : '%s' % twid } ) should be
70: self.arguments.update( { 'since_id' : '%s' % twid } )

The same for line 76
76: self.argument.update( { 'max_id' : '%s' % twid } ) should be
76: self.arguments.update( { 'max_id' : '%s' % twid } )

Next time I will fork, promise.

Search strings with special/punctuation characters cause unexpected exceptions

While using your lib, I've run into the issue that a ValueError is thrown for certain characters such as '(', ')', '[', ']', '$', '?' and "'" (apostrophe), and that TwitterSearch.TwitterSearchException.TwitterSearchException: Error 401: Unauthorized is produced when I use 'test=' or 'test=foo' (basically any time the search string contains an '=' character). Code producing the aforementioned exceptions (CONSUMER_KEY, CONSUMER_SECRET, TOKEN_KEY, TOKEN_SECRET are keys specific to my application and are working):

Python 2.7.5, TwitterSearch 0.78.3

import logging
import traceback
import TwitterSearch

def download_tweets(search_string, language):
    """Return a list of tweets containing <search_string>; language should be like 'en' or 'ru'."""

    tso = TwitterSearch.TwitterSearchOrder()
    tso.addKeyword(search_string)
    tso.setLanguage(language)
    tso.setIncludeEntities(False)

    # create a TwitterSearch object with our secret tokens
    ts = TwitterSearch.TwitterSearch(
        consumer_key=CONSUMER_KEY,
        consumer_secret=CONSUMER_SECRET,
        access_token=TOKEN_KEY,
        access_token_secret=TOKEN_SECRET
    )
    try:
        return ts.searchTweetsIterable(tso)

    except TwitterSearch.TwitterSearchException as e:
        logging.exception("%s: %s", e.code, e.message)
        logging.exception("Stack trace: %s", traceback.format_exc())
        raise e

download_tweets("test=", "en")
download_tweets("test=foo", "en")
download_tweets("test'", "en")
download_tweets("test$", "en")
download_tweets("test?", "en")
download_tweets("test(", "en")
download_tweets("test)", "en")
download_tweets("test[", "en")
download_tweets("test]", "en")

Use No Keywords?

I would like to make a query using a geocode argument only: just give it coordinates, a radius, and a date range, and have it pull up all tweets in the area. When I try this, however, I get the "No Keywords Given" error. Is it possible to make a query with no keywords in this library?

Codec Error When Installing

I am having a problem installing TwitterSearch using Python 3.4 on Windows 7. "pip install TwitterSearch" returns a codec error:

 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 3826: character maps to      <undefined>

My full pip.log can be found here: https://gist.github.com/codedthiscode/a662f8223936e48a645d.

I also tried using easy_install but I am getting the same error. A fair amount of Googling has not solved the problem. Any advice?
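The byte 0x9d suggests pip is decoding a UTF-8 file (most likely the package's README) with Windows' default cp1252 codec, which has no character at that position. The underlying fix belongs in the package's setup.py: open the long-description file with an explicit encoding. A sketch (the file name is an assumption):

```python
import io

def read_utf8(path):
    # An explicit encoding sidesteps the platform default (cp1252 on
    # Windows), which cannot decode byte 0x9d.
    with io.open(path, encoding="utf-8") as f:
        return f.read()

# e.g. in setup.py: long_description = read_utf8("README.rst")
```

Until the package is fixed, installing from a checkout where you apply this change yourself is a workable stopgap.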

tso.set_count(5)

tso.set_count(5) is not working on my side:

try:
    tso = TwitterSearchOrder()
    tso.set_keywords(['Lucca'])
    tso.set_count(5)
    tso.set_result_type('recent')
#    tso.set_until(datetime.date(2016, 04, 27))
#    tso.set_until(datetime.date(datetime.now()))


    ts = TwitterSearch(
        consumer_key = 'xxx',
        consumer_secret = 'xxx',
        access_token = 'xxxx',
        access_token_secret = 'xxxx'
     )

    for tweet in ts.search_tweets_iterable(tso):

        print tweet['entities']['media'][0]['media_url_https']

except TwitterSearchException as e: # take care of all those ugly errors if there are some
    print(e)

It's reporting hundreds of results.
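Note that set_count() sets the number of tweets per API page, not the total; search_tweets_iterable keeps fetching further pages, which is why hundreds of results appear. Capping the total has to happen on the consumer side, for example:

```python
from itertools import islice

def take(n, iterable):
    # Consume at most n items from any iterable (such as the one
    # returned by search_tweets_iterable), then stop paging.
    return list(islice(iterable, n))

first_five = take(5, range(100))
```

Because islice stops pulling from the iterator after n items, no further API pages are requested once the cap is hit.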

Query operators do not work

Hello, I am having trouble using query operators. For example, the query '"michelle bachelet"' returns not only tweets containing the exact phrase "michelle bachelet", but also tweets containing only "michelle", others containing only "bachelet", and others containing both "michelle" and "bachelet" at varying distances from each other. In general, AND, OR and exact-phrase queries return all three types of results.

I will appreciate any help with this issue.

This is my first time using Git, so if I need to post this info in a different way, let me know.

The def next(self): method was not working correctly; it would return all of the items in the list except the last one.

So if I had tweeted the following

whitecowtest does this make sense

whitecowtest so this is love?

whitecowtest HELLO Mother

I would only get the following returned.

whitecowtest so this is love?

whitecowtest HELLO Mother

Below is the corrected code for def next(self):

def next(self):
    if self.nextTweet < len(self.response['content']['statuses']):
        strresponse = self.response['content']['statuses'][self.nextTweet]
        self.nextTweet += 1
        return strresponse

    try:
        self.searchNextResults()
    except TwitterSearchException:
        raise StopIteration
    if len(self.response['content']['statuses']) != 0:
        self.nextTweet = 1  # return index 0 now; resume from index 1 next call
        return self.response['content']['statuses'][0]
    raise StopIteration
