
rtweet's Introduction

rtweet

R-CMD-check CRAN status Coverage Status Downloads ZENODO rOpenSci JOSS “Buy Me A Coffee”

Use Twitter from R.

This package is no longer updated and no fixes can be expected.
A request to orphan the package on CRAN has been sent.

The maintainer can no longer check most of the output of the functionality provided. If you want to volunteer, reach out to the maintainer.

Installation

To get the current released version from CRAN:

install.packages("rtweet")

You can install the development version of rtweet from GitHub with:

install.packages("rtweet", repos = "https://ropensci.r-universe.dev")

Usage

All users must be authenticated to interact with Twitter’s APIs. See vignette("auth", package = "rtweet") for details.

library(rtweet)
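A minimal sketch of authenticating interactively (this assumes the `auth_setup_default()` helper described in the auth vignette; helper names may vary between rtweet versions):

```r
library(rtweet)

# Interactively create and cache a default token; the first call
# opens a browser, and later sessions reuse the cached credentials.
auth_setup_default()
```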

rtweet should be used in strict accordance with Twitter’s developer terms.

What you can do depends on whether you are on the free or a paid tier of Twitter's API.

Free

You can post (tweet_post()) and retrieve information about yourself (user_self()).
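For example, a sketch that assumes you are already authenticated:

```r
library(rtweet)

# Post a status from the authenticated account (free tier)
tweet_post("Posting from rtweet")

# Retrieve information about the authenticated user
me <- user_self()
```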

Paid

You can do everything else: search tweets (tweet_search_recent()), retrieve your own bookmarks (user_bookmarks()), check who follows whom (user_following() or user_followers()), ….
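A sketch of the paid-tier calls (argument names such as `n` are illustrative here; check each function's help page for the exact signature):

```r
library(rtweet)

# Search recent tweets matching a query
recent <- tweet_search_recent("#rstats", n = 100)

# Retrieve your own bookmarks
bookmarks <- user_bookmarks(n = 50)

# Who an account follows, and who follows it
following <- user_following("rOpenSci")
followers <- user_followers("rOpenSci")
```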

Contact

Communicating with Twitter’s APIs relies on an internet connection, which can sometimes be inconsistent.

If you have a question, need an example, or want to share a use case, you can post on rOpenSci's discuss, where you can also browse other uses of rtweet.

With that said, if you encounter an obvious bug for which there is not already an active issue, please create a new issue on GitHub with all code used (preferably a reproducible example).

Code of Conduct

Please note that this package is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

rtweet's People

Contributors

alexpghayes, bdilday, charliejhadley, dieghernan, engineerchange, giocomai, hadley, hrbrmstr, jdblischak, jennybc, jeroen, jonthegeek, kevintaylor, kohlkopf, kwazao, llrs, maelle, malcolmbarrett, mikechapple, mine-cetinkaya-rundel, mkearney, noamross, rmvegasm, simonheb, suppaman, tbuckl, thomas-keller, tylerlittlefield, tylermorganwall, xvrdm


rtweet's Issues

stream_tweets(): bounding-box geocoords not working?

Ahoy,

I have attempted to stream tweets from a geographic area, but when I run the script I believe I'm just getting a random sample of tweets (as if I left the script with q = "").

Here is what I ran:
area <- stream_tweets(q = "-94,36,-93,37", timeout = 30, parse = TRUE, clean_tweets = TRUE, token = twitter_token, verbose = TRUE)

Did I structure this correctly? Many thanks!

Search Tweets Error

search_tweets(q="Trump")
Error in as.POSIXct.default(rl_df$reset, origin = "1970-01-01") :
do not know how to convert 'rl_df$reset' to class “POSIXct”
sessionInfo()
R version 3.3.1 (2016-06-21)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.10.5 (Yosemite)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] ggmap_2.6.1 ggplot2_2.1.0 rtweet_0.2.0

loaded via a namespace (and not attached):
[1] Rcpp_0.12.7 magrittr_1.5 maps_3.1.1 munsell_0.4.3
[5] colorspace_1.2-6 geosphere_1.5-5 lattice_0.20-33 rjson_0.2.15
[9] R6_2.1.3 jpeg_0.1-8 stringr_1.1.0 httr_1.2.1
[13] plyr_1.8.4 dplyr_0.5.0 tools_3.3.1 grid_3.3.1
[17] gtable_0.2.0 png_0.1-7 DBI_0.5-1 digest_0.6.10
[21] openssl_0.9.4 lazyeval_0.2.0 assertthat_0.1 tibble_1.2
[25] reshape2_1.4.1 mapproj_1.2-4 curl_2.0 sp_1.2-3
[29] stringi_1.1.1 RgoogleMaps_1.4.1 scales_0.4.0 jsonlite_1.1
[33] proto_0.3-10

Never seen this before; any idea what it is?

Error in readRDS(pat) : error reading from connection

I'm testing the rtweet package, but I'm encountering this issue:

Error in readRDS(pat) : error reading from connection

when I try to start a data extraction. If I understand correctly, is it still required to use the setup_twitter_oauth() function from the twitteR package for the connection?

Thanks a lot

Problem with sentiment analysis

Working through the vignette - presumably different set of tweets

> sa_trump <- syuzhet::get_nrc_sentiment(dt$text)
Error in tolower(char_v) : 
  invalid input 'RT @CharMckenney: I am a black woman, educated (BA,MBA), independent thinker. I support Trump. 🙋�@realDonaldTrump @Women4Trump #Trump2016 #…' in 'utf8towcs'

Is there an easy way to exclude problem tweets? Not that I am trying to exclude tweets from black women LOL!

"Rate limit exceeded" tripped by checking rate_limit too often?

I'm trying to gather tweets iteratively based on a user list, and I'm being stopped by an error that didn't make much sense to me at first. It seems that the built in rate-limiting is what's causing the trouble, even though I'm doing a query that should allow for 900 queries in 15 minutes.

Here's some test code that prints to console:

# arbitrary list of popular twitter accounts
users = c("katyperry", "justinbieber", "taylorswift13", "BarackObama", "rihanna", "YouTube", "ladygaga", "TheEllenShow", "twitter", "jtimberlake", "KimKardashian", "britneyspears", "Cristiano", "selenagomez", "cnnbrk", "jimmyfallon", "ArianaGrande", "shakira", "instagram", "ddlovato", "JLo", "Oprah", "Drake", "KingJames", "BillGates", "nytimes", "KevinHart4real", "onedirection", "MileyCyrus", "SportsCenter")

for (i in users) {
  get_timeline(i, token=my_token, n = 100, clean_tweets = T, verbose = T)
  curr_limit = rate_limit(token=my_token)
  print(paste0("searched for ",i,", and there were ",curr_limit$remaining[curr_limit$query == "statuses/user_timeline"]," user API calls, ",curr_limit$remaining[curr_limit$query == "application/rate_status"]," status API calls left."))
}

And here's what's in the console - you'll notice that the remaining calls for the rate limiting info itself is what terminates the loop:

[1] "searched for katyperry, and there were 862 user API calls, 67 status API calls left."
[1] "searched for justinbieber, and there were 861 user API calls, 64 status API calls left."

(much of the same, until...)

[1] "searched for Oprah, and there were 841 user API calls, 4 status API calls left."
[1] "searched for Drake, and there were 840 user API calls, 1 status API calls left."
Error: rate limit exceeded.

I realise I'm inflating the rate at which the status API call limit decreases by calling it for the console output, but even so, since it starts at 180 it could never accommodate a full run through the 900-call user timeline limit.

A flag disabling the check or making it less frequent could work, perhaps?
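One possible workaround, sketched here under the assumption that `rate_limit()` returns `remaining` and `reset` columns as shown in the output above, is to check the quota only occasionally and sleep through the reset window, instead of querying the limit on every iteration:

```r
# Hypothetical throttling helper (not part of rtweet).
wait_if_exhausted <- function(token, query = "statuses/user_timeline") {
  rl <- rate_limit(token = token, query = query)
  if (rl$remaining[rl$query == query] == 0) {
    # reset is a difftime; sleep until the window reopens
    Sys.sleep(as.numeric(rl$reset[rl$query == query], units = "secs") + 1)
  }
}

for (i in seq_along(users)) {
  get_timeline(users[i], token = my_token, n = 100)
  if (i %% 25 == 0) wait_if_exhausted(my_token)  # check occasionally, not per call
}
```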

Can't seem to pass extra params to stream_tweets

The streaming API supposedly takes extra parameters like lang; it would be nice to stream only English tweets. But there doesn't seem to be an input mechanism in stream_tweets that actually makes it through to the API call. Unless there's a way to format q such that it breaks into arguments?

tokens vignette

I've just realized that when using cat for writing the tokens file path to .Renviron, one should add append = TRUE, else it erases the previous .Renviron.

search_tweets without results not working?

I think there is a problem, if a search doesn't return anything:

tw <- search_tweets("foobarfoo", n = 1200, type="recent", token = twitter_token, geocode="52.520583,13.402765,5km")
Searching for tweets...
Error in x[[1]] : subscript out of bounds

(if I just use foobar it works (1 result)).

Do I need to type something extra or is it a bug?

sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.11.6 (El Capitan)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] rtweet_0.2.6

loaded via a namespace (and not attached):
[1] compiler_3.3.0 httr_1.2.1 R6_2.1.3 tools_3.3.0 curl_2.0
[6] jsonlite_1.1 openssl_0.9.4

Issue with storing tweet data

I get the following issue when using search_tweets(). I have the dev version installed.

tweets2 <- search_tweets("africa", n=25)
Searching for tweets...
Error in as.POSIXct.default(value) :
do not know how to convert 'value' to class “POSIXct”

I get the same error for get_timeline(), lookup_users()

However, stream_tweets() works, as does get_friends(), get_followers()

umlauts in tweets

When creating wordclouds from timelines I noticed that the text component of get_timeline suppresses all umlauts, and possibly other characters from Western non-English languages. Could you modify the parser to keep those letters?

Error: Rate limit exceeded - please wait!

First of all: thank you very much for making and sharing this package!

All works well except for the search_users() function. I have a list of names of persons for whom I want to check whether they have a Twitter account or not. When I use search_users() in a loop, it stops after checking just a few (e.g. six) names with: Error: Rate limit exceeded - please wait!
Of course I know the Twitter API has limits, but they seem to be reached very quickly. Any thoughts about how to deal with this problem?

kind regards,
Jelle

if_load() can never be TRUE

I am trying to store the my API key in .Renviron as recommended by the vignette. However, I am getting this error upon running any rtweet function that requires authentication:

Error in if (if_load(pat)) { : argument is not interpretable as logical

It seems to me that the error is a result of the fact that if_load() can never be TRUE. I think this is only a problem if identical(pat, ".httr-oauth") is FALSE (see the load_tokens function).

get_followers() returns a non-valid next_cursor value

I am using the get_followers function to get the followers of a group of users in a dataset. I have found that the output from the function does not always look the same. Sometimes the output is a data frame with an ids column and a valid cursor.

followers <- get_followers("33344949",n="all")
str(followers)

'data.frame': 42012 obs. of 1 variable:
$ ids: num 8.16e+17 7.83e+17 8.16e+17 8.09e+17 8.16e+17 ...
attr(*, "next_cursor")= chr "0"
NULL

next_cursor(followers)

"0"

However, some other times the output is a 5 column dataframe, where the cursor is not a character.

followers <- get_followers("110938362",n="all")
str(followers)

'data.frame': 882 obs. of 5 variables:
$ ids: num 1.11e+09 5.54e+08 7.51e+07 8.07e+07 2.88e+08 ...
$ NA : int 0 0 0 0 0 0 0 0 0 0 ...
$ NA : chr "0" "0" "0" "0" ...
$ NA : int 0 0 0 0 0 0 0 0 0 0 ...
$ NA : chr "0" "0" "0" "0" ...
attr(*, "next_cursor")=function (ids)
NULL

next_cursor(followers)

function (ids)
{
attr(ids, "next_cursor")
}
<environment: namespace:rtweet>

Am I doing something wrong?

Rate limit for multiple tokens

First, I would like to thank you for your work! I find specially useful and convenient the possibility of saving several tokens together.
However, although tokens are correctly iterated over when included in an API call, I have found that when rate_limit() is called, it returns the rate limit for the first of the tokens only. As a result, the output may show 0 calls remaining when other tokens still have calls available.
As I need to make a large number of API calls, I would like to be able to check the total number of calls remaining (or at least whether there are any left).
For instance, in the situation below, rate_limit of the joint tokens returns the same output as rate_limit of the first one (0). However, the get_followers query still succeeds because it uses the available calls of the second token.

twitter_token1 <- c(
create_token(app = "XXXXX",
consumer_key = "XXXXXXXX",
consumer_secret = "XXXXXXXXXXX")
)
twitter_token2 <- c(
create_token(app = "XXXXXXXXX",
consumer_key = "XXXXXXXX",
consumer_secret = "XXXXXXXXXXXXXXX")
)

twitter_tokens <- c(twitter_token1,twitter_token2)
limit <- rate_limit(token = twitter_token1, query = "followers/ids", rest = TRUE)
print(limit)

             query limit remaining         reset
1 followers/ids    15         0              4.124966 mins

limit <- rate_limit(token = twitter_token2, query = "followers/ids", rest = TRUE)
print(limit)

             query limit remaining         reset
1 followers/ids    15         15            14.98748 mins

limit <- rate_limit(token = twitter_tokens, query = "followers/ids", rest = TRUE)
print(limit)

             query limit remaining         reset
1 followers/ids    15         0              4.124966 mins

followers <- get_followers("28918980",n=75000,page=-1,token=twitter_tokens)
print(str(followers))

'data.frame':	75000 obs. of  1 variable:
$ user_id: chr  "636025446" "802811092346900480" "741213275522752512" "816567129382199296" ...
attr(*, "next_cursor")= num 1.5e+18
NULL

Error when searching tweets

This works (well even if one gets 100 instead of 10 tweets):

> first7jobs <- search_tweets(q = "first7jobs", count = 10)
Searching for tweets...
Collected 100 tweets!

This doesn't:

first7jobs <- search_tweets(q = "first7jobs", count = 1000)
Searching for tweets...
Error in `[.data.frame`(dat, toplevel) : undefined columns selected

get_timeline -> changing the output a bit?

I'm currently using get_timeline and I've noticed it returns two data.frames now. For a package of mine (monkeylearn) I'd been advised to rather have only one data.frame as output with the other data.frame as attributes to the first one, so maybe in your case users could be an attribute of tweets so that it might be easier to deal with get_timeline output? It's just a suggestion of course, I find the function great already!

Moreover, when retrieving 2000 tweets from realDonaldTrump, the users data.frame contains the same row four times; should you add a unique() call somewhere?

Stream Tweets Issue

stream_tweets("trump", timeout =60)
Streaming tweets for 60 seconds...
|=========================================================================================================| 100%
opening file input connection.
Error: lexical error: invalid char in json text.
\n\n<meta http-equi
(right here) ------^
closing file input connection.

I downloaded the GitHub version of rtweet too. What's causing this? Sorry to keep submitting so many issues!

search_tweets() limit

Why this 18,000-tweet limit? I still use the deprecated twitteR package, and with it I was able to retrieve almost 200k tweets about a trending hashtag. When I tried to do the same with rtweet, I could only retrieve 17,745 for the same hashtag.

It would also be useful to include since and until parameters.

Kind regards

Error in search_users

I think there is an error in parsing the results from search_users().

Here's what I'm trying:

usersearch <- search_users(q = "obama", n = 500, token = Token.Twitter)

And here is the error I'm getting:

Error in `[[<-.data.frame`(`*tmp*`, "screen_name", value = c("obama1_obama",  : 
  replacement has 20 rows, data has 19

The thing is, if I run it with parse set to FALSE, there is no problem getting the data, which leads me to think that something is going on in the parser function that I can't seem to pin down!

plyget not handling pages with missing response fields

When a single page of results doesn't have data for a given field, the lapply call in plyget fails and returns NAs for all responses. This seems to most often be a problem for the geo-data, as the parse piper function almost uniformly returns all NAs for all coordinate data.

Error in from_js(r) : could not find function "http_type"

Hi,
When I run this code

get_followers("publicmoney", n = 75000, page = "-1", parse = TRUE, token = NULL)

I get this error:

Error in from_js(r) : could not find function "http_type"

What do you think the problem is?

Here is session info

R version 3.3.1 (2016-06-21)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X 10.9.5 (Mavericks)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] rtweet_0.2.0 httpuv_1.3.3 httr_1.0.0 jsonlite_1.0

loaded via a namespace (and not attached):
[1] assertthat_0.1 R6_2.1.2 magrittr_1.5 DBI_0.5 tools_3.3.1
[6] base64enc_0.1-3 dplyr_0.5.0 curl_1.2 tibble_1.1 Rcpp_0.12.6
[11] stringi_1.1.1 digest_0.6.10 stringr_1.0.0

Thanks

explanation on search_users

Hi, thank you for this package!
I played a little bit with it and have some difficulties understanding the function search_users.

When I try
users <- search_users("citizenscience", n=1000, verbose = T) I get 132 results, where Twitter returns only 60 accounts (usually they display up to 90 accounts).
I was therefore wondering whether the search was performed in the users' biographies or in their tweets?

Another problem relates to concatenated searches:
users <- search_users("citsci", n=1000, verbose = T) returns 36 results (same number as in the Twitter website search interface)
users <- search_users("\"citizen sciences\"", n=1000, verbose = T) returns 2 results (same number as in the Twitter website search interface)
But the combined search
search_users("\"citizen sciences\" OR citsci", n=1000, verbose = T) returns the error

Error in attr(d, "tweets") <- x[["tweets"]] :
attempt to set an attribute on NULL

Thank you very much for your insights on this.

get_friends() error

Getting the following error when using get_friends(), but only some of the time. It will work for a few calls then won't.


> get_friends("potus")
Error in names(x) <- "ids" : 
  'names' attribute [1] must be the same length as the vector [0]

lookup_statuses get 400 malformed error

running this demo script does not return a dataframe

statuses <- c("potus", "hillaryclinton", "realdonaldtrump",
"fivethirtyeight", "cnn", "espn", "twitter")

twt_df <- lookup_statuses(statuses)
twt_df

Went looking at the internals and it seems to think that TWIT is making a call but nothing is being returned. What could be happening?

I tried the same query in https://apigee.com/console/twitter and it also gets a 200 OK response but no data.

Posting plots generated in R via post_tweet

Hi there, the title basically says it. I'd like to be able to generate a plot in R:

my_plot <- plot(rnorm(100), rnorm(100))

And then embed it as media in a tweet from post_tweet:

post_tweet(my_plot)

Is this possible?

Many thanks, and great package!
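One way to do this, assuming post_tweet() accepts a media file path (check ?post_tweet in your version), is to render the plot to a file first:

```r
# Render the plot to a temporary PNG, then attach it as media.
tmp <- tempfile(fileext = ".png")
png(tmp, width = 800, height = 600)
plot(rnorm(100), rnorm(100))
dev.off()

post_tweet(status = "A plot made in R", media = tmp)
```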

Rate limit handling with search API

I was wondering if there are plans to build in rate limit handing to the search API, similar to what twitteR implemented, where the call is paused until the rate limit resets. Or do you intend this to be done with loops?

If the latter, do you think you'll implement error handling so that when the rate limit is exceeded, the current results are returned, rather than an error message without results?

Cheers!

date-specific queries

Hi there,
I see in the to-do-list you mention documentation for date-specific queries.
are they possible already in the current version?
I've been searching a bit on the code but I must have missed this feature.

Geo-located tweets do not work.

When I try to retrieve geo-located tweets it does not work. Would you know why it does not work?

This is exactly what I am using:
Zika_tw <- search_tweets("#Zika", n=100, type= "mixed", max_id = NULL, parse = TRUE, lang="pt", geocode="-22.9729560,-43.1954997")

When I remove the geocode, it works. But I definitely need tweets from one place. What do you recommend me to do?

Logical values not parsed back correctly.

I am doing the following in my code

timeline <- get_timeline(pol, n = 4000, max_id = NULL, parse = TRUE,
                         clean_tweets = TRUE, lang = "en")
save_as_csv(timeline, destfile)

At a later point I am reading back the saved tweets for processing as follows

tweets<-read.csv(paste0("data/",pol,".tweet.tweets.csv"),stringsAsFactors=F)

What I have noticed is that this roundtripping converts logical columns, e.g. is_retweet or is_quote_status, to strings. The odd part is that the TRUE string has a leading space, i.e. " TRUE" instead of "TRUE", which makes conversion back to logical using as.logical difficult.
for e.g. see attached file
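Until the writer is fixed, one workaround sketch is to strip the padding before coercing the columns back (column names follow the report above):

```r
# Read the saved tweets back and repair the padded logical columns.
tweets <- read.csv(destfile, stringsAsFactors = FALSE)
for (col in c("is_retweet", "is_quote_status")) {
  tweets[[col]] <- as.logical(trimws(tweets[[col]]))
}
```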

User IDs returned as double, not character

I've noticed that when running get_followers() or get_friends() User IDs are returned as double, like so:

          value
          <dbl>
1  4.113280e+07
2  3.977203e+08
3  1.580207e+09
4  7.538385e+17
5  3.177425e+09
6  1.792013e+08
7  1.027735e+08
8  1.687261e+09
9  1.298028e+08
10 3.050178e+09
# ... with 530 more rows

Perhaps these should be returned as character values by default?

number of tweets received by get_timeline

I requested 2000 tweets for about 50 people, and when their timeline had more than that, I received 1990 or 1991 for each one.
This is not a serious bug, but it is strange behavior.
Do you see a reason for this?

Do you return likes?

"GET favorites/list
Returns the 20 most recent Tweets liked by the authenticating or specified user."

if not, that would be an enhancement I'd be interested in

Error in lookup_users

Here is the error message "Error in is.data.frame(x) : object 'usr' not found"

#Get follower IDs (This Twitter account has roughly 800 followers)
followers = get_followers(user = "xyz")

#Convert to DF (In preparation to get the last digit of the ID)
followers_DF = as.data.frame(followers)

#Create last digit (The purpose is to break it down by last digit to find out the potential cause)
library(stringr)
followers_DF$last_digit <- str_sub(followers_DF$ids, -1)

#Transform num to char
followers_DF$user_id = as.character(followers_DF$ids)

table(followers_DF$last_digit)
# 0   1   2   3   4   5   6   7   8   9 
# 100  72 113  60  89  77  96  72  84  81 

#Lookup users
users <- lookup_users(followers_DF$user_id)
Error in is.data.frame(x) : object 'usr' not found

And, I have done a subset with the last digit = 0.

zero <- lookup_users(subset(followers_DF, last_digit==0, select = user_id))

Then I received this error message. The count for 0 should be 100; not sure how it is showing 93 and 92.

Error in `[[<-.data.frame`(`*tmp*`, "screen_name", value = c("gymibot",  : 
  replacement has 93 rows, data has 92

some more result columns that shouldn't be doubles

I wasn't sure if I could reopen an issue, but this is a follow-up on #11 basically. I noticed that the output of search_tweets() returns a couple of columns as double that should really be characters:

status_id
user_id
mentions_user_id
retweet_status_id

No biggie as you can easily as.character() them, but I thought you might want to change the default.
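A sketch of that conversion (note that IDs above 2^53 may already have lost precision once stored as doubles, so parsing them as character in the first place is the safer default):

```r
# Coerce the ID columns back to character after the fact.
id_cols <- c("status_id", "user_id", "mentions_user_id", "retweet_status_id")
for (col in id_cols) {
  tweets[[col]] <- as.character(tweets[[col]])
}
```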

lookup_users issue

lookup_users("neuwirthe")

does not give the full information, there are lots of NAs.

lookup_users("neuwirthe","ArminWolf")

works, but

lookup_users("neuwirthe","neuwirthe")

also does not give the full information. So it seems that lookup_users needs to be given at least two different user names to return correct results.

Tweets and users use the same created_at variable to mean different things

This is partly a stylistic choice I suppose, but it seems like something you'd commonly want to do is extract user data from a set of tweets with users_data, then join the two data.frames together. This fails at the moment because they both use created_at, but one means the date the user account was created, while the other means the date the tweet was created.
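A workaround sketch, assuming a shared user_id key between the two data frames (dplyr used here for illustration):

```r
library(dplyr)

# Disambiguate the clashing column before joining.
users <- users_data(tweets) %>%
  rename(account_created_at = created_at)
joined <- left_join(tweets, users, by = "user_id")
```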

scroll.R

You're lacking a closing ")" after the two return(invisible() calls (doesn't seem worth a PR ;-)).

issue with encoding

When using the search_tweets() function to extract tweets in Portuguese (lang = "pt-br"), it drops characters like ç, ã, Á ("vdeo" instead of "vídeo").

Missing rows in latest update

Since the 25th December updates, search_tweets and stream_tweets return data frames with ~5% of rows consisting only of NAs. This seems to happen pre-parsing, and doesn't happen when installing from the 18th December update.

`search_tweets`

Shouldn't

  params <- list(q = q,
    result_type = type,
    count = 100,
    max_id = max_id,
    ...)

be

  params <- list(q = q,
    result_type = type,
    count = n,
    max_id = max_id,
    ...)

?

Error for filtered stream

Hi,
first of all thank you for creating this awesome package! I just used stream_twitter to download filtered data and while it was working flawlessly on the first attempt, all other attempts failed with errors:

stream <- stream_tweets('trump, clinton', 30, token=token)
Streaming tweets for 30 seconds...
Downloading: 2.2 MB       kB     
Finished streaming tweets!
opening file input connection.
 Found 1000 records...
incomplete final line found on 'C:\Users\kasus\AppData\Local\Temp\RtmpiUQL6r\file1dec6db41121.json'closing file input connection.
incomplete final line found on 'C:\Users\kasus\AppData\Local\Temp\RtmpiUQL6r\file1dec6db41121.json'Error in rl[[seq_len(length(rl) - 1)]] : 
  attempt to select more than one element in vectorIndex

Do you have any idea what's the problem here? I first thought that my syntax was incorrect and you expect a vector for multiple key terms, like c('trump', 'clinton'), but this resulted in another error before even starting the stream.
