
wbdata's People

Contributors

glatterf42, oliversherouse, yaph


wbdata's Issues

Quarterly External Debt Statistics (SDDS) Value error

I receive a ValueError when querying some of the SDDS external debt data.

```python
indicators = {'DT.DOD.DECT.CD.AR.US': 'Gross external debt'}
df = wbdata.get_dataframe(indicators, convert_date=False)
```

```
Traceback (most recent call last):
  File "F:\source\Python\macro\source_external_debt.py", line 36, in <module>
    df = wbdata.get_dataframe(indicators, convert_date=False)  # load data from worldbank
  File "<string>", line 2, in get_dataframe
  File "C:\Python27\lib\site-packages\wbdata\api.py", line 53, in uses_pandas
    return f(*args, **kwargs)
  File "C:\Python27\lib\site-packages\wbdata\api.py", line 416, in get_dataframe
    for i in indicators}
  File "C:\Python27\lib\site-packages\wbdata\api.py", line 416, in <dictcomp>
    for i in indicators}
  File "C:\Python27\lib\site-packages\wbdata\api.py", line 167, in get_data
    data = fetcher.fetch(query_url, args)
  File "C:\Python27\lib\site-packages\wbdata\fetcher.py", line 163, in fetch
    raw_response = fetch_url(query_url)
  File "C:\Python27\lib\site-packages\wbdata\fetcher.py", line 135, in fetch_url
    raise ValueError
ValueError
```

Other, very similar fields work fine, for example: `{'DT.DOD.DECT.CD.GG.AR.US': 'Gross external debt general government'}`

Many indicators not working

I have been testing wbdata for some time now. It works well with some indicators, such as

```python
{'1.1_ACCESS.ELECTRICITY.TOT': 'Access to Electricity (%)',
 '4.1.1_TOTAL.ELECTRICITY.OUTPUT': 'Energy Production',
 'DAK.INFR.ROD.CR': 'ciccio2',
 'SP.POP.TOTL': 'Population',
 'SP.URB.TOTL': 'Urban population',
 'SP.URB.TOTL.IN.ZS': 'Urban population (%)'}
```

but it does not work with many others, such as

```python
{'EN.PRD.ELEC.POP.ZS': 'energy pro capita',
 'DAK.INFR.ROD.CR': 'infrastructure',
 'SI.POV.NGAP': 'Poverty Gap (index)',
 'SI.POV.GAP2': 'Poverty Rate (in % of population)'}
```

and many others.

The error I get is the following:


```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      2
      3 # grab indicators above for countries above and load into data frame
----> 4 df = wbdata.get_dataframe(indicators, country=countries, convert_date=False)
      5
      6 # df is "pivoted"; pandas' unstack function helps reshape it into something plottable

<string> in get_dataframe(indicators, country, data_date, convert_date, keep_levels)

/Users/lraso/anaconda/lib/python3.5/site-packages/wbdata/api.py in uses_pandas(f, *args, **kwargs)
     51     if not pd:
     52         raise ValueError("Pandas must be installed to be used")
---> 53     return f(*args, **kwargs)
     54
     55

/Users/lraso/anaconda/lib/python3.5/site-packages/wbdata/api.py in get_dataframe(indicators, country, data_date, convert_date, keep_levels)
    414     to_df = {indicators[i]: get_data(i, country, data_date, convert_date,
    415                                      pandas=True, keep_levels=keep_levels)
--> 416              for i in indicators}
    417     return pd.DataFrame(to_df)
    418

/Users/lraso/anaconda/lib/python3.5/site-packages/wbdata/api.py in <dictcomp>(.0)
    414     to_df = {indicators[i]: get_data(i, country, data_date, convert_date,
    415                                      pandas=True, keep_levels=keep_levels)
--> 416              for i in indicators}
    417     return pd.DataFrame(to_df)
    418

/Users/lraso/anaconda/lib/python3.5/site-packages/wbdata/api.py in get_data(indicator, country, data_date, convert_date, pandas, column_name, keep_levels)
    165     else:
    166         args.append(("date", data_date.strftime("%Y")))
--> 167     data = fetcher.fetch(query_url, args)
    168     if convert_date:
    169         data = convert_dates_to_datetime(data)

/Users/lraso/anaconda/lib/python3.5/site-packages/wbdata/fetcher.py in fetch(query_url, args, cached)
    166     if response is None:
    167         raise ValueError("Got no response")
--> 168     results.extend(response[1])
    169     this_page = response[0]['page']
    170     pages = response[0]['pages']

TypeError: 'NoneType' object is not iterable
```
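When a batch mixes working and broken indicators, one pragmatic pattern is to fetch them one at a time and collect the failures instead of letting the whole `get_dataframe` call die. A minimal sketch; `fetch_each` and the stub fetcher are hypothetical, standing in for calls like `wbdata.get_data`:

```python
def fetch_each(fetch, indicators):
    """Fetch indicators one at a time, collecting failures instead of
    aborting the whole batch; `fetch` stands in for a call like
    wbdata.get_data."""
    results, failed = {}, []
    for code in indicators:
        try:
            results[code] = fetch(code)
        except (ValueError, TypeError):
            failed.append(code)
    return results, failed

# demo with a stub fetcher that fails like the traceback above
def stub(code):
    if code == "SI.POV.NGAP":
        raise TypeError("'NoneType' object is not iterable")
    return [{"value": 1}]

ok, bad = fetch_each(stub, ["SP.POP.TOTL", "SI.POV.NGAP"])
print(sorted(ok), bad)  # ['SP.POP.TOTL'] ['SI.POV.NGAP']
```

The surviving indicators can then be combined into a DataFrame while the failing codes are reported separately.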

Unpickling error?

After installing with `pip install wbdata`, I get an unpickling error on a simple `import wbdata as wb`:

```
UnpicklingError                           Traceback (most recent call last)
<ipython-input-21-08dfa68d526a> in <module>
----> 1 import wbdata as wb

~\Anaconda3\envs\WorldBank\lib\site-packages\wbdata\__init__.py in <module>
      4 __version__ = "0.3.0"
      5
----> 6 from .api import (  # noqa: F401
      7     get_country,
      8     get_data,

~\Anaconda3\envs\WorldBank\lib\site-packages\wbdata\api.py in <module>
     16
     17 from decorator import decorator
---> 18 from . import fetcher
     19
     20 BASE_URL = "https://api.worldbank.org/v2"

~\Anaconda3\envs\WorldBank\lib\site-packages\wbdata\fetcher.py in <module>
     62
     63
---> 64 CACHE = Cache()
     65
     66

~\Anaconda3\envs\WorldBank\lib\site-packages\wbdata\fetcher.py in __init__(self)
     40                 self.cache = {
     41                     i: (date, json)
---> 42                     for i, (date, json) in pickle.load(cachefile).items()
     43                     if (TODAY - datetime.date.fromordinal(date)).days < EXP
     44                 }

UnpicklingError: pickle data was truncated
```

When I previously installed this package, I remember having to restart the kernel because something was too slow. Could the cache have become corrupted then? I tried creating a new environment and reinstalling, but that didn't work either. Thanks for any help!
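A truncated pickle (for example from a kernel killed mid-write) is usually cured by deleting the cache file so it gets rebuilt on the next import. A minimal sketch, assuming you have located the cache file on disk (the path varies by platform; `clear_cache` is a hypothetical helper, demonstrated here on a throwaway file):

```python
import os
import tempfile

def clear_cache(path):
    """Delete a (possibly corrupted) pickle cache file if present."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False

# demo with a throwaway file standing in for the real cache path
fd, demo_path = tempfile.mkstemp()
os.close(fd)
print(clear_cache(demo_path))  # True: file removed
print(clear_cache(demo_path))  # False: already gone
```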

Monthly data_date

Some indicators allow monthly dates to be queried. For example, the call below is valid:

http://api.worldbank.org/country/1W/indicator/IRON_ORE_SPOT?date=2009M01:2013M13

wbdata.get_data() only allows annual data to be queried:

```python
if data_date:
    if type(data_date) is tuple:
        data_date_str = ":".join((i.strftime("%Y") for i in data_date))
        args.append(("date", data_date_str))
    else:
        args.append(("date", data_date.strftime("%Y")))
```

It is not clear to me how to differentiate between a user wanting an annual series or a monthly series when passing datetime objects. Should an additional parameter be passed to get_data? Or should you scrap the datetime requirement for the data_date parameter and just have the user pass a 2-tuple of strings, e.g. ("2009", "2011") or ("2009M01", "2010M02")?

Not exactly sure where this may fit into your thinking but if you look at:

http://databank.worldbank.org/data/views/variableselection/selectvariables.aspx?source=global-economic-monitor-(gem)-commodities

and click on the time dimension, there is a filter for "monthly" and "annual" in the leftmost pane. Thanks for this great library! Happy to discuss.
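One possible shape for the suggested change: keep today's behavior for datetime objects (formatted as years) but pass raw strings such as "2009M01" through untouched. This is only a sketch of the idea, not wbdata's actual implementation; `format_dates` is a hypothetical name:

```python
import datetime

def format_dates(data_date):
    """Sketch: format datetimes as years (today's behavior), but pass
    strings like '2009M01' through untouched."""
    def fmt(d):
        return d.strftime("%Y") if isinstance(d, datetime.date) else str(d)
    if isinstance(data_date, tuple):
        return ":".join(fmt(d) for d in data_date)
    return fmt(data_date)

print(format_dates(("2009M01", "2013M12")))  # 2009M01:2013M12
print(format_dates((datetime.date(2009, 1, 1), datetime.date(2011, 1, 1))))  # 2009:2011
```

This sidesteps the annual-vs-monthly ambiguity entirely: a datetime means "annual", a string means "use exactly this".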

Stalled import on Mac OS and Python 3.12

We just switched from pandas-datareader to wbdata as a dependency for the pyam package, see https://pyam-iamc.readthedocs.io.

I have the following issue on Mac OS 14.2.1 and Python 3.12.2, using both Poetry 1.8.2 and pip 24.0: the kernel freezes upon import wbdata, with the kernel's RAM usage climbing to 2GB.

There is no error message to report, because there is no error, just a freeze.
fyi @glatterf42

JSONDecodeError: Expecting value for some indicators

To recreate the error

```python
import wbdata

wbdata.get_data('SL.TLF.TOTL.IN', country='all')
```

Error

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    wbdata.get_data('SL.TLF.TOTL.IN', country='all')
  File "C:\Users\joshua.lee\Anaconda3\lib\site-packages\wbdata\api.py", line 289, in get_data
    data = fetcher.fetch(query_url, args, cache=cache)
  File "C:\Users\joshua.lee\Anaconda3\lib\site-packages\wbdata\fetcher.py", line 122, in fetch
    response = get_response(url, args, cache=cache)
  File "C:\Users\joshua.lee\Anaconda3\lib\site-packages\wbdata\fetcher.py", line 100, in get_response
    return json.loads(response)
  File "C:\Users\joshua.lee\Anaconda3\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Users\joshua.lee\Anaconda3\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\joshua.lee\Anaconda3\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Expecting value
```

Thoughts

I have tried tracing through to the fetcher to see if I can understand the cause of the issue, but am struggling. When I put the query_url into a browser (https://api.worldbank.org/v2/countries/all/indicators/SL.TLF.TOTL.IN), the output is largely as expected to the naked eye.

For other indicators, this issue only arises if I specify a start / end date. It's a bit of a mystery to me!

Any help gratefully appreciated.
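The traceback bottoms out in `json.loads`, which suggests the API returned an empty or otherwise non-JSON body for this particular request. That failure mode is easy to reproduce in isolation:

```python
import json

# an empty (or HTML) response body makes json.loads fail exactly
# like the traceback above
try:
    json.loads("")
    msg = None
except json.JSONDecodeError as err:
    msg = err.msg
print(msg)  # Expecting value
```

So the bug is likely on the response side (throttling, an error page, or a dropped connection), not in the decoding itself.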

Citation and Paper

This isn't an issue so much as something to consider. I have used the library in some of my work and referenced the docs and repository. It would be good to be able to cite a paper with a DOI, even just a pre-print, mostly so that you get proper credit for the work through a citation. Happy to help.

Martinique / MTQ is missing from wbdata

A query using wbdata does not return data, despite it being available on the WB website. For example, compare:

https://databank.worldbank.org/reports.aspx?source=worldwide-governance-indicators

With the following script:

```python
import datetime

import wbdata

data_dates = (datetime.datetime(2019, 1, 1), datetime.datetime(2019, 1, 1))

MTQ_missing = wbdata.get_dataframe(
    {'PV.EST': 'values'},
    country=('MTQ'), data_date=data_dates, convert_date=False, keep_levels=True,
)
```

Which returns:


IndexError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/fetcher.py in fetch(url, args, cache)
123 try:
--> 124 results.extend(response[1])
125 this_page = response[0]["page"]

IndexError: list index out of range

During handling of the above exception, another exception occurred:

RuntimeError Traceback (most recent call last)
in
----> 1 MTQ_missing = wbdata.get_dataframe({'PV.EST':'values'},
2 country=('MTQ'), data_date=data_dates, convert_date=False, keep_levels=True)

</Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/decorator.py:decorator-gen-127> in get_dataframe(indicators, country, data_date, freq, source, convert_date, keep_levels, cache)

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/api.py in uses_pandas(f, *args, **kwargs)
83 if not pd:
84 raise ValueError("Pandas must be installed to be used")
---> 85 return f(*args, **kwargs)
86
87

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/api.py in get_dataframe(indicators, country, data_date, freq, source, convert_date, keep_levels, cache)
480 :returns: a WBDataFrame
481 """
--> 482 serieses = [
483 (
484 get_series(

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/api.py in (.0)
482 serieses = [
483 (
--> 484 get_series(
485 indicator=indicator,
486 country=country,

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/api.py in get_series(indicator, country, data_date, freq, source, convert_date, column_name, keep_levels, cache)
184 :returns: WBSeries
185 """
--> 186 raw_data = get_data(
187 indicator=indicator,
188 country=country,

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/api.py in get_data(indicator, country, data_date, freq, source, convert_date, pandas, column_name, keep_levels, cache)
287 if source:
288 args["source"] = source
--> 289 data = fetcher.fetch(query_url, args, cache=cache)
290 if convert_date:
291 data = convert_dates_to_datetime(data)

/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wbdata/fetcher.py in fetch(url, args, cache)
128 try:
129 message = response[0]["message"][0]
--> 130 raise RuntimeError(
131 f"Got error {message['id']} ({message['key']}): "
132 f"{message['value']}"

RuntimeError: Got error 120 (Invalid value): The provided parameter value is not valid

get_country(incomelevel="OEC") produces "IndexError: list index out of range" error

The readthedocs documentation includes this line:

```python
countries = [i['id'] for i in wbdata.get_country(incomelevel="OEC", display=False)]
```

This returns the error:

```
IndexError: list index out of range
```

The same error is returned by the simpler:

```python
wbdata.get_country(incomelevel="OEC")
```

Other incomelevel values like oec, OECD, and oecd produce the same error.

get_country() without the incomelevel argument works fine.

The function documentation doesn't include more information, such as a list of valid incomelevel values:

wbdata/wbdata/api.py

Lines 253 to 261 in 0541f6d

```python
def get_country(country_id=None, incomelevel=None, lendingtype=None,
                display=None):
    """
    Retrieve information on a country or regional aggregate. Can specify
    either country_id, or the aggregates, but not both
    :country_id: a country id or sequence thereof. None returns all countries
        and aggregates.
    :incomelevel: desired incomelevel id or ids.
```
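For reference, the World Bank API lists the currently valid income level ids at https://api.worldbank.org/v2/incomeLevels, and the "OEC" (High income: OECD) grouping appears to have been retired on the API side, which would explain the empty response. Each country record carries an incomeLevel field, so a client-side filter looks like this (the records below are illustrative, not fetched):

```python
# illustrative records shaped like World Bank country metadata; each
# record carries an incomeLevel id, so filtering client-side is simple
countries = [
    {"id": "USA", "incomeLevel": {"id": "HIC", "value": "High income"}},
    {"id": "AFG", "incomeLevel": {"id": "LIC", "value": "Low income"}},
    {"id": "IND", "incomeLevel": {"id": "LMC", "value": "Lower middle income"}},
]

def by_income(countries, level_id):
    return [c["id"] for c in countries if c["incomeLevel"]["id"] == level_id]

print(by_income(countries, "HIC"))  # ['USA']
```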

Can I get a country classification?

Hi, thanks for the nice package.

I'm trying to plot total population per continent, or a similar classification.
Is it possible to get a country classification using wbdata? When I download a file manually from the World Bank, I get some country metadata (with region information, though no continent). Can I access that metadata using wbdata?

Thanks
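Each country record returned by the API does include region metadata, so a continent-like grouping can be built client-side. A sketch using illustrative records shaped like the API's country payload (the real records come from a call like wbdata.get_country):

```python
from collections import defaultdict

# illustrative records shaped like World Bank country metadata;
# the region field is part of the real API response
countries = [
    {"id": "BRA", "region": {"value": "Latin America & Caribbean"}},
    {"id": "ARG", "region": {"value": "Latin America & Caribbean"}},
    {"id": "KEN", "region": {"value": "Sub-Saharan Africa"}},
]

by_region = defaultdict(list)
for c in countries:
    by_region[c["region"]["value"]].append(c["id"])

print(dict(by_region))
```

Summing a population indicator over each region's country list then gives the per-region totals.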

Useful API Error Messages

Error messages from the API are currently silenced, which is bad. They should be surfaced to the user instead.
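The API returns its error details in a message array, so surfacing them is mostly a matter of re-raising with that payload. A sketch; `raise_api_error` is a hypothetical helper, but the payload shape mirrors real API error responses:

```python
def raise_api_error(response):
    """Sketch: surface the API's own error payload instead of
    swallowing it."""
    message = response[0]["message"][0]
    raise RuntimeError(
        f"Got error {message['id']} ({message['key']}): {message['value']}"
    )

# demo with a payload shaped like a real API error response
try:
    raise_api_error([{"message": [{"id": "120", "key": "Invalid value",
                                   "value": "The provided parameter value is not valid"}]}])
except RuntimeError as err:
    text = str(err)

print(text)  # Got error 120 (Invalid value): The provided parameter value is not valid
```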

AttributeError: module 'collections' has no attribute 'Sequence' Error

The wbdata Python module raises "AttributeError: module 'collections' has no attribute 'Sequence'" on Python 3.10. A good explanation of why is in this Stack Overflow answer (https://stackoverflow.com/questions/69596494/unable-to-import-freegames-python-package-attributeerror-module-collections), so I won't repeat it here.

It can be easily solved by replacing import collections with import collections.abc as collections in the api.py file. I am not sure whether this works on other Python versions, but on 3.10 any API request involving dates fails without it.
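The workaround can be verified directly: the abc classes have been the canonical home of Sequence since Python 3.3, so aliasing the import keeps old call sites working unchanged:

```python
# Python 3.10 removed deprecated aliases such as collections.Sequence;
# aliasing the abc module under the old name keeps existing code working
import collections.abc as collections

print(isinstance((1, 2, 3), collections.Sequence))  # True
print(isinstance("abc", collections.Sequence))      # True
```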

WB data doesn't go beyond 1996 for some countries

Thank you for this helpful package! One of the issues I faced has to do with Taiwan data. If I try to query it on its own, as in wbdata.get_data('IC.LGL.CRED.XQ', country=['TWN']), I get:

```
<ipython-input-76-471bf8328dd6> in <module>()
----> 1 wbdata.get_data('IC.LGL.CRED.XQ', country=['TWN'])

1 frames
/usr/local/lib/python3.6/dist-packages/wbdata/fetcher.py in fetch(url, args, cache)
    122         response = get_response(url, args, cache=cache)
    123         try:
--> 124             results.extend(response[1])
    125             this_page = response[0]["page"]
    126             pages = response[0]["pages"]

TypeError: 'NoneType' object is not iterable
```

However, if I run wbdata.get_data('IC.LGL.CRED.XQ', country=['TWN', 'HKG']), I get either Taiwan data duplicated from Hong Kong's or no Taiwan data at all (it is not really reproducible), i.e., the fetch silently fails for Taiwan (which I assume doesn't exist in the WB indicator data).

Thank you for your time.

Add source arg for get_data, get_series, get_dataframe

Per @polk54, it turns out that an indicator can be in more than one source, and not all sources are updated at the same time, which is a hoot and also a holler. This means we need source arguments for the various data retrieval functions.

For get_data and get_series, this is straightforward enough, but it is a bit trickier for get_dataframe, since you may want indicators from different sources in the same DataFrame. My initial thought is to allow the source to have one of three values:

  1. None, default
  2. an integer argument, which would apply to all variables
  3. an indicator->source dictionary

That seems to me to satisfy the law of least astonishment.
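The three-way argument could be resolved per indicator with a small helper. A sketch; `resolve_source` is a hypothetical name and the source ids are illustrative:

```python
def resolve_source(source, indicator):
    """Sketch of the proposed three-way source argument: None (default),
    a single id applied to every indicator, or an indicator -> source dict."""
    if source is None:
        return None
    if isinstance(source, dict):
        return source.get(indicator)
    return source

print(resolve_source(None, "SP.POP.TOTL"))                 # None
print(resolve_source(2, "SP.POP.TOTL"))                    # 2
print(resolve_source({"SP.POP.TOTL": 16}, "SP.POP.TOTL"))  # 16
```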

date kwarg does not seem to work for get_dataframe()

Thank you for the update! After the kwarg rename from data_date to date, the get_dataframe() method no longer seems to accept a pair of datetime objects or of strings. Both wbdata.get_dataframe({"NY.GDP.MKTP.CD": "value"}, date=["2000", "2022"]) and

```python
start_year = datetime(2000, 1, 1)
end_year = datetime(2022, 1, 1)
df = wbdata.get_dataframe({"NY.GDP.MKTP.CD": "value"}, date=[start_year, end_year])
```

yield TypeError: expected string or bytes-like object, with the error logging pointing to this line in the _parse_date method:

```python
if PATTERN_YEAR.fullmatch(date):
```
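The TypeError is consistent with a year regex being applied directly to the argument. A minimal reproduction under that assumption; `PATTERN_YEAR` here is a stand-in, not wbdata's actual pattern:

```python
import re
from datetime import datetime

# stand-in for the regex _parse_date apparently applies
PATTERN_YEAR = re.compile(r"\d{4}")

# re.Pattern.fullmatch requires a string; handing it a datetime
# reproduces the reported error
try:
    PATTERN_YEAR.fullmatch(datetime(2000, 1, 1))
    reason = None
except TypeError as err:
    reason = str(err)
print(reason)  # e.g. "expected string or bytes-like object"
```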

Feature: Add docs on working with multi-index DataFrames

For users with rudimentary pandas experience (like me), it takes quite a while to discover:

  1. that wbdata returns DataFrames with a MultiIndex
  2. how to perform simple operations, like selecting a few rows, on a multi-index DataFrame

Adding docs on this will provide a much better quickstart experience for new users.

NB: I plan on adding these docs.
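Until those docs land, here is the kind of example they might include: selecting and reshaping rows of a (country, date) MultiIndex frame shaped like the ones wbdata returns (the data values below are made up):

```python
import pandas as pd

# toy frame shaped like a wbdata result: a (country, date) MultiIndex
idx = pd.MultiIndex.from_product(
    [["Afghanistan", "Albania"], ["2015", "2016"]], names=["country", "date"]
)
df = pd.DataFrame({"value": [6.25, 3.37, 38.46, 41.35]}, index=idx)

print(df.xs("Albania", level="country"))   # all rows for one country
print(df.loc[("Afghanistan", "2016")])     # one (country, date) row
print(df.unstack(level="country"))         # reshape into something plottable
```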

The get_data function raises a ValueError.

The get_data function raises a ValueError when used. As an example, here is one set of input values that raised the error:

```python
indicator = 'GC.XPN.INTP.CN'
country = 'AUS'
```

feature request: option to turn cache on/off

Thanks a ton for this package; it helps quite a bit with my work. One request, though: an option to turn the cache feature on or off as a keyword argument, for example wb.get_data(cache='off').

I find that when the cache grows quite large (over 2000 recent searches) the time to complete a response slows down drastically because of the pickle.dump() in fetcher.py. For example, on my wifi a typical wb api call for a single indicator, 20 years, and all countries, takes around 2-3 seconds if the cache is clear. If the cache is full, it can take 12-14 seconds.

Additionally, once the cache reaches a certain size (over 3,000 or so recent API calls), every API call results in an OSError until the cache is deleted and the kernel is restarted. This is similar to this thread, although for me it's an OSError instead of an EOFError; running rm 'file/path/to/cache/' fixes it.

My solution to this has been to delete the following line within the fetch function of fetcher.py:

```python
CACHE[query_url] = (daycount(), raw_response)
```

I realize most users aren't making as many API calls as I am (I'm trying to download every WB indicator for this project), but it would be nice to be able to turn the cache on and off in API calls.
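A sketch of what an opt-out cache could look like; all names here are illustrative rather than wbdata's actual internals:

```python
def fetch(url, get, cache, use_cache=True):
    """Sketch of an opt-out cache: skip both the lookup and the store
    when use_cache is False."""
    if use_cache and url in cache:
        return cache[url]
    result = get(url)
    if use_cache:
        cache[url] = result
    return result

# demo with a stub network call that counts its invocations
calls = []
def slow_get(url):
    calls.append(url)
    return f"data:{url}"

cache = {}
fetch("indicator/A", slow_get, cache)                   # network hit, cached
fetch("indicator/A", slow_get, cache)                   # served from cache
fetch("indicator/A", slow_get, cache, use_cache=False)  # network hit, not re-cached
print(len(calls), sorted(cache))  # 2 ['indicator/A']
```

Skipping the store also avoids the expensive pickle.dump on every call, which is the slowdown described above.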

Data accuracy

I noticed that the data for, e.g., "WP11672.1" only covers the years 2011 and 2014. How do I get access to data from, say, 2000-2016?

cannot handle a non-unique multi index

Running the indicators

```python
{u'IC.CRD.INFO.XQ': u'Depth of credit information index (0=low to 8=high)',
 u'IC.ISV.CPI': u'Creditor participation index (0-4)'}
```

through the function

```python
df = wbdata.get_dataframe(indicators, convert_date=True)
```

returns a "cannot handle a non-unique multi-index" error. Running the two indicators separately works fine.

Is this a bug or a misspecification on my side?

```
Exception                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 df = wbdata.get_dataframe(indicators)

<string> in get_dataframe(indicators, country, data_date, convert_date, keep_levels)

C:\Python27\lib\site-packages\wbdata\api.pyc in uses_pandas(f, *args, **kwargs)
     51     if not pd:
     52         raise ValueError("Pandas must be installed to be used")
---> 53     return f(*args, **kwargs)
     54
     55

C:\Python27\lib\site-packages\wbdata\api.pyc in get_dataframe(indicators, country, data_date, convert_date, keep_levels)
    415                                      pandas=True, keep_levels=keep_levels)
    416              for i in indicators}
--> 417     return pd.DataFrame(to_df)
    418
    419

C:\Python27\lib\site-packages\pandas\core\frame.pyc in __init__(self, data, index, columns, dtype, copy)
    273                                  dtype=dtype, copy=copy)
    274         elif isinstance(data, dict):
--> 275             mgr = self._init_dict(data, index, columns, dtype=dtype)
    276         elif isinstance(data, ma.MaskedArray):
    277             import numpy.ma.mrecords as mrecords

C:\Python27\lib\site-packages\pandas\core\frame.pyc in _init_dict(self, data, index, columns, dtype)
    409             arrays = [data[k] for k in keys]
    410
--> 411         return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
    412
    413     def _init_ndarray(self, values, index, columns, dtype=None, copy=False):

C:\Python27\lib\site-packages\pandas\core\frame.pyc in _arrays_to_mgr(arrays, arr_names, index, columns, dtype)
   5499
   5500     # don't force copy because getting jammed in an ndarray anyway
-> 5501     arrays = _homogenize(arrays, index, dtype)
   5502
   5503     # from BlockManager perspective

C:\Python27\lib\site-packages\pandas\core\frame.pyc in _homogenize(data, index, dtype)
   5798                 # Forces alignment. No need to copy data since we
   5799                 # are putting it into an ndarray later
-> 5800                 v = v.reindex(index, copy=False)
   5801             else:
   5802                 if isinstance(v, dict):

C:\Python27\lib\site-packages\pandas\core\series.pyc in reindex(self, index, **kwargs)
   2424     @Appender(generic._shared_docs['reindex'] % _shared_doc_kwargs)
   2425     def reindex(self, index=None, **kwargs):
-> 2426         return super(Series, self).reindex(index=index, **kwargs)
   2427
   2428     @Appender(generic._shared_docs['fillna'] % _shared_doc_kwargs)

C:\Python27\lib\site-packages\pandas\core\generic.pyc in reindex(self, *args, **kwargs)
   2513         # perform the reindex on the axes
   2514         return self._reindex_axes(axes, level, limit, tolerance, method,
-> 2515                                   fill_value, copy).__finalize__(self)
   2516
   2517     def _reindex_axes(self, axes, level, limit, tolerance, method, fill_value,

C:\Python27\lib\site-packages\pandas\core\generic.pyc in _reindex_axes(self, axes, level, limit, tolerance, method, fill_value, copy)
   2526             ax = self._get_axis(a)
   2527             new_index, indexer = ax.reindex(labels, level=level, limit=limit,
-> 2528                                             tolerance=tolerance, method=method)
   2529
   2530             axis = self._get_axis_number(a)

C:\Python27\lib\site-packages\pandas\core\indexes\multi.pyc in reindex(self, target, method, level, limit, tolerance)
   1861                                                      tolerance=tolerance)
   1862                 else:
-> 1863                     raise Exception("cannot handle a non-unique multi-index!")
   1864
   1865         if not isinstance(target, MultiIndex):

Exception: cannot handle a non-unique multi-index!
```
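A client-side workaround, assuming the underlying problem is duplicate (country, date) rows in one of the fetched series: drop the duplicated index entries before combining the series into one DataFrame. The series below is synthetic:

```python
import pandas as pd

# synthetic series with a duplicated (country, date) row, the condition
# that triggers "cannot handle a non-unique multi-index!"
idx = pd.MultiIndex.from_tuples(
    [("AFG", "2015"), ("AFG", "2015"), ("AFG", "2016")],
    names=["country", "date"],
)
s = pd.Series([1.0, 1.0, 2.0], index=idx)

deduped = s[~s.index.duplicated(keep="first")]
print(len(s), len(deduped))  # 3 2
```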

Switch to AppDirs library

The cache currently uses code borrowed from appdirs, but it's probably safer and smarter just to use their library, and it's lightweight enough that I don't mind having it as a dependency.

how to get indicator by topic

Hi there, how can I get the indicator list by topic, for example indicators for Economy & Growth? I don't see any function to do that. Thanks!

import wbdata in python script

Hello,
I started using wbdata with Python 2.7.3 on a Debian GNU/Linux 7 machine and everything went fine, but after a while I was no longer able to import the wbdata module:

```
>>> import wbdata
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/wbdata/__init__.py", line 23, in <module>
    from .api import (get_country, get_data, get_dataframe, get_panel,
  File "/usr/local/lib/python2.7/dist-packages/wbdata/api.py", line 31, in <module>
    from . import fetcher
  File "/usr/local/lib/python2.7/dist-packages/wbdata/fetcher.py", line 104, in <module>
    if not len(CACHE.cache) == 0:
  File "/usr/local/lib/python2.7/dist-packages/wbdata/fetcher.py", line 82, in cache
    cache = pickle.load(cachefile)
EOFError
```

I tried reinstalling wbdata but nothing changed.
Could anyone suggest a solution? Thank you.

Data for 2017 not available when querying for all countries

When I query an indicator for one country, the 2017 value is available:

```
indicators = {"CC.PER.RNK": "Control of Corruption"}
df = wbdata.get_dataframe(indicators, country=("AFG"))
Out[11]:
      Control of Corruption
date
2017               3.846154
2016               3.846154
2015               6.250000
2014               5.288462
2013               1.895735
```

Doing the same for all countries does not return the 2017 data point:

```
df = wbdata.get_dataframe(indicators)
Out[13]:
                  Control of Corruption
country     date
Afghanistan 2016               3.365385
            2015               6.250000
            2014               5.288462
            2013               1.895735
            2012               2.369668
            2011               0.947867
            2010               0.952381
            2009               0.956938
            2008               0.485437
            2007               0.970874
            2006               3.902439
            2005               2.439024
            2004               5.853659
            2003               5.050505
            2002               5.050505
            2000               5.076142
            1998               9.793815
            1996               4.301075
Albania     2016              41.346153
            2015              38.461540
            2014              34.615383
            2013              27.014217
```

Search that isn't terrible

The current indicator search just checks whether each word of the query appears in the string. There's got to be something better that isn't too heavy, even if it's just regex.
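A regex-based search over indicator names is indeed cheap. A sketch over records shaped like the API's indicator entries (the `search` helper and sample records are illustrative):

```python
import re

# sample records shaped like the API's {"id": ..., "name": ...} entries
indicators = [
    {"id": "SP.POP.TOTL", "name": "Population, total"},
    {"id": "SP.URB.TOTL", "name": "Urban population"},
    {"id": "NY.GDP.MKTP.CD", "name": "GDP (current US$)"},
]

def search(pattern, indicators):
    """Return ids of indicators whose name matches the regex."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [i["id"] for i in indicators if rx.search(i["name"])]

print(search(r"^urban", indicators))      # ['SP.URB.TOTL']
print(search(r"population", indicators))  # ['SP.POP.TOTL', 'SP.URB.TOTL']
```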

Add last_updated attribute to returned Data objects

I recently told someone that the API didn't provide "last updated" metadata. Apparently it does, and I've just been frivolously throwing it away. Like an old glove! last_updated should be a datetime attribute on returned single data series (either the list of dictionaries or the pandas Series). For a returned DataFrame, it should be a dictionary with column names as keys and datetime objects as values.
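Since a built-in list can't carry extra attributes, the list-of-dictionaries case would need a small subclass. A sketch; `DataList` is a hypothetical name and the row/date values are made up:

```python
import datetime

class DataList(list):
    """Sketch: a list subclass that can carry a last_updated attribute,
    since plain built-in lists cannot hold extra attributes."""
    def __init__(self, data, last_updated=None):
        super().__init__(data)
        self.last_updated = last_updated

rows = DataList(
    [{"country": "AFG", "date": "2016", "value": 3.37}],
    last_updated=datetime.datetime(2024, 1, 15),
)
print(len(rows), rows.last_updated.date())  # 1 2024-01-15
```

The same trick works for pandas objects via subclassing, which wbdata already does for its WBSeries/WBDataFrame types.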

World Development Indicators Dataset Value Error

I receive a ValueError when querying some of the World Development Indicators data.

```python
indicators = {u'NE.CON.PETC.ZS': u'Household final consumption (%GDP)'}
df = wbdata.get_dataframe(indicators, convert_date=False)
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    df = wbdata.get_dataframe(indicators, convert_date=True)
  File "<string>", line 2, in get_dataframe
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\api.py", line 53, in uses_pandas
    return f(*args, **kwargs)
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\api.py", line 416, in get_dataframe
    for i in indicators}
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\api.py", line 416, in <dictcomp>
    for i in indicators}
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\api.py", line 167, in get_data
    data = fetcher.fetch(query_url, args)
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\fetcher.py", line 163, in fetch
    raw_response = fetch_url(query_url)
  File "C:\ProgramData\Miniconda3\envs\py27\lib\site-packages\wbdata\fetcher.py", line 135, in fetch_url
    raise ValueError
ValueError
```

The same happens for NE.CON.PETC.KD.ZG, NV.SRV.TETC.ZS, and NV.SRV.TETC.KD.ZG (also queried separately).
