glassdoor-review-scraper's People

Contributors

matthewchatham, muhammadmehran, nate9676, yihaozhadan


glassdoor-review-scraper's Issues

No Such Element Exception

Hi All,

I'm trying to run this code as of Oct 2020, but I keep running into a "No Such Element Exception" error. Any help here would be greatly appreciated! Here is the output and traceback:

2020-10-26 16:52:48,040 INFO 3 :(21411) - Scraping up to 25 reviews.
2020-10-26 16:52:48,096 INFO 2 :(21411) - Signing in to [email protected]

NoSuchElementException Traceback (most recent call last)
in
44
45 if __name__ == '__main__':
---> 46     main()

in main()
7
8
----> 9 sign_in()
10
11 if not args.start_from_url:

in sign_in()
7 # import pdb;pdb.set_trace()
8
----> 9 email_field = browser.find_element_by_name('username')
10 password_field = browser.find_element_by_name('password')
11 submit_btn = browser.find_element_by_xpath('//button[@type="submit"]')

/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py in find_element_by_name(self, name)
494 element = driver.find_element_by_name('foo')
495 """
--> 496 return self.find_element(by=By.NAME, value=name)
497
498 def find_elements_by_name(self, name):

/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py in find_element(self, by, value)
976 return self.execute(Command.FIND_ELEMENT, {
977 'using': by,
--> 978 'value': value})['value']
979
980 def find_elements(self, by=By.ID, value=None):

/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py in execute(self, driver_command, params)
319 response = self.command_executor.execute(driver_command, params)
320 if response:
--> 321 self.error_handler.check_response(response)
322 response['value'] = self._unwrap_value(
323 response.get('value', None))

/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
240 alert_text = value['alert'].get('text')
241 raise exception_class(message, screen, stacktrace, alert_text)
--> 242 raise exception_class(message, screen, stacktrace)
243
244 def _value_or_default(self, obj, key, default):

NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}
(Session info: chrome=86.0.4240.111)
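The sign-in page markup evidently changed, so the very first `find_element_by_name('username')` lookup fails. A common mitigation is to poll for the element rather than failing on the first attempt. Below is a minimal, framework-agnostic sketch of that retry pattern (the `username` field name and the timeout values are assumptions; Glassdoor may use different attributes now):

```python
import time

def wait_for(find, timeout=10.0, poll=0.5):
    """Repeatedly call `find` until it returns without raising, or time out."""
    deadline = time.monotonic() + timeout
    last_exc = None
    while time.monotonic() < deadline:
        try:
            return find()
        except Exception as exc:  # e.g. NoSuchElementException
            last_exc = exc
            time.sleep(poll)
    raise TimeoutError("element did not appear in %.1fs" % timeout) from last_exc

# With Selenium this would be used as, e.g.:
#   email_field = wait_for(lambda: browser.find_element_by_name('username'))
```

Selenium ships the same idea as `WebDriverWait(...).until(...)`; if the timeout still fires, the selector itself is stale and needs updating against the current page source.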

NoSuchElementException

It worked well until last week, but then it started showing the following nosuchelementexception for firms with more than 10 reviews (more than one page):
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".pagination__PaginationStyle__next"}

I tried the fixes other people in this forum posted for the same issue, but it still doesn't work. Is anyone else having this problem? I also changed the sleep time, with no effect. Perhaps Glassdoor changed their markup? The error clearly comes from the "more pages" and "go to the next page" functions, but I'm not sure how to fix it. I've been working on this for days without success. Any help would be greatly appreciated. Thank you!!!
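Glassdoor periodically renames the CSS classes the scraper keys on, which is consistent with the `.pagination__PaginationStyle__next` lookup suddenly failing. One defensive pattern is to try a list of candidate selectors and use the first one that matches. A minimal sketch (every selector string here is a hypothetical example, not verified against the current site):

```python
# Candidate "next page" selectors, newest guesses first; all hypothetical.
NEXT_BUTTON_SELECTORS = [
    "button.nextButton",
    ".pagination__PaginationStyle__next",
    ".pagingControls .next a",
]

def find_first(selectors, find_all):
    """Return the first element matched by any selector, else None.

    `find_all` is a callable like browser.find_elements_by_css_selector:
    it takes a selector string and returns a (possibly empty) list.
    """
    for sel in selectors:
        matches = find_all(sel)
        if matches:
            return matches[0]
    return None

# With Selenium:
#   next_btn = find_first(NEXT_BUTTON_SELECTORS,
#                         browser.find_elements_by_css_selector)
#   if next_btn is None:
#       ...  # no next page, or every selector is out of date
```

Returning `None` instead of raising lets the pagination loop distinguish "last page reached" from a hard failure.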

scrape_years not working

I am getting the following error:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"tag name","selector":"p"}
(Session info: headless chrome=74.0.3729.169)

I also tried .find_element_by_class_name('mainText') instead of .find_element_by_tag_name('p'), but still got the same error. Any ideas?

No such element: Unable to locate element: {"method":"css selector","selector":".next"}

I was not able to understand this issue. Could someone help me?

DevTools listening on ws://127.0.0.1:49170/devtools/browser/91092bb7-fd9b-493e-814e-fece11203277
2019-11-22 12:57:28,971 INFO 423 :main.py(34780) - Scraping up to 15 reviews.
2019-11-22 12:57:28,993 INFO 361 :main.py(34780) - Signing in to [email protected]
2019-11-22 12:57:38,056 INFO 342 :main.py(34780) - Navigating to company reviews
2019-11-22 12:57:49,113 INFO 286 :main.py(34780) - Extracting reviews from page 1
2019-11-22 12:57:49,160 INFO 291 :main.py(34780) - Found 10 reviews on page 1
2019-11-22 12:57:49,518 INFO 297 :main.py(34780) - Scraped data for "Growth through challenge"(Mon Mar 11 2019 08:15:39 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:50,343 INFO 297 :main.py(34780) - Scraped data for "Coming to work here was the best decision ever."(Fri Aug 24 2018 20:34:04 GMT+0300 (Eastern European Summer Time))
2019-11-22 12:57:50,955 INFO 297 :main.py(34780) - Scraped data for "I am a buisness phone banker"(Thu Nov 21 2019 07:36:02 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:51,531 INFO 297 :main.py(34780) - Scraped data for "Wells Fargo a fine place to work."(Wed Nov 20 2019 05:41:15 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:52,044 INFO 297 :main.py(34780) - Scraped data for "Premier banker"(Tue Nov 19 2019 22:03:04 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:52,559 INFO 297 :main.py(34780) - Scraped data for "Good Place"(Tue Nov 19 2019 10:37:48 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:53,082 INFO 297 :main.py(34780) - Scraped data for "Great Environment"(Tue Nov 19 2019 13:48:19 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:53,490 INFO 297 :main.py(34780) - Scraped data for "Amazing"(Tue Nov 19 2019 15:36:25 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:53,877 INFO 297 :main.py(34780) - Scraped data for "Wonderful Environment to Grow and Learn"(Mon Nov 18 2019 19:29:21 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:54,258 INFO 297 :main.py(34780) - Scraped data for "Very corporate but good job overall"(Mon Nov 18 2019 17:45:43 GMT+0200 (Eastern European Standard Time))
2019-11-22 12:57:54,294 INFO 326 :main.py(34780) - Going to page 2
Traceback (most recent call last):
File "main.py", line 465, in
main()
File "main.py", line 453, in main
go_to_next_page()
File "main.py", line 330, in go_to_next_page
'next').find_element_by_tag_name('a')
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 398, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".next"}
(Session info: headless chrome=78.0.3904.97)

No Such Element Exception

Hi, I get the following error with the latest code and was wondering how to fix this problem.

2020-04-09 23:37:11,677 INFO 416 :main.py(27691) - Configuring browser
2020-04-09 23:37:18,237 INFO 458 :main.py(27691) - Scraping up to 25 reviews.
2020-04-09 23:37:18,269 INFO 395 :main.py(27691) - Signing in to #########.com
2020-04-09 23:37:34,673 INFO 375 :main.py(27691) - Navigating to company reviews
2020-04-09 23:37:44,163 INFO 322 :main.py(27691) - Extracting reviews from page 1
2020-04-09 23:37:44,822 INFO 327 :main.py(27691) - Found 10 reviews on page 1
Traceback (most recent call last):
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 500, in
main()
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 480, in main
reviews_df = extract_from_page()
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 331, in extract_from_page
data = extract_review(review)
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 317, in extract_review
res[field] = scrape(field, review, author)
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 300, in scrape
return fdict[field](review)
File "/Users/millie/PycharmProjects/glassdoor-review-scraper/main.py", line 155, in scrape_years
res = review.find_element_by_class_name('common__EiReviewTextStyles__allowLineBreaks').find_element_by_xpath('preceding-sibling::p').text
File "/Users/millie/Library/Python/3.7/lib/python/site-packages/selenium/webdriver/remote/webelement.py", line 398, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "/Users/millie/Library/Python/3.7/lib/python/site-packages/selenium/webdriver/remote/webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "/Users/millie/Library/Python/3.7/lib/python/site-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/Users/millie/Library/Python/3.7/lib/python/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/Users/millie/Library/Python/3.7/lib/python/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".common__EiReviewTextStyles__allowLineBreaks"}
(Session info: chrome=81.0.4044.92)

I've seen others post similar issues in the past; were they caused by updates on Glassdoor's side?
Any help would be much appreciated. Thanks.

another No Such Element Exception

I hope this isn't a duplicate of an existing issue.
I'm also getting a NoSuchElementException, but unlike the others, mine fails before the review-extraction step even starts. Can anyone help me figure out how to fix this error?

Configuring browser
Scraping up to 1000
Signing in
Navigating to company
Traceback (most recent call last):
File "main.py", line 462, in
main()
File "main.py", line 427, in main
reviews_exist = navigate_to_reviews()
File "main.py", line 350, in navigate_to_reviews
"//*[@id='EmpLinksWrapper']/div//a[2]")
File "//anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 394, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "//anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "//anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "//anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id='EmpLinksWrapper']/div//a[2]"}
(Session info: headless chrome=76.0.3809.87)

Missing License

I would like to use this repo in a project of mine, but it's missing a license, which unfortunately limits my ability to use it. You indicate elsewhere that it's usable as long as it is cited; would you be able to add an official license to this repository to reflect that? Thank you very much!

Use multiprocessing

The script could operate in parallel, with one worker per page of 10 reviews.

To do this effectively, I should determine a good number of workers (probably 2-6) and assign each worker a subset of the total pages of reviews, so we need to compute the total number of pages. I have confirmed that there is a clear mapping between page number and URL, so we can send each worker to the appropriate pages with ease.
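The page-assignment step described above can be sketched without any scraping code: compute the page count from the review total (10 reviews per page) and deal pages out to the workers round-robin. The worker count and the page-to-URL mapping are assumptions to be tuned:

```python
import math

REVIEWS_PER_PAGE = 10

def assign_pages(total_reviews, workers=4):
    """Split page numbers 1..N round-robin across `workers` workers."""
    n_pages = math.ceil(total_reviews / REVIEWS_PER_PAGE)
    pages = list(range(1, n_pages + 1))
    return [pages[i::workers] for i in range(workers)]

# Each worker would then fetch its pages, e.g. with multiprocessing.Pool:
#   with multiprocessing.Pool(workers) as pool:
#       frames = pool.map(scrape_pages, assign_pages(total, workers))
# where scrape_pages(pages) opens its own browser instance and visits
# the per-page URLs via the page-number-to-URL mapping mentioned above.
```

Each worker must own its own WebDriver instance; Selenium sessions are not safe to share across processes.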

Generic Error when Trying to Run

When I try to run the example data pull, the following error pops up. Any ideas on a syntax fix?

python main.py --headless --url "https://www.glassdoor.com/Overview/Working-at-Wells-Fargo-EI_IE8876.11,22.htm" --limit 1000 -f wells_fargo_reviews.csv
File "", line 1
python main.py --headless --url "https://www.glassdoor.com/Overview/Working-at-Wells-Fargo-EI_IE8876.11,22.htm" --limit 1000 -f wells_fargo_reviews.csv
^
SyntaxError: invalid syntax

I'm a novice Python user running this from Spyder, which was recently downloaded so versions should be up to date.
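That traceback is Python's parser rejecting the command itself, which suggests the line was pasted into the Python/IPython console rather than a system shell: `python main.py ...` is a shell command, not a Python statement. A quick demonstration of why the interpreter reports `invalid syntax` for it:

```python
# A shell invocation is not valid Python source, so compiling it fails.
command = 'python main.py --headless --url "https://example.com" --limit 1000 -f out.csv'

try:
    compile(command, "<console>", "exec")
    raised = False
except SyntaxError:
    raised = True

print("SyntaxError raised:", raised)
```

The fix is to run the command from a terminal (cmd or Anaconda Prompt); in Spyder's IPython console, prefixing the line with `!` hands it to the shell instead.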

Does not recognize URL as an argument

When I run the following command from the terminal on my Ubuntu 21.10 machine, it gives me the following error.

python3 main.py --headless --start_from_url "https://www.glassdoor.co.in/Reviews/insert-company-name" --limit 1000 -f glassdoor_reviews.csv

main.py: error: unrecognized arguments: https://www.glassdoor.co.in/Reviews/company-name

Is it possible that this is so because it's the Indian site?

The Wells-Fargo example command also gives me the following error

Traceback (most recent call last):
File "/home/shivani/ISB/glassdoor-review-scraper-master/main.py", line 89, in
d = json.loads(f.read())
File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 11 (char 10)

I've tried editing the main.py file for this particular error but not sure what's wrong. Open to solutions!

Any leads?
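The second traceback comes from `json.loads` on the JSON file main.py reads at line 89 (the credentials file; the exact filename may differ in your copy). `Extra data: line 1 column 11` means valid JSON ended at character 10 and something followed it, e.g. two objects back to back or stray text after the closing brace. A quick way to check the file is well-formed (the field names below follow the script's `--username`/`--password` options and are assumptions):

```python
import json

# Well-formed: a single JSON object.
good = '{"username": "you@example.com", "password": "hunter2"}'
creds = json.loads(good)
print(creds["username"])

# Malformed: anything after the first complete JSON value -> "Extra data".
bad = '{"u": "x"} {"p": "y"}'
try:
    json.loads(bad)
except json.JSONDecodeError as exc:
    print(exc.msg)  # Extra data
```

An `Expecting value: line 1 column 1 (char 0)` variant of the same error usually means the file is empty or saved with a BOM/encoding Python can't parse.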

Can I add new variable?

Hello, I want to add some new variables to scrape from the website. I know how to add the related code in main.py, but I don't know how to change the schema.
This is the original schema:

(binary contents of the compiled `schema.pyc`, pasted as text; the readable fragments show the SCHEMA list: date, employee_title, location, employee_status, review_title, years_at_company, helpful, pros, cons, advice_to_mgmt, rating_overall, rating_balance, rating_culture, rating_career, rating_c…, rating_mgmt)

This is my schema:

(binary contents of my edited `schema.pyc`, pasted as text; the readable fragments show the modified list ending in: helpful, recommand, outlook, ceoapprova)

But I got error:

Traceback (most recent call last):
File "maintest.py", line 31, in
from schema import SCHEMA
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 677, in _load_unlocked
File "", line 724, in exec_module
File "", line 857, in get_code
File "", line 525, in _compile_bytecode
EOFError: marshal data too short

Anyone can help?
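The pasted bytes are a compiled `.pyc` file, and `marshal data too short` is what Python raises when it imports a `.pyc` that was edited by hand and no longer matches the bytecode format. The fix is to edit the plain-text `schema.py` (and delete any `__pycache__` directory so stale bytecode is regenerated), never the compiled file. Based on the field names visible in the dump, the edited source would look roughly like this (the exact list is an assumption reconstructed from the fragments):

```python
# schema.py -- edit this plain-text source, never the compiled .pyc.
# Field list reconstructed from the readable fragments above (assumed).
SCHEMA = [
    'date',
    'employee_title',
    'location',
    'employee_status',
    'review_title',
    'years_at_company',
    'helpful',
    'recommend',      # new field ("recommand" in the dump)
    'outlook',        # new field
    'ceo_approval',   # new field ("ceoapprova" in the dump)
]
```

Each field name in SCHEMA must match a scraper function registered in main.py, so add the corresponding `scrape_*` entries there as well.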

Error when running Example 1

Hi, I tried to run Example 1, but got the error below. Please help. Many thanks!


C:\Users\liusi\Desktop\glassdoor-review-scraper-master-matt>python main.py --headless --url "https://www.glassdoor.com/Overview/Working-at-Wells-Fargo-EI_IE8876.11,22.htm" --limit 1000 -f wells_fargo_reviews.csv
2020-07-02 20:00:08,254 INFO 416 :main.py(6016) - Configuring browser

DevTools listening on ws://127.0.0.1:51582/devtools/browser/9236c725-665a-4462-9068-b1034a7a0893
2020-07-02 20:00:10,612 INFO 458 :main.py(6016) - Scraping up to 1000 reviews.
2020-07-02 20:00:10,630 INFO 395 :main.py(6016) - Signing in to [email protected]
Traceback (most recent call last):
File "main.py", line 500, in
main()
File "main.py", line 462, in main
sign_in()
File "main.py", line 408, in sign_in
submit_btn.click()
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 80, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (387, 488). Other element would receive the click:

...


(Session info: headless chrome=83.0.4103.116)

Getting a Syntax Error

Hi, I'm trying to run your example but I have the following error:

File "main.py", line 322
logger.info(f'Extracting reviews from page {page[0]}')
^
SyntaxError: invalid syntax

Do I need to change something in main.py in order to use it?

Thanks for your work
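`logger.info(f'...')` is an f-string, which only parses on Python 3.6+; an older interpreter reports exactly this `invalid syntax` at the `f'...'` literal. Either run the script with Python ≥ 3.6, or rewrite such lines with `str.format`, which produces the same text:

```python
page = [1]

# f-string (Python 3.6+) and the equivalent pre-3.6 spelling:
msg_new = f'Extracting reviews from page {page[0]}'
msg_old = 'Extracting reviews from page {}'.format(page[0])

print(msg_new == msg_old)  # True
```

Checking `python --version` before running is the quickest way to confirm which case applies.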

New error

Dear Matthew,

Thank you very much for sharing. It is very helpful and quite easy to install.

I tried to run the sample code 1 that you provided but it gives me the following errors.

I ran it from the command prompt and I am using Python 3.7. What do you think is causing the problem?

File "main.py", line 89, in
d = json.loads(f.read())
File "C:\Users\shuoy\Anaconda3\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\shuoy\Anaconda3\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\shuoy\Anaconda3\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Looking forward to your reply.

Thanks,
Shuo

ChromeDriver only supports Chrome version 78, but my Chrome version is 78

Hi, I tried the code, and got this error:

2019-09-29 15:41:04,215 INFO 377 :main.py(10918) - Configuring browser
Traceback (most recent call last):
File "main.py", line 412, in
browser = get_browser()
File "main.py", line 382, in get_browser
browser = wd.Chrome(options=chrome_options)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
desired_capabilities=desired_capabilities)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 78

But my chrome version is 78, so I have no idea how to fix it...

pagingControls Error

I got the following error about the paging control when I try to scrape the data.

python.exe main.py --headless --url "https://www.glassdoor.com/Reviews/Walmart-Reviews-E715.htm" --limit 100 -f test.csv

2019-05-31 15:06:49,643 INFO 377 :main.py(17796) - Configuring browser

DevTools listening on ws://127.0.0.1:50831/devtools/browser/8c7890e8-fe24-41f7-b77f-d22dae3f6c3e
2019-05-31 15:06:51,700 INFO 419 :main.py(17796) - Scraping up to 100 reviews.
2019-05-31 15:06:51,717 INFO 358 :main.py(17796) - Signing in to ******@ou.edu
2019-05-31 15:06:55,478 INFO 339 :main.py(17796) - Navigating to company reviews
2019-05-31 15:07:08,137 INFO 286 :main.py(17796) - Extracting reviews from page 1
2019-05-31 15:07:08,200 INFO 291 :main.py(17796) - Found 10 reviews on page 1
2019-05-31 15:07:08,677 INFO 297 :main.py(17796) - Scraped data for "The Best in Retail"(Thu May 30 2019 20:24:44 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:09,171 INFO 297 :main.py(17796) - Scraped data for "Walmart needs to bring worker dignity back into focus"(Wed May 29 2019 18:04:43 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:09,673 INFO 297 :main.py(17796) - Scraped data for "Great for college students"(Thu May 30 2019 12:25:57 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:10,042 INFO 297 :main.py(17796) - Scraped data for "Retail"(Thu May 30 2019 17:09:02 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:10,497 INFO 297 :main.py(17796) - Scraped data for "walmart"(Mon May 27 2019 17:17:41 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:10,966 INFO 297 :main.py(17796) - Scraped data for "Maintenance is well taken care of"(Tue May 28 2019 08:32:17 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:11,437 INFO 297 :main.py(17796) - Scraped data for "It was the best job that I had to be honest"(Wed May 29 2019 20:29:39 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:11,896 INFO 297 :main.py(17796) - Scraped data for "Great"(Wed May 29 2019 20:36:02 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:12,281 INFO 297 :main.py(17796) - Scraped data for "floater pharmacist"(Wed May 29 2019 21:10:58 GMT-0500 (Central Daylight Time))
2019-05-31 15:07:12,708 INFO 297 :main.py(17796) - Scraped data for "cashier"(Wed May 29 2019 23:11:49 GMT-0500 (Central Daylight Time))
Traceback (most recent call last):
File "main.py", line 461, in
main()
File "main.py", line 446, in main
while more_pages() and
File "main.py", line 314, in more_pages
paging_control = browser.find_element_by_class_name('pagingControls')
File "C:\Users\wang0040\AppData\Local\Continuum\miniconda3\envs\Default\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 564, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "C:\Users\wang0040\AppData\Local\Continuum\miniconda3\envs\Default\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
'value': value})['value']
File "C:\Users\wang0040\AppData\Local\Continuum\miniconda3\envs\Default\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\wang0040\AppData\Local\Continuum\miniconda3\envs\Default\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"class name","selector":"pagingControls"}
(Session info: headless chrome=74.0.3729.169)
(Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.1.7601 SP1 x86_64)

I also got the "No Such Element Exception" error from #8, but worked around it by commenting out the scrape_years part. I don't think that change caused the issue above, but I'm not sure.

Not pulling overall rating nor advice to management

Hey everyone,

I was able to modify the code to get it working (woo!) but am still unable to pull 1) the overall rating and 2) advice to management.
I've tried inspecting both elements on Glassdoor's page and updating the functions that pull them, with no luck.

Does anyone have any suggestions or has figured out a work-around for this?

JSONDecodeError

Really hoping to use this scraper, but ran into an issue. Any suggestions?

Traceback (most recent call last):
  File "main.py", line 89, in <module>
    d = json.loads(f.read())
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\json\__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 3 column 2 (char 37)

No Such Element Error

I'm getting a no such element error that I don't know how to fix. Help please.

2019-09-11 08:48:58,961 INFO 377    :main.py(1824) - Configuring browser

DevTools listening on ws://127.0.0.1:#####/devtools/browser/cccb51a6-3dc2-4f06-90db-660d#########

2019-09-11 08:49:03,942 INFO 419    :main.py(1824) - Scraping up to 1000 reviews.
2019-09-11 08:49:03,946 INFO 358    :main.py(1824) - Signing in to [email protected]
2019-09-11 08:49:06,541 INFO 339    :main.py(1824) - Navigating to company reviews
2019-09-11 08:49:12,674 INFO 286    :main.py(1824) - Extracting reviews from page 1
2019-09-11 08:49:12,696 INFO 291    :main.py(1824) - Found 10 reviews on page 1
2019-09-11 08:49:12,840 WARNING 126    :main.py(1824) - Failed to scrape employee_title
Traceback (most recent call last):
  File "main.py", line 461, in <module>
    main()
  File "main.py", line 441, in main
    reviews_df = extract_from_page()
  File "main.py", line 295, in extract_from_page
    data = extract_review(review)
  File "main.py", line 281, in extract_review
    res[field] = scrape(field, review, author)
  File "main.py", line 264, in scrape
    return fdict[field](review)
  File "main.py", line 156, in scrape_years
    'reviewBodyCell').find_element_by_tag_name('p')
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 305, in find_element_by_tag_name
    return self.find_element(by=By.TAG_NAME, value=name)
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
    {"using": by, "value": value})['value']
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
    return self._parent.execute(command, params)
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Users\dvnguyen\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"p"}
  (Session info: headless chrome=77.0.3865.75)

Error when running Example 1. Please help!

Hi, I tried to run Example 1 but it returned an error. Could you please help? Many thanks!
Please see below.


C:\Users\liusi\Desktop\glassdoor-review-scraper-master-matt>python main.py --headless --url "https://www.glassdoor.co.uk/Overview/Working-at-Wells-Fargo-EI_IE8876.11,22.htm" --limit 1000 -f wells_fargo_reviews.csv
2020-07-02 20:08:41,025 INFO 416 :main.py(4100) - Configuring browser

DevTools listening on ws://127.0.0.1:51703/devtools/browser/37f604c9-f5f4-4325-ac1d-a1b5514cff80
2020-07-02 20:08:43,379 INFO 458 :main.py(4100) - Scraping up to 1000 reviews.
2020-07-02 20:08:43,397 INFO 395 :main.py(4100) - Signing in to [email protected]
Traceback (most recent call last):
File "main.py", line 500, in
main()
File "main.py", line 462, in main
sign_in()
File "main.py", line 408, in sign_in
submit_btn.click()
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 80, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\liusi\AppData\Local\Programs\Python\Python38-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (387, 488). Other element would receive the click:

...


(Session info: headless chrome=83.0.4103.116)

Issues with subratings

I noticed two issues with the results for subratings (work/life balance, culture, comp, etc.).

  1. Glassdoor added a new subrating category "diversity and inclusion" which is missing from the code.
  2. There are numerous reviews that only contain ratings for a subset of the subrating categories. For instance, a review might only provide ratings for comp & benefits and senior management; in the output CSV files those ratings then land in the columns for work/life balance and culture & values.

The first issue was a quick fix by adding another iteration of the existing code in the main and schema files, but I have not been able to solve the second problem yet.
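The second problem is positional parsing: when a review omits some subratings, the remaining values shift into the wrong columns. A more robust approach is to pair each rating value with its visible label and map labels to schema fields, leaving absent categories empty. A sketch of that mapping step (the label strings and field names below are assumptions; check them against the live page):

```python
# Visible label -> schema field; both sides are assumed examples.
LABEL_TO_FIELD = {
    "Work/Life Balance": "rating_balance",
    "Culture & Values": "rating_culture",
    "Diversity & Inclusion": "rating_diversity",
    "Career Opportunities": "rating_career",
    "Compensation and Benefits": "rating_comp",
    "Senior Management": "rating_mgmt",
}

def parse_subratings(label_value_pairs):
    """Map (label, value) pairs to schema fields; missing ones stay None."""
    res = {field: None for field in LABEL_TO_FIELD.values()}
    for label, value in label_value_pairs:
        field = LABEL_TO_FIELD.get(label)
        if field is not None:
            res[field] = value
    return res
```

With this, a review that rates only comp & benefits and senior management fills exactly `rating_comp` and `rating_mgmt`, instead of spilling into the first two columns.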

Cloudflare anti-bot protection

Hi Matthew,
I tried to run your repo in a Docker container on an AWS server but ran into Cloudflare protection (JS challenge/captcha).
Have you experienced anything similar? And if so, how did you deal with it or what would you recommend?

Default decoder decode error

File "C:\Users\ashii\main.py", line 88, in
d = json.loads(f.read())
File "C:\Users\ashii\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\ashii\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\ashii\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

using the below URL:

python main.py --headless --url "https://www.glassdoor.com/Overview/Working-at-Stitch-Fix-EI_IE783817.11,21.htm" --username [email protected] --password XXX --limit 1000 -f glassdoor_reviews.csv
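This particular error usually means the file being parsed is empty or is not JSON at all (for example, an HTML error page saved where JSON was expected). A minimal reproduction of the exact message:

```python
import json

# "Expecting value: line 1 column 1 (char 0)" is what json.loads raises when
# the input is empty or starts with non-JSON content.
try:
    json.loads('')
except json.JSONDecodeError as e:
    err = str(e)
print(err)
```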

Error

I am getting the following error

TypeError: 'type' object is not iterable

for following the line of code

res = pd.DataFrame([], columns=Schema)
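For reference, the `columns` argument must be an iterable of column names; if `Schema` is a class object rather than a list, pandas raises exactly this error. A minimal sketch (the `SCHEMA` names here are illustrative, not the repo's actual schema):

```python
import pandas as pd

# `columns` must be an iterable of names, e.g. a list of strings. Passing a
# class object instead raises "TypeError: 'type' object is not iterable".
SCHEMA = ['date', 'employee_title', 'pros', 'cons', 'rating_overall']

res = pd.DataFrame([], columns=SCHEMA)
print(list(res.columns))
```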

Chrome web driver arg chrome_options is deprecated

Chrome web driver arg chrome_options is deprecated. Please use "options" instead. For example, in main.py line 382, change to

browser = wd.Chrome(options=chrome_options)

Here is the reference webdriver.py.

By the way, may I be a contributor to your repository?

Syntax error?

No matter what I do, I get this:

SyntaxError: invalid syntax
[script code]
File "main.py", line 286
logger.info(f'Extracting reviews from page {page[0]}')

I think I did everything I needed to, but still nothing changes. Thank you for your help!
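In case it helps: `main.py` uses f-strings, which require Python 3.6+, so running it under Python 2 (or an early 3.x) produces exactly this SyntaxError on the reported line. A version-agnostic equivalent of that line would be:

```python
# f-strings (f'...{expr}...') are a Python 3.6+ feature. On older interpreters
# the same message can be built with str.format instead:
page = [1]
msg = 'Extracting reviews from page {}'.format(page[0])
print(msg)
```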

--start_from_url NoSuchElementException

Hi, I got the following error when trying to use the --start_from_url function. I need help with this, thanks.

python main.py --headless --start_from_url --limit 999 --url "https://www.glassdoor.com/Reviews/Amazon-Reviews-E6036_P100.htm" -f Amazon_2008.csv
2020-04-28 22:03:27,057 INFO 367 :main.py(10156) - Configuring browser

DevTools listening on ws://127.0.0.1:51346/devtools/browser/bd3f16cf-aff6-41e9-b7f4-a234f825187e
2020-04-28 22:03:30,202 INFO 409 :main.py(10156) - Scraping up to 999 reviews.
2020-04-28 22:03:30,213 INFO 348 :main.py(10156) - Signing in to [email protected]
2020-04-28 22:03:45,688 INFO 377 :main.py(10156) - Getting current page number
Traceback (most recent call last):
File "main.py", line 451, in <module>
main()
File "main.py", line 427, in main
page[0] = get_current_page()
File "main.py", line 383, in get_current_page
normalize-space(@Class),' '),' disabled ')]')
File "C:\Users\MXX\Anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 351, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "C:\Users\MXX\Anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "C:\Users\MXX\Anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\MXX\Anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\MXX\Anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//ul//li[contains (concat(' ',normalize-space(@Class),' '),' current ')] //span[contains(concat(' ', normalize-space(@Class),' '),' disabled ')]"}
(Session info: headless chrome=81.0.4044.122)

No Such Element Exception

It's looking like there may have been element changes either in Selenium or on Glassdoor.

I'm not completely familiar with Selenium, so I was wondering if someone had seen this issue:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"tag name","selector":"p"}

Issue with ChromeDriver

Hello,
I'll start by saying I started learning Python a couple of weeks ago, so this may be a very basic question.

I installed ChromeDriver, and put it in the same working directory as my ipynb file. However when I try to run the code in main.py, I keep getting this error:
"WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://chromedriver.chromium.org/home"

I'm not quite sure what this means, so I would appreciate any advice
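The error means Selenium cannot find the chromedriver binary on your PATH. One hedged workaround is to resolve the driver's location yourself and pass it explicitly (older Selenium versions accept `executable_path`; Selenium 4 expects a `Service` object):

```python
import os
import shutil

# Look for chromedriver on the PATH first, then fall back to the current
# working directory (where the notebook/script lives).
driver_path = shutil.which('chromedriver') or os.path.abspath('chromedriver')
print(driver_path)

# Older Selenium:  browser = wd.Chrome(executable_path=driver_path)
# Selenium 4+:     browser = wd.Chrome(service=Service(driver_path))
```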

Credentials error despite passing them?

I set up everything as per your instructions, but since there is no example JSON file attached, I am passing the username and password with --username and --password. Note - the password contains special characters - "*" and "#". I still get this error:

Traceback (most recent call last):
File "/Users/XXX/Documents/GitHub/glassdoor-review-scraper/main.py", line 88, in <module>
with open('secret.json') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'secret.json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/XXX/Documents/GitHub/glassdoor-review-scraper/main.py", line 97, in <module>
raise Exception(msg)
Exception: Please provide Glassdoor credentials. Credentials can be provided as a secret.json file in the working directory, or passed at the command line using the --username and --password flags.

Can you provide a sample JSON or feedback of any sort?

Thank you!

Here is the command I used:
python3 main.py --headless --url "https://www.glassdoor.ca/Reviews/XXX.htm" -f test.csv --headless --username [email protected] --password XXXX -l 260
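Since the repo doesn't ship an example, here is what `main.py` appears to expect: a JSON object with `username` and `password` keys at the top level (the credentials below are placeholders). This sketch writes such a file and reads it back the way the script does:

```python
import json

# secret.json must be a JSON *object* (not a list) with these two keys,
# matching the d['username'] / d['password'] lookups in main.py.
secret = {"username": "you@example.com", "password": "your_password"}

with open('secret.json', 'w') as f:
    json.dump(secret, f)

with open('secret.json') as f:
    d = json.load(f)
print(d['username'])
```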

NoSuchElementException - Unable to locate element: {"method":"css selector","selector":"p"}

I have this error and don't know how to fix it.

python main.py --headless --url "https://www.glassdoor.com/Overview/Working-at-Wells-Fargo-EI_IE8876.11,22.htm" --limit 5 -f wells_fargo_reviews.csv
2019-11-21 16:21:07,264 INFO 377 :main.py(19772) - Configuring browser

DevTools listening on ws://127.0.0.1:54285/devtools/browser/45fc8723-9a20-498a-864e-5b4544f795ab
2019-11-21 16:21:09,710 INFO 419 :main.py(19772) - Scraping up to 5 reviews.
2019-11-21 16:21:09,718 INFO 358 :main.py(19772) - Signing in to [email protected]
2019-11-21 16:21:15,642 INFO 339 :main.py(19772) - Navigating to company reviews
2019-11-21 16:21:37,112 INFO 286 :main.py(19772) - Extracting reviews from page 1
2019-11-21 16:21:37,150 INFO 291 :main.py(19772) - Found 10 reviews on page 1
Traceback (most recent call last):
File "main.py", line 461, in <module>
main()
File "main.py", line 441, in main
reviews_df = extract_from_page()
File "main.py", line 295, in extract_from_page
data = extract_review(review)
File "main.py", line 281, in extract_review
res[field] = scrape(field, review, author)
File "main.py", line 264, in scrape
return fdictfield
File "main.py", line 156, in scrape_years
'reviewBodyCell').find_element_by_tag_name('p')
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 305, in find_element_by_tag_name
return self.find_element(by=By.TAG_NAME, value=name)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
{"using": by, "value": value})['value']
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\E20008699\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"p"}
(Session info: headless chrome=78.0.3904.97)

getting the pros and cons back

Hey Matt, thanks for your help. I'm trying to tinker and get the pros and cons back and running with no real luck right now. Any help?

def scrape_pros(review):
    try:
        pros = review.find_element_by_xpath('//*[@id="empReview_28156727"]/div/div[2]/div[2]/div[4]')
        expand_show_more(pros)
        res = pros.text.replace('\nShow Less', '')
    except Exception:
        res = np.nan
    return res

def scrape_cons(review):
    try:
        cons = review.find_element_by_xpath('//*[@id="empReview_28156727"]/div/div[2]/div[2]/div[5]')
        expand_show_more(cons)
        res = cons.text.replace('\nShow Less', '')
    except Exception:
        res = np.nan
    return res
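One likely problem with the snippet above: the XPath is anchored to a single hard-coded review id (`empReview_28156727`), so it can only ever match that one review. A hedged rewrite uses a relative, class-based XPath instead (the `'pros'` class name below is an assumption; inspect the current page markup to confirm it), exercised here with a tiny stand-in element:

```python
import numpy as np

def scrape_pros(review):
    # A *relative* XPath (leading '.') scopes the search to this review
    # element, instead of the absolute //*[@id="empReview_28156727"] path
    # that only ever matches one specific review.
    try:
        pros = review.find_element_by_xpath(".//div[contains(@class, 'pros')]")
        res = pros.text.replace('\nShow Less', '')
    except Exception:
        res = np.nan
    return res


class FakeElement:
    """Minimal stand-in for a Selenium WebElement, just to exercise the sketch."""
    def __init__(self, text=''):
        self.text = text

    def find_element_by_xpath(self, xpath):
        assert xpath.startswith('.')  # must stay relative to this element
        return FakeElement('Great pay\nShow Less')

print(scrape_pros(FakeElement()))
```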

No Such Element Exception

Hi there...

I am also getting a no such element exception. Mine is slightly different from what NKoenig06 reported in that it is "Unable to locate element: {"method": "css selector", "selector":".reviewBodyCell"}".

I've tinkered around but can't seem to fix it.

2019-06-27 14:09:01,138 INFO 377 :main.py(2268) - Configuring browser
2019-06-27 14:09:03,281 INFO 419 :main.py(2268) - Scraping up to 100 reviews.
2019-06-27 14:09:03,289 INFO 358 :main.py(2268) - Signing in to [email protected]
2019-06-27 14:09:07,250 INFO 339 :main.py(2268) - Navigating to company reviews
2019-06-27 14:09:19,008 INFO 286 :main.py(2268) - Extracting reviews from page 1
2019-06-27 14:09:19,028 INFO 291 :main.py(2268) - Found 9 reviews on page 1
2019-06-27 14:09:19,042 INFO 300 :main.py(2268) - Discarding a featured review
Traceback (most recent call last):

File "C:\Users\GBarnett\main.py", line 461, in <module>
main()

File "C:\Users\GBarnett\main.py", line 441, in main
reviews_df = extract_from_page()

File "C:\Users\GBarnett\main.py", line 295, in extract_from_page
data = extract_review(review)

File "C:\Users\GBarnett\main.py", line 281, in extract_review
res[field] = scrape(field, review, author)

File "C:\Users\GBarnett\main.py", line 264, in scrape
return fdictfield

File "C:\Users\GBarnett\main.py", line 156, in scrape_years
'reviewBodyCell').find_element_by_tag_name('p')

File "C:\Users\GBarnett\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 398, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)

File "C:\Users\GBarnett\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
{"using": by, "value": value})['value']

File "C:\Users\GBarnett\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)

File "C:\Users\GBarnett\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)

File "C:\Users\GBarnett\AppData\Local\Continuum\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)

NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".reviewBodyCell"}
(Session info: headless chrome=75.0.3770.100)

Gets first 10 reviews then I get this error

Traceback (most recent call last):
File "main.py", line 461, in <module>
main()
File "main.py", line 446, in main
while more_pages() and
File "main.py", line 314, in more_pages
paging_control = browser.find_element_by_class_name('pagingControls')
File "C:\Users\GLio\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 564, in find_element_by_class_name
return self.find_element(by=By.CLASS_NAME, value=name)
File "C:\Users\GLio\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
'value': value})['value']
File "C:\Users\GLio\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\GLio\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".pagingControls"}
(Session info: headless chrome=76.0.3809.132)
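These pagination failures come from `find_element_*` raising the moment Glassdoor renames a class. A hedged, defensive variant uses `find_elements_*`, which returns an empty list instead of raising (the XPath below is an assumption about the current markup); a tiny fake browser exercises both paths:

```python
def more_pages(browser):
    # find_elements_* returns [] when nothing matches, so a renamed or
    # missing pagination element no longer kills the whole scrape.
    return bool(browser.find_elements_by_xpath(
        ".//li[contains(@class, 'next')]/a"))


class FakeBrowser:
    """Minimal stand-in so the sketch can be exercised without Chrome."""
    def __init__(self, hits):
        self._hits = hits

    def find_elements_by_xpath(self, xpath):
        return self._hits

print(more_pages(FakeBrowser(['next-link'])))  # True
print(more_pages(FakeBrowser([])))             # False
```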

Glassdoor new UI design

Hi,

apparently Glassdoor changed their website UI recently. If anyone has already updated the class names for the various review categories, I would appreciate it if you could share your adjustments to the scraper.

Many thanks

No such element error

I have this error and don't know how to fix it.
Traceback (most recent call last):
File "main.py", line 452, in <module>
main()
File "main.py", line 440, in main
go_to_next_page()
File "main.py", line 317, in go_to_next_page
next_ = browser.find_element_by_xpath(".//li[@Class='pagination__PaginationStyle__next']/a")
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 394, in find_element_by_xpath
return self.find_element(by=By.XPATH, value=xpath)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element
'value': value})['value']
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/var/pyenv/versions/anaconda3-2019.03/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":".//li[@Class='pagination__PaginationStyle__next']/a"}
(Session info: headless chrome=77.0.3865.90)

Helpful Count

Scraping the helpful count isn't working; it always returns 0, the fallback value set in the exception handler.

TypeError: list indices must be integers or slices, not str

Hi, I am getting the below-mentioned error while running "python main.py --headless -u https://www.glassdoor.co.in/Overview/Working-at-Tesla-EI_IE43129.11,16.htm -l 2300 -f tesla_reviews.csv".
I have already created the JSON file 'secret.json'.

Traceback (most recent call last):
File "main.py", line 90, in <module>
args.username = d['username']
TypeError: list indices must be integers or slices, not str

It would be really appreciated if you could look at this and suggest how to resolve it. TIA!
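The traceback means the JSON in secret.json parsed to a *list*, so indexing it with the string key 'username' fails. A minimal reproduction of both cases (placeholder credentials):

```python
import json

# main.py does d['username'], which requires a JSON object at the top level.
good = json.loads('{"username": "you@example.com", "password": "pw"}')
print(good['username'])

# A top-level list (e.g. secret.json wrapped in [ ... ]) triggers the error:
bad = json.loads('[{"username": "you@example.com"}]')
try:
    bad['username']
except TypeError as e:
    print(e)  # list indices must be integers or slices, not str
```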
