yoongikim / autocrawler

Google, Naver multiprocess image web crawler (Selenium)

License: Apache License 2.0
autocrawler's Introduction

AutoCrawler

Google, Naver multiprocess image crawler (High Quality & Speed & Customizable)

How to use

  1. Install Chrome

  2. pip install -r requirements.txt

  3. Write search keywords in keywords.txt

  4. Run python3 main.py

  5. Files will be downloaded to the 'download' directory.
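keywords.txt is expected to hold one search term per line; the terms below are purely illustrative:

```
cat
dog
golden retriever
```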

Arguments

usage:

python3 main.py [--skip true] [--threads 4] [--google true] [--naver true] [--full false] [--face false] [--no_gui auto] [--limit 0]
--skip true        Skips a keyword if its download directory already exists. Useful when re-running an interrupted download.

--threads 4        Number of threads to download.

--google true      Download from google.com (boolean)

--naver true       Download from naver.com (boolean)

--full false       Download full resolution images instead of thumbnails (slow)

--face false       Face search mode

--no_gui auto      No-GUI (headless) mode. Accelerates full-resolution mode, but is unstable in thumbnail mode.
                   Default: "auto" - false if full=false, true if full=true
                   (useful for Docker/Linux systems)
--limit 0          Maximum count of images to download per site. (0: infinite)
--proxy-list ''    Comma-separated proxy list, e.g. "socks://127.0.0.1:1080,http://127.0.0.1:1081".
                   Each thread will randomly choose one proxy from the list.

Full Resolution Mode

You can download full-resolution JPG, GIF, and PNG images by specifying --full true

Data Imbalance Detection

Detects data imbalance based on the number of files per keyword.

When crawling ends, a message lists the directories that contain fewer than 50% of the average number of files.

It is recommended to remove those directories and re-download them.
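A minimal sketch of such a check, assuming one subdirectory per keyword under the download root (the function name and threshold parameter are illustrative, not the project's actual code):

```python
import os

def find_underfilled(root, threshold=0.5):
    """Return subdirectories whose file count falls below threshold * average."""
    counts = {
        name: len(os.listdir(os.path.join(root, name)))
        for name in os.listdir(root)
        if os.path.isdir(os.path.join(root, name))
    }
    if not counts:
        return []
    average = sum(counts.values()) / len(counts)
    # Directories returned here are candidates for deletion and re-download.
    return sorted(d for d, n in counts.items() if n < average * threshold)
```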

Remote crawling through SSH on your server

sudo apt-get install xvfb    # virtual display
sudo apt-get install screen  # lets the job keep running after you close the SSH session

screen -S s1

Xvfb :99 -ac & DISPLAY=:99 python3 main.py

Customize

You can build your own crawler by modifying collect_links.py

How to fix issues

Because the Google results page changes frequently, you may need to update collect_links.py

  1. Go to Google Images: https://www.google.com/search?q=dog&source=lnms&tbm=isch
  2. Open developer tools in Chrome (CTRL+SHIFT+I, CMD+OPTION+I).
  3. Designate an image element to capture.
  4. Check out the corresponding selector in collect_links.py.
  5. Docs for XPath usage: https://www.w3schools.com/xml/xpath_syntax.asp
  6. You can test an XPath with CTRL+F in the Chrome developer tools.
  7. Adjust the selector logic until crawling works again.
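The XPath selection itself can be rehearsed offline with Python's standard-library ElementTree, which supports a useful subset of XPath. The markup and class names below are made up for illustration, not Google's real ones:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for a search results page.
page = """
<html><body>
  <div class='islrc'>
    <img class='thumb' src='a.jpg'/>
    <img class='thumb' src='b.jpg'/>
    <img class='icon' src='c.png'/>
  </div>
</body></html>
"""

doc = ET.fromstring(page)
# Select only the thumbnail images inside the results container.
imgs = doc.findall(".//div[@class='islrc']/img[@class='thumb']")
print([img.get('src') for img in imgs])  # → ['a.jpg', 'b.jpg']
```

Once an expression matches here, the same XPath can be pasted into Chrome's developer-tools search (step 6) to confirm it matches on the live page.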

autocrawler's People

Contributors

ahnjg, hajunho, hyeongminmoon, litcoderr, neolithera, rubai1597, timgates42, wooseokyourself, yoongikim


autocrawler's Issues

constrain of numbers

Hi, thanks for sharing.
How can I constrain the number of images to download? For example, I only want 100 images.
Thanks for the help.

Suggestion: Use Python Click for parsing command line arguments

Although argparse, which is part of the standard library, is a good solution, I'd recommend Click for making command-line tools. It comes with a full load of fancy features, but, most notably, it will just make your life way easier by eliminating a bunch of boilerplate code.

Take the following code for example (excerpted from main.py).

    parser.add_argument('--face', type=str, default='false', help='Face search mode')
    # ...
    _face = False if str(args.face).lower() == 'false' else True

With Click, the code above can be replaced with:

@click.option('--face', is_flag=True, default=False, help='Face search mode.')
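For reference, the argparse boilerplate this issue refers to can be exercised end-to-end; this is a minimal self-contained sketch with a hypothetical str2bool helper, not the project's exact code:

```python
import argparse

def str2bool(value):
    # Hypothetical helper mirroring the string-to-bool boilerplate described above.
    return str(value).lower() not in ('false', '0', 'no', '')

parser = argparse.ArgumentParser()
parser.add_argument('--face', type=str, default='false', help='Face search mode')

args = parser.parse_args(['--face', 'true'])
print(str2bool(args.face))                    # → True
print(str2bool(parser.parse_args([]).face))   # → False
```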

This version of ChromeDriver only supports Chrome version 87

Log Error: "Error occurred while initializing chromedriver - Message: session not created: This version of ChromeDriver only supports Chrome version 87
Current browser version is 89.0.4389.90 with binary path /Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

Resolve:

  1. Install webdriver-manager from pip:
    pip install webdriver-manager
  2. Replace the webdriver initialization in collect_links.py:
    self.browser = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=chrome_options)

seems like google still not working

Thanks for the great sources.

I tried with the latest commit, and it seems files are not being downloaded into the directory.
The script finishes without an error, but the message says:

Collect links done. Site: google, Keyword: [ my keyword ], Total: 0

I could see scrolling down with the searched images, but actually downloaded nothing.
(P.S. naver is working well)

Thanks!

Results in limited number of images... why?

Hello,

I am crawling using keywords in various languages, and your repo has been a tremendous help!
The code worked perfectly until last week and I could get 100s ~ 1000s of images for each search entry.
But now it fails to crawl more than 40 images, even for the most popular languages.
Do you have any idea why this happens & how to fix this issue?

Thank you!


chrome error at colab

When I try to run main.py on Colab from Windows, there is a Chrome-related error as follows:

Detected OS : Linux
Error occurred while initializing chromedriver - Message: Service /root/.wdm/drivers/chromedriver/linux64/104.0.5112/chromedriver unexpectedly exited. Status code was: -6

Please tell me the reason and a solution.

Selenium version issue and ChromeDriverManager

Selenium has recently been updated, so you must pin selenium==4.9.0 in requirements.txt.
If you don't pin it, the error below appears:

self.browser = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=chrome_options)
                                                                 ^^^^^^^^^^^^^^
TypeError: WebDriver.__init__() got an unexpected keyword argument 'chrome_options'

If you look it up, you can see that the API changed in 4.10.0: the chrome_options keyword argument was removed. Therefore, you must pin the older version 4.9.0.
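Assuming the pin suggested here, the relevant requirements.txt lines would read (versions taken from this issue, not independently verified):

```
selenium==4.9.0
webdriver-manager
```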

Separately, even if you replace the chromedriver binary in the ./chromedriver directory by hand, ChromeDriverManager downloads drivers from the site at https://chromedriver.chromium.org/home, where the latest available version is 114.0.5735.90. Drivers for Chrome 115 and higher are published elsewhere, so the current code keeps downloading and running the old driver, and the installed Chrome version and the chromedriver version end up differing.

As a result, the Chrome/chromedriver version comparison on the host does not work correctly, and the message Download correct version at "http://chromedriver.chromium.org/downloads" and place in "./chromedriver" is printed while the unwanted old driver keeps being downloaded and run.
I also wonder whether the chromedriver in the ./chromedriver directory is the one actually running.

There is a version mismatch problem, but crawling still works.

Fatal server error

(Minseok) ubuntu@DESKTOP-SMIU2JP:~/anaconda3/envs/Minseok/Portfolio/TeamProject/AutoCrawler-master$ Xvfb :99 -ac & DISPLAY=:99 python3 main.py
[1] 18306
(EE) Fatal server error:
(EE) Server is already active for display 99
    If this server is no longer running, remove /tmp/.X99-lock and start again

What kind of command should I use for this error?

The crawler falls with uninformative messages

Would you please make the logging more informative? It's kind of complicated to understand what went wrong here:

MacBook-Pro-2:AutoCrawler Arseny$ PATH="$PATH:./" python auto_crawler.py
Options - skip:True, threads:4, google:True, naver:True
2 keywords found: ['cat', 'dog']
Collecting links... cat from naver
Collecting links... cat from google
Collecting links... dog from google
Collecting links... dog from naver
Exception cat - Message: session not created exception
from unknown error: Runtime.executionContextCreated has invalid 'context': {"auxData":{"frameId":"297D684EEED11AB59AC252CF27E1E7B2","isDefault":true,"type":"default"},"id":1,"name":"","origin":"://"}
  (Session info: chrome=70.0.3538.102)
  (Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Mac OS X 10.14.0 x86_64)

Exception dog - Message: session not created exception
from unknown error: Runtime.executionContextCreated has invalid 'context': {"auxData":{"frameId":"5805277448BDF963C6444F5EE17C0CDB","isDefault":true,"type":"default"},"id":1,"name":"","origin":"://"}
  (Session info: chrome=70.0.3538.102)
  (Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Mac OS X 10.14.0 x86_64)

Exception cat - Message: session not created exception
from unknown error: Runtime.executionContextCreated has invalid 'context': {"auxData":{"frameId":"000EFF0A0FF3241C84B420B51870186B","isDefault":true,"type":"default"},"id":1,"name":"","origin":"://"}
  (Session info: chrome=70.0.3538.102)
  (Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Mac OS X 10.14.0 x86_64)

Exception dog - Message: session not created exception
from unknown error: Runtime.executionContextCreated has invalid 'context': {"auxData":{"frameId":"1150DDBD226F6974C666A822E62E278B","isDefault":true,"type":"default"},"id":1,"name":"","origin":"://"}
  (Session info: chrome=70.0.3538.102)
  (Driver info: chromedriver=2.20.353124 (035346203162d32c80f1dce587c8154a1efa0c3b),platform=Mac OS X 10.14.0 x86_64)

Small grammatical typo in the README.md

In Arguments part
--full false Download full resolution image instead of thumbnails (slow)
to
--full false Download full resolution images instead of thumbnails (slow)

How to Modify "div" and "img" Variable in "collect_links.py"

How to modify "xpath" variable in "collect_links.py"?

Here is details:

I ran this command: python main.py --google=true --limit=10 --download_path=../../../Pictures/test --full=true, but I am facing this error message:

[Exception occurred while collecting links from google_full] Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@class="n4hgof"]//img[@class="r48jcc pT0Scc iPVvYb"]"}
  (Session info: headless chrome=114.0.5735.198)

It looks like "div" and "img" variables need to be modified in "collect_links.py" (code line)

xpath = '//div[@class="n4hgof"]//img[@class="r48jcc pT0Scc iPVvYb"]'
imgs = elem.find_elements(By.XPATH, xpath)

Instead of waiting for the codebase to be updated, I want to try it myself. I opened the source of the Google search results page, but there are many <div class> values in the HTML, and it is hard to find the <img class>; I only see img src.
For example, I see these div classes near the img src: "fR600b islir" or "mEQved GdCiyb aEY0r".
How do I find the proper value for the variable from the Google search results?

Download failed - check_hostname requires server_host name

Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Downloading cat from google: 1 / 700
Download failed - check_hostname requires server_hostname
Task ended. Pool join.
Data imbalance checking...
dir: download/cat, file_count: 0
Data imbalance not detected.
End Program

After I close the VPN I'm using, Naver works fine, but Google is blocked in my country.
Although I can see Selenium open Google and go through the whole process, nothing is downloaded.

Can you help me?

__init__() got an unexpected keyword argument 'chrome_options'

Error: __init__() got an unexpected keyword argument 'chrome_options'
python 3.7.13

❯ python main.py
Options - skip:True, threads:4, google:True, naver:True, full_resolution:False, face:False, no_gui:False, limit:0, _proxy_list:['']
2 keywords found: ['cat', 'dog']
Detected OS : Mac
Detected OS : Mac
Detected OS : Mac
Detected OS : Mac
[WDM] - Downloading: 100%|███████| 8.29M/8.29M [00:02<00:00, 4.31MB/s]
[WDM] - Downloading: 100%|███████| 8.29M/8.29M [00:02<00:00, 4.28MB/s]
Error occurred while initializing chromedriver - __init__() got an unexpected keyword argument 'chrome_options'
Error occurred while initializing chromedriver - __init__() got an unexpected keyword argument 'chrome_options'
^CTask ended. Pool join.
Data imbalance checking...
Data imbalance not detected.
End Program
❯ pip list | grep sele
selenium               4.10.0

Is there any stopping function in it?

I really like your program, and I wonder if there's a pause function, like pressing the spacebar to pause the search and pressing it again to resume. If there's no such function, I really look forward to adding it. Thank you.

google crawl issue regarding ChromeDriver

Thank you for the great repository for auto crawling and really enjoyed using my linux server to crawl images.

But now, Google crawling (not Naver) doesn't work any more; it just finishes with no images.

I ran some tests in my local Mac environment under the conditions below (with an updated ChromeDriver version), and the result was successful.

Current web-browser version:    79.0.3945.130
Current chrome-driver version:  78.0.3904.70

Current web-browser version:	79.0.3945.130
Current chrome-driver version:	79.0.3945.36

On Linux, there is an error:
Error occurred while initializing chromedriver - Message: unknown error: Chrome failed to start: exited abnormally
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)

so I fixed it with the following changes to get the code running.

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
self.browser = webdriver.Chrome('/path/to/your_chrome_driver_dir/chromedriver', options=chrome_options)

I guess this code can be the key to this issue, because the Mac can't crawl Google content in this environment either.
There seems to be a version issue with ChromeDriver, or perhaps Google blocks crawling under the headless option.

Wait for help and I'd be happy to contribute if I can fix it.

How to download limited images?

This project is awesome. But how can I download only a limited number of images, since the images on the first page are the most important?

python main.py --full true

Excuse me, I saw that your code has been updated, and I re-ran python main.py --full true, but it still produces thumbnails instead of full images.
naver_0003
naver_0004
naver_0006

How can I get data using this code?

Hello, thanks for sharing your work.
I have a problem crawling data.
When I execute your code, I can't get the image data.
How can I fix this?

python3 main.py
Options - skip:True, threads:4, google:True, naver:True, full_resolution:False, face:False, no_gui:False, limit:0, _proxy_list:['']
2 keywords found: ['cat', 'dog']
_________________________________
Current web-browser version:    123.0.6312.105
Current chrome-driver version:  123.0.6312.122
_________________________________
Collecting links... dog from google
_________________________________
Current web-browser version:    123.0.6312.105
Current chrome-driver version:  123.0.6312.122
_________________________________
Collecting links... dog from naver
_________________________________
Current web-browser version:    123.0.6312.105
Current chrome-driver version:  123.0.6312.122
_________________________________
Collecting links... cat from google
_________________________________
Current web-browser version:    123.0.6312.105
Current chrome-driver version:  123.0.6312.122
_________________________________
Collecting links... cat from naver
Scrolling down
Scrolling down
Scrolling down
Scrolling down
Scraping links
Collect links done. Site: naver, Keyword: dog, Total: 0
Downloading images from collected links... dog from naver
Done naver : dog
Scraping links
Collect links done. Site: naver, Keyword: cat, Total: 0
Downloading images from collected links... cat from naver
Done naver : cat
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type="button"]
Refreshing browser...
Click time out - //input[@type=

python main.py --full true

Hello, I can download thumbnails using python main.py, but I can't download with python main.py --full true. Please help me, thanks!

The resolution

I set full_resolution=True, but the resolutions of the downloaded images are still very small. How can I obtain high-quality images?

Error occurred while initializing chromedriver

Detected OS : Windows
Detected OS : Windows

Error occurred while initializing chromedriver - HTTPSConnectionPool(host='chromedriver.storage.googleapis.com', port=443): Max retries exceeded with url: /103.0.5060/chromedriver_win32.zip (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
Error occurred while initializing chromedriver - HTTPSConnectionPool(host='chromedriver.storage.googleapis.com', port=443): Max retries exceeded with url: /103.0.5060/chromedriver_win32.zip (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
Task ended. Pool join.
Data imbalance checking...
Data imbalance not detected.
End Program

Google image crawling does not work.

[Exception occurred while collecting links from google_full] Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@Class="k7O2sd"]"}
(Session info: headless chrome=113.0.5672.127)
Stacktrace:
Backtrace:
GetHandleVerifier [0x006A8893+48451]
(No symbol) [0x0063B8A1]
(No symbol) [0x00545058]
(No symbol) [0x00570467]
(No symbol) [0x0057069B]
(No symbol) [0x00569631]
(No symbol) [0x0058A304]
(No symbol) [0x00569586]
(No symbol) [0x0058A614]
(No symbol) [0x0059C482]
(No symbol) [0x0058A0B6]
(No symbol) [0x00567E08]
(No symbol) [0x00568F2D]
GetHandleVerifier [0x00908E3A+2540266]
GetHandleVerifier [0x00948959+2801161]
GetHandleVerifier [0x0094295C+2776588]
GetHandleVerifier [0x00732280+612144]
(No symbol) [0x00644F6C]
(No symbol) [0x006411D8]
(No symbol) [0x006412BB]
(No symbol) [0x00634857]
BaseThreadInitThunk [0x75EE7D59+25]
RtlInitializeExceptionChain [0x770AB74B+107]
RtlClearBits [0x770AB6CF+191]

Potential dependency conflicts between AutoCrawler and urllib3

Hi, as shown in the following full dependency graph of AutoCrawler, AutoCrawler requires urllib3 (the latest version), while the installed version of requests(2.22.0) requires urllib3>=1.21.1,<1.26.

According to Pip's “first found wins” installation strategy, urllib3 1.25.3 is the actually installed version.

Although the first found package version urllib3 1.25.3 just satisfies the later dependency constraint (urllib3>=1.21.1,<1.26), it will lead to a build failure once developers release a newer version of urllib3.
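A pin consistent with the constraint described above would be (illustrative; versions are those quoted in this issue):

```
requests==2.22.0
urllib3>=1.21.1,<1.26
```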

Dependency tree--------

AutoCrawler(version range:)
| +-certifi(version range:)
| +-chardet(version range:)
| +-idna(version range:)
| +-requests(version range:)
| | +-chardet(version range:>=3.0.2,<3.1.0)
| | +-idna(version range:>=2.5,<2.9)
| | +-urllib3(version range:>=1.21.1,<1.26)
| | +-certifi(version range:>=2017.4.17)
| +-selenium(version range:)
| +-urllib3(version range:)

Thanks for your attention.
Best,
Neolith

Chrome driver doesn't work

Options - skip:True, threads:4, google:True, naver:True, full_resolution:False, face:False, no_gui:False, limit:0, _proxy_list:['']
1 keywords found: ['wooden cat figurine']
[WDM] -

[WDM] - ====== WebDriver manager ======
[WDM] -

[WDM] - ====== WebDriver manager ======
[WDM] - Current google-chrome version is 117.0.5938
[WDM] - Current google-chrome version is 117.0.5938
[WDM] - Get LATEST driver version for 117.0.5938
[WDM] - Get LATEST driver version for 117.0.5938
Error occurred while initializing chromedriver - There is no such driver by url https://chromedriver.storage.googleapis.com/LATEST_RELEASE_117.0.5938
Error occurred while initializing chromedriver - There is no such driver by url https://chromedriver.storage.googleapis.com/LATEST_RELEASE_117.0.5938
Task ended. Pool join.
Data imbalance checking...
Data imbalance not detected.
End Program

can not download

The error is as below:

Scraping links
Click time out - //div[@Class="img_area _item"]
Click time out - //div[@Class="img_area _item"]
Refreshing browser...
Scraping links
Click time out - //div[@data-ri="0"]
Refreshing browser...
Click time out - //div[@Class="img_area _item"]
Refreshing browser...

Please help.

Google download can not work

I set --google true and --naver false, and it cannot download images. But when I set --naver true, it can download. What does this mean?

Returns nothing...

1 keywords found: ['BTS']
Detected OS : Mac
Detected OS : Mac
Task ended. Pool join.
Data imbalance checking...
Data imbalance not detected.

(Yes, I tested with BTS).

I want 10000 pictures, and I set limit=10000, but only get 20 pictures

First of all, thanks for this project; it really taught me how to crawl pictures from the internet. But I have some problems.

The settings are as following:

  1. keywords.txt: only apple
  2. main.py:
    Options - skip:True, threads:4, google:True, naver:False, full_resolution:False, face:False, no_gui:False, limit:10000, _proxy_list:['']

The crawl result and the downloaded files are shown in the screenshots attached to the issue (not reproduced here).

So I want to know how to get more pictures; 20 is not enough.
Thanks!

google-chrome: not found

I use WSL2 on a Windows machine to test this repo, and here are some behaviors I experienced; I hope to get some guidance.
After installing the Chrome driver and then running main.py from another terminal, I get:

google-chrome: not found

This is on Linux. How can I resolve this? Thank you!

what error

How should I solve this problem?

[40156:43664:0107/162511.436:ERROR:chrome_browser_main_extra_parts_metrics.cc(226)] crbug.com/1216328: Checking Bluetooth availability started. Please report if there is no report that this ends.
[40156:43664:0107/162511.438:ERROR:chrome_browser_main_extra_parts_metrics.cc(229)] crbug.com/1216328: Checking Bluetooth availability ended.
[40156:43664:0107/162511.440:ERROR:chrome_browser_main_extra_parts_metrics.cc(232)] crbug.com/1216328: Checking default browser status started. Please report if there is no report that this ends.
[40156:57144:0107/162511.461:ERROR:device_event_log_impl.cc(214)] [16:25:11.461] Bluetooth: bluetooth_adapter_winrt.cc:1075 Getting Default Adapter failed.
[40156:43664:0107/162511.501:ERROR:chrome_browser_main_extra_parts_metrics.cc(236)] crbug.com/1216328: Checking default browser status ended.

No //input[@id='smb']

I can find it by XPath, but I can't see it with the webdriver.
This causes the program to keep running the wait_and_click function.

Full size image?

Hi, the code runs perfectly!
I noticed that the downloaded images are thumbnails.
Is there any way I can download the full images?
Thanks.
