
gracy's Introduction

Hi there


πŸ§‘β€πŸ’» What I'm up to

  • πŸ¦– I’m taking care of a dinosaur linter for Python

  • 🎯 I’m currently working hard from Brazil πŸ‡§πŸ‡· to solve big challenges in a San Francisco πŸ‡ΊπŸ‡Έ startup.

⚑ Fun facts

  • πŸ˜› I broke production a few times haha
  • πŸ₯Š Boxing, skipping rope, and jogging help keep me focused on my work
  • 🧘 I often meditate to manage my anxiety
  • πŸ€– I automate everything that I can (like this readme)

πŸ“ Blog Posts

πŸ‡ΊπŸ‡Έ English

πŸ‡§πŸ‡· Portuguese (not maintained anymore)

πŸ“ˆ Stats

gracy's People

Contributors

actions-user, guilatrova, logan-silk, q0w


gracy's Issues

httpx.ReadTimeout doesn't raise a GracyException or attempt a retry.

Hi,

In my main loop I'm trying to capture

except (GracyException, GracyParseFailed)

hoping that this will catch any GracyError caused by problems interfacing with an API.

Alas, every week or so an httpx.ReadTimeout causes my main loop to crash, and it seems that no retry is attempted.

The lowest error is an
"SSLWantReadError" - anyio/streams/tls.py in _call_sslobject_method at line 130
which bubbles up as
"CancelledError" - asyncio/locks.py in wait at line 213
"TimeoutError" - anyio/_core/_tasks.py in __exit__ at line 118
"ReadTimeout" - httpcore/_exceptions.py in map_exceptions at line 14
before it hits my code in the function defined similarly to the code in the
"# 2. Define your Graceful API" example.

I think it would be better if any uncaught httpx errors turned into a GracyException, so that catching GracyException is enough to notice that Gracy has given up.

Some more info: I've defined REQUEST_TIMEOUT = 61.0 in my API; from the Sentry error log it looks like
there are 63 seconds between the start of the request and the error happening.
I have a GracefulRetry defined, but according to the log, no retry was attempted.
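
As a stopgap I can catch httpx.HTTPError alongside the Gracy exceptions in my main loop, since the httpx timeout classes all derive from it. A quick, self-contained check of that hierarchy (plain httpx, nothing Gracy-specific assumed):

import httpx

# httpx.ReadTimeout -> httpx.TimeoutException -> httpx.HTTPError, so a broad
# "except httpx.HTTPError" also covers the timeouts that currently escape
# Gracy's own exception types.
assert issubclass(httpx.ReadTimeout, httpx.TimeoutException)
assert issubclass(httpx.TimeoutException, httpx.HTTPError)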

Slightly cleaned up code with some names changed / parameters removed:

class MyApiEndpoint(BaseEndpoint):
    GET_MYSTUFF = "...remove..."

# 2. Define your Graceful API
class MyAPI(Gracy[str]):
    class Config:  # type: ignore
        BASE_URL = "https://<REMOVED>/"  
        REQUEST_TIMEOUT = 61.0
        SETTINGS = GracyConfig(
            log_request=LogEvent(LogLevel.DEBUG),
            log_response=LogEvent(LogLevel.INFO, "{URL} took {ELAPSED}"),
            parser={
                "default": lambda r: r.json()
            },
            throttling=GracefulThrottle(
                rules=ThrottleRule(url_pattern=r".*",
                                   max_requests=1,
                                   per_time=datetime.timedelta(seconds=2)),
            ),
            retry=GracefulRetry(
                delay=30,
                max_attempts=5,
                delay_modifier=1.5,
                retry_on=None,
                log_before=LogEvent(LogLevel.INFO),
                log_after=LogEvent(LogLevel.WARNING),
                log_exhausted=LogEvent(LogLevel.CRITICAL),
                behavior="break",
            )
        )

    async def get_mystuff(self,
                          ... removed ...
                           ) -> Awaitable[dict]:
        params = {... removed ... }

        # the line below is where the httpx.timeout arrives

        return await self.get(MyApiEndpoint.GET_MYSTUFF, params)

The version of Gracy used is 1.11.2 - I updated today to 1.12 and will report if this happens in version 1.12 as well.

Stack trace (with a few things removed)

SSLWantReadError: The operation did not complete (read) (_ssl.c:2546)
File "anyio/streams/tls.py", line 130, in _call_sslobject_method
result = func(*args)
File "ssl.py", line 921, in read
v = self._sslobj.read(len)

CancelledError: null
File "httpcore/backends/asyncio.py", line 34, in read
return await self._stream.receive(max_bytes=max_bytes)
File "anyio/streams/tls.py", line 195, in receive
data = await self._call_sslobject_method(self._ssl_object.read, max_bytes)
File "anyio/streams/tls.py", line 137, in _call_sslobject_method
data = await self.transport_stream.receive()
File "anyio/_backends/_asyncio.py", line 1265, in receive
await self._protocol.read_event.wait()
File "asyncio/locks.py", line 213, in wait
await fut

TimeoutError: null
File "httpcore/_exceptions.py", line 10, in map_exceptions
yield
File "httpcore/backends/asyncio.py", line 32, in read
with anyio.fail_after(timeout):
File "anyio/_core/_tasks.py", line 118, in exit
raise TimeoutError

ReadTimeout: null
File "httpx/_transports/default.py", line 60, in map_httpcore_exceptions
yield
File "httpx/_transports/default.py", line 353, in handle_async_request
resp = await self._pool.handle_async_request(req)
File "httpcore/_async/connection_pool.py", line 253, in handle_async_request
raise exc
File "httpcore/_async/connection_pool.py", line 237, in handle_async_request
response = await connection.handle_async_request(request)
File "httpcore/_async/connection.py", line 90, in handle_async_request
return await self._connection.handle_async_request(request)
File "httpcore/_async/http11.py", line 112, in handle_async_request
raise exc
File "httpcore/_async/http11.py", line 91, in handle_async_request
) = await self._receive_response_headers(**kwargs)
File "httpcore/_async/http11.py", line 155, in _receive_response_headers
event = await self._receive_event(timeout=timeout)
File "httpcore/_async/http11.py", line 191, in _receive_event
data = await self._network_stream.read(
File "httpcore/backends/asyncio.py", line 31, in read
with map_exceptions(exc_map):
File "contextlib.py", line 155, in exit
self.gen.throw(typ, value, traceback)
File "httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)

ReadTimeout: null
File "REMOVED.py", line 203, in
asyncio.run(main(days=7))
File "asyncio/runners.py", line 190, in run
return runner.run(main)
File "asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
File "asyncio/base_events.py", line 653, in run_until_complete
return future.result()
File "REMOVED.py", line 184, in main
pl = await REMOVED.REMOVED(REMOVED)
File "REMOVED.py", line 150, in get_playlist
return await self.get(REMOVED.REMOVED, params)
File "gracy/_core.py", line 322, in get
return await self._request("GET", endpoint, endpoint_args, *args, **kwargs)
File "gracy/_core.py", line 313, in _request
return await graceful_request
File "gracy/_core.py", line 186, in _gracify
result = await request()
File "httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "httpx/_client.py", line 1620, in send
response = await self._send_handling_auth(
File "httpx/_client.py", line 1648, in _send_handling_auth
response = await self._send_handling_redirects(
File "httpx/_client.py", line 1685, in _send_handling_redirects
response = await self._send_single_request(request)
File "httpx/_client.py", line 1722, in _send_single_request
response = await transport.handle_async_request(request)
File "httpx/_transports/default.py", line 352, in handle_async_request
with map_httpcore_exceptions():
File "contextlib.py", line 155, in exit
self.gen.throw(typ, value, traceback)
File "httpx/_transports/default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc

Typing syntax causes failures on 3.8

PARSER_KEY = HTTPStatus | t.Literal["default"]

PARSER_TYPE = dict[PARSER_KEY, PARSER_VALUE] | Unset | None

LOG_EVENT_TYPE = None | Unset | LogEvent

STATUS_OR_EXCEPTION = HTTPStatus | type[Exception]

This syntax is not supported on 3.8 and causes failures. It should use old-style typing.Union instead of |, for example here:

PARSER_TYPE = dict[PARSER_KEY, PARSER_VALUE] | Unset | None

It should also use typing.Dict instead of the built-in dict (PEP 585 generics are 3.9+), and so on.
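
For example, 3.8-compatible spellings would look like this (PARSER_VALUE, Unset, and LogEvent are stand-ins here only to keep the snippet self-contained; the real definitions live in gracy):

import typing as t
from http import HTTPStatus

# Stand-ins for gracy's own types, only so this sketch runs on its own:
PARSER_VALUE = t.Callable[..., t.Any]
class Unset: ...
class LogEvent: ...

PARSER_KEY = t.Union[HTTPStatus, t.Literal["default"]]
PARSER_TYPE = t.Union[t.Dict[PARSER_KEY, PARSER_VALUE], Unset, None]
LOG_EVENT_TYPE = t.Union[None, Unset, LogEvent]
STATUS_OR_EXCEPTION = t.Union[HTTPStatus, t.Type[Exception]]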

Retries not working as expected

Hi,

I've been using Gracy in production for a little while now and it's been great, so thank you!

The API I'm using likes to take far too long to respond, so timeouts are a pretty regular occurrence, and I've noticed two related issues.

The first is that retrying specific exceptions does not seem to work at all (retry_on=None works fine).

import asyncio
from gracy import BaseEndpoint, Gracy, GracyConfig, LogEvent, LogLevel, GracefulRetry
from httpx import ReadTimeout

class TestEndpoint(BaseEndpoint):
    OK = "/200/"

class GracefulTestAPI(Gracy[str]):
    class Config:
        BASE_URL = "https://httpstat.us/"
        REQUEST_TIMEOUT = 5.0
        SETTINGS = GracyConfig(
            log_request=LogEvent(LogLevel.DEBUG),
            log_response=LogEvent(LogLevel.INFO, "{URL} took {ELAPSED}"),
            parser={
                "default": lambda r: r.json()
            },
            retry=GracefulRetry(
                delay=1,
                max_attempts=5,
                delay_modifier=3,
                retry_on={
                    ReadTimeout,
                },
                log_before=LogEvent(LogLevel.WARNING),
                log_after=LogEvent(LogLevel.WARNING),
                log_exhausted=LogEvent(LogLevel.ERROR),
                behavior="break",
            ),
        )

    async def test(self):
        return await self.get(TestEndpoint.OK, params={"sleep": "6000"})  # <- force a timeout

testapi = GracefulTestAPI()

async def main():
    test_resp = await testapi.test()
    print(test_resp)

asyncio.run(main())

In the above example the ReadTimeout is not retried at all. I think I've traced it to this bit of code in GracyConfig.should_retry (line 562):

for maybe_exc in retry_on:
    if inspect.isclass(maybe_exc) and isinstance(
        req_or_validation_exc, maybe_exc
    ):
        return True

As far as I can tell the exception will always be wrapped in a GracyRequestFailed at this point, so req_or_validation_exc will never match any of the listed exceptions to retry. I think it should instead check the original_exc attribute against maybe_exc.
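
Something along these lines is what I have in mind, as a standalone sketch (the names mirror the snippet above, and original_exc is the attribute I see on GracyRequestFailed):

import inspect

def should_retry(req_or_validation_exc, retry_on) -> bool:
    """Sketch of the proposed check: also compare the wrapped original exception."""
    for maybe_exc in retry_on:
        if inspect.isclass(maybe_exc) and issubclass(maybe_exc, BaseException):
            # Unwrap GracyRequestFailed (if present) so the original httpx
            # exception is compared against the configured retry_on classes.
            candidate = getattr(req_or_validation_exc, "original_exc", req_or_validation_exc)
            if isinstance(candidate, maybe_exc) or isinstance(req_or_validation_exc, maybe_exc):
                return True
    return False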

The second, much more minor, issue I noticed is that the exception message reads "The request for [GET] https://httpstat.us/200/ never got a response due to " (i.e. no exception info). This is because httpx.ReadTimeout (and possibly all httpx exceptions?) has no message (related issue here). Maybe GracyRequestFailed could use repr(original_exc) instead of str(original_exc) (possibly just for httpx exceptions?) so there's useful output when logging.
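
To illustrate the difference with plain httpx (independent of Gracy):

import httpx

exc = httpx.ReadTimeout("")   # httpx raises its timeouts with an empty message
print(str(exc))               # ""                -> nothing useful in the log line
print(repr(exc))              # "ReadTimeout('')" -> at least names the exception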

Obfuscate API Key

Our API key goes directly into the URL. Is there a way to obfuscate this in the logging output?
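
(A generic workaround sketch, independent of anything Gracy-specific: attach a logging filter that masks the key before records are emitted. The "gracy" logger name and the api_key parameter name are assumptions here.)

import logging
import re

class RedactApiKey(logging.Filter):
    """Mask api_key-style query parameters in log messages."""

    _pattern = re.compile(r"(api_key=)[^&\s]+", re.IGNORECASE)  # parameter name assumed

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self._pattern.sub(r"\g<1><REDACTED>", record.getMessage())
        record.args = ()  # the message is already fully formatted above
        return True

logging.getLogger("gracy").addFilter(RedactApiKey())  # logger name is an assumption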

Httpx 0.24.1 -> 0.23.3 downgrade necessary?

Hi,

Currently Gracy downgrades httpx because of this dependency pin:

httpx = "^0.23.3"

The current version of httpx is 0.24.1

Is there a specific reason for this limitation or could you allow the current version of httpx?

Request blocked until cancelled. Gracy overrides httpx timeout default value.

Note: Update/solution below


Hi,

Today I had Gracy hang on a GET request for 75 minutes until I stopped it manually.

Can you say if Gracy changes httpx's default timeout values?

httpx says it applies a default timeout of 5 seconds to any request, which means Gracy should time out after 5 seconds too.
(They say it here.)

I remember that in my previous issue the 504 retry timed out after 60 seconds when the
server-side gateway timed out, so for some reason the httpx default timeout is not being enforced.

Of course, it makes sense for Gracy to let you set the timeout for slow APIs; the httpx default is far too short
to reliably capture an API result. I just haven't seen anything in the docs about defining timeouts yet...

The API code is the same as in this issue

This is the trace from when I pressed Ctrl-C to stop the hanging job.

I'm using Python 3.11.1 on a Raspberry Pi with these libraries:

anyio==3.6.2
certifi==2022.12.7
gracy==1.11.2
h11==0.14.0
httpcore==0.16.3
httpx==0.23.3
idna==3.4
rfc3986==1.5.0
sentry-sdk==1.16.0
sniffio==1.3.0
urllib3==1.26.14

Stack Trace

SSLWantReadError: The operation did not complete (read) (_ssl.c:2546)
  File "anyio/streams/tls.py", line 130, in _call_sslobject_method
    result = func(*args)
  File "ssl.py", line 921, in read
    v = self._sslobj.read(len)

CancelledError: null
  File "radio.py", line 202, in <module>
    asyncio.run(main(days=7))
  File "asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
  File "radio.py", line 183, in main
    pl = await playlistapi.get_playlist(station=station, playlist_dt=playlist_dt_hour)
  File "radio.py", line 149, in get_playlist
    return await self.get(PlaylistApiEndpoint.GET_PLAYLIST, params)
  File "gracy/_core.py", line 322, in get
    return await self._request("GET", endpoint, endpoint_args, *args, **kwargs)
  File "gracy/_core.py", line 313, in _request
    return await graceful_request
  File "gracy/_core.py", line 186, in _gracify
    result = await request()
  File "httpx/_client.py", line 1533, in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "httpx/_client.py", line 1620, in send
    response = await self._send_handling_auth(
  File "httpx/_client.py", line 1648, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "httpx/_client.py", line 1685, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "httpx/_client.py", line 1722, in _send_single_request
    response = await transport.handle_async_request(request)
  File "httpx/_transports/default.py", line 353, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "httpcore/_async/connection_pool.py", line 253, in handle_async_request
    raise exc
  File "httpcore/_async/connection_pool.py", line 237, in handle_async_request
    response = await connection.handle_async_request(request)
  File "httpcore/_async/connection.py", line 90, in handle_async_request
    return await self._connection.handle_async_request(request)
  File "httpcore/_async/http11.py", line 112, in handle_async_request
    raise exc
  File "httpcore/_async/http11.py", line 91, in handle_async_request
    ) = await self._receive_response_headers(**kwargs)
  File "httpcore/_async/http11.py", line 155, in _receive_response_headers
    event = await self._receive_event(timeout=timeout)
  File "httpcore/_async/http11.py", line 191, in _receive_event
    data = await self._network_stream.read(
  File "httpcore/backends/asyncio.py", line 34, in read
    return await self._stream.receive(max_bytes=max_bytes)
  File "anyio/streams/tls.py", line 195, in receive
    data = await self._call_sslobject_method(self._ssl_object.read, max_bytes)
  File "anyio/streams/tls.py", line 137, in _call_sslobject_method
    data = await self.transport_stream.receive()
  File "anyio/_backends/_asyncio.py", line 1265, in receive
    await self._protocol.read_event.wait()
  File "asyncio/locks.py", line 213, in wait
    await fut

If you are not changing the httpx timeout defaults, maybe this is an issue with httpx and not Gracy.


Update:

I had a look at the code and it seems that Gracy does override the timeout by setting it to None.

from "_core" : class Gracy(Generic[Endpoint]):

def _create_client(self, **kwargs: Any) -> httpx.AsyncClient:
    base_url = getattr(self.Config, "BASE_URL", "")
    request_timeout = getattr(self.Config, "REQUEST_TIMEOUT", None)
    return httpx.AsyncClient(base_url=base_url, timeout=request_timeout)

Maybe it would be a good idea to add a REQUEST_TIMEOUT line to the example code, so it becomes easier to understand where to set the timeout, why the code may hang if you don't set one, and that users should think about what an acceptable timeout is for their use case.

If not, maybe set the default timeout to DEFAULT_TIMEOUT_CONFIG from httpx/_config.py instead of None when REQUEST_TIMEOUT is not present in the config, so Gracy behaves like httpx normally does.

Namespacing API functional areas?

I'm looking at using Gracy for an API with a lot of different endpoints spread over 7 or 8 different functional areas (for example: users, requests, acquisitions, etc.). Using a flat hierarchy for the API client would be very user-unfriendly (just too many functions with overlapping names), so I'd like to namespace them like so:

MyAPI.users.get_user()
MyAPI.users.requests.get_user_requests()
MyAPI.acquisitions.create_order()

The idea is that namespaces could be nested several levels deep and would all share the same top-level config. I can't see any obvious way to achieve this with Gracy given how config works - is there anything you would suggest, or any scope for implementing this functionality?

Thanks in advance, and props for the great project!
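
(For what it's worth, a rough workaround sketch that assumes nothing about Gracy beyond the get/Config usage shown in the other issues on this page: keep one Gracy subclass holding the shared config, and add thin namespace objects that delegate to it. The endpoint names and URL below are hypothetical.)

from gracy import BaseEndpoint, Gracy, GracyConfig

class UsersEndpoint(BaseEndpoint):
    GET_USER = "/users/{user_id}"  # hypothetical endpoint

class MyAPI(Gracy[str]):
    class Config:
        BASE_URL = "https://api.example.com"  # hypothetical
        SETTINGS = GracyConfig(parser={"default": lambda r: r.json()})

    async def _get_user(self, user_id: str):
        return await self.get(UsersEndpoint.GET_USER, {"user_id": user_id})

class _Users:
    """Namespace wrapper so calls read as api.users.get_user(...)."""
    def __init__(self, api: "MyAPI") -> None:
        self._api = api

    async def get_user(self, user_id: str):
        return await self._api._get_user(user_id)

api = MyAPI()
api.users = _Users(api)
# usage: await api.users.get_user("123")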

How to trigger a retry from the parser based on the reply content.

Hi,

In the GracyConfig I can define my own parser.

Is there a way to trigger a retry from my own parsing code?

def my_parser(r):
    try:
        return r.json() 
    except Exception:
        # trigger retry:
        raise MyException("Invalid JSON received")

class Config:
  SETTINGS = GracyConfig(
    ...,
    allowed_status_code=HTTPStatusCode.NOT_FOUND,
    parser={
      "default": my_parser
    }
  )

Use case: I am querying a flaky API that sometimes returns incomplete/broken results and
sometimes returns an HTML error page instead of the requested JSON result.

I'd have to write my own parser that decides whether a retry should be made.
Exiting the parser by raising an error that triggers a retry would be a clean way
of doing this.
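
What I'd hope works, based on the retry_on={...} usage shown in the retry issue above (a guess, not verified against the docs): raise my own exception from the parser and list it in retry_on.

from gracy import GracefulRetry, GracyConfig, LogEvent, LogLevel

class InvalidPayload(Exception):
    """Raised by my parser when the API returns broken JSON or an HTML error page."""

def my_parser(r):
    try:
        return r.json()
    except Exception as exc:
        raise InvalidPayload("Invalid JSON received") from exc

class Config:
    SETTINGS = GracyConfig(
        parser={"default": my_parser},
        retry=GracefulRetry(
            delay=5,
            max_attempts=3,
            delay_modifier=1.5,
            retry_on={InvalidPayload},  # guess: parser exceptions feed into the retry check
            log_before=LogEvent(LogLevel.INFO),
            log_after=LogEvent(LogLevel.WARNING),
            log_exhausted=LogEvent(LogLevel.CRITICAL),
            behavior="break",
        ),
    )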


Some more on this:

I'm hunting a case where the API delivers a 504 HTML gateway-timeout message that
consistently crashes the retries with a GracyParseFailed after the first retry.
I haven't been able to create a reproducible example yet, because in theory 504
responses shouldn't get parsed, and when I built a local API that returns 504s,
Gracy catches those fine.

So while I look into this, I'm trying to catch all Gracy exceptions:

I looked at the exceptions and found that
class GracyParseFailed(Exception):
is not a "GracyException".

What's the reasoning behind that decision?
(You wrote a great blog article on proper exception design, so I'm probably missing something here.)

In my case, using Gracy, I'd want to try/except all GracyExceptions as a safety net.
It would be useful to only need to catch GracyException, and maybe even have a GracyTypeError exception (that is also a Gracy exception), because currently the code may raise generic TypeErrors at some point.

Gracy gives up after first retry on 504 error when a NonOkResponse is raised

Hi,

I consistently have a problem with one API where Gracy only retries once and then quits with a NonOkResponse error.

What happens is that I query the API very slowly, once every 2 seconds, but sometimes the API goes down and returns HTTP status code 504.
What should happen is 5 retries with a long wait between each; what actually happens is that the first retry immediately crashes.

Note: I remove the non-public parts from the URL with "REMOVED"

NonOkResponse: https://<REMOVED>/2023-03-02%2018%3A00%3A00/24 raised 504, but it was expecting any successful status code
  File "radio.py", line 183, in main
    pl = await playlistapi.get_playlist(<removed>, playlist_dt=playlist_dt_hour)
  File "radio.py", line 149, in get_playlist
    return await self.get(PlaylistApiEndpoint.GET_PLAYLIST, params)
  File "gracy/_core.py", line 312, in get
    return await self._request("GET", endpoint, endpoint_args, *args, **kwargs)
  File "gracy/_core.py", line 303, in _request
    return await graceful_request
  File "gracy/_core.py", line 227, in _gracify
    raise validation_exc
  File "gracy/_core.py", line 201, in _gracify
    validator.check(result)
  File "gracy/_validators.py", line 15, in check
    raise NonOkResponse(str(response.url), response)

The timeline is like this:

  1. Gracy makes the request, which times out after almost a minute:

     "took 0:00:59.626751"

  2. Gracy schedules a retry 30 seconds later:

     GracefulRetry: https://<REMOVED>/2023-03-02%2018%3A00%3A00/24 will wait 30s before next attempt (1 out of 5)

     {asctime: 2023-03-03 00:47:26,091, CUR_ATTEMPT: 1, ENDPOINT: /2023-03-02%2018%3A00%3A00/24, MAX_ATTEMPT: 5, METHOD: GET, RETRY_DELAY: 30, UENDPOINT: /{datestr}%20{hour}%3A00%3A00/24, URL: https:///2023-03-02%2018%3A00%3A00/24, UURL: https:///{datestr}%20{hour}%3A00%3A00/24}

  3. Gracy retries exactly 30 seconds later:

     GracefulRetry: https://2023-03-02%2018%3A00%3A00/24 attempt (1 out of 5)

     {asctime: 2023-03-03 00:47:56,315, CUR_ATTEMPT: 1, ENDPOINT: 2023-03-02%2018%3A00%3A00/24, MAX_ATTEMPT: 5, METHOD: GET, RETRY_DELAY: 30, UENDPOINT: /{datestr}%20{hour}%3A00%3A00/24, URL: https:///2023-03-02%2018%3A00%3A00/24, UURL: https:///{datestr}%20{hour}%3A00%3A00/24}

  4. Within the same second, the error occurs:

     NonOkResponse: https:///2023-03-02%2018%3A00%3A00/24 raised 504, but it was expecting any successful status code

The API is defined this way

    # 1. Define your endpoints
    class PlaylistApiEndpoint(BaseEndpoint):
        GET_PLAYLIST = "<removed>/{datestr}%20{hour}%3A00%3A00/24"  

    # 2. Define your Graceful API
    class PlaylistAPI(Gracy[str]):
        class Config:  
            BASE_URL = "https://<REMOVED>"
            
            SETTINGS = GracyConfig(
                log_request=LogEvent(LogLevel.DEBUG),
                log_response=LogEvent(LogLevel.INFO, "{URL} took {ELAPSED}"),
                parser={
                    "default": lambda r: r.json()
                },
                throttling=GracefulThrottle(
                    rules=ThrottleRule(url_pattern=r".*",
                                       max_requests=1,
                                       per_time=datetime.timedelta(seconds=2)),
                ),
                retry=GracefulRetry(
                    delay=30,
                    max_attempts=5,
                    delay_modifier=1.5,
                    retry_on=None,
                    log_before=LogEvent(LogLevel.INFO),
                    log_after=LogEvent(LogLevel.WARNING),
                    log_exhausted=LogEvent(LogLevel.CRITICAL),
                    behavior="break",
                )
            )

        async def get_playlist(self,
                               # <REMOVED>
                               playlist_dt: datetime.datetime,
                               ) -> Awaitable[dict]:
            params = {
                "<REMOVED>":"",
                "datestr": f"{playlist_dt:%Y-%m-%d}",
                "hour": f"{playlist_dt:%H}"}

            return await self.get(PlaylistApiEndpoint.GET_PLAYLIST, params)

Thoughts: since the first retry always crashes immediately, I don't fully trust the 504 status code in the error message. Because the error comes so quickly, it's entirely possible that the API actually returned a correct result with a 200 status code on the retry. I also don't know yet whether I've had any successful retries with this API.

I'm logging/following this problem in Sentry. It occurs a few times each day and I will update this issue if I get more ideas or info.

Also, I saw your answer on issue #7 and will implement that; still, I was expecting a 504 (Gateway Timeout) to trigger a retry instead of attempting a parse.
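
One thing I may try, based on the STATUS_OR_EXCEPTION alias quoted in the typing issue above (so this is a guess, not verified): listing the 504 status explicitly in retry_on instead of using retry_on=None.

from http import HTTPStatus
from gracy import GracefulRetry, LogEvent, LogLevel

retry = GracefulRetry(
    delay=30,
    max_attempts=5,
    delay_modifier=1.5,
    retry_on={HTTPStatus.GATEWAY_TIMEOUT},  # guess: retry explicitly on 504
    log_before=LogEvent(LogLevel.INFO),
    log_after=LogEvent(LogLevel.WARNING),
    log_exhausted=LogEvent(LogLevel.CRITICAL),
    behavior="break",
)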

Throttling limit_per_second doesn't recognize values smaller than 1. Can you force a longer pause?

Hi, love your project and had to try it out immediately.

I ran into a showstopper right away.

For throttling I need to make sure there are several seconds between requests.

Real world intervals are:

  • 1 every 61 seconds (if you call more than once a minute you get valid answers, but they are empty)
  • 1 every 15 seconds (where I have a daily call limit to get position data and use up all my tokens if I ask more often)
  • 1 every 5 seconds (when web scraping, to make sure I don't take up too many resources)

I tried to enforce the 5-second rule like this:

        throttling=GracefulThrottle(
            rules=ThrottleRule(url_regex=r".*", limit_per_second = 0.2),
        )

If I do it like this, I get no throttling at all.

Is there a way of doing this, or of defining a minimum wait time after each request (which would also apply to retries)?
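
Update: the other issues on this page show ThrottleRule with url_pattern / max_requests / per_time instead of limit_per_second, which would express the 5-second gap directly. Assuming that signature is available in the installed version, something like:

import datetime
from gracy import GracefulThrottle, ThrottleRule  # import location assumed

throttling = GracefulThrottle(
    rules=ThrottleRule(
        url_pattern=r".*",
        max_requests=1,                          # at most one request...
        per_time=datetime.timedelta(seconds=5),  # ...per 5-second window
    ),
)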

POST request params

Hi, I want to build an HTTP client with Gracy.
Can you provide an example of passing the body, headers, and other params for the POST method?
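
(A rough sketch of what this might look like, not taken from Gracy's docs: the tracebacks in the other issues show extra keyword arguments being forwarded down to httpx, so the usual httpx arguments such as json= and headers= should pass through. The endpoint, URL, and the existence of a post helper symmetric to the get used elsewhere on this page are assumptions.)

from gracy import BaseEndpoint, Gracy, GracyConfig

class ItemsEndpoint(BaseEndpoint):
    CREATE_ITEM = "/items"  # hypothetical endpoint

class MyAPI(Gracy[str]):
    class Config:
        BASE_URL = "https://api.example.com"  # hypothetical
        SETTINGS = GracyConfig(parser={"default": lambda r: r.json()})

    async def create_item(self, payload: dict, token: str):
        # Assumption: a `post` helper mirrors `get` and forwards keyword
        # arguments (json=, data=, headers=, ...) to the underlying httpx call.
        return await self.post(
            ItemsEndpoint.CREATE_ITEM,
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
        )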
