
guidopetri / chess-pipeline

Pulling games from the Lichess API into a PostgreSQL database for data analysis.

License: GNU General Public License v3.0

Python 94.15% SQL 5.43% Shell 0.42%
python postgresql chess luigi-pipeline

chess-pipeline's People

Contributors: dependabot[bot], guidopetri

Forkers: jmherbst, tebellox

chess-pipeline's Issues

Win probability by Stockfish eval

It would be pretty cool to be able to tell win probabilities from the Stockfish eval. This would probably have to be calibrated on GM-level games, but it would be a neat addition to the metrics.
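A minimal sketch of the shape such a metric could take - a plain logistic curve over centipawn evals. The function name and the scale constant are illustrative only; the constant is exactly what would need calibrating on GM-level games:

```python
import math

def win_probability(eval_cp, scale=0.004):
    """Map a centipawn eval to White's win probability via a logistic curve.

    `scale` is an uncalibrated placeholder; fitting it (or replacing the
    whole curve) against real game outcomes is the actual work here.
    """
    return 1.0 / (1.0 + math.exp(-scale * eval_cp))
```

At eval 0 this gives exactly 0.5, and it saturates toward 0/1 for large evals, which is the qualitative behavior we'd want before any calibration.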

"Flag win" flag

I'm personally really interested in analyzing games that are "flag wins" - when you run an opponent's time out despite being in an inferior position. It would be amazing to have data on games like this. Unfortunately, it's not as simple as checking the very last position and the time left - lots of people blunder without much time left. Flagging might instead involve the flagger purposefully playing quick moves, or complicating the position (and keeping it complicated) to induce mistakes or make their opponent spend more time thinking. I have a feeling it'll end up being an arbitrary number of moves before the last move, but this could possibly be mitigated by looking at patterns of time usage across the whole game instead.

Add position FEN per move

I just realized I'm not storing position FENs whenever a game is pulled. This should probably go in a column in the game_moves table. Is this easily doable without having to re-run the entire pipeline for all time periods?

Add unit tests

Tests would be freakin' amazing. Maybe that way I can avoid pushing changes that break the codebase and force me to re-run code on the RPi later.

Once that's done, maybe we could set this up on TravisCI or something similar to make sure the code is always working.

WinRatioByColor fails when no games exist

If no games exist for the week, WinRatioByColor fails:

Runtime error:
Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/home/pi/Git/chess-pipeline/newsletter.py", line 113, in run
    1.01],  # enforce two digits of precision
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/plotting/_core.py", line 847, in __call__
    return plot_backend.plot(data, kind=kind, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/plotting/_matplotlib/__init__.py", line 61, in plot
    plot_obj.generate()
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/plotting/_matplotlib/core.py", line 265, in generate
    self._make_legend()
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/plotting/_matplotlib/core.py", line 572, in _make_legend
    ax.legend(handles, labels, loc="best", title=title)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/axes/_axes.py", line 406, in legend
    self.legend_ = mlegend.Legend(self, handles, labels, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend.py", line 575, in __init__
    self._init_legend_box(handles, labels, markerfirst)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend.py", line 833, in _init_legend_box
    fontsize, handlebox))
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend_handler.py", line 115, in legend_artist
    fontsize, handlebox.get_transform())
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend_handler.py", line 299, in create_artists
    self.update_prop(p, orig_handle, legend)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend_handler.py", line 72, in update_prop
    self._update_prop(legend_handle, orig_handle)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend_handler.py", line 65, in _update_prop
    self._update_prop_func(legend_handle, orig_handle)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/legend_handler.py", line 38, in update_from_first_child
    tgt.update_from(src.get_children()[0])
IndexError: list index out of range
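One possible guard, sketched with a hypothetical helper: short-circuit before pandas' plotting is ever called when the week's DataFrame is empty, so matplotlib never tries to build a legend with no artists:

```python
import pandas as pd

def plot_win_ratio(df, output_path):
    """Hypothetical fix sketch: bail out when there are no games for the
    week instead of letting matplotlib choke on an empty legend."""
    if df.empty:
        return False  # caller can then skip or placeholder this newsletter section
    ax = df.plot(kind='bar', ylim=[0.0, 1.01])  # enforce two digits of precision
    ax.get_figure().savefig(output_path, bbox_inches='tight')
    return True
```

The boolean return lets the newsletter task decide whether to drop the section or render a "no games this week" placeholder.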

Progress bar refactor

Currently, the progress bar indication on Luigi is based off of "time of day played". This is semi-useful in that it gives some indication of how far we've analyzed/downloaded games, but it's not the best, because if someone plays 100 games in a day and 1 game in the rest of the week, it looks like we're not making a lot of progress (since we're not progressing through time as much).

It would be better to have a counter that is based off of "which number game are we analyzing/downloading currently". However, I think this requires another lichess API pull - just so that we can count how many games we have to parse. Alternatively, we could pull each game from the lichess API individually (or in batches), then count them and then analyze them. Keeping the games in memory would be a drag though, so it would have to be in its own Task that writes to (several) files. Is this really a better idea?

query_for_column fails with empty tables

Traceback (most recent call last):
  File "/home/.../Git/chess_pipeline/chess_pipeline.py", line 108, in run
    evals_finished = query_for_column('game_evals', 'game_link')
  File "/home/.../Git/chess_pipeline/chess_pipeline.py", line 38, in query_for_column
    current_srs = DataFrame(result)[0]
...
KeyError: 0

We need an empty-DataFrame check there, and the empty result needs to be handled properly back on line 108.
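A sketch of that check, with a hypothetical helper standing in for the body of query_for_column (the real function returns a Series; returning a list here keeps the sketch simple):

```python
from pandas import DataFrame

def column_or_empty(result, column=0):
    """Hypothetical sketch: return the requested column's values, or an
    empty list when the query produced no rows (avoiding the KeyError)."""
    df = DataFrame(result)
    if df.empty:
        return []
    return df[column].tolist()
```

The caller on line 108 then just has to tolerate an empty collection, e.g. treating "no evals finished yet" as the empty set.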

Bishop pair visitor

It's pretty well known that having the bishop pair is an advantage in most positions in chess. But is this actually the case? We could find out by marking games where one side had the bishop pair and analyzing them. This could be done with a python-chess Visitor.
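As a first pass, simply replaying the mainline might be enough - no full Visitor needed. A sketch assuming python-chess (had_bishop_pair is a hypothetical name), flagging games where one side at some point held both bishops, on opposite square colors, while the opponent had fewer than two:

```python
import io

import chess
import chess.pgn

def had_bishop_pair(pgn_text, color=chess.WHITE):
    """Did `color` ever hold the bishop pair (one bishop on each square
    color) while the opponent had fewer than two bishops? Hypothetical."""
    game = chess.pgn.read_game(io.StringIO(pgn_text))
    board = game.board()
    for move in game.mainline_moves():
        board.push(move)
        bishops = board.pieces(chess.BISHOP, color)
        on_light = bishops & chess.BB_LIGHT_SQUARES
        on_dark = bishops & chess.BB_DARK_SQUARES
        if on_light and on_dark and len(board.pieces(chess.BISHOP, not color)) < 2:
            return True
    return False
```

A real Visitor-based version could additionally record *when* the imbalance arose and for how many plies it persisted, which matters for the analysis.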

Connection error

Got a weird connection error from the pipeline this morning. Traceback:

Runtime error:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 337, in connect
    cert = self.sock.getpeercert()
  File "/usr/lib/python3.7/ssl.py", line 953, in getpeercert
    self._check_connected()
  File "/usr/lib/python3.7/ssl.py", line 918, in _check_connected
    self.getpeername()
OSError: [Errno 107] Transport endpoint is not connected

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home//.local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 639, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 367, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home//.local/lib/python3.7/site-packages/six.py", line 702, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 337, in connect
    cert = self.sock.getpeercert()
  File "/usr/lib/python3.7/ssl.py", line 953, in getpeercert
    self._check_connected()
  File "/usr/lib/python3.7/ssl.py", line 918, in _check_connected
    self.getpeername()
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError(107, 'Transport endpoint is not connected'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home//.local/lib/python3.7/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/home//.local/lib/python3.7/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/home//Git/chess-pipeline/chess_pipeline.py", line 91, in run
    format=PYCHESS)
  File "/home//.local/lib/python3.7/site-packages/lichess/api.py", line 281, in user_games
    return _api_get('/api/games/user/{}'.format(username), kwargs, object_type=lichess.format.GAME_STREAM_OBJECT)
  File "/home//.local/lib/python3.7/site-packages/lichess/api.py", line 108, in _api_get
    return client.call(path, params, auth=auth, format=format, object_type=object_type)
  File "/home//.local/lib/python3.7/site-packages/lichess/api.py", line 73, in call
    resp = requests.get(url, params, headers=headers, cookies=cookies, stream=stream)
  File "/home//.local/lib/python3.7/site-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/home//.local/lib/python3.7/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home//.local/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/home//.local/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/home//.local/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(107, 'Transport endpoint is not connected'))

I suspect this is actually related to Lichess downtime, but I'd like to make sure.

"Naturality" of a move

A common comment about moves is that they feel "natural", e.g. recapturing pieces or developing pieces in a certain way (maybe I should post some examples of GMs saying this?). But what does this actually mean? Can we come up with a metric that captures how "natural" a move feels?

This could start out as heuristics (like the example above) but I think we'd have to come up with a better explanation based on the board position itself, not the last few moves - though maybe I'm wrong. Thoughts?

Reorganize repo

Honestly, the repo kind of looks confusing at first. Maybe it should be reorganized so that the source is in a src folder and docs/utils elsewhere?

Berserked flag for arena games

Arena games allow for "berserk", which halves time and removes any possible increment. This is a big deal for short games (and even longer games, since there's no increment), and obviously will affect how players play and what their performance is. It would be useful to have this as a flag in the database for the games that are in arenas.

Queries are slow

Several queries that use the tables (mainly chess_games, of course) are extremely slow - they take about 4-6 seconds to run. This can probably be improved with indices, but I'm not sure whether they're worth it, since maintaining them would add computational cost every time the pipeline runs at night.

I would like to investigate this to improve query runtimes.

Add example analytics to repo

Some of the example analytics done should probably be added to the repo in a folder somewhere. Maybe some of the graphs too? It could be done in .md files.

`since` refactor

The way that the since parameter works right now is kind of clunky: it'll run FetchLichessApiPGN and request from that date on. This is crappy when you miss a day or several (due to bugs...), since then you have to find the right date to request from, or delete all rows from the table_updates table.

Refactoring since to be a parameter in CopyGames should allow us to dynamically call the API endpoint in a better fashion. There could be other ways of structuring the Tasks in order to achieve a better way of doing this, too - maybe not have it be dynamic, but rather only one day at a time, which would let us use luigi staples like RangeDaily easier.

Fix imports for postgres_templates and configs

If both this project and charlesoblack/spotify-playlist-creator are in PYTHONPATH, then postgres_templates and configs imports fail (because they reference each other). Using relative imports would solve the problem. Alternatively, the project could be run inside of Docker instead, or with some other form of env management.

Upgrade to stockfish 14

Stockfish 12 is out. Not only does the evaluation method need to change, we also need to make sure that it doesn't take too much longer with NNUE (via benchmark testing) and we need to update the win probability model with the new evaluations.

Use cloud_eval

I rolled my own get_cloud_eval function because it's not available in python-lichess, so I now need to make sure I capture the response correctly. This means that if I get a 429 status, I need to wait a full minute before requesting again - something I had forgotten about.
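The backoff itself can be isolated from the HTTP call, which also makes it testable. A sketch with a hypothetical fetch_with_backoff helper; the real get_cloud_eval would pass in a small closure around requests.get against the documented https://lichess.org/api/cloud-eval endpoint:

```python
import time

def fetch_with_backoff(do_request, retries=3, wait=60):
    """Call `do_request` (a closure returning (status_code, payload)),
    sleeping `wait` seconds after each 429, as Lichess asks clients to."""
    for _ in range(retries):
        status, payload = do_request()
        if status == 429:
            time.sleep(wait)
            continue
        return payload
    raise RuntimeError('too many 429 responses from the cloud eval endpoint')
```

Keeping the retry loop separate from the request means the one-minute wait rule lives in exactly one place, instead of being re-remembered at every call site.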

Add tentative rating

Knowing whether someone's rating is tentative or not is useful for recognizing whether someone is actually at that skill level or not. Even better would be to have the rating deviation; I don't know if this is available directly per-game, however, or if it would be somehow related to the account (and thus inaccessible because it updates immediately after a game is played).

I believe this information is available in the JSON endpoint.

Run JSON fetch first and count # of games

Right now, I'm running a simple JSON fetch, followed by the PGN fetch, and separately, a different JSON fetch. It makes sense to only do one JSON fetch, and then use the game count for the PGN fetch.

(Honestly, though - I'd really prefer to just have a game count API endpoint...)

Write newsletter docs

Requirements: seaborn, beautifulsoup4, newsletter_cfg, sendgrid apikey

The newsletter docs need to be written.

Pawn promotion visitor

It could be useful to know whether a game had a pawn promote or not. This should be fairly straightforward to implement - in fact, there might be something easy in python-chess that helps us find this in a game; we might not even need a Visitor.
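Indeed, python-chess already tags promotions on the Move object itself (chess.Move has a `promotion` piece type), so scanning the mainline should be enough - a sketch with a hypothetical helper name:

```python
import io

import chess.pgn

def game_had_promotion(pgn_text):
    """True if any mainline move promoted a pawn (hypothetical helper)."""
    game = chess.pgn.read_game(io.StringIO(pgn_text))
    return any(move.promotion for move in game.mainline_moves())
```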

Win probabilities CSV has improper numbers

The CSV for win probabilities uses improper numbers. E.g.:

-47.59999999997319,1.2584390499731036e-06,0.0

Postgres doesn't parse the scientific notation on import, and it's probably also best to truncate the first number (which is the Stockfish eval value) to 2 decimal places. Wherever scientific notation was used, Postgres is currently importing a NaN value (which also doesn't seem to be caught by IS NULL). Obviously this breaks any possible analyses. Fixing this should be pretty easy: just change the .ipynb that generates the file to output it correctly.
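The output side of that fix might look like this sketch - fixed-point formatting so scientific notation never reaches the CSV (the helper name and the three-column layout are assumptions based on the example row above):

```python
def format_wp_row(eval_cp, win_prob, draw_prob):
    """Hypothetical formatter: 2-decimal eval, fixed-point probabilities,
    so no value is ever written in scientific notation."""
    return '{:.2f},{:.10f},{:.10f}'.format(eval_cp, win_prob, draw_prob)

# format_wp_row(-47.59999999997319, 1.2584390499731036e-06, 0.0)
# -> '-47.60,0.0000012584,0.0000000000'
```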

Get multiple perftypes

Another update to the lichess API is that we can request several perftypes at once. This is probably better - however, the "time_control_category" attribute relies on the perftype specified, so this needs to be looked into.

Pawn gambits

Lots of openings are gambits (e.g. Queen's Gambit, King's Gambit, Evans Gambit, Latvian Gambit). How come? Does giving up a pawn in the beginning of the game in exchange for a tempo or faster development actually help that much?

Being able to identify games that have gambits like that would help us analyze chess games and identify what gambits may or may not be playable. This might be as simple as looking for specific openings/ECO codes, or more involved like looking at evals and material balances in the beginning of the game.

Check if pgnInJson is useful for this project

The Lichess API now provides the full PGN in the JSON response to "games from a user" requests. Is this useful? What does the PGN look like exactly? It would prevent the code from having to issue 2 requests per game.

Opening/Middlegame/Endgame division

How often do you blunder away your game in the middlegame? How good is your endgame play? Do you give up the advantage in the opening and never recover?

Being able to identify when a game is in each section of opening/middlegame/endgame would be useful to answer these questions. Lichess/Stockfish(?) use some heuristics to identify when the middlegame/endgame start. From the Lichess source, it seems they use number of majors/minors on the board, as well as "mixedness" of the board, or if the back rank is sparse. Stockfish seems to use some major/minor piece calc, too.

This probably needs some more discussion/thought before implementation.
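To seed that discussion, here is a deliberately crude sketch of the major/minor-count idea, assuming python-chess; the thresholds are pure guesses and would need tuning against the Lichess/Stockfish heuristics (which also consider "mixedness" and back-rank sparsity):

```python
import chess

def game_phase(board):
    """Crude phase label from the total major/minor piece count.
    Thresholds are illustrative guesses, not the Lichess values."""
    majors_and_minors = sum(
        len(board.pieces(piece_type, color))
        for piece_type in (chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN)
        for color in (chess.WHITE, chess.BLACK))
    if majors_and_minors > 10:
        return 'opening'
    if majors_and_minors > 6:
        return 'middlegame'
    return 'endgame'
```

Running this over every stored position would already let us bucket blunders by phase, even before the heuristic is refined.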

Piece worth in time

How much is a bishop worth, in time units?

Sometimes, you can get away with playing a bullet game and blundering your queen away if the opponent only has a few seconds left on the clock. As long as the position is complicated enough (or even if it isn't, really, and you just have good mouse skills), your opponent simply can't move fast enough to beat you/prevent a draw by timeout vs insufficient material.

I can use the data that's being gathered here to evaluate how much time your opponent has to have on the clock before you can safely blunder a bishop away and still have good odds of getting away with it.

It'll probably be different depending on the timescale (e.g.: If a bishop is blundered with the opponent having 2s left on the clock, vs with 50s left on the clock in a bullet match), and there will (naturally!) be a lot of variance between opponents, but I think I can provide a pretty good estimate.

Stockfish benchmark code

It would be great to have some stockfish benchmark code in order to determine what's the best depth for --local-stockfish to search at. Maybe with some graphs?

Use cloud eval API if position is available

Brand new on the Lichess API: we can get cloud evals. This is probably faster than doing the search ourselves, but even if it isn't, it would give us a better depth. We should do the local eval if it's not available on cloud, as a fallback.


EloByWeekday fails when no games exist

If no games exist, EloByWeekday fails with:

Runtime error:
Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/home/pi/Git/chess-pipeline/newsletter.py", line 174, in run
    df['weekday_played'] = df['datetime_played'].dt.weekday
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/generic.py", line 5270, in __getattr__
    return object.__getattribute__(self, name)
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/accessor.py", line 187, in __get__
    accessor_obj = self._accessor(obj)
  File "/home/pi/.local/lib/python3.7/site-packages/pandas/core/indexes/accessors.py", line 338, in __new__
    raise AttributeError("Can only use .dt accessor with datetimelike values")
AttributeError: Can only use .dt accessor with datetimelike values
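A sketch of a guard, using a hypothetical helper: the empty frame comes back with object dtype, so coerce to datetime and special-case emptiness before ever touching the .dt accessor:

```python
import pandas as pd

def add_weekday(df):
    """Hypothetical fix sketch: an empty result set has object dtype,
    which breaks the .dt accessor, so handle that case explicitly."""
    if df.empty:
        df['weekday_played'] = pd.Series(dtype='int64')
        return df
    df['datetime_played'] = pd.to_datetime(df['datetime_played'])
    df['weekday_played'] = df['datetime_played'].dt.weekday
    return df
```

The explicit pd.to_datetime also protects the non-empty path, in case the DB driver ever hands back strings instead of timestamps.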

Rerun analyses for positions

I found this position in the db that has an evaluation of ~ -5: 4Rbk1/5p1p/1p4pB/3p1b2/1P1P4/4P3/1r1NKPPP/8 b - - 3. Obviously this is checkmate in 3. Depth 5 stockfish 10 on win-x64 has it at -778 cp, so I suspect it's just an imported row that had a very low depth when it was evaluated.

I should probably just rerun the evaluations for all positions in the db up to depth 20.

Position evals are being inserted multiple times

Several copies of the same FEN eval are being inserted. This shouldn't be happening at all (since it goes through postgres_templates.TransactionFactTable), but also, there should be no duplicate FENs by the time it gets there (since GetEvals should dedup FENs). What exactly is going on?

Lucena/Philidor position flags

It might be useful to know whether specific endgame variants came up, e.g. Lucena/Philidor positions. This might entail looking at the FEN for a position and some regex (maybe) to determine whether that position was reached.

Uneven exchanges

I can imagine it would be useful to see uneven exchanges in the data - e.g. bishop vs knight (#10), rooks vs queen, rook + minor + pawn vs queen. I'm not sure how to use this data nor how to get this from the PGN, but I think it would be interesting to analyze games that have uneven material like that. It could also just be used to eliminate games that have uneven material and focus on positional games.

Piece sac visitor

I think it might be useful/interesting for analysis if we can tell whether there was a piece sac in a game. This could be difficult to implement since I can think of a few situations where a player ends up a piece up in material:

  • a simple blunder (e.g. 3 defending vs 2 attacking and miscounting/not seeing defenders)
  • a tactical piece sac that leads to the player who sac'd being up positionally
  • a tactical piece sac that leads to the player winning back material (e.g. sac queen -> fork king + queen and the sacrificing player is up a pawn/piece/whatever)
  • a "time"/pressure piece sac that leads to a player having an attack that is difficult to defend against

Obviously blunders shouldn't be counted as piece sacs, but the 4th situation might be considered a piece sac. This implies that it's not just a move that leads to a |3| variation (or so) in eval - we also need to consider what tactics might be available, or how difficult that position is to play.

"Complicatedness" of a position

This goes a little hand in hand with #14. GMs also often comment that positions are complicated. What does this actually mean? Recapturing a piece when you're a piece down isn't complicated; but a very closed position, where only one pawn push wins the game while every other move looks just as innocent but only draws, might be considered "complicated".

Much like the other issue, I think we could start this with heuristics, but then later develop a better explanation/formula for this metric. I have some commentary from the Lichess discord server saved somewhere that I'll try to post here that revolves around this topic.

Better setup instructions

The setup_postgresql.sql file doesn't actually set everything up properly. luigi_user needs to be granted rights on tables, and the tables need to be created in the first place (through Bash maybe? we need to for-loop over all the .sql files in table-sql). We also need to copy the ECO code info in (from the content folder).

Add Leela Chess Zero evaluations

This is far out, but hopefully doable someday: Maybe Lc0 evaluations could be added (as win probabilities? or maybe Q score directly?). This would require a GPU most likely, at least if we want any proper depth/node search.

Remove Bayesian WP model

This doesn't seem to provide that much information relative to the "standard" frequentist-approach model, and it severely overcomplicates things right now. As a result, I think it's best if the Bayesian win probability model got removed in favor of focusing development on better frequentist models.

Migrate from RPi to VPS

Running this on a raspberry pi is super cool, but it's also super slow. Running Stockfish, especially, has made me realize that the RPi isn't really that great for "production-level" code - it's more like a testing ground. Additionally, the RPi comes with a few of its own problems: I need to have it connected somewhere for it to work (VPSs are always on), it needs to be on a DynDNS/permanent IP for anything to query from the DB, and opening ports on a home network (for the RPi to be open to queries from e.g. a VPS) isn't the safest thing in the world.

As such, I need to work on migrating the current setup to a VPS and making sure that it works there. Once it's up and running successfully, I can de-commission the RPi from production to testing/development.

Develop better win probability model

Right now, the win probability model relies solely on game evals and is not that accurate when compared to the data I've got. The main problem stems from enforcing the logistic model, I think - despite this making the most sense on a "win probability" level, it might not make sense for my data due to its source.

One thing that I can definitely also try out is incorporating clock times and increment values. This would certainly make the WP model more nuanced.

Finally, as it stands, the WP model is probably a good estimator for longer time control games played OTB, but it's not particularly good for internet games played at bullet/blitz time controls. What kind of environment do I really want to model?

Credit for these ideas goes to Scott Powers of Dodgers fame.

EloByWeekday failing on nonexistent saturday games

When there are no games in the last day of the week (currently saturday), EloByWeekday fails with the following error message:

Runtime error:
Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 199, in run
    new_deps = self._run_get_new_deps()
  File "/home/pi/.local/lib/python3.7/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
    task_gen = self.task.run()
  File "/home/pi/Git/chess-pipeline/newsletter.py", line 254, in run
    bbox_inches='tight')
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/figure.py", line 2180, in savefig
    self.canvas.print_figure(fname, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/backend_bases.py", line 2065, in print_figure
    **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/backends/backend_agg.py", line 527, in print_png
    FigureCanvasAgg.draw(self)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/backends/backend_agg.py", line 388, in draw
    self.figure.draw(self.renderer)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/figure.py", line 1709, in draw
    renderer, self, artists, self.suppressComposite)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/image.py", line 135, in _draw_list_compositing_images
    a.draw(renderer)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/axes/_base.py", line 2647, in draw
    mimage._draw_list_compositing_images(renderer, self, artists)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/image.py", line 135, in _draw_list_compositing_images
    a.draw(renderer)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/text.py", line 2355, in draw
    xy_pixel = self._get_position_xy(renderer)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/text.py", line 1905, in _get_position_xy
    return self._get_xy(renderer, x, y, self.xycoords)
  File "/home/pi/.local/lib/python3.7/site-packages/matplotlib/text.py", line 1761, in _get_xy
    y = float(self.convert_yunits(y))
TypeError: only size-1 arrays can be converted to Python scalars

This doesn't necessarily happen in line 254. Running the function piecewise in an interpreter seems to bring this error up earlier.
