algotraders / stock-analysis-engine

Backtest 1000s of minute-by-minute trading algorithms for training AI with automated pricing data from IEX, Tradier, and FinViz. Datasets and trading performance are automatically published to S3 for building AI training datasets for teaching DNNs how to trade. Runs on Kubernetes and docker-compose. >150 million trading history rows generated from +5000 algorithms. Heads up: Yahoo's Finance API was disabled on 2019-01-03 (https://developer.yahoo.com/yql/).

Home Page: https://stock-analysis-engine.readthedocs.io/en/latest/README.html

Python 19.45% Shell 1.66% Jupyter Notebook 78.71% Dockerfile 0.04% Makefile 0.01% Smarty 0.11% Vim Script 0.02%
algorithmic-trading backtesting deep-learning deep-learning-tutorial deep-neural-networks docker helm helm-charts iex iexcloud jupyter keras kubernetes minio options redis s3 stocks tensorflow tradier

stock-analysis-engine's Issues

Fix Python 3.7 on Mac

  File "/opt/venv/lib/python3.7/site-packages/celery/backends/redis.py", line 22
    from . import async, base
                      ^
SyntaxError: invalid syntax
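
This happens because async became a reserved keyword in Python 3.7 and older Celery releases shipped a module named celery/backends/async.py. A hedged fix, assuming the project's other pins allow it, is to upgrade Celery inside the virtualenv to a release where that module was renamed to asynchronous (4.3.0 or later):

# upgrade celery in the project's virtualenv; the version floor is a suggestion,
# not an official pin from this repo
/opt/venv/bin/pip install --upgrade "celery>=4.3"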

KeyError("No metadata except PKG-INFO is available")

When trying to run fetch -t SPY

I get this error:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2862, in _dep_map
    return self.__dep_map
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2669, in __getattr__
    raise AttributeError(attr)
AttributeError: _DistInfoDistribution__dep_map

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2853, in _parsed_pkg_info
    return self._pkg_info
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2669, in __getattr__
    raise AttributeError(attr)
AttributeError: _pkg_info

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/fetch", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3088, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3072, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 3101, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 892, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 786, in resolve
    new_requirements = dist.requires(req.extras)[::-1]
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2613, in requires
    dm = self._dep_map
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2864, in _dep_map
    self.__dep_map = self._compute_dependencies()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2873, in _compute_dependencies
    for req in self._parsed_pkg_info.get_all('Requires-Dist') or []:
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2855, in _parsed_pkg_info
    metadata = self.get_metadata(self.PKG_INFO)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1794, in get_metadata
    raise KeyError("No metadata except PKG-INFO is available")
KeyError: 'No metadata except PKG-INFO is available'
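
pkg_resources raises this when a distribution in the environment only has a bare PKG-INFO file instead of full metadata (requires.txt / dist-info), so resolving the fetch entry point's requirements fails. A common fix, hedged because the right target depends on which distribution's metadata is incomplete, is to upgrade pip and setuptools and reinstall the engine with pip from the repo checkout so complete metadata is regenerated:

# regenerate full package metadata with a pip-based reinstall from the checkout
python3 -m pip install --upgrade pip setuptools
python3 -m pip install --force-reinstall -e .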

Proposal - docker container image v1

  • I propose that by default it runs an Ansible playbook to trigger stock analysis.
  • I propose it is a fork/partial implementation of https://github.com/jay-johnson/antinex-core/blob/master/docker/Dockerfile, which is a dedicated Python 3 runtime with data-science elements.
  • I propose the container starts a webapp for viewing the code docs in the container if we do not open source the docs with readthedocs.org.
  • I propose https://github.com/jay-johnson/antinex-client is added to the virtualenv so we can use the AI stack that already runs on Kubernetes for automation-initiated AI predictions.
    This may be a v2 item, but the purchase/buy/sell algorithms can use https://github.com/quantopian/zipline to automate many strategy backtests for accuracy, and Celery can horizontally scale it. The last time I tried zipline it had numpy + pandas version-compatibility issues; that could be a problem if we try to use it in our container without a separate, dedicated zipline virtualenv inside the container. Celery can perform handoffs between components, so this shouldn't be a problem as long as we communicate over a messaging layer (redis/rabbitmq/sqs/etc); see the sketch after this list.
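
A minimal sketch of the two-virtualenv layout described in the last item, assuming antinex-client and zipline both install cleanly from PyPI for the chosen Python version (paths are illustrative, not a finalized build):

# keep zipline's numpy/pandas pins isolated from the engine's stack
python3 -m venv /opt/venv            # engine + antinex-client
python3 -m venv /opt/zipline-venv    # dedicated zipline runtime
/opt/venv/bin/pip install antinex-client
/opt/zipline-venv/bin/pip install zipline
# the two runtimes never import each other; celery hands work between them
# over the shared messaging layer (redis/rabbitmq/sqs)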

rook-ceph cluster osd pod in permanent crash loop CrashLoopBackOff with steps on how to fix without deleting the entire cluster

Took 58 days to hit the rook-ceph cluster issue again, and I figured I would leave this open until there's the next "days since last accident" event:

jay@home1:/opt/metalnetes$ k get po -n rook-ceph
NAME                                          READY   STATUS             RESTARTS   AGE
rook-ceph-mgr-a-68cb58b456-6m8df              1/1     Running            0          58d
rook-ceph-mon-a-855bbddfd4-sxq9m              1/1     Running            0          58d
rook-ceph-mon-b-f949d66dd-rr9lk               1/1     Running            1          58d
rook-ceph-mon-c-6596bcf68c-88prg              1/1     Running            5          58d
rook-ceph-osd-0-5dc6b6686f-hmtkd              1/1     Running            1          58d
rook-ceph-osd-1-5fd56d7798-fttdh              0/1     CrashLoopBackOff   239        58d
rook-ceph-osd-2-5d5965fcc8-l6qz5              1/1     Running            0          58d
rook-ceph-osd-prepare-m10.example.com-pj9p7   0/2     Completed          0          15d
rook-ceph-osd-prepare-m11.example.com-tq8zr   0/2     Completed          0          15d
rook-ceph-osd-prepare-m12.example.com-26p9x   0/2     Completed          2          15d
rook-ceph-tools-bffbf4d8f-znj7q               1/1     Running            0          58d
jay@home1:/opt/metalnetes$ 

I deleted the bad osd pod and then had to restart the ae pods for the engine and backtester to get back into a good state.

logs in the engine showing the error:

2019-05-29 13:47:42,319 - celery.worker.strategy - INFO - Received task: ae.work_tasks.get_new_pricing_data.get_new_pricing_data[cf17e8a0-a5e5-41c6-9857-f4b983f4cf75]  
2019-05-29 13:47:42,924 - ae.td.fetch_api - INFO - ticker=QQQ-tdcalls - calls - close=None ticker=QQQ exp_date=2019-06-21
2019-05-29 13:47:43,872 - ae.td.fetch_api - INFO - ticker=XLK-tdcalls - calls - close=72.415 ticker=XLK exp_date=2019-06-21
Process 'ForkPoolWorker-747' pid:17715 exited with 'signal 9 (SIGKILL)'

Steps to fix using the https://github.com/jay-johnson/metalnetes repo's helm charts:

  1. Delete the bad osd pod:
kubectl delete -n rook-ceph po rook-ceph-osd-1-5fd56d7798-fttdh
  2. Wait a few minutes and watch the recovery in Grafana's Ceph cluster dashboard:

https://grafana.example.com/d/vwcB0Bzmk/ceph-cluster?orgId=1&refresh=10s

  3. Delete the ae chart (engine and backtester) and start the ae charts back up:
helm delete --purge ae
cd ae
./start.sh
  4. Watch the engine logs to ensure fetches work again:
./logs-engine.sh
2019-05-29 13:58:25,805 - ae.td.fetch_api - INFO - ticker=XLF-tdcalls - calls - close=26.37 ticker=XLF exp_date=2019-06-21
2019-05-29 13:58:27,204 - ae.td.fetch_api - INFO - ticker=XLF-tdputs - puts - close=26.37 ticker=XLF exp_date=2019-06-21
2019-05-29 13:58:27,546 - celery.worker.strategy - INFO - Received task: ae.work_tasks.get_new_pricing_data.get_new_pricing_data[e1d62397-9a40-4951-80f9-1a815b3f4424]  
2019-05-29 13:58:28,704 - ae.td.fetch_api - INFO - ticker=XLK-tdcalls - calls - close=72.455 ticker=XLK exp_date=2019-06-21
2019-05-29 13:58:29,148 - ae.work_tasks.custom_task - INFO - on_success label=[ticker=XLF] - task_id=45ac9bbd-8b23-4cb6-a902-435b83efbdde
2019-05-29 13:58:29,148 - celery.app.trace - INFO - Task ae.work_tasks.get_new_pricing_data.get_new_pricing_data[45ac9bbd-8b23-4cb6-a902-435b83efbdde] succeeded in 4.461963811889291s: None
2019-05-29 13:58:30,356 - ae.td.fetch_api - INFO - ticker=XLK-tdputs - puts - close=72.455 ticker=XLK exp_date=2019-06-21
2019-05-29 13:58:32,619 - ae.work_tasks.custom_task - INFO - on_success label=[ticker=XLK] - task_id=e1d62397-9a40-4951-80f9-1a815b3f4424
2019-05-29 13:58:32,620 - celery.app.trace - INFO - Task ae.work_tasks.get_new_pricing_data.get_new_pricing_data[e1d62397-9a40-4951-80f9-1a815b3f4424] succeeded in 5.071241177152842s: None
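
An optional, hedged verification after deleting the osd pod and before restarting the ae chart is to confirm the replacement osd rejoins and the cluster reports HEALTH_OK, using the rook-ceph-tools pod from the listing above and standard ceph commands:

# run cluster health checks from the rook toolbox pod (pod name taken from the
# 'k get po -n rook-ceph' output above)
kubectl -n rook-ceph exec -it rook-ceph-tools-bffbf4d8f-znj7q -- ceph status
kubectl -n rook-ceph exec -it rook-ceph-tools-bffbf4d8f-znj7q -- ceph osd tree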

pkg_resources.ContextualVersionConflict: numpy 1.14.0

Environment

  • Operating System: Darwin Kernel Version 17.5.0: Mon Mar 5 22:24:32 PST 2018; root:xnu-4570.51.1~1/RELEASE_X86_64 x86_64
  • Python Version: Python 3.6.7
  • Python Bitness: 64
  • Python runtime: python is /opt/venv/bin/python
    python is /usr/bin/python
  • Python alias: alias python | grep python
  • Python packages: pip freeze
    tensorflow==2.0.0
    tensorflow-estimator==2.0.1
  • How did you install Stock Analysis Engine: pip
    • Are you using a virtualenv? yes
    • Are you using a pipenv? no
    • Are you using anaconda? no
  • Are you running outside docker or inside containers? inside docker

Now that you know a little about me, let me tell you about the issue I am having:

Description of Issue

The numpy version conflicts with what is required by tensorflow vs. the main stack.

  • What did you expect to happen?
  • What happened instead?

--> fetch -t SPY
Traceback (most recent call last):
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 583, in _build_master
    ws.require(__requires__)
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (numpy 1.14.0 (/opt/venv/lib/python3.6/site-packages), Requirement.parse('numpy<2.0,>=1.16.0'), {'tensorflow'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv/bin/fetch", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3251, in <module>
    @_call_aside
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3235, in _call_aside
    f(*args, **kwargs)
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3264, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 585, in _build_master
    return cls._build_from_requirements(__requires__)
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 598, in _build_from_requirements
    dists = ws.resolve(reqs, Environment())
  File "/opt/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (numpy 1.14.0 (/opt/venv/lib/python3.6/site-packages), Requirement.parse('numpy<2.0,>=1.16.0'), {'tensorflow'})

Here is how you can reproduce this issue on your machine:

What steps have you taken to resolve this already?

Tried upgrading numpy; however, the main engine then stops working since it requires a lower version (i.e. 1.14).
...

Anything else?

...
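
One hedged workaround, assuming TensorFlow is only needed by the AI/prediction pieces and not by the engine itself, is to keep TensorFlow in its own virtualenv so its numpy>=1.16 requirement never collides with the engine's numpy 1.14 pin (the /opt/tf-venv path is illustrative):

# keep tensorflow's numpy requirement isolated from /opt/venv
python3 -m venv /opt/tf-venv
/opt/tf-venv/bin/pip install "numpy>=1.16,<2.0" "tensorflow==2.0.0"
# the engine keeps running out of /opt/venv with its pinned numpy==1.14.0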

Paper Trading

Amazing project! But what do we need for paper trading?

creating stock_analysis_engine.egg-info error: could not create 'stock_analysis_engine.egg-info': Permission denied

Hello,

Before I tell you about my issue, let me describe my environment:

Environment

  • Operating System: Ubuntu 18.04
  • Python Version: 3.6.9
  • Python Bitness: 64
  • Python runtime: /opt/venv/bin/python
  • Python alias: not found
  • Python packages: pip freeze gives no output; no requirements.txt file generated
  • How did you install Stock Analysis Engine: followed the Ubuntu instructions
    • Are you using a virtualenv? yes
    • Are you using a pipenv? no
    • Are you using anaconda? no
  • Are you running outside docker or inside containers? did not get that far

Now that you know a little about me, let me tell you about the issue I am having:

Description of Issue

When I get to the 'pip install -e .' step, I get the following error:

ERROR: Command errored out with exit status 1:
     command: /opt/venv/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/opt/sa/setup.py'"'"'; __file__='"'"'/opt/sa/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info
         cwd: /opt/sa/
    Complete output (3 lines):
    running egg_info
    creating stock_analysis_engine.egg-info
    error: could not create 'stock_analysis_engine.egg-info': Permission denied
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
  • What did you expect to happen?

Not have an error and move to the next step

  • What happened instead?

Error

Here is how you can reproduce this issue on your machine:

Reproduction Steps

  1. Follow Ubuntu installation steps
  2. execute 'pip install -e .'

What steps have you taken to resolve this already?

Google.
...

Anything else?

No.

...

Sincerely,
$ whoami
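
A likely fix, assuming the repo was cloned into /opt/sa with root ownership (the cwd shown in the error output), is to give the current user write access to the checkout and rerun the editable install; this is a hedged suggestion, not an official install step:

# make the checkout writable by the current user, then retry the editable install
sudo chown -R $(whoami) /opt/sa
cd /opt/sa
pip install -e .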

Fix S3 feature flag: publishing to S3 even when intraday.s3.enabled: False and fetch -U 0 are used

Engine logs before PR #349 showing the timeout and the jobs taking >30 seconds to finish

2019-03-01 02:01:19,220 - celery.worker.strategy - INFO - Received task: ae.work_tasks.get_new_pricing_data.get_new_pricing_data[0c8d4817-c84a-4dff-8550-724083d216f6]  
2019-03-01 02:01:24,347 - celery.worker.strategy - INFO - Received task: ae.work_tasks.get_new_pricing_data.get_new_pricing_data[eb2e14bb-e8d5-4af2-b715-b0ce708278d7]  
2019-03-01 02:01:36,947 - ae.work_tasks.publish_pricing_update - ERROR - ticker=XLF-tdcalls failed uploading bucket=backtest key=XLF_2019-02-28_tdcalls ex=Could not connect to the endpoint URL: "http://0.0.0.0:9000/backtest/XLF_2019-02-28_tdcalls"
2019-03-01 02:01:45,095 - ae.work_tasks.publish_pricing_update - ERROR - ticker=XLP-tdcalls failed uploading bucket=backtest key=XLP_2019-02-28_tdcalls ex=Could not connect to the endpoint URL: "http://0.0.0.0:9000/backtest/XLP_2019-02-28_tdcalls"
2019-03-01 02:01:52,414 - ae.work_tasks.publish_pricing_update - ERROR - ticker=XLF-tdputs failed uploading bucket=backtest key=XLF_2019-02-28_tdputs ex=Could not connect to the endpoint URL: "http://0.0.0.0:9000/backtest/XLF_2019-02-28_tdputs"
2019-03-01 02:01:52,455 - celery.app.trace - INFO - Task ae.work_tasks.get_new_pricing_data.get_new_pricing_data[0c8d4817-c84a-4dff-8550-724083d216f6] succeeded in 33.23137504400802s: None
2019-03-01 02:02:02,134 - ae.work_tasks.publish_pricing_update - ERROR - ticker=XLP-tdputs failed uploading bucket=backtest key=XLP_2019-02-28_tdputs ex=Could not connect to the endpoint URL: "http://0.0.0.0:9000/backtest/XLP_2019-02-28_tdputs"
2019-03-01 02:02:02,179 - ae.work_tasks.custom_task - INFO - on_success label=[ticker=XLP] - task_id=eb2e14bb-e8d5-4af2-b715-b0ce708278d7
2019-03-01 02:02:02,179 - celery.app.trace - INFO - Task ae.work_tasks.get_new_pricing_data.get_new_pricing_data[eb2e14bb-e8d5-4af2-b715-b0ce708278d7] succeeded in 37.829348045022925s: None
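
A hedged way to confirm a fix, using only flags already referenced in this issue and the earlier reports (fetch -t and -U 0), is to run a fetch with S3 publishing disabled and verify that no upload errors like the ones above appear:

# with the S3 flag disabled, a fetch should not attempt any uploads to the minio
# endpoint (0.0.0.0:9000 in the logs above); watch the fetch output or the engine
# worker logs for "failed uploading" lines
fetch -t SPY -U 0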
