
backtrader2 / backtrader


This project forked from mementum/backtrader


Python Backtesting library for trading strategies

Home Page: https://www.backtrader.com

License: GNU General Public License v3.0

Python 99.99% Shell 0.01%

backtrader's People

Contributors

ab-trader andrelcgt attiasr backtrader beejaysea blenessy borodiliz cgi1 chepurko dimitar-petrov discosultan dovahcrow edandavi femtotrader fygul gwill happydasch houen kizzx2 mementum nalepae neilsmurphy oudingfan peterblenessy remroc rterbush verybadsoldier vladisld xiexiaonan xnox


backtrader's Issues

OLS_Slope_InterceptN slope, intercept should be intercept, slope

Andre Tavares posted this question in the community:

André Tavares 9 days ago

The class OLS_Slope_InterceptN calculates slope and intercept. However, I think they are in the wrong order. The code is currently:

slope, intercept = sm.OLS(p0, p1).fit().params

I think the correct version is:

intercept, slope = sm.OLS(p0, p1).fit().params

Am I right?

The code in question can be found in the indicators/ols.py module at line 58.

My cursory look at the statsmodels documentation suggests that the backtrader implementation is indeed backwards.

Here are a few helpful links.
https://www.statsmodels.org/devel/generated/statsmodels.regression.linear_model.OLS.html
https://www.statsmodels.org/stable/examples/notebooks/generated/ols.html
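The ordering question can be sanity-checked without statsmodels at all. Below is a small self-contained least-squares sketch (the helper name `ols_params` is made up for illustration): when the design matrix puts the constant column first, as `sm.add_constant` does, the fitted coefficients come back in that same column order, i.e. (intercept, slope).

```python
# Pure-Python OLS for y = b0 + b1*x via the 2x2 normal equations,
# illustrating that the coefficient order follows the column order
# of the design matrix: constant column first means intercept first.

def ols_params(y, x):
    """Return (intercept, slope) for the regression y = b0 + b1*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    det = n * sxx - sx * sx
    b0 = (sxx * sy - sx * sxy) / det   # intercept (first, like the const column)
    b1 = (n * sxy - sx * sy) / det     # slope
    return b0, b1

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x
intercept, slope = ols_params(y, x)
print(intercept, slope)    # → 1.0 2.0
```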

Thoughts?

IB : Exception in message dispatch. Handler 'commissionReport' for 'commissionReport' <...> in push_commissionreport ex = self.executions.pop(cr.m_execId)

Not really a technical bug on the Backtrader side; IMO it is more a design problem on the IB side, but I am creating this issue for information.
When trading with BT via IB, one can get this kind of message:

08-Oct-20 17:47:24 ERROR Exception in message dispatch. Handler 'commissionReport' for 'commissionReport'
Traceback (most recent call last):
  File "x/backtrader-J2SaRgr0/lib/python3.7/site-packages/ib/opt/dispatcher.py", line 44, in __call__
    results.append(listener(message))
  File "x/backtrader/backtrader/backtrader/stores/ibstore.py", line 1323, in commissionReport
    self.broker.push_commissionreport(msg.commissionReport)
  File "x/labs/backtrader/backtrader/backtrader/brokers/ibbroker.py", line 482, in push_commissionreport
    ex = self.executions.pop(cr.m_execId)
KeyError: '0000e215.5f7e7752.01.01'

This exception is raised when backtrader is running with a given ClientID and an order is placed manually or through another API connection with a different ClientID.
The reason is that BT cannot filter commissionReport messages by ClientID, as IB does not include a ClientID in them.

More details on this :

  • IB does send the ClientID and the OrderId with the openorder callback, but not the ExecId (which is normal, as the order may not have been executed yet)

  • IB sends the ExecId, the ClientID and the OrderId with the execDetails messages.

  • IB does not send the ClientID or the OrderId with the commissionReport messages; the only field that can be linked back to an order is the ExecId.

So a running strategy will receive commissionReport messages for all orders executed on TWS: its own orders, orders placed manually, and orders from other ClientIDs. It will, however, not receive the matching openorder and execDetails messages for the foreign ones.
There are two exceptions to this:

  • ClientID 0 will receive openorders, execDetails and commissionReports for itself and for orders placed manually

  • If a ClientID is set as "Master API client ID" (in Global Configuration / API / Settings), it will receive openorder, execDetails and commissionReport messages for all clients.
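Given that commissionReport carries no ClientID, one possible mitigation is to simply ignore reports whose ExecId was never registered, instead of letting the pop raise KeyError. This is only a sketch (the tiny Broker class and its attribute names below are hypothetical, not backtrader's actual code):

```python
class Broker:
    """Toy stand-in for ibbroker: tracks executions by ExecId."""
    def __init__(self):
        self.executions = {}   # exec_id -> order reference
        self.reports = []      # (order, commission) pairs we accept

    def push_commissionreport(self, exec_id, commission):
        # pop with a default instead of raising KeyError on foreign ExecIds
        ex = self.executions.pop(exec_id, None)
        if ex is None:
            return             # not one of our executions: silently ignore
        self.reports.append((ex, commission))

b = Broker()
b.executions["abc"] = "order-1"
b.push_commissionreport("abc", 1.0)   # ours: recorded
b.push_commissionreport("zzz", 2.0)   # foreign ClientID's fill: ignored
print(b.reports)  # → [('order-1', 1.0)]
```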

Enhancement: Pluggable Strategy Optimizers

Currently Backtrader optimizes strategy parameters with a grid search, running through all possible parameter configurations. More complex strategies would benefit from more advanced parameter optimizers. While it is possible to perform such optimizations outside of Backtrader, one would lose the already established performance improvements from running the same strategy with different parameters.

The Optimizer class would take care of:

  • Configuring parameter sets to be tested
  • Ranking different parameter settings (ranking function should also be configurable for users)
  • Determining when to stop the optimization run

Possible interesting stock optimizers are:

  • Grid search (keep current behavior as default)
  • Bayesian optimization
  • Evolutionary optimization

I'm happy to contribute to this enhancement if there is interest in it.
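A minimal sketch of what such a pluggable interface could look like (all names here are hypothetical, not part of the backtrader API): the optimizer proposes parameter sets, receives a score for each, and reports the winning configuration. Grid search then becomes just the default implementation.

```python
import itertools

class GridSearchOptimizer:
    """Sketch of a pluggable optimizer; reproduces exhaustive-grid behavior."""
    def __init__(self, param_grid):
        self.param_grid = param_grid
        self.results = []

    def suggest(self):
        # yield every parameter combination (the current backtrader behavior)
        keys = list(self.param_grid)
        for combo in itertools.product(*(self.param_grid[k] for k in keys)):
            yield dict(zip(keys, combo))

    def report(self, params, score):
        # a Bayesian/evolutionary optimizer would also update its model here
        self.results.append((score, params))

    def best(self):
        return max(self.results, key=lambda r: r[0])[1]

opt = GridSearchOptimizer({"fast": [5, 10], "slow": [20, 30]})
for params in opt.suggest():
    # toy objective standing in for a backtest's ranking function
    score = -abs(params["fast"] - 5) - abs(params["slow"] - 30)
    opt.report(params, score)
print(opt.best())  # → {'fast': 5, 'slow': 30}
```

A Bayesian or evolutionary optimizer would implement the same suggest/report pair but generate candidates adaptively and stop early.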

B2: Creating new additions/enhancements vs. solely bug fixes. Future direction.

There was an indicator addition that brought up the discussion of creating enhancements in Backtrader 2 vs. just bug fixes. I'm going to pin this discussion to the top of the issues for a while, as this is important: if we are going to build backtrader2 for the future, we should all have input into what that might look like.

I'm going to add some of the comments below.

Price used for limit order execution can be incorrectly outside of the high/low range

Community Forum Discussion

https://community.backtrader.com/topic/2542/bug-in-bbroker-_try_exec_limit

Description

The price used for limit order execution can incorrectly fall outside of the bar's high/low range (below the low price in particular), even though the slippage parameters should prevent it. This can happen when the slippage applied to the open price moves the execution price below the bar's low.

Sample script

# Bug in bbroker._try_exec_limit()
# https://community.backtrader.com/topic/2542/bug-in-bbroker-_try_exec_limit

from datetime import datetime
import backtrader as bt
import backtrader_addons as bta

class LimitOrderTest(bt.SignalStrategy):

    def log(self, txt, dt=None):
        dt = dt or self.datas[0].datetime.date(0)
        print('%s, %s' % (dt.isoformat(), txt))

    def notify_order(self, order):
        if order.status in [order.Submitted, order.Accepted]:
            return

        if order.status in [order.Completed]:
            if order.isbuy():
                self.log(
                    'BUY EXECUTED, Price: %.2f, Cost: %.2f, Ref %d' %
                    (order.executed.price,
                     order.executed.value,
                     order.executed.comm))
            else:
                self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Ref %d' %
                         (order.executed.price,
                          order.executed.value,
                          order.executed.comm))

        elif order.status in [order.Canceled, order.Margin, order.Rejected]:
            self.log('Order Canceled/Margin/Rejected')

        self.order = None

    def __init__(self):
        self.prices = [1285.0, 1299.5, 1305.0, 1306.0]
        self.counter = 0

    def next(self):
        self.log('close %0.2f' % self.data.close[0])
        if self.counter == 0:
            self.order = self.sell(exectype=bt.Order.Limit,
                                   price=self.prices[self.counter])
            self.log('SELL ISSUED @ %0.2f' % self.prices[self.counter])
        self.counter += 1

cerebro = bt.Cerebro()
cerebro.addstrategy(LimitOrderTest)
cerebro.broker.set_slippage_fixed(4.0)
cerebro.broker.setcash(10000.0)

data0 = bta.datafeeds.PremiumDataFutCSV(dataname='test_data_1.csv', plot=True,
                                        name='Test Data')
cerebro.adddata(data0)

cerebro.run()
cerebro.plot(style='candle')

Data
date, open, high, low, close, volume, openinterest

20190101,1295.5,1305.5,1290.1,1298.6,235334,319231
20190102,1297.5,1303.5,1293.1,1296.6,235334,319231
20190103,1301.0,1309.4,1298.9,1307.3,244542,311254
20190104,1309.0,1312.9,1285.0,1298.3,316063,302111

Output

2019-01-02, close 1296.60
2019-01-02, SELL ISSUED @ 1285.00
2019-01-03, SELL EXECUTED, Price: 1297.00, Cost: -1297.00, Ref 0
2019-01-03, close 1307.30
2019-01-04, close 1298.30
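A sketch of the expected behavior (this is not backtrader's code; `clamp_exec_price` is a hypothetical helper): any slippage-adjusted fill price should be clamped back into the bar's low/high range. For the 2019-01-03 bar above (open 1301.0, low 1298.9), the fixed slippage of 4.0 produces exactly the out-of-range 1297.00 seen in the output; clamping would cap the fill at the bar low.

```python
def clamp_exec_price(price, low, high):
    """Clamp an execution price into the bar's [low, high] range."""
    return max(low, min(high, price))

# Sell limit at 1285.0 fills at the 2019-01-03 open of 1301.0; the fixed
# slippage of 4.0 pushes the raw fill to 1297.0, below the bar low 1298.9.
raw = 1301.0 - 4.0
print(clamp_exec_price(raw, 1298.9, 1309.4))  # → 1298.9
```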

Intraday + Daily datasets Date misalignment ?

I'm running 2 datasets: one intraday (1-minute) and one daily (resampled from the intraday dataset, though the problem also occurs with any other daily dataset).

The intraday dataset is loaded first, the daily - second.
Signals are generated using the 2nd dataset (daily), trades are executed on the intraday dataset.

I'm running the buy/sell logic via a timer; the next() method is skipped (there's nothing in there):

self.add_timer(
    when=dtm.time(9, 45),
    offset=dtm.timedelta(),
    repeat=dtm.timedelta(),
    weekdays=[],
    weekcarry=True
)

Inside notify_timer, I'm logging this:
currentIntradayDate = self.data0.datetime.date(0)
currentDailyDate = self.data1.datetime.date(0)
self.log("Daily DB Date: {} | Intraday DB Date: {}" .format(currentDailyDate, currentIntradayDate))

This is what I see:
Daily DB Date: 2020-05-18 | Intraday DB Date: 2020-05-19

So, for some reason the Daily db date is not the same as the intraday one.

Any ideas on why this is the case, and what could be done to fix it? (Already tried oldsync=True, which made the backtest not run at all)

Regards
D

High CPU load during pre-market live trading and "adaptive qcheck" logic

For some time I have noticed high CPU load (100%) during pre-market trading while running my strategies with the IB broker.

Once the regular trading session begins, the load drops to its normal 2-3% and everything works like a dream.

After a short investigation, it appears to be related to the "adaptive qcheck" logic in the cerebro._runnext method: logic whose rationale I don't fully understand and would like more information about.

TLDR Section:

The cerebro._runnext method is the main workhorse of the cerebro engine, responsible for running the main engine loop, orchestrating all the datas, dispatching all the notifications to the strategies and doing a lot of other work.

Here is the part of the _runnext method relevant for the discussion:

def _runnext(self, runstrats):
        
        ###  code removed for clarity ###

        clonecount = sum(d._clone for d in datas)
        ldatas = len(datas)
        ldatas_noclones = ldatas - clonecount

        while d0ret or d0ret is None:
            # if any has live data in the buffer, no data will wait anything
            newqcheck = not any(d.haslivedata() for d in datas)
            if not newqcheck:
                # If no data has reached the live status or all, wait for
                # the next incoming data
                livecount = sum(d._laststatus == d.LIVE for d in datas)
                newqcheck = not livecount or livecount == ldatas_noclones
                print("newcheck {}, livecount {}, ldatas_noclones {}".format(newqcheck, livecount, ldatas_noclones))

            ###  code removed for clarity ###

            # record starting time and tell feeds to discount the elapsed time
            # from the qcheck value
            drets = []
            qstart = datetime.datetime.utcnow()
            for d in datas:
                qlapse = datetime.datetime.utcnow() - qstart
                d.do_qcheck(newqcheck, qlapse.total_seconds())
                drets.append(d.next(ticks=False))

            ### the rest of the code removed for clarity ###

Specifically, for each data the _runnext method calls the data's next method. In live mode this method should load the next bar from the data's incoming queue, potentially waiting for that bar to arrive if it is not there yet.

And here is the relevant part of the '_load' method (eventually called from d.next above) in IBData, for example:

def _load(self):
        if self.contract is None or self._state == self._ST_OVER:
            return False  # nothing can be done

        while True:
            if self._state == self._ST_LIVE:
                try:
                    msg = (self._storedmsg.pop(None, None) or
                           self.qlive.get(timeout=self._qcheck))
                except queue.Empty:
                    if True:
                        return None

The high CPU load occurs when the timeout parameter of self.qlive.get is 0. In that case queue.get immediately raises queue.Empty, and _load returns None if there is no data in the queue (which is what usually happens during pre-market). The while loop in _runnext then continues to the next iteration and the whole story repeats itself.

As you can see, the timeout parameter gets its value from the data's self._qcheck, which is in turn updated in the data's do_qcheck method (see the code below), called by the _runnext method above.

    def do_qcheck(self, onoff, qlapse):
        # if onoff is True the data will wait p.qcheck for incoming live data
        # on its queue.
        qwait = self.p.qcheck if onoff else 0.0
        qwait = max(0.0, qwait - qlapse)
        self._qcheck = qwait

As you can see, self._qcheck either gets the value of the qcheck data parameter (passed to the data during its creation) or is zeroed.

Following the logic in the methods above, self._qcheck is zeroed if some of the datas are not yet LIVE (meaning no bar has been received for them yet) while other datas are already LIVE.

    newqcheck = not livecount or livecount == ldatas_noclones

This is exactly what happened in my case: of the several datas the strategy works with, some do not receive any bars until right before the trading session starts. All this time their self._qcheck is set to zero and cerebro's main engine loop spins like crazy, crunching my CPU cycles at full speed like there is no tomorrow.
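The effect is easy to reproduce outside backtrader. This toy loop (plain stdlib, nothing backtrader-specific) counts how often Queue.get gives up within a fixed time budget: with timeout=0 it raises queue.Empty immediately and the loop spins at full speed, while a positive timeout makes the call block, which is what a non-zero qcheck achieves.

```python
import queue
import time

q = queue.Queue()  # stays empty, like the live queue during pre-market

def poll(q, timeout, budget=0.05):
    """Count how many times Queue.get raises Empty within `budget` seconds."""
    spins = 0
    deadline = time.monotonic() + budget
    while time.monotonic() < deadline:
        try:
            q.get(timeout=timeout)
        except queue.Empty:
            spins += 1
    return spins

busy = poll(q, timeout=0)      # timeout 0: Empty fires instantly, loop spins
idle = poll(q, timeout=0.05)   # positive timeout: one blocking wait
print(busy > idle)  # → True
```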

cerebro.plot() equivalent for plotly

I would like to contribute to backtrader with an addition for plotting.

Personally I like using plotly over matplotlib because I find it more flexible.
I have already coded a decent equivalent of the cerebro.plot() function with plotly and was curious whether it's a feature more people are interested in. If so, I'll polish up my version and contribute it to backtrader.

FileNotFoundError: [Errno 2] No such file or directory: "'^GDAXI'"

I used this example but unfortunately got errors. The full traceback is:

  File "/projects/quant_exp/stategies/weekdaysaligner.py", line 109, in <module>
    runstrat()
  File "/projects/quant_exp/stategies/weekdaysaligner.py", line 64, in runstrat
    cerebro.run(runonce=True, preload=True)
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/cerebro.py", line 1127, in run
    runstrat = self.runstrategies(iterstrat)
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/cerebro.py", line 1210, in runstrategies
    data._start()
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/feed.py", line 203, in _start
    self.start()
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/feeds/yahoo.py", line 361, in start
    super(YahooFinanceData, self).start()
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/feeds/yahoo.py", line 94, in start
    super(YahooFinanceCSVData, self).start()
  File "/anaconda3/envs/cc/lib/python3.7/site-packages/backtrader/feed.py", line 674, in start
    self.f = io.open(self.p.dataname, 'r')
FileNotFoundError: [Errno 2] No such file or directory: "'^GDAXI'"

IBBroker marks the order as rejected on unknown error notifications from the broker

Community discussion:

https://community.backtrader.com/topic/2862/ibbroker-live-trading-idealpro-minimum-value-error-code-399-odd-lot-orders-reported-as-rejected

Unexpected behavior:

Looking at ibbroker.py:

def push_ordererror(self, msg):
        with self._lock_orders:
            try:
                order = self.orderbyid[msg.id]
            except (KeyError, AttributeError):
                return  # no order or no id in error

            if msg.errorCode == 202:
                if not order.alive():
                    return
                order.cancel()
            elif msg.errorCode == 201:  # rejected
                if order.status == order.Rejected:
                    return
                order.reject()
            else:
---->        order.reject()  # default for all other cases

called from ibstore.py registered error notify method:

    @ibregister
    def error(self, msg):
        # 300-399 A mix of things: orders, connectivity, tickers, misc errors
        ...  # < code removed for clarity >
        if msg.errorCode is None:
        ...  # < code removed for clarity >
        elif msg.errorCode < 500:
            # Given the myriad of errorCodes, start by assuming is an order
            # error and if not, the checks there will let it go
            if msg.id < self.REQIDBASE:
                if self.broker is not None:
---->            self.broker.push_ordererror(msg)
            else:
                # Cancel the queue if a "data" reqId error is given: sanity
                q = self.qs[msg.id]
                self.cancelQueue(q, True)

The order is marked rejected for every "unknown" (to be more exact, unhandled) error/warning.

Discussion:

It seems the currently implemented behavior makes a conservative/defensive decision by marking the order as rejected whenever the broker notifies an error that is not explicitly handled. It is hard to know whether an unknown error code constitutes a real error or just a warning.

It is of course possible to search for the 'warning' or 'error' keywords in the message text itself, but this is error-prone at best.
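One conceivable alternative, sketched below with made-up names (a real fix would need a vetted list of fatal IB error codes, which this issue does not provide), is to reject only on codes known to be fatal and to surface everything else as a notification rather than rejecting by default:

```python
# Hypothetical classification, NOT backtrader's implementation: reject
# only known-fatal codes, treat unknown/unhandled codes as notifications.
REJECT_CODES = {201}   # order rejected by the broker
CANCEL_CODES = {202}   # order cancelled

def classify_error(code):
    if code in REJECT_CODES:
        return "reject"
    if code in CANCEL_CODES:
        return "cancel"
    return "notify"    # unknown code: notify the strategy, do not reject

print(classify_error(399))  # the odd-lot warning from the forum thread → notify
```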

Confirmation on the sign for value of Transactions Analyzer

Hi @neilsmurphy,

Looking at this particular line in the Transactions Analyzer: https://github.com/backtrader2/backtrader/blob/master/backtrader/analyzers/transactions.py#L98, it seems the value is calculated using the following formula:

-size * price

The result is that for a long order with positive size, the value will always be negative.

In contrast, the common understanding of value is:

abs(size) * price

i.e. the size is always positive when calculating value.
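The two conventions side by side, as a quick illustration (plain Python, not the analyzer's code): the current formula reads as a signed cash flow (negative when cash leaves the account on a buy), while the proposed one reads as an unsigned notional value.

```python
size, price = 10, 5.0            # a long buy of 10 units at 5.0

cashflow_value = -size * price   # current formula: negative = cash outflow
notional_value = abs(size) * price  # proposed: always-positive notional

print(cashflow_value, notional_value)  # → -50.0 50.0
```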

Mind to comment?

LinePlotterIndicator

When using the LinePlotterIndicator, only one instance works correctly. When adding more than one LinePlotterIndicator, the lines of the previous instances are carried over:

See this example:
https://github.com/backtrader2/backtrader/blob/master/samples/lineplotter/lineplotter.py

when adding another LinePlotterIndicator:

    def __init__(self):
        if not self.p.ondata:
            a = self.data.high - self.data.low
        else:
            a = 1.05 * (self.data.high + self.data.low) / 2.0

        b = bt.LinePlotterIndicator(a, name='hilo')
        b.plotinfo.subplot = not self.p.ondata

        c = (self.data.open + self.data.close) / 2
        d = bt.LinePlotterIndicator(c, name='oc')

Line c will be available as d.lines[0]; d will have 2 lines, and the plotinfo for c will be in d.plotlines[1].

This is because LinePlotterIndicator uses MtLinePlotterIndicator:

class MtLinePlotterIndicator(Indicator.__class__):

There, every instance adds a new line as a class attribute:

lines = getattr(cls, 'lines', Lines)

So the first instance will have 1 line, the second instance 2 lines, and so on, but the new line will always be assigned to index 0:

_obj.data.lines[0].addbinding(_obj.lines[0])

One way to fix this is to change:

        lines = getattr(cls, 'lines', Lines)
        cls.lines = lines._derive(name, (lname,), 0, [])

to

        cls.lines = Lines._derive(name, (lname,), 0, [])
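The underlying pitfall can be reproduced without backtrader: deriving from a mutated class attribute accumulates state across instances, while deriving from the pristine base each time does not. The toy factory below is illustrative only; it mimics the shape of the bug, not the actual metaclass code.

```python
class Plotter:
    lines = []   # shared class attribute, like `lines` on the Indicator class

def make_buggy(name):
    # mirrors `lines = getattr(cls, 'lines', Lines)`: reads the mutated attr,
    # so each new "instance" inherits every previously added line
    Plotter.lines = Plotter.lines + [name]
    return list(Plotter.lines)

def make_fixed(name):
    # mirrors `cls.lines = Lines._derive(...)`: always starts from the base
    return [name]

make_buggy('hilo')
print(make_buggy('oc'))   # → ['hilo', 'oc']  (second instance carries both)
print(make_fixed('oc'))   # → ['oc']
```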

No such file or directory: 'YHOO'

I tried this in backtester as well and made a post there.

I've installed backtrader2 in a new conda environment (Python 3.8.11) as follows and attempted to run the hello-world algorithm, but I get the YHOO-not-found error. Is there something I should be doing differently? Thanks a lot! I'm excited that this looks like a good, maintained library to get started with.

pip install git+https://github.com/backtrader2/backtrader
git clone https://github.com/backtrader2/backtrader.git
python backtrader/samples/sigsmacross/sigsmacross2.py

Traceback (most recent call last):
  File "samples/sigsmacross/sigsmacross.py", line 109, in <module>
    runstrat()
  File "samples/sigsmacross/sigsmacross.py", line 68, in runstrat
    cerebro.run()
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/cerebro.py", line 1127, in run
    runstrat = self.runstrategies(iterstrat)
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/cerebro.py", line 1210, in runstrategies
    data._start()
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/feed.py", line 203, in _start
    self.start()
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/feeds/yahoo.py", line 361, in start
    super(YahooFinanceData, self).start()
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/feeds/yahoo.py", line 94, in start
    super(YahooFinanceCSVData, self).start()
  File "/Users/andrew/opt/anaconda3/envs/anaconcda_made_38/lib/python3.8/site-packages/backtrader/feed.py", line 674, in start
    self.f = io.open(self.p.dataname, 'r')
FileNotFoundError: [Errno 2] No such file or directory: 'YHOO'

Also had to pip install requests as suggested by an earlier error but that didn't resolve it.

[tradingcal] [bugfix] Index.searchsorted() method accepts only pandas timestamps in v1.0 of the library.

Description

The PandasMarketCalendar class uses a pandas DataFrame to cache the trading dates schedule. Unfortunately, the schedule method uses the wrong data type to search this cache when pandas v1.0 is used.

def schedule(self, day, tz=None):
        '''
        Returns the opening and closing times for the given ``day``. If the
        method is called, the assumption is that ``day`` is an actual trading
        day

        The return value is a tuple with 2 components: opentime, closetime
        '''
        while True:
            #### The problem is here #####
            i = self.idcache.index.searchsorted(day.date())

            if i == len(self.idcache):
                # keep a cache of 1 year to speed up searching
                self.idcache = self._calendar.schedule(day, day + self.csize)
                continue

            st = (x.tz_localize(None) for x in self.idcache.iloc[i, 0:2])
            opening, closing = st  # Get utc naive times
            if day > closing:  # passed time is over the sessionend
                day += ONEDAY  # wrap over to next day
                continue

            return opening.to_pydatetime(), closing.to_pydatetime()

Unexpected behavior

I don't remember exactly whether an exception is raised or the search just returns an empty result.

TA-Lib functions which require inputs besides 'close' do not work properly

Hi, I'm using backtrader with the TA-Lib integration.
It seems that if you try to use an indicator which requires more than just the 'close' input, it will fail, because backtrader only supplies the 'close' column from OHLC datasets.

Some examples:

  • ULTOSC
  • MFI
self.once(self._minperiod, self.buflen())
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/luke/.virtualenvs/myenv/lib/python3.8/site-packages/backtrader/talib.py", line 196, in once
    output = self._tafunc(*narrays, **self.p._getkwargs())
  File "/home/luke/.virtualenvs/myenv/lib/python3.8/site-packages/talib/__init__.py", line 27, in wrapper
    return func(*args, **kwargs)
  File "talib/_func.pxi", line 3703, in talib._ta_lib.MFI
TypeError: MFI() takes at least 4 positional arguments (1 given)

According to the documentation, the MFI indicator requires 4 inputs:

MFI(high, low, close, volume, timeperiod=14)
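A sketch of the general direction a fix could take (the names below are illustrative and backtrader's real talib wrapper may work differently): look up which input series the TA-Lib function declares and pass the matching data lines, rather than always passing close alone.

```python
def build_inputs(input_names, data):
    """Collect the data lines matching a function's declared input names."""
    return [getattr(data, name) for name in input_names]

class Data:   # minimal stand-in for a backtrader data feed
    high, low, close, volume = [2.0], [1.0], [1.5], [100.0]

# MFI declares (high, low, close, volume) per the TA-Lib documentation:
inputs = build_inputs(['high', 'low', 'close', 'volume'], Data)
print(len(inputs))  # → 4
```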

Forum

Since it seems the community.backtrader.com page is dying, I would like to ask what you think of setting up a new forum where we can at least moderate the content.

@neilsmurphy @vladisld @ab-trader and anyone else

bug: resample and plot error

It happens when resampling 5m to 60m and plotting. I am not sure whether it comes from the resampling or the plotting. Anyone can reproduce it by using the sample data in the source code.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys
import time

import backtrader as bt

class St(bt.Strategy):
    params = dict(multi=True)

    def __init__(self):
        self.pp = pp = bt.ind.PivotPoint(self.data1)
        pp.plotinfo.plot = False  # deactivate plotting

        if self.p.multi:
            pp1 = pp()  # couple the entire indicators
            self.sellsignal = self.data0.close < pp1.s1
        else:
            self.sellsignal = self.data0.close < pp.s1()

    def next(self):
        txt = ','.join(
            ['%04d' % len(self),
             '%04d' % len(self.data0),
             '%04d' % len(self.data1),
             self.data.datetime.date(0).isoformat(),
             '%.2f' % self.data0.close[0],
             '%.2f' % self.pp.s1[0],
             '%.2f' % self.sellsignal[0]])

        print(txt)

cerebro = bt.cerebro.Cerebro()
# Data feed
data_file = r"D:\tmp\backtrader\datas\2006-min-005.txt"  # D:\tmp\backtrader\datas is where the backtrader sample data folder was copied
data0 = bt.feeds.BacktraderCSVData(dataname=data_file,  timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data0)

cerebro.resampledata(data0, timeframe=bt.TimeFrame.Minutes, compression=60)

cerebro.addanalyzer(bt.analyzers.Returns, _name="RE")
cerebro.addanalyzer(bt.analyzers.TradeAnalyzer, _name="TA")

cerebro.addstrategy(St)
btrs1 = cerebro.run()

pstart=0
pend=100

cerebro.plot(start=pstart, end=pend, style='candle')

The 60-minute view is obviously incorrect (screenshot omitted).

But when I change the plot end to 200 (pstart=0, pend=200), it is correct (screenshot omitted).

Cerebro will break when non-resampled data is mixed with resampled data in _runnext

When having different feeds:

  • 1 tick data feed
  • 1 or more resampled feeds

and _runnext is called because the dt of a resampled feed moved forward, Cerebro will break at the following block:

dt0 = min((d for i, d in enumerate(dts)

since the code:

(d for i, d in enumerate(dts) if d is not None and i not in rsonly)

will sometimes not yield any values (the resampled feed's index is in rsonly)

I fixed this by changing it to the code below, but I am not sure about the consequences:

(d for i, d in enumerate(dts) if (d is not None and (i not in rsonly or not onlyresample)))
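The failure itself is plain Python semantics: min() over a generator that yields nothing raises ValueError. A stripped-down illustration (not backtrader code) of the filter dropping every candidate:

```python
dts = [1.5, 2.5]     # both datetimes come from resampled feeds...
rsonly = {0, 1}      # ...so both indices are filtered out

try:
    dt0 = min(d for i, d in enumerate(dts)
              if d is not None and i not in rsonly)
except ValueError:   # "min() arg is an empty sequence"
    dt0 = None
print(dt0)  # → None
```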

[influxdb feed] Multiple issues with InfluxDB Data Feed resampling and time filtering

Community Forum Discussion

https://community.backtrader.com/topic/1279/resampling-in-influxdb-data-feed/2

Description

There are multiple issues with the current implementation of the InfluxDB data feed:

  • The current query resamples the data using the 'mean' function, which is suboptimal
  • The fromdate parameter seems to be ignored, resulting in excessive data fetches
  • The openinterest field seems to be ignored even when present in the data
  • Time precision is excessive (nanoseconds vs. seconds)

Set title for plot

Hi,

I am missing one piece of functionality: the ability to set a title for the matplotlib plot.

E.g., if I generate the plot via:

cerebro.plot(volume=False, stdstats=False)

it would be nice if I could set the title of the plot via:

cerebro.plot(volume=False, stdstats=False, title='test')

Thanks in advance!

Cerebro plot fails when running in a Jupyter notebook

Link to the forum:

https://community.backtrader.com/topic/1488/cerebro-plot-failed-to-run

StackTrace of the error:

<IPython.core.display.HTML object>
cerebro.plot()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\backtrader\cerebro.py", line 996, in plot
plotter.show()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\backtrader\plot\plot.py", line 794, in show
self.mpyplot.show()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\pyplot.py", line 254, in show
return _show(*args, **kw)
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 255, in show
manager.show()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 91, in show
self._create_comm()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 123, in _create_comm
self.add_web_socket(comm)
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_webagg_core.py", line 433, in add_web_socket
self.resize(w, h)
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_webagg_core.py", line 419, in resize
size=(w / self.canvas._dpi_ratio, h / self.canvas._dpi_ratio))
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_webagg_core.py", line 490, in _send_event
s.send_json(payload)
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 199, in send_json
self.comm.send({'data': json.dumps(content)})
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\ipykernel\comm\comm.py", line 121, in send
data=data, metadata=metadata, buffers=buffers,
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\ipykernel\comm\comm.py", line 66, in _publish_msg
self.kernel.session.send(self.kernel.iopub_socket, msg_type,
AttributeError: 'NoneType' object has no attribute 'session'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\_pylab_helpers.py", line 74, in destroy_all
manager.destroy()
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 127, in destroy
self._send_event('close')
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_webagg_core.py", line 490, in _send_event
s.send_json(payload)
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\matplotlib\backends\backend_nbagg.py", line 199, in send_json
self.comm.send({'data': json.dumps(content)})
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\ipykernel\comm\comm.py", line 121, in send
data=data, metadata=metadata, buffers=buffers,
File "C:\Users~\PycharmProjects\helloworld\venv\lib\site-packages\ipykernel\comm\comm.py", line 66, in _publish_msg
self.kernel.session.send(self.kernel.iopub_socket, msg_type,
AttributeError: 'NoneType' object has no attribute 'session'

Sample code:

from __future__ import (absolute_import, division, print_function, unicode_literals)
import backtrader as bt

class TestStrategy(bt.Strategy):
    def log(self, txt, dt=None):
        ''' Logging function for this strategy'''
        dt = dt or self.datas[0].datetime.date(0)
        print('%s, %s' % (dt.isoformat(), txt))

    def __init__(self):
        # Keep a reference to the "close" line in the data[0] dataseries
        self.dataclose = self.datas[0].close

        # To keep track of pending orders and buy price/commission
        self.order = None
        self.buyprice = None
        self.buycomm = None

    def notify_order(self, order):
        if order.status in [order.Submitted, order.Accepted]:
            # Buy/Sell order submitted/accepted to/by broker - Nothing to do
            return

        # Check if an order has been completed
        # Attention: broker could reject order if not enough cash
        if order.status in [order.Completed]:
            if order.isbuy():
                self.log(
                    'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' %
                    (order.executed.price,
                     order.executed.value,
                     order.executed.comm))

                self.buyprice = order.executed.price
                self.buycomm = order.executed.comm
            else:  # Sell
                self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' %
                         (order.executed.price,
                          order.executed.value,
                          order.executed.comm))

            self.bar_executed = len(self)

        elif order.status in [order.Canceled, order.Margin, order.Rejected]:
            self.log('Order Canceled/Margin/Rejected')

        self.order = None

    def notify_trade(self, trade):
        if not trade.isclosed:
            return

        self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' %
                 (trade.pnl, trade.pnlcomm))

    def next(self):
        # Simply log the closing price of the series from the reference
        self.log('Close, %.2f' % self.dataclose[0])

        # Check if an order is pending ... if yes, we cannot send a 2nd one
        if self.order:
            return

        # Check if we are in the market
        if not self.position:
            # Not yet ... we MIGHT BUY if ...
            if self.dataclose[0] < self.dataclose[-1]:
                # current close less than previous close

                if self.dataclose[-1] < self.dataclose[-2]:
                    # previous close less than the close before it

                    # BUY, BUY, BUY!!! (with default parameters)
                    self.log('BUY CREATE, %.2f' % self.dataclose[0])

                    # Keep track of the created order to avoid a 2nd order
                    self.order = self.buy()
        else:
            # Already in the market ... we might sell
            if len(self) >= (self.bar_executed + 5):
                # SELL, SELL, SELL!!! (with all possible default parameters)
                self.log('SELL CREATE, %.2f' % self.dataclose[0])

                # Keep track of the created order to avoid a 2nd order
                self.order = self.sell()

if __name__ == '__main__':
    # Create a cerebro entity
    cerebro = bt.Cerebro()

    # Add a strategy
    strats = cerebro.addstrategy(
        TestStrategy)

    # 'abu' is an external market-data helper; its import was omitted in the report
    kl = abu.ABuSymbolPd.make_kl_df('F', n_folds=2)
    kl = kl[['open', 'high', 'low', 'close', 'volume']]
    data = bt.feeds.PandasData(dataname=kl)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(1000.0)
    cerebro.broker.set_coc(True)
    # Add a FixedSize sizer according to the stake
    cerebro.addsizer(bt.sizers.FixedSize, stake=10)

    # Set the commission
    cerebro.broker.setcommission(commission=0.0)

    # Print out the starting conditions
    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())

    # Run over everything
    cerebro.run()

    # Print out the final result
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())

    cerebro.plot()

Details:

While it's true that this error occurs in the guts of matplotlib, it happens because Backtrader is incorrectly assuming it's running inside a Jupyter notebook and therefore configuring matplotlib with an incorrect backend.

The fix is to use cerebro.plot(iplot=False) when not running inside a Jupyter notebook.

See:
https://www.backtrader.com/blog/posts/2016-09-17-notebook-inline/notebook-inline/
and:
https://community.backtrader.com/topic/2106/cerebro-plot-error/4
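When a script may run both inside and outside Jupyter, the iplot argument can be set from a small heuristic. This is just a sketch; the in_notebook helper name is made up here:

```python
import sys

def in_notebook():
    # Heuristic: an active Jupyter session will have ipykernel loaded
    return 'ipykernel' in sys.modules

# Later, when plotting:
# cerebro.plot(iplot=in_notebook())
```

In a plain script in_notebook() returns False, so the inline-notebook backend is never selected and the AttributeError above is avoided.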

CmpEx function seems to behave incorrectly

Community discussion:
https://community.backtrader.com/topic/3042/find-bugs-when-using-the-demarkpivotpoint-indicator

Unexpected behavior:

Looking at the code of the CmpEx.next() method, it seems its logic differs from that of the CmpEx.once() method:

    def next(self):
        self[0] = cmp(self.a[0], self.b[0])

    def once(self, start, end):
        # cache python dictionary lookups
        dst = self.array
        srca = self.a.array
        srcb = self.b.array
        r1 = self.r1.array
        r2 = self.r2.array
        r3 = self.r3.array

        for i in range(start, end):
            ai = srca[i]
            bi = srcb[i]

            if ai < bi:
                dst[i] = r1[i]
            elif ai > bi:
                dst[i] = r3[i]
            else:
                dst[i] = r2[i]

This results in an incorrect calculation and plotting output (see the community discussion).

Proposed fix:

    def next(self):
        if self.a[0] < self.b[0]:
            self[0] = self.r1[0]
        elif self.a[0] > self.b[0]:
            self[0] = self.r3[0]
        else:
            self[0] = self.r2[0]
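The equivalence of the proposed next() logic with the once() logic can be checked with a standalone sketch (plain Python, independent of backtrader):

```python
def cmp_next_fixed(a, b, r1, r2, r3):
    # proposed per-bar logic, mirroring once()
    if a < b:
        return r1
    elif a > b:
        return r3
    return r2

def cmp_once(srca, srcb, r1, r2, r3):
    # vectorized logic from CmpEx.once(), reduced to plain lists
    out = []
    for ai, bi, x, y, z in zip(srca, srcb, r1, r2, r3):
        out.append(x if ai < bi else (z if ai > bi else y))
    return out

a, b = [1, 5, 3], [2, 2, 3]
r1, r2, r3 = [10, 10, 10], [20, 20, 20], [30, 30, 30]
per_bar = [cmp_next_fixed(ai, bi, x, y, z)
           for ai, bi, x, y, z in zip(a, b, r1, r2, r3)]
```

Both paths produce the same output for every bar, which the original cmp()-based next() does not.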

The stochasticfull index is different from the KDJ index of common stock software

Stock software formula:
RSV1:=(CLOSE-LLV(LOW,55))/(HHV(HIGH,55)-LLV(LOW,55))*100;
GSQR3:SMA(RSV1,5,1),COLOR0000FF;
GSQR4:SMA(GSQR3,10,1),COLOR00FF00;

backtrader equivalent:
self.kd2 = bt.indicators.StochasticFull(self.data, period=55, period_dfast=5, period_dslow=10)

The curve is wrong
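The root cause is likely the smoothing function: the stock-software SMA(X, N, M) is an EMA-style recursion, while StochasticFull smooths with a simple moving average. A sketch of the recursive SMA (simplified, with the first value as the seed):

```python
def sma_cn(values, n, m=1):
    # SMA(X, N, M) as used by Chinese stock software:
    # y[i] = (m * x[i] + (n - m) * y[i-1]) / n
    y = values[0]
    out = [y]
    for x in values[1:]:
        y = (m * x + (n - m) * y) / n
        out.append(y)
    return out
```

Feeding RSV1 through sma_cn(..., 5) and then sma_cn(..., 10) should track the GSQR3/GSQR4 lines more closely than the built-in indicator does.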

Contributions Accepted?

I'm looking to add backtrader to my stack, but between the blog, the old repo, and the two new repos (one for code and, I guess, one for documentation), it's hard to know where to start.

Can I be added or get approved to be added as a contributor?
I was a federal securities trader for many years, so I can help improve the terminology as well as the market knowledge in this great codebase.

I'd like to do a fresh install and new setup, while updating the documentation.

Enhancement: Optimized preloading of static pandas datafeeds

In my current setup, preparing my pandas dataset on Cerebro start takes about ~40% of the whole execution time (in addition to actually loading the data from a database). This rather high share is due to two points: the data is modified row-wise instead of column-wise, and this process is executed once for every strategy optimization run.
Given the static nature of the data during backtesting, the dataframe could be prepared more efficiently and cached between strategy optimization runs.
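The two points above can be sketched as follows: vectorized, column-wise preparation plus a simple cache keyed per dataset, so repeated optimization runs reuse the prepared frame. The helper names here are hypothetical, not part of backtrader:

```python
import pandas as pd

_prepared = {}  # cache shared across optimization runs

def prepare_df(key, df):
    # column-wise, vectorized preparation instead of row-wise modification;
    # the prepared frame is computed once per key and reused afterwards
    if key not in _prepared:
        out = df[['open', 'high', 'low', 'close', 'volume']].apply(pd.to_numeric)
        _prepared[key] = out
    return _prepared[key]
```

Because the data never changes during a backtest, returning the cached object is safe as long as strategies treat the feed as read-only.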

IBStore.dt_plus_duration calculates the date offset incorrectly for month durations

Community discussion:
https://community.backtrader.com/topic/3030/exception-has-occurred-valueerror-day-is-out-of-range-for-month-file-f-ibbacktradelivedataex-py-line-165-in-module-cerebro-run

Problem description:

The following ValueError exception may be raised when a specific fromdate parameter is used to initialize the IBStore:

Traceback (most recent call last):
  File "C:/Users/Vlad/PycharmProjects/test/test_ib_error.py", line 35, in <module>
    cerebro.run()
  File "W:\backtrader\backtrader\cerebro.py", line 1177, in run
    runstrat = self.runstrategies(iterstrat)
  File "W:\backtrader\backtrader\cerebro.py", line 1263, in runstrategies
    data._start()
  File "W:\backtrader\backtrader\feed.py", line 203, in _start
    self.start()
  File "W:\backtrader\backtrader\feeds\ibdata.py", line 406, in start
    self._st_start()
  File "W:\backtrader\backtrader\feeds\ibdata.py", line 650, in _st_start
    sessionend=self.p.sessionend)
  File "W:\backtrader\backtrader\stores\ibstore.py", line 720, in reqHistoricalDataEx
    intdate = self.dt_plus_duration(begindate, dur)
  File "W:\backtrader\backtrader\stores\ibstore.py", line 1191, in dt_plus_duration
    return dt.replace(year=dt.year + years, month=month + 1)
ValueError: day is out of range for month

Sample user code:

import backtrader as bt
import datetime

class IntraTrendStrategy(bt.Strategy):
    def next(self):
        pass

cerebro = bt.Cerebro()
store = bt.stores.IBStore(host="127.0.0.1", port=7497, clientId= 4)
cerebro.addstrategy(IntraTrendStrategy)
stockkwargs = dict(
timeframe=bt.TimeFrame.Minutes,
compression=5,
rtbar=False, # use RealTime 5 seconds bars
historical=True, # only historical download
qcheck=0.5, # timeout in seconds (float) to check for events
fromdate=datetime.datetime(2020, 1, 1), # get data from..
todate=datetime.datetime(2020, 9, 20), # get data from..
latethrough=False, # let late samples through
tradename=None, # use a different asset as order target
tz="Asia/Kolkata"
)
data0 = store.getdata(dataname="TCS-STK-NSE-INR", **stockkwargs)
cerebro.replaydata(data0, timeframe=bt.TimeFrame.Days, compression=1)
cerebro.broker.setcash(100000.0)
cerebro.broker.setcommission(commission=0.001)
cerebro.run()

Analysis:

The problem is with the following code in IBStore.py:

    def dt_plus_duration(self, dt, duration):
        size, dim = duration.split()
        size = int(size)
        if dim == 'S':
            return dt + timedelta(seconds=size)

        if dim == 'D':
            return dt + timedelta(days=size)

        if dim == 'W':
            return dt + timedelta(days=size * 7)

        if dim == 'M':
            month = dt.month - 1 + size  # -1 to make it 0 based, readd below
            years, month = divmod(month, 12)
            return dt.replace(year=dt.year + years, month=month + 1) # <=---- the bug is here

        if dim == 'Y':
            return dt.replace(year=dt.year + size)

        return dt  # could do nothing with it ... return it intact

Replacing the month of a date without adjusting the day of the month is plainly wrong: in our case the date 2019-12-31 was turned into the non-existent 2020-02-31.

Reduced test case:

import backtrader as bt
import datetime

store = bt.stores.IBStore()
store.dt_plus_duration(datetime.datetime(2019, 12, 31), '2M')
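A possible fix is to clamp the day to the last valid day of the target month before calling dt.replace(). This is a sketch of the month branch only, with a hypothetical helper name:

```python
import calendar
import datetime

def add_months_clamped(dt, size):
    # same month arithmetic as dt_plus_duration's 'M' branch, but the day
    # is clamped to the last valid day of the resulting month
    month = dt.month - 1 + size  # 0-based, re-added below
    years, month = divmod(month, 12)
    year, month = dt.year + years, month + 1
    day = min(dt.day, calendar.monthrange(year, month)[1])
    return dt.replace(year=year, month=month, day=day)
```

With this clamping, 2019-12-31 plus 2 months yields 2020-02-29 instead of raising ValueError.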

PandasData fails to autodetect columns when the input dataframe uses indexes instead of names for columns

Community discussion:

https://community.backtrader.com/topic/3106/attributeerror-numpy-int64-object-has-no-attribute-lower-when-using-pandas-with-the-noheaders-argument

Suspected Regression Commit:

05051890efe527ee22919d354038b4dfc1ffe7ca: Rework ix -> iloc pull request and autodetection algorithm in PandasData

Description:

PandasData accepts a pandas dataframe as input and in some cases tries to auto-detect the columns in this dataframe in order to correctly associate each dataframe column with a data line. Usually it does this using the dataframe's column names. However, in some cases the dataframe may contain no column names and use index values instead. In that case PandasData should associate the dataframe columns with data lines using a predefined index mapping.

Apparently this mechanism fails to work after the aforementioned commit.

Test case:

Run data-pandas.py --noheaders from the samples directory and observe the failure.
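The failure mode can be reproduced without backtrader: a headerless dataframe gets integer column labels, so any autodetection code that calls name.lower() on a label raises AttributeError. A minimal sketch:

```python
import pandas as pd

# headerless frame: pandas assigns a RangeIndex of integers as column labels
df = pd.DataFrame([[1.0, 2.0, 0.5, 1.5, 100]])
name = df.columns[0]  # 0, an integer, not a string

try:
    name.lower()  # this is what the autodetection effectively attempts
    failed = False
except AttributeError:
    failed = True
```

Autodetection therefore needs to check for string labels (or fall back to the index mapping) before calling string methods on them.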

Partial order execution is not reported correctly in notify_order

Upon live testing my strategy on an IB paper account, some orders execute partially and several notifications may be received with Order.Partial status before the Order.Completed status notification. I was expecting that iterating the pending exbits of each partial order notification would return the bits relevant to that notification. So, for example, if the order on the first partial notification had four exbits:

idx:0 size:100 price:10.99
idx:1 size:100 price:10.99
idx:2 size:300 price:10.99
idx:3 size:100 price:10.99

and on the second notification had six exbits:

idx:0 size:100 price:10.99
idx:1 size:100 price:10.99
idx:2 size:300 price:10.99
idx:3 size:100 price:10.99
idx:4 size:200 price:10.98
idx:5 size:109 price:10.98

then order.executed.iterpending() should return the first four (idx:0 through idx:3) on the first notification and the next two (idx:4, idx:5) on the second. However, that does not seem to be the case. Instead, iterating the pending exbits returns all the bits every time (four bits on the first notification and six on the second).

Looking at the code, the OrderData p1 and p2 indexes appear intended to implement the needed behavior, being updated when the order is cloned and pushed to the notification queue. The client code should therefore see p1 and p2 already updated, and OrderData.iterpending() should reflect that. However, those indexes are not updated. Below I've logged the order data, including the p1 and p2 indexes:

2020-01-06 12:18:35:HMI-STK-SMART-USD:track::notify_order: Limit Buy Partial, price: 10.99, size: 600, cost: 6594.0, comm: 3.0, ref: 6
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::order bits: p1: 0, p2: 4
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 0, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 1, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 2, dt: 2020-01-06 12:18:42, bit size:300, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 3, dt: 2020-01-06 12:18:43, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::order bits done.
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::notify_order: Limit Buy Partial, price: 10.9875, size: 800, cost: 8790.0, comm: 4.0, ref: 6
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::order bits: p1: 0, p2: 5
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 0, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 1, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 2, dt: 2020-01-06 12:18:42, bit size:300, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 3, dt: 2020-01-06 12:18:43, bit size:100, bit price:10.99
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 4, dt: 2020-01-06 12:18:44, bit size:200, bit price:10.98
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::idx: 5, dt: 2020-01-06 12:18:45, bit size:109, bit price:10.98
2020-01-06 12:18:35:HMI-STK-SMART-USD:track::order bits done.
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::notify_order: Limit Buy Completed, price: 10.986600660066006, size: 909, cost: 9986.82, comm: 4.545, ref: 6
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::order bits: p1: 0, p2: 6
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 0, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 1, dt: 2020-01-06 12:18:42, bit size:100, bit price:10.99
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 2, dt: 2020-01-06 12:18:42, bit size:300, bit price:10.99
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 3, dt: 2020-01-06 12:18:43, bit size:100, bit price:10.99
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 4, dt: 2020-01-06 12:18:44, bit size:200, bit price:10.98
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::idx: 5, dt: 2020-01-06 12:18:45, bit size:109, bit price:10.98
2020-01-06 12:18:40:HMI-STK-SMART-USD:track::order bits done.

The code that logs the order is:

    def log_order(self, order, prefix):
        order_data = order.executed if order.status in [Order.Completed, Order.Partial] else order.created
        self.log('{}: {} {} {}, price: {}, size: {}, cost: {}, comm: {}, ref: {}',
                 prefix,
                 order.getordername(),
                 order.ordtypename(),
                 order.getstatusname(),
                 order_data.price,
                 order_data.size,
                 order_data.value,
                 order_data.comm,
                 order.ref)
        self.log('order bits: p1: {}, p2: {}', order_data.p1, order_data.p2)
        for idx, bit in enumerate(order_data.exbits):
            if bit is None:
                break
            self.log('idx: {}, dt: {}, bit size:{}, bit price:{}', idx, order.data.num2date(bit.dt), bit.size, bit.price)
        self.log('order bits done.')

The above method is called from strategy's notify_order - very simple:

    def notify_order(self, order):
        self.log_order(order, "notify_order")

Taking a deeper look inside IBBroker's code, it seems that the p1 and p2 fields of the OrderData object are indeed not updated upon partial execution notifications.

I was looking at the following sequence of events:

1. IBStore.execDetails callback is called by IBpy
   1.2 IBBroker.push_execution is called, pushing the notification message to the IBBroker.executions map

2. IBStore.commissionReport callback is called by IBpy
   2.1 IBBroker.push_commissionreport is called:
   2.1.1 the existing order is fetched from the IBBroker.orderbyid map given the commission report message m_execId
   2.1.2 the existing order's order.execute is called, which adds the execbits to the order.executed OrderData object
   2.1.3 the existing order id is added to the IBBroker.tonotify queue if it's not already there

Note: p1 and p2 are still not updated on the existing order in the above steps

3. IBStore.updatePortfolio is called by IBpy
   3.1 IBBroker.push_portupdate is called
   3.1.1 the order id is fetched from the IBBroker.tonotify queue (updated in 2.1.3)
   3.1.2 the order is fetched from the IBBroker.orderbyid map for the order id from the previous step
   3.1.3 IBBroker.notify is called, eventually cloning the order and putting the clone into the IBBroker.notifs queue.

Note 1: Here in this step the p1, p2 fields of the cloned order are updated using markpending method:

def markpending(self):
        # rebuild the indices to mark which exbits are pending in clone
        self.p1, self.p2 = self.p2, len(self.exbits)

Note 2: p1 and p2 fields in the original order ( the one we were cloning from) are zero. So in the cloned order p1 will be zero and p2 will point to end of the exbits.

Note 3: The same sequence of events is repeated for the next partial order notifications. Nowhere are the original order's p1 and p2 fields updated, so the same original order is cloned once again (with zeroed p1 and p2 fields)

A simple fix may be to call self.markpending() before cloning the OrderData:

def clone(self):
        self.markpending()  # <-- here
        obj = copy(self)
        return obj

The above 'fix' works pretty well on real time (paper) trading.

2020-01-17 15:45:00:HMI-STK-SMART-USD:track:notify_order: Market Buy Partial, price: 14.63, size: 400, cost: 5852.0, comm: 2.0, ref: 7
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits: p1: 0, p2: 3
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 0, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 1, dt: 2020-01-17 15:45:06, bit size:200, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 2, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits done.
...
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:notify_order: Market Buy Partial, price: 14.63, size: 500, cost: 7315.0, comm: 2.5, ref: 7
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits: p1: 3, p2: 4
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 0, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 1, dt: 2020-01-17 15:45:06, bit size:200, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 2, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 3, dt: 2020-01-17 15:45:08, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 4, dt: 2020-01-17 15:45:09, bit size:182, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits done.
...
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:notify_order: Market Buy Completed, price: 14.629999999999999, size: 682, cost: 9977.66, comm: 3.41, ref: 7
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits: p1: 4, p2: 5
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 0, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 1, dt: 2020-01-17 15:45:06, bit size:200, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 2, dt: 2020-01-17 15:45:06, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 3, dt: 2020-01-17 15:45:08, bit size:100, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:idx: 4, dt: 2020-01-17 15:45:09, bit size:182, bit price:14.63
2020-01-17 15:45:00:HMI-STK-SMART-USD:track:order bits done.
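The effect of the fix can be modeled with a toy class (this is a sketch of the pending-window bookkeeping, not backtrader's actual OrderData):

```python
from copy import copy

class OrderDataSketch:
    # toy model of the exbits list and its pending window [p1, p2)
    def __init__(self):
        self.exbits = []
        self.p1 = self.p2 = 0

    def markpending(self):
        # rebuild the indices marking which exbits are pending in the clone
        self.p1, self.p2 = self.p2, len(self.exbits)

    def clone(self):
        self.markpending()  # the proposed fix: advance the window before copying
        return copy(self)

    def iterpending(self):
        return self.exbits[self.p1:self.p2]

od = OrderDataSketch()
od.exbits += ['bit0', 'bit1', 'bit2', 'bit3']
first = od.clone()               # first partial notification
od.exbits += ['bit4', 'bit5']
second = od.clone()              # second partial notification
```

With markpending() inside clone(), the first clone sees bits 0-3 as pending and the second clone sees only bits 4-5, matching the logs above.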

GenericCSVData class may use lambda functions which prevent it from being used in optimizations

Community discussion:

https://community.backtrader.com/topic/2919/error-in-multi-core-optimization

Description:

Using the GenericCSVData class, or any class inherited from it, in a strategy optimization may result in a pickle error:

AttributeError: Can't pickle local object 'GenericCSVData.start.<locals>.<lambda>'

This happens when the CSV file contains the timestamps in a datetime column. Looking at the code, it appears that GenericCSVData uses lambdas for converting the timestamps to UTC time. Unfortunately, lambda functions cannot be serialized by the pickle package and thus cannot be used during optimization with multiple CPUs.

    def start(self):
        super(GenericCSVData, self).start()

        self._dtstr = False
        if isinstance(self.p.dtformat, string_types):
            self._dtstr = True
        elif isinstance(self.p.dtformat, integer_types):
            idt = int(self.p.dtformat)
            if idt == 1:
                self._dtconvert = lambda x: datetime.utcfromtimestamp(int(x))
            elif idt == 2:
                self._dtconvert = lambda x: datetime.utcfromtimestamp(float(x))

        else:  # assume callable
            self._dtconvert = self.p.dtformat
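A straightforward fix sketch is to replace the lambdas with module-level functions, which pickle serializes by reference. The function names here are made up for illustration:

```python
import pickle
from datetime import datetime

# module-level functions are picklable; the equivalent lambdas are not
def dt_from_int(x):
    return datetime.utcfromtimestamp(int(x))

def dt_from_float(x):
    return datetime.utcfromtimestamp(float(x))

# serializing a module-level function succeeds
payload = pickle.dumps(dt_from_int)
```

In GenericCSVData.start(), assigning self._dtconvert = dt_from_int (or dt_from_float) instead of the lambdas would make the data feed usable with multi-CPU optimization.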

[IB Feed] Subscription is not canceled on Bust error (err 10225)

Community Forum Discussion:

Description

Up to TWS API v972, IB realtimeBar subscriptions could stop working during the trading session with no error notification sent to the application. This happens because of so-called 'bust' events.

Upon a bust event, the current data subscription should first be canceled before issuing a new subscription request.

The original 'fix' for handling bust events missed this part.

Erroneous behavior

Upon a bust event the IB log shows that a new subscription request was sent; however, in some cases the data subscription is not renewed due to a race condition on the IB server side.
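The intended recovery sequence can be sketched as follows. The cancel/request method names follow the TWS API naming; the store wiring and signatures are simplified stand-ins, not backtrader's real code:

```python
class StoreStub:
    # stand-in for the IB store, recording the call order
    def __init__(self):
        self.calls = []

    def cancelRealTimeBars(self, ticker_id):
        self.calls.append(('cancel', ticker_id))

    def reqRealTimeBars(self, ticker_id, contract):
        self.calls.append(('request', ticker_id))

def on_bust(store, ticker_id, contract):
    # cancel the stale subscription first, then request a fresh one,
    # so the server never sees two overlapping subscriptions
    store.cancelRealTimeBars(ticker_id)
    store.reqRealTimeBars(ticker_id, contract)
```

The original fix issued the new request without the preceding cancel, which is the part that races on the server side.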

pd.ols is deprecated

In ols.py, class OLS_BetaN, the line r_beta = pd.ols(y=y, x=x, window_type='full_sample') raises an error because pd.ols no longer exists. I think it should be replaced by statsmodels' OLS (sm.OLS).
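The same intercept/slope that pd.ols used to provide can be obtained from statsmodels' sm.OLS, or, as a dependency-free sketch, from numpy's least squares:

```python
import numpy as np

# least-squares regression of y on x with an intercept column,
# the computation pd.ols(y=y, x=x) used to perform
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])
X = np.column_stack([np.ones_like(x), x])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
```

With statsmodels the equivalent call would be sm.OLS(y, sm.add_constant(x)).fit(), whose params attribute holds (intercept, slope) in that order.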

Resampling multiple IB datas causes cerebro to grind to a halt

This seems to be affected by the following two conditions:

  1. Using cerebro.resampledata() to add the data to cerebro.
  2. Using 2 or more datas

I've tried to trace it, but I don't have much experience tracing bugs like this where I can't pinpoint an appropriate pdb breakpoint, especially when I don't know the project well.

The full trace I captured using python3 -m trace --trace is here if anyone finds it useful:
https://gist.github.com/shed909/09427c02944f5111822c767197d84cf7

And when I keyboard interrupt it:

^CTraceback (most recent call last):
  File "/Users/shane/tstrade/tstrade/main.py", line 446, in <module>
    runstrat()
  File "/Users/shane/tstrade/tstrade/main.py", line 269, in runstrat
    runs = cerebro.run(**cerebrokwargs)
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/cerebro.py", line 1127, in run
    runstrat = self.runstrategies(iterstrat)
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/cerebro.py", line 1298, in runstrategies
    self._runnext(runstrats)
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/cerebro.py", line 1573, in _runnext
    if d.next(datamaster=dmaster, ticks=False):  # retry
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/feed.py", line 407, in next
    ret = self.load()
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/feed.py", line 479, in load
    _loadret = self._load()
  File "/Users/shane/tstrade/lib/python3.9/site-packages/backtrader/feeds/ibdata.py", line 573, in _load
    msg = self.qhist.get()
  File "/usr/local/Cellar/[email protected]/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/queue.py", line 171, in get
    self.not_empty.wait()
  File "/usr/local/Cellar/[email protected]/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 312, in wait
    waiter.acquire()
KeyboardInterrupt

I have tested the following with successful results:

  • Resampling a single data.

And the following with unsuccessful results:

  • Using adddata() with many datas - no issues there.
  • No (default) kw args (except for the fromdate and todate) when getting/resampling the datas (so when both datas are using the same timeframe and compression, the issue is still present).
  • A combination of different args - nothing in terms of timeframe or compression seems to make a difference, so I stopped here.

Here is example of how I'm testing this:

    ibstore = bt.stores.IBStore(**ibstorekwargs)
    # Create getdata and adddata objects 
    getdata = ibstore.getdata
    # resampledata or adddata, we can switch between to test
    # adddata = cerebro.adddata
    adddata = cerebro.resampledata

    # Trade data
    tdata = 'IBAU200-CFD-SMART'

    trade_data = getdata(
            dataname=tdata,
            **tdatakwargs,
            )
    # Add the data to cerebro
    adddata(trade_data,
        name=tdata,
        **tdatarekwargs,
        )

    wdata = ['IBUS500-CFD-SMART']

    for i in range(len(wdata)):
        wdataname = wdata[i]
        # Get the data
        watch_data = getdata(
                dataname=wdataname,
                **wdatakwargs,
                )
        # Add the data to cerebro
        adddata(watch_data,
                name=wdata[i],
                **wdatarekwargs,
                )

The above includes the IB products/datas I used in the Python trace above. I've also tried many others, including the same instruments with the same timeframe and compression.
I'm just testing month blocks of daily data at the moment, eg: -fd 2021-06-02 -td 2021-07-02

I'm sure someone can replicate this quite easily, or if anyone has any pointers on how to trace it further, I'd love to have a crack myself.
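The interrupt traceback shows the feed blocked indefinitely in self.qhist.get(). A common mitigation pattern, shown here only as a standalone sketch and not as a backtrader patch, is a timeout-based get so the event loop can keep servicing the other feeds:

```python
import queue

q = queue.Queue()  # stand-in for the feed's qhist

try:
    # give up after a short interval instead of blocking forever;
    # 0.5 mirrors the qcheck-style timeouts used elsewhere in the feeds
    msg = q.get(timeout=0.5)
except queue.Empty:
    msg = None  # caller can retry on the next loop iteration
```

Whether the real bug is the blocking get or the upstream producer never delivering a message for the second resampled data is exactly what needs tracing.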

[live feeds] ERROR: __len__() should return >= 0

Forum Discussion

https://community.backtrader.com/topic/1367/error-__len__-should-return-0-on-livefeed

Description

The following exception may be raised when live trading a strategy with more than one data feed.

Exception

Traceback (most recent call last):
  File "/proyect/venv/code/test2.py", line 110, in <module>
    cerebro.run()
  File "/proyect/venv/lib/py/site-packages/backtrader/cerebro.py", line 1127, in run
    runstrat = self.runstrategies(iterstrat)
  File "/proyect/venv/lib/py/site-packages/backtrader/cerebro.py", line 1298, in runstrategies
    self._runnext(runstrats)
  File "/proyect/venv/lib/py/site-packages/backtrader/cerebro.py", line 1631, in _runnext
    strat._next()
  File "/proyect/venv/lib/py/site-packages/backtrader/strategy.py", line 325, in _next
    super(Strategy, self)._next()
  File "/proyect/venv/lib/py/site-packages/backtrader/lineiterator.py", line 255, in _next
    clock_len = self._clk_update()
  File "/proyect/venv/lib/py/site-packages/backtrader/strategy.py", line 305, in _clk_update
    newdlens = [len(d) for d in self.datas]
  File "/proyect/venv/lib/py/site-packages/backtrader/strategy.py", line 305, in <listcomp>
    newdlens = [len(d) for d in self.datas]
  File "/proyect/venv/lib/py/site-packages/backtrader/lineseries.py", line 464, in __len__
    return len(self.lines)
  File "/proyect/venv/lib/py/site-packages/backtrader/lineseries.py", line 220, in __len__
    return len(self.lines[0])
ValueError: __len__() should return >= 0

Analysis

It can happen when Cerebro._runnext loads a bar which arrives in the narrow window of time between the first check for new data:

drets.append(d.next(ticks=False))

and the second chance:

if d.next(datamaster=dmaster, ticks=False)

If the new bar's timestamp shows it is from the "future" with respect to dmaster then it is loaded once (in d.next()) but rewound twice: once within d.next() itself and again in _runnext a few lines after the above block:

di.rewind()

My guess is that the needed fix is in AbstractDataBase.next():

if self.lines.datetime[0] > datamaster.lines.datetime[0]:
    # can't deliver new bar, too early, go back
    self.rewind()
    return False # <-- This needs to be added since bar is not delivered

Resampling 1 Minute to 20 Minutes produces bars at absurd intervals

I'm trying to resample 1 minute bars to 20 minutes.
But the resampled bars are produced at absurd intervals.

Source of datafeed: Interactive Brokers API
Market hours: 09:15 hrs to 15:30 hrs
Backfill duration: 10 days

Note: Data provided by the IB API is left-edge labeled (forward looking) by default. BackwardLookingFilter is a simple workaround that right-edges the minute bars. The problem persists with or without it.


import backtrader as bt
from datetime import datetime, timedelta

fromdate = datetime.now().date() - timedelta(days=10)
sessionstart = datetime.now().time().replace(hour=9, minute=15, second=0, microsecond=0)
sessionend = datetime.now().time().replace(hour=15, minute=30, second=0, microsecond=0)


class BackwardLookingFilter(object):

    def __init__(self, data):
        pass

    def __call__(self, data):
        data.datetime[0] = data.date2num(data.datetime.datetime(0) + timedelta(minutes=1))
        return False


class Test(bt.Strategy):

    def next(self):

        print("Minutes 1 ", self.data0.datetime.datetime(0))
        print("Minutes 20 ", self.data1.datetime.datetime(0))
        print("_"*50)



cerebro = bt.Cerebro()
cerebro.addstrategy(Test)

store = bt.stores.IBStore(port=7497, _debug=False)
cerebro.setbroker(store.getbroker())


minutedata = store.getdata(dataname="NIFTY50_IND_NSE", fromdate=fromdate, rtbar=True, sessionstart=sessionstart,
                           historical=True, sessionend=sessionend, timeframe=bt.TimeFrame.Minutes,
                           compression=1)

cerebro.adddata(minutedata)

minutedata.addfilter(BackwardLookingFilter)


cerebro.resampledata(minutedata,
                     timeframe=bt.TimeFrame.Minutes,
                     compression=20,
                     bar2edge=False,
                     boundoff=5)

cerebro.run()

Output:

Server Version: 76
TWS Time at connection:20220226 19:12:01 India Standard Time
Minutes 1  2022-02-16 09:35:00
Minutes 20 2022-02-16 09:35:00
__________________________________________________
Minutes 1  2022-02-16 09:36:00
Minutes 20 2022-02-16 09:35:00
__________________________________________________
Minutes 1  2022-02-16 09:37:00
Minutes 20 2022-02-16 09:35:00
__________________________________________________
Minutes 1  2022-02-16 09:38:00
Minutes 20 2022-02-16 09:37:00
__________________________________________________
Minutes 1  2022-02-16 09:39:00
Minutes 20 2022-02-16 09:37:00
__________________________________________________
Minutes 1  2022-02-16 09:40:00
Minutes 20 2022-02-16 09:37:00
__________________________________________________

Ideally, after 09:35:00 the next 20-minute bar should be produced at 09:55:00.

Unexpected behavior:
The 20-minute data feed produces a bar at 09:37:00 (4th row in the output).
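For reference, a plain-Python sanity check (not backtrader code) of where the resampled bars should be delivered, assuming the first completed 20-minute bar lands at 09:35:00 and each subsequent bar closes exactly `compression` minutes later:

```python
from datetime import datetime, timedelta

# Expected delivery times of the resampled 20-minute bars (assumption:
# bars close at fixed 20-minute intervals after the first completed bar).
first_bar = datetime(2022, 2, 16, 9, 35)
compression = 20  # minutes

expected = [first_bar + timedelta(minutes=compression * i) for i in range(3)]
print([t.strftime('%H:%M:%S') for t in expected])
# → ['09:35:00', '09:55:00', '10:15:00']
```

The observed 09:37:00 bar clearly does not fall on any of these boundaries.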

Strategy's self.positions throws AttributeError when cerebro.broker set to IBBroker

Community Forum Discussion:
https://community.backtrader.com/topic/3637/self-positions-throws-attributeerror-when-cerebro-broker-set-to-ibbroker

Description

When trying to access the self.positions attribute from within the next method of a strategy, the following exception is thrown in case IBBroker is setup as a broker:

Traceback (most recent call last):
  File "/test/support_3637.py", line 30, in <module>
    run()
  File "/test/support_3637.py", line 27, in run
    cerebro.run()
  File "W:\backtrader\backtrader\cerebro.py", line 1177, in run
    runstrat = self.runstrategies(iterstrat)
  File "W:\backtrader\backtrader\cerebro.py", line 1351, in runstrategies
    self._runnext(runstrats)
  File "W:\backtrader\backtrader\cerebro.py", line 1687, in _runnext
    strat._next()
  File "W:\backtrader\backtrader\strategy.py", line 347, in _next
    super(Strategy, self)._next()
  File "W:\backtrader\backtrader\lineiterator.py", line 273, in _next
    self.nextstart()  # only called for the 1st value
  File "W:\backtrader\backtrader\lineiterator.py", line 347, in nextstart
    self.next()
  File "/test/support_3637.py", line 9, in next
    print(self.positions)
  File "\backtrader\backtrader\lineseries.py", line 461, in __getattr__
    return getattr(self.lines, name)
AttributeError: 'Lines_LineSeries_LineIterator_DataAccessor_Strateg' object has no attribute 'positions'

Process finished with exit code 1

Analysis

It seems the property 'positions' is not defined in the IBBroker class, while it is defined in all other broker implementations. This property is defined in the IBStore class instead. The fix seems easy: just add the property to the IBBroker class and have it return the IBStore.positions property.
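A minimal self-contained sketch of the proposed delegation (the real IBBroker keeps its store reference in `self.ib`; the stub store here merely stands in for IBStore, so names and values are illustrative only):

```python
# Stand-in for IBStore: the real store maintains a positions property.
class StubStore:
    def __init__(self):
        self.positions = {'EUR.USD': 10000}  # illustrative value


# Stand-in for IBBroker, with the proposed fix applied.
class StubBroker:
    def __init__(self, store):
        self.ib = store  # IBBroker keeps its store reference this way

    @property
    def positions(self):
        # Proposed fix: simply delegate to the store's positions property
        return self.ib.positions


broker = StubBroker(StubStore())
print(broker.positions)  # → {'EUR.USD': 10000}
```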

Test case

import backtrader as bt

class TestPositions(bt.Strategy):
    def next(self):
        print(self.positions)

def run(args=None):
    cerebro = bt.Cerebro(live=True)
    store = bt.stores.IBStore(host='127.0.0.1', port=7496)
    cerebro.setbroker(store.getbroker())
    data = store.getdata(dataname='EUR.USD-CASH-IDEALPRO',
                           historical=False,
                           backfill=False,
                           backfill_start=False)
    cerebro.adddata(data)
    cerebro.addstrategy(TestPositions)
    cerebro.run()

if __name__ == '__main__':
    run()

Math Functions

This issue seeks to add new scalar math functionality such as math.log to the available functions in Backtrader.

The functionality of And/Max/Min is located in functions.py.

class Max(MultiLogic):
    flogic = max

They use the math library in conjunction with a class MultiLogic. This is designed to work with two or more lines.

class MultiLogic(Logic):
    def next(self):
        self[0] = self.flogic([arg[0] for arg in self.args])

    def once(self, start, end):
        # cache python dictionary lookups
        dst = self.array
        arrays = [arg.array for arg in self.args]
        flogic = self.flogic

        for i in range(start, end):
            dst[i] = flogic([arr[i] for arr in arrays])

Essentially, what MultiLogic does is iterate through the lines at a particular bar, build a list, and then apply the math function to that list. Of course, functions like Max and And expect an iterable.
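The same per-bar reduction can be sketched in plain Python, with ordinary lists standing in for backtrader line objects:

```python
# Plain-Python stand-in for MultiLogic.once: apply `flogic` across the
# values of every input line at each bar index.
flogic = max
arrays = [
    [222.57, 222.03, 222.16],  # e.g. the open line
    [221.77, 222.14, 221.44],  # e.g. the close line
]
dst = [flogic([arr[i] for arr in arrays]) for i in range(len(arrays[0]))]
print(dst)  # → [222.57, 222.14, 222.16]
```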

When we try to do this with a single line like math.log, then it breaks because math.log is looking for a scalar but is getting a list.

dst[i] = flogic([arr[i] for arr in arrays])

We could adjust MultiLogic to check whether the list has length one and, if so, select item [0] to make it a scalar. This works, but I don't like it because it means modifying a built-in class that I would prefer to leave alone.

So instead I created a new class called SingleLogic.

class SingleLogic(Logic):
    def next(self):
        self[0] = self.flogic(self.args[0])

    def once(self, start, end):
        # cache python dictionary lookups
        dst = self.array
        flogic = self.flogic

        for i in range(start, end):
            dst[i] = flogic(self.args[0].array[i])

With this single logic class we can now use all of the math library functions that require a scalar. Here are some examples:


class Log(SingleLogic):
    flogic = math.log10

class Ceiling(SingleLogic):
    flogic = math.ceil

class Floor(SingleLogic):
    flogic = math.floor

class Abs(SingleLogic):
    flogic = math.fabs

Then I used them in a basic strategy like this:

   def __init__(self):

        self.ma = bt.ind.EMA(period=10)
        self.cross = bt.ind.CrossOver(self.datas[0].close, self.ma)
        # Single logic
        self.lg = bt.Log(self.datas[0].close)
        self.cl = bt.Ceiling(self.datas[0].close)
        self.fl = bt.Floor(self.datas[0].close)
        self.cross_abs = bt.Abs(self.cross)

        # Check Multi still works
        self.mx = bt.Max(self.datas[0].close, self.datas[0].open)

    def next(self):

        self.log(f" open {self.datas[0].open[0]:.2f} close {self.datas[0].close[0]:.2f}, max {self.mx[0]:.2f}, "
                 f"log {self.lg[0]:5.3f}, ceiling {self.cl[0]:5.3f}, floor {self.fl[0]:5.3f} "
                 f"cross {self.cross[0]:2.0f} abs cross {self.cross_abs[0]:2.0f}")

And get output of this:

/home/runout/projects/scratch/venv/bin/python "/home/runout/projects/scratch/20200726 math.py"
2020-01-16,  open 222.57 close 221.77, max 222.57, log 2.346, ceiling 222.000, floor 221.000 cross  0 abs cross  0
2020-01-17,  open 222.03 close 222.14, max 222.14, log 2.347, ceiling 223.000, floor 222.000 cross  0 abs cross  0
2020-01-21,  open 222.16 close 221.44, max 222.16, log 2.345, ceiling 222.000, floor 221.000 cross  0 abs cross  0
2020-01-22,  open 222.31 close 221.32, max 222.31, log 2.345, ceiling 222.000, floor 221.000 cross  0 abs cross  0
2020-01-23,  open 220.75 close 219.76, max 220.75, log 2.342, ceiling 220.000, floor 219.000 cross  0 abs cross  0
2020-01-24,  open 220.80 close 217.94, max 220.80, log 2.338, ceiling 218.000, floor 217.000 cross -1 abs cross  1
2020-01-27,  open 213.10 close 214.87, max 214.87, log 2.332, ceiling 215.000, floor 214.000 cross  0 abs cross  0
2020-01-28,  open 216.14 close 217.79, max 217.79, log 2.338, ceiling 218.000, floor 217.000 cross  0 abs cross  0
2020-01-29,  open 221.44 close 223.23, max 223.23, log 2.349, ceiling 224.000, floor 223.000 cross  1 abs cross  1
2020-01-30,  open 206.53 close 209.53, max 209.53, log 2.321, ceiling 210.000, floor 209.000 cross -1 abs cross  1
2020-01-31,  open 208.43 close 201.91, max 208.43, log 2.305, ceiling 202.000, floor 201.000 cross  0 abs cross  0
2020-02-03,  open 203.44 close 204.19, max 204.19, log 2.310, ceiling 205.000, floor 204.000 cross  0 abs cross  0
2020-02-04,  open 206.62 close 209.83, max 209.83, log 2.322, ceiling 210.000, floor 209.000 cross  0 abs cross  0
2020-02-05,  open 212.51 close 210.11, max 212.51, log 2.322, ceiling 211.000, floor 210.000 cross  0 abs cross  0
2020-02-06,  open 210.47 close 210.85, max 210.85, log 2.324, ceiling 211.000, floor 210.000 cross  0 abs cross  0
2020-02-07,  open 210.30 close 212.33, max 212.33, log 2.327, ceiling 213.000, floor 212.000 cross  1 abs cross  1
2020-02-10,  open 211.52 close 213.06, max 213.06, log 2.329, ceiling 214.000, floor 213.000 cross  0 abs cross  0

It is very easy now to create new functions from the math library. We just have to pick the ones we want to implement. And of course we'll need some testing and documentation.

live trading: notify_timer is not called if timer is set after eos

I need to get a timer notification after a trading session ends (including after-hours trading), let's say 5 minutes after eos. Unfortunately, such timers are not fired until the next session begins (if at all).

Here is the code used to set such a timer (in strategy ctor):

    (_, eos) = self.data._calendar.schedule(datetime.combine(date.today(), time()))
    offset = timedelta(hours=4, minutes=5) if self.p.outside_rth else timedelta(minutes=5)
    self.add_timer(when=eos.time(),
                   offset=offset,
                   tzdata=self.data)

(It does not matter whether an absolute time is used, as in the code above, or the Timer.SESSION_END constant.)

It seems the problem is that the timers are only checked if at least one data feed produces a new bar inside the `cerebro._runnext` method.

def _runnext(self, runstrats):
    ...
    ...
    if d0ret or lastret:  # if any bar, check timers before broker
        self._check_timers(runstrats, dt0, cheat=True)
        if self.p.cheat_on_open:
            for strat in runstrats:
                strat._next_open()
                if self._event_stop:  # stop if requested
                    return
    ...
    ...

However, after the trading session ends, no data feed produces any bars, which means _check_timers is not called and no notify_timer notification is fired.

After looking at it a little more, the problem is not only that the self._check_timers method is no longer called if no data feed produces any bars.

Even if it could be called, the timestamp (stored in the dt0 variable) passed to the self._check_timers call would be the same as the last timestamp returned by the data feed.

This timestamp would also be frozen once the data feed has reached the end of the trading session, so it doesn't make any sense to call self._check_timers at all in this case.

It seems the original timer implementation was designed specifically for backtesting, while live trading capabilities were added later.

However, the current implementation is not fully suitable for live trading scenarios, since it depends entirely on the time values provided by the data feed.

The proposed solution is to introduce another type of Timer class, used only in live trading scenarios, that relies on real-world (wall-clock) time.
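A rough sketch of what such a live-only timer could look like (a hypothetical class, not existing backtrader API): it checks against wall-clock time instead of the data feed's timestamps, so it can fire even when no bars arrive.

```python
from datetime import datetime, time

class WallClockTimer:
    """Hypothetical live-trading timer: fires once when real time passes `when`."""
    def __init__(self, when):
        self.when = when   # a datetime.time
        self.fired = False

    def check(self, now=None):
        # `now` is injectable for testing; defaults to real wall-clock time
        now = now or datetime.now()
        if not self.fired and now.time() >= self.when:
            self.fired = True
            return True    # caller would dispatch notify_timer here
        return False

timer = WallClockTimer(time(20, 5))  # e.g. 5 minutes after a 20:00 session end
print(timer.check(datetime(2022, 2, 16, 19, 0)))  # → False (too early)
print(timer.check(datetime(2022, 2, 16, 20, 6)))  # → True  (fires once)
print(timer.check(datetime(2022, 2, 16, 20, 7)))  # → False (already fired)
```

The main loop would then poll such timers on every iteration, independently of whether a data feed produced a bar.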

'frompackages' directive functionality seems to be broken when using inheritance

Community discussion:

https://community.backtrader.com/topic/2661/frompackages-directive-functionality-seems-to-be-broken-when-using-inheritance

Background:

The 'frompackages' directive was added to backtrader back in 2017 (starting from release 1.9.30.x). It allows specifying external packages to be imported only during instantiation of the class (usually an indicator). It comes in very handy during optimization, reducing the serialization size of the objects. More on this here

Usage example:

class MyIndicator(bt.Indicator):
    frompackages = (('pandas', 'SomeFunction'),)
    lines = ('myline',)
    params = (
        ('period', 50),
    )

    def next(self):
        print('mylines[0]:', SomeFunction(self.lines.myline[0]))

Here SomeFunction will be imported from the pandas package during the instantiation of MyIndicator, and not earlier.

Testcase:

In the same article, it was also claimed that "Both packages and frompackages support (multiple) inheritance". However, that does not seem to be the case. Here is a short test case:

import os
import backtrader as bt

class HurstExponentEx(bt.indicators.HurstExponent):
    def __init__(self):
        super(HurstExponentEx, self).__init__()

    def next(self):
        super(HurstExponentEx, self).next()
        print('test')

class TheStrategy(bt.Strategy):
    def __init__(self):
        self.hurst = HurstExponentEx(self.data, lag_start=10,lag_end=500)

    def next(self):
        print('next')

def runstrat():
    cerebro = bt.Cerebro()
    cerebro.broker.set_cash(1000000)
    data_path = os.path.join(bt.__file__, '../../datas/yhoo-1996-2014.txt')
    data0 = bt.feeds.YahooFinanceCSVData(dataname=data_path)
    cerebro.adddata(data0)
    cerebro.addstrategy(TheStrategy)
    cerebro.run()
    cerebro.plot()

if __name__ == '__main__':
    runstrat()

where the HurstExponent class is defined in backtrader as:

class HurstExponent(PeriodN):
    frompackages = (
        ('numpy', ('asarray', 'log10', 'polyfit', 'sqrt', 'std', 'subtract')),
    )
    ...

Unexpected behavior:

Trying to run it (using Python 3.6 in my case) produces:

Traceback (most recent call last):
  File "test_frompackage.py", line 42, in <module>
    runstrat()
  File "test_frompackage.py", line 38, in runstrat
    cerebro.run()
  File "W:\backtrader\backtrader\cerebro.py", line 1182, in run
    runstrat = self.runstrategies(iterstrat)
  File "W:\backtrader\backtrader\cerebro.py", line 1275, in runstrategies
    strat = stratcls(*sargs, **skwargs)
  File "W:\backtrader\backtrader\metabase.py", line 88, in __call__
    _obj, args, kwargs = cls.doinit(_obj, *args, **kwargs)
  File "W:\backtrader\backtrader\metabase.py", line 78, in doinit
    _obj.__init__(*args, **kwargs)
  File "test_frompackage.py", line 17, in __init__
    lag_end=500)
  File "W:\backtrader\backtrader\indicator.py", line 53, in __call__
    return super(MetaIndicator, cls).__call__(*args, **kwargs)
  File "W:\backtrader\backtrader\metabase.py", line 88, in __call__
    _obj, args, kwargs = cls.doinit(_obj, *args, **kwargs)
  File "W:\backtrader\backtrader\metabase.py", line 78, in doinit
    _obj.__init__(*args, **kwargs)
  File "test_frompackage.py", line 6, in __init__
    super(HurstExponentEx, self).__init__()
  File "W:\backtrader\backtrader\indicators\hurst.py", line 82, in __init__
    self.lags = asarray(range(lag_start, lag_end))
NameError: name 'asarray' is not defined

As can be seen, asarray is a function that should have been imported from the numpy package upon instantiation of the HurstExponent class.

If we use the HurstExponent class directly, instead of our HurstExponentEx (which inherits from it), everything works just fine.

Analysis - TL;DR

Exploring this a little exposes a problem with the implementation of frompackages inside backtrader.

The magic code responsible for handling the 'frompackages' directive can be found in the backtrader\metabase.py file, inside the MetaParams.__new__ and MetaParams.donew methods. Here the 'frompackages' directive is first examined recursively (in the __new__ method) and the appropriate packages are imported using the __import__ function (in the donew method).

The problem is with the following code inside the donew method:

    def donew(cls, *args, **kwargs):
        clsmod = sys.modules[cls.__module__]
        .
        . <removed for clarity>
        .
        # import from specified packages - the 2nd part is a string or iterable
        for p, frompackage in cls.frompackages:
            if isinstance(frompackage, string_types):
                frompackage = (frompackage,)  # make it a tuple

            for fp in frompackage:
                if isinstance(fp, (tuple, list)):
                    fp, falias = fp
                else:
                    fp, falias = fp, fp  # assumed is string

                # complain "not string" without fp (unicode vs bytes)
                pmod = __import__(p, fromlist=[str(fp)])
                pattr = getattr(pmod, fp)
                setattr(clsmod, falias, pattr)

The cls parameter to this function is the class that needs to be instantiated. In our case, this is our inherited class HurstExponentEx.

So the clsmod variable will contain the module of our class - obviously the file that HurstExponentEx was defined in.

The problem is with the last line of the above code:

setattr(clsmod, falias, pattr)

Here setattr introduces the imported names into the module of our inherited class, not into the module the original base class HurstExponent is defined in.

And it is a problem!

Once the HurstExponent class starts executing and calling the supposedly imported functions, those names will be looked up in the module where HurstExponent is defined, and will not be found, since they were introduced into the module of our inherited class instead!

FIX

The fix seems obvious: introduce the imported names into the original base class module.
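One way to sketch the idea (hedged; the real fix would live inside MetaParams.donew): instead of setting the imported names only on the instantiated class's module, walk the MRO and set them on the module of every class in the hierarchy, so base-class code can resolve them too. The helper name and fake modules below are illustrative only.

```python
import sys
import types

# Two stand-in modules: one "base" module (where HurstExponent would live)
# and one "derived" module (where HurstExponentEx would live).
base_mod = types.ModuleType('fake_base_mod')
derived_mod = types.ModuleType('fake_derived_mod')
sys.modules['fake_base_mod'] = base_mod
sys.modules['fake_derived_mod'] = derived_mod

class Base:
    pass
Base.__module__ = 'fake_base_mod'

class Derived(Base):
    pass
Derived.__module__ = 'fake_derived_mod'

def introduce_name(cls, alias, attr):
    # Proposed behaviour: set the imported attribute on the module of
    # every class in the MRO, not just on cls's own module.
    for klass in cls.__mro__:
        if klass.__module__ == 'builtins':
            continue  # never pollute the builtins module
        mod = sys.modules.get(klass.__module__)
        if mod is not None:
            setattr(mod, alias, attr)

introduce_name(Derived, 'asarray', lambda x: list(x))

# The name is now visible in BOTH modules, so base-class code finds it too
print(hasattr(base_mod, 'asarray'), hasattr(derived_mod, 'asarray'))  # → True True
```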

test_writer.test_run fails after pulling the 1.9.74.123 release

ERROR: test_writer.test_run
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python3.6.7/lib/python3.6/site-packages/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/home/travis/build/backtrader2/backtrader/tests/test_writer.py", line 48, in test_run
    writer=(bt.WriterStringIO, dict(csv=True)))
  File "/home/travis/build/backtrader2/backtrader/tests/testcommon.py", line 110, in runtest
    cerebro.run()
  File "/home/travis/build/backtrader2/backtrader/backtrader/cerebro.py", line 1085, in run
    wr = wrcls(*wrargs, **wrkwargs)
  File "/home/travis/build/backtrader2/backtrader/backtrader/metabase.py", line 88, in __call__
    _obj, args, kwargs = cls.doinit(_obj, *args, **kwargs)
  File "/home/travis/build/backtrader2/backtrader/backtrader/metabase.py", line 78, in doinit
    _obj.__init__(*args, **kwargs)
  File "/home/travis/build/backtrader2/backtrader/backtrader/writer.py", line 218, in __init__
    self.out = self.out()
AttributeError: 'WriterStringIO' object has no attribute 'out'
----------------------------------------------------------------------

Caused by commit 71b1781.

High memory consumption while optimizing using InfluxDB data feed

I'm using InfluxDB to store historical data. Naturally, the InfluxDB data feed is used for backtesting and optimizing the strategy.

Trying to optimize the strategy on a ~10-year 5-minute data set with a few parameter ranges (resulting in 90 iterations) ran my dev machine (12 cores + 12 GB RAM) out of memory.

Here are the cerebro flags I was using:

    cerebro = bt.Cerebro(maxcpus=args.maxcpus,
		                 live=False,
		                 runonce=True,
		                 exactbars=False,
		                 optdatas=True,
		                 optreturn=True,
		                 stdstats=False,
		                 quicknotify=True)

Analysis:

After a little debugging, the problem appears to be with the InfluxDB data feed implementation, which lacks proper support for the preload function.

In the current InfluxDB implementation, the data from the Influx database is loaded during the InfluxDB.start method and the result-set is kept in memory for the lifetime of the InfluxDB instance. Even if cerebro preloads all the data, the result-set (which is no longer needed in that case) will still be in memory.

This is problematic when running optimization, where multiprocessing.Pool and Pool.imap is used for running the strategy with all its parameter permutations concurrently.

The way multiprocessing.Pool works (with the default start method on Linux, at least) is that the main process is simply forked for each worker process, and each worker inherits the main process memory, including the memory allocated for the aforementioned result-set of the InfluxDB data feed. In addition, for each run of the strategy the cerebro instance is serialized (pickled) and passed to the worker process; once again this includes the memory for the InfluxDB data feed, since it is directly referenced by the cerebro instance. This unnecessarily increases memory pressure during the optimization process.
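One possible direction, shown as a generic sketch rather than the actual InfluxDB feed code: once all bars have been preloaded into the lines, drop the reference to the raw result-set so it can be garbage-collected before the optimization workers are forked or the cerebro instance is pickled. The class and attribute names below are stand-ins.

```python
class SketchFeed:
    """Generic data-feed sketch: release the raw result-set after preload."""

    def start(self):
        # stand-in for the InfluxDB query result-set held in memory
        self.dbrows = [{'close': 100.0 + i} for i in range(5)]
        self._iter = iter(self.dbrows)

    def load(self):
        # deliver one bar per call, as backtrader feeds do
        try:
            self.bar = next(self._iter)
            return True
        except StopIteration:
            return False

    def preload(self):
        while self.load():
            pass
        # all bars copied out: the result-set is no longer needed, so
        # free it instead of keeping it alive (and getting it pickled)
        self.dbrows = None
        self._iter = None

feed = SketchFeed()
feed.start()
feed.preload()
print(feed.dbrows)  # → None
```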

backtrader_plotting: support for live trading

Some time ago I added support for live trading to backtrader_plotting. To make it integrate cleanly, I added an additional interface to backtrader, which I called "listener". Listeners are similar to writers and can be added just like other entities (observers, analyzers, writers, etc.), but they get called at the very end of the loop, so they are able to inspect the whole cerebro state.
So at the moment you need a custom backtrader build that includes that interface to use live trading with backtrader_plotting.
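As a rough sketch, the interface described above might look something like this (a hypothetical shape, inferred from the description; the commit referenced in this issue is authoritative):

```python
class Listener:
    """Hypothetical listener interface, modeled on writers/analyzers."""

    def start(self, cerebro):
        # called once before the run; the listener keeps the cerebro
        # reference so it can inspect the full state later
        self.cerebro = cerebro

    def next(self):
        # called at the very end of each loop iteration
        pass

    def stop(self):
        # called once after the run
        pass


class RecordingListener(Listener):
    def __init__(self):
        self.calls = []

    def next(self):
        self.calls.append('next')


# A toy driver standing in for cerebro's main loop
listener = RecordingListener()
listener.start(cerebro=object())
for _ in range(3):
    listener.next()
listener.stop()
print(len(listener.calls))  # → 3
```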

I would like to ask if you guys would be interested to integrate that interface into backtrader2.

It is about this commit:
verybadsoldier@794a7ec

I can send a PR if you want.

PS.
I am not very happy with the term "listener". If someone has an idea for a better term, that would be great.
