scipy / scipy
SciPy library main repository
Home Page: https://scipy.org
License: BSD 3-Clause "New" or "Revised" License
Original ticket http://projects.scipy.org/scipy/ticket/10 on 2006-02-22 by @alberts, assigned to unknown.
The triang (triangular window) function in signaltools.py in SciPy 0.4.6 contains the line:
w = numpy.r_[w, w[::-1]]
However, at the top of the file numpy is only imported under an alias:
import numpy as Numeric
The following script:
from scipy.signal import triang
print triang(3)
fails with:
Traceback (most recent call last):
File "frags.py", line 9, in ?
print triang(3)
File "C:\Python24\Lib\site-packages\scipy\signal\signaltools.py", line 579, in triang
w = numpy.r_[w, w[-2::-1]]
NameError: global name 'numpy' is not defined
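A minimal sketch of the fix, assuming the module gains a plain `import numpy` (the report shows it is only imported as `Numeric`). This stand-in covers only the odd-length case; the real `scipy.signal.triang` also handles even window lengths:

```python
import numpy

def triang(M):
    # Simplified sketch of an odd-length triangular window.
    # Using the bare name `numpy` requires that `import numpy`
    # actually appears in the module, which is the bug above.
    n = numpy.arange(1, (M + 1) // 2 + 1)
    w = 2.0 * n / (M + 1.0)
    return numpy.r_[w, w[-2::-1]]  # mirror, dropping the repeated peak

print(triang(3))
```

With the import in place, `triang(3)` returns the expected [0.5, 1.0, 0.5] instead of raising NameError.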
Original ticket http://projects.scipy.org/scipy/ticket/31 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/2 on 2006-01-30 by trac user anonymous (imransomji@...), assigned to unknown.
Hello, I have:
scipy-0.4.4.win32-py2.4-P4SSE2.exe
numpy-0.9.4.win32-py2.4.exe
when I run scipy.test() at the command line, I get:
Overwriting lib=<module 'scipy.lib' from 'C:\Python24\lib\site-packages\scipy\lib\__init__.pyc'> from C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was
<module 'numpy.lib' from 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc)
Found 128 tests for scipy.linalg.fblas
Found 92 tests for scipy.stats.stats
Found 36 tests for scipy.linalg.decomp
Found 6 tests for scipy.optimize.optimize
Found 10 tests for scipy.stats.morestats
Found 4 tests for scipy.linalg.lapack
Found 1 tests for scipy.optimize.zeros
Found 92 tests for scipy.stats
Found 42 tests for scipy.lib.lapack
Found 339 tests for scipy.special.basic
Found 128 tests for scipy.lib.blas.fblas
Found 7 tests for scipy.linalg.matfuncs
Found 41 tests for scipy.linalg.basic
Found 1 tests for scipy.optimize.cobyla
Found 14 tests for scipy.lib.blas
Found 14 tests for scipy.linalg.blas
Found 70 tests for scipy.stats.distributions
Found 6 tests for scipy.optimize
Found 0 tests for main
EEEEEEEEEEEEE
And then python crashes.
Original ticket http://projects.scipy.org/scipy/ticket/19 on 2006-03-06 by unknown, assigned to unknown.
Original ticket http://projects.scipy.org/scipy/ticket/9 on 2006-02-16 by unknown, assigned to unknown.
See
http://scipy.org/WikiWikiH%C3%A1l%C3%B3
Found this in the RecentChanges, so please keep an eye on that.
Cheers,
Bas
Original ticket http://projects.scipy.org/scipy/ticket/26 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/21 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/47 on 2006-04-02 by @rkern, assigned to @rkern.
The function nanstd
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/1 on 2006-01-06 by trac user arnd.baecker@..., assigned to @dmcooke.
fft via scipy is much slower when using fftw3 instead of fftw2 (even though fftw3 is
typically much faster than fftw2, e.g. according to benchfft - http://www.fftw.org/benchfft/).
This was also reported before by Darren Dale.
See http://www.physik.tu-dresden.de/~baecker/tmp/fftw/ for a graphical illustration plus the scripts used.
This is what Pearu got on an Opteron box:
# Use fftw-2.1.3:
pearu@opt:~/svn/scipy/Lib/fftpack$ FFTW3=None python setup.py build
pearu@opt:~/svn/scipy/Lib/fftpack$ python tests/test_basic.py -l 10
Found 23 tests for __main__
Fast Fourier Transform
=================================================
| real input | complex input
-------------------------------------------------
size | scipy | Numeric | scipy | Numeric
-------------------------------------------------
100 | 0.07 | 0.07 | 0.07 | 0.08 (secs for 7000 calls)
1000 | 0.07 | 0.11 | 0.09 | 0.11 (secs for 2000 calls)
256 | 0.13 | 0.16 | 0.15 | 0.15 (secs for 10000 calls)
512 | 0.19 | 0.28 | 0.22 | 0.29 (secs for 10000 calls)
1024 | 0.03 | 0.06 | 0.04 | 0.06 (secs for 1000 calls)
2048 | 0.06 | 0.10 | 0.09 | 0.10 (secs for 1000 calls)
4096 | 0.06 | 0.15 | 0.09 | 0.16 (secs for 500 calls)
8192 | 0.15 | 0.68 | 0.37 | 0.70 (secs for 500 calls)
...
----------------------------------------------------------------------
Ran 23 tests in 26.286s
# Use fftw-3.0.1:
pearu@opt:~/svn/scipy/Lib/fftpack$ FFTW2=None python setup.py build
pearu@opt:~/svn/scipy/Lib/fftpack$ python tests/test_basic.py -l 10
Found 23 tests for __main__
Fast Fourier Transform
=================================================
| real input | complex input
-------------------------------------------------
size | scipy | Numeric | scipy | Numeric
-------------------------------------------------
100 | 0.07 | 0.08 | 0.43 | 0.09 (secs for 7000 calls)
1000 | 0.07 | 0.12 | 0.61 | 0.12 (secs for 2000 calls)
256 | 0.15 | 0.16 | 0.99 | 0.16 (secs for 10000 calls)
512 | 0.22 | 0.29 | 1.53 | 0.29 (secs for 10000 calls)
1024 | 0.04 | 0.06 | 0.26 | 0.06 (secs for 1000 calls)
2048 | 0.06 | 0.10 | 0.48 | 0.10 (secs for 1000 calls)
4096 | 0.06 | 0.15 | 0.48 | 0.16 (secs for 500 calls)
8192 | 0.15 | 0.68 | 1.11 | 0.69 (secs for 500 calls)
....
----------------------------------------------------------------------
Ran 23 tests in 38.188s
Original ticket http://projects.scipy.org/scipy/ticket/28 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/27 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/7 on 2006-02-16 by unknown, assigned to trac user cimrman3@....
from scipy.sparse import *
from scipy import *
z = csr_matrix((3,3))
zpzcsr = z+z
z = csc_matrix((3,3))
zpzcsc = z+z
Original ticket http://projects.scipy.org/scipy/ticket/15 on 2006-03-03 by @dmcooke, assigned to @dmcooke.
See LibrariesToUpgrade; we're using 2.4, current version is 2.8 at http://www.moshier.net/#Cephes
Original ticket http://projects.scipy.org/scipy/ticket/5 on 2006-02-04 by trac user sozin, assigned to trac user sozin.
the [http://svn.scipy.org/svn/scipy/trunk/Lib/stats/tests/test_morestats.py test_morestats.py:test_ansari] function crashes python on my windows host.
Here's a script to reproduce:
from scipy import stats
Original ticket http://projects.scipy.org/scipy/ticket/30 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/23 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/48 on 2006-04-02 by @rkern, assigned to @rkern.
The function _nanmedian
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/11 on 2006-02-22 by @alberts, assigned to unknown.
The savemat
function in io
doesn't work in scipy 0.4.6:
The following script:
from numpy import array
from scipy.io import savemat
savemat('foo.mat', {'x' : array([0])})
causes the following traceback:
Traceback (most recent call last):
File "frags.py", line 3, in ?
savemat('foo.mat', {'x' : array([0])})
File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 875, in savemat
fid.fwrite(variable+'\x00','char')
File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 223, in write
numpyio.fwrite(self,count,data,mtype,bs)
numpyio.error: Does not support extended types.
Original ticket http://projects.scipy.org/scipy/ticket/13 on 2006-03-02 by @nilswagner01, assigned to unknown.
from scipy import *
from scipy.sparse import *
n = 20
A = csc_matrix((n,n))
x = rand(n)
y = rand(n-1)+1j*rand(n-1)
r = rand(n)
for i in range(len(x)):
    A[i,i] = x[i]
for i in range(len(y)):
    A[i,i+1] = y[i]
    A[i+1,i] = conjugate(y[i])
xx = sparse.lu_factor(A).solve(r)
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 16384 (LWP 15138)]
0x00002aaab2d8f8a5 in dpivotL (jcol=0, u=1, usepr=0x7fffffffd3e4, perm_r=0x6cf550, iperm_r=,
iperm_c=, pivrow=0x7fffffffd3e8, Glu=0x2aaab34bb360, stat=0xffffffffab7dfa48) at dpivotL.c:120
120 perm_r[*pivrow] = jcol;
Current language: auto; currently c
(gdb) bt
#0 0x00002aaab2d8f8a5 in dpivotL (jcol=0, u=1, usepr=0x7fffffffd3e4, perm_r=0x6cf550, iperm_r=,
iperm_c=<value optimized out>, pivrow=0x7fffffffd3e8, Glu=0x2aaab34bb360, stat=0xffffffffab7dfa48) at dpivotL.c:120
#1 0x00002aaab2d85c49 in dgstrf (options=, A=0x7fffffffd570, drop_tol=,
relax=<value optimized out>, panel_size=10, etree=<value optimized out>, work=<value optimized out>, lwork=7140688,
perm_c=0xa17510, perm_r=0x6cf550, L=0x2aaab55658c0, U=0x2aaab55658e0, stat=0x7fffffffd510, info=0x7fffffffd508)
at dgstrf.c:310
#2 0x00002aaab2d6886d in newSciPyLUObject (A=0x7fffffffd670, diag_pivot_thresh=1, drop_tol=0, relax=1, panel_size=10,
permc_spec=2, intype=<value optimized out>) at _superluobject.c:372
#3 0x00002aaab2d67bac in Py_dgstrf (self=, args=,
keywds=<value optimized out>) at _dsuperlumodule.c:187
#4 0x00002aaaaac1c17e in PyCFunction_Call () from /usr/lib64/libpython2.4.so.1.0
#5 0x00002aaaaac4f661 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#6 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0
#7 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0
#8 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0
#9 0x00002aaaaac51992 in PyEval_EvalCode () from /usr/lib64/libpython2.4.so.1.0
#10 0x00002aaaaac6aeb9 in run_node () from /usr/lib64/libpython2.4.so.1.0
#11 0x00002aaaaac6c3c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.4.so.1.0
#12 0x00002aaaaac6c921 in PyRun_AnyFileExFlags () from /usr/lib64/libpython2.4.so.1.0
#13 0x00002aaaaac72023 in Py_Main () from /usr/lib64/libpython2.4.so.1.0
#14 0x00000000004008d9 in main (argc=, argv=) at ccpython.cc:10
Original ticket http://projects.scipy.org/scipy/ticket/17 on 2006-03-06 by unknown, assigned to unknown.
Traceback (most recent call last):
File "sparse_test.py", line 12, in ?
x,info = linalg.gmres(A_csr,b)
File "/usr/lib64/python2.4/site-packages/scipy/linalg/iterative.py", line 578, in gmres
b = sb.asarray(b,typ)
File "/usr/lib64/python2.4/site-packages/numpy/core/numeric.py", line 74, in asarray
return array(a, dtype, copy=False, fortran=fortran, ndmin=ndmin)
TypeError: array cannot be safely cast to required type
Original ticket http://projects.scipy.org/scipy/ticket/22 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/37 on 2006-03-14 by trac user Norbert@..., assigned to unknown.
attached a few patches against revision 1696
they depend on ticket gh-563 against numpy; see
http://projects.scipy.org/scipy/numpy/ticket/36
for some additional information.
scipy-1-eigh-eigvalsh.diff
added two new functions to scipy.linalg. All the interfaces were
there already...
scipy-2-dual-norm.diff
added 'norm' to the list of dual functions
scipy-3-norm-change-default.diff
changed the norm in the same way as "numpy.linalg.norm" was changed. See comments there.
Original ticket http://projects.scipy.org/scipy/ticket/6 on 2006-02-16 by unknown, assigned to unknown.
from scipy.sparse import *
from scipy import *
A = rand(4,4)
sparseAcsr = csr_matrix(A)
sparseA = sparseAcsr/0.5 # doesn't work
sparseA = sparseAcsr*2 # works fine
TypeError: unsupported operand type(s) for /: 'instance' and 'float'
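A workaround, and the likely shape of a fix, is to express scalar division as multiplication by the reciprocal, which the report shows already works. A minimal sketch with a hypothetical matrix-like class (not scipy's actual sparse implementation):

```python
class MatrixLike:
    """Hypothetical container illustrating how scalar division can be
    delegated to the already-working scalar multiplication."""
    def __init__(self, data):
        self.data = data

    def __mul__(self, scalar):
        return MatrixLike([x * scalar for x in self.data])

    def __truediv__(self, scalar):  # __div__ on Python 2
        return self * (1.0 / scalar)

m = MatrixLike([1.0, 2.0, 3.0])
print((m / 0.5).data)  # each entry doubled
```

Defining `__div__`/`__truediv__` this way would make `sparseAcsr/0.5` behave exactly like `sparseAcsr*2`.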
Original ticket http://projects.scipy.org/scipy/ticket/34 on 2006-03-08 by unknown, assigned to unknown.
I made some "double (complex)" vectors in matlab, but io.loadmat only supports floats. The function fails silently and gives you only the real parts of the vectors. I think loadmat should at least give a warning in this situation. However, we could add a block like this around line 830 in mio.py:
if data.typecode() == 'd' and data2.typecode() == 'd':
    new = zeros(data.shape,'D')
    new.real = data
    new.imag = data2
    data = new
    del(new)
    del(data2)
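The suggested block can be exercised standalone. A self-contained sketch with modern numpy (the names `data`/`data2` follow the ticket; dtype codes 'd' and 'D' mean float64 and complex128):

```python
import numpy as np

# real and imaginary parts as loadmat would read them separately
data = np.array([1.0, 2.0], dtype='d')
data2 = np.array([0.5, -0.5], dtype='d')

# combine into a complex ('D') array, as the proposed mio.py block does
new = np.zeros(data.shape, dtype='D')
new.real = data
new.imag = data2
data = new
```

After the merge, `data` holds [1+0.5j, 2-0.5j] rather than silently dropping the imaginary part.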
Original ticket http://projects.scipy.org/scipy/ticket/4 on 2006-02-04 by trac user ivazquez@..., assigned to unknown.
There is a flaw in the numpy installation that omits the headers in build/src/numpy/core. This prevents scipy from building.
Original ticket http://projects.scipy.org/scipy/ticket/45 on 2006-04-02 by @rkern, assigned to @rkern.
The function _chk2_asarray
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/36 on 2006-03-11 by @timleslie, assigned to unknown.
Patch attached to fix:
Original ticket http://projects.scipy.org/scipy/ticket/41 on 2006-03-22 by @nilswagner01, assigned to unknown.
Original ticket http://projects.scipy.org/scipy/ticket/33 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/3 on 2006-02-01 by @lebedov, assigned to unknown.
When I attempted to decompose an elementary matrix using the svd() function in scipy.linalg.decomp, I encountered the following error:
/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py in svd(a, compute_uv, overwrite_a)
285 m,n = a1.shape
286 overwrite_a = overwrite_a or (_datanotshared(a1,a))
--> 287 gesdd, = get_lapack_funcs(('gesdd',),(a1,))
288 if gesdd.module_name[:7] == 'flapack':
289 lwork = calc_lwork.gesdd(gesdd.prefix,m,n,compute_uv)[1]
/usr/lib/python2.4/site-packages/scipy/linalg/lapack.py in get_lapack_funcs(names, arrays, debug, force_clapack)
44 ordering = []
45 for i in range(len(arrays)):
---> 46 t = arrays[i].dtypechar
47 if not _type_conv.has_key(t): t = 'd'
48 ordering.append((t,i))
AttributeError: 'numpy.ndarray' object has no attribute 'dtypechar'
I am using scipy 0.4.4 with numpy 0.9.4 built to use the stock netlib BLAS/LAPACK 3.0.
Original ticket http://projects.scipy.org/scipy/ticket/20 on 2006-03-07 by @timleslie, assigned to unknown.
Attached is a patch to fix some outstanding issues with anneal.py
Original ticket http://projects.scipy.org/scipy/ticket/40 on 2006-03-17 by @nilswagner01, assigned to unknown.
Original ticket http://projects.scipy.org/scipy/ticket/12 on 2006-02-22 by @alberts, assigned to @dmcooke.
When building scipy revision 1618 with Visual Studio .NET 2003, the build fails on Lib\special\cephes\const.c
with the error:
C:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\bin\cl.exe /c /nologo /Ox
/MD /W3 /GX /DNDEBUG /TcLib\special\cephes\const.c /Fobuild\temp.win32-2.4\Lib\
special\cephes\const.obj
const.c
Lib\special\cephes\const.c(92) : error C2099: initializer is not a constant
Lib\special\cephes\const.c(97) : error C2099: initializer is not a constant
error: Command ""C:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\bin\cl.e
xe" /c /nologo /Ox /MD /W3 /GX /DNDEBUG /TcLib\special\cephes\const.c /Fobuild\t
emp.win32-2.4\Lib\special\cephes\const.obj" failed with exit status 2
This issue was discussed in comp.lang.c as [http://groups.google.com/group/comp.lang.c/browse_thread/thread/23303c76b160ca90/e6ec7db9b3287aaa%23e6ec7db9b3287aaa initializer is not a constant error ??]
[http://cvs.gnome.org/viewcvs/libxml2/trionan.c?rev=1.14&view=markup libxml2's trionan.c] has some ideas on how to do this stuff on many platforms.
Original ticket http://projects.scipy.org/scipy/ticket/43 on 2006-03-24 by trac user novin01 AT gmail.com, assigned to unknown.
Changing to .dtype.char fixes the problem, but I thought I'd post anyway.
This was on a P4/WinXP box with
python-2.4.2
numpy-0.9.4.win32-py2.4
scipy-0.4.4.win32-py2.4-P4SSE2
The following code replicates the error:
In [1]: from scipy import *
In [2]: x = stats.norm.rvs(size=1000,loc=0,scale=1)
exceptions.AttributeError Traceback (most recent call last)
\W2K4670\D$\GAS\Data\Processes
C:\Python24\lib\site-packages\scipy\stats\distributions.py in fit(self, data, *args, **kwds)
726 # location and scale are at the end
727 x0 = args + (loc0, scale0)
--> 728 return optimize.fmin(self.nnlf,x0,args=(ravel(data),),disp=0)
729
730 def est_loc_scale(self, data, *args):
C:\Python24\lib\site-packages\scipy\optimize\optimize.py in fmin(func, x0, args, xtol, ftol, maxiter, maxfun, full_output, disp, retall)
189 sim = Num.zeros((N+1,),x0.dtypechar)
190 else:
--> 191 sim = Num.zeros((N+1,N),x0.dtypechar)
192 fsim = Num.zeros((N+1,),'d')
193 sim[0] = x0
AttributeError: 'numpy.ndarray' object has no attribute 'dtypechar'
In [4]:
Original ticket http://projects.scipy.org/scipy/ticket/29 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/8 on 2006-02-16 by trac user baptiste13@..., assigned to @cournape.
Hello,
The linregress function in scipy.stats returns a value called stderr-of-the-estimate, which is equal to:
sqrt(1-r^2) * samplestd(y) = sqrt( (1-r^2) * sum((y - mean(y))^2) / N )
with r the correlation coefficient and N the number of data points.
This is different from the usual estimator for the standard error, which is
sqrt( (1-r^2) * sum((y - mean(y))^2) / df ) = sqrt( (1-r^2) * sum((y - mean(y))^2) / (N-2) )
where df stands for the number of degrees of freedom.
From the docstring, one could assume that stderr-of-the-estimate is the usual estimator for stderr, as this result is relevant in most cases where linear regression is used. On the contrary, I don't see applications where the stderr-of-the-estimate result as is would be relevant.
If stderr-of-the-estimate is meant to be the usual estimator for stderr, the calculation should be corrected. If not, the docstring should describe more specifically what it stands for.
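A small numeric check of the difference between the two estimators, assuming the identities above (samplestd uses N in its denominator): the two values differ by exactly a factor of sqrt(N/(N-2)).

```python
import numpy as np

x = np.arange(10.0)
y = 2.0 * x + np.sin(x)            # deterministic, not perfectly linear
N = len(y)
r = np.corrcoef(x, y)[0, 1]
ss = np.sum((y - y.mean()) ** 2)

stderr_reported = np.sqrt((1 - r**2) * ss / N)       # what linregress returns
stderr_usual = np.sqrt((1 - r**2) * ss / (N - 2))    # df = N - 2 estimator

assert np.isclose(stderr_usual, stderr_reported * np.sqrt(N / (N - 2)))
```

The biased version always underestimates the usual stderr, and the gap grows as N shrinks.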
Cheers,
BC
Original ticket http://projects.scipy.org/scipy/ticket/16 on 2006-03-06 by @timleslie, assigned to unknown.
The code in optimize.anneal is broken in a number of ways. I have a patch which I believe fixes these problems and which I would like to submit for review/inclusion.
Original ticket http://projects.scipy.org/scipy/ticket/49 on 2006-04-02 by @rkern, assigned to @rkern.
The function nanmedian
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/38 on 2006-03-15 by unknown, assigned to unknown.
Original ticket http://projects.scipy.org/scipy/ticket/24 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix:
Original ticket http://projects.scipy.org/scipy/ticket/39 on 2006-03-16 by unknown, assigned to unknown.
Original ticket http://projects.scipy.org/scipy/ticket/44 on 2006-04-02 by @rkern, assigned to @rkern.
The function _chk_asarray
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/46 on 2006-04-02 by @rkern, assigned to @rkern.
The function nanmean
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/14 on 2006-03-02 by trac user Nick Fotopoulos, assigned to unknown.
I have upgraded the loadmat functionality in scipy/io/mio.py to now handle some new features of Matlab 7. Specifically, Matlab 7 introduced compression (via zlib) and strings are now unicode by default. Please incorporate the changes. I don't see a way to attach files to tickets, so you can find the files at http://www.ligo.mit.edu/~nvf/. They are mio.py and numpyIO.py.
Compression reading is complete, but my unicode support is a mess. It does, however, work for all characters that can be represented in ASCII, since their codes are the same for UTF8 and ASCII. Good enough for me.
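The ASCII-compatibility claim is easy to verify: UTF-8 encodes every code point below 128 as the identical single byte, so ASCII text round-trips unchanged.

```python
# every 7-bit character has the same byte value in ASCII and UTF-8
for ch in map(chr, range(128)):
    assert ch.encode("utf-8") == ch.encode("ascii")
```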
Also, I fixed string handling. In the previous version, v6 matfile strings came back with the correct length, but all ones ('111111', etc).
Everything is against scipy from SVN as of 2006-02-27. I tested with Matlab 7-generated v7 and v6 matfiles, both with simple tests and a small set of real data (700kB memory, complex-valued).
The biggest change I had to make was using a pure-Python implementation of numpyio (called numpyIO and written by Benyang Tang; I secured his permission to submit it to scipy). Supporting compression elegantly depends on being able to pass around StringIO streams instead of real files, for which numpyio doesn't work.
Original ticket http://projects.scipy.org/scipy/ticket/32 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/50 on 2006-04-02 by @rkern, assigned to trac user ariel.rokem.
The function gmean
in file source:trunk/Lib/stats/stats.py needs review.
Please look over the StatisticsReview guidelines and add your comments below.
Original ticket http://projects.scipy.org/scipy/ticket/35 on 2006-03-09 by trac user russel@..., assigned to unknown.
Run the following program and watch its memory usage. Removing the jacobian from the call to leastsq
keeps memory usage constant. It also leaks at the same rate if the jacobian is transposed and col_deriv=0.
import numpy as N
from scipy.optimize import leastsq

x=N.arange(0,1,0.0001)
y=(x-0.3)**2

def residual(p, x, y):
    return (p[0]*x+p[1])*x+p[2]-y

def jacobian(p, x, y):
    jac=N.zeros(shape=(len(p),len(x)),dtype=float)
    jac[0]=x*x
    jac[1]=x
    jac[2]=1
    return jac

def fit_j(x, y):
    p=N.ones(3, dtype=float)
    fit, mesg = leastsq(residual, p, args=(x, y), Dfun=jacobian, col_deriv=1)
    return p

def test_leak(nfit=10000):
    for k in xrange(nfit):
        p=fit_j(x,y)

if __name__=="__main__":
    test_leak()
Original ticket http://projects.scipy.org/scipy/ticket/42 on 2006-03-22 by @nilswagner01, assigned to @wnbell.
Given two sparse matrices A, B define C = spkron(A,B) where C is
the Kronecker product of A and B. The result should be sparse too.
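For reference, the dense Kronecker product that a sparse spkron should match is defined by C[i*p+k, j*q+l] = A[i,j]*B[k,l] for B of shape (p, q), and numpy already provides it:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2, dtype=int)

# dense reference result; spkron(A, B) should agree but stay sparse
C = np.kron(A, B)
```

Since each block of C is a scalar multiple of B, the nonzero count of the result is nnz(A)*nnz(B), which is why a sparse output is the natural representation.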
Original ticket http://projects.scipy.org/scipy/ticket/25 on 2006-03-07 by @timleslie, assigned to unknown.
patch attached to fix
Original ticket http://projects.scipy.org/scipy/ticket/18 on 2006-03-06 by unknown, assigned to @cournape.
linalg.norm(A) where A is a sparse matrix is not supported yet.
A workaround is linalg.norm(A.todense()).
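For the Frobenius norm, the densifying workaround is avoidable in principle, since only the stored entries contribute. A sketch over a hypothetical COO-style {(row, col): value} mapping, not scipy's actual storage:

```python
import math

# hypothetical sparse storage: nonzero entries keyed by (row, col)
entries = {(0, 0): 3.0, (2, 1): 4.0}

# Frobenius norm: sqrt of the sum of squared nonzeros
fro = math.sqrt(sum(v * v for v in entries.values()))
assert fro == 5.0  # sqrt(3**2 + 4**2)
```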