
dbi's Introduction

DBI - The Perl Database Interface.


See COPYRIGHT section in DBI.pm for usage and distribution rights.

See GETTING HELP section in DBI.pm for how to get help.

QUICK START GUIDE:

The DBI requires one or more 'driver' modules to talk to databases,
but they are not needed to build or install the DBI.

Check that a DBD::* module exists for the database you wish to use.

Install the DBI using an installer like cpanm, cpanplus, or cpan,
or whatever is recommended by the perl distribution you're using.
Make sure the DBI tests run successfully before installing.

Use the 'perldoc DBI' command to read the DBI documentation.

Install the DBD::* driver module you wish to use in the same way.
It is often important to read the driver README file carefully.
Make sure the driver tests run successfully before installing.
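Once the DBI and a driver are installed, a minimal script looks something like this (a sketch only, using DBD::SQLite and a throwaway database file as the example driver and database):

    use strict;
    use warnings;
    use DBI;

    # Connect, asking DBI to raise exceptions on errors.
    my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
                           { RaiseError => 1, AutoCommit => 1 });

    # Prepare, execute and fetch in the usual DBI style.
    my $sth = $dbh->prepare("SELECT name FROM sqlite_master WHERE type = ?");
    $sth->execute("table");
    while (my ($name) = $sth->fetchrow_array) {
        print "table: $name\n";
    }

    $dbh->disconnect;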

The DBI.pm file contains the DBI specification and other documentation. PLEASE READ IT. It'll save you asking questions on the mailing list only to be told that they are already answered in the documentation.

For more information and to keep informed about progress you can join the mailing list via mailto:[email protected]. You can post to the mailing list without subscribing. (Your first post may be delayed a day or so while it's being moderated.)

To help you make the best use of the dbi-users mailing list, and any other lists or forums you may use, I strongly recommend that you read "How To Ask Questions The Smart Way" by Eric Raymond:

http://www.catb.org/~esr/faqs/smart-questions.html

Much useful information and online archives of the mailing lists can be found at http://dbi.perl.org/

See also http://metacpan.org/

IF YOU HAVE PROBLEMS:

First, read the notes in the INSTALL file.

If you can't fix it yourself, please post details to [email protected]. Please include:

  1. A complete log of a complete build, e.g.:

    perl Makefile.PL   (do a make realclean first)
    make
    make test
    make test TEST_VERBOSE=1   (if any of the t/* tests fail)

  2. The output of perl -V

  3. If you get a core dump, try to include a stack trace from it.

    Try installing the Devel::CoreStack module to get a stack trace. If the stack trace mentions XS_DynaLoader_dl_load_file then rerun make test after setting the environment variable PERL_DL_DEBUG to 2.

  4. If your installation succeeds, but your script does not behave as you expect, the problem is possibly in your script.

    Before sending to dbi-users, try writing a small, easy to use test case to reproduce your problem. Also, use the DBI->trace method to trace your database calls.
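For example, a trace can be enabled globally or per handle (a sketch; the trace level and file name are arbitrary):

    use DBI;

    DBI->trace(2, "dbi_trace.log");   # log every DBI method call to a file

    my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
                           { RaiseError => 1 });
    $dbh->trace(3);                   # or raise the level on one handle only
    $dbh->do("SELECT 1");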

Please don't post problems to Usenet, Google Groups, or perl5-porters. This software is supported via the dbi-users mailing list. For more information and to keep informed about progress you can join the mailing list via mailto:[email protected] (please note that I do not run or manage the mailing list).

It is important to check that you are using the latest version before posting. If you're not then we're very likely to simply say "upgrade to the latest". You would do yourself a favour by upgrading beforehand.
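For example, you can check which DBI version you have installed with a one-liner like:

    perl -MDBI -e 'print "DBI $DBI::VERSION\n"'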

Please remember that we're all busy. Try to help yourself first, then try to help us help you by following these guidelines carefully.

Regards, Tim Bunce and the perl5-dbi team.

dbi's People

Contributors

1nickt, bor, demerphq, dveeden, ilmari, jeff-zucker, jluis, jraspass, manwar, mauke, mbeijen, mjegh, nigelhorne, oalders, openstrike, pali, perlover, petdance, pgollucci, ppisar, rehsack, ribasushi, robschaber, ronaldxs, rspier, scop, theory, timbunce, toddr, tux


dbi's Issues

do() could accept prepared sth as first argument

Hi!

The DBI select*-methods accept a prepared statement handle object $sth as the statement argument.

do() does not. If there is no particular reason for this, perhaps it could.

In the documentation both cases are referred to as a statement, so it is hard to make the distinction anyway.
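For illustration, this is the asymmetry being described (a sketch only; the table and columns are made up):

use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "", { RaiseError => 1 });
$dbh->do("CREATE TABLE mytable (id INTEGER, name TEXT)");

# select*-methods accept a prepared handle in place of an SQL string:
my $count = $dbh->selectrow_arrayref($dbh->prepare("SELECT count(*) FROM mytable"));

# ...but do() only accepts an SQL string, so a reusable insert handle
# must be driven through execute() directly:
my $sth = $dbh->prepare("INSERT INTO mytable (id, name) VALUES (?, ?)");
$sth->execute(1, "foo");
# rather than the requested form: $dbh->do($sth, undef, 1, "foo");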

Wbr,

Gunnar

Segfault from XS_DBI_dispatch

This might be the same thing as #51, but the backtrace looks a little different now that I see both with line numbers!

#0  0x00000000004ec6e6 in Perl_mg_get (sv=0xad21190) at mg.c:185
185         newmg = cur = head = mg = SvMAGIC(sv);
(gdb) bt
#0  0x00000000004ec6e6 in Perl_mg_get (sv=0xad21190) at mg.c:185
#1  0x000000000052b50d in Perl_sv_2pv_flags (sv=0xad21190, lp=lp@entry=0x0, flags=flags@entry=2) at sv.c:2943
#2  0x00007f3074dcb6b4 in err_hash (imp_xxh=0x296e60b0, imp_xxh=0x296e60b0) at DBI.xs:891
#3  0x00007f3074de10d7 in XS_DBI_dispatch (cv=0x8c6cd08) at DBI.xs:3565
#4  0x000000000051286d in Perl_pp_entersub () at pp_hot.c:4228
#5  0x000000000044a666 in Perl_call_sv (sv=sv@entry=0x8c6cd08, flags=<optimized out>, flags@entry=45) at perl.c:2856
#6  0x000000000051968d in S_curse (sv=sv@entry=0x2a1723e0, check_refcnt=check_refcnt@entry=true) at sv.c:6972
#7  0x0000000000519edc in Perl_sv_clear (orig_sv=orig_sv@entry=0x2a1723e0) at sv.c:6576
#8  0x000000000051ae5e in Perl_sv_free2 (sv=0x2a1723e0, rc=<optimized out>) at sv.c:7073
#9  0x00000000005172d2 in S_visit (f=0x51aff0 <do_clean_objs>, flags=2048, mask=2048) at sv.c:476
#10 0x000000000051b40c in Perl_sv_clean_objs () at sv.c:627
#11 0x000000000044e499 in perl_destruct (my_perl=<optimized out>) at perl.c:864
#12 0x00000000004224c4 in main (argc=3, argv=0x7ffe980cfa08, env=0x7ffe980cfa28) at perlmain.c:134

This is with Perl 5.26.2 and DBI 1.641.

Unfortunately the program I'm running is very large and I can't share it. I don't have a small test case. I can't explain what's happening. It looks like it happens during program shutdown though.

Thank you!

DBI::err can be undefined when a connection fails with a HandleError

Steps to reproduce:

  • update to DBI 1.635 or 1.636
  • run this code
#!/usr/bin/env perl

use strict;
use warnings;

use DBI;

use v5.014;

sub handle_error { 
    my ( $str, $dbh, $retval ) = @_;

    warn "one error...";  
    $dbh->{'RaiseError'};# and 'boom';

    return;
}

my $f = q{/tmp/I/do/not/exist!!!};

eval {
    DBI->connect( "dbi:SQLite:dbname=$f", "", "", { HandleError => \&handle_error } ) or die;
    1
} or do {
    say "catch...", $@;
    say 'DBI::str: ', $DBI::err;
}

Before updating to 1.636 (or using df9b142^)

> perl524 test.pl
one error... at test.pl line 13.
DBI connect('dbname=/tmp/I/do/not/exist!!!','',...) failed: unable to open database file at test.pl line 22.
catch...Died at test.pl line 22.

DBI::str: 14

With 1.636 (or simply with df9b142)

> perl524 test.pl
one error... at test.pl line 13.
DBI connect('dbname=/tmp/I/do/not/exist!!!','',...) failed: unable to open database file at test.pl line 22.
catch...Died at test.pl line 22.

Use of uninitialized value $DBI::err in say at test.pl line 26.
DBI::str:

This is a regression introduced by commit df9b142, which adds an extra copy_statement_to_parent call.

Commenting the 'copy_statement_to_parent' call in set_err_sv fixes it.

The original copy_statement_to_parent call also checked that this was happening during an execute, using an extra 'ima_flags & IMA_COPY_UP_STMT' check. I wonder if you should do the same thing there to avoid this problem.

Note that if we do not access $dbh->{'RaiseError'} in the error handler then the bug is not triggered
(commenting out the $dbh->{'RaiseError'}; # and 'boom'; line does not expose the bug).

missing abs_path failure handling in DBD::File

A misconfiguration of f_dir and f_file has led me to the error

Execution ERROR: -d : No such file or directory at .../DBI/DBD/SqlEngine.pm line 1538

It is the croak call that raises the exception, but the exception message is useless because $searchdir is undefined. $searchdir is undefined because the preceding abs_path call failed. This OS failure ($! is different from zero) was not handled properly; instead it was shadowed by the following -d file test.
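A minimal sketch of the kind of check being suggested (not the actual SqlEngine.pm code; the variable names are assumed):

use Carp qw(croak);
use Cwd  qw(abs_path);

my $searchdir = abs_path($dir);
defined $searchdir
    or croak "Cannot resolve directory '$dir': $!";     # report the abs_path failure itself
-d $searchdir
    or croak "Directory '$dir' ($searchdir) does not exist";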

Tests fail using built-in Perl 5.18 on macOS Catalina

Catalina does not allow relative paths to shared libraries, making t/10examp.t, t/12quote.t, and t/13taint.t fail. A possible fix is using rel2abs:

#!perl -w

use File::Spec::Functions 'rel2abs';
use lib rel2abs('blib/arch'), rel2abs('blib/lib');   # needed since -T ignores PERL5LIB

FetchHashKeyName emits invalid keys for utf8 fieldnames

Using DBI v1.636, DBD::Pg 3.5.3, with Perl 5.24.0 on Linux

$dbi->{FetchHashKeyName} = 'NAME_lc' (or NAME_uc) produces different keys from the 'lc' and 'uc' functions for field names containing non-ASCII characters.

For example, selecting a column as "ÄMNE-Abc" with FetchHashKeyName=NAME_lc returns the result key "\x{0}\x{0}mne-abc", while PERL_UNICODE=SA perl -e 'use utf8; print lc("ÄMNE-Abc");' returns the expected ämne-abc.

Test Case:

use strict;
use utf8;
use Test::More tests => 26;
use Data::Dumper;

use DBI;

my $dbi=DBI->connect(
        'dbi:Pg:dbname=test_db',
        'chris',
        '',
        {
        pg_enable_utf8  =>  1,
        }
);

my @expect=(
    [ 'NAME', "ABc", "ABc" ],
    [ 'NAME_uc', "ABc", "ABC" ],
    [ 'NAME_lc', "ABc", "abc" ],

    [ 'NAME', "てすと-ABc", "てすと-ABc" ],
    [ 'NAME_uc', "てすと-Abc", "てすと-ABC" ],
    [ 'NAME_uc', "てすと-Abc", "てすと-ABC" ],
    [ 'NAME_lc', "てすと-Abc", "てすと-abc" ],
    [ 'NAME_lc', "てすと-Abc", "てすと-abc" ],

    [ 'NAME', "ÄMNE-Abc", "ÄMNE-Abc" ],
    [ 'NAME_uc', "ÄMNE-Abc", "ÄMNE-ABC" ],
    [ 'NAME_uc', "ämne-Abc", "ÄMNE-ABC" ],
    [ 'NAME_lc', "ämne-Abc", "ämne-abc" ],
    [ 'NAME_lc', "ÄMNE-Abc", "ämne-abc" ],
);

foreach my $e (@expect) {
    my($case,$as,$fld)=@$e;

    my $val;
    if($case eq 'NAME_uc') {
        $val = uc($as);
    } elsif($case eq 'NAME_lc') {
        $val = lc($as);
    } else {
        $val = $as;
    }

    is($val,$fld,"case-converted $as to $case");

    $dbi->{FetchHashKeyName} = $case;

    my $row=$dbi->selectrow_hashref(qq{ select now() as "$as" });

    ok(exists $row->{$fld},"hashref $case") or diag(Dumper $row);
}
Summary of my perl5 (revision 5 version 24 subversion 0) configuration:

  Platform:
    osname=linux, osvers=2.6.32-642.6.2.el6.x86_64, archname=x86_64-linux
    uname='linux yonkyo.local 2.6.32-642.6.2.el6.x86_64 #1 smp wed oct 26 06:52:09 utc 2016 x86_64 x86_64 x86_64 gnulinux '
    config_args='-de -Dprefix=/opt/perlbrew/perls/perl-5.24.0 -Aeval:scriptdir=/opt/perlbrew/perls/perl-5.24.0/bin'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=undef, usemultiplicity=undef
    use64bitint=define, use64bitall=define, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
  Compiler:
    cc='cc', ccflags ='-fwrapv -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2',
    optimize='-O2',
    cppflags='-fwrapv -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
    ccversion='', gccversion='4.4.7 20120313 (Red Hat 4.4.7-17)', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678, doublekind=3
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16, longdblkind=3
    ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='cc', ldflags =' -fstack-protector -L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib /lib/../lib64 /usr/lib/../lib64 /lib /lib64 /usr/lib64 /usr/local/lib64
    libs=-lpthread -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lc
    perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
    libc=libc-2.12.so, so=so, useshrplib=false, libperl=libperl.a
    gnulibc_version='2.12'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
    cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector'

Characteristics of this binary (from libperl):
  Compile-time options: HAS_TIMES PERLIO_LAYERS PERL_COPY_ON_WRITE
                        PERL_DONT_CREATE_GVSV
                        PERL_HASH_FUNC_ONE_AT_A_TIME_HARD PERL_MALLOC_WRAP
                        PERL_PRESERVE_IVUV USE_64_BIT_ALL USE_64_BIT_INT
                        USE_LARGE_FILES USE_LOCALE USE_LOCALE_COLLATE
                        USE_LOCALE_CTYPE USE_LOCALE_NUMERIC USE_LOCALE_TIME
                        USE_PERLIO USE_PERL_ATOF
  Locally applied patches:
        Devel::PatchPerl 1.42
  Built under linux
  Compiled at Nov  3 2016 12:25:49
  %ENV:
    PERLBREW_BASHRC_VERSION="0.75"
    PERLBREW_HOME="/home/chris/.perlbrew"
    PERLBREW_MANPATH="/opt/perlbrew/perls/perl-5.24.0/man"
    PERLBREW_PATH="/opt/perlbrew/bin:/opt/perlbrew/perls/perl-5.24.0/bin"
    PERLBREW_PERL="perl-5.24.0"
    PERLBREW_ROOT="/opt/perlbrew"
    PERLBREW_VERSION="0.75"
  @INC:
    /opt/perlbrew/perls/perl-5.24.0/lib/site_perl/5.24.0/x86_64-linux
    /opt/perlbrew/perls/perl-5.24.0/lib/site_perl/5.24.0
    /opt/perlbrew/perls/perl-5.24.0/lib/5.24.0/x86_64-linux
    /opt/perlbrew/perls/perl-5.24.0/lib/5.24.0
    .

Error building DBDs on Monterey macOS

Opening here, as this involves failure of the Driver_xst.h compilation step, and the same error occurs on both DBD::Pg and DBD::SQLite:

DBD::Pg:
bucardo/dbdpg#110

DBD::SQLite:
DBD-SQLite/DBD-SQLite#106

Summary of error on both when running perl Makefile.PL:

"/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- SQLite.bs blib/arch/auto/DBD/SQLite/SQLite.bs 644
make: *** No rule to make target `/System/Library/Perl/Extras/5.30/darwin-thread-multi-2level/auto/DBI/Driver_xst.h', needed by `SQLite.xsi'.  Stop.

`begin_work` retval is sometimes incorrect if connection is `RaiseError => 0`

Consider the following order of events:

  • the DBI::connect is invoked with RaiseError => 0 and returns $handle
  • the connection to the backend gets interrupted for some reason
  • calling $handle->begin_work currently always returns true

This is a weird but not uncommon case. For example, in a high-volume environment, one may have database connections being handled by a proxy or load-balancer. Software performing the latter functions sometimes crashes; hardware performing the latter functions sometimes locks up or reboots.

If begin_work notices an error performing its duties, it should return false. For some backends there's no error to notice, because begin_work is just setting the values of AutoCommit and BeginWork in memory. For others, modifying the value of AutoCommit may result in communication with the back-end database, during which errors happen.

Here's a minimal script to demonstrate the problem on u*x talking to a local MySQL db:

#!/usr/bin/perl                                                                                

use 5.14.0;
use warnings;

use DBI;
use POSIX;

my $dbh = DBI->connect('dbi:mysql:host=127.0.0.1;database=test', 'root', '',
                       { AutoCommit => 1, mysql_auto_reconnect => 0,
                         RaiseError => 0, PrintError => 1 })
  or die $DBI::errstr;
POSIX::close(3);                # or whatever fd it is on your system                          
$dbh->begin_work and say 'begin_work ok' or die 'begin_work: ' . ($dbh->errstr // 'undef');
$dbh->err and say '$dbh->err is ', $dbh->err;
$dbh->errstr and say '$dbh->errstr is ', $dbh->errstr;
$dbh->do('do 0') and say 'do ok'         or die         'do: ' . ($dbh->errstr // 'undef');
$dbh->commit     and say 'commit ok'     or die     'commit: ' . ($dbh->errstr // 'undef');
$dbh->disconnect and say 'disconnect ok' or die 'disconnect: ' . ($dbh->errstr // 'undef');

To get this to run on your own machine, you might have to strace the process and figure out what fd is getting opened if it's not 3 (as it has been on the various machines i've run this). Closing the fd simulates the connection failure I was describing above.

When run, you'll see:

DBD::mysql::db begin_work failed: Turning off AutoCommit failed at ./db-toy2-sb.pl line 14.
begin_work ok
$dbh->err is 21
$dbh->errstr is Turning off AutoCommit failed
DBD::mysql::db do failed: MySQL server has gone away at ./db-toy2-sb.pl line 17.
do: MySQL server has gone away at ./db-toy2-sb.pl line 17.

Note that we see "begin_work ok" in the output. This could only be displayed if $dbh->begin_work returned a true value. Because it does not return false, we continue and end up invoking the do method, which does correctly detect that the connection has gone away and returns a false value.

I've worked around this issue in my own code by checking both the retval from begin_work and the err method of the handle. It would be nice not to have to do this.
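The workaround mentioned above looks roughly like this (a sketch, not the reporter's actual code):

# Treat begin_work as failed if it either returns false or leaves
# an error code recorded on the handle.
my $ok = $dbh->begin_work;
if (!$ok or $dbh->err) {
    die 'begin_work: ' . ($dbh->errstr // 'unknown error');
}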

DBI refuses to allow me to provide my password as a blessed object with string overloading

This means that it is difficult to secure the password against things that dump the stack.

What I want to do is have my secrets like passwords stored in a blessed object which enforces a whitelist of modules which are allowed to inspect the contents.

Unfortunately this is impossible with the existing check to ensure that the password is not a reference.

IMO either the check should be changed to allow blessed references, or removed outright. I don't see why DBI should check for references; if it gets a ref then the connect will fail, and maybe additional diagnostics that the password was a ref would be useful, but preventing me from using standard Perl overloading to represent my password goes against basic Perl expectations.

FWIW, this criticism applies generally and not just to the password argument. DBI should not be naively insisting that the arguments are pure strings. It should work fine if we pass in overloaded blessed objects.

FWIW2: I tried hard to use a tie for this, but tie assignment does not "pass along" the tiedness, it just passes along the value, so what I want to do is not possible that way.

I pushed a PR for this BTW: "remove block that prevents a reference $password argument #40"
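For context, a minimal sketch of the kind of secret wrapper being described (a hypothetical class, not part of DBI, and without the module whitelist):

package My::Secret;
use overload '""' => sub { ${ $_[0] } }, fallback => 1;

sub new {
    my ($class, $value) = @_;
    return bless \$value, $class;   # stringifies to the secret only on demand
}

package main;
use DBI;

my $password = My::Secret->new('s3cret');
# This is what the issue asks to work; DBI currently rejects any
# reference (even a blessed, string-overloaded one) as the password.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', $password);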

DBD::mysql and mariadb building

# make all
...
dbdimp.c: In function 'mysql_dr_connect':
/usr/include/mysql/mariadb_version-i386.h:14:31: error: token ""mariadb-10.3"" is not valid in preprocessor expressions
 #define MARIADB_BASE_VERSION  "mariadb-10.3"
                               ^~~~~~~~~~~~~~
dbdimp.c:1910:56: note: in expansion of macro 'MARIADB_BASE_VERSION'
 #if (MYSQL_VERSION_ID >= 50723) && (MYSQL_VERSION_ID < MARIADB_BASE_VERSION)
                                                        ^~~~~~~~~~~~~~~~~~~~
/usr/include/mysql/mariadb_version-i386.h:14:31: error: token ""mariadb-10.3"" is not valid in preprocessor expressions
 #define MARIADB_BASE_VERSION  "mariadb-10.3"
                               ^~~~~~~~~~~~~~
dbdimp.c:1917:56: note: in expansion of macro 'MARIADB_BASE_VERSION'
 #if (MYSQL_VERSION_ID >= 50600) && (MYSQL_VERSION_ID < MARIADB_BASE_VERSION)

I think you should use MARIADB_VERSION_ID instead of MARIADB_BASE_VERSION there.

Sigsegv from XS_DBI_dispatch during destruction

#0  0x00007fe4614dc9e6 in XS_DBI_dispatch ()
   from /home/greg/perl5/perlbrew/perls/perl-5.26.0/lib/site_perl/5.26.0/x86_64-linux/auto/DBI/DBI.so
#1  0x0000562e6b08be78 in Perl_pp_entersub ()
#2  0x0000562e6b009566 in Perl_call_sv ()
#3  0x0000562e6b090766 in S_curse ()
#4  0x0000562e6b09114d in Perl_sv_clear ()
#5  0x0000562e6b09144e in Perl_sv_free2 ()
#6  0x0000562e6b08f5f7 in S_visit ()
#7  0x0000562e6b0917ae in Perl_sv_clean_objs ()
#8  0x0000562e6b00bdb6 in perl_destruct ()
#9  0x0000562e6afeb4be in main ()

Unfortunately, I don't have a small test case for this. This is with 1.636.

Tested on Perl 5.26.0 and 5.24.0.

Driver 'do' versus dbi_profile

dbi_profile does not capture the current statement when drivers do not funnel do through a separate statement object. For instance, this is broken:

DBI_TRACE=4 \
  DBI_PROFILE='!Statement:!MethodName:!Caller/DBI::Profile' \
  perl -MDBI -e '
    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:");
    local $dbh->{sqlite_allow_multiple_statements} = 0;
    $dbh->do("CREATE TABLE IF NOT EXISTS X(Y INT);");
    $dbh->do("DELETE FROM X");'

This profiles the second do with the empty string as statement. For the first one the behavior is as expected because DBD::SQLite routes that through prepare et al. With DBD::Pg instead, you get an empty string for both statements (I suspect for similar reasons, but haven't checked).

If drivers are not supposed to skip prepare/execute on specialised do implementations, could this be noted in the DBI::DBD documentation?

Memory corruption via selectrow_array + SQLite UDF

Some time ago I debugged weird behavior when using DBD::SQLite with sqlite_create_function. Valgrind had this:

==31881== Invalid write of size 8
==31881==    at 0xCAAFD78: XS_DBD__SQLite__db_selectrow_arrayref (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBD/SQLite/SQLite.so)
==31881==    by 0xC487486: XS_DBI_dispatch (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBI/DBI.so)
==31881==    by 0x1E73E7: Perl_pp_entersub (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDB42: Perl_runops_standard (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1659B3: perl_run (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1401D1: main (in ...perl-5.28.1/bin/perl)
==31881==  Address 0xd5e1ac8 is 376 bytes inside a block of size 11,704 free'd
==31881==    at 0x4C33D2F: realloc (vg_replace_malloc.c:785)
==31881==    by 0x1C16B9: Perl_safesysrealloc (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DC2B9: Perl_av_extend_guts (in ...perl-5.28.1/bin/perl)
==31881==    by 0x2152FE: Perl_stack_grow (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDDCD: S_pushav (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1E1902: Perl_pp_rv2av (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDB42: Perl_runops_standard (in ...perl-5.28.1/bin/perl)
==31881==    by 0x15E9F3: Perl_call_sv (in ...perl-5.28.1/bin/perl)
==31881==    by 0xCABC905: sqlite_db_func_dispatcher (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBD/SQLite/SQLite.so)
==31881==    by 0xCB4C4F7: sqlite3VdbeExec (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBD/SQLite/SQLite.so)
==31881==    by 0xCB53D6E: sqlite3_step (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBD/SQLite/SQLite.so)
==31881==    by 0xCAC003E: sqlite_st_execute (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBD/SQLite/SQLite.so)
==31881==  Block was alloc'd at
==31881==    at 0x4C33D2F: realloc (vg_replace_malloc.c:785)
==31881==    by 0x1C16B9: Perl_safesysrealloc (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DC2B9: Perl_av_extend_guts (in ...perl-5.28.1/bin/perl)
==31881==    by 0x2152FE: Perl_stack_grow (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDDCD: S_pushav (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1E1902: Perl_pp_rv2av (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDB42: Perl_runops_standard (in ...perl-5.28.1/bin/perl)
==31881==    by 0x15E831: Perl_call_sv (in ...perl-5.28.1/bin/perl)
==31881==    by 0xC487DCD: XS_DBI_dispatch (in ...perl-5.28.1/lib/site_perl/5.28.1/x86_64-linux/auto/DBI/DBI.so)
==31881==    by 0x1E73E7: Perl_pp_entersub (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1DDB42: Perl_runops_standard (in ...perl-5.28.1/bin/perl)
==31881==    by 0x1659B3: perl_run (in ...perl-5.28.1/bin/perl)

I had short-lived success avoiding this problem by appending

join(' ', (' ') x 100000)

to the return value of my sqlite_create_function function...

I now have a debug build of Perl 5.32.1 with all modules up to date and get

perl: DBI.xs:4124: XS_DBI_dispatch: Assertion `tmpXSoff >= 0' failed.

That means in XS_DBI_dispatch outitems is negative, which could
happen through the use_xsbypass logic or by call_sv returning a
negative value.

I have tried running this with PERL_DBI_XSBYPASS=0 but would still
get the assertion failure.

My DBI call is basically selectrow_array($sql, {}, $string).

With DBI_TRACE=15 the log ends like this (notice the -241095 items):

    >> selectrow_array DISPATCH (DBI::db=HASH(0x55c111269908) rc1/1 @4 g3 ima2001 pid#25674) at ...
    -> selectrow_array for DBD::SQLite::db (DBI::db=HASH(0x55c111269908)~0x55c111309888 '
    SELECT _...
  ' HASH(0x55c1111ac0d0) '...')
    >> prepare     DISPATCH (DBI::db=HASH(0x55c111309888) rc1/1 @3 g2 imaa201 pid#25674) at ...
1   -> prepare for DBD::SQLite::db (DBI::db=HASH(0x55c111309888)~INNER '
    SELECT ...
  ' HASH(0x55c1111ac0d0))
    New 'DBI::st' (for DBD::SQLite::st, parent=DBI::db=HASH(0x55c111309888), id=undef)
    dbih_setup_handle(DBI::st=HASH(0x55c111092770)=>DBI::st=HASH(0x55c11122c758), DBD::SQLite::st, 55c11130b210, Null!)
    dbih_make_com(DBI::db=HASH(0x55c111309888), 55c11119ac40, DBD::SQLite::st, 232, 0) thr#0
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), Err, DBI::db=HASH(0x55c111309888)) SCALAR(0x55c110df76a8) (already defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), State, DBI::db=HASH(0x55c111309888)) SCALAR(0x55c11133a820) (already defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), Errstr, DBI::db=HASH(0x55c111309888)) SCALAR(0x55c110df7f30) (already defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), TraceLevel, DBI::db=HASH(0x55c111309888)) 0 (already defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), FetchHashKeyName, DBI::db=HASH(0x55c111309888)) 'NAME' (already defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), HandleSetErr, DBI::db=HASH(0x55c111309888)) undef (not defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), HandleError, DBI::db=HASH(0x55c111309888)) undef (not defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), ReadOnly, DBI::db=HASH(0x55c111309888)) undef (not defined)
    dbih_setup_attrib(DBI::st=HASH(0x55c11122c758), Profile, DBI::db=HASH(0x55c111309888)) undef (not defined)
sqlite trace: prepare statement: 
    SELECT ...
1   <- prepare= ( DBI::st=HASH(0x55c111092770) ) [1 items] at ...
sqlite trace: bind into 0x55c1111771a8: 1 => ... (0) pos 0 at dbdimp.c line 1522
sqlite trace: executing 
    SELECT ...
   at dbdimp.c line 980
sqlite trace: bind 0 type 5 as ... at dbdimp.c line 997
sqlite trace: Execute returned 1 cols at dbdimp.c line 1093
sqlite trace: exec ok - 0 rows, 1 cols at dbdimp.c line 1127
sqlite trace: numFields == 1, nrow == 0 at dbdimp.c line 1167
    dbih_setup_fbav alloc for 1 fields
    dbih_setup_fbav now 1 fields
sqlite trace: fetch column 0 as text at dbdimp.c line 1217
    <- selectrow_array= ( ) [-241095 items] at ...
perl: DBI.xs:4124: XS_DBI_dispatch: Assertion `tmpXSoff >= 0' failed.

Calling prepare/execute/fetchrow_array instead works fine.

DBI.pm

Hi DBI Developer,

DBI.pm refers to DBI::Pool, but there is no such documentation.

Please adapt the DBI documentation.

Regards,
Carsten

shebang "#!perl -w"

Hi,

Like dbixs_rev.pl, several files in this repo have the shebang "#!perl -w", but it seems this shebang does not work on every system.
On a Linux system:
running "perl ./dbixs_rev.pl" works well;
running "./dbixs_rev.pl" reports the error "-sh: ./dbixs_rev.pl: cannot execute: required file not found."

But if I change the shebang to "#!/usr/bin/perl -w", running "./dbixs_rev.pl" works well.

Should we update the shebang of these files?

[feature request] user/pass in DSN

See perl5-dbi/DBD-MariaDB#147

DBI->connect ($dsn, $user, $pass, $options) clearly splits the DSN and the credentials.

I'd like to see several things "changed"

== Unification in parameter-names in the DSN ==

I have seen dbi:Xx:database=... (mysql, MariaDB), dbname=... (Pg, SQLite), db=... (Firebird), sid=... (Oracle), dbi:Xx://host:port/Foo, and f_dir=/foo (CSV) in the DSN and also plain dbi:Xx: where the database is specified in $options or environment variables (think $TWO_TASK for Oracle or $USCHEMA in Unify). Likewise for host=... vs server=.... The worst imho is an embedded DSN inside the DSN as DBD::ODBC documents: dbi:ODBC:DSN=mydsn.

The most obvious common name would be database.

== Allow User/Pass in DSN ==

Having connections stored as key-values somewhere for testing, or extending a dynamic DSN based on testing configurations would simplify testing a lot. I'd like top-level support for

 my $dsn = "dbi:Xx:dbname=foo";
 DBI->connect ($dsn, $user, $pass, ...);

to be equivalent with

 my $dsn = "dbi:Xx:dbname=foo;user=tester;pass=secrit";
 DBI->connect ($dsn, undef, undef, ...);

where the connect will extract any of user=..., dbuser=..., pass=..., dbpass=..., or password=... from the DSN and put them in the spots for $user and $pass, which would default to $ENV{DBI_USER} and $ENV{DBI_PASS}.

Perl length() returns the wrong value on array elements returned by fetchrow_arrayref()

Hi,

In the below code:

#!/opt/bin/perl

use DBI;

my $db = DBI->connect('dbi:Pg:dbname=main','schemaname','password');
my $st = $db->prepare('SELECT label FROM time_code');
$st->execute;
while (my $rec = $st->fetchrow_arrayref) {
	print 'len: ' . length($rec->[0]) . ' val: ' . $rec->[0] . "\n";
}
$db->disconnect;

...the length() call returns the correct value (11) for the first row fetched, and then for all subsequent rows, it returns "11" regardless of the actual length of the string in $rec->[0]. Here's the output:

len: 11 val: Office Work
len: 11 val: Warehouse Work
len: 11 val: Holiday
len: 11 val: PTO/Vacation
len: 11 val: Sick Time

Is this a DBD::Pg issue or a DBI issue? I am running DBI 1.643 (latest) and DBD::Pg 3.12 (latest) on Perl 5.26.1 with Postgres 10.13.

Thanks in advance!

DBI seg fault in err_hash.

I'm seeing the following seg fault from some code after a prove t/*.t in a project I'm working on. It appears to happen after everything's finished running, right before it exits (perhaps during destruction?).

segfault at c ip 00007fd84d6812ad sp 00007fff24597de0 error 4 in DBI.so[7fd84d674000+21000]

Here's what I got out of gdb:

Program terminated with signal 11, Segmentation fault.
#0  err_hash (imp_xxh=0xaf5fff38, imp_xxh=0xaf5fff38) at DBI.xs:864
864    if (SvOK(err_sv)) {
(gdb) bt
#0  err_hash (imp_xxh=0xaf5fff38, imp_xxh=0xaf5fff38) at DBI.xs:864
#1  0x00007fd84d6880ed in XS_DBI_dispatch (cv=0x3df4ac8) at DBI.xs:3551
#2  0x00007fd859801879 in Perl_pp_entersub () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#3  0x00007fd859758142 in Perl_call_sv () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#4  0x00007fd859805fc1 in S_curse () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#5  0x00007fd85980699d in Perl_sv_clear () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#6  0x00007fd859806ca0 in Perl_sv_free2 () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#7  0x00007fd859807681 in Perl_sv_clean_objs () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#8  0x00007fd85975ad5b in perl_destruct () from /usr/local/gsgperl/tax-cbs/5.32.0/lib/perl5/x86_64-linux/CORE/libperl.so
#9  0x0000000000400d0a in main (argc=19, argv=0x7fff24598658, env=0x7fff245986f8) at perlmain.c:138
(gdb) p err_sv
$1 = (SV *) 0x0

Unfortunately, the code is from a massive closed-source project, so I can't provide a small use-case to trigger it, but if there's anything else I can reasonably do to help, let me know.

(fetch|select)all_arrayref_hashref

There seem to be only two variants of the slurp-all type of operation, either:

  1. fetch an arrayref with each row as an arrayref, or
  2. fetch a hashref, keyed on a column you specify, with each row as a hashref

Neither of these seems ideal: the first option doesn't preserve column names, and the other doesn't preserve order. The obvious structure for this type of slurp seems to be an arrayref with each row as a hashref. Unless it's been proposed before and rejected, I'd like to request it now. I have no clue what to name it though.
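For reference, selectall_arrayref can already return this shape when given a Slice attribute, which may cover part of this request (the table and column names below are made up):

# Returns an arrayref of hashrefs, preserving row order
# (column order within each row is lost, since rows are hashes).
my $rows = $dbh->selectall_arrayref(
    'SELECT id, name FROM mytable',
    { Slice => {} },
);
for my $row (@$rows) {
    print "$row->{id}: $row->{name}\n";
}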

"Official" support in DBI for enum columns?

Is there any appetite for having DBI support "enum" columns? It's a pity that MySQL, PostgreSQL, SQLite, and probably others that support them each have to either not report them through column_info or do their own thing, e.g. mysql_values.

It seems to me that optional "if supported" columns in the return from column_info would do: IS_ENUM (a boolean), and ENUM_VALUES (an array-ref).

I'll be happy to make PRs on DBI and relevant DBDs.

Unable to build in parallel if older version is installed

Hello,
DBI compilation fails if a parallel build is used and an older version of DBI is installed on the system:

$ perl Makefile.PL && make -j4
...
DBI object version 1.640 does not match bootstrap parameter 1.641 at /usr/lib/x86_64-linux-gnu/perl/5.26/DynaLoader.pm line 204.

This issue has been reported on Debian and solved by disabling parallel build for this package.

Regards,
Xavier

Please make DBI::Const::GetInfoType and %GetInfoType hash public

The %GetInfoType hash from the DBI::Const::GetInfoType package provides a mapping from string identifiers to DBI get_info numeric values. But the documentation says this package is "new" and "nothing what is written here is guaranteed", which means that it is not ready for public API usage or for stable applications.

The package DBI::Const::GetInfo::ODBC is even more strict and its documentation contains: "The API for this module is private and subject to change.". So it is not for public usage either.

I would really like to use human readable identifiers (e.g. SQL_DBMS_VER) instead of magic number 18.

So can you please make current API of the %GetInfoType hash public and reflect it into DBI::Const::GetInfoType documentation?

E.g. code:

my $database_name = $dbh->get_info($GetInfoType{SQL_DBMS_NAME});

is more readable than:

my $database_name = $dbh->get_info(17);
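For completeness, the usual way to get at the hash today looks like this (a sketch; DBI::Const::GetInfoType exports %GetInfoType by default, and SQL_DBMS_NAME maps to info type 17):

use DBI;
use DBI::Const::GetInfoType;   # exports %GetInfoType

my $dbh       = DBI->connect('dbi:SQLite:dbname=:memory:');
my $dbms_name = $dbh->get_info($GetInfoType{SQL_DBMS_NAME});   # same as get_info(17)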

Reordering @_ in Callbacks

I am subclassing DBI and am using @bind_values quite heavily, unlike \%attr. For convenience and aesthetics, I prefer to supply @bind_values right after $statement, with \%attr at the very end.

In the code this is achieved like so:

my $dbh = MyDBI->connect(..., {
    Callbacks => {
        selectall_arrayref => sub {
            splice @_, 2, 0, ((ref($_[-1]) eq 'HASH') ? pop : undef);
            return;
        }
    }
});

$dbh->selectall_arrayref('select foo from bar where baz = ?' => 'value');

(note the use of fat comma to connect $statement with @bind_values, which brings me immense aesthetic pleasure)

For some inexplicable reason, though, the above results in the following call:

DBD::mysql::db::prepare(MyDBI::db=HASH(...), "select foo from bar where baz = ?", "value");

and then DBD::mysql::st::_prepare gets $attribs = "value" and tries to dereference that as a hash - disastrously.

This does not look like a copy vs reference issue. If my callback simply alters some argument (e.g. the statement string), it goes on to DBD::mysql::db::prepare in this altered state as expected, but perhaps DBI's implementation of selectall_arrayref splits @_ into separate references, thereby allowing for modification but not reordering?

Method execute_array: mandatory ArrayTupleStatus?

I already filed this issue on the DBD::Oracle page; however, they suggested posting it here as well:

I have to insert many records into an Oracle database, and performance is quite crucial for this application.
My Perl script is like this:

use strict;
use DBI;
use DBD::Oracle qw(:ora_types);

my @tuple_status;
my $ora = DBI->connect("dbi:Oracle:<database>", "<user>", "<password>", { PrintError => 1, ShowErrorStatement => 1 } );
$ora->{AutoCommit} = 0;

my $sql = "INSERT /*+ APPEND_VALUES */ INTO T_EXECUTE_ARRAY (PORT) VALUES (?)";
my $sth = $ora->prepare($sql);

my @ports = (1,2,3,4);
$sth->bind_param_array(1, \@{ports} );
$sth->execute_array( { ArrayTupleStatus => \@tuple_status } ) ;

$ora->commit;
$ora->disconnect;

However, I get an error:

DBD::Oracle::st execute_array failed: ORA-38910: BATCH ERROR mode is not supported for this operation (DBD ERROR: OCIStmtExecute) [for Statement "INSERT /*+ APPEND_VALUES */ INTO T_EXECUTE_ARRAY (PORT) VALUES (?)"] at C:\Developing\Source\IMP\Mediation-Mobile\execute_array.pl line 16.

It works without the APPEND_VALUES hint, however then I cannot gain the performance benefits of direct-path inserts.

I am not interested in any errors from ArrayTupleStatus, so I tried without:
$sth->execute_array() ;

But then the error is:
DBI execute_array: invalid number of arguments: got handle + 0, expected handle + between 1 and -1 Usage: $h->execute_array(\%attribs [, @args]) at C:\Developing\Source\IMP\Mediation-Mobile\execute_array.pl line 16.

While investigating I found a statement like "ArrayTupleStatus becomes optional in DBI version 1.38". Apparently it is still (or again) mandatory.

Version of DBI: 1.627
Version of DBD::Oracle: 1.62
Oracle Version: 12.1.0.2.0

Any idea how to solve this issue? Or should I call Oracle support because of the ORA-38910 error?

dbh->commit return value lost when run in perl debugger

When in the debugger, with DBD::mysql and DBD::MariaDB connections, calls to $dbh->commit return undef even when the commit succeeds.

This behavior started occurring with our update to DBI v1.642. Prior to that, when using v1.633, we experienced the bug described in https://rt.cpan.org/Public/Bug/Display.html?id=102791 instead whenever the code was run in the debugger.

Based on my reading of the XS code, this XS doc, and my experiments, I think that while 71802a1 got rid of the memory access error, the return value still isn't always being maintained properly. Localizing the stack pointer (using the dSP macro) before manipulating the call stack and calling STORE to reset AutoCommit appears to resolve the issue.

dbi/DBI.xs, lines 3925 to 3931 in ea91ab4:

/* and may mess up the error handling below for the commit/rollback */
PUSHMARK(SP);
XPUSHs(h);
XPUSHs(sv_2mortal(newSVpv("AutoCommit",0)));
XPUSHs(&PL_sv_yes);
PUTBACK;
call_method("STORE", G_VOID);

However, I'm not well versed in XS, and I'm not confident that this is a proper/complete fix.

local $dbh->{HandleError} = ... is not localized

A localized installation of a HandleError handler remains active even after exiting the localizing block.
See test below (perl 5.32.0, DBI v1.643).

use strict;
use warnings;
use Test::More;
use DBI;

run_tests(SQLITE => DBI->connect('dbi:SQLite:dbname=:memory:', '', '', {RaiseError => 1}));
run_tests(CSV    => DBI->connect('dbi:CSV:', '', '', {f_ext => "foo.csv", RaiseError => 1}));


sub run_tests {
  my ($db_name, $dbh) = @_;

  eval {my $bug = $dbh->prepare("BAD SQL")};
  unlike $@, qr/FROM HandleError/, "$db_name: HandleError not installed yet";

  {
    local $dbh->{HandleError} = sub {die shift . " FROM HandleError"};
    eval {my $bug = $dbh->prepare("BAD SQL")};
    like $@, qr/FROM HandleError/, "$db_name: HandleError installed";
  }

  eval {my $bug = $dbh->prepare("BAD SQL")};
  unlike $@, qr/FROM HandleError/, "$db_name: HandleError should no longer be active";

}


done_testing;

Use of deprecated GIMME macro

dbi/Driver.xst, lines 231 to 246 in f6ba2bf:

/* --- fetchrow_arrayref --- */
row_av = dbd_st_fetch(sth, imp_sth);
if (!row_av) {
    if (GIMME == G_SCALAR)
        PUSHs(&PL_sv_undef);
}
else if (is_selectrow_array) {
    int i;
    int num_fields = AvFILL(row_av)+1;
    if (GIMME == G_SCALAR)
        num_fields = 1; /* return just first field */
    EXTEND(sp, num_fields);
    for(i=0; i < num_fields; ++i) {
        PUSHs(AvARRAY(row_av)[i]);
    }
}

From https://perldoc.perl.org/perl5380delta
"The underlying Perl_dowantarray function implementing the long-deprecated GIMME macro has been marked as deprecated, so that use of the macro emits a compile-time warning. GIMME has been documented as deprecated in favour of GIMME_V since Perl v5.6.0, but had not previously issued a warning."

perl5-dbi/DBD-mysql#395

API for asynchronous execution of statements

More databases support either parallel execution of statements, or at least their network client API provides asynchronous execution of a statement (via a non-blocking socket and poll).

Please define a stable API for DBI which would support asynchronously preparing, executing and fetching results, so a DBI application could execute statements in non-blocking mode independently of the DBD driver in use.

Currently DBD::Pg and DBD::mysql support asynchronous execution, but each driver does it in a different way, so unification at the DBI API level would be useful.
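For reference, DBD::Pg's current driver-specific interface looks roughly like this (a sketch based on the DBD::Pg documentation; a DBI-level API would presumably generalise this pattern):

use DBI;
use DBD::Pg qw(:async);

my $dbh = DBI->connect('dbi:Pg:dbname=test_db', '', '', { RaiseError => 1 });

# Start the statement without blocking...
$dbh->do('SELECT pg_sleep(5)', { pg_async => PG_ASYNC });

# ...do other work, polling until the backend is done...
while (!$dbh->pg_ready) {
    sleep 1;   # do something useful here instead
}

# ...then collect the result.
my $rows = $dbh->pg_result;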

Implement attribute to reorganize data

->selectall_hashref( $q, 'property_id', { Merge=>{} }, @values )
If there are two rows with the same property_id, I would expect:

{
  6699 => {
    inv_num     => [ 'DDFD', 'ASDF' ],
    field2      => [ 7, 333 ],
    property_id => 6699,
  },
}

Or, depending on requirements, {Merge => 'inv_num'}:

{
  6699 => [ 'DDFD', 'ASDF' ],
  3433 => [ 'XYZ',  'ZZZZ' ],
}
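For illustration, the first form above can currently be built by hand from selectall_arrayref (a sketch; $q, @values, and the property_id key are taken from the example):

# Merge rows sharing the same property_id into one hash whose
# non-key fields collect their values in arrayrefs.
my $rows = $dbh->selectall_arrayref($q, { Slice => {} }, @values);

my %merged;
for my $row (@$rows) {
    my $key = $row->{property_id};
    my $m   = $merged{$key} ||= { property_id => $key };
    for my $field (grep { $_ ne 'property_id' } keys %$row) {
        push @{ $m->{$field} }, $row->{$field};
    }
}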

DBI::connect calls leak passwords on failure

The DBI framework is fairly careful to avoid password leakage in debug traces. However, passwords will leak in stack traces any time there is a failure. Connection failures are quite common and, because of the Carp::croak call (with $Carp::Verbose on or Carp::Always), they will result in a trace like this:

Carp::croak('DBI connect(\'***DSN_STRING***\',\'***USERNAME***\',...) failed: Can\'t connect to MySQL server on \'hostname\' (110)') called at lib/perl5/x86_64-linux/DBI.pm line 692
DBI::__ANON__(undef, undef) called at lib/perl5/x86_64-linux/DBI.pm line 748
DBI::connect('DBI', '***DSN_STRING***', '***USERNAME***', '***UNREDACTED PASSWORD***', 'HASH(0xd1677e0)') called at ...

Any sort of password leakage like this requires clean up in various log sources and a password change. Various users have attempted creative solutions like #40, but it should really be fixed at the source. An easy solution is to redact it as soon as the password is collected:

sub connect {
    my $class = shift;
    my ($dsn, $user, $pass, $attr, $old_driver) = my @orig_args = @_;
    my $driver;
 
    # Hide password in stack trace
    $_[2] = '****';

The @orig_args and $pass are already captured, so those can be used freely, but the stack trace is changed to not print the password.

This is likely a problem with connect_cached calls, too.

dbih_getcom handle DBI::dr=... is not a DBI handle (has no magic) during global destruction

We have an old 32-bit/i686-linux environment that uses Perl 5.8.6. Yeah, I know that's ancient and we should upgrade to a more modern version of Perl. But the system is being retired in less than a year, so we're procrastinating on that task until then. So why bother upgrading DBI? Well, there's that CVE for DBI < 1.643....

Anyway, when I test DBI 1.643 on this 32-bit Perl 5.8.6 system, I get the following:

t/02dbidrv.t .................... 1/54 SV = RV(0x9d33674) at 0x9d1e8b8
  REFCNT = 1
  FLAGS = (ROK,READONLY)
  RV = 0x9c8d2c4
        (in cleanup) dbih_getcom handle DBI::dr=HASH(0x9c8d2c4) is not a DBI handle (has no magic) during global destruction.
...
t/zvg_02dbidrv.t ................ 1/54 SV = RV(0x8327c5c) at 0x832588c
  REFCNT = 1
  FLAGS = (ROK,READONLY)
  RV = 0x8282ab4
        (in cleanup) dbih_getcom handle DBI::dr=HASH(0x8282ab4) is not a DBI handle (has no magic) during global destruction.

The tests all seemingly pass, but I was wondering if the above is anything we should worry about?

prepare_cached() does not cache correctly with Slice => {} passed in as attr

It looks like DBI::_concat_hash_sorted is not handling nested data structures passed in as attributes correctly. If, for example, Slice => {} is passed, the reference to {} will be part of the cache key and thus the cached statement handle will never be hit again.

I assume _concat_hash_sorted / _join_hash_sorted would need to recurse.

Here is a simple test to reproduce the issue: prepare_cached_test.pl.txt

In our local scenario this behaviour causes significant memory leakage.
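Until this is fixed, a workaround along these lines may avoid the cache misses (a sketch, assuming the attribute hashref can be shared between calls):

# Reuse one attribute hashref (with one shared inner {}) so that the
# cache key built from the attributes is identical on every call.
my $ATTR = { Slice => {} };

my $sth = $dbh->prepare_cached($sql, $ATTR);   # hits the cache on repeated calls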
