pypyodbc's People

Contributors: alxwrd, arvinchhi4u, braian87b, brianbunker, clach04, dmfreemon, jiangwen365, junctionapps, keipes, moshekaplan, otiai10, pbatishchev, plannigan, seppedl, simon04, thunderex, waynew, zhkvia


pypyodbc's Issues

[support] accessing local .accdb file in Linux 32bit

Hi, I'm trying to read a .accdb file on my 32-bit Lubuntu 16.04 system (a Ubuntu derivative). The file comes from another system; there is no ODBC server running. The data is in a single file, and I want to access it.

I'm seeing simple commands shared on stackoverflow etc for creating these files.

But then the solutions jump straight into complicated server connections for reading databases on some server. I'm not finding anything about reading tables from a .accdb file.

Could someone help out here, please? The closest I've gotten is https://stackoverflow.com/questions/25820698/how-do-i-import-an-accdb-file-into-python-and-use-the-data, but it seems that approach works on Windows, not Linux.

I also followed the steps given here: https://code.google.com/archive/p/pypyodbc/wikis/Linux_ODBC_in_3_steps.wiki

Again, it ends with a server connection. I don't have anything on a server; I have a .accdb file. Here are my code and the error:

import pypyodbc
pypyodbc.lowercase = False
dbFile = 'MyFarmerOrders-02Apr.accdb'
conn = pypyodbc.connect(
    r"Driver={FreeTDS};" +
    r"Dbq=" + dbFile)
cur = conn.cursor()

Error:

Error                                     Traceback (most recent call last)
<ipython-input-6-8eec40246f17> in <module>()
      3 conn = pypyodbc.connect(
      4     r"Driver={FreeTDS};" +
----> 5     r"Dbq=" + dbFile)
      6 cur = conn.cursor()

~/.local/lib/python3.5/site-packages/pypyodbc.py in __init__(self, connectString, autocommit, ansi, timeout, unicode_results, readonly, **kargs)
   2452 
   2453 
-> 2454         self.connect(connectString, autocommit, ansi, timeout, unicode_results, readonly)
   2455 
   2456     def set_connection_timeout(self,connection_timeout):

~/.local/lib/python3.5/site-packages/pypyodbc.py in connect(self, connectString, autocommit, ansi, timeout, unicode_results, readonly)
   2505         else:
   2506             ret = odbc_func(self.dbc_h, 0, c_connectString, len(self.connectString), None, 0, None, SQL_DRIVER_NOPROMPT)
-> 2507         check_success(self, ret)
   2508 
   2509 

~/.local/lib/python3.5/site-packages/pypyodbc.py in check_success(ODBC_obj, ret)
   1007             ctrl_err(SQL_HANDLE_STMT, ODBC_obj.stmt_h, ret, ODBC_obj.ansi)
   1008         elif isinstance(ODBC_obj, Connection):
-> 1009             ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
   1010         else:
   1011             ctrl_err(SQL_HANDLE_ENV, ODBC_obj, ret, False)

~/.local/lib/python3.5/site-packages/pypyodbc.py in ctrl_err(ht, h, val_ret, ansi)
    983                 raise OperationalError(state,err_text)
    984             elif state[:2] in (raw_s('IM'),raw_s('HY')):
--> 985                 raise Error(state,err_text)
    986             else:
    987                 raise DatabaseError(state,err_text)
Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified')
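For what it's worth, FreeTDS speaks the SQL Server/Sybase wire protocol and cannot open an Access file, and the IM002 error means the driver manager found no driver registered under that name. A minimal sketch, assuming the MDBTools ODBC driver is installed and registered as "MDBTools" in odbcinst.ini (the package names and driver name are assumptions; adjust to your system):

```python
# Sketch: build a connection string for a file-based Access database on
# Linux via the MDBTools ODBC driver (Debian/Ubuntu packages "mdbtools"
# and "odbc-mdbtools" are assumptions; the driver name must match an
# entry in /etc/odbcinst.ini).

def access_conn_str(db_file, driver="MDBTools"):
    """Build an ODBC connection string for a local .accdb/.mdb file."""
    return "Driver={%s};DBQ=%s;" % (driver, db_file)

conn_str = access_conn_str("MyFarmerOrders-02Apr.accdb")
# conn = pypyodbc.connect(conn_str)  # requires the driver to be installed
```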

Improve error handling for unsupported object types

If you pass pypyodbc a value of some type it doesn't know how to convert, the end result is an error message which doesn't give you much information about what you've done wrong.

Take for example the following code:

import pypyodbc as pyodbc

class SomeClass(object): pass

obj = SomeClass()

cnxn = pyodbc.connect('....')
cursor = cnxn.cursor()
cursor.execute("INSERT INTO some_table (a) VALUES (?)", [obj])

When I run this code, it generates the following traceback:

Traceback (most recent call last):
  File "C:\Users\Luke\StackOverflow\pypyodbctest.py", line 9, in <module>
    cursor.execute("INSERT INTO some_table (a) VALUES (?)", [obj])
  File "C:\Python27\lib\site-packages\pypyodbc.py", line 1470, in execute
    self._BindParams(param_types)
  File "C:\Python27\lib\site-packages\pypyodbc.py", line 1275, in _BindParams
    if param_types[col_num][0] == 'u':
TypeError: 'type' object has no attribute '__getitem__'

From this message alone it can be difficult to figure out what has been done wrong.

In my sample code I can fully expect that there will be a problem, as I haven't specified anything about how a SomeClass instance should be converted into a SQL datatype. However, the same error message appears in this StackOverflow question where it was less clear to the questioner what their mistake was.

C-string termination

I've noticed that sometimes, after an SQLGetData call, alloc_buffer becomes dirty, in the sense that it is not properly terminated by a \x00 character. Because of that, when you decode a string (say, UTF-8-encoded) after truncating it at the first \x00, you get not only the desired string but also part of a previous string that was written to the same buffer, and decoding may fail.

Here's a quick fix for that: PR

UPD:
OSes I use: Linux / macOS
ODBC Driver: unixODBC
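The failure mode can be reproduced with plain ctypes, independent of pypyodbc: if a shorter value is written into a reused buffer without re-terminating it, scanning for \x00 picks up the stale tail. A sketch of the mechanism, not pypyodbc's actual buffer handling:

```python
import ctypes

buf = ctypes.create_string_buffer(32)
buf.value = b"hello world"       # first fetch: writes value plus NUL terminator

# Simulate a second fetch that writes 2 bytes without re-terminating:
ctypes.memmove(buf, b"hi", 2)

# Scanning for the NUL terminator now returns stale bytes from the old value
dirty = buf.raw.split(b"\x00")[0]    # b"hillo world"

# Slicing by the length SQLGetData reports avoids the problem
clean = buf.raw[:2]                  # b"hi"
```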

'Byte string too long' error, query too long

It seems my insert query is too long.

I have the following settings;

odbc.ini

[sqlserverdatasource]
Driver = FreeTDS
Description = ODBC connection via FreeTDS
Trace = No
Servername = sqlserver
Database = ReadOnly

odbcinst.ini


[SQL Server]
Description=TDS driver (Sybase/MS SQL)
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
CPTimeout=
CPReuse=
FileUsage=1

[ODBC Driver 13 for SQL Server]
Description=Microsoft ODBC Driver 13 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.1.so.9.1
UsageCount=1

freetds.conf

#   $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same 
# name is found in the installation directory.  
#
# For information about the layout of this file and its settings, 
# see the freetds.conf manpage "man freetds.conf".  

# Global settings are overridden by those in a database
# server specific section
[global]
        # TDS protocol version
;   tds version = 8.0

    # Whether to write a TDSDUMP file for diagnostic purposes
    # (setting this to /tmp is insecure on a multi-user system)
;   dump file = /tmp/freetds.log
;   debug flags = 0xffff

    # Command and connection timeouts
;   timeout = 10
;   connect timeout = 10

    # If you get out-of-memory errors, it may mean that your client
    # is trying to allocate a huge buffer for a TEXT field.  
    # Try setting 'text size' to a more reasonable limit 
    text size = 64512

# A typical Sybase server
[egServer50]
    host = symachine.domain.com
    port = 5000
    tds version = 8.0

# A typical Microsoft server
[sqlserver]
    host = xx.xx.xxx.xxx
    port = xxxxx
    tds version = 8.0


The query works fine in MS SQL Server, so I'm not sure what is going on.
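Until the underlying limit is understood, a common workaround is to send the data in smaller parameterized batches instead of one huge statement. A minimal sketch (the batch size of 500 and the table layout in the commented usage are arbitrary assumptions):

```python
def batched(seq, size):
    """Yield consecutive slices of `seq` with at most `size` items each."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# Hypothetical usage with a DB-API cursor:
# for chunk in batched(all_rows, 500):
#     cursor.executemany("INSERT INTO t (a, b) VALUES (?, ?)", chunk)
# connection.commit()
```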

Connect to Teradata

Can anyone give an example script for connecting to Teradata SQL? I can make it work with pyodbc but not with pypyodbc.

SQL_WVARCHAR & SQL_VARCHAR

In SQL_data_type_dict, SQL_WVARCHAR & SQL_VARCHAR have "Variable Length" set to False.
Shouldn't they be set to True?

args in connect

connect currently receives a connectString as its parameter.
What about extending it to also accept args such as 'database', 'user', 'host', 'port', and 'password'? The connection string would then be built internally.
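A sketch of how such keyword arguments could be folded into a connection string internally (the keyword-to-ODBC mapping shown is an assumption, not pypyodbc's API):

```python
def build_conn_str(**kwargs):
    """Assemble an ODBC connection string from friendly keyword args."""
    key_map = {"driver": "DRIVER", "host": "SERVER", "port": "PORT",
               "database": "DATABASE", "user": "UID", "password": "PWD"}
    parts = []
    for key in ("driver", "host", "port", "database", "user", "password"):
        if key in kwargs:
            value = kwargs[key]
            if key == "driver":
                value = "{%s}" % value   # driver names are brace-wrapped
            parts.append("%s=%s" % (key_map[key], value))
    return ";".join(parts)
```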

Problems with SQL for MS SQL 2012r2 and Linux Ubuntu 16.04

I found a problem with a SQL query; the code is below:

# -*- coding: utf-8 -*-
import pypyodbc
con_str = "DRIVER={FreeTDS}; SERVER=192.168.0.2; PORT=49223; DATABASE=dotProject2; UID=sa; PWD=111; TDS_Version=8.0; ClientCharset=UTF8; autocommit=False"
con = pypyodbc.connect(con_str)
cur = con.cursor()
# request 1
cur.execute('SELECT id FROM Tasks WHERE name=? AND TopicId=?', ('Изменение в руководство пользователя АСП', 193))
print('1) good select type B by sql param:', cur.fetchone())
# request 2
cur.execute('SELECT id FROM Tasks WHERE name=? AND TopicId=?', ('Изменение в руководство пользователя и хвост', 193))
# request 3
cur.execute('SELECT id FROM Tasks WHERE name=? AND TopicId=?', ('Изменение в руководство пользователя АСП', 193))
print('2) bad select type B by sql param:', cur.fetchone())
con.close()

I am sure that request 1 and request 3 are identical and must return the same result. But the program prints:

1) good select type B by sql param: (70077,)
2) bad select type B by sql param: None

I look in SQL profiler and see for request 1:

declare @p1 int
set @p1=NULL
exec sp_prepexec @p1 output,N'@P1 NVARCHAR(255),@P2 INT',N'SELECT id FROM Tasks    WHERE name=@P1 AND TopicId=@P2',N'Изменение в руководство пользователя АСП',193
select @p1

followed by:

exec sp_execute 1,N'Изменение в руководство пользователя и хвост',193
exec sp_execute 1,N'Изменение в руководство пользователя АСПЀост',193

You can see that request 3 is wrong: it should contain Изменение в руководство пользователя АСП but contains Изменение в руководство пользователя АСПЀост. I think this is the "tail" of the previous query. My system: pypyodbc 1.3.5, Ubuntu 16.04, Python 3.5 (now updated). I see the same problem on an Ubuntu 14.04 server with pypyodbc 1.3.3 and Python 3.3. When I rewrote the program to use pyodbc (4.0.17), there was no problem. In the profiler I see:

declare @p1 int
set @p1=NULL
exec sp_prepexec @p1 output,N'@P1 NVARCHAR(40),@P2 INT',N'SELECT id FROM Tasks WHERE name=@P1 AND TopicId=@P2',N'Изменение в руководство пользователя АСП',193
select @p1
--
declare @p1 int
set @p1=NULL
exec sp_prepexec @p1 output,N'@P1 NVARCHAR(44),@P2 INT',N'SELECT id FROM Tasks WHERE name=@P1 AND TopicId=@P2',N'Изменение в руководство пользователя и хвост',193
select @p1
--
declare @p1 int
set @p1=NULL
exec sp_prepexec @p1 output,N'@P1 NVARCHAR(40),@P2 INT',N'SELECT id FROM Tasks WHERE name=@P1 AND TopicId=@P2',N'Изменение в руководство пользователя АСП',193
select @p1

ValueError for microseconds when using TDS >7.3

I am currently using an Azure SQL Server database and found out that I had to modify the way pypyodbc is handling microseconds in dttm_cvt(x) at line 592 from x[20:].ljust(6,'0') to x[20:26].ljust(6,'0'). Otherwise I get: ValueError: microsecond must be in 0..999999.

This may be related to #26

Information about my setup:

  • SELECT @@VERSION; gives Microsoft SQL Azure (RTM) - 12.0.2000.8 Feb 8 2017 04:15:27 Copyright (C) 2016 Microsoft Corporation. All rights reserved.
  • SELECT CONVERT(BINARY(4), (SELECT TOP 1 protocol_version from sys.dm_exec_connections)) AS TDS_VERSION; gives 0x74000004 which afaik should be TDS version 7.4
  • my ODBC connection string starts with: DRIVER={SQL Server Native Client 11.0};

Cf. also these variable length data types: https://msdn.microsoft.com/en-us/library/dd358341.aspx
where they say:

 DATETIMNTYPE        =   %x6F  ; (see below)
 DATENTYPE           =   %x28  ; (introduced in TDS 7.3)
 TIMENTYPE           =   %x29  ; (introduced in TDS 7.3)
 DATETIME2NTYPE      =   %x2A  ; (introduced in TDS 7.3)
 DATETIMEOFFSETNTYPE =   %x2B  ; (introduced in TDS 7.3)
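The effect of the proposed slice fix can be checked directly on a DATETIME2-style string, which carries up to 7 fractional digits under TDS 7.3+ (the sample value below is made up):

```python
import datetime

x = "2017-02-08 04:15:27.1234567"   # 7 fractional digits (DATETIME2 style)

too_long = x[20:].ljust(6, "0")     # '1234567': datetime rejects 7 digits
fixed = x[20:26].ljust(6, "0")      # '123456': truncated to microseconds

dt = datetime.datetime(int(x[0:4]), int(x[5:7]), int(x[8:10]),
                       int(x[11:13]), int(x[14:16]), int(x[17:19]),
                       int(fixed))
```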

Broken division for Python3

In get_type the floor division operator (//) is used, at least in Python 2.x.

In Python 3, // performs integer truncation, which totally blew out an operation we were doing. From what I can tell, adding from __future__ import division at the top and replacing // with / should fix it.

UPDATE: the result might need to be wrapped in int(), since ctypes complained about a TypeError.
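The behaviour in question is easy to check directly (Python 3 shown):

```python
# // is floor division; / is true division in Python 3 (and in Python 2
# when "from __future__ import division" is in effect).
assert 7 // 2 == 3        # floor division truncates
assert 7 / 2 == 3.5       # true division keeps the fraction
assert int(7 / 2) == 3    # wrap in int() before handing the value to ctypes
```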

on linux, unicode parameters cannot be longer than 128

I noticed the following when posting a query from linux using parameters:

If the parameter string is longer than 127 characters, pypyodbc will throw an error at line 1572 when trying this:

param_buffer.value = c_char_buf

It complains that c_char_buf is too large.

I checked: c_char_buf is created by UCS_buf() when the parameter type is 'u' (unicode shorter than 255 characters):

c_char_buf = UCS_buf(param_val)
c_buf_len = len(c_char_buf)

UCS_buf(string) is defined to return the string encoded as utf-16-le on Linux, and to return its argument unchanged on Windows.

Thus on Linux, due to the 2-byte minimum character width of UTF-16, the length of the byte array is double the length of the underlying string.

Since param_buffer is sized at 255 bytes regardless of platform, this leads to the above error.

My fix was to double the buffer size for linux:

if param_types[col_num][0] == 'u':
    sql_c_type = SQL_C_WCHAR
    sql_type = SQL_WVARCHAR
    if sys.platform not in ('win32', 'cli'):
        buf_size = 255 * 2  # double the buffer size on Linux
    else:
        buf_size = 255
    ParameterBuffer = create_buffer_u(buf_size)

Hope this helps someone.
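The doubling is easy to verify: every BMP character costs two bytes in UTF-16-LE, so a 128-character string already overflows a 255-byte buffer:

```python
s = "x" * 128
encoded = s.encode("utf-16-le")
assert len(encoded) == 2 * len(s)   # 256 bytes: too big for a 255-byte buffer
```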

dttm_cvt(x) and dt_cvt(x) doesn't work correctly if year is represented with two numbers

I'm now trying to retrieve data from an MS Access 2000 DB, and the original code for dttm_cvt(x) as well as dt_cvt(x) fails because the year is represented as a 2-digit value, e.g. "11/19/96 00:00:00". Thus it reports:

ValueError: invalid literal for int() with base 10: '01/0'  # while parsing date
ValueError: invalid literal for int() with base 10: '00:00:0'  # while parsing time

So I updated the original source as follows:

def dttm_cvt(x):
    if py_v3:
        x = x.decode('ascii')
    if x == '': return None
    x = x.ljust(26,'0')
    db_date = x[0:x.find(' ')]
    db_time = x[x.find(' ')+1:]
    if len(db_date) < 10:
        # Datetime format looks like: 01/01/76 (month/day/year)
        first_sep = db_date.find('/')
        last_sep = db_date.find('/', first_sep+1)
        db_month = int(db_date[0:first_sep])
        db_day = int(db_date[first_sep+1:last_sep])
        db_year = int(db_date[last_sep+1:])
        if len(str(db_year)) < 4:
            if db_year < 20:
                db_year += 2000
            else:
                db_year += 1900

        first_sep = db_time.find(':')
        last_sep = db_time.find(':', first_sep+1)
        db_hour = int(db_time[:first_sep])
        db_minute = int(db_time[first_sep+1:last_sep])
        db_second = int(db_time[last_sep+1:])

        return datetime.datetime(db_year, db_month, db_day, db_hour, db_minute, db_second)
    else:
        return datetime.datetime(int(x[0:4]),int(x[5:7]),int(x[8:10]),int(x[10:13]),int(x[14:16]),int(x[17:19]),int(x[20:26]))

def dt_cvt(x):
    if py_v3:
        x = x.decode('ascii')
    if x == '': return None
    else:
        db_date = x[0:x.find(' ')]
        if len(db_date) < 10:
            first_sep = db_date.find('/')
            last_sep = db_date.find('/', first_sep+1)
            db_month = int(db_date[0:first_sep])
            db_day = int(db_date[first_sep+1:last_sep])
            db_year = int(db_date[last_sep+1:])
            if len(str(db_year)) < 4:
                # normalize 2-digit years, as in dttm_cvt above
                if db_year < 20:
                    db_year += 2000
                else:
                    db_year += 1900
            return datetime.date(db_year, db_month, db_day)
        else:
            return datetime.date(int(x[0:4]), int(x[5:7]), int(x[8:10]))

It works OK with datetime and date, but I'm not sure whether this is only an MS Access issue (or maybe even just my own issue), and whether there is a better way to implement this code.

Regards,
Meliowant

P.S. This is my first post here, so sorry if I didn't do everything as expected.

Sequentially inserting multiple rows in accessdb

Hi, I'm using pypyodbc to insert multiple rows in sequential order, but the rows end up in the table in a random order. Is this normal?

For example:

params = [ ('A', 1), ('B', 2), ('C', 3), ('D', 4) ]
executemany("insert into t(name, id) values (?, ?)", params)

But in the table, I see an order like this:

(screenshot: rows appear in the table out of insertion order)
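SQL tables have no inherent row order; rows come back in whatever order the engine chooses unless the query asks for one. A small self-contained demonstration using the stdlib sqlite3 module (the table and data mirror the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (name TEXT, id INTEGER)")

params = [('A', 1), ('B', 2), ('C', 3), ('D', 4)]
cur.executemany("INSERT INTO t (name, id) VALUES (?, ?)", params)

# An explicit ORDER BY is the only reliable way to get rows back in order
rows = cur.execute("SELECT name, id FROM t ORDER BY id").fetchall()
```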

Reading a tiny float (which appears to be expressed in scientific notation in the db) causes a value error

Reading data from an Access 2007 MDB, I found that when reading values like 11022302462516E-16, I get the following stack trace.

Error
Traceback (most recent call last):
  File "C:\Python27\lib\unittest\case.py", line 329, in run
    testMethod()
  File "D:\...\tests\test_msaccess.py", line 19, in test_odbc_connect
    for row in cursor:
  File "C:\Python27\lib\site-packages\pypyodbc.py", line 1920, in next
    row = self.fetchone()
  File "C:\Python27\lib\site-packages\pypyodbc.py", line 1871, in fetchone
    value_list.append(buf_cvt_func(alloc_buffer.value))
ValueError: could not convert string to float: E-16

Running the same query using pyodbc succeeds.

in Python 3, type(cursor.description[0][0]) is str on Windows but bytes on 64-bit CentOS 7 Linux

On Python 3.5.1:

Windows 7 x64: both pyodbc 3.0.10 and pypyodbc 1.3.3 obtain full column names with the below codes
CentOS 7 x86-64 (unixODBC 2.3.1-11.el7): pyodbc 3.0.10 gets full column names, but pypyodbc 1.3.3 gets only the first character --- see comments in code below

This occurs with both oracle 12.1 and mysql 5.3 ODBC drivers. For mysql, both ANSI and Unicode drivers.

import pyodbc
import pypyodbc

def get_column_names(conn, table_name):
    with conn.cursor() as cursor:
        cursor = cursor.execute("SELECT * FROM " + table_name)
        column_names = [desc[0] for desc in cursor.description]
    return column_names

oracle_connection_string = "DRIVER=/usr/lib/oracle/12.1/client64/lib/libsqora.so.12.1;..."
table_name = "..."

pyora = pyodbc.connect(oracle_connection_string, autocommit=True)
pypyora = pypyodbc.connect(oracle_connection_string, autocommit=True)

get_column_names(pyora, table_name)
#['BE_ID',
# 'SECURITY_ID',
# 'ID_TYPE',
# 'COUNTRY_OF_REG',
# 'EXCHANGE',
# 'START_DATE',
# 'END_DATE',
# 'INFERRED',
# 'UPDATE_DATE']

get_column_names(pypyora, table_name)
#[b'b', b's', b'i', b'c', b'e', b's', b'e', b'i', b'u']

pyora.close()
pypyora.close()

mysql_connection_string = "Driver=/usr/lib64/libmyodbc5w.so;..."
table_name = "..."

pymys = pyodbc.connect(mysql_connection_string, autocommit=True)
pypymys = pypyodbc.connect(mysql_connection_string, autocommit=True)

get_column_names(pymys, table_name)
#['risk_id',
# 'start_date',
# 'end_date',
# 'last_date',
# 'parent_id',
# 'security_name',
# 'ticker',
# 'cusip',
# 'isin',
# 'sedol',
# 'common_code',
# 'be_id']

get_column_names(pypymys, table_name)
#[b'r', b's', b'e', b'l', b'p', b's', b't', b'c', b'i', b's', b'c', b'b']

pymys.close()
pypymys.close()

ValueError: invalid literal for int() with base 10

This happens when trying to read ORACLE12 table columns via the cursor.columns() call. I was able to get my results by executing a custom query instead, but I do think this issue is important to address.

Table creating query (from oracle site):

CREATE TABLE departments
( department_id number(10) NOT NULL,
department_name varchar2(50) NOT NULL,
CONSTRAINT departments_pk PRIMARY KEY (department_id)
);

When querying:

columns = cursor.columns(table="DEPARTMENTS").fetchall()  # fetchall is the function with problems

I get:

/usr/local/lib/python2.7/dist-packages/pypyodbc.pyc in fetchone(self)
   1869                         value_list.append(buf_cvt_func(from_buffer_u(alloc_buffer)))
   1870                     else:
-> 1871                         value_list.append(buf_cvt_func(alloc_buffer.value))
   1872                 else:
   1873                     # There are previous fetched raw data to combine

ValueError: invalid literal for int() with base 10: ''

All text is Japanese

I'm getting wrong Japanese text (or maybe a wrong character set) when trying to connect to northwind.mdb. I created a Python script called test_access.py:

# -*- coding: utf-8 -*-
import pypyodbc
c = pypyodbc.connect('Driver={MDBTools};DBQ=northwind.mdb;Charset=utf-8')
x = c.cursor()
x.execute('select * from Shippers')
r = x.fetchone()
print(r)

Calling it with python test_access.py results in:

(1, '灓敥祤䔠灸敲獳', '㔨㌰\u2029㔵ⴵ㠹ㄳ')

However, I can see correct data using mdb-sql:

echo "select * from Shippers" | mdb-sql northwind.mdb

And the result is:

+-----------+--------------------------------------------------------------------------------+------------------------------------------------+
|ShipperID  |CompanyName                                                                     |Phone                                           
|
+-----------+--------------------------------------------------------------------------------+------------------------------------------------+
|1          |Speedy Express                                                                  |(503) 555-9831                                  |
|2          |United Package                                                                  |(503) 555-3199                                  |
|3          |Federal Shipping                                                                |(503) 555-9931                                  |
+-----------+--------------------------------------------------------------------------------+------------------------------------------------+
3 Rows retrieved

Connection String "Data Source" keyword is not supported

Currently only the "Server" keyword is supported; "Data Source" is not, although they mean the same thing:

[Fri Dec 16 00:15:17.144880 2016] [wsgi:error] [pid 11572:tid 1056] [client xx.xx.xx.xx:xx] DatabaseError: (u'08001', u'[08001] [Microsoft][ODBC SQL Server Driver]Neither DSN nor SERVER keyword supplied')
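A sketch of how keyword synonyms could be normalized before the connection string is interpreted (the alias table below is an assumption; pypyodbc's real keyword handling may differ):

```python
# Hypothetical alias table mapping connection-string keyword synonyms
# onto the canonical names the library already understands.
ALIASES = {"data source": "server", "address": "server", "addr": "server"}

def normalize_keywords(pairs):
    """Map connection-string keyword synonyms onto canonical names."""
    out = {}
    for key, value in pairs:
        key = key.strip().lower()
        out[ALIASES.get(key, key)] = value
    return out

normalized = normalize_keywords([("Data Source", "myhost"), ("Database", "mydb")])
```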

All-caps columns can't be found

When working with a MS SQL 2008 database through ODBC, all-caps column names in a row must be accessed in lowercase to yield the correct information.

for row in cur.execute("select * from TOTAL_DETAIL where IDENTIFIER = ?", (id, )):
    print(row["PRODUCT"]) # None
    print(row["product"]) # the correct product name

If this is a quirk of ODBC, a possible solution (if it doesn't cause other problems) would be to .lower() all keys before searching the Row-internal dictionary.
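The suggested .lower() approach could look like this small sketch (not pypyodbc's actual Row class):

```python
class CaseInsensitiveRow(dict):
    """Row mapping that lowercases string keys on both store and lookup."""

    @staticmethod
    def _norm(key):
        return key.lower() if isinstance(key, str) else key

    def __setitem__(self, key, value):
        dict.__setitem__(self, self._norm(key), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, self._norm(key))

row = CaseInsensitiveRow()
row["PRODUCT"] = "Widget"   # stored under "product"
```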

Varchar values longer than 1024 chars are truncated

This seems to be a problem for both SQL_WVARCHAR & SQL_VARCHAR types. Any values larger than 1024 chars are truncated.

Reproduction Steps

  1. Execute a SQL query which will return a value greater than 1024 chars in length from a variable length column (e.g. SQL nvarchar(max)).

Reproducible with the following

Python/Pypyodbc versions
Python v3.5.2
Pypyodbc v1.3.4.3

ODBC Drivers Tested (64-bit):

Name Version File Driver Date
ODBC Driver 13 for SQL Server 2017.140.800.90 MSODBCSQL13.DLL 11/07/2017
SQL Server Native Client 11.0 2011.110.6540.00 SQLNCLI11.DLL 24/06/2016
SQL Server Native Client 10.0 2009.100.1600.01 SQLNCLI10.DLL 03/04/2010

Workaround

It is possible to resolve this by following the workaround detailed in the issues below (originally raised on Google code repo).

#62
https://code.google.com/archive/p/pypyodbc/issues/44
https://github.com/bpla2112/pypyodbc/issues/44

Empty value for int()

Hi,
I came across this issue with version 1.3.1. It seems to come from an empty integer value in a row; pypyodbc does not like doing the conversion int('') for a NULL. Here is the log:

Traceback (most recent call last):
  File "./pypyodbc_tsmprop_test.py", line 11, in <module>
    for row in cur:
  File "/root/pyodbc_install/pypyodbc.py", line 1910, in next
    row = self.fetchone()
  File "/root/pyodbc_install/pypyodbc.py", line 1861, in fetchone
    value_list.append(buf_cvt_func(alloc_buffer.value))
ValueError: invalid literal for int() with base 10: ''

Cheers,
Beer4duke
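A tolerant converter along these lines would sidestep the crash by mapping empty buffers to None (a sketch, not the library's actual fix):

```python
def int_or_none(value):
    """Convert a fetched value to int, treating NULL/empty buffers as None."""
    if value in (None, "", b""):
        return None
    return int(value)
```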

ODBC Driver Manager Invalid string or buffer error

I am using pypyodbc from within a specialized application called MotionBuilder, and recently I've been getting this strange error:

pypyodbc.Error: (u'HY090', u'[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length')

It doesn't happen every time, only sometimes. Any ideas?

TypeError: cannot use a string pattern on a bytes-like object

Hi,
sorry, I don't know if this is the best place to post the issue I've seen.

I followed the instructions here.
https://code.google.com/archive/p/pypyodbc/wikis/Enable_SQLAlchemy_on_PyPy.wiki
There is an error when returning the server version property (on my machine it is 12.0.2269.0). The relevant frames of the traceback:
sqlalchemy\dialects\mssql\pypyodbc.py", line 285, in _get_server_version_info
for n in r.split(raw):

raw is bytes: ["b'12", '0', '2269', "0'"]

So a string pattern cannot be used on it. To get this working, I altered the above file to convert the value to a string:

for n in r.split(raw.decode("utf-8")):

This fixes the issue for me; whether or not it is the correct fix, it works as a hot fix. I just thought I'd post it.
cheers
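The underlying rule is that a pattern compiled from str can only be applied to str, never to bytes, so decoding first is a reasonable fix. A small demonstration (the pattern here is illustrative; the dialect's actual regex may differ):

```python
import re

raw = b"12.0.2269.0"
pattern = re.compile(r"[.]")    # a str pattern

raised = False
try:
    pattern.split(raw)          # str pattern applied to bytes
except TypeError:               # "cannot use a string pattern on a bytes-like object"
    raised = True

parts = pattern.split(raw.decode("utf-8"))
```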

parse DB2 SQL_XML = -370

I just added this to the dictionary SQL_data_type_dict, and I was then good to go to query the DB2 XML data type using their ODBC driver.

SQL_SS_XML : (unicode,   lambda x: x,   SQL_C_WCHAR,  create_buffer_u, 20500,  True ),
SQL_SS_UDT : (bytearray, bytearray_cvt, SQL_C_BINARY, create_buffer,   5120,   True ),
SQL_XML    : (unicode,   lambda x: x,   SQL_C_WCHAR,  create_buffer_u, 20500,  True ),
SQL_BLOB   : (bytearray, bytearray_cvt, SQL_C_BINARY, create_buffer,   102400, True ),
SQL_CLOB   : (unicode,   lambda x: x,   SQL_C_WCHAR,  create_buffer_u, 2048,   False),
}

DB2 ODBC also has the following types; it would be good to test all of them:

//SQL extended data types
SQL_GRAPHIC = -95
SQL_VARGRAPHIC = -96
SQL_LONGVARGRAPHIC = -97
SQL_BLOB = -98
SQL_CLOB = -99
SQL_DBCLOB = -350
SQL_XML = -370
SQL_CURSORHANDLE = -380
SQL_DATALINK = -400
SQL_USER_DEFINED_TYPE = -450

I ran this test and all was good.

These tables are from the DB2 SAMPLE database:

CREATE TABLE "EMP_PHOTO" (
"EMPNO" CHAR(6) NOT NULL,
"PHOTO_FORMAT" VARCHAR(10) NOT NULL,
"PICTURE" BLOB(102400)
)
DATA CAPTURE NONE;

CREATE TABLE "EMP_RESUME" (
"EMPNO" CHAR(6) NOT NULL,
"RESUME_FORMAT" VARCHAR(10) NOT NULL,
"RESUME" CLOB(5120)
)
DATA CAPTURE NONE;

CREATE TABLE "CUSTOMER" (
"CID" BIGINT NOT NULL,
"INFO" XML,
"HISTORY" XML
)
DATA CAPTURE NONE;

Missing License file / License is unclear

Doing some research, it appears the license is MIT, but it is also unclear who holds the copyright.

I haven't looked too deeply into RealPyODBC, so I am not sure if the copyright info is there. The only thing I found was confirmation that at some point MIT license was applied.

If you could add license and copyright information, that would be great. I would not consider the PyPI repository definitive for license info, as it could be mistaken; I would rather that be available with the source.

Function sequence error with pypyodbc and MSSQL driver

Hi,
It happens randomly that when I call, in robotframework-databaselibrary:
| DatabaseLibrary.Execute Sql String | delete t_Skill_Group_Member where AgentSkillTargetID = ${agent_id} | # where ${agent_id} is a number

I get:
(u'HY010', u'[HY010] [unixODBC][Driver Manager]Function sequence error')

Traceback:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/DatabaseLibrary/query.py", line 252, in execute_sql_string
    self.__execute_sql(cur, sqlString)
  File "/usr/lib/python2.7/site-packages/DatabaseLibrary/query.py", line 260, in __execute_sql
    return cur.execute(sqlStatement)
  File "/usr/lib/python2.7/site-packages/pypyodbc.py", line 1605, in execute
    self.execdirect(query_string)
  File "/usr/lib/python2.7/site-packages/pypyodbc.py", line 1632, in execdirect
    self._NumOfRows()
  File "/usr/lib/python2.7/site-packages/pypyodbc.py", line 1796, in _NumOfRows
    check_success(self, ret)
  File "/usr/lib/python2.7/site-packages/pypyodbc.py", line 986, in check_success
    ctrl_err(SQL_HANDLE_STMT, ODBC_obj.stmt_h, ret, ODBC_obj.ansi)
  File "/usr/lib/python2.7/site-packages/pypyodbc.py", line 964, in ctrl_err
    raise Error(state, err_text)

My setup is following:
robotframework-databaselibrary (0.6)
pypyodbc (1.3.3)
and odbc driver 11 for SQL Server 2011.110.2270.00

I have read an article saying that calling fetchall() before getting the number of rows can help.
What I don't know is whether this should be fixed in robotframework-databaselibrary or in pypyodbc.

Help for full migration from code.google.com

This project was previously hosted on the now-defunct code.google.com, and this Python wiki page links 3 tutorials from the old repo:

I could not easily determine the current versions of these tutorials in this repo's wiki. Please advise me which is which and I will make the necessary edits.

If there are no suitable replacements, consider moving the old tutorials to this repo under proper names. If you want, I can also make the necessary formatting modifications.

fetchall function sometimes goes on forever with no limit

I use the pypyodbc library and get an out-of-memory error for issue #2042.

The problem is in pypyodbc function:

def fetchall(self):
    if not self.connection:
        self.close()

    rows = []
    while True:
        row = self.fetchone()
        if row is None:  # it seems the row is never None here
            break
        rows.append(row)
    return rows

With this function I get:

C:\Dataload\Dataload Executables\AO3Dataload\src>pypy mmtmain.py -E PYPY_GC_MAX_DELTA=4.0GB

Getting all related products info...
RPython traceback:
  File "rpython_jit_metainterp_warmspot.c", line 1300, in ll_portal_runnerUnsigned_Bool_pypy_interpreter
  File "rpython_jit_metainterp_warmstate.c", line 4795, in maybe_compile_and_run star_5
  File "rpython_jit_metainterp_warmstate.c", line 10053, in execute_assembler__star_2_2
  File "rpython_jit_metainterp_compile.c", line 5694, in DoneWithThisFrameDescrRef_handle_fail
out of memory: couldn't allocate the next arena

Even after the last row of the result set, the returned row can still be something other than None.
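Until the root cause is fixed, a bounded iteration keeps memory use flat by streaming rows through fetchmany instead of accumulating everything in fetchall (a workaround sketch; it does not fix the never-None row bug itself):

```python
def iter_rows(cursor, batch_size=1000):
    """Yield rows from a DB-API cursor in bounded batches."""
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:       # an empty batch reliably signals the end
            break
        for row in rows:
            yield row
```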

Overload causes crash?

When querying a large number of times against the same DB (batching), I get the following stack trace:

  File "pypy/site-packages/pypyodbc.py", line 2652, in __exit__
    self.commit()
  File "pypy/site-packages/pypyodbc.py", line 2591, in commit
    check_success(self, ret)
  File "pypy/site-packages/pypyodbc.py", line 1007, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "pypy/site-packages/pypyodbc.py", line 970, in ctrl_err
    state = err_list[0][0]
IndexError: list index out of range

insert statements get lost

I am executing around 1200 insert statements, but only about half of them are reflected in the target table. No exception is raised; part of the input is just silently ignored.

    with pypyodbc.connect(conn_string) as con:
        cursor = con.cursor()
        cursor.execute('\n'.join(insert_statements))
        cursor.commit()

I checked c_query_string.value, in execdirect(), it contains the entire string, nothing gets truncated there. len(query_string) is 427118. ODBC_API.SQLExecDirect returns 0.

The database is MS SQL Server 2012. Interestingly, the number of insert statements that get executed keeps changing. There are no SQL errors - If I execute generated string in the GUI, there are no errors and all rows are inserted.

issue in python3

In Python 3 and ANSI mode, line 1003 in the function ctrl_err mixes str and bytes, which raises a TypeError.

Problem inserting dataframe into MSSQL table

I've tried to use sqlalchemy with pypyodbc following these steps.

I can directly write to the table, but using pandas' to_sql method fails. The same error is thrown when I use read_sql_table.

I really hope someone can help me out.

---------------------------------------------------------------------------
Error                                     Traceback (most recent call last)
<ipython-input-77-b7f657b9fe3c> in <module>()
      6 engine = create_engine(db_connection_string)
      7 
----> 8 data.to_sql(name='tbl_something',schema='NOT_DBO', con=engine, if_exists='append',  index=False, chunksize=100)

C:\Anaconda\envs\etl2\lib\site-packages\pandas\core\generic.pyc in to_sql(self, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
    964             self, name, con, flavor=flavor, schema=schema, if_exists=if_exists,
    965             index=index, index_label=index_label, chunksize=chunksize,
--> 966             dtype=dtype)
    967 
    968     def to_pickle(self, path):

C:\Anaconda\envs\etl2\lib\site-packages\pandas\io\sql.pyc in to_sql(frame, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
    536     pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
    537                       index_label=index_label, schema=schema,
--> 538                       chunksize=chunksize, dtype=dtype)
    539 
    540 

C:\Anaconda\envs\etl2\lib\site-packages\pandas\io\sql.pyc in to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, dtype)
   1169                          if_exists=if_exists, index_label=index_label,
   1170                          schema=schema, dtype=dtype)
-> 1171         table.create()
   1172         table.insert(chunksize)
   1173         # check for potentially case sensitivity issues (GH7815)

C:\Anaconda\envs\etl2\lib\site-packages\pandas\io\sql.pyc in create(self)
    635 
    636     def create(self):
--> 637         if self.exists():
    638             if self.if_exists == 'fail':
    639                 raise ValueError("Table '%s' already exists." % self.name)

C:\Anaconda\envs\etl2\lib\site-packages\pandas\io\sql.pyc in exists(self)
    623 
    624     def exists(self):
--> 625         return self.pd_sql.has_table(self.name, self.schema)
    626 
    627     def sql_schema(self):

C:\Anaconda\envs\etl2\lib\site-packages\pandas\io\sql.pyc in has_table(self, name, schema)
   1183 
   1184     def has_table(self, name, schema=None):
-> 1185         return self.engine.has_table(name, schema or self.meta.schema)
   1186 
   1187     def get_table(self, table_name, schema=None):

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\base.pyc in has_table(self, table_name, schema)
   1938 
   1939         """
-> 1940         return self.run_callable(self.dialect.has_table, table_name, schema)
   1941 
   1942     def raw_connection(self):

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\base.pyc in run_callable(self, callable_, *args, **kwargs)
   1841 
   1842         """
-> 1843         with self.contextual_connect() as conn:
   1844             return conn.run_callable(callable_, *args, **kwargs)
   1845 

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\base.pyc in contextual_connect(self, close_with_result, **kwargs)
   1908 
   1909         return self._connection_cls(self,
-> 1910                                     self.pool.connect(),
   1911                                     close_with_result=close_with_result,
   1912                                     **kwargs)

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in connect(self)
    336         """
    337         if not self._use_threadlocal:
--> 338             return _ConnectionFairy._checkout(self)
    339 
    340         try:

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in _checkout(cls, pool, threadconns, fairy)
    643     def _checkout(cls, pool, threadconns=None, fairy=None):
    644         if not fairy:
--> 645             fairy = _ConnectionRecord.checkout(pool)
    646 
    647             fairy._pool = pool

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in checkout(cls, pool)
    438     @classmethod
    439     def checkout(cls, pool):
--> 440         rec = pool._do_get()
    441         try:
    442             dbapi_connection = rec.get_connection()

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in _do_get(self)
    962             if self._inc_overflow():
    963                 try:
--> 964                     return self._create_connection()
    965                 except:
    966                     self._dec_overflow()

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in _create_connection(self)
    283         """Called by subclasses to create a new ConnectionRecord."""
    284 
--> 285         return _ConnectionRecord(self)
    286 
    287     def _invalidate(self, connection, exception=None):

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\pool.pyc in __init__(self, pool)
    414         pool.dispatch.first_connect.\
    415             for_modify(pool.dispatch).\
--> 416             exec_once(self.connection, self)
    417         pool.dispatch.connect(self.connection, self)
    418 

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\event\attr.pyc in exec_once(self, *args, **kw)
    248                 if not self._exec_once:
    249                     try:
--> 250                         self(*args, **kw)
    251                     finally:
    252                         self._exec_once = True

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\event\attr.pyc in __call__(self, *args, **kw)
    258             fn(*args, **kw)
    259         for fn in self.listeners:
--> 260             fn(*args, **kw)
    261 
    262     def __len__(self):

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\util\langhelpers.pyc in go(*arg, **kw)
   1217         if once:
   1218             once_fn = once.pop()
-> 1219             return once_fn(*arg, **kw)
   1220 
   1221     return go

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\strategies.pyc in first_connect(dbapi_connection, connection_record)
    164                                     _has_events=False)
    165                 c._execution_options = util.immutabledict()
--> 166                 dialect.initialize(c)
    167             event.listen(pool, 'first_connect', first_connect, once=True)
    168 

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\connectors\pypyodbc.pyc in initialize(self, connection)
    141 
    142         # run other initialization which asks for user name, etc.
--> 143         super(PyODBCConnector, self).initialize(connection)
    144 
    145 

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\dialects\mssql\base.pyc in initialize(self, connection)
   1371 
   1372     def initialize(self, connection):
-> 1373         super(MSDialect, self).initialize(connection)
   1374         if self.server_version_info[0] not in list(range(8, 17)):
   1375             # FreeTDS with version 4.2 seems to report here

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\default.pyc in initialize(self, connection)
    246 
    247         if self.description_encoding is not None and \
--> 248                 self._check_unicode_description(connection):
    249             self._description_decoder = self.description_encoding = None
    250 

C:\Anaconda\envs\etl2\lib\site-packages\sqlalchemy\engine\default.pyc in _check_unicode_description(self, connection)
    333                     expression.select([
    334                         expression.literal_column("'x'").label("some_label")
--> 335                     ]).compile(dialect=self)
    336                 )
    337             )

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in execute(self, query_string, params, many_mode, call_mode)
   1603                 #self._BindCols()
   1604 
-> 1605         else:
   1606             self.execdirect(query_string)
   1607         return self

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in execdirect(self, query_string)
   1631             ret = ODBC_API.SQLExecDirect(self.stmt_h, c_query_string, len(query_string))
   1632         check_success(self, ret)
-> 1633         self._NumOfRows()
   1634         self._UpdateDesc()
   1635         #self._BindCols()

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in _UpdateDesc(self)
   1783             self._row_type = self.row_type_callable(self)
   1784         else:
-> 1785             self.description = None
   1786         self._CreateColBuf()
   1787 

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in _CreateColBuf(self)
   1726             if bind_data:
   1727                 ret = ODBC_API.SQLBindCol(self.stmt_h, col_num + 1, target_type, ADDR(alloc_buffer), total_buf_len, ADDR(used_buf_len))
-> 1728                 if ret != SQL_SUCCESS:
   1729                     check_success(self, ret)
   1730 

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in check_success(ODBC_obj, ret)
    984     """ Validate return value, if not success, raise exceptions based on the handle """
    985     if ret not in (SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, SQL_NO_DATA):
--> 986         if isinstance(ODBC_obj, Cursor):
    987             ctrl_err(SQL_HANDLE_STMT, ODBC_obj.stmt_h, ret, ODBC_obj.ansi)
    988         elif isinstance(ODBC_obj, Connection):

C:\Anaconda\envs\etl2\lib\site-packages\pypyodbc.py in ctrl_err(ht, h, val_ret, ansi)
    962             elif state in (raw_s('HYT00'),raw_s('HYT01')):
    963                 raise OperationalError(state,err_text)
--> 964             elif state[:2] in (raw_s('IM'),raw_s('HY')):
    965                 raise Error(state,err_text)
    966             else:

Error: (u'HY090', u'[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length')

crash on close() without rollback/commit

Hi, just to report that when closing SQL Server connections on Windows, even when only using a cursor to perform retrieval SELECTs, the following error is generated:

"...\generating_dataset.py", line 228, in get_doc_text
    connection.close()
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 2697, in close
    check_success(self, ret)
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 1009, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 975, in ctrl_err
    raise ProgrammingError(state,err_text)
pypyodbc.ProgrammingError: ('25000', '[25000] [Microsoft][ODBC Driver Manager] Invalid transaction state')
Exception ignored in: <bound method Connection.__del__ of <pypyodbc.Connection object at 0x00000167673D4518>>
Traceback (most recent call last):
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 2682, in __del__
    self.close()
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 2697, in close
    check_success(self, ret)
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 1009, in check_success
    ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
  File "...\Anaconda3\lib\site-packages\pypyodbc.py", line 975, in ctrl_err
    raise ProgrammingError(state,err_text)
pypyodbc.ProgrammingError: ('25000', '[25000] [Microsoft][ODBC Driver Manager] Invalid transaction state')
Press any key to continue . . .

To fix it, a rollback() is necessary before closing the connection. Perhaps this fix should be hardcoded inside the close method if no "editing" operation was performed:

cursor.close()
connection.rollback()
connection.close()
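The workaround above can be wrapped in a small helper (a sketch; the function name is mine, not part of the pypyodbc API):

```python
# Roll back any open transaction before closing so the driver does not
# raise "25000 Invalid transaction state" on read-only sessions.
def safe_close(connection):
    try:
        connection.rollback()  # harmless after pure SELECTs
    finally:
        connection.close()
```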

Version codes 1.3.4 and 1.3.4.3

There might be something wrong with version codes 1.3.4 and 1.3.4.3.

I pip installed pypyodbc and it showed I installed 1.3.4
capture5_install_pypyodbc_on_test3

Then I exported my environment to a .yml file: still showing 1.3.4 (also showing 1.3.4 in requirements.txt)

Later when I recreate the environment with the .yml file, it throws an error saying 'Could not find a version that satisfies the requirement pypyodbc == 1.3.4'
capture6

I manually changed the version in my .yml file to 1.3.4.3 and recreated the environment; there's a message showing that it's version 1.3.4 that gets installed.
capture7

Invalid version referenced in pypyodbc package pypyodbc-1.3.5.2.zip

The pypi package pypyodbc-1.3.5.2.zip still has version 1.3.4 referenced in its setup.py, which leads to the following warning message when upgrading with pip:

Requested pypyodbc==1.3.5.2 from https://pypi.python.org/packages/ea/48/bb5412846df5b8f97d42ac24ac36a6b77a802c2778e217adc0d3ec1ee7bf/pypyodbc-1.3.5.2.zip#md5=9f262beb1aebf7556fce26cad2c5d462 (from -r requirements.txt (line 15)), but installing version 1.3.4

I guess it is best to keep the version in setup.py in sync with the pypi package version.

Performance issue

Hi,
I am very glad for pypyodbc and use it for my day-to-day work. I work with 5-10GB of data, apply computations, and then insert into MS Access using pypyodbc.

I observed some performance variations with the following code:

    cur.execute("Insert")
    con.commit()

Inserting row by row using the above code is very slow. Then I came up with:

    cur.executemany("Bulk insert")
    con.commit()

But it is only a little faster than row-by-row insertion, and I need extra performance :). Could you give me any suggestions?
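One common suggestion (a sketch under the assumption of a driver that handles parameter arrays well; table and column names are hypothetical) is a single parameterized statement with executemany(), committing once per chunk instead of once per row:

```python
# Batch rows into fixed-size chunks so each commit covers many inserts.
def chunks(rows, size):
    """Yield successive slices of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# sql = "INSERT INTO results (id, value) VALUES (?, ?)"
# for batch in chunks(all_rows, 1000):
#     cur.executemany(sql, batch)
#     con.commit()  # one commit per 1000 rows, not one per row
```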

Selecting a uniqueidentifier field returns a string that looks like bytes

I'm not sure if I'm doing something wrong; however, if I select a uniqueidentifier field from MS SQL, the console appears to display it as a bytes object by prefixing it with b. It is actually a str quoted with b''. To get the uuid you need to slice with row['itemid'][2:-1].

Is that the desired behaviour? Caused some grief when trying to insert the value into another table.

    cursor.execute("select voucher_draft.ItemID, voucher_draft.comments "
                   "from voucher_draft "
                   "where voucher_draft.ExpenseRef = ?", ('1234123412345', ))
    row = cursor.fetchone()
    print(row['itemid'])
    # yields: b'B335FDAD-DE0B-41DA-BB07-0DA10C04D59A'
    print(type(row['itemid']))
    # yields: <class 'str'>
    print(row['itemid'].upper())
    # yields: B'B335FDAD-DE0B-41DA-BB07-0DA10C04D59A'
    # note the uppercase B 
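Until the driver behaviour changes, the slicing workaround can be wrapped up like this (a hedged sketch; `fix_guid` is my name, not a pypyodbc function):

```python
import uuid

# pypyodbc hands back the repr of a bytes object as a str, e.g.
# "b'B335FDAD-...'", so strip the b'...' wrapper before parsing.
def fix_guid(value):
    if value.startswith("b'") and value.endswith("'"):
        value = value[2:-1]
    return uuid.UUID(value)
```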

MSSQL: None instead of empty string returned

Hello,

pypyodbc_issue_rename_to_py.txt

I have a problem with a MSSQL varchar column value that should be an empty string but it is actually returned as None..

I have stripped the problem down to the following code:
    import pypyodbc as pyodbc

    def get_db_cursor(db_host="server.domain.invalid,3180",
                      db_name="db_name",
                      db_user="user",
                      db_password="pass"):
        connection_string = ("Driver={SQL Server};Server=" + db_host +
                             ";Database=" + db_name + ";UID=" + db_user +
                             ";PWD=" + db_password + ";")
        db = pyodbc.connect(connection_string)
        cursor = db.cursor()
        return db, cursor

    def main():
        db, cursor = get_db_cursor()
        query = """SELECT NULL as nul_col, '' as empty_col"""
        for row in cursor.execute(query):
            print(row)

    if __name__ == "__main__":
        main()

I fail to understand why this prints (None, None) instead of (None, '').

Is there a way to make pypyodbc return (None, '') in this case?
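One workaround sketch: since the driver collapses '' to None client-side, the distinction has to come from the server. Select an explicit NULL flag alongside the column, e.g. `SELECT col, CASE WHEN col IS NULL THEN 1 ELSE 0 END AS col_is_null`, then restore empty strings afterwards (column and helper names here are illustrative, not pypyodbc API):

```python
# Map the driver's None back to '' unless the server flagged a real NULL.
def restore_empty(value, is_null_flag):
    if value is None and not is_null_flag:
        return ''
    return value
```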

Thank you for your help!

Version 1.3.3 removed from pypi

With the recent release of 1.3.4 and 1.3.5 it looks like the 1.3.3 package has been removed from pypi.
Upgrading to the latest version isn't trivial, as 1.3.4+ includes breaking changes (despite only being a minor version bump).
We have worked around this problem by keeping a local mirror of it, but others might see issues such as production servers not provisioning, builds failing, or local dev workflows being interrupted.

Could this package be added to pypi again?

URL:
https://pypi.python.org/simple/pypyodbc/

OUTPUT:

Links for pypyodbc

pypyodbc-1.3.4.1.zip
pypyodbc-1.2.1.zip
pypyodbc-1.3.4.3.zip
pypyodbc-1.3.1.zip
pypyodbc-1.3.0.zip
pypyodbc-1.3.2.zip
pypyodbc-1.3.5.2.zip

BBS does not exist

The BBS linked from both the github page and the Google code page is non-existent (Yahoo says that the group does not exist).

This github repo missing latest commits?

The latest release on pypi is (at the time of writing) 1.3.3 uploaded on 2014-05-25.

At the time of writing the latest commit here on github is 73c98e1 for version 1.3.0, made on 2014-02-15

So GitHub is missing this work (taken from history on google code):

Version 1.3.3 May 25 2014

Setting connection timeout, login timeout, query timeout are now well supported

close Issue 42 only set read only of connection when explicitly required.

Version 1.3.2 May 24 2014

close Issue 37, now you can set connection.timeout or use cursor.set_timeout(timeout) to set the time when a query should time out. Thanks Aleksey!

Version 1.3.1 Mar 11 2014

close Issue 36, handling of datetime stamps


Do the latest changes need to be pushed here? Is this still the main development repo?

Thanks for your work on this. A pure Python implementation of pyodbc is a worthy goal and I hope we can keep this going.

dttm_cvt and tm_cvt do not work when subsecond value more accurate than milliseconds

When getting datetime data from SQL Server, the subsecond value has 7 digits. The code assumes there to be 6 or fewer digits.

To get this working, I truncated the subsecond value to the first 6 digits, so that it has microsecond precision.

I changed the last line of dttm_cvt to be:
    else: return datetime.datetime(int(x[0:4]),int(x[5:7]),int(x[8:10]),int(x[10:13]),int(x[14:16]),int(x[17:19]),int(x[20:].ljust(6,'0')[0:6]))

Similarly, I changed the last line of tm_cvt to be:
    else: return datetime.time(int(x[0:2]),int(x[3:5]),int(x[6:8]),int(x[9:].ljust(6,'0')[0:6]))
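As a standalone illustration of the fix above (my own helper, not pypyodbc's exact code), padding and then truncating the fraction to six digits makes both short and 7-digit inputs parse:

```python
import datetime

# SQL Server datetime2 carries up to seven fractional digits; Python's
# datetime accepts at most six (microseconds), so the fraction is
# right-padded with zeros and truncated to six digits.
def parse_dttm(x):
    return datetime.datetime(
        int(x[0:4]), int(x[5:7]), int(x[8:10]),       # year, month, day
        int(x[11:13]), int(x[14:16]), int(x[17:19]),  # hour, minute, second
        int(x[20:].ljust(6, '0')[0:6]))               # fraction -> microseconds
```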

Contributors ! Helpers wanted

Hi:

I recently saw an increase in issue postings and in apparent usage of this library. I have had little time to help: making pull requests, code checking, unit tests, etc. (to see if things work properly), or even reviewing improvements on other users' forks to see whether they are ready to be merged here. I also don't have the needed tools on my work computers. I just use the library with SQLAlchemy and SQL Server 2000/2005/2008 for projects of different sizes (I even have one fully working on a Raspberry Pi with FreeTDS), and when I hit small problems I try to find the cause and fix it myself along the way (I try to do the same for other users when I catch something in the GitHub notifications in my email inbox), like #70 or #78. I have also added some sample usage code snippets to the Readme.md file.

So, if someone has the time and knowledge needed, I could add them as a Contributor to this main repository, just as I was kindly added by jiangwen365 for the same reason some time ago.

Thanks in advance for the understanding!

autocommit does not work like pyodbc

I am trying to move away from pyodbc and I switched to pypyodbc. autocommit=True, whether passed to .connect() or set via conn.autocommit = True, does not behave like pyodbc's. I get the following message:

pypyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]CREATE DATABASE statement not allowed within multi-statement transaction.')

...when creating a snapshot (i.e., a database):

    my_cursor.execute("IF EXISTS (SELECT database_id FROM sys.databases WHERE NAME='{0}') DROP DATABASE {0}".format(my_snapshot))
    my_cursor.execute("CREATE DATABASE {0} ON ( NAME = {1}, FILENAME = N'{2}') AS SNAPSHOT OF {1}".format(my_snapshot, database, my_snapshot_file))

Note: It works just fine with pyodbc
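A workaround I have seen suggested (a sketch, not a confirmed fix): call commit() immediately before the DDL so no multi-statement transaction is open. The helper below just rebuilds the reporter's statements; the variable names are theirs.

```python
# Build the snapshot DROP/CREATE pair; running commit() right before
# executing them closes any implicit transaction that would otherwise
# block CREATE DATABASE.
def snapshot_ddl(snapshot, database, snapshot_file):
    drop = ("IF EXISTS (SELECT database_id FROM sys.databases "
            "WHERE NAME='{0}') DROP DATABASE {0}").format(snapshot)
    create = ("CREATE DATABASE {0} ON ( NAME = {1}, FILENAME = N'{2}') "
              "AS SNAPSHOT OF {1}").format(snapshot, database, snapshot_file)
    return [drop, create]

# conn.commit()  # close any open implicit transaction first
# for stmt in snapshot_ddl(my_snapshot, database, my_snapshot_file):
#     my_cursor.execute(stmt)
#     conn.commit()
```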

Thanks
