MariaDB Connector/Node.js is used to connect applications developed on Node.js to MariaDB and MySQL databases. MariaDB Connector/Node.js is LGPL licensed.

License: GNU Lesser General Public License v2.1

JavaScript 99.18% TypeScript 0.82%

mariadb-connector-nodejs's Introduction

MariaDB Node.js connector


Non-blocking MariaDB and MySQL client for Node.js.

A MariaDB and MySQL client, written in 100% JavaScript with a TypeScript definition, exposing a Promise API, and distributed under the LGPL license version 2.1 or later (LGPL-2.1-or-later).

Documentation

See the promise documentation for the detailed API.

The callback documentation describes the callback wrapper provided for compatibility with existing drivers.

See the dedicated section for migrating from mysql/mysql2 or from the 2.x version.

Why a New Client?

While there are existing MySQL clients that work with MariaDB (such as the mysql and mysql2 clients), the MariaDB Node.js Connector offers new functionality, like insert streaming, pipelining, and ed25519 plugin authentication, while making no compromises on performance.

The Connector is production-grade, with multiple features:

  • superfast batching
  • fast pool
  • easy debugging, with error traces pointing to the offending line of code
  • data streaming without high memory consumption
  • pipelining
  • metadata skipping (MariaDB server only)
  • SQL file import
  • ...

Here is a closer look at some of those features:

Insert Streaming

Using a Readable stream in your application, you can stream INSERT statements to MariaDB through the Connector.

    
    const https = require('https');
    // The response is a Readable stream; the driver streams its data to the database.
    https.get('https://someContent', readableStream => {
        connection.query("INSERT INTO myTable VALUE (?)", [readableStream]);
    });

Pipelining

With Pipelining, the Connector sends commands without waiting for server results, preserving order. For instance, consider executing two INSERT statements.

[pipelining diagram]

The Connector doesn't wait for query results before sending the next INSERT statement. Instead, it sends queries one after the other, avoiding much of the network latency.

For more information, see the Pipelining documentation.
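
For illustration, here is a minimal sketch of what pipelining looks like from application code (written inside an async function; 'conn' is an open connection and the table is the hypothetical myTable used elsewhere in this README):

// Both INSERTs are written to the socket before either result is read;
// Promise.all then awaits the two results together.
const p1 = conn.query("INSERT INTO myTable VALUE (?)", [1]);
const p2 = conn.query("INSERT INTO myTable VALUE (?)", [2]);
const [res1, res2] = await Promise.all([p1, p2]);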

Bulk insert

Some use cases require a large amount of data to be inserted into a database table. By using batch processing, these queries can be sent to the database in one call, thus improving performance.

For more information, see the Batch documentation.
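
As a rough sketch of the batch API (inside an async function; 'conn' is an open connection and the table/values are hypothetical), all rows are sent in a single call:

// Every row of the array is sent in one batched command instead of one query per row.
await conn.batch("INSERT INTO myTable VALUES (?, ?)", [
  [1, "john"],
  [2, "jack"]
]);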

Benchmarks

MariaDB provides benchmarks comparing the Connector with the other Node.js MariaDB/MySQL clients (mysql and mysql2).

See the Benchmarks page for the full set of results.

query

select 100 int
            mysql :  2,738.7 ops/s ± 1.3% 
           mysql2 :  2,404.9 ops/s ± 1.3%  (  -12.2% )
          mariadb :  5,650.8 ops/s ± 1.4%  ( +106.3% )

[select 100 int benchmark chart]

execute

select 100 int - BINARY
           mysql2 :  2,473.4 ops/s ± 1.3% 
          mariadb :   10,533 ops/s ± 1.7%  ( +325.9% )

[select 100 int - BINARY benchmark chart]

Quick Start

The MariaDB Connector is available through the npm registry. You can install it using npm:

$ npm install mariadb

example:

const mariadb = require('mariadb');
const pool = mariadb.createPool({ host: process.env.DB_HOST, user: process.env.DB_USER, connectionLimit: 5 });

async function asyncFunction() {
  let conn;
  try {
    conn = await pool.getConnection();
    const rows = await conn.query("SELECT 1 as val");
    // rows: [ {val: 1}, meta: ... ]

    const res = await conn.query("INSERT INTO myTable value (?, ?)", [1, "mariadb"]);
    // res: { affectedRows: 1, insertId: 1, warningStatus: 0 }
  } finally {
    if (conn) conn.release(); // release to pool
  }
}

Contributors

A big thanks to all contributors


Contributing

If you would like to contribute to the MariaDB Node.js Connector, please follow the instructions given in the contributing guide.

To file an issue or follow the development, see JIRA.


mariadb-connector-nodejs's Issues

Cannot pass JSON object to ST_GeomFromGeoJSON

This code ...

"use strict"
const mariadb = require('mariadb');

async function main() {

  const connectionDetails = {
    host: "192.168.1.250",
    user: "root",
    password: "oracle",
    port: 3307,
    database: "mysql",
    multipleStatements: true
  };

  let results;

  try {
    const pool = mariadb.createPool(connectionDetails);
    const conn = await pool.getConnection();
    results = await conn.query(`SET SESSION SQL_MODE=ANSI_QUOTES`);
    results = await conn.query(`CREATE DATABASE IF NOT EXISTS "TEST1"`);
    results = await conn.query(`DROP TABLE IF EXISTS "TEST"`);
    results = await conn.query(`CREATE TABLE IF NOT EXISTS "TEST" ("ID" INT, "LOCN" GEOMETRY)`);
    results = await conn.query(`INSERT INTO "TEST" ("ID","LOCN") VALUES (?,ST_GeomFromGeoJSON(?))`, [1, {"type": "Point", "coordinates": [28.8575834, 47.0055601]}]);

    conn.end();
    pool.end();
  } catch (e) {
    console.log(e);
  }
}
main();

results in

C:\Development\YADAMU\MariaDB>node client\test1
{ Error: (conn=1588, no: 4079, SQLState: HY000) Illegal parameter data type geometry for operation 'st_geomfromgeojson'
sql: INSERT INTO "TEST" ("ID","LOCN") VALUES (?,ST_GeomFromGeoJSON(?)) - parameters:[1,{"type":"Point","coordinates":[28.8575834,47.0055601]}]
    at Object.module.exports.createError (C:\Development\YADAMU\MariaDB\client\node_modules\mariadb\lib\misc\errors.js:55:10)
    at Packet.readError (C:\Development\YADAMU\MariaDB\client\node_modules\mariadb\lib\io\packet.js:494:19)
    at Query.readResponsePacket (C:\Development\YADAMU\MariaDB\client\node_modules\mariadb\lib\cmd\resultset.js:47:28)
    at PacketInputStream.receivePacket (C:\Development\YADAMU\MariaDB\client\node_modules\mariadb\lib\io\packet-input-stream.js:51:9)
    at PacketInputStream.onData (C:\Development\YADAMU\MariaDB\client\node_modules\mariadb\lib\io\packet-input-stream.js:107:20)
    at Socket.emit (events.js:182:13)
    at addChunk (_stream_readable.js:283:12)
    at readableAddChunk (_stream_readable.js:264:11)
    at Socket.Readable.push (_stream_readable.js:219:10)
    at TCP.onread (net.js:638:20)
  fatal: false,
  errno: 4079,
  sqlState: 'HY000',
  code: 'ER_ILLEGAL_PARAMETER_DATA_TYPE_FOR_OPERATION' }

I can get around it by stringifying the GeoJSON object, but I don't think that should be necessary?
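
For reference, the stringify workaround mentioned above looks like this (a sketch only, inside the same async main() as the report; it is not necessarily the intended long-term fix):

const point = { type: "Point", coordinates: [28.8575834, 47.0055601] };
// Passing the GeoJSON as a plain JSON string avoids the geometry parameter type error.
results = await conn.query(
  `INSERT INTO "TEST" ("ID","LOCN") VALUES (?,ST_GeomFromGeoJSON(?))`,
  [1, JSON.stringify(point)]
);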

Executing multiple queries in a single string

Hi

I am using this package to import an SQL db dump. If I am not mistaken, I have to isolate every query, including special comments like

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;

because connection.query() only allows a single query.
Is there a way to execute multiple queries in a single string?

Regards,
Michael
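
One possible direction, assuming the connector's multipleStatements connection option (it appears in the connection settings of the previous issue) also applies to imported dumps; enabling it lets a single query() call contain several ;-separated statements, at the cost of a higher SQL-injection risk. A minimal sketch with hypothetical credentials, inside an async function:

const mariadb = require('mariadb');

const conn = await mariadb.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'secret',        // hypothetical credentials
  multipleStatements: true   // allow several statements per query() call
});
// Both statements are sent in a single call.
const results = await conn.query("SELECT 1 AS a; SELECT 2 AS b");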

Charset is ignored when using a connection pool

When I use a connection pool (created with the createPool method), the charset is no longer applied, resulting in an encoding problem once roughly 1/5 of the connection limit has been used (e.g. if connectionLimit is set to 10, the result changes after 4 queries).

Correct result: Snax, byali и TOAO войдут в новый состав по CS:GO
Actual result: Snax, byali ? TOAO ?????? ? ????? ?????? ?? CS:GO

Even using CONVERT() or CAST() the problem persists.

This is the connection snippet:

const mariadb = require('mariadb');
const database = mariadb.createPool({
    host: process.env.DATABASE_HOST,
    user: process.env.DATABASE_USERNAME,
    password: process.env.DATABASE_PASSWORD,
    database: process.env.DATABASE_NAME,
    timezone: 'Z',
    acquireTimeout: 2000
});

and the table structure:

CREATE TABLE `course_content` (
  `course` char(36) CHARACTER SET ascii NOT NULL,
  `content` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

Node: v10.13.0
MariaDB: 10.2
MariaDB Connector: 2.0.2-rc

Pass array in mariadb query

Hello,
I have a simple query like this,
db.query("DELETE `tb_a`, `tb_b` FROM `tb_a` LEFT JOIN `tb_b` ON `tb_b`.a_key = `tb_a`.key WHERE `tb_a`.key IN ('"+key.join("','")+"')")
//key = ['abc','gfd']
It works well, but I want to prevent SQL injection, so I intend to use a placeholder like this
db.query("DELETE `tb_a`, `tb_b` FROM `tb_a` LEFT JOIN `tb_b` ON `tb_b`.a_key = `tb_a`.key WHERE `tb_a`.key IN (?)", [key])
But it doesn't seem to work
{ affectedRows: 0, insertId: 0, warningStatus: 0 }
Am I doing this right?
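
One common workaround (a sketch, not necessarily the connector's intended handling of array parameters) is to expand one placeholder per array element and pass the array as the parameter list:

const key = ['abc', 'gfd'];
// Build "?,?" with one placeholder per key, so each value is bound individually.
const placeholders = key.map(() => '?').join(',');
await db.query(
  "DELETE `tb_a`, `tb_b` FROM `tb_a` LEFT JOIN `tb_b` ON `tb_b`.a_key = `tb_a`.key" +
  " WHERE `tb_a`.key IN (" + placeholders + ")",
  key
);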

Cannot set Timezone on connection using SYSTEM timezone

lib version v2.0.5
Maria v10.3.13
Connection via Sequelize v5

{ 
  Error: (conn=55, no: 45036, SQLState: 08S01) Automatic timezone setting fails. Server timezone 'PDT' does't have a corresponding IANA timezone. Option timezone must be set according to server timezone
    at Object.module.exports.createError (/some/directory/node_modules/mariadb/lib/misc/errors.js:56:10)
    at query.then.res (/some/directory/node_modules/mariadb/lib/connection.js:730:20)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)
     fatal: true,
     errno: 45036,
     sqlState: '08S01',
     code: 'ER_WRONG_AUTO_TIMEZONE'
}

When I try to connect I get this error. I'm not sure why, because my coworker has the same settings and it works for him. I'm connecting through Sequelize v5.

The error points to this file: /node_modules/mariadb/lib/connection.js:730:20, where I see this query: SELECT @@system_time_zone stz, @@time_zone tz. Its output reads:

stz tz
PDT SYSTEM

however the next part of the code does a lookup in moment which is expecting an IANA timezone string like 'America/Los_Angeles' but it is getting fed the argument 'PDT' so it returns null...

As seen in the moment docs here:
https://momentjs.com/timezone/docs/#/zone-object/abbr/

const serverTimezone = res[0].tz === 'SYSTEM' ? res[0].stz : res[0].tz;
const serverZone = moment.tz.zone(serverTimezone); 
// serverTimezone is 'PDT' here not 'America/Los_Angeles' and serverZone is null as a result...

I've tried setting my timezone in mariadb to the IANA America/Los_Angeles but it says it's invalid...

How can I get this working?
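
One way around the failing automatic lookup, as the error message suggests, is to set the connector's timezone option explicitly to an IANA name. A sketch for a direct pool (when going through Sequelize, the same option would presumably be passed via dialectOptions):

const mariadb = require('mariadb');
const pool = mariadb.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  // Explicit IANA timezone, so the connector does not try to resolve the server's 'PDT' abbreviation.
  timezone: 'America/Los_Angeles'
});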

Bad Connection Pool Config Can Cause CPU Thrashing

Greetings,

In my clumsiness, I specified an invalid database name in the createPool config. Since my app did not actually open any connections to the pool, there was visibly nothing wrong. The app functioned great.

However, when I would keep the app running, Docker would burn 150% CPU. I initially thought it was Docker's fault due to all the com.docker.hyperkit CPU issues out there, but after investigation, I realized that in pool.js handleConnectionError, it happily eats the error and retries opening connections as fast as it possibly can.

For example, after changing pool.js to include a global error counter:

  const handleConnectionError = function(pool, err) {
    global.errors++;

Then running:

global.errors = 0;

const MariaDB = require('mariadb');
const pool = MariaDB.createPool({
    host: 'localhost',
    port: 3307,
    user: 'root',
    password: 'password',
    database: 'oopspoops',
    acquireTimeout: 10000,
    connectionLimit: 5,
    minDelayValidation: 500,
    connectTimeout: 10000,
    socketTimeout: 0
});

const holdOpen = setInterval(() => {
    console.log(' > Errors: %d', global.errors)
}, 1000);

After 10 seconds, you'll see:

$ node oops.js 
 > Errors: 397
 > Errors: 785
 > Errors: 1169
 > Errors: 1548
 > Errors: 1940
 > Errors: 2323
 > Errors: 2703
 > Errors: 3090
 > Errors: 3483
 > Errors: 3860
 > Errors: 4266
...

The only time the operator will notice something has gone wrong is when they first try opening a connection. For example, adding to the above example:

const kaboom = setTimeout(async () => {
    try {
        await pool.query('SHOW DATABASES');
    } catch (err) {
        console.error(err);
        process.exit(1);
    }
}, 5000);

will then output:

$ node oops.js 
 > Errors: 354
 > Errors: 729
 > Errors: 1097
 > Errors: 1479
 > Errors: 1858
 { Error: (conn=101707, no: 1049, SQLState: 42000) Unknown database 'oopspoops'
     at Object.module.exports.createError (node_modules/mariadb/lib/misc/errors.js:55:10)
     at Handshake.throwError (node_modules/mariadb/lib/cmd/command.js:34:20)
     at Handshake.handshakeResult (node_modules/mariadb/lib/cmd/handshake/handshake.js:104:21)
     at PacketInputStream.receivePacket (node_modules/mariadb/lib/io/packet-input-stream.js:73:9)
     at PacketInputStream.onData (node_modules/mariadb/lib/io/packet-input-stream.js:129:20)
     at Socket.emit (events.js:189:13)
     at addChunk (_stream_readable.js:284:12)
     at readableAddChunk (_stream_readable.js:265:11)
     at Socket.Readable.push (_stream_readable.js:220:10)
     at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
  From event:
     at _registerHandshakeCmd (node_modules/mariadb/lib/connection.js:641:11)
     at node_modules/mariadb/lib/connection.js:56:11
     at new Promise (<anonymous>)
     at Connection.connect (node_modules/mariadb/lib/connection.js:55:16)
     at addConnectionToPoolPromise (node_modules/mariadb/lib/pool.js:331:8)
     at process._tickCallback (internal/process/next_tick.js:63:19)
   fatal: true,
   errno: 1049,
   sqlState: '42000',
   code: 'ER_BAD_DB_ERROR' }

I'd happily submit a PR to address this, but I'm not sure what the best course of action is.

I feel like repeated attempts at initializing the pool should probably trigger some sort of warning or error.

Thanks,
-Kevin

Commit rows disappear after closing connection ?

I am sure I must be missing something really obvious here..
My application has just inserted 4 rows into the table "hr2"."regions"

I execute the following code

    results = await conn.query('select count(*) from hr2.regions');
    console.log(results);
    await conn.commit;
    console.log('Committed');
    results = await conn.query('select count(*) from hr2.regions');
    console.log(results);
    
    await conn.end();

    conn = await pool.getConnection();
    results = await conn.query('select count(*) from hr2.regions');
    console.log(results);

And I get the following results..

[ { 'count(*)': 4 },
  meta: [ { collation: [Collation],
      columnLength: 21,
      columnType: 8,
      scale: 0,
      type: 'LONGLONG',
      flags: 129,
      db: [Function: bound getStringProperty],
      schema: [Function: bound getStringProperty],
      table: [Function: bound getStringProperty],
      orgTable: [Function: bound getStringProperty],
      name: [Function: bound getStringProperty],
      orgName: [Function: bound getStringProperty] } ] ]
Committed
[ { 'count(*)': 4 },
  meta: [ { collation: [Collation],
      columnLength: 21,
      columnType: 8,
      scale: 0,
      type: 'LONGLONG',
      flags: 129,
      db: [Function: bound getStringProperty],
      schema: [Function: bound getStringProperty],
      table: [Function: bound getStringProperty],
      orgTable: [Function: bound getStringProperty],
      name: [Function: bound getStringProperty],
      orgName: [Function: bound getStringProperty] } ] ]
[ { 'count(*)': 0 },
  meta: [ { collation: [Collation],
      columnLength: 21,
      columnType: 8,
      scale: 0,
      type: 'LONGLONG',
      flags: 129,
      db: [Function: bound getStringProperty],
      schema: [Function: bound getStringProperty],
      table: [Function: bound getStringProperty],
      orgTable: [Function: bound getStringProperty],
      name: [Function: bound getStringProperty],
      orgName: [Function: bound getStringProperty] } ] ]

How can 4 committed rows go AWOL? There are no other users of the database.

Callback in promise never called

For some reason the then functions from the queries are never called for me. For example:

connection.query('SELECT * FROM information_schema.statistics').then(results => console.log(results));

Will never display the results. I put a console.log in the source code of this library at command.js#successEnd (before the process.nextTick(this.resolve, val);) and the val does contain the data, but the promise seems to never resolve for some reason.

$ node -v
v10.1.0

Testing on 2.0.0-alpha

Serverless Lambda Too many connections

I'm trying to use the connector in an AWS Lambda function (right now just testing with serverless-offline) but I'm receiving console warnings until it finally errors out completely. My event is scheduled to run every minute so it uses up the available connections rather quickly. From the research I've done this seems like a known issue with mysql-like connectors. None of the solutions I've found seem to work. Any help is greatly appreciated. Thanks.

serverless_1  | pool fail to create connection ((conn=-1, no: 1040, SQLState: HY000) Too many connections)
mariadb_1     | 2019-03-06 20:22:00 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'root' host: '172.20.0.4' (Got timeout reading communication packets)
import * as mariadb from 'mariadb';

const pool = mariadb.createPool({
  host: process.env.MARIADB_HOST,
  user: process.env.MARIADB_USER,
  password: process.env.MARIADB_PASSWORD,
});

export const handler = async () => {
  let conn;
  try {
    conn = await pool.getConnection();
    const rows = await conn.query("SELECT 1 as val");
    console.log(rows);
    await conn.end()
  } catch (err) {
    console.error(err);
  }
  ...
}

Insert values from object, using 'INSERT INTO table SET ?' ... ?

Given something like this using MySQL ...

let movieInfo = {
	id: 123,
	title: 'Hit Movie'
};

pool.getConnection()
.then(conn => {

	// This works in MySQL
	conn.query('INSERT INTO movies SET ?', movieInfo, (err, res) => ...

... trying to do the equivalent thing using MariaDB. Meaning, the object keys should be used as column names and the values as the values to be inserted. Is this possible, or am I missing something?
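
One equivalent without the mysql-specific SET ? expansion is to build the column list from the object's keys (a sketch only, inside an async function, reusing the hypothetical movies table from the question):

const movieInfo = { id: 123, title: 'Hit Movie' };

// Derive column names and placeholders from the object, then bind the values in the same order.
const columns = Object.keys(movieInfo);
const sql = 'INSERT INTO movies (' + columns.map(c => '`' + c + '`').join(', ') + ')' +
            ' VALUES (' + columns.map(() => '?').join(', ') + ')';
const res = await conn.query(sql, columns.map(c => movieInfo[c]));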

Collation and charset

This code here is a little weird:
https://github.com/MariaDB/mariadb-connector-nodejs/blob/3eeba5ee1c4e9d7549aa857316db679c6af159a6/lib/config/connection-options.js#L28-L35 and it seems to mix up collation and charset.

Using charset: 'utf8mb4' throws an error. Adding collation: 'UTF8MB4_GENERAL_CI' also throws an error.

In order to work, you need to use charset: 'UTF8MB4_GENERAL_CI'.

Finally, it looks like if you pass charsetNumber: 12345678 you get Collations.fromIndex(224); // UTF8MB4_UNICODE_CI

In my humble opinion, charset: 'utf8mb4' should work and use collation: 'UTF8MB4_GENERAL_CI' as the default. Also, collations should be specified with collation:, not charset.

`ER_ACCESS_DENIED_ERROR` behaves differently from mysql/mysql2

In mysql2, both the promise and callback createPool are synchronous and throw the error early on ACCESS_DENIED; in mariadb you have to wait until first use, and then a timeout error is thrown.

"use strict";
require("make-promises-safe");
const mariadb = require("mariadb/promise");
//const mariadb = require("mysql2/promise");
const pool = mariadb.createPool({
     user:"Jimmy", 
     connectionLimit: 5,
});
async function test() {
  try {
	const rows = await pool.query("SELECT now()");
	console.log(rows);
  } catch (err) {
	throw err;
  } finally {
	if (pool) return pool.end();
  }
}
test();

/* in mysql2

{ Error: Access denied for user 'Jimmy'@'127.0.0.1' (using password: YES)
    at Packet.asError (/home/Projects/test/node_modules/mysql2/lib/packets/packet.js:684:17)
    at ClientHandshake.execute (/home/Projects/test/node_modules/mysql2/lib/commands/command.js:28:26)
    at PoolConnection.handlePacket (/home/Projects/test/node_modules/mysql2/lib/connection.js:455:32)
    at PacketParser.onPacket (/home/Projects/test/node_modules/mysql2/lib/connection.js:73:18)
    at PacketParser.executeStart (/home/Projects/test/node_modules/mysql2/lib/packet_parser.js:75:16)
    at Socket.<anonymous> (/home/Projects/test/node_modules/mysql2/lib/connection.js:80:31)
    at Socket.emit (events.js:189:13)
    at addChunk (_stream_readable.js:288:12)
    at readableAddChunk (_stream_readable.js:269:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
  code: 'ER_ACCESS_DENIED_ERROR',
  errno: 1045,
  sqlState: '28000',
  sqlMessage:
   "Access denied for user 'Jimmy'@'127.0.0.1' (using password: YES)" }
*/

/* in mariadb

{ Error: retrieve connection from pool timeout
    at Object.module.exports.createError (/home/Projects/test/node_modules/mariadb/lib/misc/errors.js:55:10)
    at rejectTimeout (/home/Projects/test/node_modules/mariadb/lib/pool.js:268:16)
    at Timeout.rejectAndResetTimeout [as _onTimeout] (/home/Projects/test/node_modules/mariadb/lib/pool.js:288:5)
    at listOnTimeout (timers.js:326:17)
    at processTimers (timers.js:268:5)
  fatal: false,
  errno: 45028,
  sqlState: 'HY000',
  code: 'ER_GET_CONNECTION_TIMEOUT' }
*/

Connection pooling: Cannot read property 'threadId' of undefined

Occasionally when running multiple queries asynchronously I get this error. It boils down to line 431 in pool.js, handleTaskQueue()

    const task = taskQueue.shift();
    if (task) {
      const conn = idleConnections.shift();
      activeConnections[conn.threadId] = conn;

      resetTimeoutToNextTask();

The return value of idleConnections.shift() is undefined, which throws this exception. Restarting my Node application a few times, it happens about 60% of the time and only when several queries are running. My code looks like:

exports.get = async function (query, values) {
    let connection;
    try {
        connection = await Pool.getConnection();
        const response = await connection.query(query, values);
        connection.end();
        return response;
    } catch (err) {
        if (connection) {
            connection.end();
        }
        return { err: 500 };
    }
};
The code that triggers it is two calls to get inside a Promise.all(). Is my use of the connection pool wrong? Thanks.

is deps "npm" still use ?

seems npm deps has some medium severity issue !?
2018-12-10 18-47-34
2018-12-10 18-47-44

but, I cannot find where this driver use npm lib
shoud remove this npm deps ?

Sequelize Warning: please use IANA standard timezone format ('Etc/GMT0')

Upon loading the latest release, the following warning immediately gets thrown on app startup.

warning: please use IANA standard timezone format ('Etc/GMT0')

I'd like to prevent the warning from displaying in the console and continue as I had always been with UTC.

Do I have to do anything?

Connection handling after streaming result set

I've encountered problems when streaming results. I've been using connections obtained from a pool and calling connection.queryStream(...) to get the stream, which was then piped to an HTTP response. Although it's possible to give the connection back to the pool with conn.end(), such a connection often ends up in a weird state: conn.ping() fails. As a workaround I decided to call conn.destroy() whenever the pipe finished. This, however, causes a problem because conn.destroy tries to send an error over the stalled socket, in line 342 of lib/connection.js: socketErrorDispatchToQueries(destroyError);
When I commented this line out, everything worked fine.

As far as I can see it will occasionally be necessary to destroy the connection, e.g. when the HTTP client aborts the HTTP request, so the HTTP response closes before all data has been sent. So conn.destroy() should be made to always work.

The other issue is why the connection (socket) sometimes stalls even for a completely successfully piped response. In my testing this happens more frequently in a framework like Koa than when piping directly to the Node HTTP response stream.

cannot insert blob that exceeds max_allowed_packet

Got a packet bigger than max_allowed_packet bytes

const fs = require('fs');
const mariadb = require('mariadb');
const db = mariadb.createPool({
  host: 'localhost',
  port: 3306,
  user: 'user',
  password: 'password',
  database: 'my_database'
});

let conn;
try {
    conn = await db.getConnection();
    const file = fs.createReadStream('/my/big/file.zip');
    await conn.query({ sql: 'insert into `my_big_table`(`data`) values(?)' }, [file]);
}
catch(err) {
    console.log(err); // Got a packet bigger than 'max_allowed_packet' bytes
}
finally {
    if ( conn )
        conn.end();
}

I need the equivalent of the C connector's mysql_stmt_send_long_data. Is there any workaround other than increasing max_allowed_packet?

promise friendly implementation for connection.queryStream()

Hi,

I read the driver's documentation and I think that the driver is awesome.

This is a feature request: can you make connection.queryStream() promise/await friendly? The event-based method can easily be promisified, but I think users will be happy to have this feature built in.

For example, the Node.js driver for MongoDB has a cursor that works this way:

while (await cursor.hasNext()){
 let doc = await cursor.next();
}
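
In the meantime, a cursor-like loop is already possible if the stream returned by queryStream() behaves as a standard object-mode Readable, since Readable streams are async-iterable on Node.js 10+ (a sketch under that assumption, inside an async function with an open connection 'conn' and a hypothetical table):

// Each iteration awaits the next row without buffering the whole result set.
for await (const row of conn.queryStream('SELECT * FROM bigTable')) {
  console.log(row);
}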

missing poolCluster 'remove' event

https://github.com/mysqljs/mysql#poolcluster

as shown in the mysql poolCluster documentation, the poolCluster emits a remove event when a node is removed.

// Target Group : SLAVE1-2, Selector : order
// If can't connect to SLAVE1, return SLAVE2. (remove SLAVE1 in the cluster)
poolCluster.on('remove', function (nodeId) {
  console.log('REMOVED NODE : ' + nodeId); // nodeId = SLAVE1
});

mariadb does not support this:

poolCluster.on('remove', function (nodeId) {
TypeError: poolCluster.on is not a function

Array / Batch Insert and Prepared Statements

I need to execute the same insert 100,000s of times. The rows are coming off a transform stream. If I comment out the insert in the following block,

class DbWriter extends Writable {

  constructor(conn, schema, options) {
    super({ objectMode: true });
    const dbWriter = this;

    this.systemInformation;
    this.metadata;
    this.statementCache;

    this.tableName;
    this.insertStatement;
    this.rowCount;
    this.startTime;

    this.schema = schema;
    this.conn = conn;
  }

  async _write(obj, encoding, callback) {
    switch (Object.keys(obj)[0]) {
      case 'systemInformation':
        this.systemInformation = obj.systemInformation;
        break;
      case 'metadata':
        this.metadata = obj.metadata;
        this.statementCache = await createTables(this.conn, this.schema, this.metadata);
        break;
      case 'table':
        const elapsedTime = new Date().getTime() - this.startTime;
        if (this.tableName !== undefined) {
          console.log(`${new Date().toISOString()}: Table "${this.tableName}". Rows ${this.rowCount}. Elaspsed Time ${Math.round(elapsedTime)}ms. Throughput ${Math.round((this.rowCount/Math.round(elapsedTime)) * 1000)} rows/s.\n`);
        }
        this.tableName = obj.table;
        this.insertStatement = this.statementCache[this.tableName];
        this.rowCount = 0;
        this.startTime = new Date().getTime();
        break;
      case 'data':
        // const results = await this.conn.query(this.insertStatement, obj.data);
        this.rowCount++;
        break;
      default:
    }
    callback();
  }
}

I get the following through put...

C:\Development\YADAMU\MariaDB>node client\import.js --USERNAME=root --HOSTNAME=192.168.1.250 --PORT=3307 --PASSWORD=oracle --DATABASE=mysql --File=..\JSON\Oracle\18c\SH_DATA_ONLY.json touser=HR1
2018-10-10T14:13:07.475Z: Table "COSTS". Rows 0. Elaspsed Time 0ms. Throughput NaN rows/s.

2018-10-10T14:13:14.993Z: Table "SALES". Rows 918843. Elaspsed Time 7481ms. Throughput 122824 rows/s.

2018-10-10T14:13:15.095Z: Table "TIMES". Rows 1826. Elaspsed Time 66ms. Throughput 27667 rows/s.

2018-10-10T14:13:15.130Z: Table "CHANNELS". Rows 5. Elaspsed Time 0ms. Throughput Infinity rows/s.

2018-10-10T14:13:15.135Z: Table "PRODUCTS". Rows 72. Elaspsed Time 4ms. Throughput 18000 rows/s.

2018-10-10T14:13:15.137Z: Table "COUNTRIES". Rows 23. Elaspsed Time 1ms. Throughput 23000 rows/s.

2018-10-10T14:13:16.571Z: Table "CUSTOMERS". Rows 55500. Elaspsed Time 1433ms. Throughput 38730 rows/s.

2018-10-10T14:13:16.647Z: Table "PROMOTIONS". Rows 503. Elaspsed Time 41ms. Throughput 12268 rows/s.

As can be seen rows are coming in at a reasonable rate...

However with the insert active...

:\Development\YADAMU\MariaDB>node client\transform2.js --USERNAME=root --HOSTNAME=192.168.1.250 --PORT=3307 --PASSWORD=oracle --DATABASE=mysql --File=..\JSON\Oracle\18c\SH_DATA_ONLY.json touser=HR1
2018-10-10T06:55:21.560Z: Table "COSTS". Rows 0. Elaspsed Time 1ms. Throughput 0 rows/s.

2018-10-10T08:04:50.436Z: Table "SALES". Rows 918843. Elaspsed Time 4168828ms. Throughput 220 rows/s.

2018-10-10T08:04:59.812Z: Table "TIMES". Rows 1826. Elaspsed Time 9320ms. Throughput 196 rows/s.

2018-10-10T08:04:59.899Z: Table "CHANNELS". Rows 5. Elaspsed Time 22ms. Throughput 227 rows/s.

2018-10-10T08:05:00.248Z: Table "PRODUCTS". Rows 72. Elaspsed Time 347ms. Throughput 207 rows/s.

^C

I am new to Maria DB so I may be doing something really dumb, in which case apologies in advance...

The tables have no indexes, and the inserts supply a value for each column. The tables have around 20 columns each.

FieldInfo.type is string and not Types

Using TypeScript.

I want to do some type casting when returning data, so I use the typeCast function that is available in the createPool options.

My problem is that the parameter FieldInfo.type is typed as mariadb.Types, which is an enum with values such as TINY = 0x01, // aka TINYINT, 1 byte

But FieldInfo.type is actually of type string, which causes TypeScript to complain.

Example

const pool = createPool({
  host: 'localhost',
  user: 'root',
  password: '',
  database: 'testdb',
  connectionLimit: 5,
  typeCast: (field: FieldInfo, useDefaultTypeCasting) => {
    if (field.type === Types.BIT || field.type === Types.TINY) {
      // we never get here because Types.BIT is a number and field.type is a string 'BIT'
      return field.int() === 1;
    }
    return useDefaultTypeCasting();
  }
});

How to fix:

in the TypeScript definitions, introduce a new string-based enum for FieldInfo.type.

Placeholder not working "?"

I am using MariaDB 10.3 with my React.js and Express app. It continuously gives a syntax error near the placeholder "?". Please help.

NPM Problem

I am not able to install the package in my Node project.

Improved Pooling

Read that knex.js and supposedly other DB drivers are switching over to tarn.js for their resource pooling.
https://www.npmjs.com/package/tarn (Notice the surge in weekly downloads)

Could this potentially be more beneficial for pooling?

ER_NOT_ALLOWED_COMMAND

When setting permitLocalInfile: true and pipelining: true at the same time, the LOAD DATA INFILE command is not allowed.
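
A configuration that would be expected to work, assuming the two options are mutually exclusive as the report implies (hypothetical credentials and file path; pipelining left disabled while LOCAL INFILE is needed; inside an async function):

const mariadb = require('mariadb');

const conn = await mariadb.createConnection({
  host: 'localhost',
  user: 'root',
  password: 'secret',
  permitLocalInfile: true,   // allow LOAD DATA LOCAL INFILE
  pipelining: false          // keep pipelining off while local infile is in use
});
await conn.query("LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE myTable");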

Integration with Sequelize

This DB connector needs to be integrated with Sequelize, the most popular MariaDB ORM.

We are going to need...

  1. typecast API (preferably for both query / execute)
  2. prepared statements; not a must, but all other dialects now support this and we are going to implement them everywhere if possible
  3. a supportBigNumbers / bigNumberStrings-like API
    @sushantdhiman

Hmm, I also see Node.js 7.6+ is required???

Piping content to Large Objects

Are there any plans to provide a mechanism for doing this, e.g. uploading a large file without having to buffer it in memory? With Oracle you can do this using their CLOB object, and Postgres allows content to be streamed to stdin, which can be used with their COPY command.

Sequelize - dialectOptions decimalNumbers: true, DECIMAL returning String not Float

Refer to this thread: sequelize/sequelize#7465

dialectOptions: {
    decimalNumbers: true
}

cities.js (Model)
I shouldn't have to create a getter method for my DECIMAL type columns (latitude and longitude), to return a number instead of a string after setting dialectOptions.decimalNumbers to true in Sequelize.

	var city = db.define('cities', {
        id: {
            type: DataTypes.INTEGER,
            autoIncrement: true,
            primaryKey: true,
            allowNull: false
        },
		zip: {
			type: DataTypes.STRING(20),
			allowNull: false
		},
		country: {
			type: DataTypes.STRING(2),
			allowNull: true	
		},
		city: {
			type: DataTypes.STRING(200),
			allowNull: false
		},
		state: {
			type: DataTypes.STRING(200),
			allowNull: false,
		},
		latitude: {
			type: DataTypes.DECIMAL(11, 4),
			allowNull: false,
                       get() {
                           return parseFloat(this.getDataValue('latitude'))
                       }
		},
		longitude: {
			type: DataTypes.DECIMAL(11, 4),
			allowNull: false,
                       get() {
                           return parseFloat(this.getDataValue('longitude'))
                       }
		},
		location: {
	        type: db.Sequelize.GEOMETRY('POINT'),
	        allowNull: false
	    }
    }, 

Default date/time/timezone handling

Thanks for your work on this library! I haven't used it yet, but it looks interesting.

I'm wondering how dates, times, and timezones are handled by default in this project. In my experience, people tend to stick with the defaults (at least until the bugs start to show up), so it's important that they are sane. The existing mysql libs don't do a good job here. In my opinion, the defaults should be:

  1. Dates and datetimes are returned as string (as opposed to Date objects) since they do not have timezone info
  2. Timestamps should be parsed as UTC and returned as Date objects. This works because of the next point:
  3. The mysql session timezone should be set to UTC by default ("SET time_zone='+00:00';")

What do you think?

access denied for user 'user'@'localhost' (using password: no)

When I am trying to connect to database, I always get this error:

Error from connection - Error: (conn=64, no: 1045, SQLState: 28000) Access denied for user 'admin'@'localhost' (using password: NO)
(node:61034) UnhandledPromiseRejectionWarning: TypeError: mariadb.query is not a function
at Object.query (/Users/ky/Projects/test/server/db.js:28:28)
at processTicksAndRejections (internal/process/task_queues.js:86:5)
(node:61034) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)

I executed the mysql_secure_installation command. Everything went fine. The database exists. But I still get this error...
My code:

const mariadb = require('mariadb')

let connection = null

module.exports = {
  async connect() {
    try {
      connection = await mariadb.createConnection({
        user:'admin',
        password: 'password',
        database: 'dbName',
        host: 'localhost'
      })
    } catch (error) {
      console.log(`Error from connection - ${error}`)
    }
  },
  /**
   * Executes sql and returns result
   * @param {String} sql SQL to execute
   * @param {Array<String>} values placeholder values
   */
  async query(sql, values) {
    if (connection) {
      return await mariadb.query(sql, values)
    } else {
      await this.connect()
      return await mariadb.query(sql, values)
    }
  }
}

Can not find 'geojson' in TypeScript

Since I updated to 2.0.5, TypeScript no longer transpiles because it cannot find 'geojson'.

node_modules/mariadb/types/index.d.ts:8:23 - error TS2688: Cannot find type definition file for 'geojson'.

8 /// <reference types="geojson" />
                        ~~~~~~~

node_modules/mariadb/types/index.d.ts:12:26 - error TS2307: Cannot find module 'geojson'.

12 import { Geometry } from 'geojson';
                            ~~~~~~~~~


Found 2 errors.

To get rid of both errors, geojson and @types/geojson need to be installed.
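
For reference, that install would look like:

$ npm install geojson @types/geojson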

Issue with Load Balancer - HAProxy, ProxySQL

Hello Team,

I am trying to connect to a MariaDB database via a load balancer (MaxScale) using the mariadb npm module, but when we run the code below we get an error. When trying without the load balancer it works perfectly.

Code :

const mariadb = require('mariadb');
const pool = mariadb.createPool({host: 'x.x.x.x', user: 'test_usr', password: '*****', port: '3307', connectionLimit: 5});

async function asyncFunction() {
  let conn;
  try {
        conn = await pool.getConnection();
        const rows = await conn.query("SELECT * from test_db.test_db");
        console.log(rows); //[ {val: 1}, meta: ... ]
//      const res = await conn.query("INSERT INTO test_db.test_db value (?, ?)", [1, "mariadb"]);
//      console.log(res); // { affectedRows: 1, insertId: 1, warningStatus: 0 }

  } catch (err) {
        throw err;
  } finally {
        if (conn) return conn.end();
  }
}

Error :

node mariasql_demo.js

events.js:183 
throw er; // Unhandled 'error' event 
^ 
Error: Host '192.168.5.13' is not allowed to connect to this MariaDB server 

I am able to connect using MySQL Client. Also able to connect with mysql npm module.

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "192.168.5.19",
  user: "test_usr",
  password: "*****",
  port: "3307",
  database: "test_db"
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM test_db", function (err, result, fields) {
    if (err) throw err;
    console.log(result);
  });
});

handling query result

Hello,

That's probably a dumb question, but how should I handle the result rows of the query in the example below, meaning how do I access them?

Should I first do a rows.length check to see whether there is data, and then rows.map(...) to access/manipulate the data?

Thanks in advance.

const mariadb = require('mariadb');
const pool = mariadb.createPool({host: 'mydb.com', user: 'myUser', connectionLimit: 5});

async function asyncFunction() {
  let conn;
  try {
	conn = await pool.getConnection();
	const rows = await conn.query("SELECT * FROM .. ");
	console.log(rows); //[ {val: 1}, meta: ... ]
        .....
  } catch (err) {
	throw err;
  } finally {
	if (conn) return conn.end();
  }
}
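
One way to handle it (a sketch continuing the example above, with a hypothetical table and columns): the resolved value is an array of row objects keyed by column name, so ordinary array operations apply.

const rows = await conn.query("SELECT id, name FROM account");
if (rows.length === 0) {
  console.log('no rows found');
} else {
  // Each row is a plain object, e.g. { id: 1, name: 'alice' }.
  const names = rows.map(row => row.name);
  console.log(names);
}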

Knex - next major integration

The Sequelize integration was incredible! Ready for Knex?

Wondering if everything already works?
Once I test it out...I'll let you know

Knex
https://www.npmjs.com/package/knex
Weekly Downloads: 283,939

Node.js
The primary target environment for Knex is Node.js, you will need to install the knex library, and then install the appropriate database library: pg for PostgreSQL and Amazon Redshift, mysql for MySQL or MariaDB, sqlite3 for SQLite3, or mssql for MSSQL.

$ npm install knex --save

# Then add one of the following (adding a --save) flag:

$ npm install pg
$ npm install sqlite3
$ npm install mysql
$ npm install mysql2
$ npm install oracle
$ npm install mssql

If you want to use a MariaDB instance, you can use the mysql driver.

Lib/Documentation issue?

Hey,

I'm trying to get your connector to work. I had a look at the docs and pasted the following code:

const mariadb = require('mariadb/callback');
const conn = mariadb.createConnection({host: 'mydb.com', user:'myUser'});
conn.query("SELECT 1 as val", (err, rows) => {
	console.log(rows); //[ {val: 1}, meta: ... ]
	conn.query("INSERT INTO myTable value (?, ?)", [1, "mariadb"], (err, res) => {
	  console.log(res); // { affectedRows: 1, insertId: 1, warningStatus: 0 }
	  conn.end();
	});
}); 

When I try to run the code I'm getting the error:

module.js:549
    throw err;
    ^

Error: Cannot find module 'mariadb/callback'
    at Function.Module._resolveFilename (module.js:547:15)
    at Function.Module._load (module.js:474:25)
    at Module.require (module.js:596:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (C:\Users\dat_m\Desktop\Coding\test-site\index.js:1:79)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)

Even when I try to use the given example:

const mariadb = require('mariadb');
const pool = mariadb.createPool({host: 'mydb.com', user:'myUser', connectionLimit: 5});

async function asyncFunction() {
  let conn;
  try {
	conn = await pool.getConnection();
	const rows = await conn.query("SELECT * FROM account");
	console.log(rows); //<--- no logging output
  } catch (err) {
	throw err;
  } finally {
	if (conn) return conn.end();
  }
}

I'm getting no results and even no error :/
