abstract-leveldown's Introduction

abstract-leveldown

An abstract prototype matching the leveldown API. Useful for extending levelup functionality by providing a replacement for leveldown.

📌 This module will soon be deprecated, because it is superseded by abstract-level.



Background

This module provides a simple base prototype for a key-value store. It has a public API for consumers and a private API for implementors. To implement an abstract-leveldown compliant store, extend its prototype and override the private underscore versions of the methods. For example, to implement put(), override _put() on your prototype.

Where possible, the default private methods are sensible noops that do nothing except invoke the callback asynchronously. For example, _open(callback) will invoke callback on a next tick. Other methods, like _clear(..), have functional defaults. Each method listed below documents whether implementing it is mandatory.

The private methods are always provided with consistent arguments, regardless of what is passed in through the public API. All public methods provide argument checking: if a consumer calls open() without a callback argument they'll get an Error('open() requires a callback argument').

Where optional arguments are involved, private methods receive sensible defaults: a get(key, callback) call translates to _get(key, options, callback) where the options argument is an empty object. These arguments are documented below.

If you are upgrading: please see UPGRADING.md.

Example

Let's implement a simplistic in-memory leveldown replacement:

var AbstractLevelDOWN = require('abstract-leveldown').AbstractLevelDOWN
var util = require('util')

// Constructor
function FakeLevelDOWN () {
  AbstractLevelDOWN.call(this)
}

// Our new prototype inherits from AbstractLevelDOWN
util.inherits(FakeLevelDOWN, AbstractLevelDOWN)

FakeLevelDOWN.prototype._open = function (options, callback) {
  // Initialize a memory storage object
  this._store = {}

  // Use nextTick to be a nice async citizen
  this._nextTick(callback)
}

FakeLevelDOWN.prototype._serializeKey = function (key) {
  // As an example, prefix all input keys with an exclamation mark.
  // Below methods will receive serialized keys in their arguments.
  return '!' + key
}

FakeLevelDOWN.prototype._put = function (key, value, options, callback) {
  this._store[key] = value
  this._nextTick(callback)
}

FakeLevelDOWN.prototype._get = function (key, options, callback) {
  var value = this._store[key]

  if (value === undefined) {
    // 'NotFound' error, consistent with LevelDOWN API
    return this._nextTick(callback, new Error('NotFound'))
  }

  this._nextTick(callback, null, value)
}

FakeLevelDOWN.prototype._del = function (key, options, callback) {
  delete this._store[key]
  this._nextTick(callback)
}

Now we can use our implementation with levelup:

var levelup = require('levelup')

var db = levelup(new FakeLevelDOWN())

db.put('foo', 'bar', function (err) {
  if (err) throw err

  db.get('foo', function (err, value) {
    if (err) throw err

    console.log(value) // 'bar'
  })
})

See memdown if you are looking for a complete in-memory replacement for leveldown.

Public API For Consumers

db = constructor(..)

Constructors typically take a location argument pointing to a location on disk where the data will be stored. Since not all implementations are disk-based and some are non-persistent, implementors are free to take zero or more arguments in their constructor.

db.status

A read-only property. An abstract-leveldown compliant store can be in one of the following states:

  • 'new' - newly created, not opened or closed
  • 'opening' - waiting for the store to be opened
  • 'open' - successfully opened the store, available for use
  • 'closing' - waiting for the store to be closed
  • 'closed' - store has been successfully closed, should not be used.

db.supports

A read-only manifest. Might be used like so:

if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}

if (db.supports.bufferKeys && db.supports.promises) {
  await db.put(Buffer.from('key'), 'value')
}

db.open([options, ]callback)

Open the store. The callback function will be called with no arguments when the store has been successfully opened, or with a single error argument if the open operation failed for any reason.

The optional options argument may contain:

  • createIfMissing (boolean, default: true): If true and the store doesn't exist it will be created. If false and the store doesn't exist, callback will receive an error.
  • errorIfExists (boolean, default: false): If true and the store exists, callback will receive an error.

Not all implementations support the above options.
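
For example, to fail when the store already exists (a minimal sketch; as noted above, option support varies per implementation):

db.open({ createIfMissing: true, errorIfExists: true }, function (err) {
  if (err) throw err

  // The store is now open and ready for use
})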

db.close(callback)

Close the store. The callback function will be called with no arguments if the operation is successful or with a single error argument if closing failed for any reason.

db.get(key[, options], callback)

Get a value from the store by key. The optional options object may contain:

  • asBuffer (boolean, default: true): Whether to return the value as a Buffer. If false, the returned type depends on the implementation.

The callback function will be called with an Error if the operation failed for any reason, including if the key was not found. If successful the first argument will be null and the second argument will be the value.
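
For example (a minimal sketch):

db.get('foo', { asBuffer: false }, function (err, value) {
  if (err) throw err

  console.log(value) // E.g. 'bar'; the exact type depends on the implementation
})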

db.getMany(keys[, options][, callback])

Get multiple values from the store by an array of keys. The optional options object may contain:

  • asBuffer (boolean, default: true): Whether to return the values as Buffers. If false, the returned type depends on the implementation.

The callback function will be called with an Error if the operation failed for any reason. If successful the first argument will be null and the second argument will be an array of values in the same order as keys. If a key was not found, the relevant value will be undefined.

If no callback is provided, a promise is returned.
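
For example (a minimal sketch, assuming 'beep' does not exist in the store):

db.getMany(['foo', 'beep'], { asBuffer: false }, function (err, values) {
  if (err) throw err

  console.log(values) // ['bar', undefined]
})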

db.put(key, value[, options], callback)

Store a new entry or overwrite an existing entry. There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the operation is successful or with an Error if putting failed for any reason.

db.del(key[, options], callback)

Delete an entry. There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the operation is successful or with an Error if deletion failed for any reason.

db.batch(operations[, options], callback)

Perform multiple put and/or del operations in bulk. The operations argument must be an Array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.

Each operation is contained in an object having the following properties: type, key, value, where the type is either 'put' or 'del'. In the case of 'del' the value property is ignored.

There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the batch is successful or with an Error if the batch failed for any reason.
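
For example (a minimal sketch):

db.batch([
  { type: 'put', key: 'foo', value: 'bar' },
  { type: 'put', key: 'beep', value: 'boop' },
  { type: 'del', key: 'baz' }
], function (err) {
  if (err) throw err
})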

db.batch()

Returns a chainedBatch.
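
The same operations as in the array form above might be queued and committed like so (a minimal sketch; see chainedBatch below):

db.batch()
  .put('foo', 'bar')
  .put('beep', 'boop')
  .del('baz')
  .write(function (err) {
    if (err) throw err
  })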

db.iterator([options])

Returns an iterator. Accepts the following range options:

  • gt (greater than), gte (greater than or equal) define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries iterated will be the same.
  • lt (less than), lte (less than or equal) define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries iterated will be the same.
  • reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
  • limit (number, default: -1): limit the number of entries collected by this iterator. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be returned instead of the lowest keys.

Note: Zero-length strings, buffers and arrays as well as null and undefined are invalid as keys, yet valid as range options. These types are significant in encodings like bytewise and charwise as well as some underlying stores like IndexedDB. Consumers of an implementation should assume that { gt: undefined } is not the same as {}. An implementation can choose to:

  • Serialize or encode these types to make them meaningful
  • Have no defined behavior (moving the concern to a higher level)
  • Delegate to an underlying store (moving the concern to a lower level).

If you are an implementor, a final note: the abstract test suite does not test these types. Whether they are supported or how they sort is up to you; add custom tests accordingly.

In addition to range options, iterator() takes the following options:

  • keys (boolean, default: true): whether to return the key of each entry. If set to false, calls to iterator.next(callback) will yield keys with a value of undefined.
  • values (boolean, default: true): whether to return the value of each entry. If set to false, calls to iterator.next(callback) will yield values with a value of undefined.
  • keyAsBuffer (boolean, default: true): Whether to return the key of each entry as a Buffer. If false, the returned type depends on the implementation.
  • valueAsBuffer (boolean, default: true): Whether to return the value of each entry as a Buffer.

Lastly, an implementation is free to add its own options.
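
For example, to create an iterator over keys from 'a' up to (but not including) 'x', yielding strings rather than Buffers (a minimal sketch; see the iterator section below for how to consume it):

var iterator = db.iterator({
  gte: 'a',
  lt: 'x',
  keyAsBuffer: false,
  valueAsBuffer: false
})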

db.clear([options, ]callback)

This method is experimental. Not all implementations support it yet.

Delete all entries or a range. Not guaranteed to be atomic. Accepts the following range options (with the same rules as on iterators):

  • gt (greater than), gte (greater than or equal) define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
  • lt (less than), lte (less than or equal) define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to remove the last N records.
  • limit (number, default: -1): limit the number of entries to be deleted. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be deleted instead of the lowest keys.

If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with an Error if it failed for any reason.
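
For example, to delete all entries with keys below 'foo' (a minimal sketch):

db.clear({ lt: 'foo' }, function (err) {
  if (err) throw err
})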

chainedBatch

chainedBatch.put(key, value[, options])

Queue a put operation on this batch. This may throw if key or value is invalid. There are no options by default but implementations may add theirs.

chainedBatch.del(key[, options])

Queue a del operation on this batch. This may throw if key is invalid. There are no options by default but implementations may add theirs.

chainedBatch.clear()

Clear all queued operations on this batch.

chainedBatch.write([options, ]callback)

Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the batch is successful or with an Error if the batch failed for any reason.

After write has been called, no further operations are allowed.

chainedBatch.db

A reference to the db that created this chained batch.

iterator

An iterator allows you to iterate the entire store or a range. It operates on a snapshot of the store, created at the time db.iterator() was called. This means reads on the iterator are unaffected by simultaneous writes. Most but not all implementations can offer this guarantee.

Iterators can be consumed with for await...of or by manually calling iterator.next() in succession. In the latter mode, iterator.end() must always be called. In contrast, finishing, throwing or breaking from a for await...of loop automatically calls iterator.end().

An iterator reaches its natural end in the following situations:

  • The end of the store has been reached
  • The end of the range has been reached
  • The last iterator.seek() was out of range.

An iterator keeps track of whether a next() is in progress and whether an end() has been called. Concurrent next() calls are not allowed, and neither next() nor end() may be called after end() has been called, but end() is allowed while a next() is in progress.

for await...of iterator

Yields arrays containing a key and value. The type of key and value depends on the options passed to db.iterator().

try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}

Note for implementors: this uses iterator.next() and iterator.end() under the hood so no further method implementations are needed to support for await...of.

iterator.next([callback])

Advance the iterator and yield the entry at that key. If an error occurs, the callback function will be called with an Error. Otherwise, the callback receives null, a key and a value. The type of key and value depends on the options passed to db.iterator(). If the iterator has reached its natural end, both key and value will be undefined.

If no callback is provided, a promise is returned for either an array (containing a key and value) or undefined if the iterator reached its natural end.

Note: Always call iterator.end(), even if you received an error and even if the iterator reached its natural end.
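
A manual read loop might look like this (a minimal sketch; drain is just an illustrative helper):

function drain (iterator) {
  iterator.next(function (err, key, value) {
    if (err || key === undefined) {
      // Error or natural end: always end the iterator
      return iterator.end(function (err2) {
        if (err || err2) throw err || err2
      })
    }

    console.log(key, value)
    drain(iterator)
  })
}

drain(db.iterator())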

iterator.seek(target)

Seek the iterator to a given key or the closest key. Subsequent calls to iterator.next() (including implicit calls in a for await...of loop) will yield entries with keys equal to or larger than target, or equal to or smaller than target if the reverse option passed to db.iterator() was true.

If range options like gt were passed to db.iterator() and target does not fall within that range, the iterator will reach its natural end.

Note: At the time of writing, leveldown is the only known implementation to support seek(). In other implementations, it is a noop.
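
For example (a minimal sketch):

var iterator = db.iterator()

// The next entry yielded will have a key equal to or larger than 'foo'
iterator.seek('foo')

iterator.next(function (err, key, value) {
  // ..
})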

iterator.end([callback])

End iteration and free up underlying resources. The callback function will be called with no arguments on success or with an Error if ending failed for any reason.

If no callback is provided, a promise is returned.

iterator.db

A reference to the db that created this iterator.

Type Support

The following applies to any method above that takes a key argument or option: all implementations must support a key of type String and should support a key of type Buffer. A key may not be null, undefined, a zero-length Buffer, zero-length string or zero-length array.

The following applies to any method above that takes a value argument or option: all implementations must support a value of type String or Buffer. A value may not be null or undefined due to preexisting significance in streams and iterators.

Support of other key and value types depends on the implementation as well as its underlying storage. See also db._serializeKey and db._serializeValue.

Private API For Implementors

Each of these methods will receive exactly the number and order of arguments described. Optional arguments will receive sensible defaults. All callbacks are error-first and must be asynchronous.

If an operation within your implementation is synchronous, be sure to invoke the callback on a next tick using queueMicrotask, process.nextTick or some other means of microtask scheduling. For convenience, the prototypes of AbstractLevelDOWN, AbstractIterator and AbstractChainedBatch include a _nextTick method that is compatible with node and browsers.

db = AbstractLevelDOWN([manifest])

The constructor. Sets the .status to 'new'. Optionally takes a manifest object which abstract-leveldown will enrich:

AbstractLevelDOWN.call(this, {
  bufferKeys: true,
  snapshots: true,
  // ..
})

db._open(options, callback)

Open the store. The options object will always have the following properties: createIfMissing, errorIfExists. If opening failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _open() is a sensible noop and invokes callback on a next tick.

db._close(callback)

Close the store. If closing failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _close() is a sensible noop and invokes callback on a next tick.

db._serializeKey(key)

Convert a key to a type supported by the underlying storage. All methods below that take a key argument or option - including db._iterator() with its range options and iterator._seek() with its target argument - will receive serialized keys. For example, if _serializeKey is implemented as:

FakeLevelDOWN.prototype._serializeKey = function (key) {
  return Buffer.isBuffer(key) ? key : String(key)
}

Then db.get(2, callback) translates into db._get('2', options, callback). Similarly, db.iterator({ gt: 2 }) translates into db._iterator({ gt: '2', ... }) and iterator.seek(2) translates into iterator._seek('2').

If the underlying storage supports any JavaScript type or if your implementation wraps another implementation, it is recommended to make _serializeKey an identity function (returning the key as-is). Serialization is irreversible, unlike encoding as performed by implementations like encoding-down. This also applies to _serializeValue.

The default _serializeKey() is an identity function.

db._serializeValue(value)

Convert a value to a type supported by the underlying storage. All methods below that take a value argument or option will receive serialized values. For example, if _serializeValue is implemented as:

FakeLevelDOWN.prototype._serializeValue = function (value) {
  return Buffer.isBuffer(value) ? value : String(value)
}

Then db.put(key, 2, callback) translates into db._put(key, '2', options, callback).

The default _serializeValue() is an identity function.

db._get(key, options, callback)

Get a value by key. The options object will always have the following properties: asBuffer. If the key does not exist, call the callback function with a new Error('NotFound'). Otherwise call callback with null as the first argument and the value as the second.

The default _get() invokes callback on a next tick with a NotFound error. It must be overridden.

db._getMany(keys, options, callback)

This new method is optional for the time being. To enable its tests, set the getMany option of the test suite to true.

Get multiple values by an array of keys. The options object will always have the following properties: asBuffer. If an error occurs, call the callback function with an Error. Otherwise call callback with null as the first argument and an array of values as the second. If a key does not exist, set the relevant value to undefined.

The default _getMany() invokes callback on a next tick with an array of values that is equal in length to keys and is filled with undefined. It must be overridden to support getMany() but this is currently an opt-in feature. If the implementation does support getMany() then db.supports.getMany must be set to true via the constructor.
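
Continuing the FakeLevelDOWN example above, a minimal _getMany sketch might look like this:

FakeLevelDOWN.prototype._getMany = function (keys, options, callback) {
  var store = this._store

  var values = keys.map(function (key) {
    // Keys arrive serialized; undefined signals a missing key
    return store[key]
  })

  this._nextTick(callback, null, values)
}

Remember to declare support in the manifest passed to the constructor, e.g. AbstractLevelDOWN.call(this, { getMany: true }).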

db._put(key, value, options, callback)

Store a new entry or overwrite an existing entry. There are no default options but options will always be an object. If putting failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _put() invokes callback on a next tick. It must be overridden.

db._del(key, options, callback)

Delete an entry. There are no default options but options will always be an object. If deletion failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _del() invokes callback on a next tick. It must be overridden.

db._batch(operations, options, callback)

Perform multiple put and/or del operations in bulk. The operations argument is always an Array containing a list of operations to be executed sequentially, although as a whole they should be performed as an atomic operation. Each operation is guaranteed to have at least type and key properties. There are no default options but options will always be an object. If the batch failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _batch() invokes callback on a next tick. It must be overridden.

db._chainedBatch()

The default _chainedBatch() returns a functional AbstractChainedBatch instance that uses db._batch(array, options, callback) under the hood. The prototype is available on the main exports for you to extend. If you want to implement chainable batch operations in a different manner then you should extend AbstractChainedBatch and return an instance of this prototype in the _chainedBatch() method:

var AbstractChainedBatch = require('abstract-leveldown').AbstractChainedBatch
var inherits = require('util').inherits

function ChainedBatch (db) {
  AbstractChainedBatch.call(this, db)
}

inherits(ChainedBatch, AbstractChainedBatch)

FakeLevelDOWN.prototype._chainedBatch = function () {
  return new ChainedBatch(this)
}

db._iterator(options)

The default _iterator() returns a noop AbstractIterator instance. It must be overridden by extending AbstractIterator (available on the main module exports) and returning an instance of this prototype from the _iterator(options) method.

The options object will always have the following properties: reverse, keys, values, limit, keyAsBuffer and valueAsBuffer.
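
Continuing the FakeLevelDOWN example, a naive in-memory iterator might look like the sketch below. It honors only the reverse option, ignores the range options, limit, keys, values and buffer conversion, and offers no snapshot guarantee:

var AbstractIterator = require('abstract-leveldown').AbstractIterator
var util = require('util')

function FakeIterator (db, options) {
  AbstractIterator.call(this, db)

  // Capture the keys up front, in sorted order
  this._keys = Object.keys(db._store).sort()
  this._pos = options.reverse ? this._keys.length - 1 : 0
  this._step = options.reverse ? -1 : 1
}

util.inherits(FakeIterator, AbstractIterator)

FakeIterator.prototype._next = function (callback) {
  if (this._pos < 0 || this._pos >= this._keys.length) {
    // Natural end: yield neither key nor value
    return this._nextTick(callback)
  }

  var key = this._keys[this._pos]
  var value = this.db._store[key]

  this._pos += this._step
  this._nextTick(callback, null, key, value)
}

FakeLevelDOWN.prototype._iterator = function (options) {
  return new FakeIterator(this, options)
}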

db._clear(options, callback)

This method is experimental and optional for the time being. To enable its tests, set the clear option of the test suite to true.

Delete all entries or a range. Does not have to be atomic. It is recommended (and possibly mandatory in the future) to operate on a snapshot so that writes scheduled after a call to clear() will not be affected.

The default _clear() uses _iterator() and _del() to provide a reasonable fallback, but requires binary key support. It is recommended to implement _clear() with more performant primitives than _iterator() and _del() if the underlying storage has such primitives. Implementations that don't support binary keys must implement their own _clear().

Implementations that wrap another db can typically forward the _clear() call to that db, having transformed range options if necessary.

The options object will always have the following properties: reverse and limit.

iterator = AbstractIterator(db)

The first argument to this constructor must be an instance of your AbstractLevelDOWN implementation. The constructor will set iterator.db which is used to access db._serialize* and ensures that db will not be garbage collected in case there are no other references to it.

iterator._next(callback)

Advance the iterator and yield the entry at that key. If nexting failed, call the callback function with an Error. Otherwise, call callback with null, a key and a value.

The default _next() invokes callback on a next tick. It must be overridden.

iterator._seek(target)

Seek the iterator to a given key or the closest key. This method is optional.

iterator._end(callback)

Free up underlying resources. This method is guaranteed to only be called once. If ending failed, call the callback function with an Error. Otherwise call callback without any arguments.

The default _end() invokes callback on a next tick. Overriding is optional.

chainedBatch = AbstractChainedBatch(db)

The first argument to this constructor must be an instance of your AbstractLevelDOWN implementation. The constructor will set chainedBatch.db which is used to access db._serialize* and ensures that db will not be garbage collected in case there are no other references to it.

chainedBatch._put(key, value, options)

Queue a put operation on this batch. There are no default options but options will always be an object.

chainedBatch._del(key, options)

Queue a del operation on this batch. There are no default options but options will always be an object.

chainedBatch._clear()

Clear all queued operations on this batch.

chainedBatch._write(options, callback)

The default _write method uses db._batch. If the _write method is overridden it must atomically commit the queued operations. There are no default options but options will always be an object. If committing fails, call the callback function with an Error. Otherwise call callback without any arguments.

Test Suite

To prove that your implementation is abstract-leveldown compliant, include the abstract test suite in your test.js (or similar):

const test = require('tape')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')

suite({
  test: test,
  factory: function () {
    return new YourDOWN()
  }
})

This is the most minimal setup. The test option must be a function that is API-compatible with tape. The factory option must be a function that returns a unique and isolated database instance. The factory will be called many times by the test suite.

If your implementation is disk-based we recommend using tempy (or similar) to create unique temporary directories. Your setup could look something like:

const test = require('tape')
const tempy = require('tempy')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')

suite({
  test: test,
  factory: function () {
    return new YourDOWN(tempy.directory())
  }
})

Excluding tests

As not every implementation can be fully compliant due to limitations of its underlying storage, some tests may be skipped. For example, to skip snapshot tests:

suite({
  // ..
  snapshots: false
})

This also serves as a signal to users of your implementation. The following options are available:

  • bufferKeys: set to false if binary keys are not supported by the underlying storage
  • seek: set to false if your iterator does not implement _seek
  • clear: defaults to false until a next major release. Set to true if your implementation either implements _clear() itself or is suitable to use the default implementation of _clear() (which requires binary key support).
  • getMany: defaults to false until a next major release. Set to true if your implementation implements _getMany().
  • snapshots: set to false if any of the following is true:
    • Reads don't operate on a snapshot
    • Snapshots are created asynchronously
  • createIfMissing and errorIfExists: set to false if db._open() does not support these options.

This metadata will be moved to manifests (db.supports) in the future.

Setup and teardown

To perform (a)synchronous work before or after each test, you may define setUp and tearDown functions:

suite({
  // ..
  setUp: function (t) {
    t.end()
  },
  tearDown: function (t) {
    t.end()
  }
})

Reusing testCommon

The input to the test suite is a testCommon object. Should you need to reuse testCommon for your own (additional) tests, use the included utility to create a testCommon with defaults:

const test = require('tape')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')

const testCommon = suite.common({
  test: test,
  factory: function () {
    return new YourDOWN()
  }
})

suite(testCommon)

The testCommon object will have all the properties described above: test, factory, setUp, tearDown and the skip options. You might use it like so:

test('setUp', testCommon.setUp)

test('custom test', function (t) {
  var db = testCommon.factory()
  // ..
})

test('another custom test', function (t) {
  var db = testCommon.factory()
  // ..
})

test('tearDown', testCommon.tearDown)

Spread The Word

If you'd like to share your awesome implementation with the world, here's what you might want to do:

  • Add an awesome badge to your README: ![level badge](https://leveljs.org/img/badge.svg)
  • Publish your awesome module to npm
  • Send a Pull Request to Level/awesome to advertise your work!

Install

With npm do:

npm install abstract-leveldown

Contributing

Level/abstract-leveldown is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT

abstract-leveldown's People

Contributors

achingbrain, andrewrk, calvinmetcalf, deanlandolt, dependabot[bot], dominictarr, flatheadmill, greenkeeper[bot], greenkeeperio-bot, hden, huan, hugomrdias, juliangruber, kesla, mafintosh, marcuslyons, max-mapper, mcollina, meirionhughes, nolanlawson, ralphtheninja, raynos, rvagg, sandersn, shama, tapppi, timoxley, vweevers, watson


abstract-leveldown's Issues

Some notes on iterator implementation in browser

@maxogden as I closed the pull request I thought I would move my notes on iterator here.

Currently the error test in simple iterator is returning null not undefined
https://github.com/rvagg/node-abstract-leveldown/blob/master/abstract/iterator-test.js#L125

The tests in "Test setup2" fail if the database from simple iterator is not cleared down
https://github.com/rvagg/node-abstract-leveldown/blob/master/abstract/iterator-test.js#L146
Should we have them as different test files with their own setup and teardown?

Rather than have two of us doing the same thing I am leaving the iterator spec with you, but if you want me to have a go, ping me.

verbose mode

I think it would be really useful to have a flag you could turn on when debugging that made leveldown write something like this to stdout:

OPEN "test"
PUT "ÿfooÿhello", "world", "String"
GET "ÿfooÿhello", "world", "String"
GET "ÿfooÿweeee", "Key not found in database [ÿfooÿweeee]", "NotFoundError"

I could monkeypatch to implement this today as a third party module but I wanted feedback on where in the stack something like this could go. I imagine that simple if (verbose) checks would get inlined by v8 etc

Strict abstract leveldown?

Yo!

I've been thinking about an alternative implementation of abstract-leveldown, one that doesn't have nice defaults but instead throws like crazy if there's anything missing.

I've been fooled a couple of times when working on *DOWNs by the defaults as they are today.

API-wise we could do something like

var util = require('util')
  , StrictAbstractLevelDOWN = require('./').StrictAbstractLevelDOWN

// constructor, passes through the 'location' argument to the AbstractLevelDOWN constructor
function FakeLevelDOWN (location) {
  StrictAbstractLevelDOWN.call(this, location)
}

//etc etc

Interested in a PR?

Auto casting to String on set??

I ran into this with my latest LevelDown abstraction:
https://github.com/rvagg/abstract-leveldown/blob/master/abstract-leveldown.js#L87

if (!this._isBuffer(value) && !process.browser)
    value = String(value)

Is there any way we can make that an option and have process.browser default it to true? This way the abstraction can deal with the encoding.

My issue is that my backend server fully supports JSON encoding. Even if I said encoding: 'json' in my options, it's still casting that to a string on set. I'd like to have it just send the raw data to my abstraction and let me deal with the encoding.

If this is ok, I'll work on a PR just wanted to ask first :)

Remove test: serialization of object key in approximateSize()

I propose to remove this abstract test:

test('test _serialize object', function (t) {
  t.plan(3)
  var db = leveldown(testCommon.location())
  db._approximateSize = function (start, end, callback) {
    t.equal(Buffer.isBuffer(start) ? String(start) : start, '[object Object]')
    t.equal(Buffer.isBuffer(end) ? String(end) : end, '[object Object]')
    callback()
  }
  db.approximateSize({}, {}, function (err, val) {
    t.error(err)
  })
})

We have enough coverage on the _serialize functions, including the extensibility of _serializeKey() in combination with approximateSize(). This test has no added value anymore and assumes that object keys are stringified.

genericize test suite to make sense in browser

this almost passes all the leveldown tests (except iterators, havent worked on those yet): https://github.com/maxogden/level.js

there are two fundamental differences though:

  • indexeddb natively supports storing all JS data types (num, bool, string, typed arrays, array buffers etc). with leveldb it's either a string or a buffer, and the test suite is currently set up to always expect either a string or a buffer, which means if you store a bool in the browser the test suite converts it to a string
  • Buffer doesn't exist in the browser. instead of using buffer-browserify (which is fundamentally flawed IMO) i'd rather return ArrayBuffers as they are the equivalent primitive binary data type in browsers. the problem with this is that the test suite currently does a lot of Buffer.toString() checking but toString() returns '[object ArrayBuffer]' on ArrayBuffers. instead you have to do String.fromCharCode.apply(null, new Uint16Array(arraybuffer))

so, is it cool if I add a bunch of conditional browser specific stuff to the test suite? is there a better 'paradigm' we could use for return value checking?

open, close, and open

I don't believe there is a test that opens a db, closes it and then opens it again, came up for me with sqldown

too strict about error messages

this has broken level.js tests, if you upgrade to the latest abstract-leveldown

but for silly reasons like this:

not ok 44 should have correct error message default_stream.js:12
  --- default_stream.js:12
    operator: equal default_stream.js:12
    expected: "NotFound: " default_stream.js:12
    actual:   "NotFound" default_stream.js:12
    at: Test.equal.Test.equals.Test.isEqual.Test.is.Test.strictEqual.Test.strictEquals (http://localhost:9966/test.js:7440:10) default_stream.js:12

it should test with a regular expression t.ok(/^NotFound/.test(err.message))
I'll make a pull request for this later, though, maybe not before the conference.
/cc @maxogden

silently drops batch calls

If db.batch is called and there is no _batch specified it does nothing, and fails silently.

I'd expect it to either

  • throw an error
  • have a sane default that async.map's to _put and _del

Perhaps batch guarantees being atomic. If we are unable to provide a sane default for that, then we should throw an error on a batch call

Put implementation converts `null` to `"null"` not `""`.

There are new tests that assert that when a null or undefined value is put into the database that it will be retrieved as an empty string. I assume this means it should be converted to an empty string before insertion. The _serializeValue method converts using String(value) which converts null to "null" and undefined to "undefined".

Remove gaps from batch array?

If you do db.batch([null]), or any other falsy value, should abstract-leveldown filter it?

I noticed that memdown has its own !array[i] check in _batch(), so I wondered if that's a job for abstract-leveldown. We currently do have a typeof array[i] !== 'object' check, which I didn't git blame yet, but this doesn't catch null.

@juliangruber @ralphtheninja

Clean up test and testBuffer globals

This is an investigative issue, will add points as I find them.

Get rid of test and testBuffer "globals", pass them on as function parameters instead

Try to get rid of testBuffer completely, if possible.

leveldown on AWS lambda

Has anybody attempted to install leveldown on AWS Lambda? If so how did it go? What is the best strategy for installing the binary?

changelog

someone needs to start a changelog...

binary encoding vs. asBuffer

I'm wondering what the exact purpose of the asBuffer option is (and its cousins keyAsBuffer and valueAsBuffer). To me it feels redundant since one can use keyEncoding binary and/or valueEncoding binary.

I wonder what to do when using IndexedDB as a backend which has support for native JS types as values and supports some different types for keys (Level/level-js#48). I know in level.js they even have a raw option. I can't really tell the difference between encoding, asBuffer and raw.

/cc @nolanlawson

AbstractChainedBatch tests should commit and inspect db rather than sniff `_operations` buffer

Nothing about AbstractChainedBatch requires it to keep operations around in memory (in the _operations buffer), but some of its tests sniff this buffer rather than committing these values and reading from the db. The tests are mostly around just verifying the _serialize[Key|Value] behavior, so in this case they would probably be better off hooking _put and _del. I do have to do all this nonsense from outside the tests, but from within we could just hook the _put and _del methods on the batch directly.

Regardless, these tests should also commit these values and verify what ends up in the db w/ collectEntries. The only exception might be test custom _serialize*, since what will happen when this hits the db is undefined behavior. But this just suggests to me we should add companion _deserialize[Key|Value] methods to get us a hook to reliably verify expected results.

/cc @ralphtheninja @juliangruber

What is the next callback structure?

The abstract down and iterator in here don't seem to define what the callback for the iterator's next should be. And the implementation implies the callback args could be anything.

The only place where the structure for the next callback is expected to be a certain way seems to be in the stream converter on levelup: https://github.com/Level/iterator-stream/blob/master/index.js#L25 which has it as: function (err, key, value)

Can it be anything? or should it be function (err, key, value, ...more)?

cc: @ralphtheninja

get rid of process.browser checks

It makes the tests a lot harder to understand. Not exactly sure how we would do it and how it will affect implementations, need time and help from the community to get this right. A suggestion would be to

  1. check the commit history, why were the process.browser checks added in the first place?
  2. if we remove them, what would the consequences be?
  3. rewrite/update implementations that rely on this based in 1. and 2.

See the following comments:

Thoughts on exposing the test suite?

I'm writing a backend for an internal DB and I would like to use the test suite in here in my module so that I know that I'm doing the right thing.

Any thoughts on exposing this test suite in a way that things that use it can also use the test suite?

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because we are using your CI build statuses to figure out when to notify you about breaking changes.

Since we did not receive a CI status on the greenkeeper/initial branch, we assume that you still need to configure it.

If you have already set up a CI for this repository, you might need to check your configuration. Make sure it will run on all new branches. If you don’t want it to run on every branch, you can whitelist branches starting with greenkeeper/.

We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

Once you have installed CI on this repository, you’ll need to re-trigger Greenkeeper’s initial Pull Request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper integration’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

ES6 is ending up in the browserify builds

The const keyword is ending up in the browserify builds for memdown (e.g. see https://wzrd.in/standalone/memdown@latest), and it is breaking various browsers, notably in the Hoodie tests (Hoodie uses PouchDB uses MemDOWN uses AbstractLevelDOWN). pouchdb/pouchdb#4215

I'm not sure what the most elegant way to fix this is, but for the time being I would suggest just removing const from the source, or else making it a build step to only output ES5-compatible Node code.

make location optional

The recent change to validate that a location isn't an empty string was actually a breaking change, as modules like memdown may pass an empty string.

I'm for making location optional as it has shown that enough backends don't need one.

nonErrorValues() test breaks leveldown

A typo was fixed here:

ecd41a7

This actually introduces 40 new tests (leveldown went from 672 tests to 712) that were not being run before and some of them break, see:

https://travis-ci.org/Level/leveldown/builds/60216894#L384

More concretely these tests fail:

https://github.com/Level/abstract-leveldown/blob/master/abstract/put-get-del-test.js#L140-L144

And they fail in this location:

https://github.com/Level/abstract-leveldown/blob/master/abstract/put-get-del-test.js#L50

The result variable is a buffer and it's being compared (===) with an empty string, which obviously fails :)

The question now is where to fix this. Are the tests correctly implemented?

/cc @rvagg @juliangruber

Iterator as Async Function

It ought to be possible to create an async function (function *iterator or function *async_iterator) alternative to leveldown's own .next() wielding iterator. It would likely be beneficial for inclusion.

Snapshot test - do we really need it?

I noticed this test was added recently. It's causing some headaches for me in localstorage-down, but I'm wondering how many other *down authors have actually implemented this thing?

Reading the test, I can understand the problem it's trying to solve. Ideally users should be able to open up a read-only iterator against a database and continue to read from a "snapshot" even as others are writing to it.

However, that's a really, really involved feature (concurrency! transactions!), especially for simple modules like MemDOWN and localstorage-down. And this test doesn't even seem to be very thorough. I could adhere to the letter of the law by just passing this one test, while still not really implementing the proper transactional semantics.

What's the feeling about this? Should *down authors pick-and-choose the tests we know we won't support, or should we rise to the occasion and try to pass tests like this?

Enhance the test suite with PouchDB

With PouchDB and PouchDB Server, we're quickly reaching a level of stability where we can test against the various *DOWN backends and say with confidence that, if there's a bug, it's in that library rather than our own.

In fact I just finished an audit of various server-side *DOWN backends and noticed that a lot of them are failing our test suite. I'm not too surprised, though – with index-js, localstoragedown, and MemDOWN, we occasionally ran into cases where a test failed in PouchDB but not in abstract-leveldown. It's just really hard to catch all the cases, and our test suite has gotten pretty huge (somewhere north of 700 tests now).

Obviously in an ideal world the abstract-leveldown test suite would be sufficient for rooting out these bugs, and I could try to be a better LevelUP citizen and contribute some of the failing tests back to this repo. But I think maybe an easier and more effective solution would be if we just offered a quick way for *DOWN implementers to run the PouchDB test suite against their code. Since we've got PouchDB Server working now, we can even set it up so that it uses their adapter on both the server and the client, which makes for a pretty badass test. (Here's an example of MemDOWN vs. MemDOWN, although sadly it's failing.)

Is this something that would be interesting for the LevelUP community, and if so, how should we go about offering those tests to you? Would a simple bash script that npm installs all the dependencies and runs the test be enough? Or would you prefer something else?

Decide whether gte/gt/lte/lt should take precedence over start/end or ..?

See dominictarr/ltgt#1 for details

Basically there exists a discrepancy between memdown and leveldown where memdown will decide to prefer lte and lt, discarding end if it exists, while leveldown takes the minimum of the options. The same would go for gte, gt and start (and the reverse combinations as well).

One of them should be wrong, we need to decide which and implement conformance tests for it here.

using empty location + open causes weird results

If I do the following:

var leveldown = require('leveldown')
var db = leveldown('')
db.open(console.log.bind(console))

I get the following output:

[Error: IO error: /LOCK: Permission denied]

Which is kind of odd. I propose that we make AbstractLevelDOWN throw an error if a zero length string is given, instead of finding out in open() that we couldn't open that location. We already know it will fail and might as well error in the constructor.

An in-range update of sinon is breaking the build 🚨

Version 4.1.0 of sinon was just published.

Branch: Build failing 🚨
Dependency: sinon
Current Version: 4.0.2
Type: devDependency

This version is covered by your current version range and after updating it in your project the build failed.

sinon is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • continuous-integration/travis-ci/push The Travis CI build failed Details

Commits

The new version differs by 22 commits.

  • c0a71c6 Update docs/changelog.md and set new release id in docs/_config.yml
  • a2b873a Add release documentation for v4.1.0
  • 0a6a660 4.1.0
  • 3b36972 Update History.md and AUTHORS for new release
  • 201a652 Issue 1598 (Feature Request): Implemented sandbox.createStubInstance, tests, and documentation.
  • d49180d Merge pull request #1603 from mroderick/fix-more-markdown
  • 2d2631c Docs: fix pre commit hook
  • 9fa87e7 Docs: remove trailing quote from heading
  • 46ffad3 Docs: verify documentation using markdownlint
  • aa10bb7 Docs: remove use of element
  • 294ada0 Docs: remove use of tag
  • 77e5d31 Docs: reduce unnecessary inline HTML
  • b14a261 Docs: fix invalid syntax of backticks in headers
  • 579e029 Docs: fix trailing punctuation in headers
  • 7b04012 Docs: remove extraneous blank lines

There are 22 commits in total.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴
