
spring-data-jdbc-repository's Introduction


Spring Data JDBC generic DAO implementation


Check out jirutka/spring-data-jdbc-repository fork that is actively developed and maintained. This repository is no longer supported.


The purpose of this project is to provide a generic, lightweight and easy-to-use DAO implementation for relational databases, based on JdbcTemplate from the Spring Framework and compatible with the Spring Data umbrella of projects.

Design objectives

  • Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection
  • This is not a full-blown ORM. No relationship handling, lazy loading, dirty checking or caching
  • CRUD implemented in seconds
  • For small applications where JPA is an overkill
  • Use when simplicity is needed or when future migration e.g. to JPA is considered
  • Minimalistic support for database dialect differences (e.g. transparent paging of results)

Features

Each DAO provides built-in support for:

  • Mapping to/from domain objects through RowMapper abstraction
  • Generated and user-defined primary keys
  • Extracting generated key
  • Compound (multi-column) primary keys
  • Immutable domain objects
  • Paging (requesting subset of results)
  • Sorting over several columns (database agnostic)
  • Optional support for many-to-one relationships
  • Supported databases (continuously tested):
    • MySQL
    • PostgreSQL
    • H2
    • HSQLDB
    • Derby
    • MS SQL Server (2008, 2012)
    • Oracle 10g / 11g (9i should work too)
    • ...and most likely many others
  • Easily extendable to other database dialects via SqlGenerator class.
  • Easy retrieval of records by ID

API

The API is compatible with the Spring Data PagingAndSortingRepository abstraction; all of the following methods are implemented for you:

public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> {
			 T  save(T entity);
	Iterable<T> save(Iterable<? extends T> entities);
			 T  findOne(ID id);
		boolean exists(ID id);
	Iterable<T> findAll();
		   long count();
		   void delete(ID id);
		   void delete(T entity);
		   void delete(Iterable<? extends T> entities);
		   void deleteAll();
	Iterable<T> findAll(Sort sort);
		Page<T> findAll(Pageable pageable);
	Iterable<T> findAll(Iterable<ID> ids);
}

Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have a userRepository extending the PagingAndSortingRepository<User, String> interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 records per page, after applying some sorting:

Page<User> page = userRepository.findAll(
	new PageRequest(
		5, 10, 
		new Sort(
			new Order(DESC, "reputation"), 
			new Order(ASC, "user_name")
		)
	)
);

Spring Data JDBC repository library will translate this call into (PostgreSQL syntax):

SELECT *
FROM USERS
ORDER BY reputation DESC, user_name ASC
LIMIT 10 OFFSET 50

...or even (Derby syntax):

SELECT * FROM (
	SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.*
	FROM (
		SELECT * 
		FROM USERS 
		ORDER BY reputation DESC, user_name ASC
		) AS t
	) AS a 
WHERE ROW_NUM BETWEEN 51 AND 60

No matter which database you use, you'll get a Page<User> object in return (you still have to provide a RowMapper<User> yourself to translate from ResultSet to domain objects). If you don't know the Spring Data project yet, Page<T> is a wonderful abstraction, encapsulating not only List<T> but also metadata such as the total number of records, which page we are currently on, etc.
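
For illustration, here is roughly how the returned Page can be consumed (the accessor names come from Spring Data Commons; the PageRequest values are arbitrary):

Page<User> page = userRepository.findAll(new PageRequest(0, 10));

List<User> usersOnThisPage = page.getContent();        // at most 10 users
long totalUsers            = page.getTotalElements();  // total number of rows in USERS
int totalPages             = page.getTotalPages();
int currentPage            = page.getNumber();         // 0-based index of the current page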

Reasons to use

  • You consider migration to JPA or even some NoSQL database in the future.

    Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all share the same common API. Of course don't expect that switching from JDBC to JPA or MongoDB will be as simple as swapping imported JAR dependencies - but at least you minimize the impact by using the same DAO API.

  • You need a fast, simple JDBC wrapper library. JPA or even MyBatis is overkill

  • You want to have full control over generated SQL if needed

  • You want to work with objects, but don't need lazy loading, relationship handling, multi-level caching, dirty checking... You need CRUD and not much more

  • You want to be DRY

  • You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work

  • You have very few database tables

Getting started

For more examples and working code, don't forget to examine the project tests.

Prerequisites

Maven coordinates:

<dependency>
	<groupId>com.nurkiewicz.jdbcrepository</groupId>
	<artifactId>jdbcrepository</artifactId>
	<version>0.4</version>
</dependency>

This project is available in the Maven Central repository.

Alternatively you can download the source code as a ZIP.


In order to start, your project must have a DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration:

@EnableTransactionManagement
@Configuration
public class MinimalConfig {

	@Bean
	public PlatformTransactionManager transactionManager() {
		return new DataSourceTransactionManager(dataSource());
	}

	@Bean
	public DataSource dataSource() {
		MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource();
		ds.setUser("user");
		ds.setPassword("secret");
		ds.setDatabaseName("db_name");
		return ds;
	}

}
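
If you just want to experiment without a running MySQL server, any other DataSource works the same way. For example, a hypothetical in-memory H2 variant of the dataSource() bean above might look like this (assuming the H2 driver is on the classpath):

	@Bean
	public DataSource dataSource() {
		// in-memory H2 database, kept alive for the lifetime of the JVM (hypothetical alternative)
		org.h2.jdbcx.JdbcDataSource ds = new org.h2.jdbcx.JdbcDataSource();
		ds.setURL("jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1");
		ds.setUser("sa");
		ds.setPassword("");
		return ds;
	}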

Entity with auto-generated key

Say you have the following database table with an auto-generated key (MySQL syntax):

CREATE TABLE COMMENTS (
	id INT AUTO_INCREMENT,
	user_name varchar(256),
	contents varchar(1000),
	created_time TIMESTAMP NOT NULL,
	PRIMARY KEY (id)
);

First you need to create a domain object Comment mapping to that table (just like in any other ORM):

public class Comment implements Persistable<Integer> {

	private Integer id;
	private String userName;
	private String contents;
	private Date createdTime;

	@Override
	public Integer getId() {
		return id;
	}

	@Override
	public boolean isNew() {
		return id == null;
	}
	
	//getters/setters/constructors/...
}

Apart from the standard Java boilerplate, you should notice that the class implements Persistable<Integer>, where Integer is the type of the primary key. Persistable<T> is an interface coming from the Spring Data project and it's the only requirement we place on your domain object.

Finally we are ready to create our CommentRepository DAO:

@Repository
public class CommentRepository extends JdbcRepository<Comment, Integer> {

	public CommentRepository() {
		super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS");
	}

	public static final RowMapper<Comment> ROW_MAPPER = //see below

	private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below

	@Override
	protected <S extends Comment> S postCreate(S entity, Number generatedId) {
		entity.setId(generatedId.intValue());
		return entity;
	}
}

First of all we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation. Such annotated beans are also discovered during classpath scanning.

As you can see, we extend JdbcRepository<Comment, Integer>, which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: a RowMapper, a RowUnmapper and the table name. You may also provide an ID column name; otherwise the default "id" is used.
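
For example, if the primary key column were called comment_id rather than id (a made-up column name, purely for illustration), the constructor call would simply pass it as the fourth argument:

	public CommentRepository() {
		// table COMMENTS with a non-default primary key column (hypothetical)
		super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS", "comment_id");
	}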

If you have ever used JdbcTemplate from Spring, you should be familiar with the RowMapper interface. We need to somehow extract columns from a ResultSet into an object - after all, we don't want to work with raw JDBC results. It's quite straightforward:

public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() {
	@Override
	public Comment mapRow(ResultSet rs, int rowNum) throws SQLException {
		return new Comment(
				rs.getInt("id"),
				rs.getString("user_name"),
				rs.getString("contents"),
				rs.getTimestamp("created_time")
		);
	}
};

RowUnmapper comes from this library and is essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL INSERT/UPDATE queries:

private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() {
	@Override
	public Map<String, Object> mapColumns(Comment comment) {
		Map<String, Object> mapping = new LinkedHashMap<String, Object>();
		mapping.put("id", comment.getId());
		mapping.put("user_name", comment.getUserName());
		mapping.put("contents", comment.getContents());
		mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime()));
		return mapping;
	}
};

If you never update your database table (you are just reading some reference data inserted elsewhere), you may skip the RowUnmapper parameter or use MissingRowUnmapper.
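
A read-only repository might then look roughly like this (the COUNTRIES table, the Country class and its mapper are made up for illustration; the assumption here is that MissingRowUnmapper can simply be instantiated and will refuse any insert/update attempt):

@Repository
public class CountryRepository extends JdbcRepository<Country, String> {

	public CountryRepository() {
		// read-only: rows are inserted elsewhere, so no real RowUnmapper is needed
		super(ROW_MAPPER, new MissingRowUnmapper<Country>(), "COUNTRIES", "code");
	}

	public static final RowMapper<Country> ROW_MAPPER = new RowMapper<Country>() {
		@Override
		public Country mapRow(ResultSet rs, int rowNum) throws SQLException {
			return new Country(rs.getString("code"), rs.getString("name"));
		}
	};
}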

The last piece of the puzzle is the postCreate() callback method, which is called after an object has been inserted. You can use it to retrieve the generated primary key and update your domain object (or return a new one if your domain objects are immutable). If you don't need it, just don't override postCreate().

Check out JdbcRepositoryGeneratedKeyTest for a working code based on this example.

By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and a non-trivial learning curve. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations... all the implicitness that is not always desired.

This project does not intend to replace mature and stable ORM frameworks. Instead it tries to fill a niche between raw JDBC and ORM where simplicity and low overhead are key features.

Entity with manually assigned key

In this example we'll see how entities with user-defined primary keys are handled. Let's start with the database model:

CREATE TABLE USERS (
	user_name varchar(255),
	date_of_birth TIMESTAMP NOT NULL,
	enabled BIT(1) NOT NULL,
	PRIMARY KEY (user_name)
);

...and User domain model:

public class User implements Persistable<String> {

	private transient boolean persisted;

	private String userName;
	private Date dateOfBirth;
	private boolean enabled;

	@Override
	public String getId() {
		return userName;
	}

	@Override
	public boolean isNew() {
		return !persisted;
	}

	public void setPersisted(boolean persisted) {
		this.persisted = persisted;
	}

	//getters/setters/constructors/...

}

Notice that a special transient persisted flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not (the isNew() method) - there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above), but in this case we need an extra transient field. If you hate this workaround and you only insert data and never update, you can get away with returning true from isNew() all the time.
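
One convenient place to flip that flag is the RowMapper itself, so everything loaded from the database is immediately marked as already persisted. A minimal sketch (assuming User has a no-arg constructor and standard setters, which are elided above):

public static final RowMapper<User> ROW_MAPPER = new RowMapper<User>() {
	@Override
	public User mapRow(ResultSet rs, int rowNum) throws SQLException {
		User user = new User();
		user.setUserName(rs.getString("user_name"));
		user.setDateOfBirth(rs.getTimestamp("date_of_birth"));
		user.setEnabled(rs.getBoolean("enabled"));
		// anything coming from the database was, by definition, persisted before
		user.setPersisted(true);
		return user;
	}
};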

And finally our DAO, UserRepository bean:

@Repository
public class UserRepository extends JdbcRepository<User, String> {

	public UserRepository() {
		super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name");
	}

	public static final RowMapper<User> ROW_MAPPER = //...

	public static final RowUnmapper<User> ROW_UNMAPPER = //...

	@Override
	protected <S extends User> S postUpdate(S entity) {
		entity.setPersisted(true);
		return entity;
	}

	@Override
	protected <S extends User> S postCreate(S entity, Number generatedId) {
		entity.setPersisted(true);
		return entity;
	}
}

The "USERS" and "user_name" parameters designate the table name and the primary key column name. I'll leave out the details of the mapper and unmapper (see the source code). But please notice the postUpdate() and postCreate() methods. They ensure that once an object is persisted, the persisted flag is set, so that subsequent calls to save() will update the existing entity rather than trying to re-insert it.

Check out JdbcRepositoryManualKeyTest for a working code based on this example.

Compound primary key

We also support compound primary keys (primary keys consisting of several columns). Take this table as an example:

CREATE TABLE BOARDING_PASS (
	flight_no VARCHAR(8) NOT NULL,
	seq_no INT NOT NULL,
	passenger VARCHAR(1000),
	seat CHAR(3),
	PRIMARY KEY (flight_no, seq_no)
);

I would like you to notice the type of primary key in Persistable<T>:

public class BoardingPass implements Persistable<Object[]> {

	private transient boolean persisted;

	private String flightNo;
	private int seqNo;
	private String passenger;
	private String seat;

	@Override
	public Object[] getId() {
		return pk(flightNo, seqNo);
	}

	@Override
	public boolean isNew() {
		return !persisted;
	}

	//getters/setters/constructors/...

}

Unfortunately the library does not support small, immutable value classes encapsulating all ID values in one object (like JPA does with @IdClass), so you have to live with an Object[] array. Defining the DAO class is similar to what we've already seen:

public class BoardingPassRepository extends JdbcRepository<BoardingPass, Object[]> {
	public BoardingPassRepository() {
		this("BOARDING_PASS");
	}

	public BoardingPassRepository(String tableName) {
		super(MAPPER, UNMAPPER, new TableDescription(tableName, null, "flight_no", "seq_no"));
	}

	public static final RowMapper<BoardingPass> MAPPER = //...

	public static final RowUnmapper<BoardingPass> UNMAPPER = //...

}

Two things to notice: we extend JdbcRepository<BoardingPass, Object[]> and we provide the two ID column names, just as expected: "flight_no" and "seq_no". We query such a DAO by providing both the flight_no and seq_no values (necessarily in that order) wrapped in an Object[]:

BoardingPass pass = boardingPassRepository.findOne(new Object[] {"FOO-1022", 42});

No doubt this is cumbersome in practice, so we provide a tiny helper method which you can statically import:

import static com.nurkiewicz.jdbcrepository.JdbcRepository.pk;
//...

BoardingPass foundFlight = boardingPassRepository.findOne(pk("FOO-1022", 42));

Check out JdbcRepositoryCompoundPkTest for a working code based on this example.

Transactions

This library is completely orthogonal to transaction management. Every method of each repository requires a running transaction and it's up to you to set it up. Typically you would place @Transactional on the service layer (calling the DAO beans). I don't recommend placing @Transactional on every DAO bean.
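
For example, a service like the hypothetical CommentService below opens the transaction and the repository simply participates in it:

@Service
public class CommentService {

	@Autowired
	private CommentRepository commentRepository;

	// all repository calls inside this method run in a single transaction
	@Transactional
	public Comment addComment(Comment comment) {
		return commentRepository.save(comment);
	}
}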

Caching

The Spring Data JDBC repository library does not provide any caching abstraction or support. However, adding a @Cacheable layer on top of your DAOs or services using Spring's caching abstraction is quite straightforward. See also: @Cacheable overhead in Spring.
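
A rough sketch of such a layer (UserFinder and the "users" cache name are made up; @EnableCaching and a CacheManager bean are assumed to be configured elsewhere):

@Service
public class UserFinder {

	@Autowired
	private UserRepository userRepository;

	// repeated lookups for the same user name are served from the cache
	@Cacheable("users")
	public User findUser(String userName) {
		return userRepository.findOne(userName);
	}
}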

Contributions

...are always welcome. Don't hesitate to submit bug reports and pull requests.

Testing

This library is continuously tested using Travis CI. The test suite consists of 60+ distinct tests, each run against 8 different databases: MySQL, PostgreSQL, H2, HSQLDB and Derby, plus MS SQL Server and Oracle tests that are not run as part of CI.

When filing bug reports or submitting new features, please try to include supporting test cases. Each pull request is automatically tested on a separate branch.

Building

After forking the official repository, building is as simple as running:

$ mvn install

You'll notice plenty of exceptions during JUnit test execution. This is normal. Some of the tests run against MySQL and PostgreSQL, which are available only on the Travis CI server. When these database servers are unavailable, the whole test is simply skipped:

Results :

Tests run: 484, Failures: 0, Errors: 0, Skipped: 295

The exception stack traces come from the root AbstractIntegrationTest.

Design

The library consists of only a handful of classes, highlighted in the diagram below (source):

UML diagram

JdbcRepository is the most important class; it implements all PagingAndSortingRepository methods. Each user repository has to extend this class. Each such repository must also provide at least a RowMapper, and a RowUnmapper (only if you want to modify table data).

SQL generation is delegated to SqlGenerator. PostgreSqlGenerator and DerbySqlGenerator are provided for databases that don't work with the standard generator.
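
As a rough illustration of what plugging in another dialect could look like - note that the limitClause() hook and its signature below are assumptions based on the existing generators, so check the actual SqlGenerator source before relying on them:

// Hypothetical dialect generator; limitClause() is assumed to be the overridable
// paging hook in SqlGenerator -- verify against the real class first.
public class OffsetFetchSqlGenerator extends SqlGenerator {

	@Override
	protected String limitClause(Pageable page) {
		final int offset = page.getPageNumber() * page.getPageSize();
		// SQL:2008 standard paging clause
		return String.format(" OFFSET %d ROWS FETCH NEXT %d ROWS ONLY", offset, page.getPageSize());
	}
}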

Changelog

0.4.1

0.4

  • Repackaged: com.blogspot.nurkiewicz -> com.nurkiewicz

0.3.2

  • First version available in Maven central repository
  • Upgraded Spring Data Commons 1.6.1 -> 1.8.0

0.3.1

0.3

0.2

0.1

License

This project is released under version 2.0 of the Apache License (same as Spring framework).


spring-data-jdbc-repository's Issues

Bug: delete entity with compound key

Hello,

I think there is a problem with this method:

@Override
public void delete(T entity) {
	jdbcOperations.update(sqlGenerator.deleteById(table), entity.getId());
}

If my key is a compound key with 3 Strings, even if I declare an Object[] for Persistable, the delete method raises an exception (same thing for delete(Iterable<? extends T> entities)).

I replaced it with:

@Override
public void delete(T entity) {
	this.delete(entity.getId());
}

Case-sensitive mapping of the id key in the row unmapper leads to update errors.

Case-sensitive mapping of the id key

Case-sensitive mapping of the id key in the row unmapper leads to update (and probably delete) errors, because the value of the primary key used to run the update or delete query is assigned incorrectly.

Example

public Map<String, Object> mapColumns(final Person person) {
	Map<String, Object> mapping = new LinkedHashMap<String, Object>();
	mapping.put("ID", person.getId());
	mapping.put("NAME", person.getName());
	return mapping;
}

The unmapper shown above will be used to generate the following SQL statement:

UPDATE person set ID=?, NAME=? where Id =?

To get the value of the primary key, the Id entry from the map is used, which has a null value, so the actual statement

UPDATE person set ID=10, NAME="Jonh Dou" where Id = null

will never update the record.

Always uses default DataSource

The BeanFactory in JdbcRepository always grabs the default DataSource. This prevents an application from using more than one datasource.

jdbcOperations = beanFactory.getBean(JdbcOperations.class);

Also

@Override
public void afterPropertiesSet() throws Exception {
    obtainJdbcTemplate();
    if (sqlGenerator == null) {
        obtainSqlGenerator();
    }
}

clears the JdbcTemplate created by a child class; i.e. this in the child class:

@Autowired
public void setDataSource(@Qualifier("secondaryDataSource") DataSource dataSource) {
     super.setJdbcOperations(new JdbcTemplate(dataSource));
}

is overwritten immediately.

PostgreSQL problems with Date and Timestamp types

Using PostgreSQL and Java 7, the createPreparedStatement in createWithAutoGeneratedKey does not handle java.sql.Date and java.sql.Timestamp parameters correctly. This is not a JDBC Repository issue directly; apparently, PostgreSQL's PreparedStatement.setObject(i, v) does not determine the type. I had to add the following code.

if (queryParams[i] instanceof java.sql.Date) {
    ps.setObject(i + 1, queryParams[i], java.sql.Types.DATE );                      
}
else if (queryParams[i] instanceof java.sql.Timestamp) {
    ps.setObject(i + 1, queryParams[i], java.sql.Types.TIMESTAMP );                     
}
else {
    ps.setObject(i + 1, queryParams[i]);
}

I thought I'd let you know. I love this package.

Mark

Support for GROUP BY clause

Any plans on supporting GROUP BY clauses in the SqlGenerator classes? As it stands, adding support for this clause seems like it would require a significant amount of rewriting of the base SqlGenerator, and extending the class is made difficult by so many of the core methods being scoped as private.

How can I use multiple datasources?

Hi,

My project needs to use two datasources. Assuming that I have two datasource/transaction manager beans, how can I specify the bean name in the repository classes?

Thanks

Integrate Spring Data JDBC Repository into Spring Data JDBC Extensions

Hello, Tomasz. First of all, congratulations on the project you have developed;
I think it is very useful and fills a gap between JDBC and the core Spring Data repositories.
Honestly, I think your project would bring more value and reach many more people if it
were part of Spring Data JDBC Extensions (http://projects.spring.io/spring-data-jdbc-ext/).
What do you think? Would you like to contact the Spring team to officially
integrate the code of your project into the Spring framework?
Thanks, and I look forward to your answer.

Two questions...

Thanks for taking the time to write this nice little library; it fits my needs almost exactly. I have a couple questions that I haven't been able to figure out on my own, so I thought I'd ask here.

  1. Does the library allow me to issue arbitrary SQL statements? I thought I would see something like queryForList(String sql, Pageable page) or queryForObject(String sql, Pageable page) in the JdbcRepository class. Would this be a logical enhancement, or am I missing an easy way to do this?
  2. Can the RowMapper be a BeanPropertyRowMapper? I tried this:
    public static final RowMapper<MyObject> ROW_MAPPER = new BeanPropertyRowMapper<MyObject>(); but I'm getting an IllegalStateException saying "Mapped class was not specified"

Standalone Configuration and CDI Implementation

As requested by @nurkiewicz, I'm opening a new issue (enhancement request) for standalone and CDI implementations and documentation (link to the original discussion).

Original request content:

Hello @nurkiewicz. Thanks for the wonderful software. Could you please write some instructions on how to run it outside of the Spring container?
Maybe a standalone usage example or a CDI integration section, like in the Spring Data JPA documentation.

I would like to write a CDI producer for repositories and consume it in EJBs (using standard Java EE declarative transactions). Is that possible?

Support for SEQUENCE in ORACLE

idea:

public Long generateId() {
	return getJdbcOperations().queryForObject("select SEQ_COLABORADOR.NEXTVAL from dual", Long.class);
}

Returning values from the database (besides the id)

I need to return values that are set by the database as default values (e.g. created and updated timestamps). I thought doing a findOne() inside postCreate() or postUpdate() would work, but I'm not getting back the expected values.

I'm not sure if the calls to postCreate and postUpdate are in the wrong transaction scope, or if perhaps they're asynchronous and therefore called before the database has been updated.

How would you return database-defaulted values when creating or updating entities?

update springframework to 3.2

Hello Tomasz, I just updated spring-framework and spring-data to the latest versions and exceptions occur.
Please:

  • update spring-framework to 3.2
  • update spring-data to 1.4.1

Thank you!

Optimistic locking support

I really like the idea of this project with its wonderful and lightweight Spring JDBC abstractions.

One thing I wish it had is out-of-the-box support for an optimistic locking scheme.
In a Hibernate/JPA environment, we usually achieve this with the @Version annotation and let Hibernate worry about managing (checking and incrementing) it.

Though it is not really hard to implement in client projects (when using spring-jdbc-repository), it would be good to have an optimistic locking scheme to ensure that the underlying database table records are validated and incremented in a single atomic transaction.

For now, I had to override the preUpdate method from the JdbcRepository class in every repository to update the version column.

Reference: http://springinpractice.com/2013/09/14/optimistic-locking-with-spring-data-rest
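
For reference, the workaround described above might look roughly like this (the preUpdate() signature is assumed from the description, and VersionedEntity with its version property is entirely made up):

@Override
protected <S extends VersionedEntity> S preUpdate(S entity) {
	// illustrative only: bump the optimistic-lock version before the UPDATE is generated
	entity.setVersion(entity.getVersion() + 1);
	return entity;
}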

PostgreSQL - LIMIT #,# syntax is not supported

Very interested in the project, but it looks like your limit/offset syntax for Postgres is invalid. The recommended solution is to use OFFSET X LIMIT Y instead.

Happy to dig in and submit a PR if it's worth it. I just discovered the project though, so I'm not sure if there's a per-database strategy I need to be aware of...

Caused by: org.postgresql.util.PSQLException: ERROR: LIMIT #,# syntax is not supported
  Hint: Use separate LIMIT and OFFSET clauses.
  Position: 32
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1998)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:570)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:406)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:286)
    at org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:454)
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:404)
    ... 17 more

Use SQL standard OFFSET .. FETCH instead of ROW_NUMBER() pagination for Derby

I've had a brief review of your solution and found that you're using ROW_NUMBER() OVER() window functions for pagination in Derby:
https://github.com/nurkiewicz/spring-data-jdbc-repository/blob/master/src/main/java/com/nurkiewicz/jdbcrepository/sql/DerbySqlGenerator.java#L13

But Derby supports the SQL standard OFFSET .. FETCH clause:
http://db.apache.org/derby/docs/10.6/ref/rrefsqljoffsetfetch.html

Why aren't you using that instead?
