koxudaxi / local-data-api

A local Data API emulator, so you can write unit tests against AWS Aurora Serverless's Data API

Home Page: https://koxudaxi.github.io/local-data-api

License: MIT License

Python 51.41% Dockerfile 0.79% Shell 2.78% Kotlin 45.02%
api aurora aws aws-aurora data-api dataapi kotlin ktor local mock mysql proxy python python3 serverless test unittest

local-data-api's Introduction

Hi there πŸ‘‹,

I am a software developer who enjoys contributing to Open Source Software (OSS), with the goal of making the development environment friendlier πŸ˜„.

Here are some of the projects I've worked on:

If any of my tools have been helpful in your projects, would you consider sponsoring me? Your support would really make a difference and help me continue to maintain and improve these tools for everyone. Thank you! πŸš€

local-data-api's People

Contributors

apoalex92, dependabot-preview[bot], dependabot[bot], kennethwussmann, koxudaxi, macat, markherhold, roganov, sdurban, snorberhuis


local-data-api's Issues

Publish this image to AWS ECR gallery

Description

I would love to use this image within my AWS CodePipeline to run e2e tests in my CI/CD environment.

Problem

When I try to fetch the image within a CodeBuild project it runs into the error:

[Container] 2021/12/23 11:32:20 Running command docker-compose up -d
--
350 | Creating network "src_default" with the default driver
351 | Pulling local-data-api (koxudaxi/local-data-api:)...
352 | toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

Possible solution

Uploading the image to the AWS ECR gallery would remove these rate limits, and the build environment could fetch the image as desired.

Incorrect parsing of timestamp's milliseconds with a leading zero

Describe the bug

Given a column of type timestamp, a leading zero in the milliseconds is dropped. Example: the inserted value "2021-03-03T08:31:15.077Z" is read back as "2021-03-03 08:31:15.770000"

To Reproduce

Given the following table:

SELECT column_name, data_type FROM information_schema.columns WHERE table_name = 'solutions';

# id | character varying
# created_at | timestamp without time zone

And the following test script:

const AWS = require('aws-sdk');

const resourceArn = '';
const secretArn = '';
const database = '';
const endpoint = '';

const rdsDataService = new AWS.RDSDataService({ endpoint });

const executeSql = (sql) => rdsDataService.executeStatement({ resourceArn, secretArn, database, sql }).promise();

const test = async () => {
  await executeSql(`INSERT INTO solutions (id, created_at) VALUES ('1', '2021-03-03T08:31:15.077Z')`);
  const { records } = await executeSql(`SELECT * FROM solutions WHERE id = '1'`);
  console.log(records);
};

test();

The output is the following:

[
  [
    { stringValue: '1' },
    { stringValue: '2021-03-03 08:31:15.770000' },
  ]
]

Expected behavior

It should behave the same as the AWS Aurora Data API and return:

[
  [
    { stringValue: '1' },
    { stringValue: '2021-03-03 08:31:15.077' },
  ]
]
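
A minimal Python sketch of the pitfall (hypothetical code for illustration, not the project's actual parser): converting the fractional part through int drops the leading zero, while padding the original digit string preserves it.

from datetime import datetime

ts = "2021-03-03T08:31:15.077Z"
seconds, frac = ts.rstrip("Z").split(".")

# Buggy: int("077") == 77, and left-justifying "77" yields .770000.
buggy_micro = str(int(frac)).ljust(6, "0")    # "770000"

# Correct: pad the original digit string, keeping the leading zero.
correct_micro = frac.ljust(6, "0")            # "077000"

print(datetime.fromisoformat(f"{seconds}.{buggy_micro}"))    # 2021-03-03 08:31:15.770000
print(datetime.fromisoformat(f"{seconds}.{correct_micro}"))  # 2021-03-03 08:31:15.077000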

Calling local-data-api from another container

Hi,

I have just spent over a day fixing an issue where local-data-api is called from another docker container rather than from localhost (in this case I was using SAM to invoke a local lambda container, but the situation applies more generally). While this is not a bug in this repo, I think the repo would benefit from a few small additions / extra clarification.

The Problem

After creating a user-defined docker network (docker network create dev-net), I did the following:

  • I used the postgres docker compose file below (exactly the same as in the README, with the addition of a docker network)
  • Set up a test table in the database
  • Ensured that the lambda container was being launched in this same network
version: '3.1'

services:
  local-data-api:
    image: koxudaxi/local-data-api
    networks:
      - dev-net
    restart: always
    environment:
      ENGINE: PostgreSQLJDBC
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: postgres:10.7-alpine
    networks: 
      - dev-net
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_DB: test
    ports:
        - "5432:5432"
        
networks:
  dev-net:
    external: true

I tried to connect to local-data-api with a command similar to the one in the README:

rds_client = boto3.client('rds-data', endpoint_url='http://local_local-data-api_1:8080', aws_access_key_id='aaa', aws_secret_access_key='bbb')
with local_local-data-api_1 being the automatically generated name for the api container in the docker compose.

The Solution

It turns out that there were two overlapping issues:

  1. the URL above is invalid because it contains an underscore
  2. port 8080 seems to be already in use, so it cannot be referred to from the lambda container

I solved this by explicitly naming the containers in the docker compose file as follows:

version: '3.1'

services:
  local-data-api:
    image: koxudaxi/local-data-api
    container_name: api
    networks:
      - dev-net
    restart: always
    environment:
      ENGINE: PostgreSQLJDBC
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: postgres:10.7-alpine
    container_name: db
    networks: 
      - dev-net
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_DB: test
    ports:
        - "5432:5432"
        
networks:
  dev-net:
    external: true

When calling the database from the lambda container, I also switched to port 80 (the container's internal port; the 8080:80 mapping only applies from the host), which worked, as follows:

rds_client = boto3.client('rds-data', endpoint_url='http://api:80', aws_access_key_id='aaa', aws_secret_access_key='bbb')

Suggestion

Given the above, I suggest that the following changes should be made to the repo:

  1. The docker compose files in the repo should explicitly name the containers such that no underscores are generated when they are created
  2. An extra section should be added to the README to discuss this issue, in particular the required switch to port 80 (unless a fix can be found to switch to 8080)

Support bigint

Describe the bug
When you try to get a table definition with a query like:

SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'database' AND TABLE_NAME = 'user'

App will crash with the error:

Exception: unsupported type <class 'jpype._jclass.java.math.BigInteger'>: 1

To Reproduce
In mysql/mariadb, create a schema and try to get the table definition with a query like:

POST http://localhost:8080/Execute
User-Agent: aws-sdk-nodejs/2.647.0 linux/v12.16.1 promise
Content-Type: application/json

{
  "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy",
  "resourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:dummy",
  "sql": "SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'database' AND TABLE_NAME = 'user'",
  "database": "database"
}

Docker will crash with this error:

[screenshot: crash output]

Expected behavior
As stated in the AWS documentation (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html), it must be returned as a LONG data type.

[screenshot: Data API type mapping from the AWS documentation]

Difference in behaviour when passing array formatted string parameters

Describe the bug
We had a query like:
SELECT * FROM tableName WHERE columnName = ANY(:param)
and :param is a string like '{Value1,Value2}'

This was working fine when using local-data-api with postgres, but on the aws data-api it failed with the error:

ERROR: op ANY/ALL (array) requires array on right side\n Position: 464; SQLState: 42809

We fixed it by adding a cast, as suggested in the inserting-arrays workaround:

SELECT * FROM tableName WHERE columnName = ANY(CAST(:param AS text[]))

To Reproduce
As described above.

Expected behavior
local-data-api should fail in the same way as the aws data-api; as it stands, we can't be sure that queries that work locally will work on the real thing.

Support "formatRecordsAs" in ExecuteStatementRequest

The AWS SDK for the Data API supports a "formatRecordsAs" parameter on the ExecuteStatementCommand object, with default value of "NONE" and possible value of "JSON".

See details at:

If formatRecordsAs = 'JSON', then the result of ExecuteStatementCommand includes a single (JSON-formatted) string in ExecuteStatementResponse.formattedRecords.

It would be great to support this parameter in local development for ExecuteStatementCommand (via updating ExecuteStatementRequest and ExecuteStatementResponse).
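
For reference, here is what the client side looks like with boto3 (a sketch; formatRecordsAs and formattedRecords are the parameter/field names from the AWS SDK):

import boto3

client = boto3.client(
    'rds-data',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='aaa',
    aws_secret_access_key='bbb',
    region_name='us-east-1',
)

response = client.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy',
    secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy',
    database='test',
    sql='SELECT 1 AS one',
    formatRecordsAs='JSON',  # default is 'NONE'
)

# With formatRecordsAs='JSON', the rows come back as a single JSON string.
print(response['formattedRecords'])  # e.g. '[{"one":1}]'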

Also, this library is great :). Thank you.

Getting "detail": "Not found"

Describe the bug
I have the docker compose as follows:
version: "3.1"

services:
  local-data-api:
    image: koxudaxi/local-data-api
    restart: always
    environment:
      MYSQL_HOST: db
      MYSQL_PORT: 3306
      MYSQL_USER: root
      MYSQL_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:725695492778:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:725695492778:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: mysql:5.6
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: test
    ports:
      - "3306:3306"

To Reproduce
Steps to reproduce the behavior:

  1. docker-compose up -d
  2. Send a Postman request to http://127.0.0.1:8080 without any params.

And every time I get '"detail": "Not Found"'. I have tried to access it programmatically in a Node.js environment and it's the same: every time I get '"detail": "Not Found"', no matter what I try.

The log that I see in docker is:
[screenshot: docker logs showing 404 responses]

It just gets 404. I've been struggling with this for a few days; any help is much appreciated!

PS: I know the mysql database is created, as I tried to connect via Workbench and it's there.

Thanks a lot in advance
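
For context, a 404 on the root path is expected: the server only routes the Data API endpoints such as /Execute, so a bare GET to http://127.0.0.1:8080 returns {"detail": "Not Found"}. A minimal boto3 call that actually exercises the service (a sketch; the ARNs must match the RESOURCE_ARN and SECRET_ARN in the compose file above):

import boto3

client = boto3.client(
    'rds-data',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='aaa',
    aws_secret_access_key='bbb',
    region_name='us-east-1',
)

print(client.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:725695492778:cluster:dummy',
    secretArn='arn:aws:secretsmanager:us-east-1:725695492778:secret:dummy',
    database='test',
    sql='SELECT 1',
))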

ExecuteSql is not implemented

I set up a local-data-api environment to test against, and once my application was able to execute SQL against it, I ran into a NotImplementedError in local-data-api. At first I thought I must be missing something in the config, but looking at the source it appears this really was never implemented. That seems like super basic usage of this tool, so it's surprising to find it missing from a project that appears so mature and well-documented.

To Reproduce
Post anything to /ExecuteSql. A NotImplementedError will be raised.

Expected behavior
SQL is executed and results are returned.

Support resultSetOptions in ExecuteStatementRequest

In previous versions of local-data-api, I utilized resultSetOptions when issuing queries to Data API via execute_statement().

It appears it is not supported in the current version, which yields an error message like:

local-data-api_1  | 2021-04-16 16:33:13.754 [eventLoopGroupProxy-4-1] ERROR Application - Unhandled: POST - /Execute
local-data-api_1  | kotlinx.serialization.json.internal.JsonDecodingException: Unexpected JSON token at offset 482: Encountered an unknown key 'resultSetOptions'.
local-data-api_1  | Use 'ignoreUnknownKeys = true' in 'Json {}' builder to ignore unknown keys.
local-data-api_1  | JSON input: .....:dummy", "resultSetOptions": {"decimalReturnType": "STRING"}.....
local-data-api_1  | 	at kotlinx.serialization.json.internal.JsonExceptionsKt.JsonDecodingException(JsonExceptions.kt:24)

I'm not sure if this was deliberate or not, but it appears the option is still supported by AWS (we are in fact passing it to Data API in our deployed application):

https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ResultSetOptions.html

For now I think I'm going to omit the parameter entirely, since I happened to be specifying the default anyway, and that'll allow us to upgrade to 0.6.8. :]
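
For reference, the call shape in question, sketched with boto3 (the decimalReturnType values come from the API reference linked above):

import boto3

client = boto3.client(
    'rds-data',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='aaa',
    aws_secret_access_key='bbb',
    region_name='us-east-1',
)

response = client.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy',
    secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy',
    database='test',
    sql='SELECT CAST(1.5 AS DECIMAL(3, 1)) AS n',
    # 'STRING' is the documented default; 'DOUBLE_OR_LONG' returns native numbers.
    resultSetOptions={'decimalReturnType': 'STRING'},
)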

This is a great tool, by the way. It was one piece of our development test bed that allowed us to rapidly iterate a new application into production. Thank you!

Support columnMetadata

The Data API can return column information as columnMetadata.

local-data-api should parse and map the column data from SQLAlchemy's API.
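
For reference, this is the client-side shape that implies, sketched with boto3 (includeResultMetadata is the standard Data API flag that requests columnMetadata):

import boto3

client = boto3.client(
    'rds-data',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='aaa',
    aws_secret_access_key='bbb',
    region_name='us-east-1',
)

response = client.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy',
    secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy',
    database='test',
    sql='SELECT 1 AS one',
    includeResultMetadata=True,  # ask the API for columnMetadata
)

# Each entry describes one column: name, label, typeName, nullable, and so on.
for column in response['columnMetadata']:
    print(column['name'], column['typeName'])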

Refactor conversions-per-type

We must refactor conversions-per-type.
This method is currently used for all DB resources; it should instead be implemented per individual DB resource.

@classmethod
def from_value(cls, value: Any) -> Field:
    if isinstance(value, bool):
        return cls(booleanValue=value)
    elif isinstance(value, str):
        return cls(stringValue=value)
    elif isinstance(value, datetime):
        return cls(stringValue=str(value))
    elif isinstance(value, int):
        return cls(longValue=value)
    elif isinstance(value, float):
        return cls(doubleValue=value)
    elif isinstance(value, bytes):
        return cls(blobValue=b64encode(value))
    elif value is None:
        return cls(isNull=True)
    elif type(value).__name__.endswith('UUID'):
        return cls(stringValue=str(value))
    elif type(value).__name__.endswith('PGobject'):
        return cls(stringValue=str(value))
    elif type(value).__name__.endswith('BigInteger'):
        return cls(longValue=int(str(value)))
    elif type(value).__name__.endswith('PgArray'):
        return cls(stringValue=str(value))
    else:
        raise Exception(f'unsupported type {type(value)}: {value} ')

Motivation

This issue is forked from #62

About the PostgreSQL engine (no JDBC):
Making the query with the Python native connector returns a list:

unsupported type <class 'list'>: ['1', ' 2', ' 3', ' 4']

I think it is kind of dangerous to convert a list to a schema like {1, 2, 3, 4}. Maybe it needs a refactor like conversions-per-type (JVM (PSQL/MySQL) and non-JVM classes (PSQL and MySQL separated))?
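
One possible shape for the refactor (a sketch of the proposal, not the project's actual design): keep the shared Python-type conversions in one table and let each DB resource register its own driver-class handlers by class-name suffix.

from base64 import b64encode
from datetime import datetime
from typing import Any, Callable, Dict

# Conversions shared by every resource; bool must come before int,
# since isinstance(True, int) is also True.
DEFAULT_CONVERTERS: Dict[type, Callable[[Any], Dict]] = {
    bool: lambda v: {'booleanValue': v},
    str: lambda v: {'stringValue': v},
    datetime: lambda v: {'stringValue': str(v)},
    int: lambda v: {'longValue': v},
    float: lambda v: {'doubleValue': v},
    bytes: lambda v: {'blobValue': b64encode(v)},
}

class Resource:
    # Per-resource conversions, keyed by a suffix of the driver class name.
    converters: Dict[str, Callable[[Any], Dict]] = {}

    @classmethod
    def from_value(cls, value: Any) -> Dict:
        if value is None:
            return {'isNull': True}
        for type_, convert in DEFAULT_CONVERTERS.items():
            if isinstance(value, type_):
                return convert(value)
        for suffix, convert in cls.converters.items():
            if type(value).__name__.endswith(suffix):
                return convert(value)
        raise Exception(f'unsupported type {type(value)}: {value}')

class PostgreSQLResource(Resource):
    converters = {
        'UUID': lambda v: {'stringValue': str(v)},
        'PGobject': lambda v: {'stringValue': str(v)},
        'PgArray': lambda v: {'stringValue': str(v)},
    }

class MySQLResource(Resource):
    converters = {
        'BigInteger': lambda v: {'longValue': int(str(v))},
    }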

Error creating foreign keys in version 0.6.0!

Describe the bug
Until yesterday I was using your awesome project without any issue. Today I upgraded the Docker image to version 0.6.0, and the entire project no longer works, failing with UnhandledPromiseRejectionWarning: BadRequestException: Database error code: 0. Message: ERROR: syntax error at or near "RETURNING" when using relations in the database.
The error message is received only when trying to use relations.
I'm using TypeORM to synchronize my local schema in development.

To Reproduce
I've created a super simple repo with a basic example.

LINK: https://github.com/carlocorradini/local-data-api-issue

All instructions are explained in the README.

Expected behavior
Working πŸ˜…

Screenshots
[screenshot: error output]

Desktop (please complete the following information):

  • OS: Microsoft Windows 10 Pro Version 10.0.19042 Build 19042

Additional context
I'm using the typeorm-aurora-data-api-driver driver.

The database is PostgreSQL.

The docker-compose.yml is the same as the one shown in the README.

Hope this is a simple fix πŸ˜₯

Cannot find parameter - Error binding null parameter SQLAlchemy

Hello,
I've encountered a problem when trying to call the Execute API.

I'm performing a query like the one below: I would like to extract all the employees belonging to a nation (given in the form of a UUID) and, if provided in the input parameters, with a specific language:
select * from employes emp where emp.nation = cast(:nation as UUID) and (:lang is null or emp.language = ( select id from languages where language = :lang));

Given the parameters type in the form of:

	"parameters": [{
		"name": "lang",
		"value":  {
			"isNull": "true"
		}
	}, {
		"name": "nation",
		"value": {
			"stringValue": "9a370a89-b773-45d2-a0cd-9ffacefd54c8"
		}
	}]

I obtain the following error: Cannot find parameter: lang
Debugging the code, I realized that the problem is related to the create_query method in the resource.py file.
It seems that the command str(text_sql.bindparams(**params).compile(**kwargs)) cannot bind null parameters.

I have temporarily fixed it with the following code, placed before the text_sql: TextClause = text(sql) statement:

# Replace placeholders bound to None with a SQL NULL literal before compiling.
params_none = {k: v for k, v in params.items() if v is None}
for key in params_none:
    sql = sql.replace(f":{key}", "null")

If it could be useful: I'm using a PostgreSQL database.

Postgres: Unable to use "::" cast syntax together with parameters

Describe the bug
I am unable to use the PostgreSQL type-cast "::" operator together with parameters. This works with the AWS Data API.

To Reproduce
Steps to reproduce the behavior:

  1. Run a query with a parameter and a type cast. E.g. SELECT :some_param::VARCHAR
  2. See error BadRequestException: ERROR: syntax error at or near ":"

Expected behavior
Casts the parameter without an error.

Additional context
If you use the other cast syntax, it works:

CAST ( expression AS target_type );

It also works without a parameter: SELECT 'value'::VARCHAR.

Doesn't emulate timetz and time with time zone correctly

Describe the bug
Currently the real Data API always converts every type containing a time zone into UTC. For example, if you save the value "15:30:00 PST" into a timetz or time with time zone column and then query the same row, it returns "23:30:00".

Local Data API, however, returns the string with its time zone: "15:30:00 PST".
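
A small Python illustration of the normalization the real API performs (assuming a fixed -08:00 offset for PST):

from datetime import datetime, time, timedelta, timezone

pst = timezone(timedelta(hours=-8))
saved = time(15, 30, tzinfo=pst)  # "15:30:00 PST"

# The real Data API returns the UTC equivalent of the stored value:
as_utc = datetime.combine(datetime(2021, 1, 1), saved).astimezone(timezone.utc)
print(as_utc.time())  # 23:30:00 <- what the real Data API returns
print(saved)          # 15:30:00-08:00 <- what Local Data API currently returns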

Not able to run any SQL Query against local-data-api docker image on Mac Machine

Describe the bug
Not able to run any SQL Query against local-data-api docker image

To Reproduce
Steps to reproduce the behavior:

  1. Create new file named docker-compose-local-db.yml
  2. Copy the following content in the yml file
version: '3.1'

services:
  local-data-api:
    image: koxudaxi/local-data-api
    restart: always
    environment:
      MYSQL_HOST: db
      MYSQL_PORT: 3306
      MYSQL_USER: root
      MYSQL_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: mysql:5.6
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: test
    ports:
        - "3306:3306"
  3. save the file

  4. run this command in a terminal window:
    docker-compose -f docker-compose-local-db.yml up

  5. docker is up and running

  6. run the following command in another terminal window:
    aws --endpoint-url http://127.0.0.1:8080 rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:dummy" --sql "show databases" --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy" --database 'test'

  7. the following error appears in the terminal:

An error occurred (404) when calling the ExecuteStatement operation:
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot POST /Execute
</body>
</html>

Expected behavior
No Error running SQL command against local mysql server

Desktop (please complete the following information):

  • OS: macOS

docker info | grep Version
Server Version: 19.03.13
Kernel Version: 4.19.76-linuxkit

aws --version
aws-cli/2.0.50 Python/3.8.5 Darwin/19.6.0 source/x86_64

SIGSEGV error in JRE when connecting to the database

Hello!
Sometimes I run into the following error when running the local-data-api container:

[2020-03-24 15:46:57 +0000] [7] [INFO] Waiting for application startup.
[2020-03-24 15:47:46 +0000] [7] [INFO] ('172.19.0.1', 43568) - "POST /BeginTransaction HTTP/1.1" 200
[2020-03-24 15:47:46 +0000] [7] [INFO] ('172.19.0.1', 43568) - "POST /RollbackTransaction HTTP/1.1" 500
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007fef41ea0e69, pid=7, tid=0x00007fef49bc2b88
#
# JRE version: OpenJDK Runtime Environment (8.0_222-b10) (build 1.8.0_222-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.222-b10 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.13.0
# Distribution: Custom build (Tue Oct  1 11:00:28 UTC 2019)
# Problematic frame:
# C  [_jpype.cpython-37m-x86_64-linux-gnu.so+0x28e69]  JPJavaEnv::NewLocalRef(_jobject*)+0x9
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /app/hs_err_pid7.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   https://icedtea.classpath.org/bugzilla
#
[2020-03-24 15:47:46 +0000] [21] [INFO] Booting worker with pid: 21
[2020-03-24 15:47:46 +0000] [21] [INFO] Started server process [21]
[2020-03-24 15:47:46 +0000] [21] [INFO] Waiting for application startup.

The reported section is taken from docker logs of a container running the local-data-api image.

It seems that the problem is related to the amount of memory reserved for the JVM; as suggested in the stack trace, it could be useful to increase it.
I don't know how you could reproduce the problem, since it appears only sometimes.

By the way, as a further enhancement I suggest removing the JDBC approach for connecting to database instances. Since the program is written in Python, it could be more efficient to use psycopg2 or pymysql.

Idea: Rewrite in Java/Kotlin

This is a crazy idea, but because this project is so coupled to a JDBC driver, it might be worthwhile to rewrite local-data-api in Kotlin, perhaps with a framework like javalin. πŸ€”

I do not have the skills or expertise to do this but just wanted to throw the idea out there. Feel free to close this issue.

Non-Default database does not work with PostgreSQL

Describe the bug
Using a database other than the default database (postgres) does not work.

To Reproduce

  1. Start the Postgres Docker Compose:
docker-compose -f docker-compose-postgres.yml up
  2. Run an AWS rds-data execute-statement with SELECT current_database();
aws --endpoint-url http://localhost:8080 rds-data execute-statement --database 'test' --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:dummy" --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy" --sql "SELECT current_database();"

Expected behavior
Result should be:

{
    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "stringValue": "test"
            }
        ]
    ]
}

Actual Behavior
Actual results:

{
    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "stringValue": "postgres"
            }
        ]
    ]
}

Desktop (please complete the following information):

  • OS: macOS Catalina
  • Version: 10.15.2

Additional context
I believe this is due to resource.use_database executing a use {database_name} statement. This works for MySQL, but not for PostgreSQL, where a connection must be established to the target database and cannot be switched using SQL.
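
A minimal sketch of the direction this implies (hypothetical helper, not the project's code; assumes a native psycopg2 driver): keep one connection per requested database instead of issuing use.

import psycopg2

_connections = {}

def get_connection(host, port, user, password, database='postgres'):
    """PostgreSQL cannot switch databases via SQL, so cache a connection per database."""
    key = (host, port, database)
    if key not in _connections:
        _connections[key] = psycopg2.connect(
            host=host, port=port, user=user, password=password, dbname=database,
        )
    return _connections[key]

# A request specifying database='test' gets a connection opened against 'test'.
conn = get_connection('127.0.0.1', 5432, 'postgres', 'example', database='test')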

Error: "Exception in ASGI application"

Describe the bug

Querying local-data-api causes this error:

[2020-04-30 14:04:16 +0000] [8] [ERROR] Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/fastapi/applications.py", line 140, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/applications.py", line 134, in __call__
    await self.error_middleware(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 178, in __call__
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 156, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 73, in __call__
    raise exc from None
  File "/usr/local/lib/python3.7/site-packages/starlette/exceptions.py", line 62, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 590, in __call__
    await route(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 208, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/usr/local/lib/python3.7/site-packages/fastapi/routing.py", line 129, in app
    raw_response = await run_in_threadpool(dependant.call, **values)
  File "/usr/local/lib/python3.7/site-packages/starlette/concurrency.py", line 25, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/app/local_data_api/main.py", line 85, in execute_statement
    request.database,
  File "/app/local_data_api/resources/resource.py", line 216, in get_resource
    connected_database = connection.get_dsn_parameters()[
AttributeError: 'Connection' object has no attribute 'get_dsn_parameters'

To Reproduce

The problem occurs only with MySQLJDBC in Codeship CI. It works fine with PostgreSQLJDBC both locally and in Codeship, and with MySQLJDBC locally.

My Codeship config looks like this:

aurora-mysql:
  image: mysql:5.6
  command: --default-authentication-plugin=mysql_native_password
  environment:
    MYSQL_DATABASE: dashboard
    MYSQL_ROOT_PASSWORD: password
aurora-mysql-http:
  image: koxudaxi/local-data-api
  environment:
    ENGINE: MySQLJDBC
    MYSQL_HOST: aurora-mysql
    MYSQL_PORT: 3306
    MYSQL_USER: root
    MYSQL_PASSWORD: password
    RESOURCE_ARN: arn:aws:rds:us-east-1:123456789012:cluster:aurora-mysql
    SECRET_ARN: arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-mysql
  depends_on:
    - aurora-mysql
tester:
  image: ...
  environment:
    - AURORA_HOST=aurora-mysql
    - AURORA_ENDPOINT=http://aurora-mysql-http

What I can see is that MySQL is operational and unit tests with direct DB connection are green.

Autoreload is disabled because the development mode is off

Describe the bug
With v0.6.0 and above, running the docker run command gives the following error:

[screenshot: error output]

To Reproduce

  1. Run the following command -
docker run --rm -it --name my-data-api -p 8080:80 -e MYSQL_HOST=host.docker.internal -e MYSQL_PORT=3306 -e MYSQL_USER=root -e RESOURCE_ARN=arn:aws:rds:ap-south-1:123456789012:cluster:dummy -e SECRET_ARN=arn:aws:secretsmanager:ap-south-1:123456789012:secret:dummy -e MYSQL_PASSWORD=password koxudaxi/local-data-api:0.6.0

Expected behavior
(Successful with v0.5.8)

[screenshot: successful startup log]

JVM Crashes

Describe the bug
When performing the execute-statement request, I get the following error:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f7168a9d0f9, pid=74, tid=0x00007f71707e4b88
#
# JRE version: OpenJDK Runtime Environment (8.0_222-b10) (build 1.8.0_222-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.222-b10 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.13.0
# Distribution: Custom build (Tue Oct  1 11:00:28 UTC 2019)
# Problematic frame:
# C  [_jpype.cpython-37m-x86_64-linux-gnu.so+0x430f9]  JPJavaEnv::NewLocalRef(_jobject*)+0x9
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /app/hs_err_pid74.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   https://icedtea.classpath.org/bugzilla
#

To Reproduce

  1. Start the docker image:
    docker run --rm -it --name my-data-api -p 8080:80 -e MYSQL_HOST=127.0.0.1 -e MYSQL_PORT=3306 -e MYSQL_USER=root -e MYSQL_PASSWORD= -e RESOURCE_ARN=arn:aws:rds:us-east-1:123456789012:cluster:dummy -e SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy koxudaxi/local-data-api
  2. Perform a request:
    aws --endpoint-url http://127.0.0.1:8080 rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:dummy" --sql "show databases" --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy" --database 'test'

Expected behavior
The Local Data API should return some data.
Does anyone have any idea what could be the root cause of this issue?

Desktop (please complete the following information):

  • OS: iOS

BadRequestException: Connection refused

Describe the bug
Hoping someone can help. I am trying to use the local-data-api image as-is, then calling it with the aws client as in the sample. I keep getting a Connection refused error.

To Reproduce
Steps to reproduce the behavior:

  1. On the 1st terminal, I have

docker run --rm -it --name my-data-api -p 8080:80 -e MYSQL_HOST=127.0.0.1 -e MYSQL_PORT=3306 -e MYSQL_USER=root -e MYSQL_PASSWORD=example -e RESOURCE_ARN=arn:aws:rds:us-east-1:123456789012:cluster:dummy -e SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy koxudaxi/local-data-api
2022-01-14 16:05:36.497 [main] TRACE Application - {
    # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 6
    "application" : {
        # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 7
        "modules" : [
            # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 7
            "com.koxudaxi.localDataApi.ApplicationKt.module"
        ]
    },
    # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 2
    "deployment" : {
        # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 3
        "port" : 8080
    },
    # Content hidden
    "security" : "***"
}

2022-01-14 16:05:36.605 [main] INFO Application - Autoreload is disabled because the development mode is off.
2022-01-14 16:05:37.193 [main] INFO Application - Responding at http://0.0.0.0:80

  2. On the 2nd terminal, I use the aws script from the README and get this response:

aws --endpoint-url http://127.0.0.1:8080 rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:dummy" --sql "show databases" --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy" --database 'test'

An error occurred (BadRequestException) when calling the ExecuteStatement operation: Database error code: -1. Message: : Socket fail to connect to host:127.0.0.1, port:3306. Connection refused (Connection refused)

Desktop (please complete the following information):

  • OS: macOS 10.15.7

Additional context

  • I tried different values of MYSQL_HOST, e.g. localhost. It did not make a difference.
  • I tried using AWS.RDSDataService (in JavaScript) to executeStatement at the endpoint http://127.0.0.1:8080

Any suggestion is appreciated. Thanks in advance.

Pinned koxudaxi/local-data-api image tag with custom SECRET_ARN support

Is your feature request related to a problem? Please describe.

AFAIK only koxudaxi/local-data-api:latest supports providing a custom SECRET_ARN environment variable. With koxudaxi/local-data-api:0.6.12 (and probably older versions), if SECRET_ARN does not equal the default ARN arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy, then rds-data API commands raise the following error:

Error fetching secret {secret_arn} : Secrets Manager can’t find the specified secret (Service: AWSSecretsManager; Status Code: 400; Error Code: ResourceNotFoundException; Request ID:  00000000-1111-2222-3333-44444444444)

Describe the solution you'd like

A pinned version of the koxudaxi/local-data-api:latest image pushed to Docker Hub.


local-data-api does not behave the same as the real AWS data-api when handling PreparedStatement string types

First, a big thank you for this fantastic project and all the work that has clearly been put into it. It is very much appreciated.

Describe the bug
Running certain queries does not behave the same in local-data-api as in a real AWS Aurora Serverless environment via the data-api.

local-data-api sets the postgres JDBC connection parameter stringtype to unspecified here, however AWS data-api simply uses the default of VARCHAR. The postgres docs describe the operation of the stringtype parameter here.

To Reproduce
Steps to reproduce the behavior:

  1. Run a prepared-statement query providing a string parameter value for a uuid-typed column:
It works.
  2. Run the same query against the data-api in AWS:
BadRequestException: Database error code: 0. Message: ERROR: column "id" is of type uuid but expression is of type character varying

I can try to generate some HTTP requests you can run directly as a test / proof if that would be helpful.

Expected behavior
To the extent possible, local-data-api should behave the same as the real AWS data-api.

Additional context
This divergence was introduced in PR#108, which seems to have been done at least in part as a fix for #106. @koxudaxi references this exact problem in this comment where they note that local-data-api behaves differently than the real AWS data-api.

Return Postgres Arrays as Type 'arrayValue'

I know there's some other discussion regarding Postgres arrays but this seemed to warrant a separate ticket.

Currently if I retrieve a column from Postgres of type _text (text array), the returned metadata for the column using local-data-api looks like:

{'arrayBaseColumnType': 0, 'isAutoIncrement': False, 'isCaseSensitive': True, 'isCurrency': False, 'isSigned': False, 'label': 'container_type', 'name': 'container_type', 'nullable': 0, 'precision': 2147483647, 'scale': 0, 'schemaName': '', 'tableName': 'required_attributes', 'type': 12, 'typeName': 'text'}

Notice 'typeName' returning text instead of _text. When deployed on AWS, the Data API returns the following for the same column:

{'arrayBaseColumnType': 0, 'isAutoIncrement': False, 'isCaseSensitive': True, 'isCurrency': False, 'isSigned': False, 'label': 'attributes', 'name': 'attributes', 'nullable': 1, 'precision': 2147483647, 'scale': 0, 'schemaName': '', 'tableName': 'required_attributes', 'type': 2003, 'typeName': '_text'}

Notice 'typeName' indeed reflects the Postgres type _text.

This behavior continues in the payload returned from the database, where local-data-api returns the value of the column as

{'stringValue': '{Volume}'}

where 'Volume' is the single item in the Postgres array. Meanwhile on AWS, Data API will return the following as value for the column:

{'arrayValue': {'stringValues': ['Volume']}}

Data API's awareness of Postgres arrays seems to be better during read than during write!

I'm using:

  • boto3
  • Postgres 10.x
  • Python 3.7

In code terms, the difference is captured in the two functions I maintain for local versus deployed Data API:

from typing import Dict

# Local Data API emulation currently returns arrays from the database differently than
# AWS Data API, so let's establish at instantiation which function is doing our parsing.
def parse_array_local(field: Dict):
    """When running locally, parse string representation of array/set."""

    return next(iter(field.values())).lstrip('{').rstrip('}').split(',')


def parse_array_deployed(field: Dict):
    """When running on AWS, return nicely formatted - albeit nested - list."""

    return next(iter(field.values()))['stringValues']

Thank you, again. -jeff

Invalid Transaction Id

Describe the bug

When beginning a new transaction on the first statement execution (of multiple), it throws an error: Invalid Transaction Id.

Expected behavior

The transaction id should remain valid across multiple statement executions until committed.

Additional context

A Node.js project using the AWS CDK Aurora client to execute statements.
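
For reference, the expected flow, sketched with boto3 (table t is hypothetical; the transactionId returned by begin_transaction should stay valid for every statement until commit_transaction):

import boto3

client = boto3.client(
    'rds-data',
    endpoint_url='http://127.0.0.1:8080',
    aws_access_key_id='aaa',
    aws_secret_access_key='bbb',
    region_name='us-east-1',
)

resource_arn = 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
secret_arn = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'

# Begin one transaction and reuse its id across several statements.
tx = client.begin_transaction(
    resourceArn=resource_arn, secretArn=secret_arn, database='test',
)

for sql in ('INSERT INTO t (id) VALUES (1)', 'INSERT INTO t (id) VALUES (2)'):
    client.execute_statement(
        resourceArn=resource_arn,
        secretArn=secret_arn,
        sql=sql,
        transactionId=tx['transactionId'],  # must be accepted for each statement
    )

client.commit_transaction(
    resourceArn=resource_arn,
    secretArn=secret_arn,
    transactionId=tx['transactionId'],
)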

Cannot access local-data-api on AWS CodeBuild

Describe the bug

With v0.6.0 and above, I get the error below when trying to access local-data-api on AWS CodeBuild.

[Container] 2021/04/14 05:39:38 Running command curl -i "http://localhost:8080"

To Reproduce

  1. Create a stack
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: local-data-api
      Source:
        Location: 'https://github.com/koxudaxi/local-data-api'
        Type: GITHUB
        BuildSpec: |
          version: 0.2
          phases:
            build:
              commands:
                - docker-compose -f docker-compose-postgres.yml up -d
                - sleep 30s
                - curl -i "http://localhost:8080"
      Environment:
        Type: LINUX_CONTAINER
        Image: aws/codebuild/standard:5.0
        ComputeType: BUILD_GENERAL1_SMALL
        PrivilegedMode: true
      TimeoutInMinutes: 60
      QueuedTimeoutInMinutes: 60
      ServiceRole: !GetAtt IAMRoleCodeBuildServiceRole.Arn
      Artifacts:
        Type: NO_ARTIFACTS
  IAMRoleCodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: 'codebuild-local-data-api-service-role'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action:
              - sts:AssumeRole
            Effect: Allow
            Principal:
              Service:
                - codebuild.amazonaws.com
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AdministratorAccess'
  2. Start the build

Expected behavior

Successful with v0.5.8

[Container] 2021/04/14 03:50:22 Running command curl -i "http://localhost:8080"
HTTP/1.1 404 Not Found
date: Wed, 14 Apr 2021 03:50:24 GMT
server: uvicorn
content-length: 22
content-type: application/json
{"detail":"Not Found"}

java.net.SocketException: Permission denied

Describe the bug
I got the error below when running docker-compose up.

Recreating local-data-api_local-data-api_1 ... done
Recreating local-data-api_db_1             ... done
Attaching to local-data-api_local-data-api_1, local-data-api_db_1
local-data-api_1  | 2021-04-03 04:08:54.069 [main] TRACE Application - {
local-data-api_1  |     # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 6
local-data-api_1  |     "application" : {
local-data-api_1  |         # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 7
local-data-api_1  |         "modules" : [
local-data-api_1  |             # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 7
local-data-api_1  |             "com.koxudaxi.localDataApi.ApplicationKt.module"
local-data-api_1  |         ]
local-data-api_1  |     },
local-data-api_1  |     # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 2
local-data-api_1  |     "deployment" : {
local-data-api_1  |         # application.conf @ jar:file:/app/local-data-api.jar!/application.conf: 3
local-data-api_1  |         "port" : 8080
local-data-api_1  |     },
local-data-api_1  |     # Content hidden
local-data-api_1  |     "security" : "***"
local-data-api_1  | }
local-data-api_1  |
local-data-api_1  | 2021-04-03 04:08:54.142 [main] INFO  Application - Autoreload is disabled because the development mode is off.
db_1              | 2021-04-03 04:08:54.269 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1              | 2021-04-03 04:08:54.269 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1              | 2021-04-03 04:08:54.410 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
local-data-api_1  | 2021-04-03 04:08:54.532 [main] INFO  Application - Responding at http://0.0.0.0:80
db_1              | 2021-04-03 04:08:54.560 UTC [20] LOG:  database system was shut down at 2021-04-03 04:08:25 UTC
db_1              | 2021-04-03 04:08:54.593 UTC [1] LOG:  database system is ready to accept connections
local-data-api_1  | Exception in thread "main" java.net.SocketException: Permission denied
local-data-api_1  |     at java.base/sun.nio.ch.Net.bind0(Native Method)
local-data-api_1  |     at java.base/sun.nio.ch.Net.bind(Unknown Source)
local-data-api_1  |     at java.base/sun.nio.ch.Net.bind(Unknown Source)
local-data-api_1  |     at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
local-data-api_1  |     at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
local-data-api_1  |     at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)
local-data-api_1  |     at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
local-data-api_1  |     at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
local-data-api_1  |     at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
local-data-api_1  |     at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
local-data-api_1  |     at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)
local-data-api_1  |     at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
local-data-api_1  |     at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
local-data-api_1  |     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
local-data-api_1  |     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
local-data-api_1  |     at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
local-data-api_1  |     at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
local-data-api_1  |     at io.ktor.server.netty.EventLoopGroupProxy$Companion$create$factory$1$1.run(NettyApplicationEngine.kt:227)
local-data-api_1  |     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
local-data-api_1  |     at java.base/java.lang.Thread.run(Unknown Source)

To Reproduce
Steps to reproduce the behavior:

  1. Git clone this project
  2. Use docker-compose to start it docker-compose -f docker-compose-postgres.yml up
  3. Here is docker-compose-postgres.yml content

services:
  local-data-api:
    image: koxudaxi/local-data-api
    restart: always
    environment:
      ENGINE: PostgreSQLJDBC
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: postgres:10.7-alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_DB: test
    ports:
        - "5435:5432"
  4. See the error

Environment

  • OS: CentOS Linux release 7.6.1810
  • Docker: Docker version 20.10.5, build 55c4c88/docker-compose version 1.23.2, build 1110ad01
  • Version 0.6.6

Using README instructions, receiving `botocore.exceptions.ClientError`

Describe the bug

Using the README instructions for a docker-compose.yml file with Postgres settings, I receive the following error when executing the test commands from the README:

botocore.exceptions.ClientError: An error occurred (500) when calling the ExecuteStatement operation (reached max retries: 4): Internal Server Error

To Reproduce

The README describes setup for the docker-compose.yml file for Postgres as follows, which I am using:

version: '3.1'

services:
  local-data-api:
    image: koxudaxi/local-data-api
    restart: always
    environment:
      ENGINE: PostgreSQLJDBC
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: example
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - "8080:80"
  db:
    image: postgres:10.7-alpine
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_DB: test
    ports:
        - "5432:5432"

In my python environment, I have boto3 version 1.13.7. My python version is 3.7.

I run docker-compose up -d and see the containers running with the right ports.

In my python environment, I start a python shell and run the following commands:

>>> import boto3; client = boto3.client('rds-data', endpoint_url='http://127.0.0.1:8080', aws_access_key_id='aaa',  aws_secret_access_key='bbb')
>>> client.execute_statement(resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy', secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy', sql='show databases', database='test')

The result is:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/user/.local/share/virtualenvs/writer-lr1TS-R9/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Users/user/.local/share/virtualenvs/writer-lr1TS-R9/lib/python3.7/site-packages/botocore/client.py", line 635, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (500) when calling the ExecuteStatement operation (reached max retries: 4): Internal Server Error

Expected behavior

I expect execute_statement command to return a valid response instead of an error, as shown in the README example.


Desktop (please complete the following information):

  • OS: Mac OS 10.14.2


Support PgArray

Describe the bug
When you try to get a postgresql array column with a query like:

SELECT string_to_array('1, 2, 3, 4', ',');

App will crash with the error:

Exception: unsupported type <class 'jpype._jclass.org.postgresql.jdbc.PgArray'>: {1," 2"," 3"," 4"}

To Reproduce
In postgresql, make a query like the following (engine configured as PostgreSQLJDBC):

POST http://127.0.0.1:8080/Execute
User-Agent: aws-sdk-nodejs/2.647.0 linux/v12.16.1 promise
Content-Type: application/json

{
  "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy",
  "resourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:dummy",
  "sql": "SELECT string_to_array('1, 2, 3, 4', ',')",
  "database": "test"
}

Expected behavior
The app needs to return a string value {1, 2, 3, 4}, like in this example from a real Aurora Data API:
[screenshot: real Data API response]

By definition it is a stringValue.
[screenshot: field definition]

About the PostgreSQL engine (no JDBC):
Making the query with the Python native connector returns a list:

unsupported type <class 'list'>: ['1', ' 2', ' 3', ' 4']

I think it is kind of dangerous to convert a list to a schema like {1, 2, 3, 4}. Maybe it needs a refactor like conversions-per-type (JVM (PSQL/MySQL) and non-JVM classes (PSQL and MySQL separated))?

cannot insert array with PostgreSQL

Describe the bug
Using local-data-api 0.6.7 with PostgreSQL, I get an error when trying to pass an array as a parameter to the AWS.RDSDataService function executeStatement using JavaScript.
Postgres supports arrays as a data type and RDSDataService does as well, but using arrays with local-data-api seems not to be supported, as I get the following error in the docker logs:

2021-04-08 11:59:22.921 [eventLoopGroupProxy-4-2] ERROR Application - Unhandled: POST - /Execute kotlinx.serialization.json.internal.JsonDecodingException: Unexpected JSON token at offset 1146: Encountered an unknown key 'arrayValue'

To Reproduce
Steps to reproduce the behavior:

  • Have a setup using PostgreSQL
  • Create a simple table with a column called col of type TEXT[]:
    CREATE TABLE test ( col TEXT[] )
  • Have a simple node script trying to insert data:

const AWS = require('aws-sdk');
const client = new AWS.RDSDataService({
    region: 'us-east-1',
    endpoint: 'http://127.0.0.1:8080',
});

(async function() {
    try {
        await client.executeStatement({
            resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy',
            secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy',
            database: 'test',
            sql: 'INSERT INTO test (col) VALUES (:data)',
            parameters: [{
                name: 'data',
                value: {
                    arrayValue: {
                        stringValues: ['a', 'b'],
                    },
                },
            }]
        }).promise();
    }
    catch (e) {
        console.log(e);
    }
})();

Expected behavior
A new entry in table test is created with the array ['a', 'b'] as the value for column col.

Additional context
AWS documentation: https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html

Add EXPOSE to Dockerfile for better Gitlab CI compatibility

Is your feature request related to a problem? Please describe.
Incompatibility with Gitlab CI pipelines: Gitlab uses the exposed port to determine service health.

Describe the solution you'd like
Add EXPOSE to Dockerfile.
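
A one-line sketch of the change (assuming the container keeps listening on port 80, per the "8080:80" mapping in the compose files):

# Document the service port so CI runners can health-check the container.
EXPOSE 80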

Describe alternatives you've considered
Maintaining our own fork of the Dockerfile, which is hard to maintain.

Additional context
Extract from Gitlab pipeline when using local-data-api as a service.

e2e:
  stage: quality
  services:
    - name: postgres:10.12-alpine
      alias: db
    - name: koxudaxi/local-data-api:0.6.10
      alias: local-data-api

Result:

Using docker image sha256:3b192751cea87d9926ea4c5190929eb9f6a12a637a85127cc0287f161878ec73 for koxudaxi/local-data-api:0.6.10 ...
Waiting for services to be up and running...
*** WARNING: Service runner-3q1dx6xf-project-19117306-concurrent-0-4a702edf8cdc3559-koxudaxi__local-data-api-1 probably didn't start properly.
Health check error:
service "runner-3q1dx6xf-project-19117306-concurrent-0-4a702edf8cdc3559-koxudaxi__local-data-api-1" has no exposed ports
Service container logs:
*********

PostgreSQL "RETURNING" is not returning...

Describe the bug
Using an "INSERT INTO" with "RETURNING *" is not returning any data, only empty array.

To Reproduce
Steps to reproduce the behavior:

sql = `INSERT INTO unit 
    (unit_name, supplier, deleted, "createdAt", "updatedAt", uuid) 
    VALUES (:unit_name, :supplier, :deleted, :createdAt, :updatedAt, :uuid) 
    RETURNING id`;

Expected behavior
Expected ID column value to be returned but got empty Array.

TIMESTAMP typehint parses incorrect format

Describe the bug
local-data-api interprets the TIMESTAMP type hint as requiring the supplied string to be parsed from ISO format.
However, the RDS Data API client does not interpret timestamps in Python's ISO format; the timestamps should instead be in the form YYYY-MM-DD HH:MM:SS[.FFF].
Source: Data API Docs

Expected Behaviour
local-data-api should parse timestamps in this form.
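
A small Python illustration of the two forms (a sketch; truncating strftime's microseconds to milliseconds):

from datetime import datetime

dt = datetime(2021, 3, 3, 8, 31, 15, 77000)

# ISO format, which local-data-api currently expects:
print(dt.isoformat())  # 2021-03-03T08:31:15.077000

# The form the Data API documents, YYYY-MM-DD HH:MM:SS[.FFF]:
print(dt.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3])  # 2021-03-03 08:31:15.077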

Docker container fails to start with Python errors

Describe the bug
Docker Compose fails to start service and constantly restarts

To Reproduce
Steps to reproduce the behavior:

  1. create docker-compose.yml in project
  2. run docker-compose up

Expected behavior
The service starts successfully and accepts connections

docker-compose.yml

version: '3.1'

services:
  local-data-api:
    image: koxudaxi/local-data-api
    restart: always
    environment:
      ENGINE: PostgresSQLJDBC
      POSTGRES_HOST: db
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
      RESOURCE_ARN: 'arn:aws:rds:us-east-1:123456789012:cluster:dummy'
      SECRET_ARN: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy'
    ports:
      - '8080:80'
  db:
    image: mdillon/postgis
    restart: always
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: dev
    ports:
      - '5432:5432'

Error Messages

local-data-api_1  | [2020-11-08 11:06:34 +0000] [1] [INFO] Starting gunicorn 20.0.4
local-data-api_1  | [2020-11-08 11:06:34 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
local-data-api_1  | [2020-11-08 11:06:34 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
local-data-api_1  | [2020-11-08 11:06:34 +0000] [7] [INFO] Booting worker with pid: 7
local-data-api_1  | [2020-11-08 11:06:34 +0000] [7] [ERROR] Exception in worker process
local-data-api_1  | Traceback (most recent call last):
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
local-data-api_1  |     worker.init_process()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/uvicorn/workers.py", line 57, in init_process
local-data-api_1  |     super(UvicornWorker, self).init_process()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 119, in init_process
local-data-api_1  |     self.load_wsgi()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
local-data-api_1  |     self.wsgi = self.app.wsgi()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
local-data-api_1  |     self.callable = self.load()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
local-data-api_1  |     return self.load_wsgiapp()
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
local-data-api_1  |     return util.import_app(self.app_uri)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 358, in import_app
local-data-api_1  |     mod = importlib.import_module(module)
local-data-api_1  |   File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
local-data-api_1  |     return _bootstrap._gcd_import(name[level:], package, level)
local-data-api_1  |   File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
local-data-api_1  |   File "<frozen importlib._bootstrap>", line 991, in _find_and_load
local-data-api_1  |   File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
local-data-api_1  |   File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
local-data-api_1  |   File "<frozen importlib._bootstrap_external>", line 783, in exec_module
local-data-api_1  |   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
local-data-api_1  |   File "/app/local_data_api/main.py", line 28, in <module>
local-data-api_1  |     setup()
local-data-api_1  |   File "/app/local_data_api/settings.py", line 58, in setup
local-data-api_1  |     raise NotImplementedError("Engine not already implemented")
local-data-api_1  | NotImplementedError: Engine not already implemented
local-data-api_1  | [2020-11-08 11:06:34 +0000] [7] [INFO] Worker exiting (pid: 7)
local-data-api_1  | {"loglevel": "info", "workers": 1, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 120, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
local-data-api_1  | [2020-11-08 11:06:34 +0000] [1] [INFO] Shutting down: Master
local-data-api_1  | [2020-11-08 11:06:34 +0000] [1] [INFO] Reason: Worker failed to boot.
local-data-api_1  | {"loglevel": "info", "workers": 1, "bind": "0.0.0.0:80", "graceful_timeout": 120, "timeout": 120, "keepalive": 5, "errorlog": "-", "accesslog": "-", "workers_per_core": 1.0, "use_max_workers": null, "host": "0.0.0.0", "port": "80"}
api_local-data-api_1 exited with code 3
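
The traceback ends in setup() rejecting the configured engine, so the ENGINE environment variable passed to the local-data-api container is the first thing to check. A minimal sketch of the kind of validation the log points at (the supported-engine names and environment handling here are assumptions, not the project's actual code):

import os

# Assumed engine names; consult the project README for the real list.
SUPPORTED_ENGINES = {'MySQL', 'MySQLJDBC', 'PostgreSQLJDBC'}


def setup() -> None:
    engine = os.environ.get('ENGINE', '')
    if engine not in SUPPORTED_ENGINES:
        # Matches the error raised in settings.py in the log above.
        raise NotImplementedError('Engine not already implemented')

If the compose file omits ENGINE or sets a value the image does not recognize, the worker fails to boot exactly as shown.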

Desktop (please complete the following information):

  • OS: Ubuntu 20.04

Support PGpoint

Is your feature request related to a problem? Please describe.
All queries involving the Postgres point type fail.

Describe the solution you'd like
Add support for PGpoint. I think point should be handled as a string type, similarly to pgarray: https://github.com/koxudaxi/local-data-api/pull/63/files

Additional context
Using ENGINE: PostgreSQLJDBC

Exception:

Exception: unsupported type <java class 'org.postgresql.geometric.PGpoint'>: (50.074534,14.444137)
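
A hedged sketch of the proposed mapping, in the spirit of the pgarray change linked above (the converter name and dispatch-by-class-name are assumptions about the project's internals, not its actual code):

from typing import Any

def pg_value_to_field(value: Any):
    # Hypothetical converter: PGpoint has no matching Data API field type,
    # so fall back to its textual form, as the pgarray fix does.
    if type(value).__name__ == 'PGpoint':  # assumption: dispatch by class name
        return {'stringValue': str(value)}  # e.g. '(50.074534,14.444137)'
    ...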

Thanks for this amazing library πŸ‘

Exception with Postgres BYTEA type

local-data-api image: koxudaxi/local-data-api:0.5.4
PostgreSQL image: postgres:10.12-alpine

Given a table such as

			CREATE TABLE IF NOT EXISTS events (
				id SERIAL PRIMARY KEY,
				rep VARCHAR(64),
				data BYTEA
			)

and a query such as

SELECT id, data FROM events

local-data-api throws this exception:

local-data-api_1  | 192.168.144.1:45132 - "POST /Execute HTTP/1.1" 500
local-data-api_1  | [2021-03-04 16:48:35 +0000] [7] [ERROR] Exception in ASGI application
local-data-api_1  | Traceback (most recent call last):
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 396, in run_asgi
local-data-api_1  |     result = await app(self.scope, self.receive, self.send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
local-data-api_1  |     return await self.app(scope, receive, send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 140, in __call__
local-data-api_1  |     await super().__call__(scope, receive, send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 134, in __call__
local-data-api_1  |     await self.error_middleware(scope, receive, send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 178, in __call__
local-data-api_1  |     raise exc from None
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 156, in __call__
local-data-api_1  |     await self.app(scope, receive, _send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 73, in __call__
local-data-api_1  |     raise exc from None
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 62, in __call__
local-data-api_1  |     await self.app(scope, receive, sender)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 590, in __call__
local-data-api_1  |     await route(scope, receive, send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 208, in __call__
local-data-api_1  |     await self.app(scope, receive, send)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
local-data-api_1  |     response = await func(request)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 129, in app
local-data-api_1  |     raw_response = await run_in_threadpool(dependant.call, **values)
local-data-api_1  |   File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 25, in run_in_threadpool
local-data-api_1  |     return await loop.run_in_executor(None, func, *args)
local-data-api_1  |   File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
local-data-api_1  |     result = self.fn(*self.args, **self.kwargs)
local-data-api_1  |   File "/app/local_data_api/main.py", line 99, in execute_statement
local-data-api_1  |     response: ExecuteStatementResponse = resource.execute(
local-data-api_1  |   File "/app/local_data_api/resources/jdbc/__init__.py", line 158, in execute
local-data-api_1  |     records=[
local-data-api_1  |   File "/app/local_data_api/resources/jdbc/__init__.py", line 159, in <listcomp>
local-data-api_1  |     [
local-data-api_1  |   File "/app/local_data_api/resources/jdbc/__init__.py", line 160, in <listcomp>
local-data-api_1  |     self.get_filed_from_jdbc_type(
local-data-api_1  |   File "/app/local_data_api/resources/jdbc/postgres.py", line 45, in get_filed_from_jdbc_type
local-data-api_1  |     return super().get_filed_from_jdbc_type(value, jdbc_type)
local-data-api_1  |   File "/app/local_data_api/resources/jdbc/__init__.py", line 97, in get_filed_from_jdbc_type
local-data-api_1  |     return Field(blobValue=b64encode(value))
local-data-api_1  |   File "/usr/local/lib/python3.8/base64.py", line 58, in b64encode
local-data-api_1  |     encoded = binascii.b2a_base64(s, newline=False)
local-data-api_1  | TypeError: a bytes-like object is required, not 'str'
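
The failing frame is the b64encode() call: it requires bytes, but the JDBC layer appears to hand the BYTEA column back as str. A minimal sketch of the kind of guard that would avoid the TypeError (the helper name, and the assumption that the driver returns UTF-8 text, are mine, not the project's):

from base64 import b64encode
from typing import Union

def to_blob_value(value: Union[str, bytes]) -> bytes:
    # Hypothetical helper: normalize str coming back from the JDBC driver
    # to bytes before base64-encoding it for the Data API blobValue field.
    if isinstance(value, str):
        # Assumption: the correct decoding depends on how the driver
        # stringifies BYTEA; UTF-8 is used here purely for illustration.
        value = value.encode('utf-8')
    return b64encode(value)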

Local data api is always using transactions with /Execute

Describe the bug
local-data-api always wraps /Execute statements in a transaction, so queries that must run outside a transaction block cannot be executed.

This is a significant problem with PostgreSQL in particular.

To Reproduce
Steps to reproduce the behavior:

  1. "Connect" to local data-api.
import boto3
client = boto3.client('rds-data', endpoint_url='http://127.0.0.1:8080', aws_access_key_id='aaa', aws_secret_access_key='bbb')
  2. Create new type with /Execute
client.execute_statement(resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy', secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy', sql="CREATE type some_t as enum ('A','B');", database='db')
{'ResponseMetadata': {'HTTPStatusCode': 200, 'HTTPHeaders': {'content-length': '49', 'content-type': '*/*; charset=UTF-8'}, 'RetryAttempts': 0}, 'generatedFields': [], 'numberOfRecordsUpdated': 0}
  3. Alter the type with /Execute
client.execute_statement(resourceArn='arn:aws:rds:us-east-1:123456789012:cluster:dummy', secretArn='arn:aws:secretsmanager:us-east-1:123456789012:secret:dummy', sql="ALTER TYPE some_t ADD VALUE IF NOT EXISTS 'NEW_VALUE'", database='db')
  4. See error
Traceback (most recent call last):
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the ExecuteStatement operation: Database error code: 0. Message: ERROR: ALTER TYPE ... ADD cannot run inside a transaction block

Expected behavior
The query runs without errors.

Additional context
PostgreSQL v10, similar to what is available in Aurora Serverless v1.
There should be an option to disable the implicit transaction for /Execute.
Better yet, autocommit should be the default, since that is how the AWS Data API works!
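
A hedged sketch of the requested behavior on the proxy side: only open an explicit transaction when the client began one, and run standalone /Execute calls in autocommit mode, matching the real Data API (the autocommit attribute follows DB-API/psycopg2 conventions, and the function shape is an assumption about the project's internals):

from typing import Any, List, Optional

def execute(connection: Any, sql: str, transaction_id: Optional[str] = None) -> List[Any]:
    # Hypothetical: autocommit unless the caller started a transaction via
    # BeginTransaction, so statements such as ALTER TYPE ... ADD VALUE that
    # refuse to run inside a transaction block can succeed.
    connection.autocommit = transaction_id is None
    with connection.cursor() as cursor:
        cursor.execute(sql)
        return cursor.fetchall() if cursor.description else []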
