partiql / partiql-lang-kotlin

PartiQL libraries and tools in Kotlin.

Home Page: https://partiql.org/

License: Apache License 2.0

Languages: Kotlin 97.93%, HTML 1.48%, Shell 0.01%, Inno Setup 0.06%, ANTLR 0.50%, Java 0.03%
Topics: ion, json, kotlin, partiql, sql

partiql-lang-kotlin's People

Contributors

abhikuhikar, alancai98, almann, am357, appwiz, axiomabsolute, dlurton, eko, evanj, johnedquinn, jpschorr, lziq, metadaddy, onurkuruu, pcholakov, popematt, raganhan, rchowell, scb01, sullis, therapon, vgapeyev, yliuuuu, yuxtang-amazon, zslayton

partiql-lang-kotlin's Issues

Move Scalar methods to ExprValue

Currently ExprValue has a scalar property of type Scalar.

This means that instantiating an ExprValue that has a scalar type requires two allocations: one to create the ExprValue and one to create its Scalar instance. To reduce the number of allocations needed, we should move all of Scalar's functions to ExprValue and remove its scalar property.

Scalar ExprValue instances are created in great numbers at evaluation time, so I suspect this could significantly reduce the number of allocations.

software.amazon.sql.eval.Scalar:

abstract class Scalar {
    ...
    open fun booleanValue(): Boolean? = null
    open fun numberValue(): Number? = null
    open fun timestampValue(): Timestamp? = null
    open fun stringValue(): String? = null
    open fun bytesValue(): ByteArray? = null
}

EDIT: I found the following extension methods, which effectively already move the Scalar methods onto ExprValue and which (IMHO) raise the question of why we haven't simply moved them into ExprValue itself:

fun ExprValue.booleanValue(): Boolean =
    scalar.booleanValue() ?: errNoContext("Expected non-null boolean: $ionValue", internal = false)

fun ExprValue.numberValue(): Number =
    scalar.numberValue() ?: errNoContext("Expected non-null number: $ionValue", internal = false)

fun ExprValue.timestampValue(): Timestamp =
    scalar.timestampValue() ?: errNoContext("Expected non-null timestamp: $ionValue", internal = false)

fun ExprValue.stringValue(): String =
    scalar.stringValue() ?: errNoContext("Expected non-null string: $ionValue", internal = false)

fun ExprValue.bytesValue(): ByteArray =
    scalar.bytesValue() ?: errNoContext("Expected non-null LOB: $ionValue", internal = false)
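
For illustration, a rough sketch of what the consolidated type could look like after the move (member list abbreviated; modeling ExprValue as a Kotlin interface here is an assumption):

interface ExprValue {
    // ... existing members (type, iterator, etc.) ...

    // Scalar's accessors folded in; each returns null when the value is not
    // of the requested type, exactly as Scalar behaves today.
    fun booleanValue(): Boolean? = null
    fun numberValue(): Number? = null
    fun timestampValue(): Timestamp? = null // com.amazon.ion.Timestamp
    fun stringValue(): String? = null
    fun bytesValue(): ByteArray? = null
}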

Add Equivalent Java Examples

Currently, the examples project contains usage of the public API in Kotlin.

We should also have a Java version of each of these examples to help keep us mindful of what using the public API looks like from Java. It will also help customers deal with any deficiencies in the API that result from not enforcing an API that conforms to idiomatic Java.

Add compile option that allows disabling of dynamic scoping

We need to add a compile option that disables dynamic scoping - essentially requiring field names to be qualified.

Given the following snippet of IonSQL:

SELECT a, b FROM `{a:1}`

The current scoping rules will cause the identifier a in the SELECT list of this snippet to resolve to the field a of the struct in the FROM clause, and the identifier b to the global variable b (if b exists in the global scope).

Once implemented and enabled, this compile option will cause the SQL++ interpreter to skip the scopes introduced in the FROM clause of the query when resolving variable names. Thus, when the option is enabled, the identifier a will resolve to the global variable a (if a exists in the global scope).

In order to resolve the identifier a in the SELECT list to the a field of the struct in the FROM clause of the above query, it will need to be modified like so:

SELECT table1.a, b FROM `{a:1}` AS table1

(And of course resolution of b remains unchanged.)
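
For illustration, a usage sketch of what enabling such an option might look like; only CompileOptions.build is taken from the existing API, and the option and enum names are hypothetical:

val compiler = EvaluatingCompiler(ion, compileOptions = CompileOptions.build {
    // Hypothetical option: skip FROM-clause scopes during variable resolution,
    // so unqualified identifiers resolve against the global scope only.
    this.scoping(ScopingBehavior.GLOBALS_ONLY)
})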

Migrated from IONSQL-148

Check that all errors have error context

We don't always have meta nodes, so when an exception is thrown at evaluation time we may not know the location (line & column) within the query where the error originated. Additionally, some exceptions might be thrown without any errorContext.

In order to prevent these types of issues, we should modify the test framework (located at tools/sqltest) so that it requires all error expectations to have an error context and source location.

The code to change is located in ScriptExecuteVisitor.compareErrorContextProperties. As part of this change, we will likely also need to fix any tests that start failing.

Migrated from IONSQL-312.

SQL++ Test suite: support annotated variable references for test expectation

It would be nice to support this:

for::{
    template: [ test::{
        ...
        expected: result::$value
        // $value here currently results in an error because the annotation is removed during variable resolution.
        ...
    }],
    variableSets: [ { value: ... } ]
}

For reference, what does work is including the result attribute in the variableSet struct:

for::{
    template: [ test::{
        ...
        expected: $value 
        ...
    }],
    variableSets: [ { value: result:: ... } ] //<---  result:: annotation goes here.
}

Please also remove the reference to this issue from the test script specification, under the known issues heading of the section about the for command.

The AST root node is always missing a SourceLocationMeta

SqlParser determines the line and column of a ParseNode from ParseNode.token.position. However, most of the ParseNodes returned from the SqlParser.parseTerm() do not have a Token associated with them. This means that when the ParseNode tree is later transformed into ExprNode instances many are missing a SourceLocationMeta instance.

This task is to ensure that all ParseNodes returned by parseTerm() are associated with a token correctly.

The root of the problem appears to be that inside parseTerm() we branch on the token type and then call an appropriate function to parse the particular type of expression (e.g. parseSelect(), parseCast(), etc.). Because those parse*() functions do not have access to the original token indicating their context, most ParseNodes returned by parseTerm() are not assigned a Token.

private fun List<Token>.parseTerm(): ParseNode = when (head?.type) {
    ...
    KEYWORD -> when (head?.keywordText) {
        "case" -> when (tail.head?.keywordText) {
            "when" -> tail.parseCase(isSimple = false)
            else -> tail.parseCase(isSimple = true)
        }
        "cast" -> tail.parseCast()
        "select" -> tail.parseSelect()
        "pivot" -> tail.parsePivot()
        ...
    }
    ...
}

One way to fix this might be to pass the head Token into each of the parse*() functions and then include this token in the ParseNode constructor, but this might be tedious. Another way to fix this is to (Kotlin data class) copy the ParseNode before it is returned, assigning the token at that time, but this might have performance implications, particularly for large queries.
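
A rough sketch of the copy approach, assuming ParseNode is a data class with a nullable token property (which the behavior described above implies):

private fun List<Token>.parseTerm(): ParseNode {
    val contextToken = head
    val parsed = when (head?.type) {
        // ... existing dispatch to the parse*() functions ...
        else -> TODO("elided branches")
    }
    // If the parse*() function didn't assign a token, fall back to the token
    // that selected the branch. The data-class copy() allocates a second
    // ParseNode, which is the performance concern noted above.
    return if (parsed.token == null) parsed.copy(token = contextToken) else parsed
}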

Migrated from IONSQL-390.

Not able to build the release

Trying to build using ./gradlew build gives the following error:

w: /Users/argu/workspaces/partiql-lang-kotlin/cli/src/org/partiql/cli/functions/ReadFile.kt: (42, 69): 'iterate(InputStream!): (Mutable)Iterator<IonValue!>!' is deprecated. Deprecated in Java
> Task :cli:jar FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cli:jar'.
> Failed to create MD5 hash for file '/Users/argu/workspaces/partiql-lang-kotlin/lang/build/libs/lang-1.0.0.jar' as it does not exist.

Integrate ktlint into Gradle build

In order to facilitate consistency in code formatting, we should investigate whether ktlint can reformat our code as part of (every?) build.

This will also require re-evaluating CODE-STYLE.md and making changes as necessary.
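
One possible starting point is the third-party ktlint-gradle plugin; the build.gradle.kts sketch below assumes that plugin, and the plugin choice and version are not decisions this issue has made:

plugins {
    id("org.jlleitschuh.gradle.ktlint") version "9.2.1"
}

// Reformat sources before compiling so every build applies the style; the
// plugin also adds ktlintCheck, which it attaches to `check` by default.
tasks.named("compileKotlin") {
    dependsOn("ktlintFormat")
}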

Refactor visitor & rewriters

Our current visitor/rewriter implementation is rather non-standard and lacks the ability to transform an ExprNode AST into another representation that does not use ExprNode as a base class. To remedy this, I propose the following changes:

  • Rename AstVisitor -> AstListener
    • Add one method for every node type to AstListener (instead of just the base types, as we have currently)
    • Update AstWalker to dispatch to these additional methods
  • Rename AstRewriter -> AstVisitor
    • Rename the rewrite* methods to visit*
    • Add a generic argument, making it AstVisitor<T>, and make T the return type of every visit* method
    • Note: Do not rename AstRewriterBase, but do make it implement AstVisitor<ExprNode>

After this change, we will be using the terms visitor, listener and walker in the same way that ANTLR4 does.
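
A rough sketch of the proposed shapes, abbreviated to two node types (the real interfaces would carry one method per ExprNode variant):

interface AstListener {
    fun visitLiteral(node: Literal) {}
    fun visitSelect(node: Select) {}
    // ... one no-op method per node type, dispatched to by AstWalker ...
}

interface AstVisitor<T> {
    fun visitLiteral(node: Literal): T
    fun visitSelect(node: Select): T
    // ... one method per node type, each returning T ...
}

// AstRewriterBase keeps its name but implements AstVisitor<ExprNode>, so the
// existing rewriters keep working while transforms to other types become possible.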

Assign error code for error that occurs when string concatenation fails due to incompatible types.

For example:

'hi' || 1

Throws an error:

code:       null
properties: {
  LINE_NUMBER:1,
  COLUMN_NUMBER:5
}

Instead of null, the error code should probably be something like CONCAT_FAILED_DUE_TO_INCOMPATIBLE_TYPE or similar.

This can also be observed in the REPL:

sql> 'hi' || 1
   |
software.amazon.sql.eval.EvaluationException: Expected text: 1
	<UNKNOWN>: at line 1, column 6: <UNKNOWN>

...

ERROR! (2 ms)
sql>

Optimize `IN` operator when list is only literals

Performance of the IN operator is poor when it is used with many values. Most often, the values on the right side of IN are literals, which can number in the dozens, hundreds, and in rare cases thousands.

We can fairly easily optimize this case: the compiler can inspect the list on the right side of IN and, if every value is a literal, store the values in a data structure such as a TreeSet. Then, when the IN operator is evaluated for a row, we only need to evaluate the left side and check whether the result exists in the TreeSet. If a non-literal value appears anywhere on the right side of IN, we can simply fall back on the existing behavior.

Currently, both sides of the IN operator are evaluated for every row in a query, and this is very expensive, especially when many values exist on the right side.
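
A rough sketch of the compile-time rewrite; every name below is illustrative (CompiledExpr is assumed to be a fun interface) rather than the interpreter's actual internals:

fun compileIn(left: CompiledExpr, right: List<ExprNode>): CompiledExpr =
    if (right.all { it is Literal }) {
        // Evaluate the literals once, at compile time, into a sorted set.
        val values = java.util.TreeSet(DEFAULT_COMPARATOR)
        right.forEach { values.add(evalLiteral(it)) }
        // Per row: evaluate only the left side, then do an O(log n) lookup.
        CompiledExpr { env -> boolExprValue(values.contains(left.eval(env))) }
    } else {
        compileInFallback(left, right) // existing per-row behavior
    }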

Migrated from IONSQL-447.

Document the release process

Write a document describing the process for creating a release and add it to the repository at docs/dev/RELEASE-PROCESS.md. Include as part of this any templates for email or other notifications that are to be sent as part of a release.

Add functions to convert to and from Ion

Although it is generally suboptimal to store Ion text inside of another Ion string, it happens frequently enough that we should support it. This is one of the most frequently encountered topics in our internal message forums.

We should add a pair of functions to convert an arbitrary value to and from an Ion string. For instance:

to_ion_string({ 'foo': 'bar'})

should return the string

"{foo:\"bar\"}"

and

parse_ion_string("{foo:\"bar\"}")

should return the value {'foo': 'bar'}.

We should also consider equivalent functions for Ion binary: to_ion_binary and parse_ion_binary.
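
As a sketch, to_ion_string could be a thin wrapper over ion-java, since IonValue.toString() emits Ion text. The ExprFunction shape below follows the simplified eval(args) signature sketched elsewhere in this tracker, and the helper name is hypothetical:

class ToIonString : ExprFunction {
    override fun eval(args: List<ExprValue>): ExprValue {
        // IonValue.toString() produces the Ion text encoding of the value.
        val ionText = args[0].ionValue.toString()
        return stringExprValue(ionText) // hypothetical ExprValue constructor
    }
}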

Introduce .internal packages

The public API of the interpreter exposes a lot of components publicly without explicit intention. Part of the reason for this is that we sometimes rely on Kotlin's internal access specifier even though it is effectively public from Java's point of view.

I would like to propose that we introduce a series of .internal packages for API components that are not intended to be called directly by customers and that we move all API components currently marked as internal to those packages. We should also re-evaluate all functions, classes and members to see if they should genuinely be public and consider marking them internal and moving them to a .internal package.

Each of the existing packages should also have a corresponding .internal package, for example:

  • software.amazon.sql.ast should also have software.amazon.sql.ast.internal
  • software.amazon.sql.eval should also have software.amazon.sql.eval.internal
  • software.amazon.sql.syntax should also have software.amazon.sql.syntax.internal
  • and so on

This will more clearly communicate to JDK8 customers which parts of the API should be avoided, and for JDK9 and later customers it allows us to explicitly export only the packages which are intended to be part of the public API.

However, it is also possible to separate the public and private APIs into separate jars. Customers would then only add the jar containing the public API to compile-time CLASSPATH while adding both public and private jars at run-time. See the distinction between api and implementation dependencies in the Gradle documentation of the java-library core plug-in. If this is also implemented the internal components of the interpreter will not be exposed to JDK8 customers.
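
A build.gradle.kts sketch of the two-jar idea (the module names are hypothetical):

plugins {
    `java-library`
}

dependencies {
    // Exposed on consumers' compile-time classpath.
    api(project(":partiql-public"))
    // Available to consumers only at run time, not at compile time.
    implementation(project(":partiql-internal"))
}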

Assign error code to divide by zero errors

We do not currently assign an error code to divide-by-zero errors. The following tests in our test suite will not pass until we do:


for::{
    template: [ test::{ name: "divide by zero", sql: $sql, expected: $result } ],
    variableSets: [
        {
            sql: "1 / 0",
            result: error::{
                code: EVALUATOR_DIVIDE_BY_ZERO,  
                properties: {
                    LINE_NUMBER:1,
                    COLUMN_NUMBER:3
                }
            }
        },

        {
            sql: "1 % 0",
            result: error::{
                code: EVALUATOR_DIVIDE_BY_ZERO,
                properties: {
                    LINE_NUMBER:1,
                    COLUMN_NUMBER:3
                }
            }
        },

    ]
}

Migrated from IONSQL-276.

Don't globally override Number operators

In NumberExtensions.kt we define a series of arithmetic operators on java.lang.Number. This is bad because when these are being used it's not at all obvious that a function call is taking place. We should replace these overloaded operators with explicit function calls so that our code is more readable and less confusing.

For example:

val n1: Number = 1
val n2: Number = 2
// Some distance away, most likely in another file...
val n3 = n1 + n2

In order to know that there's an overloaded function call going on, one must remember that the + operator is being overloaded for Number and that n1 and n2 are instances of Number.

I think this is much more readable and explicit:

val n3 = n1.plus(n2)

Value of CompileOption.projectionIteration affects equivalence for structs with missing values

Consider the following snippet:

val compiler = EvaluatingCompiler(ion, compileOptions = CompileOptions.build {
    this.projectionIteration(ProjectionIterationBehavior.UNFILTERED)
})

println(compiler.compile("{ 'a': MISSING } = { }").eval(EvaluationSession.standard()))

(Note: Since the CLI currently doesn't have any way of specifying compile options, this example must be done in a unit test or command-line app.)

When we set ProjectionIterationBehavior to FILTER_MISSING (which is the default), true is printed, but with ProjectionIterationBehavior.UNFILTERED, false is printed.

I'm pretty certain that this compile option should not affect the result of the comparison.

Migrated from IONSQL-398

Implement ORDER BY

Just like the title says: add support for ORDER BY according to the specification.

REPL: print missing instead of null

When printing out MISSING values it should read as MISSING instead of null

ionsql> MISSING
      | 
======' 
null
------- 
Result type was MISSING
OK! (1 ms)

Because the REPL also displays the data type of the result, the user can tell that the result was MISSING; however, null is a misnomer and could be confusing.

This also affects ExprValue.toString() for missing values.

Migrated from IONSQL-427.

CLI output in docs is inconsistent

Ran the first CLI query sample and got the output:

sql> SELECT id FROM `[{id: 5, name:"bill"}, {id: 6, name:"bob"}]` WHERE name = 'bob'
   | 
===' 
{
  id:6
}
--- 
Result type was BAG and contained 1 items
OK! (179 ms)

CLI docs at https://github.com/partiql/partiql-lang-kotlin/blob/master/docs/user/CLI.md shows:

sql> SELECT id FROM `[{id: 5, name:"bill"}, {id: 6, name:"bob"}]` WHERE name = 'bob'
   | 
==='
{
  id:6
}
---

OK!

The docs should show the output exactly as produced by the implementation.

Simplify implementations of ExprFunction

Currently, every implementation of ExprFunction must:

  1. Check that the correct number of arguments were passed (arity).
  2. Check that each argument is of the correct type.

Additionally, the name of the function isn't actually known to the implementation of ExprFunction, which means it must be duplicated in error messages and in BuiltinFunctionFactory.

This leads to a fair amount of boilerplate code and creates a lot of opportunity for inconsistent handling of these types of errors.

We should be able to:

  1. Verify function call arity at compile-time.
  2. Verify function argument types at evaluation-time.

We need to add infrastructure to support this as well as examples of their use.

Here's a rough idea of what this might look like in practice:

class FooExprFunc : ExprFunction {
    // The name of this function
    override val name = "foo"

    // The arity of the function -- this function accepts between 2 and 3 arguments
    override val arity = 2..3

    override val argumentValidators = listOf(
        // The first argument must be an integer
        argMustBe(ExprValueType.INT),

        // The second argument must be an integer or a decimal
        argMustBe(ExprValueType.INT, ExprValueType.DECIMAL),

        // The third argument (if present) must be a string
        argMustBeString)

    override fun eval(args: List<ExprValue>): ExprValue {
        ...
    }
}

Currently, ExprFunction only has the eval function defined -- the name, arity and argumentValidators properties would need to be
added and implemented in all built-in functions and examples.

Expose exception details when unhandled exception occurs in a benchmark

Currently, if an unhandled exception is thrown inside the forked VM that JMH creates during benchmark execution, it is difficult to debug because the exception's details do not appear in the exception that surfaces in the VM which spawned the benchmark.

I have modified the test driver so that the benchmark is executed directly, without JMH, to expose any problems with the benchmark itself before executing it with JMH as usual, but this is still suboptimal because the exception might not originate from SQL++, and such exceptions would still be difficult to debug.

This task is to see whether JMH exposes details about the exception that occurred inside the forked VM and, if so, to include them in the output of the test driver. If JMH doesn't expose these details, another method of getting them into the output of the test driver must be determined.

Give this project its name

Once known, we need to apply the final marketing name of this product.

2 of the locations that need to be changed to the marketing name are:

But there may be others.

Reconsider equivalence for NULL and MISSING values as it relates to group keys

Currently, NaturalValuesComparators cannot distinguish between NULL and MISSING values. This causes the result of GROUP BY to be different depending on if a MISSING or NULL value is encountered first.

If a MISSING value is encountered first, the group key will then contain a missing value. Any subsequent NULL value is then considered to be part of the same group because they are considered equivalent. Conversely, if a NULL value is encountered first, the group key will contain the NULL value instead, and subsequent MISSING values are considered to be part of the same group.

For example, if the environment was:

{
    'foo_null_first': [ 
        { 'bar': null }, 
        { }, 
        { 'bar': 1 } 
    ],
    'foo_missing_first': [ 
        { }, 
        { 'bar': null }, 
        { 'bar': 1 } 
    ],
}

The following two queries have different results:

ionsql> select * from foo_missing_first as f group by f.bar
      |
======'
{
}
{
  bar:1
}
-------
Result type was BAG and contained 2 items (3 ms)
ionsql> select * from foo_null_first as f group by f.bar
      |
======'
{
  bar:null
}
{
  bar:1
}
-------
Result type was BAG and contained 2 items (1 ms)

If this is not the desired behavior, the alternative result would be (for both of the above queries):

{ 
}
{
  bar:null
}
{
  bar:1
}

Migrated from IONSQL-411.

Create a release

The Getting Started instructions say:

The easiest way to get started with {productName} is to clone this repository locally, build, then run the REPL.

In order to lower the bar for adoption, provide a release in the Releases tab. This will allow users to get started with it much more easily.

ExprValue should not implement ion-java's Faceted interface

Currently, ExprValue implements ion-java's Faceted interface.

We should avoid exposing a type from one of our dependencies in the interpreter's public API if at all possible.

We are currently only using this to provide ordered names for fields within SequenceStruct and to provide unordered names within IonExprValue.

At the very least, the interpreter should have its own version of Faceted if we don't have a better way of providing names to users of ExprValue.

Consider consolidating FromSourceExpr and FromSourceUnpivot or mitigating their similarities

The similarities between FromSourceExpr and FromSourceUnpivot have created an unfortunate need to duplicate code in at least one place. See software.amazon.sql.ast.passes.FromSourceAliasRewriter.rewriteFromSourceExpr and rewriteFromSourceUnpivot for an example. The goal of this issue is to create a way of avoiding the need to duplicate code surrounding these two ExprNode types by consolidating them or through other means.

The reason these were not previously merged is because FromSourceUnpivot needs a distinct set of metas while FromSourceExpr doesn't have metas of its own--the metas in this case should be associated with FromSourceExpr.expr.

        (term
            (exp
                (select
                    (project
                        (list
                            (term
                                (exp
                                    (star)))))
                    (from
                        (term
                            (exp
                                (unpivot
                                    (term
                                        (exp
                                            (id item case_insensitive))))
                             (meta <unpivot's meta nodes are supposed to go here>))))))

However:

        (term
            (exp
                (select
                    (project
                        (list
                            (term
                                (exp
                                    (star)))))
                    (from
                        (term
                            (exp
                                (id table1 case_insensitive))
                            (meta <id's meta nodes go here>))))))

Note that for the second case there is no place to put a separate set of metas to be associated with FromSourceExpr.

Migrated from IONSQL-418

Don't throw IllegalStateException

In a couple dozen places within the interpreter we throw IllegalStateException. We should probably be throwing a descendant of SqlException instead.
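
As a sketch of the direction, using EvaluationException (the SqlException descendant visible in REPL output elsewhere in this tracker); the constructor arguments are assumed:

// Instead of: throw IllegalStateException("unexpected state")
fun errInvalidState(message: String): Nothing =
    throw EvaluationException(message, internal = true) // signature assumed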

Prevent duplicate FROM source aliases

We should fail a query at compile-time with an appropriate error message when duplicate table aliases are encountered. For instance, these queries all contain duplicate table aliases (some are implicit aliases):

SELECT * FROM foo, foo
SELECT * FROM foo AS f, f
SELECT * FROM foo AS f, foo AS f

SELECT * FROM foo AT foo
SELECT * FROM foo AS f AT f

We currently fail at evaluation-time only when an attempt is made to access the duplicated name, such as:

SELECT f.* FROM foo AS f, f
--     ^ error happens during lookup of 'f' due to ambiguous binding

MySQL and Postgres appear to have a similar compile-time check (demonstrated via SQL Fiddle).

In case those SQL Fiddle links die, the DDL for both was:

create table foo (
  foo_id int primary key
);

insert into foo values (1);

create table f (
  f_id int primary key
);

insert into f values (2);

And the query was:

select * from foo as f, f

Migrated from IONSQL-453

Leverage ion binary skip scanning

Currently, for Ion-backed ExprValues, we materialize the DOM and work from there. We should find ways to leverage the skip-scanning capabilities of the binary format to materialize only the information required by the query.
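
For a feel of the direction, here is a sketch that streams binary Ion with IonReader and decodes only the requested field; the package names are for recent ion-java releases (older ones used software.amazon.ion):

import com.amazon.ion.IonType
import com.amazon.ion.system.IonReaderBuilder

// Returns the string value of `fieldName` from a top-level struct, stepping
// past every other field without materializing a DOM for it.
fun readSingleField(ionBinary: ByteArray, fieldName: String): String? =
    IonReaderBuilder.standard().build(ionBinary).use { reader ->
        if (reader.next() != IonType.STRUCT) return null
        reader.stepIn()
        while (reader.next() != null) {
            if (reader.fieldName == fieldName) return reader.stringValue()
        }
        null
    }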

A cleaner s-expression representation of the GROUP clause

README-AST-V1.md currently defines GROUP_CLAUSE as:

GROUP_FULL ::= `group` 
GROUP_PARTIAL ::= `group_partial`
GROUP_ALL ::= `group_all`
GROUP_BY_EXP ::= EXP | `(` `as` SYMBOL EXP `)`
GROUP_KIND ::= GROUP_FULL | GROUP_PARTIAL

GROUP_CLAUSE ::=
    `(` GROUP_KIND
        // NOTE: the `by` node cannot be wrapped in a term (GROUP_BY_EXPs, however, *can* be wrapped in a term).
        `(` `by` GROUP_BY_EXP GROUP_BY_EXP... `)`
        // NOTE: the `name` node *can* be wrapped in a term
        [ `(` `name` SYMBOL `)` ] `)`
    | `(` GROUP_ALL SYMBOL `)`

So:

GROUP BY exp1, exp2 -> (group (by (id exp1 case_insensitive) (id exp2 case_insensitive)))
GROUP PARTIAL BY exp1, exp2 -> (group_partial (by (id exp1 case_insensitive) (id exp2 case_insensitive)))
GROUP ALL AS g -> (group_all g)

We would like to consider changing the "grammar" of the AST like so:

GROUP_BY_EXP ::= EXP | `(` `as` SYMBOL EXP `)`
GROUP_BY_CLAUSE  ::= 
    `(` `group_by` `full` GROUP_BY_EXP... `)`
    | `(` `group_by` `partial` GROUP_BY_EXP... `)`
    | `(` `group_by` `all` SYMBOL `)`

Under this "grammar", the previous examples would be represented as:

GROUP BY exp1, exp2 -> (group_by full (id exp1 case_insensitive) (id exp2 case_insensitive))
GROUP PARTIAL BY exp1, exp2 -> (group_by partial (id exp1 case_insensitive) (id exp2 case_insensitive))
GROUP ALL AS g -> (group_by all g)

This proposed "grammar" is easier to understand in part because it eliminates the ambiguity regarding the location of term nodes and their corresponding metas.

TO_STRING: block unsupported format patterns

The following format patterns are valid for DateTimeFormatter (which the to_string implementation uses) but not for the TO_STRING function; they should throw an error (a sketch of one possible check follows the list):

  • L
  • O
  • yyy
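
One way the check could work, as a sketch; the use of IllegalArgumentException and the handling of quoted literal sections are simplifications:

private val UNSUPPORTED_SYMBOLS = setOf('L', 'O')

fun validateToStringPattern(pattern: String) {
    // Simplification: a real check must skip over 'quoted literal' sections.
    if (pattern.any { it in UNSUPPORTED_SYMBOLS })
        throw IllegalArgumentException("Unsupported symbol in TO_STRING pattern: $pattern")
    // 'yyy' means a run of exactly three y's; 'yy' and 'yyyy' remain legal.
    if (Regex("(?<!y)yyy(?!y)").containsMatchIn(pattern))
        throw IllegalArgumentException("Unsupported 'yyy' in TO_STRING pattern: $pattern")
}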

First, uncomment the tests marked with this issue URL in the test suite (test/integration-test/test-scripts/builtin-functions/to_string.sqlts) and then fix the issue preventing them from succeeding.

Annotations can be dropped in some circumstances

To reproduce, place this test in org/partiql/eval/NodeMetadataLookupTest.kt:

    @Test
    fun withStructPreservesAnnotations() {
        val astWithMeta = """
            ((meta (lit [f::[a::b::{x:c::1,y:2,z:e::a}]]) {line:1,column:23}))
        """.sexp()

        val (ast, lookup) = NodeMetadataLookup.extractMetaNode(astWithMeta)

        with(lookup) {
            assertEquals("((lit [f::[a::b::{x:c::1,y:2,z:e::a}]]))", ast.toString())
            assertMetadata(null, ast)

            assertEquals("(lit [f::[a::b::{x:c::1,y:2,z:e::a}]])", ast[0].toString())
            assertMetadata(NodeMetadata(1, 23), ast[0])

            val sexp = ast[0] as IonSexp
            val str = ((sexp[1] as IonList)[0] as IonList)[0] as IonStruct
            assertNotNull(str.typeAnnotationSymbols)
        }
    }

Use a single hash table for case-sensitive and insensitive binding lookups

BindingsMap currently has two maps of binding names: one with the names in their original letter case, used for case-sensitive lookups, and one with the names converted to lowercase, used for case-insensitive lookups. However, there is a way to build a single map, which I have prototyped and stored here.

The method involves having two classes:

  • BindingNameKey, which stores the original (originalCaseName) and lowered letter case version (loweredCaseName) and also implements hashCode and equals.
  • BindingNameLookup, which extends BindingNameKey, and has an additional property indicating its case-sensitivity (case).

Example of use:

val demoMap = HashMap<BindingNameKey, Any>()

// Set the value
demoMap[BindingNameKey("foo")] = 123

// Look up the value using an identifier of a different case
val value = demoMap[BindingNameLookup("FOO", CaseSensitivity.INSENSITIVE)]

assertEquals(123, value)

This works because:

  • BindingNameLookup inherits from BindingNameKey.
  • Both classes always use the hashCode() of loweredCaseName.
  • The equals() of both always respects the value of case on the BindingNameLookup instance (a sketch of the two classes follows).
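
A minimal sketch of the two classes, per the prototype described above (this version deliberately bends the equals() symmetry contract, which is safe for HashMap's key-probing pattern):

enum class CaseSensitivity { SENSITIVE, INSENSITIVE }

open class BindingNameKey(val originalCaseName: String) {
    val loweredCaseName: String = originalCaseName.toLowerCase()

    // Key and lookup both hash on the lowered name, so case-sensitive and
    // case-insensitive probes always land in the same bucket.
    override fun hashCode(): Int = loweredCaseName.hashCode()

    override fun equals(other: Any?): Boolean {
        if (other !is BindingNameKey) return false
        // Respect the case-sensitivity requested by whichever side is a lookup;
        // plain key-to-key comparison defaults to case-sensitive.
        val case = (this as? BindingNameLookup)?.case
            ?: (other as? BindingNameLookup)?.case
            ?: CaseSensitivity.SENSITIVE
        return when (case) {
            CaseSensitivity.SENSITIVE -> originalCaseName == other.originalCaseName
            CaseSensitivity.INSENSITIVE -> loweredCaseName == other.loweredCaseName
        }
    }
}

class BindingNameLookup(name: String, val case: CaseSensitivity) : BindingNameKey(name)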

To implement this, we'll need to:

  • Add the classes and tests I linked in my private gist, above.
  • Replace the existing BindingName class with BindingNameLookup.
  • Update any place we are currently using the dual hash table approach to use this instead.
    • software.amazon.sql.BindingMap and possibly others -- need to check.

Update to Kotlin 1.3.20 or later

Currently, we get a warning from Gradle about a deprecated feature being used during the build.

Investigation with ./gradlew build --warning-mode all --stacktrace and a bit of Googling reveals that this is caused by the Kotlin plugin: https://youtrack.jetbrains.com/issue/KT-26808

Once Kotlin 1.3.20 or later has been released, we should try migrating to it to avoid this warning.
