partiql / partiql-lang-kotlin
PartiQL libraries and tools in Kotlin.
Home Page: https://partiql.org/
License: Apache License 2.0
Currently `ExprValue` has a `scalar` property of type `Scalar`. This means that instantiating an `ExprValue` that has a scalar type requires two allocations: one to create the `ExprValue` and one to create its `Scalar` instance. To reduce the number of allocations needed, we should move all of `Scalar`'s functions to `ExprValue` and remove its `scalar` property.

Scalar `ExprValue` instances are created in great numbers at evaluation time, so I suspect this could significantly reduce the number of allocations.
`software.amazon.sql.eval.Scalar`:

```kotlin
abstract class Scalar {
    ...
    open fun booleanValue(): Boolean? = null
    open fun numberValue(): Number? = null
    open fun timestampValue(): Timestamp? = null
    open fun stringValue(): String? = null
    open fun bytesValue(): ByteArray? = null
}
```
EDIT: I found these extension methods, which effectively move the `Scalar` methods to `ExprValue` and which (IMHO) beg the question of why we haven't already made this move:

```kotlin
fun ExprValue.booleanValue(): Boolean =
    scalar.booleanValue() ?: errNoContext("Expected non-null boolean: $ionValue", internal = false)

fun ExprValue.numberValue(): Number =
    scalar.numberValue() ?: errNoContext("Expected non-null number: $ionValue", internal = false)

fun ExprValue.timestampValue(): Timestamp =
    scalar.timestampValue() ?: errNoContext("Expected non-null timestamp: $ionValue", internal = false)

fun ExprValue.stringValue(): String =
    scalar.stringValue() ?: errNoContext("Expected non-null string: $ionValue", internal = false)

fun ExprValue.bytesValue(): ByteArray =
    scalar.bytesValue() ?: errNoContext("Expected non-null LOB: $ionValue", internal = false)
```
Currently, the `examples` project contains usages of the public API in Kotlin only. We should also have a Java version of each of these examples to help keep us mindful of what using the public API looks like from Java. It will also help customers deal with any deficiencies in the API that result from not enforcing an API that conforms to idiomatic Java.
We need to add a compile option that disables dynamic scoping, essentially requiring field names to be qualified.

Given the following snippet of IonSQL:

```sql
SELECT a, b FROM `{a:1}`
```

The current scoping rules will cause the identifier `a` in the `SELECT` list of this snippet to resolve to the field `a` of the struct in the `FROM` clause, and the identifier `b` to the global variable `b` (if `b` exists in the global scope).

Once implemented and enabled, this compiler option will cause the SQL++ interpreter to skip the scopes introduced in the `FROM` clause of the query when resolving variable names. Thus, when the option is enabled, the identifier `a` will resolve to the global variable `a` (if `a` exists in the global scope).

In order to resolve the identifier `a` in the `SELECT` list to the `a` field of the struct in the `FROM` clause of the above query, it will need to be modified like so:

```sql
SELECT table1.a, b FROM `{a:1}` AS table1
```

(And of course the resolution of `b` remains unchanged.)
Migrated from IONSQL-148
We don't have meta nodes all the time, so when an exception is thrown at evaluation time we may not know the location (line & column) within the query where the error originated. Additionally, some exceptions might be thrown without any `errorContext`.

To prevent these types of issues, we should modify the test framework (located at `tools/sqltest`) so that it requires all error expectations to have an error context and source location.

The code to change is located in `ScriptExecuteVisitor.compareErrorContextProperties`. As part of this change, we will likely also need to fix any tests that start failing.
Migrated from IONSQL-312.
It would be nice to support this:

```
for::{
  template: [ test::{
    ...
    expected: result::$value
    // $value here currently results in an error because the annotation is removed during variable resolution.
    ...
  }],
  variableSets: [ { value: ... } ]
}
```
For reference, what does work is including the `result::` annotation in the `variableSets` struct:

```
for::{
  template: [ test::{
    ...
    expected: $value
    ...
  }],
  variableSets: [ { value: result:: ... } ] //<--- result:: annotation goes here.
}
```

Please also remove the reference to this issue from the test script specification, under the known issues heading of the section about the `for` command.
`SqlParser` determines the line and column of a `ParseNode` from `ParseNode.token.position`. However, most of the `ParseNode`s returned from `SqlParser.parseTerm()` do not have a `Token` associated with them. This means that when the `ParseNode` tree is later transformed into `ExprNode` instances, many are missing a `SourceLocationMeta` instance.

This task is to ensure that all `ParseNode`s returned by `parseTerm()` are correctly associated with a `token`.

The root of the problem appears to be that inside `parseTerm()` we branch on the token type and then call an appropriate function to parse the particular type of expression (e.g. `parseSelect()`, `parseCast()`, etc.). Because those `parse*()` functions do not have access to the original token indicating their context, most `ParseNode`s returned by `parseTerm()` are not assigned a `Token`.
```kotlin
private fun List<Token>.parseTerm(): ParseNode = when (head?.type) {
    ...
    KEYWORD -> when (head?.keywordText) {
        "case" -> when (tail.head?.keywordText) {
            "when" -> tail.parseCase(isSimple = false)
            else -> tail.parseCase(isSimple = true)
        }
        "cast" -> tail.parseCast()
        "select" -> tail.parseSelect()
        "pivot" -> tail.parsePivot()
        ...
    }
    ...
}
```
One way to fix this might be to pass the head `Token` into each of the `parse*()` functions and then include this token in the `ParseNode` constructor, but this might be tedious. Another way is to `copy` (Kotlin data class) the `ParseNode` before it is returned, assigning the token at that time, but this might have performance implications, particularly for large queries.
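To make the trade-off concrete, here is a self-contained sketch of the second approach, using stand-in `Token`/`ParseNode` types rather than the real parser classes (the helper name `withTokenIfMissing` is hypothetical):

```kotlin
// Stand-in types; the real parser's Token and ParseNode carry much more state.
data class Token(val text: String, val line: Int, val column: Int)
data class ParseNode(val type: String, val token: Token? = null)

// Assign the head token via data-class copy(), but only when the parse*()
// function did not set a more specific token itself.
fun ParseNode.withTokenIfMissing(head: Token): ParseNode =
    if (token == null) copy(token = head) else this
```

The `copy()` call allocates a second `ParseNode` for every node that lacked a token, which is the performance concern mentioned above.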
Migrated from IONSQL-390.
Trying to build using `./gradlew build` gives the following error:
w: /Users/argu/workspaces/partiql-lang-kotlin/cli/src/org/partiql/cli/functions/ReadFile.kt: (42, 69): 'iterate(InputStream!): (Mutable)Iterator<IonValue!>!' is deprecated. Deprecated in Java
> Task :cli:jar FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':cli:jar'.
> Failed to create MD5 hash for file '/Users/argu/workspaces/partiql-lang-kotlin/lang/build/libs/lang-1.0.0.jar' as it does not exist.
To facilitate consistency in code formatting, we should investigate whether ktlint can reformat our code as part of (every?) build. This will also require re-evaluating `CODE-STYLE.md` and making changes as necessary.
Our current visitor/rewriter implementation is rather non-standard and lacks the ability to transform an `ExprNode` AST into another AST that does not use `ExprNode` as a base class. To remedy this, I propose the following changes:

- Rename `AstVisitor` to `AstListener`.
- Add methods for every node type to `AstListener` (instead of just the base types like we have currently).
- Modify `AstWalker` to dispatch to these additional methods.
- Rename `AstRewriter` to `AstVisitor`.
- Rename the `rewrite*` methods to `visit*`.
- Add a type parameter to `AstVisitor<T>` and make `T` the return type of every `visit*` method.
- Keep `AstRewriterBase`, but do make it implement `AstVisitor<ExprNode>`.

After this change, we will be using the terms visitor, listener and walker in the same way that ANTLR4 does.
For example:

```sql
'hi' || 1
```

throws an error:

```
code: null
properties: {
  LINE_NUMBER:1,
  COLUMN_NUMBER:5
}
```

Instead of `null`, the error code should probably be something like `CONCAT_FAILED_DUE_TO_INCOMPATIBLE_TYPE` or some such.
This can also be observed in the REPL:
sql> 'hi' || 1
|
software.amazon.sql.eval.EvaluationException: Expected text: 1
<UNKNOWN>: at line 1, column 6: <UNKNOWN>
...
ERROR! (2 ms)
sql>
Performance of the `IN` operator is poor when it is used with many values. Most often, the values on the right side of `IN` are literals, which can sometimes number into the dozens, hundreds and, in rare cases, thousands.

We can fairly easily optimize for this case: the compiler can inspect all values in the list on the right side of `IN` to determine whether they are all literals and, if so, store them in a data structure such as a `TreeSet`. Then, when the `IN` operator is being evaluated, we just need to evaluate the left side of `IN` once per query and check if the result exists in the `TreeSet`. If a non-literal value appears anywhere on the right side of `IN`, we can simply fall back on the existing behavior.

Currently, both sides of the `IN` operator are evaluated for every row in a query, which is very expensive, especially when many values exist on the right side.
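A minimal sketch of this optimization, using plain Kotlin values as stand-ins for evaluated expressions (`compileInPredicate` and the `allLiterals` flag are hypothetical; a real implementation would hook into the compiler, and the issue suggests a `TreeSet` where a `HashSet` is used here for simplicity):

```kotlin
// If every value on the right side of IN is a literal, build the set once at
// "compile" time; each evaluation then becomes a single set lookup.
fun compileInPredicate(rightSide: List<Any?>, allLiterals: Boolean): (Any?) -> Boolean {
    if (allLiterals) {
        val set = rightSide.toHashSet() // built once, up front
        return { left -> left in set }
    }
    // Fall back: scan the right side on every evaluation (existing behavior).
    return { left -> rightSide.any { it == left } }
}
```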
Migrated from IONSQL-447.
`java.time.DateTimeFormatter` throws an exception when an attempt is made to parse a custom timestamp format with an offset greater than +/- 18h; however, ion-java's `Timestamp` supports offsets up to +/- 23:59. In practice, this may not be a significant issue because real-life offsets do not exceed +/- 12h.
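This limit is easy to demonstrate with `java.time`'s `ZoneOffset`, which enforces the +/- 18:00 bound mentioned above:

```kotlin
import java.time.DateTimeException
import java.time.ZoneOffset

// java.time rejects offsets beyond +/- 18:00, while Ion's Timestamp permits
// offsets up to +/- 23:59.
fun isSupportedByJavaTime(hours: Int): Boolean =
    try {
        ZoneOffset.ofHours(hours)
        true
    } catch (e: DateTimeException) {
        false
    }
```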
Write a document describing the process for creating a release and add it to the repository at `docs/dev/RELEASE-PROCESS.md`. Include as part of this any templates for email or other notifications that are to be sent as part of a release.
Although it is generally suboptimal to store Ion text inside of another Ion string, it happens frequently enough that we should support it. This is one of the most frequently encountered topics in our internal message forums.
We should add a pair of functions to convert an arbitrary value to and from an Ion string. For instance, `to_ion_string({'foo': 'bar'})` should return the string `"{foo:\"bar\"}"`, and `parse_ion_string("{foo:\"bar\"}")` should return the value `{'foo': 'bar'}`.
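A hypothetical pure-Kotlin sketch of what `to_ion_string` could produce for a few value shapes (a real implementation would delegate to ion-java's text writer rather than hand-rolling the serialization):

```kotlin
// Serialize a small subset of values into Ion text. Only maps, lists, strings
// and scalars are handled; this is illustrative, not a complete Ion writer.
fun toIonString(value: Any?): String = when (value) {
    null -> "null"
    is String -> "\"$value\""
    is Map<*, *> -> value.entries.joinToString(",", "{", "}") { (k, v) -> "$k:${toIonString(v)}" }
    is List<*> -> value.joinToString(",", "[", "]") { toIonString(it) }
    else -> value.toString()
}
```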
We should also consider equivalent functions for Ion binary: `to_ion_binary` and `parse_ion_binary`.
The public API of the interpreter exposes a lot of components publicly without explicit intention. Part of the reason for this is that we sometimes rely on Kotlin's `internal` access specifier even though it is effectively `public` from Java's point of view.

I would like to propose that we introduce a series of `.internal` packages for API components that are not intended to be called directly by customers, and that we move all API components currently marked as `internal` to those packages. We should also re-evaluate all functions, classes and members to see if they should genuinely be public, and consider marking them `internal` and moving them to a `.internal` package.

Each of the existing packages should have a corresponding `.internal` package, for example:

- `software.amazon.sql.ast` should also have `software.amazon.sql.ast.internal`
- `software.amazon.sql.eval` should also have `software.amazon.sql.eval.internal`
- `software.amazon.sql.syntax` should also have `software.amazon.sql.syntax.internal`

This will more clearly communicate to JDK8 customers which parts of the API should be avoided, and for JDK9 and later customers it allows us to explicitly export only the packages which are intended to be part of the public API.

However, it is also possible to separate the public and private APIs into separate jars. Customers would then add only the jar containing the public API to the compile-time `CLASSPATH` while adding both public and private jars at run-time. See the distinction between `api` and `implementation` dependencies in the Gradle documentation of the `java-library` core plug-in. If this is also implemented, the internal components of the interpreter will not be exposed to JDK8 customers.
Initially, we just need this to enforce a clean build, but we will later enhance this when we publish to Maven.

We should be building within a `trusty` environment with as many of the supported JDKs as possible. (The `precise` environment supports fewer JDKs; see https://docs.travis-ci.com/user/reference/precise/#jvm-clojure-groovy-java-scala-vm-images.)

See the Travis CI docs on building a Java project for more information on how to go about this.
We do not currently assign an error code for divide-by-zero errors. The following tests from our test suite will not pass until we do:

```
for::{
  template: [ test::{ name: "divide by zero", sql: $sql, expected: $result } ],
  variableSets: [
    {
      sql: "1 / 0",
      result: error::{
        code: EVALUATOR_DIVIDE_BY_ZERO,
        properties: {
          LINE_NUMBER:1,
          COLUMN_NUMBER:3
        }
      }
    },
    {
      sql: "1 % 0",
      result: error::{
        code: EVALUATOR_DIVIDE_BY_ZERO,
        properties: {
          LINE_NUMBER:1,
          COLUMN_NUMBER:3
        }
      }
    },
  ]
}
```
Migrated from IONSQL-276.
In `NumberExtensions.kt` we define a series of arithmetic operators on `java.lang.Number`. This is bad because, where these operators are used, it is not at all obvious that a function call is taking place. We should replace these overloaded operators with explicit function calls so that our code is more readable and less confusing.

For example:

```kotlin
val n1: Number = 1
val n2: Number = 2

// Some distance away, most likely in another file...
val n3 = n1 + n2
```

In order to know that there's an overloaded function call going on, one must remember that the `+` operator is overloaded for `Number` and that `n1` and `n2` are instances of `Number`.

I think this is much more readable and explicit:

```kotlin
val n3 = n1.plus(n2)
```
Section 12 of the specification.
Add the standard copyright header to all source files and update the existing ones.
Consider the following snippet:

```kotlin
val compiler = EvaluatingCompiler(ion, compileOptions = CompileOptions.build {
    this.projectionIteration(ProjectionIterationBehavior.UNFILTERED)
})
println(compiler.compile("{ 'a': MISSING } = { }").eval(EvaluationSession.standard()))
```

(Note: since the CLI currently doesn't have any way of specifying compile options, this example must be done in a unit test or command-line app.)

When we set `ProjectionIterationBehavior` to `FILTER_MISSING` (which is the default), `true` is printed, but with `ProjectionIterationBehavior.UNFILTERED`, `false` is printed.

I'm pretty certain that this compile option should not affect the result of the comparison.
Migrated from IONSQL-398
The `size` function doesn't currently support `SEXP`, but it should support any sequence type.
It would be nice to split `ErrorCode` per component, e.g. lexer, parser and evaluator error codes. You could have an interface for `ErrorCode` and one enum per component.
Originally posted by @raganhan in https://github.com/amzn/ion-sql/pull/51
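A sketch of what the suggested split could look like (the enum members and messages are illustrative, not the real error codes):

```kotlin
// One common interface for all error codes...
interface ErrorCode {
    val message: String
}

// ...and one enum per component implementing it.
enum class LexerErrorCode(override val message: String) : ErrorCode {
    INVALID_CHARACTER("invalid character encountered"),
}

enum class EvaluatorErrorCode(override val message: String) : ErrorCode {
    DIVIDE_BY_ZERO("division by zero"),
}
```

Call sites that only need the shared contract can accept the `ErrorCode` interface, while exhaustive `when` expressions remain possible per component enum.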
These two files serve exactly the same purpose but were written at nearly the same time, by @raganhan while he was working on the benchmarks and by me while I was working on the AST refactor. Both of them make writing Ion data in Kotlin a lot easier and their usage is almost the same, so this should be easy housekeeping and low-hanging fruit.
GitHub Pages can host the static documentation generated by pandoc as well as the other markdown-based documentation.
Migrated from IONSQL-93.
Just like the title says: add support for ORDER BY according to the specification.
When printing out MISSING values, the output should read `MISSING` instead of `null`:
ionsql> MISSING
|
======'
null
-------
Result type was MISSING
OK! (1 ms)
Because the REPL also displays the data type of the result, the user can tell that the result was MISSING; however, `null` is a misnomer and could be confusing.

This also affects `ExprValue.toString()` for missing values.
Migrated from IONSQL-427.
Ran the first CLI query sample and got the output:
sql> SELECT id FROM `[{id: 5, name:"bill"}, {id: 6, name:"bob"}]` WHERE name = 'bob'
|
==='
{
id:6
}
---
Result type was BAG and contained 1 items
OK! (179 ms)
CLI docs at https://github.com/partiql/partiql-lang-kotlin/blob/master/docs/user/CLI.md shows:
sql> SELECT id FROM `[{id: 5, name:"bill"}, {id: 6, name:"bob"}]` WHERE name = 'bob'
|
==='
{
id:6
}
---
OK!
The documented output needs to match the implementation's actual output exactly.
Currently, every implementation of `ExprFunction` must perform its own argument validation, and a function's name is not exposed by `ExprFunction`, which means the name must be duplicated in error messages and in `BuiltinFunctionFactory`. This leads to a fair amount of boilerplate code and creates a lot of opportunity for inconsistent handling of these types of errors.

We should be able to declare a function's name, arity and argument constraints in one place. We need to add infrastructure to support this as well as examples of its use.
Here's a rough idea of what this might look like in practice:

```kotlin
class FooExprFunc : ExprFunction {
    // The name of this function
    override val name = "foo"

    // The arity of the function -- this function accepts between 2 and 3 arguments
    override val arity = 2..3

    override val argumentValidators = listOf(
        // The first argument must be an integer
        argMustBe(ExprValueType.INT),
        // The second argument must be an integer or a decimal
        argMustBe(ExprValueType.INT, ExprValueType.DECIMAL),
        // The third argument (if present) must be a string
        argMustBeString)

    override fun eval(args: List<ExprValue>): ExprValue {
        ...
    }
}
```
Currently, `ExprFunction` only has the `eval` function defined -- the `name`, `arity` and `argumentValidators` properties would need to be added and implemented in all built-in functions and examples.
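For illustration, a self-contained sketch of what an `argMustBe` validator factory could look like, with stand-in `ExprValue`/`ExprValueType` types (the validator contract shown here -- null on success, an error message on failure -- is an assumption, not the existing API):

```kotlin
// Stand-ins for the interpreter's real types, so the sketch compiles alone.
enum class ExprValueType { INT, DECIMAL, STRING }
data class ExprValue(val type: ExprValueType, val value: Any)

// Returns a validator taking (zero-based argument position, argument value);
// it yields null when the argument is acceptable, or an error message otherwise.
fun argMustBe(vararg allowed: ExprValueType): (Int, ExprValue) -> String? =
    { position, arg ->
        if (arg.type in allowed) null
        else "Argument ${position + 1} must be one of ${allowed.joinToString()}; found ${arg.type}"
    }
```

Centralizing the message format in the factory is what removes the per-function duplication described above.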
Currently, if an unhandled exception is thrown inside the forked VM created by JMH during benchmark execution, it is difficult to debug because the exception details do not appear in the exception that surfaces inside the VM which spawned the benchmark.

I have modified the code so that the benchmark is executed directly, without JMH, to expose any problems with the benchmark itself before executing it with JMH as usual, but this is still suboptimal because the exception might not originate from SQL++ and such exceptions would still be difficult to debug.

This task is to see if JMH exposes details about the exception that occurred inside the forked VM and, if so, to include them in the output of the test driver. If JMH doesn't expose these details, another method of getting them into the output of the test driver must be determined.
Once known, we need to apply the final marketing name of this product.

Among the locations that need to be changed to the marketing name is the `{productName}` placeholder in the main README.md file, but there may be others.
Currently, `NaturalValuesComparators` cannot distinguish between `NULL` and `MISSING` values. This causes the result of `GROUP BY` to differ depending on whether a `MISSING` or a `NULL` value is encountered first.

If a `MISSING` value is encountered first, the group key will then contain a missing value, and any subsequent `NULL` value is considered to be part of the same group because the two are considered equivalent. Conversely, if the `NULL` value is encountered first, the group key will contain the `NULL` value instead, and subsequent `MISSING` values are considered to be part of the same group.
For example, if the environment was:

```
{
  'foo_null_first': [
    { 'bar': null },
    { },
    { 'bar': 1 }
  ],
  'foo_missing_first': [
    { },
    { 'bar': null },
    { 'bar': 1 }
  ],
}
```
The following two queries have different results:
ionsql> select * from foo_missing_first as f group by f.bar
|
======'
{
}
{
bar:1
}
-------
Result type was BAG and contained 2 items (3 ms)
ionsql> select * from foo_null_first as f group by f.bar
|
======'
{
bar:null
}
{
bar:1
}
-------
Result type was BAG and contained 2 items (1 ms)
If this is not the desired behavior, the alternative result would be (for both of the above queries):
{
}
{
bar:null
}
{
bar:1
}
Migrated from IONSQL-411.
The Getting Started instructions say:

> The easiest way to get started with {productName} is to clone this repository locally, build, then run the REPL.

In order to lower the bar for adoption, provide a release in the Releases tab. This will allow users to get started much more easily.
Currently, `ExprValue` implements ion-java's `Faceted` interface. We should avoid exposing a type from one of our dependencies in the interpreter's public API if at all possible.

We are currently only using this to provide ordered names for fields within `SequenceStruct` and to provide unordered names within `IonExprValue`.

At the very least, the interpreter should have its own version of `Faceted` if we don't have a better way of providing names to uses of `ExprValue`.
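A minimal sketch of what an interpreter-owned `Faceted` could look like (all names here are hypothetical, not the ion-java interface):

```kotlin
// Values advertise optional capabilities by type; callers ask for a facet and
// receive null if the value does not support it.
interface Faceted {
    fun <T : Any> asFacet(type: Class<T>): T?
}

interface Named {
    val name: String
}

// A value that happens to support the Named facet.
class NamedValue(override val name: String) : Faceted, Named {
    override fun <T : Any> asFacet(type: Class<T>): T? =
        if (type.isInstance(this)) type.cast(this) else null
}
```

Owning this interface would let `SequenceStruct` and `IonExprValue` keep their name-providing behavior without leaking an ion-java type through the public API.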
The similarities between `FromSourceExpr` and `FromSourceUnpivot` have created an unfortunate need to duplicate code in at least one place; see `software.amazon.sql.ast.passes.FromSourceAliasRewriter.rewriteFromSourceExpr` and `rewriteFromSourceUnpivot` for an example. The goal of this issue is to create a way of avoiding the need to duplicate code surrounding these two `ExprNode` types, by consolidating them or through other means.

The reason these were not previously merged is that `FromSourceUnpivot` needs a distinct set of metas while `FromSourceExpr` doesn't have metas of its own; the metas in this case should be associated with `FromSourceExpr.expr`. For example, `FromSourceUnpivot` has a place for its metas:
```
(term
  (exp
    (select
      (project
        (list
          (term
            (exp
              (star)))))
      (from
        (term
          (exp
            (unpivot
              (term
                (exp
                  (id item case_insensitive)))))
          (meta <unpivot's meta nodes are supposed to go here>))))))
```
However:
```
(term
  (exp
    (select
      (project
        (list
          (term
            (exp
              (star)))))
      (from
        (term
          (exp
            (id table1 case_insensitive))
          (meta <id's meta nodes go here>))))))
```
Note that in the second case there is no place to put a separate set of metas to be associated with `FromSourceExpr`.
Migrated from IONSQL-418
In a couple dozen places within the interpreter we throw `IllegalStateException`. We should probably be throwing a descendant of `SqlException` instead.
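For illustration, a sketch of how such a descendant could be used to replace ad hoc `IllegalStateException` throws (`SqlException`'s real constructor differs; these types are simplified stand-ins, and `checkedNotNull` is a hypothetical helper):

```kotlin
// Simplified stand-in for the interpreter's exception hierarchy.
open class SqlException(message: String) : RuntimeException(message)

// A dedicated descendant for internal invariant violations.
class InternalSqlException(message: String) :
    SqlException("Internal error, please report: $message")

// Helper that replaces checkNotNull()/IllegalStateException at call sites.
fun <T : Any> T?.checkedNotNull(what: String): T =
    this ?: throw InternalSqlException("$what was null")
```

Callers that already catch `SqlException` would then also see internal errors, instead of an unrelated `IllegalStateException` escaping the interpreter's error handling.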
We should fail a query at compile time with an appropriate error message when duplicate table aliases are encountered. For instance, these queries all contain duplicate table aliases (some are implicit aliases):
```sql
SELECT * FROM foo, foo
SELECT * FROM foo AS f, f
SELECT * FROM foo AS f, foo AS f
SELECT * FROM foo AT foo
SELECT * FROM foo AS f AT f
```
We currently fail only at evaluation time, and only when an attempt is made to access the duplicated name, such as:

```sql
SELECT f.* FROM foo AS f, f
-- ^ error happens during lookup of 'f' due to ambiguous binding
```
MySQL and Postgres appear to have a similar compile-time check. In case those SQL Fiddle links die, the DDL for both was:
```sql
create table foo (
    foo_id int primary key
);
insert into foo values (1);

create table f (
    f_id int primary key
);
insert into f values (2);
```

And the query was:

```sql
select * from foo as f, f
```
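The check itself is straightforward; here is a sketch of a hypothetical compile-time pass helper (names and the exact alias-comparison rules, e.g. case handling, are assumptions):

```kotlin
// Scan the aliases introduced by one FROM clause and report the first repeat,
// or null when all aliases are distinct.
fun firstDuplicateAlias(aliases: List<String>): String? {
    val seen = HashSet<String>()
    for (alias in aliases) {
        if (!seen.add(alias)) return alias
    }
    return null
}
```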
Migrated from IONSQL-453
We use a different pattern with `AstDeserializerBuilder` than we do with the other builders, for `CompileOptions` and `EvaluationSession`. This task is to make the API for building an `AstDeserializer` follow the same patterns as `EvaluationSession` and `CompileOptions`.
After #29 has been completed, replace `LocalsBinder` with the new `BindingsBuilder`.
Section 13 of the specification.
The query:

```sql
SELECT * FROM `[true]` WHERE '๐' LIKE '๐'
```

should match and return `[{_1: true}]`, but does not, and instead returns the empty result set: `[]`.

Migrated from IONSQL-339.
Currently, for Ion-backed `ExprValue`s we materialize the DOM and work from there. We should find ways to leverage the skip-scanning capabilities of the Ion binary format to materialize only the information required by the query.
`README-AST-V1.md` currently defines `GROUP_CLAUSE` as:

```
GROUP_FULL    ::= `group`
GROUP_PARTIAL ::= `group_partial`
GROUP_ALL     ::= `group_all`
GROUP_BY_EXP  ::= EXP | `(` `as` SYMBOL EXP `)`
GROUP_KIND    ::= GROUP_FULL | GROUP_PARTIAL
GROUP_CLAUSE  ::=
      `(` GROUP_KIND
          //NOTE: the `by` node cannot be wrapped in a term (GROUP_BY_EXPs however *can* be wrapped in a term).
          `(` `by` GROUP_BY_EXP GROUP_BY_EXP... `)`
          //NOTE: the `name` node *can* be wrapped in a term
          [ `(` `name` SYMBOL `)` ] `)`
    | `(` GROUP_ALL SYMBOL `)`
```
So:

```
GROUP BY exp1, exp2
    -> (group (by (id exp1 case_insensitive) (id exp2 case_insensitive)))

GROUP PARTIAL BY exp1, exp2
    -> (group_partial (by (id exp1 case_insensitive) (id exp2 case_insensitive)))

GROUP ALL AS g
    -> (group_all g)
```
We would like to consider changing the "grammar" of the AST like so:

```
GROUP_BY_EXP ::= EXP | `(` `as` SYMBOL EXP `)`
GROUP_BY_CLAUSE ::=
      `(` `group_by` `full` GROUP_BY_EXP... `)`
    | `(` `group_by` `partial` GROUP_BY_EXP... `)`
    | `(` `group_by` `all` SYMBOL `)`
```

Under this "grammar", the previous examples would be represented as:

```
GROUP BY exp1, exp2
    -> (group_by full (id exp1 case_insensitive) (id exp2 case_insensitive))

GROUP PARTIAL BY exp1, exp2
    -> (group_by partial (id exp1 case_insensitive) (id exp2 case_insensitive))

GROUP ALL AS g
    -> (group_by all g)
```

This proposed "grammar" is easier to understand, in part because it eliminates the ambiguity regarding the location of `term` nodes and their corresponding metas.
The following format patterns are valid for `DateTimeFormatter`, which is used in the `to_string` implementation, but not for the `TO_STRING` function; they should throw an error:

- `L`
- `O`
- `yyy`

First, uncomment the tests marked with this issue's URL in the test suite (`test/integration-test/test-scripts/builtin-functions/to_string.sqlts`) and then fix the issue preventing them from succeeding.
To reproduce, place this test in `org/partiql/eval/NodeMetadataLookupTest.kt`:

```kotlin
@Test
fun withStructPreservesAnnotations() {
    val astWithMeta = """
        ((meta (lit [f::[a::b::{x:c::1,y:2,z:e::a}]]) {line:1,column:23}))
    """.sexp()

    val (ast, lookup) = NodeMetadataLookup.extractMetaNode(astWithMeta)

    with(lookup) {
        assertEquals("((lit [f::[a::b::{x:c::1,y:2,z:e::a}]]))", ast.toString())
        assertMetadata(null, ast)

        assertEquals("(lit [f::[a::b::{x:c::1,y:2,z:e::a}]])", ast[0].toString())
        assertMetadata(NodeMetadata(1, 23), ast[0])

        val sexp = ast[0] as IonSexp
        val str = ((sexp[1] as IonList)[0] as IonList)[0] as IonStruct
        assertNotNull(str.typeAnnotationSymbols)
    }
}
```
`BindingsMap` currently has two maps of binding names: one with the names in their original letter case, used for case-sensitive lookups, and one with the names converted to lowercase, used for case-insensitive lookups. However, there is a way to build a single map, which I have prototyped and stored here.

The method involves having two classes:

- `BindingNameKey`, which stores the original (`originalCaseName`) and lowered letter case version (`loweredCaseName`) of a name and also implements `hashCode` and `equals`.
- `BindingNameLookup`, which extends `BindingNameKey` and has an additional property indicating its case-sensitivity (`case`).

Example of use:
```kotlin
val demoMap = HashMap<BindingNameKey, Any>()

// Set the value
demoMap[BindingNameKey("foo")] = 123

// Look up the value using an identifier of a different case
val value = demoMap[BindingNameLookup("FOO", CaseSensitivity.INSENSITIVE)]
assertEquals(123, value)
```
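Filling in the details of how those two classes could be implemented (this is a sketch consistent with the description, not the actual prototype):

```kotlin
enum class CaseSensitivity { SENSITIVE, INSENSITIVE }

// Stored key: hashes on the lowered name so that any lookup, regardless of
// case, lands in the same hash bucket.
open class BindingNameKey(val originalCaseName: String) {
    val loweredCaseName: String = originalCaseName.lowercase()

    override fun hashCode() = loweredCaseName.hashCode()

    override fun equals(other: Any?): Boolean {
        if (other !is BindingNameKey) return false
        // Let a lookup on either side decide the comparison rules.
        if (other is BindingNameLookup) return other.matches(this)
        return originalCaseName == other.originalCaseName
    }
}

// Lookup key: equals() respects the requested case-sensitivity.
class BindingNameLookup(name: String, val case: CaseSensitivity) : BindingNameKey(name) {
    fun matches(key: BindingNameKey): Boolean = when (case) {
        CaseSensitivity.SENSITIVE -> originalCaseName == key.originalCaseName
        CaseSensitivity.INSENSITIVE -> loweredCaseName == key.loweredCaseName
    }

    override fun equals(other: Any?): Boolean = other is BindingNameKey && matches(other)
    override fun hashCode() = loweredCaseName.hashCode()
}
```

This relies on `HashMap.get` calling `equals` on the lookup key passed in, so the `BindingNameLookup` side gets to choose sensitive or insensitive comparison at lookup time.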
This works because:

- `BindingNameLookup` inherits from `BindingNameKey`, so it can be used to look up entries keyed by `BindingNameKey`.
- In both `BindingNameKey` and `BindingNameLookup`, `hashCode()` returns the hash code of `loweredCaseName`.
- The `equals()` implementations of both always respect the value of `case` in the `BindingNameLookup` instance.

To implement this, we'll need to:

- Replace the `BindingName` class with `BindingNameLookup`.
- Modify `software.amazon.sql.BindingMap`, and possibly others; we need to check.

Currently, we get a warning from Gradle about a deprecated feature being used during the build.
Investigation with `./gradlew build --warning-mode all --stacktrace` and a bit of Googling reveals that this is caused by the Kotlin plugin: https://youtrack.jetbrains.com/issue/KT-26808
Once Kotlin 1.3.20 or later has been released, we should try migrating to it to avoid this warning.