
touk / nussknacker


Low-code tool for automating actions on real time data | Stream processing for the users.

Home Page: https://nussknacker.io

License: Apache License 2.0

Scala 80.04% Shell 0.43% Java 1.21% JavaScript 1.03% CSS 0.01% HTML 0.04% PLSQL 0.01% Dockerfile 0.01% TypeScript 17.25%
flink flink-kafka apache-flink gui touk real-time decision-making kafka scala big-data

nussknacker's Introduction


Real-time actions on data



What is Nussknacker

Nussknacker is a low-code visual tool for domain experts to build, run and monitor real-time decision algorithms instead of implementing them in the code.

In all IT systems, no matter the domain, decisions are made all the time. Which offer should a customer get? Who should receive a marketing message? Does this device need servicing? Is this a fraud?

Algorithms for making such decisions can be developed by programmers. With Nussknacker however, such decision algorithms can be authored and deployed without the need to involve IT.

An essential part of Nussknacker is a visual design tool for decision algorithms (scenarios in Nussknacker speak). It allows not-so-technical users, like analysts or business people, to author decision logic in an imperative, easy-to-follow and understandable way. The scenario author uses prebuilt components to define the decision logic - routing, filtering, data transformations, aggregations in time windows (Flink engine only - see below), enrichments with data from external databases or OpenAPI endpoints, applications of ML models, etc. Once authored, scenarios are deployed for execution with a click of a button, and can be changed and redeployed whenever there's a need.

The way data is processed and the features available depend on the processing mode and the engine used.

Nussknacker supports three processing modes: streaming, request-response and batch (planned in version 1.16). In streaming mode, Nussknacker uses Kafka as its primary interface: input streams of data and output streams of decisions. In request-response mode, it exposes HTTP endpoints with OpenAPI definitions.

There are two engines to which scenarios can be deployed: Flink and Lite. Check out this document to understand which of the two fits your use case better.

Why Nussknacker

Nussknacker promises to make developing and deploying real-time decision algorithms as easy as it is to crunch data at rest with spreadsheets. Hundreds of millions of non-programmers create spreadsheets to crunch data at rest these days. The same should be possible with real-time data - and this is our promise with Nussknacker. If this promise is fulfilled, domain experts and developers can focus on tasks that each of these two groups is most happy to perform. Domain experts can author the decision algorithms and developers can solve problems beyond the reach of tools like Nussknacker.

We discovered that several factors heavily influence the development of algorithms that work with real-time data, including expectations placed on the tools used:

  • Domain experts - often, it is domain experts who conceptualize the algorithms, and the expertise required is very domain-specific. Without proper tools for converting algorithms to code, domain experts have to delegate this work to programmers who are proficient in multiple tools, programming languages, and technologies. This approach costs money and takes time. With Nussknacker, domain experts build the algorithm from prefabricated blocks. The trick is to make these prefabricated blocks infinitely flexible to allow for any data transformation and flow control condition. Nussknacker achieves this by using SpEL, an easy-to-learn expression language (see the example after this list).
  • Experimentation - the algorithms may require a lot of experimentation before one gets them right. If so, the iteration time required to implement a change, deploy it, and see the result should be measured in single minutes, if not seconds. With Nussknacker, non-technical users can achieve an iteration time below one minute.
  • Productivity - if low-code solutions want to be considered tools rather than toys, they must offer features available in professional developer toolkits. Nussknacker Designer has built-in syntax checking, code completion, versioning, debugging, and testing support.
  • Observability - experimenting with algorithms requires insights going beyond pure technical metrics like throughput, Kafka topic lag, etc. Out of the box, Nussknacker comes with an integrated, ready-to-use monitoring subsystem which allows monitoring not only technical aspects of the running scenario but also its internal behavior - for example, the event count per scenario step. You will not need to spend developers' time on this functionality.
  • Architecture - last but not least, the fundamentals on which you build matter. Nussknacker achieves exceptional throughput, horizontal scalability, resilience, and high availability through the use of tools and platforms known for their rock-solid architecture - Kafka, Flink, and Kubernetes, which handle all processing tasks.
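
For illustration, here is what a filter condition written in SpEL might look like. This is a made-up sketch - #input is how the incoming event is typically referenced in Nussknacker expressions, but the field names below are invented for the example:

    #input.transactionAmount > 1000 && #input.country == 'PL'

Such an expression could sit in a filter component to pass through only large transactions from a given country.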

Check out this document for a concise summary of Nussknacker features.

Use cases


Nussknacker is typically used as a component of a larger system, but it can be used as an end-to-end solution too. The use cases follow a common pattern: a program working on a data stream, a file, or in a request-response interaction style receives a set of data (an event or a request) and has to deliver a decision. To make the decision, it needs to perform one or more of the following: discard irrelevant records, enrich incoming records with data from external sources, aggregate events in time windows (if working on a data stream), run one or more ML models, compute the decision and finally deliver it either as another data stream, a file or a response. The 'decisions' can come from a broad spectrum of vertical and horizontal applications:

  • Is it a fraud?
  • Should a loan be granted?
  • Next Best Offer
  • Clickstream analysis
  • Real-time marketing
  • ML model deployments with non-trivial pre- and post-processing
  • Real-time analysis of IoT sensor readouts

Where to learn more

Quickstart

If you want to see Nussknacker in action without any other dependencies, you can use the embedded engine in Request-response mode (scenario logic is exposed via a REST API). Just run:

docker run -it -p 8080:8080 -p 8181:8181 touk/nussknacker:latest

After it starts, go to http://localhost:8080 and log in using the credentials admin/admin. REST endpoints of deployed scenarios will be exposed at http://localhost:8181/scenario/<slug>. The slug is defined in Properties; by default it is the scenario name. Be aware that some things (e.g. metrics) will not work, and this engine is not intended for production use.
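
For example, assuming you deployed a scenario whose slug is my-scenario and which accepts a simple JSON object (both the slug and the payload below are hypothetical), you could call it like this:

    # hypothetical slug and payload - adjust to your scenario's Properties and input schema
    curl -X POST http://localhost:8181/scenario/my-scenario \
      -H "Content-Type: application/json" \
      -d '{"amount": 1200}'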

If you want to follow more complex, step-by-step tutorials based on production-ready engines, read one of the quickstart guides: Streaming mode on the Lite engine, Streaming mode on the Flink engine, or Request-response mode on the Lite engine.

Contact

Talk to us on the mailing list or start a discussion.

Scala compatibility

Currently, we support Scala 2.12 and 2.13 and cross-publish versions for both. The default Scala version is 2.13. Docker images (both Designer and Lite Runtime) are tagged with a _scala-2.X suffix (e.g. 1.8.0_scala-2.13 or latest_scala-2.12). Tags without such a suffix are also published; they point to images built with the default Scala version. Please be aware of that, especially if you use the latest image tag.
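
For example, to pin both the Nussknacker version and the Scala build explicitly (the tag below only illustrates the naming convention described above - verify it against the tags actually published on Docker Hub):

    # illustrative tag following the _scala-2.X suffix convention
    docker pull touk/nussknacker:1.8.0_scala-2.12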

Flink compatibility

We currently support only one Flink version (more or less the latest one - see flinkV in build.sbt). However, it should be possible to run Nussknacker with an older Flink version.
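
As a rough sketch of where to look, the supported Flink version is declared as an sbt value named flinkV in build.sbt; the exact declaration and version number below are illustrative, not copied from the repository:

    // illustrative only - check build.sbt in the repository for the real value
    val flinkV = "1.16.0"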

While we don't provide out-of-the-box support, as it would complicate the build process, there is a separate repo with detailed instructions on how to run Nussknacker with some of the older versions.

Related projects

Contributing

Nussknacker is an open source project - contributions are welcome. Read how to do it in the Contributing guide. There you can also find out how to build and run a development version of Nussknacker.

License

Nussknacker is published under Apache License 2.0.

nussknacker's People

Contributors

arkadius, bartektartanus, bohdanprog, clutroth, coutopl, dawidsula26, dependabot[bot], dswiecki, dzuming, fijolekprojects, gadomsky, gskrobisz, jedrz, julianwielga, lciolecki, maciej-brzezinski, mproch, mslabek, nadberezny, philemone, pielas, piotrp, pjagielski, ppd-touk, ppiedel, raphaelsolarski, trombka, witekw, wrzontek, zbyszekmm


nussknacker's Issues

Error running docker-compose up

Describe the bug
Kibana doesn't start.
Grafana doesn't start.

To Reproduce
Steps to reproduce the behavior:
Pull the project
docker-compose up -d

see the status
docker-compose ps

kibana and grafana are not working

Expected behavior
Kibana and Grafana should start and work.

Screenshots
If applicable, add screenshots to help explain your problem.

Environment (please complete the following information):
CentOS 7
Latest version of the project

How to fix the error (in my environment)

kibana:
there is an error in the docker-compose.yaml file

This line : ELASTICSEARCH_HOSTS: httelasticsearch:9200
change to: ELASTICSEARCH_HOSTS: http://elasticsearch:9200/
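
A minimal sketch of what the corrected kibana service entry might look like in docker-compose.yaml (the surrounding keys are assumed; only the ELASTICSEARCH_HOSTS value comes from the fix above):

    kibana:
      environment:
        # corrected Elasticsearch address
        ELASTICSEARCH_HOSTS: http://elasticsearch:9200/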

Grafana
In the script runWithFlinkEspBoard.sh,
the line ./run.sh "${@}" & is not working;
change it to: /run.sh "${@}" &

Cannot use another sink name, e.g. 'othersEvents' instead of 'processedEvents'

unable to parse line: demo.65a1e1357e55.taskmanagerTask.3bc122984b5b029b7b8dd8b529e9a39e.DetectLargeTransactions.Source:-DetectLargeTransactions-source.0.KafkaConsumer.heartbeat-response-time-max -Infinity 1524734472: field "demo.65a1e1357e55.taskmanagerTask.3bc122984b5b029b7b8dd8b529e9a39e.DetectLargeTransactions.Source:-DetectLargeTransactions-source.0.KafkaConsumer.heartbeat-response-time-max" value: "-Inf" is unsupported

An error happens when clicking the 'CREATE NEW PROCESS' button while running in IDEA.

14:41:41.131 [nussknacker-ui-akka.actor.default-dispatcher-44] INFO p.t.n.e.m.RestartableFlinkGateway - Creating new gateway
14:41:41.138 [nussknacker-ui-akka.actor.default-dispatcher-44] WARN p.t.n.ui.api.ProcessesResources - Failed to get status of demo: Config parameter 'Key: 'jobmanager.rpc.address' , default: null (deprecated keys: [])' is missing (hostname/address of JobManager to connect to).
org.apache.flink.util.ConfigurationException: Config parameter 'Key: 'jobmanager.rpc.address' , default: null (deprecated keys: [])' is missing (hostname/address of JobManager to connect to).
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.getJobManagerAddress(HighAvailabilityServicesUtils.java:129) ~[flink-runtime_2.11-1.4.2.jar:1.4.2]
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:77) ~[flink-runtime_2.11-1.4.2.jar:1.4.2]
at org.apache.flink.client.program.ClusterClient.(ClusterClient.java:144) ~[flink-clients_2.11-1.4.2.jar:1.4.2]
at org.apache.flink.client.program.StandaloneClusterClient.(StandaloneClusterClient.java:44) ~[flink-clients_2.11-1.4.2.jar:1.4.2]
at pl.touk.nussknacker.engine.management.DefaultFlinkGateway.createClient(FlinkGateway.scala:47) ~[classes/:na]
at pl.touk.nussknacker.engine.management.DefaultFlinkGateway.(FlinkGateway.scala:24) ~[classes/:na]
at pl.touk.nussknacker.engine.management.FlinkProcessManager$.pl$touk$nussknacker$engine$management$FlinkProcessManager$$prepareGateway(FlinkProcessManager.scala:42) ~[classes/:na]
at pl.touk.nussknacker.engine.management.FlinkProcessManager$$anonfun$apply$2.apply(FlinkProcessManager.scala:36) ~[classes/:na]
at pl.touk.nussknacker.engine.management.FlinkProcessManager$$anonfun$apply$2.apply(FlinkProcessManager.scala:36) ~[classes/:na]
at pl.touk.nussknacker.engine.management.RestartableFlinkGateway.retrieveGateway(FlinkGateway.scala:96) ~[classes/:na]
at pl.touk.nussknacker.engine.management.RestartableFlinkGateway.pl$touk$nussknacker$engine$management$RestartableFlinkGateway$$tryToInvokeJobManager(FlinkGateway.scala:72) ~[classes/:na]
at pl.touk.nussknacker.engine.management.RestartableFlinkGateway.invokeJobManager(FlinkGateway.scala:62) ~[classes/:na]
at pl.touk.nussknacker.engine.management.FlinkProcessManager.listJobs(FlinkProcessManager.scala:188) ~[classes/:na]
at pl.touk.nussknacker.engine.management.FlinkProcessManager.findJobStatus(FlinkProcessManager.scala:129) ~[classes/:na]
at pl.touk.nussknacker.ui.process.deployment.ManagementActor$$anonfun$receive$1$$anonfun$2.apply(ManagementActor.scala:64) ~[classes/:na]
at pl.touk.nussknacker.ui.process.deployment.ManagementActor$$anonfun$receive$1$$anonfun$2.apply(ManagementActor.scala:63) ~[classes/:na]
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253) ~[scala-library-2.11.12.jar:na]
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251) ~[scala-library-2.11.12.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36) [scala-library-2.11.12.jar:na]
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) [akka-actor_2.11-2.4.20.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.20.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.20.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.20.jar:na]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) [scala-library-2.11.12.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) [akka-actor_2.11-2.4.20.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39) [akka-actor_2.11-2.4.20.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415) [akka-actor_2.11-2.4.20.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.12.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.12.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.12.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.12.jar:na]

Quickstart - Error while reporting metrics

Hi.

I'm running docker-compose up as it says in the quickstart guide, however it seems to have an issue with starting the nussknacker service properly. Kibana, Grafana and Flink web services are up and running, but nussknacker returns 502.

I'm not sure, but I believe it has something to do with the error that comes up when I run docker-compose.
The log below just repeats itself until I "ctrl+c" it :)

I tried fixing this by substituting flink-metrics-graphite-1.2.0.jar with the newer flink-metrics-graphite-1.6.0.jar, but it didn't help :)

nginx_1 | 172.18.0.1 - - [20/Sep/2018:08:13:36 +0000] "GET /flink/joboverview HTTP/1.1" 200 28 "http://localhost:8081/flink/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
nginx_1 | 172.18.0.1 - - [20/Sep/2018:08:13:39 +0000] "GET /flink/joboverview HTTP/1.1" 200 28 "http://localhost:8081/flink/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
jobmanager_1 | 2018-09-20 08:13:40,456 WARN org.apache.flink.runtime.metrics.MetricRegistryImpl - Error while reporting metrics
jobmanager_1 | java.nio.channels.UnresolvedAddressException
jobmanager_1 | at sun.nio.ch.Net.checkAddress(Net.java:101)
jobmanager_1 | at sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:429)
jobmanager_1 | at com.codahale.metrics.graphite.GraphiteUDP.send(GraphiteUDP.java:90)
jobmanager_1 | at com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:270)
jobmanager_1 | at com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
jobmanager_1 | at org.apache.flink.dropwizard.ScheduledDropwizardReporter.report(ScheduledDropwizardReporter.java:230)
jobmanager_1 | at org.apache.flink.runtime.metrics.MetricRegistryImpl$ReporterTask.run(MetricRegistryImpl.java:417)
jobmanager_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
jobmanager_1 | at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
jobmanager_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
jobmanager_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
jobmanager_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
jobmanager_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
jobmanager_1 | at java.lang.Thread.run(Thread.java:748)
nginx_1 | 172.18.0.1 - - [20/Sep/2018:08:13:42 +0000] "GET /flink/joboverview HTTP/1.1" 200 28 "http://localhost:8081/flink/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
taskmanager_1 | 2018-09-20 08:13:42,714 WARN org.apache.flink.runtime.metrics.MetricRegistryImpl - Error while reporting metrics
taskmanager_1 | java.nio.channels.UnresolvedAddressException
taskmanager_1 | at sun.nio.ch.Net.checkAddress(Net.java:101)
taskmanager_1 | at sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:429)
taskmanager_1 | at com.codahale.metrics.graphite.GraphiteUDP.send(GraphiteUDP.java:90)
taskmanager_1 | at com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:270)
taskmanager_1 | at com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
taskmanager_1 | at org.apache.flink.dropwizard.ScheduledDropwizardReporter.report(ScheduledDropwizardReporter.java:230)
taskmanager_1 | at org.apache.flink.runtime.metrics.MetricRegistryImpl$ReporterTask.run(MetricRegistryImpl.java:417)
taskmanager_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
taskmanager_1 | at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
taskmanager_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
taskmanager_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
taskmanager_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
taskmanager_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
taskmanager_1 | at java.lang.Thread.run(Thread.java:748)
nginx_1 | 172.18.0.1 - - [20/Sep/2018:08:13:45 +0000] "GET /flink/joboverview HTTP/1.1" 200 28 "http://localhost:8081/flink/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"
nginx_1 | 172.18.0.1 - - [20/Sep/2018:08:13:48 +0000] "GET /flink/joboverview HTTP/1.1" 200 28 "http://localhost:8081/flink/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"

... and it just goes on

Lots of warnings from nussknacker_influxdb | [graphite]: value "-Inf" is unsupported

Describe the bug

When using docker-compose up, Grafana and Elasticsearch work, but lots of warnings are emitted on the console.

To Reproduce

NUSSKNACKER_VERSION=latest docker-compose up

Expected behavior
No such warnings

Screenshots


Environment (please complete the following information):

  • OS: CentOS 7.5
  • Method of running components: in Docker
  • Browser type and version: Chrome 75

Additional context

nussknacker_influxdb | [graphite] 2019/06/23 14:18:06 unable to parse line: demo.5751303fa6e6.taskmanagerTask.c404bf027acc1da660ee22d9f7812161.DetectLargeTransactions.Sink:-DetectLargeTransactions-save-to-elastic-sink.1.KafkaProducer.request-latency-max -Infinity 1561299486: field "demo.5751303fa6e6.taskmanagerTask.c404bf027acc1da660ee22d9f7812161.DetectLargeTransactions.Sink:-DetectLargeTransactions-save-to-elastic-sink.1.KafkaProducer.request-latency-max" value: "-Inf" is unsupported

'No configuration setting found for key 'kafka.zkAddress'

Hi,

I just cloned the repo and wanted to start working with the demo setup. Unfortunately, I ran into a problem when adding a new process to Nussknacker:

[ERROR] [06/19/2019 10:20:51.613] [nussknacker-ui-akka.actor.default-dispatcher-35] [akka.actor.ActorSystemImpl(nussknacker-ui)] Error during processing of request: 'No configuration setting found for key 'kafka.zkAddress''. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.

I started looking for causes and found out that just 4 hours ago a commit was made:

"[CONFIG] - remove unused zkAddress config" by mproch
f803ae3

Could it be that zkAddress was used?

Thanks,
Marek

subprocess input definition UX

image

  • when several parameters are added, the boxes do not align (CSS)
  • "field name" is confusing, "name" would be enough
  • I'm not sure what the second box is when defining parameters; I assume it's for the data type
  • please add autocompletion for the data type
  • please validate input completeness (empty params, empty data types)

How does the engine component work?

Hi, I am not familiar with Scala. Can you explain in more detail the statement that the Engine is a library which transforms the JSON representation of a graph with a Nussknacker process into a Flink job?

  1. What is the format of the JSON representation of the graph?
  2. How is the JSON transformed into a Flink job?

Thanks!

Documentation for subprocesses

Hi there,

I am taking a look at the composability of the designer. Can you clarify what subprocesses are, their behavior, etc.?

Thank you!

Cannot save and deploy my job when running in IDEA.

[ERROR] [05/17/2018 15:16:35.131] [nussknacker-ui-akka.actor.default-dispatcher-11] [akka.actor.ActorSystemImpl(nussknacker-ui)] Error during processing of request: 'Tcp command [Connect(localhost:8081,None,List(),Some(10 seconds),true)] failed because of 拒绝连接 (connection refused)'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler. (akka.stream.StreamTcpException)

And the UI shows the message 'Properties: Failed to parse expression: Empty expression(param1)'.

Process list shows processes within a category that shouldn't be visible to the user.

Describe the bug
For a user, all processes are visible on the process list - even if the user does not have the Read permission configured for some category.

To Reproduce
Steps to reproduce the behavior:

  1. Have Flink and Nussknacker running, e.g. in Docker, with two categories configured: one visible to user A and the second not visible.
  2. Open a browser, log in as user A and go to the 'main page'

Expected behavior
User A should only see processes in categories for which they have Read permission, but:
User A sees all processes.

Validate amount of test data to generate in UI

Clicking the Generate button shows a form that allows requesting an amount of test data. But the number is not validated: when I provide bla or 1000000000000000000000000000000000, no error message is shown. A file is generated and contains only one line:

The requested resource could not be found.

"align" in right menu

Common use of "align" is to align several selected items in some way. I think this button shoud be renamed to "layout".

"business view" destroys graph

  • create new process
  • do not save it
  • check "business view"
  • uncheck "business view"
  • process disappears

Maybe "Business view" should be unavailable before saving the process first.

generic "sources"

NN would be more accessible as a no-code or self-service platform with generic (configurable in GUI) "sources":

  • http pull for json
  • sql select database pull
  • generic Kafka source
  • JMS source

“./sbtwrapper ui/assembly“ Packaging error “java.io.IOException: Cannot run program "npm" (in directory "ui\client"): CreateProcess error=2“

Executing the command ./sbtwrapper ui/assembly throws an exception, even though npm is already on the PATH environment variable. This is on Windows!

java.io.IOException: Cannot run program "npm" (in directory "ui\client"): CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:126)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:154)
at $737037a525ee067296c3$.runNpm(build.sbt:11)
at $737037a525ee067296c3$$anonfun$ui$1.apply$mcV$sp(build.sbt:532)
at $737037a525ee067296c3$$anonfun$ui$1.apply(build.sbt:531)
at $737037a525ee067296c3$$anonfun$ui$1.apply(build.sbt:531)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.(ProcessImpl.java:386)
at java.lang.ProcessImpl.start(ProcessImpl.java:137)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:126)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:154)
at $737037a525ee067296c3$.runNpm(build.sbt:11)
at $737037a525ee067296c3$$anonfun$ui$1.apply$mcV$sp(build.sbt:532)
at $737037a525ee067296c3$$anonfun$ui$1.apply(build.sbt:531)
at $737037a525ee067296c3$$anonfun$ui$1.apply(build.sbt:531)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[error] (ui/*:buildUi) java.io.IOException: Cannot run program "npm" (in directory "ui\client"): CreateProcess error=2, The system cannot find the file specified

How can I solve it?

movable modal dialogs (UX)

Some windows are quite large. When they cover the diagram, you lose some context, e.g. you cannot quickly check where in the diagram they are.

(react-draggable?)

menu is not responsive

Describe the bug
Menu layout does not work on a narrow screen.

Expected behavior
Menu should be visible regardless of the screen size.

Configuration flinkConfig.queryableStateProxyUrl error

Bug information
When flinkConfig.queryableStateProxyUrl is set in the configuration and the taskmanager service is unavailable, the application throws an error and doesn't respond.

To Reproduce

  1. Disable in /demo/docker/docker-compose.yml two services: taskmanager and jobmanager
  2. Run docker compose
  3. Check nussknacker_app logs - see errors.

Expected behavior
The application should run normally even when the taskmanager is not available.

Logs from app
java.net.UnknownHostException: taskmanager: Name or service not known at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_212] at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_212] at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_212] at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_212] at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_212] at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_212] at java.net.InetAddress.getByName(InetAddress.java:1077) ~[na:1.8.0_212] at org.apache.flink.queryablestate.client.QueryableStateClient.<init>(QueryableStateClient.java:114) ~[org.apache.flink.flink-queryable-state-client-java_2.11-1.7.2.jar:1.7.2] at pl.touk.nussknacker.engine.flink.queryablestate.FlinkQueryableClient$$anonfun$1.apply(FlinkQueryableClient.scala:23) ~[pl.touk.nussknacker.nussknacker-queryable-state-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.engine.flink.queryablestate.FlinkQueryableClient$$anonfun$1.apply(FlinkQueryableClient.scala:21) ~[pl.touk.nussknacker.nussknacker-queryable-state-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at scala.collection.immutable.List.map(List.scala:284) ~[org.scala-lang.scala-library-2.11.12.jar:na] at pl.touk.nussknacker.engine.flink.queryablestate.FlinkQueryableClient$.apply(FlinkQueryableClient.scala:21) ~[pl.touk.nussknacker.nussknacker-queryable-state-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.engine.management.FlinkProcessManagerProvider$$anonfun$createQueryableClient$1.apply(FlinkProcessManager.scala:151) ~[pl.touk.nussknacker.nussknacker-management-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.engine.management.FlinkProcessManagerProvider$$anonfun$createQueryableClient$1.apply(FlinkProcessManager.scala:151) ~[pl.touk.nussknacker.nussknacker-management-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at scala.Option.map(Option.scala:146) ~[org.scala-lang.scala-library-2.11.12.jar:na] at pl.touk.nussknacker.engine.management.FlinkProcessManagerProvider.createQueryableClient(FlinkProcessManager.scala:151) ~[pl.touk.nussknacker.nussknacker-management-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.engine.ProcessingTypeData$.createProcessManager(ProcessManagerProvider.scala:47) ~[pl.touk.nussknacker.nussknacker-interpreter-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.process.ProcessingTypeDeps$$anonfun$createProcessManagers$1.apply(ProcessingTypeDeps.scala:61) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.process.ProcessingTypeDeps$$anonfun$createProcessManagers$1.apply(ProcessingTypeDeps.scala:57) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[org.scala-lang.scala-library-2.11.12.jar:na] at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[org.scala-lang.scala-library-2.11.12.jar:na] at pl.touk.nussknacker.ui.process.ProcessingTypeDeps$.createProcessManagers(ProcessingTypeDeps.scala:57) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.process.ProcessingTypeDeps$.apply(ProcessingTypeDeps.scala:47) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.NussknackerApp$.initializeRoute(NussknackerApp.scala:90) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.NussknackerApp$.delayedEndpoint$pl$touk$nussknacker$ui$NussknackerApp$1(NussknackerApp.scala:50) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.NussknackerApp$delayedInit$body.apply(NussknackerApp.scala:31) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at scala.Function0$class.apply$mcV$sp(Function0.scala:34) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.App$$anonfun$main$1.apply(App.scala:76) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.App$$anonfun$main$1.apply(App.scala:76) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.collection.immutable.List.foreach(List.scala:392) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35) ~[org.scala-lang.scala-library-2.11.12.jar:na] at scala.App$class.main(App.scala:76) ~[org.scala-lang.scala-library-2.11.12.jar:na] at pl.touk.nussknacker.ui.NussknackerApp$.main(NussknackerApp.scala:31) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905] at pl.touk.nussknacker.ui.NussknackerApp.main(NussknackerApp.scala) ~[pl.touk.nussknacker.nussknacker-ui-demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905.jar:demo-7e66dde0541ece08ff8b5fb06b6b3c64d16b4905]

Environment:

  • Nussknacker version: all
  • Java version: 1.8
  • Method of running components: Docker demo example with taskmanager and jobmanager disabled

export PDF action (FE)

  • could be disabled on an empty diagram
  • should be renamed to "PDF"; "exportPDF" is too wide
  • the icon duplicates the one on "export to JSON"

deploy error when following the first example

Hello
Running with docker-compose, it works for me up to the Save step (first process).

When I deploy the saved process, I get an error in the Docker logs:
app_1 | org.apache.flink.util.ConfigurationException: Config parameter 'Key: 'jobmanager.rpc.address' , default: null (deprecated keys: [])' is missing (hostname/address of JobManager to connect to)
it seems related to
environment:
- JOB_MANAGER_RPC_ADDRESS=jobmanager
in the docker-compose file ...... right? Is one setting lacking? (see the sketch after this message)

best regards
philg
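
For reference, a minimal sketch of the environment settings this error usually points at, assuming the standard Flink Docker images (service names and structure are illustrative, not copied from the project's docker-compose file):

    jobmanager:
      environment:
        # address under which the Flink JobManager is reachable
        - JOB_MANAGER_RPC_ADDRESS=jobmanager
    taskmanager:
      environment:
        # taskmanagers (and clients) must point at the same address
        - JOB_MANAGER_RPC_ADDRESS=jobmanager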

Nodes with id and name.

Currently, the visible name is used as an identifier in the process. It would be nice to have a separate id and name for nodes visible on the left panel. For example, when in the app there are two separate nodes for two categories, but we want to show them as the same node.

Problems with buildServer.sh

Hi, I have several errors when I try to compile "ui" via "buildServer.sh" or "build.sbt".

In order, the problems I encounter are:

[error] /home/io/Scaricati/nussknacker-master/engine/kafka/src/main/scala/pl/touk/nussknacker/engine/kafka/KafkaEspUtils.scala:53:56: ambiguous reference to overloaded definition,
[error] both method putAll in class Properties of type (x$1: java.util.Map[_, _])Unit
[error] and  method putAll in class Hashtable of type (x$1: java.util.Map[_ <: Object, _ <: Object])Unit
[error] match expected type java.util.Map[String,String] => ?
[error]     config.kafkaProperties.map(_.asJava).foreach(props.putAll)
[error]                                                        ^
[error] one error found
  1. I tried to update package.json to avoid this, but without any positive result.
npm WARN [email protected] requires a peer of ajv@^6.9.1 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of webpack@^3.0.0 || ^4.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of webpack@^2.0.0 || ^3.0.0 || ^4.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of webpack@^4.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of typescript@* but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of webpack@^2.0.0 || ^3.0.0 || ^4.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of webpack@^4.x.x but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react@^0.14.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of moment@^2.20.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react@^16.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react-dom@^16.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react@^15.5.x || ^16.x but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react-dom@^15.5.x || ^16.x but none is installed. You must install peer dependencies yourself.
npm WARN [email protected] requires a peer of react@^0.14.8 but none is installed. You must install peer dependencies yourself.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

> [email protected] build /home/io/Scaricati/nussknacker-master/ui/client
> NODE_ENV=production webpack -p --progress && cp -R assets/icons/license dist

/bin/sh: 1: git: not found
child_process.js:677
    throw err;
    ^

Error: Command failed: git log -1 --format=%H
/bin/sh: 1: git: not found

    at checkExecSyncError (child_process.js:637:11)
    at Object.execSync (child_process.js:674:13)
    at Object.<anonymous> (/home/io/Scaricati/nussknacker-master/ui/client/webpack.config.js:7:31)
    at Module._compile (internal/modules/cjs/loader.js:738:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:749:10)
    at Module.load (internal/modules/cjs/loader.js:630:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:570:12)
    at Function.Module._load (internal/modules/cjs/loader.js:562:3)
    at Module.require (internal/modules/cjs/loader.js:667:17)
    at require (internal/modules/cjs/helpers.js:20:18)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `NODE_ENV=production webpack -p --progress && cp -R assets/icons/license dist`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2019-02-19T17_15_47_794Z-debug.log
[error] java.lang.RuntimeException: Client build failed
[error] 	at $bd43d95d8e743fc8ac8c$.runNpm(build.sbt:585)
[error] 	at $bd43d95d8e743fc8ac8c$.$anonfun$ui$2(build.sbt:603)
[error] 	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
[error] 	at sbt.std.Transform$$anon$3.$anonfun$apply$2(System.scala:44)
[error] 	at sbt.std.Transform$$anon$4.work(System.scala:64)
[error] 	at sbt.Execute.$anonfun$submit$2(Execute.scala:257)
[error] 	at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] 	at sbt.Execute.work(Execute.scala:266)
[error] 	at sbt.Execute.$anonfun$submit$1(Execute.scala:257)
[error] 	at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:167)
[error] 	at sbt.CompletionService$$anon$2.call(CompletionService.scala:32)
[error] 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[error] 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[error] 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[error] 	at java.base/java.lang.Thread.run(Thread.java:834)
[error] (ui/*:buildUi) Client build failed

This is the debug log

0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/node', '/usr/bin/npm', 'run', 'build' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prebuild', 'build', 'postbuild' ]
5 info lifecycle [email protected]~prebuild: [email protected]
6 info lifecycle [email protected]~build: [email protected]
7 verbose lifecycle [email protected]~build: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]~build: PATH: /usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/io/Scaricati/nussknacker-master/ui/client/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
9 verbose lifecycle [email protected]~build: CWD: /home/io/Scaricati/nussknacker-master/ui/client
10 silly lifecycle [email protected]~build: Args: [ '-c',
10 silly lifecycle   'NODE_ENV=production webpack -p --progress && cp -R assets/icons/license dist' ]
11 silly lifecycle [email protected]~build: Returned: code: 1  signal: null
12 info lifecycle [email protected]~build: Failed to exec build script
13 verbose stack Error: [email protected] build: `NODE_ENV=production webpack -p --progress && cp -R assets/icons/license dist`
13 verbose stack Exit status 1
13 verbose stack     at EventEmitter.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:301:16)
13 verbose stack     at EventEmitter.emit (events.js:197:13)
13 verbose stack     at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack     at ChildProcess.emit (events.js:197:13)
13 verbose stack     at maybeClose (internal/child_process.js:984:16)
13 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:265:5)
14 verbose pkgid [email protected]
15 verbose cwd /home/io/Scaricati/nussknacker-master/ui/client
16 verbose Linux 4.15.0-45-generic
17 verbose argv "/usr/bin/node" "/usr/bin/npm" "run" "build"
18 verbose node v11.10.0
19 verbose npm  v6.8.0
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] build: `NODE_ENV=production webpack -p --progress && cp -R assets/icons/license dist`
22 error Exit status 1
23 error Failed at the [email protected] build script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

What am I doing wrong?
Thanks in advance.

Categories as a root of other objects

Currently, categories are assigned to each object (sources, sinks, services, custom nodes, globalVariables etc.) in a one-to-many relation. This design choice makes some configurations hard to achieve, e.g.:

  • For one category you can't have a global variable with different fields than in another one
  • If you want an object to be executed differently at runtime depending on the category, you need to create many such objects with different ids (see #122)

I think the API should be redesigned so that categories become root objects for the others. A process at runtime should have information about its category and should create extensions based on it.

"properties" button on UI freezes on empty process

Hi there!

Thank you for this amazingly thought-out project. I have just started experimenting with it, and I am already amazed by the kind of base/infrastructure your project can provide for my use case. I am still experimenting, but I already forked the project to get myself more comfortable.

Because this issue is very minor in UI, I didn't want to open an issue first. But rather than keep a note for myself, I thought I'd add the things I see as I go along in the issue tracker.

The error from quickstart kit:

Uncaught TypeError: Cannot read property 'type' of undefined
    at findNodeDefinitionId (bundle.js?be3ab410fce0434642f9:14)
    at findNodeObjectTypeDefinition (bundle.js?be3ab410fce0434642f9:14)
    at f._additionalVariablesForParameter (bundle.js?be3ab410fce0434642f9:14)
    at findAvailableVariables (bundle.js?be3ab410fce0434642f9:14)
    at s (bundle.js?be3ab410fce0434642f9:75)
    at s.configureFinalMapState (bundle.js?be3ab410fce0434642f9:180)
    at s.computeStateProps (bundle.js?be3ab410fce0434642f9:180)
    at s.updateStatePropsIfNeeded (bundle.js?be3ab410fce0434642f9:180)
    at s.render (bundle.js?be3ab410fce0434642f9:180)
    at d._renderValidatedComponentWithoutOwnerOrContext (bundle.js?be3ab410fce0434642f9:183)

Initial state: (screenshot omitted)

Clicked "properties" on the right side: (screenshot omitted)

You can see the console error, as well as the query parameter in the URL being undefined.

Thank you again! It is going to be fun experimenting with nuss.

An error happens when running 'buildServer.sh'

[info] Set current project to nussknacker-master (in build file:/data/gits/nussknacker-master/)
Using path: /data/gits/nussknacker-master/ui/client
java.io.IOException: Cannot run program "npm" (in directory "ui/client"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:126)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:154)
at $c9bf2fb7fddbcc6c0d5b$.runNpm(build.sbt:11)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply$mcV$sp(build.sbt:532)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply(build.sbt:531)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply(build.sbt:531)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:126)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:154)
at $c9bf2fb7fddbcc6c0d5b$.runNpm(build.sbt:11)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply$mcV$sp(build.sbt:532)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply(build.sbt:531)
at $c9bf2fb7fddbcc6c0d5b$$anonfun$ui$1.apply(build.sbt:531)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[error] (ui/*:buildUi) java.io.IOException: Cannot run program "npm" (in directory "ui/client"): error=2, No such file or directory
[error] Total time: 0 s, completed May 7, 2018 5:01:47 PM
