
explorable-viz / fluid

Data-linked visualisations

Home Page: http://f.luid.org

License: MIT License

JavaScript 8.47% CSS 1.41% Dhall 1.39% PureScript 83.57% HTML 1.77% Shell 0.52% Python 2.87%
data-visualization provenance dependency-analysis program-slicing open-science

fluid's People

Contributors

dependabot[bot], dorchard, harleengulati03, josephbond, rolyp


Forkers

tiashah1

fluid's Issues

Annotation-based backward slicing

Basic debugging:

  • bot clears all annotations on an existing computation
  • uneval produces the same expression (identical id) as the one passed to eval
  • integrate an uneval step into every test
  • memoise setall

Sync with new match design:

  • zipper design for ExplVal.Match
  • MatchId based on ValId - requires versioned tries
  • lookup (merge with match)
  • replace mapMatch, mapArgs by setκ (might need to revert to something close to map)
  • eval_ to store matches in traces
  • uneval to use matches to unmatch

First pass over:

  • bot for annotated terms
  • join for environments - actually compute this as we go along, LVar-style
  • uncloseDefs
  • unmatch
  • uninstantiate
  • uneval

Spec for core language:

  • syntax of matches as zippers
  • typing rules for matches (first attempt)
  • new pattern-match definition
  • new reverse pattern-match definition
  • update eval and uneval accordingly

Spec for implementation language:

  • syntax of matches
  • new pattern-match definition
  • new reverse pattern-match definition
  • update eval and uneval accordingly

Test backward slicing

Tidy up memoisation infrastructure and put some proper backward-slicing tests in place.

  • uneval need not return anything - actually, for uninstantiate and unmatch it is useful to propagate (inverse) addresses through function calls
  • consolidate lookupArg_ and lookupArg
  • reinstate old bar-chart example
  • length: needing the result needs only the cons cells from the input lists
  • normalise:
    • retaining only pair constructor discards both subcomputations
    • retaining either component of pair retains both subcomputations
  • zipW:
    • first cons of output needs:
      • first cons of each input
      • recursive function zipW itself
    • needing first element as well requires:
      • constructor at head of op definition
      • application of op in body of zipW to pair, but not components of pair

Histogram bars

Enough work to draw some bars:

  • Point datatype (a "path" will just be a list of points, for now)
  • delete old index.html?
  • Rectangle datatype
  • stackRight
  • 3D controls
  • polylines/paths

Support '_' properly for unused variables

This is now supported, although not as useful as one might have hoped, given the way we merge patterns into a single trie. Merging _ with a (named) variable is illegal, so a variable has to be unused on every branch for it to be substituted with _. (The sketch below illustrates why.)

We also now (like Haskell 98) treat _ as a reserved identifier, so that it is for example illegal to define a function called _. The variable pattern grammar is extended specifically to include _. (This makes the Nearley parser produce an absurdly long error message, but one which at least starts with the string Unexpected keyword token: “_”.)
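
A minimal sketch of why the merge is rejected, using a hypothetical Trie type and names (not the actual implementation): a named variable binds, whereas _ does not, so a merged trie would have to do both at once.

type Trie<K> =
   | { kind: "Var", x: string, κ: K }   // named variable pattern, binds x
   | { kind: "Wildcard", κ: K }         // the reserved identifier "_", binds nothing

function mergeVar<K> (σ: Trie<K>, τ: Trie<K>, mergeKont: (κ1: K, κ2: K) => K): Trie<K> {
   if (σ.kind === "Var" && τ.kind === "Var" && σ.x === τ.x) {
      return { kind: "Var", x: σ.x, κ: mergeKont(σ.κ, τ.κ) }
   }
   if (σ.kind === "Wildcard" && τ.kind === "Wildcard") {
      return { kind: "Wildcard", κ: mergeKont(σ.κ, τ.κ) }
   }
   // Merging "_" with a named variable (or two different variables) is rejected.
   throw new Error("Cannot merge \"_\" with a named variable pattern")
}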

Expressions as traces

Implementation tasks:

  • "instantiate" (nee "translate")
  • eval should no longer construct an evaluand, but get one from the input expression
  • sort out trie type signatures/mess with trie nesting for products:
    • ConstrTrie should map constructor names to (T | Trie), not T
    • parser to enforce that constructors respect constructor signatures
    • parser to enforce that constructor patterns respect constructor signatures
    • evaluation and primitive evaluation can assume constructor signatures respected
  • problem with Lex.Var not being versioned/interned (related to #22)

Stacked bar chart/line plot

  • basics of stacked bar chart
  • reinstate axes
  • reinstate captions
  • add a “total” plot line (in black) to line-graph example
  • legend - #253
  • resolve scaling issues - #254

Labels on axes

Bar chart:

  • barGraph library function taking categorical data
  • compute number of y-axis ticks in barGraph, replacing numerical constant 4
  • heuristic saying how often to place a tick on an axis of length n
  • centre ticks for categorical axes
  • "centre" ticks/labels for categorical data
  • numerical labels on y-axis
    • first pass over SVG text elements (#167)
    • hard-code font size to 10pt
    • x-axis function to return offset, rather than caller to assume axis_margin + tickLength

Line chart:

  • lineGraph library function taking numerical data
  • y-axis
  • x-axis should show years
  • only showing 2 rather than 3 years

x-axis formatting:

  • centre text on tick according to text width
  • text below tick according to text height

y-axis formatting:

  • centre text on tick according to text height
  • right-justify based on text width

Still to do:

  • categorical names rather than numerical values for categorical axes
  • x-offset of bar chart incorrect

Formalise datatypes

  • drop binary sums, binary products and recursion from "appendix" language
  • new syntax for:
    • datatype names (assume fixed datatype environment)
    • constructors (n-ary sums)
    • constructor tries
    • n-ary products
    • constructor traces
  • typing rules
  • evaluation rules
  • trie-match rules
  • matched-trie map rules
  • match typing rules
  • explanation typing rules
  • "explained match" typing rules
  • type signatures for "match" functions
  • unify "sig" of data type with signatures of primitive ops
  • syntax for matched constructor tries
  • drop numerical indices in constructor rules
  • sync "persistent evaluation" rules
  • explanation typing rules for main body of paper
  • define closeDefs

Auto-scale image to fit viewport

  • dimensions function to compute size of GraphicsElement
  • stackRight in terms of width
  • rewrite spaceRight in terms of width
  • scale by x + width and y + height?

Generic way to extract value to visualise

Currently we depend on the specific structure of the values computed by the application (see #50), whereas maybe the interpreter should operate on "compiled" values directly. At the very least we should be able to do datatype-generic reflection, rather than the currently hardcoded approach.
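
A minimal sketch of what datatype-generic reflection could look like, assuming a hypothetical Value representation (constructor name plus arguments) and a registry populated from datatype definitions; illustrative only, not the interpreter's actual types.

interface Value {
   ctr: string       // constructor name, e.g. "Rect", "Point", "Cons"
   args: Value[]
}

// Registry from constructor names to native classes, populated from datatype
// definitions rather than hardcoded per application.
const ctrRegistry: Map<string, new (...args: any[]) => object> = new Map()

function reflect (v: Value): any {
   const cls = ctrRegistry.get(v.ctr)
   if (cls === undefined) {
      throw new Error(`No native class registered for constructor ${v.ctr}`)
   }
   return new cls(...v.args.map(reflect))   // reflect arguments recursively
}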

  • getRectsAxes and helpers (instead of populateScene) should:
    • build Rect values directly
    • build Path values directly
  • shape variable evaluates to null in App.ts
    • problem(s) with __shallowMergeAssign:
      • perhaps inconsistent with __shallowCopy? (fix by factoring latter through former)
      • mutates interned objects!
        • fix lookupArg to support ES6
        • switch to ES6 and use Object.freeze for interned objects
      • currently broken for interned objects - unclear how to reconstruct them during merging
  • reflect to convert Value to native value

Stroke width

Some notion of line/polyline with variable thickness. Played about with this:

import * as THREE from "three"
import * as Meshline from "three.meshline"

export class ThickPath extends THREE.Geometry {
   constructor (path: THREE.Vector2[]) {
      super()
      // Lift the 2D path into the z = 0 plane.
      for (const point of path) {
         this.vertices.push(new THREE.Vector3(point.x, point.y, 0))
      }
   }

   object3D (): THREE.Object3D {
      // MeshLine renders the polyline as a triangle strip, so lineWidth applies
      // uniformly (unlike THREE.Line, where width support is driver-dependent).
      const line = new Meshline.MeshLine()
      line.setGeometry(this)
      const material = new Meshline.MeshLineMaterial({
         color: new THREE.Color(0x000000),
         sizeAttenuation: 0,
         lineWidth: 0.020
      })
      return new THREE.Mesh(line.geometry, material)
   }
}
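
For reference, a usage sketch (assuming an existing THREE.Scene):

const path = [new THREE.Vector2(0, 0), new THREE.Vector2(1, 1), new THREE.Vector2(2, 0)]
const scene = new THREE.Scene()
scene.add(new ThickPath(path).object3D())   // renders the polyline with uniform thickness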

Reimplement deep patterns using tries

As per my "demand-indexed computation" experiment. Imported from rolyp/lambdacalc-old#4.

  • Trie datatype, analogous to Constr:
    • Trie.Var maps any argument to a continuation, plus environment binding the variable
    • Trie.Prim (?) that demands a strict value of primitive type
  • primitive operations
    • PrimOp should be an expression form?
    • prim ops as tries of ground type
    • delete Expr.OpName (consolidate with Expr.Var)
  • Trie.Constr case via nested tries
  • new function application semantics, based on tries
  • new match..as semantics, based on tries
  • semantics on paper for demand-indexed self-explaining computation
  • adapt above to work with environments rather than substitutions
  • write-up:
    • minor corrections to self-explaining, demand-indexed
    • arbitrary ground types rather than just nat; unary primitives
    • drop de Bruijn indices in favour of explicit names
    • environment-based demand-indexed evaluation
    • compute output environment as part of the judgement?
    • define self-explaining as a function over a demand-indexed proof tree
    • delete self-explaining experiment? (its patterns won't be as nice as tries)
    • τ, υ to range over tries (υ for arbitrary tries, τ for non-empty ones)?
    • ρ, σ to range over environments?

Ad hoc polymorphism

Something like a dynamically-typed version of Haskell's type classes. Also look at Clojure's multimethods and Agda's instance arguments. Basically we just want the most brain-dead way of supporting ad hoc polymorphism. There isn't really any way to support Haskell-style return-type polymorphism (in the absence of type inference). A minimal dispatch sketch follows the use cases below.

Use cases to support:

  • overloading of primitive operations
  • defining instances of signatures like Show for data types
  • implied instances (like Show A => Show List)?
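
A minimal dispatch-table sketch of the "brain-dead" approach (hypothetical names, not a committed design): instances are keyed by signature name and constructor name, and looked up at call time on the first argument.

type Method = (...args: any[]) => any

const instances: Map<string, Map<string, Method>> = new Map()

function instance (sig: string, ctr: string, m: Method): void {
   if (!instances.has(sig)) {
      instances.set(sig, new Map())
   }
   instances.get(sig)!.set(ctr, m)
}

// Dispatch on the constructor of the first argument.
function invoke (sig: string, v: any, ...rest: any[]): any {
   const m = instances.get(sig)?.get(v.ctr)
   if (m === undefined) {
      throw new Error(`No ${sig} instance for ${v.ctr}`)
   }
   return m(v, ...rest)
}

// Example: a Show instance for lists.
instance("Show", "Nil", (_: any) => "[]")
instance("Show", "Cons", (v: any) => `${v.args[0]} : ${invoke("Show", v.args[1])}`)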

Reinstate interpreter and evaluation tests

  • get interpreter code compiling (modulo trie stuff)
  • datatype definitions
  • semantics of mutual recursion (letrec)
  • complete trie implementation (#6)
  • tests that only require primitive types (normalise, arithmetic, compose)
  • remaining evaluation tests

Punt for now:

  • suitable demand for root computation, where we don't know the type of the result
  • reinstate projections as primitives (user-level functions instead?)

List notation

  • do #49 first?
  • infix cons (::)
  • Haskell-style list notation [a, b, c], with [x, ...xs] as notation for cons (desugaring sketched after this list)
  • revisit continuation-passing idiom for trie-parsing? (requires trie zipper)
  • extend notation to pattern-matching
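
A sketch of the cons desugaring mentioned above, using hypothetical Expr constructors for illustration only.

type Expr =
   | { kind: "Constr", ctr: "Nil" | "Cons", args: Expr[] }
   | { kind: "Var", x: string }

// [a, b, c]    desugars to  Cons(a, Cons(b, Cons(c, Nil)))
// [x, ...xs]   desugars to  Cons(x, xs)
function desugarList (elems: Expr[], tail?: Expr): Expr {
   let e: Expr = tail ?? { kind: "Constr", ctr: "Nil", args: [] }
   for (let i = elems.length - 1; i >= 0; --i) {
      e = { kind: "Constr", ctr: "Cons", args: [elems[i], e] }
   }
   return e
}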

Call-by-value

Drop "demand-indexed" evaluation; too much of a distraction and not directly relevant to what we want to do. Closely related to backwards slicing, but not equivalent in that it doesn't automatically compute slices of functions.

We will retain tries-with-variable-binding as a nice approach to pattern matching, but will no longer need "top" tries, which are a not-entirely-straightforward special case at the moment (see #74).

  • delete Top tries and argument tries
  • new eval_; drop EnvEntry, evalArgs
  • EvalKey, Result and Results no longer required
  • letrec can only define recursive functions, not values; revisit closeDefs
  • lookup which returns an environment
  • inline PrimOp.invoke and delete demands of primitive ops
  • do I still need merge/LVar semantics? -- no

Drop `Expr` syntax and build traced values directly

Having an expression-trace distinction is nice in some ways, but giving instantiate (the function that both turns an expression into a traced value and gives it an id unique to its runtime environment) a meaningful type is quite hard: expressions include tries, which include nested tries with correspondingly complex types, and those types transform according to the embedding. Without type-level functions, this is hard to capture.

I think it might be preferable to build traced values directly in the parser, as we did previously, so that the only job of instantiate is to translate the ids, leaving the metatype of the input unchanged. This should allow for more precise typing.

Subtasks:

  • first check whether instantiate can be handled better via trie map
  • instantiate should take a Traced, and extract its "expression id" to make the new runtime id
  • drop the meaningless Kont supertype(s)
  • give the instantiate functions more meaningful types
  • delete Expr module

Reflection should create versioned not interned objects

  • delta representation experiment using mapped types and conditional types
    • initial experiment: mapped types potentially powerful but broken - very easy to get to a situation where the type system says everything is fine but I have no idea what anything means
  • reflection should create versioned objects not interned objects
    • interning vs. versioning as a dynamic rather than static property
      • inline util submodule, delete unused bits (including memo)
      • creation code for versioned and interned objects: nearby & consolidated
      • persistent lexemes; drop ValueObject
      • nominal idiom to enforce separation of interned and versioned objects
      • uniform construction idiom for interned and versioned objects
  • generic differencing code for versioned objects
  • genericise constructor_ methods in versioned objects - doesn't work

Broken syntax for nullary constructors

The following parses incorrectly without the brackets around Nil:

      concat 
         (Nil)
         (map (map transpose) (axis (length rects') width))

Could use a more Haskell-like (function-like) syntax for constructors, but need to consider the fact that constructors are not yet first-class. If we made constructors first-class, we could use #327 to allow the user to write infix : instead of Cons, which would leave us with array syntax very much like PureScript, although it's not clear how to make that work in patterns. I would probably then drop the old [x, ...xs] notation from the TypeScript implementation.

Open questions (not all necessarily related to this problem):

  • should the syntax be "applicative", i.e. Cons e e' rather than Cons(e, e')?
  • relatedly, should constructors appear curried: should Cons and Cons e be values?
  • can we define constructors as infix (or provide infix synonyms)?
  • do we need “friendly” list syntax [e, e', e'']

For now what we need is a simple, reasonably familiar design that also leaves us open to improvements in the future. Current proposal (a sketch of unsaturated constructor values follows this list):

  • distinguish constructors from identifiers by case, as we did previously (and as per Haskell) -- makes it easy to distinguish pattern variables from nullary constructors
  • treat constructor names as identifiers that resolve to operators (primitives, not closures) - then "applicative" constructor parsing just falls out -- no: retain current grammar, but at runtime permit unsaturated constructor expressions, which will then be "applicable" (but which won't themselves be interpreted as application chains)
  • generalise existing primitives infrastructure to work with (n + 1)-ary operators
  • make constructors first-class (no need to unify with primitives, though):
    • represent partially applied constructors as unsaturated constructor values
      • constructor name evaluates to constructor value with zero arguments
      • allow such things to be applied until saturated
    • forward-slicing: annotation on (partial) constructor value is α argument to eval conjoined with annotation on partial constructor being applied (when saturated, consistent with previous monolithic approach)
  • uniform apply interface to Unary and Binary, so the Binary implementation can take care of creating the PartialApp – no longer required
  • missing verification of constructor signatures against data types (not responsibility of parser):
    • access to constructor signatures from data type definitions
    • constructor argument list matches arity of constructor
    • constructor pattern parameters matches arity of constructor
    • domain of a constructor eliminator is set of summands of the same data type
  • Cons as infix :
    • define a pattern syntax analogous to expression syntax: an “operator tree” where
      • branches are binary infix constructors
      • leaves are “application chains” where the symbols in the function position are constructor names
    • operator definitions to include associativity as well as precedence
  • undefined join of partial eliminators should be fatal, rather than cause backtracking
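
As mentioned, a sketch of unsaturated constructor values (hypothetical representation): a constructor name evaluates to a constructor value with zero arguments, which can then be applied until saturated.

interface CtrValue {
   ctr: string
   arity: number
   args: unknown[]
}

function ctrValue (ctr: string, arity: number): CtrValue {
   return { ctr, arity, args: [] }
}

function applyCtr (c: CtrValue, v: unknown): CtrValue {
   if (c.args.length >= c.arity) {
      throw new Error(`${c.ctr} is already saturated`)
   }
   return { ...c, args: [...c.args, v] }   // saturated when args.length === arity
}

// Example: Cons applied twice becomes a saturated list cell.
const cons = ctrValue("Cons", 2)
const cell = applyCtr(applyCtr(cons, 1), ctrValue("Nil", 0))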

Field accessors/updaters

Data type constructors should automatically introduce projections for extracting field values, as well as some kind of field update notation. This feature potentially interacts with #48 because fields of different constructors may have the same name - in Haskell this would be an error, but we might want to treat the duplicate field like an overloaded function/implicit typeclass. Ideally such accessors/updaters would be first-class.

  • formalise as part of implementation language
  • generate as part of datatype
  • evaluation rules
  • slicing rules

Axes

  • way to represent lines/paths that supports get_x/put_x
  • performance problem with __shallowMergeAssign
  • mock up of x-axis
    • tick marks
    • centre ticks
    • axis line
  • parameterise to support y axis
  • remove dependency of axis on sep
  • translate axes slightly away from zero position

Composable charts

A more nested approach to charts, inspired by SVG's nested viewports/coordinate systems and Tomas's Compost library.

  • new Graphic element: list of GraphicsElements with left-to-right rendering order
  • use Graphic rather than concat to assemble bar chart example from chart + 2 axes
  • Translate element not to be imperative but rather to apply to a contained Graphic
  • Transpose to work in the same way and port axes to new approach
  • support for categorical quantities

Example with multiple views

Extend bar chart example with at least one additional view and use this to drive new features.

  • basic support for floating-point numbers (#105)
  • manually import some data from Renewables Global Status Report
  • bar chart with categorical x-axis (for each country, total renewables)
  • second plot showing (for a given country, or perhaps global) change in renewables over time
  • PathStroke shouldn't close path by default
  • visualise points in a line plot (distinct from highlighted/selected point visualisation - #113)
  • stacked bar chart? (#110)
  • generate axes automatically (#111)
  • add a Scale transform and use instead of manual division by 30
  • auto-scale image to fit viewport (#112)

Monaco integration

Basic integration with the VS Code editor Monaco for use inside Wrattler.

  • initial stab at Monarch syntax definition
  • align with lexical grammar in grammar.ne

Erasing values violates monotonicity

"Erased" values (that arise either through forward slicing or through demanding a computation with a variable trie) are represented as null, where elsewhere I have started to use special Bot values (with addresses). In particular bottom environments can't be represented as null because they (probably) need to preserve the structure of the environment that they are a slice of. Also, now that Kont is a sensible interface, it is not possible to have null implement Kont.

Uniformity suggests that values should be treated the same as other erased terms. But there seems to be a more fundamental problem with allowing erased values to not have addresses: it can impact the identity of any computation whose address is determined by an environment containing that value. This violates monotonicity, since slicing can create "new" computations.

A naive first fix might be to simply ensure that erased values always have addresses. This is closer to what we want, but still doesn't work because in the 'non-terminal' cases we delegate the address of the result to a subcomputation. If we somehow don't know what that subcomputation is because of slicing then we can't allocate an address consistent with the "baseline" run, again breaking monotonicity.

I think what this reinforces is the idea that (dynamic) slicing is always with respect to a fixed, fully known, computation. The identity of all subcomputations is fixed for the slice. Slicing simply propagates bottom.

Improve test infrastructure

Some infrastructure to make testing easier and to gradually tease out an end-user interaction model:

  • some kind of setup to do parseExample
  • FwdSlicing helper
  • BwdSlicing helper
  • scope (sub-)tests to revisions using newRevision
  • separate the on method (which asserts the class of a syntax node) from the other navigation helpers
  • annotation helpers (needed, notNeeded, need, notNeed)

Worlds

Allow a single computation to be executed with two different (but related) demands.

To allow two different computations to be executed (with corresponding demands) and compared, we will need to parameterise evaluation on a "world" (or "version"); currently this is global:

  • interned objects are similar to rigid designators; they denote the same thing (have the same content) in every possible world
  • versioned objects may take on different values in different worlds

If we had this notion then we could probably use it to execute a single computation with two different demands.
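
A minimal sketch of such a versioned object, assuming for simplicity that worlds are totally ordered (the notes below call for a partial order) and using hypothetical names.

type World = number   // worlds assumed totally ordered here, for simplicity

class Versioned<T> {
   private states: Map<World, T> = new Map()

   // LVar-style update: the new state must be an increase (join) of the old.
   setAt (w: World, t: T, join: (old: T, t: T) => T | undefined): void {
      const prior = this.getAt(w)
      if (prior === undefined) {
         this.states.set(w, t)   // copy-on-write: fresh entry for this world
      } else {
         const joined = join(prior, t)
         if (joined === undefined) {
            throw new Error("Update must be monotonically increasing")
         }
         this.states.set(w, joined)
      }
   }

   // The state at w is the state set at the most recent world ≤ w.
   getAt (w: World): T | undefined {
      let best: World | undefined
      for (const w2 of this.states.keys()) {
         if (w2 <= w && (best === undefined || w2 > best)) {
            best = w2
         }
      }
      return best === undefined ? undefined : this.states.get(best)
   }
}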

  • switch versioned objects to a "constructor"-style factory method at, and make calling version part of at
  • experiment with global revision/world
    • give versioned objects a partial map from worlds to states
    • updating at a world should enforce LVar semantics (i.e. fail if not increasing)
    • updating when the most recent state was set in a strictly prior world should create a new entry in the partial map
    • ...with copy on write semantics
    • worlds are partially ordered
    • better names for __version, __mergeAssign, __merge
    • null assignment should remain a no-op at the same world, but mutate at a new world

Annotation-based forward slicing

  • remove all existing Bot and null constructs
  • abstract (lattice) notion of annotation
  • annotations on expressions, values
  • forward slicing for closeDefs
  • forward slicing for lookup (requires annotated continuations)
  • forward slicing for eval (requires resolving issue with shared values)

Test forward slicing

  • setα helper
  • basic propagation test in arithmetic.lcalc
  • show length parametric in elements
  • show length depends on tail
  • show independence of each element in map.lcalc
  • slice filter to function that counts the elements that satisfy the predicate
  • show lookup insensitive to keys in unexplored parts of the tree
  • show lookup sensitive to keys in explored parts of the tree
  • erasing closing Nil in reverse erases outermost Cons of output
  • improve navigation helpers, e.g. for constructor arguments

Curried match notation

Should be able to write nested matches using match e e' by analogy with function definitions.

Basic annotation visualisation

  • annotations for basic types
  • annotations for primitive types (number, string, boolean) - can't use ES6 Number/String
  • reflect to propagate annotations
  • how to interpret (annotated) rectangle as list of (annotated) points - migrate to user code
  • model axes using PathStroke
  • some non-trivial forward and backward slices for bar-chart test
  • basic visualisation for annotated properties of visual elements
  • improve CBV efficiency of iterate (no, I think it's always quadratic)
  • introduce MemoKey or similar and consolidate ExprId etc with keys used by memo
  • α as non-enumerated field - want to treat α as a field for single-assignment purposes

Bordered Rect

  • rectsAsPoly doesn't yet make sense as we lack a generic way to draw a (filled) polygon
  • some top-level notion of what a "visualisation" is; currently hardcoded as a pair of lists of points, one of which is interpreted as a bordered rectangle, but we need something more generic (perhaps a list of polygons and paths in z-order - have a look at SVG and also Two.js for grouping)
  • move border code out of App.ts - can't easily put this into user code until we have something like typeclasses (and some way to call user methods from native code)
  • preliminary notions of "fill" and "stroke" for Rect (will generalise to polygons)

Properly distinguish Expr and Trace

Currently instantiate translates an expression address by an appropriate sequence of value addresses, and promotes it from an Expr to a Trace. This requires us to have trace forms that support nulls purely as an interim state after instantiate but before eval. Moreover, the logic of instantiate (while probably equivalent to the formalism) is presented differently, in a way that isn't helpful. We should simplify. I seem to have the worst of both worlds at the moment: a distinction between Expr and Trace, yet a runtime representation of Exprs as Traces.

  • eval should take an expression form not a trace
  • eval to call instantiate in the cases specified in the formalism (beta reduction)
  • Trace forms no longer need to support null as an interim state - there may still be something analogous for error cases, e.g. if the function component of an application does not evaluate to a function, then the value returned by the application must be Bot (without an address, so that it can be "below" any value with an address).
  • delete Traced.RecDef?
  • delete Traced.Trie, Args and Kont
  • move join implementation out of class hierarchy to avoid problem with type signature
  • simplify EvalId and related gumpf

There is a potential issue with respect to monotonicity across computations - if the demand increases, then a previously "suspended" expression can become needed, and the relation between an expression and any of its traces would need to be an increasing one. For now the trace of an unneeded expression is bottom; we can reintroduce the embedding of expressions into traces later if needed (e.g. to visualise code given only a trace).

"Increasingness" within a computation should be unaffected.

Generate suitably sized axes automatically

  • bar graph logic to translate final composite to accommodate negative offsets for axes
  • ceiling primitive
  • manually write axis code for line plot
  • compute length (rounded up) of axis, divide by interval to get number of ticks

Pull graphics and utility code from bar-chart into prelude

Haven't got any kind of module mechanism but should at least be able to put some core prelude definitions into a source file which is automatically loaded for every program.

  • auto-import prelude code from lib/prelude
  • auto-import graphics code from lib/graphics
  • helper for navigating to named recursive function definition
  • consolidate all test cases:
    • arithmetic
    • bar-chart
    • compose
    • factorial
    • filter
    • foldr_sumSquares
    • length
    • lexicalScoping
    • lookup
    • map
    • mergeSort
    • normalise
    • reverse
    • zipW

Forward slicing/differential evaluation

A basic observation that started with our CONCUR 2016 paper and influenced the approach in our ICFP 2017 paper materialises again in this setting. Rather than starting with a meet-preserving "hole-propagating" evaluation semantics which doesn't require a trace, and then concluding that it has a lower adjoint, instead start with the "obvious" definition of the lower adjoint (which does require a trace), and conclude that it has an upper adjoint. That upper adjoint is a "forward dataflow" that pushes holes through the structure of an existing computation.

This way of setting up slicing is preferable whenever (as is turning out to be the majority of the time) hole-propagating forward evaluation doesn't have enough information to implement the upper adjoint. For us this arises in two situations:

  1. Evaluation requires an output environment containing values bound during pattern-matching of the computed value; the "shape" of this environment isn't known statically, because the demand partitions the type and assigns a different pattern context to each partition. When forward slicing, we need to produce an environment slice of the correct shape; a universal "bottom" environment would be too destructive.

  2. When computing the positions of graphical elements (e.g. bars in a bar chart), things like the width of the bars to the left are "dependees"; a naive hole-propagating semantics will be unable to render most of an image once holes appear. What we want instead is to preserve the structure of the image, but use the slicing to highlight parts of the image. (Less sure about this point.)
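
A sketch of the adjoint pair as an interface, with the defining property as a testable check; the types are illustrative only. Here bwd is the trace-requiring lower adjoint and fwd the derived upper adjoint.

interface Lattice<A> {
   leq (a1: A, a2: A): boolean
}

interface GaloisConnection<In, Out> {
   bwd (out: Out): In    // lower adjoint: least input slice needed for a given output demand
   fwd (inn: In): Out    // upper adjoint: greatest output determined by a given input slice
}

// Defining property, usable as a test oracle:
//    bwd(out) ≤ in   ⇔   out ≤ fwd(in)
function checkAdjoint<In, Out> (
   gc: GaloisConnection<In, Out>,
   lin: Lattice<In>,
   lout: Lattice<Out>,
   inn: In,
   out: Out
): boolean {
   return lin.leq(gc.bwd(out), inn) === lout.leq(out, gc.fwd(inn))
}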

  • logic to delete one of the elements of data in the bar-chart example
  • test: differentially execute same program with two different demands
  • test: differentially execute program and slice of that program
  • forward slicing
    • Bot forms for Expr and Trace; we need these so that holes still have "addresses", and in particular sliced environments can work with instantiate
    • fix merge semantics to accommodate Bot
    • fix assignment semantics to allow for reclassification
    • Bot environments: build relative to the "complete" (unsliced) world
      • versioned Result to allow access to prior environment - requires explicit EvalKey because same computation with different demands produces output environments with different shapes
      • drop constructor-based object keys, since objects can change class between versions
      • support reclassification
    • Bot continuations: in particular what does this mean when the continuation is a trie?
    • what to do when demand "exceeds" availability: when the computation is bot but the demand is not?
    • problem with instantiate using earlier version of expression (see #33)
    • slicing causes node identities to change
  • some basic visualisation of "image slice" - same principle of monotonicity
    • populateScene should render null elements in a particular way
    • newPathGeometry should do the same with null points
  • "per-property" differencing - for any slot of a versioned object, describe how it differs from a previous version of that object
    • given that versioned objects can change class, does it make sense for getProp to have a static type argument?
  • values falsely appearing to change because of call-by-need
  • revisit copy-on-write behaviour of __commit

"Native values" prototype

Adopt a "native" (compiled) values approach similar to lambdacalc-old, simplifying where possible, enhancing where necessary.

Prototype the following, omitting tracing, annotations and execution indexing:

  • retain syntax of expressions as is
  • idiom to allow object to be "unexplained" at metalevel 1 but explained at metalevel 0
  • way of omitting metadata fields from keyof T
  • dynamic way of omitting metadata fields as well (annoying that these can't be the same)
  • new app entry point that tries out new interpreter on trivial example
  • compile constructor expressions to object creation expressions
  • denotational approach: interpret expression as function from environments to values (see the sketch after this list)
  • compile functions (tries to expression) to trie-like "function objects"
  • intern objects
  • intern "function objects"
  • consolidate datatype reification
  • migrate parser to new syntax classes
  • environments as prototype chains?
  • reinstate Lex.Ctr, Lex.Var
  • restore tests to evaluation level:
    • arithmetic
    • bar-chart
    • compose
    • factorial
    • filter
    • foldr_sumSquares
    • length
    • lexicalScoping
    • lookup
    • map
    • mergeSort
    • normalise
    • pattern-match
    • reverse
    • zipW
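
As promised above, a sketch of the denotational approach on a tiny fragment, using hypothetical Expr and Value types: compile an expression once into a function from environments to values.

type Value = number | ((v: Value) => Value)
type Env = Map<string, Value>

type Expr =
   | { kind: "Num", n: number }
   | { kind: "Var", x: string }
   | { kind: "Fun", x: string, body: Expr }
   | { kind: "App", f: Expr, e: Expr }

function compile (e: Expr): (ρ: Env) => Value {
   switch (e.kind) {
      case "Num": return _ => e.n
      case "Var": return ρ => {
         const v = ρ.get(e.x)
         if (v === undefined) throw new Error(`${e.x} not found`)
         return v
      }
      case "Fun": {
         const body = compile(e.body)   // compiled once, closed over below
         return ρ => (v: Value) => body(new Map(ρ).set(e.x, v))
      }
      case "App": {
         const f = compile(e.f), arg = compile(e.e)
         return ρ => {
            const fv = f(ρ)
            if (typeof fv !== "function") throw new Error("Not a function")
            return fv(arg(ρ))
         }
      }
   }
}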

Top demands are too strict

The current system is too complicated for me to get my head around. One issue relates to the interaction between slicing and the (unsatisfying) notion of a "top" demand, which we need to force strict evaluation for images.

A "top" demand behave like a fully nested pattern: for example evaluating (with "top") a program which outputs a Rect which is a pair of points would be akin to evaluating with the following demand:

Rect(Point(x, y), Point(h, w)) -> ...

If the first point is undefined (evaluates to bottom), then the match of the subsequent point also fails because the continuation (which in this case is the demand on the second point) is undefined. Thus top demands in general "linearise" the output in a way that effectively causes cascading erasure of the output.

A top demand behaves like a fully strict client application, one which starts with a maximally deep pattern match - effectively requiring the output to have a very specific structure without any independent parts.
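
A sketch of the cascading behaviour (hypothetical types, illustration only): because the demand on each later constructor argument lives in the continuation of the earlier match, a bottom anywhere erases the whole output.

type Val = { ctr: string, args: Val[] } | null   // null = bottom (erased)

function matchTop (v: Val): Val {
   if (v === null) {
      return null   // bottom: the match itself fails
   }
   const args: Val[] = []
   for (const arg of v.args) {
      const m = matchTop(arg)
      if (m === null) {
         return null   // continuation undefined: erasure cascades
      }
      args.push(m)
   }
   return { ctr: v.ctr, args }
}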

Slicing based on lattice annotations

  • first pass over forward slicing rules
  • first pass over backward slicing rules
  • write pattern-matching and reverse pattern-matching in a relational style
  • prove pattern-matching and reverse pattern-matching form a Galois connection
  • extend slicing rules to implementation language
  • prove helpers for mutually recursive functions form a Galois injection
  • extend pattern-matching Galois connection to explained values

Build matched tries during evaluation

The behaviour of match is quite complex and essentially reconstructs information discarded during evaluation. Instead we could build these structures as we go along.

  • build full traced computation (with matched tries) using the current factored approach
  • proof that matched tries are "products" of tries and matching values
  • new evaluation rules that integrate matched tries

Notion of "top" demand

Not sure what to do here - loath to drop the whole demand-driven approach, but also conscious of the fact that implementing incremental demand-driven evaluation properly is a non-trivial task. And a naive non-incremental but exploratory discovery of the output will be insanely redundant.

Notes:

  • added notion of Top demand (trie) which treats the output as strict but is still demand-driven for sub-computations
  • wasn't able to get "match" (which turns a value and a compatible trie into a matched trie) to work with top demands - the code is too higher-order to understand easily, especially without a sane type system to help
  • however, match is essentially trivial when the trie is Top - it simply promotes the value into the corresponding isomorphic trie - so there's not much to gain by worrying about this now
  • there will be an opportunity to revisit this as part of #34 (and the factored approach really is too complex as it stands anyway)
  • test cases that use explicit demands also test match, but the default demand is now Top and those tests don't invoke match
  • no longer need the explicit demands in the bar chart example

Matched tries

More integrated view of explained values and the demands which forced them to evaluate. Precursor to implementing a UI.

  • new syntax for explanations, values
  • syntax of matched tries
  • reconcile product tries (which are "nested") with product expressions (which are "flat")
  • definition of "matching" a trie with a value
  • version of match-trie for explained values
  • version of matched-trie map for "explained" matched tries
  • "sequence" tries corresponding to n-ary products
  • implement matched tries as a separate step after evaluation
    • matchArgs problem: inj should expect a trie product, no?
    • matchArgs problem: don't understand how it should work
    • matchArgs problem: product (recursive case) looks wrong
  • reinstate type parameter for tries
    • Expr.Trie
    • Traced.Trie
    • Traced.Match
    • Instantiate
    • Primitive
    • Eval
    • Match
    • Helpers
    • replace instanceof with is pattern
  • typing rules for "explanations"
  • typing rules for matched tries should include contained explanations
  • basic validation of matched tries
