sra-siliconvalley / jalangi
Available for legacy purposes. New users please see Jalangi2: https://github.com/Samsung/jalangi2
License: Other
In the property lookup below, the object used as the key is coerced to a string (the interpreter internally calls toString), so the actual field name used is '[object Object]':
var o = {}
o[{}] = {}
The callbacks putFieldPre and putField are passed the original object {}, not the string.
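A standalone illustration of the coercion (plain node, no Jalangi):

```javascript
// Any non-string key is converted to a property key; for a plain object
// that goes through toString(), yielding '[object Object]'.
var o = {};
var key = {};
o[key] = 42;
console.log(Object.keys(o));        // [ '[object Object]' ]
console.log(o['[object Object]']);  // 42
console.log(o[{}]);                 // 42 -- a *different* object maps to
                                    // the same string key
```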
I'm getting a path deviation during replay for a simple file that just loads jQuery 2.0.2. The example is on the html-tests branch, under tests/html/jquery-2.0.2/justload. If you instrument tests/html/jquery-2.0.2/jquery-2.0.2.js, load tests/html/jquery-2.0.2/justload/index_jalangi_.html in Chrome, and then try to replay, you should see it. I tried to reproduce under node.js and jsdom, but unfortunately I didn't see the error there. Not sure how to minimize this one.
function f() {}
function testcase() {
    try {
        return true;
    } finally {
        f();
    }
}
console.log(testcase());
The expected output of this snippet is true. However, when running the instrumented version (record mode, no analysis), undefined is printed instead.
This was derived from a test262 testcase.
Record-replay fails for the following browser script:
console.log(window.location);
During replay under node, {} is printed instead of the location observed during record. The script is in tests/html/unit/window_location.js.
I get an exception after running the command python scripts/jalangi.py concolic -i 100000 tests/fail_case. The code of fail_case.js:
function foo(input) {
    if (input[2] === 'r') {
        1;
    } else {
        2;
    }
}
foo();
The exception I got:
---- Instrumenting ../tests/multiex/fail_case ----
Instrumenting ../tests/multiex/fail_case.js ...
==== Input 0 ====
---- Recording execution of tests/multiex/fail_case ----
fail_case_jalangi_.js
TypeError: Cannot read property '2' of undefined
at Object.G (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:499:33)
at foo (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/tests/multiex/fail_case_jalangi_.js:13:59)
at invokeFun (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:451:37)
at /Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:570:28
at Object.<anonymous> (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/tests/multiex/fail_case_jalangi_.js:28:57)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
---- Replaying tests/multiex/fail_case ----
./analyses/concolic/SymbolicEngine
TypeError: Cannot read property '2' of undefined
at Object.G (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:499:33)
at foo (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/tests/multiex/fail_case_jalangi_.js:13:59)
at invokeFun (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:451:37)
at /Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/src/js/analysis.js:570:28
at Object.<anonymous> (/Users/jacksongl/macos-workspace/research/jalangi/github_multiex/repository/jalangi/tests/multiex/fail_case_jalangi_.js:28:57)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
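For what it's worth, the crash itself is independent of Jalangi: the script never passes an argument to foo, so input is undefined and input[2] throws the same TypeError under plain node:

```javascript
function foo(input) {
    if (input[2] === 'r') { 1; } else { 2; }
}
try {
    foo(); // input is undefined, so input[2] throws
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```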
Right now, code in instrumentDir.js serializes ASTs to disk using JSON.stringify(). But this won't work for RegExp literals; see here. To fix, we can use a more general serialized representation than JSON, e.g., json-literal.
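The problem is easy to see in isolation: a RegExp has no enumerable own properties, so it round-trips through JSON as an empty object and the literal is lost:

```javascript
var re = /ab+c/gi;
console.log(JSON.stringify(re));           // '{}'
var back = JSON.parse(JSON.stringify(re));
console.log(back instanceof RegExp);       // false -- pattern and flags gone
```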
I checked in a test tests/unit/defineProperty.js:
var f = function () { return this; }
var x = { get: f };
function Foo() {}
Object.defineProperty(Foo.prototype, 'fizz', x);
This test fails when replay is run with the TrackAllValues analysis:
$ python scripts/jalangi.py analyze -a ./analyses/trackallvalues/TrackValuesEngine tests/unit/define_property
---- Instrumenting /Users/m.sridharan/git-repos/jalangi/tests/unit/define_property ----
Instrumenting /Users/m.sridharan/git-repos/jalangi/tests/unit/define_property.js ...
---- Recording execution of /Users/m.sridharan/git-repos/jalangi/tests/unit/define_property ----
define_property_jalangi_.js
---- Replaying /Users/m.sridharan/git-repos/jalangi/tests/unit/define_property ----
./analyses/trackallvalues/TrackValuesEngine
TypeError: Getter must be a function: function () {
jalangiLabel0:
while (true) {
try {
J$.Fe(13, arguments.callee, this);
arguments = J$.N(17, 'arguments', arguments, true);
return J$.Rt(9, J$.R(5, 'this', this, false));
} catch (J$e) {
J$.Ex(93, J$e);
} finally {
if (J$.Fr(97))
continue jalangiLabel0;
else
return J$.Ra();
}
}
}
at Function.defineProperty (native)
at Function.<anonymous> (/Users/m.sridharan/git-repos/jalangi/src/js/analysis.js:165:30)
at invokeFun (/Users/m.sridharan/git-repos/jalangi/src/js/analysis.js:429:37)
at /Users/m.sridharan/git-repos/jalangi/src/js/analysis.js:548:28
at Object.<anonymous> (/Users/m.sridharan/git-repos/jalangi/tests/unit/define_property_jalangi_.js:42:174)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
None
The issue is that the native method Object.defineProperty() gets executed during replay, since it is white-listed, but we don't unwrap the function stored in the get property of the descriptor object.
When I run the command python ./scripts/install.py, I get this error. I tried running it on Ubuntu and Win7, but it still doesn't work. The image below shows the command run on Win7 and Ubuntu. Another thing: in the folder node_modules\escodegen I only see the file escodegen.js; the file escodegen.browser.js is missing.
Please help me.
How can we use the CoverageEngine to generate the coverage report? Also, write some unit tests for it.
Running both Single2 and Multiple on the following test case generates two test cases.
function foo(input) {
    if (input[2] === 'r') {
        1;
    } else {
        2;
    }
}
foo();
The first test case is correct (it takes the else branch):
J$.setCurrentSolutionIndex([]);
J$.setCurrentSolution({"x2":"","x2__length":0});
J$.setInput("x2","");
but the second test case is wrong (it is supposed to take the then branch):
J$.setCurrentSolutionIndex([]);
J$.setCurrentSolution({"x10":"r","x10__length":1,"x10__0":114});
J$.setInput("x10","r");
Right now in our regression test suite, we do record-replay analysis of many benchmarks twice, once with the "none" replay analysis and also with the track-all-values analysis. For each analysis, we instrument and record from scratch. Instead, we should re-use the trace for both replay analyses.
A website does not work after transformation, and it turns out that this is because the wrapReadWithUndefinedCheck function adds a variable = variable operation during transformation. For example, the following statement:
postArgMessage;
will be transformed into:
J$.I(typeof postArgMessage === 'undefined' ? postArgMessage = J$.R(5, 'postArgMessage', undefined, true) : postArgMessage = J$.R(5, 'postArgMessage', postArgMessage, true));
which assigns the return value (its own value, if the analysis code does not modify it) to the variable. That can be problematic sometimes; for example, executing:
location = location;
in the browser frontend means reloading the webpage. (This bug has been fixed.)
But there are still other special global objects that should not be assigned to themselves. For example, in an HTML5 web worker (a frontend multithreading environment), executing:
self = self;
will cause a 'setting a property that has only a getter' exception.
Maybe this is not a serious bug, as web workers are not supposed to be supported by Jalangi. But there might be other special objects that trigger errors when assigned to themselves, which could potentially cause more bugs that are hard to diagnose.
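The failure mode can be reproduced with any accessor property that has only a getter (here on an ordinary object in strict mode, not an actual WorkerGlobalScope; the exact error message varies by engine):

```javascript
'use strict';
var obj = {};
Object.defineProperty(obj, 'self', {
    get: function () { return obj; } // getter only, no setter
});
try {
    obj.self = obj.self;             // what the instrumentation effectively does
} catch (e) {
    console.log(e instanceof TypeError); // true in strict mode
}
```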
JacksonGL
node src/js/commands/instrumentDir.js -si foo/bar baz/boo
This command outputs the *.ast.json files in the current working directory instead of in baz/boo.
We should use a package.json file + npm install for installing dependent node packages, rather than the current strategy used in install.py.
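A minimal sketch of what that could look like (the version numbers here are illustrative, not the actual pinned dependency list):

```json
{
  "name": "jalangi",
  "version": "0.1.0",
  "dependencies": {
    "esprima": "~1.0.0",
    "escodegen": "~1.0.0"
  }
}
```

A single npm install would then fetch everything into node_modules in one step, and re-running it is idempotent.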
As far as I can see (by judicious use of console.log), the functionExit callback is called before the return_ callback. Intuitively this seems backwards, as functionExit sounds like the last thing you should hear from a function. It might make more sense to have just one callback giving both the return value and the IID.
With only a few moves, annex generates a very large trace file (>300k lines).
We should think about preserving the ReferenceError that is thrown when reading an undefined variable. Right now, under Jalangi instrumentation, the error is no longer thrown. See the existing unit test tests/unit/reference_error.js. We should also handle such errors thrown by a call to eval, e.g.:
var str = "{x: y}";
var indirect = eval;
indirect(str);
(The above is checked in as tests/unit/eval_undefined_var.js.) Not preserving ReferenceErrors is causing a jQuery unit test to fail when jQuery is Jalangi-instrumented.
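The behavior to preserve, demonstrated in plain node:

```javascript
try {
    undeclaredVariable;                // read of an undefined variable
} catch (e) {
    console.log(e instanceof ReferenceError); // true
}
var indirect = eval;
try {
    indirect("{x: y}");                // a block with label x; y is undefined
} catch (e) {
    console.log(e instanceof ReferenceError); // true
}
```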
Something is wrong with the conditional API: I get an exception after running the instrumented tests/octane/raytrace.js with the following analysis code (Dummy.js):
J$.analysis = {};
((function (sandbox) {
    function Dummy() {
        // during a conditional expression evaluation,
        // result_c is the evaluation result and should be returned
        this.conditional = function (iid, left, result_c) {
            return result_c;
        };
    }
    if (sandbox.Constants.isBrowser) {
        sandbox.analysis = new Dummy();
        window.addEventListener('keydown', function (e) {
            // keyboard shortcut is Alt-Shift-T for now
            if (e.altKey && e.shiftKey && e.keyCode === 84) {
                sandbox.analysis.endExecution();
            }
        });
    } else {
        module.exports = Dummy;
    }
})(typeof J$ === 'undefined' ? (J$ = {}) : J$));
Proposed fix in analysis.js (see the !!! comment below):
function C(iid, left) {
    var left_c, ret;
    executionIndex.executionIndexInc(iid);
    if (sandbox.analysis && sandbox.analysis.conditionalPre) {
        sandbox.analysis.conditionalPre(iid, left);
    }
    left_c = getConcrete(left);
    ret = !!left_c;
    if (sandbox.analysis && sandbox.analysis.conditional) {
        // !!!!!!!! in the following line change ret to left_c
        lastVal = sandbox.analysis.conditional(iid, left, ret);
        if (rrEngine) {
            rrEngine.RR_updateRecordedObject(lastVal);
        }
    } else {
        lastVal = left_c;
    }
    if (branchCoverageInfo) {
        branchCoverageInfo.updateBranchInfo(iid, ret);
    }
    printValueForTesting("J$.C ", iid, left_c ? 1 : 0);
    return left_c;
}
Right now, our instrumentation changes the behavior of getOwnPropertyNames, since we add a *J$* property to every object. This can change the behavior of code like:
var x = {};
console.log(Object.getOwnPropertyNames(x).length);
The above is checked in as unit test getownpropnames.js.
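Reproducing the problem directly: even a non-enumerable shadow property shows up, since getOwnPropertyNames reports all own properties (the names used below are just illustrative):

```javascript
var x = {};
console.log(Object.getOwnPropertyNames(x).length); // 0
x['*J$*'] = {};
console.log(Object.getOwnPropertyNames(x).length); // 1
// Making a property non-enumerable hides it from for-in and Object.keys,
// but not from getOwnPropertyNames:
Object.defineProperty(x, 'shadow2', { value: 1, enumerable: false });
console.log(Object.keys(x).length);                // 1
console.log(Object.getOwnPropertyNames(x).length); // 2
```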
function foo() {}
foo.call()
When replayed, this triggers one invokeFun callback for the invocation of call. However, the invocation of foo does not trigger one, as it happens as an effect of call. An analysis that tracks function calls would therefore miss that invocation (though it would still see a functionExit callback when leaving foo).
The analysis can of course handle this manually by checking whether the function being invoked is either call or apply and then acting accordingly. However, it seems cleaner to me if this happens at the Jalangi level instead of in every analysis that does something with function calls.
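The manual workaround could be factored roughly like this (a sketch; the real invokeFun callback has a different, larger signature, and the helper name is invented):

```javascript
// Given the function object f that invokeFun reported, the receiver base,
// and the argument list, recover the *actual* callee and arguments when
// f is Function.prototype.call or Function.prototype.apply.
function classifyInvocation(f, base, args) {
    if (f === Function.prototype.call || f === Function.prototype.apply) {
        return {
            callee: base, // the function call/apply was invoked on
            args: f === Function.prototype.apply
                ? (args[1] || [])
                : Array.prototype.slice.call(args, 1)
        };
    }
    return { callee: f, args: args };
}
```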
We need to handle sites that use Mootools. This doesn't work currently because Mootools overwrites many built-in JS functions. E.g., this game doesn't work:
http://www.lbnstudio.fr/labs/tetris/test/uTetris/
We need to grab a copy of various built-in functions and then invoke through those pointers. Here's what was done in a previous project:
https://github.com/ecspat/eavesdropper/blob/master/util.js
Not sure if this will be sufficient, but it's probably a good starting point.
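The basic pattern is simple (a sketch; the eavesdropper util linked above is far more thorough):

```javascript
// Capture built-ins once, before any page script runs:
var builtin = {
    slice: Array.prototype.slice,
    push: Array.prototype.push,
    defineProperty: Object.defineProperty
};
// Later, even if a library like MooTools overwrites Array.prototype.slice,
// invoking through the saved pointer still works:
Array.prototype.slice = function () { throw new Error('monkey-patched!'); };
var copy = builtin.slice.call([1, 2, 3], 1); // [2, 3]
```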
Consider the following code:
var j = "0";
var k = j++;
console.log(j);
console.log(k);
var l = "1";
var m = ++l;
console.log(l);
console.log(m);
When run under node, the output is:
1
0
2
2
But after normalization, we get:
01
0
11
11
Gotta love JavaScript.
Record/replay fails for the following code, extracted from the TypeScript benchmark:
Date.prototype;
var start = +new Date();
It would be nice to have a mode in which Jalangi does various sanity checks on the result of a client analysis. For example, for the getField() callback, we could check that the analysis only returns undefined if the actual value in the field is undefined. A failed sanity check doesn't necessarily indicate an analysis bug, but it would be a strong indicator. We could implement sanity mode as a wrapper analysis that delegates to the client analysis and adds the checks.
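Such a wrapper could look like this for the getField() check (an illustrative sketch; the real Jalangi callbacks have more parameters, and the warning policy is up for debate):

```javascript
// Hypothetical sanity-checking wrapper: delegates to the client analysis
// and flags suspicious return values.
function SanityWrapper(client) {
    this.getField = function (iid, base, offset, val) {
        var ret = client.getField ? client.getField(iid, base, offset, val) : val;
        // the analysis may only return undefined if the real value is undefined
        if (ret === undefined && val !== undefined) {
            console.warn('sanity check failed for getField at iid ' + iid);
        }
        return ret;
    };
}
```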
function __func(arguments) {
    arguments;
}
console.log(__func());
The expected output is simply "undefined", but the instrumented version of the above crashes with the following exception when run:
[TypeError: Cannot read property 'callee' of undefined]
TypeError: Cannot read property 'callee' of undefined
at __func (/home/simonhj/src/jalangi/scratch_jalangi_.js:17:51)
at invokeFun (/home/simonhj/src/jalangi/src/js/analysis.js:499:33)
at /home/simonhj/src/jalangi/src/js/analysis.js:590:24
at Object.<anonymous> (/home/simonhj/src/jalangi/scratch_jalangi_.js:34:224)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
The culprit is the following line in the instrumented version:
J$.Fe(9, arguments.callee, this);
A quick Google search revealed no other robust way to get the currently executing function, and we can probably ignore this issue as a corner case for now.
Some analyses need to know when execution of one statement ends and another begins, but the current set of analysis callbacks does not provide this information. Would it be possible to enhance Jalangi to also provide a statementEnd callback that gets triggered as a marker at statement boundaries? This added information could be optional, controlled by a switch to the instrumenter.
function foo() {
    return {}
}
foo()
In the instrumented version of this snippet, J$.Fr is called with the IID 45, which does not appear in the jalangi_sourcemap.js file. This IID then gets passed into client analyses via the functionExit callback.
The functionEnter callback receives a reference to the concrete function (not the whole ConcolicValue), so it does not have access to any potential symbolic information attached.
We need to model the implicit type conversion to Number performed by the ++ and -- operators. Here is a test that currently fails with Jalangi instrumentation:
var j = "0";
var k = j++;
console.log(j);
console.log(k);
var l = "1";
var m = ++l;
console.log(l);
console.log(m);
Without instrumentation, the above program prints:
1
0
2
2
With instrumentation and in record mode, the program prints:
01
0
11
11
I captured the failing test in tests/unit/type_conversion.js on master. This is the root cause of why some jQuery unit tests fail under instrumentation.
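The conversion the instrumentation needs to model: postfix ++ first applies ToNumber to the old value, and that number (not the original string) is both the expression's value and the base for the increment. Roughly:

```javascript
var j = "0";
// j++ behaves like:
var oldVal = Number(j);  // ToNumber("0") -> 0
j = oldVal + 1;          // j becomes the number 1, not the string "01"
var k = oldVal;          // the expression's value is the converted number
console.log(j, k);       // 1 0
// Sanity check against the real operator:
var l = "0", m = l++;
console.log(l, m);       // 1 0
```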
var x = new Array();
function f() {
    x.push({p: "foo"});
    console.log(x);
}
f();
Running normally under node this outputs:
[ { p: 'foo' } ]
However, when replayed with the NOP analysis, the following is printed:
---- Instrumenting /home/simonhj/src/jalangi/scratch ----
Instrumenting /home/simonhj/src/jalangi/scratch.js ...
---- Recording execution of /home/simonhj/src/jalangi/scratch ----
scratch_jalangi_.js
---- Replaying /home/simonhj/src/jalangi/scratch ----
[]
None
It appears that the element is not in the array.
Consider the following code:
var x = {
    foo: function () {
        fizz();
    }
};
The top-level expressions here are both the object literal expression assigned to x and the call to fizz() inside the function assigned to the foo property. Currently, however, Jalangi only reports the outer object literal expression as top-level. I think we need some extra logic for nested functions.
I've added a test in node_test/topLevelExprTests.js for this. To run, uncomment lines 69--71, and then run ./node_modules/.bin/mocha --reporter spec node_test/topLevelExprTests.js.
We don't seem to handle the following test correctly:
function f() { return undefined; }
function testcase() {
    try {
        return true;
    } finally {
        f();
    }
}
console.log(testcase());
Replay prints undefined instead of true. Test added as tests/unit/call_in_finally_2.js.
When instrumenting eval'd code, Jalangi generates instrumented code with IIDs that can conflict with the surrounding instrumented script. As a simple example:
var f = function foo() {
return eval("3+4+5+6+7");
}
console.log(f());
When the above code is instrumented, IIDs 13, 17, and 21 are used for certain AST nodes. If I print the result of instrumenting the eval'd code during the record phase, I get:
J$.B(18, '+', J$.B(14, '+', J$.B(10, '+', J$.B(6, '+', J$.T(5, 3, 22), J$.T(9, 4, 22)), J$.T(13, 5, 22)), J$.T(17, 6, 22)), J$.T(21, 7, 22));
As you can see, the IIDs appear again. This can cause problems for analyses that rely on IIDs to be unique.
It seems to me that a simple solution would be to read in the generated sourcemap from instrumentation at record / replay time, and start out the IIDs for any eval'd code at a higher number than the highest IID in the sourcemap. @ksen007 what do you think?
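The suggested fix could be sketched like this, assuming the sourcemap loads as an object keyed by IID (the actual sourcemap format and function name are assumptions here):

```javascript
// Hypothetical sketch: find the first IID past the static range, so the
// instrumenter invoked for eval'd code can start numbering there.
function firstFreeIID(sourcemap) {
    var max = 0;
    for (var key in sourcemap) {
        var iid = parseInt(key, 10);
        if (!isNaN(iid) && iid > max) max = iid;
    }
    return max + 1;
}
```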
I don't see a need for install.py to blow away all the files you already have each time it runs; we should make it only install the things you don't have. This is helpful, e.g., in cases where the install fails half-way through. Now, if you re-run, it starts from scratch.
Right now, instrumenting code is a bit slow. I've put the v8 profiling output for instrumenting pdf.js here. One concerning thing is that ~20% of time is spent in GC; would have to dig in further to see why.
Otherwise, I don't see any quick fixes. The major change I can think of would be to not transform the AST and use escodegen, but instead generate the instrumented code string directly during the AST pass. Not sure how nightmare-ish this would be, though.
Other thoughts welcome :-)
Can we remove the large video files under the paper directory and keep them somewhere else? They probably don't belong in version control. Removing them won't speed up a git clone without more drastic action (since they'll still be in the history), but pulling the tarball of the latest Jalangi version will be much faster. @ksen007, maybe we can make a separate repository for the files in the paper directory?
I'm trying to run the annex test under PhantomJS. If we could get this working, we could do things like run web tests in a regression suite or automate generation of certain types of traces. The first issue I ran into is that the current PhantomJS websocket support doesn't seem to be up to snuff. So, I hacked up a quick change to keep the trace in memory during record; see the in-memory-trace branch:
https://github.com/SRA-SiliconValley/jalangi/tree/in-memory-trace
I still get some errors on this branch when loading in PhantomJS, though. Steps to reproduce:
var page = require('webpage').create(),
    system = require('system'),
    address;
if (system.args.length === 1) {
    console.log('Usage: loadspeed.js <some URL>');
    phantom.exit();
}
address = system.args[1];
page.onConsoleMessage = function (msg) {
    console.log('Got message ' + msg);
};
page.open(address, function (status) {
    console.log("loaded");
    page.evaluate(function () {
        console.log("hello");
        console.log(J$.trace_output);
    });
});
1. Check out the in-memory-trace jalangi branch. In analysis.js, line 119, set IN_MEMORY_BROWSER_LOG to inBrowser.
2. Start the Jalangi server (python scripts/jalangi.py server), and instrument annex as shown in README.md. No need to start the websocket server, though.
3. Save the script above as load.js and run it (assuming phantomjs is in your path): phantomjs load.js http://127.0.0.1:8000/tests/tizen/annex/index_jalangi_.html
When I run, I get the following output:
TypeError: 'undefined' is not an object (evaluating 'screen.orientation.indexOf')
http://127.0.0.1:8000/tests/tizen/annex/index_jalangi_.html:24
Got message TypeError: Attempting to change writable attribute of unconfigurable property.
Got message TypeError: Attempting to change writable attribute of unconfigurable property.
at printableValue (http://127.0.0.1:8000/src/js/analysis.js:1028)
at http://127.0.0.1:8000/src/js/analysis.js:1410
at http://127.0.0.1:8000/src/js/analysis.js:1236
at G (http://127.0.0.1:8000/src/js/analysis.js:541)
at http://127.0.0.1:8000/tests/tizen/annex/lib/jquery-1.6.2.min_jalangi_.js:1222
at invokeFun (http://127.0.0.1:8000/src/js/analysis.js:494)
at http://127.0.0.1:8000/src/js/analysis.js:585
at http://127.0.0.1:8000/tests/tizen/annex/lib/jquery-1.6.2.min_jalangi_.js:13591
Got message TypeError: 'undefined' is not an object (evaluating 'g.apply')
Got message TypeError: 'undefined' is not an object (evaluating 'g.apply')
at invokeFun (http://127.0.0.1:8000/src/js/analysis.js:494)
at http://127.0.0.1:8000/src/js/analysis.js:585
at http://127.0.0.1:8000/tests/tizen/annex/js/annex_jalangi_.js:1462
loaded
Got message hello
Got message [3,"tests/tizen/annex/lib/jquery-1.6.2.min_jalangi_.js",88149,0,6]
,[4,1,88141,1,17]
,[4,3,17473,3,8]
,[4,5,17481,5,8]
,[3,"tests/tizen/annex/js/annex_jalangi_.js",11097,7,6]
,[4,3,9889,10,17]
The trace is printing, which is good, but there are some other JS errors I can't fully grok. @ksen007, any idea what could be going wrong? PhantomJS is based on Webkit, but not the absolute latest version. This is not super urgent, but if we could get this working sometime, that would be great.
Right now, if the client analysis throws an exception, it's rather unpredictable as to how exactly the program will fail. In fact, it seems quite possible that an exception thrown by an analysis could be caught by the analyzed program, which seems strange. We should aim to wrap calls into the analysis in try-catch blocks, such that we print a proper stack trace (at least when running under node.js) and exit relatively cleanly.
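The wrapping could look like this at each call site into the analysis (a sketch only; the function name is invented, and the fail-fast policy is one option among several):

```javascript
// Hypothetical helper: route every call into the client analysis through
// this wrapper so its exceptions cannot be caught by the analyzed program.
function safeInvoke(analysis, name, args) {
    var cb = analysis && analysis[name];
    if (!cb) return undefined;
    try {
        return cb.apply(analysis, args);
    } catch (e) {
        // print a proper stack trace and exit cleanly under node.js,
        // instead of letting the error leak into the analyzed program
        console.error('analysis callback ' + name + ' threw:');
        console.error(e.stack || String(e));
        if (typeof process !== 'undefined') process.exit(1);
    }
}
```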
Is there a way to run the taint analysis engine on live websites? Or should I download the website first and then run the taint analysis on it?
I know there is an example for the annex example, but what about other sites like www.yahoo.com?
I've been looking into #6, in particular being able to record and replay just loading MooTools. It doesn't work now, and I think it's due to monkey patching. I've checked in a test tests/html/unit/mootools_reduced_1.js. If you run the following command you can see the replay failure:
python scripts/jalangi.py testrr_browser tests/html/unit/mootools_reduced_1
The code is a bit complex, but it looks to me like Array.prototype.slice is being monkey-patched. I tried quickly fixing this in analysis.js but failed.
It'd be good to be able to separately instrument source files, so that if one file changes, we don't need to re-instrument everything else. The key issue is maintaining unique IIDs for client analyses. There are a couple possible approaches:
The tricky thing with option 2 is figuring out what application script is currently executing. I think one could do it by creating an Error object and parsing the stack trace, but it could affect performance (I think we'd need to do it at least at every function entry, to detect when the invoked function is in a different script.) The downside of option 1 is that it could bloat the instrumented code size.
Our best thought so far is a hybrid approach:
1. Have the instrumenter insert a setCurrentFile() callback (name appropriately shortened) at any point where the currently executing JS file may have changed. These points include script entry, function entry, and the top of a catch block (any others?).
2. Change analysis.js to keep track of the current file based on the above callbacks, and then appropriately combine the current file with the original IID into a new unique IID that gets passed to the client analysis.
Of course, we'll have to change IIDInfo.js to parse multiple sourcemap files, know about the IIDs generated by analysis.js, etc.
Given that the above change would be a moderate amount of work, I will probably put it aside until it's more urgent, unless someone has an idea of a much simpler approach to the problem; suggestions welcome.
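The IID-combining step in the hybrid approach could be as simple as packing a per-file id into the high bits (a sketch with invented names and constants, assuming per-file IIDs stay below 2^24):

```javascript
var fileIds = {};
var nextFileId = 1;
// Combine (file, local iid) into one number that is unique across files.
function globalIID(file, iid) {
    var fid = fileIds[file] || (fileIds[file] = nextFileId++);
    return fid * 0x1000000 + iid; // file id in the high bits
}
```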
There is a bug in replay involving setters and a mix of instrumented and uninstrumented code. To reproduce, consider the following three files:
a.js:
exports.foo = function (x, y) {
    x.g = y;
};
b.js:
var a = require('./a');
exports.baz = {
    set p(y) {
        a.foo(this, y);
    }
};
c.js:
var b = require('./b');
var baz = b.baz;
baz.p = 7;
console.log(baz.g);
Now, say that a.js and c.js are instrumented, but not b.js. In order to make this work, I instrument a.js and c.js using esnstrument.js, and then create the following file b_mod_.js:
var a = require('./a_jalangi_');
exports.baz = {
    set p(y) {
        a.foo(this, y);
    }
};
I then hack c_jalangi_.js to pass ./b_mod_ to the require call instead of ./b. Anyway, with this setup, during replay I get a path deviation:
Error: Path deviation at record = [5,2,21,12,4] iid = 105 index = 11
at checkPath (/Users/m.sridharan/git-repos/jalangi/src/js/RecordReplayEngine.js:305:27)
at RR_L (/Users/m.sridharan/git-repos/jalangi/src/js/RecordReplayEngine.js:645:17)
at RR_R (/Users/m.sridharan/git-repos/jalangi/src/js/RecordReplayEngine.js:509:53)
at Object.R (/Users/m.sridharan/git-repos/jalangi/src/js/analysis.js:731:36)
at Object.<anonymous> (/Users/m.sridharan/test/c_jalangi_.js:10:124)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
I think somehow we're not considering the possibility that a putField can invoke a native function via a setter, which in turn invokes instrumented code. Sorry for the tricky steps to reproduce; I don't have time to make this nicer right now.
We currently get replay failures for the following test:
var x = new Date();
setTimeout(function () {
    var z = new Date() - x;
    console.log(String(z));
}, 1000);
The failure depends on whether z happens to have the same value during record and replay. I think z may differ since we are not capturing the semantics of the - operator on Date objects. The test is checked in as tests/unit/date-conversion.js.
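To model it: the - operator applies ToNumber to both operands, and for a Date that goes through valueOf(), i.e. the timestamp in milliseconds. Recording the timestamps would therefore be enough to replay the subtraction deterministically:

```javascript
var d = new Date(5000);          // a Date 5000 ms after the epoch
console.log(d - 0);              // 5000 -- ToNumber via valueOf()
console.log(+d === d.getTime()); // true -- same conversion
var later = new Date(8000);
console.log(later - d);          // 3000
```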
function G(x) {}
G.prototype.p = function f() {}
var y = new G()
y.p()
console.log(y)
Run normally, this snippet prints "{}". If a trace is recorded and replayed with the NOP analysis, "{ p: [Function: f] }" is printed at replay instead.
I haven't had time to investigate very deeply, but it appears that the p property is somehow moved from the prototype to the object itself.
As seen in #37, use of console.log can cause confusion since we don't model its conversion of its argument(s) to strings. Perhaps we should add modeling of this conversion.
I added the TypeScript compiler benchmark as tests/octane/typescript.js, but Jalangi crashes during record. Unfortunately, the benchmark has several functions that shadow the arguments array, exposing known issue #21. I've added a unit test tests/unit/shadow-arguments.js reduced from the compiler. Opening this issue in case other problems arise.
We should move the symbolic analysis code in Jalangi to a separate git repository. This will make it easier to install Jalangi for those not doing symbolic analysis, since auxiliary tools like cvc3 won't need to be installed. Plus, it will force us to clean up the design a little bit.
In the tests run by scripts/testmultiple.py, the tests/compos/arbitrary2 test is failing. See Travis for the output:
https://travis-ci.org/SRA-SiliconValley/jalangi/builds/21294192