Sprocket

This repository contains all things related to Sprocket. Sprocket is a precompiler that takes a SpartanRPC source file and compiles it to pure nesC for processing by the normal nesC compiler. SpartanRPC and Sprocket are research projects at the University of Vermont.

Prerequisites

  • Java Development Kit (I used v1.8u0)
  • Scala (I used v2.11.0)
  • IntelliJ IDEA (I used v13.1.2)
  • LaTeX (required for building the documentation)

Development is done on Linux but it is likely that most things will work on Windows or any other platform that supports Scala and IntelliJ. Some tweaking may be required on non-Linux platforms.

Building Sprocket

Sprocket is written in Scala and built with IntelliJ IDEA. First install Java, Scala, and IntelliJ (Community Edition is fine). Once you have IntelliJ installed, use its plugin manager to download and install the Scala plugin for IntelliJ. Note that Sprocket uses several third-party libraries, but they are provided as part of this repository. Once these preliminaries have been handled, you should be able to build Sprocket by simply opening the project file (Sprocket.iml) and executing the appropriate commands from within IntelliJ.

Other Resources

The RTS directory contains Sprocket's runtime support files. These files must be made available to Sprocket in order for it to compile SpartanRPC programs. The easiest way to do this is to point the TemplateFolder configuration parameter at this location using Sprocket's configuration file. See the Reference Manual in the doc folder for more information about setting up the configuration file and about how to use Sprocket in general.

The Samples directory contains various examples of SpartanRPC programs. See the README files associated with each example for more information.

nesC Grammar

The source code for the nesC parser is in the src folder in the package edu.uvm.nesc. The bulk of the code is an ANTLR v3 grammar in src/edu/uvm/nesc/nesC.g. This grammar produces the three files nesC.tokens, nesCLexer.java, and nesCParser.java. These files must be generated before the main program can be built. The current versions of these files are committed to the repository so they only need to be regenerated if the grammar is changed. Use the shell script build-parser (or the corresponding batch file) to generate the parser.

The main program in the nesc package is just a simple wrapper around the parser. It passes the file named on the command line to the parser and then outputs the abstract syntax tree that was created (using ANTLR's AST notation). This program does not preprocess the input (that must be done separately if it is desired).

Additional Thoughts

This is a work in progress and so is likely to be buggy and incomplete. Issues related to this grammar are on the GitHub issue tracker along with issues related to Sprocket in general. Here are a few additional thoughts.

  • The grammar contains some extensions to standard nesC related to SpartanRPC. These extensions are intended to be upwardly compatible with nesC; even with the extensions enabled, regular nesC programs should be parsed properly.

  • The AST generation may not cover the entire language. Also, in some cases spurious tokens that should really be suppressed may appear in the AST. These quirks will eventually be removed. The structure of the ASTs may also change.

  • I may eventually provide a tree grammar to convert the ASTs into real parse trees. I'm not sure whether I absolutely need that for my own work right now, but in general it would be desirable to have this ability.

Issues

Governing key assumed to be node's default entity

The implementation assumes that the governing key for any authorization decision is the same as the node's default entity. So, for example, it is not currently possible for a node owned by B to require a policy like A.r <- E for access to a duty. The software behaves as if this policy were B.r <- E instead. This is true even if the programmer specifies A.r <- E (there is no error or warning message).

Preprocessing and parsing need to be directed by program structure

To be faithful to the specification of nesC, the parser should process interface and component definitions as they are encountered. This means that interfaces should be processed when they are seen in uses/provides declarations and components should be processed as they are mentioned in configuration definitions.

The parser should start with the application's top level configuration and recursively descend the program's high level structure, locating and processing source files on demand. This is in contrast to what it currently does, which is simply to process each source file independently in an arbitrary order. The structured approach is necessary so that any global types and macros defined by an interface or component can be registered in the parser's symbol table before it processes the rest of the "calling" component. Otherwise programmers will need to create and manually include header files they should not be required to use.

One implication of this is that the parser should interleave preprocessing with parsing, in some cases saving preprocessor state from one file to the next. The following example illustrates a program that is acceptable to the nesC compiler but that requires the kind of processing described here. It consists of several files that depend on each other.

Here is I.nc:

typedef int counter_t;
#define MAX_COUNT 128

interface I {
    command void setCounter(counter_t value);
}

Here is AppC.nc:

configuration AppC { }
implementation {
    components AC, BC;
    AC.I -> BC;
}

Here is AC.nc:

typedef int acounter_t;        
module AC {
    uses interface I;
}
implementation {
    void f( )
    {
        // The following line is not an error. The type counter_t is known from interface I.
        counter_t currentCount = 0;
        call I.setCounter(currentCount);
    }
}

Here is BC.nc:

module BC {
    provides interface I;
}
implementation {
    // The following is not an error. The type counter_t is from interface I.
    command void I.setCounter(counter_t value)
    {
        // The following is not an error. The type acounter_t is from module AC.
        // Module AC was processed before this module when AppC was processed.
        acounter_t count;

        // The following is not an error. The macro MAX_COUNT is from interface I.
        if( value < MAX_COUNT ) {
            // etc...
        }
    }
}

In the example above, the parser should process AppC.nc first and then process AC.nc and BC.nc only when the need to do so is discovered (due to the components line in AppC.nc). Furthermore, global types introduced in I.nc need to be made known to AC.nc and BC.nc simply by virtue of those files mentioning interface I in their uses/provides specifications. Note that preprocessor macros are handled in a similar way. Note also that global types defined in AC.nc are visible in BC.nc because AC.nc is processed before BC.nc is read.

Authorization error for multi-entity clients

Suppose a client can post duties as either entity A or entity B. Suppose also that the client tries to invoke a service over a dynamic wire "as A." If that invocation succeeds a session key is created between the client and server nodes. Now suppose the client later tries to invoke the same service over a dynamic wire "as B." The session key created previously will be used even if the B entity has no access to the service.

This is not a security problem from the server's point of view because the client node holds private keys for both A and B (thus the client node is a member of the "A" domain). However, it does mean a client that is purposely selecting, for example, a potentially weaker entity for a particular invocation might find that invocation succeeding when it should have failed.
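
One possible fix, sketched here in C under assumed types, is to key the session key cache by the pair (local entity, server node) rather than by server alone, so that a key negotiated "as A" is never silently reused for an invocation "as B":

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t  entity;        /* local entity the key was negotiated as */
    uint16_t server_node;
    uint8_t  key[16];
    bool     in_use;
} entity_session_t;

/* Return the cached key for this (entity, server) pair, or NULL to
   trigger a fresh negotiation "as" the requested entity. */
static const uint8_t *lookup_key(uint8_t entity, uint16_t server,
                                 const entity_session_t *tbl, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        if (tbl[i].in_use && tbl[i].entity == entity
                          && tbl[i].server_node == server)
            return tbl[i].key;
    }
    return 0;
}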

Sprocket requires specific name for top level configuration

Currently Sprocket requires that the top level configuration be contained in a file named AppC.nc. This is because the Makefile writer in Main.scala assumes this name. This restriction should be removed. Doing this will require telling Sprocket which file contains the top level configuration, perhaps using a command line argument. This will be necessary anyway if Sprocket is changed to process the program in structured order.

Periodically received bad certificate causes repeated verification failures

Suppose node A has a bad certificate that it broadcasts periodically. Suppose a neighboring node B receives the certificate without error, so the checksum is correct. Node B will attempt to verify the signature on the certificate, but that verification will fail, so node B discards the certificate. The problem is that because node A broadcasts the certificate periodically, node B will attempt to verify it repeatedly. This causes node B to do excessive and unnecessary computation.

To deal with this, the runtime system could keep a record of the bad certificate (checksum only). Obviously the bad certificate should not be used in the construction of the local RT model, but if it is received again it can be discarded at once. Right now the credential storage area stores checksums of the valid certificates so that a retransmission of a valid certificate can be ignored. That facility should be extended so that checksums of invalid certificates are also stored.
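
A minimal sketch of such a cache, in C; the 4-byte checksum width and the small fixed-size ring buffer are both assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CHECKSUM_SIZE  4   /* assumed checksum width */
#define MAX_REJECTED   8   /* small ring buffer of bad checksums */

static uint8_t rejected[MAX_REJECTED][CHECKSUM_SIZE];
static uint8_t rejected_next;    /* next slot to overwrite */
static uint8_t rejected_count;

/* Has this checksum already failed verification? */
static bool seen_rejected(const uint8_t sum[CHECKSUM_SIZE])
{
    uint8_t i;
    for (i = 0; i < rejected_count; i++)
        if (memcmp(rejected[i], sum, CHECKSUM_SIZE) == 0)
            return true;
    return false;
}

/* Record a checksum whose certificate failed signature verification. */
static void remember_rejected(const uint8_t sum[CHECKSUM_SIZE])
{
    memcpy(rejected[rejected_next], sum, CHECKSUM_SIZE);
    rejected_next = (uint8_t)((rejected_next + 1) % MAX_REJECTED);
    if (rejected_count < MAX_REJECTED) rejected_count++;
}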

Duty parameter names share interface scope

Consider the following code:

interface I {
    duty void d1( int x );
    duty void d2( int x );
}

Because the parameters of duties are transformed into global variables inside the skeleton component, the current version of Sprocket will attempt to generate two global variables named x.

Duty parameter names should be mangled with the duty names to avoid this conflict. For example, the global variables generated for the code above might be spkt_d1_x and spkt_d2_x.
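
For illustration only, the skeleton's generated globals for the interface above might then look like this (the spkt_ prefix follows the suggestion above; it is not what Sprocket currently emits):

/* Parameter x of duty d1 and of duty d2, kept distinct by mangling. */
int spkt_d1_x;
int spkt_d2_x;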

Dynamic wires require interface name on target endpoint

Normally nesC allows the target endpoint of a wire to default to the same interface as was used on the source endpoint. For example:

configuration AppC { }
implementation {
    components A, B;
    A.I -> B;  // Same as A.I -> B.I;
}

However, when a dynamic wire is used Sprocket does not allow this abbreviation. For example:

configuration AppC { }
implementation {
    components A, RemoteSelectorC;
    A.I -> [RemoteSelectorC]; // Must use A.I -> [RemoteSelectorC].I;
}

The user would reasonably expect the abbreviated form to connect A to the I interface on all components selected by the remote selector. Currently Sprocket reports this as a rewriting error.

Implement session key stealing

Because of the way the runtime system is designed, the server can use the same session key with multiple clients from the same domain. Thus it can skip the Diffie-Hellman negotiation step when creating a session key for the second client; instead, the server can just "steal" the existing session key that was created for the first client. This optimization should be beneficial in the common case where a server is talking to multiple client nodes in the same domain.
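
A sketch of the lookup, in C; the table layout, its size, and the domain identifier are assumptions:

#include <stdbool.h>
#include <stdint.h>

#define MAX_SESSIONS 8

typedef struct {
    uint16_t client_node;
    uint8_t  domain_id;     /* security domain of the client */
    uint8_t  key[16];
    bool     in_use;
} session_entry_t;

static session_entry_t sessions[MAX_SESSIONS];

/* Return an existing key for the domain, or NULL to fall back to the
   full Diffie-Hellman negotiation. */
static const uint8_t *steal_domain_key(uint8_t domain_id)
{
    int i;
    for (i = 0; i < MAX_SESSIONS; i++)
        if (sessions[i].in_use && sessions[i].domain_id == domain_id)
            return sessions[i].key;
    return 0;
}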

Session keys should expire

Currently session keys never expire. They should. This can actually cause significant problems in real systems. Suppose a server node goes down and then comes back up. Clients that believe they already have session keys with the server will not initiate a session key negotiation with it. Thus the session keys are never rebuilt and old clients can't communicate with the server. Expiring session keys would fix this problem.

It may also be possible, and perhaps better, for a server to announce its rebirth as part of some kind of broadcast management protocol. In that case clients are given a chance to invalidate old session keys "right away" and thus speed up the network's recovery. However, session keys should still expire regardless of whether such a system is also implemented.
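
A minimal sketch of timestamp-based expiry, in C; the key structure, the one-hour lifetime, and the millisecond clock source are all assumptions:

#include <stdbool.h>
#include <stdint.h>

#define KEY_LIFETIME_MS (60UL * 60UL * 1000UL)   /* assumed: one hour */

typedef struct {
    uint8_t  key[16];
    uint32_t created_at;   /* milliseconds since boot */
    bool     in_use;
} session_key_t;

/* True when the key is absent or too old and must be renegotiated. */
static bool key_expired(const session_key_t *k, uint32_t now_ms)
{
    return !k->in_use || (now_ms - k->created_at) > KEY_LIFETIME_MS;
}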

CertificateSender component uses the same random sequence for broadcast intervals

In the Sprocket runtime system, the CertificateSender component uses a randomized broadcast interval to prevent adjacent nodes from colliding with each other indefinitely. However, the seed for the random number generator is not derived from a random source. Thus, if two adjacent nodes boot at the same moment, their certificate broadcasts will collide indefinitely anyway.
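
One possible fix, sketched in C, is to mix node-unique state into the seed; the LFSR generator, the mixing constants, and the availability of a persistent boot counter are assumptions:

#include <stdint.h>

static uint16_t lfsr_state;

/* Mix the node ID (and, if available, a boot counter) into the seed so
   that simultaneous boots still diverge; avoid a zero seed. */
static void seed_from_node(uint16_t node_id, uint16_t boot_count)
{
    lfsr_state = (uint16_t)(node_id * 31u + boot_count * 17u) | 1u;
}

/* One 16-bit Galois LFSR step; returns base_ms plus pseudo-random
   jitter. jitter_ms must be nonzero. */
static uint16_t next_interval(uint16_t base_ms, uint16_t jitter_ms)
{
    uint16_t lsb = lfsr_state & 1u;
    lfsr_state >>= 1;
    if (lsb) lfsr_state ^= 0xB400u;
    return (uint16_t)(base_ms + lfsr_state % jitter_ms);
}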

Should ECC public key compression be used?

The Sprocket runtime system should probably use compressed ECC public keys. Doing so would reduce the size of certificates (saving radio transmission energy) and save memory in the public key storage area. The cost is in the computation required to expand the keys before use (this cost should be evaluated before committing to the use of compressed keys).

This change will require adjustments to RTAdmin as well as the Sprocket runtime system.
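
For illustration, here is a C sketch of the two encodings for a 160-bit curve; the sizes are illustrative, not Sprocket's actual formats:

#include <stdint.h>

/* Uncompressed: both coordinates of the point (40 bytes here). */
typedef struct {
    uint8_t x[20];
    uint8_t y[20];
} ecc_point_full_t;

/* Compressed: x plus one byte recording the parity of y (21 bytes).
   Expansion recovers y by solving the curve equation, which costs a
   modular square root per key. */
typedef struct {
    uint8_t x[20];
    uint8_t y_parity;
} ecc_point_cmp_t;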

Create system to automatically test samples

It would be desirable to test the samples in an automatic or semi-automatic way. This will likely require the use of some kind of simulator, since automatically downloading to hardware and verifying proper operation on that hardware would be very difficult.

Unfortunately, at the time of this writing TOSSIM appears to have problems simulating the CC2420 encryption hardware. This hardware is essential to the operation of the current Sprocket runtime system, making simulation of Sprocket-generated programs difficult at best.

Dynamic wires do not handle multiple endpoints

Sprocket's implementation of dynamic wires does not handle multiple endpoints. Right now a MAC is generated for only the first endpoint in the component set. ACRT0C should be updated to create multiple MACs as needed, and ASRT0C should be updated to search the appended MACs to see if any one of them matches the expected MAC.
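
A sketch of the receiver-side search, in C; the 8-byte MAC width and the assumption that the MACs are appended contiguously after the payload are mine, not the actual message layout:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAC_SIZE 8   /* assumed MAC width */

/* Scan the n_macs MACs appended to the message for one that matches
   the MAC this node expects. */
static bool mac_matches_any(const uint8_t *macs, uint8_t n_macs,
                            const uint8_t expected[MAC_SIZE])
{
    uint8_t i;
    for (i = 0; i < n_macs; i++)
        if (memcmp(macs + (size_t)i * MAC_SIZE, expected, MAC_SIZE) == 0)
            return true;
    return false;
}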

The handling of generic component parameters needs work

The generic parameters on a component are not parsed properly. Consider, for example, code such as:

generic module queue(int n, typedef t) { ... }

Right now the syntax of the first parameter must match parameter_declaration. This is fine, but the grammar assumes that such declarations happen inside the scope of an ordinary declaration. Thus the actions for the parameter_declaration production try to access variables defined in that scope (variables inside the parser) and fail, causing exceptions.

I'm unsure how to deal with this properly but it may require creating "parallel" productions for generic component parameter declarations (I hope not).

Skeleton wiring not added to top level configuration

Currently when Sprocket generates a skeleton for a remotely provided interface it does not modify the top level configuration of the application to include the necessary skeleton components and related wirings. This is partly because Sprocket does not know which configuration file to rewrite (the top level configuration is not known to Sprocket).

Sprocket should accept programs spread over multiple folders

Currently Sprocket assumes the entire program (the part that needs to be processed by Sprocket, anyway) exists in a single folder. This is awkward, however, for the kind of applications Sprocket is intended to support. The problem is that client and server programs must share some information, in particular remote interface definitions. It would be convenient to have those definitions in a common space.

This implies that each program must be spread over at least two folders, and Sprocket should support this. Sprocket already allows (indeed expects) certain auxiliary data, such as component ID maps, interface ID maps, and key maps, to be in a shared area. It should allow source code to be similarly shared, ideally in a consistent way.

Create a time synchronization sample

I think a time synchronization protocol would be a nice demonstration of SpartanRPC. It might even be useful (necessary?) in the RT-enabled version of the system. When real certificates are used, it is desirable to check their expiration times as part of certificate validation. This means the network will need a reasonably accurate notion of time (although tight synchronization would not be necessary).

When generating stubs multiple dynamic wires cause duplicate component declarations

When generating stubs, if there is more than one dynamic wire in a configuration, Sprocket will duplicate certain "boilerplate" component declarations. This causes the nesC compilation to fail. Sprocket should review the entire configuration and make a master list of all additional components required, removing duplicates, before it starts to rewrite the configuration.

Add support for low power listening

After upgrading to TinyOS v2.1.1, the old method I was using to activate low power listening no longer worked. My first attempt at fixing it was not 100% right either (the receiving node missed about 50% of the packets). To make testing easier I removed low power listening support for now. It should be added back (correctly!) at some point.

Broadcasts do not work

Posting a duty to the broadcast address does not work as expected. This is because there isn't a single session key associated with the broadcast "endpoint."

Fundamentally the problem is that a node does not necessarily know all its neighbors and so does not know which session keys it should use (or negotiate) with them.

To fix this, some kind of "neighbor" manager component should be created. This component would maintain a list of all known neighbor nodes (perhaps by issuing periodic queries). Then unspecified broadcasts could be converted into an appropriate multicast to all known neighbors, with session keys being negotiated for each neighbor in the usual way. (Q: can the session key negotiation message itself be effectively broadcast in this case?)
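
A minimal sketch of such a neighbor table, in C; the table size and the discovery mechanism are assumptions:

#include <stdint.h>

#define MAX_NEIGHBORS 8   /* assumed table size */

static uint16_t neighbors[MAX_NEIGHBORS];
static uint8_t  neighbor_count;

/* Record a neighbor discovered by a periodic query (or by overhearing
   its traffic); duplicates are ignored. */
static void note_neighbor(uint16_t addr)
{
    uint8_t i;
    for (i = 0; i < neighbor_count; i++)
        if (neighbors[i] == addr) return;
    if (neighbor_count < MAX_NEIGHBORS)
        neighbors[neighbor_count++] = addr;
}

/* A broadcast post then becomes one post per known neighbor, each
   protected by that neighbor's session key (negotiated on demand). */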

nesC parser can't handle EOF-terminated single line comments

If the last line of the file is a single line comment such as

// This is the last line.

and if, furthermore, that line is not terminated with a normal line ending, the parser (really the lexer) produces a message about a 'mismatched character'. Currently EOF is not considered a valid way to end single line comments. The workaround is to be sure the last line in the file is terminated with a proper line ending.

Add check on message size

The system should avoid creating messages that are too large. Right now, with the SimpleMAC system, messages (header + arguments) are limited to 16 bytes. Sprocket should verify that the size of the arguments is "reasonable" and then the runtime system should check that the actual size required (which depends on the number of destination components) is in range.

If the overall message size is too large, the runtime system should truncate the list of destination components. We don't promise to post duties on remote nodes anyway, so dropping a few from the list would be acceptable.
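
A sketch of the proposed check, in C; only the 16-byte payload limit comes from the text above, while the header size and per-destination ID size are assumptions:

#include <stdint.h>

#define MAX_PAYLOAD  16   /* SimpleMAC limit: header + arguments */
#define HEADER_SIZE   4   /* assumed header size */
#define DEST_ID_SIZE  2   /* assumed bytes per destination component */

/* Return how many destination components actually fit alongside the
   header and the duty arguments; the caller truncates its list. */
static uint8_t clamp_destinations(uint8_t arg_size, uint8_t n_dests)
{
    uint8_t space, max_dests;
    if (arg_size > MAX_PAYLOAD - HEADER_SIZE)
        return 0;   /* the arguments alone overflow the message */
    space = (uint8_t)(MAX_PAYLOAD - HEADER_SIZE - arg_size);
    max_dests = (uint8_t)(space / DEST_ID_SIZE);
    return (n_dests < max_dests) ? n_dests : max_dests;
}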

Type names and structure tags not properly distinguished

Consider the following code:

typedef struct example {
    int x;
} example;

This is legal because type names and structure tags occupy separate name spaces. Thus you can declare variables in both of the following ways:

struct example object_1;
example object_2;

Currently Sprocket fails to parse the first declaration above because it thinks example is the name of a type in that position.

Fixing this would probably entail introducing a new semantic classification of identifiers ("tags") and then using semantic predicates in the grammar to be sure tags are used where appropriate (and only there). The symbol table handling would have to be extended as well, of course, to gather tag names as they are found.

The test case Module0230.nc could be easily extended to exercise this situation when a fix is made.

Type names can't be redeclared in an inner scope

Consider the following example:

typedef int counter_t;

void f( )
{
    char *counter_t;
    // ...
}

There is a syntax error reported on the declaration of counter_t in the inner scope. This is because the parser believes counter_t is a type name and thus not an ordinary identifier. Similar problems occur if one attempts to introduce an inner typedef declaration that redeclares an existing type name.

Preprocessing environment is wrong

Sprocket preprocesses each source file before parsing it. This is necessary so that type names introduced in header files can be identified. Currently Sprocket uses the system cpp program to do preprocessing. Furthermore, it creates an incomplete environment for preprocessing in terms of the -I and -D options passed to cpp. As a result the preprocessing is not platform aware, and inappropriate decisions are made in #if conditional compilation directives. The resulting translation unit isn't the same as the one actually seen by the compiler when run against the original source file.

The preprocessing environment used by Sprocket should be identical to that created by the normal TinyOS MAKERULES. It should also be platform aware in the same way.

Minimum model should include both deployed and received credentials

Right now the minimum model is computed using only credentials received from other nodes. However, in order for the node policy to be considered, the minimum model must also take into account credentials deployed with the node. In the tests I've done so far, a node received its policy from a neighboring node running the same program (and deployed with the same credentials). It obviously should not be necessary for a node to receive its policy over the air from an identical neighbor!
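
A sketch of the proposed computation, in C; the simplified credential layout and the add_credential/finish_model hooks are hypothetical, standing in for the real model builder:

#include <stddef.h>
#include <stdint.h>

/* Simplified RT0 credential; the real layout is an assumption here. */
typedef struct {
    uint8_t issuer;   /* entity defining the role  */
    uint8_t role;
    uint8_t member;   /* entity granted membership */
} credential_t;

/* Hypothetical hooks into the model builder. */
extern void add_credential(const credential_t *c);
extern void finish_model(void);

extern const credential_t deployed_creds[];
extern size_t deployed_count;
extern const credential_t received_creds[];
extern size_t received_count;

void build_minimum_model(void)
{
    size_t i;
    /* Credentials deployed with the node participate in the model... */
    for (i = 0; i < deployed_count; i++)
        add_credential(&deployed_creds[i]);
    /* ...alongside those received over the air. */
    for (i = 0; i < received_count; i++)
        add_credential(&received_creds[i]);
    finish_model();
}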

Line directives in statement position are not recognized properly

Consider the following code fragment after preprocessing:

event void SpartanBoot.booted( )
{
    bool valid;

    call Leds.led2On( );
#68 "./ProviderC.nc"
}

Since the recent grammar overhaul, Sprocket no longer accepts a line directive where a statement is expected, causing the above to produce a syntax error. The AST returned does not include any nodes "below" the point of the error, so the rewriting of the file is truncated.

This needs to work again because the preprocessor tends to replace large blocks of comments with a single line directive. Thus this situation comes up fairly often.

The problem, however, is that line directives are allowed to have trailing integers. The current grammar finds this ambiguous with a following expression statement consisting of just one integer literal. Thus

#68 "./ProviderC.nc" 1
/* null statement */ ;

could be

#68 "./ProviderC.nc"
1;

Fixing this will probably require handling line directives more intelligently and may require special work in the lexer (not sure).

RTAdmin should allow the import of public keys and foreign certificates

The RTAdmin tool believes it controls the private key of every entity it knows about. It also assumes that it can sign every credential it has (meaning that every credential is issued by an entity for which it has the private key). In general these assumptions are false. The tool should be able to import public keys for "external" entities and foreign certificates signed by such entities. The tool should allow its user to create credentials using external entities and to dump foreign certificates into a node's runtime system.

Probably some sort of key/certificate transfer format needs to be defined. Imagine the case where an RTAdmin user downloads RT certificates from another security domain's web site.
