
JavaScript SDK for Api.ai

api.ai.js makes it easy to integrate the Api.ai natural language processing service into your web application.

Prepare your agent

You can use the Api.ai developer console to create your own agent or import our example Pizza Delivery agent.

How to use the agent

Add a reference to api.ai.min.js before your main JS script in your HTML file.

<script type="text/javascript" src="js/api.ai.min.js"></script>
<script type="text/javascript" src="js/main.js"></script>

To launch your web application, deploy the HTML file and all scripts using a web server. It is important to use a web server, as some browsers don't allow access to the microphone if an HTML file is opened from the file system (e.g. a URL like "/home/johnny/demo/index.html").

Create instance

There are two ways to create an ApiAi instance. You can pass a config object that sets properties and listeners:

var config = {
    server: 'wss://api-ws.api.ai:4435/api/ws/query',
    serverVersion: '20150910', // omit 'serverVersion' to default to '20150910', or set it to null to remove it from the query
    token: access_token, // use the Client access token of your agent (see agent keys)
    sessionId: sessionId,
    onInit: function () {
        console.log("> ON INIT use config");
    }
};
var apiAi = new ApiAi(config);

Alternatively, assign properties and listeners directly:

// (apiAi is an ApiAi instance created as shown above)
apiAi.sessionId = 'YOUR_SESSION_ID';
apiAi.onInit = function () {
    console.log("> ON INIT use direct assignment property");
    apiAi.open();
};

Use microphone

To get permission to use the microphone, invoke the init() method as shown below.

apiAi.init();

After initialisation you can open the websocket.

apiAi.onInit = function () {
    console.log("> ON INIT use direct assignment property");
    apiAi.open();
};

Once the socket is open, start listening to the microphone.

apiAi.onOpen = function () {
    apiAi.startListening();
};

To stop listening, invoke the following method:

apiAi.stopListening();

Firefox users don't need to stop listening manually; end-of-speech detection will do it for them.

To get access to the result, use the onResults(data) callback.

apiAi.onResults = function (data) {
    var status = data.status;
    var code = status && status.code;
    if (!(code >= 200 && code < 300)) {
        // Not a 2xx status: show the error ('text' is a DOM element on the page).
        text.innerHTML = JSON.stringify(status);
        return;
    }
    processResult(data.result);
};

For information about Response object structure, please visit our documentation page.
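
As an illustration, a processResult handler might pull out the recognised text and the agent's reply. This is a sketch: the field names below (resolvedQuery, action, fulfillment.speech) follow the documented Response structure, but check the documentation page for the full schema before relying on them:

```javascript
// Sketch of a result handler: extract the recognised text and the agent's reply.
// Field names follow the Api.ai Response structure; adapt to your agent.
function processResult(result) {
  return {
    query: result.resolvedQuery || '',                              // what the user said
    action: result.action || '',                                    // the matched intent action
    speech: (result.fulfillment && result.fulfillment.speech) || '' // the agent's reply
  };
}
```

For example, processResult({ resolvedQuery: 'one pizza', action: 'order', fulfillment: { speech: 'Ok!' } }) returns { query: 'one pizza', action: 'order', speech: 'Ok!' }.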

API properties

/**
 * Address of the websocket server,
 * e.g. 'wss://api-ws.api.ai:4435/api/ws/query'.
 */
apiAi.server
/**
 * Client access token of your agent. 
 */
apiAi.token
/**
 * Unique session identifier to build dialogue. 
 */
apiAi.sessionId
/**
 * How often to send audio data chunks to the server.
 */
apiAi.readingInterval
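
The sessionId just needs to be unique per conversation; one possible way to generate it (a sketch, not part of the SDK):

```javascript
// Generate a random session identifier (any sufficiently unique string works).
function generateSessionId(length) {
  var chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  var id = '';
  for (var i = 0; i < (length || 32); i++) {
    id += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return id;
}

// apiAi.sessionId = generateSessionId();
```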

API methods

/**
 * Initialize audioContext
 * Set up the recorder (incl. asking permission)
 * Can be called multiple times
 */
apiAi.init();
/**
 * Check whether the recorder is initialised
 * @returns {boolean}
 */
apiAi.isInitialise();
/**
 * Send an object to the server as JSON
 * @param json - a plain JavaScript object
 */
apiAi.sendJson(jsObjectOrMap);
/**
 * Start recording and transcribing
 */
apiAi.startListening();
/**
 * Stop listening, i.e. recording and sending new input
 */
apiAi.stopListening();
/**
 * Check if websocket is open
 */
apiAi.isOpen();
/**
 * Open websocket
 */
apiAi.open();
/**
 * Cancel everything without waiting for the server
 */
apiAi.close();
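
As an illustration of sendJson, a text query payload could be assembled like this. Note this is a sketch: the field names below mirror the Api.ai HTTP query parameters and are an assumption for the websocket protocol, not its documented schema:

```javascript
// Sketch of a text-query payload for sendJson(). Field names (query, sessionId,
// lang) mirror the Api.ai HTTP API and are an assumption for the websocket API.
function buildTextQuery(text, sessionId, lang) {
  return {
    query: text,
    sessionId: sessionId,
    lang: lang || 'en'
  };
}

// apiAi.sendJson(buildTextQuery('one pizza please', apiAi.sessionId));
```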

API callbacks

/**
 * Triggered after the websocket is opened.
 */
apiAi.onOpen = function () {};
/**
 * Triggered after the websocket is closed.
 */
apiAi.onClose = function () {};
/**
 * Triggered after initialisation is finished.
 */
apiAi.onInit = function () {};
/**
 * Triggered when listening starts.
 */
apiAi.onStartListening = function () {};
/**
 * Triggered when listening stops.
 */
apiAi.onStopListening = function () {};
/**
 * Triggered when a result is received.
 */
apiAi.onResults = function (result) {};
/**
 * Triggered when an event occurs.
 */
apiAi.onEvent = function (code, data) {};
/**
 * Triggered when an error occurs.
 */
apiAi.onError = function (code, data) {};
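
Put together, these callbacks can be wired into the usual lifecycle (initialise, then open the socket, then listen). This sketch only assigns handlers; apiAi is assumed to be an ApiAi instance created as shown earlier:

```javascript
// Wire the lifecycle callbacks: once initialised, open the socket;
// once the socket is open, start listening.
function wireCallbacks(apiAi) {
  apiAi.onInit = function () { apiAi.open(); };
  apiAi.onOpen = function () { apiAi.startListening(); };
  apiAi.onClose = function () { console.log('websocket closed'); };
  apiAi.onError = function (code, data) { console.error('error', code, data); };
  return apiAi;
}

// wireCallbacks(apiAi);
// apiAi.init();
```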

Restrictions

This version of the SDK relies on the new getUserMedia API, which is currently not supported by all browsers. You can find up-to-date support information here
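
Before calling init(), you can feature-detect microphone support yourself; a minimal sketch (the vendor-prefixed fallbacks cover older browsers):

```javascript
// Return true when some form of getUserMedia is available.
function microphoneSupported() {
  if (typeof navigator === 'undefined') return false; // not a browser environment
  return !!((navigator.mediaDevices && navigator.mediaDevices.getUserMedia) ||
            navigator.getUserMedia ||
            navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia);
}

// if (microphoneSupported()) { apiAi.init(); }
```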

