ahmelsayed / azurefunctions
License: Apache License 2.0
Flow:
Could be as simple as having a batch file that contains just:
echo Hello Functions!
You get an HTTP GET endpoint, you hit it and the response is just that text.
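On disk, that could be a tiny function directory: the batch file above plus a function.json declaring the HTTP trigger. A sketch, assuming a binding schema along these lines:

```json
{
  "bindings": [
    { "type": "httpTrigger", "direction": "in", "methods": [ "get" ] },
    { "type": "http", "direction": "out" }
  ]
}
```

with a run.bat next to it containing only the `echo Hello Functions!` line.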
Most of the pieces are already there (e.g. there is already a FilePublish build profile), and we have build tasks that already zip another package. We just need to add a task that builds the file publish profile and zips the output, creating a versioned package.
Designer needs to support selecting storage objects for input trigger, other input params, and output.
Initially, we'll assume the storage account is always the main one we created.
The UI to select a queue can be a drop-down that lists all the queues in the account (minus the system queues). We could also offer an option to create a new queue.
Similar deal for tables.
The ScriptHost uses file watching to detect when host.json/function.json files are changed/added/removed. When this happens it restarts the host, which will then reload all functions.
We need to make sure we handle partial-function / in-progress-edit scenarios correctly, e.g. when the function.json file exists but there is no script file yet. Basically, we need to handle incomplete function directories gracefully.
Use just 'Functions' instead of 'AzureFunctions' for the prefix
Ensure that the ScriptHost is plugging into the host environment properly so that we shut down correctly. We need to implement IRegisteredObject and register ourselves (similar to the way we do in Kudu). This includes ensuring that all global exception paths result in graceful shutdown (to ensure Singleton locks etc. are released).
Also important to ensure we're managing the long running background worker (ScriptHost) lifetime properly. This will need to be kept alive reliably, and shut down cleanly.
For non Node.js scripts, the current model for passing data into the script process is temp files created in a per function instance temp folder. On the happy path, the function executor deletes these when it is finished with them, however in error cases some files might be left behind.
So we need a periodic cleanup task to run, perhaps on startup.
See Bilal's request below. Since Functions also go through the SDK execution pipeline, if we build this into the SDK core we'll get it for both. There is already an "MDS Bridge" in place (used by various site extensions, e.g. Zumo) that allows them to emit traces that make it to MDS. We should be able to build on that.
One thing that I wanted to make sure was on the "must have" list for Azure functions.
We need to emit efficient metrics (efficient here means a system we can live with operationally in Azure, meaning using ETW/MDS with aggregates perhaps) to let us know how many function invocations have happened per customer.
Actually this is something I would love to have for the SDK as is. Just as we measure our hit count, we must start measuring the number of trigger invocations that happen with the SDK. Once this data is in place, Nitasha will report on it like any other metric for our service.
Thanks
Bilal
We've optimized things as much as we can with the current WebJobs SDK capabilities. I.e., we only restart the underlying JobHost if listener/host level metadata has changed.
We could optimize this further if we did work in the WebJobs SDK to allow functions to be dynamically added/removed from a running host. That includes starting/stopping listeners, etc. This way all running functions can continue to run, and we can just hot swap new functions in/out.
Need to look into this on the SDK side to assess feasibility.
Currently the TraceWriter we have just dumps everything to a single log.txt file very inefficiently. This was just to get something going for initial debuggability while we get things off the ground.
We need a robust logging solution.
Scenario: While the auto-provisioned Azure Function App is great for basic use cases and getting started, being able to bring and manage your own app service apps will enable more complex applications without forcing users out of the functions portal UX.
Priority: Post-MVP
Open questions:
When cloning the repo locally, it should have a run.cmd at the root. Simply running it will download some self-hosted test server and run it over the repo's files.
e.g.
D:\MyFunctionsRepo>run.cmd
The Azure function test server is running on http://localhost:1234
Note: would be nice to have non-Windows story for this (e.g. via Mono / CoreCLR)
The blob to blob demo we've already done.
For Node.js we need to think about how we want to handle functions that never call the done callback, or never yield back to the calling thread (e.g. while(true){}). Edge.js doesn't provide anything here.
For other script types (e.g. BAT files), we should also consider configurable time limits.
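One way to enforce such a limit around an invocation is a Promise race; a sketch, where the timeout value and error shape are assumptions:

```javascript
// Wrap a function invocation in a configurable time limit. If the
// script forgets to call done (i.e. its promise never settles), the
// race rejects instead of hanging the host forever.
function invokeWithTimeout(invoke, timeoutMs) {
  const timeout = new Promise((_, reject) =>
    setTimeout(
      () => reject(new Error('Function timed out after ' + timeoutMs + 'ms')),
      timeoutMs));
  return Promise.race([Promise.resolve().then(invoke), timeout]);
}
```

Note this only covers invocations that are slow or never settle; a wedged `while(true){}` never yields back to the event loop at all, so actually killing it requires running the script in a separate process that the host can terminate.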
e.g. edit file1, switch to file2, then hit undo.
Result: you're still on file2, but the content is from file1.
This is because it's the same editor instance across all files. Ideally, we want each file to have its own undo buffer, so you can go from file to file and meaningfully undo things.
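The fix amounts to keeping one editor "model" per file, each carrying its own content and undo stack, rather than one shared buffer. A minimal sketch of that state model (the class and method names are illustrative, not the real editor API):

```javascript
// One model per file: switching files swaps models, so each file's
// undo history stays isolated from the others.
class EditorModels {
  constructor() { this.models = new Map(); }
  open(file) {
    if (!this.models.has(file)) {
      this.models.set(file, { content: '', undoStack: [] });
    }
    return this.models.get(file);
  }
  edit(file, newContent) {
    const m = this.open(file);
    m.undoStack.push(m.content); // snapshot before applying the edit
    m.content = newContent;
  }
  undo(file) {
    const m = this.open(file);
    if (m.undoStack.length) m.content = m.undoStack.pop();
    return m.content;
  }
}
```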
Integrate the WebHooks library into the request pipeline for validation of 3rd party WebHook requests
Flow:
Then it's one button click away from starting provisioning.
Complete the metadata options + runtime support for HTTP functions. This includes route specification, allowed Methods, Auth details, etc. specified via function.json.
Includes allowing headers to be accessed via script.
With confirmation
Implement the API that the functions dashboard will use to invoke functions when the "Run" button is selected.
For HTTP triggered functions, this will just be a request to the function endpoint. For other function types (e.g. Queue) we may be able to use the same REST API, but on the backend the execution will be different. In the case of non-http triggered functions, we'll need to enqueue a host invoke message for the SDK, since these types of functions can be long running jobs. We can then return 202 Accepted immediately along with a status ID that the dashboard can poll on for completion. Need to figure this out.
Currently, the only per execution logs we have are via the Dashboard. I think that is fine for non HTTP triggered functions, since they are running in the background and aren't initiated via a portal gesture.
For http functions, or "run now" invocations of functions, we need to capture all logs for that function invocation and make them easily available to the portal. One simple option would be to show the streaming logs that are already being written to disk. We're already writing verbose logs there, and they'll show the function details, including any console.log trace output the function writes, stdout, stderr, etc. For http functions we might also dump the HTTP request details. One drawback, however, is that other global logs would also show up there (e.g. timer executions, singleton lock acquisitions, etc.)
Probably better to trace portal invocations to their own file. The portal could send a header containing a correlation ID (Guid) along with the request, and the runtime would write the logs to that file. The portal then knows where to look for the output, and can easily show it.
We can write these logs to /LogFiles/Functions/Invocations with the per invocation log files being named according to the invocation ID (e.g. 25a012fe-2401-4b72-9f09-c5ffce87d1a3.log). We can keep a short history of these as needed (e.g. if the portal wants to show the last N executions).
We need to come up with a story for how we handle templates.
Right now I have a list in Kudu that shouldn't be there, but I need some metadata about the template. I guess I can self-discover these things based on the function.json and the file extensions, but I think it could get slow with a large number of templates.
We should make it easy to manage secrets in WebHooks scenarios. e.g.
In terms of secret storage, I'm wondering if using the site file system would work better than AppSettings, in order to avoid site restarts (which take down ALL other functions!). We're already doing this for things like deploy SSH keys in Kudu, so there is precedent.
We need to have the ability to put the runtime into a short term "debug mode" for interactive function dashboard scenarios. From the SDK point of view, this means configuring the underlying JobHost with the correct development settings (e.g. short queue polling intervals, etc.) to make function invocations/triggers as responsive as possible.
I'm thinking that either the functions dashboard hits an API to put the host into this mode, or perhaps we do it automatically whenever we start getting invoke requests.
For now, to simulate this behavior, you can add an app setting "AzureWebJobsEnv" with value "Development".
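On the runtime side, picking the polling interval from that setting could be as simple as the sketch below (the interval values are illustrative, not the SDK's real defaults):

```javascript
// Choose aggressive development-time queue polling when the host is in
// "debug mode", signalled (for now) by the AzureWebJobsEnv app setting.
function queuePollingIntervalMs(env) {
  const DEVELOPMENT = 2 * 1000;   // near-instant trigger response for the dashboard
  const PRODUCTION  = 60 * 1000;  // relaxed polling to keep storage costs down
  return env.AzureWebJobsEnv === 'Development' ? DEVELOPMENT : PRODUCTION;
}
```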
Ideally, within one subscription, you can have one Functions site per region. Maybe the Resource Group would be named e.g. FunctionsWestUS.
The Functions portal can offer a region selector, probably via a drop-down, similar to the subscription drop-down. By default, if it finds any existing function RGs, it would use that region.
/api feels more natural and more Web API-like than /functions.
I wanted to deploy an Azure Logic App along with the Function App it uses, via template. It created the Function App, but it was in a state of limbo: when I go to the Function App, all the action buttons (Stop, Swap, Restart, Download publish profile, Reset publish credentials, Download app content) are disabled, and I am greeted with the following errors.
EDIT: The issue is not limited to this function app; the disabled-buttons issue is occurring in all my function apps across all my subscriptions, and in my colleagues' subscriptions too.
Error: 'AzureWebJobsStorage' application setting is missing from your app. This setting contains a connection string for an Azure Storage account that is needed for the functions runtime to handle multiple instances synchronization, log invocation results, and other infrastructure jobs. Your function app will not work correctly without that setting. Create the app setting with a valid storage connection string. Session Id: 38323265cd3342ebab3ba915c6087947 Timestamp: 2017-06-14T08:25:13.400Z
Error: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING' application setting is missing from your app. This setting contains a connection string for an Azure Storage account that is used to host your functions content. Your app will be completely broken without this setting. You may need to delete and recreate this function app if you no longer have access to the value of that application setting. Session Id: 38323265cd3342ebab3ba915c6087947 Timestamp: 2017-06-14T08:25:13.401Z
Error: 'WEBSITE_CONTENTSHARE' application setting is missing from your app. This setting contains a share name where your function content lives. Your app will be completely broken without this setting. You may need to delete and recreate this function app if you no longer have access to the value of that application setting. Session Id: 38323265cd3342ebab3ba915c6087947 Timestamp: 2017-06-14T08:25:13.411Z
Such functions are useful for on demand running of arbitrary logic (e.g. purge DB, purge storage, etc.)
Right now, it takes a while the first time you run a function. I think it's because the main site is cold. Maybe simply hitting the main site root (fire & forget) will help.
Would be a new App Setting, e.g. FUNCTIONS_KEY, with possible values F1 to F12 ;)
When passed as an auth header, it overrides all other auth mechanisms. This is what the functions portal will use to do test invocations.
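The override could sit in front of the normal auth pipeline, roughly as sketched below; the header name is an assumption:

```javascript
// If the caller presents the master key, skip every other auth check;
// otherwise fall through to the normal auth pipeline.
function authorize(request, masterKey, otherAuthChecks) {
  if (masterKey && request.headers['x-functions-key'] === masterKey) {
    return true; // the key overrides all other auth mechanisms
  }
  return otherAuthChecks(request);
}
```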
Need to verify that all errors that can be caused by the user as a result of function coding/configuration are caught and logged by the runtime and made available in a way that the Dashboard can display. This includes ScriptHost startup errors, though those should be extremely rare since we own the host for the user.
Currently the only per function execution logging that is done is for non-http functions. All those logs/errors are shown in the Dashboard. Http functions don't do Dashboard logging (for perf reasons), so we will likely need another way. One simple idea is to have a circular log file per function where we append logs. The functions portal could show this. It's important though that http executions don't block on writing to file.
Likely the best way for us to ferret out all these logging issues is to start playing with the UI once it's ready. We need to ensure excellent debuggability.
e.g.
The ResourceGroup suffix is redundant and feels heavy.
Simple method that receives request and response objects and can party on them.