
usql's Introduction

U-SQL

U-SQL is a new language from Microsoft for processing big data. U-SQL combines the familiar syntax of SQL with the expressiveness of custom code written in C#, on top of a scale-out runtime that can handle any size data.

How can I learn more?

Here are a few links with a lot more information:

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Other

usql's People

Contributors

anthonychu, asears, asos-hughwoods, ben-kotvis, dagiro, dfsharp, flomader, jeffwilcox, markgar, matt1883, microsoft-github-policy-service[bot], mikerys, mkadaner, mwinkle, rukmanig, steventmayer, vaerge, vrmartin, xujxu


usql's Issues

Access to Azure Blob Storage from U-SQL (credentials)

I cannot find any information on how to register Azure Blob Storage credentials so that a U-SQL script can access a Blob Storage account.
I get the following exception when I run the script:

Exception

E_STORE_USER_FAILURE: Secret not found for the specified user account Cosmos Path: wasb://[email protected]/test1/5cbf6597-9e23-420a-b186-eed5d771a493

Script (last step):

OUTPUT @group_by_masterid
TO "wasb://[email protected]/test1/master_id_inline.bin"
USING new V3Lake.Functions.DataNode.DataRecordOutputter();

Sample ICombiner

Would it be possible to add an example showing how to implement a custom JOIN using ICombiner?
E.g. joining two data sets using BETWEEN, or a predicate such as x >= z && y <= z.
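A minimal sketch of what such a range-join combiner could look like, assuming illustrative column names x, y (left) and z (right); this is not a tested implementation:

```
// Hypothetical range-join combiner: emits a row whenever x <= z <= y.
// Column names are illustrative, not from any sample.
using System.Collections.Generic;
using System.Linq;
using Microsoft.Analytics.Interfaces;

public class RangeJoinCombiner : ICombiner
{
    public override IEnumerable<IRow> Combine(IRowset left, IRowset right, IUpdatableRow output)
    {
        // IRowset is forward-only, so materialize one (ideally the smaller) side.
        var rightValues = (from r in right.Rows select r.Get<int>("z")).ToList();

        foreach (var l in left.Rows)
        {
            int x = l.Get<int>("x");
            int y = l.Get<int>("y");
            foreach (int z in rightValues.Where(z => x <= z && z <= y))
            {
                output.Set("x", x);
                output.Set("y", y);
                output.Set("z", z);
                yield return output.AsReadOnly();
            }
        }
    }
}
```

It would be invoked from a COMBINE ... WITH ... ON ... PRODUCE ... USING new RangeJoinCombiner(); expression, typically with a coarse equality key in the ON clause so each invocation only sees a bounded slice of the data.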

IProcessor Process() signature

Is there a reason why Output is supplied as an argument to the Process function? Why not just let the developer of the function create an instance of it themselves within the function? Then the function would simply be IRow -> IRow.

I know that this is a somewhat similar pattern to the old HDInsight C# SDK, which took in the input stream and output stream, and you manually "pushed" data into the output stream, but here we're just returning some IRow anyway.

Is there some state that is set in advance on the Output argument or something?

Query Execution AVG

I am using USQL to query my PUMS population files. I am using the AVG function.

@sum = SELECT St, AVG(Adjinc) AS AvgIncome
FROM @TBL
GROUP BY St;

The U-SQL engine appears to pass every row's St and Adjinc values to the final vertex, which then aggregates the results.

However, you could much more efficiently create hash tables in each vertex, storing the SUM Adjinc and COUNT of rows. Then when you merge the hash tables, you sum the Adjinc and sum the COUNT of rows. You then divide the Sum of Adjinc by the COUNT of rows to get the Average.
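The two-phase computation described above can also be written out explicitly in U-SQL; a sketch, assuming the same @TBL rowset and columns as the query above:

```
// Phase 1: per-group partial aggregates, which are cheap to merge across vertices.
@partial =
    SELECT St,
           SUM(Adjinc) AS SumInc,
           COUNT(*) AS Cnt
    FROM @TBL
    GROUP BY St;

// Phase 2: derive the average from the merged partials.
@avg =
    SELECT St,
           (double) SumInc / Cnt AS AvgIncome
    FROM @partial;
```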

Enable CLA Bot

Enable the Azure CLA bot to allow contributors to the repo to be covered by the CLA.

String size exceeds the maximum allowed size of 131072 in U-SQL script

Hello,
I am hitting the error of string data type size limitation of 128KB during extract phase.

I tried the Extractors.Tsv(silent:true) option to omit the rows that do not fit in 128KB, but that doesn't seem to solve the problem.

I do not want to parse using custom extractors and want to consume the entire line and then process if necessary.

Code snippet:
@eventLogs = EXTRACT Event string FROM @path USING Extractors.Tsv(silent:true);

Error:
MESSAGE: String size 215710 exceeds the maximum allowed size of 131072

Questions:
• Is there a way to store literals larger than 128KB during extraction?
• If not, is there a way to omit those lines instead of failing at run time?
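One workaround sometimes suggested, shown here as an unverified sketch that assumes the built-in extractors accept byte[] columns, is to extract the raw bytes and truncate them in an expression before converting to string:

```
@raw =
    EXTRACT Event byte[]
    FROM @path
    USING Extractors.Tsv();

// Decode at most 128KB of each value; longer values are truncated here
// instead of failing the job at extraction time.
@eventLogs =
    SELECT System.Text.Encoding.UTF8.GetString(Event, 0, System.Math.Min(Event.Length, 131072)) AS Event
    FROM @raw;
```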

Could not load file or assembly 'ScopeEngineManaged.dll' or one of its dependencies - Using the SDK

I get this error when I try to compile using the Local Run Assembly directly:

string USQLRootPath = @"C:\USQLLocalRunSDK\DataRoot";
string ScriptPath = @"C:\USQLLocalRunSDK\Scripts\USQLScripts";
string ScriptName = "SimpleTest.usql";

Configuration USQLConfig = new Configuration(USQLRootPath);
USQLConfig.CppSDK = @"C:\USQLLocalRunSDK\CppSDK";
USQLConfig.WorkingDirectory = @"C:\USQLLocalRunSDK\Scripts\ScopeWorkDir";

LocalCompiler USQLCompile = new LocalCompiler(USQLConfig);

In the Compile Result I can find the following Error: "Could not load file or assembly 'ScopeEngineManaged.dll' or one of its dependencies".

I have no problem compiling the same script using localrunhelper.exe like this:

C:\USQLLocalRunSDK\LocalRunHelper.exe "compile" -Script "C:\USQLLocalRunSDK\Scripts\USQLScripts\SimpleTest.usql" -DataRoot "C:\USQLLocalRunSDK\DataRoot" -CppSDK "C:\USQLLocalRunSDK\CppSDK"

Is there something I'm doing wrong in using the Assemblies directly? The VS 2015 project is targeted for x64.

xmlextractor xml attribute support?

Hi,

I wasn't sure if the example XmlExtractor supported attributes?
This line made me think it possibly did, but I haven't had any luck reading attributes in xml.

state.ElementWriter.WriteAttributes(reader, false);

-thanks
Alex.

Update readme.md

A few things to put there:

  • how to use this site
  • links to different issues/queries
  • links to content/docs/service
    -- U-SQL Reference
  • contacting us

Provide NUGET packages for customization related assemblies

When implementing custom components (e.g. extractors), we are required to implement specific interfaces, for example IExtractor. These interfaces are defined in the assembly Microsoft.Analytics.Interfaces. Right now this assembly seems to be installed in the following path:
C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\PublicAssemblies\

What is the recommended way to reference this and possibly other required assemblies? I'm surprised that this assembly is not contained in the NuGet package.

U-SQL Script Job Scheduling???

We're looking to potentially migrate a series of SQL Server based SQL jobs from our "analytics engine" to Azure Data Lake. Typically we do the scheduling through an SQL Server Integration Services (SSIS) directed graph, which allows for asynchronous operation. The end result is a few tables/files as output.

I'm not seeing any SSIS extensions other than the "Azure Data Lake Store" source and destination DFTs. In contrast HDInsight has a few Pig, Hive execution SSIS tasks and REST APIs.

Question

What is the typical go-to method for executing a series of U-SQL jobs, either in sequence or asynchronously? I see that the sample projects indicate ordering by prepending a number to individual U-SQL script files.

Thank you,

Jon

Add support for F# as a language type

This came in as a feature request from UserVoice; opening this issue to track discussion and refine the suggestion. This is the text from UserVoice:

It would be great to see some F# support for U-SQL. I understand that you can already write UDFs in any .NET language (provided you inherit from the required base class or interface, etc.), but having F# inline with U-SQL would be excellent. F#'s lightweight, expression-based syntax would be a natural fit with the SQL section of U-SQL, and I think it would provide a more seamless experience when switching between SQL and .NET code than SQL and C#.

Pushing this further one could envisage an F# / USQL type provider along the lines of the SQL Client one (http://fsprojects.github.io/FSharp.Data.SqlClient/) which could allow you to consume USQL from within F# directly.

Query Execution Puzzle Joins and aggregation

My files are large PUMS population files from the US Census bureau (https://www.census.gov/programs-surveys/acs/data/pums.html). There is a St (state code column). Each file is handled by multiple vertices. The relevant part of the query is:
@sum = SELECT St, SUM(Adjinc) AS TotalIncome, COUNT(*) AS NumRows FROM @TBL
GROUP BY St;
When I run this query, the execution engine behaves as expected, creating hash tables when processing each vertex and writing between 12 and 16 rows for each vertex and finally combining the hash tables into one hash table output with 52 rows (50 states plus DC and PR).
However if I use an INNER JOIN the plan is different.

@sum = SELECT St, s.statenum, SUM(Adjinc) AS TotalIncome, COUNT(*) AS NumRows FROM @TBL AS t
INNER JOIN @State AS s
ON t.St == s.statenum
GROUP BY St, s.statenum;

Each vertex passes several hundred thousand rows to the vertex that combines the results. My contention is that the inner join can only reduce the number of rows output, never increase it, so I would suggest that the optimizer apply the hashing in the vertices that first touch the data. The join to statenum would then potentially reduce the number of rows output even further.

By the way, the following works as I had hoped:

@sum = SELECT St, SUM(Adjinc) AS TotalIncome, COUNT(*) AS NumRows FROM @TBL
GROUP BY St;
@out = SELECT * FROM @sum AS t
INNER JOIN @State AS s
ON t.St == s.statenum;

Problems reading UTF-8 with portuguese characters


When reading this .log file it fails in this line... :(

2016-03-07 11:34:48 W3SVCXYZ805 SERVER13 10.101.146.157 GET /pt/Prt/PublishingImages/mailimages/visto_131114.jpg - 80 - 10.101.146.3 HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.1;+WOW64;+Trident/6.0;+SRHE+S.R.+Habitação+e+Equipamentos) - - ind.xyz.pt 200 0 0 10143 308 0

The problem is the characters ç and ã.

If you convert the file to unicode it works well, though...

PIVOT support

Hi

I'm trying to run a USQL script in VS2017 that uses PIVOT but I'm getting:

Internal error! The method or operation is not implemented.

It's the PIVOT line that is causing it.

I have ADL Tools v 2.2.5000.0.

I notice it was released recently, too recently? 20-Feb-2017 Release Notes

AFAIK I'm running the latest version. Do I have to wait until a new version? What can I do to get this working?

PS. Not as important, but the ANY_VALUE function doesn't appear to be supported in this version either.

Thanks
Steve

Add custom column in output

Hi,
I have a .tsv file which I can process. I want to add an extra column to the output .csv file.
That extra column will be a datetime column in the format YYYYMMDDHHMM.
Is it possible?
This is my simple query

@Result = 
EXTRACT 
           User_ID int,
           First_Name string,
           Middle_Name string,
           Last_Name string,
           Age int,
           Gender string,
           Team string,
           CurrentTime DateTime  // Is it possible to create a datetime column here like we do in sql ??
       FROM "/Users.tsv"
       USING Extractors.Tsv();

Any help appreciated.

Lokesh.
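EXTRACT can only project columns that actually exist in the file, but a computed column can be added in a following SELECT. A sketch based on the query above (the output path is illustrative):

```
// Columns in EXTRACT must come from the file; derive new ones afterwards.
@WithTime =
    SELECT *,
           DateTime.Now.ToString("yyyyMMddHHmm") AS CurrentTime
    FROM @Result;

OUTPUT @WithTime
TO "/Users_out.csv"
USING Outputters.Csv();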

How to extract values from nested json objects using JSON Extractor

I have a json file that contains a few properties, say,

{
  "A": [
    {
      "property1": "value 1",
    },
    {
      "A1": [
        {
          "property2": "value 2",
          "property3": "value 3"
        }
      ],
      "property4": "value 4"
    }
  ],
  "B": {
     "custom": {
      "B1": [
        {
           "property5": "value 5",
           "property6": "value 6",
           "property7": "value 7"
        }        
      ],
      "metrics": [ ]
    },
    "property8": { }
  }
}

A few of them are nested. How can I extract the nested property values using the JSON extractor? For example, I want to extract the values of properties 5, 6 and 7.
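With the sample JSON assembly, one common pattern is to extract the outer object as a string and peel off one nesting level at a time with JsonFunctions.JsonTuple. This is a sketch, assuming the Microsoft.Analytics.Samples.Formats library and that JsonExtractor serializes nested objects into string columns; the input path is illustrative:

```
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];

USING Microsoft.Analytics.Samples.Formats.Json;

@raw =
    EXTRACT B string
    FROM "/input/sample.json"
    USING new JsonExtractor();

// Peel one level at a time: B -> custom -> B1. Repeating the pattern on the
// B1 array elements would reach property5/6/7.
@b1 =
    SELECT JsonFunctions.JsonTuple(JsonFunctions.JsonTuple(B)["custom"])["B1"] AS B1
    FROM @raw;
```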

Generating rows from C# without extractor

Hello all.

I was trying to create a range of numbers (i.e. 1, 2, 3, 4, 5, ... each number in a new row). I found it rather hard in the SQL part of the language, so I decided to do it in C#. The only problem was that I didn't know how to get the results back into U-SQL. Right now I am using an extractor to do the job:

@loop =
    EXTRACT i int
    FROM ""
    USING new ForLoopExtractor(0, 10);

And the code-behind extractor:

public class ForLoopExtractor : IExtractor
{
    int from, to;

    public ForLoopExtractor(int from, int to)
    {
        this.from = from;
        this.to = to;
    }

    public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
    {
        for (int i = this.from; i < this.to; i++)
        {
            output.Set<int>(0, i);
            yield return output.AsReadOnly();
        }
    }
}

Is there a more convenient way of doing this? And do you plan on improving the language itself to allow it?
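One alternative that avoids the dummy-file EXTRACT is to explode an in-memory array built with Enumerable.Range. This is an unverified sketch, assuming CROSS APPLY EXPLODE accepts a SQL.ARRAY constructed this way:

```
// A single seed row, then one output row per array element.
@seed =
    SELECT * FROM (VALUES (1)) AS T(dummy);

@loop =
    SELECT r AS i
    FROM @seed
    CROSS APPLY EXPLODE(new SQL.ARRAY<int>(Enumerable.Range(0, 10))) AS R(r);
```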

Strongly Typed IProcessor

Any possibility of having a strongly typed version of IProcessor i.e. IProcessor<TInput, TOutput> which would then have Process(row:IRow<TInput>) : output:IRow<TOutput> - this is basically just the prototypical map function in many collection libraries (or select in LINQ).

In the current implementation, things are all stringly typed, so developers will either just put up with that and risk runtime errors, or end up writing wrapper classes around the input IRow and output IUpdatableRow in order to get some strong typing.
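A sketch of the wrapper-class approach mentioned above: a hypothetical generic base class that converts the untyped rows to and from POCOs (the Map/Read/Write methods are illustrative; real code would need per-type column binding):

```
// Hypothetical strongly typed processor base; not part of any shipped API.
using Microsoft.Analytics.Interfaces;

public abstract class TypedProcessor<TIn, TOut> : IProcessor
{
    // Subclasses implement the pure mapping.
    protected abstract TOut Map(TIn input);

    // Subclasses declare how their types bind to the untyped rows.
    protected abstract TIn Read(IRow row);
    protected abstract void Write(TOut value, IUpdatableRow output);

    public override IRow Process(IRow input, IUpdatableRow output)
    {
        Write(Map(Read(input)), output);
        return output.AsReadOnly();
    }
}
```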

IProcessor - does it really need to be an abstract class?

Does this really need to be a class? Why not just a single-method interface, or, even better, a single function signature, e.g. (IRow * IUpdatableRow) -> IRow? This would be more lightweight for implementers, who would not need to create a separate class for every single UDO they want to create.

Feature Request - Client-side Parsing

Long time reader, first time submitting an issue on GitHub, so I apologize if this isn't the right place for this or if this feature has already been requested.

Pretty much just what it says in the title - it would be nice if, from Visual Studio, U-SQL queries could be parsed/checked for minor/common syntactical issues before submission to the server. It would be a huge time saver.

USqlStreamReader.Split(delimiter) doesn't seem to work correctly

var stream = new FileStream(@"....\ExtractorsUnitTests\InputFiles\Test.json", FileMode.Open);
Encoding encoding = Encoding.UTF8;
var streamList = new USqlStreamReader(stream).Split(encoding.GetBytes("\r\n")).ToList();

Although the file has multiple lines, streamList only gets the first line (first stream) as the output.
I also checked that it's not reading the entire file at once.

Any solution for this?

Schedule usql jobs

Is there any way I can automate U-SQL jobs and run them on some schedule?

Value formatter for data partitioned by hour

When specifying a data set which is partitioned by hour I expect the format specifier "hh" to represent the hour on a 12 hour clock, and the format specifier "HH" to represent the hour on a 24 hour clock. This expectation is based on Custom Date and Time Format Strings in .net documentation.

When using the format specifier "HH" I get an error Internal error! Invalid Virtual Column: Cannot find value formatter 'HH' for type 'DateTime' of column 'logfile_timestamp'

When using the format specifier "hh" I get the hour from a 24 hour clock.

Handle files with header rows using default text extractor

It is common for data files to include one or more lines at the beginning which contain headers and should not be treated as data rows. Currently, when using the default text extractor with this type of file, every row will be treated as data, and is required to conform to the schema defined.

It would be helpful to be able to pass an optional parameter to the extractor to indicate that the first line (or first n lines) of each input file should be ignored.
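Later releases of the built-in extractors appear to expose a skipFirstNRows parameter for exactly this. A sketch (verify that the parameter exists in your runtime version; the schema is illustrative):

```
@rows =
    EXTRACT name string,
            value int
    FROM "/data/with_header.csv"
    USING Extractors.Csv(skipFirstNRows: 1);
```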

Check if file exists at the location.

Hi,
Is there a way to check if the file from which we will be extracting values exists at that location?

IF File.Exists("file_location")
THEN
       //Proceed further, extract values from the file using Extractor.
ELSE
       // Other things to do
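U-SQL later gained an IF FILE.EXISTS construct that matches this pseudocode closely. A sketch, assuming that construct is available in your runtime (paths and schema illustrative):

```
IF FILE.EXISTS("/data/input.csv") THEN
    @rows =
        EXTRACT name string, value int
        FROM "/data/input.csv"
        USING Extractors.Csv();

    OUTPUT @rows TO "/data/output.csv" USING Outputters.Csv();
END;
```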

A commit request is outside of a reservation size. A row is too big.

Hi,

I am getting following error while processing a file.

E_RUNTIME_USER_ROWTOOBIG
MESSAGE
A commit request is outside of a reservation size. A row is too big.

Input is a JSON file and we have a custom extractor to parse it. What could be the issue? We don't have enough details to debug this.

Extracting data from multiple files

Hi,

I am trying to use the EXTRACT U-SQL statement when the location pointed to by FROM has multiple files. It works when I have a single file and I explicitly specify the filename in the path. But in my use case it is not guaranteed that the filename will be the same every time, and the folder can have multiple files.

In the statement below I have tried using FROM "/dimensiondata/carrier/", FROM "/dimensiondata/carrier/*", and FROM "/dimensiondata/carrier", but none of them worked. All three came back with a "File not found:" error.

@carrierdata =
EXTRACT Carrier string,
CarrierCode string
FROM "/dimensiondata/carrier/"
USING Extractors.Csv();

How would I go about reading data from multiple files using an EXTRACT statement?
Appreciate your help on this.

Thanks & Regards,
Rakesh
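U-SQL addresses this with file sets: a {} pattern in the FROM path matches many files, and the pattern variable becomes a virtual column. A sketch based on the path above (the virtual column name is illustrative):

```
@carrierdata =
    EXTRACT Carrier string,
            CarrierCode string,
            FileName string    // virtual column bound from the path pattern
    FROM "/dimensiondata/carrier/{FileName}.csv"
    USING Extractors.Csv();
```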

How to process one row at a time in USQL

How can we process one row of structured data at a time using U-SQL? Is it possible to use something like a SQL cursor that opens on a result set and allows processing it one row at a time?
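U-SQL's model is set-based, but the PROCESS expression invokes user code once per row, which covers most cursor-style logic. A sketch with a hypothetical processor (column names illustrative):

```
@out =
    PROCESS @in
    PRODUCE name string,
            value int
    USING new MyRowProcessor();

// Code-behind: runs once per input row.
public class MyRowProcessor : IProcessor
{
    public override IRow Process(IRow input, IUpdatableRow output)
    {
        output.Set("name", input.Get<string>("name").ToUpperInvariant());
        output.Set("value", input.Get<int>("value") + 1);
        return output.AsReadOnly();
    }
}
```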

Add a contributors guide

We need a document that outlines the various ways to engage with the team. Leverage the work that's been done by VS and Azure for this.

Sample User-Defined Aggregator

It would be nice to have a sample of code that implements Microsoft.Analytics.Interfaces.IAggregate in order to provide and use a user defined aggregator.
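A sketch of what such a sample might look like: a product aggregator implementing IAggregate<int, long> (method names follow the interface; vertex-level details omitted, and the example is untested):

```
using Microsoft.Analytics.Interfaces;

// A user-defined aggregate computing the product of int values.
public class ProductAgg : IAggregate<int, long>
{
    private long product;

    public override void Init() { product = 1; }

    public override void Accumulate(int v) { product *= v; }

    public override long Terminate() { return product; }
}
```

It would then be invoked with the AGG syntax, e.g. SELECT key, AGG<ProductAgg>(value) AS Product FROM @rows GROUP BY key;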

E_CSC_SYSTEM_INTERNAL: Internal error! A procedure imported by 'ScopeEngineManaged.dll' could not be loaded.

Hello,

After installing the latest version of the ADLA tool I am now receiving the following error when building the usql script.

Version: 2.2.4000.0
MSI: Microsoft.Azure.DataLakeToolsForVS2015.msi

E_CSC_SYSTEM_INTERNAL: Internal error! A procedure imported by 'ScopeEngineManaged.dll' could not be loaded.

Is there anything I can do to resolve this error?

I'm running on windows 7 with VS 2015.

Thanks

Upload files "Row-Structured"

How can I upload large datasets from Blob Storage to ADLS in "Row-Structured File Mode"? If I use Adlcopy, files get uploaded as binary which results in incorrect splits. Since each line is a valid JSON document this causes U-SQL jobs to fail. The only option I could find to upload the files correctly is Visual Studio - but this is not a good solution for large datasets.

Building recursive queries

Hi,

I have to build the transitive closure of a result set. Usually I would use a recursive CTE or something like the Floyd–Warshall algorithm in C#.
Currently I can find no approach to solve this in U-SQL. Are there existing approaches that I have overlooked?

Regards,

Tillmann

inner query/subquery

declare @t table (id int,gender varchar(10),country int,Orderi int,age int)

insert into @t (id,gender,country,Orderi,age)values( 123,'male',1,1,22)
insert into @t (id,gender,country,Orderi,age)values( 12,'female',1,2,23)
insert into @t (id,gender,country,Orderi,age)values( 123,'male',2,1,22)
insert into @t (id,gender,country,Orderi,age)values( 12,'female',1,1,24)
insert into @t (id,gender,country,Orderi,age)values( 123,'female',1,4,23)
Select id,gender,country,age,
CONVERT(VARCHAR(10),(Count(gender)* 100 / (Select Count(*) From @t)))+'%' as Parcent
From @t
Group By gender,country,id,age

For that, do I need to create two C# functions with a static variable?
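A U-SQL sketch of one way to express this without static variables: compute the grand total in its own rowset and join it in (aliases and the integer division mirror the T-SQL above; untested):

```
@total =
    SELECT COUNT(*) AS Total FROM @t;

@result =
    SELECT t.id, t.gender, t.country, t.age,
           (COUNT(t.gender) * 100 / ANY_VALUE(tot.Total)).ToString() + "%" AS Percent
    FROM @t AS t CROSS JOIN @total AS tot
    GROUP BY t.id, t.gender, t.country, t.age;
```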

VS crashes when registering an assembly larger than 15KB

Repro Steps

  1. Open Cloud Explorer
  2. Select Azure Subscription and expand Data Lake Analytics node, select an ADLA resource.
  3. Expand Databases node, select a database
  4. Right-click on "Assemblies" -> Register Assembly
  5. Select "Load Assembly from Path" and choose a local assembly that is larger than 15KB, so you get the dialog:
Register assemblies larger than 15KB requires an Azure Data Lake Store location for temporary file storage. Click YES to choose the generated path adl://.../AssemblyCache_3f7b129b-8ad2-4c49-8ad7-dac13bc5b74f. Click NO to select another path.
  6. Select YES.

Expected:

VS does not crash. It does not crash when I perform this action locally.

Actual:

[Window Title]
Microsoft Visual Studio 2015
Microsoft Visual Studio 2015 has stopped working
Windows is collecting more information about the problem. This might take several minutes...

VS Information
Microsoft Visual Studio Enterprise 2015
Version 14.0.25431.01 Update 3
Microsoft .NET Framework
Version 4.7.02020

Installed Version: Enterprise

ASP.NET and Web Tools 2015.1 14.1.20907.0
ASP.NET and Web Tools 2015.1

Azure App Service Tools v2.9.5 14.0.20810.0
Azure App Service Tools v2.9.5

Azure Data Lake Node 1.0
This package contains the Data Lake integration nodes for Server Explorer.

Azure Data Lake Tools for Visual Studio 2.2.5000.0
Microsoft Azure Data Lake Tools for Visual Studio

Common Azure Tools 1.8
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.

Microsoft Azure Hive Query Language Service 2.2.5000.0
Language service for Hive query

Microsoft Azure Mobile Services Tools 1.4
Microsoft Azure Mobile Services Tools

Microsoft Azure Tools 2.9
Microsoft Azure Tools for Microsoft Visual Studio 2015 - v2.9.40923.2

Microsoft Azure Tools 2.8
Microsoft Azure Tools for Microsoft Visual Studio 2015 - v2.8.40211.

rowpath isn't working in JsonExtractor

I see there was a check-in on 9-22 that changed the way the Extract method worked in the JsonExtractor. Now it ignores the rowpath. I am working on a MultiLevelJsonExtractor that would allow you to specify paths to map to the properties so that you could say I want a property at the root of the specified rowpath and a property a couple levels down from there in a flat result. I was going to inherit from this JsonExtractor because the logic can leverage this code but if this doesn't honor the rowpath I will have to approach things differently. What direction is this going to take?

Trying to access assembly from USQL script in portal

Hi,

I am trying to parse Json data file using Json formatter.
I have following on top of my script.
REFERENCE ASSEMBLY [Newtonsoft.Json];
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];

But I keep getting error saying Assembly 'master.[Newtonsoft.Json]' does not exist.

Do I need to upload these assemblies to the Data Lake Store? If yes, where?

Thanks,
Nitin
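For context, assemblies must be registered in the ADLA catalog before REFERENCE ASSEMBLY can resolve them. A sketch of registering them after uploading the DLLs to the store (the /Assemblies path is illustrative):

```
USE DATABASE master;

CREATE ASSEMBLY IF NOT EXISTS [Newtonsoft.Json]
    FROM "/Assemblies/Newtonsoft.Json.dll";

CREATE ASSEMBLY IF NOT EXISTS [Microsoft.Analytics.Samples.Formats]
    FROM "/Assemblies/Microsoft.Analytics.Samples.Formats.dll";
```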

ADL VS Tools freezes up computer over time

Issue

I have a VS solution with a couple of libraries and a U-SQL project. After trying to compile my U-SQL scripts a few times, I am noticing that I regularly get compilation timeouts locally for queries that used to work. Even when I am just editing code, I notice that there are lingering 'cl.exe's and 'conhost.exe's taking up my memory and CPU, which makes it impossible to code or do anything with my computer.

In the image below, I am trying to rebuild my .NET 4.5 C# library (Microsoft.Fx.Portability.DataLake) but I can't because my CPU and resources are being consumed by these other processes.

[screenshot: Task Manager showing cl.exe and conhost.exe consuming CPU]

Repro

  1. Create U-SQL project, copy script under "SQL.MAP" section from https://msdn.microsoft.com/en-us/library/azure/mt764126.aspx (pasted below)
  2. Open Task Manager
  3. Right-click in Solution Explorer and select "Build"
  4. Notice how cl.exe launches and so does conhost.exe
  5. Wait for it to finish building, and it shows the vertex graph, etc.
  6. Notice how conhost.exe is still there and takes up 24% CPU even though nothing is happening.

Repro2

To get all of those lingering cl.exes:

  1. "Build all scripts" in my U-SQL project. (I can share the solution file if you can't get it to reproduce.)
  2. Have a script out of those 5 or 6 that fails building.
  3. Wait for it to fail with errors or timeout on one.
  4. Notice how VS says it is "Ready" in the taskbar, but the Task Manager has 'cl.exe's still there and processing.. something.
  5. Even after closing VS, the 10+ cl.exe and conhost.exe are still running!

Visual Studio Information

Microsoft Visual Studio Enterprise 2015
Version 14.0.25431.01 Update 3
Microsoft .NET Framework
Version 4.7.02020

Installed Version: Enterprise

Azure Data Lake Node 1.0
This package contains the Data Lake integration nodes for Server Explorer.

Azure Data Lake Tools for Visual Studio 2.2.5000.0
Microsoft Azure Data Lake Tools for Visual Studio

Better Error Checking When Creating Views

I have a view that joins a couple of external tables for easy access in scripts. Now that I am finally writing a script that uses the view, I keep running into errors in the view script, so I have to go back and correct that code before I can even begin to work on the new script.

For example, intentionally omitting a comma between column names in my view code compiles and runs, but then throws the appropriate error once I go to run the script that uses that view.

It would be preferable if these errors were checked when first creating the view.
