cube-js / cube
📊 Cube – The Semantic Layer for Building Data Applications
Home Page: https://cube.dev
License: Other
The name dimension is misleading, since a filter can be applied to both measures and dimensions. Support the dimension key for backward compatibility.
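To illustrate, a neutral key could cover both cases while the legacy key keeps working. The `member` key below is purely an assumption about what such a name could look like, not the actual API:

```javascript
// Hypothetical filter list: an assumed neutral `member` key works for both
// measures and dimensions, while the legacy `dimension` key stays supported.
const filters = [
  { member: 'Orders.amount', operator: 'gt', values: ['100'] },             // filters a measure
  { dimension: 'Orders.status', operator: 'equals', values: ['shipped'] }   // legacy key
];
```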
GitHub now supports issue templates via a markdown file called issue_template.md in the root or .github/ folder of your repo!
There is also added support for pull request templates via a markdown file called pull_request_template.md in the root or .github/ folder of your repo!
Currently the server starts and loads all cubes from the schema folder (or another configured folder); when I change my schema, I need to restart the server.
Hot reload feature: when the server can't find some dimension, measure, or segment, it can reload the cube and check whether it is present. Do just one reload, and return the validation result when the required property still can't be found.
Profit: it'll allow changing the schema at runtime.
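A rough sketch of that reload-on-miss flow; the compiler interface used here is entirely hypothetical:

```javascript
// Hypothetical reload-on-miss: if a requested member is unknown, recompile
// the schema once, then either resolve the member or return a validation error.
async function resolveMember(compiler, memberName) {
  if (compiler.hasMember(memberName)) return compiler.getMember(memberName);
  await compiler.recompile(); // do just 1 reload
  if (compiler.hasMember(memberName)) return compiler.getMember(memberName);
  throw new Error(`Member not found: ${memberName}`); // validation result
}
```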
cubejs create hello-world -d postgres
I am using postgres for the database. After starting localhost on my VM and accessing it via my host machine, some dependencies got downloaded for the React app (it also creates a React app inside a dashboard-app folder). Later I wanted to add dependencies:
$ npm i --save @cubejs-client/core
$ npm i --save @cubejs-client/react
I don't know where to add them: in hello-world or dashboard-app? After running it in both places I am getting:
npm ERR! Unexpected end of JSON input while parsing near '...r\n-----END PGP SIGNA'
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/navs/.npm/_logs/2019-04-17T10_26_24_301Z-debug.log

navs@kafka:~/CubeJS/hello-world$ npm i --save @cubejs-client/core
npm ERR! Unexpected end of JSON input while parsing near '...r\n-----END PGP SIGNA'
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/navs/.npm/_logs/2019-04-17T10_28_53_779Z-debug.log

navs@kafka:~/CubeJS/hello-world$ cd dashboard-app
navs@kafka:~/CubeJS/hello-world/dashboard-app$ npm i --save @cubejs-client/core
npm ERR! Unexpected end of JSON input while parsing near '...r\n-----END PGP SIGNA'
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/navs/.npm/_logs/2019-04-17T10_29_21_857Z-debug.log

navs@kafka:~/CubeJS/hello-world/dashboard-app$ cd ..
navs@kafka:~/CubeJS/hello-world$ npm i --save @cubejs-client/core
npm ERR! Unexpected end of JSON input while parsing near '...r\n-----END PGP SIGNA'
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/navs/.npm/_logs/2019-04-17T10_30_37_847Z-debug.log
The query object can contain an order property, but its format needs to be changed. Once it is changed, it should be added to the Query Format documentation page.
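Purely as an illustration, the changed format might look something like this; the exact shape is an assumption, not a decision:

```javascript
// Hypothetical `order` shape: an array of [member, direction] pairs,
// which preserves ordering priority (unlike a plain object, whose key
// order is not guaranteed to be meaningful).
const query = {
  measures: ['Orders.count'],
  order: [
    ['Orders.createdAt', 'asc'],
    ['Orders.count', 'desc']
  ]
};
```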
I have created 2 schemas/tables, Orders and Users. In my index.js file I am trying to render the Users table attributes. I don't have a 'status' attribute in any of my SQL tables or schemas, so why am I getting this error, even though I have nowhere mentioned 'status' and am only trying to render the Users schema?
This is my index.js file:
const CubejsServer = require('@cubejs-backend/server');
import cubejs from '@cubejs-client/core';
import Chart from 'chart.js';
import chartjsConfig from './toChartjsData';

const cubejsApi = cubejs('eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpIjozODU5NH0.5wEbQo-VG2DEjR2nBpRpoJeIcE_oJqnrm78yUo9lasw');
const server = new CubejsServer();

const resultSet = await cubejsApi.load({
  measures: ['Users.paying'],
  timeDimensions: [{
    dimension: 'Users.createdAt',
    dateRange: ['2018-01-01', '2018-07-31'],
    granularity: 'month'
  }]
});

const context = document.getElementById("myChart");
new Chart(context, chartjsConfig(resultSet));

server.listen().then(({ port }) => {
  console.log(`🚀 Cube.js server is listening on ${port}`);
});
Per our Slack conversation, this ticket is a placeholder for the work needed to move any existing ESLint, Prettier, or other dot-files / config related to linting and formatting into this public GitHub repo.
Let me know if I can be of any help on this - happy to!
There should be a test-connection phase before any DB operation.
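As a sketch of the idea; the driver interface used here is an assumption, not the actual driver API:

```javascript
// Hypothetical guard: probe the database before serving any query,
// so connection problems surface as one clear error up front.
async function ensureConnection(driver) {
  try {
    await driver.testConnection(); // assumed probe method, e.g. running SELECT 1
  } catch (e) {
    throw new Error(`Database connection failed: ${e.message}`);
  }
  return driver;
}
```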
See dateRange in the docs for Query Format: https://cube.dev/docs/query-format#time-dimensions-format
When I run yarn add @cubejs-backend/server-core, I get a @cubejs-backend folder in my node_modules.
I would expect the following modules instead:
node_modules/@cubejs-backend/server-core
node_modules/@cubejs-backend/api-gateway (dependency of server-core)
node_modules/@cubejs-backend/query-orchestrator (dependency of server-core)
node_modules/@cubejs-backend/schema-compiler (dependency of server-core)
Hello, I am trying to define a cube schema in a TypeScript-based project, but I cannot just define cube() files in it and also cannot define .js files.
Any documentation or support for TypeScript?
Thanks!
Describe the bug
I am currently using BigQuery as the DB. I have a column 'age' set as 'integer' in the table, but the generated Cube.js schema file does not show this column in dimensions or measures.
Expected behavior
All columns present in the table should be mapped as dimensions.
Version:
0.5.2
Handle RangeError: Maximum call stack size exceeded for self-referencing members like:
foo: {
  sql: `${foo}`,
  type: `string`
},
Stop sending old requests if the query has been changed.
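On the client side, this could be sketched as tagging each request and discarding responses from superseded queries. This is a sketch of the general pattern, not the actual client implementation:

```javascript
// Hypothetical stale-response guard: only the most recent request's result
// is delivered; responses from earlier in-flight requests are ignored.
let latestRequestId = 0;

async function loadLatest(loadFn, onResult) {
  const requestId = ++latestRequestId; // tag this request
  const result = await loadFn();       // may resolve out of order
  if (requestId === latestRequestId) {
    onResult(result);                  // only the newest request wins
  }
}
```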
As we are implementing our own authentication middleware, it would be great to be able to disable the authentication check in @cubejs-backend/api-gateway.
I could imagine something like this:
import * as CubejsServerCore from "@cubejs-backend/server-core";

const config = {
  ...
  security: false
};

const core = CubejsServerCore.create(config);
What do you think?
It can help connect to Azure SQL Database, Azure MS-SQL Server, etc.
Currently, if you pass an empty query, it gives a weird error. The query needs to be validated.
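A minimal sketch of such a guard; the error shape and message are assumptions:

```javascript
// Hypothetical validation: reject empty queries up front with a clear
// user-facing message instead of letting them reach the query compiler.
function validateQuery(query) {
  if (!query || Object.keys(query).length === 0) {
    const err = new Error('Query should contain measures, dimensions or timeDimensions');
    err.status = 400; // user error, not an internal error
    throw err;
  }
  return query;
}
```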
While using the mysql driver for MongoDB BI, time fields are mapped correctly to the "time" field, but when any of them is selected, this error is thrown:
Error: Error: scalar function 'convert_tz' is not supported
Hi, it seems that segments is missing in request validation.
Can you add segments validation to cubejs-api-gateway?
Currently it's not working when the request contains segments.
Describe the bug
Permission denied to build using node-gyp.
What I did after successfully installing the CLI: cubejs create hello-world -d cassandra

- Installing DB driver dependencies

> [email protected] install /home/navs/CubeJS/hello-world/node_modules/java
> node-gyp rebuild

sh: 1: node-gyp: Permission denied
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2019-04-16T12_10_36_327Z-debug.log

Cube.js Error ---------------------------------------
Error: npm install --save @cubejs-backend/jdbc-driver node-java-maven failed with exit code 1
    at ChildProcess.child.on.code (/root/.nvm/versions/node/v10.13.0/lib/node_modules/cubejs-cli/cubejsCli.js:31:16)
    at ChildProcess.emit (events.js:182:13)
    at maybeClose (internal/child_process.js:962:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:251:5)
Version:
CubeJS version: 0.7.0
Hi all,
As quickly discussed with @paveltiunov in Slack, it would be awesome to support TypeScript for Cube.js. This can be achieved by adding a declaration file and bundling it with the npm package. Here you can find more information.
Thanks a lot and keep up the great work!
Hi,
first of all, this project looks extremely interesting!
I was studying it and looking for BigQuery support. It is not clear to me whether using BigQuery as the db is supported or not: BigQuery is listed as supported on the home page, but in the docs it is not mentioned for the -d option flag. Moreover, the CLI help says that valid options for that flag are:
-d, --db-type <db-type> Preconfigure for selected database. Options: postgres, mysql, athena
It looks like ResultSet.chartPivot uses a weird combination of keys and titles as keys of the object returned by chartPivot.

// What is returned by resultSet.seriesNames()
[
  {title: "Reports, Events Any Event - Total", key: "Reports:Events.anyEvent"},
  {title: "Dashboard, Events Any Event - Total", key: "Dashboard:Events.anyEvent"},
  {title: "✅, Events Any Event - Total", key: "✅:Events.anyEvent"}
]

// What is returned by resultSet.chartPivot()
[
  {category: "2019-02-05T00:00:00.000", x: "2019-02-05T00:00:00.000", "✅, Events.anyEvent": 0, "Reports, Events.anyEvent": 0, "Dashboard, Events.anyEvent": 0, …},
  {category: "2019-02-06T00:00:00.000", x: "2019-02-06T00:00:00.000", "✅, Events.anyEvent": 0, "Reports, Events.anyEvent": 0, "Dashboard, Events.anyEvent": 0, …},
  {category: "2019-02-07T00:00:00.000", x: "2019-02-07T00:00:00.000", "✅, Events.anyEvent": 0, "Reports, Events.anyEvent": 0, "Dashboard, Events.anyEvent": 0, …},
  {category: "2019-02-08T00:00:00.000", x: "2019-02-08T00:00:00.000", "✅, Events.anyEvent": 0, "Reports, Events.anyEvent": 0, "Dashboard, Events.anyEvent": 0, …}
  ...
]

The key in the object returned by chartPivot is "Reports, Events.anyEvent", which is neither the key nor the title from seriesNames. I'd suggest using keys.
Hi all,
I'm trying the vanilla JavaScript integration (inside of an Angular component built with Angular CLI):

import cubejs from '@cubejs-client/core';
import Chart from 'chart.js';
import chartjsConfig from './toChartjsData';

@Component({
  selector: 'home',
  templateUrl: './home.html'
})
export class HomeComponent implements OnInit {
  public async ngOnInit() {
    const cubejsApi = cubejs('dd44575826bc7113dbbbd06e7f490a5030f42c630da052913499956b6c20a993a9c72655f0c1d3921bad8b1047b411ed8f426d6342d526e082176aadc41a3e49');
    const resultSet = await cubejsApi.load({
      measures: ['Stories.count'],
      timeDimensions: [{
        dimension: 'Stories.time',
        dateRange: ['2015-01-01', '2015-12-31'],
        granularity: 'month'
      }]
    });
    const context = document.getElementById('myChart');
    new Chart(context, chartjsConfig(resultSet));
  }
}

But at cubejs(...) I get the following error: Uncaught Error: Uncaught (in promise): TypeError: core_2.default is not a function
When I change the import to import * as cubejs from '@cubejs-client/core'; it's fine.
Describe the bug
The current implementation of cubejs-mongobi-driver has broken SSL connection support.
It passes the ssl config to the underlying node-mysql2 driver like so:
https://github.com/statsbotco/cube.js/blob/3202508c98f7c2f342e5936b7f219d78876a2117/packages/cubejs-mongobi-driver/driver/MongoBIDriver.js#L9-L17
But the underlying mysql driver used to connect to MongoDB BI (mongosqld) accepts the ssl config as an object:
https://github.com/sidorares/node-mysql2/blob/b38c3a45c887a38180896c3a6f256296018511dd/lib/connection.js#L299
const secureContext = Tls.createSecureContext({
  ca: this.config.ssl.ca,
  cert: this.config.ssl.cert,
  ciphers: this.config.ssl.ciphers,
  key: this.config.ssl.key,
  passphrase: this.config.ssl.passphrase
});
To Reproduce
Try configuring SSL
Expected behavior
SSL configuration should work.
Version:
[e.g. 0.4.5]
Additional context
Opened a PR.
In the official blog post Node Express Analytics Dashboard with Cube.js it's mentioned that
Cube.js is mounted into the /cubejs-api/v1/ path namespace. But you can change it and a lot of other things by passing the configuration object to the CubejsServerCore.create() method.
In the API Gateway the path seems to be hardcoded (see L196, L235 and L248). Nevertheless, I like the idea of having it configurable.
We need to introduce a Query class, an instance of which should be returned by the query getter in ResultSet. The Query class should provide a set of convenient methods to access information about the query, such as hasDimensions.
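For instance, a minimal sketch of such a class; the method set beyond hasDimensions is an assumption:

```javascript
// Hypothetical Query wrapper returned by ResultSet's `query` getter.
class Query {
  constructor(query) {
    this.query = query || {};
  }
  hasDimensions() {
    return (this.query.dimensions || []).length > 0;
  }
  hasMeasures() { // assumed companion method, for symmetry
    return (this.query.measures || []).length > 0;
  }
}
```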
It would be cool if this project could add support for time-series databases like Graphite or InfluxDB.
Error while executing a query with nested fields:
Error while querying: {"queryKey":["SELECT details.name `flow.detailsname`, count(*) `flow.count` FROM flow AS flow GROUP BY 1 ORDER BY 2 DESC LIMIT 10000",[],[]],"error":"Error: Unknown column 'details.name' in 'field list'
For the schema file below:

cube(`Flow`, {
  sql: `SELECT * FROM 1224.flow`,

  joins: {
  },

  measures: {
    count: {
      type: `count`,
      drillMembers: [createdbyuser, details.name, projectid, createddate, modifieddate]
    }
  },

  dimensions: {
    details.owner: {
      sql: `details.owner`,
      type: `string`
    },
    createdbyuser: {
      sql: `createdByUser`,
      type: `string`
    },
    details.description: {
      sql: `details.description`,
      type: `string`
    },
    details.name: {
      sql: `details.name`,
      type: `string`
    },

For MongoDB BI queries, nested fields need to be wrapped in backticks; I tested this with MySQL Workbench connecting to Mongo over the same BI Connector.
I think maybe the query generator needs to handle this.
Working version of the same query from the MySQL test:
SELECT `details.name` `flow.detailsname`, count(*) `flow.count` FROM flow AS flow GROUP BY 1 ORDER BY 2 DESC LIMIT 10000
Hi cube.js dev team,
It would be great if the granularity in timeDimensions of a query could also be set to "year". We need that to show a multidimensional bar chart grouped by years.
Thank you!
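For example, the desired query would look like this; the cube and member names are made up for illustration:

```javascript
// Desired: `granularity: 'year'` in timeDimensions (requested feature,
// not yet supported at the time of this issue).
const query = {
  measures: ['Orders.amount'],
  timeDimensions: [{
    dimension: 'Orders.createdAt',
    dateRange: ['2015-01-01', '2018-12-31'],
    granularity: 'year'
  }]
};
```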
Describe the bug
Rollup pre-aggregations require a dateRange to be passed. Currently, if it is not passed, the backend returns a 500 internal error, but it should be a 400 error with a clear message about the problem.
To Reproduce
Steps to reproduce the behavior:
Create a rollup pre-aggregation like this:

preAggregations: {
  main: {
    type: `rollup`,
    measureReferences: [amount],
    timeDimensionReference: createdAt,
    dimensionReferences: [Users.city],
    granularity: `day`,
    partitionGranularity: `day`
  }
}

Then send a query without a dateRange:

{"query":"{\"measures\":[\"Orders.amount\"],\"timeDimensions\":[{\"dimension\":\"Orders.createdAt\",\"granularity\":\"day\"}],\"dimensions\":[\"Users.city\"],\"filters\":[]}"}

Expected behavior
The server should handle this case and return a User Error with a clear message.
Version:
0.5
Hi,
I'm trying to connect to the postgres db (which comes with the Tableau installation). I am able to connect to this postgres db from Tableau, whereas when I tried to establish connectivity from cubejs I got the error message below:
Error: connect ECONNREFUSED XXX.XX.XXX.XX:5432 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1113:14)
Please note the Tableau server which has the postgres db is SSL enabled; I'm not sure whether that is the problem.
Please help fix this issue. I'm facing the same issue while connecting to MongoDB via the MongoDB BI Connector.
Thanks
Murugan
The core library is not compatible with Node.js because of the whatwg-fetch module. I'd suggest moving to an isomorphic alternative that works with both browsers and servers.
fillMissingDates fails if x or y is not passed. fillMissingDates should work even if x or y isn't passed.
1. Using the mysql driver for MongoDB BI
The schema file generated for MongoDB BI contains the DB name as a prefix in the query, which causes an issue.
For example:
The schema file for TableOne generates SQL => sql: SELECT * FROM DBNAME_PREFIX.TableOne,
If the DBNAME_PREFIX is removed, it works fine.
2. Using the mongobi driver for MongoDB BI
The schema file still contains the prefix; however, it seems the server cannot transform the schema (maybe mongobi handling needs to be done for rendering in the UI).
In this case the error is as below:
Error: Error: Compile errors: TypeError: Cannot read property 'aliasName' of null
    at CubeToMetaTransformer.measureConfig (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/compiler/CubeToMetaTransformer.js:105:31)
    at config.measures.R.compose.R.map (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/compiler/CubeToMetaTransformer.js:32:40)
    at _map (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_map.js:6:19)
    at map (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/map.js:57:14)
    at /home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_dispatchable.js:39:15
    at /home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_curry2.js:20:46
    at f1 (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_curry1.js:17:17)
    at /home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_pipe.js:3:14
    at /home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/node_modules/ramda/src/internal/_arity.js:5:45
    at CubeToMetaTransformer.transform (/home/kru/antphant/git/cube.js/p1/node_modules/@cubejs-backend/schema-compiler/compiler/CubeToMetaTransformer.js:35:10)
This error happens when you set a filter for a numeric (or other non-string) dimension or measure with multiple values. It looks like Athena requires casting all values in the IN statement to the type of the dimension/measure.
Consider the MongoDB document:
{ "_id" : ObjectId("5c8a215c46e0fb0001055061"), "details" : { "name" : "asdfsadf", "status" : "asdfdsf", "severity" : "asdfsadf" }, "projectId" : "5c6eb78046e0fb0001ee0b32", . . . }
The schema file generated by cubejs contains measure names with "." for nested fields, as below:

cube(`Flow`, {
  sql: `SELECT * FROM 1224.flow`,

  joins: {
  },

  measures: {
    count: {
      type: `count`,
      drillMembers: [createdbyuser, details.name, projectid, createddate, modifieddate]
    }
  },

  dimensions: {
    details.owner: {
      sql: `details.owner`,
      type: `string`
    },
    createdbyuser: {
      sql: `createdByUser`,
      type: `string`
    },
    details.description: {
      sql: `details.description`,
      type: `string`
    },
    details.name: {
      sql: `details.name`,
      type: `string`
    },
    details.severity: {
      sql: `details.severity`,
      type: `string`
    },
    .
    .
    .

This throws an error while parsing:
Recompiling schema: {"version":"default_schema_version_bc194c45a5c745c8a93e4b0515c9dcc4"}
Internal Server Error: {"error":"Error: Compile errors:\nSyntaxError: Unexpected token, expected , (16:11)\n at Parser.pp$5.raise (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:4454:13)\n at Parser.pp.unexpected (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:1761:8)\n at Parser.pp.expect (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:1749:33)\n at Parser.pp$3.parseObj (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3978:12)\n at Parser.pp$3.parseExprAtom (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3719:19)\n at Parser.pp$3.parseExprSubscripts (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3494:19)\n at Parser.pp$3.parseMaybeUnary (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3474:19)\n at Parser.pp$3.parseExprOps (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3404:19)\n at Parser.pp$3.parseMaybeConditional (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3381:19)\n at Parser.pp$3.parseMaybeAssign (/home/kru/antphant/git/poc/node_modules/babylon/lib/index.js:3344:19)\n at ErrorReporter.throwIfAny (/home/kru/antphant/git/poc/node_modules/@cubejs-backend/schema-compiler/compiler/DataSchemaCompiler.js:42:13)\n at DataSchemaCompiler.throwIfAnyErrors (/home/kru/antphant/git/poc/node_modules/@cubejs-backend/schema-compiler/compiler/DataSchemaCompiler.js:149:23)\n at repository.dataSchemaFiles.then.then (/home/kru/antphant/git/poc/node_modules/@cubejs-backend/schema-compiler/compiler/DataSchemaCompiler.js:99:14)","authInfo":{"iat":1553003564,"exp":1553089964}}
Describe the bug
The CLI doesn't scaffold a correct .env file based on the -d option.
To Reproduce
Steps to reproduce the behavior:
1. cubejs create hello-world -d bigquery
2. Inspect the generated .env file
Expected behavior
Correct environment variable placeholders.
Hi!
It would be nice if we could pass custom measures based on SQL commands with a query. This could look for example like this:
{
  "measures": ["statistics.count"],
  "additionalMeasures": {
    "id": "weekday",
    "sql": "DAYNAME(statistics.createdat)"
  },
  "dimensions": ["statistics.weekday"],
  "filters": [
    {
      "dimension": "statistics.key",
      "operator": "equals",
      "values": ["timeInRoom"]
    }
  ]
}
At the moment we can achieve the same by adding the sql based measures in the schema on the server. However, it would be easier and more flexible if the client could pass these custom measures along with the query.
I get the following warning for @cubejs-backend/server-core:
@cubejs-backend/schema-compiler > [email protected]: This version is no longer maintained. Please upgrade to the latest version.
Hi all,
As we are using Cube.js in an embedded environment with our own Express server, the following logs are not correct anymore and a bit misleading:
console.log(`Your temporary cube.js token: ${cubejsToken}`); (Link)
console.log(`Dev environment available at ${localUrl}`); (Link)
Suggestion: these logs should not reside in cubejs-server-core, but in cubejs-server.
Is your feature request related to a problem? Please describe.
We provide analytics to healthcare professionals. Due to the sensitive nature of our clients' data, we have opted to separate each of our clients' data into separate database schemas. Unfortunately, even though all of our schemas have the same structure, Cube.js's single-database model prevents us from using Cube.js as an analytics engine on top of our data warehouses.
Describe the solution you'd like
Add the ability to specify multiple data sources in the configuration of the Cube.js service, and add the ability to specify which database schema to use at query time.
Suggested implementation
1. yaml environment files with the following structure:

api_secret: secret
dbs:
  - alias: <YOUR_DB_ALIAS_NAME>
    host: <YOUR_DB_HOST_HERE>
    name: <YOUR_DB_NAME_HERE>
    user: <YOUR_DB_USER_HERE>
    password: <YOUR_DB_PASS_HERE>
    type: postgres

Here alias serves for easier identification in step 3.
2. If host, name, type & user are the same for any two dbs, group these dbs into a single connection. Otherwise deploy one connection per individual database.
3. At query time, allow the use of the alias specified in step 1 to specify the database to interact with.
{
  database: '<YOUR_DB_ALIAS_HERE>',
  measures: ['Stories.count'],
  dimensions: ['Stories.category'],
  filters: [{
    dimension: 'Stories.isDraft',
    operator: 'equals',
    values: ['No']
  }],
  timeDimensions: [{
    dimension: 'Stories.time',
    dateRange: ['2015-01-01', '2015-12-31'],
    granularity: 'month'
  }],
  limit: 100
}

which would translate to picking the right connection + adding a USE database statement:

USE '<YOUR_DB_ALIAS_HERE>';
-- SQL equivalent query

// single database setup
cube(`Users`, {
  sql: `SELECT * FROM users`
});

// multiple database setup
cube(`<YOUR_DB_ALIAS_HERE>`, `Users`, {
  sql: `SELECT * FROM users`
});
Describe alternatives you've considered
Create one Cube.js serverless deployment per database schema (which means managing 10s-100s of deployments) + HIPAA compliance requires us to have our PHI data within a VPC thereby greatly reducing the performance of lambdas due to the extended warming up period.
I know this may seem like a niche use case at first, but I'm sure that the valid use-cases will grow along as you get to encounter more situations. This feature essentially federates all of these different database APIs into a single server, a bit a la GraphQL.
Let me know what you think :)
Have a great day!
Cheers,
Philippe