pysal / mapclassify
Classification schemes for choropleth mapping.
Home Page: https://pysal.org/mapclassify
License: BSD 3-Clause "New" or "Revised" License
Hi,
I'm currently using mapclassify to batch process some maps. I'm using the EqualInterval classifier via GeoPandas, I happened upon an error when I had a column that only contained zeroes everywhere:
stuff.py:122: in _plot_diff
a = EqualInterval(mapdata[tmp_col])
../../../.conda/envs/geo-env/lib/python3.7/site-packages/mapclassify/classifiers.py:1197: in __init__
MapClassifier.__init__(self, y)
../../../.conda/envs/geo-env/lib/python3.7/site-packages/mapclassify/classifiers.py:614: in __init__
self._classify()
../../../.conda/envs/geo-env/lib/python3.7/site-packages/mapclassify/classifiers.py:633: in _classify
self._set_bins()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'EqualInterval' object has no attribute 'bins'") raised in repr()] EqualInterval object at 0x7f661c5d12d0>
def _set_bins(self):
y = self.y
k = self.k
max_y = max(y)
min_y = min(y)
rg = max_y - min_y
width = rg * 1.0 / k
> cuts = np.arange(min_y + width, max_y + width, width)
E ValueError: arange: cannot compute length
../../../.conda/envs/geo-env/lib/python3.7/site-packages/mapclassify/classifiers.py:1207: ValueError
As far as I can tell this happens because width becomes zero when max_y and min_y have the same value. I would like a ValueError to be raised in this case explaining what's wrong. Would a PR for this be ok? Another possible solution would be to fall back to k = 1 and skip all the binning logic when this case happens, but that feels unintuitive to me.
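For reference, a minimal pure-Python sketch of the guard proposed above (the function name and error message are hypothetical, not mapclassify code):

```python
def equal_interval_bins(y, k=5):
    """Compute equal-interval upper bounds, guarding the degenerate case.

    Sketch only: mirrors the width computation in _set_bins, with an
    explicit ValueError when the data range is zero.
    """
    max_y, min_y = max(y), min(y)
    if max_y == min_y:
        raise ValueError(
            "Cannot form k classes: min and max of the data are identical."
        )
    width = (max_y - min_y) / k
    # the upper bound of class i is min_y + (i + 1) * width
    return [min_y + (i + 1) * width for i in range(k)]
```

With this guard, a column of zeroes raises a clear ValueError instead of the opaque "arange: cannot compute length".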
The UserDefined classifier is based on the upper bounds of each class, which means the resulting legend, accessed as .get_legend_classes(), is inconsistent across different y arrays: the lower bound of the first class is always the minimum value of y.
I would add an optional min keyword, which would fix the lower bound regardless of y, to get consistent legends. That seems to be the best way of fixing geopandas/geopandas#2018.
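A stdlib sketch of what the proposed keyword could do (the function and the `min_bound` keyword are hypothetical illustrations, not the mapclassify API):

```python
def user_defined_legend(y, bins, min_bound=None):
    """Sketch of a UserDefined-style legend with an optional fixed lower bound.

    When `min_bound` is given, the first legend entry always starts there
    instead of at min(y), so legends stay consistent across y arrays.
    """
    lower = min(y) if min_bound is None else min_bound
    edges = [lower] + list(bins)
    # each legend class is a (lower, upper) pair
    return [(edges[i], edges[i + 1]) for i in range(len(bins))]
```

With min_bound=0, two arrays with different minima produce identical legend intervals for the same bins.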
Besides missing a few documentation files (this is arguable, though lacking README.md
seems unfortunate), the sdist seems to contain a few extra files:
mapclassify/deprecation.py
mapclassify/flycheck_classifiers (serges-MacBook-Pro.local's conflicted copy 2019-07-03).py
mapclassify/test.py
We have a regression in 2.4.0. This commit 7aad6fc introduced the following check:
mapclassify/mapclassify/classifiers.py
Lines 632 to 633 in b92a7e3
However, that check is irrelevant if you use the UserDefined classifier. momepy's CI went red with 2.4.0. I define bins using the whole array and then use the same bins on subsets of the data to get counts per bin, so it is perfectly fine that min == max.
Would it not be better to fall back to k=1 and raise a warning instead of an error in general? You may have discussed that before though... (cc @jeffcsauer)
We should ideally fix this and do a 2.4.1 bugfix release before we do the meta release.
The implemented FisherJenks classifier is very slow. I would like to suggest using jenkspy instead. It's written in Cython and it's very fast.
AttributeError: module 'mapclassify' has no attribute 'Fisher_Jenks'
Hi,
for some reason, if I want to install mapclassify with Python 3.7 on Windows from conda-forge, I get an UnsatisfiableError (https://ci.appveyor.com/project/martinfleis/momepy/builds/28862313). There is nothing else in the environment:
name: test
channels:
- conda-forge
dependencies:
- python=3.7
- mapclassify
If I unpin Python, I get Python 3.8 but mapclassify 2.0.1, so I suspect an issue with scikit-learn, but I have no clue why this happens. The same situation occurs with pysal.
Edit: If I keep mapclassify only, I get 2.0.1; if I pin mapclassify to 2.1.1, I get the error.
Hi,
I use mapclassify.Natural_Breaks() to produce bins for my MapBox heatmap.
My code looks like this:
df = pd.read_table('./files_output/customer_qty.txt', sep=',', header=None).iloc[:, 1]
mapclassify.Natural_Breaks(df, k=5)
I thought it should return 5 classes, but it only returned 3 classes.
The output is:
Natural_Breaks
Lower Upper Count
=============================================
x[i] <= 1.000 54428
1.000 < x[i] <= 26.000 2475
26.000 < x[i] <= 212.000 66
The attached customer_qty.txt is the data file.
I have randomly bumped into this paper proposing a Maximum Likelihood–Based Classification Scheme. It may be worth adding to mapclassify's portfolio.
Wangshu Mu & Daoqin Tong (2019) Choropleth Mapping with Uncertainty: A Maximum Likelihood–Based Classification Scheme, Annals of the American Association of Geographers, 109:5, 1493-1510, DOI: 10.1080/24694452.2018.1549971
Current documentation page is missing a tutorial.
I am trying to compare the differences between the number of items in four different data sets using a consistent scale. The issue is that if one of the data sets does not have data in a certain interval, the color scheme will be thrown off and one color will be missing. This means that the color that was supposed to represent the maximum value across all data sets (0.46) will now also represent the maximum value of a different data set (0.2).
The data was in a GeoDataFrame with data representing each of the four cities in separate columns. I defined a scale that would match the largest value in the four columns and made a number of bins (3 or 4) from 0 to the maximum value. The scale worked well for the column with the largest value, but was incorrect for all of the other columns.
This problem persists no matter how many bins I create, even when making sure that each column has data within each of the bins I create.
If there is a better way to create a consistent scale across multiple axes, please let me know. Thank you.
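One way to get a consistent scale is to classify every column against the same, explicitly fixed bin edges rather than letting each plot derive its own. A stdlib sketch with hypothetical data (the function names and the `cities` dict are illustrations, not part of any library):

```python
import bisect

def shared_bins(columns, k=4):
    """Build one set of equal-width bin edges covering all columns."""
    top = max(max(col) for col in columns.values())
    return [top * (i + 1) / k for i in range(k)]

def classify(values, bins):
    """Assign each value the index of the first bin whose upper bound covers it."""
    return [bisect.bisect_left(bins, v) for v in values]

cities = {"a": [0.05, 0.46], "b": [0.1, 0.2]}  # hypothetical data
bins = shared_bins(cities, k=4)                # same edges for every city
labels = {name: classify(vals, bins) for name, vals in cities.items()}
```

Because every column is classified against the same bins, a class that happens to be empty in one city still exists in the scheme, so colors stay aligned across maps; with geopandas this corresponds to passing the shared bins through the UserDefined scheme.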
Change the default mapclassify branch from master to main.
Follow the changes made in libpysal to move from travis to github actions.
Since mapclassify switched from readthedocs to GitHub hosted docs (#41), the badges on README.md and README.rst are no longer functional or necessary.
For more details on this visualization component for demonstrating mapclassify see the design document/discussions
HeadTailBreaks raises a RecursionError if the maximum value occurs in the data twice (or more). The recursion then locks itself in the loop on values[values >= mean]: the mean stays the same and both values are always returned.
Steps to reproduce:
import numpy as np
import mapclassify as mc

data = np.random.pareto(2, 1000)
data = np.append(data, data.max())
mc.HeadTailBreaks(data)
I assume that once only identical values remain, head_tail_breaks should stop:
def head_tail_breaks(values, cuts):
    """
    head tail breaks helper function
    """
    values = np.array(values)
    mean = np.mean(values)
    cuts.append(mean)
    if len(values) > 1:
        if len(set(values)) > 1:  # this seems to fix the issue
            return head_tail_breaks(values[values >= mean], cuts)
    return cuts
However, I am not sure whether stopping and keeping multiple values in the last bin is the intended behaviour, as it does not reflect the definition of the HeadTailBreaks algorithm (but I cannot see another solution). Happy to do a PR if this is how you want to fix it.
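The stopping condition can be checked without numpy; a pure-Python rendering of the same recursion (a sketch, not the mapclassify implementation):

```python
from statistics import mean

def head_tail_breaks_py(values, cuts):
    """Pure-Python rendering of the helper with the proposed stop.

    Recurses on the head (values >= mean) only while the remaining
    values are not all identical, so a duplicated maximum no longer
    recurses forever.
    """
    m = mean(values)
    cuts.append(m)
    if len(set(values)) > 1:
        head = [v for v in values if v >= m]
        return head_tail_breaks_py(head, cuts)
    return cuts
```

For example, head_tail_breaks_py([1, 2, 8, 8], []) terminates with cuts [4.75, 8] instead of recursing forever on the two 8s.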
If you have an array which contains values with a tiny difference, you send HeadTailBreaks into endless recursion.
import numpy as np
from mapclassify import HeadTailBreaks

HeadTailBreaks(np.array([1 + 2**-52, 1, 1]))
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-9-9dcdc720a93b> in <module>
----> 1 HeadTailBreaks(np.array([1 + 2**-52, 1, 1]))
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in __init__(self, y)
1128
1129 def __init__(self, y):
-> 1130 MapClassifier.__init__(self, y)
1131 self.name = "HeadTailBreaks"
1132
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in __init__(self, y)
612 self.fmt = FMT
613 self.y = y
--> 614 self._classify()
615 self._summary()
616
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in _classify(self)
631
632 def _classify(self):
--> 633 self._set_bins()
634 self.yb, self.counts = bin1d(self.y, self.bins)
635
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in _set_bins(self)
1135 x = self.y.copy()
1136 bins = []
-> 1137 bins = head_tail_breaks(x, bins)
1138 self.bins = np.array(bins)
1139 self.k = len(self.bins)
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in head_tail_breaks(values, cuts)
182 cuts.append(mean)
183 if len(set(values)) > 1:
--> 184 return head_tail_breaks(values[values >= mean], cuts)
185 return cuts
186
... last 1 frames repeated, from the frame below ...
/opt/conda/lib/python3.7/site-packages/mapclassify/classifiers.py in head_tail_breaks(values, cuts)
182 cuts.append(mean)
183 if len(set(values)) > 1:
--> 184 return head_tail_breaks(values[values >= mean], cuts)
185 return cuts
186
RecursionError: maximum recursion depth exceeded in comparison
mapclassify/mapclassify/classifiers.py
Lines 176 to 185 in 6cf31ee
Hello, do you know how to change the default .2f precision of the interval labels to more decimal places?
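If I read the tracebacks elsewhere on this page right, classifiers store their label format on self.fmt, so widening the precision should come down to a different format spec. A stdlib illustration of the format strings involved (not a confirmed mapclassify API):

```python
# The default interval labels use a "{:.2f}"-style spec; more decimals
# is just a different format string.
value = 1.234567

default = "{:.2f}".format(value)  # 2 decimal places
wider = "{:.6f}".format(value)    # 6 decimal places
```

In mapclassify itself, assigning such a spec to the classifier's fmt attribute before printing seems worth trying.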
Currently a development install of mapclassify will fail because the sys module is used on this line but it's never imported.
Opening up a thread here to sketch out what a different, streamlined API could look like for mapclassify algorithms. This grows out of discussions with other projects (e.g. xarray, ipyleaflet). The basic idea is that we would like mapclassify to be as easy to use and as useful as possible to potential users, and we've started thinking our current API could be streamlined for the "80% of cases" where you have a 1D array of values and you want back a set of labels.
Based on this, we were thinking of starting by adding a method that would wrap around all the available classification methods. For a very rough sketch, something like:
a = numpy.random.random(100)
q = mapclassify.classify(a, "quantiles", k=5)
Another option would be to develop it following the sklearn pattern:
classifier = mapclassify.Quantiles(k=5)
q = classifier.fit_transform(a)
And there might be others more useful for folks. Please, if you feel inclined, do drop your views here; it'd be really useful to get as many as possible. Also, of course, if you have any other ideas or "wish-list" items that would make you more likely to use/adopt mapclassify for choropleth classification tasks, we'd love to help if possible.
Tagging a few folks we imagine might be interested in some way (@brendancol, @kristinepetrosyan, @martinRenou, @jorisvandenbossche, @sjsrey, @ljwolf, @slumnitz ), but feel free to tag more as you go along!
Update the binder environment.yml branch to main.
xref #130
In 8505795 I commented out the update method for MaxP due to a linting failure, and tests were passing without it. The linting was failing because bins is used but never passed in or defined within the method. I am wondering if this is a bug introduced through copy-paste of another classifier's update method, for example UserDefined?
color.py now has a mixture of brewer2mpl and palettable.colorbrewer, both of which provide the same functionality and API. I think it makes more sense to use only one of them so that mapclassify has fewer dependencies.
Quantiles seems to return numbers of bins that differ from k in some contexts:
In [45]: y = db['HR60']
In [46]: breaks = ps.Quantiles(y, 9).bins
In [47]: len(breaks)
Out[47]: 8
In [49]: y.min()
Out[49]: 0.0
In [50]: y.max()
Out[50]: 92.936802974000003
In [51]: breaks
Out[51]:
array([ 0. , 1.13094026, 2.19806302, 3.44429745,
5.13821807, 7.47789373, 10.86644222, 92.93680297])
In [67]: [print(i, ps.Quantiles(y, k=i).bins.shape[0]) for i in range(1, 10)]
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 7
9 8
Out[67]:
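This is likely the quantile-ties problem: with heavily skewed data, several of the k quantile cut points coincide, and duplicate cut values can only form one bin. A stdlib illustration of how ties collapse the number of unique breaks (synthetic data, not the HR60 column):

```python
from statistics import quantiles

# 90% zeros: all of the interior quantile cuts for k=9 land on 0.0
y = [0.0] * 90 + [float(v) for v in range(1, 11)]

cuts = quantiles(y, n=9)           # 8 interior cut points for k=9
unique_cuts = sorted(set(cuts))    # duplicates collapse to fewer breaks
```

After dropping duplicates, fewer than k distinct breaks remain, which matches the shrinking bin counts in the session above.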
For others who may stumble onto this same issue, I just wanted to make the note that Mapclassify will work with Python 2.7 if you clone the repository, make one small change to setup.py (add the line "from io import open"), and run pip install with the -e parameter pointing to your local version of the repository (e.g., python2 -m pip install -e ~/my_packages/mapclassify).
My experience has been that some of the other packages useful for choropleth visualizations are not compatible with Python 3, so it's great that Mapclassify works with both.
Thank you for making this excellent tool!
add example notebook showcasing the new classify API developed here: #90
In working through #135 and making some doc edits, I'm seeing that the docstrings and notebooks are in need of a thorough scouring for consistent formatting, grammar & spelling, etc., along with an update to the docs/ infrastructure itself. I'll get to working on that and split it from the work I've already started in #135.
There are some warnings when byte-compiling files in Python 3.8; this may eventually stop working in newer Python versions.
mapclassify-2.2.0/mapclassify/classifiers.py:568: DeprecationWarning: invalid escape sequence \l
mapclassify-2.2.0/mapclassify/classifiers.py:2593: DeprecationWarning: invalid escape sequence \s
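These typically come from LaTeX-style sequences such as \leq inside docstrings; the usual fix is to make the docstring a raw string so the backslash is literal. A quick illustration:

```python
# A normal string treats "\l" as an (invalid) escape sequence, which
# newer Pythons warn about at compile time; a raw string keeps the
# backslash literal with no warning.
plain = "a \\leq b"   # escaping the backslash by hand
raw = r"a \leq b"     # raw string: same characters, no warning
```

Both spell the exact same characters, so switching docstrings to raw strings changes nothing in the rendered docs.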
Just wondering if the utility functions gadf and K_classifiers should be included in __init__.py.
The plot utility function for classification objects does not seem to work for pooled classifications. This might be expected behaviour or unsupported functionality, but flagging it just in case.
Reproducible error:
from pysal.lib import examples
import mapclassify
import geopandas
mx_ex = examples.load_example('mexico')
mx = geopandas.read_file(mx_ex.get_file_list()[0])
years = ['PCGDP1940', 'PCGDP1960', 'PCGDP1980', 'PCGDP2000']
pooled = mapclassify.Pooled(mx[years])
for i, y in enumerate(years):
classi = pooled.col_classifiers[i]
classi.plot(mx)
which on gds_env:6.1 returns:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/opt/conda/lib/python3.8/site-packages/geopandas/plotting.py in _mapclassify_choro(values, scheme, **classification_kwds)
1010 try:
-> 1011 scheme_class = schemes[scheme]
1012 except KeyError:
KeyError: 'pooled quantiles'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
/opt/conda/lib/python3.8/site-packages/geopandas/plotting.py in _mapclassify_choro(values, scheme, **classification_kwds)
1014 try:
-> 1015 scheme_class = schemes[scheme]
1016 except KeyError:
KeyError: 'pooled quantiles'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-20-f10e01e9c42e> in <module>
12 for i, y in enumerate(years):
13 classi = pooled.col_classifiers[i]
---> 14 classi.plot(mx)
/opt/conda/lib/python3.8/site-packages/mapclassify/classifiers.py in plot(self, gdf, border_color, border_width, title, legend, cmap, axis_on, legend_kwds, file_name, dpi, ax)
2336 fmt = legend_kwds.pop("fmt")
2337
-> 2338 ax = gdf.assign(_cl=self.y).plot(
2339 column="_cl",
2340 ax=ax,
/opt/conda/lib/python3.8/site-packages/geopandas/plotting.py in __call__(self, *args, **kwargs)
923 kind = kwargs.pop("kind", "geo")
924 if kind == "geo":
--> 925 return plot_dataframe(data, *args, **kwargs)
926 if kind in self._pandas_kinds:
927 # Access pandas plots
/opt/conda/lib/python3.8/site-packages/geopandas/plotting.py in plot_dataframe(df, column, cmap, color, ax, cax, categorical, legend, scheme, k, vmin, vmax, markersize, figsize, legend_kwds, categories, classification_kwds, missing_kwds, aspect, **style_kwds)
750 classification_kwds["k"] = k
751
--> 752 binning = _mapclassify_choro(values[~nan_idx], scheme, **classification_kwds)
753 # set categorical to True for creating the legend
754 categorical = True
/opt/conda/lib/python3.8/site-packages/geopandas/plotting.py in _mapclassify_choro(values, scheme, **classification_kwds)
1015 scheme_class = schemes[scheme]
1016 except KeyError:
-> 1017 raise ValueError(
1018 "Invalid scheme. Scheme must be in the set: %r" % schemes.keys()
1019 )
ValueError: Invalid scheme. Scheme must be in the set: dict_keys(['boxplot', 'equalinterval', 'fisherjenks', 'fisherjenkssampled', 'headtailbreaks', 'jenkscaspall', 'jenkscaspallforced', 'jenkscaspallsampled', 'maxp', 'maximumbreaks', 'naturalbreaks', 'quantiles', 'percentiles', 'stdmean', 'userdefined'])
We should consider removing the old MapClassifier._table_string() method, which is not used anymore; the updated functionality is implemented in _get_table(), which is a stand-alone function. We should also consider converting _get_table() to a method, since it is only called from within MapClassifier.
Hi all! I'm working with some census data to build choropleth maps and I'm wondering what the difference is between the natural breaks and Fisher-Jenks schemes. As far as I know, both of them minimize the variance within classification groups while separating the groups from each other, but I'm not getting the detailed difference between the two methods.
I was looking into the examples pointed out in the implementation section here in the repo and they look pretty similar. And here I share another one with the data I'm using now:
As the results of both methods are very similar, I'm wondering: is the Fisher-Jenks method a kind of optimization of the natural breaks scheme?
Many thanks for your help!
We need a new release on PyPI, since api.py has been removed and several packages have mapclassify as a dependency. The last release was in August 2017.
I have seen the RecursionError coming from HeadTailBreaks again, this time caused by floating-point imprecision.
This snippet reproduces the issue; the parquet file is just 2 KB, containing only the problematic bit of the values.
import pandas
import mapclassify

df = pandas.read_parquet("https://www.dropbox.com/s/p9pgg2pdvnhvgsw/sample.parquet?dl=1")
bins = mapclassify.HeadTailBreaks(df["values"])
The current workaround is to round the values before sending them to mapclassify, but we should somehow resolve this under the hood (not that I know how off the top of my head).
bins = mapclassify.HeadTailBreaks(df["values"].round(6))
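One possible under-the-hood fix (a sketch, not mapclassify code) is to stop the recursion once the remaining values span a negligible relative range, instead of requiring exact equality:

```python
import math

def nearly_constant(values, rel_tol=1e-9):
    """True when the remaining values are equal within floating-point noise.

    A possible replacement for the exact `len(set(values)) > 1` check in
    head_tail_breaks: stops the recursion when min and max differ only by
    rounding error, which is what triggers the RecursionError here.
    """
    return math.isclose(min(values), max(values), rel_tol=rel_tol)
```

This would make the explicit .round(6) workaround unnecessary, at the cost of choosing a tolerance.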
After #78 the 3 images/links to tutorials are broken with a 404 message.
For example:
https://nbviewer.jupyter.org/github/pysal/mapclassify/blob/master/notebooks/south.ipynb
The balanced strategy in greedy seems to be broken.
import geopandas as gpd
from mapclassify import greedy

df = gpd.read_file('https://gist.githubusercontent.com/martinfleis/5c669d1204d120d87f179e31c896043d/raw/5799db9c48ec1d20a309fc1b18b7edf07a8aca6b/gb.geojson')
df.plot(greedy(df, strategy='balanced'), figsize=(12, 12), edgecolor='w')
Strategies from networkx work fine. I'll try to figure out what is going on later.
FisherJenksSampled seems to be the only classifier which has trouble processing a pandas.Series as y; passing the same data as an array (df.pop_est.values) works flawlessly. We should put y = np.asarray(y) somewhere around here to make sure we get an array every time.
import geopandas
import mapclassify

df = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
mapclassify.FisherJenksSampled(df.pop_est)
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/pandas/core/computation/expressions.py:204: UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&' instead
f"evaluating in Python space because the {repr(op_str)} "
(the warning above is repeated five times)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-5f392df02277> in <module>
----> 1 mapclassify.FisherJenksSampled(df.pop_est)
~/Git/mapclassify/mapclassify/classifiers.py in __init__(self, y, k, pct, truncate)
1852 self.name = "FisherJenksSampled"
1853 self.y = y
-> 1854 self._summary() # have to recalculate summary stats
1855
1856 def _set_bins(self):
~/Git/mapclassify/mapclassify/classifiers.py in _summary(self)
624 def _summary(self):
625 yb = self.yb
--> 626 self.classes = [np.nonzero(yb == c)[0].tolist() for c in range(self.k)]
627 self.tss = self.get_tss()
628 self.adcm = self.get_adcm()
~/Git/mapclassify/mapclassify/classifiers.py in <listcomp>(.0)
624 def _summary(self):
625 yb = self.yb
--> 626 self.classes = [np.nonzero(yb == c)[0].tolist() for c in range(self.k)]
627 self.tss = self.get_tss()
628 self.adcm = self.get_adcm()
<__array_function__ internals> in nonzero(*args, **kwargs)
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/numpy/core/fromnumeric.py in nonzero(a)
1894
1895 """
-> 1896 return _wrapfunc(a, 'nonzero')
1897
1898
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
56 bound = getattr(obj, method, None)
57 if bound is None:
---> 58 return _wrapit(obj, method, *args, **kwds)
59
60 try:
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds)
49 if not isinstance(result, mu.ndarray):
50 result = asarray(result)
---> 51 result = wrap(result)
52 return result
53
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/pandas/core/generic.py in __array_wrap__(self, result, context)
1788 return result
1789 d = self._construct_axes_dict(self._AXIS_ORDERS, copy=False)
-> 1790 return self._constructor(result, **d).__finalize__(
1791 self, method="__array_wrap__"
1792 )
/opt/miniconda3/envs/geo_dev/lib/python3.7/site-packages/pandas/core/series.py in __init__(self, data, index, dtype, name, copy, fastpath)
312 if len(index) != len(data):
313 raise ValueError(
--> 314 f"Length of passed values is {len(data)}, "
315 f"index implies {len(index)}."
316 )
ValueError: Length of passed values is 1, index implies 177.
Tests for greedy on Python 3.6 have been failing for some time now (probably for the past 4 months). It seems to be related to networkx, but networkx is installed in the CI environment (version 2.7.1). Maybe it has to do with the Python versions supported by networkx?
We may want to simply remove 3.6 from the testing suite and also slim down the testing matrix.
The docs list the current version of mapclassify as v2.1.1, while the current (pre-)released version is v2.3.0. Should the docs be rebuilt now or after the official release of v2.3.0? If they should be rebuilt now, I'll go ahead and make a PR.
The spaces added to the titles of classifier table strings are causing linting to fail when updating docstrings. I can see no reason not to get rid of those extra spaces (tests pass locally without the spaces in the string repr).
It's a minor thing, but I vote for removing them.
@sjsrey Is the "official" mapclassify code formatting style black? If not, shall I create a PR that blackens the repo and adds a black badge to README.md?
For example, see here.
mapclassify/tests/test_mapclassify.py::TestUserDefined::test_UserDefined_invariant
/Users/user/mapclassify/mapclassify/classifiers.py:908: RuntimeWarning: invalid value encountered in double_scalars
gadf = 1 - self.adcm / adam
xref: #128, geopandas/geopandas#2553