libMesh github repository
Home Page: http://libmesh.github.io
License: GNU Lesser General Public License v2.1
Apparently the former doesn't work well when cross-compiling, but the latter is OK? At least petsc.m4 and tbb.m4 need to be fixed.
// This example also shows how to extend example 3 to run in
// parallel. Notice how little has changed! The significant
// differences are marked with "PARALLEL CHANGE".
Grepping for PARALLEL yields one result, the comment above.
I get the following error after all the files are compiled:
make[2]: rv: Command not found
The static lib is not created.
rv are flags passed to the ar command, so it seems that $(AR) is not defined in the makefiles. Where in the build system is this variable defined?
Update: In fact $(AR) is not defined, I replaced it by the actual command /usr/bin/ar and it created the library.
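For what it's worth, with automake the usual fix is to have configure.ac invoke the archiver macro, which defines and substitutes $(AR) so the generated makefiles no longer fall back to an empty command. A minimal fragment (assuming automake >= 1.12, where this macro is required for static archives):

```m4
dnl In configure.ac: AM_PROG_AR locates the archiver and defines/AC_SUBSTs
dnl the AR variable used by the generated makefiles.
AM_PROG_AR
AC_PROG_RANLIB
```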
I was looking into the DofMap::dof_indices() function to see what I might do to speed it up.
To see just how expensive e.g. FEInterface::n_dofs_at_node() is, I actually had to run the code through the preprocessor to see how all the macros expanded.
What I see is that it becomes a double switch which dispatches to another double switch, and it gets called obviously for each node on the element.
I was thinking about making the current n_dofs_at_node() a private member of FEInterface, and using it to build up static arrays, say
_n_dofs_at_node[DIM][FETYPE][ELEM_TYPE][NODENUM];
but to do this right, it will probably involve cleaning up our FE_FAMILY enum so it is packed, which means writing a string or something instead of a number to the restart files.
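One way to see the shape of the idea: build the table once from the existing (expensive) implementation and do O(1) lookups afterward. A minimal sketch, where the enums, sizes, and per-node values are placeholders and not the real libMesh FEFamily/ElemType data:

```cpp
#include <array>
#include <cassert>

// Placeholder enums; the real point is that packed, contiguous enums
// make direct array indexing possible.
enum FEFamily { LAGRANGE = 0, HIERARCHIC, N_FE_FAMILIES };
enum ElemType { EDGE2 = 0, QUAD4, N_ELEM_TYPES };

struct DofTable
{
  // _n_dofs_at_node[family][elem_type][node]
  std::array<std::array<std::array<unsigned, 9>, N_ELEM_TYPES>,
             N_FE_FAMILIES> table{};

  DofTable()
  {
    // Populate once, using the existing switch-based implementation;
    // hard-coded here purely for illustration (LAGRANGE on QUAD4 has
    // one dof per vertex).
    for (unsigned n = 0; n < 4; ++n)
      table[LAGRANGE][QUAD4][n] = 1;
  }

  unsigned n_dofs_at_node(FEFamily f, ElemType t, unsigned node) const
  { return table[f][t][node]; }
};

const DofTable & dof_table()
{
  static const DofTable t;  // built on first use
  return t;
}
```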
@roystgnr, @jwpeterson, @pbauman, @friedmud - Thoughts on this or on the broader issue of FEInterface altogether?
-Ben
Well, I thought the GitHub "downloads" feature was the way to go, only to find out that they deprecated it two days after we uploaded our v0.8.0 tarballs: https://github.com/blog/1302-goodbye-uploads
While I recognize some (@friedmud) may not want to use them, I still maintain that packaged release files are a good idea, particularly for first-time users of a tool. Rarely do I find something that looks promising and svn checkout or git clone its code; rather, I look for the most recent packaged version. Given the traffic to the Sourceforge download site (see for example http://sourceforge.net/projects/libmesh/files/libmesh/libmesh-0.8.0), I'm not alone in that.
So what to do?
We can't just assume SHELL elements work the same as "face" elements. There are issues with the sidesets. Either they need to be removed or fleshed out.
While
-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC
are nice and should probably remain the default in DEBUG mode, it would be nice to defeat them when needed. This is especially true as more external dependencies are compiled with C++. For example, currently in FIN-S I cannot use Cantera in DEBUG mode, unless I were to recompile Cantera...
So if I truly need DEBUG and Cantera at the same time, I presently either
neither of which are a good solution.
Note that when the user specifies --disable-gcc-debugging, it should also be possible to run the CppUnit unit tests in DEBUG mode, so add an AM_CONDITIONAL as part of this.
Adding @roystgnr as a watcher in case I'm crazy and need to be set straight.
We have some folks trying to use the libmesh-generated libtool script to link Fortran libraries and executables on Linux. It may be the first time someone has tried this (?) and therefore we've never noticed the issue before, but I think I've tracked it down to the "postdeps" line in the "FC" section of $LIBMESH_DIR/contrib/bin/libtool, which I've pasted below.
postdeps="-L/opt/packages/mpich/mpich-3.0.2/gcc-opt/lib -lmpichf90
-lmpich -lopa -lmpl -lrt -lpthread -l -l
-L/opt/packages/mpich/mpich-3.0.2/gcc-opt/lib
-L/usr/lib/gcc/x86_64-linux-gnu/4.6
-L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu
-L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib
-L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu
-L/usr/lib/../lib
-L/opt/packages/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21
-L/usr/lib/gcc/x86_64-linux-gnu/4.6/../../.. -lmpichf90 -lmpich -lopa
-lmpl -lrt -lpthread -lgfortran -lm -lgcc_s -lquadmath -lm -lgcc_s -lc
-lgcc_s"
As you can see, there are two naked "-l" flags in this variable, and whenever libtool tries to use it to link a Fortran shared library, it obviously fails.
Unfortunately I don't know where to go from here in debugging the problem... note that it doesn't seem to affect Macs: the postdeps line is just blank for that OS. Also, if I hand edit the libtool script file and remove those -l's things seem to work fine.
Would you like to wrap any pointer data members with the class template std::unique_ptr?
Update candidate: DofMap
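A minimal sketch of what the change could look like for one hypothetical owning member; the class and member names below are simplified stand-ins, not the real DofMap internals:

```cpp
#include <memory>
#include <vector>

// Stand-in for the real sparsity pattern class.
struct SparsityPattern { std::vector<unsigned> nnz; };

class DofMap
{
public:
  void compute_sparsity()
  {
    // Replaces "delete _sp; _sp = new SparsityPattern;" -- the old
    // pattern (if any) is freed automatically on reset.
    _sp.reset(new SparsityPattern);
    _sp->nnz.assign(10, 5);
  }

  const SparsityPattern * sparsity() const { return _sp.get(); }

private:
  // Was: SparsityPattern * _sp;  (raw pointer, manual delete in ~DofMap)
  std::unique_ptr<SparsityPattern> _sp;
};
```

The destructor then needs no hand-written delete at all.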
Note that -Wall does not turn on -Wshadow.
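A contrived illustration (not libMesh code) of the kind of bug -Wshadow catches and -Wall alone does not:

```cpp
#include <cassert>

// The parameter 'w' shadows the member 'w'.  g++/clang -Wall stays
// silent here; adding -Wshadow produces a warning.
struct Rect
{
  int w = 3;
  int h = 4;

  int scaled_area(int w)   // shadows the member 'w'
  { return w * h; }        // uses the parameter, perhaps unintentionally
};
```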
I just noticed our BuildBot ParallelMesh configuration (four MPI ranks) hitting an error there:
Beginning Solve 9
System has: 445 degrees of freedom.
Linear solver converged at step: 30, final residual: 1.09806e-15
L2-Error is: 0.000182515
H1-Error is: 0.00601168
Warning: This MeshOutput subclass only supports meshes which are contiguously renumbered!
Assertion `!obj->valid_unique_id() || obj->unique_id() == unique_filled_request[i]' failed.
[0] src/mesh/parallel_mesh.C, line 1025, compiled Nov 5 2013 at 15:59:47
Any chance you can replicate this, @permcody ?
GMVIO is (I think) the only output format that works with --enable-complex.
OK, the discussion in #62 and fear of the alternate implementation got me thinking that if we had better core support for Singletons inside the library the alternative may not be so bad...
Right now the only one we really have is RemoteElem. I've introduced more in #62, so it seems maybe we need a more general way to handle these.
Ideally singletons can be created as static data inside a class, or as a static object inside a function. The latter allows you to control when the singleton is created:
const Singleton & get_singleton()
{
  // some thread safety mechanism...
  static Singleton singleton;
  return singleton;
}
The nice thing about that is the singleton is not created until needed. Unfortunately the destruction is still a mystery (to me anyway?), and problematic if the Singleton is a reference counted object.
Our current approach is to create singletons in LibMeshInit() and destroy them in ~LibMeshInit(). I like this, but rather than have LibMeshInit() ultimately know and manage all singletons, what if instead it contained a list of pointers to LibMeshSingleton objects or something?
class LibMeshSingleton
{
public:
  virtual ~LibMeshSingleton() {}
  virtual void tear_down() = 0;
};
The LibMeshInit destructor loops over all objects (that were added somewhere else) and calls tear_down(), thereby allowing a predictable and safe destruction order, while still allowing singletons to be created only at the time of first access, which in some cases could be never.
In the RemoteElem implementation then, for example, there is a simple
class RemoteElemSingleton : public LibMeshSingleton
{
public:
  RemoteElemSingleton ()
  {
    // constructor adds *this to the LibMeshInit list of singletons
    // in a thread-safe way
  }

  virtual void tear_down ()
  {
    if (_remote_elem)
      {
        delete _remote_elem;
        _remote_elem = NULL;
      }
    ...
  }
};
@roystgnr, @jwpeterson, @friedmud -- thoughts?
The last example reorganization was well-intentioned, but now we are slowly making our way into exactly the position we were in before: does anyone know what miscellaneous_ex1--8 (after Ben's recent branch merges) are off the top of their head? I didn't think so...
My current suggestion for re-reorganizing them derives from the following axioms:
1.) Naming examples with numbers implies a sequential ordering
2.) Examples cannot (easily) be maintained in sequential order; and not all examples logically fall into a step-by-step sequence
3.) Nested directory structures are limiting and arbitrary (think gmail labels vs. folders)
So I propose we do the following:
1.) Come up with a relatively short, but informative, name (which does not have a number) for every example
2.) Put each one in a separate subdirectory of the examples/ directory (like they were before)
3.) Come up with a set of "tags" for each example which can be placed in the comments, and will allow people to search for an example (via grep) that is relevant to them.
For example (bad pun, I know):
introduction_ex1 -> read_write_mesh
introduction_ex2 -> intro_to_equation_systems
introduction_ex3 -> simple_poisson
introduction_ex4 -> advanced_poisson (*the previous example could probably just be dropped?)
introduction_ex5 -> runtime_quadrature_selection (again, maybe get rid of this one?)
adaptivity_ex1 -> 1D_reaction_diffusion_amr
adaptivity_ex2 -> unsteady_convection_diffusion_amr
etc.
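To make point 3 concrete, here is one possible tag convention and the corresponding grep, sketched against a throwaway directory; the paths and tag syntax are invented:

```shell
# Hypothetical convention: each example's main source file carries a
# comment line like "// tags: poisson, amr".  Build a fake tree:
mkdir -p /tmp/ex_demo/simple_poisson
echo '// tags: poisson, introduction' > /tmp/ex_demo/simple_poisson/simple_poisson.C
mkdir -p /tmp/ex_demo/unsteady_convection_diffusion_amr
echo '// tags: amr, transient' > /tmp/ex_demo/unsteady_convection_diffusion_amr/main.C

# List every example tagged "amr":
grep -rl 'tags:.*amr' /tmp/ex_demo
```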
Please let me know your thoughts...
I'm in the process of mirroring our svn repo, but it is definitely taking some time... Be patient & I'll close this issue when it is done.
If you remove your build directory after running make install, we see warnings like the following when linking application codes against libmesh:
ld: warning: directory not found for option '-L/tmp/libmesh-builds/build/contrib/tecplot/binary'
The cause seems to be that a build directory ends up in the dependency_libs variable of the libmesh.la file:
dependency_libs=' /opt/packages/libmesh/lib/libnetcdf.la -lcurl -L/tmp/libmesh-builds/build/contrib/tecplot/binary -ltecio_vendor ...
Note: I have seen this in other places as well, for example GCC and openMPI, so I think it's fairly easy to do with automake packages :-P The developers presumably rarely see these errors because their build directories are hanging around. @benkirk, any ideas?
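Until the root cause in the generated .la files is fixed, one crude workaround is to scrub the stale -L entries after install. A sketch against a fabricated file, not a fix in the build system itself:

```shell
# Fabricated stand-in for an installed libmesh.la:
cat > /tmp/libmesh_demo.la <<'EOF'
dependency_libs=' /opt/packages/libmesh/lib/libnetcdf.la -lcurl -L/tmp/libmesh-builds/build/contrib/tecplot/binary -ltecio_vendor'
EOF

# Strip any -L flag that points into the (now deleted) build tree:
sed -i.bak 's|-L/tmp/libmesh-builds/build[^ ]*||g' /tmp/libmesh_demo.la
cat /tmp/libmesh_demo.la
```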
make
fails with:
*** Warning: Linking the shared library libcontrib_opt.la against the
*** static library ../contrib/tecplot/lib/x86_64-apple-darwin10.8.0/tecio.a is not portable!
copying selected object files to avoid basename conflicts...
/usr/bin/ranlib: archive member: .libs/libcontrib_opt.a(tecio.a) fat file for cputype (16777234) cpusubtype (0) is not an object file (bad magic number)
ar: internal ranlib command failed
make[2]: *** [libcontrib_opt.la] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1
I was able to narrow things down a bit with git bisect to:
The first bad commit could be any of:
643180835df958ce44f3484885b5fc3bdceb1624
fd721144a8b5eb52342cf1bbed1b16cad96ba42a
6a41dc439e249ddcb039f7996cad81fb919e4fbf
We cannot bisect more!
Now that VariableGroups work, start optimizing things like SparsityPattern generation, dof_indices, etc...
In my build directory, if I type make run in reduced_basis_ex1, everything is happy, but if I do that in ex2 or ex3, I currently get:
LIBMESH_DIR=/Users/petejw/projects/libmesh_git/build/.. METHODS="opt dbg" ../../../../examples/reduced_basis/reduced_basis_ex2/
/bin/sh: ../../../../examples/reduced_basis/reduced_basis_ex2/: is a directory
make: *** [run] Error 126
ex4-7 also seem OK.
Documentation needs to be updated. Started this in 338f5ac, but more work to be done.
Also needs to document new Automake build system.
Right now the sparsity augmentation code in that example doesn't actually augment the sparsity pattern, it just bumps up the non-zero counts in each row. That's sufficient for PETSc but breaks on at least Laspack. I'll disable the example for non-PETSc builds for now but we ought to fix it eventually.
@roystgnr, does this work for you?
$ ../configure ...
$ make ... # fails linking executables
$ make LIBS="-L`pwd`/contrib/libHilbert -lopt"
Seems to for me, next I'll figure out what is special about libHilbert that could be going on...
http://thrust.github.com
https://github.com/thrust/thrust/wiki/Quick-Start-Guide
@fuentesdt told me about this today.
@roystgnr, @jwpeterson, @friedmud, @pbauman - ever hear about this?
Looks like you can write TBB-style parallel loops which are executed on GPU threads. I'm going to start investigating & track progress on this ticket. It would be awesome if we can wrap this in our Threads:: API much like we currently do with TBB.
As I understand it, the GPU-enabled code is C'ish under the hood, so it would likely be limited to simpler data structures, but very interesting...
-Ben
If we try to run a "make distcheck" on a bare checkout or a freshly configured out-of-source directory, it currently fails. I always run "distcheck" after plain "check" so I'd never noticed before.
It has been suggested to override F77 for FC. Maybe. At least handle the ridiculous case where FC is found but F77 is not, with something like this:
AC_PROG_FC # Check for a Fortran compiler.
AC_FC_LIBRARY_LDFLAGS # Determine the linker flags for the Fortran
# intrinsic and runtime libraries.
AC_FC_WRAPPERS # Determine the form of the symbol name
# mangling used by the Fortran compiler and
# setup wrappers to perform the name mangling.
AC_FC_SRCEXT(f) # Use the Fortran compiler to compile FORTRAN 77
# source code.
F77="$FC" # Set F77, FFLAGS, and FLIBS to avoid needing
# to call AC_PROG_F77.
FFLAGS="$FCFLAGS"
FLIBS="$FCLIBS"
AC_SUBST(F77)
AC_SUBST(FFLAGS)
AC_SUBST(FLIBS)
I think it's cool that we include the configure flags with the PerfLog output, but it looks pretty bad if you have a lot of configure options (see below).
Since LIBMESH_CONFIGURE_INFO comes in as a #define, I'm not sure if we can parse the string somehow at runtime?
| Time: Wed Dec 19 13:40:50 2012 |
| OS: Darwin |
| HostName: inl421321.inl.gov |
| OS Release: 12.2.0 |
| OS Version: Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 |
| Machine: x86_64 |
| Username: petejw |
It would be useful to us if Elem::build_side(i, /*proxy=*/false) would set the resulting side's subdomain_id to match that of the parent. If I understand it correctly, this is already the behavior of Elem::build_side(i, /*proxy=*/true).
This just means changing all the child class implementations (Hex20::build_side(), Quad4::build_side(), etc.) in a fairly straightforward way which I am working on now.
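The per-class change amounts to copying the parent's id onto the freshly built side. A greatly simplified sketch; these are stand-in classes, not the real Elem hierarchy:

```cpp
#include <cassert>
#include <memory>

// Minimal stand-ins: the real change touches each concrete
// implementation (Hex20::build_side(), Quad4::build_side(), ...).
struct Elem
{
  int subdomain_id = 0;
  virtual ~Elem() {}
};

struct Quad4 : Elem
{
  // Proposed behavior: a non-proxy side inherits the parent's
  // subdomain id.
  std::unique_ptr<Elem> build_side(unsigned /*s*/, bool /*proxy*/ = false) const
  {
    std::unique_ptr<Elem> side(new Elem);  // real code builds an Edge2 here
    side->subdomain_id = this->subdomain_id;
    return side;
  }
};
```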
As discussed in #21, there's a fair bit of code duplication between the FunctionBase and FEMFunctionBase versions of project_vector. Remove that. My thinking is to create a helper class that all the different projection methods scattered around the library can use, so we can eliminate the code duplication there as well.
On trying to compile and install libmesh from source, I noticed that the library's dependencies are not listed in the README.md file.
@benkirk , did you get email for this?
@jwpeterson, @friedmud, @permcody - Looks like Exodus 5.24 has been released at sourceforge. http://sourceforge.net/projects/exodusii
Any reason for us to upgrade from 5.22 (or just as importantly, any reasons not to upgrade)?
I'll package up the new version if it would be useful for Moose.
-Ben
A new Eigen is available - we should update accordingly.
Feel free to post random philosophical musings...
I'm not sure how useful this is even in debug mode, but it might be nice to be able to disable MPI_ERRORS_ARE_FATAL and throw an exception if we see a non-fatal error.
I finally tracked down how to replicate this bug; I still have no idea how to fix it.
Run "configure --prefix=whatever"; everything works as expected.
Run "configure --prefix=somethingelse" afterwards, and unless a "make clean" hits contrib/netcdf/v4, the libnetcdf.la file there won't be rebuilt, and will still contain the setting "libdir=whatever". This causes the build to subsequently fail after "make install".
Since this is hard to encounter, easy to work around, and might be hard to fix, I won't let it hold up 0.9.1.
Using a clang compiler (3.1), the ./configure script adds -fopenmp into CXXFLAGS and CFLAGS. It should be -openmp.
I encountered an error with the new stitch_meshes code that went away when I reverted to the old version.
John, I'll email you a little test code (I guess I can't attach files directly to issues, unfortunately)
The test case stitches two cylindrical meshes, and one of them has boundary ID of 101 on the "top" and "bottom" boundaries. So it seems like the error is triggered because this mesh has nodes on boundary 101 that are not stitched.
OK, the time has come. In the subcell integration stuff I'm working on, I'd like to be able to do
SerialMesh my_local_mesh(MPI_COMM_SELF);
This is a necessary step for being able to split an MPI communicator and execute physics A on one portion of the parallel resource and physics B on another.
My plan of attack is to begin with the Mesh classes, since that is what I need now, and then propagate to the EquationSystems later.
Basically, the Mesh will have an optional constructor argument that takes a reference to a Parallel::Communicator, defaulting to CommWorld.
Once the mesh is done the EquationSystems will inherit its communicator from the Mesh, so that will be easy, just tedious.
@roystgnr, any other pitfalls I'm not thinking about?
In order to keep things like parallel_only() simple I may need to impose a standard naming convention, like have class_parallel_only() expand to
#define class_parallel_only() do { \
libmesh_assert(this->communicator().verify(std::string(__FILE__).size())); \
libmesh_assert(this->communicator().verify(std::string(__FILE__))); \
libmesh_assert(this->communicator().verify(__LINE__)); } while (0)
So here I assume that any class with a communicator object has a method communicator() that returns that object...
@roystgnr says
"The version up on SourceForge seems to be using int64_t pretty
universally; the one in our contrib/ is hard-coded for int."
I'll take a look at this, but probably after the release.
Right now it seems that ParMETIS gets compiled and pulls in system MPI libs even if we've requested having those disabled, and we have to --disable-parmesh manually to avoid "MPI_MINLOC undeclared" compilation errors if MPI is truly unavailable.
Both are actively developed and increasingly support threads, which are a significant benefit for shared-memory machines when you don't want to deal with MPI.
Eigen is released under the MPL, so as we discussed before I think we could depend heavily on it and ultimately expect it to be available:
http://eigen.tuxfamily.org/index.php?title=Licensing_FAQ
SuiteSparse is GPL, so it shouldn't be a core dependency.
@libmesh-devel, thoughts?
Looks to be related to the new restart changes - it'll be Monday before I can dig, but let me know if this rings a bell to anyone, @permcody maybe?
$ make -j 4 && make -j 4 check -C examples/adaptivity/adaptivity_ex5 METHODS=dbg LIBMESH_RUN="mpirun -np 4"
…
Running: /Users/benkirk/codes/libmesh/build/examples/adaptivity/adaptivity_ex5/.libs/example-dbg -read_solution -n_timesteps 25 -output_freq 10 -init_timestep 25
Mesh Information:
mesh_dimension()=2
spatial_dimension()=3
n_nodes()=737
n_local_nodes()=212
n_elem()=884
n_local_elem()=234
n_active_elem()=664
n_subdomains()=1
n_partitions()=4
n_processors()=4
n_threads()=1
processor_id()=0
Assertion `ierr == MPI_SUCCESS' failed.
[1] ./include/libmesh/parallel_implementation.h, line 3033, compiled Nov 1 2013 at 13:38:02
Assertion `ierr == MPI_SUCCESS' failed.
[2] ./include/libmesh/parallel_implementation.h, line 3033, compiled Nov 1 2013 at 13:38:02
Assertion `ierr == MPI_SUCCESS' failed.
[1] ./include/libmesh/parallel_implementation.h, line 3033, compiled Nov 1 2013 at 13:38:02
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Assertion `ierr == MPI_SUCCESS' failed.
[2] ./include/libmesh/parallel_implementation.h, line 3033, compiled Nov 1 2013 at 13:38:02
…
I'm in the middle of something else at the moment, so I thought I'd try out this whole "gist" thing and see if anybody had any ideas.
The basic problem is that writing elemental data to Exodus files seems to fail if you're using ParallelMesh. The same code works fine if you use SerialMesh.
Here's a test code which demonstrates the issue:
https://gist.github.com/jwpeterson/6168524
And some screenshots:
https://docs.google.com/file/d/0B9BK7pg8se_idGVHbGNJQ2E5RWc/edit?usp=sharing
https://docs.google.com/file/d/0B9BK7pg8se_iSkVzdnpLV0NVSDA/edit?usp=sharing
It looks like, because Make.common.in gets turned into Make.common at configure time, we can't currently configure with one prefix (or accidentally leave it defaulting to /usr/local) and install to another prefix and expect the result to be usable by applications that import our Make.common. I'm not sure how we'd fix this.
I'm seeing this
CXX src/numerics/libmesh_opt_la-petsc_vector.lo
src/numerics/petsc_preconditioner.C: In member function 'virtual void libMesh::PetscPreconditioner<T>::init()':
src/numerics/petsc_preconditioner.C:74:32: error: cannot convert '_p_PC**' to 'PC {aka _p_PC*}' for argument '1' to 'PetscErrorCode PCDestroy(PC)'
make[1]: *** [src/numerics/libmesh_opt_la-petsc_preconditioner.lo] Error 1
on ubuntu-LTS with PETSc-3.1.
I loop over petsc 3.0, 3.1, 3.2, & 3.3 in a different test setup, but that will run tonight.
@friedmud - I'll see what else pops up.
Specifically, in devel mode (I don't have a dbg-compatible Trilinos built), I see:
(gdb) where
#0 0x00002aaab6bf365b in Epetra_Util::Create_Root_Map(Epetra_Map const&, int) ()
from /opt/apps/ossw/libraries/trilinos/trilinos-10.12.2/sl6/gcc-system/mpich2-1.4.1p1/mkl-gf-10.3.12.361/lib/libepetra.so
#1 0x00002aaaad4706cf in libMesh::EpetraVector::localize (this=0x69df90, v_local_in=...)
at ../src/numerics/trilinos_epetra_vector.C:456
#2 0x00002aaaad5777fd in libMesh::Problem_Interface::computeF (this=0x6b50f0,
x=<value optimized out>, r=<value optimized out>)
at ../src/solvers/trilinos_nox_nonlinear_solver.C:104
#3 0x00002aaab13df8c8 in NOX::Epetra::Group::computeF() ()
from /opt/apps/ossw/libraries/trilinos/trilinos-10.12.2/sl6/gcc-system/mpich2-1.4.1p1/mkl-gf-10.3.12.361/lib/libnoxepetra.so
#4 0x00002aaab18eb62d in NOX::StatusTest::NormF::relativeSetup(NOX::Abstract::Group&) ()
from /opt/apps/ossw/libraries/trilinos/trilinos-10.12.2/sl6/gcc-system/mpich2-1.4.1p1/mkl-gf-10.3.12.361/lib/libnox.so
#5 0x00002aaab18ebd5c in NOX::StatusTest::NormF::NormF(NOX::Abstract::Group&, double, NOX::StatusTest::NormF::ScaleType, NOX::Utils const*) ()
from /opt/apps/ossw/libraries/trilinos/trilinos-10.12.2/sl6/gcc-system/mpich2-1.4.1p1/mkl-gf-10.3.12.361/lib/libnox.so
#6 0x00002aaaad583bd5 in libMesh::NoxNonlinearSolver::solve (this=0x69e230, x_in=...)
at ../src/solvers/trilinos_nox_nonlinear_solver.C:368
#7 0x00002aaaad5e60ef in libMesh::NonlinearImplicitSystem::solve (this=0x69dd40)
at ../src/systems/nonlinear_implicit_system.C:178
#8 0x0000000000411b16 in main (argc=, argv=)
at ../../../../examples/miscellaneous/miscellaneous_ex3/miscellaneous_ex3.C:528
This was working in 0.9.0.1, and I want to fix it before 0.9.1 - anyone have any ideas before I throw up my hands and try bisecting the changes in between 0.9.0.1 and now?
It is rather involved to reproduce, so I've pasted my code below. Don't worry, it's not big:
#include <libmesh/libmesh.h>
#include <uqLibMeshNegativeLaplacianOperator.h>
#include <mpi.h>
using namespace libMesh;
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  LibMeshInit init(argc, argv);

  // This is my class, that solves an eigenvalue problem using
  // libmesh linked with Petsc and Slepc
  uqLibMeshNegativeLaplacianOperator *C =
    new uqLibMeshNegativeLaplacianOperator("mesh.e");
  delete C;

  MPI_Finalize();
  return 0;
}
It appears one can work around this problem by adding an artificial block:
#include <libmesh/libmesh.h>
#include <uqLibMeshNegativeLaplacianOperator.h>
#include <mpi.h>
using namespace libMesh;
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  // Need an artificial block here because libmesh needs to
  // call PetscFinalize before we call MPI_Finalize
  {
    LibMeshInit init(argc, argv);

    // This is my class, that solves an eigenvalue problem using
    // libmesh linked with Petsc and Slepc
    uqLibMeshNegativeLaplacianOperator *C =
      new uqLibMeshNegativeLaplacianOperator("mesh.e");
    delete C;
  }

  MPI_Finalize();
  return 0;
}
I found this work around in @pbauman's thermocouple code after receiving the following innocuous error: Attempting to use an MPI routine after finalizing MPICH
Perhaps one could add something like a LibMeshFinalise class (instantiated as LibMeshFinalise finalise;) to allow libMesh to clean up after itself and prevent this error.
We kind of look like chumps for still having this stuff in the code... they were put in to support compilers with broken std versions of the same, but I don't know of any current compilers that still fit that description. They are full of non-standard macros. Standard-conforming code will be longer but much more expressive.