juliaci / julia-buildbot
Buildbot configuration for build.julialang.org
License: MIT License
The linux nightly tarballs are having some strange issues, I guess since the power outage? Did they need to recompile the source build of gcc they've been using from scratch or something?
@staticfloat would that be a natural home for this? Is it something we want to do?
Julia and LLVM have debug assertions that we can enable for better error reporting. It would be good to have PRs and pushes get put through the wringer of "assertion" builds, and also for users to be able to use assertion builds. And so here is my master plan:
We create two schedulers: the PR scheduler and the push scheduler. They are identical, except that the push scheduler adds a [release] tag, which means that this build should eventually get promoted, instead of just being tested and then left to rot. (Side note: if at all possible, we should customize our GitHub statuses so that users can download the build artifacts right there from the PR CI screen.)
Each scheduler kicks off two builds: one with assertions and one without. Both will get uploaded, both will get tested, and both will get promoted (if they have the [release] tag). We then change travis and appveyor to have new default versions. We'll support the following:
julia:
- 1.0
- 1.0-assert
- 1.0-noassert
- nightly
- nightly-assert
- nightly-noassert
Where 1.0 is an alias for 1.0-noassert, and nightly is an alias for nightly-assert. In this way, we get:
Greater flexibility, by providing users the ability to opt out of asserts on nightly builds if they need to.
Greater coverage with assertions when testing both base Julia and the package ecosystem (for those packages that are already testing the waters with nightly, but leaving alone the poor packages that just want to make sure their single-line test suite passes on 1.0)
Greater AWS storage costs because we now store every build twice.
cc: @staticfloat
I triggered a build today: https://build.julialang.org/#/builders/22/builds/8
and that still triggered a coverage build even though it was not a nightly.
The coverage build then failed because the job correctly did not get promoted:
https://github.com/JuliaCI/julia-buildbot/blob/master/master/master.cfg#L100-L102
I don't understand why copr-cli fails when trying to build RPM nightlies. The command works fine locally. Could you try to run it manually on the buildbot?
Also, any idea why the build is considered successful even if one of the steps failed? We should be able to check the return value of copr-cli (0 for success, as usual).
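A minimal sketch of the kind of exit-status check meant here; the wrapper and its name are hypothetical, and the copr-cli invocation shown in the comment is illustrative rather than the buildbot's actual step:

```shell
# Run a command and propagate its exit status so buildbot marks the
# step as failed when it fails; the wrapped command is a stand-in for
# the real copr-cli invocation.
run_copr_step() {
    "$@"
    status=$?
    if [ $status -ne 0 ]; then
        echo "step failed with exit code $status" >&2
        return $status
    fi
}

# intended use (not run here):
# run_copr_step copr-cli build julia-nightlies julia.src.rpm
```

Buildbot's ShellCommand steps already treat a nonzero exit code as failure, so the fix is mainly making sure the step's final exit status is the copr-cli one rather than that of a later command.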
Turning off inlining should no longer be needed for coverage, so we can remove it for a faster coverage run.
If we want to make sure all tests pass with inlining off, we can add another parallel buildbot for that.
Not sure if intentional or not, but I noticed this when re-running the tests on an old PR (that was so old it predated the analyzegc support). The buildbot checked out the last commit from the PR directly instead of grabbing the merge result.
@staticfloat in addition to runtests, we should have a doctest worker. Indeed, like travis, we can optimize and schedule the full test workers only when applicable:
FILES_CHANGED=$(git diff --name-only $TRAVIS_COMMIT_RANGE -- || git ls-files)
# skip the full test workers if only files within the "doc" dir have changed
if [ $(echo "$FILES_CHANGED" | grep -cv '^doc/') -gt 0 ]; then
<schedule all test workers>
fi
<schedule doc test worker>
RPM in RHEL5 does not support SHA1 checksums, only MD5. For the nightly builds to succeed there, the call to rpmbuild in the build script should be modified to rpmbuild -bs --define "_source_filedigest_algorithm md5" --define "_binary_filedigest_algorithm md5".
Can you do that? Normally with the recent changes I made everything should work on EPEL5.
@staticfloat I'd take a stab at this myself but would probably miss a few places. Right now these builds occupy the packaging buildbot for a while, since they do make -C deps distclean-llvm every time. Probably makes sense to do these one or a handful of times a day rather than on every successful build. Honestly all the other nightlies could probably be made less frequent too, though the immediate feedback is kinda neat. Depends whether the S3 space is burning a hole in anyone's pocket. I think it's a little more useful to archive the same total space worth of nightlies going further back rather than more frequent ones.
Hey @tkelman I'm a little stumped on this one. I'm in the process of rebuilding the Windows buildbots (rather than jumping through all the hoops in the julia-vagrant repo to create a VM image, then uploading that image to OpenStack, I'm instead starting with whatever default windows image the VM provider we're currently using has, and then adding the necessary packages using powershell scripts inspired by yours) and I'm running into a bizarre problem with cygwin's cmake.
The first time I tried anything, I got a rather mysterious error which I then traced down to the fact that cmake was being invoked as /bin/cmake and not /usr/bin/cmake:
cd libgit2/build/ && \
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=/home/Administrator/buildbot/slave/package_win6_2-x64/build/usr -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_C_COMPILER="x86_64-w64-mingw32-gcc" -DCMAKE_C_COMPILER_ARG1="-m64 " -DCMAKE_CXX_COMPILER="x86_64-w64-mingw32-g++" -DCMAKE_CXX_COMPILER_ARG1="-m64 " -DTHREADSAFE=ON -DWIN32=ON -DMINGW=ON -DUSE_SSH=OFF -DCMAKE_SYSTEM_NAME=Windows -DBUILD_CLAR=OFF -DCMAKE_RC_COMPILER=`which x86_64-w64-mingw32-windres` -DDLLTOOL=`which x86_64-w64-mingw32-dlltool` -DCMAKE_FIND_ROOT_PATH=/usr/x86_64-w64-mingw32 -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=ONLY
Makefile:1961: recipe for target 'libgit2/build/Makefile' failed
make[1]: Leaving directory '/home/Administrator/buildbot/slave/package_win6_2-x64/build/deps'
CMake Error: Could not find CMAKE_ROOT !!!
CMake has most likely not been installed correctly.
Modules directory not found in
//share/cmake-3.3.1
CMake Error: Error executing cmake::LoadCache(). Aborting.
make[1]: *** [libgit2/build/Makefile] Error 1
Makefile:51: recipe for target 'julia-deps' failed
make: *** [julia-deps] Error 2
I then attempted to change the PATH (you can always see the current environment in blue text at the top of every buildbot step) and now it seems that the shell can't find cmake at all.
cd libgit2/build/ && \
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=/home/Administrator/buildbot/slave/package_win6_2-x64/build/usr -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_C_COMPILER="x86_64-w64-mingw32-gcc" -DCMAKE_C_COMPILER_ARG1="-m64 " -DCMAKE_CXX_COMPILER="x86_64-w64-mingw32-g++" -DCMAKE_CXX_COMPILER_ARG1="-m64 " -DTHREADSAFE=ON -DWIN32=ON -DMINGW=ON -DUSE_SSH=OFF -DCMAKE_SYSTEM_NAME=Windows -DBUILD_CLAR=OFF -DCMAKE_RC_COMPILER=`which x86_64-w64-mingw32-windres` -DDLLTOOL=`which x86_64-w64-mingw32-dlltool` -DCMAKE_FIND_ROOT_PATH=/usr/x86_64-w64-mingw32 -DCMAKE_FIND_ROOT_PATH_MODE_INCLUDE=ONLY
Makefile:1961: recipe for target 'libgit2/build/Makefile' failed
make[1]: Leaving directory '/home/Administrator/buildbot/slave/package_win6_2-x64/build/deps'
Makefile:51: recipe for target 'julia-deps' failed
/bin/sh: line 1: cmake: command not found
make[1]: *** [libgit2/build/Makefile] Error 127
make: *** [julia-deps] Error 2
If I log in via SSH, I can invoke cmake just fine. Is there anything particularly special about cygwin that I might be missing here?
Tag a few extra commands onto the end of the packagers to download the version of julia that you just built to a new location, and run Base.runtests() to ensure that we don't have broken packages. Look at the run_code builders for inspiration.
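Those extra commands might look roughly like this sketch; the function name is invented, and the tarball URL and archive layout are assumptions rather than the packagers' real upload locations:

```shell
# Download the tarball we just built into a fresh directory and run the
# test suite against it; $1 is a hypothetical URL for the tarball.
smoke_test_build() {
    url=$1
    dir=$(mktemp -d)
    # unpack into $dir, dropping the top-level julia-<commit>/ directory
    curl -fsSL "$url" | tar xzf - -C "$dir" --strip-components=1 || return 1
    "$dir/bin/julia" -e 'Base.runtests("all")'
}
```

Because the download goes to a new location, this exercises the relocated binary rather than the in-tree build, which is the case users actually hit.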
We probably don't need to build every PR twice; right now we are building both the branch and the refs/merge branch. Instead, just do the refs/merge branch.
Also cc @ihnorton - something seems wrong or in need of manual intervention with the cross-compiled Windows binaries, whereas this new buildbot system looks like it's been building bottles and OSX nightlies pretty reliably. I was experimenting with making Docker images with all the cross-compiling dependencies recently, so I can help translate those into the format used here (mostly Vagrant?). Cross-compilation is doable from Ubuntu 14.04 if you use a PPA I made for the right configuration of MinGW cross-compiler, or from openSUSE, or possibly from Arch but I think the MinGW package there is already on 4.9 which might break binary compatibility with everyone's packages on Windows.
@one-more-minute your OSX juno signing failed
error: /Library/Developer/CommandLineTools/usr/bin/codesign_allocate: can't write output file: /Volumes/Juno/Juno.app/Contents/Frameworks/ElectronFramework.framework/Versions/Current/Electron Framework.cstemp (No space left on device)
Looking at the .dmg, it doesn't have much space free. Are you creating that .DMG file or am I? It looks like it's created with a limit of 900 MB, perhaps try bumping that up to 1500MB, just to future-proof it?
These are overwriting CFLAGS, resulting in slower performance in a bunch of dependency libraries. It happened in openlibm, and it's happening for FFTW too; ref JuliaLang/julia#17000 (comment).
The point of a buildbot is to be a clean reproducible environment for a standard build. If all this customization is needed for homebrew bottle building, it should be isolated to the homebrew jobs, and reflected in the buildbot configuration files here rather than in the VM customization / provisioning which I still have very little visibility to.
Fedora Copr maintainers have notified me that the Julia nightlies were constantly failing for about a week. And indeed it looks like my last commit on master (83f069c) isn't taken into account when building the SRPMs. If you look e.g. at the end of this file, you can see that the list of tests I removed in the commit is still present:
https://copr-be.cloud.fedoraproject.org/results/nalimilan/julia-nightlies/fedora-21-x86_64/julia-0.4.0-0.20150116.el7.centos/build.log
Any idea where this might come from?
LLVM recently switched to requiring cmake 3.4.3+, so we need to upgrade it on the buildbots. (It was announced on llvm-dev a few months ahead of time, but I didn't realize that we'd be affected...)
Ping @staticfloat
Once JuliaLang/julia#32090 is merged, clangsa will be clean on the julia codebase. To ensure it stays that way, we should have a CI worker that runs it. Since clangsa doesn't require julia to be bootstrapped (or even julia to be built), it probably makes sense to have a separate worker that runs in parallel with the package_ workers (OS doesn't really matter - linux x86_64 seems fine - eventually we may want to rerun with different target arches, but that's just an argument to the analyzer). I.e. all the bot would have to do is run make -C src analyzegc. Separately, it might make sense to have the same worker also run the llvmpasses tests (make -C test/llvmpasses), since those currently only run on travis and similarly do not require a bootstrapped julia.
As noted on https://build.julialang.org/#/builders/22/builds/537 and previously with patches for LLVM.jl.
LLVM patches are sometimes ignored on the buildbots; this might actually be an issue with the Makefiles (cc: @vtjnash).
I had this locally, where adding a patch didn't trigger an LLVM rebuild.
In any case, since we are now using CCACHE we might want to consider building from scratch on the buildbots.
cc: @staticfloat
Once 0.5.0 is out the door, upgrade cmake on all the buildbots to at least 3.4.3. This script, newly updated, should work just fine.
Note: do NOT upgrade mingw cross-compilers to 5.4; we need to keep those held back to 4.9 until we have a solution for https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77333
See original comment; essentially we need to loop over brew deps --HEAD julia, call brew fetch --force-bottle $dep, parse the output for the downloaded bottle, and ensure that bottle is uploaded to AWS; if not, upload the new one.
For parsing out downloaded location, this seems to work well:
$ brew fetch --force-bottle gcc | grep -i downloaded | cut -d: -f2- | xargs echo
/Library/Caches/Homebrew/gcc-4.9.2_1.yosemite.bottle.tar.gz
To download bottles for different platforms, we can parse out the download URL and rewrite it for our other platforms:
$ brew fetch --force-bottle gcc | grep Downloading | awk '{ print $3 }' | sed -e "s/yosemite/mountain_lion/"
https://downloads.sf.net/project/machomebrew/Bottles/gcc-4.9.2_1.mountain_lion.bottle.tar.gz
To figure out what bottles need to be uploaded, we can just [[ -z "$(aws ls bucketname -l | grep bottle_filename)" ]].
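Putting those pieces together, the loop might look roughly like this; the "juliabottles" bucket name and the aws s3 subcommands are placeholders, and the grep/cut parsing matches the "Already downloaded: /path" output shown above:

```shell
# Sketch of the bottle-mirroring loop: for each dependency of julia,
# fetch its bottle, then upload it only if it is missing from the bucket.
sync_bottles() {
    for dep in $(brew deps --HEAD julia); do
        bottle=$(brew fetch --force-bottle "$dep" | grep -i downloaded | cut -d: -f2- | xargs echo)
        name=$(basename "$bottle")
        # upload only when the bottle is not already in the bucket
        if [ -z "$(aws s3 ls "s3://juliabottles/$name")" ]; then
            aws s3 cp "$bottle" "s3://juliabottles/$name"
        fi
    done
}
```

The existence check keeps repeated runs cheap: already-mirrored bottles cost one listing call and no upload bandwidth.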
As @tkelman requested, here's a list of implemented/planned/proposed changes to how the arm builders work.
uname -m is armv8l instead of armv6l or armv7l. This is only an issue when compiling dependencies (notably clang and gcc) that pick up the target arch automatically from uname -m. Both of them should be overridable. We can possibly figure this out from ARCH, but I'm not sure what's the most robust way to do it... In order to make sure the binary is compiled for the right arch, I think we can keep the old builder and run a few simple tests on it using the binary we compiled, to make sure that the binary is good to use. From my brief testing just now, it seems that LLVM was probably not compiled with LTO and it is somehow using neon instructions. Example: https://build.julialang.org/#/builders/65/builds/5321/steps/2/logs/stdio
From worker 4: Exception calling "DownloadFile" with "2" argument(s): "The remote server
From worker 4: returned an error: (404) Not Found."
From worker 4: At line:1 char:96
From worker 4: + [System.Net.ServicePointManager]::SecurityProtocol =
From worker 4: [System.Net.SecurityProtoco ...
From worker 4: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From worker 4: ~~~
From worker 4: + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
From worker 4: + FullyQualifiedErrorId : WebException
From worker 4:
From worker 4: Exception calling "DownloadFile" with "2" argument(s): "Unable to connect to
From worker 4: the remote server"
From worker 4: At line:1 char:96
From worker 4: + [System.Net.ServicePointManager]::SecurityProtocol =
From worker 4: [System.Net.SecurityProtoco ...
From worker 4: + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From worker 4: ~~~
From worker 4: + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
From worker 4: + FullyQualifiedErrorId : WebException
There was some suitesparse breakage and some LLVM breakage, and currently some win64 test-only breakage, but no automatic Windows nightlies have started off a downstream commit for several days. I think the last automatic one was before you added the clean/nuke options?
http://buildbot.e.ip.saba.us:8010/builders
This has been delaying nightlies for about 3 days and will need addressing before we can make RC binaries. We can maybe manually work around it for all platforms other than mac if needed.
cc @staticfloat
I'm not even sure what to make of this:
https://build.julialang.org/#/builders/73/builds/5889
rm -fr /Users/sabae/buildbot/worker/package_macos64/build/julia-7b826bab47
tar zxf /Users/sabae/buildbot/worker/package_macos64/build/julia-7b826bab47-mac64.tar.gz -C dmg/Julia-1.4.app/Contents/Resources/julia --strip-components 1
if [ -n "$MACOS_CODESIGN_IDENTITY" ]; then \
echo "Codesigning with identity $MACOS_CODESIGN_IDENTITY"; \
codesign -s "$MACOS_CODESIGN_IDENTITY" -v --deep dmg/Julia-1.4.app; \
else \
true; \
fi
Codesigning with identity 2053E9292809B66582CA9F042B470C0929340362
/bin/sh: line 1: 82745 Segmentation fault: 11 codesign -s "$MACOS_CODESIGN_IDENTITY" -v --deep dmg/Julia-1.4.app
make[1]: *** [dmg/Julia-1.4.app] Error 139
@staticfloat and @one-more-minute please fix this asap, these jobs are occupying the buildbot for days at a time and not working. We urgently need to create other windows binaries right now for testing and the juno jobs are refusing to obey cancel commands.
ERROR: LoadError: On worker 2:
LoadError: [Code:ERROR, Class:Net]: Unsupported URL protocol
[inlined code] from libgit2/error.jl:96
in clone at libgit2/repository.jl:95
in clone at libgit2.jl:303
in anonymous at /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/libgit2.jl:72
in temp_dir at /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/libgit2.jl:62
in temp_dir at /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/libgit2.jl:57
in include_string at loading.jl:266
in include_from_node1 at ./loading.jl:307
[inlined code] from util.jl:179
in runtests at /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/testdefs.jl:178
in anonymous at multi.jl:892
in run_work_thunk at multi.jl:645
[inlined code] from multi.jl:892
in anonymous at task.jl:59
while loading /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/libgit2.jl, in expression starting on line 69
while loading /home/ubuntu/buildbot/slave/build_ubuntu14_04-x64/build/test/runtests.jl, in expression starting on line 13
From worker 2: * libgit2 make[1]: *** [all] Error 1
I think this means they need openssl-dev to be installed.
Please allow me to force builds. I want to push the button and cause a coverage build, for instance.
example: https://build.julialang.org/#/builders/69/builds/5357
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 84.5M 100 84.5M 0 0 15.3M 0 0:00:05 0:00:05 --:--:-- 15.9M
Checksumming Protective Master Boot Record (MBR : 0)…
Protective Master Boot Record (MBR :: verified CRC32 $DB18E963
Checksumming GPT Header (Primary GPT Header : 1)…
GPT Header (Primary GPT Header : 1): verified CRC32 $288F949A
Checksumming GPT Partition Data (Primary GPT Table : 2)…
GPT Partition Data (Primary GPT Tabl: verified CRC32 $BE544760
Checksumming (Apple_Free : 3)…
(Apple_Free : 3): verified CRC32 $00000000
Checksumming EFI System Partition (C12A7328-F81F-11D2-BA4B-00A0C93EC93B : 4)…
EFI System Partition (C12A7328-F81F-: verified CRC32 $B54B659C
Checksumming disk image (Apple_HFS : 5)…
disk image (Apple_HFS : 5): verified CRC32 $A4C6E211
Checksumming (Apple_Free : 6)…
(Apple_Free : 6): verified CRC32 $00000000
Checksumming GPT Partition Data (Backup GPT Table : 7)…
GPT Partition Data (Backup GPT Table: verified CRC32 $BE544760
Checksumming GPT Header (Backup GPT Header : 8)…
GPT Header (Backup GPT Header : 8): verified CRC32 $690D44C8
verified CRC32 $0A3BDA88
/dev/disk45 GUID_partition_scheme
/dev/disk45s1 EFI
/dev/disk45s2 Apple_HFS /Volumes/Julia-1.4.0-DEV-dbad68deb3
cp: ./share/doc/julia/html/en/assets/arrow.svg: Permission denied
cp: ./share/doc/julia/html/en/assets/documenter.css: Permission denied
...
Since we have total control over the bots, we could run our own httpbin.org spoof locally and redirect to it in our /etc/hosts files, with an appropriate self-signed cert injected into our chain of authority.
(contributing factor to https://build.julialang.org/#/builders/65/builds/3846/steps/2/logs/stdio#L410)
(@staticfloat are you sure you don't want to just watch this repo)
With the new docker builders, ccache may be effective again as the kernel can share a giant file cache among all the different builders. See if it's fast at all.
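The usual way to get that sharing is to bind-mount a single host directory into every builder container; a hypothetical invocation (the wrapper name, image name, and paths are all invented for illustration):

```shell
# Run a builder container with a shared host-side ccache directory, so
# every container reads and writes the same cache and the kernel keeps
# it hot in the page cache. Image name and mount points are illustrative.
run_builder_with_ccache() {
    host_cache=$1; shift
    docker run --rm \
        -v "$host_cache:/root/.ccache" \
        -e CCACHE_DIR=/root/.ccache \
        "$@"
}

# intended use (not run here):
# run_builder_with_ccache /srv/ccache julia-builder make -j4
```

Since all builders share one cache, CCACHE_DIR contention is handled by ccache's own file locking, so concurrent builds are safe.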
There was an ABI breakage in mingw-w64's libstdc++ on opensuse, so winrpm packages aren't compatible with cygwin-built Julia right now. Would this be much work to set up? Here's the build recipe, I'm testing it in a docker container of opensuse 13.1 right now and I suspect a vagrant box would be equivalent:
# Change the following to i686-w64-mingw32 for 32 bit Julia:
export XC_HOST=x86_64-w64-mingw32
# Change the following to 32 for 32 bit Julia:
export BITS=64
zypper addrepo http://download.opensuse.org/repositories/windows:mingw:win$BITS/openSUSE_13.1/windows:mingw:win$BITS.repo
zypper --gpg-auto-import-keys refresh
zypper -n install --no-recommends git make cmake tar wine which curl python python-xml patch gcc-c++ m4 p7zip.i586 libxml2-tools
zypper -n install mingw$BITS-cross-gcc-c++ mingw$BITS-cross-gcc-fortran mingw$BITS-libstdc++6 mingw$BITS-libgfortran3 mingw$BITS-libssp0
# opensuse packages the mingw runtime dlls under sys-root/mingw/bin, not /usr/lib64/gcc
cp /usr/$XC_HOST/sys-root/mingw/bin/*.dll /usr/lib*/gcc/$XC_HOST/*/
git clone git://github.com/JuliaLang/julia.git julia
cd julia
make -j4 win-extras binary-dist
It seems that the llvm-svn bot cannot find the __atomic_* symbols for Int128 atomics. This should have been fixed by JuliaLang/julia#16066, so I'm guessing that libatomic1 is not installed on the buildbot. (If it is installed, then it would be helpful to see why julia can't find it, or why the symbol can't be loaded from the lib.)
As @tkelman pointed out, it is a little annoying to add another dependency that requires root (shipping this lib ourselves is possible but technically invalid: only one copy of this lib should exist at runtime, since there's shared global state (a lock)). So maybe we can throw a better error if the symbol is not found, instead of letting llvm exit. It's still better to make the test pass on the buildbot, though.
It would be great to have a buildbot that builds with MEMDEBUG and MEMDEBUG2 and runs the tests under Valgrind, and possibly other sanitizers (separately) as well. Since this takes a long time, it should perhaps run infrequently (once a day or even less). There are already instructions for running julia under valgrind, but I know less about the LLVM sanitizers. Hopefully this can help catch build errors with MEMDEBUG2 before they are nearly a year old (c.f. JuliaLang/julia#18536), and (more importantly) also help recognize when changes introduce things that valgrind considers to be memory errors.
the ubuntu toolchain test ppa should work, it has gcc-4.8 or gcc-5
Since we now have codecov.io working in CoverageBase, all that should be necessary is to take
analyze_cov_cmd = """
import CoverageBase
using Coverage, HDF5, JLD
cd(joinpath(CoverageBase.julia_top()))
results=Coveralls.process_folder("base")
save("coverage.jld", "results", results)
"""
merge_cov_cmd = """
using Coverage, CoverageBase, HDF5, JLD, Compat
cd(joinpath(CoverageBase.julia_top()))
r1 = load("coverage_noninlined.jld", "results")
r2 = load("coverage_inlined.jld", "results")
r = CoverageBase.merge_coverage(r1, r2)
git_info = @compat Dict(
"branch" => Base.GIT_VERSION_INFO.branch,
"remotes" => [
@compat Dict(
"name" => "origin",
"url" => "https://github.com/JuliaLang/julia.git"
)
],
"head" => @compat Dict(
"id" => Base.GIT_VERSION_INFO.commit,
"message" => "%(prop:commitmessage)s",
"committer_name" => "%(prop:commitname)s",
"committer_email" => "%(prop:commitemail)s",
"author_name" => "%(prop:authorname)s",
"author_email" => "%(prop:authoremail)s",
)
)
println("git_info: ")
println(git_info)
Coveralls.submit_token(r, git_info)
"""
to
analyze_cov_cmd = """
import CoverageBase
using Coverage, HDF5, JLD
cd(joinpath(CoverageBase.julia_top()))
results=Codecov.process_folder("base")
save("coverage.jld", "results", results)
"""
merge_cov_cmd = """
using Coverage, CoverageBase, HDF5, JLD, Compat
cd(joinpath(CoverageBase.julia_top()))
r1 = load("coverage_noninlined.jld", "results")
r2 = load("coverage_inlined.jld", "results")
r = CoverageBase.merge_coverage(r1, r2)
Codecov.submit_token(r)
"""
I think we may not need the full git info that Coveralls needed. We can test it out and see. The codecov stuff runs off the same .cov files as the Coveralls stuff, so we only need to run the tests once. We don't yet have a codecov.io repo set up for JuliaLang/julia, but that's a 1-minute process.
We're trying to run CoverageBase.jl after every commit that makes it through testing and packaging (for Linux x86_64). Unfortunately, it looks like I'm not doing something quite right.
First off, here is the code that's getting run, and here is the latest run with all the steps and their logs. There are two things I have questions about: first, the "skipped" messages during the .cov parsing stages (here's an example); second, after running all these steps, I try to submit to Coveralls and get a method undefined error. I'm thinking this might have something to do with all the "skipped" messages.
@timholy any tips here would be appreciated. I ran a Pkg.clone() on your CoverageBase repository and am currently sitting on commit c77855df2. If you need to log in and mess around with the buildbot I'm running this on, just let me know, I'll email you the login details.
We have had a lot of problems with Microsoft's signtool, so let's try Mono's!
Install Mono on our windows buildbots
Change our signing instructions to use signtool.exe from that Mono installation (example instructions)
I've been running into some issues with LLVM.jl, with tests segfaulting on something I've definitely fixed on master. After some debugging, I think the nightlies do not contain that patch.
For example, take my local build of LLVM without JuliaLang/julia#25794, where the assembly of LLVMGetAttributeCountAtIndex (the function where LLVM.jl segfaults) looks like:
Dump of assembler code for function LLVMGetAttributeCountAtIndex:
0x0000000000503630 <+0>: sub $0x18,%rsp
0x0000000000503634 <+4>: mov %fs:0x28,%rax
0x000000000050363d <+13>: mov %rax,0x8(%rsp)
0x0000000000503642 <+18>: xor %eax,%eax
0x0000000000503644 <+20>: mov 0x98(%rdi),%rax
0x000000000050364b <+27>: mov %rsp,%rdi
0x000000000050364e <+30>: mov %rax,(%rsp)
0x0000000000503652 <+34>: callq 0x3815f0 <_ZNK4llvm12AttributeSet13getAttributesEj@plt>
0x0000000000503657 <+39>: mov 0x8(%rsp),%rdx
0x000000000050365c <+44>: xor %fs:0x28,%rdx
0x0000000000503665 <+53>: mov 0x8(%rax),%eax
0x0000000000503668 <+56>: jne 0x50366f <LLVMGetAttributeCountAtIndex+63>
0x000000000050366a <+58>: add $0x18,%rsp
0x000000000050366e <+62>: retq
0x000000000050366f <+63>: callq 0x3876c0 <__stack_chk_fail@plt>
Applying the patch transforms the assembly into:
Dump of assembler code for function LLVMGetAttributeCountAtIndex:
0x0000000000503630 <+0>: sub $0x18,%rsp
0x0000000000503634 <+4>: mov %fs:0x28,%rax
0x000000000050363d <+13>: mov %rax,0x8(%rsp)
0x0000000000503642 <+18>: xor %eax,%eax
0x0000000000503644 <+20>: mov 0x98(%rdi),%rax
0x000000000050364b <+27>: mov %rsp,%rdi
0x000000000050364e <+30>: mov %rax,(%rsp)
0x0000000000503652 <+34>: callq 0x3815f0 <_ZNK4llvm12AttributeSet13getAttributesEj@plt>
0x0000000000503657 <+39>: xor %edx,%edx
0x0000000000503659 <+41>: test %rax,%rax
0x000000000050365c <+44>: je 0x503661 <LLVMGetAttributeCountAtIndex+49>
0x000000000050365e <+46>: mov 0x8(%rax),%edx
0x0000000000503661 <+49>: mov 0x8(%rsp),%rcx
0x0000000000503666 <+54>: xor %fs:0x28,%rcx
0x000000000050366f <+63>: mov %edx,%eax
0x0000000000503671 <+65>: jne 0x503678 <LLVMGetAttributeCountAtIndex+72>
0x0000000000503673 <+67>: add $0x18,%rsp
0x0000000000503677 <+71>: retq
0x0000000000503678 <+72>: callq 0x3876c0 <__stack_chk_fail@plt>
Note the test %rax,%rax and the jump over mov 0x8(%rax),%edx, corresponding with the check for a returned null pointer.
Meanwhile, on today's nightly (0.7.0-DEV.3998, 4371808c4e) we see the following assembly:
Dump of assembler code for function LLVMGetAttributeCountAtIndex:
0x00000000004effc0 <+0>: sub $0x18,%rsp
0x00000000004effc4 <+4>: mov 0x98(%rdi),%rax
0x00000000004effcb <+11>: lea 0x8(%rsp),%rdi
0x00000000004effd0 <+16>: mov %rax,0x8(%rsp)
0x00000000004effd5 <+21>: callq 0x37d390 <_ZNK4llvm12AttributeSet13getAttributesEj@plt>
0x00000000004effda <+26>: mov 0x8(%rax),%eax
0x00000000004effdd <+29>: add $0x18,%rsp
0x00000000004effe1 <+33>: retq
Quite a bit cleaner (no stack protector?), but more notably there is no test for a null pointer. So it seems the patch to LLVM has not been applied?
cc @staticfloat as per template
You must use -mcpu for Arm and -march for x86
It seems to make sense to switch the ARM buildbots to use -mcpu instead of -march. I'm preparing a PR, but thought I'd create an issue to link to in a code comment.
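A small illustration of the flag split; the helper and the CPU names are examples, not the buildbots' actual targets:

```shell
# GCC selects the target CPU with -mcpu on ARM but with -march on x86,
# so pick the right spelling based on the machine's reported arch.
cpu_target_flag() {
    case "$1" in
        armv6l|armv7l|aarch64) echo "-mcpu=$2" ;;
        i686|x86_64)           echo "-march=$2" ;;
        *)                     return 1 ;;
    esac
}

# e.g. CFLAGS="$(cpu_target_flag "$(uname -m)" cortex-a9)"
```

On ARM, -march only pins the ISA level, while -mcpu also tunes for the specific core, which is why it's the preferred knob there.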
cc. @staticfloat
As suggested on JuliaLang/julia#30314 (comment) I'm opening the issue here on the appropriate repository to request Julia prebuilt binaries for Alpine Linux.
System information:
Julia 0.6 was known to build on Alpine Linux, but the package was removed because they couldn't get all of the tests to pass.
If any other information is needed, please request and I will happily provide.
Merry Xmas.
cc @StefanKarpinski, along with cache.julialang.org, ref JuliaPackaging/WinRPM.jl#57 (comment)