hakwerk / labca

A private Certificate Authority for internal (lab) use, based on the open source ACME (Automated Certificate Management Environment) implementation from Let's Encrypt™.

Home Page: https://lab-ca.net

License: Other

Python 2.63% Shell 22.34% Go 48.82% HTML 8.33% CSS 2.97% JavaScript 14.36% Makefile 0.52% Assembly 0.04%
acme ca certificate certificate-authority go homelab letsencrypt pki tls

labca's People

Contributors

dependabot[bot], hakwerk, jamesdeeen, jonasled, ka2er, spagno

labca's Issues

Upload CA from plain text

During setup, if we use the upload function for the PEM files, test-ca.pem is created with CRLF line endings, which causes an error during the signing step.
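
A possible workaround until the upload handler normalizes line endings is to strip the carriage returns before the file is used (a minimal sketch; where the uploaded test-ca.pem ends up on disk is an assumption):

# convert CRLF to LF in place so the signing step sees plain Unix line endings
sed -i 's/\r$//' test-ca.pem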

Use backup files to restore to new install

Make the backup files downloadable, and on the first page of the setup for new installations, have an option to restore from a backup.
This allows migrating from an old-style setup to the final docker-only style.

Reboot timeout - Installation never finishes

Hello,

I get a timeout when I press "restart LabCA" in the browser, after being able to download the root certificate.
The reboot never seems to finish. In the end I have a root and issuer certificate, and the certificate for LabCA is installed, but the LabCA installation itself is broken: the admin panel shows "Please wait" and ends in a timeout or 404; only the public part seems to work. What could be the reason for that?

I have installed LabCA a few times on freshly created minimal Debian 11 virtual machines. Always the same result 🙁

CAA record issue

Hi,

I just set up LabCA on a freshly installed Ubuntu 18.04 and it worked fine up to the point of generating the certificate for the web service.

The log says:

ValueError: Challenge did not pass for ca.internal.homenet.de: {u'status': u'invalid', u'challenges': [{u'status': u'invalid', u'validationRecord': [{u'url': u'http://ca.internal.homenet.de/.well-known/acme-challenge/4uoTxHvh0rm1ScFdDHTNmtyB8YHQDoH87F15YWn45mY', u'hostname': u'ca.internal.homenet.de', u'addressUsed': u'10.10.10.68', u'port': u'80', u'addressesResolved': [u'10.10.10.68']}], u'url': u'https://ca.internal.homenet.de/acme/chall-v3/2/q3SK2g', u'token': u'4uoTxHvh0rm1ScFdDHTNmtyB8YHQDoH87F15YWn45mY', u'error': {u'status': 403, u'type': u'urn:ietf:params:acme:error:caa', u'detail': u'CAA record for ca.internal.homenet.de prevents issuance'}, u'validated': u'2022-03-23T15:03:06Z', u'type': u'http-01'}], u'identifier': {u'type': u'dns', u'value': u'ca.internal.homenet.de'}, u'expires': u'2022-03-30T15:03:06Z'}

So back to the documentation, where I learned that I need a CAA record in my DNS.
Fine, so I created this record, and according to my "dig" command it displays just fine:

root@ca:/var/www/html/.well-known/acme-challenge# dig CAA internal.homenet.de

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> CAA internal.homenet.de
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48517
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;internal.homenet.de. IN CAA

;; ANSWER SECTION:
internal.homenet.de. 0 IN CAA 0 issue "ca.internal.homenet.de"

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Wed Mar 23 16:17:59 UTC 2022
;; MSG SIZE rcvd: 89

But the issue still persists and I'm a bit lost on how to proceed.
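
One hedged thing to check: the "issue" value in the CAA record has to match the CAA identity that Boulder itself is configured with, which is not necessarily the CA host's own FQDN. A rough way to see what identity the LabCA-generated Boulder config uses (the config path is taken from other reports on this page, and the "issuerDomain" field name is an assumption about Boulder's VA config):

# search the generated Boulder configuration for the CAA issuer identity
grep -ri "issuerDomain" /home/labca/boulder_labca/ 2>/dev/null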

Any help would be greatly appreciated and any logs needed can be uploaded.
Just let me know what further information you need.

Best regards and thanks in advance for anyone willing to help me here.

crayt90

Boulder not starting after update

After the last update, Boulder no longer starts.

boulder_1  | created all databases
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | ERROR: Found multiple matching slots/tokens.
boulder_1  | Found slot 53145550 with matching token label.
boulder_1  | Found slot 1488205403 with matching token label.
boulder_1  | Starting enhanced syslogd: rsyslogd.
boulder_1  | Connected to boulder-mysql:3306
boulder_1  | Database boulder_sa_test already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_test database with ./sa/_db/
boulder_1  | added users to boulder_sa_test
boulder_1  | Database boulder_sa_integration already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_integration database with ./sa/_db/
boulder_1  | added users to boulder_sa_integration
boulder_1  | created all databases
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | ERROR: Found multiple matching slots/tokens.
boulder_1  | Found slot 53145550 with matching token label.
boulder_1  | Found slot 1488205403 with matching token label.
boulder_1  | Starting enhanced syslogd: rsyslogd.
boulder_1  | Connected to boulder-mysql:3306
boulder_1  | Database boulder_sa_test already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_test database with ./sa/_db/
boulder_1  | added users to boulder_sa_test
boulder_1  | Database boulder_sa_integration already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_integration database with ./sa/_db/
boulder_1  | added users to boulder_sa_integration
boulder_1  | created all databases
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | ERROR: Found multiple matching slots/tokens.
boulder_1  | Found slot 53145550 with matching token label.
boulder_1  | Found slot 1488205403 with matching token label.
boulder_1  | Starting enhanced syslogd: rsyslogd.
boulder_1  | Connected to boulder-mysql:3306
boulder_1  | Database boulder_sa_test already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_test database with ./sa/_db/
boulder_1  | added users to boulder_sa_test
boulder_1  | Database boulder_sa_integration already exists - skipping create
boulder_1  | goose: no migrations to run. current version: 20200609125504
boulder_1  | migrated boulder_sa_integration database with ./sa/_db/
boulder_1  | added users to boulder_sa_integration
boulder_1  | created all databases
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | ERROR: Found multiple matching slots/tokens.
boulder_1  | Found slot 53145550 with matching token label.
boulder_1  | Found slot 1488205403 with matching token label.
boulder_boulder_1 exited with code 1
ok

the input device is not a TTY

When trying to install on Ubuntu 18.04, I was seeing the installer exit early and found the message "the input device is not a TTY" in the logs. I downloaded the installer and updated line 725 from

docker exec -it boulder_bmysql_1 mysql_upgrade &>>$installLog

to

docker exec -i boulder_bmysql_1 mysql_upgrade &>>$installLog

and it installed and configured correctly. Not 100% sure what's going on but maybe this helps. Looks like a sweet project - thanks!

Boulder not starting

Boulder fails to start after update


boulder_1  | Checking if boulder_sa_test exists
boulder_1  | boulder_sa_test already exists - skipping create
boulder_1  | applying migrations from ./sa/_db/migrations
boulder_1  | goose: no migrations to run. current version: 20210223140000
boulder_1  | added users to boulder_sa_test
boulder_1  | 
boulder_1  | Checking if boulder_sa_integration exists
boulder_1  | boulder_sa_integration already exists - skipping create
boulder_1  | applying migrations from ./sa/_db/migrations
boulder_1  | goose: no migrations to run. current version: 20210223140000
boulder_1  | added users to boulder_sa_integration
boulder_1  | 
boulder_1  | database setup complete
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | Found slot 271399638 with matching token label.
boulder_1  | The key pair has been imported.
boulder_1  | CKR_SLOT_ID_INVALID: Slot 1 does not exist.
boulder_1  | Found slot 1625483005 with matching token label.
boulder_1  | The key pair has been imported.
boulder_1  | GOBIN=/go/src/github.com/letsencrypt/boulder/bin GO111MODULE=on go install -mod=vendor -tags "integration" ./...
boulder_1  | labca/mock-vendor.go:3:8: cannot find package "." in:
boulder_1  | 	/go/src/github.com/letsencrypt/boulder/vendor/github.com/golang/mock/mockgen/model
boulder_1  | make: *** [Makefile:39: build_cmds] Error 1
boulder_1  |  * Starting enhanced syslogd rsyslogd
boulder_1  |    ...done.
boulder_1  | Connected to boulder-mysql:3306
boulder_1  | 
boulder_1  | Checking if boulder_sa_test exists
boulder_1  | boulder_sa_test already exists - skipping create
boulder_1  | applying migrations from ./sa/_db/migrations
boulder_1  | goose: no migrations to run. current version: 20210223140000
boulder_1  | added users to boulder_sa_test
boulder_1  | 
boulder_1  | Checking if boulder_sa_integration exists
boulder_1  | boulder_sa_integration already exists - skipping create
boulder_1  | applying migrations from ./sa/_db/migrations
boulder_1  | goose: no migrations to run. current version: 20210223140000
boulder_1  | added users to boulder_sa_integration
boulder_1  | 
boulder_1  | database setup complete
boulder_1  | CKR_SLOT_ID_INVALID: Slot 0 does not exist.
boulder_1  | Found slot 271399638 with matching token label.
boulder_1  | The key pair has been imported.
boulder_1  | CKR_SLOT_ID_INVALID: Slot 1 does not exist.
boulder_1  | Found slot 1625483005 with matching token label.
boulder_1  | The key pair has been imported.
boulder_1  | GOBIN=/go/src/github.com/letsencrypt/boulder/bin GO111MODULE=on go install -mod=vendor -tags "integration" ./...
boulder_1  | labca/mock-vendor.go:3:8: cannot find package "." in:
boulder_1  | 	/go/src/github.com/letsencrypt/boulder/vendor/github.com/golang/mock/mockgen/model
boulder_1  | make: *** [Makefile:39: build_cmds] Error 1
boulder_boulder_1 exited with code 1
ok

Install fails due to missing example-expiration-template

Hi,

During the install on Debian 11, I get the following:

curl -sSL https://raw.githubusercontent.com/hakwerk/labca/master/install | bash

[✓] Running as root
[✓] Package 'git' is installed
[✓] Package 'sudo' is installed
[✓] User 'labca' already exists
[✓] Update git repository in /home/labca/labca
[✓] Determine web address
[✓] Setup admin application
[✓] Configure the admin application
[✓] Software is up-to-date
[✓] Package 'apt-transport-https' is installed
[✓] Package 'ca-certificates' is installed
[✓] Package 'curl' is installed
[✓] Package 'gnupg2' is installed
[✓] Package 'net-tools' is installed
[✓] Package 'software-properties-common' is installed
[✓] Package 'tzdata' is installed
[✓] Package 'ucspi-tcp' is installed
[✓] Package 'zip' is installed
[✓] Package 'python' is installed
[✓] Package 'docker-ce' is installed
[✓] Binary 'docker-compose' is installed
[✓] Static web pages
[✓] Certificate is present
[✓] Update git repository in /home/labca/gopath/src/github.com/letsencrypt/boulder
[✓] Boulder checkout 'release-2022-07-05'
[✓] Commit existing modifications of /home/labca/boulder_labca
[✓] Setup boulder configuration folder
[.] Configure the boulder application...sed: can't read example-expiration-template: No such file or directory

Subsequently the install fails.

Issuing Certificate Serial Number

The serial number for the Issuing Cert is set to 1000 (see /home/labca/admin/data/serial). This causes problems when re-installing LabCA.

how to reproduce:

  • fresh LabCA installation
  • upload existing Root Cert
  • generate new Issuing Cert 1 (will get serial number 1000)
  • finish setup, LabCA will issue server cert (using Issuing Cert)
  • kill LabCA
  • fresh Labca Installation
  • upload existing Root Cert
  • generate new Issuing Cert 2 (again, this cert will get serial number 1000)
  • finish setup, LabCA will issue server cert (using Issuing Cert 2)
  • Firefox refuses to load LabCA web page, complaining that "Your certificate contains the same serial number as another certificate issued by the certificate authority." ( https://support.mozilla.org/en-US/kb/Certificate-contains-the-same-serial-number-as-another-certificate )
  • Check Certificate Manager in Firefox and you will see that Firefox has already imported Issuing Cert 1 (with serial number 1000). It now refuses to load the server cert + Issuing Cert 2 with the same serial number

Solution: use a random serial number when generating the Issuing Cert.
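
A minimal illustration of that suggestion (this is not LabCA's actual generation code; the serial file path is taken from this report):

# seed the serial file with a random 128-bit value instead of the fixed 1000
openssl rand -hex 16 > /home/labca/admin/data/serial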

Thanks a lot!

Install fails with git error "fatal: unsafe repository"

Trying to run the curl one-liner from the readme, I get this:

  [✓] Running as root
  [✓] Package 'git' is installed
  [✓] Package 'sudo' is installed
  [✓] User 'labca' already exists
  [✓] Backup existing non-git directory '/home/labca/labca'
  [✓] Clone https://github.com/hakwerk/labca/ to /home/labca/labca
fatal: unsafe repository ('/home/labca/labca' is owned by someone else)
To add an exception for this directory, call:

	git config --global --add safe.directory /home/labca/labca
fatal: unsafe repository ('/home/labca/labca' is owned by someone else)
To add an exception for this directory, call:

	git config --global --add safe.directory /home/labca/labca

I've tried a few things: I've run that git config as the labca user, I've tried running the one-liner as labca, and I've added labca to sudoers. None of those made any difference; it still results in the same unsafe repository error.

Running the git config as the local user resulted in a different error:

  [✓] Running as root
  [✓] Package 'git' is installed
  [✓] Package 'sudo' is installed
  [✓] User 'labca' already exists
  [.] Update git repository in /home/labca/labca...
  Error: Could not update local repository

I can't quite understand why that would affect anything done as labca...
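
A hedged workaround sketch: since the installer runs git both as root and as the labca user, the safe.directory exception may need to be registered for both accounts before re-running the installer:

# add the exception for root and for the labca user
git config --global --add safe.directory /home/labca/labca
sudo -u labca git config --global --add safe.directory /home/labca/labca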

Error during setup

When the setup gets to https://az-ca01.foo.internal/admin/final, I see the error message:

OOPS
Some unexpected error occurred!

Looking in the /etc/nginx/ssl/acme_tiny.log file, I see:

Thu Aug 19 15:21:15 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: az-ca01.foo.internal
Getting directory...
Directory found!
Registering account...
Already registered!
Creating new order...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 120, in get_crt
order, _, order_headers = _send_signed_request(directory['newOrder'], order_payload, "Error creating new order")
File "/home/labca/acme_tiny.py", line 59, in _send_signed_request
return _do_request(url, data=data.encode('utf8'), err_msg=err_msg, depth=depth)
File "/home/labca/acme_tiny.py", line 45, in _do_request
raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error creating new order:
Url: https://az-ca01.foo.internal/acme/new-order
Data: {"protected": "eyJ1cmwiOiAiaHR0cHM6Ly9hei1jYTAxLm53ZWguaW50ZXJuYWwvYWNtZS9uZXctb3JkZXIiLCAiYWxnIjogIlJTMjU2IiwgIm5vbmNlIjogInppbmM3VkFHYUlkQ1A1MW5aWWxCZTJUNWpMclZKa1lGbWljVzJKUnU5bDB5YkJJIiwgImtpZCI6ICJodHRwczovL2F6LWNhMDEubndlaC5pbnRlcm5hbC9hY21lL2FjY3QvMSJ9", "payload": "eyJpZGVudGlmaWVycyI6IFt7InR5cGUiOiAiZG5zIiwgInZhbHVlIjogImF6LWNhMDEubndlaC5pbnRlcm5hbCJ9XX0", "signature": "eAseK4EtkyPZhIyahrdEiLsK0sjTYPPRo16JzdYHsnjjypQXXwrJfRsz87VnfFUmJcow1Y29G8Wmcq8Vebjf_nQ9MC1Qj9V_jRStNs-fXzxwvgJZobJCLtXWMddDuF2c-hwBthOjdsrX2bQ3nfX6doM5Hjqob6Fy_GXU3flRUV5irsAgkfHc0RIadKeyzxp5SBBUIRr_vrOZzYVdd_EErjf_YCZX0VYka-w_UHr2cBZCJcYd2OfFa3kixYPfjDe7grgd__6IK89-P33CNV4Kv2p0vZF_3apXap5Khrtsmcythg-uE-JPkK3cFHu8Pgk1YY-nvtGRDs-DbI8Z3ypDrLmY5d27MM21FoWuhLFZB4CArZsKcz4wHCaSBCyLBDtxQjwWtOeSOZOZkocZvnLjgDiLL_NcE2RFDEc7JO0CuoeFC3F89-8g0c5Ly8sqhMODouBp4Xom0GRH_z8FXxZOQQWK45lnqqbJWg305lCa834er9wx61mtWeozMYO-AZ0jtWkeE02QScrf6SufN6aJuBsns5Vfd-6Nnmw14Ep-2mzbwkO3IcjK07Uw7JcYIP1jBON_rwCCUwhhhZUNPWz_wuQByh2bmpk7a8s4-U-UKIKw-tb0aiTrSUZ0Jb37CEpJ1S3KdMeSshDYtzyUFRgZGDkwxrv-0zvAuzwk5tQS_V0"}
Response Code: 400
Response: {
"type": "urn:ietf:params:acme:error:rejectedIdentifier",
"detail": "Error creating new order :: Cannot issue for "az-ca01.foo.internal": The ACME server refuses to issue a certificate for this domain name, because it is forbidden by policy",
"status": 400
}

If I edit /home/labca/boulder_labca/hostname-policy.yaml to remove the ".foo.internal" entry under the "Lockdown:" section,
and then restart (cd /home/labca/boulder; docker-compose restart boulder),
I still see the Oops message on the web page, but in the log file I now see:

Thu Aug 19 15:25:42 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: az-ca01.foo.internal
Getting directory...
Directory found!
Registering account...
Already registered!
Creating new order...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 120, in get_crt
order, _, order_headers = _send_signed_request(directory['newOrder'], order_payload, "Error creating new order")
File "/home/labca/acme_tiny.py", line 59, in _send_signed_request
return _do_request(url, data=data.encode('utf8'), err_msg=err_msg, depth=depth)
File "/home/labca/acme_tiny.py", line 45, in _do_request
raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error creating new order:
Url: https://az-ca01.foo.internal/acme/new-order
Data: {"protected": "eyJ1cmwiOiAiaHR0cHM6Ly9hei1jYTAxLm53ZWguaW50ZXJuYWwvYWNtZS9uZXctb3JkZXIiLCAiYWxnIjogIlJTMjU2IiwgIm5vbmNlIjogInppbmM2UFpJQUpJb3hoZHRtLVo0YjBOSGV5dWNEWEJERUZOaGlzRTVKTnRubkZRIiwgImtpZCI6ICJodHRwczovL2F6LWNhMDEubndlaC5pbnRlcm5hbC9hY21lL2FjY3QvMSJ9", "payload": "eyJpZGVudGlmaWVycyI6IFt7InR5cGUiOiAiZG5zIiwgInZhbHVlIjogImF6LWNhMDEubndlaC5pbnRlcm5hbCJ9XX0", "signature": "ChtZOHrRjX9cf9P4VJjSwt5_K8u5OqNS_WZnk581RAtznYee59WxMa3GBEVethuVuwWMYrEjehrIFyzQVA2cWimmH6GuIXKkMEA0M3W5sL1KI4G3iztFxYILw9h5v6qN45rywWCstDx82YSA2Ta0n-8nyKc0u8mP2e6X2LUqdTLHfc2f6xFyEpuKNT_vy4QBLCuFKuVgcdkQ6jEjsuGHXFjLTwSSApeZ2DfhEhakYV6pH2kZ6yxwLCyghyi_U_gVG_YcKqE0-t0Y9WZeOLYcrQ6y9KEaLR2B2Pw32EHbsRB55r0kcL33vjaMo96x8eV5li0gMy9Drcw2dxle7hqMlRAE0OUXTb5tzRr3T16QI5g-rK2yS-qOYZCk7WCi9pSfpbOPznRNzLTdqOnR-kb9y3jG8u-1toslpjzDEhcj6aaQNcA5zqh3APTf4c2u0HDAwpriA0g5g2gPGNzq0siUDsftHxrYipFw--NSLXjiSvswNijpOELNcTYKaHp8tCub4aBBzQx35R0Kbk_x5mwUQXrMrErvIsfzVZvUVUOJ4N6WRj2dHbL51HRkiU6aQNU5RVlRf8zu7We1HlMGF7KK-dDviEHE8aSBV75knA2Fj8mL5WU3lwG_0sdIrFoRdZFCKE6dAInZYTRlgHjVtWMZc323BerKtC9U7VEopuaIDMk"}
Response Code: 400
Response: {
"type": "urn:ietf:params:acme:error:rejectedIdentifier",
"detail": "Error creating new order :: Cannot issue for "az-ca01.foo.internal": Domain name does not end with a valid public suffix (TLD)",
"status": 400
}

Table 'boulder_sa_integration.challenges' doesn't exist

Objective

Opening /admin/challenges page in WebUI to view

Result

"OOPS, Some unexpected error occured!"

Logs

labca_1 | 2021/05/29 13:15:30 errorHandler: Error 1146: Table 'boulder_sa_integration.challenges' doesn't exist

Environment

  • Fresh deployment of LabCA, clean install of Debian 10 in an LXC Container. Can be reproduced in full VM.

Update check

Add a button on the manage page in the web application to check for available updates to the software. Plus a button to upgrade when one is available.

Audit Errors even though the LabCA machine certificate generated properly

I have a problem, it seems, and am beginning here, without uploading an endless supply of logs. I am happy to provide what is needed but don't want to bury people with the superfluous.

I am getting many errors of the following:

[AUDIT] [core]grpc: addrConn.createTransport failed to connect to {10.88.88.88:9093 ca.boulder:9093 <nil> 0 <nil>}.

[AUDIT] [core]grpc: addrConn.createTransport failed to connect to {10.77.77.77:9093 ca.boulder:9093 <nil> 0 <nil>}.

[AUDIT] [core]grpc: addrConn.createTransport failed to connect to {10.77.77.77:9096 ca.boulder:9096 <nil> 0 <nil>}.

Each of these errors has an accompanying error that begins with...

[AUDIT] [core]grpc: Server.Serve failed to complete security handshake from

I have installed LabCA a half dozen times. After much fighting I landed on the knowledge that the machine/VM must have its hostname properly assigned and a static IP configuration. Additionally, if using your own root and issuer certificate, as I am (I maintain them in an air-gapped XCA repo), the most reliable way to get them used for the installation is a PKCS #12 file.

From the information below it seems the boulder image simply is not listening on the ports the system is complaining about. Am I missing something? What else can I provide to troubleshoot?

The docker image info is as follows:

Docker Image Info

using the command, docker ps

container id image
dce3eae95f95 letsencrypt/boulder-tools:go1.16.6_2021-07-12
7026fea58953 letsencrypt/boulder-tools:go1.16.6_2021-07-12
b759a8209548 mariadb:10.5

IP info

container id IPv4 name
dce3eae95f95 10.77.77.77, 10.88.88.88 boulder_boulder_1
7026fea58953 10.77.77.3 boulder_labca_1
b759a8209548 10.77.77.2 boulder_bmysql_1

IP ports

using the command, docker port

docker image id ports
dce3eae95f95 4000, 4001, 4002, 4003, 4430, 4431, 8055
7026fea58953 3000
b759a8209548 nothing returned

exec: "zip": executable file not found in $PATH

Objective

Fresh install, attempting to download the certs as a bundled zip via WebUI > Manage > Certificates

Result

I receive a zip file of approximately the right size, but it cannot be opened.
7-Zip reports "cannot open as archive", and unzip on the LabCA host gives:

Archive:  ./labca_certificates.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of ./labca_certificates or
        ./labca_certificates.zip, and cannot find ./labca_certificates.ZIP, period.

Logs

LabCA log only shows:
2021/05/29 14:24:45 errorHandler: exec: "zip": executable file not found in $PATH

Attempted Fixes / Workarounds

  • Confirmed that zip exists in /usr/bin, and that /usr/bin is in $PATH (Edit: on the host; see the sketch after this list)
  • apt install zip hasn't made any difference
  • Downloading as pfx and manually zipping works fine
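
A hedged workaround sketch, assuming the error comes from inside the LabCA admin container rather than the host (the container name is taken from other reports on this page; the boulder-tools image is Ubuntu-based, so apt-get should be available):

# install zip inside the container that produces the bundle
docker exec -it boulder_labca_1 apt-get update
docker exec -it boulder_labca_1 apt-get install -y zip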

Environment

  • Clean install of Debian 10 in an LXC Container. Can be reproduced in full VM.

Missing files

In the new boulder release, the following files no longer exist:

  • test/config/wfe.json
  • test/v1_integration.py

The sed commands exit with:
"Configure the boulder application...sed: can't read config/wfe.json: No such file or directory"

I've tried to modify the install script, removing all the sed commands for these two files, but LabCA doesn't work.

regards

Hanging after setup

I have installed this on Debian 11, Ubuntu 20.04 and 18.04, and Debian 10 (TurnKey Linux Core). On Debian 11 it went all the way through setup and generated the root CA and issuer cert, but then the challenge failed over and over. I have my internal DNS set up properly. On all the other OSes listed above, setup completes and then the page hangs after generating the issuer cert, with no status response or anything; eventually it says timeout. I tried rebooting and reinstalling over it... still no change.

Screenshots

Add some screenshots to the GitHub Pages to give a quick glance of what this project is / does.

Make LabCA name configurable?

Any plans to make the name on the frontend pages configurable, e.g. setting it to something like "Internal ACME CA" instead of LabCA?

Final restart fails due to "Name or Service not known" errors

Hello, when the WebGUI reaches the final setup phase, it displays an error "Name or service not found" when trying to get the acme challenge. Here are the logs from that page.

acme-tiny.log

Mon Jul 11 16:13:18 UTC 2022
Parsing account key...
Parsing CSR...
Found domains: labca.domain
Getting directory...
Directory found!
Registering account...
Already registered! Account ID: http://boulder:4001/acme/acct/1
Creating new order...
Order created!
Verifying labca.domain...
Traceback (most recent call last):
  File "/labca/acme_tiny.py", line 145, in get_crt
    assert (disable_check or _do_request(wellknown_url)[0] == keyauthorization)
  File "/labca/acme_tiny.py", line 46, in _do_request
    raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error:
Url: http://labca.domain/.well-known/acme-challenge/<challenge>
Data: None
Response Code: None
Response: <urlopen error [Errno -2] Name or service not known>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/labca/acme_tiny.py", line 199, in <module>
    main(sys.argv[1:])
  File "/labca/acme_tiny.py", line 195, in main
    signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact, check_port=args.check_port)
  File "/labca/acme_tiny.py", line 147, in get_crt
    raise ValueError("Wrote file to {0}, but couldn't download {1}: {2}".format(wellknown_path, wellknown_url, e))
ValueError: Wrote file to /var/www/html/.well-known/acme-challenge/<challenge>, but couldn't download http://labca.domain/.well-known/acme-challenge/<challenge>: Error:
Url: http://labca.domain/.well-known/acme-challenge/<challenge>
Data: None
Response Code: None
Response: <urlopen error [Errno -2] Name or service not known>

commander.log

Container boulder-nginx-1  Restarting
Container boulder-bmysql-1  Restarting
Container boulder-bmysql-1  Started
Container boulder-labca-1  Restarting
Container boulder-boulder-1  Restarting
Container boulder-nginx-1  Started
Container boulder-labca-1  Started
Container boulder-boulder-1  Started
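
A hedged diagnostic sketch: the host may resolve the FQDN while the Docker network does not, so it is worth checking name resolution from inside the containers themselves (container names taken from the commander.log above; labca.domain is the placeholder FQDN from this report):

# does the LabCA container resolve the host's FQDN?
docker exec boulder-labca-1 getent hosts labca.domain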

Upgrade 2022-05-02 / Message in screen

[AUDIT] timed out waiting for ca2.boulder:9093 health check 21 seconds
[AUDIT] timed out waiting for ra2.boulder:9094 health check 57 seconds
[AUDIT] Couldn't load rate limit policies file: yaml: line 19: could not find expected ':' 1 minute 

After the labca upgrade, this message set keeps repeating every 50 seconds.

Were there any changes that could cause this?

Output of docker ps -a

CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS                                  PORTS                                                                      NAMES
62ec168df593   letsencrypt/boulder-tools:go1.17.9_2022-04-12   "labca/entrypoint.sh"    12 minutes ago   Restarting (1) Less than a second ago                                                                              boulder-boulder-1
b848bd0aea0a   letsencrypt/boulder-tools:go1.17.9_2022-04-12   "./setup.sh"             12 minutes ago   Up 12 minutes                           3000/tcp                                                                   boulder-labca-1
d73cbcffeb47   mariadb:10.5                                    "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes                           3306/tcp                                                                   boulder-bmysql-1
8f6816c53f4c   letsencrypt/boulder-tools:go1.17.9_2022-04-12   "./control.sh"           12 minutes ago   Up 4 minutes                            3030/tcp                                                                   boulder-control-1
602bcaf7648c   nginx:1.21.6                                    "/docker-entrypoint.…"   12 minutes ago   Up 12 minutes                           0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   boulder-nginx-1
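
A hedged sketch for narrowing this down: Boulder reports a YAML parse error at line 19 of its rate limit policies file, so inspecting that area should show the offending entry (the exact path is an assumption based on the config layout referenced elsewhere on this page):

# show the lines around the reported parse error
sed -n '15,25p' /home/labca/boulder_labca/rate-limit-policies.yml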

questions

I would like to ask: how is the status reporting in your install script done? Can you give me a link? I think it is very interesting.
By the way, does your script take into account the problem of reinstalling after a failed installation?

Error during installation

Hello,

I just tried to install LabCA on a fresh Debian 11 VM:

$ ssh 10.20.30.40 -l root
[...]
$ apt update
[...]
$ apt upgrade
[...]
$ curl -sSL https://raw.githubusercontent.com/hakwerk/labca/master/install | bash

  [✓] Running as root
  [✓] Package 'git' is installed
  [✓] Package 'sudo' is installed
  [✓] User 'labca' already exists
  [✓] Clone https://github.com/hakwerk/labca/ to /home/labca/labca
  FQDN (Fully Qualified Domain Name) for this PKI host (users will use this in their browsers and clients)? [sahnee-ca] ca.sahnee.dev
  [✓] Determine web address
  [i] Setup admin application...hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:
git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint:
git branch -m <name>
  [✓] Setup admin application
  [✓] Configure the admin application
  [✓] Software is up-to-date
  [✓] Package 'apt-transport-https' is installed
  [✓] Package 'ca-certificates' is installed
  [✓] Package 'curl' is installed
  [✓] Package 'gnupg2' is installed
  [✓] Package 'net-tools' is installed
  [✓] Package 'software-properties-common' is installed
  [✓] Package 'tzdata' is installed
  [✓] Package 'ucspi-tcp' is installed
  [✓] Package 'zip' is installed
  [✓] Package 'python' is installed
  [✓] Package 'docker-ce' is installed
  [✓] Binary 'docker-compose' is installed
  [i] Static web pages...hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:
git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint:
git branch -m <name>
cp: cannot stat '/home/labca/labca/static/*': No such file or directory

I've seen from the README that only Debian 9 and 10 are officially supported. Is this a Debian 11 issue? I'm hesitant to use an old Debian version, especially for something as security-critical as a CA.

Thank you for your time!

Documentation suggestion

I hope to see more detailed installation and configuration documentation, covering everything from SSL generation and verification to distribution.

Install error

Hi, I tried to set up LabCA from a clean installation but I ran into this issue.

2020-11-20: Pulling from letsencrypt/boulder-tools-go1.15.5
Digest: sha256:e111c343648d4f30b0aa2323a2e79becfd397626f34b5ac4b1b63e2e2c76d195
Status: Downloaded newer image for letsencrypt/boulder-tools-go1.15.5:2020-11-20
Creating boulder_bmysql_1 ... error

ERROR: for boulder_bmysql_1 Cannot start service bmysql: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "proc" to rootfs at "/proc" caused: permission denied: unknown

ERROR: for bmysql Cannot start service bmysql: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "proc" to rootfs at "/proc" caused: permission denied: unknown
Encountered errors while bringing up the project.

MVC for ACME objects

Especially for the GUI part that browses the ACME objects, using the MVC pattern makes sense, e.g. using utron.

Investigate "i/o timeout"

I see several warnings like below over the past couple of days:

Failed to find certificates with missing OCSP responses: dial tcp: i/o timeout 2 hours
Failed to find stale OCSP responses: dial tcp 10.77.77.2:3306: i/o timeout 1 day
Failed to find stale OCSP responses: dial tcp: i/o timeout 2 days
[mysql] read tcp 10.77.77.77:53444->10.77.77.2:3306: i/o timeout 3 days
Failed to find certificates with missing OCSP responses: dial tcp: i/o timeout 3 days
Failed to find revoked certificates: dial tcp 10.77.77.2:3306: i/o timeout 5 days
error loading hostname policy: unexpected end of JSON input 5 days

Other connections to MySQL seem fine. Is that caused by our setup or something in the upstream Boulder implementation?

Security issue: session shared with ALL users

As soon as the admin user is logged in, all other sessions (from any other browser/device) can access the admin pages!

It was found that no Set-Cookie headers were sent to the browser, but the error message from session.Save() was not shown. The root cause turned out to be using the base64-encoded authorization and encryption keys for the session store, instead of the decoded binary keys.

Manual issue a certificate / validate CSR

Hello!

Cool project, I like it.
But I didn't find any facility for manually issuing a certificate / validating a CSR request.

Can you add this small feature to make it a complete PKI system?

Install failure

Fresh install of Ubuntu 18.04. (I am aware this says tested on Debian.)

[✓] Running as root
[✓] Package 'git' is installed
[✓] Package 'sudo' is installed
[✓] Created user 'labca'
[i] WARNING: could not set core.excludesfile...

Running sudo -u labca git config --global core.excludesfile /home/labca/.gitignore_global, I get: fatal: cannot come back to cwd: Permission denied.

The second installation today failed, I hope to get support!

➜  boulder git:(main) ✗ diff <(docker ps) <(docker ps -a)
1,5c1,12
< CONTAINER ID   IMAGE                                         COMMAND                  CREATED          STATUS                    PORTS                                                                                                                                                                                        NAMES
< a5f84338049a   vaultwarden/server:latest                     "/usr/bin/dumb-init …"   11 minutes ago   Up 10 minutes (healthy)   80/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 3012/tcp, 0.0.0.0:8445->8445/tcp, :::8445->8445/tcp, 0.0.0.0:8445->8445/udp, :::8445->8445/udp, 0.0.0.0:8443->443/udp, :::8443->443/udp   Bitwarden
< f701d821ba98   letsencrypt/boulder-tools:go1.17_2021-10-22   "labca/entrypoint.sh"    33 minutes ago   Up 37 seconds             0.0.0.0:4000-4003->4000-4003/tcp, :::4000-4003->4000-4003/tcp, 0.0.0.0:4430-4431->4430-4431/tcp, :::4430-4431->4430-4431/tcp, 0.0.0.0:8055->8055/tcp, :::8055->8055/tcp                      boulder_boulder_1
< 7c216edf98c5   letsencrypt/boulder-tools:go1.17_2021-10-22   "./setup.sh"             51 minutes ago   Up 19 minutes             0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                                                                                                                                                    boulder_labca_1
< 07454923b625   mariadb:10.5                                  "docker-entrypoint.s…"   51 minutes ago   Up 19 minutes             3306/tcp                                                                                                                                                                                     boulder_bmysql_1
---
> CONTAINER ID   IMAGE                                         COMMAND                  CREATED          STATUS                        PORTS                                                                                                                                                                                        NAMES
> a5f84338049a   vaultwarden/server:latest                     "/usr/bin/dumb-init …"   11 minutes ago   Up 10 minutes (healthy)       80/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp, 3012/tcp, 0.0.0.0:8445->8445/tcp, :::8445->8445/tcp, 0.0.0.0:8445->8445/udp, :::8445->8445/udp, 0.0.0.0:8443->443/udp, :::8443->443/udp   Bitwarden
> f701d821ba98   letsencrypt/boulder-tools:go1.17_2021-10-22   "labca/entrypoint.sh"    33 minutes ago   Up 37 seconds                 0.0.0.0:4000-4003->4000-4003/tcp, :::4000-4003->4000-4003/tcp, 0.0.0.0:4430-4431->4430-4431/tcp, :::4430-4431->4430-4431/tcp, 0.0.0.0:8055->8055/tcp, :::8055->8055/tcp                      boulder_boulder_1
> f5f9978fa3e3   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (137) 20 minutes ago                                                                                                                                                                                                boulder_bredis_clusterer_1
> 7c216edf98c5   letsencrypt/boulder-tools:go1.17_2021-10-22   "./setup.sh"             51 minutes ago   Up 19 minutes                 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                                                                                                                                                    boulder_labca_1
> 07454923b625   mariadb:10.5                                  "docker-entrypoint.s…"   51 minutes ago   Up 19 minutes                 3306/tcp                                                                                                                                                                                     boulder_bmysql_1
> ea93f32f66df   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_6_1
> fd9c513772c3   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_1_1
> 62d3cd94e9b4   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_5_1
> f0a56320a0ef   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_3_1
> 5d24c29d7f5b   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_4_1
> d37a772d164f   redis:latest                                  "docker-entrypoint.s…"   51 minutes ago   Exited (0) 20 minutes ago                                                                                                                                                                                                  boulder_bredis_2_1
➜  boulder git:(main) ✗ docker logs boulder_bredis_6_1   
1:C 27 Nov 2021 11:13:46.848 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 27 Nov 2021 11:13:46.848 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 27 Nov 2021 11:13:46.848 # Configuration loaded
1:M 27 Nov 2021 11:13:46.859 # Server initialized
1:M 27 Nov 2021 11:13:46.859 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 27 Nov 2021 11:13:47.502 # configEpoch set to 6 via CLUSTER SET-CONFIG-EPOCH
1:M 27 Nov 2021 11:13:47.625 # IP address for this node updated to 10.33.33.7
1:M 27 Nov 2021 11:13:48.872 # Cluster state changed: ok
1:S 27 Nov 2021 11:13:49.536 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:signal-handler (1638012706) Received SIGTERM scheduling shutdown...
1:S 27 Nov 2021 11:31:46.498 # User requested shutdown...
1:S 27 Nov 2021 11:31:46.499 # Redis is now ready to exit, bye bye...
1:C 27 Nov 2021 11:32:00.016 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 27 Nov 2021 11:32:00.017 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 27 Nov 2021 11:32:00.017 # Configuration loaded
1:M 27 Nov 2021 11:32:00.020 # Server initialized
1:M 27 Nov 2021 11:32:00.020 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 27 Nov 2021 11:32:00.021 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:S 27 Nov 2021 11:32:00.024 # Cluster state changed: ok
1:S 27 Nov 2021 11:32:00.044 # Error condition on socket for SYNC: (null)
1:S 27 Nov 2021 11:32:01.119 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:signal-handler (1638013489) Received SIGTERM scheduling shutdown...
1:S 27 Nov 2021 11:44:49.427 # User requested shutdown...
1:S 27 Nov 2021 11:44:49.523 # Redis is now ready to exit, bye bye...
➜  boulder git:(main) ✗ docker logs boulder_bredis_5_1
1:C 27 Nov 2021 11:13:46.842 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 27 Nov 2021 11:13:46.842 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 27 Nov 2021 11:13:46.842 # Configuration loaded
1:M 27 Nov 2021 11:13:46.857 # Server initialized
1:M 27 Nov 2021 11:13:46.857 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 27 Nov 2021 11:13:47.502 # configEpoch set to 5 via CLUSTER SET-CONFIG-EPOCH
1:M 27 Nov 2021 11:13:47.627 # IP address for this node updated to 10.33.33.6
1:M 27 Nov 2021 11:13:48.874 # Cluster state changed: ok
1:S 27 Nov 2021 11:13:49.619 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:signal-handler (1638012706) Received SIGTERM scheduling shutdown...
1:S 27 Nov 2021 11:31:46.500 # Connection with master lost.
1:S 27 Nov 2021 11:31:46.501 # Error condition on socket for SYNC: (null)
1:S 27 Nov 2021 11:31:46.507 # User requested shutdown...
1:S 27 Nov 2021 11:31:46.508 # Redis is now ready to exit, bye bye...
1:C 27 Nov 2021 11:31:58.981 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 27 Nov 2021 11:31:58.981 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 27 Nov 2021 11:31:58.981 # Configuration loaded
1:M 27 Nov 2021 11:31:58.983 # Server initialized
1:M 27 Nov 2021 11:31:58.983 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 27 Nov 2021 11:31:58.983 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:S 27 Nov 2021 11:31:58.984 # Cluster state changed: ok
1:S 27 Nov 2021 11:31:59.594 # Done loading RDB, keys loaded: 0, keys expired: 0.
1:signal-handler (1638013489) Received SIGTERM scheduling shutdown...
1:S 27 Nov 2021 11:44:49.429 # User requested shutdown...
1:S 27 Nov 2021 11:44:49.523 # Redis is now ready to exit, bye bye...
➜  boulder git:(main) ✗ 

vm.overcommit_memory = 1

So do I just need to add this parameter?
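
The warning itself only concerns background saves under memory pressure, but if you want to apply the setting the Redis log recommends, a minimal sketch is:

# apply immediately and persist across reboots, as suggested by the Redis log above
sudo sysctl vm.overcommit_memory=1
echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf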

Convert labca-service/commander to docker container

The service and commander script currently run on the host system, but they should be converted to run inside Docker. As this will need access to docker.sock, for security it is best to put it in its own container.

Error installing on Ubuntu 22 - missing dependencies

I am installing this in a fresh Ubuntu 22 container on Proxmox and getting the following errors despite running with sudo.

linuxadmin@ca:~$ sudo curl -sSL https://raw.githubusercontent.com/hakwerk/labca/master/install | bash

  [✗] Not running as root
  [✓] Run using sudo

  [✓] Running as root
main: line 72: /tmp/labca-install.log: Permission denied

Looking at the log, you find the following:

linuxadmin@ca:~$ vim /tmp/labca-install.log
Command 'vim' not found, but can be installed with:
sudo apt install vim         # version 2:8.2.3995-1ubuntu2, or
sudo apt install vim-tiny    # version 2:8.2.3995-1ubuntu2
sudo apt install neovim      # version 0.6.1-3
sudo apt install vim-athena  # version 2:8.2.3995-1ubuntu2
sudo apt install vim-gtk3    # version 2:8.2.3995-1ubuntu2
sudo apt install vim-nox     # version 2:8.2.3995-1ubuntu2

It would be good to have this in the install script
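
A hedged workaround sketch: with "sudo curl ... | bash" only curl runs as root, so piping the script into "sudo bash" instead keeps the whole run, including the log file handling, under the root account; removing a stale log from an earlier attempt may also be needed:

sudo rm -f /tmp/labca-install.log
curl -sSL https://raw.githubusercontent.com/hakwerk/labca/master/install | sudo bash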

Question: Multiple domains

I have looked through the code but couldn't find an option to enable a set of domains instead of just one domain.

Is that part not implemented in LabCA, or does it need to be handled in a different way (manual edit of the config file)?

Installer fails because of issue with curl

There is an issue with cloning the git repository when using the default version of curl (libcurl3-gnutls:amd64 v7.74.0-1.2~bpo10+1) for Debian 10.

[2021-05-16 15:15:30.670] [INFO ] Clone https://github.com/hakwerk/labca/ to /home/labca/labca...
fatal: unable to access 'https://github.com/hakwerk/labca/': Failed sending HTTP2 data
[2021-05-16 15:15:30.824] [FATAL] Could not clone git repository

This can be worked around by downgrading curl to "libcurl3-gnutls:amd64 v7.64.0-4+deb10u2" (sudo apt install libcurl3-gnutls=7.64.0-4+deb10u2).

Additional resources:
https://superuser.com/questions/1642858/git-throws-fatal-unable-to-access-https-github-com-user-repo-git-failed-se

You might want to implement a check for this, or mention it in the installation guide.
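
A hedged alternative to downgrading libcurl is to force git to fall back to HTTP/1.1 for the clone (available in git 2.18 and newer):

git config --global http.version HTTP/1.1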

No Docker

Investigate if it makes sense to drop the Docker containers and their overhead and run everything directly on the server / virtual machine.

Clean repo with new docker-compose.yml

When the docker-only setup is fully functional, and the images can be downloaded from a container registry, then pretty much the only thing end users would need is the final docker-compose.yml file plus some documentation.
Ideally this repo gets cleaned up completely and only contains those essentials. The current repo can be renamed to e.g. "labca-builder" for the patches, image build automation, etc. Can existing installs then convert to that builder repo?!

Error on line 58 in commander script

Hi,

On a clean Debian I had some problems (after switching to explicit root (su - root) everything worked well); the website is stuck on "Almost there! Now we will request a certificate for this website and restart one more time...". I restarted several times; I think the problem is that the webserver in the docker image isn't ready. But here are the results from the logs:

Web Certificate Log

Di 16. Feb 13:31:15 CET 2021
Parsing account key...
Parsing CSR...
Found domains: host.domain.local
Getting directory...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 105, in get_crt
directory, _, _ = _do_request(directory_url, err_msg="Error getting directory")
File "/home/labca/acme_tiny.py", line 45, in _do_request
raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error getting directory:
Url: http://host.domain.local
Data: None
Response Code: 502
Response:

<title>502 Bad Gateway</title>

502 Bad Gateway


nginx

LABCA Logs

labca_1 | 2021/02/16 12:33:56 errorHandler: ERROR! On line 58 in commander script
labca_1 | main._hostCommand(0xbd1c00, 0xc00027e0e0, 0xc00024e700, 0xb104bf, 0xc, 0x0, 0x0, 0x0, 0xb3b500)
labca_1 | /go/src/labca/main.go:1513 +0x5a5
labca_1 | main.finalHandler(0xbd1c00, 0xc00027e0e0, 0xc00024e700)
labca_1 | /go/src/labca/main.go:1860 +0xf9
labca_1 | net/http.HandlerFunc.ServeHTTP(0xb3ac58, 0xbd1c00, 0xc00027e0e0, 0xc00024e700)
labca_1 | /usr/local/go/src/net/http/server.go:2042 +0x44
labca_1 | main.authorized.func1(0xbd1c00, 0xc00027e0e0, 0xc00024e700)
labca_1 | /go/src/labca/main.go:2316 +0x230
labca_1 | net/http.HandlerFunc.ServeHTTP(0xc000130b20, 0xbd1c00, 0xc00027e0e0, 0xc00024e700)
labca_1 | /usr/local/go/src/net/http/server.go:2042 +0x44
labca_1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc00020a0c0, 0xbd1c00, 0xc00027e0e0, 0xc00024e500)
labca_1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0xd3
labca_1 | net/http.serverHandler.ServeHTTP(0xc0001062a0, 0xbd1c00, 0xc00027e0e0, 0xc00024e500)
labca_1 | /usr/local/go/src/net/http/server.go:2843 +0xa3
labca_1 | net/http.(*conn).serve(0xc0002540a0, 0xbd32c0, 0xc00023df00)
labca_1 | /usr/local/go/src/net/http/server.go:1925 +0x8ad
labca_1 | created by net/http.(*Server).Serve
labca_1 | /usr/local/go/src/net/http/server.go:2969 +0x36c
labca_1 | 2021/02/16 12:33:56 http: superfluous response.WriteHeader call from main.finalHandler (main.go:1861)
labca_1 | 2021/02/16 12:34:11 ERROR: Message from server: 'ERROR! On line 58 in commander script
labca_1 | '
labca_1 | 2021/02/16 12:34:11 errorHandler: ERROR! On line 58 in commander script
labca_1 | main._hostCommand(0xbd1c00, 0xc00027e2a0, 0xc00024ed00, 0xb104bf, 0xc, 0x0, 0x0, 0x0, 0xb3b500)
labca_1 | /go/src/labca/main.go:1513 +0x5a5
labca_1 | main.finalHandler(0xbd1c00, 0xc00027e2a0, 0xc00024ed00)
labca_1 | /go/src/labca/main.go:1860 +0xf9
labca_1 | net/http.HandlerFunc.ServeHTTP(0xb3ac58, 0xbd1c00, 0xc00027e2a0, 0xc00024ed00)
labca_1 | /usr/local/go/src/net/http/server.go:2042 +0x44
labca_1 | main.authorized.func1(0xbd1c00, 0xc00027e2a0, 0xc00024ed00)
labca_1 | /go/src/labca/main.go:2316 +0x230
labca_1 | net/http.HandlerFunc.ServeHTTP(0xc000130d20, 0xbd1c00, 0xc00027e2a0, 0xc00024ed00)
labca_1 | /usr/local/go/src/net/http/server.go:2042 +0x44
labca_1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc00020a0c0, 0xbd1c00, 0xc00027e2a0, 0xc00024eb00)
labca_1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0xd3
labca_1 | net/http.serverHandler.ServeHTTP(0xc0001062a0, 0xbd1c00, 0xc00027e2a0, 0xc00024eb00)
labca_1 | /usr/local/go/src/net/http/server.go:2843 +0xa3
labca_1 | net/http.(*conn).serve(0xc0002541e0, 0xbd32c0, 0xc00043c180)
labca_1 | /usr/local/go/src/net/http/server.go:1925 +0x8ad
labca_1 | created by net/http.(*Server).Serve
labca_1 | /usr/local/go/src/net/http/server.go:2969 +0x36c
labca_1 | 2021/02/16 12:34:11 http: superfluous response.WriteHeader call from main.finalHandler (main.go:1861)
labca_1 | 2021/02/16 12:34:39 GET /logs/web
labca_1 | 2021/02/16 12:34:39 GET /ws?logType=web
labca_1 | 2021/02/16 12:35:00 GET /logs/weberr
labca_1 | 2021/02/16 12:35:25 GET /logs/labca
labca_1 | 2021/02/16 12:35:25 GET /ws?logType=labca

Thanks for this project and your support.

OCSP: No response found for request

On Windows you can use the certutil tool to verify the AIA (Authority Information Access):

certutil -url leaf_cert.cer
  • The "Certs (from AIA)" does not work because it contains 127.0.0.1
    Find the correct path for this and put that in the config
  • The "OCSP (from AIA)" returns unsuccessful
    The server log shows "ocsp-responder tK3K_QU No response found for request: serial ff1ad4e... "

The certutil tool can also be used to verify the CRL of the root and intermediate CA certs; this appears to work currently.
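
For cross-checking from a Linux host, a hedged sketch of querying the OCSP responder directly (it assumes the leaf and issuer certificates are available as PEM files; the URL placeholder is whatever the certificate's AIA extension points to):

openssl ocsp -issuer ca-int.pem -cert leaf_cert.pem -url <ocsp-url-from-AIA> -text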

ECDSA root cert

Hi,
Many thanks for this great project! I have been looking for a private CA for my home network and LabCA looks like the perfect solution. However, I have an issue with an ECDSA root cert.

I have a clean install on Debian 11 (virtual machine). I chose ECDSA-384 when generating the root cert during setup and generated an RSA-2048 issuer cert. Both certs are generated correctly. After the restart, the setup process fails during the attempt to generate a cert for the webserver. Boulder throws an error:

boulder-boulder-1 | E220602215402 boulder-ca qM-tDQA [AUDIT] Couldn't load issuers: failed to create lint issuer: x509: requested SignatureAlgorithm does not match private key type

Looking at the certs:
/home/labca/admin/data/root-ca.pem
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (384 bit)
Signature Algorithm: ecdsa-with-SHA384

/home/labca/admin/data/issuer/ca-int.pem
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Signature Algorithm: ecdsa-with-SHA384

And the key + request generated by acme_tiny.py
/home/labca/nginx_data/ssl/account.key
Key Details:
Type: RSA
Size (bits): 4096

/home/labca/nginx_data/ssl/domain.csr
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Signature Algorithm: sha256WithRSAEncryption

I am not an expert but IMHO the problem is the private key type and/or signature algorithm (sha256) used by acme_tiny.py in the request.
What do you think?
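
To confirm where the mismatch sits, the key type and signature algorithm of each file can be dumped side by side (a hedged sketch using the paths from this report):

# issuer certificate: an RSA key signed by the ECDSA root
openssl x509 -in /home/labca/admin/data/issuer/ca-int.pem -noout -text | grep -E 'Public Key Algorithm|Signature Algorithm'
# CSR produced by acme_tiny.py
openssl req -in /home/labca/nginx_data/ssl/domain.csr -noout -text | grep -E 'Public Key Algorithm|Signature Algorithm'
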
Thanks a lot!

Issue installing/configuring labca

Hi,
I have been trying to install LabCA but I keep getting the following error at the last step in the web interface:
Mon Oct 18 03:26:24 UTC 2021
Parsing account key...
Parsing CSR...
Found domains: labca00.localdomain.be
Getting directory...
Traceback (most recent call last):
File "/home/labca/acme_tiny.py", line 197, in
main(sys.argv[1:])
File "/home/labca/acme_tiny.py", line 193, in main
signed_crt = get_crt(args.account_key, args.csr, args.acme_dir, log=LOGGER, CA=args.ca, disable_check=args.disable_check, directory_url=args.directory_url, contact=args.contact)
File "/home/labca/acme_tiny.py", line 105, in get_crt
directory, _, _ = _do_request(directory_url, err_msg="Error getting directory")
File "/home/labca/acme_tiny.py", line 45, in _do_request
raise ValueError("{0}:\nUrl: {1}\nData: {2}\nResponse Code: {3}\nResponse: {4}".format(err_msg, url, data, code, resp_data))
ValueError: Error getting directory:
Url: http://labca00.localdomain.be/directory
Data: None
Response Code: 502
Response:

<title>502 Bad Gateway</title>

502 Bad Gateway


nginx

I see a docker container running on port 4001, but when I go to labca00.localdomain.be/directory in the browser it gives me Bad Gateway, and I do not know where or why. It is really the last step in the web interface, where it creates a certificate.
So please help
Filip
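
A hedged diagnostic sketch: nginx returns 502 Bad Gateway when its upstream is unreachable, so it can help to check whether the Boulder WFE actually answers on port 4001 (assuming the port is published to the host, as it is in other reports on this page):

curl -sS http://127.0.0.1:4001/directory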

Delete Root CA key

The Root CA key should be stored offline; it should be deleted from LabCA once the Issuer CA is generated. Suggestion:

  1. Root CA upload: make Root CA key uploading optional (with a hint that Root CA's private key is only needed for Issuer CA generation, it is not stored by LabCA). If Root key is not uploaded, Issuer CA can not be generated (only uploaded).
  2. Root CA generate: after the Root CA generation (before Issuer CA setup), Root CA key is shown to the user (plain text and/or file) with a message, something like "For comprehensive risk reduction, the Root CA's private key should be stored offline. Please copy this Root CA's private key and store it in secure, private and offline location. The Root CA's private key will be deleted from LabCA after Issuer CA generation!"
  3. Issuer CA generate: after Issuer CA generation, Root CA private key is permanently deleted from LabCA.
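
A minimal illustration of item 3, assuming the root key is kept as a file alongside the other data files referenced on this page (the exact filename is hypothetical):

# securely remove the root CA private key once the issuer certificate has been generated
shred -u /home/labca/admin/data/root-ca.key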

Pre-build docker images

Now that everything is running in docker containers, the image(s) should be standard for everyone, so we should build them and put them on Docker Hub / GitHub Container Registry.
Probably still need to do some config prep from the control container / commander script when the install script is no longer used.

broken setup

Hi,
Great project. I've tried to get it working on Debian 10 but unfortunately it breaks during the initial setup.

I was able to get to the second web page of the setup. After setting the domain name and the DNS server, the docker container "boulder_boulder_1" fails and restarts.

628365fc7056 letsencrypt/boulder-tools-go1.15:2020-08-12 "labca/entrypoint.sh" 16 minutes ago Restarting (1) 54 seconds ago boulder_boulder_1

Stopping and starting does not help. The web page shows "could not render requested page".

docker logs shows:
Starting enhanced syslogd: rsyslogd.
Connected to boulder-mysql:3306
Database boulder_sa_test already exists - skipping create
goose: no migrations to run. current version: 20200609125504
migrated boulder_sa_test database with ./sa/_db/
added users to boulder_sa_test
Database boulder_sa_integration already exists - skipping create
goose: no migrations to run. current version: 20200609125504
migrated boulder_sa_integration database with ./sa/_db/
added users to boulder_sa_integration
created all databases
CKR_SLOT_ID_INVALID: Slot 0 does not exist.
ERROR: Could open the PKCS#8 file: labca/test-ca.p8
Found slot 864150829 with matching token label.
Starting enhanced syslogd: rsyslogd.
Connected to boulder-mysql:3306
Database boulder_sa_test already exists - skipping create
goose: no migrations to run. current version: 20200609125504
migrated boulder_sa_test database with ./sa/_db/
added users to boulder_sa_test
Database boulder_sa_integration already exists - skipping create
goose: no migrations to run. current version: 20200609125504
migrated boulder_sa_integration database with ./sa/_db/
added users to boulder_sa_integration
created all databases
CKR_SLOT_ID_INVALID: Slot 0 does not exist.
ERROR: Could open the PKCS#8 file: labca/test-ca.p8
Found slot 864150829 with matching token label.

I can't imagine how to fix it to get it working.

I would gladly provide any required information to sort the problem.
