homeserver's People

Contributors

p3nkiln, shionryuu, valentincauro, zilexa


homeserver's Issues

Persistence of /proc/fs/nfsd/version setting

Hi,

first I want to thank you for your great work and your effort to get NFS4.2 running!

I just wonder if there is a way to persist the "-4.1" setting in /proc/fs/nfsd/version... It's always gone after a reboot.

I thought about creating a systemd service file that runs after proc-fs-nfsd.mount (which apparently sets up the /proc stuff) and before nfs-server.service...

But as I'm a Gentoo user and have not worked with systemd extensively, I have no idea how to do that, yet.

Do you have a hint maybe?

Thanks in advance!
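The systemd approach described above can be sketched as a oneshot unit. This is a minimal sketch, assuming the unit name nfs-version.service (hypothetical) and that the proc file is /proc/fs/nfsd/versions (its usual name):

```ini
# /etc/systemd/system/nfs-version.service
[Unit]
Description=Persist NFS version flags
Requires=proc-fs-nfsd.mount
After=proc-fs-nfsd.mount
Before=nfs-server.service

[Service]
Type=oneshot
# The same write you would otherwise do by hand after each boot
ExecStart=/bin/sh -c 'echo "-4.1" > /proc/fs/nfsd/versions'

[Install]
WantedBy=nfs-server.service
```

Enable it with systemctl enable nfs-version.service. On a recent nfs-utils, setting vers4.1=n in the [nfsd] section of /etc/nfs.conf may achieve the same without a custom unit.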

Local proxy address works on server and wifi devices, but not on LAN

Problem

I found a similar discussion from you on the Caddy community forum. But in my case, instead of just the server, I cannot access any of the local domain addresses (e.g. http://plex.o/) from my LAN-connected devices (e.g. local PC, Chromecast). I can access them from wifi devices, though, like my phone and laptop.

Another note: I can still access the services via serverip:port, but not via *.o/.

Environment

  • Clean install using this repo
  • Set *.o to my server IP using AdGuard Home -> works great
  • Set my server IP to be static in the router, though I am not sure it is necessary (it is not mentioned anywhere in this guide)
  • Router DNS settings: see below

Confusion

There are two 'Primary DNS' settings I can change in my router (TP-Link Archer C1200): one in the Internet IPv4 settings, currently set to be configured automatically by the ISP, and one in the DHCP settings, where I can just put my server IP (it was initially blank). Experimenting with these two settings, I found that the DHCP one is what allows my wifi devices to access those local domain addresses.
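For anyone debugging the same split behaviour, a quick way to see which resolver a device is actually using is to query it explicitly. The addresses below are assumptions: replace 192.168.1.2 with the AdGuard Home host's IP.

```shell
# Ask the device's default resolver first, then AdGuard Home directly.
# If only the second query resolves plex.o, the device is not using
# AdGuard Home as its DNS server (check the router's DHCP DNS setting).
nslookup plex.o
nslookup plex.o 192.168.1.2
```

Wired and wireless clients can also receive DHCP leases from different sources (e.g. a separate access point), which would explain a wifi/LAN difference like this.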

Vaultwarden login

I ran into an issue with the Vaultwarden instructions. The first login screen asks for an email address and the master password. The instructions tell me to use the secret from the .env file, but there is no info on which e-mail to use. I tried every email that was given anywhere during the install, but no luck.

Port 80 overlap between Caddy and Adguard Home

Issue:
When running the containers, Caddy is given port 80 in the compose file, but AdGuard Home wants to use port 80 as well.

Adguard home gives error:
validating ports: listen tcp 0.0.0.0:80: bind: address already in use

The guide does not say anything about needing to change this port for either container. I see that I can change this listen port in Adguard setup to other ports. Is that what I should do?
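One way out, sketched below under the assumption that AdGuard Home runs as its own bridge-networked container: let Caddy keep 80/443 and publish AdGuard's web UI on a different host port (3000 is AdGuard's default initial-setup port).

```yaml
# Hypothetical compose excerpt: only DNS and the web UI are published,
# so AdGuard Home no longer competes with Caddy for port 80.
  adguard:
    image: adguard/adguardhome
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 3000:3000/tcp   # web UI on http://serverip:3000 instead of port 80
```

If AdGuard Home runs with network_mode: host instead, the equivalent fix is choosing a non-80 web interface port during its initial setup wizard.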

Adguard + Unbound setup

Just wondering how I should set up to get Adguard and Unbound to work together.

  • The docker/Readme.md has a section where it says "See here for tips" but doesn't link anywhere and was never linked in past commits.

  • When setting up AdguardHome, what should the WebAdmin listen port and DNS server listen port be?

  • The Unbound container has always been unhealthy for me, with only this in the logs:

OCI runtime exec failed: exec failed: unable to start container process: exec: "dig -p 5335 sigok.verteiltesysteme.net @127.0.0.1": executable file not found in $PATH: unknown

Perhaps removing the volume from this issue could be causing problems.

Even if that is the case, I'm having trouble conceptualizing how Unbound and AdGuard Home should work together, since in the current config Unbound is on its own network while AdGuard Home is on the host's network. Could you help this newbie out?
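On the unhealthy Unbound container: the log line suggests the whole dig command string is being executed as a single binary name. A healthcheck sketch that runs it through a shell instead (assuming dig exists in the image; the interval values are arbitrary) would be:

```yaml
# Hypothetical healthcheck for the unbound service: CMD-SHELL lets a
# shell parse the string, so "dig" is found as the executable and
# "-p 5335 ..." become its arguments.
    healthcheck:
      test: ["CMD-SHELL", "dig -p 5335 sigok.verteiltesysteme.net @127.0.0.1 || exit 1"]
      interval: 60s
      timeout: 10s
```

If the image ships without dig, drill or a simple port check such as nc -z 127.0.0.1 5335 are common substitutes.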

question nextcloud

Sorry to ask; I see you don't use this file anymore. I think I'm almost up and running. I want to test locally, without a domain, just localhost, and tried setting this:

I'm fairly familiar with Caddy 1, somewhat with 2, OK with Nextcloud setup, and OK with a single Docker file, but a little lost with Docker networking at the moment.

I think I had to setup a Swarm instance for this also, which seems solved on my end.

labels: caddy.auto_https: "off"

Screenshot from 2021-12-06 18-45-55

I understand if you aren't supporting this file anymore; you can just close this if you don't have time. Cheers.

Caddy reverse proxy not working for radarr

Hi again. I'm sure you're probably tired of hearing from me by now, so I apologize.

Issue:
After adding a DNS rewrite in AdGuard pointing radarr.home to my server IP, http://radarr.home/ will not connect and is immediately redirected to https. This does not happen with any of the other *arr apps, just Radarr.

Any idea what would make Radarr function differently from the rest?
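A hedged way to narrow this down is to request the page outside the browser, since browsers cache HSTS decisions per hostname while curl does not:

```shell
# If curl receives a plain-HTTP response here, the https upgrade is the
# browser's doing (e.g. a cached HSTS entry for radarr.home), not Caddy's
# or Radarr's. A redirect in these headers would point at the proxy instead.
curl -sI http://radarr.home/
```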

UPDATING DOCUMENTATION - STATUS

Since December I have been rewriting this guide completely:

  • the order of steps is now more logical
  • the steps are easier to understand
  • much less text, more to the point
  • some parts completely rewritten, especially filesystem guide was a total mess
  • now fully focused on Manjaro as OS. In the future I will test my scripts for EndeavourOS as well. Do not expect Ubuntu/Debian support. Arch makes more sense to me.
  • completely rewritten scripts (prep-server.sh and post-install.sh)

I still have a few things to finalize. Afterwards, hopefully before the end of March, I will reinstall my own server, live-test my scripts, then publish all of this on Reddit/r/selfhosted and consider it "finished".

2023 - options?

Hi @zilexa
I found your guide very useful.
I'm looking into updating my old homeserver, mainly because it idles at approx. 50 W,
and with the current Danish energy prices it's time to replace the old i3 540 from 2011.

Now in 2023, with gen 12 and gen 13 CPUs,
it looks like the prices for socket 1200 or even 1700 are worth considering.
I have been looking at the ASRock barebones:

H470 (socket 1200)
https://www.asrock.com/nettop/Intel/DeskMini%20H470%20Series/index.asp

and
the B660 (socket 1700)
https://www.asrock.com/nettop/Intel/DeskMini%20B660%20Series/index.asp

Do you think these are worth considering?
For me personally, ECC is not a big concern.

Or do you still consider the gen 9 / gen 10 CPUs to be the most energy efficient?

br Ronni

>> CHANGELOG <<

I will post changes to the documentation here, and also note whenever I update the docker-compose.yml, together with its changelog.
You can subscribe to this issue (change Unwatched > Watched) to receive a notification on updates.

Error when creating datapools (and maybe mount points)

Up until Step 2b. (Create Your Datapools) I have had minimal issues with the instructions and scripts working out as expected. I am going with 'Option 1' for my storage.

In 2a, I wiped drives /dev/sda, /dev/sdc, and /dev/sdd and created individual file systems as follows:

mkfs.btrfs -m dup -L data1 /dev/sda
mkfs.btrfs -m dup -L data0 /dev/sdc
mkfs.btrfs -m dup -L backup0 /dev/sdd

All were successful, although I needed to use sudo for the commands to work (which is not mentioned in the instructions).

I created the mount points as instructed:
sudo mkdir -p /mnt/disks/{data0,data1,backup0}

I made sure each was unmounted (they do not seem to ever have been mounted up to this point, so there is a 'not mounted' message) and modified my fstab as directed (attached fstab up to line 31). Running sudo mount -a returns no output, as expected, but when I verify the mounts with sudo lsblk or sudo mount -l, the only drive showing as mounted is my NVMe; data0, data1 and backup0 are not mounted. I'm not actually sure they are supposed to be at this point, since from what I understand the noauto option in fstab stops the drives from being mounted by sudo mount -a.

At this point, except for the mounting question, everything seems good to go. I checked out Folder Structure Recommendations & Data Migration, but it seems like this part should not be done until after 2b, since it references the subvolumes we are about to create. (Side note: the link to the create_folderstructure.sh script is broken.)

The pool is created with sudo mkdir -p /mnt/pool/{users,media}, and the subvolumes are created as:

sudo btrfs subvolume create /mnt/disks/data0/Media
sudo btrfs subvolume create /mnt/disks/data1/Zach

The drive pools are added to fstab by adding lines 33-37 in the attached fstab file. I originally used the lines from the example fstab file instead of the lines on the 2b page, but the example file adds the subvol=Users option, and this caused /mnt/pool/users to fail as well, so I switched to the lines on the 2b page instead.

When sudo mount -a is run, the following error occurs:

mount: /mnt/pool/media: mount(2) system call failed: No such file or directory.
dmesg(1) may have more information after failed mount system call.

/mnt/pool/users seems to be mounting correctly but Media still will not.

I've also attached a number of outputs that I hope are useful / relevant.

Apologies if this is in bad form or format, I have not submitted one of these before.

fstab.txt
lsblk.txt
subvolume_list.txt
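In case it helps others hitting the same mount(2) error: with btrfs it usually means the subvol= name in fstab does not match an existing subvolume (names are case-sensitive). A hypothetical check:

```shell
# List the subvolumes that actually exist on the data disk, then check
# the kernel log, which names the subvolume it failed to find.
sudo btrfs subvolume list /mnt/disks/data0
dmesg | tail
```

Here the subvolume was created as Media, so an fstab entry using subvol=media (or a stray space in the option list) would fail exactly this way.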

Impossible to write in the data directory /var/nextdata

Hi :)
Your docker compose file looks really powerful.
But I've got a little problem with the install:

I think I did something wrong with the $DOCKERDIRECTORY variable; I just put "." to stay in my root directory (and be able to do data backups later).

Do you have any idea? Thanks

NFS slow speed in one direction?

Hi !

thank you so much for this:
https://github.com/zilexa/Homeserver/tree/master/network%20share%20(NFSv4.2)

I have followed the instructions and now have this setup running on two Raspberry Pi arm64 machines.
I have a question: in one direction the speed seems great, while in the other direction it is half what I expected.

pi@desk:/ $ rsync -vrth --progress /Storage_5TB_nfs/temp/ /Storage_4TB/temp/
sending incremental file list
./
test1.mp4
732.95M 100% 44.89MB/s 0:00:15 (xfr#1, to-chk=2/4)
test2.img
52.43M 100% 24.17MB/s 0:00:02 (xfr#2, to-chk=0/4)

sent 785.57M bytes received 57 bytes 42.46M bytes/sec
total size is 795.87M speedup is 1.01

pi@desk:/ $ rsync --progress -vrth /Storage_4TB/temp/ /Storage_5TB_nfs/temp/
sending incremental file list
test5.mp4
732.95M 100% 91.33MB/s 0:00:07 (xfr#1, to-chk=0/4)

sent 733.13M bytes received 35 bytes 86.25M bytes/sec
total size is 795.87M speedup is 1.09

Running the copy from the client (desk),
I get full network speed (1 Gbit) when transferring to the nfs-server.
Retrieving data from the nfs-server to the client (desk), the speed is half that...

I found no reason for this... both hard drives have the same specs and can easily write 120 MB/s each with the dd tool.

If someone can suggest something to fix this, I would appreciate it...

Thanks.
Regards,

Pablo
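For asymmetric NFS throughput like this, the usual knobs are the client's transfer sizes and the server's export mode. A sketch, with values that are assumptions to experiment with rather than the guide's settings:

```
# Client side (/etc/fstab): larger rsize can lift read throughput from the server.
server:/export  /Storage_5TB_nfs  nfs4  rsize=1048576,wsize=1048576,noatime  0  0

# Server side (/etc/exports): async trades crash safety for write speed.
/export  192.168.1.0/24(rw,async,no_subtree_check)
```

After changing mount options, unmount and remount the share before re-testing, since options only take effect at mount time.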

www-custom.ini

You mention the file www-custom.ini in the comment "# Custom settings for php fpm to make nextcloud work. The default settings resulted in the error:", but as far as I can tell, the content of this file is not shown anywhere.

Would you mind sharing the file or pointing me in the right direction in case I've missed it? Thanks :)

Nextcloud questions

@zilexa Thanks for sharing this. I know you haven't been using Nextcloud, but I'd really like to test the architecture the way you've built it; I've had no success for a week. I want to test Nextcloud accessed locally on my private network and also externally.

My DockerCompose

version: "2.0"
services:
##_____________________ Caddy [CLOUD/web-proxy]
  caddy:
    container_name: caddy-proxy
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    restart: always
    networks: 
      - web-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $DOCKERDIR/caddy/caddy_data:/data
      - $DOCKERDIR/caddy/config:/config
    volumes_from: 
      - nextcloud 
    ports:
      - 80:80
      - 443:443
    labels:
      caddy_0: http://adguard.server
      caddy_0.reverse_proxy: host.docker.internal:3000

##
##____________________ NextCloud 
  nextcloud:
    image: nextcloud:fpm-alpine
    container_name: nextcloud
    restart: always
    mem_limit: 2048m
    mem_reservation: 512m
    networks:
      - web-proxy
      - nextcloud
    depends_on:
      - nextcloud-db
      - nextcloud-cache
    environment:
      NEXTCLOUD_DATA_DIR: /var/nextdata
      NEXTCLOUD_TRUSTED_DOMAINS: next.$DOMAIN
      NEXTCLOUD_ADMIN_USER: $ADMIN
      NEXTCLOUD_ADMIN_PASSWORD: $ADMINPW
      POSTGRES_HOST: nextcloud-db
      POSTGRES_DB: nextcloud
      POSTGRES_USER: $USER_INT
      POSTGRES_PASSWORD: $PW_INT
      REDIS_HOST: nextcloud-cache
      #SMTP_HOST: $SMTPHOST
      #SMTP_SECURE: tls
      #SMTP_NAME: $SMTPUSER
      #SMTP_PASSWORD: $SMTPPASS
      #SMTP_FROM_ADDRESS: $EMAIL
      #SMTP_PORT: 587
    volumes:
      - $DOCKERDIR/nextcloud/var/nextdata:/var/nextdata
      - $DOCKERDIR/nextcloud/var/www/html:/var/www/html
      - $DOCKERDIR/nextcloud/var/www/html/config:/var/www/html/config
    labels:
      caddy: next.$DOMAIN
      caddy.tls: $EMAIL
      caddy.file_server: "" 
      caddy.root: "* /var/www/html"
      caddy.php_fastcgi: "{{upstreams 9000}}"
      caddy.php_fastcgi.root: "/var/www/html"
      caddy.php_fastcgi.env: "front_controller_active true"
      caddy.encode: gzip
      caddy.redir_0: "/.well-known/carddav /remote.php/dav 301"
      caddy.redir_1: "/.well-known/caldav /remote.php/dav 301"
      caddy.header.Strict-Transport-Security: '"max-age=15768000;includeSubDomains;preload"' 

##____________________ NextCloud Database
  nextcloud-db:
    container_name: nextcloud-db
    image: postgres:12-alpine
    restart: always
    networks:
      - nextcloud
    environment:
      POSTGRES_USER: $USER_INT
      POSTGRES_PASSWORD: $PW_INT
    volumes:
      - $DOCKERDIR/nextcloud/db:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
##____________________ NextCloud Cache
  nextcloud-cache:
    container_name: nextcloud-cache
    image: redis:alpine
    restart: always
    mem_limit: 2048m
    mem_reservation: 512m
    networks:
      - nextcloud
    command: redis-server --requirepass $PW_INT

  ##______________________ AdGuard Home [PRIVACY/Blocker]
  adguard:
    container_name: adguard
    image: adguard/adguardhome
    restart: always
    network_mode: host
    volumes:
       - $DOCKERDIR/adguardhome/work:/opt/adguardhome/work
       - $DOCKERDIR/adguardhome/conf:/opt/adguardhome/conf
    #labels:
      # plugsy.name: AdGuard
      # plugsy.link: http://adguard.o/
      # plugsy.category: Network
##____________________ Portainer [SYSTEM/Docker]
  portainer:
    container_name: portainer
    image: portainer/portainer-ce
    restart: always
    networks: 
      - web-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $DOCKERDIR/portainer/data:/data
    ports:
      - 9000:9000
    labels:
      caddy: http://docker.server
      caddy.reverse_proxy: "{{upstreams 9000}}"
     # plugsy.name: Docker
     # plugsy.link: http://docker.o/
     # plugsy.category: System
networks:
  web-proxy:
    driver: bridge
  nextcloud:
    driver: bridge

My Problems / My Doubts

Trying to access through my domain, I get the following error in the browser: SSL_ERROR_INTERNAL_ERROR_ALERT.
I created a DDNS through "noip.com", configured my router, and opened ports 443 and 80, but I believe the port forwarding is not working. I opened a ticket with my internet provider to understand the problem.

I will wait for this port problem to be resolved before testing with my domain again.

But I would also like Nextcloud syncing and working on my local private network, using only my LAN, either through an address like "http://nextcloud.o/" or via my server's IP, only within my LAN.

And this is what I can't get working. How can I make it work locally on the private network and on the external network? Is it possible?
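On the SSL_ERROR_INTERNAL_ERROR_ALERT while ports 80/443 are still blocked: Caddy cannot obtain a public certificate, so the TLS handshake fails. For LAN-only testing, one hedged option is Caddy's internal CA instead of the $EMAIL-based ACME setup, e.g. on the nextcloud service:

```yaml
# Hypothetical change for local testing only: replaces the existing
# "caddy.tls: $EMAIL" label. Caddy then serves a certificate from its
# internal CA, which browsers will warn about but can accept manually.
    labels:
      caddy.tls: internal
```

Combined with an AdGuard Home DNS rewrite pointing the hostname at the server IP, that should let LAN clients reach Nextcloud without the port forwards.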

Problem with Unbound in docker-compose

Hi, I have a problem with your docker compose. Here is the error that comes up when I start it:

Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/home/tiberio/docker/unbound/forward-records.conf" to rootfs at "/opt/unbound/etc/unbound/forward-records.conf": mount /home/tiberio/docker/unbound/forward-records.conf:/opt/unbound/etc/unbound/forward-records.conf (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
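The usual cause of this error: the host file did not exist when the container first started, so Docker created a directory at that path. A sketch of the fix, using the path from the error message (adjust to your own):

```shell
# Stop the stack first, then replace the auto-created directory with an
# actual (empty) file so the bind-mount types match.
CONF="/home/tiberio/docker/unbound/forward-records.conf"
sudo rm -rf "$CONF"   # remove the directory Docker created
sudo touch "$CONF"    # create the file the mount expects (then fill in your config)
```

Then start the container again (docker compose up -d unbound). The same applies to any other single-file bind mounts in the compose file.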

hotio.dev issue for various docker images

Thank you so much for this guide! This is one of the best documented github projects I've ever stumbled across.

In case anyone runs into the issue I did: it seems that all Docker images from cr.hotio.dev/hotio/* have migrated to ghcr.io/hotio/*. Simply changing the image path in the docker-compose.yml file resolved all the errors I encountered regarding this.

Perhaps I missed something, but thought I would share this in case anyone else runs into this issue as I did.
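For anyone with several hotio images in one file, the rename can be scripted. A one-liner sketch, assuming GNU sed and a docker-compose.yml in the current directory:

```shell
# Rewrite every cr.hotio.dev/hotio/ image reference to ghcr.io/hotio/
# in place; '#' is used as the sed delimiter to avoid escaping slashes.
sed -i 's#cr.hotio.dev/hotio/#ghcr.io/hotio/#g' docker-compose.yml
```

Back up the file first (or drop -i to preview the output) before editing in place.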
