vault backup: 2024-05-19 10:58:07

IT/Deployment docs/Deploying Hydra.md

# OOBE

- iDRAC prefers DHCP if available; the default IP address for iDRAC is 192.168.0.120
- The server shipped with RAID 5 configured across the 4 drives. I deleted the RAID configuration by going to Storage -> Virtual Disks -> Manage
- The virtual disk failed to delete the first and second times deletion was attempted
- The problem resolved itself after a reboot
## Installing XCP-NG

I do not have a monitor with VGA output, and you need an iDRAC Enterprise license to install an OS through iDRAC, so I'm installing with the unattended installer via an [answer file](https://xcp-ng.org/docs/answerfile.html#answer-file-values).

- I had trouble mounting the .iso file in macOS, so I ended up extracting it with

```
7zz x -tiso -y xcp-ng-8.2.1.iso -oxcp-ng-8.2.1
```
The `install.img` extraction instructions were a bit hard for me to understand; here's what I figured out (a short sketch follows the list).

- `iso/` is the directory you extracted the ISO into
- You make the `install` directory in the `iso` dir.
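
A minimal sketch of that layout, assuming `install.img` is a compressed cpio archive (check with `file` first, and swap `zcat` for `bzcat` if it reports bzip2):

```
cd xcp-ng-8.2.1            # the directory the ISO was extracted into ("iso/" above)
mkdir install
cd install
file ../install.img        # confirm the compression type
zcat ../install.img | cpio -idm
```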
The ISO packing instructions are *super* dated; you'll want `xorrisofs` instead of whatever they have.
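
A rough sketch of the repack with `xorrisofs`; the boot image paths and volume label below are assumptions, so check them against your extracted tree before running:

```
xorrisofs \
  -o xcp-ng-8.2.1-custom.iso \
  -V "XCP-NG 8.2.1" \
  -r -J \
  -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -eltorito-alt-boot -e boot/efiboot.img -no-emul-boot \
  xcp-ng-8.2.1/
```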
I never got the unattended install to work; I believe it's because I was creating the ISO on macOS.

- The install was completed with a trial license of iDRAC Enterprise and the virtual console.

Initially the installer didn't detect any drives. That's because the RAID controller was in RAID mode but had no virtual disks created; this was fixed by changing the controller mode to HBA.
# Installing Xen Orchestra

Xen Orchestra is a popular and extremely powerful dashboard for the Xen hypervisor.

- Run `bash -c "$(wget -qO- https://xoa.io/deploy)"` to spin up a VM running Xen Orchestra, then sign in and click through the rest of the setup, yada yada yada.
- Make an account on the Xen Orchestra website; you can leave the company name blank
- Register your XO instance

I had an issue where I was unable to upload ISOs to an ISO SR; this was fixed by updating my XO installation, as it was out of date.

IT/Deployment docs/Deploying PiVPN to a debian instance.md
## Preparation

The system was fully updated:

```
sudo apt update
sudo apt upgrade
```

A user was created as a designated PiVPN user. This is not strictly necessary, but I feel it is best. The home dir is set to `/opt/pivpn` because this server's schema designates a directory in `/opt` for each service.

```
sudo adduser pivpn --home=/opt/pivpn
```
## Deployment

The installation command was copied from the [PiVPN website](https://pivpn.io/):

```
curl -L https://install.pivpn.io | bash
```
- `eth0` was selected as the IPv4 and IPv6 interface
- Yes was selected for the DHCP reservation (set via the router's web interface)
- The user created earlier was selected as the designated PiVPN user; this is not strictly necessary, any user will do
- WireGuard was selected as the VPN, although the process is very similar for OpenVPN
- The default port is likely fine; remember to open the port. WireGuard is strictly UDP, while OpenVPN uses both TCP and UDP
- The DNS server selection is personal preference; this is where I selected my Pi-hole
- The access method was set to the network's WAN IP; I have never used the other options
- Unattended security patches were enabled
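
Adding a client afterwards is quick; a sketch (the client name here is just an example, and `pivpn -qr` applies to WireGuard profiles):

```
pivpn add          # prompts for a client/profile name, e.g. "laptop"
pivpn -qr laptop   # print a QR code for the WireGuard mobile app
```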
IT/Deployment docs/Deploying Syncthing.md

https://docs.syncthing.net/intro/getting-started.html#getting-started

Syncthing was installed with `sudo apt install syncthing` after all packages and lists were updated.

A syncthing user and group were created to run the `systemd` service, and they were given ownership of the `/opt/syncthing` directory.

The systemd service was edited to include the `--home=/opt/syncthing` path, and the service was started after `sudo systemctl daemon-reload`.
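
A sketch of those steps, assuming Debian's packaged `syncthing@.service` unit (adjust the unit name if yours differs):

```
sudo adduser --system --group --home /opt/syncthing syncthing
sudo chown -R syncthing:syncthing /opt/syncthing
sudo systemctl edit --full syncthing@syncthing.service   # append --home=/opt/syncthing to ExecStart
sudo systemctl daemon-reload
sudo systemctl enable --now syncthing@syncthing.service
```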
The config was updated to replace the default GUI listen address (127.0.0.1) with 0.0.0.0 to make the GUI accessible to other computers.
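
A quick check from another machine that the GUI is reachable (assuming the default GUI port of 8384 and that the firewall allows it):

```
curl -I http://[server-ip]:8384
```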
https://docs.syncthing.net/users/firewall.html#firewall-setup

#documentation #homelab

Official docs can be found on the [github page](https://github.com/pi-hole/docker-pi-hole) and the [home page](https://docs.pi-hole.net/)
## Preparation

- The system was entirely updated with `sudo apt update` and `sudo apt upgrade`.
- `docker` and `docker-compose` were installed via `apt`
- It was noted that `docker.service` was not running, with an error similar to

```
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.2 (nf_tables): CHAIN_ADD failed
```

- This was resolved by running the commands below, as detailed [here](https://forums.docker.com/t/failing-to-start-dockerd-failed-to-create-nat-chain-docker/78269)

```
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```
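
After switching to the legacy `iptables` backend, the daemon needs to be started again (a reboot also works):

```
sudo systemctl restart docker
sudo systemctl status docker
```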
## Deployment

- A `docker-compose.yml` file was created with the contents:

```
version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    # start on boot and restart when crashed
    restart: unless-stopped
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      # set the web dashboard to have no passwd
      WEBPASSWORD: ''
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
```
The time zone was updated to the correct timezone, a configuration option was added to make the container start automatically, and the container was started with the command below. (Note: if you are not using a `docker` user, you will need to add your user to the `docker` group; this can be done with `sudo usermod -aG docker [user]`.)

```
docker-compose -f docker-compose.yml up -d
```
You can check the status of all docker containers with `docker ps`, and get detailed logs for the pihole container with `docker logs pihole`.

Test whether the pihole is working by changing a system's DNS server to the pihole's IP, then going to `http://[ip]/admin/` or `http://pi.hole`.
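
A direct DNS query also works as a test; with `dig`, substitute your pihole's address for `[ip]`:

```
dig @[ip] example.com
```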
## Troubleshooting

- Restart the server:

```
sudo reboot
```

- Check if the container is running:

```
docker ps
```

- Check the logs:

```
docker logs pihole
```

- See if the container is listening on port 53 (the grep can be omitted to check all services):

```
sudo ss -tulpn | grep 53
```