Intern:Cluster/xora


Cluster/xora was a first test of a cluster of servers running Proxmox VE.

The test was later repeated as Intern:ora as part of Server/Aktualisierung/2019.

Test

The basic aim is to test a cluster of servers running Proxmox VE that is as resilient to failures as possible.

To be tested:

  • data backup
    • replication
  • failover

Designation

It is the artificially created designation for the group of several servers.

Cluster

A cluster of servers running Proxmox VE probably requires at least 3 servers (so that a majority, i.e. quorum, remains available if a single node fails).

The cluster (theoretically) includes

at least
for completeness also
possibly also as a supplement

.

Devices

Individual devices

Device 27090 (cora)
  IPv4 (OS): 141.56.51.123
  DNS (A) (OS): cora.stura-dresden.de
  WUI (OS): https://cora.stura-dresden.de:8006/
  Mail (OS): cora@stura.htw-dresden.de
  Setup assistance ("Geburtshilfe"): James
  Start (duration): 3 min
  IPv4 (IPMI): 141.56.51.113
  DNS (A) (IPMI): irmc.cora.stura-dresden.de
  WUI (IPMI): https://irmc.cora.stura-dresden.de/
  Mail (IPMI): irmc.cora@stura.htw-dresden.de
  Network interfaces: M 2 1 (X)
  Mass storage: 2× 3.5″, 2× 2 TB
  Ownership: bsd.services:user:vater:hw#rx300_s6_2709_0

Device 27091 (dora)
  IPv4 (OS): 141.56.51.124
  DNS (A) (OS): dora.stura-dresden.de
  WUI (OS): https://dora.stura-dresden.de:8006/
  Mail (OS): dora@stura.htw-dresden.de
  Setup assistance ("Geburtshilfe"): Fullforce
  Start (duration): 3 min
  IPv4 (IPMI): 141.56.51.114
  DNS (A) (IPMI): irmc.dora.stura-dresden.de
  WUI (IPMI): https://irmc.dora.stura-dresden.de/
  Mail (IPMI): irmc.dora@stura.htw-dresden.de
  Network interfaces: M 2 1 (X)
  Mass storage: 2× 3.5″, 2× 2 TB
  Ownership: bsd.services:user:vater:hw#rx300_s6_2709_1

Device 8529 (lora)
  IPv4 (OS): 141.56.51.127
  DNS (A) (OS): lora.stura-dresden.de
  WUI (OS): https://lora.stura-dresden.de:8006/
  Mail (OS): lora@stura.htw-dresden.de
  Start (duration): min
  IPv4 (IPMI): 141.56.51.117
  DNS (A) (IPMI): irmc.lora.stura-dresden.de
  WUI (IPMI): https://irmc.lora.stura-dresden.de/
  Mail (IPMI): irmc.lora@stura.htw-dresden.de
  Network interfaces: M 2 1
  Mass storage: 2× 3.5″, 2× 2 TB
  Ownership: bsd.services:user:vater:hw#rx300_s6_8529

Device 8 (nora)
  IPv4 (OS): 141.56.51.128
  DNS (A) (OS): nora.stura-dresden.de
  WUI (OS): https://nora.stura-dresden.de:8006/
  Mail (OS): nora@stura.htw-dresden.de
  Start (duration): 2 min
  IPv4 (IPMI): 141.56.51.118
  DNS (A) (IPMI): irmc.nora.stura-dresden.de
  WUI (IPMI): https://irmc.nora.stura-dresden.de/
  Mail (IPMI): irmc.nora@stura.htw-dresden.de
  Network interfaces: M 2 1 (X)
  Mass storage: 2× 3.5″, 2× 2 TB
  Ownership: StuRa (srs3008)

Device 5100 (zora)
  IPv4 (OS): 141.56.51.129
  DNS (A) (OS): zora.stura-dresden.de
  WUI (OS): https://zora.stura-dresden.de:8006/
  Mail (OS): zora@stura.htw-dresden.de
  Start (duration): min
  IPv4 (IPMI): 141.56.51.119
  DNS (A) (IPMI): drac.cora.stura-dresden.de
  WUI (IPMI): https://drac.cora.stura-dresden.de/
  Mail (IPMI): drac.zora@stura.htw-dresden.de
  Network interfaces: M 1 2
  Mass storage: 4× 3.5″, 4× 2 TB
  Ownership: bsd.services:user:vater:hw#dell_poweredge_r510

BIOS

F2

Boot
everything else
USB KEY: …
PCI SCSI: #0100 ID000 LN0 HGST H
PCI SCSI: #0100 ID004 LN0 HGST H

Proxmox VE

Operating system installation

Preparing the operating system installation

Performing the operating system installation

Install Proxmox VE
Loading Proxmox Installer ...
Loading initial ramdisk ...
Proxmox startup
End User License Agreement (EULA)
I agree
Proxmox Virtualization Environment (PVE)
Options
Filesystem
zfs (RAID1) (instead of the default ext4)
Disk Setup
Harddisk 0
/dev/sda (1863GB, HUS726020ALS214)
Harddisk 1
/dev/sdb (1863GB, HUS726020ALS214)
Harddisk 2
-- do not use --
Advanced Options
ashift
12
compress
on
checksum
on
copies
1
OK
Target
zfs (RAID1)
Next
Location and Time Zone selection
Country
Germany
Time zone
Europe/Berlin
Keyboard Layout
German
Next
Administration Password and E-Mail Address
Password
8
Confirm
8
E-Mail
see #Individual devices
Next
Management Network Configuration
Management Interface
enp8s0f0 - … (igb)
Hostname (FQDN)
see #Individual devices
IP Address
see #Individual devices
Netmask
255.255.255.0
Gateway
141.56.51.254
DNS Server
141.56.1.1
Next
Installation successful!
Reboot

Follow-up to the operating system installation

first update (for a CLI equivalent, see the sketch after this list)
  • Update (WUI)
    • Refresh (WUI)
      Restart (WUI)
    • Upgrade (WUI)
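
The same update can also be done on the shell; a minimal sketch using the standard Debian/PVE commands:

apt update          # corresponds to Refresh (WUI)
apt dist-upgrade    # corresponds to Upgrade (WUI)
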
(optional) inspect ZFS
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0

errors: No known data errors
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             9.40G  1.75T   104K  /rpool
rpool/ROOT         919M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1   919M  1.75T   919M  /
rpool/data          96K  1.75T    96K  /rpool/data
rpool/swap        8.50G  1.75T    56K  -
(optional) inspect the partitioning
fdisk -l /dev/sd{a,b}
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A3CA01C-D0CE-4750-A26A-C07C1541EF1D

Device          Start        End    Sectors  Size Type
/dev/sda1          34       2047       2014 1007K BIOS boot
/dev/sda2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012750 3907029134      16385    8M Solaris reserved 1


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C0D3B0CA-C966-4B00-B367-EEDBD04872F7

Device          Start        End    Sectors  Size Type
/dev/sdb1          34       2047       2014 1007K BIOS boot
/dev/sdb2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012750 3907029134      16385    8M Solaris reserved 1
Backing up the initial state of PVE (including the update just applied)

Create a snapshot of the entire pool

zfs snapshot -r rpool@fresh-installed-pve-and-updated

Beyond that, it may be worth considering a backup of /dev/sd{a,b}1 as well.
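
A sketch of how the snapshot could later be inspected or rolled back, plus a hypothetical raw copy of the BIOS boot partitions (the target paths are placeholders only):

zfs list -t snapshot
zfs rollback rpool/ROOT/pve-1@fresh-installed-pve-and-updated    # only works while this is the most recent snapshot of that dataset
dd if=/dev/sda1 of=/root/sda1-bios-boot.img bs=1M                # hypothetical backup of the BIOS boot partition
dd if=/dev/sdb1 of=/root/sdb1-bios-boot.img bs=1M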

Adjusting the package sources

See also
Server/Proxmox#sources.list
Adding the pve-no-subscription repository

the quick way:

echo 'deb http://download.proxmox.com/debian/pve stretch pve-no-subscription' > /etc/apt/sources.list.d/pve-download.list && apt update

cat /etc/apt/sources.list.d/pve-enterprise.list

cat: /etc/apt/sources.list.d/pve-enterprise.list: No such file or directory

$EDITOR /etc/apt/sources.list.d/pve-enterprise.list


cat /etc/apt/sources.list.d/pve-enterprise.list

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
Removing the pve-enterprise repository

cat /etc/apt/sources.list.d/pve-enterprise.list

deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

$EDITOR /etc/apt/sources.list.d/pve-enterprise.list


cat /etc/apt/sources.list.d/pve-enterprise.list

####deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
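
Afterwards it can be checked that apt no longer uses the enterprise repository, for example:

apt update    # should no longer attempt to fetch from enterprise.proxmox.com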

Creating the cluster

Optional inspection before creating the cluster

cora dora nora
less /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.123
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.124
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.128
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

iface eno2 inet manual
less /etc/hosts
127.0.0.1 localhost.localdomain localhost
141.56.51.123 cora.stura-dresden.de cora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.124 dora.stura-dresden.de dora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.128 nora.stura-dresden.de nora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Creating the cluster xora

on one of the servers that are to belong to the cluster

carried out on #8 (nora)
(alternatively) graphical interface / command line
  • Datacenter -> Cluster -> Create Cluster
    Create Cluster
    Cluster Name
    xora
    Ring 0 Address
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
TASK OK

pvecm create xora

Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
less /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 1
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
pvecm status
Quorum information
------------------
Date:             Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 141.56.51.128 (local)

Creating an internal network for the Proxmox cluster

We use the freely chosen network 10.10.10.0/24.

cora dora nora

graphical interface (with the necessary restart)

less /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.123
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.123
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.124
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.124
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

auto eno2
iface eno2 inet static
        address  10.10.10.128
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.128
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

Registering the other nodes independently of DNS

cora dora nora
less /etc/hosts

####    members of the cluster xora
10.10.10.123 cora.xora.stura-dresden.de cora.xora
10.10.10.124 dora.xora.stura-dresden.de dora.xora
10.10.10.128 nora.xora.stura-dresden.de nora.xora
# The following lines are desirable for IPv6 capable hosts
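
To verify the entries, the nodes can be reached via the cluster-internal names, for example:

ping -c 1 cora.xora
ping -c 1 dora.xora
ping -c 1 nora.xora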

Adding the other servers

cora dora nora
pvecm add nora.xora
Please enter superuser (root) password for 'nora.xora':
                                                       Password for root@nora.xora: ********
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462961.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'cora' to cluster.
pvecm add nora.xora
Please enter superuser (root) password for 'nora.xora':
                                                       Password for root@nora.xora: ********
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462994.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'dora' to cluster.
/etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 141.56.51.123
  }
  node {
    name: dora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 141.56.51.124
  }
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 3
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

(also following https://pve.proxmox.com/wiki/Separate_Cluster_Network)
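
The edit itself can be applied roughly as described there: work on a copy, increase config_version, then move the copy into place (a sketch; the editor is arbitrary):

cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
$EDITOR /etc/pve/corosync.conf.new    # switch ring0_addr/bindnetaddr to the *.xora names and bump config_version
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf

The edited configuration then looks as follows.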

/etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: cora.xora
  }
  node {
    name: dora
    nodeid: 4
    quorum_votes: 1
    ring0_addr: dora.xora
  }
  node {
    name: nora
    nodeid: 8
    quorum_votes: 1
    ring0_addr: nora.xora
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 4
  interface {
    bindnetaddr: nora.xora
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

Checking the cluster

cora dora nora
pvecm status
Quorum information
------------------
Date:             Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000008
Ring ID:          3/52
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 10.10.10.123
0x00000004          1 10.10.10.124
0x00000008          1 10.10.10.128 (local)
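
In addition, the membership can be listed per node, for example:

pvecm nodes    # lists node IDs, votes and names of all cluster members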

Creating the Ceph cluster

Creating an internal network for the Ceph cluster

We use the freely chosen network 10.10.11.0/24.

  • Network interfaces
    10.10.11.0/24
    10.10.11.123/24
    10.10.11.124/24
    10.10.11.128/24

Restart (because of the network interfaces)


Test connecting (to the other two servers in the cluster) via ssh

ssh root@10.10.11.123
ssh root@10.10.11.124
ssh root@10.10.11.128

Installing Ceph

pveceph install

#Adding the pve-no-subscription repository

pveceph install

Initializing Ceph

According to the documentation 10.10.10.0/24 should be used, but that network was (accidentally) already used for #Creating an internal network for the Proxmox cluster.

pveceph init --network 10.10.11.0/24

WUI

rados_connect failed - No such file or directory (500)
pveceph createmon
creating /etc/pve/priv/ceph.client.admin.keyring
monmaptool: monmap file /tmp/monmap
monmaptool: generated fsid 4571c8c1-89c2-44e5-8527-247470f74809
epoch 0
fsid 4571c8c1-89c2-44e5-8527-247470f74809
last_changed 2018-10-02 13:30:43.174771
created 2018-10-02 13:30:43.174771
0: 10.10.11.128:6789/0 mon.nora
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@nora.service -> /lib/systemd/system/ceph-mon@.service.
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-nora'
creating keys for 'mgr.nora'
setting owner for directory
enabling service 'ceph-mgr@nora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@nora.service'
less /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
less /etc/pve/ceph.conf
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 10.10.11.0/24
         fsid = 5df2a0f7-2362-488e-9c5a-4b9ed2a16bfe
         keyring = /etc/pve/priv/$cluster.$name.keyring
         mon allow pool delete = true
         osd journal size = 5120
         osd pool default min size = 2
         osd pool default size = 3
         public network = 10.10.11.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.nora]
         host = nora
         mon addr = 10.10.11.128:6789
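
Whether the monitor is running and which networks Ceph uses can be checked, for example, with:

ceph -s           # overall Ceph status (health, monitors, managers, OSDs)
pveceph status    # roughly the same information via the PVE wrapper, if available in this PVE version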

Adding the other servers to the Ceph pool

  • nora -> Ceph -> Monitor -> Create
    Create Ceph Monitor/Manager
    Host
    cora
Task viewer: Ceph Monitor mon.cora - Create

WUI

Output
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@cora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-cora'
creating keys for 'mgr.cora'
setting owner for directory
enabling service 'ceph-mgr@cora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@cora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@cora.service'
TASK OK
  • nora -> Ceph -> Monitor -> Create
    Create Ceph Monitor/Manager
    Host
    dora
    Create
Task viewer: Ceph Monitor mon.dora - Create

WUI

Output
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@dora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-dora'
creating keys for 'mgr.dora'
setting owner for directory
enabling service 'ceph-mgr@dora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@dora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@dora.service'
TASK OK

Creating the pool for the Ceph cluster

WUI

  • nora -> Ceph -> Pools -> Create
    Create Ceph Pool
    Name
    xora
    Size
    3
    Min. Size
    2
    Crush Rule
    replicated_rule
    pg_num
    64
    Add Storage
    X
    Create
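
Alternatively, the pool can be created on the command line; a sketch, assuming the pveceph options (createpool, --size, --min_size, --pg_num, --add_storages) of the installed PVE version behave as named:

pveceph createpool xora --size 3 --min_size 2 --pg_num 64 --add_storages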

Containerization

Proxmox CT

Proxmox CT creation

Proxmox CT creation (all)
Create CT
General
Node
Hostname
(stage) software_purpose_organisation
(stage) is for example test or dev if the container is not supposed to be productive.
software is for example plone or openldap. (The software, not the service! So not cms or ldap, and not www or acc!)
purpose is for example website-2010 or nothilfe-2020.
organisation is for example stura-htw-dresden or kss-sachsen.
Unprivileged container
[X]
Nesting
[?]
Password
depending on the lifetime
for projects at least 8
Template
Storage
cephfs
Type
rdb
Template
Root Disk
Storage
storage
Disk size (GiB)
as needed
CPU
Cores
2, or as needed
Using only one core or more than two cores must be justified.
Memory
as needed
Memory (MiB)
as needed
Swap (MiB)
as needed
The size should match the size used for Memory (MiB). (Using more or less must be justified.)
The size should be at most half of the size used for (Root Disk ->) Disk size (GiB).
Network
Bridge
vmbr1
IPv4
Static
IPv4/CIDR
141.56.51.321/24
Gateway (IPv4)
141.56.51.254
321 stands for the "usable" IPv4 address, which must be entered immediately at Intern:Server#Verwendung von IP-Adressen.
DNS
DNS domain
DNS servers
141.56.1.1
Confirm
Start after created
[ ]
Finish
Datacenter (cluster)
HA
Resources
Add
VM
110
110 is the ID of the instance within Proxmox.
Group
HA_cluster
Add
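
The same creation can also be scripted with pct and ha-manager; a minimal sketch with placeholder values (the template file name, hostname, sizes and the 321 address are examples only):

pct create 110 cephfs:vztmpl/debian-9.0-standard_9.7-1_amd64.tar.gz \
    --hostname dev-plone-website-2010-stura-htw-dresden \
    --unprivileged 1 \
    --cores 2 \
    --memory 2048 \
    --swap 2048 \
    --rootfs xora:8 \
    --net0 name=eth0,bridge=vmbr1,ip=141.56.51.321/24,gw=141.56.51.254 \
    --nameserver 141.56.1.1
    # a root password or SSH key can additionally be set via --password / --ssh-public-keys
ha-manager add ct:110 --group HA_cluster    # the HA group must already exist (Datacenter -> HA -> Groups)
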
Proxmox CT creation TurnKey
Create CT
General
Unprivileged container
[ ]
turnkey-init

Proxmox CT management

Listing all container configuration files on the respective node

ls /etc/pve/lxc/
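
The containers themselves can be listed per node, for example:

pct list    # ID, status and name of all containers on this node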

Virtualization

Proxmox VM

High availability

High availability CT

ping 141.56.51.321
PING 141.56.51.321 (141.56.51.321) 56(84) bytes of data.
64 bytes from 141.56.51.321: icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from 141.56.51.321: icmp_seq=2 ttl=64 time=0.213 ms
64 bytes from 141.56.51.321: icmp_seq=3 ttl=64 time=0.286 ms
From 141.56.51.456 icmp_seq=4 Destination Host Unreachable
From 141.56.51.456 icmp_seq=5 Destination Host Unreachable

From 141.56.51.456 icmp_seq=78 Destination Host Unreachable
From 141.56.51.456 icmp_seq=79 Destination Host Unreachable
64 bytes from 141.56.51.321: icmp_seq=80 ttl=64 time=2609 ms
64 bytes from 141.56.51.321: icmp_seq=81 ttl=64 time=1585 ms
64 bytes from 141.56.51.321: icmp_seq=82 ttl=64 time=561 ms
64 bytes from 141.56.51.321: icmp_seq=83 ttl=64 time=0.260 ms
64 bytes from 141.56.51.321: icmp_seq=84 ttl=64 time=0.295 ms
64 bytes from 141.56.51.157: icmp_seq=85 ttl=64 time=0.200 ms
64 bytes from 141.56.51.157: icmp_seq=86 ttl=64 time=0.274 ms
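
The state of the HA resources during such a failover can be watched, for example, with:

ha-manager status    # shows quorum, the current master and the state of each HA resource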

High availability errors

Error with high availability when using ZFS instead of RBD
2020-04-29 09:14:46 starting migration of CT 110 to node 'n1' (10.1.0.31)
2020-04-29 09:14:46 found local volume 'local-zfs:subvol-110-disk-0' (in current VM config)
cannot open 'rpool/data/subvol-110-disk-0': dataset does not exist
usage:
	snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
For the property list, run: zfs set|get
2020-04-29 09:14:46 ERROR: zfs error: For the delegated permission list, run: zfs allow|unallow
2020-04-29 09:14:46 aborting phase 1 - cleanup resources
2020-04-29 09:14:46 ERROR: found stale volume copy 'local-zfs:subvol-110-disk-0' on node 'n1'
2020-04-29 09:14:46 start final cleanup
2020-04-29 09:14:46 ERROR: migration aborted (duration 00:00:01): zfs error: For the delegated permission list, run: zfs allow|unallow
TASK ERROR: migration aborted
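
A plausible remedy (not verified here): HA migration requires the container's volumes to be on shared storage, so the root disk would have to be moved from local-zfs to the Ceph pool, for example via the WUI (CT -> Resources -> Root Disk -> Move Volume, or similar depending on the version) or, if the installed PVE version provides it, on the shell:

pct move_volume 110 rootfs xora    # hypothetical: move the root disk of CT 110 to the shared storage 'xora'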

Adjustments for production operation

  • Backup limit raised to 10.
  • Preparation for the Plone 5 migration (101) Plone 5

See also