November 19

Linux: Kubernetes setup on Fedora that actually works

Fedora (Single Node)

        Prerequisites
        Instructions
        Support Level

Prerequisites

    You need 2 or more machines with Fedora installed. These can be either bare metal machines or virtual machines.

Instructions

This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc…

This guide will only get ONE node (previously called a minion) working. Multiple nodes require a functional networking configuration done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.

The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the Kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run etcd (not needed if etcd runs on a different host, but this guide assumes that etcd and the Kubernetes master run on the same host). The remaining host, fed-node, will be the node and run the kubelet, proxy, and docker.
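The split described in the paragraph above can be summarized in a small shell sketch (the helper function is illustrative only, not part of any Kubernetes tooling):

```shell
# Illustrative only: which systemd units belong on which host.
master_units="etcd kube-apiserver kube-controller-manager kube-scheduler"
node_units="docker kubelet kube-proxy"

units_for() {
  # Print the units a host of the given role ("master" or "node") should run.
  case "$1" in
    master) echo "$master_units" ;;
    node)   echo "$node_units" ;;
    *)      echo "unknown role: $1" >&2; return 1 ;;
  esac
}

units_for master   # etcd kube-apiserver kube-controller-manager kube-scheduler
units_for node     # docker kubelet kube-proxy
```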

System Information:

Hosts:

fed-master = 192.168.121.9
fed-node = 192.168.121.65

Prepare the hosts:

    Install Kubernetes on all hosts - fed-{master,node}. This will also pull in docker. Also install etcd on fed-master. This guide has been tested with Kubernetes-0.18 and beyond.
    Running on AWS EC2 with RHEL 7.2, you need to enable “extras” repository for yum by editing /etc/yum.repos.d/redhat-rhui.repo and changing the enable=0 to enable=1 for extras.

dnf -y install kubernetes

    Install etcd

dnf -y install etcd

    Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.

echo "192.168.121.9    fed-master
192.168.121.65    fed-node" >> /etc/hosts

    Edit /etc/kubernetes/config (which should be the same on all hosts) to set the name of the master server:

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://fed-master:8080"

    Disable the firewall on both the master and node, as Docker does not play well with other firewall rule managers. Please note that iptables.service does not exist on the default Fedora Server install.

systemctl mask firewalld.service
systemctl stop firewalld.service

systemctl disable iptables.service
systemctl stop iptables.service

Configure the Kubernetes services on the master.

    Edit /etc/kubernetes/apiserver to appear as follows. The service-cluster-ip-range must be an unused block of addresses, not in use anywhere else on your network. The addresses do not need to be routed or assigned to anything.

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
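Since the service range must not collide with any real addresses, a quick way to sanity-check a candidate range is a bash sketch like the following (ip_to_int and cidr_contains are hypothetical helpers written for this post, not Kubernetes tools):

```shell
# Hypothetical helpers to check whether an IP falls inside a CIDR block.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_contains() {
  # cidr_contains 10.254.0.0/16 10.254.1.1  -> exit 0 (inside)
  local net=${1%/*} bits=${1#*/} ip=$2
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

# The node IPs from this guide must NOT fall inside the service range:
cidr_contains 10.254.0.0/16 192.168.121.9 && echo "overlap!" || echo "ok"
```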

    Edit /etc/etcd/etcd.conf to let etcd listen on all available IPs instead of 127.0.0.1. If you have not done this, you might see an error such as “connection refused”.

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

    Start the appropriate services on master:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Configure the Kubernetes services on the node.

We need to configure the kubelet on the node. On newer Kubernetes releases the kubelet will refuse to start while swap is enabled, so disable swap on the node first:

swapoff -a

    Edit /etc/kubernetes/kubelet to appear as such:

###
# Kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"

# location of the api-server
KUBELET_ARGS="--cgroup-driver=systemd --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml"

    Edit /etc/kubernetes/master-kubeconfig.yaml to contain the following information:

kind: Config
clusters:
- name: local
  cluster:
    server: http://fed-master:8080
users:
- name: kubelet
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
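A quick sanity check on the file can be sketched in shell (ctx_ok is a made-up helper; it only verifies that current-context names a context actually defined in the file):

```shell
# Sketch: verify that current-context in a kubeconfig matches a defined
# context name. Purely illustrative.
ctx_ok() {
  local file=$1 want
  want=$(awk '$1 == "current-context:" { print $2 }' "$file")
  [ -n "$want" ] && grep -q "name: $want" "$file"
}

# Usage on the node:
#   ctx_ok /etc/kubernetes/master-kubeconfig.yaml && echo "context ok"
```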

    Start the appropriate services on the node (fed-node).

for SERVICES in kube-proxy kubelet docker; do 
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES 
done

    Check that the cluster can now see fed-node from fed-master and that its status changes to Ready.

kubectl get nodes
NAME            STATUS      AGE
fed-node        Ready       4h
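Registration can take a moment; a simple wait loop can be sketched as follows (node_ready is a hypothetical helper that parses kubectl get nodes output from stdin):

```shell
# Hypothetical helper: exit 0 if the named node reports Ready in
# `kubectl get nodes` output read from stdin.
node_ready() {
  awk -v n="$1" '$1 == n && $2 == "Ready" { found = 1 } END { exit !found }'
}

# On a live cluster you could poll until the node is Ready:
#   until kubectl get nodes | node_ready fed-node; do sleep 5; done
```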

    Deletion of nodes:

To delete fed-node from your Kubernetes cluster, run the following on fed-master (shown for reference only; do not actually run it now):

kubectl delete -f ./node.json

You should be finished!

The cluster should be running! Launch a test pod.

Support Level
Category: Linux | Comments Off on Linux: Kubernetes setup on fedora that actually works
November 19

Linux: A server build with cifs share example

Cloned the server from a template

On the server:

Setup repository with satellite:
subscription-manager register --org="ORG-NAME" --activationkey="KEY"
subscription-manager subscribe --auto

Activated puppet:
Puppet agent setup from client
puppet agent -tv

Puppet agent setup from server
puppet cert sign --all


Setup SNMP using Satellite:
yum install -y net-snmp net-snmp-utils net-snmp-libs net-snmp-devel

mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.old
echo "snmpd: ALL" >> /etc/hosts.deny
echo "snmpd: snmpserver1.domainname.com" >> /etc/hosts.allow
echo "snmpd: snmpserver2.domainname.com" >> /etc/hosts.allow
echo "snmpd: snmpserver3.domainname.com" >> /etc/hosts.allow
echo "snmpd: lansweeper.domainname.com" >> /etc/hosts.allow
echo "snmpd: snmpserver4.domainname.com" >> /etc/hosts.allow
echo "sysname `hostname`" >> /etc/snmp/snmpd.conf
echo "syslocation HQ Server Room" >> /etc/snmp/snmpd.conf
echo "syscontact [email protected]" >> /etc/snmp/snmpd.conf
echo "rocommunity communityname 172.22.0.0/24" >> /etc/snmp/snmpd.conf
systemctl enable snmpd.service
systemctl restart snmpd.service
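The snmpd.conf portion of that setup can also be written as a here-document, which is easier to maintain (write_snmpd_conf is a made-up function name; the values are the ones used above):

```shell
# Illustrative: write the snmpd.conf fragment used above via a heredoc.
write_snmpd_conf() {
  local out=$1
  cat > "$out" <<EOF
sysname $(hostname)
syslocation HQ Server Room
syscontact [email protected]
rocommunity communityname 172.22.0.0/24
EOF
}

# Usage: write_snmpd_conf /etc/snmp/snmpd.conf
```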

Install LANSWEEPER using Satellite:
cd /root ; wget http://satelliteservername/pub/LsAgent-linux-x64.run ; chmod 744 LsAgent-linux-x64.run

cd /root ; ./LsAgent-linux-x64.run --server lansweeper.domainname.com --port 9524 --mode unattended

Installed Mcafee using Satellite:
firewall-cmd --permanent --add-port=8081/tcp ; firewall-cmd --add-port=8081/tcp ; firewall-cmd --reload

cd Downloads/ ; wget http://satelliteservername/pub/AgentforLinux5.5.zip ; unzip AgentforLinux5.5.zip ; sh installrpm.sh -i ; rm -f inst*

cd Downloads/ ; wget http://satelliteservername/pub/ISecTP-10.5.3-1650-Release-ePO.zip ; unzip ISecTP-10.5.3-1650-Release-ePO.zip ; ./isectp-setup ; rm -f isectp* ; rm -f PkgCatalog.z


Added a second scsi controller and a 60GB drive

Drive configuration:

yum install cifs-utils

pvcreate /dev/sdb
vgcreate VG_DATA /dev/sdb
lvcreate -l 100%FREE -n lv_data VG_DATA

mkfs.xfs /dev/VG_DATA/lv_data

mkdir /data
chown -R localuser1:localuser1 /data/


Remote share setup:
useradd -u 1110 localuser2
useradd -u 1120 localuser3


mkdir /share1
chown -R localuser2:localuser2 /share1/

mkdir /mnt/data1
chown -R localuser3:localuser1 /mnt/data1/

mkdir /root/creds

/root/creds/localuser2_creds contains:
username=localuser2
password=assignedpassword

/root/creds/localuser3_creds contains:
username=localuser3
password=assignedpassword


chmod 0600 /root/creds/localuser2_creds
chmod 0600 /root/creds/localuser3_creds


Added the following to fstab:

/dev/mapper/VG_DATA-lv_data  /data          xfs     defaults        1 1
//fileserver1/share1  /share1 cifs credentials=/root/creds/localuser2_creds,vers=2.0,uid=1110,gid=1110,file_mode=0773,dir_mode=0773 0 0
//fileserver2/data1  /mnt/data1 cifs credentials=/root/creds/localuser3_creds,vers=2.0,uid=1120,gid=1000,file_mode=0775,dir_mode=0775 0 0

mount -a
df -h
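A typo in a credentials= path, or wrong permissions on the file, is a common reason a cifs mount fails, so a small check can help (check_creds is a hypothetical helper written for this post, not a standard tool):

```shell
# Hypothetical helper: verify every cifs credentials= file named in an
# fstab-style file exists and is mode 600.
check_creds() {
  local file=$1 line f rc=0
  while IFS= read -r line; do
    case "$line" in
      *cifs*credentials=*)
        f=${line#*credentials=}   # strip everything up to credentials=
        f=${f%%,*}                # strip trailing mount options
        f=${f%% *}                # strip remaining fstab fields
        if [ ! -f "$f" ]; then
          echo "missing credentials file: $f"; rc=1
        elif [ "$(stat -c %a "$f")" != "600" ]; then
          echo "bad permissions on: $f"; rc=1
        fi
        ;;
    esac
  done < "$file"
  return $rc
}

# Usage: check_creds /etc/fstab
```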
Category: Linux | Comments Off on Linux: A server build with cifs share example
November 19

Linux: Postgresql

Check PostgreSQL version:
psql --version

Log into PostgreSQL. Log in as the default user, i.e. postgres (sudo will prompt for your Ubuntu system password):
sudo -u postgres psql

Create a new user for PostgreSQL and assign a password to it. Since it is not considered good practice to persist your data in PostgreSQL through the default user, i.e. postgres, we will create a new user and assign a password to it. This can be done in two ways:

First way:
sudo -u postgres createuser --interactive

Here, you will be asked for the name of the role/user, whether to grant the new user superuser privileges, and if not, whether the user may create databases and new roles.
After creating the user, log in to the PostgreSQL console as the default user, i.e. postgres, and then alter the password for the user with the following command (assuming the new user/role you created is lihas):

ALTER USER lihas WITH PASSWORD 'lihas';

Note: In case the name of your user contains capital letters, wrap the username in double quotes when performing all user-related operations, like:

ALTER USER "LiHaS" WITH PASSWORD 'lihas';

Second way:
Log into the PostgreSQL console as the default user, i.e. postgres, and then create the user:

CREATE USER lihas WITH PASSWORD 'lihas'; -- assuming the new user/role we want to create is lihas

ALTER USER lihas WITH CREATEDB; -- user lihas can create databases

ALTER USER lihas WITH CREATEROLE; -- user lihas can create new users/roles

Note: By convention, whenever a new user is created, a database with the same name as the new username should also be created, and this database shall not be used to store data. You may create it with the CREATE DATABASE command below.

List all users in PostgreSQL:
\du

Switch to a user:
SET ROLE user_name;

Check current user:
SELECT CURRENT_USER;

Delete a user/role:
DROP USER user_name;

List all databases:
\l

Create database:
CREATE DATABASE lihas_db;

By default, the owner of any database you create is postgres, if you want your database to belong to a specific user, switch to that user (see above) and then create the database.

Enter a database. Enter inside database via the default user:

\c database_name

…or…

\connect database_name

Enter inside the database with a specific user:

\c database_name user_name

If the above command does not work (Peer authentication failed), simply connect to the database first and then switch to the user (see Switch to a user above).

Note: You may also log in directly into a database as a certain user from the terminal with the following command (the password is your Ubuntu system password).

sudo -u role_name psql db_name

But for this command to work, role_name must be a valid Linux user name. You may add a Linux user as follows:

sudo adduser role_name
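That prerequisite can be checked first (has_linux_user is a made-up helper name):

```shell
# Made-up helper: does a Linux account with this name exist?
has_linux_user() {
  id -u "$1" >/dev/null 2>&1
}

# Create the account only when it is missing, e.g.:
#   has_linux_user lihas || sudo adduser lihas
```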

Drop database:

DROP DATABASE db_name;

List all tables:
\d

Describe schema of a table:
\d table_name

Exit out of PostgreSQL:
\q

Category: Linux | Comments Off on Linux: Postgresql
November 19

Linux: Docker cheat sheet

Docker images

list:
docker images

run image:
docker run full/image/name:version

docker commit changes:
docker commit container-id full/image/name:version

removing images (linked containers need removed first)
docker rmi image-id

save container:
docker export container-id > imagename.tar

load a container:
cat imagename.tar | docker import - full/image/name:version

save image:
docker save full/image/name:version > imagename.tar

load an image:
sudo docker load < imagename.tar

━━━━━━━━━━━━━━━━━━
Docker containers

list:
docker ps -a

execute something within a container:
docker exec -it container-id /bin/bash

run detached:
docker run -d

docker run -it imagename /bin/bash (interactively start bash)

stop a container:
docker stop container-id

removing container
docker rm container-id

Detaching from a container and leave it running
Ctrl+p, Ctrl+q will now turn interactive mode into daemon mode.

Attach to a container
docker attach container-id

Connect inside a container
docker exec -it container-id /bin/bash

Automatically remove a container on exit
docker run --rm -it imagename /bin/bash

━━━━━━━━━━━━━━━━━━
Example Commands Explained

docker run -d -p 5901:5901 -v /etc/machine-id:/etc/machine-id fedora/firefox:version3

docker run -d (detaches the container, bringing you back to the host machine's command prompt after the container starts) -p (maps the host ports to the container ports) -v (mounts the local machine ID into the container) runs a new container off of the image.

docker ps -a

docker ps (list containers and their information, including the ID) -a (all)

docker exec -it a42c2a44b79f /bin/bash

docker exec (execute a command in a container) -i (interactive: keep STDIN open even if not attached) -t (tty: allocate a pseudo-TTY) container-id

docker stop a42c2a44b79f

docker stop (stop the container; it will be force-stopped after 10 seconds unless a different timeout is specified) container-id

docker rm a42c2a44b79f

docker rm (remove a container) container-id

Category: Linux | Comments Off on Linux: Docker cheat sheet