Thursday, February 28, 2019

LAMBDA function on AWS


AWS Lambda is Amazon's serverless compute service: it lets you run code without provisioning or managing servers, operating systems, or even containers.
You can use it in data pipelines or to respond to web requests.


At the very beginning we need to specify a handler. There are 3 ways of creating such a handler:

a. Implementing the RequestHandler interface
b. Creating a custom method handler
c. Implementing the RequestStreamHandler interface


import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LRequestHandler
  implements RequestHandler<String, String> {
    public String handleRequest(String input, Context context) {
        return "Hello World, " + input;
    }
}


import com.amazonaws.services.lambda.runtime.Context;

// A custom method handler: no interface needed, the method is referenced
// in the Lambda configuration by name, e.g. "LMethodHandler::handleRequest".
public class LMethodHandler {
    public String handleRequest(String input, Context context) {
        return "Hello World, " + input;
    }
}


import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

public class LRequestStreamHandler
  implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream,
      OutputStream outputStream, Context context) throws IOException {
        outputStream.write("Hello World".getBytes());
    }
}
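The handler pattern above can be seen in isolation with simplified local stand-ins for the SDK types (an assumption for the sketch: real Lambda code imports Context and RequestHandler from com.amazonaws.services.lambda.runtime in the aws-lambda-java-core library instead of defining them locally):

```java
// Simplified stand-ins for the AWS SDK types, so the sketch is
// self-contained and compilable without the aws-lambda-java-core jar.
interface Context {
    String getFunctionName();
}

interface RequestHandler<I, O> {
    O handleRequest(I input, Context context);
}

public class HandlerSketch {

    // Mirrors LRequestHandler above, with the generic types spelled out:
    // input type String, output type String.
    static class HelloHandler implements RequestHandler<String, String> {
        public String handleRequest(String input, Context context) {
            return "Hello World, " + input;
        }
    }

    public static void main(String[] args) {
        RequestHandler<String, String> handler = new HelloHandler();
        System.out.println(handler.handleRequest("Lambda", null)); // prints "Hello World, Lambda"
    }
}
```

In a real deployment, Lambda itself instantiates the class and calls handleRequest; the main method here only simulates that call.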



Monday, February 11, 2019

JConsole and java.lang.OutOfMemoryError


JConsole is a graphical monitoring tool for the Java Virtual Machine (JVM) and Java applications, both on local and remote machines.

JConsole provides information on the performance and resource consumption of applications running on the Java platform, and it ships as part of the Java Development Kit (JDK).

The graphical console can be started with the "jconsole" command, found in the "bin" directory of the JDK installation.




The documentation of how to use jconsole:






Other such tools (used via CLI):   https://github.com/patric-r/jvmtop


JVM and memory

In big applications with a large code base, class metadata can quickly fill up this segment of memory, causing a java.lang.OutOfMemoryError that is directly related to Perm Gen (the permanent generation).

How the Java memory pool is divided is described in a number of very interesting articles:



https://www.optaplanner.org/blog/2015/07/31/WhatIsTheFastestGarbageCollectorInJava8.html

https://openjdk.java.net/jeps/291  (JEP 291: Deprecate the Concurrent Mark Sweep (CMS) Garbage Collector)

https://plumbr.io/handbook/garbage-collection-algorithms

https://stackify.com/java-performance-tools-8-types-tools-need-know/

https://blog.idrsolutions.com/2014/06/java-performance-tuning-tools/

https://dzone.com/articles/java-performance-troubleshooti-0

https://dzone.com/articles/top-9-free-java-process-monitoring-tools-amp-how-t

https://www.dnsstuff.com/jvm-performance

http://karunsubramanian.com/websphere/how-to-choose-the-correct-garbage-collector-java-generational-heap-and-garbage-collection-explained/

https://www.petefreitag.com/articles/gctuning/

http://javahonk.com/how-many-types-memory-areas-allocated-by-jvm/

https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/cms.html

https://codeahoy.com/2017/08/06/basics-of-java-garbage-collection/

https://dzone.com/articles/understanding-the-java-memory-model-and-the-garbag

http://all-about-java-and-weblogic-server.blogspot.com/2014/01/what-are-java-heap-young-old-and.html

http://blog.icodejava.com/tag/jvm-option-parameter-xxuseconcmarksweepgc/

Heap Memory Usage - Memory Pools:
Eden Space
Survivor Space
Tenured Gen

Non-Heap Memory Usage - Memory Pools:
Code Cache
Perm Gen


Heap memory

The heap memory is the runtime data area from which the Java VM allocates memory for all class instances and arrays. The heap may be of a fixed or variable size. The garbage collector is an automatic memory management system that reclaims heap memory for objects.
  • Eden Space: The pool from which memory is initially allocated for most objects.
  • Survivor Space: The pool containing objects that have survived the garbage collection of the Eden space.
  • Tenured Generation or Old Gen: The pool containing objects that have existed for some time in the survivor space.

Non-heap memory

Non-heap memory includes a method area shared among all threads and memory required for the internal processing or optimization for the Java VM. It stores per-class structures such as a runtime constant pool, field and method data, and the code for methods and constructors. The method area is logically part of the heap but, depending on the implementation, a Java VM may not garbage collect or compact it. Like the heap memory, the method area may be of a fixed or variable size. The memory for the method area does not need to be contiguous.
  • Permanent Generation: The pool containing all the reflective data of the virtual machine itself, such as class and method objects. With Java VMs that use class data sharing, this generation is divided into read-only and read-write areas.
  • Code Cache: The HotSpot Java VM also includes a code cache, containing memory that is used for compilation and storage of native code.

Java objects reside in an area called the heap, while metadata such as class objects and method objects reside in the Permanent generation or Perm Gen area. The permanent generation is not part of the heap.
The heap is created when the JVM starts up and may increase or decrease in size while the application runs. When the heap becomes full, garbage is collected. During the garbage collection objects that are no longer used are cleared, thus making space for new objects.
-Xmssize Specifies the initial heap size.
-Xmxsize Specifies the maximum heap size.
-XX:MaxPermSize=size Sets the maximum permanent generation space size. This option was deprecated in JDK 8, and superseded by the -XX:MaxMetaspaceSize option.
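The pools listed above can also be inspected from code via the java.lang.management API, which is the same data JConsole shows on its Memory tab. A minimal sketch (note: pool names vary by JVM version and garbage collector, e.g. on Java 8+ "Perm Gen" is replaced by "Metaspace"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Prints each memory pool with its type (HEAP or NON_HEAP) and
        // current usage, e.g. Eden Space, Survivor Space, Tenured Gen /
        // Old Gen on the heap side, Code Cache on the non-heap side.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-30s type=%s used=%d bytes%n",
                    pool.getName(), pool.getType(), pool.getUsage().getUsed());
        }
    }
}
```

Running it on different JVMs is a quick way to see how the pool layout changed between Java 7 and Java 8+.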
The full specification of Java HotSpot VM options is available at:


Big difference between Agile and Microservices (... as a joke)


Any differences or similarities between Agile and Microservices?

AGILE = a recipe for making order out of a mess, so that it can clean itself up forever :-)

MICROSERVICES =
a recipe for making a mess out of order, so that it controls itself, monitors itself, repairs itself and can grow infinitely without losing the expected functionality :-)


Sunday, February 10, 2019

Converting Properties Files to Escaped Unicode


Using Unicode characters in Java properties files is sometimes problematic when we want to show signs and symbols that are not ASCII characters...

Converting properties files to escaped Unicode is a must
when we have problems displaying special characters.

So we want to send u-escaped Unicode, using the \uXXXX notation.
Since not only Java but also JavaScript/JSON understands this convention, we only need the u-escaping in Java on the server.

Solution:
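One way to do the u-escaping in plain Java, as a minimal sketch (this is the same conversion the JDK's native2ascii tool performs; hex-digit case may differ between tools, and Properties.store applies its own escaping when writing):

```java
public class UnicodeEscaper {

    // Replaces every non-ASCII character with its \uXXXX escape,
    // leaving plain ASCII characters untouched.
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) {
                sb.append(c);
            } else {
                // The cast to int is required: %x does not accept char.
                sb.append(String.format("\\u%04x", (int) c));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Polish "zażółć" becomes za\u017c\u00f3\u0142\u0107
        System.out.println(escape("zażółć"));
    }
}
```

The escaped output can be pasted straight into a .properties file; both Java and JSON parsers turn \u017c back into ż.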




Symbols, icons, "visual thinking" signs and letters:



Pushing on Github and Credentials and 403 error


Sometimes, when working on GitHub with more than one account, we can get a 403 error!!

git push (...)
remote: Permission to (...) denied to (...)
fatal: unable to access (...) The requested URL returned error: 403

The cause is that our computer has saved a Git username and password for GitHub in Windows Credentials, so if we switch to another account the 403 error appears.

Below is the solution:

Control panel > user accounts > credential manager > Windows credentials > Generic credentials
(IN POLISH: Panel sterowania\Konta użytkowników i Filtr rodzinny\Menedżer poświadczeń)

Then remove the GitHub credentials and log in again.


Wednesday, February 6, 2019

OWASP TOP 10

As we can read at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
and https://blog.sucuri.net/2018/10/owasp-top-10-security-risks-part-i.html the Top 10 OWASP security vulnerabilities are:

  1. Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XXE)
  5. Broken Access Control
  6. Security Misconfigurations
  7. Cross-Site Scripting (XSS)
  8. Insecure Deserialization
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

Tuesday, February 5, 2019

NETFLIX

As we can read at http://spring.io/projects/spring-cloud-netflix and https://netflix.github.io/, did you know that Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms?

@EnableEurekaClient

With a few simple annotations you can quickly enable and configure the common patterns inside your application and build large distributed systems with battle-tested Netflix components. The patterns provided include Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and Client Side Load Balancing (Ribbon).

Spring Cloud Netflix features:


  • Service Discovery: Eureka instances can be registered and clients can discover the instances using Spring-managed beans
  • Service Discovery: an embedded Eureka server can be created with declarative Java configuration
  • Circuit Breaker: Hystrix clients can be built with a simple annotation-driven method decorator
  • Circuit Breaker: embedded Hystrix dashboard with declarative Java configuration
  • Declarative REST Client: Feign creates a dynamic implementation of an interface decorated with JAX-RS or Spring MVC annotations
  • Client Side Load Balancer: Ribbon
  • External Configuration: a bridge from the Spring Environment to Archaius (enables native configuration of Netflix components using Spring Boot conventions)
  • Router and Filter: automatic registration of Zuul filters, and a simple convention over configuration approach to reverse proxy creation

Very interesting information related to microservices and Netflix technologies:


Monday, February 4, 2019

Tribute to Wanda Rutkiewicz

Wanda Rutkiewicz was born on February 4, 1943. She was a Polish computer engineer and mountain climber, known as the first woman to successfully climb K2 (Chhogori or Mount Godwin-Austen, 8,611 metres / 28,251 ft above sea level).

https://en.wikipedia.org/wiki/Wanda_Rutkiewicz

https://www.fakt.pl/sport/inne-sporty/wanda-rutkiewicz-25-lat-od-zaginiecia-himalaistki-tracila-bliskich/wwns0sl

Spring Boot example


application.properties:

# ===============================
# H2 CONSOLE
# ===============================
spring.h2.console.path=/h2
# To See H2 Console in Browser:
# http://localhost:8080/h2
# Enabling H2 Console
spring.h2.console.enabled=true

# ===============================
# DB
# ===============================

spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# ===============================
# JPA / HIBERNATE
# ===============================

spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
spring.jpa.properties.hibernate.show_sql=false
spring.jpa.properties.hibernate.use_sql_comments=false
spring.jpa.properties.hibernate.format_sql=false



-------------------


package eu.microwebservices.awesomeappproject;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class AwesomeApp {

  public static void main(String[] args) {
    SpringApplication.run(AwesomeApp.class, args);
  }

}

----------------------------------

package eu.microwebservices.awesomeappproject.model;

import javax.persistence.*;
import javax.validation.constraints.NotNull;

@Entity
@Table(name = "tab_user")
public class User {

  @Id
  @GeneratedValue(strategy = GenerationType.IDENTITY)
  private Long id;

  /**
   * Sorry, but there is no validation of the email here!
   * It is not secure because the user input is not validated! See https://www.owasp.org
   * It is for educational purposes only...
   */
  @NotNull
  private String email;

  /**
   * This is a general name: it could be a nickname or a first name,
   * or, if the user prefers, a last name...
   * There is no validation here!
   * It is for educational purposes only...
   */
  @NotNull
  private String name;

  public User() {
  }

  public User(Long id) {
    this.id = id;
  }

  public User(String email, String name) {
    this.email = email;
    this.name = name;
  }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
 
    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    @Override
    public String toString() {
        return "User{" +
                "id=" + id +
                ", name='" + name + '\'' +
                ", email=" + email +
                '}';
    }

}


--------------------------------

package eu.microwebservices.awesomeappproject.model;

import org.springframework.data.repository.*;
import org.springframework.transaction.annotation.*;

@Transactional
public interface UserDao extends CrudRepository<User, Long> {

  /**
   * This method will find a User instance in the database by its email.
   * Note that this method is not implemented; its working code will be
   * automagically generated from its signature by Spring Data JPA.
   */
  public User findByEmail(String email);

  /**
   * This method will find a User instance in the database by its name.
   * Note that this method is not implemented; its working code will be
   * automagically generated from its signature by Spring Data JPA.
   */
  public User findByName(String name);

}

--------------------------------

package eu.microwebservices.awesomeappproject.controller;

import eu.microwebservices.awesomeappproject.model.User;
import eu.microwebservices.awesomeappproject.model.UserDao;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class UserController {
   /**
    * HOW TO TEST:
    * $ mvn spring-boot:run
    * http://localhost:8080/
    * Use the following urls:
    *    /create-user?email=[email]&name=[name]:    create a new user with an auto-generated id and email and name as passed values.
    *    /delete-user?id=[id]:      delete the user with the passed id.
    *    /get-user-by-email?email=[email]:      retrieve the id for the user with the passed email address.
     *    /update-user?id=[id]&email=[email]&name=[name]:    update the email and the name for the user identified by the passed id.
    */

  @Autowired
  private UserDao userDao;

  /**
   * /create-user  --> Create a new user and save it in the database.
   * This is not a secure operation! There is no input validation here!  See https://www.owasp.org
   * It is only for REST educational purposes...
   *
   * @param email User's email
   * @param name User's name
   * @return A string describing if the user is successfully created or not.
   */
  @RequestMapping("/create-user")
  @ResponseBody
  public String create(String email, String name) {
    User user = null;
    try {
      user = new User(email, name);
      userDao.save(user);
    }
    catch (Exception ex) {
      return "Error while creating the user: " + ex.toString();
    }
    return "User created successfully!! (id = " + user.getId() + ")";
  }

  /**
   * /delete-user  --> Delete the user having the passed id.
   * This is not a secure operation! There is no input validation here!
   * It is only for REST educational purposes...
   *
   * @param id The id of the user to delete
   * @return A string describing if the user is successfully deleted or not.
   */
  @RequestMapping("/delete-user")
  @ResponseBody
  public String delete(Long id) {
    try {
      User user = new User(id);
      userDao.delete(user);
    }
    catch (Exception ex) {
      return "Error while deleting the user: " + ex.toString();
    }
    return "User deleted successfully!!";
  }

  /**
   * /get-user-by-email  --> Return the id for the user having the passed email.
   * This is not a secure operation! There is no input validation here!
   * It is only for REST educational purposes...
   *
   * @param email The email to search in the database.
   * @return The user id or a message error if the user is not found.
   */
  @RequestMapping("/get-user-by-email")
  @ResponseBody
  public String getByEmail(String email) {
    Long userId;
    try {
      User user = userDao.findByEmail(email);
      if (user != null) {
        userId = user.getId();
      } else {
        return "User not found!!";
      }
    }
    catch (Exception ex) {
      return "User not found!!";
    }
    return "The user is found, and id is: " + userId;
  }

  /**
   * /update-user  --> Update the email and the name for the user in the database
   * having the passed id.
   * This is not a secure operation! There is no input validation here!
   * It is only for REST educational purposes...
   *
   * @param id The id for the user to update.
   * @param email The new email.
   * @param name The new name.
   * @return A string describing if the user is successfully updated or not.
   */
  @RequestMapping("/update-user")
  @ResponseBody
  public String updateUser(long id, String email, String name) {
    try {
      User user = userDao.findOne(id); // Spring Data JPA 1.x API; in 2.x+ use findById(id)
      user.setEmail(email);
      user.setName(name);
      userDao.save(user);
    }
    catch (Exception ex) {
      return "Error while updating the user: " + ex.toString();
    }
    return "User updated successfully!!";
  }

}

-----------------------------

pom.xml:




Installing Kubernetes on Ubuntu 18.04

https://kubernetes.io/docs/tasks/tools/install-kubectl/


Here are a few methods to install kubectl.
Install kubectl binary using native package management

#Ubuntu, Debian or HypriotOS from terminal:
sudo apt-get update && sudo apt-get install -y apt-transport-https
sudo apt install curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
##Preparing to unpack .../kubectl_1.13.3-00_amd64.deb ...
##Unpacking kubectl (1.13.3-00) ...
##Setting up kubectl (1.13.3-00) ...

Only root has permissions for:
/usr/bin/kubectl

/usr/bin/dockerd




"kubectl" from terminal:

kubectl controls the Kubernetes cluster manager.

Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

---

ps -ef | grep kubectl

Example:                                                                                                                                                                                     
  # Start a single instance of nginx.                                                                                                                                                         
  kubectl run nginx --image=nginx

Let's start from the very beginning... The Big Bang!


############### 1. UBUNTU ##############################
# Installing "Ubuntu 18.04.4 LTS (Bionic Beaver)"

############### 2. UPDATE UBUNTU #######################
# Update Ubuntu (+ adding google-chrome-stable_current_amd64.deb, etc.)

# To make an environment variable persistent for a user, we export it from the user's profile script.
# Open the current user's profile in a text editor (open the terminal):

vi ~/.bash_profile
   
# Add the export command for every environment variable you want to persist:

export JAVA_HOME=/opt/openjdk11
export USER=kris

# ... etc. Save your changes.

# To enable silent mode of installation and avoid every "Do you want to continue? [Y/n] y",
# read https://libre-software.net/ubuntu-automatic-updates/ and ...
# ... install the unattended-upgrades package on your UBUNTU
# ... or try "sudo apt install -y ... " with "-y" mode if it is easier and possible.

sudo apt install -y unattended-upgrades
### >> enter the password
### >> Pay attention to every "Do you want to continue? [Y/n]".

# Configure automatic updates following the advice at https://libre-software.net/ubuntu-automatic-updates/

sudo vi /etc/apt/apt.conf.d/50unattended-upgrades

# ... or try "sudo apt-get install -y ... " if it is easier and possible.
# Notice that:
# apt-get may be considered lower-level and "back-end"; we must know exactly what to use.
# apt is designed for end users (humans) and its output may change between versions.

############### 3. DOCKER ##############################
############### Installing DOCKER ######################

# Try to install the DOCKER:

sudo apt install -y docker.io
### >> Pay attention to every "Do you want to continue? [Y/n]".
### Accept with "y" if it occurs. Try to enable silent installation mode again...

sudo docker

sudo docker --version

# If something is missing, let's follow the advice in Brian Hogan's article
# at https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04

# Updating the list of packages:

    sudo apt update
### >> enter the password

# Installing some prerequisite packages which let apt use packages over HTTPS:

    sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

# Adding the GPG key for the official Docker repository to your system:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Adding the Docker repository to APT sources:

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

# Updating the list of packages database with Docker from the newly added repo:

    sudo apt update

# Making sure we are about to install from the Docker repo (instead of the default Ubuntu repo):

    apt-cache policy docker-ce

# Installing Docker:

    sudo apt install -y docker-ce

# Checking: Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

    sudo systemctl status docker

############### Executing the DOCKER Command Without Sudo ######################

    docker run --help

# If you want to avoid typing "sudo" whenever you run the docker command, add your username to the docker group:

    echo ${USER}
    sudo usermod -aG docker ${USER}

# Applying the new group membership. Type the following:

    su - ${USER}
### >> enter the password

# Checking the confirmation that the user is now added to the docker group by typing:

    id -nG

# Sample output:   kris adm cdrom sudo dip plugdev lpadmin sambashare docker


## (Optional) If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:
##   sudo usermod -aG docker username

############### Using the DOCKER Command ######################

## The syntax is "a chain of options and commands followed by arguments". Usage:
##    docker [OPTIONS] COMMAND [arguments]

# Viewing the version:

    docker --version

# Viewing all available subcommands:

    docker

 
## OUTPUT:

kris@gandalf1:~$     docker
# OPTIONS:
      --config string      Location of client config files (default "/home/kris/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/home/kris/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/home/kris/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/home/kris/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

# Management COMMANDs:
  builder     Manage builds
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  engine      Manage the docker engine
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

# COMMANDs:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

# Running 'docker COMMAND --help' for more information on a specific command:

    docker kill --help

# Viewing system-wide information about Docker. We can use:

    docker info

############### Working with DOCKER Images ######################

# Checking if we can access and download images from Docker Hub. We can type:

    docker run hello-world

# The installation is working correctly even if the output starts with "Unable to find image 'hello-world:latest' locally".
# Docker downloaded the image from Docker Hub, which is the default registry.
# Once the image was downloaded, Docker created a container from the image
# and the application within the container executed, displaying the message.

## OUTPUT:

kris@gandalf1:~$     docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac7f2fdd86d7e4e
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

 

# We can search for images available on Docker Hub by typing:

    docker search ubuntu
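As a small sketch, `docker search` also supports filters; for example, to show only official images (assuming a reasonably recent Docker client):

```shell
# Limit search results to official images only
docker search --filter is-official=true ubuntu
```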

# Executing the following command to download the official "ubuntu" image to our machine:

    docker pull ubuntu

# Viewing the images that have been downloaded to our machine. We can type:

    docker images

## OUTPUT:

kris@gandalf1:~$     docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              72300a873c2c        2 weeks ago         64.2MB
hello-world         latest              fce289e99eb9        14 months ago       1.84kB

# Images that you use to run containers can be modified and used to generate new images,
# which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

############### Running a DOCKER Container ######################

# The "hello-world" container is an example container: it runs, emits a test message, and exits.
# Containers can be much more useful than that, and they can be interactive, like virtual machines.

# As a second example, let's run a container using the latest official "ubuntu" image.
# The combination of the -i and -t switches gives you interactive shell access into the container. We can use "bash":

    docker run -it ubuntu bash

# Your command prompt should change to reflect the fact that you're now working inside the container:

## OUTPUT:

kris@gandalf1:~$     docker run -it ubuntu bash
root@5495e623ad66:/#

# Pay attention! Note the container ID in the command prompt: it is 5495e623ad66.
# You'll need that container ID later to identify the container when you want to remove it.
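Rather than copying the ID out of the prompt by hand, `docker ps` can print just the ID of the most recently created container; a minimal sketch:

```shell
# -l = show only the latest created container, -q = print only its ID
CONTAINER_ID=$(docker ps -lq)
echo "$CONTAINER_ID"
```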


# Now you can run any command inside the container:

    apt update

# You can install any application inside it. Let's install Node.js:

    apt install -y nodejs

# It installs Node.js in the container from the official Ubuntu repository.
# "Do you want to continue? [Y/n]"
### >> type "y" + ENTER
# Verify that Node.js is installed by checking the version number:

    node -v

# Any changes you make inside the container only apply to that container.
# To exit the container, type "exit" at the prompt.

    exit
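If you only need a single command rather than an interactive shell, you can run a one-off container; a quick sketch using the same ubuntu image:

```shell
# Run one command; --rm removes the container as soon as the command finishes
docker run --rm ubuntu bash -c "cat /etc/os-release"
```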

############### Managing DOCKER Containers ######################

# To view all containers, active and inactive, use the "-a" switch:

    docker ps -a

## OUTPUT:

kris@gandalf1:~$     docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
5495e623ad66        ubuntu              "bash"              14 minutes ago      Exited (0) 3 minutes ago                        laughing_saha
a87b4007c298        ubuntu              "bash"              30 minutes ago      Exited (0) 29 minutes ago                       admiring_wu
c83cfac0a44e        hello-world         "/hello"            41 minutes ago      Exited (0) 41 minutes ago                       keen_jang
3671da0fff80        ubuntu              "/bin/bash"         2 hours ago         Exited (0) 2 hours ago                          sweet_bardeen
040ddf4a9ba6        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                          jovial_gates
ba7eb7d90505        hello-world         "/hello"            2 hours ago         Exited (0) 2 hours ago                          angry_nash

# Viewing the latest container you created:

    docker ps -l

# To start a stopped container, use docker start followed by the container ID or the container's name:

    docker start 5495e623ad66
 
    docker start laughing_saha
 
    docker ps -a

# To stop a running container, use docker stop followed by the container ID or name:

    docker stop 5495e623ad66
 
    docker stop laughing_saha
 
    docker ps -a

# Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it:

    docker rm angry_nash
 
    docker ps -a
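Instead of removing stopped containers one by one, Docker provides a prune subcommand. Note that this deletes every stopped container on the machine, so use it with care:

```shell
# Remove all stopped containers; -f skips the confirmation prompt
docker container prune -f
docker ps -a
```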
 
# As "docker rename --help" shows, we can rename a container using "docker rename CONTAINER NEW_NAME" and then start it:

    docker rename 5495e623ad66 kris
    docker start 5495e623ad66
    docker ps -a
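Once the renamed container is running, you can open a fresh shell inside it with `docker exec` (using the container name from the rename above):

```shell
# Attach a new interactive bash session to the running container
docker exec -it kris bash
```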

## OUTPUT:

kris@gandalf1:~$     docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
5495e623ad66        ubuntu              "bash"              27 minutes ago      Up 1 second                                     kris

############### Committing Changes in a Container to a DOCKER Image ######################

# When you start up a Docker container, you can create, modify, and delete files just like you can with a virtual machine.
# You can then commit the changes to a new Docker image using the following command.

##    docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

# The -m switch (as in git) is for the commit message, which helps you and others know what changes you made, while -a specifies the author.
# Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.
# For example, for the user kris, with the container ID of 5495e623ad66, the command would be:

    docker commit -m "added Node.js" -a "kris" 5495e623ad66 kris/ubuntu-nodejs

# When you commit an image, the new image is saved locally on your computer. You can push this later...
# Listing the Docker images:

    docker images
    docker ps -a
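A quick way to verify that the commit worked is to start a throwaway container from the new image and check the Node.js version (image name taken from the commit above):

```shell
# --rm removes the container as soon as the command exits
docker run --rm kris/ubuntu-nodejs node -v
```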

############### Pushing DOCKER Images to a Docker Repository ######################
 
# To push an image to Docker Hub or any other Docker registry, you must have an account there.
# To push your image, first log in to Docker Hub.

##    docker login -u docker-registry-username

# You'll be prompted to authenticate using your Docker Hub password.
### >> enter the password

# Then you may push your own image using:

##    docker push docker-registry-username/docker-image-name
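If the image was not originally created under your username, you can retag it before pushing; a sketch using the same placeholder names as above:

```shell
# Create an alias of a local image under your Docker Hub namespace, then push it
docker tag local-image-name docker-registry-username/docker-image-name
docker push docker-registry-username/docker-image-name
```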

############### 4. ANSIBLE #################################################################
######### How to use Ansible to install and set up the Docker on Ubuntu 18.04? #############

Ansible is an open-source software provisioning, configuration management, and application-deployment tool.
It was written by Michael DeHaan and acquired by Red Hat in 2015.  
It includes its own declarative language to describe system configuration...
(https://www.ansible.com/integrations/containers/docker)

The term "ansible" comes from Ursula K. Le Guin's novel "Rocannon's World" (1966), where it refers to a device for instantaneous interstellar communication.

For example, we need Ansible... when we want to use Docker images with Kubernetes...

# Let's try the advice in the following articles:

https://appfleet.com/blog/install-and-setup-docker-using-ansible-on-ubuntu-18-04-part-2/

https://www.xpresservers.com/how-to-use-ansible-to-install-and-set-up-docker-on-ubuntu-18-04/

https://www.techrepublic.com/article/how-to-deploy-a-container-with-ansible/


https://github.com/do-community/ansible-playbooks/tree/master/docker_ubuntu1804

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04

https://www.digitalocean.com/community/tutorial_series/getting-started-with-configuration-management

https://www.digitalocean.com/community/tutorials/configuration-management-101-writing-ansible-playbooks

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-18-04

https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-18-04

https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-automate-initial-server-setup-on-ubuntu-18-04

https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-install-and-set-up-docker-on-ubuntu-18-04


https://hub.docker.com/r/geerlingguy/docker-ubuntu1804-ansible/

https://github.com/geerlingguy/docker-ubuntu1804-ansible

https://gist.github.com/rbq/886587980894e98b23d0eee2a1d84933


https://github.com/do-community/ansible-playbooks

https://github.com/do-community/ansible-playbooks.git

https://github.com/do-community/ansible-playbooks/tree/master/docker_ubuntu1804

https://andrewaadland.me/2018-10-14-using-ansible-to-install-docker-ce-on-ubuntu-18-04/

https://docs.docker.com/app-template/working-with-template/


https://morioh.com/p/b29d5c108636

https://morioh.com/p/c66cac9ae70e

https://morioh.com/p/bd1e51fdc810


https://github.com/ferrarimarco/docker-ansible

https://github.com/ferrarimarco/docker-ansible.git


https://hub.docker.com/r/centos/mongodb-32-centos7/

https://hub.docker.com/_/centos/


Too old-fashioned now?:
https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/

https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

http://www.adrianmilne.com/deploying-a-spring-boot-microservice-to-docker-aws-elastic-beanstalk/

https://www.oreilly.com/ideas/how-to-manage-docker-containers-in-kubernetes-with-java

https://www.nginx.com/resources/library/kubernetes-for-java-developers/


############### 5. KUBERNETES ##############################
############### Installing KUBERNETES, K8s ##############################

# Let's follow the advice in the articles below to install the 4 main tools/components:

Docker = a container runtime. It is the component that runs your containers.
         Support for other runtimes such as rkt is under active development in Kubernetes.
kubectl = a CLI tool used for issuing commands to the cluster through its API Server.
kubeadm = a CLI tool that will install and configure the various components of a cluster in a standard way.
kubelet = a system service/program that runs on all nodes and handles node-level operations.
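Once installed, each of the four components can report its version; a quick sanity-check sketch:

```shell
# Confirm each component is on the PATH and print its version
docker --version
kubectl version --client
kubeadm version
kubelet --version
```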

# And other tools:

Calico = (https://docs.projectcalico.org/introduction/) a networking and network policy provider.
         It is an open source networking and network security solution for containers, virtual machines, and native host-based workloads.
Calico supports a broad range of platforms including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services.
Flannel = is an overlay network provider that can be used with Kubernetes
         (https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md).

https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/

https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04

https://kubernetes.io/docs/tasks/tools/install-kubectl/

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
(!!!)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

https://manpages.debian.org/experimental/kubernetes-client/kubectl-run.1.en.html

https://manpages.debian.org/experimental/kubernetes-client/

https://gist.github.com/jimmidyson/8b50ebe6c9f6ed5432cc

https://gist.github.com/jimmidyson/

https://github.com/CESNET/jupyter-meta/wiki/Kubernetes-with-Kubeadm

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
(kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml)

https://kubernetes.io/docs/concepts/cluster-administration/networking/

https://medium.com/htc-research-engineering-blog/install-a-kubernetes-cluster-with-kubeadm-on-ubuntu-step-by-stepff-c118514bc5e0

https://wiki.onap.org/display/DW/Deploying+Kubernetes+Cluster+with+kubeadm

https://www.linode.com/docs/kubernetes/getting-started-with-kubernetes/

# Let's try to use following commands to prepare and install K8s:

sudo systemctl enable docker

sudo apt install -y curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

sudo apt install -y kubeadm

kubeadm version

sudo swapoff -a

sudo hostnamectl set-hostname master-node

sudo hostnamectl set-hostname slave-node

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Your Kubernetes control-plane has initialized successfully now!
# To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo kubectl get nodes
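Alternatively, instead of copying admin.conf, root can point kubectl at it directly via an environment variable; this is handy for a quick test, though the per-user config above is the recommended setup:

```shell
# Use the admin kubeconfig for this shell session only
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```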

# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
# Calico from https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Install network plugin (Calico) - these now seem to leave the nodes in a "notReady" state,
# below is a fix from https://github.com/CESNET/jupyter-meta/wiki/Kubernetes-with-Kubeadm

sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

sudo kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Then you can join any number of worker nodes by running the following on each as root:

sudo kubeadm join 192.168.1.22:6443 --token wv9d86.mfssvpdndne1e96h \
    --discovery-token-ca-cert-hash sha256:392ee523f3a93648a019880cb38f1cad7532be9a1e0edcb63e9a478d880bc33a
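If the original bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the control-plane node:

```shell
# Prints a complete 'kubeadm join ...' line with a new token and CA cert hash
sudo kubeadm token create --print-join-command
```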

kubectl get pods --all-namespaces

sudo kubectl get nodes

sudo apt install -y net-tools

ifconfig

An example:
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0

wlp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.22  netmask 255.255.255.0  broadcast 192.168.1.255
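net-tools (ifconfig) is deprecated on modern Ubuntu; the same interface addresses can be listed with iproute2, which is installed by default:

```shell
# One line per interface: name, state, addresses
ip -brief addr
```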


"kubectl" from terminal:

kubectl controls the Kubernetes cluster manager.

Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

ps -ef | grep kubectl

only root has permissions:
/usr/bin/kubectl

/usr/bin/dockerd
                                                                                                                                                                               
Examples:                                                                                                                                                                                      
  # Start a single instance of nginx.                                                                                                                                                          
  kubectl run nginx --image=nginx                                                                                                                                                              
                                                                                                                                                                                               
  # Start a single instance of hazelcast and let the container expose port 5701 .
  kubectl run hazelcast --image=hazelcast --port=5701
 
  # Start a single instance of hazelcast and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container.
  kubectl run hazelcast --image=hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"
 
  # Start a single instance of hazelcast and set labels "app=hazelcast" and "env=prod" in the container.
  kubectl run hazelcast --image=hazelcast --labels="app=hazelcast,env=prod"
 
  # Start a replicated instance of nginx.
  kubectl run nginx --image=nginx --replicas=5
 
  # Dry run. Print the corresponding API objects without creating them.
  kubectl run nginx --image=nginx --dry-run
 
  # Start a single instance of nginx, but overload the spec of the deployment with a partial set of values parsed from JSON.
  kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'
 
  # Start a pod of busybox and keep it in the foreground, don't restart it if it exits.
  kubectl run -i -t busybox --image=busybox --restart=Never
 
  # Start the nginx container using the default command, but use custom arguments (arg1 .. argN) for that command.
  kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>
 
  # Start the nginx container using a different command and custom arguments.
  kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
 
  # Start the perl container to compute π to 2000 places and print it out.
  kubectl run pi --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(2000)'
 
  # Start the cron job to compute π to 2000 places and print it out every 5 minutes.
  kubectl run pi --schedule="0/5 * * * ?" --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(2000)'

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
      --attach=false: If true, wait for the Pod to start running, and then attach to the Pod as if 'kubectl attach ...' were called.  Default false, unless '-i/--stdin' is set, in which case the default is true. With '--restart=Never' the exit code of the container process is returned.
      --cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController).  Default true.
      --command=false: If true and extra arguments are present, use them as the 'command' field in the container, rather than the 'args' field which is the default.
      --dry-run=false: If true, only print the object that would be sent, without sending it.
      --env=[]: Environment variables to set in the container
      --expose=false: If true, a public, external service is created for the container(s) which are run
  -f, --filename=[]: to use to replace the resource.
      --force=false: Only used when grace-period=0. If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
      --generator='': The name of the API generator to use, see http://kubernetes.io/docs/user-guide/kubectl-conventions/#generators for a list.
      --grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).
      --hostport=-1: The host port mapping for the container port. To demonstrate a single-machine container.
      --image='': The image for the container to run.
      --image-pull-policy='': The image pull policy for the container. If left empty, this value will not be specified by the client and defaulted by the server
  -l, --labels='': Comma separated labels to apply to the pod(s). Will override previous values.
      --leave-stdin-open=false: If the pod is started in interactive mode or with stdin, leave stdin open after the first attach completes. By default, stdin will be closed after the first attach completes.
      --limits='': The resource requirement limits for this container.  For example, 'cpu=200m,memory=512Mi'.  Note that server side components may assign limits depending on the server configuration, such as limit ranges.
  -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
      --overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
      --pod-running-timeout=1m0s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running
      --port='': The port that this container exposes.  If --expose is true, this is also the port used by the service that is created.
      --quiet=false: If true, suppress prompt messages.
      --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
  -r, --replicas=1: Number of replicas to create for this container. Default is 1.
      --requests='': The resource requirement requests for this container.  For example, 'cpu=100m,memory=256Mi'.  Note that server side components may assign requests depending on the server configuration, such as limit ranges.
      --restart='Always': The restart policy for this Pod.  Legal values [Always, OnFailure, Never].  If set to 'Always' a deployment is created, if set to 'OnFailure' a job is created, if set to 'Never', a regular pod is created. For the latter two --replicas must be 1.  Default 'Always', for CronJobs `Never`.
      --rm=false: If true, delete resources created in this command for attached containers.
      --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
      --schedule='': A schedule in the Cron format the job should be run with.
      --service-generator='service/v2': The name of the generator to use for creating a service.  Only used if --expose is true
      --service-overrides='': An inline JSON override for the generated service object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.  Only used if --expose is true.
      --serviceaccount='': Service account to set in the pod spec
  -i, --stdin=false: Keep stdin open on the container(s) in the pod, even if nothing is attached.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
      --timeout=0s: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
  -t, --tty=false: Allocated a TTY for each container in the pod.
      --wait=false: If true, wait for resources to be gone before returning. This waits for finalizers.

Usage:
  kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] -- [COMMAND] [args...] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

################# LOGS from sandbox testing... ##################################

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

kris@gandalf1:~$ sudo apt install -y docker.io
[sudo] password for kris:
Reading package lists... Done
Building dependency tree    
Reading state information... Done
The following packages were automatically installed and are no longer required:
  efibootmgr libfwup1 libwayland-egl1-mesa
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  bridge-utils cgroupfs-mount containerd git git-man liberror-perl pigz runc ubuntu-fan
Suggested packages:
  aufs-tools btrfs-progs debootstrap docker-doc rinse zfs-fuse | zfsutils git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb
  git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
  bridge-utils cgroupfs-mount containerd docker.io git git-man liberror-perl pigz runc ubuntu-fan
0 upgraded, 10 newly installed, 0 to remove and 3 not upgraded.
Need to get 68,5 MB of archives.
After this operation, 353 MB of additional disk space will be used.
Get:1 http://pl.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57,4 kB]
Get:2 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 bridge-utils amd64 1.5-15ubuntu1 [30,1 kB]
Get:3 http://pl.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6 320 B]
Get:4 http://pl.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 runc amd64 1.0.0~rc10-0ubuntu1~18.04.2 [2 000 kB]
Get:5 http://pl.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 containerd amd64 1.3.3-0ubuntu1~18.04.1 [21,7 MB]
Get:6 http://pl.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 docker.io amd64 19.03.6-0ubuntu1~18.04.1 [39,9 MB]
Get:7 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 liberror-perl all 0.17025-1 [22,8 kB]
Get:8 http://pl.archive.ubuntu.com/ubuntu bionic-updates/main amd64 git-man all 1:2.17.1-1ubuntu0.5 [803 kB]
Get:9 http://pl.archive.ubuntu.com/ubuntu bionic-updates/main amd64 git amd64 1:2.17.1-1ubuntu0.5 [3 912 kB]
Get:10 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 ubuntu-fan all 0.12.10 [34,7 kB]
Fetched 68,5 MB in 6s (11,4 MB/s)                                                                                                                          
Preconfiguring packages ...
Selecting previously unselected package pigz.
(Reading database ... 164348 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package bridge-utils.
Preparing to unpack .../1-bridge-utils_1.5-15ubuntu1_amd64.deb ...
Unpacking bridge-utils (1.5-15ubuntu1) ...
Selecting previously unselected package cgroupfs-mount.
Preparing to unpack .../2-cgroupfs-mount_1.4_all.deb ...
Unpacking cgroupfs-mount (1.4) ...
Selecting previously unselected package runc.
Preparing to unpack .../3-runc_1.0.0~rc10-0ubuntu1~18.04.2_amd64.deb ...
Unpacking runc (1.0.0~rc10-0ubuntu1~18.04.2) ...
Selecting previously unselected package containerd.
Preparing to unpack .../4-containerd_1.3.3-0ubuntu1~18.04.1_amd64.deb ...
Unpacking containerd (1.3.3-0ubuntu1~18.04.1) ...
Selecting previously unselected package docker.io.
Preparing to unpack .../5-docker.io_19.03.6-0ubuntu1~18.04.1_amd64.deb ...
Unpacking docker.io (19.03.6-0ubuntu1~18.04.1) ...
Selecting previously unselected package liberror-perl.
Preparing to unpack .../6-liberror-perl_0.17025-1_all.deb ...
Unpacking liberror-perl (0.17025-1) ...
Selecting previously unselected package git-man.
Preparing to unpack .../7-git-man_1%3a2.17.1-1ubuntu0.5_all.deb ...
Unpacking git-man (1:2.17.1-1ubuntu0.5) ...
Selecting previously unselected package git.
Preparing to unpack .../8-git_1%3a2.17.1-1ubuntu0.5_amd64.deb ...
Unpacking git (1:2.17.1-1ubuntu0.5) ...
Selecting previously unselected package ubuntu-fan.
Preparing to unpack .../9-ubuntu-fan_0.12.10_all.deb ...
Unpacking ubuntu-fan (0.12.10) ...
Setting up git-man (1:2.17.1-1ubuntu0.5) ...
Setting up runc (1.0.0~rc10-0ubuntu1~18.04.2) ...
Setting up liberror-perl (0.17025-1) ...
Setting up cgroupfs-mount (1.4) ...
Setting up containerd (1.3.3-0ubuntu1~18.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up bridge-utils (1.5-15ubuntu1) ...
Setting up ubuntu-fan (0.12.10) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service → /lib/systemd/system/ubuntu-fan.service.
Setting up pigz (2.4-1) ...
Setting up git (1:2.17.1-1ubuntu0.5) ...
Setting up docker.io (19.03.6-0ubuntu1~18.04.1) ...
Adding group `docker' (GID 127) ...
Done.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
docker.service is a disabled or a static unit, not starting it.
Processing triggers for systemd (237-3ubuntu10.39) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
ureadahead will be reprofiled on next reboot

kris@gandalf1:~$ docker

Usage: docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/home/kris/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker
                           context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/home/kris/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/home/kris/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/home/kris/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  builder     Manage builds
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  engine      Manage the docker engine
  image       Manage images
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  deploy      Deploy a new stack or update an existing stack
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

Run 'docker COMMAND --help' for more information on a command.

kris@gandalf1:~$ docker --version
Docker version 19.03.6, build 369ce74a3c

kris@gandalf1:~$ sudo systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.

kris@gandalf1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add

Command 'curl' not found, but can be installed with:

sudo apt install -y curl

gpg: no valid OpenPGP data found.

kris@gandalf1:~$ ^C

kris@gandalf1:~$ sudo apt install curl
Reading package lists... Done
Building dependency tree    
Reading state information... Done
The following packages were automatically installed and are no longer required:
  efibootmgr libfwup1 libwayland-egl1-mesa
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libcurl4
The following NEW packages will be installed:
  curl libcurl4
0 upgraded, 2 newly installed, 0 to remove and 3 not upgraded.
Need to get 373 kB of archives.
After this operation, 1 038 kB of additional disk space will be used.
Get:1 http://pl.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcurl4 amd64 7.58.0-2ubuntu3.8 [214 kB]
Get:2 http://pl.archive.ubuntu.com/ubuntu bionic-updates/main amd64 curl amd64 7.58.0-2ubuntu3.8 [159 kB]
Fetched 373 kB in 0s (2 006 kB/s)
Selecting previously unselected package libcurl4:amd64.
(Reading database ... 165583 files and directories currently installed.)
Preparing to unpack .../libcurl4_7.58.0-2ubuntu3.8_amd64.deb ...
Unpacking libcurl4:amd64 (7.58.0-2ubuntu3.8) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.58.0-2ubuntu3.8_amd64.deb ...
Unpacking curl (7.58.0-2ubuntu3.8) ...
Setting up libcurl4:amd64 (7.58.0-2ubuntu3.8) ...
Setting up curl (7.58.0-2ubuntu3.8) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

kris@gandalf1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
OK

kris@gandalf1:~$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Hit:1 http://pl.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://pl.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://pl.archive.ubuntu.com/ubuntu bionic-backports InRelease                                                                                        
Ign:4 http://dl.google.com/linux/chrome/deb stable InRelease                                                                                                
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88,7 kB]        
Get:6 http://dl.google.com/linux/chrome/deb stable Release [943 B]                                                  
Get:7 http://dl.google.com/linux/chrome/deb stable Release.gpg [819 B]                              
Get:9 http://dl.google.com/linux/chrome/deb stable/main amd64 Packages [1 136 B]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8 993 B]
Get:10 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [34,5 kB]
Fetched 135 kB in 1s (97,2 kB/s)    
Reading package lists... Done

kris@gandalf1:~$ sudo apt install -y kubeadm
Reading package lists... Done
Building dependency tree    
Reading state information... Done
The following packages were automatically installed and are no longer required:
  efibootmgr libfwup1 libwayland-egl1-mesa
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  conntrack cri-tools ebtables ethtool kubectl kubelet kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 9 newly installed, 0 to remove and 3 not upgraded.
Need to get 51,8 MB of archives.
After this operation, 273 MB of additional disk space will be used.
Get:1 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30,6 kB]
Get:3 http://pl.archive.ubuntu.com/ubuntu bionic-updates/main amd64 ebtables amd64 2.0.10.4-3.5ubuntu2.18.04.3 [79,9 kB]
Get:4 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 ethtool amd64 1:4.15-0ubuntu1 [114 kB]              
Get:6 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]              
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8 776 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6 473 kB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.4-00 [19,2 MB]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.4-00 [8 741 kB]
Get:9 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.4-00 [8 064 kB]
Fetched 51,8 MB in 6s (8 715 kB/s)
Selecting previously unselected package conntrack.
(Reading database ... 165596 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb ...
Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.10.4-3.5ubuntu2.18.04.3_amd64.deb ...
Unpacking ebtables (2.0.10.4-3.5ubuntu2.18.04.3) ...
Selecting previously unselected package ethtool.
Preparing to unpack .../3-ethtool_1%3a4.15-0ubuntu1_amd64.deb ...
Unpacking ethtool (1:4.15-0ubuntu1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../4-kubernetes-cni_0.7.5-00_amd64.deb ...
Unpacking kubernetes-cni (0.7.5-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../5-socat_1.7.3.2-2ubuntu2_amd64.deb ...
Unpacking socat (1.7.3.2-2ubuntu2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../6-kubelet_1.17.4-00_amd64.deb ...
Unpacking kubelet (1.17.4-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../7-kubectl_1.17.4-00_amd64.deb ...
Unpacking kubectl (1.17.4-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../8-kubeadm_1.17.4-00_amd64.deb ...
Unpacking kubeadm (1.17.4-00) ...
Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) ...
Setting up kubernetes-cni (0.7.5-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up socat (1.7.3.2-2ubuntu2) ...
Setting up ebtables (2.0.10.4-3.5ubuntu2.18.04.3) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ebtables.service → /lib/systemd/system/ebtables.service.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up kubectl (1.17.4-00) ...
Setting up ethtool (1:4.15-0ubuntu1) ...
Setting up kubelet (1.17.4-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.17.4-00) ...
Processing triggers for systemd (237-3ubuntu10.39) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...

kris@gandalf1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

kris@gandalf1:~$ sudo swapoff -a
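Note that `sudo swapoff -a` only disables swap until the next reboot, while the kubelet requires swap to stay off. A minimal sketch of making it permanent by commenting out the swap entry in /etc/fstab — run here against a throwaway copy (the file contents and FSTAB path are made up for the demo); point FSTAB at the real /etc/fstab, with sudo, to actually apply it:

```shell
# 'swapoff -a' lasts only until reboot; the swap entry in /etc/fstab
# must also be commented out.  FSTAB points at a throwaway copy with
# demo contents here, so the sketch is safe to run as-is.
FSTAB=./fstab.copy
printf '%s\n' \
  'UUID=abcd / ext4 errors=remount-ro 0 1' \
  '/swapfile none swap sw 0 0' > "$FSTAB"
# Comment every non-comment line whose filesystem type field is 'swap':
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"   # the swap line now starts with '#', the ext4 line is untouched
```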

kris@gandalf1:~$ sudo hostnamectl set-hostname master-node

kris@gandalf1:~$ sudo hostnamectl set-hostname slave-node

kris@gandalf1:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
W0313 17:07:44.931108   11694 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0313 17:07:44.931223   11694 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [slave-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.22]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [slave-node localhost] and IPs [192.168.1.22 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [slave-node localhost] and IPs [192.168.1.22 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0313 17:08:20.954182   11694 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0313 17:08:20.955157   11694 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.002123 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node slave-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node slave-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wv9d86.mfssvpdndne1e96h
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.22:6443 --token wv9d86.mfssvpdndne1e96h \
    --discovery-token-ca-cert-hash sha256:392ee523f3a93648a019880cb38f1cad7532be9a1e0edcb63e9a478d880bc33a
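The sha256 value in `--discovery-token-ca-cert-hash` is simply a digest of the cluster CA's public key, so it can be recomputed with openssl if the `kubeadm init` output is lost. A sketch, run here against a freshly generated self-signed certificate so it works anywhere; on the master, point CA_CRT at `/etc/kubernetes/pki/ca.crt` instead:

```shell
# Recompute the discovery-token-ca-cert-hash from a CA certificate.
# CA_CRT is a throwaway self-signed demo cert here; on a real master
# use /etc/kubernetes/pki/ca.crt.
CA_CRT=./demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj '/CN=demo-ca' -keyout ./demo-ca.key -out "$CA_CRT" 2>/dev/null
openssl x509 -pubkey -noout -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'          # prints a 64-character hex digest
```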

kris@gandalf1:~$ mkdir -p $HOME/.kube

kris@gandalf1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

kris@gandalf1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

kris@gandalf1:~$ sudo kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
slave-node   NotReady   master   2m35s   v1.17.4

kris@gandalf1:~$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

kris@gandalf1:~$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

kris@gandalf1:~$ sudo kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

kris@gandalf1:~$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

kris@gandalf1:~$ sudo kubeadm join 192.168.1.22:6443 --token wv9d86.mfssvpdndne1e96h \
    --discovery-token-ca-cert-hash sha256:392ee523f3a93648a019880cb38f1cad7532be9a1e0edcb63e9a478d880bc33a

kris@gandalf1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-59snd             1/1     Running   0          3m58s
kube-system   coredns-6955765f44-zg7cr             1/1     Running   0          3m58s
kube-system   etcd-slave-node                      1/1     Running   0          4m11s
kube-system   kube-apiserver-slave-node            1/1     Running   0          4m11s
kube-system   kube-controller-manager-slave-node   1/1     Running   0          4m11s
kube-system   kube-flannel-ds-amd64-tgbjl          1/1     Running   0          38s
kube-system   kube-proxy-5bhjs                     1/1     Running   0          3m58s
kube-system   kube-scheduler-slave-node            1/1     Running   0          4m11s

kris@gandalf1:~$ sudo kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
slave-node   Ready    master   4m42s   v1.17.4


kris@gandalf1:~$ sudo apt install net-tools
Reading package lists... Done
Building dependency tree    
Reading state information... Done
The following packages were automatically installed and are no longer required:
  efibootmgr libfwup1 libwayland-egl1-mesa
Use 'sudo apt autoremove' to remove them.
The following NEW packages will be installed:
  net-tools
0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
Need to get 194 kB of archives.
After this operation, 803 kB of additional disk space will be used.
Get:1 http://pl.archive.ubuntu.com/ubuntu bionic/main amd64 net-tools amd64 1.60+git20161116.90da8a0-1ubuntu1 [194 kB]
Fetched 194 kB in 0s (1 464 kB/s)
Selecting previously unselected package net-tools.
(Reading database ... 165712 files and directories currently installed.)
Preparing to unpack .../net-tools_1.60+git20161116.90da8a0-1ubuntu1_amd64.deb ...
Unpacking net-tools (1.60+git20161116.90da8a0-1ubuntu1) ...
Setting up net-tools (1.60+git20161116.90da8a0-1ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

kris@gandalf1:~$ ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0

wlp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.22  netmask 255.255.255.0  broadcast 192.168.1.255
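The interfaces above line up with the `--pod-network-cidr=10.244.0.0/16` passed to `kubeadm init`: cni0 (10.244.0.1) and flannel.1 (10.244.0.0) both carry pod-network addresses, while docker0 (172.17.0.1) is Docker's own bridge and unrelated to the pod network. A toy sketch of that membership check (the string-prefix match is only valid because this /16 is aligned on an octet boundary):

```shell
# Classify an address as inside/outside the flannel pod network.
# Prefix matching works here only because 10.244.0.0/16 aligns on a
# full octet; an arbitrary CIDR would need real bit arithmetic.
in_pod_cidr() { case "$1" in 10.244.*) echo yes ;; *) echo no ;; esac; }
in_pod_cidr 10.244.0.1   # cni0      -> yes
in_pod_cidr 10.244.0.0   # flannel.1 -> yes
in_pod_cidr 172.17.0.1   # docker0   -> no
```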