Tuesday, 29 September 2020

What are Collections? How to Install the amazon.aws Collection in Ansible Tower

 


Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. As modules move from the core Ansible repository into collections, the module documentation will move to the collections pages.

You can install and use collections through Ansible Galaxy.
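Once a collection is installed, you call its content from a playbook by its fully qualified collection name (namespace.collection.module). A minimal sketch, assuming the amazon.aws collection covered later in this post is already installed and AWS credentials are available to boto3:

---
- hosts: localhost
  gather_facts: false
  tasks:
    # aws_caller_info only reports which AWS account/user the credentials
    # belong to, so it is a harmless way to confirm the collection works
    - name: Confirm the amazon.aws collection is usable
      amazon.aws.aws_caller_info: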

Installing collections

Installing collections with ansible-galaxy

By default, ansible-galaxy collection install uses https://galaxy.ansible.com as the Galaxy server (as listed in the ansible.cfg file under GALAXY_SERVER). You do not need any further configuration.

See Configuring the ansible-galaxy client if you are using any other Galaxy server, such as Red Hat Automation Hub.

To install a collection hosted in Galaxy:

ansible-galaxy collection install my_namespace.my_collection

You can also directly use the tarball from your build:

ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections

Note

The install command automatically appends the path ansible_collections to the one specified with the -p option unless the parent directory is already in a folder called ansible_collections.

When using the -p option to specify the install path, use one of the values configured in COLLECTIONS_PATHS, as this is where Ansible itself will expect to find collections. If you don’t specify a path, ansible-galaxy collection install installs the collection to the first path defined in COLLECTIONS_PATHS, which by default is ~/.ansible/collections.
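If you are not sure which paths your own installation is configured to use, one quick way to check (assuming the Ansible CLI tools are on your PATH) is to dump the effective configuration:

ansible-config dump | grep -i collections_paths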

You can also keep a collection adjacent to the current playbook, under a collections/ansible_collections/ directory structure.

./
├── play.yml
├── collections/
│   └── ansible_collections/
│       └── my_namespace/
│           └── my_collection/<collection structure lives here>

Installing a collection from a git repository

You can install a collection in a git repository by providing the URI to the repository instead of a collection name or path to a tar.gz file. The collection must contain a galaxy.yml file, which will be used to generate the would-be collection artifact data from the directory. The URI should be prefixed with git+ (or with git@ to use a private repository with ssh authentication) and optionally supports a comma-separated git commit-ish version (for example, a commit or tag).

Warning

Embedding credentials into a git URI is not secure. Make sure to use safe auth options for security reasons. For example, use SSH, netrc, or http.extraHeader/url.<base>.pushInsteadOf in Git config to prevent your credentials from being exposed in logs.

# Install a collection in a repository using the latest commit on the branch 'devel'
ansible-galaxy collection install git+https://github.com/organization/repo_name.git,devel

# Install a collection from a private github repository
ansible-galaxy collection install git@github.com:organization/repo_name.git

# Install a collection from a local git repository
ansible-galaxy collection install git+file:///home/user/path/to/repo/.git

In a requirements.yml file, you can also use the type and version keys in addition to using the git+repo,version syntax for the collection name.

collections:
  - name: https://github.com/organization/repo_name.git
    type: git
    version: devel
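To install everything listed in a requirements.yml file in one go, point ansible-galaxy at the file (note that installing collections straight from git repositories needs a reasonably recent ansible-galaxy, Ansible 2.10 or later):

ansible-galaxy collection install -r requirements.yml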

Git repositories can be used for collection dependencies as well. This can be helpful for local development and testing but built/published artifacts should only have dependencies on other artifacts.

dependencies: {'git@github.com:organization/repo_name.git': 'devel'}

Default repository search locations

There are two paths searched in a repository for collections by default.

The first is the galaxy.yml file in the top level of the repository path. If the galaxy.yml file exists it’s used as the collection metadata and the individual collection will be installed.

├── galaxy.yml
├── plugins/
│   ├── lookup/
│   ├── modules/
│   └── module_utils/
└── README.md

The second is a galaxy.yml file in each directory in the repository path (one level deep). In this scenario, each directory with a galaxy.yml is installed as a collection.

directory/
├── docs/
├── galaxy.yml
├── plugins/
│   ├── inventory/
│   └── modules/
└── roles/

Specifying the location to search for collections

If you have a different repository structure or only want to install a subset of collections, you can add a fragment to the end of your URI (before the optional comma-separated version) to indicate which path ansible-galaxy should inspect for galaxy.yml file(s). The path should be a directory to a collection or multiple collections (rather than the path to a galaxy.yml file).

namespace/
└── name/
    ├── docs/
    ├── galaxy.yml
    ├── plugins/
    │   ├── README.md
    │   └── modules/
    ├── README.md
    └── roles/

# Install all collections in a particular namespace
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/

# Install an individual collection using a specific commit
ansible-galaxy collection install git+https://github.com/organization/repo_name.git#/namespace/name/,7b60ddc245bc416b72d8ea6ed7b799885110f5e5

Lab


Step i: Install the amazon.aws collection and Boto3

Connect to your AWX server using MobaXterm and enter:

pip3 install boto3

ansible-galaxy collection install amazon.aws

IF YOU SEE ERRORS LIKE THE ONES BELOW:

- downloading role 'collection', owned by

 [WARNING]: - collection was NOT installed successfully: Content has no field named 'owner'

ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.


DO THIS TO UPGRADE ANSIBLE TO 2.9

sudo apt remove ansible

sudo add-apt-repository ppa:ansible/ansible-2.9

sudo apt install ansible
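Before re-running the collection install, it is worth confirming that the upgrade actually left you on Ansible 2.9.x:

ansible --version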

sudo ansible-galaxy collection install amazon.aws


Step ii: Specify your collection path in the ansible config file

vi /etc/ansible/ansible.cfg


Insert the below line

collections_paths = /home/ubuntu/.ansible/collections/ansible_collections
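For context, a minimal sketch of how that line sits inside /etc/ansible/ansible.cfg (the [defaults] section header is normally already present in the stock file; the path assumes the collection was installed as the ubuntu user):

[defaults]
collections_paths = /home/ubuntu/.ansible/collections/ansible_collections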


Save



Monday, 28 September 2020

How to Create an EC2 Instance Using Ansible Tower/AWX


Step i: Install the amazon.aws collection and Boto3

Connect to your AWX server using MobaXterm and enter:

pip3 install boto3

ansible-galaxy collection install amazon.aws

Step ii: Specify your collection path in the ansible config file

vi /etc/ansible/ansible.cfg


Insert the below line

collections_paths = /home/ubuntu/.ansible/collections/ansible_collections


Save



Step A: Create a new branch from the myfirstrepo repository in Bitbucket


Name the branch: Ansible

Step B: Open Git Bash in your project folder and do a git pull to get the latest branch updates

cd myfirstrepo

git pull
git checkout Ansible

This will switch to the newly created Ansible branch



Launch VS Code by entering code .

Step C: Playbook to create EC2 instance

Create a new file in VS Code called aws.yml

Paste the below code in the file:

---
- hosts: "{{ host }}"
  gather_facts: true
  vars:
    access_key: YOUR_ACCESSKEY
    secret_key: YOUR_SECRETKEY
  tasks:
    - name: Provision instance
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ pem_key }}"
        instance_type: t2.micro
        image: ami-03657b56516ab7912
        wait: yes
        count: 1
        region: us-east-2

Save the file, then add a commit message and push.
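A minimal sketch of the commit and push from Git Bash (the commit message is just an example; the Ansible branch matches Step A above):

git add aws.yml
git commit -m "Add playbook to provision an EC2 instance"
git push origin Ansible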



Step 1: First create an Organization: Devops

Description: All Devops Projects



Save

Step 2: Create Credentials for SCM: 

Click Credentials, then click the +

Enter a name for your credential. Any name will work. I used BITBUCKET
Select Credential Type: Source Control
Enter your Bitbucket username and password
Save






Step 3: Create a Project: AWS-Instance-Creation. Select the Org you just created

Choose SCM Type: Git

Enter the SCM HTTP URL

Select the SCM credentials you created

Enter Branch: Ansible

Check UPDATE REVISION ON LAUNCH



Save

Step 4: Create Template
Select Templates
Click +, then select Job Template to create a new template
Enter Template Name: T2-Micro-ubuntu-18-instance
Select Inventory: Demo Inventory. This specifies the server where we want to run the playbook. Demo Inventory is localhost, so the playbook will run on the AWX server itself.
Select Project: AWS-Instance-Creation
Select Playbook: aws.yml
Verbosity: Select 4
Extra Variables: Copy and paste the code below
---
host: localhost
pem_key: YOUR_KEY_name





SAVE
 
GO TO TEMPLATES....CLICK THE ROCKET BUTTON TO LAUNCH

THIS WILL PROVISION YOUR INSTANCE



Thursday, 24 September 2020

Understanding Pipelines: How to Create a Jenkins Pipeline from Multiple FreeStyle Jobs

 


What is a Pipeline?

A pipeline in a Software Engineering team is a set of automated processes that allow Developers and DevOps professionals to reliably and efficiently compile, build, and deploy their code to their production compute platforms. There is no hard and fast rule stating what a pipeline should look like or which tools it must utilise; however, the most common components of a pipeline are build automation/continuous integration, test automation, and deployment automation.

In Jenkins, a pipeline is a group of events or jobs which are interlinked with one another in a sequence.

What is a JenkinsFile?

Jenkins pipelines can be defined using a text file called a JenkinsFile. This lets you implement pipeline-as-code, written in a domain-specific language (DSL). With a JenkinsFile, you can write the steps needed for running a Jenkins pipeline.

The benefits of using JenkinsFile are:

  • You can create pipelines automatically for all branches and execute pull requests with just one JenkinsFile.
  • You can review and iterate on your pipeline code.
  • You can audit your Jenkins pipeline.
  • The JenkinsFile is the single source of truth for your pipeline and can be modified by multiple users.

A Jenkins pipeline can be defined either through the web UI or with a JenkinsFile.

Declarative versus Scripted pipeline syntax:

There are two types of syntax used for defining your JenkinsFile.

  1. Declarative
  2. Scripted

Declarative:

Declarative pipeline syntax offers an easy way to create pipelines. It contains a predefined hierarchy to create Jenkins pipelines. It gives you the ability to control all aspects of a pipeline execution in a simple, straightforward manner.

Scripted:

A scripted Jenkins pipeline runs on the Jenkins master with the help of a lightweight executor. It uses very few resources to translate the pipeline into atomic commands. Declarative and scripted syntax are structured quite differently from each other.

Why Use Jenkins Pipeline?

Jenkins is an open source continuous integration server which supports automating software development processes. You can create multiple automation jobs with the help of use cases, and run them as a Jenkins pipeline.

Here are the reasons why you should use Jenkins pipeline:

  • Jenkins pipeline is implemented as a code which allows multiple users to edit and execute the pipeline process.
  • Pipelines are robust. So if your server undergoes an unforeseen restart, the pipeline will be automatically resumed.
  • You can pause the pipeline process and make it wait to resume until there is an input from the user.
  • Jenkins Pipelines support big projects. You can run multiple jobs, and even use pipelines in a loop.


Jenkins Pipeline Concepts

Pipeline: The pipeline is a set of instructions given in the form of code for continuous delivery and consists of instructions needed for the entire build process. With pipeline, you can build, test, and deliver the application.
Node: The machine on which Jenkins runs is called a node. A node block is mainly used in scripted pipeline syntax.
Stage: A stage block contains a series of steps in a pipeline. That is, the build, test, and deploy processes all come together in a stage. Generally, a stage block is used to visualize the Jenkins pipeline process.
Step: A step is nothing but a single task that executes a specific process at a defined time. A pipeline involves a series of steps.

Create a Simple Pipeline 
Step 1. Create a new job for code checkout. You can name it: CheckOut
Step 2. Click Advanced:
Click on Use custom workspace
Select your workspace directory. Enter: tmp



Step 3. Configure the SCM checkout by adding your Bitbucket Url and credentials



Click Save.

Step 4. Create another job for the build stage. You can call it: Build


Step 5: Repeat Step 2
Step 6: Configure it with the Maven config as shown below, then Save
Step 7. Create a new job to archive artifacts. Name it: Archive_Artifacts
Step 8: Repeat Step 2
Step 9: Configure as below:

Step 10: Create a new job. Name it: Publish_To_Artifactory
Step 11: Repeat Step 2
Step 12: Configure as below:




Save
Step 13: Create a new job: Code_Quality
Step 14: Repeat Step 2
Step 15: Configure as below:




Click Save
Step 16: Create a new job: Deploy_To_Container
Step 17: Repeat Step 2
Step 18: Configure as seen below:


Click Save
Step 19: Create a new job: Slack_Notification
Step 20: Repeat Step 2
Step 21: Configure as seen below
Step 22: Create a Pipeline job: MyCI_CB_CD_CT_CN_Pipeline
Step 23: Click Advanced Project Options
Enter a project name: Continuous Integration-Continuous Build-Continuous Deployment-Continuous Test-Continuous-Notification
Step 24: Copy and paste the code below into the Pipeline script box:


node {
    stage('Checkout') {
        build job: 'CheckOut'
    }
    stage('Build') {
        build job: 'Build'
    }
    stage('Code Quality scan') {
        build job: 'Code_Quality'
    }
    stage('Archive Artifacts') {
        build job: 'Archive_Artifacts'
    }
    stage('Publish to Artifactory') {
        build job: 'Publish_To_Artifactory'
    }
    stage('DEV Approve') {
        echo "Taking approval from DEV"
        timeout(time: 7, unit: 'DAYS') {
            input message: 'Do you want to deploy?', submitter: 'admin'
        }
    }
    stage('DEV Deploy') {
        build job: 'Deploy_To_Container'
    }
    stage('Slack notification') {
        build job: 'Slack_Notification'
    }
}



Save
Build Now
Your Pipeline will build like below


The above is an example of a scripted pipeline; it follows the structure below:

// Scripted pipeline
node {
    stage('Build') {
        echo 'Building....'
    }
    stage('Test') {
        echo 'Testing....'
    }
    stage('Deploy') {
        echo 'Deploying....'
    }
}
On the other hand, a declarative pipeline follows this structure:
// Declarative pipeline
pipeline {
  agent { label 'slave-node' }
  stages {
    stage('checkout') {
      steps {
        git 'https://bitbucket.org/myrepo'
      }
    }
    stage('build') {
      tools {
        maven 'Maven3'
      }
      steps {
        sh 'mvn clean test'
      }
    }
  }
}