Category Archives: DevOps Automation

AWS: A typical [POC] Setup of legacy data movement into Redshift

In this discussion video you can find the feasibility analysis for moving legacy data into AWS Redshift, along with a feasible architecture.

Watch the below video:

In the following session, we discuss the typical scenarios for Redshift usage:

This video outlines the AWS Data Pipeline service.

Please follow my videos : https://business.facebook.com/vskumarcloud/

NOTE:

Let us also be aware: with lakhs of certified AWS professionals available globally in the market, clients differentiate candidates for their needs/selection by asking about real hands-on experience and awareness of IaC. They give you a Console and ask you to build a specific infra setup in AWS.

In my coaching, I focus on candidates gaining real Cloud architecture implementation experience, rather than pushing through the course with screen operations only. You can watch this USP in my posted videos.

Contact me to learn and gain real Cloud experience, crack the interviews, and get offers for AWS roles globally; or you can transition to the role within your current company after facing the client interview/selection process, which becomes much easier with this knowledge.

Please connect with me on FB to discuss your background and your needs/goals. I am looking for serious learners only, with dedicated time. If you are a busy resource on projects, please note: you can wait until you are free to learn. One needs to spend time consistently on the practice; otherwise it will be of no use.

AWS: What is Kafka and MSK service ?

What is Kafka and MSK service ?

In this blog I have presented the videos on:

  1. What is Kafka? : https://www.facebook.com/347538165828643/videos/405168210472204/
  2. What is the MSK service for Kafka on AWS? : https://www.facebook.com/347538165828643/videos/233592261348312/
  3. How to configure Kafka on an EC2 Ubuntu instance, and what components does it have? : https://www.facebook.com/347538165828643/videos/429917674635091
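As a rough companion to the EC2-Ubuntu configuration video above, here is a minimal Python sketch of the handful of `config/server.properties` settings one typically edits after unpacking Kafka on an instance. The hostname and paths below are placeholders, not values from the video:

```python
# Sketch: generate a minimal Kafka broker config (server.properties style).
# The advertised hostname and log directory below are illustrative placeholders.
BROKER_SETTINGS = {
    "broker.id": "0",                          # unique id per broker in the cluster
    "listeners": "PLAINTEXT://0.0.0.0:9092",   # where the broker accepts client connections
    "advertised.listeners": "PLAINTEXT://ec2-host.example.com:9092",  # address clients use
    "log.dirs": "/var/lib/kafka-logs",         # on-disk segment storage
    "zookeeper.connect": "localhost:2181",     # ZooKeeper ensemble (pre-KRaft setups)
}

def render_properties(settings: dict) -> str:
    """Render settings as a Java .properties file body, one key=value per line."""
    return "\n".join(f"{key}={value}" for key, value in settings.items())

if __name__ == "__main__":
    print(render_properties(BROKER_SETTINGS))
```

The rendered text can then be written over the stock `server.properties` before starting the broker with the scripts shipped in Kafka's `bin/` directory.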

This video outlines the AWS Data Pipeline service.

https://business.facebook.com/watch/?v=2513558698681591

Please follow my videos : https://business.facebook.com/vskumarcloud/


Cloud Architect: Learn AWS Migration strategy

Cloud/DevOps Roles : Attend Mock Interviews to learn your skills


Mock interview practice – Contact for AWS/DevOps/SRE roles [not for Proxy!!] – for original profile only | Building Cloud cum DevOps Architects (vskumar.blog)

Learn Cloud projects building through special coaching

Learn Cloud projects building through special coaching:

Many IT organizations lack skilled people for building cloud projects the right way. Hence their Cloud budgets keep increasing instead of producing savings towards ROI.

Here is the special coaching I have been providing for 4+ years, to groom interested IT professionals into the desired job skills.

Visit for the past students reviews:

Manage Review (urbanpro.com)

See the below video on the coaching delivery methodology:

Learning Cloud technologies through basic skills alone is not enough to gain the Infra domain knowledge. One needs to step into the shoes of project-level task planning/designing/building to succeed in the job role.

AWS DevOps: How to troubleshoot with Code Repos?

AWS DevOps: How to troubleshoot with Code Repos?

Watch this video:

Why is learning Cloud building tasks mandatory for any IT role?

Folks, Greetings!

Learning how to build Cloud technology related tasks has become a common and mandatory activity for any IT role. It is the first step one needs to take to understand environment building for one's projects, whether in Microservices or Legacy areas. Study this video for more details, then come back to discuss your background/role needs so we can scale you up to beat the competition. Good luck climbing your career ladder.

What are the DevOps Architect interview FAQs?

What are the DevOps Architect interview FAQs?

Visit for past mock interviews:

Cloud/DevOps roles: What is Mock Interview services | Building Cloud cum DevOps Architects (vskumar.blog)

AWS/DevOps: Part time Internships for IT Professionals – Interviews | Building Cloud cum DevOps Architects (vskumar.blog)

Cloud Cum DevOps Coaching: Your Investigation and actions | Building Cloud cum DevOps Architects (vskumar.blog)

2. AWS IAC-YAML: How to work with CloudFormation [CF] for various infrastructure setups?

Folks, many companies do the IAC activity during the Cloud migration itself, to avoid future cloud infra issues.
You can see the CF Introduction and a demo:
 
If you want to learn JSON/YAML code along with manual AWS Infra domain setup [practice coaching], watch this and connect with me ASAP to join the late-evening [IST] batch. Watch the below sessions also. Attending these sessions for your practice saves a lot of your future effort, and it becomes easy to demonstrate these in-demand skills in interviews to grab offers on your own globally. Connect with me as mentioned on the website's main page, following the screening process for both of us.

In this blog, you can find some of my students' POC automation using YAML scripts with AWS services. [Keep visiting this blog for updates.]

 
 
For continuation watch the below video also:
 
 

<==== You can learn the IAC usage Combinations from the below content =====>

You can see an Apache2 setup with a customized VPC, along with a YAML code analysis discussion through CF, in the below video:
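To give a feel for the shape of such a template (this is an illustrative sketch, not the exact code from the video; the logical names, CIDRs, and AMI id are assumptions), the CF JSON for a custom VPC with one Apache2 EC2 can be assembled in Python and dumped for use with CloudFormation:

```python
import json

# Sketch of a CloudFormation template: a custom VPC, a public subnet, and one EC2.
# All logical names, CIDR ranges, and the AMI id are illustrative placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Custom VPC + Apache2 EC2 (illustrative sketch)",
    "Resources": {
        "MyVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "PublicSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "MyVPC"},   # Ref ties the subnet to the VPC above
                "CidrBlock": "10.0.1.0/24",
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-0abcdef1234567890",  # placeholder AMI id
                "SubnetId": {"Ref": "PublicSubnet"},
                # In real use, UserData carries a base64-encoded apache2 install script.
            },
        },
    },
}

if __name__ == "__main__":
    print(json.dumps(template, indent=2))
```

The same structure is what the YAML form of the template expresses; YAML is simply an alternative serialization of these Resources/Properties maps.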


 

Folks,

In this Blog I would like to add my IAC related sessions at one place.

If you want to know what IAC is, scroll to the bottom. The past blog contents are also copied there for the definitions.

How do you plan an IAC [Infrastructure As Code] ?


When you are working on DevOps practices, I would like to ask the following question…

How do you plan an IAC [Infrastructure As Code]?

You or your team members might be experts in configuration tools.

But without clear environment specifications, these tools have no AI to figure out your environment.

When we do IAC as part of DevOps practices, we also need to identify the infrastructure needs of the different environments.

At that time, one needs to do the following activities as well.

This is mandatory not only for a Cloud Architect but also for DevOps practitioners.

Please note: unless you give specifications to the DevOps Engineer, he/she cannot build a sustainable environment.

Your prior planning is very essential.
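To make "clear environment specifications" concrete, here is one minimal way to capture per-environment infra needs as data before handing them to a DevOps engineer or a configuration tool. The instance sizes and counts below are assumptions for illustration:

```python
# Sketch: environment specifications captured as plain data, before any IAC tooling.
# Instance types, counts, and flags below are illustrative assumptions.
ENV_SPECS = {
    "dev":  {"instance_type": "t2.micro",  "web_servers": 1, "db_multi_az": False},
    "test": {"instance_type": "t2.small",  "web_servers": 1, "db_multi_az": False},
    "prod": {"instance_type": "t3.medium", "web_servers": 2, "db_multi_az": True},
}

def spec_for(env: str) -> dict:
    """Return the spec for one environment, failing loudly if it was never planned."""
    if env not in ENV_SPECS:
        raise KeyError(f"No specification defined for environment: {env}")
    return ENV_SPECS[env]

if __name__ == "__main__":
    for env in ENV_SPECS:
        print(env, spec_for(env))
```

A table like this is the planning artifact: the configuration tool only renders it into resources, and an environment missing from the plan cannot be built by accident.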

Cloud architect: How to build your Infrastructure planning practice ?

https://vskumar.blog/2018/12/04/1-cloud-architect-how-to-build-infrastructure-planning/

For Special Coaching details, visit:

Maximizing ROI in Cloud/DevOps: The Benefits of Coaching for Professionals


Cloud Architect: Learn AWS Migration strategy

AWS DevOps Course for Freshers with Project level tasks:

This course is designed for freshers by an IT professional with three decades of global experience, after studying many consulting projects and the skill-gap issues of different project teams.
Participants can showcase the course project tasks they completed in their profiles.
The participant will be able to attend interviews for an AWS-DevOps fresher position, including the live screen test, without a proxy interview. Interview preparation coaching will be given.
On the job, they will be recognized for the skills learnt in this course and for self-demonstration within the team, completing project tasks ahead of schedule. This can add value to the candidate's future appraisals and IT career growth.
It is purely a job-oriented course for freshers, with live-project-like activities guided by an experienced IT professional. You will be required to do the tasks in the session yourself; the coach/trainer will not touch the screen for lab demos.
Note: the student performs all the tasks as a demo in the session. Before the demo, he/she needs to practice using the material provided. This makes the participant highly self-motivated and confident in technical learning, and it also motivates them for their job activities later.

For the course details, watch the below video:

If somebody wants to attend the basic AWS course before taking up the AWS-DevOps course, they can see the below video and then get on a call to know its details.

NOTE: This playlist contains the videos made on the list of AWS courses for freshers. When freshers are on a project, they need to understand the infrastructure requirements and their tasks; during the course they are coached well on these areas. They are also required to do some project activities against a requirement and present them to a team. This gives them very high confidence to perform well not only in interviews but also on live project activities, compared with the many freshers who got through managed proxy interviews.

https://www.facebook.com/watch/347538165828643/309818167129036/

For some more details on the course, visit:

Maximizing ROI in Cloud/DevOps: The Benefits of Coaching for Professionals

You can see some of the learners' sessions:

How is a Cloud Architect different from a DevOps role?

How is a Cloud Architect different from DevOps practices?

We have been watching lots of FB groups and ad sites saying "learn DevOps/AWS". In general, everybody believes from these stickers/posters that learning AWS and DevOps together is a must for any modern technology professional.

When we talk about AWS and DevOps, they are two different work streams.

Now, one might have the below questions in mind.

  1. Does a Cloud Architect need to be an expert in DevOps activities as well?
  2. What are the activities of a Cloud Architect?
  3. Why does a Cloud Architect not need to bother about DevOps?

Now, let us analyze them as below:

The role of the cloud architect is to migrate the existing IT infrastructure setup into cloud services. The cloud services can be AWS, Azure, Google Cloud [GC], Alibaba, etc.

From the below picture, one can get clarity if they have experience in traditional infrastructure building practice.

How to create AWS S3 Bucket

This role needs to clearly understand the usage of the vendor-specific [AWS/Azure/GC/Alibaba] cloud services, have command over mapping the current traditional infrastructure setup onto those cloud services, and plan/design its transformation with the additional benefits to management in terms of cost and ease of operation.

Once the modern application architecture/infrastructure in the cloud is operational, management can think of introducing DevOps practices.

To work on DevOps practices, each cloud services vendor provides its own setup or tools for different processes or pipeline stages. Separate professionals, called DevOps Engineers, are required to do these tasks. At this point, the role of the Cloud Architect is to guide them on the infrastructure available with the cloud vendor; as per the Cloud Architect's planning/guidelines, the DevOps engineers adopt the relevant tools/processes. Basically, all the setup is done with IAC [Infrastructure as Code] techniques. There can be configuration tools to create the IAC for different environments, and the Cloud Architect can monitor these tools' implementation as part of the cloud infrastructure implementation.

So the Cloud Architect does not need to get his/her fingers dirty with tools/commands to implement the DevOps processes.

For example, if you read the AWS roles across its different certifications, they mention Solutions Architect [SA] separately from the DevOps Engineer role, along with multiple other roles like SysOps, Developer, etc. All those roles need to be expert at getting their fingers dirty, using the relevant AWS services efficiently and effectively. But the Cloud Architect [which is SA in AWS terms] only monitors their activities; he/she doesn't need to put fingers into the techie stuff.

Hope I have given clarity on the above questions.

I get a lot of enquiries from people who want to do both the AWS and DevOps courses together. I understand that, because many training vendors push their posters on social media for business, these experienced professionals get confused into thinking they must learn both.

Now, after the above understanding, I would like to ask you, the reader of this blog, the below question:

Does a modern technology professional need to learn both cloud services and DevOps as mandatory? [Ex: DevOps/AWS].

Answer: No. They can choose one route only. If he/she comes from real work experience in a Sysadmin/Systems Engineer role, that past experience should be utilized efficiently in the IT industry; hence the scalable role is Cloud Architect, which in AWS terms is SA. But they need very good command of the traditional architecture as well as the cloud services to establish a well-suited conversion plan. This person is also responsible for showing ROI [Return On Investment] to management.

You can also compare the SAA salary among all the roles being played with AWS:

See the difference in salary amounts to choose your role as per your professional potential.

Question: In the current job market, why do JDs ask for DevOps as well for a Cloud role?

Answer: Please note that many organizations want to use the same resource for the Cloud and DevOps Architect/Engineer roles to save their IT budget, but they offer more salary for these multi-skills. Beyond this scenario, many companies use multi-cloud technology for their BCP and will ask for those skills too. Skills acceleration is mandatory for every professional nowadays: the more skills you acquire, and the earlier, the sooner your CTC touches the sky.

Also, Visit:

How best you can utilize Cloud Architect role as an efficient IT Management practitioner ?

Do you want to know the size of the global Cloud job market? If yes, visit:

What will be the size of Cloud market in IT by 2022 ?

For Special Coaching details, visit:

Maximizing ROI in Cloud/DevOps: The Benefits of Coaching for Professionals

To know the real articulation of the SA role, visit my AWS SAA class video:

Student Feedback:

1. AWS IAC: How many ways you can use IAC for automation ?

Folks,

Look into the discussion video mentioned in the below URL.

Cloud architect: How to build your Infrastructure planning practice ?

https://vskumar.blog/2018/12/04/1-cloud-architect-how-to-build-infrastructure-planning/

Cloud/DevOps coaching: How is the course organized for “Building Cloud cum DevOps Architect in one go”?

The overall two-stage details are discussed in the below video.

You can find the outline of the Stage 1 course in the below video.

Why should one do it?

What can they achieve after the course?

How can this be used for building a client POC?

How can it help you move into DevOps Automation as well?

You can also visit the below blogs:

https://vskumar.blog/2020/01/20/aws-devops-stage1-stage2-course-for-modern-tech-professional/

https://vskumar.blog/2020/02/29/aws-follow-aws-saa-best-practices-for-interviews/

A NOTE for you:

Folks,
How to join my Facebook groups to learn Cloud cum DevOps concepts?
For any training, many people conduct demos, reaching IT professionals through their salespeople. The demo shows that the trainer has the technical capability to handle the course for the attendees. In my coaching I have followed a similar concept, but I don't give demos by spending time, as I work alone. In my case, I have created groups on Facebook: a) DevOps Practices Group, b) Cloud Practices Group, c) Free Learning Agile/DevOps/AWS/AZ for freshers and IT Professionals, along with a few web pages by IT topic.

One can join them to learn and assess. For more details, you can watch this video. To join, you need to connect with me on FB and LinkedIn and send a message; only then will you be approved. This is a verification process to avoid fake IDs.

Cloud/DevOps coaching: The outline of Stage1 [Cloud Architect] coaching

Folks,

You can find the outline of the Stage1 course in the below video.


From ITSM to Cloud/DevOps: How Traditional Professionals Can Make the Transition

Rebuild ITSM for Cloud/DevOps: Adapting to the Changing IT Landscape: How ITSM Professionals Can Stay Relevant with Cloud and DevOps

Before going through this blog, you should be aware of the demand for this coaching in the global IT job market; see the URL:

https://vskumar.blog/2020/12/14/grab-massive-hike-offers-through-cloud-cum-devops-coaching-internship/

In recent years, the IT industry has undergone significant changes due to the rise of cloud computing and DevOps. As a result, many traditional ITSM (IT Service Management) professionals are finding themselves in a challenging situation. They must either adapt to these new methodologies and tools or risk becoming obsolete. In this blog post, we will discuss how traditional ITSM professionals can convert into Cloud/DevOps roles and the skills they need to be groomed to make this transition.

First, let’s understand the difference between ITSM, Cloud, and DevOps. ITSM is a set of best practices for managing and delivering IT services to meet business needs. Cloud computing is the delivery of on-demand computing resources such as servers, storage, and applications over the internet. DevOps is a methodology that focuses on collaboration between development and operations teams to deliver software faster and more reliably.

Now, to move from ITSM to Cloud/DevOps, traditional ITSM professionals must learn the relevant tools and methodologies. Cloud and DevOps are all about automation, scalability, and flexibility. Therefore, professionals need to have a good understanding of cloud infrastructure, virtualization, and automation tools like AWS, Azure, Puppet, Chef, and Ansible.

Apart from technical skills, professionals need to develop their soft skills, such as collaboration, communication, and problem-solving skills. These skills are essential for working effectively in a DevOps team where communication and collaboration are critical.

To learn these skills, professionals can attend training programs, read relevant books and articles, and participate in online communities. Many online courses and certifications are available, such as AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud DevOps Engineer.

Moreover, professionals need to gain hands-on experience by working on projects that involve cloud infrastructure and automation tools. They can start by participating in hackathons, contributing to open-source projects, or building their projects to gain practical knowledge.

In conclusion, traditional ITSM professionals must adapt to the changing IT landscape and acquire new skills to stay relevant. They must be willing to learn and embrace new methodologies and tools to succeed in their careers. COEs (Centers of Excellence) can play a significant role in providing training and support for professionals to make this transition. By doing so, IT companies can retain their knowledgeable employees and stay competitive in the market. At the same time, individuals must take responsibility for their careers and seek expert coaching to make this transition smoothly.

Rebuild your IT career from ITSM to Infra and DevOps building with traditional experience.
I have not included the real ITSM roles; those can also be utilized in the conversion, and it can be discussed case by case. Each organization has its own titles and uses them for ITSM delivery as well. People in these roles can implement IT governance in Cloud/DevOps more easily than other roles can.
This is the Stage 1 course highlight. You will be coached on the concepts of JSON/YAML scripts for creating the AWS cloud service components. Your personal practice efforts are mandatory here.
A detailed Stage 2 course discussion video is in one of the blogs. Follow the link: https://vskumar.blog/2020/01/20/aws-devops-stage1-stage2-course-for-modern-tech-professional/
Your effort in practicing tools is required here.

Also be aware of the below points [also published in a blog]:

I feel that for every DevOps professional, learning the Infra building activity is mandatory. See the issues listed in the below slide/video; you are likely facing at least one of them from your end. If so, there is a gap in your implementation practice, from not learning in the right method with best practices. So think about your actions after watching the below videos!

The new internship programme is made for working IT professionals, PART TIME, and is ONGOING.

Please see the below blog for details, and also watch the discussion with a new participant on the size of the POCs during the coaching:

https://vskumar.blog/2020/10/26/aws-devops-part-time-internships-for-it-professionals-interviews/

You can also see the below blog/videos on using ITSM professionals' experience for the Cloud/DevOps Architect role:

https://vskumar.blog/2020/02/15/do-you-want-to-become-cloud-cum-devops-architect-in-one-go/

What IT roles can vanish with the Cloud transition?

If you are in the below roles, in the current recession you will be targeted for a pink slip, in the first exit group among IT professionals. What do you need to do to replan your career? Please see/follow the blog/videos with patience.

As per my observation and practice with the trending technology [Cloud], all cloud service vendors have built-in serverless computing for many services. The following roles are going to vanish or be reskilled. If they are caught in a recession staff cut, these professionals need to take care of their careers.

1. DBA: the DBA tasks are embedded in these services. DBAs used to sit for hours performing many mundane tasks; now these are all automated.

2. Similarly, many other infra-role tasks, for Network Admins/Sys Admins, are also automated through cloud services.

3. As a consolidation, all these three roles are clubbed into one role: Cloud Engineer. This role's major task is to automate all cloud setup activities under IAC [Infrastructure as Code]. In future, only IAC will sustain, saving IT cost by automating the cloud setup creation activity.

4. If any professional needs to compete for this role, they need to understand the infra needs and the past roles' tasks in depth, along with multiple infra-related architecture scenarios and in-depth knowledge of cloud technology. Only then can they analyze the IAC requirements clearly, write the code, and test it. This is the domain analysis and design activity they need to consider, apart from learning the cloud technology.

5. We can also see the Storage Engineer role. All the cloud vendors have cloud storage services, with which the mundane creation/maintenance tasks vanish. Hence this role also will not exist.


The below video discusses: A) What IT roles can vanish after migrating to Cloud? B) How are the roles/tasks being transformed to Cloud through serverless computing technology? C) Why can anybody learn and do the past infra roles with Cloud? D) What do they all need to learn? E) How can organizations ask an employee to convert to modern technology before deciding to serve a pink slip?
Finally, what are the following roles and their tasks:
Traditional Infrastructure building roles:
1.Network Admin/Engineer
2.System Administrator
3.Database Administrator
4. Deployment Engineer

5. Storage Engineer — There is a separate video done for this role education:

Technical Roles:
1.Developer
2.Test Engineer/Analyst

F) Among the above, which role can pick up the Cloud/DevOps Automation technology faster?
G) Then how can these roles perform in Cloud with faster deployment?
H) Why and how does manpower reduction happen after Cloud implementation?

I) Why do you need to learn from experienced IT mentors to transform into modern technology?

K) Some people say they can learn by themselves. How much can they learn, and can they cope with the current market needs across various technologies to settle into modern technology? [Refer to point #5 in this blog.]

L) Why do you need to spend/invest more money to re-settle into modern technology?

Note: Along with this video, I have published multiple blogs with video links to create awareness among IT professionals. You can see those blogs on this site itself.

For further understanding please read the below text:

In the current IT world, cloud computing has become regular practice for any IT professional. Whatever cloud services we use, we need to know the current/traditional infrastructure setup. But not every IT professional has that background/knowledge to understand it, because in this industry each of us has played different roles.

Only IT professionals who have handled infra activities will have this knowledge and experience. But the industry needs every IT employee to have this knowledge, apart from operating the cloud service provider's [ex: AWS, Azure, GC, etc.] products, even if you are a certified Solutions Architect with that cloud service provider.

Only then will the certified professionals be able to use these products/services and implement them in a cloud setup. Hence, infra domain knowledge or experience is mandated for every IT professional working with cloud services or in that infra setup. I have been hearing that during cloud professionals' recruitment, the interviewers keep some questions on this area as well. Only if one understands the network domain setup will they be able to design the cloud architecture. Hence more employers are keen to have this domain knowledge in the certified cloud professionals they recruit.

I have started a Cloud Practices group to educate IT professionals and share this domain knowledge with them. The below link can be used to join/apply:

https://www.facebook.com/groups/585147288612549/about/

Note:

AWS: POC-How to Build WordPress web site in AWS Cloud/Network ?


Most blog websites are operated by WordPress [WP] software, which is developed in PHP. Even my site [vskumar.blog] runs on this software. To set up WP, a tier-based architecture/setup is required; for PHP-related software we can have a 2-tier architecture setup.

For a detailed analysis and the required AWS components for setting up WP, you can visit/follow the below blog:
https://vskumar.blog/2018/12/31/2-aws-wordpresswp-infrastructure-creation-using-a-free-tier-account/
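As a rough sketch of the 2-tier idea (not the exact POC from the blog or video): tier 1 is an EC2 running Apache/PHP with WordPress, and tier 2 is a MySQL database. The Ubuntu package names and the DB endpoint below are illustrative assumptions:

```python
# Sketch: 2-tier WordPress layout -- web tier (EC2 with Apache+PHP) and data tier (MySQL).
# The package list and the DB endpoint are illustrative assumptions, not exact POC values.
DB_ENDPOINT = "wordpress-db.example.internal"  # placeholder for an RDS/MySQL host

def web_tier_user_data(db_host: str) -> str:
    """Build an EC2 user-data script that installs Apache, PHP, and WordPress."""
    return "\n".join([
        "#!/bin/bash",
        "apt-get update -y",
        "apt-get install -y apache2 php php-mysql",   # web tier packages
        "cd /var/www/html",
        "wget https://wordpress.org/latest.tar.gz",    # fetch WordPress
        "tar -xzf latest.tar.gz",
        f"# wp-config.php would point DB_HOST at {db_host}",  # tier-2 connection
    ])

if __name__ == "__main__":
    print(web_tier_user_data(DB_ENDPOINT))
```

Splitting the PHP web server from the database this way is what makes it a 2-tier setup: each tier can be sized, secured, and replaced independently.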

In this demo video, the WP site-building project was discussed with the different design steps for AWS components, and it was demonstrated well by one of our students [an experienced IT professional].


These are weekly assignments they get, and they need to prove themselves with a demo in a team, just as it happens in a typical live project team. By delivering this activity on a weekly basis, one loses any fear of doing live infra tasks. They become habituated to project activities before they join a real job, and they will be productive resources from day one onwards. These are the major benefits of my course.

Also, visit the below blog:

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

AWS: POC-How to do VPC Peering with 2 VPCs ?

When we had two networks in the traditional setup, we used to do internal networking within the premises, used those servers for software APIs, and the SysAdmin/DBA/Network Admin role people managed them well. But how do we implement this kind of setup in the Cloud?

Now, in the Cloud, how can this internetworking be done through VPC peering?

Watch this introduction chapter before you go to the below POCs:

In this POC project, it is demonstrated how we can create such an environment using 2 VPCs and peering between them, with deeper analysis and design steps for the different services: how the Linux and Windows EC2s in private subnets can be accessed using NAT and jump servers, and how to operate an EC2 in one VPC from an EC2 in the other VPC. Watch the design-steps analysis video from the below link:
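As a rough illustration of the peering step, a minimal CloudFormation fragment could look like the following. This assumes two pre-created VPCs whose IDs and route tables are passed in as parameters; the CIDR values are placeholders you would replace with your own.

```yaml
# Hypothetical sketch: peer two existing VPCs and add routes between them.
Parameters:
  VpcAId: { Type: AWS::EC2::VPC::Id }
  VpcBId: { Type: AWS::EC2::VPC::Id }
  VpcARouteTableId: { Type: String }
  VpcBRouteTableId: { Type: String }
Resources:
  Peering:
    Type: AWS::EC2::VPCPeeringConnection
    Properties:
      VpcId: !Ref VpcAId
      PeerVpcId: !Ref VpcBId
  RouteAToB:                            # let VPC-A reach VPC-B's CIDR
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref VpcARouteTableId
      DestinationCidrBlock: 10.1.0.0/16 # placeholder: VPC-B CIDR
      VpcPeeringConnectionId: !Ref Peering
  RouteBToA:                            # and the reverse route
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref VpcBRouteTableId
      DestinationCidrBlock: 10.0.0.0/16 # placeholder: VPC-A CIDR
      VpcPeeringConnectionId: !Ref Peering
```

Note that peering requires non-overlapping CIDR ranges, and the security groups on each side must still allow the traffic.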

In the below video a POC analysis is discussed with an Experienced Cloud/DevOps professional.

Watch the below video on how to configure a NAT Gateway and use it for a private EC2 MySQL configuration.
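For orientation, the NAT Gateway piece of such a setup can be sketched in CloudFormation as below. The subnet and route-table IDs are assumed to be passed in as parameters; this is an illustrative fragment, not the POC's actual template.

```yaml
# Hypothetical sketch: a NAT Gateway in a public subnet so private-subnet
# EC2s (e.g. a MySQL box) can reach the internet for package installs.
Parameters:
  PublicSubnetId: { Type: AWS::EC2::Subnet::Id }
  PrivateRouteTableId: { Type: String }
Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnetId
  PrivateDefaultRoute:             # send private-subnet outbound traffic via the NAT
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTableId
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway
```

The private EC2 stays unreachable from the internet; only its outbound traffic flows through the NAT Gateway.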

For a live VPC Peered POC, you can visit the below blog:

https://vskumar.blog/2020/10/12/aws-a-live-interview-poc-setup-with-elb-vpc-peering-ebs-mount/

To know our courses, visit the below blog also:

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

Learn freely Basics of Agile/DevOps/AWS

Folks, Greetings and welcome to this Group.

Through this group you can learn the following on your own, for free [there are several videos from past sessions with working IT professionals]:

https://www.facebook.com/groups/817762795246646/announcements/

1. The concepts of Agile/Scrum
2. The concepts of DevOps
3. Git/Jenkins/Docker Installation/operation
4. AWS Basics.
1. Apart from the above, if you want to try for the latest Cloud/DevOps positions, these alone are not enough in the current global IT market.
2. You need to learn the complete infrastructure design activities and their implementation, and after that the IAC code writing.
3. Then you need to learn the Cloud-related DevOps processes/tools for deployment. You also need to learn Kubernetes [K8], which handles container orchestration and cluster management in the Cloud; it is a future focus to save infra/deployment cost in IT.
4. All of these are coached in my advanced course.
5. To know these details, please visit the blogs/video in URL: https://vskumar.blog/…/the-goals-for-cloud-and-devops-arch…/.
6. Interested people can contact me to join the course after studying the blogs/video in depth.
7. Please note; this is not a typical training. You will work as a project team member, doing the project tasks and giving demonstrations.
8. We also evaluate people on their keen learning/hard work/grasping power/flexibility/adaptability/self-learning.
9. The course runs up to 6 months, with 4-6 hours of my sessions weekly plus 10+ hours of your self-practice on project tasks.
10. You will deliver some POCs weekly along with the other members.

The above points were presented in a video also:

Good luck in your Cloud/DevOps Journey.

AWS: Follow AWS SAA Best practices for interviews

In the following video the AWS SAA best practices are discussed in detail. These are useful for Cloud Architect/Engineer role job interviews.

If you want to know why you need to learn and grow into a Cloud/DevOps role, watch the below video:

For my Course details please see the below blogs and the videos.

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

The goals for Cloud and DevOps Architects – by coaching

Folks, even after the massive global recession in IT, the Cloud/DevOps automation roles will remain in heavy demand. Those who learn the skills discussed below become hot professionals in global IT who can command a higher CTC; the IT market will be dry for these skills.

Visit the below blogs/videos:

How to Future-Proof Your Career: Becoming a Cloud cum DevOps Architect

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

How a DevOps Architect role is different from A Cloud Architect ?


A quick review on DevOps Practices for DevOps Engineers/Practitioners

Watch this video.

DevOps Patterns
devops-process
  1. DevOps is a terminology used to refer to a set of principles and practices that emphasize the collaboration and communication of Information Technology [IT] professionals in a software project organization, while automating the process of software delivery and infrastructure using Continuous Delivery Integration [CDI] methods.
  2. DevOps also connects the Development and Operations teams to work collaboratively and deliver software to customers in an iterative development model by adopting CDI concepts. The software is delivered in small pieces at different intervals, and sometimes these intervals are accelerated depending on customer demand.
  3. DevOps is a practice globally adopted by many companies, and its importance and implementation keep accelerating at a constant pace. So every IT professional needs to learn the concepts of DevOps and its CDI methods. To know the typical DevOps activities by role, watch the video: https://youtu.be/vpgi5zZd6bs; it is pasted below in the videos.
  4. Even college graduates and freshers need this knowledge to work closely with their new project teams in a company. A fresher who attends this course can get into the project shoes faster and cope with experienced teams.
  5. Put another way, DevOps is an extension of Agile and continuous delivery practice. To move into this career, IT professionals need to learn Agile concepts, software configuration management, release management, deployment management, and the different DevOps principles and practices for implementing CDI patterns, along with the relevant tools for integrating these practices. There are various tool vendors in the market, and open-source tools are very popular; using these tools, the DevOps practices can be integrated to maintain the speed of CDI.
  6. There are tools for version control and CDI automation. One needs to learn the process steps in these areas by attending a course; then the tools can be understood easily. Once you understand these CDI automation practices, learning the tools later by yourself is easy, depending on your work environment.
  7. As mentioned above, every IT company or IT services company needs to adopt DevOps practices to deliver competent service to their customers in the global IT industry. When these companies adopt the practices, their resources also need thorough knowledge of DevOps practices to serve the customers, and the companies benefit from having such knowledgeable resources. At the same time, new joiners, whether experienced or freshers, may receive better perks/CTC or a more competitive offer if they bring this knowledge.
  8. Let us know if you need DevOps training from IT-industry-experienced people, covering the above practice areas to boost you in the IT industry.

Training is given by professional(s) with three decades of global IT experience:

https://www.linkedin.com/in/vskumaritpractices

For DevOps roles and activities watch my video:

Folks, I also run the DevOps Practices Group: https://www.facebook.com/groups/1911594275816833/?ref=bookmarks

I am creating many learning units covering the basics. If you are not yet a member, please apply to utilize them. Read and follow the rules before you click your mouse.

For contact/course details please visit:

https://vskumarblogs.wordpress.com/2016/12/23/devops-training-on-principles-and-best-practices/


How to Future-Proof Your Career: Becoming a Cloud cum DevOps Architect

Maximizing ROI in Cloud/DevOps: The Benefits of Coaching for IT Professionals

In today’s economy, businesses are constantly looking for ways to cut costs. Unfortunately, one of the first places they tend to look is the IT department. With many companies facing employee reductions due to the current global situation, it’s possible that up to 40-50% of IT professionals may lose their jobs.

So what can IT professionals do to safeguard their careers? One solution is to become a Cloud cum DevOps Architect. This is a highly in-demand role that requires expertise in both multi-cloud management and DevOps automation. According to IT research articles, the market for this combination of skills is expected to grow rapidly, and there will soon be a shortage of professionals with these abilities.

To become a Cloud cum DevOps Architect, IT professionals need to develop a range of technical skills. They must be able to plan and execute end-to-end engagements, including infrastructure and DevOps automation. This requires a deep understanding of cloud technologies and the ability to implement them effectively.

To gain these skills, IT professionals can take advantage of expert coaching and training. Coaching can help them learn the most effective techniques for managing cloud infrastructure and automating DevOps processes. They can also learn about the latest tools and technologies that are driving innovation in the field.

If you have more than five years of experience working in IT, becoming a Cloud cum DevOps Architect could be a smart career move. By investing in coaching and training, you can increase your marketability and become a valuable asset to any organization.

So, what should you do next? First, assess your current skills and knowledge to identify any gaps that need to be filled. Then, seek out the resources you need to fill those gaps, such as online courses or expert coaching. With dedication and hard work, you can become a skilled Cloud cum DevOps Architect and take your career to new heights.

If you are an IT working professional [from any country] with more than 5 years of infra-role experience, you can become a Cloud cum DevOps Architect by learning through expert coaching with live implementable activities.
As per the IT research articles, there will very soon be a market crunch for this combination of skills.
For course details, read the below blogs and watch the videos.

I coach the keen learners, who are working IT Professionals [globally].

For course details see the below blog/videos:

https://vskumar.blog/2020/01/20/aws-devops-stage1-stage2-course-for-modern-tech-professional/

The goals for Cloud and DevOps Architects – by coaching

Watch the below videos on how the project tasks are being handled for the course participants:

Follow: https://vskumar.blog/2020/02/25/the-goals-for-cloud-and-devops-architects-by-coaching/

Make a strong decision, before you talk to me.

https://www.facebook.com/vskumarcloud/videos/557369958492692/


Why the DevOps practice team is required to involve in Infra cloud planning?

I was recently talking to some clients about the importance of planning Cloud migration activities, and prepared some guidelines for them as part of my engagement. I would like to share some of those guidelines below.

When the DevOps practice team needs to do the infra setup for a cloud migration, they also need to participate in identifying the infra activities and their specifications, which is very essential.

This needs to be done as the initial step of any Cloud services migration.

In my opinion, this activity should be mandatory when working with any cloud service, like AWS/Azure/Google Cloud, etc.

The attached blog/video contains the same discussion, with the detailed steps required to set up a Virtual Private Cloud. We might have seen the VPC nomenclature with AWS, but a similar setup or name can exist with other Cloud service providers also.

Once this VPC is created, the systems can be hosted on the Cloud.

The Ops team's responsibility is to make sure the Cloud migration is done correctly and completely for the whole live setup.

At the same time, they also need to successfully conduct a pilot testing activity, which is mandated by the Agile Project Management [Agile PM] standards, before they announce go-live.

They also need to run the new cloud setup in parallel with the past production setup for a few weeks.

Below URL contains the initial planning discussion as mentioned:

https://vskumar.blog/2018/12/23/9-aws-saa-what-is-the-initial-step-for-vpc-design-theorydiscussion-video/

For a detailed discussion on infra planning, visit:

https://vskumar.blog/2018/12/04/1-cloud-architect-how-to-build-infrastructure-planning/

https://vskumar.blog/2020/04/16/5-azure-azure-coaching-on-az-104-curriculum/

For my other Azure blogs/videos visit:

1. Azure: What is Cloud Adoption Framework ?

2. Azure: How to adopt Migrate Activity and its tasks with best practices ?

3. Azure: What are Motivations in CAF and how the stakeholder use them for sign-off ?

AWS/DevOps: POC-Infra and DevOps Automation

In modern Cloud/DevOps technology, automation has become popular for saving manpower and IT budget. Among IT roles, these automation roles are always going to be in demand, and this process needs to be followed with any technology or tool.

What areas can be automated ?

What technologies/tools can be used ?

In the below video a POC analysis is discussed with an Experienced Cloud/DevOps professional.

Also, visit the below blog:

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

How to join my groups of different practices and watch some [15%] of my past sessions on the Cloud/DevOps Architects building course ?

Please follow the below guidelines to apply.

What is Site Reliability Engineering [SRE]?

What is Site Reliability Engineering [SRE]?
What are the SRE major components ?
What is Platform Engineering [PE] ?
How are the Technology Operations [TO] associated with SRE ?

What does the DevOps-SRE diagram contain ?
How can the SRE tasks be associated with DevOps ?
How can the infrastructure activity be automated for a Cloud setup ?
How does the DevOps loop process work with SRE, Platform Engineering [PE] and TO ?
What is IAC for a Cloud setup ?
How to get the requirements of IAC in a Cloud environment ?
How can the IAC be connected to the SRE activity ?
How can reliability be established through IAC automation ?
How do the code snippets need to/can be planned for infra automation ?
Many more FAQs can be identified with this video.

For all the answers you need to watch the below discussion video:

If you have an original IT profile and are trying for SRE roles globally in any country, you can contact me for a mock interview. You need to follow these pre-requisites:

1. Connect me on LinkedIn to know you.

2. Share your profile.

3. Fix up a call to discuss on the mock interview effort/phases.

Please note it is chargeable for each phase.

NOTE: I ENCOURAGE THE ORIGINAL PROFESSIONALS ONLY TO GROW/SUSTAIN IN THE CURRENT/FUTURE IT BY LEARNING.

I DO NOT DO PROXY INTERVIEWS. I AM ALLERGIC TO THOSE ATTITUDES/PRACTICES. YOU NEED NOT CALL ME FOR THAT NEED.

Also, visit for some more details:

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

Cloud: What IT roles can vanish with Cloud transition ?

What IT roles can vanish with Cloud transition ? 

If you are in the below roles, you will be targeted in the current recession as the first exit group among IT professionals for a pink slip. What do you need to do to replan your career? Please see/follow the blog/videos with patience.

As per my observation and practice with the trending technology [Cloud], all the Cloud service vendors have built-in serverless computing for many services. The following roles are going to vanish or be reskilled. If these professionals are kept under a recession staff cut, they need to take care of their careers.

1. DBA:–> The DBA tasks are embedded as part of these services. The DBAs used to sit for hours in the past to perform many mundane tasks; now these are all automated.

2. Similarly, many other tasks of the infra roles, Network Admin/SysAdmin, are also automated through Cloud services.

3. As a consolidation, all these 3 roles are clubbed into one role of Cloud Engineer. This role's major task is to automate all the Cloud setup activities under IAC [Infrastructure As Code]. In future only IAC will sustain, saving cost to IT by automating the cloud setup creation activity.

4. If a professional wants to compete for this role, they need to understand the infra needs and the tasks of the past roles in depth, along with multiple infra-related architecture scenarios and in-depth knowledge of Cloud technology. Only then can they analyze the IAC requirements clearly to write the code and test it. This is the domain analysis and design activity they need to consider apart from learning the Cloud technology.

5. We can also consider the Storage Engineer role. All the Cloud vendors have Cloud storage services, with which the mundane creation/maintenance tasks have vanished. Hence this role also will not exist.

IT-infra-Roles

IT-SDLC-Roles

The below video has the discussion on: A) What IT roles can vanish after migrating to the Cloud? B) How are the roles/tasks being transformed to the Cloud through serverless computing technology? C) Why can anybody learn and do the past infra roles with the Cloud? D) What all do they need to learn? E) How can organizations ask an employee to convert to modern technology before deciding to serve a pink slip?
Finally, here are the roles and their tasks:
Traditional Infrastructure building roles:
1.Network Admin/Engineer
2.System Administrator
3.Database Administrator
4. Deployment Engineer

Technical Roles:
1.Developer
2.Test Engineer/Analyst

F) Among the above, which role can pick up the Cloud/DevOps automation technology faster ?
G) How can these roles perform in the Cloud with faster deployment ?
H) Why and how does the manpower reduction happen after Cloud implementation ?

I) Why do you need to learn from experienced IT mentors to transform into modern technology ?

K) Some people say they can learn by themselves. How much can they learn and cope with the current market needs on various technologies to settle in modern technology ? [refer to point#5 in this blog].

L) Why do you need to spend/invest more money to re-settle in the modern technology ?

Note: Along with this video, I have published multiple blogs with video links to create awareness among IT professionals. You can see those blogs on this site itself.

For further understanding please read the below text:

In today's IT world, cloud computing has become regular practice for every IT professional. Whatever cloud services we use, we need to understand the current/traditional infrastructure setup behind them. But not every IT professional has that background, because each of us has played different roles in this industry.

Only the IT professionals who have handled infra activities will have this knowledge and experience. But the industry needs every IT employee to have it, beyond the Cloud service provider [Ex: AWS, Azure, GC, etc..] product operations [even if you are a certified Solutions Architect with that Cloud service provider].

Only then will certified professionals be able to use these products/services and implement them in a cloud setup. Hence infra domain knowledge or experience is essential for every IT professional who works with cloud services or in that infra setup. During Cloud recruitment I have heard interviewers ask questions in this area as well. Only those who understand the network domain setup will be able to design a cloud architecture, which is why many employers want their certified Cloud hires to have this domain knowledge.

I have started a Cloud Practices group to share this domain knowledge with IT professionals. The below link can be used to join/apply:

https://www.facebook.com/groups/585147288612549/about/

 

Note:

 

Why do you need to learn from Infra domain knowledge as certified Cloud Professional ?

Benefits of Cloud

 


Note:

  • I also have special coaching with this domain knowledge coverage using AWS.
  • This kind of coaching you may not find everywhere.
  • Visit the discussion points from the above site to know the level of the coaching.

 

 

AWS& DevOps: Stage1 & Stage2 course for Modern tech. professional

Folks,

Pre-requisites to read this blog: Please read the below blog: What IT roles can vanish with Cloud transition ? 

Cloud: What IT roles can vanish with Cloud transition ?

I have designed my courses with reference to current IT industry needs and what most employers are looking for/demanding in Cloud/DevOps skills and infrastructure knowledge from a new resource/recruit. This is only for working IT professionals, please.

After completing the Stage1 & Stage2 courses you will be an expert in Infra and DevOps automation. A case scenario is discussed in the below blog/video; you should watch it, and also see how the participants produce real-project-style documentation with IAC design and code steps:

https://vskumar.blog/2020/01/31/aws-devops-poc-infra-and-devops-automation/

Please watch the below videos and their detailed descriptions. Connect with me on LinkedIn [ www.linkedin.com/in/vskumaritpractices ] or on Facebook so I can know your profile.

For the Stage1 TOC, watch the below video:

For the Stage2 TOC, watch this video:

Remember; once you complete the Stage1 and Stage2 courses, you will be able to perform the tasks from day one on the job, after understanding the client's Infra/Cloud setup.

During the course, my course tasks are discussed with every keen learner. He/she follows the same to achieve the planned goals of that activity.

Finally, when you complete the Stage2 course you will be doing the infra automation. AWS recommends 6 application migration strategies, called "The 6 R's". I have drafted the further process to follow during automation in the below video description.

The following videos have the discussion on it:

What are the skills required for a Cloud Architect ?

You can watch the below video.

NOTE:

Contact me to learn and gain real Cloud experience, crack the interviews, and get offers in AWS roles globally; or you can even transition to the role within your current company after facing the client interview/selection process, which is very easy with this knowledge.

Please connect with me on FB and have a discussion on your background and your needs/goals. I am looking for serious learners only, with dedicated time. If you are busy on projects, please note that you can wait until you are free to learn; one needs to spend time consistently on the practice, otherwise it is going to be of no use.

I have made a similar curriculum for Azure and GCP also, for the same roles.

For our students' latest demos visit the below blog:

To see how the POCs or infra activities are planned, see an example:

AWS: Network Security questions – Interview Skills-3

This video talks about the possible lack of skills of some Cloud/DevOps Engineers. In continuation of my previous blogs/videos, this video contains the discussion on the "Analysis of Firewall activities from the traditional role to AWS". You can get answers to the questions in the below image through the discussion with an experienced IT professional.


You can visit the below URL for the discussion video:

AWS: Do you think, User IDs of IAM and EC2 are the same ?

When we talk about user ids in AWS or any Cloud, some [non-AWS-practiced] people feel the user ids of IAM and EC2 are the same. But they are not!

  1. IAM user ids are made to access particular services of the AWS platform through the privileges of certain groups. These groups are embedded with AWS policies to grant access to the services.
  2. Whereas an EC2 is a Virtual Machine [either Linux OS or Windows Server], and it needs its own users for access. Just like your physical machines have user ids, the VM has them too.
  3. Watch the below video for details/discussion:

https://business.facebook.com/vskumarcloud/videos/2466098830321840
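To make the contrast concrete, here is a hedged CloudFormation sketch (the user name, group policy and AMI ID are illustrative assumptions, not from the video): the IAM user exists at the AWS platform level, while the OS user is created inside the EC2 VM itself.

```yaml
# Hypothetical sketch: an IAM user vs. an OS-level user on an EC2.
Resources:
  PlatformUser:                    # IAM user: accesses AWS services via group policies
    Type: AWS::IAM::User
    Properties:
      UserName: poc-iam-user
      Groups: [!Ref ReadOnlyGroup]
  ReadOnlyGroup:
    Type: AWS::IAM::Group
    Properties:
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-xxxxxxxx        # placeholder AMI
      UserData:                    # OS-level user lives only inside this VM
        Fn::Base64: |
          #!/bin/bash
          useradd appuser          # an EC2/OS user id, unrelated to IAM
```

Deleting the stack's IAM user would not touch `appuser`, and vice versa, which is the whole point of the distinction.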


AWS: Lack of Cloud/DevOps Engineer Skills-2

This video talks about the possible lack of skills of some Cloud/DevOps Engineers.

Most IT employers expect Cloud or DevOps engineers to build environments automatically using the DevOps process.

In this video the DevOps process and its automation are discussed by mapping them to AWS DevOps.

AWS: Lack of Cloud Engineer Skills-1

This video talks about the possible lack of skills of some Cloud Engineers.

Possible AWS interview questions for an experienced Firewall Engineer.
https://business.facebook.com/vskumarcloud/videos/2588684797876068/

Lack of Skills for Cloud Engineer [AWS]

AWS: Interview with a Firewall Engineer

This video talks about the possible interview questions for a Firewall Engineer who attended an interview for an AWS Cloud Engineer role.

https://business.facebook.com/vskumarcloud/videos/2588684797876068/

To follow my videos visit: https://business.facebook.com/vskumarcloud/

Possible AWS interview questions for an experienced Firewall Engineer.


AWS: What is Data Pipeline ?

This video has the outline on AWS Data Pipeline service.

Please follow my videos : https://business.facebook.com/vskumarcloud/

NOTE:

Let us also be aware: since lakhs of certified AWS professionals are available globally in the market, to differentiate them in their selection, most clients ask about the real experience gained or awareness of IAC. They give you a Console and ask you to set up a specific infra setup in AWS.

In my coaching I focus on having the candidates gain real Cloud architecture implementation experience, rather than pushing through the course with screen operations only. You can watch this USP in my posted videos.


For our coaching details, visit:

For some of our students' latest demos, visit:

AWS: What is CloudFormation ?

Watch this video for CF introduction:

This video has the demo on CloudFormation with IAC to create a simple EC2.

https://business.facebook.com/vskumarcloud/videos/2328450437268264/

Please note; every Cloud/DevOps/Test Engineer needs to drive the infra setup tasks through IAC scripts only, once they are advanced in understanding the infra setup. For every infra setup task they should be able to step down and develop these IAC scripts using different tools. In AWS, we can use CF, Terraform and Ansible very easily for medium-level config script building.
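For example, the "simple EC2" case can be expressed as a short CF YAML template. This is only a sketch: the key-pair name and AMI ID below are placeholders you would substitute for your region and account.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal IAC sketch - one EC2 instance
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName   # existing key pair for SSH access
Resources:
  DemoInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-xxxxxxxx            # placeholder: region-specific AMI
      KeyName: !Ref KeyName
Outputs:
  PublicIp:
    Value: !GetAtt DemoInstance.PublicIp
```

The same setup could be written in Terraform or Ansible; the point is that the infra is described as code rather than clicked together in the Console.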

https://vskumar.blog/2019/07/26/aws-usage-of-cloudformation-templates-1-wp/

For a jumpstart of the learning process, I have started building some videos on understanding the infra scripting process through CF templates with POCs/demos. The following link has these videos; watch them and self-practice with CF.

Please follow my videos : https://business.facebook.com/vskumarcloud/

Once you are proficient, you can choose any of the infra CM tools to write scripts and run them for automated infra setup.

The samples are made with the below infra setup scenarios:

How to Create a simple WordPress website with CloudFormation[CF] in AWS ? :
In this POC exercise, we use a readily available template from the CloudFormation [CF] stacks, with detailed lab steps in the attached video below. There are two categories of WP infra-building CF templates in the CF stacks; in this example I have taken the simple setup, without a load balancer or autoscaling. The specifications/guidelines are given clearly through the video in the below blog:

2. AWS POC : WordPress[WP] infrastructure creation using a free tier account

How to create a LAMP server setup in AWS with a CF template ?

See from the below video:

https://business.facebook.com/vskumarcloud/videos/vl.530504917488179/2424073971155486/

You can also watch:

How to create a simple EC2 instance using CF YAML code ?

https://business.facebook.com/watch/?v=2424073971155486
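For your reference, a minimal CF YAML sketch of a single-EC2 template is below. This is an illustration only, not the template from the video; the AMI ID is a placeholder, so pick a current Amazon Linux AMI for your region before trying it.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - a single EC2 instance (AMI ID below is a placeholder)
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Name of an existing EC2 key pair for SSH access
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro          # free-tier eligible size
      ImageId: ami-0abcdef1234567890  # placeholder; use a current Amazon Linux AMI for your region
      KeyName: !Ref KeyName
Outputs:
  PublicIP:
    Description: Public IP of the instance
    Value: !GetAtt MyEC2Instance.PublicIp
```

The same Parameters/Resources/Outputs structure carries over to JSON templates as well; only the notation changes.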

Note:
I hope you have seen my AWS coaching specimens at the URL: https://www.facebook.com/vskumarcloud/videos/


Comparison of AWS and GCP Certifications

AWS and GCP conduct different certifications for the different roles being played.

Being the top cloud services competitors globally, they have prioritized different technologies in their cloud service implementations. AWS is mature, having occupied the cloud gamut as the first large service provider and implemented these services for years together. GCP, meanwhile, is accelerating its services year on year for the technologies that are in demand, and its professional certifications have evolved accordingly.

You can see the comparison video from the below link:

https://business.facebook.com/vskumarcloud/videos/588782551950258/

AWS: Usage of CloudFormation Templates for IAC

Through this blog I would like to demonstrate CloudFormation [CF] template usage for IAC.

I have observed that in Cloud and DevOps teams, many members come from a non-programming background. But when an infra setup requirement comes into their tasks, the following questions arise:

  1. How can they write the JSON/YAML/Go program/script code ?
  2. How can they understand the Cloud services and the domain setup knowledge ?
  3. What sequence of steps do they need to follow if they are asked to set up the environments ?
  4. How do the VPCs need to be built ?
  5. How do the Load Balancers need to be set up ?
  6. How can autoscaling and load balancing be done across different locations to balance the traffic and maintain infra setup consistency with low latency ?
  7. What steps need to be followed to learn the above processes/procedures ?

AWS has given us ready-made CloudFormation templates for some infra setups to try and test, and later to understand the JSON/YAML script/code and implement it with any CM tool.


Please visit the below blog for samples of YAML/JSON IAC discussion videos:

2. AWS IAC-YAML: How to work with CF for various infrastructures setup ? | Building Cloud cum DevOps Architects (vskumar.blog)

Please note: every Cloud/DevOps/Test engineer needs to drive infra setup tasks through IAC scripts only. For every infra setup task, they should be able to get down to developing these IAC scripts using different tools. In AWS, we can use CF, Terraform, and Ansible very easily for building medium-complexity config scripts.

To jumpstart the learning process, I have started building some videos on understanding the infra scripting process through CF templates, with POCs/demos. The following links have these videos to watch and self-practice with CF.

Once you are proficient, you can choose any of the infra CM tools to write scripts and run them for automated infra setup.

How to Create a simple WordPress website with CloudFormation[CF] in AWS ? :
In this POC exercise, we use a readily available template from the CloudFormation[CF] sample stacks, with detailed lab steps in the video attached below. There are two categories of WP infra-building CF templates mentioned in the CF stacks. In this example, I have initially taken the simple setup, without the load balancer and Auto Scaling. The specifications/guidelines are given clearly to follow in the video.

See the blogs/videos:

https://vskumar.blog/2018/12/31/2-aws-wordpresswp-infrastructure-creation-using-a-free-tier-account/

How to create a LAMP server setup in AWS with a CF template ?

https://business.facebook.com/vskumarcloud/videos/vl.530504917488179/2424073971155486/

You can also watch:

How to create a simple EC2 instance using CF YAML code ?


Are you safe as A Cloud architect on the role ?

vskumarcloud-build-cloud-architect.png

Are you safe as A Cloud architect on the role ?

Why are certified AWS Solutions Architects being served pink slips ?
What could be the reasons ?
Do they clearly understand the role the client expects ?
Why is management so aggressive about proving the cloud implementation on schedule, with ROI ?
Once the cloud migration schedule has started, why is the IT budget frozen ?
Why do they target contractors first when cutting staff ?

What can the IT services companies do with client-terminated contracts ?

Please read the below content patiently and watch the videos for solutions to protect your current Cloud role. Connect with me on LinkedIn to get special coaching on rebuilding your current role to your client's expectations.

Please visit the below URLs:

https://www.facebook.com/pg/vskumarcloud/posts/

Also, Visit:

https://vskumar.blog/2019/03/04/how-best-you-can-utilize-cloud-architect-role-as-an-efficient-it-management-practitioner/

If you want to know the size of the Cloud job market globally, visit:

https://vskumar.blog/2019/02/14/what-will-be-the-size-of-cloud-market-in-it-by-2022/

FYI: https://www.linkedin.com/jobs/aws-jobs/

To know the real articulation of the SA role, visit my AWS SAA session videos:

For freshers/OPTs: For Agile/DevOps/AWS training contact for schedules

For course:

  1. This is for OPTs and fresh graduates of Indian colleges who passed out in 2019.

  2. Those who are self-driven and trying for jobs with the given skills, without getting into somebody else's shoes, come and get trained. You would not just watch the labs as demos; you will practice the labs under the coach's guidelines and supervision, so that you gain technical competency with self-confidence.

  3. A new batch is planned in a cost-effective way. Contact through the FB links given in the blog. Good luck in your job search and in the IT profession.

  4. Also, visit the below blogs for the AWS basic course and AWS-DevOps: https://vskumar.blog/2019/05/04/for-freshers-opts-for-agile-devops-aws-training-contact-for-schedules/
  5. https://vskumar.blog/2020/08/21/aws-devops-course-for-freshers-with-project-level-tasks/
  6. If you are keen on doing a fast-track course to attack the job market, instead of learning by yourself for months together and getting stuck, you can opt for #4 and #5. Read the video descriptions also. The contact details are given on this web page's logo.

For specimen sessions you can watch the below videos:

  1. Agile: What are Agile manifesto Principles & How they can be used for SW ?
    https://www.facebook.com/328906801086961/videos/617149372179077/
  2. Agile: What are the phases of Agile Project ?
    https://www.facebook.com/328906801086961/videos/183496779674097/
  3. Agile: What is Disciplined Agile Delivery[DAD] ?
    https://www.facebook.com/328906801086961/videos/184822556096397/
  4. Agile: What is Model Storming ?
    https://www.facebook.com/328906801086961/videos/493982721500147/
  5. Agile: What is Scrum Framework and its roles ?
    https://www.facebook.com/328906801086961/videos/878197645967794/

Free-orientation-for Freshers-2019

Join in the below group to follow the above guidelines:

https://www.facebook.com/groups/817762795246646/

This group is meant only for coaching freshers/OPTs on the topics mentioned in the group logo. You can forward it to your circles who came out of college in the latest passed-out year. They need to provide evidence that they are from the latest batch only, and the FB ID needs to have a photo with profile details. Only with these specs are they allowed into this group.

For the course: this is for OPTs and Indian college graduates who passed out in 2019, who are self-driven and trying for jobs with the given skills without getting into somebody else's shoes; come and get trained. A new batch is planned in a cost-effective way. Contact through the FB links given in the blog. Good luck in your job search and in the IT profession.

https://www.linkedin.com/jobs/aws-jobs/

You can also see the Basic AWS and DevOps course details from the below blog/videos:

https://vskumar.blog/2020/08/21/aws-devops-course-for-freshers-with-project-level-tasks/

How to Create a Learning Organization during DevOps Practices implementation ?

Create Learning-DevOps organization.png

If you are keen in learning DevOps Practices as on latest, you can apply to join in my group: https://www.facebook.com/groups/1911594275816833/

Please note there are rules to follow.

For DevOps roles and activities watch my video:

For contact/course details please visit:

https://vskumarblogs.wordpress.com/2016/12/23/devops-training-on-principles-and-best-practices/

Contact me for the AWS DevOps Engineer – Professional certification. Very few people globally cover the complete syllabus as I have explained it from the AWS exam guide. If interested, please ping me on FB with your profile URL. Please note I coach only working global IT professionals; hence the profile URL is mandatory, so I can know your background.

Watch the below 50 minutes video for the above analysis:

How best you can utilize Cloud Architect role as an efficient IT Management practitioner ?

vskumarcloud-build-cloud-architect.png

What are the Skills required for a Cloud Architect ?
How best you can utilize this role as efficient IT Management practitioner ?

Are you an IT management practitioner who wants to take your cloud migrations to the next level? Look no further than the role of Cloud Architect! While many organizations use this role for DevOps practices, it’s important to dedicate a separate Cloud Architect role for efficient and effective cloud migrations.

In 2017, Gartner published a list of activities and required skills for the Cloud Architect role, and while you may not be able to use all of them, there are some key activities that can help you see a difference in your business. By separating this role from DevOps activities, you can focus on Cloud Infrastructure Planning, Designing, Building, and Implementing to meet your business needs.

This separation can help to resolve burning issues related to DevOps and Cloud faster and reduce risks. If you’re an IT management practitioner who wants to implement these practices, don’t hesitate to reach out to me on Facebook. Check out my groups for IT professionals, where I post valuable discussions, videos, and blogs to help you take your cloud migrations to the next level.

Don’t miss out on the opportunity to optimize your cloud migrations and drive your business forward. Dedicate a Cloud Architect role and see the difference it can make.

For the relevant blog, visit:

You can also see:

What is the need of upgrading cloud skills for a PM role ?

Finally, I would say you are mandated to separate this role from DevOps activity assignments and dedicate the role to;
  • Cloud Infrastructure Planning,
  • Designing,
  • Building and
  • Implementing effectively for business needs
This way, many burning issues related to DevOps and Cloud can be separated and resolved faster to move forward, and many risks can be reduced!!
If you are an IT management practitioner and would like any clarifications on implementing these practices, you are advised to contact me on FB and have a call.

Visit my currently running Facebook groups for IT professionals, with my valuable discussions/videos/blogs posted:

 

DevOps Practices Group:

https://www.facebook.com/groups/1911594275816833/about/

 

Cloud Practices Group:

https://www.facebook.com/groups/585147288612549/about/

 

Build Cloud Solution Architects [With some videos of the live students classes/feedback]

https://www.facebook.com/vskumarcloud/

 

 

MicroServices and Docker [For learning concepts of Microservices and Docker containers]

https://www.facebook.com/MicroServices-and-Docker-328906801086961/

How to create AWS S3 Bucket

You can also compare the SAA salary among all the roles being played with AWS:

See the difference in the salary amounts to pick your role according to your professional potential.

Do you want to know the size of the Cloud job market globally? If yes, visit:

https://vskumar.blog/2019/02/14/what-will-be-the-size-of-cloud-market-in-it-by-2022/

To know the real articulation of the SA role, visit my AWS SAA class video:

AWS-SAA-Course

AWS:Do you want to try S3 Glacier POC ?

This video has the discussion on S3 Glacier POC:

 

 

The attached video has the lab demonstration.

You can watch it and leave your message.

 

AWS-SAA-Course

DevOps Practices & FAQs -3 [Domain area]

For coaching on your DevOps job interviews, contact me on FB.

Please note the pre-requisites:
You should already have attended/practiced the tools sessions.

 

Please read the previous FAQs series also: Devops-practices-faqs-1

https://vskumar.blog/2018/12/29/devops-practices-faqs-2-devops-practices-faqs/

faqs-devops-eng-network-knowedge

Visit for free concepts learning:

To join DevOps Practices group visit:

https://www.facebook.com/groups/1911594275816833/about/

To join Cloud Practices group visit:

https://www.facebook.com/groups/585147288612549/about/

 

 

If you want to gain the above knowledge, visit the below class video:

 

 

https://www.facebook.com/groups/1911594275816833/about/

FB-DevOps-Practices Group-page

The following videos elaborate on the need for, and advantages of, IT companies and professionals thinking about converting to DevOps practices. Comparative reports have been incorporated.

https://www.youtube.com/watch?v=O3yBGbPQ4SM

 

https://www.youtube.com/watch?v=9engYBrnwA4

 

https://www.youtube.com/watch?v=O_SxF3hJjUM


Why do you need to learn from Infra domain knowledge as certified Cloud Professional ?

Benefits of Cloud

In the current IT world, cloud computing has become regular practice for any IT professional. Whatever cloud services we use, we need to know the current/traditional infrastructure setup. But not every IT professional has that background/knowledge to understand it, because in this industry each of us has played different roles.

Only the IT professionals who have handled infra activities will know this area and have experience in it. But the industry needs every IT employee to have this knowledge, beyond the cloud service provider's [Ex: AWS, Azure, GC, etc.] product operations [even if you are a certified solutions architect with that cloud service provider].

Only then will certified professionals be able to use these products/services and implement them in a cloud setup. Hence infra domain knowledge or experience is mandatory for every IT professional who works on cloud services or in that infra setup. I have been hearing that during cloud professional recruitment, the interviewers ask some questions in this area also. Only if one understands the network domain setup will one be able to design the cloud architecture. Hence more employers are keen to have this domain knowledge in the certified cloud professionals they recruit.

The below video has the details of the SAA course, with the domain knowledge expertise, for you:

https://business.facebook.com/vskumarcloud/videos/642868796242922/

I have started a Cloud Practices group to educate IT professionals and share this domain knowledge with them. The below link can be used to join/apply:

https://www.facebook.com/groups/585147288612549/about/

Visit for free concepts learning:

To join DevOps Practices group visit:

https://www.facebook.com/groups/1911594275816833/about/

To join Cloud Practices group visit:

https://www.facebook.com/groups/585147288612549/about/

Note:

  • I also offer special coaching covering this domain knowledge, using AWS.
  • You may not find this kind of coaching everywhere.
  • Visit the discussion points on the above site to know the level of the coaching.

2. AWS POC : WordPress[WP] infrastructure creation using a free tier account

With reference to my previous blog on:
1. AWS:How to create and activate a new account in AWS ?
https://vskumar.blog/2018/09/01/1-awshow-to-create-and-activate-a-new-account-in-aws/

I have made a scenario-based “AWS services usage” blog in this content, which can also be considered a Proof of Concept [POC] project.

If you are new to cloud technology, I have made a video cum blog for you to understand its initiation/evaluation concepts. This video is most useful for people in PM/Cloud Architect/DevOps roles.

For video Visit:

For the above video’s blog:

https://vskumarcloudblogs.wordpress.com/2016/11/30/how-to-initiate-a-cloud-transformation/

Now, let us move forward with this blog content.

In this AWS exercise, I have described/demonstrated WordPress[WP] infrastructure creation using a free tier account.

At the end of this blog, the micro-level practiced lab steps are copied, and there is a recorded video on my channel.

I would like to explain the architecture/design perspective through this blog before you go to the lab steps.
After doing this exercise, we can simply and finally come to the following conclusions:

a) Creating a blogging infrastructure can be fully automated through AWS services.
b) Infrastructure can be created at any time, on demand, without any up-front
commitment for how long we will use it in AWS.
c) We pay for our infrastructure depending on how many hours we use it.
d) The infrastructure consists of several parts,
such as: i) virtual servers, ii) load balancers, and iii) databases.
e) The infrastructure can be deleted with one click, at no cost to us.

This process is powered by AWS automation, so it will not be billed to our free tier account after deletion!

First let us analyze on WP and its components.

How a WordPress infrastructure can be planned?

Assume we have a startup company that publishes many white papers and blogs.

Assume our startup company currently uses WordPress[WP] to host over 500 blogs on our own servers. The blogging infrastructure must be highly available, because customers don't tolerate outages of any servers. To evaluate whether a migration to AWS services is possible, we need to plan the following three activities and try them out with an AWS free-tier account:

A) Set up a highly available blogging infrastructure in AWS.
B) Estimate monthly costs of the infrastructure.

C) Finally, Delete our blogging infrastructure to save cost from free-tier account.

For our understanding on WP;

  • WordPress[WP] is written in PHP and uses a  MySQL database to store data.
  • Apache is used as the web server to serve the blog pages.
  • With this information in our mind, we map our requirements to AWS services  to test the infrastructure creation.

Now, let us analyze on “what are the AWS services required for our WP test infrastructure?”.

We need  the below AWS services to do this activity:
I. Elastic Load Balancing (ELB),
II. Elastic Compute Cloud (EC2),
III. Relational Database Service (RDS) for MySQL and
IV. Security groups.

Let us analyze the functions/benefits of these AWS services.

I. Elastic Load Balancing (ELB):

AWS offers a load balancer as a service.
The Elastic Load Balancer (ELB) distributes traffic to a bunch of servers behind it in a cloud environment. It’s highly available by default.

Let us assume our startup company's blogs are published globally and can be accessed by users from many countries. Assume a lot of users access this content globally. In the traditional method, your load cannot be balanced without having physical servers connected through VPNs/networks, etc., in different locations. Think about the hardware/software/maintenance/FMG cost of this traditional infrastructure. Being a startup company, we cannot think of spending that much. No way!! Hence we need to depend on a cloud service provider.

With AWS ELB, the load can be balanced by distributing the blog users' traffic to different virtual servers in the cloud environment. To denote this distributed load-balancing architecture, I have collected a diagram on:

WordPress infrastructure and Load Balancing through the ELB AWS service,
which is pasted here for your clarity on the ELB function.

WP-Infra-ELB-load Distribution.png

II. Elastic Compute Cloud (EC2):

A virtual server is provided by the Elastic Compute Cloud (EC2) service of AWS. We will use a Linux server with an optimized distribution called Amazon Linux to install Apache, PHP, and WordPress during our exercise. Please note we are not limited to Amazon Linux; we can also choose Ubuntu, Debian, Red Hat, or Windows. Virtual servers can fail at any time, so we need at least two of them for contingency planning; the load balancer will distribute the traffic between them. The beauty of the AWS service in case of a server failure is that the load balancer stops sending traffic to the failed server, and the remaining [contingency] server handles all the requests until the failed server is replaced. Let us not worry about this communication: you will be notified of the status through alerts.

A sample architecture diagram is pasted here FYI with two EC2 instances.

EC2-two instance-ELB-Scenario.png

III. Relational Database Service (RDS) for MySQL:

WordPress relies on the popular MySQL database. AWS provides MySQL through the Relational Database Service (RDS). We can choose the database size (storage, CPU, RAM), and RDS takes care of the rest (backups, updates). RDS can also provide a highly available MySQL database through replication. In the traditional [non-cloud] model we had a similar setup, but it incurs huge costs. Using AWS cloud services, this can easily be maintained with only minor costs.
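As a sketch of how such a database can be declared in a CF template (the resource name, sizes, and credentials below are illustrative placeholders, not the lab template's values):

```yaml
# Sketch only: an RDS MySQL instance for the WP stack.
Resources:
  WordPressDB:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.micro      # free-tier eligible size
      AllocatedStorage: '20'            # storage in GiB
      DBName: wordpress
      MasterUsername: admin
      MasterUserPassword: ChangeMe12345 # placeholder; pass via a NoEcho parameter in practice
      MultiAZ: true                     # standby replica for high availability (extra cost beyond free tier)
```

RDS handles the backups, patching, and failover for this instance; we only declare the size and engine.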

In this context, we can see the MySQL features of the AWS services offering in the below diagram.

AWS-MYSQL-RDS-features

IV. Security groups:

In every application architecture, we need to have security features in place. These can either be embedded in the applications or applied through security tools, so that the entire architecture is protected.

But in cloud services, providers offer these capabilities differently within their service offerings.

Security groups are a fundamental AWS service to control network traffic, like a firewall in traditional systems. Security groups can be attached to a lot of services, like ELB, EC2, and RDS. For example, with security groups we can configure our load balancer as below:

It accepts requests on port 80 from the internet only. The web servers accept connections on port 80 only from the load balancer. And MySQL accepts connections on port 3306 only from the web servers. If we want to log in to our web servers via SSH, we must also open port 22. The rest of the architecture setup can be configured in similar ways.
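The above rules can be sketched in CF YAML roughly as below (the resource names are illustrative, not from the lab template):

```yaml
# Sketch only: the three-tier security group rules described above.
Resources:
  LoadBalancerSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP from the internet to the ELB
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0            # port 80 open to the internet
  WebServerSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP only from the load balancer, plus SSH
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          SourceSecurityGroupId: !GetAtt LoadBalancerSG.GroupId
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0            # SSH; restrict to your own IP in practice
  DatabaseSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow MySQL only from the web servers
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !GetAtt WebServerSG.GroupId
```

Note how each tier only admits traffic from the security group of the tier in front of it, rather than from IP ranges.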

FYI, I have considered a diagram from AWS docs; which denotes a typical AWS multi-tier approach security services with a Firewall:

AWS-Security mulit-tier aproach.png

As shown in the above diagram, a security group acts as a virtual firewall for our instance to control inbound and outbound traffic. When we launch an instance in a Virtual Private Cloud [VPC], we can assign the instance to up to five security groups; that is, AWS allows each instance to belong to up to five different security groups.

So, now, what is our startup company's plan for security?:

Let us assume our startup company's blogging infrastructure consists of two load-balanced web servers running a) WordPress and b) a MySQL database server.

The following tasks are performed automatically in the background through AWS:

  1. Creating an ELB.
  2. Creating a RDS MySQL database.
  3. Creating and attaching security groups.
  4. Creating two web servers.
  5. Creating two EC2 virtual servers.
  6. Installing Apache and PHP via yum.
  7. Installing php, php-mysql, mysql, and httpd.
  8. Downloading and extracting the latest version of WordPress from http://wordpress.org/latest.tar.gz
  9. Configuring WordPress to use the created RDS MySQL database
  10. Starting Apache.
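Tasks 6-10 in the list above are typically driven by the instance's UserData script inside the CF template. A rough sketch (the AMI ID is a placeholder, and the real stack template's script will differ in detail):

```yaml
# Sketch only: a web server whose UserData mirrors tasks 6-10 above.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890     # placeholder Amazon Linux AMI
      UserData:
        Fn::Base64: |
          #!/bin/bash -ex
          yum install -y php php-mysql mysql httpd    # tasks 6/7: Apache, PHP, MySQL client
          curl -O http://wordpress.org/latest.tar.gz  # task 8: download WordPress
          tar -xzf latest.tar.gz -C /var/www/html --strip-components=1
          # task 9 (configuring wp-config.php with the RDS endpoint) is omitted here
          service httpd start                         # task 10: start Apache
```

CloudFormation runs this script automatically on first boot of each web server, which is why the whole setup needs no manual login.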

Before going into the above steps, I would like to show the below diagram for your understanding of “the setup of WP hosting on AWS”. You can download it through the given URL and view it as an image file.

AWS-WP-Hosting setup

Now, let us recap our conclusions from the beginning of this blog. We need to do the below activities by the end of the exercise.

  1. Creating a blogging infrastructure.

  2. Analyzing costs of a blogging infrastructure.

  3. Exploring a blogging infrastructure.

  4. Shutting down a blogging infrastructure.

  5. Deleting infrastructure from AWS Account.

1. What actions do we need to consider for creating the blogging infrastructure in AWS?: To create the blogging infrastructure, we need to follow the below steps on the AWS console.

Note: Please note that from time to time the screen flows [micro-level steps] might change on AWS, but the process for creating this WP infra should remain the same to understand.

  1. Open the AWS Management Console at https://console.aws.amazon.com.
  2. Click Services in the navigation bar, and click the CloudFormation service.
  3. Click on Create Stack to start the four-step wizard.

Now we will see what this four-step wizard process contains.

I. Creating a blogging infrastructure: Step 1 of 4

You need to name your infrastructure: enter “wordpress” as the Name. For the Source option, select Specify an Amazon S3 Template URL, as shown in the screen [lab exercise screen]. Copy this URL and save it somewhere in a text file for future reference/usage. You will understand this process clearly during the lab demo.

II. Creating a blogging infrastructure: Step 2 of 4

Click Next and set the KeyName to “vskumarkey” [an example only; you can give any name] for Step 2 of 4. Click Next to create a tag for our infrastructure on the next screen. These steps can be seen clearly in the lab practice steps.

III. Creating a blogging infrastructure: Step 3 of 4

A tag consists of a key-value pair and can be used to add information to all parts of our infrastructure. We can use tags to differentiate between testing and production resources, add the cost center to easily track costs in our organization [if any], or mark resources that belong to a certain application if we host multiple applications in the same AWS account.

In this example, we will use a tag to mark all of our resources that belong to the “wordpress system”. This will help us easily find our infrastructure later. Use “system” as the key and “wordpress” as the value. Click Next. Finally, we will see a confirmation page for Step 4 of 4. For clarity, look into the lab steps.

IV. Creating a blogging infrastructure: Step 4 of 4

In the Estimate Cost row, click Cost. This will open a new browser tab in the background. Keep this browser tab open; we will come back to this screen later. Switch back to the original browser tab and click Create. We can see the Review screen on the next page.

Now our infrastructure will be created. The Review screen shows that wordpress is in the CREATE_IN_PROGRESS state. It takes 15-20 minutes to complete this process.

Now, please take a look at the result by refreshing the page. Select the “WordPress” row, where the Status should be CREATE_COMPLETE. If the status is still CREATE_IN_PROGRESS, be patient until it becomes CREATE_COMPLETE.

Switch to the Outputs tab [lower part of the screen], which shows the blogging infrastructure result. There we can find the URL of our “wordpress system”; click it to visit the system.

What is AWS Automation here?:

As we discussed at the beginning of this blog, one of the key concepts of AWS is automation: we can automate everything. In the background, our blogging infrastructure was created from a blueprint through this automation, so the above-mentioned [10] tasks were performed in the background by the AWS CloudFormation service. You can see the beauty of this automation during the lab demonstration.

Blogging infrastructure result:

Now that we've created our blogging infrastructure, let us take a look at it. Our infrastructure consists of the following, as we discussed in this blog:

  • Web servers
  • Load balancer
  • MySQL database

Now we will use the resource groups feature of the Management Console to get an overview.

Exploring the created WP Blogging  infrastructure

Now let us understand;

What is Resource Group in AWS?:

  1. A resource group is a collection of AWS resources.
  2. A resource is an abstract term for something in AWS, like an EC2 server, a security group, or an RDS database.
  3. Resources can be tagged with key-value pairs; note that a resource can have more than one key-value pair.
  4. Resource groups specify what tags are needed for a resource to belong to the group.
  5. Furthermore, a resource group specifies the region(s) where the resources should reside. This means these resource groups can span regions globally.
  6. We can use resource groups to group resources if we run multiple systems in the same AWS account. This way the resources are organized among the projects or app architectures.
  7. Let us note that we have tagged the blogging infrastructure with the key “system” and the value “wordpress”.
  8. As an example, from now on we will use this notation for key-value pairs: (system:wordpress). We'll use that tag to create a resource group for our WordPress infrastructure. For further clarity, please look into the lab steps/video.
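In a CF template, the (system:wordpress) tag on a resource can be sketched as below (the resource name and AMI ID are placeholders for illustration):

```yaml
# Sketch only: tagging a resource so it lands in the (system:wordpress) resource group.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID
      Tags:
        - Key: system       # the (system:wordpress) tag used to group this stack's resources
          Value: wordpress
```

Every taggable resource in the stack carries the same pair, which is what lets the resource group collect them all in one view.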

Now let us understand;

How to create  a resource group in AWS?:

    1. In the AWS part of the top navigation bar, click Create a Resource Group.
    2. Set Group Name to “wordpress” or whatever you like.
    3. Add the tag system with the value wordpress.
    4. Select the region N. Virginia [for example]. [I have used my existing account]
    5. Save the resource group.
    6. It will take you to the next screen. Follow the steps below.

How to see the Blogging infrastructure web servers via resource groups details?:

  1. Select Instances under EC2 on the left to see the web servers.
  2. By clicking the arrow icon in the Go column, you can easily jump to the details of a single web server. 
  3. Now you are looking at the details of your web server, which is also called an EC2 instance.

Details of web servers running the blogging infrastructure:

  On this screen, the important details are as below:
  • Instance type: Tells us how powerful your instance is.
  • Public IP address: The IP address that is reachable over the internet. You can use it to connect to the server via SSH.
  • Security groups: If you click View Rules, you’ll see the active firewall rules, like the one that enables port 22 from all sources (0.0.0.0/0).
  • AMI ID: Recollect that we used the Amazon Linux operating system (OS). If you click the AMI ID, you will see the version number of the OS, among other details.

We also need to know the utilization of the web servers, just as we used to monitor live production boxes.

Looking for webserver utilization and metrics in AWS:

1. On this screen, select the Monitoring tab to see how your web server is utilized.

2. This will become part of our job: really knowing how the infrastructure is doing.

3. AWS collects some metrics and shows them in the Monitoring section. If the CPU is utilized at more than 80%, you should add another server to prevent page load times from increasing.
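
The 80% rule of thumb above can be sketched as a tiny scale-out check; the sample utilization value is invented for illustration:

```shell
# Decide whether to add another web server based on average CPU utilization.
# 85 is a made-up sample value standing in for a CloudWatch average.
CPU_UTILIZATION=85
THRESHOLD=80

if [ "$CPU_UTILIZATION" -gt "$THRESHOLD" ]; then
  DECISION="scale-out"
else
  DECISION="ok"
fi
echo "CPU at ${CPU_UTILIZATION}%: $DECISION"
```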

Now let us understand;

How to check the Blogging infrastructure load balancer via resource groups?:

  1. We can find the load balancer by selecting Load Balancers under EC2 on the left of the page.
  2. By clicking the arrow icon in the Go column, you can easily jump to the details of the load balancer.
  3. Now, we are looking at the details of your load balancer.
  4. Here, the most interesting part is how the load balancer forwards traffic to the web servers.
  5. The blogging infrastructure runs on port 80, which is the default HTTP port.
  6. The load balancer accepts only HTTP connections to forward to one of the web servers that also listen on port 80.
  7. The load balancer performs a health check on the virtual servers attached.
  8. Both virtual servers are working as expected, so the load balancer routes traffic to them.    
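
The health-check behaviour in steps 7 and 8 can be sketched with a simple rule: an instance answering its HTTP check on port 80 with a 2xx status stays in rotation, and anything else is taken out. The function name below is illustrative:

```shell
# Map an HTTP health-check status code to a load-balancer instance state.
instance_state() {
  case "$1" in
    2??) echo "InService" ;;      # healthy: 2xx responses pass the check
    *)   echo "OutOfService" ;;   # anything else fails the check
  esac
}

instance_state 200   # a working web server
instance_state 503   # a failing web server
```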

How to check the MySQL server ?:

Details of the MySQL database which stores data for the blogging infrastructure

  1. Now; let’s look at the MySQL database. You can find the database in a resource group named wordpress.
  2. Select DB Instances under RDS on the left.
  3. By clicking the arrow icon in the Go column, you can easily jump to the details of the database.
  4. Now the details of our MySQL database are shown in the screen.
  5. The benefit of using RDS is that we no longer need to worry about backups because AWS performs them automatically.
  6. Updates are performed by AWS in a custom maintenance window. Keep in mind that you can choose the right database size in terms of storage, CPU, and RAM, depending on your needs.
  7. AWS offers many different instance classes, from 1 core with 1 GB RAM up to 32 cores with 244 GB RAM.

Note: I would like to emphasize a comparison with the traditional [non-cloud] approach. There, we used a scheduler to back up the DB periodically, and sometimes we had to shut down live systems to take the backups. With the AWS RDS service, we need no interruption to the business to take a backup: RDS takes care of everything. We can also save the sysadmin/DBA effort while using AWS services. This way, both staff effort and business-service downtime are saved.

As we planned three activities for this whole exercise as on now; we have completed the activity of “A) Set up a highly available blogging infrastructure in AWS.”

Now, we are going to work on; “B) Estimate monthly costs of the infrastructure.”

  1. As part of this exercise, cost estimation also needs to be done.
  2. To analyze the cost of our blogging infrastructure, we will use the AWS Simple Monthly Calculator.
  3. Recollect that we clicked the Cost link in the previous section to open a new browser tab.
  4. Now switch to that browser tab, and you will see a screen as shown in the chart below.
  5. To estimate our monthly bill, expand the Amazon EC2 Service and Amazon RDS Service rows.

Now, Let us see and understand the below chart.

Blogging infrastructure cost calculation

Now it’s time to evaluate costs. How much does it cost?

  1. In this example, our infrastructure will cost around $60 per month.
  2. Let us keep in mind that this is only an estimate.
  3. We are billed based on the actual usage till the end of the month.
  4. Everything is on-demand and usually billed by hours of usage or by gigabytes of usage.
  5. But what influences the usage for this infrastructure?

Let us analyze different situations and identify the costing parameters as below:

Traffic processed by the load balancer: Expect the costs to go down during festival/vacation seasons like December and the summer, when people are on vacation and not looking at our blogs.

Storage needed for the database: If our startup company increases the number of blogs, the database will grow, so the cost of storage will increase this way.

Number of web servers needed: A web server is billed by hours of usage. If two web servers are not enough to handle all the traffic during the day, we may need a third server in our AWS/EC2 setup. In that case, we will consume more hours of virtual servers.
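
A back-of-the-envelope version of this estimate can be sketched in shell. All hourly rates below are placeholder assumptions, not actual AWS prices; they are chosen only so the total lands near the ~$60 figure, and the AWS Simple Monthly Calculator remains the authoritative source:

```shell
# Rough monthly cost sketch: hours in a 30-day month times assumed cent/hour
# rates for each component. Integer cents keep the shell arithmetic exact.
HOURS=720            # 24 * 30
WEB_SERVERS=2
EC2_RATE=2           # assumed cents/hour per web server
RDS_RATE=3           # assumed cents/hour for the MySQL instance
ELB_RATE=2           # assumed cents/hour for the load balancer

EC2_COST=$(( HOURS * EC2_RATE * WEB_SERVERS / 100 ))
RDS_COST=$(( HOURS * RDS_RATE / 100 ))
ELB_COST=$(( HOURS * ELB_RATE / 100 ))
TOTAL=$(( EC2_COST + RDS_COST + ELB_COST ))
echo "EC2 ~\$${EC2_COST}, RDS ~\$${RDS_COST}, ELB ~\$${ELB_COST}, total ~\$${TOTAL}/month"
```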

Now we have a clear overview of the blogging infrastructure creation and its cost estimation/analysis. You will be able to do the same for your AWS migration projects.

Now, with reference to the 3rd step, it is time to shut down the infrastructure and complete our AWS migration evaluation exercise.

Let us recap our planned 3rd activity;

C) Finally, Delete our blogging infrastructure to save cost from free-tier account.

Now, go to the CloudFormation service in the Management Console and do the following:

  1. Select the WordPress row.
  2. Click Delete Stack, as shown in top of the screen.
  3. After you confirm the deletion of the infrastructure, it takes a few minutes for AWS to delete all of the infrastructure’s dependencies.
  4. Please note; this is an efficient way to manage our infrastructure.
  5. Just as the infrastructure’s creation was automated, its deletion is also completely automated.
  6. You can create and delete infrastructure on-demand whenever you would like, and you only pay for infrastructure when you create and run it.  

<===== I copied the relevant lab practiced steps for your easy use ======>

These steps were performed on 9th Sept 2018 on my free-tier account, for student purposes.
AWS may change its screen flow or UI from time to time.
Hence, from the blog narration above, some detailed steps are given in the lab practice steps below for your easy use/practice.

1. Sign-in to your AWS console account from URL:
https://aws.amazon.com/

2. Login to the account.
3. Click on Services.
4. Please note we need to use the CloudFormation service of AWS in this exercise.
Hence click on CloudFormation.
5. You will be shown the screen to create a new stack; click on it. Note, as mentioned in my blog, it is a 4-step process.
6. Now, select a sample template. Choose WordPress blog; it creates/shows the S3 template URL.
7. Copy the S3 Template URL into a file for future usage.
8. Now, click on Next to go to next screen.
9. Under the Specify Details columns, fill in the details.
10. Please note I want to name my infra “wordpress”.
11. It has the predefined DB “wordpressdb”; I will keep it.
12. I give the DB passwords as required in the entry boxes/columns.
13. DB user: “vskumarwp”. It uses instance type t2.small.
14. Now, as you are aware, we need the local SSH keys we created earlier. I have some keys and have selected one. [If this account is new for you, create the SSH keys first.]
15. I need to give the range of IPs to be used for our WP servers.
I want to use 192.168.116.9/15.
16. Now, click Next.
Please note the above steps are required for you.
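
As a side note on the CIDR value typed above, the prefix length decides how many addresses the range spans; a /15 is enormous for a lab like this, and a /24 with 256 addresses is usually plenty. A quick sketch:

```shell
# Number of addresses covered by a CIDR prefix: 2^(32 - prefix).
prefix=15
addresses=$(( 1 << (32 - prefix) ))
echo "/$prefix covers $addresses addresses"

small=$(( 1 << (32 - 24) ))
echo "/24 covers $small addresses"
```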

17. Let me give the value as “wordpress” and the key as “system”, as mentioned in my blog.
18. I want to skip the ARN value in this exercise, as I mentioned in the blog. Since I will not have any ARN, monitoring is not mandatory for me in this exercise. Then press “Next”.

19. Now, we will see the review screen as mentioned in the blog. On the review screen, press the Create button.
20. We are in creation process screen as mentioned in Blog.
CREATE COMPLETE IS DONE NOW.

21. Now let me click on my instance vskumarwp.

22. Now, go to the top navigation bar and select Resource Groups.
23. Select Create Resource Group. You will get a new screen with some entries and selections. Give the tag (system:wordpress) and press Create Group.
24. On the next screen, it shows wordpress as the resource group name.

25. Now, go to the EC2 instance from the left side.
You can see the details at the bottom of the screen, as mentioned in the blog.

26. I can see the ELB by clicking the ELB option in the left panel.
Please note I have not given the ARN, so the Monitoring option is not selected,
since it might incur charges.

27. You can see the CloudWatch options through the Monitoring button in the lower part of the screen.

28. Please note the security groups are attached by default.

29. Please note: if I want to use this EC2 instance, which is prepared for WP, I need to launch it live in AWS services,
which is going to be billed. Hence I will stop at this point.

30. The FINAL step is to delete the WP stack. I will go to the CloudFormation option.
It displays the current stack. I will select it, go to Actions, and select the DELETE STACK option.
It prompts for ‘YES/NO’; select Yes. It can take some time to perform the deletion.

31. Once it is deleted, it will come back to the stack creation screen.
Please note I have checked: there are no existing instances in my current account.
We can see the instance as terminated.

32. So, this way we can create infra and delete it very easily.

33. So, let us move to the final conclusion section of the blog.

34. Please call me if you need any coaching for AWS course….
THANKS FOR WATCHING MY VIDEOS/BLOGS ……..

 

Watch the video below for this blog’s narration:

https://www.facebook.com/watch/?v=254567748762273

 

For the above steps, a 40-minute video has been made and hosted on my channel. Please look into it as well.

================= End of Lab practice ===============================>

 

Now, after doing all the above steps, we can compare with our conclusions mentioned at the beginning of this blog. I have copied them here for your cross-check!!

a) Creating a blogging infrastructure can be fully automated through AWS services.
b) Infrastructure can be created at any time, on demand, without any up-front commitment for how long we will use it in AWS.
c) We pay for our infrastructure depending on how many hours we use it.
d) The infrastructure consists of several parts, such as virtual servers, load balancers, and databases.
e) The infrastructure can be deleted with one click, at no cost to us. This process is powered by AWS automation, so it will not be billed to our free-tier account after deletion.

I assume you are now a fearless user of AWS, able to create infrastructure through your free-tier account, delete it, and maintain the account without any cost to your CC/account.

If you are interested to learn Virtualization with Vagrant visit:

1. Vagrant/Virtual Box:How to create Virtual Machine[VM] on Windows 10?:

Note to the reader of this blog:

If you are not a student of my class and are looking for it, please contact me by mail with your LinkedIn identity, and send a connection request with a message on your need. You can use the contacts below. Please note: I teach globally.

Vcard-Shanthi Kumar V-v3

This blog has also been created as a video; there is a series of videos through to the end of the lab session. At the end, the lab practices are also recorded for your use with your free AWS account.

For some more AWS Specimen POCs visit the below FB web pages:

Build Cloud Solution Architects

MicroServices and Docker

If you want to learn the in-depth Cloud/DevOps Architect role, with infra setup up to IAC automation, the following course can help you convert into this in-demand role:

https://vskumar.blog/2020/01/20/aws-devops-stage1-stage2-course-for-modern-tech-professional/

Many working professionals globally are inclined toward this curriculum. Watch the videos and ping me on Facebook: https://www.facebook.com/shanthikumar.vemulapalli

3. AWS: How to create S3 Bucket and share object URL ?

In this blog, I have given the link to the discussion video:

a) Creating a Bucket on S3.

b) Uploading an Object.

c) Sharing the object URL.

d) Testing the object URL for its display on a different laptop.

Watch this attached video



AWS-SAA-Course

35. DevOps: How do you plan an IAC [Infrastructure As Code]?

vskumarcloud-build-cloud-architect

When you are working for DevOps practices, the following question I would like to ask…

How do you plan an IAC [Infrastructure As Code] ?

You or your team members might be experts in configuration tools.

But without clear environment specifications, these tools have no AI to figure out your environment.

When we do IAC as part of DevOps practices, we also need to identify the infrastructure needs for different environments.

At that time, one needs to do the following activities as well.

This is not only for a Cloud Architect; even for DevOps practitioners it is mandatory.

Look into the discussion video mentioned in the below URL.

Please note: unless you give specifications to the DevOps Engineer, he/she cannot build a sustainable environment.

Your prior planning is very essential.

Cloud architect: How to build your Infrastructure planning practice ?

https://vskumar.blog/2018/12/04/1-cloud-architect-how-to-build-infrastructure-planning/

 

36. DevOps: Why is the DevOps practice team required to be involved in cloud infra planning?

Build Cloud architects-FB promotion

Why is the DevOps practice team required to be involved in cloud infra planning?

I was talking to some clients recently on the importance of planning cloud migration activities. I came up with some guidelines for them as part of my engagement, and I would like to share some of them below.

When the DevOps practice team needs to do the infra setup for a cloud migration, they also need to participate in identifying the infra activities and their specifications, which is very essential.

This needs to be done as an initial step in any cloud services migration.

In my opinion, we can work with any cloud service, such as AWS, Azure, Google Cloud, etc., by treating the above activity as mandatory.

The attached blog/video contains the same discussion, with the details of the steps required to set up a Virtual Private Cloud. We might have seen the VPC nomenclature with AWS, but a similar setup or name can exist with other cloud service providers as well.

Once this VPC is created, the systems are going to be hosted on the cloud.

The Ops team’s responsibility is to make sure the cloud migration is done correctly and completely for the entire live setup.

At the same time, they also need to conduct a successful pilot testing activity, which is mandated per Agile project management [Agile PM] standards, before they announce go-live.

They also need to do a parallel run of the past production setup alongside the new cloud setup for a few weeks.

 

Below URL contains the initial planning discussion as mentioned:

https://vskumar.blog/2018/12/23/9-aws-saa-what-is-the-initial-step-for-vpc-design-theorydiscussion-video/

 

If you want to learn detailed discussion on Infra planning, visit:

https://vskumar.blog/2018/12/04/1-cloud-architect-how-to-build-infrastructure-planning/

 

 

AWS-SAA-Course

1. Cloud architect: How to build your Infrastructure planning practice [watch many scenario based videos] ?

If you are a Cloud Architect, you might do project initiation for cloud migration projects. During that time, you need a plan that lays out the series of activities and produces a project schedule. You might want to watch this discussion video alongside your planning; it will add value by saving future effort and reducing repeat activities. Please send your feedback by e-mail [mentioned in it], which encourages us to share such consulting/discussion videos on social media.

Build Cloud architects-FB promotion

With reference to my previous blog on the role of Cloud architect, in this blog I would like to present on:

  • What is traditional infrastructure planning and building analysis?

  • How to set up a new infrastructure for a simple e-commerce site in the traditional manner?

  • What are the activities we might do?

  • How do they compare, at a high level, with cloud architecting?

  • If the Cloud Architect applies these practices in his/her area, a lot of time on rollback/back-out tasks can be saved during migration.

The following one-hour video has the entire elaboration for your clarity, with a consulting/training discussion:

You can also join for similar discussions:

https://www.facebook.com/groups/1911594275816833/about/

If you are looking for coaching on your Cloud role performance, please contact me on my FB with your LinkedIn URL.

For details on my coaching visit:

https://vskumar.blog/2018/11/13/coaching-mentoring-on-aws-solution-architect-associate-exam/

If you are interested to know the Cloud initiation activities, visit my video:

A scenario-based discussion with a Cloud professional, following from the above video/blog:

Cloud Initiation and Practices – 1:

https://www.facebook.com/101806851617834/videos/336263767430087/

Cloud Initiation and Practices – 2:
This is the 2nd discussion video on the Cloud initiation and on the needed practices.

DevOps Practices & FAQs -2

Please read the previous FAQs series also: Devops-practices-faqs-1

And the next one: https://vskumar.blog/2019/02/01/devops-practices-faqs-3-domain-area/

AWS-SAA-Course

1. Who can become DevOps Engineer ?

In traditional projects [non-Agile projects], Build Engineers, Sys Admins, and Release Engineers can convert their careers into the DevOps Engineer role through an Agile-practicing IT organization.

In Agile projects, we might have seen Build or Deployment Engineers; they too can convert into DevOps Engineer roles.

2. What does a professional aspiring to the ‘DevOps Engineer role’ need to learn?

If somebody would like to convert their role into DevOps Engineer, they need to understand the following:

  1. Agile and Scrum or Lean practices
  2. DevOps Principles, practices and patterns
  3. Deployment, SCM  and Release management process
  4. Version control System tools [Ex: Git, SVN, etc..]
  5. Cloud setup and deployment [Ex: AWS, Azure,Google Cloud, Alibaba, etc..]
  6. Packaging process and tools [Ex: Maven, Gradle, etc.]
  7. Continuous Integration Tools [Ex: Jenkins, Teamcity,  etc.]
  8. Software Configuration Management [SCM]  tools [Ex: Ansible,  Chef, Puppet, etc.]
  9. Containerization [Docker]
  10. Some of the scripting languages [Ex: Shell, Bash, python, Ruby, Nodejs, etc.]
  11. Windows, Linux OS commands and operations.

They can also learn incrementally, depending on the project’s needs. Note that not all projects use the same tools; depending on the IT organization’s plans, practices, and environments, they decide between vendor-based and open-source tools.

Note: only some of the well-known tools have been mentioned. Hence one needs to identify the customer’s project environment and their DevOps architecture as well. If one understands the basic process in the first learning phase, one can pick things up faster later on.

If you want to learn DevOps Practices, join the below group:

https://www.facebook.com/groups/1911594275816833/about/

FB-DevOps-Practices Group-page

The following videos elaborate on the need for, and the advantages of, IT companies and professionals converting to DevOps practices. Comparative reports have been incorporated.

https://www.youtube.com/watch?v=O3yBGbPQ4SM

 

https://www.youtube.com/watch?v=9engYBrnwA4

 

https://www.youtube.com/watch?v=O_SxF3hJjUM


Visit my current running facebook groups for IT Professionals with my valuable discussions/videos/blogs posted:

 

DevOps Practices Group:

https://www.facebook.com/groups/1911594275816833/about/

 

Cloud Practices Group:

https://www.facebook.com/groups/585147288612549/about/

 

Build Cloud Solution Architects [With some videos of the live students classes/feedback]

https://www.facebook.com/vskumarcloud/

MicroServices and Docker [For learning concepts of Microservices and Docker containers]

https://www.facebook.com/MicroServices-and-Docker-328906801086961/

34. DevOps:How to Install Gradle on Ubuntu 18.04 ? [Video]

Gradle logo

How to Install Gradle on Ubuntu 18.04 ? :

This blog demonstrates the installation of Gradle 4.10.2 on an Ubuntu 18.04 VM.

At the end of this blog the Installation video clip is attached.

PLEASE NOTE: THIS VIDEO HAS NO NARRATION.
I AM ONLY EXECUTING THE STEPS BELOW.

Step#1: Install OpenJDK:

Gradle needs Java JDK or JRE version 7 or above to be installed.
We will install OpenJDK 8 as below.
Let us update the Linux package index:

sudo apt update

Install the OpenJDK package with the below command:

sudo apt install openjdk-8-jdk

Check the Java version:

java -version

Step#2: Download Gradle

Download Gradle using the below command:

wget https://services.gradle.org/distributions/gradle-4.10.2-bin.zip -P /tmp
Once the download is completed, we need to extract the zip file into the folder /opt/gradle:

sudo unzip -d /opt/gradle /tmp/gradle-*.zip

Now, let us verify that the Gradle files were extracted by listing the /opt/gradle/gradle-4.10.2 directory:

ls /opt/gradle/gradle-4.10.2

The typical file list will be:

bin getting-started.html init.d lib LICENSE media NOTICE

Step#3: Setting up environment variables:

Now, we need to configure the PATH environment variable to include the Gradle bin directory.
To do this, open a text editor and create a new file named gradle.sh inside the folder /etc/profile.d/:

sudo vim /etc/profile.d/gradle.sh

In this shell script [config file], paste the below lines:

export GRADLE_HOME=/opt/gradle/gradle-4.10.2
export PATH=${GRADLE_HOME}/bin:${PATH}

The above script will be initiated at startup.

Now, let us load the environment variables using the following command:

source /etc/profile.d/gradle.sh
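
The two export lines from Step#3 can be checked in one short script. This only verifies that PATH now contains the Gradle bin directory; it does not require Gradle itself to be present on the machine running it:

```shell
# Reproduce the exports from /etc/profile.d/gradle.sh and confirm PATH picks
# up the Gradle bin directory.
export GRADLE_HOME=/opt/gradle/gradle-4.10.2
export PATH=${GRADLE_HOME}/bin:${PATH}

if printf '%s' ":$PATH:" | grep -q ":${GRADLE_HOME}/bin:"; then
  echo "PATH configured for Gradle"
else
  echo "PATH missing Gradle bin" >&2
fi
```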

Step#4: Verify the Gradle installation

To validate the Gradle installation, use the command:

gradle -v

It will display the Gradle version.

So Gradle is installed successfully.

NOW YOU ARE READY TO CREATE YOUR BUILDS with Gradle.

==== Lab exercise output are pasted here ===>

Gradle installation steps output for Ubuntu 18.04 VM:

Step#1: Install OpenJDK:

Output for;

sudo apt update

==== Output =====>

vskumar@ubuntu:~$

vskumar@ubuntu:~$ sudo apt update

[sudo] password for vskumar:

Get:1 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]

Hit:3 http://us.archive.ubuntu.com/ubuntu bionic InRelease

Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]

Ign:5 http://pkg.jenkins.io/debian-stable binary/ InRelease

Get:2 http://ppa.launchpad.net/webupd8team/java/ubuntu bionic InRelease [15.4 kB]

Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]

Hit:7 http://pkg.jenkins.io/debian-stable binary/ Release

Get:8 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]

E: Repository ‘http://ppa.launchpad.net/webupd8team/java/ubuntu bionic InRelease’ changed its ‘Label’ value from ‘Oracle Java (JDK) 8 / 9 Installer PPA’ to ‘Oracle Java (JDK) 8 Installer PPA’

N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

Do you want to accept these changes and continue updating from this repository? [y/N] y

Get:9 http://ppa.launchpad.net/webupd8team/java/ubuntu bionic/main i386 Packages [1,556 B]

Get:10 http://ppa.launchpad.net/webupd8team/java/ubuntu bionic/main amd64 Packages [1,556 B]

Get:11 http://us.archive.ubuntu.com/ubuntu bionic-updates/main i386 Packages [372 kB]

Get:13 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [416 kB]

Get:14 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [571 kB]

Get:15 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [566 kB]

Fetched 2,254 kB in 25s (89.3 kB/s)

Reading package lists… Done

Building dependency tree

Reading state information… Done

345 packages can be upgraded. Run ‘apt list –upgradable’ to see them.

vskumar@ubuntu:~$

== End of output ======>

=====>Screen Output for JDK 8 Installation ===>

vskumar@ubuntu:~$ sudo apt install openjdk-8-jdk

Reading package lists… Done

Building dependency tree

Reading state information… Done

The following additional packages will be installed:

ca-certificates-java fonts-dejavu-extra libatk-wrapper-java

libatk-wrapper-java-jni libgif7 libice-dev libpthread-stubs0-dev libsm-dev

libx11-6 libx11-dev libx11-doc libxau-dev libxcb1-dev libxdmcp-dev

libxt-dev openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless

x11proto-core-dev x11proto-dev xorg-sgml-doctools xtrans-dev

Suggested packages:

libice-doc libsm-doc libxcb-doc libxt-doc openjdk-8-demo openjdk-8-source

visualvm fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei

fonts-wqy-zenhei

The following NEW packages will be installed:

ca-certificates-java fonts-dejavu-extra libatk-wrapper-java

libatk-wrapper-java-jni libgif7 libice-dev libpthread-stubs0-dev libsm-dev

libx11-dev libx11-doc libxau-dev libxcb1-dev libxdmcp-dev libxt-dev

openjdk-8-jdk openjdk-8-jdk-headless openjdk-8-jre openjdk-8-jre-headless

x11proto-core-dev x11proto-dev xorg-sgml-doctools xtrans-dev

The following packages will be upgraded:

libx11-6

1 upgraded, 22 newly installed, 0 to remove and 344 not upgraded.

1 not fully installed or removed.

Need to get 41.8 MB/42.3 MB of archives.

After this operation, 165 MB of additional disk space will be used.

Do you want to continue? [Y/n] y

Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jre-headless amd64 8u181-b13-1ubuntu0.18.04.1 [27.3 MB]

Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jre-headless amd64 8u181-b13-1ubuntu0.18.04.1 [27.3 MB]

Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 ca-certificates-java all 20180516ubuntu1~18.04.1 [12.2 kB]

Get:3 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 fonts-dejavu-extra all 2.37-1 [1,953 kB]

Get:4 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libatk-wrapper-java all 0.33.3-20ubuntu0.1 [34.7 kB]

Get:5 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libatk-wrapper-java-jni amd64 0.33.3-20ubuntu0.1 [28.3 kB]

Get:6 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libgif7 amd64 5.1.4-2 [30.6 kB]

Get:7 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 xorg-sgml-doctools all 1:1.11-1 [12.9 kB]

Get:8 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 x11proto-dev all 2018.4-4 [251 kB]

Get:9 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 x11proto-core-dev all 2018.4-4 [2,620 B]

Get:10 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libice-dev amd64 2:1.0.9-2 [46.8 kB]

Get:11 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libpthread-stubs0-dev amd64 0.3-4 [4,068 B]

Get:12 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libsm-dev amd64 2:1.2.2-1 [16.2 kB]

Get:13 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libxau-dev amd64 1:1.0.8-1 [11.1 kB]

Get:14 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libxdmcp-dev amd64 1:1.1.2-3 [25.1 kB]

Get:15 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 xtrans-dev all 1.3.5-1 [70.5 kB]

Get:16 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libxcb1-dev amd64 1.13-1 [80.0 kB]

Get:17 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libx11-dev amd64 2:1.6.4-3ubuntu0.1 [641 kB]

Get:18 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libx11-doc all 2:1.6.4-3ubuntu0.1 [2,065 kB]

Get:19 http://us.archive.ubuntu.com/ubuntu bionic/main amd64 libxt-dev amd64 1:1.1.5-1 [395 kB]

Get:20 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jre amd64 8u181-b13-1ubuntu0.18.04.1 [69.7 kB]

Get:21 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jdk-headless amd64 8u181-b13-1ubuntu0.18.04.1 [8,248 kB]

Ign:21 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jdk-headless amd64 8u181-b13-1ubuntu0.18.04.1

Get:22 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jdk amd64 8u181-b13-1ubuntu0.18.04.1 [458 kB]

Get:21 http://security.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jdk-headless amd64 8u181-b13-1ubuntu0.18.04.1 [8,248 kB]

Fetched 6,273 kB in 1min 54s (54.9 kB/s)

(Reading database … 172315 files and directories currently installed.)

Preparing to unpack …/00-libx11-6_2%3a1.6.4-3ubuntu0.1_amd64.deb …

Unpacking libx11-6:amd64 (2:1.6.4-3ubuntu0.1) over (2:1.6.4-3) …

Selecting previously unselected package openjdk-8-jre-headless:amd64.

Preparing to unpack …/01-openjdk-8-jre-headless_8u181-b13-1ubuntu0.18.04.1_amd64.deb …

Unpacking openjdk-8-jre-headless:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Selecting previously unselected package ca-certificates-java.

Preparing to unpack …/02-ca-certificates-java_20180516ubuntu1~18.04.1_all.deb …

Unpacking ca-certificates-java (20180516ubuntu1~18.04.1) …

Selecting previously unselected package fonts-dejavu-extra.

Preparing to unpack …/03-fonts-dejavu-extra_2.37-1_all.deb …

Unpacking fonts-dejavu-extra (2.37-1) …

Selecting previously unselected package libatk-wrapper-java.

Preparing to unpack …/04-libatk-wrapper-java_0.33.3-20ubuntu0.1_all.deb …

Unpacking libatk-wrapper-java (0.33.3-20ubuntu0.1) …

Selecting previously unselected package libatk-wrapper-java-jni:amd64.

Preparing to unpack …/05-libatk-wrapper-java-jni_0.33.3-20ubuntu0.1_amd64.deb …

Unpacking libatk-wrapper-java-jni:amd64 (0.33.3-20ubuntu0.1) …

Selecting previously unselected package libgif7:amd64.

Preparing to unpack …/06-libgif7_5.1.4-2_amd64.deb …

Unpacking libgif7:amd64 (5.1.4-2) …

Selecting previously unselected package xorg-sgml-doctools.

Preparing to unpack …/07-xorg-sgml-doctools_1%3a1.11-1_all.deb …

Unpacking xorg-sgml-doctools (1:1.11-1) …

Selecting previously unselected package x11proto-dev.

Preparing to unpack …/08-x11proto-dev_2018.4-4_all.deb …

Unpacking x11proto-dev (2018.4-4) …

Selecting previously unselected package x11proto-core-dev.

Preparing to unpack …/09-x11proto-core-dev_2018.4-4_all.deb …

Unpacking x11proto-core-dev (2018.4-4) …

Selecting previously unselected package libice-dev:amd64.

Preparing to unpack …/10-libice-dev_2%3a1.0.9-2_amd64.deb …

Unpacking libice-dev:amd64 (2:1.0.9-2) …

Selecting previously unselected package libpthread-stubs0-dev:amd64.

Preparing to unpack …/11-libpthread-stubs0-dev_0.3-4_amd64.deb …

Unpacking libpthread-stubs0-dev:amd64 (0.3-4) …

Selecting previously unselected package libsm-dev:amd64.

Preparing to unpack …/12-libsm-dev_2%3a1.2.2-1_amd64.deb …

Unpacking libsm-dev:amd64 (2:1.2.2-1) …

Selecting previously unselected package libxau-dev:amd64.

Preparing to unpack …/13-libxau-dev_1%3a1.0.8-1_amd64.deb …

Unpacking libxau-dev:amd64 (1:1.0.8-1) …

Selecting previously unselected package libxdmcp-dev:amd64.

Preparing to unpack …/14-libxdmcp-dev_1%3a1.1.2-3_amd64.deb …

Unpacking libxdmcp-dev:amd64 (1:1.1.2-3) …

Selecting previously unselected package xtrans-dev.

Preparing to unpack …/15-xtrans-dev_1.3.5-1_all.deb …

Unpacking xtrans-dev (1.3.5-1) …

Selecting previously unselected package libxcb1-dev:amd64.

Preparing to unpack …/16-libxcb1-dev_1.13-1_amd64.deb …

Unpacking libxcb1-dev:amd64 (1.13-1) …

Selecting previously unselected package libx11-dev:amd64.

Preparing to unpack …/17-libx11-dev_2%3a1.6.4-3ubuntu0.1_amd64.deb …

Unpacking libx11-dev:amd64 (2:1.6.4-3ubuntu0.1) …

Selecting previously unselected package libx11-doc.

Preparing to unpack …/18-libx11-doc_2%3a1.6.4-3ubuntu0.1_all.deb …

Unpacking libx11-doc (2:1.6.4-3ubuntu0.1) …

Selecting previously unselected package libxt-dev:amd64.

Preparing to unpack …/19-libxt-dev_1%3a1.1.5-1_amd64.deb …

Unpacking libxt-dev:amd64 (1:1.1.5-1) …

Selecting previously unselected package openjdk-8-jre:amd64.

Preparing to unpack …/20-openjdk-8-jre_8u181-b13-1ubuntu0.18.04.1_amd64.deb …

Unpacking openjdk-8-jre:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Selecting previously unselected package openjdk-8-jdk-headless:amd64.

Preparing to unpack …/21-openjdk-8-jdk-headless_8u181-b13-1ubuntu0.18.04.1_amd64.deb …

Unpacking openjdk-8-jdk-headless:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Selecting previously unselected package openjdk-8-jdk:amd64.

Preparing to unpack …/22-openjdk-8-jdk_8u181-b13-1ubuntu0.18.04.1_amd64.deb …

Unpacking openjdk-8-jdk:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Setting up nginx-extras (1.14.0-0ubuntu1) …

Job for nginx.service failed because the control process exited with error code.

See “systemctl status nginx.service” and “journalctl -xe” for details.

invoke-rc.d: initscript nginx, action “start” failed.

  • nginx.service – A high performance web server and a reverse proxy server

Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)

Active: failed (Result: exit-code) since Thu 2018-11-01 05:06:40 PDT; 220ms ago

Docs: man:nginx(8)

Process: 14329 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)

Process: 14319 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)

 

Nov 01 05:06:38 ubuntu nginx[14329]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

Nov 01 05:06:38 ubuntu nginx[14329]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)

Nov 01 05:06:39 ubuntu nginx[14329]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

Nov 01 05:06:39 ubuntu nginx[14329]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)

Nov 01 05:06:39 ubuntu nginx[14329]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)

Nov 01 05:06:39 ubuntu nginx[14329]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)

Nov 01 05:06:40 ubuntu nginx[14329]: nginx: [emerg] still could not bind()

Nov 01 05:06:40 ubuntu systemd[1]: nginx.service: Control process exited, code=exited status=1

Nov 01 05:06:40 ubuntu systemd[1]: nginx.service: Failed with result ‘exit-code’.

Nov 01 05:06:40 ubuntu systemd[1]: Failed to start A high performance web server and a reverse proxy server.

dpkg: error processing package nginx-extras (–configure):

installed nginx-extras package post-installation script subprocess returned error exit status 1

Setting up ca-certificates-java (20180516ubuntu1~18.04.1) …

head: cannot open ‘/etc/ssl/certs/java/cacerts’ for reading: No such file or directory

Adding debian:COMODO_ECC_Certification_Authority.pem

Adding debian:AffirmTrust_Premium_ECC.pem

Adding debian:Certinomis_-_Root_CA.pem

Adding debian:SSL.com_Root_Certification_Authority_ECC.pem

Adding debian:AffirmTrust_Premium.pem

Adding debian:Entrust_Root_Certification_Authority_-_G2.pem

Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem

Adding debian:GlobalSign_Root_CA.pem

Adding debian:OpenTrust_Root_CA_G3.pem

Adding debian:USERTrust_RSA_Certification_Authority.pem

Adding debian:thawte_Primary_Root_CA_-_G3.pem

Adding debian:ssl-cert-snakeoil.pem

Adding debian:Baltimore_CyberTrust_Root.pem

Adding debian:Certplus_Root_CA_G2.pem

Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem

Adding debian:T-TeleSec_GlobalRoot_Class_3.pem

Adding debian:Entrust_Root_Certification_Authority_-_EC1.pem

Adding debian:EE_Certification_Centre_Root_CA.pem

Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem

Adding debian:DigiCert_Global_Root_CA.pem

Adding debian:GlobalSign_ECC_Root_CA_-_R5.pem

Adding debian:NetLock_Arany_=Class_Gold=_Főtanúsítvány.pem

Adding debian:Network_Solutions_Certificate_Authority.pem

Adding debian:Buypass_Class_2_Root_CA.pem

Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem

Adding debian:DST_Root_CA_X3.pem

Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem

Adding debian:Certplus_Class_2_Primary_CA.pem

Adding debian:Trustis_FPS_Root_CA.pem

Adding debian:OpenTrust_Root_CA_G1.pem

Adding debian:Taiwan_GRCA.pem

Adding debian:AC_RAIZ_FNMT-RCM.pem

Adding debian:TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem

Adding debian:AffirmTrust_Commercial.pem

Adding debian:QuoVadis_Root_CA_3.pem

Adding debian:SSL.com_EV_Root_Certification_Authority_RSA_R2.pem

Adding debian:DigiCert_Global_Root_G3.pem

Adding debian:QuoVadis_Root_CA_1_G3.pem

Adding debian:thawte_Primary_Root_CA.pem

Adding debian:thawte_Primary_Root_CA_-_G2.pem

Adding debian:CA_Disig_Root_R2.pem

Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem

Adding debian:Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem

Adding debian:Certum_Trusted_Network_CA.pem

Adding debian:SSL.com_EV_Root_Certification_Authority_ECC.pem

Adding debian:Chambers_of_Commerce_Root_-_2008.pem

Adding debian:certSIGN_ROOT_CA.pem

Adding debian:Hongkong_Post_Root_CA_1.pem

Adding debian:DigiCert_Assured_ID_Root_G2.pem

Adding debian:GlobalSign_Root_CA_-_R3.pem

Adding debian:AddTrust_External_Root.pem

Adding debian:QuoVadis_Root_CA_2_G3.pem

Adding debian:DigiCert_Trusted_Root_G4.pem

Adding debian:Staat_der_Nederlanden_EV_Root_CA.pem

Adding debian:COMODO_Certification_Authority.pem

Adding debian:Global_Chambersign_Root_-_2008.pem

Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem

Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem

Adding debian:Actalis_Authentication_Root_CA.pem

Adding debian:Entrust_Root_Certification_Authority.pem

Adding debian:GlobalSign_Root_CA_-_R2.pem

Adding debian:ACCVRAIZ1.pem

Adding debian:Certplus_Root_CA_G1.pem

Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem

Adding debian:Buypass_Class_3_Root_CA.pem

Adding debian:Izenpe.com.pem

Adding debian:OISTE_WISeKey_Global_Root_GB_CA.pem

Adding debian:GeoTrust_Universal_CA.pem

Adding debian:QuoVadis_Root_CA.pem

Adding debian:TeliaSonera_Root_CA_v1.pem

Adding debian:QuoVadis_Root_CA_3_G3.pem

Adding debian:QuoVadis_Root_CA_2.pem

Adding debian:Go_Daddy_Class_2_CA.pem

Adding debian:DigiCert_Global_Root_G2.pem

Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem

Adding debian:Microsec_e-Szigno_Root_CA_2009.pem

Adding debian:SSL.com_Root_Certification_Authority_RSA.pem

Adding debian:GlobalSign_ECC_Root_CA_-_R4.pem

Adding debian:EC-ACC.pem

Adding debian:Cybertrust_Global_Root.pem

Adding debian:DigiCert_Assured_ID_Root_G3.pem

Adding debian:SecureSign_RootCA11.pem

Adding debian:Visa_eCommerce_Root.pem

Adding debian:Atos_TrustedRoot_2011.pem

Adding debian:VeriSign_Universal_Root_Certification_Authority.pem

Adding debian:TÜRKTRUST_Elektronik_Sertifika_Hizmet_Sağlayıcısı_H5.pem

Adding debian:E-Tugra_Certification_Authority.pem

Adding debian:Certigna.pem

Adding debian:Sonera_Class_2_Root_CA.pem

Adding debian:TrustCor_RootCert_CA-2.pem

Adding debian:SwissSign_Silver_CA_-_G2.pem

Adding debian:Certum_Trusted_Network_CA_2.pem

Adding debian:D-TRUST_Root_Class_3_CA_2_EV_2009.pem

Adding debian:CFCA_EV_ROOT.pem

Adding debian:AffirmTrust_Networking.pem

Adding debian:T-TeleSec_GlobalRoot_Class_2.pem

Adding debian:IdenTrust_Public_Sector_Root_CA_1.pem

Adding debian:IdenTrust_Commercial_Root_CA_1.pem

Adding debian:TrustCor_RootCert_CA-1.pem

Adding debian:Comodo_AAA_Services_root.pem

Adding debian:Amazon_Root_CA_3.pem

Adding debian:GeoTrust_Universal_CA_2.pem

Adding debian:Security_Communication_RootCA2.pem

Adding debian:GeoTrust_Global_CA.pem

Adding debian:Deutsche_Telekom_Root_CA_2.pem

Adding debian:OpenTrust_Root_CA_G2.pem

Adding debian:GDCA_TrustAUTH_R5_ROOT.pem

Adding debian:USERTrust_ECC_Certification_Authority.pem

Adding debian:SecureTrust_CA.pem

Adding debian:D-TRUST_Root_Class_3_CA_2_2009.pem

Adding debian:TrustCor_ECA-1.pem

Adding debian:SZAFIR_ROOT_CA2.pem

Adding debian:Secure_Global_CA.pem

Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem

Adding debian:ePKI_Root_Certification_Authority.pem

Adding debian:GeoTrust_Primary_Certification_Authority.pem

Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem

Adding debian:Staat_der_Nederlanden_Root_CA_-_G3.pem

Adding debian:ISRG_Root_X1.pem

Adding debian:Security_Communication_Root_CA.pem

Adding debian:SwissSign_Gold_CA_-_G2.pem

Adding debian:COMODO_RSA_Certification_Authority.pem

Adding debian:Amazon_Root_CA_4.pem

Adding debian:TWCA_Global_Root_CA.pem

Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem

Adding debian:LuxTrust_Global_Root_2.pem

Adding debian:TWCA_Root_Certification_Authority.pem

Adding debian:Amazon_Root_CA_2.pem

Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem

Adding debian:DigiCert_Assured_ID_Root_CA.pem

Adding debian:XRamp_Global_CA_Root.pem

Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem

Adding debian:Starfield_Class_2_CA.pem

Adding debian:Amazon_Root_CA_1.pem

done.

Processing triggers for mime-support (3.60ubuntu1) …

Processing triggers for desktop-file-utils (0.23-1ubuntu3.18.04.1) …

Setting up libpthread-stubs0-dev:amd64 (0.3-4) …

Setting up xorg-sgml-doctools (1:1.11-1) …

Setting up libx11-6:amd64 (2:1.6.4-3ubuntu0.1) …

Setting up libgif7:amd64 (5.1.4-2) …

Setting up x11proto-dev (2018.4-4) …

Setting up xtrans-dev (1.3.5-1) …

Setting up libxdmcp-dev:amd64 (1:1.1.2-3) …

Processing triggers for libc-bin (2.27-3ubuntu1) …

Setting up libice-dev:amd64 (2:1.0.9-2) …

Setting up libx11-doc (2:1.6.4-3ubuntu0.1) …

Processing triggers for man-db (2.8.3-2) …

Processing triggers for gnome-menus (3.13.3-11ubuntu1) …

Setting up fonts-dejavu-extra (2.37-1) …

Processing triggers for ca-certificates (20180409) …

Updating certificates in /etc/ssl/certs…

0 added, 0 removed; done.

Running hooks in /etc/ca-certificates/update.d…

 

done.

done.

Setting up libatk-wrapper-java (0.33.3-20ubuntu0.1) …

Processing triggers for hicolor-icon-theme (0.17-2) …

Processing triggers for fontconfig (2.12.6-0ubuntu2) …

Setting up openjdk-8-jre-headless:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Setting up libsm-dev:amd64 (2:1.2.2-1) …

Setting up x11proto-core-dev (2018.4-4) …

Setting up openjdk-8-jdk-headless:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Setting up libxau-dev:amd64 (1:1.0.8-1) …

Setting up libatk-wrapper-java-jni:amd64 (0.33.3-20ubuntu0.1) …

Setting up libxcb1-dev:amd64 (1.13-1) …

Setting up libx11-dev:amd64 (2:1.6.4-3ubuntu0.1) …

Setting up libxt-dev:amd64 (1:1.1.5-1) …

Setting up openjdk-8-jre:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Setting up openjdk-8-jdk:amd64 (8u181-b13-1ubuntu0.18.04.1) …

Processing triggers for libc-bin (2.27-3ubuntu1) …

Errors were encountered while processing:

nginx-extras

 

E: Sub-process /usr/bin/dpkg returned an error code (1)

vskumar@ubuntu:~$

vskumar@ubuntu:~$ java -version

java version “1.8.0_171”

Java(TM) SE Runtime Environment (build 1.8.0_171-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)

vskumar@ubuntu:~$

 

==== End of Output =====>

 

 

 

=== Output for downloading gradle ====>

vskumar@ubuntu:~$ wget https://services.gradle.org/distributions/gradle-4.10.2-bin.zip -P /tmp

–2018-11-01 05:14:35–  https://services.gradle.org/distributions/gradle-4.10.2-bin.zip

Resolving services.gradle.org (services.gradle.org)… 104.16.174.166, 104.16.172.166, 104.16.175.166, …

Connecting to services.gradle.org (services.gradle.org)|104.16.174.166|:443… connected.

HTTP request sent, awaiting response… 301 Moved Permanently

Location: https://downloads.gradle.org/distributions/gradle-4.10.2-bin.zip [following]

–2018-11-01 05:14:35–  https://downloads.gradle.org/distributions/gradle-4.10.2-bin.zip

Resolving downloads.gradle.org (downloads.gradle.org)… 104.16.175.166, 104.16.173.166, 104.16.171.166, …

Connecting to downloads.gradle.org (downloads.gradle.org)|104.16.175.166|:443… connected.

HTTP request sent, awaiting response… 200 OK

Length: 78420037 (75M) [application/zip]

Saving to: ‘/tmp/gradle-4.10.2-bin.zip’

 

gradle-4.10.2-bin.z 100%[==================>]  74.79M  1.83MB/s    in 47s

 

2018-11-01 05:15:22 (1.60 MB/s) – ‘/tmp/gradle-4.10.2-bin.zip’ saved [78420037/78420037]

 

vskumar@ubuntu:~$

=== End of output =============>

 

=== Gradle Files verification ===>

vskumar@ubuntu:~$ ls /opt/gradle/gradle-4.10.2

bin  getting-started.html  init.d  lib  LICENSE  media  NOTICE

vskumar@ubuntu:~$

==========================>

 

=== Output for shell config file creation =====>

vskumar@ubuntu:~$ sudo vim /etc/profile.d/gradle.sh

vskumar@ubuntu:~$ cat  vim /etc/profile.d/gradle.sh

cat: vim: No such file or directory

export GRADLE_HOME=/opt/gradle/gradle-4.10.2

export PATH=${GRADLE_HOME}/bin:${PATH}

vskumar@ubuntu:~$

=========== End of file content display ====>
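The two export lines in gradle.sh can be sanity-checked without opening a new login shell. A minimal standalone sketch of their effect (using the same paths as the walkthrough above):

```shell
# Standalone sketch of the effect of /etc/profile.d/gradle.sh
# (GRADLE_HOME path is the one used in this walkthrough).
export GRADLE_HOME=/opt/gradle/gradle-4.10.2
export PATH=${GRADLE_HOME}/bin:${PATH}

# Confirm the Gradle bin directory is now on PATH:
case ":$PATH:" in
  *":$GRADLE_HOME/bin:"*) echo "gradle bin is on PATH" ;;
  *)                      echo "gradle bin is missing" ;;
esac
```

On a real system, `source /etc/profile.d/gradle.sh` (as shown in the next section) applies the same settings to the current shell.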

 

=== Output for version checking ====>

vskumar@ubuntu:~$ source /etc/profile.d/gradle.sh

vskumar@ubuntu:~$ gradle -v

 

Welcome to Gradle 4.10.2!

 

Here are the highlights of this release:

– Incremental Java compilation by default

– Periodic Gradle caches cleanup

– Gradle Kotlin DSL 1.0-RC6

– Nested included builds

– SNAPSHOT plugin versions in the `plugins {}` block

 

For more details see https://docs.gradle.org/4.10.2/release-notes.html

 

 

————————————————————

Gradle 4.10.2

————————————————————

 

Build time:   2018-09-19 18:10:15 UTC

Revision:     b4d8d5d170bb4ba516e88d7fe5647e2323d791dd

 

Kotlin DSL:   1.0-rc-6

Kotlin:       1.2.61

Groovy:       2.4.15

Ant:          Apache Ant(TM) version 1.9.11 compiled on March 23 2018

JVM:          1.8.0_171 (Oracle Corporation 25.171-b11)

OS:           Linux 4.15.0-29-generic amd64

 

vskumar@ubuntu:~$

========================>

==== End of Lab exercise output  ===========>

Good luck!!
Thanks for visiting this blog/video……. bye for now…

 

DevOps Practices coaching

Coaching on DevOps Practices —–>

This coaching is meant for DevOps Managers and above positions only…

  1. Please walk through the chart below for your DevOps practices implementation. These items are not tied to any specific DevOps tools; they cover practices implementation only, to demonstrate your velocity in executing DevOps-compliant projects.
  2. These are the best practices used by organizations that have implemented DevOps successfully.
  3. Who needs to learn these? If you are already working as a DevOps professional [ex: DevOps engineer, practitioner, architect, practice head, or anyone related to DevOps implementation] and your organization is targeting to demonstrate its [DevOps implementation] velocity, then you need to accelerate your knowledge across several areas of continuous improvement.
  4. Note: you also need to apply continuous learning, or seek coaching from experienced professionals, to speed up your productivity.

Visit for free concepts learning:

To join DevOps Practices group visit  [CONDITIONS APPLY]:

https://www.facebook.com/groups/1911594275816833/about/

To join Cloud Practices group visit [CONDITIONS APPLY]:

https://www.facebook.com/groups/585147288612549/about/

DevOps Patterns

Note:

Please note this course doesn’t contain Tools. Only Practices.

There is a separate topic, “DevOps Automation”, which you need to attend separately.

If you are qualified you can join the below group also.

https://vskumar.blog/2018/10/17/join-devops-practices-group-on-fb/

If you are new to DevOps, visit:

https://vskumar.blog/2017/10/22/why-the-devops-practice-is-mandatory-for-an-it-employee/

You can also visit:

https://vskumar.blog/2019/07/24/devops-advanced-devops-practices-processes-1/

Advt-course3rd page.png
Folks! Greetings!

Are you interested in transitioning into new technology?

An IT employee needs to learn DevOps, along with at least one cloud technology practice, which is mandatory to understand the current DevOps work culture and get accommodated into a project.
Visit my YouTube channel and the blog site mentioned in the vCard for course exercises and sample videos/blogs.
I regularly get new users from different countries for this content.
That itself indicates it is highly competitive technical material.
During the course you will be given cloud infra machine(s) on your laptop [they will be your property] for future self-practice, interviews, R&D, etc.
The critical topics have supporting blogs/videos along with the PDF material.
In corporate-style training companies, you are given access to their cloud setup only [up to a certain period].
These are the USPs that can be compared with other courses!
Please come with confirmation and determination to join.
Classroom sessions are held in Vijayanagar, Bangalore, India.
Both online and classroom options are available on weekends [globally flexible timings] and weekdays to facilitate employees.
Corporate companies are welcome to avail it and save the cost of your suppliers!!
You can join the online course from any country.
For contacts, please go through the vCard and send an e-mail about your willingness.
Looking forward to your learning call/e-mail!
Look into this video also:
Visit for AWS Lab demo:
WATCH STUDENT FEEDBACK ON AWS:

 

 

Visit some more videos:
Visit:

1. AWS:How to create and activate a new account in AWS ?

AWS Account-creation scrn

How to create and activate a new account in AWS ?:

In this blog, you will see the required steps for creating and activating your new AWS account. Once you have the activated account, you can start the other lab practices as I discuss in the class.

The following are the four main steps we need to follow:

STEP1: Creating your account. It consists of 2 steps: a) Providing a valid e-mail address and choosing a password. b) Providing your contact information and setting your preferences.

STEP2: Add a payment method. Please note: you need a valid credit card and must provide its details. Amazon verifies the card with a tiny transaction, which is credited back. If wrong data is given by mistake, your account registration will not be activated, and you will be informed by mail. This is how Amazon authenticates/authorizes us for AWS usage.

STEP3: Verify your phone number. You need to provide a phone number where you can be reached in the next few minutes while creating your account.

STEP4: Choose an AWS Support plan. AWS plans are published from time to time; you need to choose from the currently available plans based on your needs. The relevant URL is given in the detailed steps section of this blog.

The consolidated process can be understood from the flow chart below, which is from the collection of AWS process charts.

AWS Account-creation flowchart

Note:

I am not copying the screens due to privacy.

Detailed steps

STEP1: Detailed steps for Creating your account.

a). You need to go to Amazon Web Services home page URL: https://aws.amazon.com/

b). Now choose Sign Up and click on Create an AWS account. You will see a new page titled Create an AWS account. Enter the required details: e-mail id, password, and AWS account name [you can give any name for this], and choose Continue to go to the next page. Please note: the above steps are valid for new AWS users. If you enter your email address incorrectly, you might not be able to access your account or change your password in the future, so be careful with your data entry. If you have signed in to AWS recently, the page might say Sign In to the Console; in that case, log in to your existing account.

c). Now, on the current page, choose Professional or Personal. Both options provide the same services; you can choose either one depending on your need.

d). For the option chosen above, type the requested company or personal information. Note: at this point, you need to go through the AWS Customer Agreement to know the policies and procedures to follow while operating.

e). Finally, choose Create Account and Continue at the bottom.

f). Please note: at this point you will receive an e-mail confirming that your account has been created. You can now sign in to your new account using the email address and password you supplied earlier.

Please note: we have completed only Step 1; the activation process is not yet finished, so we cannot use the AWS services yet. We still need to follow three more steps.

STEP2: Add a payment method- Detailed steps:

At this point; On the Payment Information page,

a) Choose a payment method as per the payment gateway standards displayed.

b) Type the requested information associated with your payment method. Please make sure the address for your payment method is the same as the address you provided for your account. If your billing address is different, choose Use a new address and type the billing address for your payment method.

c) Now, choose Secure Submit.

STEP3: Verify your phone number.

Please keep a valid phone number handy at this point.

a) On the Phone Verification page, type a phone number where you can accept incoming phone calls.

b) Enter the code displayed in the captcha. When you are ready to receive a call, choose the Call me now option.

c) In a few moments, an automated system will call the phone number you provided. It might use an SMS instead if you are outside the North America region.

d) Using your phone's keypad, type the PIN shown on the AWS screen. e) After the process is complete, choose Continue.

STEP4: Choose the AWS Support plans.

a) At this point please visit the below URL: https://aws.amazon.com/premiumsupport/features/

You can select the AWS support plans from the given list.

b) After you select a Support plan, a confirmation page indicates that your account is being activated.

c) Please note: accounts are usually activated within a few minutes, but the process might take up to 24 hours. This process includes validation of the bank/CC account given there.

d) Hence, keep looking for a mail from Amazon on this subject so you can start using the AWS services.

Assuming everything went well, your AWS account is now activated. Congratulations!

We can look into next lab with reference to the class session.

2. AWS: WordPress[WP] infrastructure creation using a free tier account

https://wordpress.com/post/vskumar.blog/2884

 

If you are interested to learn Virtualization with Vagrant visit:

1. Vagrant/Virtual Box:How to create Virtual Machine[VM] on Windows 10?:

 

Note:

If you are not a student of my class and are looking to join, please contact me by mail with your LinkedIn identity, and send a connection request with a message about your need. You can use the contacts below. Please note: I teach globally.

 

Vcard-Shanthi Kumar V-v3

2. Graph database/Docker: How to install Neo4j on a docker container? [for Ubuntu 18.04 VM]

Neo4j                                                                                                      Docker-logo

In this blog/video, I have shown “Installing Neo4j DB on a Docker container using an Ubuntu 18.04 VM”.

Through this blog and video, I have demonstrated the below functions:

a) How to install docker on an Ubuntu 18.04 VM?

b) How to create the Neo4j container from the image ?

c) How to use the container for neo4j browser ?

d) How to login and operate the options ?

e) Then, how to shut down the Neo4j container?

A practice video covering all the above steps is provided for your lab work.

This is attached at the end of this blog.

Step1:
Initially, we need to make sure the prerequisite packages are installed.
To install them, run the following:

sudo apt-get -y install apt-transport-https ca-certificates curl

Step2:
Then, add the docker.com keys to our local keyset:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Step3:
Next, add the Docker repository to our system (Ubuntu users: I am assuming you have a 64-bit CPU in your VM):

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Step4:
Now, we need to prepare the filesystem.
Since we want to keep track of the logs and be able to reuse our data, we need to give the
Docker image some access to our filesystem.
In our home folder [~], let's create a neo4j folder with two subfolders named logs and data.
The script below will do it on a Linux platform:
cd ~
mkdir neo4j
cd neo4j
mkdir logs
mkdir data

You can put the above steps in a .sh script.
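The Step4 preparation can also be written as one self-contained script. In this sketch, /tmp/neo4j-demo is used as a stand-in base directory so it can be run anywhere; in the walkthrough the base is the home folder instead:

```shell
# Self-contained sketch of Step4; BASE is a scratch stand-in for $HOME
# so this can be run anywhere without touching your real home folder.
BASE=${BASE:-/tmp/neo4j-demo}
mkdir -p "$BASE/neo4j/logs" "$BASE/neo4j/data"

# Show the two subfolders that will be mounted into the container:
ls "$BASE/neo4j"
```

The `mkdir -p` form creates the whole path in one call and is safe to re-run.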

Step5:
How to run Neo4j in a Docker container?:
First, install Docker:

sudo apt install docker.io

Now, we can run the long command below in a Terminal to run Docker with a Neo4j image.

sudo docker run --rm --publish=7474:7474 --publish=7687:7687 --volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs neo4j:3.1.2

This command triggers some downloading, because our local Docker repository does not yet have
the Neo4j image in its 3.1.2 version.

Neo4j uses ports 7474, 7473, and 7687 for the http, https, and bolt protocols, respectively.
In the parameters you can see --volume twice.
It links a folder on the local filesystem to the container filesystem.
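Blog and word-processor rendering often turns a double hyphen (`--`) into a single dash, which silently breaks Docker flags. A safe habit is to keep the full command in a script and dry-run it first; here is a sketch (image tag taken from this walkthrough):

```shell
# Dry run of the docker command from Step5: build it as a string and print
# it, so the double-hyphen flags can be verified before actually running.
NEO4J_IMAGE=neo4j:3.1.2
DOCKER_CMD="docker run --rm --publish=7474:7474 --publish=7687:7687 \
--volume=$HOME/neo4j/data:/data --volume=$HOME/neo4j/logs:/logs $NEO4J_IMAGE"
echo "$DOCKER_CMD"
```

Once the printed command looks right, run it with `sudo sh -c "$DOCKER_CMD"` or paste it into the Terminal.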

Step6:
Providing the port numbers given as parameters were not in use,
the Terminal should display something like this:

Remote interface available at : http://localhost:7474

This indicates that our Docker container for Neo4j has started.

This tells us that Neo4j expects us to connect on port 7474 as usual.
So let us fire up our browser, go to the same
URL we saw earlier, http://localhost:7474, and start graphing!
(Our data will be persisted on disk.)

Step7:
Now, how to stop Docker running your image?

To stop the container, you pass not the name of the image
but the identifier of the running container (based on the image).

So, in another Terminal, let us type the following to see the status of containers:
sudo docker ps

This lists all the running containers; in our case, only one.
We look at the first column, CONTAINER ID, and use it as a parameter:
sudo docker stop container_id

Watch the terminal screen: the Docker container stops, as it should with this command.

For the typical installation procedure of Neo4j, visit my blog:

https://vskumar.blog/2017/12/08/how-to-install-neo4j-3-2-6-graph-database-on-ubuntu/

 

 

Vcard-Shanthi Kumar V-v3

Advt-course3rd page

31. DevOps: Jenkins-How to use Backup/Restore using thinbackup plugin ?

jenkins

Through this video I have demonstrated the steps below using Jenkins and its thinbackup plugin.

=== Steps used in video ====>

How to take a Jenkins backup?
1. You need to configure the thinbackup plugin.
2. Search for that plugin under the Manage Jenkins option.
3. Click on the Available tab. It shows the plugins available to install.
4. Then go to the filter and type the plugin name, thinbackup.
5. Now let us check it. You can see from the icon that it is installed.
6. Once you have this, you can explore it.
7. Please note you also need to configure Restore.
8. Now, let us configure the backup. After that we can use the Backup Now option to take a backup. It stores the backup on the given path, so we should use Settings first.
9. Now, let us test one backup.
10. Let us check the backup file.
11. Observe that the created jobs are there.
12. Now, let me run a build.
13. Created the 8th build.
14. Now let me take a new backup.
15. Now, let me use Restore to restore the past build.
16. See the current build history.
17. I am picking up the 1st one; it was made in the beginning.
18. Now, let us verify the Jenkins system jobs/builds.
19. It is overwritten on the existing jobs.
20. Let us delete some jobs and restore the backup that has the 8th build.
21. Let me try to restore the latest backup, which has the 8th build.
22. Let us restart the server to use the latest restore.
23. It is ready to log in; let us test it.
24. Please note: when restored, it unzipped and kept the files. When we restarted the Jenkins server, it picked up those files. We can see the 8th build is there.
25. From this exercise and troubleshooting, we can conclude:
i) We need the thinbackup plugin to set up the backup/restore process.
ii) We configure the backup first, and set up restore afterwards using the configured backup options.
iii) When we restore a particular build, we need to restart the Jenkins server.
That is all for this exercise.

============================>
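The backup sets created in the steps above live as dated folders on the configured backup path. A sketch for locating the newest full backup on disk; the FULL-&lt;timestamp&gt; folder naming and the directory used here are assumptions based on thinBackup's defaults, so check the plugin Settings for your real path:

```shell
# Sketch: locate the newest thinBackup full-backup set on disk.
# BACKUP_DIR and the FULL-* naming are assumptions; two demo folders are
# created here so the listing logic can be shown on any machine.
BACKUP_DIR=${BACKUP_DIR:-/tmp/jenkins-backup-demo}
mkdir -p "$BACKUP_DIR/FULL-2018-11-01_05-00" "$BACKUP_DIR/FULL-2018-11-02_05-00"

# The timestamped names sort chronologically, so the last entry is newest:
newest=$(ls -d "$BACKUP_DIR"/FULL-* | sort | tail -n 1)
echo "Newest full backup: $newest"
```

Knowing which folder is newest is useful before a restore, since step 21 above restores the latest backup.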

 

Advt-course3rd page

30. DevOps: Jenkins 2.9-How to remove and re-Install Jenkins 2.9 for Windows 10 with trial job test ?

jenkins

 

Through this video I have demonstrated the following steps:
1. Removing Jenkins from a Windows 10 laptop/desktop.
2. Installing it as a fresh setup on the same Windows 10 machine.
3. Playing around with creating 2 jobs through Build Now.

Also visit:

15. DevOps: How to setup jenkins 2.9 on Ubuntu-16.04 with jdk8 with a trouble shoot video guidance

16. DevOps: Working with Git on Ubuntu 16.04/18.04 VMs

 

 

29. DevOps: How to access internet through Vmware VM Bridge setup ?

 

Through this video I showed how to access the internet through a VMware virtual machine. The required setup steps are demonstrated along with the options used.

 

 

 

28.DevOps: How to install LinuxBrew package for Ubuntu VM?

 

linuxbrew-256x256

LinuxBrew is package-management software.
It enables installing packages from source on top of the system's default package management.
Some examples of default package managers are apt/deb in Debian/Ubuntu and yum/rpm in CentOS/RedHat.
LinuxBrew is similar to them; we have all seen this package manager (as Homebrew) used more on Mac OS systems.
In this blog, I demonstrate it as below.
The relevant command screen outputs are copied at the end of this blog.

I. To install this package we need to follow some prerequisites:
Prerequisites:
1. Update the current Ubuntu system with the command below:
$ sudo apt-get update

2. Upgrade the packages as below:
$ sudo apt-get upgrade -y

II. Now, we need to prepare the system for LinuxBrew package with the below commands:

 

$ sudo apt-get install -y build-essential make cmake scons curl git \
ruby autoconf automake autoconf-archive \
gettext libtool flex bison \
libbz2-dev libcurl4-openssl-dev \
libexpat-dev libncurses-dev

III. Now, we need to clone LinuxBrew from GitHub:

We clone LinuxBrew into a hidden directory in the home directory:
$ git clone https://github.com/Homebrew/linuxbrew.git ~/.linuxbrew
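The clone step above can be guarded so that re-running the setup is safe. A minimal sketch, assuming a helper function (the name clone_once and the check on the .git directory are illustrative, not part of the original steps):

```shell
#!/usr/bin/env bash
# Sketch: clone LinuxBrew only if the target directory does not already
# hold a git checkout, so the setup can be re-run without errors.
clone_once() {
  local dir="$1"
  if [ -d "$dir/.git" ]; then
    echo "LinuxBrew already cloned at $dir"
  else
    git clone https://github.com/Homebrew/linuxbrew.git "$dir"
  fi
}
# Real use: clone_once "$HOME/.linuxbrew"
```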

After cloning we need to update the environment variables as below:

We need to add LinuxBrew to the user’s environment variables.

As a part of this task, add the following lines at the end of the user’s ~/.bashrc file:

== Adding lines ====>
# Until LinuxBrew is fixed, the following is required.
# See: https://github.com/Homebrew/linuxbrew/issues/47
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig:$PKG_CONFIG_PATH
## Setup linux brew
export LINUXBREWHOME=$HOME/.linuxbrew
export PATH=$LINUXBREWHOME/bin:$PATH
export MANPATH=$LINUXBREWHOME/man:$MANPATH
export PKG_CONFIG_PATH=$LINUXBREWHOME/lib64/pkgconfig:$LINUXBREWHOME/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=$LINUXBREWHOME/lib64:$LINUXBREWHOME/lib:$LD_LIBRARY_PATH
== End of lines to add at the EOF of .bashrc ====>
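Appending these lines by hand risks duplicating them if the setup is run twice. A minimal sketch of an idempotent append, assuming a helper (the name add_once and the demo target file are assumptions; point BASHRC at "$HOME/.bashrc" for real use):

```shell
#!/usr/bin/env bash
# Sketch: append a line to a file only if it is not already present,
# so re-running the LinuxBrew setup does not duplicate .bashrc entries.
add_once() {
  local line="$1" file="$2"
  # -F: fixed string, -x: whole-line match, -q: quiet
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

# Demo target file; use "$HOME/.bashrc" in real use.
BASHRC="./demo_bashrc"
add_once 'export LINUXBREWHOME=$HOME/.linuxbrew' "$BASHRC"
add_once 'export PATH=$LINUXBREWHOME/bin:$PATH' "$BASHRC"
add_once 'export MANPATH=$LINUXBREWHOME/man:$MANPATH' "$BASHRC"
```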

IV. Now we need to test the installation:
Log out and log in again, so that the shell uses these new settings.

To test the installation, we need to run the below commands:

=== Testing the installation ====>
$ which brew
/home/ubuntu/.linuxbrew/bin/brew
$ echo $PKG_CONFIG_PATH
/home/ubuntu/.linuxbrew/lib64/pkgconfig:/home/ubuntu/.linuxbrew/lib/pkgconfig:/usr/local/lib/pkgconfig:/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig:
========>
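The which brew result above depends on PATH ordering. A quick, purely illustrative sketch of that check (it does not require brew to be installed):

```shell
#!/usr/bin/env bash
# Sketch: confirm the LinuxBrew bin directory is the first PATH entry,
# which is what makes `which brew` resolve to ~/.linuxbrew/bin/brew.
LINUXBREWHOME="${LINUXBREWHOME:-$HOME/.linuxbrew}"
PATH="$LINUXBREWHOME/bin:$PATH"   # same effect as the .bashrc line above
first="${PATH%%:*}"               # everything before the first ':'
if [ "$first" = "$LINUXBREWHOME/bin" ]; then
  echo "PATH order OK"
fi
```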
To fix common problems we may encounter while using LinuxBrew,
we need to run the below command twice:
$ brew update

We also need to run brew doctor and fix all the warnings it reports:
$ brew doctor

V. Now let us test it by installing vim on the system:

$ brew install vim

Note: brew should be run as the normal user, not with sudo.
You can also try installing other packages as you need them.

COPIED THE EXECUTED COMMANDS SCREEN OUTPUT, FYI.

==== Output of screen commands ====>
vskumar@ubuntu:~/K8$ sudo apt-get update
[sudo] password for vskumar:
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Hit:4 http://ppa.launchpad.net/conjure-up/next/ubuntu xenial InRelease
Hit:5 http://ppa.launchpad.net/juju/devel/ubuntu xenial InRelease
Hit:6 https://download.docker.com/linux/ubuntu xenial InRelease
Hit:7 http://ppa.launchpad.net/webupd8team/java/ubuntu xenial InRelease
Ign:9 https://pkg.jenkins.io/debian-stable binary/ InRelease
Ign:10 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial InRelease
Hit:11 https://pkg.jenkins.io/debian-stable binary/ Release
Ign:12 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial InRelease
Ign:13 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial Release
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Ign:14 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial Release
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:8 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Ign:23 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Err:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
403 Forbidden
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Err:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
403 Forbidden
Ign:23 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Fetched 118 kB in 32s (3,666 B/s)
Reading package lists… Done
W: The repository ‘https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial Release’ does not have a Release file.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: The repository ‘https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial Release’ does not have a Release file.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: The repository ‘http://apt.kubernetes.io kubernetes-xenial InRelease’ is not signed.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu/dists/xenial/test-17.06/binary-amd64/Packages 403 Forbidden
E: Failed to fetch https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu/dists/xenial/test-17.06/binary-amd64/Packages 403 Forbidden
E: Some index files failed to download. They have been ignored, or old ones used instead.
vskumar@ubuntu:~/K8$ sudo apt-get upgrade -y
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
ca-certificates-java default-jre-headless java-common openjdk-8-jre-headless
Use ‘sudo apt autoremove’ to remove them.
The following packages have been kept back:
cups-filters cups-filters-core-drivers gir1.2-javascriptcoregtk-4.0
gir1.2-webkit2-4.0 libdrm-amdgpu1 libdrm2 libegl1-mesa libgbm1
libgl1-mesa-dri libjavascriptcoregtk-4.0-18 libmm-glib0 libqmi-proxy
libwayland-egl1-mesa libwebkit2gtk-4.0-37 libwebkit2gtk-4.0-37-gtk2
libxatracker2 linux-generic-hwe-16.04 linux-headers-generic-hwe-16.04
linux-image-generic-hwe-16.04 modemmanager open-vm-tools
open-vm-tools-desktop qpdf
The following packages will be upgraded:
ansible apache2 apache2-bin apache2-data apache2-utils apparmor apport
apport-gtk apt apt-transport-https apt-utils avahi-autoipd avahi-daemon
avahi-utils bamfdaemon base-files ca-certificates-java compiz compiz-core
compiz-gnome compiz-plugins-default cpp-5 cups cups-browsed cups-bsd
cups-client cups-common cups-core-drivers cups-daemon cups-ppdc
cups-server-common curl distro-info-data docker-ce dpkg dpkg-dev ebtables
firefox firefox-locale-en fonts-opensymbol friendly-recovery fwupd g++-5
gcc-5 gcc-5-base ghostscript ghostscript-x gir1.2-ibus-1.0 gir1.2-unity-5.0
gnome-accessibility-themes gnome-software gnome-software-common grub-common
grub-pc grub-pc-bin grub2-common hdparm ibus ibus-gtk ibus-gtk3 ifupdown
initramfs-tools initramfs-tools-bin initramfs-tools-core isc-dhcp-client
isc-dhcp-common jenkins libapparmor-perl libapparmor1 libapt-inst2.0
libapt-pkg5.0 libasan2 libatomic1 libaudit-common libaudit1 libavahi-client3
libavahi-common-data libavahi-common3 libavahi-core7 libavahi-glib1
libavahi-ui-gtk3-0 libbamf3-2 libcc1-0 libcilkrts5 libcompizconfig0 libcups2
libcupscgi1 libcupsfilters1 libcupsimage2 libcupsmime1 libcupsppdc1 libcurl3
libcurl3-gnutls libdecoration0 libdfu1 libdpkg-perl libfontembed1 libfwupd1
libgcc-5-dev libgcrypt20 libgomp1 libgs9 libgs9-common libibus-1.0-5
libicu55 libitm1 liblsan0 libmpx0 libnuma1 libpam-modules libpam-modules-bin
libpam-runtime libpam-systemd libpam0g libpci3 libperl5.22 libplymouth4
libpoppler-glib8 libpoppler58 libprocps4 libpulse-mainloop-glib0 libpulse0
libpulsedsp libpython-all-dev libpython-dev libpython-stdlib libquadmath0
libraw15 libreoffice-avmedia-backend-gstreamer libreoffice-base-core
libreoffice-calc libreoffice-common libreoffice-core libreoffice-draw
libreoffice-gnome libreoffice-gtk libreoffice-impress libreoffice-math
libreoffice-ogltrans libreoffice-pdfimport libreoffice-style-breeze
libreoffice-style-galaxy libreoffice-writer libruby2.3 libsmbclient
libsnmp-base libsnmp30 libssl1.0.0 libstdc++-5-dev libstdc++6 libsystemd0
libtiff5 libtsan0 libubsan0 libudev1 libunity-core-6.0-9
libunity-protocol-private0 libunity-scopes-json-def-desktop libunity9
libvncclient1 libvorbis0a libvorbisenc2 libvorbisfile3 libwayland-client0
libwayland-cursor0 libwayland-server0 libwbclient0 light-themes linux-base
linux-firmware linux-libc-dev lshw openjdk-8-jre-headless openssh-client
openssh-server openssh-sftp-server openssl patch pciutils perl perl-base
perl-modules-5.22 plymouth plymouth-label plymouth-theme-ubuntu-logo
plymouth-theme-ubuntu-text poppler-utils procps pulseaudio
pulseaudio-module-bluetooth pulseaudio-module-x11 pulseaudio-utils python
python-all python-all-dev python-apt python-apt-common python-crypto
python-dev python-minimal python-paramiko python-samba
python-software-properties python3-apport python3-apt python3-distupgrade
python3-problem-report python3-uno python3-update-manager ruby2.3 samba
samba-common samba-common-bin samba-dsdb-modules samba-libs
samba-vfs-modules sensible-utils suru-icon-theme systemd systemd-sysv
thunderbird thunderbird-gnome-support thunderbird-locale-en
thunderbird-locale-en-us ubuntu-artwork ubuntu-drivers-common
ubuntu-mobile-icons ubuntu-mono ubuntu-release-upgrader-core
ubuntu-release-upgrader-gtk ubuntu-software udev unity unity-schemas
unity-scopes-runner unity-services uno-libs3 update-manager
update-manager-core update-notifier update-notifier-common ure wget
xdg-user-dirs xdg-utils
245 upgraded, 0 newly installed, 0 to remove and 23 not upgraded.
Need to get 459 MB of archives.
After this operation, 109 MB of additional disk space will be used.
Get:1 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial/main amd64 ansible all 2.5.4-1ppa~xenial [3,181 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 base-files amd64 9.4ubuntu4.6 [55.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg amd64 1.18.4ubuntu1.4 [2,088 kB]
Get:4 https://download.docker.com/linux/ubuntu xenial/edge amd64 docker-ce amd64 18.05.0~ce~3-0~ubuntu [34.2 MB]
Get:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libperl5.22 amd64 5.22.1-9ubuntu0.3 [3,402 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 perl amd64 5.22.1-9ubuntu0.3 [237 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 perl-base amd64 5.22.1-9ubuntu0.3 [1,286 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 perl-modules-5.22 all 5.22.1-9ubuntu0.3 [2,646 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.9 [131 kB]
Get:10 https://pkg.jenkins.io/debian-stable binary/ jenkins 2.107.3 [72.5 MB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.9 [55.0 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.9 [27.4 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.9 [8,882 B]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.9 [264 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.9 [105 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.9 [244 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.9 [95.2 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.9 [40.1 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.9 [9,774 B]
Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 g++-5 amd64 5.4.0-6ubuntu1~16.04.9 [8,333 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.9 [8,650 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.9 [7,685 kB]
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.9 [38.8 kB]
Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++-5-dev amd64 5.4.0-6ubuntu1~16.04.9 [1,427 kB]
Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.9 [2,242 kB]
Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5-base amd64 5.4.0-6ubuntu1~16.04.9 [17.3 kB]
Get:27 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libstdc++6 amd64 5.4.0-6ubuntu1~16.04.9 [393 kB]
Get:28 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-pkg5.0 amd64 1.2.26 [706 kB]
Get:29 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-inst2.0 amd64 1.2.26 [55.4 kB]
Get:30 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt amd64 1.2.26 [1,043 kB]
Get:31 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-utils amd64 1.2.26 [197 kB]
Get:32 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libaudit-common all 1:2.4.5-1ubuntu2.1 [3,924 B]
Get:33 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libaudit1 amd64 1:2.4.5-1ubuntu2.1 [36.2 kB]
Get:34 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam0g amd64 1.1.8-3.2ubuntu2.1 [55.6 kB]
Get:35 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-modules-bin amd64 1.1.8-3.2ubuntu2.1 [36.9 kB]
Get:36 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-modules amd64 1.1.8-3.2ubuntu2.1 [244 kB]
Get:37 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-runtime all 1.1.8-3.2ubuntu2.1 [37.9 kB]
Get:38 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libprocps4 amd64 2:3.3.10-4ubuntu2.4 [33.1 kB]
Get:39 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 procps amd64 2:3.3.10-4ubuntu2.4 [222 kB]
Get:40 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsystemd0 amd64 229-4ubuntu21.2 [205 kB]
Get:41 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpam-systemd amd64 229-4ubuntu21.2 [115 kB]
Get:42 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ifupdown amd64 0.8.10ubuntu1.4 [54.9 kB]
Get:43 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd amd64 229-4ubuntu21.2 [3,634 kB]
Get:44 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 udev amd64 229-4ubuntu21.2 [993 kB]
Get:45 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libudev1 amd64 229-4ubuntu21.2 [54.4 kB]
Get:46 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-pc amd64 2.02~beta2-36ubuntu3.18 [197 kB]
Get:47 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-pc-bin amd64 2.02~beta2-36ubuntu3.18 [889 kB]
Get:48 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub2-common amd64 2.02~beta2-36ubuntu3.18 [511 kB]
Get:49 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 grub-common amd64 2.02~beta2-36ubuntu3.18 [1,706 kB]
Get:50 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 friendly-recovery all 0.2.31ubuntu1 [9,496 B]
Get:51 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.11 [8,590 B]
Get:52 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-core all 0.122ubuntu8.11 [42.9 kB]
Get:53 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools-bin amd64 0.122ubuntu8.11 [9,592 B]
Get:54 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-base all 4.5ubuntu1~16.04.1 [18.1 kB]
Get:55 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 systemd-sysv amd64 229-4ubuntu21.2 [11.9 kB]
Get:56 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapparmor1 amd64 2.10.95-0ubuntu2.9 [29.9 kB]
Get:57 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.12 [1,085 kB]
Get:58 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apache2 amd64 2.4.18-2ubuntu3.8 [86.8 kB]
Get:59 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apache2-bin amd64 2.4.18-2ubuntu3.8 [926 kB]
Get:60 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apache2-utils amd64 2.4.18-2ubuntu3.8 [82.0 kB]
Get:61 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apache2-data all 2.4.18-2ubuntu3.8 [162 kB]
Get:62 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-common-data amd64 0.6.32~rc+dfsg-1ubuntu2.2 [21.5 kB]
Get:63 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-common3 amd64 0.6.32~rc+dfsg-1ubuntu2.2 [21.6 kB]
Get:64 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-client3 amd64 0.6.32~rc+dfsg-1ubuntu2.2 [25.2 kB]
Get:65 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-glib1 amd64 0.6.32~rc+dfsg-1ubuntu2.2 [7,708 B]
Get:66 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-core-drivers amd64 2.1.3-4ubuntu0.4 [27.2 kB]
Get:67 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-server-common all 2.1.3-4ubuntu0.4 [494 kB]
Get:68 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-common all 2.1.3-4ubuntu0.4 [134 kB]
Get:69 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcupscgi1 amd64 2.1.3-4ubuntu0.4 [27.2 kB]
Get:70 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-client amd64 2.1.3-4ubuntu0.4 [133 kB]
Get:71 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcupsimage2 amd64 2.1.3-4ubuntu0.4 [16.1 kB]
Get:72 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcupsppdc1 amd64 2.1.3-4ubuntu0.4 [45.0 kB]
Get:73 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-browsed amd64 1.8.3-2ubuntu3.4 [92.9 kB]
Get:74 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-daemon amd64 2.1.3-4ubuntu0.4 [302 kB]
Get:75 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcupsmime1 amd64 2.1.3-4ubuntu0.4 [13.0 kB]
Get:76 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcups2 amd64 2.1.3-4ubuntu0.4 [197 kB]
Get:77 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups amd64 2.1.3-4ubuntu0.4 [192 kB]
Get:78 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-bsd amd64 2.1.3-4ubuntu0.4 [34.8 kB]
Get:79 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtiff5 amd64 4.0.6-1ubuntu0.4 [148 kB]
Get:80 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcupsfilters1 amd64 1.8.3-2ubuntu3.4 [80.5 kB]
Get:81 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpoppler58 amd64 0.41.0-0ubuntu1.7 [758 kB]
Get:82 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 poppler-utils amd64 0.41.0-0ubuntu1.7 [130 kB]
Get:83 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ghostscript amd64 9.18~dfsg~0-0ubuntu2.8 [40.9 kB]
Get:84 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ghostscript-x amd64 9.18~dfsg~0-0ubuntu2.8 [34.4 kB]
Get:85 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgs9-common all 9.18~dfsg~0-0ubuntu2.8 [2,979 kB]
Get:86 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgs9 amd64 9.18~dfsg~0-0ubuntu2.8 [2,057 kB]
Get:87 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cups-ppdc amd64 2.1.3-4ubuntu0.4 [26.5 kB]
Get:88 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libicu55 amd64 55.1-7ubuntu0.4 [7,646 kB]
Get:89 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-calc amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [6,452 kB]
Get:90 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-gnome amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [60.8 kB]
Get:91 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-gtk amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [206 kB]
Get:92 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-writer amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [7,558 kB]
Get:93 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-style-galaxy all 1:5.1.6~rc2-0ubuntu1~xenial3 [1,522 kB]
Get:94 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 uno-libs3 amd64 5.1.6~rc2-0ubuntu1~xenial3 [704 kB]
Get:95 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-ogltrans amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [73.3 kB]
Get:96 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ure amd64 5.1.6~rc2-0ubuntu1~xenial3 [1,535 kB]
Get:97 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-style-breeze all 1:5.1.6~rc2-0ubuntu1~xenial3 [470 kB]
Get:98 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-common all 1:5.1.6~rc2-0ubuntu1~xenial3 [22.4 MB]
Get:99 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-pdfimport amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [182 kB]
Get:100 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-uno amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [137 kB]
Get:101 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-base-core amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [716 kB]
Get:102 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-math amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [373 kB]
Get:103 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-avmedia-backend-gstreamer amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [24.2 kB]
Get:104 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-draw amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [2,401 kB]
Get:105 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-impress amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [970 kB]
Get:106 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libreoffice-core amd64 1:5.1.6~rc2-0ubuntu1~xenial3 [28.2 MB]
Get:107 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 fonts-opensymbol all 2:102.7+LibO5.1.6~rc2-0ubuntu1~xenial3 [104 kB]
Get:108 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.8 [139 kB]
Get:109 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.8 [185 kB]
Get:110 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba-vfs-modules amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [257 kB]
Get:111 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba-dsdb-modules amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [215 kB]
Get:112 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-all-dev amd64 2.7.12-1~16.04 [1,016 B]
Get:113 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-dev amd64 2.7.12-1~16.04 [1,186 B]
Get:114 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-all amd64 2.7.12-1~16.04 [996 B]
Get:115 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-minimal amd64 2.7.12-1~16.04 [28.1 kB]
Get:116 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python amd64 2.7.12-1~16.04 [137 kB]
Get:117 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython-all-dev amd64 2.7.12-1~16.04 [1,006 B]
Get:118 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython-dev amd64 2.7.12-1~16.04 [7,840 B]
Get:119 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython-stdlib amd64 2.7.12-1~16.04 [7,768 B]
Get:120 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-crypto amd64 2.6.1-6ubuntu0.16.04.3 [246 kB]
Get:121 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-samba amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [1,059 kB]
Get:122 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba-common-bin amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [506 kB]
Get:123 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsmbclient amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [53.3 kB]
Get:124 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba-libs amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [5,166 kB]
Get:125 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libwbclient0 amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [30.4 kB]
Get:126 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba amd64 2:4.3.11+dfsg-0ubuntu0.16.04.13 [906 kB]
Get:127 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 samba-common all 2:4.3.11+dfsg-0ubuntu0.16.04.13 [83.5 kB]
Get:128 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 pciutils amd64 1:3.3.1-1.1ubuntu1.2 [234 kB]
Get:129 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpci3 amd64 1:3.3.1-1.1ubuntu1.2 [24.5 kB]
Get:130 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-apt-common all 1.1.0~beta1ubuntu0.16.04.1 [16.0 kB]
Get:131 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-apt amd64 1.1.0~beta1ubuntu0.16.04.1 [137 kB]
Get:132 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-drivers-common amd64 1:0.4.17.7 [49.9 kB]
Get:133 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-release-upgrader-gtk all 1:16.04.25 [9,344 B]
Get:134 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-release-upgrader-core all 1:16.04.25 [29.6 kB]
Get:135 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-apt amd64 1.1.0~beta1ubuntu0.16.04.1 [139 kB]
Get:136 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-manager all 1:16.04.13 [543 kB]
Get:137 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-distupgrade all 1:16.04.25 [104 kB]
Get:138 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-update-manager all 1:16.04.13 [32.6 kB]
Get:139 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-manager-core all 1:16.04.13 [5,496 B]
Get:140 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-notifier amd64 3.168.8 [47.3 kB]
Get:141 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdpkg-perl all 1.18.4ubuntu1.4 [195 kB]
Get:142 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dpkg-dev all 1.18.4ubuntu1.4 [584 kB]
Get:143 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 patch amd64 2.7.5-1ubuntu0.16.04.1 [90.5 kB]
Get:144 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 update-notifier-common all 3.168.8 [164 kB]
Get:145 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcrypt20 amd64 1.6.5-2ubuntu0.4 [337 kB]
Get:146 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 sensible-utils all 0.0.9ubuntu0.16.04.1 [10.0 kB]
Get:147 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 distro-info-data all 0.28ubuntu0.8 [4,502 B]
Get:148 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-client amd64 4.3.3-5ubuntu12.10 [224 kB]
Get:149 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 isc-dhcp-common amd64 4.3.3-5ubuntu12.10 [105 kB]
Get:150 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapparmor-perl amd64 2.10.95-0ubuntu2.9 [31.5 kB]
Get:151 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apparmor amd64 2.10.95-0ubuntu2.9 [450 kB]
Get:152 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-transport-https amd64 1.2.26 [26.1 kB]
Get:153 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 hdparm amd64 9.48+ds-1ubuntu0.1 [92.6 kB]
Get:154 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libnuma1 amd64 2.0.11-1ubuntu1.1 [21.0 kB]
Get:155 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libplymouth4 amd64 0.9.2-3ubuntu13.5 [85.2 kB]
Get:156 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 lshw amd64 02.17-1.1ubuntu3.5 [215 kB]
Get:157 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-sftp-server amd64 1:7.2p2-4ubuntu2.4 [38.7 kB]
Get:158 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-server amd64 1:7.2p2-4ubuntu2.4 [335 kB]
Get:159 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssh-client amd64 1:7.2p2-4ubuntu2.4 [589 kB]
Get:160 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssl amd64 1.0.2g-1ubuntu4.12 [492 kB]
Get:161 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 plymouth-theme-ubuntu-text amd64 0.9.2-3ubuntu13.5 [9,090 B]
Get:162 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 plymouth amd64 0.9.2-3ubuntu13.5 [107 kB]
Get:163 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 plymouth-theme-ubuntu-logo amd64 0.9.2-3ubuntu13.5 [22.1 kB]
Get:164 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 plymouth-label amd64 0.9.2-3ubuntu13.5 [6,080 B]
Get:165 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 wget amd64 1.17.1-1ubuntu1.4 [299 kB]
Get:166 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xdg-user-dirs amd64 0.15-2ubuntu6.16.04.1 [61.8 kB]
Get:167 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-paramiko all 1.16.0-1ubuntu0.1 [109 kB]
Get:168 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-problem-report all 2.20.1-0ubuntu2.18 [9,754 B]
Get:169 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-apport all 2.20.1-0ubuntu2.18 [79.6 kB]
Get:170 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apport all 2.20.1-0ubuntu2.18 [121 kB]
Get:171 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apport-gtk all 2.20.1-0ubuntu2.18 [9,578 B]
Get:172 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 avahi-autoipd amd64 0.6.32~rc+dfsg-1ubuntu2.2 [36.5 kB]
Get:173 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 pulseaudio-module-bluetooth amd64 1:8.0-0ubuntu3.10 [58.5 kB]
Get:174 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpulsedsp amd64 1:8.0-0ubuntu3.10 [21.1 kB]
Get:175 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 pulseaudio-utils amd64 1:8.0-0ubuntu3.10 [50.9 kB]
Get:176 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpulse-mainloop-glib0 amd64 1:8.0-0ubuntu3.10 [11.5 kB]
Get:177 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 pulseaudio-module-x11 amd64 1:8.0-0ubuntu3.10 [15.9 kB]
Get:178 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 pulseaudio amd64 1:8.0-0ubuntu3.10 [769 kB]
Get:179 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpulse0 amd64 1:8.0-0ubuntu3.10 [249 kB]
Get:180 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-core7 amd64 0.6.32~rc+dfsg-1ubuntu2.2 [81.5 kB]
Get:181 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 avahi-daemon amd64 0.6.32~rc+dfsg-1ubuntu2.2 [59.5 kB]
Get:182 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 avahi-utils amd64 0.6.32~rc+dfsg-1ubuntu2.2 [24.3 kB]
Get:183 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 bamfdaemon amd64 0.5.3~bzr0+16.04.20180209-0ubuntu1 [82.2 kB]
Get:184 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libbamf3-2 amd64 0.5.3~bzr0+16.04.20180209-0ubuntu1 [51.8 kB]
Get:185 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openjdk-8-jre-headless amd64 8u171-b11-0ubuntu0.16.04.1 [27.0 MB]
Get:186 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ca-certificates-java all 20160321ubuntu1 [12.5 kB]
Get:187 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcompizconfig0 amd64 1:0.9.12.3+16.04.20180221-0ubuntu1 [118 kB]
Get:188 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 compiz-gnome amd64 1:0.9.12.3+16.04.20180221-0ubuntu1 [127 kB]
Get:189 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 compiz-plugins-default amd64 1:0.9.12.3+16.04.20180221-0ubuntu1 [821 kB]
Get:190 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdecoration0 amd64 1:0.9.12.3+16.04.20180221-0ubuntu1 [51.9 kB]
Get:191 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unity amd64 7.4.5+16.04.20180221-0ubuntu1 [1,619 kB]
Get:192 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libunity-protocol-private0 amd64 7.1.4+16.04.20180209.1-0ubuntu1 [78.7 kB]
Get:193 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libunity9 amd64 7.1.4+16.04.20180209.1-0ubuntu1 [199 kB]
Get:194 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libunity-core-6.0-9 amd64 7.4.5+16.04.20180221-0ubuntu1 [437 kB]
Get:195 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unity-schemas all 7.4.5+16.04.20180221-0ubuntu1 [12.9 kB]
Get:196 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libunity-scopes-json-def-desktop all 7.1.4+16.04.20180209.1-0ubuntu1 [3,548 B]
Get:197 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unity-services amd64 7.4.5+16.04.20180221-0ubuntu1 [33.4 kB]
Get:198 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 compiz-core amd64 1:0.9.12.3+16.04.20180221-0ubuntu1 [348 kB]
Get:199 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 compiz all 1:0.9.12.3+16.04.20180221-0ubuntu1 [3,860 B]
Get:200 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ebtables amd64 2.0.10.4-3.4ubuntu2.16.04.1 [79.6 kB]
Get:201 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 firefox amd64 60.0.1+build2-0ubuntu0.16.04.1 [44.0 MB]
Get:202 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 firefox-locale-en amd64 60.0.1+build2-0ubuntu0.16.04.1 [740 kB]
Get:203 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdfu1 amd64 0.8.3-0ubuntu3 [48.6 kB]
Get:204 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libfwupd1 amd64 0.8.3-0ubuntu3 [33.1 kB]
Get:205 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 fwupd amd64 0.8.3-0ubuntu3 [119 kB]
Get:206 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libibus-1.0-5 amd64 1.5.11-1ubuntu2.1 [125 kB]
Get:207 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ibus amd64 1.5.11-1ubuntu2.1 [205 kB]
Get:208 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gir1.2-ibus-1.0 amd64 1.5.11-1ubuntu2.1 [66.0 kB]
Get:209 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gir1.2-unity-5.0 amd64 7.1.4+16.04.20180209.1-0ubuntu1 [20.2 kB]
Get:210 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gnome-accessibility-themes all 3.18.0-2ubuntu2 [2,298 kB]
Get:211 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-software amd64 3.20.5-0ubuntu0.16.04.10 [11.7 kB]
Get:212 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gnome-software amd64 3.20.5-0ubuntu0.16.04.10 [244 kB]
Get:213 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gnome-software-common all 3.20.5-0ubuntu0.16.04.10 [2,521 kB]
Get:214 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ibus-gtk amd64 1.5.11-1ubuntu2.1 [14.7 kB]
Get:215 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ibus-gtk3 amd64 1.5.11-1ubuntu2.1 [14.8 kB]
Get:216 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libavahi-ui-gtk3-0 amd64 0.6.32~rc+dfsg-1ubuntu2.2 [19.0 kB]
Get:217 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3 amd64 7.47.0-1ubuntu2.8 [187 kB]
Get:218 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libfontembed1 amd64 1.8.3-2ubuntu3.4 [47.2 kB]
Get:219 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpoppler-glib8 amd64 0.41.0-0ubuntu1.7 [104 kB]
Get:220 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libraw15 amd64 0.17.1-1ubuntu0.3 [230 kB]
Get:221 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsnmp-base all 5.7.3+dfsg-1ubuntu4.1 [224 kB]
Get:222 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libsnmp30 amd64 5.7.3+dfsg-1ubuntu4.1 [811 kB]
Get:223 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libvncclient1 amd64 0.9.10+dfsg-3ubuntu0.16.04.2 [54.2 kB]
Get:224 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libvorbisfile3 amd64 1.3.5-3ubuntu0.2 [15.9 kB]
Get:225 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libvorbisenc2 amd64 1.3.5-3ubuntu0.2 [70.6 kB]
Get:226 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libvorbis0a amd64 1.3.5-3ubuntu0.2 [86.0 kB]
Get:227 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libwayland-client0 amd64 1.12.0-1~ubuntu16.04.3 [22.5 kB]
Get:228 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libwayland-cursor0 amd64 1.12.0-1~ubuntu16.04.3 [10.1 kB]
Get:229 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libwayland-server0 amd64 1.12.0-1~ubuntu16.04.3 [28.0 kB]
Get:230 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-mono all 14.04+16.04.20180326-0ubuntu1 [178 kB]
Get:231 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 light-themes all 14.04+16.04.20180326-0ubuntu1 [154 kB]
Get:232 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-firmware all 1.157.19 [50.7 MB]
Get:233 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-127.153 [870 kB]
Get:234 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-software-properties all 0.96.20.7 [20.7 kB]
Get:235 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-mobile-icons all 14.04+16.04.20180326-0ubuntu1 [6,840 kB]
Get:236 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 suru-icon-theme all 14.04+16.04.20180326-0ubuntu1 [1,626 kB]
Get:237 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 thunderbird-locale-en amd64 1:52.8.0+build1-0ubuntu0.16.04.1 [469 kB]
Get:238 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 thunderbird amd64 1:52.8.0+build1-0ubuntu0.16.04.1 [42.3 MB]
Get:239 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 thunderbird-gnome-support amd64 1:52.8.0+build1-0ubuntu0.16.04.1 [8,530 B]
Get:240 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 thunderbird-locale-en-us all 1:52.8.0+build1-0ubuntu0.16.04.1 [9,336 B]
Get:241 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ubuntu-artwork all 1:14.04+16.04.20180326-0ubuntu1 [7,612 B]
Get:242 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unity-scopes-runner all 7.1.4+16.04.20180209.1-0ubuntu1 [4,180 B]
Get:243 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xdg-utils all 1.1.1-1ubuntu1.16.04.3 [59.6 kB]
Get:244 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libruby2.3 amd64 2.3.1-2~16.04.9 [2,963 kB]
Get:245 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ruby2.3 amd64 2.3.1-2~16.04.9 [41.0 kB]
Fetched 459 MB in 4min 39s (1,640 kB/s)
Extracting templates from packages: 100%
Preconfiguring packages …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/base-files_9.4ubuntu4.6_amd64.deb …
Unpacking base-files (9.4ubuntu4.6) over (9.4ubuntu4.5) …
Processing triggers for plymouth-theme-ubuntu-text (0.9.2-3ubuntu13.2) …
update-initramfs: deferring update (trigger activated)
Processing triggers for cracklib-runtime (2.9.2-1ubuntu1) …
Processing triggers for install-info (6.1.0.dfsg.1-5) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for initramfs-tools (0.122ubuntu8.10) …
update-initramfs: Generating /boot/initrd.img-4.10.0-40-generic
Setting up base-files (9.4ubuntu4.6) …
Installing new version of config file /etc/issue …
Installing new version of config file /etc/issue.net …
Installing new version of config file /etc/lsb-release …
Processing triggers for plymouth-theme-ubuntu-text (0.9.2-3ubuntu13.2) …
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.122ubuntu8.10) …
update-initramfs: Generating /boot/initrd.img-4.10.0-40-generic
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/dpkg_1.18.4ubuntu1.4_amd64.deb …
Unpacking dpkg (1.18.4ubuntu1.4) over (1.18.4ubuntu1.3) …
Setting up dpkg (1.18.4ubuntu1.4) …
Processing triggers for man-db (2.7.5-1) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libperl5.22_5.22.1-9ubuntu0.3_amd64.deb …
Unpacking libperl5.22:amd64 (5.22.1-9ubuntu0.3) over (5.22.1-9ubuntu0.2) …
Preparing to unpack …/perl_5.22.1-9ubuntu0.3_amd64.deb …
Unpacking perl (5.22.1-9ubuntu0.3) over (5.22.1-9ubuntu0.2) …
Preparing to unpack …/perl-base_5.22.1-9ubuntu0.3_amd64.deb …
Unpacking perl-base (5.22.1-9ubuntu0.3) over (5.22.1-9ubuntu0.2) …
Processing triggers for man-db (2.7.5-1) …
Setting up perl-base (5.22.1-9ubuntu0.3) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/perl-modules-5.22_5.22.1-9ubuntu0.3_all.deb …
Unpacking perl-modules-5.22 (5.22.1-9ubuntu0.3) over (5.22.1-9ubuntu0.2) …
Preparing to unpack …/libquadmath0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libgomp1_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libitm1_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libatomic1_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libasan2_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/liblsan0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libtsan0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libubsan0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libcilkrts5_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libmpx0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/g++-5_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking g++-5 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/gcc-5_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/cpp-5_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libcc1-0_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libstdc++-5-dev_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/libgcc-5-dev_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Preparing to unpack …/gcc-5-base_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking gcc-5-base:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for man-db (2.7.5-1) …
Setting up gcc-5-base:amd64 (5.4.0-6ubuntu1~16.04.9) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libstdc++6_5.4.0-6ubuntu1~16.04.9_amd64.deb …
Unpacking libstdc++6:amd64 (5.4.0-6ubuntu1~16.04.9) over (5.4.0-6ubuntu1~16.04.6) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libstdc++6:amd64 (5.4.0-6ubuntu1~16.04.9) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libapt-pkg5.0_1.2.26_amd64.deb …
Unpacking libapt-pkg5.0:amd64 (1.2.26) over (1.2.25) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libapt-pkg5.0:amd64 (1.2.26) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libapt-inst2.0_1.2.26_amd64.deb …
Unpacking libapt-inst2.0:amd64 (1.2.26) over (1.2.25) …
Preparing to unpack …/archives/apt_1.2.26_amd64.deb …
Unpacking apt (1.2.26) over (1.2.25) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for man-db (2.7.5-1) …
Setting up apt (1.2.26) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/apt-utils_1.2.26_amd64.deb …
Unpacking apt-utils (1.2.26) over (1.2.25) …
Preparing to unpack …/libaudit-common_1%3a2.4.5-1ubuntu2.1_all.deb …
Unpacking libaudit-common (1:2.4.5-1ubuntu2.1) over (1:2.4.5-1ubuntu2) …
Processing triggers for man-db (2.7.5-1) …
Setting up libaudit-common (1:2.4.5-1ubuntu2.1) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libaudit1_1%3a2.4.5-1ubuntu2.1_amd64.deb …
Unpacking libaudit1:amd64 (1:2.4.5-1ubuntu2.1) over (1:2.4.5-1ubuntu2) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libaudit1:amd64 (1:2.4.5-1ubuntu2.1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libpam0g_1.1.8-3.2ubuntu2.1_amd64.deb …
Unpacking libpam0g:amd64 (1.1.8-3.2ubuntu2.1) over (1.1.8-3.2ubuntu2) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libpam0g:amd64 (1.1.8-3.2ubuntu2.1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libpam-modules-bin_1.1.8-3.2ubuntu2.1_amd64.deb …
Unpacking libpam-modules-bin (1.1.8-3.2ubuntu2.1) over (1.1.8-3.2ubuntu2) …
Processing triggers for man-db (2.7.5-1) …
Setting up libpam-modules-bin (1.1.8-3.2ubuntu2.1) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libpam-modules_1.1.8-3.2ubuntu2.1_amd64.deb …
Unpacking libpam-modules:amd64 (1.1.8-3.2ubuntu2.1) over (1.1.8-3.2ubuntu2) …
Processing triggers for man-db (2.7.5-1) …
Setting up libpam-modules:amd64 (1.1.8-3.2ubuntu2.1) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libpam-runtime_1.1.8-3.2ubuntu2.1_all.deb …
Unpacking libpam-runtime (1.1.8-3.2ubuntu2.1) over (1.1.8-3.2ubuntu2) …
Processing triggers for man-db (2.7.5-1) …
Setting up libpam-runtime (1.1.8-3.2ubuntu2.1) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libprocps4_2%3a3.3.10-4ubuntu2.4_amd64.deb …
Unpacking libprocps4:amd64 (2:3.3.10-4ubuntu2.4) over (2:3.3.10-4ubuntu2.3) …
Preparing to unpack …/procps_2%3a3.3.10-4ubuntu2.4_amd64.deb …
Unpacking procps (2:3.3.10-4ubuntu2.4) over (2:3.3.10-4ubuntu2.3) …
Preparing to unpack …/libsystemd0_229-4ubuntu21.2_amd64.deb …
Unpacking libsystemd0:amd64 (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for ureadahead (0.100.0-19) …
Setting up libsystemd0:amd64 (229-4ubuntu21.2) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/libpam-systemd_229-4ubuntu21.2_amd64.deb …
Unpacking libpam-systemd:amd64 (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Preparing to unpack …/ifupdown_0.8.10ubuntu1.4_amd64.deb …
Unpacking ifupdown (0.8.10ubuntu1.4) over (0.8.10ubuntu1.2) …
Preparing to unpack …/systemd_229-4ubuntu21.2_amd64.deb …
Unpacking systemd (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for dbus (1.10.6-1ubuntu3.3) …
Setting up systemd (229-4ubuntu21.2) …
addgroup: The group `systemd-journal' already exists as a system group. Exiting.
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/udev_229-4ubuntu21.2_amd64.deb …
Unpacking udev (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Preparing to unpack …/libudev1_229-4ubuntu21.2_amd64.deb …
Unpacking libudev1:amd64 (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libudev1:amd64 (229-4ubuntu21.2) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224868 files and directories currently installed.)
Preparing to unpack …/grub-pc_2.02~beta2-36ubuntu3.18_amd64.deb …
Unpacking grub-pc (2.02~beta2-36ubuntu3.18) over (2.02~beta2-36ubuntu3.16) …
Preparing to unpack …/grub-pc-bin_2.02~beta2-36ubuntu3.18_amd64.deb …
Unpacking grub-pc-bin (2.02~beta2-36ubuntu3.18) over (2.02~beta2-36ubuntu3.16) …
Preparing to unpack …/grub2-common_2.02~beta2-36ubuntu3.18_amd64.deb …
Unpacking grub2-common (2.02~beta2-36ubuntu3.18) over (2.02~beta2-36ubuntu3.16) …
Preparing to unpack …/grub-common_2.02~beta2-36ubuntu3.18_amd64.deb …
Unpacking grub-common (2.02~beta2-36ubuntu3.18) over (2.02~beta2-36ubuntu3.16) …
Preparing to unpack …/friendly-recovery_0.2.31ubuntu1_all.deb …
Unpacking friendly-recovery (0.2.31ubuntu1) over (0.2.31) …
Preparing to unpack …/initramfs-tools_0.122ubuntu8.11_all.deb …
Unpacking initramfs-tools (0.122ubuntu8.11) over (0.122ubuntu8.10) …
Preparing to unpack …/initramfs-tools-core_0.122ubuntu8.11_all.deb …
Unpacking initramfs-tools-core (0.122ubuntu8.11) over (0.122ubuntu8.10) …
Preparing to unpack …/initramfs-tools-bin_0.122ubuntu8.11_amd64.deb …
Unpacking initramfs-tools-bin (0.122ubuntu8.11) over (0.122ubuntu8.10) …
Preparing to unpack …/linux-base_4.5ubuntu1~16.04.1_all.deb …
Unpacking linux-base (4.5ubuntu1~16.04.1) over (4.0ubuntu1) …
Preparing to unpack …/systemd-sysv_229-4ubuntu21.2_amd64.deb …
Unpacking systemd-sysv (229-4ubuntu21.2) over (229-4ubuntu21.1) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for install-info (6.1.0.dfsg.1-5) …
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for doc-base (0.10.7) …
Processing 1 changed doc-base file…
Registering documents with scrollkeeper…
Setting up systemd-sysv (229-4ubuntu21.2) …
(Reading database … 224873 files and directories currently installed.)
Preparing to unpack …/libapparmor1_2.10.95-0ubuntu2.9_amd64.deb …
Unpacking libapparmor1:amd64 (2.10.95-0ubuntu2.9) over (2.10.95-0ubuntu2.8) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up libapparmor1:amd64 (2.10.95-0ubuntu2.9) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224873 files and directories currently installed.)
Preparing to unpack …/libssl1.0.0_1.0.2g-1ubuntu4.12_amd64.deb …
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.12) over (1.0.2g-1ubuntu4.10) …
Preparing to unpack …/apache2_2.4.18-2ubuntu3.8_amd64.deb …
Unpacking apache2 (2.4.18-2ubuntu3.8) over (2.4.18-2ubuntu3) …
Preparing to unpack …/apache2-bin_2.4.18-2ubuntu3.8_amd64.deb …
Unpacking apache2-bin (2.4.18-2ubuntu3.8) over (2.4.18-2ubuntu3) …
Preparing to unpack …/apache2-utils_2.4.18-2ubuntu3.8_amd64.deb …
Unpacking apache2-utils (2.4.18-2ubuntu3.8) over (2.4.18-2ubuntu3) …
Preparing to unpack …/apache2-data_2.4.18-2ubuntu3.8_all.deb …
Unpacking apache2-data (2.4.18-2ubuntu3.8) over (2.4.18-2ubuntu3) …
Preparing to unpack …/libavahi-common-data_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-common-data:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/libavahi-common3_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-common3:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/libavahi-client3_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-client3:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/libavahi-glib1_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-glib1:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/cups-core-drivers_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking cups-core-drivers (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups-server-common_2.1.3-4ubuntu0.4_all.deb …
Unpacking cups-server-common (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups-common_2.1.3-4ubuntu0.4_all.deb …
Unpacking cups-common (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libcupscgi1_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking libcupscgi1:amd64 (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups-client_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking cups-client (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libcupsimage2_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking libcupsimage2:amd64 (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libcupsppdc1_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking libcupsppdc1:amd64 (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups-browsed_1.8.3-2ubuntu3.4_amd64.deb …
Unpacking cups-browsed (1.8.3-2ubuntu3.4) over (1.8.3-2ubuntu3.1) …
Preparing to unpack …/cups-daemon_2.1.3-4ubuntu0.4_amd64.deb …
Warning: Stopping cups.service, but it can still be activated by:
cups.socket
Unpacking cups-daemon (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libcupsmime1_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking libcupsmime1:amd64 (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libcups2_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking libcups2:amd64 (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking cups (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/cups-bsd_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking cups-bsd (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libtiff5_4.0.6-1ubuntu0.4_amd64.deb …
Unpacking libtiff5:amd64 (4.0.6-1ubuntu0.4) over (4.0.6-1ubuntu0.2) …
Preparing to unpack …/libcupsfilters1_1.8.3-2ubuntu3.4_amd64.deb …
Unpacking libcupsfilters1:amd64 (1.8.3-2ubuntu3.4) over (1.8.3-2ubuntu3.1) …
Preparing to unpack …/libpoppler58_0.41.0-0ubuntu1.7_amd64.deb …
Unpacking libpoppler58:amd64 (0.41.0-0ubuntu1.7) over (0.41.0-0ubuntu1.6) …
Preparing to unpack …/poppler-utils_0.41.0-0ubuntu1.7_amd64.deb …
Unpacking poppler-utils (0.41.0-0ubuntu1.7) over (0.41.0-0ubuntu1.6) …
Preparing to unpack …/ghostscript_9.18~dfsg~0-0ubuntu2.8_amd64.deb …
Unpacking ghostscript (9.18~dfsg~0-0ubuntu2.8) over (9.18~dfsg~0-0ubuntu2.7) …
Preparing to unpack …/ghostscript-x_9.18~dfsg~0-0ubuntu2.8_amd64.deb …
Unpacking ghostscript-x (9.18~dfsg~0-0ubuntu2.8) over (9.18~dfsg~0-0ubuntu2.7) …
Preparing to unpack …/libgs9-common_9.18~dfsg~0-0ubuntu2.8_all.deb …
Unpacking libgs9-common (9.18~dfsg~0-0ubuntu2.8) over (9.18~dfsg~0-0ubuntu2.7) …
Preparing to unpack …/libgs9_9.18~dfsg~0-0ubuntu2.8_amd64.deb …
Unpacking libgs9:amd64 (9.18~dfsg~0-0ubuntu2.8) over (9.18~dfsg~0-0ubuntu2.7) …
Preparing to unpack …/cups-ppdc_2.1.3-4ubuntu0.4_amd64.deb …
Unpacking cups-ppdc (2.1.3-4ubuntu0.4) over (2.1.3-4ubuntu0.3) …
Preparing to unpack …/libicu55_55.1-7ubuntu0.4_amd64.deb …
Unpacking libicu55:amd64 (55.1-7ubuntu0.4) over (55.1-7ubuntu0.3) …
Preparing to unpack …/libreoffice-calc_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-calc (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-gnome_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-gnome (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-gtk_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-gtk (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-writer_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-writer (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-style-galaxy_1%3a5.1.6~rc2-0ubuntu1~xenial3_all.deb …
Unpacking libreoffice-style-galaxy (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/uno-libs3_5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking uno-libs3 (5.1.6~rc2-0ubuntu1~xenial3) over (5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-ogltrans_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-ogltrans (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/ure_5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking ure (5.1.6~rc2-0ubuntu1~xenial3) over (5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-style-breeze_1%3a5.1.6~rc2-0ubuntu1~xenial3_all.deb …
Unpacking libreoffice-style-breeze (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-common_1%3a5.1.6~rc2-0ubuntu1~xenial3_all.deb …
Unpacking libreoffice-common (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-pdfimport_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-pdfimport (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/python3-uno_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking python3-uno (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-base-core_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-base-core (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-math_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-math (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-avmedia-backend-gstreamer_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-avmedia-backend-gstreamer (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-draw_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-draw (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-impress_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-impress (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/libreoffice-core_1%3a5.1.6~rc2-0ubuntu1~xenial3_amd64.deb …
Unpacking libreoffice-core (1:5.1.6~rc2-0ubuntu1~xenial3) over (1:5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/fonts-opensymbol_2%3a102.7+LibO5.1.6~rc2-0ubuntu1~xenial3_all.deb …
Unpacking fonts-opensymbol (2:102.7+LibO5.1.6~rc2-0ubuntu1~xenial3) over (2:102.7+LibO5.1.6~rc2-0ubuntu1~xenial2) …
Preparing to unpack …/curl_7.47.0-1ubuntu2.8_amd64.deb …
Unpacking curl (7.47.0-1ubuntu2.8) over (7.47.0-1ubuntu2.6) …
Preparing to unpack …/libcurl3-gnutls_7.47.0-1ubuntu2.8_amd64.deb …
Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.8) over (7.47.0-1ubuntu2.6) …
Preparing to unpack …/samba-vfs-modules_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking samba-vfs-modules (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/samba-dsdb-modules_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking samba-dsdb-modules (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/python-all-dev_2.7.12-1~16.04_amd64.deb …
Unpacking python-all-dev (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/python-dev_2.7.12-1~16.04_amd64.deb …
Unpacking python-dev (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/python-all_2.7.12-1~16.04_amd64.deb …
Unpacking python-all (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/python-minimal_2.7.12-1~16.04_amd64.deb …
Unpacking python-minimal (2.7.12-1~16.04) over (2.7.11-1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for ufw (0.35-0ubuntu2) …
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for doc-base (0.10.7) …
Processing 1 changed doc-base file…
Registering documents with scrollkeeper…
Processing triggers for mime-support (3.59ubuntu1) …
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) …
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) …
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) …
Rebuilding /usr/share/applications/bamf-2.index…
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) …
Processing triggers for shared-mime-info (1.5-2ubuntu0.1) …
Processing triggers for fontconfig (2.11.94-0ubuntu1.1) …
Setting up python-minimal (2.7.12-1~16.04) …
(Reading database … 224873 files and directories currently installed.)
Preparing to unpack …/python_2.7.12-1~16.04_amd64.deb …
Unpacking python (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/libpython-all-dev_2.7.12-1~16.04_amd64.deb …
Unpacking libpython-all-dev:amd64 (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/libpython-dev_2.7.12-1~16.04_amd64.deb …
Unpacking libpython-dev:amd64 (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/libpython-stdlib_2.7.12-1~16.04_amd64.deb …
Unpacking libpython-stdlib:amd64 (2.7.12-1~16.04) over (2.7.11-1) …
Preparing to unpack …/python-crypto_2.6.1-6ubuntu0.16.04.3_amd64.deb …
Unpacking python-crypto (2.6.1-6ubuntu0.16.04.3) over (2.6.1-6ubuntu0.16.04.2) …
Preparing to unpack …/python-samba_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking python-samba (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/samba-common-bin_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking samba-common-bin (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/libsmbclient_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking libsmbclient:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/samba-libs_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking samba-libs:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/libwbclient0_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking libwbclient0:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/samba_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_amd64.deb …
Unpacking samba (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/samba-common_2%3a4.3.11+dfsg-0ubuntu0.16.04.13_all.deb …
Unpacking samba-common (2:4.3.11+dfsg-0ubuntu0.16.04.13) over (2:4.3.11+dfsg-0ubuntu0.16.04.12) …
Preparing to unpack …/pciutils_1%3a3.3.1-1.1ubuntu1.2_amd64.deb …
Unpacking pciutils (1:3.3.1-1.1ubuntu1.2) over (1:3.3.1-1.1ubuntu1.1) …
Preparing to unpack …/libpci3_1%3a3.3.1-1.1ubuntu1.2_amd64.deb …
Unpacking libpci3:amd64 (1:3.3.1-1.1ubuntu1.2) over (1:3.3.1-1.1ubuntu1.1) …
Preparing to unpack …/python-apt-common_1.1.0~beta1ubuntu0.16.04.1_all.deb …
Unpacking python-apt-common (1.1.0~beta1ubuntu0.16.04.1) over (1.1.0~beta1build1) …
Preparing to unpack …/python3-apt_1.1.0~beta1ubuntu0.16.04.1_amd64.deb …
Unpacking python3-apt (1.1.0~beta1ubuntu0.16.04.1) over (1.1.0~beta1build1) …
Preparing to unpack …/ubuntu-drivers-common_1%3a0.4.17.7_amd64.deb …
Unpacking ubuntu-drivers-common (1:0.4.17.7) over (1:0.4.17.3) …
Preparing to unpack …/ubuntu-release-upgrader-gtk_1%3a16.04.25_all.deb …
Unpacking ubuntu-release-upgrader-gtk (1:16.04.25) over (1:16.04.23) …
Preparing to unpack …/ubuntu-release-upgrader-core_1%3a16.04.25_all.deb …
Unpacking ubuntu-release-upgrader-core (1:16.04.25) over (1:16.04.23) …
Preparing to unpack …/python-apt_1.1.0~beta1ubuntu0.16.04.1_amd64.deb …
Unpacking python-apt (1.1.0~beta1ubuntu0.16.04.1) over (1.1.0~beta1build1) …
Preparing to unpack …/update-manager_1%3a16.04.13_all.deb …
Unpacking update-manager (1:16.04.13) over (1:16.04.12) …
Preparing to unpack …/python3-distupgrade_1%3a16.04.25_all.deb …
Unpacking python3-distupgrade (1:16.04.25) over (1:16.04.23) …
Preparing to unpack …/python3-update-manager_1%3a16.04.13_all.deb …
Unpacking python3-update-manager (1:16.04.13) over (1:16.04.12) …
Preparing to unpack …/update-manager-core_1%3a16.04.13_all.deb …
Unpacking update-manager-core (1:16.04.13) over (1:16.04.12) …
Preparing to unpack …/update-notifier_3.168.8_amd64.deb …
Unpacking update-notifier (3.168.8) over (3.168.7) …
Preparing to unpack …/libdpkg-perl_1.18.4ubuntu1.4_all.deb …
Unpacking libdpkg-perl (1.18.4ubuntu1.4) over (1.18.4ubuntu1.3) …
Preparing to unpack …/dpkg-dev_1.18.4ubuntu1.4_all.deb …
Unpacking dpkg-dev (1.18.4ubuntu1.4) over (1.18.4ubuntu1.3) …
Preparing to unpack …/patch_2.7.5-1ubuntu0.16.04.1_amd64.deb …
Unpacking patch (2.7.5-1ubuntu0.16.04.1) over (2.7.5-1) …
Preparing to unpack …/update-notifier-common_3.168.8_all.deb …
Unpacking update-notifier-common (3.168.8) over (3.168.7) …
Preparing to unpack …/libgcrypt20_1.6.5-2ubuntu0.4_amd64.deb …
Unpacking libgcrypt20:amd64 (1.6.5-2ubuntu0.4) over (1.6.5-2ubuntu0.3) …
Processing triggers for doc-base (0.10.7) …
Processing 1 changed doc-base file…
Registering documents with scrollkeeper…
Processing triggers for man-db (2.7.5-1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for ufw (0.35-0ubuntu2) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for libglib2.0-0:amd64 (2.48.2-0ubuntu1) …
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) …
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) …
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) …
Rebuilding /usr/share/applications/bamf-2.index…
Processing triggers for mime-support (3.59ubuntu1) …
Processing triggers for gconf2 (3.2.6-3ubuntu6) …
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) …
Setting up libgcrypt20:amd64 (1.6.5-2ubuntu0.4) …
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
(Reading database … 224872 files and directories currently installed.)
Preparing to unpack …/sensible-utils_0.0.9ubuntu0.16.04.1_all.deb …
Unpacking sensible-utils (0.0.9ubuntu0.16.04.1) over (0.0.9) …
Processing triggers for mime-support (3.59ubuntu1) …
Processing triggers for man-db (2.7.5-1) …
Setting up sensible-utils (0.0.9ubuntu0.16.04.1) …
(Reading database … 224872 files and directories currently installed.)
Preparing to unpack …/distro-info-data_0.28ubuntu0.8_all.deb …
Unpacking distro-info-data (0.28ubuntu0.8) over (0.28ubuntu0.7) …
Preparing to unpack …/isc-dhcp-client_4.3.3-5ubuntu12.10_amd64.deb …
Unpacking isc-dhcp-client (4.3.3-5ubuntu12.10) over (4.3.3-5ubuntu12.7) …
Preparing to unpack …/isc-dhcp-common_4.3.3-5ubuntu12.10_amd64.deb …
Unpacking isc-dhcp-common (4.3.3-5ubuntu12.10) over (4.3.3-5ubuntu12.7) …
Preparing to unpack …/libapparmor-perl_2.10.95-0ubuntu2.9_amd64.deb …
Unpacking libapparmor-perl (2.10.95-0ubuntu2.9) over (2.10.95-0ubuntu2.8) …
Preparing to unpack …/apparmor_2.10.95-0ubuntu2.9_amd64.deb …
Unpacking apparmor (2.10.95-0ubuntu2.9) over (2.10.95-0ubuntu2.8) …
Preparing to unpack …/apt-transport-https_1.2.26_amd64.deb …
Unpacking apt-transport-https (1.2.26) over (1.2.25) …
Preparing to unpack …/hdparm_9.48+ds-1ubuntu0.1_amd64.deb …
Unpacking hdparm (9.48+ds-1ubuntu0.1) over (9.48+ds-1) …
Preparing to unpack …/libnuma1_2.0.11-1ubuntu1.1_amd64.deb …
Unpacking libnuma1:amd64 (2.0.11-1ubuntu1.1) over (2.0.11-1ubuntu1) …
Preparing to unpack …/libplymouth4_0.9.2-3ubuntu13.5_amd64.deb …
Unpacking libplymouth4:amd64 (0.9.2-3ubuntu13.5) over (0.9.2-3ubuntu13.2) …
Preparing to unpack …/lshw_02.17-1.1ubuntu3.5_amd64.deb …
Unpacking lshw (02.17-1.1ubuntu3.5) over (02.17-1.1ubuntu3.4) …
Preparing to unpack …/openssh-sftp-server_1%3a7.2p2-4ubuntu2.4_amd64.deb …
Unpacking openssh-sftp-server (1:7.2p2-4ubuntu2.4) over (1:7.2p2-4) …
Preparing to unpack …/openssh-server_1%3a7.2p2-4ubuntu2.4_amd64.deb …
Unpacking openssh-server (1:7.2p2-4ubuntu2.4) over (1:7.2p2-4) …
Preparing to unpack …/openssh-client_1%3a7.2p2-4ubuntu2.4_amd64.deb …
Unpacking openssh-client (1:7.2p2-4ubuntu2.4) over (1:7.2p2-4) …
Preparing to unpack …/openssl_1.0.2g-1ubuntu4.12_amd64.deb …
Unpacking openssl (1.0.2g-1ubuntu4.12) over (1.0.2g-1ubuntu4.10) …
Preparing to unpack …/plymouth-theme-ubuntu-text_0.9.2-3ubuntu13.5_amd64.deb …
Unpacking plymouth-theme-ubuntu-text (0.9.2-3ubuntu13.5) over (0.9.2-3ubuntu13.2) …
Preparing to unpack …/plymouth_0.9.2-3ubuntu13.5_amd64.deb …
Unpacking plymouth (0.9.2-3ubuntu13.5) over (0.9.2-3ubuntu13.2) …
Preparing to unpack …/plymouth-theme-ubuntu-logo_0.9.2-3ubuntu13.5_amd64.deb …
Unpacking plymouth-theme-ubuntu-logo (0.9.2-3ubuntu13.5) over (0.9.2-3ubuntu13.2) …
Preparing to unpack …/plymouth-label_0.9.2-3ubuntu13.5_amd64.deb …
Unpacking plymouth-label (0.9.2-3ubuntu13.5) over (0.9.2-3ubuntu13.2) …
Preparing to unpack …/wget_1.17.1-1ubuntu1.4_amd64.deb …
Unpacking wget (1.17.1-1ubuntu1.4) over (1.17.1-1ubuntu1.3) …
Preparing to unpack …/xdg-user-dirs_0.15-2ubuntu6.16.04.1_amd64.deb …
Unpacking xdg-user-dirs (0.15-2ubuntu6.16.04.1) over (0.15-2ubuntu6) …
Preparing to unpack …/python-paramiko_1.16.0-1ubuntu0.1_all.deb …
Unpacking python-paramiko (1.16.0-1ubuntu0.1) over (1.16.0-1) …
Preparing to unpack …/ansible_2.5.4-1ppa~xenial_all.deb …
Unpacking ansible (2.5.4-1ppa~xenial) over (2.5.0-1ppa~xenial) …
Preparing to unpack …/python3-problem-report_2.20.1-0ubuntu2.18_all.deb …
Unpacking python3-problem-report (2.20.1-0ubuntu2.18) over (2.20.1-0ubuntu2.15) …
Preparing to unpack …/python3-apport_2.20.1-0ubuntu2.18_all.deb …
Unpacking python3-apport (2.20.1-0ubuntu2.18) over (2.20.1-0ubuntu2.15) …
Preparing to unpack …/apport_2.20.1-0ubuntu2.18_all.deb …
Unpacking apport (2.20.1-0ubuntu2.18) over (2.20.1-0ubuntu2.15) …
Preparing to unpack …/apport-gtk_2.20.1-0ubuntu2.18_all.deb …
Unpacking apport-gtk (2.20.1-0ubuntu2.18) over (2.20.1-0ubuntu2.15) …
Preparing to unpack …/avahi-autoipd_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking avahi-autoipd (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/pulseaudio-module-bluetooth_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking pulseaudio-module-bluetooth (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/libpulsedsp_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking libpulsedsp:amd64 (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/pulseaudio-utils_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking pulseaudio-utils (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/libpulse-mainloop-glib0_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking libpulse-mainloop-glib0:amd64 (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/pulseaudio-module-x11_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking pulseaudio-module-x11 (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/pulseaudio_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking pulseaudio (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/libpulse0_1%3a8.0-0ubuntu3.10_amd64.deb …
Unpacking libpulse0:amd64 (1:8.0-0ubuntu3.10) over (1:8.0-0ubuntu3.7) …
Preparing to unpack …/libavahi-core7_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-core7:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/avahi-daemon_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking avahi-daemon (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/avahi-utils_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking avahi-utils (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/bamfdaemon_0.5.3~bzr0+16.04.20180209-0ubuntu1_amd64.deb …
Unpacking bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) over (0.5.3~bzr0+16.04.20160824-0ubuntu1) …
Preparing to unpack …/libbamf3-2_0.5.3~bzr0+16.04.20180209-0ubuntu1_amd64.deb …
Unpacking libbamf3-2:amd64 (0.5.3~bzr0+16.04.20180209-0ubuntu1) over (0.5.3~bzr0+16.04.20160824-0ubuntu1) …
Preparing to unpack …/openjdk-8-jre-headless_8u171-b11-0ubuntu0.16.04.1_amd64.deb …
Unpacking openjdk-8-jre-headless:amd64 (8u171-b11-0ubuntu0.16.04.1) over (8u77-b03-3ubuntu3) …
Preparing to unpack …/ca-certificates-java_20160321ubuntu1_all.deb …
Unpacking ca-certificates-java (20160321ubuntu1) over (20160321) …
Preparing to unpack …/libcompizconfig0_1%3a0.9.12.3+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking libcompizconfig0:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/compiz-gnome_1%3a0.9.12.3+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking compiz-gnome (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/compiz-plugins-default_1%3a0.9.12.3+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking compiz-plugins-default:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/libdecoration0_1%3a0.9.12.3+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking libdecoration0:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/unity_7.4.5+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking unity (7.4.5+16.04.20180221-0ubuntu1) over (7.4.5+16.04.20171201.3) …
Preparing to unpack …/libunity-protocol-private0_7.1.4+16.04.20180209.1-0ubuntu1_amd64.deb …
Unpacking libunity-protocol-private0:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) over (7.1.4+16.04.20160701-0ubuntu1) …
Preparing to unpack …/libunity9_7.1.4+16.04.20180209.1-0ubuntu1_amd64.deb …
Unpacking libunity9:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) over (7.1.4+16.04.20160701-0ubuntu1) …
Preparing to unpack …/libunity-core-6.0-9_7.4.5+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking libunity-core-6.0-9:amd64 (7.4.5+16.04.20180221-0ubuntu1) over (7.4.5+16.04.20171201.3) …
Preparing to unpack …/unity-schemas_7.4.5+16.04.20180221-0ubuntu1_all.deb …
Unpacking unity-schemas (7.4.5+16.04.20180221-0ubuntu1) over (7.4.5+16.04.20171201.3) …
Preparing to unpack …/libunity-scopes-json-def-desktop_7.1.4+16.04.20180209.1-0ubuntu1_all.deb …
Unpacking libunity-scopes-json-def-desktop (7.1.4+16.04.20180209.1-0ubuntu1) over (7.1.4+16.04.20160701-0ubuntu1) …
Preparing to unpack …/unity-services_7.4.5+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking unity-services (7.4.5+16.04.20180221-0ubuntu1) over (7.4.5+16.04.20171201.3) …
Preparing to unpack …/compiz-core_1%3a0.9.12.3+16.04.20180221-0ubuntu1_amd64.deb …
Unpacking compiz-core (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/compiz_1%3a0.9.12.3+16.04.20180221-0ubuntu1_all.deb …
Unpacking compiz (1:0.9.12.3+16.04.20180221-0ubuntu1) over (1:0.9.12.3+16.04.20171116-0ubuntu1) …
Preparing to unpack …/docker-ce_18.05.0~ce~3-0~ubuntu_amd64.deb …
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
Unpacking docker-ce (18.05.0~ce~3-0~ubuntu) over (17.06.0~ce-0~ubuntu) …
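The warning above means the upgrade stopped docker.service but left docker.socket active, so any Docker API request can re-launch the daemon mid-upgrade. If you want the daemon fully down while the package is replaced, a typical sequence (illustrative only; requires systemd and root privileges) is:

```shell
# Stop the socket as well as the service, so socket activation cannot
# restart the daemon while dpkg is replacing its files.
sudo systemctl stop docker.socket docker.service

# ... run the package upgrade here ...

# Bring both units back once the upgrade has finished.
sudo systemctl start docker.socket docker.service
```

This is optional for routine upgrades; the packaging scripts handle the restart themselves, as the log shows.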
Preparing to unpack …/ebtables_2.0.10.4-3.4ubuntu2.16.04.1_amd64.deb …
Unpacking ebtables (2.0.10.4-3.4ubuntu2.16.04.1) over (2.0.10.4-3.4ubuntu2) …
Preparing to unpack …/firefox_60.0.1+build2-0ubuntu0.16.04.1_amd64.deb …
Unpacking firefox (60.0.1+build2-0ubuntu0.16.04.1) over (58.0.2+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/firefox-locale-en_60.0.1+build2-0ubuntu0.16.04.1_amd64.deb …
Unpacking firefox-locale-en (60.0.1+build2-0ubuntu0.16.04.1) over (58.0.2+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/libdfu1_0.8.3-0ubuntu3_amd64.deb …
Unpacking libdfu1:amd64 (0.8.3-0ubuntu3) over (0.7.0-0ubuntu4.3) …
Preparing to unpack …/libfwupd1_0.8.3-0ubuntu3_amd64.deb …
Unpacking libfwupd1:amd64 (0.8.3-0ubuntu3) over (0.7.0-0ubuntu4.3) …
Preparing to unpack …/fwupd_0.8.3-0ubuntu3_amd64.deb …
Unpacking fwupd (0.8.3-0ubuntu3) over (0.7.0-0ubuntu4.3) …
Preparing to unpack …/libibus-1.0-5_1.5.11-1ubuntu2.1_amd64.deb …
Unpacking libibus-1.0-5:amd64 (1.5.11-1ubuntu2.1) over (1.5.11-1ubuntu2) …
Preparing to unpack …/ibus_1.5.11-1ubuntu2.1_amd64.deb …
Unpacking ibus (1.5.11-1ubuntu2.1) over (1.5.11-1ubuntu2) …
Preparing to unpack …/gir1.2-ibus-1.0_1.5.11-1ubuntu2.1_amd64.deb …
Unpacking gir1.2-ibus-1.0:amd64 (1.5.11-1ubuntu2.1) over (1.5.11-1ubuntu2) …
Preparing to unpack …/gir1.2-unity-5.0_7.1.4+16.04.20180209.1-0ubuntu1_amd64.deb …
Unpacking gir1.2-unity-5.0:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) over (7.1.4+16.04.20160701-0ubuntu1) …
Preparing to unpack …/gnome-accessibility-themes_3.18.0-2ubuntu2_all.deb …
Unpacking gnome-accessibility-themes (3.18.0-2ubuntu2) over (3.18.0-2ubuntu1) …
Preparing to unpack …/ubuntu-software_3.20.5-0ubuntu0.16.04.10_amd64.deb …
Unpacking ubuntu-software (3.20.5-0ubuntu0.16.04.10) over (3.20.5-0ubuntu0.16.04.8) …
Preparing to unpack …/gnome-software_3.20.5-0ubuntu0.16.04.10_amd64.deb …
Unpacking gnome-software (3.20.5-0ubuntu0.16.04.10) over (3.20.5-0ubuntu0.16.04.8) …
Preparing to unpack …/gnome-software-common_3.20.5-0ubuntu0.16.04.10_all.deb …
Unpacking gnome-software-common (3.20.5-0ubuntu0.16.04.10) over (3.20.5-0ubuntu0.16.04.8) …
Preparing to unpack …/ibus-gtk_1.5.11-1ubuntu2.1_amd64.deb …
Unpacking ibus-gtk:amd64 (1.5.11-1ubuntu2.1) over (1.5.11-1ubuntu2) …
Preparing to unpack …/ibus-gtk3_1.5.11-1ubuntu2.1_amd64.deb …
Unpacking ibus-gtk3:amd64 (1.5.11-1ubuntu2.1) over (1.5.11-1ubuntu2) …
Preparing to unpack …/libavahi-ui-gtk3-0_0.6.32~rc+dfsg-1ubuntu2.2_amd64.deb …
Unpacking libavahi-ui-gtk3-0:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) over (0.6.32~rc+dfsg-1ubuntu2) …
Preparing to unpack …/libcurl3_7.47.0-1ubuntu2.8_amd64.deb …
Unpacking libcurl3:amd64 (7.47.0-1ubuntu2.8) over (7.47.0-1ubuntu2.6) …
Preparing to unpack …/libfontembed1_1.8.3-2ubuntu3.4_amd64.deb …
Unpacking libfontembed1:amd64 (1.8.3-2ubuntu3.4) over (1.8.3-2ubuntu3.1) …
Preparing to unpack …/libpoppler-glib8_0.41.0-0ubuntu1.7_amd64.deb …
Unpacking libpoppler-glib8:amd64 (0.41.0-0ubuntu1.7) over (0.41.0-0ubuntu1.6) …
Preparing to unpack …/libraw15_0.17.1-1ubuntu0.3_amd64.deb …
Unpacking libraw15:amd64 (0.17.1-1ubuntu0.3) over (0.17.1-1ubuntu0.1) …
Preparing to unpack …/libsnmp-base_5.7.3+dfsg-1ubuntu4.1_all.deb …
Unpacking libsnmp-base (5.7.3+dfsg-1ubuntu4.1) over (5.7.3+dfsg-1ubuntu4) …
Preparing to unpack …/libsnmp30_5.7.3+dfsg-1ubuntu4.1_amd64.deb …
Unpacking libsnmp30:amd64 (5.7.3+dfsg-1ubuntu4.1) over (5.7.3+dfsg-1ubuntu4) …
Preparing to unpack …/libvncclient1_0.9.10+dfsg-3ubuntu0.16.04.2_amd64.deb …
Unpacking libvncclient1:amd64 (0.9.10+dfsg-3ubuntu0.16.04.2) over (0.9.10+dfsg-3ubuntu0.16.04.1) …
Preparing to unpack …/libvorbisfile3_1.3.5-3ubuntu0.2_amd64.deb …
Unpacking libvorbisfile3:amd64 (1.3.5-3ubuntu0.2) over (1.3.5-3ubuntu0.1) …
Preparing to unpack …/libvorbisenc2_1.3.5-3ubuntu0.2_amd64.deb …
Unpacking libvorbisenc2:amd64 (1.3.5-3ubuntu0.2) over (1.3.5-3ubuntu0.1) …
Preparing to unpack …/libvorbis0a_1.3.5-3ubuntu0.2_amd64.deb …
Unpacking libvorbis0a:amd64 (1.3.5-3ubuntu0.2) over (1.3.5-3ubuntu0.1) …
Preparing to unpack …/libwayland-client0_1.12.0-1~ubuntu16.04.3_amd64.deb …
Unpacking libwayland-client0:amd64 (1.12.0-1~ubuntu16.04.3) over (1.12.0-1~ubuntu16.04.2) …
Preparing to unpack …/libwayland-cursor0_1.12.0-1~ubuntu16.04.3_amd64.deb …
Unpacking libwayland-cursor0:amd64 (1.12.0-1~ubuntu16.04.3) over (1.12.0-1~ubuntu16.04.2) …
Preparing to unpack …/libwayland-server0_1.12.0-1~ubuntu16.04.3_amd64.deb …
Unpacking libwayland-server0:amd64 (1.12.0-1~ubuntu16.04.3) over (1.12.0-1~ubuntu16.04.2) …
Preparing to unpack …/ubuntu-mono_14.04+16.04.20180326-0ubuntu1_all.deb …
Unpacking ubuntu-mono (14.04+16.04.20180326-0ubuntu1) over (14.04+16.04.20171116-0ubuntu1) …
Preparing to unpack …/light-themes_14.04+16.04.20180326-0ubuntu1_all.deb …
Unpacking light-themes (14.04+16.04.20180326-0ubuntu1) over (14.04+16.04.20171116-0ubuntu1) …
Preparing to unpack …/linux-firmware_1.157.19_all.deb …
Unpacking linux-firmware (1.157.19) over (1.157.16) …
Preparing to unpack …/linux-libc-dev_4.4.0-127.153_amd64.deb …
Unpacking linux-libc-dev:amd64 (4.4.0-127.153) over (4.4.0-112.135) …
Preparing to unpack …/python-software-properties_0.96.20.7_all.deb …
Unpacking python-software-properties (0.96.20.7) over (0.96.20) …
Preparing to unpack …/ubuntu-mobile-icons_14.04+16.04.20180326-0ubuntu1_all.deb …
Unpacking ubuntu-mobile-icons (14.04+16.04.20180326-0ubuntu1) over (14.04+16.04.20171116-0ubuntu1) …
Preparing to unpack …/suru-icon-theme_14.04+16.04.20180326-0ubuntu1_all.deb …
Unpacking suru-icon-theme (14.04+16.04.20180326-0ubuntu1) over (14.04+16.04.20171116-0ubuntu1) …
Preparing to unpack …/thunderbird-locale-en_1%3a52.8.0+build1-0ubuntu0.16.04.1_amd64.deb …
Unpacking thunderbird-locale-en (1:52.8.0+build1-0ubuntu0.16.04.1) over (1:52.6.0+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/thunderbird_1%3a52.8.0+build1-0ubuntu0.16.04.1_amd64.deb …
Unpacking thunderbird (1:52.8.0+build1-0ubuntu0.16.04.1) over (1:52.6.0+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/thunderbird-gnome-support_1%3a52.8.0+build1-0ubuntu0.16.04.1_amd64.deb …
Unpacking thunderbird-gnome-support (1:52.8.0+build1-0ubuntu0.16.04.1) over (1:52.6.0+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/thunderbird-locale-en-us_1%3a52.8.0+build1-0ubuntu0.16.04.1_all.deb …
Unpacking thunderbird-locale-en-us (1:52.8.0+build1-0ubuntu0.16.04.1) over (1:52.6.0+build1-0ubuntu0.16.04.1) …
Preparing to unpack …/ubuntu-artwork_1%3a14.04+16.04.20180326-0ubuntu1_all.deb …
Unpacking ubuntu-artwork (1:14.04+16.04.20180326-0ubuntu1) over (1:14.04+16.04.20171116-0ubuntu1) …
Preparing to unpack …/unity-scopes-runner_7.1.4+16.04.20180209.1-0ubuntu1_all.deb …
Unpacking unity-scopes-runner (7.1.4+16.04.20180209.1-0ubuntu1) over (7.1.4+16.04.20160701-0ubuntu1) …
Preparing to unpack …/xdg-utils_1.1.1-1ubuntu1.16.04.3_all.deb …
Unpacking xdg-utils (1.1.1-1ubuntu1.16.04.3) over (1.1.1-1ubuntu1.16.04.1) …
Preparing to unpack …/jenkins_2.107.3_all.deb …
Unpacking jenkins (2.107.3) over (2.107.1) …
Preparing to unpack …/libruby2.3_2.3.1-2~16.04.9_amd64.deb …
Unpacking libruby2.3:amd64 (2.3.1-2~16.04.9) over (2.3.1-2~16.04.6) …
Preparing to unpack …/ruby2.3_2.3.1-2~16.04.9_amd64.deb …
Unpacking ruby2.3 (2.3.1-2~16.04.9) over (2.3.1-2~16.04.6) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for ufw (0.35-0ubuntu2) …
Processing triggers for install-info (6.1.0.dfsg.1-5) …
Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) …
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) …
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) …
Processing triggers for mime-support (3.59ubuntu1) …
Processing triggers for dbus (1.10.6-1ubuntu3.3) …
Processing triggers for libglib2.0-0:amd64 (2.48.2-0ubuntu1) …
Processing triggers for gconf2 (3.2.6-3ubuntu6) …
Setting up perl-modules-5.22 (5.22.1-9ubuntu0.3) …
Setting up libperl5.22:amd64 (5.22.1-9ubuntu0.3) …
Setting up perl (5.22.1-9ubuntu0.3) …
Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up cpp-5 (5.4.0-6ubuntu1~16.04.9) …
Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up gcc-5 (5.4.0-6ubuntu1~16.04.9) …
Setting up libstdc++-5-dev:amd64 (5.4.0-6ubuntu1~16.04.9) …
Setting up g++-5 (5.4.0-6ubuntu1~16.04.9) …
Setting up libapt-inst2.0:amd64 (1.2.26) …
Setting up apt-utils (1.2.26) …
Setting up libprocps4:amd64 (2:3.3.10-4ubuntu2.4) …
Setting up procps (2:3.3.10-4ubuntu2.4) …
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up libpam-systemd:amd64 (229-4ubuntu21.2) …
Setting up ifupdown (0.8.10ubuntu1.4) …
Setting up udev (229-4ubuntu21.2) …
addgroup: The group `input' already exists as a system group. Exiting.
update-initramfs: deferring update (trigger activated)
Setting up grub-common (2.02~beta2-36ubuntu3.18) …
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up grub2-common (2.02~beta2-36ubuntu3.18) …
Setting up grub-pc-bin (2.02~beta2-36ubuntu3.18) …
Setting up grub-pc (2.02~beta2-36ubuntu3.18) …
Installing for i386-pc platform.
Installation finished. No error reported.
Generating grub configuration file …
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.10.0-40-generic
Found initrd image: /boot/initrd.img-4.10.0-40-generic
Found linux image: /boot/vmlinuz-4.10.0-28-generic
Found initrd image: /boot/initrd.img-4.10.0-28-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Setting up friendly-recovery (0.2.31ubuntu1) …
Generating grub configuration file …
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.10.0-40-generic
Found initrd image: /boot/initrd.img-4.10.0-40-generic
Found linux image: /boot/vmlinuz-4.10.0-28-generic
Found initrd image: /boot/initrd.img-4.10.0-28-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
Setting up initramfs-tools-bin (0.122ubuntu8.11) …
Setting up initramfs-tools-core (0.122ubuntu8.11) …
Setting up linux-base (4.5ubuntu1~16.04.1) …
Setting up initramfs-tools (0.122ubuntu8.11) …
update-initramfs: deferring update (trigger activated)
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.12) …
Setting up apache2-bin (2.4.18-2ubuntu3.8) …
Setting up apache2-utils (2.4.18-2ubuntu3.8) …
Setting up apache2-data (2.4.18-2ubuntu3.8) …
Setting up apache2 (2.4.18-2ubuntu3.8) …
Setting up libavahi-common-data:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libavahi-common3:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libavahi-client3:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libavahi-glib1:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libcups2:amd64 (2.1.3-4ubuntu0.4) …
Setting up libcupsmime1:amd64 (2.1.3-4ubuntu0.4) …
Setting up cups-daemon (2.1.3-4ubuntu0.4) …
Setting up cups-core-drivers (2.1.3-4ubuntu0.4) …
Setting up cups-server-common (2.1.3-4ubuntu0.4) …
Setting up cups-common (2.1.3-4ubuntu0.4) …
Setting up libcupscgi1:amd64 (2.1.3-4ubuntu0.4) …
Setting up libtiff5:amd64 (4.0.6-1ubuntu0.4) …
Setting up libcupsfilters1:amd64 (1.8.3-2ubuntu3.4) …
Setting up libcupsimage2:amd64 (2.1.3-4ubuntu0.4) …
Setting up cups-client (2.1.3-4ubuntu0.4) …
Setting up libcupsppdc1:amd64 (2.1.3-4ubuntu0.4) …
Setting up cups-browsed (1.8.3-2ubuntu3.4) …
Setting up libpoppler58:amd64 (0.41.0-0ubuntu1.7) …
Setting up poppler-utils (0.41.0-0ubuntu1.7) …
Setting up libgs9-common (9.18~dfsg~0-0ubuntu2.8) …
Setting up libgs9:amd64 (9.18~dfsg~0-0ubuntu2.8) …
Setting up ghostscript (9.18~dfsg~0-0ubuntu2.8) …
Setting up cups-ppdc (2.1.3-4ubuntu0.4) …
Setting up cups (2.1.3-4ubuntu0.4) …
Updating PPD files for cups …
Setting up cups-bsd (2.1.3-4ubuntu0.4) …
Setting up ghostscript-x (9.18~dfsg~0-0ubuntu2.8) …
Setting up libicu55:amd64 (55.1-7ubuntu0.4) …
Setting up fonts-opensymbol (2:102.7+LibO5.1.6~rc2-0ubuntu1~xenial3) …
Setting up uno-libs3 (5.1.6~rc2-0ubuntu1~xenial3) …
Setting up ure (5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.8) …
Setting up curl (7.47.0-1ubuntu2.8) …
Setting up libwbclient0:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba-libs:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba-vfs-modules (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba-dsdb-modules (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up libpython-stdlib:amd64 (2.7.12-1~16.04) …
Setting up python (2.7.12-1~16.04) …
Setting up python-all (2.7.12-1~16.04) …
Setting up libpython-dev:amd64 (2.7.12-1~16.04) …
Setting up libpython-all-dev:amd64 (2.7.12-1~16.04) …
Setting up python-dev (2.7.12-1~16.04) …
Setting up python-all-dev (2.7.12-1~16.04) …
Setting up python-crypto (2.6.1-6ubuntu0.16.04.3) …
Setting up python-samba (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba-common (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba-common-bin (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up libsmbclient:amd64 (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up samba (2:4.3.11+dfsg-0ubuntu0.16.04.13) …
Setting up libpci3:amd64 (1:3.3.1-1.1ubuntu1.2) …
Setting up pciutils (1:3.3.1-1.1ubuntu1.2) …
Setting up python-apt-common (1.1.0~beta1ubuntu0.16.04.1) …
Setting up python3-apt (1.1.0~beta1ubuntu0.16.04.1) …
Setting up ubuntu-drivers-common (1:0.4.17.7) …
Setting up patch (2.7.5-1ubuntu0.16.04.1) …
Setting up python-apt (1.1.0~beta1ubuntu0.16.04.1) …
Setting up libdpkg-perl (1.18.4ubuntu1.4) …
Setting up dpkg-dev (1.18.4ubuntu1.4) …
Setting up distro-info-data (0.28ubuntu0.8) …
Setting up isc-dhcp-client (4.3.3-5ubuntu12.10) …
Setting up isc-dhcp-common (4.3.3-5ubuntu12.10) …
Setting up libapparmor-perl (2.10.95-0ubuntu2.9) …
Setting up apparmor (2.10.95-0ubuntu2.9) …
Installing new version of config file /etc/apparmor.d/abstractions/base …
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
AppArmor parser error for /etc/apparmor.d/usr.lib.snapd.snap-confine.real in /etc/apparmor.d/usr.lib.snapd.snap-confine.real at line 11: Could not open ‘/var/lib/snapd/apparmor/snap-confine.d’
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
AppArmor parser error for /etc/apparmor.d/usr.lib.snapd.snap-confine.real in /etc/apparmor.d/usr.lib.snapd.snap-confine.real at line 11: Could not open ‘/var/lib/snapd/apparmor/snap-confine.d’
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
Setting up apt-transport-https (1.2.26) …
Setting up hdparm (9.48+ds-1ubuntu0.1) …
Setting up libnuma1:amd64 (2.0.11-1ubuntu1.1) …
Setting up libplymouth4:amd64 (0.9.2-3ubuntu13.5) …
Setting up lshw (02.17-1.1ubuntu3.5) …
Setting up openssh-client (1:7.2p2-4ubuntu2.4) …
Setting up openssh-sftp-server (1:7.2p2-4ubuntu2.4) …
Setting up openssh-server (1:7.2p2-4ubuntu2.4) …
Installing new version of config file /etc/network/if-up.d/openssh-server …
Setting up openssl (1.0.2g-1ubuntu4.12) …
Setting up plymouth (0.9.2-3ubuntu13.5) …
update-initramfs: deferring update (trigger activated)
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up plymouth-theme-ubuntu-text (0.9.2-3ubuntu13.5) …
update-initramfs: deferring update (trigger activated)
Setting up plymouth-label (0.9.2-3ubuntu13.5) …
Setting up plymouth-theme-ubuntu-logo (0.9.2-3ubuntu13.5) …
update-initramfs: deferring update (trigger activated)
Setting up wget (1.17.1-1ubuntu1.4) …
Setting up xdg-user-dirs (0.15-2ubuntu6.16.04.1) …
Setting up python-paramiko (1.16.0-1ubuntu0.1) …
Setting up ansible (2.5.4-1ppa~xenial) …
Setting up python3-problem-report (2.20.1-0ubuntu2.18) …
Setting up python3-apport (2.20.1-0ubuntu2.18) …
Setting up apport (2.20.1-0ubuntu2.18) …
Setting up apport-gtk (2.20.1-0ubuntu2.18) …
Setting up avahi-autoipd (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libpulse0:amd64 (1:8.0-0ubuntu3.10) …
Setting up libpulsedsp:amd64 (1:8.0-0ubuntu3.10) …
Setting up pulseaudio-utils (1:8.0-0ubuntu3.10) …
Setting up pulseaudio (1:8.0-0ubuntu3.10) …
Setting up pulseaudio-module-bluetooth (1:8.0-0ubuntu3.10) …
Setting up libpulse-mainloop-glib0:amd64 (1:8.0-0ubuntu3.10) …
Setting up pulseaudio-module-x11 (1:8.0-0ubuntu3.10) …
Setting up libavahi-core7:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up avahi-daemon (0.6.32~rc+dfsg-1ubuntu2.2) …
Installing new version of config file /etc/avahi/avahi-daemon.conf …
Setting up avahi-utils (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libbamf3-2:amd64 (0.5.3~bzr0+16.04.20180209-0ubuntu1) …
Setting up bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) …
Rebuilding /usr/share/applications/bamf-2.index…
Setting up ca-certificates-java (20160321ubuntu1) …
Setting up compiz-core (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up libcompizconfig0:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up libdecoration0:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up compiz-plugins-default:amd64 (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up compiz-gnome (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up libunity-protocol-private0:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) …
Setting up unity-services (7.4.5+16.04.20180221-0ubuntu1) …
Setting up unity-schemas (7.4.5+16.04.20180221-0ubuntu1) …
Setting up libunity-core-6.0-9:amd64 (7.4.5+16.04.20180221-0ubuntu1) …
Setting up compiz (1:0.9.12.3+16.04.20180221-0ubuntu1) …
Setting up unity (7.4.5+16.04.20180221-0ubuntu1) …
Setting up libunity-scopes-json-def-desktop (7.1.4+16.04.20180209.1-0ubuntu1) …
Setting up libunity9:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) …
Setting up docker-ce (18.05.0~ce~3-0~ubuntu) …
Setting up ebtables (2.0.10.4-3.4ubuntu2.16.04.1) …
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Setting up firefox (60.0.1+build2-0ubuntu0.16.04.1) …
Please restart all running instances of firefox, or you will experience problems.
Setting up firefox-locale-en (60.0.1+build2-0ubuntu0.16.04.1) …
Setting up libdfu1:amd64 (0.8.3-0ubuntu3) …
Setting up libfwupd1:amd64 (0.8.3-0ubuntu3) …
Setting up fwupd (0.8.3-0ubuntu3) …
Installing new version of config file /etc/fwupd.conf …
Setting up libibus-1.0-5:amd64 (1.5.11-1ubuntu2.1) …
Setting up gir1.2-ibus-1.0:amd64 (1.5.11-1ubuntu2.1) …
Setting up ibus (1.5.11-1ubuntu2.1) …
Setting up gir1.2-unity-5.0:amd64 (7.1.4+16.04.20180209.1-0ubuntu1) …
Setting up gnome-accessibility-themes (3.18.0-2ubuntu2) …
Setting up gnome-software-common (3.20.5-0ubuntu0.16.04.10) …
Setting up gnome-software (3.20.5-0ubuntu0.16.04.10) …
Setting up ubuntu-software (3.20.5-0ubuntu0.16.04.10) …
Setting up ibus-gtk:amd64 (1.5.11-1ubuntu2.1) …
Setting up ibus-gtk3:amd64 (1.5.11-1ubuntu2.1) …
Setting up libavahi-ui-gtk3-0:amd64 (0.6.32~rc+dfsg-1ubuntu2.2) …
Setting up libcurl3:amd64 (7.47.0-1ubuntu2.8) …
Setting up libfontembed1:amd64 (1.8.3-2ubuntu3.4) …
Setting up libpoppler-glib8:amd64 (0.41.0-0ubuntu1.7) …
Setting up libraw15:amd64 (0.17.1-1ubuntu0.3) …
Setting up libsnmp-base (5.7.3+dfsg-1ubuntu4.1) …
Setting up libsnmp30:amd64 (5.7.3+dfsg-1ubuntu4.1) …
Setting up libvncclient1:amd64 (0.9.10+dfsg-3ubuntu0.16.04.2) …
Setting up libvorbis0a:amd64 (1.3.5-3ubuntu0.2) …
Setting up libvorbisfile3:amd64 (1.3.5-3ubuntu0.2) …
Setting up libvorbisenc2:amd64 (1.3.5-3ubuntu0.2) …
Setting up libwayland-client0:amd64 (1.12.0-1~ubuntu16.04.3) …
Setting up libwayland-cursor0:amd64 (1.12.0-1~ubuntu16.04.3) …
Setting up libwayland-server0:amd64 (1.12.0-1~ubuntu16.04.3) …
Setting up ubuntu-mono (14.04+16.04.20180326-0ubuntu1) …
Setting up light-themes (14.04+16.04.20180326-0ubuntu1) …
Setting up linux-firmware (1.157.19) …
update-initramfs: Generating /boot/initrd.img-4.10.0-40-generic
update-initramfs: Generating /boot/initrd.img-4.10.0-28-generic
Setting up linux-libc-dev:amd64 (4.4.0-127.153) …
Setting up python-software-properties (0.96.20.7) …
Setting up ubuntu-mobile-icons (14.04+16.04.20180326-0ubuntu1) …
Setting up suru-icon-theme (14.04+16.04.20180326-0ubuntu1) …
Setting up thunderbird (1:52.8.0+build1-0ubuntu0.16.04.1) …
Setting up thunderbird-locale-en (1:52.8.0+build1-0ubuntu0.16.04.1) …
Setting up thunderbird-gnome-support (1:52.8.0+build1-0ubuntu0.16.04.1) …
Setting up thunderbird-locale-en-us (1:52.8.0+build1-0ubuntu0.16.04.1) …
Setting up ubuntu-artwork (1:14.04+16.04.20180326-0ubuntu1) …
Setting up unity-scopes-runner (7.1.4+16.04.20180209.1-0ubuntu1) …
Setting up xdg-utils (1.1.1-1ubuntu1.16.04.3) …
Setting up jenkins (2.107.3) …
Installing new version of config file /etc/default/jenkins …
Installing new version of config file /etc/init.d/jenkins …
Installing new version of config file /etc/logrotate.d/jenkins …
Setting up libruby2.3:amd64 (2.3.1-2~16.04.9) …
Setting up ruby2.3 (2.3.1-2~16.04.9) …
Processing triggers for ca-certificates (20170717~16.04.1) …
Updating certificates in /etc/ssl/certs…
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d…

done.
done.
Setting up openjdk-8-jre-headless:amd64 (8u171-b11-0ubuntu0.16.04.1) …
Installing new version of config file /etc/java-8-openjdk/jvm-amd64.cfg …
Installing new version of config file /etc/java-8-openjdk/management/management.properties …
Installing new version of config file /etc/java-8-openjdk/net.properties …
Installing new version of config file /etc/java-8-openjdk/security/java.security …
Setting up libreoffice-common (1:5.1.6~rc2-0ubuntu1~xenial3) …
Installing new version of config file /etc/bash_completion.d/libreoffice.sh …
Setting up libreoffice-style-galaxy (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-style-breeze (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-core (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-base-core (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-calc (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-gtk (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-gnome (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-writer (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-draw (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-impress (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-ogltrans (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-pdfimport (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up python3-uno (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-math (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up libreoffice-avmedia-backend-gstreamer (1:5.1.6~rc2-0ubuntu1~xenial3) …
Setting up python3-distupgrade (1:16.04.25) …
Setting up python3-update-manager (1:16.04.13) …
Setting up ubuntu-release-upgrader-core (1:16.04.25) …
Setting up update-manager-core (1:16.04.13) …
Setting up update-notifier-common (3.168.8) …
Setting up ubuntu-release-upgrader-gtk (1:16.04.25) …
Setting up update-manager (1:16.04.13) …
Setting up update-notifier (3.168.8) …
Processing triggers for libgtk2.0-0:amd64 (2.24.30-1ubuntu1.16.04.2) …
Processing triggers for libgtk-3-0:amd64 (3.18.9-1ubuntu3.3) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for initramfs-tools (0.122ubuntu8.11) …
update-initramfs: Generating /boot/initrd.img-4.10.0-40-generic
Processing triggers for systemd (229-4ubuntu21.2) …
Processing triggers for ureadahead (0.100.0-19) …
vskumar@ubuntu:~/K8$

vskumar@ubuntu:~/K8$ sudo sudo apt-get install -y build-essential make cmake scons curl git \
> ruby autoconf automake autoconf-archive \
> gettext libtool flex bison \
> libbz2-dev libcurl4-openssl-dev \
> libexpat-dev libncurses-dev
[sudo] password for vskumar:
Reading package lists… Done
Building dependency tree
Reading state information… Done
Note, selecting ‘libexpat1-dev’ instead of ‘libexpat-dev’
Note, selecting ‘libncurses5-dev’ instead of ‘libncurses-dev’
build-essential is already the newest version (12.1ubuntu2).
gettext is already the newest version (0.19.7-2ubuntu3).
make is already the newest version (4.1-6).
ruby is already the newest version (1:2.3.0+1).
ruby set to manually installed.
curl is already the newest version (7.47.0-1ubuntu2.8).
git is already the newest version (1:2.7.4-0ubuntu1.3).
libexpat1-dev is already the newest version (2.1.0-7ubuntu0.16.04.3).
libexpat1-dev set to manually installed.
The following packages were automatically installed and are no longer required:
ca-certificates-java default-jre-headless java-common openjdk-8-jre-headless
Use ‘sudo apt autoremove’ to remove them.
The following additional packages will be installed:
autotools-dev bzip2-doc cmake-data libbison-dev libfl-dev libjsoncpp1
libltdl-dev libsigsegv2 libtinfo-dev m4
Suggested packages:
gnu-standards autoconf-doc bison-doc codeblocks eclipse ninja-build
libcurl4-doc libcurl3-dbg libidn11-dev libkrb5-dev libldap2-dev librtmp-dev
libssl-dev zlib1g-dev libtool-doc ncurses-doc gfortran | fortran95-compiler
gcj-jdk
The following NEW packages will be installed:
autoconf autoconf-archive automake autotools-dev bison bzip2-doc cmake
cmake-data flex libbison-dev libbz2-dev libcurl4-openssl-dev libfl-dev
libjsoncpp1 libltdl-dev libncurses5-dev libsigsegv2 libtinfo-dev libtool m4
scons
0 upgraded, 21 newly installed, 0 to remove and 23 not upgraded.
Need to get 8,095 kB of archives.
After this operation, 40.6 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 autoconf-archive all 20150925-1 [637 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cmake-data all 3.5.1-1ubuntu3 [1,121 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjsoncpp1 amd64 1.7.2-1 [73.0 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cmake amd64 3.5.1-1ubuntu3 [2,623 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsigsegv2 amd64 2.10-4 [14.1 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 m4 amd64 1.4.17-5 [195 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfl-dev amd64 2.6.0-11 [12.5 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 flex amd64 2.6.0-11 [290 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 autoconf all 2.69-9 [321 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 autotools-dev all 20150820.1 [39.8 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 automake all 1:1.15-4ubuntu1 [510 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/main amd64 libbison-dev amd64 2:3.0.4.dfsg-1 [338 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 bison amd64 2:3.0.4.dfsg-1 [259 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial/main amd64 bzip2-doc all 1.0.6-8 [295 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial/main amd64 libbz2-dev amd64 1.0.6-8 [29.1 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl4-openssl-dev amd64 7.47.0-1ubuntu2.8 [263 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial/main amd64 libltdl-dev amd64 2.4.6-0.1 [162 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial/main amd64 libtinfo-dev amd64 6.0+20160213-1ubuntu1 [77.4 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 libncurses5-dev amd64 6.0+20160213-1ubuntu1 [175 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 libtool all 2.4.6-0.1 [193 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial/universe amd64 scons all 2.4.1-1 [469 kB]
Fetched 8,095 kB in 15s (537 kB/s)
Selecting previously unselected package autoconf-archive.
(Reading database … 224983 files and directories currently installed.)
Preparing to unpack …/autoconf-archive_20150925-1_all.deb …
Unpacking autoconf-archive (20150925-1) …
Selecting previously unselected package cmake-data.
Preparing to unpack …/cmake-data_3.5.1-1ubuntu3_all.deb …
Unpacking cmake-data (3.5.1-1ubuntu3) …
Selecting previously unselected package libjsoncpp1:amd64.
Preparing to unpack …/libjsoncpp1_1.7.2-1_amd64.deb …
Unpacking libjsoncpp1:amd64 (1.7.2-1) …
Selecting previously unselected package cmake.
Preparing to unpack …/cmake_3.5.1-1ubuntu3_amd64.deb …
Unpacking cmake (3.5.1-1ubuntu3) …
Selecting previously unselected package libsigsegv2:amd64.
Preparing to unpack …/libsigsegv2_2.10-4_amd64.deb …
Unpacking libsigsegv2:amd64 (2.10-4) …
Selecting previously unselected package m4.
Preparing to unpack …/archives/m4_1.4.17-5_amd64.deb …
Unpacking m4 (1.4.17-5) …
Selecting previously unselected package libfl-dev:amd64.
Preparing to unpack …/libfl-dev_2.6.0-11_amd64.deb …
Unpacking libfl-dev:amd64 (2.6.0-11) …
Selecting previously unselected package flex.
Preparing to unpack …/flex_2.6.0-11_amd64.deb …
Unpacking flex (2.6.0-11) …
Selecting previously unselected package autoconf.
Preparing to unpack …/autoconf_2.69-9_all.deb …
Unpacking autoconf (2.69-9) …
Selecting previously unselected package autotools-dev.
Preparing to unpack …/autotools-dev_20150820.1_all.deb …
Unpacking autotools-dev (20150820.1) …
Selecting previously unselected package automake.
Preparing to unpack …/automake_1%3a1.15-4ubuntu1_all.deb …
Unpacking automake (1:1.15-4ubuntu1) …
Selecting previously unselected package libbison-dev:amd64.
Preparing to unpack …/libbison-dev_2%3a3.0.4.dfsg-1_amd64.deb …
Unpacking libbison-dev:amd64 (2:3.0.4.dfsg-1) …
Selecting previously unselected package bison.
Preparing to unpack …/bison_2%3a3.0.4.dfsg-1_amd64.deb …
Unpacking bison (2:3.0.4.dfsg-1) …
Selecting previously unselected package bzip2-doc.
Preparing to unpack …/bzip2-doc_1.0.6-8_all.deb …
Unpacking bzip2-doc (1.0.6-8) …
Selecting previously unselected package libbz2-dev:amd64.
Preparing to unpack …/libbz2-dev_1.0.6-8_amd64.deb …
Unpacking libbz2-dev:amd64 (1.0.6-8) …
Selecting previously unselected package libcurl4-openssl-dev:amd64.
Preparing to unpack …/libcurl4-openssl-dev_7.47.0-1ubuntu2.8_amd64.deb …
Unpacking libcurl4-openssl-dev:amd64 (7.47.0-1ubuntu2.8) …
Selecting previously unselected package libltdl-dev:amd64.
Preparing to unpack …/libltdl-dev_2.4.6-0.1_amd64.deb …
Unpacking libltdl-dev:amd64 (2.4.6-0.1) …
Selecting previously unselected package libtinfo-dev:amd64.
Preparing to unpack …/libtinfo-dev_6.0+20160213-1ubuntu1_amd64.deb …
Unpacking libtinfo-dev:amd64 (6.0+20160213-1ubuntu1) …
Selecting previously unselected package libncurses5-dev:amd64.
Preparing to unpack …/libncurses5-dev_6.0+20160213-1ubuntu1_amd64.deb …
Unpacking libncurses5-dev:amd64 (6.0+20160213-1ubuntu1) …
Selecting previously unselected package libtool.
Preparing to unpack …/libtool_2.4.6-0.1_all.deb …
Unpacking libtool (2.4.6-0.1) …
Selecting previously unselected package scons.
Preparing to unpack …/archives/scons_2.4.1-1_all.deb …
Unpacking scons (2.4.1-1) …
Processing triggers for doc-base (0.10.7) …
Processing 4 added doc-base files…
Registering documents with scrollkeeper…
Processing triggers for install-info (6.1.0.dfsg.1-5) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Setting up autoconf-archive (20150925-1) …
Setting up cmake-data (3.5.1-1ubuntu3) …
Setting up libjsoncpp1:amd64 (1.7.2-1) …
Setting up cmake (3.5.1-1ubuntu3) …
Setting up libsigsegv2:amd64 (2.10-4) …
Setting up m4 (1.4.17-5) …
Setting up libfl-dev:amd64 (2.6.0-11) …
Setting up flex (2.6.0-11) …
Setting up autoconf (2.69-9) …
Setting up autotools-dev (20150820.1) …
Setting up automake (1:1.15-4ubuntu1) …
update-alternatives: using /usr/bin/automake-1.15 to provide /usr/bin/automake (automake) in auto mode
Setting up libbison-dev:amd64 (2:3.0.4.dfsg-1) …
Setting up bison (2:3.0.4.dfsg-1) …
update-alternatives: using /usr/bin/bison.yacc to provide /usr/bin/yacc (yacc) in auto mode
Setting up bzip2-doc (1.0.6-8) …
Setting up libbz2-dev:amd64 (1.0.6-8) …
Setting up libcurl4-openssl-dev:amd64 (7.47.0-1ubuntu2.8) …
Setting up libltdl-dev:amd64 (2.4.6-0.1) …
Setting up libtinfo-dev:amd64 (6.0+20160213-1ubuntu1) …
Setting up libncurses5-dev:amd64 (6.0+20160213-1ubuntu1) …
Setting up libtool (2.4.6-0.1) …
Setting up scons (2.4.1-1) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
vskumar@ubuntu:~/K8$

vskumar@ubuntu:~/K8$
vskumar@ubuntu:~/K8$ git clone https://github.com/Homebrew/linuxbrew.git ~/.linuxbrew
Cloning into ‘/home/vskumar/.linuxbrew’…
remote: Counting objects: 353749, done.
remote: Total 353749 (delta 0), reused 0 (delta 0), pack-reused 353749
Receiving objects: 100% (353749/353749), 67.99 MiB | 2.02 MiB/s, done.
Resolving deltas: 100% (267333/267333), done.
Checking connectivity… done.
vskumar@ubuntu:~/K8$

vskumar@ubuntu:~/K8$
vskumar@ubuntu:~/K8$ sudo vi ~/.bashrc
vskumar@ubuntu:~/K8$ tail ~/.bashrc
# Until LinuxBrew is fixed, the following is required.
# See: https://github.com/Homebrew/linuxbrew/issues/47
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig:$PKG_CONFIG_PATH
## Setup linux brew
export LINUXBREWHOME=$HOME/.linuxbrew
export PATH=$LINUXBREWHOME/bin:$PATH
export MANPATH=$LINUXBREWHOME/man:$MANPATH
export PKG_CONFIG_PATH=$LINUXBREWHOME/lib64/pkgconfig:$LINUXBREWHOME/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=$LINUXBREWHOME/lib64:$LINUXBREWHOME/lib:$LD_LIBRARY_PATH
vskumar@ubuntu:~/K8$
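To confirm the exports above take effect in a new shell, here is a minimal sketch. It mirrors the `LINUXBREWHOME` and `PATH` lines from `~/.bashrc` (adjust the path if you cloned Linuxbrew somewhere other than `~/.linuxbrew`) and then verifies the bin directory really landed on `PATH`:

```shell
# Sketch: reproduce the PATH change from ~/.bashrc and verify it took effect.
# LINUXBREWHOME mirrors the export shown above; adjust if you cloned elsewhere.
LINUXBREWHOME="$HOME/.linuxbrew"
PATH="$LINUXBREWHOME/bin:$PATH"

# Search PATH for the linuxbrew bin directory as an exact colon-separated entry.
case ":$PATH:" in
  *":$LINUXBREWHOME/bin:"*) echo "linuxbrew bin is on PATH" ;;
  *)                        echo "linuxbrew bin is missing" ;;
esac
```

After sourcing `~/.bashrc` (or opening a new terminal), `which brew` should resolve as shown below.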

vskumar@ubuntu:~$ which brew
/home/vskumar/.linuxbrew/bin/brew
vskumar@ubuntu:~$
vskumar@ubuntu:~$ echo $PKG_CONFIG_PATH
/home/vskumar/.linuxbrew/lib64/pkgconfig:/home/vskumar/.linuxbrew/lib/pkgconfig:/usr/local/lib/pkgconfig:/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig:
vskumar@ubuntu:~$

vskumar@ubuntu:~$
vskumar@ubuntu:~$ brew update
remote: Counting objects: 1101, done.
remote: Compressing objects: 100% (1021/1021), done.
remote: Total 1101 (delta 167), reused 324 (delta 39), pack-reused 0
Receiving objects: 100% (1101/1101), 1.13 MiB | 388.00 KiB/s, done.
Resolving deltas: 100% (167/167), completed with 80 local objects.
From https://github.com/Linuxbrew/brew
+ 5320403…191f6b0 master -> origin/master (forced update)
* [new tag] 1.6.6 -> 1.6.6
HEAD is now at 191f6b0 Merge tag Homebrew/1.6.6 into Linuxbrew/master
/home/vskumar/.linuxbrew/Library/Homebrew/cmd/update.sh: line 6: /home/vskumar/.linuxbrew/Library/ENV/scm/git: No such file or directory
vskumar@ubuntu:~$ brew update

vskumar@ubuntu:~$
vskumar@ubuntu:~$ brew doctor
==> Downloading https://homebrew.bintray.com/bottles-portable-ruby/portable-ruby-2.3.3_2.x86_64_linux.bottle.tar.gz
######################################################################## 100.0%
==> Pouring portable-ruby-2.3.3_2.x86_64_linux.bottle.tar.gz
Your system is ready to brew.
vskumar@ubuntu:~$

===== Now, Linuxbrew is ready to use ====>

26. DevOps: How to install Apache-Ant on Ubuntu?

In this blog, I would like to demonstrate the Apache-Ant installation on Ubuntu.

What are the pre-requisites?:
You need JDK 8 or 9 on your Ubuntu machine.
If you do not have it, please see my Jenkins installation blog for the instructions;
it covers the JDK installation procedure as well.
URL: https://vskumar.blog/2017/11/25/1-devops-jenkins2-9-installation-with-java-9-on-windows-10/
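Before proceeding, you can check whether a JDK is already present. A small sketch (both branches print a line starting with "JDK" so the result is easy to spot in a script):

```shell
# Sketch: detect whether a JDK compiler (javac) is already on this machine.
if command -v javac >/dev/null 2>&1; then
  echo "JDK present: $(javac -version 2>&1)"
else
  echo "JDK missing: install JDK 8/9 first"
fi
```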

How to uninstall an existing Ant?:
Step1:
I already have Ant installed in my Ubuntu VM.
First, let me remove it and restart the installation process.
We need to use the below command:
sudo apt-get remove ant
===== Screen display =====>
vskumar@ubuntu:~$ sudo apt-get remove ant
[sudo] password for vskumar:
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages will be REMOVED:
ant ant-optional
0 upgraded, 0 newly installed, 2 to remove and 4 not upgraded.
After this operation, 3,108 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database … 236912 files and directories currently installed.)
Removing ant-optional (1.9.6-1ubuntu1) …
Removing ant (1.9.6-1ubuntu1) …
Processing triggers for man-db (2.7.5-1) …
========= Ant is Removed ===>

Step2:
=== Checking Ant version ===>
vskumar@ubuntu:~$ ant -v
The program ‘ant’ is currently not installed. You can install it by typing:
sudo apt install ant
vskumar@ubuntu:~$ D
=== Now the Ant binary is gone ===>
However, Ant's configuration and data files may still remain on the system.

Step3:
Also, please note the following:
If we want to completely delete Ant's configuration and/or data files from Ubuntu Xenial,
the below command will do it:
sudo apt-get purge ant
== Screen display ===>
vskumar@ubuntu:~$ sudo apt-get purge ant
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages will be REMOVED:
ant* ant-optional*
0 upgraded, 0 newly installed, 2 to remove and 4 not upgraded.
After this operation, 3,108 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database … 236912 files and directories currently installed.)
Removing ant-optional (1.9.6-1ubuntu1) …
Removing ant (1.9.6-1ubuntu1) …
Processing triggers for man-db (2.7.5-1) …
vskumar@ubuntu:~$
======================>

Now, let us check it.
=== Check the version now also ===>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ ant -v
bash: /usr/bin/ant: No such file or directory
vskumar@ubuntu:~$
=================================>
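The manual check above can also be scripted. A minimal sketch that reports whether `ant` is still reachable on `PATH`:

```shell
# Sketch: scripted version of the check above -- does PATH still resolve 'ant'?
if command -v ant >/dev/null 2>&1; then
  echo "ant still present at: $(command -v ant)"
else
  echo "ant is fully removed from PATH"
fi
```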

If you still suspect an older Ant version remains, we can follow the below step as well:
to delete Ant's configuration and/or data files together with its no-longer-needed dependencies from Ubuntu Xenial,
we should execute the below command:
sudo apt-get purge --auto-remove ant

Now, we will see how to install and configure the latest Ant version (1.10.1):

Step1:
We need to update the package lists/repositories in the Ubuntu VM as below:
sudo apt-get update
==== Screen display ======>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo apt-get update
[sudo] password for vskumar:
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial InRelease
Hit:3 http://ppa.launchpad.net/webupd8team/java/ubuntu xenial InRelease
Get:4 https://download.docker.com/linux/ubuntu xenial InRelease [65.8 kB]
Ign:5 https://apt.datadoghq.com stable InRelease
Get:6 https://apt.datadoghq.com stable Release [4,525 B]
Get:7 https://apt.datadoghq.com stable Release.gpg [819 B]
Ign:8 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial InRelease
Ign:9 https://pkg.jenkins.io/debian-stable binary/ InRelease
Ign:10 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial InRelease
Get:11 https://pkg.jenkins.io/debian-stable binary/ Release [2,042 B]
Get:12 https://pkg.jenkins.io/debian-stable binary/ Release.gpg [181 B]
Ign:13 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial Release
Ign:14 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial Release
Get:15 https://download.docker.com/linux/ubuntu xenial/edge amd64 Packages [4,793 B]
Ign:15 https://download.docker.com/linux/ubuntu xenial/edge amd64 Packages
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Get:23 https://apt.datadoghq.com stable/6 amd64 Packages [2,447 B]
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Get:15 https://download.docker.com/linux/ubuntu xenial/edge amd64 Packages [4,521 B]
Ign:15 https://download.docker.com/linux/ubuntu xenial/edge amd64 Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Get:15 https://download.docker.com/linux/ubuntu xenial/edge amd64 Packages [29.9 kB]
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Get:29 https://pkg.jenkins.io/debian-stable binary/ Packages [12.7 kB]
Ign:29 https://pkg.jenkins.io/debian-stable binary/ Packages
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Get:29 https://pkg.jenkins.io/debian-stable binary/ Packages [11.9 kB]
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Ign:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Err:16 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 Packages
403 Forbidden
Ign:17 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 all Packages
Ign:18 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en_US
Ign:19 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 Translation-en
Ign:20 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:21 https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Err:22 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 Packages
403 Forbidden
Ign:24 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 all Packages
Ign:25 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en_US
Ign:26 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 Translation-en
Ign:27 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 amd64 DEP-11 Metadata
Ign:28 https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial/test-17.06 DEP-11 64×64 Icons
Fetched 118 kB in 35s (3,328 B/s)
Reading package lists… Done
W: The repository ‘https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu xenial Release’ does not have a Release file.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: The repository ‘https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu xenial Release’ does not have a Release file.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu/dists/xenial/test-17.06/binary-amd64/Packages 403 Forbidden
E: Failed to fetch https://storebits.docker.com/ee/ubuntu/vskumardocker/ubuntu/dists/xenial/test-17.06/binary-amd64/Packages 403 Forbidden
E: Some index files failed to download. They have been ignored, or old ones used instead.
vskumar@ubuntu:~$
====================================>

Step2:
Now, we can install Ant with the below command:
sudo apt-get install ant
==== Screen Display =====>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo apt-get install ant
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
ant-optional
Suggested packages:
ant-doc ant-gcj default-jdk | java-compiler | java-sdk ant-optional-gcj
antlr javacc jython libbcel-java libbsf-java libgnumail-java libjdepend-java
liboro-java libregexp-java
The following NEW packages will be installed:
ant ant-optional
0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.
Need to get 0 B/2,205 kB of archives.
After this operation, 3,108 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Selecting previously unselected package ant.
(Reading database … 236678 files and directories currently installed.)
Preparing to unpack …/ant_1.9.6-1ubuntu1_all.deb …
Unpacking ant (1.9.6-1ubuntu1) …
Selecting previously unselected package ant-optional.
Preparing to unpack …/ant-optional_1.9.6-1ubuntu1_all.deb …
Unpacking ant-optional (1.9.6-1ubuntu1) …
Processing triggers for man-db (2.7.5-1) …
Setting up ant (1.9.6-1ubuntu1) …
Setting up ant-optional (1.9.6-1ubuntu1) …
vskumar@ubuntu:~$
==========================>

Step3:
Now let me check its version.
===== Version check ===>
vskumar@ubuntu:~$ ant -v
Apache Ant(TM) version 1.9.6 compiled on July 8 2015
Trying the default build file: build.xml
Buildfile: build.xml does not exist!
Build failed
vskumar@ubuntu:~$
====================>
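The "Build failed" message above is expected: a bare `ant` run looks for a build.xml project file in the current directory, and we have not created one yet. A minimal hedged sketch of such a build file (the project and target names here are illustrative, not from this setup):

```shell
# Create a minimal build.xml so that a bare "ant" run has something to build.
# The project/target names are examples only.
cat > build.xml <<'EOF'
<project name="hello" default="greet">
  <target name="greet">
    <echo message="Hello from Ant"/>
  </target>
</project>
EOF
# ant        # with this file present, "ant" should echo "Hello from Ant"
```

With this file in place, running `ant` in the same directory no longer reports "Build failed".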

Step4:
We need to install Apache Ant on Ubuntu 16.04 using SDKMAN.
SDKMAN is a tool which can be used to manage parallel versions of multiple
Software Development Kits on most Unix-based systems.
We can leverage SDKMAN to install Apache Ant on Ubuntu 16.04 as well,
using the below command:
sdk install ant
Before doing this, I need to install SDKMAN in my Ubuntu VM.

===== Screen display =====>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ curl -s "https://get.sdkman.io" | bash

[SDKMAN ASCII-art banner]

Now attempting installation…

Looking for a previous installation of SDKMAN…
Looking for unzip…
Looking for zip…
Looking for curl…
Looking for sed…
Installing SDKMAN scripts…
Create distribution directories…
Getting available candidates…
Prime the config file…
Download script archive…
######################################################################## 100.0%
Extract script archive…
Install scripts…
Set version to 5.6.3+299 …
Attempt update of interactive bash profile on regular UNIX…
Added sdkman init snippet to /home/vskumar/.bashrc
Attempt update of zsh profile…
Updated existing /home/vskumar/.zshrc

All done!

Please open a new terminal, or run the following in the existing one:

source "/home/vskumar/.sdkman/bin/sdkman-init.sh"

Then issue the following command:

sdk help

Enjoy!!!
vskumar@ubuntu:~$
== SDK installed =====>
We need to use the below command:
=====>
vskumar@ubuntu:~$ source "$HOME/.sdkman/bin/sdkman-init.sh"
vskumar@ubuntu:~$
======>

Now, let us check SDK Version.
===== SDK Version checking ====>
vskumar@ubuntu:~$ sdk version
==== BROADCAST =================================================================
* 09/05/18: sbt 1.1.5 released on SDKMAN! #scala
* 09/05/18: Springboot 2.0.2.RELEASE released on SDKMAN! #springboot
* 09/05/18: Springboot 1.5.13.RELEASE released on SDKMAN! #springboot
================================================================================

SDKMAN 5.6.3+299
vskumar@ubuntu:~$
==========================>

Step5:

Now, let us use the below command:
sdk install ant

=== Screen display ==>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sdk install ant

 

Downloading: ant 1.10.1

In progress…

######################################################################## 100.0%

Installing: ant 1.10.1
Done installing!

 

Setting ant 1.10.1 as default.
vskumar@ubuntu:~$
vskumar@ubuntu:~$
=================>

Step6:
Now, let us check Ant's latest version:

== Screen display ===>
vskumar@ubuntu:~$ ant -v
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
Trying the default build file: build.xml
Buildfile: build.xml does not exist!
Build failed
vskumar@ubuntu:~$
== Now you can see the version change after using SDKMAN ===>

Step7:
How to create the ANT_HOME environment variable?:

Create an ant.sh file in the /etc/profile.d folder (you can use vi with the below command)

== Let us see the files===>
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ ls /etc/profile.d
appmenu-qt5.sh bash_completion.sh vte-2.91.sh
apps-bin-path.sh cedilla-portuguese.sh
vskumar@ubuntu:~$
==========================>
There is no ant.sh file.

sudo vi /etc/profile.d/ant.sh
Enter the following content into the file:

export ANT_HOME=/usr/local/ant
export PATH=${ANT_HOME}/bin:${PATH}
Save the file.
====== ant.sh file creation ===>
vskumar@ubuntu:~$ sudo vim /etc/profile.d/ant.sh
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo cat /etc/profile.d/ant.sh

export ANT_HOME=/usr/local/ant
export PATH=${ANT_HOME}/bin:${PATH}
vskumar@ubuntu:~$
vskumar@ubuntu:~$ ls /etc/profile.d
ant.sh apps-bin-path.sh cedilla-portuguese.sh
appmenu-qt5.sh bash_completion.sh vte-2.91.sh
vskumar@ubuntu:~$
============ Contents of ant.sh=====>
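What the ant.sh snippet does, in effect, is prepend Ant's bin directory to PATH so the shell finds that ant binary first. A small sketch (the /usr/local/ant path is the one used in this blog; adjust it to your actual Ant location):

```shell
# Prepend ANT_HOME/bin to PATH, exactly as ant.sh does.
export ANT_HOME=/usr/local/ant
export PATH=${ANT_HOME}/bin:${PATH}
# The first PATH entry is now the Ant bin directory:
echo "${PATH%%:*}"
```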

Step8:
We need to activate the above environment variables.
We can do that by logging out and logging in again, or by simply running the below command:
source /etc/profile
==== Screen display ===>
vskumar@ubuntu:~$ source /etc/profile
vskumar@ubuntu:~$
=======================>
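Why `source` rather than just running the script? A script executed normally runs in a child shell, so its exports disappear when it exits; `source` runs it in the current shell. A quick throwaway sketch (demo.sh is a fabricated file for illustration, not part of the setup):

```shell
# Demonstrate the difference between executing and sourcing a script.
echo 'export DEMO_VAR=hello' > demo.sh
bash demo.sh                          # child shell: the export is lost
echo "after bash:   ${DEMO_VAR:-unset}"
. ./demo.sh                           # sourced: the export sticks
echo "after source: ${DEMO_VAR:-unset}"
```

The first echo prints "unset", the second prints "hello", which is why profile scripts must be sourced to take effect.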

Now let us check the ant version after doing the above steps to observe the change:

==== Display ==>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ ant -version
Apache Ant(TM) version 1.10.1 compiled on February 2 2017
vskumar@ubuntu:~$
== No error now =====>

Finally, we have installed and configured Apache Ant(TM) version 1.10.1 successfully.

For Ant installation on windows 10 visit my blog:

https://vskumar.blog/2018/05/12/24-devops-how-to-install-apache-ant-for-windows-10/

25.DevOps: How to install Apache-Maven for Windows 10 ?

Apache-Maven Logo

With reference to my previous blog for Maven installation on Ubuntu:
https://vskumar.blog/2018/05/05/21-devops-how-to-install-maven-3-3-9-on-ubuntu-linux/

In this blog, I have shown the steps for Maven installation on
Windows 10.

Step1:
Go to the site: http://maven.apache.org/download.cgi
You can find the file: apache-maven-3.5.3-bin.zip
Save it to your desired location.

Step2:
Unzip it, and it should have created the folder as below:
E:\apache-maven-3.5.3-bin

Note: You can replace this folder path with your Maven path.

It should have the following files/folders:

E:\apache-maven-3.5.3-bin>dir/p
Volume in drive E is New Volume
Volume Serial Number is 1870-3E6A

Directory of E:\apache-maven-3.5.3-bin

05/12/2018 01:40 PM <DIR> .
05/12/2018 01:40 PM <DIR> ..
05/12/2018 01:40 PM <DIR> apache-maven-3.5.3
0 File(s) 0 bytes
3 Dir(s) 33,347,407,872 bytes free

E:\apache-maven-3.5.3-bin>
E:\apache-maven-3.5.3-bin>cd apache*
E:\apache-maven-3.5.3-bin\apache-maven-3.5.3>
E:\apache-maven-3.5.3-bin\apache-maven-3.5.3>dir/p
Volume in drive E is New Volume
Volume Serial Number is 1870-3E6A

Directory of E:\apache-maven-3.5.3-bin\apache-maven-3.5.3

05/12/2018 01:40 PM <DIR> .
05/12/2018 01:40 PM <DIR> ..
05/12/2018 01:40 PM <DIR> bin
05/12/2018 01:40 PM <DIR> boot
05/12/2018 01:40 PM <DIR> conf
05/12/2018 01:40 PM <DIR> lib
05/12/2018 01:40 PM 20,959 LICENSE
05/12/2018 01:40 PM 182 NOTICE
05/12/2018 01:40 PM 2,544 README.txt
3 File(s) 23,685 bytes
6 Dir(s) 33,347,407,872 bytes free

E:\apache-maven-3.5.3-bin\apache-maven-3.5.3>

Step3:
Let us update the windows system/environment variables:

Check for M2_HOME variable. If it is not there,
create a new one and add the below:
Variable Name : M2_HOME
Variable Value : E:\apache-maven-3.5.3-bin\apache-maven-3.5.3

In your system variables,
if you have MAVEN_HOME as a variable, you need to update its value also.
Now, append %M2_HOME%\bin to your PATH variable as well.

Step4:
Now, how to verify the installed Maven version ?:
Open a fresh windows command prompt.
Type mvn -version
You should see the screen output as below with its 3.5.3 version:
Microsoft Windows [Version 10.0.16299.431]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\Toshiba>mvn -version
Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-25T01:19:05+05:30)
Maven home: E:\apache-maven-3.5.3-bin\apache-maven-3.5.3
Java version: 9.0.1, vendor: Oracle Corporation
Java home: D:\Java\jdk-9.0.1
Default locale: en_US, platform encoding: Cp1252
OS name: “windows 10”, version: “10.0”, arch: “amd64”, family: “windows”

C:\Users\Toshiba>

If you are getting the above version display, it means you have the correct/latest Maven version on your machine.

 

24.DevOps: How to install Apache-Ant for Windows 10 ?

Ant-Logo

With reference to my previous blogs on DevOps CI Tools
installation/integration, in this blog you can learn on how to install ANT for Windows10.

Download the Windows 10 version of Apache Ant from the below URL:

http://redrockdigimark.com/apachemirror//ant/binaries/apache-ant-1.10.3-bin.zip

Assuming you have JDK and JAVA_HOME set up in your Windows environment variables.
If you need to install JDK on your Windows machine, please go through my Jenkins
installation blog. It has the JDK installation procedure also.
URL: https://vskumar.blog/2017/11/25/1-devops-jenkins2-9-installation-with-java-9-on-windows-10/

Unzip the file: apache-ant-1.10.3-bin.zip
Note this path, then go to Windows System variables and add a new variable.
Variable name: ANT_HOME
Variable value (your Ant software unzipped path), for example:
D:\Ant\apache-ant-1.10.3-bin\apache-ant-1.10.3\bin

Note: I gave my Ant path as an example.

Strictly, ANT_HOME should point to the Ant folder itself, without \bin.
So go back to the System variables section, edit the ANT_HOME variable, and
give the path:
D:\Ant\apache-ant-1.10.3-bin\apache-ant-1.10.3

Then append %ANT_HOME%\bin to your PATH variable.

So you have updated the Windows settings for the Ant folder location.

Now, open a fresh CMD window and check the ANT folder as below:
Microsoft Windows [Version 10.0.16299.431]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\Toshiba>echo %ANT_HOME%
D:\Ant\apache-ant-1.10.3-bin\apache-ant-1.10.3\bin

C:\Users\Toshiba>

How to check your current Ant version?:
Open a fresh command window,
and apply as below:

C:\Users\Toshiba>echo %ANT_HOME%
D:\Ant\apache-ant-1.10.3-bin\apache-ant-1.10.3

C:\Users\Toshiba>ant -v
Apache Ant(TM) version 1.10.3 compiled on March 24 2018
Trying the default build file: build.xml
Buildfile: build.xml does not exist!
Build failed

C:\Users\Toshiba>

Now, it shows your Ant folder and its version also.
It means your Ant software is ready to use.

 

23.DevOps: How to install Ansible on Ubuntu [Linux] VM ?

 

ansible-logo.png

In this blog, I would like to demonstrate  “Installing Ansible on Ubuntu VM”.

At the end of this blog, you can see the demonstration video.

Let us follow the below steps:

Step 1:
The way to get Ansible for Ubuntu is to add the project’s PPA (personal package archive) to the Ubuntu system.
We can add the Ansible PPA by typing the following command:

$sudo apt-add-repository ppa:ansible/ansible

=== Screen output ====>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo apt-add-repository ppa:ansible/ansible
[sudo] password for vskumar:
Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy.
Avoid writing scripts or custom code to deploy and update your applications— automate in a language that
approaches plain English, using SSH, with no agents to install on remote systems.

http://ansible.com/
More info: https://launchpad.net/~ansible/+archive/ubuntu/ansible
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpzhb6yoiy/secring.gpg’ created
gpg: keyring `/tmp/tmpzhb6yoiy/pubring.gpg’ created
gpg: requesting key 7BB9C367 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpzhb6yoiy/trustdb.gpg: trustdb created
gpg: key 7BB9C367: public key “Launchpad PPA for Ansible, Inc.” imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
vskumar@ubuntu:~$
========= Added Ansible to PPA ===>
Step 2:
Now, let us refresh ubuntu [VM] system package index, so that it is aware of the packages available in the PPA.
Then, we can install the software.
We need to follow the below commands:
$sudo apt-get update
$sudo apt-get install ansible
==== Update package=======>
vskumar@ubuntu:~$ sudo apt-get update
Get:1 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial InRelease [18.0 kB]
Hit:2 https://download.docker.com/linux/ubuntu xenial InRelease
Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease
Hit:4 http://ppa.launchpad.net/webupd8team/java/ubuntu xenial InRelease
Get:5 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial/main amd64 Packages [540 B]
Ign:6 https://pkg.jenkins.io/debian-stable binary/ InRelease
Get:7 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial/main i386 Packages [540 B]
Hit:8 https://pkg.jenkins.io/debian-stable binary/ Release
Get:10 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial/main Translation-en [344 B]
Fetched 19.5 kB in 2s (7,857 B/s)
Reading package lists… Done
vskumar@ubuntu:~$
===== Updated =====>

Step 3:
Now, let us install Ansible as below:
==== Installing Ansible =====>
vskumar@ubuntu:~$ sudo apt-get install ansible
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
python-ecdsa python-httplib2 python-jinja2 python-markupsafe python-paramiko
sshpass
Suggested packages:
python-jinja2-doc
The following NEW packages will be installed:
ansible python-ecdsa python-httplib2 python-jinja2 python-markupsafe
python-paramiko sshpass
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 3,001 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-markupsafe amd64 0.23-2build2 [15.5 kB]
Get:2 http://ppa.launchpad.net/ansible/ansible/ubuntu xenial/main amd64 ansible all 2.4.3.0-1ppa~xenial [2,690 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-jinja2 all 2.8-1 [109 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-ecdsa all 0.13-2 [34.0 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-paramiko all 1.16.0-1 [109 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-httplib2 all 0.9.1+dfsg-1 [34.2 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/universe amd64 sshpass amd64 1.05-1 [10.5 kB]
Fetched 3,001 kB in 9s (306 kB/s)
Selecting previously unselected package python-markupsafe.
(Reading database … 218383 files and directories currently installed.)
Preparing to unpack …/python-markupsafe_0.23-2build2_amd64.deb …
Unpacking python-markupsafe (0.23-2build2) …
Selecting previously unselected package python-jinja2.
Preparing to unpack …/python-jinja2_2.8-1_all.deb …
Unpacking python-jinja2 (2.8-1) …
Selecting previously unselected package python-ecdsa.
Preparing to unpack …/python-ecdsa_0.13-2_all.deb …
Unpacking python-ecdsa (0.13-2) …
Selecting previously unselected package python-paramiko.
Preparing to unpack …/python-paramiko_1.16.0-1_all.deb …
Unpacking python-paramiko (1.16.0-1) …
Selecting previously unselected package python-httplib2.
Preparing to unpack …/python-httplib2_0.9.1+dfsg-1_all.deb …
Unpacking python-httplib2 (0.9.1+dfsg-1) …
Selecting previously unselected package sshpass.
Preparing to unpack …/sshpass_1.05-1_amd64.deb …
Unpacking sshpass (1.05-1) …
Selecting previously unselected package ansible.
Preparing to unpack …/ansible_2.4.3.0-1ppa~xenial_all.deb …
Unpacking ansible (2.4.3.0-1ppa~xenial) …
Processing triggers for man-db (2.7.5-1) …
Setting up python-markupsafe (0.23-2build2) …
Setting up python-jinja2 (2.8-1) …
Setting up python-ecdsa (0.13-2) …
Setting up python-paramiko (1.16.0-1) …
Setting up python-httplib2 (0.9.1+dfsg-1) …
Setting up sshpass (1.05-1) …
Setting up ansible (2.4.3.0-1ppa~xenial) …
vskumar@ubuntu:~$
=== Ansible installation is done! ====>

Step 4:
Let us add the below Python properties package also:

sudo apt-get install python-software-properties
== Installing python properties =======>
vskumar@ubuntu:/etc/ansible$ sudo apt-get install python-software-properties
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
python-apt python-pycurl
Suggested packages:
python-apt-dbg python-apt-doc libcurl4-gnutls-dev python-pycurl-dbg
python-pycurl-doc
The following NEW packages will be installed:
python-apt python-pycurl python-software-properties
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 202 kB of archives.
After this operation, 927 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-apt amd64 1.1.0~beta1build1 [139 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-pycurl amd64 7.43.0-1ubuntu1 [43.3 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/universe amd64 python-software-properties all 0.96.20 [20.1 kB]
Fetched 202 kB in 1s (181 kB/s)
Selecting previously unselected package python-apt.
(Reading database … 220895 files and directories currently installed.)
Preparing to unpack …/python-apt_1.1.0~beta1build1_amd64.deb …
Unpacking python-apt (1.1.0~beta1build1) …
Selecting previously unselected package python-pycurl.
Preparing to unpack …/python-pycurl_7.43.0-1ubuntu1_amd64.deb …
Unpacking python-pycurl (7.43.0-1ubuntu1) …
Selecting previously unselected package python-software-properties.
Preparing to unpack …/python-software-properties_0.96.20_all.deb …
Unpacking python-software-properties (0.96.20) …
Setting up python-apt (1.1.0~beta1build1) …
Setting up python-pycurl (7.43.0-1ubuntu1) …
Setting up python-software-properties (0.96.20) …
vskumar@ubuntu:/etc/ansible$
===== Installed python properties ======>

Step 5:
Let us check the version:
=== Checking ANSIBLE Version ===>
vskumar@ubuntu:~$ ansible --version
ansible 2.4.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u’/home/vskumar/.ansible/plugins/modules’, u’/usr/share/ansible/plugins/modules’]
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
vskumar@ubuntu:~$
=============================>
From the above display, it is confirmed that Ansible is available.

Step 6:
The Ansible configuration files are in the below directory:

======= Check List of files ===>
vskumar@ubuntu:~$ ls -lha /etc/ansible
total 48K
drwxr-xr-x 4 root root 4.0K Mar 6 08:52 .
drwxr-xr-x 142 root root 12K Mar 6 05:59 ..
-rw-r–r– 1 root root 19K Jan 31 15:21 ansible.cfg
drwxr-xr-x 2 root root 4.0K Mar 6 08:59 group_vars
-rw-r–r– 1 root root 1.2K Mar 6 08:20 hosts
drwxr-xr-x 2 root root 4.0K Jan 31 19:46 roles
vskumar@ubuntu:~$
========================>

Step 7:
It is always better to have a backup of the above files in a folder.
Now let me copy all of them as below:
== Making backup ====>

vskumar@ubuntu:~$ sudo cp -R /etc/ansible ansplatform1

vskumar@ubuntu:~$ cd ansplatform1
vskumar@ubuntu:~/ansplatform1$ ls
ansible.cfg group_vars hosts roles
vskumar@ubuntu:~/ansplatform1$
===== Backup files ====>

Step 8:
In the above directory, let us modify ansible.cfg
to have the below line uncommented:
inventory = /etc/ansible/hosts
====Modifying ansible.cfg ====>
vskumar@ubuntu:~/ansplatform1$ sudo vim ansible.cfg
vskumar@ubuntu:~/ansplatform1$
======>

You can see part of the file as below :
=== Part of config file to update ====>
vskumar@ubuntu:/etc/ansible$ ls
ansible.cfg group_vars hosts roles
vskumar@ubuntu:/etc/ansible$ vim ansible
vskumar@ubuntu:/etc/ansible$
vskumar@ubuntu:/etc/ansible$ vim ansible.cfg
vskumar@ubuntu:/etc/ansible$

Updated line:
inventory = /etc/ansible/hosts

== Updated area only ===>
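The edit above can also be done non-interactively with sed. A hedged sketch on a fabricated sample file (operate on a backup copy, not directly on /etc/ansible/ansible.cfg, until you are sure):

```shell
# Create a sample cfg fragment and uncomment its inventory line with sed.
printf '#inventory      = /etc/ansible/hosts\n#library        = /usr/share/my_modules/\n' > ansible.cfg.sample
sed -i 's/^#inventory/inventory/' ansible.cfg.sample
grep '^inventory' ansible.cfg.sample
```

Only the inventory line loses its leading "#"; the other commented settings stay untouched.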

Step 9:

Configuring Ansible Hosts:
Ansible keeps track of all of the servers.
It knows about them through a “hosts” file.
We need to set up this file first, before we can begin to
communicate with our other computers.
Now let us see the current content of hosts file:
Using : $sudo cat /etc/ansible/hosts

====== The default Contents of hosts file ===>
vskumar@ubuntu:~$ sudo cat /etc/ansible/hosts
# This is the default ansible ‘hosts’ file.
#
# It should live in /etc/ansible/hosts
#
# – Comments begin with the ‘#’ character
# – Blank lines are ignored
# – Groups of hosts are delimited by [header] elements
# – You can enter hostnames or ip addresses
# – A hostname/ip can be a member of multiple groups

# Ex 1: Ungrouped hosts, specify before any group headers.

## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10

# Ex 2: A collection of hosts belonging to the ‘webservers’ group

## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110

# If you have multiple hosts following a pattern you can specify
# them like this:

## www[001:006].example.com

# Ex 3: A collection of database servers in the ‘dbservers’ group

## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57

# Here’s another example of host ranges, this time there are no
# leading 0s:

## db-[99:101]-node.example.com

vskumar@ubuntu:~$
==================>

We can see a file that has a lot of example configurations,
none of which will actually work for us since these hosts are made up.
So to start with, let’s make sure all of the lines in this
file are commented out by adding a “#” before each line.

We will keep these examples in the file only as they were to help us with
configuration.

If we want to implement more complex scenarios in the future these can be reused.
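Commenting out many lines by hand is tedious; a sed one-liner can do it in one pass. A sketch on a fabricated sample file (run this against a copy of /etc/ansible/hosts, never the live file, until you have verified the result):

```shell
# Comment out every line that is not already a comment and is not blank.
printf 'green.example.com\n# already commented\n\n10.0.0.5\n' > hosts.sample
sed -i '/^[[:space:]]*$/!{/^#/!s/^/# /}' hosts.sample
cat hosts.sample
```

Blank lines and existing comments pass through unchanged; every active host line gains a leading "# ".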

After making sure all of these lines are commented,
we can start adding our hosts in the hosts file.
To do our lab exercise,
we now need to identify our local hosts.
You can take your laptop or desktop IP as one host.
As another host, consider your Ubuntu VM, where Ansible is currently configured.
In my systems:
To identify my ubuntu host1:
====== ifconfig =====>

vskumar@ubuntu:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:06:95:ca:2d
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

ens33 Link encap:Ethernet HWaddr 00:0c:29:f8:40:61
inet addr:192.168.116.129 Bcast:192.168.116.255 Mask:255.255.255.0
inet6 addr: fe80::2fed:4aa:a6:34ad/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3621 errors:0 dropped:0 overruns:0 frame:0
TX packets:1342 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5111534 (5.1 MB) TX bytes:112090 (112.0 KB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:530 errors:0 dropped:0 overruns:0 frame:0
TX packets:530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:47656 (47.6 KB) TX bytes:47656 (47.6 KB)

vskumar@ubuntu:~$
=======================>
I consider my base Ubuntu VM as '192.168.116.129'.
Hence my host1=192.168.116.129 from ens33
You can also check your VM IP.

Now, let me check my local host [laptop] ip:

====== IPCONFIG info from Laptop CMD =====>
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::197c:6a85:f86:a3e4%20
IPv4 Address. . . . . . . . . . . : 192.168.137.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
======================>
Let me check the ip connection from my Ubuntu VM.
=== Testing laptop ip from VM ====>
vskumar@ubuntu:~$ ping 192.168.137.1
PING 192.168.137.1 (192.168.137.1) 56(84) bytes of data.
64 bytes from 192.168.137.1: icmp_seq=1 ttl=128 time=3.89 ms
64 bytes from 192.168.137.1: icmp_seq=2 ttl=128 time=1.15 ms
64 bytes from 192.168.137.1: icmp_seq=3 ttl=128 time=1.19 ms
64 bytes from 192.168.137.1: icmp_seq=4 ttl=128 time=1.38 ms
64 bytes from 192.168.137.1: icmp_seq=5 ttl=128 time=1.15 ms
64 bytes from 192.168.137.1: icmp_seq=6 ttl=128 time=1.26 ms
64 bytes from 192.168.137.1: icmp_seq=7 ttl=128 time=1.13 ms
64 bytes from 192.168.137.1: icmp_seq=8 ttl=128 time=1.13 ms
64 bytes from 192.168.137.1: icmp_seq=9 ttl=128 time=1.39 ms
64 bytes from 192.168.137.1: icmp_seq=10 ttl=128 time=1.29 ms
64 bytes from 192.168.137.1: icmp_seq=11 ttl=128 time=1.26 ms
64 bytes from 192.168.137.1: icmp_seq=12 ttl=128 time=1.14 ms
64 bytes from 192.168.137.1: icmp_seq=13 ttl=128 time=1.22 ms
64 bytes from 192.168.137.1: icmp_seq=14 ttl=128 time=1.37 ms
64 bytes from 192.168.137.1: icmp_seq=15 ttl=128 time=1.14 ms
^C
— 192.168.137.1 ping statistics —
15 packets transmitted, 15 received, 0% packet loss, time 14032ms
rtt min/avg/max/mdev = 1.134/1.411/3.899/0.672 ms
vskumar@ubuntu:~$
==========>
Now, I consider my host2 = 192.168.137.1

Let me ping my VM from Laptop CMD:
==== Pinging Ubuntu IP from CMD prompt =====>
C:\Users\Toshiba>ping 192.168.116.129

Pinging 192.168.116.129 with 32 bytes of data:
Reply from 192.168.116.129: bytes=32 time=2ms TTL=64
Reply from 192.168.116.129: bytes=32 time<1ms TTL=64
Reply from 192.168.116.129: bytes=32 time<1ms TTL=64
Reply from 192.168.116.129: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.116.129:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 2ms, Average = 0ms

C:\Users\Toshiba>
====== Replied VM ====>

It means both hosts are working fine.
Now, we should add the below block to our hosts file to connect them:

[servers]
host1 ansible_ssh_host=192.168.116.129
host2 ansible_ssh_host=192.168.137.1
We can consider two groups from these two hosts.
Let me check the files as below:
==== List the current files ====>

vskumar@ubuntu:/etc/ansible$ ls -l
total 28
-rw-r–r– 1 root root 19155 Jan 31 15:21 ansible.cfg
-rw-r–r– 1 root root 1016 Jan 31 15:21 hosts
drwxr-xr-x 2 root root 4096 Jan 31 19:46 roles
vskumar@ubuntu:/etc/ansible$
===============================>

Now, let me update the hosts file.
=== After adding the content of hosts file ===>
vskumar@ubuntu:/etc/ansible$ sudo vim hosts
[sudo] password for vskumar:
Sorry, try again.
[sudo] password for vskumar:
vskumar@ubuntu:/etc/ansible$
vskumar@ubuntu:/etc/ansible$ tail -10 hosts

# Here’s another example of host ranges, this time there are no
# leading 0s:

## db-[99:101]-node.example.com

[servers]
host1 ansible_ssh_host=192.168.116.129
host2 ansible_ssh_host=192.168.137.1
vskumar@ubuntu:/etc/ansible$
== You can see the last 3 lines of the hosts file ===>

We also need to add the group name as below in the hosts file.

[group_name]
alias ansible_ssh_host=your_server_ip

Here, the group_name is an organizational tag that lets you refer to all the
servers listed under it with one word.
The alias is just a name to refer to that server.
Now let me add such lines in the hosts file, above the [servers] block, as below.
[ansible_test1]
alias ansible_ssh_host=192.168.116.129
===== Hosts updated – latest ===>
vskumar@ubuntu:/etc/ansible$ sudo vim hosts
vskumar@ubuntu:/etc/ansible$
vskumar@ubuntu:/etc/ansible$ tail -10 hosts
# leading 0s:

## db-[99:101]-node.example.com
[ansible_test1]
alias ansible_ssh_host=192.168.116.129

[servers]
host1 ansible_ssh_host=192.168.116.129
host2 ansible_ssh_host=192.168.137.1

vskumar@ubuntu:/etc/ansible$
==============================>

Now let me go to the ansible dir:
======>
vskumar@ubuntu:~$ cd /etc/ansible
vskumar@ubuntu:/etc/ansible$
======>

In our Ansible test scenario,
assume we have two servers we are going to control with Ansible.
These servers should be accessible from the Ansible server by typing:
$ssh root@your_server_ip

Means as:
$ssh root@192.168.116.129

==============>
vskumar@ubuntu:/etc/ansible$ ssh root@192.168.116.129
ssh: connect to host 192.168.116.129 port 22: Connection refused
vskumar@ubuntu:/etc/ansible$
==============>
TROUBLE SHOOT THE HOSTS:
=== Trouble shoot ===>
vskumar@ubuntu:/etc/ansible$ ansible -m ping all
host1 | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: ssh: connect to host 192.168.116.129 port 22: Connection refused\r\n”,
“unreachable”: true
}
alias | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: ssh: connect to host 192.168.116.129 port 22: Connection refused\r\n”,
“unreachable”: true
}
host2 | UNREACHABLE! => {
“changed”: false,
“msg”: “Failed to connect to the host via ssh: \r\n ****USAGE WARNING****\r\n\r\nThis is a private computer system. This computer system, including all\r\nrelated equipment, networks, and network devices (specifically including\r\nInternet access) are provided only for authorized use. This computer system\r\nmay be monitored for all lawful purposes, including to ensure that its use\r\nis authorized, for management of the system, to facilitate protection against\r\nunauthorized access, and to verify security procedures, survivability, and\r\noperational security. Monitoring includes active attacks by authorized entities\r\nto test or verify the security of this system. During monitoring, information\r\nmay be examined, recorded, copied and used for authorized purposes. All\r\ninformation, including personal information, placed or sent over this system\r\nmay be monitored.\r\n\r\nUse of this computer system, authorized or unauthorized, constitutes consent\r\nto monitoring of this system. Unauthorized use may subject you to criminal\r\nprosecution. Evidence of unauthorized use collected during monitoring may be\r\nused for administrative, criminal, or other adverse action. Use of this system\r\nconstitutes consent to monitoring for these purposes.\r\n\r\n\r\nPermission denied (publickey,password,keyboard-interactive).\r\n”,
“unreachable”: true
}
vskumar@ubuntu:/etc/ansible$
===============>
The reason for the above error is:
with our current settings, when we try to connect to any of these hosts with Ansible,
the command fails.
This is because your SSH key is set up for the root user on the remote systems,
while Ansible will by default try to connect as your current user.
A connection attempt therefore gets the above error.

To rectify it;
We can create a file that tells all of the servers in the “servers” group to connect
using the root user.

To do this, we will create a directory in the Ansible configuration structure called group_vars.
Let us use the below dir commands:
$sudo mkdir /etc/ansible/group_vars

========================>
vskumar@ubuntu:/etc/ansible$ sudo mkdir /etc/ansible/group_vars
vskumar@ubuntu:/etc/ansible$ ls -l
total 32
-rw-r--r-- 1 root root 19155 Jan 31 15:21 ansible.cfg
drwxr-xr-x 2 root root 4096 Mar 6 08:52 group_vars
-rw-r--r-- 1 root root 1158 Mar 6 08:20 hosts
drwxr-xr-x 2 root root 4096 Jan 31 19:46 roles
vskumar@ubuntu:/etc/ansible$
=================>
Within this folder, we can create YAML-formatted files for each group we want to configure.
By using below command:
$sudo vim /etc/ansible/group_vars/servers
We can put our configuration in here. YAML files start with "---", so make sure you don't forget that part.

Below code:

---
ansible_ssh_user: root

==========>
vskumar@ubuntu:/etc/ansible$ sudo vim /etc/ansible/group_vars/servers
vskumar@ubuntu:/etc/ansible$ cat /etc/ansible/group_vars/servers


ansible_ssh_user: root
vskumar@ubuntu:/etc/ansible$
=======================>

NOTE:
If you want to specify configuration details for every server, regardless of group association, you can put those details in a file at: 

/etc/ansible/group_vars/all.

Individual hosts can be configured by creating files under a directory at: /etc/ansible/host_vars.
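For instance, a per-host variables file might look like the sketch below. The file name matches the host alias from the inventory; the port value is illustrative, and a local directory is used here instead of /etc/ansible so the sketch runs without sudo:

```shell
# Sketch only: a per-host variables file for host1.
# In a real setup this would be /etc/ansible/host_vars/host1 (created with sudo).
mkdir -p host_vars
cat > host_vars/host1 <<'EOF'
---
ansible_ssh_user: root
ansible_ssh_port: 22
EOF
cat host_vars/host1
```

Variables set here apply only to host1 and override anything set for its groups.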

I hope this helped you to configure your Ansible.

Please leave your positive comment for others also to follow.

You can see next blog on ssh setup and usage from the below url:

https://vskumar.blog/2018/05/26/27-devopsworking-with-ssh-for-ansible-usage/

I have made a video for Ansible installation using Ubuntu 18.04 VM:

https://youtu.be/AGHJ5hL6Wv4

18. DevOps: How to create a MySQL docker container ?

Docker-logo

MySql DB docker container:

In this blog I would like to demonstrate container creation for a MySQL DB.

The following docker command can be used to create the mysqldb container:
I have made this a group of command options to be executed from the Ubuntu CLI.
=== Docker command for MySQL DB =====>
sudo docker container run \
--detach \
--name mysqldb \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
mysql:latest
=== To create mysqldb container ====>

=== Screen output ====>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo docker container run \
> --detach \
> --name mysqldb \
> -e MYSQL_ROOT_PASSWORD=my-secret-pw \
> mysql:latest
dcfc16b7fba9075c59035e29a0efed91b7872e5f5cf72c8656afade824651041
vskumar@ubuntu:~$
==== Created mysql =====>

Please note this time, I have not copied the complete display contents.

=== listed ====>
vskumar@ubuntu:~$ sudo docker image ls mysql
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql latest 5d4d51c57ea8 5 weeks ago 374MB
vskumar@ubuntu:~$

vskumar@ubuntu:~$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dcfc16b7fba9 mysql:latest “docker-entrypoint.s…” 3 minutes ago Up 3 minutes 3306/tcp mysqldb
a5f1ce30c02d swarm “/swarm manage” 11 days ago Restarting (1) 28 seconds ago gracious_bhabha
vskumar@ubuntu:~$
=================>

So the mysql container is now also running in the background.

Let us understand the options used in the command:

With the '--detach' option, the container runs in the background.
I have given the container the name 'mysqldb' with the '--name' option.
MySQL needs a root password;
it has been passed with the '-e' (environment variable) option.
Since the mysql image is not available in my current images list,
docker pulls it from Docker Hub.

You can try to use the same container for your db usage.

16. DevOps: Working with Git on Ubuntu 16.04/18.04 VMs

Git-logo

All the below commands were copied from the Ubuntu 16.04 VM.

You can see the below video on how to uninstall/install git from Ubuntu 18.04.

https://www.facebook.com/328906801086961/videos/720291215170115/%MCEPASTEBIN%

You can use all the below commands on the 18.04 VM also.

In this GIT exercise, I would like to present the below lab sessions for git in an Ubuntu 16.04 VM for the people who attended my sessions so far.
1. How to install git in ubuntu [linux] ?:
2. How To Set Up Git ?:
3. How to check the config file content? :
4. How to clone a project from an url ?:
5. How to Create a test dir or folder for git project?:
6. How to initiate git for the current folder or dir in linux ?:
7. How to Create local files and check the status in the current git folder?:
8. How to commit the files into a local repository and check their status ?:
9. How to commit files into local repo with a message ?:
10. How to check the history of the local git repository ?:
11. How to identify the difference of two commit ids ?:
12. How to check and operate the staged files in local repository ?:
13. What are the ultimate format of the git log ?:
14. How to setup aliases for different git commands?:
15. How to use tags and operate for different versions in a repository?:
16. How to revert back the changes to older version ?:
17. How to cancel the committed changes? :
18. How to reset the reverted changes through commit from the branch? :

19. Working with git directory:

     20. Working with git branches and master :

     21. How to Merge latest objects into single branch ?:

1. How to install git in ubuntu [linux] ?:

$sudo apt-get update

$sudo apt-get install git

2. How To Set Up Git ?:
$git config --global user.name "Your Name"

$git config --global user.email "youremail@domain.com"

Ex:
git config --global user.kumar2018 "Vskumar"
git config --global user.email "vskumar35@gmail.com"

==== Screen output for the above commands==>
vskumar@ubuntu:~$ git config --global user.kumar2018 "Vskumar"
vskumar@ubuntu:~$ git config --global user.email "vskumar35@gmail.com"
vskumar@ubuntu:~$ git config --list
user.kumar2018=Vskumar
user.email=vskumar35@gmail.com
vskumar@ubuntu:~$
===========>
3. How to check the config file content? :

$cat ~/.gitconfig
==== Output of config file ===>
vskumar@ubuntu:~$ cat ~/.gitconfig
[user]
kumar2018 = Vskumar
email = vskumar35@gmail.com
vskumar@ubuntu:~$
=============>

4. How to clone a project from an url ?: Let us clone one project as below:
$git clone https://github.com/vskumar2017/VSKTestproject1

=== Screen output ==>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo git clone https://github.com/vskumar2017/VSKTestproject1
Cloning into ‘VSKTestproject1’…
remote: Counting objects: 57, done.
remote: Total 57 (delta 0), reused 0 (delta 0), pack-reused 57
Unpacking objects: 100% (57/57), done.
Checking connectivity… done.
vskumar@ubuntu:~$
vskumar@ubuntu:~$ ls
data-volume1 examples.desktop Pictures VSKTestproject1
Desktop flask-test Public
Documents jdk-9.0.4_linux-x64_bin.tar.gz Templates
Downloads Music Videos
vskumar@ubuntu:~$
=====================>
5. How to Create a test dir or folder for git project?:

=============>
vskumar@ubuntu:~$ mkdir test-git
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ ls
data-volume1 examples.desktop Pictures Videos
Desktop flask-test Public VSKTestproject1
Documents jdk-9.0.4_linux-x64_bin.tar.gz Templates
Downloads Music test-git
vskumar@ubuntu:~$
vskumar@ubuntu:~$
vskumar@ubuntu:~$ cd test-git
vskumar@ubuntu:~/test-git$ ls
vskumar@ubuntu:~/test-git$
=======>
6. How to initiate git for the current folder or dir in linux ?:
== Initialize the current dir for git init===>
vskumar@ubuntu:~/test-git$ git init
Initialized empty Git repository in /home/vskumar/test-git/.git/
vskumar@ubuntu:~/test-git$
========>

7. How to Create local files and check the status in the current git folder?:
== Create a text file========>
vskumar@ubuntu:~/test-git$ echo "Testing line1 for git .." >> test1.txt
vskumar@ubuntu:~/test-git$ cat test1.txt
Testing line1 for git ..
vskumar@ubuntu:~/test-git$ ls -l
total 4
-rw-rw-r-- 1 vskumar vskumar 25 Feb 24 04:03 test1.txt

vskumar@ubuntu:~/test-git$ git status
On branch master

Initial commit

Untracked files:
(use "git add <file>..." to include in what will be committed)

test1.txt

nothing added to commit but untracked files present (use "git add" to track)
vskumar@ubuntu:~/test-git$
===== Add a new file====>
vskumar@ubuntu:~/test-git$ git add test1.txt
vskumar@ubuntu:~/test-git$ git status
On branch master
Initial commit

Changes to be committed:
(use "git rm --cached <file>..." to unstage)

new file: test1.txt

vskumar@ubuntu:~/test-git$

===========>

8. How to commit the files into a local repository and check their status ?:

Now, let us do a simple commit of the file to the local repo.

$git commit -m “First Commit”
=== Commit output ==>
vskumar@ubuntu:~/test-git$ git commit -m “First Commit”
[master (root-commit) 56ccc1e] First Commit
1 file changed, 1 insertion(+)
create mode 100644 test1.txt
vskumar@ubuntu:~/test-git$
======================>
We can check the current status:

=== status after commit ===>
vskumar@ubuntu:~/test-git$ git status
On branch master
nothing to commit, working directory clean
vskumar@ubuntu:~/test-git$
==========>

== Added a new message ===>
vskumar@ubuntu:~/test-git$ echo ‘Testing line2 for git—->’ >> test1.txt
vskumar@ubuntu:~/test-git$ cat test1.txt
Testing line1 for git ..
Testing line2 for git—->
vskumar@ubuntu:~/test-git$
===============>

=== Current status ===>
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes not staged for commit:
(use “git add <file>…” to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: test1.txt

no changes added to commit (use “git add” and/or “git commit -a”)
vskumar@ubuntu:~/test-git$
=====================>
Now, add these two files:
git add test1.txt
git add test2.txt

=== Add and check status for two files ==>
vskumar@ubuntu:~/test-git$ git add test1.txt
vskumar@ubuntu:~/test-git$ git add test2.txt
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes to be committed:
(use “git reset HEAD <file>…” to unstage)

modified: test1.txt
new file: test2.txt

vskumar@ubuntu:~/test-git$
====================================>
9. How to commit files into local repo with a message ?:

Commit these two files:

git commit -m “Committed:Changes for test1.txt and test2.txt”

==== Committed changes and status ===>
vskumar@ubuntu:~/test-git$ git commit -m “Committed:Changes for test1.txt and test2.txt”
[master 2a7192d] Committed:Changes for test1.txt and test2.txt
2 files changed, 2 insertions(+)
create mode 100644 test2.txt
vskumar@ubuntu:~/test-git$ git status
On branch master
nothing to commit, working directory clean
vskumar@ubuntu:~/test-git$
======================================>

Now, let us test the 'git add .' command by having 2 or more files.

=== Updated two files ==>
vskumar@ubuntu:~/test-git$ cat test1.txt
Testing line1 for git ..
Testing line2 for git—->
Testing test1.tx for add . function
vskumar@ubuntu:~/test-git$ cat test2.txt
File Test2: Testing for Git commit –>
Testing test2.tx for add . function
vskumar@ubuntu:~/test-git$
=============>
Let us check the status:
==== Status ==>

vskumar@ubuntu:~/test-git$ git status
On branch master
Changes not staged for commit:
(use “git add <file>…” to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: test1.txt
modified: test2.txt

no changes added to commit (use “git add” and/or “git commit -a”)
vskumar@ubuntu:~/test-git$
===============>

Now to add these two files together we need to use 'git add .'
== Added all files ===>
vskumar@ubuntu:~/test-git$
vskumar@ubuntu:~/test-git$ git add .
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes to be committed:
(use “git reset HEAD <file>…” to unstage)

modified: test1.txt
modified: test2.txt

vskumar@ubuntu:~/test-git$
============>
Now let us commit the changes of one file at a time.

=== test1.txt commitment===>
vskumar@ubuntu:~/test-git$ git commit test1.txt -m ‘Committed test1.txt 3rd change’
[master 6bfd9b0] Committed test1.txt 3rd change
1 file changed, 1 insertion(+)
vskumar@ubuntu:~/test-git$ ^C
============================>
Now, let us check the status:

=======>
vskumar@ubuntu:~/test-git$
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes to be committed:
(use “git reset HEAD <file>…” to unstage)

modified: test2.txt

vskumar@ubuntu:~/test-git$
========>
In git we can check the history by using ‘git log’ command.

=== History ====>
vskumar@ubuntu:~/test-git$ git log
commit 69282e8d8c07e7cbc68e93b16df1d943d3b518d5
Author: Vsk <vskumar35@gmail.com>
Date: Sat Feb 24 06:49:27 2018 -0800

Committed test2.txt 3rd change

commit 6bfd9b045c352f13c36d8f82f12567058a8bb468
Author: Vsk <vskumar35@gmail.com>
Date: Sat Feb 24 06:46:24 2018 -0800

Committed test1.txt 3rd change

commit 2a7192dcdd1a123b8164f0d48dd0631645cf0630
Author: Vsk <vskumar35@gmail.com>
Date: Sat Feb 24 06:32:03 2018 -0800

Committed:Changes for test1.txt and test2.txt

commit 56ccc1ec9ae7db9f97e3a08e5488a64b4f130f1b
Author: Vsk <vskumar35@gmail.com>
Date: Sat Feb 24 06:08:42 2018 -0800

First Commit
vskumar@ubuntu:~/test-git$
=====================>
10. How to check the history of the local git repository ?:

In git we can check the history by using the 'git log' command. It gives the entire committed history with the relevant comments.

If we use 'git log --pretty=oneline' it gives only the checksums of the different commits along with their commit messages.

=== Output for pretty ===>
vskumar@ubuntu:~/test-git$ git log --pretty=oneline
69282e8d8c07e7cbc68e93b16df1d943d3b518d5 Committed test2.txt 3rd change
6bfd9b045c352f13c36d8f82f12567058a8bb468 Committed test1.txt 3rd change
2a7192dcdd1a123b8164f0d48dd0631645cf0630 Committed:Changes for test1.txt and test2.txt
56ccc1ec9ae7db9f97e3a08e5488a64b4f130f1b First Commit
vskumar@ubuntu:~/test-git$
==============>
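As a side note, 'git log --oneline' is a built-in shorthand that gives a similar abbreviated view. A minimal sketch in a throwaway repo (repo and file names are illustrative):

```shell
# Throwaway repo to demonstrate 'git log --oneline'
git init -q demo-oneline && cd demo-oneline
git config user.email you@example.com
git config user.name You
echo a > f.txt && git add f.txt && git commit -qm 'First Commit'
git log --oneline   # one line per commit: abbreviated hash + message
```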

We can also filter the commits by author using
'git log --pretty=oneline --author=<your name>'

git log --pretty=oneline --author=kumar

============>
vskumar@ubuntu:~/test-git$ git log --pretty=oneline --author=kumar
69282e8d8c07e7cbc68e93b16df1d943d3b518d5 Committed test2.txt 3rd change
6bfd9b045c352f13c36d8f82f12567058a8bb468 Committed test1.txt 3rd change
2a7192dcdd1a123b8164f0d48dd0631645cf0630 Committed:Changes for test1.txt and test2.txt
56ccc1ec9ae7db9f97e3a08e5488a64b4f130f1b First Commit
============>
Let me give wrong user name to test:
=== Wrong user name ====>
vskumar@ubuntu:~/test-git$ git log --pretty=oneline --author=kumar1
vskumar@ubuntu:~/test-git$ git log --pretty=oneline --author=kumar202
vskumar@ubuntu:~/test-git$
= No files committed for the above users ==>

We can see more detail about a particular commit through the show command.
The 'git log' command yields a sequential history of the individual commits within the repository.
From it you need to collect the commit id.

=== git show =====>
vskumar@ubuntu:~/test-git$ git show 56ccc1ec9ae7db9f97e3a08e5488a64b4f130f1b
commit 56ccc1ec9ae7db9f97e3a08e5488a64b4f130f1b
Author: Vsk <vskumar35@gmail.com>
Date: Sat Feb 24 06:08:42 2018 -0800

First Commit

diff --git a/test1.txt b/test1.txt
new file mode 100644
index 0000000..73b0484
--- /dev/null
+++ b/test1.txt
@@ -0,0 +1 @@
+Testing line1 for git ..
vskumar@ubuntu:~/test-git$
=============================>
11. How to identify the difference of two commit ids ?:
To compare two commits, take both full commit IDs and run 'git diff':

git diff 2a7192dcdd1a123b8164f0d48dd0631645cf0630 6bfd9b045c352f13c36d8f82f12567058a8bb468

== Output of two commit diffs ==>
vskumar@ubuntu:~/test-git$ git diff 2a7192dcdd1a123b8164f0d48dd0631645cf0630 6bfd9b045c352f13c36d8f82f12567058a8bb468
diff --git a/test1.txt b/test1.txt
index 931bb8b..b9132c1 100644
--- a/test1.txt
+++ b/test1.txt
@@ -1,2 +1,3 @@
Testing line1 for git ..
Testing line2 for git—->
+Testing test1.tx for add . function
vskumar@ubuntu:~/test-git$
==============================>
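Instead of copying full commit IDs, relative names such as HEAD~1 also work with 'git diff'. A small sketch in a throwaway repo (names are illustrative):

```shell
# Throwaway repo: compare the last two commits without full hashes
git init -q demo-diff && cd demo-diff
git config user.email you@example.com
git config user.name You
echo one > f.txt && git add f.txt && git commit -qm c1
echo two >> f.txt && git add f.txt && git commit -qm c2
git diff HEAD~1 HEAD   # same output as diffing the two full commit IDs
```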
12. How to check and operate the staged files in local repository ?:

We can use the below command:
git ls-files --stage

=== Stage of current files ===>
vskumar@ubuntu:~/test-git$
vskumar@ubuntu:~/test-git$ git ls-files --stage
100644 b9132c1dd4ac08fa9c1e3dea5d7100e33557ad20 0 test1.txt
100644 0866cfd2c7ac9bf17f0a0590551a3580359e7250 0 test2.txt
vskumar@ubuntu:~/test-git$
========================>
=== Rm and later files stage ==>
vskumar@ubuntu:~/test-git$ git rm --cached test1.txt
rm 'test1.txt'
vskumar@ubuntu:~/test-git$ git ls-files --stage
100644 0866cfd2c7ac9bf17f0a0590551a3580359e7250 0 test2.txt
vskumar@ubuntu:~/test-git$
=====>
You can see the removed file is still in the dir, now as an untracked file:
== Status of removed file ==>
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes to be committed:
(use “git reset HEAD <file>…” to unstage)
deleted: test1.txt
Untracked files:
(use “git add <file>…” to include in what will be committed)

test1.txt
vskumar@ubuntu:~/test-git$
=== It needs to be added and committed ====>
vskumar@ubuntu:~/test-git$ git add .
vskumar@ubuntu:~/test-git$ git status
On branch master
nothing to commit, working directory clean
vskumar@ubuntu:~/test-git$
======= It needed to be added only ===>

====== list the stage files =>
vskumar@ubuntu:~/test-git$
vskumar@ubuntu:~/test-git$ git ls-files --stage
100644 b9132c1dd4ac08fa9c1e3dea5d7100e33557ad20 0 test1.txt
100644 0866cfd2c7ac9bf17f0a0590551a3580359e7250 0 test2.txt
vskumar@ubuntu:~/test-git$
=========================>
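To summarize the behaviour above: 'git rm --cached' only untracks a file from the index; the file itself stays on disk. A minimal sketch in a throwaway repo (names are illustrative):

```shell
# Throwaway repo: 'git rm --cached' untracks but does not delete
git init -q demo-cached && cd demo-cached
git config user.email you@example.com
git config user.name You
echo data > keep.txt && git add keep.txt && git commit -qm add
git rm --cached keep.txt   # remove from the index only
ls keep.txt                # file is still in the working directory
git status --short         # staged deletion plus '??' untracked entry
```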
13. What is the ultimate format of the git log ?:
We can use the ultimate format of the log as: git log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short

===== Screen output ==========>
vskumar@ubuntu:~$ cd test-git
vskumar@ubuntu:~/test-git$ pwd
/home/vskumar/test-git
vskumar@ubuntu:~/test-git$ git log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short
* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master) [Vsk]
* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]
* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]
* 56ccc1e 2018-02-24 | First Commit [Vsk]
vskumar@ubuntu:~/test-git$

==============================>

14. How to setup aliases for different git commands?:

If there is a common and complex Git command you type frequently, consider setting up a simple Git alias for it.

We can use the below common aliases:

git config --global alias.ci commit

git config --global alias.st status

git config --global alias.br branch

git config --global alias.hist "log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short"

Once you set up the above aliases, you can use them instead of the full commands. You do need to remember them well.

For example, instead of the command:

'log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short'

you only need to type: git hist.

Let us try one command for branch:

===== Screen output ==========>

vskumar@ubuntu:~$ cd test-git

vskumar@ubuntu:~/test-git$ pwd
/home/vskumar/test-git

vskumar@ubuntu:~/test-git$ git log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$
==========================>

=== History =======>

vskumar@ubuntu:~/test-git$ git log --pretty=format:"%h %ad | %s%d [%an]" --graph --date=short

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$ git config --global alias.hist "log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short"

==== With alias hist ========>

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

==========================>

15. How to use tags and operate for different versions in a repository?:

Tags for previous versions in git:

We can tag committed versions in a local repo to reuse them at later stages. Let's tag the current version with the name v1, and then check it out:

git tag v1

=== You can see the tagging process for the latest commit ===>

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$ git tag v1

vskumar@ubuntu:~/test-git$ git checkout v1

Note: checking out 'v1'.

You are in 'detached HEAD' state. You can look around, make experimental changes
and commit them, and you can discard any commits you make in this state without
impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may do so
(now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at 69282e8... Committed test2.txt 3rd change

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD, tag: v1, master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$ git status

HEAD detached at v1
nothing to commit, working directory clean
vskumar@ubuntu:~/test-git$

================================>

Now, let us add a line to test1.txt and commit it.

===== Screen output ===>

vskumar@ubuntu:~/test-git$ ls -l
total 8
-rw-rw-r-- 1 vskumar vskumar 88 Feb 24 06:37 test1.txt
-rw-rw-r-- 1 vskumar vskumar 75 Feb 24 06:37 test2.txt

vskumar@ubuntu:~/test-git$ git status

HEAD detached at v1
nothing to commit, working directory clean
vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ echo 'Testing test1.txt for tagging v2' >> test1.txt
vskumar@ubuntu:~/test-git$ cat test1.txt
Testing line1 for git ..
Testing line2 for git---->

Testing test1.tx for add . function

Testing test1.txt for tagging v2

vskumar@ubuntu:~/test-git$ git status

HEAD detached at v1
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: test1.txt

no changes added to commit (use "git add" and/or "git commit -a")

vskumar@ubuntu:~/test-git$

========================>

Now let me add and commit

=== Add and commit for tagging 2nd time ===>

vskumar@ubuntu:~/test-git$ git add .

vskumar@ubuntu:~/test-git$ git status

HEAD detached at v1
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)

modified: test1.txt

vskumar@ubuntu:~/test-git$ git commit -m ‘Added for tagging 2nd time’

[detached HEAD 0bec7c0] Added for tagging 2nd time
1 file changed, 1 insertion(+)
vskumar@ubuntu:~/test-git$ git status
HEAD detached from v1
nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

===============>

Now, let us tag 2nd time :

== You can see two versions are tagged ===>

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git hist

* 0bec7c0 2018-02-24 | Added for tagging 2nd time (HEAD) [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1, master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$ git tag v2

vskumar@ubuntu:~/test-git$ git hist

* 0bec7c0 2018-02-24 | Added for tagging 2nd time (HEAD, tag: v2) [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1, master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

===========================>

Now let us check the latest version of test1.txt content.

=== Latest version test1.txt ===>

vskumar@ubuntu:~/test-git$ ls -l
total 8
-rw-rw-r-- 1 vskumar vskumar 121 Feb 24 20:58 test1.txt
-rw-rw-r-- 1 vskumar vskumar 75 Feb 24 06:37 test2.txt

vskumar@ubuntu:~/test-git$ cat test1.txt

Testing line1 for git ..
Testing line2 for git---->
Testing test1.tx for add . function
Testing test1.txt for tagging v2

vskumar@ubuntu:~/test-git$

============>

Now, let us checkout the older version to test the content of test1.txt. using : git checkout v1

====== Chekout v1 to track changes ====>

vskumar@ubuntu:~/test-git$ git checkout v1

Previous HEAD position was 0bec7c0... Added for tagging 2nd time
HEAD is now at 69282e8... Committed test2.txt 3rd change

vskumar@ubuntu:~/test-git$ cat test1.txt

Testing line1 for git ..
Testing line2 for git---->

Testing test1.tx for add . function

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git status

HEAD detached at v1
nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

=== We can see the older version content only===>

===== We can see the hist and make master the current checkout ===>

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD, tag: v1, master) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git checkout master

Switched to branch 'master'

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master, tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$ git status
On branch master
nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

==============================>

16. How to revert back the changes to older version ?:

Now, let us understand how to revert the changes in a local file.

We can use the reset command to bring back the previous version. If you have a modified object in the working dir and want to revert to the older version, you can follow the below steps.

The reset command resets the staging area to HEAD. This clears from the staging area the changes we have just staged. The reset command (by default) does not change the working directory; hence, the working directory still contains the unwanted changes.

We can use the checkout command from the previous step to remove unwanted changes from the working directory.

===== To revert: reset the modified file and checkout that file only =====>
vskumar@ubuntu:~/test-git$ git status
On branch master
nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$ echo 'Testing for reset Head command' >> test1.txt
vskumar@ubuntu:~/test-git$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: test1.txt

no changes added to commit (use "git add" and/or "git commit -a")

vskumar@ubuntu:~/test-git$ git reset HEAD test1.txt
Unstaged changes after reset:
M test1.txt

vskumar@ubuntu:~/test-git$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: test1.txt

no changes added to commit (use "git add" and/or "git commit -a")

vskumar@ubuntu:~/test-git$

=== Finally, You can see the older version contents only ====>
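The two-step flow above (reset to unstage, then checkout to discard) can be sketched end-to-end in a throwaway repo (names are illustrative):

```shell
# Throwaway repo: unstage and then discard an unwanted change
git init -q demo-reset && cd demo-reset
git config user.email you@example.com
git config user.name You
echo 'line1' > f.txt && git add f.txt && git commit -qm first
echo 'unwanted' >> f.txt
git add f.txt            # stage the unwanted change
git reset HEAD f.txt     # unstage it; the working file is still modified
git checkout -- f.txt    # discard it from the working directory
cat f.txt                # back to the committed content
```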

17. How to cancel the committed changes? :

We have seen how to cancel changes to modified files.

Now let us check how to revert committed changes in the local git repo.

== Let us see the current hist ===>

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD -> master, tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=================>
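Note that 'git hist' used throughout these listings is not a built-in Git command; it is an alias set up earlier in this series. A definition that reproduces the format shown (short hash, date, subject, refs, author) would be the following; treat the exact string as an assumption if your alias differs:

```shell
# Define the 'hist' alias assumed throughout this lab (set once, globally):
git config --global alias.hist \
  "log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short"
git config --global --get alias.hist   # confirm the alias is stored
```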

Now let us revert the commit tagged v1.

We need to use the command: git revert HEAD. Running this command opens an editor showing the commit details.

We can save it using 'wq!' as in vi/vim.

===== Output =======>

vskumar@ubuntu:~/test-git$ git revert HEAD

[master fdc40ac] Revert “Committed test2.txt 3rd change” 1 file changed, 1 deletion(-)

vskumar@ubuntu:~/test-git$

===================>

Now, let us check hist

=== git hist ====>

vskumar@ubuntu:~/test-git$ git hist

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” (HEAD -> master) [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=================>

============ Exercise ====================>

If you want, you can do an exercise: check out the committed version, add a line of text to the test1 file, and commit it. Then repeat the same lab practice.

============ You can revert back also =====>

18. How to reset the reverted changes through commit from the branch? :

Check the previous screen display for the usage of revert command.

Now, if we have decided to reset the changes, we can use the command: 'git reset --hard v1'

=== Screen output for resetting reverted commit=====>

vskumar@ubuntu:~/test-git$ git reset --hard v1

HEAD is now at 69282e8 Committed test2.txt 3rd change

vskumar@ubuntu:~/test-git$ git hist

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD, tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=====================>

We can check the committed history using the below command: git hist --all

==== Screen output git hist --all ===>

vskumar@ubuntu:~/test-git$ git hist --all

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” (master) [Vsk] |

* 0bec7c0 2018-02-24 | Added for tagging 2nd time (tag: v2) [Vsk] |

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD, tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=====================>

You can also observe the reverted information.

We can also drop a tag as below: git tag -d v2

===== Output for tag removal ====>

vskumar@ubuntu:~/test-git$ git tag -d v2

Deleted tag ‘v2’ (was 0bec7c0)

vskumar@ubuntu:~/test-git$ git hist --all

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” (master) [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (HEAD, tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

==========So, now v2 tag is removed ========>

You can also see the config file as below:

=== Config file ====>

vskumar@ubuntu:~/test-git$ cat .git/config

[core]
	repositoryformatversion = 0
	filemode = true
	bare = false
	logallrefupdates = true
vskumar@ubuntu:~/test-git$

=====This configuration file is created for each individual project.===============>

19. Working with git directory:

 

Every Git project will have directories and files.

We can see the git dir items:

==================== Git dir items ===>

vskumar@ubuntu:~/test-git$ ls -l -C .git

branches        config       HEAD   index  logs     ORIG_HEAD

COMMIT_EDITMSG  description  hooks  info   objects  refs

vskumar@ubuntu:~/test-git$

======Root folder of git project =====>

 

We can explore the objects dir to check the object details,

using: ls -l -C .git/objects

 

==== Output ====>

vskumar@ubuntu:~/test-git$ ls -l -C .git/objects

07  0b  44  4f  68  6b  92  b0  bd  fd    pack

08  2a  47  56  69  73  93  b9  e0  info

vskumar@ubuntu:~/test-git$

================>

 

We can see a lot of folders named with two characters.

Each directory name is the first two hex characters of the SHA-1 hash of an object stored in Git.

 

What is SHA1 ?:

The SHA-1 (Secure Hash Algorithm 1) is a cryptographic hash function.

It takes an input and produces a 160-bit (20-byte) hash value known as a message digest.

Typically it is rendered as a 40-digit hexadecimal number.
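These object names are not arbitrary: for a blob, Git applies SHA-1 to the bytes "blob <size>\0<content>". You can reproduce a blob id by hand with sha1sum (assumed to be installed):

```shell
# Reproduce git's blob hash: sha1("blob <size>\0" + content).
# For the 6-byte content "hello\n" (printf '\0' emits the NUL separator):
printf 'blob 6\0hello\n' | sha1sum
# git computes the same id via: echo 'hello' | git hash-object --stdin
```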

 

Now, let us see it by inquiring our database objects from the above listed items:

Using the below command we can check the files in dir.

ls -C .git/objects/<dir>   -- here <dir> is one of the two-character directory names shown above.

==== Checking one dir ====>

vskumar@ubuntu:~/test-git$ ls -l  -C .git/objects/07

5e722b3161a24fd5adcefb574b5360118abbef

vskumar@ubuntu:~/test-git$ ls -l  -C .git/objects/92

d62ee30d26c444d85b3d81a4e2b8b69e0f093f

vskumar@ubuntu:~/test-git$

==== Each file name holds the remaining 38 hex characters; with the 2-character directory name it forms the full 40-character SHA-1 ====>

 

Note, we have seen the config file from the previous exercises.

Now, we will check the branches and tags with the below commands:

 

ls .git/refs

ls .git/refs/heads

ls .git/refs/tags

cat .git/refs/tags/v1

 

== Checking branches and tags ===>

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ ls .git/refs

heads  tags

vskumar@ubuntu:~/test-git$ ls .git/refs/heads

master

vskumar@ubuntu:~/test-git$ ls .git/refs/tags

v1

vskumar@ubuntu:~/test-git$ cat .git/refs/tags/v1

69282e8d8c07e7cbc68e93b16df1d943d3b518d5

vskumar@ubuntu:~/test-git$

=================================>

 

Let us note: each file corresponds to a tag previously created using the git tag command.

Its content is the hash of the commit the tag is attached to.

We have only one branch, so everything we see under heads here belongs to the master branch.

Now, let us check what the HEAD file contains,

using: cat .git/HEAD

 

==============>

vskumar@ubuntu:~/test-git$ cat .git/HEAD

683ed74fda585e10f38111ebb4c84026d5678290

vskumar@ubuntu:~/test-git$

=============>

Normally HEAD contains a ref such as 'ref: refs/heads/master'; a raw hash like this means HEAD is detached, which the later git status output confirms.

 

Let us search for the last committed items:

We can explore the structure of the database objects,

using SHA-1 hashes to look up content in the repository.

 

The below command should find the last commit in the repository.

======= SHA hash of git hist ==>

vskumar@ubuntu:~/test-git$ git hist --max-count=1

* 683ed74 2018-02-24 | Added updated test1.txt (HEAD) [Vsk]

vskumar@ubuntu:~/test-git$

==============================>

The SHA-1 hashes on your system will differ from the ones shown here.

Now, let us check the last commit details.

Using the below commands:

git cat-file -t <hash>

git cat-file -p <hash>

 

==== SHA1 content and the tree details ===>

vskumar@ubuntu:~/test-git$ git cat-file -t 683ed74

commit

vskumar@ubuntu:~/test-git$ git cat-file -p 683ed74

tree 44298909c5e8873c5870f9f1ca77951ea4e028eb

parent 69282e8d8c07e7cbc68e93b16df1d943d3b518d5

author Vsk <vskumar35@gmail.com> 1519544092 -0800

committer Vsk <vskumar35@gmail.com> 1519544092 -0800

Added updated test1.txt

vskumar@ubuntu:~/test-git$

==========================================>

Now, we can display the tree referenced in the above commit.

Using: git cat-file -p <treehash>

where <treehash> is the first few characters of the tree hash shown in the commit above.

====== We can see the real files stored under git blob ====>

vskumar@ubuntu:~/test-git$ git cat-file -p 44298909

100644 blob 075e722b3161a24fd5adcefb574b5360118abbef test1.txt

100644 blob 0866cfd2c7ac9bf17f0a0590551a3580359e7250 test2.txt

vskumar@ubuntu:~/test-git$

================================================>

 

== See the contents for text1.txt also ===>

vskumar@ubuntu:~/test-git$ git cat-file -p 075e722b

Testing line1 for git ..

Testing line2 for git—->

Testing test1.tx for add . function

Checking for changing commit comment

vskumar@ubuntu:~/test-git$

====================================>

 

So we have traced the master branch all the way down to the object content level.

 

20. Working with git branches and master :

 

Now, let us see branch operations. We can create different branches, as each developer

can have his/her own branch while working on the same or different objects.

Let us see the current status of the git project:

=========>

vskumar@ubuntu:~/test-git$ pwd

/home/vskumar/test-git

vskumar@ubuntu:~/test-git$ git status

HEAD detached from v1

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)

 

modified:   test1.txt

modified:   test2.txt

 

Untracked files:

  (use “git add <file>…” to include in what will be committed)

 

class

test1.class

test1.java

 

no changes added to commit (use “git add” and/or “git commit -a”)

vskumar@ubuntu:~/test-git$

======================>

 

Now, Let us create a new branch as below [using checkout] and test the operations.

 

git checkout -b testgitbr1

git status

 

====== We have just created a branch from the same project state ======>

vskumar@ubuntu:~/test-git$ git checkout -b testgitbr1

M test1.txt

M test2.txt

Switched to a new branch ‘testgitbr1’

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)

 

modified:   test1.txt

modified:   test2.txt

 

Untracked files:

  (use “git add <file>…” to include in what will be committed)

 

class

test1.class

test1.java

 

no changes added to commit (use “git add” and/or “git commit -a”)

vskumar@ubuntu:~/test-git$

==================>

Assume a developer wants to add his own program.

Now, I would like to add the test1.java program into this new branch:

======= Adding new file ====>

vskumar@ubuntu:~/test-git$ git add test1.java

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

Changes to be committed:

  (use “git reset HEAD <file>…” to unstage)

 

new file:   test1.java

 

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)

 

modified:   test1.txt

modified:   test2.txt

 

Untracked files:

  (use “git add <file>…” to include in what will be committed)

 

class

test1.class

 

vskumar@ubuntu:~/test-git$

=== New branch has a java program also ===>

Now, let me commit this file with a message:

 

git commit -m "Added a java program [test1.java] to new branch"

 

========>

vskumar@ubuntu:~/test-git$ git commit -m "Added a java program [test1.java] to new branch"

[testgitbr1 4e7baf8] Added a java program [test1.java] to new branch

 1 file changed, 11 insertions(+)

 create mode 100644 test1.java

vskumar@ubuntu:~/test-git$

=============>

Let us see the status:

 

=== Current status ====>

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

Changes not staged for commit:

  (use "git add <file>..." to update what will be committed)

  (use "git checkout -- <file>..." to discard changes in working directory)

 

modified:   test1.txt

modified:   test2.txt

 

Untracked files:

  (use “git add <file>…” to include in what will be committed)

 

class

test1.class

 

no changes added to commit (use “git add” and/or “git commit -a”)

vskumar@ubuntu:~/test-git$

========================>

Now let me add all the modified files also with a commit:

 

=======>

vskumar@ubuntu:~/test-git$ git add .

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

Changes to be committed:

  (use “git reset HEAD <file>…” to unstage)

 

new file:   class

new file:   test1.class

modified:   test1.txt

modified:   test2.txt

 

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git commit -m "Added all 4 files [2-new and 2 modified]"

[testgitbr1 26b971b] Added all 4 files [2-new and 2 modified]

 4 files changed, 14 insertions(+), 4 deletions(-)

 create mode 100644 class

 create mode 100644 test1.class

 mode change 100644 => 100755 test1.txt

 mode change 100644 => 100755 test2.txt

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

== So, we have updated the new branch ====>

 

Now we need to navigate the available branches.

Let us apply git hist command and check the history as below:

== Checking the git project repo history ====>

vskumar@ubuntu:~/test-git$ git hist

* 26b971b 2018-03-05 | Added all 4 files [2-new and 2 modified] (HEAD -> testgitbr1) [Vsk]

* 4e7baf8 2018-03-05 | Added a java program [test1.java] to new branch [Vsk]

* 683ed74 2018-02-24 | Added updated test1.txt [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

== You can see the latest branch and commit messages===>

Now, let us switch back and forth between the branch and the master as below:

Please note, so far we have been on the testgitbr1 branch.

Now let us check out master as below:

git checkout master

==== Master ====>

vskumar@ubuntu:~/test-git$ git checkout master

Switched to branch ‘master’

vskumar@ubuntu:~/test-git$ git status

On branch master

nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

================>

 

Let us check some files and their content:

=== Checking master ===>

 vskumar@ubuntu:~/test-git$ cat test1.txt

Testing line1 for git ..

Testing line2 for git—->

Testing test1.tx for add . function

vskumar@ubuntu:~/test-git$ cat test1.java

cat: test1.java: No such file or directory

vskumar@ubuntu:~/test-git$

==========>

 

Now, let us switch to branch and check the files:

== You can see the difference from master ====>

vskumar@ubuntu:~/test-git$ git checkout testgitbr1

Switched to branch ‘testgitbr1’

vskumar@ubuntu:~/test-git$ git status

On branch testgitbr1

nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$ cat test1.java

 

class test1{

  public static void main(String args[]){

    System.out.println("Hello Welcome to DevOps course");

System.out.println("Hope you are practicing well Jenkins 2.9");

System.out.println("Now, create a java object file through javac compiler");

System.out.println("Using Jenkins job creation");

System.out.println("Once it is created, you run it by java runtime");

System.out.println("Now, compare the console output with your expectation!!");

  }

}

vskumar@ubuntu:~/test-git$ cat test1.txt

echo 'Testing line1 for git ..'

echo 'Testing line2 for git—->'

echo 'Testing test1.tx for add . function'

echo 'Checking for changing commit comment'

echo 'For removal of old comment'

vskumar@ubuntu:~/test-git$

==Note the test1.txt has different content from master =====>

 

Now, let us try to add one README file into master.

I want to create the README file as below:

 

== README file content ===>

vskumar@ubuntu:~/test-git$ pwd

/home/vskumar/test-git

vskumar@ubuntu:~/test-git$ git checkout master

Switched to branch ‘master’

vskumar@ubuntu:~/test-git$ git status

On branch master

nothing to commit, working directory clean

vskumar@ubuntu:~/test-git$

vskumar@ubuntu:~/test-git$ touch README

vskumar@ubuntu:~/test-git$ echo "Testing Master and branches" >> README

vskumar@ubuntu:~/test-git$ cat README

Testing Master and branches

vskumar@ubuntu:~/test-git$ echo "Added this README file into master only" >> README

vskumar@ubuntu:~/test-git$ cat README

Testing Master and branches

Added this README file into master only

vskumar@ubuntu:~/test-git$

===================================>

Let us commit this file into master.

 

=== Status ====>

 

vskumar@ubuntu:~/test-git$ git status

On branch master

Untracked files:

  (use “git add <file>…” to include in what will be committed)

 

README

 

nothing added to commit but untracked files present (use “git add” to track)

vskumar@ubuntu:~/test-git$

=======>

 

Add and  Commit it with message :

 

==== Commit master ===>

 

vskumar@ubuntu:~/test-git$ git add README

vskumar@ubuntu:~/test-git$ git status

On branch master

Changes to be committed:

  (use “git reset HEAD <file>…” to unstage)

 

new file:   README

 

vskumar@ubuntu:~/test-git$

===== Added README ====>

Now, Commit:

 

==== Commit master ===>

vskumar@ubuntu:~/test-git$ git commit -m "Added README file into mater"

[master 1fad32b] Added README file into mater

 1 file changed, 2 insertions(+)

 create mode 100644 README

vskumar@ubuntu:~/test-git$

======================>

Now, let us view the current history of master and the branch, as below, with git hist --all:

 

== Current history ===>

vskumar@ubuntu:~/test-git$ git hist --all

* 1fad32b 2018-03-05 | Added README file into mater (HEAD -> master) [Vsk]

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” [Vsk]

| * 26b971b 2018-03-05 | Added all 4 files [2-new and 2 modified] (testgitbr1) [Vsk]

| * 4e7baf8 2018-03-05 | Added a java program [test1.java] to new branch [Vsk]

| * 683ed74 2018-02-24 | Added updated test1.txt [Vsk]

|/  

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=== On top you can see the latest commits of both master and the branch =========>

 

21. How to Merge latest objects into single branch ?:

 

Now, we have two diverged branches. We can consolidate both branches' versions by

merging master into the testgitbr1 branch, so future developers can use them as one latest project.

Let us go back to the testgitbr1 branch and merge master into it by using the below commands.

git checkout testgitbr1

git merge master

================>

vskumar@ubuntu:~/test-git$ git checkout testgitbr1

Switched to branch ‘testgitbr1’

======>

When you run the merge command, an editor opens with the default merge commit message; just save it as it is.

==========================>

vskumar@ubuntu:~/test-git$ git merge master

Auto-merging test2.txt

Merge made by the ‘recursive’ strategy.

 README    | 2 ++

 test2.txt | 1 -

 2 files changed, 2 insertions(+), 1 deletion(-)

 create mode 100644 README

vskumar@ubuntu:~/test-git$

=================>

And let us see the current history:

git hist --all

 

===== History =====>

vskumar@ubuntu:~/test-git$ git hist --all

*   6b67f05 2018-03-05 | Merge branch ‘master’ into testgitbr1 (HEAD -> testgitbr1) [Vsk]

|\  

| * 1fad32b 2018-03-05 | Added README file into mater (master) [Vsk]

| * fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” [Vsk]

* | 26b971b 2018-03-05 | Added all 4 files [2-new and 2 modified] [Vsk]

* | 4e7baf8 2018-03-05 | Added a java program [test1.java] to new branch [Vsk]

* | 683ed74 2018-02-24 | Added updated test1.txt [Vsk]

|/  

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

== Let us check the above commit history also =============>

 

From the above exercise, we can conclude that any developer can merge master into their current branch this way, and likewise merge their branch into master once they decide to release the code.
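The whole cycle in this section (branch off, commit on the branch, merge the main line back in) can be replayed in a scratch repository. The file names, messages, and identity below are illustrative only:

```shell
# Sketch: create a feature branch, commit on it, and merge the main line into it.
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.name lab
git config user.email lab@example.com
echo base > app.txt; git add app.txt; git commit -q -m "First Commit"
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on config
git checkout -q -b feature              # the developer's own branch
echo work > feature.txt; git add feature.txt; git commit -q -m "Work on feature"
git checkout -q "$main"                 # meanwhile the main line moves on
echo notes > README; git add README; git commit -q -m "Add README"
git checkout -q feature
git merge -q --no-edit "$main"          # merge the main line into the branch
ls                                      # the branch now has all three files
```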

 

22. How to reset the earlier created branch from the local repository?:

 

We will see from this exercise, how a developer can reset the earlier branch.

We need to be on the branch now.

we should use 'git checkout testgitbr1'

 

= Switching to testgitbr1 branch ==>

vskumar@ubuntu:~/test-git$ git checkout testgitbr1

Switched to branch ‘testgitbr1’

vskumar@ubuntu:~/test-git$

============>

Now, let us see the current history of the git local repo:

 

==== Git hist====>

 

vskumar@ubuntu:~/test-git$ git hist

*   6b67f05 2018-03-05 | Merge branch ‘master’ into testgitbr1 (HEAD -> testgitbr1) [Vsk]

|\  

| * 1fad32b 2018-03-05 | Added README file into mater (master) [Vsk]

| * fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” [Vsk]

* | 26b971b 2018-03-05 | Added all 4 files [2-new and 2 modified] [Vsk]

* | 4e7baf8 2018-03-05 | Added a java program [test1.java] to new branch [Vsk]

* | 683ed74 2018-02-24 | Added updated test1.txt [Vsk]

|/  

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

== Note, both branches info is available ===>

To discard the later commits on the testgitbr1 branch, we can hard-reset it to an earlier commit.

git reset --hard <hash>  ---> here we use the first commit hash on testgitbr1, which is 4e7baf8.

So our command is: git reset --hard 4e7baf8

 

=== Resetting branch ===>

vskumar@ubuntu:~/test-git$ git reset --hard 4e7baf8

HEAD is now at 4e7baf8 Added a java program [test1.java] to new branch

==== Resetting is done for the branch ===>

Now let us check the full history:

 

===== Hist all =====>

vskumar@ubuntu:~/test-git$ git hist --all

* 1fad32b 2018-03-05 | Added README file into mater (master) [Vsk]

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” [Vsk]

| * 4e7baf8 2018-03-05 | Added a java program [test1.java] to new branch (HEAD -> testgitbr1) [Vsk]

| * 683ed74 2018-02-24 | Added updated test1.txt [Vsk]

|/  

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

== Note: testgitbr1 now points back at 4e7baf8, so its later merge commit is gone ====>

Now let us see the latest history.

So, this way any developer can create and merge a branch, and later rewind it; a fully merged branch can also be deleted with git branch -d.

 

23. How to add the current code to github?:

 

Create a new repository, or use an existing repo, on the command line.

Create your own GitHub user ID and a project in it.

Then you can create the repository online. You will see a set of commands displayed on the web page.

Follow them, or follow the steps below:

 

Steps for GitHub access:

 

You need to set the project URL as below:

========================>

vskumar@ubuntu:~/test-git$ git remote set-url origin https://github.com/vskumar2017/git-test1.git

vskumar@ubuntu:~/test-git$

========================>
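If the repo has never been wired to GitHub, the remote is added first; set-url (used above) re-points an existing remote. A minimal sketch, run in a scratch repo with this lab's repository URL, and with nothing actually pushed:

```shell
# Sketch: point a local repo at a GitHub remote (setting the URL needs no network).
dir=$(mktemp -d); cd "$dir"; git init -q .
git remote add origin https://github.com/vskumar2017/git-test1.git
git remote set-url origin https://github.com/vskumar2017/git-test1.git  # re-point if it already exists
git remote get-url origin
# a real publish would then run: git push -u origin master
```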

Please see the below content, I have pushed the code to my account VSKUMAR2017 as below:

 

==== Pushed code to github account ====>

vskumar@ubuntu:~/test-git$ git push origin master --force

Username for 'https://github.com': VSKUMAR2017

Password for 'https://VSKUMAR2017@github.com':

Counting objects: 20, done.

Compressing objects: 100% (16/16), done.

Writing objects: 100% (20/20), 1.85 KiB | 0 bytes/s, done.

Total 20 (delta 2), reused 0 (delta 0)

remote: Resolving deltas: 100% (2/2), done.

To https://github.com/vskumar2017/git-test1.git

 * [new branch]      master -> master

vskumar@ubuntu:~/test-git$

====================>

 

I logged into my account and saw the below URL:

https://github.com/vskumar2017/git-test1

 

=== As per the below hist it is stored ===>

vskumar@ubuntu:~/test-git$ git hist

* e3fab98 2018-03-05 | first commit (HEAD -> master) [Vsk]

* 1fad32b 2018-03-05 | Added README file into mater [Vsk]

* fdc40ac 2018-02-24 | Revert “Committed test2.txt 3rd change” [Vsk]

* 69282e8 2018-02-24 | Committed test2.txt 3rd change (tag: v1) [Vsk]

* 6bfd9b0 2018-02-24 | Committed test1.txt 3rd change [Vsk]

* 2a7192d 2018-02-24 | Committed:Changes for test1.txt and test2.txt [Vsk]

* 56ccc1e 2018-02-24 | First Commit [Vsk]

vskumar@ubuntu:~/test-git$

=====================>

I have done one more push:

 

==== One more push to github ===>

vskumar@ubuntu:~/test-git$ git checkout master

Switched to branch ‘master’

vskumar@ubuntu:~/test-git$ git add .

vskumar@ubuntu:~/test-git$ git commit -m "A shell sample added"

[master c61b3dd] A shell sample added

 1 file changed, 3 insertions(+)

 create mode 100644 sh1.sh

vskumar@ubuntu:~/test-git$ git push origin master --force

Username for 'https://github.com': VSKUMAR2017

Password for 'https://VSKUMAR2017@github.com':

Counting objects: 3, done.

Compressing objects: 100% (2/2), done.

Writing objects: 100% (3/3), 299 bytes | 0 bytes/s, done.

Total 3 (delta 1), reused 0 (delta 0)

remote: Resolving deltas: 100% (1/1), completed with 1 local object.

To https://github.com/vskumar2017/git-test1.git

   e3fab98..c61b3dd  master -> master

vskumar@ubuntu:~/test-git$

=========================>

 

I can see the below content on my GitHub web page at the URL: https://github.com/vskumar2017/git-test1

===== Message ===>

sh1.sh  A shell sample added just now

======>

 Hope you enjoyed it technically!!

END OF LAB SESSION FOR Git

 


If you are keen practicing Mock interviews for a Job Description, Please read the below blog to contact:

https://vskumar.blog/2020/02/03/contact-for-aws-devops-sre-roles-mock-interview-prep-not-proxy-for-original-profile/

15. DevOps: How to set up Jenkins 2.9 on Ubuntu 16.04 with JDK8, with a troubleshooting video guidance


In continuation of blog related to Jenkins installation on Win10 url :https://vskumar.blog/2017/11/25/1-devops-jenkins2-9-installation-with-java-9-on-windows-10/

In this blog I would like to demonstrate Jenkins 2.9 installation on Ubuntu 16.04 with JDK8. I used an Ubuntu 16.04 VM. You can use your standalone Ubuntu machine also.

At the end, a video link is given for this entire exercise, including a job run.

[Note: If you are a student and in need of  Ubuntu 16.04 VM copy, I can share it. You need to send a request through linkedin with your identity please. At the bottom of this blog you can get my details.]

If you want to install Jenkins you need to have JDK in the Ubuntu machine.

You need to install JDK8 as 1st instance for Jenkins setup:

How to install JDK8?:

Step 1:-
Download JDK 8 tar.gz file from official website

Step 2:-
Extract contents using tar command

$tar -xzvf filename.tar.gz

Step 3 :-
Move Extracted file to /usr/local/java Directory
=====CLI Screen output ===>
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ cd /usr/local
vskumar@ubuntu:/usr/local$ ls
bin etc games include lib man sbin share src
vskumar@ubuntu:/usr/local$ mkdir java
mkdir: cannot create directory ‘java’: Permission denied
vskumar@ubuntu:/usr/local$ sudo mkdir java
[sudo] password for vskumar:
vskumar@ubuntu:/usr/local$ ls
bin etc games include java lib man sbin share src
vskumar@ubuntu:/usr/local$
vskumar@ubuntu:/usr/local$ cd java
vskumar@ubuntu:/usr/local/java$

vskumar@ubuntu:~$
vskumar@ubuntu:~$ ls
data-volume1 Downloads jdk-9.0.4_linux-x64_bin.tar.gz Public
Desktop examples.desktop Music Templates
Documents flask-test Pictures Videos
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$

vskumar@ubuntu:~/Downloads$ ls
firefox-57.0.tar.bz2 jdk1.8.0_161 jdk-8u161-linux-x64.tar.gz

vskumar@ubuntu:~/Downloads$ sudo mv jdk1.8.0_161 /usr/local/java
[sudo] password for vskumar:
vskumar@ubuntu:~/Downloads$ ls
firefox-57.0.tar.bz2 jdk-8u161-linux-x64.tar.gz
vskumar@ubuntu:~/Downloads$
vskumar@ubuntu:/usr/local/java$ pwd
/usr/local/java
vskumar@ubuntu:/usr/local/java$ ls -l
total 4
drwxr-xr-x 8 vskumar vskumar 4096 Dec 19 16:24 jdk1.8.0_161
vskumar@ubuntu:/usr/local/java$

==jdk8 unzipped files are moved into /usr/local/java ===>

Step 4: – Now,
Update Alternatives

$sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.8.0_161/bin/java" 1

$sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.8.0_161/bin/javac" 1

$sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jdk1.8.0_161/bin/javaws" 1

=== Output===>

vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$ pwd
/usr/local/java/jdk1.8.0_161
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.8.0_161/bin/java" 1
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.8.0_161/bin/javac" 1
update-alternatives: using /usr/local/java/jdk1.8.0_161/bin/javac to provide /usr/bin/javac (javac) in auto mode
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/local/java/jdk1.8.0_161/bin/javaws" 1
update-alternatives: using /usr/local/java/jdk1.8.0_161/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$
==============>

Step 5 :-
Check Java version
$ java -version

======>
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$ java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
vskumar@ubuntu:/usr/local/java/jdk1.8.0_161$
====We have done JDKsetup ===>

Now, You can see “How to install Jenkins on Ubuntu 16.04?”:

For details Visit: https://wiki.jenkins.io/display/JENKINS/Installing+Jenkins+on+Ubuntu

Step1:
First, we need to add the jenkins repository key to the ubuntu system.

$ sudo wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -

When the key is added, the system will return ‘OK’

Step2:
Now, we need to append the Debian package repository
address to the server's apt sources (note: the command below overwrites /etc/apt/sources.list; writing to a dedicated file such as /etc/apt/sources.list.d/jenkins.list is safer):

$echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list

The screen output for the above steps:
=== Screen output of Step1 and Step2====>
vskumar@ubuntu:~$ sudo wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
OK
vskumar@ubuntu:~$ echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list
deb https://pkg.jenkins.io/debian-stable binary/
vskumar@ubuntu:~$ sudo apt-get update
===============>

When both of the above steps are executed, we’ll run update so that apt-get will use
the new repository:

Step3:
$sudo apt-get update

==== You will see the below screen output ===>
vskumar@ubuntu:~$ echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list
deb https://pkg.jenkins.io/debian-stable binary/
vskumar@ubuntu:~$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu xenial InRelease
Ign:2 https://pkg.jenkins.io/debian-stable binary/ InRelease
Hit:3 https://pkg.jenkins.io/debian-stable binary/ Release
Reading package lists… Done
vskumar@ubuntu:~$
========================>

========>
vskumar@ubuntu:~$ sudo find . / jenkins | grep 'jenkins'
find: ‘/run/user/1000/gvfs’: Permission denied
/var/lib/apt/lists/pkg.jenkins.io_debian-stable_binary_Packages
/var/lib/apt/lists/pkg.jenkins.io_debian-stable_binary_Release.gpg
/var/lib/apt/lists/pkg.jenkins.io_debian-stable_binary_Release
find: ‘jenkins’: No such file or directory
vskumar@ubuntu:~$
==============>

Step4:
Include JAVA_HOME = /usr/local/java/jdk1.8.0_161
in your shell startup file (the '.shrc' mentioned here is usually ~/.bashrc on Ubuntu).
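In practice this means appending lines like the following; the path matches the JDK directory used earlier, and the exact rc file is an assumption based on your shell:

```shell
# Hypothetical ~/.bashrc additions for the JDK unpacked above:
export JAVA_HOME=/usr/local/java/jdk1.8.0_161
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```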

Now, we will install Jenkins and its dependencies:

$sudo apt-get install jenkins
When you run this command you might get the below error:
=== Dependency issues ======>
vskumar@ubuntu:~$ sudo apt-get install jenkins
Reading package lists… Done
Building dependency tree
Reading state information… Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
jenkins : Depends: daemon but it is not installable
Depends: default-jre-headless (>= 2:1.8) or
java8-runtime-headless
E: Unable to correct problems, you have held broken packages.
vskumar@ubuntu:~$
======================>

To resolve this issue, open Software &amp; Updates and, under the Ubuntu Software tab, enable all the repositories.
The system then updates all the packages/libraries.

Visit for details:
https://askubuntu.com/questions/140246/how-do-i-resolve-unmet-dependencies-after-adding-a-ppa
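If you prefer the shell over the Ubuntu Software GUI, enabling a repository amounts to uncommenting the corresponding 'deb' line in the apt sources. A sketch against a sample file (hypothetical file names, so it runs anywhere without sudo):

```shell
# Build a small sample sources file: one commented repo, one active.
printf '# deb http://archive.ubuntu.com/ubuntu xenial universe\ndeb http://archive.ubuntu.com/ubuntu xenial main\n' > sources.list.copy
# Uncomment the disabled 'deb' lines.
# (On a real system you would edit /etc/apt/sources.list with sudo.)
sed 's/^# deb /deb /' sources.list.copy > sources.list.enabled
grep -c '^deb ' sources.list.enabled   # → 2
```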

After enabling the repositories as mentioned above, I re-executed the install command:

==== Screen Output of Jenkins installation ====
vskumar@ubuntu:~$ sudo apt-get install jenkins
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
ca-certificates-java daemon default-jre-headless java-common
openjdk-8-jre-headless
Suggested packages:
default-jre openjdk-8-jre-jamvm fonts-dejavu-extra fonts-ipafont-gothic
fonts-ipafont-mincho ttf-wqy-microhei | ttf-wqy-zenhei fonts-indic
The following NEW packages will be installed:
ca-certificates-java daemon default-jre-headless java-common jenkins
openjdk-8-jre-headless
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 101 MB of archives.
After this operation, 174 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 java-common all 0.56ubuntu2 [7,742 B]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 default-jre-headless amd64 2:1.8-56ubuntu2 [4,380 B]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 ca-certificates-java all 20160321 [12.9 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 openjdk-8-jre-headless amd64 8u77-b03-3ubuntu3 [26.9 MB]
Get:5 https://pkg.jenkins.io/debian-stable binary/ jenkins 2.89.4 [73.7 MB]
Get:6 http://archive.ubuntu.com/ubuntu xenial/universe amd64 daemon amd64 0.6.4-1 [98.2 kB]
Fetched 101 MB in 2min 9s (775 kB/s)
Selecting previously unselected package java-common.
(Reading database … 217445 files and directories currently installed.)
Preparing to unpack …/java-common_0.56ubuntu2_all.deb …
Unpacking java-common (0.56ubuntu2) …
Selecting previously unselected package default-jre-headless.
Preparing to unpack …/default-jre-headless_2%3a1.8-56ubuntu2_amd64.deb …
Unpacking default-jre-headless (2:1.8-56ubuntu2) …
Selecting previously unselected package ca-certificates-java.
Preparing to unpack …/ca-certificates-java_20160321_all.deb …
Unpacking ca-certificates-java (20160321) …
Selecting previously unselected package openjdk-8-jre-headless:amd64.
Preparing to unpack …/openjdk-8-jre-headless_8u77-b03-3ubuntu3_amd64.deb …
Unpacking openjdk-8-jre-headless:amd64 (8u77-b03-3ubuntu3) …
Selecting previously unselected package daemon.
Preparing to unpack …/daemon_0.6.4-1_amd64.deb …
Unpacking daemon (0.6.4-1) …
Selecting previously unselected package jenkins.
Preparing to unpack …/jenkins_2.89.4_all.deb …
Unpacking jenkins (2.89.4) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for ca-certificates (20170717~16.04.1) …
Updating certificates in /etc/ssl/certs…
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d…
done.
Processing triggers for systemd (229-4ubuntu21.1) …
Processing triggers for ureadahead (0.100.0-19) …
Setting up java-common (0.56ubuntu2) …
Setting up daemon (0.6.4-1) …
Setting up openjdk-8-jre-headless:amd64 (8u77-b03-3ubuntu3) …
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/jjs to provide /usr/bin/jjs (jjs) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode
Setting up ca-certificates-java (20160321) …
Adding debian:Chambers_of_Commerce_Root_-_2008.pem
Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem
Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem
Adding debian:Deutsche_Telekom_Root_CA_2.pem
Adding debian:Izenpe.com.pem
Adding debian:Microsec_e-Szigno_Root_CA_2009.pem
Adding debian:EC-ACC.pem
Adding debian:DigiCert_Global_Root_G2.pem
Adding debian:QuoVadis_Root_CA_3.pem
Adding debian:ePKI_Root_Certification_Authority.pem
Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem
Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem
Adding debian:ACCVRAIZ1.pem
Adding debian:Cybertrust_Global_Root.pem
Adding debian:COMODO_ECC_Certification_Authority.pem
Adding debian:GeoTrust_Universal_CA_2.pem
Adding debian:Entrust_Root_Certification_Authority_-_EC1.pem
Adding debian:Sonera_Class_2_Root_CA.pem
Adding debian:Comodo_AAA_Services_root.pem
Adding debian:Security_Communication_EV_RootCA1.pem
Adding debian:AddTrust_Low-Value_Services_Root.pem
Adding debian:Amazon_Root_CA_1.pem
Adding debian:DST_Root_CA_X3.pem
Adding debian:OpenTrust_Root_CA_G1.pem
Adding debian:T-TeleSec_GlobalRoot_Class_3.pem
Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem
Adding debian:Atos_TrustedRoot_2011.pem
Adding debian:Starfield_Class_2_CA.pem
Adding debian:Certigna.pem
Adding debian:Buypass_Class_3_Root_CA.pem
Adding debian:COMODO_Certification_Authority.pem
Adding debian:thawte_Primary_Root_CA_-_G3.pem
Adding debian:Swisscom_Root_EV_CA_2.pem
Adding debian:Go_Daddy_Class_2_CA.pem
Adding debian:VeriSign_Universal_Root_Certification_Authority.pem
Adding debian:Global_Chambersign_Root_-_2008.pem
Adding debian:CNNIC_ROOT.pem
Adding debian:AddTrust_External_Root.pem
Adding debian:SwissSign_Gold_CA_-_G2.pem
Adding debian:QuoVadis_Root_CA_1_G3.pem
Adding debian:GeoTrust_Primary_Certification_Authority.pem
Adding debian:Hongkong_Post_Root_CA_1.pem
Adding debian:TWCA_Global_Root_CA.pem
Adding debian:ACEDICOM_Root.pem
Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem
Adding debian:Staat_der_Nederlanden_EV_Root_CA.pem
Adding debian:GlobalSign_ECC_Root_CA_-_R4.pem
Adding debian:Entrust_Root_Certification_Authority_-_G2.pem
Adding debian:Taiwan_GRCA.pem
Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem
Adding debian:COMODO_RSA_Certification_Authority.pem
Adding debian:ssl-cert-snakeoil.pem
Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem
Adding debian:GeoTrust_Global_CA.pem
Adding debian:Security_Communication_RootCA2.pem
Adding debian:QuoVadis_Root_CA.pem
Adding debian:D-TRUST_Root_Class_3_CA_2_2009.pem
Adding debian:D-TRUST_Root_Class_3_CA_2_EV_2009.pem
Adding debian:UTN_USERFirst_Hardware_Root_CA.pem
Adding debian:DST_ACES_CA_X6.pem
Adding debian:Visa_eCommerce_Root.pem
Adding debian:Certinomis_-_Autorité_Racine.pem
Adding debian:thawte_Primary_Root_CA_-_G2.pem
Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem
Adding debian:E-Tugra_Certification_Authority.pem
Adding debian:QuoVadis_Root_CA_2_G3.pem
Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem
Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2007.pem
Adding debian:SwissSign_Silver_CA_-_G2.pem
Adding debian:TWCA_Root_Certification_Authority.pem
Adding debian:Certum_Trusted_Network_CA_2.pem
Adding debian:T-TeleSec_GlobalRoot_Class_2.pem
Adding debian:GlobalSign_Root_CA_-_R2.pem
Adding debian:LuxTrust_Global_Root_2.pem
Adding debian:AddTrust_Public_Services_Root.pem
Adding debian:Staat_der_Nederlanden_Root_CA_-_G3.pem
Adding debian:USERTrust_ECC_Certification_Authority.pem
Adding debian:AffirmTrust_Networking.pem
Adding debian:Amazon_Root_CA_4.pem
Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem
Adding debian:TÜRKTRUST_Elektronik_Sertifika_Hizmet_Saglayicisi_H5.pem
Adding debian:ISRG_Root_X1.pem
Adding debian:AC_RAIZ_FNMT-RCM.pem
Adding debian:Swisscom_Root_CA_2.pem
Adding debian:DigiCert_Trusted_Root_G4.pem
Adding debian:GlobalSign_Root_CA.pem
Adding debian:CA_Disig_Root_R2.pem
Adding debian:OISTE_WISeKey_Global_Root_GB_CA.pem
Adding debian:TeliaSonera_Root_CA_v1.pem
Adding debian:Comodo_Trusted_Services_root.pem
Adding debian:Certum_Trusted_Network_CA.pem
Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem
Adding debian:AffirmTrust_Premium.pem
Adding debian:AffirmTrust_Premium_ECC.pem
Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem
Adding debian:Secure_Global_CA.pem
Adding debian:Certinomis_-_Root_CA.pem
Adding debian:AddTrust_Qualified_Certificates_Root.pem
Adding debian:Certplus_Root_CA_G2.pem
Adding debian:Amazon_Root_CA_2.pem
Adding debian:Security_Communication_Root_CA.pem
Adding debian:GeoTrust_Global_CA_2.pem
Adding debian:DigiCert_Global_Root_CA.pem
Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem
Adding debian:IdenTrust_Public_Sector_Root_CA_1.pem
Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem
Adding debian:SZAFIR_ROOT_CA2.pem
Adding debian:PSCProcert.pem
Adding debian:AffirmTrust_Commercial.pem
Adding debian:certSIGN_ROOT_CA.pem
Adding debian:Swisscom_Root_CA_1.pem
Adding debian:Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem
Adding debian:Certplus_Class_2_Primary_CA.pem
Adding debian:XRamp_Global_CA_Root.pem
Adding debian:GeoTrust_Universal_CA.pem
Adding debian:QuoVadis_Root_CA_3_G3.pem
Adding debian:QuoVadis_Root_CA_2.pem
Adding debian:China_Internet_Network_Information_Center_EV_Certificates_Root.pem
Adding debian:CFCA_EV_ROOT.pem
Adding debian:OpenTrust_Root_CA_G2.pem
Adding debian:Network_Solutions_Certificate_Authority.pem
Adding debian:Amazon_Root_CA_3.pem
Adding debian:Certum_Root_CA.pem
Adding debian:EE_Certification_Centre_Root_CA.pem
Adding debian:Actalis_Authentication_Root_CA.pem
Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem
Adding debian:DigiCert_Assured_ID_Root_G3.pem
Adding debian:SecureTrust_CA.pem
Adding debian:Entrust_Root_Certification_Authority.pem
Adding debian:DigiCert_Assured_ID_Root_G2.pem
Adding debian:Certplus_Root_CA_G1.pem
Adding debian:TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem
Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem
Adding debian:GlobalSign_Root_CA_-_R3.pem
Adding debian:IdenTrust_Commercial_Root_CA_1.pem
Adding debian:OpenTrust_Root_CA_G3.pem
Adding debian:thawte_Primary_Root_CA.pem
Adding debian:USERTrust_RSA_Certification_Authority.pem
Adding debian:DigiCert_Assured_ID_Root_CA.pem
Adding debian:Buypass_Class_2_Root_CA.pem
Adding debian:CA_Disig_Root_R1.pem
Adding debian:Trustis_FPS_Root_CA.pem
Adding debian:Baltimore_CyberTrust_Root.pem
Adding debian:GlobalSign_ECC_Root_CA_-_R5.pem
Adding debian:Comodo_Secure_Services_root.pem
Adding debian:DigiCert_Global_Root_G3.pem
Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem
Adding debian:Camerfirma_Global_Chambersign_Root.pem
Adding debian:SecureSign_RootCA11.pem
done.
Setting up default-jre-headless (2:1.8-56ubuntu2) …
Setting up jenkins (2.89.4) …
Processing triggers for ca-certificates (20170717~16.04.1) …
Updating certificates in /etc/ssl/certs…
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d…

done.
done.
Processing triggers for systemd (229-4ubuntu21.1) …
Processing triggers for ureadahead (0.100.0-19) …
vskumar@ubuntu:~$
== End of Jenkins installation ========>

==== Updates ====>

vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo apt-get update
Hit:1 http://ppa.launchpad.net/webupd8team/java/ubuntu xenial InRelease
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Hit:3 https://download.docker.com/linux/ubuntu xenial InRelease
Ign:4 https://pkg.jenkins.io/debian-stable binary/ InRelease
Hit:5 https://pkg.jenkins.io/debian-stable binary/ Release
Reading package lists… Done
vskumar@ubuntu:~$
=============>

Step5:

How to install apache2?:
Apache2 is not strictly required by Jenkins, but it is commonly placed in front of Jenkins as a reverse proxy.
Now let us install apache2:

=== Output for apache installation ===>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo apt install apache2
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
apache2-bin apache2-data apache2-utils libapr1 libaprutil1
libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.1-0
Suggested packages:
apache2-doc apache2-suexec-pristine | apache2-suexec-custom
The following NEW packages will be installed:
apache2 apache2-bin apache2-data apache2-utils libapr1 libaprutil1
libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.1-0
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,532 kB of archives.
After this operation, 6,350 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libapr1 amd64 1.5.2-3 [86.0 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libaprutil1 amd64 1.5.4-1build1 [77.1 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libaprutil1-dbd-sqlite3 amd64 1.5.4-1build1 [10.6 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 libaprutil1-ldap amd64 1.5.4-1build1 [8,720 B]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 liblua5.1-0 amd64 5.1.5-8ubuntu1 [102 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 apache2-bin amd64 2.4.18-2ubuntu3 [918 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 apache2-utils amd64 2.4.18-2ubuntu3 [81.1 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 apache2-data all 2.4.18-2ubuntu3 [162 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 apache2 amd64 2.4.18-2ubuntu3 [86.6 kB]
Fetched 1,532 kB in 5s (270 kB/s)
Selecting previously unselected package libapr1:amd64.
(Reading database … 217703 files and directories currently installed.)
Preparing to unpack …/libapr1_1.5.2-3_amd64.deb …
Unpacking libapr1:amd64 (1.5.2-3) …
Selecting previously unselected package libaprutil1:amd64.
Preparing to unpack …/libaprutil1_1.5.4-1build1_amd64.deb …
Unpacking libaprutil1:amd64 (1.5.4-1build1) …
Selecting previously unselected package libaprutil1-dbd-sqlite3:amd64.
Preparing to unpack …/libaprutil1-dbd-sqlite3_1.5.4-1build1_amd64.deb …
Unpacking libaprutil1-dbd-sqlite3:amd64 (1.5.4-1build1) …
Selecting previously unselected package libaprutil1-ldap:amd64.
Preparing to unpack …/libaprutil1-ldap_1.5.4-1build1_amd64.deb …
Unpacking libaprutil1-ldap:amd64 (1.5.4-1build1) …
Selecting previously unselected package liblua5.1-0:amd64.
Preparing to unpack …/liblua5.1-0_5.1.5-8ubuntu1_amd64.deb …
Unpacking liblua5.1-0:amd64 (5.1.5-8ubuntu1) …
Selecting previously unselected package apache2-bin.
Preparing to unpack …/apache2-bin_2.4.18-2ubuntu3_amd64.deb …
Unpacking apache2-bin (2.4.18-2ubuntu3) …
Selecting previously unselected package apache2-utils.
Preparing to unpack …/apache2-utils_2.4.18-2ubuntu3_amd64.deb …
Unpacking apache2-utils (2.4.18-2ubuntu3) …
Selecting previously unselected package apache2-data.
Preparing to unpack …/apache2-data_2.4.18-2ubuntu3_all.deb …
Unpacking apache2-data (2.4.18-2ubuntu3) …
Selecting previously unselected package apache2.
Preparing to unpack …/apache2_2.4.18-2ubuntu3_amd64.deb …
Unpacking apache2 (2.4.18-2ubuntu3) …
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for man-db (2.7.5-1) …
Processing triggers for systemd (229-4ubuntu21.1) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for ufw (0.35-0ubuntu2) …
Setting up libapr1:amd64 (1.5.2-3) …
Setting up libaprutil1:amd64 (1.5.4-1build1) …
Setting up libaprutil1-dbd-sqlite3:amd64 (1.5.4-1build1) …
Setting up libaprutil1-ldap:amd64 (1.5.4-1build1) …
Setting up liblua5.1-0:amd64 (5.1.5-8ubuntu1) …
Setting up apache2-bin (2.4.18-2ubuntu3) …
Setting up apache2-utils (2.4.18-2ubuntu3) …
Setting up apache2-data (2.4.18-2ubuntu3) …
Setting up apache2 (2.4.18-2ubuntu3) …
Enabling module mpm_event.
Enabling module authz_core.
Enabling module authz_host.
Enabling module authn_core.
Enabling module auth_basic.
Enabling module access_compat.
Enabling module authn_file.
Enabling module authz_user.
Enabling module alias.
Enabling module dir.
Enabling module autoindex.
Enabling module env.
Enabling module mime.
Enabling module negotiation.
Enabling module setenvif.
Enabling module filter.
Enabling module deflate.
Enabling module status.
Enabling conf charset.
Enabling conf localized-error-pages.
Enabling conf other-vhosts-access-log.
Enabling conf security.
Enabling conf serve-cgi-bin.
Enabling site 000-default.
Processing triggers for libc-bin (2.23-0ubuntu10) …
Processing triggers for systemd (229-4ubuntu21.1) …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for ufw (0.35-0ubuntu2) …
vskumar@ubuntu:~$
=== Apache2 is installed ===>

Let us check its status:

== Status of apache2 ===>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ service status apache2
status: unrecognized service
vskumar@ubuntu:~$ service apache2 status
? apache2.service – LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
+-apache2-systemd.conf
Active: active (running) since Thu 2018-02-22 02:30:24 PST; 1min 6s ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/apache2.service
+-12680 /usr/sbin/apache2 -k start
+-12683 /usr/sbin/apache2 -k start
+-12684 /usr/sbin/apache2 -k start

Feb 22 02:30:22 ubuntu systemd[1]: Starting LSB: Apache2 web server…
Feb 22 02:30:22 ubuntu apache2[12651]: * Starting Apache httpd web server apach
Feb 22 02:30:23 ubuntu apache2[12651]: AH00558: apache2: Could not reliably dete
Feb 22 02:30:24 ubuntu apache2[12651]: *
Feb 22 02:30:24 ubuntu systemd[1]: Started LSB: Apache2 web server.
lines 1-16/16 (END)
================>

Now, let us check the status for Jenkins:

===== Jenkins status ===>

vskumar@ubuntu:~$ service jenkins status
? jenkins.service – LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; bad; vendor preset: enabled)
Active: active (exited) since Thu 2018-02-22 02:13:57 PST; 23min ago
Docs: man:systemd-sysv-generator(8)

Feb 22 02:13:51 ubuntu systemd[1]: Starting LSB: Start Jenkins at boot time…
Feb 22 02:13:51 ubuntu jenkins[10140]: * Starting Jenkins Automation Server jen
Feb 22 02:13:52 ubuntu su[10168]: Successful su for jenkins by root
Feb 22 02:13:52 ubuntu su[10168]: + ??? root:jenkins
Feb 22 02:13:52 ubuntu su[10168]: pam_unix(su:session): session opened for user
Feb 22 02:13:57 ubuntu jenkins[10140]: …done.
Feb 22 02:13:57 ubuntu systemd[1]: Started LSB: Start Jenkins at boot time.
lines 1-12/12 (END)
==== Jenkins is running ======>

How to check the Jenkins web page in the Ubuntu browser?:

You can also open the Ubuntu browser with the below
URL: http://localhost:8080/login?from=%2F
A web page should be displayed with the message:
“Unlock Jenkins”.

It means Jenkins is available to use.

Step6:
Now, we need to set up the admin password.
Read the content on the web page.
You are advised to use the initial password from:
/var/lib/jenkins/secrets/initialAdminPassword

Let us read that file, copy the password, and paste it on the
web page.

In my case I got the below:
==== PWD display ===>
vskumar@ubuntu:~$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword
711a4d7d01244651a490cfd4a61439e2
vskumar@ubuntu:~$
========>
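The initial password is a 32-character lowercase hex string; below is a small sketch (with a hypothetical helper name) to sanity-check the value you copied before pasting it into the UI:

```shell
# Returns success if the argument looks like a Jenkins initial admin
# password (32 lowercase hex characters).
is_initial_password() {
  printf '%s' "$1" | grep -Eq '^[0-9a-f]{32}$'
}
is_initial_password 711a4d7d01244651a490cfd4a61439e2 && echo "format OK"
```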

I have pasted it on the web page.
Until you change the password, this will be the default one.

==== List of the files in jenkins dir ===>
vskumar@ubuntu:/var/lib/jenkins$ ls -l
total 60
-rw-r–r– 1 jenkins jenkins 1820 Feb 22 02:17 config.xml
-rw-r–r– 1 jenkins jenkins 156 Feb 22 02:15 hudson.model.UpdateCenter.xml
-rw——- 1 jenkins jenkins 1712 Feb 22 02:15 identity.key.enc
-rw-r–r– 1 jenkins jenkins 94 Feb 22 02:16 jenkins.CLI.xml
-rw-r–r– 1 jenkins jenkins 6 Feb 22 02:16 jenkins.install.UpgradeWizard.state
drwxr-xr-x 2 jenkins jenkins 4096 Feb 22 02:15 jobs
drwxr-xr-x 3 jenkins jenkins 4096 Feb 22 02:16 logs
-rw-r–r– 1 jenkins jenkins 907 Feb 22 02:16 nodeMonitors.xml
drwxr-xr-x 2 jenkins jenkins 4096 Feb 22 02:15 nodes
drwxr-xr-x 2 jenkins jenkins 4096 Feb 22 02:15 plugins
-rw-r–r– 1 jenkins jenkins 64 Feb 22 02:15 secret.key
-rw-r–r– 1 jenkins jenkins 0 Feb 22 02:15 secret.key.not-so-secret
drwx—— 4 jenkins jenkins 4096 Feb 22 02:16 secrets
drwxr-xr-x 2 jenkins jenkins 4096 Feb 22 02:18 updates
drwxr-xr-x 2 jenkins jenkins 4096 Feb 22 02:16 userContent
drwxr-xr-x 3 jenkins jenkins 4096 Feb 22 02:16 users
vskumar@ubuntu:/var/lib/jenkins$ cd jobs
vskumar@ubuntu:/var/lib/jenkins/jobs$ ls
vskumar@ubuntu:/var/lib/jenkins/jobs$ pwd
/var/lib/jenkins/jobs
vskumar@ubuntu:/var/lib/jenkins/jobs$
==================>
We need to remember the above directories.

By default we can install the ‘suggested plugins’ for now.
It shows the plugin updates.
Once they are installed, you will be prompted for a user id/password/name/e-mail.
You can fill them in and start using Jenkins.

That is all for the installation of Jenkins 2.89.

This video is made to motivate new DevOps engineers to learn and do this kind of troubleshooting while installing Jenkins on an Ubuntu virtual machine.

To try one job build you can go through my another blog:

https://vskumar.blog/2017/11/26/2-devops-jenkins2-9-how-to-create-and-build-the-job/

Good luck, and you can start practicing Jenkins!

Please leave your feedback!!


12. DevOps: How to build docker images using dockerfile ? -1

 


In continuation of my previous session on “11. DevOps: How to Launch a container as a daemon ?”, in this session I would like to demonstrate the exercises on:

“How to build docker images using dockerfile ?”

Docker images are basic operating environments, such as ubuntu.
We came across these while doing the other lab exercises.
Docker images can also be used to craft advanced application stacks for enterprise and cloud IT environments.
So far we have crafted images manually, by launching a container from a base image.
A best practice is to automate the crafting of images using a Dockerfile.
The dockerfile is a text-based build script which contains special instructions, in sequence, for building the correct and relevant images from the base images.

Please note; we will explore all these combinations in different sessions.

Now, let us understand this automated approach from the below steps:
1. The sequence of instructions inside the Dockerfile begins by selecting the base image in the 1st statement.
2. The later statements install the required application, add the configuration and data files, and automatically run the services as well as expose those services to the external world.

This way the dockerfile based automated build approach has simplified the image building process.

It also offers a great deal of flexibility in organizing the build instructions and in visualizing the complete build process, while running the script instructions.

The Docker Engine tightly integrates this build process with the help of the docker ‘build’ subcommand.

This process involves the below steps:
1. In the client-server scenario of Docker, the Docker server (or daemon) is responsible for the complete build process.
2. The Docker command-line interface is responsible for transferring the build context, including the Dockerfile, to the daemon.
Now, let us list our existing images as below, in continuation of previous exercise:
=============>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
busybox latest 6ad733544a63 4 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$
====================>
Now, let us create a simple container from the ubuntu base image.
To create it, we create a file named ‘dockerfile’ (without extension) using vi in the current working directory.

Please note we do not have the vim utility in this Ubuntu base image.
=================>
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ vi dockerfile
============>
Now, let us cat the dockerfile as below:
=============>
vskumar@ubuntu:~$ ls -l
total 48
drwxr-xr-x 3 vskumar vskumar 4096 Nov 24 23:32 Desktop
-rw-rw-r– 1 vskumar vskumar 86 Dec 3 04:29 dockerfile
drwxr-xr-x 2 vskumar vskumar 4096 Nov 22 21:23 Documents
drwxr-xr-x 2 vskumar vskumar 4096 Nov 25 06:33 Downloads
-rw-r–r– 1 vskumar vskumar 8980 Nov 22 21:03 examples.desktop
drwxr-xr-x 2 vskumar vskumar 4096 Nov 22 21:23 Music
drwxr-xr-x 2 vskumar vskumar 4096 Nov 25 06:02 Pictures
drwxr-xr-x 2 vskumar vskumar 4096 Nov 22 21:23 Public
drwxr-xr-x 2 vskumar vskumar 4096 Nov 22 21:23 Templates
drwxr-xr-x 2 vskumar vskumar 4096 Nov 22 21:23 Videos
vskumar@ubuntu:~$
vskumar@ubuntu:~$ cat dockerfile
FROM ubuntu
CMD [“echo”, “This is done by vskumar for a lab practice of dockerfile”]
vskumar@ubuntu:~$
================>
From the above dockerfile contents:
1st line: FROM ubuntu – denotes it is using ubuntu as the base image to create the container.
2nd line: CMD [“echo”, “This is done by vskumar for a lab practice of dockerfile”]
denotes that the echo command is executed via CMD to print the message “This is done by vskumar for a lab practice of dockerfile”.
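The same dockerfile can also be written non-interactively with a heredoc instead of vi; note the straight double quotes, which the JSON (exec) form of CMD requires:

```shell
# Write the dockerfile from the shell; straight quotes keep the CMD
# line valid JSON for Docker's exec form.
cat > dockerfile <<'EOF'
FROM ubuntu
CMD ["echo", "This is done by vskumar for a lab practice of dockerfile"]
EOF
cat dockerfile
```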
Now let us run this file through the below command:
$ sudo docker build .
We can see the output as below:
===============>
vskumar@ubuntu:~$ sudo docker build .
Sending build context to Docker daemon 112MB
Step 1/2 : FROM ubuntu
—> 20c44cd7596f
Step 2/2 : CMD [“echo”, “This is done by vskumar for a lab practice of dockerfile”]
—> Running in 1de59a4799fa
Removing intermediate container 1de59a4799fa
—> 8de083612fef
Successfully built 8de083612fef
vskumar@ubuntu:~$
===================>
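Note the first line of the build output: the 112MB “build context” is the entire home directory, because the build was run from /home/vskumar. A sketch of the usual remedy, keeping the dockerfile in its own empty directory so the context stays tiny (directory name is illustrative):

```shell
# Hypothetical layout: a dedicated build directory holding only the dockerfile.
BUILD_DIR=$(mktemp -d)
cat > "$BUILD_DIR/dockerfile" <<'EOF'
FROM ubuntu
CMD ["echo", "This is done by vskumar for a lab practice of dockerfile"]
EOF
# On a real system you would now run: sudo docker build "$BUILD_DIR"
ls "$BUILD_DIR"
```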
Now, let us list the images:
==================>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 8de083612fef About a minute ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
busybox latest 6ad733544a63 4 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$
==========================>
We can see image id 8de083612fef was created just now.
Look into that line; there is no tag given.
Now let us tag it as below:
$ sudo docker tag 8de083612fef ubuntu-testbox1
================>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo docker tag 8de083612fef ubuntu-testbox1
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-testbox1 latest 8de083612fef 4 minutes ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
busybox latest 6ad733544a63 4 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$
====================>
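Untagged (&lt;none&gt;/&lt;none&gt;) images like fc7e4564eb92 above can be picked out of the `docker images` listing with awk; the sketch below runs against a captured sample so it works without a Docker daemon:

```shell
# Sample 'docker images' output captured from the session above.
sample='REPOSITORY TAG IMAGE ID
ubuntu-testbox1 latest 8de083612fef
<none> <none> fc7e4564eb92
ubuntu latest 20c44cd7596f'
# Print the IMAGE ID of every untagged image.
untagged=$(printf '%s\n' "$sample" | awk '$1=="<none>" && $2=="<none>" {print $3}')
echo "$untagged"   # → fc7e4564eb92
```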
Now, let us do some housekeeping on these containers.
Let us list the containers using the ps -a command:
====================>
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0fe495fc93ed ubuntu “/bin/bash -c ‘while…” 8 hours ago Exited (137) 4 hours ago hungry_engelbart
10ffea6140f9 ubuntu “bash” 7 days ago Exited (0) 7 days ago quizzical_lalande
b2a79f8d2fe6 ubuntu “/bin/bash -c ‘while…” 7 days ago Exited (255) 7 days ago goofy_borg
155f4b0764b1 ubuntu:16.04 “/bin/bash” 7 days ago Exited (0) 7 days ago zen_volhard
vskumar@ubuntu:~$
=====================>
I want to remove all of them. We can recreate them with the dockerfile as an exercise.
$ sudo docker container prune
=======================================>
vskumar@ubuntu:~$ sudo docker ps -aq
0fe495fc93ed
10ffea6140f9
b2a79f8d2fe6
155f4b0764b1
vskumar@ubuntu:~$ sudo docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd
10ffea6140f9c93b37bad2f9d159ad53aa121c0de69a9d145f07cc12f9591324
b2a79f8d2fe65453fce19f00d7adf03ed6dcced69ae68fba94ad0c416545263e
155f4b0764b16f1c8776a101cced6ea95c55eeabe69aeab8520cbe925bedc456

Total reclaimed space: 186B
vskumar@ubuntu:~$ sudo docker ps -aq
vskumar@ubuntu:~$
============== so now there are no containers =========>
Let us build the container.
Before building let us check the available images:
==================>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-testbox1 latest 8de083612fef 24 minutes ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
busybox latest 6ad733544a63 4 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$
====================>
Let us remove some more images also.
We need to use the below commands:
=========== Let us try one image removal =========>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-testbox1 latest 8de083612fef 33 minutes ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
busybox latest 6ad733544a63 4 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$ sudo docker rmi image 47bcc53f74dc
Untagged: busybox:1.24
Untagged: busybox@sha256:8ea3273d79b47a8b6d018be398c17590a4b5ec604515f416c5b797db9dde3ad8
Deleted: sha256:47bcc53f74dc94b1920f0b34f6036096526296767650f223433fe65c35f149eb
Deleted: sha256:f6075681a244e9df4ab126bce921292673c9f37f71b20f6be1dd3bb99b4fdd72
Deleted: sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6
Error: No such image: image
vskumar@ubuntu:~$
=================================>
So, by using:
sudo docker rmi [image id], we can remove the image.
(The extra word ‘image’ in the command above was unnecessary; it is what produced the “Error: No such image: image” message.)

Now, continuing our dockerfile exercise:
we can create a container from the ubuntu base image and install the vim package on it with the help of the dockerfile.
To do this we need the following dockerfile script:
——————>
FROM ubuntu
RUN apt-get update
RUN apt-get -y install vim
CMD ["echo", "This is done by vskumar for a lab demo on dockerfile"]
—————–>
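As a side note (not what the demo below does), the two RUN steps are often merged into a single layer so the apt cache update and the install always travel together; a sketch writing that variant to a separate, hypothetical file name:

```shell
# Variant with the RUN steps combined into one image layer.
cat > dockerfile.vim <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get -y install vim
CMD ["echo", "This is done by vskumar for a lab demo on dockerfile"]
EOF
grep -c '^RUN' dockerfile.vim   # → 1
```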
Before doing it, let me do some housekeeping.
I have removed the below image:
==================>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ sudo docker rmi image 6ad733544a63
Untagged: busybox:latest
Untagged: busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Deleted: sha256:6ad733544a6317992a6fac4eb19fe1df577d4dec7529efec28a5bd0edad0fd30
Deleted: sha256:0271b8eebde3fa9a6126b1f2335e170f902731ab4942f9f1914e77016540c7bb
Error: No such image: image
=====================>
See the current status:
===================>
vskumar@ubuntu:~$ ls
Desktop dockerfile Documents Downloads examples.desktop Music Pictures Public Templates Videos
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-testbox1 latest 8de083612fef About an hour ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$
======================>
Now let me update the dockerfile through vi and cat that file:
====================>
vskumar@ubuntu:~$
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ vi dockerfile
vskumar@ubuntu:~$ cat dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get -y install vim
CMD ["echo", "This is done by vskumar for a lab practice of dockerfile"]
vskumar@ubuntu:~$
====================>
Now let me run the below command:
$ sudo docker build -t ubuntu-vmbox .
This time, I have added the tag name 'ubuntu-vmbox'.
We need to understand that this build involves the following tasks:
1. Updating the ubuntu package lists; this takes some time and displays a lot of output.
2. Installing the vim utility; this also takes some time.
3. Displaying the message.
We can see this large size output:
=========== Update the packages and install vim in a container ==========>
vskumar@ubuntu:~$ pwd
/home/vskumar
vskumar@ubuntu:~$ sudo docker build -t ubuntu-vmbox .
Sending build context to Docker daemon 112MB
Step 1/4 : FROM ubuntu
latest: Pulling from library/ubuntu
Digest: sha256:7c67a2206d3c04703e5c23518707bdd4916c057562dd51c74b99b2ba26af0f79
Status: Downloaded newer image for ubuntu:latest
—> 20c44cd7596f
Step 2/4 : RUN apt-get update
—> Running in df81eaef9437
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [53.1 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages [176 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [231 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [866 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [13.7 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [719 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [18.5 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [5174 B]
Get:17 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [7150 B]
Get:18 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [505 kB]
Get:19 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.9 kB]
Get:20 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [229 kB]
Get:21 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [3479 B]
Fetched 24.6 MB in 2min 5s (196 kB/s)
Reading package lists…
Removing intermediate container df81eaef9437
—> 13cd766374bc
Step 3/4 : RUN apt-get -y install vim
—> Running in d37783a8cb7d
Reading package lists…
Building dependency tree…
Reading state information…
The following additional packages will be installed:
file libexpat1 libgpm2 libmagic1 libmpdec2 libpython3.5 libpython3.5-minimal
libpython3.5-stdlib libsqlite3-0 libssl1.0.0 mime-support vim-common
vim-runtime
Suggested packages:
gpm ctags vim-doc vim-scripts vim-gnome-py2 | vim-gtk-py2 | vim-gtk3-py2
| vim-athena-py2 | vim-nox-py2
The following NEW packages will be installed:
file libexpat1 libgpm2 libmagic1 libmpdec2 libpython3.5 libpython3.5-minimal
libpython3.5-stdlib libsqlite3-0 libssl1.0.0 mime-support vim vim-common
vim-runtime
0 upgraded, 14 newly installed, 0 to remove and 2 not upgraded.
Need to get 12.2 MB of archives.
After this operation, 58.3 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgpm2 amd64 1.20.4-6.1 [16.5 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmagic1 amd64 1:5.25-2ubuntu1 [216 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 file amd64 1:5.25-2ubuntu1 [21.2 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libexpat1 amd64 2.1.0-7ubuntu0.16.04.3 [71.2 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpdec2 amd64 2.4.2-1 [82.6 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.9 [1085 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5-minimal amd64 3.5.2-2ubuntu0~16.04.4 [523 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 mime-support all 3.59ubuntu1 [31.0 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 libsqlite3-0 amd64 3.11.0-1ubuntu1 [396 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5-stdlib amd64 3.5.2-2ubuntu0~16.04.4 [2132 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 vim-common amd64 2:7.4.1689-3ubuntu1.2 [103 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libpython3.5 amd64 3.5.2-2ubuntu0~16.04.4 [1360 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 vim-runtime all 2:7.4.1689-3ubuntu1.2 [5164 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 vim amd64 2:7.4.1689-3ubuntu1.2 [1036 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 12.2 MB in 12s (949 kB/s)
Selecting previously unselected package libgpm2:amd64.
(Reading database … 4768 files and directories currently installed.)
Preparing to unpack …/libgpm2_1.20.4-6.1_amd64.deb …
Unpacking libgpm2:amd64 (1.20.4-6.1) …
Selecting previously unselected package libmagic1:amd64.
Preparing to unpack …/libmagic1_1%3a5.25-2ubuntu1_amd64.deb …
Unpacking libmagic1:amd64 (1:5.25-2ubuntu1) …
Selecting previously unselected package file.
Preparing to unpack …/file_1%3a5.25-2ubuntu1_amd64.deb …
Unpacking file (1:5.25-2ubuntu1) …
Selecting previously unselected package libexpat1:amd64.
Preparing to unpack …/libexpat1_2.1.0-7ubuntu0.16.04.3_amd64.deb …
Unpacking libexpat1:amd64 (2.1.0-7ubuntu0.16.04.3) …
Selecting previously unselected package libmpdec2:amd64.
Preparing to unpack …/libmpdec2_2.4.2-1_amd64.deb …
Unpacking libmpdec2:amd64 (2.4.2-1) …
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack …/libssl1.0.0_1.0.2g-1ubuntu4.9_amd64.deb …
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.9) …
Selecting previously unselected package libpython3.5-minimal:amd64.
Preparing to unpack …/libpython3.5-minimal_3.5.2-2ubuntu0~16.04.4_amd64.deb …
Unpacking libpython3.5-minimal:amd64 (3.5.2-2ubuntu0~16.04.4) …
Selecting previously unselected package mime-support.
Preparing to unpack …/mime-support_3.59ubuntu1_all.deb …
Unpacking mime-support (3.59ubuntu1) …
Selecting previously unselected package libsqlite3-0:amd64.
Preparing to unpack …/libsqlite3-0_3.11.0-1ubuntu1_amd64.deb …
Unpacking libsqlite3-0:amd64 (3.11.0-1ubuntu1) …
Selecting previously unselected package libpython3.5-stdlib:amd64.
Preparing to unpack …/libpython3.5-stdlib_3.5.2-2ubuntu0~16.04.4_amd64.deb …
Unpacking libpython3.5-stdlib:amd64 (3.5.2-2ubuntu0~16.04.4) …
Selecting previously unselected package vim-common.
Preparing to unpack …/vim-common_2%3a7.4.1689-3ubuntu1.2_amd64.deb …
Unpacking vim-common (2:7.4.1689-3ubuntu1.2) …
Selecting previously unselected package libpython3.5:amd64.
Preparing to unpack …/libpython3.5_3.5.2-2ubuntu0~16.04.4_amd64.deb …
Unpacking libpython3.5:amd64 (3.5.2-2ubuntu0~16.04.4) …
Selecting previously unselected package vim-runtime.
Preparing to unpack …/vim-runtime_2%3a7.4.1689-3ubuntu1.2_all.deb …
Adding ‘diversion of /usr/share/vim/vim74/doc/help.txt to /usr/share/vim/vim74/doc/help.txt.vim-tiny by vim-runtime’
Adding ‘diversion of /usr/share/vim/vim74/doc/tags to /usr/share/vim/vim74/doc/tags.vim-tiny by vim-runtime’
Unpacking vim-runtime (2:7.4.1689-3ubuntu1.2) …
Selecting previously unselected package vim.
Preparing to unpack …/vim_2%3a7.4.1689-3ubuntu1.2_amd64.deb …
Unpacking vim (2:7.4.1689-3ubuntu1.2) …
Processing triggers for libc-bin (2.23-0ubuntu9) …
Setting up libgpm2:amd64 (1.20.4-6.1) …
Setting up libmagic1:amd64 (1:5.25-2ubuntu1) …
Setting up file (1:5.25-2ubuntu1) …
Setting up libexpat1:amd64 (2.1.0-7ubuntu0.16.04.3) …
Setting up libmpdec2:amd64 (2.4.2-1) …
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.9) …
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can’t locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up libpython3.5-minimal:amd64 (3.5.2-2ubuntu0~16.04.4) …
Setting up mime-support (3.59ubuntu1) …
Setting up libsqlite3-0:amd64 (3.11.0-1ubuntu1) …
Setting up libpython3.5-stdlib:amd64 (3.5.2-2ubuntu0~16.04.4) …
Setting up vim-common (2:7.4.1689-3ubuntu1.2) …
Setting up libpython3.5:amd64 (3.5.2-2ubuntu0~16.04.4) …
Setting up vim-runtime (2:7.4.1689-3ubuntu1.2) …
Setting up vim (2:7.4.1689-3ubuntu1.2) …
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in auto mode
Processing triggers for libc-bin (2.23-0ubuntu9) …
Removing intermediate container d37783a8cb7d
—> c07c6f2d2c65
Step 4/4 : CMD [“echo”, “This is done by vskumar for a lab practice of dockerfile”]
—> Running in f7e85f87b578
Removing intermediate container f7e85f87b578
—> f6675f4738b7
Successfully built f6675f4738b7
Successfully tagged ubuntu-vmbox:latest
vskumar@ubuntu:~$
=== Finally you can see the 'ubuntu-vmbox' tagged image ======>
We can see the latest image from the below images:
===== Current images list =====>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 3 minutes ago 220MB
ubuntu-testbox1 latest 8de083612fef About an hour ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$
=======================>
Now, I want to work with this newly created container. Please recollect my blog “https://vskumar.blog/2017/11/29/6-devops-how-to-work-with-interactive-docker-containers/”.
As we did practice in it; we can use the below command to work with this new container:

sudo docker run -i -t ubuntu-vmbox /bin/bash
I want to test whether vim works in it. See the below output:
==================>
vskumar@ubuntu:~$ sudo docker run -i -t ubuntu-vmbox /bin/bash

root@1169bb1285cf:/#
root@1169bb1285cf:/# pwd
/
root@1169bb1285cf:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@1169bb1285cf:/# vim test1
===== I have created the file with vim successfully ====>
Now let me use cat command and see its output:

================>
root@1169bb1285cf:/#
root@1169bb1285cf:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys test1 tmp usr var
root@1169bb1285cf:/# cat test1
testing this vim box……
root@1169bb1285cf:/#
=================>

So, in this exercise we have updated the ubuntu libraries and installed vim utility.
And tested the container for vim usage by using interactive mode.

=========== Now let me exit and check the list of images =====>
root@1169bb1285cf:/#
root@1169bb1285cf:/# exit
exit
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 13 minutes ago 220MB
ubuntu-testbox1 latest 8de083612fef About an hour ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$
=============================>
So, the new image 'ubuntu-vmbox' exists.

Now, I want to remove some images:
sudo docker rmi 20c44cd7596f
================>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 18 minutes ago 220MB
ubuntu-testbox1 latest 8de083612fef About an hour ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker rmi 20c44cd7596f
Error response from daemon: conflict: unable to delete 20c44cd7596f (cannot be forced) – image has dependent child images
vskumar@ubuntu:~$
== Please note the last image was the base used to build the top two images ===>
Hence they have a parent-child relationship.
We must remove the child images first; only then can the parent be removed.
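To see why an image refuses to delete, `docker history` shows the layer chain an image was built from; children must go before their parent. A sketch of the removal order, using the names and IDs from the listing above:

```shell
# Inspect the layer chain of the derived image, then remove images in
# child-before-parent order (IDs/names are from the listing above):
echo "sudo docker history ubuntu-vmbox"   # lists the layers it was built from
echo "sudo docker rmi ubuntu-vmbox"       # remove the child image first
echo "sudo docker rmi 20c44cd7596f"       # only then can the parent ubuntu go
```

Attempting the parent first reproduces exactly the "image has dependent child images" error shown above.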
=== You can see the removal of child one and one more image=====>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 20 minutes ago 220MB
ubuntu-testbox1 latest 8de083612fef 2 hours ago 123MB
docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker rmi 8de083612fef
Untagged: ubuntu-testbox1:latest
Deleted: sha256:8de083612fefbf9723913748f7db4aba4154b17adc500d011f44df356736f06c
vskumar@ubuntu:~$ sudo docker rmi e34304119838
Untagged: docker-exercise/ubuntu-wgetinstall:latest
Deleted: sha256:e34304119838d79da60e12776529106c350b1972cd517648e8ab90311fad7b1a
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 21 minutes ago 220MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$
=================>
Let me do some more exercises on housekeeping.
I would like to present some more dependency issues for the above images. You can clearly see the output:
========= Dependencies =======>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 22 minutes ago 220MB
<none> <none> fc7e4564eb92 7 days ago 169MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker rmi fc7e4564eb92
Deleted: sha256:fc7e4564eb928ccfe068c789f0d650967e8d5dc42d4e8d92409aab6614364075
Deleted: sha256:b16d78406b12e6dbc174f4e71bedb7b9edc0593cad10458ddf042738694c06db
vskumar@ubuntu:~$ sudo docker rmi 20c44cd7596f
Error response from daemon: conflict: unable to delete 20c44cd7596f (cannot be forced) – image has dependent child images
vskumar@ubuntu:~$ sudo docker rmi f6675f4738b7
Error response from daemon: conflict: unable to delete f6675f4738b7 (must be forced) – image is being used by stopped container 1169bb1285cf
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1169bb1285cf ubuntu-vmbox “/bin/bash” 15 minutes ago Exited (0) 11 minutes ago heuristic_mayer
vskumar@ubuntu:~$
====================>
It means the container '1169bb1285cf' (created from ubuntu-vmbox) depends on image id f6675f4738b7.
===========>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 27 minutes ago 220MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1169bb1285cf ubuntu-vmbox “/bin/bash” 19 minutes ago Exited (0) 14 minutes ago heuristic_mayer
vskumar@ubuntu:~$
==============>
So if I want to remove image id f6675f4738b7, I first need to remove container id 1169bb1285cf:
$ sudo docker rm 1169bb1285cf
and then use the image removal command as below.
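The two-step cleanup can be sketched as follows (IDs are from the listing above; note there is no literal word "container" or "image" in either command):

```shell
CONTAINER_ID="1169bb1285cf"   # stopped container still using the image
IMAGE_ID="f6675f4738b7"       # image to remove afterwards
# Remove the dependent container first, then the image:
echo "sudo docker rm ${CONTAINER_ID}"
echo "sudo docker rmi ${IMAGE_ID}"
```

Running `docker rmi` before `docker rm` reproduces the "image is being used by stopped container" error shown above.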
======================>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu-vmbox latest f6675f4738b7 31 minutes ago 220MB
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker rmi f6675f4738b7
Untagged: ubuntu-vmbox:latest
Deleted: sha256:f6675f4738b721780721f345906a0c78c13a67ee8239a16f071504b217f41658
Deleted: sha256:c07c6f2d2c651dd406977d42d5504c941d7f975a84c8547abaf3869b50942820
Deleted: sha256:4855cfb7ae6f84279bbbfe87e7691377531a541785c613014f64909e6e0f4528
Deleted: sha256:13cd766374bcb31cc0e8cac971e82754bb8e1bc66780abaff264f847e00a94b2
Deleted: sha256:dc6fab8a33a18a8c840e19612253657c4610ab865a26de5a31260f71bcef5f76
vskumar@ubuntu:~$
========================>
So we have the below images only now:
==== Current images ======>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$
==========================>
We can try to remove the above images:
========= See it is declined because the image is referenced in multiple repositories ===========>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest f2a91732366c 12 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB
ubuntu latest 20c44cd7596f 2 weeks ago 123MB
vskumar@ubuntu:~$ sudo docker rmi 20c44cd7596f
Error response from daemon: conflict: unable to delete 20c44cd7596f (must be forced) – image is referenced in multiple repositories
vskumar@ubuntu:~$
=========================>
Both ubuntu entries point to the same image ID. Since that ID is referenced by two repository tags (ubuntu:16.04 and ubuntu:latest), it cannot be removed by ID alone; each tag must be removed individually, or the removal forced with -f.
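A sketch of how such a multi-tagged image can be removed, using the tags from the listing above:

```shell
# One image ID tagged into multiple repositories must be untagged
# tag by tag; the layers are deleted along with the last tag.
echo "sudo docker rmi ubuntu:16.04"     # removes one tag reference
echo "sudo docker rmi ubuntu:latest"    # last tag: the layers go too
echo "sudo docker rmi -f 20c44cd7596f"  # alternative: force removal by ID
```

In this exercise we deliberately keep the ubuntu image, since it is the base for our further dockerfile work.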

We will stop this session at this time.

We will continue some more sessions on “dockerfile”.

 

Vcard-Shanthi Kumar V-v3

11. DevOps: How to Launch a container as a daemon ?

Docker-logo

In continuation of my previous blog on “10. DevOps: How to Build images from Docker containers?”, I am continuing my lab exercises. In this session we will see:

How to Launch a container as a daemon ?:

Note: If you want to recollect the docker commands to be used during your current lab practice, visit my blog link:

https://vskumarblogs.wordpress.com/2017/12/13/some-useful-docker-commands-for-handling-images-and-containers/

 

Let us recap the past exercises: so far we have experimented with an interactive container, tracked the changes made to containers, created images from containers, and gained insights into containerization scenarios.

Now, let us see container usage in detached mode.

When we run a container in detached mode, it runs as a daemon process.

I want to use the 'ubuntu' image and run the detached-mode command.

First, let me check my current docker images:

==================>

vskumar@ubuntu:~$

vskumar@ubuntu:~$ sudo docker images

[sudo] password for vskumar:

REPOSITORY TAG IMAGE ID CREATED SIZE

docker-exercise/ubuntu-wgetinstall latest e34304119838 7 days ago 169MB

<none> <none> fc7e4564eb92 7 days ago 169MB

hello-world latest f2a91732366c 12 days ago 1.85kB

ubuntu 16.04 20c44cd7596f 2 weeks ago 123MB

ubuntu latest 20c44cd7596f 2 weeks ago 123MB

busybox latest 6ad733544a63 4 weeks ago 1.13MB

busybox 1.24 47bcc53f74dc 20 months ago 1.11MB

vskumar@ubuntu:~$

===================>

You can see my previous image 'docker-exercise/ubuntu-wgetinstall'. This was created in the previous exercise.

As per our plan, in this session I am using the below command to run the ubuntu image:

sudo docker run -d ubuntu \

    /bin/bash -c "while true; do date; sleep 5; done";

========== Output ======>
vskumar@ubuntu:~$  sudo docker run -d ubuntu \
>     /bin/bash -c "while true; do date; sleep 5; done";
0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd
vskumar@ubuntu:~$
=======================>
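Notice that `docker run -d` prints the new container's full 64-character ID and returns control to the shell immediately, while the loop keeps running in the background. The ID can be captured and shortened for later commands; a sketch:

```shell
# Docker accepts any unambiguous prefix of a container ID, so the
# full 64-character ID returned by 'docker run -d' can be shortened:
CID="0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd"
SHORT=$(printf '%s' "$CID" | cut -c1-12)   # 12 chars is the usual short form
echo "sudo docker logs $SHORT"
```

This is why `docker ps` listings show only 12-character IDs.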

Now, to view the docker logs, I want to run the docker logs subcommand on container id: ‘ 0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd’

$ sudo docker logs 0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd;

=====See the output of the Daemon process running with the ubuntu image ===============>

vskumar@ubuntu:~$ sudo docker logs 0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd;

Sun Dec 3 05:11:57 UTC 2017

Sun Dec 3 05:12:02 UTC 2017

Sun Dec 3 05:12:07 UTC 2017

Sun Dec 3 05:12:12 UTC 2017

Sun Dec 3 05:12:17 UTC 2017

Sun Dec 3 05:12:22 UTC 2017

Sun Dec 3 05:12:27 UTC 2017

Sun Dec 3 05:12:32 UTC 2017

Sun Dec 3 05:12:37 UTC 2017

Sun Dec 3 05:12:42 UTC 2017

Sun Dec 3 05:12:48 UTC 2017

Sun Dec 3 05:12:53 UTC 2017

Sun Dec 3 05:12:58 UTC 2017

Sun Dec 3 05:13:03 UTC 2017

Sun Dec 3 05:13:08 UTC 2017

Sun Dec 3 05:13:13 UTC 2017

Sun Dec 3 05:13:18 UTC 2017

Sun Dec 3 05:13:23 UTC 2017

Sun Dec 3 05:13:28 UTC 2017

Sun Dec 3 05:13:33 UTC 2017

Sun Dec 3 05:13:38 UTC 2017

Sun Dec 3 05:13:43 UTC 2017

Sun Dec 3 05:13:48 UTC 2017

Sun Dec 3 05:13:53 UTC 2017

Sun Dec 3 05:13:58 UTC 2017

Sun Dec 3 05:14:03 UTC 2017

Sun Dec 3 05:14:08 UTC 2017

Sun Dec 3 05:14:13 UTC 2017

Sun Dec 3 05:14:18 UTC 2017

Sun Dec 3 05:14:23 UTC 2017

Sun Dec 3 05:14:28 UTC 2017

Sun Dec 3 05:14:33 UTC 2017

Sun Dec 3 05:14:38 UTC 2017

Sun Dec 3 05:14:43 UTC 2017

Sun Dec 3 05:14:48 UTC 2017

Sun Dec 3 05:14:53 UTC 2017

Sun Dec 3 05:14:58 UTC 2017

Sun Dec 3 05:15:03 UTC 2017

Sun Dec 3 05:15:08 UTC 2017

Sun Dec 3 05:15:13 UTC 2017

Sun Dec 3 05:15:18 UTC 2017

Sun Dec 3 05:15:23 UTC 2017

vskumar@ubuntu:~$

=================You can see the output for every few seconds listed =======>

It means the container is running as a daemon.
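The typical lifecycle commands for such a detached container can be sketched as follows (the shortened ID is from the run above):

```shell
CID="0fe495fc93ed"  # (shortened) container ID printed by 'docker run -d'
# Follow the logs, stop the daemonized container, and clean it up:
echo "sudo docker logs -f ${CID}"   # stream the log output (Ctrl+C to quit)
echo "sudo docker stop ${CID}"      # stop the background container
echo "sudo docker rm ${CID}"        # remove it once stopped
```

Without the stop/rm step, the while-true loop keeps running and the container keeps consuming resources.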

Now, let us use the ps -eaf command to check the processes running in Linux:

$ ps -eaf | grep 'daemon'

========= See the output of daemon processes ==========>

vskumar@ubuntu:~$

vskumar@ubuntu:~$ ps -eaf | grep 'daemon'

message+ 837 1 0 20:26 ? 00:00:05 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation

root 871 1 0 20:26 ? 00:00:03 /usr/sbin/NetworkManager --no-daemon

avahi 873 1 0 20:26 ? 00:00:00 avahi-daemon: running [ubuntu.local]

root 876 1 0 20:26 ? 00:00:01 /usr/lib/accountsservice/accounts-daemon

avahi 893 873 0 20:26 ? 00:00:00 avahi-daemon: chroot helper

rtkit 1370 1 0 20:28 ? 00:00:00 /usr/lib/rtkit/rtkit-daemon

vskumar 2426 1 0 20:55 ? 00:00:00 /usr/bin/gnome-keyring-daemon --daemonize --login

vskumar 2508 2428 0 20:55 ? 00:00:00 upstart-udev-bridge --daemon --user

vskumar 2515 2428 0 20:55 ? 00:00:04 dbus-daemon --fork --session --address=unix:abstract=/tmp/dbus-nPaV5rWlQc

vskumar 2570 2428 0 20:55 ? 00:00:03 /usr/lib/x86_64-linux-gnu/bamf/bamfdaemon

vskumar 2572 2428 0 20:55 ? 00:00:04 /usr/bin/ibus-daemon --daemonize --xim --address unix:tmpdir=/tmp/ibus

vskumar 2575 2428 0 20:55 ? 00:00:00 upstart-file-bridge --daemon --user

vskumar 2579 2428 0 20:55 ? 00:00:00 upstart-dbus-bridge --daemon --system --user --bus-name system

vskumar 2582 2428 0 20:55 ? 00:00:00 upstart-dbus-bridge --daemon --session --user --bus-name session

vskumar 2605 2428 0 20:55 ? 00:00:00 /usr/lib/ibus/ibus-x11 --kill-daemon

vskumar 2630 2428 0 20:56 ? 00:00:00 gpg-agent --homedir /home/vskumar/.gnupg --use-standard-socket --daemon

vskumar 2645 2428 0 20:56 ? 00:00:02 /usr/lib/unity-settings-daemon/unity-settings-daemon

vskumar 2664 2653 0 20:56 ? 00:00:00 /usr/bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3

vskumar 2851 2654 0 20:56 ? 00:00:01 /usr/lib/unity-settings-daemon/unity-fallback-mount-helper

vskumar 2914 2428 0 20:57 ? 00:00:00 /bin/sh -c /usr/lib/x86_64-linux-gnu/zeitgeist/zeitgeist-maybe-vacuum; /usr/bin/zeitgeist-daemon

vskumar 2920 2914 0 20:57 ? 00:00:00 /usr/bin/zeitgeist-daemon

vskumar 3094 2428 0 21:00 ? 00:00:01 /usr/lib/x86_64-linux-gnu/unity-lens-files/unity-files-daemon

root 4148 1253 0 21:11 ? 00:00:00 docker-containerd-shim --namespace moby --workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/0fe495fc93edee3aaadc7fc0fbf21997f0ca3cde4d7e563aa8c61352a43957dd --address /var/run/docker/containerd/docker-containerd.sock --runtime-root /var/run/docker/runtime-runc

vskumar 4480 3206 0 21:19 pts/19 00:00:00 grep --color=auto daemon

vskumar@ubuntu:~$

======== You can see the list of processes running currently ========>

So we have successfully run a container in detached mode [not in interactive mode!] using the command 'sudo docker run -d ubuntu'.

You can think of an application architecture with multiple servers, or an SOA running different services.

You can simulate the same services with docker containers, by building images configured with the required services and connecting them into the architecture.

This way the advantages of containers can be utilized well. Many companies implement their applications on container architectures and save a lot of infrastructure cost: no dedicated hardware or physical servers are required, and a lot of data-center space is saved. Microservices architecture follows the same path.

At this point, I would like to stop this session and in the next blog we will see other exercises.

Vcard-Shanthi Kumar V-v3

 

 

10. DevOps: How to Build images from Docker containers?

Docker-logo

This is in continuation of my last blog “9. DevOps: How to do Containers housekeeping ?”. In this blog I would like to demonstrate on:

How to Build images from docker containers?:

Note: If you want to recollect the docker commands to be used during your current lab practice, visit my blog link:

https://vskumarblogs.wordpress.com/2017/12/13/some-useful-docker-commands-for-handling-images-and-containers/

So far we have built containers and operated them through the previous exercises. Now, let us see how we can add software to our base image in a running container and then convert that container into an image for future use.

Let's take ubuntu:16.04 as our base image, install the wget application, and then convert the running container into an image with the below steps:

  1. Launch an ubuntu:16.04 container using the docker run subcommand, as shown below:
      $ sudo docker run -i -t ubuntu:16.04 /bin/bash
========================>
vskumar@ubuntu:~$ sudo docker ps -aq
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up 11 minutes                           zen_volhard
vskumar@ubuntu:~$ sudo docker run -i -t ubuntu:16.04 /bin/bash
root@3484664d454a:/# 
=========================>
2. Now, let's verify whether wget is available in this image.
============== the display shows there is no wget in this image =========>

root@3484664d454a:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@3484664d454a:/# which wget
root@3484664d454a:/# 

==================>
Since this is a brand-new ubuntu container, before installing wget we must synchronize it with the Ubuntu package repository, as shown below:
====================>
root@3484664d454a:/# apt-get update
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]         
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]                                                                      
Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]                                                                    
Get:5 http://archive.ubuntu.com/ubuntu xenial/universe Sources [9802 kB]                                                                      
Get:6 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [53.1 kB]                                                            
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [504 kB]                                                          
Get:8 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.9 kB]                                                   
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [229 kB]                                                      
Get:10 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [3479 B]                                                   
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]                                                                  
Get:12 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]                                                            
Get:13 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]                                                              
Get:14 http://archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages [176 kB]                                                             
Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [228 kB]                                                              
Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [864 kB]                                                           
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [13.7 kB]                                                    
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [711 kB]                                                       
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [18.5 kB]                                                    
Get:20 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [5174 B]                                                         
Get:21 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [7135 B]                                                     
Fetched 24.6 MB in 59s (412 kB/s)                                                                                                             
Reading package lists... Done
root@3484664d454a:/# 
================================>
Now, we can install wget as below:
=========== Output of wget installation on container ===========>

root@3484664d454a:/# 
root@3484664d454a:/# apt-get install -y wget
Reading package lists... Done
Building dependency tree        
Reading state information... Done
The following additional packages will be installed:
  ca-certificates libidn11 libssl1.0.0 openssl
The following NEW packages will be installed:
  ca-certificates libidn11 libssl1.0.0 openssl wget
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 2089 kB of archives.
After this operation, 6027 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libidn11 amd64 1.32-3ubuntu1.2 [46.5 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libssl1.0.0 amd64 1.0.2g-1ubuntu4.9 [1085 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 openssl amd64 1.0.2g-1ubuntu4.9 [492 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 ca-certificates all 20170717~16.04.1 [168 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 wget amd64 1.17.1-1ubuntu1.3 [299 kB]
Fetched 2089 kB in 4s (421 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libidn11:amd64.
(Reading database ... 4768 files and directories currently installed.)
Preparing to unpack .../libidn11_1.32-3ubuntu1.2_amd64.deb ...
Unpacking libidn11:amd64 (1.32-3ubuntu1.2) ...
Selecting previously unselected package libssl1.0.0:amd64.
Preparing to unpack .../libssl1.0.0_1.0.2g-1ubuntu4.9_amd64.deb ...
Unpacking libssl1.0.0:amd64 (1.0.2g-1ubuntu4.9) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1ubuntu4.9_amd64.deb ...
Unpacking openssl (1.0.2g-1ubuntu4.9) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20170717~16.04.1_all.deb ...
Unpacking ca-certificates (20170717~16.04.1) ...
Selecting previously unselected package wget.
Preparing to unpack .../wget_1.17.1-1ubuntu1.3_amd64.deb ...
Unpacking wget (1.17.1-1ubuntu1.3) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Setting up libidn11:amd64 (1.32-3ubuntu1.2) ...
Setting up libssl1.0.0:amd64 (1.0.2g-1ubuntu4.9) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up openssl (1.0.2g-1ubuntu4.9) ...
Setting up ca-certificates (20170717~16.04.1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up wget (1.17.1-1ubuntu1.3) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for ca-certificates (20170717~16.04.1) ...
Updating certificates in /etc/ssl/certs...
148 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
root@3484664d454a:/# 
=========================== End of installation ===========>
Now, we can verify the installation with the 'which wget' command:
============>
root@3484664d454a:/# which wget
/usr/bin/wget
root@3484664d454a:/# 
============>
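As an extra smoke test (a sketch; it only assumes wget is on the PATH inside the container), you can also print the installed version:

```shell
# Quick smoke test: confirm wget is installed and on the PATH.
if command -v wget >/dev/null 2>&1; then
  wget_path=$(command -v wget)
  echo "wget found at $wget_path"
  wget --version | head -n 1
else
  wget_path=""
  echo "wget not installed"
fi
```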
Please recollect: installing any software alters the composition of the Docker base image. We can trace these changes using the docker diff subcommand, as we did in the previous exercises (in its output, A marks an added file, C a changed one, and D a deleted one).
I will open a second terminal/screen and issue the docker diff subcommand from there, as below:
      $ sudo docker diff <container-id>
===============>
vskumar@ubuntu:~$  
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
3484664d454a        ubuntu:16.04        "/bin/bash"         15 minutes ago      Up 15 minutes                           jolly_cray
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up 40 minutes                           zen_volhard
vskumar@ubuntu:~$ sudo docker diff 155f4b0764b1
C /root
A /root/.bash_history
vskumar@ubuntu:~$ 
============>

How to save this container ?:
The docker commit subcommand can be performed on a running or a stopped container. When a commit is performed on a running container, the Docker Engine pauses the container during the commit operation in order to avoid any data inconsistency. 
Now we can stop our running container.
We can commit a container to an image with the docker commit subcommand, as shown here:
      $ sudo docker commit <container-id> [repository[:tag]]

================== Using commit for container ============>

root@3484664d454a:/# 
root@3484664d454a:/# exit
exit
vskumar@ubuntu:~$ sudo docker commit 3484664d454a
[sudo] password for vskumar: 
Sorry, try again.
[sudo] password for vskumar: 
sha256:fc7e4564eb928ccfe068c789f0d650967e8d5dc42d4e8d92409aab6614364075
vskumar@ubuntu:~$ 
=======================>
The sha256 string in the above output is the ID of the newly created image.

=========== We can also give a repository name (it must be lowercase) to the commit command, as below ===>
vskumar@ubuntu:~$ sudo docker commit 3484664d454a  Docker-exercise/ubuntu-wgetinstall
invalid reference format: repository name must be lowercase
vskumar@ubuntu:~$ sudo docker commit 3484664d454a  docker-exercise/ubuntu-wgetinstall
sha256:e34304119838d79da60e12776529106c350b1972cd517648e8ab90311fad7b1a
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                       PORTS               NAMES
3484664d454a        ubuntu:16.04        "/bin/bash"         24 minutes ago      Exited (130) 6 minutes ago                       jolly_cray
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up About an hour                                 zen_volhard
vskumar@ubuntu:~$ 
===================== Note there are two containers created  ====>
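For reference, docker commit also accepts an author (-a) and a commit message (-m), and the repository name can carry a tag. A sketch (the container ID and repository name are from this exercise; the v1 tag is my own choice, and it is guarded so it degrades gracefully where docker is unavailable):

```shell
# Commit the container to a tagged image with author (-a) and message (-m).
image_ref="docker-exercise/ubuntu-wgetinstall:v1"
if command -v docker >/dev/null 2>&1; then
  sudo -n docker commit -a "vskumar" -m "wget installed" \
    3484664d454a "$image_ref" || echo "commit skipped"
else
  echo "docker not available - showing the intended command only"
  echo "sudo docker commit -a vskumar -m 'wget installed' 3484664d454a $image_ref"
fi
```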
Now, I want to remove one container:
==========>

vskumar@ubuntu:~$ sudo docker rm 3484664d454a
3484664d454a
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         3 hours ago         Up About an hour                        zen_volhard
vskumar@ubuntu:~$ 
========================>

Now let us check how many docker images we have in our local store:
=========== List of images ==========>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
docker-exercise/ubuntu-wgetinstall   latest              e34304119838        5 minutes ago       169MB
<none>                               <none>              fc7e4564eb92        7 minutes ago       169MB
hello-world                          latest              f2a91732366c        5 days ago          1.85kB
ubuntu                               16.04               20c44cd7596f        8 days ago          123MB
ubuntu                               latest              20c44cd7596f        8 days ago          123MB
busybox                              latest              6ad733544a63        3 weeks ago         1.13MB
busybox                              1.24                47bcc53f74dc        20 months ago       1.11MB
vskumar@ubuntu:~$ 

==============================>
How to remove images:

We can remove an image with:

sudo docker rmi <image id>

For example, to remove the image with ID 47bcc53f74dc:

$ sudo docker rmi 47bcc53f74dc

(In the output below I typed a stray extra word "image"; docker still removed 47bcc53f74dc, but then reported "No such image: image" for that stray argument.)
=================>
vskumar@ubuntu:~$ sudo docker rmi image 47bcc53f74dc
Untagged: busybox:1.24
Untagged: busybox@sha256:8ea3273d79b47a8b6d018be398c17590a4b5ec604515f416c5b797db9dde3ad8
Deleted: sha256:47bcc53f74dc94b1920f0b34f6036096526296767650f223433fe65c35f149eb
Deleted: sha256:f6075681a244e9df4ab126bce921292673c9f37f71b20f6be1dd3bb99b4fdd72
Deleted: sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6
Error: No such image: image
vskumar@ubuntu:~$ 
=================>
 

So, by using sudo docker rmi <image id>, we can remove an image. Just recollect the difference between image removal and container removal; for container removal, refer to my blog on "Housekeeping containers". We have now learned how to create an image from a container in a few easy steps by installing the wget application. You can add other software applications to the same or different container(s) in a similar way.

You can use this method for testing also. Say you want to test a set of Java programs: install the JDK, copy your programs into the container, and write a shell script that compiles and executes them, piping their output into a text file in the background. This way you use the container as a test environment as well.
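A minimal sketch of that test-runner idea (all the names here, such as /opt/tests and /tmp/test-results.txt, are hypothetical placeholders):

```shell
#!/bin/sh
# Hypothetical test-runner: compile every .java file in TEST_DIR and run the
# resulting classes, piping all output into a log file in the background.
TEST_DIR="${TEST_DIR:-/opt/tests}"        # hypothetical location of the .java files
LOG="${LOG:-/tmp/test-results.txt}"       # hypothetical log location
if command -v javac >/dev/null 2>&1 && [ -d "$TEST_DIR" ]; then
  ( cd "$TEST_DIR" && javac ./*.java && \
    for cls in ./*.class; do java "$(basename "$cls" .class)"; done
  ) > "$LOG" 2>&1 &
  echo "tests started; results will appear in $LOG"
else
  echo "JDK or $TEST_DIR missing - nothing to run"
fi
```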

The easiest and recommended way of creating an image is the Dockerfile method.

In a Dockerfile we describe, step by step, the setup required to build an image; docker's build process then produces the image from which containers can be run.

We will see it in future exercises.
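As a quick preview of that method (a sketch only; the image tag wget-demo is my own placeholder), the same wget setup could be expressed as a Dockerfile and built in one step:

```shell
# Write a minimal Dockerfile reproducing this wget exercise, then build it.
cat > /tmp/Dockerfile.wget <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y wget
EOF
if command -v docker >/dev/null 2>&1; then
  sudo -n docker build -t wget-demo -f /tmp/Dockerfile.wget /tmp || echo "build skipped"
else
  echo "docker not available - Dockerfile written to /tmp/Dockerfile.wget only"
fi
```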

Please leave your feedback!


9. DevOps: How to do Containers housekeeping ?


In continuation of my previous blog, “8. DevOps: How to control and operate docker containers”, in this blog I would like to show some lab practice on docker containers housekeeping.

From the previous lab sessions we have accumulated many containers, as seen whenever we used the ps -a option.

We have used only two containers most of the time; the others are not required. This time we will see how to remove a container permanently.

Let us consider the below containers to remove using rm command:

32bc16b508d4        ubuntu 
a744246ffb8e        hello-world
1dd55efde43f        hello-world

$sudo docker rm 1dd55efde43f 
$sudo docker rm a744246ffb8e 
$sudo docker rm 32bc16b508d4 
================ You can see the above three containers are removed =========>
vskumar@ubuntu:~$ sudo docker rm 1dd55efde43f 
1dd55efde43f
vskumar@ubuntu:~$ sudo docker rm a744246ffb8e 
a744246ffb8e
vskumar@ubuntu:~$ sudo docker rm 32bc16b508d4 
32bc16b508d4
vskumar@ubuntu:~$ sudo docker ps -a |more
CONTAINER ID        IMAGE               COMMAND                 CREATED         
    STATUS                         PORTS               NAMES
f123dbd09116        ubuntu:16.04        "/bin/bash"             18 minutes ago  
    Exited (0) 18 minutes ago                          elastic_nightingale
3cfdea29ce6e        ubuntu              "bash"                  27 minutes ago  
    Exited (0) 26 minutes ago                          gallant_nobel
155f4b0764b1        ubuntu:16.04        "/bin/bash"             About an hour ag
o   Exited (0) 12 minutes ago                          zen_volhard
11e293722c64        ubuntu:16.04        "/bin/bash"             About an hour ag
o   Exited (0) About an hour ago                       dreamy_bassi
d10ad2bd62f7        ubuntu:16.04        "/bin/bash"             About an hour ag
o   Exited (0) About an hour ago                       cranky_dijkstra
cb1ff260d48e        ubuntu              "ls /usr/src"           11 hours ago    
    Exited (0) 11 hours ago                            wonderful_hawking
b20691fd8fb5        ubuntu              "ls /usr"               11 hours ago    
    Exited (0) 11 hours ago                            friendly_mirzakhani
431ba4c53028        ubuntu              "ls"                    11 hours ago    
    Exited (0) 28 minutes ago                          affectionate_nobel
2c31684bb1f4        ubuntu              "ls -la"                11 hours ago    
    Exited (0) 11 hours ago                            zealous_meitner
fe2e3b449daf        ubuntu              "ls -la /home/."        11 hours ago    
    Exited (0) 11 hours ago                            dreamy_shirley
c44bdd05b94d        ubuntu              "ls -la home."          11 hours ago    
    Exited (2) 11 hours ago                            elastic_pasteur
8b8afa82859a        ubuntu              "ls -la"                11 hours ago    
    Exited (0) 11 hours ago                            festive_panini
2811eb37af61        ubuntu              "ls -la 604831dbce2a"   11 hours ago    
    Exited (2) 11 hours ago                            jolly_swartz
604831dbce2a        ubuntu:16.04        "/bin/bash"             11 hours ago    
    Exited (0) 11 hours ago                            vibrant_ride
718636415a7f        ubuntu:16.04        "/bin/bash"             12 hours ago    
    Exited (0) 12 hours ago                            reverent_noyce
53a7751d4673        ubuntu:16.04        "/bin/bash"             13 hours ago    
    Exited (0) 13 hours ago                            musing_chandrasekhar
1ba71598b7b8        hello-world         "/hello"                16 hours ago    
    Exited (0) 16 hours ago                            musing_kare
vskumar@ubuntu:~$  
==============>
Now let us consider some more containers to remove, as below:
3cfdea29ce6e        ubuntu 
cb1ff260d48e        ubuntu 
b20691fd8fb5        ubuntu 
431ba4c53028        ubuntu 
2c31684bb1f4        ubuntu 
fe2e3b449daf        ubuntu
c44bdd05b94d        ubuntu 
2811eb37af61        ubuntu
Now, let us use the below rm commands:
$sudo docker rm 3cfdea29ce6e
$sudo docker rm cb1ff260d48e
$sudo docker rm b20691fd8fb5 
$sudo docker rm 431ba4c53028 
$sudo docker rm 2c31684bb1f4
$sudo docker rm fe2e3b449daf 
$sudo docker rm c44bdd05b94d
$sudo docker rm 2811eb37af61
=================>
See the below output also:
 ================== Container removal ==========>
vskumar@ubuntu:~$ clear

vskumar@ubuntu:~$ sudo docker rm 3cfdea29ce6e
3cfdea29ce6e
vskumar@ubuntu:~$ sudo docker rm cb1ff260d48e
cb1ff260d48e
vskumar@ubuntu:~$ sudo docker rm b20691fd8fb5
b20691fd8fb5
vskumar@ubuntu:~$ sudo docker rm 431ba4c53028
431ba4c53028
vskumar@ubuntu:~$ sudo docker rm 2c31684bb1f4
2c31684bb1f4
vskumar@ubuntu:~$ sudo docker rm fe2e3b449daf
fe2e3b449daf
vskumar@ubuntu:~$ sudo docker rm fc44bdd05b94d
Error: No such container: fc44bdd05b94d
vskumar@ubuntu:~$ sudo docker rm c44bdd05b94d
c44bdd05b94d
vskumar@ubuntu:~$ sudo docker rm 2811eb37af61
2811eb37af61
vskumar@ubuntu:~$ 
==========================>
Now we can see the list of available containers:
============= List of latest containers ==============>
vskumar@ubuntu:~$ sudo docker ps -a |more
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
f123dbd09116        ubuntu:16.04        "/bin/bash"         28 minutes ago      Exited (0) 28 minutes ago                          elastic_nigh
tingale
155f4b0764b1        ubuntu:16.04        "/bin/bash"         About an hour ago   Exited (0) 22 minutes ago                          zen_volhard
11e293722c64        ubuntu:16.04        "/bin/bash"         About an hour ago   Exited (0) About an hour ago                       dreamy_bassi
d10ad2bd62f7        ubuntu:16.04        "/bin/bash"         2 hours ago         Exited (0) About an hour ago                       cranky_dijks
tra
8b8afa82859a        ubuntu              "ls -la"            11 hours ago        Exited (0) 11 hours ago                            festive_pani
ni
604831dbce2a        ubuntu:16.04        "/bin/bash"         12 hours ago        Exited (0) 11 hours ago                            vibrant_ride
718636415a7f        ubuntu:16.04        "/bin/bash"         12 hours ago        Exited (0) 12 hours ago                            reverent_noy
ce
53a7751d4673        ubuntu:16.04        "/bin/bash"         13 hours ago        Exited (0) 13 hours ago                            musing_chand
rasekhar
1ba71598b7b8        hello-world         "/hello"            16 hours ago        Exited (0) 16 hours ago                            musing_kare
vskumar@ubuntu:~$ 
===========================>
Now, I want to keep only a few containers and remove the below ones:

604831dbce2a        ubuntu:16.04
718636415a7f        ubuntu:16.04 
53a7751d4673        ubuntu:16.04
8b8afa82859a        ubuntu 
I want to use the below  commands to remove  the above containers:
$sudo docker rm 604831dbce2a
$sudo docker rm 718636415a7f
$sudo docker rm 53a7751d4673
$sudo docker rm 8b8afa82859a

========================= We can see the latest/limited containers =======>
vskumar@ubuntu:~$ sudo docker rm 604831dbce2a
604831dbce2a
vskumar@ubuntu:~$ sudo docker rm 718636415a7f
718636415a7f
vskumar@ubuntu:~$ sudo docker rm 53a7751d4673
53a7751d4673
vskumar@ubuntu:~$ sudo docker rm 8b8afa82859a
8b8afa82859a
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
f123dbd09116        ubuntu:16.04        "/bin/bash"         36 minutes ago      Exited (0) 36 minutes ago                          elastic_nightingale
155f4b0764b1        ubuntu:16.04        "/bin/bash"         About an hour ago   Exited (0) 30 minutes ago                          zen_volhard
11e293722c64        ubuntu:16.04        "/bin/bash"         About an hour ago   Exited (0) About an hour ago                       dreamy_bassi
d10ad2bd62f7        ubuntu:16.04        "/bin/bash"         2 hours ago         Exited (0) About an hour ago                       cranky_dijkstra
1ba71598b7b8        hello-world         "/hello"            16 hours ago        Exited (0) 16 hours ago                            musing_kare
vskumar@ubuntu:~$ sudo docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
vskumar@ubuntu:~$ 
================================>
We can also see the current container ids as below:
========== Listing containers ids ===============>
vskumar@ubuntu:~$ sudo docker ps -aq
f123dbd09116
155f4b0764b1
11e293722c64
d10ad2bd62f7
1ba71598b7b8
vskumar@ubuntu:~$ 
===============================>
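Instead of issuing docker rm once per ID, the IDs listed by ps -aq can also be fed to rm in one shot. A sketch (guarded so it only targets exited containers and skips gracefully where docker is unavailable):

```shell
# Remove every container whose status is "exited" in a single command.
if command -v docker >/dev/null 2>&1; then
  ids=$(sudo -n docker ps -aq -f status=exited 2>/dev/null || true)
  if [ -n "$ids" ]; then
    sudo -n docker rm $ids || echo "removal skipped"
  else
    echo "no exited containers to remove"
  fi
else
  ids=""
  echo "docker not available - dry run"
fi
```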
To remove all the inactive (stopped) containers at once, there is the prune subcommand. Let us try it.
Before doing so, I will make one container active, to show that prune leaves running containers untouched:
================= I have made one container Active ======>
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
f123dbd09116        ubuntu:16.04        "/bin/bash"         About an hour ago   Exited (0) About an hour ago                       elastic_nightingale
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Exited (0) 40 minutes ago                          zen_volhard
11e293722c64        ubuntu:16.04        "/bin/bash"         2 hours ago         Exited (0) 2 hours ago                             dreamy_bassi
d10ad2bd62f7        ubuntu:16.04        "/bin/bash"         2 hours ago         Exited (0) 2 hours ago                             cranky_dijkstra
1ba71598b7b8        hello-world         "/hello"            17 hours ago        Exited (0) 17 hours ago                            musing_kare
vskumar@ubuntu:~$ sudo docker start 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up 6 seconds                            zen_volhard
vskumar@ubuntu:~$ 
========================>
To use prune, the below format should be used:
$ sudo docker container prune
=========== The usage of prune command =======>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
f123dbd09116561a042e12060f449daa9a36d9a59034b1dd1b96846e66ead14d
11e293722c646a0def7a8a1f2cdf85a47654eb62ef7701bd2d7221c7e69a943f
d10ad2bd62f7a8de379272f21dfccec89c0e5829b3a58ce01927530b6b44ea01
1ba71598b7b8d97fcbd3a589a6665238690be99936b6782647b5040eeb82aafa
Total reclaimed space: 844B
vskumar@ubuntu:~$ 
========== You can see the removed container ids =============>
You can see the existing  containers:
====== Available containers after Housekeeping is done =========>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up 6 minutes                            zen_volhard
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         2 hours ago         Up 6 minutes                            zen_volhard
vskumar@ubuntu:~$ sudo docker ps -aq
155f4b0764b1
vskumar@ubuntu:~$
===================>
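prune also accepts filters, and -f skips the y/N prompt. A sketch (the 24h filter value is illustrative, and the block is guarded for systems without docker):

```shell
# Prune stopped containers non-interactively (-f skips the y/N prompt);
# the until filter spares anything created within the last 24 hours.
prune_args='-f --filter until=24h'
if command -v docker >/dev/null 2>&1; then
  sudo -n docker container prune $prune_args || echo "prune skipped"
else
  echo "docker not available - would run: docker container prune $prune_args"
fi
```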
In this exercise we have covered the housekeeping of containers.
Please note: if you delete all the containers by mistake, you will need to create them again from the images.
Follow the containers creation exercise.

I would like to break this session at this point. In the next blog I would like to present the lab practice on:

 “How to Build images from Docker containers?”



					

8. DevOps: How to control and operate docker containers


In continuation of my previous blog, “7. DevOps: How to track changes in a container”, in this blog I would like to show some lab practice on how to control and operate docker containers.

Controlling/operating Docker container:

In this exercise initially, we can see on how to start/stop/restart the containers.

The Docker Engine enables us to start, stop, and restart a container with a set of docker subcommands.
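All three subcommands take a container ID (or name). As a compact sketch (the ID below is a placeholder from this series, and the block is guarded for systems without docker): stop sends SIGTERM and then SIGKILL after a grace period, and restart is effectively a stop followed by a start.

```shell
# Lifecycle control of a container by ID (placeholder ID shown).
CID="155f4b0764b1"   # substitute your own container ID
if command -v docker >/dev/null 2>&1; then
  sudo -n docker stop "$CID"          || echo "stop skipped"
  sudo -n docker start "$CID"         || echo "start skipped"
  sudo -n docker restart -t 5 "$CID"  || echo "restart skipped"   # -t: grace seconds
else
  echo "docker not available - dry run for $CID"
fi
```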

Let me display the docker images:

=======================>

vskumar@ubuntu:~$ sudo service docker status
docker.service – Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
   Active: active (running) since Sat 2017-11-25 15:09:35 PST; 2min 24s ago
     Docs: https://docs.docker.com
 Main PID: 1356 (dockerd)
    Tasks: 30
   Memory: 95.2M
      CPU: 3.998s
=========================>
vskumar@ubuntu:~$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              16.04               20c44cd7596f        8 days ago          123MB
ubuntu              latest              20c44cd7596f        8 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
busybox             1.24                47bcc53f74dc        20 months ago       1.11MB
vskumar@ubuntu:~$

=======================>

Now, I want to launch a container from the ubuntu:16.04 image with the run subcommand, and then experiment with the docker stop subcommand, as given below:

$ sudo docker run -i -t ubuntu:16.04 /bin/bash

======================>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker run -i -t ubuntu:16.04 /bin/bash
root@d10ad2bd62f7:/# 
======================>
Now, we are with this container in interactive mode.
Let us apply some linux commands as below:
========================>
root@d10ad2bd62f7:/# pwd
/
root@d10ad2bd62f7:/# ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
root@d10ad2bd62f7:/# cd home
root@d10ad2bd62f7:/home# ls
root@d10ad2bd62f7:/home# cd ../var
root@d10ad2bd62f7:/var# ls
backups  cache  lib  local  lock  log  mail  opt  run  spool  tmp
root@d10ad2bd62f7:/var# cd tmp
root@d10ad2bd62f7:/var/tmp# pwd
/var/tmp
root@d10ad2bd62f7:/var/tmp# ls
root@d10ad2bd62f7:/var/tmp# cd ../lib
root@d10ad2bd62f7:/var/lib# ls
apt  dpkg  initscripts  insserv  misc  pam  systemd  update-rc.d  urandom
root@d10ad2bd62f7:/var/lib# 
================================>

Now I want to create a file as below in this container:
==================>

root@d10ad2bd62f7:/var/lib# pwd
/var/lib
root@d10ad2bd62f7:/var/lib# cd ../../home
root@d10ad2bd62f7:/home# ls
root@d10ad2bd62f7:/home# touch file1.txt
===================>

Let me add some text into this file as below:
==========>
root@d10ad2bd62f7:/home# echo " Testing containers " > file1.txt
root@d10ad2bd62f7:/home# echo " Applying stop command on containers " > file1.txt
root@d10ad2bd62f7:/home# cat file1.txt
 Applying stop command on containers 
root@d10ad2bd62f7:/home# echo " Testing containers " > file1.txt
root@d10ad2bd62f7:/home# echo " Applying stop command on containers " >> file1.txt
root@d10ad2bd62f7:/home# ls
file1.txt
root@d10ad2bd62f7:/home# ls -l
total 4
-rw-r--r-- 1 root root 59 Nov 25 23:20 file1.txt
root@d10ad2bd62f7:/home# cat file1.txt 
Testing containers 
Applying stop command on containers 
root@d10ad2bd62f7:/home# 
===============>

I have applied some more linux file operations on this container as below:
=================>
root@d10ad2bd62f7:/home#      
root@d10ad2bd62f7:/home# cat file1.txt >> file2.txt
root@d10ad2bd62f7:/home# ls
file1.txt  file2.txt
root@d10ad2bd62f7:/home# ls -l
total 8
-rw-r--r-- 1 root root 59 Nov 25 23:20 file1.txt
-rw-r--r-- 1 root root 59 Nov 25 23:22 file2.txt
root@d10ad2bd62f7:/home# diff file1.txt file2.txt
root@d10ad2bd62f7:/home# echo " Applying restart command also on containers " >> file1.txt
root@d10ad2bd62f7:/home# ls -l
total 8
-rw-r--r-- 1 root root 105 Nov 25 23:23 file1.txt
-rw-r--r-- 1 root root  59 Nov 25 23:22 file2.txt
root@d10ad2bd62f7:/home# diff file1.txt file2.txt
3d2
<  Applying restart command also on containers 
root@d10ad2bd62f7:/home# 
====================>
Now, let me come out of the container with exit and then apply the stop subcommand on it, as below:
=====================>
root@155f4b0764b1:/# 
root@155f4b0764b1:/# exit
exit
vskumar@ubuntu:~$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              16.04               20c44cd7596f        8 days ago          123MB
ubuntu              latest              20c44cd7596f        8 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
busybox             1.24                47bcc53f74dc        20 months ago       1.11MB
vskumar@ubuntu:~$ sudo docker stop  d10ad2bd62f7
d10ad2bd62f7
vskumar@ubuntu:~$ 
=============>
Now, I want to check the containers status using ps -a command as below:
==============>
vskumar@ubuntu:~$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS                     PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"             2 minutes ago       Exited (0) 2 minutes ago                       zen_volhard
cb1ff260d48e        ubuntu              "ls /usr/src"           10 hours ago        Exited (0) 10 hours ago                        wonderful_hawking
b20691fd8fb5        ubuntu              "ls /usr"               10 hours ago        Exited (0) 10 hours ago                        friendly_mirzakhani
431ba4c53028        ubuntu              "ls"                    10 hours ago        Exited (0) 10 hours ago                        affectionate_nobel
2c31684bb1f4        ubuntu              "ls -la"                10 hours ago        Exited (0) 10 hours ago                        zealous_meitner
fe2e3b449daf        ubuntu              "ls -la /home/."        10 hours ago        Exited (0) 10 hours ago                        dreamy_shirley
c44bdd05b94d        ubuntu              "ls -la home."          10 hours ago        Exited (2) 10 hours ago                        elastic_pasteur
8b8afa82859a        ubuntu              "ls -la"                10 hours ago        Exited (0) 10 hours ago                        festive_panini
2811eb37af61        ubuntu              "ls -la 604831dbce2a"   10 hours ago        Exited (2) 10 hours ago                        jolly_swartz
604831dbce2a        ubuntu:16.04        "/bin/bash"             10 hours ago        Exited (0) 10 hours ago                        vibrant_ride
718636415a7f        ubuntu:16.04        "/bin/bash"             11 hours ago        Exited (0) 10 hours ago                        reverent_noyce
53a7751d4673        ubuntu:16.04        "/bin/bash"             12 hours ago        Exited (0) 12 hours ago                        musing_chandrasekhar
32bc16b508d4        ubuntu              "bash"                  13 hours ago        Exited (0) 13 hours ago                        eager_goldberg
1dd55efde43f        hello-world         "/hello"                13 hours ago        Exited (0) 13 hours ago                        peaceful_pasteur
a744246ffb8e        hello-world         "/hello"                15 hours ago        Exited (0) 15 hours ago                        naughty_wing
1ba71598b7b8        hello-world         "/hello"                15 hours ago        Exited (0) 15 hours ago                        musing_kare
vskumar@ubuntu:~$ 
===================>
you can see the latest status of our container;
155f4b0764b1        ubuntu:16.04        "/bin/bash"             2 minutes ago       Exited (0) 2 minutes ago                       zen_volhard
This shows that Docker also keeps a record of each container's usage and status in its logs.
Now, I want to start the previously stopped container using the docker start subcommand 
by specifying the container ID as an argument, as follows:
$ sudo docker start 155f4b0764b1
===============>
vskumar@ubuntu:~$ sudo docker start 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ 
===============>
Let us check the images status also as below:
==================> Copied only the first two entries ----->
vskumar@ubuntu:~$ sudo docker ps -a |more
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS                      PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"             10 minutes ago      Up About a minute                               zen_volhard
11e293722c64        ubuntu:16.04        "/bin/bash"             12 minutes ago      Exited (0) 12 minutes ago  
====================>
This shows the current status of the container ID 155f4b0764b1.
We need to notice one thing here.

By default, the docker start subcommand will not attach to the container.

We can attach it to the container either using the -a option in the docker start subcommand or by explicitly using the docker attach subcommand.

Now let us try these options.

First, we will see the attach subcommand:

$ sudo docker attach 155f4b0764b1
=================>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker attach 155f4b0764b1
root@155f4b0764b1:/# 
root@155f4b0764b1:/#
=================>
So the attach command brought the container into interactive mode.
Now let me exit it and try the -a option with docker start command:
==================>
root@155f4b0764b1:/home# 
root@155f4b0764b1:/home# exit
exit
===============>
with start -a option:
=============>
vskumar@ubuntu:~$ sudo docker start -a 155f4b0764b1
root@155f4b0764b1:/# 
=================>
After exit, I have tried ps command:
=====================>
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         21 minutes ago      Up 3 minutes                            zen_volhard
vskumar@ubuntu:~$ 
======================>
From the above display you can see its start time and current status: the container is active and running.
Now, I want to make another [below] container active.
1dd55efde43f        hello-world         "/hello"                13 hours ago Exited (0) 13 hours ago                        peaceful_pasteur
Let us see the ps command after these 2 containers are in active state.
I want to use the below command:
$ sudo docker start -a 1dd55efde43f
===================>
vskumar@ubuntu:~$ sudo docker start -a 1dd55efde43f

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

vskumar@ubuntu:~$ 
===================>
Please note the above container does not run any long-lived OS process; it only prints its message and exits. Hence it will not appear in the list of running containers.
Now, let me list the current processes using docker ps command:
===========>
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         32 minutes ago      Up 14 minutes                           zen_volhard
vskumar@ubuntu:~$ 
==============>
So, as of now, only one container is running.

The next set of container controlling subcommands are docker pause and docker unpause.

The docker pause subcommand will freeze the execution of all the processes within the container.

The docker unpause subcommand will unfreeze the execution of all the processes within the container and resume the execution from the point where it was frozen.
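Under the hood, docker pause is implemented with the Linux cgroups freezer, and docker unpause thaws it. The effect is comparable to stopping and continuing an ordinary process with signals. The following is only a sketch of that analogy using a plain background process, with no Docker needed:

```shell
# Start a long-running background process (a stand-in for a container's main process).
sleep 300 &
pid=$!

# "Pause": freeze the process; docker pause similarly freezes every process in the container.
kill -STOP "$pid"
ps -o stat= -p "$pid"   # the state field contains 'T' (stopped)

# "Unpause": resume from exactly where it was frozen, like docker unpause.
kill -CONT "$pid"
ps -o stat= -p "$pid"   # the state field no longer contains 'T'

kill "$pid"             # clean up
```

The key property in both cases is that no state is lost: execution resumes from the exact point where it was frozen.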

Let us try the below command 
$sudo docker pause 155f4b0764b1
========================>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker pause 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                   PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         About an hour ago   Up 30 minutes (Paused)                       zen_volhard
vskumar@ubuntu:~$ 
==========================>
You can see the current status as Paused.
Now let me try unpause command also.
$ sudo docker unpause 155f4b0764b1
You can see the total output of this container with pause and unpause statuses:
===================>
vskumar@ubuntu:~$ 
vskumar@ubuntu:~$ sudo docker pause 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                   PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         About an hour ago   Up 30 minutes (Paused)                       zen_volhard
vskumar@ubuntu:~$ ^C
vskumar@ubuntu:~$ sudo docker unpause 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
155f4b0764b1        ubuntu:16.04        "/bin/bash"         About an hour ago   Up 32 minutes                           zen_volhard
vskumar@ubuntu:~$ 
======================>
Now, in this lab session finally we will use the stop command:

The container and the script running within it can be stopped using the docker stop subcommand, as shown below:

$ sudo docker stop 155f4b0764b1
=====================> 
vskumar@ubuntu:~$ sudo docker stop 155f4b0764b1
155f4b0764b1
vskumar@ubuntu:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED STATUS              PORTS               NAMES
=============== It shows there is no active container =============>
Now, let me try the -a option piped through more.
=========== Partial display is shown here upto the container ===========>
vskumar@ubuntu:~$ sudo docker ps -a |more
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS                         PORTS               NAMES
f123dbd09116        ubuntu:16.04        "/bin/bash"             6 minutes ago       Exited (0) 5 minutes ago                           elastic_nightingale
3cfdea29ce6e        ubuntu              "bash"                  14 minutes ago      Exited (0) 14 minutes ago                          gallant_nobel
155f4b0764b1        ubuntu:16.04        "/bin/bash"             About an hour ago   Exited (0) 17 seconds ago  
================================>

So far in this lab session, we have seen the differences of different commands to operate and control the containers. I would like to break this session for now. In the next blog we will see on how to manage “Housekeeping containers“.

 Vcard-Shanthi Kumar V

 

7. DevOps: How to track changes in a container

Docker-logo

In  continuation of my previous blog on “6. DevOps: How to work with interactive docker containers”, in this blog I would like to show some lab practice “How to track changes in a container”.

Tracking changes inside containers:

Now, let us see the container operations and tracking them.

Let’s launch a container in interactive mode, as we have done in previous session, we can use the below command.

$ sudo docker run -i -t ubuntu:16.04 /bin/bash 
=================>
vskumar@ubuntu:/var/log$ sudo docker run -i -t ubuntu:16.04 /bin/bash  
root@718636415a7f:/# ps
   PID TTY          TIME CMD
     1 pts/0    00:00:00 bash
     9 pts/0    00:00:00 ps
root@718636415a7f:/# ps -ef
UID         PID   PPID  C STIME TTY          TIME CMD
root          1      0  0 12:39 pts/0    00:00:00 /bin/bash
root         10      1  0 12:53 pts/0    00:00:00 ps -ef
root@718636415a7f:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@718636415a7f:/# 
======================>
Now, let us go to home directory:
========>
root@718636415a7f:/# pwd
/
root@718636415a7f:/# cd home
root@718636415a7f:/home# pwd
/home
root@718636415a7f:/home# ls
root@718636415a7f:/home# 
==============>
Now, as a standalone machine of this docker container, 
I want to create 4 text files using touch command as below:
==============>
root@718636415a7f:/home# ls
root@718636415a7f:/home# ls -l
total 0
root@718636415a7f:/home# touch {vsk1,vsk2,vsk3,vsk4}
root@718636415a7f:/home# ls -l
total 0
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk1
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk2
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk3
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk4
root@718636415a7f:/home# 
======================>
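The touch {vsk1,vsk2,vsk3,vsk4} form works because of bash brace expansion: the shell expands the list before touch ever runs, so it is equivalent to touch vsk1 vsk2 vsk3 vsk4. A quick way to see the expansion is with echo:

```shell
# The shell expands the braces before the command runs:
echo {vsk1,vsk2,vsk3,vsk4}   # prints: vsk1 vsk2 vsk3 vsk4

# A numeric range form is also available:
echo file{1..3}              # prints: file1 file2 file3
```

Note that brace expansion is a bash feature; a plain POSIX sh would pass the braces through literally.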

I am adding some text to each of them as below:

====================>

root@718636415a7f:/home# pwd
/home
root@718636415a7f:/home# echo 'Testing vsk1' > vsk1
root@718636415a7f:/home# ls -l
total 4
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk1
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk2
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk3
-rw-r--r-- 1 root root 0 Nov 25 12:57 vsk4
root@718636415a7f:/home# echo 'Testing vsk2' > vsk2
root@718636415a7f:/home# echo 'Testing vsk3' > vsk3
root@718636415a7f:/home# echo 'NOT Testing vsk4' > vsk4
root@718636415a7f:/home# ls -l
total 16
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk1
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk2
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk3
-rw-r--r-- 1 root root 17 Nov 25 13:02 vsk4
root@718636415a7f:/home#

=====================>

I have created 4 files and added some text into them.

Now, I want to execute a diff command on them:

==========================>

root@718636415a7f:/home# diff vsk1 vsk2
1c1
< Testing vsk1
---
> Testing vsk2
root@718636415a7f:/home# diff vsk2 vsk3
1c1
< Testing vsk2
---
> Testing vsk3
root@718636415a7f:/home# echo 'NOT Testing vsk4' > vsk1
root@718636415a7f:/home# diff vsk1 vsk4
root@718636415a7f:/home# diff vsk2 vsk4
1c1
< Testing vsk2
---
> NOT Testing vsk4
root@718636415a7f:/home#

===========================>
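The 1c1 line in diff's normal output format means "line 1 of the first file was changed into line 1 of the second file"; lines prefixed < come from the first file and lines prefixed > from the second. You can reproduce the same output outside a container:

```shell
# Two one-line files that differ.
echo 'Testing vsk1' > vsk1
echo 'Testing vsk2' > vsk2

# diff exits non-zero when the files differ, so '|| true' keeps scripts running.
diff vsk1 vsk2 || true
# 1c1
# < Testing vsk1
# ---
# > Testing vsk2

# Identical files produce no output at all, as with 'diff vsk1 vsk4' above.
rm vsk1 vsk2
```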

Now, I want to exit this container using exit and go back to the Docker host.

From the host machine, I can then run the docker diff subcommand against the container:

===========================>

root@718636415a7f:/home# ls -l
total 16
-rw-r--r-- 1 root root 17 Nov 25 13:05 vsk1
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk2
-rw-r--r-- 1 root root 13 Nov 25 13:02 vsk3
-rw-r--r-- 1 root root 17 Nov 25 13:02 vsk4
root@718636415a7f:/home# exit
exit
vskumar@ubuntu:/var/log$ sudo docker diff 718636415a7f
[sudo] password for vskumar:
C /home
A /home/vsk1
A /home/vsk2
A /home/vsk3
A /home/vsk4
C /root
A /root/.bash_history
vskumar@ubuntu:/var/log$

=====================>

The first line, ‘C /home’, shows that the home directory was changed (‘C’).

An ‘A’ before a path denotes that the file was added.

A deleted file would be shown with a ‘D’ before its path.
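Because the output is one prefix letter plus a path per line, it is easy to post-process with standard tools. A small sketch, using the docker diff output captured above saved to a file (the filename diff.txt is just an example):

```shell
# Sample 'docker diff' output (C = changed, A = added, D = deleted):
cat > diff.txt <<'EOF'
C /home
A /home/vsk1
A /home/vsk2
A /home/vsk3
A /home/vsk4
C /root
A /root/.bash_history
EOF

# List only the files the container added:
grep '^A ' diff.txt | awk '{print $2}'
# /home/vsk1
# /home/vsk2
# /home/vsk3
# /home/vsk4
# /root/.bash_history

rm diff.txt
```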

Also, please note how the Docker Engine picks an image: when we do not specify a tag, the Docker Engine uses the image tagged as latest.

We can check the status of all the containers using ps -a; you can see its detailed output in the display below:

==================================>

vskumar@ubuntu:/var/log$ ls
alternatives.log  bootstrap.log  dmesg  fsck  kern.log  speech-dispatcher  unattended-upgrades  wtmp
apport.log  btmp  dpkg.log  gpu-manager.log  lastlog  syslog  upstart  Xorg.0.log
apt  cups  faillog  hp  lightdm  syslog.1  vmware  Xorg.0.log.old
auth.log  dist-upgrade  fontconfig.log  installer  samba  syslog.2.gz  vmware-vmsvc.log
vskumar@ubuntu:/var/log$ sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                 CREATED             STATUS                      PORTS               NAMES
cb1ff260d48e        ubuntu              "ls /usr/src"           3 minutes ago       Exited (0) 3 minutes ago                        wonderful_hawking
b20691fd8fb5        ubuntu              "ls /usr"               3 minutes ago       Exited (0) 3 minutes ago                        friendly_mirzakhani
431ba4c53028        ubuntu              "ls"                    3 minutes ago       Exited (0) 3 minutes ago                        affectionate_nobel
2c31684bb1f4        ubuntu              "ls -la"                3 minutes ago       Exited (0) 3 minutes ago                        zealous_meitner
fe2e3b449daf        ubuntu              "ls -la /home/."        4 minutes ago       Exited (0) 4 minutes ago                        dreamy_shirley
c44bdd05b94d        ubuntu              "ls -la home."          4 minutes ago       Exited (2) 4 minutes ago                        elastic_pasteur
8b8afa82859a        ubuntu              "ls -la"                4 minutes ago       Exited (0) 4 minutes ago                        festive_panini
2811eb37af61        ubuntu              "ls -la 604831dbce2a"   4 minutes ago       Exited (2) 4 minutes ago                        jolly_swartz
604831dbce2a        ubuntu:16.04        "/bin/bash"             8 minutes ago       Exited (0) 6 minutes ago                        vibrant_ride
718636415a7f        ubuntu:16.04        "/bin/bash"             45 minutes ago      Exited (0) 18 minutes ago                       reverent_noyce
53a7751d4673        ubuntu:16.04        "/bin/bash"             2 hours ago         Exited (0) 2 hours ago                          musing_chandrasekhar
32bc16b508d4        ubuntu              "bash"                  3 hours ago         Exited (0) 3 hours ago                          eager_goldberg
1dd55efde43f        hello-world         "/hello"                3 hours ago         Exited (0) 3 hours ago                          peaceful_pasteur
a744246ffb8e        hello-world         "/hello"                5 hours ago         Exited (0) 5 hours ago                          naughty_wing
1ba71598b7b8        hello-world         "/hello"                5 hours ago         Exited (0) 5 hours ago                          musing_kare
vskumar@ubuntu:/var/log$

============================>

I would like to terminate the session at this point. In the next blog I would like to present “How to control and operate docker containers”.


https://youtu.be/IlPhLm_2se4

6. DevOps: How to work with interactive Docker containers


In  continuation of my previous blog on “5. DevOps: How to work with Docker Images”, in this blog I would like to show some lab practice on Interactive Docker containers.

Working with an interactive Docker container:

In the previous lab session, we worked with first Hello World container. And we came to know how the containerization works. Now, we are going to run a container in interactive mode.

What is docker run command ?:

The docker run subcommand takes an image as an input and launches it as a container.

What flags we need to use ?:

We have to pass the -t and -i flags to the docker run subcommand in order to make the container interactive.

The -i flag is the key driver, it makes the container interactive by grabbing the standard input (STDIN) of the container into the terminal.

The -t flag allocates a pseudo-TTY or a pseudo Terminal (Terminal emulator) and then assigns that to the container.

Note: Please note that in the earlier session we launched a container using just the plain ubuntu image name.

But now, we will explore completely the interactive container operations.

In the below example, we are going to launch an interactive container using the ubuntu:16.04 image and /bin/bash as the command:

$ sudo docker run -i -t ubuntu:16.04 /bin/bash 

=========== Output ============>

vskumar@ubuntu:/var/log$ sudo docker run -i -t ubuntu:16.04 /bin/bash

Unable to find image ‘ubuntu:16.04’ locally

16.04: Pulling from library/ubuntu

Digest: sha256:7c67a2206d3c04703e5c23518707bdd4916c057562dd51c74b99b2ba26af0f79

Status: Downloaded newer image for ubuntu:16.04

root@53a7751d4673:/#

===================>

Why does the [Unable to find image] message appear ?:

Since the ubuntu:16.04 image has not been downloaded yet, docker run automatically starts pulling it, printing the following message:

Unable to find image 'ubuntu:16.04' locally
16.04: Pulling from library/ubuntu

When the download is completed, the container will get launched along with the ubuntu:16.04 image.

It will also launch a Bash shell within the container, because we have specified /bin/bash as the command to be executed. This landed us in a Bash prompt, as shown below:

root@53a7751d4673:/#

What is ’53a7751d4673′?:

It is the hostname of the container. In Docker, the hostname is the same as the container ID.

Now, let us run a few commands interactively and confirm what we mentioned about the prompt is correct, as shown below:

To check the hostname below commands need to be executed:

root@53a7751d4673:/# hostname

root@53a7751d4673:/# id

root@53a7751d4673:/# echo $PS1

When we execute them, can see the below output:

==============>

root@53a7751d4673:/#

root@53a7751d4673:/# hostname

53a7751d4673

root@53a7751d4673:/# id

uid=0(root) gid=0(root) groups=0(root)

root@53a7751d4673:/# echo $PS1

\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$

root@53a7751d4673:/#

=====================>

So, we have seen the hostname as ’53a7751d4673′ and the ID as ‘root’.

The PS1 variable controls the prompt; it displays the username, hostname, and current working directory. In this example, it contains the following three escapes:

\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$

  • \u – Username
  • \h – Hostname
  • \w – Full path of the current working directory
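With bash 4.4 or newer, you can check what each escape expands to without opening a new interactive prompt, using the ${var@P} prompt-expansion operator. A small sketch (the @P operator is a bash-specific feature):

```shell
# ${PS1@P} renders the string the way bash would render it as a prompt (bash >= 4.4).
bash -c 'PS1="\u"; echo "${PS1@P}"'   # prints the username (\u)
bash -c 'PS1="\h"; echo "${PS1@P}"'   # prints the short hostname (\h) -- the container ID inside Docker
bash -c 'PS1="\w"; echo "${PS1@P}"'   # prints the current working directory (\w)
```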

==============>

root@53a7751d4673:/# pwd

/

root@53a7751d4673:/#

==========>

Note, we are within the ubuntu 16.04 container, and it works as a Linux machine. So we can try some Linux commands also:

===============>

root@53a7751d4673:/# ps
  PID TTY          TIME CMD
    1 pts/0    00:00:00 bash
   26 pts/0    00:00:00 ps
root@53a7751d4673:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 11:28 pts/0    00:00:00 /bin/bash
root        27     1  0 11:48 pts/0    00:00:00 ps -ef
root@53a7751d4673:/#
root@53a7751d4673:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@53a7751d4673:/# ls -l
total 64
drwxr-xr-x   2 root root 4096 Nov 14 13:49 bin
drwxr-xr-x   2 root root 4096 Apr 12  2016 boot
drwxr-xr-x   5 root root  360 Nov 25 11:28 dev
drwxr-xr-x  45 root root 4096 Nov 25 11:28 etc
drwxr-xr-x   2 root root 4096 Apr 12  2016 home
drwxr-xr-x   8 root root 4096 Sep 13  2015 lib
drwxr-xr-x   2 root root 4096 Nov 14 13:49 lib64
drwxr-xr-x   2 root root 4096 Nov 14 13:48 media
drwxr-xr-x   2 root root 4096 Nov 14 13:48 mnt
drwxr-xr-x   2 root root 4096 Nov 14 13:48 opt
dr-xr-xr-x 250 root root    0 Nov 25 11:28 proc
drwx------   2 root root 4096 Nov 14 13:49 root
drwxr-xr-x   6 root root 4096 Nov 14 13:49 run
drwxr-xr-x   2 root root 4096 Nov 17 21:59 sbin
drwxr-xr-x   2 root root 4096 Nov 14 13:48 srv
dr-xr-xr-x  13 root root    0 Nov 25 11:28 sys
drwxrwxrwt   2 root root 4096 Nov 14 13:49 tmp
drwxr-xr-x  11 root root 4096 Nov 14 13:48 usr
drwxr-xr-x  13 root root 4096 Nov 14 13:49 var
root@53a7751d4673:/#

================>

So, ubuntu 16.04 container is nothing but a linux machine and we executed the above commands.

Now, I want to change the permissions of the /root directory as below:

==============>

root@53a7751d4673:/# chmod +777 root
root@53a7751d4673:/# ls -l
total 64
drwxr-xr-x   2 root root 4096 Nov 14 13:49 bin
drwxr-xr-x   2 root root 4096 Apr 12  2016 boot
drwxr-xr-x   5 root root  360 Nov 25 11:28 dev
drwxr-xr-x  45 root root 4096 Nov 25 11:28 etc
drwxr-xr-x   2 root root 4096 Apr 12  2016 home
drwxr-xr-x   8 root root 4096 Sep 13  2015 lib
drwxr-xr-x   2 root root 4096 Nov 14 13:49 lib64
drwxr-xr-x   2 root root 4096 Nov 14 13:48 media
drwxr-xr-x   2 root root 4096 Nov 14 13:48 mnt
drwxr-xr-x   2 root root 4096 Nov 14 13:48 opt
dr-xr-xr-x 255 root root    0 Nov 25 11:28 proc
drwxrwxrwx   2 root root 4096 Nov 14 13:49 root
drwxr-xr-x   6 root root 4096 Nov 14 13:49 run
drwxr-xr-x   2 root root 4096 Nov 17 21:59 sbin
drwxr-xr-x   2 root root 4096 Nov 14 13:48 srv
dr-xr-xr-x  13 root root    0 Nov 25 11:48 sys
drwxrwxrwt   2 root root 4096 Nov 14 13:49 tmp
drwxr-xr-x  11 root root 4096 Nov 14 13:48 usr
drwxr-xr-x  13 root root 4096 Nov 14 13:49 var
root@53a7751d4673:/#

================>

Now, I want to exit from container and come back to host machine.

==================>

root@53a7751d4673:/#

root@53a7751d4673:/# exit

exit

vskumar@ubuntu:/var/log$

===============>

Whenever the Bash exit command is used in the interactive container, it terminates the Bash shell process.

In turn, this stops the container and returns us to the Docker host machine.

As a result, we can see the Docker host’s prompt ($).


You can see the status of docker images as below when I used ‘sudo docker images’ :

==================>

vskumar@ubuntu:/var/log$
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              16.04               20c44cd7596f        7 days ago          123MB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
busybox             1.24                47bcc53f74dc        20 months ago       1.11MB
vskumar@ubuntu:/var/log$

=====================>

You can see all the images we have used in the past exercises.

At this point, I would like to stop this lab session. And in the next blog we can see on “How to track changes in a container?”.

 


 

 

5. DevOps: How to work with Docker Images


In  continuation of my previous blog on “4. DevOps: How to work with Docker Containers”, in this blog I would like to give some lab practice on Docker Images.

How to pull the docker public images ?:
The Docker Hub portal hosts numerous public images.
So we need to know the usage of the docker pull command, which is the de facto command to download Docker images.

Now, in this section, we will use the busybox image, one of the smallest but most handy Docker images, to dive deep into Docker image handling:
$sudo docker pull busybox
============= Output ============>
vskumar@ubuntu:/var/tmp$ sudo docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
0ffadd58f2a6: Pull complete
Digest: sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Status: Downloaded newer image for busybox:latest
vskumar@ubuntu:/var/tmp$
=================>
Sometimes the registry may reject this request; we need to keep retrying until it succeeds. I tried 4 times at different timings before it connected.
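Since the registry can intermittently refuse connections, a small retry helper saves re-typing the pull each time. This is a generic sketch; the docker command at the bottom is only an illustration of how you might call it:

```shell
# Retry a command up to 4 times, pausing between attempts.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge 4 ]; then
      return 1                          # give up after 4 failed attempts
    fi
    echo "attempt $n failed, retrying..." >&2
    sleep 2
  done
}

# Example usage (requires a running Docker daemon):
# retry sudo docker pull busybox
```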
Please note now, we have three images as below through all of our so far exercises:
===================>
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
vskumar@ubuntu:/var/log$
===============>
Now, let us stop the docker services and check the status as below:
=================>
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
vskumar@ubuntu:/var/log$ sudo service docker stop
vskumar@ubuntu:/var/log$ sudo service docker status
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
   Active: inactive (dead) since Sat 2017-11-25 02:52:25 PST; 8s ago
     Docs: https://docs.docker.com
  Process: 1224 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=0/SUCCE
 Main PID: 1224 (code=exited, status=0/SUCCESS)

Nov 25 02:21:42 ubuntu dockerd[1224]: time="2017-11-25T02:21:42.863518710-08:00"
Nov 25 02:21:43 ubuntu dockerd[1224]: time="2017-11-25T02:21:43-08:00" level=inf
Nov 25 02:27:08 ubuntu dockerd[1224]: time="2017-11-25T02:27:08.010096274-08:00"
Nov 25 02:27:08 ubuntu dockerd[1224]: time="2017-11-25T02:27:08-08:00" level=inf
Nov 25 02:27:08 ubuntu dockerd[1224]: time="2017-11-25T02:27:08.199685599-08:00"
Nov 25 02:52:25 ubuntu dockerd[1224]: time="2017-11-25T02:52:25.010875880-08:00"
Nov 25 02:52:25 ubuntu systemd[1]: Stopping Docker Application Container Engine.
Nov 25 02:52:25 ubuntu dockerd[1224]: time="2017-11-25T02:52:25.081714537-08:00"
Nov 25 02:52:25 ubuntu systemd[1]: Stopped Docker Application Container Engine.
Nov 25 02:52:25 ubuntu systemd[1]: Stopped Docker Application Container Engine.
vskumar@ubuntu:/var/log$
====================>
You can see the inactive status of docker.
In such cases, restart the Docker service, as shown here:

$ sudo service docker restart
You can see the output as below:
==================>
vskumar@ubuntu:/var/log$ sudo service docker restart
vskumar@ubuntu:/var/log$ sudo service docker status 
docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
   Active: active (running) since Sat 2017-11-25 02:54:42 PST; 6s ago
     Docs: https://docs.docker.com
 Main PID: 3769 (dockerd)
    Tasks: 18
   Memory: 24.6M
      CPU: 989ms
   CGroup: /system.slice/docker.service
           ├─3769 /usr/bin/dockerd -H fd://
           └─3778 docker-containerd --config /var/run/docker/containerd/containe

Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.159062708-08:00"
Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.159806997-08:00"
Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.163503112-08:00"
Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.743276580-08:00"
Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.955217284-08:00"
Nov 25 02:54:41 ubuntu dockerd[3769]: time="2017-11-25T02:54:41.975961283-08:00"
Nov 25 02:54:42 ubuntu dockerd[3769]: time="2017-11-25T02:54:42.092220161-08:00"
Nov 25 02:54:42 ubuntu dockerd[3769]: time="2017-11-25T02:54:42.094334663-08:00"
Nov 25 02:54:42 ubuntu systemd[1]: Started Docker Application Container Engine.
Nov 25 02:54:42 ubuntu dockerd[3769]: time="2017-11-25T02:54:42.190194886-08:00"

vskumar@ubuntu:/var/log$ 

========================>

Now, let us reconfirm the existing docker images as below:
================>

vskumar@ubuntu:/var/log$ ^C
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
vskumar@ubuntu:/var/log$ 
=======================>

By default, Docker always uses the image that is tagged as latest.

Each image variant can be directly identified by qualifying it with an appropriate tag.

An image can be tag-qualified by adding a colon (:) between the tag and the repository name (<repository>:<tag>). For lab demonstration, we will pull the 1.24 tagged version of busybox, as shown here:

$ sudo docker pull busybox:1.24

Now you can see the full output before and after executing the above command:

==============================>

vskumar@ubuntu:/var/log$ ^C
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
vskumar@ubuntu:/var/log$ ^C
vskumar@ubuntu:/var/log$
vskumar@ubuntu:/var/log$ sudo docker pull busybox:1.24
1.24: Pulling from library/busybox
385e281300cc: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8ea3273d79b47a8b6d018be398c17590a4b5ec604515f416c5b797db9dde3ad8
Status: Downloaded newer image for busybox:1.24
vskumar@ubuntu:/var/log$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              f2a91732366c        4 days ago          1.85kB
ubuntu              latest              20c44cd7596f        7 days ago          123MB
busybox             latest              6ad733544a63        3 weeks ago         1.13MB
busybox             1.24                47bcc53f74dc        20 months ago       1.11MB
vskumar@ubuntu:/var/log$

==============================>

There are now two busybox images with different versions.

So images are pulled on the basis of their TAG values.
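For scripting, the repository and tag parts of a simple <repository>:<tag> reference (one with no registry host or port) can be split with POSIX parameter expansion; the image names below are only example strings:

```shell
image="busybox:1.24"

repo=${image%%:*}   # remove from the first ':' onward -> repository
tag=${image##*:}    # remove up to the last ':'        -> tag

echo "repository=$repo tag=$tag"   # prints: repository=busybox tag=1.24

# Docker assumes ':latest' when no tag is given:
image="ubuntu"
case "$image" in
  *:*) tag=${image##*:} ;;
  *)   tag="latest" ;;
esac
echo "tag=$tag"                    # prints: tag=latest
```

Note this simple split breaks on references that contain a registry port, such as localhost:5000/ubuntu; those need the slash handled first.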

How to Search Docker images:

So far we have pulled the known images from the docker-hub.

Let us identify some docker images by using a search option as below. We can search for Docker images in the Docker Hub Registry using the docker search subcommand, as shown in this example:

$ sudo docker search mysql

You can see the displayed output of mysql images from the Docker Hub Registry:

=============>

vskumar@ubuntu:/var/log$ sudo docker search mysql
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mysql MySQL is a widely used, open-source relation… 5278 [OK]
mariadb MariaDB is a community-developed fork of MyS… 1634 [OK]
mysql/mysql-server Optimized MySQL Server Docker images. Create… 368 [OK]
percona Percona Server is a fork of the MySQL relati… 303 [OK]
hypriot/rpi-mysql RPi-compatible Docker Image with Mysql 74
zabbix/zabbix-server-mysql Zabbix Server with MySQL database support 64 [OK]
centurylink/mysql Image containing mysql. Optimized to be link… 53 [OK]
sameersbn/mysql 48 [OK]
zabbix/zabbix-web-nginx-mysql Zabbix frontend based on Nginx web-server wi… 38 [OK]
tutum/mysql Base docker image to run a MySQL database se… 29
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 17 [OK]
schickling/mysql-backup-s3 Backup MySQL to S3 (supports periodic backup… 16 [OK]
centos/mysql-57-centos7 MySQL 5.7 SQL database server 15
linuxserver/mysql A Mysql container, brought to you by LinuxSe… 12
centos/mysql-56-centos7 MySQL 5.6 SQL database server 6
openshift/mysql-55-centos7 DEPRECATED: A Centos7 based MySQL v5.5 image… 6
frodenas/mysql A Docker Image for MySQL 3 [OK]
dsteinkopf/backup-all-mysql backup all DBs in a mysql server 3 [OK]
circleci/mysql MySQL is a widely used, open-source relation… 2
cloudfoundry/cf-mysql-ci Image used in CI of cf-mysql-release 0
cloudposse/mysql Improved `mysql` service with support for `m… 0 [OK]
ansibleplaybookbundle/rhscl-mysql-apb An APB which deploys RHSCL MySQL 0 [OK]
astronomerio/mysql-sink MySQL sink 0 [OK]
inferlink/landmark-mysql landmark-mysql 0 [OK]
astronomerio/mysql-source MySQL source 0 [OK]
vskumar@ubuntu:/var/log$ ^C
vskumar@ubuntu:/var/log$

====================>

You can get the top 5 results by using the head -5 Linux command:

$ sudo docker search mysql | head -5

=============>

vskumar@ubuntu:/var/log$ sudo docker search mysql | head -5
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mysql MySQL is a widely used, open-source relation… 5278 [OK]
mariadb MariaDB is a community-developed fork of MyS… 1634 [OK]
mysql/mysql-server Optimized MySQL Server Docker images. Create… 368 [OK]
percona Percona Server is a fork of the MySQL relati… 303 [OK]
vskumar@ubuntu:/var/log$

===============>
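head -5 is ordinary line filtering, so the same trick trims the output of any command, not just docker search; for example:

```shell
# head -5 keeps only the first five lines of whatever is piped into it.
printf '%s\n' NAME mysql mariadb percona hypriot zabbix centurylink | head -5
# NAME
# mysql
# mariadb
# percona
# hypriot
```

The header line counts as one of the five, so head -5 shows the header plus the top four results.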

If you look at the above list, the mysql image curated and hosted by Docker Inc. has a 5278-star rating, which indicates that it is the most popular mysql image; it is also marked as an Official image. For security reasons, we should use only official and highly rated images.

As we planned, in this blog we have worked with the Docker images.

At this point we can stop this session and in the next blog we can see on “How to work with interactive containers”.


Feel free to Contact me :

 

4. DevOps: How to create and work with Docker Containers


In continuation of my previous blog on 2. DevOps: How to install Docker 17.03.0 community edition and start working with it on Ubuntu 16.x VM [https://vskumar.blog/2017/11/25/2-devops-how-to-install-docker-17-03-0-community-edition-and-start-working-with-it-on-ubuntu-16-x-vm/], in this blog I would like to cover the lab practice on Docker containers.

Assuming you have the same setup as in the previous lab session, you can view the history of the current hello-world image.

First run the image:

sudo docker run -it hello-world

Then view its build history:

$ docker history hello-world

You can run this command and see:

======================>

vskumar@ubuntu:~$ sudo docker history hello-world
[sudo] password for vskumar:
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
f2a91732366c        5 days ago          /bin/sh -c #(nop) CMD ["/hello"]                0B
<missing>           5 days ago          /bin/sh -c #(nop) COPY file:f3dac9d5b1b0307f…   1.85kB
vskumar@ubuntu:~$

======================>

Check the current docker information:

sudo docker info |more

======================================>

vskumar@ubuntu:~$ sudo docker info |more
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 6
Server Version: 17.11.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 14
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 992280e8e265f491f7a624ab82f3e238be086e49
runc version: 0351df1c5a66838d0c392b4ac4cf9450de844e2d
–More–WARNING: No swap limit support
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.10.0-40-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.933GiB
Name: ubuntu
ID: KH7E:PWA2:EJGE:MZCA:3RVJ:LU2W:BA7S:DTIQ:32HP:XXO7:RXBR:4XQI
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

vskumar@ubuntu:~$

=============================>

Now, let us work on the Docker images operations:

In the previous session, we demonstrated the typical Hello World example using the
hello-world image.

you can run an Ubuntu container with:

$ sudo docker run -it ubuntu bash


======= We are in Docker container =====>

vskumar@ubuntu:~$ sudo docker run -it ubuntu bash
root@10ffea6140f9:/#

============>

Now, let us apply some Linux commands as below:

==================>

root@10ffea6140f9:/# ls
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
root@10ffea6140f9:/# ps -a
PID TTY TIME CMD
11 pts/0 00:00:00 ps
root@10ffea6140f9:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 05:36 pts/0 00:00:00 bash
root 12 1 0 05:38 pts/0 00:00:00 ps -ef
root@10ffea6140f9:/# cd lib
root@10ffea6140f9:/lib# ls
init lsb systemd terminfo udev x86_64-linux-gnu
root@10ffea6140f9:/lib# cd ..
root@10ffea6140f9:/# cd var
root@10ffea6140f9:/var# pwd
/var
root@10ffea6140f9:/var# ls
backups cache lib local lock log mail opt run spool tmp
root@10ffea6140f9:/var# cd log
root@10ffea6140f9:/var/log# ls
alternatives.log bootstrap.log dmesg faillog lastlog
apt btmp dpkg.log fsck wtmp

root@10ffea6140f9:/var/log# cat dpkg.log |more
2017-11-14 13:48:30 startup archives install
2017-11-14 13:48:30 install base-passwd:amd64 <none> 3.5.39
2017-11-14 13:48:30 status half-installed base-passwd:amd64 3.5.39
2017-11-14 13:48:30 status unpacked base-passwd:amd64 3.5.39
2017-11-14 13:48:30 status unpacked base-passwd:amd64 3.5.39
2017-11-14 13:48:30 configure base-passwd:amd64 3.5.39 3.5.39
2017-11-14 13:48:30 status unpacked base-passwd:amd64 3.5.39
2017-11-14 13:48:30 status half-configured base-passwd:amd64 3.5.39
2017-11-14 13:48:30 status installed base-passwd:amd64 3.5.39
2017-11-14 13:48:30 startup archives install
2017-11-14 13:48:30 install base-files:amd64 <none> 9.4ubuntu4
2017-11-14 13:48:30 status half-installed base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 configure base-files:amd64 9.4ubuntu4 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4
2017-11-14 13:48:30 status unpacked base-files:amd64 9.4ubuntu4

root@10ffea6140f9:/var/log#

==================================>

We have seen that this container behaves just like a Linux machine.

Now, to come out of the container back to the host, use the ‘exit’ command.

====================>

root@10ffea6140f9:/var/log#
root@10ffea6140f9:/var/log# exit
exit
vskumar@ubuntu:~$

vskumar@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker-exercise/ubuntu-wgetinstall latest e34304119838 4 hours ago 169MB
<none> <none> fc7e4564eb92 4 hours ago 169MB
hello-world latest f2a91732366c 5 days ago 1.85kB
ubuntu 16.04 20c44cd7596f 8 days ago 123MB
ubuntu latest 20c44cd7596f 8 days ago 123MB
busybox latest 6ad733544a63 3 weeks ago 1.13MB
busybox 1.24 47bcc53f74dc 20 months ago 1.11MB
vskumar@ubuntu:~$

======================>

It means that when we ran ‘$ sudo docker run -it ubuntu bash’ earlier, it took us into an interactive terminal inside the ubuntu container. When we typed ‘exit’, it brought us out of that container back to the host, where we then listed the docker images.

So, in the above session we have seen container usage and the docker images.
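The enter/exit behaviour is the same as nested shells on any Linux host. A small analogy sketch (no Docker required; this only illustrates the parent/child shell idea, it is not a container):

```shell
# `docker run -it ubuntu bash` is analogous to starting a child shell:
# you work inside it, and `exit` returns you to the parent shell.
sh -c 'echo "inside the child shell"'   # child shell runs and exits
echo "back in the parent shell"         # control returns to the parent
```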

Now, let us check the docker services status as below:

$sudo service docker status

vskumar@ubuntu:/var/tmp$ sudo service docker status

================================>

docker.service – Docker Application Container Engine

Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e

Active: active (running) since Sat 2017-11-25 02:07:54 PST; 25min ago

Docs: https://docs.docker.com

Main PID: 1224 (dockerd)

Tasks: 18

Memory: 255.2M

CPU: 35.334s

CGroup: /system.slice/docker.service

├─1224 /usr/bin/dockerd -H fd://

└─1415 docker-containerd –config /var/run/docker/containerd/containe

================================>

We will stop this session at this point. In the next blog we will learn how to download a public Docker image and work with images and containers.

Vcard-Shanthi Kumar V-v3

3. DevOps – Jenkins[2.9]: How to create and build the job ?

jenkins

In continuation of my previous blog on Jenkins 2.9 installation [https://vskumar.blog/2017/11/25/1-devops-jenkins2-9-installation-with-java-9-on-windows-10], where we installed and retested it, in this blog we will see how to create a simple job and run it as a build.

Now, Let us do some exercise:

Let us create a new job using Jenkins.

As you are aware, Jenkins can automate almost any kind of manual task.

For example:

  1. I want to compile a Java program and run it.
  2. To do this, first let us identify the manual steps.
  3. I need to go to the directory D:\JavaSamples\Javatest, where my Java programs are available.
  4. I need to use the below command:

CD [to know the current directory.]

Assume, I am in the below dir:

D:\Jenkins\jenkins-2.90>cd

D:\Jenkins\jenkins-2.90

Then , I need to use cd \JavaSamples\Javatest

Now I need to check my current path using cd

Then see the hello*.java using

dir command.

Then I need to compile it using ‘javac HellowWorld.java’ command.

Then I need to check for the files matching hellow*.*

There should be one file named ‘HellowWorld.class’.

It means my program has been compiled correctly without errors.

Now, I need to run the program using the java command, as: java HellowWorld

It should display the output.

I have executed all the steps in a command window to compile and test the program.

You can see the executed commands and the  output also from the below screen:

HelloWorld-compile&amp;execute-CMD

As we know, these are manual tasks we need to do repetitively: select a program, compile it, and run it. Why don’t we use Jenkins to create a job with this set of tasks?

Now, let us learn how to create a job in Jenkins:

Now assuming you are on the below screen:

Jenkins-New Job Creation

Click on “create new jobs”

You will get the below screen:

Jenkins job creation-enter an item name scrn.png

I want to create a job with the name of “vskumar2017test1”.

Jenkins-Freestyle project-creation

I want to create a Freestyle project, as I am not working with any plugins for now.
Hence I select the first option, “Freestyle project”. When I click the “OK” button, we can see the below screen:

Jenkins-Freestyle project-creation2

I have entered the project description as below, as per our activity plan:

Jenkins-Freestyle project-description

As you are aware, we are using Jenkins for a simple task in this exercise.

Now click on build options. You will get the below screen:

Jenkins-Freestyle project-Build.png

In this screen we need to use the build option, so click on “Add build step”. This option gives us the below features for entering the commands to execute:

Jenkins-Freestyle project-Build-Add build step

There are two options for command execution: 1. Execute Windows batch command, 2. Execute shell. Currently we are working with Windows only, hence the first option needs to be selected.

You can see the below screen:

Jenkins-New Job-Build-window commands1

Now, whatever commands we tried using command prompt we need to enter those.

For example I used as below:

cd

cd \JavaSamples\Javatest

dir hellow*.java

javac HellowWorld.java

dir HellowWorld.*

java HellowWorld

Now, I copy these commands into the window. The screenshot shows the commands only partially; in reality the window has all of them.

Jenkins-Build-Windows-batch commands-entry1

Now, let us save this job.
We will get the below screen on the created job name:

Jenkins-Project-vskumar2017test1.png

How to run the created project ?:

We need to run the created project using the option “Build now”.

You can see the build history as the job running, Under build#1.

Now, how to see the executed job output?:
To see the output we need to click on the down-arrow mark at the job number.
It displays the below:

We need to select “Console Output”, which displays the output as below:

Jenkins-Project-vskumar2017test1-running-build1-consoleOuput

If we scroll down we can see the job status message:

Jenkins-Project-vskumar2017test1-running-build1-consoleOuput

It shows all the output for our commands along with the job status.

Now let us review and analyze the display messages and commands as below.

Started by user Vskumar2017
Building in workspace D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1
[vskumar2017test1] $ cmd /c call C:\WINDOWS\TEMP\jenkins5234154176626743505.bat

The above output shows that Jenkins started the job with the user id Vskumar2017.

It also displays the current path of the job, where it was created.

And it invokes a .bat file to execute the command-prompt commands we entered. That is, Jenkins stored our batch commands in a temporary file, which is run by a cmd command from a shell prompt.
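On a Linux box the same mechanism can be sketched with a shell script in place of the .bat file. This is a hedged illustration of the idea, not Jenkins' actual implementation:

```shell
#!/bin/sh
# Jenkins-style build step, sketched: write the entered commands to a
# temporary script, run it in a fresh shell, and keep its exit status.
tmp=$(mktemp)                               # temp file, like jenkins....bat
printf 'echo step 1\necho step 2\n' > "$tmp"
sh "$tmp"                                   # run it, like `cmd /c call ...`
status=$?
rm -f "$tmp"
echo "build step exit status: $status"
```

A zero exit status from the script is what marks the build SUCCESS; any non-zero status marks it FAILED.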

Now, let us see the below commands:

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd
D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1

This denotes the job/project's current workspace directory; cd with no arguments printed the path.

Through the below:

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd  \JavaSamples\Javatest 

D:\JavaSamples\Javatest>dir hellow*.java 
 Volume in drive D is New Volume
 Volume Serial Number is 5C67-6F04

 Directory of D:\JavaSamples\Javatest

04/16/2017  03:52 PM               234 HellowWorld.java
04/16/2017  03:53 PM               570 HellowWorld10.java
               2 File(s)            804 bytes
               0 Dir(s)  22,762,598,400 bytes free

It changed to the directory where the Java program is located and displayed the files.

Let us see the next output:

D:\JavaSamples\Javatest>javac HellowWorld.java 
'javac' is not recognized as an internal or external command,
operable program or batch file.

It shows an error for the javac path: Jenkins is not recognizing it, because the javac compiler application is in the path D:\Java\jdk-9.0.1\bin, which is not on Jenkins' PATH.

Javac-path.png

Now, this needs to be corrected in the project.
To correct it, we need to go to the “Configure” option.

Open it into project window to update some more commands.

I have updated the command window with the below commands:
cd \JavaSamples\Javatest
dir hellow*.java
del HellowWorld.class
dir hellow*.java
D:\Java\jdk-9.0.1\bin\javac HellowWorld.java
dir HellowWorld.*
java HellowWorld

Now, let me run the job using the “Build now” option.

For debugging purpose, I have executed this job some more times. Hence history shows multiple builds on it.

Our current build is #5. And let us open it and see the console output:

Jenkins-Project-vskumar2017test1-buildsNow#5

The console output shows as below:

Jenkins-Project-vskumar2017test1-console#5

Now you can see the whole console output as below in text format:

Console Output
Started by user Vskumar2017
Building in workspace D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1
[vskumar2017test1] $ cmd /c call C:\WINDOWS\TEMP\jenkins1398066858541735603.bat

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd
D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd \JavaSamples\Javatest

D:\JavaSamples\Javatest>dir hellow*.java
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

04/16/2017 03:52 PM 234 HellowWorld.java
04/16/2017 03:53 PM 570 HellowWorld10.java
2 File(s) 804 bytes
0 Dir(s) 22,761,955,328 bytes free

D:\JavaSamples\Javatest>del HellowWorld.class

D:\JavaSamples\Javatest>dir hellow*.java
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

04/16/2017 03:52 PM 234 HellowWorld.java
04/16/2017 03:53 PM 570 HellowWorld10.java
2 File(s) 804 bytes
0 Dir(s) 22,761,955,328 bytes free

D:\JavaSamples\Javatest>D:\Java\jdk-9.0.1\bin\javac HellowWorld.java

D:\JavaSamples\Javatest>dir HellowWorld.*
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

11/17/2017 12:39 PM 427 HellowWorld.class
04/16/2017 03:52 PM 234 HellowWorld.java
2 File(s) 661 bytes
0 Dir(s) 22,761,955,328 bytes free

D:\JavaSamples\Javatest>java HellowWorld
Hello World

D:\JavaSamples\Javatest>exit 0
Finished: SUCCESS

=======================>
Observe that no error was displayed, as we have given the correct full path to the javac application.

Now I have updated the commands as below to remove the existing HellowWorld.class file.
cd
cd \JavaSamples\Javatest
dir hellow*.*
echo ‘Assuming .class file is already there..”
del HellowWorld.class
dir hellow*.java
D:\Java\jdk-9.0.1\bin\javac HellowWorld.java
dir HellowWorld.*
java HellowWorld
=======================>
You can see the output under build#7:
========================>

Console Output
Started by user Vskumar2017
Building in workspace D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1
[vskumar2017test1] $ cmd /c call C:\WINDOWS\TEMP\jenkins7100481173282587024.bat

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd
D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1

D:\Jenkins\Jenkins 2.9\workspace\vskumar2017test1>cd \JavaSamples\Javatest

D:\JavaSamples\Javatest>dir hellow*.*
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

11/17/2017 12:45 PM 427 HellowWorld.class
04/16/2017 03:52 PM 234 HellowWorld.java
04/16/2017 03:53 PM 570 HellowWorld10.java
3 File(s) 1,231 bytes
0 Dir(s) 22,760,976,384 bytes free

D:\JavaSamples\Javatest>echo ‘Assuming .class file is already there..”
‘Assuming .class file is already there..”

D:\JavaSamples\Javatest>del HellowWorld.class

D:\JavaSamples\Javatest>dir hellow*.java
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

04/16/2017 03:52 PM 234 HellowWorld.java
04/16/2017 03:53 PM 570 HellowWorld10.java
2 File(s) 804 bytes
0 Dir(s) 22,760,976,384 bytes free

D:\JavaSamples\Javatest>D:\Java\jdk-9.0.1\bin\javac HellowWorld.java

D:\JavaSamples\Javatest>dir HellowWorld.*
Volume in drive D is New Volume
Volume Serial Number is 5C67-6F04

Directory of D:\JavaSamples\Javatest

11/17/2017 12:47 PM 427 HellowWorld.class
04/16/2017 03:52 PM 234 HellowWorld.java
2 File(s) 661 bytes
0 Dir(s) 22,760,976,384 bytes free

D:\JavaSamples\Javatest>java HellowWorld
Hello World

D:\JavaSamples\Javatest>exit 0
Finished: SUCCESS
=================================>
Now you can see the message displayed by the echo command.
The old class file was removed, and the new file's timestamp is different.

Now, how to make a job fail?:

Please see the screen display with a failed job:

Jenkins-vskumar2017-success1

Now, let us see the console output:

Console-output-vskumar2017-3.png

In this example I gave a wrong file name to execute, hence the build failed.
Jenkins checks the result of the last command; you can change the commands and cross-check as an exercise.
On failure, the exit flag shows as “1”.
When the job succeeded, the flag showed as “0”.
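The “0”/“1” flag is simply the process exit-status convention every build tool relies on: 0 means success, any non-zero value means failure. A quick shell sketch of the convention (the failing ls is just an illustrative broken step):

```shell
#!/bin/sh
# Exit-status convention: 0 = success, non-zero = failure.
true                                  # a step that succeeds
echo "after a successful step: $?"    # prints 0
ls /no/such/file 2>/dev/null          # a step that fails
echo "after a failing step: $?"       # prints a non-zero code
```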

Hope you understand the difference between failure and success of Jenkins build.
So, we need to make sure the commands/script mentioned in the command window should be a debugged one. Then the jobs success can be seen.

Please note:
if an intermediate command is not recognized as an internal or external command,
operable program or batch file, Jenkins will not count it as a failure; only the exit status of the last command decides the build result.

Exercise:
Take a new Java program and create a job to compile and run it.

How to use My views:
You can see the build history in graphical format as below with My views option:

Jenkins-Job-Views

 

At this point with this blog I want to close now, with the above scenarios.

You can also see:

https://www.youtube.com/edit?o=U&video_id=lciTHyxCgfE

 

Feel free to contact for any support:

Vcard-Shanthi Kumar V-v3

2. DevOps: How to install Docker 17.03.0 community edition and start working with it on Ubuntu 16.x VM

Docker-logo.png

In this blog, I would like to demonstrate the Docker 17.03.0 CE edition installation on an Ubuntu 16.04 VM. Later, some practice with containers will be shown in a series of blogs. Please keep visiting for weekly new blogs, or subscribe. If you are interested in following this site's blogs, please send an e-mail [with your LinkedIn message] for approval with authentication.

Assume you have an Ubuntu machine or a Virtual Machine [VM] configured. In this blog you can see how to install Docker 17.03.0 CE [as on this blog's date], with screen display outputs.
Install Docker on Ubuntu:

1.Add the Docker package repository path for Ubuntu 16.04 to your APT sources, as shown below:
 $ sudo sh -c "echo deb https://apt.dockerproject.org/repo \
 ubuntu-xenial main > /etc/apt/sources.list.d/docker.list"

2.Add the GNU Privacy Guard (GPG) key by  running the following command:
 $ sudo apt-key adv --keyserver \
 hkp://p80.pool.sks-keyservers.net:80 --recv-keys \ 
 
If the above format is expired; you can try as below:
==== Alternate method with screen output ====>
$sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Executing: /tmp/apt-key-gpghome.YU0Rk7y5kX/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg: key F76221572C52609D: 7 signatures not checked due to missing keys
gpg: key F76221572C52609D: public key "Docker Release Tool (releasedocker) <docker@docker.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
=========================================>
The above format should work. Even if that is not working, please google the same keys for the latest validated keys.

3.Resynchronize with the package repository using the below command:
 $ sudo apt-get update
Now the Docker package repository is synced on your Ubuntu machine.

4. Now, you can Install Docker and start  Docker service:
 $ sudo apt-get install -y docker-engine

5. Now that you have installed the Docker Engine, verify the installation by running docker --version, as shown below:
 $ docker --version

We have successfully installed Docker version 17.03.0 community edition.
Another option is to install it with a single script, avoiding the above steps:

If you are working on Ubuntu, follow the below command:

==================== Screen output ==========>
vskumar@ubuntu:/var/log$ sudo wget -qO- https://get.docker.io/ | sh |more
[sudo] password for vskumar: 
# Executing docker install script, commit: 11aa13e
Warning: the "docker" command appears to already exist on this system.
If you already have  installed Docker, this script can cause trouble, which is
why we're displaying this warning and provide the opportunity to cancel the
installation.
If you installed the current Docker package using this script and are using it
again to update Docker, you can safely ignore this message.
You may press Ctrl+C now to abort this script.
+ sleep 20
+ sudo -E sh -c apt-get update -qq >/dev/null
============== Since I had already installed Docker, I aborted this process ====>
But if you try this script, it takes some time to download and install Docker end to end.
Be patient for the final result, then reconfirm by checking the Docker version with the below command:
$docker --version

It is very easy to install docker with the above step(s) in your Ubuntu machine.

Assuming you have studied the theory part of Docker usage, I am moving forward to lab practice.

Now, let us do some practice with docker images/containers.

We will do the below steps:

1. Downloading the first Docker image:
we will download a sample hello-world docker image using the following command:
$ sudo docker pull hello-world

2. Once the image is downloaded,
it can be verified using the docker images subcommand.
To run the image, use the below command:
$ sudo docker run hello-world
It displays the message “Hello from Docker!”
You have set up your first Docker container and it is running now.

3. How to Troubleshoot with Docker containers?:
If you want to troubleshoot a container, the first step is to check Docker's running status by using the below command:
$ sudo service docker status 
It displays the status and shows as Docker 'Active running' message on the screen along with other messages.
Press ctrl+C to come out from the display.

=========== Partial content from the above command =====>
vskumar@ubuntu:/var/log$ sudo service docker status 
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2017-11-25 02:54:42 PST; 3h 56min ago
Docs: https://docs.docker.com
Main PID: 3769 (dockerd)
Tasks: 19
Memory: 43.6M
CPU: 1min 21.184s
CGroup: /system.slice/docker.service
├─3769 /usr/bin/dockerd -H fd://
└─3778 docker-containerd --config /var/run/docker/containerd/containerd.toml
===============================>
If for some reason the Active column shows inactive or maintenance as the status, your Docker service is not running.
In that case, restart the Docker service using the below command:

$ sudo service docker restart

We will see in the next blog, some more exercises on Docker containers and images.

Vcard-Shanthi Kumar V-v3

https://youtu.be/O0_Vnc7X6iI

1. DevOps – Jenkins[2.9] Installation with Java 9 on Windows 10

jenkins

I am publishing a series of blogs on DevOps tools practice. Interested readers can keep watching this site, or subscribe/follow.

In this blog we will see the prerequisites for installing Jenkins 2.9, and how to install it.

 =====================================>

Visit my current running facebook groups for IT Professionals with my valuable discussions/videos/blogs posted:

 DevOps Practices Group:

https://www.facebook.com/groups/1911594275816833/about/

Cloud Practices Group:

https://www.facebook.com/groups/585147288612549/about/

Build Cloud Solution Architects [With some videos of the live students classes/feedback]

https://www.facebook.com/vskumarcloud/

 =====================================>

 

MicroServices and Docker [For learning concepts of Microservices and Docker containers]

https://www.facebook.com/MicroServices-and-Docker-328906801086961/

To set up Jenkins, you need to have Java 9 on your local machine.

Hence, as Step 1, set up Java by following the steps below:

STEP1: How to download and install JDK SE Development kit 9.0.1 ?:

Go to the URL:

http://www.oracle.com/technetwork/java/javase/downloads/jdk9-downloads-3848520.html

You will see the below page [as on today’s display]

Java Kit SE 9 download

From this web page, Click on Windows file jdk-9.0.1_windows-x64_bin

It will download.

Double click on the file.

You will see a series of screens while the installation runs. I have copied some of them here.

Java SE 9 install scrn-2.png

Java SE 9 install scrn-1

Java SE 9 install scrn-3.png

 

Java SE 9 install scrn-4.png

You can change the directory if you want, in the above screen.

 

Java SE 9 install scrn-4-Oracle 3 billion.png

 

Finally you should get the below screen as installed it successfully.

Java SE 9 install scrn-5-complete

Now, you need to set the Java environment and path variable in Windows setting.

Java SE 9 install scrn-7-windows env setup2Java SE 9 install scrn-8-windows env setup3

 

My Java directory path is:

Java SE 9 install scrn-9-windows env setup4

 

Java SE 9 install scrn-10-windows env setup5.png

You  need to edit the below path variables also for the latest path:

Java SE 9 install scrn-11-windows env setup6Java SE 9 install scrn-12-windows env setup7.png

After you have done the settings, you can check the java version as below in a command prompt:

Java SE 9 install scrn-13-CMD-1

You should get the same version.

Now, You need a simple java program to run and check your compiler and runtime environment.

Please go to Google search and look for “Java Hello World program”.

Follow the below URL:
https://en.wikiversity.org/wiki/Java_Tutorial/Hello_World!

Copy the program into a text file named after its class, e.g. HellowWorld.java (in Java the file name must match the public class name).

Then compile and run the program as below:

Java SE 9 install scrn-14-CMD-Javacompile&run-1.png

If you get the above output, your installed Java software is working fine.

You need to remember the below:
To compile this program you need to use the below command in command prompt of that program directory:

D:\JavaSamples\Javatest>javac HellowWorld.java

To run the java program you need to use the below command:

D:\JavaSamples\Javatest>java HellowWorld
Hello World

Now, you can plan for setting up Jenkins.

STEP2: How to setup Jenkins on Windows ?:

Follow the below link to download Jenkins for Windows-x64
https://jenkins.io/download/thank-you-downloading-windows-installer/

It downloads the installer as below:
You can see the downloaded installer file for Jenkins.

Jenkins-installer-file1.png

 

How to install Jenkins?:
Now you can copy this file into a new directory as Jenkins.

I have copied into the below directory.

Jenkins-installer-file-copy1.png

You need to unzip this file.

Jenkins-installer-file-unzip1.png

You can see the new directory is created with its unzipped files:

 

Jenkins-installer-file-unziped-new Dir

You can double click on it and can see the below screen:

 

Jenkins-installer-file-double-click.png

I have changed the path as below:

Jenkins-installer-file-path.png

Click on install and say “Yes” in windows confirmation screen.

 

Jenkins-installer-file-path-install.png

You can see the below screen:

Jenkins-installer-install-complete1.png

Once you click on finish, it will take you to a browser:

Jenkins-initial browser1.png

Jenkins has a default user id “admin”; the initial password is available at the path shown.

Jenkins-admin-initial-pwd-file

You can open this file in notepad as below:

Jenkins-admin-initial-pwd-file-open-notepad

Now, copy this password as below into windows clipboard.

Now you goto the Jenkins browser and paste this password.

Close your notepad.

Now, on browser press continue.

You can see the Jenkins initial  screen as below for plugins selection:

 

Jenkins-initial screen for plugins

Jenkins has hundreds of plugins, but a suggested default set can be installed initially to save you disk space and time. Hence, click on “Install suggested plugins”.

It will show the below screen as it is working for this activity:

Jenkins-default-plugins-install-screen1.png

You can see the tasks Jenkins is performing in the right-side window:

Jenkins-initial screen for plugins-tasks1.png

You can also watch the plugins being installed one by one, with the tasks listed on the right side.
It might take more than 30 minutes, depending on your internet speed and RAM.

I am copying some of the screens as it is moving on …

 

Jenkins-initial screen for plugins-tasks2.png

Once the plugins are installed, you can see the 1st screen to setup your 1st admin user id and password as below:

Jenkins-create-first-UID &amp; PWD

You can enter the details and click on “Save and Finish” button.

Now it shows the below screen, with Jenkins ready to use:

Jenkins-is-Ready.png

When you click on “Start using Jenkins” button,
You can see the below screen as in the beginning of the Jenkins usage:

Jenkins-welcome-1st time.png

Please observe the right corner and verify your created user id.

Now, let us do some login and logout operations to make sure it is working.

When you logout you can see the below screen:

Jenkins-initial-logout-test

Now let us understand the URL of the Jenkins server we are using:

When we install Jenkins on any machine, either Windows or Linux,
by default its URL is: http://localhost:8080/
localhost resolves to your current machine's IP address.
You can see the screen now with the above URL:

Jenkins-URL-test

Now, you can try one more option, check your ip address from command prompt as below:

Check-IPs-CMD.png

You can pick up the first IP address displayed on the command prompt screen.

And key in the below URL in your browser:
http://192.168.137.1:8080/login?from=%2F

Your IP needs to be used in place of 192.168.137.1

Now, let us see: what is 8080?
Every server software listens on a port through which its web pages are accessed on the installed machine. In our case Jenkins has been configured on port 8080, which is its default. Similarly, other server software will have its own specific ports.
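You can probe whether a port is already taken before picking it for a server. A hedged, bash-only sketch using the shell's /dev/tcp pseudo-device (the port number is just an example):

```shell
#!/bin/bash
# Probe a TCP port on the local machine: if a connect succeeds, some
# server is already listening there; otherwise the port is free.
port=8080
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port: in use"
else
  echo "port $port: free"
fi
```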

Now, I have used a different browser using the above url to access Jenkins web page as below:

Jenkins-using-IP & 8080-Port.png

Using the login screen, I log into my admin user id vskumar2017, which was created earlier.

Login-UID-vskumar2017.png

You can also check the Jenkins running status in your Windows services.
Please note: with this setup you have made a standalone Jenkins using your PC or laptop.

Now, if you restart your Windows machine, you need to start Jenkins as a fresh service.

To start Jenkins from command line
  1. Open command prompt.
  2. Go to the directory where your war file is placed and run the following command: java -jar jenkins.war
  3. Or, alternatively, go to your Jenkins directory in a CMD window and execute: jenkins.exe start

 

Restart-Jenkins-CMD

Open a browser and check the Jenkins access; it should show the login page.

How to remove Jenkins from your system?:

If you want to remove Jenkins from your system, you can find the  Jenkins Windows installer file from the Jenkins directory and double click on it. You can see the below window to choose your action:

Remove-repair-Jenkins.png

So far we have seen the installation of Java 9 and Jenkins.

Sometimes you might need to configure other servers [Ex: Tomcat, etc.]. They might also use port 8080, which creates a conflict. In that case we need to change the port#.

Now, how do you change port 8080 to another port#?

Find jenkins.xml in the Jenkins directory.

Ex:

In my system the path is: D:\Jenkins\Jenkins 2.9

Replace 8080 with the required port# in the below line:

<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --webroot="%BASE%\war"</arguments>
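That replacement can also be scripted. A minimal sketch: it rewrites the --httpPort argument wherever it appears (9090 and the file path are only examples, and the quoting in your real jenkins.xml may differ from the simplified sample string):

```python
import re

def set_jenkins_port(xml_text: str, new_port: int) -> str:
    """Rewrite the --httpPort=NNNN argument inside jenkins.xml content."""
    return re.sub(r"--httpPort=\d+", f"--httpPort={new_port}", xml_text)

# Example against a simplified version of the line shown above:
sample = ('<arguments>-Xrs -Xmx256m -jar "%BASE%\\jenkins.war" '
          '--httpPort=8080 --webroot="%BASE%\\war"</arguments>')
print(set_jenkins_port(sample, 9090))

# To edit the real file (path is just this blog's example location):
# text = open(r"D:\Jenkins\jenkins.xml", encoding="utf-8").read()
# open(r"D:\Jenkins\jenkins.xml", "w", encoding="utf-8").write(
#     set_jenkins_port(text, 9090))
```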

In the next blog we will see a simple exercise with Jenkins: creating a job and running it through different builds.

https://vskumar.blog/2017/11/26/2-devops-jenkins2-9-how-to-create-and-build-the-job/

 


Note to the reader/user of this blog:

If you are not a student of my class and are looking to join, please contact me by mail with your LinkedIn identity, and send a connection request with a message about your need. You can use the below contacts. Please note: I teach globally.

Vcard-Shanthi Kumar V-v3

https://youtu.be/x7d6dl0k0JU

If you want to learn the Ubuntu installation, you can visit:

https://vskumar.blog/2018/02/26/15-devops-how-to-setup-jenkins-2-9-on-ubuntu-16-04-with-jdk8/

https://www.facebook.com/vskumarcloud/videos/985410704977805/?t=0

 

Advertising3.png

For intended DevOps engineers: We also train freshers [limited seats] on Agile & Scrum concepts through to DevOps practices and CDI automation [with tools]. Interested candidates can contact by e-mail/phone to join or book a seat in Bangalore.

For employers: If you are planning to accelerate your DevOps practices from Agile & Scrum onwards, we can do resourcing for you [in Bangalore]. Please contact for details.

If you are interested in DevOps practices, join the group:

https://vskumar.blog/2018/10/17/join-devops-practices-group-on-fb/

 

 

Why the DevOps Practice is mandatory for an IT Employee

DevOps Patterns
devops-process
  1. DevOps is a term used to refer to a set of principles and practices that emphasize the collaboration and communication of Information Technology [IT] professionals in a software project organization, while automating the process of software delivery and infrastructure using Continuous Delivery Integration [CDI] methods.
  2. DevOps also connects the Development and Operations teams to work collaboratively and deliver software to customers in an iterative development model by adopting Continuous Delivery Integration [CDI] concepts. The software delivery happens in small pieces at different delivery intervals, and sometimes these intervals are accelerated depending on customer demand.
  3. DevOps is a practice globally adopted by many companies, and its importance and implementation keep accelerating at a constant pace. So every IT professional needs to learn the concepts of DevOps and its Continuous Delivery Integration [CDI] methods. To know the typical DevOps activities by role, watch the video: https://youtu.be/vpgi5zZd6bs, which is pasted below in the videos.
  4. Even college graduates and freshers need this knowledge to work closely with their new project teams in a company. A fresher who attends this course can get into the project shoes faster and cope with experienced teams.
  5. Put another way, DevOps is an extension of Agile and continuous delivery practices. To move into this career, IT professionals need to learn Agile concepts, software configuration management, release management, deployment management, and the different DevOps principles and practices to implement the CDI patterns, along with the relevant tools for integrating these practices. There are various tool vendors in the market, and open-source tools are also very popular. Using these tools, the DevOps practices can be integrated to maintain the speed needed for CDI.
  6. There are tools for version control and CDI automation. One needs to learn the process steps related to these areas by attending a course; the tools themselves can then be understood easily. If one understands these CDI automation practices, learning the tools later is easy even by oneself, depending on the work environment.
  7. As mentioned above, every IT company or IT services company needs to adopt DevOps practices to deliver competent service to its customers in the global IT industry. When these companies adopt these practices, their resources also need thorough knowledge of DevOps practices to serve the customers, and the companies benefit from having such knowledgeable resources. At the same time, new joiners in any company, whether experienced professionals or freshers, may be offered a higher CTC in terms of perks, or invited with a competitive offer, if they have this knowledge.
  8. Let us know if you need DevOps training from IT-industry-experienced people, covering the above practice areas, to boost you in the IT industry.

Training will be given by professional(s) with three decades of global IT experience:

https://www.linkedin.com/in/shanthi-kumar-v-itil%C2%AE-v3-expert-devops-istqb-752201a/

Watch the below videos on why an IT company needs to shift to a DevOps work culture and practices, and what advantages the company and its employees can gain:

For DevOps roles and activities watch my video:

Folks, I also run the DevOps Practices Group: https://www.facebook.com/groups/1911594275816833/?ref=bookmarks

There are many learning units I am creating, starting with the basics. If you are not yet a member, please apply so you can utilize them. Read and follow the rules before you click your mouse.

For contact/course details please visit:

https://vskumarblogs.wordpress.com/2016/12/23/devops-training-on-principles-and-best-practices/

Advertising3
Vcard-Shanthi Kumar V-v3

SDLC & Agile – Interview questions for Freshers -5

In continuation of my previous blog on this subject, the following questions and answers are presented:

1. What is retrospective in agile and where it can be useful?

Ans: In the agile development model, different requirements are considered for design, development, and code construction in each iteration. While performing these tasks, different issues are identified and resolved by the teams at each stage. The teams need to record the knowledge gained from each issue as lessons learnt, and the issue-resolution processes are considered for process improvements in the next iteration. During the retrospective [after completing an iteration], the team discusses the lessons learnt from the completed iteration and the best practices identified for the next iteration. The retrospective is a mandatory activity for every iteration of an Agile project, and it needs to be conducted before starting the next iteration.

 

2. What is a continuous stream of development in the agile model?

Ans: As per the agile concept, continuous software delivery needs to happen through iterative development. Assume the development team takes four days for development and on the fifth day the build goes for release and deployment; from the fifth day onwards the developers take up the next iteration/SPRINT as their continuous development activity. The developers pick up SPRINT items one by one for construction; this is called a continuous stream of development. While the testing activity is ongoing, the developers can pick up other workable items from the SPRINT for construction.

 

3. What is Continuous Delivery[CD] in Agile ?

Ans: As per the agile concepts and principles, the developer should pick up only a small chunk of workable items that can be delivered in hours or a few days. When this kind of continuous development happens, there are continuous builds for testing and deployment. The agile project thereby achieves continuous delivery [CD] of software into production in small chunks of functionality or fixes.

Example: Many technology companies plan each SPRINT item to complete within hours to speed up their ongoing software deployments for daily business needs. This concept is called Continuous Delivery [CD] in Agile.

 

4.  What is transition activity and their tasks involved in  agile project?

Ans: The transition activity starts with deploying the software release into production. Once the software construction phase is signed off, the transition activity needs to be started. Typically the transition activity contains the following tasks:

i) Active stakeholder participation

ii) Final system testing

iii) Final acceptance testing

iv) Finalize documentation

v) Final testing of the release

vi) Train end users

vii) Train production staff

viii) Deploy into production.

All the above tasks are performed in sequential order.

 

5.  What is final system testing during transition stage?

Ans: Once the software is deployed internally, the planned system testing is conducted by the testing team for the specific iteration. Once system testing is passed and certified, acceptance testing can start.

 

 6.  When can you conduct final acceptance testing in agile model?

Ans: In any agile project, the developers conduct a skeleton software demo for the users. Once the users approve the design requirement, the construction phase starts. Once the software is constructed, it is deployed internally for various levels of testing during the transition stage of the agile project. At this stage the software release is deployed in the test environment; system testing is then conducted and signed off. The final acceptance testing is conducted on the software to be delivered to the users in production. Once the final acceptance is signed off, the remaining transition-phase tasks are performed, as mentioned in the task list.

 

7. When can the pilot testing  happen and who all will perform it?

Ans: During the transition stage, once the acceptance test is signed off and the final documentation is done, the software build is run through a pilot test in a preproduction or production environment, depending on the organization's policy. The pilot test is attended by the business users and testers (or a nominated coordinator), along with the development team and operations [ops] team.

 

8.  During the transition stage who all need to be trained?

Ans: Once the pilot test is done, the software end users and the production staff (ops team) need to be trained to operate the product in the live [production] environment for business operations.

 

9.  When can you deploy the system into production?

Ans: During the transition stage, once the pilot test is signed off, the end users and production staff are trained on the software system, and then it is deployed into production.

 

10. How a  prototype can be designed ?

Ans: When a business user gives requirements that consist of a user interface and some data processing to produce output, there are two ways we can design the software: a) the prototype model, and b) designing and developing the complete software.

a) Prototype model: In the prototype model, the developers design and develop the critical requirements of the users and demonstrate them as skeleton software. The skeleton software does not have the complete software operations; it has a user interface to give the user an idea of the software to be delivered in future. Once the user approves the skeleton model, the developers can design the complete model through the Agile SDLC. Note that the prototype model or process can be applied over one or more SPRINT cycles or iterations.

b) Designing and developing the complete software: This follows the regular Agile project process, starting from the collection of user stories onwards. All the agile phases and their tasks are applied during execution. If the team agrees to a demo [for a prototype], user demos can also happen as and when required for each SPRINT during the construction phase.

Keep watching this site for further updates.

Contact for any guidance/coaching.

 Vcard-Shanthi Kumar V

 

https://youtu.be/vpgi5zZd6bs

https://www.youtube.com/watch?v=fe5S-Mav1tU

Continuous test automation planning during Agile iterations


Please refer to my blog and videos on Agile practices and the importance of reusable code libraries for cycle-time reduction.

Alongside reusable code usage and the iteration or sprint planning, the test automation can also be planned, designed, and implemented.

This blog elaborates on the simple processes that can be used to implement it and demonstrate the cycle-time reduction. Please note that only manual scripts that have passed at least two test cycles should be selected and planned for test automation.

Here I elaborate on the process of automating unit testing and component or module integration testing. Note that test automation is itself a development project, hence some of its phases are similar to the SDLC. The pictorial chart elaborates the detailed steps involved in automating these test phases. The below table and chart narrate the relationship between the development process and the testing process under each development phase.

Development and testing process relationship table:

Phase: Module (Unit) or Component Development
- Development: Design the module from requirements. Test: Perform test planning and test-environment setup.
- Development: Code the module. Test: Create the test design and develop test data.
- Development: Debug the module. Test: Write test scripts or record test scenarios using the module.
- Development: Unit-test the module. Test: Debug the automated test script by running it against the module; tools that support unit testing [Purify, etc.] can also be used.
- Development: Correct defects. Test: Rerun the automated test script as a regression test as defects are corrected.
- Development: Conduct performance testing. Test: Verify the system is scalable and will meet the performance requirements. This is the entry criterion for integration test automation.

Phase: Integration
- Development: Build the system by connecting modules; conduct integration tests with the connected modules; review trouble reports. Test: Combine the unit test scripts and add new scripts that demonstrate module inter-connectivity; use a test tool to support automated integration testing.
- Development: Correct defects and update defect status. Test: Rerun the automated test scripts as part of regression testing as defects are corrected.
- Development: Continue performance testing activities. Test: Verify the system is scalable and meets the performance requirements with the integrated modules; if this passes, entry to system test or VVT can be considered.

The below chart demonstrates the process steps to be used for test automation of unit testing and integration testing:

UT&IT

The acronyms used in the chart: TC–>Test case, TD–>Test data, TR–> Test requirement, UT–>Unit test, IT–> Integration test.

All the automated test scripts and test data need to be preserved under configuration management tools.

Choosing the right tools for test automation comes under the tools evaluation process. Once the tools are identified, the above processes can be planned and adopted as a regular practice on Agile projects.
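As a concrete illustration of the "unit-test the module / rerun as regression" rows in the table above, here is a minimal sketch using Python's built-in unittest (the apply_discount module is hypothetical; any unit under construction would take its place). The same script is rerun unchanged after every defect fix, which is exactly the regression step in the table:

```python
import unittest

# Hypothetical module under test: a discount calculator built in one sprint.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Automated unit-test script; rerun unchanged as the regression
    suite each time a defect fix or new sprint item lands."""

    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False, verbosity=2)
```

Once the script passes against the module, it is preserved under configuration management along with its test data, as noted above.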

 

https://www.youtube.com/watch?v=XlhM5FmKcsc&list=PL5NmC6t0N8tHJoaaOAjM58bhu18zzbPI1

 

Vcard-Shanthi Kumar V-v3

SDLC & Agile – Interview questions for Freshers -4

Agile Cirlce1

In continuation of my previous blog [https://wordpress.com/post/vskumar.blog/1944] on this subject, the following questions and answers are presented:

 

1. What is a collaborative development approach in the agile development model?

Ans: In any agile project, as per the Agile Manifesto principles, the team needs to pull together ideas through a prototype: a phased, iterative, or rapid prototype. With these ideas, the team works together by sharing knowledge among themselves, which is considered a collaborative development approach.

2. What is model storming during the construction phase of an agile development model?

Ans: When the initial requirements are envisioned, they are distributed into different iterations. A single team or multiple teams execute each iteration during software code construction. As per the agile principles, the stakeholders can change or add requirements at any stage of the Agile project phases. The team needs to brainstorm to execute the iterations correctly and completely as per the users' desires. An iteration can be considered a single agile model for the construction phase, and this model storming can happen within the team so that each developer clearly understands the SPRINT. During model storming, requirements decomposition happens: from user story to design specifications, which lead to SPRINT items, and from design to code specifications. Depending on the team's planning, the outcome of model storming can sometimes also be a TDD [Test-Driven Design]. [Please look into my YouTube videos on the Agile topic for a reusable code example.]

 

3. What is Test-Driven Design [TDD]?

Ans: Any requirement [story] needs to be decomposed into design requirements, and each design requirement is converted into code during the construction phase. Before development, when the developer visualizes the code, a test-driven scenario is identified and documented as a test case with different test design steps. Once the developer feels this test case can be exercised through the different code paths, the code writing can start. This concept is called Test-Driven Design, and development starts from the TDD specification. Hence Agile developers need to make the TDD ready first, and then plan the code writing, review, and unit testing. Sometimes the TDD can also be the outcome of model storming.
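To make the idea concrete, here is a tiny hedged sketch: the test is written first from a hypothetical design requirement (an order total with a flat shipping fee below a free-shipping threshold; all names and numbers are invented for illustration), and only then is the code written with the sole goal of making the test pass:

```python
# Test written FIRST, from the design requirement: "an order total adds
# the line amounts and applies a flat shipping fee below a threshold".
def test_order_total():
    assert order_total([100.0, 50.0]) == 150.0   # at/over threshold: free shipping
    assert order_total([20.0, 30.0]) == 55.0     # below threshold: flat fee added
    assert order_total([]) == 0.0                # empty cart costs nothing

# Only now is the code written to satisfy the test above.
def order_total(items, fee=5.0, free_over=149.0):
    subtotal = sum(items)
    if subtotal == 0 or subtotal >= free_over:
        return subtotal
    return subtotal + fee

test_order_total()  # run the TDD check; raises AssertionError on failure
```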

4. What is confirmatory testing?

Ans: Any software build can have defects found through different levels of testing. When the developer fixes one or more defects and deploys the code in the test environment, the test engineer needs to retest it to confirm the software function with reference to the regression requirements or functionality and the fixes [if any]. A confirmation test is mandatory for every fix.

5. What is evolving documentation?

Ans: As per the agile process, when the code is constructed and tested, the prepared documents need to be updated with reference to the tested and certified build, and any new requirement has to be incorporated into the documents. Evolving the documentation is an ongoing activity for an iteration's build until it goes to production.

 

6. What is internally deployable software?

Ans: Once construction is over for an iteration's requirements, the software is unit tested and integration tested. If it passes, it can be moved to other test environments. As per the deployment process, when we move the software into the different environments [after test certification or confirmation], the build is known as internally deployable software.

 

7. When can you finalize the documentation in the agile model?

Ans: During the transition stage, once the acceptance test is signed off, the users' suggestions are considered to finalize the documentation.

 

8. What are tangible and intangible benefits for users?

Ans: Some business requirements deliver direct business value when incorporated into the software system; these are considered tangible [direct] benefits. Incorporating other requirements into the software brings intangible [indirect] benefits to its business usage.

Example: If the system performance is increased by a technical design in the software architecture, users can access the data faster, which is an intangible benefit. The iteration can then make the software perform with faster data access, or the web pages can appear faster. Sometimes this kind of requirement arises in technical areas rather than coming through a user story in Agile, and it is an intangible benefit. Even an upgrade of the database, OS, memory, etc. can increase the data access speed.

 

9. What is feedback analysis, and when can it be done?

Ans: As per the agile principles, stakeholder collaboration is an ongoing activity. At any time a stakeholder can give informal or formal feedback on any software item or on any approach followed by the agile teams. In the agile model, informal feedback often happens during discussions; scheduled reviews also happen, during which the reviewers give feedback. Even a test result can fall into the feedback category. All these feedback items need to be analyzed so that the teams deliver working software as per the principles. Sometimes the outcome of feedback analysis identifies process improvements for the next iteration, and these should be considered as retrospective items. Hence feedback analysis is a mandated activity at every task-completion stage in an Agile project.

 

10. What is a demo in the agile model?

Ans: With reference to the rapid prototype approach, agile teams are supposed to demonstrate the skeleton design for a new module. The plan is to demonstrate the skeleton system to the stakeholders and get feedback for processing further SPRINT or iteration items. This demo is organized depending on the software or the initial plan for a given iteration.

Keep watching this site for further updates.

Contact for any guidance/coaching.

 

Vcard-Shanthi Kumar V

SDLC & Agile – Interview questions for Freshers -2

Agile Cirlce1

In continuation of my previous questions blog [https://vskumar.blog/2017/09/04/sdlc-agile-interview-questions-for-freshers-1] on this topic, the following questions and answers are presented.

SDLC and Agile Model:

Questions on SDLC Phases:

1. How has the agile methodology been architected?

Ans: The agile methodology has been architected with 12 principles that govern the agile development approach.

2. What is the highest priority in the agile development model?

Ans: The highest priority is customer satisfaction through early and continuous delivery of software that works for the customer's requirements.

3. Why do agile development models need to accept change requests irrespective of the development stage?

Ans: The fundamental approach of agile development gives the users the facility to introduce new or enhanced requirements at any point before delivery.

4. During the agile development approach, who needs to work together?

Ans: The business people and software developers need to work collaboratively and consistently throughout the project life cycle.

5. What do we need to do during the agile development model to get the right delivery?

Ans: In an agile project we need self-motivated individuals, and at the same time we need to supply the required human and non-human resources to get the job done.

6. When does the life cycle of the agile model end?

Ans: The agile model continues until the retirement of the product or project. When the customer decides on the retirement of the product, the project operation is terminated.

7. Why do we need face-to-face conversation in the agile development approach?

Ans: The agile principles guide the project resources to have face-to-face conversations, as this is the most efficient and effective method of communication.

8. How can you measure the progress and success of an agile project?

Ans: The basic concept of agile is to deliver working software; delivered working software components are the primary measure of progress.

9. How does the agile development process need to be promoted, and to whom?

Ans: The agile development process needs to promote sustainable development for continuous delivery, among the sponsors, developers, and users.

10. Why do we need technical excellence and good design in project delivery?

Ans: The concept of agile is continuous delivery to the users as per the requirements in an iterative development approach. Continuous attention to technical excellence and good software design accelerates the team's capacity to deliver.

Keep watching this site for further updates.

Contact for any guidance/coaching.

View my UrbanPro-profile

URL is : https://www.urbanpro.com/vskumar 

 

Vcard-Shanthi Kumar V