Uninstalling Packages from Ubuntu or Red Hat


Just like Windows XP, Windows 7, Windows 8, and Mac OS X, Linux is an operating system. An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply – the operating system manages the communication between your software and your hardware. Without the operating system (often referred to as the “OS”), the software wouldn’t function.

The OS comprises a number of pieces:
  • The Bootloader: The software that manages the boot process of your computer. For most users, this will simply be a splash screen that pops up and eventually goes away to boot into the operating system.
  • The kernel: This is the one piece of the whole that is actually called “Linux”. The kernel is the core of the system and manages the CPU, memory, and peripheral devices. The kernel is the “lowest” level of the OS.
  • Daemons: These are background services (printing, sound, scheduling, etc) that either start up during boot, or after you log into the desktop.
  • The Shell: You’ve probably heard mention of the Linux command line. This is the shell – a command process that allows you to control the computer via commands typed into a text interface. This is what, at one time, scared people away from Linux the most (assuming they had to learn a seemingly archaic command line structure to make Linux work). This is no longer the case. With modern desktop Linux, there is no need to ever touch the command line.
  • Graphical Server: This is the sub-system that displays the graphics on your monitor. It is commonly referred to as the X server or just “X”.
  • Desktop Environment: This is the piece of the puzzle that the users actually interact with. There are many desktop environments to choose from (Unity, GNOME, Cinnamon, Enlightenment, KDE, XFCE, etc). Each desktop environment includes built-in applications (such as file managers, configuration tools, web browsers, games, etc).
  • Applications: Desktop environments do not offer the full array of apps. Just like Windows and Mac, Linux offers thousands upon thousands of high-quality software titles that can be easily found and installed. Most modern Linux distributions (more on this in a moment) include App Store-like tools that centralize and simplify application installation. For example: Ubuntu Linux has the Ubuntu Software Center (Figure 1) which allows you to quickly search among the thousands of apps and install them from one centralized location. 

How to uninstall the SSH server on Linux (Red Hat/Ubuntu)?

Sometimes, in order to make your system more secure, you want to stop unwanted services and remove them permanently so that your machine is not compromised. Such services include SSH, FTP, etc., which should be uninstalled if they are not used.
In this post we will see how to stop and uninstall the SSH server on Red Hat based and Ubuntu based machines.
On Red Hat based machines:
Step 1: Stop the SSH service before uninstalling it.
service sshd stop
chkconfig sshd off
Step 2: Remove the SSH server package from the machine using the yum command below.
yum remove openssh-server
Note that yum erase openssh-server does exactly the same thing; erase is simply an alias for remove.

On Ubuntu machines:
Step 1: Stop the SSH service before uninstalling it.
/etc/init.d/ssh stop
or
service ssh stop
Step 2: Uninstall the SSH server package, purging its configuration files as well.
apt-get remove --purge openssh-server
That's it: you are done stopping and uninstalling the SSH server on Red Hat and Ubuntu based machines.
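The distribution-specific commands above can be wrapped in a small helper. This is just a sketch (the function name is made up) that picks the right removal command based on which package manager is present:

```shell
#!/bin/sh
# Pick the SSH-server removal command for the current distribution.
pick_remove_cmd() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "apt-get remove --purge openssh-server"   # Debian/Ubuntu
  elif command -v dnf >/dev/null 2>&1; then
    echo "dnf remove openssh-server"               # newer Red Hat/Fedora
  elif command -v yum >/dev/null 2>&1; then
    echo "yum remove openssh-server"               # older Red Hat/CentOS
  else
    echo "unknown package manager" >&2
    return 1
  fi
}

pick_remove_cmd
```

On systemd based releases (RHEL/CentOS 7 and later, modern Ubuntu), the service commands above become systemctl stop sshd and systemctl disable sshd.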
Please comment your thoughts on this.
Show your love by sharing this!


Docker Basics

What is a Docker container? In Part 1 of this series, we explore the Docker open source project. Visit Part 2 to learn how Docker containers work.
Docker is an open source software development platform. Its main benefit is packaging applications in “containers,” making them portable across any system running the Linux operating system (OS).

Container technology has been around for a while, but momentum and hype around Docker’s approach to containers has pushed the approach to the forefront in the last year. Docker is one form of container technology.

Docker Open Source Background

Docker came along in March 2013, when the code, invented by Solomon Hykes, was released as open source. It is also the name of the company, founded by Hykes, that supports and develops the Docker code.
Both the Docker open source container and the company’s approach have a lot of appeal, especially for cloud applications and agile development. Because many different Docker applications can run on top of a single OS instance, this can be a more efficient way to run applications.
The company’s approach also speeds up applications development and testing, because software developers don’t have to worry about shipping special versions of the code for different operating systems. Because of the lightweight nature of its containers, the approach can also improve the portability of applications. Docker and containers are an efficient and fast way to move pieces of software around in the cloud.
The company received $40 million in venture capital funding from Sequoia Capital in September 2014, and several reports at the time said the valuation was close to $400M. The platform consists of Docker Engine, a runtime and software packaging tool, and Docker Hub, a service for sharing applications in the cloud.

Portability and Scalability

Some software gurus argue that the real benefit of container technology is that it allows for a much larger scale of applications in virtualized environments, because of the efficiencies of virtualizing the OS. Others argue that the real benefit is in DevOps and testing, because applications can be built and tested much more quickly.
The downside of Docker open source container technology is that it is limited to use in Linux environments. Also, as an application technology, it requires specific expertise and security safeguards geared toward a container architecture.
Some basics:
1. Remove all containers that are using a given image (here arungupta/wildfly-centos):
docker ps -a | grep arungupta/wildfly-centos | awk '{print $1}' | xargs docker rm

2. To remove any container:
docker rm <container_id>

3. To list the container IDs of all containers, including stopped ones:
docker ps -a

4. Creating a Docker image using a Dockerfile:
docker build -t <image_name> <directory in which the Dockerfile exists>

5. To find the IP address of any container:
docker inspect <container_id> | grep IP

6. Tagging and renaming of any Docker image:
docker tag <image_id> <repo_name>/<image_name>:latest

7. Removing Docker images and containers as required (see the tricks in items 1-3 above).

8. Creating an image from an existing container:
docker commit <running_container_id> ubuntu-with-jenkinsuser




Note: Concept of EXPOSE and -p
If you specify neither, the service in the container will not be accessible from anywhere except from inside the container itself.
If you EXPOSE a port, the service in the container is not accessible from outside Docker, but it is accessible from other Docker containers. So this is good for inter-container communication.
If you EXPOSE and -p a port, the service in the container is accessible from anywhere, even outside Docker.
Important: if you use -p but do not EXPOSE, Docker does an implicit EXPOSE. This is because if a port is open to the public, it is automatically also open to other Docker containers. Hence -p includes EXPOSE, which is why it is not listed above as a fourth case.
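A minimal sketch of the two mechanisms (the image name and port are made up for illustration):

```
# Dockerfile fragment: EXPOSE makes port 8080 reachable from other
# containers on the same Docker network, but not from the host.
EXPOSE 8080

# At run time, -p additionally publishes the port to the host:
#   docker run -d -p 8080:8080 my-web-image
# Without -p, only other containers can reach port 8080.
```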





Docker Networking:
1. https://www.ctl.io/developers/blog/post/docker-networking-rules/
2. http://linoxide.com/linux-how-to/networking-commands-docker-containers/




Continuous Integration with Docker


The Problem

Git has made merging almost a non-event. A common way to handle development is to have a separate branch per feature and merge that feature back to the development/master branch when the feature is done and all tests pass. For unit tests, there is already viable tooling that can help you auto-test feature branches before you merge them into develop or master. Travis CI already offers this for GitHub projects; you should try it out if you haven’t already!
When it comes to integration tests there is a problem. When you have complex integration tests that need a running environment, it can become hard to manage all these branches and environments. You could spin up VMs for each feature branch that you create or try to share one environment for all feature branches. Or the worst alternative: you can leave integration tests for the integration branch. The problem with the last option is that it breaks the principle of “do no harm”: you want to merge when you know it won’t break things and now you have a branch that’s meant to break? Sounds like bad design to me.
Docker

Over the last year, Docker sprang up as a technology. Docker is a technology to manage and run lightweight Linux containers. Containers can be booted in milliseconds and provide full isolation of the filesystem, network, and processes. A couple of months ago I realised Docker could be the solution to the issues we’ve been having with our old CI setup, and started looking into ways to integrate Docker with Jenkins.

Integrating Jenkins with Docker

I started by looking at existing Jenkins plugins that would handle this, but wasn’t pleased with the existing solutions. For example, the Jenkins Docker plugin requires Docker images running SSH and provisions those containers dynamically as Jenkins slaves, which is in my opinion needlessly complex. I also wanted to integrate Docker as seamlessly as possible, without requiring team members to do a lot of work setting things up.




To download the Docker plugin, click here.
Part 1


Mesos vs Kubernetes

1. Kubernetes and Mesos are a match made in heaven. Kubernetes provides the Pod (a group of co-located containers) abstraction, along with Pod labels for service discovery, load balancing, and replication control. Mesos provides fine-grained resource allocation for Pods across the nodes in a cluster, and can make Kubernetes play nicely with other frameworks running on the same cluster resources.

2. Kubernetes is an opinionated orchestration tool that comes with service discovery and replication baked in. It may require some redesigning of existing applications, but used correctly it will result in a fault-tolerant and scalable system. Mesos is a low-level, battle-hardened scheduler that supports several frameworks for container orchestration, including Marathon, Kubernetes, and Swarm.

3. At the time of writing, Kubernetes and Mesos are more developed and stable than Swarm. In terms of scale, only Mesos has been proven to support large-scale systems of hundreds or thousands of nodes. However, when looking at small clusters of, say, less than a dozen nodes, Mesos may be an overly complex solution.

4. Kubernetes is a great place to start if you are new to the clustering world; it is the quickest, easiest, and lightest way to kick the tires and start experimenting with cluster-oriented development. It offers a very high level of portability, since it is supported by many different providers (Microsoft, IBM, Red Hat, CoreOS, Mesosphere, VMware, etc.).

5. If you have existing workloads (Hadoop, Spark, Kafka, etc.), Mesos gives you a framework that lets you interleave those workloads with each other and mix in some of the new stuff, including Kubernetes apps.

6. Mesos gives you an escape valve if you need capabilities that are not yet implemented by the community in the Kubernetes framework.



Mesos cluster


Figure 12-4. Mesos Cluster



About ZooKeeper


ZooKeeper is an open source Apache™ project that provides a centralized infrastructure and services that enable synchronization across a cluster. ZooKeeper maintains common objects needed in large cluster environments. Examples of these objects include configuration information, hierarchical naming space, and so on. Applications can leverage these services to coordinate distributed processing across large clusters.

ZooKeeper provides an infrastructure for cross-node synchronization and can be used by applications to ensure that tasks across the cluster are serialized or synchronized. It does this by maintaining status type information in memory on ZooKeeper servers. A ZooKeeper server is a machine that keeps a copy of the state of the entire system and persists this information in local log files.

Within ZooKeeper, an application can create what is called a znode (a file that persists in memory on the ZooKeeper servers). The znode can be updated by any node in the cluster, and any node in the cluster can register to be informed of changes to that znode (in ZooKeeper parlance, a server can be set up to “watch” a specific znode). Using this znode infrastructure (and there is much more to this such that we can’t even begin to do it justice in this section), applications can synchronize their tasks across the distributed cluster by updating their status in a ZooKeeper znode, which would then inform the rest of the cluster of a specific node’s status change. This cluster-wide status centralization service is essential for management and serialization tasks across a large distributed set of servers.
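To make the znode infrastructure concrete, here is an illustrative session using the zkCli.sh shell that ships with ZooKeeper (a sketch only: the server address and znode paths are made up, and `get -w` is the watch syntax in ZooKeeper 3.5 and later):

```
# Connect to a ZooKeeper server
bin/zkCli.sh -server localhost:2181

# One node publishes its status as a znode
create /app/node1 "starting"

# Another client reads the znode and registers a watch on it
get -w /app/node1

# When node1 updates its status, every watcher is notified
set /app/node1 "ready"
```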


Basic Vagrant tutorial for Windows









Maybe you know the situation. You work in a company with a big software development department and high fluctuation. Every couple of weeks a new intern starts. Every couple of months a new employee or freelancer starts, and you always have the problem of onboarding. They all need a development environment to get started, and in the best case all development environments should be identical. You want to ensure that everybody is working with the exact same version of MongoDB, Java, Ruby, PHP, Eclipse, NetBeans, and whatever else.
In some companies this onboarding, the whole process of setting up a new development environment, takes 5 days. A whole week until somebody can start to become productive. With Vagrant that whole ramp-up time can be cut down to less than 1 hour!
In the previous blog post I described how to get started with Vagrant quickly. This blog post will describe how to use Vagrant to set up a whole development environment.
This blog post goes into detail on how we leverage Vagrant, an open source tool for building and distributing virtualized development environments, in our day-to-day work. We use it with a team of 7 people to integrate a pretty complex application.


Shared and synced folder concept in Vagrant

Basically, shared folders were renamed to synced folders from v1 to v2 (docs).

Vagrantfile directory mounted as /vagrant in guest

Vagrant mounts the current working directory (where the Vagrantfile resides) as /vagrant in the guest; this is the default behaviour.
Note: You can disable this behaviour by adding
config.vm.synced_folder ".", "/vagrant", disabled: true
in your Vagrantfile.

Use VAGRANT_LOG=debug vagrant up or VAGRANT_LOG=debug vagrant reload to start the VM with more output about why the synced folder is not mounted. It could be a permission issue (the mode bits of /tmp on the host should be drwxrwxrwt).
I did a quick test using the following and it worked (I used the Opscode bento raring Vagrant base box):
config.vm.synced_folder "/tmp", "/tmp/src"
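Putting the synced folder settings together, a minimal Vagrantfile might look like this (the box name is just an example):

```
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"   # example box name
  # The Vagrantfile directory is shared as /vagrant by default.
  # Additionally sync the host's /tmp to /tmp/src in the guest:
  config.vm.synced_folder "/tmp", "/tmp/src"
  # To disable the default /vagrant share instead:
  # config.vm.synced_folder ".", "/vagrant", disabled: true
end
```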



Provisioning in Vagrant

1. Allows you to install software automatically.
2. Alter configurations.
3. You can install software manually (not advisable for repeatable tasks).
4. You can vagrant destroy and vagrant up and have a fully ready-to-go work environment with a single command.

The shell provisioner takes various options. One of inline or path is required:
1. inline (string) - Specifies a shell command inline to execute on the remote machine. See the inline scripts section below for more information.

2. path (string) - Path to a shell script to upload and execute. It can be a script relative to the project Vagrantfile or a remote script (like a gist).

3. args (string or array) - Arguments to pass to the shell script when executing it as a single string. These arguments must be written as if they were typed directly on the command line, so be sure to escape characters, quote, etc. as needed. You may also pass the arguments in using an array. In this case, Vagrant will handle quoting for you.

Sample examples:

Vagrant.configure("2") do |config|
  config.vm.provision "shell",
    inline: "echo Hello, World"
end

$script = <<SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end

Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "script.sh"
end

Vagrant.configure("2") do |config|
  config.vm.provision "shell", path: "https://example.com/provisioner.sh"
end

Vagrant.configure("2") do |config|
  config.vm.provision "shell" do |s|
    s.inline = "echo $1"
    s.args = "'hello, world!'"
  end
end

Vagrant.configure("2") do |config|
  config.vm.provision "shell" do |s|
    s.inline = "echo $1"
    s.args = ["hello, world!"]
  end
end


Configuring Vagrant with Eclipse

If you are a developer and not a pure user, you might want to set up your Eclipse IDE so that you can debug and develop your application in an optimized way.
Here we will explain why we need Vagrant in our development environment, why we need to integrate Eclipse with Vagrant, and how we can integrate the two.

It is worth asking why we need Vagrant on our local machines at all. There can be different purposes: if you want to test your application manually in several different environments, then you obviously need Vagrant. What Vagrant actually does is a good question for all of us: basically, Vagrant creates an environment, such as a VM, into which we can then deploy our application on a particular server.

I have used Vagrant to create an environment that brings up a VM with Docker installed and a Tomcat container running inside it, whose webapps directory was synced to the local directory where I usually put my WAR file before restarting Tomcat.

It really helps a lot to test my application in different environments.



1. Installing the plugin in Eclipse









Nginx Installation


How to set up NGINX
In this article you will learn how to set up NGINX along with its directory structure. You will also learn how to set up virtual hosts for NGINX and port forwarding, also known as reverse proxying.

We assume that you already have an Ubuntu based machine up and running, with your hostname and network configured.

If you do not, or you don’t know how to set your hostname for Linux or Ubuntu, please refer to one of our previous articles on how to set your hostname under Ubuntu or Linux.

First, update your repositories and upgrade any existing packages that may be installed.
sudo apt-get update
sudo apt-get upgrade
When the update and upgrade process is completed, you can install the MySQL server and client.

sudo apt-get install mysql-server mysql-client

During the installation of MySQL it will ask you for the root password. Please enter the root password for the root user of the MySQL server and remember it as you may need it later when creating new databases, users etc.

Now our MySQL server is up and running, and you can install the NGINX web server.

sudo apt-get install nginx

When you get the “Completed” message it means NGINX is installed. You need to start the service for NGINX, as by default it does not start.

You can start service for NGINX by using below command.
sudo /etc/init.d/nginx start
or
sudo service nginx start

When the service is started, you can find your IP address; or, if you have a domain configured and its DNS pointed to the machine you are installing NGINX on, you can try opening it to see the default test page.

In our case, we find our IP address using the command below:
$ sudo ifconfig

We get the result “192.168.10.20”.


Open up any web browser and point the URL to http://192.168.10.20; you should get the default NGINX test page. This means NGINX is installed and working properly.

Location of the NGINX configuration files
----------------------------------------------


All Nginx configuration files are located in the /etc/nginx/ directory. The primary configuration file is /etc/nginx/nginx.conf.

This is where the files will be located if you install Nginx from the package manager. Other possible locations include /opt/nginx/conf/.

     access_log  /var/log/nginx/access.log;
     error_log   /var/log/nginx/error.log;
     include     /etc/nginx/conf.d/*.conf;
     include     /etc/nginx/sites-enabled/*;


By default the path of the NGINX html folder on Ubuntu is /usr/share/nginx/html.
To restart NGINX: sudo service nginx restart  or  nginx -s reload
To check whether NGINX has started: sudo service nginx status
To test the NGINX URL: curl http://127.0.1.1:81/index.html
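As promised in the introduction, virtual hosts and reverse proxying are configured with server blocks, typically in files under /etc/nginx/sites-enabled/. A minimal sketch (the server name and upstream port 8080 are made-up examples):

```
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to an application listening locally on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, reload the configuration with sudo service nginx restart or nginx -s reload as above.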


Note: To check whether NGINX is listening, use "netstat -nlp | grep nginx".



Configure Jenkins with SonarQube for static code analysis and integration


Sonar and Jenkins Integration

Continuous integration deals with merging code implemented by multiple developers into a single build system. Developers frequently integrate their code, the final build is automated, and developer unit tests are executed automatically to ensure the stability of the build. This approach is inspired by extreme programming methodologies. With a test driven approach put into place, continuous integration yields significant benefits.

1. Install the SonarQube plugin

2. Global settings
3. Project level configuration

Here "Sonar way" is the default profile; you can also create a new profile.


4. Adding a profile in pom.xml

  <profile>
    <id>sonar</id>
    <properties>
      <sonar.jdbc.url>jdbc:mysql://localhost:3306/sonar</sonar.jdbc.url>
      <sonar.jdbc.driverClassName>com.mysql.jdbc.Driver</sonar.jdbc.driverClassName>
      <sonar.jdbc.username>sonar</sonar.jdbc.username>
      <sonar.jdbc.password>sonar</sonar.jdbc.password>
      <sonar.host.url>http://localhost:9000/sonar/</sonar.host.url>
    </properties>
  </profile>
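With this profile in place, a Jenkins Maven job would typically trigger the analysis by putting something like the following in its Goals field (the goal and the -Psonar profile id match the snippet above; adjust to your setup):

```
clean install sonar:sonar -Psonar
```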
