How To Install Apache (Web Server) in Linux

For Fedora / RHEL / CentOS Linux

Step 1: Install Apache

# yum install httpd

Step 2: Start Apache

To start Apache (the httpd service), run:

# /etc/init.d/httpd start
OR
# service httpd start

Also configure the httpd service to start automatically at boot:

# chkconfig httpd on
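
Note: On Fedora and RHEL / CentOS 7 releases that use systemd, the equivalent commands to start Apache and enable it at boot are:

# systemctl start httpd
# systemctl enable httpd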

Note: The default document root directory on Fedora / RHEL / CentOS Linux is /var/www/html.
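
To quickly confirm that pages from this document root are being served, you can drop a simple test page into it and fetch it locally with curl (the file name and content here are just an example, and curl may need to be installed first):

# echo "It works" > /var/www/html/index.html
# curl http://localhost/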

Step 3: Verify the Apache Port

To verify whether the httpd port (80) is open, use the command:

# netstat -antp | grep :80
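
On newer systems where the net-tools package (and therefore netstat) is not installed, the ss command provides the same check:

# ss -tlnp | grep :80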

For Debian / Ubuntu

Step 1: Install Apache

Use the apt-get command:

# apt-get install apache2

Step 2: Start Apache

# /etc/init.d/apache2 start
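
On Debian / Ubuntu releases that use systemd, the equivalent commands to start Apache and enable it at boot are:

# systemctl start apache2
# systemctl enable apache2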

Note: The default document root directory on Debian / Ubuntu Linux is /var/www/.

Installation from CD/DVD

We can also install httpd from the CD/DVD with the rpm command:

# rpm -ivh httpd*
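
The disc normally has to be mounted first. The mount point and package directory below are only an example and can differ between releases:

# mount /dev/cdrom /mnt
# rpm -ivh /mnt/Packages/httpd*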

How To Create FTP Server in Linux

Introduction

The File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files from one host to another host over a TCP-based network, such as the Internet.
FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password, and encrypts the content, FTP is often secured with SSL/TLS (FTPS). SSH File Transfer Protocol (SFTP) is sometimes also used instead, but is technologically different.

Package name:
Linux: vsftpd
Windows: IIS
FTP control connection port: 21
FTP data connection port: 20

Step 1: Installation

First, check whether the vsftpd package is installed on the machine.
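
A quick way to check is with rpm, which prints the package version if it is installed:

# rpm -q vsftpd

If it is not installed, install it with yum: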

# yum install -y vsftpd

Step 2: Start the service

# service vsftpd start

To enable vsftpd in the multi-user runlevels:

# chkconfig vsftpd on
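
You can confirm which runlevels the service is now enabled in with:

# chkconfig --list vsftpd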

Step 3: Configure vsftpd.conf File

Now edit the /etc/vsftpd/vsftpd.conf file.

# vim /etc/vsftpd/vsftpd.conf

The file will look similar to the following:

# Example config file /etc/vsftpd/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
# Uncomment this to enable any form of FTP write command.
write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
local_umask=022
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# The target log file can be vsftpd_log_file or xferlog_file.
# This depends on setting xferlog_std_format parameter
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# The name of log file when xferlog_enable=YES and xferlog_std_format=YES
# WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
#xferlog_file=/var/log/xferlog
#
# Switches between logging into vsftpd_log_file and xferlog_file files.
# NO writes to vsftpd_log_file, YES to xferlog_file
xferlog_std_format=YES
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
ascii_upload_enable=YES
ascii_download_enable=YES
#
# You may fully customise the login banner string:
ftpd_banner=Welcome to our FTP service.
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd/banned_emails
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
#chroot_local_user=YES
#chroot_list_enable=YES
# (default follows)
#chroot_list_file=/etc/vsftpd/chroot_list
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
ls_recurse_enable=YES
#
# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=YES
#
# This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
# sockets, you must run two copies of vsftpd with two configuration files.
# Make sure, that one of the listen options is commented !!
#listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
use_localtime=YES

Step 4: Restart the service

Now restart the vsftpd service:

# service vsftpd restart
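
As with the web server earlier, you can verify that vsftpd is listening on the FTP control port (21):

# netstat -antp | grep :21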

Step 5: Create FTP Users

Now create an FTP user:

# adduser -c 'FTP USER Test' -m test
# passwd test

To change the home directory of a user, use the following command:

# usermod --home /var/www/ username

Note: If SELinux is enabled on the server, execute the following command:

# setsebool -P ftp_home_dir=1
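
A quick way to confirm that the account works is to connect from the server itself with the command-line ftp client (assuming the ftp client package is installed) and log in as the new user:

# ftp localhost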

Conclusion

The FTP account is working now. You can use your FTP server. 🙂

Top 10 Linux Distributions

Linux is omnipresent, even if you don't realize it. I have been using Linux as my only OS since 2005, and with every passing year I realize that it has much more to offer than I initially understood back in 2005. There is something for everyone. In this article, I have picked some of the best Linux distros to help you get the job done.

1. Linux Mint

Linux Mint, a distribution based on Ubuntu, was first launched in 2006 by Clement Lefebvre, a French-born IT specialist living in Ireland. Originally maintaining a Linux web site dedicated to providing help, tips and documentation to new Linux users, the author saw the potential of developing a Linux distribution that would address the many usability drawbacks associated with the generally more technical, mainstream products. After soliciting feedback from the visitors on his web site, he proceeded with building what many refer to today as an “improved Ubuntu” or “Ubuntu done right”.

2. Ubuntu

The launch of Ubuntu was first announced in September 2004. Although a relative newcomer to the Linux distribution scene, the project took off like no other before, with its mailing lists soon filled with discussions by eager users and enthusiastic developers. In the years that followed, Ubuntu grew to become the most popular desktop Linux distribution and has greatly contributed towards developing an easy-to-use and free desktop operating system that can compete well with any proprietary ones available on the market.

3. Debian

Debian GNU/Linux was first announced in 1993. Its founder, Ian Murdock, envisaged the creation of a completely non-commercial project developed by hundreds of volunteer developers in their spare time. With sceptics far outnumbering optimists at the time, it seemed destined to disintegrate and collapse, but the reality was very different. Debian not only survived, it thrived and, in less than a decade, it became the largest Linux distribution and possibly the largest collaborative software project ever created!

4. Mageia

Mageia might be the newest distribution on this list, but its roots go back to July 1998 when Gaël Duval launched Mandrake Linux. At the time it was just a fork of Red Hat Linux with KDE as the default desktop, better hardware detection and some user-friendly features, but it gained instant popularity due to positive reviews in the media. Mandrake was later turned into a commercial enterprise and renamed to Mandriva (to avoid some trademark-related hassles and to celebrate its merger with Brazil’s Conectiva) before almost going bankrupt in 2010. It was eventually saved by a Russian venture capital firm, but this came at a cost when the new management decided to lay off most of the established Mandriva developers at the company’s Paris headquarters. Upon finding themselves without work, they decided to form Mageia, a community project which is a logical continuation of Mandrake and Mandriva, perhaps more so than Mandriva itself.

5. Fedora

Although Fedora was formally unveiled only in September 2004, its origins effectively date back to 1995 when it was launched by two Linux visionaries — Bob Young and Marc Ewing — under the name of Red Hat Linux. The company’s first product, Red Hat Linux 1.0 “Mother’s Day”, was released in the same year and was quickly followed by several bug-fix updates. In 1997, Red Hat introduced its revolutionary RPM package management system with dependency resolution and other advanced features which greatly contributed to the distribution’s rapid rise in popularity and its overtaking of Slackware Linux as the most widely-used Linux distribution in the world. In later years, Red Hat standardised on a regular, 6-month release schedule.

6. openSUSE

The beginnings of openSUSE date back to 1992 when four German Linux enthusiasts — Roland Dyroff, Thomas Fehr, Hubert Mantel and Burchard Steinbild — launched the project under the name of SuSE (Software und System Entwicklung) Linux. In the early days, the young company sold sets of floppy disks containing a German edition of Slackware Linux, but it wasn’t long before SuSE Linux became an independent distribution with the launch of version 4.2 in May 1996. In the following years, the developers adopted the RPM package management format and introduced YaST, an easy-to-use graphical system administration tool. Frequent releases, excellent printed documentation, and easy availability of SuSE Linux in stores across Europe and North America resulted in growing popularity for the distribution.

7. Arch Linux

The KISS (keep it simple, stupid) philosophy of Arch Linux was devised around the year 2002 by Judd Vinet, a Canadian computer science graduate who launched the distribution in the same year. For several years it lived as a marginal project designed for intermediate and advanced Linux users and only shot to stardom when it began promoting itself as a “rolling-release” distribution that only needs to be installed once and which is then kept up-to-date thanks to its powerful package manager and an always fresh software repository. As a result, Arch Linux “releases” are few and far between and are now limited to a basic installation CD that is issued only when considerable changes in the base system warrant a new install media.

8. CentOS

Launched in late 2003, CentOS is a community project with the goals of rebuilding the source code for Red Hat Enterprise Linux (RHEL) into an installable Linux distribution and of providing timely security updates for all included software packages. To put it more bluntly, CentOS is a RHEL clone. The only technical difference between the two distributions is branding: CentOS replaces all Red Hat trademarks and logos with its own. Nevertheless, the relations between Red Hat and CentOS remain amicable and many CentOS developers are in active contact with, or even employed directly by, Red Hat.

9. PCLinuxOS

PCLinuxOS was first announced in 2003 by Bill Reynolds, better known as "Texstar". Prior to creating his own distribution, Texstar was already a well-known developer in the Mandrake Linux community of users for building up-to-date RPM packages for the popular distribution and providing them as a free download. In 2003 he decided to build a new distribution, initially based on Mandrake Linux, but with several significant usability improvements. The goals? It should be beginner-friendly, have out-of-the-box support for proprietary kernel modules, browser plugins and media codecs, and should function as a live CD with a simple and intuitive graphical installer.

10. FreeBSD

FreeBSD, an indirect descendant of AT&T UNIX via the Berkeley Software Distribution (BSD), has a long and turbulent history dating back to 1993. Unlike Linux distributions, which are defined as integrated software solutions consisting of the Linux kernel and thousands of software applications, FreeBSD is a tightly integrated operating system built from a BSD kernel and the so-called “userland” (therefore usable even without extra applications). This distinction is largely lost once installed on an average computer system – like many Linux distributions, a large collection of easily installed, (mostly) open source applications are available for extending the FreeBSD core, but these are usually provided by third-party contributors and aren’t strictly part of FreeBSD.

How to Download Linux CentOS

Version     Minor release   CD and DVD ISO images
CentOS-7    7.0.1406        64-bit: http://mirrors.nayatel.com/centos/7/isos/x86_64/
CentOS-6    6.6             32-bit: http://mirrors.nayatel.com/centos/6.6/isos/i386/
                            64-bit: http://mirrors.nayatel.com/centos/6.6/isos/x86_64/
CentOS-5    5.11            32-bit: http://mirrors.nayatel.com/centos/5.11/isos/i386/
                            64-bit: http://mirrors.nayatel.com/centos/5.11/isos/x86_64/
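
Once you have chosen a mirror, the ISO can be fetched with wget and its checksum compared against the checksum file (for example sha256sum.txt) published in the same directory; the ISO file name below is only illustrative:

# wget http://mirrors.nayatel.com/centos/7/isos/x86_64/CentOS-7.0-1406-x86_64-DVD.iso
# sha256sum CentOS-7.0-1406-x86_64-DVD.iso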

Linux World

The history of Linux began in 1991 with the commencement of a personal project by Finnish student Linus Torvalds to create a new free operating system kernel. Since then, the resulting Linux kernel has been marked by constant growth throughout its history. Since the initial release of its source code in 1991, it has grown from a small number of C files under a license prohibiting commercial distribution to the 3.18 version in 2015 with more than 18 million lines of source code under the GNU General Public License.

In 1991, in Helsinki, Linus Torvalds began a project that later became the Linux kernel. He wrote the program specifically for the hardware he was using and independent of an operating system because he wanted to use the functions of his new PC with an 80386 processor. Development was done on MINIX using the GNU C compiler. The GNU C Compiler is still the main choice for compiling Linux today. The code, however, can be built with other compilers, such as the Intel C Compiler.

As Torvalds wrote in his book Just for Fun, he eventually ended up writing an operating system kernel. On 25 August 1991 (at age 21), he announced this system in a Usenet posting to the newsgroup "comp.os.minix":

Hello everybody out there using minix –

I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

I’ve currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I’ll get something practical within a few months, and I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them 🙂

Linus (torvalds@kruuna.helsinki.fi)

PS. Yes – it’s free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that’s all I have :-(.

—Linus Torvalds

5 Tips to become a system administrator

There's an emerging discipline of information services design, delivery and lifecycle management (perhaps best represented by formal structures like ITIL or COBIT, which also include a serious governance component) that helps make IT a game-changing ingredient in any organization's efforts to realize its mission and/or financial objectives. It also helps to implement the notion of "continuous improvement" that lets companies and organizations move from success to success in an environment where tools, technologies, and business models change all the time. I think the ITIL fundamentals should be REQUIRED for all IT workers, especially system administrators, because they help put their jobs into perspective within the overall mission and operation of information technology in a company or organization. A good sysadmin must understand the management and procedural components that play such an important role in the successful operations of a business.
Yes, you know what the swap usage is today because there's a problem with the disks thrashing and it's causing the server to go slow. But your users are complaining to management that it's an ongoing issue and now management is asking you for data. What, you haven't been documenting this, so it's now your word against Sales and Marketing? Guess who wins that argument by default? You're responsible for the system, so they will make this your problem. So get, build, or buy a system to monitor, measure, and record that data so you can build pretty PowerPoint slides for finance the next time you need to ask for hardware upgrades, or to prove that the issues are caused by bad software rather than your perfectly functioning servers. Even if you are just running a single server for an employer, a client, or even yourself, it's good data to have for some unforeseen need someday.

A shortlist of things to start monitoring/recording/charting/graphing:

  • Load average
  • Memory usage
  • Disk I/O (transactions per second)
  • Network throughput (in Mbits/sec)
  • Network throughput per virtual host/site
  • Transfer (in GB/month)
  • Transfer per virtual host
  • Disk storage (monthly, in GB), and also a daily rolling average if files are uploaded and deleted regularly
  • Average response time of a test URI under your control (in milliseconds)
  • Average response time of a PHP (or Ruby/Python/etc.) page under your control that does not change. Testing real web pages gives you a consistent baseline that you can use to narrow the problem to the server, the OS, or the web code itself.
  • SSH logins per day/month by user and IP address
  • Anything you feel is necessary, or will get questions on later
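
Most of these numbers can be sampled with standard command-line tools and appended to a log from cron; a rough sketch (iostat and sar come from the sysstat package, the SSH log path is the RHEL-family location, and the test URL is only a placeholder):

# uptime                                  # load average
# free -m                                 # memory and swap usage, in MB
# iostat -dxk 5 2                         # disk I/O, including transactions per second
# sar -n DEV 5 2                          # network throughput per interface
# df -h                                   # disk storage
# curl -o /dev/null -s -w '%{time_total}\n' http://localhost/   # response time of a test URI, in seconds
# grep -c 'Accepted' /var/log/secure      # SSH logins recorded in the current log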

Once you have consistent information, you’ll start seeing patterns and can look for things out of the ordinary. It’s also good for correlating data to behaviors when you’re troubleshooting issues and aren’t sure where to start.
Practice basic project management, even for small, one-person projects. Write up a small scope of work, write requirements, get sign-off from stakeholders on their expectations, plan a schedule, and record your activities. Write up a postmortem document at the end. Even if it's just for yourself. It doesn't have to be fancy, and it certainly doesn't have to be formal PMBoK activities. It may seem bureaucratic managing all that paper, and it may seem like you're spending more time on paperwork than sysadminning, but it helps keep you organized when your boss hands you a random high-priority assignment that pulls you away from your task. It's also handy when you build a new system and users complain that it doesn't do what they wanted it to do. See? You got their sign-off on the requirements document right there…

Even if it’s just for yourself, one day you’ll ask yourself, “now why on earth did I install Acme::Phlegethoth on this server? Oh yeah, it was for that weird commune who needs it for their application code…”
Again, this may seem bureaucratic, but if you spend your days just "doing stuff" without a To-Do list, you may find it difficult to explain to your boss next week exactly what you've been doing with your time. I've become a fan of Kanban boards lately because they're a visual device that your boss (or anyone who assigns you work) can interact with. Let's say I've got three items I plan to work on today that should fill up my 8 hours. "Oh, you need me to work on this other item instead? Yes sir! Here is what I planned to work on today. Which one should I deprioritize in favor of this one? Oh, so it's more important than this one, but not as important as these two? That's fine, I can requeue that lower-priority one and get to it later." This helps set expectations. I know of one graphic designer who used it to coordinate her work between three competing project managers. If one asked her to prioritize something, she'd show him her board and send him to the other project managers to negotiate the conflict and coordinate their deadlines. Even if no one else looks at your board but you, it helps to keep you organized.
Work on your communication skills. It took me a while to really understand why this is important. Yes, today you just want to sit in a server room, keep things running, and look at Lolcats. But tomorrow, you may have other people assisting (or working for) you. You need to be able to communicate expectations. You need to propose and advocate your ideas to your peers or to management (great ideas never stand on their own merit unless and until they are properly communicated). Maybe you need to convince someone that they need to upgrade the web server. Maybe you need to explain your new server proposal that will fix all their problems. Maybe you need to convince the developer that his code is really causing those memory leaks, but you need to present it in a non-accusatory manner. I'm personally a big fan of Toastmasters for this, as it's the cheapest and most effective way to improve your ability to communicate.
Your servers will crash. Your servers will be hax0r3d. Your backups will be corrupted. So start figuring out how to react when that happens. One of the unhappiest days of my life was when my personal server was r00t3d. I did all the right things, but the attackers were more dedicated to getting in than I was to keeping them out. How do you remove a rootkit after it's discovered? I didn't know then, because I had never asked the question (remember? I thought I did all the right things to prevent it in the first place). You can bet I certainly know now! What happens when the server drops off the network because of a power outage, and now it's saying "kernel not found"? What happens when your client or internal user asks you to restore a backup, and the backup is corrupted? You may not get all the answers to these until you actually experience them first-hand, but it's better to start asking the questions now and not when you have angry people yelling at you. Also, once you start asking the questions, you can start setting up "self-training" scenarios to test them. Set up a test box and remove the kernel. See if you can get it back to an operational state. Try to get someone to install a rootkit on it, or at least do a bunch of stuff that you have to troubleshoot and fix. By asking these questions now, you'll be in a much better position to deal with them later.
