Friday, February 25, 2011

HOWTO: Set a maximum CPU consumption percentage for any process

Purpose of cpulimit daemon:
The daemon runs in the background and checks whether any process is consuming more than 20% of the CPU; if one is, the daemon lowers that process's CPU consumption to a maximum of 20%. The goal is that no single process consumes more than 20% of the CPU power.

Note: If you would like to limit only one process, you don't need this "cpulimit daemon"; you only need the cpulimit program itself, executed from a terminal.
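For example, a one-off invocation might look like this (a sketch; the process name is just an example, and it assumes cpulimit is already installed):

$ cpulimit -e firefox -l 20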

Tested environment:
The cpulimit daemon was tested on Ubuntu 8.04 LTS and Ubuntu 10.04 LTS. However, it should run fine on other Ubuntu versions and on other Linux distributions as well, because it does not use any Ubuntu-specific code.
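For the curious, the core idea of such a daemon can be sketched in a few lines of bash (a simplified illustration, not the actual daemon script from this post; a real daemon would also have to avoid attaching a second cpulimit to the same PID):

#!/bin/bash
CPU_LIMIT=20   # maximum CPU percentage allowed per process
while true; do
    # list every process with its current CPU usage
    ps -eo pid,pcpu --no-headers | while read pid cpu; do
        # compare the integer part of the usage against the threshold
        if [ "${cpu%.*}" -gt "$CPU_LIMIT" ]; then
            # -z tells cpulimit to exit once the target process dies
            cpulimit -p "$pid" -l "$CPU_LIMIT" -z &
        fi
    done
    sleep 10
done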


Tuesday, February 22, 2011

Installing and Configuring Ubuntu 10.x KVM Virtualization

Virtualization is the ability to run multiple operating systems simultaneously on a single computer system. Virtualization has come to prominence in recent years because it provides a way to fully utilize CPU and resource capacity of a server system whilst providing stability (in that if one virtualized guest system crashes, the host and any other guest systems continue to run).

Virtualization is also useful for trying out different operating systems without having to configure dual-boot environments. For example, you can try out a different operating system without having to re-partition the disk, shut down Ubuntu, and then boot the new system. You simply start up a virtualized version of the new operating system as a guest operating system. Similarly, virtualization allows you to run Windows operating systems from within an Ubuntu system, providing concurrent access to both operating systems.


There are a number of ways to implement virtualization on Ubuntu. Options include VMware, VirtualBox and KVM. In this and subsequent chapters we will look at KVM based virtualization hosted on an Ubuntu system.


An Overview of Virtualization Techniques

Throughout this book the word virtualization is used within the context of using Xen technology to run multiple operating systems on a single physical computer system. It is important, however, to appreciate that virtualization is actually a "catch all" term that refers to a variety of different solutions and technologies, of which Xen is only one.

When deciding on the best approach to implementing virtualization it is important to have a clear understanding of the different virtualization solutions which are currently available. The purpose of this chapter, therefore, is to describe in general terms the four virtualization techniques in common use today, namely guest operating system, shared kernel, hypervisor and kernel level.
 

Hardware Accelerated Virtualization with QEMU and KVM

In this post I will take you through a series of steps for setting up hardware accelerated virtualization on your PC, if it is supported, along with some background on virtualization. The assumption I make here is that you already have an Intel/AMD processor with virtualization support and a GNU/Linux installation (preferably Debian).
Although it may sound a little tricky, believe me, it is not as difficult to set up as it seems, provided you follow the right steps.

A must read for those who are planning to buy new PC hardware

Even if you don't intend to set up virtualization on your PC, this will be a good read, and you will probably think about virtualization support before buying a new PC.

Why should You care?

Usually people prefer to stick with the operating system they are most comfortable with, and even if they set up a dual-boot machine, they hate to switch between the systems frequently. At the same time, once in a while you need the other OS for certain things: to test some software, to learn an OS (for example, if you are a Windows user who wants to become familiar with GNU/Linux), to isolate an environment for a piece of software (say, a crack that you do not want to infect your Windows installation :) ), and so on. Hardware accelerated virtualization will make the experience of using a virtual or guest OS a breeze and will avoid the frustration of slow emulation without hardware assistance.
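As a quick first check, the CPU flags tell you whether hardware virtualization is available; a non-zero count here means the processor advertises Intel VT-x (vmx) or AMD-V (svm) support:

$ egrep -c '(vmx|svm)' /proc/cpuinfo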

Monday, February 21, 2011

Bash scripting Tutorial


This bash script tutorial assumes no previous knowledge of bash scripting. As you will soon discover in this quick, comprehensive bash scripting guide, learning bash shell scripting is a very easy task.

Let's begin this bash scripting tutorial with a simple "Hello World" script. (For a deeper treatment, see Learning the bash Shell: Unix Shell Programming.)

1. Hello World Bash Shell Script


First you need to find out where your bash interpreter is located. Enter the following on your command line:
$ which bash


Open up your favorite text editor and create a file called hello_world.sh. Insert the following lines into the file:


NOTE: Every bash shell script in this tutorial starts with a shebang ("#!"), which is not read as a comment. The first line is also where you specify your interpreter, which in this case is /bin/bash.


Here is our first bash shell script example:
#!/bin/bash
# declare STRING variable
STRING="Hello World"
#print variable on a screen
echo $STRING


Navigate to a directory where your hello_world.sh is located and make the file executable:
$ chmod +x hello_world.sh


Now you are ready to execute your first bash script:
./hello_world.sh
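If everything went well, the script prints the value of the variable:

$ ./hello_world.sh
Hello World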

How to rebuild Intel Raid (isw) on Linux

For years, I've run many small servers using the popular ICH/ISW Intel Storage Matrix RAID in RAID-1 configuration. For many years this has worked absolutely perfectly, with no issues on both Windows and Linux. But something has always really bugged me: what do I do when a drive fails (and they will)? How does ISW handle it?
On Windows this is simple: you launch the Storage Matrix software and click rebuild (if it isn't rebuilding automagically). But how do you do this on a Linux server, which has no Storage Matrix software? After hours of Googling, I came across the command "dmraid -R". But that didn't work in my test environments.
So I spent a whole afternoon figuring this out. This is what I found.

DMRaid Works. Sort of

DMRaid is the Linux implementation of popular onboard RAID setups. Your RAID can be from Intel, Nvidia, Promise, or a few other vendors that implement it. Intel is the most common one, and that's the one I generally have on all my Intel servers. What *you* may find is that your implementation is different, but this posting should still help you.
My test setup was a simple ICH6R machine with two 160GB Seagate hard drives. I booted up the machine, went into the Intel RAID setup, and created a 20GB mirror partition called "System". I then installed CentOS 5.5 32bit on this machine and went to work.
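Before touching anything, it helps to see how dmraid reports the array. Two commands worth knowing (a sketch; the set name and target disk below are examples, yours will differ):

#dmraid -s
#dmraid -R isw_xxxxxxxx_System /dev/sdb

The first shows the discovered RAID sets and their status; the second is the rebuild command mentioned above, given a set name and a replacement disk.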

Friday, February 18, 2011

Munin Monitoring



Introduction

Munin is a very powerful, feature-rich monitoring server based on Tobias Oetiker's RRDTool. The monitoring server runs every 5 minutes via cron and connects to the various configured nodes. Each node runs a daemon that listens for connections from the server and executes a wide range of completely customisable scripts to return data, from which the Munin server generates graphs.
As the backend graphing engine is based on RRDTool, any feature available in RRDTool is also available as an option to Munin. A really nice thing about Munin and RRDTool is that negative numbers can be graphed.
In this article I will explain how to install Munin as well as Munin-Node on a single server, and how to get Munin to probe your Mikrotik devices via SNMP as well as Telnet (depending on the type of graph). I would strongly advise spending time reading the Munin and RRDTool documentation available on their web sites, so that you gain a clear understanding of how Munin operates and generates graphs.

Munin - HOWTO Monitor Windows

There is a munin-node for Windows called "munin-node-win32" and another project called "munin-nude-win32" (nude... funny...).
Alternatively, here are two other ways to monitor Windows. One uses SNMP, the other an agent specific to Munin.


Using munin-node-win32

This can get you more info than SNMP will supply (namely system temperatures). You can get it at http://code.google.com/p/munin-node-win32/; it works the same as the native munin-node program.
Just run the munin-node-win32 program on each Windows system, and on the monitoring server add each Windows box as if it were a Linux box.
You may need to place msvcr71.dll and msvcp71.dll (part of the Microsoft C++ runtime) in the same directory as munin-node.exe to get it to run (it depends on whether they're already in the system library path). If you do need them, search your system, as another application has probably already installed them elsewhere (and you can just copy them over). Otherwise, a quick web search should get you a site offering them for download.
If you are using plugins written in Python (or any other language) and you want them to interact with a UNIX Munin server, you need to take care with the way newlines are written to stdout. By default, Windows automatically converts every '\n' character into '\r\n', and Munin doesn't seem to like that. To solve this issue, you need to write in binary mode. For Python, add the following lines (taken from http://code.activestate.com/recipes/65443/):
import sys

# On Windows, put stdout into binary mode so '\n' is not rewritten as '\r\n'
if sys.platform == "win32":
    import os, msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

sys.stdout.write("some text\n")
Fred Chu forked the munin-node-win32 client, fixed some bugs, and added some features. The last release was in April 2010 (http://code.google.com/p/munin-nude-win32/).


Thursday, February 17, 2011

Hardware RAID 1 on Ubuntu server 10.10 32bit

My server has Intel hardware RAID (Intel Matrix RAID storage) on the motherboard, and I decided to set up RAID 1 with two 160GB HDDs and install Ubuntu Server 10.10 32bit. Luckily, this version can detect the Intel Matrix RAID driver, so I could install without any problem, but Ubuntu did not let me change the "Bootable" flag to ON, so I could not boot into my system. This is really !@#.

In this post, I’ll show you how to turn that FLAG ON.

Use your Ubuntu server CD, then choose “Boot from the first hard disk”

After you log on to the system, use the fdisk utility as follows.

List all disks on your server

$sudo fdisk -l
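Once you've identified the RAID volume in the listing, open it with fdisk and toggle the flag (a sketch; the device-mapper name below is an example, use whatever fdisk -l showed for your array):

$sudo fdisk /dev/mapper/isw_xxxxxxxx_Volume0

Then, at the fdisk prompt, enter a (toggle the bootable flag), the partition number (e.g. 1), and finally w (write the table and exit).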

nautilus-open-terminal : terminal quick launch

Tonight it's getting late, but I wanted to post something that is useful for quickly getting to the shell from any GUI location. The package nautilus-open-terminal does just what you might guess it does: it allows you to launch a gnome-terminal from a right-click within nautilus.
You might remember I blogged about something similar long ago with nautilus scripts. This is based on the same idea, but now wrapped in a nice shiny deb package. From the package description:
Nautilus plugin for opening terminals in arbitrary local paths: nautilus-open-terminal is a proof-of-concept Nautilus extension which allows you to open a terminal in arbitrary local folders.
To install this quick-launch to the terminal simply run:
$sudo aptitude install nautilus-open-terminal
You may need to restart gnome / nautilus for the change to take effect, but afterwards you'll have an "Open Terminal" entry in your right-click menu anywhere within nautilus or the gnome-desktop area.

Aptitude vs Apt-get Comparison

One of the many attractive features of Ubuntu and Debian Linux is the package management system. Coming from other operating systems and other distributions makes the discovery of the Advanced Packaging Tool, APT, a source of pleasure and delight. Here is a system that solves dependency hell, makes keeping software up to date easy, and facilitates simple installation and removal of software packages.
The mainstay of this system has been apt-get, an extremely useful and versatile program that has been the heart of the Debian APT system. A great and useful program, but not perfect, the newer program aptitude is the result of an effort to improve on apt-get. In addition to a cleaner command line interface, aptitude offers a fullscreen character-based UI and more complete tracking of what has been installed and interdependencies.
aptitude is a newer and improved replacement for apt-get
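For everyday package operations the two are largely interchangeable at the command line; running aptitude with no arguments is what sets it apart (the package name below is a placeholder):

$ sudo aptitude update
$ sudo aptitude install some-package
$ sudo aptitude remove some-package
$ sudo aptitude            # no arguments: full-screen interactive UI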

Installation Low Memory Systems

How to install Ubuntu on low memory systems (Pentium III and earlier machines, with 32-192 MB RAM).
Note: This documentation should be further reviewed for Ubuntu 10.04. See the proposed blueprint: https://blueprints.launchpad.net/ubuntu/+spec/low-memory-install

Requirements

Memory Requirements

Installing Ubuntu on any system requires at least 32 MB of memory: the text-based installer included with the alternate (install) CDs needs that much memory to run reliably. Smaller memory configurations run into problems, and while not impossible, it can be very difficult to complete an installation with less than the minimum RAM requirement.
Depending on the hardware, you can expect a sparse Ubuntu system to boot to a graphical desktop in anywhere from 19 MB to 54 MB. That requirement will fluctuate with system demands and increase while the system is active. Swap space is crucial on low-memory machines, so don't be stingy when setting up your system.
For systems with less than 64MB RAM it is probably worth considering an alternative such as Debian or Wolvix. Other minimalist distros worthy of consideration include Puppy Linux and DSL (Damn Small Linux).

Wednesday, February 16, 2011

Example syntax for Secure Copy (scp)

What is Secure Copy?

scp allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.

Examples

Copy the file "foobar.txt" from a remote host to the local host

    $ scp your_username@remotehost.edu:foobar.txt /some/local/directory

Copy the file "foobar.txt" from the local host to a remote host

    $ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy the directory "foo" from the local host to a remote host's directory "bar"

    $ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar

Copy the file "foobar.txt" from remote host "rh1.edu" to remote host "rh2.edu"

    $ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
    your_username@rh2.edu:/some/remote/directory/

Copying the files "foo.txt" and "bar.txt" from the local host to your home directory on the remote host

    $ scp foo.txt bar.txt your_username@remotehost.edu:~

Copy multiple files from the remote host to your current directory on the local host

    $ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
    $ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .

scp Performance

By default scp uses the Triple-DES cipher to encrypt the data being sent. Using the Blowfish cipher has been shown to increase speed. This can be done by using option -c blowfish in the command line.
    $ scp -c blowfish some_file your_username@remotehost.edu:~
It is often suggested that the -C option for compression should also be used to increase speed. The effect of compression, however, will only significantly increase speed if your connection is very slow. Otherwise it may just be adding extra burden to the CPU. An example of using blowfish and compression:
    $ scp -c blowfish -C local_file your_username@remotehost.edu:~

Tuesday, February 15, 2011

32 bit Ubuntu with 4GB+ of memory

Overview

This guide is to explain how Ubuntu handles 4 or more gigabytes of memory and the options you have for utilizing all your memory.

By default, the 32bit versions of Ubuntu support up to 4GB of memory; in practice, however, if you have 4GB of memory you will generally only see somewhere between 3GB and 3.5GB. The reason for this is that various devices on your computer need large chunks of the memory address space to work properly, so the system maps them into the highest parts of memory. When that happens on a machine with 4GB of memory and a 32bit kernel, it limits the amount of addressable physical memory.

The options you have for running Ubuntu with 4G+ of memory are:

1) Use a 64bit version of Ubuntu
If your machine supports it, the easiest way to make use of 4GB+ of memory on Ubuntu is to simply use a 64bit version of Ubuntu which can inherently handle large amounts of memory. For some motherboards, you may have to enable a feature called "Map around memory hole" in the BIOS for this to work (it may be called something different depending on the BIOS however, it should be named something similar to this).
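To check whether your processor can run a 64bit kernel, look for the lm ("long mode") CPU flag; the pae flag likewise indicates PAE support (GNU grep assumed):

$ grep -o -w 'lm\|pae' /proc/cpuinfo | sort -u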

UNetbootin - Live USB bootable

Introduction

UNetbootin allows you to create bootable Live USB drives for Ubuntu, Fedora, and other Linux distributions without burning a CD. It runs on both Windows and Linux. You can either let UNetbootin download one of the many distributions supported out of the box, or supply your own Linux .iso file if you've already downloaded one or your preferred distribution isn't on the list.

Requirements

  • Microsoft Windows 2000/XP/Vista/7, or Linux.
  • Internet access for downloading a distribution to install, or a pre-downloaded ISO file

Monitoring Servers and Clients using Munin in Debian Linux

What is Munin?

"Munin" means "memory".

Munin the tool surveys all your computers and remembers what it saw. It presents all the information in graphs through a web interface. Its emphasis is on plug-and-play capabilities. After completing the installation, a high number of monitoring plugins will be working with no further effort. Using Munin you can easily monitor the performance of your computers, networks, SANs, and quite possibly applications as well. It makes it easy to determine "what's different today" when a performance problem crops up. It makes it easy to see how you're doing capacity-wise on all limited resources.

It uses the excellent RRDTool and is written in Perl. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files and, if needed, updates the graphs. One of the main goals has been ease of creating new plugins (graphs).


Monday, February 14, 2011

Could not find /usr/local/apache2/bin/apxs

When making Tomcat work with Apache 2 through mod_jk, you may get the following error:

could not find /usr/local/apache/bin/apxs
configure: error: You must specify a valid --with-apxs path


To fix this, compile apache2 with --enable-module=so
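For example, a from-source build might look like this (a sketch; the prefix is an assumption, and on Apache 2.x the equivalent flag is --enable-so):

./configure --prefix=/usr/local/apache2 --enable-so
make && make install

Then point mod_jk's configure script at the resulting apxs:

./configure --with-apxs=/usr/local/apache2/bin/apxs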

Friday, February 11, 2011

Installing munin on CentOS

What you'll need

Munin's output is in html, so you will want to have a web server running to make reports available through a web browser. You can use any web server, such as apache or nginx, but for convenience the examples in this article series will assume you are running apache. It's best if you go through a tutorial series for installing apache or nginx so you understand what's being installed, but there are also barebones instructions for a default apache install in our repository if you're an experienced user and just want the web server for the purposes of accessing munin's reports.

If you want munin to send you email alerts you'll need to have a mail server running on your munin master slice that is configured to send outgoing mail messages. Slicehost has several articles that go into detail on setting up a mail server, and as with the web server, it's best if you go through a tutorial series so you get a good explanation of how to set up the mail server. If you want a quick, minimal mail server install, however, there are barebones install articles available there as well.

Munin can monitor just a single slice or it can be used to monitor several slices from one "master" slice. If you are planning on monitoring additional slices later, be sure to perform this installation on the slice you want to use as munin's master. If you are monitoring only one slice you don't have to make that decision, of course — just follow the directions in this series and don't worry about the subsequent article on installing additional nodes.

All commands assume you're running as a non-root user with sudo access.
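On CentOS the packages themselves come from the EPEL repository; assuming it is enabled, the install is a one-liner:

$ sudo yum install munin munin-node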

Shell Script For Monitoring System network with ping command

#!/bin/bash
# Hosts to check (space-separated list)
HOST="google.com"
# Number of echo requests per ping invocation
COUNT=1
#PACKET_SIZE=<to_guess>
SUBJECT="Server is down"
EMAIL_ADDR="xxx@gmail.com"
for myhost in $HOST
do
    count=0
    # ping three times and add up the number of replies received
    for ((i=1; i<=3; i++))
    do
        received=$(ping -c $COUNT $myhost | grep received | awk -F ',' '{print $2}' | awk '{print $1}')
        # default to 0 when ping produces no output (unreachable host, DNS failure)
        count=$((count + ${received:-0}))
        #echo "$count"
    done
    # no replies at all: treat the host as down and send an alert
    if [ $count -eq 0 ]; then
        echo "Host $myhost is down at $(date)" | mail -s "$SUBJECT" $EMAIL_ADDR
        #echo "Failed"
    fi
done

How do I install cron?

The following commands will ensure that cron is installed properly on your server.

$sudo yum install vixie-cron crontabs
$sudo /sbin/chkconfig crond on
$sudo /sbin/service crond start
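To confirm the daemon is running and begin scheduling jobs (same RHEL/CentOS-style tooling assumed):

$sudo /sbin/service crond status
$crontab -e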

Mod_security: Processing phase

Processing Phases 

ModSecurity 2.x allows rules to be placed in one of the following five phases: 

1. Request headers (REQUEST_HEADERS) 
2. Request body (REQUEST_BODY) 
3. Response headers (RESPONSE_HEADERS) 
4. Response body (RESPONSE_BODY) 
5. Logging (LOGGING) 

Below is a diagram of the standard Apache request cycle, in which the 5 ModSecurity processing phases are shown.

[Diagram not reproduced here: Apache request cycle annotated with the five ModSecurity processing phases.]
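For illustration, a minimal rule pinned to phase 1 might look like this (a sketch, not from the original article; it blocks any request whose User-Agent header contains "nikto"):

SecRule REQUEST_HEADERS:User-Agent "nikto" "phase:1,log,deny,status:403"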

Securing Apache 2: step by step

Credits 

This article was inspired by Artur Maj's article Securing Apache: Step-by-Step. It shows, in a step-by-step fashion, how to install and configure the Apache 2.0.x series web server in much the same way as the 1.3.x series covered in Artur's article, i.e. "in order to mitigate or avoid successful break-in when new vulnerabilities in this software are found".

Introduction 

Configuring your Apache2 server in accordance with the specifications laid out in this howto will result in limited functionality. The following will be available:
· Apache2 will be accessible from the Internet 
· only static pages will be served (e.g. HTML or XHTML) 
· name based virtual hosting 
· specified web pages can be accessible only from selected IP addresses or users (basic .htaccess authentication) 
· all web requests will be logged including information about the browsers 

This howto does not cover such things as relational databases (MySQL, PostgreSQL, etc.), scripting languages (Python, Perl, PHP, Tcl, etc.), or any of a myriad of server side gadgets for interaction with web services. The reasons for this are beyond the scope of this howto, but involve security on multiple levels. For a better explanation to this and a number of other good security oriented observations, read Artur’s article. 

Securing PHP: Step-by-Step

In my previous article (“Securing Apache: Step-by-Step“) I described the method of securing the Apache web server against unauthorized access from the Internet. Thanks to the described method it was possible to achieve a high level of security, but only when static HTML pages were served. But how can one improve security when interaction with the user is necessary and the users’ data must be saved into a local database? 

This article shows the basic steps in securing PHP, one of the most popular scripting languages used to create dynamic web pages. In order to avoid repeating information covered in the previous article, only the main differences related to the process of securing Apache will be described. 

Operating system 

As in the previous article, the target operating system is FreeBSD 4.7. However, the methods presented should also apply to most modern UNIX and UNIX-like systems. This article also assumes that a MySQL database is installed on the host, placed in the "/usr/local/mysql" directory.

Securing MySQL: step-by-step

1. Introduction

MySQL is one of the most popular databases on the Internet and it is often used in conjunction with PHP. Besides its undoubted advantages, such as ease of use and relatively high performance, MySQL offers simple but very effective security mechanisms. Unfortunately, the default installation of MySQL, and in particular the empty root password and the potential vulnerability to buffer overflow attacks, makes the database an easy target for attacks.
This article describes the basic steps which should be performed in order to secure a MySQL database against both local and remote attacks. This is the third and last of the series of articles devoted to securing Apache, PHP and MySQL. 

1.1 Functionality

The article assumes that the Apache web server with the PHP module is installed in accordance with the previous articles, and is placed in the /chroot/httpd directory.
Apart from the above we assume the following:
• The MySQL database will be used only by PHP applications, installed on the same host;
• The default administrative tools, such as mysqladmin, mysql, mysqldump etc. will be used to manage the database;
• Remote data backup will be performed by utilizing the SSH protocol. 
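As a preview of the steps that follow, the empty root password mentioned above is typically the first thing fixed (a sketch; the password is a placeholder):

mysqladmin -u root password 'new_password'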

Install snort on CentOS 5

Download snort

Install the required libraries: libpcap, pcre, and libdnet (building snort from source also needs the matching header packages, e.g. libpcap-devel and pcre-devel):

#yum install libpcap pcre libdnet 

Install snort from source code: 

Extract snort:

#tar xzvf snort-2.8.6.tar.gz 

Move to snort directory:

#cd snort-2.8.6 

Choose your configure options as needed; for example, to add the flexresp (flexible response) option:

#./configure --enable-flexresp 
#make 
#make install


Apache Optimization

All the important configuration options are stored by Apache in a config file called httpd.conf that is located at /usr/local/apache/conf/httpd.conf. We will start by opening this file in your favorite text editor. 

For example: 

vi /usr/local/apache/conf/httpd.conf 

MaxClients 

Total number of concurrent connections. 
Locate it in the configuration file. This should be set to a reasonable value. I suggest using this formula to determine the right value for your server. 

MaxClients = 150 x RAM (GB) 
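For example, on a server with 2 GB of RAM the formula gives 150 x 2 = 300, so in httpd.conf you would set:

MaxClients 300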

Writing Snort Rules

The Basics

Snort uses a simple, lightweight rules description language that is flexible and quite powerful. There are a number of simple guidelines to remember when developing Snort rules. 

The first is that Snort rules must be completely contained on a single line, the Snort rule parser doesn't know how to handle rules on multiple lines. 

Snort rules are divided into two logical sections, the rule header and the rule options. The rule header contains the rule's action, protocol, source and destination IP addresses and netmasks, and the source and destination ports information. The rule option section contains alert messages and information on which parts of the packet should be inspected to determine if the rule action should be taken. 

Here is an example rule: 

alert tcp any any -> 192.168.1.0/24 111 (content:"|00 01 86 a5|"; msg: "mountd access";)

The text up to the first parenthesis is the rule header, and the section enclosed in parentheses is the rule options. The words before the colons in the rule options section are called option keywords. Note that the rule options section is not strictly required by any rule; the options are just used for the sake of making tighter definitions of packets to collect or alert on (or drop, for that matter). All of the elements that make up a rule must be true for the indicated rule action to be taken. Taken together, the elements can be considered to form a logical AND statement. At the same time, the various rules in a Snort rules library file can be considered to form a large logical OR statement. Let's begin by talking about the rule header section.

TCPDUMP

To display the Standard TCPdump output: 

#tcpdump 

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode 
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes

21:57:29.004426 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53 
21:57:31.228013 arp who-has 192.168.1.2 tell 192.168.1.1 
21:57:31.228020 arp reply 192.168.1.2 is-at 00:04:75:22:22:22 (oui Unknown) 
21:57:38.035382 IP 192.168.1.2.1034 > valve-68-142-64-164.phx3.llnw.net.27014: UDP, length 53 
21:57:38.613206 IP valve-68-142-64-164.phx3.llnw.net.27014 > 192.168.1.2.1034: UDP, length 36
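A couple of commonly used options for narrowing the capture (these are standard tcpdump flags: -i selects the interface, -n disables name resolution, and the trailing expression filters to HTTP traffic):

#tcpdump -i eth0 -n port 80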

Netstat

Netstat (NETwork STATistics) is a command-line tool that provides information about your network configuration and activity.

To display the routing table: 

#netstat -rn 

-r: Kernel routing tables.
-n: Shows numerical addresses instead of trying to determine hosts.


Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth1

How to mount remote Windows shares on CentOS

Contents

1. Basic method
2. Better Method
3. Even-better method
4. Yet Another Even-better method 

CentOS 5.0 and CentOS 4.5 users. Please read this important note

OK, we live in the wonderful world of Linux. But for many of us, having to deal with Windows is a fact of life. For example, you may want to use a Linux server to back up Windows files. This can be made easy by mounting Windows shares on the server. You will be accessing Windows files as if they were local, and essentially all Linux commands can be used. Mounting Windows (or other Samba) shares is done through the cifs virtual file system client (cifs vfs) implemented in the kernel, and a mount helper, mount.cifs, which is part of the Samba suite.

The following names are used in our examples.
  • remote Windows machine: winbox 
  • share name on winbox: getme 
  • username: sushi 
  • password: yummy
  • domain: google
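Using those example names, a basic mount looks like this (the mount point /mnt/winbox is an assumption):

#mkdir -p /mnt/winbox
#mount -t cifs //winbox/getme /mnt/winbox -o username=sushi,password=yummy,domain=google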

Install RRDTool on Red Hat Enterprise Linux

Q. I've downloaded the RRDTool package rrdtool-1.3.1.tar.gz, but the ./configure command gives out lots of error messages. How do I install RRDTool on Red Hat Enterprise Linux 5.x, 64-bit version?

A. RRD is the Acronym for Round Robin Database. RRD is a system to store and display time-series data (i.e. network bandwidth, machine-room temperature, server load average). It stores the data in a very compact way that will not expand over time, and it presents useful graphs by processing the data to enforce a certain data density. It can be used either via simple wrapper scripts (from shell or Perl) or via frontends that poll network devices and put a friendly user interface on it.

Installing RRDTool on RHEL

In order to install RRDTool on the 64-bit version of Red Hat Enterprise Linux / CentOS Linux, you need to install a few development tools and libraries.
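A sketch of the typical prerequisites on RHEL/CentOS 5 (the package names are assumptions and may vary by release):

# yum groupinstall "Development Tools"
# yum install cairo-devel libxml2-devel pango-devel libpng-devel freetype-devel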

PF: Block OS Fingerprinting

When performing network reconnaissance, one very valuable piece of information for would-be attackers is the operating system running on each system discovered in their scans. From an attacker's point of view, this is very helpful in figuring out what vulnerabilities the system might have or which exploits may work on a system. Combined with the knowledge of open ports found during a port-scan, this information can be devastating. After all, an RPC exploit for SPARC Solaris isn't very likely to work for x86 Linux—the code for the portmap daemon isn't common to both systems, and they have different processor architectures. Armed with the knowledge of a given server's platform, attackers can very efficiently try the techniques most likely to grant them further access without wasting time on exploits that cannot work.

Traditionally, individuals performing network reconnaissance would simply connect to any services detected by their port-scan, to see which operating system the remote system is running. This works because many daemons, such as Sendmail, Telnet, and even FTP, readily announce the underlying operating system, as well as their own version numbers. Even though this method is easy and straightforward, it is now seen as intrusive since it's easy to spot someone connecting in the system log files. Additionally, most services can be configured not to disclose this sensitive information. In response, more sophisticated methods were developed that do not require a full connection to the target system to determine which operating system it is running. These methods rely on the eccentricities of the host operating system's TCP/IP stack and its behavior when responding to certain types of packets. Since individual operating systems respond to these packets in a particular way, it is possible to make a very good guess at what OS a particular server is running based on how it responds to probe packets, which normally don't show up in log files. Luckily, such probe packets can be blocked at the firewall to circumvent any operating system detection attempts that deploy methods like this.
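One pf feature that helps here is scrub, which normalizes incoming packets (reassembling fragments, optionally clearing the don't-fragment bit and randomizing IP IDs) and thereby removes some of the quirks that fingerprinting tools key on. A minimal pf.conf line (a sketch, not from the article):

scrub in all no-df random-id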

PF with FreeBSD 8.0

PF is a stateful firewall developed by OpenBSD, but it has also been ported to FreeBSD. PF is a full-featured firewall with support for ALTQ (Alternate Queuing), which provides QoS capabilities.

1. To load the PF kernel module, add the following line to /etc/rc.conf:

pf_enable="YES"

To start it manually:

#/etc/rc.d/pf start

When PF runs, it looks for the file containing its rule configuration, /etc/pf.conf by default. Create pf.conf if it does not already exist in /etc. If the rules file lives somewhere else, add the following line to /etc/rc.conf so that PF loads it at startup:

pf_rules="/path/to/pf.conf"

Throttling SSH attacks with pf

After a brief chat with Claudio about ways to throttle SSH brute-force attacks, I got inspired to do some testing of my own. There are already plenty of howtos on throttling or even automatic blacklisting, but few with real numbers on how effective it can be. I had two requirements for this pf ruleset:
Not deny legit connections
Not permanently block anything

The test system is FreeBSD 6.0-STABLE. Note that some of the features used in this ruleset are only available in pf 3.7 and later (FreeBSD 6.x is synched to 3.7). I also use expiretable to automatically flush old entries from firewall tables. I found this ruby script for brute forcing; it is single-threaded and often fails or stalls under non-ideal conditions, but it was better than nothing.

I don't plan on explaining my rules in detail, as this is no guide to pf, but contact me if anything is unclear.

LANIF = "em0"
LOIF = "lo0"
set block-policy drop
pass out all keep state
block in all
pass on $LOIF all
pass on $LANIF inet proto tcp from any to $LANIF port ssh keep state
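The ruleset above is just the baseline; the actual throttling relies on the pf 3.7+ source-tracking features mentioned earlier. A sketch of how the ssh rule can be extended (the table name and the 3-connections-per-30-seconds rate are my choices, not the author's):

table <bruteforce> persist
block in quick from <bruteforce>
pass on $LANIF inet proto tcp from any to $LANIF port ssh \
        keep state (max-src-conn-rate 3/30, overload <bruteforce> flush)

expiretable can then be run periodically (e.g. from cron) to drop table entries older than a given age, so nothing is blocked permanently: expiretable -t 24h bruteforce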

Securing Apache

Securing Apache: step by step.
Artur Maj

This document is a step-by-step guide to installing and configuring the Apache 1.3.x web server in order to handle and prevent break-ins when vulnerabilities in this software are discovered.

Functionality
Before we start hardening Apache, we must determine exactly which server functionality will be needed. Apache's versatility makes it difficult to build one general-purpose hardening model that covers every possible case. That is why this document is based on the following functionality:
  • The web server is accessible from the Internet; and,
  • Only static HTML pages will be served,
  • The server supports name-based virtual hosting,
  • Specified web pages can be accessed only from selected groups of IP addresses or users (basic authentication),
  • The server logs all requests (including information about the web browsers).

Local exploit stuff

I have been meaning to write about this many times, but never managed a post that fit the circumstances at the right moment. Today, after long months of wrestling with customers' information-security needs and goals, I finally have a chance to scribble down a few lines, if only to preserve the memories and experience for myself.

My customer is a young boss managing a system of hundreds of websites: small and medium businesses, centers and services, and even high schools and universities. He owns one main server and two VPSes. Most of the sites are built on Joomla (open source). Platform: Linux CentOS x86_64, with the full set of services installed: web server + mail server + DNS + FTP. Things went badly, over and over, in the first months of the year. His websites continually ran into database errors, defacements, and so on. Overall, my first impression on stepping inside was of an abandoned, unlocked house with many big, wide-open windows. My task was to put a definitive end to this house-with-no-doors, doors-with-no-locks situation, on a day with no sun, no rain, and no catalyst.

It all went by like a dream, at times a nightmare, at times a sweet and refreshing feeling. But everything passed slowly, and precisely, along an aimless path.

Send mail to Gmail with SSMTP

Usually, you do not need to set up an email server under a Linux desktop operating system. Most GUI email clients (such as Thunderbird) support Gmail POP3 and IMAP configurations. But how do you send mail via the standard or /usr/bin/mail user agents, or from a shell script? Programs such as sendmail / postfix / exim can be configured with Gmail as a smarthost, but they are largely overkill for this use.

You can use gmail as a smart host to send all messages from your Linux / UNIX desktop systems. You need to use a simple program called ssmtp. It accepts a mail stream on standard input with recipients specified on the command line and synchronously forwards the message to the mail transfer agent of a mailhub for the mailhub MTA to process. Failed messages are placed in dead.letter in the sender's home directory. 

Install ssmtp 

Type the following command under CentOS / RHEL / Red Hat / Fedora Linux: 


# rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm 
# yum install ssmtp
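Once installed, ssmtp reads its settings from /etc/ssmtp/ssmtp.conf. A minimal Gmail smarthost configuration looks roughly like this (a sketch; the account values are placeholders):

root=username@gmail.com
mailhub=smtp.gmail.com:587
AuthUser=username@gmail.com
AuthPass=your_password
UseSTARTTLS=YES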