
LPI Linux Certification/Print Version

From Wikibooks, open books for an open world


LPI Linux Certification
The current, editable version of this book is available on Wikibooks, a collection of open-content textbooks, at:

http://en.wikibooks.org/wiki/LPI_Linux_Certification



LPI Linux Certification


This book covers the Linux Professional Institute™ family of certifications. There are three levels of LPI™ certification:

  • Level 1: Junior Level Linux Professional.
  • Level 2: Advanced Level Linux Professional.
  • Level 3: Senior Level Linux Professional.

To obtain a certification, a candidate is required to pass exams and, for Level 2 and Level 3, to hold the lower-level certification from the LPI™. All LPIC candidates are encouraged to browse the documentation at the LPI™ website. The resources there will familiarize the candidate with many things that are outside the scope of this book (e.g. exam cost, testing centers, other training resources). You are also encouraged to register with the LPI™ so that you can access the candidate area.
The Detailed Objectives listed within each of the modules in this book have been reproduced from the LPI™ website with kind permission. We are, however, required to make it clear that the Linux Professional Institute™ does not endorse the work contained within this book in any way whatsoever.

Audience

Logo of the Linux Professional Institute.

This book is written specifically for the LPIC candidate. It is based, as indeed is the exam, on a community-driven documentation project known as "The Linux Documentation Project". Each module in the book, however, is organized around a particular subject, so it is feasible for the casual reader to pick one particular module and study its material with a view to gaining a better understanding of just that material. Many of the modules - and in particular the Advanced modules - will assume a certain skill level, however. It is also feasible for a new Linux user to come here with a view to learning Linux. Although such readers are very welcome, they may be better served by studying the Linux Guide first. The modules on the LPI Linux certification are heavily slanted toward up-and-coming sysadmins.

About this book


This book is organized so that each and every module can be accessed via the front page. This will be useful for readers who just wish to study or quickly gain information about one aspect of the exam syllabus. For exam candidates we have created an exam page which also has a table of contents covering only the modules you need to study for the various levels of the LPI™. It is the hope of the contributors that exam candidates will use the exam pages and their accompanying discussion pages to leave advice, tips, gotchas, etc. for other exam candidates.
The module pages contain detailed objectives, followed by an overview, which in turn is followed by section headings covering the module's syllabus. At the beginning of each section is a list of prerequisite reading. It is advisable to read the linked articles, although they may not be required knowledge to pass the exam; they should, however, relate to the individual sections they are contained within.

Version


To keep your Linux knowledge up to date and the LPI Certification relevant, the LPI regularly updates its syllabus. As of 2021-02-19, the latest version of LPIC-1 is version 5.0 with exams 101-500 and 102-500. The most recent version of LPIC-2 is version 4.5 with exams 201-450 and 202-450.

Lastly, we are always looking for authors. We encourage all positive edits, even if it is just to correct a simple spelling mistake or fix a link; in short, "Every addition is very welcome."

Table of Contents


Exam 101-500

Topic 101: System Architecture
Topic 102: Linux Installation & Package Management
Topic 103: GNU and UNIX Commands
Topic 104: Devices, Linux Filesystems, Filesystem Hierarchy Standard

Exam 102-500

Topic 105: Shells and Shell Scripting
Topic 106: User Interfaces and Desktops
Topic 107: Administrative Tasks
Topic 108: Essential System Services
Topic 109: Networking Fundamentals
Topic 110: Security

Hardware & Architecture (Obsolete!)

The X Window System

Kernel

Security

Advanced Level Linux Professional (LPIC-2)

Exam 201-450

Capacity Planning (200)
Linux Kernel (201)
System Startup (202)
Filesystems And Devices (203)
Advanced Storage Device Administration (204)
Networking Configuration (205)
System Maintenance (206)

Exam 202-450

DNS (207)
Web Services (208)
File Sharing (209)
Network Client Management (210)
E-Mail Services (211)
System Security (212)

Obsolete!

Troubleshooting (213) (Obsolete!)



Junior Level Linux Professional


Welcome! If you are here, then you are considering or have decided to take the Junior Level Linux Professional Exam. This page and its accompanying discussion page are specifically for you, and will explain your overall objectives for each exam (there are two exams you must complete before being certified). It is a good idea to come back here and perform a sanity check on your understanding against the overall objectives presented here. All that remains is for the authors and contributors of this book to wish you good luck.

Please note: DO NOT contribute actual exam questions you may have been presented with in the past, ANYWHERE in this book.


LPI 101 Exam Objectives


Each objective is assigned a weighting value. The weights range roughly from 1 to 10 and indicate the relative importance of each objective. Objectives with higher weights will be covered in the exam with more questions.

LPI 101 Exam Table of Contents

Topic 101: System Architecture

Topic 102: Linux Installation and Package Management

Topic 103: GNU and Unix Commands

Topic 104: Devices, Linux Filesystems, Filesystem Hierarchy Standard

LPI 102 Exam Objectives

Each objective is assigned a weighting value. The weights range roughly from 1 to 10 and indicate the relative importance of each objective. Objectives with higher weights will be covered in the exam with more questions.

LPI 102 Table of Contents

Topic 105: Shells, Scripting and Data Management

Topic 106: User Interfaces and Desktops

Topic 107: Administrative Tasks

Topic 108: Essential System Services

Topic 109: Networking Fundamentals

Topic 110: Security


The sections below refer to an old exam version and are being made obsolete


See page discussion for more details.

LPI 101 Exam Objectives

  • Hardware & Architecture
    • Candidates should have a clear understanding of the concept of a BIOS and what role it performs, from the initial computer power-on to the services it provides to the Linux kernel. Furthermore, candidates should be able to identify all the options presented to them in a standard BIOS interface and further be able to gather basic information about the system from the BIOS (Menu navigation). Candidates should also be able to navigate a BIOS and make changes that will enable or disable peripherals, and compare the information provided to them from the BIOS with the information provided from the kernel. Candidates should also be able to determine compatible modems, configure those modems for outbound dial up and set specific port speeds from the command line. Candidates should have an understanding of the term SCSI (Small Computer System Interface) and how SCSI devices work (this includes the terms termination & SCSI ID). Candidates should also have an understanding of the terms Coldplug and Hotplug and be able to determine via BIOS and kernel methods the resources used for any given device that is attached. Candidates should be able to identify a Sound Card, and be able to determine if the kernel recognizes the sound card, as well as determine if the device has an issue/conflict pertaining to IRQ, DMA, or I/O. Candidates should also be able to understand USB devices and demonstrate a knowledge of the USB layer architecture.
  • Linux Installation & Package Management
    • Candidates should be able to design a disk layout that takes into account the system's requirements and purpose. Candidates should also be able to set up various boot locations such as a floppy or CD-ROM, install a bootloader and interact with that bootloader. Further, the candidate should be able to install, remove and query programs with both the RPM and DPKG tools. On both RPM- and DPKG-based distributions, candidates should be able to obtain package versions, installed package content, installation status and find any files or libraries that may or may not be installed on the system. Candidates should also be able to install programs from source via the make program, which generally includes the use of the tar, gzip, and bz2 compression utilities. Finally, candidates should be able to identify shared libraries, load them and identify where the shared libraries should be located.
  • GNU & Unix Commands
    • Candidates should be able to understand the shell environment and how to change its behavior by modifying the .profile file, which is located in home directories. Candidates should also be able to send text files and output streams through utility filters to modify the output. Candidates will be expected to know how to move, copy, delete, find, create files and directories, and use recursion to delete and create both files and directories. Candidates should understand the use of redirects, pipes and sending your output to stdout (Standard Output) and to a file. Candidates should be able to list, create and kill processes as well as understand the "&" option and what it does. The candidate should also be able to monitor processes in real time, and be able to modify the priority of any given process. Candidates should be able to create simple regular expressions and use regular expression tools to search through filesystems or file content. Lastly, candidates should know the basic commands for vi.
  • Devices, Linux Filesystems, Filesystem Hierarchy Standard
    • Candidates should be able to set-up partitions and create filesystems, namely ext2, ext3, reiserfs, vfat and xfs. Candidates will know the tools that help maintain those filesystems and keep them in good working order, and use those tools to perform simple filesystem repairs. Candidates will be able to mount and unmount filesystems manually and configure the system to mount them automatically during the boot process. Candidates will be able to implement a disk quota solution for your users. The candidate will understand file permissions and what tools to use to modify those permissions, as well as the concept of file ownership and how to modify file ownership attributes. The candidate will be introduced to both hard & symbolic linking, why it is used, and the usage of the ln command. Finally the candidate will understand the FHS standard and be able to determine where files should be located in FHS-based distributions.
  • The X Window System
    • Candidates should be able to install and configure an X Server, install fonts and configure an X font server, and determine if your hardware is suitable for an X server. Candidates will be introduced to the display managers gdm kdm and xdm, and will be able to configure any of these three display managers. Candidates will then be introduced to the Window Manager Environment and GUI. Lastly the candidate will be introduced to the usage of the DISPLAY environment variable, as well as the various files used for customization.

LPI 101 Table of Contents


Hardware & Architecture


Linux Installation & Package Management


GNU & Unix Commands

  • Work On The Command Line
  • Process Text Streams Using Filters
  • Perform Basic File Management
  • Use Streams, Pipes & Redirects
  • Create, Monitor & Kill Processes
  • Modify Process Execution Priorities
  • Search Text Files Using Regular Expressions
  • Perform Basic File Editing Operations Using Vi

Devices, Linux Filesystems, Filesystem Hierarchy Standard

  • Create Partitions & Filesystems
  • Maintaining The Integrity Of Filesystems
  • Control Mounting & Unmounting Filesystems
  • Managing Disk Quota
  • Use File Permissions To Control Access To Files
  • Manage File Ownership
  • Create & Change Hard & Symbolic Links
  • Find System Files & Place Files In The Correct Location

The X Window System

  • Install & Configure X11
  • Setup A Display Manager
  • Install & Customise A Window Manager Environment

LPI 102 Exam Objectives


The LPI 102 exam tests basic capabilities in the following areas:

  • Kernel
    • Candidates should be able to build, install, configure, manage and query a Linux kernel. This includes using the command line to get information about the running kernel as well as any kernel modules. The candidate should also be able to understand how to manually load and unload modules and to further understand when those commands are safe to perform. The candidate should be able to determine what parameters can be passed to any given module, and how to load modules under a name other than the file name that represents the module. The candidate should understand, at a basic level, the difference between monolithic and modular kernels with regard to kernel module management.
  • Boot, Initialization, Shutdown & Runlevels
    • Candidates should be able to boot the system level by level. This starts with passing commands to the bootloader that will define the kernel location and pass parameters to the kernel in order to solve problems with the boot process. The candidate will know how to locate and gather information from log files pertaining to the boot process. The candidate will understand the runlevel process and be able to set the default runlevel, as well as shut down and restart the system from the command prompt; this will include being able to terminate individual processes. The candidate will understand how to alert connected users that a major event is about to occur.
  • Candidates should be able to install and configure printers, print files, and manage both local and remote printers.
  • Candidates should be able to find and use man pages and internet documentation.
  • Candidates should be able to customize the shell environment, and write and administer simple shell scripts.
  • Candidates should be able to administer users, groups and basic security, implement backups, and use cron.
  • Candidates should be able to understand, configure and troubleshoot the TCP/IP stack, as well as configure a PPP client.
  • Candidates should be able to manage NFS and Samba daemons, administer MTAs and the Apache webserver, and configure DNS and SSH.
  • Candidates should be able to implement user level security, basic host security, and perform basic security administration tasks.

LPI 102 Table of Contents


Kernel


Boot, Initialization, Shutdown & Runlevels

  • Boot The System
  • Change Runlevels & Shutdown Or Reboot System

Printing

  • Manage Printers & Print Queues
  • Print Files
  • Install & Configure Local & Remote Printers

Documentation

  • Use & Manage Local System Documentation
  • Find Linux Documentation On The Internet
  • Notify Users On System-Related Issues

Shells, Scripting, Programming, & Compiling

  • Customise & Use The Shell Environment
  • Customise Or Write Simple Shell Scripts

Administrative Tasks

  • Manage Users & Group Accounts & Related System Files
  • Tune The User Environment & System Environment Variables
  • Configure & Use System Log Files To Meet Administrative & Security Needs
  • Automate System Administrative Tasks By Scheduling Jobs To Run In The Future
  • Maintain An Effective Data Backup Strategy
  • Maintain System Time

Networking Fundamentals

  • Fundamentals Of TCP/IP
  • TCP/IP Configuration & Troubleshooting
  • Configure Linux As A PPP Client

Networking Services

  • Configure & Manage xinetd, inetd & Related Services
  • Operate & Perform Basic Configuration Of Mail Transfer Agent (MTA)
  • Operate & Perform Basic Configuration Of Apache
  • Properly Manage The NFS & SAMBA Daemons
  • Setup & Configure Basic DNS Services
  • Setup Secure Shell (OpenSSH)

Security

  • Perform Security Administration Tasks
  • Setup Host Security
  • Setup User Level Security


Hardware & Architecture


Configure Fundamental BIOS Settings


Detailed Objective


Weight: 1

Description
Candidates should be able to configure fundamental system hardware by making the correct settings in the system BIOS in x86 based hardware.
  • Key knowledge area(s):
    • Enable and disable integrated peripherals.
    • Configure systems with or without external peripherals such as keyboards.
    • Correctly set IRQ, DMA and I/O addresses for all BIOS-administrated ports and settings for error handling.
  • The following is a partial list of the used files, terms and utilities:
    • /proc/ioports
    • /proc/interrupts
    • /proc/dma
    • /proc/pci
BIOS Tips & Tricks

  • Familiarize yourself with BIOS settings in equipment that you support.
  • Know your beeps: you may not have access to the internet when things go wrong.
  • Change control: always make sure you can reverse any change you make in a BIOS.
  • BIOS updates: keep informed. Don't roll them out as soon as they hit the mirrors. Wait a couple of months, then check manufacturer forums for problems with the update. Once you are happy, update one system, monitor it, and then roll out to the rest of your systems. Document the change; BIOS updates are normally a nightmare to reverse.
  • Be aware of the "press F1 to continue" prompt, particularly when rebooting remote servers.
  • Lights Out Management: if it is available, utilize it.
  • Think long and hard about implementing BIOS security. Can the same level of security be implemented elsewhere? Normally it can.
  • Understand the limitations of BIOS date and time. Can system date and time be better maintained by other means?

Introduction


The BIOS (Basic Input/Output System) can be thought of as a suite of small programs that operate between the operating system and the hardware on any given computer. It provides a number of services that enable the computer to boot any given operating system. The BIOS can also provide or present other services to the operating system, depending on the operating system and/or the type of hardware installed. It is also wise to note that a modern-day computer may have multiple BIOS chips interfacing the various hardware components that combine to build the whole computer. These include disk array controllers, graphics cards, sound cards, and possibly a few others. Firstly, let's look at the services the BIOS provides regardless of which operating system is installed: the POST (Power On Self Test), hardware management, security, and date & time.

Intel and other manufacturers have developed another standard called EFI (Extensible Firmware Interface), which performs a similar function to BIOS, but does the job in a different manner. EFI is far more flexible and powerful than BIOS, but it has not enjoyed as much commercial success. Exploration of EFI is beyond the scope of this document for now.

POST - Power On Self Test

  • The POST process involves a small diagnostic program that checks system hardware such as RAM or motherboard components. If a particular piece of hardware is present, a basic test is performed to check for faults. More advanced tests such as a long memory test may be performed, but normally these features need to be manually enabled in the BIOS.
  • If the POST process finds errors it will usually sound beeps on the motherboard speaker and / or show some visual message via LEDs on the motherboard and / or messages on the screen. This is known as an "Irregular POST Condition".
  • The number (and in some cases the pattern) of the beeps, lights, or messages will aid you in diagnosing the problem; however, different motherboard models (even those from the same manufacturer) have different implementations of these signals, so it is always wise to have a printed reference manual for each model you support or internet access on another machine for a quick look-up.

Hardware Management

  • During the POST process, the BIOS allows you great flexibility to customize certain aspects of the system via settings stored in CMOS (Complementary Metal Oxide Semiconductor) memory. CMOS memory is volatile memory, but your motherboard has a backup battery to preserve any customized system configurations that you have made. This battery will eventually die. If you find that your computer is not retaining BIOS settings from one power cycle to another, the usual reason is that you need to replace this battery.
  • Useful BIOS settings often edited by users and system administrators may include:
    • Boot device priority
    • Enable / disable motherboard features like integrated video, LAN, or sound
    • Setting preferred memory addresses or IRQ vectors for PCI (or older) cards
  • On older motherboards these configurations were done by positioning certain jumpers or dip switches to the hardware manufacturer's specifications. Modern CMOS menus have replaced nearly all of these devices, with the exception of setting SCSI ID or resetting a BIOS password. There are still some "old school" motherboards in operation, so always keep the possibility of jumpers in mind.
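
On a running Linux system you can cross-check what the firmware reports without rebooting into the BIOS setup. A minimal sketch, assuming the dmidecode utility is installed (it is not part of the objective's file list, so treat it as an optional extra):

# dmidecode -t bios      # requires the dmidecode package; BIOS vendor, version and release date
# dmidecode -t memory    # memory modules as reported by the firmware

Comparing this output with /proc/meminfo or lspci is a quick way to spot hardware the kernel has not picked up.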

Security

  • Most BIOSs allow the user to set a password. The computer will require this password to be input before completing the boot process. Often this BIOS password adds inconvenience without any real security: information on how to get around these passwords is freely available on the internet. If the user forgets this password, the computer will not proceed to load an operating system. It's not hard to see why BIOS Passwords are rarely invoked at the business level.
  • Many modern computers have the ability to detect configuration changes such as memory size changes and even if the case has been removed. The BIOS will often report these changes and prompt the user to press a key (usually the F1 key) to continue if this change is acceptable. Users may be required to hit another key to enter the BIOS configuration screens to change parameters depending on the particular BIOS manufacturer.

Date and Time

  • Setting the time and date are options within any modern BIOS. This is a "real-time" clock that runs constantly, powered by the same battery that preserves the CMOS settings. It's not very accurate, even compared to a wrist-watch, but it's better to have this poor clock than to require users to enter the time manually at every reboot. (That's how it was done in the early days of computers.)
  • Linux (like other operating systems) maintains its own clock in software by counting interrupts generated by an oscillator circuit in your computer. This clock only functions while the operating system is running.
  • The BIOS provides the date and time to the operating system upon booting. After the operating system has gathered this information, the BIOS clock and the Operating System clock continue to run independently. This means that the BIOS clock will soon differ from the operating system clock, even if it is only in milliseconds.
  • Linux has a command called hwclock which can be used to synchronize the operating system clock with the BIOS (see the sketch after this list). Once synchronized, they will drift apart again, however. (This is due to the hardware nature of the BIOS clock and the software nature of the OS clock.)
  • Further on in the course, you will start to look at ntp and how important it is to maintain a consistent "Network Time". Knowing that the BIOS and operating system maintain separate clocks will aid you in setting out a solution.
  • The BIOS does not handle time zone or daylight savings time adjustments. These are handled by the operating system. For this reason, some administrators may choose to set their BIOS clocks to UTC rather than the local time.
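
As a brief illustration of the hwclock command mentioned above (run as root; exact output varies by system):

# hwclock --show       # read the BIOS (hardware) clock
# hwclock --systohc    # set the hardware clock from the operating system clock
# hwclock --hctosys    # set the operating system clock from the hardware clock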

Disk Drives


Most computers use Hard Disk Drives to hold an operating system and users' data. Some newer computers use Solid State Disk Drives instead. Though the physical devices vary greatly, there is little difference from the standpoint of configuring Linux or other operating systems.

Attachment Interfaces


Firstly let's address the confusion that often comes around from disk drive terminology such as IDE/ATA (Integrated Drive Electronics / Advanced Technology Attachment) and SATA (Serial Advanced Technology Attachment) and indeed PATA (Parallel Advanced Technology Attachment), which all use the ATA (Advanced Technology Attachment) standard to communicate with the device. The first part of the acronym can be thought of simplistically as a revision. Take for instance the revisions IDE, Fast IDE, EIDE, etc. These revisions changed the physical cables or ribbons that connect the disk drives to the computer, which enabled certain features, e.g. to address more disk space or speed up communications with the device. SATA was like a rewrite, once SATA came into being it was decided that all historical ATA devices that predated SATA (IDE, etc.) were to be grouped under the terminology PATA.

SCSI is another popular attachment interface that has undergone several generations of revision over the years: SCSI, SCSI-2, SCSI-3, U160, U320, and SAS. Click on the link at the head of this paragraph for more details, if desired. The SCSI family of attachment interfaces is not hardware-compatible with the ATA family, nor do they use the same software command set, so you cannot mix SCSI drives with ATA controllers or ATA drives with SCSI controllers. Because they use different commands, Linux will enumerate them with different labels. This will be handled in more detail when it becomes important later.

A Brief History


To get an understanding of modern hard drives, it helps to have some background. The BIOS traditionally uses INT13h as an interface to the hard drive. INT13h, from a historical standpoint, had certain limitations. On the other hand, the IDE/ATA interfaces also had restrictions. These restrictions are highlighted in the table below.

Specification    Max Cylinders    Max Heads    Max Sectors    Max Size
IDE/ATA          65,536           16           256            138GB
INT13h           1,024            256          63             8.4GB

Clearly you can see that, because of the combined limitations of INT13h and IDE/ATA (taking the smaller value in each column: 1,024 cylinders, 16 heads, 63 sectors), the largest drive your average computer could handle was 528MB. We call this specification CHS (cylinder-head-sector). You may recall that to calculate the total size of a hard drive you use the following formula:

  • Cylinders * Heads * Sectors * 512 = Capacity
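
For example, plugging the limits above into this formula confirms both figures (a quick check with shell arithmetic; the byte counts are exact, the MB/GB figures are decimal approximations):

$ echo $(( 1024 * 16 * 63 * 512 ))     # combined IDE/ATA + INT13h limit
528482304                              # ~528MB
$ echo $(( 1024 * 256 * 63 * 512 ))    # INT13h limit alone
8455716864                             # ~8.4GB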


To get around this, a new specification was implemented called ECHS (extended cylinder-head-sector), sometimes also referred to as "Large Mode". This introduced a translation layer between the BIOS and INT13h, which allowed a computer to handle disk drives up to 8.4GB in size. We can see this with a modification to the table above, set out below with the relevant row added.

Specification    Max Cylinders    Max Heads    Max Sectors    Max Size
IDE/ATA          65,536           16           256            138GB
ECHS             620              128          63             2.5GB
INT13h           1,024            256          63             8.4GB

To see how the translation works, let's take a 2.5GB hard drive with 4960 cylinders, 16 heads, and 63 sectors. The translation program looks at the number of cylinders and makes a "best fit" with the INT13h limitation of 1,024 cylinders. The translation program normally does this by division: it divides the number of cylinders by one of the following numbers: 2, 4, 6, 8 and in some cases 16. In our case, 4960 / 8 = 620, which does not break the limitation of INT13h. Now the translation program multiplies the number of heads by 8, so 16 * 8 = 128. In this way, the translation program maintains the INT13h standard and provides a way in which the computer can see the whole disk. We can verify this by calculating the disk space both before translation and after.

  • Native 4960 * 16 * 63 * 512 = 2.5GB
  • Translation 620 * 128 * 63 * 512 = 2.5GB

The table above needs a little more clarification. You will note that the maximum number of heads for ECHS (the translation layer) is 128, which is incompatible with the IDE/ATA layer, which specifies a limit of 16. We get away with this because the translation layer is only concerned with INT13h and is not in any way related to the IDE/ATA layer. The next table shows how this model really looks.

Specification    Max Cylinders    Max Heads    Max Sectors    Max Size
Physical Drive   4,960            16           63             2.5GB
IDE/ATA          65,536           16           256            138GB
INT13h           1,024            256          63             8.4GB
ECHS             620              128          63             2.5GB

Needless to say, hard drives got a lot bigger than 8.4GB, so some other way was needed, as the cylinder-head-sector method was no longer a viable option. This is covered in the next section where we bring you right up to date.

LBA (Logical Block Addressing) is the most common scheme in use today to get past the 528MB limit imposed on an IDE/ATA disk drive. With LBA, each block has a unique identification number that starts at 0 and continues 1, 2, 3, 4, 5... In order for this mechanism to work it must be supported by the BIOS, the operating system, and the IDE drive. A common misconception with LBA is that it is LBA itself that gets around the 528MB limit, when in fact LBA uses translation. When you enable LBA mode in a BIOS you are in effect enabling translation. The translation can be the same as the ECHS scheme discussed above, or a third-party algorithm can be used. It is well beyond the scope of this course to look at these algorithms, but the point about third-party algorithms should be made. More and more, the BIOS is taking a back seat when "talking" to the drive: modern operating systems now perform this function with their own implementation of the ATA specification, preferring to bypass the BIOS altogether.

There are 16 IRQ (Interrupt ReQuest) channels on the x86 architecture. Of those, only a few are freely available. In the table below, IRQs with fixed system assignments cannot be reused; IRQs assigned to optional hardware can be reassigned, provided that hardware does not exist in your system; and IRQs marked "Available" are free to assign as you please.

IRQ No.  Hardware Assignment      IRQ No.  Hardware Assignment
0        System timer             8        Real Time Clock
1        Keyboard                 9        Available
2        Handles IRQs 8 - 15      10       Available
3        COM2                     11       Available
4        COM1                     12       PS/2 Mouse
5        LPT2 / Sound Card        13       Floating Point Processor
6        Floppy Controller        14       Primary IDE
7        Parallel Port            15       Secondary IDE

In essence, IRQs are used to halt the computer from processing any further information and immediately service the request from the device assigned to the interrupt. The table above shows what the IRQ architecture looked like under the PIC (Programmable Interrupt Controller); however, it hides the issue of priorities. The priorities of the IRQ structure run 0-1-2-8-9-10-11-12-13-14-15-3-4-5-6-7. The reason IRQs 8-15 have a higher priority is that they hook into IRQ 2; in fact, IRQ 2 can be said to be IRQ 9. What we have looked at here is somewhat historical. Under the above scenario, adding new hardware quickly became an art, and a pain! The advent of PCI and USB enabled a greater range of addresses, and also the ability to just plug things in and go.

DMA (Direct Memory Access) is a feature of the modern computer that enables devices to bypass the CPU when they need to write or read information to or from another device. The purpose of this is to take the load off the CPU and use the DMA controller and RAM to move blocks of data from one area to another. Although the CPU is never completely eliminated in a DMA transfer, its role is purely to initiate the process rather than manage it.

I/O (Input / Output) refers to moving data among all devices, both external and internal, within a modern computer system. Some devices can perform both input and output functions; an example is a network card. Obviously, keyboards, mice, etc. are examples of input devices, and monitors and printers are examples of output devices.

Putting it all together


When you turn the PC on, BIOS instructions are loaded into RAM from a permanently available ROM chip on the motherboard. These instructions, after performing a POST, may further inform the processor where the operating system is located and how to load it into RAM. In order to allow operating systems and applications to run on a PC, the BIOS provides a standard layer of services that the operating system can use to "talk" to the hardware. In turn, the operating system provides standard services to applications to perform their functions. It is important to understand that not all operating systems use all BIOS services: some use their own instructions to access the hardware. The direct method of accessing the hardware may improve performance.

The BIOS utilizes a number of technologies to perform the services we have addressed above. However, as with all things in the computer industry, technology is moving forward fast. The BIOS performs a crucial role within the system and new technology added to the motherboard will normally require BIOS cooperation so that the OS can utilize the new technology.

By now you should have a good understanding of the BIOS and the role it performs with hardware. In the next section we look at Linux and how it interacts with the BIOS / Hardware. This will hopefully give you a system administrator's view of these relationships.


Introduction


From this point onward it becomes necessary to have access to a Linux PC. Although some theory is involved, we shall be interacting with Linux more and more. I advise that you attempt the commands as you come across them, testing your understanding as you go. Do be careful with some of the commands: an incorrect switch, or in some cases running a command from the wrong directory, is not healthy. (One famous example is running rm -R * from / as root.) So if you are new to Linux, be careful: don't misuse the root account. Only use it when you have to. I personally advise a separate Linux installation for the course that contains no personal data.

Understand that no author or contributor to this book is in any way responsible for any loss of data or damage to any hardware, however it is caused. Mistakes in typing can happen, and this is an open book for anyone to edit regardless of their knowledge.

/proc


/proc is a pseudo-filesystem which is used as an interface to kernel data structures. Most of it is read-only, but some files allow kernel variables to be changed, particularly in /proc/sys. If you were to list the contents of /proc, you would see something like this:

user@host:~$ cd /proc
user@host:/proc$ ls
1     4190  5071  5462  5859  6          dma          pagetypeinfo
128   4312  5103  5478  5867  6024       driver       partitions
1475  44    5162  5547  5868  6553       execdomains  sched_debug
1481  45    5164  5563  5871  6583       fb           scsi
1508  4589  5205  5574  5879  6593       filesystems  self
1524  4590  5224  5579  5880  6685       fs           slabinfo
1526  4594  5227  5655  5884  6694       interrupts   stat
165   4595  5289  5660  5890  6714       iomem        swaps
166   4597  5302  5661  5892  6716       ioports      sys
1784  4765  5315  5695  5901  6717       irq          sysrq-trigger
1786  4805  5318  5697  5902  6735       kallsyms     sysvipc
1787  4878  5328  5698  5903  7          kcore        timer_list
2     4932  5336  5816  5905  acpi       key-users    timer_stats
207   4934  5356  5820  5912  asound     kmsg         tty
2272  4956  5362  5821  5915  buddyinfo  loadavg      uptime
2273  4972  5363  5829  5918  bus        locks        version
2515  4986  5370  5832  5925  cgroups    meminfo      version_signature
2718  4999  5373  5842  5938  cmdline    misc         vmcore
3     5     5378  5851  5941  cpuinfo    modules      vmnet
3181  5021  5416  5854  5970  crypto     mounts       vmstat
4     5042  5419  5856  5973  devices    mtrr         zoneinfo
41    5043  5423  5858  5982  diskstats  net

The first thing that you will notice is the numbered directories; these represent processes running on your system. Each numbered directory has a common subset of directories that provide information about that process. The number representing the directory is consistent with the process number seen with the ps command. We cover processes in a later section.
The directories and files we are interested in are the following:

/proc/acpi        * Power Management  
/proc/bus/pci     * Note on some distributions this may be /proc/pci
/proc/cpuinfo     * processor information
/proc/devices      
/proc/dma
/proc/interrupts
/proc/iomem
/proc/ioports
/proc/irq
/proc/meminfo
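
A few quick reads from these files (a sketch; output will of course vary from system to system):

$ grep "model name" /proc/cpuinfo    # processor model
$ grep MemTotal /proc/meminfo        # total RAM the kernel sees
$ cat /proc/dma                      # ISA DMA channels in use
$ cat /proc/interrupts               # interrupt lines and the devices using them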

Getting kernel information


As noted above, /proc provides an interface to kernel data structures; most of it is read-only, but some files allow kernel variables to be changed.

Examples of available directories are:

[number]/          Information on each process running on the system
                   (cmdline: the complete command line; cwd: the working directory; ...).
/proc/uptime       How long the system has been up and running.
/proc/sys/kernel   Kernel information.
/proc/sys/net      Network information.
/proc/partitions   Hard drive partition information.
/proc/scsi         SCSI information.
/proc/mounts       Mounted filesystem information.
/proc/devices      Lists the loaded drivers.
/proc/bus          Bus information.
/proc/version      Linux version.
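
Since some entries under /proc/sys are writable, kernel variables can be inspected and changed on the fly. A minimal sketch (the hostname is just an illustrative variable; changing it requires root and does not persist across reboots):

$ cat /proc/sys/kernel/hostname
myhost
# sysctl -w kernel.hostname=newhost    # the same variable, set via the sysctl utility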

/proc/acpi


ACPI is the interface used to monitor power-management events and states.

Getting hard drive Information


In order to get disk information, use hdparm. More information is available in the hdparm man page.

hdparm [options] [devices]

Common options:
-g: Get the disk geometry.
-C: Display the power mode of the hard drive.
 active/idle: Normal operation,
 Standby: Low  power  mode,
 or sleeping: Lowest power mode.
-v: Display  all  settings,  except  -i (same as -acdgkmnru for IDE, -gr for SCSI or -adgr for XT).
 This is also the default behaviour when no flags are specified.


Examples:

hdparm -g /dev/hda
 /dev/hda:
 geometry     = 3648/255/63, sectors = 58605120, start = 0
hdparm -C /dev/hda
 /dev/hda:
 drive state is:  active/idle


Exercises

  1. What is the RAM size of your system?
  2. Which devices are sharing an interrupt line?
  3. Use the lspci utility with the right option to draw the PCI architecture of your system.
    • How many PCI buses and bridges are there?
    • Are there any PCI/ISA bridges?
  4. What is the lspci option to list all the Intel PCI devices?
  5. What is the command to set your IDE hard drive to read-only mode?
  6. What is the command to turn on/off the hard drive disk cache?
  7. What does the setpci utility do? (Not mentioned in the above article, but do a web search to understand what it does)
  8. What is the command to write a word in register N of a PCI device?
Exercise Results
  1. To show the amount of physical RAM available, use free or cat /proc/meminfo | grep MemTotal
  2. To see which devices are sharing an interrupt line: cat /proc/interrupts | more
  3. To draw the PCI architecture of your system: lspci -t
    • How many PCI buses and bridges are there? lspci | wc -l
    • Are there any PCI/ISA bridges? lspci | grep 'PCI\|ISA'
  4. To list all the Intel PCI devices: lspci -d 8086:*
  5. To set your IDE hard drive to read-only mode: hdparm -r1 <device>
  6. To turn the hard drive disk cache on or off: hdparm -W1 <device> or hdparm -W0 <device>
  7. setpci is a utility for querying and configuring PCI devices.
  8. To write a word in register N of a PCI device: setpci -s 12:3.4 N.W=1

Configure Modem & Sound Cards


Detailed Objective


Weight: 1

Description
Candidates should be able to configure modem and sound card settings.
  • Key knowledge area(s):
    • Ensure devices meet compatibility requirements (particularly that the modem is not an unsupported "win-modem").
    • Verify that correct resources are used by the cards.
    • Configure modem for outbound dial-up.
    • Set serial port speeds.
  • The following is a partial list of the used files, terms and utilities:
    • /proc/dma
    • /proc/interrupts
    • /proc/ioports
    • /proc/pci
    • lspci
    • lsusb

Modems


A modem is a device that lets you send digital data through a telephone line. The four types of modem are:

  • External: Connected through the serial port.
  • USB: Connected through USB.
  • Internal: ISA or PCI board.
  • Built-in: Part of the motherboard.

Most new modems are Plug and Play, and there are various ways to deal with them:

  • The serial driver does it all for you.
  • Use the isapnp program.
  • Let a PnP BIOS do the configuration.

To display the configuration of an ISA device, use pnpdump. This utility can dump the information (I/O ports, interrupts, and DMA channels) that the card uses. To configure ISA devices, use isapnp; see the sketch below. For more information, check the man page for the isapnp.conf file.
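
A minimal sketch of the classic workflow on systems that still have ISA PnP hardware (run as root; review the generated file before applying it):

# pnpdump > /etc/isapnp.conf    # dump the resources each ISA PnP card can use
# vi /etc/isapnp.conf           # uncomment and choose the settings you want
# isapnp /etc/isapnp.conf       # program the cards with the chosen settings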

Serial ports


An external modem can be configured with setserial.

setserial [options] device [parameters]

The available serial ports are:
/dev/ttyS0 (COM1), port 0x3f8, irq 4
/dev/ttyS1 (COM2), port 0x2f8, irq 3
/dev/ttyS2 (COM3), port 0x3e8, irq 4
/dev/ttyS3 (COM4), port 0x2e8, irq 3

Common options:
-a: report all available information on a connected device.

Common parameters:
-port: Port number.
-irq: IRQ number.
-uart: Type of UART permitted (none, 8250, 16450,...).
-autoconfig: Ask the kernel to determine the UART, IRQ number,...
-baud_rate: Communication speed. (Maximum: 115200 bits/sec)

Example:

setserial -g /dev/ttyS*
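
Using the -a option on a single port reports everything setserial knows about it; you should see output along these lines (the values shown here are illustrative and vary by machine):

$ setserial -a /dev/ttyS0
/dev/ttyS0, Line 0, UART: 16550A, Port: 0x03f8, IRQ: 4
        Baud_base: 115200, close_delay: 50, divisor: 0
        Flags: spd_normal skip_test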

Dial Out and In


In order to dial out with a modem, you can use an application such as minicom. A configuration file can be created with its -s option.

minicom -s

In order to handle users dialing in, the system needs to be able to start a getty process to manage the dial-in session. The configuration is done in the /etc/inittab file.

 D1:45:respawn:/sbin/agetty -mt60 19200,9600 ttyS0 vt100
 
 -m:          tells getty to try to extract the bps rate.
 19200,9600:  bps rate when it receives a BREAK character.
 t60:         timeout of 60 seconds.
 ttyS0:       Port on which the modem is connected.
 vt100:       Terminal type used in the TERM env variable.

Once /etc/inittab is modified, init needs to re-read it.

telinit q

Sound Cards


Exercises



Detailed Objective


Weight: 1

Description:
Candidates should be able to configure non-IDE devices such as SCSI, SATA, USB drives using the special BIOS as well as the necessary Linux tools.

  • Key knowledge area(s):
    • Differentiate between the various types of non-IDE devices.
    • Manipulate BIOS to detect used and available SCSI IDs.
    • Set the correct hardware ID for different devices, especially the boot device.
    • Configure BIOS settings to control the boot sequence when both non-IDE and IDE devices are present.
  • The following is a partial list of the used files, terms and utilities:
    • SCSI ID
    • /proc/scsi/
    • scsi_info

The SCSI BIOS can be accessed at boot time with special key sequences (Ctrl+A for most Adaptec host bus adapters; Ctrl+G, Ctrl+M, or other keys for other vendors) and allows you to set up some parameters, such as which SCSI device is bootable, and more.

In order to get SCSI information, use scsi_info or hdparm.

Examples:

scsi_info /dev/sda
hdparm -grv /dev/sda

Note: tested with hdparm v6.1 (Debian Sarge, kernel 2.6.8-3, arch i386).

Exercises



Setup Different PC Expansion Cards


Detailed Objective


Weight: 3

Description
Candidates should be able to configure various cards for the various expansion slots.
  • Key knowledge area(s):
    • Know the differences between coldplug and hotplug devices.
    • Determine hardware resources for devices.
  • The following is a partial list of the used files, terms and utilities:
    • The appropriate subdirectories of /proc
    • hotplug configuration files, terms and utilities
    • lspci
    • lsusb

Hotplug


With proper support from the operating system, some devices can be added and/or removed without shutting the system down, much like a CD-ROM or floppy disk can be mounted or unmounted. USB was designed to be hot-pluggable, but the operating system must still be prepared to deal with the possibility of devices appearing and disappearing.

Some server motherboards support a hot-pluggable PCI slot standard, intended to reduce downtime by allowing administrators to replace failed components without shutting down the entire server. A few server vendors even go as far as to allow swapping out bad RAM while the system is running, but this is very rare and expensive. Both the hardware and the operating system must support hot-plugging components in order for this to work. (There's a limited amount of repair that can be done on an airplane while flying at 10,000 feet.)

Coldplug


It is much less confusing to your computer if you shut down the power before making any changes to the hardware you are connecting.

All PCI cards are normally detected by the BIOS. At boot time the BIOS probes the PCI configuration space and detects all the different devices and bridges. To ensure that the BIOS has detected all the PCI devices, use lspci, as sketched below. Check for bridges, special devices, and functions.

All ISA cards are also normally detected by their respective drivers. The utilities that allow you to manually configure ISA cards are pnpdump and isapnp, together with the /etc/isapnp.conf file. The pnpdump program dumps information on all the detected ISA cards. isapnp works with the configuration file /etc/isapnp.conf, which has the same syntax as the output of pnpdump, and allows you to customize any ISA card's settings.
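
A quick way to verify what was detected (both are standard lspci options; output varies by machine):

$ lspci | grep -i bridge    # host, PCI-PCI and PCI-ISA bridges
$ lspci -t                  # tree view of buses, bridges and devices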

Exercises



Configure Communication Devices


Detailed Objective


Weight: 1

Description
Candidates should be able to install and configure different internal and external communication devices like modems, ISDN adapters and DSL modems.
  • Key knowledge area(s):
    • Verification of compatibility requirements (such as the modem is not a "winmodem").
    • Correctly set IRQs, DMAs and I/O Ports of the cards to avoid conflicts between devices.
    • Load and configure suitable device drivers.
    • Set serial port speed.
    • Setup modem for outbound PPP connections.
  • The following is a partial list of the used files, terms and utilities:
    • /proc/dma
    • /proc/interrupts
    • /proc/ioports
    • setserial

I/O Ports


To list the I/O ports the system uses, print the /proc/ioports file.

$ cat /proc/ioports
0000-001f : dma1
0020-003f : pic1
0040-005f : timer
0060-006f : keyboard
0070-007f : rtc
0080-008f : dma page reg
00a0-00bf : pic2
00c0-00df : dma2
00f0-00ff : fpu
0170-0177 : PCI device 8086:248a
  0170-0177 : ide1
01f0-01f7 : PCI device 8086:248a
  01f0-01f7 : ide0
02f8-02ff : serial(auto)
0376-0376 : PCI device 8086:248a
  0376-0376 : ide1
0378-037a : parport0
037b-037f : parport0
03c0-03df : vesafb
03f6-03f6 : PCI device 8086:248a
  03f6-03f6 : ide0

Interrupts


To list all the interrupts used by all the devices, print the /proc/interrupts file.

 $ cat /proc/interrupts
            CPU0
   0:     397517          XT-PIC  timer
   1:       7544          XT-PIC  keyboard
   2:          0          XT-PIC  cascade
   5:          0          XT-PIC  usb-uhci, usb-uhci
   8:          2          XT-PIC  rtc
  10:       2024          XT-PIC  eth0, usb-uhci, PCI device 104c:ac51, PCI device 104c:ac51, Intel ICH3
  12:      19502          XT-PIC  PS/2 Mouse
  14:      11445          XT-PIC  ide0
  15:       2770          XT-PIC  ide1
 NMI:          0
 ERR:          0

An optimized system will not have any interrupt line shared by more than one heavily-used device.

Remember that every ISR registered on a shared line will be executed for each interrupt on that line.

To list all the ISA DMA (Direct Memory Access) channels in-use, print out the /proc/dma file.

$ cat /proc/dma
4: cascade

To list all the devices on the PCI buses, print out the /proc/pci file.

$ cat /proc/pci
PCI devices found:
 Bus  0, device   0, function  0:
   Class 0600: PCI device 8086:3575 (rev 2).
     Prefetchable 32 bit memory at 0xe0000000 [0xefffffff].
 Bus  0, device   1, function  0:
   Class 0604: PCI device 8086:3576 (rev 2).
     Master Capable.  Latency=96.  Min Gnt=12.
 Bus  0, device  29, function  0:
   Class 0c03: PCI device 8086:2482 (rev 1).
     IRQ 10.
     I/O at 0x1800 [0x181f].
 Bus  0, device  29, function  1:
   Class 0c03: PCI device 8086:2484 (rev 1).
     IRQ 5.
     I/O at 0x1820 [0x183f].
 Bus  0, device  29, function  2:
   Class 0c03: PCI device 8086:2487 (rev 1).
     IRQ 5.
     I/O at 0x1840 [0x185f].
 Bus  0, device  30, function  0:
   Class 0604: PCI device 8086:2448 (rev 65).
     Master Capable.  No bursts.  Min Gnt=4.

Exercises



Configure USB Devices


Detailed Objective


Weight: 1

Description
Candidates should be able to activate USB support, use and configure different USB devices.
  • Key knowledge area(s):
    • Identify and load the correct USB driver module.
    • Demonstrate knowledge of the USB layer architecture and the modules used in the different layers.
  • The following is a partial list of the used files, terms and utilities:

Auto detection of new USB Devices


The program that gets executed when new hardware is connected is hotplug.

 hotplug name
 
 Common names are:
 pci: PCI devices.
 usb: USB devices.

The /etc/hotplug directory contains the scripts that are executed each time a device is inserted or removed.

 * /etc/hotplug/pci.agent: To install the appropriate PCI driver.
 * /etc/hotplug/usb.agent: To install the appropriate USB driver.

The hotplug program is also started at boot time, via /etc/init.d/hotplug, to initialize all the connected devices.

List USB Devices


To verify your devices have been detected, use lsusb.

lsusb [options]

Example:

 $ lsusb -v
 Bus 001 Device 004: ID 04a9:3045 Canon Inc. PowerShot S100
 
 Device Descriptor:
 
  bLength                18
  bDescriptorType         1
  bcdUSB               1.00
  bDeviceClass          255 Vendor Specific Class
  bDeviceSubClass       255 Vendor Specific Subclass
  bDeviceProtocol       255 Vendor Specific Protocol
  bMaxPacketSize0        32
  idVendor           0x04a9 Canon Inc.
  idProduct          0x3045 PowerShot S100
  ...

To display a graphical view of the connected USB devices, use usbview.

USB Drivers


Every detected USB device is represented in the /proc/bus/usb filesystem and can be accessed with the appropriate application.

Each USB device will be viewed through a filename like /proc/bus/usb/001/005 .

To check if the appropriate driver has been loaded for a USB device, use usbmodules.

usbmodules [options]

Examples:

usbmodules --device /proc/bus/usb/001/001
usbcore
usbmodules --device /proc/bus/usb/001/005 --mapfile /etc/hotplug/usb.handman

The default device-to-driver mapping is stored in the file /lib/modules/<kernel-version>/modules.usbmap.

All the drivers are stored in the directory /lib/modules/<kernel-version>/kernel/drivers/usb/.
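
A quick way to confirm which USB modules the kernel has actually loaded (module names vary with kernel version and hardware; the names below are only examples):

$ lsmod | grep -i usb
usb_storage            ...
usbcore                ...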

USB Applications


Many applications exist for many different devices. It is sometimes time-consuming to make them work. An application that can be used for a digital camera is gphoto2.

Common options:

--debug: see what the problem is when talking to the camera.
--print-usb-usermap: store the output in /etc/hotplug/usb.usermap in order for the application to support your camera.
-P: download pictures.

Example:

$ gphoto2 --summary

Detected a 'Canon PowerShot S100'.
Camera summary :
Camera identification:
  Model : Canon PowerShot S100
  Owner:  

Power status: on battery (power OK) 

Flash disk information:
  Drive D:
   31'885'312 bytes total
   27'668'480 bytes available

Exercises


USB:

  1. Check if you can detect a digital camera.
  2. View the camera device information.
  3. Take a picture and download it into a system with gphoto2.
  4. Configure your own device (HD, camera, mouse, keyboard,...)


Linux Installation & Package Management


Detailed Objectives


(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to design a disk partitioning scheme for a Linux system.

Key knowledge area(s):

  • Allocate filesystems and swap space to separate partitions or disks.
  • Tailor the design to the intended use of the system.
  • Ensure the /boot partition conforms to the hardware requirements for booting.

The following is a partial list of the used files, terms and utilities:

  • / (root) filesystem
  • /var filesystem
  • /home filesystem
  • /boot filesystem
  • EFI System Partition (ESP)
  • swap space
  • mount points
  • partitions

Filesystems


A filesystem is simply a way of organizing data in computer-accessible form on the hard disk or other media. Different filesystems have different organizing structures to determine where the data and indexing information will be stored. Some popular filesystems include:

ext2      one of the oldest and most universally supported filesystems on Linux, Unix, and BSD operating systems
ext3      an extended version of ext2 which overcomes some limitations and adds journaling
ext4      fourth extended filesystem - a journaling filesystem for Linux, developed as the successor to ext3
btrfs     a modern copy-on-write filesystem for Linux
reiserfs  an enhanced journaling filesystem written by Hans Reiser and extended by the open source community since his incarceration
jfs       a journaling filesystem developed by IBM
xfs       a high-performance journaling filesystem originally developed by SGI
fat or
vfat      the file allocation table-based filesystem used by MS-DOS and Windows 9x
NTFS      a more advanced (than FAT) filesystem used by Windows NT, 2000, XP and Vista

Partitions


When doing an installation, a minimum of two partitions normally needs to be created:

  • / (root): directory that contains the Linux distribution.
  • Swap space: partition that allows a kernel to run more processes than can normally fit into RAM.

If multiple disks are available it is good practice to also have the /usr and /home directories on different partitions. Each partition will contain a filesystem type and can be mounted on the active system in the filesystem global tree. To print the active mounted filesystems, use mount.

$ mount
/dev/hda3 on / type reiserfs (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
/dev/hda1 on /boot type ext2 (rw)
shmfs on /dev/shm type shm (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)

The swap partition doesn't need a filesystem. The kernel accesses it in raw mode, without the overhead of filesystem system calls.

Disk speed issues

[edit | edit source]

Before deciding on your partitioning scheme, you really need to know exactly what sort of applications you will be running.

  • Mail Server
  • Web Server
  • Graphical X Window System based applications
  • And more

If your system has multiple disks, use the fastest one to store most of your data.

  • / Contains most of the system utilities and doesn't get used much. These can be shipped off to the slowest disk.
  • /var/log contains a lot of logging information. Best on a fast disk.
  • /usr is typically on a separate partition anyway and if you have a lot of clients starting lots of X applications, use a fast disk.

Examples of system applications:

For e-mail serving, Sendmail writes mainly to two locations: the mail queue (usually /var/spool/mqueue) and the mailboxes in /var/spool/mail, as well as other locations. Apache uses several different files: two log files per hosted site plus the actual pages it serves. Apache spends quite a bit of time writing to its log files in /var/log (or wherever it is configured to write them).

Virtual memory (Swap)

[edit | edit source]

When you set up a new system, a traditional guideline says swap should be twice your actual RAM. This is not always sensible in real-world scenarios, but it is a conservative answer to give in an exam. swapon enables a specified device or partition for swapping; conversely, swapoff disables one. Both commands accept the -a option, which applies to all swap devices found in /proc/swaps or /etc/fstab.

Information on the swap partition can be displayed with swapon.

swapon -s # Display the active partition
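
As a sketch of how a new swap area is typically created and enabled (assuming /dev/hda4 is a spare partition; adapt the device name to your system):

mkswap /dev/hda4  # Write a swap signature to the partition
swapon /dev/hda4  # Enable the new swap area immediately
swapon -s         # Verify that it is listed as active

To make the swap area permanent, a corresponding line is usually added to /etc/fstab, e.g. /dev/hda4 swap swap defaults 0 0.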

To get information on the usage of virtual memory, use vmstat.

$ vmstat -n 1
  procs                      memory    swap          io     system         cpu
r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
5  0  1    184   3228  37684  92828   0   0    37    19  124   228   3   0  97
1  0  0    184   3476  37684  92596   0   0     0     0  102   368   0   0 100
2  0  0    184   3476  37684  92596   0   0     0     0  101   328   0   0 100
 
r: processes waiting for run time.
b: processes in uninterruptible sleep.
w: processes swapped out.
swpd: virtual memory used (kB).
free: idle memory (kB).
buff: memory used as buffers (kB).
si: memory swapped in from disk (kB/s).
so: memory swapped to disk (kB/s).
bi: blocks received from a block device (blocks/s).
bo: blocks sent to a block device (blocks/s).
in: the number of interrupts per second.
cs: the number of context switches per second.
us: user time.
sy: system time.
id: idle time.

Exercises

[edit | edit source]
  1. Open two terminals: in one terminal, display the virtual memory usage periodically. In the second terminal, disable the virtual memory and re-enable it. Note the changes in the first terminal.
  2. What is the disk layout of your system and how many disks do you have?
  3. How much swap space can you use?


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to select, install and configure a boot manager.

Key knowledge area(s):

  • Providing alternative boot locations and backup boot options.
  • Install and configure a boot loader such as GRUB Legacy.
  • Perform basic configuration changes for GRUB 2.
  • Interact with the boot loader.

The following is a partial list of the used files, terms and utilities:

  • menu.lst, grub.cfg and grub.conf
  • grub-install
  • grub-mkconfig
  • MBR

Boot managers

[edit | edit source]

A boot loader is installed in the MBR (Master Boot Record). When a system starts, it loads what is in the MBR to RAM. Under Linux there are two main boot loaders:

  • Lilo: LInux LOader.
  • Grub: GRand Unified Boot Loader.

A boot loader allows you to select the image that you would like to boot from. A system can contain multiple images (operating systems).

A boot loader allows you to interactively run commands and pass parameters to the image that you will boot. The initrd is an initial RAM disk image loaded by the boot loader; the kernel uses it as a temporary root filesystem in RAM to load drivers and mount the real filesystems.

GRUB is today's default boot loader for many distributions. When installing Windows with Linux, install Windows first and Linux second, because Windows overwrites the MBR without asking.

LILO vs. GRUB

[edit | edit source]

Both are used to load an image from a disk to RAM. GRUB has the following advantages over LILO:

  • More pre-OS commands.
  • Supports images stored beyond the 1024 BIOS cylinder limitation.
  • Can access its configuration file through the filesystem.

When using LILO, each time you add a new image or change an image, LILO needs to be re-installed in the MBR.

  • LILO keeps its boot information in the MBR
  • GRUB keeps its boot information in the filesystem (menu.lst).
  • LILO also has a configuration file /etc/lilo.conf.

To install GRUB on the MBR, use grub-install. From the GRUB shell, the setup command will overwrite the MBR.

To install LILO on the MBR, use lilo. The lilo command uses the /etc/lilo.conf file to know what to write into the MBR.

Example of /etc/lilo.conf:

# LILO global section
boot = /dev/hda # LILO installation target: MBR
vga = normal # (normal, extended, or ask)
read-only # Mount the root file systems read-only

# LILO Linux section
image=/boot/vmlinuz # Image to load
label=linux   # Label to display
root=/dev/hda1  # Root partition for the kernel
initrd=/boot/initrd # Ramdisk

# LILO DOS/Windows section
other=/dev/hda3
label=windows
# LILO memtest section
image=/boot/memtest.bin
label=memtest86

Example of menu.lst (GRUB configuration file):

# GRUB default values
timeout 10 # Boot the default kernel after 10 seconds
default 0 # Default kernel

# Grub for Linux section 0
title GNU/Linux  # Title
root (hd0,1)  # /dev/hda2 root filesystem
# Kernel and parameters to pass to the kernel
kernel /boot/vmlinuz root=/dev/hda2 read-only
initrd /boot/initrd
boot

# Grub for DOS/Windows section
title Windows
root (hd0,2)  # /dev/hda3
makeactive
chainloader +1
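
The objective also covers GRUB 2, which generates its configuration file (grub.cfg) instead of having it edited by hand. A minimal sketch of the usual workflow, assuming the first disk is /dev/sda (paths vary by distribution; some use /boot/grub2 and the grub2-* command names):

grub-install /dev/sda                 # Install GRUB 2 to the MBR of the first disk
grub-mkconfig -o /boot/grub/grub.cfg  # Generate grub.cfg from /etc/default/grub and the scripts in /etc/grub.d/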

GRUB Resources

[edit | edit source]
 * GRUB Manual
 * GRUB homepage
 * Grub wiki
 * Linux+Win+Grub HowTo
 * Linux Recovery and Boot Disk Creation with Grub.
 * Win32 Grub
 * Booting with GRUB
 * WinGRUB
 * GRUB Installer for Windows
 * GRUB for DOS - Bridging DOS/Windows to Unix/Linux

Exercises

[edit | edit source]

1) Install Grub on a floppy disk and try to boot your image manually:

mkfs -t ext2 /dev/fd0            # Create an ext2 filesystem on the floppy
mount /dev/fd0 /mnt
mkdir -p /mnt/boot/grub
cp /boot/grub/stage* /mnt/boot/grub/
cp /boot/grub/e2fs_stage1_5 /mnt/boot/grub/
umount /mnt
grub                             # Start the GRUB shell
root (fd0)                       # Use the floppy as the GRUB root device
setup (fd0)                      # Install GRUB into the floppy's boot sector
quit

Now reboot with the floppy and from the prompt select the kernel on the hard disk.

root (hd0,1)
kernel /boot/vmlinuz root=/dev/hda2 read-only
initrd /boot/initrd
boot

2) Create /boot/grub/menu.lst file and install Grub on your hard drive with the grub utility.

3) Reinstall LILO. Change the linux label of the default kernel image to SuSE in /etc/lilo.conf and re-install lilo in the MBR.


Detailed Objective (206.1)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to build and install an executable program from source. This objective includes being able to unpack a file of sources.


Key Knowledge Areas:

  • Unpack source code using common compression and archive utilities.
  • Understand basics of invoking make to compile programs.
  • Apply parameters to a configure script.
  • Know where sources are stored by default.


Terms and Utilities:

  • /usr/src/
  • gunzip
  • gzip
  • bzip2
  • xz
  • tar
  • configure
  • make
  • uname
  • install
  • patch

Source files

[edit | edit source]

An archive is a collection of related files stored in one file. The command that allows you to store files and subtree directories in one file is tar.

tar [function] [options] [files]

Common functions: -c: Create a new tar file. -t: List the contents of a tar file. -x: Extract the contents of a tar file.

Common options: -f file: Specify the name of the tar file.

Examples:

tar cvf mybackup.tar ~
tar cvf usr.tar /usr
tar tvf mybackup.tar
tar xvf mybackup.tar

It is good practice to use the .tar extension for all files archived with tar.

File compression

[edit | edit source]

Compression saves space for storage and file transfer. There are multiple utilities to do compression:

  • compress, uncompress # Old Unix compression algorithm
  • gzip, gunzip # Most commonly used
  • bzip2, bunzip2 # Better compression than gzip
  • xz, unxz # Best compression ratio of the listed tools

Once an archive has been created, it can be compressed. Examples:

$ ls -l backup.tar
-rw-r--r-- 1 rarrigon users 22773760 nov 10 11:07 backup.tar
$ gzip -v backup.tar
backup.tar:  53.8% -- replaced with backup.tar.gz
$ ls -l backup.tar.gz
-rw-r--r-- 1 rarrigon users 10507393 nov 10 11:07 backup.tar.gz
$ gunzip backup.tar.gz
$ bzip2 -v backup.tar
backup.tar:  2.260:1,  3.540 bits/byte, 55.75% saved, 22773760 in, 10077846 out.

Files archiving and compression

[edit | edit source]

When archiving files and subdirectories it is possible to package and compress them in one command. Examples:

tar cvzf backup.tgz ~ # Backup of home with gzip
tar cvjf backup.tbz ~ # Backup of home with bzip2
tar xvzf backup.tgz # Extract and gunzip backup.tgz
tar xvjf backup.tbz # Extract and bunzip2 backup.tbz

By default tar stores relative paths, but with the -P option it is possible to save files with absolute paths. Files archived this way will always be extracted to the same absolute locations.
To compress and archive in one line:

$ tar cvf - . | gzip > target.tar.gz

For unzipping a compressed archive:

$ gunzip -c file_name.tar.gz |tar xvf -

GNU tool chain

[edit | edit source]

Under Linux all the sources can be built with the standard GNU tool chain. The main tools are:

  • make: Utility to maintain groups of programs, using the rules defined in a Makefile.

  • gcc ANSI C Compiler
  • g++ C++ Compiler

Many source packages, once unpacked, contain information files (README, INSTALL) that explain how the package should be built and installed. The files Makefile.in and configure.in are the basic files used to generate the final Makefile: the configure script scans the system and builds the final Makefile.
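
A typical build-from-source session therefore looks like the following sketch (mypack-1.0 is a hypothetical package name, and the --prefix value is only an example):

tar xvjf mypack-1.0.tar.bz2       # Unpack the sources
cd mypack-1.0
./configure --prefix=/usr/local   # Scan the system and generate the Makefile
make                              # Build the program following the Makefile rules
make install                      # Install the result (usually run as root)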

Exercises

[edit | edit source]
  1. Make an archive of the /bin and /sbin directories. With which compression utility do you get the smallest file size? Use -v to see the size reduction as a percentage.
  2. Unpack the file /usr/src/packages/SOURCES/grub-09.tar.bz2 in /tmp and, by reading INSTALL and README, build the sources.
  3. Find a way to unpack a .deb and an .rpm archive. What is inside?
  4. In one command line, compress a new file and uncompress it somewhere else.


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 1

Description:
Candidates should be able to determine the shared libraries that executable programs depend on and install them when necessary.

Key knowledge areas:

  • Identify shared libraries.
  • Identify the typical locations of system libraries.
  • Load shared libraries.

The following is a partial list of the used files, terms and utilities:

  • ldd
  • ldconfig
  • /etc/ld.so.conf
  • LD_LIBRARY_PATH

Shared libraries

[edit | edit source]

A library is a set of functions that programs can use to implement their functionality. When building (linking) a program, those libraries can be statically or dynamically linked to the executable. Static linking means that the final program contains the library functions within its own file (lib*.a). Dynamic linking means that the needed libraries are loaded into RAM when the program is executed (lib*.so).

The default directories for all the standard libraries are:

  • /lib: Used mainly by /bin programs.
  • /usr/lib: Used mainly by /usr/bin programs.

The file /etc/ld.so.conf is used by the system to specify other library locations. To build a cache file used by the runtime loader of all the available libraries, use ldconfig. The file /etc/ld.so.cache will be generated.
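
For example, after installing a library into a non-standard directory, the directory is added to /etc/ld.so.conf and the cache is rebuilt (a sketch; /opt/mylib/lib is a hypothetical path):

echo /opt/mylib/lib >> /etc/ld.so.conf  # Register the new library directory
ldconfig                                # Rebuild /etc/ld.so.cache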

Library dependencies

[edit | edit source]

To print shared programs or library dependencies, use ldd.

ldd [-vdr] program|library

Example:

$ ldd -d -v /bin/cp
  libc.so.6 => /lib/libc.so.6 (0x40027000)
  /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

  Version information:
  /bin/cp:
               libc.so.6 (GLIBC_2.1.3) => /lib/libc.so.6
               libc.so.6 (GLIBC_2.1) => /lib/libc.so.6
               libc.so.6 (GLIBC_2.2) => /lib/libc.so.6
               libc.so.6 (GLIBC_2.0) => /lib/libc.so.6
  /lib/libc.so.6:
               ld-linux.so.2 (GLIBC_2.1.1) => /lib/ld-linux.so.2
               ld-linux.so.2 (GLIBC_2.2.3) => /lib/ld-linux.so.2
               ld-linux.so.2 (GLIBC_2.1) => /lib/ld-linux.so.2
               ld-linux.so.2 (GLIBC_2.2) => /lib/ld-linux.so.2
               ld-linux.so.2 (GLIBC_2.0) => /lib/ld-linux.so.2

Runtime loader

[edit | edit source]

The runtime loader ld.so finds the libraries a program needs and loads them into RAM. The search order of ld.so is:

  • LD_LIBRARY_PATH
  • The cache file /etc/ld.so.cache
  • The default directories /lib and /usr/lib.
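
Because LD_LIBRARY_PATH is searched first, it can be used to run a program against a library that is not installed system-wide, for a single invocation (a sketch; the path and program name are hypothetical):

LD_LIBRARY_PATH=/home/user/testlibs ./myprogram  # Use the libraries in testlibs for this run only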

Exercises

[edit | edit source]


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to perform package management using the Debian package tools.

Key knowledge areas:

  • Install, upgrade and uninstall Debian binary packages.
  • Find packages containing specific files or libraries which may or may not be installed.
  • Obtain package information like version, content, dependencies, package integrity and installation status (whether or not the package is installed).
  • Awareness of apt.

The following is a partial list of the used files, terms and utilities:

  • /etc/apt/sources.list
  • dpkg
  • dpkg-reconfigure
  • apt-get
  • apt-cache

Package Structure

[edit | edit source]

In order to understand how to use Debian's package management system, it would be useful to first have an understanding of how a Debian package is named. For example, the package ncftp_3.1.3-1_i386.deb has 5 major parts:

  • ncftp - the name of the program/application/library
  • 3.1.3 - the version of the program/application/library assigned by the original (upstream) author(s)
  • 1 - the revision number of the package assigned by the person(s) who packaged the program for a Debian system
  • i386 - the architecture the packaged program is designed to run on
  • .deb - signifies this is a Debian package

Note that there is special significance to the use of underscores (_) and hyphens (-): an underscore separates the name of the program and its version, a hyphen separates the version number and the revision number, and an underscore separates the revision number and the architecture.

dpkg is the "granddaddy", or back-end, of the Debian package management system. Features present in the more advanced tools are not available in dpkg, but it is nevertheless a useful tool.

Some notes:

  • dpkg keeps its record of available packages in /var/lib/dpkg/available.

Some of the more common functions of dpkg used by administrators are:

Adding, Removing, and Configuring Packages

  • dpkg {-i|--install} <package-name> will install the specified package
  • dpkg {-r|--remove} <package-name> will remove the specified package (but leave the configuration files intact)
  • dpkg {-P|--purge} <package-name> will remove the specified package and the corresponding configuration files
  • dpkg --root /target -i <package> will install a package into an unbootable system by specifying the system root.
  • dpkg --unpack <package-name> will unpack (but not configure) a Debian archive into the filesystem of the hard disk
  • dpkg --configure <package-name> will configure a package that already has been unpacked

Querying Package Information

  • dpkg --info <package-name> will print out the control file (and other information) for a specified package
  • dpkg {-l|--list} will give you a list of installed packages.
  • dpkg {-a|--pending}, given instead of a package name, acts on all packages that are unpacked but marked to be removed or purged in /var/lib/dpkg/status; they are removed or purged, respectively.
  • dpkg {-s|--status} <package-name> will give you a description of the installed package

Updating Package Information

  • dpkg --update-avail <package-name> will replace old information with new information from the package.
  • dpkg --merge-avail <package-name> will combine old information with new information from the package.

dpkg-reconfigure

[edit | edit source]

dpkg-reconfigure reconfigures packages after they have already been installed.

  • dpkg-reconfigure <package-name> to reconfigure the initial installation settings
  • dpkg-reconfigure --priority=medium package [...] will set the minimum priority of question that will be displayed
  • dpkg-reconfigure --all will reconfigure all packages
  • dpkg-reconfigure locales will generate any extra locales
  • dpkg-reconfigure --priority=low xserver-xfree86 will reconfigure the X server

Dselect

[edit | edit source]

The utility that allows you to easily add and remove packages on Debian is dselect. Its main menu entries are:

  • Choose the access method to use.
  • Update the list of available packages, if possible.
  • Request which packages you want on your system.
  • Install and upgrade wanted packages.
  • Configure any packages that are unconfigured.
  • Remove unwanted software.

dselect has an interactive menu that allows you to install and remove packages. Care must be taken with this utility: you can damage your system.

Dselect menu example:

Debian `dselect' package handling frontend.
0. [A]ccess    Choose the access method to use. 
1. [U]pdate    Update list of available packages, if possible. 
2. [S]elect    Request which packages you want on your system.
3. [I]nstall   Install and upgrade wanted packages. 
4. [C]onfig    Configure any packages that are unconfigured. 
5. [R]emove    Remove unwanted software.
6. [Q]uit      Quit dselect.
$ dselect - list of access methods
Abbrev.        Description
cdrom          Install from a CD-ROM.
* multi_cd     Install from a CD-ROM set.
nfs            Install from an NFS server (not yet mounted).
multi_nfs      Install from an NFS server (using the CD-ROM set) (not yet mounted).
harddisk       Install from a hard disk partition (not yet mounted).
mounted        Install from a filesystem which is already mounted.
multi_mount    Install from a mounted partition with changing contents.
floppy         Install from a pile of floppy disks.
apt            APT Acquisition [file,http,ftp]

apt-get

[edit | edit source]

If you know the name of a package you want to install, use apt-get. You must first configure the sources.list file, located in /etc/apt; this same file is used when you choose the apt access method of dselect.
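
A minimal /etc/apt/sources.list might look like the following sketch (the mirror and release names are only examples; use those of your distribution):

deb http://deb.debian.org/debian stable main contrib
deb-src http://deb.debian.org/debian stable main contrib

After editing the file, run apt-get update to refresh the package lists.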

  • apt-get install <package-name> will search its database for the most recent version of this package and will retrieve and install it from the corresponding archive as specified in sources.list. In the event that this package depends on other packages, APT will check the dependencies and install the needed packages.
    • apt-get install <package-name>=<version> will install a package at the version specified
    • apt-get install <package-name> -o DPkg::options::="--force-overwrite" installs a package ignoring "error processing ..., which is also in package ..." errors.
  • apt-get remove <package-name> will remove the specified package but keep its configuration files.
  • apt-get --purge remove <package-name> will remove the specified package and its configuration files.
  • apt-get -u install <package-name> will upgrade and install a specific package.
  • apt-get -u upgrade will upgrade all installed packages within the same distribution, except those which have been kept back because of broken or new dependencies.
  • apt-get -u dist-upgrade will upgrade an entire Debian system at once.

apt-file

[edit | edit source]
  • apt-file search <file-name> will search for a package which includes the specified file.
  • apt-file list <package-name> will list the contents of a package matching the pattern. This action is very close to the dpkg -S command except the package does not need to be installed or fetched.

Apt-cache

[edit | edit source]

To find the name of a package that you want to install, use apt-cache. The main apt-cache options are:

  • add - Add a package file to the source cache
  • showpkg - Show some general information for a single package
  • stats - Show some basic statistics
  • search - Search the package list for a regex pattern
  • show - Show a readable record for the package
  • depends - Show raw dependency information for a package
user@host:~$ apt-cache search gimp
babygimp - An icon editor in Perl-Tk
blackbook - GTK+ Address Book Applet
cupsys-driver-gimpprint - Gimp-Print printer drivers for CUPS
escputil - A maintenance utility for Epson Stylus printers
filmgimp - A motion picture editing and retouching tool

Resources

[edit | edit source]

APT HOWTO
http://www.debian.org/doc/manuals/apt-howto/index.en.html
dselect Documentation for Beginners
http://www.debian.org/doc/manuals/dselect-beginner/

Exercises

[edit | edit source]
  1. Install a system with Debian.
  2. Get familiar with dselect and remove the tcpdump utility.
  3. Install back with apt-get the package that contains the tcpdump utility.
  4. Try kpackage to install ethereal.

The Red Hat Package Manager (RPM) is a powerful package manager which can be used to build, install, query, verify, update, and erase individual software packages. A package consists of an archive of files and the meta-data used to install and erase the archive files. The meta-data includes helper scripts, file attributes, and descriptive information about the package. Packages come in two varieties: binary packages, used to encapsulate software to be installed, and source packages, containing the source code and the recipe necessary to produce binary packages.

Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to perform package management using RPM, YUM and Zypper.

Key Knowledge Areas:

  • Install, re-install, upgrade and remove packages using RPM, YUM and Zypper.
  • Obtain information on RPM packages such as version, status, dependencies, integrity and signatures.
  • Determine what files a package provides, as well as find which package a specific file comes from.
  • Awareness of dnf.

The following is a partial list of the used files, terms and utilities:

  • rpm
  • rpm2cpio
  • /etc/yum.conf
  • /etc/yum.repos.d/
  • yum
  • zypper

Red Hat Package Manager

[edit | edit source]

Some Linux distributions use rpm, the Red Hat Package Manager, for all their distribution software. RPM maintains a detailed database of all software installed on the system.

To install a RPM package, do:

rpm -i [package].rpm

The package will be installed only if the dependencies are met and there is no conflict with another package. To upgrade a package, do:

rpm -U [package].rpm

The files of the old package version will be removed and replaced by the new files. To remove an RPM package, do:

rpm -e [package]

Note that -e takes the installed package name, not the .rpm filename.

The package will be removed only if no other package depends on it.

RPM Queries

[edit | edit source]

With the -q option you can query the RPM database or display information about a package file.

There are several switches that you can use:

  • -i: to get package information
rpm -q -i apache
  • -l: To get a file list of a package.
$ rpm -q -l pciutils
/sbin/lspci
/sbin/setpci
/usr/share/doc/package/pciutils
...
/usr/share/pci.ids
  • -f file: Query which package a file belongs to.
$ rpm -q -f /sbin/lspci
pciutils-2.1.9-58
  • -s: File list with status information.
  • -d: list only documentation files.
  • -a: List all the installed packages.

If you want to display information about a package file, specify the filename using the -p switch:

rpm -q -i -p [package].rpm

RPM Commands

[edit | edit source]

To get general information on a package or program, use rpmlocate (a SUSE-specific helper).

rpmlocate ipcs

Searching for ipcs in rpm db:

util-linux-2.11n-75:
/usr/bin/ipcs
/usr/share/man/man8/ipcs.8.gz

To list all the installed packages, use rpmqpack (also SUSE-specific):

rpmqpack

Alternatively use:

rpm -qa
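
The objective also covers YUM and Zypper, the higher-level tools of Red Hat- and SUSE-based systems. Like apt-get, they resolve dependencies automatically. A sketch of the basic operations (httpd and apache2 are just example package names):

yum install httpd       # Install a package and its dependencies (Red Hat, CentOS, Fedora)
yum remove httpd        # Remove a package
yum info httpd          # Show package information
zypper install apache2  # The same operations with Zypper (SUSE)
zypper remove apache2
zypper info apache2

On newer Fedora and Red Hat systems, dnf replaces yum with the same basic syntax.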


Source Installation

[edit | edit source]

RPM source files generally have the format package.src.rpm and can be installed the same way as binary packages. The directories under /usr/src/packages where they are installed are:

  • SOURCES: For the original sources.
  • SPECS: For the .spec file that controls the build process.
  • BUILD: All the sources are built in this directory.
  • RPMS: Where the complete binary packages are stored.
  • SRPMS: The sources.

To install the source of a package, do:

$ rpm -i mypack.src.rpm

The source files will be stored under /usr/src/packages in the SPECS and SOURCES directories. To compile the sources, do:

$ rpm -ba /usr/src/packages/SPECS/mypack.spec

On newer systems the build functionality lives in the separate rpmbuild command (rpmbuild -ba). The result of the compilation will be stored in the BUILD directory and the finished packages in RPMS.

Exercises

[edit | edit source]
  1. Is the apache package installed?
  2. In which package are the files /bin/ls, /usr/sbin/tcpdump, and /sbin/ifconfig?
  3. From the floppy disk install the pci utilities and grub packages. Build the binaries and try to execute them. The sources should be in the /usr/src/packages/BINARY directory.

GNU & UNIX Commands

[edit | edit source]

Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 4

Description:
Candidates should be able to interact with shells and commands using the command line. The objective assumes the Bash shell.

Key Knowledge Areas:

  • Use single shell commands and one line command sequences to perform basic tasks on the command line.
  • Use and modify the shell environment including defining, referencing and exporting environment variables.
  • Use and edit command history.
  • Invoke commands inside and outside the defined path.

The following is a partial list of the used files, terms and utilities:

  • bash
  • echo
  • env
  • export
  • pwd
  • set
  • unset
  • type
  • which
  • man
  • uname
  • history
  • .bash_history
  • Quoting

Command line

[edit | edit source]

Command lines have a common form:

command  [options]  [arguments]

Examples:

pwd
ls -ld or ls -l -d or ls -d -l
rm -r /tmp/toto
cat  ../readme helpme > save
more /etc/passwd /etc/hosts /etc/group
find . -name '*.[ch]' -print
date "+day is %a"

Command lines can be stored into a file for a script.

To display a string to the standard output (stdout) use echo.

echo [-n][string|command|$variable]
echo my home directory is: $HOME
echo I use the $SHELL shell

Shells and Bash

[edit | edit source]

The order of precedence of the various sources of commands when you type a command at the shell is:

  • Aliases
  • Keywords, such as if, for and more.
  • Functions
  • Built-ins like cd, type, and kill.
  • Scripts and executable programs, for which the shell searches in the directories listed in the PATH environment variable.

If you need to know the exact source of a command, do:

$ type kill
kill is a shell builtin

This is not the same as the external program:

/bin/kill

To list all the built-in commands use help.

/bin/bash

/bin/bash can be invoked at login time or explicitly from the command line. At login time the following script files will be executed:

  • /etc/profile, the default system-wide file
  • then the first of $HOME/.bash_profile, $HOME/.bash_login, or $HOME/.profile that exists

When a user explicitly invokes a bash shell (a non-login interactive shell), the following script files will be executed:

  • /etc/bash.bashrc if it exists
  • $HOME/.bashrc if it exists

The history of commands typed in the bash shell is stored in ~/.bash_history. A script is a list of commands and operations saved in a text file to be executed in the context of the shell. Bash scripts are typically used to set up your environment variables and more.

Overlay /bin/bash

Each time you execute a program a new process is created. When the program terminates, the process terminates and you get back your prompt. You can run a program in the background by appending '&' to the command.

myscript &

In some situations it is also possible to overlay the running bash process with exec [program]. This is useful when you don't need to get the prompt back; the login program, for example, is typically run by overlaying the bash process from which it was started.

exec login

Shell variables

[edit | edit source]

All local variables to the bash session can be viewed with set.

To declare a local variable, do:

VARNAME=foo

To unset a variable, do:

unset VARNAME

All the environment variables can be viewed with env. To declare a variable that will be seen by other shells use export.

export VARNAME=foo

or

VARNAME=foo
export VARNAME

An exported variable will be seen by the shells and processes started from the shell where it was declared (see the demonstration below). Here are some important variables:

  • HOME: Home directory of username logged in.
  • PATH: Command search path.
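
The difference between local and exported variables is easy to demonstrate with a child shell:

$ LOCAL=abc                        # Local variable, not exported
$ export GLOBAL=xyz                # Exported variable
$ bash -c 'echo $LOCAL $GLOBAL'    # Start a child shell
xyz                                # Only the exported variable is visible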

Man pages

[edit | edit source]

The online manuals describe most of the commands available in your system.

man mkdir
man cal

If you are looking for a key word in all the man pages, use the -k option.

man -k compress
apropos compress

The locations of the man pages can be set in the MANPATH variable.

echo $MANPATH
/usr/local/man:/usr/share/man:/usr/X11R6/man:/opt/gnome/man

Exercises

[edit | edit source]
  1. Get information on the useradd and userdel commands.
  2. Create two new accounts user1 and user2 and set the passwords to those accounts with the passwd command. As root lock the accounts and check if you can still log in.
  3. What is the command to concatenate files?
  4. Declare and initialize the following environment variables: NAME and LASTNAME. Use echo to print them out.
  5. Start a new bash (type bash) and check that you can still see those declared variables.
  6. Use exec to start a new bash session. Can you still see those declared variables?
  7. Use date to display the month.
  8. Add a new user named notroot with root's rights and lock the root account.


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to apply filters to text streams.

Key knowledge areas:

  • Send text files and output streams through text utility filters to modify the output using standard UNIX commands found in the GNU textutils package.

The following is a partial list of the used files, terms and utilities:

  • bzcat
  • cat
  • cut
  • head
  • less
  • md5sum
  • nl
  • od
  • paste
  • sed
  • sha256sum
  • sha512sum
  • sort
  • split
  • tail
  • tr
  • uniq
  • wc
  • xzcat
  • zcat

Pattern matching and wildcards

[edit | edit source]

Wildcards are pattern matching characters commonly used to find file names or text within a file. Common uses of wildcards are: locating file names that you don't fully remember, locating files that have something in common, and performing operations on multiple files rather than on individual ones.

The shell interprets these special characters:

! @ # $ % ^ & * ( ) { } [ ] | \ ; ~ ' " ` ?

The characters used for wildcard are:

?  *  [  ]  ~

If you use the wildcard characters the shell will try to generate a file from them. Try the following:

echo all files *

Special wildcard characters

?  Match any one character
*  Match any string
[abcfghz]  Match one character from the set
[a-z]  Match one character in the range
[!x-z]  Match one character not in the set
~  Home directory
~user  User's home directory

Examples:

?  One-character filenames only
[aA]???  Four characters, starting with a or A
~toto  Pathname of toto's home directory
[!0-9]*  All strings not starting with a number

What about these commands?

ls [a-z][A-Z]??.[uk]
ls big*
ls a???a
ls ??*

Shell and wildcards

[edit | edit source]

A shell command line can be a simple command or more complex.

ls -l [fF]*
ls *.c | more
ls -l [a-s]* | mail `users`

The shell expands wildcards before executing the command, and only unquoted wildcards are interpreted by the shell.
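
This is easy to demonstrate (the filenames shown are illustrative):

$ echo *          # The shell expands * to the filenames in the current directory
bin etc usr
$ echo '*'        # Quoting suppresses the expansion
*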

Quoting and Comments

[edit | edit source]

Quoting

[edit | edit source]

Quote to prevent the shell from interpreting the special characters and to make multiple words into one shell word.

  • 'string' - Nearly everything within the quote is literal:
echo 'He did it, "Why?"'
echo 'Because "#@&^:-)"'
echo '$VAR='Me
  • "string" - Like 'string', however it interprets $, \, !:
echo "What's happening?"
echo "I don't know but check this $ANSWER"
  • The backslash (\) treats the following character as literal:
echo \$VAR=Me
echo What\'s happening\?
  • How could we display the backslash? With the following line:
echo \\

Comments

[edit | edit source]

You can add comments in a command line or a script with the character #. A white space must immediately precede the #.

Examples:

echo $HOME # Print my Home directory
echo "### PASSED ###" # Only this part is a comment
echo The key h#, not g was pressed.

Commands

  • cat, tac: Concatenate files and print on the standard output, from beginning to end or end to beginning, respectively.
  • head, tail: Output the first and last part of files.
  • nl: Number lines of files.
  • wc: Print the number of lines, words, and bytes (in that order) in files.
  • cut: Remove sections from each line of files.
  • tr: Translate or delete character.
  • expand, unexpand: Convert tabs to spaces and space to tabs.
  • paste: Merge lines of files.
  • join: Join lines of two files on a common field.
  • uniq: Remove duplicate lines from a sorted file.
  • split: Split a file into pieces.
  • fmt: Simple optimal text formatter.
  • pr: Convert text files for printing.
  • sort: Sort lines of text files.
  • od: Dump files in octal and other formats.

Concatenate files

[edit | edit source]

To concatenate files, use cat.

cat [options] [files...]
tac [options] [files...]

The results are displayed to the standard output.

Common options:

-s: squeeze multiple blank lines into a single one.
-n: number all output lines.

Examples:

cat file  # Display file to the standard output.
cat chapter* # Display all chapters to standard output.
cat -n -s file # Display file with line number with single blank line.

To concatenate files in reverse order, use tac.

View the beginning and the end of a file

[edit | edit source]

To view only few lines at the beginning or at the end of a file, use head or tail.

head [options] [files...]
tail [options] [files...]

The results are displayed to the standard output.

Common options:

-n: number of lines to be displayed. (head and tail)
-c: number of bytes to be displayed (head and tail)
-f: follow: output appended data as the file grows. (tail)
-s #: with -f, check for new data every # seconds. (tail)

Examples:

head file # Display the first 10 lines of file.
head -n 2 file # Display the first 2 lines of file.
tail -c 10 file # Display the last 10 bytes of file.
tail -f -s 1 /var/log/messages # Display the last 10 lines of messages and check for new data every second.

Numbering file lines

[edit | edit source]

To add the line number to a file, use nl.

nl [options] [files...]

The results are displayed to the standard output.

Common options:

-i #: increment line number by #.
-b: numbering style:
   a: number all lines
   t: non-empty lines
   n: number no lines
-n: numbering format:
   rz: right justified
   ln: left justified.

Examples:

nl file # Add the line number in each line in the file.
nl -b t -n rz file # Add the line number to each non-empty line with zero-completed format.

Counting items in a file

[edit | edit source]

To print the number of lines, words and bytes of a file, use wc.

wc [options] [files...]

The results are displayed to the standard output.

Common options:

-c: print the size in bytes.
-m: print the number of characters.
-w: print the number of words.
-l: print the number of lines.
-L: print the length of the longest line.

Examples:

wc *.[ch] # Display the number of lines, words, and characters for all files .c or .h.
wc -L file # Display the size of the longest line.
wc -w file # Display the number of words.

Cutting fields in files

[edit | edit source]

To remove sections from each line of files, use cut.

cut [options] [files...]

The results are displayed to the standard output.

Common options:

-b #: Extract the byte at position #.
-f #: Extract the field number #.

Examples:

cut -b 4 file # Extract and display the 4th byte of each line of file. 
cut -b 4,7 file # Extract and display the 4th and 7th byte of each line.
cut -b -2,4-6,20- file # Extract bytes 1 to 2, 4 to 6, and 20 to the end of the line for each line of file.
cut -f 1,3 -d: /etc/passwd # Extract the username and ID of each line in /etc/passwd.

The default delimiter is TAB but can be specified with -d.

Character conversion

[edit | edit source]

To translate the standard input (stdin) to standard output, use tr.

tr [options] SET1 SET2

Common options: -d: delete the characters in SET1. -s: squeeze a sequence of repeated characters in SET1 into one.

Examples:

tr 'a' 'A' < file # Translate lowercase a to A
tr '[A-Z]' '[a-z]' < file # Translate uppercase to lowercase
tr -d ' ' < file # Delete all spaces from file

To convert tabs to spaces, use expand and to convert spaces to tabs, use unexpand.

expand  file
unexpand file

Line manipulation

[edit | edit source]

To paste multiple lines of files, use paste.

paste [options] [files...]

Common options:

-d #: delimiter: Use # for the delimiter.
-s: serial: paste one file at a time.

Examples:

paste f1 f2 # Display corresponding lines of f1 and f2 side by side.
paste -d: file1 file2 # Use ':' for the delimiter.

To join multiple lines of files, use join.

join file1 file2 

To remove duplicated lines, use uniq.

uniq [options] [files...]

Common options:

-d: only print duplicated lines.
-u: only print unique lines.

Examples:

uniq -cd file # Display only the duplicated lines, each prefixed with its count.

Splitting files

To split big files, use split.

split [options] file

Common options:

-l #: split every # lines.
-b #: split file in bytes or b for 512 bytes, k for kilobytes, m for megabytes.

Examples:

split -l 25 file  # Split file into 25-line files.
split -b 512 file # Split file into 512-byte files.
split -b 2b file  # Split file into 2*512-byte files.

Formatting for printing

[edit | edit source]

To format a file, use fmt.

fmt [options] [files...]

Common options: -w #: maximum line width.

Examples:

$ fmt -w 35 file # Display lines with a maximum width of 35 characters.

To format a file for a printer, use pr.

pr [options] [files...]

Common options: -d: double space.

Examples:

$ pr -d file # Format file with double-spacing.

Sort lines of text files

[edit | edit source]

To sort the lines of the named files, use sort.

sort [options] file

The results are displayed to the standard output.

Common options:

-r : Reverse
-f : Ignore case
-n : Numeric
-o file: Redirect output to file
-u : No duplicate records
-t ';' : Use ';' as the delimiter, rather than tab or space.

Examples:

sort file -r
sort file -ro result

Binary file dump

[edit | edit source]

To dump a binary file, use od.

od [options] file

The results are displayed to the standard output and start with an offset address in octal format.

Common options:

-c: each byte as a character
-x: 2-byte in hex
-d: 2-byte in decimal
-X: 4-byte in hex
-D: 4-byte in decimal

Examples:

$ od -cx /bin/ls
0000000 177   E   L   F 001 001 001  \0  \0  \0  \0  \0  \0  \0  \0  \0
       457f 464c 0101 0001 0000 0000 0000 0000
0000020 002  \0 003  \0 001  \0  \0  \0     224 004  \b   4  \0  \0  \0
       0002 0003 0001 0000 9420 0804 0034 0000
0000040   °   ²  \0  \0  \0  \0  \0  \0   4  \0      \0 006  \0   (  \0
       b2b0 0000 0000 0000 0034 0020 0006 0028
0000060 032  \0 031  \0 006  \0  \0  \0   4  \0  \0  \0   4 200 004  \b
       001a 0019 0006 0000 0034 0000 8034 0804

Exercises

[edit | edit source]
  1. Use wildcard characters and list all filenames that contain any character followed by 'in' in the /etc directory.
  2. Use wildcard characters and list all filenames that start with any character between 'a' and 'e' that have at least two more characters and do not end with a number.
  3. Use wildcard characters and list all filenames of exactly 4 characters and all filenames starting with an uppercase letter. Do not descend into any directory found.
  4. Use wildcard characters and list all files that contain 'sh' in /bin.
  5. Display your environment variable HOME preceded by the string "$HOME value is:"
  6. Display the contents of $SHELL with two asterisk characters before and after it.
  7. How would you display the following string of characters as-is with echo, using double quotes and \?
    • @ # $ % ^ & * ( ) ' " \
  8. Compose echo commands to display the following two strings:
    • That's what he said!
    • 'Never Again!' he replied.
  9. Display the number of words in all files that begin with the letter 'h' in the /etc directory.
  10. How would you send a 2 MB (megabyte) file on two 1.44 MB floppies? How would you put the split file back together?
  11. What is the command to translate the ':' delimiter in /etc/passwd to '#'?


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 4

Description: Candidates should be able to use basic Linux commands to manage files and directories.

Key Knowledge Areas:

  • Copy, move and remove files and directories individually.
  • Copy multiple files and directories recursively.
  • Remove files and directories recursively.
  • Use simple and advanced wildcard specifications in commands.
  • Use find to locate and act on files based on type, size, or time.
  • Usage of tar, cpio and dd.

The following is a partial list of the used files, terms and utilities:

  • cp
  • find
  • mkdir
  • mv
  • ls
  • rm
  • rmdir
  • touch
  • tar
  • cpio
  • dd
  • file
  • gzip
  • gunzip
  • bzip2
  • xz
  • unxz
  • file globbing

Create and Remove directories

[edit | edit source]

To create a directory, use mkdir.

mkdir [options] dir

Common options:

-m  mode: set permission mode. Default use umask.
-p  parent: create parent directory as needed.

Examples:

mkdir -m 0700 bin
mkdir -p bin/system/x86

To delete an empty directory, use rmdir.

rmdir [options] dir

Common options:

-p  parent: also remove empty parent directories.

Examples:

rmdir  tmp
rmdir -p bin/system/x86

Copy files and directories

[edit | edit source]

To copy one file to another, or to a directory, use cp.

cp [options] source target

Source and target can be a file or a directory.

Common options:

-i  interactive: prompt to overwrite
-r  recursive: copy the subdirectories and contents. Use -R for special files.
-f  force: force the overwriting

The default is to silently clobber the target file. (This does not alter the source.)

Examples:

cp *.[a-z] /tmp
cp readme readme.orig
cp  ls /bin
cp -ri bin/* /bin

Move & Rename files

[edit | edit source]

To rename a file or directory or to move a file or directory to another location, use mv.

mv [options] source target

Source and target can be a file or a directory.

Common options:

-i  interactive: prompt to overwrite
-f  force: force the overwriting
-v  verbose

The default is to silently clobber the target file.

Examples:

mv *.[a-z] /tmp
mv readme readme.orig
mv ls /bin
mv -fi bin/* /bin

Listing filenames and information

[edit | edit source]

The command to list files in the current directory is ls.

ls [options] [filenames]

Common options are:

-l For a long format
-F Append a file type character
-a All files, including hidden files
-R Recursive listing of subtree
-d Do not descend into directory

The ls command is equivalent to the dir command on DOS.

Examples of ls output:

$ ls -l /bin/ls
-rwxr-xr-x    1   root  root  46784 mar 23  2002 /bin/ls
$ ls -ld /bin
drwxr-xr-x    2 root   root   2144 nov  5 11:55 /bin
$ ls -a .
.bash_history .bash_profile .bashrc ...
$ ls -dF /etc .bashrc /bin/ls
.bashrc  /bin/ls*  /etc/

File types

[edit | edit source]

The long format means:

$ ls -l /etc/hosts    #List a long format of the file hosts
-rw-r--r-- 1 root root 677 Jul 5 22:18 /etc/hosts

File content and location

Linux/Unix does not distinguish files by filename extension the way Windows does. To determine the content of a file, use file.

$ file /etc .bashrc /bin/ls /dev/cdrom
/etc:       directory
.bashrc:    ASCII English text
/bin/ls:    ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), stripped
/dev/cdrom: symbolic link to /dev/hdc

To determine if a command is a built-in shell command or a program, use type, and use which to find its location.

$ type cp cd ls which type
cp is /bin/cp
cd is a shell builtin
ls is aliased to `ls $LS_OPTIONS'
which is aliased to `type -p'
type is a shell builtin
$ which cut
/usr/bin/cut

Creating and using filenames

[edit | edit source]

Filenames can be created with:

  • I/O redirection
cat chapter1 chapter2 > book
  • An editor, such as vi.
vi mynewfile
  • Many of the Unix utilities
cp file newfile
  • An application
netscape
  • The touch command, which creates empty files (or updates the "date modified" of existing files)
touch memo

A valid filename may have (or be):

  • Maximum 255 characters per filename
  • Any character except forward '/'
  • Recommended alphanumeric characters as well as plus, minus, and underscore characters.
  • Case sensitive ('A' and 'a' are treated differently)

Characters to avoid

  • Hyphen character.
touch my-file -lt
  • White space.
touch more drink
touch "more drink"
  • Most other special characters !@#$%^&*():;"'}{|\<,>.?~`
touch memo*

Remove files or directories

[edit | edit source]

To remove files or subtree directories, use rm.

rm [options] files

Files can be a file or a directory.

Common options:

-i  interactive: prompt for each removal
-f  force: ignore nonexistent files, never prompt
-r  recursive: remove subtree directories and contents

There is no 'unremove' or 'undelete' command.

Examples:

rm *.[a-z]
rm readme readme.orig
rm  ls /bin
rm -rfi /bin
cd; rm -rf *  .* # This removes all files in the home directory of the current user, as well as those in the subdirectories therein!

Locating files in a subtree directory

[edit | edit source]

To search for a file in a subtree directory, use find.

find [subtrees] [conditions] [actions]

The command can take multiple conditions and will search recursively in the subtree.

Some possible conditions are:

-name [FNG]  # Search for the FNG name
-type c      # Type of file [bcdfl]
-size [+-]#  # Has a +- size in blocks (c: bytes, k: kilobytes)
-user [name] # Own by user
-atime [+-]# # Accessed days ago.  +n means the file has not been accessed for the last n days.  -n means the file has been accessed in the last n days.
-mtime [+-]# # Modified days ago
-perm nnn    # Has permission flags nnn 

Some possible actions are:

-print  # Print the pathname
-exec cmd {} \; # Execute cmd on the file
-ok cmd {} \;   # Same as -exec but ask first

Examples:

find . -name '*.[ch]' -print
find /var /tmp . -size +20 -print
find ~ -type c -name '*sys*' -print
find / -type f -size +2c -exec rm -i {} \;
find / -atime -3 -print
find ~jo ~toto -user chloe -exec mv {} /tmp \;

To locate a binary, source file, or man page, use whereis.

whereis [options]

Common options:

-b: Search only for binaries.
-m: Search only for manual sections.
-s: Search only for sources.

Examples:

$ whereis host
host: /usr/bin/host /etc/host.conf /usr/share/man/man1/host.1.gz
$ whereis -m host
host: /usr/share/man/man1/host.1.gz

To locate a command in one of the directories listed in the PATH variable, use which.

$ which -a ls
/bin/ls

The -a will look for all possible matches in PATH, not just for the first one.

Exercises

[edit | edit source]
  1. Compose an interactive command to remove all .tmp files in your home directory. Respond y to every prompt.
  2. List all the files in the user's home directories ending with .pdf that are bigger than 50 blocks and have not been accessed for a month.
  3. Create a file file.h that will contain all the filenames ending with .h found in the /usr directory.
  4. Do a touch on all the c files found in /usr/src/packages directory.
  5. What are the default permissions when you create a new file and a new directory?
  6. How would you create a new file or directory that contains a space in the filename? (Example: 'new dir')
  7. What is the command to remove all the files of types char and block in your home directory?
  8. How would you find the location of the program find?
  9. Delete all files in /tmp which are not owned by root and have not been accessed for a week.


Detailed Objective

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 4

Description:
Candidates should be able to redirect streams and connect them in order to efficiently process textual data. Tasks include redirecting standard input, standard output and standard error, piping the output of one command to the input of another command, using the output of one command as an argument for another command and sending output to both standard output and a file.

Key Knowledge Areas:

  • Redirect standard input, standard output and standard error.
  • Pipe the output of one command to the input of another command.
  • Use the output of one command as an argument to another command.
  • Send output to both standard output and a file.

The following is a partial list of the used files, terms and utilities:

  • tee
  • xargs

Standard input and standard output

[edit | edit source]

For each command executed in a terminal, there are three channels: standard input, file descriptor 0 (default: keyboard); standard output, file descriptor 1 (default: terminal); and standard error, file descriptor 2 (default: terminal).

Each channel can also be identified by an address: &0 for input, &1 for output, and &2 for errors.

Each channel [n] can be redirected:

[n]< file  : Read standard input from file (default n is 0).
[n]> file  : Send standard output to file, overwriting it if it exists, thus clobbering the file (default n is 1).
[n]>> file : Append standard output to file (default n is 1).
<<word     : Read standard input until word is reached.
`command`  : Substitute the command with its output.

Examples:

$ pwd > file  # out=file, in=none, error=terminal
cat chap* >book # out=book, in=none, error=terminal
mv /etc/* . 2>error # out=terminal, in=none, error=error
echo end of file >> book # out=book, in=none, error=terminal
set -o noclobber # Shell does not clobber existing files.
ls > list 2>&1 # ls and errors are redirected to list.
ls 2>&1 > list # Errors are redirected to standard output and ls output is redirected to list.
cat `ls /etc/*.conf` > conffile 2>>/tmp/errors

Concatenate all the configuration files from the /etc directory into conffile and append errors to the file /tmp/errors.
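
The <<word form (a "here document") reads standard input until a line containing only word is reached, as in this sketch:

$ cat << EOF > note.txt
This text becomes the
standard input of cat.
EOF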

Redirecting with pipes

Pipes are an efficient way to apply multiple commands concurrently.

command1 | command2

The standard output of command1 will be piped to the standard input of command2. The standard error is not piped.

Examples:

ls -l /dev | more
ls -l /etc/*.conf | grep user | grep 500
ls -l /bin | mail `users`

To redirect the standard output to a file and to the terminal at the same time, use tee.

ls -l /dev | tee file
ls -l /etc | tee -a file # Append to the file

Building arguments

The xargs utility constructs an argument list for a command using standard input.

xargs [options] [command]

The xargs command creates an argument list for command from standard input. It is typically used with a pipe.

Common options: -p: prompt the user before executing each command.

Examples:

ls f* | xargs cat # Print to standard output the content of all files starting with f.
find ~ -name 'proj1*' -print | xargs cat

Search the home directory for files whose names start with proj1 and pass them as arguments to cat.
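
Filenames containing spaces can break a plain pipe into xargs. GNU find and xargs handle this with NUL-separated names, a common idiom:

find ~ -name 'proj1*' -print0 | xargs -0 cat  # Safe for filenames with spaces or newlines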

Use the /dev/null device file to discard output or error messages.

Try the following:

grep try /etc/*
grep try /etc/* 2> /dev/null
grep try /etc/* > /dev/null 2> /dev/null

Exercises

[edit | edit source]

1. Create a file list.bin that will contain all the filenames from the /bin directory.
2. Write a command that will append the list of files from /usr/local/bin to the file named list.bin and discard any error output.
3. Split your list.bin file into files that are 50 lines long and remove list.bin.
4. From the split files, recreate list.bin (but in reverse order).
5. Simplify the following commands:

ls *.c | xargs rm
ls [aA]* | xargs cat
cat `ls *.v` 2>/dev/null

6. Use find to do the following command:

more `ls *.c`

7. Write a command that will create a file list.sbin with the contents of /sbin and at the same time display it to standard output.
8. Create a file whose filename includes the creation time.
9. Create a file containing all the filenames with extension .conf from the /etc directory, in reverse order.


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 4

Description:
Candidates should be able to perform basic process management.

Key Knowledge Areas:

  • Run jobs in the foreground and background.
  • Signal a program to continue running after logout.
  • Monitor active processes.
  • Select and sort processes for display.
  • Send signals to processes.

The following is a partial list of the used files, terms and utilities:

  • &
  • bg
  • fg
  • jobs
  • kill
  • nohup
  • ps
  • top
  • free
  • uptime
  • pgrep
  • pkill
  • killall
  • watch
  • screen
  • tmux

Create processes

[edit | edit source]

A running application is a process. Every process has: a process ID, a parent process ID, a current directory PWD, a file descriptor table, a program which it is executing, environment variables (inherited from its parent process), stdin, stdout, stderr (standard error), and possibly even more (optional) traits.

Bash is a program that becomes a process when executed. Each time you execute a command in a shell, a new process is created, except for built-in shell commands, which run in the shell's own context. Use type to check whether a command is a shell built-in.

Example:

type cp ls which type

Monitor processes

[edit | edit source]

To monitor the processes in real-time, use top.

top - 9:20am  up  2:48,  4 users,  load average: 0.15, 0.13, 0.09
78 processes: 75 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 15.3% user,  0.3% system,  0.0% nice, 84.2% idle
Mem:   254896K av,  251204K used,    3692K free,       0K shrd,   27384K buff
Swap:  514072K av,       0K used,  514072K free                  120488K cached
 PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
1517 rarrigon   0   0 40816  39M 17372 R    15.0 16.0   2:59 mozilla-bin
1727 rarrigon  19   0   988  988   768 R     0.3  0.3   0:00 top
   1 root      20   0   220  220   188 S     0.0  0.0   0:04 init
   2 root      20   0     0    0     0 SW    0.0  0.0   0:00 keventd

RSS is the total amount of physical memory used by the task. SHARE is the amount of shared memory used by the task. %CPU is the task's share of the CPU time. %MEM is the task's share of the physical memory. Once top is running it is also possible to execute interactive commands:

  • Type N to sort tasks by pid.
  • Type A to sort tasks by age (newest first).
  • Type P to sort tasks by CPU usage.
  • Type M to sort tasks by memory usage.
  • Type k to kill a process (NOTE: You will be prompted for the process' pid).

Once the system is up and running from a terminal it is possible to see which processes are running with the ps program.

To display a long format of all the processes in the system, use the following:

ps -Al
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
004 S     0     1     0  0  80   0 -   112 do_sel ?        00:00:04 init
004 S     0   381     1  0  80   0 -   332 do_sel ?        00:00:00 dhcpcd
006 S     0  1000     1  0  80   0 -   339 do_sel ?        00:00:00 inetd
044 R     0  1524  1222  0  79   0 -   761 -      pts/3    00:00:00 ps

The ps program will display all the processes running and their PID numbers and other information. To see a long format of the processes in your login session, use:

ps -l
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
000 S   500  1154  1139  0  80   0 -   724 wait4  pts/1    00:00:00 bash
002 S   500  1285  1283  0  77   0 - 24432 wait_f pts/1    00:00:00 soffice.bin
040 R   500  1442  1435  0  79   0 -   768 -      pts/4    00:00:00 ps
F: process flags. 002: being created, 040: forked but didn't exec, 400: killed by a signal.
S: process states. R: runnable, S: sleeping, Z: zombie.
UID: user ID, PID: process ID, PPID: parent process ID, C: CPU utilization, PRI: priority,
NI: nice value, SZ: size (in pages) of the process image, WCHAN: kernel function in which the process sleeps.

Kill processes

[edit | edit source]

The ps program will display all the processes running and their PID numbers. Once the PID is known, it is possible to send signals to the process:

  • SIGSTOP to stop a process.
  • SIGCONT to continue a stopped process.
  • SIGKILL to kill a process.

The program to send a signal to a process is called kill.

kill -SIGKILL [pid] # Send SIGKILL to the process, by signal name
kill -63 [pid]      # Send signal number 63 to the process
kill -l             # List all available signals

By default a process is started in the foreground and it is the only one to receive keyboard input. Use CTRL+Z to suspend it.

To start a process in the background, append & to the command line.

bash &
xeyes &

In a bash process it is possible to start multiple jobs. The command to manipulate jobs is jobs.

jobs      # List all the active jobs
bg %job   # Resume job in background
fg %job   # Resume job in foreground
kill %job # Kill background job
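
A minimal job-control session might look like this (the job number [1] and the PID printed by the shell are illustrative only):

sleep 100 &  # start a job in the background; bash prints e.g. [1] 1234
jobs         # [1]+ Running    sleep 100 &
fg %1        # bring job 1 into the foreground
             # press CTRL+Z to suspend it, then:
bg %1        # resume job 1 in the background
kill %1      # terminate the job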

When bash is terminated all processes that have been started from the session will receive the SIGHUP signal. This will by default terminate the process.

To prevent the termination of a process, the program can be started with the nohup command.

nohup mydaemon
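
The objectives also list pgrep, pkill, killall and watch. A few typical invocations (the process names used here are only examples):

pgrep -u root sshd   # list PIDs of processes named sshd owned by root
pkill -HUP syslogd   # send SIGHUP to every process named syslogd
killall -9 mydaemon  # send SIGKILL to every process called mydaemon
watch -n 2 'ps -l'   # re-run ps -l every 2 seconds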

Exercises

[edit | edit source]

1. How can you control the CPU usage of PID 3196?


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to manage process execution priorities.

Key Knowledge Areas:

  • Know the default priority of a job that is created.
  • Run a program with higher or lower priority than the default.
  • Change the priority of a running process.

The following is a partial list of the used files, terms and utilities:

  • nice
  • ps
  • renice
  • top

Priorities

[edit | edit source]

To start a command with an adjusted priority, use nice.

nice -n +2 [command]
nice -n -19 [command]

The program nice changes the base time quantum of the scheduler. This means it informs the scheduler of how important a process is, which is used as a guide to how much CPU time to give it.

For example, if you wanted to perform another task (such as listening to music) while ripping another CD, you could use the following:

nice -n +5 oggenc

If you were listening to music, you would not hear any skips in the playback, as the scheduler “knows” that the oggenc process is less important.

The values range from -20 (highest priority) to +19 (lowest priority). The default value is 0. Only root can set a value below zero. To modify the priority of a running program, use renice.

renice +1 -u root # Change the priority for all root processes.
renice +2 -p 193  # Change the priority for PID 193
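
One way to see the effect is to start a low-priority job and inspect its nice value with ps; a quick sketch (sleep is just a stand-in for a real workload):

nice -n 10 sleep 300 &   # start a background job with nice value 10
ps -o pid,ni,comm -p $!  # $! holds the PID of the last background job
renice -5 -p $!          # as root, raise its priority to nice value -5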

Exercises

[edit | edit source]
  1. Which user and root processes are using most of the memory?
  2. Same start as 2), but make the printout stop for 3 s and continue for 1 s, repeatedly.
  3. Make a shell script to renice all processes called apache to a value of 19.
  4. Do a print from ps formatted as: “username”, “command”, “nice value”
  5. Kill all the processes called “bash” that are owned by user polto.
  6. Open two terminals. In one terminal type the following, and from the other terminal see that you can stop and continue the print out:
while [ 1 ]
do
    echo -n "The date is: "
    date
done


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to manipulate files and text data using regular expressions. This objective includes creating simple regular expressions containing several notational elements as well as understanding the differences between basic and extended regular expressions. It also includes using regular expression tools to perform searches through a filesystem or file content.

Key Knowledge Areas:

  • Create simple regular expressions containing several notational elements.
  • Understand the differences between basic and extended regular expressions.
  • Understand the concepts of special characters, character classes, quantifiers and anchors.
  • Use regular expression tools to perform searches through a filesystem or file content.
  • Use regular expressions to delete, change and substitute text.

The following is a partial list of the used files, terms and utilities:

  • grep
  • egrep
  • fgrep
  • sed
  • regex(7)

Pattern matching

[edit | edit source]

There are two kinds of pattern matching:

  • Wildcards (File Name Generation)
  • Regexp (Regular Expression)

Wildcard characters are expanded by the shell to generate filenames in the current directory or its subdirectories. When the characters *, ?, [ - ], ~, and ! are used in a regexp, they no longer generate filenames; they take on their regular-expression meanings instead.

Some of the utilities that use regexp are:

  • grep, egrep
  • vi
  • more
  • sed
  • Perl

The following limited (basic) regexp search patterns are understood by all utilities able to use regexp:

  • Any single char: .  —  Ab.a matches Abla, Abca, ...
  • One char from a set: [ ]  —  Ab[sd]a matches Absa or Abda only
  • One char from a range: [ - ]  —  Ab[a-z]a matches Abaa, Abba, ...
  • Not in set: [^ ]  —  Ab[^0-9]a matches Abaa, Abba, ...
  • 0 or more of the previous char: *  —  Ab*a matches Aa, Aba, Abba, ...
  • Begin line: ^  —  ^Aba matches Aba at the start of a line
  • End line: $  —  Aba$ matches Aba at the end of a line
  • Literal: \  —  Aba\$ matches Aba$

Example:

Ab[0-3]s
^Ab\^bA
[01]bin$
^..\\
[^zZ]oro

Combinations of these basic regexp patterns are also understood by all utilities using regexp:

  • Any string: .*  —  Ab.*a matches Abrahma, Abaa, ...
  • String from a set: [ ]*  —  th[aersti]* matches there, this, ...
  • Multiple ranges: [ - ][ - ]  —  Ab[0-2][a-c]a matches Ab0aa, Ab1aa, ...
  • Literal backslash: \\  —  \\[a-zA-Z]* matches \Beethoven

Examples:

Ab[0-3][a-z]s
...$
^[01]\^2
[0-9][a-z] \$
[a-zA-Z]*
^[^c-zC-Z]*
^[a-zA-Z0-9]$

Modifier patterns restrict how many times, or where, the preceding pattern may match:

  • Exactly m times: \{m\}  —  b[0-9]\{3\} matches b911
  • m or more times: \{m,\}  —  b[0-9]\{2,\} matches b52
  • m up to n times: \{m,n\}  —  b[0-9]\{2,4\} matches b1234
  • Beginning of word: \<  —  \<wh matches wh in where
  • End of word: \>  —  [0-9]\> matches 1 in bin01

To find text in a file, use grep.

grep [options] [string] [files]

It is best to quote the string to prevent misinterpretation.

Common options:

  • -i: Ignore case.
  • -E: Use extended regular expressions.
  • -l: List only the names of files containing at least one match.
  • -c: Display only the count of matching lines.
  • -n: Also display line numbers.
  • -v: Invert the match; select non-matching lines.

Examples:

grep host /etc/*.conf
grep -l '\<mai' /usr/include/*.h
grep -n toto /etc/group
grep -vc root /etc/passwd
grep '^user' /etc/passwd
grep '[rR].*' /etc/passwd
grep '\<[rR].*' /etc/passwd
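
The difference between basic and extended regular expressions is mostly notational; in basic regexps the interval and alternation operators need backslashes or are unavailable. For example (file is a placeholder name):

grep 'ab\{2,3\}c' file   # basic regexp: interval braces must be escaped
grep -E 'ab{2,3}c' file  # extended regexp: the same match without backslashes
egrep 'cat|dog' file     # alternation requires extended syntax (egrep = grep -E)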

To apply a command on a stream, use sed.

sed [address1][,address2][!]command[options] [files...]

The program sed will apply a command from address1 to address2 in a file. The address1 and address2 format is a regular expression.

The sed program is a noninteractive editing tool.

Examples:

sed '1,3s/aa/bb/g' file               # Replace in file from lines 1 to 3 'aa' with 'bb'.
sed '/here/,$d' file                  # Delete line from here to the end.
sed '/here/d' file                    # Delete lines including the word 'here'.
sed '1,/xxx/p' file                   # Print lines from 1 to the first line matching 'xxx' (use sed -n to print only those lines).
sed '/ll/,/ff/!s/maison/house/g' file # In file replace words 'maison' with 'house' excluding lines from ll to ff.
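
Two further sed idioms worth knowing are backreferences and the -n option (the filenames here are placeholders):

sed 's/\(user\)name/\1id/' file  # \1 reuses the text matched by \(user\)
sed -n '/error/p' logfile        # with -n, only the matching lines are printed
sed 's/aa/bb/2' file             # replace only the second match on each line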

Exercises

[edit | edit source]
  1. Process your bookmarks.html file to produce a list containing just the web sites' titles in a file called mywebsites.txt.
  2. Copy all the files from /etc into your home directory etc/. Display the contents of all the *.conf files by replacing the word 'host' with 'machine'.
  3. Display the contents of all the *.conf files that don't contain the word 'root'. What is the command using grep and sed?
  4. Print out all the group names that root belongs to.
  5. List all the group names that are 4 or 5 characters long.
  6. List all the files that contain character lines without spaces (blank lines).
  7. List in the etc/ directory all the files that contain numerical characters.
  8. Print with ls only the directory names in /.
  9. Do “ps -aux” and replace user r_polto with root and print it to a file called new_process.txt
  10. List all processes called 'apache' that are owned by usernames starting with “p” or “P”.


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to edit text files using vi. This objective includes vi navigation, vi modes, inserting, editing, deleting, copying and finding text. It also includes awareness of other common editors and setting the default editor.

Key Knowledge Areas:

  • Navigate a document using vi.
  • Understand and use vi modes.
  • Insert, edit, delete, copy and find text in vi.
  • Awareness of Emacs, nano and vim.
  • Configure the standard editor.

The following is a partial list of the used files, terms and utilities:

  • vi
  • /, ?
  • h,j,k,l
  • i, o, a
  • d, p, y, dd, yy
  • ZZ, :w!, :q!
  • EDITOR

When using the X Window System, you can use mouse-oriented editors such as xedit. In a cross-development environment, users use their favorite editor. On a non-windowing system, you only need a keyboard editor such as vi. The vi editor on Linux is the same as on any Unix system. vi has two modes:

  • Command mode: Anything you type will be interpreted as a command
  • Input mode: Anything you type will be inserted into the file

To transition from Command mode to Input mode, use the i, I, a, A, o, and O keys. To transition from Input Mode to Command Mode, use the ESC key.

The default starting mode is the Command mode.

The configuration file .exrc can be created in your home directory to set up some vi behaviors.

set ignorecase # vi will not be case-sensitive
set tabstop=3  # tabs are displayed three columns wide
set ai         # auto indent
set nu         # show line numbers

To perform basic file editing using vi, use the following keys:

  • Move cursor
    • l one space right
    • h one space left
    • j one line down
    • k one line up
    • $ end of line
    • ^ start of line
    • w next word
    • e end of word
  • Enter Input Mode
    • i before cursor
    • I at start of line
    • a after cursor
    • A at end of line
    • o open line below
    • O open line above
  • Delete
    • dw delete word
    • dd delete line
    • D delete to end of line
    • x delete char at cursor
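
The objective's list of utilities also includes search, copy/paste and exit commands; in brief:

  • Search
    • /pattern search forward (n repeats the search)
    • ?pattern search backward
  • Copy and paste
    • yy yank (copy) the current line
    • p paste below the cursor
  • Save and quit
    • :w! force-write the file
    • :q! quit without saving
    • ZZ save and exit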

Exercises

[edit | edit source]
  1. Use vi from any directory to begin editing an empty buffer.
  2. Enter a few lines of text into this buffer.
  3. Save the contents of the buffer to a file in that directory.
  4. Open the file again with vi.
  5. Create a new line beneath what you typed earlier. (Without using i from command mode.)
  6. Exit vi without saving these changes.


Devices, Linux Filesystems, Filesystem Hierarchy Standard

[edit | edit source]

Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description: Candidates should be able to configure disk partitions and then create filesystems on media such as hard disks. This includes the handling of swap partitions.

Key Knowledge Areas:

  • Manage MBR and GPT partition tables
  • Use various mkfs commands to set up partitions and create various filesystems such as:
    • ext2/ext3/ext4
    • XFS
    • VFAT
    • exFAT
  • Basic feature knowledge of Btrfs, including multi-device filesystems, compression and subvolumes.

The following is a partial list of the used files, terms and utilities:

  • fdisk
  • gdisk
  • parted
  • mkfs
  • mkswap

Partitions

[edit | edit source]

Media can be divided into partitions. Partitions are usually created at installation time but can also be created with the fdisk program or other utilities. This will divide the media into partitions where different filesystems can be built and different operating systems can be installed.

IDE is recognized as follows:

  • Primary Master:
/dev/hda  : Full disk
/dev/hda1: First partition
/dev/hda2: Second partition
  • Primary Slave: /dev/hdb
  • Secondary Master: /dev/hdc
  • Secondary Slave: /dev/hdd

SCSI is recognized as follows:

  • ID1:
/dev/sda: Full disk
/dev/sda1: First partition
  • ID2: /dev/sdb

Personal computer systems do not support more than four primary partitions; to overcome this limitation, extended partitions are used.

  • Hard Disk
/dev/sda1 : First primary
/dev/sda2 : Second Primary
/dev/sda3 : Third Primary
/dev/sda4 : Extended Partition
/dev/sda5 : First extended/Logical
/dev/sda6 : Second extended/Logical

USB and FireWire disks are recognized as SCSI disks.

Once the disk is partitioned, it is possible to build a filesystem on each partition.
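
Before creating filesystems it is useful to inspect the existing partition table; the utilities listed in the objectives can all do this (replace /dev/sda with your actual disk):

fdisk -l /dev/sda      # print the MBR partition table
gdisk -l /dev/sda      # the same information for a GPT disk
parted /dev/sda print  # parted understands both MBR and GPT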

Filesystems

[edit | edit source]

Filesystems exist to allow you to store, retrieve and manipulate data on a medium. Filesystems maintain an internal data structure (meta-data) that keeps all of your data organized and accessible. The structure of this meta-data imparts the characteristics of the filesystem. A filesystem is accessed by a driver through the organized meta-data structure. When Linux boots it reads in /etc/fstab all the filesystems that need to be mounted and checks if they are in a usable state.

When a power failure occurs Linux won't be able to unmount the filesystem properly and some data in the cache won't be synchronized on the media.

As such, the meta-data may be corrupted.

Once you reboot the system, it will detect this and run fsck over the full meta-data structure for a consistency check. This can take a very long time: from a few minutes to a few hours, in proportion to the media size. Journaling a filesystem means adding a new on-disk data structure called a journal. Before the driver makes any modification to the meta-data, it first writes a log of the pending operation into the journal.

Now, when a power failure occurs, there is only the need to check the journal. Journaling filesystem recovery is very fast: it just needs to go through the log and fix the latest operations. Recovery can take only a few seconds.

On cluster systems, journaling allows quick recovery of a shared partition belonging to a node that went down.

Linux filesystems

[edit | edit source]
  • ext2: Old, very stable Linux filesystem. Efficient for files larger than ~2-3K.
  • ext3: Journaling extension of ext2. It is possible to move a filesystem back and forth between ext2 and ext3.
  • ReiserFS: Journaling filesystem, 8-15 times faster than ext2 when manipulating small files.
  • XFS: A powerful journaling filesystem with quota and ACL support.
  • msdos: MS-Windows FAT filesystem type (mainly used for floppies).
  • vfat: MS-Windows FAT filesystem type (mainly used for large hard-disk partitions).
  • NTFS: MS-Windows journaling filesystem (read-only support on Linux).
  • SMBFS: A filesystem to mount Windows or Samba shares from Linux.
  • NFS: Network File System.

...

Filesystem tree

[edit | edit source]

A Linux file system has one top directory called root (/) where all subdirectories of the entire system are stored. A subdirectory can be another partition, a remote directory, or a remote partition accessible through the network with the NFS protocol.

Create filesystems

[edit | edit source]

To create a file system on a partition, use mkfs.

mkfs [options] -t [fstype] device [blocksize]

Common options:

-t fstype: File system type.
-c: Check the device for bad blocks before building the filesystem.

The full partition will be erased and organized for the requested type of filesystem. There is no undo command. Possible fstype values are: msdos, ext2, ext3, reiserfs, minix, xfs.

The blocksize argument allows you to customize the block size of your filesystem.

Examples:

mkfs -t msdos /dev/fd0
mkfs -t reiserfs /dev/hdd1  4096
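
The objectives also list mkswap, which prepares a swap partition rather than a filesystem; a minimal sketch (assuming /dev/sda2 is your swap partition):

mkswap /dev/sda2  # initialize the partition as swap space
swapon /dev/sda2  # enable it
swapon -s         # summarize the active swap areas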

Create extended filesystems

[edit | edit source]

To create an extended (ext2, ext3) filesystem on a partition, use mke2fs.

mke2fs [options] device [blocksize]

Common options:

-b: Specify the block size.
-c: Check the device for bad blocks before building the filesystem.
-j: Create the filesystem with an ext3 journal.
-L: Set the volume label for the filesystem.

With mke2fs it is also possible to store the journal information on another device. Examples:

mke2fs -b 2048 -L floppy /dev/fd0
mkfs -V
mke2fs 1.26 (3-Feb-2002) Using EXT2FS Library version 1.263

Monitoring disk usage

[edit | edit source]

To print the disk usage, use du.

du [options] [files...]

Common options:

-a: All files not just directories
-b: Print size in bytes
-c: Total
-h: Human readable format. (1K, 20M,...)

Examples:

$ du -ch Documents
112k    Documents/Cours/LPI101
4.0k    Documents/Cours/LPI102
4.0k    Documents/Cours/LPI201
4.0k    Documents/Cours/LPI202
124k    total
du -sk ~ # Sums up your total disk usage in kilobytes
du -ak ~ | sort -n | more # Display every file and its disk space in numerical order.

Filesystem disk space

[edit | edit source]

A filesystem is composed of a meta-data structure plus a list of blocks. To print the filesystem disk space usage, use df.

df [options] [files...]

Common options:

-a: All included filesystems with 0 blocks.
-t: Limit listing to a filesystem type.
-h: Human readable format. (1K, 20M,...)
-i: List inode information instead of block usage

Examples:

$ df -t reiserfs -h
Filesystem  1k-blocks      Used Available Use% Mounted on
/dev/hda3             28771528   3121536  25649992  11% /
$ df -t ext2 -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              15M  3.8M   10M  27% /boot
$ df -ih /boot
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/hda1        126K   402  125K    1% /boot

Exercises

[edit | edit source]


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to maintain a standard filesystem, as well as the extra data associated with a journaling filesystem.

Key Knowledge Areas:

  • Verify the integrity of filesystems.
  • Monitor free space and inodes.
  • Repair simple filesystem problems.

The following is a partial list of the used files, terms and utilities:

  • du
  • df
  • fsck
  • e2fsck
  • mke2fs
  • tune2fs
  • xfs_repair
  • xfs_fsr
  • xfs_db

Checking filesystems

[edit | edit source]

To check filesystems consistency, use fsck.

fsck [options] -t [fstype] device [fsck-options]

Common options:

-A: Go through the /etc/fstab file and try to check all file systems. Typically used at boot time from a script.
-t fslist: Specify the type of file system to be checked. With -A, only filesystems that match fslist are checked
-C: Display completion/progression bar.

Common fsck-options:

-a: Automatically repair.
-r: Interactively repair.

Examples:

fsck -t msdos /dev/fd0 -a
fsck -t reiserfs /dev/hda2 -r

Checking extended filesystems

[edit | edit source]

To check extended filesystems consistency, use e2fsck.

e2fsck [options] device

Common options:

-b: Use an alternative superblock.
-c: Run the badblocks program and mark all bad blocks found.
-f: Force checking even if the filesystem seems clean.
-a or -p: Automatically repair.
-y: Assume "yes" to all questions (non-interactive mode).

Examples:

e2fsck -ay /dev/fd0
e2fsck -f /dev/hda2


Dumping extended filesystems info

[edit | edit source]

To print the super block and blocks group information of an extended filesystem, use dumpe2fs.

dumpe2fs [options] device

Common options:

-b: print the bad blocks of the filesystem.
-h: Display only the superblock information.

Example:

dumpe2fs -h /dev/fd0
dumpe2fs 1.26 (3-Feb-2002)
Filesystem volume name:   floppy
Last mounted on:          <not available>
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              184
Block count:              1440
Reserved block count:     72
Free blocks:              1258
Free inodes:              168
First block:              1
Block size:               1024
First inode:              11
Inode size:               128
...

Tuning extended filesystems

[edit | edit source]

To tune an extended filesystem, use tune2fs.

tune2fs [options] device 

Common options:

-i interval[d|m|w]: Set the interval between filesystem checks (days, months or weeks).
-l: List the contents of the filesystem superblock.
-L: Set the volume label of the filesystem.

Examples:

tune2fs -L floppy /dev/fd0
tune2fs -l /dev/fd0
(Same output as dumpe2fs -h /dev/fd0)
tune2fs 1.26 (3-Feb-2002)
Filesystem volume name:   floppy
Block count:              1440
Reserved block count:     72
Free blocks:              1258
Free inodes:              168
First block:              1
Block size:               1024
First inode:              11
Inode size:               128
...

Exercises

[edit | edit source]
  1. Build an ext2 file system, with a block size of 2048 bytes, on a floppy.
  2. Change the label of the floppy to BACKUP.
  3. Try to add a journal on the floppy media.
  4. Use debugfs to validate your floppy file system information, and check when it was last accessed.
  5. Use watch to monitor the size when you copy a big file.
  6. Create a shell script to list all files on the floppy bigger than 100 Kb.
  7. Display file system usage for all MSDOS file systems.
  8. Which directory MUST exist in / to qualify this OS as Linux?
  9. What is the file system usage of /proc?


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to configure the mounting of a filesystem.

Key Knowledge Areas:

  • Manually mount and unmount filesystems.
  • Configure filesystem mounting on bootup.
  • Configure user-mountable removable filesystems.
  • Use of labels and UUIDs for identifying and mounting file systems.
  • Awareness of systemd mount units.

The following is a partial list of the used files, terms and utilities:

  • /etc/fstab
  • /media/
  • mount
  • umount
  • blkid
  • lsblk

Attach a filesystem

[edit | edit source]

The mount command serves to attach the file system found on some device to the big file tree.

mount [options]
mount [options] [-t vfstype] [-o options] device dir

If the device or directory is listed in /etc/fstab you can use the following:

mount [options] [-o options [,...]] device | dir

Normally only root has the privilege to mount devices unless it is specified in the /etc/fstab file. Examples:

# Print all the mounted filesystems (/etc/mtab).
mount
# Mount devices or dirs listed in /etc/fstab.
mount -a
# Mount /dev/hdc partition in read only mode without updating /etc/mtab.
mount -n -o ro /dev/hdc /mnt
# Allow a user to mount the CDROM if the following line is in /etc/fstab:
# /dev/cdrom /media/cdrom iso9660 ro,user,noauto,unhide
mount /media/cdrom 
mount /dev/cdrom
# Sync in realtime
mount -o sync /dev/sdb1 /mnt/usb
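
Filesystems can also be identified and mounted by label or UUID; blkid and lsblk from the objectives list help here (the UUID below is purely illustrative):

blkid /dev/sda1        # print the UUID, label and filesystem type
lsblk -f               # tree of block devices with their filesystems
mount -L backup /mnt   # mount the filesystem labelled 'backup'
mount UUID=0a1b2c3d-0000-0000-0000-000000000000 /mnt  # mount by UUID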

Detach a filesystem

[edit | edit source]

To detach a filesystem from the file tree, use umount.

umount [options]
umount [options] [-o options [,...]] device | dir

A busy filesystem cannot be unmounted. A filesystem is busy if, for example:

  • It contains open files.
  • It holds the working directory of a process.

Examples:

umount -a # Unmount devices or dirs listed in /etc/fstab.
umount /mnt # Unmount the filesystem attached to /mnt.
umount /media/cdrom  # Allow a user to unmount the CDROM if the following line is in /etc/fstab:
/dev/cdrom  /media/cdrom  iso9660  ro,user,noauto,unhide

File system information

[edit | edit source]

The file /etc/fstab contains all the file systems and related information that will be used when doing a mount -a (typically at boot time).

The file /etc/mtab is maintained by the mount program and keeps track of what is or isn't mounted. The /etc/fstab format is:

#Device     Mount point    Fs type  Options             Dump Pass
/dev/hda3   /              reiserfs defaults            1 2
/dev/hda1   /boot          ext2     defaults            1 2
/dev/cdrom  /media/cdrom   auto     ro,noauto,user,exec 0 0
usbdevfs    /proc/bus/usb  usbdevfs noauto              0 0
/dev/hda2   swap           swap     pri=42              0 0

Common options:

ro: read only
noauto: Don't mount automatically
exec: Can execute binary on the filesystem
suid: Allow to setuser bit
user: Allow a user to mount/unmount it
unhide: hidden file visible
async: All operations will be done asynchronously
default: rw, suid, dev, exec, auto, nouser, and async

Exercises

[edit | edit source]
  1. Create a line in /etc/fstab that allows any user to access the floppy disk. Check that you can mount the floppy and can create a file with touch.
  2. Do the following manipulation:
    • Create a ext2 file system on the floppy.
    • Mount the floppy.
    • Copy all the files /etc/*.conf into the floppy.
    • Unmount it. What's happening?
    • Mount it back and check that all the files are there.
    • Issue the following command:
    • tar cvf /dev/fd0 /etc/*.conf
    • Try to mount it back. What's happening?
    • Use tar to view the contents of the floppy.


Detailed Objective

[edit | edit source]

Weight: 1

Description:
Candidates should be able to manage disk quotas for users.

  • Key knowledge area(s):
    • Set up a disk quota for a filesystem.
    • Edit, check and generate user quota reports.
  • The following is a partial list of the used files, terms and utilities:
    • quota
    • edquota
    • repquota
    • quotaon

Quotas

[edit | edit source]

On a system, root can manage the usage of disk space per user and per filesystem. Two limits can be set up: the soft limit (soft =) specifies the maximum amount of disk usage a quota user is allowed to have, while the hard limit (hard =) specifies the absolute limit that a quota user cannot exceed. It is also possible to set up a grace period, after which the soft limit is enforced as well.

Setting up quotas for users

[edit | edit source]

1) The keyword usrquota and/or grpquota must be added in the /etc/fstab file for the partition in question.

/dev/fd0  /home/yann/mnt auto    rw,noauto,user,usrquota 0 0
/dev/hda5 /home     ext2    defaults,usrquota,grpquota 1 2

2) Create in the root of each filesystem the file aquota.user and/or aquota.group.

touch /mnt/aquota.user
touch /home/aquota.user
touch /home/aquota.group 
chmod 600  /mnt/aquota.user 
chmod 600  /home/aquota.user 
chmod 600  /home/aquota.group

Only root can do quota administration, and once the empty files have been created, disk quotas can be set such as:

  • Soft limits on the number of blocks and inodes (enforced after the grace period, if one is set).
  • Hard limits on the number of blocks and inodes.


3) Check the setting

quotacheck -v mnt
quotacheck: Scanning /dev/fd0 [/home/yann/mnt] done
quotacheck: Checked 6 directories and 1 files

4) Enable quota on the disk

quotaon -av
/dev/fd0 [/home/yann/mnt]: user quotas turned on

5) Customize the disk quota limits:

$ edquota -u yann
Disk quotas for user yann (uid 500):
Filesystem    blocks       soft       hard     inodes     soft     hard
/dev/fd0       15          0          0          4        0        0
$ edquota -g yann 
$ edquota -t
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
Filesystem             Block grace period     Inode grace period
/dev/fd0                      7days                  7days

List quotas

[edit | edit source]

To list quotas for a user or group, use quota.

quota [options] [user|group]

Common options:

-u: Default; print user quotas.
-g: Print group quotas for the groups of which the user is a member.
-q: Print a more terse message, containing only information on filesystems where usage is over quota.

Example:

quota -u yann

Display a quota report

[edit | edit source]

To display a quota report, use repquota.

repquota [options] [user|group]

Common options:

-a: Report on all filesystems indicated in /etc/mtab to be read-write with quotas.
-g: Report for group.

Example:

$ repquota /dev/fd0
*** Report for user quotas on device /dev/fd0
Block grace time: 7days; Inode grace time: 7days
                Block limits                   File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root   --       8       0       0              2     0     0
yann   --      15       0       0              4     0     0

Exercises

[edit | edit source]
  1. Set up a soft limit of 500M for any user whose home directory is in /home.
  2. Change the grace time to 0.
  3. Log as the user and check if the limitation works.


Use File Permissions To Control Access To Files

Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to control file access through the proper use of permissions and ownerships.

Key Knowledge Areas:

  • Manage access permissions on regular and special files as well as directories.
  • Use access modes such as suid, sgid and the sticky bit to maintain security.
  • Know how to change the file creation mask.
  • Use the group field to grant file access to group members.

The following is a partial list of the used files, terms and utilities:

  • chmod
  • umask
  • chown
  • chgrp

Changing file owner and group

[edit | edit source]

To change the owner of a file or directory, use chown.

chown yann mon_fichier.txt

To change the group of a file or directory, use chgrp.

chgrp dialout caller

The programs gpasswd and yast2 allow you to administer groups.

gpasswd [-A user,...] [-M user,...] group

-A: Add users with group administrator privileges.
-M: Add members in group.

Group administrators can add or delete members of the group:

gpasswd -d toto users
gpasswd -a toto users

Group administrators can set or remove the password for the group.

gpasswd users
gpasswd -r users

More privileges

[edit | edit source]

It is possible to give more privileges to a user when they execute a particular script or program by setting the set-uid or set-gid bit of the file.

If the bit is set, the process will inherit the permissions of the owner of the file, not the permissions of the user who runs it. To set the effective uid or gid, use chmod.

chmod 2640 [file] # (2) gid is inheritable for group.
chmod 4640 [file] # (4) uid is inheritable for user.

An example of such a program is /bin/passwd.
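
You can recognize the set-uid bit in an ls listing by the letter s in the user execute position; a quick way to audit a system for such programs (output will vary by system):

ls -l /bin/passwd                       # -rwsr-xr-x: the 's' is the set-uid bit
find / -perm -4000 -type f 2>/dev/null  # list all set-uid executables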

The sticky bit can also be set; historically it kept the program's text segment resident in RAM (chmod 1640 [file], where the leading 1 sets the sticky bit). On modern Linux the sticky bit is mainly used on directories such as /tmp, where it restricts deletion of files to their owners.


File and Directory Permissions

[edit | edit source]

The permission of a file or of a directory can be viewed with ls -l.

Examples of file permissions:

ls -l readme
-rwxrw---- 1 toto users 14 Jul 5 10:00 readme

This means read, write, and execute permissions for user toto, read and write permissions for members of group users, and no permissions for others. (0760)

ls -l /etc/hosts
-rw-r--r-- 1 root root 14 Jul 5 10:00 /etc/hosts

This means read and write permissions for user root, read permissions for members of group root and all others. (0644)

Examples of directory permissions:

ls -ld /bin
drwxr-xr-x 2 root root 4096 Jul 5 10:00 /bin

This means read, write, and execute permissions for user root, and read and execute permissions for members of group root and others. (0755)

ls -ld /home/toto
drwxr-xr-x 10 toto  users 4096 Jul 5 1:00 /home/toto

This means read, write, and execute permissions for user toto, and read and execute permissions for members of group users and others. (0755)

Default permissions

[edit | edit source]

The default permissions when creating a file are 0666, and when creating a directory 0777. Most systems restrict this with a umask value, usually set in the shell startup files. Generally the mask value is 022, which means write permission for group and others will be blocked. To check or change the mask value, do:

umask      # Display the current mask value
umask 066  # Set the mask to 066

Examples for file:

default: rw- rw- rw- (0666)
umask: 0 2 2 (0022) Block
result: rw- r-- r-- (0644)

Examples for directory:

default: rwx rwx rwx (0777)
umask: 0 2 2 (0022) Block
result: rwx r-x r-x (0755)

Calculating umasks

[edit | edit source]

Finding the correct umask is not all that easy of a process, but certainly doable. The final permission of a file is the result of a logical AND operation between the negation of the umask and the default permission. (The same applies to directories)

In order to visualize this, we translate the octal default permissions into binary form first:

 octal: 0666
 binary: 000 110 110 110
 octal: 0777
 binary: 000 111 111 111

Then we take the umask. This time we'll use 0027 for our umask, translate it into binary, and then invert (~) it.

 octal:  0027
 binary: 000 000 010 111
 ~:      111 111 101 000

Now, to get the actual permission for a file, we logically AND it with the default permissions and translate back into octal:

 default permission: 000 110 110 110
 ~ umask:            111 111 101 000
 logical AND:        000 110 100 000
 octal representation: 0640

And the same for directories:

 default permission: 000 111 111 111
 ~ umask:            111 111 101 000
 logical AND:        000 111 101 000
 octal representation: 0750
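
You can verify this calculation directly at a shell prompt; a short session (the owner, group and dates shown are illustrative):

$ umask 027
$ touch newfile; mkdir newdir
$ ls -ld newfile newdir
-rw-r-----  1 yann users    0 Jul  5 10:00 newfile
drwxr-x---  2 yann users 4096 Jul  5 10:00 newdir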

Changing file permissions

[edit | edit source]

To change permissions on a file or directory, use chmod. To overwrite the existing permissions, do:

chmod 0755 /tmp #rwx for user, rx for group and others

To add or remove some permissions without overwriting the existing ones, do:

chmod u+w readme  # Add write permission for user
chmod +r readme  # Add read permission for everybody
chmod -r readme  # Remove read permission for everybody
chmod u+x,g=r readme  # Add execution for user and set read for group
chmod u=rwx,go=rx readme  # Set read write and execution for user, read and execution for group and others

To change in recursive mode, use the -R option.

chmod -R +x /sbin/*


Exercises

[edit | edit source]

1) Write the command line by using letters with chmod to set the following permissions:

rwxrwxr-x :
rwxr--r-- :
r--r----- :
rwxr-xr-x :
rwxr-xr-x :
r-x--x--x :
-w-r----x :
-----xrwx :

2) Write the command line by using octal numbers with chmod to set the following permissions:

rwxrwxrwx :
--x--x--x :
r---w---x :
-w------- :
rw-r----- :
rwx--x--x :

3) With the following umask values what would be the files and directories creation permissions?

umask = 0027
File permissions:
Directory permissions:
umask = 0011
File permissions:
Directory permissions:
umask = 0541
File permissions:
Directory permissions:
umask = 0777
File permissions:
Directory permissions:

4) Create two user accounts

Login id: tst1, group users, with bash shell, home directory /home/tst1
Login id: tst2, group public, with bash shell, home directory /home/tst2
Set a password for both accounts.

Log in as tst1 and copy /bin/ls into tst1's home directory as myls. Change the owner of myls to tst1 and the permissions to 0710. What does this permission value mean?

Log in as tst2 and try to use /home/tst1/myls to list your current directory. Does it work?

Create in /etc/group and /etc/gshadow a new group labo with tst1 and tst2. Change the owner group of myls to labo.

Try again from tst2 account to execute /home/tst1/myls to list your current directory. Does it work?



Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be able to create and manage hard and symbolic links to a file.

Key Knowledge Areas:

  • Create links.
  • Identify hard and/or softlinks.
  • Copying versus linking files.
  • Use links to support system administration tasks.

The following is a partial list of the used files, terms and utilities:

  • ln
  • ls
Links

[edit | edit source]

Use a link when you want to create an additional pathname to a file, or to set a shorter or fixed pathname to a file.

To link one file to another, use ln:

ln [options] filename linkname
ln [options] filename linkdirectory

Common options:

-f: Force; clobber an existing link.
-s: Create a symbolic link.

The default link type is the hard link (ln without options). A hard link can only be created to an existing file on the same physical device; after creation, no visible association between the link name and the file name can be displayed.

Symbolic links are like shortcuts in Windows, in the sense that the file may be removed but the link will remain (although useless). Unlike in Windows however, a symbolic link can be created on a file that doesn’t exist yet. The association between the link name and the file name can be viewed with the ls command.

Linking to a file

[edit | edit source]

Symbolic and hard links can be displayed with ls -l. Symbolic links are indicated with an arrow: link_name -> real_filename.

$ ls -l /dev/midi
lrwxrwxrwx   1   root   root        6    Jul 4 21:50   /dev/midi -> midi00

Hard links are indicated by the link counter; a count of 3 means there are two additional names for the file (3-1=2 in this case).

$ ls -l readme
-rwxrwxrwx   3   yann   users       677  Jul 4 21:50   readme

To remove a link, use rm on the link name. Only the link will be removed, not the linked file.
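
A short session illustrating both link types (the inode numbers are illustrative; note that the hard link shares the inode of the original file):

$ touch readme
$ ln readme readme.hard     # hard link: a second name for the same inode
$ ln -s readme readme.soft  # symbolic link: a pointer to the name
$ ls -li readme*
655361 -rw-r--r-- 2 yann users 0 Jul 5 10:00 readme
655361 -rw-r--r-- 2 yann users 0 Jul 5 10:00 readme.hard
655362 lrwxrwxrwx 1 yann users 6 Jul 5 10:00 readme.soft -> readme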

Exercises

[edit | edit source]
  1. Create a directory etc and bin in your home directory.
  2. Copy all the files in recursive mode from /etc to your etc directory and do the same for /bin to bin.
  3. In your local etc directory, rename all files *.conf to *.conf.bak.
  4. Create in your home directory a symbolic link called dir that points to your local bin/ls. Check that dir executes ls.
  5. Remove the dir link. Is bin/ls still there?


Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)

Weight: 2

Description:
Candidates should be thoroughly familiar with the Filesystem Hierarchy Standard (FHS), including typical file locations and directory classifications.

Key Knowledge Areas:

  • Understand the correct locations of files under the FHS.
  • Find files and commands on a Linux system.
  • Know the location and purpose of important files and directories as defined in the FHS.

The following is a partial list of the used files, terms and utilities:

  • find
  • locate
  • updatedb
  • whereis
  • which
  • type
  • /etc/updatedb.conf

Filesystem Hierarchy Standard

[edit | edit source]

The Filesystem Hierarchy Standard (FHS) defines standard locations for different file types and has been adopted by many Unix and Linux distributions. The full description can be found in the FHS specification published by the Linux Foundation.

In essence, the FHS divides files based on two criteria:

Shareable/unshareable

  • shareable - files which can be stored on one host and used on others. These files do not contain any host-specific information. Some examples are binaries under /usr/ or home directories.
  • unshareable - files which should not be shared between hosts. For example, lockfiles under /var/run define the states of processes on a given host, so it would make no sense for this information to be shared in normal scenarios (this might differ for software which is aware of other servers sharing the same directory/filesystem).

Variable/static

  • static - these are libraries, binaries, documentation and other files which normally do not change without administrator intervention.
  • variable - these files can change during normal operation. One example is the files in the /var/run/ directory, which might change when services are started and stopped.

The table below summarizes the main distinctions between file types:

            shareable         unshareable
static      /usr, /opt        /etc, /boot
variable    /var/mail,        /var/run,
            /var/spool/news   /var/lock

The main directories which can be found under the / (root) filesystem in Linux are:
bin - essential command binaries, required for the system to boot
boot - static files of the boot loader. See Boot the System section for more information
dev - special device files
etc - host-specific system configuration
lib - essential shared libraries and kernel modules. Files within this directory are required during boot
media - mount point for removable media like CD-ROM or USB drives
mnt - mount point for mounting a filesystem temporarily
opt - add-on application software packages. Usually used by third party software
sbin - essential binaries used for system administration, required during boot
srv - data for services provided by this system
tmp - temporary files
var - variable data which includes logfiles and some other frequently changing files
usr - secondary hierarchy which might look like:

/usr/
├── bin
├── include
├── lib
├── lib64
├── local
├── sbin
├── share
└── src

The reason for having a separate /usr hierarchy is historical. In the past, disk space was very limited and many systems shared filesystems, for example using NFS; /usr was one of the directories shared this way. For that reason, all files, libraries and kernel modules required for the network and NFS to be operational had to live outside of the /usr tree.

Most of the files in directories like /bin/ , /sbin/ , /usr/bin/ , /usr/sbin/ , /usr/lib/ come from Linux packages (see Use Debian Package Management and Use RPM and YUM package management) and almost never have to be changed manually.

If an administrator needs to build software from source, normally the best place to install it is under the /usr/local/ hierarchy, which might look similar to:

/usr/local/
├── bin
├── etc
├── games
├── include
├── lib
├── sbin
├── share
└── src

So custom binary files would end up in /usr/local/bin/ , libraries in /usr/local/lib/ , configuration files in /usr/local/etc/ and so on.

Find Files and Commands on a Linux System

[edit | edit source]

find - can be used to search a directory tree for files and directories meeting certain criteria. It is a very powerful command and its expressions can be quite complex.

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [path...] [expression]

Normally no options are used, so a command can look similar to:

find /var/log -type f -mtime -1

In the above example, /var/log is the path which will be searched. The path is optional; if it is not specified, the current directory is used. The rest of the command above is the expression. An implicit AND operator is assumed between parts of the expression, so in the above example the command will print regular files (-type f) which were modified within the last 24 hours (-mtime -1).

Examples:

List logfiles which were not written to in the last 2 days:

find /var/log -type f -mtime +2

List directories within /etc which are owned by UID 0 OR GID 0:

find /etc -type d \( -uid 0 -o -gid 0 \)

Find files within current directory and subdirectories which do not end with .conf:

find -type f ! -name "*.conf"

List files bigger than 1MB in /var/log

find /var/log -type f -size +1M

Find can also be used to perform actions on result files or directories. Command below will remove all backup files from root's home directory:

find /root -type f -name "*.bak" -exec rm '{}' \;

locate - reads one or more databases prepared by updatedb and writes file names matching at least one of the PATTERNs. This command is much faster than find because it uses a prebuilt database, but it has some limitations. First, because it uses a static database, results are not guaranteed to be accurate: if files were removed or added after the database was created, locate will not know about them. The second limitation is the search pattern: find allows very complex expressions, while locate permits only a simple pattern match.

locate [OPTION]... PATTERN...

Common options:

-i, --ignore-case - perform case insensitive search
-b, --basename - match only the base name against the specified patterns. This is the opposite of --wholename
-w, --wholename - match only the whole path name against the specified patterns

Examples:

Show the location of all files matching "rc.local":

locate rc.local

updatedb - creates or updates a database used by locate. If the database already exists, its data is reused to avoid rereading directories that have not changed.

updatedb [OPTION]...

Common options:

-v, --verbose - output path names of files to standard output, as soon as they are found
-e, --add-prunepaths - exclude whitespace separated list of directories from the database

Example:

Rebuild the database excluding logfiles:

updatedb -e /var/log

/etc/updatedb.conf is the configuration file used by the updatedb command described above. It customizes the default behavior of updatedb, which is normally executed by cron once a day. An example file from Ubuntu Linux looks like:

PRUNE_BIND_MOUNTS="yes"
# PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf fuse.glusterfs fuse.sshfs ecryptfs fusesmb devtmpfs"

The above directives exclude some directories from being indexed (/tmp, /var/spool and /media) as well as some filesystem types.

whereis - locates the binary, source and manual page files for the specified commands.

whereis [-bmsu] [-BMS directory...  -f] filename...

Common options:

-b - search only for binaries
-m - search only for manual sections
-s - search only for sources

Examples:

Display the location of binaries and manpages for the cp command:

whereis cp

Print location of tar command manpages:

whereis -m tar

which - returns the pathnames of the files (or links) which would be executed in the current environment.

which [-a] filename ...

Common options:

-a print all matching pathnames of each argument

Example:

Show the location of the rm binary:

which rm


The X Window System

[edit | edit source]

Detailed Objectives

[edit | edit source]

(LPIC-1 Version 5.0)


Weight: 2

Description:
Candidates should be able to install and configure X11.

Key Knowledge Areas:

  • Understanding of the X11 architecture.
  • Basic understanding and knowledge of the X Window configuration file.
  • Overwrite specific aspects of Xorg configuration, such as keyboard layout.
  • Understand the components of desktop environments, such as display managers and window managers.
  • Manage access to the X server and display applications on remote X servers.
  • Awareness of Wayland.


The following is a partial list of the used files, terms and utilities:

  • /etc/X11/xorg.conf
  • /etc/X11/xorg.conf.d/
  • ~/.xsession-errors
  • xhost
  • xauth
  • DISPLAY
  • X

Configuration

[edit | edit source]

1. XFree86

type "XFree86 -configure", it will scan your hardware and auto. generate a configuration file matching to your hardware. However, FOR PS/2 MOUSE, you might need to modify this config file manually from ""Device" "/dev/mouse"" to ""Device" "/dev/psaux""

2. Xorg

"Xorg -configure"

Starting and stopping X

[edit | edit source]

To start X you can use:
startx - terminal command used at runlevel 3;
edit /etc/inittab to run at level 5 by default;
xinit - when there is no .xinitrc file;
init 5 - to change the runlevel to 5 manually (and run the display manager);
xdm (X Display Manager) - graphical login manager, which runs automatically during the boot process when starting Linux at runlevel 5 (there are also other graphical login managers, e.g. kdm, gdm).

To stop X you can use: <CTRL>+<ALT>+<BACKSPACE>;
init 3 - at a runlevel lower than 5, Linux will stop the X Window System;
kill the XFree86 process.

Configuring X

To configure X on a system, use XF86Setup. The program will generate a configuration file that will be used by the XFree86 server. To tune the screen under X, use Xfine2.

Under X, the user can configure every conceivable aspect of the graphic display: screen font sizes and styles, pointer behaviour, screen colors, and the window manager.

The tuning can be done system-wide or per-user. .xinitrc contains the default window manager and style information to be used by the startx command; this file is usually located under /home/username when defined on a per-user basis. .Xdefaults is used to set up pointer behaviour, colors, fonts, etc.

Exercises

[edit | edit source]


Detailed Objective

[edit | edit source]

Weight: 2

Description:
Candidates should be able to set up and customize a display manager. This objective covers the display managers XDM (X Display Manager), GDM (Gnome Display Manager) and KDM (KDE Display Manager).

  • Key knowledge area(s):
    • Turn the display manager on or off.
    • Change the display manager greeting.
    • Change default color depth for the display manager.
    • Configure display managers for use by X-stations.
  • The following is a partial list of the used files, terms and utilities:
    • /etc/inittab
    • xdm configuration files
    • kdm configuration files
    • gdm configuration files

Exercises

[edit | edit source]


Install & Customise A Window Manager Environment

Kernel

[edit | edit source]

Weight: 4

Description:
Candidates should be able to manage and/or query a kernel and kernel loadable modules.

  • Key knowledge area(s):
    • Use command-line utilities to get information about the currently running kernel and kernel modules.
    • Manually load and unload kernel modules.
    • Determine when modules can be unloaded.
    • Determine what parameters a module accepts.
    • Configure the system to load modules by names other than their file name.
  • The following is a partial list of the used files, terms and utilities:
    • /lib/modules/kernel-version/modules.dep
    • /etc/modules.conf
    • /etc/modprobe.conf
    • depmod
    • insmod
    • lsmod
    • rmmod
    • modinfo
    • modprobe
    • uname

Obtain information about kernel and modules

[edit | edit source]

To display the version of the currently running kernel, use the uname command:

uname -r # print the kernel release
uname -v # print the kernel version (build information)

The lsmod command can be used to display the currently loaded kernel modules:

$ lsmod
Module                  Size  Used by
nls_iso8859_1           3261  0 
nls_cp437               4931  0 
vfat                    9201  0 
fat                    48240  1 vfat
usb_storage            40172  0
.............

The 'Used by' column shows how many modules depend on a given one. In the example above, vfat depends on fat, which has to be loaded first.

Loading and Unloading Modules

[edit | edit source]

To load and unload kernel modules you need superuser privileges.

insmod - this command can be used to load a kernel module (however, the use of modprobe is recommended). insmod works at a low level: the module must be specified as a file path, and module dependencies are not resolved. The command below fails because the vfat module requires fat to be loaded first:

# insmod /lib/modules/2.6.35-22-generic/kernel/fs/fat/vfat.ko
insmod: error inserting '/lib/modules/2.6.35-22-generic/kernel/fs/fat/vfat.ko': -1 Unknown symbol in module

When we load the fat module first, everything works fine:

# insmod /lib/modules/2.6.35-22-generic/kernel/fs/fat/fat.ko 
# insmod /lib/modules/2.6.35-22-generic/kernel/fs/fat/vfat.ko

rmmod - this command can be used to remove modules from the running kernel. Like insmod, it will not resolve dependencies:

# rmmod fat
ERROR: Module fat is in use by vfat
# rmmod vfat
# rmmod fat


modprobe - this command allows you to load and unload modules and automatically resolves dependencies using the modules.dep file (e.g. /lib/modules/2.6.31-21-generic/modules.dep). To load a module, run the command with the module name as a parameter. It will make sure all required modules are loaded as well:

# modprobe vfat

To remove a module using the modprobe command, use the -r switch:

# modprobe -r vfat

To list all available modules for the currently running kernel, use the -l switch:

# modprobe -l
..................
kernel/drivers/net/ne2k-pci.ko
kernel/drivers/net/8390.ko
kernel/drivers/net/pcnet32.ko
kernel/drivers/net/e100.ko
kernel/drivers/net/tlan.ko
kernel/drivers/net/epic100.ko
kernel/drivers/net/smsc9420.ko
kernel/drivers/net/sis190.ko
kernel/drivers/net/sis900.ko
..................


To determine whether a module can be safely removed, use the lsmod command described above: make sure that the usage count in the 'Used by' column is 0, so that no other modules are using the one you are removing.

Getting Information About Modules

[edit | edit source]

modinfo - can be used to display information about a module. Common switches are -a to display author information, -d to display the description and -p to display the options (parameters) a module accepts:

$ modinfo  bonding
filename:       /lib/modules/2.6.35-22-generic/kernel/drivers/net/bonding/bonding.ko
alias:          rtnl-link-bond
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.6.0
version:        3.6.0
license:        GPL
srcversion:     EC8FCCE4D57BF7B3823F70F
depends:        
vermagic:       2.6.35-22-generic SMP mod_unload modversions 686 
parm:           max_bonds:Max number of bonded devices (int)
parm:           num_grat_arp:Number of gratuitous ARP packets to send on failover event (int)
parm:           num_unsol_na:Number of unsolicited IPv6 Neighbor Advertisements packets to send on failover event (int)
parm:           miimon:Link check interval in milliseconds (int)
.....................
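
Parameters reported by modinfo -p can be passed on the modprobe command line when loading the module. For example, using the miimon parameter shown above (the value 100 is purely illustrative):

# modprobe bonding miimon=100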

Creating Name Aliases


To create an alias recognized by the modprobe command, add it to one of modprobe's configuration files: either /etc/modprobe.conf or a file in the /etc/modprobe.d/ directory. The sample entries below define eth0 as an alias for the bnx2 network card driver and scsi_hostadapter as an alias for mptbase. Once the entries are added, one can use modprobe eth0 to load the bnx2 network card module.

alias eth0 bnx2
alias scsi_hostadapter mptbase
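
Besides alias, the same configuration files accept an options directive that supplies default parameters whenever a given module is loaded (the value below is illustrative):

options bonding miimon=100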


LPI Linux Certification/Reconfigure, Build & Install A Custom Kernel & Kernel Modules

Boot, Initialisation, Shutdown & Runlevels


101.2 Boot the system


(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to guide the system through the booting process.

Key Knowledge Areas:

  • Provide common commands to the boot loader and options to the kernel at boot time
  • Demonstrate knowledge of the boot sequence from BIOS/UEFI to boot completion
  • Understanding of SysVinit and systemd.
  • Awareness of Upstart.
  • Check boot events in the log files


The following is a partial list of the used files, terms and utilities:

  • dmesg
  • journalctl
  • BIOS
  • UEFI
  • bootloader
  • kernel
  • initramfs
  • init
  • SysVinit
  • systemd


Boot Process Overview


When a typical PC is powered on, the BIOS (Basic Input/Output System) detects hardware devices, including hard drives, and goes through a boot priority list. Assuming we are booting from a hard drive, it executes the first 512 bytes of code from the first boot disk's MBR (the stage1 boot loader). The stage1 boot loader is very basic and its function is to load the stage2 (or stage1.5 in some cases) boot loader. The next step depends on the type of boot loader being used. There are two popular boot loaders in Linux: LILO and GRUB.

LILO (LInux LOader) is an older boot loader; modern Linux distributions use GRUB rather than LILO, but it might still be in use on older systems. When the LILO stage2 boot loader is loaded, depending on configuration, it might display a menu that allows the user to specify kernel options. Once kernel options are specified, LILO loads the kernel and initrd and executes the kernel, passing any parameters specified by the user. It is important to remember that LILO is not filesystem aware, so the kernel and initrd are loaded from predefined locations on the hard drive.

GRUB (GRand Unified Bootloader) is a newer and more flexible boot loader. When stage1.5 is loaded, it reads the /boot partition and loads the stage2 boot loader from there. The stage2 boot loader reads its configuration from /boot/grub/grub.conf (RedHat based distributions) or menu.lst (Debian based distributions) and presents the user with options (if configured to do so). When the choice is made, it loads the kernel and initrd from the /boot/ partition and executes the kernel, passing any options supplied by the user.

When the kernel loads, it detects devices. One of the tasks it has to perform is mounting the root partition. If the required hardware and filesystem support is built into the kernel, this can be done immediately and the /sbin/init program is loaded. This situation is rare: the majority of modern systems require dynamically loaded modules in order to mount the root partition. These modules are contained within the initial RAM disk (initrd). Once the root partition is ready, the kernel runs the /sbin/init program, which continues loading the operating system.

/sbin/init is the first process started by the kernel and it becomes the parent of all other processes. Usually it reads the /etc/inittab configuration file; however, some newer alternatives like upstart no longer use this file. On RedHat based systems init executes /etc/rc.d/rc.sysinit, and on Debian based systems /etc/init.d/rcS. This script mounts partitions and performs basic system setup. Next it runs all scripts located in /etc/rcX.d/, where X is the runlevel number. This can be the default specified in the /etc/inittab file or a number provided by the administrator as a kernel parameter. The last script to run is /etc/rc.local, which can be used to add system-specific initialization steps. Once this script has executed, the boot process is complete.


Bootloaders Configuration


LILO is the older boot loader and has some limitations. One of them is the fact that the kernel and initrd locations are hardcoded into the stage2 boot loader. This means that every time a new kernel image is added, the administrator has to run the /sbin/lilo command to reinstall LILO. The configuration file used by the /sbin/lilo executable is located at /etc/lilo.conf and might look like:


#begin LILO global section
boot=/dev/hda
bitmap=/boot/image.bmp
bmp-colors=255,0,255,0,255,0
append="vt.default_utf8=0"
map=/boot/System.map
install=/boot/boot.b
prompt
timeout=50
#VESA framebuffer console @ 1024x768x256
vga=773
#end LILO global section
#Linux bootable partition config begins
image=/boot/vmlinuz-2.0.36
	label=linux
	root=/dev/hda2
	read-only
#Linux bootable partition config ends
#second bootable partition config begins...
  • boot - specifies where LILO will be installed. In the example above it is the MBR of the /dev/hda disk (or in some cases /dev/sda).
  • bitmap - sets a custom background image for the boot menu.
  • bmp-colors - sets the color scheme used with the bitmap background.
  • append - passes additional options to the kernel during boot for a specific setup, such as adding or removing a kernel feature or module.
  • map - the map file is automatically generated by LILO and is used internally. It is recommended not to change this option.
  • install - specifies which image to use for the boot sector. Again, it is not recommended to change this parameter.
  • prompt - this option enables the administrator to append kernel command line options when the system boots.
  • timeout - specifies the prompt timeout in tenths of a second, so in the example above the timeout is set to 5 seconds.
  • vga - on some systems this indicates which video resolution the boot process will use.
  • image - kernel location (use different kernel names for separate boot sections).
  • label - the name displayed in the LILO menu during boot.
  • root - root filesystem location.
  • read-only - see the kernel parameters section below.


GRUB offers many benefits over LILO. One of them is the fact that it offers a bash-like command line which can be used to dynamically change the kernel or initrd image used. A sample configuration file is listed below:

default         0
timeout         5
title           Debian GNU/Linux, kernel 2.6.26-2-686
root            (hd0,0)
kernel          /vmlinuz-2.6.26-2-686 root=/dev/mapper/Disk-root ro quiet
initrd          /initrd.img-2.6.26-2-686

The first line specifies the default image, in this case image number 0, which is the only one configured. The second line specifies the prompt timeout in seconds; if it is set to 0, grub immediately boots the default image without any prompt. The next four lines define a menu entry:

  • title - name of the menu entry
  • root - partition on which to find the kernel and initrd. In the example above, (hd0,0) is the first partition on the first disk
  • kernel - kernel image to be loaded, together with kernel parameters. The kernel's location is relative to the top of the root partition defined above, usually /boot (not to be confused with the Linux root partition, /)
  • initrd - initial ramdisk image.

It is common for grub to contain many title/root/kernel/initrd sets for different kernel versions.
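
For example, a second entry for an older kernel (the version number here is hypothetical) simply repeats the stanza with different file names:

title           Debian GNU/Linux, kernel 2.6.24-1-686
root            (hd0,0)
kernel          /vmlinuz-2.6.24-1-686 root=/dev/mapper/Disk-root ro quiet
initrd          /initrd.img-2.6.24-1-686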

Kernel Parameters


As mentioned above, the kernel accepts arguments which can be configured in grub or LILO, or dynamically appended by the user during system startup. A list of common kernel parameters follows:

  • root=<disk> - specifies the root filesystem which will be mounted as /. It can be a partition (e.g. root=/dev/sda2), a disk label (root=LABEL=root_partition) or a logical volume (root=/dev/mapper/vg_root-root)
  • ro or read-only - instructs the kernel to mount the root filesystem read-only. This allows the filesystem to be checked before the partition is remounted read-write. This option is used in almost all cases
  • quiet - do not print diagnostic messages to the screen
  • single - boot the system into single user mode (runlevel 1)
  • console= - tells the kernel where to send console output. For example, to send it to the first serial port use console=ttyS0
  • init=<path> - specifies a replacement for the /sbin/init process. For example, to get a root shell without a password use init=/bin/bash
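
Combining several of these, a kernel line that boots into single user mode with a read-only root filesystem (an illustrative example based on the grub configuration above) could look like:

kernel /vmlinuz-2.6.26-2-686 root=/dev/mapper/Disk-root ro single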

Checking Logs


All kernel diagnostic messages are sent to the kernel ring buffer. Its content can be reviewed using the dmesg command; however, the output can be tedious, so it is best to use dmesg | less (use the arrow keys to scroll vertically and Q to exit). Bear in mind that this buffer has limited capacity, so as new messages come in the oldest ones may be dropped. Once all partitions are mounted, one of the init scripts writes the content of the kernel buffer to a logfile, normally /var/log/messages, so that diagnostic messages remain available when needed.
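
On systemd-based systems, boot and kernel messages can also be reviewed with journalctl (listed in the objectives above); for example:

# journalctl -b        # messages from the current boot
# journalctl -k        # kernel messages only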


101.3 Change runlevels / boot targets and shutdown or reboot system


(LPIC-1 Version 5.0)

Weight: 3

Description:
Candidates should be able to manage the SysVinit runlevel or systemd boot target of the system. This objective includes changing to single user mode, shutdown or rebooting the system. Candidates should be able to alert users before switching runlevels / boot targets and properly terminate processes. This objective also includes setting the default SysVinit runlevel or systemd boot target. It also includes awareness of Upstart as an alternative to SysVinit or systemd.

Key knowledge area(s):

  • Set the default runlevel or boot target.
  • Change between runlevels / boot targets including single user mode.
  • Shutdown and reboot from the command line.
  • Alert users before switching runlevels / boot targets or other major system event.
  • Properly terminate processes.
  • Awareness of acpid.

The following is a partial list of the used files, terms and utilities:

  • /etc/inittab
  • shutdown
  • init
  • /etc/init.d/
  • telinit
  • systemd
  • systemctl
  • /etc/systemd/
  • /usr/lib/systemd/
  • wall

Runlevels Overview


Runlevels are used in Linux to customize the way the operating system is initialized. A runlevel defines which services are started automatically during initialization. Each runlevel is identified by a number (and possibly an alias). Common runlevel numbers are listed below; unfortunately, some of them differ between RedHat and Debian based distributions:

  • 0 - Halt the system. This runlevel should never be set as the default. (Same on RedHat and Debian)
  • 1 (single) - Single user mode. In this runlevel Linux starts only basic services, and most distributions start a root shell that does not require a password to log in. No network or NFS services are started. This runlevel can be used to reset the root password or to perform basic system maintenance. (Same on RedHat and Debian)
  • 2 - RedHat: multiuser without NFS support. Debian: full multiuser, normally the default.
  • 3 - RedHat: full multiuser, normally the default. Debian: normally unused.
  • 4 - Normally unused on both.
  • 5 - RedHat: full multiuser with X11; this runlevel normally allows graphical login. Debian: normally unused.
  • 6 - Reboot the system. This runlevel should never be set as the default, as it would cause the system to enter a reboot loop. (Same on RedHat and Debian)

Setting Default Runlevel


The default runlevel is controlled in the /etc/inittab file in most distributions; however, this is currently changing as some newer Linux distributions use the more advanced, event-driven upstart replacement for the traditional init program. In /etc/inittab the following line controls the default runlevel (an example from Debian with runlevel 2 set as the default):

id:2:initdefault:

When the operating system loads the /sbin/init program, it reads the line above and determines that it should use runlevel 2 (unless another runlevel number was passed as a kernel parameter). Following this, init runs all scripts from the /etc/rcX.d/ directory, where X is the runlevel number. Sample content of the directory might look similar to:

-rw-r--r-- 1 root root 556 2008-08-12 15:09 README
lrwxrwxrwx 1 root root  17 2010-01-07 22:08 S10rsyslog -> ../init.d/rsyslog
lrwxrwxrwx 1 root root  15 2010-01-07 22:10 S12acpid -> ../init.d/acpid
lrwxrwxrwx 1 root root  15 2010-05-09 11:39 S15bind9 -> ../init.d/bind9
lrwxrwxrwx 1 root root  13 2010-01-07 22:40 S16ssh -> ../init.d/ssh
lrwxrwxrwx 1 root root  15 2010-01-07 22:15 S20exim4 -> ../init.d/exim4
lrwxrwxrwx 1 root root  20 2010-01-07 22:15 S20nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root  27 2010-01-07 23:10 S20nfs-kernel-server -> ../init.d/nfs-kernel-server
lrwxrwxrwx 1 root root  23 2010-01-07 22:15 S20openbsd-inetd -> ../init.d/openbsd-inetd
lrwxrwxrwx 1 root root  13 2010-01-07 22:15 S89atd -> ../init.d/atd
lrwxrwxrwx 1 root root  14 2010-01-07 22:08 S89cron -> ../init.d/cron
lrwxrwxrwx 1 root root  17 2010-01-07 22:59 S91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root  18 2010-01-07 22:08 S99rc.local -> ../init.d/rc.local
lrwxrwxrwx 1 root root  19 2010-01-07 22:08 S99rmnologin -> ../init.d/rmnologin
lrwxrwxrwx 1 root root  23 2010-01-07 22:08 S99stop-bootlogd -> ../init.d/stop-bootlogd

All services from the directory are run in order, starting from the lowest number to the highest. As you might have noticed, all files in the /etc/rcX.d/ directory are symbolic links to startup scripts in /etc/init.d/. When the operating system enters the runlevel, the init program passes the "start" parameter to all scripts prefixed with the "S" character and "stop" to all scripts prefixed with "K". In the above example, one of the first programs to run is rsyslog:

S10rsyslog -> ../init.d/rsyslog

This will cause init to execute the following command:

/etc/init.d/rsyslog start

The above example is runlevel 2 from Debian, which is full multiuser, so no services have to be stopped. However, if we look at runlevel 1 on the same system, it looks very different:

lrwxrwxrwx 1 root root  17 2010-01-07 22:59 K09apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root  13 2010-01-07 22:15 K11atd -> ../init.d/atd
lrwxrwxrwx 1 root root  14 2010-01-07 22:08 K11cron -> ../init.d/cron
lrwxrwxrwx 1 root root  15 2010-01-07 22:15 K20exim4 -> ../init.d/exim4
lrwxrwxrwx 1 root root  20 2010-01-07 22:15 K20nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root  23 2010-01-07 22:15 K20openbsd-inetd -> ../init.d/openbsd-inetd
lrwxrwxrwx 1 root root  27 2010-01-07 23:10 K80nfs-kernel-server -> ../init.d/nfs-kernel-server
lrwxrwxrwx 1 root root  17 2010-01-07 22:15 K81portmap -> ../init.d/portmap
lrwxrwxrwx 1 root root  13 2010-01-07 22:40 K84ssh -> ../init.d/ssh
lrwxrwxrwx 1 root root  15 2010-05-09 11:39 K85bind9 -> ../init.d/bind9
lrwxrwxrwx 1 root root  15 2010-01-07 22:10 K88acpid -> ../init.d/acpid
lrwxrwxrwx 1 root root  17 2010-01-07 22:08 K90rsyslog -> ../init.d/rsyslog
-rw-r--r-- 1 root root 369 2007-12-23 11:04 README
lrwxrwxrwx 1 root root  19 2010-01-07 22:08 S30killprocs -> ../init.d/killprocs
lrwxrwxrwx 1 root root  16 2010-01-07 22:08 S90single -> ../init.d/single

When the system enters runlevel 1, it stops all services whose links begin with "K", leaving only a bare minimum running.

Changing Runlevels, shutting down and rebooting system


init and telinit - the telinit command is just a link to init; the functionality of both is the same. They can be used to change the current runlevel and take only one parameter: the new runlevel number. Example usage:
Change to single user mode:

init 1

Runlevel 6 is reboot. init can be used to reboot the system:

telinit 6

To shut down, we go to runlevel 0:

init 0


shutdown - the shutdown command can be used to halt or reboot the system and to send a warning message to all logged-in users.

/sbin/shutdown [-t sec] [-arkhncfFHP] time [warning-message]

Common options:

-h - halt or power off after shutdown
-r - reboot after shutdown
-k - Don't really shutdown; only send the warning message to everybody

Examples:
Shutdown in 5 minutes and send warning message

shutdown -h +5 System is going down for maintenance

Reboot immediately

shutdown -r now

Do not reboot or shutdown, only send message to users

shutdown -k now Make sure you keep your password safe

Shutdown system at 23:59

shutdown -h 23:59 This system is going down for maintenance at 23:59
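
On systemd-based distributions the equivalent operations use boot targets rather than runlevels. A minimal sketch using systemctl:

systemctl get-default                    # show the default boot target
systemctl set-default multi-user.target  # set the default boot target
systemctl isolate rescue.target          # switch to single user (rescue) mode
systemctl reboot                         # reboot the system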


Printing


Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 2


Description:
Candidates should be able to manage print queues and user print jobs using CUPS and the LPD compatibility interface.


Key Knowledge Areas:

  • Basic CUPS configuration (for local and remote printers).
  • Manage user print queues.
  • Troubleshoot general printing problems.
  • Add and remove jobs from configured printer queues.


The following is a partial list of the used files, terms and utilities:

  • CUPS configuration files, tools and utilities
  • /etc/cups
  • lpd legacy interface (lpr, lprm, lpq)


LPI Linux Certification/Print Files
LPI Linux Certification/Install & Configure Local & Remote Printers

Documentation


LPI Linux Certification/Use & Manage Local System Documentation
LPI Linux Certification/Find Linux Documentation On The Internet
LPI Linux Certification/Notify Users On System-Related Issues

Shells, Scripting, Programming & Compiling


Detailed Objectives


(LPIC-1 Version 5.0)

Weight: 4

Description:
Candidates should be able to customize shell environments to meet users' needs. Candidates should be able to modify global and user profiles.

Key Knowledge Areas:

  • Set environment variables (e.g. PATH) at login or when spawning a new shell.
  • Write Bash functions for frequently used sequences of commands.
  • Maintain skeleton directories for new user accounts.
  • Set command search path with the proper directory.

The following is a partial list of the used files, terms and utilities:

  • .
  • source
  • /etc/bash.bashrc
  • /etc/profile
  • env
  • export
  • set
  • unset
  • ~/.bash_profile
  • ~/.bash_login
  • ~/.profile
  • ~/.bashrc
  • ~/.bash_logout
  • function
  • alias


Detailed Objectives


(LPIC-1 Version 5.0)

Weight: 4

Description:
Candidates should be able to customize existing scripts, or write simple new Bash scripts.

Key Knowledge Areas:

  • Use standard sh syntax (loops, tests).
  • Use command substitution.
  • Test return values for success or failure or other information provided by a command.
  • Execute chained commands.
  • Perform conditional mailing to the superuser.
  • Correctly select the script interpreter through the shebang (#!) line.
  • Manage the location, ownership, execution and suid-rights of scripts.

The following is a partial list of the used files, terms and utilities:

  • for
  • while
  • test
  • if
  • read
  • seq
  • exec
  • ||
  • &&


Administrative Tasks


Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 5

Description:
Candidates should be able to add, remove, suspend and change user accounts.

Key Knowledge Areas:

  • Add, modify and remove users and groups.
  • Manage user/group info in password/group databases.
  • Create and manage special purpose and limited accounts.

The following is a partial list of the used files, terms and utilities:

  • /etc/passwd
  • /etc/shadow
  • /etc/group
  • /etc/skel
  • chage
  • groupadd
  • groupdel
  • groupmod
  • passwd
  • useradd
  • userdel
  • usermod


LPI Linux Certification/Tune The User Environment & System Environment Variables
LPI Linux Certification/Configure & Use System Log Files To Meet Administrative and Security Needs

Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 4


Description:
Candidates should be able to use cron and systemd timers to run jobs at regular intervals and to use at to run jobs at a specific time.


Key Knowledge Areas:

  • Manage cron and at jobs.
  • Configure user access to cron and at services.
  • Understand systemd timer units.


The following is a partial list of the used files, terms and utilities:

  • /etc/cron.{d,daily,hourly,monthly,weekly}/
  • /etc/at.deny
  • /etc/at.allow
  • /etc/crontab
  • /etc/cron.allow
  • /etc/cron.deny
  • /var/spool/cron/
  • crontab
  • at
  • atq
  • atrm
  • systemctl
  • systemd-run


LPI Linux Certification/Maintain An Effective Data Backup Strategy

Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 3


Description:
Candidates should be able to properly maintain the system time and synchronize the clock via NTP.


Key Knowledge Areas:

  • Set the system date and time.
  • Set the hardware clock to the correct time in UTC.
  • Configure the correct timezone.
  • Basic NTP configuration using ntpd and chrony.
  • Knowledge of using the pool.ntp.org service.
  • Awareness of the ntpq command.


The following is a partial list of the used files, terms and utilities:

  • /usr/share/zoneinfo/
  • /etc/timezone
  • /etc/localtime
  • /etc/ntp.conf
  • /etc/chrony.conf
  • date
  • hwclock
  • timedatectl
  • ntpd
  • ntpdate
  • chrony
  • pool.ntp.org


Networking Fundamentals


LPI Linux Certification/Fundamentals Of TCP-IP
LPI Linux Certification/TCP-IP Configuration & Troubleshooting
LPI Linux Certification/Configure Linux As A PPP Client

Networking Services


LPI Linux Certification/Configure & Manage xinetd, inetd & Related Services

Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 3


Description:
Candidates should be aware of the commonly available MTA programs and be able to perform basic forward and alias configuration on a client host. Other configuration files are not covered.


Key Knowledge Area(s):

  • Create e-mail aliases.
  • Configure e-mail forwarding.
  • Knowledge of commonly available MTA programs (postfix, sendmail, exim) (no configuration).


The following is a partial list of the used files, terms and utilities:

  • ~/.forward
  • sendmail emulation layer commands
  • newaliases
  • mail
  • mailq
  • postfix
  • sendmail
  • exim


LPI Linux Certification/Properly Manage The NFS & SAMBA Daemons
LPI Linux Certification/Setup & Configure Basic DNS Services
LPI Linux Certification/Setup Secure Shell (OpenSSH)

Security


Weight: 3

Description:
Candidates should know how to review system configuration to ensure host security in accordance with local security policies.

  • Key knowledge area(s):
    • Audit a system to find files with the suid/sgid bit set.
    • Set or change user passwords and password aging information.
    • Being able to use nmap and netstat to discover open ports on a system.
    • Set up limits on user logins, processes and memory usage.
    • Basic sudo configuration and usage.
  • The following is a partial list of the used files, terms and utilities:
    • find
    • passwd
    • lsof
    • nmap
    • chage
    • netstat
    • sudo
    • /etc/sudoers
    • su
    • usermod
    • ulimit


Detailed Objectives


(LPIC-1 Version 5.0)


Weight: 3


Description:
Candidates should know how to set up a basic level of host security.


Key Knowledge Areas:

  • Awareness of shadow passwords and how they work.
  • Turn off network services not in use.
  • Understand the role of TCP wrappers.


The following is a partial list of the used files, terms and utilities:

  • /etc/nologin
  • /etc/passwd
  • /etc/shadow
  • /etc/xinetd.d/
  • /etc/xinetd.conf
  • systemd.socket
  • /etc/inittab
  • /etc/inetd.d/
  • /etc/hosts.allow
  • /etc/hosts.deny


LPI Linux Certification/Setup User Level Security

Advanced Level Linux Professional


LPI Linux Certification/Advanced Level Linux Professional

Linux Kernel


Detailed Objective (201.1)


(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to utilize kernel components that are necessary to specific hardware, hardware drivers, system resources and requirements. This objective includes implementing different types of kernel images, identifying stable and development kernels and patches, as well as using kernel modules.


Key Knowledge Areas:

  • Kernel 2.6.x, 3.x and 4.x documentation


The following is a partial list of the used files, terms and utilities:

  • /usr/src/linux
  • /usr/src/linux/Documentation/
  • zImage
  • bzImage
  • xz compression


Kernel Image Formats


Two types of kernel image formats can be used on Intel platforms: zImage and bzImage. The difference between them is the way they bootstrap and how large the kernel can be, not the compression algorithm as one might think. Both use gzip for compression.

zImage

This is the old boot image format for Intel, which works on all known PC hardware. The bootstrap and the unpacked kernel are loaded into the good old, 8086-era 640 KB of low memory. The allowed kernel size is 520 KB. If your kernel exceeds this size, you either have to switch to bzImage or put more of the kernel into modules. The boot image builder will tell you when this is the case.

bzImage

The b in this format stands for big. The bzImage kernel image is not restricted to 520 KB or even 640 KB. bzImage is now the preferred boot image. Though there are some reports of boot failures using this boot image type, these problems are being pursued because the kernel developers want the format to work on all Intel hardware.
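
The image format is chosen when the kernel is built. An illustrative build sequence, run from /usr/src/linux and using the make targets listed in objective 201.2 below:

make menuconfig        # customize the kernel configuration
make bzImage modules   # build the compressed kernel image and the modules
make modules_install   # install the modules under /lib/modules/<version>/
make install           # install the kernel and update the boot loader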

Detailed Objectives (201.2)


(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to properly configure a kernel to include or disable specific features of the Linux kernel as necessary. This objective includes compiling and recompiling the Linux kernel as needed, updating and noting changes in a new kernel, creating an initrd image and installing new kernels.


Key Knowledge Areas:

  • /usr/src/linux/
  • Kernel Makefiles
  • Kernel 2.6.x/3.x make targets
  • Customize the current kernel configuration.
  • Build a new kernel and appropriate kernel modules.
  • Install a new kernel and any modules.
  • Ensure that the boot manager can locate the new kernel and associated files.
  • Module configuration files
  • Use DKMS to compile kernel modules.
  • Awareness of dracut


The following is a partial list of the used files, terms and utilities:

  • mkinitrd
  • mkinitramfs
  • make
  • make targets (all, config, xconfig, menuconfig, gconfig, oldconfig, mrproper, zImage, bzImage, modules, modules_install, rpm-pkg, binrpm-pkg, deb-pkg)
  • gzip
  • bzip2
  • module tools
  • /usr/src/linux/.config
  • /lib/modules/kernel-version/
  • depmod
  • dkms


Detailed Objective


Weight: 1

Description:
Candidates should be able to properly patch a kernel to add support for new hardware. This objective also includes being able to properly remove kernel patches from already patched kernels.

  • Key knowledge area(s):
    • Kernel Makefiles
  • The following is a partial list of the used files, terms and utilities:
    • patch
    • gzip
    • bzip2


Detailed Objective


Weight: 2

Description:
Candidates should be able to customise, build and install a 2.6 kernel for specific system requirements, by patching, compiling and editing configuration files as required. This objective includes being able to assess requirements for a kernel compile as well as build and configure kernel modules.

  • Key knowledge area(s):
    • Customize the current kernel configuration
    • Build a new kernel and appropriate kernel modules
    • Install a new kernel and any modules
    • Ensure that the boot manager can locate the new kernel and associated files
    • /usr/src/linux/
    • Module configuration files
  • The following is a partial list of the used files, terms and utilities:
    • patch
    • make
    • module tools
    • /usr/src/linux/*
    • /usr/src/linux/.config
    • /lib/modules/kernel-version/*
    • /boot/*
    • make targets: all, config, menuconfig, xconfig, gconfig, oldconfig, modules, install, modules_install, depmod, rpm-pkg, binrpm-pkg, deb-pkg


System Startup


Detailed Objective (202.1)


(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to query and modify the behaviour of system services at various targets / run levels. A thorough understanding of the systemd, SysV Init and the Linux boot process is required. This objective includes interacting with systemd targets and SysV Init runlevels.


Key Knowledge Areas:

  • Systemd
  • SysV init
  • Linux Standard Base Specification (LSB)


The following is a partial list of the used files, terms and utilities:

  • /usr/lib/systemd/
  • /etc/systemd/
  • /run/systemd/
  • systemctl
  • systemd-delta
  • /etc/inittab
  • /etc/init.d/
  • /etc/rc.d/
  • chkconfig
  • update-rc.d
  • init and telinit


Detailed Objective (202.2)


(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to properly manipulate a Linux system during both the boot process and during recovery mode. This objective includes using both the init utility and init-related kernel options. Candidates should be able to determine the cause of errors in loading and usage of bootloaders. GRUB version 2 and GRUB Legacy are the bootloaders of interest. Both BIOS and UEFI systems are covered.


Key Knowledge Areas:

  • BIOS and UEFI
  • NVMe booting
  • GRUB version 2 and Legacy
  • grub shell
  • boot loader start and hand off to kernel
  • kernel loading
  • hardware initialisation and setup
  • daemon/service initialisation and setup
  • Know the different boot loader install locations on a hard disk or removable device.
  • Overwrite standard boot loader options and using boot loader shells.
  • Use systemd rescue and emergency modes.


Terms and Utilities:

  • mount
  • fsck
  • inittab, telinit and init with SysV init
  • The contents of /boot/, /boot/grub/ and /boot/efi/
  • EFI System Partition (ESP)
  • GRUB
  • grub-install
  • efibootmgr
  • UEFI shell
  • initrd, initramfs
  • Master boot record
  • systemctl


File Systems


Detailed Objective (203.1)


(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to properly configure and navigate the standard Linux filesystem. This objective includes configuring and mounting various filesystem types.


Key Knowledge Areas:

  • The concept of the fstab configuration.
  • Tools and utilities for handling swap partitions and files.
  • Use of UUIDs for identifying and mounting file systems
  • Understanding of systemd mount units


Terms and Utilities:

  • /etc/fstab
  • /etc/mtab
  • /proc/mounts
  • mount and umount
  • blkid
  • sync
  • swapon
  • swapoff


Mounting and unmounting partitions


To access an existing partition you need to mount it first using the mount command.
For example, if you want to mount an NTFS partition on /mnt/windows, you should issue the following command:

mount -t ntfs /dev/hda3 /mnt/windows

Of course, you need to replace hda3 with your NTFS partition.
To unmount a partition, simply use umount:

umount /mnt/windows

or

umount /dev/hda3

If you run mount without arguments, it prints the currently mounted devices; you can also look at /proc/mounts and /etc/mtab to discover which partitions are currently mounted.

Fstab


If you want filesystems to be mounted automatically, you should edit /etc/fstab:

<file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/hda2       /               ext3    defaults        0       1
/dev/hda4       none            swap    defaults        0       0
/dev/hda1       /boot           ext3    defaults        0       2
/dev/hda3       /mnt/windows    ntfs    defaults        0       0
/dev/hdb        /media/cdrom    iso9660 ro,user,noauto  0       0
/dev/fd0        /media/floppy   auto    user,noauto     0       0

In the above example of /etc/fstab, the NTFS partition is mounted automatically on /mnt/windows during system boot, while the cdrom and floppy devices have the noauto and user options: they are not mounted during boot, and any user can mount them whenever needed. The sixth field should be 1 for the root filesystem and 2 for other filesystems that need to be checked with fsck during boot.
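
Since device names such as /dev/hda1 can change, filesystems can also be identified by UUID (see the blkid utility listed in the objective above). Query the UUID with blkid, then use it in the first field of /etc/fstab; the UUID below is made up for illustration:

blkid /dev/hda1

UUID=1234abcd-5678-90ef-1234-567890abcdef  /boot  ext3  defaults  0  2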

The swap partition is used as virtual memory. To create a swap partition, use mkswap:

mkswap /dev/hda4

It then needs to be activated with swapon:

swapon /dev/hda4

You can deactivate it with swapoff:

swapoff /dev/hda4

The sync utility can be used to force pending changes to be written to disk; journaling filesystems like ext3 or reiserfs write changes to disk regularly, so you rarely need to issue the command manually.



Detailed Objective (203.2)


(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to properly maintain a Linux filesystem using system utilities. This objective includes manipulating standard filesystems and monitoring SMART devices.


Key Knowledge Areas:

  • Tools and utilities to manipulate ext2, ext3 and ext4
  • Tools and utilities to perform basic Btrfs operations, including subvolumes and snapshots
  • Tools and utilities to manipulate XFS
  • Awareness of ZFS


Terms and Utilities:

  • mkfs (mkfs.*)
  • mkswap
  • fsck (fsck.*)
  • tune2fs, dumpe2fs, debugfs
  • btrfs, btrfs-convert
  • xfs_info, xfs_check, xfs_repair, xfsdump and xfsrestore
  • smartd, smartctl


Formatting a partition


Before you format a partition, you need to choose the right filesystem for your needs. The most common filesystem on Linux is ext3, a journaling filesystem based on ext2. To format a partition with a filesystem, use the mkfs.* commands:

 #ext3
 mkfs.ext3 /dev/hda1
 #fat
 mkfs.vfat /dev/hda1
 #xfs 
 mkfs.xfs /dev/hda1
 #reiserfs
 mkfs.reiserfs /dev/hda1

To create an ext2/ext3 filesystem you can also use the mke2fs utility:

#ext2
mke2fs /dev/hda1
#ext3
mke2fs -j /dev/hda1

Configuring and repairing filesystems


tune2fs is a utility used to tune ext2/ext3 filesystems:

#add a journal to an ext2 filesystem (convert from ext2 to ext3)
tune2fs -j /dev/hda1
#set the max mount count before the filesystem is checked for errors to 30
tune2fs -c 30 /dev/hda1
#set the max time before the filesystem is checked for errors to 10 days
tune2fs -i 10d /dev/hda1
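
To inspect the current values of these and other filesystem settings, the dumpe2fs utility (listed above) prints the superblock information:

#show superblock information only
dumpe2fs -h /dev/hda1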

You can also tune a reiserfs partition using reiserfstune:

#create a new journal for /dev/hda1 into /dev/hda2 
reiserfstune --journal-new-device /dev/hda2 -f /dev/hda1

To check a filesystem for errors, use fsck.*:

 #ext3
 fsck.ext3 /dev/hda1
 #fat
 fsck.vfat /dev/hda1
 #xfs 
 fsck.xfs /dev/hda1
 #reiserfs
 fsck.reiserfs /dev/hda1

You can also just run fsck /dev/hda1 directly and it will detect the filesystem type.
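
The objective also mentions monitoring SMART devices. smartctl from the smartmontools package can be used for a quick health check (illustrative examples):

#overall health self-assessment
smartctl -H /dev/hda
#full SMART information
smartctl -a /dev/hda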




Detailed Objectives (203.3)


(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to configure automount filesystems using AutoFS. This objective includes configuring automount for network and device filesystems. Also included is creating filesystems for devices such as CD-ROMs and a basic feature knowledge of encrypted filesystems.


Key Knowledge Areas:

  • autofs configuration files
  • Understanding of automount units
  • UDF and ISO9660 tools and utilities
  • Awareness of other CD-ROM filesystems (HFS)
  • Awareness of CD-ROM filesystem extensions (Joliet, Rock Ridge, El Torito)
  • Basic feature knowledge of data encryption (dm-crypt / LUKS)


Terms and Utilities:

  • /etc/auto.master
  • /etc/auto.[dir]
  • mkisofs
  • cryptsetup


Hardware


Detailed Objective (204.1)


(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to configure and implement software RAID. This objective includes using and configuring RAID 0, 1 and 5.


Key Knowledge Areas:

  • Software raid configuration files and utilities


Terms and Utilities:

  • mdadm.conf
  • mdadm
  • /proc/mdstat
  • partition type 0xFD


Detailed Objective


Weight: 3

Description:
Candidates should be able to configure internal and external devices for a system including new hard disks, dumb terminal devices, serial UPS devices, multi-port serial cards, and LCD panels.

  • Key knowledge area(s):
    • X.org
    • XFree86
    • Module tools
    • Tools and utilities to list various hardware information (e.g. lsdev, lspci, etc.)
    • Tools and utilities to manipulate legacy interfaces (RS232, LPT, etc)
    • Tools and utilities to manipulate USB devices
  • The following is a partial list of the used files, terms and utilities:
    • modprobe
    • lsmod
    • lsdev
    • lspci
    • setserial
    • usbview
    • lsusb


Detailed Objective


Weight: 2

Description:
Candidates should be able to configure kernel options to support various drives. This objective includes using LVM (Logical Volume Manager) to manage hard disk drives and partitions, as well as software tools to view & modify hard disk settings.

  • Key knowledge area(s):
    • Tools and utilities to configure DMA for IDE devices including ATAPI and SATA
    • LVM tools and utilities
    • Tools and utilities to manipulate or analyse system resources (e.g. interrupts)
  • The following is a partial list of the used files, terms and utilities:
    • /proc/interrupts
    • hdparm
    • tune2fs
    • sysctl


Detailed Objective


Weight: 1

Description:
Candidates should be able to configure a Linux installation to include support for mobile computer hardware extensions. This objective includes configuring those devices.

  • Key knowledge area(s):
    • PCCard and PCMCIA
    • PCCard and PCMCIA configuration files, tools and utilities
  • The following is a partial list of the used files, terms and utilities:
    • /etc/pcmcia/
    • *.opts
    • cardctl
    • cardmgr


File & Service Sharing


Detailed Objectives


(LPIC-2 Version 4.5)


Weight: 5


Description: Candidates should be able to set up a Samba server for various clients. This objective includes setting up Samba as a member in an Active Directory. Furthermore, the configuration of simple CIFS and printer shares is covered. Also covered is configuring a Linux client to use a Samba server. Troubleshooting installations is also tested.


Key Knowledge Areas:

  • Samba 4 documentation
  • Samba 4 configuration files
  • Samba 4 tools and utilities and daemons
  • Mounting CIFS shares on Linux
  • Mapping Windows user names to Linux user names
  • User-Level, Share-Level and AD security


Terms and Utilities:

  • smbd, nmbd, winbindd
  • smbcontrol, smbstatus, testparm, smbpasswd, nmblookup
  • samba-tool
  • net
  • smbclient
  • mount.cifs
  • /etc/samba/
  • /var/log/samba/


Detailed Objectives (209.2)


(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to export filesystems using NFS. This objective includes access restrictions, mounting an NFS filesystem on a client and securing NFS.


Key Knowledge Areas:

  • NFS version 3 configuration files
  • NFS tools and utilities
  • Access restrictions to certain hosts and/or subnets
  • Mount options on server and client
  • TCP Wrappers
  • Awareness of NFSv4


Terms and Utilities:

  • /etc/exports
  • exportfs
  • showmount
  • nfsstat
  • /proc/mounts
  • /etc/fstab
  • rpcinfo
  • mountd
  • portmapper


System Maintenance


Detailed Objective


Weight: 2

Description:
Candidates should be able to configure the syslog daemon. This objective also includes configuring the logging daemon to send log output to a central log server or accept log output as a central log server.

  • Key knowledge area(s):
    • syslog configuration files
    • syslog
    • standard facilities, priorities and actions
  • The following is a partial list of the used files, terms and utilities:
    • syslog.conf
    • syslogd
    • klogd
    • logger


Detailed Objective


Weight: 1

Description:
Candidates should be able to build a software package. This objective includes building or rebuilding both RPM and DEB packaged software.

  • Key knowledge area(s):
    • RPM description, software and commands
    • DEB description, software and commands
    • SPEC file format
    • debian/rules file format
  • The following is a partial list of the used files, terms and utilities:
    • rpmbuild
    • The contents of /var/lib/rpm and /usr/lib/rpm/
    • /var/cache/debconf/
    • dpkg-deb
    • dpkg


Detailed Objective (206.2)


(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to use system tools to back up important system data.


Key Knowledge Areas:

  • Knowledge about directories that have to be included in backups.
  • Awareness of network backup solutions such as Amanda, Bacula, Bareos and BackupPC.
  • Knowledge of the benefits and drawbacks of tapes, CDR, disk or other backup media.
  • Perform partial and manual backups.
  • Verify the integrity of backup files.
  • Partially or fully restore backups.


Terms and Utilities:

  • /bin/sh
  • dd
  • tar
  • /dev/st* and /dev/nst*
  • mt
  • rsync



System Customisation & Automation


Detailed Objective


Weight: 3

Description:
Candidates should be able to write simple scripts to automate tasks using different common scripting languages. Tasks for automation include checking processes, process execution, parsing logs, synchronising files across machines, monitoring files for changes, generating and sending e-mail alerts and notifying administrators when specified users log in or out.

  • Key knowledge area(s):
    • Standard text manipulation software such as awk and sed
    • BASH
    • cron configuration files
    • at daemon usage
    • Remote copying software such as rsync and scp
    • Perl: basic commands
  • The following is a partial list of the used files, terms and utilities:
    • perl, bash, awk, sed
    • crontab
    • at


Troubleshooting


Detailed Objective


Weight: 1

Description:
Candidates should be able to create bootdisks for system entrance and recovery disks for system repair.

  • Key knowledge area(s):
    • Filesystem configuration files
    • INIT configuration files
    • Any standard editor
    • Familiarity with the location and contents of the LDP Bootdisk-HOWTO
    • Tools to manipulate the MBR
    • Tools and utilities to copy and mount filesystems
    • GRUB
    • LILO
    • Loop devices
    • Making CD or USB storage devices bootable
  • The following is a partial list of the used files, terms and utilities:
    • /etc/fstab
    • /etc/inittab
    • /usr/sbin/rdev
    • /bin/cat
    • /bin/mount (includes -o loop switch)
    • /sbin/lilo
    • /bin/dd
    • /sbin/mke2fs
    • /usr/sbin/chroot


Detailed Objective


Weight: 1

Description:
Candidates should be able to determine, from bootup text, the 4 stages of the boot sequence and distinguish between each.

  • Key knowledge area(s):
    • boot loader start and hand off to kernel
    • kernel loading
    • hardware initialisation and setup
    • daemon initialisation and setup
  • The following is a partial list of the used files, terms and utilities:
    • Not applicable.


Detailed Objective


Weight: 1

Description:
Candidates should be able to determine specific stage failures and corrective techniques.

  • Key knowledge area(s):
    • LILO
    • GRUB
    • Know the different bootloader install locations on a hard disk or removable device
    • Overwriting standard bootloader options or using bootloader shells
  • The following is a partial list of the used files, terms and utilities:
    • The contents of /boot/ and /boot/grub/
    • /etc/lilo.conf
    • grub
    • grub-install
    • lilo



Detailed Objective


Weight: 1

Description:
Candidates should be able to recognise and identify boot loader and kernel specific stages and utilise kernel boot messages to diagnose kernel errors. This objective includes being able to identify and correct common hardware issues and be able to determine if the problem is hardware or software.

  • Key knowledge area(s):
    • /proc filesystem
    • Various system and daemon log files
    • Contents of /, /boot, and /lib/modules
    • Screen output during bootup
    • Kernel syslog entries in system logs (if entry is able to be gained)
    • Tools and utilities to analyse information about the used hardware
    • Tools and utilities to trace software
  • The following is a partial list of the used files, terms and utilities:
    • dmesg
    • /sbin/lspci
    • /usr/bin/lsdev
    • /sbin/lsmod
    • /sbin/modprobe
    • /sbin/insmod
    • /bin/uname
    • strace
    • strings
    • ltrace
    • lsof


Detailed Objective


Weight: 1

Description:
Candidates should be able to identify, diagnose and repair local system issues when using software from the command line.

  • Key knowledge area(s):
    • Core system variables
    • The contents of:
      • /etc/profile && /etc/profile.d/
      • /etc/init.d/
      • /etc/rc.*
      • /etc/sysctl.conf
      • /etc/bashrc
      • /etc/ld.so.conf
        • or other appropriate global shell configuration files
    • Any standard editor
    • Standard tools, utilities and commands to manipulate the above files and variables
  • The following is a partial list of the used files, terms and utilities:
    • /bin/ln
    • /bin/rm
    • /sbin/ldconfig
    • /sbin/sysctl


Detailed Objective


Weight: 1

Description:
Candidates should be able to identify common local system and user environment configuration issues and common repair techniques.

  • Key knowledge area(s):
    • Core system variables
    • init configuration files
    • init start process
    • cron configuration files
    • Login process
    • User-password storage files
    • Determine user group associations
    • SHELL configuration files of bash and csh
    • Analysing which processes or daemons are running
  • The following is a partial list of the used files, terms and utilities:
    • /etc/inittab
    • /etc/rc.local
    • /etc/rc.boot
    • /var/spool/cron/crontabs/
    • The default shell configuration file(s) in /etc/
    • /etc/login.defs
    • /etc/syslog.conf
    • /etc/passwd
    • /etc/shadow
    • /etc/group
    • /sbin/init
    • /usr/sbin/cron
    • /usr/bin/crontab


Networking


Detailed Objectives (205.1)


(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to configure a network device to be able to connect to a local, wired or wireless, and a wide-area network. This objective includes being able to communicate between various subnets within a single network including both IPv4 and IPv6 networks.


Key Knowledge Areas:

  • Utilities to configure and manipulate ethernet network interfaces
  • Configuring wireless networks.


Terms and Utilities:

  • ip
  • ifconfig
  • route
  • arp
  • iw
  • iwconfig
  • iwlist


Introduction to Getty


getty is the program you run for dialin. You don't need it for dialout. In addition to presenting a login prompt, it also may help answer the telephone. Originally getty was used for logging in to a computer from a dumb terminal. A major use of it today is for logging in to a Linux console. There are several different getty programs but a few of these work OK with modems for dialin. The getty program is usually started at boot-time. It must be called from the /etc/inittab file. In this file you may find some examples which you will likely need to edit a bit. Hopefully these examples will be for the flavor of getty installed on your PC.

There are four different getty programs to choose from that may be used with modems for dial-in: mgetty, uugetty, getty_em, and agetty. A brief overview is given in the following subsections. agetty is the weakest of the four and it's mainly for use with directly connected text-terminals. mgetty has support for fax and voice mail but uugetty doesn't. mgetty allegedly lacks a few of the features of uugetty. getty_em is a simplified version of uugetty. Thus mgetty is likely your best choice unless you are already familiar with uugetty (or find it difficult to get mgetty). The syntax for these getty programs differs, so be sure to check that you are using the correct syntax in /etc/inittab for whichever getty you use.

In order to see what documentation exists about the various gettys on your computer, use the "locate" command. Type: locate "*getty*" (including the quotes may help). Note that many distributions just call the program getty even though it may actually be agetty, uugetty, etc. But if you read the man page (type: man getty), it might disclose which getty it is. This should be the getty program with path /sbin/getty.

Getty "exits" after login (and can respawn) :

After you log in you will notice (by using "top", "ps -ax", or "pstree") that the getty process is no longer running. What happened to it? Why does getty restart again if your shell is killed? Here's why: after you type in your user name, getty takes it and calls the login program, telling it your user name. The getty process is replaced by the login process. The login process asks for your password, checks it and starts whatever process is specified in your password file. This process is often the bash shell. If so, bash starts and replaces the login process. Note that one process replaces another and that the bash shell process originally started as the getty process. The implications of this will be explained below.

Now in the /etc/inittab file getty is supposed to respawn (restart) if killed. It says so on the line that calls getty.

Example: getty entry from /etc/inittab:

S0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102


But if the bash shell (or the login process) is killed, getty respawns (restarts). Why? Well, both the login process and bash are replacements for getty and inherit the signal connections established by their predecessors. In fact, if you observe the details you will notice that the replacement process has the same process ID as the original process. Thus bash is sort of getty in disguise with the same process ID number. If bash is killed it is just as if getty were killed (even though getty isn't running anymore). This results in getty respawning.

When one logs out, all the processes on that serial port are killed including the bash shell. This may also happen (if enabled) if a hangup signal is sent to the serial port by a drop of DCD voltage by the modem. Either the logout or drop in DCD will result in getty respawning. One may force getty to respawn by manually killing bash (or login) either by hitting the k key, etc. while in "top" or with the "kill" command. You will likely need to kill it with signal 9 (which can't be ignored).


The cycle illustrated:

      init - spawns -> getty -- starts -> login - starts --> shell
        \                                                      /
         \---------------<<  returns control to <<------------/

You can identify the login shell by the minus sign at the start of its name.

Example: ps output (filtered):

walter   32255  0.0  0.7   4012  1772 pts/36   Ss   14:53   0:00 -bash

About mgetty


mgetty was written as a replacement for uugetty, which was in existence long before mgetty. Both are for use with modems, but mgetty is best (unless you are already committed to uugetty). mgetty may also be used for directly connected terminals. In addition to allowing dialup logins, mgetty also provides FAX support and auto PPP detection. It permits dialing out while mgetty is waiting for an incoming phone call. There is a supplemental program called vgetty which handles voicemail for some modems. mgetty documentation is fair (except for voice mail). To automatically start PPP one must edit /etc/mgetty/login.conf to enable "AutoPPP". You can find the latest information on mgetty at http://www.leo.org/~doering/mgetty/ and http://alpha.greenie.net/mgetty/

About uugetty


getty_ps contains two programs: getty is used for console and terminal devices, and uugetty for modems. Greg Hankins (former author of Serial-HOWTO) used uugetty so his writings about it are included here. See Uugetty.

About getty_em


This is a simplified version of uugetty. It was written by Vern Hoxie after he became fully confused with complex support files needed for getty_ps and uugetty. It is part of the collection of serial port utilities and information by Vern Hoxie available via ftp from scicom.alphacdc.com/pub/linux. The name of the collection is serial_suite.tgz.

About agetty


This subsection is long since the author tried using agetty for dialin. agetty is seemingly simple since there are no initialization files. But when I tried it, it opened the serial port even when there was no CD signal present. It then sent both a login prompt and the /etc/issue file to the modem in the AT-command state before a connection was made. The modem thinks all this is an AT command and, if it happens to contain any "at" strings, is likely to adversely modify your modem profile. Echo wars can start where getty and the modem send the same string back and forth over and over. You may see a "respawning too rapidly" error message if this happens. To prevent this you need to disable all echoing and result codes from the modem (E0 and Q1). Also use the -i option with agetty to prevent any /etc/issue file from being sent.

If you start getty on the modem port and a few seconds later find that you have the login process running on that port instead of getty, it means that a bogus user name has been sent to agetty from the modem. To keep this from happening, I had to save my dial-in profile in the modem so that it becomes effective at power-on. The other saved profile is for dial-out. Then any dial-out programs which use the modem must use a Z, Z0, or Z1 in their init string to initialize the modem for dial-out (by loading the saved dial-out profile). If the 1-profile is for dial-in you use Z1 to load it, etc. If you want to listen for dial-in later on, then the modem needs to be reset to the dial-in profile. Not all dial-out programs can do this reset upon exit from them.

Thus while agetty may work OK if you set up a dial-in profile correctly in the modem hardware, it's probably best suited for virtual consoles or terminals rather than modems. If agetty is running for dialin, there's no easy way to dial out. When someone first dials in to agetty, they should hit the return key to get the login prompt. agetty in the Debian distribution is just named getty.

About mingetty and fbgetty


mingetty is a small getty that will work only for monitors (the usual console) so you can't use it with modems for dialin. fbgetty is as above but supports framebuffers.


Configuring PAP/CHAP authentication for PPP

If the server to which you are connecting requires PAP or CHAP authentication, edit your PPP options file and add the following lines:

#
# force pppd to use your ISP user name as your 'host name' during the authentication process
name <your ISP user name> # you need to edit this line
#
# If you are running a PPP *server* and need to force PAP or CHAP, uncomment the appropriate
# one of the following lines. Do NOT use these if you are a client connecting to a PPP server (even if
# it uses PAP or CHAP), as this tells the SERVER to authenticate itself to your machine (which it
# almost certainly can't do - and the link will fail).
#+chap
#+pap
#
# If you are using ENCRYPTED secrets in the /etc/ppp/pap-secrets file, then uncomment the
# following line. Note: this is NOT the same as using MS encrypted passwords as can be
# set up in MS RAS on Windows NT.
#+papcrypt

Using MSCHAP

[edit | edit source]

Microsoft Windows NT RAS can be set up to use a variation on CHAP (Challenge/Handshake Authentication Protocol). In your PPP source tarball, you will find a file called README.MSCHAP80 that discusses this.

You can determine whether the server is requesting authentication using this protocol by enabling debugging for pppd. If the server is requesting MS CHAP authentication, you will see lines like:

rcvd [LCP ConfReq id=0x2 <asyncmap 0x0> <auth chap 80> <magic 0x46a3>]

The critical information here is auth chap 80. In order to use MS CHAP, you will need to recompile pppd to support it. Please see the README.MSCHAP80 file in the PPP source distribution for instructions on how to compile and use this variation.

You should note that at present this code supports only Linux PPP clients connecting to an MS Windows NT server. It does NOT support setting up a Linux PPP server to use MSCHAP80 authentication from clients.

The PAP/CHAP secrets file

[edit | edit source]

If you are using PAP or CHAP authentication, then you also need to create the secrets file. These are:

/etc/ppp/pap-secrets
/etc/ppp/chap-secrets

They must be owned by user root, group root, and have file permissions 740 for security. The first point to note about PAP and CHAP is that they are designed to authenticate computer systems, not users. "Huh? What's the difference?" I hear you ask. Well, once your computer has made its PPP connection to the server, ANY user on your system can use that connection - not just you. This is why you can set up a WAN (wide area network) link that joins two LANs (local area networks) using PPP.

PAP can (and CHAP DOES) require bidirectional authentication - that is, a valid name and secret are required on each computer for the other computer involved. However, this is NOT the way most PPP servers offering dial-up PAP-authenticated connections operate.

That being said, your ISP will probably have given you a user name and password to allow you to connect to their system and thence the Internet. Your ISP is not interested in your computer's name at all, so you will probably need to use the user name at your ISP as the name for your computer. This is done using the name option to pppd. So, if you are to use the user name given to you by your ISP, add the line:

name your_username_at_your_ISP

to your /etc/ppp/options file. Technically, you should really use user your_username_at_your_ISP for PAP, but pppd is sufficiently intelligent to interpret name as user when it is required to use PAP. The advantage of using the name option is that it is also valid for CHAP.

As PAP is for authenticating computers, technically you also need to specify a remote computer name. However, as most people have only one ISP, you can use a wild card (*) for the remote host name in the secrets file. It is also worth noting that many ISPs operate multiple modem banks connected to different terminal servers - each with a different name, but ACCESSED from a single (rotary) dial-in number. It can therefore be quite difficult in some circumstances to know ahead of time what the name of the remote computer is, as this depends on which terminal server you connect to!

The PAP secrets file

[edit | edit source]

The /etc/ppp/pap-secrets file looks like:

# Secrets for authentication using PAP
# client        server       secret     acceptable_local_IP_addresses

The four fields are white-space delimited, and the last one can be blank (which is what you want for a dynamic, and probably also for a static, IP allocation from your ISP). Suppose your ISP gave you a user name of fred and a password of flintstone. You would set the name fred option in /etc/ppp/options[.ttySx] and set up your /etc/ppp/pap-secrets file as follows:

# Secrets for authentication using PAP
# client        server  secret  acceptable local IP addresses
fred  * flintstone

This says: for the local machine name fred (which we have told pppd to use, even though it is not our local machine name) and for ANY server, use the password (secret) flintstone. Note that we do not need to specify a local IP address, unless we are required to FORCE a particular local, static IP address. Even if you try this, it is unlikely to work, as most PPP servers (for security reasons) do not allow the remote system to set the IP number it is to be given.

The CHAP secrets file

[edit | edit source]

CHAP requires mutual authentication - that is, you must allow for both your machine to authenticate the remote server AND the remote server to authenticate your machine.

So, if your machine is fred and the remote is barney, your machine would set name fred remotename barney and the remote machine would set name barney remotename fred in their respective /etc/ppp/options.ttySx files.

The /etc/ppp/chap-secrets file for fred would look like:

# Secrets for authentication using CHAP
# client        server  secret            acceptable local IP addresses
fred  barney flintstone
barney  fred wilma

and for barney:

# Secrets for authentication using CHAP
# client        server  secret            acceptable local IP addresses
barney          fred    flintstone
fred  barney wilma

Note in particular that both machines must have entries for bidirectional authentication. This allows the local machine to authenticate itself to the remote AND the remote machine to authenticate itself to the local machine.

Handling multiple PAP-authenticated connections

[edit | edit source]

Some users have more than one server to which they connect that use PAP. Provided that your user name is different on each machine to which you want to connect, this is not a problem.

However, many users have the same user name on two (or more - even all) systems to which they connect. This presents a problem in correctly selecting the appropriate line from /etc/ppp/pap-secrets.

As you might expect, PPP provides a mechanism for overcoming this. PPP allows you to set an 'assumed name' for the remote (server) end of the connection using the remotename option to pppd.

Let us suppose that you connect to two PPP servers using the user name fred. You would set up your /etc/ppp/pap-secrets file something like:

fred pppserver1 barney
fred pppserver2 wilma

Now, to connect to pppserver1 you would use name fred remotename pppserver1 in your ppp options file, and for pppserver2, name fred remotename pppserver2.

As you can select the ppp options file to use with pppd using the file filename option, you can set up a script to connect to each of your PPP servers, correctly picking the options file to use and hence selecting the right remotename option.
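A minimal sketch of such a wrapper (the file name, device and speed are assumptions for illustration):

#!/bin/sh
# Dial pppserver1; /etc/ppp/options.pppserver1 contains
# "name fred" and "remotename pppserver1".
exec /usr/sbin/pppd file /etc/ppp/options.pppserver1 /dev/ttyS1 115200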

Key terms, files and utilities:

/sbin/route
/sbin/ifconfig
PAP, CHAP, PPP
/etc/*

Exercises

[edit | edit source]


Detailed Objective (205.2)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to configure a network device to implement various network authentication schemes. This objective includes configuring a multi-homed network device and resolving communication problems.


Key Knowledge Areas:

  • Utilities to manipulate routing tables
  • Utilities to configure and manipulate ethernet network interfaces
  • Utilities to analyse the status of the network devices
  • Utilities to monitor and analyse the TCP/IP traffic


Terms and Utilities:

  • ip
  • ifconfig
  • route
  • arp
  • ss
  • netstat
  • lsof
  • ping, ping6
  • nc
  • tcpdump
  • nmap

Advanced Network Configuration and Troubleshooting

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure a network device to implement various network authentication schemes. This objective includes configuring a multi-homed network device, configuring a virtual private network and resolving networking and communication problems.

Key files, terms, and utilities include:

/sbin/route 
/sbin/ifconfig 
/bin/netstat 
/bin/ping 
/sbin/arp 
/usr/sbin/tcpdump 
/usr/sbin/lsof 
/usr/bin/nc

Network packet filtering

[edit | edit source]

Network packet filtering is done in one of three ways under Linux:

  • ipfwadm: kernel 2.0.x and up (RedHat 5.x)
  • ipchains: kernel 2.2.x and up (RedHat 6.x, 7.0)
  • iptables: kernel 2.4.x and up (RedHat 7.1 – 9.0)

Their design and capabilities are quite different - ipfwadm is considered obsolete, while iptables is the most advanced and current. The traversal of the iptables tables and chains is described below.

iptables needs some options configured in your kernel, either built in or as modules (a quick way to check them is shown after this list):

CONFIG_PACKET
This option allows applications and utilities that need to work directly with various network devices. Examples of such utilities are tcpdump and snort.
CONFIG_NETFILTER
This option is required if you're going to use your computer as a firewall or gateway to the Internet.
CONFIG_IP_NF_CONNTRACK
This module is needed for connection tracking. Connection tracking is used by, among other things, NAT and masquerading. If you need to firewall machines on a LAN, you should most definitely mark this option.
CONFIG_IP_NF_FTP
This module is required if you want to do connection tracking on FTP connections. Since FTP connections are quite hard to track in normal cases, conntrack needs a so-called helper; this option compiles the helper. If you do not add this module, you won't be able to FTP through a firewall or gateway properly.
CONFIG_IP_NF_IPTABLES
This option is required if you want to do any kind of filtering, masquerading or NAT. It adds the whole iptables identification framework to the kernel. Without it, you won't be able to do anything at all with iptables.
CONFIG_IP_NF_MATCH_LIMIT
This module isn't strictly required, but it provides the LIMIT match, which adds the possibility to control how many packets per time unit will be matched by a given rule. For example, -m limit --limit 3/minute would match a maximum of 3 packets per minute. This module can also be used to mitigate certain Denial of Service attacks.
CONFIG_IP_NF_MATCH_MAC
This allows us to match packets based on MAC addresses. Every Ethernet adapter has its own MAC address. We could, for instance, block packets based on the MAC address used, and thereby block a certain computer pretty effectively, since the MAC address very seldom changes.
CONFIG_IP_NF_MATCH_MARK
This allows us to use the MARK match. For example, if we use the MARK target to mark a packet, we can later match on that mark further on in the tables. This option provides the MARK match; the MARK target itself is described further down.
CONFIG_IP_NF_MATCH_MULTIPORT
This module allows us to match packets with a whole range of destination ports or source ports. Normally this wouldn't be possible, but with this match it is.
CONFIG_IP_NF_MATCH_TOS
With this match we can match packets based on their TOS field. TOS stands for Type Of Service. TOS can also be set by certain rules in the mangle table and via the ip/tc commands.
CONFIG_IP_NF_MATCH_TCPMSS
This option adds the possibility for us to match TCP packets based on their MSS field.
CONFIG_IP_NF_MATCH_STATE
This is one of the biggest improvements compared with ipchains. With this module we can do stateful matching on packets. For example, if we have already seen traffic in both directions of a TCP connection, a packet belonging to it will be counted as ESTABLISHED.
CONFIG_IP_NF_MATCH_UNCLEAN
This module will add the possibility for us to match IP, TCP, UDP and ICMP packets that don't conform to type or are invalid. We could for example drop these packets, but we never know if they are legitimate or not. Note that this match is still experimental and might not work perfectly in all cases.
CONFIG_IP_NF_MATCH_OWNER
This option will add the possibility for us to do matching based on the owner of a socket. For example, we can allow only the user root to have Internet access. This module was originally just written as an example on what could be done with the new iptables. Note that this match is still experimental and might not work for everyone.
CONFIG_IP_NF_FILTER
This module will add the basic filter table which will enable you to do IP filtering at all. In the filter table you'll find the INPUT, FORWARD and OUTPUT chains. This module is required if you plan to do any kind of filtering on packets that you receive and send.
CONFIG_IP_NF_TARGET_REJECT
This target allows us to specify that an ICMP error message should be sent in reply to incoming packets, instead of plainly dropping them dead to the floor. Keep in mind that TCP connections, as opposed to ICMP and UDP, are always reset or refused with a TCP RST packet.
CONFIG_IP_NF_TARGET_MIRROR
This allows packets to be bounced back to the sender of the packet. For example, if we set up a MIRROR target on destination port HTTP on our INPUT chain and someone tries to access this port, we would bounce his packets back to him and finally he would probably see his own homepage.
CONFIG_IP_NF_NAT
This module allows network address translation, or NAT, in its different forms. This option gives us access to the nat table in iptables. This option is required if we want to do port forwarding, masquerading, etc. Note that this option is not required for firewalling and masquerading of a LAN, but you should have it present unless you are able to provide unique IP addresses for all hosts.
CONFIG_IP_NF_TARGET_MASQUERADE
This module adds the MASQUERADE target. For instance, if we don't know what IP address we have toward the Internet, this is the preferred way of getting one, rather than using DNAT or SNAT. In other words, if we use DHCP, PPP, SLIP or some other connection that assigns us an IP address, we need to use this target instead of SNAT. Masquerading puts a slightly higher load on the computer than NAT, but it works without us knowing the IP address in advance.
CONFIG_IP_NF_TARGET_REDIRECT
This target is useful together with application proxies, for example. Instead of letting a packet pass right through, we remap them to go to our local box instead. In other words, we have the possibility to make a transparent proxy this way.
CONFIG_IP_NF_TARGET_LOG
This adds the LOG target and its functionality to iptables. We can use this module to log certain packets to syslogd and hence see what is happening to the packet. This is invaluable for security audits, forensics or debugging a script you are writing.
CONFIG_IP_NF_TARGET_TCPMSS
This option can be used to counter Internet Service Providers and servers who block ICMP Fragmentation Needed packets. This can result in web pages not getting through, small mails getting through while larger mails don't, ssh working but scp dying after the handshake, etc. We can use the TCPMSS target to overcome this by clamping our MSS (Maximum Segment Size) to the PMTU (Path Maximum Transmit Unit). This way, we'll be able to handle what the Netfilter authors themselves call "criminally brain-dead ISPs or servers" in the kernel configuration help.
CONFIG_IP_NF_COMPAT_IPCHAINS
Adds a compatibility mode with the obsolescent ipchains. Do not look to this as a real long-term solution for migrating from Linux 2.2 kernels to 2.4 kernels, since it may well be gone by kernel 2.6.
CONFIG_IP_NF_COMPAT_IPFWADM
Compatibility mode with obsolescent ipfwadm. Definitely don't look to this as a real long term solution.
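A quick way to check which of these options a running kernel provides is a sketch like the following (it assumes your distribution installs the kernel config under /boot):

# List the Netfilter-related options the running kernel was built with:
grep CONFIG_IP_NF /boot/config-$(uname -r)
# Options built as modules ("=m") can be loaded on demand, for example:
modprobe ip_conntrack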

When a packet first enters the firewall, it hits the hardware and then gets passed on to the proper device driver in the kernel. Then the packet starts to go through a series of steps in the kernel, before it is either sent to the correct application (locally), or forwarded to another host - or whatever happens to it.

First, let us have a look at a packet that is destined for our own local host. It would pass through the following steps before actually being delivered to the application that receives it (summarized in the traversal sketch below):

Note that this time the packet was passed through the INPUT chain instead of the FORWARD chain. Quite logical - and probably the only thing that seems logical about the traversal of tables and chains at first, but if you keep thinking about it, it will get clearer in time. Now we look at outgoing packets from our own local host and the steps they go through.

In this example, we're assuming that the packet is destined for another host on another network. The packet goes through the different steps in the following fashion (again, see the traversal sketch below):

As you can see, there are quite a lot of steps to pass through. The packet can be stopped at any of the iptables chains, or anywhere else if it is malformed; however, we are mainly interested in the iptables aspect of this lot. Do note that there are no specific chains or tables for different interfaces or anything like that. FORWARD is always passed by all packets that are forwarded over this firewall/router.

Do not use the INPUT chain for filtering in the previous scenario! INPUT is meant solely for packets destined for our local host that do not get routed to any other destination.

We have now seen how the different chains are traversed in three separate scenarios. If we were to figure out a good map of all this, it would look something like this:
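As a rough sketch (2.4-era Netfilter, following its standard documentation), the three paths are:

Incoming packet, destined for the local host:
    wire -> PREROUTING (mangle, nat) -> routing decision -> INPUT (filter) -> local process
Locally generated packet:
    local process -> routing decision -> OUTPUT (mangle, nat, filter) -> POSTROUTING (nat) -> wire
Forwarded packet:
    wire -> PREROUTING (mangle, nat) -> routing decision -> FORWARD (filter) -> POSTROUTING (nat) -> wire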

To clarify this image, consider this. If we get a packet into the first routing decision that is not destined for the local machine itself, it will be routed through the FORWARD chain. If the packet is, on the other hand, destined for an IP address that the local machine is listening to, we would send the packet through the INPUT chain and to the local machine.

Also worth noting is the fact that packets may be destined for the local machine, but the destination address may be changed within the PREROUTING chain by NAT. Since this takes place before the first routing decision, the packet will be routed according to the changed address. Do note that all packets travel through one or the other path in this image. If you DNAT a packet back to the same network that it came from, it will still travel through the rest of the chains until it is back out on the network.

Mangle table

[edit | edit source]

This table, as we've already noted, should mainly be used for mangling packets. In other words, you may freely use the mangle targets to change TOS (Type Of Service) fields and so on.

You are strongly advised not to use this table for any filtering; nor will any DNAT, SNAT or Masquerading work in this table.

Targets that are only valid in the mangle table:

  • TOS
  • TTL
  • MARK

The TOS target is used to set and/or change the Type of Service field in the packet. This could be used for setting up policies on the network regarding how a packet should be routed. Note that this has not been perfected and is not really implemented on the Internet: most routers don't care about the value in this field, and sometimes they act faultily on what they get. In other words, don't set this field on packets going to the Internet unless you want to make routing decisions on it with iproute2.

The TTL target is used to change the TTL (Time To Live) field of the packet. We could tell packets to have only a specific TTL, and so on. One good reason for this could be that we don't want to give ourselves away to nosy Internet Service Providers. Some Internet Service Providers do not like users running multiple computers on one single connection, and there are some known to look for a single host generating many different TTL values, taking this as one of many signs of multiple computers connected to a single connection.

The MARK target is used to set special mark values to the packet. These marks could then be recognized by the iproute2 programs to do different routing on the packet depending on what mark they have, or if they don't have any. We could also do bandwidth limiting and Class Based Queuing based on these marks.
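A minimal sketch of the MARK target and its use with iproute2 (the mark value, port and table number are arbitrary examples):

# Mark ssh traffic in the mangle table:
iptables -t mangle -A PREROUTING -p tcp --dport 22 -j MARK --set-mark 2
# Route marked packets through an alternate routing table:
ip rule add fwmark 2 table 100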

NAT table

[edit | edit source]

This table should only be used for NAT (Network Address Translation) on different packets. In other words, it should only be used to translate the packet's source field or destination field. Note that, as we have said before, only the first packet in a stream will hit this chain. After this, the rest of the packets will automatically have the same action taken on them as the first packet. The actual targets that do these kind of things are:

  • DNAT
  • SNAT
  • MASQUERADE

The DNAT target is mainly used in cases where you have a public IP and want to redirect accesses to the firewall to some other host (on a DMZ for example). In other words, we change the destination address of the packet and reroute it to the host.

SNAT is mainly used for changing the source address of packets. For the most part you'll hide your local networks or DMZ, etc. A very good example would be a firewall whose outside IP address we know, where we need to substitute our local network's IP numbers with that of our firewall. With this target the firewall will automatically SNAT and De-SNAT the packets, hence making it possible to make connections from the LAN to the Internet. If your network uses 192.168.0.0/netmask, for example, the packets would never get back from the Internet, because IANA has designated these networks (among others) as private, for use only in isolated LANs.

The MASQUERADE target is used in exactly the same way as SNAT, but the MASQUERADE target takes a little bit more overhead to compute. The reason for this, is that each time that the MASQUERADE target gets hit by a packet, it automatically checks for the IP address to use, instead of doing as the SNAT target does - just using the single configured IP address. The MASQUERADE target makes it possible to work properly with Dynamic DHCP IP addresses that your ISP might provide for your PPP, PPPoE or SLIP connections to the Internet.
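As a sketch of the three targets (the interface names and addresses are illustrative assumptions):

# DNAT: redirect incoming HTTP on the firewall to a DMZ host:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.5
# SNAT: rewrite outgoing packets to a fixed public address:
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.1
# MASQUERADE: the same idea for a dynamically assigned address:
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE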

Filter table

[edit | edit source]

The filter table is mainly used for filtering packets. We can match packets and filter them in whatever way we want. This is the place where we actually take action against packets, looking at what they contain and DROPping or ACCEPTing them depending on their content. Of course we may also do filtering earlier; however, this particular table is the place for which filtering was designed. Almost all targets are usable in this table. You now know that this table is the right place to do your main filtering.

The state machine

[edit | edit source]

The state machine is a special part within iptables that should really not be called the state machine at all, since it is really a connection tracking machine. However, most people recognize it under the first name, so throughout this chapter the two names will be used more or less synonymously. This should not be overly confusing. Connection tracking is done to let the Netfilter framework know the state of a specific connection. Firewalls that implement this are generally called stateful firewalls. A stateful firewall is generally much more secure than a non-stateful one, since it allows us to write much tighter rule-sets.

Within iptables, packets can be related to tracked connections in four different so called states. These are known as NEW, ESTABLISHED, RELATED and INVALID. We will discuss each of these in more depth later. With the --state match we can easily control who or what is allowed to initiate new sessions.

All of the connection tracking is done by a special framework within the kernel called conntrack. conntrack may be loaded either as a module or as an internal part of the kernel itself. Most of the time, we need and want more specific connection tracking than the default conntrack engine can maintain. Because of this, there are also more specific parts of conntrack that handle the TCP, UDP and ICMP protocols, among others. These modules grab specific, unique information from the packets, so that they can keep track of each stream of data. The information that conntrack gathers is then used to tell conntrack which state the stream is currently in. For example, UDP streams are generally uniquely identified by their destination IP address, source IP address, destination port and source port.

In previous kernels, we had the possibility to turn defragmentation on and off. However, since iptables and Netfilter - and connection tracking in particular - were introduced, this option has been removed. The reason is that connection tracking cannot work properly without defragmenting packets, and hence defragmentation has been incorporated into conntrack and is carried out automatically. It cannot be turned off, except by turning off connection tracking; defragmentation is always carried out if connection tracking is turned on.

All connection tracking is handled in the PREROUTING chain, except locally generated packets which are handled in the OUTPUT chain. What this means is that iptables will do all recalculation of states and so on within the PREROUTING chain. If we send the initial packet in a stream, the state gets set to NEW within the OUTPUT chain, and when we receive a return packet, the state gets changed in the PREROUTING chain to ESTABLISHED, and so on. If the first packet is not originated by ourself, the NEW state is set within the PREROUTING chain of course. So, all state changes and calculations are done within the PREROUTING and OUTPUT chains of the nat table.

The conntrack entries

[edit | edit source]

Let's take a brief look at a conntrack entry and how to read it in /proc/net/ip_conntrack, which lists all the current entries in your conntrack database. If you have the ip_conntrack module loaded, a cat of /proc/net/ip_conntrack might look like:

tcp      6 117 SYN_SENT src=192.168.1.6 dst=192.168.1.9 sport=32775 \
    dport=22 [UNREPLIED] src=192.168.1.9 dst=192.168.1.6 sport=22 \
    dport=32775 use=2

This example contains all the information that the conntrack module maintains to know which state a specific connection is in. First of all, we have a protocol, which in this case is tcp. Next, the same value in normal decimal coding. After this, we see how long this conntrack entry has to live. This value is set to 117 seconds right now and is decremented regularly until we see more traffic. This value is then reset to the default value for the specific state that it is in at that relevant point of time. Next comes the actual state that this entry is in at the present point of time. In the above mentioned case we are looking at a packet that is in the SYN_SENT state. The internal value of a connection is slightly different from the ones used externally with iptables. The value SYN_SENT tells us that we are looking at a connection that has only seen a TCP SYN packet in one direction. Next, we see the source IP address, destination IP address, source port and destination port. At this point we see a specific keyword that tells us that we have seen no return traffic for this connection. Lastly, we see what we expect of return packets. The information details the source IP address and destination IP address (which are both inverted, since the packet is to be directed back to us). The same thing goes for the source port and destination port of the connection. These are the values that should be of any interest to us.

The connection tracking entries may take on a series of different values, all specified in the conntrack headers available in linux/include/netfilter-ipv4/ip_conntrack*.h files. These values are dependent on which sub-protocol of IP we use. TCP, UDP or ICMP protocols take specific default values as specified in linux/include/netfilter-ipv4/ip_conntrack.h. We will look closer at this when we look at each of the protocols; however, we will not use them extensively through this chapter, since they are not used outside of the conntrack internals. Also, depending on how this state changes, the default value of the time until the connection is destroyed will also change.

Recently there was a new patch made available in iptables patch-o-matic, called tcp-window-tracking. This patch adds, among other things, all of the above timeouts as special sysctl variables, which means that they can be changed on the fly while the system is still running. Hence, it is no longer necessary to recompile the kernel every time you want to change the timeouts. These can be altered via the entries in the /proc/sys/net/ipv4/netfilter directory; look in particular at the /proc/sys/net/ipv4/netfilter/ip_ct_* variables.

When a connection has seen traffic in both directions, the conntrack entry erases the [UNREPLIED] flag; after further traffic, the entry gains the [ASSURED] flag, found close to the end of the entry. The [ASSURED] flag tells us that this connection is assured and that it will not be erased if we reach the maximum number of tracked connections. Thus, connections marked as [ASSURED] will not be erased, contrary to non-assured connections. How many connections the connection tracking table can hold depends on a variable that can be set through the ip-sysctl functions in recent kernels. The default value varies heavily depending on how much memory you have: with 128 MB of RAM you get 8192 possible entries, and with 256 MB, 16376 entries. You can read and set the value through the /proc/sys/net/ipv4/ip_conntrack_max setting.
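For example (the new value is an arbitrary illustration; run as root):

# Read the current maximum number of tracked connections:
cat /proc/sys/net/ipv4/ip_conntrack_max
# Raise it, e.g. on a busy gateway with plenty of RAM:
echo 32768 > /proc/sys/net/ipv4/ip_conntrack_max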

User-land states

[edit | edit source]

As you have seen, packets may take on several different states within the kernel itself, depending on the protocol in question. However, outside the kernel we have only the 4 states described previously: NEW, ESTABLISHED, RELATED and INVALID. These states can mainly be used with the state match, which matches packets based on their current connection tracking state. Briefly: NEW means the packet is the first we have seen for a connection; ESTABLISHED means the connection has seen traffic in both directions; RELATED means the connection was spawned by an already ESTABLISHED connection (such as an FTP data channel or an ICMP error); and INVALID means the packet cannot be identified or has no known state.

These states can be used together with the --state match to match packets based on their connection tracking state. This is what makes the state machine so incredibly strong and efficient for our firewall. Previously, we often had to open up all ports above 1024 to let all traffic back into our local networks again. With the state machine in place this is not necessary any longer, since we can now just open up the firewall for return traffic and not for all kinds of other traffic.
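A minimal sketch of such a rule-set fragment (the LAN interface name eth1 is an assumption):

# Let the LAN initiate connections and allow only return traffic back in:
iptables -A FORWARD -i eth1 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -j DROP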

TCP connections

[edit | edit source]

In this section and the upcoming ones, we will take a closer look at the states and how they are handled for each of the three basic protocols TCP, UDP and ICMP. Also, we will take a closer look at how connections are handled per default, if they can not be classified as either of these three protocols. We have chosen to start out with the TCP protocol since it is a stateful protocol in itself, and has a lot of interesting details with regard to the state machine in iptables.

A TCP connection is always initiated with the 3-way handshake, which establishes and negotiates the actual connection over which data will be sent. The whole session is begun with a SYN packet, then a SYN/ACK packet and finally an ACK packet to acknowledge the whole session establishment. At this point the connection is established and able to start sending data. The big problem is, how does connection tracking hook up into this? Quite simply really.

As far as the user is concerned, connection tracking works basically the same for all connection types. Have a look at the picture below to see exactly what state the stream enters during the different stages of the connection. As you can see, the connection tracking code does not really follow the flow of the TCP connection from the user's viewpoint. Once it has seen one packet (the SYN), it considers the connection as NEW. Once it sees the return packet (SYN/ACK), it considers the connection as ESTABLISHED. If you think about this for a second, you will understand why. With this particular implementation, you can allow NEW and ESTABLISHED packets to leave your local network, allow only ESTABLISHED connections back, and that will work perfectly. Conversely, if the connection tracking machine were to consider the whole connection establishment as NEW, we would never really be able to stop outside connections to our local network, since we would have to allow NEW packets back in again. To make things more complicated, there are a number of other internal states that are used for TCP connections inside the kernel, but which are not available to us in User-land. Roughly, they follow the state standards specified within RFC 793 - Transmission Control Protocol, on pages 21-23.

As you can see, it is really quite simple, seen from the user's point of view. However, looking at the whole construction from the kernel's point of view, it's a little more difficult. Let's look at an example. Consider exactly how the connection states change in the /proc/net/ip_conntrack table. The first state is reported upon receipt of the first SYN packet in a connection.

tcp      6 117 SYN_SENT src=192.168.1.5 dst=192.168.1.35 sport=1031 \
    dport=23 [UNREPLIED] src=192.168.1.35 dst=192.168.1.5 sport=23 \
    dport=1031 use=1

As you can see from the above entry, we have a precise state in which a SYN packet has been sent (the SYN_SENT flag is set), and to which no reply has yet been seen (witness the [UNREPLIED] flag). The next internal state will be reached when we see another packet in the other direction.

tcp      6 57 SYN_RECV src=192.168.1.5 dst=192.168.1.35 sport=1031 \
    dport=23 src=192.168.1.35 dst=192.168.1.5 sport=23 dport=1031 \
    use=1

Now we have received a corresponding SYN/ACK in return. As soon as this packet has been received, the state changes once again, this time to SYN_RECV. SYN_RECV tells us that the original SYN was delivered correctly and that the SYN/ACK return packet also got through the firewall properly. Moreover, this connection tracking entry has now seen traffic in both directions and is hence considered as having been replied to. This is not explicit, but rather assumed, as was the [UNREPLIED] flag above. The final step will be reached once we have seen the final ACK in the 3-way handshake.

tcp 6 431999 ESTABLISHED src=192.168.1.5 dst=192.168.1.35 \

    sport=1031 dport=23 src=192.168.1.35 dst=192.168.1.5 \
    sport=23 dport=1031 use=1

In the last example, we have gotten the final ACK in the 3-way handshake and the connection has entered the ESTABLISHED state, as far as the internal mechanisms of iptables are aware. After a few more packets, the connection will also become [ASSURED], as shown in the introduction section of this chapter. When a TCP connection is closed down, it is done in the following way and takes the following states.

As you can see, the connection is never really closed until the last ACK is sent. Do note that this picture only describes how it is closed down under normal circumstances. A connection may also, for example, be closed by sending a RST(reset), if the connection were to be refused. In this case, the connection would be closed down after a predetermined time.

When the TCP connection has been closed down, the connection enters the TIME_WAIT state, which is per default set to 2 minutes. This is used so that all packets that have gotten out of order can still get through our rule-set, even after the connection has already closed. This is used as a kind of buffer time so that packets that have gotten stuck in one or another congested router can still get to the firewall, or to the other end of the connection.

If the connection is reset by a RST packet, the state is changed to CLOSE. This means that the connection by default has 10 seconds before the whole connection is definitely closed down. RST packets are not acknowledged in any sense, and will break the connection directly. There are also other states than the ones we have told you about so far.

A TCP stream may also take a number of other internal states (such as FIN_WAIT, CLOSE_WAIT and LAST_ACK while closing down), each with its own timeout value.

These values are most definitely not absolute: they may change with kernel revisions, and they may also be changed via the proc file-system in the /proc/sys/net/ipv4/netfilter/ip_ct_tcp_* variables. The default values should, however, be fairly well established in practice. They are set in jiffies (1/100ths of a second), so 3000 means 30 seconds.
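Where the tcp-window-tracking patch mentioned above is applied, those timeouts can be inspected directly; a sketch:

# List the tunable TCP conntrack timeout variables:
ls /proc/sys/net/ipv4/netfilter/ | grep ip_ct_tcp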

Also note that the User-land side of the state machine does not look at TCP flags set in the TCP packets. This is generally bad, since you may want to allow packets in the NEW state to get through the firewall, but when you specify the NEW flag, you will in most cases mean SYN packets.

UDP connections

[edit | edit source]

UDP connections are in themselves not stateful connections, but rather stateless. There are several reasons why, mainly because they don't contain any connection establishment or connection closing; most of all, they lack sequencing. Receiving two UDP datagrams in a specific order does not say anything about the order in which they were sent. It is, however, still possible to set states on the connections within the kernel. Let's have a look at how a connection can be tracked and how it might look in conntrack.

As you can see, the connection is brought up almost exactly in the same way as a TCP connection. That is, from the user-land point of view. Internally, conntrack information looks quite a bit different, but intrinsically the details are the same. First of all, let's have a look at the entry after the initial UDP packet has been sent.

udp      17 20 src=192.168.1.2 dst=192.168.1.5 sport=137 dport=1025 \
    [UNREPLIED] src=192.168.1.5 dst=192.168.1.2 sport=1025 \
    dport=137 use=1

As you can see from the first and second values, this is a UDP packet. The first is the protocol name, and the second is the protocol number. This is just the same as for TCP connections. The third value marks how many seconds this state entry has to live. After this, we get the values of the packet that we have seen and the future expectations of packets over this connection reaching us from the initiating packet sender. These are the source, destination, source port and destination port. At this point, the [UNREPLIED] flag tells us that there's so far been no response to the packet. Finally, we get a brief list of the expectations for returning packets. Do note that the latter entries are in reverse order to the first values. The timeout at this point is set to 30 seconds, as per default.

udp      17 170 src=192.168.1.2 dst=192.168.1.5 sport=137 \
    dport=1025 src=192.168.1.5 dst=192.168.1.2 sport=1025 \
    dport=137 use=1

At this point the server has seen a reply to the first packet sent out, and the connection is now considered ESTABLISHED. This is not shown in the connection tracking entry, as you can see: the main difference is that the [UNREPLIED] flag has now gone. Moreover, the default timeout has changed to 180 seconds - but in this example that's by now been decremented to 170 seconds; in 10 seconds' time, it will be 160 seconds. One thing is missing, though, and may change a bit: the [ASSURED] flag described above. For the [ASSURED] flag to be set on a tracked connection, there must have been a small amount of traffic over that connection.

udp      17 175 src=192.168.1.5 dst=195.22.79.2 sport=1025 \
    dport=53 src=195.22.79.2 dst=192.168.1.5 sport=53 \
    dport=1025 [ASSURED] use=1

At this point, the connection has become assured. The connection looks exactly the same as the previous example, except for the [ASSURED] flag. If this connection is not used for 180 seconds, it times out. 180 seconds is a comparatively low value, but should be sufficient for most use. This value is reset to its full value for each packet that matches the same entry and passes through the firewall, just the same as for all of the internal states.

ICMP connections

[edit | edit source]

ICMP packets are far from a stateful stream, since they are only used for control purposes and should never establish any connections. There are four ICMP types that will generate return packets, however, and these can take 2 different states: NEW and ESTABLISHED. The ICMP types we are talking about are Echo request and reply, Timestamp request and reply, Information request and reply, and finally Address mask request and reply. Out of these, the timestamp request and information request are obsolete and could most probably just be dropped. However, the Echo messages are used in several setups, such as pinging hosts. Address mask requests are not used often, but could be useful at times and worth allowing. To get an idea of how this could look, have a look at the following image.

As you can see in the above picture, the host sends an echo request to the target, which is considered NEW by the firewall. The target then responds with an echo reply, which the firewall considers as state ESTABLISHED. When the first echo request has been seen, the following state entry goes into ip_conntrack.

icmp     1 25 src=192.168.1.6 dst=192.168.1.10 type=8 code=0 \
    id=33029 [UNREPLIED] src=192.168.1.10 dst=192.168.1.6 \
    type=0 code=0 id=33029 use=1

This entry looks a little bit different from the standard states for TCP and UDP, as you can see. The protocol is there, and the timeout, as well as the source and destination addresses. The problem comes after that, however: we now have 3 new fields called type, code and id. They are not special in any way: the type field contains the ICMP type and the code field contains the ICMP code. These are all listed in the ICMP types appendix. The final field, id, contains the ICMP ID. Each ICMP packet gets an ID set when it is sent, and when the receiver gets the ICMP message, it sets the same ID within the new ICMP message so that the sender will recognize the reply and will be able to connect it with the correct ICMP request.

The next field, we once again recognize as the [UNREPLIED] flag, which we have seen before. Just as before, this flag tells us that we are currently looking at a connection tracking entry that has seen only traffic in one direction. Finally, we see the reply expectation for the reply ICMP packet, which is the inversion of the original source and destination IP addresses. As for the type and code, these are changed to the correct values for the return packet, so an echo request is changed to echo reply and so on. The ICMP ID is preserved from the request packet.

The reply packet is considered as being ESTABLISHED, as we have already explained. However, we can know for sure that after the ICMP reply, there will be absolutely no more legal traffic in the same connection. For this reason, the connection tracking entry is destroyed once the reply has traveled all the way through the Netfilter structure.

In each of the above cases, the request is considered NEW, while the reply is considered ESTABLISHED: when the firewall sees a request packet it sets the state to NEW, and when the host sends a reply packet to that request, the reply is considered ESTABLISHED.

Note that this means that the reply packet must match the criterion given by the connection tracking entry to be considered as established, just as with all other traffic types.

ICMP requests have a default timeout of 30 seconds, which you can change in the /proc/sys/net/ipv4/netfilter/ip_ct_icmp_timeout entry. This should in general be a good timeout value, since it will be able to catch most packets in transit.

Another hugely important part of ICMP is the fact that it is used to tell the hosts what happened to specific UDP and TCP connections or connection attempts. For this simple reason, ICMP replies will very often be recognized as RELATED to original connections or connection attempts. A simple example would be the ICMP Host unreachable or ICMP Network unreachable. These should always be spawned back to our host if it attempts an unsuccessful connection to some other host, but the network or host in question could be down, and hence the last router trying to reach the site in question will reply with an ICMP message telling us about it. In this case, the ICMP reply is considered as a RELATED packet. The following picture should explain how it would look.


In the above example, we send out a SYN packet to a specific address. This is considered as a NEW connection by the firewall. However, the network the packet is trying to reach is unreachable, so a router returns a network unreachable ICMP error to us. The connection tracking code can recognize this packet as RELATED, thanks to the already added tracking entry, so the ICMP reply is correctly sent to the client, which will then hopefully abort. Meanwhile, the firewall has destroyed the connection tracking entry, since it knows this was an error message.

The same behavior as above is experienced with UDP connections if they run into any problem like the above. All ICMP messages sent in reply to UDP connections are considered as RELATED. Consider the following image.

This time a UDP packet is sent to the host. This UDP connection is considered as NEW. However, the network is administratively prohibited by some firewall or router on the way over. Hence, our firewall receives an ICMP Network Prohibited in return. The firewall knows that this ICMP error message is related to the already opened UDP connection and sends it as a RELATED packet to the client. At this point, the firewall destroys the connection tracking entry, and the client receives the ICMP message and should hopefully abort.

Default connections

[edit | edit source]

In certain cases, the conntrack machine does not know how to handle a specific protocol. This happens if it does not know about that protocol in particular, or doesn't know how it works. In these cases, it goes back to a default behavior. The default behavior is used on, for example, NETBLT, MUX and EGP. This behavior looks pretty much the same as the UDP connection tracking. The first packet is considered NEW, and reply traffic and so forth is considered ESTABLISHED.

When the default behavior is used, all of these packets will attain the same default timeout value. This can be set via the /proc/sys/net/ipv4/netfilter/ip_ct_generic_timeout variable. The default value here is 600 seconds, or 10 minutes. Depending on what traffic you are trying to send over a link that uses the default connection tracking behavior, this might need changing. Especially if you are bouncing traffic through satellites and such, which can take a long time.

Complex protocols and connection tracking

[edit | edit source]

Certain protocols are more complex than others. What this means when it comes to connection tracking, is that such protocols may be harder to track correctly. Good examples of these are the ICQ, IRC and FTP protocols. Each and every one of these protocols carries information within the actual data payload of the packets, and hence requires special connection tracking helpers to enable it to function correctly.

Let's take the FTP protocol as the first example. The FTP protocol first opens up a single connection that is called the FTP control session. When we issue commands through this session, other ports are opened to carry the rest of the data related to that specific command. These connections can be done in two ways, either actively or passively. When a connection is done actively, the FTP client sends the server a port and IP address to connect to. After this, the FTP client opens up the port and the server connects to that specified port from its own port 20 (known as FTP-Data) and sends the data over it.

The problem here is that the firewall will not know about these extra connections, since they were negotiated within the actual payload of the protocol data. Because of this, the firewall will be unable to know that it should let the server connect to the client over these specific ports.

The solution to this problem is to add a special helper to the connection tracking module, which scans through the data in the control connection for specific syntaxes and information. When it runs into the correct information, it adds that specific information as a RELATED expectation, and the firewall is then able to track the data connection thanks to that RELATED entry. Consider the following picture to understand the states when the FTP server has made the connection back to the client.

Passive FTP works the opposite way. The FTP client tells the server that it wants some specific data, upon which the server replies with an IP address and port to connect to. The client will, upon receipt of this data, connect to that specific port from an unprivileged port of its own, and get the data in question. If you have an FTP server behind your firewall, you will in other words require this module in addition to your standard iptables modules to let clients on the Internet connect to the FTP server properly. The same goes if you are extremely restrictive with your users, and only want to let them reach HTTP and FTP servers on the Internet while blocking all other ports. Consider the following image and its bearing on Passive FTP.

Some conntrack helpers are already available within the kernel itself. More specifically, the FTP and IRC protocols have conntrack helpers as of writing this. If you can not find the conntrack helpers that you need within the kernel itself, you should have a look at the patch-o-matic tree within user-land iptables. The patch-o-matic tree may contain more conntrack helpers, such as for the ntalk or H.323 protocols.

Conntrack helpers may either be statically compiled into the kernel, or as modules. If they are compiled as modules, you can load them with the following command :

modprobe ip_conntrack_*

Do note that connection tracking has nothing to do with NAT, and hence you may require more modules if you are NAT'ing connections as well. For example, if you wanted to NAT and track FTP connections, you would need the NAT module as well. All NAT helpers start with ip_nat_ and follow that naming convention; for example, the FTP NAT helper would be named ip_nat_ftp and the IRC module would be named ip_nat_irc. The conntrack helpers follow the same naming convention: the IRC conntrack helper would be named ip_conntrack_irc, while the FTP conntrack helper would be named ip_conntrack_ftp.
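For example, to both track and NAT FTP traffic, the module loading would look something like:

# Conntrack helper that parses the FTP control channel:
modprobe ip_conntrack_ftp
# Matching NAT helper, needed only if FTP connections are NAT'ed:
modprobe ip_nat_ftp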

How a rule is built

[edit | edit source]

As we have already explained, each rule is a line that the kernel looks at to find out what to do with a packet. If all the criteria - or matches - are met, we perform the target - or jump - instruction. Normally we would write our rules in a syntax that looks something like this:

iptables [-t table] command [match] [target/jump]

There is nothing that says that the target instruction has to be the last function in the line. However, you would usually adhere to this syntax to get the best readability. In any case, most of the rules you'll see are written this way, so if you read someone else's script you'll most likely recognize the syntax and easily understand the rule.

If you want to use a table other than the standard table, you could insert the table specification at the point at which [table] is specified. However, it is not necessary to state explicitly what table to use, since by default iptables uses the filter table to implement all commands. Neither do you have to specify the table at just this point in the rule; it could be set pretty much anywhere in the line. However, it is more or less standard to put the table specification at the beginning.

One thing to think about though: The command should always come first, or alternatively directly after the table specification. We use 'command' to tell the program what to do, for example to insert a rule or to add a rule to the end of the chain, or to delete a rule. We shall take a further look at this below. The match is the part of the rule that we send to the kernel that details the specific character of the packet, what makes it different from all other packets. Here we could specify what IP address the packet comes from, from which network interface, the intended IP address, port, protocol or whatever. There is a heap of different matches that we can use that we will look closer at further down.

Finally we have the target of the packet. If all the matches are met for a packet, we tell the kernel what to do with it. We could, for example, tell the kernel to send the packet to another chain that we've created ourselves, and which is part of this particular table. We could tell the kernel to drop the packet dead and do no further processing, or we could tell the kernel to send a specified reply to the sender. As with the rest of the content in this section, we'll look closer at it further down.
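Putting the pieces together, one concrete rule following the table/command/match/target pattern (the network and port are arbitrary examples) might be:

# Drop TCP packets from the 10.0.0.0/8 network aimed at our web port:
iptables -t filter -A INPUT -p tcp -s 10.0.0.0/8 --dport 80 -j DROP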

Tables

[edit | edit source]

The -t option specifies which table to use. By default, the filter table is used. The tables that may be specified with the -t option are the filter, nat and mangle tables described earlier. Do note that this is an extremely brief summary.

Commands

[edit | edit source]

In this section we will cover all the different commands and what can be done with them. The command tells iptables what to do with the rest of the rule that we send to the parser. Normally we would want either to add or delete something in some table or another. The following commands are available to iptables:

  • -A, --append: append a rule to the end of a chain
  • -D, --delete: delete a rule from a chain
  • -I, --insert: insert a rule at a given position in a chain
  • -R, --replace: replace a rule at a given position in a chain
  • -L, --list: list the rules in a chain
  • -F, --flush: delete all rules in a chain
  • -Z, --zero: zero the packet and byte counters
  • -N, --new-chain: create a new user-defined chain
  • -X, --delete-chain: delete a user-defined chain
  • -P, --policy: set the default policy of a built-in chain
  • -E, --rename-chain: rename a user-defined chain


You should always enter a complete command line, unless you just want to list the built-in help for iptables or get the version of the command. To get the version, use the -V option, and to get the help message, use the -h option. As usual, in other words. Next come a few options that can be used with various commands. Note that we tell you with which commands the options can be used and what effect they will have. Also note that we do not include here any options that affect rules or matches; instead, we'll take a look at matches and targets in a later section of this chapter.

Options:

Matches

[edit | edit source]

First of all we have the generic matches, which can be used in all rules. Then we have the TCP matches which can only be applied to TCP packets. We have UDP matches which can only be applied to UDP packets, and ICMP matches which can only be used on ICMP packets. Finally we have special matches, such as the state, owner and limit matches and so on. These final matches have in turn been narrowed down to even more subcategories, even though they might not necessarily be different matches at all.

Generic matches

[edit | edit source]

This section will deal with generic matches. A generic match is a kind of match that is always available, whatever protocol we are working with and whatever match extensions we have loaded; in other words, no special parameters at all are needed to use these matches. The --protocol match is also included here, even though it is more specific to protocol matches. For example, if we want to use a TCP match, we need to use the --protocol match and send TCP as an option to it. However, --protocol is also a match in itself, since it can be used to match specific protocols. The following matches are always available.


Implicit matches

[edit | edit source]

These are matches that are loaded implicitly. Implicit matches are implied, taken for granted, automatic - for example when we match on --protocol tcp without any further criteria. There are currently three types of implicit matches for three different protocols: TCP matches, UDP matches and ICMP matches. The TCP based matches contain a set of unique criteria that are available only for TCP packets, the UDP based matches another set available only for UDP packets, and likewise for ICMP packets. On the other hand, there are explicit matches that are loaded explicitly. Explicit matches are not implied or automatic; you have to specify them using the -m or --match option, which we will discuss in the next section.

TCP matches

[edit | edit source]

These matches are protocol specific and are only available when working with TCP packets and streams. To use these matches, you need to specify --protocol tcp on the command line before trying to use them. Note that the --protocol tcp match must be to the left of the protocol specific matches. These matches are loaded implicitly in a sense, just as the UDP and ICMP matches are loaded implicitly. The other matches will be looked over in the continuation of this section, after the TCP match section.
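
For example, the following rule uses the implicitly loaded TCP matches --dport and --syn; port 22 is just an example:

iptables -A INPUT -p tcp --dport 22 --syn -j ACCEPT

Note that -p tcp comes first, as required, so that the TCP specific matches are available.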

UDP matches

[edit | edit source]

These matches will only work together with UDP packets. They are implicitly loaded when you specify the --protocol UDP match and will be available after this specification. Note that UDP packets are not connection oriented, and hence there are no flags to set in the packet to indicate what the datagram is supposed to do, such as opening or closing a connection, or whether it is simply sending data. UDP packets do not require any kind of acknowledgment either. If they are lost, they are simply lost (not taking ICMP error messaging etc. into account). This means that there are far fewer matches to work with for a UDP packet than there are for TCP packets. Note that the state machine will work on all kinds of packets even though UDP and ICMP are counted as connectionless protocols. The state machine works pretty much the same on UDP packets as on TCP packets.
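
As a brief sketch, the UDP matches are limited to source and destination ports; port 53 is just an example here:

iptables -A INPUT -p udp --dport 53 -j ACCEPT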

ICMP matches

[edit | edit source]

These are the ICMP matches. These packets are even more ephemeral, that is to say short lived, than UDP packets, in the sense that they are connectionless. The ICMP protocol is mainly used for error reporting and for connection control and suchlike. ICMP is not a protocol subordinated to the IP protocol, but more of a protocol that augments the IP protocol and helps in handling errors. The headers of ICMP packets are very similar to the IP headers, but differ in a number of ways. The main feature of this protocol is the type header, which tells us what the packet is for. For example, if we try to access an inaccessible IP address, we would normally get an ICMP host unreachable in return. For a complete listing of ICMP types, see the ICMP types appendix. There is only one ICMP specific match available for ICMP packets, and hopefully this should suffice. This match is implicitly loaded when we use the --protocol ICMP match and we get access to it automatically. Note that all the generic matches can also be used, so that among other things we can match on the source and destination addresses.
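
The single ICMP-specific match is --icmp-type. For instance, to accept incoming echo requests (pings):

iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT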

Explicit matches

[edit | edit source]

Explicit matches are those that have to be specifically loaded with the -m or --match option. State matches, for example, demand the directive -m state prior to entering the actual match that you want to use. Some of these matches may be protocol specific. Some may be unconnected with any specific protocol - for example connection states. These might be NEW (the first packet of an as yet unestablished connection), ESTABLISHED (a connection that is already registered in the kernel), RELATED (a new connection that was created by an older, established one) etc. A few may just have been evolved for testing or experimental purposes, or just to illustrate what iptables is capable of. This in turn means that not all of these matches may at first sight be of any use. Nevertheless, it may well be that you personally will find a use for specific explicit matches. And there are new ones coming along all the time, with each new iptables release. Whether you find a use for them or not depends on your imagination and your needs. The difference between implicitly loaded matches and explicitly loaded ones is that the implicitly loaded matches will automatically be loaded when, for example, you match on the properties of TCP packets, while explicitly loaded matches will never be loaded automatically - it is up to you to discover and activate explicit matches.

Limit match

[edit | edit source]

The limit match extension must be loaded explicitly with the -m limit option. This match can, for example, be used to advantage to give limited logging of specific rules etc. For example, you could use this to match all packets that do not exceed a given rate, and after this rate has been exceeded, limit logging of the event in question. Think of a time limit: you could limit how many times a certain rule may be matched in a certain time frame, for example to lessen the effects of DoS SYN flood attacks. This is its main usage, but there are more usages, of course. The limit match may also be inverted by adding a ! flag in front of the limit match. It would then be expressed as -m limit ! --limit 5/s. This means that all packets will be matched after they have broken the limit.

To further explain the limit match, it is basically a token bucket filter. Consider a bucket that can hold a limited number of tokens: each matching packet uses up one token, and when the bucket is empty the match fails. The --limit option tells us how many tokens are put back into the bucket per time-unit, while the --limit-burst option tells us how big the bucket is in the first place. So, setting --limit 3/minute --limit-burst 5 and then receiving 5 matches will empty the bucket. Every 20 seconds thereafter, the bucket is refilled with another token, and so on until --limit-burst tokens are available again or until they get used.

Consider the example below for further explanation of how this may look.

  1. We set a rule with -m limit --limit 5/second --limit-burst 10. The limit-burst token bucket is set to 10 initially. Each packet that matches the rule uses a token.
  2. We get packets that match, 1-2-3-4-5-6-7-8-9-10, all within a 1/1000 of a second.
  3. The token bucket is now empty. Once the token bucket is empty, the packets that would otherwise qualify for the rule no longer match it and proceed to the next rule, if any, or hit the chain policy.
  4. For each 1/5 s without a matching packet, the token count goes up by 1, up to a maximum of 10. 2 seconds after receiving the 10 packets, we will once again have 10 tokens left.
  5. And of course, the bucket loses 1 token for each matching packet it receives.
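
As a sketch of the classic use, the following pair of rules accepts new TCP connections only at a limited rate and drops the excess, which dampens SYN floods; the numbers mirror the example above:

iptables -A INPUT -p tcp --syn -m limit --limit 5/second --limit-burst 10 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP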

MAC match

[edit | edit source]

The MAC (Ethernet Media Access Control) match can be used to match packets based on their MAC source address. As of writing this documentation, this match is a little limited; in the future it may be extended and become more useful. As previously said, this match can only be used on the source MAC address.

Do note that to use this module we explicitly load it with the -m mac option. The reason for pointing this out is that a lot of people wonder whether it should not be -m mac-source; it should not.
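
For example, to drop all packets from one particular Ethernet card (the address below is made up):

iptables -A INPUT -m mac --mac-source 00:00:00:00:00:01 -j DROP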

Mark match

[edit | edit source]

The mark match extension is used to match packets based on the marks they have set. A mark is a special field, only maintained within the kernel, that is associated with the packets as they travel through the computer. Marks may be used by different kernel routines for such tasks as traffic shaping and filtering. As of today, there is only one way of setting a mark in Linux, namely the MARK target in iptables. This was previously done with the FWMARK target in ipchains, which is why people still refer to FWMARK in advanced routing areas. The mark field is an unsigned integer, giving 4294967296 possible values on a 32-bit system. In other words, you are probably not going to run into this limit for quite some time.
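
A minimal sketch of setting a mark in the mangle table and then matching on it elsewhere (the mark value and port are arbitrary examples):

iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 2
iptables -A FORWARD -m mark --mark 2 -j ACCEPT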

Multiport match

[edit | edit source]

The multiport match extension can be used to specify multiple destination ports and port ranges. Without this match, you would have to use multiple rules of the same type just to match different ports.

You cannot use both standard port matching and multiport matching at the same time; for example, you can't write: --sport 1024:63353 -m multiport --dport 21,23,80. This will simply not work. What in fact happens, if you do, is that iptables honors the first element in the rule and ignores the multiport instruction.
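
Used on its own, the match looks like this (the option spelling has varied between iptables releases; newer ones use --dports, older ones --destination-port):

iptables -A INPUT -p tcp -m multiport --dports 21,23,80 -j ACCEPT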

Owner match

[edit | edit source]

The owner match extension is used to match packets based on the identity of the process that created them. The owner can be specified as the user ID, group ID, process ID, or session ID of the process in question. This extension was originally written as an example of what iptables could be used for. The owner match only works within the OUTPUT chain, for obvious reasons: it is pretty much impossible to find out any information about the identity of the instance that sent a packet from the other end, or where there is an intermediate hop to the real destination. Even within the OUTPUT chain it is not very reliable, since certain packets may not have an owner. Notorious packets of that sort are (among other things) the different ICMP responses. ICMP responses will never match.
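
For instance, to drop all outgoing packets created by processes running as UID 500 (an arbitrary example user):

iptables -A OUTPUT -m owner --uid-owner 500 -j DROP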

State match

[edit | edit source]

The state match extension is used in conjunction with the connection tracking code in the kernel. The state match accesses the connection tracking state of the packets from the conntracking machine. This allows us to know in what state the connection is, and works for pretty much all protocols, including stateless protocols such as ICMP and UDP. In all cases, there will be a default timeout for the connection and it will then be dropped from the connection tracking database. This match needs to be loaded explicitly by adding a -m state statement to the rule. You will then have access to one new match called state.
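
The canonical use is a stateful ruleset along these lines, which accepts return traffic and then only has to police packets that open new connections:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT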

TOS match

[edit | edit source]

The TOS match can be used to match packets based on their TOS field. TOS stands for Type Of Service, consists of 8 bits, and is located in the IP header. This match is loaded explicitly by adding -m tos to the rule. TOS is normally used to inform intermediate hosts of the precedence of the stream and its content (strictly speaking it does not, but it informs of any specific requirements for the stream, such as it having to be sent as fast as possible, or it needing to carry as much payload as possible). How different routers and administrators deal with these values varies. Most do not care at all, while others try their best to do something good with the packets in question and the data they provide.
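
The match takes either a numeric value or a mnemonic (running iptables -m tos -h lists the names). For example, 0x10 corresponds to Minimize-Delay:

iptables -A INPUT -p tcp -m tos --tos 0x10 -j ACCEPT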

TTL match

[edit | edit source]

The TTL match is used to match packets based on their TTL (Time To Live) field residing in the IP header. The TTL field contains 8 bits of data and is decremented every time it is processed by an intermediate host between the client and recipient host. If the TTL reaches 0, an ICMP type 11 code 0 (TTL equals 0 during transit) or code 1 (TTL equals 0 during reassembly) is transmitted to the party sending the packet, informing it of the problem. This match is only used to match packets based on their TTL, and not to change anything. The latter, incidentally, applies to all kinds of matches. To load this match, you need to add an -m ttl to the rule.
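
For example, to log packets arriving with a TTL of exactly 64 (the option is spelled --ttl in older releases and --ttl-eq in newer ones):

iptables -A INPUT -m ttl --ttl 64 -j LOG --log-prefix "ttl 64: "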

Targets/Jumps

[edit | edit source]

The target/jumps tells the rule what to do with a packet that is a perfect match with the match section of the rule. There are a couple of basic targets, the ACCEPT and DROP targets, which we will deal with first. However, before we do that, let us have a brief look at how a jump is done.

The jump specification is done in exactly the same way as in the target definition, except that it requires a chain within the same table to jump to. To jump to a specific chain, it is of course a prerequisite that that chain exists. As we have already explained, a user-defined chain is created with the -N command. For example, let's say we create a chain in the filter table called tcp_packets, like this:

iptables -N tcp_packets

We could then add a jump target to it like this:

iptables -A INPUT -p tcp -j tcp_packets

We would then jump from the INPUT chain to the tcp_packets chain and start traversing that chain. When/If we reach the end of that chain, we get dropped back to the INPUT chain and the packet starts traversing from the rule one step below where it jumped to the other chain (tcp_packets in this case). If a packet is ACCEPTed within one of the sub chains, it will be ACCEPT'ed in the superset chain also and it will not traverse any of the superset chains any further. However, do note that the packet will traverse all other chains in the other tables in a normal fashion.

Targets on the other hand specify an action to take on the packet in question. We could, for example, DROP or ACCEPT the packet depending on what we want to do. There are also a number of other actions we may want to take, which we will describe further on in this section. Jumping to targets may incur different results. Some targets will cause the packet to stop traversing that specific chain and superior chains, as described above. Good examples of such rules are DROP and ACCEPT. Rules that are stopped will not pass through any of the rules further on in the chain or in superior chains. Other targets may take an action on the packet, after which the packet will continue passing through the rest of the rules. Good examples of this would be the LOG, ULOG and TOS targets. These targets can log the packets, mangle them and then pass them on to the other rules in the same set of chains.
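
A common pattern that exploits this difference is to log a packet with the non-terminating LOG target and then drop it with the next rule; port 23 is just an example:

iptables -A INPUT -p tcp --dport 23 -j LOG --log-prefix "telnet attempt: "
iptables -A INPUT -p tcp --dport 23 -j DROP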

Exercises

[edit | edit source]

Mail & News

[edit | edit source]

Detailed Objective

[edit | edit source]

Weight: 1

Description: Install and maintain mailing lists. Monitor and resolve problems by viewing the logs.

  • Key knowledge area(s):
    • Install, configure and manipulate mailing lists
    • Mailman configuration files, terms and utilities
    • Majordomo configuration files, terms and utilities
    • Ezmlm configuration files, terms and utilities
  • The following is a partial list of the used files, terms and utilities:
    • Not applicable

Configuring mailing lists

[edit | edit source]

Majordomo is a mailing list management program. Its goal is to handle all incoming mails to a particular email address, and re-distribute them to a list of email addresses. Majordomo also handles adding and deleting an email address from its lists.

Since Majordomo is responsible for managing E-mail lists, Majordomo relies heavily on an MTA such as Sendmail, Smail, Qmail or Postfix.

The aliases file (usually /etc/aliases) is used for making aliases for E-mail addresses. For example, once Majordomo is installed, usually an entry in the aliases file reads:

majordomo-owner: jarchie

This entry means that all mail addressed to majordomo-owner@host.com will actually be sent to jarchie@host.com. Notice it is unnecessary to append the @host.com to jarchie because both users are on the same host. If it were desired to redirect the message to a different user on a different host, one would have to add the @host.com portion.

Another type of entry in the aliases file allows E-mail to be redirected to multiple addresses listed in a file:

testlist: :include:/usr/local/majordomo-1.94.5/lists/testlist

This entry states that any message sent to testlist@host.com will be redirected to all the addresses listed in the file /usr/local/majordomo-1.94.5/lists/testlist. The testlist file might look something like this:

johnarchie@emeraldis.com
srobirds@yahoo.com
acreswell@geocities.com

Majordomo is able to add or remove addresses from a list by taking advantage of this feature. When a subscribe request is processed, the user's E-mail address is appended to the testlist file; when an unsubscribe request is processed, the user's E-mail address is removed from the testlist file. One can also add or remove addresses manually simply by editing the file with a text editor such as vi.

Since Majordomo needs to be able to process commands sent to it via E-mail, Sendmail must be able to execute the Majordomo program and pass the message to it. This is done by adding another type of entry to the aliases file:

majordomo:  "|/usr/local/majordomo-1.94.5/wrapper majordomo"

The program /usr/local/majordomo-1.94.5/wrapper is a wrapper (SUID and SGID majordomo or daemon, depending on the configuration) that runs the Majordomo program. The quotation marks around the second part of the alias entry are used to tell Sendmail that this part of the entry is all one statement; the quotation marks would be unnecessary if there were not a space between wrapper and majordomo. The | is known as a "pipe"; it is used to tell Sendmail to send the E-mail to the wrapper via the standard input. (Since all the wrapper does here is to call majordomo, the E-mail is actually being sent to Majordomo.) The wrapper accepts one parameter: the name of the program it is supposed to execute. (Any parameters after the first will be passed to the program the wrapper is executing.) For security reasons, the wrapper only executes programs located in the Majordomo directory, /usr/local/majordomo-1.94.5/. This restriction prevents a user from using the wrapper to run programs that should not have Majordomo privileges. (E.g., wrapper /bin/vi would allow any user to edit any Majordomo configuration file.) When a message is sent to majordomo@host.com, Sendmail starts up the wrapper which, in turn, starts up majordomo, and Sendmail sends the message to the majordomo script via the standard input. Majordomo then extracts the commands out of the message and responds appropriately.

Majordomo is, of course, the piece of code that this document revolves around; it consists of a collection of Perl scripts with the sole purpose of managing mailing lists.

Majordomo must run under a specific UID and GID so when any of the scripts are run, they will run under Majordomo's UID. Thus, it is necessary to decide what UID and GID Majordomo should run under. Also, Majordomo must be a Sendmail trusted user.

Check the /etc/passwd and /etc/group files to find a UID and GID that are not taken. For this example, a UID of 16 and a GID of 16 were chosen. You also have to decide on the location where the Majordomo scripts will reside. If you are using a shadowed password file, add entries similar to

majordomo:x:16:16:Majordomo List Manager:/usr/local/majordomo-1.94.5:

to your /etc/passwd and add an appropriate entry to /etc/shadow.

majordomo:*:10883:0:88888:7:::

Use the other entries in these files as a guide for exactly what should be added. These are only the values for my system. If you are not using shadowed passwords, only an entry in the /etc/passwd file is necessary.

To create a Majordomo group, add a line similar to

majordomo:x:16:jarchie

to your /etc/group file. Appending your username to the end of the line will give you access to the Majordomo files that are group writable.

The Makefile contains all the information needed to install Majordomo; it is usually necessary to edit lines in the Makefile that refer to system specific settings so Majordomo will be able to install cleanly on your system. Most of the default settings are correct; however, the following settings, almost invariably, need to be changed on a per system basis.

PERL = /bin/perl
CC = cc
W_HOME = /usr/test/majordomo-$(VERSION)
MAN = $(W_HOME)/man
W_USER = 123
W_GROUP = 45

should be changed to something more appropriate for your system. For example, in my setup, the values were changed to

PERL = /usr/bin/perl
CC = gcc
W_HOME = /usr/local/majordomo-1.94.5
MAN = /usr/man
W_USER = 16
W_GROUP = 16

Also the majordomo.cf file must be created. An easy way to create this file is to copy the provided sample.cf file to majordomo.cf and edit it.

Again, most of the settings are correct by default, but the following lines might need to be changed for your system from:

$whereami = "example.com";
$whoami = "Majordomo\@$whereami";
$whoami_owner = "Majordomo-Owner\@$whereami";
$homedir = "/usr/test/majordomo";
$digest_work_dir = "/usr/local/mail/digest";
$sendmail_command = "/usr/lib/sendmail";

to something more appropriate such as

$whereami = "kes.emeraldis.com";
$whoami = "majordomo\@$whereami";
$whoami_owner = "majordomo-owner\@$whereami";
$homedir = "/usr/local/majordomo-1.94.5";
$digest_work_dir = "/usr/local/majordomo-1.94.5/digest";
$sendmail_command = "/usr/sbin/sendmail";

$whoami and $whoami_owner do not need to be changed for Majordomo to work; however, I changed them because I like to avoid typing capital letters. $digest_work_dir is a temporary directory where digest files should be placed; this directory should be assigned to wherever you want digests to be stored. If you do not plan to use digested lists, do not worry about this option. $whereami, $homedir, and $sendmail_command should be changed to appropriate values for your system. Unlike the Makefile, these options can always be changed after Majordomo is installed by editing majordomo.cf in the directory where Majordomo was installed. (The configuration file is simply copied during setup.)

The next step is to compile the Majordomo wrapper. The wrapper is the only Majordomo component that needs to be compiled because everything else is a collection of perl scripts and, therefore, is not compiled.

$ make wrapper

To install the Majordomo files, execute the commands

# make install
# make install-wrapper

The first command can be done as the Majordomo user (assuming majordomo can create or has access to $home_dir), but the second command needs to be done as root so the installation script can SUID root the Majordomo wrapper. (Since majordomo was created without a login shell or password, if you want to execute the first command as majordomo, you will need to su majordomo as root in order to become majordomo.)

Sendmail aliases must be created for Majordomo so commands sent to Majordomo can be processed by majordomo, and an alias for the Majordomo owner must be created so people can E-mail you through the standard owner-majordomo address. Add the following entries to your aliases file:

majordomo:       "|/usr/local/majordomo-1.94.5/wrapper majordomo"
owner-majordomo: jarchie
majordomo-owner: jarchie

Then test your configuration. As a regular user (not as majordomo or as root), run:

$ /usr/local/majordomo-1.94.5/wrapper config-test

This program can detect most problems in the Majordomo installation.

To create a list, create a file with the name of the list in the Majordomo lists directory. For example, to create a list called test, create a test file as Majordomo:

[root@kes /]# su majordomo
[majordomo@kes /]$ touch /usr/local/majordomo-1.94.5/lists/test

and add the related aliases:

test:        :include:/usr/local/majordomo-1.94.5/lists/test
owner-test:    jarchie
test-request:  "|/usr/local/majordomo-1.94.5/wrapper request-answer test"
test-approval: jarchie

Now test the operation of the list by issuing a lists command to Majordomo:

[jarchie@kes jarchie]$ echo lists | mail majordomo

It should only take a second for majordomo to reply with a message containing all the lists which are currently set up. Next, try issuing a help command.

[jarchie@kes jarchie]$ echo help | mail majordomo

Majordomo should reply with a list of all commands that Majordomo accepts. It might be a good idea to save the message for future reference.

To see if the aliases are working properly, try subscribing and unsubscribing yourself to the list :

[jarchie@kes jarchie]$ echo subscribe test | mail majordomo

You will receive an E-mail message containing instructions on how to confirm your subscription as well as a letter confirming that your command was successful. After sending back your confirmation, Majordomo should send back two letters--one letter stating that your subscribe request was successful and another letter welcoming you to the test list. The owner of the list will also be sent a message stating that you have subscribed to the list.

To unsubscribe from a list, send an unsubscribe command:

[jarchie@kes jarchie]$ echo unsubscribe test | mail majordomo

You should be sent back a letter stating that your command was successful.

For some lists, it may be desirable to have Majordomo process messages before they reach the list. For example, Majordomo has the resend script to automatically filter messages based on content (such as taboo words), to prevent people from sending Majordomo commands to the list, and other features. To use these options, it is necessary to use a better set of aliases such as:

test:        "|/usr/local/majordomo-1.94.5/wrapper resend -l test test-list"
test-list:   :include:/usr/local/majordomo-1.94.5/lists/test
owner-test:  jarchie
test-owner:  jarchie
test-request:  "|/usr/local/majordomo-1.94.5/wrapper majordomo -l test"

The last entry allows someone simply to send a message to test-request@kes.emeraldis.com with the text subscribe rather than sending a letter to majordomo@kes.emeraldis.com with the text subscribe test. Also, note that if sendmail is using smrsh, the above aliases should reference the copy of the wrapper in the safe path--usually /etc/smrsh/wrapper.

It is common for Majordomo's permissions to be set incorrectly, causing Majordomo to work improperly. Fortunately, Sendmail and Majordomo typically give decent error messages indicating the problem. For example, the lists directory must be executable by the user sendmail setuids to, typically mail or daemon. If sendmail cannot execute lists, the permissions must be loosened.

[root@kes root]# chmod +x /usr/local/majordomo-1.94.5/lists

Another common problem is caused by the lists directory being group writable. To solve this problem, one can either clear the group writable bit, or use the sendmail option IncludeFileInGroupWritableDirPath.

Majordomo is intended to run on an isolated system; there are a couple of well-known security holes in the scripts that allow any local user capable of executing the wrapper to execute code as the majordomo user. If Majordomo must be run on a system providing users with shell access, then it is advisable to tighten up permissions on the wrapper. This can be done by clearing the world executable bit and changing the group of the wrapper to the user that needs to run the Majordomo scripts. For example, if Sendmail and MajorCool are both being used to execute the wrapper, use the commands

[root@kes root]# cp /usr/local/majordomo-1.94.5/wrapper /etc/smrsh/wrapper
[root@kes root]# chmod 4750 /usr/local/majordomo-1.94.5/wrapper
[root@kes root]# chown root:nobody /usr/local/majordomo-1.94.5/wrapper
[root@kes root]# chmod 4750 /etc/smrsh/wrapper
[root@kes root]# chown root:mail /etc/smrsh/wrapper

to secure the system. This will allow sendmail (while running under mail) to execute /etc/smrsh/wrapper while allowing the webserver's MajorCool (running under nobody) to execute /usr/local/majordomo-1.94.5/wrapper. This solution, however, will allow anyone with the UID or GID of mail or nobody to also obtain access to the majordomo account. To protect the nobody account, it is important not to allow normal users to make use of server side includes or cgi scripts unless those services do not run under nobody.

Key terms, files and utilities: Majordomo, MTA

Exercises

[edit | edit source]


Detailed Objectives (211.1)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to manage an email server, including the configuration of e-mail aliases, e-mail quotas and virtual e-mail domains. This objective includes configuring internal e-mail relays and monitoring e-mail servers.


Key Knowledge Areas:

  • Configuration files for postfix.
  • Basic TLS configuration for postfix
  • Basic knowledge of the SMTP protocol
  • Awareness of sendmail and exim


Terms and Utilities:

  • Configuration files and commands for postfix
  • /etc/postfix/
  • /var/spool/postfix/
  • sendmail emulation layer commands
  • /etc/aliases
  • mail-related logs in /var/log/

Using Sendmail

[edit | edit source]

Exercises

[edit | edit source]

Using Postfix

[edit | edit source]

Postfix is written and maintained by Wietse Venema, who has also written tcp_wrappers and SATAN. Postfix began its life as VMailer, but when Wietse released the software under the IBM Public License, IBM's lawyers discovered that VMailer was too similar to an existing trademark, so the name had to be changed. Postfix is written as a drop-in replacement for sendmail and it comes very close to hitting the mark on this. There are a few "gotchas" which can bite you, but they are not serious. Wietse actively supports Postfix through the postfix-users mailing list and there is also a developers mailing list. You can subscribe to the postfix-users mailing list this way:

echo subscribe postfix-users | mail majordomo@postfix.org

You can subscribe to the developers list in this way:

echo subscribe postfix-testers | mail majordomo@postfix.org

One last list we should mention is the announce list, which you can join in this way:

echo subscribe postfix-announce | mail majordomo@postfix.org

Postfix development is on-going and these mailing lists are quite active as of this writing. Archives for the mailing lists may be found at: http://www.egroups.com/group/postfix-users/ and at: http://msgs.SecurePoint.com/postfix/.

When a message enters the Postfix mail system, the first stop on the inside is the incoming queue. The figure below shows the main components that are involved with new mail.

The figure shows the main Postfix system components, and the main information flows between them. Yellow ellipsoids are mail programs, yellow boxes are mail queues or files, and blue boxes are lookup tables.

Programs in the large box run under the control of the Postfix resident master daemon. Data in the large box is the property of the Postfix mail system.

Mail is posted locally. The Postfix sendmail program invokes the privileged postdrop program which deposits the message into the maildrop directory, where the message is picked up by the pickup daemon. This daemon does some sanity checks, in order to protect the rest of the Postfix system.

Mail comes in via the network. The Postfix SMTP server receives the message and does some sanity checks, in order to protect the rest of the Postfix system. The SMTP server can be configured to implement UCE controls on the basis of local or network-based black lists, DNS lookups, and other client request information. Mail is generated internally by the Postfix system itself, in order to return undeliverable mail to the sender. The bounce or defer daemon brings the bad news.

Mail is forwarded by the local delivery agent, either via an entry in the system-wide alias database, or via an entry in a per-user .forward file. This is indicated with the unlabeled arrow.

Mail is generated internally by the Postfix system itself, in order to notify the postmaster of a problem (this path is also indicated with the unlabeled arrow). The Postfix system can be configured to notify the postmaster of SMTP protocol problems, UCE policy violations, and so on.

The cleanup daemon implements the final processing stage for new mail. It adds missing From: and other message headers, arranges for address rewriting to the standard user@fully.qualified.domain form, and optionally extracts recipient addresses from message headers. The cleanup daemon inserts the result as a single queue file into the incoming queue, and notifies the queue manager of the arrival of new mail. The cleanup daemon can be configured to transform addresses on the basis of canonical and virtual table lookups.

On request by the cleanup daemon, the trivial-rewrite daemon rewrites addresses to the standard user@fully.qualified.domain form. The initial Postfix version does not implement a rewriting language. Implementing one would take a lot of effort, and most sites do not need it. Instead, Postfix makes extensive use of table lookup.

The primary configuration file for Postfix (the working equivalent of /etc/sendmail.cf) is main.cf. The install.cf file contains the initial settings for Postfix which were set up during the RPM installation. The file master.cf is Postfix's master process configuration file. Each line in the master file describes how a mailer component program should be run. In the debugging section we will talk some more about this file. The postfix-script is a wrapper used by Postfix to execute Postfix commands safely in the Linux environment. Let's take a closer look at the install.cf file, as this file contains some data which we will need when we start to configure Postfix with main.cf.

The install.cf file is really just a list of the default settings used by the installation program built into the RPM.

Here is the main.cf file with comments by Wietse Venema and our suggested changes interspersed throughout:

  # Global Postfix configuration file. This file lists only a subset
  # of all 100+ parameters. See the sample-xxx.cf files for a full list.
  # 
  # The sample files mentioned above are located in /usr/doc/postfix-19990906_pl06/
  # The general format is lines with parameter = value pairs. Lines
  # that begin with whitespace continue the previous line. A value can
  # contain references to other $names or ${name}s.
  # LOCAL PATHNAME INFORMATION
  #
  # The queue_directory specifies the location of the Postfix queue.
  # This is also the root directory of Postfix daemons that run chrooted.
  # See the files in examples/chroot-setup for setting up Postfix chroot
  # environments on different UNIX systems.
  #
  queue_directory = /var/spool/postfix

This is the same directory that sendmail uses for the incoming mail queue.

  # The program_directory parameter specifies the default location of
  # Postfix support programs and daemons. This setting can be overruled
  # with the command_directory and daemon_directory parameters.
  #
  program_directory = /some/where/postfix/bin

The line above must be corrected. The RPM installs the Postfix binaries into /usr/libexec/postfix by default.

  # The command_directory parameter specifies the location of all
  # postXXX commands.  The default value is $program_directory.
  #
  command_directory = /usr/sbin

The line above is correct and may be left as is.

  # The daemon_directory parameter specifies the location of all Postfix
  # daemon programs (i.e. programs listed in the master.cf file). The
  # default value is $program_directory. This directory must be owned
  # by root.
  #
  daemon_directory = /usr/libexec/postfix

The line above is correct and may be left as is.

  # QUEUE AND PROCESS OWNERSHIP
  #
  # The mail_owner parameter specifies the owner of the Postfix queue
  # and of most Postfix daemon processes.  Specify the name of a user
  # account THAT DOES NOT SHARE A GROUP WITH OTHER ACCOUNTS AND THAT
  # OWNS NO OTHER FILES OR PROCESSES ON THE SYSTEM.  In particular,
  # don't specify nobody or daemon. PLEASE USE A DEDICATED USER.
  #
  mail_owner = postfix

The line above is correct and may be left as is.

  # The default_privs parameter specifies the default rights used by
  # the local delivery agent for delivery to external file or command.
  # These rights are used in the absence of a recipient user context.
  # DO NOT SPECIFY A PRIVILEGED USER OR THE POSTFIX OWNER.
  #
  #default_privs = nobody

The line above is correct and may be left as is, but it should be uncommented (i.e., remove the leading pound sign).

  # INTERNET HOST AND DOMAIN NAMES
  # 
  # The myhostname parameter specifies the Internet hostname of this
  # mail system. The default is to use the fully-qualified domain name
  # from gethostname(). $myhostname is used as a default value for many
  # other configuration parameters.
  #
  #myhostname = host.domain.name

Set the value in the line above to the Fully Qualified Domain Name (FQDN) for your machine. E.g. if your hostname is turkey and your domain is trot.com, then your FQDN would be "turkey.trot.com". You will also need to uncomment this line.

  #myhostname = virtual.domain.name

The line above is redundant for most configurations and can usually be left commented.

  # The mydomain parameter specifies the local Internet domain name.
  # The default is to use $myhostname minus the first component.
  # $mydomain is used as a default value for many other configuration
  # parameters.
  #
  #mydomain = domain.name

The line above should be your domain name only, without the hostname prepended to the front of it. As in the example we gave above, the correct value here would be trot.com. Don't forget to uncomment the line as well.

  # SENDING MAIL
  # 
  # The myorigin parameter specifies the domain that locally-posted
  # mail appears to come from. The default is to append $myhostname,
  # which is fine for small sites.  If you run a domain with multiple
  # machines, you should (1) change this to $mydomain and (2) set up
  # a domain-wide alias database that aliases each user to
  # user@that.users.mailhost.
  #
  #myorigin = $myhostname
  #myorigin = $mydomain

The instructions here are pretty good. Typically what's done here is to let this default to $mydomain. Be sure to uncomment your choice.

  # RECEIVING MAIL
  # The inet_interfaces parameter specifies the network interface
  # addresses that this mail system receives mail on.  By default,
  # the software claims all active interfaces on the machine. The
  # parameter also controls delivery of mail to user@[ip.address].
  #
  #inet_interfaces = all

Once again the instructions here are good. Just uncomment the above listed line and you should be fine. Unless you have some odd requirement the next two entries can be left commented. You shouldn't need them.

  #inet_interfaces = $myhostname
  #inet_interfaces = $myhostname, localhost
  # The mydestination parameter specifies the list of domains that this
  # machine considers itself the final destination for.
  # The default is $myhostname + localhost.$mydomain.  On a mail domain
  # gateway, you should also include $mydomain. Do not specify the
  # names of domains that this machine is backup MX host for. Specify
  # those names via the relay_domains or permit_mx_backup settings for
  # the SMTP server (see sample-smtpd.cf.
  # The local machine is always the final destination for mail addressed
  # to user@[the.net.work.address] of an interface that the mail system
  # receives mail on (see the inet_interfaces parameter).
  # Specify a list of host or domain names, /file/name or type:table
  # patterns, separated by commas and/or whitespace. A /file/name
  # pattern is replaced by its contents; a type:table is matched when
  # a name matches a lookup key.  Continue long lines by starting the
  # next line with whitespace.
  #
  #mydestination = $myhostname, localhost.$mydomain
  #mydestination = $myhostname, localhost.$mydomain $mydomain

The most common practice is to select the line immediately above as your choice here. Be sure to uncomment it and put a comma between the last two entries as it appears to have been omitted.

  #mydestination = $myhostname, localhost.$mydomain, $mydomain,
  #       mail.$mydomain, www.$mydomain, ftp.$mydomain
  # INTERNET VERSUS INTRANET
  # The relayhost parameter specifies the default host to send mail to
  # when no entry is matched in the optional transport(5) table. When
  # no relayhost is given, mail is routed directly to the destination.
  # 
  # On an intranet, specify the organizational domain name. If your
  # internal DNS uses no MX records, specify the name of the intranet
  # gateway host instead.
  #
  # Specify a domain, host, host:port, [address] or [address:port].
  # Use the form [destination] to turn off MX lookups. See also the
  # default_transport parameter if you're connected via UUCP.
  #
  #relayhost = $mydomain
  #relayhost = gateway.my.domain
  #relayhost = uucphost
  #relayhost = [mail.$mydomain:9999]

If you are behind some sort of a firewall or you need to masquerade the envelope (which will be covered later in this document), you would set the relayhost value to the MTA for your domain. If this host is to be *the* MTA for the domain, then leave all of these commented out.

  # DEFAULT TRANSPORT
  #
  # The default_transport parameter specifies the default message
  # delivery transport to use when no transport is explicitly given in
  # the optional transport(5) table.
  #
  #default_transport = smtp

In most cases the above line would be uncommented and left as is.

  #default_transport = uucp
  # ADDRESS REWRITING
  #
  # Insert text from sample-rewrite.cf if you need to do address
  # masquerading.
  #
  # Insert text from sample-canonical.cf if you need to do address
  # rewriting, or if you need username->Firstname.Lastname mapping.
  # ADDRESS REDIRECTION (VIRTUAL DOMAIN)
  #
  # Insert text from sample-virtual.cf if you need virtual domain support.
  # "USER HAS MOVED" BOUNCE MESSAGES
  #
  # Insert text from sample-relocated.cf if you need "user has moved"
  # style bounce messages. Alternatively, you can bounce recipients
  # with an SMTP server access table. See sample-smtpd.cf.
  # TRANSPORT MAP
  #
  # Insert text from sample-transport.cf if you need explicit routing.
  
  # ALIAS DATABASE
  #
  # The alias_maps parameter specifies the list of alias databases used
  # by the local delivery agent. The default list is system dependent.
  # On systems with NIS, the default is to search the local alias
  # database, then the NIS alias database. See aliases(5) for syntax
  # details.
  # 
  # If you change the alias database, run "postalias /etc/aliases" (or
  # wherever your system stores the mail alias file), or simply run
  # "newaliases" to build the necessary DBM or DB file.
  #
  # It will take a minute or so before changes become visible.  Use
  # "postfix reload" to eliminate the delay.
  #
  #alias_maps = dbm:/etc/aliases
  alias_maps = hash:/etc/aliases

The alias_maps line is pointing at the /etc/aliases file which we preserved prior to removing sendmail. Best practice (recommended) usually prefers that all the Postfix config files be kept together so it might be a good idea to change this line to read:

  alias_maps = hash:/etc/postfix/aliases

and also make sure that you put the aliases file in /etc/postfix. Otherwise Postfix will complain on startup and fail to run. The default db type on Red Hat Linux is hash, so be sure to use it as we have here. One common error people make is to use dbm instead of hash. Don't fall into that trap.

  #alias_maps = hash:/etc/aliases, nis:mail.aliases
  #alias_maps = netinfo:/aliases
  # The alias_database parameter specifies the alias database(s) that
  # are built with "newaliases" or "sendmail -bi".  This is a separate
  # configuration parameter, because alias_maps (see above) may specify
  # tables that are not necessarily all under control by Postfix.
  #
  #alias_database = dbm:/etc/aliases
  #alias_database = dbm:/etc/mail/aliases
  #alias_database = hash:/etc/aliases

As the instructions say, if you want to use the newaliases command to handle the aliases file (recommended), you should uncomment the above line, but be sure (if you made the path change we recommended in the alias_maps section) to change it to read:

  alias_database = hash:/etc/postfix/aliases

Then be sure to uncomment the line and run the newaliases command before starting Postfix.
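
For example, assuming the recommended /etc/postfix/aliases location, the alias database can be built with either of these commands (run as root):

# postalias /etc/postfix/aliases
# newaliases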

   #alias_database = hash:/etc/aliases, hash:/opt/majordomo/aliases

If you happen to run majordomo then you should use the line above instead of just the aliases line. Be sure the path to the majordomo aliases file is correct. The best practice convention is to put it into /etc/postfix; most Red Hat Linux sendmail installations would have had it in /etc/mail/. We will discuss this a bit more when we get to the listserv section of this document.

  # DELIVERED-TO
  #
  # The prepend_delivered_header controls when Postfix should prepend
  # a Delivered-To: message header.
  #
  # By default, Postfix prepends a Delivered-To: header when forwarding
  # mail and when delivering to file (mailbox) or command.  Turning off
  # the Delivered-To: header when forwarding mail is not recommended.
  #
  # prepend_delivered_header = command, file, forward
  # prepend_delivered_header = forward

The defaults will work fine so you can leave this section commented out unless you have some special need or preference.

  # ADDRESS EXTENSIONS (e.g., user+foo)
  #
  # The recipient_delimiter parameter specifies the separator between
  # user names and address extensions (user+foo). See canonical(5),
  # local(8), relocated(5) and virtual(5) for the effects this has on
  # aliases, canonical, virtual, relocated and .forward file lookups.
  # Basically, the software tries user+foo and .forward+foo before
  # trying user and .forward.
  #
  # recipient_delimiter = +

This one can be left commented out also, unless you have some special need or preference.

  # DELIVERY TO MAILBOX
  #
  # The home_mailbox parameter specifies the optional pathname of a
  # mailbox relative to a user's home directory. The default is to
  # deliver to the UNIX-style /var/spool/mail/user or /var/mail/user.
  # Specify "Maildir/" for qmail-style delivery (the / is required).
  #
  #home_mailbox = Mailbox
  #home_mailbox = Maildir/

On Red Hat Linux systems you should leave this alone unless you know what you're doing. If you're converting from qmail to Postfix (doubtful) then it would probably be useful.

  # The mail_spool_directory parameter specifies the directory where
  # UNIX-style mailboxes are kept. The default setting depends on the
  # system type.
  #
  # mail_spool_directory = /var/mail
  # mail_spool_directory = /var/spool/mail

The previous line is correct for Red Hat Linux defaults so it should be uncommented and left as is.

  # The mailbox_command parameter specifies the optional external
  # command to use instead of mailbox delivery. The command is run as
  # the recipient with proper HOME, SHELL and LOGNAME environment settings.
  # Exception:  delivery for root is done as $default_user.
  #
  # Other environment variables of interest: USER (recipient username),
  # EXTENSION (address extension), DOMAIN (domain part of address),
  # and LOCAL (the address localpart).
  #
  # Unlike other Postfix configuration parameters, the mailbox_command
  # parameter is not subjected to $parameter substitutions. This is to
  # make it easier to specify shell syntax (see example below).
  #
  # Avoid shell meta characters because they will force Postfix to run
  # an expensive shell process. Procmail alone is expensive enough.
  #
  #mailbox_command = /some/where/procmail

The default MDA on Red Hat Linux systems is procmail. You can use the command "which procmail" to verify the path, but unless you've changed procmail's location it is /usr/bin/procmail. Don't forget to uncomment the line.

  #mailbox_command = /some/where/procmail -a "$EXTENSION"
  # The mailbox_transport specifies the optional transport in master.cf
  # to use after processing aliases and .forward files. This parameter
  # has precedence over the mailbox_command, fallback_transport and
  # luser_relay parameters.
  #
  #mailbox_transport = cyrus

On a default Red Hat Linux system you should leave the above line alone.

  # The fallback_transport specifies the optional transport in master.cf
  # to use for recipients that are not found in the UNIX passwd database.
  # This parameter has precedence over the luser_relay parameter.
  #
  #fallback_transport =

On a default Red Hat Linux system you should leave the above line alone.

  # The luser_relay parameter specifies an optional destination address
  # for unknown recipients.  By default, mail for unknown local recipients
  # is bounced.
  #
  # The following expansions are done on luser_relay: $user (recipient
  # username), $shell (recipient shell), $home (recipient home directory),
  # $recipient (full recipient address), $extension (recipient address
  # extension), $domain (recipient domain), $local (entire recipient
  # localpart), $recipient_delimiter. Specify ${name?value} or
  # ${name:value} to expand value only when $name does (does not) exist.
  #
  # luser_relay = $user@other.host
  # luser_relay = $local@other.host
  # luser_relay = admin+$local

It's your choice what you do here but it can be quite annoying to receive a bazillion bounces a day. Leave this alone (recommended).

  # JUNK MAIL CONTROLS
  # 
  # The controls listed here are only a very small subset. See the file
  # sample-smtpd.cf for an elaborate list of anti-UCE controls.
  # The header_checks parameter restricts what may appear in message
  # headers. This requires that POSIX or PCRE regular expression support
  # is built-in. Specify "/^header-name: stuff you do not want/ REJECT"
  # in the pattern file. Patterns are case-insensitive by default. Note:
  # specify only patterns ending in REJECT. Patterns ending in OK are
  # mostly a waste of cycles.
  #
  #header_checks = regexp:/etc/postfix/filename
  #header_checks = pcre:/etc/postfix/filename

The above section enables a filter which you can use to detect and "bounce" mail which matches a certain regular expression (regexp). The difference between using procmail and regexp or PCRE is that these two catch the mail prior to delivery and can effectively block unwanted mail at the SMTP port.
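
As a hedged sketch, the pattern file named by header_checks is a plain text table of /pattern/ REJECT lines; the patterns below are invented for illustration:

  /^Subject: .*make money fast/   REJECT
  /^X-Mailer: .*bulk mailer/      REJECT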

  # The relay_domains parameter restricts what domains (and subdomains
  # thereof) this mail system will relay mail from or to.  See the
  # smtpd_recipient_restrictions restriction in the file sample-smtpd.cf.
  #
  # By default, Postfix relays mail only from or to sites in or below
  # $mydestination, or in the optional virtual domain list.
  # 
  # Specify a list of hosts or domains, /file/name patterns or type:name
  # lookup tables, separated by commas and/or whitespace.  Continue
  # long lines by starting the next line with whitespace. A file name
  # is replaced by its contents; a type:name table is matched when a
  # (parent) domain appears as lookup key.
  #
  # NOTE: Postfix will not automatically forward mail for domains that
  # list this system as their primary or backup MX host. See the
  # permit_mx_backup restriction in the file sample-smtpd.cf.
  #
  #relay_domains = $mydestination, $virtual_maps

For anyone who already knows how MX records work, this is a critical component in the Postfix configuration. Home users probably won't need this line, but anyone who handles mail for multiple domains will.

Here's a sample of how it can be used:

   relay_domains = $mydestination, /etc/postfix/relay-domains

In this example the domains you want to relay for would be placed in the file /etc/postfix/relay-domains. One to a line like so:

  here.com 
  mail.here.com 
  there.org 
  mail.there.org 

Note: this file is *not* hashed or mapped. It is a simple text file. You can also use IP addresses instead of names.

  # The mynetworks parameter specifies the list of networks that are
  # local to this machine.  The list is used by the anti-UCE software
  # to distinguish local clients from strangers. See permit_mynetworks
  # and smtpd_recipient_restrictions in the file sample-smtpd.cf file.
  #
  # The default is a list of all networks attached to the machine:  a
  # complete class A network (X.0.0.0/8), a complete class B network
  # (X.X.0.0/16), and so on. If you want stricter control, specify a
  # list of network/mask patterns, where the mask specifies the number
  # of bits in the network part of a host address. You can also specify
  # the absolute pathname of a pattern file instead of listing the
  # patterns here.
  #
  #mynetworks = 168.100.189.0/28, 127.0.0.0/8

The line above is another critical component in the configuration of Postfix. As the instructions say it specifies the list of networks that are local to this host. For those unfamiliar with the syntax used, it's called Classless Inter-Domain Routing (CIDR) or supernetting. For those familiar with the network classes (A, B, C etc.) it is a way of dividing IP addresses up without reference to class.
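
For instance, a host on a hypothetical 192.168.1.0/24 LAN that should also trust itself would use:

  mynetworks = 192.168.1.0/24, 127.0.0.0/8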

  #mynetworks = $config_directory/mynetworks
  # SHOW SOFTWARE VERSION OR NOT
  #
  # The smtpd_banner parameter specifies the text that follows the 220
  # status code in the SMTP greeting banner. Some people like to see
  # the mail version advertised. By default, Postfix shows no version.
  #
  # You MUST specify the $myhostname at the start of the text. When
  # the SMTP client sees its own hostname at the start of an SMTP
  # greeting banner it will report a mailer loop. That's better than
  # having a machine meltdown.
  #
  #smtpd_banner = $myhostname ESMTP $mail_name
  #smtpd_banner = $myhostname ESMTP $mail_name ($mail_version)

The above config entry is a matter of personal preference. It is not required and is up to the administrator to choose.

  # PARALLEL DELIVERY TO THE SAME DESTINATION
  #
  # How many parallel deliveries to the same user or domain? With local
  # delivery, it does not make sense to do massively parallel delivery
  # to the same user, because mailbox updates must happen sequentially,
  # and expensive pipelines in .forward files can cause disasters when
  # too many are run at the same time. With SMTP deliveries, 10
  # simultaneous connections to the same domain could be sufficient to
  # raise eyebrows.
  # 
  # Each message delivery transport has its XXX_destination_concurrency_limit
  # parameter.  The default is $default_destination_concurrency_limit.
  local_destination_concurrency_limit = 2
  default_destination_concurrency_limit = 10

As the text above says this section is really about rate limiting. It is, essentially, the gas pedal for Postfix. Unless you have some really good reason to change these the defaults should be fine. Once you've run Postfix for a while (particularly those who use it in a professional setting) you may have a better idea of how this should be set for your environment.

  # DEBUGGING CONTROL
  #
  # The debug_peer_level parameter specifies the increment in verbose
  # logging level when an SMTP client or server host name or address
  # matches a pattern in the debug_peer_list parameter.
  #
  debug_peer_level = 2

We recommend the default here unless there is some overriding reason to change it. Debugging will be covered in a later chapter of this document. For what it's worth, this section has no real relevance unless the next one is enabled.

  # The debug_peer_list parameter specifies an optional list of domain
  # or network patterns, /file/name patterns or type:name tables. When
  # an SMTP client or server host name or address matches a pattern,
  # increase the verbose logging level by the amount specified in the
  # debug_peer_level parameter.
  #
  # debug_peer_list = 127.0.0.1
  # debug_peer_list = some.domain

This section is used in combination with debug_peer_level, so if that's not enabled then this one is moot. This is actually a very neat feature of Postfix. Think about it for a minute: if everything works fine but there is this one host which seems to have problems receiving or sending mail to or from your host, then you could use this feature to increase the logging level for just that host.

  # The debugger_command specifies the external command that is executed
  # when a Postfix daemon program is run with the -D option.
  #
  # Use "command .. & sleep 5" so that the debugger can attach before
  # the process marches on. If you use an X-based debugger, be sure to
  # set up your XAUTHORITY environment variable before starting Postfix.
  #
  debugger_command =
           PATH=/usr/bin:/usr/X11R6/bin
           xxgdb $daemon_directory/$process_name $process_id & sleep 5

Leave this section alone for now. We will cover debugging in some detail in a later section of this document. That's it. We've made it through the main.cf file and we're almost ready to start it up.

master.cf

[edit | edit source]

The master daemon is a supervisory application which controls and monitors all of the other Postfix processes. The master.cf file is the master daemon's configuration file and is the throttle for Postfix: here you set all of the daemon process count limits. A good example of a useful limit would be a cap on the number of SMTP processes which can be executed simultaneously; after all, you might not want to receive 50 inbound messages all at the same time. The key thing to understand here is that any process without an explicit limit defaults to a 50 process limit.

In general terms the master.cf file is fine with the defaults as they are so you can leave it alone.

aliases

[edit | edit source]

This is simply the default aliases file. It can be exactly the same one you used with sendmail (recommended), and it works the same way it always has with the newaliases command. If you use majordomo, your majordomo aliases will keep working as before, and they too are activated with the newaliases command.

Control of the Postfix server is done through the init.d scripts. Don't forget to issue a postfix reload command after changing the configuration! If you modify the aliases database (/etc/aliases), don't forget to activate the changes by issuing a newaliases command (as with sendmail).
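In practice, that boils down to two commands (run as root):

  postfix reload    # pick up changes to main.cf and master.cf
  newaliases        # rebuild the aliases database from /etc/aliases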

Key terms, files and utilities : /etc/aliases /etc/postfix/main.cf /etc/postfix/master.cf /var/spool/postfix

Exercises

[edit | edit source]

Detailed Objective

[edit | edit source]

Weight: 3

Description: Candidates should be able to implement client email management software to filter, sort and monitor incoming user email.

  • Key knowledge area(s):
    • procmail configuration files, tools and utilities
    • Usage of procmail on both server and client side
  • The following is a partial list of the used files, terms and utilities:
    • ~/.procmailrc
    • /etc/procmailrc
    • procmail

Managing mail traffic

[edit | edit source]

procmail is the mail processing utility language written by Stephen van den Berg of Germany. This article provides a bit of background for the intermediate Unix user on how to use procmail. As a "little" language (to use the academic term) procmail lacks many of the features and constructs of traditional, general-purpose languages. It has no "while" or "for" loops. However it "knows" a lot about Unix mail delivery conventions and file/directory permissions -- and in particular about file locking. Although it is possible to write a custom mail filtering script in any programming language using the facilities installed on most Unix systems -- we'll show that procmail is the tool of choice among sysadmins and advanced Unix users.

Unix mail systems consist of MTA's (mail transport agents like sendmail, smail, qmail, mmdf, etc.), MDA's (delivery agents like sendmail, deliver, and procmail), and MUA's (user agents like elm, pine, /bin/mail, mh, Eudora, and Pegasus).

On most Unix systems on the Internet sendmail is used as an integrated transport and delivery agent. sendmail and compatible MTA's have the ability to dispatch mail *through* a custom filter or program via either of two mechanisms: aliases and .forward files.

The aliases mechanism uses a single file (usually /etc/aliases or /usr/lib/aliases) to redirect mail. This file is owned and maintained by the system administrator. Therefore you (as a user) can't modify it. The ".forward" mechanism is decentralized. Each user on a system can create a file in their home directory named .forward and consisting of an address, a filename, or a program (filter). Usually the file *must* be owned by the user or root and *must not* be "writeable" by other users (good versions of sendmail check these factors for security reasons).

It's also possible, with some versions of sendmail, for you to specify multiple addresses, programs, or files, separated with commas. However we'll skip the details of that.

You could forward your mail through any arbitrary program with a .forward that consisted of a line like:

"|$HOME/bin/your.program -and some arguments"

Note the quotes and the "pipe" character. They are required. "Your.program" could be a Bourne shell script, an awk or perl script, a compiled C program or any other sort of filter you wanted to write.

However "your.program" would have to be written to handle a plethora of details about how sendmail would pass the messages (headers and body) to it, how you would return values to sendmail, how you'd handle file locking (in case mail came in while "your.program" was still processing one, etc). That's what procmail gives us.

What we have seen so far is general information that applies to all sendmail compatible MTA/MDA's.

So, to ensure that mail is passed to procmail for processing the first step is to create the .forward file. (This is safe to do before you do any configuration of procmail itself -- assuming that the package's binaries are installed). Here's the canonical example, pasted from the procmail man pages:

"|IFS=' '&&exec /usr/local/bin/procmail -f-||exit 75 #YOUR_USERNAME"

If you did this and nothing else, your mail would basically be unaffected. procmail would just look for its default recipe file (.procmailrc) and, finding none, would perform its default action on each message. In other words it would append new messages to your normal spool file.

You can set up procmail system-wide as the local delivery agent in sendmail/postfix. When this is done, you can skip the whole part about using the .forward file -- or you can use it anyway. For instance, in sendmail this can be done by changing sendmail.mc with the following:

MAILER_DEFINITIONS
dnl # MAILER(`local')dnl <- comment this one out with dnl
MAILER(`procmail')dnl
MAILER(`smtp')dnl

In postfix this can be done according to the postfix FAQ. Basically it is just editing /etc/postfix/main.cf with the following and reloading postfix.

/etc/postfix/main.cf:
mailbox_command = /path/to/procmail

In either event the next step to automating your mail handling is to create a .procmailrc file in your home directory. You could actually call this file anything you wanted -- but then you'd have to slip the name explicitly into the .forward file (right before the "||" operator). Almost everyone just uses the default.

So far all we've talked about is how everything gets routed to procmail -- which mostly involves sendmail and the Bourne shell's syntax. Almost all sendmails are configured to use /bin/sh (the Bourne shell) to interpret alias and .forward "pipes."

So, here's a very simple .procmailrc file:

:0c:
$HOME/mail.backup

This just appends an extra copy of all incoming mail to a file named "mail.backup" in your home directory. Note that a bunch of environment variables are preset for you. It's been suggested that you should explicitly set SHELL=/bin/sh (or the closest derivative to Bourne Shell available on your system). I've never had to worry about that since the shells I use on most systems are already Bourne compatible.

However, csh and other shell users should take note that all of the procmail recipe examples that I've ever seen use Bourne syntax.

The :0 line marks the beginning of a "recipe" (procedure, clause, whatever). :0 can be followed by any of a number of "flags". There is a literally dizzying number of ways to combine these flags. The one flag we're using in this example is 'c' for "copy."

The second colon on this line marks the end of the flags and the beginning of the name for a lockfile. Since no name is given procmail will pick one automatically.

This bit is a little complicated. Mail might arrive in bursts. If a new message arrives while your script is still busy processing the last message, you'll have multiple sendmail processes, each dealing with one message. This isn't a problem by itself. However, if two processes try to write into one file at the same time, the writes are likely to get jumbled in unpredictable ways (the result will not be a properly formatted mail folder).

So we hint to procmail that it will need to check for and create a lockfile. In this particular case we don't care what the name of the lockfile is (since we're not going to have *other* programs writing into the backup file), so we leave the last field (after the colon) blank. procmail will then select its own lockfile name.

If we leave the : off of the recipe header line (omitting the last field entirely) then no lockfile is used. This is appropriate whenever we intend only to read from the files in the recipe -- or in cases where we intend to write only short, single-line entries to a file in no particular order (like log file entries). The way procmail works is:

It receives a single message from sendmail (or some sendmail compatible MTA/MDA). There may be several procmail processes running concurrently, since new messages may be coming in faster than they are being processed. Each opens its recipe file (.procmailrc by default, or whatever was specified on its command line) and parses each recipe from the first to the last until a message has been "delivered" (or "disposed of" as the case may be).

Any recipe can be a "disposition" or "delivery" of the message. As soon as a message is "delivered" then procmail closes its files, removes its locks and exits.

If procmail reaches the end of its rc file (and thus all of the INCLUDE'd files) without "disposing" of the message, then the message is appended to your spool file (which looks like a normal delivery to you and to all of your "mail user agents" like Eudora, elm, etc.).

This explains why procmail is so forgiving if you have *no* .procmailrc. It simply delivers your message to the spool because it has reached the end of all its recipes (there were none). The 'c' flag causes a recipe to work on a "copy" of the message -- meaning that any actions taken by that recipe are not considered to be "dispositions" of the message.

Without the 'c' flag this recipe would catch all incoming messages, and all your mail would end up in mail.backup. None of it would get into your spool file and none of the other recipes would be parsed.

The next line in this sample recipe is simply a filename. Like sendmail's aliases and .forward files -- procmail recognizes three sorts of disposition to any message. You can append it to a file, forward it to some other mail address, or filter it through a program.

Actually there is one special form of "delivery" or "disposition" that procmail handles. If you provide it with a directory name (rather than a filename) it will add the message to that directory as a separate file. The name of that file will be based on several rather complicated factors that you don't have to worry about unless you use the Rand MH system, or some other relatively obscure and "exotic" mail agent.
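For instance, this sketch files list traffic into such a directory (the subject tag and directory name are hypothetical, and the directory must already exist):

:0
* ^Subject:.*\[lpi-list\]
$HOME/mail/archive

Since every message becomes its own file, no local lockfile is needed for a directory delivery.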

A procmail recipe generally consists of three parts -- a start line (:0 with some flags) some conditions (lines starting with a '*' -- asterisk -- character) and one "delivery" line which can be file/directory name or a line starting with a '!' -- bang -- character or a '|' -- pipe character.

Here's another example:

:0
* ^From.*someone.i.dont.like@somewhere.org
/dev/null

This is a simple one consisting of no flags, one condition and a simple file delivery. It simply throws away any mail from "someone I don't like". (/dev/null under Unix is a "bit bucket" -- a bottomless well for tossing unwanted output. DOS has a similar concept but it's not nearly as handy.)

Here's a more complex one:

:0
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: myaddress@myhost.mydomain.org
| $HOME/bin/my.script

This consists of a set of negative conditions (notice that the conditions all start with the '!' character). This means: for any mail that didn't come from a "daemon" (some automated process), didn't come from a "mailer" (another automated process), and doesn't contain any header line of the form "X-Loop: myadd...", send it through the script in my bin directory.

I can put the script directly in the rc file (which is what most procmail users do most of the time). This script might do anything to the mail. In this case, whatever it does had better be good, because procmail will consider any such mail to be delivered, and any recipes after this one will only be reached by mail from DAEMONs, MAILERs, and mail with that particular X-Loop: line in the header.

These two particular FROM_ conditions are actually "special." They are preset by procmail and actually refer to a couple of rather complicated regular expressions that are tailored to match the sorts of things that are found in the headers of most mail from daemons and mailers.

The X-Loop: line is a normal procmail condition. In the RFC822 document (which defines what e-mail headers should look like on the Internet) any line started with X- is a "custom" header. This means that any mail program that wants to can add pretty much any X- line it wants.

A common procmail idiom is to add an X-Loop: line to the header of any message that we send out -- and to check for our own X-Loop: line before sending out anything. This is to protect against "mail loops" -- situations where our mail gets forwarded or "bounced" back to us and we endlessly respond to it.

So, here's a detailed example of how to use procmail to automatically respond to mail from a particular person. We start with the recipe header.

:0

... then we add our one condition (that the mail appears to be from the person in question):

* ^FROMharasser@spamhome.com

FROM is a "magic" value for procmail -- it checks from, resent-by, and similar header lines. You could also use ^From: -- which would only match the header line(s) that start with the string "From:"

The ^ (more technically, "caret") is a "regular expression anchor" (a techie phrase that means it specifies *where* the pattern must be found in order to match). There is a whole book on regular expressions (from O'Reilly & Associates). "Regexes" permeate many Unix utilities, scripting languages and other programs. There are slight differences in regex syntax for each application; however, the man page for 'grep' or 'egrep' is an excellent place to learn more.

In this case the caret means that the pattern must occur at the beginning of a line (which is its usual meaning in grep, ed/sed, awk, and other contexts).

... and we add a couple of conditions to avoid looping and to avoid responding to automated systems

* !^FROM_DAEMON
* !^FROM_MAILER

(These are a couple more "magic" values. The man pages show the exact regexes that are assigned to these keywords -- if you're curious or need to tweak a special condition that is similar to one or the other of these).

... and one more to prevent some tricky loop:

* !^X-Loop: myaddress@myhost.mydomain.org

(All of these patterns start with "bangs" (exclamation points) because the condition is that *no* line of the header start with any of these patterns. The 'bang' in this case (and most other regex contexts) "negates" or "reverses" the meaning of the pattern).

... now we add a "disposition" -- the autoresponse.

| (formail -rk \
-A "X-Loop: yourname@youraddress.com" \
-A "Precedence: junk"; \
echo "Please don't send me any more mail";\
echo "This is an automated response";\
echo "I'll never see your message";\
echo "So, GO AWAY" ) | $SENDMAIL -t -oi 

This is pretty complicated -- but here's how it works: The | character tells procmail that it should launch a program and feed the message to it. The open parenthesis is a Bourne shell construct that groups a set of commands in such a way as to combine the output from all of them into one "stream."

The 'formail' command is a handy program that is included with the procmail package. It "formats" mail headers according to its command line switches and its input. -rk tells 'formail' to format a "reply" and to "keep" the message body. With these switches formail expects a header and body as input.

The -A parameter tells formail to "add" the next argument as a header line. The argument provided to the -A switch must be enclosed in quotes so the shell treats the whole string (spaces and all) as a single argument. The backslashes at the end of each line tell procmail to treat the next line as part of the same line. So, all of the lines ending in backslashes are passed to the shell as one long command line.

This "trailing backslash" or "line continuation" character is a common Unix idiom found in a number of programming languages and configuration file formats. The semicolons tell the shell to execute another command -- they allow several commands to be issued on the same command line.

Each of the echo commands should be reasonably self-explanatory. We could have used a 'cat' command and put our text into a file if we wanted. We can also call other programs here -- like 'fortune' or 'date' -- and their output would be combined with the rest.

Now we get to the closing parenthesis. This marks the end of the block of commands that we combined. The output from all of those is fed into the next pipe -- which starts the local copy of sendmail (note that this is another variable that procmail thoughtfully presets for us).

The -t switch on sendmail tells it to take the "To:" address from the header of its input (where 'formail -r' put it), and the -oi switch enables the sendmail "option" to "ignore" lines that consist only of a 'dot' (don't worry about the details on that).

Most of the difficulty in understanding procmail has nothing to do with procmail itself. The intricacies of regular expressions (those weird things on the '*' conditional lines), shell quoting and command syntax, and how to format a reply header that will be acceptable to sendmail (the 'formail' and 'sendmail' stuff) are the parts that require so much explanation.
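For reference, here is how the pieces above fit together into one complete recipe. Note that the X-Loop address added by formail must be the same address tested in the condition, or the loop protection will not work:

:0
* ^FROMharasser@spamhome.com
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: myaddress@myhost.mydomain.org
| (formail -rk \
  -A "X-Loop: myaddress@myhost.mydomain.org" \
  -A "Precedence: junk"; \
  echo "Please don't send me any more mail";\
  echo "This is an automated response";\
  echo "I'll never see your message";\
  echo "So, GO AWAY" ) | $SENDMAIL -t -oi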

More information about procmail can be found in Era Eriksson's "Mini-FAQ" at http://www.iki.fi/~era/procmail/mini-faq.html or one of the several mirrors like http://www.zer0.org/procmail/mini-faq.html or http://www.dcs.ed.ac.uk/home/procmail/faq/mini-faq.html

Key terms, files and utilities : procmail .procmailrc

Exercises

[edit | edit source]


Detailed Objective

[edit | edit source]

Weight: 1

Description: Candidates should be able to install and configure news servers. This objective includes customising and monitoring served newsgroups.

  • Key knowledge area(s):
    • INN configuration files, terms and utilities
    • Leafnode configuration files, terms and utilities
  • The following is a partial list of the used files, terms and utilities:
    • innd
    • fetchnews

Serving news

[edit | edit source]

The INND daemon is one of the most widely used news server programs. It provides Network News Transfer Protocol (NNTP) service. Major newsgroup hierarchies include: alt, comp, gnu, misc, news, rec, sci, soc, and talk. Newsgroups are organized in a hierarchical fashion. By default INND uses the standard NNTP port, TCP 119.

Configuration :

The configuration files are located in /etc/news/. A minimal INN setup requires that you modify the following files:

inn.conf :

Set the following options. The defaults for the remaining options should be fine.

organization:   MyOrganization
domain:         mydomain.com
server:         news.mydomain.com
incoming.conf :

Place your ISP's news server information in here.

# Peer definition
# MyISP.com  (800) 555-1212 news@MyISP.com
peer myisp.com {
   hostname:  news.myisp.com
}

Newsfeeds

[edit | edit source]

If you want to post articles, you need to modify newsfeeds:

news.myisp.com:comp.*,!comp.sources.*,comp.sources.unix/!foo:Tf,Wnm:news.myisp.com

The colon is the field delimiter used above. The format of the above line is:

sitename[/exclude,exclude,...]:pattern,pattern,...[/distrib,distrib,..]:flag,flag,...:param

Options:

sitename
Names the site to which this feed relates. It can be called anything you want and does not have to be the domain name of the site.
pattern
Indicates which newsgroups are to be sent to this site. The default is to send all groups (leave it blank if that's what you want). The above example will cause all "comp" groups to be sent, but not any group under "comp.sources" except for "comp.sources.unix".
distribution
If specified, and an article has a "Distribution" header, it is checked against this value. If the distribution specified matches the distribution header in the article, it is sent. However, if the distribution specified starts with an exclamation point and the distribution header in the article matches, it is not sent. In the above example, any article with a distribution header containing "foo" will not be sent.
flag
Specify various options about the newsfeed. The above options specify that this is a file feed type (Tf), and that only the article's message-id and storage token (Wnm) should be written.

param
Meaning varies depending on the feed type. When the feed type is "file", as in the example above, it specifies the file to write an entry to when an article is received. If not an absolute path, it is relative to the "pathoutgoing" option in inn.conf.

readers.conf : Edit this file if you want to allow readers on other computers.

motd.news : If you allow readers, it is a good idea to put a banner in this file that relays your usage policies to your readers.

Run inncheck to correct any permissions problems and catch any configuration file errors. Run makehistory to initialize the INN history database. Run makedbz to rebuild the dbz database files. Finally, run innd and test with a news client.
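As a sketch, the whole sequence on the command line (paths and init-script integration vary by distribution):

inncheck          # check configuration files and permissions
makehistory       # build the initial history database
makedbz -i        # create the initial dbz index files
/usr/sbin/innd    # start the daemon, then test with a news client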

Troubleshooting

[edit | edit source]

innd won't start : Use inncheck. Check logs under /var/log/news.

Readers can't read : Verify that the reader is allowed access by checking nnrp.access. Make sure innd is running. Check logs under /var/log/news. telnet to port 119 and see if a banner comes up.

Posters can't post : Confirm the poster is allowed to post by checking nnrp.access. Check logs under /var/log/news. telnet to port 119 and see if a banner comes up with (posting allowed).

Key terms, files and utilities : innd

Exercises

[edit | edit source]

Detailed Objectives (207.1)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to configure BIND to function as a caching-only DNS server. This objective includes the ability to manage a running server and configuring logging.


Key Knowledge Areas:

  • BIND 9.x configuration files, terms and utilities.
  • Defining the location of the BIND zone files in BIND configuration files.
  • Reloading modified configuration and zone files.
  • Awareness of dnsmasq, djbdns and PowerDNS as alternate name servers.


The following is a partial list of the used files, terms and utilities:

  • /etc/named.conf
  • /var/named/
  • /usr/sbin/rndc
  • kill
  • host
  • dig

Basic BIND 8 configuration

[edit | edit source]

Setting up a caching-only nameserver

[edit | edit source]

To speed up the cumbersome process of DNS queries, DNS servers usually cache answers from other DNS servers -- even negative answers (i.e. an authoritative server's answer "name does not exist" is also cached by your local DNS).

Configuring BIND as a caching-only nameserver involves setting up only a "." zone, that is, telling it only about the root nameservers and not specifying any other zones, as follows:

zone "." in {
    type hint;
    file "named.cache";
};

The file named.cache, containing the root server hints, can be generated with dig.
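For example (the output path is illustrative; write the file wherever your named.conf expects it):

dig @a.root-servers.net . ns > /var/named/named.cache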

Logging in BIND is controlled by two main concepts: channels and categories. A channel specifies where logged data goes: to syslog, to a file, etc. A category specifies what data is logged.

Channels allow you to filter messages by priority, like syslog's priorities. They are essentially the same, but two more are available for BIND: debug and dynamic, which affect debug-level logging. debug sets a debug level, which will be active after the first trace command is given via ndc; dynamic will increment and decrement the debug level with each trace command given via ndc.

Example of logging configuration:

logging {
    channel my_syslog {
        syslog daemon;
        severity info;
    };
    channel my_file {
        file "log.msgs";
        severity dynamic;
    };
    category statistics { my_syslog; my_file; };
    category queries { my_file; };
};

To activate debug logging after BIND has started, issue the command:

ndc trace

Key terms, files and utilities : /etc/named.conf /usr/sbin/ndc /usr/sbin/named-bootconf kill

Exercises

[edit | edit source]

Detailed Objectives

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to create a zone file for a forward or reverse zone or root level server. This objective includes setting appropriate values for records, adding hosts in zones and adding zones to the DNS. A candidate should also be able to delegate zones to another DNS server.


Key Knowledge Areas:

  • BIND 9.x configuration files, terms and utilities.
  • Utilities to request information from the DNS server.
  • Layout, content and file location of the BIND zone files.
  • Various methods to add a new host in the zone files, including reverse zones.


Terms and Utilities:

  • /var/named/
  • zone file syntax
  • resource record formats
  • named-checkzone
  • named-compilezone
  • dig
  • nslookup
  • host

Create and maintain DNS zones

[edit | edit source]

DNS zone files are composed mostly of resource records (RR). Resource records must start in the first column of a line. The order in which they appear is not important, but most people tend to follow the order in the DNS RFCs.

SOA (Start Of Authority) : indicates authority for this zone
NS (NameServer) : lists a nameserver for this zone

Other records:

A : name-to-address mapping
PTR : address-to-name mapping
CNAME (canonical name) : aliases

Don't forget to create a zone for 127.0.0!

Usual zone file format:

$TTL <ttl value>
<domain name>. IN SOA <nameserver name>. <user.email>. (
        <serial>   ; serial number
        <refresh>  ; refresh value
        <retry>    ; retry value
        <expire>   ; expire value
        <n-ttl> )  ; negative caching TTL
<domain name>.  IN  NS  <authoritative NS name>.
<domain name>.  IN  NS  <authoritative NS name>.
...
<hostname>.  IN  A  <IP address>
<hostname>.  IN  A  <IP address>
...

For reverse mappings the format is similar:

$TTL <ttl value>
<reverse net addr>.in-addr.arpa. IN SOA <NS name>. <user.email>. (
        <serial>   ; serial number
        <refresh>  ; refresh value
        <retry>    ; retry value
        <expire>   ; expire value
        <n-ttl> )  ; negative caching TTL
<rev net addr>.in-addr.arpa. IN NS <authoritative NS name>.
<rev net addr>.in-addr.arpa. IN NS <authoritative NS name>.
...
<rev IP addr>.in-addr.arpa. IN PTR <fqdn>.
<rev IP addr>.in-addr.arpa. IN PTR <fqdn>.
...
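To make the templates concrete, here is a minimal sketch of a forward zone file for a hypothetical domain mydomain.com (all names, addresses and timer values are illustrative):

$TTL 86400
mydomain.com. IN SOA ns1.mydomain.com. hostmaster.mydomain.com. (
        2009060501 ; serial
        28800      ; refresh
        7200       ; retry
        604800     ; expire
        86400 )    ; negative caching TTL
mydomain.com.     IN  NS  ns1.mydomain.com.
ns1.mydomain.com. IN  A   192.168.10.1
www.mydomain.com. IN  A   192.168.10.2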

Key terms, files and utilities : content of /var/named, zone file syntax, resource record formats, dig, nslookup, host

Exercises

[edit | edit source]

Detailed Objectives (207.3)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to configure a DNS server to run as a non-root user and run in a chroot jail. This objective includes secure exchange of data between DNS servers.


Key Knowledge Areas:

  • BIND 9 configuration files
  • Configuring BIND to run in a chroot jail
  • Split configuration of BIND using the forwarders statement
  • Configuring and using transaction signatures (TSIG)
  • Awareness of DNSSEC and basic tools
  • Awareness of DANE and related records


Terms and Utilities:

  • /etc/named.conf
  • /etc/passwd
  • DNSSEC
  • dnssec-keygen
  • dnssec-signzone

Securing a DNS server

[edit | edit source]

First of all, check security mailing lists and web sites for new versions of BIND. Particularly, versions prior to 8.2.3 are vulnerable to known attacks.

Hide your version number from foreign queries -- it could be used to craft a special attack against you. Since BIND 8.2, you may use in named.conf:

options {
    version "None of your business";
};

You can also restrict queries. Globally:

options {
    allow-query { address-match-list; };
};

Or per-zone (per-zone ACLs take precedence over global ones):

zone "test.com" {
    type slave;
    file "db.test";
    allow-query { 192.168.0.0/24; };
};

Even more important, make sure only real slave DNS servers can transfer your zones from your master. Use the keyword allow-transfer: globally (in an options statement) it applies to all zones, or it can be set per-zone. On the slaves, disable zone transfers entirely with allow-transfer { none; };
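A sketch of both ends of that arrangement, assuming the slave's address is 192.168.10.2. On the master:

options {
    allow-transfer { 192.168.10.2; };
};

And on the slave:

options {
    allow-transfer { none; };
};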

Don't run BIND as root! Since 8.1.2, there are options to change the user (-u) and group (-g) under which BIND runs. Use a non-privileged user (i.e. create a new one, without shell access). Make sure your zone files have the correct permissions (named.conf is read while BIND still has root's permissions, so don't change that file's permissions).

Also, run BIND in a chroot jail. Since 8.1.2, there is the -t option to specify the directory for the nameserver to chroot() to. Make sure all the files needed by BIND (log files, etc.) are under the root jail. If you plan to use ndc with a chroot'ed BIND, don't forget to pass the new pathname of the UNIX socket to ndc!

Here's a little bit on how to set up a chrooted bind9 environment in Debian. As the configuration of bind9 is very similar, the same procedure applies to bind8 for creating a chrooted environment.

  • Stop the currently running bind.
/etc/init.d/bind9 stop
  • In order to chroot bind in a jail, we need to specify the chroot environment in /etc/default/bind9:
OPTIONS="-u bind -t /var/lib/named"
  • We still want logging in our /var/log/syslog, so we change /etc/default/syslogd so that it opens an extra socket through which the chrooted bind can log into /var/log/syslog.
SYSLOGD="-a /var/lib/named/dev/log"
  • Run a couple of mkdir's for the environment
mkdir /var/lib/named
mkdir -p /var/lib/named/var/run/bind/run
mkdir /var/lib/named/etc
mkdir /var/lib/named/dev
mkdir /var/lib/named/var/cache 
  • Move over our existing config
mv /etc/bind /var/lib/named/etc/bind
  • Link it
ln -s /var/lib/named/etc/bind /etc/bind
  • Change ownership in the chrooted var and etc
chown -R bind:bind /var/lib/named/var/* 
chown -R bind:bind /var/lib/named/etc/bind
  • Create some devices & set permissions
mknod /var/lib/named/dev/null c 1 3
mknod /var/lib/named/dev/random c 1 8
chmod 666 /var/lib/named/dev/random /var/lib/named/dev/null
  • Restart syslogd & start bind
/etc/init.d/sysklogd restart
/etc/init.d/bind9 start

If bind does not start and there are error messages in the syslog, keep in mind that these messages were created from inside the chrooted domain, hence a permission problem about /var/run/bind/run/named.pid really means a problem with /var/lib/named/var/run/bind/run/named.pid


Key terms, files and utilities : SysV init files /etc/named.conf /etc/passwd

Exercises

[edit | edit source]

Web Services

[edit | edit source]

208.1 Implementing a Web server

[edit | edit source]

Detailed Objectives (208.1)

[edit | edit source]

(LPIC-1 Version 4.5)


Weight: 4


Description: Candidates should be able to install and configure a web server. This objective includes monitoring the servers load and performance, restricting client user access, configuring support for scripting languages as modules and setting up client user authentication. Also included is configuring server options to restrict usage of resources. Candidates should be able to configure a web server to use virtual hosts and customize file access.


Key Knowledge Areas:

  • Apache 2.4 configuration files, terms and utilities.
  • Apache log files configuration and content.
  • Access restriction methods and files.
  • mod_perl and PHP configuration.
  • Client user authentication files and utilities.
  • Configuration of maximum requests, minimum and maximum servers and clients.
  • Apache 2.4 virtual host implementation (with and without dedicated IP addresses).
  • Using redirect statements in Apache’s configuration files to customize file access.


Terms and Utilities:

  • access logs and error logs
  • .htaccess
  • httpd.conf
  • mod_auth_basic, mod_authz_host, mod_access_compat
  • htpasswd
  • AuthUserFile, AuthGroupFile
  • apachectl, apache2ctl
  • httpd, apache2

Overview

[edit | edit source]

Apache is the most used web server on the Internet,[1] and the "poster child" for successful open source development. While a web server itself doesn't need to be particularly fancy (many programming languages have tutorials on how to write an HTTP server), Apache's "secret of success" is its flexibility and robustness. Apache can be easily extended by various modules; mod_perl and mod_auth will be featured in this section.

Installation and Configuration

[edit | edit source]

The Apache HTTP server in its most recent version (2.2 as of this writing) can be downloaded as source code from the Apache HTTP Server website, or precompiled as a binary package from the repository of your favorite Linux distribution.

For the rest of this section we will refer to the Apache documentation for file names. This documentation is usually installed with the Apache binary inside the DocumentRoot. If we cannot reach the local documentation, there is still the official documentation on the Apache website. We will use a virtual network with Slackware 13.0 inside VirtualBox, which is free (as in cost) and available Free (as in Freedom) with small restrictions. Distribution-specific summaries for Debian Lenny and CentOS 5.4, a clone of Red Hat Enterprise Linux, will follow below.

If we want to compile Apache from source, we use the usual configure, make, make install steps. For further details please refer to the documentation page.

The web server binary httpd itself is usually located in /usr/sbin/. We can use the binary directly to start and stop the web server through command line options, but a better idea is to use the control script apachectl to interface with the httpd. apachectl can control the web server process (start and stop) in a convenient way and sets up the environment and checks the configuration file in the background. Back in the days of the transition from Apache 1.3 to the Apache 2.x series the control script was called apache2ctl to tell it apart from the Apache 1.3 script (then) apachectl.

It is unfortunate that the LPI still refers to apache2ctl while the Apache source code produces apachectl.

[root@lpislack ~]# apachectl
Usage: /usr/sbin/httpd [-D name] [-d directory] [-f file]
                       [-C "directive"] [-c "directive"]
                       [-k start|restart|graceful|graceful-stop|stop]
                       [-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
  -D name            : define a name for use in <IfDefine name> directives
  -d directory       : specify an alternate initial ServerRoot
  -f file            : specify an alternate ServerConfigFile
  -C "directive"     : process directive before reading config files
  -c "directive"     : process directive after reading config files
  -e level           : show startup errors of level (see LogLevel)
  -E file            : log startup errors to file
  -v                 : show version number
  -V                 : show compile settings
  -h                 : list available command line options (this page)
  -l                 : list compiled in modules
  -L                 : list available configuration directives
  -t -D DUMP_VHOSTS  : show parsed settings (currently only vhost settings)
  -S                 : a synonym for -t -D DUMP_VHOSTS
  -t -D DUMP_MODULES : show all loaded modules
  -M                 : a synonym for -t -D DUMP_MODULES
  -t                 : run syntax check for config files

Hmmm. This does not look right. The reason: if apachectl encounters parameters it does not understand, it passes them straight to httpd -- and no parameters at all is such a case, so apachectl invokes httpd without any parameters and we get httpd's usage message.

apachectl can start, stop and restart the web server, but even more useful are graceful and graceful-stop, which restart/stop the web server without dropping currently open connections. configtest does the same as httpd -t in testing the Apache configuration file. The options status and fullstatus need the mod_status module and display much useful status information about our httpd server.
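Typical day-to-day usage then looks like this:

apachectl configtest    # same as httpd -t: syntax-check the configuration
apachectl graceful      # restart without dropping open connections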

The logs for your Apache instance go to /var/log/httpd/. The two most important log files are access_log, which logs every access to the web server and error_log which only records errors. Tools like Awstats and Webalizer use the access_log to generate their reports.

A snippet of access_log (taken from the Debian Lenny machine lpidebian) shows the IP 192.168.10.21 accessing / on the website, which is the “welcome” page of the web server (more on this later), then trying to GET /favicon.ico and /login.html, which both result in a “404”, meaning “File does not exist”.

192.168.10.21 - - [02/Jun/2009:17:06:01 -0400] "GET / HTTP/1.1" 200 56 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10"
192.168.10.21 - - [02/Jun/2009:17:17:12 -0400] "GET /favicon.ico HTTP/1.1" 404 300 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10"
192.168.10.21 - - [05/Jun/2009:16:41:39 -0400] "GET / HTTP/1.1" 200 56 "-" "Mozilla/5.0 (compatible; Konqueror/3.5; Linux 2.6.27.7-smp) KHTML/3.5.10 (like Gecko)"
192.168.10.21 - - [05/Jun/2009:16:41:39 -0400] "GET /favicon.ico HTTP/1.1" 404 300 "-" "Mozilla/5.0 (compatible; Konqueror/3.5; Linux 2.6.27.7-smp) KHTML/3.5.10 (like Gecko)"
192.168.10.21 - - [05/Jun/2009:16:41:50 -0400] "GET /login.html HTTP/1.1" 404 299 "-" "Mozilla/5.0 (compatible; Konqueror/3.5; Linux 2.6.27.7-smp) KHTML/3.5.10 (like Gecko)"

This snippet from error_log shows the same errors but in greater detail:

[Fri Jun 05 13:41:10 2009] [notice] mod_python: using mutex_directory /tmp
[Fri Jun 05 13:41:11 2009] [notice] Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny3 with Suhosin-Patch mod_python/3.3.1 Python/2.5.2 mod_perl/2.0.4 Perl/v5.10.0 configured -- resuming normal operations
[Fri Jun 05 16:41:39 2009] [error] [client 192.168.10.21] File does not exist: /var/www/favicon.ico
[Fri Jun 05 16:41:50 2009] [error] [client 192.168.10.21] File does not exist: /var/www/login.html

The configuration of Apache takes place in /etc/httpd/httpd.conf. This lengthy, but well documented, configuration file is in part structured similarly to an HTML page. To strip out the comments you can easily use grep:

root@lpislack:~# grep -v ^# /etc/httpd/httpd.conf | grep -v ^$ | grep -v "^    #"
ServerRoot "/usr"
Listen 80
LoadModule auth_basic_module lib/httpd/modules/mod_auth_basic.so
LoadModule auth_digest_module lib/httpd/modules/mod_auth_digest.so
...
LoadModule log_config_module lib/httpd/modules/mod_log_config.so
LoadModule userdir_module lib/httpd/modules/mod_userdir.so
LoadModule alias_module lib/httpd/modules/mod_alias.so
LoadModule rewrite_module lib/httpd/modules/mod_rewrite.so
User apache
Group apache
ServerAdmin webadmin@your.site
DocumentRoot "/srv/httpd/htdocs"
<Directory />
    Options FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>
<Directory "/srv/httpd/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
DirectoryIndex index.html
ErrorLog "/var/log/httpd/error_log"
LogLevel warn
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog "/var/log/httpd/access_log" common
ScriptAlias /cgi-bin/ "/srv/httpd/cgi-bin/"
<Directory "/srv/httpd/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>
DefaultType text/plain
TypesConfig /etc/httpd/mime.types
root@lpislack:~#

This slightly stripped down httpd.conf is taken from a Slackware 13.0 system (lpislack). There are two terms to know when talking about httpd.conf: "directives" and "containers". "Directives" are the configuration options (and their values) themselves, while "containers" are directories or collections of files. Any directive inside a container is only valid inside this container; directives outside any container have global effect for the whole site. On the other hand, some directives are only valid inside a container.

ServerRoot "/usr"
This is a tricky one. All relative paths start from here; the absolute ones are, as implied by the name, absolute.
Listen 80
The TCP port the httpd listens on for incoming connection requests. If our machine has more than one network address, we can bind the httpd to one (or more) IP address/port combinations here as well.
LoadModule auth_basic_module lib/httpd/modules/mod_auth_basic.so
Loads the module auth_basic_module located in lib/httpd/modules/mod_auth_basic.so relative to the ServerRoot, so the whole path to this module is /usr/lib/httpd/modules/mod_auth_basic.so
User apache
The user account httpd runs as. This had better be a restricted account. One (the first) httpd process has to run as root if it wants to claim port 80.
Group apache
The group of the user httpd runs as.
ServerAdmin webadmin@your.site
The e-mail address of the administrator responsible for running the httpd. This shows up when errors occur.
DocumentRoot "/srv/httpd/htdocs"
This is the directory where the actual HTML documents live on your hard drive!
<Directory /> ... </Directory>
This is a container object. All directives inside are only valid for this directory "/" and all of its subdirectories.
Options FollowSymLinks
Potential security risk! Does what its name suggests.
AllowOverride None
You can override most directives with a .htaccess file. This is a security risk, and the use of .htaccess is denied by this directive.
Order deny,allow
Controls the access to files and directories. First check who is denied, then check who is allowed. If neither rule matches, or both match, the rule named last in the Order directive wins.
Deny from all
Denies all hosts the access to all file in this container.
<Directory "/srv/httpd/htdocs"> ... </Directory>
Container for the ServerRoot directory. Note the Order allow,deny and the Allow from all directives. Here we want access from all hosts.
DirectoryIndex index.html
The file with this name is presented to the client when a web browser accesses a directory and not a specific HTML page.
If no index.html exists in this directory, the contents of the directory itself are shown. Options Indexes allows this, while Options -Indexes generates an error message instead of listing the directory's contents.
ErrorLog "/var/log/httpd/error_log"
Sets the logfile for error messages.
LogLevel warn
Sets the verbosity of the error messages.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
Sets the format of the entries in the custom log file (usually access_log)
CustomLog "/var/log/httpd/access_log" common
Sets name and location of the custom log file.
ScriptAlias /cgi-bin/ "/srv/httpd/cgi-bin/"
Directory for CGI scripts.
DefaultType text/plain
Apache uses this MIME type for the HTML pages it provides to the web browser, if the HTML page itself contains no other information.
TypesConfig /etc/httpd/mime.types
List of MIME types to use for different types of file names.

Access restrictions methods and files

[edit | edit source]

Access to files and directories on the web server can be restricted based on the client machine (hostname, domain, IP address, or network) or based on user name and password. While access can be restricted by these methods, all content transmitted in both directions is still unencrypted! To secure the communication and ensure the identity of the web server, the SSL/TLS protocol will be used in the next chapter of this book.

Container

[edit | edit source]

The behaviour of the Apache web server can be finely tuned in the Apache configuration (or the .htaccess file) on a per-directory (<Directory> container), per-file (<Files> container), or per-URL (<Location> container) basis. The directives inside a <Directory> (or <Location>) container are valid for the directory itself and all its subdirectories. Most of these directives can be overwritten by external configuration files, usually .htaccess. This is highly discouraged for security and sanity reasons. Some possible values for AllowOverride are:

None
no use of external configuration changes allowed (safest)
All
all directives can be changed (most insecure)
Limit
some changes are allowed (not secure at all)
AuthConfig
mainly authentication related directives can be changed
Options
mainly Options directives can be overwritten

Machine Restrictions

[edit | edit source]

Order sets the sequence of access restrictions, where the last matching rule wins. The rule named last is also the default if neither rule matches. The possible Allow/Deny restrictions are hostname (host.domain.example), domain (domain2.example), IP address (192.168.10.3) and network (192.168.10 or 192.168.10.0/24).

<Directory "/srv/httpd/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
    Deny from example.com
</Directory>

All access from everywhere is allowed, but the domain example.com is denied.
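The opposite policy, denying everything except one internal network, looks like this (a sketch; the directory and network are hypothetical):

<Directory "/srv/httpd/htdocs/intranet">
    Order deny,allow
    Deny from all
    Allow from 192.168.10.0/24
</Directory>

The deny rules are evaluated first here, so a client from the internal network, which matches both rules, is let in by the Allow rule evaluated last.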

User Based Restrictions

[edit | edit source]

User based access restrictions are insecure on different levels:

  • passwords are not encrypted (danger: snooping)
  • every content up- and download is clear text (danger: snooping)
  • there is no guarantee about the identity of the server (danger: fraud/phishing)

One big part of Apache's flexibility is its capability to talk to different back ends for user authentication, the most simple being plain text files, which are OK for smaller numbers of users but do not scale to more than (about) 150 people.

Usernames, passwords and groups are stored in text files usually called .htpasswd and .htgroup. These names are defined in httpd.conf or .htaccess by the directives AuthUserFile and AuthGroupFile. Both directives are part of the mod_auth module. We create/change a username and password with the htpasswd utility. The -c option creates a new password file; if such a file already exists it will be overwritten without warning! htpasswd requires two parameters: the password file and the username.

root@lpislack# htpasswd -c /etc/httpd/htpasswd newuser
...
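Group-based restrictions work along the same lines. A sketch (the group file, group name and second user are hypothetical): a group file /etc/httpd/htgroup contains lines of the form

staff: newuser otheruser

and the container then requires group membership instead of just any valid user:

<Directory "/srv/httpd/htdocs/staff">
    AuthType Basic
    AuthName "Staff only"
    AuthUserFile /etc/httpd/htpasswd
    AuthGroupFile /etc/httpd/htgroup
    require group staff
</Directory>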

One other important thing to keep in mind is that the only safe place for password file and group file is outside the DocumentRoot, where these files can't accidentally or maliciously be downloaded by unauthorized visitors.

Going back to reality, overwriting the httpd.conf directives with a .htaccess file and placing the .htpasswd and .htgroup inside the document directories is often done, if the web site administrator does not have full access to the Apache configuration, e. g. in shared hosting environments. To protect these files one can restrict the access to them in a <Files> container spelled out in the httpd.conf:

<FilesMatch "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy All
</FilesMatch>

Example 1

[edit | edit source]

This example shows the preferred, but sadly not always possible, configuration. The restricted directory is /srv/httpd/htdocs/private1, which can be reached at http://lpislack.vbox.privat/private1/ by the web browser.

httpd.conf:

<Directory "/srv/httpd/htdocs/private1">
    AuthType Basic
    AuthName "Private1! Restricted Access!"
    require valid-user
    AuthUserFile /etc/httpd/htpasswd
</Directory>

After fiddling with the configuration file we probably should restart the httpd server process.

root@lpislack:/etc/httpd# /etc/rc.d/rc.httpd restart

Create the password file /etc/httpd/htpasswd:

root@lpislack:/etc/httpd# htpasswd -c /etc/httpd/htpasswd firstuser
New password:
Re-type new password:
Adding password for user firstuser
root@lpislack:/etc/httpd# cat htpasswd
firstuser:2km7TAXpj3scw
root@lpislack:/etc/httpd#

This file is only accessible by authorized (root!) users. (And by the way, the password is tee2Seih.)

The password protected page /srv/httpd/htdocs/private1/index.html source code:

root@lpislack:/srv/www/htdocs/private1# cat index.html
<html><body><h1>This is private!</h1></body></html>

Example 2

[edit | edit source]

This example shows a commonly used configuration. It is not the best, but sometimes the only possible setup. We can do much better (much safer) if we can locate the password file outside the DocumentRoot. The restricted directory here is /srv/httpd/htdocs/private2, which can be reached at http://lpislack.vbox.privat/private2/ by the web browser.

The only change to httpd.conf is to set AllowOverride. In fact, if we can change httpd.conf, we could do the right thing in the first place (see Example 1).

<Directory "/srv/httpd/htdocs/private2">
    AllowOverride AuthConfig
</Directory>

Set up the external configuration file .htaccess in /srv/httpd/htdocs/private2/:

    AuthType Basic
    AuthName "Private2! Restricted Access!"
    require valid-user
    AuthUserFile /srv/httpd/htdocs/private2/.htpasswd

Restart httpd:

root@lpislack:/etc/httpd# /etc/rc.d/rc.httpd restart

Create the password file with the user seconduser and the password uu2yo1Wo:

root@lpislack:/etc/httpd# htpasswd -c /srv/www/htdocs/private2/.htpasswd seconduser
New password:
Re-type new password:
Adding password for user seconduser
root@lpislack:/etc/httpd# cat /srv/www/htdocs/private2/.htpasswd
seconduser:2l.jKENGUwyQ6

Modules and CGI

[edit | edit source]

Flexibility and easy extendability are two important reasons for Apache's success. They are achieved in part by the CGI (Common Gateway Interface) concept and the ability to extend an already compiled Apache instance with modules. CGI programs (often called "CGI scripts") are executable programs that can be written in any language, be it bash, perl, php, basic, assembler or ada. They run on the server, which uses up hardware resources of the server (RAM and CPU time) but does not impact the client, which receives what looks like any static HTML page, although the page was dynamically created by the CGI program. The httpd takes the output of the CGI program and hands it unchanged and unchecked to the client web browser (e.g., the HTTP headers have to be crafted by the CGI program). CGI programs can also take user input (via GET or POST requests).

Example

[edit | edit source]

This bash script outputs "don't try this at home" in ugly blinking letters and prints the content of /etc/passwd to show how dangerous CGI programming can be!

#!/bin/sh
echo "Content-type: text/html"
echo ""
echo "<html>"
echo "<body>"
echo "<blink>DON'T TRY THIS AT HOME!</blink>"
cat /etc/passwd
echo "</body>"
echo "<html>"

Modules

[edit | edit source]

Modules on the other hand can extend the abilities of the httpd with features that are not part of the main Apache source code. (Some modules can be compiled into the httpd directly.) Modules can be switched on and off (e.g. for security reasons) with a simple change in httpd.conf. Most modules need additional configuration directives in httpd.conf, usually provided by importing configuration files.

For security reasons we will only enable modules actually needed by our web site.

mod_php

[edit | edit source]

One very useful example is mod_php. If PHP code is executed as a simple CGI script, every request starts the PHP parsing engine, the HTML text is generated, and then the PHP parsing engine is shut down.

mod_php runs the PHP engine as a module inside the Apache process, so the PHP engine persists over multiple requests. This drastically reduces the overhead of using PHP for dynamic web site creation. As an added bonus we can use PHP code directly in our HTML source code. This code also runs on the server side and is replaced by its output before the complete HTML page is sent to the client. If we use a database, the database connection can also be persisted with mod_php.

The PHP language itself is configured by php.ini, located in /etc/httpd/, but this file usually doesn't need to be changed.

To enable mod_php on Slackware 13.0 we only need to uncomment the line

Include /etc/httpd/mod_php.conf

in /etc/httpd/httpd.conf. This will include this already set up configuration directly into our httpd.conf.

mod_php.conf:

LoadModule php5_module lib/httpd/modules/libphp5.so
AddType application/x-httpd-php .php

We now change AddType application/x-httpd-php to

AddType application/x-httpd-php .php .html .htm

to use PHP code inside HTML documents. This can be convenient, but increases the workload on high traffic websites considerably, because every requested HTML page is shoved through the PHP interpreter. Another thing we can do to make our lives a bit easier is adding index.php to the DirectoryIndex directive.

Now we restart the httpd.

Example

[edit | edit source]

To check if it works we create testforphp.php somewhere below the DocumentRoot

<html>
<head>
<title>Status for PHP</title>
</head>
<body>
<?php
phpinfo();
?>
</body>
</html>

Now remove this file (or at least deny read access), because it will broadcast our entire web server configuration to the whole internet, where every creep on the planet is just milliseconds away from us. (Try searching for intitle:phpinfo "PHP Version" in Google...)

mod_perl

[edit | edit source]

While Slackware 13.0 comes with perl as an installable package, the minimal test CGI script printenv in /srv/www/cgi-bin needs a little help. First we need to mark it as executable:

root@lpislack:/srv/www/htdocs# chmod a+x ../cgi-bin/printenv

and then change the first line "#!/usr/local/bin/perl" to "#!/usr/bin/perl". Now we can navigate our web browser to http://lpislack.vbox.privat/cgi-bin/printenv and see if it works:

DOCUMENT_ROOT="/srv/httpd/htdocs"
GATEWAY_INTERFACE="CGI/1.1"
HTTP_ACCEPT="text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1"
HTTP_ACCEPT_CHARSET="iso-8859-1, utf-8, utf-16, *;q=0.1"
HTTP_ACCEPT_ENCODING="deflate, gzip, x-gzip, identity, *;q=0"
HTTP_ACCEPT_LANGUAGE="de-DE,de;q=0.9,en;q=0.8"
HTTP_CACHE_CONTROL="no-cache"
HTTP_CONNECTION="Keep-Alive, TE"
HTTP_HOST="lpislack.vbox.privat"
HTTP_TE="deflate, gzip, chunked, identity, trailers"
HTTP_USER_AGENT="Opera/9.80 (X11; Linux i686; U; de) Presto/2.2.15 Version/10.10"
PATH="/bin:/usr/bin:/sbin:/usr/sbin"
QUERY_STRING=""
REMOTE_ADDR="192.168.10.21"
REMOTE_PORT="40206"
REQUEST_METHOD="GET"
REQUEST_URI="/cgi-bin/printenv"
SCRIPT_FILENAME="/srv/httpd/cgi-bin/printenv"
SCRIPT_NAME="/cgi-bin/printenv"
SERVER_ADDR="172.25.28.4"
SERVER_ADMIN="you@example.com"
SERVER_NAME="lpislack.vbox.privat"
SERVER_PORT="80"
SERVER_PROTOCOL="HTTP/1.1"
SERVER_SIGNATURE=""
SERVER_SOFTWARE="Apache/2.2.14 (Unix) DAV/2 PHP/5.2.12"
UNIQUE_ID="S3ibiawZHAQAAAq2Hf0AAAAD"

We do not use mod_perl at this time but run CGI scripts written in Perl as we would run any other executable.

So mod_perl (from http://perl.apache.org) does for the Perl language the same as mod_php does for PHP: it adds native language support directly into the Apache web server and so reduces load and speeds up response time.

Sadly there is no pre-built mod_perl package for Slackware 13.0, but http://slackbuilds.org has at http://slackbuilds.org/repository/13.0/network/mod_perl/ a tried and true build script for everyone who can read the instructions. (As a sidenote, SlackBuilds are the preferred method to build Slackware packages from source.)

This situation demonstrates the use of modules: functionality that is not included in Apache can be added by external modules without recompiling Apache. If there is a bugfix for Apache and we have to upgrade, mod_perl will still work fine as a module. If mod_perl were compiled into Apache, we would have to get the source code, fit it to our setup, compile and install it. With every update we would need to go through the same process, just to keep using Perl.

After building and installing the package we simply need to include mod_perl.conf in httpd.conf and restart the Apache server.

mod_perl.conf:

LoadModule perl_module lib/httpd/modules/mod_perl.so
AddHandler perl-script pl
<Files *.pl>
     # mod_perl mode
     SetHandler perl-script
     PerlResponseHandler ModPerl::Registry
     PerlOptions +ParseHeaders
     Options +ExecCGI
</Files>

Perl files can live anywhere in the DocumentRoot and their name has to end in ".pl". Let's go back to the printenv example. If we call it again, it will still be executed as CGI, but if we copy it to the DocumentRoot and rename it printenv.pl it will be run by mod_perl, as we can clearly see by the MOD_PERL and MOD_PERL_API_VERSION lines in the output below:

DOCUMENT_ROOT="/srv/httpd/htdocs"
GATEWAY_INTERFACE="CGI/1.1"
HTTP_ACCEPT="text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1"
HTTP_ACCEPT_CHARSET="iso-8859-1, utf-8, utf-16, *;q=0.1"
HTTP_ACCEPT_ENCODING="deflate, gzip, x-gzip, identity, *;q=0"
HTTP_ACCEPT_LANGUAGE="de-DE,de;q=0.9,en;q=0.8"
HTTP_CONNECTION="Keep-Alive, TE"
HTTP_HOST="lpislack.vbox.privat"
HTTP_TE="deflate, gzip, chunked, identity, trailers"
HTTP_USER_AGENT="Opera/9.80 (X11; Linux i686; U; de) Presto/2.2.15 Version/10.10"
MOD_PERL="mod_perl/2.0.4"
MOD_PERL_API_VERSION="2"
PATH="/bin:/usr/bin:/sbin:/usr/sbin"
QUERY_STRING=""
REMOTE_ADDR="192.168.10.21"
REMOTE_PORT="45519"
REQUEST_METHOD="GET"
REQUEST_URI="/printenv.pl"
SCRIPT_FILENAME="/srv/httpd/htdocs/printenv.pl"
SCRIPT_NAME="/printenv.pl"
SERVER_ADDR="172.25.28.4"
SERVER_ADMIN="webadmin@lpislack.vbox.privat"
SERVER_NAME="lpislack.vbox.privat"
SERVER_PORT="80"
SERVER_PROTOCOL="HTTP/1.1"
SERVER_SIGNATURE=""
SERVER_SOFTWARE="Apache/2.2.14 (Unix) DAV/2 PHP/5.2.12 mod_perl/2.0.4 Perl/v5.10.0"
UNIQUE_ID="S3i3VqwZHAQAAAxfFDUAAAAA"

Restrict Resource Usage

[edit | edit source]

Apache is capable of serving pretty busy web sites. One mechanism to provide quick response times under heavy load is to have waiting processes ready to jump into action at any given time. So unlike most other programs, Apache spawns multiple processes when it is started. The number of processes is adjusted depending on the number of connections by creating and destroying child processes as needed.

One control process listens for new requests, usually on TCP port 80, while every client is connected to its very own child process that serves requests for the whole lifetime of the connection. StartServers determines the number of processes to begin with when Apache is started. But this is of little consequence, because MinSpareServers sets the minimum number of idle Apache processes waiting to serve new connections. If fewer spare servers are left, new ones are created at a rate of one per second. If that is not enough, the rate of process creation is doubled every second, up to 32 new processes per second. If this is still not sufficient, we surely have other problems. On the other hand, if there are more idle servers than MaxSpareServers, the unneeded processes are shut down one by one.

MaxClients limits the absolute number of simultaneously running server processes, and with that the maximum number of simultaneous client connections. The default maximum of 256 is a hard limit set at compile time. If there are more connection requests than Apache processes to serve them, the requests are first moved to a backlog; only if this backlog fills up too are further requests rejected.

The lifetime of an Apache child process can be limited by the absolute number of requests it will serve, as defined by MaxRequestsPerChild. This can mitigate problems with memory leaks on less stable platforms, or problems caused by buggy modules or badly written CGIs. If set to 0, child processes can live indefinitely, unless they are terminated because there are too many spare servers.
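
Putting these directives together, a tuning block for the prefork MPM in httpd.conf might look like the following sketch (the numbers are illustrative starting points, not recommendations):

<IfModule prefork.c>
    # processes created at startup
    StartServers          5
    # keep between 5 and 10 idle processes around
    MinSpareServers       5
    MaxSpareServers      10
    # never run more than 150 children at once
    MaxClients          150
    # recycle each child after 1000 requests to contain memory leaks
    MaxRequestsPerChild 1000
</IfModule>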

Redhat/CentOS

[edit | edit source]

Installation

# yum install httpd

The web server binary is called httpd. The control script is called apachectl. The access and error log files are located in /var/log/httpd/ and called access_log and error_log.

Debian

[edit | edit source]

Installation

# aptitude install apache2

The web server binary is called apache2. The control script is called apache2ctl, which is not simply apachectl under another name. The access and error log files are located in /var/log/apache2/ and called access.log and error.log. (Note the dot "." instead of the underscore "_".)


208.2 Maintaining A Web Server

[edit | edit source]

Detailed Objectives (208.2)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to configure a web server to provide HTTPS.


Key knowledge areas

  • SSL configuration files, tools and utilities.
  • Generate a server private key and CSR for a commercial CA.
  • Generate a self-signed Certificate.
  • Install the key and certificate, including intermediate CAs.
  • Configure Virtual Hosting using SNI.
  • Awareness of the issues with Virtual Hosting and use of SSL.
  • Security issues in SSL use, disable insecure protocols and ciphers.


Terms and Utilities:

  • Apache2 configuration files
  • /etc/ssl/, /etc/pki/
  • openssl, CA.pl
  • SSLEngine, SSLCertificateKeyFile, SSLCertificateFile
  • SSLCACertificateFile, SSLCACertificatePath
  • SSLProtocol, SSLCipherSuite, ServerTokens, ServerSignature, TraceEnable

Overview

[edit | edit source]

Apache is an impressive and powerful application. It is not only able to serve simple (static) HTTP pages, which is (essentially) a trivial task. Apache can host multiple web sites (e.g. http://www.example.com and http://www.beispiel.de) on one physical machine at one IP address using one Apache process, by means of "virtual hosts". Apache can also use multiple IP addresses to serve different web sites from the same physical machine, which does not (necessarily) require different network cards. This is also realized with "virtual hosts".

Also, Apache can use very sophisticated methods to redirect queries.

Most important, at least to me, is the use of SSL (via OpenSSL). SSL can do many things for many people: it can secure (encrypt) the content going back and forth between the web client and the web server. It can also ensure the identity of both communicating parties, the server and the client.
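
As a taste of the directives listed in the objectives, a minimal HTTPS virtual host sketch could look like this (hostname, file paths and the protocol list are assumptions for illustration, not defaults):

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
    # disable insecure protocol versions
    SSLProtocol all -SSLv2 -SSLv3
</VirtualHost>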

Virtual Hosts

[edit | edit source]

VirtualHost sections contain directives that apply only to a specific hostname or IP address. See [1] and [2]

IP Based Virtual Hosts

[edit | edit source]
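
The idea, sketched with illustrative addresses and paths: every site gets its own IP address on the machine, and Apache picks the site by the address the request came in on:

<VirtualHost 172.25.28.4:80>
    ServerName www.example.com
    DocumentRoot /srv/httpd/htdocs-example
</VirtualHost>

<VirtualHost 172.25.28.5:80>
    ServerName www.beispiel.de
    DocumentRoot /srv/httpd/htdocs-beispiel
</VirtualHost>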

Name Based Virtual Hosts

[edit | edit source]
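
Here all sites share one address, and Apache selects the site by the Host: header the client sends. A minimal sketch for Apache 2.2 (hostnames and paths are again illustrative):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /srv/httpd/htdocs-example
</VirtualHost>

<VirtualHost *:80>
    ServerName www.beispiel.de
    DocumentRoot /srv/httpd/htdocs-beispiel
</VirtualHost>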

OpenSSL

[edit | edit source]

OpenSSL is a collection of tools that implement the Transport Layer Security (TLS) protocol and handle the certificates it relies on.

What are certificates?

[edit | edit source]

Secure Socket Layer (SSL), or Transport Layer Security (TLS) as SSL versions beyond 3 are called now, uses public key cryptography to protect transactions over the insecure and unsecurable internet. Like all public key cryptographic schemes (that I know of), TLS uses a secret private key and an openly shared public key, called a certificate. The special twist with TLS certificates is the certification authority (CA). For a TLS certificate to be recognized as valid, it has to be (cryptographically) signed by a "Certification Authority".

Flashback Public Key Crypto

[edit | edit source]

In a nutshell, public key crypto works like this: there are two keys, one public key for everyone to have, and one private key, for my eyes only. The private key is also (usually) protected by a very strong password.

Both keys can be used to encrypt data that only the other key can decrypt. There is in principle no difference between the public and private key!

On the other hand, it seems to make no sense to encrypt data with your private key, because everyone on the internet already has, or can get, your public key and decrypt the data. But by encrypting data with your private key, you can prove you are in possession of the private key. This way you can (cryptographically) sign the data. To sign a piece of data we usually don't encrypt the whole of it but a (cryptographic) hash of it, and so can prove the authenticity of the data, provided we guard our private key very carefully. The password is simply a second security measure, in case the private key leaks into the public or gets lost.
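
A hands-on way to see this, sketched with the openssl command line (the file names are made up for the example):

# generate a passphrase-protected RSA key pair
openssl genrsa -aes128 -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# sign a SHA-256 hash of a file with the private key ...
openssl dgst -sha256 -sign private.pem -out data.sig data.txt

# ... and let anyone verify the signature with the public key
openssl dgst -sha256 -verify public.pem -signature data.sig data.txt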

What does a CA do exactly?

[edit | edit source]

A Certification Authority signs our public keys with its private key. Then they are called certificates. That's it! Almost. We send in a "certificate signing request" (more on this later), a claim of our identity and a varying sum of money, and the CA tries (depending on the amount of money we spent) to check our identity and, if this succeeds, will sign our request and finally send back the signed certificate. But keep in mind that there is only so much The Hong Kong Post Office(TM) (or any other CA) can do to verify e.g. a Brazilian identity. But now the problem is to get the certificate of the CA... and here the trick is: we already have it! Most pieces of software that can use TLS certificates come with a list of trusted (this is the magic word) CAs. Any new certificate (e.g. for shop.example.com) signed by one of these trusted CAs (e.g. by StartSSL) whose certificates are installed on our machines is also regarded as trusted. Unsigned certificates, or ones signed by an unknown CA, are regarded as "not trusted" and we are presented with a dire warning.

How do certificates work exactly?

[edit | edit source]

We can use TLS certificates with almost any insecure service on the internet, if we only try hard enough.

  • Web browsing (HTTPS instead of HTTP)
  • Sending Mail (SMTPS instead of SMTP)
  • Receiving Mail (IMAPS instead of IMAP/POP3S instead of POP3)
  • Chat (IRC over TLS)
  • VPN (OpenVPN)

How does it work (for web surfing)? A bit simplified:

  1. the client connects to the server
  2. the server sends over the certificate
  3. the client checks certain properties of the certificate
    1. the certificate is bound to the Fully Qualified Host Name (FQHN) of the server we connect to. The web browser checks whether the FQHN of the server and the certificate match; if not, it generates an error.
    2. the certificate needs to be signed by a trusted CA; if not, the web browser generates an error.
    3. certificates have a limited lifetime, depending on the amount of money we paid. The web browser checks if the certificate is still "fresh" and, if not, it generates an error.
    4. there is a list of invalid certificates on our computer. These certificates are revoked for different reasons: they were compromised, had errors, were stolen, ... If the server certificate is on that list, the client software generates an error.
  4. if the certificate is deemed valid (or if an invalid certificate is accepted anyway), the client encrypts a random value with the certificate and sends it to the server.
  5. only the client (because it generated it) and the server (because only it can decrypt it) know the random value generated by the client
  6. from the random value a symmetric key is generated on both ends, and any further communication both ways is encrypted with this generated key. (A symmetric key is used because symmetric encryption and decryption are much easier on the CPU.)

How to find and use certificates

[edit | edit source]

CA root certificates are stored on our computers in lots of different places. Often every piece of software that uses TLS brings its own list of trusted CAs.

  • openssl: /etc/ssl/certs
  • firefox: its own NSS certificate store, a built-in CA list plus a cert8.db/cert9.db database in the user's profile
  • thunderbird: likewise its own NSS certificate database in the user's profile
  • claws-mail: ~/.claws-mail/certs/

openssl, on the other hand, can act as a TLS client for classic "clear text" protocols (which is to say nearly all the older internet protocols), e.g. POP3.
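
For example (the server names are placeholders):

# talk to a POP3-over-TLS server and inspect its certificate
openssl s_client -connect pop.example.com:995

# for servers that upgrade a plain connection, s_client can issue STARTTLS itself
openssl s_client -connect mail.example.com:110 -starttls pop3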


How to get a certificate

[edit | edit source]

The easy way: Buy them from a CA

[edit | edit source]
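
This way is best summarized by the commands themselves; a sketch with illustrative file names:

# generate a passphrase-protected 2048-bit RSA server key
openssl genrsa -aes128 -out server.key 2048

# create a certificate signing request (CSR) for that key
openssl req -new -key server.key -out server.csr

The server.csr file is what we send to the commercial CA; server.key never leaves our machine. The CA returns the signed certificate, which we install together with any intermediate CA certificates.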

The fast way: Be your own CA

[edit | edit source]
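
If we only need encryption and all clients are under our control, we can sign the certificate ourselves. A one-step sketch (file names and lifetime are illustrative):

# create a new key and a self-signed certificate valid for one year
openssl req -new -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365

Clients will of course present the usual "untrusted certificate" warning unless the certificate is imported into their trust stores.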

The hard way: BE your OWN CA

[edit | edit source]
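
Running a complete CA of our own is what the CA.pl helper script shipped with OpenSSL is for. Its location varies by distribution; the path below is an assumption:

# create the CA directory structure plus the CA's own key and certificate
/usr/lib/ssl/misc/CA.pl -newca

# create a new request and sign it with our CA
/usr/lib/ssl/misc/CA.pl -newreq
/usr/lib/ssl/misc/CA.pl -sign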

208.3 Implementing a Proxy Server

[edit | edit source]

Detailed Objectives (208.3)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to install and configure a proxy server, including access policies, authentication and resource usage.


Key Knowledge Areas:

  • Squid 3.x configuration files, terms and utilities
  • Access restriction methods
  • Client user authentication methods
  • Layout and content of ACL in the Squid configuration files


Terms and Utilities:

  • squid.conf
  • acl
  • http_access


Exercises

[edit | edit source]
  • Implementing a proxy server

We will be using the squid web proxy server version 2.4 and Linux kernel version 2.4.

Proxying can be done in two ways: normal proxying and transparent proxying.

  • In normal proxying, the client specifies the hostname and port number of a proxy in his web browsing software. The browser then makes requests to the proxy, and the proxy forwards them to the origin servers.
  • In transparent proxying, the client does not know about the proxy: the packet filter on the gateway silently redirects the client's web connections to the proxy, which answers as if it were the origin server.

Use transparent proxying if:

  • You want to force clients on your network to use the proxy, whether they want to or not.
  • You want clients to use a proxy, but don't want them to know they're being proxied.
  • You want clients to be proxied, but don't want to go to all the work of updating the settings in hundreds or thousands of web browsers.

There are two types of transparent proxying:

  • Squid on the gateway box
  • Squid on a separate box from the gateway


Squid on the gateway box

[edit | edit source]

Setting up squid for ordinary proxying is quite simple: after installing squid, edit the default configuration file squid.conf. Find the following directives, uncomment them, and change them to the appropriate values:

  • httpd_accel_host virtual
  • httpd_accel_port 80
  • httpd_accel_with_proxy on
  • httpd_accel_uses_host_header on

Next, look at the cache_effective_user and cache_effective_group directives, and set them to a dedicated user and group (e.g. squid/squid).

Finally, look at the http_access directive. The default is usually "http_access deny all". This will prevent anyone from accessing squid. For now, you can change this to "http_access allow all", but once it is working, you will probably want to read the directions on ACLs (Access Control Lists) and set up the cache such that only people on your local network (or whatever) can access the cache.


ACLs in squid enable you to restrict access to the proxy.
The general format of an ACL rule is:
acl aclname acltype string1 ...
ACL rules can then be used in the http_access directive.

ACL types are :

  • src: acl aclname src ip-address/netmask
acl aclname src 172.16.1.0/24
  • dst: acl aclname dst ip-address/netmask
acl aclname dst 172.16.1.0/24
  • time: acl aclname time [day-abbreviations: M,T,W,H,F,A,S] [h1:m1-h2:m2]
acl ACLTIME time M 9:00-17:00
  • port: acl aclname port port-no
acl acceleratedport port 80
  • proto: acl aclname proto protocol
acl aclname proto HTTP FTP
  • method: acl aclname method method-type
acl aclname method GET POST
  • maxconn: acl aclname maxconn integer
acl twoconn maxconn 5
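
Putting acl and http_access together, a hedged squid.conf fragment might look like this (the network and the time window are made up for the example):

# allow the local network during office hours, deny everything else
acl localnet src 172.16.1.0/24
acl workhours time MTWHF 9:00-17:00
http_access allow localnet workhours
http_access deny all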


Next, initialize the cache directories with squid -z (if this is not a new installation of squid, you should skip this step). Then launch squid via the /etc/init.d/squid script, and you should be able to set your web browser's proxy settings to the IP of the box and port 3128 (unless you changed the default port number) and access squid as a normal proxy.

Transparent proxying can be set up in two different ways: on the router, or on another (remote) host. Transparent proxying on the router involves setting up squid in the "normal" way, and configuring the packet filtering subsystem to redirect clients' connections to squid.
The required kernel networking options are:

  • Under 'General Setup'
Networking support
Sysctl support
  • Under 'Networking Options'
Network packet filtering
TCP/IP networking
  • Under 'Networking Options' -> IP: Netfilter Configuration
Connection tracking
IP tables support
Full NAT
REDIRECT target support
  • Under 'File Systems'
/proc filesystem support

You must say NO to "Fast switching" under Networking Options!


Once you have your new kernel up and running, make sure you have IP forwarding enabled. Next, to configure iptables for transparent proxying, all you have to do is:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

(HTTPS (port 443) is not redirected: squid cannot decrypt the TLS traffic, so transparent proxying in this manner only works for plain HTTP.)

Transparent proxying to a remote box

[edit | edit source]

Let's assume we have two boxes called squid-box and iptables-box, and that they are on the network local-network. First, the machine that squid will be running on, squid-box, does not need iptables or any special kernel options, just squid. You *will*, however, need the 'httpd_accel' options as described above. Now, the machine that iptables will be running on, iptables-box, needs the kernel configured as described above, except that it does not need the REDIRECT target support. You will need two iptables rules:

iptables -t nat -A PREROUTING -i eth0 -s ! squid-box -p tcp --dport 80 -j DNAT --to squid-box:3128
iptables -t nat -A POSTROUTING -o eth0 -s local-network -d squid-box -j SNAT --to iptables-box

The first one sends the packets arriving at iptables-box on to squid-box. The second makes sure that the reply gets sent back through iptables-box, instead of directly to the client. This is very important, because otherwise squid will never receive the answer from the target web server (and thus no caching can take place!).


Key terms, files and utilities:

  • squid.conf
  • acl
  • http_access


Network Client Management

[edit | edit source]

Detailed Objectives (210.1)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to configure a DHCP server. This objective includes setting default and per client options, adding static hosts and BOOTP hosts. Also included is configuring a DHCP relay agent and maintaining the DHCP server.


Key Knowledge Areas:

  • DHCP configuration files, terms and utilities.
  • Subnet and dynamically-allocated range setup.
  • Awareness of DHCPv6 and IPv6 Router Advertisements


Terms and Utilities:

  • dhcpd.conf
  • dhcpd.leases
  • DHCP Log messages in syslog or systemd journal
  • arp
  • dhcpd
  • radvd
  • radvd.conf

DHCP configuration

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure a DHCP server and set default options, create a subnet, and create a dynamically-allocated range. This objective includes adding a static host, setting options for a single host, and adding bootp hosts. Also included is to configure a DHCP relay agent, and reload the DHCP server after making changes.

Key files, terms, and utilities include:

dhcpd.conf 
dhcpd.leases

Exercises

[edit | edit source]

DHCP?

[edit | edit source]

Most people reading this will already know the DHCP protocol, but just as a quick reminder: DHCP stands for Dynamic Host Configuration Protocol and is commonly used to distribute network settings, such as the default gateway, nameservers, IP addresses and much more.

As a small illustration of the protocol itself: the client broadcasts a DHCPDISCOVER, every reachable DHCP server answers with a DHCPOFFER, the client picks one offer and broadcasts a DHCPREQUEST for it, and the chosen server confirms the lease with a DHCPACK.

Configuring dhcpd

[edit | edit source]

After the installation of dhcpd, the main configuration file can be found at /etc/dhcpd.conf. On Debian installations, one should edit /etc/default/dhcp as soon as the installation is finished and change the following line according to your setup.

INTERFACES="eth1" # or "eth1 eth2", whatever interfaces you wish to serve ip's.

The dhcpd.conf file is divided into global parameters and subnet-specific parameters. Each subnet can override the global parameters. The most commonly used parameters are the following.

option domain-name "example.com"; 
option domain-name-servers "192.168.0.1, 193.190.63.172"
option subnet-mask 255.255.255.0; # global Subnet mask
default-lease-time 600; # Seconds each DHCP lease is granted and after which a request for the same ip is launched.
max-lease-time 7200; # If DHCP server does not respond, keep IP till 7200 seconds are passed.

subnet 192.168.0.0 netmask 255.255.255.240 { # Subnet for the first small group of devices: servers, printers and the gateway
 range 192.168.0.10 192.168.0.13;  # Range of IPs handed out dynamically (here: the printers)
 option subnet-mask 255.255.255.240;
 option broadcast-address 192.168.0.15; # This is the subnets broadcast address
 option routers 192.168.0.14; # The gateway of this subnet
 option time-servers 192.168.0.14; # Gateway is running a timeserver
 option ntp-servers 192.168.0.14; # Gateway running a timeserver
}
subnet 192.168.0.16 netmask 255.255.255.224 { # Subnet for 29 computers
 range 192.168.0.17 192.168.0.45;
 option subnet-mask 255.255.255.224;
 option broadcast-address 192.168.0.47;
 option routers 192.168.0.46;
}
group {
 host server1 { # the first fixed server for subnet 192.168.0.0/28
  server-name server1;
  hardware ethernet 0f:45:d3:23:11:90;
  fixed-address 192.168.0.1;
 }
 host server2 {
  server-name server2;
  hardware ethernet 0f:45:d3:23:11:91;
  fixed-address 192.168.0.2;
 }
}

This example is just providing a hint about possible options and overrides.

More info can be found in the dhcpd.conf(5) and dhcp-options(5) man pages. Look in those pages too for information about using the DHCP server to serve BOOTP as well, useful for diskless clients.
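
A quick round-trip for configuration changes might look like this (the lease file location and the init script path vary by distribution):

# check the configuration for syntax errors before restarting
dhcpd -t -cf /etc/dhcpd.conf

# restart the server to activate the changes
/etc/init.d/dhcpd restart

# granted leases end up in the leases database
less /var/lib/dhcp/dhcpd.leases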


Detailed Objective

[edit | edit source]

Weight: 1

Description:
Candidates should be able to configure an NIS server. This objective includes configuring a system as an NIS client.

  • Key knowledge area(s):
    • NIS configuration files, terms and utilities
    • Create NIS maps for major configuration files
    • Manipulate nsswitch.conf to configure the ability to search local files, DNS, NIS, etc.
  • The following is a partial list of the used files, terms and utilities:
    • ypbind
    • ypcat
    • ypmatch
    • ypserv
    • yppasswd
    • yppoll
    • yppush
    • ypwhich
    • rpcinfo
    • nsswitch.conf
    • ypserv.conf
    • contents of /var/yp/*
    • netgroup
    • nicknames
    • securenets
    • Makefile

NIS configuration

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure an NIS server and create NIS maps for major configuration files. This objective includes configuring a system as a NIS client, setting up an NIS slave server, and configuring ability to search local files, DNS, NIS, etc. in nsswitch.conf.

Key files, terms, and utilities include:

nisupdate, ypbind, ypcat, ypmatch, ypserv, ypswitch, yppasswd, yppoll, yppush, ypwhich, rpcinfo
nis.conf, nsswitch.conf, ypserv.conf 
/etc/nis/netgroup
/etc/nis/nicknames
/etc/nis/securenets 

NIS stands for Network Information Service. Its purpose is to provide information that has to be known throughout the network to all machines on the network. Information likely to be distributed by NIS includes login names/passwords/home directories (/etc/passwd) and group information (/etc/group).

If, for example, your password entry is recorded in the NIS passwd database, you will be able to login on all machines on the network which have the NIS client programs running.

Within a network there must be at least one machine acting as a NIS server. You can have multiple NIS servers, each serving different NIS "domains" - or you can have cooperating NIS servers, where one is the master NIS server and all the others are so-called slave NIS servers (for a certain NIS "domain", that is!) - or you can have a mix of them...

Slave servers only have copies of the NIS databases and receive these copies from the master NIS server whenever changes are made to the master's databases. Depending on the number of machines in your network and the reliability of your network, you might decide to install one or more slave servers. Whenever a NIS server goes down or is too slow in responding to requests, a NIS client connected to that server will try to find one that is up or faster.

NIS databases are in so-called DBM format, derived from ASCII databases. For example, the files /etc/passwd and /etc/group can be directly converted to DBM format using ASCII-to-DBM translation software ("makedbm", included with the server software). The master NIS server should have both the ASCII databases and the DBM databases.

Slave servers will be notified of any change to the NIS maps (via the "yppush" program), and automatically retrieve the necessary changes in order to synchronize their databases. NIS clients do not need to do this since they always talk to the NIS server to read the information stored in its DBM databases.

To run any of the software mentioned below you will need to run the program /usr/sbin/portmap. The RPC portmapper (portmap(8)) is a server that converts RPC program numbers into TCP/IP (or UDP/IP) protocol port numbers. It must be running in order to make RPC calls (which is what the NIS/NIS+ client software does) to RPC servers (like a NIS or NIS+ server) on that machine. When an RPC server is started, it will tell portmap what port number it is listening to, and what RPC program numbers it is prepared to serve. When a client wishes to make an RPC call to a given program number, it will first contact portmap on the server machine to determine the port number where RPC packets should be sent.

Since RPC servers could be started by inetd(8), portmap should be running before inetd is started. For secure RPC, the portmapper needs the Time service. Make sure that the Time service is enabled in /etc/inetd.conf on all hosts:

# Time service is used for clock synchronization.
#
time    stream  tcp     nowait  root    internal
time    dgram   udp     wait    root    internal

IMPORTANT: Don't forget to restart inetd after changing its configuration file!

What do you need to set up NIS?

Determine whether you are a server, a slave or a client. There are two cases:

  • Your machine is going to be part of a network with existing NIS servers.
  • You do not have any NIS servers in the network yet.

In the first case, you only need the client programs (ypbind, ypwhich, ypcat, yppoll, ypmatch). The most important program is ypbind. This program must be running at all times, which means it should always appear in the list of processes. It is a daemon process and needs to be started from the system's startup file (e.g. /etc/init.d/nis, /sbin/init.d/ypclient, /etc/rc.d/init.d/ypbind, /etc/rc.local). As soon as ypbind is running, your system has become a NIS client.

In the second case, if you don't have NIS servers, then you will also need the NIS server program (usually called ypserv). The section "Setting up a NIS Server" below describes how to set up a NIS server on your Linux machine using the ypserv daemon.

Setting Up the NIS Client

[edit | edit source]

The ypbind daemon

Newer ypbind versions have a configuration file called /etc/yp.conf. You can hardcode a NIS server there - for more info see the manual page for ypbind(8). You also need this file for NYS. An example:

 ypserver 10.10.0.1
 ypserver 10.0.100.8
 ypserver 10.3.1.1

If the system can resolve the hostnames without NIS, you may use the name; otherwise you have to use the IP address. ypbind 3.3 has a bug and will only use the last entry (ypserver 10.3.1.1 in the example); all other entries are ignored. ypbind-mt handles this correctly and uses the server that answers first.

It might be a good idea to test ypbind before incorporating it in the startup files. To test ypbind do the following:

Make sure you have your YP-domain name set. If it is not set then issue the command: /bin/domainname nis.domain

where nis.domain should be some string _NOT_ normally associated with the DNS-domain name of your machine! The reason for this is that it makes it a little harder for external crackers to retrieve the password database from your NIS servers. If you don't know what the NIS domain name is on your network, ask your system/network administrator.

Start up "/usr/sbin/portmap" if it is not already running. Create the directory "/var/yp" if it does not exist. Start up "/usr/sbin/ypbind"

Use the command "rpcinfo -p localhost" to check if ypbind was able to register its service with the portmapper. The output should look like:

      program vers proto   port
       100000    2   tcp    111  portmapper
       100000    2   udp    111  portmapper
       100007    2   udp    637  ypbind
       100007    2   tcp    639  ypbind

Or like this (depending on the version of ypbind you are using) :

      program vers proto   port
       100000    2   tcp    111  portmapper
       100000    2   udp    111  portmapper
       100007    2   udp    758  ypbind
       100007    1   udp    758  ypbind
       100007    2   tcp    761  ypbind
       100007    1   tcp    761  ypbind

You may also run "rpcinfo -u localhost ypbind". This command should produce something like:

       program 100007 version 1 ready and waiting
       program 100007 version 2 ready and waiting

The output depends on the ypbind version you have installed. Only the "version 2" message is important. At this point you should be able to use NIS client programs like ypcat, etc... For example, "ypcat passwd.byname" will give you the entire NIS password database.

IMPORTANT: If you skipped the test procedure then make sure you have set the domain name and created the directory /var/yp. This directory MUST exist for ypbind to start up successfully. To check if the domain name is set correctly, use /bin/ypdomainname from yp-tools 2.2. It uses the yp_get_default_domain() function, which is more strict. It doesn't allow, for example, the "(none)" domain name, which is the default under Linux and causes a lot of problems.

If the test worked, you may now want to change your startup files so that ypbind will be started at boot time and your system will act as a NIS client. Make sure that the domain name is set before you start ypbind. Well, that's it. Reboot the machine and watch the boot messages to see if ypbind is actually started. For host lookups you must set (or add) "nis" to the lookup order line in your /etc/host.conf file. Please read the manpage "resolv+.8" for more details. Add the following line to /etc/passwd on your NIS clients:

+::::::

You can also use the + and - characters to include/exclude or change users. If you want to exclude the user guest just add -guest to your /etc/passwd file. You want to use a different shell (e.g. ksh) for the user "linux"? No problem, just add "+linux::::::/bin/ksh" (without the quotes) to your /etc/passwd. Fields that you don't want to change have to be left empty. You could also use Netgroups for user control.

For example, to allow login-access only to miquels, dth and ed, and all members of the sysadmin netgroup, but to have the account data of all other users available use:

     +miquels:::::::
     +ed:::::::
     +dth:::::::
     +@sysadmins:::::::
     -ftp
     +:*::::::/etc/NoShell

Note that in Linux you can also override the password field, as we did in this example. We also remove the login "ftp", so it isn't known any longer, and anonymous ftp will not work. The netgroup would look like:

sysadmins (-,software,) (-,kukuk,)

The nsswitch.conf File

[edit | edit source]

The Network Services switch file /etc/nsswitch.conf determines the order of lookups performed when a certain piece of information is requested, just like the /etc/host.conf file which determines the way host lookups are performed. For example, the line :

   hosts: files nis dns

specifies that host lookup functions should first look in the local /etc/hosts file, followed by a NIS lookup and finally through the domain name service (/etc/resolv.conf and named), at which point if no match is found an error is returned. This file must be readable for every user! You can find more information in the man-page nsswitch.5 or nsswitch.conf.5.

A good /etc/nsswitch.conf file for NIS is:

# /etc/nsswitch.conf
passwd:     compat
group:      compat
# For libc5, you must use shadow: files nis
shadow:     compat
passwd_compat: nis
group_compat: nis
shadow_compat: nis
hosts:      nis files dns
services:   nis [NOTFOUND=return] files
networks:   nis [NOTFOUND=return] files
protocols:  nis [NOTFOUND=return] files
rpc:        nis [NOTFOUND=return] files
ethers:     nis [NOTFOUND=return] files
netmasks:   nis [NOTFOUND=return] files
netgroup:   nis
bootparams: nis [NOTFOUND=return] files
publickey:  nis [NOTFOUND=return] files
automount:  files
aliases:    nis [NOTFOUND=return] files

Setting up a NIS Server

[edit | edit source]

The Server Program ypserv

If you run your server as master, determine which files you want to make available via NIS and then add or remove the appropriate entries in the "all" rule in /var/yp/Makefile. You should always look at the Makefile and edit the options at the beginning of the file.

There was one big change between ypserv 1.1 and ypserv 1.2: since version 1.2, the file handles are cached. This means you always have to call makedbm with the -c option if you create new maps. Make sure you are using the new /var/yp/Makefile from ypserv 1.2 or later, or add the -c flag to makedbm in the Makefile. If you don't, ypserv will continue to use the old maps, not the updated ones.

Now edit /var/yp/securenets and /etc/ypserv.conf. For more information, read the ypserv(8) and ypserv.conf(5) manual pages.

Make sure the portmapper (portmap(8)) is running, and start the server ypserv. The command "rpcinfo -u localhost ypserv" should output something like:

   program 100004 version 1 ready and waiting
   program 100004 version 2 ready and waiting

The "version 1" line could be missing, depending on the ypserv version and configuration you are using. It is only necessary if you have old SunOS 4.x clients.

Now generate the NIS (YP) database. On the master, run:

   % /usr/lib/yp/ypinit -m

On a slave, make sure that ypwhich -m works. This means that your slave must be configured as a NIS client before you can run "/usr/lib/yp/ypinit -s masterhost" to install the host as a NIS slave. That's it, your server is up and running.

If you have bigger problems, you could start ypserv and ypbind in debug mode on different xterms. The debug output should show you what goes wrong.

If you need to update a map, run make in the /var/yp directory on the NIS master. This will update a map if the source file is newer, and push the files to the slave servers. Please don't use ypinit for updating a map. You might want to edit root's crontab *on the slave* server and add the following lines:

     20 *    * * *    /usr/lib/yp/ypxfr_1perhour
     40 6    * * *    /usr/lib/yp/ypxfr_1perday
     55 6,18 * * *    /usr/lib/yp/ypxfr_2perday

This will ensure that most NIS maps are kept up-to-date, even if an update is missed because the slave was down at the time the update was done on the master.

You can add a slave at any time later. First, make sure that the new slave server has permission to contact the NIS master. Then run:

   % /usr/lib/yp/ypinit -s masterhost

on the new slave. On the master server, add the new slave server name to /var/yp/ypservers and run make in /var/yp to update the map.

The Program rpc.ypxfrd

[edit | edit source]

rpc.ypxfrd is used to speed up the transfer of very large NIS maps from a NIS master to NIS slave servers. If a NIS slave server receives a message that there is a new map, it starts ypxfr to transfer the new map. ypxfr reads the contents of a map from the master server using the yp_all() function. This process can take several minutes when there are very large maps that have to be stored by the database library.

The rpc.ypxfrd server speeds up the transfer process by allowing NIS slave servers to simply copy the master server's map files rather than building their own from scratch. rpc.ypxfrd uses an RPC-based file transfer protocol, so that there is no need for building a new map.

rpc.ypxfrd can be started by inetd. But since it starts very slowly, it should be started with ypserv. You need to start rpc.ypxfrd only on the NIS master server.

The Program rpc.yppasswdd

[edit | edit source]

Whenever users change their passwords, the NIS password database (and probably other NIS databases that depend on it) should be updated. The program rpc.yppasswdd is a server that handles password changes and makes sure that the NIS information is updated accordingly. rpc.yppasswdd is now integrated into ypserv. You don't need the older, separate yppasswd-0.9.tar.gz or yppasswd-0.10.tar.gz, and you shouldn't use them any longer. The rpc.yppasswdd in ypserv 1.3.2 has full shadow support. yppasswd is now part of yp-tools-2.2.tar.gz.

You need to start rpc.yppasswdd only on the NIS master server. By default, users are not allowed to change their full name or login shell. You can allow this with the -e chfn or -e chsh option. If your passwd and shadow files are in a directory other than /etc, you need to add the -D option. For example, if you have put all source files in /etc/yp and wish to allow users to change their shell, you need to start rpc.yppasswdd with the following parameters:

  rpc.yppasswdd -D /etc/yp -e chsh

or

  rpc.yppasswdd -s /etc/yp/shadow -p /etc/yp/passwd -e chsh

There is nothing more to do. You just need to make sure that rpc.yppasswdd uses the same files as /var/yp/Makefile. Errors will be logged using syslog.

If everything is fine (as it should be), you should be able to verify your installation with a few simple commands. Assuming, for example, your passwd file is being supplied by NIS, the command:

   % ypcat passwd

should give you the contents of your NIS passwd file. The command:

   % ypmatch userid passwd

(where userid is the login name of an arbitrary user) should give you the user's entry in the NIS passwd file. The ypcat and ypmatch programs should be included with your distribution of traditional NIS or NYS. Once you have NIS correctly configured on the server and client, you need to make sure that the configuration will survive a reboot. On RedHat, create or modify the variable NISDOMAIN in the file /etc/sysconfig/network.

Exercises

[edit | edit source]

Detailed Objectives (210.4)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to configure a basic OpenLDAP server including knowledge of LDIF format and essential access controls.


Key Knowledge Areas:

  • OpenLDAP
  • Directory based configuration
  • Access Control
  • Distinguished Names
  • Changetype Operations
  • Schemas and Whitepages
  • Directories
  • Object IDs, Attributes and Classes


Terms and Utilities:

  • slapd
  • slapd-config
  • LDIF
  • slapadd
  • slapcat
  • slapindex
  • /var/lib/ldap/
  • loglevel


Detailed Objectives (210.2)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: The candidate should be able to configure PAM to support authentication using various available methods. This includes basic SSSD functionality.


Key Knowledge Areas:

  • PAM configuration files, terms and utilities.
  • passwd and shadow passwords.
  • Use sssd for LDAP authentication.


Terms and Utilities:

  • /etc/pam.d
  • pam.conf
  • nsswitch.conf
  • pam_unix, pam_cracklib, pam_limits, pam_listfile, pam_sss
  • sssd.conf

PAM authentication

[edit | edit source]

PAM (Pluggable Authentication Modules) is a flexible mechanism for authenticating users.

Since the beginnings of UNIX, authenticating a user has been accomplished via the user entering a password and the system checking if the entered password corresponds to the encrypted official password that is stored in /etc/passwd. The idea being that the user *is* really that user if and only if they can correctly enter their secret password.

That was in the beginning. Since then, a number of new ways of authenticating users have become popular, including more complicated replacements for the /etc/passwd file and hardware devices such as smart cards. The problem is that each time a new authentication scheme is developed, it requires all the necessary programs (login, ftpd etc...) to be rewritten to support it.

PAM provides a way to develop programs that are independent of the authentication scheme. These programs need "authentication modules" to be attached to them at run-time in order to work. Which authentication module is to be attached depends upon the local system setup and is at the discretion of the local system administrator.

Linux-PAM (Pluggable Authentication Modules for Linux) is a suite of shared libraries that enable the local system administrator to choose how applications authenticate users.

In other words, without (rewriting and) recompiling a PAM-aware application, it is possible to switch between the authentication mechanism(s) it uses. Indeed, one may entirely upgrade the local authentication system without touching the applications themselves.

Historically an application that has required a given user to be authenticated, has had to be compiled to use a specific authentication mechanism. For example, in the case of traditional UN*X systems, the identity of the user is verified by the user entering a correct password. This password, after being prefixed by a two character salt, is encrypted (with crypt(3)). The user is then authenticated if this encrypted password is identical to the second field of the user's entry in the system password database (the /etc/passwd file). On such systems, most if not all forms of privileges are granted based on this single authentication scheme. Privilege comes in the form of a personal user-identifier (uid) and membership of various groups. Services and applications are available based on the personal and group identity of the user. Traditionally, group membership has been assigned based on entries in the /etc/group file.

Unfortunately, increases in the speed of computers and the widespread introduction of network based computing, have made once secure authentication mechanisms, such as this, vulnerable to attack. In the light of such realities, new methods of authentication are continuously being developed. It is the purpose of the Linux-PAM project to separate the development of privilege granting software from the development of secure and appropriate authentication schemes. This is accomplished by providing a library of functions that an application may use to request that a user be authenticated. This PAM library is configured locally with a system file, /etc/pam.conf (or a series of configuration files located in /etc/pam.d/) to authenticate a user request via the locally available authentication modules. The modules themselves will usually be located in the directory /lib/security and take the form of dynamically loadable object files (see dlopen(3)).

Overview

For the uninitiated, we begin by considering an example. We take an application that grants some service to users; login is one such program. Login does two things, it first establishes that the requesting user is whom they claim to be and second provides them with the requested service: in the case of login the service is a command shell (bash, tcsh, zsh, etc.) running with the identity of the user.

Traditionally, the former step is achieved by the login application prompting the user for a password and then verifying that it agrees with that located on the system; hence verifying that as far as the system is concerned the user is who they claim to be. This is the task that is delegated to Linux-PAM. From the perspective of the application programmer (in this case the person that wrote the login application), Linux-PAM takes care of this authentication task -- verifying the identity of the user.

The flexibility of Linux-PAM is that you, the system administrator, have the freedom to stipulate which authentication scheme is to be used. You have the freedom to set the scheme for any/all PAM-aware applications on your Linux system. That is, you can authenticate from anything as naive as simple trust (pam_permit) to something as paranoid as a combination of a retinal scan, a voice print and a one-time password!

To illustrate the flexibility you face, consider the following situation: a system administrator (parent) wishes to improve the mathematical ability of her users (children). She can configure their favorite Shoot 'em up game (PAM-aware of course) to authenticate them with a request for the product of a couple of random numbers less than 12. It is clear that if the game is any good they will soon learn their multiplication tables. As they mature, the authentication can be upgraded to include (long) division!

Linux-PAM deals with four separate types of (management) task. These are: authentication management; account management; session management; and password management. The association of the preferred management scheme with the behavior of an application is made with entries in the relevant Linux-PAM configuration file. The management functions are performed by modules specified in the configuration file.

The Linux-PAM library consults the contents of the PAM configuration file and loads the modules that are appropriate for an application. These modules fall into one of four management groups and are stacked in the order they appear in the configuration file. These modules, when called by Linux-PAM, perform the various authentication tasks for the application. Textual information, required from/or offered to the user, can be exchanged through the use of the application-supplied conversation function.

Linux-PAM is designed to provide the system administrator with a great deal of flexibility in configuring the privilege granting applications of their system. The local configuration of those aspects of system security controlled by Linux-PAM is contained in one of two places: either the single system file, /etc/pam.conf; or the /etc/pam.d/ directory.

Linux-PAM specific tokens in this file are case insensitive. The module paths, however, are case sensitive since they indicate a file's name and reflect the case dependence of typical Linux file-systems. The case-sensitivity of the arguments to any given module is defined for each module in turn. In addition to the lines described below, there are two special characters provided for the convenience of the system administrator: comments are preceded by a `#' and extend to the next end-of-line; also, module specification lines may be extended with a `\' escaped newline.

A general configuration line of the /etc/pam.conf file has the following form:

service-name module-type control-flag module-path args
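
For example, a single (illustrative) entry telling the login service to authenticate users against the standard Unix password database with pam_unix could read:

login auth required pam_unix.so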

Below, we explain the meaning of each of these tokens. The second (and more recently adopted) way of configuring Linux-PAM is via the contents of the /etc/pam.d/ directory. Once we have explained the meaning of the above tokens, we will describe this method.


Service-name

The name of the service associated with this entry. Frequently the service name is the conventional name of the given application. For example, `ftpd', `rlogind' and `su', etc. There is a special service-name, reserved for defining a default authentication mechanism. It has the name `OTHER' and may be specified in either lower or upper case characters. Note, when there is a module specified for a named service, the `OTHER' entries are ignored.

Module-type

One of (currently) four types of module. The four types are as follows:

auth; this module type provides two aspects of authenticating the user. Firstly, it establishes that the user is who they claim to be, by instructing the application to prompt the user for a password or other means of identification. Secondly, the module can grant group membership (independently of the /etc/groups file discussed above) or other privileges through its credential granting properties.

account; this module performs non-authentication based account management. It is typically used to restrict/permit access to a service based on the time of day, currently available system resources (maximum number of users) or perhaps the location of the applicant user---`root' login only on the console.

session; primarily, this module is associated with doing things that need to be done for the user before/after they can be given service. Such things include the logging of information concerning the opening/closing of some data exchange with a user, mounting directories, etc.

password; this last module type is required for updating the authentication token associated with the user. Typically, there is one module for each `challenge/response' based authentication (auth) module-type.

Control-flag

The control-flag is used to indicate how the PAM library will react to the success or failure of the module it is associated with. Since modules can be stacked (modules of the same type execute in series, one after another), the control-flags determine the relative importance of each module. The application is not made aware of the individual success or failure of modules listed in the `/etc/pam.conf' file. Instead, it receives a summary success or fail response from the Linux-PAM library. The order of execution of these modules is that of the entries in the /etc/pam.conf file; earlier entries are executed before later ones. As of Linux-PAM v0.60, this control-flag can be defined with one of two syntaxes.

The simpler (and historical) syntax for the control-flag is a single keyword defined to indicate the severity of concern associated with the success or failure of a specific module. There are four such keywords: required, requisite, sufficient and optional.

The Linux-PAM library interprets these keywords in the following manner: required; this indicates that the success of the module is required for the module-type facility to succeed. Failure of this module will not be apparent to the user until all of the remaining modules (of the same module-type) have been executed.

requisite; like required, however, in the case that such a module returns a failure, control is directly returned to the application. The return value is that associated with the first required or requisite module to fail. Note, this flag can be used to protect against the possibility of a user getting the opportunity to enter a password over an unsafe medium. It is conceivable that such behavior might inform an attacker of valid accounts on a system. This possibility should be weighed against the not insignificant concerns of exposing a sensitive password in a hostile environment.

sufficient; the success of this module is deemed `sufficient' to satisfy the Linux-PAM library that this module-type has succeeded in its purpose. In the event that no previous required module has failed, no more `stacked' modules of this type are invoked. (Note, in this case subsequent required modules are not invoked.). A failure of this module is not deemed as fatal to satisfying the application that this module-type has succeeded.

optional; as its name suggests, this control-flag marks the module as not being critical to the success or failure of the user's application for service. In general, Linux-PAM ignores such a module when determining if the module stack will succeed or fail. However, in the absence of any definite successes or failures of previous or subsequent stacked modules, this module will determine the nature of the response to the application. One example of this latter case is when the other modules return something like PAM_IGNORE.

The more elaborate (newer) syntax is much more specific and gives the administrator a great deal of control over how the user is authenticated. This form of the control flag is delimited with square brackets and consists of a series of value=action tokens:

 [value1=action1 value2=action2 ...]

Here, valueI is one of the following return values: success; open_err; symbol_err; service_err; system_err; buf_err; perm_denied; auth_err; cred_insufficient; authinfo_unavail; user_unknown; maxtries; new_authtok_reqd; acct_expired; session_err; cred_unavail; cred_expired; cred_err; no_module_data; conv_err; authtok_err; authtok_recover_err; authtok_lock_busy; authtok_disable_aging; try_again; ignore; abort; authtok_expired; module_unknown; bad_item; and default. The last of these (default) can be used to set the action for those return values that are not explicitly defined.

The actionI can be a positive integer or one of the following tokens: ignore; ok; done; bad; die; and reset. A positive integer, J, when specified as the action, can be used to indicate that the next J modules of the current module-type will be skipped. In this way, the administrator can develop a moderately sophisticated stack of modules with a number of different paths of execution. Which path is taken can be determined by the reactions of individual modules.

ignore - when used with a stack of modules, the module's return status will not contribute to the return code the application obtains.

bad - this action indicates that the return code should be thought of as indicative of the module failing. If this module is the first in the stack to fail, its status value will be used for that of the whole stack.

die - equivalent to bad with the side effect of terminating the module stack and PAM immediately returning to the application.

ok - this tells PAM that the administrator thinks this return code should contribute directly to the return code of the full stack of modules. In other words, if the former state of the stack would lead to a return of PAM_SUCCESS, the module's return code will override this value. Note, if the former state of the stack holds some value that is indicative of a modules failure, this 'ok' value will not be used to override that value.

done - equivalent to ok with the side effect of terminating the module stack and PAM immediately returning to the application.

reset - clear all memory of the state of the module stack and start again with the next stacked module.

Each of the four keywords required, requisite, sufficient and optional has an equivalent expression in terms of the [...] syntax. They are as follows:

required is equivalent to [success=ok new_authtok_reqd=ok ignore=ignore default=bad]

requisite is equivalent to [success=ok new_authtok_reqd=ok ignore=ignore default=die]

sufficient is equivalent to [success=done new_authtok_reqd=done default=ignore]

optional is equivalent to [success=ok new_authtok_reqd=ok default=ignore]

Just to get a feel for the power of this new syntax, here is a taste of what you can do with it. With Linux-PAM-0.63, the notion of client plug-in agents was introduced. This is something that makes it possible for PAM to support machine-machine authentication using the transport protocol inherent to the client/server application. With the [ ... value=action ... ] control syntax, it is possible for an application to be configured to support binary prompts with compliant clients, but to gracefully fall over into an alternative authentication mode for older, legacy, applications.

Module-path

The path-name of the dynamically loadable object file; the pluggable module itself. If the first character of the module path is `/', it is assumed to be a complete path. If this is not the case, the given module path is appended to the default module path: /lib/security

Args

The args are a list of tokens that are passed to the module when it is invoked. Much like arguments to a typical Linux shell command. Generally, valid arguments are optional and are specific to any given module. Invalid arguments are ignored by a module, however, when encountering an invalid argument, the module is required to write an error to syslog(3). For a list of generic options see the next section.

Any line in (one of) the configuration file(s), that is not formatted correctly, will generally tend (erring on the side of caution) to make the authentication process fail. A corresponding error is written to the system log files with a call to syslog(3).

Directory based configuration

More flexible than the single configuration file, as of version 0.56, it is possible to configure libpam via the contents of the /etc/pam.d/ directory. In this case the directory is filled with files each of which has a filename equal to a service-name (in lower-case): it is the personal configuration file for the named service.

Linux-PAM can be compiled in one of two modes. The preferred mode uses either /etc/pam.d/ or /etc/pam.conf configuration but not both. That is to say, if there is a /etc/pam.d/ directory then libpam only uses the files contained in this directory. However, in the absence of the /etc/pam.d/ directory the /etc/pam.conf file is used (this is likely to be the mode your preferred distribution uses). The other mode is to use both /etc/pam.d/ and /etc/pam.conf in sequence. In this mode, entries in /etc/pam.d/ override those of /etc/pam.conf. The syntax of each file in /etc/pam.d/ is similar to that of the /etc/pam.conf file and is made up of lines of the following form:

module-type control-flag module-path arguments

The only difference is that the service-name is not present. The service-name is of course the name of the given configuration file. For example, /etc/pam.d/login contains the configuration for the login service.
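
A hedged example of what such a file might contain (module choices and arguments differ between distributions; this is a sketch, not any distribution's shipped file):

# /etc/pam.d/login (illustrative)
auth     required pam_unix.so
account  required pam_unix.so
password required pam_cracklib.so retry=3
password required pam_unix.so use_authtok
session  required pam_unix.so
session  required pam_limits.so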

This method of configuration has a number of advantages over the single file approach. We list them here to assist the reader in deciding which scheme to adopt:

A lower chance of misconfiguring an application. There is one less field to mis-type when editing the configuration files by hand.

Easier to maintain. One application may be reconfigured without risk of interfering with other applications on the system.

It is possible to symbolically link different services' configuration files to a single file. This makes it easier to keep the system policy for access consistent across different applications. (It should be noted that, to conserve space, it is equally possible to hard link a number of configuration files. However, care should be taken when administering this arrangement, as editing a hard linked file is likely to break the link.)

A potential for quicker configuration file parsing. Only the relevant entries are parsed when a service gets bound to its modules.

It is possible to limit read access to individual Linux-PAM configuration files using the file protections of the filesystem.

Package management becomes simpler. Every time a new application is installed, it can be accompanied by an /etc/pam.d/xxxxxx file.

The following are optional arguments which are likely to be understood by any module. Arguments (including these) are in general optional.

debug : Use the syslog(3) call to log debugging information to the system log files.

no_warn : Instruct module to not give warning messages to the application. 
use_first_pass : The module should not prompt the user for a password. Instead, it should obtain the previously typed password (from the preceding auth module), and use that. If that doesn't work, then the user will not be authenticated. (This option is intended for auth and password modules only). 
try_first_pass : The module should attempt authentication with the previously typed password (from the preceding auth module). If that doesn't work, then the user is prompted for a password. (This option is intended for auth modules only). 
use_mapped_pass : This argument is not currently supported by any of the modules in the Linux-PAM distribution because of possible consequences associated with U.S. encryption exporting restrictions. Within the U.S., module developers are, of course, free to implement it (as are developers in other countries).

expose_account : In general, the leakage of some information about user accounts is not a secure policy for modules to adopt. Sometimes information such as a user's name, home directory, or preferred shell can be used to attack a user's account. In some circumstances, however, this sort of information is not deemed a threat: displaying a user's full name when asking them for a password in a secured environment could also be called being 'friendly'. The expose_account argument is a standard module argument to encourage a module to be less discreet about account information, as deemed appropriate by the local administrator.

Example configuration file entries

Default policy: If a system is to be considered secure, it had better have a reasonably secure 'OTHER' entry. The following is a paranoid setting (which is not a bad place to start!):

# default; deny access

OTHER auth required pam_deny.so
OTHER account required pam_deny.so
OTHER password required pam_deny.so
OTHER session required pam_deny.so

Whilst fundamentally a secure default, this is not very sympathetic to a misconfigured system. For example, such a system is vulnerable to locking everyone out should the rest of the file become badly written. The module pam_deny is not very sophisticated: it logs no information when it is invoked, so unless the users of a system contact the administrator when failing to execute a service application, the administrator may long remain ignorant of the fact that the system is misconfigured.

The addition of the following line before those in the above example would provide a suitable warning to the administrator.

# default; wake up! This application is not configured

OTHER auth required pam_warn.so
OTHER password required pam_warn.so

Having two OTHER auth lines is an example of stacking. On a system that uses the /etc/pam.d/ configuration, the corresponding default setup would be achieved with the following file:

# default configuration: /etc/pam.d/other

auth required pam_warn.so
auth required pam_deny.so
account required pam_deny.so
password required pam_warn.so
password required pam_deny.so
session required pam_deny.so

On a less sensitive computer, one on which the system administrator wishes to remain ignorant of much of the power of Linux-PAM, the following selection of lines (in /etc/pam.conf) is likely to mimic the historically familiar Linux setup.

# default; standard UN*X access

OTHER auth required pam_unix.so
OTHER account required pam_unix.so
OTHER password required pam_unix.so
OTHER session required pam_unix.so


Key terms, files and utilities: /etc/pam.d, /etc/pam.conf, /lib/libpam.so.*

Exercises

[edit | edit source]


System Security

[edit | edit source]

Detailed Objectives (212.1)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to configure a system to forward IP packet and perform network address translation (NAT, IP masquerading) and state its significance in protecting a network. This objective includes configuring port redirection, managing filter rules and averting attacks.


Key Knowledge Areas:

  • iptables and ip6tables configuration files, tools and utilities.
  • Tools, commands and utilities to manage routing tables.
  • Private address ranges (IPv4) and Unique Local Addresses as well as Link Local Addresses (IPv6)
  • Port redirection and IP forwarding.
  • List and write filtering and rules that accept or block IP packets based on source or destination protocol, port and address.
  • Save and reload filtering configurations.


Terms and Utilities:

  • /proc/sys/net/ipv4/
  • /proc/sys/net/ipv6/
  • /etc/services
  • iptables
  • ip6tables

Configuring a router

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure ipchains and iptables to perform IP masquerading, and state the significance of Network Address Translation and Private Network Addresses in protecting a network. This objective includes configuring port redirection, listing filtering rules, and writing rules that accept or block datagrams based upon source or destination protocol, port and address. Also included is saving and reloading filtering configurations, using settings in /proc/sys/net/ipv4 to respond to DoS attacks, using /proc/sys/net/ipv4/ip_forward to turn IP forwarding on and off, and using tools such as PortSentry to block port scans and vulnerability probes.

Key files, terms, and utilities include:

/proc/sys/net/ipv4 
/etc/services 
ipchains 
iptables
routed

Configuring a router

[edit | edit source]

There are numerous steps you should take to configure a router connected to insecure networks like the Internet. First of all, identify what services you need, and have a policy of blocking everything else! This minimizes your exposure to security breaches.

Common steps for routers are:

  • Log all dropped/rejected packets (and limit the rate at which you log, to avoid log file size explosion).
  • Use NAT whenever you can: unroutable (private) addresses are more difficult to attack.
  • Define a default policy for answers on blocked TCP/UDP ports: drop, reject or reset?

Dropping isn't really helpful, as scanners nowadays detect it easily. Rejecting may still show that a firewall is blocking access; resetting acts as if nothing is listening (i.e. the "normal" way).
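
As an illustrative sketch of these three choices with iptables (port 23/telnet is an arbitrary example):

iptables -A INPUT -p tcp --dport 23 -j DROP
iptables -A INPUT -p tcp --dport 23 -j REJECT
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with tcp-reset

The first rule silently drops the packet, the second replies with an ICMP port-unreachable error, and the third answers with a TCP RST, as if no service were listening.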

Unless you know you need it, drop (and log, with rate limiting) all ICMP packets except the most useful: dest-unreachable, time-exceeded and echo-reply.

Protect against known attacks: enable anti-spoofing of IP addresses (reverse path filtering), disable acceptance of source-routed packets, disable ICMP redirects, log "martian" IP addresses (i.e. addresses which appear on an interface they don't belong to), enable syn_cookies, disable ECN (Explicit Congestion Notification), disable TCP timestamps, and ignore ICMP broadcasts and bogus ICMP errors.
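
A hedged sketch of how several of these protections map onto kernel settings under /proc/sys/net/ipv4/ (these are the standard knob names; check your kernel documentation before relying on them):

echo 1 > /proc/sys/net/ipv4/ip_forward                        # turn IP forwarding on (0 to turn it off)
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter                # anti-spoofing: reverse path filtering
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route      # refuse source-routed packets
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects         # ignore ICMP redirects
echo 1 > /proc/sys/net/ipv4/conf/all/log_martians             # log martian addresses
echo 1 > /proc/sys/net/ipv4/tcp_syncookies                    # enable TCP SYN cookies
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts       # ignore ICMP broadcasts
echo 1 > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses # ignore bogus ICMP errors

A typical masquerading (NAT) rule, assuming eth0 is the external interface, would be: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE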

Exercises

[edit | edit source]

Detailed Objective (212.2)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 2


Description: Candidates should be able to configure an FTP server for anonymous downloads and uploads. This objective includes precautions to be taken if anonymous uploads are permitted and configuring user access.


Key Knowledge Areas:

  • Configuration files, tools and utilities for Pure-FTPd and vsftpd.
  • Awareness of ProFTPd.
  • Understanding of passive vs. active FTP connections


Terms and Utilities:

  • vsftpd.conf
  • important Pure-FTPd command line options

Securing FTP servers

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure an anonymous download FTP server. This objective includes configuring an FTP server to allow anonymous uploads, listing additional precautions to be taken if anonymous uploads are permitted, configuring guest users and groups with chroot jail, and configuring ftpaccess to deny access to named users or groups.

Key files, terms, and utilities include:

ftpaccess, ftpusers, ftpgroups 
/etc/passwd 
chroot

Securing an FTP server will include :

  • FTP Warning Banner customization
  • FTP Greeting Banner customization
  • Securing, denying and restricting User Accounts
  • Securing Anonymous Access
  • Securing Anonymous Upload

FTP protocol

[edit | edit source]

The File Transfer Protocol (FTP) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured.

wu-ftpd FTP server

[edit | edit source]

We will focus on the wu-ftpd FTP server from Washington University.

Wu-ftpd's main configuration files are in /etc: ftpusers, ftpaccess and ftpconversions. The ftpusers file contains a list of all those users who are not allowed to log into your FTP server. As you can imagine, user root should be listed here. You should also make sure that other special user accounts such as lp, shutdown, mail, etc. are included here.

The ftpaccess file is used to configure issues such as security, user definitions, etc.; it is actually the general configuration file. Some interesting settings that you can establish here are:

loginfails [number]

    where number stands for the number of times that a user is allowed to fail to authenticate before being totally disabled.

shutdown [filename]

    where filename is the name of a file that, if it exists, automatically shuts down the FTP server, without any need to actually close the port in the /etc/inetd.conf file and then restart inetd.

Finally, the ftpconversions file is used to offer clients special "on-the-fly" conversions of files, e.g. automatic decompression of files on download.
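
A short illustrative /etc/ftpaccess fragment using the settings just described (the values and the /etc/shutmsg path are arbitrary examples):

loginfails 5
shutdown /etc/shutmsg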

FTP Warning Banner

[edit | edit source]

Returning a customized banner to FTP clients when they connect is a good idea, as it helps disguise what system the FTP server is running on. You can send banners to incoming connections either using TCP wrappers, or as described below.

Add the following line to the wu-ftpd configuration file, /etc/ftpaccess: banner /etc/banners/warning.msg

The contents of the banner file should look something like this : Hello, all activity on ftp.example.com is logged.

FTP Greeting Banner

[edit | edit source]

After login, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system. To change the greeting banner for wu-ftpd, add the following directive to /etc/ftpaccess: greeting text <insert_greeting_here>

Because FTP passes unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts. To disable all user accounts in wu-ftpd, add the following directive to /etc/ftpaccess: deny-uid *. To disable specific user accounts, add each username to /etc/ftpusers.

Anonymous Access

[edit | edit source]

The best way to set up anonymous FTP is by configuring a chroot jail: instead of allowing total access to the system, this limits access to a given directory. In other words, after an anonymous user logs into the system, she will only have access to the ftp user's home directory and nothing else. If she enters cd /, which in most other cases would take her to the system's root directory, it will most likely only take her to /home/ftp (the default home directory for the user ftp).

Most distributions, like Red Hat, provide an anonymous ftp package to help prepare the chroot jail. It is important to give your FTP-only users no real shell account on the Linux system. In this manner, if for any reason someone could successfully get out of the FTP chrooted environment, they would not be able to execute any user tasks, since they have no shell. First, create new users for this purpose. These have to be separate from regular user accounts with unlimited access because of how the chroot environment works: chroot makes it appear, from the user's perspective, as if the level of the file system you've placed them in is the top level of the file system.

Set up these new users with /dev/null as their shell, and add /dev/null to the list of allowed shells, /etc/shells. Also make sure that in /etc/passwd their home directory is listed as /home/./ftp (for user ftp), even though the real directory is /home/ftp.

Set up a chroot user environment: what you're essentially doing is creating a skeleton root file system with enough components (binaries, password files, etc.) to allow Unix to perform a chroot when the user logs in. Note that wu-ftpd may be compiled with the --enable-ls option, in which case the /home/ftp/bin and /home/ftp/lib directories are not required, since this option allows wu-ftpd to use its own ls function. We still demonstrate the old method for people who prefer to copy /bin/ls to the chroot'd FTP directory, /home/ftp/bin, and create the appropriate library-related tools. The following are the necessary steps to run wu-ftpd in a chroot jail. First, create all the necessary chrooted environment directories:

[root@deep ] /# mkdir /home/ftp/dev
[root@deep ] /# mkdir /home/ftp/etc
[root@deep ] /# mkdir /home/ftp/bin
[root@deep ] /# mkdir /home/ftp/lib

Change the new directories' permissions to 0511 for security reasons. The chmod command will make our chrooted dev, etc, bin, and lib directories readable and executable by the super-user root, and only executable by the group and all other users:

[root@deep ] /# chmod 0511 /home/ftp/dev/
[root@deep ] /# chmod 0511 /home/ftp/etc/
[root@deep ] /# chmod 0511 /home/ftp/bin
[root@deep ] /# chmod 0511 /home/ftp/lib

Copy the /bin/ls binary to the /home/ftp/bin directory and change the permission of the copied ls program to 0111 (execute-only). You don't want users to be able to read or modify the binaries:

[root@deep ] /# cp /bin/ls /home/ftp/bin
[root@deep ] /# chmod 0111 /home/ftp/bin/ls

Find the shared library dependencies of the ls binary:

[root@deep ] /# ldd /bin/ls
   libc.so.6 => /lib/libc.so.6 (0x00125000)
   /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00110000)
         

Copy the shared libraries identified above to your new lib directory under /home/ftp directory:

[root@deep ] /# cp /lib/libc.so.6 /home/ftp/lib/
[root@deep ] /# cp /lib/ld-linux.so.2 /home/ftp/lib/ 

Create your /home/ftp/dev/null file:

[root@deep ] /# mknod /home/ftp/dev/null c 1 3
[root@deep ] /# chmod 666 /home/ftp/dev/null

Copy the group and passwd files to the /home/ftp/etc directory. These should not be the same as your real ones, so we'll remove all non-FTP users except for the super-user root from both files, passwd and group.

Edit the passwd file, vi /home/ftp/etc/passwd and delete all entries except for the super-user root and your allowed FTP users. It is very important that the passwd file in the chroot environment has entries like:

root:x:0:0:root:/:/dev/null
ftpadmin:x:502:502::/ftpadmin/:/dev/null

(Notice two things here: first, the home directory for each user inside this modified passwd file is changed to reflect the new chrooted FTP directory, i.e. /home/ftp/./ftpadmin/ becomes /ftpadmin/; second, the name of the login shell for the root account has been changed to /dev/null.) Edit the group file, vi /home/ftp/etc/group, and delete all entries except for the super-user root and all your allowed FTP users. The group file should correspond to your normal group file:

root:x:0:root
ftpadmin:x:502:              

Now we must make the passwd and group files in the chroot jail directory immutable for better security. Set the immutable bit on the passwd file:

[root@deep ] /# cd /home/ftp/etc/
[root@deep ] /# chattr +i passwd

Set the immutable bit on group file:

[root@deep ] /# cd /home/ftp/etc/
[root@deep ] /# chattr +i group

Configure PAM authentication for the FTP service by creating the /etc/pam.d/ftp file and adding the following lines:

#%PAM-1.0
auth    required /lib/security/pam_listfile.so item=user sense=deny \ file=/etc/ftpusers onerr=succeed
auth    required /lib/security/pam_pwdb.so shadow nullok
auth    required /lib/security/pam_shells.so
account required /lib/security/pam_pwdb.so
session required /lib/security/pam_pwdb.so

Anonymous Upload

[edit | edit source]

If you want to allow anonymous users to upload files, it is recommended that you create a write-only directory within /var/ftp/pub/. To do this, type:

mkdir /var/ftp/pub/upload

Next, change the permissions so that anonymous users cannot see what is within the directory (note that the directory's group should be ftp, as in the listing below) by typing:

chmod 730 /var/ftp/pub/upload

A long format listing of the directory should look like this:

drwx-wx---    2 root     ftp          4096 Aug 20 18:26 upload

Exercises

[edit | edit source]


Detailed Objectives (212.3)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to configure an SSH daemon. This objective includes managing keys and configuring SSH for users. Candidates should also be able to forward an application protocol over SSH and manage the SSH login.


Key Knowledge Areas:

  • OpenSSH configuration files, tools and utilities.
  • Login restrictions for the superuser and the normal users.
  • Managing and using server and client keys to login with and without password.
  • Usage of multiple connections from multiple hosts to guard against loss of connection to remote host following configuration changes.


Terms and Utilities:

  • ssh
  • sshd
  • /etc/ssh/sshd_config
  • /etc/ssh/
  • Private and public key files
  • PermitRootLogin, PubKeyAuthentication, AllowUsers, PasswordAuthentication, Protocol

Secure Shell (OpenSSH)

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure sshd to allow or deny root logins, enable or disable X forwarding. This objective includes generating server keys, generating a user's public/private key pair, adding a public key to a user's authorized_keys file, and configuring ssh-agent for all users. Candidates should also be able to configure port forwarding to tunnel an application protocol over ssh, configure ssh to support the ssh protocol versions 1 and 2, disable non-root logins during system maintenance, configure trusted clients for ssh logins without a password, and make multiple connections from multiple hosts to guard against loss of connection to remote host following configuration changes.

Key files, terms, and utilities include:

ssh, sshd
/etc/ssh/sshd_config 
~/.ssh/identity.pub, ~/.ssh/identity
~/.ssh/authorized_keys 
.shosts, .rhosts
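
A brief, hedged sketch of /etc/ssh/sshd_config directives commonly used for the restrictions named above (the usernames are placeholders; available options vary between OpenSSH versions):

Protocol 2
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers alice bob
X11Forwarding yes

After editing sshd_config, keep an existing SSH session open and test a new connection from a second host before logging out, so that a mistake cannot lock you out.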

OpenSSH

[edit | edit source]

OpenSSH is a free, open source implementation of the SSH (Secure SHell) protocols. It replaces telnet, ftp, rlogin, rsh, and rcp with secure, encrypted network connectivity tools. OpenSSH supports versions 1.3, 1.5, and 2.0 of the SSH protocol.

If you use OpenSSH tools, you are enhancing the security of your machine. All communications using OpenSSH tools, including passwords, are encrypted. Telnet and ftp use plaintext passwords and send all information unencrypted. The information can be intercepted, the passwords can be retrieved, and then your system can be compromised by an unauthorized person logging in to your system using one of the intercepted passwords. The OpenSSH set of utilities should be used whenever possible to avoid these security problems.

Another reason to use OpenSSH is that it automatically forwards the DISPLAY variable to the client machine. In other words, if you are running the X Window System on your local machine, and you log in to a remote machine using the ssh command, when you execute a program on the remote machine that requires X, it will be displayed on your local machine. This is convenient if you prefer graphical system administration tools but do not always have physical access to your server.

The ssh command is a secure replacement for the rlogin, rsh, and telnet commands. It allows you to log in to and execute commands on a remote machine.

Logging in to a remote machine with ssh is similar to using telnet. To log in to a remote machine named penguin.example.net, type the following command at a shell prompt:

ssh penguin.example.net

The first time you ssh to a remote machine, you will see a message similar to the following:

The authenticity of host 'penguin.example.net' can't be established.
DSA key fingerprint is 94:68:3a:3a:bc:f3:9a:9b:01:5d:b3:07:38:e2:11:0c.
Are you sure you want to continue connecting (yes/no)?

Type yes to continue. This will add the server to your list of known hosts, as seen in the following message:

Warning: Permanently added 'penguin.example.net' (DSA) to the list of known hosts.

Next, you'll see a prompt asking for your password for the remote machine. After entering your password, you will be at a shell prompt for the remote machine. If you use ssh without any command line options, the username that you are logged in as on the local client machine is passed to the remote machine. If you want to specify a different username, use the following command:

ssh -l username penguin.example.net

You can also use the syntax ssh username@penguin.example.net. The ssh command can be used to execute a command on the remote machine without logging in to a shell prompt. The syntax is ssh hostname command. For example, if you want to execute the command ls /usr/share/doc on the remote machine penguin.example.net, type the following command at a shell prompt:

ssh penguin.example.net ls /usr/share/doc

After you enter the correct password, the contents of /usr/share/doc will be displayed, and you will return to your shell prompt.

The scp command can be used to transfer files between machines over a secure, encrypted connection. It is similar to rcp.

The general syntax to transfer a local file to a remote system is scp localfile user@hostname:/newfilename. The localfile specifies the source, and the group of user@hostname:/newfilename specifies the destination. To transfer the local file shadowman to your account on penguin.example.net, type the following at a shell prompt (replace user with your username):

scp shadowman user@penguin.example.net:/home/user

This will transfer the local file shadowman to /home/user/shadowman on penguin.example.net. The general syntax to transfer a remote file to the local system is scp user@hostname:/remotefile /newlocalfile. The remotefile specifies the source, and newlocalfile specifies the destination.

Multiple files can be specified as the source files. For example, to transfer the contents of the directory /downloads to an existing directory called uploads on the remote machine penguin.example.net, type the following at a shell prompt:

scp /downloads/* username@penguin.example.net:/uploads/

The sftp utility can be used to open a secure, interactive FTP session. It is similar to ftp except that it uses a secure, encrypted connection. The general syntax is sftp username@hostname.com. Once authenticated, you can use a set of commands similar to using FTP. Refer to the sftp manual page for a list of these commands. To read the manual page, execute the command man sftp at a shell prompt. The sftp utility is only available in OpenSSH version 2.5.0p1 and higher.
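
ssh can also tunnel (port-forward) other application protocols over its encrypted channel. As a sketch, the following forwards local port 8080 to port 80 on the remote machine (the hosts and port numbers are illustrative):

ssh -L 8080:localhost:80 user@penguin.example.net

While the connection is open, pointing a web browser at http://localhost:8080 on the local machine reaches the remote web server through the encrypted tunnel.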

Generating Key Pairs

[edit | edit source]

If you do not want to enter your password every time you ssh, scp, or sftp to a remote machine, you can generate an authorization key pair.

Note: You must have separate authorization key pairs for SSH Protocol 1 (RSA) and SSH Protocol 2 (DSA).

Warning: Each user needs their own key pair!

Keys must be generated for each user. To generate keys for a user, perform the following steps as the user who wants to connect to remote machines. If you complete the following steps as root, only root will be able to use the keys.

Use the following steps to generate a DSA key pair. DSA is used by SSH Protocol 2 and is the default for Red Hat.

1. To generate a DSA key pair to work with version 2.0 of the protocol, type the following command at a shell prompt:

ssh-keygen -t dsa

Accept the default file location of ~/.ssh/id_dsa. Enter a passphrase different from your account password and confirm it by entering it again.

(A passphrase is a string of words and characters used to authenticate a user. Passphrases differ from passwords in that you can use spaces or tabs in the passphrase. Passphrases are generally longer than passwords because they are usually phrases instead of just a word.)

2. Change the permissions of your .ssh directory using the command chmod 755 ~/.ssh.

3. Copy the contents of ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys2 on the machine to which you want to connect. If the file ~/.ssh/authorized_keys2 doesn't exist, you can copy the file ~/.ssh/id_dsa.pub to the file ~/.ssh/authorized_keys2 on the other machine.

Use the following steps to generate an RSA key pair for version 2.0 of the SSH protocol.

1. To generate an RSA key pair to work with version 2.0 of the protocol, type the following command at a shell prompt:

ssh-keygen -t rsa

Accept the default file location of ~/.ssh/id_rsa. Enter a passphrase different from your account password and confirm it by entering it again. [1]

2. Change the permissions of your .ssh directory using the command chmod 755 ~/.ssh.

3. Append the contents of ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys2 on the machine to which you want to connect. If the file ~/.ssh/authorized_keys2 doesn't exist, you can copy the file ~/.ssh/id_rsa.pub to the file ~/.ssh/authorized_keys2 on the other machine.

Use the following steps to generate an RSA key pair, which is used by version 1 of the SSH Protocol.

1. To generate an RSA (for version 1.3 and 1.5 protocol) key pair, type the following command at a shell prompt:

ssh-keygen

Accept the default file location (~/.ssh/identity). Enter a passphrase different from your account password. Confirm the passphrase by entering it again.

2. Change the permissions of your .ssh directory and your keys with the commands chmod 755 ~/.ssh and chmod 644 ~/.ssh/identity.pub.

3. Copy the contents of ~/.ssh/identity.pub to the file ~/.ssh/authorized_keys on the machine to which you wish to connect. If the file ~/.ssh/authorized_keys doesn't exist, you can copy the file ~/.ssh/identity.pub to the file ~/.ssh/authorized_keys on the remote machine.
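
On many systems, the ssh-copy-id utility (shipped with most OpenSSH installations) automates the copy-and-append step above and sets sensible permissions on the remote files; the host name here is a placeholder:

ssh-copy-id -i ~/.ssh/id_rsa.pub user@penguin.example.net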

X Forwarding

[edit | edit source]

You can forward the X11 port through SSH to enable encrypted X11 connections. There's no need to export a DISPLAY variable or to call the xhost utility.

On the server-side you must check the file /etc/ssh/sshd_config to be sure that the "X11Forwarding" option is set to "yes".

On the client-side, use the -X option:

ssh -X user@remotehost

When the remote host prompt appears, start an X11 application:

xterm &

An xterm window from the remote host will open on your local desktop.

Exercises

[edit | edit source]


Detailed Objective

[edit | edit source]

Weight: 1

Description:
Candidates should be able to configure tcpwrappers to allow connections to specified servers only from certain hosts or subnets.

  • Key knowledge area(s):
    • tcpwrappers configuration files, tools and utilities
    • (x)inetd configuration files, tools and utilities
  • The following is a partial list of the used files, terms and utilities:
    • /etc/xinetd.conf
    • /etc/xinetd.d/*
    • /etc/inetd.conf
    • tcpd
    • /etc/hosts.allow
    • /etc/hosts.deny

TCP_wrappers

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to configure tcpwrappers to allow connections to specified servers from only certain hosts or subnets.

Key files, terms, and utilities include:

inetd.conf, tcpd 
hosts.allow, hosts.deny 
xinetd

TCP_wrappers

[edit | edit source]

TCP wrappers are a system to control access to network services. For each service protected by TCP wrappers, the tcpd program is used; it consults two files where access rights are defined, in search order:

/etc/hosts.allow: if a rule here is met, access is allowed
/etc/hosts.deny: if a rule here is met, access is denied

Rules are constructed to match all services or specific services. If no match occurs in either file, access is granted.

It is common to set specific rules in /etc/hosts.allow and provide a blanket denial in /etc/hosts.deny (i.e. deny everything except what is specifically allowed). The rule format is:

[list of services] : [list of hosts]

e.g. to deny all incoming requests except FTP from the local domain:

/etc/hosts.allow :
ftp : LOCAL
/etc/hosts.deny :
ALL : ALL
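
Rules can also match subnets or host patterns. For example, keeping the blanket denial above, the following hypothetical /etc/hosts.allow entry would additionally allow SSH from one local subnet (the address is illustrative):

/etc/hosts.allow :
sshd : 192.168.1.0/255.255.255.0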

Exercises

[edit | edit source]


Detailed Objectives (212.4)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 3


Description: Candidates should be able to receive security alerts from various sources, install, configure and run intrusion detection systems and apply security patches and bugfixes.


Key Knowledge Areas:

  • Tools and utilities to scan and test ports on a server.
  • Locations and organisations that report security alerts as Bugtraq, CERT or other sources.
  • Tools and utilities to implement an intrusion detection system (IDS).
  • Awareness of OpenVAS and Snort.


Terms and Utilities:

  • telnet
  • nmap
  • fail2ban
  • nc
  • iptables

Security tasks

[edit | edit source]

Overview

[edit | edit source]

Description: The candidate should be able to install and configure kerberos and perform basic security auditing of source code. This objective includes arranging to receive security alerts from Bugtraq, CERT, CIAC or other sources, being able to test for open mail relays and anonymous FTP servers, installing and configuring an intrusion detection system such as snort or Tripwire. Candidates should also be able to update the IDS configuration as new vulnerabilities are discovered and apply security patches and bugfixes.

Key files, terms, and utilities include:

Tripwire 
nessus
netsaint
snort
telnet 
nmap

Kerberos

[edit | edit source]

Reference: Red Hat Enterprise Linux 4: Reference Guide - Chapter 19. Kerberos (http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/ref-guide/ch-kerberos.html)

1. Installing Server

2. Installing Client

3. Basic Configuration (e.g. krb5.conf ..)

Security tasks

[edit | edit source]

  • Use a telnet client to test and debug your servers. This implies you know a little about the protocol used: read the corresponding RFCs.
  • Check security mailing lists such as Bugtraq, CERT, et al. regularly, and patch your systems ASAP!
  • Run a security scanner on your system regularly. The network security scanners Nessus and Netsaint are widely used, highly regarded and open source; Bastille Linux is a great host-based security scanner.
  • Use some Intrusion Detection Systems (IDS), both network-based and host-based, such as Snort and Tripwire.

Don't forget: security is a never-ending process, not a state or a product!
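
As a quick illustration of testing ports with the tools named in this objective (the host names are placeholders):

telnet mail.example.com 25
nmap -sT penguin.example.net
nc -zv penguin.example.net 20-25

The first talks to an SMTP server by hand (e.g. to check for an open mail relay), the second performs a TCP connect scan of a host, and the third reports which of ports 20-25 accept connections.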

Exercises

[edit | edit source]


Network Troubleshooting

[edit | edit source]

Detailed Objective (205.3)

[edit | edit source]

(LPIC-2 Version 4.5)


Weight: 4


Description: Candidates should be able to identify and correct common network setup issues, to include knowledge of locations for basic configuration files and commands.


Key Knowledge Areas:

  • Location and content of access restriction files
  • Utilities to configure and manipulate ethernet network interfaces
  • Utilities to manage routing tables
  • Utilities to list network states.
  • Utilities to gain information about the network configuration
  • Methods of information about the recognized and used hardware devices
  • System initialization files and their contents (SysV init process)
  • Awareness of NetworkManager and its impact on network configuration


Terms and Utilities:

  • ip
  • ifconfig
  • route
  • ss
  • netstat
  • /etc/network, /etc/sysconfig/network-scripts/
  • /bin/ping, ping6
  • traceroute, traceroute6
  • mtr
  • hostname
  • System log files such as /var/log/syslog, /var/log/messages and the systemd journal
  • dmesg
  • /etc/resolv.conf
  • /etc/hosts
  • /etc/hostname, /etc/HOSTNAME
  • /etc/hosts.allow, /etc/hosts.deny


LPI101 Exercises

[edit | edit source]
  • Configure Fundamental BIOS Settings Exercise Results
  1. To show the amount of physical RAM available: use free or cat /proc/meminfo | grep MemTotal
  2. Which devices are sharing an interrupt line? cat /proc/interrupts | more
    • How many PCI buses and bridges are there? lspci | wc -l
    • Are there any PCI/ISA bridges? lspci | grep 'PCI\|ISA'
  3. What is the option with lspci to list all the Intel PCI devices? lspci -d 8086:*
  4. What is the command to set your IDE hard drive in read-only mode? hdparm -r1 <device>
  5. What is the command to turn the hard drive's write cache on or off? hdparm -W1 <device>    hdparm -W0 <device>
  6. What does the setpci utility do? setpci is a utility for querying and configuring PCI devices.
  7. What would be the command to write a word in register N of a PCI device? setpci -s 12:3.4 N.W=1