
Internet Technologies/Print version


The Internet

The Internet is a worldwide collection of computer networks that began as a single network that was originally created in 1969 by ARPA (Advanced Research Projects Agency), a U.S. government agency that was far more interested in creating projects that would survive a nuclear war than in creating anything useful for the civilian population.

With its original form, the ARPANET, the U.S. government hoped to create a network of computers that would allow communication between government agencies and certain educational centers and that would be able to survive a nuclear explosion. It is doubtful that the original founders of ARPANET foresaw what we now know as "the Internet." From its humble beginnings as a military project, the ARPANET grew slowly throughout the 1970s and 1980s as a community of academics accomplished the truly monumental task of hammering out the building blocks of this new, open, modular conglomeration of networks.

In addition to the U.S. ARPANET, other countries also developed their own computer networks which quickly linked up to ARPANET, such as the UK's JANET (1983 onwards), and Australia's ACSnet (mid-1970s until replaced). Connecting these together would help develop a global internetwork.

The various protocols, including IP, TCP, DNS, POP, and SMTP, took shape over the years, and by the time the World Wide Web (HTML and HTTP) was created in the early 1990s, this "Internet" had become a fully functional, fairly robust system of network communication, able to support this new pair of protocols, which eventually turned the Internet into a household word.

While a large portion of users today confuse the Web with the Internet itself, it must be emphasized that the Web is only one type of Internet application, and one set of protocols among a great many which were in use for over a decade before the Web entered into the public awareness.

The Web is a subset of the Net. Email is not a part of the Web, and neither are newsgroups, although Web designers have developed web sites through which users, the world over, commonly access both of these much older forms of Internet media.

While the Net is a largely abstract phenomenon, it cannot (at least, not yet) be accurately equated with the concept of "cyberspace" as depicted in science fiction. If "judgement day" were to occur as depicted in the latest "Terminator" film, much of the Internet would survive it, but most of the electrical and data infrastructure by which we access the net would not. The line which currently demarcates the "digital divide" would shift dramatically to a point where it would leave only a small segment of humanity in virtual touch. This limitation, however, will slowly be overcome as wireless technologies continue to proliferate and wired technologies become increasingly cheaper.

In March 1972 ARPA was renamed DARPA, the Defense Advanced Research Projects Agency; the name reverted to ARPA in February 1993 and back to DARPA in March 1996, which it has remained ever since. The agency was originally created as ARPA in 1958 in response to the launch of Sputnik, which made America realize that the Soviet Union could exploit military technology. DARPA contributed to the creation of the ARPANET as well as the Packet Radio Network, the Packet Satellite Network and the Internet, and to research in the field of artificial intelligence (AI). By the late 1970s the Department of Defense had adopted BSD UNIX as the primary operating system for its DARPA work.



Domain names

The Domain Name System, most often known simply as DNS, is a core feature of the Internet. It is a distributed database that handles the mapping between host names (domain names), which are convenient for humans, and numerical Internet (IP) addresses. For example, www.wikipedia.org is a domain name and 130.94.122.199 a corresponding numerical Internet address. The domain name system acts much like an automated phone book: you can "call" www.wikipedia.org instead of 130.94.122.199. It can also handle the reverse mapping, so that a query for the name belonging to 130.94.122.199 might return something like larousse.wikipedia.org.
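
As a minimal illustration of these forward and reverse lookups, the sketch below uses Python's standard socket module; the names and addresses printed will reflect whatever the live DNS data happens to be when it is run.

import socket

# Forward lookup: human-friendly name -> numerical IP address.
address = socket.gethostbyname("www.wikipedia.org")
print("www.wikipedia.org ->", address)

# Reverse lookup: IP address -> name. Not every address has a PTR record,
# so the call may fail; handle that case rather than crashing.
try:
    name, _aliases, _addresses = socket.gethostbyaddr(address)
    print(address, "->", name)
except socket.herror:
    print(address, "has no reverse (PTR) entry")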

DNS was first invented in 1983 by Paul Mockapetris; the original specifications are described in RFC 882. In 1987 RFC 1034 and RFC 1035 were published which updated the DNS specification and made RFC 882 and RFC 883 obsolete. Subsequent to that there have been quite a few RFCs published that propose various extensions to the core protocols.

DNS implements a hierarchical name space by allowing name service for parts of a name space known as zones to be "delegated" by a name server to subsidiary name-servers. DNS also provides additional information, such as alias names for systems, contact information, and which hosts act as mail hubs for groups of systems or domains.

The present restriction on the length of a domain name is 63 characters per label (each dot-separated part, such as www or com, is a separate label), with a complete name limited to 253 characters. Domain names are also limited to a subset of ASCII characters, preventing many languages from representing their names and words correctly. The Punycode-based IDNA system, which maps Unicode strings into the valid DNS character set, has been approved by ICANN and adopted by some registries as a workaround.
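
As a small illustration of how IDNA maps a Unicode name into the ASCII subset that DNS accepts, the snippet below uses Python's built-in IDNA codec (which implements the older IDNA 2003 rules); the domain shown is only an example.

# Encode an internationalized domain name into its ASCII ("Punycode") form.
unicode_name = "bücher.example"
ascii_name = unicode_name.encode("idna")
print(ascii_name)                 # b'xn--bcher-kva.example'

# The mapping is reversible.
print(ascii_name.decode("idna"))  # bücher.example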

The DNS system is run by various flavors of DNS software, including:

  • BIND (Berkeley Internet Name Domain), the most commonly used name daemon.
  • DJBDNS (Dan J Bernstein's DNS implementation)
  • MaraDNS
  • NSD (Name Server Daemon)
  • PowerDNS

Any IP computer network can use DNS to implement its own private name system. However, the term "domain name" is most commonly used to refer to domain names implemented in the public Internet DNS system. This is based on thirteen "root servers" worldwide, all but three of which are in the United States of America. From these thirteen root servers, the rest of the Internet DNS name space is delegated to other DNS servers which serve names within specific parts of the DNS name space.

An 'owner' of a domain name can be found by looking in the WHOIS database: for most TLDs a basic WHOIS is held by ICANN, with the detailed WHOIS maintained by the domain registry which controls that domain. For the 240+ country-code TLDs, the registry usually holds the entire authoritative WHOIS for that extension, as part of its many functions.

The current way the main DNS system is controlled is often criticized. The most commonly cited problems are its abuse by monopolies or near-monopolies such as VeriSign Inc., and the process for assigning top-level domains.

Some also allege that many implementations of DNS server software fail to work gracefully with dynamically allocated IP addresses, although that is the failure of specific implementations and not failures of the protocol itself.

DNS uses TCP and UDP port 53. Most DNS queries (such as name resolution requests) use UDP connections as the amount of data transferred is small and the session establishment overhead would introduce unnecessary traffic and load on nameservers. DNS zone file transfers between nameserver peers use TCP connections as the volume of data transferred is potentially much larger.

A DNS domain definition (sometimes referred to as a 'zone file') consists of individual DNS records. There are several record types in common usage:

  • SOA or Start Of Authority records contain parameters for the domain definition itself.
  • A records resolve names to IP addresses.
  • PTR records resolve IP addresses back to names.
  • NS records define the authoritative nameservers for the domain.
  • CNAME or Canonical Name records allow aliasing of one name to another.
  • MX or Mail Exchange records define the mail server associated with a domain or A record.
  • HINFO or Hardware Information records can be used to hold descriptive text about a specific device.
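
To see several of these record types in practice, here is a short lookup sketch. It assumes the third-party dnspython package (version 2.x or later, not part of the standard library); the data printed will reflect whatever the queried zone contains at the time.

import dns.resolver   # provided by the third-party dnspython package

# Query a few common record types for a domain and print what comes back.
for rtype in ("A", "NS", "MX"):
    answer = dns.resolver.resolve("wikipedia.org", rtype)
    for record in answer:
        print(rtype, record)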

Virtually all modern operating systems and network applications contain resolver libraries or routines for interrogating DNS services. Most operating systems also provide a command-line interface for querying DNS servers. The Windows NT family of operating systems provides the 'nslookup' command. Unix-based operating systems may offer 'nslookup' or 'dig' tools.

nslookup can either be used interactively, or non-interactively. An example of non-interactive usage follows. In this example, we gather the A record for www.wikipedia.org from the client's default nameserver:

nslookup www.wikipedia.org

Nslookup is somewhat more powerful when used interactively. An example of this follows. In the example, we find the mail servers for the domain wikipedia.org:

nslookup
> set q=MX
> wikipedia.org
Non-authoritative answer:
wikipedia.org   MX preference = 50, mail exchanger = mormo.org
wikipedia.org   MX preference = 10, mail exchanger = mail.wikimedia.org
>

See also: cybersquatting, dynamic DNS, ICANN, DNSSEC


Web hosting

Web Hosting: An Introduction

What is Web Hosting?

When someone sets up a server and hooks it up to the Internet, the files on the server become accessible over the Internet. Web Hosting is the space on a Web server where you can upload files. If you upload HTML files, you'll have a Web site. If you upload .ZIP files, you'll have a download area. There are a lot of organizations that provide Web hosting.

What does Web Hosting do?

Some companies run their own servers, but many pay for Web hosting. Let's say John Doe decides he wants to sell his paintings online. He purchases Web hosting, and sets up a Web site. Jane Doe, on the other hand, wants to set up a forum system. She purchases Web hosting and installs the forums. If you know how, you can make an entire Web site, or put any files you want, online, via your Web hosting.

In addition, over the last few years hundreds of Web site software packages have been written that, when installed on your hosting account, give you an instant Web site.

When looking for Web site software, it's important to consider the software license. If you choose to use free software, you will have the advantage of new software releases that are made available free of charge. Commercial alternatives, often more powerful, are also available.

How and Where can I get hosting?

Web hosting is available in two forms — paid and free.

Paid Web hosting has a periodic fee, be it monthly, quarterly or annually, and typically provides considerable amounts of storage space for your files. Depending on the service, one may get additional support for server-side scripts, Web commerce support, visitor traffic reports, and so on. If the hosting service has high-speed connections to the Internet and fast server equipment, it may be able to provide access for many thousands of visitors and allow downloading of tens of gigabytes of files, and more, per day. Technical support is usually available around the clock, by telephone, Web chat or email, allowing the subscriber to resolve most problems in a few minutes, or, at most, in several hours.

Free hosting, on the other hand, typically has severely limited file storage space and low bandwidth provisions. It may be run on equipment that is just as capable as that used by paid hosting services, but it is usually an adjunct to some other business being carried out by the provider. Your Web pages will be presented to the site visitor along with advertising banners or pop-up advertisements as a way of recovering the cost of providing this free service to you. Support for server-side scripts may be limited or non-existent, as well as access to many of the basic functions that most Web hosting programs are capable of providing. Technical support is usually limited, usually by email only, with response times figured in days. If you are a novice, a free service may be the place to start, as they often provide semi-automated site-building templates and online tutorials.

With the recent drop in monthly fees for Web hosting (usually a few dollars a month for a basic plan), it is recommended that one subscribe to a decent commercial hosting service.

Free hosting sites tend to be over-subscribed and are best suited to those who do not mind waiting some time for support issues to be resolved.

Reseller vs. Shared

Reseller accounts allow clients to divide their account and sell or distribute these allocations to other users. For example, if John purchases a 10GB reseller account, he can then sell ten 1GB accounts. Shared hosting refers to the fact that there are usually 100+ people per server — in effect, "sharing" the server.

Types of hosting control panels

There are multiple types of hosting control panels available on the market. Each one is specifically designed for a particular operating system and type of web hosting. For example, some control panels are specific to Linux and others are specific to Windows. Some hosting control panels are designed for hosting resellers, while others are designed for shared web hosting or dedicated server hosting.

The most popular hosting control panels

DotNetPanel - Windows hosting control panel for creating, managing, and selling shared web hosting accounts, dedicated servers, virtual private servers, Exchange hosting, SharePoint hosting, Dynamics CRM hosting, and BlackBerry hosting.

cPanel - Linux hosting control panel by cPanel Inc. The company develops two control panels: 1) cPanel for end users and 2) Web Host Manager (WHM) for managing a dedicated server including creating and managing shared web hosting accounts and reseller hosting accounts. cPanel Inc. is working on a Windows hosting control panel platform.

Plesk - Windows and Linux hosting control panel. Integrated with a billing system and provides functionality to create, manage, and sell shared web hosting and reseller web hosting accounts.

DirectAdmin - Linux hosting control panel for creating, managing, and selling shared web hosting and reseller web hosting accounts.

Ensim - Linux and Windows hosting control panel and infrastructure management software enabling access control, identity management, change audit & reporting, and automated provisioning for enterprises and service providers. Ensim Unify offers an integrated suite of tools for Active Directory, Exchange, Windows Mobile, Blackberry, Google Apps, SharePoint, SQL, Office Communications Server (OCS), and Web Hosting, providing an automated, secure, compliant management environment that overlays existing infrastructure.

Interworx - The Interworx dedicated server hosting control panels include: 1) NodeWorx for system administrators and 2) SiteWorx for website administrators. Interworx runs on Linux dedicated servers and Linux virtual private servers.

Hosting Controller - Hosting Controller is a web hosting automation control panel designed for web hosting companies that offer services in a cluster environment. Web hosting companies can manage Windows and Linux servers through a centralized interface. Web hosting companies can diversify their hosting offerings by adding multiple mail servers within a cluster and offering MS-Exchange & SharePoint hosting.

Helm - Parallels Helm is a Microsoft Windows control panel solution, empowering hosting providers to control, automate and sell products and services.

H-Sphere - Parallels H-Sphere delivers a multi-server hosting automation solution for Linux, BSD, and Windows platforms. H-Sphere includes its own control panels, automated billing, and provisioning solution in a single integrated system. H-Sphere is scalable to any number of boxes. Web, mail, database, and Windows hosting servers can be added without downtime.[1]

[1] Source of information: http://www.daveonwebhosting.com/web-hosting-control-panels/types-of-hosting-control-panels/

Further reading

Comparison of Web hosting control panels at Wikipedia

More information is available on the Wikipedia article about Web Hosting.


Routing

A route is the path that data takes when travelling through a network from one host to another. Routing is the process by which the path, or some subset of it, is determined. One of the characteristic features of the Internet, as compared to other network architectures, is that each node that receives a packet will typically determine for itself what the next step in the path should be.


IP routing decisions are generally made based on the destination of network traffic. When a node on the network sends an IP packet, it consults its routing table to determine the next-hop device the traffic should be sent to in order to reach its final destination. The routing table on a typical home machine may look something like this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
x.y.z           *               255.255.255.255 UH    0      0        0 ppp0
192.168.0.0     *               255.255.255.0   U     0      0        0 eth0
127.0.0.0       *               255.0.0.0       U     0      0        0 lo
default         x.y.z           0.0.0.0         UG    0      0        0 ppp0

So, for example, when the machine needs to forward a packet received on interface eth0 with a destination of 216.239.59.104, it consults the table, finds no more specific match, and falls back to the default route: the packet is sent to the gateway x.y.z, which is reached via interface ppp0.
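
This next-hop selection is essentially a longest-prefix match against the routing table. The sketch below reproduces that logic in Python for a table like the one above; it is only an illustration, and "x.y.z" remains a placeholder for whatever address the default route actually points at.

import ipaddress

# (destination network, gateway, interface), mirroring the table above;
# "x.y.z" is a placeholder for the real next-hop address on ppp0.
routes = [
    (ipaddress.ip_network("192.168.0.0/24"), None,    "eth0"),
    (ipaddress.ip_network("127.0.0.0/8"),    None,    "lo"),
    (ipaddress.ip_network("0.0.0.0/0"),      "x.y.z", "ppp0"),   # default route
]

def next_hop(destination):
    """Pick the matching route with the longest prefix (most specific match)."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in r[0]]
    return max(matches, key=lambda r: r[0].prefixlen)

network, gateway, interface = next_hop("216.239.59.104")
print(gateway or "directly connected", "via", interface)   # x.y.z via ppp0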

The routing table is constructed from a combination of statically defined routes and those learned from dynamic routing protocols.

Statically defined routes may be declared at system boot time, or via a command line interface. They will generally include the following parameters:

  • Destination - this may be either a single host, or a network (in which case a network mask is also required).
  • Gateway - the device to which traffic with the defined destination should be sent.

Static routes may also include the following parameters:

  • Interface - the interface through which the traffic to a destination must be sent. (Most OSs can determine this automatically)
  • Metric - the number of 'hops' away that the gateway is from this host. For a gateway that resides on a directly connected network, the metric is '1'.

The default route is a special case of a statically defined route. It is the route of last resort. All traffic that does not match another destination in the routing table is forwarded to the default gateway.

Dynamic routing protocols allow network attached devices to learn about the structure of the network dynamically from peer devices. This reduces the administrative effort required to implement and change routing throughout a network. Some examples of dynamic routing protocols are:

  • RIP (Routing Information Protocol)
  • OSPF (Open Shortest Path First)
  • ISIS (Intermediate system to intermediate system)
  • BGP (Border Gateway Protocol)
  • IGRP (Interior Gateway Routing Protocol)

IS-IS and OSPF are link-state protocols, meaning that each node within the same area knows the state of every link in the mesh. Because the number of links in a mesh grows rapidly with its size, these protocols are used for relatively small meshes such as an ISP's national backbone.

RIP is often used as a simple way to announce customers' routes into a backbone.

BGP is used as an external routing protocol to exchange routes with other entities. ISPs use BGP extensively to "trade" their routes. It can also be used to carry customers' routes across a network, for example in an MPLS backbone.


Protocols

In networking, a communications protocol or network protocol is the specification of a set of rules for a particular type of communication.

Different protocols often describe different aspects of a single communication; taken together, these form a protocol stack. The terms "protocol" and "protocol stack" also refer to the software that implements a protocol.

Most recent protocols are defined by the Internet Engineering Task Force (IETF) for Internet communications, and by the Institute of Electrical and Electronics Engineers (IEEE) or the International Organization for Standardization (ISO) for other types. The ITU-T handles telecommunications protocols and formats.

Network protocols and protocol layers are commonly categorised by the nearest matching layer of the OSI seven-layer model.

Systems engineering principles have been applied to design network protocols.

Common Internet protocols

Common Internet protocols include TCP/IP (Transmission Control Protocol/Internet Protocol), UDP/IP (User Datagram Protocol/Internet Protocol), HTTP (HyperText Transfer Protocol) and FTP (File Transfer Protocol).

TCP/IP
TCP/IP is a stream protocol. This means that a connection is negotiated between a client and a server. Any data transmitted between these two endpoints is guaranteed to arrive, thus it is a so-called lossless protocol. Since the TCP protocol (as it is also referred to in short form) can only connect two endpoints, it is also called a peer-to-peer protocol.
HTTP
HTTP is the protocol used to transmit all data present on the World Wide Web. This includes text, multimedia and graphics. It is the protocol used to transmit HTML, the language that produces all the fancy decoration in your browser. It runs on top of TCP/IP.
FTP
FTP is the protocol used to transmit files between computers connected to each other by a TCP/IP network, such as the Internet.
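
To make the layering concrete, here is a minimal sketch that sends a hand-written HTTP request over a plain TCP connection using Python's standard socket module; example.com is used purely as an illustrative host, and a real client would normally use an HTTP library instead.

import socket

# HTTP is just lines of text carried over a TCP connection.
host = "example.com"
with socket.create_connection((host, 80)) as conn:
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

# The status line and headers come back as text, followed by the HTML body.
print(response.split(b"\r\n")[0].decode())   # e.g. HTTP/1.1 200 OK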



History and evolution

The Internet began life as a research project sponsored by ARPA. Previously, US defense computers were connected to each other in series, so that if one computer were destroyed, the others would lose communication. To avoid this, the government decided to connect the computers in a web, with each computer connected to several others. The motivation was also to connect the few then-existing proprietary computer networks into one interconnected network. The first version of the Internet was called ARPANET and was implemented in 1969. It initially consisted of four nodes, at UCLA, UC Santa Barbara, the Stanford Research Institute, and the University of Utah. By 1984 the ARPANET had more than 1000 individual computers linked as hosts. In 1986 the NSF connected NSFNET to the ARPANET, and the result became known as the Internet. In 1995 the NSFNET backbone was retired from carrying general Internet traffic, and NSF returned to funding a research network. In 1996 Internet2, a consortium building an advanced next-generation research network, was founded. Today more than 550 million hosts connect to the Internet.


The Web

The World Wide Web (the "Web" or "WWW" for short) is a hypertext system that operates over the Internet. To view the information, you use a software program called a web browser to retrieve pieces of information (called "documents" or "web pages") from web servers (or "web sites") and view them on your screen. You can then follow hyperlinks on the page to other documents or even send information back to the server to interact with it. The act of following hyperlinks is often called "surfing" the web.

Looking further at web browsers: a web browser is an application program used to access the World Wide Web and search for wanted information on the Internet. One of the first widely used web browsers, Mosaic, was developed in the early 1990s. The ease of information access provided by web browsers greatly added to the popularity of the Internet. Companies and individual users alike can use a browser to access untold amounts of information, and it's as easy to find as clicking a mouse. Among the most popular web browsers have been Internet Explorer, Chrome, Firefox, and Netscape. Tight competition has driven continual improvement in the programs and associated technologies. Web browsers are loaded with ease-of-use features and are customizable to an individual user's preference.

URLs, HTTP and HTML

The core functionality of the Web is based on three standards: the Uniform Resource Locator (URL), which specifies how each page of information is given a unique "address" at which it can be found; Hyper Text Transfer Protocol (HTTP), which specifies how the browser and server send the information to each other; and Hyper Text Markup Language (HTML), a method of encoding the information so it can be displayed on a variety of devices.
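
As a quick illustration of how a URL breaks down into parts, the snippet below uses Python's standard urllib.parse module; the address shown is only an example.

from urllib.parse import urlparse

parts = urlparse("http://www.example.org:8080/wiki/Main_Page?action=view#History")
print(parts.scheme)    # http                  -> which protocol to use
print(parts.netloc)    # www.example.org:8080  -> host (and optional port)
print(parts.path)      # /wiki/Main_Page       -> resource on that host
print(parts.query)     # action=view           -> extra parameters
print(parts.fragment)  # History               -> position within the page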

Tim Berners-Lee now heads the World Wide Web Consortium, which develops and maintains these standards and others that enable computers on the Web to effectively store and communicate all kinds of information.

Beyond text

The initial "www" program at CERN only displayed text, but later browsers such as Pei Wei's Viola (1992) added the ability to display graphics as well. Marc Andreessen of NCSA released a browser called "Mosaic for X" in 1993 that sparked a tremendous rise in the popularity of the Web among novice users. Andreesen went on to found Mosaic Communications Corporation (now Netscape Communications, a unit of AOL Time Warner). Additional features such as dynamic content, music and animation can be found in modern browsers.

Frequently, the technical capability of browsers and servers advances much faster than the standards bodies can keep up with, so it is not uncommon for these newer features to not work properly on all computers, and the web as seen by Netscape is not at all the same as the web seen by Internet Explorer. The ever-improving technical capability of the WWW has enabled the development of real-time web-based services such as webcasts, web radio and live web cams.

Java and Javascript

Another significant advance in the technology was Sun Microsystems' Java programming language, which enabled web servers to embed small programs (called applets) directly into the information being served that would run on the user's computer, allowing faster and richer user interaction.

The similarly named, but actually quite different, JavaScript is a scripting language developed for Web pages. In conjunction with the Document Object Model (DOM), JavaScript has become a much more powerful language than its creators originally envisaged.

Sociological Implications

The exponential growth of the Internet was primarily attributed to the emergence of the web browser Mosaic, followed by Netscape Navigator, during the mid-1990s.

It brought unprecedented attention to the Internet from media, industries, policy makers, and the general public.

Eventually, it led to several visions of how our society might change, although some point out that those visions are not unique to the Internet but have been repeated with many new technologies (especially information and communications technologies) of various eras.

Because the web is global in scale, some suggested that it would nurture mutual understanding on a global scale.

Publishing web pages

The web is available to individuals outside the mass media. To "publish" a web page, one does not have to go through a publisher or other media institution, and the potential readership spans the globe, some thought. This to some is an opportunity to enhance democracy by giving voices to alternative and minority views. Some others took it as a path to anarchy and unrestrained freedom of expression. Yet others took it as a sign that hierarchically organized society, mass media being a symptomatic part of it, would be replaced by the so-called network society.

Also, hypertext seemed to promote a non-hierarchical and non-linear way of expression and thinking. Unlike books and documents, hypertext does not have a linear order from beginning to end. It is not broken down into a hierarchy of chapters, sections, subsections, etc. This reminded some of the ideas of Marshall McLuhan that new media change people's perception of the world, mentality, and way of thinking. While not an issue unique to the web, hypertext in this sense is closely related to the notion of the "death of the author" and intertextuality in structuralist literary theory.

These bold visions are at least not fully realized yet. We can find both supporting and countering aspects of web usage.

First, regarding the increased global unity, indeed, many different kinds of information are now available on the web, and for those who wish to know other societies, their cultures, and people, it became easier. When one travels to a foreign country or a remote town, s/he might be able to find some information about the place on the web, especially if the place is in one of the developed countries. Local newspapers, government publications, and other materials are easier to access, and therefore the variety of information obtainable with the same effort may be said to have increased, for the users of the Internet.

At the same time, there are some obvious limitations. The web is so far a very text-centered medium, and those who are illiterate cannot make much use of it. Even among the literate, using a computer may or may not be easy enough. During the late 1990s it was known, though with ample exceptions, that web users were predominantly young males in college or with a college degree. The trend has been changing, and women and the elderly are also using the web, although level of education and income are still related to web use, some think (See also the Wikipedia article Digital divide). Another significant obstacle is language. Currently, only a limited number of languages are usable on the web, due to software and standards issues, and no one would understand all the available languages. These factors would challenge the notion that the World Wide Web will bring unity to the world.

Second, the increased opportunity for individuals is certainly observable in the countless personal pages, as well as pages by other groups, such as families and small shops, which are not traditional publishers. The emergence of free web hosting services is perhaps an important factor in bringing this possibility into reality. The activities of alternative media have expanded onto the web as well.

Yet no small part of those pages seem to be either prematurely abandoned or one-time experiments. Very few of those pages, even when they are well developed, are popular. When it comes to the expression of ideas and the provision of information, it seems that the major media organizations, and those companies which became major organizations through their online operations, are still favored by the dominant majority. Besides, the Web is not necessarily a tool for political self-education and deliberation. The most popular uses of the Web include searching for and downloading pornography, which perhaps has a very limited effect in improving democracy. The most intensively accessed web pages have included the document detailing the former U.S. president Bill Clinton's sexual misconduct with Monica Lewinsky, as well as the lingerie fashion show by Victoria's Secret. In sum, both in terms of writers and readers, the Web is not popularly used for democracy. While this is not enough to categorically reject the possibility of the Web as a tool for democracy, the effect so far seems smaller than some of the expectations, for a quite simple reason: lack of interest and popularity. Anarchistic freedom of expression may be enjoyed by some, but many web hosting companies have developed acceptable use policies over time, sometimes prohibiting sensitive and potentially illegal expression. And again, those expressions may not reach a great many. The web is still largely a hierarchical place, some may argue.

Third, regarding the non-linear and non-hierarchical structure of the Web, the effects of these on people's perception and psychology are still largely unknown. Some argue that our culture is changing to that of postmodernity, which is closely related to a non-linear and non-hierarchical way of thinking, being, and even social organization. Yet counter-evidence is available as well. Among the most notable examples are web directories and search engines. Those sites often guide visitors toward the most popular sites. Besides, it is quite obvious that many web sites are organized according to a simple hierarchy, with the "home page" at the top. At least the present state of the Web and web users seems to suggest the change has not been as great as envisioned by some.


History of the Web

The Web grew out of a project at CERN, beginning around 1989, where Tim Berners-Lee and Robert Cailliau built the prototype system that became the core of what is now the World Wide Web. The original intent of the system was to make it easier to share research papers among colleagues. An earlier program by Berners-Lee was called ENQUIRE, named after Enquire Within Upon Everything, a famous 19th-century reference work of how-tos. Berners-Lee released files describing his idea for the "World Wide Web" onto the Internet on August 6, 1991.



HTML

HTML is the language of the web. The name stands for HyperText Markup Language.



Embedded technologies


Embedded technologies, in the context of the Internet, are stand-alone programs and plugins that extend the normal functionality of a web page. Java, Shockwave Flash, and audio and video players are all examples of embedded technologies that can assist in creating a web site.



Proxy servers

Proxy servers provide a cache of items available on other servers which are presumably slower, more expensive to access or unavailable from the local network.

The process of proxying a network through a single host on another network is called network masquerading or IP-masquerading if the source and target networks use the Internet Protocol.

This term is used particularly for a World Wide Web server which accepts URLs with a special prefix. When it receives a request for such a URL, it strips off the prefix and looks for the resulting URL in its local cache. If found, it returns the document immediately, otherwise it fetches it from the remote server, saves a copy in the cache and returns it to the requester. The cache will usually have an expiry algorithm which flushes documents according to their age, size, and access history.
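
The fetch-or-serve-from-cache loop described above can be sketched in a few lines of Python; this is a simplified illustration using the standard urllib module, with no expiry or size limits, not a description of how Squid or any other real proxy is implemented.

import urllib.request

cache = {}   # URL -> document body; a real proxy would also track age and size

def proxied_fetch(url):
    """Return the document from the local cache, fetching and storing it on a miss."""
    if url in cache:
        return cache[url]            # cache hit: no request to the remote server
    with urllib.request.urlopen(url) as response:
        body = response.read()       # cache miss: fetch from the origin server
    cache[url] = body                # save a copy for the next requester
    return body

page = proxied_fetch("http://example.com/")   # fetched from the network
page = proxied_fetch("http://example.com/")   # served from the cache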

The Squid cache is a popular HTTP proxy server in the UNIX/Linux world. Apache's mod_proxy module also provides proxying and caching capabilities, and has the advantage of often being installed already.


Search engine

A search engine is a type of computer software used to search a body of data, such as text or a database, for specified information.

Search engines normally consist of spiders (also known as bots) which roam the web searching for links and keywords. They send collected data back to the indexing software which categorizes and adds the links to databases with their related keywords. When you specify a search term the engine does not scan the whole web but extracts related links from the database.
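
The spider/indexer/query division of labour can be illustrated with a toy inverted index; in the sketch below the "pages" are hard-coded strings rather than documents fetched by a real spider, so it shows only the indexing idea, not a production search engine.

# A toy index: map each keyword to the set of pages that contain it.
pages = {
    "http://example.org/dns":     "the domain name system maps names to addresses",
    "http://example.org/routing": "routers forward packets toward their destination addresses",
    "http://example.org/email":   "mail servers use mx records in the domain name system",
}

index = {}
for url, text in pages.items():            # the "indexer" step
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(term):
    """Answer a query from the index, not by re-scanning the pages themselves."""
    return index.get(term.lower(), set())

print(search("domain"))    # pages mentioning "domain"
print(search("addresses"))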

Please take note that this is not a simple process. A search engine scans through millions of pages in its database. Once this has taken place, the results are put together in order of relevancy. Remember also not to get a search engine and a directory mixed up. Yes, the terms are used interchangeably, but the two do in fact perform different tasks!

Before 1993 the term "search engine" did not exist. Since then that has changed drastically, and almost everyone knows what one is. Because the Internet is used by millions of Americans daily, search engines see a lot of visitors, especially ones such as Google and Yahoo. Almost all of us use one of the two if we have Internet access. By simply typing words into the engine, we get results that give us a list of sites. (Siegel)

Usually a search engine sends out a spider, which fetches as many documents as possible. A program called an indexer then reads the documents and creates an index based on them. To return only meaningful results for each query, each engine uses what is called a proprietary algorithm. (Webopedia)

Sources

(2004, October 5). Search Engine. Retrieved September 19, 2008, from Webopedia Web site: http://www.webopedia.com/TERM/s/search_engine.html

Siegel, Carolyn (2006). Internet Marketing. Boston, MA: Houghton Mifflin Company.

Work Cited

Boswell, Wendy. "How do search engines work?" What is a search engine? 19 September 2008. http://websearch.about.com/od/enginesanddirectories/a/searchengine.htm


Did You Know?

Google's spider is known as Googlebot. You can prevent Googlebot from adding a page to its index, for example by using a robots.txt file.


Web advertising

Web advertising uses static images, animated GIFs, and Flash animation to display a product or service. Advertising on the Web is displayed in many different ways, such as banners or buttons, pop-ups and pop-unders, and intro pages to websites.

A banner or a button can be an image, an animated GIF, a Flash movie, or plain text.

A pop-up or pop-under causes a new browser window to open and displays a webpage that can contain text, images and Flash.

Intro pages to websites usually have a Flash or GIF animation to introduce a company, its services or products.


Internet advertising is delivered by ad servers. The most common type of ad server in use today is the central ad server.


Standard sizes for banner-style advertisements, in pixels (width x height), are as follows:

Rectangles -

300x250
250x250
240x400
336x280
180x150

Banners -

468x60
234x60
88x31
120x90
120x240
125x125
728x90

Skyscrapers -

160x600
120x600
300x600


Online shopping

Online shopping is just like going to a store, browsing around, and looking for things you want, except in the form of the Internet. Tim Berners-Lee started the World Wide Web in 1990. Shortly thereafter, online shopping started, and it has expanded exponentially ever since. It provides great convenience for consumers, who no longer have to leave the house or use gas to get to all the different stores; with online shopping you can get anything you want, from as many different stores as you want, all at one time. Online shopping mainly takes place through people using search engines and typing in exactly what they want. The search engine will then bring up the most popular sites for the items you are looking for. Once you find what you are looking for online, most sites will let you put the item in a shopping cart just like at a brick-and-mortar store, so you can keep looking around for anything else that you may need.

Shopping online makes life much easier because you can just type in a debit card number, wire money, or use electronic money, and be done within seconds or minutes. It makes life easy, but you have to be careful which sites you order from because of all the scams that go on every day. It is easy for hackers today to steal people's identities, so being smart is definitely key when shopping online.

Online catalogs and stores also make it easy for companies to do business, not only because people will use them, but because companies can offer a much larger selection online than they could ever stock in a physical store with its space constraints.

Online Auctions

Online auctions have become increasingly popular with the rise of online shopping. These auctions allow buyers and sellers to obtain and sell products and services quickly and efficiently. Auctions allow individuals and businesses to extend their products and services to a much larger customer base.

In 1995, the current powerhouse, eBay, was started from founder Pierre Omidyar's house. While bidding and searching through auctions are free, sellers are charged certain fees:

  • A nonrefundable Insertion Fee is charged, which can cost a seller anywhere from 30 cents up to $3.30, depending on the seller's starting bid on the item.
  • If a seller wishes to further promote a listing, the seller can highlight it for an additional fee.
  • A Final Sale fee is charged at the end of the seller's auction. This fee generally ranges from 1.25% to 5% of the final sale price.

At the end of the auction, both the seller and the buyer are notified by eBay. The seller usually has a minimum price they will accept for the bid item. If the buyer's bid exceeds the seller's lowest price, the transaction is completed between the two parties, independently of eBay. The above rules apply to eBay itself, but all auction-based websites charge fees for providing customers with a specific place for e-commerce.

Although eBay is a widely recognized name in the Internet world, there are several other popular auction sites such as Ubid.com, Bidz.com, and Webidz.com. Use of auction sites has become more common; however, there are some downsides to these websites. Once an auction has ended, the binding contract of the auction is between the winning bidder and the seller only, meaning the actual website has little to no liability for a bad transaction. Auction sites do provide their customers with several options to retrieve money from a dishonest buyer or seller. If these options fail to achieve results, the buyer or seller is essentially on their own.

There are many benefits from online auctions. Individuals and businesses sell and purchase products that are sometimes hard-to-find and hard to sell. Usually, these products come at cheaper prices, and can be found right in the comfort of your own home.



Email

Since the beginning of time, our innate nature has created a desire for convenience and a desire to find new ways to create convenience. The business world is especially guilty of this, as efficient operations are necessary in order to maintain a profitable book of business. Over the past few decades the Internet has made a grand entrance and a lasting impression on the business world. Since then, it seems, the Internet, and furthermore email, has made an even larger impression on the way business is conducted. While seemingly all of its effects are positive, there are reasons that a company should "second-guess" its decision to rely on email in such heavy ways.

While email is a convenient and efficient way to send a message, whether it be to someone in your clientele base, a sister company across the nation, or someone in the cubicle next to you, email can also create communication barriers that could have an everlasting effect on your business, especially concerning customer service. Email creates a way to send bills, eliminate phone calls and provide services to your customers without ever having contact with those customers. Eliminating your "one-on-one" contact with everyone involved in your business creates hazards such as dishonesty, misunderstanding of company policies, etc. Below are a few more adverse effects that email could have on a business, as reported by Paul McFedries (http://www.mcfedries.com/Ramblings/email-pros-cons.asp):

    Creates an impersonal environment
    Excessive involvement (can require more attention than you can give)
    Lax in security
    "Text-only" (can create massive and detrimental misunderstandings)

As duly noted, email has its obvious positive effects on our business world and definitely plays a large role in its success. However, we should always be aware of the problems that can be caused as well.


History of email

Email began as a military experiment in sending messages to and from the battlefield. Thus was born email, or electronic mail.

The first network email was sent in 1971 between two machines by an engineer named Ray Tomlinson. He wrote a mail program for TENEX, the BBN-grown operating system that, by then, was running on most of the ARPANET's PDP-10 machines. (Heliomedia) The mail program was written in two parts: (1) to send messages, you would use a program called SNDMSG; (2) to receive mail, you would use the other part, called READMAIL. (Heliomedia)

In 1972, the commands MAIL and MLFL were added to the FTP program and provided standard network transport capabilities for email transmission. FTP sent a separate copy of each email to each recipient, and provided the standard ARPANET email functionality until the early 1980s, when the more efficient SMTP protocol was developed. Among other improvements, SMTP enabled sending a single message to a domain with more than one addressee, after which the local server would locally copy the message to each recipient. (Livinginternet)

Over the years email has evolved, with many different programs built around it and many people working to improve email systems.

In 1993, the large network service providers America Online and Delphi started to connect their proprietary email systems to the Internet, beginning the large scale adoption of Internet email as a global standard. (Livinginternet)

The first important email standard was called SMTP, or simple message transfer protocol. SMTP was very simple and is still in use; however, as we will hear later in this series, SMTP was a fairly naïve protocol, and made no attempt to find out whether the person claiming to send a message was the person they purported to be. Forgery was (and still is) very easy in email addresses. These basic flaws in the protocol were later to be exploited by viruses and worms, and by security frauds and spammers forging identities. Some of these problems were still being addressed in 2004. (Net History)

Ian Peter; http://www.nethistory.info/History%20of%20the%20Internet/email.html


E-mail predates the Internet; existing e-mail systems were a crucial tool in creating the Internet. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the exact history is murky, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS. E-mail was quickly extended to become network e-mail, allowing users to pass messages between different computers. The early history of network e-mail is also murky; the AUTODIN system may have been the first to allow electronic text messages to be transferred between users on different computers, in 1966, but it is possible the SAGE system had something similar some time before. The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report [1] which indicates experimental inter-system e-mail transfers on it shortly after its creation, in 1969. Ray Tomlinson initiated the use of the @ sign to separate the names of the user and their machine in 1971 [2]. The common report that he "invented" e-mail is an exaggeration, although his early e-mail programs SNDMSG and READMAIL were very important. The first message sent by Ray Tomlinson is not preserved; it was "a message announcing the availability of network email" [3]. The ARPANET significantly increased the popularity of e-mail, and it became the killer app of the ARPANET.


(Origins of E-mail: http://www.lookforemails.com/EmailFacts.aspx)


BBN was the first company to send an e-mail. BBN stands for Bolt Beranek and Newman. BBN was hired by the US Defense Department and created what is known as the ARPANET. The ARPANET eventually evolved into what is known today as the Internet. The first e-mail was sent three years later, in 1971, by Ray Tomlinson. Ray Tomlinson was also the first to use the @ symbol in email addresses, to show which computer a message came from and that it was not simply the local host.

http://www.mailmsg.com/history.htm

One of the first new developments when personal computers came on the scene was "offline readers". Offline readers allowed email users to store their email on their own personal computers, and then read it and prepare replies without actually being connected to the network - sort of like Microsoft Outlook can do today. (Net History)

This was particularly useful in parts of the world where telephone costs to the nearest email system were expensive (often this involved international calls in the early days). With connection charges of many dollars a minute, it mattered to be able to prepare a reply without being connected to a telephone, and then get on the network only to send it. It was also useful because the "offline" mode allowed for more friendly interfaces. Being connected directly to the host email system in this era of very few standards often resulted in delete keys and backspace keys not working, no capacity for text to "wrap around" on the screen of the user's computer, and other such annoyances. Offline readers helped a lot. (Net History)


But as it developed, email started to take on some pretty neat features. One of the first good commercial systems was Eudora, developed by Steve Dorner in 1988. Not long after, Pegasus Mail appeared. (Net History)

When Internet standards for email began to mature the POP (or Post Office Protocol) servers began to appear as a standard - before that each server was a little different. POP was an important standard to allow users to develop mail systems that would work with each other. (Net History)

These were the days of per-minute charges for email for individual dialup users. For most people on the Internet in those days, email and email discussion groups were the main uses. There were many hundreds of these discussion groups on a wide variety of topics, and as a body of newsgroups they became known as USENET. (Net History)

With the World Wide Web, email started to be made available with friendly web interfaces by providers such as Yahoo and Hotmail. Usually this was without charge. Now that email was affordable, everyone wanted at least one email address, and the medium was adopted by not just millions, but hundreds of millions of people. (Net History)


Routing email

Email routing is performed based entirely on the destination address of the email message. An email address has the following format:

username @ domain

(For example: user@wikipedia.org)

While it would be theoretically possible for mail clients to deliver their own messages directly to recipients, this is not desirable. So, an end user's mail client will deliver outbound messages to their local mail server using SMTP or a similar protocol.

The local mail server then performs a DNS lookup to find the 'MX' (mail exchanger) records for the recipient's domain name. These MX devices are the designated mail servers for all email addresses within that domain.

The local server then attempts an SMTP connection to each of the MX servers in order of priority, until a connection is successful. It forwards the message to the remote server and ends the connection.

The remote mail server then either repeats this process, forwarding the message closer to the intended recipient, or may deliver the message directly to the recipient.
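
The lookup-and-forward sequence above can be sketched as follows. The example assumes the third-party dnspython package for the MX query and uses Python's standard smtplib for the SMTP conversation; the addresses are placeholders, and real mail servers will usually refuse relay attempts from arbitrary clients.

import smtplib
import dns.resolver   # provided by the third-party dnspython package

def mail_servers(domain):
    """Return the domain's MX hosts, lowest preference value (highest priority) first."""
    records = dns.resolver.resolve(domain, "MX")
    ordered = sorted(records, key=lambda r: r.preference)
    return [r.exchange.to_text().rstrip(".") for r in ordered]

def deliver(sender, recipient, body):
    domain = recipient.split("@", 1)[1]
    for host in mail_servers(domain):            # try each MX in priority order
        try:
            with smtplib.SMTP(host, timeout=10) as smtp:
                smtp.sendmail(sender, [recipient], body)
                return True                      # handed off successfully
        except (smtplib.SMTPException, OSError):
            continue                             # try the next MX host
    return False

# Placeholder addresses for illustration only.
deliver("user@example.org", "someone@example.net",
        "Subject: hello\r\n\r\nA short test message.\r\n")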

History

Prior to the advent of the SMTP protocol, email was delivered using the UUCP (Unix-to-Unix Copy Protocol).

In the early days of SMTP, before Spam became a massive problem on the Internet, it was possible to manually define the route that an email message was to take. This was done by appending multiple "@domain" entries to a recipient address. For example:

user@domain1.com@domain2.com@domain3.com

A message with this recipient address would be routed first to the mail server for domain3.com, then to the mail server for domain2.com, then finally to the mail server for domain1.com where it would be delivered to the local user.

Due to anti-relaying restrictions now in place on Internet-accessible mail servers, this is generally no longer possible.


Email spam

Email spam (or just "Spam") is unsolicited email, similar to conventional "junk mail", but often on a larger scale.

Spam is estimated to have cost U.S. organizations over $10 billion (in lost productivity) in 2003.

Various software products are available to block or filter spam. Even lawmakers are stepping up to fight spam: in the United States, the CAN-SPAM Act of 2003 was passed into law.

"Spammers" - people or organizations responsible for sending out email spam - use large lists of email addresses that are collected in a number of ways. One of these means is by using computer programs that search websites for email addresses. An email address that appears on a website is therefore more likely to get spammed, other things being equal. There are various ways of preventing your email being harvested in this way. One is to include it as an image rather than a link. This is not ideal, though, because it means firstly that people will have to type the address themselves, and secondly, it makes the address inaccessible to blind people, or people who browse without images. An alternative, better method, is to use a small piece of 'JavaScript' to insert the email address into the page when it is displayed, keeping it out of the html which an email-collecting spam program might look at. One final method is to create a contact form to display on your website. The website user would fill out the form which when submitted forwards the message to you without displaying your email address. This final method has one drawback in that it circumvents the user's email system and does not provide the user with a record of the email that they sent.

I have also seen many people try the format me@REMOVETHISaol.com or similar. A person with a genuine interest in emailing you will remove the REMOVETHIS part of the address before sending, but programs that gather email addresses en masse will use it as-is; thus, you do not receive spam. One must be fairly inventive to make this effective; the instance given would be handled algorithmically by most site-scraping email bots.

In addition, there is a newer anti-spam feature available. Named the "Challenge/Response System", this sends either a link or a word-verification page to a sender the first time they e-mail you. The sender must either click the link or enter the word to verify that they are not a spamming program. After this, you get the e-mail and they are added to your allow list.

A study by Brockmann & Company, IT consultants, showed that challenge-response proved to be superior to appliances, hosted spam filters and commercial filters. Brockmann surveyed more than 500 businesses, with 40% of the respondents having IT responsibilities. The independently funded study resulted in the creation of a spam index to measure how satisfied workers were with their spam technologies.

Despite being less sophisticated than filtering technology sold by antispam and antivirus vendors, the challenge-response method was twice as effective as hosted services for spam prevention. According to the survey, 67% of challenge-response users specified that they were very satisfied with their email experience, as compared to the next highest technology, hosted services, with which 42% reported that they were very satisfied. Commercial software filters, such as those produced by McAfee, Symantec and Trend Micro, scored the lowest, satisfying only 22% of respondents. (SearchSecurity.com, Robert Westervelt, July 2007)

www.bluebottle.com currently offers a public beta of this software.

Spam is a very common thing, and at one point or another it is something that we have all received. Spam is not only an annoying email; it is a marketing tactic. However many times you have received this type of email, it is becoming more and more dangerous. It can be generated by businesses as well as individuals. It is used to promote products and is also spread by forwarding (things like jokes, images or chain letters). Spam can be used to gather information from your computer if opened, as well as to send out viruses. It is becoming an increasing threat. Spam has become such an issue that people now keep an entire email address just as a "throw-away account", an address specifically catered toward junk mail, because if we were to let it, spam would take up about 90% of our inboxes. This is not only an issue for our email accounts; it is also being directed toward our phones, and needs to be handled with caution.


Email security

Over time, e-mail has become one of the most used ways to send messages around the world or to people nearby. It also carries a lot of potential for harm along the way. Email threats can be divided into several distinct categories: viruses, worms and Trojan horses; phishing; and spam. According to "IT Security", viruses, worms and Trojan horses are delivered as email attachments; their destructive code can devastate a host system's data, turn computers into remote-controlled slaves known as botnets, and cause recipients to lose serious money. Trojan horse keyloggers, for example, can surreptitiously record system activity, giving unauthorized external parties access to corporate bank accounts, internal business Web sites and other private resources. Phishing attacks use social engineering to steal consumers' personal and financial data. Spam, although not an overt threat like a virus-infected attachment, can quickly overwhelm an inbox, making it difficult or even impossible for its owner to view legitimate messages. The spam problem has gotten so bad that it is commonplace for users to abandon email accounts that are overrun with spam rather than try to fight the problem. Spam is also the delivery medium of choice for both phishers and virus attackers. Email may be one of the most used Internet services, but many consequences can come with the technology.


Usenet

Usenet is a "network" of newsgroup servers (often run by ISPs) working together. Once connected to Usenet one can find everything from logical conversation to porn. One bad thing about Usenet is that it is often used to distribute warez, or illegal digital content like mp3s, Screeners, or cracked video games. In most cases Usenet is the first step in the life of warez. Unfortunately this has caused many newsgroup servers to block access to known warez newsgroups.

Usenet providers normally charge money for access, but many ISPs run newsgroup servers that their customers can access for free. So if you would like to check out Usenet, first call your ISP and find out if they have a newsgroup server running.


History of Usenet

Tom Truscott and Jim Ellis developed Usenet as graduate students in 1979. They thought the software could replace the system Duke University used to post announcements. Steve Bellovin became interested in the software and wrote the first UNIX-based "news" software for the system. (Interview with Tom Truscott, April 18, 2007)

At the time, the university's existing announcement software had been made obsolete by a hardware upgrade. Usenet was a side project that Truscott and Ellis worked on in their spare time. Bellovin wrote the script, and soon after "netnews" was created, linking Duke University and the University of North Carolina. Soon after that, the program was made available to the public and "A News", the first Usenet package, was created. (Usenet History)

Usenet is bulletin board software where users with the correct software can read and post messages. It is still running today; URLs that begin with news: refer to Usenet groups. NNTP, the Network News Transfer Protocol, is the transport system that carries Usenet messages. (Siegel 2006)

•Giganews, "Tom Truscott", Interview April 18, 2007, www.giganews.com/usenet-history/truscott.html

•Giganews, "Usenet History" http://www.giganews.com/usenet-history/

•Siegel F Carolyn, "Internet Marketing" Foundations and Applications 2E. Chapter 2 Internet Fundamentals, P-31


IRC

Internet Relay Chat, commonly abbreviated IRC, is a chat protocol: a way for several people to talk to each other by typing text messages, with each participant seeing everything that the other participants write, as if they were in a telephone conference.

Technology of IRC

Formally, IRC is a real-time, text-based, multi-user communication protocol specification and implementation, which relays messages between users on the network. According to Efnet.org, IRC was born sometime in 1988. According to IRChelp.org, the official specification for IRC was written in 1993 in RFC format. The specification, "RFC 1459: Internet Relay Chat Protocol", is an excellent source for both an introduction to and detailed information about the IRC protocol. Today IRC has a very wide range of users and anyone can find a place to participate in chat.

IRC's largest unit of architecture is the IRC network. There are perhaps hundreds of IRC networks in the world each one running parallel and disjoint from the others. A client logged into one network can communicate only with other clients on the same network, not with clients on other networks. Each network is composed of one or more IRC servers. An IRC client is a program that connects to a given IRC server in order to have the server relay communications to and from other clients on the same network but not necessarily the same server.

Messages on IRC are sent as blocks; other IRC clients will not see you typing and editing as you do so. You compose a message block (often just a sentence) and transmit it all at once. The server receives it and, based on the addressing, delivers it to the appropriate client or relays it to other servers so that it may be delivered or relayed again, and so on.

Once connected to a server, addressing of other clients is achieved through IRC nicknames. A nickname is simply a unique string of ASCII characters identifying a particular client. Although implementations vary, restrictions on nicknames usually dictate that they be composed only of characters a-z, A-Z, 0-9, underscore, and dash.

Another form of addressing on IRC, and arguably one of its defining features, is the IRC channel. IRC channels are often compared to CB Radio (Citizen's Band Radio) channels. While with CB one is said to be "listening" to a channel, in IRC one's client is said to be "joined" to the channel. Any communication sent to that channel is then "heard" or seen by the client. On the other hand, other clients on the same network or even on the same server, but not on the same channel will not see any messages sent to that channel.

While IRC is by definition not a P2P protocol, it does have some extensions that support text and file transmission directly from client to client without any relay at all. These extensions are known as DCC (Direct Client Connect) and CTCP (Client To Client Protocol). For CTCP, clients such as mIRC implement commands like "ctcp nickname version" or "ctcp nickname ping" to get some interesting information about other users.
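For those curious what these relayed message blocks look like "on the wire", below is a minimal sketch of the raw protocol lines a client might send (and a couple of server lines it might receive) when registering, joining a channel and chatting. The server name irc.example.net and the welcome text are only placeholders; real servers send many more reply lines.

NICK JohnDoe
USER johndoe 0 * :John Doe
:irc.example.net 001 JohnDoe :Welcome to the IRC network
JOIN #wikibooks
PRIVMSG #wikibooks :hello world!
PING :irc.example.net
PONG :irc.example.net

Most users never type these raw lines themselves; the client commands described below (such as /nick, /join and /msg) are translated into them by the IRC client.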

Using Internet Relay Chat

To use Internet Relay Chat, you need to do the following:

  1. Choose and install an IRC client.
  2. Find the channel discussing the topic of your interest (similar to a room in other chat environments).
  3. Find the server at which the channel is located. You can be directed to both the server and the channel by the website of a project, such as Wikibooks.
  4. Connect to the server using the client, using a nickname of your choice.
  5. Connect to the channel (a room).
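As a sketch of what those steps look like at a typical client's command line (the server and channel here are just examples, matching the ones used elsewhere in this chapter):

/server irc.freenode.net
/nick JohnDoe
/join #wikibooks
/msg #wikibooks hello everyone!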



Registering your nickname

Some IRC networks let you register your nickname through a service bot. This sometimes provides access to channels that are blocked to unregistered users, and in most cases it reserves your nickname so no one else can use it (at the very least it will mark you as the logged-in user and anyone else who uses the nickname as not logged in).

The service bot providing this is usually named "NickServ", sometimes "AuthServ", or on some big networks just "Q". Once you have found out which of those bots exists on your network, you can gather more information by typing:

/msg [BOTNAME] help

This should get you detailed instructions on how to use the service.

Example HowTo for a network

The process is fairly simple. Once you have chosen a nickname you would like to register (assuming it's not owned by anyone else) and chosen a password, follow these steps:

  • If you have not done so already, change your nickname to the one you would like to register
/nick [NICKNAME]

For example:

/nick JohnDoe
  • Send a private message to the network's nickserv service with the password you chose and your email address with
/msg nickserv register [PASSWORD] [EMAIL]

For example:

/msg nickserv register 1234abcd JohnDoe@email.com
  • After messaging the nickserv you should shortly receive a reply stating that it received your registration request and sent an email to the address you provided.
An email containing nickname activation instructions has been sent to JohnDoe@email.com
  • To complete the registration process, you will need to message the nickserv with the registration code emailed to your address.
/msg NickServ VERIFY REGISTER JohnDoe p4huc5gqunnc
  • Once you have correctly entered the registration code, the nickserv should message you back stating the nick registration process was completed successfully.
JohnDoe has now been verified.

You should now be "logged in" under your nick. If you disconnect from the server, to log in again under your nick you will need to message the nickserv with your password:

/msg nickserv identify [PASSWORD]

For example:

/msg nickserv identify 1234abcd

Once you do so, it should reply saying you have successfully logged in.

You are now identified for JohnDoe

Private conversations and chats

By default, the conversations using IRC are public, visible to all users in the channel.

To have a private conversation with a user in the channel, type "/query nickname".

To have a private chat, join a non-existent channel, and then allow joining only by invitation using the command "/mode +i". Broken into steps:

  1. /join #mynewchannel
  2. /mode +i
  3. /invite someotherguy

IRC clients

To use IRC, you'll need an IRC client--a program that lets you connect to an IRC server, and enter an IRC channel. There is a variety of IRC clients:

Some IRC clients:

  • ChatZilla: an add-in for Firefox.
  • IRSSI: has a text-only user interface.
  • mIRC: Windows only; good for beginners.
  • XChat (XChat-WDK for Windows)
  • Smuxi: a user-friendly client for GNOME; runs on Linux and Windows.
  • Colloquy: for Mac OS X only.
  • Pidgin: a multi-protocol client; supports more chat protocols than just IRC.
  • Miranda: a multi-protocol client.
  • Trillian: a multi-protocol client.
  • Opera: a web browser with an integrated IRC client.
  • BitchX

IRC commands

What follows is an overview of some of the basic commands of the IRC protocol. All the commands are prefixed with a slash "/", as in most clients this indicates that what follows is an IRC command to be executed. With some IRC clients, including ChatZilla and Pidgin, you do not need to know these commands: you tell the client what you want to do using the graphical user interface and the client sends the necessary commands for you.

Basic commands

Some basic commands for IRC are listed below. Please note that not all of them are available in all clients, as some are client-side inventions to make your life easier and not part of the IRC protocol itself.

  • /attach, /server, /connect: sign on to a server. Examples: /attach irc.freenode.net, /server irc.freenode.net, /connect irc.freenode.net
  • /nick: set your nickname. Example: /nick YourName
  • /join: join a channel. Example: /join #wikibooks
  • /msg: send a message, either to the entire channel or privately to one user. Message the channel: /msg #wikibooks hello world! Send a private message: /msg JohnDoe Hi john.
  • /whois: display information about a user on the server. Example: /whois JohnDoe
  • /clear, /clearall: clear a channel's text, or the text of all open channels. Examples: /clear, /clearall
  • /away: set an away message. To return from "away", type /away again or send a message. Example: /away I'm away because...
  • /me: send an action to the channel. For example, /me loves pie. would output to the chat, in the case of JohnDoe: JohnDoe loves pie.
  • /topic: query or set the topic of discussion. Example: /topic Using IRC

Privileged User Commands

Commands for half-operators, channel operators, channel owners, and admins:

  • /kick: kicks (boots) a user from the channel. You must be a half-operator or greater to do this. Example, kicking a user with a reason: /kick #channel JohnDoe I kicked you because...
  • /ban, /unban: bans or unbans a user from the channel. You must be a channel operator or greater to do this. Examples: /ban JohnDoe, /unban JohnDoe


Remote Access

Remote access allows you to access one computer from another using a protocol (ex. Secure Shell).


Telnet

Telnet is a protocol designed to remotely access computers in a client-server fashion. Telnet is inherently insecure, as the data passed between the client and the server is not encrypted. For connections through insecure networks (such as the Internet), SSH (Secure Shell) should be used so that all communications between the client and server are encrypted.

Examples

Test if the HTTP port is open and its service listening

telnet localhost 80
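Once connected you can type a simple request by hand to confirm that the service really answers; the transcript below is only a sketch, and the exact greeting and response headers will vary with the web server in use:

telnet localhost 80
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: localhost

HTTP/1.1 200 OK

A status line such as "HTTP/1.1 200 OK" (followed by headers and the page itself) shows that a web server is listening on the port. If instead you see "Connection refused", nothing is listening there.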

Send an email

telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape character is '^]'.
220 smtp.mydomain.com
mail from:<superman@mydomain.com>
250 2.1.0 <superman@mydomain.com>... Sender ok
rcpt to:<wonderwoman@herdomain.com>
250 2.1.5 <wonderwoman@herdomain.com>... Recipient ok
data
354 Enter mail, end with "." on a line by itself
Let's meet
.
250 2.0.0 n514OvkN019941 Message accepted for delivery
quit
221 2.0.0 smtp.mydomain.com closing connection
Connection closed by foreign host.


SSH

SSH is a secure replacement for Telnet and rsh. All communications between the client and server are encrypted. To use an SSH client (usually OpenSSH) in most Unix operating systems, type ssh user@host.com in a terminal window. If you don't specify the username, the user that entered the command ($USER) will be used. In Windows, you will need to download a third-party utility such as PuTTY or Cygwin. Find more information in the ssh(1) man page. On other operating systems (smartphones, for example), you can use a web-based client or a dedicated app; there are several SSH apps for Android, including ConnectBot, Dropbear, ServerAssistant, and the Telnet / SSH Simple Client.

Uses

SSH is much more than just a way to access a remote shell securely. It can also be used to transfer information securely in many other ways.

Using SSH

The secure shell client is conveniently called ssh. It is typically used to access a remote host. A typical usage of ssh is

ssh user@host

This means that the client intends to log in as user on the host machine. On successful authentication, an SSH session is established between the client and the host.
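Besides opening an interactive shell, ssh can run a single command on the remote host and print its output locally, which is handy in scripts. A small sketch (the host name and commands are just examples):

ssh user@host uptime
ssh user@host "df -h /home"

Quoting the second command keeps its arguments from being expanded by the local shell before they reach the remote one.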

Rsync and SFTP are the two recommended ways to transfer files, since there are unfixable flaws with SCP.

Using SFTP

SFTP has nothing to do with FTP; it is a separate protocol that merely works like FTP, meaning you use it much as you would FTP. Using SFTP requires only the SSH server; an FTP server is irrelevant to SFTP. Files are transferred as binary by default.

sftp user@host
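Once connected, sftp gives you an interactive prompt with FTP-like commands. The session below is only a sketch; the file names are invented for the example:

sftp user@host
sftp> ls
sftp> get notes.txt
sftp> put report.pdf
sftp> bye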

Using Rsync

"rsync" is not part of the SSH suite but is nearly ubqiquitous. It uses SSH to secure connections when transferring to or from a remote system.

rsync file user@host:/path/

or

rsync user@host:/path/file .

One of the best parts of "rsync" is that it transfers only the changes if an earlier version of the file is already on the destination. That saves time and bandwidth. There are a lot of useful options for "rsync", including the -a option, which combines recursive copying with preservation of times, attributes, owner, and group, among others.
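For instance, the following sketch mirrors a local directory to a remote backup location using -a (archive mode) plus -v for verbose output; the paths are only examples. The trailing slash on the source means "copy the contents of the directory" rather than the directory itself:

rsync -av ~/project/ user@host:/backup/project/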

Using SCP

The SSH suite still includes a neat utility, "scp", which stands for secure copy and provides a convenient way to copy files between machines. In recent versions it is a wrapper around SFTP, but it works almost exactly like the default Unix cp command. scp also allows you to copy a file from one remote host to another remote host. An example of scp:

scp "user@host.com:~/files/*" . which means: copy ALL files from the files/ directory in the user's home directory on the host.com machine to the CWD (current working directory). To copy the directory itself, including subdirectories, use the -r option instead.

Another great use is to use it to encrypt the transport of any data from one machine to another. As an extreme example, you can use SSH to remotely move a disk from one machine to another (akin to ghost, but securely). This may not be the best use of SSH, or the fastest way to transfer data from one machine to another over a network, but it shows you how powerful SSH can be.

scp, aka Secure Copy, works just like rcp.

  • Copy to a remote host - You must use the colon. REMOTE_PATH is not necessary and all REMOTE_PATHs are relative to the user's home directory.
    scp FILE_PATH user@host:REMOTE_PATH
  • Copy from a remote host ,
    scp user@host:REMOTE_PATH LOCAL_PATH

Note: If your filename contains spaces, then use scp like this:

  • If the file name is /media/sda6/Tutorials/Linux Unix/linux_book.pdf and the destination directory is /home/narendra/data:
    • $ scp user@host:"/media/sda6/Tutorials/Linux\ Unix/linux_book.pdf" /home/narendra/data
  • If the file name is /home/narendra/linux_book.pdf and the destination directory is /media/Tutorials/Linux Unix/:
    • $ scp /home/narendra/linux_book.pdf user@host:"/media/Tutorials/Linux\ Unix/"

Note: If you want to copy a whole directory, then use the -r option:

  • scp -r user@host:"<source_dirname>" <destination_dirname>

Creating SSH Keys

Although SSH can be used with passwords, doing so is not recommended, and many servers will not allow password logins. Instead, use a key - this is more secure, and more convenient.

To create an SSH key:

Most modern Unix systems include the OpenSSH client. To generate a key, run:

$ ssh-keygen

This will store your private key in $HOME/.ssh/id_rsa, and your public key in $HOME/.ssh/id_rsa.pub. You can use different filenames, but these are the default filenames, so it's easiest to not change them.
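If you prefer a different key type or file name (for example an Ed25519 key, supported by reasonably recent versions of OpenSSH), you can pass them explicitly; the file name and comment below are just examples:

$ ssh-keygen -t ed25519 -f ~/.ssh/KEY -C "my laptop key"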

Permissions

Because the security of your private key is so important, SSH will not work if file permissions are insecure. SSH will create files and directories with the appropriate permissions, but sometimes things will go wrong. To fix permission issues:

$ chmod 600 ~/.ssh/KEY ~/.ssh/KEY.pub
$ chmod 700 ~/.ssh

Establish Trust

To log into a remote server using SSH keys, you'll need to put the public key on that server's list of authorized keys.

In other words, you need to append a copy of your local ~/.ssh/id_rsa.pub file to the end of the remote ~/.ssh/authorized_keys file.

Transfer Method A: ssh-copy-id

The easiest way to do that is using ssh-copy-id. This requires some alternate form of authentication, usually a password (since you don't yet have a key on the server, you cannot use key authentication).

ssh-copy-id -i ~/.ssh/KEY user@host.example.net

Transfer Method B: Step-by-step

The hard way to do that is to manually do each step that the above ssh-copy-id command does automatically for you:

  • First create the remote ~/.ssh folder on the destination server, if it does not already exist:
ssh user@host "mkdir ~/.ssh && chmod 700 ~/.ssh"
  • Next upload your PUBLIC key only (not your private key).
cd ~/.ssh
sftp user@host.example.net:.ssh
put KEY.pub
  • Then append your PUBLIC key to the server's list of authorized keys:
ssh user@host.example.net
cat ~/.ssh/KEY.pub >> ~/.ssh/authorized_keys
rm ~/.ssh/KEY.pub

Advanced *nix users could do all those steps in one line:

cat ~/.ssh/id_rsa.pub | ssh user@host.example.net "cat >> ~/.ssh/authorized_keys"

SSH Personal Configuration

You don't need to set up a ~/.ssh/config file, but it makes authentication easier. The important part is to specify your user name and your private key; if these are specified in the config file, you needn't provide them on the command line. Using HostName, you can shorten the ssh command to:

$ ssh servername

Example

#Specific configuration applied to one host
#This configuration applies specifically to a host which uses Windows Domain login
Host Short_Name
        HostName server1.example.com
        User domain\username
        IdentityFile ~/.ssh/KEY

# Use this login as default for all hosts in the one domain
# It will look for a key with the hostname in the key's file name
Host *.example.com
        User domain\username
        IdentityFile ~/.ssh/%h_KEY

# Generic configuration that applies to my private LAN.
# Of note, the options to forward X11 lets you run remote graphical 
# programs while viewing and interacting with them locally.
Host localnetwork 192.168.1.0/24
        User USERNAME
        IdentityFile ~/.ssh/key_37_rsa
        AddKeysToAgent yes
        ForwardX11 yes
        # In a pesky lab environment, add the following to your config
        # CheckHostIP no

# Catch-all settings which apply these settings to all hosts, 
# if the particular option has not yet already been set.
# If there are a lot of keys in the SSH agent, then IdentitiesOnly is needed
Host *
        IdentitiesOnly yes
        ServerAliveCountMax 2
        ServerAliveInterval 20

You can now ssh into server1.example.com with just ssh Short_Name. The configuration options are chosen on a first-match basis, so put very specific rules towards the beginning and more general rules towards the end.

Using an SSH Agent

Most desktop environments provide SSH agents automatically these days. If you wish to see the details, look for the environment variables $SSH_AUTH_SOCK and $SSH_AGENT_PID; only the former is used for connecting to the agent, and it must be available to any program which needs to connect to the agent.

Keys can easily be added to the agent manually:

ssh-add ~/.ssh/KEY

Or they can be added automatically on first use by setting AddKeysToAgent to "yes" in the appropriate rule set in the client configuration file.

However, keep in mind that once you go over six keys in the agent, special considerations have to be taken to keep the agent from trying the wrong keys in the wrong order and preventing you from logging in. Specifically, IdentitiesOnly should be set to "yes" for each host, ideally using the client configuration file, as in the example configuration above.
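If no agent is running at all (for example in a bare console session on a server), you can start one manually in the current shell and load a key into it; a minimal sketch:

$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/KEY
$ ssh-add -l

The last command lists the keys the agent currently holds, which is a quick way to confirm that the key was added.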

The most significant difference between SSH and Telnet or "rsh" is in the realm of security. SSH uses RSA, ECDSA, or Ed25519 keys for public-key cryptography.

  • The server to which you are trying to connect generates a pair of keys (public and private), known as its host key pair.
  • The public key is given to the client the first time it tries to connect. The corresponding private key is a secret and is kept on the server.
  • The client encrypts the data it sends using the public key, and that data can only be decrypted with the matching private key held on the server.

Communication from the server to the client can be protected in the same way: the server encrypts using the client's public key and the client decrypts using its private key. (In practice, modern SSH uses public-key cryptography mainly to authenticate the two ends and to agree on a symmetric session key, which then encrypts the bulk of the traffic.)

Setting up OpenSSH with public key cryptography

The following presumes physical access to the server or some out-of-band equivalent.

With your distro's package manager, install sshd (or openssh-server) on the server, and on the client install ssh (or openssh-client). It is likely that they're already installed since they're usually part of the distro's default installation for servers and workstations respectively. Be sure the following is in /etc/ssh/sshd_config on the server and uncommented there. That is to say, that there's no # in front of them:

PubkeyAuthentication yes
PasswordAuthentication no
  1. On the server,
    1. Open TCP port 22 on the server for incoming connections (or choose a non-standard port). How to do this varies depending on your firewall.
    2. If the server is behind a router with DHCP:
      1. Stop using DHCP and assign a static IP address to your server. See the Gentoo Handbook or Arch Linux Wiki for instructions if you do not know how.
      2. Forward external TCP port 22 (or another port) on your router to port 22 on your server.
  2. Over on the client, create and test a client key pair
    1. On the client command line, run ssh-keygen -f ~/.ssh/server.key ("rsa" is the default type, so it is not necessary to specify it explicitly). Consider annotating the key pair with a comment using the -C option.
    2. Copy the public key (~/.ssh/server.key.pub) to removable media, transfer that media to the server and mount it. Copy the public key to the server, say to ~/.ssh/server.key.pub there. Then append the key to the authorized_keys file: cat ~/.ssh/server.key.pub >> ~/.ssh/authorized_keys
    3. Back over on the client, test that the new key works: ssh -i ~/.ssh/server.key server.example.com
  3. (Re)start the sshd service.
  4. Log in again from the client using the key

Tip: If the username that you are logging in as on the server is the same as the one you're currently using on the client, you don't need to specify the user to log in as on the server.

SSH as a Proxy

If you can make an SSH connection, you can (most likely) use that connection as a SOCKS5 proxy, without any extra setup on the remote computer. Traffic can then be tunneled securely through the SSH connection. If you are on a wireless connection, you can use this to effectively secure all your traffic from snooping. You can also use this to bypass IP restrictions, because you will appear to be connecting from the remote computer. Note that DNS traffic is not tunneled, unless specific provisions are made to do so.

Pick some big port number (bigger than 1024 so you can use it as non-root). Here I choose 1080, the standard SOCKS port. Use the -D option for dynamic port forwarding.

ssh -D 1080 user@host

That's it. Now as long as the SSH connection is open, your application can use a SOCKS proxy on port 1080 on your own computer (localhost). For example, in Firefox on Linux:

  • go to Edit -> Preferences -> Advanced -> Network -> Connection -> Settings...
  • check "Manual proxy configuration"
  • make sure "Use this proxy server for all protocols" is cleared
  • clear "HTTP Proxy", "SSL Proxy", "FTP Proxy", and "Gopher Proxy" fields
  • enter "127.0.0.1" for "SOCKS Host", and "1080" (or whatever port you chose) for Port.

SSH from your webbrowser

You can also use SSH from a web browser with JavaScript support even when you don't have a secure shell client. To do this you have to install AnyTerm, AjaxTerm or WebShell on the system where the SSH server is running, or use a third-party service such as WebSSH.

SSH reverse tunneling

With reverse tunneling you can use a remote machine (for example, one on AWS) as an entry point to your local machine. A request to a specific port on the remote machine's IP address will be forwarded to your local machine and the response returned. It is called reverse tunneling because it is not your local machine sending data to the remote one on that port, but instead the remote machine sending data to the local one.

Use case: your computer is behind NAT - without a public IP address (or without a static local IP address) - but you still want to make your local HTTP server, a database, or some other specific service available to the public.

Another use case is torrent seeding: with a public IP address, more computers will be able to fetch your legal data.

The following command sets up a reverse tunnel where port 63368 on the remote machine is forwarded back through the SSH reverse tunnel to port 8000 on the originating machine, with SSH itself connecting on port 4160:

ssh -R 63368:localhost:8000 -p 4160 host.example.net

The format is:

ssh -R port-incoming:localhost:port-local -p port-for-connecting host.example.net

The remote machine's /etc/ssh/sshd_config can include the following:

AllowTcpForwarding yes
AllowAgentForwarding no
AllowStreamLocalForwarding yes
PermitTunnel yes
GatewayPorts yes	# optional, for external visibility
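Assuming the example above (a local web server on port 8000 forwarded to remote port 63368), a quick way to confirm that the tunnel works is to log into the remote machine and request the forwarded port; this is only a sketch:

curl http://localhost:63368/

If GatewayPorts is set to yes as in the configuration above, outside clients can reach the same service at http://host.example.net:63368/ as well.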


VNC

Virtual Network Computing (VNC) is a remote desktop protocol used to remotely control another computer. VNC transports the desktop environment of a graphical user interface from one computer to a viewer application on another computer on the network. There are clients and servers for many platforms, including Linux, Microsoft Windows, Berkeley Software Distribution variants and Mac OS X; in fact, you would be hard pressed not to find a viewer available for any GUI operating system. The VNC protocol allows for complete platform independence: a VNC viewer on any operating system can connect to a VNC server on any other operating system. It is also possible for multiple clients to connect to a VNC server at the same time. Popular uses of the technology include remote tech support and accessing your files on your work PC while at home or even on the road. There is even a Java viewer for VNC, so you can connect to a VNC server from your web browser without installing any software. The original VNC code is open source, as are many of the flavors of VNC available today.

How it works

VNC is actually two parts, a client and a server. The server is the machine that is sharing its screen, and the client, or viewer, is the program that is doing the watching and perhaps interacting with the server. VNC is a VERY simple protocol and is based on one and only one graphic primitive: "Put a rectangle of pixel data at a given x,y position". What this means is that VNC takes small rectangles of the screen (actually the framebuffer) and transports them from the server to the client. In its simplest form this would use a lot of bandwidth, and hence various methods have been invented to make the process faster. There are now many different 'encodings', or methods, to determine the most efficient way to transfer these rectangles, and the VNC protocol allows the client and server to negotiate which encoding they will use. The simplest and lowest common denominator is the raw encoding method, where the pixel data is sent in left-to-right scanline order and, after the initial setup, only the rectangles that have changed are transferred.

How to copy and paste

How do I copy-and-paste from applications running on a server (visible inside a local VNC window) to applications running locally (outside the VNC window) and back?

Some people [1] [2] suggest using xcutsel or autocutsel as a work-around:

On the VNC server side (inside the VNC window) run "xcutsel &". Leave it up and running.

  • 1. If you want to copy from VNC to local, select what you want to copy, then click "copy PRIMARY to 0" in xcutsel, then paste in local.
  • 2. If you want to copy from local to VNC, select what you want to copy, then click "copy 0 to PRIMARY" in xcutsel, then paste in VNC window.

Others [3] recommend autocutsel as a work-around, pointing at the VNC FAQ.
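One common way to use autocutsel is to start it automatically when the VNC desktop starts, so the selections stay synchronized without any clicking. Where exactly to put this depends on your VNC server, so the path below is only an assumption:

# in the VNC server's startup script, e.g. ~/.vnc/xstartup
autocutsel -fork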

For more about the subtleties of cutting and pasting in the X Window System, see "X Selections, Cut Buffers, and Kill Rings." by Jamie Zawinski 2002 (especially helpful if you are writing X11 applications).


Remote Desktop Connection

Note: This is for Windows. Linux users may be interested in VNC.

What you can do with Remote Desktop Connection

Wish you could access your home desktop from work? Going on vacation, but want to be able to use your home computer? This can be easily done with Windows XP’s built-in Remote Desktop Connection feature.

Some companies also use Remote Desktop for technical support. Cisco Systems (among others) allows engineers to use Remote Desktop to look in on issues and correct them. This troubleshooting method can save time and money for numerous industries.

Setup

In order to take advantage of Remote Desktop Connection, you have to have port 3389 (TCP) open on your firewall/router. To do so, consult your firewall or router’s manual.
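As a rough check that the port is reachable from outside, you can reuse the telnet technique shown earlier in this book; replace the placeholder with your own public IP address:

telnet your.public.ip.address 3389

If the connection opens rather than being refused or timing out, the port is forwarded correctly.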

User Password

In order for remote desktop to work, you have to set a password on your user account. Follow these steps to set a password.

  1. Go to Start then to My Computer (or go to your desktop and go to My Computer)
  2. Go to Control Panel
  3. Go to User Accounts
  4. Click on your user account
  5. Go to Create a Password
  6. Fill everything out and hit Create Password

Enabling Remote Desktop Connection (on the host)

Now you’re ready to enable Remote Desktop. You have to be logged in as an administrator.

  1. Go to Start then right click on My Computer (or right click My Computer on your desktop)
  2. Select Properties
  3. Go to the Remote tab
  4. Check “Allow users to connect remotely to this computer”
  5. If you have other users that you want to allow remote access to the computer, go to Select Remote Users, select Add, and type in the user account name of the user where it says “Enter the object names to select”

Getting your IP address

You now have to get your ip address. Please see Finding Your IP Address for more information.

Connecting to the host machine from another machine

Now, this is how to connect to your computer using Remote Desktop Connection. If you’re using a computer that doesn’t have Windows XP, you can install the client side of Remote Desktop Connection. Get it at http://www.microsoft.com/windowsxp/pro/downloads/rdclientdl.asp and then follow these steps.

  1. Go to Start
  2. Go to All Programs
  3. Go to Accessories
  4. Go to Communications
  5. Click Remote Desktop Connection
  6. Go to Options and modify the options as you see fit
  7. Type your IP address into the 'Computer' field
  8. Log in with your username and password

You should now have access to your computer. Take note that playing music will not work well over the remote connection unless you have it play on the local computer (change this in the options). Also, viewing images and webpages over the connection will be very slow. And of course, you can’t play games like Quake 3 over the connection; this simply isn’t possible.

Similar software does exist for other operating systems, such as rdesktop.

Pros

  • You can access your documents from anywhere.
  • You can keep your IRC (internet relay chat) and IM clients open on one machine and then connect to your computer from other places. Thus you don’t have to leave your screen names and whatnot behind on other people’s computers.
  • Check and save email in YOUR email client instead of using webmail.
  • Perhaps your workplace/school doesn’t allow you to run IRC or IM clients, but you can use remote desktop. Then you can log on to your computer and IRC and IM from there.
  • You could use this as some sort of tech support method.
  • You can use Remote Desktop if you can’t always have a monitor plugged in to the box.
  • Great for servers, you don't need to buy keyboard, mouse, monitor etc. to use.

Cons

  • Applications with high graphics abilities can't be used (due to network and graphical lag).
  • In order to start the server you have to log into your account, meaning that if you’re on vacation and your computer locks up, you’ll have to have someone else reboot the computer and log in with your password. This problem can be fixed by modifying a DLL file and adding a registry value, after which up to 2 more computers can be connected remotely at the same time.

See the following link for more information on the procedure for enabling it.

http://www.golod.com/2005/10/enabling-multiple-remote-desktop-sessions-in-windows-xp-professional-and-media-center-edition-2005/

  • You need a broadband connection that is always on, if you want to be able to access it all the time. Dialup won’t cut it there.