Wednesday, 14 February 2007

Maximum Security: Hacker's Guide to Protecting Your Internet Site and Network

Hacking and cracking are activities that generate intense public interest. Stories of hacked servers and downed Internet providers appear regularly in national news. Consequently, publishers are in a race to deliver books on these subjects. To its credit, the publishing community has not failed in this resolve. Security books appear on shelves in ever-increasing numbers. However, the public remains wary. Consumers recognize driving commercialism when they see it, and are understandably suspicious of books such as this one. They need only browse the shelves of their local bookstore to accurately assess the situation.
Books about Internet security are common (firewall technology seems to dominate the subject list). In such books, the information is often sparse, confined to a narrow range of products. Authors typically include full-text reproductions of stale, dated documents that are readily available on the Net. This poses a problem, mainly because such texts are impractical. Experienced readers are already aware of these reference sources, and inexperienced ones are poorly served by them. Hence, consumers know that they might get little bang for their buck. Because of this trend, Internet security books have sold poorly at America's neighborhood bookstores.
Another reason that such books sell poorly is this: The public erroneously believes that to hack or crack, you must first be a genius or a UNIX guru. Neither is true, though admittedly, certain exploits require advanced knowledge of the target's operating system. However, these exploits can now be simplified through utilities that are available for a wide range of platforms. Despite the availability of such programs, however, the public remains mystified by hacking and cracking, and therefore, reticent to spend forty dollars for a hacking book.
So, at the outset, Sams.net embarked on a rather unusual journey in publishing this book. The Sams.net imprint occupies a place of authority within the field. Better than two thirds of all information professionals I know have purchased at least one Sams.net product. For that reason, this book represented to them a special situation.
Hacking, cracking, and Internet security are all explosive subjects. There is a sharp difference between publishing a primer about C++ and publishing a hacking guide. A book such as this one harbors certain dangers, including
  • The possibility that readers will use the information maliciously
  • The possibility of angering the often-secretive Internet-security community
  • The possibility of angering vendors that have yet to close security holes within their software

Wireless LAN Communications

This document presents an overview of two IBM wireless LAN products, IBM Wireless LAN Entry and IBM Wireless LAN, and the technology they use for wireless communications. The information provided includes product descriptions, features, and functions. Some known product limitations, as well as a cross-product comparison, are included to help the reader understand which product to use, and where, for given circumstances.
Also documented are examples of product setup and configuration, along with various scenarios developed by the authors. Our intended audience is customers, network planners, network administrators, and system specialists who need to evaluate, implement, and maintain wireless networks. A basic understanding of LAN communications terminology and familiarity with common IBM and industry network products and tools are assumed.

Netizens: On the History and Impact of the Net

By Michael Hauben and Ronda Hauben

Introduction by Thomas Truscott

Netizens: On the History and Impact of Usenet and the Internet is an ambitious look at the social aspects of computer networking. It examines the present and the turbulent future, and it especially explores the technical and social roots of the "Net". A well-told history can be entertaining, and an accurately told history can provide us valuable lessons. Here follow three lessons for inventors and a fourth for social engineers. Please test them out when reading the book.
The first lesson is to keep projects simple at the beginning. Projects tend to fail, so the more one can squeeze into a year, the better the chance of stumbling onto a success. Big projects do happen, but there is not enough time in life for very many of them, so choose carefully.
The second lesson is to innovate by taking something old and something new and putting them together in a new way. In this book the "something new" is invariably the use of a computer network. For example, ancient timesharing computer systems had local "mail" services so their users could communicate. But the real power of E-mail came when mail could be distributed to distant computers and all the networked users could communicate. Similarly, Usenet is a distributed version of preexisting bulletin-board-like systems. The spectacularly successful World Wide Web is just a distributed version of a hypertext document system. It was remarkably simple, and seemingly obvious, yet it caught the world by complete surprise. Here is another way to state this lesson: If a feature is good, then a distributed version of the feature is good. And vice-versa.
The third lesson is to keep on the lookout for "something new", or for something improved enough to make a qualitative difference. For example, in the future we will have home computers that are always on and connected to the Net. That is a qualitative difference that will trigger numerous innovations.
The fourth lesson is that we learn valuable lessons by trying out new innovations. Neither the original ARPAnet nor Usenet would have been commercially viable. Today there are great forces battling to structure and control the information superhighway, and it is invaluable that the Internet and Usenet exist as working models. Without them it would be quite easy to argue that the information superhighway should have a top-down hierarchical command and control structure. After all, there are numerous working models for that.
It seems inevitable that new innovations will continue to make the future so bright that it hurts. And it also seems inevitable that as innovations permeate society the rules for them will change. I am confident that Michael Hauben and Ronda Hauben will be there to chronicle the rapidly receding history and the new future, as "Netizens" increasingly becomes more than a title for a book.

Looking Over the Fence at Networks: A Neighbor's View of Networking Research (2001)

The Internet has been highly successful in meeting the original vision of providing ubiquitous computer-to-computer interaction in the face of heterogeneous underlying technologies. No longer a research plaything, the Internet is widely used for production systems and has a very large installed base. Commercial interests play a major role in shaping its ongoing development. Success, however, has been a double-edged sword, for with it has come the danger of ossification, or inability to change, in multiple dimensions:
  • Intellectual ossification—The pressure for compatibility with the current Internet risks stifling innovative intellectual thinking. For example, the frequently imposed requirement that new protocols not compete unfairly with TCP-based traffic constrains the development of alternatives for cooperative resource sharing. Would a paper on the NETBLT protocol that proposed an alternative approach to control called “rate-based” (in place of “window-based”) be accepted for publication today? (A brief sketch contrasting the two control styles follows this list.)
  • Infrastructure ossification—The ability of researchers to affect what is deployed in the core infrastructure (which is operated mainly by businesses) is extremely limited. For example, pervasive network-layer multicast remains unrealized, despite considerable research and efforts to transfer that research to products.
  • System ossification—Limitations in the current architecture have led to shoe-horn solutions that increase the fragility of the system. For example, network address translation violates architectural assumptions about the semantics of addresses. The problem is exacerbated because a research result is often judged by how hard it will be to deploy in the Internet, and the Internet service providers sometimes favor more easily deployed approaches that may not be desirable solutions for the long run.
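To make the window-based versus rate-based contrast in the first bullet concrete, here is a small illustrative sketch in C. This is neither TCP's nor NETBLT's actual algorithm; all names and numbers are invented for illustration. The practical difference is what "clocks" the next packet out: feedback from the receiver, or the local clock.

    /* Hypothetical sketch contrasting window-based and rate-based
       control.  Not TCP's or NETBLT's real algorithm; the names and
       numbers here are invented for illustration only. */
    #include <stdio.h>

    #define SEGMENTS 10

    /* Window-based: a new segment may leave only when an ACK frees a
       slot; the receiver's feedback "clocks" the sender. */
    static void window_based(int window)
    {
        int in_flight = 0, sent = 0, acked = 0;
        while (acked < SEGMENTS) {
            while (in_flight < window && sent < SEGMENTS) {
                in_flight++;
                printf("send segment %d (in flight: %d)\n", sent++, in_flight);
            }
            acked++;      /* pretend one ACK arrives ... */
            in_flight--;  /* ... freeing a window slot */
        }
    }

    /* Rate-based: segments leave on a local timer, independent of ACKs. */
    static void rate_based(double segments_per_sec)
    {
        double t = 0.0, interval = 1.0 / segments_per_sec;
        for (int sent = 0; sent < SEGMENTS; sent++) {
            printf("t=%.2fs: send segment %d\n", t, sent);
            t += interval;  /* next transmission time; no ACK needed */
        }
    }

    int main(void)
    {
        puts("-- window-based (ack-clocked), window = 4 --");
        window_based(4);
        puts("-- rate-based (timer-paced), 5 segments/sec --");
        rate_based(5.0);
        return 0;
    }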

At the same time, the demands of users and the realities of commercial interests present a new set of challenges that may very well require a fresh approach. The Internet vision of the last 20 years has been to have all computers communicate. The ability to hide the details of the heterogeneous underlying technologies is acknowledged to be a great strength of the design, but it also creates problems because the performance variability associated with underlying network capacity, time-varying loads, and the like means that applications work in some circumstances but not others.

More generally, outsiders advocated a more user-centric view of networking research—a perspective that resonated with a number of the networking insiders as well. Drawing on their own experiences, insiders commented that users are likely to be less interested in advancing the frontiers of high communications bandwidth and more interested in consistency and quality of experience, broadly defined to include the “ilities”—reliability, manageability, configurability, predictability, and so forth—as well as non-performance-based concerns such as security and privacy. (Interest was also expressed in higher-performance, broadband last-mile access, but this is more of a deployment issue than a research problem.)

Outsiders also observed that while as a group they may share some common requirements, users are very diverse—in experience, expertise, and what they wish the network could do. Also, commercial interests have given rise to more diverse roles and complex relationships that cannot be ignored when developing solutions to current and future networking problems. These considerations argue that a vision for the future Internet should be to provide users the quality of experience they seek and to accommodate a diversity of interests.

An Introduction to Socket Programming

By Reg Quinton
These course notes are directed at Unix application programmers who want to develop client/server applications in the TCP/IP domain (with some hints for those who want to write UDP/IP applications). Since the Berkeley socket interface has become something of a standard, these notes will apply to programmers on other platforms as well.
Fundamental concepts are covered, including network addressing, well-known services, sockets, and ports. Sample applications are examined with a view to developing similar applications that serve other contexts. Our goals are
  • to develop a function, tcpopen(server, service), to connect to a given service (a sketch follows this list);
  • to develop a server that we can connect to (sketched later, under the netstat observations).
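As a rough modern sketch of the first goal (not Quinton's original code, which predates getaddrinfo(3) and would have used gethostbyname() and getservbyname()), tcpopen() can be built as follows:

    /* A minimal sketch of tcpopen(server, service): look up the server
       and service names, then try each returned address until one
       connects.  Returns a connected TCP socket, or -1 on failure. */
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int tcpopen(const char *server, const char *service)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;  /* TCP: connection-oriented */

        if (getaddrinfo(server, service, &hints, &res) != 0)
            return -1;

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }

A call such as tcpopen("localhost", "smtp") then yields a descriptor connected to the local mail service, or -1 on failure.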

This course requires an understanding of the C programming language and an appreciation of the programming environment (i.e., compilers, loaders, libraries, Makefiles, and the RCS revision control system).

Netstat Observations:
Interprocess communication (IPC) takes place between host.port pairs (or host.service, if you like). A process pair uses each connection -- there is a client application on one end of the IPC connection and a server application on the other.

Note the two protocols on IP -- TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). There's a third protocol, ICMP (Internet Control Message Protocol), which we'll not look at -- it's what makes IP work in the first place!

TCP services are connection-oriented (like a stream, a pipe, or a tty-like connection) while UDP services are connectionless (more like telegrams or letters).
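To make the "telegram" analogy concrete, here is a small illustrative sketch (the function name and the host/service arguments are invented for illustration, not taken from the notes): a UDP sender needs no connection setup, and each datagram stands alone.

    /* Sketch: send one UDP "telegram" -- a single sendto(), with no
       connection setup or teardown. */
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int send_datagram(const char *host, const char *service, const char *msg)
    {
        struct addrinfo hints, *res;
        int fd, rc = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_socktype = SOCK_DGRAM;   /* UDP: connectionless */

        if (getaddrinfo(host, service, &hints, &res) != 0)
            return -1;

        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0) {
            /* Like mailing a letter: address it, drop it in the box. */
            if (sendto(fd, msg, strlen(msg), 0,
                       res->ai_addr, res->ai_addrlen) >= 0)
                rc = 0;
            close(fd);
        }
        freeaddrinfo(res);
        return rc;
    }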

We recognize many of the services -- SMTP (Simple Mail Transfer Protocol, as used for E-mail), NNTP (Network News Transfer Protocol, as used by Usenet News), NTP (Network Time Protocol, as used by xntpd(8)), and SYSLOG, the BSD logging service implemented by syslogd(1M).

The netstat(1M) display shows many TCP services as ESTABLISHED (there is a connection between client.port and server.port) and others in a LISTEN state (a server application is listening at a port for client connections). You'll often see connections in a CLOSE_WAIT state -- they're waiting for the socket to be torn down.
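The second goal above, a server we can connect to, shows where those states come from. A minimal sketch follows (the port number 7777 is an arbitrary choice for illustration): while the process blocks in accept(), netstat reports the socket as LISTEN, and each accepted client shows up as ESTABLISHED.

    /* Minimal TCP server sketch: bind a port, listen, and greet each
       client.  netstat shows LISTEN while blocked in accept(), and
       ESTABLISHED for each accepted connection. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(7777);  /* arbitrary example port */

        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(fd, 5) < 0) {             /* enters the LISTEN state */
            perror("bind/listen");
            return 1;
        }
        for (;;) {
            int client = accept(fd, NULL, NULL);  /* blocks for a client */
            if (client < 0)
                break;
            write(client, "hello\r\n", 7);
            close(client);  /* teardown; the peer passes through CLOSE_WAIT */
        }
        return 0;
    }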

Introduction to Securing Data in Transit

The secure transmission of data in transit relies on both encryption and authentication - on concealing the data itself, and on ensuring that the computers at each end are the computers they say they are.
Authentication
Authentication is a difficult task - computers have no way of knowing that they are 'the computer that sits next to the printer on the third floor' or 'the computer that runs the sales for www.dotcom.com'. Yet those are the matters that are important to humans - humans don't care that the computer is '10.10.10.10', which is all the computers know.
However, if the computer can trust the human to tell it which computer address to look for - either in the numeric or the name form - the computers can then verify that each is, in fact, the computer at that address. It's similar to using the post office - we want to know whether 100 Somewhere Street is where our friend Sally lives, but the post office just wants to know where to send the parcel.
The simplest form of authentication is to exchange secret information the first time the two computers communicate and check it on each subsequent connection. Most exchanges between computers take place over a long period of time, in computer terms, so they tend to do this in a small way for the duration of each connection - as if you were checking, each time you spoke in a phone call, that the person you were talking to was still that person. (Sally, is that you? Yeah. Good, now I was telling you about the kids ... is that still you?)
It may sound paranoid, but this sort of verification system can inhibit what is called a 'man in the middle' attack - where a third party tries to 'catch' the connection and insert their own information. Of course, this relies on the first communication not being intercepted.
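One illustrative shape such checking could take - purely an assumption for this sketch, since the article doesn't prescribe an algorithm - is a hashed challenge-response over the shared secret: the verifier sends a fresh nonce, and the peer proves it still holds the secret by returning a hash of secret plus nonce (SHA-256 via OpenSSL here; all names and values are invented).

    /* Illustrative challenge-response over a previously shared secret.
       The message format, names, and choice of SHA-256 are assumptions
       for this sketch, not taken from the article.
       Build with: cc sketch.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Answer a challenge: digest = SHA256(secret || nonce).  A peer
       holding the same secret computes the same digest and compares. */
    static void respond(const char *secret, const char *nonce,
                        unsigned char digest[SHA256_DIGEST_LENGTH])
    {
        unsigned char buf[256];
        int n = snprintf((char *)buf, sizeof buf, "%s%s", secret, nonce);
        SHA256(buf, (size_t)n, digest);
    }

    int main(void)
    {
        const char *secret = "shared-at-first-contact"; /* from first exchange */
        unsigned char ours[SHA256_DIGEST_LENGTH], theirs[SHA256_DIGEST_LENGTH];

        /* The verifier picks a fresh nonce for each check; both sides
           hash secret+nonce and compare the results. */
        respond(secret, "nonce-1837", ours);
        respond(secret, "nonce-1837", theirs);   /* the peer's computation */

        puts(memcmp(ours, theirs, sizeof ours) == 0
             ? "peer still holds the secret"
             : "verification failed");
        return 0;
    }

Because a fresh nonce goes into each check, a man in the middle who recorded an earlier answer cannot simply replay it - though, as noted above, everything still hinges on the first exchange of the secret not being intercepted.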
Public key encryption (see below) is the other common means of authentication. It doesn't authenticate the sender, but it does authenticate the receiver - and if both parties exchange public keys, and verify by some independent means that the key they have is the key of the party they wish to send to, it authenticates both.

Introduction to Networking Technologies

There are many different computing and networking technologies -- some available today, some just now emerging, some well-proven, some quite experimental. Understanding the computing dilemma more completely involves recognizing these technologies, especially since a single technology by itself seldom suffices; multiple technologies are usually necessary instead.
This document describes a sampling of technologies of various types using a tutorial approach. It compares the technologies available in the three major technology areas: application support, transport networks, and subnetworking. In addition, the applicability of these technologies within a particular situation is illustrated using a set of typical customer situations.
This document can be used by consultants and system designers to better understand, from a business and technical perspective, the options available to solve customers' networking problems.