Wednesday, 14 February 2007

Local Area Network Concepts and Products: Routers and Gateways

Local Area Network Concepts and Products is a set of four reference books for those looking for conceptual and product-specific information in the LAN environment. They provide a technical introduction to the various types of IBM local area network architectures and product capabilities.
The four volumes are as follows:
SG24-4753-00 - LAN Architecture
SG24-4754-00 - LAN Adapters, Hubs and ATM
SG24-4755-00 - Routers and Gateways
SG24-4756-00 - LAN Operating Systems and Management
These redbooks complement the reference material available for the products discussed. Much of the information detailed in these books is available through current redbooks and IBM sales and reference manuals. It is therefore assumed that the reader will refer to these sources for more in-depth information if required.
These documents are intended for customers, IBM technical professionals, services specialists, marketing specialists, and marketing representatives working in networking and, in particular, in local area network environments.
Details on installation and performance of particular products will not be included in these books, as this information is available from other sources.
Some knowledge of local area networks, as well as an awareness of the rapidly changing intelligent workstation environment, is assumed.

Linux IPv6 HOWTO

By Peter Bieringer
IPv6 is a new layer 3 protocol (see linuxports/howto/intro_to_networking/ISO - OSI Model) which will supersede IPv4 (also known as IP). IPv4 was designed a long time ago (RFC 760 / Internet Protocol, from January 1980), and since its inception there have been many requests for more addresses and enhanced capabilities. The latest RFC is RFC 2460 / Internet Protocol Version 6 Specification. The major changes in IPv6 are the redesign of the header, including the increase of the address size from 32 bits to 128 bits. Because layer 3 is responsible for end-to-end packet transport using address-based routing, each packet must carry the new IPv6 source and destination addresses, just as IPv4 packets carry IPv4 addresses.
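To make the address-size change concrete, the short Python 3 sketch below (my illustration, not part of the HOWTO; it relies only on the standard ipaddress module, and the two addresses are generic documentation prefixes) shows that an IPv4 address is 32 bits wide while an IPv6 address is 128 bits wide:

    import ipaddress

    # Parse one example address of each family (documentation prefixes).
    v4 = ipaddress.ip_address("192.0.2.1")      # IPv4Address
    v6 = ipaddress.ip_address("2001:db8::1")    # IPv6Address

    # max_prefixlen is the full address width in bits for each family.
    print(v4.max_prefixlen, "bits")             # 32 bits
    print(v6.max_prefixlen, "bits")             # 128 bits

    # The packed form confirms the raw sizes: 4 bytes versus 16 bytes.
    print(len(v4.packed), "bytes vs", len(v6.packed), "bytes")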
Because of a lack of manpower, the IPv6 implementation in the kernel was unable to keep up with the drafts under discussion or the newly released RFCs. In October 2000, a project called USAGI was started in Japan; its aim was to implement all missing or outdated IPv6 support in Linux. It tracks the current IPv6 implementation in FreeBSD made by the KAME project. From time to time it creates snapshots against the current vanilla Linux kernel sources.
USAGI is now making use of the new Linux kernel development series 2.5.x to insert all of its current extensions into this development release. Hopefully, the 2.6.x kernel series will contain a true and up-to-date IPv6 implementation.

Internetworking Technology Handbook

What Is an Internetwork?
An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks.
History of Internetworking

The first networks were time-sharing networks that used mainframes and attached terminals. Such environments were implemented by both IBM's Systems Network Architecture (SNA) and Digital's network architecture.
Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively small geographical area to exchange files and messages, as well as access shared resources such as file servers and printers.
Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links, and others. New methods of connecting dispersed LANs are appearing every day.
Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate at very high speeds and support such high-bandwidth applications as multimedia and videoconferencing.
Internetworking evolved as a solution to three key problems: isolated LANs, duplication of resources, and a lack of network management. Isolated LANs made electronic communication between different offices or departments impossible. Duplication of resources meant that the same hardware and software had to be supplied to each office or department, as did separate support staff. Finally, the lack of network management meant that no centralized method of managing and troubleshooting networks existed.

Realizing the Information Future - The Internet and Beyond

The potential for realizing a national information networking marketplace that can enrich people's economic, social, and political lives has recently been unlocked through the convergence of three developments:
  • The federal government's promotion of the National Information Infrastructure through an administration initiative and supporting congressional actions;
  • The runaway growth of the Internet, an electronic network complex developed initially for and by the research community; and
  • The recognition by entertainment, telephone, and cable TV companies of the vast commercial potential in a national information infrastructure.

A national information infrastructure (NII) can provide a seamless web of interconnected, interoperable information networks, computers, databases, and consumer electronics that will eventually link homes, workplaces, and public institutions together. It can embrace virtually all modes of information generation, transport, and use. The potential benefits can be glimpsed in the experiences to date of the research and education communities, where access through the Internet to high-speed networks has begun to radically change the way researchers work, educators teach, and students learn.

To a large extent, the NII will be a transformation and extension of today's computing and communications infrastructure (including, for example, the Internet, telephone, cable, cellular, data, and broadcast networks). Trends in each of these component areas are already bringing about a next-generation information infrastructure. Yet the outcome of these trends is far from certain; the nature of the NII that will develop is malleable. Choices will be made in industry and government, beginning with investments in the underlying physical infrastructure. Those choices will affect and be affected by many institutions and segments of society. They will determine the extent and distribution of the commercial and societal rewards to this country for investments in infrastructure-related technology, in which the United States is currently the world leader.

1994 is a critical juncture in our evolution to a national information infrastructure. Funding arrangements and management responsibilities are being defined (beginning with shifts in NSF funding for the Internet), commercial service providers are playing an increasingly significant role, and nonacademic use of the Internet is growing rapidly. Meeting the challenge of "wiring up" the nation will depend on our ability not only to define the purposes that the NII is intended to serve, but also to ensure that the critical technical issues are considered and that the appropriate enabling physical infrastructure is put in place.


PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing

The PVM project began in the summer of 1989 at Oak Ridge National Laboratory. The prototype system, PVM 1.0, was constructed by Vaidy Sunderam and Al Geist; this version of the system was used internally at the Lab and was not released to the outside. Version 2 of PVM was written at the University of Tennessee and released in March 1991. During the following year, PVM began to be used in many scientific applications. After user feedback and a number of changes (PVM 2.1 - 2.4), a complete rewrite was undertaken, and version 3 was completed in February 1993. It is PVM version 3.3 that we describe in this book (and refer to simply as PVM). The PVM software has been distributed freely and is being used in computational applications around the world.
To successfully use this book, one should be experienced with common programming techniques and understand some basic parallel processing concepts. In particular, this guide assumes that the user knows how to write, execute, and debug Fortran or C programs and is familiar with Unix.

Teach Yourself THE INTERNET in 24 Hours

By Noel Estabrook
Contributing Author: Bill Vernon
Here is a quick preview of what you'll find in this book:
Part I, "The Basics," takes you through some of the things you'll need to know before you start. You'll get a clear explanation of what the Internet is really like, learn how you can actually use the Internet in real life, find tips on Internet Service Providers, and receive an introduction to the World Wide Web.
Part II, "E-Mail: The Great Communicator," teaches you all you'll need to know about e-mail. Learn basics like reading and sending e-mail, as well as more advanced functions such as attaching documents, creating aliases, and more. You'll also find out all about listservs and how to use them to your advantage.
Part III, "News and Real-Time Communication," shows you many of the things that make the Internet an outstanding tool for communication. You'll learn about newsgroups and how to communicate with thousands of people by clicking your mouse. You'll also learn how to carry on live, real-time conversations over the Internet, as well as get information on some of the hottest new technology such as Net Phones.
Part IV, "The World Wide Web," shows you what is now the most exciting part of the Internet. Learn which browser is best for you, get the basics of Web navigation, and find out how to help your browser with plug-ins. Finally, you'll discover the most powerful tool on the Web today--the search engine--and more importantly, how to use it.
Part V, "Finding Information on the Net," explains some of the other useful functions of the Net. You'll learn how to transfer files and use Gopher. You'll also learn how to access libraries and other resources by using Telnet. Finally, this section will show you how to use the Internet to locate people, places, and things that might not be available directly through the Web.
Part VI, "Getting the Most Out of the Internet," shows you practical ways to use the Internet. You can find resources and techniques on how to get information about entertainment, education, and business. Finally, learn how to use the Internet just to have fun.

Client/Server Computing Second Edition

In a competitive world it is necessary for organizations to take advantage of every opportunity to reduce cost, improve quality, and provide service. Most organizations today recognize the need to be market driven, to be competitive, and to demonstrate added value.
A strategy being adopted by many organizations is to flatten the management hierarchy. With the elimination of layers of middle management, the remaining individuals must be empowered to make the strategy successful. Information to support rational decision making must be made available to these individuals. Information technology (IT) is an effective vehicle to support the implementation of this strategy; however, it is frequently not used effectively. The client/server model provides power to the desktop, with information available to support the decision-making process and enable decision-making authority.
The Gartner Group, a team of computer industry analysts, noted a widening chasm between user expectations and the ability of information systems (IS) organizations to fulfill them. The gap has been fueled by dramatic increases in end-user comfort with technology (mainly because of prevalent PC literacy); continuous cost declines in pivotal hardware technologies; escalation in highly publicized vendor promises; increasing time delays between vendor-promised releases and product delivery (that is, "vaporware"); and emergence of the graphical user interface (GUI) as the perceived solution to all computing problems.
In this book you will see that client/server computing is the technology capable of bridging this chasm. This technology, particularly when integrated into the normal business process, can take advantage of this new literacy, cost-effective technology, and GUI friendliness. In conjunction with a well-architected systems development environment (SDE), it is possible for client/server computing to use the technology of today and be positioned to take advantage of vendor promises as they become real.
The amount of change in computer processing-related technology since the introduction of the IBM PC is equivalent to all the change that occurred during the previous history of computer technology. We expect the rate of change in the next few years to be even greater. The increasing rate of change is primarily attributable to the coincidence of four events: a dramatic reduction in the cost of processing hardware, a significant increase in installed and available processing power, the introduction of widely adopted software standards, and the use of object-oriented development techniques. The complexity inherent in the pervasiveness of these changes has prevented most business and government organizations from taking full advantage of the potential to be more competitive through improved quality, increased service, reduced costs, and higher profits. Corporate IS organizations, with experience based on previous technologies, are often less successful than user groups in putting the new technologies to good use.
Taking advantage of computer technology innovation is one of the most effective ways to achieve a competitive advantage and demonstrate value in the marketplace. Technology can be used to improve service by quickly obtaining the information necessary to make decisions and to act to resolve problems. Technology can also be used to reduce costs of repetitive processes and to improve quality through consistent application of those processes. The use of workstation technology implemented as part of the business process and integrated with an organization's existing assets provides a practical means to achieve competitive advantage and to demonstrate value.
Computer hardware continues its historical trend toward smaller, faster, and lower-cost systems. Competitive pressures force organizations to reengineer their business processes for cost and service efficiencies. Computer technology trends prove to leading organizations that the application of technology is the key to successful reengineering of business processes.
Unfortunately, we are not seeing corresponding improvements in systems development. Applications developed by in-house computer professionals seem to get larger, run more slowly, and cost more to operate. Existing systems consume all available IS resources for maintenance and enhancements. As personal desktop environments lead users to greater familiarity with a GUI, corporate IS departments continue to ignore this technology. The ease of use and standard look and feel provided by GUIs in personal productivity applications at the desktop are creating expectations in the user community. When these expectations are not met, IS departments are considered irrelevant by their users.
Beyond GUI, multimedia technologies are using workstation power to re-present information through the use of image, video, sound, and graphics. These representations relate directly to the human brain's ability to extract information from images far more effectively than from lists of facts.
Accessing information CAN be as easy as tapping an electrical power utility. What is required is the will among developers to build the skills to take advantage of the opportunity offered by client/server computing.
This book shows how organizations can continue to gain value from their existing technology investments while using the special capabilities that new technologies offer. The book demonstrates how to architect SDEs and create solutions that are solidly based on evolving technologies. New systems can be built to work effectively with today's capabilities and at the same time can be based on a technical architecture that will allow them to evolve and to take advantage of future technologies.
For the near future, client/server solutions will rely on existing minicomputer and mainframe technologies to support applications already in use, and also to provide shared access to enterprise data, connectivity, and security services. To use existing investments and new technologies effectively, we must understand how to integrate these into our new applications. Only the appropriate application of standards-based technologies within a designed architecture will enable this to happen.
It will not happen by accident.
Patrick N. Smith with Steven L. Guengerich