to Cyreenik book index

Meeting the Challenges

The Business Challenge

When you asked a typical Novell employee in 1986 who the competition was, the answer was 3Com, Orchid, or one of the other LAN board makers of that era. When you asked Ray, the answer was “the minicomputer companies”, such as General Automation, General Systems, or even DEC, then the second-largest computer company after IBM.

To compete with those minicomputer companies, Novell products had to talk to their products. The minicomputers stored the valuable company information of businesses that Novell wanted to sell LANs to. If PC LANs were to take over market share from minicomputers and their terminals, the PCs had to be able to access the information on the minicomputers. From this realization came the “stack company” concept.

The Technology Challenge

Ray saw that the business challenge was taking on the minis. Craig’s vision was to meet that challenge by Novell becoming a stack company.

“Stack” was short for communications protocol stacks, another concept that Craig embraced. A stack is a series of rules, or protocols, that describe how the various components of any network system—mainframe, mini, or PC—are going to talk to each other. This communicating task is complex enough that a networking system will typically use between four and six protocols that coordinate with each other. Some examples of communications protocols are the Internet’s TCP/IP (Transmission Control Protocol/Internet Protocol), Novell’s IPX (Internetwork Packet Exchange), Apple’s AppleTalk, and the NCP (Network Control Program) part of IBM’s SNA (Systems Network Architecture).

What Craig meant was that Novell would develop the technology to allow workstations and file servers to talk in these “alien” protocols and allow file servers to support requests for file service given in these alien tongues. When this was accomplished, then NetWare LANs could accept minicomputers as just another device on the network and the minicomputer could accept workstations as just more terminals requesting service. Once that happened, the low cost of LAN networks compared to minicomputer networks would suck the lifeblood out of the minicomputer business.

When Craig did a survey of what it was going to take to have PCs communicate with minicomputer environments, he quickly encountered a “software fortress” situation; it was clear that minicomputer companies were going to move towards having PCs connect to their networks at a glacial pace. But Craig saw this as an opportunity.

He saw that PCs could support minicomputer protocols as easily, or more easily, than minicomputers could support PC protocols. He saw that the NetWare Everywhere concept could be extended to communications protocols, to the great benefit of Novell.

Early in 1985 Craig started outlining the environments that Novell would be moving to support.

Workstation Environments    Communications Protocols    Server/Disk Environments
MS-DOS                      IPX                         MS-DOS/Windows
Windows                     TCP/IP                      UNIX
Macintosh                   SNA/3270                    Macintosh
                            AppleTalk

The goal was to make NetWare the glue that would join these diverse environments together so that files and data could move among them. NetWare was to become a glue product.

The Evolution of Computer Networking and Protocol Stacks

Computer networking began in the ’60s, before there were even minicomputer companies. It was driven by the then-new concepts of timesharing and CRTs (cathode-ray tubes—terminals with keyboards and TV-like screens). Before timesharing, mainframe computers were batch processors—you fed them punch cards or a paper tape and they worked on just one job at a time. But if a collection of Teletype printers or CRTs were going to work with a mainframe computer, a communication network was needed to connect them.

As each mainframe manufacturer got into timesharing and CRTs, it came up with its own proprietary network design, and each was very custom. The most famous and enduring was IBM’s 3270 network design. (A trivia point: The 3270 was a fallout technology of the Space Race—IBM was contracted by NASA.)

As computer networks were being designed in the US, the international standards organization, ISO, was coming up with the OSI (Open Systems Interconnection) Model—something well known to network aficionados and one of the origins of the term “protocol stack”. It was this ISO OSI model that outlined the concept of communications protocol stacks as they are known today.

In the ’70s minicomputers were developed. These computers needed to network, too. But in the early ’70s some protocols were developed that were non-proprietary, or open. An example was TCP/IP, which was sponsored by a government agency, DARPA (Defense Advanced Research Projects Agency), and was available for any computer maker to use. Universities were the first civilian users to embrace TCP/IP. The minicomputer companies used a mix of proprietary protocols such as DECnet and open protocols such as TCP/IP.

As LANs developed in the ’80s, they were faced with the same proprietary-or-open networking choices. But by the ’80s even more open standards were available.

Ironically, the first Novell LANs did not need a communication protocol stack—with the S-Net system all the communication was point-to-point between the workstations and the server, and SuperSet cobbled together their own homebrew communications system. But as soon as plans to support other LAN boards solidified, NetWare needed to support a formal communication protocol that could work with many kinds of boards and work in a multi-point environment where many computers would share the same communications line.

SuperSet investigated several possibilities, including TCP/IP, and selected … their own version! It was based on a protocol developed at Xerox PARC (Palo Alto Research Center) called XNS (Xerox Network Services). The Novell version was called IPX (for Internetwork Packet Exchange). Unlike the minicomputer companies that preceded them, Novell decided to make IPX an open protocol, and as Novell prospered IPX prospered. For many years it was the protocol of choice for LAN-based multiplayer games along with many other applications.

The Evolution of Open Systems Software

In the world of mainframes in the ’50s, the concept of standardizing software was hardly recognized even within a company’s own line of products. It was a nice idea but hardly necessary.

When minicomputers burst on the scene in the ’70s, there was some trend towards standardizing but it was not strong—each minicomputer company was content to reside behind the walls of its own “software fortress”. And the mainframe companies now had considerable investments in their own software technologies so they weren’t about to open up, either.

When personal computers gained in popularity in the mid-’80s, something different happened: Customers demanded that third-party software run on their computers. The earliest version of that demand expressed itself at trade shows of the mid-’80s when customers would make the rounds of new computer offerings and ask, “Can this run Lotus 1-2-3 [the popular spreadsheet package] and Flight Simulator [a game]?” These packages were notoriously fussy about how the computer’s architecture was set up, so if a computer could run these it could run most of the software library available for PC-compatibles at the time.

In the personal computer arena open systems became a reality rather than lip service.

This customer pressure for open systems was steady, and in some areas it produced results even in the minicomputer world: the environments developing around UNIX and TCP/IP, and in particular the education environments, were becoming open.
