Hypertext publishing will aid evaluation in several ways. With a suitable interface, it will enable readers to transmit evaluations with a mouse-click rather than a letter, reducing labor costs by orders of magnitude. It will enable critics to attach their comments directly to a target work, making them available almost immediately (instead of months later) and potentially visible to all future readers (instead of just those who later happen across them). It will improve filtering, enabling readers to benefit from others' judgment, read a richer mix of material, and save considerable skimming-and-rejecting. More indirectly, higher-quality criticism will foster higher-quality review articles, enabling readers to survey fields with greater ease and confidence, retreating easily to simpler explanations when needed. Finally, it will enable authors to attach a 'retracted' annotation to obsolete views, making them disappear as seen through a typical filter. These advantages in evaluation seem great.
'Bulletin boards are boring'
Computer bulletin boards and mailing lists might seem like models for hypertext publishing - models often full of trashy material. All, however, lack one or more essential features, such as links (to enable effective criticism) or evaluation and filtering mechanisms (to make trashy material invisible). Further, they aren't archival, as journals are: writers know they are writing for the garbage can, and act accordingly. Nothing about a computer medium per se need degrade the quality of writing.
'Most writing will be trash'
Assume that 99.99% of published material will be trash. With charging for use, its storage needn't cost anyone but the author. With suitable database algorithms and organization, its presence needn't slow access to other material. With suitable filters (which display only favorably-rated material), its existence needn't be visible. Thus trash, however abundant, is irrelevant: what matters is what has value.
'Evaluation won't work'
This argument places a heavy burden on evaluation and filtering. How are they to work? A simple majority vote of readers seems a poor basis for evaluation (and how would people vote?). Expert evaluators would be hard to choose, harder to agree on, and likely to refuse the job. Distributed evaluation and filtering systems deserve serious research; there are many issues to consider, ranging from the game-theoretical analysis of vote-weighting schemes through the details of effective user interfaces.
Regarding the latter, opinion-capture in a window-based system might be simplified by providing several go-away boxes, with meanings ranging from 'what a waste of time!' through 'so-so' to 'that was great!'. A simple facility for tipping authors for pithy insights (and recording the amount as an evaluation) might be of value. In general, evaluations could be associated with their source (or with characteristics of their source) so that filters could weight different evaluators differently.
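The source-weighted filtering idea can be sketched in a few lines of Python (a hypothetical illustration only; the score scale, evaluator names, and threshold below are invented, not part of any proposed system):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    source: str   # who submitted the rating
    score: float  # -1.0 ('what a waste of time!') .. +1.0 ('that was great!')

def weighted_rating(evals, weights, default_weight=0.0):
    """Combine evaluations, weighting each evaluator as this reader chooses.

    Evaluators absent from `weights` get `default_weight`, so strangers can
    be ignored entirely (weight 0) or trusted a little.
    """
    norm = sum(weights.get(e.source, default_weight) for e in evals)
    if norm == 0:
        return 0.0
    return sum(weights.get(e.source, default_weight) * e.score
               for e in evals) / norm

def passes_filter(evals, weights, threshold=0.5):
    """Display an item only if its weighted rating clears the threshold."""
    return weighted_rating(evals, weights) >= threshold
```

A reader who trusts one critic twice as much as another simply assigns that critic twice the weight; two readers with different weight tables see different literatures through the same database.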
A good hypertext publishing medium need not begin with good filtering and evaluation mechanisms; it need only provide a medium for evolving good mechanisms. Are good mechanisms possible? We know that editors and reputable journals work, and their basic principle of reputation-and-recommendation seems extensible.
'Filtering will block novelty'
If filters pass only material with positive evaluations, how will new material ever be seen or evaluated? Material from some established authors might have high ratings a priori, but what about material from new authors, and the occasional good ideas from bad authors?
If the system supports easy passing of references via electronic mail, this problem evaporates. A bad or unknown author can pass references to friends and colleagues; if the work is good, they can give it a high rating, and pass references to their friends and colleagues. In a half-dozen or so steps with a fan-out ratio of ten, this process would reach millions. Before then, a good work would accumulate enough favorable ratings to pass a typical filter without a personal recommendation. Public reading and evaluation then takes off.
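The fan-out arithmetic behind this claim is easy to check (a back-of-the-envelope sketch; the fan-out ratio of ten and the six steps are the illustrative figures above, not measurements):

```python
def readers_reached(fan_out, steps):
    """Cumulative readers after `steps` rounds of reference-passing,
    assuming every recipient passes the reference to `fan_out` new readers."""
    return sum(fan_out ** k for k in range(1, steps + 1))

# Six steps at a fan-out of ten: 10 + 100 + ... + 1,000,000 = 1,111,110 readers.
```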
Experience with evaluation
Today, electronic mail carries considerable electronic trash; the problems of automatic evaluation and filtering are broadly similar to those in hypertext publishing. In their work on the Information Lens system for filtering and disseminating electronic messages, Thomas Malone et al. have made a substantial start toward developing the sorts of mechanisms that would be needed in a hypertext publishing system. They note that:
- many of the unsolved problems of natural language understanding can be avoided in intelligent information-sharing systems through the use of semistructured templates (or frames) for different types of messages. These templates can be used by senders to facilitate message composition. The same templates can then be used by recipients to facilitate construction of a set of rules for filtering and categorizing messages.
This approach (like many others they discuss) has application to hypertext. In particular, it suggests that fine-grained publication will have particular advantages when coupled with standard templates. (See also Lowe's work on structured, fine-grained argumentation systems.)
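The template idea can be illustrated with a toy sketch (Information Lens itself was a Lisp-based system; the message type, field names, and actions below are invented for illustration):

```python
# A message instantiates a semistructured template: a type plus named fields.
message = {
    "type": "meeting-announcement",
    "topic": "hypertext filtering",
    "speaker": "(unspecified)",
}

def make_rule(msg_type, action, **field_tests):
    """A filtering rule keyed to a template: if the message is of the given
    type and every named field passes its test, return the action."""
    def rule(msg):
        if msg.get("type") != msg_type:
            return None
        if all(test(msg.get(field, "")) for field, test in field_tests.items()):
            return action
        return None
    return rule

rules = [
    make_rule("meeting-announcement", "show",
              topic=lambda t: "hypertext" in t),
    make_rule("meeting-announcement", "file"),  # catch-all for this template
]

def classify(msg, rules, default="hold for review"):
    """Apply rules in order; the first matching rule decides the action."""
    for rule in rules:
        action = rule(msg)
        if action is not None:
            return action
    return default
```

Because sender and recipient share the template, the recipient's rules can test well-defined fields instead of attempting natural language understanding.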
'It would be too hard to build'
Designing and coding hypertext publishing systems will be a challenging task. Indeed, it would be easy to draw up specifications that would make this impossible. A sensible goal is to avoid really hard problems (such as ensuring tightly-coupled consistency across a loosely-coupled network, or designing a versioning mechanism with ideal semantics) while designing a database kernel that provides essential basic capabilities. Likewise, open-ended problems, such as evaluation, filtering, and user-interface design, can be left to the open-ended process of evolution, so long as the design provides support for the basic mechanisms. There are suggestions for such designs that seem implementable.
'If it were good, we'd have it now'
The idea of hypertext publishing is over two decades old, and several attempts at implementing it have been made - without the dramatic results anticipated here. One might argue that hypertext publishing is an already-tried, already-failed idea, that mere theory supports the idea of its great value, while solid experience shows its value to be quite limited. But in fact, though the idea may be old, it hasn't really been put to the test. Past software either hasn't been available in working form (Xanadu), or has been based on old technology and sold at a high price to a small community (NLS/Augment). No past system has been full, filtered, and public. (And none, of course, has been based on next year's computer, disk, and telecommunications technology.) In short, the implementation and use of this sort of system have not yet been tried, hence experience hasn't yet had its chance to contradict theory. The theory might even be true.
'If successful, it will be abused'
A successful hypertext publishing medium will surely be abused. A distributed system would be resistant to the 1984 problem of revised history, but lesser problems will remain. Some readers will use filtering to help them keep their minds closed, or to seek out their favorite brand of falsehood. Some will use the system for criminal purposes.
But everything since the rock has been abused by someone. And there is a presumption that the advantages of hypertext publishing for the expression, transmission, and evaluation of ideas will, on the whole, be a good thing - at least if one regards thought and communication as good things.
In an established hypertext publishing system, the operation of many minds on a shared, linked literature should give rise to several valuable emergent phenomena. Among these are the growth of intellectual communities and fields, the evolution and use of standard conceptual tools, the ease of seeing holes (and the lack of holes) in arguments, and the growth of clearly-summarized consensus. The following sketches these phenomena in a fictional context.
Forming intellectual communities
In a hypertext publishing medium, authors will typically sign their work and readers will often sign their evaluations and comments. Landmark writings will collect many evaluations. Those sharing an interest in a set of landmark writings form a potential intellectual community; any one of them can identify others by their signed publications, and can compile and distribute a mailing list to facilitate communication. Intellectual communities thus should form more easily.
For example, assume that researchers studying hypertext argumentation and those studying connectionist models (a.k.a. artificial neural systems, a.k.a. parallel distributed processing) are publishing in a hypertext medium. Someone might notice an overlap between these communities. A mailing to this group could establish a landmark publication asking: 'In a fine-grained argumentation structure, the various "supports" and "undermines" link-types formally resemble excitatory and inhibitory connections in connectionist models. In both fields, we seek to derive coherent patterns from conflicting data. Might connectionist network-relaxation algorithms be useful for deriving a robust, largely self-consistent consensus position from a network of conflicting argumentation relationships?' This initial publication provides a natural coordination point for attaching criticism and elaboration if the basic idea proves fruitful. Thus the community is born possessing something like a journal.
Building a field
The growth of a field involves the growth of a community of researchers who share a literature and a set of problems. Speed of publication, ease of referencing, ease of finding who has referenced what to make what points - all of these facilitate building a new field.
In our fictional example, colleagues circulate abstracts with references to the new body of work, and some researchers are intrigued. Since links have the functionality of a citation index (but always current), new work becomes visible from the older work it builds on; this attracts attention from the authors of that older work, and again, some researchers are intrigued. The nascent field of connectionist hypertext grows rapidly.
In an early move, someone publishes a landmark item which says, in effect, 'Link statements of unsolved problems in connectionist hypertext here'. Evaluation, filtering, and display mechanisms then make it easy to sort these links to show the problems the community presently regards as most important. Other landmarks accumulate links to discussions of particular subproblems, algorithms, and so forth. Some publications consist of classifications and evaluations of other ideas and approaches.
The first statement of the connectionist hypertext idea is soon amended: someone notes that people implicitly relate themselves to items by 'agree' and 'disagree' links, and can relate themselves to each other by 'respect' and 'don't-respect' links. These links can have varying weights, and again have a formal resemblance to excitatory and inhibitory connections in neural models. The revised proposal, then, is to take all these links among points and people, filter them in some way, and run a connectionist network-relaxation algorithm on the resulting system to identify coherent sets of thoughts and thinkers. Discussion soon centers on this new idea.
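As a toy illustration of what such a relaxation might look like (a sketch only - not an algorithm from the connectionist literature; the node names, damping constant, and tanh update rule are all invented for this example):

```python
import math

def relax(links, seeds, steps=100, damping=0.5):
    """Settle a signed network of statements and people.

    links: {(a, b): weight} - positive for agree/supports links,
           negative for disagree/undermines links; treated as symmetric.
    seeds: {node: activation} - opinions held fixed, in [-1, 1].
    Nodes that settle to the same sign form a roughly self-consistent
    cluster of thoughts and thinkers.
    """
    neighbors = {}
    for (a, b), w in links.items():
        neighbors.setdefault(a, []).append((b, w))
        neighbors.setdefault(b, []).append((a, w))
    nodes = set(neighbors) | set(seeds)
    act = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(steps):
        new = {}
        for n in nodes:
            if n in seeds:
                new[n] = seeds[n]  # clamped opinion
            else:
                total = sum(w * act[m] for m, w in neighbors.get(n, []))
                new[n] = (1 - damping) * act[n] + damping * math.tanh(total)
        act = new
    return act
```

For example, with `links = {("A", "B"): 1.0, ("B", "C"): -1.0}` and `seeds = {"A": 1.0}`, node B settles positive while C settles negative - two tiny opposing clusters derived from the link structure.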
Using standard conceptual tools
Having existed for some time and accumulated some of the best material from the paper literature, the publishing system holds a wealth of crisp statements of useful points, distinctions, theorems, schemata, fallacies, logical principles, general system principles, economic principles, definitions of terms, and other conceptual tools. These are linked to discussions of their truth or falsity and to paradigmatic examples of their use and misuse. The availability of these standard conceptual tools economizes intellectual effort.
Soon after the connectionist hypertext idea surfaces, someone applies the evolution schema (with no need to restate it and trot out the standard examples). The network settling process has aspects that meet the criteria for variation and for selection, but nothing in it corresponds to replication. Therefore, network settling is a non-evolutionary form of spontaneous order: this observation sinks an idea floated the day before.
One specialized conceptual tool is a taxonomy of connectionist models. A member of the connectionist hypertext group applies this to the proposal. The idea involves use of relaxation algorithms, but not learning algorithms: connection weights are set by people, not by algorithms operating on the network itself. Placing connectionist hypertext in this taxonomy clarifies its nature without redundant explanation and indicates relevant parts of the connectionist literature (categorized by that same taxonomy).
Several weeks after the proposal of connectionist hypertext, a member of the community notices a publication describing an unfamiliar network relaxation algorithm. Is it worth relating to connectionist hypertext, or is it old hat to the rest of the group? A quick check shows no links between the landmark publications on the algorithm and the landmark publication on algorithms for relaxing connectionist hypertext networks. This shows a hole in the literature and an opportunity to contribute (in the paper media, this would have required a tedious literature search, with results that might well be out of date).
The researcher who noted this absence wonders how fast the algorithm works on large networks - a point seemingly not covered in the literature. The researcher posts this question to the algorithm's authors, and to readers in general; this highlights another hole in the literature (or at least in its indexing) and hence another opportunity to contribute.
A skeptic about connectionist hypertext wonders about its strategic stability. If someone wanted to bias the results of the relaxation process, couldn't they play games with their statements so as to gain credibility and abuse it? The skeptic looks at the landmark compilation of problems - game playing isn't mentioned! A moment later it is, and the skeptic, having seen a hole in the list of problems, has enriched the field.
Each problem labels a hole and encourages work to fill it. Proposals to deal with the game-playing problem soon accumulate: they include using multiple algorithms for relaxing argumentation-networks and multiple algorithms for filtering and mapping the hypertext structures into a connectionist model, followed by a comparison of the different results. This is argued to make effective game-playing more difficult. A further proposal is to try to identify and screen out game-players' contributions as part of the filtering process. Finally, someone notes that this would be a problem only if the basic idea of connectionist hypertext has considerable merit - why else would game-playing be a problem?
Seeing a lack of holes
Weeks later, another skeptic examines the connectionist hypertext literature, and finds that the landmark compilation lists every major problem that the skeptic can think of. The skeptic's filter places these problems in roughly worst-problem-first order, but in going down the list, all the problems seem well in hand. The remaining objections to the basic idea aren't rated highly by the skeptic's filter; the answers to them are. Some of the wilder early proposals have been refuted, but the expected devastating criticism of the basic idea just isn't there. The remaining questions demand experimental test, and three groups report work underway.
The skeptic concludes that the idea should, at least provisionally, be regarded as sound. After several months of hypertext debate, the idea has been tested more thoroughly and visibly than it would have been in several years of papertext debate. The skeptic adds a bit of support to a call for increased research funding.
One result of all this activity is what amounts to a review article, developed incrementally, thoroughly critiqued, and regularly updated. It takes the form of a hierarchy of topics bottoming out in a hierarchy of result-summaries; disagreements appear as argumentation structures. When new results are accepted, their authors propose modifications to the summary-document; they become visible (to a typical reader) to the extent that they become accepted.
A free-lance writer on the system publishes a popularized account of the wonders of connectionist hypertext, but this account is more moderate in tone than one might expect. In a hypertext publication, readers expect links to the primary literature and to technical review articles. And knowing that readers will be able to see any criticism added by the actual researchers keeps the writer from speaking of scientists racing to develop a Giant Social Brain. In hypertext publishing, one must be careful of one's reputation, since so many readers' filters exclude work by unreliable authors.
A typical reader mostly browses popular articles and technical reviews, seldom following links deep into the argumentation network. But more accurate and current summaries let such readers benefit nonetheless. People specialize in different domains, and everyone benefits from the resulting division of intellectual labor.
(Note: Although the process just described and the resulting consensus are imaginary, the connectionist hypertext idea is to be taken as a serious proposal for a line of inquiry. On mentioning it to members of the connectionist community at a recent conference, I was told that connectionist groups are interested in the related idea of social models inspired by neural nets.)
Implementation of a hypertext publishing system is one goal among many, all competing for our funds and attention. How important is it? The following attempts to examine its value in a way that lends itself to crude, quantitative estimates. This is a difficult and risky enterprise, but we are likely to have a better idea of its value if we try to estimate it than if we don't.
We have some sense of the value of intellectual effort; trained people and innovative ideas are considered major assets. We also hear much about using resources efficiently and wisely. Though this is often applied to tangible resources (land, petroleum) it may be still more important to apply to the intangible resource of the human mind.
The human intellect is a limited resource
Human intellectual effort is, at any given time, a limited resource. There are a limited number of knowledgeable people in any field and a limited number of hours in a year. Increasing the number of people and the quality of their training is slow and difficult; increasing the number of hours is impossible.
This limited resource is wasted
Our limited intellectual resources are wasted in many ways. The history of the rise and fall of the (fictitious) square-wheel research program illustrates some familiar patterns.
Bad ideas adopted through ignorance of refutations. Transportation researchers, concerned with bumpy wheels, pursue work on the square wheel. They reason that it is superior to higher polygons, since it has fewer bumps; further, since its fewer corners probe the height of the ground at fewer points, it is less sensitive to typical bumps on a road. Bearing researchers are familiar with arguments that the decisive issue is bump magnitude rather than number, but the transportation research community remains ignorant of them. Work on the square wheel goes forward under a major defense contract, and major intellectual effort is misinvested.
Bad ideas maintained despite outsiders' refutations. Later, when financially and intellectually committed square-wheel researchers hear of the bump-magnitude issue, they ignore it in their publications and research proposals. Lacking links, critics can't easily make their arguments visible. With sufficient effort they might make their point, but they have no real incentive to try. Investment of intellectual effort in the square-wheel program continues, and the knowledgeable say, 'That's life.'
New thinking twisted by misinformation. Observing the major effort in square wheel development, others make plans for square-wheel vehicles. They focus formidable engineering skills on developing tough suspension systems and motors with extraordinarily high starting torque. An exploratory research effort begins on the more challenging triangular wheel, with its promise of eliminating a bump.
New ideas generated but not pursued. One researcher looks beyond polygons and considers the idea of a round wheel. But this doesn't fit with the researcher's other interests and seems like too small a point for a paper, and so is not published. The idea remains as a marginal note scrawled in a copy of the Journal of Earth/Vehicle Interfaces. Investment continues in what should have become an obsolete idea.
Good ideas neglected through ignorance. When the round wheel is finally proposed, few know whether to take it seriously. Most readers of the proposal have no way to know whether it makes sense, since it involves abstruse, interdisciplinary considerations of geometry, structures, and kinematics. Investment still continues in obsolete ideas.
Good ideas neglected because refutations are suspected. Mutterings are heard: "Round wheels - wouldn't they violate conservation of friction, or something? In any event, they sound too good to be true." Again, investment continues in obsolete ideas.
New thinking undermined by ignorance. The round wheel is at last accepted by a substantial community, and development is under way. The promise is clear, but many haven't heard of it. The failure of the square-wheel program to produce commercially viable results (despite its use for rough-terrain military vehicles) has left the transportation community wary of wheels. Considerable effort is invested in plans for sled-based systems for several years.
Old ideas redundantly pursued out of ignorance. In later years, this becomes proverbial, and is called 'reinventing the round wheel'.
Effort consumed by research and publication. All of the above ways of squandering intellectual effort could be avoided, given thorough-enough searches of a complete-enough literature. But in reality, the costs of search (which may be fruitless) are high enough that it often makes more sense to risk wasting effort on bad or redundant work.
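The trade-off can be stated as a small expected-cost comparison (the specific numbers below are invented for illustration; only the structure of the argument matters):

```python
def expected_costs(search_cost, p_find, redo_cost):
    """Expected cost of searching the literature first vs. just doing the work.

    search_cost: cost of the search, paid whether or not it succeeds
    p_find:      probability the search turns up the existing result
    redo_cost:   cost of (possibly redundantly) doing the work yourself
    """
    search_first = search_cost + (1 - p_find) * redo_cost
    just_redo = redo_cost
    return search_first, just_redo

# Papertext: searches are costly and often fruitless, so skipping them is rational.
# Hypertext: cheap, reliable search flips the decision.
```

With, say, a search costing 10 units that succeeds 30% of the time against 12 units of rework, searching first costs 18.4 units in expectation - worse than just redoing the work; cut the search cost to 1 unit at 90% reliability and the expected cost drops to 2.2.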
Hypertext can help economize it
Hypertext publishing won't eliminate wasted intellectual effort - it won't wipe out bad ideas and spread good ones instantly and effortlessly. But its many advantages in the expression, transmission, and evaluation of ideas - often reducing monetary and labor costs by orders of magnitude - can be expected to have a major positive effect.
The expected improvement in the efficiency of intellectual effort depends both on the degree of waste today and on the effectiveness of hypertext in reducing it. If one's standard of efficiency involves applying our best information to the most important problems, then one may well conclude that much of today's intellectual effort is wasted. If hypertext publishing can substantially reduce that waste (cutting it by tens of percent or more?), its benefits will be quantitatively huge. (And if its benefits will be huge, then the paucity of effort in the field today indicates that much effort in computer science is, relatively speaking, wasted; this, in turn, further increases one's estimate of the potential benefits of hypertext publishing, which indicates. . . )
Original web version prepared by Russell Whitaker.