1. We should bound ontologies by creating them to support a specific set of inter-agent interactions.
Ontologies are needed because independent agents (human, software, or both) must share knowledge. As Tom Gruber and Nicola Guarino have argued, ontologies are representations of the agents' agreements about the set of concepts that underlie the information to be shared. I will argue that it is useful to think of ontologies in terms of the inter-agent interactions they need to support. Ontologies must specify enough information about shared concepts to enable the agents to behave appropriately when they receive the designated interactions. They need to be broad enough to include all of the concepts in the set of interactions and deep enough to clearly distinguish the behaviors that should occur when the interactions are received. I think that we may be able to create a design methodology for ontologies built around specifying them in terms of the interactions they must support.
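To make this concrete, here is a minimal sketch in Python of what an interaction-bounded ontology might look like. The names (Interaction, Ontology, missing_concepts) are hypothetical illustrations of the idea, not a proposal for a specific formalism; the point is only that "broad enough" can be made mechanical: every concept an interaction uses must be among the agreed concepts.

    # Hypothetical sketch: an ontology bounded by the interactions it must support.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Interaction:
        name: str
        concepts_used: frozenset   # concepts an agent must grasp to respond appropriately

    @dataclass
    class Ontology:
        concepts: set = field(default_factory=set)
        interactions: list = field(default_factory=list)

        def missing_concepts(self):
            # Concepts some interaction needs but the agents have not yet agreed on.
            needed = set()
            for i in self.interactions:
                needed |= i.concepts_used
            return needed - self.concepts

    onto = Ontology(concepts={"meeting", "participant"})
    onto.interactions.append(
        Interaction("request-meeting", frozenset({"meeting", "participant", "time-slot"})))
    print(onto.missing_concepts())   # {'time-slot'}: the ontology is not yet broad enough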
2. We should design ontologies to be useful to agent developers.
I will argue that, by analogy to standards, ontologies become useful when interfaces are built to handle the interactions they support. Following the analogy, ontologies must be expressed in a form that allows agent developers (in particular, agent interface developers) to understand how to make their agents behave when they receive an interaction supported by the ontology.
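Again as a hedged sketch (the names are hypothetical, not an existing agent framework): from the developer's point of view, "building an interface to an ontology" might amount to registering, for each supported interaction, the behavior the agent should exhibit when that interaction arrives.

    # Hypothetical sketch of an agent interface built against an ontology's interactions.
    class AgentInterface:
        def __init__(self, supported_interactions):
            self.supported = set(supported_interactions)
            self.handlers = {}   # interaction name -> this agent's behavior

        def on(self, interaction, handler):
            if interaction not in self.supported:
                raise ValueError(f"'{interaction}' is not in the ontology's interaction set")
            self.handlers[interaction] = handler

        def receive(self, interaction, payload):
            # The ontology fixes *what* must be handled; the handler encodes *how*
            # this particular agent behaves.
            return self.handlers[interaction](payload)

    iface = AgentInterface({"request-meeting", "cancel-meeting"})
    iface.on("request-meeting",
             lambda p: f"scheduling '{p['meeting']}' at {p['time-slot']}")
    print(iface.receive("request-meeting",
                        {"meeting": "design review", "time-slot": "Tue 10:00"}))

The division of labor is the same as with a standard: the ontology specifies the interactions and the expected behavior, while each developer keeps the complexity of realizing that behavior inside their own agent.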
3. We need research into how to adequately support the evolution of ontologies after their initial implementation.
Any implemented ontology is just a starting point: it is impossible to foresee all of the ramifications of the initial set of agreements. As agents begin to interact, there will inevitably be some shared concepts that need to be further elaborated in order to really express how they are being used. As technology changes, new meanings for shared concepts will develop as existing concepts are used in new ways. Supporting this process requires research progress in at least two areas, discussed below.
I would like to emphasize the process nature of ontology creation. In particular, I compare ontology creation to standards creation. Standards arise because independently developed elements (hardware or software) need to interact with each other. A decision must be made on exactly what to standardize, i.e., what must be agreed upon and what can remain unshared. This decision is the result of an often lengthy design process, with both technical and political constraints. (De facto standards are simply preexisting designs that are adopted before other participants enter the discussion.) Usually the design is intended to give the standard a "lowest common denominator" character: there is an attempt to keep complexity, or at least detail, in the individual components rather than in the standard. Almost always the standard is subdivided into levels or otherwise segmented in order to simplify the design problem or reduce the number of stakeholders. The standard is then implemented when the participants create interfaces to their elements that cause them to behave in a specified way when a specified interaction occurs.
So: ontology creation follows the same path. An ontology captures the agreements that independently developed agents need in order to interact; it must be scoped by deciding what is agreed upon and what remains unshared; and it is implemented when agent developers build interfaces that make their agents behave as specified when the designated interactions occur.
Of course, standards have negative aspects as well. First, standards take a long time to create. Second, they often end up acting as a barrier to necessary change. The first problem arises because it is hard to get agreement and even harder to specify exactly what has been agreed upon. The second problem arises because it is impossible to foresee all of the ramifications of the initial set of agreements. As components begin to interact, there will inevitably be some shared concepts that need to be further elaborated in order to really express how they are being used. As technology changes, new meanings for shared concepts will develop. For example, use of the Internet for transmitting video and speech data has changed the concept of "packet". Packets containing multimedia data must be treated differently by the interacting components of the Internet system (viz., they must be delivered within certain time bounds and in certain sequences). The transport layer of the Internet protocol suite must be changed to cope with this expanded meaning.
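A hypothetical sketch of this example may help: the shared concept "packet" is elaborated with delivery constraints, and an interacting component must now behave differently when it encounters the expanded concept. The types and fields below are illustrative only, not the actual Internet protocol definitions.

    # Hypothetical sketch: the shared concept "packet" acquires an expanded meaning.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Packet:                  # the concept as originally agreed
        payload: bytes

    @dataclass
    class MediaPacket(Packet):     # the elaborated concept
        sequence_number: int = 0
        deadline_ms: Optional[int] = None   # deliver within this bound or not at all

    def forward(packet, elapsed_ms):
        # An interacting component must now behave differently for media packets.
        if isinstance(packet, MediaPacket) and packet.deadline_ms is not None:
            if elapsed_ms > packet.deadline_ms:
                return "drop"      # late multimedia data is useless to the receiver
        return "deliver"

    print(forward(Packet(b"mail"), elapsed_ms=500))                          # deliver
    print(forward(MediaPacket(b"frame", 7, deadline_ms=40), elapsed_ms=90))  # drop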
I believe that we have the potential to greatly ameliorate the negative aspects of standards. In terms of the first problem, while we can't help with getting agreement, we should be able to represent agreement as it is being developed. In terms of the second problem, we should be able to automate the process of incorporating new meanings. But to deliver on this potential will require research progress.
We must also do research on treating ontologies as ever-evolving bodies of knowledge. The key is to allow ontologies to grow, but to preserve their underpinning set of agreements -- and to know when (and how) these agreements need to be changed. Usually evolution will come through the need to incorporate recently understood ramifications. The research problem is to automatically determine when these ramifications require agreements to change (i.e., when they go beyond specialization of the existing representations) and then to demonstrate what changes are required to accommodate the new ramifications.
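One rough, hypothetical way to operationalize "going beyond specialization": classify each proposed change to the ontology, treating additions and narrowings as safe refinements, and anything that removes, redefines, or relaxes an agreed concept as a trigger for renegotiation. The change vocabulary below is invented for illustration; the real research problem is detecting such changes automatically.

    # Hypothetical sketch: flag ontology changes that break existing agreements.
    SPECIALIZATIONS = {"add-subclass", "add-attribute", "narrow-constraint"}

    def review_changes(proposed):
        # Partition proposed changes into safe refinements and changes that go
        # beyond specialization, so the underlying agreements must be renegotiated.
        safe, renegotiate = [], []
        for kind, concept in proposed:
            (safe if kind in SPECIALIZATIONS else renegotiate).append((kind, concept))
        return safe, renegotiate

    safe, renegotiate = review_changes([
        ("add-subclass", "media-packet"),   # refines "packet"; agreements intact
        ("redefine-concept", "packet"),     # alters what every agent agreed "packet" means
    ])
    print("requires renegotiation:", renegotiate)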
Another important kind of meaning evolution -- indeed the Holy Grail of knowledge base design -- is using an existing set of concepts to support an unplanned set of tasks. Here the issue is often in understanding what elaboration will be required to incorporate the new uses for the ontology. This is a major research issue, but I think that a possible way to bound the problem is to define it in terms of an enlarged set of interactions that must be supported. That is, if the ontology is originally defined in terms of the set of inter-agent interactions it must support, it might be possible to definitively determine the differences introduced by a new set of interactions, and how the ontology must be changed to support them.
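Continuing the interaction-bounded view, here is a hedged sketch of how the differences introduced by an enlarged interaction set might be computed mechanically. The function and its inputs are hypothetical; a real methodology would need much richer descriptions of interactions than bare concept sets.

    # Hypothetical sketch: bound the "unplanned tasks" problem by diffing interaction sets.
    def interaction_diff(ontology_concepts, old_interactions, new_interactions):
        # Each interaction is (name, set of concepts it uses). Returns the concepts
        # the enlarged set demands but the ontology lacks, and the existing concepts
        # that the new interactions put to new (possibly elaborated) use.
        old_needed = set().union(*[c for _, c in old_interactions])
        new_needed = set().union(*[c for _, c in new_interactions])
        missing = new_needed - ontology_concepts
        elaborate = (new_needed & ontology_concepts) - old_needed
        return missing, elaborate

    old = [("request-meeting", {"meeting", "participant"})]
    new = old + [("negotiate-time", {"meeting", "time-slot", "preference"})]
    print(interaction_diff({"meeting", "participant", "time-slot"}, old, new))
    # ({'preference'}, {'time-slot'}): one new agreement needed, one concept to elaborate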