Distributed computing is here to stay. But as the benefits of client/server computing become increasingly apparent, so too do the problems it presents. Chief among these problems is the need for a common infrastructure, a common set of distributed services on which a variety of client/server applications can be built. For a single application or even a small group of applications, it's reasonable to build a unique infrastructure for each one. Creating a client/server enterprise, however, an organization built entirely around distributed applications, requires a common foundation for those applications. And since most environments today include technologies from a variety of vendors, that infrastructure must work well on all kinds of systems.
Providing this common, multi-vendor infrastructure is the goal of the Open Software Foundation's Distributed Computing Environment (DCE). DCE provides the key services required for supporting distributed applications, including remote procedure call (RPC), directory services, and security services.
The services provided by DCE are available today from a broad range of vendors on virtually every major system. Many organizations are deploying DCE as the foundation for a client/server enterprise.
As it stands today, DCE provides a solid set of services for distributed applications. Technology doesn't stand still, however, and DCE must adapt to a changing environment. One of the most important changes in the last few years is the increasing use of object technology. Objects present a new paradigm for software development, one that offers a number of significant benefits. Increasingly, support for objects is seen as an essential part of any software development environment.
The issue, then, is how DCE should evolve into this new world. It turns out that DCE's designers were not deaf to the music of objects, and so there is already some built-in support for object technology. One route to enhancing DCE, then, is to add even better support by building on what's already there. Another option is to build some kind of object-oriented environment on top of DCE, something that makes use of what DCE provides but that also provides an extra layer of support for distributed object computing. There are two major efforts underway that, in one way or another, have the potential to do this.
The first is the set of technologies that comprise the Common Object Request Broker Architecture (CORBA). Produced by the Object Management Group (OMG), CORBA is a set of standards for interfaces to various distributed services. Much like DCE, the CORBA specifications aim at defining a multi-vendor solution for building a common distributed infrastructure. Unlike DCE, however, CORBA is explicitly object-oriented, and the various CORBA standards define only interfaces, not actual implementations of those interfaces. This means that a vendor providing a CORBA-based product may (but is not required to) support those interfaces using DCE. Running CORBA over DCE, then, is one possible way to add support for objects to a DCE environment. Today, for example, CORBA-based products from IBM, Digital, and Hewlett Packard can run effectively over DCE, taking advantage of DCE's existing services.
The second of the two major efforts underway to support distributed objects is Microsoft's Network OLE. Although not yet available, Network OLE will use DCE's RPC and provide some support for other DCE technologies as well. Unlike CORBA (and unlike DCE itself), Network OLE is purely a single-vendor effort, and how well it will actually interoperate with DCE remains to be seen. Nevertheless, the potential exists for Network OLE to be an effective path to distributed object support in a DCE environment.
Adding support for objects is an essential part of DCE's ongoing evolution. And as just described, there are several possible paths to this goal. The goal of this paper is to describe the major options in this area and to elucidate the strengths and weaknesses of each.
Describing the options for adding object-orientation to DCE first requires a common understanding of what's meant by this often used (and often abused) phrase. While opinions differ, it's possible to describe at least the majority view of the essentials of the technology, beginning with the word object.
To most, an object is a defined set of data and behavior. An active object's data is commonly implemented as one or more variables in a running program. When the object is not active, its data may be stored (or, in object jargon, made persistent) in a file or a relational database or even a database specifically designed to store objects. An object's behavior is typically expressed through its support for one or more methods. These methods commonly (but not always) work with the object's data. A method is usually implemented as a procedure or subroutine, although other possibilities exist. One fairly concrete and quite common way to picture an object, then, is as a set of variables bundled together with a particular group of procedures.
In thinking about objects, it's useful to distinguish between an object class and an object instance. A class describes some group of objects, all of which support the same methods and the same type of data. Every object is of some class, and knowing an object's class generally allows one to determine the object's capabilities. An object instance is exactly what the name suggests: a specific instance of some class. For example, one could imagine a class of objects representing bank accounts and a specific instance of this class representing, say, your bank account.
In the minds of most, just grouping data and procedures together isn't enough to qualify as object-oriented. Most people would argue that to really be worthy of the name, an object-oriented technology must have three additional characteristics: encapsulation, inheritance, and polymorphism.
Encapsulation means that an object's data isn't accessible by the object's users except via its methods. For example, an object-oriented programming language might produce a compiler error if the user of an object tries to directly access that object's internal variables. The state of those variables, and thus of the object's data, can only be accessed or modified using the object's methods (interestingly, C++, the most widely used object-oriented programming language, has ways around this restriction; pragmatic concerns can sometimes outweigh dogma). Encapsulation can greatly enhance software modularity, leading to code that's both easier to maintain and more likely to be correct.
Inheritance, the second characteristic of object-orientation, is conceptually no more complex than encapsulation, which is to say that it's a simple idea. The notion is this: given an object, it's possible to create a new object that automatically includes some or all of the functionality of its parent. In the same way that sons, through no effort on their part, can inherit male pattern baldness from their parents, so too can objects automatically inherit various things from their parents. Objects can also typically override some parts of this inherited behavior if desired, something that's broadly analogous to the bald son buying a hairpiece.
The great advantage of inheritance, and the primary reason that it's an attractive attribute for a technology to have, is that it provides an effective mechanism for reuse. Reusing something that already exists can mean faster development, less testing, and more reliable code. To think a little more about what is actually getting reused, it's useful to distinguish between two different kinds of inheritance: implementation inheritance and interface inheritance.
In implementation inheritance, an object inherits the actual code of its parent. The child object need not reimplement this code, but can instead directly use and build upon the code that's in the parent. In interface inheritance, on the other hand, the child inherits nothing more than the definition of its parent's interface, e.g., the names and parameter lists of the parent's methods. While using the same definition guarantees some commonality between parent and child and is certainly a useful thing, this kind of inheritance requires the child to reimplement all of the methods in the inherited interface (or at least to explicitly call the parent's methods when needed). With interface inheritance alone, it's not possible to automatically reuse the parent's existing code.
The third characteristic of object-orientation is also in some ways the most complex to describe: polymorphism. Put simply, polymorphism means that the user of two different kinds of objects can, in some ways at least, treat them as if they were the same. Think, for example, of two different objects, one representing your checking account and another your savings account. These two objects might present very similar or even identical interfaces. Each, for instance, might offer methods to credit a deposit and make a withdrawal. The way those methods are implemented, however, may differ significantly. The savings account object's withdrawal method, for example, would probably just check the amount to be withdrawn against the account balance. If the amount you've requested doesn't exceed the balance, the withdrawal succeeds; otherwise, it fails. Checking accounts, on the other hand, often grant an automatic loan to protect against overdrafts. The withdrawal method in the checking account object, then, might check your requested withdrawal amount against the account balance plus the current amount available for an automatic loan. If the requested amount exceeds the account balance but isn't more than the balance plus the available loan amount, the request succeeds. To the user of these two objects, the withdrawal method looks the same. The differences, important as they are, are hidden from view.
This simple example illustrates the key benefit of polymorphism. By allowing a user to treat different things as if they were the same, it reduces complexity. Objects that share the same interface or even some of the same methods can use polymorphism to keep their users' lives as simple as possible. And since polymorphism depends on different kinds of objects sharing some or all of an interface, it has an obvious link with inheritance. An easy way to implement polymorphism is to inherit the interface and/or implementation of another object, then customize it in some way. Although inheritance is not required to make polymorphism work, it can be an effective tool in creating it.
The notion of an object as a collection of data and methods, together with the three key concepts of encapsulation, inheritance, and polymorphism, form the foundation of object-orientation. They are by no means the only relevant ideas, however. Object theologians can debate forever points like whether an object should have only a single interface, as in CORBA, or multiple distinct interfaces, as in Microsoft's OLE. There are also many contentious issues concerning object life cycle (i.e., how individual objects should be created and destroyed), how specific object instances should be identified, and more. Still, the benefits that can be derived from object technologies are beyond debate. What's still quite debatable is how to get these benefits in a DCE environment.
It's easy to argue that DCE already includes a large fraction of what's required to qualify as object-oriented. First of all, a DCE server can be thought of as a grouping together of data and methods (called remote procedures in DCE). This is quite object-like, although perhaps on a somewhat larger scale than the average object.
Second, DCE already supports encapsulation and polymorphism, two of the three required characteristics for object-orientation listed above. To understand encapsulation in DCE, think about how one defines the relationship between a client and server. In DCE, the developer must specify that relationship using DCE's Interface Definition Language (IDL). An IDL interface explicitly spells out the remote procedures that the client can invoke in the server. Since the only way a client can access the server's data is through these procedures, the server's data is effectively encapsulated.
DCE also supports polymorphism. A DCE server can allow two or more different sets of remote procedure implementations to be accessed through the same interface. A client identifies which of these implementations it wishes to invoke (i.e., which object's methods it wishes to execute) by specifying a universal unique identifier (UUID) for a particular object. To the client, each of these objects presents the same interface, but in fact, the "methods" in those objects may well be implemented differently. This is polymorphism, and it was built into DCE by its original designers.
Encapsulation and polymorphism are present in DCE today, but there is no support for inheritance of any kind (although this will change; see below). Still, a large fraction of what's required to be object-oriented is already there. This suggests that modifying DCE to directly support objects ought not to be a Herculean task.
To many, though, the sine qua non of object orientation is the ability to write code in an object-oriented language like C++. As long as the standard interface to DCE is defined only in C, some say, DCE can never be seen as truly object-oriented. There certainly is some merit to this point of view; after all, while it's possible to develop object-oriented applications in C or even in assembly language, it's much easier and far more natural to do it in an object-oriented language. Toward this end, several different attempts have been made to define object-oriented add-ons to DCE, usually in the form of C++ class libraries. The most widely used of these is a Hewlett Packard product called OODCE.
OODCE exposes DCE's intrinsic object model through a number of C++ classes. These classes allow more natural access to DCE's object-oriented features, and they also make the sometimes complex DCE interfaces much easier to use. For people who want to write C++ applications in a DCE environment, OODCE offers an attractive solution.
DCE itself is a vendor-neutral standard, however, while OODCE is a proprietary product. Although it could potentially run on more than just HP systems, OODCE is nonetheless controlled by a single vendor. To add objects to DCE in a vendor-neutral way requires multi-vendor agreement under the aegis of OSF. Doing this, at least in part, is one goal of the next major DCE release, DCE 1.2.
DCE 1.2 will include an enhanced version of IDL, one that supports a number of object-oriented features (in fact, inspiration for many of these new features came from CORBA and OLE). Among them are the ability to pass references to objects as parameters, interface inheritance, and support for the generation of C++ header files. It's also possible that a C++ application programming interface like that defined in OODCE will be added to the standard DCE source base in roughly the same time frame.
One route to distributed objects in a DCE environment, then, is to directly use DCE's inherent object support, something that's fairly minimal today but will be enhanced in DCE 1.2. A more complete alternative, in the near term at least, is to use an add-on product like HP's OODCE. There's another route, though, one that doesn't require changing DCE at all. In this approach, DCE becomes the foundation on which is built another, specifically object-oriented, distributed technology. A leading example of such a technology is OMG CORBA.
In some ways, CORBA is much like DCE. Both, for example, define technology for supporting distributed applications, and both are vendor-neutral standards. There are also significant differences between the two. The most obvious distinction is that CORBA was designed to be completely object-oriented from the start, something that's not true of DCE. Another important (though perhaps less obvious) difference is this: while DCE does specify standard interfaces to its services, it also provides a reference implementation, code on which virtually all DCE products are based. CORBA, on the other hand, defines almost nothing but interfaces; there's no single reference implementation. Instead, each CORBA vendor is free to implement CORBA's standard interfaces as it sees fit. That implementation may be built on DCE or it may not; it's the vendor's choice. This very important distinction between the two technologies stems in no small part from the different goals of their controlling organizations and, especially, from the different ways in which they were created.
The term "CORBA" is commonly used to refer to a large and growing set of specifications for distributed object computing created by OMG. To create these documents, OMG issues Requests for Proposals (RFPs) laying out a specific problem to be solved. OMG member organizations then submit responses to an RFP detailing their solutions. These solutions generally, although not always, take the form of standard interfaces to objects. Officially, the OMG membership then votes on the various submissions received, choosing one to become the OMG standard. In practice, the submitting organizations (largely vendors) get together at various points throughout the process and pool their submissions. When the official vote is taken, there's usually only a single joint submission, one that's approved with little or no dissent. Although there have been fierce battles over the contents of various OMG standards, most of those battles take place behind the scenes-official approval by OMG itself is rarely controversial.
This largely consensus-oriented process effectively produces agreements that all vendors are willing to support, and by the standards of open systems committees, at least, it's quite fast. It's not perfect, however. One potential problem is that merging several submissions to produce a single winner can be very much like design by committee. The result may be politically correct, but the technology can suffer. Also, reaching complete agreement can sometimes require defining very broad standards, standards that allow each vendor to interpret or extend them as it wishes.
This last point has been especially important in CORBA's development. When OMG began work on CORBA several years ago, its major sponsoring vendors already had products or substantial development efforts underway in this area. These vendors, naturally, wished to protect their investments. Doing this meant that whatever OMG standardized had to be implementable by each of them, ideally with only limited change to their existing work. Even today, when most leading vendors offer CORBA-based products, their diverse heritage is evident. Unlike DCE, where all vendors offer very similar products, virtually all derived from the same source code base, CORBA products are quite diverse. Although all of them support the key CORBA interfaces (and most are adding support for new ones as fast as possible), the level of diversity among CORBA-based products is much greater than among DCE products.
The CORBA standards define an environment for distributed object computing, a world in which objects can communicate as clients and servers. A key piece of this environment is the object request broker (ORB). According to the base CORBA standard, an ORB "provides the means by which clients make and receive requests and responses". A client invokes a method in some object via the ORB, which conveys the request to the appropriate object. The method executes and the result, if any, is returned, once again via the ORB. All communication between true objects in a CORBA environment relies on the ORB. Its function is often likened to a bus in a computer system; just as the bus supports communication among various pieces of hardware in the machine, the ORB provides a software bus connecting together a group of objects.
While the abstract idea of an ORB is straightforward, thinking about ORBs more concretely is a bit more difficult. The CORBA standards define only the interface to the ORB, not how an ORB is actually implemented. As is obvious from the definition quoted above, vendors are free to implement object-to-object communication in any way they wish as long as they support the standard ORB interface. A vendor might, for instance, choose to implement an ORB using RPC, or with some kind of message passing scheme, or perhaps in some other way. An ORB that supports communication among objects on the same system may rely on some kind of interprocess communication mechanism. The ORB's functionality might be part of the operating system, or it could be implemented in a separate process or in libraries linked into the client and server objects, or some combination of these things. However it's implemented, by definition, all communication between objects goes through the ORB.
This diversity is a natural outgrowth of the goals of the vendors in OMG. Each of these vendors had already gone a good way down the road to implementing distributed objects, and none wished to retrace its steps. An abstract definition of an ORB allowed each vendor to retain its work, then add some standardization by supporting a common ORB interface on top.
Somewhat unfortunately, the vendors were unable to reach complete agreement about what this ORB interface should be. While the client side of the interface was standardized quite completely, the server side was much more loosely defined. An effort is currently underway to fix this by more fully defining the server side of the ORB interface. It's worth noting, however, that most vendors provide a development environment on top of that defined by CORBA. As a result, even the standard interface that's currently defined can be largely hidden by proprietary extensions, something that doesn't help CORBA's already limited support for application portability.
CORBA defines two different ways for clients to invoke operations on server objects. The first, called the static interface, works very much like RPC. An object defines its interface using CORBA's Interface Definition Language (IDL), a tool that has much in common with DCE's IDL (although, being object-oriented by design, it supports interface inheritance). This IDL definition is then compiled to produce a client stub and a server skeleton, code that typically gets linked into the client and server objects, respectively (officially, both stub and skeleton are part of the ORB). To invoke a method in the server object, a client calls a stub function; the request is conveyed via the ORB to the destination object and executed there. The client blocks until the function returns, so using this static interface looks very much like making a remote procedure call in DCE.
The alternative for clients is called the dynamic invocation interface (DII). With the DII, a client creates a request dynamically, building it one parameter at a time. It then sends the request via the ORB to the destination object. The requested method in that object executes and the result, if any, is returned. The client that invoked the operation has a choice, however; it can elect to block, waiting for this result, or it can choose to continue executing without waiting. If the client chooses this second option, it will then typically check later to see if the operation's result has been returned. The DII allows what the CORBA specification calls deferred synchronous operation, the ability to invoke a request without blocking.
With the DII, a client builds a request dynamically rather than relying on the pre-compiled client stub used in the static interface. This means that it's possible for a client to learn about an entirely new kind of object while the client is executing, then build requests and invoke operations on that object. To support this kind of dynamic world, CORBA defines an interface repository. This repository contains the interface definitions for objects in the ORB environment. When a client encounters a new, unknown class of object, it can query the interface repository for the object's interface. Once it learns that interface, the client can then use the DII to dynamically build and invoke a request on the new object. This kind of flexibility can be very useful in some kinds of applications, and it's something that's not present in DCE.
Although CORBA defines two choices for client interfaces, a server object can remain completely unaware of which one the client is using; it just receives invocations via the ORB as always. And while having two different interfaces can be quite useful, it also provides another example of the effects of consensus-oriented design. In the work on the original CORBA standard, the vendors involved split into two groups. One championed the static interface, while the other insisted on the superiority of the dynamic interface (unsurprisingly, which camp a vendor was in was strongly correlated with what its existing implementation already provided). The dilemma was solved in the traditional fashion of standards committees: all vendors were required to support both options.
Defining a standard interface to the ORB is a worthwhile thing to do. It's not the end of the story, however. To create a complete environment for distributed object computing, it's useful to define other standard interfaces as well, interfaces to services that are needed by a variety of objects. For example, CORBA requires that a client hold an object reference to an object before it can invoke operations on that object (a CORBA object reference is analogous to a binding handle in DCE). How does the client acquire that object reference? One obvious answer is that it queries some kind of naming or directory service. Doing this in a standard way requires defining at least a standard interface to this service. OMG's definitions of interfaces to generally useful services like these are collectively known as CORBAservices.
The example just given, an interface to a naming service, is in fact one illustration of a standardized CORBA service. There are many others, including interfaces for handling events, interfaces for controlling an object's life cycle, and interfaces for object persistence. All of these interfaces are defined in CORBA's IDL, and all are designed to be implementable in a variety of different ways.
The naming service interface, for example, was explicitly designed to be usable at least over DCE's Cell Directory Service (CDS), OSI's X.500 directory service, and Sun's NIS+. This diversity reflects, as always, the different ways in which various CORBA vendors implement their products. Flexibility is rarely free, however. In this case, the OMG-defined naming service interface could only include functions that were in all of the target directory services-the standard interface could be no larger than the intersection of their capabilities. Services that were in only one of these directory services, things like CDS's ability to create groups and profiles, could not be included in the standard. If vendors wish to add support for these extras, they must do so in a non-standard way.
Reaching multi-vendor agreement can sometimes result in even more diversity. An extreme example is provided by the definition of OMG's object life cycle services. With this service, an object that wishes to allow its clients to copy or move it must support a certain standard interface, one that defines methods for requesting both of these things. Each of these two methods, appropriately called "copy" and "move", has two parameters. In both, however, the first parameter can sometimes be effectively omitted, while the second parameter is left completely undefined. Since the stated goal of defining standard interfaces is to allow application portability, one is left to wonder: how is it possible to write portable code using standard methods where the first parameter is optional and the second is undefined?
Defining standard interfaces to useful services is clearly a good idea. Unfortunately, the processes used to create those interfaces within OMG can lead to the definition of some very loose standards. For users of products based on those standards, this translates to limited portability of application code across CORBA-based products from different vendors.
One more important point worth making here is this: if CORBA's interfaces are sometimes defined so broadly, how will it be possible to create a workable conformance test suite? When buying standard products, users commonly want some assurance before they sign the check that what they are buying really meets the standard. Many standard technologies go to some lengths to ensure this. DCE, for example, has a large conformance test suite that vendors must pass to claim DCE certification. Since virtually all DCE products are derived from the same source code, and since the interfaces to that source code are precisely specified, creating this test suite was straightforward. For CORBA, however, with its diversity of implementations and its sometimes loosely-defined interfaces, creating a conformance test suite that guarantees application portability is far more problematic. The result is that buyers of CORBA-based products may not feel as confident about application portability as do buyers of DCE products.
The original CORBA standard said nothing at all about how to build an ORB. Naturally, then, it also didn't specify any particular wire protocol to use for communicating between objects. The result was that every vendor defined its own unique protocol. Since most customers have machines from a variety of different companies, each ORB vendor was effectively required to support its ORB on multiple platforms. Even ORBs sold by traditionally hardware-oriented companies were made available on a wide range of machines, including those of their competitors. Distributed object computing was possible in a multi-vendor environment, but it required that only a single vendor's ORB be used on all systems.
The most recent CORBA standard, CORBA 2.0, still says nothing at all about how to build an ORB for use in a single-vendor environment. It does, however, spell out requirements for communication between ORBs built by different vendors. To comply with the interoperability requirements of this newest version of the specification, all ORBs must support in some way the Internet Inter-ORB Protocol (IIOP). ORBs may optionally support alternative interoperability protocols, known as Environment-Specific Inter-ORB Protocols (ESIOPs), in addition to the IIOP. An example of an ESIOP is the DCE Common Inter-ORB Protocol (DCE CIOP), based on DCE RPC. Many ORB vendors have demonstrated support for both the IIOP and the DCE CIOP, and these protocols certainly do allow interoperability between ORBs built by different vendors.
Note, however, that vendors are still free to use whatever protocol they wish within their own ORB. While interoperability among multiple vendors' ORBs is possible, its practicality is debatable. Since the leading ORB vendors all sell their products on a range of different systems, they can quite justifiably claim that their customers are better off buying only a single vendor's product, i.e., theirs. A multi-vendor ORB environment, although it will provide basic interoperability, will also be significantly more complex than a single-vendor world. Issues like administration and security are likely to be much easier to resolve if only a single ORB product is deployed throughout the environment. This reality, together with the limited portability that exists across different vendors' products, means that a multi-ORB world has only limited appeal.
It's worth contrasting this situation with that of DCE. In a DCE environment, virtually all products are built from a common code base. There is one protocol, DCE RPC, used for all communication, and every product supports the exact same application programming interfaces. Although DCE has limited support for objects, its level of standardization is very high. By contrast, CORBA-based products are certainly more similar than they would have been had OMG never existed, but they are also far more different from one another than are DCE products.
From quite early in the process, some (but by no means all) of the leading vendors in OMG planned to support CORBA's standard interfaces using DCE. Although none of the original CORBA implementations took this approach, solutions using at least some parts of DCE began to appear from IBM, Digital, and Hewlett Packard in late 1995.
Building CORBA on top of DCE has some distinct advantages. For one thing, DCE provides a solid set of services to build on, including directory and security services. And from the point of view of DCE users, adding CORBA provides a way to do distributed object computing in a DCE world.
Building CORBA on top of DCE also has some drawbacks. Chief among them is that CORBA's limited standardization all but negates one of DCE's greatest benefits: its vendor-neutrality. If an organization that has committed to DCE elects to use CORBA to move to distributed objects, it will have a strong incentive to choose a single vendor's ORB. The limited portability and problematic interoperability among CORBA products today make a single-vendor solution very likely.
Anyone thinking about single-vendor solutions for distributed object computing would be hard pressed to ignore Microsoft, if only because of the enormous installed base of Windows. While Microsoft currently doesn't offer a product in this area, its announced plans revolve around what has become a key technology in its environment: OLE.
Although it originally stood for "Object Linking and Embedding", OLE is today treated as just a name. The OLE family of technologies includes a variety of things, ranging from compound documents to standard interfaces for accessing a database. All OLE technologies have one thing in common, however: they all rely on the Component Object Model (COM).
COM specifies conventions and provides services for defining and using objects. Much like CORBA, it has an Interface Definition Language (IDL) for specifying interfaces to objects, and it includes services that allow objects to be instantiated and to communicate with one another. These services essentially do what an ORB does in the CORBA world.
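To make this concrete, here is a minimal sketch of what a COM interface definition looks like in Microsoft's IDL. The interface name, method, and UUID below are invented for illustration; they are not taken from any real product:

```idl
// Hypothetical COM interface; the uuid is a placeholder, not a registered one.
[
    object,
    uuid(4F2A1B3C-9D8E-4A5B-8C7D-6E5F4A3B2C1D)
]
interface IBankAccount : IUnknown
{
    // COM methods conventionally return an HRESULT status code;
    // actual results come back through [out] parameters.
    HRESULT GetBalance([in] long acctNum, [out] long *balance);
};
```

Every COM interface ultimately derives from IUnknown, which provides reference counting and lets a client ask an object at run time whether it supports some other interface.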
As mentioned above, though, OLE and COM are today single-system technologies; there is no Microsoft-supplied support for distribution. This is slated to change with the arrival of Network OLE. While Network OLE is not yet available, some of its announced attributes are especially interesting to users of DCE. First, Microsoft has chosen DCE RPC as Network OLE's communications protocol. Ideally, this will allow interoperability between OLE and DCE, although it's sure to require some work on the DCE side. Also, Network OLE will support multiple options for security, one of which is intended to be compatible with DCE security. Even OLE's IDL is derived from work done by OSF, since it's essentially an extension of DCE IDL.
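The family resemblance between the two IDLs is easy to see from a small DCE IDL fragment. As before, the interface name and UUID here are invented purely for illustration:

```idl
// Hypothetical DCE IDL interface; the uuid is a placeholder.
[
    uuid(4F2A1B3C-9D8E-4A5B-8C7D-6E5F4A3B2C1D),
    version(1.0)
]
interface bank_account
{
    // A DCE RPC operation can return its result directly.
    long get_balance([in] long acct_num);
}
```

Microsoft's IDL keeps DCE IDL's attribute syntax and parameter direction annotations, adding COM-specific notions such as the object attribute and interface inheritance.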
Technically, Network OLE may be an attractive path to distributed object computing in a DCE environment. Unlike CORBA, which in many ways reinvents technologies like IDL, Network OLE builds on what DCE already provides. And although Network OLE is clearly a single-vendor solution to the problem of building a distributed object computing environment in a DCE world, this is likely to be effectively true of any CORBA-based product, too, given CORBA's minimalist approach to standardization.
At the same time, it's too soon to draw firm conclusions about Network OLE's suitability for a DCE world. Perhaps the only safe thing to assume is that Microsoft, like any other vendor, will do what it believes is best for its shareholders. Should that lead to effective interworking with DCE, it may well prove to be a good thing for organizations using DCE.
DCE is an excellent choice as an infrastructure for supporting distributed applications in a client/server enterprise. While DCE does provide some support for objects, that support is limited today. To improve this situation, there are essentially three approaches now in progress to make DCE a solid foundation for distributed object computing.
The first is to directly modify DCE to add better support for objects. This can be done by individual vendors, with products like HP's OODCE, or by vendors working together through OSF. This latter path is taken in the next major DCE release, DCE 1.2, which adds more support for object-oriented technologies.
A second approach is to install a CORBA-based product over DCE. Among the attractions here are CORBA's good support for distributed object technologies and the wide range of standardized service interfaces it offers. A significant drawback to this approach is that due to CORBA's quite limited standardization, DCE's vendor-neutrality is hidden under a somewhat proprietary veneer. While there are many excellent CORBA-based products available, choosing this route probably means selecting a single vendor's implementation of CORBA throughout an organization's DCE environment.
The third approach is also the one that today has the most unknowns: Microsoft's Network OLE. While Network OLE appears to be a good technical fit with DCE, it's too soon to know how well this will work in practice. Network OLE, like products based on CORBA, effectively provides only a single-vendor solution, although it's from a vendor with whom virtually every organization must contend. All three of these approaches are being pursued today, and all have strengths and weaknesses. For an organization committed to DCE but wondering how to square this commitment with the growing use of object technology, the message is simple: fear not; objects are coming to DCE, perhaps in more than one way. Choosing DCE does not consign an organization to the technological backwaters, because DCE has the flexibility to grow as needed. As the fundamental base for a multi-vendor client/server enterprise, DCE remains unsurpassed.
For more information on DCE and the DCE Program, please see the DCE Home