Saturday, 30 June 2007

XML Serialization in ASP.NET


In the past, maintaining the state of an object in ASP often required some very inventive and painstaking code. In the brave new world of .NET, however, Object Serialization offers us a comparatively easy way to do just that, as well as some other useful tasks.
As a kid, I remember waking up on many a cold morning and stumbling into the kitchen with my eyes half-closed, looking forward to whatever Mom had prepared for breakfast, only to find an anticlimactic bowl of steaming hot just-add-boiling-water instant oatmeal waiting for me on the table. At least I wasn't like the more unfortunate kids whose mothers force-fed them that white silt of death, powdered milk. I am absolutely certain that something must go seriously awry in the dehydration process of milk because upon rehydration, that stuff is just plain nasty.
Be that as it may, I think that at least one of the developers involved in creating the .NET Framework must have been one of those abused children. I see powdered milk fingerprints all over some of the new data management techniques in .NET. Then again, in an age of dehydrated/rehydrated food products, what could be more logical than dehydrated/rehydrated data?
Object Serialization
The technique to which I refer is called "Object Serialization". Object Serialization is a process through which an object's state is transformed into some serial data format, such as XML or a binary format, in order to be stored for some later use. In other words, the object is "dehydrated" and put away until we need to use it again. Let's look at an example to clarify this idea a little further. Suppose we have an object defined and instantiated as shown below:
Public Class Person
    private m_sName as string
    private m_iAge as integer

    public property Name() as string
        get
            return m_sName
        end get
        set(byval sNewName as string)
            m_sName = sNewName
        end set
    end property

    public property Age() as integer
        get
            return m_iAge
        end get
        set(byval iNewAge as integer)
            m_iAge = iNewAge
        end set
    end property
End Class
dim oPerson as New Person()
oPerson.Name = "Powdered Toast Man"
oPerson.Age = 38
Let's say that for some reason, we wanted to save a copy of this object just as it is at this very moment. We could serialize it as an XML document that would look something like this:

<?xml version="1.0" encoding="utf-8"?>
<Person>
  <Name>Powdered Toast Man</Name>
  <Age>38</Age>
</Person>
Then, at some later time when we needed to use the object again, we could just deserialize ("rehydrate") it and have our object restored to us just as it was at the moment we serialized it.
Why All This Dried Food?
"This serialization and deserialization is all well and good," you may be saying at this point, "but what can it be used for in real world web applications?" Very good question. What would be the sense of dehydrating a glass of milk just to rehydrate it and then dehydrate it again without drinking any? Never mind that it'll probably taste like the inside of your little brother's sock drawer (maybe I should have come up with a more tasty dehydrated food for this analogy). Some good uses for serialization/deserialization include:
• Storing user preferences in an object.
• Maintaining security information across pages and applications.
• Modification of XML documents without using the DOM.
• Passing an object from one application to another.
• Passing an object from one domain to another.
• Passing an object through a firewall as an XML string.
These are only a few of the many possibilities that serialization opens up for us.
The Dehydration Process

"Okay, okay, I get the picture," you may be muttering now, "but how do I do it?" I'm going to walk you through a simple example of serializing an object to be saved to disk as an XML file. Keep in mind that in .NET we can serialize objects into a binary format or SOAP format as well as into XML, but we will focus solely on XML for this article for the sake of brevity. Also, the object in the example is obviously very simplistic and wouldn't be practical in the real world, but it will serve as a means to clearly illustrate how to serialize an object. The same principles used in this example can then be applied to more complicated tasks. Note that all the code used for this example is available for download at the end of this article).
First of all, let's take a look at the class that is the blueprint for our object (this snippet can be found in the file, xmlser.aspx).
<XmlRoot("Person")> _
public class Person
    private m_sName as string
    private m_iAge as integer

    <XmlElement("Name")> _
    public property Name() as string
        get
            return m_sName
        end get
        set(byval sNewName as string)
            m_sName = sNewName
        end set
    end property

    <XmlElement("Age")> _
    public property Age() as integer
        get
            return m_iAge
        end get
        set(byval iNewAge as integer)
            m_iAge = iNewAge
        end set
    end property

    public function Hello() as string
        dim s as string
        s = "Hi! My name is " & Name & " and I am " & Age & " years old."
        return s
    end function

    public function Goodbye() as string
        return "So long!"
    end function
end class
This looks pretty similar to the class we saw earlier but with a few adjustments. First of all, notice the attribute lines that precede the class and its properties. These serialization attributes tell the XMLSerializer what to name the root element and the child elements of the resulting XML document. Second, the class now has two methods, Hello() and Goodbye(), which we'll use to prove that our objects really are alive and well. Now let's look at the code (also in xmlser.aspx) that creates an instance of this class and serializes it:
dim oXS as XmlSerializer = new XmlSerializer(GetType(Person))
dim oLucky as new Person()
dim oStmW as StreamWriter

'Set properties
oLucky.Name = "Lucky Day"
oLucky.Age = 52

'Display property values
Response.Write("Hello() = " & oLucky.Hello() & "<br>")
Response.Write("Goodbye() = " & oLucky.Goodbye() & "<br>")
'Serialize object to XML and write it to XML file
oStmW = new StreamWriter(Server.MapPath("lucky.xml"))
oXS.Serialize(oStmW, oLucky)
oStmW.Close()
First of all, we declare and instantiate an XMLSerializer object. You'll notice that we had to tell it right from the onset, using the GetType() function, what type of object it's going to be serializing. Next you see that we assign values to the Name and Age properties of the Person object we instantiated. Then we output to the ASP.NET page what the properties are set to by calling the Hello() and Goodbye() methods of the Person object.

Remember that this is only so that we can see what's happening with the object during this process. Next comes the good stuff: We instantiate a StreamWriter object and tell it that it will be writing to a file called, lucky.xml. We then call the Serialize() method of the XMLSerializer object and send it our Person object to be serialized as well as the StreamWriter object so it will write the resulting XML to the file specified. Then we close the StreamWriter, thereby closing the file.
That's it. If everything works correctly, an XML file (lucky.xml) will be written to disk. It should look like this:


<?xml version="1.0" encoding="utf-8"?>
<Person>
  <Age>52</Age>
  <Name>Lucky Day</Name>
</Person>
Notice that the names of the XML elements are exactly as we specified in the XmlRoot and XmlElement attributes in the class earlier. Next, let's examine how to "rehydrate" our XML into an object instance.
Just Add Water
Now let's look at this from the opposite angle. We've just seen how to serialize an object into an XML file and save it to disk, but now suppose we already had an XML file saved and wanted to use it to instantiate an object. In the downloadable code found at the end of this article, you will find a file called, ned.xml. We’re going to use that XML file to create a Person object. Its contents look like this:


<?xml version="1.0" encoding="utf-8"?>
<Person>
  <Age>47</Age>
  <Name>Ned Nederlander</Name>
</Person>
You'll notice that this XML document has exactly the same structure as the XML file that we wrote to disk a moment ago but the data it contains is, of course, different. Now put on your wicked mad scientist grins and look at the code required to bring this beast of an object to life:
dim oNed as Person
dim oStmR as StreamReader
'Pull in contents of an object serialized into an XML file
'and deserialize it into an object
oStmR = new StreamReader(Server.MapPath("ned.xml"))
oNed = CType(oXS.Deserialize(oStmR), Person)
oStmR.Close()
'Display property values
Response.Write("Hello() = " & oNed.Hello() & "
")
Response.Write("Goodbye() = " & oNed.Goodbye() & "
")
Before anything else, we declare a Person object and StreamReader object. Next, we create an instance of the StreamReader object and feed it the stored XML file. Then we instantiate the Person object by calling the Deserialize() method of the XMLSerializer object. This method uses the StreamReader object to read the contents of the XML file and then instantiates an object whose state matches that described in the XML file. Finally, we close up the StreamReader object and then output the results of the newly created object's Hello() and Goodbye() methods just to prove that it was successfully created. It's just like that instant oatmeal Mom used to make.
Note: Something important to remember is that when an object is instantiated through runtime deserialization (the BinaryFormatter and SoapFormatter), its constructor is not called. XML deserialization is different: the XMLSerializer requires, and calls, a public parameterless constructor. Just keep that in mind if you plan on doing this with any objects that depend heavily on their constructors performing some crucial function.
Do I Have To Keep My Raisins?
Perhaps you are wondering now, "Pretty cool - but what if I don't want to save my object to disk?" Another good question. There's no reason you would have to. Let's suppose that for some reason, you needed to serialize an object into an XML string to be used for some purpose and then forgotten or re-instantiated or whatever else. This can be accomplished in almost the same way that was demonstrated earlier. However, instead of using a StreamWriter object in the process, we will use a StringWriter object. See the code snippet below:
dim oDusty as new Person()
dim oStrW as new StringWriter()
dim sXML as string
'Set properties
oDusty.Name = "Dusty Bottoms"
oDusty.Age = 51
'Serialize object into an XML string
oXS.Serialize(oStrW, oDusty)
sXML = oStrW.ToString()
oStrW.Close()
As you can see, we instantiate a new Person object and StringWriter object and then assign values to the Name and Age properties of the Person object. We then call the Serialize() method of the XMLSerializer object and the Person object is serialized into an XML document and placed in the StringWriter object.
Before we move on, it is important to understand some things about the StringWriter and StreamWriter objects and Inheritance. The Serialize() method of the XMLSerializer object is an overloaded method and one of its signatures is: Overloads Public Sub Serialize(TextWriter, Object). This means we must send it a TextWriter object and some other object.
"Wait a minute!" I hear you shouting, "If it needs to be sent a TextWriter object, why are we sending it StringWriters and StreamWriters?" That's because of Inheritance.
In object oriented development, objects can be derived from other objects, inheriting some or all of the original object's characteristics. This is where StringWriter and StreamWriter come from. They are "descendants" of TextWriter. Think of it this way: A man named Fritz Meyer has two children, Hansel and Gretel. Hansel is not Fritz, but he is a Meyer as is Gretel and when they have a Meyer family reunion, Fritz, Hansel, and Gretel can all get in the door because they are all Meyers. Similarly, because StreamWriter and StringWriter are both descended from TextWriter, they can be used with this call to Serialize(). Unfortunately, StreamWriter doesn't have a way to present its contents as a string data type, but StringWriter does and we are interested, at this point, in getting the XML string rather than saving it to a file.
That is why, in the code snippet above, we send a StringWriter to Serialize() instead of a StreamWriter. (For more information on inheritance and how it is used in .NET, be sure to read: Using Object-Orientation in ASP.NET: Inheritance.)
After the serialization takes place, we capture the XML string by calling the ToString() method of the StringWriter object and placing the results in a string variable. Next, we close the StringWriter object because we no longer need it. We now have our hands on the XML string and can do with it what we please. In the downloadable example code, all we do with it is output it to the browser.
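If you later need to bring that string back to life, the same inheritance trick works in reverse: Deserialize() also accepts a TextReader, and StringReader is a TextReader descendant. Here is a minimal sketch (not part of the downloadable code) that rehydrates a Person object straight from the sXML string we just captured:
dim oStrR as new StringReader(sXML)
dim oFromString as Person

'Deserialize the object directly from the XML string in memory
oFromString = CType(oXS.Deserialize(oStrR), Person)
oStrR.Close()

'Prove the object came back to life
Response.Write(oFromString.Hello())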

Conclusion
As you have just seen, serialization is fairly easy to implement. I've already listed several possible reasons to use serialization in your applications and now that you know how to do it, I will leave the rest to your own capable imaginations. This article has focused only on how to serialize an object into an XML document, but please remember that objects can also be serialized into binary or SOAP formats. To learn more about those types of serialization, look up the BinaryFormatter class and the SOAPFormatter class.
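For a quick taste, here is a rough sketch (not from the downloadable code) of the same "dehydration" idea using the BinaryFormatter. Note that runtime serialization requires the class to be decorated with the Serializable attribute, which the XML examples above did not need:
'Requires: Imports System.IO and Imports System.Runtime.Serialization.Formatters.Binary
'Assumes the Person class is also marked <Serializable()>
dim oFmt as new BinaryFormatter()
dim oPTM as new Person()
oPTM.Name = "Powdered Toast Man"
oPTM.Age = 38

'Dehydrate the object to a binary file
dim oStream as new FileStream(Server.MapPath("person.bin"), FileMode.Create)
oFmt.Serialize(oStream, oPTM)
oStream.Close()

'Rehydrate it later
oStream = new FileStream(Server.MapPath("person.bin"), FileMode.Open)
dim oCopy as Person = CType(oFmt.Deserialize(oStream), Person)
oStream.Close()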
Maybe this powerful technology didn't really have its humble beginnings in the bottom of a glass of powdered milk - but for some reason, it makes me smile to think so. Then again, maybe someday we'll see Bill Gates or one of his .NET guys sporting a liquidy white moustache on one of those "Got Milk" ads.


Friday, 1 June 2007

Transforming Data into Information

Data in SOA, Part I: Transforming Data into Information

Data and data management are key aspects of nearly every enterprise software solution. SOA is no exception. Effective data modeling and management are an essential part of successful SOA realization. To take your data to the next level you need to transform it into information; to take your information to the next level you need to transform it into knowledge.

This article is the first in a series of two articles on “Data in SOA: Transforming Data into Knowledge.” In this article I describe an approach to transforming data into information in SOA as part of an overall SOA transformation plan, with a definition of a SOA Reference Architecture (SOA RA), and the realization of an enterprise SOA. In Part II of this series I describe an approach to transforming information into knowledge for SOA as an extension to an overall SOA transformation plan and a high-value expansion of an enterprise SOA RA.
Why Data?

Data are ubiquitous (data is the plural of datum, though both plural and singular verbs are commonly used with "data"). At their core, most IT efforts are focused on collecting, distributing, and managing data, providing data when it's needed, where it's needed, how it's needed, and for whoever (with proper authorization) needs it. Some may recall that long before the term IT ("information technology") was coined, most enterprises called their "computer departments" and activities DP, or "Data Processing."

With all the technology waves past, present, and into the foreseeable future, one constant has remained: data. The same data that were (and still likely are) processed by mainframes have also likely been processed by one or more of client-server, CORBA/DCOM, Java EE, .NET, Web services, SOA, and Web 2.0. Over time, the storage, formats, and transports may have changed, and how the data is processed has changed, but the "data" remain (and are growing). In essence, all the industry technology waves have one thing in common: they are new or improved ways to process data. Data are fundamental. If you agree with my premise that data are fundamental to enterprise solutions, it follows that data (and data modeling/management) are also a priority consideration for enterprise architects in SOA (and Web 2.0).
What are Data?

Let's start by selecting your favorite dictionary definition for "data," and then augment it. For the purpose of this article, data are the elemental, atomic, or low-level aggregation of pieces of "information" with some structure (form), relations, and state, but no behavior. For example, an Address table with columns for Street Address, City, and so on, is an example of data, as is the definition of an Address in a Customer Table. Data are structure and state without behavior. Data are the raw building blocks from which we may construct information. Data are the prerequisite for Information.
What is Information?

Again, choose your favorite definition, and then augment it. For the purpose of this article, information is the aggregation of data and the fundamental logic that provides additional form, the basic relations, and syntactic and semantic contexts—that is, it is state and core model behavior. For example, logic that ensures a ZIP code is valid and consistent with its city. Information extends data by providing the ability to map, or relate, data, and to define logic for the behavioral models consistent with the domain (syntax and semantics) context. Information is based on and requires data. In other words, information represents entities (subjects, objects) that encapsulate both state (data) and behavior (logic). You may consider information as being analogous to an instance of a model class in object-oriented programming, which contains both data members (instance variables) that hold state and methods that provide (model) behavior.
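To make the distinction concrete, here is a minimal sketch in VB.NET (illustrative only; the class and its validation rule are invented for this example). The public fields by themselves are data; the validation behavior packaged with them is the beginning of information:
Public Class AddressInfo
    Public Street As String
    Public City As String
    Public ZipCode As String

    'Behavior that gives the raw fields semantic context: a ZIP code must be
    'five digits and must belong to the city the record claims it does.
    Public Function IsZipConsistentWith(ByVal expectedCity As String) As Boolean
        If ZipCode Is Nothing OrElse ZipCode.Length <> 5 Then
            Return False
        End If
        Return String.Compare(City, expectedCity, True) = 0
    End Function
End Class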
The Value of Data in SOA

Organizations have different drivers, starting points, and priorities for defining and refining their SOA Reference Architecture (SOA RA), which may shift during their transformation to SOA. A holistic approach to the planning and design of a SOA RA should include the data services layer. This article uses the term data services layer to include both data and information access services.

Without an enterprise data services layer in your SOA RA, subsequent line-of-business (LoB) projects will be forced to develop individual "point," or one-off, solutions that are specific to each application. Few commonalities will be discovered, few opportunities for shared service definition, reuse, and consistency will be identified, and the definition of a canonical data model will remain elusive. There is a good chance that many of the benefits of SOA (and ROI) will take longer to realize, if they are realized at all. We've probably all read statistics that place project resource consumption on data integration tasks at anywhere from 50 to 85 percent of enterprise application software development! This anecdotal "fact" alone should be enough to ensure a data services layer is an integral part of any SOA realization. Combined with the obvious notion that our enterprise software solutions are primarily designed to process data, the value of data in SOA should also be apparent.

Figure 1 is a high-level conceptual view of BEA's SOA Reference Architecture, which illustrates its high-level layers. Note the presence of the data services layer as a first-class area, indicating the importance of the data services layer in a SOA RA.

BEA SOA Reference Architecture
Figure 1: SOA Reference Architecture layers

Data, data models, and data management are fundamental to SOA success. In fact, BEA values data services so highly that not only do we offer the AquaLogic Data Services Platform product, but data services are a fundamental part of many BEA Consulting service offerings, which include a Data Services Consulting Service where the focus is on SOA data and information layer planning, design, and development.
A Note About Data Access and Connectivity Services

Data access services refer to information sources often collectively known as Enterprise Information Systems (EIS) as well as databases and file systems. These can be legacy systems, systems of record, packaged commercial applications, customer, partner, and third-party applications and services, and Web services. What they have in common is that they provide data and/or information (which implies behavior in the context of this article) for consumption by other applications. In this sense, these applications, when accessed through the data services layer, are just another form or source of data. At a higher level of abstraction, data services would look the same to consuming applications, which is one of the primary goals (normalization/consistency) of the data services layer in the SOA RA. The fact that the interface exposed for consumption interacts with one or more databases, tables, back-end, legacy, shrink-wrapped, and/or external systems is an implementation detail encapsulated by the data services layer.

Connectivity services are about exposing applications and databases as application services in a standards-based manner.
Transforming Data into Information

So, your organization is planning a transformation to SOA. Investigation and planning on all layers and aspects of the SOA RA (see Figure 1) has started, and you have been tasked with the realization of the data services layer. Now what? Consider the following transformation steps:

1. Inventory existing data and system access assets
2. Determine the dependency matrix
3. Establish baseline metrics/SLAs
4. Set asset priorities
5. Carry out data modeling
6. Carry out logical modeling
7. Define information rules
8. Define application specializations

Figure 2 provides an example of a possible set of internal abstraction layers for an SOA RA data services layer, onto which we will map the requirements and capabilities from these eight steps:

Data Services Layer – Internal Layer Abstraction

Figure 2: Data services layer – internal layer abstraction

Based on your requirements and perspective, you may determine the need for a different set of abstraction layers. At the very least, you should separate the physical and logical layers and distribute your rule types accordingly.

Let's now look at each of these steps in more detail.
1) Inventory Existing Data and System Access Assets

The first step is finding out what is out there, that is, what are your current data and information system access assets. What data and information assets (referred to as simply "assets" for the remainder of the article), for example databases, information sources, and applications (meaning legacy, system of record) does your organization have? For each asset you will want to know the supporting metadata such as documentation, history, technology/tools/products/platforms, versions, ownership/management, location, security, and access mechanisms. Depending on the number of assets and their metadata, you may want to consider some sort of metadata catalogue or repository as well as a standard template or set of templates that captures the meta-information in a consistent manner and allows for search.
2) Determine Dependency Matrix

Once you have started or created the asset catalogue, the second step is to determine the dependency matrix. The dependency matrix, also part of the asset meta-information, captures information on who uses the asset, when they use it, frequency/how often, what they do with, or to, the asset (for example, CRUD), where they use it (that is, what type of access—batch, online, real time, reporting). It is also important to understand why a consumer uses a particular asset as that will help with task prioritization as well as provide requirements for your emerging data models.

Once you have captured the "who, what, where, when, how, and why" for each known consumer of an asset, you can start to analyze and form generalizations across all asset consumers. The goal is to find opportunities for simplification and reuse by transforming existing assets into SOA Building Blocks. These include, but are not limited to, assets in a service-oriented, self-describing, discoverable form that can be readily utilized in an SOA ecosystem using open, common, industry, and/or organization standards.

One definition contained within the set of SOA Building Blocks is your definition of a service. What standards and specifications, and their versions, will be used? For example, specific versions of WSDL, SOAP, UDDI, WS-Security, WS-I Basic Profile, WS-Addressing, XML, and XSD may be required, while others may be optional/recommended. Your data and information access assets will likely take a form consistent with your basic SOA Building Block definition of a "service." (Using your favorite search engine, search on the topics of “Service Identification” and “Service Definition,” which cover this area.)
3) Establish Baseline Metrics/SLAs

Each catalogued asset, since it already exists in some form, should have estimated or actual production usage statistics, including transaction volume, patterns, concurrent users, reliability, availability, scalability, and performance (RASP) information.

Usage information is also a great indicator of business and IT value and priority. This baseline information is used to define a set of metrics that will form the basis of Service Level Agreements (SLAs) and allow for goal definition and tracking over time. Metrics, as well as current production information, are invaluable in sizing and capacity planning of both hardware and software to support the data services layer in SOA. Be sure your SLAs are bidirectional, that is the service provider defines its SLA terms, conditions, and penalties for each consumer; consumers are expected to abide by the agreement.

For example, an agreement states that Consumer A may perform a maximum of 100 get() requests on DataServiceXYZ (the asset/service provider) per day (where a day is defined as a 24-hour period starting at 12:00 midnight GMT) and the response time per request is to be <= 2 seconds. If Consumer A sends more than the agreed maximum get() requests, then the service provider is able to apply the penalties as defined in the agreement. There are corresponding expectations on the service provider. Should Consumer A stay at or beneath their request maximum, the service provider must provide a response time <= 2 seconds or face commensurate penalties defined in the agreement.
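As a rough sketch of what the provider's side of such an agreement could look like in code (the class, names, and thresholds below are hypothetical and simply mirror the example agreement above), consider:
Public Class SlaGuard
    'Terms taken from the example agreement above (hypothetical values).
    Private Const MaxRequestsPerDay As Integer = 100
    Private Const MaxResponseSeconds As Double = 2.0

    'In a real system this counter would be kept per consumer and reset
    'at 12:00 midnight GMT, per the agreement.
    Private m_requestsToday As Integer = 0

    'Consumer side of the SLA: refuse requests beyond the daily quota.
    Public Function AllowRequest() As Boolean
        If m_requestsToday >= MaxRequestsPerDay Then
            Return False
        End If
        m_requestsToday += 1
        Return True
    End Function

    'Provider side of the SLA: did we answer within the agreed response time?
    Public Function ProviderMetSla(ByVal elapsedSeconds As Double) As Boolean
        Return elapsedSeconds <= MaxResponseSeconds
    End Function
End Class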

Metrics and SLAs define the expectations and rules of engagement that affect the basis of the value, goal, and sizing of each asset. Track your baseline metrics, SLAs, and reuse to establish a cost and benefits model.

4) Set Asset Priorities

With the preceding set of information captured to some degree, it should be possible to start evaluating each asset in the context of all the other cataloged assets—that is, assign each asset a priority. A good heuristic is to have at least three and no more than ten (which is already excessive) priority levels; any more will be unmanageable, and any fewer inadequate.

Priority assignments are designed to assist in the identification of the most important assets based on utilization and the value of the business functions supported. You should design a set of metrics (including those in Step 3) and definitions that provide for empirical comparison and evaluation of each asset to determine its priority assignment. Assigning asset priorities will help determine possible project starting points, potential business/IT sponsors, and relative business value.

Using all of the preceding information, a "current reality" snapshot for each asset can be established, documented, and tracked as these assets are transformed into SOA building blocks. Across all catalogued assets, the top-rated highest priority assets should be selected for the remaining set of steps. The actual number selected depends on your risk assessments, priority valuation, business/IT goals, resources, and similar factors.

5) Data Modeling

Starting with the first selected asset (I recommend doing one asset end-to-end first, and perhaps not the highest-priority one, as this allows you to exercise the governance and the data services layer's SDLC process in a more controlled and manageable manner), review the existing physical aspects. For a database or set of tables, consider the various queries that are used by consumers, any logic procedures stored in the database and their triggers, as well as any side-effect actions. This forms the physical data asset definition and description. For information access, what is used: MOM, third-party adapters, proprietary integrations, or point-to-point custom integration? This forms the physical information asset definition and description.

As the data services layer forms an integral part of an overall SOA Reference Architecture, the definitions and requirements for an SOA building block should be defined. There is likely a gap between your asset's current state and the SOA RA building block goal state. The first order of business is to bring the current physical asset as close to your SOA Building Block goal state standard as possible. You may recall the previous discussion regarding the definition and description of a "service" for your SOA Reference Architecture. For simplicity, let's say your definition of a service requires WSDL, SOAP, document-style with documents defined using XSD. Other recommended specifications include WS-Addressing, and XQuery/XPath. With this definition, we need to consider how to transform or map tables in a relational database, XML data, and/or information access systems into a set of services that meet our building block service definition criteria.

There are various tools and technologies for mapping existing data and information access assets into the physical data layer shown in Figure 2 and for defining logical service models consistent with your specific requirements and definition of a service. BEA's AquaLogic Data Services Platform (ALDSP) is our realization technology for transforming data/information access assets into SOA building blocks (data services); it provides a standards-based, service-oriented data services layer for your SOA Reference Architecture.

Once you import your physical assets (regardless of their interface and implementation), you have what is known as the physical data services layer (refer to Figure 2). Services in the physical data services layer have a consistent look, feel, and representation—that is, the underlying implementation details and communication protocols are abstracted, encapsulated, and removed from view (and you may still go "under the covers" when required), providing only the asset definition (service definition) and operational information. Now that you have your "data," it is time to define your logical model.
6) Logical Modeling

The goal of the logical model is to abstract, integrate, normalize, and manage the aggregation of one or more physical data services. These actions may be abstracted into two logical layers: the logical data normalization layer and the logical data integration layer, as shown in Figure 2, which also have a set of applicable rules: management rules, data rules, integration rules, and business rules.

Before we go further, it is worth noting that ALDSP allows for any number of logical layers that are required to support your logical abstraction design requirements. The logical layers are design-time-oriented only; their purpose is to allow designers and developers to separate and layer logical models and concerns effectively. These logical layers are not part of the runtime deployment—that is, even though there may be several logical layers in design, they do not correspond to a set of indirection layers at runtime. They are flattened and optimized into a single runtime layer. Development and operational staff can view the runtime artifacts and the optimizations and make modifications as they deem necessary.

You may define a different set of criteria and factors as the basis of your logical model layers than the ones I use here. For example, there may be a single layer that contains all of your logical abstractions, or you may have several logical layers. Too few logical layers may prove to be limiting and potentially lead to an increase in complexity over time. At a minimum, you should define a set of criteria that determines your logical abstraction layers and what they contain.

For example, you may have a logical abstraction that performs the normalizations as I show in Figure 2. The logical data normalization layer allows you to "clean up" and simplify any complex or confusing information. It is often difficult, if not impossible, to change the physical structure of existing databases or other systems over which you do not have direct ownership or responsibility, or changes at that level are simply not practical. The logical data normalization layer provides the opportunity to reengineer without forcing changes in the physical data layer. (If you need more information on "data normalization," a Web search on the term will explain what it is and what it entails.) The logical layer provides a model design that may be used as a future physical data and information model as the systems that use the data sources directly are updated or retired. The goal of logical data services is to provide a service model that is much easier to use, more understandable, and potentially more reusable by higher-level shared services and consuming applications.

Steps 5 and 6 may be reversed. The key is to ensure your logical models are not overly constrained by the current physical assets. In other words, while your logical models will utilize physical data services, do not let the limitations of those current physical assets limit your logical models or exert undue influence on your overall data services layer design. The physical assets are a starting point upon which to build richer, more expressive models.
7) Information Rules

Rules and rule processing are how data become information. Rules and rule processing provide relations, semantics, and behavior in the data services layer. As shown in Figure 2, there are several categories of rules:

* Management rules provide any requirements and/or restrictions on using the system and data assets that form the physical data layer. This can include security, access windows (dates/times), caching, metadata, transactions, and any side effects or ancillary actions (for example, logging and auditing) that need to be performed.

* Data rules provide validation, consistency, cross-checking, and any other rules associated with data accuracy and consistency. They may also provide cache management and other side effects in the physical or logical models. Data rules operate at the table, row, column, and field level.

* Integration rules provide mappings and consistency across logical and physical data layers. Integration maps higher-level abstractions to their corresponding logical or physical layers. For example, a Customer ID defined at a higher level of abstraction as part of a new canonical data model is converted to and from several underlying native forms in several customer databases and/or back-end systems (a simple hand-coded sketch of this idea follows this list). Integration rules operate at the system and/or database layer.

* Business rules provide meaningful business relations and some business logic, that is, behavior. In object-oriented programming, consider the state and behavior encapsulated in your model objects. Business rules perform a similar behavioral role in data services. They capture business processing logic at the data model layer. This logic is fundamental to the business entity's very definition and its relations with other business entities, and it is intrinsic to the business entity across all utilizations, for example, in an enterprise-wide, or at least a division-wide, scope. Some of these rules are defined in the canonical model, while others are defined in the application specialization models.
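The sketch below is purely illustrative (the entity names, key formats, and checks are invented for this example) and shows how a simple data rule and integration rule might look if written by hand; in a data services platform such rules would normally be defined declaratively rather than coded in an application:
Public Class CustomerRules
    'Data rule: a field-level validity/consistency check on the canonical ID.
    Public Shared Function IsValidCustomerId(ByVal canonicalId As String) As Boolean
        Return Not String.IsNullOrEmpty(canonicalId) AndAlso canonicalId.StartsWith("CUST-")
    End Function

    'Integration rule: map the canonical Customer ID to the native key used
    'by one particular back-end system (which stores only the numeric part).
    Public Shared Function ToBillingSystemKey(ByVal canonicalId As String) As String
        Return canonicalId.Replace("CUST-", "")
    End Function

    Public Shared Function FromBillingSystemKey(ByVal nativeKey As String) As String
        Return "CUST-" & nativeKey
    End Function
End Class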

8) Application Specializations

Once you have completed your logical model, you have effectively defined a canonical information model. The definition of this model completes the initial design of your information model, meaning you have effectively started to transform your data into your information. There is one final step that further refines your information model: application specializations.

Though many may be able to, not all consuming applications will use the canonical information model directly. Application specialization provides an abstraction layer for consuming applications to define their own logical model specific to their requirements.

Application specializations encapsulate the additional information model state and behavior required by consuming applications, which simplifies the consuming applications' utilization of the canonical information model assets. Since application specializations are unique to each consuming application, or a set of related business applications, there is no need to include them in the canonical information model. If application specializations have a larger scope (for example, across divisions or the enterprise), then they should be part of the canonical information model.
Conclusion

Creating the data services layer for your SOA Reference Architecture and defining the canonical information model for your organization is a demanding task that often comes with little glory: it is difficult work, and challenging to do well. Following the approach described in this article should provide enough information for you to plan, assess, and begin designing your SOA transformation in the data layer, and to begin transforming your organization's data into information. The actual planning, design, and development of your SOA Reference Architecture's data services layer depend on a number of unique factors that are specific to your organization or situation and are well beyond the scope of this architecture article.

Now that we have started transforming our data into information in our SOA ecosystem, we can think about transforming our information into knowledge. The second and final article in this series, "Data in SOA, Part II: Transformation of Information into Knowledge," will describe the steps for this transformation.

Building Interoperable Insurance Systems with .NET 3.0 Technologies



Mike Walker

Microsoft Corporation

December 2006

Applies to:
Microsoft .NET Framework 3.0

Summary: This white paper will use an insurance-industry scenario to demonstrate interoperability capabilities of the Microsoft platform. Using protocol-level standards alone is not enough; capturing the business side of the messaging transactions is key to making interoperability work for your business. This is true across all industries, not just insurance. (15 printed pages)
Contents

Introduction
Insurance-Industry Forces
Business Terms Used in This Document
Life-Insurance Policy Scenario
Architecture Overview
The Insurance Agent Policy System
The Insurance Carrier Systems
What Is the Value?
Conclusion
Resources
Introduction

The purpose of this white-paper series is to provide guidance around integration challenges.

Through this white paper, we will use an insurance-industry scenario to demonstrate interoperability capabilities of the Microsoft platform. As many enterprises have matured, we now live in a world where there is more than one stack of technology. These platform stacks range from legacy mainframe-based COBOL or FORTRAN applications to more modern solutions based on .NET, mobile systems, or Java—and everything in between. As a result, as enterprises have iterated through technologies and technology trends, more than a few bandages have been applied to the various technologies.
Insurance Interop Series

This white paper will serve as a guide for architects who are facing integration challenges in the insurance industry. We will show you how to use Microsoft integration technologies to integrate disparate systems in your enterprise. Additionally, this document will provide pragmatic design guidance for building interoperable solutions using open standards such as WS-*. Additional documents in this series will include the following:

Architecture Overview to Building Interoperable Insurance Systems

Securing Insurance Solutions

Scaling and Operational Management

Deploying Enterprise Solutions

Developing Composite Applications

Technologies that will be covered include:

1. BizTalk 2006. The integration technology for this solution. The solution also uses the BizTalk business rules and workflow orchestration.
2. Windows Communication Foundation (WCF). The programming model to develop Web service messages and manage protocol-level communication by using the WS-* protocols.
3. Windows Workflow Foundation (WF). To create compelling workflows using smart-client technologies.
4. SQL Server 2005. The repository for all of the application and customer data.
5. Windows Server 2003. The server platform.

This scenario will give us a glimpse into the business process. Like many businesses, each insurance company has its own unique way of handling its process. However, there are some similarities that these businesses share at the platform level. The purpose here is to demonstrate that there is a way to leverage these common platform services to build Service-Oriented Architectures (SOA) giving an organization more agility with the processes that differentiate their specific business.
Insurance-Industry Forces

In the insurance industry, there are many technologies at play, ranging from mainframe to UNIX to Windows. With this wide range of platform technologies, it is increasingly difficult to manage and operate while trying to be agile in an ever-changing financial market. For years, organizations have been building and buying technologies to meet these needs. Interoperability has become a necessary evil after the solution has been built and/or implemented. This has left us with point-to-point integrations that address very specific problems only at the application or system level, but not at the business-function level.

Figure 1. The result of point-to-point integrations

If care is not taken, point-to-point integrations over many years result in:

* IT portfolio management becoming unmanageable, given the duplication of systems, multiple variations of integrations, management of dependencies of applications, and so forth.
* Increased cost of IT systems because of the sheer number of custom integrations.
* Loss of agility, because development of systems is slowed significantly as a result of increased code complexity, limited reusability, and lack of standardization in the enterprise.

So, what does this mean to many insurance carriers? It means that interoperability is of critical importance—not only as an efficiency issue, but also as a competitive differentiator. In these days of modern competition, companies must increase the return on investment (ROI) of their IT systems by streamlining processes and becoming more agile to stay competitive.

Our goal is to address the industry challenges with a set of enterprise-ready technologies on the Microsoft platform. We use the following principles in the examples:

* Enterprise-class solution
* Standard communications:
o Use WS-* standards
o ACORD messages
* Ensuring interoperability with existing solutions

Business Terms Used in This Document

ACORD—ACORD (www.acord.org) is a nonprofit association whose mission is to facilitate the development and use of standards for the insurance, reinsurance, and related financial-services industries.

Order system—Creates requests for external data, transmits them to the appropriate third-party data provider, manages responses received, and matches responses to the appropriate original requestor.

Third-party service provider—An external system to fulfill an underwriting requirements request (for example, a credit-rating system).

Underwriting process—Implementation of the business process for assessing and processing new business.

Broker system—Possible smart-client front-end system for order entry and progress monitoring used by an insurance broker. Other front-end systems are also possible, such as a Web portal to brokers, or a Web UI for self-service order entry by customers.
Life-Insurance Policy Scenario

The customer, Robert, wants to purchase a platinum-level 1 million dollar life-insurance policy. The broker, Tom, enters Robert's policy application using his smart-client application. The policy is sent to the Order system, where it is processed and routed to the appropriate systems to begin the underwriting process. While in the Order system, third-party services are kicked off. For this scenario, we will use a Paramed, a third-party service that verifies the applicant's health and medical records.

The built-in business logic can also generate requests to third parties if a certain condition is met. This could be the broker or another partner of the insurance company.

Figure 2. Business process used for our scenario
Architecture Overview

This section will walk through the high-level logical architecture used in the scenario. The details around specific aspects such as security, messages, development, and deployment will be provided in other papers in this series.

To ensure applicability with real-world challenges, we derived a set of high-level requirements.
Requirements

The following requirements are for an enterprise-class solution:

* Must interoperate with existing, commercial off-the-shelf applications. As discussed earlier, many organizations purchase and customize software. It is critical to address this.
* The integration technology must be Web services. Many forms of communication, such as binary communication, are proprietary. Until the emergence of Web services, there was no standardized way to communicate messages. Web services provide a way to communicate across heterogeneous platforms.
* WS-* standards must be used. Web services using SOAP and WSDL have been the industry integration standard for years. However, these traditional Web services lack the robustness needed for messaging. The WS-* standards provide these necessary features without resorting to binary communication.
* Long-running workflow. Management of long-running orchestrations has been difficult, especially when that workflow spawns many smaller external workflows, in which case reconciliation and transaction management can become complex.

We use BizTalk as the message hub for this solution, given its rich capabilities and the strong need this insurance solution has for tying multiple systems together and managing multiple external workflows.

Figure 3. Using message-bus technology

Shown in Figure 3 is an enterprise view of BizTalk as an enterprise service bus (ESB). Remember that it is not a requirement that this is used as an ESB. This white paper refers to this layer as just a message layer, so that you can incorporate it into your solution in either case.

The rationale for using BizTalk is that it provides a centralized platform for the following capabilities:

* Business-process management—Centralizing reusable business process not only lends to service orientation, but also provides a mechanism for organizations to augment existing or purchased commercial off-the-shelf–based (COTS-based) applications without the complexity of modifying them.
* Workflow orchestration—Management of multiple workflows can be simplified through this platform. Instead of coding or reconciling each workflow, solutions can be managed as they should. We do this by creating one workflow to manage the business process from the beginning to the end that is able to orchestrate multiple internal system workflows.
* Rich adapter support—Jump-starting development is critical to organizations. BizTalk has a wide range of adapters to support your integration needs. In the insurance space, there is an ACORD adapter that can jump-start your integrations. In conjunction with the ACORD adapter, the Web Services Adapter and File-Based Adapters are available for BizTalk.
* Message routing and transformation—Message routing can be very complex when the messages must be transformed so that other systems understand the message. BizTalk can provide a platform to reduce the complexity and still align with open standards.

The Insurance Agent Policy System

Currently, the technology trends in the insurance industry range from portals, thick clients, and 3270 mainframe terminal-emulation screens to smart clients. Given the diverse number of applications and vendors in this space, we chose a smart-client user interface (UI) to provide the optimal experience for the agent, for the following reasons:

* Offline and online modes
* No dependencies on network connectivity
* Rich user experience with much greater functionality

A disconnected model for agents makes sense in many situations, as brokers can often be mobile or have limited connectivity to network resources. However, because we will be using Web services as the core of our messaging strategy when architecting this solution, the manner in which the end broker submits policies should be trivial.

For the client-side architecture, we used Windows Forms as the user interface, which provides the user interface needed for the agents. There will be several controls, such as data grids, text boxes, and command buttons. A data grid on the Windows form will serve as the window into the policy pipeline for the broker. We use Web services to update this data grid to ensure real-time updates.

Because this is a smart client, returning data can be cached for offline viewing and updating. This provides significant benefits to the brokers. In addition to the data, a small layer of business logic would reside on the client application. The majority of the application logic will reside on the insurance company side. The rationale here is that we will have light rules to drive UI functionality.
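One simple way to implement such a cache (an illustrative sketch only; the PolicySummary type and file name are invented here, and the white paper does not prescribe a mechanism) is to serialize the returned policy data to a local XML file:
Imports System.Collections.Generic
Imports System.IO
Imports System.Xml.Serialization

'Hypothetical shape of the data shown in the broker's policy grid.
Public Class PolicySummary
    Public PolicyNumber As String
    Public Status As String
End Class

Public Class PolicyCache
    Private Const CacheFile As String = "policies.xml"

    'Persist the latest policy list returned by the Web service for offline use.
    Public Shared Sub Save(ByVal policies As List(Of PolicySummary))
        Dim serializer As New XmlSerializer(GetType(List(Of PolicySummary)))
        Using writer As New StreamWriter(CacheFile)
            serializer.Serialize(writer, policies)
        End Using
    End Sub

    'Reload the cached list when the client starts up offline.
    Public Shared Function Load() As List(Of PolicySummary)
        If Not File.Exists(CacheFile) Then
            Return New List(Of PolicySummary)()
        End If
        Dim serializer As New XmlSerializer(GetType(List(Of PolicySummary)))
        Using reader As New StreamReader(CacheFile)
            Return CType(serializer.Deserialize(reader), List(Of PolicySummary))
        End Using
    End Function
End Class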

Figure 4. Client logical architecture

To make the calls from the client to the messaging tier, we will use Windows Communication Foundation (WCF). WCF will send SOAP 1.2 Web services messages using the ACORD messaging schemas. The WCF layer will provide a unified development model for our developers when coding communications. From the protocol perspective, we will use a series of WS-* standards. However, this is not enough to ensure interoperability. Usage of the ACORD industry standards is key, too. We should be able to interoperate seamlessly between "homegrown" applications, COTS applications, and third-party services.
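Below is a minimal sketch of what such a contract and client call might look like (the contract name, operation, endpoint configuration name, and namespace are hypothetical; a real implementation would use types generated from the ACORD schemas rather than a plain string payload):
Imports System.ServiceModel

'Hypothetical contract for submitting an ACORD 103 New Business message.
<ServiceContract(Namespace:="http://example.org/insurance")> _
Public Interface IPolicySubmission
    <OperationContract()> _
    Function SubmitNewBusiness(ByVal acord103Xml As String) As String
End Interface

Public Class PolicySubmissionClient
    'The endpoint named here would be defined in the client's configuration file.
    Public Function Submit(ByVal acord103Xml As String) As String
        Dim factory As New ChannelFactory(Of IPolicySubmission)("PolicySubmissionEndpoint")
        Dim proxy As IPolicySubmission = factory.CreateChannel()
        Dim trackingStatus As String = proxy.SubmitNewBusiness(acord103Xml)
        factory.Close()
        Return trackingStatus
    End Function
End Class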
Messaging Architecture

The use of Web services enables this broad variety of channels to leverage a common Web service that receives new business applications into the underwriting process in the form of an ACORD 103 message. The message includes a policy number that has been assigned, which will be used for tracking/correlation purposes throughout this demonstration. This ACORD 103 New Business Submission message will also carry a SOAP Message Transmission Optimization Mechanism (MTOM/XOP) attachment containing the binary representation of Robert's signature to authorize release of medical information, as required by HIPAA. It is absolutely critical that the ACORD standards are incorporated in our messaging. This will ensure portability of the architecture.

It is also essential that communications are secure and reliable. To achieve this, we will use WS-Secure Conversation (WS-SC) for personal information that might pass through an undetermined number of intermediaries. We also use WS-SC for high-volume, frequent requests (such as credit checks) that will be required for all new policy applications. We use WS-Security for less frequent requests, such as an Attending Physician's Statement (APS), where the overhead of session establishment is not justified by the request volume. We also use TLS/SSL (also known as HTTPS) in rare cases where a service is directly processing requests without any intermediate routing.

For messaging where tracking reception is important, such as ensuring receipt of a new policy to claim a commission, we use WS-Reliable Messaging (WS-RM). We also use WS-RM for data requests that are expensive to process (typically involving human workflow, such as APS queries). This ensures that requests are delivered only once, and avoids expensive duplicate requests.

For long-running message exchanges, we use WS-Secure Conversation (WS-SC). (See Resources.)
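As an illustrative sketch only (the white paper does not prescribe a specific binding configuration), here is how a wsHttpBinding could be set up in code to combine message-level security, WS-Secure Conversation, WS-Reliable Messaging, and MTOM encoding for the binary signature attachment:
Imports System.ServiceModel

Public Module BindingSetup
    Public Function CreateSecureReliableBinding() As WSHttpBinding
        'Message-level security (WS-Security) with reliable sessions (WS-RM) enabled.
        Dim binding As New WSHttpBinding(SecurityMode.Message, True)

        'WS-Secure Conversation: establish a security context for frequent requests.
        binding.Security.Message.EstablishSecurityContext = True

        'Deliver messages in order, exactly once.
        binding.ReliableSession.Ordered = True

        'MTOM/XOP encoding for binary attachments such as the scanned signature.
        binding.MessageEncoding = WSMessageEncoding.Mtom

        Return binding
    End Function
End Module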

Figure 5. Client message-exchange patterns
Transaction: Submission of new policy (103 Request)
Business processes: Broker-client; Underwriting process
WS-* protocols: WS-Security (WS-S); WS-Reliable Messaging (WS-RM)
Architecture decision: WS-S is used for personal information that might pass through an undetermined number of intermediaries. WS-RM is used to track message receipt. Because of infrequent transactions, there is no need for session-oriented security mechanisms, such as WS-Secure Conversation.

Transaction: Status queries (122 Request/Response)
Business processes: Broker-client; Underwriting process; Fulfillment process
WS-* protocols: WS-Secure Conversation (WS-SC)
Architecture decision: Noncritical, individual request or response messages that can be retried easily, but still contain personal information.

Transaction: Underwriting requirement order request (121); Underwriting requirement order response (1122)
Business processes: Underwriting process; Fulfillment process
WS-* protocols: WS-Secure Conversation (WS-SC), WS-Security (WS-S), or Transport-level security (TLS/SSL); WS-Reliable Messaging (WS-RM)
Architecture decision: These messages contain personal information. WS-SC will be used for high-volume, frequent requests (such as credit checks). WS-Security is used for less frequent requests, where the overhead of session establishment is not justified by the request volume. TLS/SSL is used where a service is directly processing requests without any intermediate routing. WS-RM is used for data requests that are expensive to process.

Table 1: Business-process messaging design decision matrix

You might ask yourself, after making a submission: Why is the status returned in a separate transaction? Well, the reasons are twofold. Firstly, it is important for this to be asynchronous, and the ACORD standard does not allow an implementation without separating the status from the submission. Secondly, the Broker will be getting status returns periodically through the course of the application process by querying the Status Service.
The Insurance Carrier Systems

When architecting the server side of the solution, there were particular aspects and assumptions considered:

* This architecture accounts for fragmented systems.
* Functional areas are self-contained and need to be managed.
* Operating systems and development environments differ.

As a result, there are a significant number of point-to-point integrations with very specific applications, thus causing proprietary implementations. In this solution, the façade layer will be created around these existing applications.

Figure 6. Insurance message bus

Here, you can see how we are using the enterprise service bus (ESB) as a message bus. This layer will serve as the centralized messaging layer that will manage our internal and external messages. Management and orchestration are key benefits of this architecture.

An infrastructure like this can bring order to the chaos of disparate point-to-point integrations by putting intelligent, long-running orchestrations and policies around transactions in one layer instead of many. It would be common to need upwards of five or six distinct COTS-based systems to accomplish an end-to-end transaction. We are reducing these systems significantly by consolidating the redundant functions, such as workflow and messaging, leaving infrastructure-level functionality where it belongs and keeping the business logic in applicable applications.

It is important to remember that this message bus is a logical representation. The implementation view can look very different. For example, the message bus could be several BizTalk servers, or there could be servers in different DMZ environments to manage both internal and external communications.

Figure 7. Workflow designer

The next tier down, which is where specific business functions are performed, contains two different legacy systems wrapped with an interface: the Order system and the Fulfillment system. The reason that we are keeping these as separate systems instead of consolidating them is that, the majority of the time, these would be two separate COTS-based systems.

A Status system was added for the following reasons:

* To provide a centralized way to report status to the agents.
* To reduce the number of interfaces and control logic needed to query multiple systems.
* To fit with the orchestration capabilities of our ESB for our long-running workflow.

The Ordering system and the Fulfillment system have been converted into coarse-grained services. By doing so, we have removed the dependencies of independent implementations. All communication with these systems now goes through our message hub. The exposed Web service endpoints that are managed from the message bus can then be managed with orchestration technologies built into BizTalk.

Figure 8. End-to-end message-exchange pattern

Now that these applications are exposed as Web services, any technology that can accept Web services XML can integrate with these applications. This removes the tight coupling of other technology protocols that would limit interoperability. For example, you could just as easily use existing Java-based systems, if those were your legacy systems.

SQL Server is used here to store application data in the database layer. Because the core focus of this paper is integration and composite applications, we will not highlight this.

The third-party services referenced are external services that are called by the Fulfillment service. These services have varying protocol needs. However, this paper will show how WS-* standards can provide increased functionality for your services. It is important to note that many of the real-world insurance third-party services only support XML-based communications, not the more advanced SOAP-based Web services. The messaging-architecture sections that follow will have more on the third-party services.
Insurance Carrier Messaging Architecture

This section walks through a basic life policy that is processed by the insurance carrier. Based on the information Robert supplied, the business rules and heuristic logic defined in the underwriting process determine that an Attending Physician Statement, or APS (that is, a physical), is also required.

Because another provider must fulfill this request, the Order system builds an ACORD XML TransType 121 General Requirements Order Request transaction (TXLifeRequest) and transmits it to a secondary external ordering system for Robert's physician (the APS system). This message also contains the MTOM/XOP attachment of Robert's signature that was originally carried on the ACORD 103 New Business Submission, authorizing his physician to release his medical information to the insurance company.

At some point, Robert's physician will process the APS order by verifying that Robert's signature matches the one he has on file, and then examining Robert's medical history, filling in the necessary information required on the APS report.

After the physician has completed the APS report, an 1122 General Requirements Status/Results Transmittal message is generated and transmitted back to the endpoint reference carried in the WS-Addressing ReplyTo header of the original ACORD 121 request. This message is also delivered reliably using WS-ReliableMessaging.
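If you are implementing this exchange with WCF, the ReplyTo endpoint reference can be read from the incoming message headers. The following is a minimal sketch with illustrative names; the service contract and message types are omitted.

Imports System.ServiceModel

Public Class ApsOrderService
    ' Called when the ACORD 121 General Requirements Order Request arrives.
    Public Sub ProcessOrderRequest()
        ' Capture the WS-Addressing ReplyTo header from the incoming request so that
        ' the 1122 Status/Results Transmittal can later be sent to that endpoint.
        Dim replyTo As EndpointAddress = _
            OperationContext.Current.IncomingMessageHeaders.ReplyTo
    End Sub
End Class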

The rest of the business process runs, including any automated-actuary decision. However, in this case, because there is an APS and possibly some additional information that cannot be processed automatically, the case is flagged for an underwriter's review and approval.

Figure 9. Underwriting process message-exchange pattern
Fulfillment Service

In the insurance industry, a fulfillment system or service is very different from the process of fulfillment:

* Fulfillment system: A system or service that receives a request and fulfills it. Think of a fulfillment service as an integration component for gathering data. In this scenario, the fulfillment system is responsible for pulling the various reports from third-party providers.
* Fulfillment process: The process in which a policy is issued by the insurance carrier.

You might ask why we kept the fulfillment service. For this scenario, we are assuming that systems such as these are purchased as black-box solutions. This is not to say that you could not remove these layers and incorporate them into a message bus, instead.

Choosing a messaging pattern is not as clear-cut as choosing one set of standards. When designing this part of the solution, we had to take a step back and look at the business and legacy aspects of each individual transaction.

Some transactions, such as receiving a credit report, were easier decisions. However, other transactions, such as pulling an APS report, required the ability to contain attachments.

Here are some aspects to consider when designing your messaging:

* Understand the business process. It is critical to understand how the business uses these messages (for example, securing data). If the data being sent is not sensitive, you do not need to take extensive security precautions for the message.
* Understand how the transactions are consumed from the service providers. This can be both internal and external. Many times, when relying on service providers, there are technical limitations. These can range from standards support to hours of operation.
* Give proper attention to security. This is often overlooked. Protocol-level security, such as SSL/TLS, often is sufficient, but not always. Make sure you evaluate the sensitivity of the data and review the message paths to determine how many endpoints there are before the ultimate consumer.
* Be realistic and pragmatic. When designing these services, do not go overboard trying to use every standard. Do not force a standard into a message, if it does not belong. This will only introduce unneeded complexity.

Figure 10. Fulfillment system message-exchange pattern
What Is the Value?

We talked quite a bit about the Microsoft platform and development technologies by walking through the scenario. We also highlighted architecture decisions. But what we did not do is highlight the features of these Microsoft technologies.

The following are the core benefits of using Microsoft technologies in the insurance industry:

* Business-process automation—Business processes are complex and specific to each carrier. With the orchestration tools provided in BizTalk, orchestrations can be developed by business analysts, removing the developer from this process and enabling the business.
* Reduction of integration code—With the custom adapters in BizTalk and the unified programming model of WCF, the code required to integrate systems is drastically reduced.
* Alignment with standards—WCF and BizTalk are based on open XML standards out of the box. No more custom coding to incorporate Web services standards.
* Productivity—With an integrated Visual Studio IDE and .NET 3.0 technologies, both the tools and the development language provide substantial productivity gains over other languages.

Conclusion

As demonstrated in this white paper, using protocol-level standards alone is not enough; capturing the business side of the messaging transactions is key to making interoperability work for your business. This is true across all industries, not just insurance.

We have Web services standards, but that is not enough. There is still a level of due diligence required to make the optimal technology decisions for your organization. With this specific reference implementation, we walk through a real-world scenario and determine the optimal messaging approach given that scenario's business forces. This can serve as a guide to help you choose message-exchange patterns in your enterprise. With all architectures, there are trade-offs when choosing specific standards. It is important to understand these trade-offs and be willing to accept the resulting implications.

Microsoft is committed to making the job of architecting and developing service-oriented solutions easier for its customers, and we have shown here that Microsoft has removed many of the industry barriers and complexities that daunt customers today. These range from providing thought leadership in industry standards to automating and building out-of-the-box Web services support.


Wednesday, 23 May 2007

Deploying Microsoft .NET Framework Version 3.0

Deploying Microsoft .NET Framework Version 3.0

Annie Wang
Microsoft Corporation

June 2006

Applies to
Microsoft .NET Framework version 3.0 (formerly known as WinFX)
Microsoft .NET Framework 2.0
Microsoft Windows Vista

Summary: The Microsoft .NET Framework 3.0 (formerly known as WinFX) is the new managed-code programming model for Windows. Applications written for the .NET Framework 3.0 require the .NET Framework 3.0 to be installed on the computer where the application runs. This article briefly describes the .NET Framework 3.0 and its relationship to earlier versions of the .NET Framework, provides information on installing and deploying the Framework, and describes how to detect whether the .NET Framework 3.0 is installed. (18 printed pages)


Introduction

This section provides an overview of the .NET Framework 3.0.

About Microsoft .NET Framework 3.0

The Microsoft .NET Framework version 3.0 (formerly known as WinFX) is the new managed-code programming model for Windows. It combines the power of the .NET Framework 2.0 with new technologies for building applications that have a visually compelling user experience, seamless communication across technology boundaries, and support for a wide range of business processes. Microsoft plans to ship the .NET Framework 3.0 as part of Windows Vista. At the same time, Microsoft will make the .NET Framework 3.0 available for Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1.

The following list describes some of the technologies included with the .NET Framework 3.0:

* Windows Presentation Foundation (WPF, formerly code-named "Avalon"): Provides classes for building next-generation smart-client applications that combine UI, documents, and media.
* Windows Communication Foundation (WCF, formerly code-named "Indigo"): Offers a unified programming model and runtime for building service-oriented applications.
* Windows Workflow Foundation (WF): Offers a programming model, engine, and tools to build workflow-enabled applications that model business processes.
* Windows CardSpace (formerly code-named "InfoCard"): Provides a technology for simplifying and improving the safety of working online with personal identity information.
* Microsoft .NET Framework 2.0: Enables productively building Windows and Web-based applications.

All of the classes that represent the new components (WPF, WF, WCF, and CardSpace) are part of the System namespace. The core classes of the .NET platform, such as the common language runtime (CLR) and base class libraries (BCL), remain as they are in the .NET Framework 2.0.

The following diagram illustrates the structure of .NET Framework 3.0.

Figure 1. .NET Framework 3.0

How .NET Framework 3.0 Relates to .NET Framework 2.0 and Earlier

The .NET Framework 3.0 adds new technologies to the .NET Framework 2.0, which makes the .NET Framework 3.0 a superset of the .NET Framework 2.0. You can think of .NET Framework 3.0 as an "additive" release to the .NET Framework 2.0, as contrasted with a generational release where software is revised across the board. (For example, the .NET Framework 2.0 was a generational release over the .NET Framework 1.0.)

Because the .NET Framework 3.0 is an additive release and uses the core run-time components from the .NET Framework 2.0, it is completely backward compatible with the earlier version. Your existing .NET Framework 2.0-based applications will continue to run without any modifications, and you can safely continue your investments in the technologies that shipped with the .NET Framework 2.0.

If you are moving to the .NET Framework 3.0 from the .NET Framework 1.1 or 1.0, you should perform impact analysis and run compatibility testing prior to deployment. While we have worked to make .NET Framework releases compatible, there are a small number of known incompatibilities due to security changes and significant functionality additions. For more information, see the page Breaking Changes in .NET Framework 2.0 on the Microsoft .NET Developer Center Web site.

Servicing Policy for the .NET Framework 3.0

Microsoft will continue to service the .NET Framework 2.0 release in accordance with the standard support policy for the platforms on which it is supported. Users who currently rely on the .NET Framework 2.0 have the option of remaining on that version of the .NET Framework and receiving software updates as they become available.

Any component that ships as part of .NET Framework 3.0 will be serviced on the platforms it is supported on. For more information, see the Software Update Technology page on the Microsoft Visual Studio Developer Center Web site.

Roadmap for Future .NET Framework Releases

In general, any new version of the .NET Framework is designed to provide backward compatibility with the previous version. If a new release introduces breaking changes due to security issues or other reasons, Microsoft will enable you to install the new release side by side with the existing version.

For more information about future releases of the .NET Framework, see the Microsoft .NET Framework Developer Center Web site.

Installing the .NET Framework 3.0

The .NET Framework 3.0 is installed by default on Microsoft Windows Vista. On Microsoft Windows Server code-named "Longhorn", you can install the .NET Framework as a Windows Feature using Roles Management tools.

On Windows XP and Windows Server 2003, installing .NET Framework 3.0 also adds any .NET Framework 2.0 components that are not already installed. If .NET Framework 2.0 is already installed, the .NET Framework 3.0 installer adds only the files for Windows Presentation Foundation (WPF), Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), and Windows CardSpace.

Components shared with .NET Framework 2.0 are installed in the following location:

%windir%\Microsoft.NET\Framework\V2.0.50727

Components that are new to .NET Framework 3.0 are installed in the following location:

%windir%\Microsoft.NET\Framework\V3.0

All components of the .NET Framework 3.0 reference assemblies are installed in the following location:

%programfiles%\Reference Assemblies\Microsoft\Framework\v3.0

Uninstalling .NET Framework 3.0 will not remove the components shared with .NET Framework 2.0. To remove those components, you must first uninstall .NET Framework 3.0 and then separately uninstall .NET Framework 2.0. (You can uninstall the .NET Framework using the Add or Remove Programs item in Windows Control Panel.)

Version Numbers for .NET Framework Assemblies

The .NET Framework 3.0 shares many components with .NET Framework 2.0, and the common language runtime (CLR) and base class libraries are the same as those in .NET Framework 2.0. Therefore, these shared components stay at version 2.0. The version number 3.0 applies to all runtime and reference assemblies for Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF), Windows Workflow Foundation (WF), and Windows CardSpace.
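One practical consequence is that the runtime itself still identifies as version 2.0. For example, checking Environment.Version on a machine with the .NET Framework 3.0 installed still reports a 2.0.50727 build, which is why the registry-based detection described later in this article is the reliable way to test for 3.0. A quick illustration in VB.NET:

Imports System

Module ShowClrVersion
    Sub Main()
        ' Even with the .NET Framework 3.0 installed, this prints a 2.0.50727.x version,
        ' because the CLR that ships with 3.0 is the same 2.0 runtime.
        Console.WriteLine("CLR version: " & Environment.Version.ToString())
    End Sub
End Module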

Deploying .NET Framework 3.0

This section provides information about deploying the .NET Framework 3.0 for use with your applications.

Software Requirements

To install .NET Framework 3.0, you must have one of the following operating systems installed on the target computer:

  • Microsoft Windows XP Home Edition or Microsoft Windows XP Professional, with Service Pack 2 or later.
  • Microsoft Windows Server 2003 family with Service Pack 1 or later.
Note .NET Framework 2.0 continues to be supported on its target platforms. For more information, see the .NET Framework 2.0 Redistributable Prerequisites page on the MSDN Web site.

.NET Framework 3.0 is installed by default with Microsoft Windows Vista. On Microsoft Windows Server "Longhorn", the .NET Framework 3.0 is a Windows feature that can be installed using Roles Management tools.

Note Microsoft Windows Server "Longhorn" IA64 Edition is the only IA64 platform that the .NET Framework 3.0 supports.

Hardware Requirements

The following are the hardware requirements for running the .NET Framework 3.0:

* Minimum: Pentium 400-MHz CPU, 96 MB of RAM
* Recommended: Pentium 1-GHz or faster CPU, 256 MB or more of RAM

Redistribution Rights for the .NET Framework

Microsoft strongly supports customers in deploying the .NET Framework within their organizations and as part of their software solutions. Distributing the .NET Framework 3.0 runtime requires you to accept the license terms. For information about redistributing the .NET Framework 3.0 with your application or to a third party, review the page The ISV Guide for Redistributing the .NET Framework and Other Runtime Components on the MSDN Web site.

Note Redistribution rights apply only to the officially released version of the Microsoft .NET Framework 3.0. You may not redistribute pre-release versions of the Microsoft .NET Framework 3.0 with your application.

IT Administrator Tools for Deploying the .NET Framework 3.0

The .NET Framework 3.0 offers two ways for IT administrators to deploy it to client computers: administrator-mode setup and Active Directory deployment.

Administrator-mode Setup

Administrator-mode setup enables IT administrators to deploy the .NET Framework through Microsoft Systems Management Server (SMS) or other software distribution tools. The IT administrator runs the Framework setup in silent mode. If errors occur, setup quits silently and logs an error code.

Active Directory Deployment

In Active Directory deployment, the administrator must add individual .msi files from the .NET Framework 3.0 installation package into the group policy in the order in which the .msi files should be deployed. After the group policy is enabled, any clients that are part of this group policy will automatically install the components when they boot and reconnect to the network. If errors occur, setup quits silently and logs an error code.

For more information about administrative deployment instructions, see the Administrators Deployment Guide Web page.

Redistributing the .NET Framework with Your Application

The .NET Framework 3.0 redistributable package is available as a stand-alone executable file. The name of the file depends on the type of targeted platform. For 32-bit platforms, the file is named Dotnetfx3.exe. For 64-bit platforms, the file is named Dotnetfx3_x64.exe. You can also deploy a small redistributable package named Dotnet3setup.exe that downloads the required components during setup.

When you distribute the .NET Framework 3.0 redistributable package with your application, you must agree to the license terms, which give you specific distribution rights.

You can manually launch and install the redistributable on a computer, or it can be launched and installed as part of the setup program for a .NET Framework 3.0 application.

Note Administrator privileges are required to install the .NET Framework 3.0.

For more information, see the Microsoft .NET Framework 3.0 Deployment Guide Web page.

Detecting .NET Framework 3.0 and Earlier Releases

You can detect whether the .NET Framework 3.0 is installed either by reading a registry key or by querying the user-agent string in Internet Explorer.

Reading a Registry Key

You can look for a specified registry key value to detect whether the .NET Framework is installed on a computer. The following registry keys and values indicate whether specific versions of the .NET Framework are installed:

* Version 3.0: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup, value InstallSuccess (DWORD value = 1)
* Version 2.0: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727, value Install (DWORD value = 1)
* Version 1.1: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322, value Install (DWORD value = 1)
* Version 1.0: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\policy\v1.0, value 3705 (REG_SZ value = 3321-3705)
Note For more information about detecting previously released service packs for .NET Framework 1.0 and 1.1, see article 318785, "How to determine which versions of the .NET Framework are installed and whether service packs have been applied" in the Microsoft Knowledge Base.
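For example, a small VB.NET console program could perform the 3.0 check by reading the InstallSuccess value listed above. This is a minimal sketch; add error handling as appropriate for your deployment scripts.

Imports System
Imports Microsoft.Win32

Module DetectFx3
    Sub Main()
        ' Open the .NET Framework 3.0 setup key listed above (read-only).
        Dim key As RegistryKey = _
            Registry.LocalMachine.OpenSubKey("SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup")

        ' InstallSuccess = 1 indicates that the .NET Framework 3.0 installed successfully.
        If key IsNot Nothing AndAlso CInt(key.GetValue("InstallSuccess", 0)) = 1 Then
            Console.WriteLine(".NET Framework 3.0 is installed.")
        Else
            Console.WriteLine(".NET Framework 3.0 was not detected.")
        End If

        If key IsNot Nothing Then key.Close()
    End Sub
End Module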

Reading the User-Agent String in Internet Explorer

For browser-based applications, you can detect whether the .NET Framework 3.0 is installed on a computer by examining the user-agent string that Internet Explorer sends from that computer. The string will contain the substring ".NET CLR" followed by the major and minor version numbers. A sample user-agent string looks like the following:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322;
.NET CLR 2.0.50727; .NET CLR 3.0.04131.06)

Appendix B: Sample Script for Detecting the .NET Framework 3.0 Using Internet Explorer lists a sample JavaScript program that runs in a browser and displays information about the current .NET Framework version number.
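As a server-side variation on the same idea, an ASP.NET page could inspect the header that the browser sends. The following sketch is illustrative only (the page class name is hypothetical) and mirrors the check that the appendix script performs in the browser.

Imports System
Imports System.Web.UI

Public Class DetectFx3Page
    Inherits Page

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        ' Look for the same ".NET CLR 3.0" token shown in the sample string above.
        Dim userAgent As String = Request.UserAgent

        If userAgent IsNot Nothing AndAlso userAgent.Contains(".NET CLR 3.0") Then
            Response.Write("This computer has the .NET Framework 3.0 installed.")
        Else
            Response.Write("This computer does not appear to have the .NET Framework 3.0 installed.")
        End If
    End Sub
End Class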

The user-agent string that is sent in browser headers is stored in the registry of the client computer, as shown in the following list.

* Version 3.0: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform\.NET CLR 3.0.04131.06
  Note The build number (04131.06 in the example) will change for the final release of .NET Framework 3.0.
* Version 2.0: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform\.NET CLR 2.0.50727
* Version 1.1: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform\.NET CLR 1.1.4322

Command Line Options for the .NET Framework 3.0 Redistributable

The following table lists options that you can include when you run the .NET Framework 3.0 Redistributable installation program (Dotnetfx3.exe, Dotnetfx3_x64.exe, or Dotnet3setup.exe) from the command line.

Option Description
/q
Suppresses all UI. An .INI file cannot be specified with this option.
/quiet
Same as /q.
/qb
Displays minimal UI, showing only progress.
/passive
Same as /qb.
/uninstall
Uninstalls product.
/remove
Same as /uninstall.
/f
Repairs all .NET Framework 3.0 components that are installed.
/nopatch
Specifies that patches are not applied and bypasses patch checking.
/norollback
Specifies that setup is not rolled back if a setup component fails.
/norestart
Specifies that the installer does not restart the computer after installation completes. The Redistributable installer returns ERROR_SUCCESS_REBOOT_REQUIRED (3010) if a reboot is required.
/?
Displays this list of options.
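For example, an administrator performing an unattended deployment could combine the silent and no-restart options from the table above:

Dotnetfx3.exe /q /norestart

Setup then returns ERROR_SUCCESS_REBOOT_REQUIRED (3010) if a restart is still needed to complete the installation.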

Error Codes for the .NET Framework 3.0 Redistributable

The following table lists error codes that can be returned by the .NET Framework 3.0 Redistributable installation program (Dotnetfx3.exe, Dotnetfx3_x64.exe, or Dotnet3setup.exe). The error codes are the same for all versions of the installer.

Error Code Value Description
ERROR_SUCCESS
0 The action completed successfully.
ERROR_INSTALL_USEREXIT
1602 User canceled installation.
ERROR_INSTALL_FAILURE
1603 A fatal error occurred during installation.
ERROR_UNKNOWN_PRODUCT
1605 This action is valid only for products that are currently installed.
ERROR_PATCH_PACKAGE_INVALID
1636 The patch package could not be opened or the patch was not applicable to .NET Framework 3.0.
ERROR_INVALID_COMMAND_LINE
1639 Invalid command line argument.
ERROR_SUCCESS_REBOOT_INITIATED
1641 The installer has initiated a restart. This indicates success, and setup will continue after restart. (The reboot is not performed if the /norestart option was specified.)
ERROR_PATCH_PACKAGE_REJECTED
1643 The patch package is not permitted by system policy.
ERROR_SUCCESS_REBOOT_REQUIRED
3010 A restart is required to complete the installation. This message indicates success.

Appendix A: Detecting .NET Framework Language Packs

The following table lists the registry values you can read to detect whether a .NET Framework language pack is installed on a computer. For more information on how to detect localized versions of the .NET Framework 1.0, see the page .NET Framework Redistributable Package Technical Reference on the MSDN Web site.

Product Registry Value
Chinese (Simplified) Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\2052

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\2052

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\2052

InstallSuccess(DWORD value = 1)

Chinese (Traditional) Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1028

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1028

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1028

InstallSuccess(DWORD value = 1)

Czech Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1029

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1029

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1029

InstallSuccess(DWORD value = 1)

Danish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1030

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1030

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1030

InstallSuccess(DWORD value = 1)

Dutch Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1043

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1043

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1043

InstallSuccess(DWORD value = 1)

Finnish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1035

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1035

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1035

InstallSuccess(DWORD value = 1)

French Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1036

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1036

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1036

InstallSuccess (DWORD value = 1)

German Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1031

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1031

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1031

InstallSuccess(DWORD value = 1)

Greek Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1032

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1032

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1032

InstallSuccess(DWORD value = 1)

Italian Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1040

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1040

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1040

InstallSuccess(DWORD value = 1)

Japanese Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1041

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1041

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1041

InstallSuccess(DWORD value = 1)

Korean Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1042

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1042

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1042

InstallSuccess(DWORD value = 1)

Norwegian Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1044

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1044

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1044

InstallSuccess(DWORD value = 1)

Polish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1045

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1045

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1045

InstallSuccess(DWORD value = 1)

Portuguese (Brazilian) Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1046

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1046

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1046

InstallSuccess(DWORD value = 1)

Portuguese (Portugal) Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\2070

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\2070

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\2070

InstallSuccess(DWORD value = 1)

Russian Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1049

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1049

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1049

InstallSuccess(DWORD value = 1)

Spanish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\3082

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\3082

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\3082

InstallSuccess(DWORD value = 1)

Swedish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1053

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1053

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1053

InstallSuccess(DWORD value = 1)

Turkish Language Pack Version 1.1

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322\1055

Version 2.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727\1055

Install (DWORD value = 1)

Version 3.0

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0\Setup\1055

InstallSuccess(DWORD value = 1)

Appendix B: Sample Script for Detecting the .NET Framework 3.0 Using Internet Explorer

The following example shows a JavaScript program that runs in a browser and detects whether the .NET Framework 3.0 is installed. The script searches the user-agent string and displays a status message based on the results of the search.

[Sample HTML page titled "Test for NET Framework 3.0"; the script listing checks the user-agent string for the .NET CLR 3.0 version token.]
If the search for the .NET Framework 3.0 version string is successful, the following message appears:

This computer has the correct version of the .NET Framework: 3.0.04131.06.

This computer's userAgent string is: Mozilla/4.0 (compatible; MSIE 6.0;
Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04131.06).

Otherwise, the following message appears:

This computer does not have the correct version of the .NET Framework.

Click here to get .NET Framework 3.0 now.

This computer's userAgent string is: Mozilla/4.0 (compatible; MSIE 6.0;
Windows NT 5.1; SV1; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727).