
Data Layer Design

Architecture Guidance
    Data Layer Guidelines
        Data Layer
        Design Considerations
        Specific Design Considerations
        Design Patterns
    Designing Data Components
        Choose Data Access Technology
        Choose How to Retrieve and Persist Business Entities
        Determine How to Connect to the Data Source
        Determine Strategies for Handling Data Source Errors
        Design Service Agents
    Data Access Technology Matrix
        Data Access Technologies
        Object-Relational Data Access
        Disconnected and Offline Data Access
        SOA and Service Scenarios
        N-tier and General Scenarios
    Designing Business Entities
        Choose the Representation
        Choose a Design for Business Entities
        Determine Serialization Support
        Domain Driven Design
Entity Framework
    Anti-Patterns To Avoid In N-Tier Applications
        Understanding N-Tier
        Custom Service or RESTful Service?
        Anti-Pattern #1: Tight Coupling
        Anti-Pattern #2: Assuming Static Requirements
        Anti-Pattern #3: Mishandled Concurrency
        Anti-Pattern #4: Stateful Services
        Anti-Pattern #5: Two Tiers Pretending to be Three
        Anti-Pattern #6: Undervaluing Simplicity
    N-Tier Application Patterns
        Change Set
        DTOs
        Simple Entities
        Self-Tracking Entities
        Implementing N-Tier with the Entity Framework
        Concurrency Tokens
        Serialization
        Working with the ObjectStateManager
        Patterns Other Than Simple Entities in .NET 3.5 SP1
        API Improvements in .NET 4
    Building N-Tier Apps with EF4
        Self-Tracking Entities
        Data Transfer Objects
        Tips
        Conclusion
Figures
    Data Access Technologies
        Current Data Technologies
        Native Data Technologies
        WCF Data Services
        Future Data Technologies
    Entity Framework
        Comparing N-Tier Patterns with EF4
References
    Book References
        Patterns of Enterprise Application Architecture
    Web References
        MSDN
            MSDN Patterns & Practices
        InfoQ
        Blogs
Architecture Guidance
Data Layer Guidelines

Data Layer

• Data access components
• Data helpers/utilities: common data access logic, sometimes provided by a data access framework (Object/Relational Mapping).
• Service agents

Design Considerations

• Choose an appropriate data access technology (see the Data Access Technology Matrix).
• Implement a loosely coupled interface to the data access layer with interface components such as gateways, interface types, or abstract base classes.
• Encapsulate data access functionality; the data access layer hides the details of data source access.
• Decide how to map application entities to data source structures. Common design
approaches follow the Domain Model or Table Module patterns or use Object/
Relational Mapping (O/RM) frameworks. Identify a strategy for populating business
entities from the data source, making them available to the business layer of the
application (Designing Business Entities).
• Consider consolidating data structures. If you are exposing data through services,
consider using Data Transfer Objects (DTOs) to help you organize the data into
unified structures. In addition, DTOs encourage coarse-grained operations while
providing a structure designed to move data across different boundary layers. DTOs
can span business entities for aggregate operations. If you are using the Table Data
Gateway or Active Record pattern, you may consider using a DataTable to represent
the data.
• Decide how you will manage connections. The data access layer creates and
manages all connections to all data sources required by the application. Choose an
appropriate method for storing and protecting connection information (encrypting
sections of the configuration file, limiting storage of configuration information to the
server).
• Determine how you will handle data exceptions. The data access layer catches and
(at least initially) handles all exceptions associated with data sources and CRUD
(Create, Read, Update, and Delete) operations. Exceptions concerning the data
itself, and data source access and timeout errors, are passed to other layers only if
the failures affect application responsiveness or functionality.
• Consider security risks. The data access layer protects against attacks that try to
steal or corrupt data, and protects the mechanisms used to gain access to the data
source. Security should be implemented in the data access layer as well as in the
data source. Database access should be through parameterized queries to prevent
SQL injection attacks from succeeding.
• Reduce round trips. Consider batching commands into a single database operation.
• Consider performance and scalability objectives.
• Consider accessing the functionality and data provided by service agents only
through data access components. This provides a consistent data access interface
regardless of the data source.
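Two of the considerations above, parameterized queries and batched commands, can be demonstrated with any parameterized data API. A minimal sketch using Python's sqlite3 module (the table and data are invented for illustration; the same principle applies to ADO.NET command parameters):

```python
import sqlite3

# In-memory database stands in for the application's data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Batching: executemany sends one statement with many parameter sets,
# reducing round trips compared to one INSERT per row.
rows = [(1, "Ada"), (2, "Grace"), (3, "Edsger")]
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)

# Parameterized query: user input is bound as a value, never spliced
# into the SQL text, so an injection attempt is treated as literal data.
user_input = "Ada' OR '1'='1"
cursor = conn.execute("SELECT id FROM customers WHERE name = ?", (user_input,))
matches = cursor.fetchall()   # empty: the malicious string matched no name
```

Because the parameter is bound rather than concatenated, the classic `' OR '1'='1` payload returns no rows instead of every row.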

Specific Design Considerations

• Batching, Binary Large Objects (BLOBs), Connections, Data Format, Exception Management
• Object Relational Mapping
◦ Handle O/R mismatches using design patterns such as Repository or O/RM
tools such as the ADO.NET Entity Framework.
◦ A Domain Driven Design approach, which is based on modeling entities
based on objects within a domain, is often an appropriate choice.
◦ For stateless services, group entities and support options that will partially
load domain entities with only the required data (lazy loading). This allows
applications to handle the higher user load required to support stateless
operations, and limit the use of resources by avoiding holding initialized
domain models for each user in memory.
• Queries, Stored Procedures, Stored Procedures vs. Dynamic SQL, Transactions,
Validation, XML
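The Repository pattern mentioned above isolates business code from O/R concerns behind an interface. A minimal, language-agnostic sketch in Python (class names are invented for illustration; a real implementation would map to tables via an O/RM):

```python
from abc import ABC, abstractmethod

class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

class CustomerRepository(ABC):
    """Abstracts persistence so business code never sees the data source."""
    @abstractmethod
    def get(self, customer_id): ...
    @abstractmethod
    def add(self, customer): ...

class InMemoryCustomerRepository(CustomerRepository):
    # A dict keeps the sketch self-contained; swapping in a
    # database-backed repository requires no change to callers.
    def __init__(self):
        self._store = {}
    def get(self, customer_id):
        return self._store.get(customer_id)
    def add(self, customer):
        self._store[customer.customer_id] = customer

repo: CustomerRepository = InMemoryCustomerRepository()
repo.add(Customer(1, "Ada"))
found = repo.get(1)
```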

Design Patterns

• General: Active Record, Data Mapper, Data Transfer Object, Domain Model, Query
Object, Repository, Row Data Gateway, Table Data Gateway, Table Module
• Batching: Parallel Processing, Partitioning
• Transactions: Capture Transaction Details, Coarse-Grained Lock, Implicit Lock,
Optimistic Offline Lock, Pessimistic Offline Lock, Transaction Script
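Among the transaction patterns, the Optimistic Offline Lock deserves a concrete illustration: each record carries a version that must match at write time, so conflicting writers are detected instead of silently overwriting each other. A minimal in-memory sketch in Python (all names are invented):

```python
class StaleUpdateError(Exception):
    """Raised when another writer changed the row since it was read."""

class VersionedStore:
    # Each record carries a version number; an update succeeds only if
    # the caller still holds the version currently in the store.
    def __init__(self):
        self._rows = {}   # key -> (value, version)

    def put_new(self, key, value):
        self._rows[key] = (value, 1)

    def read(self, key):
        value, version = self._rows[key]
        return value, version

    def update(self, key, new_value, expected_version):
        _, current = self._rows[key]
        if current != expected_version:
            raise StaleUpdateError(key)
        self._rows[key] = (new_value, current + 1)

store = VersionedStore()
store.put_new("order-1", "pending")
value, v = store.read("order-1")       # both writers read version 1
store.update("order-1", "shipped", v)  # first writer wins, version -> 2
try:
    store.update("order-1", "cancelled", v)  # second writer is stale
    conflict = False
except StaleUpdateError:
    conflict = True
```

In a database this is typically a version or timestamp column checked in the UPDATE's WHERE clause.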

Designing Data Components

Choose Data Access Technology

• Choice is determined by the type of data and how data must be manipulated within
the application.
• Consider using the ADO.NET Entity Framework (EF) if you want to create a data
model and map it to a relational database, with the flexibility of separating the
mapping schema from the object model. If using EF, also consider using LINQ to
Entities, which allows queries over strongly typed entities.
• Consider using WCF Data Services (formerly known as ADO.NET Data Services) if
developing a RIA or an n-tier rich client application, wanting to access data through
a resource-centric service interface. WCF Data Services is built on top of EF and
allows you to expose parts of an Entity Model through a REST interface.
• Consider using ADO.NET Core if you need a low-level API for full control over data
access or if building an application that must support a disconnected data access
experience.
• Consider using ADO.NET Sync Services if designing an application that must support
occasionally connected scenarios, or requires collaboration between databases.
• Consider using LINQ to XML if using XML data in the application, and wanting to
execute queries using the LINQ syntax.
Choose How to Retrieve and Persist Business Entities

• Choose a strategy for populating business entities from the data store and for
persisting them back to the data store. An impedance mismatch exists between an
object-oriented data model and the relational data store. The most common
approaches use O/RM tools and frameworks to handle this mismatch.
• Consider using an O/RM framework that translates between domain entities and the
database. In a greenfield environment, use an O/RM tool to generate a schema to
support the object model and provide a mapping between the database and domain
entities. In a brownfield environment with an existing database schema, use an
O/RM tool for mapping between the domain model and relational model.
• A common pattern is domain driven design, based on modeling entities on objects
within a domain (see Designing Business Entities).
• Ensure that entities are grouped correctly to achieve a high level of cohesion. This
may require additional objects within the domain model, and related entities should
be grouped into aggregate roots.
• When working with Web applications or services, group entities and provide options
for partially loading domain entities with only the required data. This minimizes the
use of resources by avoiding holding initialized domain models for each user in
memory, and allows applications to handle higher user load.
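The partial-loading advice above amounts to deferring expensive relationships until they are first accessed. A minimal lazy-loading sketch in Python (names are invented; an O/RM would normally generate this plumbing):

```python
class LazyOrders:
    """Placeholder that runs the real load only on first access."""
    def __init__(self, loader):
        self._loader = loader
        self._cache = None
        self.load_count = 0

    def get(self):
        if self._cache is None:
            self._cache = self._loader()
            self.load_count += 1
        return self._cache

def load_orders_from_db():
    # Stands in for a query executed only when the data is needed.
    return ["order-1", "order-2"]

class Customer:
    def __init__(self, name):
        self.name = name                                  # loaded eagerly
        self._orders = LazyOrders(load_orders_from_db)    # loaded on demand

    @property
    def orders(self):
        return self._orders.get()

customer = Customer("Ada")   # constructing the entity runs no order query
first = customer.orders      # first access triggers the load
second = customer.orders     # later accesses are served from the cache
```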

Determine How to Connect to the Data Source

• Identify how to connect to the data source, protect user credentials, and perform
transactions.
• Connections, Connection Pooling, Transactions and Concurrency

Determine Strategies for Handling Data Source Errors

• Design an overall strategy to handle data source errors. All exceptions associated
with data sources should be caught by the data access layer. Exceptions concerning
the data itself, and data source access and timeout errors, should be handled in this
layer and passed to other layers only if the failures affect application responsiveness
or functionality.
• Exceptions, Retry Logic, Timeouts
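Retry logic for transient errors such as timeouts can be centralized in the data access layer. A minimal sketch in Python with exponential backoff (the exception type and delays are illustrative):

```python
import time

class TransientDataSourceError(Exception):
    """Stands in for a recoverable error such as a timeout or deadlock."""

def with_retries(operation, attempts=3, base_delay=0.01):
    # Retries transient failures with exponential backoff; on the final
    # failed attempt the exception propagates to the caller.
    for attempt in range(attempts):
        try:
            return operation()
        except TransientDataSourceError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"count": 0}

def flaky_query():
    # Fails twice, then succeeds, simulating an intermittent data source.
    calls["count"] += 1
    if calls["count"] < 3:
        raise TransientDataSourceError()
    return "rows"

result = with_retries(flaky_query)
```

Only errors known to be transient should be retried; data errors such as constraint violations should fail fast.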

Design Service Agents

• Service agents manage the semantics of communicating with external services,
providing additional services such as basic mapping between the format of the data
exposed by the service and the format the application requires, caching, and offline
or intermittent connection support.
• Adding a service reference generates a proxy and the data classes that represent
the data contract from the service.
• For most applications, the service agent acts as an abstraction layer between the
business layer and the remote service, and can provide a consistent interface
regardless of the data format. In smaller applications, the presentation layer may
access the service agent directly.
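A service agent's mapping and caching responsibilities can be sketched in a few lines. The following Python sketch stands in for a generated proxy and an application-side model (all names and the price format are invented for illustration):

```python
class RemoteCatalogService:
    """Stands in for a generated service proxy; returns wire-format dicts."""
    def __init__(self):
        self.call_count = 0
    def get_product(self, product_id):
        self.call_count += 1
        return {"ProductId": product_id, "UnitPriceCents": 1999}

class Product:
    def __init__(self, product_id, price):
        self.product_id = product_id
        self.price = price   # application format: currency units, not cents

class CatalogServiceAgent:
    # Maps the service's data contract to the application's model and
    # caches results so repeated lookups avoid remote calls.
    def __init__(self, proxy):
        self._proxy = proxy
        self._cache = {}
    def get_product(self, product_id):
        if product_id not in self._cache:
            dto = self._proxy.get_product(product_id)
            self._cache[product_id] = Product(
                dto["ProductId"], dto["UnitPriceCents"] / 100)
        return self._cache[product_id]

proxy = RemoteCatalogService()
agent = CatalogServiceAgent(proxy)
p1 = agent.get_product(7)
p2 = agent.get_product(7)   # second lookup served from the cache
```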
Data Access Technology Matrix

Data Access Technologies

• ADO.NET Core provides facilities for the general retrieval, update, and management
of data. It includes providers for SQL Server, OLE DB, ODBC, SQL Server CE, and
Oracle databases.
• ADO.NET Data Services Framework exposes data using the Entity Data Model,
through RESTful Web services accessed over HTTP. The data can be addressed
directly using URIs. The Web service can be configured to return the data in either
plain Atom or JavaScript Object Notation (JSON) format.
• ADO.NET Entity Framework provides a strongly typed data access experience over
relational databases. It moves the data model from the physical structure of
relational tables to a conceptual model that accurately reflects common business
objects. It introduces a common Entity Data Model within the ADO.NET
environment, allowing developers to define a flexible mapping to relational data.
This mapping helps to isolate applications from changes in the underlying storage
schema. It also supports LINQ to Entities, which provides LINQ support for business
objects exposed through the Entity Framework. When used as an O/RM product,
developers use LINQ to Entities against business objects, which Entity Framework
will convert to Entity SQL that is mapped against an Entity Data Model managed by
the Entity Framework. Developers also have the option of working directly with the
Entity Data Model and using Entity SQL in their applications.
• ADO.NET Sync Services is a provider included in the Microsoft Sync Framework, and
is used to implement synchronization for ADO.NET-enabled databases. It enables
data synchronization to be built into occasionally connected applications. It
periodically gathers information from the client database and synchronizes it with
the server database.
• Language Integrated Query (LINQ) provides class libraries that extend C# and
Visual Basic with native language syntax for queries. It is primarily a query
technology supported by different assemblies throughout the .NET Framework.
Queries can be performed against a variety of data formats, including DataSet
(LINQ to DataSet), XML (LINQ to XML), in-memory objects (LINQ to Objects),
ADO.NET Data Services (LINQ to Data Services), and relational data (LINQ to
Entities).
• LINQ to SQL provides a lightweight, strongly typed query solution against SQL
Server. LINQ to SQL is designed for easy, fast object persistence scenarios where
the classes in the mid-tier map very closely to database table structures. Starting
with .NET Framework 4.0, LINQ to SQL scenarios will be integrated and supported
by the ADO.NET Entity Framework; however, LINQ to SQL will continue to be a
supported technology.

Object-Relational Data Access

ADO.NET Entity Framework (EF)
Benefits:
• Decouples the database structure from the logical data model.
• Entity SQL provides a consistent query language across all data sources and database types.
• Separates metadata into well-defined architectural layers.
• Allows business logic developers to access the data without knowing database specifics.
• Use of a provider model allows it to be mapped to many databases.
Considerations:
• Requires changing the design of entities and queries if coming from a more traditional data access method.
• Has more layers of abstraction than LINQ to DataSet.
• Can be used with or without LINQ to Entities.
• If the database structure changes, the Entity Data Model must be regenerated and EF libraries re-deployed.

LINQ to Entities
Benefits:
• A LINQ-based solution for relational data in the ADO.NET EF.
• Provides strongly typed LINQ access to relational data.
• Supports LINQ-based queries against objects built on top of the EF Entity Data Model.
• Processing occurs on the server.
Considerations:
• Requires the ADO.NET Entity Framework.

LINQ to SQL
Benefits:
• Simple way to read/write objects when the data object model matches the physical database model.
• Provides strongly typed LINQ query access to SQL data.
• Processing occurs on the server.
Considerations:
• Functionality integrated into the EF as of .NET Framework 4.0.
• Maps LINQ queries directly to the database instead of through a provider (only works with Microsoft SQL Server).

Disconnected and Offline Data Access

LINQ to DataSet
Benefits:
• Allows full-featured queries against a DataSet.
Considerations:
• All processing occurs on the client.

ADO.NET Sync Services
Benefits:
• Enables synchronization between databases, collaboration, and offline scenarios.
• Synchronization can execute in the background.
• Provides a hub-and-spoke type of architecture for collaboration between databases.
Considerations:
• Requires implementation of your own change tracking.
• Exchanging large chunks of data during synchronization can reduce performance.

SOA and Service Scenarios

ADO.NET Data Services Framework
Benefits:
• Data can be addressed directly via a URI using a REST-like scheme.
• Data can be returned in either Atom or JSON format.
• Includes a lightweight versioning scheme to simplify the release of new service interfaces.
• The .NET Framework, Silverlight, and AJAX client libraries allow you to work directly with objects and provide strongly typed LINQ access to ADO.NET Data Services.
• These client libraries also provide a familiar API surface to Windows Azure Tables, SQL Data Services, and other Microsoft services.
Considerations:
• Is only applicable to service-oriented scenarios.

LINQ to Data Services
Benefits:
• Allows creating LINQ-based queries against client-side data returned from ADO.NET Data Services.
• Supports LINQ-based queries against REST data.
Considerations:
• Can only be used with the ADO.NET Data Services client-side framework.

N-tier and General Scenarios

ADO.NET Core
Benefits:
• Includes .NET managed code providers for connected access to a wide range of data stores.
• Provides facilities for disconnected data storage and manipulation.
Considerations:
• Code is written directly against specific providers, thereby reducing reusability.
• The relational database structure may not match the object model, requiring creation of a data-mapping layer.

ADO.NET Data Services Framework
Benefits:
• Data can be addressed directly via a URI using a REST-like scheme.
• Data can be returned in either Atom or JSON format.
• Includes a lightweight versioning scheme to simplify the release of new service interfaces.
• The provider model allows any IQueryable data source to be used.
• The .NET Framework, Silverlight, and AJAX client libraries provide a familiar API surface to Windows Azure Tables, SQL Data Services, and other Microsoft services.
Considerations:
• Is only applicable to service-oriented scenarios.
• Provides a resource-centric service that maps well to data-heavy services, but may require more work if a majority of the services are operation-centric.

ADO.NET Entity Framework
Benefits:
• Separates metadata into well-defined architectural layers.
• Supports LINQ to Entities for querying complex object models.
• Use of a provider model allows it to be mapped to many database types.
• Allows you to build services that have well-defined boundaries, and data/service contracts for sending and receiving well-defined entities across the service boundary.
• Instances of entities from your Entity Data Model are directly serializable and consumable by Web services.
• Full flexibility in structuring the payload: send individual entities, collections of entities, or an entity graph to the server.
• Eventually will allow for true persistence-ignorant objects to be shipped across service boundaries.
Considerations:
• Requires changing the design of entities and queries if coming from a more traditional data access method.
• Entity objects can be sent across a network, or the Data Mapper pattern can be used to transform entities into objects that are more generalized DataContract types. The planned POCO support will eliminate the need to transform objects when sending them over a network.
• Building service endpoints that receive a generalized graph of entities is less service oriented than endpoints that enforce stricter contracts on the types of payload that might be accepted.

LINQ to Objects
Benefits:
• Allows you to create LINQ-based queries against objects in memory.
• Represents a new approach to retrieving data from collections.
• Can be used directly with any collections that support IEnumerable or IEnumerable<T>.
• Can be used to query strings, reflection-based metadata, and file directories.
Considerations:
• Works only with objects that implement the IEnumerable interface.

LINQ to XML
Benefits:
• Allows you to create LINQ-based queries against XML data.
• Is comparable to the Document Object Model (DOM), which brings an XML document into memory, but is much easier to use.
• Query results can be used as parameters to XElement and XAttribute object constructors.
Considerations:
• Relies heavily on generic classes.
• Is not optimized to work with untrusted XML documents, which require different security mitigation techniques.

LINQ to SQL
Benefits:
• Provides a simple technique for retrieving and updating data as objects when the object model and the database model are the same.
Considerations:
• As of .NET Framework 4.0, the Entity Framework will be the recommended data access solution for LINQ-to-relational scenarios.
• LINQ to SQL will continue to be supported and will evolve based on feedback received from the community.

Designing Business Entities

Choose the Representation

• Custom business objects are common language runtime (CLR) objects that describe
entities in your system. The objects are created manually or using an O/RM
technology. Custom business objects are appropriate if complex business rules or
behavior must be encapsulated along with the related data. If custom business
objects need to be accessed across AppDomain, process, or physical boundaries, a
service layer can be implemented that provides access via Data Transfer Objects
(DTO) and operations that update or edit your custom business objects.
• DataSets are a form of in-memory database closely mapping to the actual database
schema. DataSets are typically used when building a data-oriented application
where the data in the application logic maps very closely to the database schema.
DataSets cannot be extended to encapsulate business logic or business rules.
Although DataSets can be serialized to XML, they should not be exposed across
process or service boundaries.
• XML is used to represent business entities only if the presentation layer requires it
or if application logic must work with the content based on its schema (for example,
a message routing system routing messages based on some well-known nodes in
the XML document). Using and manipulating XML can use large amounts of
memory.

Choose a Design for Business Entities

• Domain Model is a design pattern that defines business objects representing real
world entities within the business domain. The business or domain entities contain
both behavior and structure (business rules and relationships are encapsulated
within the domain model). The domain model design requires in-depth analysis of
the business domain and typically does not map to the relational database models.
Consider using it when the business domain has complex business rules that relate
to the business domain, when designing a rich client and the domain model can be
initialized and held in memory, or when not working with a stateless business layer
that requires initialization of the domain model with every request.
• Table Module is a design pattern that defines entities based on tables or views
within a database. Operations used to access the database and populate the table
module entities are usually encapsulated within the entity, but can also be provided
by data access components. Consider using this design pattern if the tables or views
within the database closely represent the business entities, or if business logic and
operations relate to a single table or view.
• Custom XML objects represent deserialized XML data that can be manipulated within
the application code. Objects are instantiated from classes defined with attributes
that map properties within the class to elements and attributes within the XML
structure. Consider using custom XML objects if the consumed data is already in
XML format; XML data must be generated from non-XML data sources; or working
with read-only document-based data.

Determine Serialization Support

• To pass business entities across physical boundaries such as application domain,
process, and service interface boundaries, the data must be serialized. Also keep in
mind the performance impact of serializing the data when crossing logical
boundaries.
• Expose serializable business entities directly only if required. If another layer in the
application, on the same physical tier, is consuming business entities, the most
straightforward approach is to expose business entities directly through
serialization. This creates a dependency between the consumers of business entities
and their implementation. This approach is not recommended unless you can
maintain direct control over the consumers and remote access to business entities
between physical tiers is not required.
• Convert business entities into serializable data transfer objects. To decouple the
consumers of business entities from the internal implementation of the business
layer, consider translating business entities into special serializable data transfer
objects. Data Transfer Object (DTO) is a design pattern used to package multiple
data structures into a single structure for transfer across boundaries. Data transfer
objects are also useful when the consumers have a different data representation or
model. This approach allows changing the internal implementation of the business
layer without affecting the business entity consumers, and allows versioning of
interfaces more easily.
• Expose XML directly. In some cases, business entities are serialized and exposed as
XML. Attributes on the business entities control the serialization.
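The entity-to-DTO translation described above can be as simple as a mapping function that flattens the entity into a serializable shape. A minimal Python sketch (the Order entity and the DTO field names are invented for illustration):

```python
import json

class Order:
    """Internal business entity: holds behavior as well as state."""
    def __init__(self, order_id, customer, lines):
        self.order_id = order_id
        self.customer = customer
        self.lines = lines          # list of (sku, quantity) tuples
    def total_items(self):
        return sum(qty for _, qty in self.lines)

def to_dto(order):
    # Flattens the entity into a serializable structure; consumers
    # depend on this contract, not on the Order implementation, so the
    # business layer can change internally without breaking them.
    return {
        "orderId": order.order_id,
        "customerName": order.customer,
        "itemCount": order.total_items(),
    }

order = Order(42, "Ada", [("widget", 2), ("gadget", 1)])
payload = json.dumps(to_dto(order))   # ready to cross a service boundary
```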

Domain Driven Design

Domain Driven Design (DDD) is an object-oriented approach to designing software based on
the business domain, its elements and behaviors, and the relationships between them. It
aims to enable software systems that are a realization of an underlying business domain by
defining a domain model expressed in the language of business domain experts. The
domain model can be viewed as a framework from which solutions can then be rationalized.

Domain Driven Design requires good understanding of the business domain mostly provided
to the development team by business domain experts. The whole team agrees to only use a
single language that is focused on the business domain, and which excludes any technical
jargon. Quite often, communication problems within development teams are due not only to
misunderstanding the language of the domain, but also to the fact that the domain's
language is itself ambiguous.

The domain model is expressed using entities, value objects, aggregate roots, repositories,
and domain services; organized into coarse areas of responsibility known as Bounded
Contexts:
• Entities are objects in the domain model that have a unique identity that does not
change throughout the state changes of the software. Entities encapsulate both
state and behavior.
• Value objects are objects in the domain model that are used to describe certain
aspects of a domain. They do not have a unique identity and are immutable (for
example a customer address object).
• Aggregate roots are entities that group logically related child entities or value
objects together, control access to them, and coordinate interactions between them.
• Repositories are responsible for retrieving and storing aggregate roots, typically
using an O/RM framework.
• Domain services represent operations, actions, or business processes and provide
functionality that refers to other objects in the domain model. At times, certain
functionality or an aspect of the domain cannot be mapped to any objects with a
specific life-cycle or identity; such functionality can be declared as a domain service
(for example, a catalog pricing service within the e-commerce domain).

While Domain Driven Design provides many technical benefits, such as maintainability, it
should be applied only to complex domains where the model and the linguistic processes
provide clear benefits in the communication of complex information, and in the formulation
of a common understanding of the domain.
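The DDD building blocks described above can be sketched in code: a value object compared by value, and an aggregate root that owns its child entities and enforces their invariants. A minimal Python sketch (the ordering domain and all names are invented for illustration):

```python
class Address:
    """Value object: no identity of its own, compared by value."""
    def __init__(self, street, city):
        self._fields = (street, city)
    def __eq__(self, other):
        return isinstance(other, Address) and self._fields == other._fields

class OrderLine:
    """Child entity, reachable only through its aggregate root."""
    def __init__(self, sku, quantity):
        self.sku = sku
        self.quantity = quantity

class Order:
    """Aggregate root: owns its lines and guards their invariants."""
    def __init__(self, order_id, shipping_address):
        self.order_id = order_id                  # the entity's identity
        self.shipping_address = shipping_address  # a value object
        self._lines = []
    def add_line(self, sku, quantity):
        if quantity <= 0:                 # invariant enforced at the root
            raise ValueError("quantity must be positive")
        self._lines.append(OrderLine(sku, quantity))
    @property
    def lines(self):
        return tuple(self._lines)         # no external mutation

home = Address("1 Main St", "Springfield")
order = Order("o-1", home)
order.add_line("widget", 2)
```

A repository would then load and store Order aggregates whole, never individual OrderLine objects.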

Entity Framework
Anti-Patterns To Avoid In N-Tier Applications

Understanding N-Tier

• A well-designed application has multiple layers with carefully managed
dependencies. The layers can live in a single tier or be split across multiple tiers.
• A layer is an organizational concept, while a tier denotes physical separation or at
least a design that will allow physical separation if needed.
• N-tier applications have at a minimum a database tier, a middle tier that exposes a
service, and a client tier.
• Main focus of presented anti-patterns is creating and consuming custom WCF
services that persist data using the Entity Framework.

Custom Service or RESTful Service?

• The key difference is that REST services are resource-centric while custom services
are operation-centric.
• With REST, data is divided into resources, each resource is given a URL, and
standard CRUD operations on those resources are implemented.
• With custom services, any arbitrary method can be created and those operations
can be tailored to the specific needs of the application.
• ADO.NET Data Services in combination with the Entity Framework (EF) makes it
easy to create both RESTful services and clients to work with them. The framework
can provide more functionality to RESTful services automatically because the
services are constrained to follow a specific pattern.
• For many applications, the constraints of REST are just too much. For example,
sometimes the operations involve multiple resources at once.
• Often the ideal solution for an application is a mixture of REST and custom services.

Anti-Pattern #1: Tight Coupling

• Loose coupling is more difficult than tight coupling, and often the performance is not
as good.
• Why introduce an interface and dependency injection? Why build an abstraction with
custom objects mapped to the database instead of filling a DataTable and passing it
around?
• In the short term you gain some efficiency with tight coupling, but in the long run
evolving the application can become almost impossible.
• When you have modules that work together closely within a tier, sometimes tight
coupling is the right choice, but in other cases, components need to be kept at
arm's length from one another.
• Tiers do not always change at the same rate. The trick is to identify which parts of
the application might have different rates of change and which parts are tightly
coupled to each other.
• First, consider the boundary between the database and the mid-tier. Using the EF
already helps here because its mapping system provides an abstraction between
mid-tier code and the database. The same questions should be considered between
the mid-tier and the client.
• A particularly common and painful example of this anti-pattern in action is an
architecture that uses table adapters (which move data into a DataSet with the
same schema as the database) to retrieve data from the database, and Web services
that exchange DataSets with the client (tightly coupling the mid-tier to the client).

Anti-Pattern #2: Assuming Static Requirements

• Two cases where changing requirements have an especially significant impact:
◦ Treating the client as trusted
◦ The mid-tier service assuming that the client will be implemented using a
particular technology
• If you validate only on the client, and the mid-tier trusts the received data enough
to send it directly to the database without re-validating, the chance that
something will eventually go wrong is much larger than you might think.
• Always validate and enforce some degree of security on the mid-tier, even if that
means validating or performing access control more than once.
• Locking the client into a particular technology is more likely to be a problem. If an
application survives long enough, technology adjustments will occur. You may
initially design your application as a rich client desktop application and then later
find you need to move it to Silverlight. If your service was designed to exchange
DataSets, major surgery would be needed on the service and all existing
clients.
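As a minimal sketch of the re-validation principle, a mid-tier service can repeat the same checks the client already performed before touching the database. The `OrderRequest` type and its rules below are hypothetical, invented for illustration; they are not from the source:

```csharp
using System.Collections.Generic;

// Hypothetical incoming message type; the field names are illustrative only.
public class OrderRequest
{
    public string CustomerId { get; set; }
    public int Quantity { get; set; }
}

public static class OrderValidator
{
    // Runs on the mid-tier even if the client performed identical checks,
    // so a buggy or malicious client cannot push bad data to the database.
    public static IList<string> Validate(OrderRequest request)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(request.CustomerId))
            errors.Add("CustomerId is required.");
        if (request.Quantity <= 0)
            errors.Add("Quantity must be positive.");
        return errors; // an empty list means the request may proceed
    }
}
```

The service method would call `Validate` first and reject the request on any error, regardless of what the client claims to have checked.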
Anti-Pattern #3: Mishandled Concurrency

• Concurrency is a complex-but-important area that the DataSet handles well. A
mistake with concurrency is the kind of problem that often only shows up once the
application is in production. At its core, concurrency management is fairly simple:
guarantee data integrity even if two clients try to modify the same data at roughly
the same time.
• For most applications, the concurrency management technique of choice is
optimistic concurrency. The number of times when the exact same entity is modified
in conflicting ways is quite small.
• Detection is driven by one or more properties, collectively called the concurrency
token, that change whenever any part of the entity changes. When the application
updates an entity back to the database, it first checks to make sure that the value
of the concurrency token in the database is still the same as it was when the entity
was originally read.
• The Entity Framework supports optimistic concurrency by transparently tracking the
original value of concurrency tokens when entities are queried and checking for
conflicts prior to database updates.
• The correct update pattern is either to make a copy of the entity on the client and
send back both the original version unmodified and the modified version or to write
the client in such a way that it does not modify the concurrency token. If the
concurrency token is updated by a server trigger or automatically because it is a
row version number (probably the best plan anyway), then there is no reason to
modify it on the client.
• To make this approach work, when the mid-tier receives the entity from the client,
you need to attach it to the context and then go over its properties, manually
marking them as modified. In either case, though, you will fix both of the problems with the
anti-pattern at once. You will no longer query the database twice, and the
concurrency check will be based on the correct value of the token (from the initial
query) rather than some later value.
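The token comparison at the heart of optimistic concurrency can be sketched in isolation. The types and method names below are illustrative only, not EF APIs; the Entity Framework performs this check for you inside SaveChanges:

```csharp
// Illustrative row with a version-number concurrency token.
public class CustomerRow
{
    public string ContactName { get; set; }
    public int RowVersion { get; set; } // the concurrency token
}

public static class OptimisticStore
{
    // Simulates the database's current copy of the row.
    public static CustomerRow Current =
        new CustomerRow { ContactName = "Maria", RowVersion = 1 };

    // The update succeeds only if the caller's token still matches the stored one,
    // i.e. the value read at the initial query, not some later value.
    public static bool TryUpdate(string newName, int tokenReadAtQueryTime)
    {
        if (Current.RowVersion != tokenReadAtQueryTime)
            return false; // someone else changed the row since it was read

        Current.ContactName = newName;
        Current.RowVersion++; // the token changes whenever the row changes
        return true;
    }
}
```

A second writer holding the stale token fails the check and must re-read the row, which is exactly the conflict-detection behavior the concurrency token exists to provide.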

Anti-Pattern #4: Stateful Services

• The next anti-pattern comes up when developers try to simplify things by keeping
the context around across multiple service calls.
• Managing the context lifetime can get tricky quickly. When you have multiple clients
calling the services, you have to maintain a separate context for each client or risk
collisions between them. And even if you solve those issues, you will end up with
major scalability problems.
• These scalability problems are not only the result of tying up server resources for
every client. In addition you will have to guard against the possibility that a client
might start a unit of work, but never complete it, by creating an expiration scheme.
Further, if you decide that you need to scale your solution out by introducing a farm
with multiple mid-tier servers, then you will have to maintain session affinity to keep
a client associated with the same server where the unit of work began.
• The best solution is to avoid these problems altogether by keeping your mid-tier
service implementations stateless. If some information needs to be maintained for a
unit of work that extends across multiple service calls, then that information should
be maintained by the client.
Anti-Pattern #5: Two Tiers Pretending to be Three

• "Why can't you make the Entity Framework serialize queries across tiers?" "Oh, and
while you are at it, can you support initiating updates from another tier as well?"
• If you could create an Entity Framework ObjectContext on the client tier, execute
any Entity Framework query to load entities into that context, modify those entities,
and then have SaveChanges push an update from the client through the mid-tier to
the database server—if you could do all that, then why have the mid-tier at all? Why
not just expose the database directly?

Anti-Pattern #6: Undervaluing Simplicity

• In the name of avoiding all the anti-patterns discussed previously, it is easy to
decide that you need to create the most carefully architected, multi-tier, fully
separated, re-validating, super design that you can come up with.
• It is important to think over your goals and consider whether you are going to need
the investment n-tier requires. Simple is good. Sometimes a two-tier app is just the
thing.
• If you can make the problem simpler, do so.

N-Tier Application Patterns

Change Set

• The idea behind the change set pattern is to create a serializable container that can
keep the data needed for a unit of work together and, ideally, perform change
tracking automatically on the client. This approach also tends to be quite full-
featured and is easy to use on the mid-tier and on the client. DataSet is the most
common example of this pattern.
• Some of the downsides of this pattern:
◦ The change set pattern places significant constraints on the client because
the wire format tends to be very specific to the change set and hard to
make interoperable.
◦ The wire format is usually quite inefficient. Change sets are designed to
handle arbitrary schemas, so overhead is required to track the instance
schema.
◦ It is easy to end up tightly coupling two or more of the tiers, which causes
problems if they have different rates of change.
◦ It is easy to abuse the change set.
• Because it is so easy to put data into the change set, send it to the mid-tier, and
then persist, you can do so without verifying on the mid-tier that the changes you
are persisting are only of the type that you expect.
• This pattern is best used in cases where you have full control over client deployment
so that you can address the coupling and technology requirement issues. It is also
the right choice if you want to optimize for developer efficiency rather than runtime
efficiency. If you do adopt this pattern, be sure to validate any changes on the mid-
tier rather than blindly persisting whatever changes arrive.
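The automatic change tracking that makes the DataSet a convenient change set container can be seen in a few lines. This is a standalone System.Data sketch; the table and column names are made up for illustration:

```csharp
using System.Data;

public static class ChangeSetDemo
{
    public static DataRowState Run()
    {
        // Build a trivial schema; in practice a table adapter would do this.
        var table = new DataTable("Customers");
        table.Columns.Add("CustomerID", typeof(string));
        table.Columns.Add("ContactName", typeof(string));

        var row = table.Rows.Add("ALFKI", "Maria");
        table.AcceptChanges(); // row is now Unchanged, as if freshly loaded

        // Any later edit is tracked automatically by the DataSet machinery;
        // the original value is also preserved for concurrency checks.
        row["ContactName"] = "Anna";
        return row.RowState; // Modified
    }
}
```

This built-in tracking is exactly what makes the pattern easy to use, and also what makes it easy to ship arbitrary, unverified changes to the mid-tier, hence the advice to validate before persisting.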
DTOs

• The intent of the Data Transfer Objects (DTOs) pattern is to separate the client and
the mid-tier by using different types to hold the data on the mid-tier and the data
on the client and in the messages sent between them. The DTO approach requires
the most effort to implement, but when implemented correctly, it can achieve the
most architectural benefits.
• You can develop and evolve your mid-tier and your client on completely separate
schedules because you can keep the data that travels between the two tiers in a
stable format regardless of changes made on either end. Naturally, at times you'll
need to add some functionality to both ends, but you can manage the rollout of that
functionality by building versioning plus backward and forward compatibility into the
code that maps the data to and from the transfer objects.
• Because you explicitly design the format of the data for when it transfers between
the tiers, you can use an approach that interoperates nicely with clients that use
technologies other than .NET. You can use a format that is very efficient to send
across the wire, or you can choose to exchange only a subset of an entity's data for
security reasons.
• The downside is the extra effort required to design three different sets of types for
essentially the same data and to map the information between the types.
• For many projects you might be able to achieve your goals with a pattern that
requires less effort.

Simple Entities

• The simple entities pattern reuses the mid-tier entity types on the client, striving to
keep the complexity of the data structure to a minimum and passing entity
instances directly to service methods. Only simple property modifications to entity
instances are allowed on the client. More complex operations, such as changing
relationships or accomplishing a combination of inserts, updates, and deletes,
should be represented in the structure of the service methods.
• No extra types are required and no effort has to be put into mapping data from one
type to another. If you can control deployment of the client, you can reuse the same
entity structures.
• The primary disadvantage is that more methods are usually required on the service
if you need to accomplish complex scenarios that touch multiple entities. This leads
to either chatty network traffic, where the client has to make many service calls to
accomplish a scenario, or special-purpose service methods with many arguments.
• The simple entities approach is especially effective when you have relatively simple
clients or when the scenarios are such that operations are homogeneous. Then the
service methods are generally either queries for read-only data, modifications to
one entity at a time without changing much in the way of relationships, or inserting
a set of related entities all at once for a specific entity.

Self-Tracking Entities

• The self-tracking entities pattern is built on the simple entities pattern. It creates
smart entity objects that keep track of their own changes and changes to related
entities. To reduce constraints on the client, the entities are plain-old CLR objects
(POCO) that are not tied to any particular persistence technology. They just
represent the entities and some information about whether they are unchanged,
modified, new, or marked for deletion.
• Because the tracking information is built into the entities themselves and is specific
to their schema, the wire format can be more efficient than with a change set.
Because they are POCO, they make few demands on the client and interoperate
well. Because validation logic can be built into the entities themselves, you can
more easily remain disciplined about enforcing the intended operations for a
particular service method.
• There are two primary disadvantages for self-tracking entities compared to change
sets:
◦ A change set implementation can allow multiple change sets to be merged if
the client needs to call more than one service method to retrieve the data it
needs.
◦ The entity definitions are somewhat complicated because they include the
tracking information directly instead of keeping that information in a
separate structure outside the entities.
• Self-tracking entities are not as thoroughly decoupled as DTOs, and there are times
when more efficient wire formats can be created with DTOs.
• Nothing prevents you from using a mix of DTOs and self-tracking entities.
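The core idea, a POCO that records its own state, can be sketched in a few lines. The type and member names below are illustrative only; the real EF4 template generates much richer classes:

```csharp
// Illustrative tracking states, mirroring unchanged/modified/new/deleted.
public enum TrackedState { Unchanged, Modified, Added, Deleted }

// A plain class with no persistence dependencies that tracks its own changes.
public class TrackedCustomer
{
    private string _contactName;

    public TrackedState State { get; set; } = TrackedState.Unchanged;

    public string ContactName
    {
        get { return _contactName; }
        set
        {
            if (!Equals(_contactName, value))
            {
                _contactName = value;
                // Flip the state on the first real change; the mid-tier later
                // reads this flag instead of re-querying the database.
                if (State == TrackedState.Unchanged)
                    State = TrackedState.Modified;
            }
        }
    }
}
```

Because the state lives on the entity itself, it serializes along with the data and survives the trip across tiers, which is what the ObjectStateManager alone cannot provide.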

Implementing N-Tier with the Entity Framework

• The EF provides a foundation for addressing persistence concerns:
◦ Declarative mapping between the database and conceptual entities, which
decouples your mid-tier from the database structure.
◦ Automatic concurrency checks on updates, as long as appropriate change-
tracking information is supplied.
◦ Transparent change tracking on the mid-tier.
• The EF is a LINQ provider, which means that it is relatively easy to create
sophisticated queries that can help with mapping entities to DTOs.
• The EF can be used to implement any of the four patterns described earlier:
◦ The EF release in Visual Studio 2008 SP1/.NET 3.5 SP1 makes patterns other
than the simple entities pattern very difficult to implement.
◦ The EF release in Visual Studio 2010/.NET 4 makes implementing the other
patterns easier.

Concurrency Tokens

• The best option for a concurrency token is a row version number. A row's
version automatically changes whenever any part of the row changes in the
database.
• The next best option is to use something like a time stamp and add a trigger to the
database that updates the time stamp whenever a row is modified.
• In the Entity Designer, select the property and set its Concurrency Mode to Fixed.
You can have more than one property in the same entity with Concurrency Mode set
to Fixed, but this is usually not necessary.

Serialization

• The EF automatically generates DataContract attributes on the types and
DataMember attributes on the persistable properties of the entities so that they
can be used directly in WCF. This includes navigation properties, which means that
if you retrieve a graph of related entities into memory, the whole graph is serialized
automatically. The generated code supports binary serialization and XML
serialization (XML serialization applies only to single entities, not graphs).
• The change-tracking information, which is stored in the ObjectStateManager (part of
ObjectContext), is not serialized. In the simple entities pattern, you typically retrieve
unmodified entities from the database on the mid-tier and serialize them to the
client:

public Customer GetCustomerByID(string id)
{
    using (var ctx = new NorthwindEntities())
    {
        return ctx.Customers.Where(c => c.CustomerID == id).First();
    }
}

Working with the ObjectStateManager

• For two-tier persistence operations, the ObjectStateManager does its job
automatically for the most part.

It keeps track of the existence of each entity under its control; its key value; an EntityState value,
which can be unchanged, modified, added, or deleted; a list of modified properties; and the original
value of each modified property. When you retrieve an entity from the database, it is added to the
list of entities tracked by the state manager, and the entity and the state manager work together to
maintain the tracking information. If you set a property on the entity, the state of the entity
automatically changes to Modified, the property is added to the list of modified properties, and the
original value is saved. Similar information is tracked if you add or delete an entity. When you call
SaveChanges on the ObjectContext, this tracking information is used to compute the update
statements for the database. If the update completes successfully, deleted entities are removed
from the context, and all other entities transition to the unchanged state so that the process can
start over again.

• When sending entities to another tier, the automatic change tracking process is
interrupted. To perform an update on the mid-tier by using information from the
client, you need two special methods of ObjectContext:
◦ The Attach method tells the state manager to start tracking an entity. There
are two critical things about Attach to keep in mind:
▪ At the end of a successful call to Attach, the entity will always be in
the unchanged state. If you want to eventually get the entity into
some other state, such as modified or deleted, you need to take
additional steps to transition the entity to that state. The value an
entity's property has when you attach it will be considered the
original value for that property. The value of the concurrency token
when you attach the entity will be used for concurrency checks.
▪ If you attach an entity that is part of a graph of related entities, the
Attach method will walk the graph and attach each of the entities it
finds.
◦ The ApplyPropertyChanges method implements the other half of a
disconnected entity modification scenario. It looks in the
ObjectStateManager for another entity with the same key as its argument
and compares each regular property of the two entities. When it finds a
property that is different, it sets the property value on the entity in the state
manager to match the value from the entity passed as an argument to the
method. It is important to note that the method operates only on "regular"
properties and not on navigation properties, so it affects only a single entity,
not an entire graph. It was designed especially for the simple entities
pattern.

public void UpdateCustomer(Customer original, Customer modified)
{
    using (var ctx = new NorthwindEntities())
    {
        ctx.Attach(original);
        ctx.ApplyPropertyChanges(modified.EntityKey.EntitySetName, modified);
        ctx.SaveChanges();
    }
}

• The above mechanism adds some complication to the client, which needs to copy the
entity before modifying it. An alternative is to attach the modified entity and use
some lower-level APIs on the ObjectStateManager to tell it that the entity should be
in the modified state and that every property is modified.

public void UpdateCustomer(Customer modified)
{
    using (var ctx = new NorthwindEntities())
    {
        ctx.Attach(modified);
        var stateEntry = ctx.ObjectStateManager.GetObjectStateEntry(modified);
        foreach (var propertyName in stateEntry.CurrentValues
            .DataRecordInfo.FieldMetadata
            .Select(fm => fm.FieldType.Name))
        {
            stateEntry.SetModifiedProperty(propertyName);
        }
        ctx.SaveChanges(); // must run inside the using block, before the context is disposed
    }
}

• The ObjectStateManager mechanism can also be used in service methods that add
and delete entities.

public void AddCustomer(Customer customer)
{
    using (var ctx = new NorthwindEntities())
    {
        ctx.AddObject("Customers", customer);
        ctx.SaveChanges();
    }
}

public void DeleteCustomer(Customer customer)
{
    using (var ctx = new NorthwindEntities())
    {
        ctx.Attach(customer);
        ctx.DeleteObject(customer);
        ctx.SaveChanges();
    }
}

• The approach can be extended to methods that change relationships between
entities or perform other operations. The key concept is to get the state manager
into the state it would have been in if the entities had been queried from the
database, then make changes to the entities, and then call SaveChanges.

Patterns Other Than Simple Entities in .NET 3.5 SP1

• The change set pattern can be implemented. See the sample of this pattern written
with one of the prerelease betas of the EF. Consider creating an ObjectContext on
the client with only the conceptual model metadata and use that as a client-side
change tracker.
• Implementing DTOs is not that much more difficult with the first release of the EF
than it will be in later releases. You have to write your own code or use an
automatic mapper to move data between your entities and the DTOs. Consider
using LINQ projections to copy data from queries directly into your DTOs.
public List<CustomerDTO> GetCustomerDTOs()
{
    using (var ctx = new NorthwindEntities())
    {
        var query = from c in ctx.Customers
                    select new CustomerDTO()
                    {
                        Name = c.ContactName,
                        Phone = c.Phone
                    };
        return query.ToList();
    }
}

• Self-tracking entities is the hardest pattern to implement in the SP1 release:
◦ EF in .NET 3.5 SP1 does not support POCO, so self-tracking entities will have
a dependency on the 3.5 SP1 version of .NET, and the serialization format
will not be as suitable for interoperability.
◦ Implementing a method on the mid-tier to handle a mixed graph is quite
difficult. One of the nice features of self-tracking entities is that you can
create a single graph of related entities with a mix of operations: some
entities can be modified, others new, and still others marked for deletion.

API Improvements in .NET 4

• The EF will support complete persistence ignorance for entity classes (POCO). This
allows creation of entities that have no dependencies on the EF or other
persistence-related DLLs. A single entity class used for persisting data with the EF
will also work on Silverlight or earlier versions of .NET. POCO helps isolate the business logic in
your entities from persistence concerns and makes it possible to create classes with
a clean, interoperable serialization format.
• Working with the ObjectStateManager will be easier because the state transition
constraints have been relaxed.
• Allow building a model in which an entity exposes a foreign key property that can be
manipulated directly.
• The EF will use the T4 template engine to allow easy, complete control over the code
that is generated for entities. This allows Microsoft to release templates that
generate code for a variety of scenarios and usage patterns, and allows you to
customize those templates. One of the templates produces classes that implement
the self-tracking entities pattern with no custom coding required on your part.

Building N-Tier Apps with EF4


While Simple Entities is usually not the preferred pattern for n-tier applications, it is the
most viable option in the first release of the EF. EF4 significantly changes the options for n-
tier programming with the framework:
• New framework methods that support disconnected operations, such as
ChangeObjectState and ChangeRelationshipState, which change an entity or
relationship to a new state (added or modified, for example); ApplyOriginalValues,
which lets you set the original values for an entity; and the new ObjectMaterialized
event, which fires whenever an entity is created by the framework.
• Support for Plain Old CLR Objects (POCO) and foreign key values on entities. These
features let you create entity classes that can be shared between the mid-tier
service implementation and other tiers, which may not have the same version of the
Entity Framework (.NET 2.0 or Silverlight, for example). POCO objects with foreign
keys also have a straightforward serialization format that simplifies interoperability
with platforms like Java. The use of foreign keys also enables a much simpler
concurrency model for relationships.
• T4 templates to customize code generation. These templates provide a way to
generate classes implementing the Self-Tracking Entities or DTOs patterns.

These features are used to implement the Self-Tracking Entities pattern in a template,
making it more accessible; and while DTOs still require the most work during initial
implementation, this process is also easier with EF4 (see figure).

The right pattern for a particular situation depends on a lot of factors:

• DTOs provide many architectural advantages at a high initial implementation cost.
• DTOs are the best choice as your application becomes larger and more complex, or
if you have requirements that can't be met by Self-Tracking Entities, like different
rates of change between the client and the server.
• Change Set exhibits few good architectural characteristics but is easy to implement
(when available for a particular technology, such as the DataSet in ADO.NET).
• A pragmatic/agile balance between these concerns is to start with Self-Tracking
Entities and move to DTOs if the situation warrants it.
• Self-Tracking Entities represents a much better trade-off than Change Set or Simple
Entities.

Self-Tracking Entities

Start by creating an Entity Data Model that represents the conceptual entities and map it to
a database:
• Reverse engineer a model from an existing database
• Create a model from scratch and then generate a database to match

Replace the default code generation template with the Self-Tracking Entities template:
• Right-click the entity designer surface and choose Add Code Generation Item.
• Choose the Self-Tracking Entities template from the list of installed templates.
• This turns off default code generation and adds two templates: one generates the
ObjectContext, and the other generates entity classes. Separating them into two
templates makes it possible to split the code into separate assemblies, one for
entity classes and one for the context.

The main advantage is that you can have your entity classes in an assembly that has no
dependencies on the Entity Framework. This way, the entity assembly and any business
logic implemented there can be shared by the mid-tier and the client if you want.

The context is kept in an assembly that has dependencies on both the entities and the EF:
• If the client is running .NET 4, you can just reference the entity assembly from the
client project.
• If your client is running an earlier version of .NET or is running Silverlight, you can
add links from the client project to the generated files and recompile the entity
source in that project (targeting the appropriate CLR).

The generated entity classes are simple POCO classes:
• Provide basic storage of entity properties.
• Keep track of changes to the entities: overall state of an entity, changes to critical
properties such as concurrency tokens, and changes in relationships between
entities.
• The extra tracking information is part of the DataContract definition for the entities.
On the client of the service, changes to the entities are tracked automatically even though
the entities are not attached to any context. Each generated entity has code like the
following for each property. If you change a property value on an entity with the Unchanged
state, the state is changed to Modified:

[DataMember]
public string ContactName
{
    get { return _contactName; }
    set
    {
        if (!Equals(_contactName, value))
        {
            _contactName = value;
            OnPropertyChanged("ContactName");
        }
    }
}
private string _contactName;

Similarly, if new entities are added to a graph or entities are deleted from a graph, that
information is tracked:
• Since the state of each entity is tracked on the entity itself, the tracking mechanism
behaves as you would expect even when you relate entities retrieved from more
than one service call.
• If you establish a new relationship, just that change is tracked: the entities involved
stay in the same state, as though they had all been retrieved from a single service
call.

The context template adds the method ApplyChanges to the generated context. It attaches
a graph of entities to the context and sets the information in the ObjectStateManager to
match the information tracked on the entities. Between the tracking information on the
entities and ApplyChanges, the generated code handles both change tracking and
concurrency, two of the most difficult parts of correctly implementing an n-tier solution.

As a concrete example, the following shows a simple ServiceContract with Self-Tracking
Entities for an order submission system (based on Northwind):

[ServiceContract]
public interface INorthwindSTEService
{
    [OperationContract]
    IEnumerable<Product> GetProducts();

    [OperationContract]
    Customer GetCustomer(string id);

    [OperationContract]
    bool SubmitOrder(Order order);

    [OperationContract]
    bool UpdateProduct(Product product);
}

The GetProducts service method retrieves reference data about the product catalog for use
on the client. The GetCustomer method retrieves a customer and a list of that customer's
orders:

public Customer GetCustomer(string id)
{
    using (var ctx = new NorthwindEntities())
    {
        return ctx.Customers.Include("Orders")
            .Where(c => c.CustomerID == id)
            .SingleOrDefault();
    }
}

To illustrate client usage of self-tracking entities, consider the creation of an order with
appropriate order detail lines, updating parts of the customer entity with the latest contact
information, and also deleting any orders that have a null OrderDate (system marks
rejected orders that way):

var svc = new ChannelFactory<INorthwindSTEService>(
    "INorthwindSTEService")
    .CreateChannel();

var products = new List<Product>(svc.GetProducts());

var customer = svc.GetCustomer("ALFKI");
customer.ContactName = "Bill Gates";

foreach (var order in customer.Orders
    .Where(o => o.OrderDate == null).ToList())
{
    customer.Orders.Remove(order);
}

var newOrder = new Order();
newOrder.Order_Details.Add(new Order_Detail()
{
    ProductID = products.Where(p => p.ProductName == "Chai")
        .Single().ProductID,
    Quantity = 1
});
customer.Orders.Add(newOrder);
var success = svc.SubmitOrder(newOrder);

Note that when creating the order detail entity for the new order, just the ProductID
property is set rather than the Product entity itself. This is the new foreign key relationship
feature in action. It reduces the amount of information that travels over the wire because
you serialize only the ProductID back to the mid-tier, not a copy of the product entity.

It’s in the implementation of the SubmitOrder service method that Self-Tracking Entities
really shines:

public bool SubmitOrder(Order newOrder)
{
    using (var ctx = new NorthwindEntities())
    {
        ctx.Orders.ApplyChanges(newOrder);
        ValidateNewOrderSubmission(ctx, newOrder);
        return ctx.SaveChanges() > 0;
    }
}

The call to ApplyChanges reads the change information from the entities and applies it to
the context in a way that makes the result the same as if those changes had been
performed on entities attached to the context the whole time.

ValidateNewOrderSubmission, added to the service implementation, examines the
ObjectStateManager to make sure that only the kinds of changes we expect in a call to
SubmitOrder are present. Validation is really important because ApplyChanges pushes
whatever changes it finds in an entire graph of related objects into the context. The
expectation that a client will only add new orders, update the customer, and delete rejected
orders doesn't mean that a buggy (or even malicious) client would not do something else.
Regardless of the n-tier pattern that is used, it is a critical rule that changes are always
validated before saving them to the database.

A second critical design principle is that you should develop separate, specific service
methods for each operation. Without these separate operations, you do not have a strong
contract representing what is and isn’t allowed between your two tiers, and properly
validating your changes can become impossible.

Data Transfer Objects

With DTOs, instead of sharing a single entity implementation between the mid-tier and the
client, you create a custom object that’s used only for transferring data over the service and
develop separate entity implementations for the mid-tier and the client:
• It isolates your service contract from implementation issues on the mid-tier and the
client, allowing that contract to remain stable even if the implementation on the
tiers changes.
• It allows you to control what data flows over the wire. You can avoid sending
unnecessary data or data the client is not allowed to access.
• The service contract is designed with the client scenarios in mind so that the data
can be reshaped between the mid-tier entities and the DTOs (maybe by combining
multiple entities into one DTO).
• Benefits come at the price of having to create and maintain one or two more layers
of objects and mapping.

The following code applies DTOs to the order submission example. Note the CustomerVersion
field, which contains the row version information used for concurrency checks on the
customer entity:

public class NewOrderDTO
{
    public string CustomerID { get; set; }
    public string ContactName { get; set; }
    public byte[] CustomerVersion { get; set; }
    public List&lt;NewOrderLine&gt; Lines { get; set; }
}

public class NewOrderLine
{
    public int ProductID { get; set; }
    public short Quantity { get; set; }
}

The service method that accepts this DTO uses the same lower-level Entity Framework APIs
that the Self-Tracking Entities template uses to accomplish its tasks. First, you create a
graph of customer, order and order detail entities based on the information in the DTO:

var customer = new Customer
{
    CustomerID = newOrderDTO.CustomerID,
    ContactName = newOrderDTO.ContactName,
    Version = newOrderDTO.CustomerVersion,
};

var order = new Order
{
    Customer = customer,
};

foreach (var line in newOrderDTO.Lines)
{
    order.Order_Details.Add(new Order_Detail
    {
        ProductID = line.ProductID,
        Quantity = line.Quantity,
    });
}

Then you attach the graph to the context and set the appropriate state information:

ctx.Customers.Attach(customer);
var customerEntry = ctx.ObjectStateManager.GetObjectStateEntry(customer);
customerEntry.SetModified();
customerEntry.SetModifiedProperty("ContactName");

ctx.ObjectStateManager.ChangeObjectState(order, EntityState.Added);
foreach (var order_detail in order.Order_Details)
{
    ctx.ObjectStateManager.ChangeObjectState(order_detail, EntityState.Added);
}
return ctx.SaveChanges() > 0;

Flow:
• Attach the entire graph to the context: each entity is in the Unchanged state.
• Tell the ObjectStateManager to put the customer entity in the Modified state with
only the ContactName property marked as modified (the only customer info
provided by the DTO).
• Change the state of the order and each of its order details to Added.
• Apply changes to customer and order with SaveChanges.

Because you have a very specific DTO for each scenario, no change validation is required:
you interpret the DTO as you map its information into your entities. Nevertheless, in many
cases additional validation of the values or other business rules is still required.

One other consideration is properly handling concurrency exceptions using the version
information of the customer entity included in the DTO. You can either map this exception
to a WCF fault for the client to resolve the conflict, or you can catch the exception and apply
some sort of automatic policy for handling the conflict.
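As a sketch of those two options (ConcurrencyFault is a hypothetical fault contract you would define for this service; OptimisticConcurrencyException is the exception the Entity Framework throws when a row version check fails):

    try
    {
        return ctx.SaveChanges() > 0;
    }
    catch (OptimisticConcurrencyException)
    {
        // Option 1: map the conflict to a WCF fault and let the client resolve it.
        throw new FaultException&lt;ConcurrencyFault&gt;(
            new ConcurrencyFault { CustomerID = newOrderDTO.CustomerID },
            new FaultReason("The customer was changed by another user."));

        // Option 2 (instead of throwing): apply an automatic policy, e.g.
        // refresh the entity with ctx.Refresh(RefreshMode.StoreWins, customer)
        // and retry SaveChanges.
    }

Which option is right depends on whether the client has enough context to resolve conflicts meaningfully; for background or batch clients an automatic policy is usually preferable.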

Tips

Some tips to watch out for:
• Reuse the Self-Tracking Entity template’s generated entity code on your client. If
you use proxy code generated by Add Service Reference in Visual Studio, things
look right, but the entities don’t actually keep track of their changes on the client.
• Create a new ObjectContext instance in a using statement for each service method
so that it is disposed of before the method returns. This step is critical for the
scalability of your service: it ensures that database connections are not kept open
across service calls and that temporary state used by a particular operation is
garbage collected. The Entity Framework automatically caches metadata and other
information it needs in the app domain, and ADO.NET pools database connections,
so re-creating the context each time is a quick operation.
• Use the new foreign key relationships feature whenever possible. It makes
changing relationships between entities much easier. The relationship is simply a
property of the entity, and if the entity passes its concurrency check, no further
check is needed. You can change a relationship just by changing the foreign key
value.
• Be careful of EntityKey collisions when attaching a graph to an ObjectContext. If,
for instance, you are using DTOs and parts of your graph represent newly added
entities for which the entity key values have not been set because they will be
generated in the database, you should call the AddObject method to add the whole
graph of entities first and then change entities not in the Added state to their
intended state (rather than calling the Attach method and then changing Added
entities to that state). Otherwise, when you first call Attach, the Entity Framework
thinks that every entity should be put into the Unchanged state, which assumes that
the entity key values are final. If more than one entity of a particular type has the
same key value (0, for example), the Entity Framework will throw an exception. By
starting with an entity in the Added state, you avoid this problem because the
framework does not expect Added entities to have unique key values.
• Turn off automatic lazy loading (new EF4 feature) when returning entities from
service methods. If you don’t, the serializer will trigger lazy loading and try to
retrieve additional entities from the database, which will cause more data than you
intended to be returned. Self-Tracking Entities does not have lazy loading turned on
by default, but if you are creating a DTOs solution, this is something to watch out
for.
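The EntityKey-collision tip above can be sketched against the DTO example. This is an assumption about how you would rearrange the earlier Attach-based code, not code from the article: AddObject puts the whole graph into the Added state, where duplicate, not-yet-generated key values are legal, and then the entities that are not actually new are moved to their intended state.

    // Whole graph (order, its details, and customer) enters as Added.
    ctx.Orders.AddObject(order);

    // The customer already exists in the database, so mark it Modified.
    ctx.ObjectStateManager.ChangeObjectState(customer, EntityState.Modified);
    // Note: ChangeObjectState marks every customer property as modified; to
    // limit the update to ContactName, mark individual properties as in the
    // earlier Attach-based example.

    // The order and its Order_Details remain Added and will be inserted.

Starting from Added and demoting, rather than starting from Unchanged and promoting, is what avoids the duplicate-key exception described above.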

Conclusion

The .NET 4 release of the Entity Framework makes the creation of architecturally sound n-
tier applications much easier. For most applications, starting with the Self-Tracking Entities
template is recommended; it simplifies the process and enables the most reuse. If your
service and client change at different rates, or if you need absolute control over your wire
format, you should move up to a Data Transfer Objects implementation. Regardless of which
pattern you choose, always keep in mind the key principles that the antipatterns and
patterns represent, and never forget to validate your data before saving.
Figures
[Figure: Data Access Technologies — current data technologies (native data technologies,
WCF Data Services) and future data technologies (Entity Framework)]

[Figure: Comparing N-Tier Patterns with EF4]

References
Book References

Patterns of Enterprise Application Architecture

[PEAA, Ch. 09] Patterns of Enterprise Application Architecture, Chapter 9, Domain Logic Patterns
[PEAA, Ch. 10] Patterns of Enterprise Application Architecture, Chapter 10, Data Source Architectural Patterns
[PEAA, Ch. 13] Patterns of Enterprise Application Architecture, Chapter 13, Object-Relational Metadata Mapping Patterns
[PEAA, Ch. 15] Patterns of Enterprise Application Architecture, Chapter 15, Distribution Patterns
[PEAA, Ch. 16] Patterns of Enterprise Application Architecture, Chapter 16, Offline Concurrency Patterns

Web References

MSDN

[DDC] Data Developer Center
[DDTAG] Microsoft Data Development Technologies At-a-Glance
[GDDP] Guide to the Data Development Platform for .NET Developers: "The ADO.NET Entity
Framework should be considered the development API of choice for .NET SQL Server
programmers going forward. The Entity Framework raises the abstraction level of data
access from logical relational database-based access to conceptual model-based access."
[IDDD] An Introduction To Domain-Driven Design
[APAN] Anti-Patterns To Avoid In N-Tier Applications
[NTAP] N-Tier Application Patterns
[BNTAE] Building N-Tier Apps with EF4
[EFFAQ] Entity Framework FAQ

MSDN Patterns & Practices

[DAAG] .NET Data Access Architecture Guide
[DDTC] Designing Data Tier Components and Passing Data Through Tiers
[ESPDN] Enterprise Solution Patterns Using Microsoft .NET

InfoQ

[DDDQ] Domain Driven Design Quickly
[DDDP] Domain Driven Design and Development In Practice

Blogs

[BASILRIA, Overview] Business Apps Example for Silverlight 3 RTM and .NET RIA Services,
Overview, Brad Abrams' blog
[BDSEF] Danny Simmons' blog (dev manager for the Entity Framework team)
[WSTEEF] Walkthrough: Self-Tracking Entities for the Entity Framework, ADO.NET team blog
[FKREF] Foreign Key Relationships in the Entity Framework, ADO.NET team blog
