Here's a quote from Scott Ambler that I think we all need to be able to respond to convincingly:
Why is Big Modelling up front the wrong thing to do?
Mistakenly compare software development to civil engineering. A common analogy is to compare software models to architectural diagrams for a bridge or building. Unfortunately the analogy isn’t an accurate one: software is more malleable than concrete making it easier and much less expensive to change your mind part way through the effort; The material (app servers, operating systems, …) used to build software isn’t as well understood as the material to build bridges (steel, concrete, …) making it much more difficult to accurately plan up front; Incremental delivery often doesn’t make much sense when you’re building a bridge, so what if the foundation has been set but the rest of the bridge isn’t in place, although incremental delivery of software is clearly possible and as I argue above the norm. Much better analogies are comparing software developers with chefs or comparing software development with putting on a play (as described in the wonderful book Artful Making).
Read more on Scott's article on why Big Modeling Up Front is an Anti-Pattern
Tuesday, December 4, 2012
Thursday, October 11, 2012
WCF Performance tuning resources
http://blogs.msdn.com/b/wenlong/archive/2009/07/26/wcf-4-higher-default-throttling-settings-for-wcf-services.aspx
http://weblogs.asp.net/paolopia/archive/2008/03/23/wcf-configuration-default-limits-concurrency-and-scalability.aspx
http://kennyw.com/work/indigo/150
http://csharp-codesamples.com/2009/04/configuring-wcf-throttling-behaviors/
It is advisable to keep MaxConcurrentCalls and MaxConcurrentSessions equal.
http://www.codeproject.com/Articles/33362/WCF-Throttling
http://msdn.microsoft.com/en-us/library/7w2sway1.aspx
Setting ProcessModel parameters.
Testing and finding the limits of Windows:
http://blogs.technet.com/b/markrussinovich/archive/2009/07/08/3261309.aspx
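The throttling advice above can be sketched as a config fragment. The numbers here are illustrative only, not recommendations; tune them under load testing. Note MaxConcurrentCalls and MaxConcurrentSessions kept equal, as suggested:

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="ThrottledBehavior">
      <!-- Keep calls and sessions equal; instances typically the sum of the two -->
      <serviceThrottling maxConcurrentCalls="200"
                         maxConcurrentSessions="200"
                         maxConcurrentInstances="400" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```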
Wednesday, September 26, 2012
Diagramming tools
Here's a quick list of simple and cool diagramming tools:
- Text-based class diagrams. The syntax can be a little awkward, but works well once you get your head around it.
- Text-based sequence diagramming web tool. Type some pseudo-code and the sequence diagram is drawn from your text.
- WYSIWYG-style free-form diagram tool. Has templates for sequence diagrams and others. Pretty nice tool.
Saturday, September 15, 2012
70-513 WCF Study Notes
Skill measured in the exam
http://www.microsoft.com/learning/en/us/exam.aspx?id=70-513
These are the raw notes I made while studying for the exam.
HOSTING
It's valid not to specify any endpoints. Adding base addresses will infer the bindings; the contract is inferred from the service name.
DISCOVERY
To discover a service: create one FindCriteria object for each interface and set their Duration properties to two seconds. Loop for 30 seconds total, invoking their Find methods.
To implement a "logger" that receives service start and stop announcements, use the AnnouncementService class, not the AnnouncementClient class.
RSS CONSUMPTION FROM A CLIENT
SyndicationFeed has an Items property (a collection of SyndicationItem). A SyndicationItem's content is a TextSyndicationContent, which exposes a Text : string property.
CUSTOM BINDING - DEFINING THE ORDER OF ELEMENTS (TRSMTET)
Transactions
Reliability
Security
Message Patterns
Transport upgrades/helpers
Encoding
Transport
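As a sketch, a custom binding honouring that order might look like this (only a subset of the slots is used; element names are from the standard config schema):

```xml
<customBinding>
  <binding name="OrderedExample">
    <transactionFlow />                                   <!-- T: Transactions -->
    <reliableSession />                                   <!-- R: Reliability -->
    <security authenticationMode="SecureConversation" />  <!-- S: Security -->
    <textMessageEncoding />                               <!-- E: Encoding -->
    <httpTransport />                                     <!-- T: Transport -->
  </binding>
</customBinding>
```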
MESSAGE QUEUEING
To deliver to the dead-letter queue, use the Reject enum value for receiveErrorHandling. For poison messages use Move or Drop.
Fault is the default value (it throws an exception).
Address of dead letter queue: net.msmq://localhost/system$;DeadLetter
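A netMsmqBinding sketch showing where receiveErrorHandling is set (the retry count is illustrative):

```xml
<netMsmqBinding>
  <binding name="QueuedExample"
           receiveRetryCount="3"
           receiveErrorHandling="Reject"
           deadLetterQueue="System" />
</netMsmqBinding>
```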
REST JSON SERVICES
When using the WebGet attribute you must use an SVC file (<%@ ServiceHost Service="TestService" Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>) or apply the WebHttpBehavior to the endpoint.
Use webHttpBinding for POX/JSON and general WCF use from JavaScript.
Use enableWebScript for ASP.NET AJAX usage.
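A minimal sketch of the config alternative to the SVC factory (the service and contract names are hypothetical):

```xml
<system.serviceModel>
  <services>
    <service name="TestService">
      <endpoint address="" binding="webHttpBinding"
                contract="ITestService"
                behaviorConfiguration="Web" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="Web">
        <webHttp />  <!-- swap for <enableWebScript /> for ASP.NET AJAX clients -->
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```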
ROUTING SERVICES
ISimplexDatagramRouter = reflects a one-way message exchange.
ISimplexSessionRouter = reflects a one-way message exchange with a session-aware channel.
IRequestReplyRouter = Reflects a request-reply message exchange.
IDuplexSessionRouter = Reflects a duplex communication using a callback contract.
Cannot mix message patterns when routing. For contracts that do, use IDuplexSessionRouter and use callbacks to route the responses.
PROXY USAGE
Closing/disposing a failed proxy might throw. Rather use .Abort() in the exception handler or finally block. Any reuse of a closed proxy will result in an ObjectDisposedException.
Proxies should derive from ClientBase<T>, where T is the service contract.
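The close/abort pattern above, sketched in C# (OrderServiceClient is a hypothetical generated proxy):

```csharp
using System;
using System.ServiceModel;

var proxy = new OrderServiceClient();
try
{
    proxy.PlaceOrder(42);
    proxy.Close();      // graceful close on success
}
catch (CommunicationException)
{
    proxy.Abort();      // Close() on a faulted channel would throw again
    throw;
}
catch (TimeoutException)
{
    proxy.Abort();
    throw;
}
```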
SERVICE EXCEPTIONS
Use the serviceDebug element in web.config.
[FaultContract(typeof(FaultException<Order>))] is valid, but is received by the client as a plain FaultException.
[FaultContract(typeof(ErrorInfo))] is best; it is received by the client as FaultException<ErrorInfo>.
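A sketch of the preferred form (ErrorInfo and IOrderService are hypothetical names):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ErrorInfo
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [FaultContract(typeof(ErrorInfo))]   // client receives FaultException<ErrorInfo>
    void PlaceOrder(int orderId);
}

// The service implementation throws the typed fault:
// throw new FaultException<ErrorInfo>(new ErrorInfo { Message = "Out of stock" });
```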
PERFORMANCE MONITORING, LOGGING AND AUDITING
Use the serviceSecurityAudit element in a behaviour to audit to the event log.
Setting <diagnostics performanceCounters="ServiceOnly"/> will only give service-level counters (no endpoint- or operation-level counters are available).
EventLogs are added to the security log by default in IIS7+ or Application in IIS6.
Calls Failed counts unhandled exceptions.
Calls Faulted counts thrown FaultExceptions.
MessageLogging/Filters can contain operation level filters:
<add xmlns:addr="http://www.w3.org/2005/08/addressing">addr:Action[text()='http://namespaceOfService/IServiceContract/OperationNameResponse']</add>
Configuring Logging:
<diagnostics>
<messageLogging logEntireMessage="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true"/>
<sources>
<source propagateActivity="true" name="System.ServiceModel" switchValue="Warning, ActivityTracing">
<listeners>
<add name="ServiceModelTraceListener"/>
</listeners>
</source>
</sources>
</diagnostics>
mexHttpBinding is a valid binding. Useful when the same address is used for service call and metadata.
SECURITY
OperationContract has a protection level. ServiceContract has a protection level that enforces a minimum.
Message level security must be encrypted using a cert
Don't use SecurityAction.Assert; only use SecurityAction.Demand.
Multiple PrincipalPermission attributes are allowed and are OR'ed together.
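A sketch of the OR semantics (the role names are hypothetical):

```csharp
using System.Security.Permissions;

public class OrderService
{
    // The caller needs EITHER role; multiple attributes are OR'ed together.
    [PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
    [PrincipalPermission(SecurityAction.Demand, Role = "Administrators")]
    public void ApproveOrder(int orderId)
    {
    }
}
```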
wsHTTPBinding defaults to message security and client credential type Windows.
IMPERSONATION
TokenImpersonationLevel.Impersonation
Impersonates the user's credentials only on the service machine.
TokenImpersonationLevel.Delegation
Allows impersonation of the user's credentials on the service machine and other machines.
When ImpersonateCallerForAllServiceOperations is false, Allowed results in no impersonation.
Required always results in impersonation.
ServiceSecurityAuditBehavior adds to windows event log in security section.
<machineSettings enableLoggingKnownPii="true"/> allows usernames and passwords to be logged in clear text in the WCF standard diagnostics logs. It is false by default.
You can implement custom message level security binding with TransportSecurityBindingElement.
Setting EstablishSecurityContext to false means no security context token is issued, and every call must be re-authenticated.
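Where that is set in config, as a sketch:

```xml
<wsHttpBinding>
  <binding name="NoSecureConversation">
    <security mode="Message">
      <!-- false = no security context token; every call re-authenticates -->
      <message clientCredentialType="Windows"
               establishSecurityContext="false" />
    </security>
  </binding>
</wsHttpBinding>
```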
CERTIFICATES
StoreName.My = personal certs for the logged-in user
StoreName.Root = trusted root CAs
StoreName.AuthRoot = Third party CA's
STREAMING
NetTcpBinding doesn't support message streaming with SessionMode=Required or with reliable sessions configured on.
Use MTOM (Message Transmission Optimization Mechanism) for streaming large files; it is more efficient than base64-encoding binary data in a text message.
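A basicHttpBinding streaming sketch (the sizes are illustrative):

```xml
<basicHttpBinding>
  <binding name="StreamedMtom"
           transferMode="Streamed"
           messageEncoding="Mtom"
           maxReceivedMessageSize="2147483647" />
</basicHttpBinding>
```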
CONTRACTS
SERVICE BEHAVIOR ATTRIBUTE
AutomaticSessionShutdown
ConcurrencyMode
ConfigurationName
IncludeExceptionDetailInFaults
InstanceContextMode (PerSession Default)
Name
Namespace
ReleaseServiceInstanceOnTransactionComplete
TransactionAutoCompleteOnSessionClose
TransactionIsolationLevel
TransactionTimeout
UseSynchronizationContext
OPERATION BEHAVIOR ATTRIBUTE
Impersonation
ReleaseInstanceMode
TransactionAutoComplete
TransactionScopeRequired
TRANSACTION FLOW ATTRIBUTE
Used on an operation contract (Allowed | Mandatory | NotAllowed).
ORDER BEHAVIOURS ARE EVALUATED
Most specific to most general: Contract, Operation, Endpoint, Service.
Reliability should be turned on for Http bindings.
MESSAGE CLASS
A Message instance can only be accessed or written to once.
To eliminate duplicate xmlns in message contracts, embed a data contract as the sole MessageBodyMember and declare the namespace on the data contract.
To read a message, use the CreateBufferedCopy method of the Message class, then use the CreateMessage method of the MessageBuffer class to make a copy.
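Sketched in C# (the incoming message variable is assumed):

```csharp
using System.ServiceModel.Channels;

// A Message can be read only once, so buffer it first.
MessageBuffer buffer = message.CreateBufferedCopy(int.MaxValue);

// Each CreateMessage() yields a fresh, readable copy.
Message copyToInspect = buffer.CreateMessage();
Message copyToForward = buffer.CreateMessage();
```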
TOOLS
SvcUtil.exe
Generate a client-side proxy from service metadata.
Generate metadata from services
WCFTestClient.exe
Edit Client configuration
Invoke service methods
WSDL.exe
Generate a client side proxy from service metadata.
Saturday, September 1, 2012
Which WCF Binding to use
Ever wondered which binding to use and why? Ever wondered what the difference is between wsHttpBinding and ws2007HttpBinding?
For binding selection generally follow this:
Common Binding Capability Matrix
Name | Transport | Encoding | Interop. With non-.Net | Reliability | Ordered Delivery | Security |
---|---|---|---|---|---|---|
BasicHttpBinding | Http/Https | Soap1.1*/Mtom | Yes | No | No | None*, Transport, Message, Mixed |
NetTcpBinding | Tcp | Binary | No | Yes (off*) | Yes (on*) | None, Transport*, Message, Mixed |
NetNamedPipeBinding | IPC | Binary | No | No (**) | Yes (on*) | None, Transport* |
WsHttpBinding | Http/Https | Soap1.2*/Mtom | Yes | Yes (off*) | Yes (on*) | None, Transport, Message*, Mixed |
NetMsMqBinding | MsMq | Binary | No | No | No | None, Transport*, Message, Both |
webHttpBinding | Http/Https | Xml*/Json | Yes | No | No | None*, Transport, Mixed |
* = The default.
** = NetNamedPipeBinding doesn't support reliability, but it is classified as inherently reliable because it communicates within the same machine.
Client Credentials Supported with Transport Security
Name | None | Windows | UserName | Certificate |
---|---|---|---|---|
BasicHttpBinding | Yes* | Yes | Yes | Yes |
NetTcpBinding | Yes | Yes* | No | Yes |
NetNamedPipeBinding | No | Yes* | No | No |
WsHttpBinding | Yes | Yes* | Yes | Yes |
NetMsMqBinding | Yes | Yes* | No | Yes |
WebHttpBinding | Yes* | Yes | Yes | Yes |
Client Credentials Supported with Message Security
Name | None | Windows | UserName | Certificate | Issued Token |
---|---|---|---|---|---|
BasicHttpBinding | No | No | No | Yes* | No |
NetTcpBinding | Yes | Yes* | Yes | Yes | Yes |
NetNamedPipeBinding | N/A | N/A | N/A | N/A | N/A |
WsHttpBinding | Yes | Yes* | Yes | Yes | Yes |
NetMsMqBinding | Yes | Yes* | Yes | Yes | Yes |
WebHttpBinding | N/A | N/A | N/A | N/A | N/A |
Standard Behavior Defaults
[ServiceBehavior(
AutomaticSessionShutdown=true,
ConcurrencyMode=ConcurrencyMode.Single,
InstanceContextMode=InstanceContextMode.PerSession,
IncludeExceptionDetailInFaults=false,
UseSynchronizationContext=true,
ValidateMustUnderstand=true)]
[OperationBehavior(
TransactionAutoComplete=true,
TransactionScopeRequired=false,
Impersonation=ImpersonationOption.NotAllowed)]
http://msdn.microsoft.com/en-us/library/ms751438.aspx
Of all the settings shown above, InstanceContextMode is the trickiest. Just because it defaults to PerSession does not mean a session is always present: no session is established if the binding does not support sessions or the service contract has been configured to disallow them (ServiceContract.SessionMode).
Here's a table showing the resulting instance context mode as a function of the binding, the service contract's session mode, and the service behavior's instance context mode. Invalid configurations are not shown; they throw runtime exceptions such as InvalidOperationException: Contract requires Session, but Binding 'WebHttpBinding' doesn't support it.
Binding | Service Contract Session Mode | Service Behavior Instance Context Mode | Resulting Actual Instance Mode |
---|---|---|---|
BasicHttp | Allowed or Not Allowed | Per Call or Per Session | PerCall |
netTCP and netNamedPipes | Allowed or Required | Per Call | PerCall |
netTCP and netNamedPipes | Allowed or Required | Per Session | PerSession |
WsHttp (no message security and no reliability) | Allowed or Not Allowed | Per Call or Per Session | PerCall |
WsHttp (with message security or reliability) | Allowed or Required | Per Session | PerSession |
WsHttp (with message security or reliability) | Not Allowed | Per Call or Per Session | PerCall |
WebHttpBinding | Allowed or Not Allowed | Per Call or Per Session | PerCall |
Using JSON
I have little experience using JSON with the standard bindings, but my understanding is that you will be forced to use BasicHttpBinding or WebHttpBinding (not WsHttpBinding). When doing so you will lose most of the support for other WCF features like security, reliability, ordered messages, and transactions. With regard to security, SSL is supported by both Web and Basic; for any other form of security (message and credentials) you'll be forced to hand-craft service-side extensions.
WTF is Ws2007HttpBinding?
I came across this MSDN article that explains in detail, via a capability matrix, the differences between the HTTP bindings and the WS-* recommendations they support, and when to use which one.
http://msdn.microsoft.com/en-us/library/ms730294.aspx
Simply put, the difference with the newer 2007 HTTP binding is that it supports extensions from the Organization for the Advancement of Structured Information Standards (OASIS). This means it supports minor changes to the WS-Security standards. There is no compelling reason for a developer to choose these bindings; there are no amazing new features to take advantage of, merely protocol-level changes. You might choose the 2007 binding if a client insists on a specific standard being used, or you're integrating with existing clients or services that use it.
Further Reading
Friday, August 24, 2012
New Version of Type Visualiser
Version 1.0.18 of my Type Visualiser tool has been released. This release contains some cool new features, but the biggest change was by far shifting to a stricter MVVM approach.
When writing the first release, the agile-ist in me wanted to get a prototype up and running as quickly as possible. This resulted in a great deal of code-behind to dynamically draw the visualisation. The driver behind the change was the increasing number of bugs, inconsistency of behaviour, and ultimately the difficulty of adding new features.
By changing to a strict MVVM approach and heavily leveraging data-binding, most of the code-behind shifted into controllers and the model. Once complete, the justification was evident in the ease of adding some new features.
Here's a summary of the new features:
Saturday, August 4, 2012
The role of an Architect in Agile Part 1
I had an interesting conversation with a highly experienced architect recently:
"Is what you're doing working? Really? Are you sure?"
If you answered yes, then don't change anything.
More than likely however, there are recurring quality problems, defect count is high, bug-fixes are rejected, and developers spend a great deal of time refactoring. You need an architect and better architecture processes.
I pondered about the title of this post, should it be the role of an architect in Agile or just modern architect? Probably the latter, but the question of architecture in agile is something that bugs more than a few people. I certainly don't have all the answers nor proclaim to be an expert, but these are my experiences and observations.
So what's the problem?
Scrum and agile narrations focus on writing and building code, not the "ideation" and design that is necessary. Who isn't following Scrum or some self-proclaimed "flavour of agile" these days? Agile documents almost never mention architecture or design. They expect the team to design and build as part of a sprint, or maybe spend the first sprint just on design, but offer no help on how this should work. In my experience this is fraught with difficulty. Design by committee doesn't work well; it's usually better to have the most experienced person responsible for design, and properly thinking through a design and proofs of concept takes time. They should, of course, engage other team members for input and review, but one person needs to be responsible. Things go better if the senior(s) have a vision of what the end-goal architecture should look like; building this vision with clarity is the issue.
Stephen Cohen (Chief Architect, Microsoft): "Scrum went through 3-4 years in its original form before admitting that the architect had any role at all to play."
Juval Lowy (Master Architect, IDesign): "The agile priests would like you to believe that following agile will magically produce architecture that is adaptable, resistant to poor coding, and is scalable."
(Apologies for the paraphrasing).
Some practitioners claim there is no need for design and that it should simply be part of implementing the story. This works in only the most trivial software, and in my experience most veterans would not often describe what we do as trivial. More often than not, the code produced is badly structured in retrospect and does not properly limit volatility when business change occurs. Agile addresses this by saying "continuously refactor". How much time could be saved by mapping out an overarching architecture up front?
Wait, aren't you saying that you'd rather do waterfall?
No. Architecture != Waterfall.
A wise friend once said:
If you put a bunch of extreme programmers in the middle of a city, let’s say Marrakech, and ask them to visit five tourist hot spots without using a “map”, they will wander around for days exploring every little passage way by brute force. If you give another bunch of developers a “map” and put them in the same city, they will use the map to go directly to each of the five tourist hot spots in a matter of minutes or hours.
Architecture is the map. You'd be crazy not to have a map (or get started creating one).
Simply sitting down with a one-liner on a piece of card does not mean any developer, from graduate to senior, will be able to first estimate, then write tests, then magically produce well-designed code that properly encapsulates volatility. This is a naive and utopian ideal that never happens. If the architecture is already mapped out and clearly articulated, then it absolutely is achievable.
Once you have a map, in enough detail to be understood by your team, Scrum works okay. It should be done before the team is assembled to begin building.
You should strive to create a shadow board architecture that allows making design decision for new business features easy, consistent and fast.
Ok, so architecture is important, but do we need an Architect?
What skills does agile require of its team members?
- Cross-skilled (Dev, QA, SOA, UI, UX, DBA, Design, and anything else required);
- Able to work on and explain any part of the system;
- TDD & Unit testing;
- QA practices;
- Proven architecture and design skills;
- Industry context knowledge and patterns;
- Ability to write SDK documentation;
- Ability to consistently review code;
- Mentoring skills;
- Able to create processes and procedures to ensure predictable outcomes (i.e. adapt);
- Good communication skills;
- Able to effectively peer program;
No one would argue that business needs are changing faster than ever. This necessitates good design that encapsulates volatility as best it can. The role of an Architect is most definitely STILL REQUIRED and because of faster pace of change, more relevant than ever.
How should architecture fit into an agile process?
It is common to begin (and sometimes complete) the UX work before the Scrum team starts work. Architecture is no different; it should run in parallel with UX.
Architecture should not be a heavy process. It should be mapped out once at the beginning for the whole new system, and sometimes before each major feature addition to an existing product. According to IDesign's "The Method", architecture can be completed in 3 to 5 weeks. It is not something that happens for each user-story.
Some highly respected architects I have spoken to simply state a proper architecture process does not fit with Agile, period. I won't join this debate, however, agile isn't going to step aside any time soon. To date, I haven't ever won the debate on Agile versus "The Method". It will take time. For now, I believe we should work with agile.
Scott Ambler posted his take on fitting architecture into agile processes. The idea is to envision the architecture during the requirements and analysis phase. (Before the development sprints begin). The key is just-enough, then during development keep reiterating over it during the implementation phase. Just-enough for your target team to comprehend. Amend and refine when necessary.
Architecture is not something that is needed for every story. It is the end-goal vision of your system. This may happen once per release or even less frequently. You must have a clear idea of where you want your system to be three or four releases from now. It's OK to change the vision. The vision doesn't have to consider business features in detail; it's more like an overarching strategy for implementing them, but you should consider likely business extensions and how your architecture will support those changes.
Are you building a pluggable SOA system? If so, are there standard patterns all communications should use? Is it worthwhile applying aspects to all components? What would happen if services organically grew wherever developers thought it was quickest and easiest to slap them in? What about realistic and likely business changes? Be cynical; it's healthy. These are the important questions to consider when designing an architecture.
It takes effort to make something appear easy, and it's easy to make something complex. - Unknown.
The role of an Architect in Agile Part 2
What should an architect be responsible for?
An architect or team of architects should be responsible for:
- Assisting in gathering and analysing requirements.
- Selling and negotiating technical constraints and aspects with stakeholders.
- Articulating the end-goal architectural vision designed to encapsulate change.
- Communicating, with diagrams, the intended architecture and implementation plan.
- Proving (or disproving) technologies and techniques (prototyping).
- Owning the definition of integration points and service interfaces.
- Establishing processes to ensure predictable outcomes and quality in the SDLC.
- Overseeing delivery of software in accordance with the intended architecture (reviews).
- Defining acceptable quality levels (coding standards, code metrics and performance parameters).
Controversially, IDesign's "The Method" argues that the architect should also project-manage. The explanation is simple: who else knows best how things integrate and their dependencies?
During implementation the job isn't done. Architecture also involves governance: ensuring all these things have happened. An architect should oversee a feature from inception to implementation to final load and concurrency testing. A Scrum team should only work on one feature at a time (this is a founding principle of agile: focus and don't context-switch). This allows an architect to potentially oversee two features across two teams at the same time. It's also worth considering the architect joining the team as a development resource, if resources permit.
Any respectable Scrum coach will tell you that Scrum does not mandate "no design". It simply states: do what is required. If that means design can be done inside the same sprint as implementation, do it, although that is highly unlikely and I wouldn't advise it. The best plan is to complete the necessary design work before initiating a stream of work with the Scrum team. The trick is not to do too much. How much is too much will vary depending on the strength of experience in the team and how effective they are at communicating. The more inexperienced the team, the more design work is needed.
Having architecture artifacts up front, before starting a piece of work and for grooming sessions, means the design is highly visible and scrutinised. This inevitably improves the design and reduces unknowns.
Who's the boss?
Answer: the client. The Product Owner (PO) represents and speaks for the client. That is not to say the architect shouldn't have direct contact with the client; the PO is there to help with this. Embracing change is the key to a successful software product. Often more valuable information comes to light during the project. Expect change. However, an inexperienced PO will change their mind more often than necessary. This can be addressed with a little more architecture work, which keeps the PO's reasoning honest. Any whiff of analysis paralysis means you either dump the feature/use-case or dumb it down and start the Scrum process with a basic implementation only. Unfortunately a PO that changes their mind too often results in poor quality software that costs more than it should to maintain or extend.
A good PO will work with their architect during the sprint review to ensure the quality metrics are adequate and all agreed standards have been met. Although the architect is technical, they are definitely in the Product Owner camp. The PO should lean on their architect for technical advice and verification of the product delivered. The PO should fail the sprint if the architect can prove that the implementation doesn't comply with the design, or standards have not been met.
IDesign's "The Method" advocates that the architect head up the project team, call the shots, and sign off the final product. This is a very tough sell to the establishment today, but it could be the future.
Some tips on how an architect can go about their job
(Thanks to IDesign's Michael Montgomery for the inspiration for this list).
- Formalise and document your requirements gathering approach.
- Formalise document templates to store the gathered information.
- Don't gather information using sticky notes! Save it digitally using defined consistent templates.
- No gold-plating. Everything is “just enough” based on known and likely use-cases, team composition, and hand-off-point.
- Iterate on everything, including requirements, getting “just enough” before the Design Phase begins. Gather - Refine - Review/Assess, Gather - Refine - Review/Assess, until you're ready to start implementation.
- Continually reinforce to the BAs and POs that they bring the ‘when, what, why, who’, but never the how.
- Developers don’t read. Format use cases as ‘pseudo-code’ (i.e. numbered/bulleted ‘lists’) and diagrams.
- Never use the term ‘spec’. Call it the ‘requirements lists’ or 'acceptance criteria'.
- With the advent of UX, separate functional (UX/UI) from back-end service operational (SOA) requirements lists. Treat back end services as a different product to a client UI. They will have their own lifetimes driven by the same use cases.
- UX is different from UI is different from SOA is different from Framework. Each may need its own requirements treatment, depending on the size and complexity of the system.
- Developers don’t do UX (although they may think they can - don't believe them).
- The UX guys must involve stakeholders early and often. This will need to be quite mature before implementation can begin.
- For user driven applications (which is most), it’s well-crafted UX workflows that produce clearly defined use cases that produce succinct SOA. If you can get the UX guys ‘out in front of the ball’, you put yourself in a sweet spot. This particularly holds true in composite UX.
Conclusion
Given a choice, I would prefer to use IDesign's "The Method". My problem with it has been convincing employers and stakeholders that, even though it's a new approach, it's a proven one. Others with superhuman sales and negotiation skills may have more success than I have. Working in with Scrum, its roles, and other established roles has been a given for me, and I'm betting for you too. However, I believe this is completely realistic in a hybrid model. The role of an architect is absolutely essential, but the role and its skills are far more than a senior developer's. An architect's number one skill is communication. In my opinion, as an industry we need to formalise roles; the architect role is paramount, akin to the building industry's architect in responsibility but more similar to a head chef in activities.
References and more information
Scott Ambler on User Stories
Scott Ambler on Architecture in Agile
IDesign.Net
http://www.idesign.net/articles/agile_and_the_architect.htm
Able Architecture Part 1:
Able Architecture Part 2:
Thursday, July 5, 2012
Fitting Security into the Agile SDLC
Microsoft have some good guidance on this
http://www.microsoft.com/security/sdl/discover/sdlagile.aspx
My personal opinion is that it does look good; however, teams starting out with Agile processes should get the basics right first. I've seen companies rush through Scrum implementations too many times, with little or no training and change management, only to fail. Once you have the basic Scrum process running smoothly, concentrate on implementing good up-front design practices, including security.
Sunday, June 3, 2012
WCF 4 Code Samples
The official code samples from Microsoft for WCF 4 on MSDN.
Requiring SSL over HTTP WCF with an upstream proxy
It's a common problem with WCF services: your deployment infrastructure strips the SSL before the traffic arrives at your service. This is so the site only needs one certificate but can still use a farm of servers to respond to requests. A proxy or firewall holds the certificate and strips the SSL before forwarding traffic on to the server farm. There are still reasons why you may require specifying transport encryption (SSL) in your service config. When you do, the service will fail because SSL is expected but the traffic received is not encrypted.
To fix this problem Michele Leroux Bustamante of IDesign has provided a solution here.
There is also a hot-fix published by Microsoft that adds the enableUnsecuredResponse attribute, which is reputed to solve the issue as well, although I haven't personally tried it.
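For reference, my reading of the hotfix notes is that the attribute goes on the security element of a custom binding, roughly as below. This is a sketch only; I haven't tried it, so verify the exact placement and binding-element names against the hotfix documentation before relying on it.

```xml
<bindings>
  <customBinding>
    <binding name="behindSslTerminator">
      <!-- enableUnsecuredResponse is added by the Microsoft hotfix;
           it is not present in the RTM WCF configuration schema. -->
      <security authenticationMode="UserNameOverTransport"
                enableUnsecuredResponse="true" />
      <textMessageEncoding />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```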
A word of warning regarding manual GC
I was party to an email conversation discussing manual manipulation of the GC. Originally the thread was discussing extreme memory and performance management. Stephen Cohen, Senior Architect at Microsoft added the following response:
A quick word of warning;
Calling the GC is possible but generally not wise. In many scenarios you will find performance drops. Worse yet, the GC is not limited to a single application (unless you are the only application on a device/server). Calling GC might seem good for you but what about any other application that could be impacted?
In my experience, wanting to call the GC directly should raise a flag on the application design and set off a deep design review to investigate before implementing a change. With rare exception you can find a way to manage the flow of data, the size of chunks, the number of instances, the use of critical sections or parallel processing, etc. that will remove the need to call the GC.
That said, it can be done but you should allocate a lot of time to test with enough instrumentation that you can both validate a positive result … better performance than the baseline… and confirm no negative side effects.
Manipulating the GC will affect other .NET applications running on the server.
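If you do experiment with a forced collection, Stephen's "instrument it before you trust it" advice can be sketched like this: measure heap size and pause time around the collection so the call is only kept if the numbers justify it. This is a minimal sketch using standard BCL members (GC.Collect, GC.WaitForPendingFinalizers, GC.GetTotalMemory); the class and method names are mine.

```csharp
using System;
using System.Diagnostics;

public static class GcExperiment
{
    // Measure heap size and pause time around a forced full collection,
    // so a manual GC.Collect is only ever kept if the numbers justify it.
    public static long ForceCollectAndMeasure()
    {
        long before = GC.GetTotalMemory(false);

        var stopwatch = Stopwatch.StartNew();
        GC.Collect();                  // full, blocking collection
        GC.WaitForPendingFinalizers(); // let finalizers complete
        GC.Collect();                  // reclaim objects the finalizers released
        stopwatch.Stop();

        long after = GC.GetTotalMemory(false);
        Console.WriteLine("Heap before: " + before + " bytes, after: " + after
            + " bytes, pause: " + stopwatch.ElapsedMilliseconds + "ms");
        return stopwatch.ElapsedMilliseconds;
    }

    public static void Main()
    {
        ForceCollectAndMeasure();
    }
}
```

Compare the logged pause against your baseline under realistic load; as the quote says, validate both the positive result and the absence of side effects on other applications sharing the machine.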
Tuesday, May 29, 2012
WCF Service Diagnostics Part 2
In the previous post on this topic I gave an outline of the most common tools used to look into problems with WCF. In this post I'll give some more information on more advanced tools and approaches for more insidious problems.
SOAP XML verification.
It is quite common to use WCF services as the boundaries between different parties, or boundaries between the responsibilities of teams. Just as defining an interface lets two developers work on either side in parallel, one developing the interface implementation and the other the consumer, so can a service WSDL be used. In this scenario parties often swap SOAP XML requests and responses as files and check them against their own code.
Tools that can be used are:
- SoapUI. Basically this tool allows you to copy and paste SOAP XML into its UI and then send it to a service. It will also let you see the response SOAP. Obviously it is specific to the SOAP protocol and HTTP. It does appear to have loads of other functionality that could be useful, particularly for Java-based diagnostics.
- Fiddler can be used as well. I have used it to inspect and record HTTP traffic before, but apparently you can also use it as a proxy to return preconfigured responses (although I personally haven't tried this).
- WsdlSoapXmlValidator. This is a rough tool I have written to take request and response SOAP XML and verify them against the XSD schemas embedded within the WSDL (it assumes the WSDL includes the schema and will not work otherwise). It then hosts a dummy service compliant with the WSDL and runs the request and response XML against that service to test actual WCF deserialisation of the samples.
- WireShark is another tool I have reached for on odd occasions to view network traffic. This is useful for TCP/IP and binary based protocols.
- WCF Load Test Tool. This is a free codeplex tool written by the Visual Studio ALM Rangers.
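The XSD-validation step that a tool like WsdlSoapXmlValidator performs can be sketched with the standard XmlReader validation API. This is my own minimal sketch, not the tool's actual code; the file paths are placeholders for your captured sample and the schema extracted from the WSDL.

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

public static class SoapSampleValidator
{
    // Validate a captured XML sample against an XSD schema. Any validation
    // error (or warning, with the flag below) marks the sample invalid.
    public static bool Validate(string xmlPath, string xsdPath)
    {
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, xsdPath); // null = use the schema's targetNamespace
        settings.ValidationFlags |= XmlSchemaValidationFlags.ReportValidationWarnings;

        bool valid = true;
        settings.ValidationEventHandler += (sender, e) =>
        {
            valid = false;
            Console.WriteLine(e.Severity + ": " + e.Message);
        };

        using (var reader = XmlReader.Create(xmlPath, settings))
        {
            while (reader.Read())
            {
                // Reading the document end-to-end drives the validation.
            }
        }

        return valid;
    }
}
```

For real SOAP envelopes you would point this at the body element's schema; validating the full envelope also requires the SOAP envelope schema.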
Calls between layers should be WCF service calls
Everyone knows about creating layers in software architecture. During the IDesign Architecture Clinic I attended in March, Juval talked about separating layers and stopping concerns leaking between them.
The best way to achieve this is by using WCF between layers. When this is applied correctly WCF will provide:
Consistency between layers (transactions);
Scalability (general code performance, farming, or partitioning layers onto different app-servers etc);
Fault isolation (resilience from unhandled exceptions and isolate each thread from another's exceptions);
Security (message encryption, transport encryption, authentication, impersonation etc);
Clean Separation of presentation from business logic (maintainability and flexibility);
Availability (striving for 24/7 99.99% up time);
Responsiveness;
Throughput;
and Synchronisation.
Juval Lowy: "Tell me which one of these you don't like and I'll tell you in which way your system is going to die." (reference)
Juval is also known for saying "...every class as a service...". This might be a little controversial and extreme, but imagine for a moment that the cost of serialisation and deserialisation was eliminated for in-process calls within an AppDomain. Why would you not want all the above aspects WCF brings to the table, not only between layers but also between classes? An insight into the future, perhaps?
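To make the idea concrete, here is a minimal sketch of a WCF contract sitting between the presentation layer and the business layer. All names (IProductService and so on) are illustrative, not from the post, and self-hosting over named pipes requires the full .NET Framework's System.ServiceModel.

```csharp
using System;
using System.ServiceModel;

namespace LayeredDemo
{
    // The contract is the layer boundary: the presentation layer only ever
    // sees this interface, never the business-layer class behind it.
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        decimal GetPrice(int productId);
    }

    public class ProductService : IProductService
    {
        public decimal GetPrice(int productId)
        {
            return 9.99m; // business-layer logic would live here
        }
    }

    public static class HostDemo
    {
        public static void Main()
        {
            // Named pipes keep same-machine, cross-layer calls cheap while
            // retaining WCF's aspects (security, fault isolation, throttling)
            // at the boundary.
            var baseAddress = new Uri("net.pipe://localhost/products");
            using (var host = new ServiceHost(typeof(ProductService), baseAddress))
            {
                host.AddServiceEndpoint(typeof(IProductService),
                    new NetNamedPipeBinding(), string.Empty);
                host.Open();

                var factory = new ChannelFactory<IProductService>(
                    new NetNamedPipeBinding(), new EndpointAddress(baseAddress));
                IProductService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.GetPrice(42));
                ((IClientChannel)proxy).Close();
                factory.Close();
            }
        }
    }
}
```

Swapping NetNamedPipeBinding for a TCP or HTTP binding later moves the layer to another machine without touching the calling code, which is exactly the flexibility the list above is selling.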
Dictionary is not thread safe
No surprises: Dictionary is not thread safe. But what if you're using it with guaranteed unique keys, i.e. each thread is guaranteed to be accessing it with a key unique to that thread alone?
Here's a test:
private static readonly Dictionary<string, string> dictionary = new Dictionary<string, string>();
public static void Main(string[] args)
{
    var threads = new List<Thread>();
    for (int threadNumber = 0; threadNumber < 100; threadNumber++)
    {
        int number = threadNumber;
        var thread = new Thread(
            () =>
            {
                for (int index = 0; index < 1000; index++)
                {
                    Console.WriteLine("Thread" + number + " index " + index);
                    var key = Guid.NewGuid().ToString();
                    dictionary.Add(key, "Value");
                    var dynamicValue = Guid.NewGuid().ToString();
                    dictionary[key] = dynamicValue;
                    Debug.Assert(dictionary[key] == dynamicValue);
                }
            });
        threads.Add(thread);
        thread.Start();
    }

    foreach (var thread in threads)
    {
        thread.Join();
    }
}
So, as you can see, this is creating a bunch of threads that all hammer the dictionary, adding and editing and then asserting everything is as it should be. This runs fine with no exceptions. Or does it...
Comment the Console.WriteLine and it no longer runs...
So what is happening here? As the dictionary grows it needs to resize its internal storage; when this occurs, another thread can either corrupt it or access stale data. (The Console.WriteLine only masks the race: console output synchronises the threads just enough to hide the contention.) So, bad news.
The moral of the story is always add a Console.WriteLine before accessing a dictionary. Just kidding. If you think there will be any sort of multi-threaded contention, use a ConcurrentDictionary instead. Simply reasoning that each thread accesses the dictionary with a guaranteed unique key does not suffice.
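Here is the same hammering exercise rewritten with ConcurrentDictionary, which manages its own internal resizing, so no lock (and no incidental Console.WriteLine "fix") is required. A minimal sketch, using Parallel.For in place of the hand-rolled thread list:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ConcurrentDictionaryDemo
{
    // 100 "threads" x 1000 iterations of add-then-update, mirroring the
    // failing Dictionary test above, but against a thread-safe collection.
    public static int Run()
    {
        var dictionary = new ConcurrentDictionary<Guid, string>();

        Parallel.For(0, 100, threadNumber =>
        {
            for (int index = 0; index < 1000; index++)
            {
                var key = Guid.NewGuid();               // unique per iteration, as before
                dictionary.TryAdd(key, "Value");
                dictionary[key] = Guid.NewGuid().ToString(); // the indexer is also safe
            }
        });

        return dictionary.Count;
    }

    public static void Main()
    {
        Console.WriteLine(ConcurrentDictionaryDemo.Run()); // prints 100000
    }
}
```

This completes without corruption every time; the count is exactly 100,000 because TryAdd with a fresh Guid never collides.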
Sunday, May 6, 2012
Version 1.0.16 of Type Visualiser
A new version of Type Visualiser is available. See the Type Visualiser page for download details.
I've added a new multi-document interface using tabs to allow many diagrams to be open at once. I found this is needed when using the tool for any serious discovery of unfamiliar code.
Unlike the standard WPF tab control, I have written my own to ensure individual tabs are kept in memory for fast switching between them. This consumes more memory the more tabs you open, but waiting between tab switches didn't feel like a good user experience. As types are loaded they are also stored in a global cache to prevent analysing the same type twice, even for different diagrams.
I'm currently looking at changing the way I've implemented zooming, panning and centring to a far better approach. This will allow for animations when zooming, and for an infinite canvas that expands when diagrams push its boundaries.
Enjoy.
Thursday, April 12, 2012
ReadOnlyDictionary Template
A re-usable template for a read only dictionary.
Edit: I changed Keys and Values collection to IEnumerable. Previously it was ICollection which has a Remove and Add method.
public interface IReadOnlyDictionary<TKey, TValue>
{
    bool ContainsKey(TKey key);
    IEnumerable<TKey> Keys { get; }
    IEnumerable<TValue> Values { get; }
    int Count { get; }
    bool IsReadOnly { get; }
    bool TryGetValue(TKey key, out TValue value);
    TValue this[TKey key] { get; }
    bool Contains(KeyValuePair<TKey, TValue> item);
    void CopyTo(KeyValuePair<TKey, TValue>[] array, int arrayIndex);
    IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator();
}

public class ReadOnlyDictionary<TKey, TValue> : IDictionary<TKey, TValue>, IReadOnlyDictionary<TKey, TValue>
{
    private readonly IDictionary<TKey, TValue> dictionary;

    public ReadOnlyDictionary()
    {
        this.dictionary = new Dictionary<TKey, TValue>();
    }

    public ReadOnlyDictionary(IDictionary<TKey, TValue> dictionary)
    {
        this.dictionary = dictionary;
    }

    public ICollection<TKey> Keys
    {
        get { return this.dictionary.Keys; }
    }

    public ICollection<TValue> Values
    {
        get { return this.dictionary.Values; }
    }

    // Explicit implementations satisfy the IEnumerable-returning members of
    // IReadOnlyDictionary; the public properties above return ICollection
    // to satisfy IDictionary.
    IEnumerable<TKey> IReadOnlyDictionary<TKey, TValue>.Keys
    {
        get { return this.dictionary.Keys; }
    }

    IEnumerable<TValue> IReadOnlyDictionary<TKey, TValue>.Values
    {
        get { return this.dictionary.Values; }
    }

    public int Count
    {
        get { return this.dictionary.Count; }
    }

    public bool IsReadOnly
    {
        get { return true; }
    }

    public TValue this[TKey key]
    {
        get { return this.dictionary[key]; }
        set { throw new NotSupportedException("This dictionary is read-only"); }
    }

    public void Add(TKey key, TValue value)
    {
        throw new NotSupportedException("This dictionary is read-only");
    }

    public void Add(KeyValuePair<TKey, TValue> item)
    {
        throw new NotSupportedException("This dictionary is read-only");
    }

    public void Clear()
    {
        throw new NotSupportedException("This dictionary is read-only");
    }

    public bool Contains(KeyValuePair<TKey, TValue> item)
    {
        return this.dictionary.Contains(item);
    }

    public bool ContainsKey(TKey key)
    {
        return this.dictionary.ContainsKey(key);
    }

    public void CopyTo(KeyValuePair<TKey, TValue>[] array, int arrayIndex)
    {
        this.dictionary.CopyTo(array, arrayIndex);
    }

    public bool Remove(TKey key)
    {
        throw new NotSupportedException("This dictionary is read-only");
    }

    public bool Remove(KeyValuePair<TKey, TValue> item)
    {
        throw new NotSupportedException("This dictionary is read-only");
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        return this.dictionary.TryGetValue(key, out value);
    }

    public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
    {
        return this.dictionary.GetEnumerator();
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return ((System.Collections.IEnumerable)this.dictionary).GetEnumerator();
    }
}
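Worth noting: .NET 4.5 ships a built-in equivalent, System.Collections.ObjectModel.ReadOnlyDictionary<TKey, TValue>, which you may prefer to the hand-rolled template above. A quick usage sketch (the dictionary contents here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

public static class BuiltInReadOnlyDictionaryDemo
{
    public static void Main()
    {
        var inner = new Dictionary<string, int> { { "answer", 42 } };

        // Wrap the mutable dictionary; hand only the wrapper to consumers.
        var readOnly = new ReadOnlyDictionary<string, int>(inner);

        Console.WriteLine(readOnly["answer"]); // prints 42

        try
        {
            // The explicit IDictionary implementation throws on any mutation.
            ((IDictionary<string, int>)readOnly).Add("oops", 1);
        }
        catch (NotSupportedException)
        {
            Console.WriteLine("Read-only, as expected.");
        }
    }
}
```

As with the template, it is a wrapper, not a snapshot: changes to the inner dictionary remain visible through the read-only view.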
Sunday, April 8, 2012
Using the Factory Method Pattern
The factory method pattern is a creational software design pattern that encapsulates the construction of a dependency into a method. The method could return any number of concrete implementations, but the consuming code only accesses the dependency through an interface or abstract base class. This pattern is useful when there is distinct logic for returning one concrete class over another. It is also an effective pattern when working with legacy code, as it is easier to introduce a new method than a whole new interface.
The essence of the pattern is simply a virtual method that contains logic to create and/or return an instance of a dependency used by the consuming class.
Here's an example using the pattern to allow for unit testing substitution.
namespace FactoryMethodDemo
{
    public class ProductMaintenance
    {
        public void EditProduct(int id, string name, decimal price)
        {
            var dataAccess = this.CreateDataAccess();
            var product = dataAccess.GetProductById(id);
            product.Name = name;
            product.Price = price;
            dataAccess.Save(product);
        }

        public virtual DataAccessLayer CreateDataAccess()
        {
            return new DataAccessLayer();
        }
    }

    public class DataAccessLayer
    {
        public virtual Product GetProductById(int id)
        {
            // Expensive database call...
            // Omitted
            throw new NotImplementedException();
        }

        public virtual void Save(Product product)
        {
            // Omitted
            throw new NotImplementedException();
        }
    }

    public class Product
    {
        public string Name { get; set; }
        public int Id { get; set; }
        public decimal Price { get; set; }
    }
}
In this example the CreateDataAccess method is the factory method. Granted, this is a very simple example; the method could contain a great deal of initialisation logic, or logic to determine which DataAccessLayer to return if there is more than one data store. This could be useful when the database is partitioned (sharded) across multiple servers.
Things to note:
- Always make factory methods virtual. This allows for testing as well as other variations in sub-classes.
- Prefer to name the method with clear indicators about how it behaves. I have chosen the word "Create" to inform consumers that each call to the method will create a new DataAccessLayer instance.
- Prefer to make the return type an interface. This may not be feasible with legacy code, hence my demo here uses virtual methods, which are easier to introduce into an existing code base than an interface. If it is possible, though, introduce an interface.
namespace TestProject1
{
    using FactoryMethodDemo;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Rhino.Mocks;

    public class ProductMaintenanceTestHarness : ProductMaintenance
    {
        private DataAccessLayer mock;

        public ProductMaintenanceTestHarness(DataAccessLayer mockDataAccessLayer)
        {
            this.mock = mockDataAccessLayer;
        }

        public override DataAccessLayer CreateDataAccess()
        {
            return this.mock;
        }
    }

    [TestClass]
    public class ProductMaintenanceTest
    {
        public TestContext TestContext { get; set; }

        [TestMethod]
        public void EditProductTest()
        {
            var productTestData = new Product
            {
                Id = 1,
                Name = "Bar",
                Price = 89.95M,
            };

            // Able to mock because the methods are virtual
            var mockDataAccessLayer = MockRepository.GenerateMock<DataAccessLayer>();
            mockDataAccessLayer.Expect(m => m.GetProductById(1)).Return(productTestData);
            mockDataAccessLayer.Expect(m => m.Save(productTestData));

            var subject = new ProductMaintenanceTestHarness(mockDataAccessLayer);
            subject.EditProduct(1, "Foo", 99.99M);

            mockDataAccessLayer.VerifyAllExpectations();
            Assert.AreEqual(99.99M, productTestData.Price);
        }
    }
}
Saturday, April 7, 2012
Using a singleton (anti?)pattern
Some things that always apply to a Singleton Pattern (a pattern that falls into the creational pattern category):
- Think twice before using a singleton. Singletons can be bad for: parallel unit tests, dependency coupling, scalability, performance, memory management, complex threading issues (see here and here and countless others).
- Always use interfaces with your singletons.
- Always return the interface from the default static property.
- Prefer using a static default property that returns an interface over static methods that access a private static field. (This helps enforce the pattern when other developers add new members to the singleton).
- Always use a private constructor, this prevents any ad-hoc use of the class bypassing the singleton. (No one can create the class so you can only use the static Default property).
- Seal your singletons. This will disallow someone sub-classing it then instantiating it as a transient class and bypassing the singleton.
- By definition, a singleton should be constructed only when it is first used.
Ok, so you insist on using a singleton. Very well, let's try to do it with the least smelly code possible...
All code examples make use of this interface.
namespace SingletonDemo
{
    public interface ISomeSingleton
    {
        void DoSomething();
        string Stuff { get; }
    }
}
Use a Simple Singleton as a first option. Only use this when you know there will be no thread contention at initialisation time of the singleton. Or maybe it doesn't matter if there is. This works well for unit testing because you can easily inject a mock singleton using a private accessor (reflection) or you could add a internal setter to the Default property and allow your test project access to internals (see this blog post for more details).
namespace SingletonDemo
{
    public sealed class SimpleSingleton : ISomeSingleton
    {
        private static ISomeSingleton defaultInstance;

        private SimpleSingleton()
        {
            // private ctor to prevent transient usage.
        }

        public static ISomeSingleton Default
        {
            get { return defaultInstance ?? (defaultInstance = new SimpleSingleton()); }
        }

        public void DoSomething()
        {
            // omitted...
        }

        public string Stuff
        {
            get
            {
                // omitted...
                return string.Empty;
            }
        }
    }
}
Use what I call a ContentionSingleton when you know there will be thread contention when the singleton is first accessed. This example guarantees only one thread can create the singleton; all other threads wait until it is initialised.
namespace SingletonDemo
{
public sealed class ContentionSingleton : ISomeSingleton
{
private static readonly object SyncRoot = new object();
private static volatile ISomeSingleton defaultInstance;
private ContentionSingleton()
{
// Intialisation code...
}
public static ISomeSingleton Default
{
get
{
if (defaultInstance == null)
{
lock (SyncRoot)
{
if (defaultInstance == null)
{
defaultInstance = new ContentionSingleton();
}
}
}
return defaultInstance;
}
}
public void DoSomething()
{
// Omitted...
}
public string Stuff
{
get
{
// Omitted...
return string.Empty;
}
}
}
}
Things to note:
- It is best practice to use a dedicated and separate lock object. Never use the same lock object for different purposes. The lock object should always be private; no one outside the class should be using it.
- Declare the singleton instance as "volatile". This indicates to the runtime that this object might be changed by different threads at the same time. This exempts this field from compiler optimisation that assumes only one thread will access this object at a time. It also guarantees the most up to date value will be in the field at all times.
- Private constructor.
- Double-check the field both outside the lock and inside it. If two (or more) threads try to initialise the singleton at the same time, one enters the lock and the others wait; when the first exits the lock the next will enter it, and the inner check prevents it re-initialising the singleton.
- This still works well with unit testing for the same reasons as SimpleSingleton above.
There is another way to create singleton behaviour which I am not a fan of, Type Initialised Singleton:
namespace SingletonDemo
{
public sealed class TypeInitialisedSingleton : ISomeSingleton
{
private static readonly ISomeSingleton DefaultInstance;
static TypeInitialisedSingleton()
{
// .NET Runtime guarantees to run static constructors exactly once.
DefaultInstance = new TypeInitialisedSingleton();
// Impossible to intercept during unit testing and mock.
}
private TypeInitialisedSingleton()
{
// Connect to Database...
// Get data using a bunch of SQL statements...
}
public static ISomeSingleton Default
{
get
{
return DefaultInstance;
}
}
public void DoSomething()
{
// Omitted...
}
public string Stuff
{
get
{
// Omitted...
return string.Empty;
}
}
}
}
BAD BAD BAD. There is no way to intercept the static constructor code during unit testing and inject mock behaviour. As soon as the type is referenced by a piece of code the type initialiser will run. I would avoid this at all costs. Even if you don't plan to do unit testing, if you change your mind later it may be hard to change. Generally speaking, type initialisers (the OO term for static constructors) should be avoided, and when used, only used for simple static member initialisation. If something goes wrong in the static constructor, even if handled, it can leave the application in a weird state (check this post).
A much improved version of the TypeInitialisedSingleton is the BishopSingleton (named after Judith Bishop; it's from her book C# Design Patterns [O'Reilly]):
namespace SingletonDemo
{
    public sealed class BishopSingleton : ISomeSingleton
    {
        // Local instance allows for tests to inject mock behaviour
        private static ISomeSingleton localInstance;

        private static class SingletonCreator
        {
            // Despite being internal, this is not visible outside the BishopSingleton class.
            internal static readonly ISomeSingleton DefaultInstance;

            static SingletonCreator()
            {
                // If localInstance is not-null this will not be called.
                DefaultInstance = new BishopSingleton();
            }
        }

        private BishopSingleton()
        {
            // Initialisation...
        }

        public static ISomeSingleton Default
        {
            get { return localInstance ?? SingletonCreator.DefaultInstance; }
        }

        public void DoSomething()
        {
            // omitted...
        }

        public string Stuff
        {
            get
            {
                // Omitted...
                return string.Empty;
            }
        }
    }
}
This is quite elegant, as it doesn't require understanding the intricacies of the locking code, and it still gives tests a means of injecting mock singletons (using private accessors). Although you will get ReSharper and Code Analysis warnings that localInstance is never assigned, which feels a little dirty.
Which is better? I personally prefer the SimpleSingleton when I know there is no thread contention and the ContentionSingleton otherwise. It is more obvious to the average developer what is going on inside the ContentionSingleton compared to the BishopSingleton, in my humble opinion.
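Since .NET 4 there is a fourth option worth a look: Lazy&lt;T&gt;. Its default thread-safety mode gives the same "one thread initialises, the others wait" guarantee as the ContentionSingleton without hand-written locking. A minimal sketch (the ISomeSingleton interface is repeated here so the sample is self-contained; in the post it is already defined once in SingletonDemo):

```csharp
using System;

namespace SingletonDemo
{
    // Repeated from earlier in the post so this sample compiles standalone.
    public interface ISomeSingleton
    {
        void DoSomething();
        string Stuff { get; }
    }

    public sealed class LazySingleton : ISomeSingleton
    {
        // Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication:
        // exactly one thread runs the factory delegate; others block until
        // the value is published.
        private static readonly Lazy<ISomeSingleton> DefaultInstance =
            new Lazy<ISomeSingleton>(() => new LazySingleton());

        private LazySingleton()
        {
            // Initialisation...
        }

        public static ISomeSingleton Default
        {
            get { return DefaultInstance.Value; }
        }

        public void DoSomething()
        {
            // Omitted...
        }

        public string Stuff
        {
            get { return string.Empty; }
        }
    }
}
```

Like the TypeInitialisedSingleton it offers no built-in seam for mock injection, so if testability matters you would still add a localInstance-style field as in the BishopSingleton.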
Finally, think once, twice, and thrice before using a Singleton, prefer using dependency injection, Inversion of Control, or a query or factory pattern, or even caching data in a shared way (MemCache or AppFabric).