Friday, June 7, 2013

On IIS, PowerShell, compatibility and deprecation

Recently I was developing a deployment script – nothing really special (4 web applications + 3 Windows services, a main and a geo-failover environment). As part of the deployment I need to find the running web applications so that I can start them again after deploying binaries and configuration. With PowerShell there are two ways of doing this:

  1. WMI (against classes in the root\MicrosoftIISv2 namespace, introduced with IIS 6.0);
  2. WebAdministration API (introduced with IIS 7.0).

It’s quite unsurprising that both have their own pros and cons:

  • Using WMI requires quite some wandering through documentation, getting used to it, knowing and using different classes (not so obvious at times), and dealing with security and permissions. You also have to respect the fact that once you’ve obtained an object and done something that changes its state, the object you have won’t refresh automatically (i.e. if you have an object that represents a Site and you stopped it, the object in your PowerShell session will still report the “Running” state – see the sketch after this list). But overall it’s ok – once you get an idea of how the whole thing works and get over some immediate problems you’ll be up and running with no major issues.
  • The WebAdministration API at first sight looks easier / more attractive – a high-level API (cmdlets for PowerShell) with a virtual path provider, and classes that closely reflect the objects we manage in IIS through the UI. A second look, however, reveals some not-so-pleasant details: it is packed into a separate dll and PowerShell module (which you need to find / install correctly); there is no direct option for managing remote instances of IIS; and there are quite a lot of cmdlets (comparable in number to the WMI classes for IIS 6.0).
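
To illustrate the stale-state gotcha mentioned above, here is a minimal sketch (the site path ‘W3SVC/1’ is just an example):

# Get a site object from the IIS 6 WMI provider and stop it
$site = Get-WmiObject -Namespace root\MicrosoftIISv2 -Class IIsWebServer `
    -Authentication PacketPrivacy -Filter "Name = 'W3SVC/1'"
$site.Stop()

$site.ServerState    # still reports 2 (Started) – the local copy is stale

# Re-query to see the actual state (4 = Stopped)
$site = Get-WmiObject -Namespace root\MicrosoftIISv2 -Class IIsWebServer `
    -Authentication PacketPrivacy -Filter "Name = 'W3SVC/1'"
$site.ServerState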

A deeper investigation and some testing reveal a few exciting and interesting facts:

  • the root\MicrosoftIISv2 namespace is actually considered deprecated (since IIS 7.0) and you’ll need to install the “IIS 6 WMI compatibility” component in order for it to be available;
  • the WebAdministration API actually relies on WMI classes underneath (which is good if you’ve managed to master the WMI path), which suggests that
    • you can use this new way for doing whatever you want;
    • you potentially don’t need the additional dll and module;
    • it will work seamlessly for managing remote machines.

All is good indeed, until you realize that the new WMI classes … well … do not provide the functionality that you expect. Microsoft did a good job providing a transition matrix (use this for that, etc.), but the point is that the new classes add friction where you would not expect it. For instance, I need to get a list of all running sites (see above). With the IIS 6 WMI provider it’s quite straightforward:


# ServerState = 2 means 'Started' for IIsWebServer – the WHERE clause
# filters server-side, so only running sites come back
Get-WmiObject `
-Namespace root\MicrosoftIISv2 `
-ComputerName $currentNode.HostName `
-Authentication PacketPrivacy `
-Query "SELECT * FROM IISWebServer WHERE ServerState = 2"


One query – and the provider returns exactly the sites we need. Filtering on conditions – that’s why we have SQL-like syntax here, right?



What about the new IIS 7 classes? Surprise! Site / application pool state is no longer a property – now it’s a method! Calling GetState() on an ApplicationPool or Site is pretty much a no-brainer, but how am I supposed to handle that inside a query? No way? Are you kidding? What if I have dozens of sites and I don’t need / want to go through most of them? You don’t care? But that worked perfectly (ok, ok, just worked) for me before – why did you decide to implement a new API and deprecate the old one just one major release after it was shipped? That’s something I don’t understand about Microsoft lately – and this is just one example (with more around Windows Phone, Windows, Office).
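
For comparison, here is a sketch of getting the running sites through the new root\WebAdministration namespace (which requires the “IIS Management Scripts and Tools” feature to be installed) – note that the filtering happens client-side, only after every single site object has been fetched; the sketch assumes 1 is the ‘Started’ state code:

# With the IIS 7 provider the state is a method, so WQL can't filter on it –
# every site has to be fetched and checked one by one
Get-WmiObject -Namespace root\WebAdministration -Class Site `
    -ComputerName $currentNode.HostName -Authentication PacketPrivacy |
    Where-Object { $_.GetState().ReturnValue -eq 1 }    # 1 = Started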



HTH,

AlexS

Thursday, October 25, 2012

Process notes: QA

The QA process works towards the following goals:

  1. Provide a quantitative metric of product quality (the number of open and fixed/verified bugs)
  2. Provide a quantitative metric of functional requirements fulfillment (implemented and verified test cases or user stories against a target total)
  3. Ensure stable product quality over time (regression testing)

The QA process is based on product requirements and works with artifacts (labeled builds) produced by the release management process. Below are some key points for the QA process to meet its goals:

  • QA activities are applied to each official [labeled] build (the set of applied tests might differ, though)
  • Product quality is evaluated against a set of test cases. Test cases are usually divided into groups (by functional area; by priority) – this allows more flexibility in applying test efforts.
  • Each produced [labeled] build should have an associated test report which specifies the tests that were executed and the result of each execution (positive/negative).
  • Each build that’s publicly deployed (i.e. uat/staging/production) should be tested at least against a short-list of critical test cases (usually referred to as ‘smoke test’). If a build does not pass this ‘smoke test’ it can’t (won’t) be deployed anywhere.
  • When adding new functionality or updating existing features, a list of test cases is updated to reflect these changes (new features usually result in new test cases being added to this list).
  • Having a consistent track of test results provides immediate insight into:
    • Development progress (new test cases mean new functionality being added)
    • Stability of each build we produce (the number of bugs and their priority) – which greatly simplifies the task of choosing a ‘good enough’ build for an urgent delivery
    • The amount of work that lies on our plate (blocked test cases + open bugs)

When maintaining automated [selenium] tests within an iterative development process, the practice is usually as follows:

  1. Assuming we have sufficient test coverage for the previous iteration
  2. During the current iteration, the QA team does the following:
    1. Prepares a list of test cases covering the new functionality (during the current sprint these test cases are usually executed manually)
    2. Implements [automated] tests for the test cases that were implemented/delivered during the previous sprint
    3. [optional] Backs critical bugs with automated tests in order to ensure proper regression testing

The reason for automated tests being one iteration behind development is that in order to develop them, the functionality has to be in place, which is not the case during the ‘current’ sprint. With this approach to testing (i.e. using automated tests) a sign of a healthy project is an increase in the number of [passing] automated tests after each iteration.

HTH,
AlexS

Wednesday, October 24, 2012

Process notes: release management

Recently I had to spend some time producing comments on different aspects of software development. Publishing these notes here is a good way of saving them for future use.

 

Basic definitions and requirements

Artifact – a set of files that we work with or produce. This can refer to a source code drop or to binaries produced by building the application from these sources.

Process – a repeatable and traceable set of actions that are performed on a given artifact(s).

In scope of release management process we work with the following artifacts:

  • Source code – a complete set of project source code together with required configuration files.
  • Build – a labeled set of binary files that is sufficient for deployment to any environment we have.

Processes we need to have in place:

  1. Build process – takes the source code and produces a labeled build [package].
  2. Deployment process – takes a build and deploys it to a specified environment.

Process requirements:

  1. Repeatability – we need to be able to repeat any process and be sure that for a given input it will produce exactly the same output.
  2. Traceability – we need to be able to trace when a process was initiated, who initiated it, what the inputs were, and what the outputs/results of the execution were.

Security considerations:

  1. We’d like to control who can trigger certain processes (like deployment to production).
  2. We’d like to control who has influence on the artifacts used to produce the build.

Implementation notes

The crucial aspect of organizing the release management process is the definition and labeling of artifacts. Here’s what might help us improve our process:

  • Having a dedicated release branch which is used as the input for the build process (helps with both repeatability and security)
  • Establishing a consistent build labeling mechanism and using the same labels across all the environments we run.
    • The build label should consist of a major and minor release number (might be based on the sprint number)
    • Should include the source control revision number in some form
    • Should be stamped on each binary we produce during the builds – usually embedded in each assembly via the AssemblyVersion attribute (see the sketch after this list)
    • Should be surfaced somewhere on the web-site – not necessarily visible to the end user; it can be a hidden field or a hint
      • Simplifies support and solving technical issues
      • Helps in development and testing by making it 100% clear which version is being deployed/tested
      • Allows easy and reliable tracking of what was fixed when and where it is deployed to
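
A minimal sketch of the stamping step mentioned above – a PowerShell pass that rewrites the AssemblyVersion attribute before the build runs (the label format and paths here are just assumptions):

param(
    # e.g. major.minor.sprint.revision – the format is illustrative
    [string]$BuildLabel = "2.3.17.4861",
    [string]$SourceRoot = "."
)

# Rewrite the AssemblyVersion attribute in every AssemblyInfo.cs under the tree
Get-ChildItem -Path $SourceRoot -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) `
        -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$BuildLabel"")" |
    Set-Content $_.FullName
}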

Bottom-line:

  1. Define a consistent build artifact labeling mechanism
  2. Define how to build official (i.e. labeled) builds (a release branch or some other strategy)
  3. Define responsible personnel and limit access to the corresponding artifacts/processes
  4. Define how we ensure repeatability (the simplest and most common way is to store official builds somewhere and allow deployment to be triggered for a given set of [build] artifacts)

HTH,
AlexS

Thursday, August 2, 2012

CI and msdeploy: I want to parameterize my package

When we deploy to different environments, we’d like different configuration values to be set during deployment. Some of them (like connection strings) can be parameterized by msdeploy itself when it creates the deployment package, but that’s only part of the story. Say we want some values in web.config to be changed depending on the environment we’re deploying to. Good news: msdeploy has a mechanism called ‘deployment parameters’ – you can supply a file declaring these parameters and msdeploy will parameterize your package (this is not the same as web.config transformation – a good overview of the difference can be found here). Bad news: there’s no UI in Visual Studio that allows you to either specify these parameters or specify a file where they are declared.

In case you decide to build the deployment package on your own (by invoking msdeploy directly or from an msbuild script) you will face another problem: you can either supply the deployment parameters declaration in an xml file or have msdeploy gather/discover them – but not both. So you can’t define some of the parameters in a file and let msdeploy do the rest of the job for you. This is especially inconvenient when you have database deployment [sql] scripts in your package – in order for msdeploy to pick them up [when invoked from the command line] you have to do a lot of plumbing around declarations and command-line arguments. On the other hand, when msdeploy gathers parameters on its own, it tends to parameterize things that you might not want parameterized (like the IIS application name or the application pool name).

This is how it can be solved:
1) When Visual Studio creates a deployment package for your application it looks for a [YourWebProjectName].wpp.targets file and imports it into the running msbuild script (well, it is actually Microsoft.Web.Publishing.targets which looks for and imports this file in case it exists). This little file lets you extend / influence the package building pipeline.
2) The EnablePackageProcessLoggingAndAssert msbuild parameter allows you to see an extended log of what happens when a deployment package is built for your project – by examining this you can discover the set of msbuild targets, which can give you a clue on what to extend/precede.
3) DisableAllVSGeneratedMSDeployParameter allows you to prevent VS from parameterizing your package (so you don’t get duplicate / unnecessary parameters).
4) The ParametersXMLFiles item type allows you to supply your own parameters declaration file (you have to put it inside an ItemGroup element and reference a file with the parameters declaration).
5) When defining your own target, you can hook onto a ‘standard’ one (i.e. one defined in Microsoft.Web.Publishing.targets) by using either of the following attributes: BeforeTargets, AfterTargets – they contain a comma-separated list of targets you want to hook onto. Putting points 3–5 together yields something like the sketch below.
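
A rough sketch of what a [YourWebProjectName].wpp.targets might look like – the parameters file name (Parameters.xml) and the custom target name are illustrative:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- 3) stop VS from generating its own deploy parameters -->
  <PropertyGroup>
    <DisableAllVSGeneratedMSDeployParameter>True</DisableAllVSGeneratedMSDeployParameter>
  </PropertyGroup>

  <!-- 4) supply our own parameters declaration file -->
  <ItemGroup>
    <ParametersXMLFiles Include="Parameters.xml" />
  </ItemGroup>

  <!-- 5) hook a custom target onto the packaging pipeline -->
  <Target Name="MyBeforePackage" BeforeTargets="Package">
    <Message Importance="high" Text="Building deployment package..." />
  </Target>

</Project>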

HTH,
AlexS

Wednesday, August 1, 2012

CI and msdeploy: I want the deployment package to be built by my CI tool

Here’s a scenario from my current project: an ASP.NET MVC application is being deployed to a number of environments (staging, test, uat, production). The Web Deployment Tool (aka msdeploy) is used for assembling the deployment package and for the deployment itself. There’s a CI tool employed (which is TeamCity) with a corresponding set of [ms]build scripts.

At first sight the problem seems fairly simple – if Visual Studio can build a deployment package for us, why can’t we? It ain’t so simple, because in order to invoke msdeploy [for building a deployment package] you need to provide it a lot of arguments and (more importantly) have all the files to be packaged ready at some location (which is not your project directory). This is done by Visual Studio when you … create a deployment package (via the project context menu –> Build Deployment Package). But the problem is that this is an explicit action, triggered via a dedicated UI element (i.e. it’s not part of the build process). Fortunately for us, deep inside it actually relies on a set of MSBuild targets that do all the job (for the curious: they are defined in the Microsoft.Web.Publishing.targets file which resides in the Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\ folder on your system drive). And there’s a ‘magic switch’ that can be passed to MSBuild so it actually builds the deployment package as part of a normal build process. These parameters are:

DeployOnBuild=True
DeployTarget=Package

so you will have to invoke msbuild like this

msbuild MySolution.sln /p:Configuration=Release,DeployOnBuild=True,DeployTarget=Package

In case you want to always create a deployment package during the build, you can define the same parameters in your project file (but this will require editing it outside Visual Studio), as shown below.
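
For reference, defining the same parameters inside the project file would look something like this (restricting them to the Release configuration via the Condition is just an assumption):

<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <DeployOnBuild>True</DeployOnBuild>
  <DeployTarget>Package</DeployTarget>
</PropertyGroup>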

HTH,
AlexS

Tuesday, July 31, 2012

EF Migrations: database schema maintenance burden solved? … Not really.

Just a couple of quick notes about EF Migrations [with the ‘code first’ approach]. It sounded like EF Migrations were going to solve all the problems around the database development process and let all the magic happen right inside Visual Studio. A lot of magic is happening under the hood indeed, but it does not seem to let us forget about sql scripts once and for all (not that I ever believed it really would ;)).

First comes the task of deploying a database. We can have a connection string in [web].config, and in case the database is empty, it will be populated by applying all migrations from the start (beginning with Initial). It works ok unless you have some [initial] data that must be present in the database upon the first launch of the application. You might argue that this is a problem of deployment, not of EF Migrations itself, but wait a minute – how are we supposed to deploy a database without a [sql] script? Ok, EF Migrations gives you the option to create a sql script rather than apply a particular migration to an existing database. But there’s a trick – you still have to have a database in place in order to do this. I’ve found it a bit annoying [and frustrating] to have to create an empty database just to get a migration scripted – Update-Database does not work without a connection string. I wish it could fake it.

Ok, you’ve done your homework and now you have an empty database to feed Update-Database with. My first intention was to let EF Migrations script the latest state of the schema for me [by applying all the migrations one by one]. I issued

Update-Database -Script -ConnectionString "[my connection string]"

and got a script (finally!). But guess what? It didn’t work – because EF Migrations does not put each migration into a separate batch, the script contained a number of very simple syntax errors (the same variable declared multiple times). It doesn’t take long to fix, but in general it does not look reliable enough for me to trust it.

I ended up scripting only the ‘Initial’ migration and letting EF Migrations do the rest of the schema update (with the initial data already in) upon the first launch of the application. This leaves [me] an open question, however: what if we need to start developing ‘version Next’ and begin with the database of the current one? Sure, we can script it into the ‘Initial’ of the new project. Or just make a branch and continue development with all the migrations from the previous version. Neither option looks great in terms of maintenance.
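
For the record, scripting just the ‘Initial’ migration from the Package Manager Console looks roughly like this ($InitialDatabase is EF’s built-in marker for an empty database):

Update-Database -Script -SourceMigration:$InitialDatabase -TargetMigration:"Initial"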

Despite these little issues, EF Migrations are a huge step ahead, and I can see that employing the code-first approach simplifies life for both app and db developers – less sql to produce/review/maintain.

HTH,
AlexS

Saturday, July 21, 2012

Web Deployment Tool – making it install application remotely

The key is in precisely following this instruction when configuring the remote server (it took me quite a while to identify which tutorial/instruction is the one to follow). There are, of course, some other issues that can stand in your way (see the sample invocation after the list):

  • you’d want to install the Web Deployment Tool manually and do a complete install – otherwise you’ll spend a fair amount of time looking for missing options in the IIS Manager console;
  • when setting up Management Service Delegation, make sure that you enable the following providers: contentPath, createApp, dbSqlite, iisApp, package, setAcl – otherwise you will have to carefully read the error messages and reiterate over the delegation config (by default only contentPath and iisApp are included);
  • I would definitely suggest using dedicated low-privileged domain users (and you have to have a domain, of course).
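
Once the remote server is configured, a remote install boils down to a single msdeploy call along these lines (the server, site and credentials below are placeholders; port 8172 is the Web Management Service default):

msdeploy.exe -verb:sync -source:package="MyApp.zip" -dest:auto,computerName="https://remoteserver:8172/msdeploy.axd?site=MyApp",userName="DOMAIN\deployuser",password="...",authType=Basic -allowUntrusted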

HTH,
AlexS