Kelly's Space

Coding 'n stuff

Deploying multiple web applications to a single Azure instance and applying web.config transforms correctly

MSDN has a good article on configuring a web role for multiple web sites. The downside to this approach is that it doesn’t use project references and msbuild to ensure that each project is built and deployed before packaging. This is because it simply includes the contents of a folder as the content for the virtual directory (or web site). What I needed was a way to ensure that the following conditions were met:

  • All web projects are rebuilt when the Azure deployment package is built
  • All web.config transforms are executed
  • Only the files needed to run the website are included

After some trial and error, I figured it out using the following approach.

I started with a solution containing a single Azure web role project, called Web1, and added an additional web project called Web2.

I’m going to configure the Azure project so that it has a virtual directory called /Web2 for the Web2 project. To do this, I need to edit the ServiceDefinition.csdef file as follows:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="Azure" 
   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="Web1" vmsize="Small">
    <Sites>
      <Site name="Web" physicalDirectory="..\Web1">
        <VirtualApplication name="Web2" physicalDirectory="..\..\..\Publish\Web2" />
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>

The lines that I changed are the Site element's physicalDirectory attribute and the VirtualApplication element. For the physicalDirectory of Web2, I specified a relative path to a folder called ..\..\..\Publish\Web2. This folder is going to be created off of the solution root folder during build. To make the Web2 project create this folder and deploy to it during build, I'll create a new msbuild target in the project file.

To edit the project file, it must first be unloaded (right click on Web2 in Solution Explorer, and select Unload Project). Once the project is unloaded, right click on Web2 again, and select Edit Web2.csproj.

This will bring up the Web2.csproj file in the editor in Visual Studio. Find the following tag:

<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />

After this tag, add the following target:

  <Target Name="PublishToFileSystem" 
          DependsOnTargets="PipelinePreDeployCopyAllFilesToOneFolder">
    <Error Condition="'$(PublishDestination)'==''" 
           Text="The PublishDestination property is not set." />
    <MakeDir Condition="!Exists($(PublishDestination))" 
             Directories="$(PublishDestination)" />
    <ItemGroup>
      <PublishFiles Include="$(_PackageTempDir)\**\*.*" />
    </ItemGroup>
    <Copy SourceFiles="@(PublishFiles)" 
          DestinationFiles="@(PublishFiles->'$(PublishDestination)\%(RecursiveDir)%(Filename)%(Extension)')" 
          SkipUnchangedFiles="True" />
  </Target>

Save the file, and reload the project by right clicking on Web2 in Solution Explorer and selecting Reload Project.

The last step is to create a prebuild step in the Azure project, so that it builds Web2 with the PublishToFileSystem target. To do this, unload the Azure project in Solution Explorer, and open it up in the editor. Find the following tag:

<Import Project="$(CloudExtensionsDir)Microsoft.WindowsAzure.targets" />

After this tag, add the following:

<Target Name="BeforeBuild">
    <MSBuild Projects="..\Web2\Web2.csproj" 
             ContinueOnError="false" 
             Targets="PublishToFileSystem" 
             Properties="Configuration=$(Configuration);PublishDestination=..\Publish\Web2\;AutoParameterizationWebConfigConnectionStrings=False" />
</Target>

Before the Azure project builds, it will call msbuild on the Web2.csproj project and build the PublishToFileSystem target. It sets the PublishDestination property to the ..\Publish\Web2\ folder that the ServiceDefinition.csdef file referenced.

Save the project file, and reload the Azure project in Visual Studio.

Finally, we need to ensure that the Web2 project gets built before being published. To do this, update the project dependencies so that the Azure project depends on the Web2 project. Right click on the Solution node in Solution Explorer, and select Project Dependencies.

Select Azure from the Projects list, check Web2, and click OK.

Now, when you right click on the Azure project and select “Package”, it will build and package Web1 as it normally would, and Web2 will be built and packaged as a virtual directory called “Web2”, with config transforms applied and only the files needed to run the application included.

Viewing the output window after packaging the Azure project verifies that the config transforms were applied:

------ Build started: Project: Azure, Configuration: Release Any CPU ------
Transformed Web.config using Web.Release.config into obj\Release\TransformWebConfig\transformed\Web.config.
Copying all files to temporary location below for package/publish:
obj\Release\Package\PackageTmp.

You can add as many web sites as you want to a single Azure web role using this method. All you need to do is add the PublishToFileSystem target to the new web project, and add an additional msbuild task to the BeforeBuild target in the Azure project file. You can download the source used in this sample here (updated for VS2012). Hope this helps!

Configuring Event Log Permissions in C#

In NetCOBOL, programs can be configured so that errors are written to the Application event log through the environment variable @CBR_MESSAGE=EVENTLOG.  We set this for all COBOL program steps in NeoBatch automatically, but sometimes the user accounts do not have appropriate permissions to write to the event log.  There is no easy way to adjust the event log permissions; you have to use a special registry value containing an SDDL string.

SDDL strings (MSDN doc: http://msdn.microsoft.com/en-us/library/aa379567(v=VS.85).aspx) can be a pain to generate by hand.  Luckily the .NET Framework comes to the rescue with the RegistrySecurity class.  It has the ability to initialize itself from an SDDL string and to return an SDDL string after it has been configured.

To modify permissions of the event log, you need to create a string value called “CustomSD” under the named event log key in HKLM\SYSTEM\CurrentControlSet\Services\EventLog\.  In the case of the Application event log, that would be “HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application”.  The data of the CustomSD value is an SDDL string.

I’ve written a C# program that takes the log name and user name as parameters, and creates and sets the CustomSD value so the user can write to the event log.  Below is the code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Security.Principal;
using Microsoft.Win32;
using System.Security.AccessControl;

namespace EventLogAccessTool
{
    class Program
    {
        static void Main(string[] args)
        {
            string usage = @"Event Log Access Tool
Usage: EventLogAccessTool <logname> <user>
Where:   logname    - Name of the Windows event log, e.g., Application
         user       - User account (or group) to change access for
         ";
            if (args.Length != 2)
            {
                Console.Error.WriteLine(usage);
                return;
            }

            string logName = args[0];
            if (logName.Trim().Length == 0)
            {
                Console.Error.WriteLine("Invalid logname specified");
                Console.Error.WriteLine(usage);
                return;
            }

            string userAccount = args[1];
            if (userAccount.Trim().Length == 0)
            {
                Console.Error.WriteLine("Invalid user account specified");
                Console.Error.WriteLine(usage);
                return;
            }

            try
            {
                string keyName = @"SYSTEM\CurrentControlSet\Services\EventLog\" + logName;

                using (RegistryKey eventLogKey = Registry.LocalMachine.CreateSubKey(keyName, RegistryKeyPermissionCheck.Default))
                {
                    string existing = eventLogKey.GetValue("CustomSD") as string;
                    if (existing == null)
                    {
                        existing = "O:BAG:SY";  // Owner: Builtin Administrators; Group: Local System
                    }
                    else
                    {
                        Console.WriteLine("Existing SDDL is: " + existing);
                    }

                    // We'll use the RegistrySecurity class for creating the SDDL string.
                    // Initialize it with the existing SDDL string.
                    RegistrySecurity acl = new RegistrySecurity();
                    acl.SetSecurityDescriptorSddlForm(existing);

                    // Add the user account so it can query values and set values 
                    // so that it can read and write to the event log
                    RegistryAccessRule ace = new RegistryAccessRule(userAccount, 
                        RegistryRights.QueryValues | RegistryRights.SetValue, 
                        AccessControlType.Allow);

                    acl.AddAccessRule(ace);

                    string newSDDL = acl.GetSecurityDescriptorSddlForm(AccessControlSections.All);
                    
                    Console.WriteLine("New SDDL is: " + newSDDL);
                    // Write out the new SDDL in the CustomSD string value.
                    eventLogKey.SetValue("CustomSD", newSDDL);
                }
            }
            catch(Exception e)
            {
                Console.Error.WriteLine("An error occurred: " + e.Message);
                return;
            }

            Console.WriteLine("Successfully updated the registry");
        }
    }
}

In the event that the event log security gets messed up, simply delete the CustomSD value.
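If you want to script that reset, here is a minimal sketch (run elevated; it takes the log name as its only argument, and assumes the log key exists):

using Microsoft.Win32;

class ResetCustomSD
{
    static void Main(string[] args)
    {
        // Open the named event log key (e.g., "Application") for writing and
        // delete the CustomSD value so Windows reverts to the default security.
        string keyName = @"SYSTEM\CurrentControlSet\Services\EventLog\" + args[0];
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyName, true))
        {
            key.DeleteValue("CustomSD", false);  // false: don't throw if the value is missing
        }
    }
}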

Diagnosing a Resource Leak

We had a customer report an issue with their NeoKicks converted application, where NeoData was apparently exhausting pooled SQL connections.  It certainly seemed odd, given the fact that their load was relatively low.  To help us debug, we had them create a crash dump of IIS using the Debug Diagnostic Tool from Microsoft.

Once I got a crash dump, I fired up Windbg, attached to the crash dump, and loaded SOS by issuing:

.loadby sos mscorwks

Since this is a .NET solution and the error was related to running out of pooled connections, I could dump the objects with references (that the GC won't collect) using !dumpheap.

Glancing through the output, I found the following:

[Screenshot: !dumpheap statistics showing a count of 525 Alchemy.NeoKicks.WorkerSession objects]

The highlighted number above is the object count for each type.  The reason this one in particular is odd is that a WorkerSession object represents a current, running transaction, and it contains things like open file handles.  If there are 525 active transactions, each holding a few NeoData files, then it is certainly possible the SQL connection pool is exhausted.  Since this server wasn't under a lot of load (it had just been running for a while), 525 seems way too high; I would expect this to be in the 1-10 range.  Luckily, we can dig a little deeper in Windbg to see who has a reference to these objects.

The next step is to get more information about each one of these objects.  You can use:

!dumpheap -type Alchemy.NeoKicks.WorkerSession

to dump out all of the instances of that type.

[Screenshot: !dumpheap output listing the addresses of the WorkerSession objects]

I’ve highlighted the address for one of the objects in the listing from !dumpheap.  I just picked one at random.  I want to find out what has references to this object, to see what might be preventing this object from being garbage collected.  To do this, I use the following:

!gcroot 000000016fa53470

[Screenshot: !gcroot output for the selected WorkerSession object]

I looked at a few other objects (again, using !gcroot) and found that they were all being ultimately held by a System.Threading._TimerCallback object. 

This was getting interesting!  Why in the world would a TimerCallback object have a reference to my WorkerSession object?  I decided it was time to go looking through our NeoKicks code to see where a TimerCallback was used.  It turns out, a WorkerSession object can contain a Timer object that is used when a CICS START is issued so that the transaction will execute on a thread pool thread (and optionally on a time interval).  These objects should have gotten cleaned up by the GC, since these tasks run immediately and return.

I decided to look through the documentation for Timer objects in the MSDN documentation and found the following:

“When a timer is no longer needed, use the Dispose method to free the resources held by the timer.”

Now we’re getting somewhere.  Sure enough, although we implement IDisposable for the WorkerSession object, we were not calling Dispose on the Timer object.  I added the Dispose() call, re-ran a test that reproduced the problem, and the issue went away.
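The shape of the fix looked roughly like this (a minimal sketch, not the actual NeoKicks code; ScheduleStart and startTimer are made-up names for illustration):

using System;
using System.Threading;

public class WorkerSession : IDisposable
{
    // Hypothetical field: the timer created when a CICS START is issued.
    private Timer startTimer;

    public void ScheduleStart(TimerCallback callback, object state, int dueTimeMs)
    {
        // Fires once after dueTimeMs; the transaction runs on a thread pool thread.
        startTimer = new Timer(callback, state, dueTimeMs, Timeout.Infinite);
    }

    public void Dispose()
    {
        // The missing piece: without this call, the Timer keeps the session
        // reachable through its callback, so the GC can never collect it.
        if (startTimer != null)
        {
            startTimer.Dispose();
            startTimer = null;
        }
    }
}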

The moral of this story – if you are using System.Threading.Timer objects in your code, you must call Dispose() on the object when you’re done.  Otherwise, you will end up with a memory leak!

Querying Census Data

I was building a tool to generate some random data for testing, with the goal of producing readable data.  I wanted to get actual city/state/zip code information, so I began looking at the census.gov website for the type of data that is available.  I stumbled across several “names files” that contain first and last names and their frequency.  The last name file was about 2MB in size – perfect for generating some random customer names.

I opened the file in notepad and started looking through the file to find how common my last name is in comparison to others.  Sure, the Find… feature in notepad works – but a SQL query would be better at pulling out the data that I wanted.

Since the format of the file was fixed record size (not comma or tab delimited), I realized that the easiest solution would be to use NeoData to create a map for the fields in the file and import directly into a SQL server table.

Each record has a layout like this:

       01 NAME-RECORD.
          05 NAME     PIC X(15).
          05 FREQ     PIC 9.999.
          05 FILLER   PIC X.
          05 CUM-FREQ PIC ZZ.999.
          05 FILLER   PIC X(2).
          05 RANK     PIC 9(5).

So I fired up NeoData and created a sequential file mapping using the COBOL description above, and added a line sequential endpoint pointing to the dist.all.txt file.

[Screenshot: NeoData sequential file mapping for the NAME-RECORD layout]

Next, I created a SQL mapping with the name field set as the primary key (since it is unique in this file) and added a SQL Endpoint to point to a test database:

[Screenshot: NeoData SQL mapping with a SQL Endpoint pointing to the test database]

Now it is just a matter of running the converter to convert from the line sequential input to the SQL database:

[Screenshot: NeoData converter run]

I now have 88,799 records in my SQL table!  Now for the fun part – fire up SQL Server Management Studio and run some queries.  I wanted to see how some of the names of my coworkers compare to each other.  I used the following query:

SELECT [NAME]
      ,[FREQ]
      ,[CUM-FREQ]
      ,[RANK]
  FROM [VidRent].[dbo].[NAME-RECORD]
  WHERE [NAME] IN ( 'HOLLIS', 'ANDERSON', 'COOK', 'PAULSON', 'ROBSON', 'LANGER' )
  ORDER BY [RANK] ASC

And it gave me the following results:

[Screenshot: query results ordered by rank]

It turned out to be a trivial exercise to map this Census data into SQL Server using the data conversion feature in NeoData, and now the data is in a form that is easy to consume for my data generator.  So when you have a text file that can be described with a COBOL record, you can easily convert it into SQL Server using NeoData by following steps similar to these.
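NeoData handled the conversion for me, but the same fixed-width layout is also easy to parse directly if you just need the data in code.  Here is a minimal C# sketch based on the record description above (field offsets computed from the PIC clauses; dist.all.txt as the input file):

using System;
using System.Globalization;
using System.IO;

class NamesFileParser
{
    static void Main()
    {
        // Offsets follow the COBOL record: NAME X(15), FREQ 9.999, FILLER X,
        // CUM-FREQ ZZ.999, FILLER X(2), RANK 9(5).
        foreach (string line in File.ReadAllLines("dist.all.txt"))
        {
            string name = line.Substring(0, 15).Trim();
            decimal freq = decimal.Parse(line.Substring(15, 5), CultureInfo.InvariantCulture);
            decimal cumFreq = decimal.Parse(line.Substring(21, 6), CultureInfo.InvariantCulture);
            int rank = int.Parse(line.Substring(29, 5));

            if (rank <= 10)  // print the ten most common names
                Console.WriteLine("{0,-15} {1,6} {2,7} {3,6}", name, freq, cumFreq, rank);
        }
    }
}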

JIT debugging in Windows Server 2008 R2

I have my development machine configured with Windows Server 2008 R2 (which is a great desktop OS) but certain Just-In-Time debugging scenarios that worked in previous OS versions do not work out of the box in Server 2008 R2.  Consider the following code:

        static void Main(string[] args)
        {
            if (args.Length == 1 && string.Compare(args[0], "/debug", true) == 0)
            {
                System.Diagnostics.Debugger.Break();
            }
        }

Under Server 2008 R2, this doesn’t launch the JIT debugger dialog.  If the Debugger.Break() is changed to Debugger.Launch(), then the dialog is displayed.

Another scenario that fails to show the JIT debugger is when you have an unhandled exception.  Instead of raising the JIT debugger dialog when an unhandled exception occurs, the process simply exits.

The root cause of this behavior turns out to be Windows Error Reporting (WER) getting in the way.  If you look at the registry key HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting, there is a value named DontShowUI that is set to 1.  Changing this value to zero results in the WER dialog being displayed when an unhandled exception occurs.
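If you'd rather flip the value in code than in regedit, here is a quick sketch (run elevated; assumes the key already exists, as it does on Server 2008 R2):

using Microsoft.Win32;

class EnableWerUI
{
    static void Main()
    {
        // Set DontShowUI to 0 so WER shows its dialog (which offers to attach
        // a registered JIT debugger) instead of silently swallowing the crash.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\Windows\Windows Error Reporting", true))
        {
            key.SetValue("DontShowUI", 0, RegistryValueKind.DWord);
        }
    }
}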

I should also point out that I have Visual Studio installed, so I have JIT debuggers registered.  Once DontShowUI is set to zero, the Debugger.Break() scenario works again as well.

Unexplained failures running debug x64 builds

I was getting random null reference exceptions with a vanilla NetCOBOL for .NET COBOL program.  I tracked it down to a weird behavior in which the .NET runtime was losing my byte array reference inside of an internal struct.  It would be valid before a call to a method, but upon return the array was empty.  I created a C# sample unrelated to COBOL and sent it to Microsoft.  It turns out there is a bug that was introduced in .NET Framework 3.5 SP1 and is fixed in .NET Framework 4.0.  The KB article is 974168.

If you are running x64, using NetCOBOL for .NET, and compiling as debug then there is a high probability that you will experience this problem.  Make sure to have the hotfix applied!  You can request it here: http://support.microsoft.com/kb/974168.

Missing performance counters on x64 – Fix!

I’ve noticed on x64 versions of Windows that performance counter information gets corrupted for no apparent reason.  I have seen it happen on a clean machine, right after installing custom performance counters using installutil and a custom .NET performance counter installer class.

The symptoms are that if you start up perfmon you might get an error saying “MMC failed to initialize the snap-in”, or you might find that certain perf counters are missing (like ones in the Processor category).  In the past, I have resolved problems like this by running lodctr /R from the command line.  This instructs Windows to rebuild the performance counters using the backup perf database that it maintains. 

I had a case today where this approach didn’t work.  I went digging through the registry, and started searching for “Disable Performance Counters” in HKLM\SYSTEM\CurrentControlSet\Services using the Find dialog.  I found several entries with a value of 4 or 1.  I changed all occurrences of that value to 0, and it fixed the problem!
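Here is a sketch of that registry sweep in C# (run elevated; the “Disable Performance Counters” value lives in each service’s Performance subkey, and you should back up the registry before changing it):

using System;
using Microsoft.Win32;

class EnablePerfCounters
{
    static void Main()
    {
        using (RegistryKey services = Registry.LocalMachine.OpenSubKey(
            @"SYSTEM\CurrentControlSet\Services"))
        {
            foreach (string serviceName in services.GetSubKeyNames())
            {
                // Perf counter providers keep their settings in a
                // "Performance" subkey under the service key.
                using (RegistryKey perf = services.OpenSubKey(serviceName + @"\Performance", true))
                {
                    if (perf == null) continue;
                    object disabled = perf.GetValue("Disable Performance Counters");
                    if (disabled is int && (int)disabled != 0)
                    {
                        Console.WriteLine("Re-enabling counters for " + serviceName);
                        perf.SetValue("Disable Performance Counters", 0, RegistryValueKind.DWord);
                    }
                }
            }
        }
    }
}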

Random process hangs while reading output

I’ve been debugging an intermittent problem that occurs under load, where a launched process exits but the threads reading its output hang.  Below is a common snippet of code to launch a process and redirect the output to separate threads (to avoid the deadlock that occurs if the client process fills the output buffer and cannot exit because the host process isn’t reading the pipe):

        public void LaunchProgram(string programName)
        {
            Process p = new Process();
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = programName;

            psi.CreateNoWindow = true;
            psi.WindowStyle = ProcessWindowStyle.Hidden;
            psi.RedirectStandardError = true;
            psi.RedirectStandardOutput = true;
            psi.UseShellExecute = false;
            psi.RedirectStandardInput = false;
            psi.WorkingDirectory = Directory.GetCurrentDirectory();
            p.StartInfo = psi;
            p.Start();

            OutputReader stdOutReader = new OutputReader(p.StandardOutput);
            Thread stdOutThread = new Thread(new ThreadStart(stdOutReader.ReadOutput));
            stdOutThread.Start();
            OutputReader stdErrReader = new OutputReader(p.StandardError);
            Thread stdErrThread = new Thread(new ThreadStart(stdErrReader.ReadOutput));
            stdErrThread.Start();

            p.WaitForExit();

            stdOutThread.Join();
            stdErrThread.Join();
        }

The OutputReader class looks like this:

    public class OutputReader
    {
        StreamReader stream;

        public OutputReader(StreamReader stream)
        {
            this.stream = stream;
        }

        public void ReadOutput()
        {
            string line = null;

            while ((line = stream.ReadLine()) != null)
            {
                // do something with the output
            }
        }
    }

The problem occurs when the LaunchProgram method is called in a multithreaded app. 

The core issue is that the Process class creates pipes for redirecting the output using CreatePipe and then calls CreateProcess with the inheritHandles parameter set to true.  Under load there is a race condition where a concurrently launched process can inherit handles that it shouldn’t.  In the case I was debugging, each of the programs was synchronizing with a database for shared resources that were only released when the process exited.  Since another process had inherited a copy of the pipe handles, the OutputReader thread was still waiting for data to come in because there was still an open pipe handle on the other side.  The other executable was deadlocked because it was waiting for shared resources held by the first process.

The fix for this is to make sure that the CreatePipe and CreateProcess calls are serialized within the process.  An easy fix is to add a static lock object, and lock it around the call to p.Start().  That is:

        static object ProcessLockObj = new object();

        public void LaunchProgram(string programName)
        {
            Process p = new Process();
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = programName;

            psi.CreateNoWindow = true;
            psi.WindowStyle = ProcessWindowStyle.Hidden;
            psi.RedirectStandardError = true;
            psi.RedirectStandardOutput = true;
            psi.UseShellExecute = false;
            psi.RedirectStandardInput = false;
            psi.WorkingDirectory = Directory.GetCurrentDirectory();
            p.StartInfo = psi;

            lock (ProcessLockObj)
            {
                p.Start();
            }

            OutputReader stdOutReader = new OutputReader(p.StandardOutput);
            Thread stdOutThread = new Thread(new ThreadStart(stdOutReader.ReadOutput));
            stdOutThread.Start();
            OutputReader stdErrReader = new OutputReader(p.StandardError);
            Thread stdErrThread = new Thread(new ThreadStart(stdErrReader.ReadOutput));
            stdErrThread.Start();

            p.WaitForExit();

            stdOutThread.Join();
            stdErrThread.Join();
        }

The problem can also manifest itself if you are not using the Process class, but are calling CreateProcess directly.  If you are doing this, make sure that you place the CreatePipe and CreateProcess calls in a critical section.

Lots of debugging and searching narrowed down the problem, and then the following article explained what was going on: http://support.microsoft.com/kb/315939

Improving cursor performance

In NetCOBOL for .NET v3.1 you can get fast forward-only, read-only cursors, provided you configure the ODBC settings in your configuration file accordingly.  By default, all cursors will be read only.  Unfortunately, many people use cursors for updating as well, and usually get this to work by changing the @SQL_CONCURRENCY option to something other than READ_ONLY.  This option affects all cursors in the application, and turns read-only cursors into dynamic ones (which perform much more slowly than their read-only counterparts).

You can get around this by using SQL Options in the NetCOBOL configuration section.  Basically you create two options, one for read only cursors, and the other for updatable cursors.  Then, set the @SQL_ODBC_CURSOR_DEFAULT and @SQL_ODBC_CURSOR_UPDATE settings to point to these two options so that NetCOBOL can pick the correct SQL Option depending on the cursor type.  You are probably asking: How does it know what type the cursor is?  The answer is that it looks for the FOR UPDATE clause in the cursor declaration.  In order for this setup to work, you will need to change all of the cursor declarations for any cursor that is used for updating.

Here’s a sample of what your SQL settings might look like:

<sqlSettings>
    <sqlOptionInf>
        <option name="OPTION1" description="Read Only Cursor Settings">
            <add key="@SQL_CONCURRENCY" value="READ_ONLY" />
            <add key="@SQL_ROW_ARRAY_SIZE" value="1" />
            <add key="@SQL_CURSOR_TYPE" value="FORWARD_ONLY" />
        </option>
        <option name="OPTION2" description="Update cursor settings">
            <add key="@SQL_CONCURRENCY" value="ROWVER" />
            <add key="@SQL_ROW_ARRAY_SIZE" value="1" />
            <add key="@SQL_CURSOR_TYPE" value="DYNAMIC" />
        </option>
    </sqlOptionInf>
    <connectionScope>
        <add key="@SQL_CONNECTION_SCOPE" value="THREAD" />
    </connectionScope>
    <serverList>
        <server name="SERVER1" type="odbc" description="SERVER1">
            <add key="@SQL_DATASRC" value="VidRent" />
            <add key="@SQL_USERID" value="" />
            <add key="@SQL_PASSWORD" value="" />
            <add key="@SQL_ODBC_CURSOR_DEFAULT" value="OPTION1" />
            <add key="@SQL_COMMIT_MODE" value="MANUAL" />
            <add key="@SQL_ODBC_CURSOR_UPDATE" value="OPTION2" />
        </server>
    </serverList>
    <sqlDefaultInf>
        <add key="@SQL_SERVER" value="SERVER1" />
        <add key="@SQL_USERID" value="video" />
        <add key="@SQL_PASSWORD" value="AGMEJJBJKNFPGMAOBPNE" />
    </sqlDefaultInf>
</sqlSettings>

Here is a sample SQL cursor declaration for an updatable cursor:

EXEC SQL DECLARE VIDCUSTBR CURSOR FOR
    SELECT
      VIDEO_NUMBER,
      TITLE,
      MEDIA_TYPE,
      STATUS,
      CUSTOMER_NUMBER,
      DATE_OUT
    FROM
      VIDMAST
    WHERE
      CUSTOMER_NUMBER = :CM-CUSTOMER-NUMBER
    FOR UPDATE
END-EXEC.

cscript error when running under a non-interactive user session

If you are launching cscript on behalf of a user using a delegated token, or by creating a token via LogonUser on a server that the user has never interactively logged on to before, you will probably get an error like "Loading your settings failed. (Access is denied)."

This problem is caused by not having the Windows Scripting Host registry settings in the user's HKCU registry hive. You can solve this for all users by adding the following keys:

HKEY_USERS\.DEFAULT\Software\Microsoft\Windows Scripting Host\Settings\
HKEY_USERS\.DEFAULT\Software\Microsoft\Windows Script Host\Settings\

You’ll also want to make sure that the users have read access to these keys.
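A minimal C# sketch that creates both keys (run elevated; the paths are the ones listed above):

using Microsoft.Win32;

class FixWshSettings
{
    static void Main()
    {
        // Create the Settings keys under the .DEFAULT hive so that
        // non-interactive logons pick up default script host settings.
        string[] keys =
        {
            @".DEFAULT\Software\Microsoft\Windows Scripting Host\Settings",
            @".DEFAULT\Software\Microsoft\Windows Script Host\Settings"
        };
        foreach (string key in keys)
        {
            Registry.Users.CreateSubKey(key).Close();
        }
    }
}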
