Generate an x509 certificate with an SHA256 signature hash

When authenticating with a vendor using a custom webservice, the vendor requested that we use an x509 certificate with a 2048-bit key and an SHA256 hash (sometimes referred to as SHA2, though SHA2 actually refers to the group of hashes containing SHA256, 384, and 512). I’d used IIS to generate our previous certificate, but IIS will only generate a certificate using an SHA1 hash, and it took quite a bit of research to get a certificate with an SHA256 signature hash, so I wanted to detail the steps here:

  1. First, download and install OpenSSL from Shining Light. The “Light” version of the package will do, since you’re only using basic functionality.
  2. Generate your Certificate request (CSR), specifying an SHA256 signature hash
    1. openssl req -nodes -sha256 -newkey rsa:2048 -keyout C:\SomeFolder\PrivateKey.key -out C:\SomeFolder\CertificateRequest.csr
    2. You’ll be prompted for a few certificate fields, including your state, company name, computer name on your certificate, etc. Enter these as they come up.
  3. This will generate two files – PrivateKey.key (which contains the un-encrypted version of your private key – protect this file, since somebody who obtains it along with your signed public key can impersonate you) and CertificateRequest.csr (your certificate signing request, which is not sensitive).
  4. Though this isn’t required, if you want to confirm what you’ve entered in the CSR, you can view the details using another OpenSSL command:
    1. openssl req -in C:\SomeFolder\CertificateRequest.csr -text -noout
  5. Now that you have your CSR, submit it to whatever signing authority you use – for us, it was Verisign, but there are any number of different CAs out there that can sign it.
  6. Once your CA has signed the CSR, you’ll get back either a binary p7b file (which we’ll call SignedKeyFromCA.p7b) containing your signed public key (and, possibly, the certificate chain your CA used as well), or a binary or Base64 CER file containing just your certificate. Whatever you receive back, you’ll need to convert it to a Base64 CER (called SignedKeyFromCA.cer here), since that’s what OpenSSL expects.
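    1. For example, a binary p7b can usually be converted with this OpenSSL command (a sketch using the file names above – adjust it to whatever your CA actually sent): openssl pkcs7 -inform DER -print_certs -in C:\SomeFolder\SignedKeyFromCA.p7b -out C:\SomeFolder\SignedKeyFromCA.cer
    2. A binary CER can be converted similarly: openssl x509 -inform DER -in C:\SomeFolder\BinaryFromCA.cer -out C:\SomeFolder\SignedKeyFromCA.cer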
  7. To combine your private key with the signed public key to create a certificate:
    1. openssl pkcs12 -export -in C:\SomeFolder\SignedKeyFromCA.cer -inkey C:\SomeFolder\PrivateKey.key -out C:\SomeFolder\SignedKeyPair.p12
    2. Since you’re exporting your private key in this file, you’ll be required to encrypt it with a password, which OpenSSL will prompt you for (twice).
  8. You’ve now got your signed key pair – SignedKeyPair.p12. You can either use this pkcs12 file in your code, or you can import it into your web server (assuming it supports SHA256 hashes). IIS7, for example, supports importing this certificate and using it for SSL – it just doesn’t support generating an SHA2 CSR in the first place.
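
If you go the code route, loading the signed key pair in .NET takes just a couple of lines (a sketch I’ve added – the path and password are placeholders, and X509Certificate2 is the standard .NET class for the job):

Dim cert As New System.Security.Cryptography.X509Certificates.X509Certificate2( _
    "C:\SomeFolder\SignedKeyPair.p12", "YourExportPassword") ' The password you chose during export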

Enjoy! If you have any issues, please feel free to post a comment and I’ll do my best to answer it!

Error TF255271 while upgrading TFS 2005 -> 2010

When I upgraded TFS 2005 to TFS 2010 (using these instructions), everything worked great on my test computer, but when I went to migrate the production server, I received the following error:

Warning Message: [2011-05-12 20:12:14Z] Servicing step Register Integration Database failed. (ServicingOperation: UpgradePreTfs2010Databases; Step group: AttachPreTFS2010Databases.TfsFramework)
Warning Message: TF255271: The team project collection could not be created. The number of steps before the completion of project creation is: 216. The number of steps completed before the failure was 10.

The error message doesn’t give any detail at all, so I opened the log file and found this near the bottom:

[Info   @20:12:19.133] [2011-05-12 20:12:14Z][Error] BisCreateSchema.sql Line 816 Error: Incorrect syntax near ‘,’. (10 of 216)
[Info   @20:12:19.133] [2011-05-12 20:12:14Z][Informational] Microsoft.TeamFoundation.Framework.Server.CollectionServicingException: BisCreateSchema.sql Line 816 Error: Incorrect syntax near ‘,’.
—> System.Data.SqlClient.SqlException: Incorrect syntax near ‘,’.

Try as I might, I couldn’t find the SQL file it referred to, and Google wasn’t much help either. However, it seemed that the SQL file wasn’t actually to blame, especially since the same upgrade process had run flawlessly on my test server a few days earlier. Then I realized the difference: my test server was running SQL Server 2008, while my production server was running SQL Server 2005.

While I didn’t read specifically anywhere that this was a problem, the cause of the error ends up being that the TFS upgrade tool (and TFS 2010 in general) doesn’t support SQL Server 2005. Upgrading the database server to SQL Server 2008 and re-running the process corrected the error and allowed us to complete the migration.

However, I’ve read that TFS 2005’s support for SQL 2008 is patchy, so upgrading SQL also obliterates your rollback plan, if you had one 🙂 If you run into this error, I hope this helps!

Lightweight, single-row alternative to OUTPUT clause in T-SQL

SQL Server 2005 added the OUTPUT clause, which lets a query that modifies table rows also return the old and new values. When I’ve done queuing in the past, I’ve used the clause to mark a row as processing and return its values, all in a single operation, so it’s lightweight and thread-safe. For example:

UPDATE TOP (1) dbo.MyQueue
   SET ClaimedBy = @Server,
       ClaimTime = @ClaimTime
OUTPUT INSERTED.QueueID,
       INSERTED.SomeData1,
       INSERTED.SomeData2,
       INSERTED.SomeData3
  INTO #OutputTable (QueueID, Column1, Column2, Column3)
 WHERE Some Criteria...

To do this, you’ll need to create a table called #OutputTable with the right schema, which works well if you’re returning multiple rows from your query, but is a little cumbersome to work with if you’re only doing one row at a time. If you’re only returning a single row from your UPDATE query (as I am here), there’s an alternative to OUTPUT that’s easier to use – just do variable assignment inline in the UPDATE statement! The query above becomes:

UPDATE TOP (1) dbo.MyQueue
   SET ClaimedBy = @Server,
       ClaimTime = @ClaimTime,
       @QueueID = QueueID,
       @OutputVar1 = SomeData1,
       @OutputVar2 = SomeData2,
       @OutputVar3 = SomeData3
 WHERE Some Criteria...

Notice the reversed variable assignment in the second query? I’ve done away with my table, and my OUTPUT clause, and now I just have the relevant values from the row I’m interested in. Much easier to work with, and as an added bonus (though I hope you’re not in this situation), it works just fine in SQL 2000.

The caveat is that it’s only good for a single row, and it only works for UPDATE – if you’re using DELETE, you’ll still need the temp table and an OUTPUT clause.
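
For reference, the single-row DELETE version still looks like the first example. Here’s a quick sketch I’ve added (the table variable’s column types are placeholders):

DECLARE @OutputTable TABLE (QueueID INT, Column1 VARCHAR(50), Column2 VARCHAR(50), Column3 VARCHAR(50))

DELETE TOP (1) dbo.MyQueue
OUTPUT DELETED.QueueID,
       DELETED.SomeData1,
       DELETED.SomeData2,
       DELETED.SomeData3
  INTO @OutputTable (QueueID, Column1, Column2, Column3)
 WHERE Some Criteria...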

Reading from C:\ProgramData without requiring UAC elevation

When trying to read some user settings from C:\ProgramData in my .NET app, I was getting an Access Denied exception, even though I was only attempting to read the configuration file, not write anything:

System.UnauthorizedAccessException: Access to the path ‘C:\ProgramData\YourApp\1.0.0.0\settings.xml’ is denied.

It turns out that I needed to signify my intent not to write anything when I opened the stream. This code generated the exception (the “Using” statement itself threw it):

' FileMode.Open by itself opens the file with FileAccess.ReadWrite, which demands write permission
Using f As New FileStream(MySettingsFilePath, FileMode.Open)
    Dim formatter As New Formatters.Binary.BinaryFormatter
    MySettings = formatter.Deserialize(f)
    f.Close()
End Using

However, by changing the “FileStream” to a “StreamReader”, I signify my intent to read and not write, so the code runs without an issue (there are two changes):

Using f As New StreamReader(MySettingsFilePath)
    Dim formatter As New Formatters.Binary.BinaryFormatter
    MySettings = formatter.Deserialize(f.BaseStream)
    f.Close()
End Using
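
Alternatively – a variation I’ll sketch here, though it’s not what I used – you can keep the FileStream and just state the access level explicitly:

Using f As New FileStream(MySettingsFilePath, FileMode.Open, FileAccess.Read)
    Dim formatter As New Formatters.Binary.BinaryFormatter
    MySettings = formatter.Deserialize(f)
End Using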

MORAL – Elevation isn’t required to read common application settings, only to write them, but you need to be clear about what you intend to do!

Periodic timeouts with local WCF endpoint

I have two applications – a Windows service and a client WinForm – that run on the same box, and the client needs to check on the status of the service every few seconds. It worked well most of the time, but every fourth or fifth call would time out – I’d receive a System.TimeoutException, no matter how long the timeout was actually set for. It didn’t make sense – the two apps are on the same box, it happened with both a TCP and a named pipe endpoint, and it could happen even when the service wasn’t busy.

It turns out that WCF endpoints only handle a very limited number of simultaneous connections, and though I was creating a new ChannelFactory each time, I wasn’t closing the channel when I was finished with it. I assumed that it would automatically close when it passed out of scope, but no dice. Since the channels remained open, it didn’t take long to fill them all up, and then the app would appear unresponsive until my channels started to time out. So here’s what I ended up doing – the critical line is at the end:

Dim tcpBinding As New NetTcpBinding
Dim pipeFactory As ChannelFactory(Of WCF_Class.IServiceRemoting) = New  _
    ChannelFactory(Of WCF_Class.IServiceRemoting)(tcpBinding, "net.tcp://localhost:4079")
Dim ServiceWCFConnection As WCF_Class.IServiceRemoting = pipeFactory.CreateChannel
MessageBox.Show(ServiceWCFConnection.Responsive)
pipeFactory.Close() ' This is what's important!

That’s it – make sure you close your Channel when you’re done with it! If you leave it open, you quickly hit the server’s connection limit and new requests will fail. Closing the channel when you’re done frees it up, instead of letting it time out and close on its own.
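
One refinement worth mentioning (my own sketch, reusing tcpBinding from the snippet above): Close() can itself throw if the channel has already faulted, so a defensive version aborts the factory on failure instead of waiting for a timeout:

Dim factory As New ChannelFactory(Of WCF_Class.IServiceRemoting)(tcpBinding, "net.tcp://localhost:4079")
Try
    Dim ServiceWCFConnection As WCF_Class.IServiceRemoting = factory.CreateChannel
    MessageBox.Show(ServiceWCFConnection.Responsive)
    factory.Close() ' A clean close frees the connection immediately
Catch ex As Exception
    factory.Abort() ' Tear everything down without waiting for a timeout
    Throw
End Try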

Removing an arbitrary number of spaces from a string in SQL Server

When I was concatenating some fields to create an address, I ended up with a number of unnecessary spaces in the result. For example, I had this:

SELECT StreetNumber + ' ' + Direction + ' ' + StreetName + ' ' + StreetType as Address

However, when an address didn’t have a direction, I ended up with a double space in the middle of the address, and I wanted a way to clean it up. Enter the code I found at http://www.itjungle.com/fhg/fhg101106-story02.html:

SELECT name,
       REPLACE(REPLACE(REPLACE(name,' ','<>'),'><',''),'<>',' ')
  FROM SomeTable

This collapses any run of spaces in the string into a single space – sneaky! It works in any language that supports a function like REPLACE, which scans one string for instances of a second string and swaps them out for something else.
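
To see it in action, here’s a quick example I’ve added using a literal string:

SELECT REPLACE(REPLACE(REPLACE('123    N   Main  St', ' ', '<>'), '><', ''), '<>', ' ')
-- Returns: 123 N Main St

Each space becomes '<>', the '><' pairs inside a run of spaces collapse away, and the single '<>' left per run turns back into one space. The one caveat is that it assumes your data doesn’t already contain '<' or '>'.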

Visual Studio solutions don’t load when you double-click on them

When I installed Windows 7, double-clicking on .SLN files to load them in Visual Studio stopped working – I would get the hourglass for a few moments, and then nothing. It turns out that it’s because I had set Visual Studio to always run as an Administrator, but when you double-click a .SLN file, you’re not actually running Visual Studio – you’re running the “Visual Studio Version Selector”, even if you only have one version of Visual Studio installed.

To resolve the problem, you’ll need to set the Version Selector to also run as an Administrator – to do this, find the Version Selector EXE:

C:\Program Files\Common Files\microsoft shared\MSEnv\VSLauncher.EXE (or “Program Files (x86)” if you have an x64 installation)

Right-click on it, go to the “Compatibility” tab, and check the box that says “Run this program as an administrator”. Since I have multiple users on my laptop that I use for development, I had to click “Change Settings for all users” before checking the “Run as admin” box.

Voila! The VS Version Selector will now properly load the solution files when you double-click on them – enjoy!

Migrate database indexes to a new file group

I recently had to mass-migrate all the indexes from a database to a new file group since we’d added some additional storage to our database server. I found this article at SQL Server Central (unfortunately, registration required, so I’ve included a copy of the original script in the download at the end). While it worked okay, there were some things I didn’t like about it:

  • Assumed a 90% index fill rate
  • “Moved” indexes were all created as non-unique, regardless of the original
  • A failure during index creation left you without an index (it dropped and then re-created, with no rollback)
  • The table was un-indexed during the move (index dropped and then created)
  • The script re-created indexes without any “Included” columns, even if the original index had them

To address these limitations, I rebuilt the process using that script as a starting point. The new script:

  • Uses 90% fill rate by default, but if the original index had a different rate specified, it will use that
  • Re-creates indexes as unique if the source index was unique
  • Rollback problem resolved – the new index is created with a different name, the old index is dropped, and then the new index is renamed, all in a TRY-CATCH block (see the sketch after this list)
  • Since the new index is created and then the old one dropped, table indexing remains “online” during the move
  • Migrates “Included” columns in index
  • Updated the script to use SYS views (breaks compatibility with SQL 2000, since SYS is 2005/2008/beyond only)
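
The heart of the TRY-CATCH approach looks something like this (a simplified sketch for a single hypothetical index – the actual script generates these statements from the SYS views):

BEGIN TRY
    -- Build the replacement index on the new file group first
    CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn_NEW
        ON dbo.MyTable (MyColumn)
        INCLUDE (MyIncludedColumn)
        WITH (FILLFACTOR = 90)
        ON [NewFileGroup]

    -- Only drop the old index once the new one exists
    DROP INDEX IX_MyTable_MyColumn ON dbo.MyTable

    -- Give the new index the original name
    EXEC sp_rename 'dbo.MyTable.IX_MyTable_MyColumn_NEW', 'IX_MyTable_MyColumn', 'INDEX'
END TRY
BEGIN CATCH
    PRINT 'Index move failed: ' + ERROR_MESSAGE()
END CATCH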

I welcome any feedback on the script, and would love to know if you see any improvements that should be made.

Download .SQL scripts (contains both Original and Modified scripts)

Finding unused tables in SQL Server 2005 and 2008

Recently, I was tasked with “cleaning up” a very large database on our network – it included hundreds of tables with cryptic names, and I wasn’t able to tell which ones were still being used and which weren’t. SQL Server has triggers for INSERT, UPDATE, and DELETE, but no trigger for SELECT, and that’s what I really wanted.

However, SQL Server 2005 and later provide something that’s almost as good – the sys.dm_db_index_usage_stats system view. This view accumulates statistics on how and when each index in a database is used, and you can use it to determine when a table was last accessed. Though I initially thought it would be useless against tables without indexes, that’s not the case: the tables themselves appear as well, treated as “HEAP” indexes (index_id 0). This way, you can see which tables are being scanned often (a sign that a better set of indexes is needed), or which indexes aren’t being accessed at all and can safely be removed.

Using this data, it’s easy to determine which tables haven’t been accessed since the server was last restarted:

WITH LastActivity (ObjectID, LastAction) AS
(
  SELECT object_id AS ObjectID,
         last_user_seek as LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
   UNION
  SELECT object_id AS ObjectID,
         last_user_scan as LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
   UNION
  SELECT object_id AS ObjectID,
         last_user_lookup as LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
)
  SELECT OBJECT_NAME(so.object_id) AS TableName,
         MAX(la.LastAction) as LastSelect
    FROM sys.objects so
    LEFT
    JOIN LastActivity la
      ON so.object_id = la.ObjectID
   WHERE so.type = 'U'
     AND so.object_id > 100
GROUP BY OBJECT_NAME(so.object_id)
ORDER BY OBJECT_NAME(so.object_id)

Since these statistics are cleared when the SQL service restarts, a NULL LastSelect only means a table hasn’t been accessed since the last restart. Because of this, you’ll need to ensure that the SQL Server has been running for sufficiently long before you rely on this query to tell you which tables aren’t accessed by users.

Keep in mind that, even if the server has been running for months and a table is still in this list, it may not be safe to delete it. Some tables may be part of year-end or rare processes. This list should be used as a guide to help you figure out what’s safe to delete, and you may even consider renaming objects for a while first, so that any processes that do end up relying on one of these tables can be easily corrected by renaming the objects back.
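
For example (table name hypothetical), a rename like this is trivial to reverse if a year-end process complains:

EXEC sp_rename 'dbo.CrypticOldTable', 'zzz_CrypticOldTable'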

Moving a SQL Server database to another server on a schedule – without using replication

Recently, I had the need to copy a set of databases from a dozen remote servers to a central server, restore them, and have it happen automatically, with no intervention from me at all. Replication wouldn’t work for the following reasons:

  1. Many tables didn’t have primary keys, so merge replication was out (even though this was only one-way replication)
  2. The size of the databases (28GB in one instance) and the quality/speed of the WAN removed the log shipping option
  3. There’s too much activity to consider any kind of live replication

Given our restrictions, we decided to go the following route. On the remote server, we set up a batch file that did the following:

  1. Use OSQL to back up the databases in question to a folder
  2. Run 7-Zip from the command line to compress the backups into separate archives. To support auto-restoring later, each archive was named for the database we wanted it restored as on the central server (for example, Site1ProdDB was backed up to Site1ProdDB.BAK, then compressed to Site1ProdDB.7z)
  3. Delete the BAK files
  4. Archives were renamed from *.7z to *.7zz (this is important – I’ll explain why in the server part)
  5. Use scripted FTP (via the Windows command-line FTP tool) to upload the archives to a folder on our central collection server
  6. Once the FTP was complete, rename the uploaded archives on the central server back from *.7zz to *.7z
  7. Delete the local *.7zz files
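
A minimal sketch of what that client BAT file might look like for a single database (my reconstruction – server names, paths, and the FTP script are placeholders, not the original files from the download):

@ECHO OFF
REM 1. Back up the database locally over a trusted connection
osql -E -Q "BACKUP DATABASE Site1ProdDB TO DISK = 'C:\Backups\Site1ProdDB.BAK' WITH INIT"

REM 2. Compress the backup, named for the database it will be restored as
"C:\Program Files\7-Zip\7z.exe" a C:\Backups\Site1ProdDB.7z C:\Backups\Site1ProdDB.BAK

REM 3. Remove the uncompressed backup
DEL C:\Backups\Site1ProdDB.BAK

REM 4. Upload under the temporary .7zz extension; upload.txt is an FTP script
REM    that PUTs the file and renames it to .7z once the upload completes
REN C:\Backups\Site1ProdDB.7z Site1ProdDB.7zz
ftp -s:C:\Backups\upload.txt collection.example.com

REM 5. Clean up the local archive
DEL C:\Backups\Site1ProdDB.7zz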

That’s it for the client – the BAT file was scheduled as a SQL Agent job so that we could kick it off remotely from any site we wanted, or set it up on a schedule. Then, we put a BAT file on the server that did the following:

  1. Check folder for files that match *.7z
  2. For each one found, do the following:
    1. Extract it to a “Staging” folder
    2. Delete the 7z file for that archive
    3. Use OSQL to restore the backup from the command line (see the sketch after this list)
    4. Use OSQL to run a script that changes the DB owner, adds some user permissions, and generally does some housework on the database
    5. Use an SMTP tool to send an email notice that the backup has been restored
  3. Repeat step 2 for every .7z file in the folder
  4. As a second step in the SQL Agent job, run “MoveLog.bat” (included below) to finish rotating the logs – it ensures that only logs with meaningful information are kept
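
The restore and housekeeping steps (2.3 and 2.4) were OSQL one-liners along these lines (again a sketch – the database name, paths, and housekeeping script are placeholders):

REM Restore the extracted backup over the existing copy of the database, if any
osql -E -Q "RESTORE DATABASE Site1ProdDB FROM DISK = 'C:\Staging\Site1ProdDB.BAK' WITH REPLACE"

REM Change the owner, add user permissions, and do the rest of the housework
osql -E -i C:\Scripts\PostRestoreHousekeeping.sql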

The server BAT process can run as often as desired – in our case, we run it every 30 minutes, so the backup will be picked up and restored as soon as it’s available. That’s where the rename from the client side comes into play: If the files were named Database.7z, then the server process would attempt to pick them up while they’re being uploaded via FTP, and shenanigans would ensue. By renaming them when they’re done uploading, they become immediately available for restoring on the server side.

As I said before, I scheduled both the client (source) and the server (restore/destination) process as SQL Agent jobs – the Windows scheduler is too cumbersome to work with remotely, and kicking them off on demand was a pain. With the SQL Agent, they can be started on demand, and then I get an email notification as soon as they’ve been successfully restored.

I’ve attached the files below, and I welcome any feedback that you have or any improvements that can be made – I’m happy to give you credit and post a new version here. Specifically, I’m interested in any feedback about how to make this process more dynamic – I know BAT scripting supports FOR loops and wildcards, but I was unable to make them work properly with OSQL, so I’d appreciate any input there. Enjoy!

Download the ZIP archive containing the files for this post