Error TF255271 while upgrading TFS 2005 -> 2010

I recently upgraded TFS 2005 to TFS 2010 (using these instructions). It worked great on my test computer, but when I went to migrate the production server, I received the following error:

Warning Message: [2011-05-12 20:12:14Z] Servicing step Register Integration Database failed. (ServicingOperation: UpgradePreTfs2010Databases; Step group: AttachPreTFS2010Databases.TfsFramework)
Warning Message: TF255271: The team project collection could not be created. The number of steps before the completion of project creation is: 216. The number of steps completed before the failure was 10.

The error message doesn’t give any detail at all, so I opened the log file and found this near the bottom:

[Info   @20:12:19.133] [2011-05-12 20:12:14Z][Error] BisCreateSchema.sql Line 816 Error: Incorrect syntax near ','. (10 of 216)
[Info   @20:12:19.133] [2011-05-12 20:12:14Z][Informational] Microsoft.TeamFoundation.Framework.Server.CollectionServicingException: BisCreateSchema.sql Line 816 Error: Incorrect syntax near ','.
---> System.Data.SqlClient.SqlException: Incorrect syntax near ','.

Try as I might, I couldn't find the SQL file it referred to, and Google wasn't much help either – however, it seemed that the SQL file wasn't actually to blame, especially since the same upgrade process had run flawlessly on my test server a few days earlier. Then I realized that my test server was running SQL Server 2008 while my production server was running SQL Server 2005 – and while I hadn't seen it called out explicitly anywhere, SQL Server 2005 isn't supported by TFS 2010.

After much digging, the cause of the error turned out to be that the TFS upgrade tool (and TFS 2010 in general) doesn't support SQL Server 2005. Upgrading the database server to SQL Server 2008 and re-running the process corrected the error and allowed us to complete the migration.
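
If you want to confirm which version the data tier is running before you start, a quick check like this on the database server will tell you (nothing TFS-specific here, just standard server properties):

-- TFS 2010 requires SQL Server 2008 or later; 9.x = 2005, 10.x = 2008
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition')        AS Edition,
       @@VERSION                        AS FullVersionString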

However, I've read that SQL Server 2008 support on TFS 2005 is patchy, so this also obliterates your rollback plan, if you had one 🙂 If you get this error, I hope this helps!

Lightweight, single-row alternative to OUTPUT clause in T-SQL

SQL Server 2005 adds an OUTPUT clause that lets your query act on table rows and return the old and new values. When I've done queuing in the past, I've used the clause to mark a row as processing and return its values, all in a single operation, so it's lightweight and threadsafe. For example:

UPDATE TOP (1) dbo.MyQueue
   SET ClaimedBy = @Server,
       ClaimTime = @ClaimTime
OUTPUT INSERTED.QueueID,
       INSERTED.SomeData1,
       INSERTED.SomeData2,
       INSERTED.SomeData3
  INTO #OutputTable (QueueID, Column1, Column2, Column3)
 WHERE Some Criteria...

To do this, you'll need to create a table called #OutputTable with the right schema first. That works well if you're returning multiple rows from your query, but it's a little cumbersome if you're only handling one row at a time. If you're only returning a single row from your UPDATE query (as I am here), there's an alternative to OUTPUT that's easier to use – just do variable assignment inline in the UPDATE statement! The query above becomes:

UPDATE TOP (1) dbo.MyQueue
   SET ClaimedBy = @Server,
       ClaimTime = @ClaimTime,
       @QueueID = QueueID,
       @OutputVar1 = SomeData1,
       @OutputVar2 = SomeData2,
       @OutputVar3 = SomeData3
 WHERE Some Criteria...

Notice the reversed variable assignment in the second query? I've done away with my temp table and my OUTPUT clause, and now I just have the relevant values from the row I'm interested in. It's much easier to work with, and as an added bonus (though I hope you're not in this situation), it works just fine in SQL 2000.

The caveat is that it’s only good for a single row, and it only works for UPDATE – if you’re using DELETE, you’ll still need the temp table and an OUTPUT clause.
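
For reference, here's roughly what the DELETE version looks like, reusing the hypothetical queue columns from above. The column types in the table variable are just guesses, and a temp table works the same way:

DECLARE @Deleted TABLE (QueueID INT, Column1 VARCHAR(50), Column2 VARCHAR(50), Column3 VARCHAR(50))
DECLARE @Server VARCHAR(50)
SET @Server = 'MyServer'   -- illustrative

-- DELETE can't assign to variables inline, so OUTPUT captures the removed row instead
DELETE TOP (1)
  FROM dbo.MyQueue
OUTPUT DELETED.QueueID,
       DELETED.SomeData1,
       DELETED.SomeData2,
       DELETED.SomeData3
  INTO @Deleted (QueueID, Column1, Column2, Column3)
 WHERE ClaimedBy = @Server

SELECT QueueID, Column1, Column2, Column3
  FROM @Deleted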

Removing an arbitrary number of spaces from a string in SQL Server

When I was concatenating some fields to create an address, I ended up with a number of unnecessary spaces in the result. For example, I had this:

SELECT StreetNumber + ' ' + Direction + ' ' + StreetName + ' ' + StreetType as Address

However, when an address didn’t have a direction, I ended up with a double-space in the middle of my address, and I wanted a way to clean it up. Enter the code I found at http://www.itjungle.com/fhg/fhg101106-story02.html:

SELECT name,
       REPLACE(REPLACE(REPLACE(name,' ','<>'),'><',''),'<>',' ')
  FROM SomeTable

This shortens any run of spaces in the string into a single space – sneaky! It works in any language that supports a function like REPLACE, which scans one string for instances of a second string, and swaps them out for something else.
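
Applied to the address concatenation above, the same trick looks something like this (the table name is just for illustration):

-- Collapse any run of spaces left by empty address parts into a single space
SELECT REPLACE(REPLACE(REPLACE(
           StreetNumber + ' ' + Direction + ' ' + StreetName + ' ' + StreetType,
           ' ', '<>'), '><', ''), '<>', ' ') AS Address
  FROM dbo.Addresses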

Migrate database indexes to a new file group

I recently had to mass-migrate all the indexes from a database to a new file group since we’d added some additional storage to our database server. I found this article at SQL Server Central (unfortunately, registration required, so I’ve included a copy of the original script in the download at the end). While it worked okay, there were some things I didn’t like about it:

  • Assumed a 90% index fill rate
  • "Moved" indexes were all created as non-unique, regardless of the original
  • A failure during index creation left you without an index (drop and then create, with no rollback)
  • The table was un-indexed during the move (index dropped and then created)
  • The script re-created indexes without any "Included" columns, even if the original index had them

To address these limitations, I rebuilt the process using that script as a starting point. The new script:

  • Uses 90% fill rate by default, but if the original index had a different rate specified, it will use that
  • Re-creates indexes as unique if the source index was unique
  • Rollback problem resolved – the new index is created with a different name, the old index is dropped, and then the new index is renamed, all in a TRY-CATCH block (see the sketch below)
  • Since the new index is created and then the old one dropped, table indexing remains “online” during the move
  • Migrates “Included” columns in index
  • Updated the script to use SYS views (breaks compatibility with SQL 2000, since SYS is 2005/2008/beyond only)

I welcome any feedback on the script, and would love to know if you see any improvements that should be made.
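
To illustrate the create-new, drop-old, rename pattern the script uses, here's a heavily simplified, hand-written sketch for a single hypothetical index; the real script generates the equivalent statements from the SYS views for every index it finds:

BEGIN TRY
    -- Build the replacement index on the new filegroup under a temporary name
    CREATE UNIQUE NONCLUSTERED INDEX IX_MyTable_Col1_MOVE
        ON dbo.MyTable (Col1)
        INCLUDE (Col2, Col3)
        WITH (FILLFACTOR = 90)
        ON [NewFileGroup]

    -- Only after the new copy exists, drop the original...
    DROP INDEX IX_MyTable_Col1 ON dbo.MyTable

    -- ...and give the new index the original name
    EXEC sp_rename 'dbo.MyTable.IX_MyTable_Col1_MOVE', 'IX_MyTable_Col1', 'INDEX'
END TRY
BEGIN CATCH
    PRINT 'Index move failed: ' + ERROR_MESSAGE()
END CATCH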

Download .SQL scripts (contains both Original and Modified scripts)

Finding unused tables in SQL Server 2005 and 2008

Recently, I was tasked with "cleaning up" a very large database on our network – it included hundreds of tables with cryptic names, and I wasn't able to tell which ones were still being used and which weren't. SQL Server has triggers for INSERT, UPDATE, and DELETE, but there's no trigger for SELECT, and that's what I really wanted.

However, SQL Server 2005 and later provide something that's almost as good – the sys.dm_db_index_usage_stats dynamic management view. This view tracks usage statistics for the tables and indexes in your databases, and you can use it to determine when a table was last accessed. Though I initially thought it only contained index stats, and so would be useless against tables without indexes, that's not the case; tables without indexes show up as well, listed as "HEAP" entries. This way, you can see which tables are being scanned often (a sign that a better set of indexes is needed), or which indexes aren't being accessed at all and can safely be removed.

Using this data, it’s easy to determine which tables haven’t been accessed since the server was last restarted:

WITH LastActivity (ObjectID, LastAction) AS
(
  SELECT object_id AS ObjectID,
         last_user_seek AS LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
   UNION
  SELECT object_id AS ObjectID,
         last_user_scan AS LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
   UNION
  SELECT object_id AS ObjectID,
         last_user_lookup AS LastAction
    FROM sys.dm_db_index_usage_stats u
   WHERE database_id = db_id(db_name())
)
  SELECT OBJECT_NAME(so.object_id) AS TableName,
         MAX(la.LastAction) as LastSelect
    FROM sys.objects so
    LEFT
    JOIN LastActivity la
      ON so.object_id = la.ObjectID
   WHERE so.type = 'U'
     AND so.object_id > 100
GROUP BY OBJECT_NAME(so.object_id)
ORDER BY OBJECT_NAME(so.object_id)

Since this data is cleared when the SQL Server service restarts, the query only reflects activity since the last restart. Because of this, you'll need to ensure that the server has been running sufficiently long before you rely on this query to decide which tables aren't being accessed by users.
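
One quick way to check how long that's been: tempdb is rebuilt every time the service starts, so its create date is effectively the instance start time:

-- tempdb is recreated at every service startup, so its create_date approximates instance uptime
SELECT create_date AS ApproximateStartTime,
       DATEDIFF(DAY, create_date, GETDATE()) AS DaysRunning
  FROM sys.databases
 WHERE name = 'tempdb'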

Keep in mind that, even if the server has been running for months and a table is still in this list, it may not be safe to delete it. Some tables may be part of year-end or rare processes. This list should be used as a guide to help you figure out what’s safe to delete, and you may even consider renaming objects for a while first, so that any processes that do end up relying on one of these tables can be easily corrected by renaming the objects back.

Moving a SQL Server database to another server on a schedule – without using replication

Recently, I had the need to copy a set of databases from a dozen remote servers to a central server, restore them, and have it happen automatically, with no intervention from me at all. Replication wouldn’t work for the following reasons:

  1. Many tables didn’t have primary keys, so merge replication was out (even though this was only one-way replication)
  2. The size of the databases (28GB in one instance) and the quality/speed of the WAN removed the log shipping option
  3. There’s too much activity to consider any kind of live replication

Given our restrictions, we decided to go the following route. On the remote server, we set up a batch file that did the following:

  1. Use OSQL to back up the databases in question to a folder (a sketch of the backup command follows this list)
  2. Run 7Zip from the command line to compress the backups into separate archives. To make the automatic restore easier later, each archive was named for the database we wanted it restored as on the central server (for example, Site1ProdDB was backed up to Site1ProdDB.BAK, then compressed to Site1ProdDB.7z)
  3. Delete the BAK files
  4. Archives were renamed from *.7z to *.7zz (this is important – I’ll explain why in the server part)
  5. Scripted FTP using Windows command line FTP tool to a folder on our central collection server
  6. Once the FTP was complete, rename the archives on the remote server back from *.7zz to *.7z
  7. Delete the local *.7zz files
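
For reference, the T-SQL that OSQL runs in step 1 is nothing exotic; it's something along these lines, where the backup folder is whatever local path the batch file uses:

-- Back up the database to a flat file that the batch file then compresses and FTPs
BACKUP DATABASE Site1ProdDB
    TO DISK = 'D:\Backups\Site1ProdDB.BAK'
    WITH INIT, STATS = 10   -- INIT overwrites the previous run's file; STATS reports progress in the job output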

That's it for the client – the BAT file was scheduled as a SQL Agent job so that we could kick it off remotely from any site we wanted, or set it up on a schedule. Then, we put a BAT file on the server that did the following:

  1. Check folder for files that match *.7z
  2. For each one found, do the following:
    1. Extract it to a “Staging” folder
    2. Delete the 7z file for that archive
    3. Use OSQL to restore the database from the command line (see the sketch after this list)
    4. Use OSQL to run a script that changes the DB owner, adds some user permissions, and generally does some housekeeping on the database
    5. Use an SMTP tool to send an email notice that the backup has been restored
  3. Repeat step 2 for every .7z file in the folder
  4. As a second step in the SQL Agent job, run “MoveLog.bat” (included below) to finish rotating the logs – it ensures that only logs with meaningful information are kept
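
Here's a rough sketch of the T-SQL behind steps 3 and 4. The logical file names, paths, and housekeeping details are placeholders, since they differ per database:

-- Restore over the previous copy, relocating the files to this server's data/log drives
RESTORE DATABASE Site1ProdDB
    FROM DISK = 'D:\Staging\Site1ProdDB.BAK'
    WITH REPLACE,
         MOVE 'Site1ProdDB_Data' TO 'E:\Data\Site1ProdDB.mdf',
         MOVE 'Site1ProdDB_Log' TO 'F:\Logs\Site1ProdDB.ldf'
GO

-- Housekeeping: reset the owner (user permissions would be granted here as well)
USE Site1ProdDB
GO
EXEC sp_changedbowner 'sa'
GO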

The server BAT process can run as often as desired – in our case, we run it every 30 minutes, so a backup is picked up and restored as soon as it's available. That's where the rename on the client side comes into play: if the files were named Database.7z, the server process would attempt to pick them up while they were still being uploaded via FTP, and shenanigans would ensue. Renaming them only once the upload is complete makes them immediately available for restoring on the server side.

As I said before, I scheduled both the client (source) and the server (restore/destination) process as SQL Agent jobs – the Windows scheduler is too cumbersome to work with remotely, and kicking them off on demand was a pain. With the SQL Agent, they can be started on demand, and then I get an email notification as soon as they’ve been successfully restored.

I’ve attached the files below, and I welcome any feedback that you have or any improvements that can be made – I’m happy to give you credit and post a new version here. Specifically, I’m interested in any feedback about how to make this process more dynamic – I know BAT scripting supports FOR EACH and wildcards, but I was unable to make them work properly with OSQL, so I’d appreciate any input there. Enjoy!

Download the ZIP archive containing the files for this post

Accessing a clustered SQL Server instance without the instance name

When I clustered SQL Server 2005 the first time, it bothered me that I had to access each clustered instance using both the cluster DNS name and the instance name. If my SQL Cluster is called SQL-CLUSTER and the DNS alias of my first instance is SQL-INSTANCE1, I had to connect to SQL-INSTANCE1\INSTANCE1. Since each instance has a dedicated name and IP address that uniquely identifies it, shouldn't that be enough? Why is the instance name required?

It turns out it’s not! With a little tweaking, you can access the instance using just the SQL DNS name, without the instance name, and it works for as many SQL instances as you have on a cluster, not just one (you’ll need to repeat this process for each instance). Here’s how:

  1. On the active node for the instance you’re adjusting, open the “SQL Server Configuration Manager” from the start menu.
  2. Expand “SQL Server Network Configuration” and select the instance you want to edit.
  3. In the right window, right-click “TCP/IP” and select “Properties.”
  4. On the “Protocol” tab, ensure that “Listen All” is set to “Yes”
  5. On the “IP Addresses” tab, scroll all the way to the bottom, to the “IPAll” section.
  6. The “TCP port” box is blank by default – set it to 1433, the default port for SQL Server.
  7. Click “OK” – you’ll receive a message that you need to restart the SQL Service.
  8. Restart the SQL Service on that node.

Presto – done! You can now access this SQL Server instance using either "SQL-INSTANCE1\INSTANCE1" or just "SQL-INSTANCE1". It's also worth noting that this network setting follows the instance between active nodes – if you fail over the SQL instance to the other node, it should still be addressable the same way, and if you check the protocol settings, you'll see that your change is active on that server now as well.

To me, it's so much easier not to have to use the instance name when addressing the server. Since the service responds on port 1433, the default port, this change should be compatible with every application that connects, and I haven't had any problems at all.

If you try this and run into any problems, please leave a comment below.

UPDATE:

After playing with this a little more, it seems like it only works with the SQL Native Client. If you attempt to connect via the SQL Server OLEDB driver or the old SQL 2000 driver, you get the error that you’re not able to connect to the specified instance. Switching your connection method to the SQL Native Client (SQLNCLI) allows you to connect, assuming your application allows that. Has anybody else experienced this problem?

Scheduled Task “Could not start” when installing SQL Server on a Windows Cluster

I ran into this error while deploying SQL Server 2005 Enterprise to a two-node Windows Server 2003 cluster. The SQL Server installation checked all the prereqs with no problems, but as soon as it was time to actually do the installation, it paused for about 5 minutes with the message "SQL Server Setup is preparing to make the requested configuration changes…" After a few minutes with no activity, the installation failed with the following error message:

I checked the “Scheduled Tasks” list on the passive node, and found the following:

Starting the task manually, even while setup was still running on the active node, had no effect. Also, the following entry appeared in the Log (Advanced Menu -> View Log):

“SQL Server Remote Setup .job” (setup.exe) 5/1/2009 11:29:45 AM ** ERROR **

Unable to start task.
The specific error is:
0x80070005: Access is denied.
Try using the Task page Browse button to locate the application.


SOLUTION:

It turns out it’s not an “Access denied” message at all! This occurs when you’re logged in to the desktop of the passive node while you’re doing the SQL installation. I had a remote desktop session open to both nodes, which caused this problem. Simply logging out of the passive node, then attempting the installation again from the active node, will allow setup to complete successfully.

Please, Microsoft: Add a pre-check to the setup process, or at the very least, give me a real error!

Accessing System.DirectoryServices from SQL Server 2005

SQL Server 2005 allows for the integration of .NET assemblies into the databases so that they can be accessed from inside stored procedures and other database functions. Although this is a great new feature, I got hung up on a particularly cryptic error message when I tried to build an assembly and import it.

Since SQL Server makes it difficult to query Active Directory, and I wanted to build an AD-based authentication module for my database application, this new feature seemed like the best way to do it. My assembly depended on System.DirectoryServices in order to access Active Directory, but that wouldn't be a problem, since the .NET 2.0 framework is available from inside SQL Server 2005 (http://msdn2.microsoft.com/en-us/library/ms254506.aspx, provided you've enabled the feature), right? Well, sort of. As it turns out, SQL Server was rushed to RTM too quickly for all of the .NET 2.0 assemblies to be cleared as SAFE, so the ones that weren't fully tested aren't included by default. Fair enough – so it's just a matter of importing System.DirectoryServices, and then importing my assembly that relies on it, right? Again, sort of.

System.DirectoryServices can be imported into SQL Server, but only as an UNSAFE assembly. This has all sorts of other security implications (which is a little ironic, since I was using it to verify user security), but I decided to use it anyway, since I figured that the UNSAFE tag was more of a formality than a real danger, and the assembly would be SAFE once more testing had been done. I imported System.DirectoryServices:

USE master
GO

CREATE ASYMMETRIC KEY asmKey_DirectoryServices
FROM EXECUTABLE FILE = 'c:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
GO

CREATE LOGIN asmLogin_DirectoryServices
FROM ASYMMETRIC KEY asmKey_DirectoryServices
GO

GRANT UNSAFE ASSEMBLY TO asmLogin_DirectoryServices
GO
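
The CREATE ASSEMBLY itself runs against the target database and looks something like this (the database name here is just a placeholder):

USE MyAppDatabase   -- placeholder for the database that will host the CLR code
GO

CREATE ASSEMBLY [System.DirectoryServices]
FROM 'c:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
WITH PERMISSION_SET = UNSAFE
GO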

That imports the System.DirectoryServices assembly as UNSAFE. Next, I imported my assembly as SAFE, since it was signed. The only problem was that when I called my assembly, which reached into System.DirectoryServices, I got an error (I’m calling clrIsMemberOfGroup in my assembly, SqlHelper):

Msg 6522, Level 16, State 2, Line 1
A .NET Framework error occurred during execution of user defined routine or aggregate 'clrIsMemberOfGroup':
System.Security.SecurityException: Request for the permission of type 'System.DirectoryServices.DirectoryServicesPermission, System.DirectoryServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' failed.

After 4 hours on the phone troubleshooting the issue with Microsoft, it turned out that the fix was VERY simple, and even vaguely alluded to in a knowledge base article. In order to reach into the UNSAFE System.DirectoryServices assembly, I had to make my assembly UNSAFE as well. Since the UNSAFE assembly runs outside the bounds of what .NET considers "SAFE", it could potentially return suspect results, so anything that relies directly on those results can't be considered "SAFE" either and has to be tagged "UNSAFE". It seems like I should be able to implement proper sanitizing code in my assembly so that I don't blindly trust the results from the "UNSAFE" assembly, but SQL Server would have none of it. To reach into an UNSAFE assembly, I needed to flag my assembly as UNSAFE – simply placing my assembly import into a "Create key, create login, import assembly" setup like the one I used for System.DirectoryServices (shown below) fixed the problem.
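
That setup, applied to my own assembly, looks essentially the same as before (the DLL path and database name are placeholders):

USE master
GO

CREATE ASYMMETRIC KEY asmKey_SqlHelper
FROM EXECUTABLE FILE = 'c:\Assemblies\SqlHelper.dll'   -- placeholder path to the signed DLL
GO

CREATE LOGIN asmLogin_SqlHelper
FROM ASYMMETRIC KEY asmKey_SqlHelper
GO

GRANT UNSAFE ASSEMBLY TO asmLogin_SqlHelper
GO

USE MyAppDatabase   -- placeholder
GO

CREATE ASSEMBLY SqlHelper
FROM 'c:\Assemblies\SqlHelper.dll'
WITH PERMISSION_SET = UNSAFE
GO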

I suppose the real question is "Did that fix anything?", since all I really did was disable security on those assemblies. It's really hard to throw a security exception when you don't do any sort of security checks. Well, at least I alleviated the symptoms, and now I'll just wait for SP1 to (hopefully) add System.DirectoryServices (among other missing framework assemblies) to the list of assemblies accessible from inside the CLR in SQL Server 2005. I suppose we'll have to wait and see…