Visual Studio solutions don’t load when you double-click on them

When I installed Windows 7, double-clicking on .SLN files to load them in Visual Studio stopped working – I would get the hourglass for a few moments, and then nothing. It turns out that it’s because I had set Visual Studio to always run as an Administrator, but when you double-click a .SLN file, you’re not actually running Visual Studio – you’re running the “Visual Studio Version Selector”, even if you only have one version of Visual Studio installed.

To resolve the problem, you’ll need to set the Version Selector to also run as an Administrator – to do this, find the Version Selector EXE:

C:\Program Files\Common Files\microsoft shared\MSEnv\VSLauncher.EXE (or “Program Files (x86)” if you have an x64 installation)

Right-click on it, go to the “Compatibility” tab, and check the box that says “Run this program as an administrator”. Since I have multiple users on my laptop that I use for development, I had to click “Change Settings for all users” before checking the “Run as admin” box.

Voila! The VS Version Selector will now properly load the solution files when you double-click on them – enjoy!

Migrate database indexes to a new file group

I recently had to mass-migrate all the indexes from a database to a new file group since we’d added some additional storage to our database server. I found this article at SQL Server Central (unfortunately, registration required, so I’ve included a copy of the original script in the download at the end). While it worked okay, there were some things I didn’t like about it:

  • Assumed a 90% index fill rate
  • “Moved” indexes were all created as non-unique, regardless of the original index
  • A failure during index creation left you without an index (the index was dropped and then created, with no rollback)
  • The table was left un-indexed during the move (index dropped, then re-created)
  • Indexes were re-created without any “included” columns, even if the original index had them

To address these limitations, I rebuilt the process using that script as a starting point (the core create/drop/rename swap is sketched after the list below). The new script:

  • Uses a 90% fill rate by default, but honors the original index’s fill factor if one was specified
  • Re-creates indexes as unique if the source index was unique
  • Resolves the rollback problem – the new index is created under a different name, the old index is dropped, and then the new index is renamed, all inside a TRY-CATCH block
  • Keeps the table indexed and “online” during the move, since the new index is created before the old one is dropped
  • Migrates any “included” columns in the index
  • Uses the SYS catalog views (which breaks compatibility with SQL 2000, since those views exist only in 2005 and later)
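
For illustration, here’s a minimal sketch of that create/drop/rename swap – the table, index, column, and filegroup names are hypothetical, not taken from the actual script:

BEGIN TRY
    -- 1. Build the replacement index on the new filegroup under a temporary
    --    name, preserving uniqueness, fill factor, and included columns
    CREATE UNIQUE NONCLUSTERED INDEX IX_Orders_CustomerID_NEW
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue)
        WITH (FILLFACTOR = 90)
        ON [FG_Indexes];

    -- 2. Drop the original only after its replacement exists
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;

    -- 3. Give the replacement the original name
    EXEC sp_rename N'dbo.Orders.IX_Orders_CustomerID_NEW',
                   N'IX_Orders_CustomerID', N'INDEX';
END TRY
BEGIN CATCH
    -- A failure leaves either the original index or the fully-built
    -- replacement in place, so the table is never left without one
    PRINT 'Index move failed: ' + ERROR_MESSAGE();
END CATCH;
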
I welcome any feedback on the script, and would love to know if you see any improvements that should be made.

Download .SQL scripts (contains both Original and Modified scripts)

Finding unused tables in SQL Server 2005 and 2008

Recently, I was tasked with “cleaning up” a very large database on our network – it included hundreds of tables with cryptic names, and I couldn’t tell which ones were still being used and which weren’t. SQL Server provides triggers for INSERT, UPDATE, and DELETE, but there’s no trigger for SELECT, which is what I really needed.

However, SQL Server 2005 and later provide something that’s almost as good – the sys.dm_db_index_usage_stats dynamic management view. This view tracks usage statistics for the tables and indexes in the database, and you can use it to determine when a table was last accessed. Though I initially thought it only contained index stats, and so would be useless for tables without indexes, that’s not the case: tables without indexes appear as well, listed as “HEAP” indexes. This way, you can see which tables are being scanned often (a sign that a better set of indexes is needed), or which indexes aren’t being accessed at all and can safely be removed.
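
For example, a quick query along these lines (my own minimal example, not part of the unused-tables script below) shows the read activity per index in the current database, with the most heavily scanned at the top:

-- Read activity per index in the current database, heaviest scans first
SELECT OBJECT_NAME(s.object_id) AS TableName,
       i.name                   AS IndexName,   -- NULL for a HEAP
       s.user_seeks, s.user_scans, s.user_lookups
  FROM sys.dm_db_index_usage_stats s
  JOIN sys.indexes i
    ON i.object_id = s.object_id
   AND i.index_id  = s.index_id
 WHERE s.database_id = DB_ID()
 ORDER BY s.user_scans DESC;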

Using this data, it’s easy to determine which tables haven’t been accessed since the server was last restarted:

-- Most recent read (seek, scan, or lookup) recorded for each object
WITH LastActivity (ObjectID, LastAction) AS
(
  SELECT object_id,
         last_user_seek
    FROM sys.dm_db_index_usage_stats
   WHERE database_id = DB_ID()
   UNION
  SELECT object_id,
         last_user_scan
    FROM sys.dm_db_index_usage_stats
   WHERE database_id = DB_ID()
   UNION
  SELECT object_id,
         last_user_lookup
    FROM sys.dm_db_index_usage_stats
   WHERE database_id = DB_ID()
)
  SELECT OBJECT_NAME(so.object_id) AS TableName,
         MAX(la.LastAction) AS LastSelect   -- NULL = no reads since restart
    FROM sys.objects so
    LEFT JOIN LastActivity la
      ON so.object_id = la.ObjectID
   WHERE so.type = 'U'        -- user tables only
     AND so.object_id > 100   -- skip system objects
GROUP BY OBJECT_NAME(so.object_id)
ORDER BY OBJECT_NAME(so.object_id)

Since these statistics are cleared when the SQL Server service restarts, this query only shows the tables that haven’t been accessed since the last restart. Because of this, you’ll need to make sure the server has been running for sufficiently long before you rely on this query to tell you which tables users aren’t touching.
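
If you’re not sure how long that’s been, here’s one quick way to check: tempdb is re-created every time the service starts, so its creation date is effectively the time of the last restart.

-- tempdb is rebuilt at every service start, so this is the last restart time
SELECT create_date AS LastRestart
  FROM sys.databases
 WHERE name = 'tempdb';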

Keep in mind that, even if the server has been running for months and a table still shows up in this list, it may not be safe to delete it – some tables are only touched by year-end or other rarely-run processes. Treat this list as a guide to what’s probably safe to delete, and consider renaming candidate objects for a while first, so that any process that does turn out to rely on one of them can be fixed simply by renaming the object back.

Moving a SQL Server database to another server on a schedule – without using replication

Recently, I needed to copy a set of databases from a dozen remote servers to a central server, restore them there, and have the whole thing happen automatically, with no intervention from me at all. Replication wouldn’t work, for the following reasons:

  1. Many tables didn’t have primary keys, so merge replication was out (even though this was only one-way replication)
  2. The size of the databases (28GB in one instance) and the quality/speed of the WAN ruled out log shipping
  3. There’s too much activity to consider any kind of live replication

Given our restrictions, we decided to go the following route. On the remote server, we set up a batch file that did the following:

  1. Use OSQL to back up the databases in question to a folder (a sketch of the backup statement it runs follows this list)
  2. Run 7-Zip from the command line to compress the backups into separate archives. So that the central server could restore each one automatically, each archive was named for the database it would be restored as (for example, Site1ProdDB was backed up to Site1ProdDB.BAK, then compressed to Site1ProdDB.7z)
  3. Delete the BAK files
  4. Rename the archives from *.7z to *.7zz (this is important – I’ll explain why in the server part)
  5. Use the Windows command-line FTP tool with a script to upload the archives to a folder on our central collection server
  6. Once the FTP is complete, rename the uploaded archives on the central server from *.7zz back to *.7z
  7. Delete the local *.7zz files
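
For reference, the statement OSQL executes in step 1 looks roughly like this – the database name and path are placeholders:

-- Roughly what step 1 runs via OSQL (name and path are placeholders)
BACKUP DATABASE Site1ProdDB
    TO DISK = N'D:\Backups\Site1ProdDB.BAK'
    WITH INIT;  -- overwrite the previous backup file instead of appending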

That’s it for the client – the BAT file was scheduled as a SQL Agent job, so we could kick it off remotely from any site we wanted, or set it up on a recurring schedule. Then, we put a BAT file on the server that did the following:

  1. Check folder for files that match *.7z
  2. For each one found, do the following:
    1. Extract it to a “Staging” folder
    2. Delete the 7z file for that archive
    3. Use OSQL to restore the database from the command line (a sketch of the restore follows this list)
    4. Use OSQL to run a script that changes the DB owner, adds some user permissions, and generally does some housekeeping on the database
    5. Use an SMTP tool to send an email notice that the backup has been restored
  3. Repeat step 2 for every .7z file in the folder
  4. As a second step in the SQL Agent job, run “MoveLog.bat” (included below) to finish rotating the logs – it ensures that only logs with meaningful information are kept
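
The restore in step 2.3 and part of the housekeeping in step 2.4 look roughly like this – the logical file names and paths are placeholders (RESTORE FILELISTONLY will show the real ones):

-- Step 2.3: restore over the top of any existing copy (placeholder names/paths)
RESTORE DATABASE Site1ProdDB
    FROM DISK = N'D:\Staging\Site1ProdDB.BAK'
    WITH REPLACE,
         MOVE N'Site1ProdDB_Data' TO N'D:\Data\Site1ProdDB.mdf',
         MOVE N'Site1ProdDB_Log'  TO N'D:\Data\Site1ProdDB.ldf';

-- Step 2.4: one example of the post-restore housekeeping
EXEC Site1ProdDB..sp_changedbowner 'sa';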

The server BAT process can run as often as desired – in our case, we run it every 30 minutes, so a backup is picked up and restored as soon as it’s available. That’s where the rename on the client side comes into play: if the files were named Database.7z, the server process would attempt to pick them up while they were still being uploaded via FTP, and shenanigans would ensue. Renaming them only once the upload is complete means the server process never sees a partial file.

As I said before, I scheduled both the client (source) and the server (restore/destination) process as SQL Agent jobs – the Windows scheduler is too cumbersome to work with remotely, and kicking them off on demand was a pain. With the SQL Agent, they can be started on demand, and then I get an email notification as soon as they’ve been successfully restored.

I’ve attached the files below, and I welcome any feedback you have or any improvements that can be made – I’m happy to give you credit and post a new version here. Specifically, I’m interested in any feedback about how to make this process more dynamic – I know BAT scripting supports FOR loops and wildcards, but I was unable to make them work properly with OSQL, so I’d appreciate any input there. Enjoy!

Download the ZIP archive containing the files for this post

Expand a disk partition under Windows XP or Server 2003

Vista/Server 2008 include support for expanding hard disk partitions in the Disk Management MMC snap-in, but XP/2003 support it as well (if you’re not afraid of the command line). To expand a partition:

  1. Open a command window
  2. Type “DISKPART” and press Enter to run the partition manager
  3. Type “LIST DISK” to show general disk information, including unused space
  4. Type “LIST VOLUME” to show the volumes; make note of the number of the volume you’d like to modify
  5. Type “SELECT VOLUME 0”, where “0” is the number of the volume you’re expanding
  6. Type “EXTEND” to expand the volume into the available unallocated space
  7. Type “EXIT” to leave DISKPART

There you go – enjoy your newly expanded partition, with no reboot necessary!

WARNING: Messing with your partitions can cause serious problems, including losing the ability to boot Windows or permanently deleting data. Only use this utility if you know what you’re doing! You’ve been warned!