Friday, December 30, 2011

A year of Reading

A while ago, I outlined a system that I implemented to ensure that I a) read more books and b) read a variety of books. In short, I have my books grouped into 5 lists: Recreational, Classical, Technical, Business, and Religious reading. At any given time, I'm reading 3 books: a Recreational or Classical book, a Technical or Business book, and a Religious book. I read the first category on Mondays, Fridays, and Saturdays, the second category on Tuesday through Thursday, and the Religious books on Sunday.

And this system has worked very well for 2011, the first full year I've tried it. In that span of time, I've read 10 Recreational books, 5 Classical books, 3 Technical books, and 5 Business books, for a total of 23 books. The books, in the order I read them, are:

  • Gulliver's Travels (Classical)
  • jQuery in Action (Technical)
  • Warrior Heir (Recreational)
  • Pride and Prejudice (Classical)
  • Web Marketing for Dummies (Business)
  • Wizard Heir (Recreational)
  • Arguing with Idiots (Recreational)
  • Oliver Twist (Classical)
  • More Eric Meyer on CSS (Business)
  • 4 Hour Work Week (Business)
  • Bloodcurdling Tales of Horror and the Macabre: The Best of H. P. Lovecraft (Recreational)
  • Latitude (Recreational)
  • Time Machine (Recreational)
  • Fahrenheit 451 (Recreational)
  • Frankenstein (Recreational)
  • Jane Eyre (Classical)
  • Silverlight 4 in Easy Steps (Technical)
  • ADO.NET 2.0 Core Reference (Technical)
  • Dragon Heir (Recreational)
  • Strength Finder 2.0 (Business)
  • The Scarlet Letter (Classical)
  • Hackers and Painters (Business)
  • Atlas Shrugged (Recreational)

There was a span in the middle of the year when I read a lot of Recreational books. This was during June and July, when I read 8 books for our library's summer reading program, in which patrons receive a free book for every 4 books they read. My wife ended up reading 4 books a week during this period (for a total of 36 books!). Sadly, in that 2-month span, she read more books than I did in the entire year.

I doubt I will be able to surpass her, but my goal for next year is simply to read at least 25 books. And with that goal set, I'd better get to work.

Saturday, December 3, 2011

Final Update on book

First, read about my efforts to write a novel here and here.

The month of November seemed to pass by in a blur. A lot happened in the month, but the one thing that did not happen was me finishing my book. Sadly, I only finished about a fifth of the book. I'm disappointed, but I've learned a few things.

  • An idea is not enough. You need to spend more than a few minutes planning things out all the way through.
  • Writing is a lot like programming, in that in both crafts you must keep many things in your head simultaneously to ensure that all angles are accounted for.
  • Along the same lines, distractions can kill your train of thought. My wife spent many evenings upstairs with me watching TV while I wrote. Unfortunately, I can't just tune out the TV, and during an hour with it on, I often found myself writing just a few words or sentences, far short of where I needed to be.
  • The stories in my head and my style of writing fit better as short stories.

While I don't know if I'll ever have a professional career as a writer, I think some of these lessons can be applied to other efforts elsewhere. For instance, I've instituted (much to my wife's chagrin) a no-TV rule upstairs for 1 hour after my son goes to bed. I will use this quiet time for the other projects I plan to work on in the next year.

I can also learn from my mistakes with the preparation of the story. Going into the month, I felt like I would be cheating if I spent time planning out the plot of the story. But I needed direction and a plan for where the story was to go. I eventually did write an outline, but it cost me precious time, time I could have spent writing.

I will continue to work on my story. My goal is to have it finished sometime next year before next November. Then, if I attempt another novel next November, I can learn from my mistakes.

And perhaps next time, I'll pursue a group of short stories instead.

Wednesday, November 16, 2011

Book Update

It is now over halfway through National Novel Writing Month. At this point, I should have 25,000 of the 50,000 words written. Unfortunately, I've only written 7,915 words, far short of where I'd hoped to be.

How did this happen? For me, writing is a much lengthier process than I expected. While it may seem easy to keep all of the details about your characters in your head, in reality it's very hard to recall all of the minor details that make up the story. I'm sure most authors have a system in place to assist with this, and I certainly could use one to manage everything in my head.

In addition to being busy and unorganized, I've also found that I have difficulty focusing long enough to decide what to write and to type the words on the screen. By the time I recall the necessary points of the story, I often run into a distraction that prevents me from writing. Part of this is due to my desk setup at home, and part of it is because I'm easily distracted.

Finally, I wanted to reflect on my tool of choice so far in this project, Google Docs. The standard web version of the application works pretty well, especially in Chrome. Occasionally in Firefox I've been unable to enter text. Refreshing the page usually resolved this issue, but I've just made the switch to Chrome to edit the book chapters.

The Android app, however, is a little awkward to use. It highlights the entire row you're editing, even when that row is word-wrapped. For some reason, this reminds me of vim, which is not something I want out of a tool for writing a book. For programming that might be fine, but for a general-purpose text editor, it isn't.

Additionally, the up and down arrows scroll the text in the window instead of moving the cursor between rows. This is much tougher to work around than the purely aesthetic issue with the row highlight. On a slower Android phone like mine, it can also take a while to open a document. When you select a document, it opens for viewing; if you want to actually edit it, you have to press an 'Edit' button, which loads another screen containing the editing controls. Performance-wise, it took about 20 seconds to load the application, about a minute to load one of my chapters, and then another 15 seconds to reload the document for editing. A slow phone and a slow network exacerbate the performance issues with the application and the architecture around it.

In the end, I've typed only a small portion of the book on my phone and done the rest of the writing on a desktop through the browser. Google Docs is still a good way to write a novel, even if it's not as easy to write on the go as I was hoping.


Sunday, November 6, 2011

Labels in Football and Software Business

It's been a tough year of football in my neck of the woods. As an IU alum, it's painful to watch the slow progress the football team is making this year with its 1-9 record. There have been some games against opponents from lesser-known divisions that have made a mockery of the team. But that is in some respects expected from a school that has never had a strong football program. What truly stings this year is that the Indianapolis Colts, without Peyton Manning, have gone 0-9 so far. For a team that hadn't had a losing season in over a decade, it's a stinging blow.

But when you break down the Indianapolis Colts, in hindsight, it becomes obvious that without their linchpin player, the system falls apart. Over the last decade, so much of the Colts' emphasis has been placed on Manning. Manning, of course, is a stellar player, but he's just one player on the team. From a morale and recruitment standpoint, a stellar player can inhibit recruitment in other areas. If you describe your team as a power offense, it will be tough to recruit the best defensive players in the game, the best special teams players, and the best running backs.

I've seen this happen in businesses too. A company that makes and sells software or hardware might change its language and begin to call itself a 'Services' company. This has the effect of preventing recruiters from bringing in the best and brightest developers or hardware engineers, as it's clear from the language that the company's focus and emphasis is on the Services department. Likewise, it's tough to hold onto the best and brightest developers and engineers, as they are no longer appreciated to the level they once were.

In business, as in sports, it's dangerous to become labeled. Any labeling has to be done consciously and with the realization that there will be negative consequences for the other groups in the organization. In the case of the Colts, this decision seems intentional. Over the years, it's been discussed that the Colts defense is built to be light and play with a lead, with the hope that Manning and the offense can build up a quick lead and force the opponents to pass the ball. This has worked well for the Colts over the years, but remove the key player, Manning, and the system collapses in on itself. The same thing can happen to a business. If the emphasis is placed on one department and that department implodes due to any number of reasons (internal politics, turnover, a failed project, etc.), the other departments may be too weak to carry the additional burden.

Monday, October 31, 2011

National Novel Writing Month

It seems that most people I know aspire to write a Novel at some point, including me. Writing is a lofty profession and a book or novel is a path to immortality. How many people today know of Jane Austen, Ernest Hemingway, Mark Twain, and Charles Dickens? These writers have achieved a sort of immortality through their writing, as they have written books that even today we read and analyze.

That is why, when I found out that November is National Novel Writing Month, I signed up to participate. The goal of the site is to write a short novel of about 190 pages within 30 days. While I'm no R.L. Stine or Stephen King, I still think a short novel can be written in a short amount of time. I'll know for certain in 30 days.

To foster my writing, I've drawn up a short outline of the novel that I'm going to write. It's based on some of my and my wife's encounters while living in a small town in Southern Indiana during my college years. It's one of a few stories that I've had bouncing around my head for the last few years. National Novel Writing Month is just what I need to get one of them out of my head.

Writing a novel in a month on top of a busy work schedule will be a trick. To facilitate this, I've decided to write the novel in Google Docs, so I will have access to it from my laptop, either of my desktop computers, and my phone. I've used Google Docs for some writing before, but never for something as long or complex as a novel. It should prove to be a good test of just how far web applications have advanced.

Do you have a few story ideas bouncing around your head? Would a National Novel Writing Month help to transfer those ideas to paper?

Thursday, October 20, 2011

Strengths vs. Weaknesses

On the recommendation of episode #38 of the Startups for the Rest of Us podcast, I just read Strengths Finder 2.0. For a book that came so highly recommended by others, I was a bit disappointed in the book, the test that accompanies it, and the results that are provided after the test is complete. Still, if you can find the book at a reasonable price like I did, I think you'll get your money's worth.

The book provides an introduction to the test and the 34 'themes' that the test can identify. The introduction explains the importance of focusing on one's strengths by using a few analogies, yet most of these analogies involve sports. A personality test like the one included with the book, of course, would never work for measuring athletic aptitude. While sports analogies are easily accessible, given the number of individuals who have taken the test, one would imagine a more fitting analogy would be available.

The test itself presents a series of paired statements, and you must judge which of the two applies to you better. However, the nature of the test and the speed at which it must be finished (each question has a set time limit) made me extremely nervous and had me second-guessing myself. While I'm a very introspective person by nature, I also wonder whether I'm the best judge of my own character, or if my wife would have been able to answer more accurately.

Finally, I think it would have been helpful if the test results had also revealed 5 weaknesses, so I would know what areas to avoid. With the descriptions of the 'themes' in the book, I can identify my weakest areas myself, though it would have been nice to get this from the test.

While my review of this book has been harshly critical, I think there are a couple of points to keep in mind. First, this book came highly recommended, so I had high expectations going in. Second, the book and test are a few years old, and as such, I think there are likely better personality tests out there now. The book touted itself as a revolutionary way to determine strengths, but in the end, it seemed like a standard personality test, the kind I've taken for free on the internet every now and again. Having to pay $10 to $15 for the book to gain access to the test is a little steep. However, if you can find the book for less, I would say it's a good deal and worth the couple of hours it takes to read the book and take the test.


Tuesday, September 27, 2011

User Impersonation in .NET

Recently, I've been working on a tool to manage deployments to a web farm. As part of this task, I had to modify our current tool to impersonate a shared login to perform the actions on the various servers, so that the IT group would not have to grant access to a large number of users. This can be done in .NET, but the approach you have to take to impersonate another user differs depending on the action being performed. While updating the program, I encountered three scenarios that had to be addressed in different ways:

Scenario #1: Executing .NET code in the current Thread
Scenario #2: Spawning a new Process in .NET
Scenario #3: Making a WMI request

Scenario #1: Executing .NET code in the current Thread
Executing .NET code as a different user may be necessary when performing disk operations, network operations, manipulating services, or other tasks that require security permissions. To start the impersonation, the WindowsIdentity.Impersonate() method needs to be called with an authentication token for the impersonated user, which requires an external call to the Windows API to authenticate the user and obtain the token. While it sounds difficult, the MSDN page for the Impersonate() method provides a working sample that can be used as a basis for your own code.
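For reference, here is a minimal sketch of that pattern: it P/Invokes the Windows LogonUser API to obtain a token for the shared login and hands that token to WindowsIdentity.Impersonate(). The domain, user name, and password values are placeholders, and the error handling is deliberately thin; the full MSDN sample is the better starting point for production code.

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

static class ImpersonationSample
{
    // Win32 constants: interactive logon using the default provider.
    const int LOGON32_LOGON_INTERACTIVE = 2;
    const int LOGON32_PROVIDER_DEFAULT = 0;

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string userName, string domain, string password,
                                 int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    public static void RunAsSharedUser()
    {
        IntPtr token;
        // "sharedUser", "MYDOMAIN", and "secret" are placeholder credentials.
        if (!LogonUser("sharedUser", "MYDOMAIN", "secret",
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
        {
            throw new InvalidOperationException(
                "LogonUser failed: " + Marshal.GetLastWin32Error());
        }

        try
        {
            // Everything inside the using block executes as the impersonated user.
            using (WindowsImpersonationContext context = WindowsIdentity.Impersonate(token))
            {
                Console.WriteLine("Running as: " + WindowsIdentity.GetCurrent().Name);
                // Perform file, network, or service operations here.
            } // Disposing the context calls Undo() and restores the original identity.
        }
        finally
        {
            CloseHandle(token);
        }
    }
}

Note that the impersonation only lasts until the WindowsImpersonationContext is disposed; anything outside the using block runs under the original account again.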

Scenario #2: Spawning a new Process in .NET
Creating a new process already requires setting up a Process object or a ProcessStartInfo object. Setting it to execute as a different user requires just a bit of extra work to pass in a username and password. The following code snippet shows how to create a ProcessStartInfo object that starts the command prompt as another user.



using System.Diagnostics;
using System.Security;

String userName;
SecureString password;

// Code to get and set userName and password.

ProcessStartInfo startInfo = new ProcessStartInfo("cmd.exe");
startInfo.UserName = userName;
startInfo.Password = password;
// UserName and Password are only honored when UseShellExecute is false.
startInfo.UseShellExecute = false;
// Add other parameters to startInfo (Domain, WorkingDirectory, etc.) as needed.
Process.Start(startInfo);


Scenario #3: Making a WMI request
A WMI request queries the local or a remote server for system information, typically to monitor the system. This functionality resides in the System.Management namespace and isn't a common one to work with. WMI does allow queries to be run as another user by creating a ConnectionOptions object, as the example below outlines.



using System.Management;

String userName;
String password;

// Code to get and set userName and password.
// (ConnectionOptions.Username and Password take plain strings.)

ConnectionOptions connectOptions = new ConnectionOptions();
connectOptions.Username = userName;
connectOptions.Password = password;

ManagementPath path = new ManagementPath("path to wmi object to query");
ManagementScope scope = new ManagementScope(path, connectOptions);
scope.Connect();

// Once we have a scope object, we can either create an ObjectQuery object and query that way...
ObjectQuery query = new ObjectQuery("query to execute");
ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
ManagementObjectCollection collection = searcher.Get();

// ...or we can create an ObjectGetOptions and a ManagementObject.
ObjectGetOptions options = new ObjectGetOptions();
ManagementObject mgmtObject = new ManagementObject(scope, path, options);
mgmtObject.Get();
// Values can now be retrieved from the ManagementObject like a Dictionary.
string value = mgmtObject["key name"].ToString();

Saturday, September 24, 2011

Be careful what you say to customers

I've seen a few articles this week regarding Netflix and how they should have handled the price hike and the splitting of the company into 2 separate entities. Cringely took to his pulpit to defend the move, stating that with a name like 'Netflix', it should be obvious that movie streaming was the plan the entire time. Elsewhere, on Hacker News, an article popped up that provided an example of how Netflix should have handled the news this week. One of my favorite responses came from the Oatmeal, explaining why the company split into two.

Not being a Netflix user, this news doesn't affect me a whole lot, nor can I add anything to the conversation that others haven't already said. However, on Friday my wife received a newsletter email from our local bookstore. It's your typical local bookstore: small, with slow-moving stock, but a warm, rich atmosphere and a cat who lives at the store. The staff, though, seem ignorant of the Web; when my wife once inquired about a book series, it took an excruciatingly long time for them to look it up on their computer. Their web presence is non-existent outside of a Facebook page. And while the store has a lot of good books, it can take them quite a while to turn over their stock.

The email itself started out generically enough, explaining some upcoming events and addressing some comments from customers regarding the loss of the local Borders. While they're sad that a larger showroom of books is leaving, they hope it will mean more business for them.

If the letter had stopped there, I would have deemed it a normal business letter and moved on. However, the letter quickly became awkward as it outlined the challenges the local bookstore is facing. I'm not really sure these need to be listed in a newsletter; they would be better kept in an internal document in the back office. I'm sure if you think about it for a moment, you can list the challenges bookstores face yourself.

Go ahead.



I'll wait.



Done? Good.

So here's what they listed: the Internet, competition with eBooks (as many small stores can't publish eBooks), books being sold in lots of other places, such as Walmart, and the local library.

First, the whole list is much like going to a dinner party where the host and hostess reveal the difficulty they had in throwing it. As the guest (or shopper, in this case), I don't really care how hard it is to run the store. Second, many parts of the list come off as whiny, such as when it claims the library is a competitor for local events with a seemingly unlimited source of funds to attract them, or when it complains about the Internet and how people will browse in the store only to buy the books online later.

Further on, the email hints that the business has had to make several sacrifices and that, if business does not improve, further changes will have to be made. The letter then finishes with a Refer a Friend program for email subscribers, in hopes of bringing in more clientele. Most of the letter reads as self-indulgent, whiny prose, and my desire to stop by the bookstore vanished. Up until I read the letter, I was planning on stopping by this weekend; now, I'll be shopping on the Internet.

What the letter should have done is start off by describing the Refer a Friend program, briefly mentioning that while it is sad to see a showroom of books leave, they hope to see more foot traffic. Then the letter could transition to some upcoming events, and then to another announcement: that the store will begin selling books online in an effort to compete and to provide another avenue to sell books and rotate stock. While this may mean a bit of a change for the business, it would be good for shoppers, as new books should arrive more often and with more variety.

In a market where it's common knowledge that all the competitors are hurting and struggling, you can't whine to your customers; you have to take action.

Tuesday, September 6, 2011

.NET Web Deploy options

What options are there to deploy .NET applications to a server farm? Unless you're at a small web shop, you've likely never had to encounter this question. But this last month, it was this very question that I began to ponder as I began a rotation to a different group whose responsibilities include managing builds for the weekly release and the deployment to a web farm.

So what options are there? At my previous job, code deployments were a manual process: the IT group would run an MSI file we generated during the build and use it to deploy the .NET code to the web and application servers. As the code was being deployed, SQL scripts would be executed via a custom tool designed to run the scripts on multiple databases at once, to ease this step of the deployment and to better record any errors.

But is a manual process the only way to go? Certainly not. One option is to utilize Windows Group Policy to remotely install an MSI containing the updated code. While developers may not be familiar with this functionality, IT staff members will likely have used this before to manage the software on desktops or servers remotely.

Another option is to utilize Microsoft's Web Deploy Tool to create deployment packages to push code out to other servers. This tool can take a snapshot of a website on one server and replicate it to another, or identify changes on one server and replicate them to another. Deploying these changes is still a manual process, but Web Deploy is a powerful tool that can make the process easier.
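To give a feel for it, synchronizing a site between two servers with Web Deploy's command-line tool (msdeploy.exe) boils down to a single command along these lines; the site and server names here are made up for illustration:

    msdeploy -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:iisApp="Default Web Site/MyApp",computerName=WEBSRV02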

Perhaps the most intriguing option out there is Microsoft's Web Farm Framework. This tool can work with the Web Deploy tool or an MSI to automatically handle the deployment to a farm of servers. According to Scott Gu's article on the tool, the Web Farm Framework is an IIS extension that is installed on each server in the farm. One server is identified as the 'Primary' server. Any changes made to this server will be replicated to the other servers in the farm, one at a time in order to keep the entire farm operational.

The Web Farm Framework makes deployments easy, but as with many tools, there are tradeoffs. Deployments will take longer as only one server is pulled from the farm at a time. Because of this strategy, all but one server can respond to requests during the deployment, yet the servers will be on different versions of the code. And what if there is a central database that the web farm utilizes that must be updated? When should the database be updated?

What's the best option for deploying .NET code to a Server Farm? It depends on your situation.

If you have a strong IT staff, perhaps they can administer the web farm using Windows Group Policy and MSIs created during the build process.

If the IT staff is not familiar with Windows Group Policy, perhaps the Web Farm Framework is a worthwhile option, especially if the system cannot be taken down during an upgrade.

If you have the spare development staff, your best option might be to build your own tools. This requires a group of developers with an understanding of how the system works as a whole, but with the effort, you can build yourself a tool-set geared directly towards your server farm.

Saturday, July 9, 2011

Inspecting the Memory dump of a .NET Application

There are times when a .NET application crashes and whatever logging is in place is not enough. In those cases, if you happen to have a memory dump of the application in a .dmp file, then the application's memory and the stack traces of its threads can be investigated with the right tools.

The Windows Debugger (WinDbg) is the tool for investigating Windows memory dump files, the files with the .dmp extension. On its own, this tool cannot show the stack traces for .NET 2.0 applications, but with the PSSCore2 extension, .NET stack traces can be viewed and a great level of detail gleaned from them. PSSCore2 is an extended version of the older SOS extension; most of the resources I've found refer to SOS instead of PSSCore2, though the commands are, for the most part, the same between the two.

Once both of these tools have been installed, open the memory dump file with Windows Debugger. After the memory dump file has been loaded, issue the command:

    .load clr10\psscore2

You may have to replace clr10 with the folder in which you placed psscore2. If you place the psscore2 DLL in the same folder as the Windows Debugger, then the command is simply:

    .load psscore2

But how do you use the Windows Debugger once it's been installed? This article details the basic steps for loading the old extensions, but it still contains some good references (just replace references to 'sos' with 'psscore2'). This blog post from MSDN outlines some of the most useful commands and how to use them. Also, this blog post provides a real-world use case for the WinDbg tool. Finally, this blog post contains numerous links and a few random tidbits about the tool.

That covers most of the useful pages I've found on the web for inspecting the memory dump of a .NET application. The command "!help" will list the commands available in the psscore2 extension. In general, commands that start with "!" come from the loaded extension, while those starting with "." are available natively in the Windows Debugger.
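For example, once the extension is loaded, a few of the standard extension commands (shared between SOS and psscore2) that cover the common cases look like this:

    !threads          (list the managed threads in the dump)
    !clrstack         (show the managed call stack of the current thread)
    !pe               (print the current managed exception, if one exists)
    !dumpheap -stat   (summarize the managed heap by type)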

Wednesday, June 8, 2011

Is the Dream of Courier about to be fulfilled?

A couple of weeks ago, a new project appeared on Kickstarter that got a bit of attention. Kickstarter is a site that allows anyone to fund projects that they like, in hopes that the project will become a reality. In many cases, those who donate to a project will receive something in return, such as a copy of the application (if it's software).

This project, which is named Taposé, provides a few interesting videos on the planned design of the application. Watching these videos is a bit of déjà vu, with a journal-like application on one half of the screen and various other applications on the other. In fact, the title of the project says it all: Bringing the Courier to the iPad.

Despite the fact that Microsoft announced the Courier's demise over a year ago, the device's promise always stuck with me. And obviously, I was not the only one taken in by it: the Kickstarter project had a goal of $10,000, but surpassed it, raising $26,561.

My initial reaction to this was obvious excitement, with a bit of scorn, since I'd rather not pony up the money for an iPad. I'm no fan of Apple products (I don't own a single one), but if the only way to get Courier-like functionality is to buy an iPad, well, I might have to give in to the dark side.

(It's funny: back in the mid-'90s, when my parents bought a Pentium PC with Windows 95, my 5th grade teacher claimed I was going over to the dark side (he was a big Mac guy). But I stand by my statement: Apple seems as dark today as Microsoft did back then. Oh, the irony of Apple's '1984' commercial when viewed today.)

But, given the amount of money the Tapose team raised, they are considering a Droid application. Being a happy Droid phone owner, a Droid tablet is one device I can see myself owning without switching allegiances. I wish these guys the best of luck. Perhaps there will be a Droid tablet on store shelves in a few months promising Courier-like functionality right out of the box.

Monday, June 6, 2011

Samsung Intercept

I'm notoriously frugal, which is why, when I read Robert Cringely's prediction of the demise of feature phones, I thought he was overly optimistic. When someone disdains monthly cell phone plans, what other option is there for a cell phone but a feature phone?

Well, it turns out there are a few options out there. In the fall of last year, Virgin Mobile started offering the Samsung Intercept, a phone with Android 2.1. The initial price was steep ($250!), but it was a fully loaded smartphone on a pre-paid plan! Of course, if you want data, you need one of their monthly plans, but those are ridiculously cheap: for $25 a month, one can get unlimited data, texting, and 300 minutes.

It all sounds good on paper, especially when you compare phone and data plans with other carriers. The reduced monthly price (at the expense of higher hardware costs) pays for itself in about a year.

Needless to say, I eventually bought the phone (on sale after Christmas for $180) and have been using it for 5 months. I can honestly say I don't know how I operated without this device. Prior to this, I had a cheap phone for calls, an MP3 player to listen to music at work, an old Pocket PC for reading and writing, and a TomTom GPS in the car for directions. This device has replaced them all and then some.

I use the sliding QWERTY keyboard all the time to write notes, search the web, or even write blog posts. The on-screen keyboard is a little cramped, but is handy with the auto-sense.

I have better success with navigation on the phone than with the TomTom. This is largely because the GPS device's maps are a few years out of date, and map updates cost so much that I might as well buy a new device.

I have a lot more storage on the phone as compared to my MP3 player, so I can keep a much larger music collection on hand. And if I get bored of that, there's always YouTube and Pandora.

With Amazon's Kindle application, reading is much easier on my phone compared to my old Pocket PC. I've finished a few books on the device, plus read hundreds of articles using the InstaFetch application that synchronizes with my InstaPaper account.

Finally, I've taken many more pictures with the phone's camera thanks to it always being in my pocket. Uploading to Facebook can be a bit difficult (as it doesn't always recognize orientation), but I've installed the application PicSay to handle this when the native application fails.

All told, I'm still loving my Droid phone. It was upgraded to Android 2.2 about a month ago. The only downside is that during the upgrade, the note application, in which I had saved a dozen notes, was removed, and I lost them all. This was clearly stated in the upgrade warning, but unfortunately, I did not heed it.

Friday, June 3, 2011

.NET Decompile Tools

For many who need a tool to decompile a .NET assembly, the tool of choice has been Red Gate's Reflector. It is one of the fastest tools out there, and the decompiled code it displays is often somewhat intelligible (of course, THAT depends on the underlying quality of the code). But recent changes to the licensing of Reflector have left many .NET developers scrambling for their wallets or for an alternative. Here are a few alternatives to .NET Reflector.


JetBrains DotPeek


This is my favorite replacement for .NET Reflector. It feels a little slower than Reflector, but the code it decompiles is very readable. For each variable that the decompiler has to name, it attempts to find a somewhat sensible name.


This tool will decompile the AssemblyInfo.cs as well, so any assembly properties will be available.


DotPeek requires .NET 4.0 but does not require registration.



Telerik JustDecompile


JustDecompile is comparable to DotPeek in many ways, but in the early build I tried, it did not decompile the AssemblyInfo.cs. While its variable names were also reasonable, since it can't decompile the AssemblyInfo.cs file, JustDecompile is an incomplete replacement for Reflector.


JustDecompile does require free registration.


Conclusion


There are still other tools to decompile .NET code, but Reflector, DotPeek, and JustDecompile are all powerful tools supported by major vendors in the .NET ecosystem. As such, they bring a lot of clout to these solutions and have the support teams in place to maintain them. At the end of the day, though, JustDecompile lacks the ability to decompile AssemblyInfo.cs. If this is as important to you as it is to me, you will want to look at DotPeek instead.

Monday, May 9, 2011

How rake db:migrate works

When I first started developing Rails applications, the database migrations with ActiveRecord seemed so much more advanced than anything I'd worked with in .NET (and I've used ADO.NET, LINQ to SQL, and NHibernate). The migrations seemed so intuitive, powerful, and magical. Unfortunately, when the magic doesn't work, you need an understanding of the underlying architecture to fix it.


The Problem

The other day I rolled out a production update for Kanban for Developers. In it were 2 migrations that added a new integer column to 2 different tables and a bit column to a third table. The migrations passed local validation, so to deploy them to production I issued the command:


rake db:migrate RAILS_ENV=production


But running this command gave me an error stating that it couldn't add a duplicate index. The migration named in the error had been created in the middle of last summer, and there have been numerous migrations since then, so for some reason the system thought this migration had been skipped.


Now, there are a couple of ways to fix the problem. The first, and easiest, would have been to remove the index from the database and re-run the command. However, I wanted to learn how db:migrate operates, so I began to look around for something that records which migrations have been executed on the database.


Schema.rb


The first place I began to look was within the folder structure of the web application itself. The db folder would be the logical place to store this kind of information. In this folder is the schema.rb file, and as one might guess, it contains the schema information for the database.


The file in production had all of the latest additions, though the database itself didn't. Why? Because in my rush to move code to production, I had copied the entire db folder instead of just the migrations folder or just the latest migration file. With the power of source control, I reverted the file to the old version and re-ran rake db:migrate. At that point, I received the same error message.


schema_migrations


And the reason for this is that the migration information is stored within the database, not in a file, which of course makes much more sense. Databases are generally locked down much more than the file system and provide a much more logical place to store this information. For Rails, this information is stored in a simple table named 'schema_migrations', which contains a single column, version, holding the version number of each migration that has been applied.


Resolution

In the end, after ensuring that the index and all of the database changes from the missing migration had indeed been applied, I simply wrote an insert statement to add a record for the missing migration. After that, I was able to proceed with the new migrations and complete the build.
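For anyone hitting the same issue, the fix is nothing more exotic than a one-row insert along these lines; the version value shown here is only an illustration, as the real one comes from the timestamp prefix of the missing migration's filename:

    INSERT INTO schema_migrations (version) VALUES ('20100715123456');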


And the schema.rb file? It actually was updated to include the latest changes after I ran rake db:migrate. So while this file isn't how rake db:migrate determines whether a migration has been executed, it does document what database changes have been made in a location separate from the database.

Thursday, April 14, 2011

Geek Reading List

In an article on TechRepublic, an author listed what he (or she) considered to be required reading for a geek. While the list is a decent start, there are many books I felt were left off that are excellent reads for geeks.

Lord of the Rings trilogy: Any book that defines a new language is bound to be popular with geeks. Add the fact that this is the granddaddy of all modern fantasy books, and you have something that is almost required reading for every geek.

Myst: This book is the prequel to the story told in the popular mid-'90s video game. I've always thought the clues and gameplay of Myst were particularly geeky, and the prequel in book form does not disappoint.

Flatland: The story of an imagined 2-dimensional world and what happens when it encounters beings from a 3-dimensional world. The topic and the science involved are certainly in the realm of the geeky.

Foundation series by Isaac Asimov: This book is based on the premise that by studying enough data, one could predict future events. Set a story about this in space, and you have the holy grail of geek sci-fi. The book has several sequels, but I feel the first was by far the best.

This is, of course, not an exhaustive list, but it does contain several good books that should be included in, or at least considered for, a "geek" reading list.

Monday, April 11, 2011

11 Word Monday Meme

What are your personal goals? Are you 'Winning'? Change that today!

(About Meme Monday)

Saturday, April 9, 2011

Jquery UI Draggable and Scrollbars

While working on a recent sprint on my task management software, Kanban for Developers, I ran across an issue that took some time to track down. On the main screen of the application is a Kanban board with yellow boxes representing tasks, like sticky notes on a whiteboard. There are 5 different categories, and the tasks can be dragged and dropped onto any of them. By default, categories are limited to only a small number of tasks, to prevent users from being overwhelmed and so that the tasks will be displayed nicely on the board. However, users can edit the size of these categories and, as of the latest sprint, can double-click on a task to expand it to fit its text. This can greatly increase the size of a task category and look unsightly. Something needed to be done to rein in the height of the task categories.

The simplest solution would be to add scrollbars via the CSS overflow property, but the appearance is less than desirable. Then I found a promising site with a list of 10 different scrollbar widgets.

While the site looked promising, some of the plugins, like jScrollPane, required too much setup and broke other parts of the application. Most of the others would not work with the draggable plugin from jQuery UI: once a draggable element reached the edge of its parent container, it disappeared behind the adjacent container. I encountered this behavior with jQuery Scroll and Tiny Scrollbar. After trying a few different plugins, I was finally able to get the expected behavior out of the ShortScroll plugin.

Is ShortScroll the only choice for getting jQuery UI draggable elements and scrollbar plugins to play nicely with each other? No, but if you encounter this issue with a jQuery scrollbar plugin and draggable elements, then you will probably have to switch your scrollbar plugin, as there does not appear to be a workaround for many of these plugins.

Of course, this could change in a few months.