Thursday, December 11, 2008

Silverlight – Under the covers

About 6 months ago, I “blogged” about the book that I had chosen as my summer reading - Essential WPF. Due to some personal problems, however, I was unable to complete my task before summer's end. But I'm catching up, and I'm halfway through eating Chris Anderson's words. I must say the book hasn't convinced me yet, but I'll leave that thought on hold until I'm finished reading it. Nevertheless, every technical book has that little bag of knowledge that can surprise you.

Essential WPF actually answered my biggest questions around Silverlight: How on earth does it work? What runs where? Under what security permissions?

Well, WPF hosts browser applications by implementing an ActiveX DocObject (OLE DocObject). This means that any application that supports hosting DocObjects can actually host WPF applications. When the .NET Framework is installed, a MIME handler is registered for the XBAP extension, which tells the browser how to load WPF. “The real deal” comes next: WPF runs all browser-hosted applications out of process, using a host application called PresentationHost. This isn't what I expected, and it actually leads to the answer to one of the questions above: if it's running out of the browser's scope, under what security permissions is it running? The answer is simple. Quoting Chris Anderson: "on Windows XP, PresentationHost.exe is launched with modified NT permissions. The admin token of whatever permission “iexplore.exe” is running is used to set the security permissions. On Windows Vista, the Limited User Access feature provides this same type of security model.”

This is quite intriguing and has brought a few more questions to my head, especially concerning what's going on “under the covers”. But let me solve them by myself first; I'll post back later.

The Silverlight world is new to me. I wonder if it will amaze me as much as WF has over the last year.

Saturday, December 6, 2008

Export MIME Types from IIS

I usually don't share “my homework” or “personal developments”, but this one might actually be useful at work for some of you. It's quite simple: the idea is to export all the MIME Types registered in IIS. Furthermore, you can generate an HTML page (with a style sheet) from the exported MIME Types. Here's how:

      1 – MIMETypesVBScript.vbs: the VBScript that will get the MIME Types from IIS. Note that if you are using the script on a remote server (through remote desktop, VPN, whatever), you might need special permissions to execute it.
      2 – MIMETypes.XSL: The file that will apply the style. Feel free to develop your own.

Just save the files from GitHub and run the VBScript (cscript MIMETypesVBScript.vbs). Finally, use the XSL to transform the generated XML. That simple.
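
For reference, the core of the script boils down to something like this – a minimal sketch using the IIS 6 ADSI provider, where the XML layout is my own assumption (the real MIMETypesVBScript.vbs may differ):

    Option Explicit

    ' Read the global MIME map from the IIS metabase (ADSI provider).
    Dim mimeMap, entries, entry, xml
    Set mimeMap = GetObject("IIS://localhost/MimeMap")
    entries = mimeMap.GetEx("MimeMap")

    ' Build a simple XML document with one element per MIME type.
    xml = "<?xml version=""1.0""?>" & vbCrLf & "<MimeTypes>" & vbCrLf
    For Each entry In entries
        xml = xml & "  <MimeType extension=""" & entry.Extension & _
              """ type=""" & entry.MimeType & """/>" & vbCrLf
    Next
    xml = xml & "</MimeTypes>"

    ' Save it next to the script so the XSL can be applied to it.
    Dim fso, outFile
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set outFile = fso.CreateTextFile("MIMETypes.xml", True)
    outFile.Write xml
    outFile.Close

    WScript.Echo "Exported " & (UBound(entries) + 1) & " MIME types."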



Thursday, December 4, 2008

Scary – SyBase

In my opinion, SQL Server is the most complete DBMS. For the past couple of years, I've worked with it on everything from basic T-SQL to administrative tasks such as clustering, security and so on.
But despite my preference (Microsoft), it's always good to keep your mind open to other solutions around you. For instance, I also appreciate MySQL, which is a great (free) solution for small businesses.

This last month, however, I've been working with SyBase. SyBase is known by many people as the "cousin" of SQL Server, for historical reasons. Of course, the syntax isn't much different from SQL Server's, but the tiniest difference can have a big impact. Take the following example of a stored procedure in SQL Server:

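The original was shown as an image; here's a minimal reconstruction that matches the error message below (the parameter name and default value are my assumptions):

    CREATE PROCEDURE DefaultParameter
        @name VARCHAR(50) = 'default'   -- default value makes the parameter optional
    AS
    BEGIN
        SELECT @name
    END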

You can call this procedure passing no parameters or one:
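
Something along these lines (reconstructed, since the original calls were an image):

    EXEC DefaultParameter             -- uses the default value
    EXEC DefaultParameter 'John'      -- overrides it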


And that's it. If you pass more than one parameter, SQL Server Management Studio will yield an error message:

    Msg 8144, Level 16, State 2, Procedure DefaultParameter, Line 0
    Procedure or function DefaultParameter has too many arguments specified.

Now, the same procedure can be written for SyBase... exactly the same! However, in SyBase, you can call the procedure with "n" arguments:
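
For instance (again, a reconstruction):

    EXEC DefaultParameter 'John', 'Doe', 42   -- SyBase happily accepts the extras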


The DBMS will take just the number of arguments that the procedure expects, ignoring the rest. This is quite odd!

The problem I had at work was that someone created a procedure with 2 parameters, both varchar. Then, all the other stored procedures were written to call this procedure with two varchar parameters. At some point, the second parameter was no longer required and was removed from the procedure definition. Of course, the code that called this procedure kept running, because SyBase allows callers to keep passing two parameters to a procedure that now expects only one.

My task was to add a new output parameter to the procedure. It now had 2 parameters again, but the one I added was of type integer. Can you imagine what happened to all the calls to the procedure? Boom! They were passing two varchar variables while the procedure was now expecting one varchar and one integer. Scary...

I haven't searched for an explanation for this behaviour (nor am I going to) because I'm sure it won't convince me. So, that's one point off for SQL Server's cousin.

Sunday, November 23, 2008

PHP and .NET

Outside of “Microsoft's world”, there are few technologies and programming languages that I enjoy. PHP is definitely in that small group.

Up until now, the only way to load a .NET assembly into PHP and use its types, functions and so on was through COM objects. Now, the PHP team is actually developing functions to load a .NET assembly directly, without using COM objects (at least, not in code). It has been in testing for quite a while; here's the link:


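For context, the COM-based approach looks something like this – a minimal sketch using PHP's DOTNET class (which wraps COM interop), close to the example in the PHP manual:

    <?php
    // Load a COM-visible .NET type: assembly name first, type name second.
    $stack = new DOTNET("mscorlib", "System.Collections.Stack");
    $stack->Push(".NET");
    $stack->Push("PHP");
    echo $stack->Pop() . " and " . $stack->Pop();
    ?>
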
Thursday, November 20, 2008

LINQ - Just because you can

Day after day, the .NET Framework evolves in a fantastic way, and if you know how to take advantage of the new features, your productivity can actually improve. However, if you lack that knowledge, you will make crude mistakes. The latest one I encountered was something like this (adapted to the widely used person example and reduced to fit the blog post):

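A reconstruction (the original code was an image; the type and method names are my assumptions):

    // Find everyone over 18, then print them.
    List<Person> people = GetPeople();

    IEnumerable<Person> adults = people.Where(p => p.Age > 18);

    foreach (Person adult in adults)
    {
        Console.WriteLine(adult.Name);
    }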

The idea was to search a collection (in this case, a collection of Person objects) and print out the items that matched a certain condition (in the above example, people whose age is over 18). So far so good, the output is what was expected. But the programmer decided it would be “cool” to write it using LINQ. Can you identify the problem in the above example?

Well, as you might expect, the “Where” extension doesn't actually perform like “an SQL where clause”. The “Where” extension translates into a foreach statement:

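Roughly what LINQ to Objects does under the hood, stripped of argument checks:

    public static IEnumerable<TSource> Where<TSource>(
        this IEnumerable<TSource> source, Func<TSource, bool> predicate)
    {
        foreach (TSource item in source)
        {
            if (predicate(item))
                yield return item;
        }
    }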

So, in the above example, the code first iterates through the collection to find the items that match the condition and then iterates through those items to print them. Two foreach statements for something that could be done with one foreach and one if clause… And that's not all, but it's enough to make my point.

I know this is a “way too basic” example, but the point is that tools shouldn't be used “just because you can” or because “they're cool”. LINQ is useful in a wide range of scenarios, but not for every task you have.

Thursday, November 6, 2008

Optional Parameters (C#)

“Yes!” This was the word out of my mouth when reading about the new C# 4.0 that will be coming soon. Why? Well, a couple of months ago I wrote a post about optional parameters in VB.NET and how useful they can be. I also questioned why the C# team didn't implement this feature. My answer is here (quoting a white paper from the C# team):

Named and optional parameters
“Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.”

And the same white paper goes on to justify this new feature much the way I did in my blog post:

COM specific interop features
“Dynamic lookup as well as named and optional parameters both help making programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.”

They’ve taken the “optional parameter” idea even further and introduced Named Parameters.
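
For illustration, the announced syntax ends up looking like this (a sketch; the method and its parameters are mine):

    public static void OpenFile(string path, bool readOnly = true, int bufferSize = 4096)
    {
        // ...
    }

    OpenFile("log.txt");                      // both optional arguments omitted
    OpenFile("log.txt", bufferSize: 65536);   // skip one by naming the other
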
Day after day, the .NET world keeps on surprising me.

Thursday, October 9, 2008

Why compensatable transactions?

Last month, I wrote a post about compensatable transactions, but left a question in the air: how useful can this kind of transaction really be, and where should it be applied?

Well, when reading Kenn Scribner's book about WF, I didn't catch the “big picture”. XA-style transactions seemed enough. But after reading a couple of articles from the WF team in MSDN Magazine, I understood what Kenn Scribner was talking about in the book.

Here's an example of my own that better describes his thoughts (although it's not far from his example). Keep in mind that, depending on the isolation level you choose for an XA-style transaction, data may become locked throughout the entire transaction scope.

Imagine that you have to develop an intranet for a company with several departments, all of them with the ability to place orders for material specific to their business area. Of course, all the orders must be approved by the accountant before being sent to the suppliers and paid. Would you use an XA-style transaction throughout the process? What if the accountant takes hours (or days!) to approve the order – should the data stay locked for that period? Furthermore, if an exception happens, should we roll back so that some orders were miraculously never placed or never shipped to the supplier? What happens to transactional tasks in the context of a larger process, where you can't lock data for too long and must be able to roll back past commits?

This is close to the example he gave. In a long-running process, you might have to split one action that you would like to be atomic, breaking one of the ACID properties: atomicity. The idea was that from the time the employee places the order until the order actually reaches the supplier and is paid, it's all or nothing.

Compensatable transactions allow this scenario. Quoting Dino Esposito at MSDN magazine, “compensation is any logic you run at some point to undo, mitigate, or compensate for the effects of previous operations. The point is that the compensatable transaction might contain child ACID transactions that, once committed, can’t be rolled back any longer. However, in case of a further failure, their effects must be compensated for in some way. Compensation is like rollback except that the developer is called to write any code used to compensate for the work done.”

Dino Esposito actually goes on and answers my question: why bother with compensation? “When different companies and services are involved, defining the process in terms of the ACID semantics is often challenging. For it to be isolated and durable, you have to keep all resources of different companies locked for the duration of the task. This is frequently unreasonable, especially if the task is long.”

After all this talk, let's recap: although the idea behind both transaction types I've mentioned is the same, they're actually quite different. Obviously, you want your XA-style transactions to be as short as possible for a variety of reasons (most importantly, data might get locked), while in a compensatable scenario this isn't true.

Although I haven't used it yet (nor am I expecting to), it's a great thing that WF incorporates this concept.

Friday, September 12, 2008

Add.Ovf

Writing high-performance applications is probably one of the most exciting challenges a software developer can face. Nowadays, developing an application isn't as hard as it was 10 years ago, mostly due to the appearance of managed environments (and better tools). But writing high-performance, multithreaded, extensible and scalable software is actually very challenging. Every detail counts.

Having said that, there's one detail I haven't paid much attention to in the past: compiler flags (the other guidelines for writing such software certainly can't be learned from a blog). When using Visual Studio, creating a project, building it and deploying it is very easy. But to build the project, the IDE actually invokes a command-line compiler (for C#, “csc.exe”; for VB.NET, “vbc.exe”), passing some parameters explicitly while others take their defaults. Well, by default, the VB.NET compiler generates overflow checks for integer operations (in both Debug and Release builds). The C# compiler doesn't – you must specify that you want integer overflow checks (“csc.exe /checked” or in the properties of your Visual Studio project).

What is an integer overflow check? An example from Derek Hatchard and Scott Swigart at MSDN:

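Reconstructed from their description (the original listing was an image); loopLimit is the variable mentioned in the quote below:

    ' VB.NET version
    Function SumLoop(ByVal max As Integer) As Integer
        Dim sum As Integer = 0
        For i As Integer = 1 To max
            sum += 1
        Next
        Return sum
    End Function

    // C# version
    static int SumLoop(int max)
    {
        int sum = 0;
        int loopLimit = max;   // mimics VB.NET copying max to a local
        for (int i = 1; i <= loopLimit; i++)
        {
            sum += 1;
        }
        return sum;
    }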

Quoting their explanation: “The code segments look equivalent (the extra C# variable loopLimit is used to match Visual Basic .NET behaviour, which is to copy max to a temporary local variable). Both methods require two integer addition operations – one for the increment of the sum variable and one for the increment of the loop counter. By default, the Visual Basic .NET compiler will generate the IL instruction add.ovf for these addition operations. The add.ovf instruction includes an overflow check and throws an exception if the sum exceeds the capacity of the target data type. By contrast, the default output of the C# compiler is the IL instruction add, which does not include an overflow check.”

Looking at the IL generated by both compilers, we can see the difference:

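Abridged to the increment itself (the originals were images):

    // VB.NET – overflow-checked addition:
    ldloc.0
    ldc.i4.1
    add.ovf      // throws System.OverflowException if the result wraps
    stloc.0

    // C# (default) – plain addition:
    ldloc.0
    ldc.i4.1
    add
    stloc.0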

How about running the examples to compare the performance?

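You can reproduce the comparison with a minimal harness like this (a sketch; it times the SumLoop method from the listing above):

    var sw = System.Diagnostics.Stopwatch.StartNew();
    int result = SumLoop(2000000000);   // large enough to make the difference visible
    sw.Stop();
    Console.WriteLine("{0} in {1} ms", result, sw.ElapsedMilliseconds);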

There's a big difference when integer overflow checks are on. If you tell the VB compiler not to generate integer overflow checks (the /removeintchecks switch, or the corresponding project property), the above VB.NET code will have the same performance as the C# version. You must be aware of details like these in order to sharpen your application's performance – and don't say “I'm using C#”, because that's not the point!

Just as a final note, you can also force integer overflow checks for some operations using the C# keyword “checked”: http://msdn.microsoft.com/en-us/library/74b4xzyw.aspx
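
For example:

    int big = int.MaxValue;

    // Throws an OverflowException instead of silently wrapping around:
    int result = checked(big + 1);

    // The unchecked counterpart wraps around to int.MinValue:
    int wrapped = unchecked(big + 1);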

Friday, August 29, 2008

Compensatable transactions

Up until now, when working with SQL Server, the most common transaction type I've used was the XA-style transaction. An XA transaction involves the XA protocol, which implements the two-phase commit (you're probably familiar with this, as most DBMSs use it, but if you aren't, I suggest you go ahead and read up on it, since it's essential when working with databases). XA transactions, when using non-volatile resources, guarantee the ACID properties (Atomicity, Consistency, Isolation and Durability). This is all done at the DBMS level.

However, Windows Workflow Foundation “translates” this idea into an activity: the TransactionScope activity. What this means is that the activity knows how to use this kind of transaction without forcing the programmer to explicitly open one. You just place activities inside the TransactionScope activity and you get an XA-style transaction. You can even define the isolation level of the transaction for that activity (Serializable, Read Uncommitted, Repeatable Read and so on).
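
In code, configuring it looks something like this (a sketch for WF 3.x; the activity names are mine, and you'd normally compose this in the designer):

    // References: System.Workflow.ComponentModel, System.Workflow.Activities,
    // System.Transactions.
    var scope = new TransactionScopeActivity("updateOrders");
    scope.TransactionOptions.IsolationLevel = IsolationLevel.Serializable;
    scope.TransactionOptions.TimeoutDuration = TimeSpan.FromSeconds(30);
    scope.Activities.Add(new CodeActivity("writeOrder"));   // runs inside the transaction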

But WF also has another transaction style: compensatable transactions. I wasn't aware of this type of transaction until now. The idea is basically the same as with XA-style transactions: if something goes wrong in one of the operations performed in the scope of the transaction, the data must return to a consistent state. But there's a difference: when using XA-style transactions, if something fails, the transaction is “rolled back”. When using compensation, if something does fail, the transaction isn't automatically “rolled back”. Instead, you must provide the actions that compensate for the failure. To better explain, let me use the example given by Kenn Scribner: “If I give you five apples using a XA-style transaction and the transaction fails, time itself rewinds to the point I started to give you the apples. In a sense, history is rewritten such that the five apples were never given in the first place. But if I give you five apples in a compensated transaction and the transaction fails, to compensate (so that we maintain a determinate application state), you must return five apples to me.”

What this means is that the programmer is responsible for compensating – for providing the actions that compensate for the failed transaction. There is no “rollback”. Is this better than XA-style transactions? Well, it certainly gives you more control (and responsibility), but you must be very careful when writing compensation actions. The smallest mistake can leave a database in an inconsistent state.
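
The concept itself can be illustrated in plain C#, without any WF APIs (a sketch of the idea, not of how WF implements it; all names are mine):

    using System;
    using System.Collections.Generic;

    class CompensationSketch
    {
        static void GiveApples(int n)     { Console.WriteLine("Gave {0} apples", n); }
        static void TakeApplesBack(int n) { Console.WriteLine("Took {0} apples back", n); }
        static void ShipOrder()           { throw new Exception("shipping failed"); }

        static void Main()
        {
            var compensations = new Stack<Action>();
            try
            {
                GiveApples(5);                                // already committed
                compensations.Push(() => TakeApplesBack(5));  // how to undo it

                ShipOrder();                                  // a later step fails
            }
            catch
            {
                // There is no rollback: run the compensations in reverse order.
                while (compensations.Count > 0)
                    compensations.Pop()();
            }
        }
    }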

I haven't used this transaction style yet, but Kenn Scribner hints at a few scenarios where it could be more useful than XA-style transactions. I'm going to analyze those scenarios more deeply to see if it really makes sense.


Wednesday, August 20, 2008

WorkflowRuntime object

In chapter 1 of the book Windows Workflow Foundation Step by Step, the author states that “there can be only a single instance of the WorkflowRuntime per AppDomain”. When I was reading the book, I didn't find this odd, because it makes some sense. You only need one runtime.

In a recent project, however, I was presented with a scenario where I needed different behaviour from the WorkflowRuntime across multiple workflows. Simply put, I had 3 different workflows in the same AppDomain and I wanted the WorkflowRuntime to persist one of them and not the others. The only logical way to accomplish this was to implement a persistence service myself. Of course, I could also create another AppDomain and run another WorkflowRuntime there, but I don't like that “workaround”.

Since I was short on time, after googling a little I found that you can in fact have more than one WorkflowRuntime per AppDomain. And this helped: two WorkflowRuntime objects, one using the persistence service and the other not. Yes, this leads to more resource consumption (the WorkflowRuntime isn't a “cheap” object), but it was the quickest solution and I wasn't concerned with performance.
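
The scenario boils down to something like this (a sketch; the connection string is a placeholder):

    // References: System.Workflow.Runtime, System.Workflow.Runtime.Hosting.
    var persistingRuntime = new WorkflowRuntime();
    persistingRuntime.AddService(new SqlWorkflowPersistenceService(
        "Data Source=.;Initial Catalog=WorkflowStore;Integrated Security=SSPI;"));
    persistingRuntime.StartRuntime();

    var plainRuntime = new WorkflowRuntime();   // no persistence service added
    plainRuntime.StartRuntime();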

Despite this “multiple WorkflowRuntime objects per AppDomain” capability, I still agree with the author of the book (Kenn Scribner) when he states that we should use a WorkflowRuntime factory (Singleton and Factory patterns).
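
A minimal sketch of that factory idea (the naming is mine):

    public static class WorkflowRuntimeFactory
    {
        private static WorkflowRuntime _instance;
        private static readonly object _sync = new object();

        public static WorkflowRuntime Instance
        {
            get
            {
                lock (_sync)
                {
                    if (_instance == null)
                    {
                        _instance = new WorkflowRuntime();
                        _instance.StartRuntime();
                    }
                    return _instance;
                }
            }
        }
    }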

Here’s the errata page for the book: http://www.endurasoft.com/wf.aspx


Monday, July 21, 2008

Extension Methods - Cool

Since the first release of the .NET Framework (2002), Microsoft has significantly improved the quality and features of the framework. The compilers and Visual Studio have also evolved in a fantastic way.

One of the many features introduced in VS2008 (with the new compilers) was extension methods. What are they? A cute little thing! Ok, technically speaking, I prefer to quote Scott Gu:

“Extension methods allow developers to add new methods to the public contract of an existing CLR type, without having to sub-class it or recompile the original type. Extension Methods help blend the flexibility of “duck typing” support popular within dynamic languages today with the performance and compile-time validation of strongly-typed languages. Extension Methods enable a variety of useful scenarios, and help make possible the really powerful LINQ query framework that is being introduced with .NET as part of the “Orcas” release.”

But what's happening behind the scenes? Is this complicated? Well, let's start by looking at extension methods with an example from Scott Gu. Have you ever wanted to check if a string is in some way valid? For example, have you ever wanted to check if a string is a valid e-mail address or “postal code”? Before extension methods, you'd probably write something like this:

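Reconstructed from Scott Gu's example (the original was an image):

    using System.Text.RegularExpressions;

    public static class StringHelper
    {
        public static bool IsValidEmailAddress(string s)
        {
            Regex regex = new Regex(@"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$");
            return regex.IsMatch(s);
        }
    }

    // Usage: the string is passed as a plain argument.
    bool valid = StringHelper.IsValidEmailAddress("someone@example.com");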

As you can see, there's a static class with static methods that validate the string, which is passed in as a parameter. But we're trying to check whether a string is in some way valid, so shouldn't we have this capability on the string type itself? Well, that's what extension methods give us. Here's the version using extension methods:

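Again reconstructed – note that the only change is the “this” modifier on the parameter:

    using System.Text.RegularExpressions;

    public static class StringExtensions
    {
        public static bool IsValidEmailAddress(this string s)
        {
            Regex regex = new Regex(@"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$");
            return regex.IsMatch(s);
        }
    }

    // Usage: reads as if the method belonged to System.String.
    bool valid = "someone@example.com".IsValidEmailAddress();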

Cool! You actually call a method on the string type! The “this” keyword tells the compiler that the extension method applies to the string type. However, what's happening behind the scenes? Well, if we compile both examples, we'll notice that it's the C# compiler that's “doing the trick”. Here's the IL for the first example:

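Abridged to the call site:

    ldstr "someone@example.com"
    call bool StringHelper::IsValidEmailAddress(string)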

And here’s the IL for the second example using extension methods:

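Abridged to the call site – still an ordinary static call:

    ldstr "someone@example.com"
    call bool StringExtensions::IsValidEmailAddress(string)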

As you may have noticed, the compiler transformed the extension method call into a normal static method call. From the CLR's point of view, everything is still the same (there's just one new flag: the method is marked with the ExtensionAttribute).


This is “the big picture”. However, extension methods can be used with any base class or interface – basically, they can be applied to any type. And this is fantastic. The built-in LINQ extension methods use this very feature. Once you get used to some of this “syntactical sugar”, you'll get addicted.

There are so many cool new features in the compilers that come with VS2008 that the best thing to do is take a look at Scott Gu's blog. Despite the fact that he normally doesn't look at the generated IL, he discusses important issues and his writing is very easy to understand. It's a fantastic source of information.

Wednesday, July 16, 2008

Memory Leaks

The .NET Framework evolves considerably from version to version. I think it’s the best software Microsoft has ever written and documented. The latest versions of the framework (and the compilers) are truly brilliant.

But not everyone works with the latest version. Some companies, for their own reasons, often work with older versions, some of them even with version 1.1. That was the case of some projects in my last job.

This introduction is just to contextualize why I'm talking about such an old version of the framework. Remember the memory leaks in C++, when there was no Garbage Collector? Well, here's one in .NET 1.x:


Tip: Upgrade! Always try to work with the latest version of the .NET Framework.

Wednesday, July 2, 2008

Be careful when using IIf

As I stated before, I’m a C# fan. The language itself is so beautiful that you can write clean and amazingly readable code. For example, recall the ternary expression:

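A reconstruction (the original was an image; the method bodies are my assumptions, based on the description below):

    static int CalculateSomething()
    {
        return 1;
    }

    static int CalculateSomethingElse()
    {
        // Simulates the invalid string-to-integer conversion from the post.
        throw new ArgumentException("Cannot convert the string to an integer.");
    }

    static void Main()
    {
        string oneValue = "test";
        string anotherValue = "test";

        int valueToPrint = (oneValue == anotherValue)
            ? CalculateSomething()
            : CalculateSomethingElse();

        Console.WriteLine(valueToPrint);
    }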

In the above example, we're testing whether the value in the variable “oneValue” equals the one in “anotherValue”. If it does, CalculateSomething gets called. Otherwise, CalculateSomethingElse will be called. The result of whichever method runs is stored in the valueToPrint variable. This is the ternary expression.

Now, take a closer look at the CalculateSomethingElse method. As you can see, if we call this method, an ArgumentException will be thrown, because we're trying to convert an invalid string to an integer. However, in the above example, the value in the “oneValue” variable equals the value in the “anotherValue” variable, so CalculateSomethingElse never gets called. If we run this application, everything goes as expected, with no exceptions thrown.


Now, VB.NET has something “similar”: IIf. Take a look at the VB.NET version of the above example:

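Reconstructed as well; assume CalculateSomething and CalculateSomethingElse behave as above:

    Dim oneValue As String = "test"
    Dim anotherValue As String = "test"

    Dim valueToPrint As Integer = _
        CInt(IIf(oneValue = anotherValue, CalculateSomething(), CalculateSomethingElse()))

    Console.WriteLine(valueToPrint)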

The idea is basically the same as with the C# ternary expression. Remember that the CalculateSomethingElse method throws an ArgumentException when called, and that the value in the “oneValue” variable equals the one in “anotherValue”. So, just like in the C# example, you'd expect everything to compile and run with no exceptions (since the only method that should be called is CalculateSomething, because the test expression evaluates to true). However, this is not the case. If you run the VB.NET example, you will get an ArgumentException.


Why? Well, IIf is actually a VB.NET function defined in the Interaction class (Microsoft.VisualBasic namespace). The problem is that, at runtime, both expressions passed to IIf get evaluated. This is a major problem when there's a large amount of work inside those methods, or even something that might throw an exception (e.g. a division by zero). The IIf function is as follows:

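Paraphrased from the Microsoft.VisualBasic implementation – the point being that IIf is a plain function, so both arguments are evaluated before it even runs:

    Public Function IIf(ByVal Expression As Boolean, _
                        ByVal TruePart As Object, _
                        ByVal FalsePart As Object) As Object
        If Expression Then
            Return TruePart
        End If
        Return FalsePart
    End Function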


So, for you VB.NET users, the best advice is to use IIf very carefully.

Tip: Compile both the examples and see the IL that was generated by the compilers.

Tuesday, June 17, 2008

VB.NET Optional parameters

First of all, let me state: I'm not much of a VB fan. However, the company where I worked for about 8 months had everything implemented in VB.NET (for compatibility reasons – they came from the VB6 era) and I had to get used to it. It wasn't hard, but I definitely prefer other approaches, like C++ or C#.

But there was one little thing that I enjoyed in VB.NET: optional parameters. Take a look at the following example, especially the doSomething method:

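Reconstructed (the original was an image); the method name and the "False" default come from the post:

    Module Module1
        Sub doSomething(ByVal name As String, Optional ByVal flag As String = "False")
            Console.WriteLine(name & " " & flag)
        End Sub

        Sub Main()
            doSomething("test")            ' the compiler inserts "False" here
            doSomething("test", "False")   ' explicitly passing the same value
        End Sub
    End Module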


The second parameter is optional. That is, if you don't specify it in a method call, the compiler (remember, the compiler!) inserts the "False" string for you. If we look at the generated IL, we can see that the "False" string is inserted when we call the method with just one parameter. Although I wrote the two calls differently, the generated call is exactly the same:

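Abridged – both call sites compile to exactly this:

    ldstr "test"
    ldstr "False"    // filled in by the compiler for the omitted argument
    call void Module1::doSomething(string, string)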

Also note that I wrote only one method and, accordingly, only one got generated by the compiler. However, the C# compiler (3.0 and below) doesn't allow this. To do the same in C#, we need to write two methods and take advantage of method overloading:

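A reconstruction:

    static void doSomething(string name)
    {
        doSomething(name, "False");
    }

    static void doSomething(string name, string flag)
    {
        Console.WriteLine(name + " " + flag);
    }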

Looking at the generated IL, we’ll see the difference:


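Abridged:

    // The call with one argument targets the one-parameter overload:
    ldstr "test"
    call void Program::doSomething(string)

    // The call with two arguments targets the other:
    ldstr "test"
    ldstr "False"
    call void Program::doSomething(string, string)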


Now, the compiler emits calls to different versions of the same method. Of course, two methods were generated. So, why do I like optional parameters?
Well, as you may have noticed, in the C# example I had to write two different methods to do the same thing. And this is the main reason why I think optional parameters are nice. I “googled” a little to find out why the C# team decided to keep this feature out, and found a few pros and cons:

Pros
  • Optional parameters allow programmers to write less code – fewer methods to write and generate.
  • It’s intuitive.
  • COM interfaces are filled with default parameters (for example, in the Microsoft Office COM automation model, some functions have as many as 30 default parameters). This makes them hard to work with from C# (you need to specify all the parameters).

Cons

  • A change to the default values forces clients to recompile: defaults are baked into the call site at compile time, so if they change on the server side, clients keep passing the old values until rebuilt.
  • The code generated by the compiler is less obvious (the user didn’t write it).
  • Microsoft tries to limit “the magic”, because it's harder for programmers to follow.


Optional parameters are not CLS compliant. Still, in my opinion, it would be good to have them in C#. We had default values in C++, and optional parameters exist in VB.NET… We should have them in C#.

I don't think, though, that they will ever exist in C#. I'm not seeing it in the near future.

Discussion about this issue:

http://blogs.msdn.com/csharpfaq/archive/2004/03/07/85556.aspx


Note: C# 4.0 will allow optional parameters.

Monday, June 16, 2008

The Listen Activity Issue – State Machines

In the past couple of months, I've dedicated a good amount of my free time to learning Windows Workflow Foundation. The workflow concept was familiar to me and I was curious about this technology. So far – and it's only been 6 months since I started – I've gotten to apply this technology a couple of times.

Why am I writing this post? Well, of all the scenarios I've come across so far, one of the trickiest is the timeout situation. Let me explain:

One of the most common issues every programmer has to deal with is the timeout. It happens all the time, whether on a synchronous/asynchronous request, a lack of user intervention or even a hardware failure. What to do when there's no response from a server? Should we wait forever? Of course not: it's a timeout!

Up until now, to deal with this, we'd probably implement a timer, and when the time to wait for a response ran out, we'd perform a set of actions to compensate for the event (e.g. cancel the asynchronous request – the most common case). With Workflow Foundation, a good part of the work needed to do this is already done: the Listen activity.



The Listen activity is a “block” that waits for “n” events; when one of them fires, it resumes execution, going down the branch of the event that fired. The other branches are no longer listened to. One of the events can be a delay (a.k.a. a timer). Are you already picturing the “timeout situation”? It's perfect! One event (or more) is what our application wants (e.g. a document approval from a user) and the other event is the timeout!
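
In code, the pattern looks something like this (a sketch for WF 3.x; the service interface and the timeout value are my assumptions, and you'd normally wire this up in the designer):

    // Reference: System.Workflow.Activities.
    var listen = new ListenActivity();

    // Branch 1: the event we actually want.
    var approvalBranch = new EventDrivenActivity();
    approvalBranch.Activities.Add(new HandleExternalEventActivity
    {
        InterfaceType = typeof(IApprovalService),   // hypothetical local service
        EventName = "DocumentApproved"
    });

    // Branch 2: the timeout.
    var timeoutBranch = new EventDrivenActivity();
    timeoutBranch.Activities.Add(new DelayActivity
    {
        TimeoutDuration = TimeSpan.FromHours(24)
    });

    listen.Activities.Add(approvalBranch);
    listen.Activities.Add(timeoutBranch);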

So, why am I complaining? Well, the Listen activity is available only for sequential workflows – that is, workflows whose activities execute sequentially and whose execution can't go back. My beloved Listen activity can't be used in state machine workflows. According to Kenn Scribner, a software architect and instructor at Wintellect who wrote Windows Workflow Foundation Step by Step, the WF team decided to keep the Listen activity out of state machine workflows because it could be a potential cause of deadlocks (which are not easy to find). I can agree that it could lead to these “tricky” deadlocks – I can even imagine a few and tell you how to solve them (don't ask) – but one thing I can't understand is why we aren't allowed to decide whether we want to take the risk. I mean, we become software engineers for a reason! To solve complicated problems! To come up with solutions for the “hard stuff”!

Well, of course, Microsoft doesn't do things for no reason, and you can work around it and implement the Listen activity yourself (as a composite activity) or even use the Parallel activity. But that's work that would be unnecessary if we had the Listen activity. I sure hope they'll let us use the Listen activity with state machine workflows in the next WF release.