When you move from one programming language to another, do you eventually find yourself thinking differently?
After eight years of slinging Java code, I converted over to C#. The C# language borrows (steals?) heavily from Java, but it has also added features of its own. Some of C#'s most valuable additions have since been adopted by Java: things like attributes, generics, and auto-boxing – which appeared in C# first – were added to the JDK as of 1.5. It's only a matter of time until Java adopts using blocks and the yield keyword, too.
Despite this back-and-forth trading of features and enhancements, each language still maintains its distinct flavor. Some facets of each language – many of which are quite subtle – will probably never be adopted by the other, simply because their introduction would create such a fundamental shift in language concepts that it would invalidate a lot of prior work.
So, I was careful about my approach to C#. My Java work had been preceded by seven years of C++, and I remember what that transition was like. Although Java is strikingly similar to C++, the similarities are deceiving. Seemingly minor differences between the two languages created a sea change in how I thought, how I designed code, and how I tested applications.
Java introduced me to new things like garbage collection, type reflection, and meaningful exception handling. At first, I tried to write Java like a C++ programmer. It took some time to get my brain adjusted to a new paradigm. Once I did, however, I was able to unleash the true power of the technology by leveraging new, fundamental concepts of the language.
Hence, I approached C# with an abundance of caution. Despite C#’s similarity to Java, I didn’t want to stay shackled to “the Java way” of doing things. I kept a sharp eye on what other developers were doing and tried to gain insight into the most effective way to adopt the language.
Now that I’ve gained experience in C# – but continue to do some Java work on the side – I can see how seemingly trivial differences in the languages impact my design decisions. Off the top of my head, two things come to mind…
C# events and delegates
This is a huge win for C#. I use the Observer Pattern all over the place now.
The Observer Pattern, of course, lets you decouple an object that generates events from the objects that listen for them. In Java, making this work requires a lot of effort from the programmer. For your observed class, you need to define an interface that captures all of your events, manually implement methods to add and remove your listeners, and manually iterate over your list of listeners in order to raise events.
Even implementing event listeners is ugly. You need to create an implementation of the event interface – stubbing out all of the uninteresting events with no-ops – and have that class make callbacks to your business object. Anonymous inner classes help a bit, but they still clutter up the code.
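To make that boilerplate concrete, here's a minimal Java sketch of the machinery described above. All of the names are hypothetical – the point is how much plumbing a single observable value demands:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface capturing the observed object's events.
interface TemperatureListener {
    void temperatureChanged(double newValue);
    void sensorFailed();
}

// The observed class must manage its own listener list and raise events by hand.
class TemperatureSensor {
    private final List<TemperatureListener> listeners = new ArrayList<>();

    public void addListener(TemperatureListener l)    { listeners.add(l); }
    public void removeListener(TemperatureListener l) { listeners.remove(l); }

    public void setTemperature(double value) {
        // Manual iteration just to fire one event.
        for (TemperatureListener l : listeners) {
            l.temperatureChanged(value);
        }
    }
}

// A listener: every uninteresting event still needs a no-op stub.
class TemperatureLogger implements TemperatureListener {
    @Override public void temperatureChanged(double v) { System.out.println("temp=" + v); }
    @Override public void sensorFailed() { /* no-op stub */ }
}
```

That's three types and a hand-rolled collection for what amounts to "tell me when the value changes."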
java.util.Observable – a weak attempt to encapsulate this design pattern into a class – has been around since JDK 1.0, but it fails to be useful in anything but contrived, academic examples.
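For the curious, here's roughly what leaning on java.util.Observable looks like – a hypothetical sketch that shows why it rarely helps: you must extend a concrete class (burning your single inheritance slot), remember to call setChanged() before every notification, and your observers receive an untyped Object payload:

```java
import java.util.Observable;

// Hypothetical example: Observable is a concrete class you must extend.
class StockTicker extends Observable {
    void publish(double price) {
        setChanged();            // forget this call and notifyObservers() silently does nothing
        notifyObservers(price);  // observers receive the price as an untyped Object
    }
}
```

Each observer implements update(Observable o, Object arg) and has to cast arg back to whatever type it expects – hardly an improvement over writing the pattern by hand.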
In short, the level of effort required to make the Observer Pattern work in Java is so overwhelming it is seldom implemented except in the most compelling cases. Most Java developers, though not intentionally lazy, simply don’t think to reach for this tool and end up creating tightly coupled code which is harder to test and maintain.
C#'s decision to add events as first-class members of a class and delegates as first-class data types makes hooking up listeners to events as easy as calling the += operator. For those of you unfamiliar with C#, think of delegates as typesafe method pointers and events as special class properties that automatically manage a collection of delegates. All of those listener interfaces and anonymous inner classes from the Java world get reduced to a single line of code.
The complexity of the Observer Pattern in Java makes me reluctant to use it if I can envision a “cleaner” solution, but the simplicity of implementing the pattern in C# makes using it second nature. As a result, I design code in C# that looks fundamentally different from what I’d design in Java. It’s not deliberate. It’s just that the nature of each language naturally steers me toward a particular way of solving certain problems.
C# virtual methods
In Java, all methods are overridable by default. To disallow overriding a method, you have to explicitly declare it using the final keyword. C# is exactly the opposite: a method on a C# class may not be overridden unless it is explicitly declared with the virtual keyword.
A minor difference? No way. In C#, the inability to make minor behavioral changes to third party classes is driving me insane!
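For contrast, here's a small Java sketch (with hypothetical names) of the default described above – every method is overridable unless you opt out with final:

```java
// In Java, methods are overridable by default; `final` is the explicit opt-out.
class HttpClientBase {
    String buildUrl(String path) {               // overridable by default – no keyword needed
        return "https://example.com" + path;
    }
    final void connect() { /* ... */ }           // `final`: subclasses may NOT override this
}

// A consumer can make a minor behavioral change without touching the base class.
class LoggingHttpClient extends HttpClientBase {
    @Override
    String buildUrl(String path) {
        String url = super.buildUrl(path);
        System.out.println("Requesting " + url); // small tweak layered on the original behavior
        return url;
    }
    // Attempting to override connect() would be a compile error.
}
```

In C#, buildUrl would have to be declared virtual in the base class before LoggingHttpClient could override it – and if the base class came from a third party, you'd be out of luck.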
The only semi-legitimate rationale I can think of for this default behavior is performance. Invoking an overridable method requires an extra step of finding the method pointer in a lookup table instead of invoking it directly. Still, this is a silly argument. If your application is so in need of this performance boost, you are using the wrong programming language.
Allowing other code to override your class’ methods makes your classes more extensible. Although some classes use the Template Method design pattern to deliberately leverage overridable methods, most class designers can seldom predict how and where the consumers of their classes might benefit from polymorphic behaviors.
Developers almost always take the default path unless they’re envisioning a particular usage. Therefore, the default path for class methods should be the most useful and extensible one – overridable unless explicitly declared otherwise.
Once again, a seemingly trivial distinction between two languages drives my thought process. Knowing that third party C# classes probably limit polymorphism (unless every method is declared virtual), I'm forced to approach my problem solving differently than I would in a Java application.
Do these two examples imply that either C# or Java is "flawed"? No. But they certainly highlight how moving between the two can create some consternation for developers as they are forced to "think differently".
How does moving between languages make you think differently? Please comment.