Password-Based Encryption

December 18th, 2013

Password-Based Encryption (PBE) is a mechanism for protecting sensitive data with a symmetric cryptographic key derived from a password or passphrase. A passphrase allows the data owner to use a self-selected, easy-to-remember secret expression instead of, say, 32 random bytes (in the case of a 256-bit key). If implemented improperly, however, password-encrypted data is easily cracked even when strong passwords are used. Therefore, the use of a proven cryptographic library is essential.

Here, we’ll look at a simple example of PBE that encrypts plaintext data using a password of arbitrary length. The example is in C# and uses the .NET version of the BouncyCastle cryptographic library; a Java version of BouncyCastle also exists and allows for a similar solution.
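The full BouncyCastle example is in the post itself. As a rough sketch of the same idea using only the standard Java crypto APIs (not the post’s BouncyCastle code), the snippet below stretches the password into a 256-bit AES key with PBKDF2 and stores the salt and IV alongside the ciphertext so the key can be re-derived for decryption. The iteration count and cipher mode are illustrative choices, not the post’s.

import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class PbeSketch {
    public static byte[] encrypt(char[] password, byte[] plaintext) throws Exception {
        SecureRandom random = new SecureRandom();
        byte[] salt = new byte[16];
        random.nextBytes(salt);

        // Derive a 256-bit AES key from the password with PBKDF2 (HMAC-SHA-256, 100,000 iterations)
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = factory.generateSecret(new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        byte[] iv = new byte[16];
        random.nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Prepend salt and IV so the same key can be re-derived at decryption time
        byte[] out = new byte[salt.length + iv.length + ciphertext.length];
        System.arraycopy(salt, 0, out, 0, salt.length);
        System.arraycopy(iv, 0, out, salt.length, iv.length);
        System.arraycopy(ciphertext, 0, out, salt.length + iv.length, ciphertext.length);
        return out;
    }
}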

Read the rest of this entry »

TCP/IP Parameter Tuning for Rapid Client Connections

February 17th, 2010

Applications that open and close a large number of client TCP/IP sockets risk exhausting the pool of available ephemeral (client-side) ports.  This can happen in a load and performance testing scenario using a tool like LISA Test from iTKO, or it can happen in a production environment if an active application simply needs to rapidly open and close a large number of outbound connections.

On the .NET platform, the exception raised reads “System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted <host>:<port>”.

In Java, the exception is “java.net.BindException: Address already in use: connect”.

Both exceptions are misleading because they are generally associated with server socket conflicts – not outbound client socket connections.  However, a better understanding of the TCP state machine sheds some light on this behavior – and a solution.
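The fix itself is parameter tuning, covered in the full post. To see why the error occurs in the first place, here is a minimal Java sketch (the host and port are placeholders) that reproduces the symptom: because the client initiates each close, every closed socket lingers in TIME_WAIT, and a tight connect/close loop drains the ephemeral port range faster than the old ports are released.

import java.net.InetSocketAddress;
import java.net.Socket;

public class EphemeralPortExhaustion {
    public static void main(String[] args) throws Exception {
        // Hypothetical target; any reachable TCP service will do.
        InetSocketAddress server = new InetSocketAddress("testserver.example.com", 8080);

        // Each iteration grabs a fresh ephemeral port. The closed sockets sit in
        // TIME_WAIT while the loop races ahead, eventually producing
        // "java.net.BindException: Address already in use: connect".
        for (int i = 0; i < 100_000; i++) {
            try (Socket socket = new Socket()) {
                socket.connect(server, 1_000); // 1-second connect timeout
            }
        }
    }
}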

Read the rest of this entry »

The .NET Asynchronous I/O Design Pattern

February 11th, 2010

Asynchronous operations allow a program to perform time-consuming tasks on a background thread while the main application continues to execute.  For example, consider when a program makes a request to a remote system.  In a single-threaded scenario, the call is made and the CPU goes idle as the caller waits on the server’s processing time and the network latency.  If this waiting time can be delegated to a separate thread of execution, the program can complete other tasks until it receives notification that the background work is complete.

However, managing multiple threads and cross-thread communication adds complexity to your code.  Fortunately, the .NET Framework applies a useful design pattern to its I/O classes that makes asynchronous calls easy.  Let’s take a look at an example.
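The full post walks through the .NET pattern itself. As a rough, language-neutral sketch of the underlying idea (hand a blocking call to a background thread, collect its result later), here is a minimal Java analogue using an ExecutorService and Future; the method names and the simulated delay are hypothetical, and this is not the .NET design pattern.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundCallSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Hand the slow, blocking call to a background thread and keep a handle
        // (a Future) that can be waited on later.
        Future<String> pending = pool.submit(() -> callRemoteSystem());

        doOtherWork();                    // main thread keeps working
        String response = pending.get();  // block only when the result is actually needed
        System.out.println(response);

        pool.shutdown();
    }

    // Hypothetical stand-ins for a remote request and other foreground work.
    private static String callRemoteSystem() throws InterruptedException {
        Thread.sleep(2_000);              // simulate network latency
        return "response payload";
    }

    private static void doOtherWork() {
        System.out.println("doing other work while the request is in flight");
    }
}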

Read the rest of this entry »

Understanding SSL – Part 1: Certificates and Keys

October 14th, 2009

The technology behind Secure Sockets Layer (SSL) network connections is often perceived as a bit of “black magic” – smoke and mirrors securing our Internet connections from snooping.  When banking and shopping online, even a novice user understands their browser sets up an HTTPS connection (which is simply HTTP over SSL) to protect the transaction.  It’s easy to simply surf to a secure URL and know that, somehow, SSL is magically keeping you safe.

Developing software that uses SSL is an entirely different matter.  The simplicity quickly fades, and the developer must confront the complexities of certificate management, trust stores, handshaking, and a host of other details that must be perfectly aligned to make the secure communication work.  In Part 1, we’ll cover SSL concepts at a very high level.  In subsequent posts, we’ll take a deeper dive into making these connections happen in both Java and C#.
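Before those deeper dives, here is a minimal Java sketch of the happy path, assuming the JVM’s default trust store and using www.example.com purely as a placeholder host. It is only a taste of what the platform does on your behalf: certificate validation and key exchange all happen inside the handshake call.

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslHandshakeSketch {
    public static void main(String[] args) throws Exception {
        // The default factory uses the JVM's bundled trust store (cacerts) to
        // validate the server's certificate chain during the handshake.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();

        try (SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443)) {
            socket.startHandshake(); // certificate validation and key exchange happen here
            System.out.println("Negotiated cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}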

Read the rest of this entry »

Liquid Cooling a PC: Gimmick or Necessity?

October 6th, 2009

Early PCs seldom had more than a tiny, weak fan on the back of the case to push out excess heat generated by the internal electronics.  As transistors shrank and chips grew faster and more complex, CPUs began running hotter and reaching dangerous temperatures – so hot, in fact, that the little case fan couldn’t protect the delicate electronics from burning out.

To address this, PC manufacturers began adding fans dedicated to cooling the CPU, the nerve center of the motherboard.  Today, with high-end gaming machines consuming 1000W or more, enormous heat is generated not just by the CPU, but by the memory, north and south bridges, and the graphics card.  To expel this heat from inside the case, larger and faster case fans are needed to keep everything running at a safe, relatively cool temperature.

For the past few years, PC accessory vendors have been marketing liquid cooling systems.  These products promise to cool more efficiently, and more quietly, than traditional fans – at the same time adding several hundred dollars to the total price tag of a new machine.  The question is: is this just a pricey gimmick, or is this the next logical step in the progression of ever more powerful machines?

Read the rest of this entry »

Kill Spam With Real-Time DNS Blacklists

June 11th, 2008

A great open source project for gaining an understanding of e-mail systems, including an in-depth look at SMTP and POP3, is the Java-based Apache JAMES Project.  Although JAMES has the unfortunate shortcoming of being built around the now defunct and unsupported Apache Avalon Framework, it’s still a fantastic learning tool for understanding e-mail protocols, mail delivery, and spam filtering.  Not only that, it’s a fully functional, enterprise-ready mail server that can be up and running with minimal configuration.

One technology implemented by JAMES for spam filtering is real-time DNS blacklists.  DNSBLs identify the IP addresses of potential spam sources and machines known to be delivering spam (as determined by the sometimes controversial policies of the list owner).  Spam blacklists date back to 1996 with Paul Vixie’s Mail Abuse Prevention System, and are now used by ISPs and corporate mail systems around the world.  Countless organizations maintain blacklists, and Web sites like MX Toolbox permit ad hoc queries of IP addresses against dozens of published lists.
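The full post covers how JAMES wires this in; as a generic illustration of the lookup mechanism itself (not JAMES’s implementation), here is a minimal Java sketch. A DNSBL query reverses the octets of the suspect IP address, appends the list’s DNS zone, and performs an ordinary address lookup; an answer (typically 127.0.0.x) means the address is listed, while NXDOMAIN means it is clean. The zone name and the conventional 127.0.0.2 test address below are just examples.

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsblCheck {
    // Returns true if the given IPv4 address is listed on the blacklist zone,
    // e.g. 192.0.2.99 checked against zen.spamhaus.org becomes a lookup of
    // 99.2.0.192.zen.spamhaus.org.
    static boolean isListed(String ipv4, String zone) {
        String[] octets = ipv4.split("\\.");
        String query = octets[3] + "." + octets[2] + "." + octets[1] + "." + octets[0] + "." + zone;
        try {
            InetAddress.getByName(query);   // an answer means "listed"
            return true;
        } catch (UnknownHostException notListed) {
            return false;                   // NXDOMAIN means "not listed"
        }
    }

    public static void main(String[] args) {
        System.out.println(isListed("127.0.0.2", "zen.spamhaus.org")); // test address, normally listed
    }
}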

Read the rest of this entry »

Got Requirements? If Not, You’re Doomed

March 31st, 2008

Yet another software development disaster is headed for the digital trash heap of failed projects. This time, the casualty is software funded by the U.S. Census Bureau. The Associated Press reports that the failure to deliver usable software to census enumerators could add as much as $2 billion to the 2010 census. Worse, the AP reports “census officials are considering a return to using paper and pencil to count every man, woman and child in the nation.”

This is a spectacular train wreck that had doom written all over it from Day One. It’s a familiar, predictable pattern constantly repeated since the first clueless manager commanded “just make it user friendly”.

Read the rest of this entry »

JUnit Factory Part 3: Improving Code Coverage

February 10th, 2008

JUnit Factory is rather clever in how it analyzes and executes your code to generate characterization tests. However, legacy Java code was generally not written with testability in mind, which sometimes makes it difficult for JUnit Factory to attain complete coverage: objects may need to exist in a complex state, or the code may need to interact with an external resource such as a database.

JUnit Factory is often able to generate mock instances automatically for problematic classes. When automocking fails, the developer can improve coverage either by extracting behaviors into private methods or by providing hints to JUnit Factory in the form of test data helpers.
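As a generic illustration of the first approach (not JUnit Factory’s own mechanics), here is a minimal Java sketch in which a calculation tangled up with database access is extracted into its own method, so a generated or hand-written test can cover it without the external resource. All class and method names are hypothetical.

public class InvoiceService {
    // Before extraction, the discount logic was buried in a method that also hits
    // the database, so no test could cover it without a live connection.
    public void applyDiscount(long invoiceId) {
        Invoice invoice = loadFromDatabase(invoiceId);   // external resource
        double discounted = calculateDiscount(invoice.total, invoice.customerYears);
        saveToDatabase(invoiceId, discounted);           // external resource
    }

    // Extracted behavior: pure input/output, fully coverable with no database.
    static double calculateDiscount(double total, int customerYears) {
        double rate = customerYears >= 5 ? 0.10 : 0.02;
        return total * (1.0 - rate);
    }

    // Hypothetical persistence helpers, stubbed for the sketch.
    private Invoice loadFromDatabase(long id) { return new Invoice(100.0, 6); }
    private void saveToDatabase(long id, double total) { /* write back */ }

    static class Invoice {
        final double total;
        final int customerYears;
        Invoice(double total, int customerYears) { this.total = total; this.customerYears = customerYears; }
    }
}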

Read the rest of this entry »