[repost] 7 programming myths — busted!

original:http://www.infoworld.com/d/application-development/7-programming-myths-busted-190890?source=rs

The tools are sharper, but software development remains rife with misconceptions on productivity, code efficiency, offshoring, and more

Even among people as logical and rational as software developers, you should never underestimate the power of myth. Some programmers will believe what they choose to believe against all better judgment.

The classic example is the popular fallacy that you can speed up a software project by adding more developers. Frederick P. Brooks debunked this theory in 1975, in his now-seminal book of essays, “The Mythical Man-Month.”

Brooks’ central premise was that adding more developers to a late software project won’t make it go faster. On the contrary, they’ll delay it further. If this is true, he argued, much of the other conventional wisdom about software project management was actually wrong.

Some of Brooks’ examples seem obsolete today, but his premise is still sound. He makes his point cogently and convincingly. Unfortunately, too few developers seem to have taken it to heart. More than 35 years later, mythical thinking still abounds among programmers. We keep making the same mistakes.

The real shame is that, in many cases, our elders pointed out our errors years ago, if only we would pay attention. Here are just a few examples of modern-day programming myths, many of which are actually new takes on age-old fallacies.

Programming myth No. 1: Offshoring produces software faster and cheaper
These days, no one in their right mind thinks of launching a major software project without an offshoring strategy. All of the big software vendors do it. Silicon Valley venture capitalists insist on it. It’s a no-brainer — or so the service providers would have you believe.

It sounds logical. By off-loading coding work to developing economies, software firms can hire more programmers for less. That means they can finish their projects in less time and with smaller budgets.

But hold on! This is a classic example of the Mythical Man-Month fallacy. We know that throwing more bodies at a software project won’t help it ship sooner or cost less — quite the opposite. Going overseas only makes matters worse.

According to Brooks, “Adding people to a software project increases the total effort necessary in three ways: the work and disruption of repartitioning itself, training new people, and added intercommunication.”
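
Brooks even put a number on that third term: if every pair of developers must coordinate, a team of n people has n(n-1)/2 communication channels. A five-person team has 10 channels to maintain; a 10-person team has 45; a 20-person team has 190. Doubling the head count roughly quadruples the coordination overhead before a single additional line of code gets written.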

Let’s assume that the effort required for repartitioning and training is the same for outsourced projects as for homegrown ones (a dangerous assumption). The communication effort required for outsourcing is much higher. Language, culture, and time-zone differences add overhead. Worse, offshore development teams are often prone to high turnover rates, so communication rarely improves over time.

Little wonder there’s no shortage of offshoring horror stories. Outsourcers who promise more than they deliver are a recurring theme. When deadlines slip and clients are forced to finish the work in-house, any putative cost savings disappear.

Offshoring isn’t magic. In fact, it’s hard to get right. If an outsourcer promises to solve all of your problems for nothing, maintain a healthy skepticism. That free lunch could end up costing more than you bargained for.

Programming myth No. 2: Good coders work long hours
We all know the stereotype. In popular culture, programmers stay up late into the night, coding. Pizza boxes and energy-drink cans litter their desks. They work weekends; indeed, they seldom go home.

There’s some truth to this caricature. In a recent analysis of National Health Interview Survey data, programming tied for the fifth most sleep-deprived profession. Long hours are particularly endemic in the video game industry, where developers must endure “crunch time” as deadlines approach.

But it doesn’t have to be that way. There’s plenty of evidence to suggest that long hours don’t increase productivity. In fact, crunch time may hurt more than it helps.

There’s nothing wrong with putting in extra effort. Fred Brooks praises “running faster than necessary, moving sooner than necessary, trying harder than necessary.” But he also warns against confusing effort with progress.

More often than not, Brooks says, software projects run late due to chronic schedule slippage, not catastrophes. Maybe the initial estimates were unrealistic. Maybe the project milestones were fuzzy and poorly defined. Or maybe they changed midstream when the client added requirements or requested new features.

Either way, the result is the same. As the little delays add up, programmers are forced into crisis mode, but their extra efforts are spent chasing goals that can no longer be reached. As the project schedule slips further, so does morale.

Some programmers might be content to work until they drop, but most have families, friends, and personal lives, like everyone else. They’d be happy to leave the office when everyone else does. So instead of praising coders for working long hours, concentrate on figuring out why they have to — and how it can stop. They’ll appreciate it far more than free pizza, guaranteed.

Programming myth No. 3: Great developers are 10 times more productive
Good programmers are hard to find, but great programmers are the stuff of legend — or at least urban legend.

If you believe the tales, somewhere out there are hackers so skilled that they can code rings around the rest of us. They’ve been dubbed “10x developers” — because they’re allegedly an order of magnitude more productive than your average programmer.

Naturally, recruiters and hiring managers would kill to find these fabled demigods of code. Yet for the most part, they remain as elusive as Bigfoot. In fact, they probably don’t exist.

Unfortunately, the blame for this myth falls on Fred Brooks himself. Well, almost — he’s been misquoted. What Brooks actually says is that, in one study, the very best programmers were 10 times more productive than the very worst programmers, not the average ones.

Most developers fall somewhere in the middle. If you really see a 10-fold productivity differential in your own staff, chances are you’ve made some very poor hiring choices in the past (along with some very good ones).

What’s more, the study Brooks cites was from 1966. Modern software project managers know better than to place too much faith in developer productivity metrics, which are seldom reliable. For one thing, code output doesn’t tell the whole story. Brooks himself admits that even the best programmers spend only about 50 percent of the workweek actually coding and debugging.

This doesn’t mean you shouldn’t try to hire the best developers you can. But waiting for superhuman coders to come along is a lousy staffing strategy. Instead of obsessing over 10x developers, focus on building 10x teams. You’ll have a much larger talent pool to choose from, which means you’ll fill your vacancies and your project will ship much sooner.

Programming myth No. 4: Cutting-edge tools produce better results
Software is a technology business, so it’s tempting to believe technology can solve all of its problems. Wouldn’t it be nice if a new programming language, framework, or development environment could slash costs, reduce time to market, and improve code quality, all at once? Don’t hold your breath.

Plenty of companies have tried using unorthodox languages to outflank their competitors. Yammer, a social network, wrote its first version in Scala. Twitter began life as a Ruby on Rails application. Reddit and Yahoo Store were both built with Lisp.

Unfortunately, most such experiments are short-lived. Yammer switched to Java when Scala couldn’t meet its needs. Twitter switched from Ruby to Scala before also settling on Java. Reddit rewrote its code in Python. Yahoo Store migrated to C++ and Perl.

This isn’t to say your choice of tools is irrelevant. Particularly in server environments, where scalability is as important as raw performance, platforms matter. But it’s telling that the aforementioned companies all switched from trendy languages to more mainstream ones.

Fred Brooks foresaw this decades ago. In his essay “No Silver Bullet,” he writes, “There is no single development, in either technology or management technique, that promises even one order of magnitude improvement in productivity, in reliability, in simplicity.”

For example, when the U.S. Department of Defense developed the Ada language in the 1970s, its goal was to revolutionize programming — no such luck. “[Ada] is, after all, just another high-level language,” Brooks wrote in 1986. Today it’s a niche tool at best.

Of course, this won’t stop anyone from inventing new programming languages, and that’s fine. Just don’t fool yourself. When building quality software is your goal, agility, flexibility, ingenuity, and skill trump technology every time. But choosing mature tools doesn’t hurt.

Programming myth No. 5: The more eyes on the code, the fewer bugs
Open source developers have a maxim: “Given enough eyeballs, all bugs are shallow.” It’s sometimes called Linus’ Law, but it was really coined by Eric S. Raymond, one of the founding thinkers of the open source movement.

“Eyeballs” refers to developers looking at source code. “Shallow” means the bugs are easy to spot and fix. The idea is that open source has a natural advantage over proprietary software because anyone can review the code, find defects, and correct them if need be.

Unfortunately, that’s wishful thinking. Just because bugs can be found doesn’t mean they will be. Most open source projects today have far more users than contributors. Many users aren’t reviewing the source code at all, which means the number of eyeballs for most projects is exaggerated.

More importantly, finding bugs isn’t the same as fixing them. Anyone can find bugs; fixing them is another matter. Even if we assume that every pair of eyeballs that spots a bug is capable of fixing it, we end up with yet another variation on Brooks’ Mythical Man-Month problem.

One 2009 study found that code files that had been patched by many separate developers contained more bugs than those patched by small, concentrated teams. By studying these “unfocused contributions,” the researchers inferred an opposing principle to Linus’ Law: “Too many cooks spoil the broth.”

Brooks was well aware of this phenomenon. “The fundamental problem with program maintenance,” he wrote, “is that fixing a defect has a substantial (20 to 50 percent) chance of introducing another.” Running regression tests to spot these new defects can become a significant constraint on the entire development process — and the more unfocused fixes, the worse it gets. It’s enough to make you bug-eyed.
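
One inexpensive defense is to pin each fixed defect down with an automated regression test, so a later “fix” that reintroduces the bug fails a build instead of reaching users. Here is a minimal table-driven sketch in Go; the Swap function and its cases are hypothetical stand-ins for real maintenance history:

```go
package swap

import "testing"

// Swap is a hypothetical stand-in for a function under maintenance.
func Swap(a, b int) (int, int) { return b, a }

// TestSwap grows by one case per fixed defect, so the 20 to 50
// percent chance that a fix quietly breaks something else gets
// caught here rather than in production.
func TestSwap(t *testing.T) {
	cases := []struct{ a, b, wantA, wantB int }{
		{3, 5, 5, 3},
		{0, 0, 0, 0},
		{-1, 7, 7, -1}, // case added after an earlier defect
	}
	for _, c := range cases {
		gotA, gotB := Swap(c.a, c.b)
		if gotA != c.wantA || gotB != c.wantB {
			t.Errorf("Swap(%d, %d) = (%d, %d), want (%d, %d)",
				c.a, c.b, gotA, gotB, c.wantA, c.wantB)
		}
	}
}
```

Saved as swap_test.go, the whole suite reruns with a single go test.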

Programming myth No. 6: Great programmers write the fastest code
A professional racing team’s job is to get its car to the finish line before all the others. The machine itself is important, but it’s the hard, painstaking work of the driver and the mechanics that makes all the difference. You might think that’s true of computer code, too. Unfortunately, hand-optimization isn’t always the best way to get the most performance out of your algorithms. In fact, today it seldom is.

One problem is that programmers’ assumptions about how their own code actually works are often wrong. High-level languages shield programmers from the underlying hardware by design. As a result, coders may try to optimize in ways that are useless or even harmful.

Take the XOR swap algorithm, which uses bitwise operations to swap the values of two variables without a temporary. Once, it was an efficient hack. But modern CPUs boost performance by executing multiple independent instructions in parallel, using pipelines, and that’s exactly what XOR swap prevents: each of its three operations depends on the result of the one before, so none of them can overlap. Try to “optimize” your code with XOR swap today and it will likely run slower than the naive version, which pipelining and register renaming handle almost for free.
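
To make the contrast concrete, here is a minimal Go sketch (Go is only an illustrative choice; the same point holds in C or Java). Both functions are correct; the difference is the dependency chain:

```go
package main

import "fmt"

// xorSwap exchanges two integers with three XOR operations.
// Each step needs the result of the previous one, so a pipelined
// CPU cannot overlap any of them.
func xorSwap(a, b int) (int, int) {
	a ^= b
	b ^= a
	a ^= b
	return a, b
}

// tempSwap is the "naive" version. Its reads are independent, so
// the CPU can overlap them, and register renaming often makes the
// moves effectively free.
func tempSwap(a, b int) (int, int) {
	t := a
	a, b = b, t
	return a, b
}

func main() {
	fmt.Println(xorSwap(3, 5))  // 5 3
	fmt.Println(tempSwap(3, 5)) // 5 3
}
```

In idiomatic Go you would simply write a, b = b, a and let the compiler pick the fastest instruction sequence.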

Multicore CPUs complicate matters further. To take advantage of them, you need to write multithreaded code, and parallel processing is hard to do right: optimizations that speed up one thread can inadvertently throttle the others, and the more threads there are, the harder the program is to optimize. Besides, just because a routine can be optimized doesn’t mean it should be. Most programs spend 90 percent of their running time in just 10 percent of their code, so effort spent outside those hot spots is wasted.
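
That 90/10 split is why measurement should come before hand-tuning. Here is a minimal sketch using Go’s standard runtime/pprof package; hotLoop is a hypothetical stand-in for whatever your profiler would actually flag:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

// hotLoop stands in for the 10 percent of the code where a
// program spends most of its running time.
func hotLoop() {
	sum := 0
	for i := 0; i < 100_000_000; i++ {
		sum += i
	}
	_ = sum
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Record where CPU time actually goes; inspect the result
	// with `go tool pprof cpu.prof` before optimizing anything.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	hotLoop()
}
```

If a routine never shows up near the top of the profile, optimizing it buys you nothing.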

In many cases, you’re better off simply trusting your tools. As early as 1975, Fred Brooks observed that some compilers produced output that handwritten code couldn’t beat. That’s even truer today, so don’t waste time on unneeded hand-optimizations. In your race to improve the efficiency of your code, remember that developer efficiency is often just as important.

Programming myth No. 7: Good code is “simple” or “elegant”
Like most engineers, programmers like to talk about finding “elegant” or “simple” solutions to problems. The trouble is, this turns out to be a poor way to judge software code.

For one thing, what do these terms really mean? Is a simple solution the same as an elegant one? Is an elegant solution one that is computationally efficient, or is it one that uses the fewest lines of code?

Spend too long searching for either, and you risk ending up with that bane of good programming: the clever solution. It’s so clever that the other members of the team have to sit and puzzle over it like a crossword before they understand how it works. Even then, they dare not touch it, ever, for fear it might fly apart.

In many cases, the solution is too clever even for its own good. In their 1974 book, “The Elements of Programming Style,” Brian Kernighan and P.J. Plauger wrote, “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” For that matter, how will anyone else?

In a sense, concentrating on finding the most “elegant” solution to a programming problem is another kind of premature optimization. Solving the problem should be the primary goal.

So be wary of programmers who seem more interested in feathering their own caps than in writing code that’s easy to read, maintain, and debug. Good code might not be that simple. Good code might not be that elegant. The best code works, works well, and is bug-free. Why ask for more?

[repost] 10 programming languages that could shake up IT

original:http://www.infoworld.com/d/application-development/10-programming-languages-could-shake-it-181548

Do we really need another programming language? There is certainly no shortage of choices already. Between imperative languages, functional languages, object-oriented languages, dynamic languages, compiled languages, interpreted languages, and scripting languages, no developer could ever learn all of the options available today.

And yet, new languages emerge with surprising frequency. Some are designed by students or hobbyists as personal projects. Others are the products of large IT vendors. Even small and midsize companies are getting in on the action, creating languages to serve the needs of their industries. Why do people keep reinventing the wheel?

The answer is that, as powerful and versatile as the current crop of languages may be, no single syntax is ideally suited for every purpose. What’s more, programming itself is constantly evolving. The rise of multicore CPUs, cloud computing, mobility, and distributed architectures has created new challenges for developers. Adding support for the latest features, paradigms, and patterns to existing languages — especially popular ones — can be prohibitively difficult. Sometimes the best answer is to start from scratch.

Here, then, is a look at 10 cutting-edge programming languages, each of which approaches the art of software development from a fresh perspective, tackling a specific problem or a unique shortcoming of today’s more popular languages. Some are mature projects, while others are in the early stages of development. Some are likely to remain obscure, but any one of them could become the breakthrough tool that changes programming for years to come — at least, until the next batch of new languages arrives.

Experimental programming language No. 1: Dart
JavaScript is fine for adding basic interactivity to Web pages, but when your Web applications swell to thousands of lines of code, its weaknesses quickly become apparent. That’s why Google created Dart, a language it hopes will become the new vernacular of Web programming.

Like JavaScript, Dart uses C-like syntax and keywords. One significant difference, however, is that while JavaScript is a prototype-based language, objects in Dart are defined using classes and interfaces, as in C++ or Java. Dart also allows programmers to optionally declare variables with static types. The idea is that Dart should be as familiar, dynamic, and fluid as JavaScript, yet allow developers to write code that is faster, easier to maintain, and less susceptible to subtle bugs.

You can’t do much with Dart today. It’s designed to run on either the client or the server (a la Node.js), but the only way to run client-side Dart code so far is to cross-compile it to JavaScript. Even then it doesn’t work with every browser. But because Dart is released under a BSD-style open source license, any vendor that buys Google’s vision is free to build the language into its products. Google only has an entire industry to convince.

Experimental programming language No. 2: Ceylon
Gavin King denies that Ceylon, the language he’s developing at Red Hat, is meant to be a “Java killer.” King is best known as the creator of the Hibernate object-relational mapping framework for Java. He likes Java, but he thinks it leaves lots of room for improvement.

Among King’s gripes are Java’s verbose syntax, its lack of first-class and higher-order functions, and its poor support for meta-programming. In particular, he’s frustrated with the absence of a declarative syntax for structured data definition, which he says leaves Java “joined at the hip to XML.” Ceylon aims to solve all these problems.

King and his team don’t plan to reinvent the wheel completely. There will be no Ceylon virtual machine; the Ceylon compiler will output Java bytecode that runs on the JVM. But Ceylon will be more than just a compiler, too. A big goal of the project is to create a new Ceylon SDK to replace the Java SDK, which King says is bloated and clumsy and has never been “properly modernized.”

That’s a tall order, and Red Hat has released no Ceylon tools yet. King says to expect a compiler this year. Just don’t expect software written in “100 percent pure Ceylon” any time soon.

Experimental programming language No. 3: Go
Interpreters, virtual machines, and managed code are all the rage these days. Do we really need another old-fashioned language that compiles to native binaries? A team of Google engineers — led by Robert Griesemer and Bell Labs legends Ken Thompson and Rob Pike — says yes.

Go is a general-purpose programming language suitable for everything from application development to systems programming. In that sense, it’s more like C or C++ than Java or C#. But like the latter languages, Go includes modern features such as garbage collection, runtime reflection, and support for concurrency.

Equally important, Go is meant to be easy to program in. Its basic syntax is C-like, but it eliminates redundant syntax and boilerplate while streamlining operations such as object definition. The Go team’s goal was to create a language that’s as pleasant to code in as a dynamic scripting language yet offers the power of a compiled language.
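
For a taste of that economy, here is a complete program that runs two concurrent workers over a channel; a minimal sketch rather than an official example:

```go
package main

import "fmt"

// worker squares each job it receives and reports the result.
// The channel types and the go keyword below are all the syntax
// that concurrency requires; the runtime schedules everything.
func worker(id int, jobs <-chan int, results chan<- string) {
	for n := range jobs {
		results <- fmt.Sprintf("worker %d: %d squared is %d", id, n, n*n)
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan string, 5)

	// Two concurrent workers: no thread objects, no locks, and
	// variable types are inferred from the right-hand side.
	for id := 1; id <= 2; id++ {
		go worker(id, jobs, results)
	}

	for n := 1; n <= 5; n++ {
		jobs <- n
	}
	close(jobs)

	for i := 0; i < 5; i++ {
		fmt.Println(<-results)
	}
}
```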

Go is still a work in progress, and the language specification may change. That said, you can start working with it today. Google has made tools and compilers available along with copious documentation; for example, the Effective Go tutorial is a good place to learn how Go differs from earlier languages.

Experimental programming language No. 4: F#
Functional programming has long been popular with computer scientists and academia, but pure functional languages like Lisp and Haskell are often considered unworkable for real-world software development. One common complaint is that functional-style code can be difficult to integrate with code and libraries written in imperative languages like C++ and Java.

Enter F# (pronounced “F-sharp”), a Microsoft language designed to be both functional and practical. Because F# is a first-class language on the .Net Common Language Runtime (CLR), it can access all of the same libraries and features as other CLR languages, such as C# and Visual Basic.

F# code resembles OCaml somewhat, but it adds interesting syntax of its own. For example, numeric data types in F# can be assigned units of measure to aid scientific computation. F# also offers constructs to aid asynchronous I/O, CPU parallelization, and off-loading processing to the GPU.

After a long gestation period at Microsoft Research, F# now ships with Visual Studio 2010. Better still, in an unusual move, Microsoft has made the F# compiler and core library available under the Apache open source license; you can start working with it for free and even use it on Mac and Linux systems (via the Mono runtime).

Experimental programming language No. 5: Opa
Web development is too complicated. Even the simplest Web app requires countless lines of code in multiple languages: HTML and JavaScript on the client, Java or PHP on the server, SQL in the database, and so on.

Opa doesn’t replace any of these languages individually. Rather, it seeks to eliminate them all at once, by proposing an entirely new paradigm for Web programming. In an Opa application, the client-side UI, server-side logic, and database I/O are all implemented in a single language, Opa.

Opa accomplishes this through a combination of client- and server-side frameworks. The Opa compiler decides whether a given routine should run on the client, server, or both, and it outputs code accordingly. For client-side routines, it translates Opa into the appropriate JavaScript code, including AJAX calls.

Naturally, a system this integrated requires some back-end magic. Opa’s runtime environment bundles its own Web server and database management system, which can’t be replaced with stand-alone alternatives. That may be a small price to pay, however, for the ability to prototype sophisticated, data-driven Web applications in just a few dozen lines of code. Opa is open source and available now for 64-bit Linux and Mac OS X platforms, with further ports in the works.

Experimental programming language No. 6: Fantom
Should you develop your applications for Java or .Net? If you code in Fantom, you can take your pick and even switch platforms midstream. That’s because Fantom is designed from the ground up for cross-platform portability. The Fantom project includes not just a compiler that can output bytecode for either the JVM or the .Net CLI, but also a set of APIs that abstract away the Java and .Net APIs, creating an additional portability layer.

There are plans to extend Fantom’s portability even further. A Fantom-to-JavaScript compiler is already available, and future targets might include the LLVM compiler project, the Parrot VM, and Objective-C for iOS.

But portability is not Fantom’s sole raison d’être. While it remains inherently C-like, it is also meant to improve on the languages that inspired it. It tries to strike a middle ground in some of the more contentious syntax debates, such as strong versus dynamic typing, or interfaces versus classes. It adds easy syntax for declaring data structures and serializing objects. And it includes support for functional programming and concurrency built into the language.

Fantom is open source under the Academic Free License 3.0 and is available for Windows and Unix-like platforms (including Mac OS X).

Experimental programming language No. 7: Zimbu
Most programming languages borrow features and syntax from an earlier language. Zimbu takes bits and pieces from almost all of them. The brainchild of Bram Moolenaar, creator of the Vim text editor, Zimbu aims to be a fast, concise, portable, and easy-to-read language that can be used to code anything from a GUI application to an OS kernel.

Owing to its mongrel nature, Zimbu’s syntax is unique and idiosyncratic, yet feature-rich. It uses C-like expressions and operators, but its own keywords, data types, and block structures. It supports memory management, threads, and pipes.

Portability is a key concern. Although Zimbu is a compiled language, the Zimbu compiler outputs ANSI C code, so binaries can be built wherever a native C compiler is available, which is to say on nearly every platform.

Unfortunately, the Zimbu project is in its infancy. The compiler can build itself and some example programs, but not all valid Zimbu code will compile and run properly. Not all proposed features are implemented yet, and some are implemented in clumsy ways. The language specification is also expected to change over time, adding keywords, types, and syntax as necessary. Thus, documentation is spotty, too. Still, if you would like to experiment, preliminary tools are available under the Apache license.

Experimental programming language No. 8: X10
Parallel processing was once a specialized niche of software development, but with the rise of multicore CPUs and distributed computing, parallelism is going mainstream. Unfortunately, today’s programming languages aren’t keeping pace with the trend. That’s why IBM Research is developing X10, a language designed specifically for modern parallel architectures, with the goal of increasing developer productivity “times 10.”

X10 handles concurrency using the partitioned global address space (PGAS) programming model. Code and data are separated into units and distributed across one or more “places,” making it easy to scale a program from a single-threaded prototype (a single place) to multiple threads running on one or more multicore processors (multiple places) in a high-performance cluster.
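
X10 expresses this with language-level constructs, but the shape of the model can be sketched in more familiar terms. The following is a rough Go analogy, not X10 code: each goroutine stands in for one “place” that owns a partition of the data, and scaling up means raising the number of places rather than rewriting the logic:

```go
package main

import (
	"fmt"
	"sync"
)

// Each "place" owns one partition of the data and computes only
// on what it owns, loosely mirroring the PGAS model's division
// of a global address space into per-place sections.
func main() {
	data := []int{1, 2, 3, 4, 5, 6, 7, 8}
	places := 4 // set to 1 for a single-place prototype
	sums := make([]int, places)

	var wg sync.WaitGroup
	chunk := len(data) / places // assumes places divides len(data) evenly
	for p := 0; p < places; p++ {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			for _, v := range data[p*chunk : (p+1)*chunk] {
				sums[p] += v // each place touches only its own partition
			}
		}(p)
	}
	wg.Wait()

	total := 0
	for _, s := range sums {
		total += s
	}
	fmt.Println("total:", total) // prints: total: 36
}
```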

X10 code most resembles Java; in fact, the X10 runtime is available as a native executable and as class files for the JVM. The X10 compiler can output C++ or Java source code. Direct interoperability with Java is a future goal of the project.

For now, the language is evolving, yet fairly mature. The compiler and runtime are available for various platforms, including Linux, Mac OS X, and Windows. Additional tools include an Eclipse-based IDE and a debugger, all distributed under the Eclipse Public License.

Experimental programming language No. 9: haXe
Lots of languages can be used to write portable code. C compilers are available for virtually every CPU architecture, and Java bytecode will run wherever there’s a JVM. But haXe (pronounced “hex”) is more than just portable. It’s a multiplatform language that can target diverse operating environments, ranging from native binaries to interpreters and virtual machines.

Developers can write programs in haXe, then compile them into object code, JavaScript, PHP, Flash/ActionScript, or NekoVM bytecode today; additional modules for outputting C# and Java are in the works. Complementing the core language is the haXe standard library, which functions identically on every target, plus target-specific libraries to expose the unique features of each platform.

The haXe syntax is C-like, with a rich feature set. Its chief advantage is that it works around problems inherent in each of the platforms it targets. For example, haXe has strict typing where JavaScript does not; it adds generics and type inference to ActionScript; and it sidesteps the poorly designed, haphazard syntax of PHP entirely.

Although still under development, haXe is used commercially by its creator, the gaming studio Motion Twin, so it’s no toy. It’s available for Linux, Mac OS X, and Windows under a combination of open source licenses.

Experimental programming language No. 10: Chapel
In the world of high-performance computing, few names loom larger than Cray. It should come as no surprise, then, that Chapel, Cray’s first original programming language, was designed with supercomputing and clustering in mind.

Chapel is part of Cray’s Cascade Program, an ambitious high-performance computing initiative funded in part by the U.S. Defense Advanced Research Projects Agency (DARPA). Among its goals are abstracting parallel algorithms from the underlying hardware, improving their performance across architectures, and making parallel programs more portable.

Chapel’s syntax draws from numerous sources. In addition to the usual suspects (C, C++, Java), it borrows concepts from scientific programming languages such as Fortran and Matlab. Its parallel-processing features are influenced by ZPL and High-Performance Fortran, as well as earlier Cray projects.

One of Chapel’s more compelling features is its support for “multi-resolution programming,” which allows developers to prototype applications with highly abstract code and fill in details as the implementation becomes more fully defined.

Work on Chapel is ongoing. At present, it can run on Cray supercomputers and various high-performance clusters, but it’s portable to most Unix-style systems (including Mac OS X and Windows with Cygwin). The source code is available under a BSD-style open source license.