HIGH-Tech: Reducing the brain RAM with Chapel programming

The following text is an excerpt from an interview with ATTO researcher Nelson Dias on the Chapel language blog. Chapel is a productive language for parallel computing at scale.

What do you do? What problems are you trying to solve?

Greenhouse gas flux measurements are essential for understanding the effect of carbon emissions on the climate, and they involve very large amounts of data.

In recent decades, fluxes of greenhouse gases (CO2, CH4, and others) have been measured over all types of ecosystems around the world. These measurements are essential for understanding the effect on the climate of carbon emission and sequestration by those ecosystems, and they involve very large amounts of data, typically wind speed components and gas concentrations measured 10 times per second. Extracting hourly and daily fluxes from the data then requires statistical processing of those datasets, and fast (compiled), easily parallelizable languages such as Chapel help streamline the calculation of those fluxes.
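To give a flavor of what such flux calculations look like in Chapel, here is a minimal, hypothetical sketch (not Nelson's actual code): an eddy-covariance flux is essentially the covariance of the vertical wind speed w and a gas concentration c over an averaging period, and Chapel's reduce expressions parallelize it across cores automatically. The array names and the 10 Hz, one-hour sample count are illustrative assumptions.

```chapel
// Hypothetical sketch: an eddy-covariance flux as the covariance of
// vertical wind speed w and gas concentration c over one averaging period.
config const n = 36000;   // e.g. one hour of 10 Hz samples (illustrative)

proc flux(w: [] real, c: [] real): real {
  // parallel reductions over the whole series
  const wMean = (+ reduce w) / w.size;
  const cMean = (+ reduce c) / c.size;
  // promoted expression: the products are computed and summed in parallel
  return (+ reduce [i in w.domain] (w[i] - wMean) * (c[i] - cMean)) / w.size;
}
```

The point of the sketch is that no explicit threads or message passing appear anywhere: the `+ reduce` expressions are the only parallel constructs needed.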

My research areas are Hydrology and Atmospheric Turbulence. I work with my students and postdocs analyzing turbulence data, and more recently (we are taking our first steps) with computational fluid dynamics. As an example of my research work using Chapel, here is an evaporation model that I created, fully developed in Chapel: it was published in Water Resources Research. The model itself, called STAEBLE, is available on GitHub. I think it is a good example of the language's qualities, such as ease of coding and general applicability.

How does Chapel help you with these problems?

Chapel allows me to use the available CPU and GPU power efficiently, and reduces the time involved in most analyses significantly. This can be done without low-level programming of message passing, data synchronization, managing threads, etc.

Turbulence datasets are very large, and their statistical processing can be quite demanding. Chapel allows me to use the available CPU and GPU power in the computer efficiently, and reduces the time involved in most analyses significantly. This can be done without low-level programming of message passing, data synchronization, managing threads, etc. (which I have not mastered). We work mainly with desktop computers with relatively high-end CPUs and GPUs.

Chapel is very good at simpler tasks as well, such as scripting. I have mostly replaced traditional scripting tools such as AWK and Python with small Chapel programs.

What is also very important is that Chapel is very good at simpler tasks as well, such as scripting. I have by now mostly replaced traditional scripting tools such as AWK and Python with small Chapel programs. This reduces the brain RAM 🙂 required to cope with different syntaxes, and streamlines my workflow.
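As an illustration of that kind of AWK-replacement scripting, here is a small, hypothetical Chapel program (not taken from the interview) that averages all numeric whitespace-separated tokens read from standard input, assuming the input contains only numbers:

```chapel
// Hypothetical AWK-style script in Chapel: average every numeric token
// on standard input. Assumes the input consists only of numeric fields.
use IO;

var total = 0.0;
var count = 0;
for line in stdin.lines() {
  for field in line.split() {   // split on whitespace, like AWK's default
    total += field : real;      // cast each token to a real number
    count += 1;
  }
}
if count > 0 then
  writeln(total / count);
```

Compiled once, a script like this runs as a single fast binary, which is the workflow simplification described above.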

Nelson doing some on-the-fly programming while on the train.

Read the full interview on the Chapel language blog.