Pi of the Century

Hariharan Kolam

3/14 - Pi Day & Albert Einstein's Birthday

I’ve been dodging the Pi Day cultural phenomenon for a while now. Not that the day isn’t important; it is, and being the birthday of a very special person makes it all the more so. But I couldn’t escape the 2015 “Ultimate Pi Day” hoopla. For the uninformed and uninitiated: 3/14/15 matches more digits of pi (3.1415) than any other date in the 21st century, making 2015 a once-in-a-lifetime event. The “once in a lifetime” label usually adds to the enigma, and it gave me one more reason to take note and celebrate.

No other mathematical constant can possibly claim more fame than pi; certainly none famous enough to deserve a day of its own. It’s quite intriguing to realize that one can never exactly compute the area and circumference of a circle, because we can never fully know the value of pi. The quest for more digits of this irrational number, an endless, non-repeating sequence, is a long-standing pastime of researchers and mathematicians, with the earliest recorded algorithm dating all the way back to around 250 BC.
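
That earliest recorded algorithm is Archimedes’ polygon method: trap the circle between inscribed and circumscribed polygons, then keep doubling the number of sides so the bounds squeeze together. Here is a minimal sketch of the idea in Python (starting from hexagons around a unit circle; the function name and iteration count are mine, chosen for illustration):

```python
import math

def archimedes_pi(doublings=20):
    # Semiperimeters of polygons around a unit circle bracket pi:
    # a = circumscribed hexagon (upper bound), b = inscribed (lower bound).
    a = 2 * math.sqrt(3)   # 2*sqrt(3) ~ 3.4641
    b = 3.0                # exactly 3
    for _ in range(doublings):
        # Doubling the side count: harmonic mean, then geometric mean.
        a = 2 * a * b / (a + b)
        b = math.sqrt(a * b)
    return b, a            # lower and upper bounds on pi

lo, hi = archimedes_pi()
print(lo, "< pi <", hi)
```

Each doubling tightens the bracket by roughly a factor of four, so a couple dozen iterations already exhaust double-precision floats.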

I found it quite interesting that the methods and tools employed in estimating digits of pi provide deep insights into the advancement of mathematics and computing. The discovery of calculus, and with it the summation of infinite series, notably advanced the estimation of pi, as did the invention and evolution of computing, which pushed the envelope even further (record computations reached into the trillions of digits just last year). I was particularly intrigued by how the pi-crunching methodology has evolved in the computing era. Surveying the advancements in the quest for more digits of pi, and the tools used (computers and algorithms), reveals the top computing bottlenecks as they were perceived at each point in time.
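
To make the infinite-series leap concrete, here is a sketch of Machin’s 1706 formula, pi = 16*arctan(1/5) - 4*arctan(1/239), the kind of arctangent series that carried records from the hand-computation era into the first electronic computers. The fixed-point integer scaling below is a common implementation trick, not anything specific to those historical programs:

```python
def arctan_inv(x, digits):
    # arctan(1/x) via its Taylor series, computed in fixed-point
    # integers scaled by 10**(digits + 5) to dodge float rounding.
    scale = 10 ** (digits + 5)        # five guard digits
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x                # next power of 1/x^2
        n, sign = n + 2, -sign
        total += sign * (term // n)
    return total

def machin_pi(digits=50):
    # Machin (1706): pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return pi_scaled // 10 ** 5       # drop the guard digits

print(machin_pi(30))  # 3141592653589793238462643383279, i.e. pi * 10**30
```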

From the 1950s through the 1990s, solutions were formulated around raw compute: high-end machinery with massive cycle counts. MPP (massively parallel processing) supercomputers and custom home-built parallel computers were used, and algorithms were designed to squeeze every ounce of available CPU. The quest for more pi digits saw significant advancement, from a few thousand digits in the 1950s to several billion digits during the 1990s. The early 2000s saw algorithms evolve quite significantly, intelligently using memory and storage to optimize compute cycles, although the approach still relied on massive machines. Every record since 2009, however, has been set on computers built from commodity commercial parts.
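
The workhorse behind many of those supercomputer-era records was the Gauss-Legendre (arithmetic-geometric mean) iteration, which roughly doubles the number of correct digits on every pass and is exactly the kind of algorithm that rewards raw compute. A minimal sketch using Python’s decimal module (the precision padding and iteration count are illustrative choices):

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits=100):
    getcontext().prec = digits + 10        # guard digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    # Quadratic convergence: ~log2(digits) passes are enough.
    for _ in range(digits.bit_length()):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))
```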

Modern multi-threaded, multi-core CPU architectures are exploited by tools like y-cruncher (featured in multiple records over the past five years) to compute pi in a massively parallel way. Enhancements in GPUs add yet another rich computational resource to our disposal. It’s not surprising, then, that modern pi crunching is no longer bottlenecked on compute, but rather on data movement: memory, disk I/O, and so forth.
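
As a toy illustration of that embarrassingly parallel style (this is emphatically not how y-cruncher works internally, just a sketch of splitting one pi computation across cores), here is a midpoint-rule integration of 4/(1+x^2) over [0, 1], whose exact value is pi, distributed across worker processes:

```python
from multiprocessing import Pool

def slice_sum(args):
    # Midpoint-rule sum of 4/(1+x^2) over one slice of the grid.
    start, end, n = args
    h = 1.0 / n
    return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(start, end))

def parallel_pi(n=10_000_000, workers=4):
    # Each worker owns an independent slice; the partial sums just add up.
    slices = [(w * n // workers, (w + 1) * n // workers, n) for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(slice_sum, slices))

if __name__ == "__main__":
    print(parallel_pi())  # ~3.141592653589...
```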

A quick peek into recent algorithmic enhancements reveals exactly that: on-disk algorithms (such as on-disk multiplication) are being implemented to optimize data communication, and y-cruncher explicitly calls this out. The memory, disk I/O, and data-communication problem is also being attacked with parallelism, using clusters of commodity machines. One recent pi-crunching record used the Apache Hadoop platform to distribute MapReduce jobs across multiple machines. The key algorithmic enhancement of letting independent compute units each produce one specific piece of pi (versus every unit participating in the complete calculation of every digit) is a significant change that enables massive parallelism.
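
That “one specific piece of pi” idea rests on the Bailey-Borwein-Plouffe (BBP) formula, which produces the hexadecimal digit of pi at any position without computing the digits before it; that independence is exactly what makes the work distributable as MapReduce tasks. A minimal single-machine sketch (the record-setting Hadoop computation used a related but more elaborate scheme):

```python
def bbp_sum(j, n):
    # Fractional part of sum_{k>=0} 16^(n-k) / (8k + j).
    s = 0.0
    for k in range(n + 1):                    # head: modular exponentiation
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:                               # tail: terms shrink 16x each step
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s, k = s + term, k + 1
    return s % 1.0

def pi_hex_digit(n):
    # Hex digit of pi at 0-indexed position n after the point.
    x = (4 * bbp_sum(1, n) - 2 * bbp_sum(4, n)
         - bbp_sum(5, n) - bbp_sum(6, n)) % 1.0
    return "0123456789abcdef"[int(x * 16)]

# pi = 3.243f6a8885... in hex; each digit below is computed independently.
print("".join(pi_hex_digit(i) for i in range(10)))  # 243f6a8885
```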

The obsessive pursuit of accurate rational approximations of pi is a fascinating problem, and an important one for many reasons. It is well understood that computing advancements shift bottlenecks between compute, network, and memory, and I found it quite intriguing to survey this evolution. The problem has remained unchanged, but the evolving solutions provide an interesting perspective on not only computing advancements, but also algorithmic advancements that constantly trade scarce, contended resources for more abundant ones (CPU for memory, CPU for network, etc.). The trade-off is not surprising, yet very insightful.

It’s hard for me not to draw parallels between this and Instart Logic’s view of the world. Optimizing and adapting to bottlenecks, by dynamically detecting a contended resource and trading it for a less contended one (e.g., compute time for reduced network time), forms the basis of our Cloud-Client perspective of the world.
