Thanks for visiting my blog - I have now moved to a new location at Nature Networks. Url: http://blogs.nature.com/fejes - Please come visit my blog there.

Sunday, October 5, 2008

Field Programmable Gate Arrays

Yes, I'm procrastinating again. I have two papers, two big chunks of code and a thesis proposal to write, a paper to review (it's been done but I have yet to type out my comments..), several major experiments to do and at least one poster looming on the horizon - not to mention squeezing in a couple of manuals for the Vancouver Package Software. And yet, I keep finding other stuff to work on, because it's the weekend.

So, I figured this would be a good time to touch on the topic of Field Programmable Gate Arrays, or FPGAs. I've done very little research on this topic, since it's so far removed from my own core expertise, but it's a hot topic in bioinformatics, so I'd be doing a big disservice by not touching on it at all. However, I hope people will correct me if they spot errors.

So what is an FPGA? I'd suggest you read the wikipedia article linked above, but I'd sum it up as a chip that can be added to a computer, and which can optimize the way information is processed so as to accelerate a given algorithm. It's a pretty cool concept - move a particular part of an algorithm into the hardware itself to speed it up. Of course, there are disadvantages as well. Reprogramming is (was? - this may have changed) a few orders of magnitude slower than processing information, so you can't change the programming on the fly while processing data and still hope to get a speed-up. Some chips can reprogram unused sub-sections while other algorithms are running... but now we're getting really technical.

(For a very good technical discussion, I suggest this book, of which I've read a few useful paragraphs.)

Rather than discuss FPGAs, which are a cool subject on their own, I'd rather discuss their applications in Bioinformatics. As far as I know, they're not widely used for most applications at the moment. The most processor-intensive bioinformatics applications, molecular modeling and drug docking, are mainly vector-based calculations, so vector chips (e.g. Graphics Processing Units - GPUs) are more applicable for them. As for the rest, CPUs have traditionally been "good enough". However, two recent developments seem to have accelerated this potential marriage of technologies:
  1. The makers of FPGAs have been looking for applications for their products for years, and have targeted bioinformatics because of its intense computer use. Heavy computer use is taken as a sign that an industry needs more efficient processing - and FPGAs appear to meet that need, at least on the surface.
  2. Bioinformatics was doing well with the available computers, but found itself behind the processing curve with the advent of Second Generation Sequencing (SGS). Suddenly, the amount of information being processed spiked by an order of magnitude (or more), causing bioinformaticians to scream for more processing power and resources.
So, it was inevitable that FPGA producers would hear about the demand for more power in the field, and believe that it's the ideal market into which they should plunge. To the casual observer, Bioinformatics needs more efficiency and power, and FPGA producers are looking for a market where efficiency and power are needed! Is this a match made in heaven or what?

Actually, I contend that FPGAs are the wrong solution for several reasons.

While Second Generation Sequencing produces tons more data, the algorithms being employed haven't yet settled down. Every 4 months we pick a different aligner. Every 3 months we add a new database. Every month we produce a more efficient version of our algorithms for interpreting the data. The overhead of translating an algorithm into the hardware description needed to use the FPGA (which seems large to me, but may not be to people more fluent in HDL) means you'd spend a disproportionate amount of time getting the chips set up to process your data - which you're only going to use for a short period of time before moving on. Any gain in efficiency would probably be wiped out by the amount of effort introduced.

Furthermore, even when we do know the algorithms being used are going to stay around, a lot of our processing isn't necessarily CPU bound - but rather is I/O or memory bound. When you're trawling through 16Gb of memory, it's not obvious that adding more speed to the CPU will help. Pre-fetching and pre-caching are probably doing more to help you out than anything else bound to your CPU.
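(To make that distinction concrete, here's a rough, self-contained Java sketch - the array size and the work done are made up for illustration, not taken from any real pipeline. A sequential arithmetic pass over a big array is CPU bound, while a random walk through the same array is memory bound and gains almost nothing from a faster processor.)

import java.util.Random;

// Rough illustration of CPU-bound versus memory-bound work: the sequential
// pass keeps the processor busy, while the random "index chase" over the
// same array spends most of its time waiting on cache misses, so a faster
// CPU (or an FPGA) would barely help it.
public class BoundDemo {
    public static void main(String[] args) {
        int n = 1 << 25;               // ~32 million ints, roughly 128Mb
        int[] data = new int[n];
        Random rng = new Random(42);
        for (int i = 0; i < n; i++) {
            data[i] = rng.nextInt(n);  // each entry points to a random index
        }

        // CPU-bound: sequential arithmetic, friendly to caches and pre-fetching
        long t0 = System.currentTimeMillis();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += data[i] * 3L + 7;
        }
        long t1 = System.currentTimeMillis();

        // Memory-bound: chase indices at random, defeating the pre-fetcher
        int idx = 0;
        for (int i = 0; i < n; i++) {
            idx = data[idx];
        }
        long t2 = System.currentTimeMillis();

        System.out.println("sequential pass: " + (t1 - t0) + " ms (sum=" + sum + ")");
        System.out.println("random chase:    " + (t2 - t1) + " ms (idx=" + idx + ")");
    }
}

On most machines the random chase should take several times longer than the sequential pass, even though it does less arithmetic - and that's the gap no amount of extra compute will close.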

In the age of multi-CPU machines, using multi-threaded programs already reduces many of the pains that plague bioinformaticians. Most of my java code is thrilled to pull in 2, 3, or more processors to work faster - without a lot of explicit multi-threading. (My record so far is 1496% CPU usage - nearly 15 processors.) I would expect that buying 16-way processors is probably more cost-efficient than buying 16 FPGAs in terms of processing data for many of the current algorithms in use.
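(For the curious, here's a minimal sketch of the kind of coarse-grained threading I mean - the class name and the "chunk" are placeholders, not code from FindPeaks or any real aligner. A fixed-size thread pool spreads independent pieces of work across every core the machine offers.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal sketch of coarse-grained threading across all available cores.
// A "chunk" stands in for any independent unit of work (one lane of reads,
// one chromosome, etc.).
public class ParallelChunks {
    static void processChunk(int chunkId) {
        // stand-in for a CPU-heavy task
        double x = 0;
        for (int i = 1; i < 20000000; i++) {
            x += Math.sqrt(i);
        }
        System.out.println("chunk " + chunkId + " done (" + x + ")");
    }

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // hand each independent chunk to the pool; the JVM spreads the
        // worker threads over every processor the machine will give it
        for (int chunk = 0; chunk < 4 * cores; chunk++) {
            final int id = chunk;
            pool.submit(new Runnable() {
                public void run() { processChunk(id); }
            });
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}

Run that on an 8-core box and you should see all eight cores pegged - which is essentially all the parallelism most of our current algorithms need.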

Buying more conventional resources will probably alleviate the sudden bottleneck in compute power, without the need to innovate around new solutions. It's likely that many groups getting into the second generation genomics technologies failed to understand the processing demands of the data, and thus didn't plan adequately for the resources. This means that much of the demand for data processing is just temporary, and may even be alleviated by more efficient algorithms in the future.

So where does the FPGA fit in?

I'd contend that there are very few products out there that would benefit from FPGAs in Bioinformatics... but there are a few. Clearly, all bioinformaticians know that aligning short reads is one of those areas. Considering that a full Maq run for a flow cell from an Illumina GAII takes 14+ hours on a small cluster, that would be one area in which they'd clearly benefit.

Of course, no bioinformatician wants to have to reprogram an FPGA on the fly to make use of their software. Were I to pick a model, it would probably be to team up with an aligner group to produce a stand-alone, multi-FPGA/CPU hybrid box with 32Gb of RAM and a 3-4 year upgrade path. Every 4 months you produce a new aligner algorithm and HDL template; users pick up the aligner and HDL upgrade, and "flash" their computer to use the new software/hardware. This would follow the Google Appliance model: an automated box that does one task, and does it well, with the exception that hardware "upgrades" come along with the software patches. That would certainly turn a few heads.

At any rate, only time will tell. If the algorithms settle down, FPGAs may become more useful. If FPGAs become easier for bioinformaticians to program, they may find a willing audience. If the FPGA makers begin to understand the constraints of bioinformatics groups, they may find niche applications that will truly benefit from this technology. I look forward to seeing where this goes.

Ok... now that I've gone WAY out on a limb, I think it's time to tackle a few of those tasks on my list.


3 Comments:

Blogger cariaso said...

There are certainly apps that can benefit, but as a programmer I hate specialized hardware. I want FPGAs to be invisible optimizations of the deepest loops, not new interfaces.

But I got this nice email a few days ago.

He seems to be looking for a suitable application for an FPGA. I can't help him, since WikiLIMS is not the right layer for an FPGA, but I'm sure he'd appreciate being contacted by anyone who has more immediate needs for FPGAs.

From: saby.makai@chemistrylogic.com
To: cariaso

I am contacting you because we are exploring how FPGA-accelerated computing could enhance DNA analysis and data management. We would like to investigate the possibilities of such acceleration for WikiLIMS, which I heard about at The Providence summit.

I understand that it is an expandable platform that can have tools added. Can you please comment on how it would be possible to add new mapping tools (accelerated with an FPGA board) or apply FPGA acceleration to already existing tools.

I am writing on behalf of a consortium that consists of a university, an academic research institute, and a technology incubator, based in Hungary. We have developed a special competence in accelerating scientific computing tasks with FPGA-based computers.

October 6, 2008 1:11:00 PM PDT  
Blogger Anthony said...

Hi Cariaso,

Thanks for the note - I received a similar email from chemistrylogic as well, which was one of several independent FPGA-related things that came across my desk in the past week, prompting me to write this post.

In any case, I have a hard time imagining anyone wanting to use an FPGA for WikiLIMS - just as I can't see anyone wanting to use one for something like FindPeaks or my transcriptome code. Clearly, aligners/assemblers are one of the few places where FPGAs would be worth developing to do any heavy lifting.

Anyhow, this is a good example of marketing in the Biotech world - cast the widest net you can, and see what you can catch. Even if 99% of the responses are useless, the remaining 1% can make it a worthwhile exercise. It's the same principle behind spam - and why I keep getting emailed fliers from Invitrogen. (=

October 6, 2008 1:35:00 PM PDT  
Blogger Anthony said...

Hi Dr. Lyon,

Thanks for leaving a comment here - unfortunately, it's a little too much of a sales pitch. If you want to make a point, please feel free - and feel free to leave your contact information on the posts - but please don't use my blog to advertise your products.

Cheers,

Anthony

October 9, 2008 1:01:00 PM PDT  
