Thanks for visiting my blog - I have now moved to a new location at Nature Networks. Url: http://blogs.nature.com/fejes - Please come visit my blog there.

Wednesday, February 10, 2010

Citation for parameters for accurate genome alignment

I just saw an interesting article citation on twitter (via BioInfo - his tweet):
Frith M, Hamada M, Horton P. Parameters for accurate genome alignment. BMC Bioinformatics. 2010 February;11(1):80+
Unfortunately, it's not yet available as an early-access article... but I would imagine it will be tomorrow. Sounds like a good read!


Tuesday, August 18, 2009

new repository of second generation software

I finally have a good resource for locating second gen (next gen) sequencing analysis software. For a long time, people have just been collecting it on a single thread in the bioinformatics section of the SeqAnswers.com forum; however, the brilliant people at SeqAnswers have now spun it off into a wiki, with an easy-to-use submission form. I highly recommend you check it out, and possibly even add your own package.

http://seqanswers.com/wiki/SEQanswers


Monday, August 17, 2009

SNP Database v0.1

Good news: my SNP database seems to be in good shape, and is ready for importing SNPs. For people who are interested, you can download the Vancouver Short Read Package from SVN, and find the relevant information in
/trunk/src/transcript_analysis/SNP_Database/

There's a schema for setting up the tables and indexes, as well as applications for running imports from maq SNP calls and running a SNP caller on any form of alignment supported by FindPeaks (maq, eland, etc...).

At this point, there's no documentation on how to use the software, since writing it is the plan for this afternoon, and I'm assuming everyone who uses this already has access to a PostgreSQL database (i.e., a simple Ubuntu + PostgreSQL setup).
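For anyone who wants a feel for what the setup involves before grabbing the SVN package, here's a minimal, hypothetical sketch of creating a SNP table and index over JDBC. The host, database name, user and column layout below are all made up for illustration - the real schema lives in /trunk/src/transcript_analysis/SNP_Database/ and will differ.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SnpSchemaSketch {
    public static void main(String[] args) throws Exception {
        // Requires the PostgreSQL JDBC driver on the classpath; connection
        // settings here are placeholders - substitute your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/snp_db", "snp_user", "password");
        Statement st = conn.createStatement();
        try {
            // An illustrative table only - not the actual schema in SVN.
            st.execute("CREATE TABLE snp ("
                    + " id SERIAL PRIMARY KEY,"
                    + " chromosome TEXT NOT NULL,"
                    + " position INTEGER NOT NULL,"
                    + " reference_base CHAR(1),"
                    + " observed_base CHAR(1),"
                    + " coverage INTEGER,"
                    + " source TEXT)");   // e.g. 'maq', 'eland'
            st.execute("CREATE INDEX snp_chr_pos_idx ON snp (chromosome, position)");
        } finally {
            st.close();
            conn.close();
        }
    }
}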

But, I'm ready to start getting feature requests, requests for new SNP formats and schema changes.

For anyone who's interested in joining this project: I'm only a few hours away from having some neat toys to play with!


Saturday, August 15, 2009

What would you do with 10kbp reads?

I just caught a tweet about an article on the Pathogens blog (What can you do with 1000 base pair reads?), which is specifically about 454 reads. Personally, I'm not so interested in 454 reads - the technology is good, but I don't have access to 454 data, so it's somewhat irrelevant to me. (Not to say 1kbp reads aren't neat, but no one has volunteered to pass me 454 data in a long time...)

So, anyhow, I'm trying to think two steps ahead. 2010 is supposed to be the year that Pacific Biosciences (and other companies) release the next generation of sequencing technologies - which will undoubtedly be longer than 1kbp. (I seem to recall hearing that PacBio has 10k+ reads. UPDATE: I found a reference.) So to heck with 1kbp reads; this raises the real question: What would you do with a 10,000bp read? And, equally important, how do you work with a 10kbp read?
  • What software do you have now that can deal with 10k reads?
  • Will you align or assemble with a 10k read?
  • What experiments will you be able to do with a 10k read?
Frankly, I suspect that nothing we're currently using will work well with them - we'll all have to go back to the drawing board and rework the algorithms we use.

So, what do you think?


Wednesday, July 29, 2009

Aligner tests

You know what I'd kill for? A simple set of tests for each aligner available. I have no idea why we didn't do this ages ago. I'm sick of off-by-one errors caused by all sorts of slightly different formats available - and I can't do unit tests without a good simple demonstration file for each aligner type.

I know the SAM format should help with this - assuming everyone adopts it - but even for SAM I don't have a good control file.

I've asked someone here to set up this test using a known sequence - and if it works, I'll bundle the results into the Vancouver Package so everyone can use it.

Here's the 50-mer I picked to do the test. For those of you with some knowledge of cancer, it comes from TP53. It appears to BLAST uniquely to this location only.
>forward - chr17:7,519,148-7,519,197
CATGTGCTGTGACTGCTTGTAGATGGCCATGGCGCGGACGCGGGTGCCGG

>reverse - chr17:7,519,148-7,519,197
ccggcacccgcgtccgcgccatggccatctacaagcagtcacagcacatg
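As a sanity check for anyone copying these into their own tests, here's a quick sketch (in Java, naturally) of the reverse-complement operation - the lower-case "reverse" line above should simply be the reverse complement of the forward 50-mer.

public class RevComp {
    // Return the reverse complement of a DNA sequence (ACGT/acgt only).
    static String reverseComplement(String seq) {
        StringBuilder sb = new StringBuilder(seq.length());
        for (int i = seq.length() - 1; i >= 0; i--) {
            switch (Character.toUpperCase(seq.charAt(i))) {
                case 'A': sb.append('T'); break;
                case 'C': sb.append('G'); break;
                case 'G': sb.append('C'); break;
                case 'T': sb.append('A'); break;
                default:  sb.append('N'); break;
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String forward = "CATGTGCTGTGACTGCTTGTAGATGGCCATGGCGCGGACGCGGGTGCCGG";
        String reverse = "ccggcacccgcgtccgcgccatggccatctacaagcagtcacagcacatg";
        // Prints true if the two strands really are reverse complements.
        System.out.println(reverseComplement(forward).equalsIgnoreCase(reverse));
    }
}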


Friday, May 29, 2009

Science Cartoons - 2

Comic #2/5. This one, obviously, is about aligners. I've added in the copyright on the far right, this time. If I expect people to respect my copyright, I really do need to put it on there, don't I?


Thursday, April 16, 2009

Multi-match reads in ChIP-Seq

I had an interesting comment left on my blog today, which is worth taking a few minutes to write a response to:
"Hi Anthony, I just discovered your blog and it looks very interesting to me!
Since this article on Eland is now more than one year old, I was wondering
if the description at point 3 about multi matching locations is still
applicable to the Eland program in the Illumina pipeline 1.3. More in general,
would you trust the multi matching locations extracted from the multi_eland
output files to perform a repeat enrichment analysis over an experiment of
ChIP-seq? If no, why? Thank you in advance for your attention."

The first question asks about multi-matching locations - and if the point in question (point 3) applies to the Illumina Pipeline 1.3. Since point 3 was just that the older pipeline didn't provide the locations of the multi-match reads, I suppose this no longer really applies: I understand the new version of Eland does provide multi-match alignment information, as do other aligners such as Bowtie. However, I should also mention that since I adopted Maq as my preferred aligner, I haven't used Eland much - so it's hard for me to give an informed opinion on the quality of the matches. I simply don't know if they're any good, and I won't belabour that point. I have used Bowtie specifically because it was able to do multi-matches, but we didn't use it for ChIP-Seq, and the multi-matches had other uses in that experiment.

So, the more interesting question is whether I'd use multi-match reads in a ChIP-Seq analysis. And, off hand, my answer has to be no. But let me explain my reasoning, and the conditions in which I would change that answer.

First, let's assume that we have Single End Tags, so the multi-match information is not resolvable. That means any time we have a read that maps to more than one location, we either map it to its source - or we map it incorrectly. At best, a 50% chance of "getting it right." The greater the number of multi-match locations, the smaller the chance we're actually finding its correct origin. So, at best we've got a 50-50 chance that we're not adversely affecting the outcome of the experiment. That's not great.

That said, there are things we could do to make them usable. The most widely used method in FindPeaks is the weighted fragment distribution type, so we could extend that principle and weight each fragment according to its number of candidate sites. That would be... bearable. But would it significantly add to the quality of the alignment?

I'm still going to say no. Fragments we see in ChIP-Seq experiments tend to fall within 200-300bp of the regions in which the transcription factor (or other element) binds. Thus, even if we were concerned that a particular transcription factor binds primarily to similar motif regions at two sites, there should be more than enough (unique) sequence around each site (which is usually <30-40bp in length) to which you'll still see fragments aligning. That should compensate for the loss of the multi-match fragments.

Even more importantly, as read lengths increase, the amount of non-unique sequence decreases rapidly, making the shrinking number of multi-match reads less important.

The same argument can be extended for paired end tags: just as longer reads reduce the number of multi-match sites, more of the multi-match reads will be resolved by pairing them with a second read, which is unlikely to be within the same repeat region, thus reducing the number of reads that remain unresolvable multi-matches. Proportionally, one would then expect the discarded reads to become a smaller and smaller segment of the population, and we'd have to worry less and less about their contribution.

So, then, when would I want them?

Well, on the odd chance that you're working with very short reads, can pull off the weighting properly, have single end tags - and the multi-match reads make up a significant proportion of your data - then it's worth exploring.
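To be concrete about what I mean by weighting, here's a rough sketch (an illustration only, not FindPeaks code) of how a multi-match read could contribute to coverage: each of its n candidate locations gets a weight of 1/n, so the read still contributes exactly one fragment's worth of signal in total.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WeightedCoverage {
    // Coverage keyed by "chromosome:start": a uniquely-mapped read adds 1.0,
    // while a read with n candidate locations adds 1/n at each of them.
    static void addRead(Map<String, Double> coverage, List<String> locations) {
        double weight = 1.0 / locations.size();
        for (String loc : locations) {
            Double current = coverage.get(loc);
            coverage.put(loc, (current == null ? 0.0 : current) + weight);
        }
    }

    public static void main(String[] args) {
        Map<String, Double> coverage = new HashMap<String, Double>();
        addRead(coverage, Arrays.asList("chr1:1000"));                 // unique read
        addRead(coverage, Arrays.asList("chr1:1000", "chr9:52000"));   // 2-way multi-match
        System.out.println(coverage); // chr1:1000 -> 1.5, chr9:52000 -> 0.5
    }
}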

You'd need to start asking the tough questions: did the aligner simply find that a small k-mer of the read aligned to multiple locations (and was then unable to resolve the tie by extension, the way some Eland aligners work)? Does the aligner use quality scores to identify mis-alignments? How reliable are the alignments (what's their error rate)? What was your sample, and how divergent is it from the reference? (e.g., cancer samples have a high variation rate, and so encourage many false alignments, making the alignments less reliable.)

Overall, I really don't see too many cases where you're going to gain a lot by digging in the multi-match files. That's not to say that you won't find anything good in there - you probably would, if you knew where to look, but the signal-to-noise ratio is going to be pretty poor - just by definition of the fact that they're multi-match reads. You'll just have to ask if it's worth your time.

For the moment, I don't think my time (even at grad student wages) is worth it. It's just not low hanging fruit, when it comes to ChIP-Seq.


Friday, March 20, 2009

Universal format converter for aligned reads

Last night, I was working on FindPeaks when I realized what an interesting treasure trove of libraries I was really sitting on. I have readers and writers for many of the most common aligned read formats, and I have several programs that do useful functions. That raised the distinctly interesting point that all of them could be tied together in one shot... and so I did exactly that.

I now have an interesting set of utilities that can be used to convert from one file format to another: bed, gff, eland, extended eland, MAQ .map (read only), mapview, bowtie.... and several other more obscure formats.

For the moment, the "conversion utility" forces the output to bed file format (since that's the file type with the least information, and I don't have to worry about unexpected file information loss), which can then be viewed with the UCSC browser, or interpreted by FindPeaks to generate wig files. (BED files are really the lowest common denominator of aligned information.) But why stop there?

Why not add a very simple functionality that lets one format be converted to another? Actually, there's no good reason not to, but it does involve some heavy caveats. Conversion from one format type to another is relatively trivial until you hit the quality strings. Since these aren't being scaled or altered, you could end up with some rather bizarre conversions unless they're handled cleanly. Unfortunately, doing this scaling is such a moving target that it's just not possible to keep up with that and do all the other development work I have on my plate. (I think I'll be asking for a co-op student for the summer to help out.)
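Setting the quality-string problem aside, the core of the conversion is pretty simple: each format-specific reader just has to produce a minimal aligned-read record, and writing BED from that record is trivial. A rough illustration follows (this is a sketch of the idea, not the actual Vancouver Package code; the class and field names are my own).

public class BedConversionSketch {

    // The minimal record every format-specific reader would fill in.
    static class AlignedRead {
        String chromosome;
        int start;      // zero-based start, as BED expects
        int end;        // exclusive end
        String name;
        char strand;    // '+' or '-'

        AlignedRead(String chromosome, int start, int end, String name, char strand) {
            this.chromosome = chromosome;
            this.start = start;
            this.end = end;
            this.name = name;
            this.strand = strand;
        }
    }

    // BED6: chrom, start, end, name, score, strand.
    static String toBedLine(AlignedRead r) {
        return r.chromosome + "\t" + r.start + "\t" + r.end + "\t"
                + r.name + "\t0\t" + r.strand;
    }

    public static void main(String[] args) {
        AlignedRead read = new AlignedRead("chr17", 7519147, 7519197, "read_1", '+');
        System.out.println(toBedLine(read));
    }
}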

Anyhow, I'll be including this nifty utility in my new tags. Hopefully people will find the upgraded conversion utility to be helpful to them. (=


Friday, January 9, 2009

No More Maq?

Another grad student at the GSC forwarded an email to our mailing list the other day, which was in turn from the maq-help mailing list. Unfortunately, the link on the maq-help mailing list takes you to another page, which incidentally (and erroneously) complains that FindPeaks doesn't work with Maq .map files - which it does. Instead, I suggest checking out this post on SeqAnswers from Li Heng, the creator of Maq, which has a very similar message.

The main gist of it is that the .map file format will be deprecated, and there will be no new versions of the Maq software package in the future. Instead, they will be working on two other projects (from the forwarded email):
  1. Samtools: replaces maq's (reference-based) "assembly"
  2. bwa: replaces maq's "mapping" for whole human genome alignment.
I suppose it means that eventually FindPeaks should support the Samtools formats, which I'll have to look into at some point. For those of you who are still using Maq, you may need to start following those projects as well, simply because it raises the question of long-term Maq support. As with many early generation Bioinformatics tools, we'll just have to be patient and watch how the software landscape evolves.

It probably also means that I'll have to start watching the Samtools development more carefully for use with my thesis project - many of the tools they are planning seem to replace the ones I've already developed in the Vancouver Short Read Alignment Package. Eventually, I'll have to evaluate both sets against each other. (That could also be an interesting project.)

While this was news to me, it's probably no more than the expected churn of a young technology field. I'm sure it's not going to be long until even the 2nd generation sequencing machines themselves evolve into something else.


Thursday, January 8, 2009

The Future of FindPeaks

At the end of my committee meeting, last month, my advisors suggested I spend less time on engineering questions, and more time on the biology of the research I'm working on. Since that means spending more time on the cancer biology project, and less on FindPeaks, I've been spending some time thinking about how I want to proceed forward - and I think the answer is to work smarter on FindPeaks. (No, I'm not dropping FindPeaks development. It's just too much fun.)

For me, the amusing part of it is that FindPeaks is already on its 4th major structural iteration. Matthew Bainbridge wrote the first, I duplicated it by re-writing its code for the second version, then came the first round of major upgrades in version 3.1, and then I did the massive cleanup that resulted in the 3.2 branch. After all that, why would I want to write another version?

Somewhere along the line, I've realized that there are several major engineering things that could be done that would make FindPeaks faster, more versatile and able to provide more insight into the biology of ChIP-Seq and similar experiments. Most of the changes are a reflection of the fact that the underlying aligners that are being used have changed. When I first got involved we were using Eland 0.3 (?), which was simple compared to the tools we now have available. It just aligned each fragment individually and spit out the results, which left the filtering and sorting up to FindPeaks. Thus, early versions of FindPeaks were centred on those basic operations. As we moved to sorted formats like .map and _sorted.txt files, those issues have mostly disappeared, allowing more emphasis to be placed on the statistics and functionality.

At this point, I think we're coming to the next generation of biology problems - integrating FindPeaks into the wider toolset - and generating real knowledge about what's going on in the genome, and I think it's time for FindPeaks to evolve to fill that role, growing out to better use the information available in the sorted aligner results.

Ever since the end of my exam, I haven't been able to stop thinking of neat applications for FindPeaks and the rest of my tool kit - so, even if I end up focussing on the cancer biology that I've got in front of me, I'm still going to find the time to work on FindPeaks, to better take advantage of the information that FindPeaks isn't currently using.

I guess that desire to do things well, and to get at the answers that are hidden in the data is what drives us all to do science. And probably what drives grad students to work late into the night on their projects.... I think I see a few more late nights in the near future. (-;


Monday, November 10, 2008

Bowtie and Single End Mapped Paired End Data

Strange title for a posting, but it actually makes sense, if you think about it.

One of the researchers here has been working on an assembly of a reasonably small fungal genome, using short read technology with which I've been somewhat involved. It's been an educational project for me, so I'm glad I had the opportunity to contribute. One of the key elements of the paper, however, was the use of Paired End Tags (PET) from an early Illumina machine to assess the quality of the assembly.

Unfortunately, an early run of the software I'd written to analyze the Maq alignments of the paired ends to the assemblies had a bug, which made it look like the data supported the theory quite nicely - but alas, it was just a bug. The bug was fixed a few weeks ago, and a re-run of the data turned up something interesting:

If you use Maq alignments, you're biasing your alignments towards the Smith-Waterman aligned sequences, which are not an independent measure of the quality of the assembly. Not shocking? I know - it's pretty obvious.

Unfortunately, we hadn't put it in context beforehand, so we had to find another way of doing this. We wanted to get a fairly exhaustive set of alignments for each short read - and we wanted to get it quickly, so we turned to Bowtie. While I wasn't the one running the alignments, I have to admit, I'm impressed with the quick performance of the aligner. Multi-matches work well, the file format is intuitive - similar to an Eland file - and the quality of the information seems to be good. (The lack of a good scoring mechanism is a problem, but wasn't vital for this project.)

Anyhow, by performing a Single End Tag style alignment on PET data, while retaining multimatches, we were able to remove the biases of the aligners, and perform the pairings ourselves - learning more about the underlying assembly to which we were aligning. This isn't something you'd want to do often, unless you're assembling genomes fairly regularly, but it's quite cool.

Alas, for the paper, I don't think this data was good enough quality - there may not be a good story in the results that will replace the one we thought we had when the data was first run... but there were other good results in the data, and the journey was worth the trip, even if the destination wasn't all we'd hoped it would be.

As a sidenote, my code was run on an otherwise unused 16-way box, and managed to hit 1592% CPU usage at one point, while taking up 46Gb of RAM. (I should have processed each lane of reads separately, but didn't.) That's the first time I've seen CPU usage that high on my code. My previous record was 1499%. I should note that I only observe this scaling during the input processing - so the overall speed-up was only 4x.

The code is part of the Vancouver Short Read Analysis Package, called BowtieToBedFormat.java, if anyone needs a process similar to this for Bowtie reads. (There's no manual for it at the moment, but I can put one together, upon request.)


Wednesday, October 8, 2008

Maq Bug

I came across an interesting bug today, when trying to work with reads aligned by Smith-Waterman (flag = 130), in PET alignments. Indeed, I even filed a Bug Report on it.

The low down on the "bug" or "feature" (I'm not sure which it is yet), is that sequences containing deletions don't actually show the deletions - they show the straight genomic sequence at that location. The reason that I figure this is a bug instead of by design is because sequences with insertions show the insertion. So why the discrepancy?

Anyhow, the upshot of it is that I'm only able to use half of the Smith-Waterman alignments Maq produces when doing SNP calls in my software. (I can't trust that the per-base quality scores align to the correct bases with this bug.) Since other people are doing SNP calls using Maq alignments... what are they doing?


Sunday, October 5, 2008

Field Programmable Gate Arrays

Yes, I'm procrastinating again. I have two papers, two big chunks of code and a thesis proposal to write, a paper to review (it's been done but I have yet to type out my comments..), several major experiments to do and at least one poster looming on the horizon - not to mention squeezing in a couple of manuals for the Vancouver Package Software. And yet, I keep finding other stuff to work on, because it's the weekend.

So, I figured this would be a good time to touch on a topic of Field Programmable Gate Arrays or FPGAs. I've done very little research on this topic, since it's so far removed from my own core expertise, but it's a hot topic in bioinformatics, so I'd be doing a big disservice by not touching on this subject at all. However, I hope people will correct me if they spot errors.

So what is an FPGA? I'd suggest you read the wikipedia article linked above, but I'd sum it up as a chip that can be added to a computer, which has the ability to optimize the way in which information is processed, so as to accelerate a given algorithm. It's a pretty cool concept - move a particular part of an algorithm into the hardware itself to speed it up. Of course, there are disadvantages as well. Reprogramming is (was? - this may have changed) a few orders of magnitude slower than processing information, so you can't change the programming on the fly while processing data and still hope to get a speed up. Some chips can change the programming of unused sub-sections while other algorithms are running... but now we're getting really technical.

(For a very good technical discussion, I suggest this book, of which I've read a few useful paragraphs.)

Rather than discuss FPGAs, which are a cool subject on their own, I'd rather discuss their applications in Bioinformatics. As far as I know, they're not widely used for most applications at the moment. The most processor intensive bioinformatics applications, molecular modeling and drug docking, are mainly vector-based calculations, so vector chips (e.g. Graphics Processing Units - GPUs) are more applicable for them. As for the rest, CPUs have traditionally been "good enough". However, recently the following two things seem to have accelerated this potential marriage of technologies:
  1. The makers of FPGAs have been looking for applications for their products for years and have targeted bioinformatics because of its intense computer use. Heavy computer use is always considered to be a sign that more efficient processing speed is an industry need - and FPGAs appear to meet that need - on the surface.
  2. Bioinformatics was doing well with the available computers, but suddenly found itself behind the processing curve with the advent of Second Generation Sequencing (SGS). Suddenly, the amount of information being processed spiked by an order of magnitude (or more), causing the bioinformaticians to scream for more processing power and resources.
So, it was inevitable that FPGA producers would hear about the demand for more power in the field, and believe that it's the ideal market into which they should plunge. To the casual observer, Bioinformatics needs more efficiency and power, and FPGA producers are looking for a market where efficiency and power are needed! Is this a match made in heaven or what?

Actually, I contend that FPGAs are the wrong solution for several reasons.

While Second Generation Sequencing produces tons more data, the algorithms being employed haven't yet settled down. Every 4 months we pick a different aligner. Every 3 months we add a new database. Every month we produce a more efficient version of our algorithms for interpreting the data. The overhead of translating an algorithm into the hardware description necessary to use the FPGA (which seems large to me, but may not be to people more fluent in HDL) means that you'd spend a disproportionate amount of time trying to get the chips set up to process your data - which you're only going to use for a short period of time before moving on. And the gain in efficiency would probably be wiped out by the amount of effort introduced.

Furthermore, even when we do know the algorithms being used are going to stay around, a lot of our processing isn't necessarily CPU bound - but rather is I/O or memory bound. When you're trawling through 16Gb of memory, it's not necessarily obvious that adding more speed to the CPU will help. Pre-fetching and pre-caching are probably doing more to help you out than anything else bound to your CPU.

In the age of multi-CPUs, using multi-threaded programs already reduces many of the pains that plague bioinformaticians. Most of my java code is thrilled to pull 2, 3, or more processors in to work faster - without a lot of explicit multi-threading. (My record so far is 1496% cpu usage - nearly 15 processors.) I would expect that buying 16-way processors is probably more cost-efficient than buying 16 FPGAs in terms of processing data for many of the current algorithms in use.

Buying more conventional resources will probably alleviate the sudden bottleneck in compute power, rather than innovating around new solutions to solve the need. It's likely that many groups getting into the second generation genomics technologies failed to understand the processing demands of the data, and thus didn't plan adequately for the resources. This means that much of the demand for data processing is just temporary, and may even be alleviated with more efficient algorithms in the future.

So where does the FPGA fit in?

I'd contend that there are very few products out there that would benefit from FPGAs in Bioinformatics... but there are a few. Clearly, all bioinformaticians know that aligning short reads is one of those areas. Considering that a full Maq run for a flow cell from an Illumina GAII takes 14+ hours on a small cluster, that would be one area in which they'd clearly benefit.

Of course, no bioinformatician wants to have to reprogram an FPGA on the fly to utilize their work. Were I to pick a model, it would probably be to team up with an aligner group, to produce a stand alone, multi-FPGA/CPU hybrid box with 32Gb of RAM, and a 3-4 year upgrade path. Every 4 months you produce a new aligner algorithm and HDL template, and users pick up the aligner and HDL upgrade, and "flash" their computer to use the new software/hardware. This would follow the Google Appliance model: an automated box that does one task, and does it well, with the exception that hardware "upgrades" come along with the software patches. That would certainly turn a few heads.

At any rate, only time will tell. If the algorithms settle down, FPGAs may become more useful. If the FPGAs become easier to program for bioinformaticians, they may find a willing audience. If the FPGAs begin to understand the constraints of the bioinformatics groups, they may find niche applications that will truly benefit from this technology. I look forward to seeing where this goes.

Ok... now that I've gone WAY out on a limb, I think it's time to tackle a few of those tasks on my list.


Thursday, August 14, 2008

Indel Calling

William left an interesting comment yesterday, which I figured was worthy of its own post, albeit a short one. (Are any of my posts really short, tho?) His point was that everyone in the genomics field is currently paying a lot of attention to SNPs, and very little attention to INDELs (insertions and deletions). And, I have to admit - this is true. I'm not aware of anyone really trying to do indels with Solexa reads, yet. But there are very good reasons for this.

The first is a lack of tools - most aligners aren't doing gapped alignments. Well, there's Exonerate and ZOOM, and possibly BLAST, but all of those have their problems. (Exonerate's gapped mode is very slow and poorly supported, and we had a very difficult time making some versions work at all; BLAST is completely infeasible for short reads on a reasonable time scale and not guaranteed to give the right answer; and ZOOM just hasn't been released yet. If there are others, feel free to let me know.) Nothing currently available will really do a good gapped short read alignment. And, if you've noticed the key words: "short read", you're on to the real reason why no one is currently working with indels.

Yep - the reads are just too short. If you have a 36bp read, or even, say a 42bp read, you're looking at (best case) having your indel right in the middle of the sequence, giving you two 21-base sequences, one on each side of the gap. Think about that for a moment and let that settle in. How much of the genome is unique for 21bp reads, which may have 2 or more SNPs or sequencing errors? I'd venture to say it's 60% or so. With the 36 base pair read, you're looking at two 18-bp reads, which is more like 40-50% of the genome. (Please don't quote me on those numbers - they're just estimates.) And that's best case.

If your gap is closer to the end, you'll get something more like a 32bp read and a 10bp read.... and I wouldn't trust a 10bp seed to give the correct match against the genome no matter what aligner you've got - especially if it comes from the "poor" end of an Illumina sequence.
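Just to put numbers on that argument: for a read of length L with an indel after base p, you're left with anchors of p and L - p bases on either side of the gap. A trivial sketch of the arithmetic:

public class IndelAnchors {
    public static void main(String[] args) {
        int readLength = 36;
        // Anchor lengths on either side of an indel placed after base p.
        for (int p = 1; p < readLength; p++) {
            System.out.println("indel after base " + p + ": anchors of "
                    + p + " and " + (readLength - p) + " bp");
        }
        // Best case (p = 18) gives two 18-mers; an indel near the end of a
        // 36bp read leaves something like a 32bp anchor and a 4bp anchor.
    }
}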

So that leaves you with two options: use a paired end tag, or use a longer read.

Paired end tags (PET) have been around for a couple of months now. We're still trying to figure out the best way of using the technology, but it's coming. People are mostly interested in using them for other applications - gross structural abnormalities, inversions, duplications, etc - but indels will be in there. It should be a few more months before we really see some good work done with PETs in the literature. I know of a couple of neat applications already, but a lot of the difficulty was just getting a good PET aligner going. Maq is there now, and it does an excellent job, though post-processing the .map files for PET is not a lot of fun. (I do have software for it, tho, so it's definitely a tractable problem.)

Longer reads are also good. Once you get gaps with enough bases on either side to do double-seed searches, we'll get fast Indel capable aligners - and I'm sure it's coming. There are long reads being attempted this week at the GSC. I don't know anything about them, or the quality, but if they work, I'd expect to see a LOT more sequences being generated like this in the future, and a lot more attention being paid to indels.

So, I can only agree with William: we need to pay more attention to indels, but we need the technology to catch up first.

P.S. For 454 fans out there, yes, you do get longer reads, but I think you also need a lot of redundancy to show that the reads aren't artifacts. As 454 ramps up its throughput, we'll see both the Solexa and 454 platforms converge towards better data for indel studies.


Monday, August 11, 2008

ZOOM-ing along.

I've recently been having a public conversation with the people at Bioinformatics Solutions Inc about their latest project - a short read aligner called ZOOM. While I joked about them only accepting resumes in Microsoft Word format (Update: apparently that's already been taken off their web page), they do seem to have their act together in other respects. Of course, I haven't actually seen their application yet, and I have only their word to go by; however, what I am hearing seems promising. I suggest people go read that thread on SeqAnswers to get an idea of what they're offering.

Unfortunately (or maybe not), the timing is a bit off for me. I'm in the process of clearing things off my desk for my two week vacation, starting Friday, and I believe they'll be releasing the demo version of their software next week sometime. Still, I suspect I won't let that bother me too much. Between coconuts, hammocks and snorkeling, I don't think I'll have much time to think about missing a new aligner. (Just a hunch.)

Disclosure: I suppose, since people might perceive this as a conflict of interest, I should point out that I do have several ties to the company. I believe Ming Li is one of their founders, and I did (briefly) work with him and Paul Kearney for a thesis project at the University of Waterloo close to a decade ago. A relation of mine works there, as well, though I don't have any insider information, so don't ask. I don't own stock or have any interest in the company otherwise. Bioinformatics is just too small of a community in Canada not to have some connections to anyone else doing bioinformatics here. Six degrees of separation, and all.


Tuesday, July 22, 2008

SNP calling from MAQ

With that title, you're probably expecting a discussion on how MAQ calls snps, but you're not going to get it. Instead, I'm going to rant a bit, but bear with me.

Rather than just use the MAQ snp caller, I decided to write my own. Why, you might ask? Because I already had all of the code for it, my snp caller has several added functionalities that I wanted to use, and *of course*, I thought it would be easy. Was it, you might also ask? No - but not for the reasons you might expect.

I spent the last 4 days doing nothing but working on this. I thought it would be simple to just tie the elements together: I have a working .map file parser (don't get me started on platform dependent binary files!), I have a working snp caller, I even have all the code to link them together. What I was missing was all of the little tricks, particularly the ones for intron-spanning reads in transcriptome data sets, and the code that links together the "kludges" with the method I didn't know about when I started. After hacking away at it, bit by bit things began to work. Somewhere north of 150 code commits later, it all came together.

If you're wondering why it took so long, it's threefold:

1. I started off depending on someone else's method, since they came up with it. As is often the case, that person was working quickly to get results, and I don't think they had the goal of writing production quality code. Since I didn't have their code (though, honestly, I didn't ask for it either, since it was in perl, which is another rant for another day), it took a long time to settle all of the off-by-one, off-by-two and otherwise unexpected bugs. They had given me all of the clues, but there's a world of difference between being pointed in the general direction of your goal and having a GPS to navigate you there.

2. I was trying to write code that would be re-usable. That's something I'm very proud of, as most of my code is modular and easy to re-purpose in my next project. Half way through this, I gave up: the code for this snp calling is not going to be re-usable. Though, truth be told, I think I'll have to redo the whole experiment from the start at some point because I'm not fully satisfied with the method, and we won't be doing it exactly this way in the future. I just hope the change doesn't happen in the next 3 weeks.

3. Name-space errors. For some reason, every single person has a different way of addressing the 24-ish chromosomes in the human genome. (Should we include the mitochondrial genome in our own?) I find myself building functions that strip and rename chromosomes all the time, using similar rules. Is the mitochondrial genome "MT" or just "M"? What case do we use for "X" and "Y" (or is it "x" and "y"?) in our files? Should we prepend "chr" to our chromosome names? And what on earth is "chr5_random" doing as a chromosome? This is even worse when you need to compare two active indexes, plus the strings in each read... bleh.
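For what it's worth, the function I keep re-writing looks something like this - a simplified sketch of the rules, not the exact code in the Vancouver Package:

public class ChromosomeNames {
    // Normalize the various chromosome naming conventions to a single form:
    // strip a leading "chr", upper-case everything, and map M to MT.
    static String normalize(String name) {
        String n = name.trim();
        if (n.toLowerCase().startsWith("chr")) {
            n = n.substring(3);
        }
        n = n.toUpperCase();
        if (n.equals("M")) {
            n = "MT";          // pick one convention for the mitochondrial genome
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(normalize("chrX"));        // X
        System.out.println(normalize("chrM"));        // MT
        System.out.println(normalize("mt"));          // MT
        System.out.println(normalize("17"));          // 17
        System.out.println(normalize("chr5_random")); // 5_RANDOM - still a headache
    }
}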

Anyhow, I fully admit that SNP calling isn't hard to do. Once you've read all of your sequences in, determined which bases are worth keeping (prb scores), and settled on the minimum level of coverage and the minimum number of bases needed to call a SNP, there's not much left to do. I check it all against the Ensembl database to determine which ones are non-synonymous, and then: tada, you have all your SNPs.
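In concrete terms, the core of a naive caller really is just a pileup with a couple of thresholds - something along these lines (an illustration only; my actual caller also handles intron-spanning reads, the prb quality filtering and the Ensembl check):

import java.util.HashMap;
import java.util.Map;

public class NaiveSnpCall {
    // Decide whether a position looks like a SNP, given the reference base,
    // the counts of each base observed at that position, a minimum coverage,
    // and a minimum number of reads supporting the variant base.
    static boolean isSnp(char referenceBase, Map<Character, Integer> baseCounts,
                         int minCoverage, int minVariantReads) {
        int coverage = 0;
        int bestVariantCount = 0;
        for (Map.Entry<Character, Integer> e : baseCounts.entrySet()) {
            coverage += e.getValue();
            if (e.getKey() != referenceBase && e.getValue() > bestVariantCount) {
                bestVariantCount = e.getValue();
            }
        }
        return coverage >= minCoverage && bestVariantCount >= minVariantReads;
    }

    public static void main(String[] args) {
        Map<Character, Integer> counts = new HashMap<Character, Integer>();
        counts.put('A', 2);   // reference base, seen twice
        counts.put('G', 6);   // variant base, seen six times
        System.out.println(isSnp('A', counts, 5, 4)); // true
    }
}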

However, once you're done all of this, you realize that the big issue is that there are now too many SNP callers, and everyone and their pet dog is working on one. There are several now in use at the GSC: mine, at least one custom one that I'm aware of, one built into an aligner (Bad Idea(tm)) under development here, and the one tacked on to the Swiss Army knife of aligners and tools: MAQ. Do they all give different results, or is one better than another? Who knows. I look forward to finding someone who has the time to compare, but I really doubt there's much difference beyond the alignment quality.

Unfortunately, because the aligner field is still immature, there is no single file output format that's common to all aligners, so the comparison is a pain to do - which means it's probably a long way off. That, in itself, might be a good topic for an article, one day.


Friday, June 20, 2008

MAQ mapview format. - Updated

Well, I promised a quick update on the MAQ mapview format, after I wrote the interpreter for it, but there isn't much to say.

Key bits of information:
  • The file is simply a zipped text file. (Update: this file is not normally gzipped. While the .map file is zipped, the .mapview file is not normally found in the gzipped state.) You can unzip it with 'gunzip' on a linux system.
  • Reads are pre-sorted by chromosome, then position.
  • The format is 1 based, so if you're using a zero based format, you'll need to convert.
  • Starting points are for the "left end", i.e., regardless of which strand the sequence aligned to, the lowest matching coordinate is reported.
  • Sequences are not contained in this file, but if you go back to the original fastq file you can retrieve the sequence. If you do so, you will need to take the reverse complement of any read that maps to the reverse strand in order to match your fasta file sequence. Forward strand sequences will map correctly to the fasta sequence.
  • Most of the fields are not useful for any form of analysis, and what's given is mostly incomprehensible.


The best information I had was from the MAQ manpage. In a slightly more readable format:

  1. read name
  2. chromosome
  3. position
  4. strand
  5. insert size from the outer coordinates of a pair
  6. paired flag
  7. mapping quality
  8. single-end mapping quality
  9. alternative mapping quality
  10. number of mismatches of the best hit
  11. sum of qualities of mismatched bases of the best hit
  12. number of 0-mismatch hits of the first 24bp
  13. number of 1-mismatch hits of the first 24bp on the reference
  14. length of the read
  15. read sequence
  16. quality


Most strikingly, you'll notice 16 fields are listed above, while the file appears to have 14 fields. It seems not all files have the last two fields. I don't know if it's just the file I have, or if it's usually that way. (Update: Actually, there are normally 16 fields. The files I was given were generated using the mapview -B flag, which strips out some of the information - I believe the final two fields. Thus, the comments above reflect the -B flag output only! Thanks to Ryan for catching that!)
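Putting the notes above together, a parser for this format is pretty trivial - something like the following sketch, written against the 14-field -B output as I've interpreted it, with the 1-based to 0-based conversion thrown in. The example line in main is made up for illustration.

public class MapviewRecord {
    String readName;
    String chromosome;
    int zeroBasedPosition;
    char strand;
    int mappingQuality;

    // Parse one tab-delimited mapview line (the -B output I've seen has 14
    // fields; files generated without -B should also carry the read sequence
    // and quality string as fields 15 and 16).
    static MapviewRecord parse(String line) {
        String[] fields = line.split("\t");
        MapviewRecord r = new MapviewRecord();
        r.readName = fields[0];
        r.chromosome = fields[1];
        r.zeroBasedPosition = Integer.parseInt(fields[2]) - 1; // mapview is 1-based
        r.strand = fields[3].charAt(0);                        // '+' or '-'
        r.mappingQuality = Integer.parseInt(fields[6]);
        return r;
    }

    public static void main(String[] args) {
        // A made-up example line with 14 tab-separated fields in the -B style.
        String line = "read_1\tchr17\t7519148\t+\t0\t0\t60\t60\t60\t0\t0\t1\t0\t50";
        MapviewRecord r = parse(line);
        System.out.println(r.chromosome + ":" + r.zeroBasedPosition + " " + r.strand);
    }
}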

If I come across anything else that needs to be added, I'll update this entry.


Thursday, June 19, 2008

Downloads and aligners

At the Genome Science Centre, we have a platform Scientific Advisory Board meeting coming up in the near future, so a lot of people are taking the time to put together some information on what's been going on since the last meeting. As part of that, I was asked to provide some information on the FindPeaks application, since it's been relatively successful. One of those things was how many times it's been downloaded.

Surprisingly, it's not an insignificant number: FindPeaks 2.x has been downloaded 280 times, while FindPeaks 3.1.x has been downloaded 325 times. I was amazed. Admittedly, I filtered out Google and anything calling itself "spider", and didn't filter on unique IPs, but it's still more than I expected.

The other surprise to me was the institutions from which it was downloaded, which are scattered across the world. Very cool to see it's not just Vancouver people who took an interest in FindPeaks, though I somewhat suspected as much, already.

Thinking of FindPeaks, I put in one last marathon cleaning session for all my software. I'm sure I'm somewhere north of 15,000 lines of code modified in the past week. Even when you consider that some of those changes were massive search-and-replace jobs, (very few, really), or refactoring and renaming variables (many many more), it's still an impressive number for two weeks. With that done, I'm looking to see some significant gains in development speed, and in developer productivity. (I learned a LOT doing this.)

The last 122 warnings will just have to get cleaned up when I get around to it - although they're really only 4 warning types repeated many times. (The majority of them are just that the branched logic goes deeper than 5 levels, or that my objects have public variables - you can only write so many accessor functions in a week.)

Anyhow, I'm looking forward to testing out the FindPeaks 4.0 alphas starting tomorrow, and to putting some documents together on it. (And catching up on the flood of emails I received in the last 2 days.)

Finally, I'm working on a MAQ file interpreter for another project I'm working on. If I ever manage to figure out how to interpret the (nearly undocumented) files, I'll post that here. If anyone's done this already (though I didn't see anything publicly available on the web), I'd love to hear from them.

Cheers!


Monday, April 21, 2008

Eland file Format

I haven't written much over the past couple of days. I have a few things piled up that all need doing urgently... and it never fails, that's when you get sick. I spent today in bed, fighting off a cold, sore throat and fever. Wonderful combination.

Anyhow, since I'm starting to feel better, I thought I'd write a few lines before going to bed, and wanted to mention that I've finally seen a file produced by the new Eland. It's a little different, and the documentation provided (ahem, that I was able to obtain from a colleague who uses the pipeline) was pretty scarce.

In fact, much of what you see in the file is pretty obvious, with the same general concept as the previous Eland files, except this has a few caveats:

1) The library name and 4-coordinate position of the sequence are all separated by tabs in one of the files I saw, but concatenated with a separating ":" in another. I'm not sure which is the real format, but there are at least 2 formats for line identification.

2) There's a string that seems to encode the base quality scores from the prb files, but it's in a format for which I can't find any information.

3) There's a new format for mismatches within the alignment. Instead of telling you the location of the mismatch, you now get a summary of the alignment itself. If Eland could do insertions, it would work well for those too. From the document, it tells you the number of aligned bases, with letters interspersed to show the mismatches. (e.g. if you had a 32 base alignment, with a mismatched A at character 10, you'd get the string "9A22".) I also understand that upper and lower case mismatches mean something different, though I haven't probed the format too much.
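As an illustration of how I read that descriptor (my interpretation, anyway - treat it as a sketch), here's a bit of code that expands a string like "9A22" into the positions of the mismatched bases within the read:

import java.util.ArrayList;
import java.util.List;

public class ElandMismatchDescriptor {
    // Expand a descriptor like "9A22" into 1-based mismatch positions:
    // 9 matching bases, a mismatched 'A' at position 10, then 22 more matches.
    static List<Integer> mismatchPositions(String descriptor) {
        List<Integer> positions = new ArrayList<Integer>();
        int position = 0;
        int i = 0;
        while (i < descriptor.length()) {
            char c = descriptor.charAt(i);
            if (Character.isDigit(c)) {
                int start = i;
                while (i < descriptor.length() && Character.isDigit(descriptor.charAt(i))) {
                    i++;
                }
                position += Integer.parseInt(descriptor.substring(start, i));
            } else {
                position++;                 // the mismatched base itself
                positions.add(position);
                i++;
            }
        }
        return positions;
    }

    public static void main(String[] args) {
        System.out.println(mismatchPositions("9A22")); // [10]
    }
}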

So, in the discussion of formats, I understand there's some community effort around using a so-called "short read format" or SRF format. It's been adopted by Helicos, GEO, as well as several other groups.

Maybe it's time I start converting Eland formats to this as well. Wouldn't it be nice if we only had to work with one format? (If only Microsoft understood that too! - ps, don't let the community name fool you, it's well known Microsoft sponsored that site.)


Friday, March 21, 2008

Catching up....

I can't believe it's been nearly a month since my last post! I feel like I've been neglecting this a bit more than I should, but I'll try to rectify that as best I can.

For an indication of how busy I've been, I sat down to update my resume yesterday, and ended up adding 3 papers (all in submission) and two posters. That just about doubles what was in there previously in the papers section.

Anyhow, Next-generation sequencing doesn't stand still, so I thought I'd outline some of the things I want to talk about in my next posts, and set up a few pointers to other resources:

1. SeqAnswers. This aptly named forum has been around for a few months now, but has recently become more popular, and a great forum for discussing relevant Next-gen technology and sequencing methods. I'm especially happy to see the automated posts triggered by new literature on the subject, which are a great resource for those of us who are busy and forget to check for new publications ourselves.

2. There's one forum in particular that's of great interest: Benchmarking different aligners. This appears to be a well done comparison (if lightweight) that may be a good focus for other people who are interested in comparing aligners, and discussing it in a wider forum.

3. For people interested in ChIP-Seq, or Chromatin immunoprecipitation and massively parallel sequencing, I've finally gotten around to posting FindPeaks 3.1 on the web. I'd consider this release (3.1.3) an alpha release. I'd love to get more people contributing by using this application and telling me what could be improved on it, or what enhancements they'd like to see. I'm always happy to discuss new features, and can probably add most of them in with a relatively quick turn around time.

4. For people interested in assessing the quality of the whole transcriptome shotgun sequencing (WTSS), I'm about to break out a tool that should fit that purpose. If anyone is interested in giving suggestions on ways they'd like to see quality tests performed, I'd be more than happy to code those into my applications. (And yes, if you contribute to the tool, I will provide you a copy of the tool to use. Collaborations, etc, can be discussed, as well.)

5. A quick question, of which I'll post more in the future. Has anyone here managed to get Exonerate 2.0 to work in client/server mode on two separate machines?

6. I'll also post a little more about this in the future: building environments, ant and java. Why are people still doing large projects in perl?

7. One last thing I wanted to mention. I was going to write more on this topic, but eh... I'll let slashdot do it for me: The more you drink, the less you publish. Well, So much for keeping a bottle of tequila under the desk. Now I know what to get the competition for x-mas, though...

Cheers!


Tuesday, February 19, 2008

Jason's comment

I received a comment on my posting last night from Jason Stajich, which (as always) is right on the money. To quote the main theme I want to reply to:

While I am all for people writing their own software, writing your own aligner seems more difficult than figuring out how to get what you want from a published (and used format) like ACE assembly files.


That's absolutely correct. I think writing my own aligner is probably the last thing I want to do. While I have the source for at least our own internal aligner on my desktop this week (and have been doing some serious hacking on it), I really don't want to have to build my own. It's a distraction from my real research, it's a pain to support, it's going to have to compete with much larger groups doing similar things, and it's probably not a long-term viable option. That said, I still think the current state of affairs in the realm of aligners is pretty poor - though that's not to say I think I could do it better. I'll leave writing assemblers to the professionals - or at least those who have more time to do this stuff.

Still, maybe if I can offer one helpful suggestion to people writing aligners. Get together and use a standard format! I'm less inclined to test Yet Another Aligner when I have to write another filter to interpret the results. There are even currently efforts under way to create a unified format for aligned reads and short-read data. (Disclaimer: I've only read a couple handfuls of posts, and not all of it appears to be on topic.)

The formats we're familiar with (such as fasta) aren't really designed for this type of work. (It hasn't stopped several aligners from using them however - which is still better than creating their own file formats, I might add.) What's actually needed is a purpose built format.

I'm sure you can guess from this rant that I'm a big fan of stuff like the OpenDocument Format (ODF), which is levelling the playing field for word processing documents (despite Microsoft's best efforts to remain the dominant force at all costs), but even a limited (i.e. non-ISO standardized) approach could make a huge impact in this area.

Why not have a two day get together for all the people building aligners, and decide on a subset of formats? Make it versioned, so that it can change over time, and elect a maintainer of the official standard. While you're at it decide on a few conventions - what's a U0 hit? (Slider can have up to 9 SNPs and still call it a U0...) What is a read start? How do you represent stranded genomic information - which end is really the "start"?

Anyhow, I know it's wishful thinking, but hey, maybe this rant will encourage a few people to band together and come up with something to make all of our lives easier. Even something as simple as Fasta freed up thousands (millions?) of man-hours (or woman-hours) for bioinformaticians, since they no longer need a custom format library for every new program they use. Maybe it's time to do the same for aligned data.

P.S. I'll answer Jason's questions on EST/transcriptome approaches in my next post.


Monday, February 18, 2008

Aligning DNA - comments from above

I've been pretty bad about continuing my posts on how the different aligners work. It's a lot of work keeping up with them, since I seem to hear about a new one each week. However, a post-doc in my lab gave a presentation on contrasting the various aligners, to discuss each of their strengths and weaknesses for doing short (Illumina) read alignments.

Admittedly, I don't know how accurate the presenter's data was - most of the presentation was being used to set up the case for his own in-house aligner development, and thus all of the aligners were painted in a poor light, except his, of course. That being said, there's some truth to what he found: most of the aligners out there have some pretty serious issues.

Eland is still limited by its 32-base limit, which you'd think they'd have gotten over by now. For crying out loud, the company that produces it is trying to sell kits for 36-base reads. It's in their best interest to have an aligner that does more than 32 bases. (Yes, they have a new work-around in their Gerald program, but it's hardly ideal.)

MAQ, apparently, has a weird "feature" that if multiple alignments are found, it just picks one at random as the "best". Hardly ideal for most experiments.

Mosaik provides output in .ace files - which are useless for any further work, unless you want to reverse engineer converters to other, more reasonable, formats.

SOAP only aligns against the forward strand! (How hard can it be to map the reverse complement???)

Exonerate is great when run in "slow mode", at which point it's hardly usable for 40M reads, and when it's run in "fast mode", its results are hardly usable at all.

SHRiMP, I just don't know enough about to comment on.

And yes, even the post-doc's in-house aligner (called Slider) has some serious issues: it's going to miscall all SNPs, unless you're aligning fragments from the reference sequence back to itself. (That's not counting the 20 hours I've already put in to translate the thing to java proper, patching memory leaks, and the like...)

Seriously, what's with all of these aligners? Why hasn't anyone stepped up to the plate and come up with a decent open-source aligner? There have got to be hundreds of groups out there who are struggling to make these work, and not one of them is ideal for use with Illumina reads. Isn't there one research group out there dog-fooding their own Illumina sequence aligner?

At this rate, I may have to build my own. I know what they say about software, though: You can have fast, efficient or cheap - pick any two. With aligners, it seems that's exactly where we're stuck.


Wednesday, January 30, 2008

Comments on de novo assembly

First off, I'd like to say thanks to Paul and stajich for their comments. Paul for raising several points that I'd like to discuss tonight, and stajich for bringing my attention to the SHRiMP aligner. Obviously, I haven't had much chance to look at SHRiMP, yet, but I'll definitely get around to it.

So, paul mentioned several things that are worth discussing:

The Velvet roadmap for mRNA could be interesting. Sometimes the intron is snipped, sometimes it isn't, get a nice little bubble. And the similar transcripts will do... interesting things.

Short reads would be fine for de-novo if DNA was completely random. Pity that it isn't.


For the most part, I agree with Paul: using Velvet on a transcriptome would be pretty cool. Yes, you'd get bubbles, but you'd get a lot of nifty information about new transcripts, or intermediate forms of transcripts. This neatly avoids one of the major hurdles I currently face when working with transcriptomes: I can either align against a genome, or a list of known genes, and neither one really gives me much opportunity to identify new genes. (Of course, I'm working on doing just that, with a colleague of mine, but I'm saving that for another post.)

Where I disagree with Paul, however, is his final statement, that short reads would be fine for de novo assembly if they were truly random. I have two problems with that: the first is mappability (or sequenceability), and the second is the short nature of the reads themselves.

Mappability has been defined in many different ways, but for general purposes, it's the ability to unambiguously align a sequenced read to a given location on a chromosome. Fortunately, with 36-mer reads, as are produced by Illumina 1G's, something like 70-80% of the genome is mappable. Not 100% mappable, but mappable to some degree. This may seem trivial, but it's important.

Why it's important is that a genome with 95% mappability doesn't mean you get a single contig covering 95% of the genome, and a chunk of 5% of your reads that don't cover your genome. It's more like every 20-100kb you'll find something that you just can't assemble over. (I'm guesstimating that number, by the way.) This means you have lots of small contigs that have no overlap. That is, of course, assuming you had enough highly mappable reads to do a relatively error free assembly, and, also of course, assuming your sequencing errors haven't completely interfered with your assembly. I don't think either can really be taken for granted.

In reality, the mappability of a genome isn't a property you can figure out until you've sequenced it, so I don't want to discourage anyone from trying. I'm aware of people working on using Illumina reads to do this, albeit they've supplemented the reads with BAC sequencing, ESTs and various other options, which will provide a nice scaffolding. This approach allows them to augment their sequencing power by obtaining a lot of coverage through the Illumina runs, rather than having to use them as the basis of a serious de novo assembly - which seems wise to me. (And even then, they're only doing a fungus - not a higher vertebrate!)

And to return to my earlier point about the mappable genome not being 100% mappable, this is where I think Paul's point oversimplifies. Although some bases may be only 50% mappable, and are thus considered to be "mappable" in common discussion, that means that 50% of the possible sequencing reads in which they're likely to participate will not yield an unambiguous fragment! That means you can say goodbye to 50% of the likely fragments which you would need to generate an overlap spanning two forming contigs. That, to me, indicates that any de novo assembly is unlikely to correctly proceed past that base, and 50% mappability is not unusual in the genome.

The other point of Paul's that I wanted to discuss was the assertion that the non-random fragmenting of DNA is what's stopping us from doing a serious de novo assembly. While it's true that shearing DNA isn't an entirely random process, it's also true that it doesn't need to be. People have been doing restriction enzyme digests for decades now in order to map vectors and inserts. (I learned how to do it in grade 11 biology, and that was back in 1994.) So yes, while sonication or digests may not be random, what's to stop someone from stripping DNA of its histones and then doing 25 different digests? The net effect is just about the same (assuming you pick 25 good restriction enzymes with different recognition sites), and will yield fragments that get around Paul's issue. But does that mean we can now do a de novo assembly?
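
To illustrate the point, here's a toy Python sketch of that idea: the same template digested with three different enzymes breaks at different places, so the fragment ends from separate digests overlap one another. (The recognition sites are the EcoRI, HindIII and BamHI sites, if memory serves, and I'm ignoring the fact that real enzymes cut within the site and leave overhangs.)

import re

def digest(seq, site):
    # Toy digest: cut at the start of every occurrence of the
    # recognition site.
    cuts = [0] + [m.start() for m in re.finditer(site, seq)] + [len(seq)]
    return [seq[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]

# Made-up template; each digest of the same molecule gives fragments
# with different end points.
template = "TTGAATTCAAAAGCTTTTGGATCCAA"
for site in ["GAATTC", "AAGCTT", "GGATCC"]:
    print(site, digest(template, site))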

No... I don't think so. I doubt the lack of a random fragmentation pattern is the limiting factor in de novo assemblies. Unfortunately, the only way to prove myself or Paul right or wrong is to simulate it: take the mouse genome, fragment it into random 36-mers (the same size you get from an Illumina sequencing run), inject one random base for every 1000 bases read (I'll be generous and assume a 0.1% error rate, though the real thing is closer to 3-5% from what I've seen), and then try running Velvet on it.
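
If anyone wants to try it, the read-simulation half of that experiment is only a few lines of code. Here's a rough Python sketch (toy genome, substitution errors only, uniform shearing - the parameters are just the ones I suggested above); the Velvet run is left as an exercise for the reader.

import random

def simulate_reads(genome, read_len=36, coverage=40, error_rate=0.001):
    # Shear the genome into random fixed-length reads and sprinkle in
    # substitution errors at the given per-base rate. Strand, quality
    # values and non-uniform shearing are all ignored.
    n_reads = coverage * len(genome) // read_len
    reads = []
    for _ in range(n_reads):
        start = random.randrange(len(genome) - read_len + 1)
        read = list(genome[start:start + read_len])
        for i in range(read_len):
            if random.random() < error_rate:
                read[i] = random.choice("ACGT".replace(read[i], ""))
        reads.append("".join(read))
    return reads

# Toy random genome; the real experiment would use mouse and feed the
# reads to Velvet as FASTA.
toy_genome = "".join(random.choice("ACGT") for _ in range(100000))
reads = simulate_reads(toy_genome)
print(len(reads), reads[0])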

I bet you'll observe that somewhere around 40x coverage you'll start to saturate and discover that your assembly covers anywhere from 60-80% of the genome (say 5% of that is just plain wrong), and that it'll have taken you longer to do that assembly than it would have taken to just sequence the damned thing in the first place with PCR.

Anyhow, I take Paul's point to heart: we're still early on in this game, and there are a lot of neat things we can try. I'm looking forward to seeing the results of many of them. (Who has time to try them all?)

By the way, FindPeaks 3.0.1 is done, and I'll be presenting it as a poster at AGBT 2008 this week. If you're interested in ChIP-Seq/ChIP-Solexa (or, for the pedantic, ChIP-Illumina), come find me, and I'll tell you about some of its new features.

Labels: , , , , ,

Tuesday, January 22, 2008

Solexa Transcriptome Shotgun: Transcriptome Alignments vs. Genome Alignments

First off, I was up late trying to finish off one of my many projects, so I didn't get a lot of sleep. Thus, if my writing is more incoherent than usual, that's probably why. And now on with the show.

I'm jumping gears a bit today. I haven't finished my discussion about aligners - I still want to talk about Exonerate in detail, and then cover some of the other aligners in overview. (For instance, the one I found out about today, called GMAP, a Genomic Mapping and Alignment Program for mRNA and EST sequences.) Anyhow, the point is that part of the purpose of using an aligner is to align to something in particular, such as a genome or a contig, but selecting what you align your sequences back to is a big issue.

When you're re-sequencing a genome, you map back to the genome template. Yes, you're probably sequencing a different individual, so you'll find occasional sections that don't match, but most humans are ~99% identical, and you can look into SNP (single nucleotide polymorphism) databases to find out if the differences you're seeing are already commonly known changes. Of course, if you're re-sequencing Craig Venter, you don't need to worry about SNPs as much. Fortunately, most of us are sequencing more exciting genomes and so forth.

When you're sequencing a genome you've never sequenced before, you can't do mapping at all. Instead, there are various assemblers (e.g., Velvet (written by Daniel Zerbino, who is a lot of fun to hang out with at conferences, I might add...), SSAKE (written by Rene Warren, who incidentally also works at the GSC, although several floors above me), and Euler (which I'd never heard of till I googled the web page for Velvet...)). The key point: you don't need to worry about what you're mapping back to when you do de novo assembly, since you're creating your own map. I'll digress further for one brief comment: assembly from Solexa/Illumina sequences is a bad idea, because they're so short!

Moving right along, we come to the third thing people are sequencing these days: transcriptomes. (Yes, I'm ignoring cloned libraries... they're so 90's!) Transcriptomes are essentially a snapshot of the mRNA in a set of cells at a given point in time. Of course, mRNA is rather unstable, so protocols have been developed to convert mRNA to cDNA (complementary DNA), which is essentially a copy of the mRNA in DNA form. (Yes, I'm ignoring some complexity here, because it makes for a better story.) But I'm getting ahead of myself. Let's talk about the mRNA, but be aware that the sequencing is actually done on cDNA.

mRNA is an interesting beast. Unlike genomic DNA, it's a more refined creature. In eukaryotes, the mRNA is processed by the cell, which removes some non-coding segments. Once the cell removes these segments (the introns) and splices together the ones it keeps (the exons), we have a sequence of bases that no longer matches the genomic DNA sequence from which it came. Sure, there are short segments that map back very well (i.e. the exons), but if you take a small random snippet from the mRNA, there's a chance that it will overlap the boundary between two exons, which means the bases you have won't map back nicely to their genomic point of origin. That can be a serious problem.
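
Here's a tiny, completely made-up example in Python of exactly that problem: a read taken from the middle of the spliced mRNA exists in the transcript, but nowhere in the genome it came from.

# Made-up sequences and coordinates; lowercase marks the intron.
genome = "AAACCCGGGTTTacgtacgtAAATTTCCCGGG"
exons = [(0, 12), (20, 32)]          # (start, end) positions of the exons

# The mature mRNA is just the exons spliced together...
mrna = "".join(genome[start:end] for start, end in exons)

# ...so a read taken from the middle of the mRNA straddles the
# exon/exon junction and doesn't exist in the genomic sequence.
read = mrna[8:20]
print(read, read in mrna, read in genome.upper())   # prints: GTTTAAATTTCC True False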

Sure, you say, we can do a gapped alignment and try to find the two points from which this sequence originated, with one big gap in between. If you're sophisticated, you'll even know that introns usually carry signals that indicate their presence. And yes, you'd be right, we can do that. Unfortunately, for most Solexa runs you get 20,000,000+ sequences. At 10 seconds a sequence (which doesn't seem like much, really), how long would it take to do that alignment?

Too long. (20 million reads at 10 seconds each is about 2 x 10^8 seconds - more than six years on a single CPU.)

So most of the time, we don't do gapped alignments. Instead, we have two choices:
  1. Align against the genome, and throw away reads that we can't align (i.e. those that overlap intron/exon boundaries.)

  2. Align against a collection of known coding DNA sequences


Number two isn't a bad option: it already has all the introns spliced out, so you don't need to worry about missing those alignments. Unfortunately, there are several issues with this approach:
  • Do we really know all of the coding DNA sequences? For most species, probably not, but this is a great idea for animals like Drosophila. (I tried this yesterday on a fruit fly Illumina run and it worked VERY well.)

  • Many transcripts are very similar. This isn't a problem with the data, but with your alignment program. If your aligner doesn't handle multi-matches (like Eland), this will never work.

  • Many transcripts are very similar. Did I say that already? Actually, it causes more problems. How do you know which transcript was really the source of the sequence? I have several ways to get around it, but nothing foolproof yet.

  • Aligning to a transcript is hard to visualize. This is going to be one of my next projects... but while there are plenty of fantastic genome browsers out there, I'm still not aware of a good transcriptome visualization tool.


And that brings us to the conclusion: aligning a transcriptome run against a genome and aligning it against a transcriptome both have serious problems, and there really are no good solutions for this yet.

For now, all I can do is run both: they tell you very different things, but both have fantastic potential. I haven't released my code for either one, yet, but they both work, and if you contact my supervisor, he'd probably be interested in collaborating, or possibly sharing the code.

Labels: , , , , ,

Thursday, January 17, 2008

Aligning DNA - Eland

Continuing in my series of blog articles on short-read sequencing, it's time for a little bit of a discussion of the first short-sequence aligner I ever met: Eland. The problem with writing anything about Eland is that 90% of the information I have on it can't be confirmed independently. In fact, the majority of what I know comes from the header of the ELAND.cpp file. I'll admit that, while I've gone through the source code a couple of times, I haven't spent much time getting to know it well - I've processed billions of lines of Eland files, but have yet to run it myself...

Anyhow, here's what I can tell you: Eland stands for "Efficient Local Alignment of Nucleotide Data" and was written by Anthony J. Cox for Solexa Ltd. Of course, Solexa has since been bought out by Illumina, so I assume the copyright has been transferred along with the technology during that transaction.

What makes Eland so fascinating to me is that it's a direct departure from the dynamic-programming-based algorithms (like the venerable Smith-Waterman alignment), which makes it a rather interesting piece of computational work.

The basic algorithm works along these lines: a sequence of length N can be divided into four subsequences (A, B, C, and D) of equal (or nearly equal) length. Assuming there are no more than 2 errors in the read, at least two of these subsequences will be error-free, and those error-free subsequences can then be searched for in a database containing all possible subsequences from the genome of interest. Thus, you could search your database for the pairs AB and CD. Of course, you don't know in advance which subsequences are the error-free ones, so you need to try all the possible combinations.

What Eland does to speed things up is to combine these subsequences into sets of two. Thus, rather than searching for 4 independent entries in the database, it only has to search for 2 sets of two, and as long as one set matches exactly, looser matching criteria (i.e. allowing mismatches) can be used on the other two. That is to say, if you make two sequences out of the 4 subsequences, {AB} and {CD}, you can search for an exact match for {AB} and then test the results for mismatches in the {CD} portion of the sequence. This has the effect of significantly reducing the search space: only sequences containing an exact match to {AB} need to be checked.

Searching for the {AB and CD} combination would only work if the first half of your sequence has no errors. What if B and C had the errors? The simple answer is to shuffle your subsequences to make other combinations: {AB and CD}, {AC and BD}, {AD and BC}, {BA and CD}, and so on. This still provides you with a relatively small search space for each sequence, as there are only 4! possible combinations (which is 4x3x2x1 = 24 possible sequences) to search for.

Fortunately, you can bound this even further. You always know that you want combinations in which the first pair and the second pair are in the correct order (i.e. {AB and CD} and {AC and BD} are OK, but {CA and DB} and {BA and DC} would give you an incorrect result), limiting you to only six possible combinations - which still allows you to find any correct match where at least two of the four subsequences are error-free.

That, in general, is how it works... with a few nifty shortcuts. For instance, they only need to pass over the genome search space three times to find all possible matches with up to two errors, because they can search for several subsequence combinations simultaneously and efficiently. In other words, you get a very quick alignment.
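
Since the Eland code itself isn't public, take this with a grain of salt: the following Python sketch is definitely not Eland's implementation, just an illustration of the pigeonhole/seed-pair idea described above. It indexes the genome by the six in-order pairs of subsequences, looks up a read's pairs, and verifies candidate positions with up to two mismatches.

from itertools import combinations
import random

READ_LEN = 32              # Eland's length limit (see the restrictions below)
PART = READ_LEN // 4       # the four subsequences A, B, C and D are 8 bp each

def parts(seq):
    # Split a 32-mer into its four subsequences.
    return [seq[i * PART:(i + 1) * PART] for i in range(4)]

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_index(genome):
    # Key every genome position by its six in-order pairs of subsequences.
    # With at most two errors in a read, at least two of the four
    # subsequences are clean, so at least one of the six pairs must match
    # the genome exactly - the pigeonhole argument from above.
    index = {}
    for pos in range(len(genome) - READ_LEN + 1):
        p = parts(genome[pos:pos + READ_LEN])
        for i, j in combinations(range(4), 2):
            index.setdefault((i, j, p[i], p[j]), []).append(pos)
    return index

def align(read, genome, index, max_mismatches=2):
    p = parts(read)
    hits = set()
    for i, j in combinations(range(4), 2):
        for pos in index.get((i, j, p[i], p[j]), []):
            if mismatches(read, genome[pos:pos + READ_LEN]) <= max_mismatches:
                hits.add(pos)
    return sorted(hits)

# Tiny demo: a read with two substitutions still finds its origin.
genome = "".join(random.choice("ACGT") for _ in range(2000))
read = list(genome[500:532])
read[3], read[20] = "N", "N"
print(align("".join(read), genome, build_index(genome)))   # expect [500]

The real thing packs the subsequences into integers and makes a handful of passes over the genome rather than building a giant dictionary, but the principle is the same.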

On the other hand, the way in which ELAND was implemented is very important. The Eland aligner has some severe restrictions, some of which are algorithm specific, some of which are implementation specific:

  1. The Eland aligner can only align sequences up to 32 base pairs long. (Implementation specific: they use four integers to represent the subsequences {A,B,C and D}, which reduces the memory requirements.)

  2. The Eland aligner can only align sequences with up to two mismatches. (Algorithm specific: at least two of the keys must match, although the algorithm could be expanded to use more subsequences, at the expense of expanding the sequence space.)

  3. The Eland aligner only returns sequences that match to one unique location in the genome (Implementation specific: Eland already tells you when a sequence matches to more than one location, it just doesn't tell you what those locations are.)

  4. The Eland aligner can do mismatches in the sequence, but cannot handle insertions or deletions (i.e., gapped alignments). (Algorithm specific: the algorithm for matching the keys could be expanded to include this, but it would be very costly in time/memory.)

  5. The code is not freely available. (Implementation specific: I don't know of anywhere you can go to get the code... or documentation!)

  6. Changing the length of the alignment sequence (I called it N, above), requires a recompile. (Implementation, obviously. Why this isn't a command line parameter, I don't know.)


I'm sure I could go on and list other disadvantages, but I'd like to point out the main advantage: it's FAST. Blazing fast, compared to the other aligners. From my own personal experience, Eland may only be able to map 60-70% of your sequences to the genome (compared to 70-80% for other aligners with which I have less experience), but it does so in a few hours, compared to several days or weeks for most other aligners (or years, compared to BLAST). (Comparisons are for a single processor, matching 1 million+ reads to a mammalian-sized genome.) The only other aligner that comes close to this speed is, to the best of my knowledge, SXOligoSearch, produced by Synamatix, though that beast will cost upwards of $225,000 CAD to license.

Anyhow, to close this discussion, it's worth mentioning that rumours of a new version of Eland have been circulating since March 2007. Beta versions, capable of performing searches on sequences larger than 32 base pairs in length and able to perform multi-matches, were supposed to be released in September 2007. Unfortunately, the new betas appear to be vapourware, which has prompted us to look into several other aligners, which I'll discuss in future posts.

(2008-01-28 Update: I stand corrected: I've heard that the new betas have finally arrived! I haven't seen what they can do yet, but at least we know my prediction that they're vapourware is wrong.)

Labels: , , ,

Wednesday, January 9, 2008

Aligning DNA - a quick intro to some of the tools

A long time ago, I mentioned that I'd use this blog for some science. I don't really know who my audience is, but I suspect it's 1 part people who google in for Linux-related keywords, 1 part people who are googling me because they've read my name somewhere, and 1 part random people... oh, and not to forget those people who I actually know in real life, but don't get to see often. Since I'm unlikely to scare off the googlers or the random people with some science, and the people I know in person will just skip over this posting (that's a hint, if you haven't already stopped reading...), I'll just go ahead with a quick lesson in genomics.

As I've mentioned to many people, my research is currently heavily embedded in writing code for the processing of Solexa/Illumina reads. For those who aren't in the know, the Illumina 1G is a DNA sequencing machine that can produce ~40 million DNA sequencing reactions in about 36 hours, where each sequence is 36 bases long. This is drastically different from traditional sequencing, where you get far fewer sequences (~100s), each of which can be up to 1000 bases long. With the low volume of alignments and the long reads, traditional sequencing can be processed with a rather laissez-faire attitude: if you take 10 seconds per read to align 100 reads, you're talking about 1000 seconds to do all the alignments, or roughly 15 minutes.

In contrast, with 40 million reads, at ten seconds each, you're talking about waiting 12 years for your data to be processed. Clearly this is not an option, if you want to get into a cutting edge journal.

(For those who are curious to know where the 10 second mark came from, my supervisor timed a few blast searches on a local server.)
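
Just to show the back-of-the-envelope math behind those two numbers:

# Back-of-the-envelope alignment times at 10 seconds per read.
SECONDS_PER_READ = 10
for label, n_reads in [("traditional run", 100), ("Illumina 1G run", 40000000)]:
    seconds = n_reads * SECONDS_PER_READ
    print(label, seconds / 60.0, "minutes, or about",
          seconds / (3600.0 * 24 * 365), "years")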

Consequently, computational scientists have risen to the task, and created several new algorithms for performing these alignments. The first one that I met is called "Eland", which I believe stands for "Efficient Local Alignment of Nucleotide Data". To be honest, I'm not really sure how to get Eland, as there is very little information available on the web. I've gleaned a few scraps of information: It was written by Anthony Cox, and was distributed by Solexa/Illumina. As far as I can tell, it came with the Illumina 1G machines.

The second one I met was called "Mosaik". This aligner comes from the lab of Gabor Marth at Boston College. The third was "Exonerate", followed by several others. Hands down, the best name for an aligner has got to be "MAPASS"... I'll let you ponder that for a few minutes.

Anyhow, each aligner has its advantages and disadvantages. Eland, for instance, is crippled by an algorithmic limitation that means it can only ever align the first 32 bases of a sequence, and by its inability to map a read to more than one location in a target DNA source (i.e. genome or transcriptome). On the other hand, it's one of the fastest aligners out there.

Mosaik is a bit slower, but has several nice features, and is able to handle the cases that Eland can't. On the other hand, it dumps its alignments to a file format that really isn't convenient for further processing - a product of its original use in sequence assembly.

Just to throw in one more curve, it's worth mentioning a competing proprietary piece of software. There's a company in Malaysia that has what appears to be the fastest aligner out there, with none of the limitations of Eland, and a flexible file output. Like all commercial products for the science market, they set the base price of their product outside of the reach of most consumers: I can't seem to find a good justification to get my supervisor to pay $200,000+ USD/CAD for a copy of their software. (You could hire four post-docs for a year for that... or 10 grad students.)

Anyhow, now you know who the players are. My next post on this topic will be to introduce some of them in detail, explain how they work, and then tell you how we use them.

Yes, I'm feeling ambitious: I got three pieces of software working today, two of which are likely to be in production in the next couple of weeks at the Genome Sciences Centre: one for ChIP-Seq experiments (FindPeaks 3.0) and the other for transcriptome processing for mammalian cells (yet to be named).

Labels: , , ,