Thanks for visiting my blog - I have now moved to a new location at Nature Networks. Url: http://blogs.nature.com/fejes - Please come visit my blog there.

Saturday, March 28, 2009

Taking control of your documents

It's always a mystery to me how bioinformaticians, who are generally steeped in computer culture, can be Microsoft users. Not that Microsoft's software is necessarily bad (although I maintain that it doesn't come with all of the tools bioinformaticians need built in, depending on what form of bioinformatics you're doing), but for those who have been immersed in the high-tech environment, Microsoft's well-documented business practices and bad-neighbour behaviour seem somewhat unenlightened. That led me to leave the MS ecosystem in search of friendlier environments nearly a decade ago.

Ever since then, I've been trying to move people away from Microsoft products and towards either the truly open Linux ecosystem or the proprietary (but less open) Apple Macintosh ecosystem. (I run 3 Linux machines and a Mac laptop at home.) As part of that move - and probably the most important part - I always suggest people take control of their documents rather than entrusting them to Microsoft.

One of the great proponents of this is Rob Weir, who has a vested interest in the process, but who is able to provide a fantastically objective perspective on the subject, in my opinion. (Microsoft employees frequently disagree.)

Anyhow, I just thought it was worth linking to a particular article of his, on that subject. Even if you don't want to move away from your Microsoft supplied word processor, he gives advice on how to keep your documents as open as possible. I highly recommend you give this article a quick read - and maybe take some of Mr. Weir's advice.

http://www.robweir.com/blog/2009/03/taking-control-of-your-documents.html


Friday, March 27, 2009

MIT Backs Free Access to Scientific Papers

I'm sure it's old news for a few people, but this was the first I've heard of it.

MIT Backs Free Access to Scientific Papers

I think this is pretty awesome. It's not just a few people, either - it's the whole university. I hope other institutions follow their lead - science should be a public venture, and free access to information is a keystone for scientists.


Thursday, March 26, 2009

TomTom has no Linux support?

I'm still procrastinating - a plumber is supposed to show up to cut a hole in my ceiling in a few minutes, basically as exploratory surgery on my new house, in order to find a leak that's developed in the pipes leading away from the washer and dryer. So, I thought I'd spend the intervening moments doing something utterly useless. I looked up TomTom's web site and took a look at what they have to offer.

If you don't know TomTom, they're a company that produces GPS units for personal and car use. They've recently shot to fame because Microsoft decided to sue them for a bunch of really pointless patents. The most interesting ones of the bunch are the ones that Microsoft seems to think are being infringed just because TomTom is using Linux.

Anyhow, this post wasn't going to be about the patents, since I already gave my opinion of that. Instead, since I'd been thinking about buying a GPS unit for a while, I thought it might be worth buying one from someone who uses embedded Linux - and I'd like to support TomTom in their fight against the Redmond Monopoly. Unfortunately - and this is the part that boggles my mind - TomTom offers absolutely zero support for people who run Linux as their computer operating system. Like many other companies, they're a Windows/Mac support only shop.

This strikes me as rather silly - all of the open source users out there would probably be interested in buying an open source GPS, and would probably be happy to support TomTom in their fight... but they've completely neglected that market. They've generated a great swelling of goodwill in many communities by standing up to Microsoft's bullying, but then completely shut that same market segment out of purchasing their products.

Well, that's some brilliant strategy right there. I only hope TomTom changes their mind at some point - since otherwise all that goodwill is just going right down the toilet...

And thinking of plumbing, again, it's time to go see about a hole in my ceiling.


Wednesday, March 25, 2009

Searching for SNPs... a disaster waiting to happen.

Well, I'm postponing my planned article because I just don't feel in the mood to work on that tonight. Instead, I figured I'd touch on something a little more important to me this evening: WTSS SNP calls. Well, as my committee members would say, they're not SNPs, they're variations or putative mutations. Technically, that makes them Single Nucleotide Variations, or SNVs. (They're only polymorphisms if they're common to a portion of the population.)

In this case, they're from cancer cell lines, so after I filter out all the real SNPs, what's left are SNVs... and they're bloody annoying. This is the second major project I've done where SNP calling has played a central role. The first was based on very early 454 data, where homopolymers were frequent, and thus finding SNVs was pretty easy: they were all over the place! After much work, it turned out that pretty much all of them were fake (false positives), and I learned to check for homopolymer runs - a simple trick, easily accomplished by visualizing the data.
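To make that trick concrete, here's a minimal Python sketch of the kind of homopolymer check I mean. This is just an illustration, not the actual pipeline code - the function name and thresholds are made up:

def near_homopolymer(reference, position, min_run=4, window=5):
    # Flag a putative SNV if it sits in or beside a run of `min_run`
    # identical bases within `window` bases of the call position.
    start = max(0, position - window)
    end = min(len(reference), position + window + 1)
    run_base, run_len = None, 0
    for base in reference[start:end]:
        if base == run_base:
            run_len += 1
        else:
            run_base, run_len = base, 1
        if run_len >= min_run:
            return True
    return False

# The G at index 10 sits beside an AAAAA run, so a call there gets flagged.
print(near_homopolymer("ACGTTAAAAAGCTTAG", 10))  # True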

We moved onto Illumina after that. Actually, it was still Solexa at the time. Yes, this is older data - nearly a year old. It wasn't particularly reliable, and I've now used several different aligners, references and otherwise, each time (I thought) improving the data. We came down to a couple of very intriguing variations, and decided to sequence them. After several rounds of primer design, we finally got one that worked... and lo and behold: 0/2. Neither of them is real. So now comes the post-mortem: why did we get the false positives this time? Is it bias from the platform? Bad alignments? Or something even more suspicious... do we have evidence of edited RNA? Who knows. The game begins all over again, in the quest to answer the question "why?" Why do we get unexpected results?

Fortunately, I'm a scientist, so that question is really something I like. I don't begrudge the last year's worth of work - which apparently is now more or less down the toilet - but I hope that the why leads to something more interesting this time. (Thank goodness I have other projects on the go, as well!)

Ah, science. Good thing I'm hooked, otherwise I'd have tossed in the towel long ago.


Tuesday, March 24, 2009

Decision time

Well, now that I've heard there's a distinct possibility I might be done with my PhD in about a year, it's time to start making some decisions. Frankly, I didn't think I'd be done that quickly - although, really, I'm not done yet. I have a lot of publications to put together, and things to make sense of before I leave, but the clock for figuring out what to do next has officially started.

I suppose all of those post-doc blogs I've been reading for the last year have influenced me somewhat: I'm going to look for a lab where I'll find a good mentor, a good environment, and a commitment to publishing and completing post-docs relatively quickly. Although that sounds simple, judging by other blogs I've been reading, it's probably not all that easy to work out. Add to that the fact that my significant other isn't interested in leaving Vancouver (and that I would prefer to stay here as well), and I think this will be a difficult process.

I do need to put together a timeline, however - and since I'm not yet entirely convinced which track I should follow (academic vs. industry), it's going to be a somewhat complex timeline. Anyhow, the point is that blogging is an excellent way to open communication channels with people you wouldn't be able to connect with in person - and the first channel I'd like to open is to ask readers if they have any suggestions.

Input at this time would be VERY welcome, both on the question of academia vs. industry, and on what I should be looking for in a good post-doc position, if that ends up being the path I go down. (=

Anyhow, just to mention, I have another blog post coming, but I'll save it for tomorrow. I'd like to comment on another series of blog posts from John Hawks and Daniel McArthur. I'm sure the whole blogosphere has heard all about the subject of training bioinformatics students from both the biology and computer science paths by now, but I feel I have something unique to say on that issue. In the meantime, I'd better get back to debugging and testing code. FindPeaks has a very cool new method of comparing different samples - and I'd like to get the testing finished. (=


Friday, March 20, 2009

Universal format converter for aligned reads

Last night, I was working on FindPeaks when I realized what an interesting treasure trove of libraries I was really sitting on. I have readers and writers for many of the most common aligned read formats, and I have several programs that perform useful functions. That raised the distinctly interesting idea that all of them could be tied together in one shot... and so I did exactly that.

I now have an interesting set of utilities that can be used to convert from one file format to another: bed, gff, eland, extended eland, MAQ .map (read only), mapview, bowtie.... and several other more obscure formats.

For the moment, the "conversion utility" forces the output to BED file format (since that's the file type with the least information, and I don't have to worry about unexpected information loss), which can then be viewed with the UCSC browser, or interpreted by FindPeaks to generate wig files. (BED files are really the lowest common denominator of aligned information.) But why stop there?

Why not add a very simple piece of functionality that lets one format be converted to another? Actually, there's no good reason not to, but it does involve some heavy caveats. Conversion from one format type to another is relatively trivial until you hit the quality strings. Since these aren't being scaled or altered, you could end up with some rather bizarre conversions unless they're handled cleanly. Unfortunately, doing this scaling is such a moving target that it's just not possible to keep up with it and do all the other development work I have on my plate. (I think I'll be asking for a co-op student for the summer to help out.)
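For what it's worth, here's a rough Python sketch of the two pieces involved - writing a BED line for one aligned read, and the Solexa-to-Phred quality rescaling that makes general conversion messy. This is just an illustration with made-up function names, not the FindPeaks code:

import math

def read_to_bed(chrom, start, read_len, name, strand):
    # Emit a 6-column BED line (0-based start, half-open end) for one aligned read.
    return "%s\t%d\t%d\t%s\t0\t%s" % (chrom, start, start + read_len, name, strand)

def solexa_to_phred(q_solexa):
    # Standard mapping from old Solexa-scaled to Phred-scaled qualities.
    # A real converter also has to handle the different ASCII offsets (+64 vs +33).
    return 10.0 * math.log10(10 ** (q_solexa / 10.0) + 1)

print(read_to_bed("chr1", 10468, 36, "read_0001", "+"))
print(round(solexa_to_phred(-5), 2), round(solexa_to_phred(20), 2))  # 1.19 20.04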

Anyhow, I'll be including this nifty utility in my new tags. Hopefully people will find the upgraded conversion utility to be helpful to them. (=


Thursday, March 19, 2009

FindPeaks 3.3... continued

Patch, compile, read bug, search code, compile, remember to patch, compile, test, find bug, realized it's the wrong bug, test, compile, test....

Although I really enjoy working on my apps, sometimes a whole day goes by where tons of changes are made and I don't really feel like I've gotten much done. I suppose it's more about the scale of things left to do than the number of tasks. I've managed to solve a few mysteries and make an impact for some people using the software, but I haven't gotten around to testing the big changes I've been working on for a few days - the different compare mechanisms for FindPeaks.

(One might then ask why I'm blogging instead of doing that testing... and that would be a very good question.)

Some quick ChIP-Seq things on my mind:
  • Samtools: there is a very complete Java/Samtools/Bamtools API that I could be integrating, but after staring at it for a while, I've realized that the complete lack of documentation on how to integrate it is really slowing the effort down. I will probably return to it next week.
  • Compare and Control: It seems people are switching to this paradigm on several other projects - I just need to get the new compare mechanism in, and then integrate it with the control handling at the same time. That will provide a really nice method for doing both at once, which is key for moving forward.
  • Eland "extended" format: I ended up reworking all of the Eland export file functions today. All of the original files I worked with were pre-sorted and pre-formatted; unfortunately, that's not how they exist in the real world. I've now updated the sort and separate-by-chromosome functions for eland ext (see the sketch after this list). I haven't done much testing on them yet, unfortunately, but that's coming up too.
  • Documentation: I'm so far behind - writing one small piece of manual a day seems like a good target - I'll try to hold myself to it. I might catch up by the end of the month, at that pace.
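Here's a rough Python sketch of what I mean by sorting and separating reads by chromosome - just the idea, not the actual Java code, and the column indices are placeholders since real eland export files have many more fields:

from collections import defaultdict

def split_and_sort(lines, chrom_col, pos_col):
    # Group aligned-read lines by chromosome, then sort each group by position.
    by_chrom = defaultdict(list)
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        by_chrom[fields[chrom_col]].append((int(fields[pos_col]), line))
    return {chrom: [l for _, l in sorted(reads)] for chrom, reads in by_chrom.items()}

# Toy three-column input: read name, chromosome, position.
lines = ["r1\tchr2\t500\n", "r2\tchr1\t300\n", "r3\tchr1\t100\n"]
print(split_and_sort(lines, chrom_col=1, pos_col=2))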
Anyhow, lots of really fun things coming up in this version of FindPeaks... I just have to keep plugging away.


Wednesday, March 18, 2009

CSCBC 2009

Someone raised the good point that I had forgotten to mention the origin of the talks I made notes on last week, which is an important point for several reasons. Although the conference is over, it was a neat little conference which deserves a little publicity. Additionally, it's now in planning for its fifth year, so it's worth mentioning just in case people are interested but weren't aware of it.

The full title of the conference is the Canadian Student Conference on Biomedical Computing, although I believe next year's title will be expanded to include Biomedical Computing and Engineering explicitly (CSCBCE 2010). This year's program can be found at http://www.cscbc2009.org/, and my notes for it can all be found under the tag of the same name.

As for why I think it was a neat conference, I suppose I have several reasons. It doesn't hurt that one of the organizers sits in the cubicle next to mine at the office, and that many of this year's organizers are friends through the bioinformatics program at UBC/SFU. But just as important (to me, anyhow), I was invited to be an industry panelist for the Saturday morning session and to help judge the bioinformatics poster session. Both of those were a lot of fun. (Oddly enough, another member of the industry panel was one of my committee members, and he suggested - in front of a room full of witnesses - that I would probably graduate in the coming year...)

Anyhow, back to the point: CSCBCE 2010 is now officially in the planning stages, and the torch has formally been passed along to the new organizers. I understand next year's conference is going to be held in May 2010 at my alma mater, the University of Waterloo, which is a beautiful campus in the spring. (I strongly concur with their decision to host it in May instead of March, by the way. Waterloo is typically a rainy, grey and bleak place in March.) And, for those of you who have never been, Waterloo now has its own airport. I'm not sure if I'll be going next year - especially if I've completed my degree by then - but if this year's attendance was any indication of where the conference is heading, it'll probably be worth checking out.


Monday, March 16, 2009

xorg.conf file for vostro 1000 using compiz in Ubuntu 9.04

I'm sure most people aren't interested in this, but I finally got my laptop (a Dell Vostro 1000) to work nicely with Compiz under Ubuntu 9.04 (Jaunty). I think the key steps were removing every fglrx package on the computer (apt-get remove fglrx*), switching to the "ati" driver in the xorg.conf, and getting the BusID right (I tried copying it from my earlier xorg.conf file, but the value seems to have changed). However, I added a lot of other things along the way, which seems to have helped the performance, so, for those who are interested, this is the Ubuntu 9.04 Jaunty alpha 5 xorg.conf file for the Vostro 1000:

Section "Device"
Identifier "Configured Video Device"
Driver "ati"
BusID "PCI:1:5:0"
Option "DRI" "true"
Option "ColorTiling" "on"
Option "EnablePageFlip" "true"
Option "AccelMethod" "EXA"
Option "RenderAccel" "true"

EndSection

Section "Monitor"
Identifier "Configured Monitor"
EndSection

Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
Defaultdepth 24
Option "AddARGBGLXVisuals" "True"
SubSection "Display"
Modes "1280x800"
EndSubSection
EndSection

Section "Module"
Load "glx"
Load "dri"
EndSection

Section "DRI"
Group "video"
Mode 0660
EndSection

Section "ServerFlags"
Option "DontZap" "false"
EndSection

Section "Extensions"
Option "Composite" "Enable"
EndSection


Friday, March 13, 2009

Dr. Michael Hallett, McGill University - Towards a systems approach to understanding the tumour microenvironment in breast cancer

Most of this talk is from 2-3 years ago. Breast cancer is now more deadly for women than lung cancer. Lifetime risk for women is 1 in 9. The two most significant risk factors: being a woman, and aging.

Treatment protocols include surgery, irradiation, hormonal therapy, chemotherapy, directed antibody therapy. Several clinical and molecular markers are now available to decide the treatment course. These also predict recurrence/survival well... but...

Many caveats: only 50% of Her2+ tumours respond to trastuzumab (Herceptin). No regimen for (Her2-, ER-, PR-) “triple negative” patients other than chemo/radiation. Many ER+ patients do not benefit from tamoxifen. 25% of lymph-node-negative patients (a less aggressive cancer) will develop micrometastatic disease and possibly recurrence (an example of under-treatment). Many other examples of under-treatment.

Microarray data brought a whole new perspective to breast cancer treatment. Created a taxonomy of breast cancer – breast cancer is at least 5 different diseases. (Luminal Subtype A, Subtype B, ERBB2+, Basal Subtype, Normal Breast-like. Left to right, best prognosis to worst prognosis.)

[background into cellular origin of each type of cell. Classification, too.]

There are now gene expression biomarker panels for breast cancer. Most of them do very well in clinical trials. Point made that we almost never find biomarkers that are single gene. Most of the time you need to look at many many genes to figure out what's going on. (“Good sign for bioinformatics”)

Microenvironment: Samples used on arrays, as above, include the microenvironment when run on arrays. We end up averaging over the tumour. (The contribution of the microenvironment is lost.) The epithelial gene expression signature “swamps out” signatures from other cell types. However, tumour cells interact successfully with their surrounding tissues.

Most therapies target epithelial cells. Genetic instability in epithelial cells leads to therapeutic resistance. Stromal cells (endothelial cells in particular) are genetically stable (i.e., non-cancerous).

Therefore, if you target the genetically stable microenvironment cells, they won't become resistant.

Method: using invasive tumours, patient selection, laser capture microdissection, RNA isolation and amplification (two rounds) -> microarray.

BIAS: bioinformatics integrative application software (a tool they've built).

LCM + Linear T7 amplification leads to 3' Bias. Nearly 48% of probes are “bad”. Very hard to pick out the quality data.

Looking at just the tumour epithelial profiles (the tumours themselves) confirmed that subtypes cluster as before. (Not new data - the breast cancer profiles we already have are basically epithelial-driven.) When you look just at the stroma (the microenvironment), you find 6 different categories, and each one of them has distinct traits, which are not the same. There is almost no agreement between endothelial and epithelial cell categorization... they are orthogonal.

Use both of these categorizations to predict even more accurate outcomes. Stroma are better at predicting outcome than the tumour type itself.

Found a “bad outcome cluster”, and then investigated each of the 163 genes that were differentially expressed between that cluster and the rest. Can use it to create a predictor. The subtypes are more difficult to work with, and become confounding effects. Used genes ordered by p-value from logistic regression. Applied a simple naive Bayes classifier and cross-validation using subsets. Identified 26 (of 163) genes as the optimal classifier set.
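[To make the recipe concrete, here's a rough Python/scikit-learn sketch of that kind of workflow - rank genes by a univariate test, keep the top 26, fit a naive Bayes classifier, and cross-validate. This is my own illustration on random data, with an ANOVA F-test standing in for the logistic-regression ranking used in the talk.]

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 163))      # 60 samples x 163 candidate genes (random stand-in data)
y = rng.integers(0, 2, size=60)     # good vs. bad outcome labels

# Select the 26 top-ranked genes inside each training fold, then fit naive Bayes.
model = make_pipeline(SelectKBest(f_classif, k=26), GaussianNB())
print(cross_val_score(model, X, y, cv=5).mean())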

“If you can't explain it to a clinician, it won't work.”

The stroma classifier is stroma-specific - it didn't work on epithelial cells - but it performs as well as or better than other predictors (new, valuable information that wasn't previously available).

Cross validation of stromal targets against other data sets: worked on 8 datasets which were on bulk tumour. It was surprising that it worked that way, even though bulk tumour is usually just bulk tumour. You can also replicate this with blood vessels from a tumour.

Returning to the biology, you find the genes represent: angiogenesis, hypoxic areas, immunosuppression.

[Skipping a few slides that say “on the verge of submission.”] Point: Linear Orderings are more informative than clustering! Things are not binary – it's a real continuum with transitions between classic clusters. (Crosstalk between activated pathways?)

In a survey (2007, Breast Cancer Research 9-R61?), almost everything that breast cancer clinicians would like research done on is bioinformatics-driven classification/organization, etc.

Aims:
  • define all relevant breast cancer signatures
  • analysis of signatures
  • focus on transcriptional signatures
  • improve quality of signatures
  • aims for better statistics/computation with signatures.

There are too many papers coming out with new signatures. Understanding breast cancer data in the literature involves a lot of grouping and teasing out of information – and avoiding noise. Signatures are heavily dependent on tissue type, etc.

Traditional pathway analysis: you always need an experiment and a control, and require rankings. If that's just two patients, that's fine; if it's a broad panel of patients, you won't know what's going on - you're now in an unsupervised setting.

There are more than 8000 patients who have had array data collected. Even outcome is difficult to interpret.

Instead, using “BreSAT” to do linear ranking instead of clustering, and try to tease out signatures.

There is an activity of a signature – clinicians have always been ordering patients, so that's what they want.

What is the optimal ordering that matches with the ordering... [sorry, missed that.] More trends show up when you do this than with hierarchical clustering. (Wnt, Hypoxia.) You can even order on two things (e.g. BRCA and Interferon) and see tremendously strong signals. Start to see dependencies between signatures.

Working on several major technologies (chip-chip, microarray, smallRNA) and more precise view of microenvironment.


Anamaria Crisan and Jing Xiang, UBC – Comparison of Hidden Markov Models and Sparse Bayesian Learning for Detection of Copy Number Alterations.

Point was to implement a C algorithm in Matlab (Pique-Regi et al., 2008). Uses sparse Bayesian Learning (SBL) and backward elimination. (Used microarray data for this experiment.)

Identifying gains, losses or neutral. (In this case, they looked at specific genes, rather than regions.) [Probably because they were using array data, not 2nd-gen sequencing.]

Novelty of algorithm: piece-wise constant (pwc) representation of breakpoints.

Assume a normal distribution of weights, formulate as an a posteriori estimate, and apply SBL. Hierarchical prior on the weights and hyperparameters....

[some stats in here] Last step is to optimize using (expectation maximization) EM algorithm.

Done in Matlab “because you can do fancy tricks with the code”, and it's easily readable. It's fast, and diagonals from matrices can be calculated quickly and easily.

Seems to take 30 seconds per chromosome.

Have to filter out noise, which may indicate false breakpoints. So, backwards elimination algorithm – measures significance of each copy number variation found, and removes insignificant points. [AH! This algorithm is very similar to sub-peak optimization in FindPeaks... Basically you drop out the points until you find and remove all points below threshold.]
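[A rough Python sketch of the backward-elimination idea as I understand it - my own illustration, not their Matlab code: repeatedly drop the candidate breakpoint whose removal costs the piecewise-constant fit the least, until every remaining breakpoint matters more than some tolerance.]

import numpy as np

def fit_error(signal, breakpoints):
    # Sum of squared errors of a piecewise-constant fit with the given breakpoints.
    edges = [0] + sorted(breakpoints) + [len(signal)]
    return sum(((signal[a:b] - signal[a:b].mean()) ** 2).sum()
               for a, b in zip(edges[:-1], edges[1:]) if b > a)

def backward_eliminate(signal, breakpoints, tolerance):
    # Remove breakpoints whose contribution to the fit is below `tolerance`.
    bps = list(breakpoints)
    while bps:
        base = fit_error(signal, bps)
        costs = [fit_error(signal, bps[:i] + bps[i + 1:]) - base for i in range(len(bps))]
        i_min = int(np.argmin(costs))
        if costs[i_min] > tolerance:
            break
        bps.pop(i_min)
    return bps

# Toy signal: a copy-number gain between positions 50 and 100, plus noise.
signal = np.concatenate([np.zeros(50), np.ones(50) * 0.8, np.zeros(50)])
signal += np.random.default_rng(1).normal(scale=0.1, size=signal.size)
print(backward_eliminate(signal, [30, 50, 100, 120], tolerance=1.0))  # keeps [50, 100]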

It's slower, but more readable than C.

Use CNAHMMer by Sohrab Shah (2006). HMM with Gaussian mixture model to assign CNA type (L,G,N). On the same data set, results were not comparable.

SBL not much faster than CNAHMMer. (Did not always follow vectorized code, however, so some improvements are possible.)

Now planning to move this to Next-Gen sequencing.

Heh.. they were working from template code with Spanish comments! Yikes!

[My comments: this is pretty cool! What else do I need to say. Spanish comments sound evil, though... geez. Ok, so I should say that all their slagging on C probably isn't that warranted.... but hey, to each their own. ]


Aria Shahingohar, UWO – Parameter estimation of Bergman's minimal model of insulin sensitivity using Genetic Algorithm.

Abnormal insulin production can lead to serious problems. Goal is to enhance the estimation of insulin sensitivity. Glucose is injected into blood at time zero, insulin is injected shortly after. Bergman has a model that describes the curves produced in this experiment.

Equations given for:
Change in plasma glucose over time = ......
Rate of insulin removal....

There are 8 parameters in this model which vary from person to person. The model is a closed loop system, and requires the partitioning of the subsystems [?] Requires good signal to noise ratio.

Use a genetic algorithm to optimize the 8 parameters.

Tested different methods: Genetic algorithms and Simplex method. Also tested various methods of optimization using subsets of information.

Used a maximum of 1000 generations in Genetic Algorithm. Population size 20-40, depending on expt. Each method tested 50 times (stochastic) to measure error for each parameter separately.
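[A toy Python sketch of the genetic-algorithm machinery described - population, selection, mutation. The model here is a simple exponential-decay stand-in, not Bergman's minimal model, and all the numbers are invented.]

import numpy as np

rng = np.random.default_rng(42)

def model(params, t):
    a, k = params
    return a * np.exp(-k * t)

t = np.linspace(0, 10, 50)
data = model([3.0, 0.4], t) + rng.normal(scale=0.05, size=t.size)   # "measured" curve

def fitness(params):
    return -((model(params, t) - data) ** 2).sum()   # higher is better

pop = rng.uniform([0.1, 0.01], [10.0, 2.0], size=(30, 2))   # population of 30 candidates
for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=20)] + rng.normal(scale=0.05, size=(20, 2))
    pop = np.vstack([parents, children])                      # mutated copies replace the rest

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated parameters:", best)   # should land near [3.0, 0.4]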

Results: GA was always better, and partitioning subsystem works better than trying to estimate all parameters at once.

Conclusion: Genetic algorithm significantly lowers error, and parameters can be estimated with only glucose and insulin measurements.

[My Comments: This was an interesting project which clearly has real-world impact. Much of it wasn't particularly well explained, though, leaving the audience to pick out the meaning. Very nice presentation, and cool concept. It would be nice to see more information on other algorithms.... ]

An audience member asked about saturation. That's another interesting topic that wasn't covered.


Harmonie Eleveld and Emilie Lalonde, Queen's University – A computational approach for the discovery of Thi1 and Thi5 regulated (thiamine repressible) genes

[Interesting – two presenters! This is their undergraduate project]

Bioinformatics looking for genes activated by thiamine, using transcription factor binding motifs. [Some biological background] Thi1 and Thi5 binding sites are being detected.

Thiamine uptake causes repression of Thi1 and Thi5.

Used upstream sequences from genes of interest. Used motif detection tools to generate a dataset of potential sites.

Looking at zinc finger TFs, so bipartite, palindromic sites. Used BioProspector, from Stanford. It did what they wanted the best.

Implemented a pattern recognition network (feed-forward), using training sets from BioProspector plus negative (random) controls. Did lots of gene sets, many trials, and tested many different parameters.

Used 3 different gene sets: (nmt1 and nmt2 gene sets from different species), (gene set from S. pombe only, 6 genes), (all gene sets, all species).

Preliminary results: used a length of 21; trained on S. pombe and S. japonicus, tested on S. octosporus.
Results seem very good for a first attempt. Evaluation with a “confusion matrix” looks very good. (Accuracy appears to be in the range of 86-95%.)
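[A hedged Python sketch of that setup as I understood it: one-hot encode fixed-length candidate sites and train a small feed-forward network against random negative controls. The sequences and the planted motif here are completely made up.]

import numpy as np
from sklearn.neural_network import MLPClassifier

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
rng = np.random.default_rng(0)

def one_hot(seq):
    # Encode a DNA string as a flat 4-column-per-base binary vector.
    x = np.zeros((len(seq), 4))
    x[np.arange(len(seq)), [BASES[b] for b in seq]] = 1
    return x.ravel()

def random_seq(n):
    return "".join(rng.choice(list("ACGT"), size=n))

# Fake 21-bp positives with a planted bipartite "motif" vs. purely random negatives.
positives = ["AACCAA" + random_seq(9) + "TTGGTT" for _ in range(50)]
negatives = [random_seq(21) for _ in range(50)]
X = np.array([one_hot(s) for s in positives + negatives])
y = np.array([1] * 50 + [0] * 50)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))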

Final testing with the neural network: Significant findings will be verified biologically, and knockout strains may be tested with microarrays.


Denny Chen Dai, SFU – An Incremental Redundancy Estimation for the Sequence Neighbourhood Boundary

Background: RNA primary and secondary structure. Working on the RNA design problem (Inverse RNA folding.) [Ah, the memories...]

Divide into sequence space and structure space. Structure space is smaller than sequence space. (Many to one relationship.)

Biology application: how does sequence mutation change the structure space?

Neighbourhood Ball : Sequences that are closely related, but fold differently. As you get closer to the edge of the ball, you find... [something?]

Method:
  • Sample n sequences with unique mapping structure
  • for each sample: search neutral sequence within inner layers, redundancy hit?
  • Compute redundancy rate p.
  • Redundancy rate distribution over Hamming layers. P will approach 1 (all structures are redundant).
The question is at what point do you saturate? Where do you find this boundary? Somewhere around 50% of sequence space. [I think??]

Summary:
  • An efficient estimation boundary – confirmed the existence of the neighbourhood ball
  • ball radius is much smaller than the sequence length.
Where is this useful?
  • Reduce computational effort for RNA design
  • naturally occurring RNA molecules: a faster redundancy growth rate suggests mutational robustness.
[My Comment: I really don't see where this is coming from.  Seems to be kind of silly, doesn't reference any of the other work in the field that I'm aware of.  (Some of the audience questions seem to agree.)  Overall, I just don't see what he's trying to do - I'm not even sure I agree with his results.  I'll have to check out his poster later to see if I can make sense of it.  Sorry for the poor notes.  ]


Connor Douglas, UBC – How User Configuration in Bioinformatics Can Facilitate “Translational Science” - A Social Science Perspective

Background is in sociology of science – currently based in centre for applied ethics.

What is civic translational science? Why is it important?

Studying pathogenomics of innate immunity in a large project, including Hancock lab, Brinkman lab, etc. GE(3)LS: Genomics, Ethics, Economics, Environment, Legal and Social issues. What are the ramifications of the knowledge? Trying to hold a mirror up to scientific practices.

Basically, studying bioinformaticians from a social science perspective!

[talking a lot about what he won't talk a lot about.... (-: ]

“Pathogenomics of Innate Immunity” (PI2). This project was required to have a GE(3)LS component, and that is what his research is.

What role does user configuration play in fostering civic translational science? What is it?

It is “iterative movements between the bench to markets to bedside”. Moving knowledge out from a lab into the wider research community.

Studying the development of the “InnateDB” tool being developed. It's open access, open source, database & suite of tools. Not just for in-house use.

Looking at what forces help move tools out into the wider community:
  • Increased “Verstehen” within the research team. (Taking into account the needs of the wider community – understanding what the user wants.)
  • limited release strategies – the more dissemination, the better
  • peer-review publication process: review not just the argument but the tool as well.
  • A continued blurring of divisions between producers and users.
And out of time....


Medical Imaging and Computer-Assisted Interventions - Dr Terry Peters, Robarts Institute, London Ontario

This talk was given as the keynote at the 2009 CSCBC (Fourth Canadian Student Conference on Biomedical Computing).

In the beginning, there were X-rays. They were the mainstay of medical imaging until the 70s; although ultrasound started in the 50s, it didn't take off for a while. MRI appeared in the 80s. Tomography arrived in 1973.

Of course, all of this required computers. [A bit of the history of computing.]

Computed Tomography. The fundamentals go back to 1917 - “the Radon transform”, which is the mathematical underpinning of CT.

Ronald Bracewell made contributions in 1956: as a radio astronomer he used this to reconstruct radio sources. He recognized the Fourier transform relation between signals and reconstruction, and developed math very similar to what's used for CT reconstruction... while working on a calculator (3 instructions/min)!

Sir Godfrey Hounsfield, Nobel prize winner in 1979. He was an engineer for EMI (the music producer!). Surprisingly, it was the profits from the Beatles' albums that funded this research.

Dr. Peters himself began working on CT in the late 1960s. “Figure out a way of measuring bone density in the forearm using ultrasound....” (in the lab of Richard Bates, 1929-1990). That approach was a total disaster, so he turned to X-rays. Everything in Dr. Bates' lab started with Fourier transforms, so his research interests gave him a natural connection with Bracewell at Stanford... The same math that Bracewell was working on made the jump to CT.

The first “object” they imaged was sheep bone – in New Zealand – what else?

The first reconstruction required 20 radiographs, a densitometer scan, a manual digitization, and 20 minutes on an IBM 360. “Pretty pictures, but they will never replace radiographs” - NZ radiologist, 1972.

In the following months, Hounsfield reported on the invention of the EMI scanner – scooping Dr. Peters' PhD project. However, there were still lots of things to work on. “If you find you're scooped, don't give up; there are plenty of problems to be solved...” “Don't be afraid to look outside your field.”

How does CT work? The central slice theorem. Take an X-ray projection and Fourier transform it; instead of inverting a matrix, you can do the whole thing in Fourier space.

Filtered Back Projection: FT -> | rho | -> Inv FT.
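[A minimal Python sketch of that “FT -> |rho| -> inverse FT” step: ramp-filtering a single projection in Fourier space. This is only the filtering kernel of filtered back projection, not a full reconstruction - my own illustration.]

import numpy as np

def ramp_filter(projection):
    # Multiply the 1-D projection by |frequency| ("rho") in Fourier space.
    freqs = np.fft.fftfreq(len(projection))
    return np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)).real

projection = np.exp(-np.linspace(-3, 3, 128) ** 2)   # a toy Gaussian density profile
print(ramp_filter(projection)[:5])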

This all led to the clinical acceptance of CT. Shows us the first CT image ever. His radiology colleagues were less than enthusiastic. However, Dr. James Ambrose in London saw the benefits of the EMI scanner. Of course, EMI thought there would only ever be a need for 6 CT machines.

First CT was just for the head. It took about 80 seconds of scanning, and about the same to recreate the image.

His first job was to “build a CT scanner”, with a budget of $20,000, in 1975-78.

In 1974: 80x80 images, 3 mm pixels, 13 mm thick slices.
In 2009: 1024x1024 images, less than 0.5 mm pixels, less than 1 mm thick slices.

What good is CT scanning? Good for scanning density. Great for bones, good for high contrast, not so good in the brain (poor contrast between white and grey matter). High spatial resolution; the tradeoff is the high cost of the radiation dose to the patient.
Used for image guidance, for modeling, and for pre-operative patients. Not used during surgery, however.

CT Angiography is one example of the power of the technique. You can use contrast dyes, and then collect images to observe many details, and reconstruct vessels. You can even look for occlusions in the heart in blood vessels.

Where is this going? Now working on robotically assisted CABG. Stereo visualization systems.

Currently working to optimize the robot tools + CT combination, to avoid improper thoracic port placement, and to optimize patient selection.

Pre-operative imaging can be used to measure distances and optimize locations of cuts. This allows the doctor to work without opening the rib cage. They can now use a laser to locate and identify where the cuts should be made, in a computer controlled manner.

NMR:

Has roots in physics and chemistry labs. NMR imaging was built on mathematical foundations similar to CT. Some “nifty tricks” can be used to make images from it. The “N” was dropped because “nuclear” wasn't politically correct.

In 1975, Paul Lauterbur presented “zeugmatography”. Magnets, water, tubes... confusing everyone! It seemed very far away from CT scanning. Most people thought he was WAY out there. He ended up sharing a Nobel Prize.

Sir Peter Mansfield in 1980 produced an MRI of a human using this method – although it didn't look much better than the first CT.

[Explanation of how NMR works – and how Fourier transforms and gradients are applied.]

MRI combines more scientific disciplines than anything else he can think of.

We are now at 35 years of MRI. Originally said that MRI would never catch on. We now generate high resolution 7 Tesla images. [Very impressive pictures]

Discussion of Quenching of the magnets... yes, boiling off the liquid helium is bad. Showing image of how a modern MRI works.

What good is MRI? Well, the best signals come from water (protons), looking at T1 and T2 relaxation times. Have good soft tissue contrast – including white and grey matter brain cells. High spatial resolution, high temporal resolution. No radiation dose, great use for image-guidance.
(As far as we can tell, the human body does not react negatively to the magnetic fields we generate.)

Can also be used for intra-operative techniques; however, everything used must be non-magnetic. Several neat MRI scanners exist for this purpose, including robots that can do MRI using just the “fringe fields” from a nearby MRI machine.

Can be used for:
  • MRA - Angiography (vascular system), 
  • MRS – Spectroscopy (images of brain and muscle metabolism)
  • fMRI – Functional magnetic resonance imaging (images of brain function)
  • PW MRI – Perfusion-Weighted imaging. (Blood flow in ischemia and stroke)
  • DW MRI – Diffusion-Weighted imaging (water flow along nerve pathways – images of nerve bundles).

fMRI: Looks at regions that demand more oxygen. Can differentiate 1% changes, and then correlate signal intensity with some task (recognition, or functional). Can be used to avoid critical areas during surgery.

Diffusion Tensor: looks at the diffusion of water, resulting in technique of “Tractography”, which can be used to identify all of the nerve pathways, which can then be avoided during surgery.

There are applications for helping to avoid the symptoms of Parkinson's. Mapped hundreds of patients to find best location, and now can use this information to tell them exactly where to place the electrodes in new patients.

[Showing an image in which they use X windows for their computer imaging – go *nix.]

Two minutes of Ultrasound: [How it works.] Typical sonar, and then reconstruct. “Reflections along line of sight.” Now, each ultrasound uses various focal lengths, several transducers, etc, etc. All done electronically now.

The beam has an interesting shape – not conical, as I had always thought.

Original Ultrasound used an oscilloscope with long persistence, and they'd use a Polaroid camera to take pictures of it. The ultrasound head used joints to know where it was to graph points on the oscilloscope. (Long before computers were available.)

Advantage: images interfaces between tissues, inexpensive, portable, realtime 2D/3D, does not pass through air or bone. Can be used to measure changes in reflected frequency, and hence blood flow direction and speed. Can be used for image guidance – and can be much more useful when combined with MRI, etc.
Disadvantage: difficult to interpret.

In the last year, 3D dynamic ultrasound has become available. You can put a probe in and watch the heart valves.

For intra-cardiac intervention: create a model from pre-op imaging, register the model to the patient, use trans-esophageal ultrasound for real-time image guidance, introduce instruments through the chest/heart wall, magnetically track the ultrasound and instruments, and display in a VR environment.

[Very cool demonstrations of the technology.] [Now showing another VR environment using windows XP. Bleh.]

Other modalities: PET – positron emission tomography, SPECT.

One important tool, now, is the fusion of several of these techniques: MRI-PET, CT-MRI, US-MRI.

Conclusion: CT and MRI provide high-resolution 3D/4D data, but can't be used well in the operating room. US is inexpensive and provides 2D/3D imaging, but it's really hard to get context.

Future: image-guided procedures, deformable models with US synchronization. Challenges: tracking intra-op imaging devices and real-time registration. Deformation of pre-op models to intra-op anatomy.


Thursday, March 12, 2009

Personal Medicine... is it worthwhile?

After the symposium yesterday, and several more insightful comments, I thought I should write a couple of quick points.

One of the main issues is penetrance, or how often the disease occurs when you have a given genomic profile. For some diseases, like Huntington's disease, having the particular mutation translates directly into a certainty that you will have the disease. There really isn't much of a chance that you'll somehow avoid developing it. For other diseases, a gene may change your likelihood of developing the disease slightly or in an almost un-noticeable way. In fact, sometimes you may have offsetting changes that negate what would be a risk factor in another person. Genomes are wild and complex data structures, and are definitely not digital in the sense that seeing a particular variation will always give you a certain result.

Mainly, that has to do with the biology of the cell. There are often redundant pathways to accomplish a given task, or several levels of regulation that can be called on to turn genes on or off. Off the top of my head, I can think of several levels of regulation (DNA methylation, histone post-translational modifications, enhancers, promoters, microRNA, ubiquitination leading to increased degradation, splicing, mis-folding through chaperonin regulation, etc.) that can be used to fine tune or throttle the systems in any given cell. At that rate, looking at a single variation seems like it might be an entirely useless venture.

And, in fact, that was the general consensus of the panelists last night: the companies that currently run a microarray on your dna and then report to you some slight changes in risk factors are really a waste of time - they don't begin to compensate for the complexity that is really going on.

However, my contention isn't that we should be doing personal medicine over the whole genome, but that as we move forward, personal medicine will have a large and growing impact on how healthcare is practiced. I've heard several people talk about warfarin as an example of this. Warfarin is an anticoagulant used to prevent blood clots, and is quite effective in most people. However, each person has different dosage requirements - not because they need more to activate the pathway, but because we all degrade it at different rates, depending on which p450 enzymes we have to break it down.



In the above graph, you can see all patients conform to some "normal" distribution, but they're really made up of two subpopulations - one set of fast metabolizers and one set of slow metabolizers, as judged by metabolism of some other drugs. (Yes, I'm way oversimplifying how this works - this is not real warfarin data!) When you look at the spectrum of patients that come in, you see a continuum of patient dosages, but you'd never understand why.

Instead, you could look for markers. In the case of drug metabolism, only one p450 may be responsible for the speed at which the drug is processed, so looking at the same group of patients for that particular trait will give you a completely different graph:



Which means, you can start to figure out what initial dose will be required, and tweak from there.

(If you're wondering why the fast metabolizers and slow metabolizers of the same drug have some overlap in my example, it's just so I'd have an excuse to say there are probably other factors involved: environment, other things interfering with the metabolism, the rate at which the kidneys clear the drugs... and probably many other things I've never considered.)
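For anyone who wants to play with the idea, here's a quick Python sketch of the picture I'm describing: the overall dose distribution is really a mixture of fast and slow metabolizers. The numbers are invented for illustration, just like the graphs above.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
slow = rng.normal(loc=3.0, scale=1.0, size=500)   # slow metabolizers: lower dose needed
fast = rng.normal(loc=7.0, scale=1.5, size=500)   # fast metabolizers: higher dose needed

plt.hist(np.concatenate([slow, fast]), bins=40, alpha=0.4, label="all patients")
plt.hist(slow, bins=40, histtype="step", label="slow metabolizers")
plt.hist(fast, bins=40, histtype="step", label="fast metabolizers")
plt.xlabel("dose required (arbitrary units)")
plt.legend()
plt.show()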

So what's my point? It's easy. Personal medicine isn't about whole genomics, but rather about finding out what conditions underlie the complex behaviours of the body - and then applying that knowledge as best we can to treat people. (Whole genome studies will be important to learning how these things work, though, so without the ability to do whole genome sequencing, we wouldn't have a chance at really making personal medicine effective.) I'll be the first to admit we don't know enough to do this for all diseases, but we certainly do know enough to begin applying it to a few. I've argued that within 5 years, we'll start to really see the effects. It won't be a radical change to all medical care at once, but a slow progression into the clinics.

To narrow my prediction down further, at some point in the next 5 years, it will become routine (~10-20% of patients?) for doctors to start doing genomic tests (not full genome sequencing!) to apply this type of knowledge when they treat their patients with new drugs. (Not every illness will require genomic information, so clearly we'll never reach 100% requirement for it - having a splinter removed in the E.R. won't require the doc to check your genome...) I give it another 10 years before full genome sequencing begins hitting clinics.. and even that will be a gradual change.

Now I've really wandered far outside of my field. I'll let the doctors and physicians handle it from here and try to restrict my comments to the more scientific aspects of it.


Wednesday, March 11, 2009

Notes from the Michael Smith Panel at the Gairdner Symposium.

I took some notes from the panel session this evening, which was the final event of the Vancouver portion of the Gairdner Symposium 50th Anniversary celebrations. Fortunately, they're in electronic format, so I could easily migrate them to the web. I also took notes from the other sessions, but only with pen and paper, so if people are interested I can transcribe or summarize those as well - but only upon request.

As always, if there's something that doesn't make sense, the fault is mine, not the panelists', and when in doubt, this version of the talk is probably wrong.

You'll also notice several changes in tenses in my notes - I apologize, but I don't think I'm going to fix them tonight.

Before you read the transcription below, I'd also like to say that all of the speakers tonight were simply outstanding, and that I can't begin to do them justice. Mr. Sabine is an amazing orator, and he left me impressed with his ability to mesmerize the crowd and bring his stories to life in a way that I doubt any scientist could match. He was also an exemplary moderator. If the panelists were any less impressive in their ability to hold the crowd's attention, they more than made up for it with their ability to give their opinions clearly and concisely, and they were always able to add insightful comments on each subject they addressed. Clearly, I couldn't have asked for more from anyone involved.

There was also a rumour that the video version of the talk would be available on the web, somewhere. If that's the case, I highly suggest you watch it, instead of reading my "Coles notes" version. If you can't find it, this might tide you over till you do.

------------------------------------

Introduction by Michael Hayden

Michael Smith told reporters, on the day he won the Nobel Prize: he sleeps in the nude, has a low IQ and wears Birkenstocks.

Tonight's focus is different: DNA is “everywhere”.. it has become a cultural icon. Sequencing the human genome was estimated to take $3 billion and 10 years, and it took nearly that. Now, you can do it for about $1000. Who knows, it might even be your next Christmas gift. Personal genomics was the invention of the year in 2008.

Our questions for tonight: what are the advantages, and what harm can it do?

Moderator: Charles Sabine
Panellists:
Dr. Cynthia Kenyon,
Dr. Harold Varmus
Dr Muin Khoury

Questions: Personalized Genomics: Hope or Hype?

[Mr. Sabine began his talk, which included a slideshow with fantastic clips of Michael Smith, his experiences in the war, and was narrated with dramatic stories from conflicts in which he reported upon, and human tragedies he witnessed. I can't begin to do justice to this extremely eloquent and engaging dialogue, so I'll give a quick summary.]

Mr. Sabine recently took a break from broadcasting to begin participating in science discussions, and to engage the community on issues that are tremendously important: science, genomics and medicine. His family recently found out that Mr. Sabine's father was suffering from Huntington's disease, a terrible hereditary disease. With his father's diagnosis of Huntington's, he himself had a 50% chance of developing the disease, as do all of his siblings. His older brother, a successful lawyer, has developed the disease, and is now struggling with the symptoms.

An interesting prediction is that in the near future, as much as 50% of the population will have dementia by the time they die.

From his experience in wars, if you take away dignity and hope, people will lose their moral compass. (Mr Sabine is much more eloquent than my notes make out, and this was the result of a long set of connected points, which I was unable to jot down.)

Huntington's disease is an interesting testing ground for the ability to predict personal medical futures. It has high penetrance, and it was one of the first genetic diseases for which a test was identified; thus, it is the precursor to the genetic testing and personalized medicine processes that people have envisioned for the future. But the question remains: will personalized medicine be a saviour, by enabling preventative medicine, or will it be a huge distraction, presenting us with information that just isn't actionable? Many people have different answers to this question: insurance companies would like to remove risk, and the personal impact is, of course, enormous.

Mr. Sabine recently took the test for Huntington's himself. The result was positive: he will suffer the same fate as his brother and his father.

[If only I could type fast enough! Fantastic metaphors, stories and wit.]

End of introduction – Beginning of the panel

Question: Is personal medicine a source of hope, or is it just hype?

Varmus: Middle of the road. First, the fact that we're talking about genes attracts special attention – it's hereditary, and seems unchangeable. However, this new modality that plays a role in risk assessment is just one more part of the continuum of care we already have. All our environmental and lifestyle choices are just one more component of what goes into our medical care.

Second, we're already using medical diagnoses based on genomics. These are mostly high-penetrance genes, however, so we have to consider the penetrance of each of the genes we're going to use clinically.

Third, it's not always easy to implement the changes in the clinic, when we find them in the lab. There is resistance – physicians are creatures of habit, there are licensing and cost issues. These things are important in how the future plays out.

Many of the new commercial ventures (genotyping companies) are grounded in a questionable area of science. There may be a slightly increased risk of a disease because of a couple changes in your genome, but it's not an accurate description of what we know. There are suppressors and other mitigating factors, so to put your health in the hands of a commercial vendor is premature.

Khoury: My job is to make sure that information is used to improve the public health. Have an interest in making sure that it gets used, and used well. Therefore, I'm for it, and believe it can be done. However, I have concerns about the way it's being used now. “The genie is out of the bottle, will we get our wish?” (Title of a recent publication he was involved in.)

Kenyon: Excited about it. Humans are not all the same as one another. Can we correlate attributes with genes? We know about disease genes. Is there a gene for perfect pitch? Happiness? Etc etc. It would be interesting to know, we could start asking if we had more sequences. What would you do with that knowledge? If you don't have the gene for happiness, how would you feel?

The more we know, the more we'll be able to do with the tests. We're too early for real action most of the time. Can you make the right judgements based on the information? Now, probably not, there's too much room to make bad choices.

Question: Is knowing a patient's gene sequence going to impact their care, and what role does the environment play in all of this?

Khoury: most diseases are interactions between environment and genes. Huntington's is one of the few exceptions with high penetrance. You need to understand both parts of the puzzle, to identify the risk of disease. It's still too early.

Varmus: Get away from the phrase “sequencing whole genomes” - we're not there yet... that would cost hundreds of thousands of dollars. Right now we do more targeted (arrays?) or shotgun (random?) sequencing. So we have a wide variety of techniques that are used, but we're not sequencing people's genomes.

In some cases there are very low environmental influences. Many people have gene mutations that guarantee they will get a disease... These are the indicators where genes must be tested, and knowing the genetic information provides protective, preventative power.

Kenyon: Agree with Harry. Sometimes genes are the answer, other times we just don't know.

Khoury: There is a wide variety of diseases with a variety of possible interventions. Sometimes the diseases are treatable in environmental ways (e.g. phenylketonuria). However, single-gene diseases make up only 5% of the diseases affecting the population.

Genome profiles tell us that there are MANY complex diseases, and we can use non-genetic properties to indicate many of our risks, instead of using genome screens.

Varmus: The word “environment” does not cover all things non-genetic. Behaviour is a huge component. Diet, drugs, motor accidents, warfare, smoking... they are controllable and make a huge contribution.

Khoury: Every disease is 100% environmental and 100% genetic. (-;

Question: given that a genome scan is ~$400-500, would you have your genome scanned. Would you share that information with anyone, and who would that be?

Kenyon: Hasn't had it done; would do it if there were a familial disease. She likes a little mystery in life, so probably wouldn't do it.

Varmus: Wouldn't do a scan – would only do a test for a single gene. The scan results aren't interpretable. The stats are population-based, not personal. He wouldn't publish his own.

Khoury: He's had the offer from 3 main companies... and has turned them down. What you get from the scans is incomplete, but also misleading. Some people have had scans by multiple groups – they aren't always consistent. The information is based on published epidemiology studies... and some of them are replicable, some of them... well, aren't. The ones that do give stable information give VERY small increases in risk. What do you do with the difference between a 10 and 12% lifetime risk? What changes should you make?

Why waste $400 on this, go spend it on a gym membership.

Kenyon: If you have the test done, it could mislead you to make changes that really hurt you in the long run.

Q: Should personal health care be incorporated into the health care system, and would it become a tiered system?

Khoury: We all agree that personal genomics isn't ready for prime time. After validation, and all....
(Varmus: define personal genomics.) Insurance companies are paying for this information, genetic counselling, and people make up their minds. The question is whole genome scans, though. And this is all about microarray chips and small variants. However, if the 3 billion bases are sequenced, what then? Would you start adding that information to do massive interpretation? We need to wait till things are actionable. Once that happens, we'll see them move through the health care system.

Will it become part of people's medical record.. probably not.

Varmus: The president is interested in using genetics in interesting ways. The most useful aspect is pharmacogenetics - starting to look at genetic variation and response to drugs. When you get sick away from your home, the physician you visit should have access to that information.

Kenyon: There are a lot of drugs already being tested that have very specific actions which only work for a subpopulation. Once we know which variations in the population are treatable with a drug, more drugs will make it through the trials.

Varmus: Not a correction, just not sure the audience knows enough about cancers, which are heterogeneous. Many of the changes are not somatic, and we don't yet have the tools to analyse cancers at that level.

Q: How worried should we be that personal genomics will lead to discrimination?

Khoury: it has been a huge topic for discussion. Congress has passed an act to prevent exactly that.
It's a good start in the right direction, and there's still plenty of room to worry.

Varmous: This is most worrisome in employment and insurance. There have been few cases of discrimination based on predisposition in the past anyhow. However, some employers may discriminate against people with diseases that may recur, because they don't want their insurance premiums to rise.

The only way to avoid that is to have a Canada-like health care system.

Sabine: would you tell your employer?

Kenyon: it wouldn't be a problem at the University of California, but it could be a major problem for other people. There's a risk that circumstances change, and you don't know where that information goes.

Sabine: would you trust insurance companies?

Kenyon: Governments can't do things that harm too many people and stay in power. In the long run, we can trust that things will be put right.

Khoury: not right now.

Varmous: I wouldn't reveal it [his own genome sequencing results].

Q: How do you suggest we bring personalized medicine to developing countries?

Khoury: The same technologies can be used everywhere around the world. The concerns around chronic diseases exist there too. Developing countries have their fair share of infectious diseases, and genomic information will help produce better medications, which will help on that front. We can also use the technology for solving other difficult problems, beyond human personal medicine.

Dr. Singer [an authority on the subject, pulled up from the audience]: It can help. There is a war being waged on the global poor - waged by diseases that kill millions of people. We could use the technology to create better tests, better drugs, etc. We can use life sciences and biotech to save people in the third world. Personalized genomics could also have an effect, but not at the individual level. If you apply personalized genomics at the population level... [I think he's talking about doing WGAS to study infectious diseases, and conflating it with personalized genomics.]

Varmous: Always eager to agree with Dr. Singer. The use of genetic technology could be very important for infectious diseases in the developing world.

Kenyon: Something about using WGAS to study..... [I missed the end of the thought.]

Q: How might this information [personal genomics] shape romantic relationships?

Khoury: He's just a public health guy! But there's an angle: it's highly unlikely that we'll find the gene "for" something like romance, so while he specializes in prevention and disease, personal genomics is just unlikely to be useful outside of that realm. He doesn't think it's going to have an impact, but there are medical applications such as Tay-Sachs screening. There are forms of screening, but they aren't romantic in any sense of the word.

Varmous: We're unlikely to ever see genomes on facebook for romantic purposes, but sometimes it is useful in preventing disease. It may be useful in screening embryos.

Kenyon: Thinks the same thing. Predicting love or even personality from DNA is impossible... it's cheaper to do two-minute dating. However, many of the screens are still useless in terms of predictions that really carry weight. We should instead teach statistics to kids so they better understand risk. If we bring in testing, we have to bring in education.

Q: Should information be used to screen your potential partners?

Varmous: If he had genetic testing, and if he were single, he still wouldn't tell his dates.

Q: Genome canada's funding was reduced to zero. How do we advocate for funding?

Varmous: Wasn't aware of that change in funding. There are many factors to consider: both the economic and political climates have to be taken into account. Scientists must keep explaining, in an honest and straightforward manner, how science works and what it contributes to the public. Do what you can, and engage the politicians! They do listen, and they learn - visit local pharma companies, etc. All scientists have to do their part.

Khoury: Harry said it all.

Kenyon: Opportunities arise all the time; engage everyone around you. Take the time to talk with people on the bus, or wherever - just seize the opportunities as they come up.

Varmous: just don't become the crazy scientist on the bus who will talk to anybody.

Q: Epigenetics is influenced by the environment, and we can influence it with our behaviour. How long will it be before we know enough about the epigenome to start making predictions about disease?

Khoury: The sequence variation we measure may mean something different for different people, depending on the patterns we see. It can be a big factor, and the environment also complicates our efforts to understand how it all works together - particularly in cancer. How long will it take to mature? Progress is moving forward rapidly, but he can't make a prediction. He's excited by the prospects, though.

Varmous: Epigenomics is being most vigorously applied in oncology. Gene silencing and other effects can be seen in the epigenome. That may contribute to the cancer, and determine the efficacy of drugs. However, the tools are still crude.

[Hey, what about FindPeaks!? :P]

Kenyon: Explanation of what epigenetics is.

Q: What kind of regulation should exist, if any, on the companies that do personal genomics.

Khoury: The FDA does the regulation, and US oversight is fairly loose. Talks about CLIA. [Previously mentioned in other talks on my blog, so I'm not taking notes on this.] Basically, more people are becoming concerned, and many believe that additional regulation is necessary.

Varmous: There's an uneven playing field out there. Certain things are tightly regulated, whereas other things are too loosely tested. Seems like DNA testing wasn't really the point of the original screening regulation, so that could be improved.

Sabine: Closing remarks. Thanks to everyone.

Labels:

Tuesday, March 10, 2009

Ubuntu Jaunty Alpha

Wow. Really, wow. I upgraded to Jaunty (Ubuntu 9.04 Alpha) over the weekend on my work computer. It wasn't flawless, and I wouldn't recommend anyone else do it yet since it's far from a trivial process, but I'm very impressed with the results. The computer is more responsive, several bugs that were annoying me have disappeared, and the monitor just looks nice.

But, since I had problems, I figured I should leave a trail for those who are following in my footsteps.

I had problems installing the nvidia drivers. My old xorg.conf failed miserably on the new configuration, and no end of tweaking seemed to fix it. In the end, I settled for using the standby:
sudo dpkg-reconfigure -phigh xserver-xorg

You'll note that without using -phigh, the command no longer works. Either way, this fixed several issues (multiple instances of X trying to start, bad configurations, inability to install new nvidia drivers, etc), and left me with a much cleaner xorg.conf file.

Once the new xorg.conf file was installed, I was able to install the new nvidia drivers, which had to be done manually (drop to a terminal by pressing "Ctrl-Alt-F1", stop the running gdm with the command "sudo /etc/init.d/gdm stop", and then run the nvidia binary with "sudo sh NVIDIA-Linux-x86-96.43.11-pkg1.run". Substitute the appropriate version/file name as necessary.)
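For reference, here are those steps collected in one place. This is just a sketch of my own session - the driver file name is whatever you downloaded, and restarting gdm at the end is an assumption; a reboot works just as well.
# drop to a text terminal first with Ctrl-Alt-F1, then:
sudo /etc/init.d/gdm stop                      # stop the display manager
sudo sh NVIDIA-Linux-x86-96.43.11-pkg1.run     # run the nvidia installer (substitute your file name)
sudo /etc/init.d/gdm start                     # bring the display manager back up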

You'll have to enable the nvidia driver as well, which can be done with the command:
sudo nvidia-xconfig

From that point on, I had to manually add back a couple of things:

At the end of the file, I put the following to re-enable the Ctrl-Alt-Backspace combination, which has been removed from Jaunty for no discernible reason.
Section "ServerFlags"
Option "DontZap" "false"
EndSection

At one point, I also tried installing dontzap from the repositories, which didn't help on its own, but may have been necessary for the section above to work:
 sudo apt-get install dontzap

To the Screen section, I had to add back
    Option  "AddARGBGLXVisuals" "true"

in order to use compiz again, which seems to require this flag.

To get my monitor showing the right resolution, I also had to edit the "Modes" line in the Screen section:
Modes      "1920x1200" "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"

as my monitor's native resolution (1920x1200) wasn't included in the list. Putting it at the front of the list makes it the default mode.
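For context, here's roughly where both of those options ended up in my Screen section - the Modes line actually lives in the Display subsection. This is just a sketch; the Identifier, Device and Monitor names are placeholders and will differ on your system.
Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "AddARGBGLXVisuals" "true"
    SubSection     "Display"
        Depth       24
        Modes      "1920x1200" "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection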

Compiz itself required several libraries which weren't installed:
sudo apt-get install compiz compiz-core compiz-fusion-plugins-extra \
  compiz-fusion-plugins-main compiz-gnome compiz-plugins compiz-wrapper \
  compizconfig-backend-gconf compizconfig-settings-manager libcompizconfig0

Some of these were already installed and others weren't, so I asked apt to reinstall the whole stack.

Java, for some reason, became really slow - probably because it changed the default away from Sun's version 1.6 to something else. That should be switched back with:
sudo update-alternatives --config java
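Afterwards, a quick check confirms which java is now the default:
java -version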

Firefox was upgraded at one point, to version 3.0.7, which seems not to be backwards compatible with the old profile. (It was no longer able to go back a page, or remember previous pages or bookmarks.) Blowing away the ~/.mozilla/firefox directory fixed it, but I lost all my bookmarks and settings. I also had to remove the lock files, but if you're blowing away the whole firefox directory, they go with it.
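If you'd rather not lose everything, back the profile up before nuking it (a sketch - the backup location is arbitrary):
cp -a ~/.mozilla/firefox ~/firefox-profile-backup    # keep a copy of bookmarks and settings
rm -rf ~/.mozilla/firefox                            # firefox will create a fresh profile on next start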

I'm leaving out several other steps I took that shouldn't be necessary for anyone else: the computer hung midway through the install and had to be recovered at the command line, several repositories were tested out, multiple rounds of update/upgrade/autoremove were run from the command line, and a few other features were tweaked along the way.

I would also offer a couple of pointers to people who are considering this upgrade: back up everything you value - config settings, xorg files, your compiz settings, etc. Several config files were reset back to defaults or had to be blown away to get things working again, and, while that's minor, it took some time to recover.
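Something like the following would have saved me some of that recovery time (a sketch - the backup directory and the list of files are just examples; add whatever you care about):
mkdir -p ~/pre-jaunty-backup
sudo cp /etc/X11/xorg.conf ~/pre-jaunty-backup/      # X configuration
cp -a ~/.config ~/.gconf ~/pre-jaunty-backup/        # desktop and compiz settings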

Was all that worth the 6 hours of tweaking? So far, I think so!

Labels: ,

Monday, March 9, 2009

Personal Medicine - towards efficient medicine

I've posted a couple of thoughts on Personal Medicine, lately. They've been fairly popular, and obviously controversial enough that people have taken the time to comment. (I really appreciate that, by the way!) Those comments are very useful in giving me an opportunity to think about the subject in ways I hadn't considered. (Thanks, again, to those who chimed in on the last two posts!) So, I have at least two more topics I want to cover. The first one is "efficient medicine."

All this talk about personal medicine is interesting, because it's relatively obvious what everyone means: using a patient's genomic/transcriptomic information to make health decisions that are tailored to the patient's personal needs. Hence, it's personal medicine. However, the question really has to be asked: why are we doing it? I contend that personal medicine is the technique, but the underlying goal is really "efficient medicine."

By efficient medicine, I really mean efficiency in several ways:
  1. More efficient use of medication (1): treating only those people who will benefit from the treatment.
  2. More efficient use of time: automate health care so that we can figure out the right treatment more quickly.
  3. More efficient use of resources: treat people once with the right medication, so that less time needs to be spent in clinics and hospitals
  4. More efficient use of medication (2): ensure people treated with medications won't suffer from adverse effects, which has a human cost as well.
  5. More efficient use of doctors: Allow doctors to spend less time trying to diagnose problems, and more time trying to figure out how to solve them.
I'm sure I could go on, but by now everyone gets the idea. Efficiency means something different to everyone in the medical chain of command, yet I'd like to think everyone is striving to provide more efficient medical care. Whether the medical funding agency wants to save money by not treating non-responders to a drug, a hospital wants to save resources by proactively treating an out-patient (metabolic disease), or the doctor wants to spend less time trying to figure out the root cause of a patient's problem (e.g. Crohn's disease), knowing what's going on at the genomic level will make medical care more efficient for everyone involved.

So, let me re-iterate my other points from the past few blog items: We are near the tipping point where the cost of personal medicine is becoming sufficiently low that the efficiency benefits from taking advantage of it will have a measurable effect.

Once that takes place, it will be a tide that washes away the inefficient medical practices of the past. Medical funding agencies won't fund doctors or medical practices that waste time or money, and that will force through changes that make personal medicine the only way to do business.

Again, I'm not arguing that doctors are incompetent, just that personal medicine will change the baseline level of efficiency we demand, and that MDs will need to cope with that change.

And, as a corollary, that's going to lead an awful lot of medical funding agencies to start funding lifestyle changes. (Go to the gym 3 times a week, and save 50% on your insurance...) Change is coming, people... and you don't need to be an MD or a PhD to see it.

And speaking of efficiency, I have a few more things I need to get done this afternoon! Back to the grindstone...

Labels:

Wednesday, March 4, 2009

alternative and personal medicines

After my post the other day, on the subject of resistance to personal medicine from doctors, there were a few interesting comments, which I figured merited their own entry.

The first comment, from Will, implied that I think all MD's are idiots - which is far from the truth. I've met idiot doctors before (such as the one that told me a collapsed lung was psychosomatic), and some very bright doctors (such as the one that asked me about 10 questions, listened to my chest, told me I had a collapsed lung and then sent me back to the hospital right away.) Like all professions, there are good ones, and there are bad ones. However, like all professions, the exceptional doctors, by definition, are few and far between.

And, as a scientist, I can appreciate why that is: doctors are to the human body as mechanics are to our cars, and a car is a relatively simple piece of machinery when compared with the human body. Even more frightening, a lot of the human body is simply a "black box," in the sense that we know what we put in, and we know what comes out, but we rarely understand all of the intricacies of the processes that are occurring. So when it comes to my car, if I had one, I'd trust a guy named Garry who doesn't have a high school education to figure out what went wrong and fix it, but when it comes to my body, I expect the person doing the fixing to have about 10 years of higher education.

But what is that higher education? It's not necessarily a biochemistry degree, or even a molecular biology degree - it's typically a higher-level overview of how the body works: anatomy, histology, immunology, and the various other "organ-level" subjects. We don't expect the average physician to be able to describe how transcription factors, polymerases, gyrases, ligases or any of the host of other molecular tools work, or what their effect is on the human body. Thus, physicians are handcuffed by their high-level view of the complex systems on which they operate.

And, of course, that leads us to the major issue. Dealing with complex systems at a high level can only be done by applying rule-based solutions. For instance, if you see a broken leg, you splint it. You don't need to know about osteoblasts and osteoclasts and how they work to rebuild bone. We don't look at the molecular signals they need, or what would encourage them; we just expect the doctor to apply the rule. If something goes wrong and the bone doesn't heal, then (and only then) your doctor starts looking for another rule to apply. That's not a bad thing, really - but that's how we have come to expect modern medicine to work.

The article I linked to in my earlier post wasn't about doctors being idiots or stupid, it was about doctors being influenced in their rules and the application of those rules in ways that aren't productive. When doctors are influenced by other doctors around them (group mentality) to do unnecessary or unproductive treatments, despite the lack of evidence to show the treatment works, that's not a good thing. When doctors use rule based medicine that's outdated, that's also not a good thing.

While I don't have independent stats on it, the article certainly made it seem like those are common occurrences - and that makes it appear that modern science isn't doing a very good job of matching diseases with treatments. When that starts to sink into a patient's mind, they start looking for alternatives, which leads them to alternative medicines. In my mind, alternative medicine is any form of treatment for which there is no scientific evidence that it works. If you could show me in a properly controlled trial that waving a crystal pyramid over aching joints actually did better than placebo, I'd have no problem considering it a real medical treatment.

So what does alternative medicine have to offer? Hope and faith. Having nothing to believe in is a scary concept, and when science based rules let you down, there's alternative medicine, waiting to lure you in like a cult. Of course, I don't mean to say that alternative medicines have nothing to contribute - but the vast majority of them (in my humble opinion) are complete garbage, made up by people who want to make a living on someone else's misery and doubt.

Of course, our current medical practices aren't much better, in many cases. (See this example for Lipitor's Number Needed To Treat. It's worth a quick read.)
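(For anyone not familiar with the term, the number needed to treat is just the reciprocal of the absolute risk reduction. As a made-up example: if a drug drops the rate of heart attacks from 3% to 2%, the absolute risk reduction is 1%, so you'd need to treat 1/0.01 = 100 people for one of them to benefit.)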

And that's what brings us to personal medicine. Like the rule based approach, personal medicine isn't a huge change, but it does introduce a new layer.

The advantage of the new layer is twofold: the first is that rules based on "bad practice" should slowly melt away, and the second is that the number needed to treat should drop drastically, since treatments will now be indicated for conditions that can be matched more closely with the cause (not just the symptoms).

And, best of all, it still lets the doctors operate in a rule-based environment. The shift may not be that big, after all - it just means retraining all of our MDs. In some countries, that education will be mandated by the organizations that pay them, and the transition will go quickly. Only in the places where no one monitors how treatments are done will the switch be slow.

So really, I think the time is ripe to update the rules, don't you?

Labels:

Bioinformatics in a spreadsheet?

This is an old article, but it just came to my attention today.

Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics

The title really does say it all. Alas, I just tested it with OpenOffice 3.0, and it has the same problem.

Good thing I do my gene name storage in databases!
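As a sketch of what I mean (using sqlite3 as an assumed example), storing symbols as text in a database keeps names like SEPT2 from being silently turned into dates:
sqlite3 genes.db "CREATE TABLE genes (symbol TEXT, description TEXT);"
sqlite3 genes.db "INSERT INTO genes VALUES ('SEPT2', 'septin 2');"
sqlite3 genes.db "SELECT symbol FROM genes;"    # prints SEPT2, not 2-Sep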

Labels:

Tuesday, March 3, 2009

Update to blog url

Just a quick admin note - I'm finally going to make the blog the default page on my domain, so for people who have this page bookmarked, you'll want to drop the "blog.html" part of the URL.

As far as I can tell, people aren't spending a lot of time looking at my photography, so I may as well let the blog tell the main story here.

Just in case you find yourself at the blog.html page, and aren't sure where to go, just follow this link to fejes.ca

Labels:

Monday, March 2, 2009

We're probably a lot further from personalized medicine than we think...

I keep writing posts about how I think we're closer to personalized medicine than we realize, but I think I might have to change my tune... just by a little bit. This article (Why Doctors Hate Science - Sharon Begley/Newsweek) was linked to from Slashdot today, and caught my eye.

I highly suggest giving the article a quick read, but if you don't, the general summary is that many doctors aren't able to embrace science to find the correct or best diagnosis and treatment, let alone reliably provide appropriate care. On the surface, it's a scathing taunt at doctors, although really, I think there are "professionals" in all fields who just don't apply logic. To quote my girlfriend: "I hate stupid people." Well, apparently they exist everywhere, even in the medical profession.

Still, after reading the article, I have to say that change is coming, and it's going to come quickly. Those doctors who can't cope with it are going to be blown away by the younger ones who can - and who will be able to get the diagnosis right the first time, as well as the treatment. (As personal medicine begins to LOWER the number of errors doctors make in treatment, insurance companies will have to start lowering their premiums relative to the "old school" doctors - or raising the premiums on the ones not using genetic information - and we can all see where that will take the medical profession in the U.S.) In Canada, I guess the federal government will just mandate that the correct tests must be done before doctors are paid for a treatment. Voila.

Anyhow, with doctors actively resisting the application of logic and science to their treatment regimens, I have to wonder how long they'll effectively be able to keep personalized medicine at bay.

Labels: