Nick Loman On Twitter: Thinking Of Switching To Airmail For Mac

The Atlantic recently posted an article that manages to capture two very interesting views for anyone looking in on biomedical research from the outside. The first highlights how different it is from most for-profit environments: I’m not Pollyanna.

This is not around the corner. It’s not for next quarter; it’s not for next year. We play for the long game. I don’t want to overpromise in the short term, but it is incredibly exciting if you take the 25-year view. And the second, how similar 'picking winners' in life sciences research is to deciding on projects to invest your money in: When there’s less money, reviewers don’t want to run the risk of wasting money on something that doesn’t work.

I’ve got to tell you, if you aren’t prepared to waste money on things that might not work, you can’t possibly do things that are transformative. Because for every successful transformative idea, there’s five times as many nonsuccessful transformative ideas. Nobody knows how to figure out in advance which ones they’re going to be. Here's an excerpt from an article based on a recent interview with Robert Birgeneau, the former Chancellor of Berkeley and, before that, President of the University of Toronto. He happens to be an alumnus of St. Michael's College at the University of Toronto, where I also did my undergraduate degree.

The article was published in a special fall issue of the St. Michael's College alumni magazine. Everyone is familiar with the common phases of matter—solids, liquids and gases—and the transitions between them. Whether it’s freezing condensation on a window or ice melting in water, materials constantly change from one phase to another. Robert Birgeneau’s professional life has been dedicated to understanding how different materials go through phase transitions, and there’s much more to them than temperature. “What you learn very quickly is that the nature of a material’s phase transition depends on many other features,” says Birgeneau, adding that part of the challenge in his research is finding good models for phase transitions in systems that might exist in worlds with more or less than three dimensions. Read the rest of the article at the St. Michael's College alumni site.

In what's turning into an object lesson against using hype to publicize scientific papers, a paper entitled 'Exonic Transcription Factor Binding Directs Codon Choice and Affects Protein Evolution' by Stergachis et al. was accompanied by a press release that's been taken apart at Forbes by Emily Willingham: The hype began the way hype often begins: an institutional news release offering us the holy grail/huge breakthrough/game-changing finding of the day.

This kind of exaggeration is the big reason any science consumer should look well beyond the news release in considering new findings. A news release is a marketing tool. You’re reading an advertisement when you read a news release. In this case, the advertisement/news release not only goes off the rails with the hype, it’s also scientifically garbled and open to all kinds of misinterpretation, as the comments at the link to the release make clear. The full news release isn't actually that much more hyped up than a lot of scientific releases coming from the usual biased sources.

Here, the UW press release was written by a writer engaged by (and, I presume, paid by) UW to promote work that - you guessed it - a team of UW scientists had just published. Apart from awkwardly referring to codons as a '64-letter alphabet', I don't think you can fault them for joining the ongoing scientific hype arms race that you have to take part in these days if you want to get a publication noticed.

As far as claims of 'discovering a double meaning in the genetic code' go, here's the actual abstract from the Stergachis et al. paper: Genomes contain both a genetic code specifying amino acids and a regulatory code specifying transcription factor (TF) recognition sequences. We used genomic deoxyribonuclease I footprinting to map nucleotide resolution TF occupancy across the human exome in 81 diverse cell types.

We found that 15% of human codons are dual-use codons (“duons”) that simultaneously specify both amino acids and TF recognition sites. Duons are highly conserved and have shaped protein evolution, and TF-imposed constraint appears to be a major driver of codon usage bias. Conversely, the regulatory code has been selectively depleted of TFs that recognize stop codons.

More than 17% of single-nucleotide variants within duons directly alter TF binding. Pervasive dual encoding of amino acid and regulatory information appears to be a fundamental feature of genome evolution. Let's paraphrase and simplify: DNA encodes protein and transcription factor (TF) binding sites.

We mapped TF binding sites within protein-coding regions, and found that roughly 1 in 7 codons also encodes a TF binding site. For convenience, we call these 'duons'. Duons are evolutionarily conserved, probably because they code for both amino acids and TF binding sites. We also found that TFs tend not to recognize stop codons, and that about 1 in 6 SNPs within duons alters TF binding. And that's it. The abstract itself doesn't claim that this team was the first to find duons, or to find a second genetic code, or anything shocking like that.
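The dual-use idea is easy to picture in code. Here's a minimal toy sketch of the concept - the coding sequence and the TF motif below are entirely made up for illustration, and this is not the footprinting method the paper actually used:

```python
# Toy illustration of "duons": codons in a coding sequence that overlap
# a transcription factor (TF) recognition site. Both the motif and the
# sequence are invented examples, not real biology.
MOTIF = "TGACGT"  # hypothetical TF recognition sequence


def duon_codons(cds: str, motif: str) -> set[int]:
    """Return the indices of codons that overlap any occurrence of motif."""
    hits = set()
    start = cds.find(motif)
    while start != -1:
        for pos in range(start, start + len(motif)):
            hits.add(pos // 3)  # map each base position to its codon index
        start = cds.find(motif, start + 1)
    return hits


# Codons: ATG | CAT | GAC | GTC | AAA; the motif spans codons 1-3.
cds = "ATGCATGACGTCAAA"
print(sorted(duon_codons(cds, MOTIF)))  # → [1, 2, 3]
```

A synonymous change inside one of those codons would leave the protein untouched but could destroy the motif hit, which is exactly the kind of silent-but-regulatory variant the paper is about.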

They simply coined a term to conveniently refer to a known feature of codons; in other words, they made up some jargon, and others ran with it. The paper itself is actually pretty good, and helps to explain how mutation of silent SNPs can change gene transcription patterns without changing (and usually damaging) the protein encoded at the site. Most importantly, there's nothing about a second genetic code in the paper. They didn't find any secret tRNAs in the study, nor rogue ribosomes that everyone had somehow overlooked. I'm going to assume that the team at UW didn't proofread the press release before it went out. If they did, maybe they brushed it off as 'just another release' that would fall into obscurity on the UW site.

That might have been their mistake, but I'm just speculating here. In either case, this is a perfect lesson that scientific experts need to be engaged in all parts of the science communications process.

It also shows how even a great Science paper can be overshadowed by what many assume to be a simple press release. Christophe Lambert, Chairman of Golden Helix Inc., recently gave a talk on how many facets of the health system can be improved using different approaches to analyzing the wealth of information in health records. He captures the general idea at around 41:00: I've done some interesting work on a project with Medco and Golden Helix, where we were looking at millions of patient records for drug safety and efficacy. The end game we were envisioning was: if you're sick, it would be great to look at tens of millions of records to find patients who were similar to you, anonymously, and then find out, evidence-based, which courses of treatment led to the best outcomes, and have a set of possibilities to present to a doctor as: here are the many outcomes of various drugs, various treatments, and so forth. The full talk is on his Vimeo channel.

Not surprisingly, the warning letter sent to 23andMe by the U.S. Food and Drug Administration struck a chord with many people around the web. Basically, the FDA has taken the position that 23andMe's $99 DNA test kits are medical devices and, not having been authorized by the FDA, cannot be marketed by the company.

What makes the whole issue especially puzzling is that the FDA claims 23andMe has failed - since May of 2013 - to provide adequate information showing that its service is 'substantially equivalent' to other legal devices with similar functions. You might think that if the FDA blocks 23andMe's DNA test sales the company will freeze up and shutter its doors, but I don't.


Nevertheless, there's been an explosion of responses to the FDA's decision. Here's a quick rundown of a few of the best. Matthew Herper, at Forbes, put up the sharpest take by far: Really? In six months, a company choosing to work in a business in which it knows the FDA believes it has jurisdiction decided not to respond to the agency for six months? At a time when 23andMe was going to be launching an advertising campaign to try to sign up a million people to its service? At a moment when Anne Wojcicki, the company’s chief executive, was going to be on the cover of FastCompany talking about how 23andMe is revolutionizing health care?

And 23andMe thought the FDA was just going to, I don’t know, not notice? Either 23andMe is deliberately trying to force a battle with the FDA, which I think would potentially win points for the movement the company represents but kill the company itself, or it is simply guilty of the single dumbest regulatory strategy I have seen in 13 years of covering the Food and Drug Administration.

I don't know about 'single dumbest regulatory strategy', but more on that in a minute. David Dobbs, at The New Yorker, is more guarded and suggests that: Another possibility is that the company simply dropped the ball. Linda Avey, who founded 23andMe with Anne Wojcicki in 2006 and left in 2009 but still owns shares and keeps in touch with some employees, says she doesn’t know what created the rift between the company and the F.D.A. “It surprised me,” she said. But she pointed out that 23andMe’s general counsel, whom she understands was leading the negotiations with F.D.A., left the company this summer; perhaps it fell through the cracks. I doubt that's the case.

You don't drop the ball on work that's essentially make-or-break for your company - unless, of course, it isn't make-or-break anymore. Michael Eisen, a member of the Scientific Advisory Board for 23andMe, weighed in on the evolving state of personal genetics: The data are, at this point in time, very very messy. I don’t think anyone disagrees with that. The question is what to do about that. On the one side you have people who argue that the data are so messy, of so little practical value, and so prone to misinterpretation by a population poorly trained in modern genetics that we should not allow the information to be disseminated. I am not in this camp. But I do think we have to figure out a way for companies that provide this kind of information to be effectively regulated.

The challenge is to come up with a regulatory framework that recognizes the fact that this information is – at least for now – intrinsically fuzzy. Which kind of falls in line with this next opinion. Gholson Lyon, an Assistant Professor at Cold Spring Harbor, takes on the FDA's stonewalling of 23andMe at The Conversation: We need to collect billions of data points for analysis by computers. The only company in major contention to do this soon is 23andMe. With FDA’s latest attempt to stop 23andMe, all it is really doing is delaying, or worse stopping, the revolution that today’s medicine desperately needs. And this, I have always thought, was the point of 23andMe. While I believe that personal genetics will elicit a huge change in medicine, actually making that happen requires knowledge of how a person's genetics impacts their health.

For many genes, like BRCA1/2, the oft-cited breast cancer risk markers in the media, the links between genetics and disease are established. But for many others, a lot of work still needs to be done: clinical trials based on experiments need to be designed; experiments based on hypotheses need to be run; hypotheses flowing from data analyses need to be identified; and analyses need data sets to start from. Fuzzy data is better than no data at all. I expect that's where 23andMe is going to graciously come into play.

If the FDA actually winds up killing the company's main data collection channel, 23andMe already has a substantial number of people genotyped and has built up enough traction to last a long time without having to sell another $99 test.

Do you remember ever buying a Big Mac from McDonald's in one of those old non-biodegradable styrofoam containers? At the time, no one thought twice about the logic behind throwing all that styrofoam into the trash. So little, in fact, that non-biodegradable styrofoam became a pop-culture shorthand for waste. Fast forward to today, when people go out of their way to find a recycling container rather than throw a paper cup into the garbage. That's how far we've come, at least in Toronto. So today, I had the surprise of seeing Al Gore, former Vice-President of the USA (like you didn't know that already), and Kathleen Wynne, Premier of Ontario, making an announcement at MaRS, where I work.

Next week, the Ontario government is going to put forward legislation to phase out coal-fired power generation. I'm not going to go much into the merits and drawbacks of this decision here, because other people with much better knowledge of power production can do that far better than I can. But I will describe the key points Gore appealed to in his speech.

He's a powerful orator, and whether you love him or hate him, there's something you can take home from listening to people like Gore. If you work on science communication in general, he's a good politician to watch. I've seen his major documentary before and enjoyed watching him put together his arguments, which he does very, very well. His use of figures and speech in it is excellent, and if you're a scientist who hasn't seen it, I suggest that you do.

But back to Ontario's energy announcement. Why does Al Gore think renewable energy is in everyone's near future? Today, his three big claims were: Renewable energy is quickly becoming economical. Gore told everyone a story of how, in Atlanta, Georgia, Tea Party Republicans and Sierra Club environmentalists teamed up against a rule that bans companies from owning solar power cells on residential rooftops. Environmentalists and Republicans don't usually walk hand in hand. So why are they cooperating?

It's because installing solar has become so cheap that returns from installing panels on rooftops are high enough to save the homeowner on energy bills and give the company a profit, said Gore. The message: If you're a person who likes to save money, renewables will help you do exactly that, and soon. The developing world will help develop the technology.

We heard Gore's example of how AT&T (then Ma Bell) made projections of cell phone use in the 1980s. They were way off. AT&T predicted that around the year 2000, the global market for cell phones would reach about 900,000 units a year - a figure that ended up off by a factor of about 120 - because they didn't take into account technological changes, dropping chip prices, improvements in quality of service, and another much more important factor: the developing world. The developing world doesn't have many of the constraints the developed world has.

In the case of telecom, landlines keep people from adopting cell phones, but in regions where that infrastructure doesn't exist, it's much easier to just go in with wireless, explained Gore. So the developing world created a huge market for wireless phones because wireless was the logical thing to install. Gore used that analogy to argue that it will be much easier for the developing world to adopt renewable energy.

We may be stuck with power plants and an electrical grid for the time being, but that doesn't mean there isn't a huge market for developing technology to be used elsewhere. If entrepreneurs do so, it will only end up helping us adopt the same technologies here, suggested Gore. Switching to clean energy is a moral choice. He spent the last several minutes of his speech framing the environmental choice as a choice between right and wrong. The morally right choices will always win.

Gore passionately explained that racism and homophobia aren't tolerated anymore and that it took everyday people standing up against these kinds of behaviour to make them unacceptable in society. I'm actually surprised that he chose to go way past the idea of littering and polluting and went right to equating our attitudes towards coal-fired energy to, say, racially segregated cafeterias, but maybe I'm just from a different time.

Pollution and racism aren't exactly comparable problems to me, but hey, he made his point. Either way, his argument was that using energy from dirty sources is morally wrong and that each person has an obligation to stand up for what's right. There isn't much of an argument left when you frame the decision to pollute as a morally right/wrong question, is there? And here I thought I was a winner just for not buying styrofoam containers anymore.

Coming back to the Ontario legislation, I'll wait and see what happens. It's a great idea and will probably become a bit of a political football, but I think a phase-out of coal energy is going to happen. It won't eliminate all the negative consequences of power production, but coal is probably the easiest source to quash. In an ideal world, we'd be able to eliminate the negative effects of producing power from each of the nuclear, hydroelectric, and thermal (coal, natural gas) sources; there are negative aspects and pollution from each one. If we, as a province, decide not to burn coal, other energy sources are going to have to come into play.

Congratulations to Signals, which recently won in the business-to-business blog category at the recent Canadian Online Publishing Awards competition! The COPAs consider brands from a lot of major media publishing companies - Rogers Media, Huffington Post Canada, Corus Entertainment, the Toronto Star, and Metro News, to name a few - so for the Stem Cell Network to be recognized for the quality Signals has achieved is a great achievement! I've had the pleasure of working for the past few years with Lisa Willemse, the Director of Communications responsible for Signals, who had this to say about the win: I’m immensely proud of this blog, but perhaps not for the reason you might think. Yes, it’s nice to win awards and be recognized.

Yes, I’m happy this blog has contributed to important, ongoing discussions about stem cell and regenerative medicine research and has helped share knowledge with broader audiences. But I get the best warm fuzzy from the success of our bloggers. So once again, congrats! Spend a few minutes reading the piece in Nature Biotechnology by Nick Loman and Mick Watson (whom I had the pleasure of hearing speak a few weeks ago in Toronto). This is probably the most salient point: You're a scientist, not a programmer. The perfect is the enemy of the good.

Remember you are a scientist and the quality of your research is what is important, not how pretty your source code looks. Perfectly written, extensively documented, elegant code that gets the answer wrong is not as useful as a basic script that gets it right. Having said that, once you're sure your core algorithm works, spend time making it elegant and documenting how to use it. Use your biological knowledge as much as possible—that's what makes you a computational biologist. I've often heard students and researchers debating the merits of one computer language versus another, or whether object-oriented code is 'better' than functional programming, but in biology these kinds of debates usually miss the point. Biology is about biology, not about technology. Let me make a sweeping generalization about you, the computational biologist.

One of the main things you're going to try to do is use your computational abilities to identify the next steps a wet lab should take to validate what you're observing in the data. It doesn't matter how you arrive at the conclusion to do a certain experiment, as long as you identify that next step in the line of thinking. Throw in a mix of statistical skills, and you, the budding computational biologist, will be able to rapidly identify the most suspicious and/or interesting pieces in a huge mish-mash of data. If your fellow bench biologists take up enough of your experimental pitches, and they work out from time to time, you're probably doing very well.
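That kind of statistical triage can be as simple as a few lines of code. Here's a minimal sketch of the idea - the gene names, fold-change values, and z-score cutoff are all made up for illustration:

```python
# Toy sketch: flag "interesting" measurements in a noisy dataset with a
# simple z-score cutoff - the kind of quick triage a computational
# biologist might run before pitching a follow-up experiment.
from statistics import mean, stdev


def flag_outliers(values: dict[str, float], cutoff: float = 2.0) -> list[str]:
    """Return the names whose value lies more than cutoff SDs from the mean."""
    m, s = mean(values.values()), stdev(values.values())
    return [name for name, v in values.items() if abs(v - m) / s > cutoff]


# Hypothetical fold changes; geneH is the obvious candidate to follow up.
expression = {"geneA": 1.1, "geneB": 0.9, "geneC": 1.0, "geneD": 1.2,
              "geneE": 0.95, "geneF": 1.05, "geneG": 1.15, "geneH": 8.5}
print(flag_outliers(expression))  # → ['geneH']
```

Nothing about this tells you *why* geneH stands out; it just tells the bench biologists where to look next, which is the whole point.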

That's not to say you should write horrible code. Well-written code is probably essential if you're ever going to ask others for help debugging your scripts. But since the end point is to inform experiments, it doesn't matter whether you use double spaces or tabs to indent your code.

All that aside, I've spent about a decade trying to pin down the differences between 'computational biologists' and 'bioinformaticians'. It's much harder than you think. The difficulty lies in the fact that the two roles often live in the same person; some days you'll be a computational biologist, others, a bioinformatician.

It doesn't matter which hat you wear, but it's important to understand that the two roles require two different ways of thinking. But don't take my word for it; consider these definitions: Computational biology = the study of biology using computational techniques. The goal is to learn new biology, knowledge about living systems. It is about science.


Bioinformatics = the creation of tools (algorithms, databases) that solve problems. The goal is to build useful tools that work on biological data. It is about engineering. Much like computational and experimental biologists, these two roles are different but complementary ones that contribute to each other in many research programs. Just realize which one you're supposed to be playing!