TMI yet TLI: science as I see it
Science is exploding; follow me as I see where it is and where it is going

Tuesday, March 4, 2014
In a little less than three weeks, I will defend my master's thesis. This is scary not because I think I'm likely to fail, but because I want to do the most excellent job I could ever do, and I've no idea what that means or how to do it.
Part of this has to do with the fact that my project was built to look at a problem from multiple angles, with each perspective designed to evaluate the question in a slightly different way. In my dreams, these different perspectives would agree and solidify some core truth about microbes breaking down carbon. But, of course, half of the evidence supports my hypotheses, half opposes them, and the best overall explanation for my results may just be experimental artifacts that my controls may not have fully accounted for.
How can I do an excellent job with results that won't give a fairytale ending? Do I do more analyses, at the risk of complicating the results further? Collect new data? Read more papers and empathize with their confuzzled results? Romanticize the beautiful stochasticities of the natural world, which periodically hide true, biologically meaningful patterns?
Do I share my data and analyses with my advisor prior to my defense? If I've done it right, maybe she'll be proud of my competence and my ability to develop and assess ideas independently. And isn't training independent thinkers a key part of an advisor's role? But if I've done it wrong, what if my incompetence humiliates her in front of her peers?
As a member of her lab, I am constantly conferring with my colleagues about how we can best serve team Kristen to try and repay the great kindness and fearless leadership she has shown to us. So ultimately, doing the most excellent job I could ever do means doing everything in my power to do my advisor proud, and to represent her team well.
If only I knew how to act on that in this instance...
Tuesday, January 21, 2014
Readin'
My mum asked me to write a post for her blog on what I think the greatest challenges facing young scientists are. I started writing something which didn't quite fit her criteria, so while I think more about her topic, I thought I would share what became a bit of a confessional.
What are the biggest challenges facing young scientists, you
ask?
I can't really answer this question for everyone, but I know
for me it is the fact that reading and lab work are mutually exclusive events in my life. My lab and the fields of microbial and molecular ecology are growing so
fast, and I want to know a bit about and be involved in everything. This
clearly is not possible; 10-15 papers directly related to my work come out
every day, and it's good if I can read that many in full in a week. And that
doesn't include getting caught up on the "old" classics upon which these
fields are built. Perhaps more importantly, reading goes against the views my
father bestowed upon me.
I am my father's daughter, and as anti-elitists we regard reading as an intellectual task reserved for the kind of wusses we don't approve of. Reading can be fun, but only when the mind is allowed to wander and I can go on random
tangents and explore citations in the papers at hand. However, reading in this
way is reserved for the artsy types doing PhDs in comparative literature or philosophy
who are expected to read for 7-9 years and possibly not graduate. When my dad and I read, we are on a mission; we
read manuals and protocols to figure out how to DO things better. But we only
resort to readin' when we can't figure out how to do something ourselves.
Reading is for the weak.
The harsh practicality of this view compounds the short-sightedness of my approach to science. I could read more, but the instant
gratification of doing an experiment is so much more enjoyable. I am still
young and inexperienced when it comes to the lab, and every new discovery is
exciting. I like having a growing family of bacteria in the lab, and watching
how my "children" develop through time, how they are each unique in
their growth morphology and feeding preferences. I love the rush I get when I
finally figure out why something didn't work and crush the problem. If my head were stuck in a journal, I wouldn't
get to do as many of these things. Yet ultimately I know that if my findings
are to benefit the scientific world and eventually advance some aspect of
society, I will have to communicate them in a manner which appropriately places
them in the context of what is already known. And for that, I must read.
And if to enjoy
reading I must take a day to meander through and really understand a paper or
two each week, then I must step down from my podium and become a wuss. From now on, Wednesdays are my wussdays.
Sunday, December 15, 2013
Morals and the ethics of collaboration
Why can't scientists stick to science at conferences? I find it hard enough to speak to strangers, and being welcomed with racist comments that I don't know whether I can address if I am in the perpetrator's country makes it much harder. Racism and sexism are trashy. You shouldn't have to hide racist/sexist/other-ist thoughts; they shouldn't even come to your mind. Yes - the ideal world.
But are we too sensitive? Do we rush to conclusions too
early? Perhaps. The only time I can remember being called explicitly racist was
in fourth grade, after I said I didn't want to watch Riverdance for the third
day straight and said that Irish people drink beer on St. Patrick's Day
(apparently saying that Irish people eat boiled dinner isn't racist though). It
seemed (and still seems) ridiculous that my teacher should assume I was discriminating
against Irish culture, but it made complete sense to her.
So am I ridiculous in crying racism after some conference
attendees told me that Chinese students are lazy, sleep in the lab and only
ever pretend to do work, playing video games all the time instead? Maybe. But it
is just another reminder that we need to think about what comes out of our
mouths and how, because after that comment, I couldn't bring myself to talk
science with them. Or talk to them full stop, for that matter.
Conferences and face-to-face interactions are vital for the
spread of scientific knowledge and forging collaborations, but if these are
going to be shrouded in ideals that one party vehemently opposes, then once
again ideals have shaped the path of science. Human rights and environmental charities
are constantly scorning universities and companies for investing in morally
grey areas, so why do scientists think it is OK to collaborate with scientists
who hold grey morals?
I think this is probably because we like to think of science
and society as separate when convenient. And it is hard to hold a firm stance
against poor morals because so many technological improvements have emerged
from dark times and/or dark minds (SONAR, RADAR, and the Haber-Bosch process to
name a few).
But where do we draw the line in collaborations? If we
ignore the discriminatory tendencies of our collaborators and work with them to
advance science, are we also subtly advancing their ideals? What if we cite
papers written by people with questionable morals (James Watson comes to mind
here)?
Ultimately, are we not guilty by association if we knowingly
collaborate with discriminatory colleagues? Is this how the closed-minded norms
perpetuate in fields that like to consider themselves open, in the heads of
people who like to think of themselves as educated and/or liberals?
Thursday, October 31, 2013
Has the sun set on societies?
My mum asked me to write a post on how society publishers could keep young scientists involved, as this is something that many scholarly publishers, libraries, and societies are thinking about. Her comments were something along the lines of "We oldies, who are the ones making decisions about the future of our organizations, are worried about keeping early career scientists involved. For example, most societies are run by the older generation, and although some (e.g., AGU, ESA) do have a good group of early career scientists, these appear to be exceptions to the rule. How can we get more of you involved?"
One factor could be the cliqueyness of conferences. The primary incentives of society membership that I can see are (1) access to job boards/listservs and (2) going to conferences (for societies that require membership to attend their conferences). But how do you break into a group of scientists who are talking to one another if they are all friends? Wouldn't you rather just stick with your own kind?
It is kind of like the first day of school all over again, only you
probably don't have quite as much courage to ask to join someone else's
game (conversation), or the brazen spirit to get over rejection. Then
again, later career scientists may feel the same way about talking to
earlier career scientists. People are just awkward.
Perhaps the best way to get all age groups is to create intergenerational labs. While as a grad student I may be able to relate to my pre-tenure advisor, I look at many later stage professors and they seem a world away. And since many of them got tenure in a different era, they really are. There are certain full professors I can relate to, but the initial interactions and courage to talk to one another is contingent upon our daily bumping into each other rather than any intellectual exchanges. Of course, deciding who shares which lab space isn’t really within the powers of societies, but
holding workshops specifically designed to bridge the generational gap is (for example, offering mini-seminars or discussion groups at conferences where the
organizers put together a small group of scientists at different stages
in their careers and from different institutions to talk).*
From
a publisher's perspective, keeping early career scientists involved with
the societies they work with is seen as vital for maintaining
readership. I can’t really say how to keep up readership in the early
career sector, but I know what I want. I want those pesky career
services adverts to go away – or at least to not
compromise job post quality and relevance in the name of money (do I
really seem like the kind to want a job in pharmaceuticals if I am
reading a paper on theoretical ecology?). I don’t need a society’s
calculator tools; labs have already developed plenty of high-quality ones, so don't waste your resources trying to compete with them. A publisher's website does not need to be a one-stop shop, and should stop wasting its energy trying to be. Remove the banners and sidebar links and fill as much of the page as possible with actual content; the screen on my laptop is quite small and I want to see the figures as I read. Just give me the papers I want to read, please. This isn't the Super Bowl.
But perhaps ultimately what societies and publishers can do to involve a younger audience is to stop assuming everyone is going into academia (or, in a few instances, industry). Develop resources that don't immediately exclude the majority of young scientists who aren't in, and don't aspire to be in, tenure-track positions.
What do you want from societies and publishers (besides free access to all the journals in the world)? Is there something you think that societies could provide to entice you to participate more, or are societies a lost cause in your mind?
* I would like to point out that there are of course numerous exceptions to my sweeping generalizations about societies, publishers, and conferences, and that there are organizations already doing the things I suggest are good here. The purpose of these statements is to highlight instances where I think these organizations are headed in the right direction.
Sunday, October 13, 2013
What makes a paper mind-alteringly good?
Sorry about the overly dramatic title, but after reading earlier today what I think is a very well planned and executed paper on the phylogenetic and geographical dispersion of drought tolerance in plants, I have been thinking about the power that single articles can have over our outlook on our work.
If the world's journal archives were about to be obliterated, the one article I would grab would be Davidson and Janssens' 2006 paper in Nature on how climate-induced changes in decomposition may feed back to climate change. Actually, this is the only paper I kept from my undergraduate, and I have been reading the same dog-eared, scribbled-on copy for the past five years. It is also the only paper for which I have gone through and read every paper cited in it.
But why did this article appeal to me initially? I didn't understand most of it the first time I read it. Or the second. It took a week of staring at Box 1 to understand what it was talking about, and some of the other arguments in the paper seemed flawed to me because the lines of logic they were following weren't laid out, and disagreed with the facts I was aware of at the time. But from what I could decode, I knew this paper would be really important for my understanding of the carbon cycle under climate change. And in my young, inexperienced state, I thought that it must be good and right, because it was published in Nature.
Despite my interest in the paper's important topic, it was the dense challenge of the paper, working through the hidden complexities of the carbon cycle, that got me. Every time I read it, I get something new out of it; it helps me frame my work in the bigger picture and reminds me why I love my work.
But what article do you keep returning to?
Basic psychology would tell us that for most of us, our most powerful article will be one of the first we read on a topic; our experiences early in life (whether research or real) shape how we perceive subsequent events, and therefore we will find ourselves returning to the point (paper) which established our mindsets. I would think that review articles would also be favored over primary research articles, because they put the research in context and are generally written by people with respected views. Of course, regurgitating what is known doesn't help, but putting a new spin we hadn't thought of (for example pulling in information from other disciplines) would make an influential article in my books.
I think this would mean that journals that want to be cited lots should favor interdisciplinary reviews. I believe I read somewhere that this already happens - does it? That seems like it would be a much too simple key to "success"!
Thursday, October 3, 2013
What to do when you're wrong...
Sorry for the delay in posts. I've been busy being wrong.
As we know, one of the first tenets of "good" science is reproducibility. If I follow someone else's protocol, using the same starting material, I should get the same results. But if you do that, and you get a different result, how do you know whether the alternate result is because you didn't quite replicate the protocol, or because the original conclusions were wrong? Or what if you interpreted the same results differently?
I can imagine this is an especially large problem when you are new to a protocol (or at least that is my excuse), as you lack a sense of the range of possible outcomes. For example, a labmate was trying to interpret a catalase test earlier, and would have said that all our results were negative compared to the plate of mixed soil bacteria I had, where adding a drop of hydrogen peroxide caused a baking-soda-and-vinegar-type volcano effect. Two of the bacteria were supposed to be catalase positive, and if we added tons of hydrogen peroxide directly to a plate with a high density of bacteria, looked really, really closely for a few minutes, and imagined a positive result, we got one or two small bubbles. This is exactly what happens if I put a drop of hydrogen peroxide on uninoculated media. So did the authors who stated these organisms were catalase positive actually mean it, or did some nervous, inexperienced student who was told to look for any sign of bubbling squirt a bubble onto the slide and call it a positive result? Who decides where the boundaries of a positive or a negative result are - would it not depend on "how" positive or negative your control organisms are? Do people publish the organisms they used as controls alongside their determination of whether something shows a positive or a negative result? I haven't found evidence of either yet.
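If I try to make that control-dependence explicit, it ends up looking something like the toy sketch below (Python, with entirely made-up bubbling scores and a hypothetical scoring function): a sample only earns a "positive" relative to what uninoculated media and a known catalase-positive organism do.

    # Toy sketch: call a catalase test relative to controls instead of in isolation.
    # Bubbling scores are hypothetical (0 = nothing, 10 = vigorous volcano).
    def catalase_call(sample, negative_control, positive_control, fraction=0.5):
        # Positive only if the sample clearly exceeds the uninoculated-media control
        # and reaches some fraction of the response of a known positive organism.
        threshold = negative_control + fraction * (positive_control - negative_control)
        return "positive" if sample >= threshold else "negative"

    # One or two tiny bubbles (score 1) judged against my wildly bubbling soil plate (score 9)
    print(catalase_call(sample=1, negative_control=1, positive_control=9))  # -> negative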
Maybe we should just mandate that all standards in genomics tables be replaced by full-colour pictures. Voluptuously bubbling hydrogen peroxide. Stunning Gram stains. Replace someone else's interpretation of ambiguous data with informative eye candy.
In another instance, I have been working (a bit too long) on trying to get well-published qPCR primers to work, and finding that conditions as close to the original ones as I can reproduce in my lab just don't work. I was afraid I had contaminated the freezer stock of my organism, or added too much or too little template, or used the wrong temperature...something that was my fault. I tried to come up with an answer and a solution for my PI for when I told her that things weren't really working. Apparently, however, what I thought was close enough to the original conditions probably is not; the primers were tested against a sequence placed in a purified plasmid, and I am using genomic DNA.
Which brings us to another class of issues with reproducing an experiment - what if you purposefully are not exactly reproducing an experiment, because you don't believe the methods used are valid and/or reliable and/or result in biologically meaningful conclusions? Why waste time and money trying to reproduce an experiment that was invalid when it was made, and still invalid now, just to try and compare your results to ones you cannot trust? At my stage, I think it is so you can fit in; the ability to reproduce something invalid is a valuable skill for perpetuating some of the falsehoods of science, which you have to be able to do in order to prove your worth in breaking the (methodological) status quo.
So why use a plasmid template for qPCR, even if it does a poor job of representing the kinds of environmental templates you will actually be comparing it against? Because it is one of the standards for this kind of data collection; we can hide behind completely non-reproducible data, uncertain whether it is due to the methods, or because we are dealing with "complex" environmental samples that nobody else is likely to exactly mimic.
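For anyone who hasn't met this setup before, here is a minimal sketch (Python, with made-up numbers) of how a plasmid dilution series typically becomes a standard curve that converts Cq values to copy numbers. The catch is that the fitted slope and efficiency describe the purified plasmid, not necessarily the genomic DNA you actually want to quantify.

    # Minimal sketch of qPCR absolute quantification with a plasmid standard curve.
    # All numbers are hypothetical, for illustration only.
    import numpy as np

    # Known copy numbers of the plasmid dilution series and their measured Cq values
    plasmid_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
    plasmid_cq = np.array([14.2, 17.6, 21.0, 24.5, 27.9])

    # Fit Cq = slope * log10(copies) + intercept
    slope, intercept = np.polyfit(np.log10(plasmid_copies), plasmid_cq, 1)
    efficiency = 10 ** (-1.0 / slope) - 1  # ~1.0 means perfect doubling each cycle
    print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")

    # Estimate copies in an unknown (e.g. genomic DNA) sample from its Cq,
    # assuming it amplifies with the same efficiency as the plasmid standard
    unknown_cq = 23.1
    estimated_copies = 10 ** ((unknown_cq - intercept) / slope)
    print(f"estimated copies: {estimated_copies:.2e}")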
But no matter what I may do to try and convince myself I wasn't wrong, I was. I tried to reproduce the results under the conditions I thought they should be done, not the conditions they were intended for.
So if you are wrong, either make sure you do it reproducibly so you can challenge what is deemed right, or do so using conditions nobody expects ever to exactly reproduce. But know when to accept that it is you, not science, not the protocol, that is wrong.
Sunday, September 1, 2013
Science on the Free Market?
My mum's friend sent me this article discussing how science is moving away from fundamental research, and towards translational work, and asked for my opinion.
The article is titled "Should Science be for Sale?", which immediately made me think of some kind of sinister plan to buy people out of presenting the whole truth.
Although faking science is not limited to the former Soviet republics, the author points out that it has been getting worse there. I agree this is bad; when people build science on bad science, it may take years for its consequences on scientific theory to emerge. However, the author tries to blame this "evil" on the fact that people increasingly see science as a means to an end, rather than valuing it for its own holy sake. Yes, money can lead to greed and cheating the system, but it can also lead to healthy competition - a bit of a free market with the tax-paying public as customers - for doing the research that matters, and so the author's implied sentiment that applied research is filthy compared to basic science comes across as a bit simple-minded and snooty to me.*
But by no means do I think science is a holy palace either. Science has its traditions about what should and should not be published which don't always coincide with simple rules of following the scientific method. It is true that scientific knowledge is financially-driven - science journals are after all really just glorified tabloids looking for the biggest and best (fact-checked) stories to boost readership - and researchers need to do the work that will get them money from the governmental funding agencies that decide the country's research agenda.
While applied research may be more explicitly designed to facilitate progress towards these goals, and to fit in with where the government is funding, by no means does this mean that basic research is excluded from these funding calls. You just have to spin it a bit harder to make it sexy, and ultimately I think this is better for the researchers. Any taxpayer is entitled to know what areas of science his or her money is going to, and to ask scientists how they are attempting to make the world a better place. Taxpayers don't always have to understand exactly how this research will get to that end-point, but by forcing research proposals to consider broader impacts, it better prepares scientists to legitimize their work to their funders, and to think about how their work will ultimately contribute to society. It gives people a goal, and making coherent progress is difficult without one (not to mention checking the boxes on annual reports!)
I think that if this country is going to continue to succeed in science, ALL scientists will have to promote their research and give it credibility in the eyes of the public, who, whether as taxpayers or as private donors, determine its future. We absolutely need more basic research, but if you are up on your high, unapplied horse and refuse to even distantly relate it to a topic of public or private interest, don't whine when funding dries up.
* This attitude towards applied sciences is apparently even worse in maths than in (other?) sciences. Last year I lived with a mathematician who was complaining about lack of funding for his field, so I asked him what the end goal of his work was - how it could eventually be applied to physics or economics to improve knowledge of the world. He said that application was a no-go word, and that even thinking about it would lead to ostracism, so he couldn't tell me what he did or where his research was headed. It was as if he were bitter that he had to lower or sully himself with applied work, even though the taxpayer had funded grad school for him.