Hoping they’ll lose Pinterest

The people who introduced me to blogging were not scientists or academics. They were online friends I’d met through playing games. A few of them set up their first blogs in 2001, and I thought it looked fun, so I started one as well. It was on an archaic blogging platform that doesn’t exist anymore. B2? Greymatter? Whichever came first. It was more a diary than anything else, and the only people reading it were my friends.

When I first started thinking about expanding my blog to cover science, there weren’t many other science blogs. I’d been clicking around to see what was out there, and I remember seeing the blog that was later revealed to have been the science blog of the woman who moonlighted as a prostitute and who blogged about that elsewhere under the Belle de Jour pseudonym. There were really only about five science blogs back then. It was ages ago. The web was young.

Now I manage a professional science blog, where researchers sign up for a WordPress account and blog about their work. Scientists have taken up blogging as an almost natural thing, and I don’t mind that at all. Of course they would. It’s a medium. You can use it for anything you want. Pictures of cats. Science. It makes sense.

The people who introduced me to Twitter were not scientists. They were my techie friends in Toronto, who I knew via blogger meetups. “What is Twitter?” I asked in a pub one night, and my friend said “It’s like Facebook, if it only had status updates.”

Now I manage two Twitter accounts for work. They’re followed by Twitter accounts from other scientific publishers. I don’t mind that at all. It’s a good way of keeping in touch. Twitter has become its own medium. You can tweet about anything you want. Sandwiches. Science. It makes sense.
I joined Facebook so I could see a friend’s photos that she uploaded there. She’s not a scientist.
Now I manage a Facebook page for work. I link to the posts and job ads that scientists have posted on our blog. Scientific societies ‘like’ my status updates – or at least the people managing their page do. I don’t mind that at all. Almost everyone has a Facebook page now, and subscribing to professional updates is a convenient way for them to see all the news they need to know in one place. Family news. Science news. It makes sense.

But sometimes, certain internet-minded scientists, who so fervently jumped on blogging half a decade after it first started, go a teensy bit overboard in praising an online tool.
I heard about FriendFeed via science bloggers. None of my other friends ever used it.
I heard about Google Wave at a science blogging conference. None of my other friends ever used it or even heard of it.

I heard about Google Plus via science bloggers. A few of my other friends created a profile, but immediately abandoned it – like everyone else.

The people who introduced me to Pinterest were not scientists, admittedly, but this time it only took weeks, not years, for the first science/web-people to jump on the bandwagon. They were really excited about it. Probably the most excited I have ever seen a group of mostly men be about a website of mostly pictures of dresses. And the dreaded questions were asked: “How can we use this for science?”

You can’t, okay! Just leave it!

Not EVERYTHING on the internet has to be twisted and molded into some sort of vehicle for science communication. If it’s a good fit for such communication, like blogging or Twitter, it will happen. But if you try to force your professional research interests onto something that is so purposely modeled after scrapbooks and inspirational pinboards and NOT after anything remotely resembling the way you normally distribute or find scientific information, you are only going to be annoyed and disappointed. Disappointed with the way it functions. Disappointed with the restrictions it imposes.

Why do I care? I didn’t care that FriendFeed or Google Wave or Google+ never worked out, but now, whenever I see the same group of people who thought those tools were the next big thing get disproportionately excited about an online product, I fear that it will succumb to the same fate. And I do rather like scrapbooks and inspirational pinboards.

Academics may have invented the web, but not everything that’s on the web has to do with academics. Nobody is going to judge you if you just want to use a product for fun, so please stop trying to turn everything you like into work.

My only consolation is Instagram – a safe haven of food and pets. Until the first person sepia-filters their lab notes and considers it a medium for research dissemination, that is.


Would I eat that?

It’s something I rarely talk about, but this year is my 10th anniversary of being vegetarian. I don’t know exactly when, because it was a very gradual process. I started slowly phasing out meat from my diet in the late nineties, but lapsed in early 2001, when I was staying in Quebec for four months. Soon after I got back to Holland, though, foot-and-mouth disease hit Europe.
During the 2001 foot-and-mouth disease epidemic, hundreds of thousands of cows were killed in Holland alone. The news showed images of piles of dead cows lying on barricaded farms. Many of them were healthy cows, who were just killed to stop the spread of the disease.
I wasn’t the only one to give up meat entirely that year. After foot-and-mouth disease, followed by an outbreak of swine fever that same year, the sale of meat replacements in Holland increased dramatically. That was probably the only positive economic effect. A large number of farming families lost their business after being forced to have their animals killed, and across Europe the epidemic cost billions.
I stopped eating meat because seeing piles of dead cows on the news made me realize how they are not treated like animals, but like objects. I do still eat fish once in a while, because they don’t have the same “aww” factor and because they are swimming freely until they’re caught, and not squished in the tiniest possible spaces. Other vegetarians have other reasons for not eating meat. Some think it’s healthier, others are concerned about greenhouse gasses, and a few just don’t like meat.
But I love the smell of barbecue.
Contrary to what some people believe about vegetarians, I don’t dislike meat. I love it. The crispy skin on a chicken leg, the juicy inside of a steak that’s just right. Bacon. I just choose to not eat any of those things anymore, because I don’t agree with the way chickens, cows, and pigs are kept and killed just so we can enjoy their meat.
That moral decision occasionally brings up the hypothetical question of whether I would eat test tube meat. I don’t know. Would I? Ethically, yes. None of my arguments for denying myself meat apply to test tube meat. Okay, there is a source animal somewhere from which the starting cells have to be taken, but that is no different from the many cell biology experiments I did in the lab. If I can do tissue culture work – and I have done a lot of that – then I can eat test tube meat.
But test tubes and petri dishes make me think of research, not of food. I am picturing meat soaked in DMEM. Would I eat that? I don’t know.
Until very recently, it didn’t matter. It was just a hypothetical question, but now test tube meat has become a reality. Mark Post of the University of Maastricht has been optimizing the process of growing meat in the lab, and he will unveil the first lab-grown burger later this year.
Meat in a petri dish. (Image: Mark Post, Maastricht University.)
The research leading up to it has cost about £200,000, reports the Guardian, and was funded by an anonymous individual donor. It’s a lot for a burger, as the newspaper rightly points out, but it’s a reasonable amount of money for a research project. And if you compare it to the billions that foot-and-mouth disease cost, it’s a bargain.
“If lab-grown meat mimics farmed meat perfectly – and Post admits it may not – the meat could become a premium product just as free range and organic items have.
He said that in conversations with the Dutch Society of Vegetarians, the chairman estimated half its members would start to eat meat if he could guarantee that it cost fewer animal lives.”
Half would, half wouldn’t, and I’m still on the fence. Would I eat lab-grown meat? Would you?

Book Review – Geek Nation

I’ve never been to India, but I did change planes at the airport in Delhi on my way to Kathmandu this summer. Delhi airport looks like every other major international airport, with a Body Shop, WH Smith, and Costa Coffee. The shops accepted American dollars, but gave change in rupees. Since even airport coffee does not cost $20, I had quite a few rupees left.
Two weeks later, I changed planes at Delhi again. It still looked exactly like every major international airport, but after Tibet and Nepal that suddenly felt a lot like stepping into a shiny, bright, technological future. The airport’s WH Smith had dedicated one of their main display tables, close to the entrance, to science and technology – clearly a popular topic. It was only apt to spend my remaining rupees on the book that looks at India’s obsession with science and technology: Geek Nation by Angela Saini.

Geek Nation follows along with Angela as she travels across her parents’ home country to find out why so many Indians, including her dad and herself, are geeks. She interviews teenage geniuses, IT moguls, geneticists, physicists, neuroscientists, tech entrepreneurs and others to look for common ground, or an explanation of this cultural mass interest in science and technology.
Many interview fragments are included verbatim, in the original vernacular, and that threw me off a bit at first. “Now the worker who is working on my field I cannot pay him more than 3,000 rupees a month” – to grab an example from a random page. But it’s how they speak, and using the direct quotes, without “fixing” the English, helps to make the book come alive.
The geeks introduced in the book are a different breed of geek than the ones we’re used to. They’re not all-round geeks, with geeky hobbies and creative pastimes, but they’re very focussed on their careers. Students at the prestigious Indian Institute of Technology are driven by the pressure of academic success and the promise of a good job after graduation. They barely have time off – let alone hobbies.
Besides schools, labs, and space centres, Angela also visits a city that sounds like it comes straight from a science fiction novel: Lavasa, a remote city-in-progress that will, when it’s finished, be governed entirely by technology. It sounds ambitious, but what Geek Nation shows is that if there’s any country that can pull off building a city with a full electronic governance system in place, it’s India.
The full extent of India’s collective fascination with science becomes clear at the end of the book, when Angela visits the Indian Science Congress. It’s a massive event, attended by thousands of people from every field of science and beyond, where the line between quackery and science seems a bit blurry at times, and where kids get lost in the crowded audience.
Does Geek Nation give an explanation for the interest in science? It doesn’t offer one single explanation, but I don’t think we could expect it to. If the collective penchant for geekery among Indian people were something that could be explained in a few lines, it wouldn’t be the interesting phenomenon that it is. What the book does is provide a snapshot of multiple factors: technology is important because it can support agriculture, it can provide work, and a bright student can drag a family out of poverty.
As Angela writes in the final chapter: “Scientific progress (…) [is] about improving the lives of ordinary people using innovative technology. On that front, while it may not be publishing as many journal articles as other superpowers, India is having an impact far beyond the surface statistics.”
The main message the book brings home is that, in India, science is just part of life.
The paperback edition of Geek Nation just came out last week, making my review of a book I bought in August suddenly timely again! You don’t have to go all the way to the WH Smith at Delhi airport to get it, either.

Make history, not vitamin C

(This post was previously hosted on my old blog at http://blogs.nature.com/eva and is published in print in The Best Science Writing Online 2012)

“Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”
- Edward Lorenz

This is a story about a tiny molecular shift affecting war, politics, disease, agriculture and international corporations. Like all good stories, it also contains a healthy dose of biochemistry and genetics, some pirates and a few rodents of unusual size. The very start – the event that set everything in motion – is a genetic mutation that happened millions of years ago, but we’ll get to that. First, let’s meet the pirates.

The pirates in this story are Dutch pirates, and they were active near the end of the 16th century. During this time, the Netherlands were occupied by Spain, and after a period of repression, the Northern (Protestant) provinces started to fight off the Spanish. They were most successful on water. From 1568 onward, several ships received government permission to attack and plunder Spanish ships. These “watergeuzen” dominated at sea, and in 1572 they captured the city of Brielle, marking a turning point in the Eighty Years’ War.

Meanwhile, a large part of the income for the Spanish side of the war came from trade with the East Indies. The European supply of pepper was solely provided by Portuguese fleets, and the trading post in Lisbon was no longer easily accessible to the Dutch while they were at war with Spain. Pepper was extremely valuable in those days, and the Portuguese kept their routes secret to make sure nobody else would cash in on the spice. But eventually, Dutch ships found a route to the East Indies. They sailed south, all the way around Africa, and returned with enough spices to finally make some money.

VOC ship off the coast of South Africa

Finding a trade route to the East Indies led to the formation of the Dutch East India Company (VOC) in 1602 – the first multinational corporation, and the first company to sell stocks. The company did more than buy and sell spices, though. For several years, it had a monopoly on colonial activities in Asia, and it had the power to take prisoners and establish colonies. During its existence, the VOC boosted the economy of the Netherlands to the top of the world. This period of economic growth is referred to as the “Golden Century” in Dutch history.

Money may not buy happiness, but the sudden wealth of the country certainly formed the perfect environment to nurture artistic endeavours and encourage major scientific progress. These were the century and country in which Rembrandt painted the Night Watch and Antonie van Leeuwenhoek developed the microscopes with which he first observed single-celled organisms. The effects of the VOC trade have shaped entire fields of art and science, all because a few ships found a route to the East Indies in a time of economic need.

There was just one problem with the VOC trade route to the East Indies: It was quite long.

Scientific progress notwithstanding, there was no suitable way to keep the crew’s food, especially fresh fruit and vegetables, from going bad before they were even halfway there. This was a problem, because without fresh fruit, the crew was prone to scurvy. Scurvy had been the scourge of sea travellers since the 15th century, when ships started to sail across oceans and stayed away from home – and fruit – for too long. Starting with some spots on the skin, scurvy can progress to bleeding from mucous membranes, ulcers, seeping wounds, loss of teeth, and eventually death. 15th-century explorers could lose up to 80% of their crew to scurvy. The solution was known and simple: eat lots of fresh fruit.

Scurvy is caused by a lack of ascorbic acid – better known as vitamin C. Our bodies use this vitamin for many metabolic processes, such as producing collagen or repairing tissue damage. Without vitamin C, we essentially slowly start to fall apart – skin breaks open, wounds won’t heal, teeth fall out.

But we humans are one of the few animals that need to eat fruit and vegetables to keep our vitamin C levels up. Most animals are quite capable of synthesising their own vitamin C. Most, but not all. We share our need for fruit and veggies with other primates, including closely related apes, but also monkeys and tarsiers. Our inability to synthesize vitamin C is the result of a mutation that occurred more than forty million years ago in our shared primate ancestor, affecting the gene that encodes the L-gulonolactone oxidase (GULO) enzyme. Normally, this enzyme catalyses a crucial step in the formation of vitamin C. But in humans and related primates the genetic mutation produces a broken enzyme. It doesn’t work, and we can’t make our own vitamin C anymore. Luckily, it’s quite easy to compensate for the lack of GULO by simply taking in vitamin C via our diets, but this also meant that there was no selective pressure to restore a functional GULO, and we primates have been living with a broken version ever since.

The relative ease with which animals can compensate for no longer producing their own vitamin C is illustrated by the fact that the mutation that disabled our GULO enzyme millions of years ago was not the only mutation in the animal kingdom to shut down vitamin C biosynthesis. It happened at least three other times: bats, guinea pigs, and sparrows also have defective GULO enzymes and get vitamin C via their diets. The mutation in the guinea pig’s ancestor happened more recently than ours – possibly “only” about 20 million years ago, but that is still far enough back to also have affected another member of the Caviidae family: the capybara also needs a steady diet of vitamin C to keep a hold on its title of largest living rodent on earth. Especially in captivity these R.O.U.S. (rodents of unusual size) are, like the sailors and pirates of yore, at risk of scurvy unless they eat enough fresh vegetables.


Speaking of fresh vegetables – how were the VOC crew going to manage the journey to the East Indies, which took longer than the expiration date on their perishables? The ideal solution was to restock along the way, but the continent of Africa was not exactly a farmers’ market where you could just get some more fruit and veg when you needed it. Well then, they would just have to make a farmers’ market. The VOC took several Dutch farmers and settled them in South Africa to grow food for the ships passing by along the trade route. The restocked ships could then sail on with a scurvy-free crew.


The VOC’s Commander of the Cape, Jan van Riebeeck, founding the first Dutch colony in South Africa on April 6, 1652.

If the VOC crew had been able to make their own vitamin C, like most animals do, they wouldn’t have had to bring farmers to South Africa. That move, guided by a mutation that happened millions of years ago, entirely shaped the more recent history of South Africa. How? Here’s a hint: The Dutch word for farmer is “boer”.

The Boer population of South Africa were the direct descendants of the farmers relocated there to supply the VOC ships with the fruit and vegetables for their voyage to and from the East Indies. After the VOC was disbanded and British colonials settled in South Africa, the Boer population moved away from the Cape. Conflicts between the Boers and the British Empire, most notably the Second Anglo-Boer War at the end of the 19th century, directly led to the formation of the Union of South Africa in 1910, which was the predecessor of the current-day Republic of South Africa.

So there you have it. In a scene set by pirates, and with R.O.U.S. lurking in the background, an entire country, with all its political and cultural complications, was formed as a result of a method to distribute fruit and vegetables to the crew of 17th and 18th century trade ships to compensate for a genetic mutation that makes humans incapable of synthesising their own vitamin C.

Our broken GULO enzyme may not have been able to make vitamin C for millions of years, but it’s made history all right.

The Etymology of SciBarCamb

By far the most frequently asked question about SciBarCamb (or about SciBarCamp with a p) is where the name comes from, and the answer is long. So long, that it takes several minutes to answer in person, and several paragraphs to answer in writing. To avoid ever having to type it again, here is the written answer. Grab a cup of coffee, and let me tell you a story of geek culture history.
The “Bar” in “SciBarCamb”, “SciBarCamp”, or any kind of “barcamp” is historical, and has its roots in computer programming. In 2003, computer technology publishing company O’Reilly Media hosted a meeting for 400 of their friends. These “Friends Of O’Reilly” (FOO) met on a weekend in October of that year to “share their work-in-progress, show off the latest tech toys and hardware hacks, and tackle challenging problems together.”
Rather than inviting speakers and creating a program beforehand, they merely provided the participants with space and internet access, and let them generate their own ad hoc meeting. They let people sleep in the building as well, so it was much more like a camp than like a conference, hence the name Foo Camp.
Now, there were many computer programmers who were not amongst the 400 invited to Foo Camp, or who were invited once but not the year after, and they were a bit jealous of the fun event they missed out on. But nothing was stopping people from organizing a similar gathering themselves, and that is exactly what happened in 2005, when a group of people spontaneously organized an event directly modeled after Foo Camp. The main difference was that instead of inviting people, they let anyone who was interested join them. This meeting was called BarCamp, in a nod to the phrase foobar, used in programming as a placeholder name in coding examples.
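(For non-programmers: “foo” and “bar” are the traditional throwaway names you use when the real names don’t matter. A made-up illustration, not from any real codebase:)

```python
# "foo" and "bar" are conventional placeholder names in coding
# examples, used when the actual identifiers are beside the point.
def foo(bar):
    """A throwaway example function: double whatever it's given."""
    return bar * 2

print(foo(21))  # prints 42
```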
Now the great thing about BarCamp was that the model was there for anyone to use. Anyone can organize a meeting without a program and call it a BarCamp. Since that first BarCamp, there have been BarCamps all over the world, and about any topic you can think of – not just programming. There are KnitCamps, CupCakeCamps, BarCamps about urban planning, photography, social change – and they’re all organized by anyone who one day just thought “it would be cool to have a meeting about topic X.”
Meanwhile, Foo Camp also started diversifying. In 2006, they held the first Science Foo Camp in collaboration with Nature and Google. Like the regular Foo Camp, this event was invitation-only, but this time, it was all about science, and the participants included not just programmers, but also scientists, publishers, bloggers, and media professionals. The first year, the event ran under the Chatham House Rule, which meant that not a lot of information about it got out, and it went a bit under the radar. The second year, in 2007, the event was much more open: a lot of science bloggers attended, who wrote all about the event and shared pictures, and this is the year the jealousy kicked in for many of the non-attendees.
And so, driven by the same little green monster that led to BarCamp, an open version of SciFoo was born. Imitation is the sincerest form of flattery, so this copy of SciFoo was named along the pattern of other themed BarCamps: SciBarCamp.
The Cambridge version of SciBarCamp has its roots not only in SciBarCamp, but also in the Cambridge edition of the regular (tech-focused) BarCamp, which is called BarCamb.
Here’s a figure summarizing all this:
And that is why SciBarCamb is called SciBarCamb.

A metaphor for science and technology

Forget about art and science – this century has a whole new two world problem.
It’s been nagging me for a while – at science online conferences (both the London and North Carolina varieties), in talks with lab mates, at work at a scientific publisher, and hanging out with technology-oriented geeks in my spare time. There’s a gap between science and technology, and it’s growing.
Were we to take some opinions on the street, vox-pop-style, about perceived “two worlds” between science and other fields, I’m sure many would still point to a supposed divide between the arts and sciences. That may be what it seems like – from books and TV, from high school memories – but if you’re in art or science, this supposed divide is so well-bridged that you don’t even notice the chasm when you cross it.
Yet what none of our hypothetical vox-poppers would say is that there is such a thing as a divide between science and technology. On the contrary, they always see them together. “Science and Technology” share newspaper sections, website pages, and ticky-boxes on “occupation” fields in surveys. Science brought us technology, so surely they go hand in hand?
It’s true, they used to go well together, and in certain fields of research they still do, but apart from areas like computer science or bioinformatics, there is no correlation between people who like to use computers and people who like to do research.
The thing is: scientists are just like normal people. You’ll find that a small group of them is hugely interested in blogging, just like a small group of the overall population is. Another (perhaps overlapping) group is over the moon about new web tools they can try out in the lab, just like there’s a small group of early adopters in the general population. But by and large, many scientists hate new things.
It is this audience of print-reading, references-in-Word-typing, Facebook-avoiding researchers that we are trying to get to download new reference managers, upload their data for their competitors, and while they’re at it, write a blog post or two.
It’s scary for me to sit in a seminar that teaches publishers and scientists about social media – things I’ve picked up on the go, without anyone teaching me – and it’s frustrating to see enthusiastic digital natives pitch the next new tool to reluctant researchers. I’ve seen both. I’m kind of in between the two worlds, and they really are two worlds.
Where art and science have many bridges (a love for high resolution microscopy, excitement about data from outer space, and a common struggle to get funded – to name a few) science and technology have little to go on. They share a past, but they’ve moved in their own direction.
Here’s a metaphor: Science and Technology used to sit next to each other in elementary school, but throughout high school and college Technology got really popular and famous, and Science never changed much. Now when they meet once in a while, to catch up over coffee, Technology still acts like they’re as close friends as they were when they were ten, but Science doesn’t even know what Technology is talking about when he says things like “widget”, really does not think he needs any of the things Technology seems to be trying to give him, and regularly glances under the table at the minute hand on his watch, to make sure he gets back to the lab in time. “Man, Technology really changed”, thinks Science, “and he hasn’t even asked me how I am…”
I’m friends with both of them, and it’s getting more and more difficult to find common ground for these guys. And the hypothetical vox-pop interviewees from before? They just remember Science and Technology from when they were all in elementary school together: they were the two nerds, always sitting next to each other at the front. Surely they’re still in touch? They were always so close…

The biggest petri dish in the universe

I admit, I just clicked the link on the BBC news site because it said something about “beer”, but it turned out to not be about the drink, but a town. Bill Bryson was right – English towns really do have the strangest names…
“Bacteria taken from cliffs at Beer on the South Coast have shown themselves to be hardy space travellers,” the article started. “The bugs were put on the exterior of the space station to see how they would cope in the hostile conditions that exist above the Earth’s atmosphere.”
And cope they did. A year and a half in space, subject to temperature extremes and all kinds of radiation, but they made it. The bacteria now live under slightly less harsh conditions at the Open University in Milton Keynes, where researchers will poke and prod them to see which genes were involved in their survival. (My money’s on the family of heat shock proteins.)
Well, that’s nice. But I also find it a bit worrying. If space is treated as a giant petri dish – which it was for this particular experiment – shouldn’t we be more concerned about the fact that we’re contaminating the very system that’s being studied? Were the bacteria well-contained? Who says there aren’t now some British sea cliff bugs propagating on a meteorite on a collision course to Mars?
When, in several years’ time, we find our much sought after “Life on Mars”, don’t pop the champagne cork just yet: I bet it will turn out to be nothing but the contamination from one of our previous space experiments….

“Google ‘panspermia’” twittered Richard, when I voiced a similar concern in far fewer characters elsewhere. I didn’t need to, because I’ve heard of the idea that life on earth resulted from outer space. The difference is: that is something that (probably) happened and which we still need to study further to be sure about. Putting bacteria on the outside of the space station is something that is being done now (fine, two years ago), on purpose, even though we’re well aware of the risk of sample contamination. Yes, we’ve been going into space for half a century, but always blindly assumed (or at least tried to convince ourselves) that we were just looking around, and not leaving traces behind. This experiment was done with the purpose of exposing bacteria to non-Earth conditions and seeing what would survive. Like a giant survival screen, in space.
So, yes, it worries me that we’re basically purposely contaminating a system of which we don’t yet know what was even in there to begin with.
Richard says: “it’s people like you wot get in the way of progress”
I say: “google ‘ethics’”


I had the strangest dream last night.
I don’t remember all of it, but there was a skeleton of a dinosaur with a fake nose and flamingos on it, and I was walking underneath a space ship with some people, and later a journalist poured me wine in a water glass, and there was a robot, and fossils of crocodiles, and the former technology adviser to Obama was playing viola with the director of the New York Hall of Science strumming accompaniments on guitar at an indoor campsite, while in the background aluminium plates were flying off a magnetic field and people were discussing the multiverse and human flight, and someone built an ancient clock out of LEGO, and all the food was free, and people wrote haikus on a whiteboard, and stuck giant post-it notes on another board, and I saw what chords look like in four dimensions and heard the sounds that atoms make, and some of my friends were there too, but never in the same place all the time – I’d just run into them here and there, and we’d talk, and then strange things would happen again.
There was much more, but I don’t remember it all. I just wrote down some notes, before I forget, because that’s the thing with dreams – if you don’t write them down immediately, you can’t remember them the next day.

Now run along, and don’t get into mischief

It’s Ada Lovelace Day, on which we’re encouraged to blog about women in science and technology. Having been a woman in fields from chemistry to biology, I’ve never felt out of place at all. My undergrad chemistry department was almost 50% female, and in 1998 I was one of five girls on a seven-person chemistry student executive board. (Incidentally, the initials of the five girls were H, A, R, E, and M.) I know you’re going to say that it’s different for students, and that the number of female chemists drops after PhD level. Maybe that’s true in chemistry, still, but the tides are most certainly turning in areas of biology.
Yesterday I was updating a list of contacts at work, and discovered that more than half of the developmental biology societies that are currently active in different countries are chaired by women. To be the chair or president of the developmental biology society in your country, you have to be active and respected in your field, and in the UK, Hong Kong, Israel, Germany, France, Portugal, and Australia/NZ this position is now held by a woman.
“Oh, sure”, you mutter, “but that’s biology, that’s girly, and you don’t know what it’s like in engineering/physics/math”.
However male-dominated some fields may still be today, I think it’s worth looking at how far other fields have come. Even though biology is doing quite well now, it wasn’t always as accessible to women.
In 1897 George Massee presented a paper to the Linnean Society, called “On the Germination of the Spores of the Agaricineae”. He hadn’t written the paper himself, but its author was not allowed to present it in person, because she was a woman, and women were not allowed at society meetings at that time. The paper was withdrawn, meaning it wouldn’t be published unless significant changes were made.
Whether it really was a result of not being able to attend meetings in person (making it harder to explain the work to the society than it would have been for a male author), or simply that the required revisions weren’t worth all her spare time, isn’t known. But it is certainly likely that if she had lived a century later, Beatrix Potter would have been better supported in her career as a scientist. As it was, she didn’t finish all the required experiments to resubmit her paper, and instead gained more and more acclaim as a children’s book author and illustrator.
Beatrix Potter could draw scientifically accurate lichens as well as fluffy bunnies, but it was the latter that was more acceptable for women a century ago. Today, she would have been encouraged to study, to submit papers, attend meetings.
Some fields are perhaps further behind than biology, or maybe there aren’t as many women trying to get in, like Potter did in botany, but even the most male-dominated fields let women attend their meetings these days, so nobody has to revert to becoming a famous illustrator as a backup plan.
[Potter was also the first person to ever patent a character for distribution as a toy. And now the Peter Rabbit images are all copyrighted and I can’t show them here, but above’s a picture of a young Beatrix Potter holding a dog.]

Drums and Neuroscience

I have another interview up on the Musicians and Scientists site, and it’s yet again a Nature Network blogger, although I had him on my list because of something he wrote pre-NN. (I tried to also get a few non-bloggers in before I left the Toronto time zone, but that didn’t work out.)
Enjoy the interview with Ian Brooks over on the mus/sci blog. Did he really go to Pennsylvania because he was on the run from Interpol? Find out here!
Meanwhile, I want to make an (audio) compilation of the interviews with musicians/scientists I’ve done so far, to emphasize some trends that I’m already seeing, but I want more balance in my sample and need to interview more women, classical musicians, and non-biologist scientists. I have a list of names already, so I’ll get to that at some point.