It is not possible for us to see molecules. Well, of course we can see molecules because they are what makes up all matter. But a single molecule – that is a different story. If I asked you ‘why can’t you see a single molecule’, you’d probably reply ‘because it is too small’. And this is kinda correct. But the complete answer is a bit more complex. Think about bacteria, for example. They’re also too small to be seen by the naked eye, but with a microscope we can see them. But a single molecule is still too small to be seen with a microscope. Why is that, and where does the limit lie? And even more interesting: if we actually can’t see a molecule with optical methods, then how do we know their structure anyway? These are the questions that I want to dive into today, so turn off that music and give me your full attention.
Back to the microscope. In principle, a microscope is an arrangement of lenses that allows us to zoom into small structures. You can all of a sudden see the cells in an onion with one, or fine structures in fish scales, or something like that. In my field of work, we mainly use them to look at cells in a petri dish. These cells are, depending on their origin, between 1 and 100 micrometers in size (bacteria are about 1 µm, human cells rather between 20 and 100 µm). The molecules that I am talking about are more in the range of about a nanometer, so more than a thousand times smaller than a bacterium. And of course, we don’t just want to see them as dots, but we want to know how the atoms are connected within that molecule. So, the resolution we want to have is somewhere at 0.1 nanometers (which we call 1 Ångström).
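Just to put those length scales side by side (the numbers below are the rough values from the text, not precise measurements):

```python
# Rough length scales, in metres
bacterium = 1e-6           # ~1 micrometre
human_cell = 50e-6         # somewhere between 20 and 100 micrometres
small_molecule = 1e-9      # ~1 nanometre
target_resolution = 1e-10  # 0.1 nm = 1 Angstrom

# A small molecule is about a thousand times smaller than a bacterium
print(round(bacterium / small_molecule))      # ~1000
# And our target resolution is ten times smaller still
print(round(small_molecule / target_resolution))  # ~10
```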
The resolution of optical microscopy is limited
Better resolution comes with better instrumentation on the microscope, like lenses and stuff. But where’s the limit here? Well, people started to ask this question soon after they started building microscopes, and they actually found an answer. As it turned out, the resolution limit for an optical microscope is wavelength-dependent, and in math speech, it is

d = λ / (2 · NA)
where λ (lambda) is the wavelength of the light used, and NA is the numerical aperture, some technical parameter of the microscope. Let’s just set it to 1 and ignore it (oldest trick in physics, still works). So if we use visible light at its shortest wavelength, blue, we have 400 nanometers, so the resolution limit would be at 200 nanometers. And this is pretty much unavoidable. With an optical microscope, we can only resolve structures that are larger than 200 nanometers, and that’s just enough to see the outline of a cell. Period.
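To make the numbers concrete, here’s a tiny sketch of that formula (the function name is mine, and the NA of 1.4 is just a typical value for a good oil-immersion objective):

```python
def abbe_limit(wavelength_nm, numerical_aperture=1.0):
    """Smallest resolvable distance d = wavelength / (2 * NA), in nanometres."""
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit(400))       # blue light, NA = 1 -> 200.0 nm
print(abbe_limit(400, 1.4))  # oil-immersion objective -> ~143 nm
```

So even with very good optics, we only shave off another few dozen nanometers; the wavelength of visible light stays the bottleneck.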
That can’t be the entire story. There must be more to it…
This is actually a bit disappointing. We can’t really be satisfied with a resolution of 200 nanometers. The fine structure of so many things would forever stay unknown to us. But, if you have paid attention, there are several questions arising from this, all leading us in different directions. Now you might think:
Did we actually never crack the resolution limit? I can barely believe that. I’ve been seeing amazing pictures of a cell’s interior and also, wasn’t there a Nobel Prize about this recently?
You’re completely right. The scientific community has made huge advances in that field, known as super-resolution microscopy. The Nobel Prize in Chemistry 2014 went to techniques that brought the resolution limit down to roughly 10 nanometers. However, there is one catch. To get great resolution, people are not doing brightfield microscopy anymore. Instead, they are using fluorescent molecules that emit light of a certain colour and leave the rest dark. This way, they can track specific molecules in the cell, even in 3 dimensions. What people usually do is use fluorescent molecules of different colours to stain different structures in the cell. This way, we get colourful high-resolution pictures that show exactly what we want to see, and not just everything. Without the fluorescent markers, it would also be impossible to even see single proteins, since they are too small. But by attaching a light-emitter to them, we can actually spot them.
Okay, the resolution is half the wavelength. If we reduce the wavelength, we increase the resolution, right? Let’s just take x-rays and use some kind of x-ray camera or something instead of our eye. Wouldn’t that work?
Good thinking, have a cookie. This is in fact a great approach. X-rays have the right wavelength to resolve these tiny molecular structures. The issue with them is only that they’re not so easy to handle. We can’t really build lenses for them and focus them as easily as visible light. So in the end, x-ray microscopy is still very limited with its resolution. We reach about 50 nanometers in optimal conditions.
But smart people came up with smart things to also get around that. Quick recap on quantum mechanics. Particle-wave duality: our friends from the quantum world behave like both a particle and a wave. So a photon is a light particle and at the same time a wave (if you don’t know what I am talking about, check out this video by minutephysics). The wavelength of the light depends on the energy of the photon: the smaller the wavelength, the higher the energy. The same goes for electrons. They, too, are a particle and a wave at the same time. Does this mean we can just use electrons to image our sample? Well, I wouldn’t be writing about it if it didn’t work. I am not familiar with all the technical details here, but hell yeah it works! Cryogenic electron microscopy even won the Nobel Prize in Chemistry 2017. Atomic resolution can not quite be achieved yet, but it gives great pictures of larger biomolecules in quite good detail. And of course, electron microscopy has been used for all kinds of pictures, from mite faces to bacterial cells, for ages already. It was just recently that the method became usable for biomolecules and their structure as well, which is truly fantastic.
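Why electrons are such good probes becomes clear from the de Broglie relation λ = h/p. Here’s a back-of-the-envelope sketch (non-relativistic, so only indicative at low voltages; real electron microscopes run at tens of kilovolts, where relativistic corrections kick in):

```python
import math

# Physical constants (SI units, CODATA values)
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
Q_E = 1.602176634e-19   # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron
    accelerated through the given voltage, in metres."""
    momentum = math.sqrt(2 * M_E * Q_E * volts)  # p = sqrt(2*m*E)
    return H / momentum

# Even a modest 100 V already gives a wavelength around 1.2 Angstrom --
# thousands of times shorter than visible light.
print(de_broglie_wavelength(100))  # ~1.23e-10 m
```

So the wavelength is no longer the problem; the engineering around lenses, vacuum and (for biomolecules) freezing the sample is.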
If this is actually the resolution limit, how did we ever discover the structure of molecules? And I know for a fact that we did solve them, because they are all over my old schoolbooks, Wikipedia and my nerdy friend’s whiteboard.
Right again, we know the structures of basically all the molecules (large biomolecules excluded). But none of them thanks to microscopy (except large biomolecules, but that’s a different story). Back in the day, old-school chemists had to use super complicated and tedious methods to figure out what they had in their test tubes. Today, we have much more advanced and beautiful methods. Most of them work by shining some kind of electromagnetic radiation onto the sample and extracting information about atomic mass, chemical composition, bond arrangement, electron distribution, nuclei environments, and so on. The weird thing is that in the end, we never get a picture from these experiments. We need to assemble the information, like in the Sunday crossword puzzle, until we have a structure that fits the data. Chemists nowadays totally rely on these methods, and they have also given us the structures of natural compounds, like antibiotics, proteins from our cells and so on.
A New Hope and End Credits
If you think about it, it’s actually pretty cool that there is so much in our world that we can never see and still, we know what it is made from. Many things are still too small, and we have to reach deep into the toolbox of physics to study them. But microscopy (with light, but also with x-rays and electrons) has made huge steps in the last decade, and I am really curious about what we will soon be able to actually visualise that we have had to study by other means before.
To be completely fair with you, I have been simplifying a lot here. The question of why we can’t see single molecules has been bugging me for a long time. Don’t take me for stupid, I knew about the resolution limit and so on, but there is actually much more that I have not mentioned here. There is, for example, scattering. In the end, light scattering is the only reason why we can actually see things. If matter absorbed all light and scattered nothing, the world would be pitch black.
At university, I was taught that objects scatter waves if the object is roughly in the size range of the wavelength. And since visible light has a much larger wavelength than molecular structures (400 – 800 nanometers versus < 1 nanometer), we can’t see those. Rough edges on surfaces, however, fit that condition quite well. And we see fog because the water droplets have reached a size in the range of visible light, so they start to scatter light and we can’t see through the water in the air anymore like we usually can. This model is challenged if you think of the sky and the fact that it is blue because blue light is scattered by tiny nitrogen molecules. Even after endless discussions with fellow scientists and literature research, I did not get to a definite solution. I only know that scattering is very complicated, somehow wavelength- and size-dependent, and that my brain hurts. If you can give me an intelligible explanation, leave me a comment, email me, send me a message on social media or send me a raven. I really want to know, so don’t hold anything back.
On that note, see you next time!
Music to feel a bit epic, because you deserve it: A New Hope and End Credits. If you play the Star Wars soundtrack in the Spotify desktop version, the progress bar turns into a lightsaber.