Additional Information

Additional online resources for fluorescence microscopy and image analysis, including general information and tools, are available through the following links and supplementary reading. Please feel free to contact us with any other references to share.

Useful Links/Documents

Reference Tools

Educational Resources

Image Analysis

Supplementary Reading

Widefield Microscopy:

Widefield microscopy is any technique that illuminates the entire field of view simultaneously and collects the resulting image on a camera. Widefield techniques can be further classified as either 'epi' or 'trans', depending on whether the illumination and signal collection occur on the same (epi) or opposite (trans) sides of the sample. Trans-illumination techniques require a lens on each side of the sample: one termed the condenser, which focuses the illuminating light, and one termed the objective, which forms the image. In epi-illumination techniques, a single lens acts as both the condenser and objective. Widefield techniques are highly sensitive and therefore capable of high frame rates (100+ fps) as well as very long-term time-lapse imaging. However, these techniques also collect signal from all depths in the sample, so out-of-focus light can greatly reduce image contrast.

Brightfield Microscopy

Brightfield is the oldest of all microscopy techniques. The image is produced via absorption of light as it passes through dense areas of the sample. It is ideal for specimens with intrinsic color or that have been stained. Advantages: a simple and cost-effective setup, and living cells can be observed. Disadvantages: very low contrast for most biological specimens, low apparent image resolution due to blur from out-of-focus features, and unstained or transparent samples cannot be seen well.

Darkfield Microscopy

This illumination technique is used to enhance contrast in unstained samples by excluding unscattered (direct) light from the image. An opaque light stop inserted beneath the condenser aperture diaphragm creates a hollow cone of light, allowing only oblique rays to pass. Specimens with smooth reflective surfaces or a refractive index (RI) different from that of the surrounding medium scatter this oblique light; small angular changes in direction allow some of it to enter the objective and form an image. As a result, the specimen appears bright while the field around it remains dark. It is ideal for unstained, non-light-absorbing specimens. Advantages: simple but effective, high sensitivity, and images are free of artifacts. Disadvantages: the sample must be strongly illuminated, which may cause photodamage; scattered light in thick specimens lowers the contrast of fine details; poor depth of field; images must be interpreted carefully (features may be invisible or rendered inaccurately); and the requirement that the condenser NA exceed the objective NA (NAcond > NAobj) can lead to poor resolution.

Phase Contrast Microscopy

An illumination technique that converts phase shifts in light passing through a sample into brightness changes in the image. It generates an image based on abrupt changes in a sample's RI, a measure of 'optical density'. These optical edges cause light to diffract (bend) in many directions, where the amount of bending depends on the degree and abruptness of the RI change. Put simply, phase contrast measures how much light is bent at each location in the sample relative to how much light was not bent. Physically, this comparison occurs via induced interference between the diffracted and undiffracted light. Phase contrast is created by translating minute differences in phase into corresponding changes in amplitude. When light passes through a 'phase object' (specimen), it becomes out of phase by roughly ¼ wavelength with the unscattered light. Separation of direct light from diffracted light at the objective's rear focal plane is achieved using a ring annulus and phase plate. 'Speeding up' the direct light by ¼ wavelength yields destructive interference. It is ideal for unstained and transparent specimens. Advantages: excellent contrast for unstained specimens, and it can be combined with other imaging methods such as fluorescence. Disadvantages: the condenser annulus still limits the working NA, thereby reducing resolution; the phase ring attenuates low-angle diffracted light, producing 'halos' (artifacts) at specimen edges; and it is not appropriate for thick specimens.
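
As a rough illustration of this interference logic, the short sketch below (relative amplitudes are assumed and attenuation by the phase plate is ignored) shows why a phase object that is nearly invisible on its own becomes dark once the direct light is advanced by a quarter wavelength.

# A simplified sketch (relative amplitudes assumed, phase-plate attenuation
# ignored) of positive phase contrast. Light diffracted by a phase object lags
# the direct (surround) light by ~1/4 wavelength; advancing the direct light by
# another 1/4 wavelength with the phase plate makes the two waves ~1/2
# wavelength apart, so they interfere destructively and the specimen darkens.
import numpy as np

a_direct, a_diffracted = 1.0, 0.3      # assumed relative amplitudes
specimen_lag = np.pi / 2               # 1/4-wavelength retardation by the specimen

def image_intensity(plate_advance):
    direct = a_direct * np.exp(1j * plate_advance)
    diffracted = a_diffracted * np.exp(-1j * specimen_lag)
    return abs(direct + diffracted) ** 2

print(f"background (direct light only):       {a_direct**2:.2f}")
print(f"specimen, no phase plate:             {image_intensity(0.0):.2f}")
print(f"specimen, direct light advanced 1/4:  {image_intensity(np.pi / 2):.2f}")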

Differential Interference Contrast (DIC) Microscopy

Like phase contrast, DIC generates an image based on changes in the sample's RI, but it uses interferometry to gain information about the optical path length of the sample and so reveal otherwise invisible features. Plane-polarized light interacting with the sample produces two individual wave components. A Nomarski prism splits the light into two rays vibrating perpendicular to each other; they travel parallel and close together with a slight path difference (shear). The wave paths are altered by sample thickness, slope, and RI. A second prism recombines the beams and removes the shear. If the beams are not in phase, interference at the analyzer passes light with differences in intensity and color. DIC also creates a distinctive shadow effect that gives a pseudo-3D relief appearance; however, this is not an indicator of actual topographical structure but rather reflects the direction of the offset between the two intermediate images. It is ideal for live and unstained biological specimens, and it typically offers the best contrast among standard optical microscopy techniques. Advantages: a higher system NA (no substage annulus) results in better resolution, image quality is excellent and nearly free of artifacts, outlines and details are more visible and rendered in color, and most objectives are compatible. Disadvantages: DIC equipment can be expensive, features oriented parallel to the shear direction show poor contrast so the sample may need to be manually reoriented, and plastic specimen carriers are not suitable.

Epifluorescence Microscopy

Epifluorescence microscopy is an extremely popular way to visualize fluorescent probes in biological samples. As the name suggests, epifluorescence employs epi-illumination, so a single lens (the objective) both illuminates the sample and collects the fluorescence emissions. Filters restrict the excitation light to a small range of wavelengths suitable for exciting fluorophores in the sample, while other filters select the longer-wavelength fluorescence emissions before they reach the camera. A major drawback of epifluorescence microscopy is out-of-focus background emission, or 'flare', which greatly degrades image contrast (and therefore resolution).

Point Scanning Microscopy:

A main drawback of widefield microscopy is that out-of-focus light greatly degrades image contrast and thus effective resolution. The ability to remove out-of-focus light is a tremendous imaging advantage, and many technologies have been developed to create optical sections. Point scanning techniques illuminate only one diffraction-limited spot (~200 nm diameter) in the focal plane, while rotating mirrors sweep this spot back and forth in a raster pattern across the sample to create an image. Light emitted (or reflected) from each point then travels back through the objective and mirrors before reaching a detector.

Laser Scanning Confocal Microscopy

Because the illuminating spot of light is created by focusing, the light spreads into a cone above and below the focal plane. Thus, out-of-focus regions of the sample are illuminated and contribute (unwanted) emissions, though their intensity is less than in widefield mode. A second effect completes the optical section: before reaching the detector, all fluorescence emissions pass through a very small aperture located in a conjugate image plane (i.e., a location outside the sample where light from the sample is also in focus). In short, the focal plane within the sample and the pinhole outside the sample are 'confocal'. Thus, only emissions from the focal plane can pass through the pinhole, while out-of-focus (i.e., spatially diffuse) emission cannot fit through the aperture. Together, the combination of reduced out-of-focus illumination and blocking of out-of-focus emissions produces a crisp optical section.
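
To give a sense of how small "very small" is in practice, the sketch below (all values assumed for illustration) estimates the pinhole diameter corresponding to 1 Airy unit, the diameter of the Airy disk projected onto the conjugate image plane where the pinhole sits.

# A back-of-the-envelope sketch (all values assumed for illustration) of the
# confocal pinhole size: 1 Airy unit (AU) is the Airy-disk diameter,
# 1.22 * lambda / NA, projected onto the conjugate image plane at the pinhole.
emission_wavelength_nm = 520       # assumed green emission
objective_na = 1.4                 # assumed oil-immersion objective
magnification_at_pinhole = 63      # assumed total magnification at the pinhole plane

airy_diameter_nm = 1.22 * emission_wavelength_nm / objective_na
pinhole_diameter_um = airy_diameter_nm * magnification_at_pinhole / 1000.0

print(f"1 AU in the sample plane:     {airy_diameter_nm:.0f} nm")
print(f"1 AU pinhole at the detector: {pinhole_diameter_um:.1f} um")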

Spinning Disk Confocal Microscopy

This technique uses a series of moving pinholes on a disk to scan spots of light across the sample. Because many pinholes scan the field in parallel, each point can be illuminated for a longer dwell time at lower intensity, reducing both the acquisition time and the excitation intensity needed to illuminate the sample when compared with laser scanning microscopes. The reduced excitation intensity lowers phototoxicity and photobleaching, often making spinning disk systems the preferred choice for imaging live cells or organisms. Micro-lenses can also be placed before the pinhole disk; each pinhole has an associated microlens that gathers more of the incoming illumination and focuses it into the pinhole, thereby improving sensitivity.

Multi-Photon Microscopy

Like confocal, multi-photon microscopy also creates an optical section, but it does so using an entirely different mechanism. Multi-photon microscopy relies on very short (~100 fs) but very intense bursts of light to induce an effect called multi-photon absorption. In the most likely scenario, two photons (of lower energy and longer wavelength) interact with a dye's electron simultaneously to trigger fluorescence emission. Such events are highly improbable and occur only at a spot within the focal plane where the light is most concentrated. The result is that fluorescence is generated (and detected) only within a small focal volume, so a pinhole is no longer required. Furthermore, two-photon excitation microscopy is better suited for tissue, which scatters visible light increasingly with imaging depth. This is because tissue is more transparent to infrared light within the 'biological window', permitting penetration of up to 1 mm.
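
The confinement to the focal volume follows from the intensity-squared dependence of two-photon absorption. The short sketch below (a simple Gaussian-beam model with assumed wavelength and beam waist) compares the total excitation generated in each plane along the optical axis: for one-photon excitation it is constant, whereas for two-photon excitation it falls off rapidly away from focus.

# A minimal numerical sketch (Gaussian-beam model, parameters assumed) of why
# two-photon excitation is confined to the focal plane: one-photon excitation
# scales with intensity I, so the area-integrated signal per z-plane is
# constant, whereas two-photon excitation scales with I^2 and drops off
# sharply with defocus.
import numpy as np

wavelength_um = 0.9          # assumed near-infrared excitation
w0_um = 0.4                  # assumed beam waist at the focus
z_rayleigh = np.pi * w0_um**2 / wavelength_um

z_um = np.linspace(-5, 5, 11)                        # axial positions around the focus
w_z = w0_um * np.sqrt(1 + (z_um / z_rayleigh)**2)    # beam radius vs. defocus

one_photon = np.ones_like(z_um)      # integral of I over each plane is independent of z
two_photon = (w0_um / w_z)**2        # integral of I^2 over each plane scales as 1 / w(z)^2

for z, s1, s2 in zip(z_um, one_photon, two_photon):
    print(f"z = {z:+5.1f} um   1P (relative): {s1:4.2f}   2P (relative): {s2:5.3f}")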

Fluorescence Lifetime Imaging Microscopy (FLIM)

An imaging technique that uses the fluorescence lifetimes of fluorophores, rather than their intensities and emission spectra, to generate additional contrast. Lifetime refers to the average time a fluorescent molecule remains in the excited state before transitioning down to the ground state and emitting a photon. The corresponding image is based on differences in the exponential decay rates measured across the fluorescent sample. Because lifetime is sensitive to the local micro-environment but not to signal strength, FLIM avoids the erroneous results that affect intensity-based measurements, which depend on fluorophore concentration, sample absorption and thickness, photobleaching, and intensity changes from illumination or background sources. It can be used in combination with confocal and multi-photon microscopy and is ideal for functional imaging as well as for samples with multiple spectrally-overlapping and/or weakly-fluorescent dyes.
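
As a minimal illustration of the underlying measurement, the sketch below (synthetic photon counts and a single-exponential decay are assumed) fits I(t) = I0 * exp(-t / tau) to a simulated decay histogram to recover the lifetime tau, as would be done for each pixel of a time-domain FLIM image.

# Minimal sketch (synthetic data, single-exponential decay assumed) of lifetime
# extraction in time-domain FLIM: record a decay histogram after pulsed
# excitation and fit I(t) = I0 * exp(-t / tau).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
true_tau_ns = 2.5                                   # assumed lifetime
t_ns = np.linspace(0, 12.5, 64)                     # assumed time bins
ideal = 1000.0 * np.exp(-t_ns / true_tau_ns)
counts = rng.poisson(ideal)                         # photon-counting (shot) noise

def decay(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

(amp_fit, tau_fit), _ = curve_fit(decay, t_ns, counts, p0=(counts[0], 1.0))
print(f"fitted lifetime: {tau_fit:.2f} ns (ground truth {true_tau_ns} ns)")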

Total Internal Reflection Fluorescence (TIRF) Microscopy:

This technique uses an evanescent wave to selectively illuminate and excite fluorophores in a restricted region of the specimen immediately adjacent to the glass-water interface. The evanescent electromagnetic field decays exponentially from the interface and thus penetrates only approximately 100 nm into the sample medium. By eliminating background fluorescence from molecules diffusing freely beyond this thin layer, true diffraction-limited imaging is achieved. Thus, TIRF enables selective visualization of surface regions such as the basal plasma membrane of cells.
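
The quoted ~100 nm depth follows directly from the evanescent-field decay. The sketch below (typical refractive indices, wavelength, and incidence angles are assumed) evaluates the standard expression d = lambda / (4*pi) * (n1^2 * sin^2(theta) - n2^2)^(-1/2), which is valid only beyond the critical angle.

# A short sketch (typical values assumed) of the TIRF evanescent-field
# penetration depth; total internal reflection requires the incidence angle to
# exceed the critical angle theta_c = arcsin(n2 / n1).
import numpy as np

wavelength_nm = 488       # assumed excitation wavelength
n_glass = 1.518           # coverslip / immersion oil
n_sample = 1.33           # aqueous sample medium

theta_c = np.degrees(np.arcsin(n_sample / n_glass))
for theta_deg in (62, 65, 70):                      # assumed incidence angles > theta_c
    theta = np.radians(theta_deg)
    d_nm = wavelength_nm / (4 * np.pi) / np.sqrt((n_glass * np.sin(theta))**2 - n_sample**2)
    print(f"theta = {theta_deg} deg (critical angle {theta_c:.1f} deg): depth ~{d_nm:.0f} nm")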

Light Sheet Fluorescence Microscopy (LSFM):

LSFM, or Selective Plane Illumination Microscopy (SPIM), illuminates a thin section (usually a few hundred nanometers to several microns thick) of the sample. Unlike TIRF, the position of the light sheet relative to the sample is not fixed. The sheet is formed perpendicular to the direction of observation, either by focusing a laser through a cylindrical lens or by rapidly scanning a beam focused by a low-NA objective to produce a virtual light sheet. This method greatly reduces photodamage and stress on a live sample while achieving good optical sectioning. Compared with point scanning techniques, LSFM can acquire images 100-1000 times faster. While a Gaussian beam is commonly used, Lattice Light Sheet Microscopy (LLSM) achieves superior axial resolution by employing an array of Bessel beams that destructively interfere with each other to limit the contribution of side lobes from any individual beam. Furthermore, an ideal 2D lattice is non-diffracting, so it can propagate indefinitely and has 'self-healing' properties.
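
For a conventional Gaussian-beam light sheet there is a basic trade-off between sheet thickness and the distance over which the sheet stays thin, which is part of the motivation for Bessel-beam lattices. The sketch below (an idealized Gaussian beam with assumed wavelength and waists) illustrates that trade-off via the confocal parameter.

# A rough sketch (Gaussian-beam sheet assumed) of the light-sheet trade-off: a
# thinner sheet (waist w0) gives better sectioning but stays thin only over a
# shorter distance (confocal parameter 2 * z_R), limiting the usable field of view.
import numpy as np

wavelength_um = 0.488                      # assumed excitation wavelength
for w0_um in (0.5, 1.0, 2.0, 4.0):         # assumed sheet half-thicknesses at the waist
    z_rayleigh = np.pi * w0_um**2 / wavelength_um
    print(f"waist {w0_um:3.1f} um -> sheet thickness {2*w0_um:4.1f} um, "
          f"usable length ~{2*z_rayleigh:6.1f} um")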

Super Resolution Microscopy:

In conventional light microscopy, an optical system is considered diffraction-limited if its resolution is limited only by the diffraction of light. The theoretical limit, referred to as the Abbe limit or Rayleigh criterion, is governed by the numerical aperture of the objective and the wavelength of the fluorescence emission, and at best is approximately 200 nm laterally and 600 nm axially. This defines the corresponding point spread function (PSF), or spot size, of a fluorophore, such that two molecules cannot be distinguished if separated by less than this distance. Super resolution refers to multiple techniques designed to acquire images with higher resolution than that imposed by the diffraction limit. While some methods modestly improve resolution by a factor of two, other classes based on deterministic (e.g., STED) and stochastic (e.g., SMLM) approaches can achieve nanometer precision.
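
As a quick back-of-the-envelope check of the lateral figure quoted above, the sketch below (green emission and a high-NA oil objective are assumed) evaluates the Abbe limit lambda / (2 * NA) and the Rayleigh criterion 0.61 * lambda / NA.

# A back-of-the-envelope sketch (typical values assumed) of the lateral
# diffraction limit.
emission_wavelength_nm = 520   # assumed green emission
numerical_aperture = 1.4       # assumed high-NA oil-immersion objective

abbe_nm = emission_wavelength_nm / (2 * numerical_aperture)
rayleigh_nm = 0.61 * emission_wavelength_nm / numerical_aperture

print(f"Abbe limit:         ~{abbe_nm:.0f} nm")
print(f"Rayleigh criterion: ~{rayleigh_nm:.0f} nm")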

Single Molecule Localization Microscopy (SMLM)

These are a class of super resolution imaging techniques that use the sequential activation and time-resolved localization of photoswitchable fluorophores to create high resolution images. During imaging, only an optically resolvable subset of fluorophores is activated to a fluorescent state at any given moment. By switching on stochastically sparse subsets of fluorophores with light of a specific wavelength, individual molecules can then be excited and imaged according to their spectra. To avoid an accumulation of active fluorophores in the sample, which would eventually degrade the data back to a diffraction-limited image, Photoactivated Localization Microscopy (PALM) exploits the spontaneously occurring phenomenon of photobleaching, whereas Stochastic Optical Reconstruction Microscopy (STORM) exploits reversible switching between a fluorescent on-state and a dark off-state of a dye. From each image of a single molecule, the centroid position can be localized with much higher precision by statistically fitting an ideal Gaussian to its measured photon distribution. To date, the spatial resolution achieved is ~20 nm in the lateral dimensions and ~50 nm in the axial dimension; however, the integration time needed to reconstruct a super resolution (SR) image can be quite long.
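
The reason a single emitter can be pinned down so precisely is that the uncertainty of the fitted center scales roughly as sigma_PSF / sqrt(N), where N is the number of detected photons. The sketch below (an idealized, background-free case with an assumed PSF width) makes that scaling concrete.

# A simplified sketch (idealized, background-free case assumed) of single-
# molecule localization precision: roughly sigma_PSF / sqrt(N) for N detected
# photons.
import numpy as np

sigma_psf_nm = 250 / 2.355        # assumed PSF FWHM of ~250 nm -> Gaussian sigma
for photons in (100, 1000, 10000):
    precision_nm = sigma_psf_nm / np.sqrt(photons)
    print(f"{photons:6d} photons -> localization precision ~{precision_nm:5.1f} nm")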

Stimulated Emission Depletion (STED) Microscopy

STED creates SR images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point and thus enhancing the achievable resolution for a given system. It functions deterministically by depleting fluorescence in specific regions of the sample while leaving a central focal spot active to emit fluorescence. This focal area can be engineered by altering the properties of the pupil plane of the objective lens. The most common early example of these diffractive optical elements, or DOEs, is a torus (doughnut) shape used for 2D lateral confinement. The lateral resolution is typically between 30 and 80 nm, while the axial resolution is on the order of 100 nm. For STED to work effectively, the destructive interference at the center of the focal spot must be highly accurate, which imposes certain constraints on the fluorescent dyes and optics that can be used. Moreover, as a point scanning technique, the integration time needed to generate an SR image can also be quite long.
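
The resolution range quoted above follows from the commonly cited STED scaling law, in which the confocal diffraction limit shrinks with the ratio of depletion intensity to the dye's saturation intensity. The sketch below (wavelength, NA, and saturation factors assumed) evaluates d ~ lambda / (2 * NA * sqrt(1 + I / I_sat)).

# A hedged sketch (typical values assumed) of the STED resolution scaling law.
import numpy as np

wavelength_nm = 640            # assumed excitation wavelength
numerical_aperture = 1.4       # assumed oil-immersion objective

for saturation_factor in (0, 10, 50, 100):        # I / I_sat (0 = plain confocal)
    d_nm = wavelength_nm / (2 * numerical_aperture * np.sqrt(1 + saturation_factor))
    print(f"I/I_sat = {saturation_factor:3d} -> resolution ~{d_nm:4.0f} nm")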

Structured Illumination Microscopy (SIM)

SIM utilizes the moiré effect: by superimposing a known illumination pattern on the specimen from multiple directions, finer spatial frequencies emitted by the specimen are mixed into the observable passband and can later be extracted from Fourier transforms of the raw images. This is accomplished by inserting a movable diffraction grating into the excitation beam path. Low-order diffracted light interferes at the focal plane to create striped illumination, and its superposition with the sample (objects organized at high spatial frequency) generates a moiré pattern of lower frequency. To reconstruct the final SR image, several raw images must be collected, each acquired at a different orientation and phase of the structured illumination. The higher spatial frequency information contained in these images can then be extracted using software. The result is lateral resolution in the range of 100 nanometers and axial resolution approaching 300 nanometers.

Note: Do not confuse this modality with optical-sectioning (non-SR) SIM, where structured illumination is used to suppress out-of-focus light using only three phase-shifted sinusoidal illumination patterns (instead of 9-15) of relatively coarse stripes.
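
To see the frequency mixing at the heart of SR-SIM, the short sketch below (a 1-D toy example with assumed sample and stripe frequencies) multiplies a fine sample pattern by a coarser illumination pattern and shows that the product contains a strong beat (moiré) component at the difference frequency, which is how sample frequencies beyond the passband become measurable.

# A 1-D toy sketch (frequencies assumed) of the frequency mixing SIM relies on:
# multiplying a fine sample pattern by a coarser illumination pattern creates a
# beat (moire) component at the difference frequency k_sample - k_illum.
import numpy as np

x = np.linspace(0, 10, 2048)            # position (um)
k_sample = 6.0                          # assumed sample frequency (cycles/um)
k_illum = 4.5                           # assumed illumination stripe frequency (cycles/um)

sample = 1 + np.cos(2 * np.pi * k_sample * x)
illumination = 1 + np.cos(2 * np.pi * k_illum * x)
emission = sample * illumination        # fluorescence is proportional to the product

spectrum = np.abs(np.fft.rfft(emission - emission.mean()))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
beat = abs(k_sample - k_illum)
idx = int(np.argmin(np.abs(freqs - beat)))
print(f"beat frequency {beat:.1f} cycles/um: spectral amplitude {spectrum[idx]:.0f} "
      f"(median background {np.median(spectrum):.1f})")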

Pixel Reassignment

The effective PSF of a confocal microscope is the product of the illumination and detection PSFs. While the most probable position of a fluorophore lies within this narrow overlap, its signal is recorded at the position corresponding to the detector, which is generally displaced from that overlap, leading to a loss of resolution. The goal of pixel reassignment is to shift the measured signal back to where the effective PSF is located. Because a single molecule appears as a diffraction-limited spot when imaged, its emitted light is detected over multiple pixels. In a confocal microscope, a detection pinhole displaced with respect to the optical axis improves the probability of localizing an emitter within the narrow overlap between the illumination and detection PSFs. Consequently, a displaced pinhole (like peripheral pixels that lie off-center with respect to the emission PSF) contains a higher proportion of high spatial frequencies, albeit with significantly less signal. Since the exact shift of the effective PSF relative to the detection PSF is known, the signal can be shifted back to where it belongs. When this is done collectively for all peripheral pixels or displaced pinhole positions, the process is called digital pixel reassignment, and it yields better SNR and resolution. The increased SNR in turn enables a much more effective deconvolution step, which follows pixel reassignment. For detector arrays such as Airyscan, each detector element acts as a separate pinhole with its own PSF, so the detection PSFs and images from each element can be treated individually. Properly weighting the image of each detector element using linear deconvolution assigns the frequencies to their correct locations, yielding resolution enhancements of up to a factor of two in both the lateral and axial dimensions.

In optical pixel reassignment, a microlens array doubles the convergence angle at which light passes through the pinhole, providing a 2x optical contraction of the individual foci. This mimics an ideal confocal microscope with an infinitesimally small pinhole, resulting in a further 1.4x lateral resolution improvement.
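
To make the "shift back by half the displacement" step concrete, the toy sketch below (synthetic 1-D data with equal Gaussian excitation and detection PSFs assumed) compares naively summing the sub-images from a row of detector elements against summing them after reassignment.

# A toy sketch (synthetic 1-D data, equal Gaussian excitation/detection PSFs
# assumed) of digital pixel reassignment: each off-axis detector element
# records a shifted, dimmer image of two point emitters; shifting every
# sub-image back by half its detector displacement before summing gives a
# sharper result than naively summing the raw sub-images.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)                    # scan coordinate (um)
dx = x[1] - x[0]
sigma = 0.1                                        # assumed PSF sigma (um)
emitters = (-0.25, 0.25)                           # two point emitters (um)
offsets = np.linspace(-0.25, 0.25, 11)             # assumed detector-element positions (um)

raw_sum = np.zeros_like(x)
reassigned = np.zeros_like(x)
for d in offsets:
    # Sub-image for this element: the effective PSF (product of excitation and
    # detection PSFs) is narrower, shifted by d/2, and weaker for larger |d|.
    weight = np.exp(-d**2 / (4 * sigma**2))
    sub = sum(weight * np.exp(-((x - (p + d / 2))**2) / sigma**2) for p in emitters)
    raw_sum += sub
    # Pixel reassignment: shift the sub-image back by half the detector offset.
    reassigned += np.roll(sub, -int(round((d / 2) / dx)))

def fwhm(profile):
    return np.count_nonzero(profile > profile.max() / 2) * dx

print(f"FWHM of one emitter, naive sum:  {fwhm(raw_sum[x < 0]):.3f} um")
print(f"FWHM of one emitter, reassigned: {fwhm(reassigned[x < 0]):.3f} um")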
