
The Airy Disk and Diffraction Limit

Authors: Greg Hollows, Nicholas James


This is Section 2.4 of the Imaging Resource Guide .

The Airy Disk

When light passes through any aperture (every lens has a finite aperture), diffraction occurs. The resulting diffraction pattern, a bright region in the center together with a series of concentric rings of decreasing intensity around it, is called the Airy disk (see Figure 1 ). The diameter of this pattern is related to the wavelength (λ) of the illuminating light and the size of the circular aperture, which is important since the Airy disk is the smallest point to which a beam of light can be focused. As focused Airy patterns from different object details approach one another, they begin to overlap (see Contrast ). When the overlapping patterns interfere enough to reduce contrast, they eventually become indistinguishable from each other. Figure 1 shows the difference in spot sizes between a lens set at f/2.8 and a lens set at f/8. This effect becomes more of an issue as pixels continue to shrink. The Airy disk $ \left( \varnothing_{\small{\text{Airy Disk}}} \right) $, or minimum spot size, can be estimated using the f/# and wavelength (λ):

$ \varnothing_{\small{\text{Airy Disk}}} \approx 2.44 \cdot \lambda \cdot \left( f/\# \right) $

Figure 1: Diffraction increases as the imaging lens iris is closed (f/# increases). The top lens is set at f/2.8; the bottom lens is at f/8.

Table 1 shows the Airy disk diameter for different f/#s using green light (520nm). The smallest achievable spot size can quickly exceed the size of small pixels. This makes it difficult to achieve a sensor's full resolution capacity with any usable level of contrast. Additionally, this does not consider any lens design limitations or manufacturing errors associated with the fabrication of lens elements or the optical assemblies, which can further reduce the smallest physically achievable spot and thus the achievable resolution and contrast.

f/#     Airy Disk Diameter [µm] at a Wavelength of 520nm
f/2     2.54
f/2.8   3.55
f/4     5.08
f/5.6   7.11
f/8     10.15
f/11    13.96
f/16    20.30

Table 1: The minimum spot size, or Airy disk, increases with f/# and can quickly surpass pixel size. See Table 1 in Resolution for sample pixel sizes.
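The values in Table 1 follow directly from the estimate above. As a quick check, a minimal Python sketch (the helper name is our own, and the f/# column of f/2 through f/16 is inferred from the listed diameters):

```python
# Airy disk diameter (minimum spot size): diameter ≈ 2.44 · λ · (f/#).
# A minimal sketch; airy_disk_diameter_um is our own helper name.

def airy_disk_diameter_um(f_number: float, wavelength_um: float = 0.520) -> float:
    """Approximate Airy disk diameter in micrometers (green light by default)."""
    return 2.44 * wavelength_um * f_number

# Reproduce Table 1 for 520 nm green light:
for f in (2, 2.8, 4, 5.6, 8, 11, 16):
    print(f"f/{f}: {airy_disk_diameter_um(f):.2f} µm")
```

At f/8, for example, the minimum spot already exceeds 10 µm, larger than many machine-vision pixels.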

Note: This is all theoretical and is the starting point for limitations in an optical system.

The Diffraction Limit

Every lens has an upper performance limit dictated by the laws of physics and the Airy disk, known as the diffraction limit. This limit is the theoretical maximum resolving power of the lens, given in line pairs per millimeter $ \left[ \small{\tfrac{\text{lp}}{\text{mm}}} \right]  $. A perfect lens, not limited by design, will still be diffraction limited.

This limit is the point where two Airy patterns are no longer distinguishable from each other ( Figure 2 in Contrast ). The diffraction-limited resolution, often referred to as the cutoff frequency of a lens, is calculated using the lens f/# and the wavelength of light. Learn more about f/# in f/# (Lens Iris/Aperture Setting) .
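The cutoff frequency is not written out in this section; for incoherent illumination it is commonly expressed as (a standard result; with λ in mm, this yields lp/mm):

$ \xi_{\small{\text{Cutoff}}} = \tfrac{1}{\lambda \cdot \left( f/\# \right)} $

For example, f/2 at 0.520µm gives 1/(0.00052mm × 2) ≈ 962 lp/mm, matching Table 2.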

When the diffraction limit is reached, the lens is incapable of resolving higher frequencies. Table 2 shows the diffraction limit for contrast at 0% at given f/#s. These numbers may seem large but are theoretical; several other factors must also be considered. As a rule, and because of inherent background noise, imaging sensors cannot reproduce information at or near 0% contrast. The contrast generally needs to be 10% or greater to be detected on standard imaging sensors. To avoid imaging complications, it is recommended to target 20% contrast or higher at the application-specific critical resolution. Additionally, lens aberrations and variations associated with manufacturing tolerances also reduce performance.

f/#     0% Contrast Limit $ \bf{ \left[ \small{\tfrac{\text{lp}}{\text{mm}}} \right]} $ @ 0.520µm
f/1.4   1374
f/2     962
f/2.8   687
f/4     481
f/5.6   343
f/8     240
f/11    175
f/16    120

Table 2: The diffraction limit calculated at different f/#s for 0.520μm light (green light).
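The cutoff values in Table 2 can be checked numerically; a small Python sketch (the function name is our own, and the f/# column of f/1.4 through f/16 is inferred from the listed limits):

```python
# Diffraction-limited cutoff frequency (0% contrast), in line pairs per mm.
# Standard incoherent cutoff: xi = 1 / (lambda * f/#), with lambda in mm.

def cutoff_lp_per_mm(f_number: float, wavelength_um: float = 0.520) -> float:
    wavelength_mm = wavelength_um * 1e-3
    return 1.0 / (wavelength_mm * f_number)

# Reproduce Table 2 at 0.520 µm (green light):
for f in (1.4, 2, 2.8, 4, 5.6, 8, 11, 16):
    print(f"f/{f}: {cutoff_lp_per_mm(f):.0f} lp/mm")
```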

Copyright 2023 , Edmund Optics Inc., 101 East Gloucester Pike, Barrington, NJ 08007-1380 USA


Wave Optics

Limits of Resolution: The Rayleigh Criterion

Learning Objectives

By the end of this section, you will be able to:

  • Discuss the Rayleigh criterion.

Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively. While this can be used as a spectroscopic tool—a diffraction grating disperses light according to wavelength, for example, and is used to produce spectra—diffraction also limits the detail we can obtain in images. Figure 1a shows the effect of passing light through a small circular aperture. Instead of a bright spot with sharp edges, a spot with a fuzzy edge surrounded by circles of light is obtained. This pattern is caused by diffraction similar to that produced by a single slit. Light from different parts of the circular aperture interferes constructively and destructively. The effect is most noticeable when the aperture is small, but the effect is there for large apertures, too.


Figure 1. (a) Monochromatic light passed through a small circular aperture produces this diffraction pattern. (b) Two point light sources that are close to one another produce overlapping images because of diffraction. (c) If they are closer together, they cannot be resolved or distinguished.

How does diffraction affect the detail that can be observed when light passes through an aperture? Figure 1b shows the diffraction pattern produced by two point light sources that are close to one another. The pattern is similar to that for a single point source, and it is just barely possible to tell that there are two light sources rather than one. If they were closer together, as in Figure 1c, we could not distinguish them, thus limiting the detail or resolution we can obtain. This limit is an inescapable consequence of the wave nature of light.

There are many situations in which diffraction limits the resolution. The acuity of our vision is limited because light passes through the pupil, the circular aperture of our eye. Be aware that the diffraction-like spreading of light is due to the limited diameter of a light beam, not the interaction with an aperture. Thus light passing through a lens with a diameter D shows this effect and spreads, blurring the image, just as light passing through an aperture of diameter D does. So diffraction limits the resolution of any system having a lens or mirror. Telescopes are also limited by diffraction, because of the finite diameter D of their primary mirror.

Take-Home Experiment: Resolution of the Eye

Draw two lines on a white sheet of paper (several mm apart). How far away can you be and still distinguish the two lines? What does this tell you about the size of the eye’s pupil? Can you be quantitative? (The size of an adult’s pupil is discussed in Physics of the Eye.)

Just what is the limit? To answer that question, consider the diffraction pattern for a circular aperture, which has a central maximum that is wider and brighter than the maxima surrounding it (similar to a slit) (see Figure 2a). It can be shown that, for a circular aperture of diameter D , the first minimum in the diffraction pattern occurs at [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] (providing the aperture is large compared with the wavelength of light, which is the case for most optical instruments). The accepted criterion for determining the diffraction limit to resolution based on this angle was developed by Lord Rayleigh in the 19th century. The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other . See Figure 2b. The first minimum is at an angle of [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], so that two point objects are just resolvable if they are separated by the angle

[latex]\displaystyle\theta=1.22\frac{\lambda}{D}\\[/latex],

where λ is the wavelength of light (or other electromagnetic radiation) and D is the diameter of the aperture, lens, mirror, etc., with which the two objects are observed. In this expression, θ has units of radians.
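The criterion is easy to evaluate for any aperture; a brief Python sketch (the helper name is our own):

```python
# Rayleigh criterion: minimum resolvable angular separation (radians)
# for a circular aperture of diameter D: theta = 1.22 * lambda / D.

def rayleigh_angle(wavelength_m: float, diameter_m: float) -> float:
    return 1.22 * wavelength_m / diameter_m

# Example: green light (550 nm) through a 3.00 mm pupil.
theta_eye = rayleigh_angle(550e-9, 3.00e-3)   # ≈ 2.24e-4 rad
```

Note how the angle shrinks as the aperture grows: large mirrors resolve finer detail.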


Figure 2. (a) Graph of intensity of the diffraction pattern for a circular aperture. Note that, similar to a single slit, the central maximum is wider and brighter than those to the sides. (b) Two point objects produce overlapping diffraction patterns. Shown here is the Rayleigh criterion for being just resolvable. The central maximum of one pattern lies on the first minimum of the other.

Making Connections: Limits to Knowledge

All attempts to observe the size and shape of objects are limited by the wavelength of the probe. Even the small wavelength of light prohibits exact precision. When probes of extremely small wavelength are used, as with an electron microscope, the system is disturbed, still limiting our knowledge, much as making an electrical measurement alters a circuit. Heisenberg’s uncertainty principle asserts that this limit is fundamental and inescapable, as we shall see in quantum mechanics.

Example 1. Calculating Diffraction Limits of the Hubble Space Telescope

The primary mirror of the orbiting Hubble Space Telescope has a diameter of 2.40 m. Being in orbit, this telescope avoids the degrading effects of atmospheric distortion on its resolution.

  • What is the angle between two just-resolvable point light sources (perhaps two stars)? Assume an average light wavelength of 550 nm.
  • If these two stars are at the 2 million light year distance of the Andromeda galaxy, how close together can they be and still be resolved? (A light year, or ly, is the distance light travels in 1 year.)

The Rayleigh criterion stated in the equation [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] gives the smallest possible angle θ between point sources, or the best obtainable resolution. Once this angle is found, the distance between stars can be calculated, since we are given how far away they are.

Solution for Part 1

The Rayleigh criterion for the minimum resolvable angle is [latex]\theta=1.22\frac{\lambda}{D}\\[/latex].

Entering known values gives

[latex]\begin{array}{lll}\theta&=&1.22\frac{550\times10^{-9}\text{ m}}{2.40\text{ m}}\\\text{ }&=&2.80\times10^{-7}\text{ rad}\end{array}\\[/latex]

Solution for Part 2

The distance s between two objects a distance r away and separated by an angle θ is s  =  rθ .

Substituting known values gives

[latex]\begin{array}{lll}s&=&\left(2.0\times10^6\text{ ly}\right)\left(2.80\times10^{-7}\text{ rad}\right)\\\text{ }&=&0.56\text{ ly}\end{array}\\[/latex]

The angle found in Part 1 is extraordinarily small (less than 1/50,000 of a degree), because the primary mirror is so large compared with the wavelength of light. As noted, diffraction effects are most noticeable when light interacts with objects having sizes on the order of the wavelength of light. However, the effect is still there, and there is a diffraction limit to what is observable. The actual resolution of the Hubble Telescope is not quite as good as that found here. As with all instruments, there are other effects, such as non-uniformities in mirrors or aberrations in lenses, that further limit resolution. However, Figure 3 gives an indication of the extent of the detail observable with the Hubble because of its size and quality, and especially because it is above the Earth’s atmosphere.
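Both parts of Example 1 can be reproduced in a few lines; a sketch of the arithmetic above:

```python
# Example 1: Hubble Space Telescope, D = 2.40 m, lambda = 550 nm.
theta = 1.22 * 550e-9 / 2.40     # minimum resolvable angle ≈ 2.80e-7 rad

# Part 2: linear separation at the distance of Andromeda, s = r * theta.
separation_ly = 2.0e6 * theta    # ≈ 0.56 light years
```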


Figure 3. These two photographs of the M82 galaxy give an idea of the observable detail using the Hubble Space Telescope compared with that using a ground-based telescope. (a) On the left is a ground-based image. (credit: Ricnun, Wikimedia Commons) (b) The photo on the right was captured by Hubble. (credit: NASA, ESA, and the Hubble Heritage Team (STScI/AURA))

The answer in Part 2 indicates that two stars separated by about half a light year can be resolved. The average distance between stars in a galaxy is on the order of 5 light years in the outer parts and about 1 light year near the galactic center. Therefore, the Hubble can resolve most of the individual stars in the Andromeda galaxy, even though it lies at such a huge distance that its light takes 2 million years to reach us. Figure 4 shows another mirror used to observe radio waves from outer space.


Figure 4. A 305-m-diameter natural bowl at Arecibo in Puerto Rico is lined with reflective material, making it into a radio telescope. It is the largest curved focusing dish in the world. Although D for Arecibo is much larger than for the Hubble Telescope, it detects much longer wavelength radiation and its diffraction limit is significantly poorer than Hubble’s. Arecibo is still very useful, because important information is carried by radio waves that is not carried by visible light. (credit: Tatyana Temirbulatova, Flickr)

Diffraction is not only a problem for optical instruments but also for the electromagnetic radiation itself. Any beam of light having a finite diameter D and a wavelength λ exhibits diffraction spreading. The beam spreads out with an angle θ given by the equation [latex]\theta=1.22\frac{\lambda}{D}\\[/latex]. Take, for example, a laser beam made of rays as parallel as possible (angles between rays as close to θ  = 0º as possible); instead of remaining parallel, it spreads out at an angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], where D is the diameter of the beam and λ is its wavelength. This spreading is impossible to observe for a flashlight, because its beam is not very parallel to start with. However, for long-distance transmission of laser beams or microwave signals, diffraction spreading can be significant (see Figure 5). To avoid this, we can increase D . This is done for laser light sent to the Moon to measure its distance from the Earth. The laser beam is expanded through a telescope to make D much larger and θ smaller.
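The flashlight-versus-laser comparison can be quantified with the same relation; a sketch using the 5.00 cm / 600 nm flashlight figures from the exercises, and an assumed 2.54 m telescope-expanded laser beam at 633 nm:

```python
# Diffraction spreading of a beam of diameter D: theta = 1.22 * lambda / D.

def spread_angle(wavelength_m: float, beam_diameter_m: float) -> float:
    return 1.22 * wavelength_m / beam_diameter_m

flashlight = spread_angle(600e-9, 0.0500)     # ≈ 1.46e-5 rad
expanded_laser = spread_angle(633e-9, 2.54)   # ≈ 3.04e-7 rad: far less spread
```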

Figure 5. A parabolic dish antenna of diameter D; diffraction spreads the transmitted beam at a minimum angle θ above and below the beam axis.

In Figure 5 we see that the beam produced by this microwave transmission antenna will spread out at a minimum angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex] due to diffraction. It is impossible to produce a near-parallel beam, because the beam has a limited diameter.

In most biology laboratories, resolution is presented when the use of the microscope is introduced. The ability of a lens to produce sharp images of two closely spaced point objects is called resolution. The smaller the distance x by which two objects can be separated and still be seen as distinct, the greater the resolution. The resolving power of a lens is defined as that distance x . An expression for resolving power is obtained from the Rayleigh criterion. In Figure 6a we have two point objects separated by a distance x . According to the Rayleigh criterion, resolution is possible when the minimum angular separation is

[latex]\displaystyle\theta=1.22\frac{\lambda}{D}=\frac{x}{d}\\[/latex]

where d is the distance between the specimen and the objective lens, and we have used the small angle approximation (i.e., we have assumed that x is much smaller than d ), so that tan θ ≈ sin θ  ≈ θ .

Therefore, the resolving power is

[latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}\\[/latex]
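Plugging representative numbers into this expression (the specimen distance and aperture below are illustrative assumptions, not values from the text):

```python
# Resolving power from the Rayleigh criterion: x = 1.22 * lambda * d / D.

def resolving_power(wavelength_m: float, d_m: float, D_m: float) -> float:
    return 1.22 * wavelength_m * d_m / D_m

# Illustrative: lambda = 550 nm, d = 2 mm, objective aperture D = 5 mm.
x = resolving_power(550e-9, 2e-3, 5e-3)   # ≈ 2.68e-7 m, i.e. ~0.27 µm
```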


Figure 6. (a) Two points separated by a distance x and positioned a distance d away from the objective. (credit: Infopro, Wikimedia Commons) (b) Terms and symbols used in discussion of resolving power for a lens and an object at point P. (credit: Infopro, Wikimedia Commons)

Another way to look at this is by re-examining the concept of Numerical Aperture ( NA ) discussed in Microscopes. There, NA  is a measure of the maximum acceptance angle at which a fiber will take light and still contain it within the fiber. Figure 6b shows a lens and an object at point P. The NA  here is a measure of the ability of the lens to gather light and resolve fine detail. The angle subtended by the lens at its focus is defined to be θ  = 2α. From the Figure and again using the small angle approximation, we can write

[latex]\displaystyle\sin\alpha=\frac{\frac{D}{2}}{d}=\frac{D}{2d}\\[/latex]

The NA for a lens is NA =  n sin  α , where n is the index of refraction of the medium between the objective lens and the object at point P.

From this definition for NA , we can see that

[latex]\displaystyle{x}=1.22\frac{\lambda{d}}{D}=1.22\frac{\lambda}{2\sin\alpha}=0.61\frac{\lambda{n}}{NA}\\[/latex]

In a microscope, NA  is important because it relates to the resolving power of a lens. A lens with a large NA  will be able to resolve finer details. Lenses with larger NA  will also be able to collect more light and so give a brighter image. Another way to describe this situation is that the larger the NA , the larger the cone of light that can be brought into the lens, and so more of the diffraction modes will be collected. Thus the microscope has more information to form a clear image, and so its resolving power will be higher.

One of the consequences of diffraction is that the focal point of a beam has a finite width and intensity distribution. Consider focusing as described by geometric optics alone, shown in Figure 7a. The focal point is infinitely small with a huge intensity and the capacity to incinerate most samples irrespective of the NA  of the objective lens. For wave optics, due to diffraction, the focal point spreads to become a focal spot (see Figure 7b), with the size of the spot decreasing with increasing NA . Consequently, the intensity in the focal spot increases with increasing NA . The higher the NA , the greater the chances of photodegrading the specimen. However, the spot never becomes a true point.


Figure 7. (a) In geometric optics, the focus is a point, but it is not physically possible to produce such a point because it implies infinite intensity. (b) In wave optics, the focus is an extended region.

Section Summary

  • Diffraction limits resolution.
  • For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other.
  • This occurs for two point objects separated by the angle [latex]\theta=1.22\frac{\lambda}{D}\\[/latex], where λ is the wavelength of light (or other electromagnetic radiation) and D  is the diameter of the aperture, lens, mirror, etc. This equation also gives the angular spreading of a source of light having a diameter D .

Conceptual Questions

  • A beam of light always spreads out. Why can a beam not be created with parallel rays to prevent spreading? Why can lenses, mirrors, or apertures not be used to correct the spreading?

Problems & Exercises

  • The 300-m-diameter Arecibo radio telescope pictured in Figure 4 detects radio waves with a 4.00 cm average wavelength. (a) What is the angle between two just-resolvable point sources for this telescope? (b) How close together could these point sources be at the 2 million light year distance of the Andromeda galaxy?
  • Assuming the angular resolution found for the Hubble Telescope in Example 1, what is the smallest detail that could be observed on the Moon?
  • Diffraction spreading for a flashlight is insignificant compared with other limitations in its optics, such as spherical aberrations in its mirror. To show this, calculate the minimum angular spreading of a flashlight beam that is originally 5.00 cm in diameter with an average wavelength of 600 nm.
  • (a) What is the minimum angular spread of a 633-nm wavelength He-Ne laser beam that is originally 1.00 mm in diameter? (b) If this laser is aimed at a mountain cliff 15.0 km away, how big will the illuminated spot be? (c) How big a spot would be illuminated on the Moon, neglecting atmospheric effects? (This might be done to hit a corner reflector to measure the round-trip time and, hence, distance.)
  • A telescope can be used to enlarge the diameter of a laser beam and limit diffraction spreading. The laser beam is sent through the telescope in the direction opposite to normal and can then be projected onto a satellite or the Moon. (a) If this is done with the Mount Wilson telescope, producing a 2.54-m-diameter beam of 633-nm light, what is the minimum angular spread of the beam? (b) Neglecting atmospheric effects, what is the size of the spot this beam would make on the Moon, assuming a lunar distance of 3.84 × 10⁸ m?
  • The limit to the eye’s acuity is actually related to diffraction by the pupil. (a) What is the angle between two just-resolvable points of light for a 3.00-mm-diameter pupil, assuming an average wavelength of 550 nm? (b) Take your result to be the practical limit for the eye. What is the greatest possible distance a car can be from you if you can resolve its two headlights, given they are 1.30 m apart? (c) What is the distance between two just-resolvable points held at an arm’s length (0.800 m) from your eye? (d) How does your answer to (c) compare to details you normally observe in everyday circumstances?
  • What is the minimum diameter mirror on a telescope that would allow you to see details as small as 5.00 km on the Moon some 384,000 km away? Assume an average wavelength of 550 nm for the light received.
  • You are told not to shoot until you see the whites of their eyes. If the eyes are separated by 6.5 cm and the diameter of your pupil is 5.0 mm, at what distance can you resolve the two eyes using light of wavelength 555 nm?
  • (a) The planet Pluto and its Moon Charon are separated by 19,600 km. Neglecting atmospheric effects, should the 5.08-m-diameter Mount Palomar telescope be able to resolve these bodies when they are 4.50 × 10⁹ km from Earth? Assume an average wavelength of 550 nm. (b) In actuality, it is just barely possible to discern that Pluto and Charon are separate bodies using an Earth-based telescope. What are the reasons for this?
  • The headlights of a car are 1.3 m apart. What is the maximum distance at which the eye can resolve these two headlights? Take the pupil diameter to be 0.40 cm.
  • When dots are placed on a page from a laser printer, they must be close enough so that you do not see the individual dots of ink. To do this, the separation of the dots must be less than Rayleigh’s criterion. Take the pupil of the eye to be 3.0 mm and the distance from the paper to the eye to be 35 cm; find the minimum separation of two dots such that they cannot be resolved. How many dots per inch (dpi) does this correspond to?
  • Unreasonable Results.  An amateur astronomer wants to build a telescope with a diffraction limit that will allow him to see if there are people on the moons of Jupiter. (a) What diameter mirror is needed to be able to see 1.00 m detail on a Jovian Moon at a distance of 7.50 × 10⁸ km from Earth? The wavelength of light averages 600 nm. (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent?
  • Construct Your Own Problem.  Consider diffraction limits for an electromagnetic wave interacting with a circular object. Construct a problem in which you calculate the limit of angular resolution with a device, using this circular object (such as a lens, mirror, or antenna) to make observations. Also calculate the limit to spatial resolution (such as the size of features observable on the Moon) for observations at a specific distance from the device. Among the things to be considered are the wavelength of electromagnetic radiation used, the size of the circular object, and the distance to the system or phenomenon being observed.

Rayleigh criterion:  two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other

Selected Solutions to Problems & Exercises

1. (a) 1.63 × 10⁻⁴ rad; (b) 326 ly

3. 1.46 × 10⁻⁵ rad

5. (a) 3.04 × 10⁻⁷ rad; (b) Diameter of 235 m

9. (a) Yes. Should easily be able to discern; (b) The fact that it is just barely possible to discern that these are separate bodies indicates the severity of atmospheric aberrations.

  • College Physics. Authored by : OpenStax College. Located at : http://cnx.org/contents/031da8d3-b525-429c-80cf-6c8ed997733a/College_Physics . License : CC BY: Attribution . License Terms : Located at License

Optica Publishing Group


Optics Express

pp. 12684-12694
https://doi.org/10.1364/OE.451114


Breaking the diffraction limit using fluorescence quantum coherence


Author Affiliations

Wenwen Li 1, 2 and Zhongyang Wang 1, *

1 Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China

2 School of Microelectronics, University of Chinese Academy of Sciences, Beijing 100049, China

* Corresponding author: [email protected]

  • Imaging Systems, Microscopy, and Displays
  • Coherence theory
  • Fluorescence microscopy
  • Light sources
  • Partial coherence
  • Photon statistics
  • Superresolution
  • Original Manuscript: December 13, 2021
  • Revised Manuscript: March 16, 2022
  • Manuscript Accepted: March 18, 2022
  • Published: March 31, 2022

Several theoretical studies have shown that the classical optical diffraction limit can be overcome by exploiting the quantum properties of light; however, they mostly rely on an entangled light source. Recent experiments have demonstrated that quantum properties are preserved in many fluorophores, which makes it possible to add a new dimension of information for super-resolution fluorescence imaging. Here, we developed a statistical quantum coherence model for fluorescence emitters and proposed a new super-resolution method using fluorescence quantum coherence in fluorescence microscopy. In this study, by exploiting a single-photon avalanche detector (SPAD) array with a time-correlated single-photon-counting technique to perform spatial-temporal photon statistics of fluorescence coherence, the subdiffraction-limited spatial separation of emitters is obtained from the determined coherence. We numerically demonstrate an example of two-photon interference from two common fluorophores using an achievable experimental procedure. Our model provides a bridge between the macroscopic partial coherence theory and the microscopic dephasing and spectral diffusion mechanics of emitters. By fully taking advantage of the spatial-temporal fluctuations of the emitted photons as well as coherence, our quantum-enhanced imaging method has significant potential to improve the resolution of fluorescence microscopy even when the detected signals are weak.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In classical linear optics, the resolution of far-field microscopy is limited by Rayleigh’s criterion ≃ 0.61λ/NA, where λ is the wavelength of light and NA is the numerical aperture of the objective lens. Breaking the resolution limit has always been a critical issue in the fields of life sciences, engineering, and materials science. In the last two decades, various super-resolution imaging methods have been developed, many of which utilize the photophysical properties of fluorophores. One branch of methods relies on the nonlinear fluorescence response arising from stimulated emission depletion (STED) [ 1 ] and the saturation effect in structured illumination microscopy (SSIM) [ 2 ]. Other methods stochastically localize single fluorescence emitters with variations in brightness caused by photoswitching or intrinsic blinking, such as stochastic optical reconstruction microscopy (STORM) [ 3 , 4 ] and photoactivated localization microscopy (PALM) [ 5 ]. This single-molecule localization method can improve the resolution to a lower bound on the root-mean-square position estimation error (shot-noise limit) of ≃ 1/ $\sqrt N $ ( N is the number of detected photons); the statistical resolution is determined by the precision of measurement [ 6 ].
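Both bounds quoted above are simple to evaluate; a minimal numerical sketch (the 670 nm wavelength and NA of 1.49 anticipate the NV-center example in Section 3; the PSF width used for the localization estimate is a rough assumption, not a derived quantity):

```python
# Rayleigh resolution limit and shot-noise-limited localization precision.
wavelength_nm = 670.0
NA = 1.49

# Rayleigh's criterion: minimum resolvable separation ~ 0.61 * lambda / NA
rayleigh_nm = 0.61 * wavelength_nm / NA

# Single-molecule localization: error ~ sigma / sqrt(N) for N photons
sigma_nm = rayleigh_nm / 2.0          # assumed PSF width
N_photons = 10_000
localization_nm = sigma_nm / N_photons ** 0.5

print(f"Rayleigh limit: {rayleigh_nm:.0f} nm")
print(f"localization precision (N={N_photons}): {localization_nm:.2f} nm")
```

With these illustrative numbers the diffraction limit sits near 274 nm, while localizing a single emitter with 10⁴ photons is precise to roughly a nanometer, which is the gap the localization methods above exploit.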

In quantum optics, the quantum properties of light have been successfully used to achieve resolution enhancement by using non-classically correlated photons to provide more information [ 7 , 8 ]. The pioneering proposal is based on entangled light sources, for which the entangled N -photon state between paths is exploited to generate interference fringes that are N times narrower than the classical limit, thus allowing a further resolution improvement at the Heisenberg bound ≃ 1/N [ 9 ]. A two-fold resolution enhancement can be achieved by using second-order coincidence detection to measure the quantum coherence of entangled photon pairs [ 10 , 11 ]. Based on this experimental scheme, the quantum-enhanced method has been effectively employed in super-resolved quantum lithography [ 12 , 13 ] and subshot-noise quantum imaging [ 14 , 15 ]. However, higher resolution has remained out of reach owing to the significant challenge of generating a large N -photon entangled state, and the low efficiency of coincidence detection [ 16 ] makes this method infeasible for practical applications.

For fluorescence microscopy, an alternative approach takes advantage of the quantum properties of the fluorescence emitted by the emitter rather than correlated light sources used as illumination. Each fluorescence emitter can be treated as a single-photon source; following excitation, it emits a single fluorescence photon via spontaneous emission. Hence, the photons tend to be emitted one by one rather than in bursts, exhibiting the non-classical feature of photon antibunching [ 17 , 18 ]. In particular, for an ideal single-photon source, the photons are perfectly indistinguishable in spectrum, lifetime, and polarization, which leads to a strong quantum coherence effect [ 19 ]. These intrinsic quantum properties of fluorescence, including antibunching and indistinguishability, have been observed in many fluorophores, such as organic dyes [ 20 – 22 ], quantum dots [ 23 – 25 ], and color centers in diamond [ 26 , 27 ]. Therefore, photon antibunching, one of these quantum properties, has recently been utilized to build a new super-resolution microscopy; the imaging resolution can be enhanced by a factor of $\sqrt {\textrm{M}} $ by using M SPAD detectors to perform M th-order photon correlation detection [ 28 – 30 ]. Experimental demonstrations have been performed in wide-field microscopy by spatial photon correlation detection [ 28 ] and in confocal microscopy by recording time-photon coincidence events [ 29 , 30 ].

For the other quantum property, photon indistinguishability, it remains uncertain whether fluorescence coherence can also be exploited to achieve higher resolution. The coherence of light has been analyzed in previous studies by Mandel and Wolf [ 17 ], as well as in the latest quantum estimation theory [ 31 – 33 ]; these discussions of macroscopic coherence cover a wide range of situations, including fully and partially in-phase or anti-phase coherent sources. However, there is no bridge between the macroscopic coherence theory and the microscopic radiation mechanics of emitters. A few theoretical studies have recently attempted to build a microscopic coherence model for the radiation of ensemble atoms and achieved a resolution improvement of λ/N by measuring the interference of N identical photons [ 34 – 36 ]. However, this method generally requires ideal single-photon sources, that is, perfectly indistinguishable emitted photons. For fluorescence microscopy, actual fluorophores remain far from the ideal case owing to the effects of dephasing and spectral diffusion. Therefore, whether quantum coherence truly enables subdiffraction resolution in practical fluorescence microscopy remains an open question.

Motivated by these considerations, in this study, we developed a statistical quantum coherence model arising from the spatiotemporal interference of spontaneously emitted photons from fluorophores in fluorescence microscopy. A practical method to break the diffraction limit is proposed by using a time-resolved single-photon counting camera, an SPAD array, to perform space- and time-resolved quantum coherence measurements, in which the spatial separation distance can be accurately extracted from the spatial coherence by introducing a well-defined time function rather than a stochastic time variable. In contrast to traditional fluorescence microscopy, in which the fluorescence emitter is considered fully incoherent, we employ photon indistinguishability as additional quantum information to improve the resolution. We also built a partial coherence model based on the inherent dephasing and spectral diffusion mechanics of actual fluorophores. For simplicity, we exemplify our method in the case of two single-photon emitters. By exploiting two-photon interference in a feasible experimental procedure in wide-field fluorescence microscopy, we demonstrate that this scheme enables molecular-scale spatial resolution.

2.1 Theoretical model

The spatiotemporal nature of a single-photon wave packet was quantitatively assessed based on the inherent radiation mechanics of fluorescence emitters. For a single-photon emitter, the spatiotemporal wave operator of a single photon emitted from the i th fluorescence emitter is given by the following: (1) $$\hat{E}_i^{+} ({r,t} )= {f_i}({r,t} )\,\hat{a}_i \mbox{ and } \hat{E}_i^{-} ({r,t} )= f_i^{\ast} ({r,t} )\,\hat{a}_i^{\dagger}$$ where f i (r,t) is the single-photon wave packet, and $\hat{a}_i^{\dagger}$ and $\hat{a}_i$ are the photon creation and annihilation operators, respectively. When r = 0 , Eq. ( 1 ) describes the time evolution of a single photon emitted from a fluorescence emitter, which strongly depends on the intrinsic transition and relaxation mechanics. Hence, a quantum model of a single emitter must first be constructed to quantify the relationship between the time-dependent wave operator and the radiation mechanics. Typically, a single fluorescence emitter can be considered a two-level system with ground and excited states. A single photon is emitted in the decay from the excited state to the ground state over the radiative lifetime of the excited state T 1 , normally a few nanoseconds, giving rise to a narrow spectral line with a width of 1/2πT 1 and a long, lifetime-limited coherence time T 2  = 2T 1 . In this case, perfectly indistinguishable photons are obtained. However, the linewidth increases steeply owing to perturbations from the environment, involving pure dephasing (PD) and spectral diffusion (SD). PD is mainly caused by phonon relaxation with emission frequency shifts over short interaction times, resulting in a homogeneous broadening of the emission line with a Lorentzian line shape. PD decreases the coherence time via 1/T 2  = 1/2T 1  + 1/ $T_2^{\ast}$ ( $T_2^{\ast}$ is the pure dephasing time), typically to several hundred picoseconds.
SD follows from slow time-dependent frequency fluctuations resulting from long interactions with a dynamically evolving environment, which leads to inhomogeneous broadening with Gaussian spectral lines. The coherence time is further shortened to $T_2 = -\Gamma_2/\Delta^2 + \sqrt{(\Gamma_2/\Delta^2)^2 + 2/\Delta^2}$, where $\Gamma_2/2\pi$ and $2\sqrt{2\ln 2}\,\Delta$ are the homogeneous and inhomogeneous linewidths, respectively [ 18 , 37 , 38 ]. Hence, both PD and SD limit the photon indistinguishability. The degree of indistinguishability, or coherence time, can be measured by two-photon interference in the time domain following the famous Hong–Ou–Mandel (HOM) experiment, which is critical for quantum technologies [ 39 , 40 ]. The HOM experiment provides the time-resolved coincidence count with PD and SD and a measurement of the coherence time T 2 . Recent experiments have observed excellent coherence in various fluorescence emitters, including organic dyes [ 21 , 22 ], quantum dots [ 24 , 25 ], and color centers in diamond [ 26 , 27 ], with coherence times (several hundred picoseconds to a few nanoseconds) longer than the time resolution of single-photon detectors (approximately tens of picoseconds), permitting time-resolved measurement of quantum interference phenomena.
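The three coherence-time regimes above (lifetime-limited, PD-shortened, SD-shortened) can be combined in one helper; a sketch under our reading of the text, where Γ 2 is taken to be the homogeneous dephasing rate 1/(2T 1 ) + 1/T 2 * (an assumption based on the definition of the homogeneous linewidth):

```python
import math

def coherence_time(T1, T2_star, Delta=0.0):
    """Coherence time of a fluorescence emitter (sketch of Sec. 2.1 formulas).

    T1      : radiative lifetime
    T2_star : pure-dephasing (PD) time
    Delta   : Gaussian spectral-diffusion (SD) width; Delta=0 -> PD only
    All times share one unit; Delta is in inverse time units.
    """
    Gamma2 = 1.0 / (2.0 * T1) + 1.0 / T2_star  # homogeneous dephasing rate
    if Delta == 0.0:
        return 1.0 / Gamma2
    # SD-shortened coherence time, as quoted in the text:
    # T2 = -Gamma2/Delta^2 + sqrt((Gamma2/Delta^2)^2 + 2/Delta^2)
    g = Gamma2 / Delta**2
    return -g + math.sqrt(g**2 + 2.0 / Delta**2)

# Lifetime-limited check: T2 -> 2*T1 as T2_star -> infinity
print(coherence_time(T1=12.0, T2_star=1e12))
```

As Delta → 0 the SD expression reduces to 1/Γ 2 , so the helper is continuous across the two branches.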

To quantify the time-dependent PD and SD mechanics, we exploit laser pulse excitation combined with an SPAD array, in which the time-correlated single-photon-counting (TCSPC) technique measures not only the spatial position r , but also the arrival time t of each photon, that is, the time delay between the trigger signal of the laser pulse and the detection time. Therefore, following pulse excitation, the time evolution of the single-photon wave packet can be expressed in the following form with the analytical amplitude ε i (t) and phase φ i (t) [ 37 , 38 ]: (2) $$\begin{aligned} \zeta _i (t )&= {\varepsilon _i}(t )\exp [{ - i{\phi_i}(t )} ]\\ &= \frac{1}{{{T_{1i}}}}H(t )\exp \left[ { - \frac{{t + \delta {t_i}}}{{2{T_{1i}}}}} \right]\exp \left\{ { - i\left[ {({\omega_0} + \varDelta {\omega_i})t + \int_t {\delta {\omega_i}(\tau )d\tau } } \right]} \right\} \end{aligned}$$ where the Heaviside function H(t) indicates that no photon can be emitted prior to the excitation, δt i is the time jitter of the excitation, ω 0 is the central frequency, Δω i t represents the time-dependent phase caused by PD, and $\int_t \delta\omega_i(\tau)\,d\tau$ is the phase variation under the influence of SD arising from time-dependent frequency fluctuations. The temporal wave functions are normalized, $\int |\zeta_i(t)|^2\,dt = 1$. However, in wide-field fluorescence imaging, the propagation properties of a single photon must be considered in both the time and space domains. The spatial propagation property mainly depends on the size of the light source and the characteristics of the imaging system. To better understand the relationship between time and space propagation, we first focus on far-field imaging in free space.
Here, a single fluorescence emitter, as a point source, emits photons in a spherical mode; the spatial wave function is h(r)=exp(ikR)/R, where R and k indicate the space coordinate and wave number, respectively. Note that in free space, the light field propagates in a nondispersive medium and the spectral line is relatively narrow, that is, Δω<<ω 0 , for which the spatial phase variation caused by Δω is significantly lower than that of ω 0 . Therefore, k ≈ k 0 can be approximated. The space evolution is now decoupled from time, and the spatiotemporal single-photon wave packet can be expressed as follows: (3) $${f_i}({r,t} )= \zeta _i (t ){h_i}(r )$$ We built a fluorescence quantum coherence model using this single-photon wave packet. For example, let us consider the simplest case of two-photon interference. Under each pulse excitation, two single photons originating from two respective fluorescence point sources interfere in free space and subsequently arrive at the detection position r at arrival time t . Here, the detection probability of two-photon interference is determined by the statistical average of an ensemble of successively detected photons as follows: (4) $$\begin{aligned} P({r,t} )&= \left\langle \Phi \right|\hat{E}^ - ({r,t} )\hat{E}^ + ({r,t} )|\Phi \rangle \\ &=\left\langle {{1_1}{1_2}} \right|({\hat{E}_1^ - ({r,t} )+ \hat{E}_2^ - ({r,t} )} )({\hat{E}_1^ + ({r,t} )+ \hat{E}_2^ + ({r,t} )} )|{{1_1}{1_2}} \rangle \\ &= {|{{f_1}({r,t} )+ {f_2}({r,t} )} |^2}\\ &=\frac{{H(t )}}{{z_0^2T_1^2}}{\left\langle {\left\langle {\left\langle {\exp \left( { - \frac{{t + \delta {t_1}}}{{{T_1}}}} \right) + \exp \left( { - \frac{{t + \delta {t_2}}}{{{T_1}}}} \right) + \exp \left( { - \frac{{2t + \delta {t_1} + \delta {t_2}}}{{2{T_1}}}} \right)\exp \left\{ {i\left[ {(\varDelta {\omega_2} - \varDelta {\omega_1})t + \int_t {({\delta {\omega_2}(\tau )- \delta {\omega_1}(\tau )} )d\tau } - {k_0}\frac{r}{{{z_0}}}s} \right]} \right\} + \mathrm{c.c.}} \right\rangle } \right\rangle } \right\rangle _{\delta {t_i},\varDelta {\omega _i},\delta {\omega _i}}}\\ &=\frac{{2\exp \left( { - \frac{t}{{{T_1}}}} \right)}}{{z_0^2{T_1}}}\left\{ {1 + \exp \left( { - \frac{{2t}}{{T_2^ \ast }} - \frac{{\Delta _1^2 + \Delta _2^2}}{2}{t^2}} \right)\cos \left[ {({\left\langle {\delta {\omega_2}} \right\rangle - \left\langle {\delta {\omega_1}} \right\rangle } )t + {k_0}\frac{r}{{{z_0}}}s} \right]} \right\},\quad t > 0 \end{aligned}$$ where $\hat{E}^{\pm}(r,t)$ is the detected spatiotemporal field operator acting on the incoming state $|\Phi\rangle$, equal to the superposition of the two single-photon field operators $\hat{E}_1^{\pm}(r,t)$ and $\hat{E}_2^{\pm}(r,t)$, and $\langle\langle\langle \cdots \rangle\rangle\rangle$ denotes the statistical average over the three random variables δt i , Δω i , and δω i . Assuming that the two nearby emitters have the same radiative lifetime T 1 and dephasing time $T_2^{\ast}$ owing to their similar local environments, PD and SD act independently on the different emitters. Therefore, $\langle\exp(\pm i\Delta\omega_i t)\rangle = \exp(-2|t|/T_2^{\ast})$ and $\langle\exp[\pm i\int_t \delta\omega_i(\tau)\,d\tau]\rangle = \exp(-\Delta_i^2 t^2/2 \pm i\langle\delta\omega_i\rangle t)$, where $\langle\delta\omega_i\rangle$ is the detuning of the emission line with respect to ω 0 [ 38 ]. The separation distance s for the far-field imaging of the two point sources within the diffraction limit is on the wavelength scale and thus far smaller than the propagation distance z 0 .
Because the Fraunhofer diffraction limit is fulfilled, that is, z 0 >>s 2 /λ 0 , the spatial function can be approximated as h(r)≈exp{ik 0 (z 0 -sr/2z 0 )}/z 0 , owing to $R = \sqrt{z_0^2 + (r \pm s/2)^2} \simeq z_0 \pm sr/2z_0$ [ 41 ]. Moreover, we ignore the dipole-dipole interaction between the two fluorescence emitters, which only arises at separation distances of approximately a few nanometers. As shown in Eq. ( 4 ), the first term represents the self-coherence determined by the radiative lifetime T 1 of the emitters; the second term represents the cross-coherence and contains the temporal and spatial coherence of the emitters. The amplitude of the cross-coherence decreases exponentially with the arrival time owing to the effect of PD and SD, which indicates that the two-photon interference is partially coherent. The cross-coherence phase relies on the frequency difference and spatial separation between the two sources. Therefore, the detected fluorescence quantum coherence is entirely determined by the space-time emission properties of the fluorescence emitters and does not change with propagation in the non-dispersive system.
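The closed-form final line of Eq. (4) transcribes directly into code; a sketch (the parameter names are ours, with dw1 and dw2 standing for the mean detunings ‹δω 1 › and ‹δω 2 ›, and the prefactors copied from the printed equation):

```python
import numpy as np

def detection_probability(r, t, s, T1, T2_star, Delta1, Delta2,
                          dw1, dw2, z0, k0):
    """Closed form of Eq. (4): two-photon interference probability P(r, t).

    Transcribed term by term from the final line of Eq. (4); Delta1, Delta2
    are the SD widths and dw1, dw2 the mean detunings of the two emitters.
    """
    t = np.asarray(t, dtype=float)
    envelope = 2.0 * np.exp(-t / T1) / (z0**2 * T1)        # self-coherence
    decay = np.exp(-2.0 * t / T2_star                       # PD decay
                   - 0.5 * (Delta1**2 + Delta2**2) * t**2)  # SD decay
    phase = (dw2 - dw1) * t + k0 * r * s / z0               # cross-coherence phase
    return np.where(t > 0, envelope * (1.0 + decay * np.cos(phase)), 0.0)
```

With zero detuning and no SD, P(0, t) reduces to the self-coherence envelope times (1 + exp(−2t/T 2 *)), i.e. full constructive interference at r = 0, which is a convenient sanity check on the transcription.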

The visibility of the coherence is modulated by setting the width of the time gate to engineer the effect of PD on the coherence. High visibility requires extracting highly coherent photons, which is achieved by setting T g shorter than the coherence time T 2 to reduce the effect of PD. Therefore, when T g <T 2 <<T 1 , Eq. ( 6 ) can be approximated as follows: (7) $$\begin{aligned} p({r,{T_g}} )&\simeq \frac{{2{T_g}}}{{{T_1}z_0^2}}\cos \left( {{k_0}\frac{r}{{{z_0}}}s} \right)\\ &= V({{T_g}} )\cos \left( {{k_0}\frac{r}{{{z_0}}}s} \right) \end{aligned}$$ In far-field imaging, when the distance s is very small, the spatial phase φ=k 0 rs/z 0 <<π/2 only causes a contrast variation of cos(k 0 rs/z 0 ) along s because its full period is difficult to detect. Therefore, it is difficult to extract s if only the intensity is measured, as in conventional imaging, because the contrast is a stochastic variable subject to a time-dependent drift. However, if we introduce the time modulation function V(T g ) , an accurate measurement of the spatial contrast as a function of the time gate can be performed. As shown in Eq. ( 7 ), the cumulative detection probability p(r,T g ) increases linearly with V(T g ) , where the slope of p(r,T g ) is determined only by the spatial coherence cos(k 0 rs/z 0 ). Hence, the distance s can be accurately extracted from cos(k 0 r 0 s/z 0 ) at a specific position r 0 .
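Equation (7) suggests a concrete recovery recipe: sweep the time gate, fit the slope of p(r 0 , T g ) against T g , and invert the cosine for s. A numerical sketch with illustrative values (z 0 , r 0 , and the gate range are assumptions, not the paper's parameters):

```python
import numpy as np

# Recovering s from the slope of p(r0, Tg) per Eq. (7).
k0 = 2 * np.pi / 670e-9        # wavenumber at 670 nm emission
z0, T1 = 1.0, 12e-9            # propagation distance (assumed), lifetime
s_true = 150e-9                # separation to recover (below Rayleigh)
r0 = 0.02                      # fixed detection position (assumed)

Tg = np.linspace(0.1e-9, 1.0e-9, 10)                        # gates, Tg < T2
p = 2 * Tg / (T1 * z0**2) * np.cos(k0 * r0 * s_true / z0)   # Eq. (7)

slope = np.polyfit(Tg, p, 1)[0]                  # slope of p(r0, Tg) vs Tg
s_est = np.arccos(slope * T1 * z0**2 / 2) * z0 / (k0 * r0)
print(f"recovered s = {s_est * 1e9:.1f} nm")
```

The inversion is unique as long as k 0 r 0 s/z 0 stays below π, which is the same small-phase regime the text assumes.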

  • b. When considering PD and SD, the detection probability is given by Eq. ( 4 ). In this case, the cross-coherence term decays faster owing to the short coherence time caused by SD, for which the time-modulated coherence V(T g ) is difficult to obtain. However, another time modulation function, a time-varying phase cos((‹δω 2 ›-‹δω 1 ›)t), can be constructed by using two fluorescence emitters with unequal frequencies ‹ δω 2 ›≠‹ δω 1 ›. The spatial phase $\varphi $ can still be extracted from the period of cos((‹δω 2 ›-‹δω 1 ›)t+φ). This method is similar to that of [ 42 ] for telescopes; by extracting the spatial coherence from the interference of two sources to provide a measurement of their separation, the diffraction limit of the telescope can be surpassed approximately 40-fold. However, for two sources with different wavelengths, photon indistinguishability is obtained with a color erasure detector that erases the wavelength identifying information. In contrast, the photon indistinguishability in our method is an intrinsic quantum property of the fluorescence emitter. Therefore, we can directly detect the interference of the two photons without color erasure detectors.

For fluorescence microscopy, a similar statistical model of two fluorescence photons interfering and propagating through a microscopy system must be built. Here, we consider only the PD case because the slow SD process in organic dyes and color centers in diamond can typically be neglected at low temperatures [ 21 , 22 , 26 , 27 ]. We also set the time gate to less than the SD time to reduce the effect of SD. Therefore, the detected probability is given by the following: (8) $$\begin{aligned} p({r,t} )&= \frac{1}{{{T_1}}}\exp \left( { - \frac{t}{{{T_1}}}} \right)\left( {S_ +^2 + S_ -^2 + 2\exp \left( { - \frac{2}{{T_2^ \ast }}t} \right){S_ + }{S_ - }} \right)\\ {S_ \pm } &= \frac{{k_0^2{D^2}}}{{16\pi {f_1}{f_2}}}\left( {\frac{{2{J_1}\left(\frac{{{k_0}D}}{{2{f_2}}}\left( {\frac{{{f_2}}}{{{f_1}}}\left( {{x_0} \pm \frac{s}{2}} \right) + r} \right)\right)}}{{\frac{{{k_0}D}}{{2{f_2}}}\left( {\frac{{{f_2}}}{{{f_1}}}\left( {{x_0} \pm \frac{s}{2}} \right) + r} \right)}}} \right) \end{aligned}$$ where $S_{\pm}$ is the point spread function of the microscopy system [ 41 ], in which J 1 is the first-order Bessel function, D is the diameter of the pupil of the objective, f 1 and f 2 are the focal lengths of the objective and tube lens, respectively, and x 0 is the central position of the two point sources. Here, the microscopy system is considered achromatic, for which the same approximation k ≈ k 0 is adopted. Hence, the detected cross-coherence describes a combination of temporal coherence determined by the PD and spatial coherence arising from the interference of the two spatially separated point sources. Similarly, to extract the cross-coherence that contains the information on s , we use a time gate T g to construct a temporal coherence modulation function using the integral of Eq.
( 8 ) along T g ; when T g <T 2 <<T 1 , the cumulative detection probability is as follows: (9) $$p({r,{T_g}} )\simeq \frac{{2{T_g}}}{{{T_1}}}{S_ + }{S_ - }$$ where the modulation function p(r, T g ) varies approximately linearly with T g . The slope of p(r 0 , T g ) provides an estimation of s when the other parameters are pre-determined from the imaging system and the properties of the fluorescence emitter. Based on this, the limit of resolution is no longer determined by the optical diffraction limit, but instead by the measurement precision of the function p(r, T g ) . Therefore, the diffraction limit can be overcome in fluorescence microscopy by measuring the well-defined temporal and spatial functions of the fluorescence quantum coherence.
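Equations (8) and (9) can be transcribed directly; a sketch, with scipy's first-order Bessel function standing in for J 1 (the guard against the removable singularity at u = 0 and all parameter names are ours):

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function J1

def psf_pair(r, s, x0, k0, D, f1, f2):
    """Airy amplitudes S_plus and S_minus of Eq. (8); names follow the text."""
    out = []
    for sign in (+1.0, -1.0):
        u = (k0 * D / (2 * f2)) * ((f2 / f1) * (x0 + sign * s / 2) + r)
        u = np.where(u == 0, 1e-12, u)   # guard the removable 0/0 at the peak
        out.append((k0**2 * D**2 / (16 * np.pi * f1 * f2)) * 2 * j1(u) / u)
    return out

def gated_probability(r, Tg, s, T1, x0, k0, D, f1, f2):
    """Eq. (9): cumulative gated detection probability for Tg < T2 << T1."""
    S_plus, S_minus = psf_pair(r, s, x0, k0, D, f1, f2)
    return 2 * Tg / T1 * S_plus * S_minus
```

The slope of p(r 0 , T g ) against T g is 2S + S - /T 1 , so once D, f 1 , f 2 , x 0 , and T 1 are known, s is the single free parameter left to fit.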

2.2 Simulation model

As a possible experimental procedure for our method, the experimental setup of a wide-field fluorescence microscope is shown in Fig.  1 (a). For many actual fluorophores, the emitted photons are distributed over a broad range of frequencies owing to the complex inhomogeneous environment or the creation of additional vibrations and phonons. To obtain excellent coherence, a narrow-band spectral filter is required to isolate a single emission line; the zero-phonon line (ZPL) is typically selected, which has a narrow linewidth corresponding to a long coherence time ranging from several hundred picoseconds to a few nanoseconds at low temperatures. Thus, the coherence time, which exceeds the time resolution of the SPAD by more than one order of magnitude, allows the time-resolved interference effects to be measured. A wave plate (WP) combined with a polarizing beam splitter (PBS) is used to ensure the identical polarization of the emitters while compensating for the ellipticity introduced by other optical components. To investigate the coherence in the space-time domain, an SPAD array is used as the imaging device, in which each SPAD acts as a pixel, feeds a TCSPC card, and logs the arrival times of the detected photons. This device combines spatial information with single-photon sensitivity and picosecond-scale temporal resolution and is capable of detecting emission transients orders of magnitude faster than the 1 ms temporal resolution of typical cameras. Therefore, we use a pulse excitation scheme synchronized with the SPAD array to record the arrival time t and spatial position r of each fluorescence photon, rather than only the spatial position as in traditional imaging, as shown in Fig.  1 (b). Following each pulse excitation, two fluorescence photons emitted from the two respective sources at a separation s interfere and reach the SPAD array owing to photon antibunching, while the (r,t) of each fluorescence photon is recorded, as shown in Fig.  1 (c).
In this process, only two-photon detection events per pulse are extracted; one-, three-, or more-photon detection events are removed because they are considered noise caused by nondeterministic photon emission, environmental factors, and dark counts. These uncorrelated photons reduce the coherence. By counting the photon number in each sampling bin (r,t) within the pulse period, a space-time distribution histogram of the photons can be accumulated over many pulse events, which produces an accurate representation of the photon number distribution. Based on this distribution, we can fit the detected probability curve P(r,t) using Eq. ( 8 ), normalized by dividing by the total number of photons, as shown in Fig.  1 (d). To extract the cross-coherence that contains the separation distance s information, we introduce a time gate T g as a photon-arrival-time post-selection window by simply summing the photon numbers with $t \le T_g$ , and obtain the photon number distribution as a function of (r,T g ) . Because the coherence time T 2 in Eq. ( 8 ) is pre-determined by the HOM experiment, we can extract the highly coherent photons by setting T g <T 2 . Therefore, the cumulative detection probability p(r,T g ) in Eq. ( 9 ) is linearly fitted from the photon number distribution, as shown in Fig.  1 (e), which indicates that the visibility of the coherence gradually improves with T g owing to the accumulation of more coherent photons. In Eq. ( 9 ), T 1 can be pre-measured by a Hanbury Brown and Twiss (HBT) experiment [ 20 – 22 ], x 0 can be estimated by a center-location algorithm from the detected spatial photon number distribution at a particular T g [ 6 ], and the other parameters can be pre-determined from the microscopy system. Therefore, we can determine the only unknown parameter s from the slope 2S + S - /T 1 of p(r 0 , T g ) at a particular position r 0 .
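The post-selection and time-gating pipeline described above can be sketched with toy data (the event model, the 32-pixel geometry, and all rates and times below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy event stream: per pulse, keep only two-photon detection events,
# histogram counts per pixel, then apply the arrival-time gate Tg.
n_pulses = 5000
events = []
for _ in range(n_pulses):
    n = rng.poisson(2.0)                  # photons detected this pulse
    if n != 2:
        continue                          # post-select two-photon events only
    t = rng.exponential(12.0, size=2)     # arrival times (ns), T1 = 12 ns
    r = rng.integers(0, 32, size=2)       # SPAD pixel hit by each photon
    events.append((r, t))

Tg = 5.0                                  # time gate (ns), set below T2
counts = np.zeros(32)
for r, t in events:
    for ri, ti in zip(r, t):
        if ti <= Tg:                      # keep photons inside the gate
            counts[ri] += 1

print(f"{len(events)} two-photon events, {int(counts.sum())} gated photons")
```

In a real experiment the per-pixel gated counts would be fitted with Eq. (9); here the loop only illustrates the event selection and gating bookkeeping.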


Fig. 1. Schematic illustration of super-resolution wide-field microscopy based on fluorescence quantum coherence. (a) The experimental implementation. An SPAD array is incorporated into a traditional fluorescence microscope and synchronized with a pulsed laser to measure the spatial position and arrival time of each photon. DM, dichroic mirror, WP, wave plate, PBS, polarizing beam splitter. (b) Comparison of the detection schemes of the traditional imaging and SPAD array; the former is based on spatial intensity measurements, while the latter is a temporal and spatial measurement of the photon pairs per pulse. (c) Sequence diagram of the detection and post-selection scheme of SPAD array. Only two-photon events are recorded while single- or multi-photon events are ignored. (d) The photon count distribution with the photon arrival times and detection positions of the two-photon interference. (e) The photon count distribution with the time gate widths and detection positions of the two-photon coherence.


3. Results and discussions

We implemented a numerical demonstration for imaging two separate nitrogen vacancy (NV) centers in diamonds to prove the feasibility of our method for improving the spatial resolution, as shown in Fig.  2 . For traditional wide-field fluorescence microscopy, the spatial resolution of two incoherent point sources was approximately 230 nm owing to Rayleigh’s diffraction limit. However, for the coherent sources, the resolution is slightly worse when considering only the spatial intensity measurement (approximately 350 nm) owing to the interference effect shown in Fig.  2 (a), and the contrast of the interference decreases with the increasing distance of two-point sources; thus, it is not possible to resolve the spatial distances of less than 350 nm from the contrast of the spatial intensity owing to the unknown variation of the temporal intensity. Here, we introduce the time dimension and use the SPAD array to simultaneously measure the temporal and spatial information of the detected photons. The time- and space-resolved detection probability functions P(r, t) for different separation distances are shown in Fig.  2 (b, top). The detected intensity decreases with the photon arrival time t , as shown in Eq. ( 8 ), which includes the T 1 -determined self-coherence and T 2 -determined cross-coherence. When t < T 2 , the variation in cross-coherence is dominant because T 2 <2T 1 , in which the cross-coherence is clearly more sensitive to the separation distance s and sharply decreases as s increases. Therefore, according to the predetermined value of T 2 , we set the time gate T g ( T g <T 2 ) to extract the cross-coherence and then constructed the T g -dependent modulation function p(r, T g ) , which is shown in Fig.  2 (b, bottom). At a particular position, r = 0 , the visibility of the cross-coherence p(0, T g ) increases approximately linearly with T g, and its slope decreases as s increases, as shown in Fig.  2 (c). 
The relationship between the visibility of the cross-coherence and s is now determined by a known variation, the T g -dependent intensity p(0, T g ) , rather than an unknown time variation. Therefore, based on the other predetermined parameters in Eq. ( 9 ), the separation distance s can be estimated and resolved from the deterministic slopes of p(0, T g ) , even when s is below the diffraction limit.


Fig. 2. Numerical demonstration of breaking the diffraction limit using fluorescence quantum coherence for a case of two NV centers. The simulation parameters of NV centers are obtained from the experiment in [ 26 ]: the emission wavelength is λ=670 nm, excited state lifetime is T 1  = 12 ns, and coherence time is T 2  = 15.8 ns. Moreover, the time-resolution of the SPAD array is 55 ps (Photon Force PF32), the magnification of the objective lens is 100× and NA is 1.49, and the detected photon number is approximately 10 4 . (a) The spatial intensity distributions with the different separation distances between two NV centers in traditional imaging. (b) The photon detection probability distributions with different separation distances ( s =50, 150, 250, and 350 nm) between two NV centers at the different detection distances r vs the photon arrival time (top) and the time gate widths (bottom). (c) The photon detection probability distributions with the different separation distance s as a linear function of the time gate T g at the detection center ( r = 0 ) of two NV centers. The slope of each curve changes with the separation distance s .

Here, the spatial resolution is determined by the precision of estimating the separation distance s between the two point sources, which depends on the precision of the p(r, Tg) measurement rather than on the diffraction limit. Improving the measurement precision requires low detection noise and a high sampling rate. Although the noise can be effectively reduced through post-selection of two-photon events, photon shot noise remains unavoidable. The measurement error is therefore fundamentally bounded from below by the “shot-noise limit” of ≃ 1/ $\sqrt N $ [ 6 , 7 ]. As shown in Fig.  3 , a low cumulative number of detected photons causes a large measurement error in p(0, Tg) due to photon shot noise, and hence a poor estimation of s . The simulation results in Fig.  3 show that a spatial resolution of 50 nm can be reached by detecting at least 10⁴ photons; achieving higher resolution requires counting more photons to bring the p(0, Tg) measurement closer to optimal. In addition to reducing noise, higher sampling rates, both spatial and temporal, also improve the measurement precision; these are determined by the spatial and temporal resolutions of the SPAD array. Fluorophores with shorter coherence times require higher temporal resolution and lower time jitter in the detection to measure the time-resolved coherence accurately. A high spatial resolution of the SPAD array yields densely sampled measurements of the detector position r and thus a more accurately fitted curve p(r, Tg) , which also enables precise estimation of s . Meanwhile, for any position r′ , the slope of p(r′, Tg) provides an estimate of s ; a statistical analysis over all such estimates then yields an improved estimate of s and its error from the mean and variance, respectively. Moreover, although SD limits the observation of coherence, we can still set the time gate Tg to be less than the SD time to reduce the effect of SD.
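The 1/√N shot-noise scaling quoted above can be illustrated with a minimal Monte Carlo sketch; the detection probability value 0.3 is an arbitrary stand-in for p(0, Tg).

```python
# Minimal illustration of the shot-noise limit: the standard error of a
# probability estimated from N detected photons scales as 1/sqrt(N), so
# multiplying by sqrt(N) yields a roughly constant value sqrt(p(1-p)).
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.3                      # assumed stand-in for p(0, T_g)
for n in (10**3, 10**4, 10**5):
    trials = rng.binomial(n, p_true, size=2000) / n
    print(n, round(trials.std() * np.sqrt(n), 3))  # ~sqrt(0.3*0.7) ≈ 0.46
```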


Fig. 3. The effect of the measurement error of p(0, Tg) on the spatial resolution (s = 50, 100 nm) for cumulative photon numbers of 10³, 10⁴, and 10⁵.

4. Conclusion

In this study, we built a microscopic coherence model based on the inherent radiation mechanism of a fluorescence source and proposed a new method for breaking the diffraction limit: a SPAD array simultaneously records the temporal and spatial interference, and post-selection then modulates the temporal coherence to extract the spatial separation distance. Unlike previous atomic models that assume identical fluorescence photons, or the incoherent sources assumed in conventional fluorescence microscopy, we employed a partial-coherence model based on the fluorescence emission mechanism of actual fluorophores, which not only fully utilizes the quantum properties of fluorescence but also evaluates them under practical environmental effects and imaging systems. Post-selection, by filtering out the desirable quantum coherence, makes our method less dependent on completely coherent fluorescence sources and suitable for more fluorophores, especially the organic dye molecules, quantum dots, and color centers commonly used in fluorescence microscopy. Furthermore, our method requires neither a complicated entangled light source nor a correlation measurement, which makes this quantum-enhanced imaging method accessible to conventional fluorescence microscopy and helpful for speeding up super-resolution imaging of live cells with weak fluorescence signals.

Funding

Science and Technology Commission of Shanghai Municipality (20DZ2210300).

Disclosures

The authors declare no conflicts of interest.

Data availability

The data underlying the results of this study are available in Ref. [ 26 ].

1. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19 (11), 780 (1994). [ CrossRef ]  

2. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. 102 (37), 13081–13086 (2005). [ CrossRef ]  

3. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3 (10), 793–796 (2006). [ CrossRef ]  

4. U. Endesfelder and M. Heilemann, “Direct stochastic optical reconstruction microscopy (dSTORM),” Methods in Molecular Biology 1251 , 263–276 (2015). [ CrossRef ]  

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313 (5793), 1642–1645 (2006). [ CrossRef ]  

6. R. J. Ober, S. Ram, and E. S. Ward, “Localization Accuracy in Single-Molecule Microscopy,” Biophys. J. 86 (2), 1185–1200 (2004). [ CrossRef ]  

7. I. R. Berchera and I. P. Degiovanni, “Quantum imaging with sub-poissonian light: challenges and perspectives in optical metrology,” Metrologia 56 (2), 024001 (2019). [ CrossRef ]  

8. M. Genovese, “Real applications of quantum imaging,” J. Opt. 18 (7), 073002 (2016). [ CrossRef ]  

9. J. Jacobson, G. Björk, I. Chuang, and Y. Yamamoto, “Photonic de Broglie Waves,” Phys. Rev. Lett. 74 (24), 4835–4838 (1995). [ CrossRef ]  

10. E. J. S. Fonseca, C. H. Monken, and S. Pádua, “Measurement of the de Broglie Wavelength of a Multiphoton Wave Packet,” Phys. Rev. Lett. 82 (14), 2868–2871 (1999). [ CrossRef ]  

11. K. Edamatsu, R. Shimizu, and T. Itoh, “Measurement of the photonic de Broglie wavelength of entangled photon pairs generated by spontaneous parametric down-conversion,” Phys. Rev. Lett. 89 (21), 213601 (2002). [ CrossRef ]  

12. A. N. Boto, P. Kok, D. S. Abrams, S. L. Braunstein, C. P. Williams, and J. P. Dowling, “Quantum-interferometric optical lithography: Towards arbitrary two-dimensional patterns,” Phys. Rev. Lett. 85 (13), 2733–2736 (2000). [ CrossRef ]  

13. M. D’Angelo, M. V. Chekhova, and Y. Shih, “Two-photon diffraction and quantum lithography,” Phys. Rev. Lett. 87 (1), 013602 (2001). [ CrossRef ]  

14. G. Brida, M. Genovese, and I. R. Berchera, “Experimental realization of sub-shot-noise quantum imaging,” Nat. Photonics 4 (4), 227–230 (2010). [ CrossRef ]  

15. M. A. Taylor, J. Janousek, V. Daria, J. Knittel, B. Hage, H. A. Bachor, and W. P. Bowen, “Biological measurement beyond the quantum limit,” Nat. Photonics 7 (3), 229–233 (2013). [ CrossRef ]  

16. Y. S. Kim, O. Kwon, S. M. Lee, H. Kim, S. K. Choi, H. S. Park, and Y. H. Kim, “Observation of Young's Double-Slit Interference with the Three-Photon N00N State,” Opt. Express 19 (25), 24957 (2011). [ CrossRef ]  

17. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, Cambridge, 1995), pp. 683–803.

18. B. Lounis and M. Orrit, “Single-photon sources,” Rep. Prog. Phys. 68 (5), 1129–1179 (2005). [ CrossRef ]  

19. C. Santori, D. Fattal, J. Vuckovic, G. S. Solomon, and Y. Yamamoto, “Indistinguishable photons from a single-photon device,” Nature 419 (6907), 594–597 (2002). [ CrossRef ]  

20. T. Basché, W. Moerner, M. Orrit, and H. Talon, “Photon antibunching in the fluorescence of a single dye molecule trapped in a solid,” Phys. Rev. Lett. 69 (10), 1516–1519 (1992). [ CrossRef ]  

21. A. Kiraz, M. Ehrl, T. Hellerer, O. E. Müstecaplolu, C. Bräuchle, and A. Zumbusch, “Indistinguishable Photons from a Single Molecule,” Phys. Rev. Lett. 94 (22), 223602 (2005). [ CrossRef ]  

22. R. Lettow, Y. L. A. Rezus, A. Renn, G. Zumofen, E. Ikonen, S. Götzinger, and V. Sandoghdar, “Quantum Interference of Tunably Indistinguishable Photons from Remote Organic Molecules,” Phys. Rev. Lett. 104 (12), 123605 (2010). [ CrossRef ]  

23. B. Lounis, H. A. Bechtel, D. Gerion, P. Alivisatos, and W. E. Moerner, “Photon antibunching in single CdSe/ZnS quantum dot fluorescence,” Chem. Phys. Lett. 329 (5-6), 399–404 (2000). [ CrossRef ]  

24. E. B. Flagg, A. Muller, S. V. Polyakov, A. Ling, A. Migdall, and G. S. Solomon, “Interference of Single Photons from Two Separate Semiconductor Quantum Dots,” Phys. Rev. Lett. 104 (13), 137401 (2010). [ CrossRef ]  

25. V. Giesz, S. L. Portalupi, T. Grange, C. Antón, L. De Santis, J. Demory, N. Somaschi, I. Sagnes, A. Lemaître, L. Lanco, A. Auffèves, and P. Senellart, “Cavity-enhanced two-photon interference using remote quantum dot sources,” Phys. Rev. B 92 (16), 161302 (2015). [ CrossRef ]  

26. H. Bernien, L. Childress, L. Robledo, M. Markham, D. Twitchen, and R. Hanson, “Two-Photon Quantum Interference from Separate Nitrogen Vacancy Centers in Diamond,” Phys. Rev. Lett. 108 (4), 043604 (2012). [ CrossRef ]  

27. A. Sipahigil, K. D. Jahnke, L. J. Rogers, T. Teraji, J. Isoya, A. S. Zibrov, F. Jelezko, and M. D. Lukin, “Indistinguishable Photons from Separated Silicon-Vacancy Centers in Diamond,” Phys. Rev. Lett. 113 (11), 113602 (2014). [ CrossRef ]  

28. O. Schwartz, J. M. Levitt, R. Tenne, S. Itzhakov, Z. Deutsch, and D. Oron, “Superresolution Microscopy with Quantum Emitters,” Nano Lett. 13 (12), 5832–5836 (2013). [ CrossRef ]  

29. J. M. Cui, F. W. Sun, X. D. Chen, Z. J. Gong, and G. C. Guo, “Quantum statistical imaging of particles without restriction of the diffraction limit,” Phys. Rev. Lett. 110 (15), 153901 (2013). [ CrossRef ]  

30. R. Tenne, U. Rossman, B. Rephael, Y. Israel, A. Krupinski-Ptaszek, R. Lapkiewicz, Y. Silberberg, and D. Oron, “Super-resolution enhancement by quantum image scanning microscopy,” Nat. Photonics 13 (2), 116–122 (2019). [ CrossRef ]  

31. W. Larson and B. E. A. Saleh, “Resurgence of Rayleigh’s curse in the presence of partial coherence,” Optica 5 (11), 1382–1389 (2018). [ CrossRef ]  

32. K. Liang, S. A. Wadood, and A. N. Vamivakas, “Coherence effects on estimating two-point separation,” Optica 8 (2), 243–248 (2021). [ CrossRef ]  

33. Z. Hradil, J. Rehácek, L. Sánchez-Soto, and B. G. Englert, “Quantum Fisher Information with Coherence,” Optica 6 (11), 1437–2536 (2019). [ CrossRef ]  

34. C. Thiel, T. Bastin, J. Martin, E. Solano, J. von Zanthier, and G. S. Agarwal, “Quantum Imaging with Incoherent Photons,” Phys. Rev. Lett. 99 (13), 133603 (2007). [ CrossRef ]  

35. C. Thiel, T. Bastin, J. von Zanthier, and G. S. Agarwal, “Sub-Rayleigh quantum imaging using single-photon sources,” Phys. Rev. A 80 (1), 013820 (2009). [ CrossRef ]  

36. A. Muthukrishnan, M. O. Scully, and M. S. Zubairy, “Quantum microscopy using photon correlations,” J. Opt. B: Quantum Semiclassical Opt. 6 (6), S575–S582 (2004). [ CrossRef ]  

37. B. Kambs and C. Becher, “Limitations on the indistinguishability of photons from remote solid state sources,” New J. Phys. 20 (11), 115003 (2018). [ CrossRef ]  

38. A. Tokmakoff, “Time dependent quantum mechanics and spectroscopy,” (2014), http://tdqms.uchicago.edu .

39. T. Legero, T. Wilk, A. Kuhn, and G. Rempe, “Time-Resolved Two-Photon Quantum Interference,” Appl. Phys. B 77 (8), 797–802 (2003). [ CrossRef ]  

40. T. Legero, T. Wilk, A. Kuhn, and G. Rempe, “Characterization of single photons using two-photon interference,” Adv. At., Mol., Opt. Phys. 53 , 253–289 (2006). [ CrossRef ]  

41. Y. Shih, An introduction to quantum optics (CRC Press, 2011), pp. 25–84.

42. L. C. Liu, L. Y. Qu, C. Wu, J. Cotler, F. Ma, M. Y. Zheng, X. P. Xie, Y. A. Chen, Q. Zhang, F. Wilczek, and J. W. Pan, “Improved Spatial Resolution Achieved by Chromatic Intensity Interferometry,” Phys. Rev. Lett. 127 (10), 103601 (2021). [ CrossRef ]  


17.1 Understanding Diffraction and Interference

Section learning objectives.

By the end of this section, you will be able to do the following:

  • Explain wave behavior of light, including diffraction and interference, including the role of constructive and destructive interference in Young’s single-slit and double-slit experiments
  • Perform calculations involving diffraction and interference, in particular the wavelength of light using data from a two-slit interference pattern

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (D) investigate behaviors of waves, including reflection, refraction, diffraction, interference, resonance, and the Doppler effect

Section Key Terms

diffraction, Huygens’s principle, monochromatic, wavefront

Diffraction and Interference

[BL] Explain constructive and destructive interference graphically on the board.

[OL] Ask students to look closely at a shadow. Ask why the edges are not sharp lines. Explain that this is caused by diffraction, one of the wave properties of electromagnetic radiation. Define the nanometer in relation to other metric length measurements.

[AL] Ask students which, among speed, frequency, and wavelength, stay the same, and which change, when a ray of light travels from one medium to another. Discuss those quantities in terms of colors (wavelengths) of visible light.

We know that visible light is the type of electromagnetic wave to which our eyes respond. As we have seen previously, light obeys the equation

c = fλ

where c = 3.00 × 10⁸ m/s is the speed of light in vacuum, f is the frequency of the electromagnetic wave in Hz (or s⁻¹), and λ is its wavelength in m. The range of visible wavelengths is approximately 380 to 760 nm. As is true for all waves, light travels in straight lines and acts like a ray when it interacts with objects several times as large as its wavelength. However, when it interacts with smaller objects, it displays its wave characteristics prominently. Interference is the identifying behavior of a wave.

In Figure 17.2 , both the ray and wave characteristics of light can be seen. The laser beam emitted by the observatory represents ray behavior, as it travels in a straight line. Passing a pure, one-wavelength beam through vertical slits with a width close to the wavelength of the beam reveals the wave character of light. Here we see the beam spreading out horizontally into a pattern of bright and dark regions that are caused by systematic constructive and destructive interference. As it is characteristic of wave behavior, interference is observed for water waves, sound waves, and light waves.

That interference is a characteristic of energy propagation by waves is demonstrated more convincingly by water waves. Figure 17.3 shows water waves passing through gaps between some rocks. You can easily see that the gaps are similar in width to the wavelength of the waves and that this causes an interference pattern as the waves pass beyond the gaps. A cross-section across the waves in the foreground would show the crests and troughs characteristic of an interference pattern.

Light has wave characteristics in various media as well as in a vacuum. When light goes from a vacuum to some medium, such as water, its speed and wavelength change, but its frequency, f , remains the same. The speed of light in a medium is v = c/n, where n is its index of refraction. If you divide both sides of the equation c = fλ by n , you get c/n = v = fλ/n. Therefore, v = fλₙ, where λₙ is the wavelength in a medium, and

λₙ = λ/n

where λ is the wavelength in vacuum and n is the medium’s index of refraction. It follows that the wavelength of light is smaller in any medium than it is in vacuum. In water, for example, which has n = 1.333, the range of visible wavelengths is (380 nm)/1.333 to (760 nm)/1.333, or λₙ = 285–570 nm. Although wavelengths change while traveling from one medium to another, colors do not, since colors are associated with frequency.
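The relation between vacuum and in-medium wavelengths can be checked directly; the short snippet below reproduces the visible-range numbers for water quoted above.

```python
# Wavelength of light inside a medium: lambda_n = lambda / n.
def wavelength_in_medium(lambda_vacuum_nm, n):
    """Return the in-medium wavelength (nm) given the vacuum wavelength."""
    return lambda_vacuum_nm / n

n_water = 1.333
print(round(wavelength_in_medium(380, n_water)))  # 285 nm
print(round(wavelength_in_medium(760, n_water)))  # 570 nm
```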

The Dutch scientist Christiaan Huygens (1629–1695) developed a useful technique for determining in detail how and where waves propagate. He used wavefronts , which are the points on a wave’s surface that share the same, constant phase (such as all the points that make up the crest of a water wave). Huygens’s principle states, “Every point on a wavefront is a source of wavelets that spread out in the forward direction at the same speed as the wave itself. The new wavefront is a line tangent to all of the wavelets.”

Figure 17.4 shows how Huygens’s principle is applied. A wavefront is the long edge that moves; for example, the crest or the trough. Each point on the wavefront emits a semicircular wave that moves at the propagation speed v . These wavelets are drawn at a later time, t , so that they have moved a distance s = vt. The new wavefront is a line tangent to the wavelets and is where the wave is located at time t . Huygens’s principle works for all types of waves, including water waves, sound waves, and light waves. It will be useful not only in describing how light waves propagate, but also in explaining how they interfere.

What happens when a wave passes through an opening, such as light shining through an open door into a dark room? For light, you expect to see a sharp shadow of the doorway on the floor of the room, and you expect no light to bend around corners into other parts of the room. When sound passes through a door, you hear it everywhere in the room and, thus, you understand that sound spreads out when passing through such an opening. What is the difference between the behavior of sound waves and light waves in this case? The answer is that the wavelengths that make up the light are very short, so that the light acts like a ray. Sound has wavelengths on the order of the size of the door, and so it bends around corners.

[OL] Discuss the fact that, for a diffraction pattern to be visible, the width of a slit must be roughly the wavelength of the light. Try to give students an idea of the size of visible light wavelengths by noting that a human hair is roughly 100 times wider.

If light passes through smaller openings, often called slits, you can use Huygens’s principle to show that light bends as sound does (see Figure 17.5 ). The bending of a wave around the edges of an opening or an obstacle is called diffraction . Diffraction is a wave characteristic that occurs for all types of waves. If diffraction is observed for a phenomenon, it is evidence that the phenomenon is produced by waves. Thus, the horizontal diffraction of the laser beam after it passes through slits in Figure 17.2 is evidence that light has the properties of a wave.
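Huygens’s principle can also be applied numerically: treat the slit as a row of point sources, sum the wavelet phases at each screen angle, and look for the interference minima. The sketch below (slit width and wavelength chosen arbitrarily) recovers the single-slit minimum angle predicted by diffraction theory, D sin θ = λ for the first minimum.

```python
# Numerical Huygens construction for a single slit: sum the far-field
# phases of many point-source wavelets spanning the slit.
import numpy as np

lam = 500e-9                  # wavelength in m (arbitrary green-ish light)
D = 5 * lam                   # slit width: five wavelengths (arbitrary)
sources = np.linspace(-D / 2, D / 2, 500)        # Huygens point sources
theta = np.radians(np.linspace(0.1, 30, 3000))   # screen angles (radians)

# In the far field each wavelet at position x picks up a path difference
# x*sin(theta); summing the complex phases gives the diffracted amplitude.
phase = 2j * np.pi / lam * np.outer(np.sin(theta), sources)
intensity = np.abs(np.exp(phase).sum(axis=1)) ** 2

first_min = theta[np.argmin(intensity[: len(theta) // 2])]
print(round(np.degrees(np.arcsin(lam / D)), 2))  # predicted minimum: 11.54
print(round(np.degrees(first_min), 2))           # numerical minimum (close)
```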

Once again, water waves present a familiar example of a wave phenomenon that is easy to observe and understand, as shown in Figure 17.6 .

Watch Physics

Single-slit interference.

This video works through the math needed to predict diffraction patterns that are caused by single-slit interference.

Which values of m denote the location of destructive interference in a single-slit diffraction pattern?

  • whole integers, excluding zero
  • whole integers
  • real numbers excluding zero
  • real numbers

The fact that Huygens’s principle worked was not considered enough evidence to prove that light is a wave. People were also reluctant to accept light’s wave nature because it contradicted the ideas of Isaac Newton, who was still held in high esteem. The acceptance of the wave character of light came after 1801, when the English physicist and physician Thomas Young (1773–1829) did his now-classic double-slit experiment (see Figure 17.7 ).

When light passes through narrow slits, it is diffracted into semicircular waves, as shown in Figure 17.8 (a). Pure constructive interference occurs where the waves line up crest to crest or trough to trough. Pure destructive interference occurs where they line up crest to trough. The light must fall on a screen and be scattered into our eyes for the pattern to be visible. An analogous pattern for water waves is shown in Figure 17.8 (b). Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. Those angles depend on wavelength and the distance between the slits, as you will see below.

Virtual Physics

Wave interference.

This simulation demonstrates most of the wave phenomena discussed in this section. First, observe interference between two sources of electromagnetic radiation without adding slits. See how water waves, sound, and light all show interference patterns. Stay with light waves and use only one source. Create diffraction patterns with one slit and then with two. You may have to adjust slit width to see the pattern.

Visually compare the slit width to the wavelength. When do you get the best-defined diffraction pattern?

  • when the slit width is larger than the wavelength
  • when the slit width is smaller than the wavelength
  • when the slit width is comparable to the wavelength
  • when the slit width is infinite

Calculations Involving Diffraction and Interference

[BL] The Greek letter θ is spelled theta . The Greek letter λ is spelled lambda . Both are pronounced the way you would expect from the spelling. The plurals of maximum and minimum are maxima and minima , respectively.

[OL] Explain that monochromatic means one color. Monochromatic also means one frequency . The sine of an angle is the opposite side of a right triangle divided by the hypotenuse. Opposite means opposite the given acute angle. Note that the sine of an angle is never greater than 1.

The fact that the wavelength of light of one color, or monochromatic light, can be calculated from its two-slit diffraction pattern in Young’s experiments supports the conclusion that light has wave properties. To understand the basis of such calculations, consider how two waves travel from the slits to the screen. Each slit is a different distance from a given point on the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they will end up out of phase (crest to trough) at the screen if the paths differ in length by half a wavelength, interfering destructively. If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen, interfering constructively. More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths (λ/2, 3λ/2, 5λ/2, etc.), then destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths (λ, 2λ, 3λ, etc.), then constructive interference occurs.

Figure 17.9 shows how to determine the path-length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between the slits, then the angle θ between the path and a line from the slits perpendicular to the screen (see the figure) is nearly the same for each path. That approximation and simple trigonometry show the length difference, ΔL, to be d sin θ, where d is the distance between the slits:

ΔL = d sin θ

To obtain constructive interference for a double slit, the path-length difference must be an integral multiple of the wavelength, or

d sin θ = mλ, for m = 0, 1, −1, 2, −2, …

Similarly, to obtain destructive interference for a double slit, the path-length difference must be a half-integral multiple of the wavelength, or

d sin θ = (m + 1/2)λ, for m = 0, 1, −1, 2, −2, …

The number m is the order of the interference. For example, m = 4 is fourth-order interference.
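The double-slit conditions just described can be wrapped in a small helper for computing fringe angles; the example numbers anticipate the He-Ne laser values used in the worked example further on.

```python
# Fringe angles from the double-slit conditions:
#   constructive: d sin(theta) = m * lambda
#   destructive:  d sin(theta) = (m + 1/2) * lambda
import math

def bright_angle(d_nm, lam_nm, m):
    """Angle (degrees) of the m-th order constructive-interference fringe."""
    return math.degrees(math.asin(m * lam_nm / d_nm))

def dark_angle(d_nm, lam_nm, m):
    """Angle (degrees) of the m-th order destructive-interference fringe."""
    return math.degrees(math.asin((m + 0.5) * lam_nm / d_nm))

# Example: 633 nm light, slits 10,000 nm (0.0100 mm) apart
print(round(bright_angle(10_000, 633, 3), 2))  # third bright line, ≈ 10.95
print(round(dark_angle(10_000, 633, 0), 2))    # first dark fringe, ≈ 1.81
```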

Figure 17.10 shows how the intensity of the bands of constructive interference decreases with increasing angle.

Light passing through a single slit forms a diffraction pattern somewhat different from that formed by double slits. Figure 17.11 shows a single-slit diffraction pattern. Note that the central maximum is larger than those on either side, and that the intensity decreases rapidly on either side.

The analysis of single-slit diffraction is illustrated in Figure 17.12 . Assuming the screen is very far away compared with the size of the slit, rays heading toward a common destination are nearly parallel. That approximation allows a series of trigonometric operations that result in the equations for the minima produced by destructive interference.

When rays travel straight ahead, they remain in phase and a central maximum is obtained. However, when rays travel at an angle θ relative to the original direction of the beam, each ray travels a different distance to the screen, and they can arrive in or out of phase. Thus, a ray from the center travels a distance λ/2 farther than the ray from the top edge of the slit, they arrive out of phase, and they interfere destructively. Similarly, for every ray between the top and the center of the slit, there is a ray between the center and the bottom of the slit that travels a distance λ/2 farther to the common point on the screen, and so interferes destructively. Symmetrically, there will be another minimum at the same angle below the direct ray.

Below we summarize the equations needed for the calculations to follow.

The speed of light in a vacuum, c , the wavelength of the light, λ, and its frequency, f , are related as follows:

c = fλ

The wavelength of light in a medium, λₙ, compared with its wavelength in a vacuum, λ, is given by

λₙ = λ/n

To calculate the positions of constructive interference for a double slit, the path-length difference must be an integral multiple, m , of the wavelength λ:

d sin θ = mλ, for m = 0, 1, −1, 2, −2, …

where d is the distance between the slits and θ is the angle between a line from the slits to the maximum and a line perpendicular to the barrier in which the slits are located. To calculate the positions of destructive interference for a double slit, the path-length difference must be a half-integral multiple of the wavelength:

d sin θ = (m + 1/2)λ, for m = 0, 1, −1, 2, −2, …

For a single-slit diffraction pattern, the width of the slit, D , the distance of the first ( m = 1) destructive interference minimum, y , the distance from the slit to the screen, L , and the wavelength, λ, are given by

y = Lλ/D

Also, for single-slit diffraction,

D sin θ = mλ

where θ is the angle between a line from the slit to the minimum and a line perpendicular to the screen, and m is the order of the minimum.

Worked Example

Two-slit interference.

Suppose you pass light from a He-Ne laser through two slits separated by 0.0100 mm, and you find that the third bright line on a screen is formed at an angle of 10.95º relative to the incident beam. What is the wavelength of the light?

The third bright line is due to third-order constructive interference, which means that m = 3. You are given d = 0.0100 mm and θ = 10.95º. The wavelength can thus be found using the equation d sin θ = mλ for constructive interference.

The equation is d sin θ = mλ. Solving for the wavelength, λ, gives

λ = d sin θ / m

Substituting known values yields

λ = (0.0100 mm)(sin 10.95º)/3 = 6.33 × 10⁻⁴ mm = 633 nm

To three digits, 633 nm is the wavelength of light emitted by the common He-Ne laser. Not by coincidence, this red color is similar to that emitted by neon lights. More important, however, is the fact that interference patterns can be used to measure wavelength. Young did that for visible wavelengths. His analytical technique is still widely used to measure electromagnetic spectra. For a given order, the angle for constructive interference increases with λ, so spectra (measurements of intensity versus wavelength) can be obtained.
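The arithmetic of this worked example is easy to verify in a few lines:

```python
# Two-slit worked example: lambda = d sin(theta) / m
# with d = 0.0100 mm, theta = 10.95 deg, m = 3.
import math

d_nm = 0.0100e-3 / 1e-9            # 0.0100 mm expressed in nanometers
lam = d_nm * math.sin(math.radians(10.95)) / 3
print(round(lam))                  # 633 (nm)
```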

Single-Slit Diffraction

Visible light of wavelength 550 nm falls on a single slit and produces its second diffraction minimum at an angle of 45.0° relative to the incident direction of the light. What is the width of the slit?

From the given information, and assuming the screen is far away from the slit, you can use the equation D sin θ = mλ to find D .

Quantities given are λ = 550 nm, m = 2, and θ₂ = 45.0°. Solving the equation D sin θ = mλ for D and substituting known values gives

D = mλ / sin θ₂ = (2 × 550 nm) / sin 45.0° = 1.56 × 10³ nm = 1.56 μm

You see that the slit is narrow (it is only a few times greater than the wavelength of light). That is consistent with the fact that light must interact with an object comparable in size to its wavelength in order to exhibit significant wave effects, such as this single-slit diffraction pattern.
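Again, the single-slit result can be checked numerically:

```python
# Single-slit worked example: D = m * lambda / sin(theta)
# with lambda = 550 nm, m = 2, theta = 45.0 deg.
import math

D = 2 * 550 / math.sin(math.radians(45.0))   # slit width in nm
print(round(D))  # 1556 (nm), i.e. about 1.56 micrometers
```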

Practice Problems

What is the width of a single slit through which 610-nm orange light passes to form a first diffraction minimum at an angle of 30.0°?
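A few lines suffice to check an answer to this practice problem, using D sin θ = mλ with m = 1:

```python
# Practice-problem check: D = m * lambda / sin(theta)
# with lambda = 610 nm, m = 1, theta = 30.0 deg.
import math

D = 1 * 610 / math.sin(math.radians(30.0))   # slit width in nm
print(round(D))  # 1220 (nm)
```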

Check Your Understanding

Use these problems to assess student achievement of the section’s learning objectives. If students are struggling with a specific objective, these problems will help identify which and direct students to the relevant topics.

  • The wavelength first decreases and then increases.
  • The wavelength first increases and then decreases.
  • The wavelength increases.
  • The wavelength decreases.
  • This is a diffraction effect. Your whole body acts as the origin for a new wavefront.
  • This is a diffraction effect. Every point on the edge of your shadow acts as the origin for a new wavefront.
  • This is a refraction effect. Your whole body acts as the origin for a new wavefront.
  • This is a refraction effect. Every point on the edge of your shadow acts as the origin for a new wavefront.

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/17-1-understanding-diffraction-and-interference

© Jun 7, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Scientific Imaging, Inc.


Imaging Solutions for Science and Industry

M4) How Diffraction Limits the Optical Resolution of a Microscope

Dr. Kurt Thorn (UCSF) begins this iBiology video with a historical summary of the work of Ernst Abbe (1840–1905), who formalized the definition of resolution in 1873 after conducting a groundbreaking experiment, referred to as the Abbe diffraction experiment. Dr. Thorn describes the experiment, in which the sample is theoretically represented as a diffraction grating with a repeating pattern of closely spaced dark and light lines. Beginning at about the 7:00 mark, Dr. Thorn conducts a must-see experiment in which a microscope is configured from optical components laid out on a light table. He uses a beam-splitter and two cameras to show (alternately) the image formed by the tube lens and the image at the back focal plane, the Fourier plane, of the objective. By doing the experiment with different samples and then by showing the effect of simple filtering in the Fourier plane, he concretizes the above abstract concepts in the mind of the viewer.


Applying Bragg’s law, we arrive at the diffraction angle determined by the equation d·sin(β) = λ, where d is the spacing of the diffraction grating, β is the angle of diffraction and λ is the wavelength of light. For a finer grating (smaller d), the diffraction angle increases – and if that angle exceeds α, the maximum angle that can be collected by the objective, then no information is captured about the stripes. All we would see is the 0th-order light transmitted through the sample. In other words, if β > α then no image is formed.
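The grating relation above can be turned into a quick numerical check. A minimal sketch (the function names are ours, not from the video; angles in degrees, lengths in micrometres, air assumed):

```python
import math

def diffraction_angle_deg(d_um: float, wavelength_um: float) -> float:
    """First-order diffraction angle for a grating of spacing d: d*sin(beta) = lambda."""
    ratio = wavelength_um / d_um
    if ratio > 1.0:
        raise ValueError("No propagating first order: wavelength exceeds grating spacing")
    return math.degrees(math.asin(ratio))

def is_resolved(d_um: float, wavelength_um: float, alpha_deg: float) -> bool:
    """True if the objective (collection half-angle alpha) captures the first diffracted order."""
    try:
        return diffraction_angle_deg(d_um, wavelength_um) <= alpha_deg
    except ValueError:
        return False

# A 1 um grating illuminated at 0.55 um diffracts near 33.4 degrees, so an
# objective with alpha = 48.6 deg (NA ~0.75 in air) collects the first order.
print(is_resolved(1.0, 0.55, 48.6))  # True
print(is_resolved(0.5, 0.55, 48.6))  # False (no propagating first order)
```

When `is_resolved` returns False, only the 0th order reaches the image plane and the stripes vanish, exactly as in the filtering demonstration.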


The limiting case is obtained by setting the diffraction angle equal to the largest angle that can be collected by the objective. From this, Abbe deduced the resolution limit of a microscope as governed by the equation d·sin(α) = λ. The refractive index n is introduced to account for different media, and the limit is improved by a further factor of 2 because illumination is not restricted to the optical axis: a full range of angles can illuminate the sample, giving d = λ/(2·n·sin(α)) = λ/(2NA).
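Abbe's limit can be evaluated directly. A small sketch under the same conventions (the helper name is an assumption for illustration):

```python
import math

def abbe_limit_um(wavelength_um: float, n: float, alpha_deg: float) -> float:
    """Abbe resolution limit d = lambda / (2 * n * sin(alpha)) = lambda / (2 * NA)."""
    na = n * math.sin(math.radians(alpha_deg))
    return wavelength_um / (2.0 * na)

# Air objective with alpha = 48.6 deg (NA ~0.75), green light at 0.55 um
print(round(abbe_limit_um(0.55, 1.0, 48.6), 3))  # 0.367
```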

Another insight is that diffraction in the sample tells us something about the size of the objects in the field of view, and interference between the diffracted beams is what gives rise to the image of the sample. Dr. Thorn conveys this insight in the form of a famous quote from Abbe: “The microscope image is the interference effect of a diffraction phenomenon”.

In this iBiology video, Prof. Jeff Lichtman (Harvard University) describes the phenomenon of diffraction, a critically important concept because it is diffraction that limits the optical resolution of Light Microscopes.

Building on the concept of diffraction that was described in the previous iBiology video, Prof. Jeff Lichtman (Harvard University) explains that for a point object on the focal plane (such as an infinitesimally small fluorescent bead) the Point Spread Function (PSF) is the resulting distribution of light at and near the image plane. He further explains the influence of the previously described concept of Numerical Aperture on the PSF, and how this relates to optical resolution.

In this iBiology lecture, Prof. Lichtman leverages the previously explained concepts of diffraction, NA and PSF and builds up to the final payoff: optical resolution. How close can two points be in the sample plane and still be resolved as separate points in the image? We learn about the Rayleigh Criterion, and that it is different for the XY and XZ planes of a microscope. Our main takeaway is that the higher the NA, the smaller the Rayleigh Criterion and the better the resolution. Prof. Lichtman then shows a sequence of images taken with cameras of different pixel pitches, providing visual insight into why the sampling in the image plane must be sufficient to resolve an image. This can be quantified by applying the Nyquist Criterion to the spatial sampling on the image plane, leading to an upper limit on the pixel size of the image sensor used in conjunction with a microscope.

In the following table, “R” represents the Rayleigh Criterion for the diffraction-limited spot size: R = 1.22λ/(2NA)

Objective (Mag/NA)   R (µm)   R × Mag on sensor (µm)   Max. pixel size (µm)
40X/0.75NA           0.45     0.45 × 40 = 18           9
4X/0.13NA            2.58     2.58 × 4 = 10.32         5.16

The above method can be used to estimate the performance of a camera and a microscope system, as long as the relevant parameters of the camera and the microscope objective are known. In the table below, we show the Limiting Resolution (in µm) and the Field of View for several commercially available microscope objectives when used with a 2048 x 2048 camera with 6.5µm pixels: this includes non-cooled sCMOS cameras such as the pco.panda 4.2 , pco.panda 4.2bi and the pco.panda 4.2bi uv . This table also applies to cooled sCMOS cameras such as the pco.edge 4.2 , pco.edge 4.2bi and pco.edge 4.2bi uv cameras.

Camera Parameters
Optical Format #Megapixels #H pixels #V pixels Pixel size (μm) H size (mm) V size (mm) Diagonal (mm)
18.8mm 4.2 2048 2048 6.5 13.31 13.31 18.83
Commercially Available Objectives
Objective: Mag/NA 1X\0.03NA 2X\0.1NA 4X\0.13NA 10X\0.26NA 20X\0.5NA 40X\0.75NA 60X\0.85NA 100X\1.4NA
Min. feature size (um) 11.18 3.36 2.58 1.29 0.67 0.45 0.39 0.24
Max pix size (um) 5.59 3.36 5.16 6.45 6.70 9.00 11.70 12.00
FN (mm) 22 22 22 22 22 22 22 22
WD (mm) 8 56.3 17.2 16 2.1 16 0.66 0.31-0.4
NA 0.03 0.1 0.13 0.26 0.5 0.75 0.85 1.4
Magnification 1 2 4 10 20 40 60 100
Estimated Performance of Camera and Objective Combinations
(*) Limiting Resolution (µm) 13 6.5 3.25 1.3 0.67 0.45 0.39 0.24
Vignetting? No No No No No No No No
FOV (H x V) mm 13.31 x 13.31 6.66 x 6.66 3.33 x 3.33 1.33 x 1.33 0.67 x 0.67 0.33 x 0.33 0.22 x 0.22 0.13 x 0.13
FOV (diagonal) mm 18.83 9.42 4.71 1.88 0.94 0.47 0.31 0.19
1) The minimum feature size is estimated using the Rayleigh Criteria at λ = 550nm; it can be re-calculated for other wavelengths. R = 1.22*λ/(2NA)
2) The maximum pixel size is estimated by applying the Nyquist criteria to the minimum feature size as it appears on the imager plane. Maximum pixel size = R*magnification/2
3) Camera and Objective combinations which meet the Nyquist criteria and are therefore diffraction limited are indicated by showing the Limiting Resolution in green font. (*)
4) Values for Working Distance (WD) and Field Number (FN) are shown for reference only. Actual values may vary.
5) Vignetting, a darkening of the image at the corners, may be observed if the Imager Diagonal is larger than the Field Number of the Objective.
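The estimates in the table follow directly from notes 1–3. A minimal sketch reproducing them (the function names are ours; the 6.5 µm pixel and 13.31 mm sensor side match the camera above):

```python
import math

def limiting_resolution_um(na: float, mag: float, pixel_um: float = 6.5,
                           wavelength_um: float = 0.55) -> float:
    """Limiting resolution at the sample plane for a camera + objective pair.

    R = 1.22 * wavelength / (2 * NA) is the Rayleigh spot size (note 1); the
    Nyquist criterion allows pixels up to R * mag / 2 (note 2). If the actual
    pixel is larger, the camera, not diffraction, limits resolution to two
    pixels referred back to the sample plane.
    """
    r = 1.22 * wavelength_um / (2.0 * na)   # Rayleigh criterion
    max_pixel_um = r * mag / 2.0            # Nyquist-limited pixel size
    if pixel_um <= max_pixel_um:
        return r                            # diffraction limited
    return 2.0 * pixel_um / mag             # sampling limited

def fov_mm(mag: float, sensor_side_mm: float = 13.31) -> float:
    """Horizontal (or vertical) field of view for one sensor side."""
    return sensor_side_mm / mag

print(round(limiting_resolution_um(0.75, 40), 2))  # 0.45 -> diffraction limited
print(round(limiting_resolution_um(0.13, 4), 2))   # 3.25 -> limited by the 6.5 um pixels
print(round(fov_mm(4), 2))                         # 3.33
```

The 4X/0.13NA case shows why its Limiting Resolution in the table (3.25 µm) is worse than the diffraction-limited 2.58 µm: the 6.5 µm pixel exceeds the 5.16 µm Nyquist limit.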

Dr. Jennifer Waters (Director of the Nikon Imaging Center, Harvard Medical School) provides additional insight on the topic of Numerical Aperture (NA). Numerical Aperture is critical because it limits both the resolution and the brightness of an image obtained from a microscope. She begins with the Rayleigh Criteria, showing how NA impacts resolution and then shows the physical properties that impact NA. This leads into an informative discussion about the benefits of higher NA.

The Point Spread Function (PSF) of a microscope is the basis for many practical and theoretical concepts in light microscopy, both basic and advanced. In the video, Dr. Jennifer Waters defines the PSF of a microscope and then shows how it relates to the resolution of a microscope. She further explains that the PSF is a result of diffraction and interference: this concept is concretized in the video with the help of animations and examples. The video is also very helpful in integrating the concept of the PSF with the well-known Rayleigh Criterion. Dr. Waters also connects the PSF to real-world insights: for example (citing Vogel et al., Science, 2006), one can fit ~3 million GFP molecules into the PSF maximum of a typical high-NA objective!

She likens the process of convolving each point source in the sample with the PSF as “stamping a PSF on every point-source in the sample”. The analogy she uses is of the PSF being like a paint brush of a particular size, used to create an optical image from all the point sources in an image. For this reason, objects in a diffraction limited image of a sample will never appear smaller than the PSF. This is shown clearly in both animated images and real-world examples of fluorescence microscopy images.


In this iBiology video, Dr. Bo Huang (UCSF) explains the concept of a Fourier transform, and links it to the operation of the objective of a microscope. He shows how the back focal plane of an objective provides a Fourier transform and uses this information to derive the equation for the diffraction limit.


RP Photonics


Diffraction

Author: the photonics expert Dr. Rüdiger Paschotta

Definition : wave phenomena which occur when light waves hit some structure with variable transmission or phase changes

This article belongs to the category general optics.

DOI: 10.61835/ijl

Diffraction is a general term for phenomena which can occur when light waves (or other waves) encounter certain structures. Some typical examples of diffraction effects are discussed in the following sections.

Although in everyday life one rarely encounters substantial diffraction effects with light, such effects are very common in optics and laser technology. In fact, the operation principles of various optical devices are essentially based on diffraction (→ diffractive optics ). Diffraction also plays a crucial role in many other devices, such as optical resonators and fibers .

Diffraction at a Single Slit

A common situation is that a narrow optical slit is uniformly illuminated with spatially coherent radiation from a monochromatic laser. Behind the slit, one can observe a diffraction pattern (see Figure 1) with the following features:

  • For each wavelength, there is a main maximum in the middle, and there are much weaker side maxima at larger angles.
  • For longer wavelengths, the central peak is broader, and the side peaks appear at larger angles.

diffraction at single slit

For a given wavelength, the first minimum of the intensity occurs where the phase difference between the contributions from the two edges of the slit reaches 2π. The intensity profile can be described with a sinc² function.
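The sinc² profile and the position of the first minimum can be sketched numerically (illustrative helper names; scalar far-field diffraction at normal incidence assumed):

```python
import math

def single_slit_intensity(theta_rad: float, slit_um: float, wavelength_um: float) -> float:
    """Normalized far-field intensity I/I0 = sinc^2(pi * a * sin(theta) / lambda)."""
    x = math.pi * slit_um * math.sin(theta_rad) / wavelength_um
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2

def first_minimum_deg(slit_um: float, wavelength_um: float) -> float:
    """First minimum where sin(theta) = lambda / a (phase difference 2*pi across the slit)."""
    return math.degrees(math.asin(wavelength_um / slit_um))

# A 10 um slit with 0.633 um HeNe light has its first minimum near 3.63 degrees,
# where the sinc^2 intensity drops to (numerically) zero.
theta0 = math.radians(first_minimum_deg(10.0, 0.633))
print(round(first_minimum_deg(10.0, 0.633), 2))          # 3.63
print(round(single_slit_intensity(theta0, 10.0, 0.633), 6))  # 0.0
```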

Diffraction at a Double Slit

In his famous double-slit experiment of 1803, Thomas Young used two closely spaced narrow optical slits . As he had no laser, he had to achieve spatially coherent illumination of the two slits by using a third narrow slit before them.

Figure 2 shows a calculated intensity profile for one particular wavelength. The rapid fringe oscillation arises from the interference of field contributions from the two different slits. The intensity profile is further slowly modulated by an envelope determined by the finite width of each slit.

diffraction at double slit

Figure 3 shows, with a color scale, the interference patterns for different wavelengths. The patterns for longer wavelengths involve correspondingly larger diffraction angles.

diffraction at double slit

Diffraction at Circular Apertures

If a light beam (for example a laser beam ) encounters some aperture which transmits the light in some regions and blocks it otherwise, the immediate effect on the transmitted light is only the corresponding truncation of the intensity profile. Only at some distance behind the aperture can characteristic diffraction effects be observed.

Figure 4 shows a simulated example, where an originally Gaussian beam has been truncated at a centered circular hard aperture. During the further propagation in air, the intensity profile develops a complicated structure due to diffraction. For a soft aperture (Figure 5), causing a smooth intensity drop at the edge, the diffraction pattern is smoother.

diffraction at aperture

Such diffraction effects can be well understood and calculated based on Fourier optics . The hard aperture introduces high optical frequencies, corresponding to rapid spatial changes of intensity.

Such effects can also occur, for example, when trying to force a laser into single transverse mode operation (for optimum beam quality ) by inserting a hard aperture into the laser resonator . Although such an aperture can provide substantially higher round-trip losses for higher-order resonator modes , compared with those for the fundamental mode, it also introduces diffraction effects. Therefore, the method often does not work that well.

The angular resolution of many optical instruments such as telescopes is also limited by diffraction, e.g. at the input aperture. That resolution limit can be estimated as roughly the wavelength divided by the aperture diameter.
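That estimate is easy to compute; the sketch below uses the Rayleigh form θ = 1.22·λ/D, a slight refinement of the rough λ/D figure quoted above (the function name is ours):

```python
import math

def angular_resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion):
    theta = 1.22 * lambda / D in radians, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# An 8 m telescope at 550 nm vs. a 10 cm amateur telescope:
print(round(angular_resolution_arcsec(550e-9, 8.0), 3))  # 0.017
print(round(angular_resolution_arcsec(550e-9, 0.1), 2))  # 1.38
```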

Apertures are not always circular. Figures 6 and 7 show an example case, where a laser beam is truncated with a blade.

diffraction at a blade

Most lasers and laser optics are designed such that there are only negligibly weak diffraction effects due to hard apertures. This implies that all laser mirrors , for example, must be so large that essentially the whole beam profile can be reflected.

Note that the diffraction effects are intrinsically dependent on the optical wavelength . For polychromatic beams, the resulting spatial patterns can substantially differ between different wavelength components. Therefore, it is possible that one observes colors for a white input beam, for example. The classical case is that of a diffraction grating , which is discussed further below.

Divergence of Laser Beams

Even without any aperture, a laser beam always exhibits some amount of diffraction according to its transverse spatial limitation. For Gaussian beams , the shape of the intensity profile is preserved, i.e., it stays Gaussian; only the beam radius gradually increases. This property of preserved intensity profile shapes also applies for other kinds of free-space modes , e.g. to Hermite–Gaussian modes . In general, however, diffraction leads to changes of the shape of the intensity profile, as can be seen e.g. in Figure 1.

Laser beams are often diffraction-limited , i.e., their expansion during propagation is not stronger than caused by diffraction alone.

Strong diffraction effects occur for light with long wavelengths. For example, difference frequency generation of long-wavelength beams can be severely limited in performance by diffraction of the generated beam, which limits the interaction length or enforces weaker beam focusing .

Diffraction and Resonator or Waveguide Modes

Diffraction effects also play a crucial role for the formation of certain kinds of modes . For example, there are modes of optical fibers , for which (by definition) the intensity profile remains constant during propagation. Such modes are formed by two counteracting effects:

  • Diffraction alone would tend to widen a beam, as discussed above.
  • Waveguide effects from a refractive index profile of the fiber provide a kind of focusing.

For the fiber modes, these two effects exactly balance each other. Similarly, resonator modes exhibit a balance of diffraction and focusing effects, only that the latter are usually lumped rather than distributed in the resonator.

Good stability of such modes is achieved when the two counteracting effects are relatively strong, so that any additional effects (e.g. imperfections of a fiber structure, bending of a fiber or misalignment of a resonator element) have comparatively weak effects. Poor stability arises in situations where both effects are weak – for example, in a laser resonator where the Rayleigh length of the beam is much larger than the resonator length. Such situations can arise e.g. when developing Q-switched lasers with large mode radii and short laser resonators.

Diffraction at Periodic and Non-periodic Structures

beams at a diffraction grating

Diffraction effects can also occur when a light beam encounters a structure which causes spatially periodic changes of the optical intensity (via a variable absorbance ) or of the optical phase (e.g. via a variable refractive index or a height profile). Such structures are called diffraction gratings , and the phenomenon is called Bragg diffraction . If a grating exhibits a large number of oscillations within the beam profile, there can be multiple diffracted output beams (see Figure 8), each of which has a similar spatial shape to the input beam. The directions of the output beams (except that of the zero-order beam) depend on the optical wavelength. That effect is exploited e.g. in grating spectrometers .
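For normal incidence, the propagating orders of such a grating follow m·λ = d·sin(θₘ). A minimal sketch enumerating them (the helper name is an assumption):

```python
import math

def grating_orders(d_um: float, wavelength_um: float):
    """Propagating diffraction orders m with sin(theta_m) = m*lambda/d
    at normal incidence; returns {m: angle in degrees}."""
    m_max = int(d_um / wavelength_um)  # |m*lambda/d| must stay <= 1
    return {m: math.degrees(math.asin(m * wavelength_um / d_um))
            for m in range(-m_max, m_max + 1)}

# A 1200 lines/mm grating (d ~0.833 um) at 0.633 um supports only orders -1, 0, +1.
orders = grating_orders(1.0 / 1.2, 0.633)
print(sorted(orders))        # [-1, 0, 1]
print(round(orders[1], 1))   # 49.4
```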

Diffraction can also be caused by refractive index modulations in some volume of a medium. For example, there are volume Bragg gratings which can be used as wavelength-dependent reflectors . Also, Bragg diffraction is possible based on sound waves in a medium; this is exploited in acousto-optic modulators .

Diffraction effects can also occur in reflection. In fact, most diffraction gratings are reflective elements.

Of course, diffraction effects also occur at non-periodic structures. For example, the phenomenon of laser speckle occurs when a laser beam is scattered on a rough surface, which in effect causes a complicated phase modulation pattern on the beam. Very noticeable speckle effects can be observed with quasi-monochromatic light as obtained from lasers. This is not the case for broadband ( temporally incoherent ) light because the obtained patterns have a strong wavelength dependence, such that the averaging of intensities over some wavelength range effectively washes out such patterns.

Diffractive Optics

There are various other kinds of optical elements which exploit diffraction effects. For example, there are diffractive beam splitters with multiple outputs, and similar devices are used for coherent beam combining . For more details, see the article on diffractive optics .

Diffraction and Interference

Diffraction effects can be explained based on the interference of different contributions of a field profile to the resulting fields at distant locations ( Huygens–Fresnel principle ). There is actually no clear boundary between diffraction and interference. For example, the transmission of light through a narrow slit (aperture) is usually described in terms of diffraction, while phenomena behind a double slit are called interference phenomena. However, the basic principle of interference can be applied to both cases.

Different Regimes of Diffraction

Different regimes of diffraction are distinguished, which can be treated with different mathematical methods. Fraunhofer diffraction is relevant when considering the far field , i.e., diffraction patterns far away from the diffracting structure; this regime is characterized by values of the Fresnel number well below 1. The concept of Fresnel diffraction , with large Fresnel numbers, applies to cases where the near field is relevant.
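The Fresnel number N_F = a²/(λ·L), with a the aperture radius and L the observation distance, makes the distinction concrete. A small sketch (the names and the sharp N_F = 1 cutoff are illustrative simplifications):

```python
import math

def fresnel_number(aperture_radius_m: float, wavelength_m: float, distance_m: float) -> float:
    """Fresnel number N_F = a^2 / (lambda * L)."""
    return aperture_radius_m ** 2 / (wavelength_m * distance_m)

def regime(n_f: float) -> str:
    """Rough classification: N_F well below 1 -> Fraunhofer (far field), otherwise Fresnel."""
    return "Fraunhofer" if n_f < 1.0 else "Fresnel"

# 1 mm radius aperture, 633 nm light, at increasing distances:
for L in (0.1, 1.0, 100.0):
    n_f = fresnel_number(1e-3, 633e-9, L)
    print(f"L = {L:6.1f} m: N_F = {n_f:8.3f} ({regime(n_f)})")
```

The far-field (Fraunhofer) pattern of a millimetre-scale aperture in visible light is only reached tens of metres downstream, which is why lab observations close behind an aperture show Fresnel structure.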

Diffraction-limited Performance of Optical Instruments

The performance of various kinds of optical instruments such as microscopes is essentially limited by diffraction effects. The limited transverse size of the entrance aperture or of internal elements causes diffraction effects which set a minimum spot size of the so-called point spread function. Therefore, optical microscopes (including laser microscopes ) are usually limited in resolution to the order of half the optical wavelength . There are a few exceptions to that limitation, for example near-field microscopes (using an optical tip of sub-wavelength size for scanning objects) or certain kinds of fluorescence microscopy (STED).

Similar performance limitations apply to optical telescopes . Limiting diffraction effects (for optimum angular resolution) requires the use of large optical apertures.

More to Learn

Encyclopedia articles:

  • diffraction gratings
  • diffraction-limited beams
  • resonator modes
  • laser speckle


News & Views | Published: 20 June 2002

Light microscopy

Beyond the diffraction limit?

  • Ernst H. K. Stelzer 1  

Nature volume 417, pages 806–807 (2002)


The wave nature of light manifests itself in diffraction, which hampers attempts to determine the location of molecules. Clever use of microscopic techniques might now be circumventing the 'diffraction limit'.




Author information

Cell Biology and Biophysics Programme, European Molecular Biology Laboratory, Meyerhofstrasse 1, Heidelberg, 69117, Germany

Ernst H. K. Stelzer

Cite this article:

Stelzer, E. Beyond the diffraction limit? Nature 417, 806–807 (2002). https://doi.org/10.1038/417806a



Journal of Applied Crystallography


Laboratory Notes

Open Access

Use of a confocal optical device for centring a diamond anvil cell in single-crystal X-ray diffraction experiments

a EastChem School of Chemistry and Centre for Science at Extreme Conditions, The University of Edinburgh, King's Buildings, West Mains Road, Edinburgh EH9 3FJ, United Kingdom, b Micro-Epsilon UK Ltd, No. 1 Shorelines Building, Shore Road, Birkenhead CH41 1AU, United Kingdom, and c Bruker AXS GmbH, Oestliche Rheinbrueckenstrasse 49, 76187 Karlsruhe, Germany * Correspondence e-mail: [email protected]

High-pressure crystallographic data can be measured using a diamond anvil cell (DAC), which allows the sample to be viewed only along a cell vector which runs perpendicular to the diamond anvils. Although centring a sample perpendicular to this direction is straightforward, methods for centring along this direction often rely on sample focusing, measurements of the direct beam or short data collections followed by refinement of the crystal offsets. These methods may be inaccurate, difficult to apply or slow. Described here is a method based on precise measurement of the offset in this direction using a confocal optical device, whereby the cell centre is located at the mid-point of two measurements of the distance between a light source and the external faces of the diamond anvils viewed along the forward and reverse directions of the cell vector. It is shown that the method enables a DAC to be centred to within a few micrometres reproducibly and quickly.

Keywords: diamond anvil cells ; high-pressure experiments ; sample alignment .


Figure: components of a Merrill–Bassett DAC. Reproduced with permission from Moggach (2008).

2.1. Sample preparation

Single-crystal X-ray diffraction data were collected on a Bruker AXS D8 Venture three-circle (2 θ , ω and φ with χ fixed at 54.74°) diffractometer incorporating an Incoatec Mo K α ( λ = 0.71073 Å) microsource.


A diffractometer configuration showing the confocal device mounted on the diffractometer stage. The inset shows a steel ball mounted on a goniometer head as used for the sensor alignment.

2.2. Centring procedure

Initial centring of the crystal along the cell vector can also be performed using the video camera. An initial reading is taken on the video camera stage micrometer, and the camera focus is then adjusted so that the sample is in focus. A second micrometer reading is taken and the video camera is moved back to the average of the two micrometer readings. The image focus is then re-established using the goniometer head adjustor screw parallel to the viewing direction of the video camera. The cell can be rotated by 180° in φ to check that the sample remains in focus; if not, the procedure can be iterated until the sample is in focus when viewed along both forward and reverse directions along the cell vector. The success of this method relies on both diamonds having the same thickness.
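Both the focus-based procedure above and the confocal method described in the abstract reduce to simple mid-point arithmetic. A minimal sketch (the function names are ours, not from the paper; the confocal variant assumes diamonds of equal thickness, as the focus method does):

```python
def camera_midpoint_position(focus_reading_1_mm: float, focus_reading_2_mm: float) -> float:
    """Stage micrometer position halfway between two in-focus readings.

    Focus-based centring: focus on the sample, record the micrometer reading,
    adjust focus, record again, then park the camera at the mid-point and
    restore focus with the goniometer-head adjustor screw.
    """
    return 0.5 * (focus_reading_1_mm + focus_reading_2_mm)

def confocal_centre_offset_um(dist_forward_um: float, dist_reverse_um: float) -> float:
    """Offset of the cell centre along the cell vector, taken as half the
    difference of the confocal distance readings measured along the forward
    and reverse directions of the cell vector."""
    return 0.5 * (dist_forward_um - dist_reverse_um)

print(round(camera_midpoint_position(12.40, 12.64), 2))  # 12.52
print(confocal_centre_offset_um(5230.0, 5170.0))         # 30.0
```

A non-zero offset from `confocal_centre_offset_um` gives directly the translation required along the cell vector, which is what makes the numerical-feedback (and potentially automated) centring discussed below possible.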


The user interface for the confocal device. The red and blue traces refer to the raw and optically corrected readings, respectively.

2.3. Alignment of the sensor

2.4. Validation of centring using diffraction


Strategy for short data collections used for validation of centring

2θ = 0° and χ = 54.74° in all runs.

Run Scan angle (°) Fixed angle (°)
1 13.00 to −17.00 in ω φ = 270.00
2 13.00 to −17.00 in ω φ = 90.00
3 65.00 to 115.00 in φ ω = 0.00

2.5. Data collections


Strategy for diffraction data collections

χ = 54.74° in all runs.

Run 2θ (°) φ (°) ω range (°)
1 11.00 270.00 7.00 to 22.00
2 11.00 270.00 15.00 to −45.00
3 −11.00 270.00 345.00 to 360.00
4 −11.00 270.00 353.00 to 320.00
5 11.00 270.00 187.00 to 220.00
6 11.00 270.00 195.00 to 180.00
7 −11.00 270.00 165.00 to 225.00
8 −11.00 270.00 173.00 to 158.00
9 11.00 90.00 7.00 to 22.00
10 11.00 90.00 15.00 to −45.00
11 −11.00 90.00 345.00 to 360.00
12 −11.00 90.00 353.00 to 320.00
13 11.00 90.00 187.00 to 220.00
14 11.00 90.00 195.00 to 180.00
15 −11.00 90.00 165.00 to 225.00
16 −11.00 90.00 173.00 to 158.00

3.1. Centring procedure using the optical sensor

Although diamonds are optically transparent, it is sometimes not possible to obtain a clear view of a sample from both sides of a DAC. This may be because a crystal has been grown in situ and the crystalline region of the sample is obscured; a sample may have fragmented; the sensitivity of a sample may mean that it needed to be loaded quickly with contaminated mother liquor as a pressure-transmitting medium; the medium may partially dissolve the sample and become coloured; optical effects in partially vitrified media may also occur; or, sadly, the outer faces of the diamonds may be dirty. In short, there are many reasons why a clear view of a sample might not be obtained, which can make methods of centring based on focusing the sample image from two opposite directions difficult to apply. Moreover, even when a clear image can be obtained, assessment of whether an image is focused or not can be somewhat subjective and dependent on the quality of the lighting and optics on the viewing device being used. Our aim in incorporating the optical sensor into the centring procedure for a DAC was to replace focus-based centring methods with one which is both based on numerical measurements and less sensitive to the characteristics of the sample. Although the method has been applied to DACs, it could in principle be used for any experiment where the view of the sample is restricted, for example when a sample is surrounded by other material, such as can occur in a capillary.


The shift required between the mid-point of the diamond culets (dark grey) and the centre of the crystal (light grey) is half the difference between the gasket depth (typically measured during indenting) and the thickness of the crystal.

Although in-house high-pressure work is usually still carried out using conventional manual goniometer heads, motorized heads are becoming much more common for ambient-pressure measurements. Very convenient procedures are available that allow a user to select the centre of a sample with a mouse click or even rely on image-recognition algorithms to identify the sample. Application of this approach to high-pressure work would be very attractive because the precision of adjustments on motorized heads is finer than that on manual heads, provided the weight of the cell can be accommodated. Use of a motorized goniometer head would be immediately applicable to centring perpendicular to the cell vector. The numerical feedback provided by the confocal centring procedure described here would also provide the distance adjustments required for centring along the cell vector, introducing the potential for essentially automated DAC centring.

3.2. Data collection tests


Crystal and refinement data for glyphosate collected at different offsets (in µm) along the cell vector. Axes are defined with x along the X-ray beam from source to sample, z vertical and pointing up, and y making a right-handed set. The DAC was mounted so that the cell vector would lie along x if all the setting angles were at zero, so that the values of the offset in this table correspond to displacements along the cell vector.

Empirical formula: C₃H₈NO₅P. Crystal system: monoclinic. Space group: P2₁/c. Resolution limit: 0.7 Å.

Offset (µm)                  0 (centred)   30            60            −30           −60
a (Å)                        8.6274 (12)   8.6261 (13)   8.6232 (11)   8.6262 (12)   8.6274 (13)
b (Å)                        7.7307 (5)    7.7305 (6)    7.7303 (5)    7.7299 (5)    7.7301 (6)
c (Å)                        9.4613 (7)    9.4604 (7)    9.4606 (6)    9.4620 (7)    9.4631 (7)
α (°)                        90.00         90.00         90.00         90.00         90.00
β (°)                        109.406 (8)   109.414 (8)   109.414 (7)   109.417 (8)   109.407 (9)
γ (°)                        90.00         90.00         90.00         90.00         90.00
V (Å³)                       595.18 (11)   594.99 (11)   594.79 (10)   595.04 (11)   595.24 (11)
Domain translation x (mm)    −0.004 (5)    −0.004 (5)    −0.005 (5)    −0.013 (5)    −0.009 (5)
Domain translation y (mm)    0.004 (10)    0.027 (9)     0.051 (10)    −0.0045 (11)  −0.053 (9)
Domain translation z (mm)    0.000 (5)     0.002 (5)     −0.004 (5)    −0.004 (5)    0.000 (5)
R_int (%)                    2.56          2.65          2.85          2.78          2.60
R_1 (all data) (%)           5.44          5.39          6.41          6.48          6.42
R_1 [I ≥ 2σ(I)] (%)          4.17          4.37          4.34          4.56          4.30
Total No. of reflections     2693          2772          2672          2724          2520
No. of unique reflections    339           356           352           362           362
Reflections with I ≥ 2σ(I)   296           299           300           292           293
Completeness (%)             27.0          26.8          26.7          26.9          25.7
Average I/σ(I)               27.70         27.49         26.98         26.95         24.61
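As a quick consistency check, the deliberately applied offsets can be converted to millimetres and set beside the refined domain-translation component that appears to track them. The values below are transcribed from the table; pairing the offsets with this particular translation component is an assumption made for illustration.

```python
# Applied centring offsets (µm) and the refined domain-translation
# component that tracks them (mm), transcribed from the table above.
applied_um = [0, 30, 60, -30, -60]
refined_mm = [0.004, 0.027, 0.051, -0.0045, -0.053]

for off_um, trans_mm in zip(applied_um, refined_mm):
    applied_mm = off_um / 1000.0  # convert µm to mm for comparison
    print(f"offset {off_um:+4d} µm: applied {applied_mm:+.3f} mm, "
          f"refined {trans_mm:+.4f} mm")
```

For the +30 and +60 µm displacements the refined translations (0.027 and 0.051 mm) agree with the applied values to within roughly their quoted uncertainties, which is the kind of numerical feedback the centring procedure relies on.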

Scale variation graphs for the measurements with Mo X-ray radiation at the different offsets applied.

‡ Current address: Renishaw plc, New Mills, Wotton-under-Edge GL12 8JR, United Kingdom.

Acknowledgements

Funding information

We thank the Engineering and Physical Sciences Research Council (grant number EP/R042845/1 to Simon Parsons) and the University of Edinburgh for funding.

This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence , which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.

