
The Pockels Effect and the Kerr Effect

The electro-optic effect describes the phenomenon in which an applied voltage alters the refractive index of a material. The electro-optic effect lays the groundwork for many optical and photonic devices. One such application is the electro-optic modulator.

If we consider a waveguide or even a lens, such as demonstrated through problems in geometrical optics, we know that the refractive index can alter the direction of propagation of a transmitted beam. A change in refractive index also changes the speed of the wave. The change of light propagation speed in a waveguide acts as phase modulation. The applied voltage is the modulated information and light is the carrier signal.

The electro-optic effect comprises both a linear and a non-linear component. The full form of the electro-optic effect equation is as follows:


With an applied field E, the resultant change in refractive index is Δn = rE + PE^2: the linear term rE is the Pockels Effect and the quadratic term PE^2 is the Kerr Effect.

The Pockels Effect is dependent on the crystal structure and symmetry of the material, along with the direction of the electric field and light wave.
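The decomposition above can be sketched numerically. A minimal Python sketch, with purely illustrative (not material-specific) coefficient values:

```python
def delta_n(E, r, P):
    """Change in refractive index for an applied field E (V/m).

    r: linear (Pockels) coefficient, P: quadratic (Kerr) coefficient.
    """
    return r * E + P * E**2

# Illustrative coefficients only -- real values depend on the crystal:
r = 1e-12   # m/V
P = 1e-21   # m^2/V^2
E = 1e6     # V/m

linear = r * E          # Pockels contribution
quadratic = P * E**2    # Kerr contribution
total = delta_n(E, r, P)  # dominated by the linear term at this field strength
```

At moderate field strengths the linear Pockels term dominates, which is why Pockels cells are the workhorse of electro-optic modulation in non-centrosymmetric crystals.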



Receiver Dynamic Range

Dynamic range is a fairly general term for the ratio (sometimes called DNR) of the highest acceptable value to the lowest acceptable value of some quantity. It can be applied to a variety of fields, most notably electronics and RF/microwave applications, and is typically expressed on a logarithmic scale. Dynamic range is an important figure of merit because weak signals often need to be received alongside stronger ones, all while rejecting unwanted signals.

Due to spherical spreading of waves and the two-way nature of RADAR, losses experienced by the transmitted signal are proportional to 1/R^4. This leads to great variance in the return signal across the dynamic range of the system. For RADAR receivers, mixers and amplifiers contribute the most to the system's dynamic range and Noise Figure (also in dB). The lower end of the dynamic range is limited by the noise floor, which accounts for the accumulation of unwanted environmental and internal noise in the absence of a signal. The total noise floor of a receiver can be estimated by adding the noise figure (in dB) of each component. Applying a signal raises the level above the noise floor, and the upper limit is set by saturation of the amplifier or mixer. For a linear amplifier, the upper end is the 1 dB compression point: below this point, the output increases by a constant number of dB for a given dB increase at the input, and at the 1 dB compression point the output has fallen 1 dB below that linear trend. Past the 1 dB compression point, the amplifier deviates further from this pattern.
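The noise-floor arithmetic described above can be sketched in a few lines. A minimal Python sketch, assuming the usual -174 dBm/Hz thermal noise density at room temperature and illustrative receiver numbers:

```python
import math

def noise_floor_dbm(bandwidth_hz, nf_db):
    # kTB at 290 K is approximately -174 dBm/Hz
    return -174 + 10 * math.log10(bandwidth_hz) + nf_db

def dynamic_range_db(p1db_dbm, bandwidth_hz, nf_db):
    # Upper end: 1 dB compression point. Lower end: noise floor.
    return p1db_dbm - noise_floor_dbm(bandwidth_hz, nf_db)

# Assumed example receiver: 1 MHz bandwidth, 8 dB noise figure, +10 dBm P1dB
nf = noise_floor_dbm(1e6, 8)        # -106 dBm
dr = dynamic_range_db(10, 1e6, 8)   # 116 dB
```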


The other points in the figure are the third- and second-order intercept points. Generally, the third-order intercept point is the one most quoted on data sheets, as third-order distortion is the most troublesome: the intermodulation products 2f_2 - f_1 and 2f_1 - f_2 fall close to the fundamentals. If the device's linear response is extrapolated, the third-order intercept point is where the extrapolated third-order distortion line intersects that line of constant slope. In a sense, then, the third-order intercept point is a measure of linearity. As shown in the figure, the third-order distortion rises with a 3:1 slope, and the point where it intercepts the linear output is (IIP3, OIP3). This intercept point tends to be used as a rule of thumb, as the system is assumed to be "weakly nonlinear," which does not necessarily hold up in practice.

Often manual or automatic gain control is employed to achieve the desired receiver dynamic range. This is necessary because such a wide variety of signal strengths is received. The dynamic range can be around 120 dB or higher, for instance.

Another term used is spurious-free dynamic range (SFDR). Spurs are unwanted frequency components generated in the receiver by the mixer, ADC or any other nonlinear component. The quantity represents the distance between the fundamental tone and the largest spur.
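A commonly quoted rule of thumb relates the third-order-limited spurious-free dynamic range to the third-order intercept point and the noise floor: SFDR ≈ (2/3)(IIP3 - noise floor). A sketch with illustrative numbers:

```python
def sfdr_db(iip3_dbm, noise_floor_dbm):
    # Two-thirds rule for third-order-limited SFDR:
    # third-order products grow 3 dB per 1 dB of input, so the spur
    # crosses the noise floor at two-thirds of the distance to IIP3.
    return (2.0 / 3.0) * (iip3_dbm - noise_floor_dbm)

# e.g. an IIP3 of +20 dBm against a -106 dBm noise floor
margin = sfdr_db(20, -106)  # 84 dB
```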

Semiconductor Growth Technology: Molecular Beam Epitaxy and MOCVD

The development of advanced semiconductor technologies presents one important challenge: fabrication. Two fabrication methods used in bandgap engineering are Molecular Beam Epitaxy (MBE) and Metal-Organic Chemical Vapour Deposition (MOCVD).

Molecular Beam Epitaxy uses an ultra-high vacuum to fabricate compound semiconductor materials. Atoms, or molecules containing the desired atoms, are directed at a heated substrate. Molecular Beam Epitaxy is highly sensitive. The vacuums make use of diffusion pumps or cryo-pumps: diffusion pumps for gas-source MBE and cryo-pumps for solid-source MBE. Effusion cells in the MBE system allow the flow of molecules through small holes without collision. RHEED, which stands for Reflection High-Energy Electron Diffraction, reflects high-energy electrons off the surface and registers information about the epitaxial growth, such as surface smoothness and growth rate. The growth chamber is heated to around 200 degrees Celsius, while substrate temperatures are kept in the range of 400-700 degrees Celsius.

MBE is not suitable for large scale production due to the slow growth rate and higher cost of production. However, it is highly accurate, making it highly desired for research and highly complex structures.



MOCVD is a more popular method for growing layers on a semiconductor wafer. MOCVD is primarily chemical: elements are deposited as complex chemical compounds containing the desired elements, and the remainder is evaporated away. MOCVD does not require a high-intensity vacuum. The process can be used for a large number of optoelectronic devices with specific properties, including quantum wells. High-quality semiconductor layers at the micrometer level are grown using this process. MOCVD involves a number of toxic compounds, including AsH3 (arsine) and PH3 (phosphine).

MOCVD is recommended for simpler devices and for mass production.



Discrete Time Filters: FIR and IIR

There are two basic types of digital filters: FIR and IIR. FIR stands for Finite Impulse Response and IIR stands for Infinite Impulse Response. The output of any discrete-time filter can be described by a "difference equation," which is similar to a differential equation but contains no derivatives. An FIR filter is described by a moving average, or weighted sum of past inputs. IIR filter difference equations are recursive in the sense that they include both a weighted sum of past inputs and a weighted sum of past outputs.


As an example, the IIR difference equation y[n] = a_1·y[n-1] + b_0·x[n] + b_1·x[n-1] contains a past-output term (the first term on the right-hand side), which is what makes it recursive.

The FIR filter has a finite impulse response because it decays to zero in a finite length of time; in the discrete-time case, the impulse response is the output of the system in response to a Kronecker delta input. In the IIR case, the impulse response decays but never reaches zero. The system function H(z) of an FIR filter has zeros, with poles only at z = 0. The IIR filter is more flexible and can place poles anywhere on the pole-zero plot.

The following is a block diagram of a two-stage FIR filter. As shown, there is no recursion, simply a weighted sum. The triangles represent the values of the impulse response at particular times. These sorts of diagrams represent the difference equations and express the output as a weighted sum of the inputs. The z^-1 blocks can be thought of as memory storage locations in a computer.


In contrast, the IIR filter contains recursion or feedback, as past outputs are fed back and added to the input. This feedback leads to a nontrivial denominator in the transfer function of the filter. The stability of the filter can be tested by observing the transfer function's pole-zero plot in the z-domain.
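The FIR/IIR contrast above can be demonstrated directly by applying both difference equations to a Kronecker delta. A minimal Python sketch (coefficients chosen arbitrarily for illustration):

```python
def fir_filter(x, b):
    """Moving average (FIR): weighted sum of past inputs only."""
    y = []
    for n in range(len(x)):
        y.append(sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0))
    return y

def iir_filter(x, b, a):
    """Recursive (IIR): weighted past inputs plus weighted past outputs."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

impulse = [1.0] + [0.0] * 19
h_fir = fir_filter(impulse, b=[0.5, 0.3, 0.2])      # zero after 3 samples
h_iir = iir_filter(impulse, b=[1.0], a=[0.0, 0.5])  # 1, 0.5, 0.25, ... never zero
```

Running this, `h_fir` is exactly zero past its three taps (finite response), while `h_iir` keeps halving forever (infinite response), which is the defining distinction described above.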


Overall, IIR filters have an advantage over FIR filters in implementation efficiency: a lower-order IIR filter can achieve the same result as a higher-order FIR filter. A lower-order filter requires fewer operations and is therefore less computationally expensive. However, FIR filters have a distinct advantage in ease of design. This mainly comes into play when designing filters with linear phase (constant group delay with frequency), which is very hard to do with an IIR filter.

The Acoustic Guitar – Intro

We will continue our study of sound by briefly analyzing the acoustic guitar: an instrument that uses certain physical properties to "amplify" sound acoustically (not strictly true, as no energy is added) rather than through electromagnetic induction or piezoelectric means (though piezoelectric pickups are common on acoustic-electric guitars). A guitar can be tuned many ways, but standard (E standard) tuning is E-A-D-G-B-E across the six strings from top to bottom, or thickest string to thinnest. The tuning can be changed on the fly, which differentiates the guitar from something like a harp, on which the tension of the strings cannot be adjusted.

Just as the tuning pegs on a guitar can be loosened or tightened to change the tension, the fretting hand can be used to change the length of the string. Both of these affect the frequency, or perceived pitch. In fact, two other qualities of the string (density and thickness) also affect the frequency. These can be related through Mersenne's rule:


As shown, the frequency is inversely proportional to the length and diameter of the string and to the square root of its density, and proportional to the square root of the tension, so tightening the string tunes it up.

The basic operation of the guitar is that plucking or strumming strings will cause a disturbance in the air, displacing air particles and causing buildups of pressure “nodes” and “antinodes”. This leads to the creation of a longitudinal pressure wave which is perceived by the human ear as sound. However, a string on its own does not displace much air, so the rest of the guitar is needed. The soundboard (top) of the guitar acts as an impedance matching network between the string and air by increasing the surface area of contact with the air. Although this does not amplify the sound since no external energy is applied, it does increase the sound intensity greatly. So in a sense the soundboard (typically made of spruce or a good transmitter of sound) can be thought of as something like an electrical impedance matching transformer. The acoustic guitar also employs acoustic resonance in the soundhole. As with the soundboard, the soundhole also vibrates and tends to resonate at lower frequencies. When the air in the soundhole moves in phase with the strings, sound intensity increases by about 3 dB. So basically, the sound is being coupled from the string to the soundboard, from the soundboard to the soundhole and from both the soundhole and soundboard to the external air. The bridge is the part of the guitar that couples the string vibration to the soundboard. This creates a reasonably loud pressure wave.

In terms of wood, the typical wood used for guitar making has a high stiffness-to-weight ratio. Spruce has an excellent stiffness-to-weight ratio, with a high modulus of elasticity and moderately low density. Rosewood tends to be used for the back and sides of a guitar. The main thing to note here is that the guitar is made of wood because wood does not carry vibrations well; as a result, the air echoes within the guitar instead, creating a sound that is pleasant to the ear. Another factor, of course, is cost.

Strings produce a fundamental frequency as well as harmonics and overtones, which lead to a distinct sound. If you fret a string at the twelfth fret, this is the halfway point of the string; the result is the first overtone, with double the frequency. It is important to note that the frets of a guitar get closer together as you move towards the bridge. The spacing can be calculated since the wave speed on the string is constant (c = fλ). Each successive semitone is a factor of 1.0595 (the twelfth root of 2) higher in pitch, so the first fret is placed such that the remaining length from fret to bridge is the full scale length divided by 1.0595. This continues on, with 1.0595 raised to a higher and higher power depending on which fret is being observed.
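The fret-spacing rule above can be sketched directly: fret n leaves a vibrating length equal to the scale length divided by 2^(n/12). Python sketch (the 648 mm scale length is an assumed, typical value):

```python
def fret_distances(scale_length, n_frets=12):
    """Distance from the bridge to each fret.

    Each semitone divides the vibrating length by 2**(1/12) ~ 1.0595.
    Index 0 is the nut (open string).
    """
    return [scale_length / (2 ** (n / 12)) for n in range(n_frets + 1)]

d = fret_distances(648.0)  # 648 mm scale length, an assumed value
# d[12] lands at exactly half the scale length -- the octave
```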

Microstrip Antenna – Cavity Model

The following is an alternative modelling technique for the microstrip antenna, somewhat similar to the analysis of acoustic cavities. Like all cavity problems, boundary conditions are important. For the microstrip antenna, the model is used to calculate the radiated fields of the antenna.

Two boundary conditions will be imposed: PEC (perfect electric conductor) and PMC (perfect magnetic conductor). On a PEC, the tangential component of the E field is zero and the normal component of the H field is zero. On a PMC, the opposite is true.


This supports the TM (transverse magnetic) mode of propagation, which means the magnetic field is orthogonal to the propagation direction. In order to use this model, a time independent wave equation (Helmholtz equation) must be solved.


The solution to any wave equation will have wavelike properties, which means it will be sinusoidal. The solution looks like:


Integer multiples of π satisfy the boundary conditions because the vector potential must be at a maximum at the boundaries in x, y and z. The mode integers cannot all be zero simultaneously. The resonant frequency can be solved for as shown:


The units work out: 1/sqrt(με) in the expression corresponds to the velocity of propagation (m/s), the 2π term carries radians, and the rest of the expression is the magnitude of the k vector, or wavenumber (rad/m); together these give units of inverse seconds, or Hz. Different modes can be solved by plugging in various integers and solving for the frequency in Hz. The lowest resonant mode is found to be f_010, which is intuitive because the longest dimension is L (which appears in the denominator). The f_000 mode cannot exist, as it would yield a trivial solution of 0 Hz frequency. The field components for the dominant (lowest-frequency) mode are given.
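For the dominant f_010 mode on a nonmagnetic substrate, the general expression reduces to f = c/(2L·sqrt(ε_r)). A quick Python check with assumed example dimensions:

```python
import math

C0 = 299792458.0  # speed of light in vacuum, m/s

def f_010(length_m, eps_r):
    """Dominant-mode resonant frequency of the microstrip cavity:
    f = c / (2 * L * sqrt(eps_r)), for a nonmagnetic substrate."""
    return C0 / (2 * length_m * math.sqrt(eps_r))

# Assumed example: L = 4 cm patch on a substrate with eps_r = 2.2
f = f_010(0.04, 2.2)  # on the order of 2.5 GHz
```

Doubling L halves the resonant frequency, consistent with the longest dimension setting the lowest mode.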




Microstrip Patch Antennas Introduction – Transmission Line Model

Microstrip antennas (or patch antennas) are extremely important in modern electrical engineering for the simple fact that they can be printed directly onto a circuit board. This makes them essential for things like GPS, communication with cell towers, and Bluetooth/WiFi. Patch antennas are notoriously narrowband, especially those with a rectangular shape (patch antennas come in a wide variety of shapes). They can be configured as single antennas or in an array. The excitation is usually fed by a microstrip line, which typically has a characteristic impedance of 50 ohms.

One of the most common methods for analyzing microstrip antennas is the transmission line model. It is important to note that the microstrip line does not support a true TEM mode, unlike the coaxial cable with its radial symmetry. The microstrip line supports a quasi-TEM mode, in which there is a small field component along the direction of propagation. For the purposes of the model, this component can be ignored and a pure TEM mode (no field component in the direction of propagation) assumed. This reduces the model to:


Where the effective dielectric constant can be approximated as:


The width of the strip must be greater than the height of the substrate for this approximation. It is important to note that the dielectric constant is not constant over frequency; as a consequence, the above approximation is only valid at low microwave frequencies.

Another note for the transmission line model is that the effective length differs from the physical length of the patch. The effective length is longer by 2ΔL due to fringing effects. ΔL can be expressed as a function of the effective dielectric constant.
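For concreteness, here is a sketch of the standard closed-form approximations for the effective dielectric constant and the fringing extension ΔL (the Hammerstad-style formulas found in antenna texts). The substrate values below are assumed example numbers:

```python
import math

def eps_eff(eps_r, h, w):
    """Effective dielectric constant of a microstrip line (for w/h > 1)."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5

def delta_l(h, w, e_eff):
    """Length extension per radiating edge due to fringing
    (Hammerstad-style approximation)."""
    return 0.412 * h * ((e_eff + 0.3) * (w / h + 0.264)) \
                     / ((e_eff - 0.258) * (w / h + 0.8))

# Assumed textbook-style values: eps_r = 2.2, h = 1.588 mm, w = 11.86 mm
e = eps_eff(2.2, 1.588e-3, 11.86e-3)  # close to 1.97
dl = delta_l(1.588e-3, 11.86e-3, e)   # close to 0.81 mm per edge
```

The effective length of the patch is then the physical length plus 2·ΔL, one extension per radiating edge.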





The Helical Antenna

The helical antenna is a frequently overlooked antenna type commonly used for VHF and UHF applications, providing high directivity, wide bandwidth and, interestingly, circular polarization. Circular polarization provides a huge advantage: if two antennas are circularly polarized, they will not suffer polarization loss due to polarization mismatch. Circular polarization is a special case of elliptical polarization, and occurs when the electric field vector (which defines the polarization of any antenna) has two components in quadrature with equal amplitudes. In this case, the electric field vector rotates in a circular pattern when observed at the target, whether RHP or LHP (right-hand or left-hand polarized).

Generally, the axial mode of the helix antenna is used but normal mode may also be used. Usually the helix is mounted on a ground plane which is connected to a coaxial cable using a N type or SMA connector.

The helix antenna can be broken down into triangles, shown below.


The circumference of each loop is given by πD. S represents the spacing between loops. When this is zero (and hence the angle of the triangle is zero), the helix antenna reduces to a flat loop. When the angle reaches 90 degrees, the helix reduces to a monopole linear wire antenna. L0 represents the length of one loop and L is the length of the entire antenna. The total height L is given as NS, where N is the number of loops. The total wire length can be calculated by multiplying the number of loops by the length of one loop, L0.
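The triangle decomposition above can be captured in a few lines of Python; the dimensions passed in are arbitrary illustration values:

```python
import math

def helix_geometry(diameter_m, spacing_m, n_turns):
    """Basic helix relations from unrolling one turn into a right triangle."""
    c = math.pi * diameter_m                         # circumference of one turn
    alpha = math.degrees(math.atan2(spacing_m, c))   # pitch angle
    l0 = math.hypot(spacing_m, c)                    # length of one turn (hypotenuse)
    return {"C": c, "alpha_deg": alpha, "L0": l0,
            "height": n_turns * spacing_m,           # L = N * S
            "wire_length": n_turns * l0}

g = helix_geometry(diameter_m=0.10, spacing_m=0.075, n_turns=10)
# alpha = 0 corresponds to a flat loop; alpha = 90 degrees to a straight wire
```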

An important thing to note is that the helix antenna is elliptically polarized by default and must be manually designed to achieve circular polarization for a specific bandwidth. Another note is that the input impedance of the antenna depends greatly on the pitch angle (alpha).

The axial (endfire) mode, which is more common, occurs when the circumference of the antenna is roughly one wavelength. Circular polarization is easier to achieve in this mode. The normal mode features a much smaller circumference and a more omnidirectional radiation pattern.

The Axial ratio is the numerical quantity that governs the polarization. When AR = 1, the antenna is circularly polarized. When AR = ∞ or 0, the antenna is linearly polarized. Any other quantity means elliptical polarization.


The axial ratio can also be approximated by AR = (2N + 1)/(2N), where N is the number of turns:


For axial mode, the radiation pattern is much more directional, as the axis of the antenna contains the bulk of the radiation. For this mode, the following conditions must be met to achieve circular polarization.


These are less stringent than the normal mode conditions.

It is also important to consider that the input impedance of these antennas tends to be higher than the standard impedance of a coaxial line (100-200 ohms compared to 50). Flattening the feed wire of the antenna and covering the ground plane with dielectric material helps achieve a better SWR.


This equation can be used to calculate the height of the dielectric used on the ground plane. It depends on the transmission line characteristic impedance, the strip width and the dielectric constant of the material used.

The Superheterodyne Receiver

“Heterodyning” is a commonly used term in the design of RF wireless communication systems. It is the process of mixing an input signal with a local oscillator of a nearby frequency to produce a lower-frequency output at the difference of the two frequencies. It is contrasted with “homodyning,” which uses the same frequency for the local oscillator as for the input. In a superhet receiver, the RF input and the local oscillator are easily tunable, whereas the output IF (intermediate frequency) is fixed.


After the antenna, the front end of the receiver comprises a band select filter and an LNA (low noise amplifier). This is needed because the electrical output of the antenna is often as small as a few microvolts and must be amplified, but not in a way that raises the Noise Figure; the typical superhet NF is around 8-10 dB. The signal is then frequency-multiplied, or heterodyned, with the local oscillator, which in the frequency domain corresponds to a shift in frequency. The next filter is the channel select filter, which has a higher quality factor than the band select filter for enhanced selectivity.

For the filtering, the local oscillator can be either fixed or variable for downconversion to the IF. If variable, a variable capacitor or a tuning diode is used. The local oscillator can be higher or lower in frequency than the received RF (high-side or low-side injection).

A common issue in the superhet receiver is the image frequency, which must be suppressed by the initial filter to prevent interference. Often multiple mixer stages are used (called multiple conversion) to overcome the image issue. The image frequency is f_image = f_RF + 2f_IF for high-side injection and f_RF − 2f_IF for low-side injection.


Higher IFs tend to be better at suppressing the image, as demonstrated by the 2f_IF term. The attenuation (in dB) of a receiver at the image is given by the Image Rejection Ratio (the ratio of the output of the receiver for a signal at the received frequency to its output for an equal-strength signal at the image frequency).
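The image relation can be sketched as a one-liner; the 100 MHz / 10.7 MHz pairing below is the classic FM broadcast example, used purely as an illustration:

```python
def image_frequency(f_rf, f_if, high_side=True):
    """Image frequency of a superheterodyne receiver.

    High-side injection (LO above RF): image = f_rf + 2*f_if.
    Low-side injection (LO below RF):  image = f_rf - 2*f_if.
    """
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

# FM broadcast example: 100 MHz RF with the common 10.7 MHz IF
img_hi = image_frequency(100.0, 10.7)                  # 121.4 MHz
img_lo = image_frequency(100.0, 10.7, high_side=False) # 78.6 MHz
```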

RADAR Range Resolution

Before delving into the topic of pulse compression, it is necessary to briefly discuss the advantages of pulse RADAR over CW RADAR. The main difference between the two is duty cycle (time high vs. total time): for CW RADARs this is 100%, while for pulse RADARs it is typically much lower. The advantage is that the scattered signal can be observed while the transmitter is quiet, making it much clearer. With CW RADARs (which are much less common than pulse RADARs), since the transmitter is constantly transmitting, the return signal must be read over the transmitted signal. In all cases, the return signal is weaker than the transmitted signal (due in part to absorption by the target), which leads to difficulties for continuous-wave RADAR. Pulse RADARs can also provide high peak power without increasing average power, leading to greater efficiency.

“Pulse Compression” is a signal processing technique that tries to keep the advantages of pulse RADAR while mitigating its disadvantages. The major dilemma is that the accuracy of RADAR depends on pulse width: a short pulse gives fine range resolution but illuminates the target with only a small amount of energy. The digital processing of pulse compression grants the best of both worlds: high range resolution while also illuminating the target with greater energy. This is done using Linear Frequency Modulation, or “chirp” modulation, illustrated below.


As shown above, the frequency gradually increases with time (x axis).

A “matched filter” is a processing technique that maximizes the SNR and outputs a compressed pulse.

Range resolution can be calculated as follows:

Resolution = (c*T)/2

Where T is the pulse time or width.

With finer range resolution, a RADAR can distinguish two objects that are very close together. As the formula shows, this requires a shorter pulse, unless pulse compression is employed.

It can also be demonstrated that range resolution is proportional to bandwidth:

Resolution = c/2B

This means that with RADARs at higher frequencies (which tend to have greater bandwidth), finer resolution can be achieved.
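Both resolution formulas can be checked numerically. A minimal Python sketch:

```python
C = 299792458.0  # speed of light, m/s

def resolution_from_pulse(t_seconds):
    """Range resolution from pulse width: c*T/2."""
    return C * t_seconds / 2

def resolution_from_bandwidth(b_hz):
    """Range resolution from bandwidth: c/(2B)."""
    return C / (2 * b_hz)

r_pulse = resolution_from_pulse(1e-6)       # about 150 m for a 1 us pulse
r_bw = resolution_from_bandwidth(150e6)     # about 1 m for 150 MHz of bandwidth
```

Note the two forms agree when B = 1/T, which is the bandwidth of an unmodulated pulse; chirping raises B beyond 1/T, which is exactly how pulse compression buys resolution without shortening the pulse.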



Mathematical Formulation for Antennas: Radiation Integrals and Auxiliary Potentials

This short paper will attempt to clarify some useful mathematical tools for antenna analysis that seem overly “mathematical” but can aid in understanding antenna theory. A solid background in Maxwell’s equations and vector calculus would be helpful.

Two sources will be introduced: the electric and magnetic current sources (J and M respectively). These can be integrated to obtain the electric and magnetic fields directly, or integrated to obtain vector potentials, which are then differentiated to obtain the E and H fields. We will use A for the magnetic vector potential and F for the electric vector potential.

Using Gauss’ laws (first two equations) for a source free region:


And also the identity:


It can be shown that:


in the case of the magnetic field arising from the magnetic vector potential A. This is done by equating the divergence of B with the divergence of the curl of A, both of which equal zero. The same can be done with Gauss's law for electricity (the first equation) and the divergence of the curl of F.

Using Maxwell’s equations (not necessary to know how) the following can be derived:


For total fields, the two auxiliary potentials can be summed. In the case of the Electric field this leads to:


The following integrals can be used to solve for the vector potentials, if the current densities are known:
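The standard forms of these integrals, as given in common antenna texts, are reproduced here for reference (under the usual e^{jωt} time convention, with R the distance from the source point to the observation point):

```latex
\mathbf{A} = \frac{\mu}{4\pi} \iiint_V \mathbf{J}\,\frac{e^{-jkR}}{R}\,dv'
\qquad
\mathbf{F} = \frac{\varepsilon}{4\pi} \iiint_V \mathbf{M}\,\frac{e^{-jkR}}{R}\,dv'
```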


For some cases, the volume integral is reduced to a surface or line integral.

An important note: most antenna calculations, including the above integrals, are evaluated in the far field (the region beyond 2D^2/λ, where D is the largest dimension of the antenna), where the radiation pattern is independent of distance.

The familiar duality theorem from Fourier Transform properties can be applied in a similar way to Maxwell’s equations, as shown.


In the chart, Faraday's Law, Ampere's Law, the Helmholtz equations and the above-mentioned integrals are shown. To be perfectly honest, I think the top right equation is wrong; I believe it should have permittivity rather than permeability.

Another important antenna property is reciprocity: the receive and transmit radiation patterns are the same, given that the medium of propagation is linear and isotropic. This can be compared to the reciprocity theorem of circuits, which states that a voltmeter and a source can be interchanged if the source is a constant current or voltage source and the circuit components are linear, bilateral and discrete.


Image Resolution

Consider that we are interested in building an optical sensor. This sensor contains a number of pixels, which is dependent on the size of the sensor. The sensor has two dimensions, horizontal and vertical. Knowing the size of the pixels, we will be able to find the total number of pixels on this sensor.

The horizontal field of view, HFOV is the total angle of view normal from the sensor. The effective focal length, EFL of the sensor is then:

Effective Focal Length: EFL = V / (tan(HFOV/2)),

where V is the vertical sensor size (in meters, not in number of pixels) and HFOV is the horizontal field of view. The field-of-view angle is halved to account for the fact that the HFOV extends to both sides of the sensor normal.

The system resolution using the Kell Factor: R = 1000 * KellFactor * (1 / (PixelSize)),

where the pixel size is typically given, and the Kell factor (less than 1) approximates a realistic best case, accounting for aberrations and other potential issues.

Angular resolution: AR = R * EFL / 1000,

where R is the resolution using the Kell factor and EFL is the effective focal length. It is possible to compute the angular resolution using either pixels per millimeter or cycles per millimeter, however one would need to be consistent with units.

Minimum field of view: Δl = 1.22 * f * λ / D,

which was used previously for the calculation of the spatial resolution of a microscope. The minimum field of view is simply another wording for the minimum spatial resolution, or minimum resolvable size.

Below is a MATLAB program that computes these parameters while sweeping the diameter of the lens aperture. The wavelength admittedly may not be appropriate for a microscope, but let's say you are looking for something in the infrared spectrum; maybe you are trying to view some tiny laser beams used in the telecom industry at 1550 nanometers.

Pixel size: 3 um. HFOV: 4 degrees. Sensor size: 8.9mm x 11.84mm.
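The original MATLAB program is not reproduced here; the following Python sketch performs the same calculation with the parameters listed above (the Kell factor of 0.7 is an assumed value):

```python
import math

# Parameters from the text
pixel_um = 3.0          # pixel size
hfov_deg = 4.0          # horizontal field of view
v_mm = 8.9              # vertical sensor size
wavelength_m = 1550e-9  # telecom-band infrared
kell = 0.7              # assumed Kell factor (< 1)

# Effective focal length: EFL = V / tan(HFOV/2)
efl_mm = v_mm / math.tan(math.radians(hfov_deg / 2))   # around 255 mm

# System resolution using the Kell factor, and the angular resolution
r_per_mm = 1000 * kell / pixel_um
ang_res = r_per_mm * efl_mm / 1000

# Sweep the aperture diameter and compute the minimum resolvable size
min_fov = {d_mm: 1.22 * (efl_mm * 1e-3) * wavelength_m / (d_mm * 1e-3)
           for d_mm in (10, 25, 50, 100)}
```

As the sweep shows, the diffraction-limited spot shrinks linearly as the aperture diameter grows.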


Spatial Resolution of a Microscope

Angular resolution describes the smallest angle between two objects that are able to be resolved.

θ = 1.22 * λ / D,

where λ is the wavelength of the light and D is the diameter of the lens aperture.

Spatial resolution on the other hand describes the smallest object that a lens can resolve. While angular resolution was employed for the telescope, the following formula for spatial resolution is applied to microscopes.

Spatial resolution: Δl = θf = 1.22 * f * λ / D,

where θ is the angular resolution, f is the focal length (assumed to be distance to object from lens as well), λ is the wavelength and D is the diameter of the lens aperture.



The Numerical Aperture (NA) is a measure of the ability of the lens to gather light and resolve fine detail. In the case of fiber optics, the numerical aperture refers to the maximum acceptance angle of light entering the fiber. The angle subtended by the lens at its focus is θ = 2α, where α is shown in the first diagram.

Numerical Aperture for a lens: NA = n * sin(α),

where n is the index of refraction of the medium between the lens and the object. Further,

sin(α) = D / (2d).

The resolving power of a microscope is related.

Resolving power: x = 1.22 * d * λ / D,

where d is the distance from the lens aperture to the region of focus.


Using the definition of NA,

Resolving power: x = 1.22 * d * λ / D = 1.22 * λ / (2sin(α)) = 0.61 * λ / NA.
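The final relation can be sketched in Python; the wavelength and NA below are illustrative values:

```python
def resolving_power(wavelength_m, na):
    """Minimum resolvable separation: x = 0.61 * lambda / NA."""
    return 0.61 * wavelength_m / na

# Illustrative: green light (550 nm) with a high-NA objective (NA = 0.9)
x = resolving_power(550e-9, 0.9)  # a few hundred nanometers
```

This shows the familiar result that a microscope with visible light cannot resolve features much below a few hundred nanometers, regardless of magnification.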


Telescope Resolution & Distance Between Stars using the Rayleigh Limit

Previously, the Rayleigh Criterion and the concept of maximum resolution were explained. As mentioned, Rayleigh arrived at this formula through experiments with telescopes and stars, exploring the concept of resolution. The formula may be used to determine the distance between two stars.

θ = 1.22 * λ / D.

Consider a telescope with a lens diameter of 2.4 meters viewing stars of visible white light at approximately 550 nanometer wavelength. The distance between the two stars in lightyears may be calculated as follows, with the stars approximately 2.0 million lightyears away from the lens.

θ = 1.22 * (550*10^(-9)m)/(2.4m)

θ =2.80*10^(-7) rad

Distance between two objects (s) at a distance away (r), separated by angle (θ): s = rθ

s = rθ = (2.0*10^(6) ly)*(2.80*10^(-7)) = 0.56 ly.

This means that the maximum resolution for the lens size, star distance from the lens and wavelength would be that two stars would need to be separated at least 0.56 lightyears for the two stars to be distinguishable.
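The worked numbers above can be verified in a couple of lines:

```python
# Rayleigh limit for a 2.4 m aperture at 550 nm
theta = 1.22 * 550e-9 / 2.4   # angular resolution, radians

# Arc length s = r * theta, with the distance r kept in lightyears
s_ly = 2.0e6 * theta          # minimum resolvable separation, lightyears
```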


Diffraction, Resolution and the Rayleigh Criterion

The wave theory of light includes the understanding that light diffracts as it moves through space, bending around obstacles and interfering with itself constructively and destructively. A diffraction grating disperses light according to wavelength. The intensity pattern of monochromatic light passing through a small circular aperture will produce a pattern of a central maximum surrounded by other local minima and maxima.


The wave nature of light and the diffraction pattern of light plays an interesting role in another subject: resolution. The light which comes through the hole, as demonstrated by the concept of diffraction, will not appear as a small circle with sharply defined edges. There will appear some amount of fuzziness to the perimeter of the light circle.

Consider two sources of light that are near each other. In this case, the light circles will overlap. Move them even closer together and they may appear as one light source, meaning they cannot be resolved: the resolution is not high enough for the two to be distinguished from one another.


Considering diffraction through a circular aperture the angular resolution is as follows:

Angular resolution: θ = 1.22 * λ/D,

where λ is the wavelength of light, D is the diameter of the lens aperture, and the factor 1.22 corresponds to the resolution limit formulated and empirically tested through telescope and astronomical measurements by John William Strutt, a.k.a. Lord Rayleigh, hence the “Rayleigh Criterion.” This factor describes the minimum angle for two objects to be distinguishable.