# All Posts on one Page

### POST ARCHIVE:

• Electron Spin January 13, 2022

In filling the k-space of a metal with electrons, each grid point in k-space can be occupied by two electrons, one in a spin-up state and one in a spin-down state. The energy of the electron does not depend on its spin state. The full quantum state of the electron then includes the position wavefunction and the spin orientation. This can be notated as follows:
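The equation image is missing from the original post; one standard way to write this full state, consistent with the description above and the spinors below, is:

```latex
\Psi(\mathbf{r}, s) = \psi(\mathbf{r})\,\chi_s, \qquad
\chi_\uparrow = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad
\chi_\downarrow = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
```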

The spin of a particle can also be represented as a two-element column vector, or spinor, with spin up represented as (1;0) and spin down as (0;1).

http://hyperphysics.phy-astr.gsu.edu/hbase/spin.html

• The Sommerfeld Model – Metals January 12, 2022

Following the Drude model of electron motion in metals, Sommerfeld developed a new model for electrons in metals in 1927. This model accounts for electron energy distributions in metals, Pauli's exclusion principle, and the Fermi-Dirac statistics of electrons, incorporating quantum mechanics and the Schrödinger equation.

The Sommerfeld model pictures electrons in a metal as confined to a large volume of the metal: a potential well. Inside this volume, electrons are 'free', with zero potential; outside the well, the potential is infinite. The electron states inside this box are governed by the Schrödinger equation.

The quantum state of an electron is generally described by the Schrödinger equation, as shown below.

Applying the boundary conditions of the problem, with the potential being zero at the boundary, the solution of the Schrödinger equation is the wavefunction below. This solution introduces the concept of k-space, a 3D grid of allowed quantum states.

The density of grid points in k-space is related to the dimensions Lx, Ly, and Lz in the solution. The spacing of points along the three axes of k-space is 2pi/Lx, 2pi/Ly, and 2pi/Lz, so the number of grid points per unit volume of k-space (the density of states in k-space) is V/(2*pi)^3.
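The grid-point density can be checked numerically: counting the grid points inside a sphere in k-space should match the sphere's volume times V/(2*pi)^3. A minimal sketch, with an assumed box size and sphere radius in arbitrary units:

```python
import numpy as np

# Assumed illustration values (arbitrary units), not from the post:
L = 20.0          # side length of the cubic box, so V = L**3
k_F = 2.0         # radius of a sphere in k-space to fill with states

dk = 2 * np.pi / L                      # grid spacing along each k-axis
n_max = int(np.ceil(k_F / dk))          # index range needed to cover the sphere
idx = np.arange(-n_max, n_max + 1) * dk
kx, ky, kz = np.meshgrid(idx, idx, idx, indexing="ij")
inside = kx**2 + ky**2 + kz**2 <= k_F**2
counted = inside.sum()                  # grid points inside the sphere

# Analytic estimate: (sphere volume) * (density of grid points V/(2*pi)^3)
estimate = (4 / 3) * np.pi * k_F**3 * L**3 / (2 * np.pi) ** 3
print(counted, estimate)
```

The direct count and the density-of-states estimate agree to within a few percent, with the discrepancy shrinking as the box grows.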

• Drude Model – Metals January 11, 2022

In 1900, Paul Drude formed a theory of conduction in metals using the newly discovered concept of the electron. The theory states:

1. Metals have a high density of free electrons.
2. Electrons in metals move according to Newton's laws.
3. Electrons in metals scatter when encountering defects and ions, and the momentum of the electron is randomized after a scattering event.

In short, the Drude Model explains how electrons can be expected to move in metals, which is fundamental to the operation of many devices.

Applied Electric Field

In the presence of an electric field, the average electron motion can be described with the following momentum p(t), where tau is the scattering time and 1/tau is the scattering rate:
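The equation image is missing from the original post; the standard Drude equation of motion matching this description is:

```latex
\frac{d\mathbf{p}(t)}{dt} = -e\,\mathbf{E}(t) - \frac{\mathbf{p}(t)}{\tau}
```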

Now consider several cases for electron motion in metals.

No applied electric field, E(t) = 0
In this case, electrons move randomly and the average electron momentum is zero.

Constant Uniform Electric Field
When a uniform electric field is present, the average electron motion is opposite to the direction of the electric field.

Relating momentum to velocity, we can find the electron drift velocity, the average velocity at which electrons travel due to the field. The electron mobility is the ratio of drift velocity to electric field.

Electron current density is related to the number density of electrons, the electron charge, the mobility, and the electric field. The proportionality factor between electron current density and electric field is the conductivity.
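The drift velocity, mobility, and conductivity chain can be computed in a few lines. A minimal sketch using textbook-order values for copper (the density and scattering time below are assumptions for illustration, not from the post):

```python
# Textbook-order values for copper (assumed for illustration):
e = 1.602e-19        # electron charge [C]
m = 9.109e-31        # electron mass [kg]
n = 8.5e28           # free-electron density of copper [m^-3]
tau = 2.5e-14        # scattering time [s]
E = 1.0              # applied electric field [V/m]

mu = e * tau / m             # mobility [m^2/(V*s)]
v_drift = mu * E             # drift speed magnitude [m/s]
sigma = n * e * mu           # conductivity [S/m], equivalently n*e^2*tau/m
print(f"mobility = {mu:.3e}, drift = {v_drift:.3e}, sigma = {sigma:.3e}")
```

With these values the conductivity comes out near 6e7 S/m, the right order for copper, and the drift velocity is only millimeters per second even at 1 V/m.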

• Nanofabrication Processes January 10, 2022

Due to the size of the structures made during nanofabrication, semiconductor wafers are highly vulnerable to foreign objects such as dust. The waveguide structures being fabricated are about 2 microns wide, so a dust particle of similar width left on the wafer before an oxide layer is deposited will cause the device fabrication to fail. For this reason, cleanrooms and their procedures are designed to eliminate dust particles and organic material that could contaminate the fabrication process. Still, caution must be taken at each step to ensure that there is no dust on the sample. The wafer should be inspected under a microscope before each etching and deposition step and cleaned using the appropriate method, depending on the previous and next steps.

If the wafer does not have a layer of photoresist on it, the cleaning procedure is to dip the wafer into a beaker of acetone, then isopropanol, then DI water, typically for about one minute each. If there is a layer of old photoresist on the wafer, this may need to be increased to five minutes each. If old photoresist remains, a flood UV exposure using the mask aligner, followed by development, may precede a second solvent wash. The RIE machine can also remove photoresist from wafers using an O2 descum process.

Masks are templates used in the mask aligner to create patterns on wafers. These plates must also be cleaned after use, since they make contact with wafers that have photoresist on them. This is especially critical for masks with small patterns (<20 microns). After use, soak the mask in acetone, isopropanol, and DI water for 5 minutes or longer. If inspection under a microscope shows photoresist remaining, an ultrasonic bath for 20 minutes or more, or a flood exposure on the contact side of the mask, can be attempted; however, this will likely not be needed if the mask is cleaned after each day's use.

If the wafer being used requires a custom epitaxial structure, the first step in the fabrication process is to grow these layers. A common method for research applications is molecular beam epitaxy (MBE). MBE machines are used primarily for research due to their accuracy but slow growth rate. MOCVD, on the other hand, is used for simpler epitaxial structures and mass production.

PECVD is used to deposit oxides such as SiO2, SiNx and others. If using small wafers, using a larger carrier wafer is common practice. When creating a PECVD deposit recipe, the gas mixture and temperature are selected.

The E-beam evaporator is used to deposit metals such as gold, aluminum, chrome, platinum and other materials such as germanium.

Figure: E-beam evaporator

Reactive Ion Etching (RIE) is used for etching oxides and deposits on wafers using a chemical plasma that is charged with an electromagnetic field and under a strong vacuum. This is termed a dry etching process. For RIE etching recipes, the pressure, chemical and RF power are chosen. The RIE is also used to remove photoresist and organic material using an O2 clean process.

The ICP (or Inductively Coupled Plasma) tool can etch many materials, including SiO2, SiNx, Cr, GaAs, and AlGaAs. ICP etch recipes are designed by selecting a pressure, RF and ICP power, etchant gas, and temperature. When using the ICP, run a chamber cleaning process with O2 and argon with a dummy wafer loaded. After a cleaning run, the desired etch process should be run on a dummy wafer before loading the actual wafer. For smaller wafers, thermal conducting paste can be applied between the wafer and a larger carrier wafer. For deep etches into semiconductor material such as GaAs, the edges of the wafer will etch faster; to avoid this, other wafers can be placed beside it.

Acid etching is a wet-etch process, unlike RIE and ICP. It is performed at a bench using a blend of chemicals, typically to etch the semiconductor wafer itself. Heavier protective gear is worn during this process to prevent contact with some of the most dangerous chemicals used in a nanofabrication facility. Hydrofluoric acid, a deadly toxin, is one chemical used frequently in a nanofabrication facility [37]. One example of an acid etch recipe is an AlGaAs-selective etch.

Photolithography is a technique used in semiconductor device fabrication. First, a light-sensitive layer called photoresist is added to a semiconductor wafer. Depending on the type of resist (positive or negative), this layer can be removed using developer after applying UV light. A mask is a template used to apply UV light only to a desired region or shape on the wafer. After this is done, etching can be performed exclusively on the parts of the wafer without a photosensitive layer.

The spinner is the machine that is used to apply photoresist to a wafer. The wafer is first held on a vacuum arm and photoresist is applied. The vacuum arm is then spun at a desired spin rate and duration. This creates a uniform film of photoresist on the wafer. After running on the spinner, the wafer should sit on a hot plate for a specified time and temperature.

Particularly for non-circular wafers, photoresist can build up along the edges, creating an uneven surface. This is problematic for the following steps, so the photoresist is removed from the edges and underside of the wafer using a swab and acetone.

If there is already a pattern on the wafer, the wafer position in the mask aligner can be adjusted to ensure alignment. It is recommended to include an alignment feature on the mask die, such as a vernier mark, especially if alignment is critical to that fabrication step. After aligning as needed, the UV exposure time is selected and applied to the wafer. The wafer is dipped in developer for a specified time, then in DI water, and gently blow dried with a nitrogen gun.

Lift-off photoresist is used when creating a metal feature on a wafer. In this process, normal photoresist can be applied over the lift-off resist (before the wafer goes on a hot plate). After running the wafer in the spinner, lift-off resist needs to be removed from the edges using tweezers. Lift-off resist reacts differently to acetone, so a separate solution is needed for edge-bead removal. After photolithography and metal deposition, the lift-off resist may need to be developed using yet another type of solution. For LOR 20-3, it is recommended that the wafer sit in lift-off developer solution on a hot plate at 80 degrees C for 12 hours, followed by a wash in cool lift-off developer and isopropanol. Refer to the datasheets for chemical-specific instructions.

Electron-beam lithography (EBL) performs the same role as the mask aligner, but with much higher precision due to the smaller electron wavelength. Resist still needs to be applied before using an EBL machine. Instead of using a mask, it follows the pattern in a GDSII file. An EBL machine can in fact be used for all lithography steps, but it is much slower, so it is reserved for steps with features too narrow for a mask aligner or stepper. An EBL machine also contains a scanning electron microscope (SEM) and can be used for that function as well.

The SEM, or scanning electron microscope, is used for examining structures too small for an optical microscope. The SEM operates by directing an electron beam at the sample; the excited atoms release electrons, producing the signal that is detected to form the image. SEM is necessary for inspecting etch quality and for making accurate measurements of component sizes.

One tool that is useful for measuring the height profile on a wafer is the Dektak tool. This is often used after an etching process. This gives the height profile along a line on the wafer.

The ellipsometer is used for measuring the thickness of a film deposit. Unlike the Dektak, the ellipsometer is able to measure multiple layers. However, it is not useful for precise positioning or height profiles over a distance on the wafer. The ellipsometer is typically only used for uniform deposits on wafers that have not been exposed to etching or photolithography, unless the shapes are very large.

• Optical Loss in Optical Waveguides and Free Carrier Absorption January 9, 2022

Sources of loss in optical waveguides include free carrier absorption, band edge absorption, surface roughness, bending loss, and two photon absorption. Optical loss can be determined from the imaginary index of refraction.

Band edge absorption is a wavelength-dependent absorption based on material properties. For wavelengths above the bandgap wavelength (approx. 1 micron), the band edge absorption and free-carrier absorption of GaAs is greatly reduced. Free-carrier absorption caused by doping is still a concern for optical waveguide loss, however.

Free-carrier absorption is loss in optical waveguides due to the interaction of photons and charge carriers. Its effect can be calculated from the free-carrier coefficients of electrons and holes for the material and the doping concentration. Since doping is used to create a PIN structure, free-carrier absorption makes it wiser to only lightly dope the regions surrounding the intrinsic waveguide core. The imaginary dielectric constant due to free-carrier absorption, based on doping levels, is calculated as follows, where n and p are the electron and hole doping concentrations, n0 is the bulk refractive index, k is the wavenumber, and FCN and FCP are the free-carrier coefficients of electrons and holes, respectively.
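The equation image is missing from the original post; a minimal sketch of the calculation as described, using one common form of the relation. All numeric values below (the coefficients FCN and FCP, doping levels, and index) are hypothetical placeholders, not values from the post:

```python
import math

# Hypothetical placeholder values (not from the post):
FCN = 5e-18          # free-carrier coefficient for electrons [cm^2]
FCP = 1e-17          # free-carrier coefficient for holes [cm^2]
n = 1e17             # electron doping concentration [cm^-3]
p = 1e17             # hole doping concentration [cm^-3]
n0 = 3.4             # bulk refractive index (GaAs-like)
wavelength_cm = 1e-4 # 1 micron expressed in cm

k = 2 * math.pi / wavelength_cm   # wavenumber [cm^-1]
alpha = FCN * n + FCP * p         # free-carrier absorption coefficient [cm^-1]
eps_imag = n0 * alpha / k         # one common form for the imaginary dielectric constant
print(alpha, eps_imag)
```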

• Converting from normalized SFDR (dBHz^(2/3)) to real SFDR (dB) January 7, 2022

SFDR is frequently written in units of dB·Hz^(2/3), particularly for fiber-optic links. Fiber-optic links can have such high bandwidth that assuming a particular bandwidth in SFDR is unhelpful or misleading, so normalizing to 1 Hz became standard practice. The SFDR of a real system with a defined bandwidth has units of dB.

Now consider that the real system has a specific bandwidth. The real SFDR can be calculated using the following formulas:
SFDR_real = SFDR_1Hz – (2/3)*10*log10(BW)

Here are a few examples.
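The worked examples from the original post are missing; a small script reproducing the conversion (the SFDR and bandwidth values below are assumptions chosen for illustration):

```python
import math

def sfdr_real(sfdr_1hz_dB, bandwidth_hz):
    """Convert SFDR normalized to 1 Hz (dB*Hz^(2/3)) to real SFDR (dB)."""
    return sfdr_1hz_dB - (2 / 3) * 10 * math.log10(bandwidth_hz)

# Example values (assumptions, not from the post):
print(sfdr_real(110, 1e9))   # 110 dB*Hz^(2/3) link, 1 GHz bandwidth -> 50 dB
print(sfdr_real(110, 1e6))   # same link, 1 MHz bandwidth -> 70 dB
```

Note how each factor-of-1000 reduction in bandwidth buys back 20 dB of real SFDR.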

• Noise Figure in a Microwave Photonic Link January 6, 2022

The standard definition of noise figure (NF) is the degradation of signal-to-noise ratio (SNR) through a system. That is, if the system increases the output noise power more than the output signal power, SNR is degraded, and the noise figure quantifies that degradation.

For an RF photonic link, a couple of assumptions result in a slightly altered definition and calculation of noise figure. One assumption is that the input noise is thermal noise (kT), such as would be detected by an antenna receiver. RF photonic links may also be employed where the input signal power level is not defined. In simple telecommunications applications, it is standard to expect a certain input power level, but in a communications system at a radar front end, for instance, the input signal is not known. We can therefore use the gain of the link as the relationship between output and input signal, instead of known input and output signal powers.

It is a goal of the link designer in those cases to ensure that all true signals can be distinguished from noise. For these reasons, we may also think of noise figure in the following definition:

Noise figure (NF) is the difference between the total equivalent input noise and thermal background noise.

The equivalent input noise is the output noise without considering the gain of the link.

For the noise figure calculation, we have then:

NF = 10*log_10( EIN / GkT ),

where EIN is the equivalent input noise, G is the link gain, k is Boltzmann’s constant, and T is the temperature in Kelvin.

Equivalent input noise (EIN) is as follows:

EIN = GkT + <I^2>*Z_PD,

where <I^2> is the current noise power spectral density at the output of the link and Z_PD is the photodetector termination impedance.

These together, we have noise figure:

NF = 10*log_10( 1 + (<I^2>*Z_PD)/(GkT) )
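A quick numeric sketch of this formula (the noise PSD, termination, and gain values below are assumptions for illustration, not from the post):

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant [J/K]

def noise_figure_dB(i2_psd, z_pd, gain_linear, temp_k=290.0):
    """NF = 10*log10(1 + <I^2>*Z_PD / (G*k*T)), as derived above."""
    return 10 * math.log10(1 + i2_psd * z_pd / (gain_linear * k_B * temp_k))

# Example values (assumptions, not from the post):
i2 = 1e-22       # output current noise PSD [A^2/Hz] (i.e. 10 pA/sqrt(Hz))
z_pd = 50.0      # photodetector termination [ohms]
gain = 1.0       # unity link gain
print(f"NF = {noise_figure_dB(i2, z_pd, gain):.2f} dB")
```

With these numbers the added noise is comparable to kT, giving an NF in the 3-4 dB range; a lossy link (gain below 1) drives the NF up quickly.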

• Noise Sources in RF Photonic Links January 5, 2022

Identifying the noise sources in an RF photonic link allows us to determine the performance of the link and highlights which aspects of link and device design are critical to achieving a high-performance link. Below is an intensity-modulated optical link; other modulation schemes for RF photonic links may be discussed at a later point.

Since the output of the RF photonic link is the photocurrent generated by the photodetector, the noise sources are expressed as current noise power spectral densities.

Noise sources from the laser:

Laser RIN (relative intensity noise) is the fluctuation of optical power, normalized as the noise of the optical power divided by the average optical power of the laser. RIN originates from spontaneous radiative carrier recombination and photon generation.

Noise sources from the modulator:

Noise in a modulator is due to thermal noise of electrode termination and ohmic loss in the electrodes.

Noise sources from the photodetector:

Shot noise occurs as a result of the quantization of discrete charges or photons. Noise is also generated by the photodetector termination.

Total current noise power spectral density of the RF photonic link:
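The equation image is missing here; since the contributions described above are uncorrelated, the total is simply their sum, which can be written as (a reconstruction consistent with the text, though the original figure may have expanded each term):

```latex
\left\langle i^2_{\mathrm{total}} \right\rangle =
\left\langle i^2_{\mathrm{RIN}} \right\rangle +
\left\langle i^2_{\mathrm{shot}} \right\rangle +
\left\langle i^2_{\mathrm{thermal}} \right\rangle
\qquad \left[\mathrm{A^2/Hz}\right]
```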

• RF Photonic Links January 4, 2022

RF Photonic links (also called Microwave Photonic Links) are systems that transport radiofrequency signals over optical fiber. The essential components of an RF photonic link are the laser as a continuous-wave (CW) carrier, a modulator as a transmitter and the photodetector as a receiver. A low-noise amplifier is often used before the modulator.

Optical fiber boasts much lower loss over long distances than coaxial cable, and this, together with the physical flexibility of fiber, is one advantage over conventional microwave links. Another advantage of RF photonic links is their immunity to electromagnetic interference, which plays a more significant role in electronic warfare (EW) applications. RF photonic links are employed in telecommunications, electronic warfare, and quantum information processing, although the performance requirements in each of these situations vary. In telecommunications, high bandwidth is required; in EW applications, high spurious-free dynamic range (SFDR) and low noise figure (NF) are critical; and in quantum information processing, low insertion loss is critical.

In EW scenarios, unlike in telecommunications, the expected signal frequency and power are unknown, because the RF photonic link typically serves as a radar receiver. In a system with high SFDR and low NF, distortion is minimized, the radar has better reliability and range, and smaller signals can be registered. Here is a demonstration of two scenarios with different SFDR and NF:

Low SFDR, High NF:

High SFDR, Low NF:

• Transformer Circuit Review: Ideal Transformers, Conservation of Energy January 3, 2022

In a closed system, energy can be transferred between different forms (heat, kinetic energy, potential energy, etc.), but neither created nor destroyed. A passive device such as a transformer must also obey this principle, termed conservation of energy.

A transformer is a passive circuit component which follows these basic formulas, where P is the power and n is the number of turns on the transformer:

Pout=Pin

n2/n1=V2/V1

Consider an electrical transformer with turns ratio N. What is the output voltage? What is the output current?

The output voltage Vout=N*Vin.

A correct answer to this question must satisfy conservation of power: the output power must equal the input power, with power = voltage * current. Since Vout = N*Vin, the output current must be Iout = Iin/N.

The impedance of the transformer for N turns ratio:

Zout = Vout/Iout = (N*Vin)/(Iin/N) = N^2 * Zin
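These relations can be checked numerically in a few lines (the turns ratio, voltage, and current below are arbitrary example values):

```python
# Numeric check of the ideal-transformer relations above
# (example values are assumptions for illustration):
N = 2.0        # turns ratio n2/n1
V_in = 10.0    # input voltage [V]
I_in = 1.0     # input current [A]

V_out = N * V_in          # voltage steps up by N
I_out = I_in / N          # current steps down by N
P_in = V_in * I_in
P_out = V_out * I_out     # power is conserved
Z_in = V_in / I_in
Z_out = V_out / I_out     # impedance scales by N^2
print(P_in, P_out, Z_out / Z_in)
```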

• Thermal Background Noise January 2, 2022

Any object with a temperature above absolute zero (0 K) radiates electromagnetic energy, or thermal noise. The noise generated by the earth and cosmos is the background thermal noise received by an antenna.

Thermal background noise is the starting point for system performance. A signal of strength below the thermal background noise will be indistinguishable from noise.

The thermal background noise power is proportional to temperature (P = kTB, where k is Boltzmann's constant, T the temperature in Kelvin, and B the bandwidth in hertz). The thermal background noise power spectral density sets the fundamental noise minimum of about -174 dBm/Hz at 300 K.
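The -174 dBm/Hz figure follows directly from kT referenced to 1 mW, which is easy to reproduce:

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant [J/K]
T = 300.0            # temperature [K]

# Thermal noise PSD in dBm/Hz: kT in watts per hertz, referenced to 1 mW
psd_dbm_per_hz = 10 * math.log10(k_B * T / 1e-3)
print(f"{psd_dbm_per_hz:.1f} dBm/Hz")   # about -174 dBm/Hz
```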

The gain of the device or system further amplifies the thermal background noise. RF Photonic links most often use a low noise amplifier (LNA) directly before the modulator, amplifying the thermal background noise.

The definition of thermal noise applied to electronics is the movement of charge carriers caused by temperature in a conductor.

• Mean Squared Noise Power January 1, 2022

What does it mean when people say “mean squared”?

The average value of a noise waveform is zero, so the square of the waveform's mean is also zero. The square of the noise signal, and the mean of that square, are non-zero: squaring makes the negative excursions of the zero-mean waveform positive, so the entire squared waveform is positive. Taking the root of the mean of the squared waveform yields the RMS value.

The mean of the squared ("mean square") noise waveform is the noise power with respect to a 1-ohm resistor (units: V^2/Ω = W if the noise signal is a voltage, or A^2*Ω = W if the noise waveform is a current).

The power spectral density is the power of the signal in a unit bandwidth.

What is a current noise power spectral density?

The correct definition of current noise spectral density is the mean of the squared current per hertz, <i^2>. The units are A^2/Hz.

Again: the square of the mean is zero, because the mean of the noise waveform is zero. The mean of the square, however, is non-zero: squaring the noise current yields a positive-valued waveform, and its average is the non-zero number used for the spectral density.
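The distinction between the square of the mean and the mean of the square is easy to demonstrate with simulated noise (Gaussian noise with a fixed seed is assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility
noise = rng.normal(0.0, 1.0, 100_000)   # zero-mean Gaussian noise, sigma = 1

mean = noise.mean()
square_of_mean = mean ** 2              # essentially zero
mean_square = np.mean(noise ** 2)       # close to sigma^2 = 1 ("power" into 1 ohm)
rms = np.sqrt(mean_square)              # close to sigma = 1
print(square_of_mean, mean_square, rms)
```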

• Why is it that RF waves travel faster than c/√(εμ) in a coplanar waveguide (CPW) electrode? December 31, 2021

In electro-optic modulators, one important task is matching the propagating RF and optical wave velocities. This begins a discussion on modulator electrode design.

The primary method of matching the RF and optical velocities is using a slow wave electrode, that is, a capacitively loaded electrode.

Before capacitive loading, we need to determine the initial velocity of RF waves travelling in the coplanar waveguide (CPW). The CPW electrode is as follows. The optical waveguides are positioned between the signal and ground electrodes.

Let’s think about the formula for wave propagation velocity for RF waves:

V_RF = c/n,

where c is the speed of light (3×10^8 m/s) and n is the microwave index. n is also equal to:

n = √(εμ),

where ε and μ are the relative permittivity and permeability of the medium that the RF waves are propagating in.

We might think: if we know what material the modulator is made of, we can calculate the microwave index from the material's relative permittivity and permeability. Not so fast…

As shown in the diagram above, the propagating electromagnetic wave's mode is not confined to the substrate. For this reason, we must determine a weighted average of the material properties over the media the electromagnetic wave occupies, namely air and the substrate. I use Ansys HFSS to perform this calculation easily and accurately.

In summary, the propagating electromagnetic waves along the coplanar waveguide electrodes are present both in the semiconductor substrate and in air surrounding the device. The velocity of the propagating electromagnetic wave is therefore a weighted average of the electric field propagation in air and the semiconductor. Since the index of air is less than the semiconductor, the field propagates faster than if it were entirely propagating in semiconductor.
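As a rough sanity check (not the HFSS calculation above), the quasi-static approximation for a CPW on a thick substrate takes the effective permittivity as the simple average of substrate and air, since the field splits between the two half-spaces; the GaAs-like permittivity below is an assumed value:

```python
import math

eps_substrate = 12.9   # relative permittivity, GaAs-like (assumed value)
eps_air = 1.0

# Quasi-static CPW approximation: field energy splits between the two half-spaces
eps_eff = (eps_substrate + eps_air) / 2
n_microwave = math.sqrt(eps_eff)
n_substrate = math.sqrt(eps_substrate)
print(n_microwave, n_substrate)   # the wave is faster than in bulk substrate
```

The effective microwave index (about 2.6 here) is well below the bulk substrate index (about 3.6), which is exactly why the RF wave travels faster than c/sqrt(εμ) of the substrate alone.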

• What are the frequencies of the second-order and third-order distortion tones given two frequency peaks? December 30, 2021

In general, the third-order distortion tones are understood to exist as in-band distortion at frequencies 2ω2−ω1 and 2ω1−ω2 in a two-tone intermodulation test. Third-order distortion also exists at the frequencies ω1 and ω2 themselves. Second-order distortion tones are found outside of a narrowband system at 2ω1, 2ω2, and ω2±ω1.

Consider the two-tone input of a non-linear system with frequencies ω1 and ω2:

Vin = A[cos(ω1t)+cos(ω2t)]

The second-order and third-order distortion tones are calculated on the following page; in summary, the tones are shown in the table below. This shows that third-order distortion tones are found not only at the positions mentioned above but also at the fundamental tone frequencies. In a spurious-free system, all third-order tones are below the noise floor. This is verified in MATLAB with ω1, ω2 at 500 kHz and 501 kHz.
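The MATLAB figure is missing from the original post; a quick Python equivalent of the check, assuming a pure cubic nonlinearity as the distortion model:

```python
import numpy as np

f1, f2 = 500e3, 501e3           # two-tone frequencies [Hz]
fs = 10e6                       # sample rate [Hz]
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms record -> 1 kHz FFT resolution
vin = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

vout = vin ** 3                 # pure third-order nonlinearity (assumed model)

spectrum = np.abs(np.fft.rfft(vout))
freqs = np.fft.rfftfreq(len(vout), 1 / fs)
peaks = freqs[spectrum > 0.2 * spectrum.max()]   # keep only the strongest tones
print(sorted(peaks / 1e3))      # kHz: 499 and 502 (IMD3), 500 and 501, plus 2f±f terms
```

The in-band products at 2f1−f2 = 499 kHz and 2f2−f1 = 502 kHz land right next to the fundamentals, which is why third-order distortion is the limiting spur in narrowband systems.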

• What does the term “Spurious-free” mean in Spurious-free Dynamic Range (SFDR)? December 29, 2021

In the term spurious-free dynamic range (SFDR), spurious-free means that non-linear distortion is below the noise floor for given input levels. The system is spurious when non-linear distortion is present above the noise floor. The system is spurious-free when non-linear distortion is below the noise floor. SFDR therefore is the range of output levels whereby the system is undisturbed by non-linear distortion or spurs.

SFDR contrasts with compression dynamic range (or linear dynamic range, LDR), which is the range of output levels over which the fundamental tone is proportional to the input, irrespective of distortion tone levels. The fundamental tone is no longer considered linear beyond the 1 dB compression point, after which the output fundamental tone does not increase at the same rate as the input.

Spurs are non-linear distortion tones generated by the non-linearities of a system. The output of a non-linear system can be modeled as a power series.

The first term a0 is a DC component generated by the non-linear system. The second term a1*Vin is the fundamental tone with gain a1. The third term a2*Vin^2 is a second-order non-linear distortion tone. The fourth term a3*Vin^3 is the third-order non-linear distortion tone. Further expansion of the series generates more harmonic and distortion tones. Even-order harmonic distortion tones usually fall outside the band of interest unless the system is very wideband; odd-order distortion tones, however, are found much closer to the fundamental tone in the frequency domain. SFDR is usually taken with respect to the third-order intermodulation distortion, though it may occasionally be taken for the fifth (or seventh) order.

• Where do the units of SFDR “dB·Hz^(2/3)” come from? December 28, 2021

The units of spurious-free dynamic range (SFDR) are dB·Hz^(2/3). The units can be a source of confusion. The short answer is that it is a product of ratios between power levels (dBm) and noise power spectral density (dBm/Hz). The units of dBHz^(2/3) are for SFDR normalized to a 1Hz bandwidth. For the real SFDR of a system, the units are in dB.

If we look at a plot of the equivalent input noise (EIN), the fundamental tone, OIP3 (output intercept point of the third-order distortion), and IMD3 (third-order intermodulation distortion), a factor of 2/3 appears between SFDR and the difference OIP3 − EIN. This can be recognized from the basic geometry, given that the slope of the fundamental is 1 and the slope of IMD3 is 3.

Now, we need to look at the units of both OIP3 and EIN. The units of OIP3 are dBm and the units of the equivalent input noise (a noise power spectral density) are dBm/Hz.

SFDR = (2/3)*(OIP3 – EIN)

[SFDR] = (2/3) * ( [dBm] – [dBm/Hz] )

Now, remember that in logarithmic operations, division is equal to subtracting the denominator from the numerator. and therefore:

[dBm/Hz] = [dBm] – 10*log_10([Hz])

Note that the [Hz] term is still in logarithmic scale. We can use dBHz to denote the logarithmic scale in Hertz.

[dBm/Hz] = [dBm] – [dBHz]

Substituting this into the SFDR unit calculation:

[SFDR] = (2/3) * ( [dBm] – ( [dBm] – [dBHz] ) )

This simplifies to:

[SFDR] = (2/3) * ( [dBm] – [dBm] + [dBHz] )

Remember that the difference between two power levels is [dB].

[SFDR] = (2/3) * ( [dB] + [dBHz] )

The units of [dB] + [dBHz] is [dBHz], as we know from the same logarithmic relation used above for [dBm] and [dB].

[SFDR] = (2/3) * [dBHz]

Now, remember that this is a logarithmic operation, and a number multiplying a logarithm can be taken as an exponent inside the logarithm. Therefore, we can express Hz explicitly in logarithmic scale again and move the (2/3) inside the logarithm.

(2/3) * [dBHz] = (2/3) * 10*log_10([Hz]) = 10*log_10([Hz]^(2/3))

We can return our units back to the dB scale now, giving us the true units for SFDR: dBHz^(2/3):

[SFDR] = [dBHz^(2/3)]
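A numeric sketch tying the derivation together (the OIP3, EIN, and bandwidth values are assumptions for illustration, not from the post):

```python
import math

# Example values (assumptions, not from the post):
OIP3 = 20.0       # dBm
EIN = -160.0      # dBm/Hz (equivalent input noise PSD)
BW = 1e9          # system bandwidth [Hz]

sfdr_1hz = (2 / 3) * (OIP3 - EIN)                      # dB*Hz^(2/3), normalized to 1 Hz
sfdr_real = sfdr_1hz - (2 / 3) * 10 * math.log10(BW)   # dB, for the real bandwidth
print(sfdr_1hz, sfdr_real)
```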

• Calculating Bandwidth for RF/Photonic Components based on Velocity mismatch June 24, 2021

The bandwidth of a device such as a modulator or photodetector is an important figure. When designing a modulator or photodetector for high frequencies, much attention is paid to matching the velocity of the optical waves and the RF waves.

By finding the propagation time difference between the optical and RF waves, we can model the response in the time domain as a rect function whose width is the propagation time difference tau. Taking the Fourier transform of the rect function gives a sinc function, and the 3 dB cutoff of this sinc function in the frequency domain gives the device bandwidth. Note the MATLAB script below; the 3 dB bandwidth is found by a simple search over the frequency vector.

```matlab
% Example values shown for the user-defined parameters; replace with your own.
v_optical = 8.5e7;   % simulated optical velocity [m/s] (example value)
v_RF      = 1.1e8;   % simulated RF velocity [m/s] (example value)
l_device  = 5e-3;    % device length [m] (example value)
f_max     = 100e9;   % max frequency of vector (should exceed the bandwidth)
f_num     = 1e5;     % number of frequencies in vector

tau = abs((l_device/v_optical) - (l_device/v_RF)); % propagation time difference
W = linspace(0, f_max, f_num);                     % frequency vector
S = tau*sinc(W*tau/2);                             % sinc function in frequency domain
Qs = find(20*log10(S) <= (20*log10(S(1)) - 3));    % indices beyond the 3 dB cutoff
BW_3dB = W(Qs(1))                                  % 3 dB bandwidth [Hz] -- the result
```

• Optical Isolators and Photonic Integrated Isolators June 8, 2021

# I. Optical Isolators

## Introduction

An optical isolator is a device that allows light to travel in only one direction. Isolators have two ports and are made for free-space and optical-fiber applications. Lasers benefit from isolators, which prevent backscatter into the laser that is detrimental to its performance. Other applications include fiber-optic communication systems, such as CATV and RF over fiber, and gyroscopes. Isolators are magneto-optic devices that use Faraday rotators and polarizers to achieve optical isolation. The magneto-optic effect was discovered by Michael Faraday, when he observed that the polarization of light rotates as it propagates through a material subjected to a magnetic field.

## Types of Isolators

Two categories of optical isolators are free-space and fiber isolators. Isolators in both categories see use in a wide range of applications and at many wavelengths, from ultraviolet to long-wavelength infrared. Isolators can be fixed for isolation at a single wavelength, tunable across multiple wavelengths, or wideband. Adjustable isolators come with a tuning ring that adjusts the Faraday rotator's position and effect, and can be narrowband for specific wavelengths or broadband.

Polarization-dependent isolators and polarization-independent isolators are two different operating concepts used to achieve isolation. Polarization-dependent isolators use polarizers and Faraday rotators, while polarization-independent isolators use a Faraday rotator, a half-wave plate, and birefringent beam displacers. In either case, polarization is manipulated to achieve isolation, exploiting the magneto-optic effect in the Faraday rotator.

## Key Parameters of Isolators

An isolator’s performance is measured by its insertion loss, isolation, and return loss. The insertion loss, or S(2,1), should be low, meaning that little optical power is lost in the forward direction from port 1 to port 2. Optical power should not be transmitted from port 2 to port 1; the reduction in optical power from port 2 to port 1 is termed isolation, or S(1,2), and should be high. Optical isolators can have 50 dB or higher isolation [1]. Return loss measures the optical power reflected back out of port 1 and should also be high.
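These figures of merit are power ratios expressed in decibels. As a rough illustration (not from the original post; the power values below are hypothetical), they can be computed as:

```python
import math

def db(power_ratio):
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

# Hypothetical measured powers (milliwatts) for a fiber isolator:
p_in_fwd, p_out_fwd = 1.0, 0.90    # forward: port 1 -> port 2
p_in_rev, p_out_rev = 1.0, 1e-5    # reverse: port 2 -> port 1
p_refl = 1e-4                      # power reflected back out of port 1

insertion_loss = db(p_in_fwd / p_out_fwd)  # should be low (~0.46 dB here)
isolation      = db(p_in_rev / p_out_rev)  # should be high (50 dB here)
return_loss    = db(p_in_fwd / p_refl)     # should be high (40 dB here)
```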

Figure 2. Optical isolator as 2-port Element

Pulse dispersion is a relevant parameter in the design of isolators for specific pulsed laser uses, such as an ultrafast laser. This is measured as a ratio between pulse time width before the isolator and the pulse time width after leaving the isolator [1].

For fiber isolators, the type of fiber will need to be considered for the application. Size is a consideration for many applications; the design of the magnet in the isolator is often a major limiting factor in size reduction. Operating temperature and accepted optical power are two other considerations to ensure the isolator remains in acceptable operating conditions and is not damaged. This information can be found in datasheets for common isolators.

## Concept of Operation

Polarization is an important concept in the operation of an optical isolator. Polarization refers to the orientation of waves transverse to the direction of propagation. Polarization can be written as a vector sum of components in two directions perpendicular to the direction of propagation.

Figure 3. Wave polarized in the x-direction and propagating in the -z-direction

A polarizer changes the polarization of an incident wave to that of the polarizer. However, light only passes through the polarizer to the extent that the incident wave's polarization shares a component along the polarizer's direction. The output intensity from the polarizer is given by Malus' law, I = I₀·cos²(θ), where θ is the angle between the polarization of the incident wave and the polarizer's axis.

For example, a wave polarized in the x-direction propagating in the negative z-direction may enter a polarizer positioned in the x-direction. In this case, the angle difference between the polarizer and the incident wave is zero, meaning that full optical power is transmitted. If an angle is introduced between the polarizer and incident wave, the output intensity is reduced. Two polarizers are used in an optical isolator, as well as a Faraday rotator.
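This example can be checked directly from Malus' law, I = I₀·cos²(θ). A short sketch (Python used here for illustration; the post's own code is MATLAB):

```python
import math

def malus(i0, theta_rad):
    """Transmitted intensity through an ideal polarizer (Malus' law)."""
    return i0 * math.cos(theta_rad) ** 2

i0 = 1.0
print(round(malus(i0, 0.0), 3))          # 1.0 -- aligned: full power
print(round(malus(i0, math.pi / 4), 3))  # 0.5 -- 45 degrees: half power
print(round(malus(i0, math.pi / 2), 3))  # 0.0 -- crossed polarizers: no power
```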

Figure 4. The angle between polarizers and incident waves

When light propagates through a material in the presence of a magnetic field, the plane of polarization is rotated. This is termed the Faraday effect or the magneto-optic effect [2]. The Verdet constant measures the strength of the Faraday effect in a material. Its units are radians per Tesla per meter, and it is a function of the wavelength, electron charge and mass, dispersion, and the speed of light.

Figure 5. Rotation of Polarization using Faraday Effect

Figure 6. Optical Isolator in Forward Direction

Figure 7. Optical Isolator in Reverse Direction

Using the Faraday rotator and two polarizers, a non-reciprocal isolator is made. The forward and reverse directions for an isolator are demonstrated below, made of a Faraday rotator and two polarizers. The Faraday rotator is made to rotate the polarization 45 degrees in a clockwise direction.

Since the sense of rotation does not reverse with the direction of light through the Faraday rotator, a non-reciprocal effect is produced. The two polarizers are positioned at 45 degrees from one another. In the forward direction, the polarization of the optical wave matches that of the second polarizer, meaning that full optical power is present at the isolator's output. In the reverse direction, the 45-degree rotation from the Faraday rotator and the 45-degree angle between the polarizers add constructively, producing a 90-degree difference in polarization at the second polarizer. Using Malus' law, the resultant intensity of the outgoing light wave is zero. Since the Verdet constant depends on wavelength, a variety of materials can be used to exploit the Faraday effect. For 1550 nm light, widely used in telecommunications, Yttrium Iron Garnet has a strong effect and sees use in optical isolators [3].
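This forward/reverse behavior can be verified with a short Jones-matrix sketch, mirroring the products P2·FR·P1 and P1·FR·P2 used in the MATLAB model later in the post. Python is used here for illustration; ideal polarizers at 0 and 45 degrees and a 45-degree rotator are assumed:

```python
import math

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def rotator(beta):
    """Jones matrix of a Faraday rotator (rotation by beta)."""
    c, s = math.cos(beta), math.sin(beta)
    return [[c, -s], [s, c]]

P1 = polarizer(0.0)            # input polarizer along x
P2 = polarizer(math.pi / 4)    # output polarizer at 45 degrees
FR = rotator(math.pi / 4)      # 45-degree Faraday rotation

fwd = matmul(P2, matmul(FR, P1))   # forward: P1 -> rotator -> P2
rev = matmul(P1, matmul(FR, P2))   # reverse: P2 -> rotator -> P1

# Output power for an x-polarized input (first matrix column):
p_fwd = fwd[0][0] ** 2 + fwd[1][0] ** 2
p_rev = rev[0][0] ** 2 + rev[1][0] ** 2
print(round(p_fwd, 6), round(p_rev, 6))   # 1.0 0.0
```

In the forward product the rotation aligns the light with the second polarizer (full power); in the reverse product the rotation and the 45-degree polarizer offset add to 90 degrees and the output vanishes.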

## Designing an Isolator

To design an isolator, we must first consider the operating wavelength, the material used for Faraday rotation, and the electromagnet's output B field. Recall that the Verdet constant is a function of wavelength. For Yttrium Iron Garnet, the Verdet constant is 304 radians/Tesla/meter for 1550 nm-wavelength light [4]. The rotation of the polarization plane for linearly polarized light is

β = V·B·L,

where V is the Verdet constant, L is the length of the magneto-optic material, and B is the magnetic field applied to it. Given that we are looking for a Faraday rotation of 45 degrees, we can either solve for the required magnetic field strength B given a certain length L, or determine how long (L) the Faraday rotator should be given an applied magnetic field strength B. An optical isolator can be described using the following formulations. The first polarizer is for polarization in the x direction.

The Faraday rotator rotates the polarization by β radians, and the light then passes through the second polarizer, which is oriented at 45 degrees to the first.

The forward and reverse polarization can be calculated as follows:

The components of the resulting Jones’ Matrices for forward and reverse directions are plotted, while varying the magnetic field strength to find the optimal strength B for reverse isolation.

Figure 8. Forward Direction, Polarization from Isolator vs. Magnetic Field Strength

Given a Faraday rotator that is 1 mm long, rotates polarization by π/4 radians, and is made of Yttrium Iron Garnet with a Verdet constant of 304 radians/Tesla/meter, we can find what strength of magnetic field should be applied. The result shows that the optimal field strength is 2.584 Tesla to achieve full isolation.
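The quoted field strength follows directly from the Faraday rotation relation β = V·B·L. A quick check (Python used for illustration; the post's own code is MATLAB):

```python
import math

V = 304.0            # Verdet constant of YIG at 1550 nm [rad/(T*m)]
L = 0.001            # Faraday rotator length: 1 mm
beta = math.pi / 4   # desired rotation: 45 degrees

# beta = V * B * L  =>  B = beta / (V * L)
B = beta / (V * L)
print(round(B, 3))   # 2.584 Tesla, matching the plotted optimum
```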

Figure 9. Backward Direction, Polarization from Isolator vs. Magnetic Field Strength

The output matrices for the forward and reverse directions and the Faraday rotation matrix are then found to be:

Now that the dimensions of the isolator have been calculated, these numbers can be loaded into an optical simulation via the permittivity tensor of the material. Simulation is also a useful approach for determining at which length the polarization has rotated by the desired amount.

## Other Magneto-Optic Devices

The optical isolator is a magneto-optic component. Closely related is the circulator. The circulator is a non-reciprocal component with multiple ports and is commonly used for transceivers and radar receivers. When light enters a four-port circulator, the output port is dependent on the input port. A simple diagram and S-matrix are shown below for a four-port circulator, where the column denotes the port light entered and the row denotes the port light exits. Like the isolator, the circulator also uses Faraday rotation.
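As a sketch of this routing (illustrative Python; the port numbering, with light entering port j exiting port j+1, is an assumption for this example):

```python
# Ideal 4-port circulator: S[i][j] = 1 where output port i+1 receives
# light from input port j+1, else 0 (columns = entry, rows = exit).
S = [[0, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

def output_port(input_port):
    """Return the exit port (1-4) for light entering input_port (1-4)."""
    col = input_port - 1
    row = next(i for i in range(4) if S[i][col] == 1)
    return row + 1

print([output_port(p) for p in range(1, 5)])  # [2, 3, 4, 1]
```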

Figure 11. Circulator Concept and S-Matrix

Other magneto-optic devices include beam-deflectors, multiplexers, displays, magneto-optic modulators [5]. Magneto-optic memory devices, including disks, tapes, and films, were commonplace but have been largely replaced by solid-state memory. Thin-film magneto-optic waveguides have also been demonstrated [6]. Magnetic-tunable optical lenses allow for a dynamically tunable focal length [7].

# II. Photonic Integrated Isolators

## Introduction

Integrated photonics is a technology that, like integrated electronics, allows many components to be made on a single semiconductor chip, enabling more complex systems with reduced size and improved reliability. The photonic IC market was about \$190M in 2013 and is estimated between \$1.3B and \$1.8B in 2022 [8]. An integrated isolator is a highly sought technology, currently in the beginning stages of commercial availability and still under research and development. Photonic integrated circuits are usually developed on Silicon, Indium Phosphide, and Gallium Arsenide, each having advantages. Indium Phosphide is of particular importance for the telecommunications wavelength (C-band at 1550 nm) because this platform is used for lasers as well as photodetectors. Methods of bonding active components such as Indium Phosphide lasers onto a Silicon wafer have been realized with limited success. Given the benefit of an isolator to a laser, a well-designed integrated isolator could improve the performance of semiconductor lasers on Indium Phosphide.

Figure 12. Photonic Integrated Circuit

As mentioned previously, the circulator is based on the same non-reciprocal Faraday effect. When conducting research on integrated isolators, it is worth considering that advancements in integrated isolator technology can be applied to the design of an integrated circulator. However, the circulator serves a very different purpose, which may be more compatible with passive integrated photonics on a platform such as silicon. Integrated circulators can enable miniaturized receivers and a wide range of applications.

Integrated isolators can improve the performance of semiconductor lasers by reducing backscattering. Backscattering into the laser broadens the laser linewidth and increases relative intensity noise (RIN), a limiting factor for next-generation high dynamic-range microwave photonic systems. Reducing RIN in a microwave photonic link can therefore improve the overall signal-to-noise ratio (SNR).

Figure 13. Noise Limits in an RF Photonic Link

Benchtop laser units typically come packaged with isolating components as needed to improve performance, but to design an entire high dynamic range microwave photonic system on a chip, the integrated lasers will need sufficient isolation to ensure that the system works properly. For a fully integrated system, an integrated isolator therefore can reduce the noise floor and enable high dynamic range system operation.

Figure 14. High Dynamic Range Integrated Microwave Photonic System

Because of the potential of integrated isolators, the US Air Force has taken an interest in this technology and sought to introduce this technology to the AIM Photonics Foundry in Albany, New York [9]. AIM Photonics currently includes an integrated isolator in its process design kit.

The first challenge in making an integrated isolator is making an integrated Faraday rotator. Since the magneto-optic effect depends on the length of the Faraday rotator, this may cause an issue with the size of the component. Semiconductor platforms do not exhibit a magneto-optic effect. One main material used for the magneto-optic effect at the 1550 nm wavelength is Yttrium Iron Garnet (YIG). In fiber and free-space isolators, light propagates through the magneto-optic material. Two solutions address the use of YIG on a photonic IC: YIG waveguides [10] and layering YIG on top of the optical waveguides [11], each including an electromagnet for the magnetic field. When YIG is layered on top of the optical waveguides, the magneto-optic effect acts on the evanescent field outside the waveguide, producing a weaker effect. The advantage of layering YIG rather than using YIG waveguides is its relative ease of fabrication.

## Challenges: Fabrication

Fabrication with garnets and semiconductors is one challenge for integrated isolators. Garnets are not typically used in semiconductor fabrication, presenting several unique challenges specific to their material properties. Garnet deposition in particular has been found to be unreliable [11]. When growing semiconductor materials, the wafer is exposed to very high temperatures, and the thermal expansion mismatch between garnets and semiconductors makes growing YIG on a semiconductor difficult without cracking. To avoid thermal expansion mismatch, lower-temperature methods such as rapid thermal annealing (RTA) are used [12]. While direct bonding techniques are preferred over YIG waveguides, YIG waveguides on deposited films are made using an H3PO4 wet etch [12].

Polarizers in integrated photonics are achieved using waveguide polarizers. Waveguide polarizers have been realized using a variety of approaches, including metal-cladding and birefringent waveguides [13] and photonic crystal slab waveguides [14]. Polarizers have also been fabricated using ion beam lithography [10] [15].

Figure 15. Integrated Waveguide Polarizer Example

Another question related to exploiting Faraday rotation in an integrated isolator is the design of the optical waveguide structure and phase shift, or the use of polarizers, to reduce optical power in the reverse direction. Two such designs are the microring resonator and the Mach-Zehnder interferometer. Due to the challenges of designing with magneto-optic materials, there has also been work on developing integrated isolators that do not use the magneto-optic effect [16] [17]. One non-magneto-optic isolator was designed as a Mach-Zehnder interferometer, providing some backwards isolation [16].

Figure 16. Travelling-Wave MZ Modulator as Isolator

## Challenges: TE and TM Polarization

One issue with integrated isolators is that they require light to be TM polarized for operation, making them incompatible with TE polarized lasers. To circumvent this issue, some isolator designs are being optimized for TE polarization, or include polarization rotators between the laser and isolator [11]. The polarization rotator between the laser and isolator would then need to have low loss and a high polarization extinction ratio. Below is a model for an integrated waveguide polarization converter from TE1 to TM0 modes [11], which would follow after a TE0 to TE1 mode coupler:

Figure 17. TE1 to TM0 Converter for Integrated Isolators

## Challenges: Performance

The performance of current integrated isolator designs is a major drawback. Integrated isolators should provide wideband isolation across the C-band, high isolation, and low insertion loss. Wideband operation ensures that the isolator prevents backscattering at all wavelengths from a C-band laser. Low insertion loss is needed to prevent optical loss in the forward direction. Finally, isolation measures how much loss is provided in the reverse direction, which should be high.

Discrete-component isolators can offer up to 60 dB isolation with <1 dB insertion loss. Integrated isolators have been demonstrated with much lower isolation and often large insertion loss, while being too narrowband for some applications. Improving their performance is a research topic still being explored. The large optical loss has been theorized to come from scattering at the YIG layered above the waveguide, the interface of the bond, and losses in the YIG material. Other avenues for improving on previous designs include electromagnet design and waveguide design, especially the coupling of the electromagnet and a microring resonator isolator design [11].

Figure 18. Integrated Isolator Performance

## Design on Indium Phosphide

Design of integrated isolators on the Indium Phosphide platform has the advantage of being integrated directly after a 1550 nm wavelength laser, providing numerous benefits to the integrated laser. There remains an interest in integrating isolators on this platform with the laser rather than as a separate, discrete component. One design uses Yttrium Iron Garnet as the magneto-optic material with a Mach-Zehnder interferometer structure and a reciprocal phase shifter [18] [11]. A design on the indium phosphide platform was proposed in 2007, utilizing the magneto-optic effect on an MZI design and achieving greater than 25 dB isolation over the telecommunications wavelength C-band [19]. Another was proposed and developed in 2008 as an interferometric isolator on the indium phosphide platform based on non-reciprocal phase shifts in the modulator's arms [20]. The YIG magneto-optic material was bonded using a surface-activated direct bonding technique [20].

Figure 19. Integrated Isolator Design on InP with DFB Laser

The above design also features a 3×2 MMI coupler, so that the backward light wave is radiated out of the sides of the coupler, avoiding backscatter to the laser.

## Design on Silicon

Microring resonator isolators and Mach-Zehnder Interferometers (MZI) utilizing the magneto-optic effect have been developed on heterogeneous silicon integration platforms. Integrated micro-ring resonator isolators provide isolation with low insertion loss, but the isolation is too narrow for most applications [9]. An advantage to the MZI integrated isolator is its high isolation. However, this is largely offset by its large optical insertion loss, making them impractical for most RF Photonics applications [9].

Figure 20. Integrated MZI and Micro-ring Isolator

## Integrated Circulator

Designing an integrated circulator has several similarities to designing an integrated isolator, since both are based on the non-reciprocal magneto-optic effect.

Figure 21. Integrated Isolator Design using Mircoring Resonator

## Conclusion

In conclusion, isolators, and particularly integrated isolators, show promise for enabling future photonics technology. There is a clear demand for isolators to enable narrow-linewidth lasers with low RIN, and considerable effort is being applied to new integration approaches to realize an isolator that is wideband with high isolation and low insertion loss.

# Bibliography

## MATLAB Code

%Michael Benker

%ECE591 Photonic Devices

clf;

A_P2 = pi/4 %Polarization shift of P2

P1 = [1,0;0,0] %P1 Matrix

P2=[0.7071,0;0.7071,0] %P2 Matrix

B=2.584; %Magnetic Field

L = 0.001; %Length

V = 304; %Verdet Constant

beta = B*V*L; %Polarization shift

FR = [cos(beta),-sin(beta);sin(beta),cos(beta)]; %Faraday rotation matrix

Forward = P2*FR*P1

Backward = P1*FR*P2

h=101; %Number of points on the plot

for x=1:h

B=1.5+(x-1)*(3-1.5)/h;

Bvect(x) = B;

beta = B*V*L;

FR= [cos(beta),-sin(beta);sin(beta),cos(beta)];

Forward = P2*FR*P1;

Backward = P1*FR*P2;

Forw11(x) = Forward(1,1);

Forw21(x) = Forward(2,1);

Forw12(x) = Forward(1,2);

Forw22(x) = Forward(2,2);

Back11(x) = Backward(1,1);

Back21(x) = Backward(2,1);

Back12(x) = Backward(1,2);

Back22(x) = Backward(2,2);

end

figure(1)

plot(Bvect,Forw11)

hold on

plot(Bvect,Forw12)

plot(Bvect,Forw21)

plot(Bvect,Forw22)

legend(['F(1,1)';'F(1,2)';'F(2,1)';'F(2,2)'])

title('Isolator: Forward Direction vs. B field (L=1mm, V=304)')

ylabel('Matrix component value')

xlabel('Magnetic Field Strength B')

figure(2)

plot(Bvect,Back11)

hold on

plot(Bvect,Back12)

plot(Bvect,Back21)

plot(Bvect,Back22)

legend(['B(1,1)';'B(1,2)';'B(2,1)';'B(2,2)'])

title('Isolator: Backwards Direction vs. B field (L=1mm, V=304)')

ylabel('Matrix component value')

xlabel('Magnetic Field Strength B')

• Photolithography for Device Fabrication December 18, 2020

Introduction

Photolithography is a technique used in semiconductor device fabrication. A light-sensitive layer is added to a semiconductor wafer, and light is applied to selected parts of that layer so it can be removed where needed. After this is done, etching can be performed exclusively on the parts of the wafer not covered by the photosensitive layer.

Photoresist

The light-sensitive layer added to the wafer is called the photoresist. To apply photoresist to a wafer, first clean the wafer. The photoresist should cover most of the wafer. This should be done while the wafer is sitting in the spinner. The spinner rotates the wafer so that the photoresist has an even coating. Then the spun wafer is placed on a heating plate. The RPM and temperature of the hot plate are important. The photoresist data sheet can indicate the required spin rate and temperature.

When the photoresist layer is uneven, an interference pattern can be seen on the wafer, as shown below. The interference pattern shown is less than ideal. It is normal, however, for more photoresist to build up at the edges of the wafer. The excess photoresist at the edges can be removed using acetone and a cotton swab.

There is also the question of whether to cut the wafer before applying photoresist or afterwards. The advantage of cutting the wafer afterwards is that the built-up layer of photoresist at the edges will not be used, since the cut wafer will be taken from the middle. A disadvantage of cutting the wafer afterwards is that cutting can damage the photoresist layer. Also, the issue of built-up photoresist at the edges of the cut wafer will remain; if the cut wafer is not round, there will be more build-up at its corners. If using a silicon wafer, cutting a clean square is more difficult than with a III-V semiconductor. Generally, one can create a square or rectangle by making a notch in the side of the wafer; the lattice of the material will cause the break to follow a straight line.

The mask is what is used to select which parts of the photoresist layer will be removed and which will stay. Masks are ordered from a company such as Photronics with a .gds file for the die. The mask and the cut wafer are placed in a mask aligner. Here, a vacuum press is used. UV light is directed at the wafer from above the mask aligner.

Developing

The UV light from the mask aligner breaks the bond of the photoresist where it was applied. Now the broken-bond photoresist needs to be removed using a developer solution. The amount of time that the wafer is rinsed in the developer solution is critical. Too much time can cause the photoresist to be removed further than needed. This is especially important for small features, such as a waveguide.

Next, we can view the wafer using a microscope. The combination of the thickness of the photoresist, how even the photoresist layer is, the type of photoresist, how fast it was spun, how hot it was baked on the hot plate, the mask, the developer solution, how long it was soaked in the developer solution and how much care was given to the wafer during the process, including the presence of dust will all contribute to the overall result of the wafer. Below, curved waveguides are shown on the microscope. These are layers of photoresist. For a higher magnification, an electron microscope can be used.

• Photonic Components: Multimode Interference Waveguides December 18, 2020

Multimode Interference Waveguides, also termed MMI couplers, are used to split light from one waveguide into two or more paths. MMI couplers are designed to match the power at each output port. The length, width, and positioning of the output ports are critical to the design of the MMI coupler. The MMI coupler is also difficult to build in device fabrication because its performance is sensitive to the width of the multimode waveguide.

Below are two MMI couplers, designed in Rsoft. The 3dB Coupler has two output ports of half the input power. The approach for both couplers is to monitor the optical power at each output port in the simulation. Initially, we design the length of the multimode section to be longer than estimated. The length of the multimode waveguide section is reduced to the length at which the optical power in each of the output paths is equal.

3dB Coupler

MMI Coupler

• Designing a Waveguide Photodetector in Rsoft October 29, 2020

The following images depict the first stages of a waveguide photodetector design in Rsoft. The input waveguide is 2 microns wide, followed by a tapered section to a 10-micron-wide photodetector region. Three tapering topologies are used. Following these initial simulations come optimization of the photodetector region and electrical simulations.

First, the layer view. This section is at the input waveguide.

The InGaAs layers above the waveguide serve to absorb the optical power in the photodetector region:

Three different input tapers are used:

Exponential Taper:

Absorption in the photodetector region is in the range of 95%.

Here is the optical power remaining in the waveguide region:

Linear Taper:

• IMD3: Third Order Intermodulation Distortion September 11, 2020

We’ll begin a discussion on the topic of analog system quality. How do we measure how well an analog system works? One over-simplistic answer is to say that power gain determines how well a system operates. This is not sufficient. Instead, we must analyze the system to determine how well it works as intended, which may include the gain of the fundamental signal. Whether it is an audio amplifier, an acoustic transducer, a wireless communication system, or an optical link, the desired signal (either transmitted or received) needs to be distinguishable from the system noise. Noise, although situationally problematic, can usually be averaged out. The presence of other signals, however, cannot. This begs the question: which other signals could we be speaking of, if there is supposed to be only one signal? The answer is that the fundamental signal also comes with second-order, third-order, fourth-order, and higher-order harmonic and intermodulation distortion products, which cannot be averaged away like noise. Consider the following plot:

We usually talk primarily about Third-Order Intermodulation Distortion (IMD3) in such systems. Unlike the second- and fourth-order products, the third-order intermodulation products fall in the same spectral region as the first-order fundamental signals. Second- and fourth-order distortion can be filtered out using a bandpass filter for the in-band region. Note that fifth- and seventh-order intermodulation distortion can also cause issues in-band, although these products are usually much weaker.
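For two input tones f1 and f2, the products 2f1−f2 and 2f2−f1 land right next to the fundamentals, which is why IMD3 cannot be filtered. A small illustration (the tone frequencies are hypothetical):

```python
# Two closely spaced input tones (GHz) and where their products land.
f1, f2 = 1.000, 1.001

second_order = sorted([f2 - f1, f1 + f2])   # 0.001 and 2.001 GHz: out of band
imd3 = sorted([2 * f1 - f2, 2 * f2 - f1])   # third-order intermodulation

print([round(f, 3) for f in imd3])          # [0.999, 1.002] -- in-band
```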

Consider the use of a radar system. If a return signal is expected in a certain band, we need to be able to distinguish between the actual return and differentiate this from IMD3, else we may not be able to trust our result. We will discuss next how IMD3 is avoided.

• Mode Converters and Spot Size Converters September 10, 2020

Spot size converters are important for photonic integrated circuits where a coupling is done between two different waveguide sizes or shapes. The most obvious place to find a spot size converter is between a waveguide of a PIC and a fiber coupling lens.

Spot size converters feature tapered layers on top of a ridge waveguide for instance, to gradually change the mode while preventing coupling loss.

The RSoft example below shows how an optical path is converted from a narrower path (such as a waveguide) to a wider path (which could be for a fiber).

While the following simulation is designed in Silicon, similar structures are realized in other platforms such as InP or GaAs/AlGaAs.

RSoft Beamprop simulation, demonstrating conversion between two mode sizes. Optical power loss is calculated in the simulation for the structure.

This is the 3D structure. Notice that the red section carries the narrower optical path and is tapered to a wider path.

The material layers are shown:

Structure profile:

• Discrete-Time Signals and System Properties September 8, 2020

First, a comparison between Discrete-Time and Digital signals:

Discrete-Time and Digital signal examples are shown below:

Discrete-Time Systems and Digital Systems are defined by their inputs and outputs being both either Discrete-Time Signals or Digital Signals.

### Discrete-Time Signals

A Discrete-Time Signal x[n] is a sequence defined for all integers n.

Unit Sample Sequence:
𝜹[n]: 1 at n=0, 0 otherwise.
Unit Step:
u[n] = 1 at n>=0, 0 otherwise.

Or,

Any sequence: x[n] = a0*𝜹[n] + a1*𝜹[n-1] + a2*𝜹[n-2] + …
where ak is the magnitude at integer n = k.
or,
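The impulse and step definitions, and the decomposition of any sequence into scaled, shifted impulses, can be sketched as follows (illustrative Python; the sparse sequence is a made-up example):

```python
def delta(n):
    """Unit sample sequence: 1 at n = 0, 0 otherwise."""
    return 1 if n == 0 else 0

def u(n):
    """Unit step sequence: 1 at n >= 0, 0 otherwise."""
    return 1 if n >= 0 else 0

# x[n] = sum_k x[k] * delta[n - k]: rebuild a sequence from impulses.
x = {0: 3.0, 2: -1.5, 5: 2.0}   # hypothetical sparse sequence {n: value}

def x_of(n):
    return sum(a * delta(n - k) for k, a in x.items())

print([x_of(n) for n in range(6)])  # [3.0, 0.0, -1.5, 0.0, 0.0, 2.0]
```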

### Exponential & Sinusoidal Sequences

Exponential sequence: x[n] = A·𝞪^n
where 𝞪 is complex, with A = |A|e^(j𝜙) and 𝞪 = |𝞪|e^(jω0): x[n] = |A|e^(j𝜙)·|𝞪|^n·e^(jω0n) = |A||𝞪|^n·e^(j(ω0n+𝜙))
= |A||𝞪|^n·(cos(ω0n+𝜙) + j·sin(ω0n+𝜙))
Complex and sinusoidal: -𝝅 < ω0 < 𝝅 or 0 < ω0 < 2𝝅.

Exponential sequences for given 𝞪 (complex 𝞪 left, real 𝞪 right):

Periodicity:        x[n] = x[n+N], for all n (definition). Period = N.
Sinusoid: x[n] = A cos(ω0n+𝜙) = A cos(ω0n + ω0N + 𝜙)
Test: ω0N = 2𝝅k                            (k is an integer)

Exponential: x[n] = e^(jω0(n+N)) = e^(jω0n)
Test: ω0N = 2𝝅k                            (k is an integer)
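The test ω0N = 2𝝅k means a discrete sinusoid is periodic only if ω0/(2𝝅) is rational, and the fundamental period is the reduced denominator. A small check (illustrative Python):

```python
from fractions import Fraction

def period(w0_over_2pi):
    """Fundamental period N of cos(w0*n) with w0 = 2*pi*(w0_over_2pi).

    Periodic iff w0/(2*pi) is rational; N is the reduced denominator,
    the smallest N with w0*N = 2*pi*k for integer k.
    """
    return Fraction(w0_over_2pi).denominator

print(period(Fraction(3, 8)))   # 8  (w0 = 2*pi*3/8, k = 3)
print(period(Fraction(1, 2)))   # 2  (w0 = pi)
```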

### System Properties

System: Applied transformation y[n] = T{x[n]}

Memoryless Systems:

Output y[nx] is only dependent on input x[nx] where the same index nx is used for both (no time delay or advance).

Additive property:         Where y1[n] = T{x1[n]} and y2[n] = T{x2[n]},
y2[n] + y1[n] = T{x1[n]+ x2[n]}.

Scaling property:            T{a.x[n]} = a.y[n]

Time-Invariant Systems:

Time shift of input causes equal time shift of output. T{x[n-M]} = y[n-M]

Causality:

The system is causal if output y[n] is only dependent on x[n+M] where M<=0.

Stability:

A bounded input must produce a bounded output: if |x[n]| <= Bx < ∞ for all n, then |y[n]| <= By < ∞ for all n.

### Linear Time-Invariant Systems

Two Properties: Linear & Time-Invariant follows:

“Response” hk[n] describes how system behaves to impulse 𝜹[n-k] occurring at n = k.

• Convolution Sum: y[n] = x[n]*h[n].

Performing Discrete-Time convolution sum:

1. Identify the bounds of x[k] (where x[k] is non-zero) as N1 and N2.
2. Determine an expression for x[k]h[n-k].
3. Solve the sum y[n] = Σ x[k]h[n-k] over k = N1 to N2.

General solution for exponential (else use tables):

Graphical solution: superposition of responses hk[n] for corresponding input x[n].
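The steps above can be sketched as a direct implementation of the convolution sum (illustrative Python; both sequences are assumed to start at n = 0):

```python
def conv(x, h):
    """Discrete convolution y[n] = sum_k x[k]*h[n-k] for finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm   # x[k] contributes to output index k + m
    return y

x = [1, 2, 1]        # input sequence
h = [1, 1, 1]        # impulse response of a 3-tap moving-sum system
print(conv(x, h))    # [1.0, 3.0, 4.0, 3.0, 1.0]
```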

### LTI System Properties

As LTI systems are described by convolution…

LTI is commutative: x[n]*h[n] = h[n]*x[n].

… is additive: x[n]*(h1[n]+h2[n]) = x[n]*h1[n] + x[n]*h2[n].

… is associative: (x[n]*h1[n])*h2[n] = x[n]*(h1[n]*h2[n])

LTI is stable if the sum of impulse response magnitudes is finite: Σ|h[k]| < ∞ (absolutely summable impulse response).

… is causal if h[n] = 0 for n<0                  (causality definition).

Finite-duration Impulse response (FIR) systems:

Impulse response h[n] has limited non-zero samples. Simple to determine stability (above).

Infinite-duration impulse response (IIR) systems:

Example: h[n] = a^n·u[n], so Bh = Σ |a|^n over n >= 0.

If |a| < 1, Bh is finite (the system is stable) and, using the geometric series, Bh = 1/(1-|a|).

Delay on impulse response: h[n] = sequence*delay = (𝜹[n+1]- 𝜹[n])* 𝜹[n-1] = 𝜹[n] – 𝜹[n-1].
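The geometric-series stability bound for h[n] = a^n·u[n] can be checked numerically (illustrative Python, with a = 0.5 as an assumed example):

```python
# Bh = sum |a|^n for n >= 0; for |a| < 1 this converges to 1/(1-|a|).
a = 0.5
partial = sum(abs(a) ** n for n in range(1000))  # numerical partial sum
closed_form = 1.0 / (1.0 - abs(a))               # geometric series result
print(round(partial, 6), closed_form)            # 2.0 2.0
```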

______________________________________________________

Continued:

• MMIC – A Revolution in Microwave Engineering September 3, 2020

One of the most revolutionary inventions in microwave engineering was the MMIC (monolithic microwave integrated circuit) for high-frequency applications. The major advantage of the MMIC was integrating previously bulky components into tiny, non-discrete components on a chip. The subsequent image shows the integrated components of the MMIC, such as spiral inductors (red) and FETs (blue).

It is apparent that smaller transistors are present towards the input of the MMIC. This is because less power is required to amplify the weak input signals. As the signals become stronger, higher power (and hence a larger FET) is required. The input terminal (given by the arrow) is the gate and the output the drain. Like almost all RF devices, MMIC’s output and input are usually matched to 50 ohms, making them easy to cascade.

Originally, MMICs found their place within the DoD for use in phased array systems in fighter jets. Today, they are present in cellular phones, which operate in the GHz range much like military radars. MMICs have shifted from MESFET configurations to HEMTs, which utilize compound semiconductors to create heterostructures. MMICs can be fabricated using silicon (low cost) or III-V semiconductors, which offer higher speed. Additionally, MOSFET transistors are becoming increasingly common due to improved performance over the years. The MOSFET gate has been shortened from several microns to several nanometers, allowing better performance at higher frequencies.

• Arrayed Waveguide Grating for Wavelength Division Multiplexing August 30, 2020

Arrayed Waveguide Grating (or AWG) is a method for wavelength division multiplexing or demultiplexing. The approach for multiplexing is to use unequal path lengths to generate a wavelength-dependent phase delay, producing constructive interference for each wavelength at its own output port of the AWG. Demultiplexing is the same process in reverse.

Arrayed waveguide gratings are commonly used in photonic integrated circuits. While ring resonators are also used for WDM, they see other uses as well, such as tunable or static filters. Further, a ring resonator selects a single wavelength to be removed from the input, whereas an AWG separates all of the light according to wavelength. For many applications this is the superior WDM approach, as it offers great capability for encoding and modulating a large amount of information per wavelength.

Both the design of the star coupler and the path length difference according to the designed wavelength division make up the significant amount of complexity of this component. RSoft by Synopsys includes an AWG Utility for designing arrayed waveguide gratings.
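As a rough sketch of where that path-length complexity comes from (the wavelength, effective index, and grating order below are assumed example values, not from this post): constructive interference at the design wavelength requires each arrayed waveguide to be longer than its neighbor by an integer number of guided wavelengths.

```python
lam_c = 1.55e-6   # design center wavelength in meters (assumed)
n_eff = 1.45      # effective index of the arrayed waveguides (assumed)
m = 30            # grating order, a design choice (assumed)

# Incremental path length between adjacent arrayed waveguides:
delta_L = m * lam_c / n_eff
print(f"path length increment: {delta_L * 1e6:.2f} um")
```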

Using this utility, a star coupler is created below:

• Methods of Optical Coupling August 29, 2020

An optical coupler is necessary for transferring optical energy into or out of a waveguide. Optical couplers are used for both free-space to waveguide optical energy transmission as well as a transmission from one waveguide to another waveguide, although the methods of coupling for these scenarios are different. Some couplers selectively couple energy to a specific waveguide mode and others are multimode. For the PIC designer, both the coupling efficiency and the mode selectivity are important to consider for optical couplers.

Where the coupling efficiency η is equal to the power transmitted into the waveguide divided by the total incident power, the coupling loss (units: dB) is equal to
L = 10*log(1/η).
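Plugging numbers into this definition, a minimal sketch of the loss formula:

```python
import math

def coupling_loss_db(eta):
    """Coupling loss L = 10*log10(1/eta), where eta = coupled / incident power."""
    return 10 * math.log10(1 / eta)

print(coupling_loss_db(0.5))   # ≈ 3.01 dB (half the light coupled)
print(coupling_loss_db(0.1))   # 10 dB
```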

Methods of optical coupling include:

• Direct Focusing
• End-Butt Coupling
• Prism Coupling
• Grating Coupling
• Tapered Coupling (and Tapered Mode Size Converters)
• Fiber to Waveguide Butt Coupling

### Direct Focusing for Optical Coupling

In direct focusing, a lens focuses a free-space beam onto the end face of the waveguide, with the beam aligned parallel to the waveguide axis. This is one type of transverse coupling, and it is also sometimes referred to as end-fire coupling. The method is generally deemed impractical outside of precision laboratory applications.

### End-Butt Coupling

A prime example of end-butt coupling is the case where a laser is fixed directly against a waveguide: the waveguide is placed in front of the laser, aligned with the light-emitting layer.

### Prism Couplers

Prism coupling is used to direct a beam onto a waveguide when the beam is at an oblique incidence. A prism is used to match the phase velocities of the incident beam and the waveguide.

### Grating Couplers

Similar to the prism coupler, the grating coupler also functions to produce a phase match between a waveguide mode and an oblique incident beam. Gratings perturb the waveguide modes in the region below the grating, producing a set of spatial harmonics. It is through gratings that an incident beam can be coupled into the waveguide with a selective mode.

### Tapered Couplers

Explained in one way, a tapered coupler intentionally disturbs the conditions of total internal reflection by tapering or narrowing the waveguide. Light thereby leaves the waveguide in a predictable manner, based on the tapering of the waveguide.

### Tapered Mode Size Converters

Mode size converters exist to transfer light from one waveguide to another with a different cross-sectional dimension.

### Butt Coupling

The procedure of placing the waveguide region of a fiber directly to a waveguide is termed butt coupling.

• RF Spectrum Analyzers August 27, 2020

A spectrum analyzer (whether RF or optical) is the dual of the oscilloscope. An oscilloscope displays a waveform in the time domain; taking the Fourier transform of that waveform yields its spectrum, which is what a spectrum analyzer displays.

Spectrum analyzers are very similar to radio receivers. A radio receiver can be one of many types: (super)heterodyne, crystal video, etc. Like a heterodyne receiver, which features a bandpass filter, mixer, and low-pass filter, a spectrum analyzer must tune over a specific range. The tuned band must be very narrow, which requires a high-Q bandpass filter. This is where the YIG (Yttrium Iron Garnet) filter comes into play: YIG has a very high quality factor and resonates when exposed to a DC magnetic field. This filter determines the spectrum analyzer's "resolution bandwidth" (RBW). A narrow RBW means a less noisy display and better resolution; the tradeoff is increased sweep time, which is inversely proportional to the RBW squared.
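The sweep-time tradeoff can be sketched as follows; the proportionality constant k is analyzer-dependent, and the value used here is only an assumption.

```python
def sweep_time_s(span_hz, rbw_hz, k=2.5):
    """Approximate swept sweep time, ST ≈ k * span / RBW^2 (k assumed ~2.5)."""
    return k * span_hz / rbw_hz ** 2

st_narrow = sweep_time_s(1e6, 1e3)   # 1 MHz span, 1 kHz RBW
st_wide = sweep_time_s(1e6, 2e3)     # doubling the RBW...
print(st_narrow, st_wide)            # ...cuts sweep time by a factor of 4
```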

A sweep generator is used to repetitively scan over the frequency band. The oscillator sweeps, repetitively mixes/multiplies with the input signal, and the result is filtered with a low-pass filter. The low-pass filter determines the spectrum analyzer's "video bandwidth" (VBW).

An important concept with regard to bandwidth is thermal noise, the single greatest source of noise in systems below 100 GHz (past 100 GHz and into optics, shot noise becomes more significant). Noise power is given as kTB, where k is Boltzmann's constant. Temperature has a comparatively small effect: at absolute zero there is no thermal noise at all, but above that, the difference between a cold device and a scorching hot one is only on the order of 10 dB. Bandwidth, then, is the dominant contributor. A higher RBW raises the displayed noise floor and makes closely spaced frequency components harder to distinguish, as more frequency components pass through to the envelope detector.

Video bandwidth, on the other hand, typically determines resolution between power levels and smooths the display. It is important to note that the VBW contribution happens after data has been collected and does not affect the measurement results, whereas the RBW dictates the minimum measurable bandwidth.

Phase noise, which results from phase jitter (deviation from perfect periodicity), is also present in a spectrum analyzer and can affect measurements near the center frequency. Because it is effectively a phase modulation, it produces sidebands near the center frequency that can interfere with the measurement.

• Programs for PIC (photonic Integrated Circuit) Design August 25, 2020

For building PICs, or photonic integrated circuits, a number of platforms are used in industry today. Lumerical Suite is a major player, with built-in simulators, and Cadence has a platform that can simulate photonic and electronic circuits together, which for certain applications is a major advantage. The two platforms I've become familiar with are the Synopsys PIC Design Suite (available to students under an agreement, underwritten by a professor at your university, ensuring its use is for educational purposes only) and Klayout with the Nazca Design packages.

Synopsys is another great company with advanced programs for photonic simulation and PIC design. The Synopsys Photonic Design Suite can include components designed in RSoft. OptoDesigner is the program in the suite where PICs are laid out, though the learning curve may not be what you were hoping for. The 3,000+ page manual lets the user dive into the PheoniX scripting language, which is necessary to learn for PIC design with Synopsys. Using a scripting language means that designing your PIC can be automated, eliminating repetitive design work; it also brings other advantages, such as being able to fine-tune a design without clicking and dragging components. Coding for PIC design might sound tedious, but once you start using it, I think you'll realize it's really not, and that it's a very powerful way of designing PICs. Note that the PheoniX scripting language is similar to C.

One of the greatest aspects of OptoDesigner and the PIC Design Suite is the simulation capabilities. Much like the simulations that can be run in Rsoft, these are available in OptoDesigner.

The downside of the Synopsys PIC Design Suite is the difficulty of obtaining a license that can be used for any and all purposes, including commercial. I mentioned that I obtained a student version, which is great for learning the software, up to a point: the learning stops when I would like to build something that could be sent to a foundry for manufacture. Let's be honest, though, there is a lot to learn before getting to that point. Even to use a Process Design Kit (PDK), which contains the real component models for a real foundry so that you can submit your design to be built on a wafer, you will need to convince Synopsys that the PDK is part of an educational curriculum, not just personal learning. If your university lets you get your hands on a PDK with the Synopsys student version, you will essentially have free rein to design PICs to your heart's content. Even so, you'll still have to buy a professional version if you want to design a PIC with a foundry PDK, submit it for a wafer run, and sell it; I'll let you look up the cost of that. In conclusion, the best way to use Synopsys is to work for a company that has already paid for the professional version.

Now, if you find yourself in a situation where all the simulation benefits of OptoDesigner are outweighed by the difficulty of getting to a wafer run, you might just want to use Klayout with the Nazca Design photonic integrated circuit design packages. Both are open source. Game changer? Possibly. Suddenly you picture yourself working as an independent PIC design contractor someday, and you'll have Klayout to thank.

Klayout and the Nazca Design packages are based on the very popular Python programming language. Coding can be done in Spyder, Notepad, or even Command Prompt (lol?). If you aren't familiar with how Python works, PIC design might move you to learn. Python takes the place of the PheoniX scripting language used in OptoDesigner, so you still have the automation and big-brain possibilities that a scripting language gives you for designing PICs. As for simulations, you'll have to go with your gut, but you could model your circuit with discrete components and evaluate it that way.

Klayout doesn't come with a 3,000+ page manual, but you'll likely find that it is simpler to use than OptoDesigner. Below is a Python script which generates a .gds file, and then the file opened in Klayout.

• Ring Resonators for Wavelength Division Multiplexing July 23, 2020

The ring resonator is a rather simple passive photonic component, but its uses are quite broad.

The basic concept of the ring resonator is that light at the resonance frequency entering port 1 in the diagram below is trapped in the ring and exits out of port 3, while frequencies away from resonance pass through to port 2.

Ring resonators can be used for Wavelength Division Multiplexing (WDM). WDM allows for the transmission of information allocated to different wavelengths simultaneously without interference. There are other methods for WDM, such as an Asymmetric Mach Zehnder Modulator.

Here I present one scheme that utilizes four ring resonators to perform wavelength division multiplexing. The fifth output transmits the remaining wavelengths after the chosen wavelengths have been removed, as determined by the resonant frequency (and, in practice, the radius) of each ring resonator.
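The link between ring radius and resonant wavelength can be sketched with the standard resonance condition (the radius and effective index below are assumed values, not from this design):

```python
import math

R = 10e-6      # ring radius in meters (assumed)
n_eff = 2.4    # effective index of the ring waveguide (assumed)
L_ring = 2 * math.pi * R   # ring circumference

# Resonance condition: m * lambda = n_eff * L_ring for integer mode number m
for m in range(96, 100):
    lam_nm = n_eff * L_ring / m * 1e9
    print(f"m = {m}: resonant wavelength = {lam_nm:.1f} nm")
```

Changing R shifts the whole comb of resonances, which is why each of the four rings above uses a different radius.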

• Quantum Well: InP-InGaAsP-InP July 6, 2020

Quantum wells are widely used in optoelectronic and photonic components and for a variety of purposes. Two materials that are often used together are InP and InGaAsP. Two different models will be presented here with simulations of these structures. The first is an InP pn-junction with a 10 nm InGaAsP (unintentionally doped) layer between. The second is an InP pn-junction with 10 nm InGaAsP quantum wells positioned in both the positive and negative doped regions.

Quantum Well between pn-junction

The conduction band and valence band energies are depicted below for the biased case:

The conduction current vector lines:

ATLAS program:

go atlas
Title Quantum Wells
# Define the mesh
mesh auto
x.m l = -2 Spac=0.1
x.m l = -1 Spac=0.05
x.m l = 1 Spac=0.05
x.m l = 2 Spac =0.1
#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = InP NY = 10 acceptor = 1e18
region num=3 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
region num=2 bottom thick = 0.5 material = InP NY = 10 donor = 1e18
# Electrode specification
elec       num=1  name=anode  x.min=-1.0 x.max=1.0 top
elec       num=2  name=cathode   x.min=-1.0 x.max=1.0 bottom

#Gate Metal Work Function
contact num=2 work=4.77
models region=1 print conmob fldmob srh optr
models region=2 srh optr
material region=2

#SOLVE AND PLOT
solve    init outf=diode_mb1.str master
output con.band val.band e.mobility h.mobility band.param photogen opt.intens recomb u.srh u.aug u.rad flowlines
tonyplot diode_mb1.str
method newton autonr trap  maxtrap=6 climit=1e-6
solve vanode = 2 name=anode
save outfile=diode_mb2.str
tonyplot diode_mb2.str
quit
Quantum Well layers inside both p and n doped regions of the pn-junction
Structure:
Simulation results:
#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.25 material = InP NY = 10 acceptor = 1e18
region num=3 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
region num=4 bottom thick = 0.25 material = InP NY = 10 acceptor = 1e18
region num=2 bottom thick = 0.25 material = InP NY = 10 donor = 1e18
region num=6 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
region num=5 bottom thick = 0.25 material = InP NY = 10 donor = 1e18
• Capacitance and Parallel Plate Capacitors July 5, 2020

Capacitance relates two fundamental electric concepts: charge and electric potential. The formula that relates the two is Capacitance = charge / electric_potential.

An equipotential surface is a surface along which moving a charge requires zero work from the field. If charge is distributed along the surface of a conductor (an equipotential surface), then the potential energy of the charged conductor equals one half the electric potential φ multiplied by the integral of the charge over that surface:

Ue = ½ φ ∫ dq.

Given a scenario in which both charge and electric potential are related, we may introduce capacitance. The following formula proves important for calculating the energy of a charged conductor:

Ue = ½ φq = ½ Cφ² = q²/(2C).

A parallel plate capacitor is a system of two metal plates separated by a dielectric. One plate of the capacitor is positively charged while the other is negatively charged. The potential difference and the charge on the capacitor plates cause energy to be stored in the electric field between the two plates.
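A quick numeric sketch tying the plate geometry to the stored energy; the plate area, gap, and voltage are arbitrary example values:

```python
EPS0 = 8.85e-12   # permittivity of vacuum, C^2/(N*m^2)

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel plate capacitor: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

C = parallel_plate_capacitance(1e-4, 1e-3)   # 1 cm^2 plates, 1 mm apart
phi = 10.0                                   # volts across the plates
Ue = 0.5 * C * phi ** 2                      # stored energy, Ue = ½ C φ²
print(C, Ue)
```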

• Electric Potential and Electric Potential Energy July 4, 2020

Electric potential can be summarized as the work done by the electric force, per unit charge, in moving a charge from one point to another; its unit is the volt. Electric potential does not depend on the shape of the path along which the work is applied. Because the system is conservative, the energy required to move a charge in a full circle, returning it to where it started, is zero.

The work of an electrostatic field takes the formula

W12 = keqQ(1/r1 – 1/r2),

which is found by integrating the charge q times the electric field along the path. The work of an electrostatic field connects both the electric potential and the electric potential energy: the electric potential energy U equals the electric potential φ multiplied by the charge q. The work of the field is a difference of potential energies between two points, while the electric potential describes the level at a single point.

To calculate electric potential energy, it is convenient to take the potential energy to be zero at infinite distance (as it physically should be). In this case, the electric potential energy equals the work needed to move the charge from point 1 to infinity.

We'll consider a quick application relating the dipole moment to the electric potential. The dipole potential takes the form shown in the figure below; it decreases faster with distance r than the potential of a point charge does.

• Dipole Moment July 3, 2020

Consider a positive and a negative charge separated by some distance. When applying superposition of the electric force and electric field generated by the two charges at a target point, the pair is said to create a dipole moment. Let's consider a few examples of how an electric field is generated at a point in the presence of both a positive and a negative charge. Molecules, too, often have a dipole moment.

Here, the target point is at distance b from the center, on the perpendicular bisector of the line joining the negative and positive charges. When both charges have the same magnitude, the components along the bisector cancel, leaving an electric field directed parallel to the axis of the two charges.

Now we'll consider a target point along the axis of the two charges. Remember that a positive charge produces an electric force and field that radiate outward from it, while the field points inward toward a negative charge. We can therefore expect the field to differ on either side: the positive side repels and the negative side attracts. This works because the field falls off as the inverse square of distance, so the nearer charge dominates over the farther one. This is a dipole.

Given how a dipole behaves, it is convenient to have a more refined set of formulas for solving electric field problems with dipoles. The dipole moment p is given by p = q·l, with units of coulomb·meters, where l is the vector pointing from the negative charge to the positive charge. The dipole moment is drawn as a single point at the center of the dipole with the vector l through it.

To treat the two charges as a single dipole, the distance from the dipole to the target point must be much larger than the charge separation l (the magnitude of vector l).

Finally, the formulas for these electric fields in terms of the dipole moment are

E1 = ke·p/b1^3 (on the perpendicular bisector)

E2 = 2ke·p/b2^3 (on the dipole axis)
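The axial dipole formula (the case with the factor of 2) can be checked against direct superposition of the two point charges; the charge, separation, and distance below are arbitrary test values:

```python
KE = 8.99e9          # Coulomb constant, N*m^2/C^2
q, l = 1e-9, 1e-4    # charge (C) and separation (m); dipole moment p = q*l
p = q * l

def axial_field_exact(r):
    """Field on the dipole axis by direct superposition of both charges."""
    return KE * q / (r - l / 2) ** 2 - KE * q / (r + l / 2) ** 2

r = 0.1                           # r >> l, so the dipole formula should hold
approx = 2 * KE * p / r ** 3      # axial dipole field, E = 2*ke*p/r^3
print(axial_field_exact(r), approx)
```

The two numbers agree to better than 0.1%, confirming the far-field approximation.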

• Electric Force & Electric Field July 2, 2020

While the electric force describes the exertion of one charge or body on another, we also have to remember that the two objects do not need to be physically touching for this force to be applied. For this reason, we describe the force being exerted through empty space (i.e. where the two objects aren't touching) as an electric field. Any charge or body that exerts an electric force, determined most importantly by the distance between the objects and the amount of charge present, generates an electric field.

The electric field at a point is directly proportional to the electric (Coulomb) force exerted on a test charge there and inversely proportional to the charge of that test particle: if the Coulomb force is greater, the field is stronger, and for a given force the field is larger when the test charge is smaller. The Coulomb force, as mentioned previously, falls off with the distance between the charges. The electric field E is given by E = F/q, with units of volts per meter.

By combining both Coulomb’s Law and our definition for the electric field, the electric field can be written as

E1 = ke·q1/r^2 · er,

where er again is the unit vector pointing away from charge q1.

When drawing electric field lines, there are three rules to pay attention to:

1. The direction is tangent to the field line (in the direction of flow).
2. The density of the lines is proportional to the magnitude of the electric field.
3. Vector lines emerge from positive charges and sink towards negative charges.

Adding electric fields to produce a resultant electric field is simple, thanks to the property of superposition which applies to electric fields. Below is an example of how a resultant electric field will be calculated geometrically. The direction of each individual field from the charges is determined by the polarity of the charge.

• Coulomb Force July 1, 2020

Electric charge is important in determining how a body or particle will behave and interact electromagnetically. It is also key to understanding how electric fields, electric potentials, and electromagnetic waves come into existence. It starts with the atom and its numbers of protons and electrons.

Charges are positive or negative. In a neutral atom, the number of protons in the nucleus equals the number of electrons. When an atom gains or loses an electron from this state, it becomes a negatively or positively charged ion, respectively. When bodies or particles exhibit a net charge, either positive or negative, an electric force arises. Charges can be caused by friction or irradiation. The electrostatic force functions similarly to the gravitational force; in fact, the formulas look very similar! The most important difference is that the electrostatic force can be attractive or repulsive, while the gravitational force is always attractive. For small bodies, the electrostatic force dominates and the gravitational force is negligible.

Charles Coulomb conducted experiments around 1785 to understand how electric charges interact. He devised two main relations that would become Coulomb's Law:

The magnitude of the force between two stationary point charges is

1. proportional to the product of the magnitude of the charges and
2. inversely proportional to the square of the distance between the two charges.

The following expression describes how one charge will exert a force on another:

The unit vector in the direction from charge 1 to charge 2 is written e12, and the order of the subscripts indicates the direction of the force, from the first charge to the second. Reversing the direction of the force reverses the sign: F12 = −F21.

The coefficient ke will depend on the unit system and is related to the permittivity:

The permittivity of vacuum is ε0 = 8.85×10^(−12) C²/(N·m²).

Coulomb forces obey superposition: the forces from a series of charges on a 'target' charge add linearly, each independent of the others. Coulomb's Law extends to bodies and non-point charges to describe the electrostatic force on an object; the same first equation applies in that scenario.
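A minimal numeric check of Coulomb's Law using the permittivity above:

```python
import math

EPS0 = 8.85e-12                 # permittivity of vacuum, C^2/(N*m^2)
KE = 1 / (4 * math.pi * EPS0)   # Coulomb's constant, ≈ 8.99e9 N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return KE * abs(q1 * q2) / r ** 2

print(KE)                              # ≈ 8.99e9
print(coulomb_force(1e-6, 1e-6, 1.0))  # two 1 uC charges 1 m apart, ≈ 9 mN
```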

• Noise Figure June 30, 2020

Electrical noise consists of unwanted alterations to a signal, random in amplitude, frequency, and phase. Since RADAR typically operates at microwave frequencies, the noise contribution of most RADAR receivers is highest in the first stages; this is mostly thermal (Johnson) noise. Each component of a receiver has its own noise figure (dB), which is typically kept low through the use of an LNA (low-noise amplifier). It is important to know that all conductors above absolute zero (0 K) generate thermal noise.

Noise Power

Noise power is the product of Boltzmann's constant, temperature in kelvin, and receiver bandwidth (k·T0·B), typically expressed in dBm. This value is −174 dBm at room temperature in a 1 Hz bandwidth. For a different receiver bandwidth, simply add the decibel equivalent of the bandwidth ratio: at 1 MHz, the ratio is 60 dB (10·log(10^6) = 60), giving −174 + 60 = −114 dBm. For a real receiver, this number is increased by the noise figure.
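The arithmetic above can be sketched directly:

```python
import math

K_B = 1.38e-23   # Boltzmann's constant, J/K

def noise_power_dbm(temp_k, bw_hz):
    """Thermal noise power kTB, converted to dBm."""
    watts = K_B * temp_k * bw_hz
    return 10 * math.log10(watts / 1e-3)

print(noise_power_dbm(290, 1))     # ≈ -174 dBm in a 1 Hz bandwidth
print(noise_power_dbm(290, 1e6))   # ≈ -114 dBm in a 1 MHz bandwidth
```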

The Noise Figure is defined as 10·log(Na/Ni), where Na is the noise output of an actual receiver and Ni is the noise output of an ideal receiver; alternatively, the two can be converted to dB and subtracted. It can also be viewed as the factor by which SNR degrades from input to output. For systems on earth, Noise Figure is quite useful, as temperature tends to stay around 290 K (room temperature). For satellite communication, however, the antenna temperature tends to be colder than 290 K, and effective noise temperature is used instead.

Noise factor is the linear equivalent of Noise Figure. For cascaded stages, the noise contribution of each successive stage is divided by the total gain of the stages before it (Friis' formula), so later stages contribute less and less. This explains why, in a receiver chain, the first components have a much larger effect on the overall Noise Figure.
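Friis' formula makes this concrete; the two-stage values below (an LNA followed by a lossier stage) are illustrative assumptions:

```python
import math

def cascaded_noise_factor(stages):
    """Friis' formula. stages: list of (noise_factor, gain) in linear units."""
    f_total, g_total = stages[0]
    for f, g in stages[1:]:
        f_total += (f - 1) / g_total   # later stages divided by preceding gain
        g_total *= g
    return f_total

lin = lambda db: 10 ** (db / 10)
# LNA (NF 1 dB, gain 20 dB) followed by a mixer stage (NF 6 dB, gain 10 dB):
f = cascaded_noise_factor([(lin(1), lin(20)), (lin(6), lin(10))])
print(10 * math.log10(f))   # ≈ 1.10 dB: dominated by the first stage
```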

Noise Figure is a very important Figure of Merit for detection systems where the input signal strength is unknown. For example, it is necessary to decrease the Noise Figure in the electromagnetic components of a submarine in order to detect communication and RADAR signals.

• Dispersion in Optical Fibers June 29, 2020

Dispersion is defined as the spreading of a pulse as it propagates through a medium. It essentially causes different components of the light to propagate at different speeds, leading to distortion. The most commonly discussed dispersion in optical fibers is modal dispersion, which is the result of different modes propagating within a MMF (multimode fiber). The fiber optic cable supports many modes because the core is of a larger diameter than SMF (single mode fibers). Single mode fibers tend to be used more commonly now due to decreased attenuation and dispersion over long distances, although MMF fibers can be cheaper over short distances.

Let's analyze modal dispersion. When the core is sufficiently large (for comparison, the core of an SMF is only around 8.5 microns), light enters at different angles, creating different modes. Because these modes undergo total internal reflection at different angles, their path lengths, and hence their transit times, differ, and over long distances this can have a huge effect; in many cases the received signal is completely unrecognizable. This type of dispersion limits the bandwidth of the signal. Often GRIN (graded-index) fibers are employed to reduce it by gradually decreasing the refractive index of the core toward its outer edge. As we have learned, the refractive index directly influences the propagation velocity of light: it is defined as the ratio of the speed of light in vacuum to the speed of light in the medium, i.e. it is inversely proportional to the speed in the medium (in this case silica glass).
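As a rough sketch of how large the effect is (the step-index profile and the index values are assumptions), the worst-case delay spread is between the axial ray and the ray bouncing at the critical angle:

```python
C0 = 3e8   # speed of light in vacuum, m/s

def modal_delay_spread_s(n1, n2, length_m):
    """Worst-case intermodal delay spread for a step-index multimode fiber:
    axial ray vs. the ray at the critical angle (n1 = core, n2 = cladding)."""
    return (n1 * length_m / C0) * (n1 - n2) / n2

dt = modal_delay_spread_s(1.48, 1.46, 1000.0)   # 1 km of fiber
print(dt * 1e9, "ns per km")                    # tens of ns per km
```

Tens of nanoseconds of spread per kilometer is what caps the usable bit rate of a long MMF link.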

To mitigate the effects of intermodal distortion in multimode fibers, pulses are lengthened so that the delayed components of the different modes still overlap within a single pulse, or, better yet, single-mode fiber is used where available.

The next type of dispersion is chromatic dispersion. All lasers suffer from this effect because no laser emits a single frequency; different wavelengths therefore propagate at different speeds. Chirped Bragg gratings are sometimes employed to compensate for this effect. Doped-fiber and solid-state lasers tend to have much narrower linewidths than semiconductor diode lasers and therefore suffer less chromatic dispersion, although semiconductor lasers have advantages such as lower cost and smaller size.

Another dispersion type is PMD (Polarization mode dispersion) which is caused by different polarizations travelling at different speeds within a fiber. Generally, these travel at the same speed however spreading of pulses can be caused by imperfections in the material.

For SMF it is important to cover waveguide dispersion. Since the cladding is doped differently than the core, the core has a higher refractive index than the cladding (doping with fluorine lowers the refractive index; doping with germanium raises it). As we know, a lower refractive index indicates a faster speed of propagation. Although most of the light stays within the core, some travels in the cladding, where it propagates faster than the light in the core; over long distances, this difference in propagation velocities leads to greater dispersion.

• RF Over Fiber Links June 28, 2020

The basic principle of an RF over Fiber link is to convey a radio frequency electrical signal optically through modulation and demodulation techniques. This has many advantages, including reduced attenuation over long distances, increased bandwidth capability, and immunity to electromagnetic interference. In fact, RF over fiber links are essentially limitless in propagation distance, whereas coaxial transmission lines tend to be limited to around 300 ft due to their higher attenuation over distance.

The simple RFoF link consists of an optical source, an optical modulator, fiber optic cable, and a receiver.

The RF signal modulates the optical carrier (at frequency f_opt), producing sidebands at the sum and difference of the RF and optical frequencies. These beat against the carrier in the photodetector to reproduce an electrical RF signal. The above picture shows the amplitude-modulation, direct-detection method. Impedance-matching circuitry, along with amplifiers, is generally included at the modulator and demodulator ports.

Before designing an RFoF link, it is essential to ask whether bypassing a transmission line is justified in the first place. Will the system benefit from lower size and weight, or from immunity to electromagnetic interference? Is a wide bandwidth required? If not, this sort of link may not be necessary. The maximum SWaP of all the hardware at the two ends of the link must also be determined, along with the temperature (or even pressure, humidity, or vibration levels) the link will be exposed to. Finally, the RF bandwidth and the propagation distance must be considered.

The following figures of merit can be used to quantify an RFoF link:

Gain

In dB, this is defined as the signal out (in dBm) minus the signal in (in dBm), or 10·log(g), where g is the small-signal gain (the gain for which the amplitude is small enough that there is no amplitude compression).

Noise Figure

For RADAR and detection systems where the input signal strength is unknown, Noise Figure is more important than SNR. NF is the factor by which SNR degrades from input to output and is given as N_out − kTB − Gain (all in dB scale).

Dynamic Range

It is known that the noise floor defines the lower end of the dynamic range; the upper end is limited by spurious frequencies or amplitude compression. The dynamic range is the difference between the highest and lowest acceptable input powers.

For example, if defined in terms of full compression, the dynamic range (in dB) is S_in,max − MDS, where MDS is the minimum detectable signal power.
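As a sketch with assumed link numbers (the bandwidth, noise figure, minimum SNR, and compression point are all illustrative, and MDS here is taken as noise floor + NF + minimum SNR):

```python
noise_floor_dbm = -114.0   # kTB in a 1 MHz bandwidth at 290 K
nf_db = 6.0                # link noise figure (assumed)
snr_min_db = 3.0           # minimum usable SNR (assumed)
s_in_max_dbm = 10.0        # input power at full compression (assumed)

mds_dbm = noise_floor_dbm + nf_db + snr_min_db   # minimum detectable signal
dynamic_range_db = s_in_max_dbm - mds_dbm
print(mds_dbm, dynamic_range_db)   # -105.0 dBm, 115.0 dB
```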

Scattering Parameters

Scattering parameters are frequency-dependent parameters that define the loss or gain between ports. For a two-port system, they form a 2×2 matrix. In most fiber optic links, the backwards isolation S_12 is essentially zero, because the detector and modulator cannot perform each other's functions. Generally, the return losses at ports 1 and 2 are what are specified to meet the system requirements.

• Erbium Doped Fiber Amplifiers (EDFA) June 27, 2020

The above figure demonstrates the attenuation of optical fibers relative to wavelength. It can be seen that Rayleigh scattering is more prevalent at higher frequencies. Rayleigh scattering occurs where minute variations in the density or refractive index of the fiber are present due to manufacturing processes. Scattered light either continues in the direction of propagation within the core or it does not; if it does not, the result is increased attenuation. This mechanism accounts for 96% of attenuation in optical fibers. It can also be noted that lattice absorption varies wildly with the wavelength of light. From the graph, it is apparent that at 1550 nm this value (and also Rayleigh scattering) is quite low. It is for this reason that 1550 nm is a common wavelength of propagation in silica glass optical fibers. Although this wavelength allows for greater options in design, shorter wavelengths (such as 850 nm) are also used when the propagation distance is short. However, 1550 nm remains the common wavelength due to the development of dispersion-shifted fibers as well as the EDFA (erbium-doped fiber amplifier).

EDFAs operate around the 1550 nm region (1530 to 1610 nm) and work on the principle of stimulated emission, in which a photon is emitted within an optical device when another photon causes electrons and holes to recombine. Stimulated emission creates a photon of the same wavelength, traveling in the same direction (coherent light). The EDFA acts as an amplifier, boosting the intensity of light with a heavily erbium-doped core. As discussed earlier, the lowest power loss for silica fibers tends to occur at 1550 nm, which is the wavelength at which this stimulated emission occurs. The pump excitation, however, occurs at 980 or 1480 nm, wavelengths shown to have higher loss.

The advantages of the EDFA are high gain and the ability to operate in the C and L bands of light. It is also not polarization dependent and has low distortion at high frequencies. The major disadvantage is the requirement of optical pumping.

• RSoft Tutorials 9. Using Real Materials and Multilayer Structures June 26, 2020

Rsoft comes with a number of libraries for real materials. To access these materials, we can add them at any time from the Materials button on the side. However, to build a Multilayer structure that can utilize many materials, select “Multilayer” under 3D Structure Type.

Now, select “Materials…” to add desired materials. Move through the RSoft libraries to choose a material and use the button in the top right (not the X button, silly) to use the material in the project. Now select OK to be brought back to the Startup Window, where we must now design a layered structure using these materials. Note that while building the layers, you can add more materials.

Selecting “Edit Layers…” on the Startup window brings you to the following window. Here, you can define your layers by selecting “New Layer”. Enter the height and material of the layer, select “Accept Layer”, and repeat the process until the structure is finished. Select OK when done, and select OK on the Startup window if all other settings are complete. This is my structure. Note that my layer heights add up to 1. Remember what the sizes of your layers are.

Now, design the shape of the structure. I’ve made a rectangular waveguide. It is also important to consider where the beam should enter the structure. By default, the beam is focused across the entire structure. In the case where a particular layer is meant to be a waveguide, the beam should be reduced in size. If you remember the sizes of the layers, it will not be difficult to aim the beam at a particular layer. For my structure, I will aim my beam at the 0.2 GaInAsP layer. The positioning, width, height, angle and more of the launch beam can be edited in the “Launch Parameters” window, accessible through “Launch Fields” on the right side.

Finally, run a simulation with your structure!

• Rsoft Tutorials 8. Air Gaps June 25, 2020

There are cases where you may want to simulate a region of air in between two components. A simple way of approaching this task is by creating a region with the same refractive index as air. The segment between the two waveguides (colored in gray) will serve as the “air” region. Right-click on the segment to define properties and, under “Index Difference”, choose the value to be 1 minus the background index.

Properties for the segment:

Symbol Table Editor:

Notice that in the “air” region, the pathway monitor detects the efficiency to be zero; nonetheless, the beam reconvenes in the waveguide, if the gap is short and the waveguide continues at the same angle, though with losses.

• Rsoft Tutorials 7. Index Grating June 24, 2020

Index grating is a common method to alter the frequency characteristics of light. In Rsoft, a graded index component is found under the “Index Taper” tab when right-clicking on a component. By selecting the tab “Tapers…”, one can create a new index taper.

Here, the taper is called “User 1” and defined by an equation step(M*z), with z being the z-coordinate location.

Selecting “Test” on the User Taper Editor will plot the index function of the tapered component:

The index contour is plotted below:

Here, the field pattern:

Light contour plot:

• Rsoft Tutorials 6. Multiple Launch Fields, Merging Parts June 23, 2020

Launch Fields define where light will enter a photonic device in Rsoft CAD. An example that uses multiple launch fields is the beam combiner.

On the sidebar, select “Edit Launch Fields”. To add a new launch, select New and choose the pathway. A waveguide will be selected by default; moving the launch to a new location will place it elsewhere. Input a value other than “default” to change the location and other beam parameters.

Choosing “View Launch” will plot the field amplitude of the launches. For the plot below, the third launch was removed.

Merging Waveguides

Right-clicking on the structure will give the option to choose the “Combine Mode.” Be sure that Merge is selected to allow waveguides to combine.

• The Pockels Effect and the Kerr Effect June 22, 2020

The electro-optic effect essentially describes the phenomenon that, with an applied voltage, the refractive index of a material can be altered. The electro-optic effect lays the groundwork for many optical and photonic devices. One such application is the electro-optic modulator.

If we consider a waveguide or even a lens, such as demonstrated through problems in geometrical optics, we know that the refractive index can alter the direction of propagation of a transmitted beam. A change in refractive index also changes the speed of the wave. The change of light propagation speed in a waveguide acts as phase modulation. The applied voltage is the modulated information and light is the carrier signal.

The electro-optic effect comprises both a linear and a non-linear component. The full form of the electro-optic effect equation is as follows:
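Written out with the linear coefficient r and quadratic coefficient P used in the discussion below, the change in refractive index with applied field E is:

```latex
\Delta n(E) = rE + PE^{2}
```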

The above formula means that, with an applied field E, the resultant change in refractive index is composed of the linear Pockels effect rE and a non-linear Kerr effect PE^2.

The Pockels Effect is dependent on the crystal structure and symmetry of the material, along with the direction of the electric field and light wave.

• Rsoft Tutorials 5. Pathway Monitoring (BeamPROP) June 21, 2020

When stringing multiple parts together, it is important to check a lightwave system for losses. The BeamPROP simulator, part of the Rsoft package, will display any losses in a waveguide pathway. Here we have an example of an S-bend simulation. There appear to be losses in a few sections.

Here, the design for the S-bend waveguide has a few locations that are leaking, as indicated by the BeamPROP simulation.

The discontinuities are shown below, which are a possible source of loss:

After fixing these discontinuities, the waveguide can be simulated again using BeamPROP. In fact, the losses are not eliminated: this remaining loss is called bending loss.

Bending loss is an important topic for waveguides and becomes critical in Photonic Integrated Circuits (PICs).

• Rsoft Tutorials 4. Multi-Layer Profiles June 20, 2020

Rsoft has the ability to create multilayered devices, as was done previously using ATLAS/TCAD. Rather than defining structures through scripts as is done with ATLAS, information about the layers can be defined in tables that are accessed in Rsoft CAD.

To begin adding layers to a device, such as a waveguide, first draw the device in Rsoft CAD. To design a structure with a substrate and rib waveguide, select Rib/Ridge 3D Structure Type in the Startup Window.

Next, design the structure in Rsoft CAD.

The Symbol Table Editor is needed now not only to define the size of the waveguide, but also the layer properties. The materials for this waveguide will be defined simply as local layers with a user-defined refractive index. Later, we will discuss importing layer libraries to use real materials. To get used to the parameters typically needed for this exercise, the layer properties need not all be defined before entering the Layer Table Editor.

The Layer Table Editor is found on the Rsoft CAD sidebar. First, assign the substrate layer index and select New Layer. The layer name, index, and height are defined for this exercise.

After layers have been chosen, the mode profile can be simulated.

• Rsoft Tutorials 3. Fiber Structures and BeamPROP Simulation Animations June 19, 2020

An interesting feature of BeamPROP and the other simulators in the Rsoft package is that the results can be displayed as a running animation. The following is the result of a simulation of an optical fiber: BeamPROP animates the transverse field as a function of the z parameter, which runs along the length of the fiber.

To design an optical fiber component with Rsoft CAD, select under 3D structure type, “Fiber” when making a new project.

To build a cylinder that will be the optical fiber, select the cylinder CAD tool (shown below) and use the tool to draw in the axis in which the base of the cylinder lies.

Dimensions of the fiber can be specified using the symbol tool discussed previously and by right-clicking the object to assign these values. Note that animations of mode patterns through long waveguides are not only available for cylindrical fibers. Fibers may consist of a variety of shapes, and multiple pathways may be included. Simulations can indicate if a waveguide has potential leaks in it or show the interaction of light with a new surface.

• Rsoft Tutorials 2. Simulating a Waveguide using BeamPROP and Mode Profile June 18, 2020

BeamPROP is a simulator found in the Rsoft package. Here, we will use BeamPROP to calculate the field distributions of our tapered waveguides. Other methods built within Rsoft CAD will also be explored.

### Tapered Waveguide

The tapered waveguide that we are simulating is found below. We will use the BeamPROP tool to simulate the field distributions in the waveguide. We will also use the mode calculation tool to simulate the mode profile at each end of the waveguide.

BeamPROP Simulation Results

Mode Profile Simulation

The mode simulation tool is found on the sidebar:

Before choosing the parameters of the Mode Simulator, let’s first take a look at the coordinates of the beginning and end of the waveguide. This dialog is found by right-clicking on the component. The window shows that the starting point along the z axis is 1 and the ending point is 43 (the units are actually micrometers, by the way). We will choose locations along the waveguide close to the ends of the waveguide at z equals 1.5 and 42.5.

Parameter selection window:

Results at z = 1.5:

Results at z = 42.5:

• Rsoft Tutorials 1. Getting Started with CAD (tapered waveguide) June 17, 2020

Rsoft is a powerful tool for optical and photonic simulations and design. Rsoft and Synopsys packages come with a number of different tools and simulators, such as BeamPROP, FullWAVE and more. There are also other programs typically found with Rsoft, such as OptoDesigner, LaserMOD and OptSim. Here we focus on the very basics of using the Rsoft CAD environment. I am using a student version, which is free for all students in the United States.

New File & Environment

When starting a new file, the following window is opened. We can select the simulation tools needed, the refractive index of the environment (“background index”) and other parameters. Under dimensions, “3D” is selected.

The 3D environment is displayed:

Symbol Editor

On the side bar, select “Edit Symbols.” Here we can introduce a new symbol and assign it a value using “New Symbol,” filling out the name and expression and selecting “Accept Symbol.”

Building Components

Next we will draw a rectangle, which will be our waveguide.  Select the rectangular segment below:

Now, select the bounds of the rectangle. See example below:

Editing Component Parameters

Right-click on the component to edit parameters. Here, we will now change the refractive index and the length of the component. The Index Difference tab is the difference in refractive index compared to the background index, which was defined when we created the file. We’ll set it to 0.1, and since our background index was 1.0, that means the refractive index of the waveguide is 1.1. Alternatively, the value delta that was in the box may be edited from the Symbol menu. We also want to use our symbol “Length” to define the length of our waveguide. We also want this waveguide to be tapered, so the ending vertex will be set to width*4. Note that width may also be edited in the symbol list.

Here, we have a tapered waveguide:

• Methods of Calculation for Signal Envelope June 16, 2020

The envelope of a signal is an important concept. When a signal is modulated, meaning that information is combined with or embedded in a carrier signal, the envelope follows the shape of the signal at its uppermost and lowermost edges.

There are a number of methods for calculating an envelope. When given an in-phase and quadrature signal, the envelope is defined as:

E = sqrt(I^2  + Q^2).

This envelope, if plotted, will trace the exact upper or lower edge of the signal. Whether an exact envelope is sought depends on the level of detail required for the application.
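A minimal sketch of the E = sqrt(I^2 + Q^2) calculation, applied to a hypothetical single tone where the envelope should come out constant:

```python
import math

def iq_envelope(i_samples, q_samples):
    """Pointwise envelope E = sqrt(I^2 + Q^2) for paired I/Q samples."""
    return [math.hypot(i, q) for i, q in zip(i_samples, q_samples)]

# For a pure tone, I = cos and Q = sin, so the envelope is constant 1
n = 1000
i_sig = [math.cos(2 * math.pi * 5 * k / n) for k in range(n)]
q_sig = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]
env = iq_envelope(i_sig, q_sig)
```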

Here, this data was collected as a return from a fiber laser source. We seek to identify this section of the data to determine if the return signal fits the description out of a number of choices. The exact envelope using the above formula is less useful for the application.

The MATLAB envelope function is used to calculate the envelope:

[upI, lowI] = envelope(I,x,'peak');

And this is plotted below with the I and Q signals:

Here are two envelopes depicted without the signal shown. By selecting the range of interpolation, this envelope can be made smoother. Typically it is less desirable for an envelope to contain so many carrier oscillations, as in the following, where the range of interpolation is x = 1000.

Further methods involving the use of filters may also be of consideration. Below, the I and Q signals are taken through a bandpass filter (to ensure that the data is from the desired frequency range) and finally a lowpass filter is applied to the envelope to remove higher frequency oscillation.
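MATLAB's 'peak' option interpolates between signal peaks; as a rough stand-in for how widening the interpolation range smooths the result, here is a hedged moving-average sketch (not the same algorithm, and the window length plays the role of the interpolation range):

```python
def smooth(envelope, window):
    """Moving-average smoothing of an envelope: each output sample is
    the mean of the samples within 'window' points of it."""
    half = window // 2
    out = []
    for k in range(len(envelope)):
        lo, hi = max(0, k - half), min(len(envelope), k + half + 1)
        out.append(sum(envelope[lo:hi]) / (hi - lo))
    return out
```

A wider window suppresses more of the residual carrier oscillation, at the cost of rounding off genuine features of the envelope.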

• Receiver Dynamic Range June 15, 2020

Dynamic range is a fairly general term for the ratio (sometimes called the DNR ratio) of the highest acceptable value to the lowest acceptable value of some quantity. It can be applied in a variety of fields, most notably electronics and RF/Microwave applications, and is typically expressed on a logarithmic scale. Dynamic range is an important figure of merit because weak signals often need to be received as well as stronger ones, all while rejecting unwanted signals.

Due to spherical spreading of waves and the two-way nature of RADAR, losses experienced by the transmitted signal are proportional to 1/(R^4). This leads to great variance in return levels across the dynamic range of the system. For RADAR receivers, mixers and amplifiers contribute the most to the system’s dynamic range and Noise Figure (also in dB). The lower end of the dynamic range is limited by the noise floor, which accounts for the accumulation of unwanted environmental and internal noise in the absence of a signal. The total noise floor of a receiver can be estimated by adding the noise figure dB levels of each component. Applying a signal will raise the level past the noise floor, and the upper end is limited by the saturation of the amplifier or mixer. For a linear amplifier, the upper end is the 1 dB compression point: below it, the output increases by a constant dB for a given dB increase at the input; past the 1 dB compression point, the amplifier deviates from this pattern.

The other points in the figure are the third- and second-order intercept points. Generally, the third-order intercept point is the most quoted on data sheets, as third-order distortions are most common. Extrapolating as if the device remained perfectly linear, this is the point where the third-order distortion line intersects the line of constant slope. These intermodulation distortions generate the terms 2f_2 – f_1 and 2f_1 – f_2, so in a sense the third-order intercept point is a measure of linearity. As shown in the figure, the third-order distortion has a slope of 3:1. The point where that line intercepts the linear output is (IIP3, OIP3). This intercept point tends to be used as more of a rule of thumb, since the system is assumed to be “weakly nonlinear,” which does not necessarily hold up in practice.

Often manual gain control or automatic gain control can be employed to achieve the desired receiver dynamic range. This is necessary because there are such a wide variety of signals being received. Often the dynamic range can be around 120 dB or higher, for instance.

Another term used is spurious-free dynamic range (SFDR). Spurs are unwanted frequency components of the receiver which are generated by the mixer, ADC or any nonlinear component. The quantity represents the distance between the fundamental tone and the largest spur.
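A sketch tying these ideas together, using the common rule-of-thumb relation SFDR = (2/3)(IIP3 – noise floor) and a 290 K thermal noise density of –174 dBm/Hz; the receiver numbers are hypothetical:

```python
import math

def noise_floor_dbm(nf_db: float, bandwidth_hz: float) -> float:
    """Receiver noise floor: -174 dBm/Hz thermal density (290 K)
    plus 10*log10(bandwidth) plus the receiver noise figure."""
    return -174 + 10 * math.log10(bandwidth_hz) + nf_db

def sfdr_db(iip3_dbm: float, nf_db: float, bandwidth_hz: float) -> float:
    """Spurious-free dynamic range from the third-order intercept:
    SFDR = (2/3) * (IIP3 - noise floor), all on the dB scale."""
    return 2.0 * (iip3_dbm - noise_floor_dbm(nf_db, bandwidth_hz)) / 3.0

# Hypothetical receiver: IIP3 = -10 dBm, NF = 5 dB, B = 1 MHz
sfdr = sfdr_db(-10, 5, 1e6)  # noise floor -109 dBm -> SFDR 66 dB
```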

• Semiconductor Growth Technology: Molecular Beam Epitaxy and MOCVD June 14, 2020

The development of advanced semiconductor technologies presents one important challenge: fabrication. Two methods of fabrication being used in bandgap engineering are Molecular Beam Epitaxy (MBE) and Metal Organic Chemical Vapour Deposition (MOCVD).

Molecular Beam Epitaxy uses a high vacuum to fabricate compound semiconductor materials. Atoms or molecules containing the desired elements are directed at a heated substrate. Molecular Beam Epitaxy is highly sensitive. The vacuums make use of diffusion pumps or cryo-pumps: diffusion pumps for gas-source MBE and cryo-pumps for solid-source MBE. Effusion cells in the MBE system allow the flow of molecules through small holes without collision. The RHEED source in MBE stands for Reflection High-Energy Electron Diffraction; by reflecting high-energy electrons off the surface, it registers information about the epitaxially grown structure, such as surface smoothness and growth rate. The growth chamber is heated to 200 degrees Celsius, while the substrate temperatures are kept in the range of 400-700 degrees Celsius.

MBE is not suitable for large scale production due to the slow growth rate and higher cost of production. However, it is highly accurate, making it highly desired for research and highly complex structures.

MOCVD is a more popular method for growing layers on a semiconductor wafer. MOCVD is primarily chemical: elements are deposited from complex chemical compounds containing the desired elements, and the remains are evaporated. MOCVD does not use a high vacuum. This process can be used for a large number of optoelectronic devices with specific properties, including quantum wells. High-quality semiconductor layers at the micrometer level are developed using this process. MOCVD produces a number of toxic byproducts, including AsH3 and PH3.

MOCVD is recommended for simpler devices and for mass production.

• Discrete Time Filters: FIR and IIR June 13, 2020

There are two basic types of digital filters: FIR and IIR. FIR stands for Finite Impulse Response and IIR stands for Infinite Impulse Response. The output of any discrete-time filter can be described by a “difference equation,” similar to a differential equation but containing no derivatives. An FIR filter is described by a moving average, or weighted sum of past inputs. IIR filter difference equations are recursive: they include both a weighted sum of past inputs and a weighted sum of past outputs.

As shown, this specific IIR filter difference equation contains an output term (the first term on the right-hand side).

The FIR has a finite impulse response because it decays to zero in a finite length of time; in the discrete-time case, the impulse response is the output of the system for a Kronecker delta input. In the IIR case, the impulse response decays but never reaches exactly zero. The FIR filter's system function H(z) has poles only at z = 0; the IIR filter is more flexible and can have poles at other locations on the pole-zero plot.

The following is a block diagram of a two stage FIR filter. As shown, there is no recursion but simply a weighted sum. The triangles represent the values of the impulse response at a particular time. These sort of diagrams represent the difference equations and can be expressed as the output as a function of weighted sum of the inputs. These z inverse blocks could be thought of as memory storage blocks in a computer.

In contrast, the IIR filter contains recursion or feedback, as past outputs are added back into the input side of the difference equation. This feedback leads to a nontrivial denominator in the transfer function of the filter. The filter's stability can be tested by observing the pole-zero plot of this transfer function in the z-domain.
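The two difference equations can be sketched directly; the coefficients below are hypothetical, chosen only to show the finite versus infinite impulse responses:

```python
def fir(x, h):
    """FIR difference equation: y[n] = sum_k h[k] * x[n-k],
    a weighted sum (moving average) of past inputs only."""
    y = []
    for n in range(len(x)):
        y.append(sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0))
    return y

def iir_first_order(x, b0, a1):
    """First-order IIR difference equation: y[n] = b0*x[n] + a1*y[n-1],
    with feedback from the past output y[n-1]."""
    y, prev = [], 0.0
    for xn in x:
        prev = b0 * xn + a1 * prev
        y.append(prev)
    return y

# Impulse responses: the FIR's dies out after len(h) samples;
# the IIR's decays geometrically but never reaches exactly zero.
impulse = [1.0] + [0.0] * 7
fir_out = fir(impulse, [0.5, 0.3, 0.2])
iir_out = iir_first_order(impulse, 1.0, 0.5)
```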

Overall, IIR filters have an efficiency advantage over FIR filters in implementation: a lower-order IIR filter can achieve the same result as a higher-order FIR filter. A lower-order filter is less computationally expensive and hence preferable, since a higher-order filter requires more operations. However, FIR filters have a distinct advantage in ease of design. This mainly comes into play when designing filters with linear phase (constant group delay with frequency), which is very hard to achieve with an IIR filter.

• Heterostructures & Carrier Recombination June 12, 2020

A heterojunction is a region where two different materials interact. A heterostructure is a combination of two or more such materials. Here, we will explore several interesting cases.

### AlGaAs-InGaAs-AlGaAs

The AlGaAs-InGaAs interaction is interesting due to the difference in energy bandgap levels. It was found that AlGaAs has a higher bandgap level, while InGaAs has a lower bandgap. By layering these two materials together with a stark difference in bandgap levels, the two materials make for an interesting demonstration of a heterostructure.

The layering of a smaller-bandgap material between a wider-bandgap material has the effect of trapping both electrons and holes. As shown on the right side of the picture below, the center region, made of InGaAs, exhibits high concentrations of both electrons and holes. This leads to a higher rate of carrier recombination, which can generate photons.

Here, the lasing profile of the material under bias:

### GaAs-InP-GaAs

### InGaAsP-InGaAs-InP

A commonly used group of materials is InGaAsP, InGaAs and InP. Unlike the above arrangements, these materials may be lattice-matched. Lattice-matching may be explored in depth later on. Simulations suggest low or non-existent recombination rates. Although this is a heterostructure, one can see that there are no jagged or sudden drastic movements of the conduction and valence bands with respect to each other that would create a discontinuity resulting in a high recombination rate.

• The Acoustic Guitar – Intro June 11, 2020

We will continue our study of sound by briefly analyzing the acoustic guitar: an instrument that uses certain physical properties to “amplify” sound (not really true, as no energy is technically added) acoustically rather than through electromagnetic induction or piezoelectric means (piezoelectric pickups are common on acoustic-electric guitars, however). A guitar can be tuned many ways, but standard (E standard) tuning is E-A-D-G-B-E across the six strings from top to bottom, or thickest string to thinnest. The tuning can be changed on the fly, which differentiates the guitar from something like a harp, on which the tension of the strings cannot be adjusted.

Just as the tuning pegs on a guitar can be loosened or tightened to change the tension, the fretting hand can be used to change the length of the string. Both of these affect the frequency, or perceived pitch. In fact, two other qualities of the string (density and thickness) also affect the frequency. These can be related through Mersenne’s rule:
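A standard statement of Mersenne's rule for the fundamental frequency of a string of length L, tension T, and linear mass density μ is:

```latex
f_{1} = \frac{1}{2L}\sqrt{\frac{T}{\mu}}
```

For a cylindrical string, μ grows with the material density and the square of the diameter, which gives the density and diameter dependence described next.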

As shown, the length and density of the string are inversely proportional to the pitch. The tension is proportional, so tightening the string will tune it up. The frequency is also inversely proportional to the string diameter.

The basic operation of the guitar is that plucking or strumming strings will cause a disturbance in the air, displacing air particles and causing buildups of pressure “nodes” and “antinodes”. This leads to the creation of a longitudinal pressure wave which is perceived by the human ear as sound. However, a string on its own does not displace much air, so the rest of the guitar is needed. The soundboard (top) of the guitar acts as an impedance matching network between the string and air by increasing the surface area of contact with the air. Although this does not amplify the sound since no external energy is applied, it does increase the sound intensity greatly. So in a sense the soundboard (typically made of spruce or a good transmitter of sound) can be thought of as something like an electrical impedance matching transformer. The acoustic guitar also employs acoustic resonance in the soundhole. As with the soundboard, the soundhole also vibrates and tends to resonate at lower frequencies. When the air in the soundhole moves in phase with the strings, sound intensity increases by about 3 dB. So basically, the sound is being coupled from the string to the soundboard, from the soundboard to the soundhole and from both the soundhole and soundboard to the external air. The bridge is the part of the guitar that couples the string vibration to the soundboard. This creates a reasonably loud pressure wave.

In terms of wood, the typical wood used for guitar making has a high stiffness-to-weight ratio. Spruce has an excellent stiffness-to-weight ratio, as it has a high modulus of elasticity and moderately low density. Rosewood tends to be used for the back and sides of a guitar. The main thing to note here is that the guitar is made of wood because wood does not carry vibrations well; as a result, the air echoes within the guitar instead, creating a sound that is pleasant to the ear. Another factor, of course, is cost.

Strings vibrate at a fundamental frequency as well as harmonics and overtones, which lead to a distinct sound. If you fret a string at the twelfth fret, this is the halfway point of the string; this gives the first overtone, at double the frequency. It is important to note that the frets of a guitar get closer together toward the bridge. The distances can be calculated since c = fλ is constant. Each successive note is higher in pitch by a factor of about 1.0595 (the twelfth root of two), so the first fret is placed at the scale length divided by 1.0595, measured from the bridge. This continues on, with 1.0595 raised to a higher and higher power based on which fret is being observed.
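The fret-spacing arithmetic above can be sketched directly (the 648 mm scale length in the example is a hypothetical value):

```python
def fret_distance(scale_length_mm: float, fret: int) -> float:
    """Distance from the bridge to a given fret: the open-string scale
    length divided by 2**(fret/12), i.e. by ~1.0595 per fret."""
    return scale_length_mm / (2 ** (fret / 12))

# The 12th fret (first overtone) sits at exactly half the scale length
halfway = fret_distance(648.0, 12)  # 324.0
```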

• Materials & Photogeneration Rate at 1550 nm June 10, 2020

We now seek to understand how different materials respond and interact with light. Photogeneration is the rate at which electrons are created through the absorption of light.

A program is built in ATLAS TCAD to simulate a beam incident on a block of material. A PN junction is used, similar to previous iterations. An example of the code for the Photogeneration Simulator is provided at the end of this article.

The subject of photogeneration certainly warrants a more thorough examination than is provided here. Consider this an introduction and initial exploration.

### GaAs-InP-GaAs PN Junction

Here we see that a cross section of the unintentionally doped InP region, sandwiched inside a GaAs PN junction, exhibits a level of photogeneration, while the GaAs regions do not.

Adding more layers of other materials, as well as applying a bias to the structure, we notice that the InP region still exhibits the highest (indeed the only) level of photogeneration of the materials tested in this condition. Interestingly, this structure emits light under the conditions tested.

Also consider that a photogeneration effect may not be desired. If, for instance, a device is supposed to act as a waveguide, there is no benefit to photogeneration, let alone to the beam losses that result from it.

### InGaAsP-InP-InGaAs Heterostructure

A common set of materials for use in photodetectors is InGaAsP, InP and InGaAs. This particular structure features a simple n-doped InGaAsP, unintentionally doped InP and p-doped InGaAs. The absorption of InP was already demonstrated above. InGaAs proves also to exhibit absorption at 1550 nm.

go atlas
Title Photogeneration Simulator

#Define the mesh
mesh auto
x.m l = -2 Spac=0.1
x.m l = -1 Spac=0.05
x.m l = 1 Spac=0.05
x.m l = 2 Spac=0.1

#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
region num=3 bottom thick = 0.5 material = InP NY = 10
region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

#Electrode specification
elec num=1 name=anode x.min=-1.0 x.max=1.0 top
elec num=2 name=cathode x.min=-1.0 x.max=1.0 bottom

#Gate Metal Work Function
contact num=2 work=4.77

models region=1 print conmob fldmob srh optr fermi
models region=2 print conmob fldmob srh optr fermi
models material=GaAs fldmob srh optr fermi print \
laser gainmod=1 las_maxch=200. \
las_xmin=-0.5 las_xmax=0.5 las_ymin=0.4 las_ymax=0.6 \
photon_energy=1.43 las_nx=37 las_ny=33 \
lmodes las_einit=1.415 las_efinal=1.47 cavity_length=200

beam num=1 x.origin=0 y.origin=4 angle=270 wavelength=1550 min.window=-1 max.window=1

output band.param ramptime TRANS.ANALY photogen opt.intens con.band val.band e.mobility h.mobility recomb u.srh u.aug u.rad flowlines

method newton autonr trap maxtrap=6 climit=1e-6

#SOLVE AND PLOT
solve init
SOLVE B1=1.0
outf=diode_mb1.str master
tonyplot diode_mb1.str

LOG outf=electrooptic1.log
solve vanode = 0.5
solve vanode = 1.0
solve vanode = 1.5
solve vanode = 2.0
solve vanode = 2.5
save outfile=diode_mb2.str
tonyplot diode_mb2.str
tonyplot electrooptic1.log
quit

• Microstrip Antenna – Cavity Model June 9, 2020

The following is an alternative modelling technique for the microstrip antenna, which is somewhat similar to the analysis of acoustic cavities. As with all cavities, boundary conditions are important. For the microstrip antenna, the model is used to calculate the radiated fields of the antenna.

Two boundary conditions will be imposed: PEC and PMC. At the PEC, the tangential component of the E field and the normal component of the H field are zero. For the PMC, the opposite is true.

This supports the TM (transverse magnetic) mode of propagation, which means the magnetic field is orthogonal to the propagation direction. In order to use this model, a time independent wave equation (Helmholtz equation) must be solved.

The solution to any wave equation will have wavelike properties, which means it will be sinusoidal. The solution looks like:

Integer multiples of π satisfy the boundary conditions because the vector potential must be maximum at the x, y and z boundaries. The mode integers cannot all be zero simultaneously. The resonant frequency can be solved as shown:

The units work out, as the square root of the product of the permeability and permittivity in the denominator correspond to the velocity of propagation (m/s), the units of the 2π term are radians and the rest of the expression is the magnitude of the k vector or wave number (rad/m). This corresponds to units of inverse seconds or Hz. Different modes can be solved by plugging in various integers and solving for the frequency in Hz. The lowest resonant mode is found to be f_010 which is intuitively true because the longest dimension is L (which is in the denominator). The f_000 mode cannot exist because that would yield a trivial solution of 0 Hz frequency. The field components for the dominant (lowest frequency) mode are given.
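As a quick numerical check of this, here is a short Python sketch of the resonant-frequency formula. The dimensions and dielectric constant below are hypothetical values chosen for illustration, not taken from the post:

```python
import math

def f_mnp(m, n, p, h, L, W, eps_r, mu_r=1.0):
    """Resonant frequency (Hz) of mode mnp for a cavity of height h,
    length L and width W (meters), filled with a dielectric eps_r."""
    c = 2.998e8  # speed of light in vacuum, m/s
    k = math.sqrt((m * math.pi / h)**2 + (n * math.pi / L)**2 + (p * math.pi / W)**2)
    return c * k / (2 * math.pi * math.sqrt(mu_r * eps_r))

# Hypothetical patch: L = 4 cm, W = 3 cm, h = 1.6 mm, eps_r = 2.2
L, W, h, eps_r = 0.04, 0.03, 0.0016, 2.2
f010 = f_mnp(0, 1, 0, h, L, W, eps_r)  # dominant mode: c / (2*L*sqrt(eps_r)), ~2.53 GHz
```

Since L is the longest dimension, f_010 comes out lower than the other non-trivial modes, as the post argues.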

• HF Antenna Matched Network for a Radio Broadcasting Station June 8, 2020

The goal of this demonstration is to explain the importance of a matched network and the role of transmission lines (coax) for an HF Antenna matched network. This network is designed for the 20-meter band in the HF domain of the radio frequency region of the electromagnetic spectrum.

Suppose you have an HF antenna (the load) positioned on a tower. The tower height is a consideration, as a feed coax line will be connected to the antenna from (roughly) the bottom of the tower. Secondly, another coax line will be connected from the base of the tower to the radio station.

The reflection coefficient is the measure of how well a network is impedance matched. A matched network means that loss will be minimal. SimSmith is a free tool that is useful for Smith chart matching. In SimSmith, the load (left), transmission lines (as mentioned in the previous paragraph) and the radio are plotted on the Smith chart.

The length of T1 was chosen as 18.23 feet, which gives a clear shot for an impedance match towards the center using a stub transmission line.

We now add a shorted stub between both coax lines and adjust the length of the excess line until the impedance is matched at the radio station.

As shown above, the excess length on the stub is about 6′. Plotting the SWR shows that the system is matched well across the whole band, meaning this station is set up well for HF broadcasting by extra class amateur radio operators.

• Microstrip Patch Antennas Introduction – Transmission Line Model June 7, 2020

Microstrip antennas (or patch antennas) are extremely important in modern electrical engineering for the simple fact that they can be printed directly onto a circuit board. This makes them necessary for things like cellular antennas for GPS, communication with cell towers and Bluetooth/WiFi. Patch antennas are notoriously narrowband, especially those with a rectangular shape (patch antennas can have a wide variety of shapes). Patch antennas can be configured as single antennas or in an array. The excitation is usually fed by a microstrip line, which typically has a characteristic impedance of 50 ohms.

One of the most common methods for analyzing microstrip antennas is the transmission line model. It is important to note that the microstrip transmission line does not support the TEM mode, unlike the coaxial cable, which has radial symmetry. The microstrip line supports quasi-TEM: there is a small field component along the direction of propagation. For the purposes of the model, this can be ignored and the TEM mode, which has no field component in the direction of propagation, can be used. This reduces the model to:

Where the effective dielectric constant can be approximated as:

The width of the strip must be greater than the height of the substrate. It is important to note that the dielectric constant is not constant over frequency. As a consequence, the above approximation is only valid at low microwave frequencies.

Another note for the transmission line model is that the effective length differs from the physical length of the patch. The effective length is longer by 2ΔL due to fringing effects. ΔL can be expressed as a function of the effective dielectric constant.
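The effective dielectric constant and the fringing extension ΔL have well-known closed-form approximations (the Hammerstad-style formulas found in standard antenna texts, valid for W/h ≥ 1). A minimal Python sketch; the substrate and patch values below are hypothetical:

```python
import math

def eps_eff(eps_r, W, h):
    """Effective dielectric constant of a microstrip line.
    Closed-form approximation, valid for W/h >= 1 at low microwave frequencies."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

def delta_L(eps_r, W, h):
    """Length extension Delta-L on each radiating edge due to fringing fields."""
    ee = eps_eff(eps_r, W, h)
    return 0.412 * h * ((ee + 0.3) * (W / h + 0.264)) / ((ee - 0.258) * (W / h + 0.8))

# Hypothetical substrate and patch: eps_r = 2.2, h = 1.588 mm, W = 11.86 mm
eps_r, h, W = 2.2, 1.588e-3, 11.86e-3
ee = eps_eff(eps_r, W, h)
dL = delta_L(eps_r, W, h)
L_eff = 10.0e-3 + 2 * dL   # a 10 mm physical patch is electrically longer by 2*dL
```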

• The Helical Antenna June 6, 2020

The helical antenna is a frequently overlooked antenna type commonly used for VHF and UHF applications; it provides high directivity, wide bandwidth and, interestingly, circular polarization. Circular polarization provides a huge advantage in that if two antennas are circularly polarized, they will not suffer polarization loss due to polarization mismatch. It is known that circular polarization is a special case of elliptical polarization. Circular polarization occurs when the electric field vector (which defines the polarization of any antenna) has two components in quadrature with equal amplitudes. In this case, the electric field vector rotates in a circular pattern when observed at the target, whether RHP or LHP (right hand or left hand polarized).

Generally, the axial mode of the helix antenna is used but normal mode may also be used. Usually the helix is mounted on a ground plane which is connected to a coaxial cable using a N type or SMA connector.

The helix antenna can be broken down into triangles, shown below.

The circumference of each loop is given by πD. S represents the spacing between loops. When this is zero (and hence the angle of the triangle is zero), the helix antenna reduces to a flat loop. When the angle reaches 90 degrees, the helix reduces to a linear monopole wire antenna. L0 represents the length of one loop and L is the length of the entire antenna. The total height L is given as NS, where N is the number of loops. The actual wire length can be calculated by multiplying the number of loops by the length of one loop L0.
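The triangle relations above can be sketched numerically; the dimensions below are hypothetical:

```python
import math

# Hypothetical helix: loop diameter D, loop spacing S, N loops.
D = 0.10   # m
S = 0.07   # m
N = 10

C = math.pi * D                         # circumference of one loop
alpha = math.degrees(math.atan2(S, C))  # pitch angle: 0 -> flat loop, 90 -> straight wire
L0 = math.sqrt(S**2 + C**2)             # wire length of one loop (the hypotenuse)
height = N * S                          # total axial height of the antenna
wire_length = N * L0                    # total conductor length
```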

An important thing to note is that the helix antenna is elliptically polarized by default and must be manually designed to achieve circular polarization for a specific bandwidth. Another note is that the input impedance of the antenna depends greatly on the pitch angle (alpha).

The axial (endfire) mode, which is more common, occurs when the circumference of the antenna is roughly the size of the wavelength. Circular polarization is easier to achieve in this mode. The normal mode features a much smaller circumference and has a more omnidirectional radiation pattern.

The Axial ratio is the numerical quantity that governs the polarization. When AR = 1, the antenna is circularly polarized. When AR = ∞ or 0, the antenna is linearly polarized. Any other quantity means elliptical polarization.

The axial ratio can also be approximated by:

For axial mode, the radiation pattern is much more directional, as the axis of the antenna contains the bulk of the radiation. For this mode, the following conditions must be met to achieve circular polarization.

These are less stringent than the normal mode conditions.

It is also important to consider that the input impedance of these antennas tends to be higher than the standard impedance of a coaxial line (100-200 ohms compared to 50). Flattening the feed wire of the antenna and covering the ground plane with dielectric material helps achieve a better SWR.

This equation can be used to calculate the height of the dielectric used for the ground plane. It depends on the transmission line characteristic impedance, strip width and the dielectric constant of the material used.

• The Superheterodyne Receiver June 5, 2020

“Heterodyning” is a commonly used term in the design of RF wireless communication systems. It is the process of using a local oscillator at a frequency close to an input signal in order to produce a lower frequency signal at the output, equal to the difference of the two frequencies. It is contrasted with “homodyning,” which uses the same frequency for the local oscillator and the input. In a superhet receiver, the RF input and the local oscillator are easily tunable whereas the output IF (intermediate frequency) is fixed.

After the antenna, the front end of the receiver consists of a band select filter and an LNA (low noise amplifier). This is needed because the electrical output of the antenna is often as small as a few microvolts and needs to be amplified, but not in a way that leads to a higher Noise Figure. The typical superhet NF should be around 8-10 dB. The signal is then frequency multiplied or heterodyned with the local oscillator; in the frequency domain, this corresponds to a shift in frequency. The next filter is the channel select filter, which has a higher Quality factor than the band select filter for enhanced selectivity.

For the filtering, the local oscillator can either be fixed or variable for downconversion to the baseband IF. If it is variable, a variable capacitor or a tuning diode is used. The local oscillator can be higher or lower in frequency than the desired frequency resulting from the heterodyning (high side or low side injection).

A common issue in the superhet receiver is image frequency, which needs to be suppressed by the initial filter to prevent interference. Often multiple mixer stages are used (called multiple conversion) to overcome the image issue. The image frequencies are given below.

Higher IF frequencies tend to be better at suppressing the image, as demonstrated by the term 2f_IF. The level of attenuation (in dB) of a receiver to the image is given by the Image Rejection Ratio (the ratio of the output of the receiver for a signal at the received frequency to its output for an equal-strength signal at the image frequency).
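A minimal sketch of the image frequency relation f_image = f_RF ± 2·f_IF; the FM-broadcast numbers below are just an illustrative example, not from the post:

```python
def image_frequency(f_rf, f_if, high_side=True):
    """Image frequency of a superheterodyne receiver.

    With high-side injection (f_LO = f_RF + f_IF) the image sits at
    f_RF + 2*f_IF; with low-side injection it sits at f_RF - 2*f_IF.
    Any consistent frequency unit works."""
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

# Example: FM broadcast receiver, f_RF = 100 MHz, f_IF = 10.7 MHz
f_img_hi = image_frequency(100e6, 10.7e6, high_side=True)   # 121.4 MHz
f_img_lo = image_frequency(100e6, 10.7e6, high_side=False)  # 78.6 MHz
```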

• Conduction & Valence Band Energies under Biasing (PN & PIN Junctions) June 4, 2020

Previously, we discussed the effect of doping concentrations on the energy band gap. The conclusion of this process was that the doping concentration alone does not alter the band gap. The band gap is the difference between the conduction band and valence bands. Under biasing, the conduction and valence bands are in fact affected by doping concentration.

One method to explain how the doping level will influence the conduction band and valence band under bias is by demonstrating the difference between the energy bands of a PN Junction versus that of a PIN Junction. Simulations of both are presented below. The intermediate section found between the p-doped and n-doped regions of the PIN junction diode offers a more gradual transition between the two levels. A PN junction offers a sharper transition at the conduction and valence band levels simultaneously. A heterostructure, which is made of more than one material (which will have different band gaps) may produce even greater discontinuities. Depending on the application, a discontinuity may be sought (think, Quantum well), while in other situations, it may be necessary to smooth the transition between band levels for a desired result.

The conduction and valence bands are of great importance for determining the carrier concentrations and carrier mobilities in a semiconductor structure. These will be discussed soon.

PN Junction under biasing (conduction and valence band energies):

Code Used (PN Junction):

#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e18
region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e18

PIN Junction Biased:

PIN Junction Unbiased:

Code Used (PIN Junction):

#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e18
region num=3 bottom thick = 0.2 material = GaAs NY = 10
region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e18

Here, the carrier concentrations are plotted:

• RADAR Range Resolution June 3, 2020

“Pulse Compression” is a signal processing technique that tries to take the advantages of pulse RADAR and mitigate its disadvantages. The major dilemma is that the performance of a RADAR is dependent on pulse width. For instance, a short pulse provides fine range resolution, but illuminates the target with only a small amount of energy. The digital processing of pulse compression grants the best of both worlds: achieving high range resolution while also illuminating the target with greater energy. This is done using Linear Frequency Modulation or “Chirp modulation”, illustrated below.

As shown above, the frequency gradually increases with time (x axis).

A “matched filter” is a processing technique to optimize the SNR, which outputs a compressed pulse.

Range resolution can be calculated as follows:

Resolution = (C*T)/2

Where T is the pulse time or width.

With finer range resolution, a RADAR can detect two objects that are very close together. As shown, this is easier to do with a shorter pulse, unless pulse compression is used.

It can also be demonstrated that range resolution is proportional to bandwidth:

Resolution = c/2B

This means that RADARs operating at higher frequencies (which tend to have wider bandwidths) can achieve finer resolution.
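Both resolution formulas can be sketched in a few lines of Python; the pulse width and bandwidth values are illustrative:

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def resolution_from_pulse(T):
    """Range resolution (m) of a simple pulse RADAR with pulse width T (s)."""
    return C_LIGHT * T / 2

def resolution_from_bandwidth(B):
    """Range resolution (m) achievable with signal bandwidth B (Hz)."""
    return C_LIGHT / (2 * B)

r_pulse = resolution_from_pulse(1e-6)      # a 1 us pulse -> ~150 m
r_chirp = resolution_from_bandwidth(50e6)  # a 50 MHz chirp -> ~3 m
```

The same 1 µs pulse, chirped over 50 MHz and compressed, resolves targets roughly fifty times closer together.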

• Energy Bandgaps June 2, 2020

Previously, a PN Junction Simulator in ATLAS program was posted. Now, we will use and modify this program to explore more theory in respect to semiconductor materials, high speed electronics and optoelectronics.

The bandgap, as mentioned previously, is the difference between the conduction band energy and valence band energy. The materials GaAs, InP, AlGaAs, InGaAs and InGaAsP are simulated and the bandgap values for each are estimated (just don’t use these values for anything important).

• GaAs: ~1.2 eV
• InP: ~1.35 eV
• AlGaAs: ~1.8 eV
• InGaAs: ~0.75 eV
• InGaAsP: ~1.1 eV

Here the conduction band and valence band are shown.

The structure used in the PN Junction Simulator is found below:

#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
region num=3 bottom thick = 0.001 material = InP NY = 10
region num=4 bottom thick = 0.001 material = GaAs NY = 10
region num=5 bottom thick = 0.001 material = AlGaAs NY = 10 x.composition=0.3 grad.3=0.002
region num=6 bottom thick = 0.001 material = GaAs NY = 10
region num=7 bottom thick = 0.001 material = InGaAs NY = 10 x.comp=0.468
region num=8 bottom thick = 0.001 material = GaAs NY = 10
region num=9 bottom thick = 0.001 material = InGaAsP NY = 10 x.comp=0.145 y.comp = 0.317
region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

### Is the bandgap affected by the doping concentration level?

A quick simulation (below) will tell us that the answer is no. What might influence the bandgap however? And what could the concentration level change?

This (above) is a simulation of GaAs with layers at different doping concentration levels. The top is a contour of the bandgap, which is constant, as expected. The top right is a cross section of this GaAs structure (technically still a pn junction diode); the bandgap is still constant. The bottom two images are the donor and acceptor concentrations.

The bandgap energy E_g is the amount of energy needed for a valence electron to move to the conduction band. The short answer to the question of how the bandgap may be altered is that the bandgap energy is mostly fixed for a single material. In practice however, Bandgap Engineering employs thin epitaxial layers, quantum dots and blends of materials to form a different bandgap. Bandgap smoothing is employed, as are concentrations of specific elements in ternary and quaternary compounds. However, the bandgap cannot be altered by changing the doping level of the material.

• PN Junction Simulator in ATLAS June 1, 2020

This post will outline a program for ATLAS that can simulate a pn junction. The mesh definition and structure between the anode and cathode will be defined by the user. The simulator plots both an unbiased and biased pn junction.

go atlas

Title PN JUNCTION SIMULATOR

#Define the mesh

mesh auto
x.m l = -2 Spac=0.1
x.m l = -1 Spac=0.05
x.m l = 1 Spac=0.05
x.m l = 2 Spac=0.1

#TOP TO BOTTOM – Structure Specification
region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

#Electrode specification
elec num=1 name=anode x.min=-1.0 x.max=1.0 top
elec num=2 name=cathode x.min=-1.0 x.max=1.0 bottom
#Gate Metal Work Function
contact num=2 work=4.77
models region=1 print conmob fldmob srh optr
models region=2 srh optr
material region=2

#SOLVE AND PLOT
solve init outf=diode_mb1.str master
output con.band val.band
tonyplot diode_mb1.str

method newton autonr trap maxtrap=6 climit=1e-6
solve vanode = 2.5 name=anode
save outfile=diode_mb2.str
tonyplot diode_mb2.str
quit

This program may also be useful for understanding how different materials interact across a PN junction. The simulation below is for a simple GaAs pn junction.

The first image shows four contour plots for the pn junction with an applied 2.5 volts. With an applied voltage of 2.5, the recombination rate is high at the PN junction, while there is low recombination throughout the unbiased pn junction. The hole and electron currents are plotted on the bottom left and right respectively.

Here is the pn junction with no biasing.

The beam profile can also be obtained:

• ATLAS TCAD: Simulation of Frequency Response from Light Impulse May 31, 2020

Recently a project was posted for a high speed photodetector. Part of that project was to develop a program that takes the frequency response of a light impulse. My thought is to create a program that can perform these tasks, including an impulse response for any structure.

Generic Light Frequency Response Simulator Program in ATLAS TCAD

The first part of the program should include all the particulars of the structure that is being simulated:

go atlas

[define mesh]

[define structure]

[define electrodes]

[define materials]

Then, the beam is defined. x.origin and y.origin describe where the beam originates on the 2D x-y plane. The angle shown of 270 degrees means that the beam will be facing upwards. One may think of this angle as starting on the right hand side of the x-y coordinate plane and moving clockwise. The wavelength is the optical wavelength of the beam and the window defines how wide the beam will be.

beam num=1 x.origin=0 y.origin=5 angle=270 wavelength=1550 min.window=-15 max.window=15

The program now should run an initial solution and set the conditions (such as if a voltage is applied to a contact) for the frequency response.

METHOD HALFIMPL

solve init
outf = lightpulse_frequencyresponse.str
LOG lightpulse_frequencyresponse.log

[simulation conditions such as applied voltage]

LOG off

Now the optical pulse is simulated as follows:

LOG outf=transient.log
SOLVE B1=1.0 RAMPTIME=1E-9 TSTOP=1E-9 TSTEP=1E-12
SOLVE B1=0.0 RAMPTIME=1E-9 TSTOP=20E-9 TSTEP=1E-12

tonyplot transient.log

outf=lightpulse_frequencyresponse.str master onefile
log off

The optical pulse “transient.log” is simulated using Tonyplot at the end of the program. It is a good idea to separate transient plots from frequency plots to ensure that these parameters may be chosen in Tonyplot. Tonyplot does not give the option to use a parameter if it is not the object that is being solved before saving the .log file.

log outf=frequencyplot.log
FOURIER INFILE=transient.log OUTFILE=frequencyplot.log T.START=0 T.STOP=20E-9 INTERPOLATE
tonyplot frequencyplot.log
log off

output band.param ramptime TRANS.ANALY photogen opt.intens con.band val.band e.mobility h.mobility recomb u.srh u.aug u.rad flowlines

save outf=lightpulse_frequencyresponse.str
tonyplot lightpulse_frequencyresponse.str

quit

Now you can focus on the structure and mesh for a light impulse frequency response. Note that adjustments may be warranted on the light impulse and beam.

And so, here is a structure simulation that could be done easily using the process above.

• High Speed UTC Photodetector Simulation with Frequency Response in TCAD May 30, 2020

The following is a TCAD simulation of a high speed UTC photodetector. An I-V curve is simulated for the photodetector, forward and reverse. A light beam is simulated to enter the photodetector. The photo-current response to a light impulse is simulated, followed by a frequency response in TCAD.

Structure:

I-V Curve

Beam Simulation Entering Photodetector:

Light Impulse:

Frequency Response in ATLAS:

The full project (pdf) is here: ece530_final_mbenker

• Sinusoidal and Exponential Sequences, Periodicity of Sequences May 29, 2020

Continuing our discussion on discrete-time sequences, we now come to define exponential and sinusoidal sequences. The general formula for a discrete-time exponential sequence is as follows:

x[n] = Aα^n.

This exponential behaves differently according to the value of α. If the sequence starts at n=0, the formula is as follows:

x[n] = Aα^n * u[n].

If α is a complex number, the exponential function exhibits further characteristics. The envelope of the exponential is A|α|^n. If |α| < 1, the sequence is decaying. If |α| > 1, the sequence is growing.

When α is complex, the sequence may be analyzed as follows, using the definition of Euler’s formula to express a complex relationship as a magnitude and phase difference.

where ω0 is the frequency and φ is the phase. Because the sample index n is an integer, a complex exponential sequence of the form Ae^(jω0n) only needs to be considered for frequencies within an interval of width 2π.

A sinusoidal sequence is defined as follows:

x[n] = Acos(ω0*n + φ), for all n, and A, φ are real constants.

Periodicity for discrete-time signals means that the sequence will repeat itself for a certain delay, N.

x[n] = x[n+N] : system is periodic.
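Note that a discrete sinusoid cos(ω0·n) is periodic only when ω0·N = 2πk for some integers N and k, i.e. when ω0/(2π) is rational. A small Python sketch of this check (the tolerance and the rational-approximation bound are arbitrary choices):

```python
import math
from fractions import Fraction

def discrete_period(w0, max_den=1000):
    """Smallest period N of x[n] = cos(w0*n), i.e. the smallest N with
    w0*N = 2*pi*k for integers N, k. Returns None when w0/(2*pi) is not
    (numerically) rational with denominator <= max_den."""
    ratio = w0 / (2 * math.pi)
    approx = Fraction(ratio).limit_denominator(max_den)
    if abs(ratio - approx) > 1e-12:
        return None   # w0/(2*pi) irrational: the sequence never repeats
    return approx.denominator

N1 = discrete_period(0.25 * math.pi)  # w0/(2*pi) = 1/8 -> period N = 8
N2 = discrete_period(1.0)             # w0/(2*pi) = 1/(2*pi), irrational
```

Unlike continuous-time sinusoids, a discrete sinusoid with ω0 = 1 rad/sample is not periodic at all.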

```
t = (-5:1:15)';

impulse = t==0;
unitstep = t>=0;
Alpha1 = -0.5;
Alpha2 = 0.5;
Alpha3 = 2.5;
Alpha4 = -2.5;
cAlpha1 = -0.5 - 0.5i;
cAlpha2 = 0.5 + 0.5i;
cAlpha3 = 2.5 - 2.5i;
cAlpha4 = -2.5 + 2.5i;
A = 1;

Exp1 = A.*unitstep.*Alpha1.^t;
Exp2 = A.*unitstep.*Alpha2.^t;
Exp3 = A.*unitstep.*Alpha3.^t;
Exp4 = A.*unitstep.*Alpha4.^t;

cExp1 = A.*unitstep.*cAlpha1.^t;
cExp2 = A.*unitstep.*cAlpha2.^t;
cExp3 = A.*unitstep.*cAlpha3.^t;
cExp4 = A.*unitstep.*cAlpha4.^t;

%%
figure(1)
subplot(2,1,1)
stem(t, impulse)
xlabel('x')
ylabel('y')
title('Impulse')

subplot(2,1,2)
stem(t, unitstep)
xlabel('x')
ylabel('y')
title('Unit Step')
%%
figure(2)
subplot(2,2,1)
stem(t, cExp1)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = -0.5 - 0.5i')

subplot(2,2,2)
stem(t, cExp2)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = 0.5 + 0.5i')

subplot(2,2,3)
stem(t, cExp3)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = 2.5 - 2.5i')

subplot(2,2,4)
stem(t, cExp4)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = -2.5 + 2.5i')
%%
figure(3)
subplot(2,2,1)
stem(t, Exp1)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = -0.5')

subplot(2,2,2)
stem(t, Exp2)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = 0.5')

subplot(2,2,3)
stem(t, Exp3)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = 2.5')

subplot(2,2,4)
stem(t, Exp4)
xlabel('n')
ylabel('x[n]')
title('Exponential: alpha = -2.5')
```

• Mathematical Formulation for Antennas: Radiation Integrals and Auxiliary Potentials May 28, 2020

This short paper will attempt to clarify some useful mathematical tools for antenna analysis that seem overly “mathematical” but can aid in understanding antenna theory. A solid background in Maxwell’s equations and vector calculus would be helpful.

Two sources will be introduced: The Electric and Magnetic sources (E and M respectively). These will be integrated to obtain either an electric and magnetic field directly or integrated to obtain a Vector potential, which is then differentiated to obtain the E and H fields. We will use A for magnetic vector potential and F for electric vector potential.

Using Gauss’ laws (first two equations) for a source free region:

And also the identity:

It can be shown that:

In the case of the magnetic field in response to the magnetic vector potential (A). This is done by equating the divergence of B with the divergence of the curl of A, which both equal zero. The same can be done from Gauss Law of electricity (1st equation) and the divergence of the curl of F.

Using Maxwell’s equations (not necessary to know how) the following can be derived:

For total fields, the two auxiliary potentials can be summed. In the case of the Electric field this leads to:

The following integrals can be used to solve for the vector potentials, if the current densities are known:

For some cases, the volume integral is reduced to a surface or line integral.
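For reference, these integrals take the following standard form (as given in antenna texts), where R is the distance from the source point to the field point and k is the wavenumber:

```latex
\mathbf{A} = \frac{\mu}{4\pi} \iiint_V \mathbf{J}\,\frac{e^{-jkR}}{R}\,dv'
\qquad
\mathbf{F} = \frac{\varepsilon}{4\pi} \iiint_V \mathbf{M}\,\frac{e^{-jkR}}{R}\,dv'
```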

An important note: most antenna calculations and also the above integrals are independent of distance, and therefore are done in the far field (region greater than 2D^2/λ, where D is the largest dimension of the antenna).

The familiar duality theorem from Fourier Transform properties can be applied in a similar way to Maxwell’s equations, as shown.

In the chart, Faraday’s Law, Ampere’s Law, Helmholtz equations and the above mentioned integrals are shown. To be perfectly honest, I think the top right equation is wrong. I believe it should have permittivity rather than permeability.

Another important antenna property is reciprocity: the receive and transmit radiation patterns are the same, given that the medium of propagation is linear and isotropic. This can be compared to the reciprocity theorem of circuits, meaning that a voltmeter and source can be interchanged if a constant current or voltage source is used and the circuit components are linear, bilateral and discrete elements.

• Discrete-Time Impulse and Unit Step Functions May 27, 2020

Discrete-Time Signals are understood as a set or sequence of numbers. These sequences possess magnitudes or values at a given index.

One mark of Discrete-Time Signals is that the index value is an integer. Thus, the sequence will have a magnitude or value for a whole number index such as -5, -4, 0, 6, 10000, etc.

A discrete-time signal represented as a sequence of numbers takes the following form:

x[n] = {x[n]},          -∞ < n < ∞,

where n is any integer (the index).

An analog representation describes values of a signal at time nT, where T is the sampling period. The sampling frequency is the inverse of the sampling period.

x[n] = X_a(nT),      -∞ < n < ∞.

Common Sequences

A very simple and important sequence is the unit sample sequence, also called the “discrete time impulse” or simply “impulse,” equal to 1 at index zero and equal to zero otherwise.

The discrete time impulse is used to describe an entire system using a delayed impulse. An entire sequence may also be shifted or delayed using the following relation:

y[n] = x[n – n0],

where n0 is an integer (the number of indices by which the sequence is delayed). The impulse function delayed to any index and multiplied by the value of the system at that index can describe any discrete-time system. The general formula for this relationship is,

The unit step sequence is related to the unit impulse. The unit step sequence is a set of numbers that is equal to zero for all indices less than zero and equal to one for all indices greater than or equal to zero.

The unit step sequence is therefore equal to a sum of delta impulses with delays of zero and greater.

u[n] = δ[n] + δ[n-1] + δ[n-2] + . . .

The unit impulse can also be represented by unit step functions:

δ[n] = u[n] – u[n-1].

Below I’ve plotted both the impulse and unit step function in matlab.

```t = (-10:1:10)';

impulse = t==0;
unitstep = t>=0;

figure(1)
subplot(2,1,1)
stem(t, impulse)
xlabel('x')
ylabel('y')
title('Impulse')
figure(1)
subplot(2,1,2)
stem(t, unitstep)
xlabel('x')
ylabel('y')
title('Unit Step')```

• Image Resolution May 26, 2020

Consider that we are interested in building an optical sensor. This sensor contains a number of pixels, which is dependent on the size of the sensor. The sensor has two dimensions, horizontal and vertical. Knowing the size of the pixels, we will be able to find the total number of pixels on this sensor.

The horizontal field of view, HFOV, is the total angle of view about the normal of the sensor. The effective focal length, EFL, of the sensor is then:

Effective Focal Length: EFL = V / (tan(HFOV/2)),

where V is the vertical sensor size (in meters, not in number of pixels) and HFOV is the horizontal field of view. The field of view angle is halved to account for the fact that HFOV extends to both sides of the normal of the sensor.

The system resolution using the Kell Factor: R = 1000 * KellFactor * (1 / (PixelSize)),

where the pixel size is typically given, and the Kell factor (less than 1) approximates a best-case real-world result, accounting for aberrations and other potential issues.

Angular resolution: AR = R * EFL / 1000,

where R is the resolution using the Kell factor and EFL is the effective focal length. It is possible to compute the angular resolution using either pixels per millimeter or cycles per millimeter, however one would need to be consistent with units.

Minimum field of view: Δl = 1.22 * f * λ / D,

which was used previously for the calculation of the spatial resolution of a microscope. The minimum field of view is simply another wording for the minimum spatial resolution, or the minimum resolvable size.

Below is a MATLAB program that computes these parameters while sweeping the diameter of the lens aperture. The wavelength admittedly may not be appropriate for a microscope, but let’s say that you are looking for something in the infrared spectrum. Maybe you are trying to view some tiny laser beams that will be used in the telecom industry at 1550 nanometers.

Pixel size: 3 um. HFOV: 4 degrees. Sensor size: 8.9mm x 11.84mm.
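The MATLAB program itself is not reproduced here, but the same calculation can be sketched in Python using the formulas above and the stated values. The Kell factor of 0.7 and the 25 mm aperture are assumed for illustration (the MATLAB version sweeps the aperture instead of fixing it):

```python
import math

# Values stated in the post; kell and D are assumed for illustration.
pixel_size_um = 3.0
hfov_deg = 4.0
V_mm = 8.9                # sensor dimension used in the EFL formula, mm
kell = 0.7
wavelength = 1550e-9      # m, telecom infrared as in the post
D = 0.025                 # lens aperture diameter, m

EFL_mm = V_mm / math.tan(math.radians(hfov_deg) / 2)  # effective focal length, mm
R = 1000 * kell / pixel_size_um                       # Kell-limited system resolution
AR = R * EFL_mm / 1000                                # angular resolution
dl = 1.22 * (EFL_mm / 1000) * wavelength / D          # minimum field of view, m
```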

• Spatial Resolution of a Microscope May 25, 2020

Angular resolution describes the smallest angle between two objects that are able to be resolved.

θ = 1.22 * λ / D,

where λ is the wavelength of the light and D is the diameter of the lens aperture.

Spatial resolution on the other hand describes the smallest object that a lens can resolve. While angular resolution was employed for the telescope, the following formula for spatial resolution is applied to microscopes.

Spatial resolution: Δl = θf = 1.22 * f * λ / D,

where θ is the angular resolution, f is the focal length (assumed to be distance to object from lens as well), λ is the wavelength and D is the diameter of the lens aperture.

The Numerical Aperture (NA) is a measure of the ability of the lens to gather light and resolve fine detail. In the case of fiber optics, the numerical aperture applies to the maximum acceptance angle of light entering a fiber. The angle subtended by the lens at its focus is θ = 2α. α is shown in the first diagram.

Numerical Aperture for a lens: NA = n * sin(α),

where n is the index of refraction of the medium between the lens and the object. Further,

sin(α) = D / (2d).

The resolving power of a microscope is related.

Resolving power: x = 1.22 * d * λ / D,

where d is the distance from the lens aperture to the region of focus.

Using the definition of NA,

Resolving power: x = 1.22 * d * λ / D = 1.22 * λ / (2sin(α)) = 0.61 * λ / NA.
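Putting the definitions together, a short Python sketch; the objective dimensions below are hypothetical:

```python
import math

def numerical_aperture(n, D, d):
    """NA = n*sin(alpha), using sin(alpha) = D/(2d) as in the text."""
    return n * D / (2 * d)

def resolving_power(wavelength, NA):
    """Smallest resolvable feature: x = 0.61 * lambda / NA."""
    return 0.61 * wavelength / NA

# Hypothetical objective: aperture D = 8 mm, distance to focus d = 10 mm,
# in air (n = 1), illuminated at 550 nm.
NA = numerical_aperture(1.0, 8e-3, 10e-3)  # 0.4
x = resolving_power(550e-9, NA)            # ~0.84 micrometers
```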

• Telescope Resolution & Distance Between Stars using the Rayleigh Limit May 24, 2020

Previously, the Rayleigh Criterion and the concept of maximum resolution were explained. As mentioned, Rayleigh arrived at this formula through experiments with telescopes and stars, exploring the concept of resolution. The formula may be used to determine the minimum resolvable separation between two stars.

θ = 1.22 * λ / D.

Consider a telescope with a lens diameter of 2.4 meters observing stars of visible white light at approximately 550 nanometer wavelength, located approximately 2.0 million lightyears away from the lens. The minimum resolvable separation between the two stars in lightyears may be calculated as follows.

θ = 1.22 * (550*10^(-9)m)/(2.4m)

Distance between two objects (s) at a distance away (r), separated by angle (θ): s = rθ

s = rθ = (2.0*10^(6) ly)*(2.80*10^(-7)) = 0.56 ly.

This means that, for this lens size, star distance and wavelength, two stars would need to be separated by at least 0.56 lightyears to be distinguishable.
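The arithmetic above can be verified in a few lines, using the same 2.4 m aperture, 550 nm wavelength and 2.0 million lightyear distance as the worked numbers:

```python
# Rayleigh limit example: minimum resolvable star separation.
wavelength = 550e-9     # m
D = 2.4                 # lens aperture diameter, m
r_ly = 2.0e6            # distance to the stars, lightyears

theta = 1.22 * wavelength / D     # minimum resolvable angle, radians
s_ly = r_ly * theta               # minimum separation, lightyears

print(f"theta = {theta:.3e} rad")
print(f"s = {s_ly:.2f} ly")
```

This reproduces the θ ≈ 2.80 × 10⁻⁷ rad and s ≈ 0.56 ly figures above.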

• Diffraction, Resolution and the Rayleigh Criterion May 23, 2020

The wave theory of light includes the understanding that light diffracts as it moves through space, bending around obstacles and interfering with itself constructively and destructively. Diffraction grating disperses light according to wavelength. The intensity pattern of monochromatic light going through a small, circular aperture will produce a pattern of a central maximum and other local minima and maxima.

The wave nature of light and the diffraction pattern of light plays an interesting role in another subject: resolution. The light which comes through the hole, as demonstrated by the concept of diffraction, will not appear as a small circle with sharply defined edges. There will appear some amount of fuzziness to the perimeter of the light circle.

Consider if there are two sources of light that are near to each other. In this case, the light circles will overlap each other. Move them even closer together and they may appear as one light source. This means that they cannot be resolved, that the resolution is not high enough for the two to be distinguished from another.

Considering diffraction through a circular aperture the angular resolution is as follows:

Angular resolution: θ = 1.22 * λ/D,

where λ is the wavelength of light, D is the diameter of the lens aperture and the factor 1.22 corresponds to the resolution limit formulated and empirically tested using experiments performed using telescopes and astronomical measurements by John William Strutt, a.k.a. Rayleigh for the “Rayleigh Criterion.” This factor describes what would be the minimum angle for two objects to be distinguishable.

• Optical Polarizers in Series May 22, 2020

The following problems deal with polarizers, devices used to alter the polarization of an optical wave.

1. ### Unpolarized light of intensity I is incident on an ideal linear polarizer (no absorption). What is the transmitted intensity?

Unpolarized light contains all possible polarization angles relative to the axis of the linear polarizer. On a two dimensional plane, the linear polarizer passes only the component of the light intensity that lies along its axis of polarization. Therefore, the intensity of light emitted from a linear polarizer for incident unpolarized light will be half the intensity of the incident light.

### c) Is it possible to reduce the intensity of transmitted light to zero by removing a polarizer(s)?

a) Using Malus’s Law, the intensity of light from a polarizer is equal to the incident intensity multiplied by the cosine squared of the angle between the incident light and the polarizer. This formula is used in subsequent calculations (below). The intensity of light from the last polarizer is 19.8% of the incident light intensity.

b) By removing polarizer three, the total intensity is reduced to 0.0516 times the incident intensity.

c) In order to achieve an intensity of zero at the output, there would need to be an angle difference of 90 degrees between two adjacent polarizers. This is not achievable by removing only one of the polarizers; however, it would be possible by removing both the second and third polarizers, leaving a difference of 90 degrees between the remaining two.
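A minimal sketch of Malus's law applied to polarizers in series. The angles here are hypothetical, chosen only to illustrate the cascade and the crossed-polarizer case in part (c); the actual angles of the original problem are given in a figure, not reproduced here.

```python
import math

def malus_cascade(polarizer_angles_deg):
    """Fraction of unpolarized incident intensity transmitted through a
    series of ideal linear polarizers. The first polarizer passes half the
    unpolarized intensity; each subsequent polarizer applies Malus's law
    with the angle difference relative to the previous one."""
    intensity = 0.5
    for prev, cur in zip(polarizer_angles_deg, polarizer_angles_deg[1:]):
        intensity *= math.cos(math.radians(cur - prev)) ** 2
    return intensity

# Hypothetical example: three polarizers at 0, 30 and 90 degrees.
full = malus_cascade([0, 30, 90])
# Removing the middle polarizer leaves crossed polarizers (0 and 90 degrees),
# which transmit essentially zero intensity.
crossed = malus_cascade([0, 90])
print(full, crossed)
```

Note how an intermediate polarizer lets light "leak" through a crossed pair: removing it drops the transmission to zero.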

• Jones Vector: Polarization Modes May 21, 2020

The Jones Vector is a method of describing the direction of polarization of light. It uses a two element matrix for the complex amplitude of the polarized wave. The polarization of a light wave can be described in a two dimensional plane as the cross section of the light wave. The two elements in the Jones Vector are a function of the angle that the wave makes in the two dimensional cross section plane of the wave as well as the amplitude of the wave.

The amplitude may be separated from the ‘mode’ of the vector. The mode of the vector describes only the direction of polarization. Below is a first example with a linear polarization in the y direction.

Using the Jones Vector the mode can be calculated for any angle. See calculations below:

The phase differences of the Jones Vector are plotted for a visual representation of the mode. If the two components of the mode differ in phase, the plot depicts a circular or oval pattern that intersects both components of the mode on a two dimensional plot. The simplest plot to understand is a polarization with a 90 degree phase difference: both magnitudes of the components of the mode are 1, and a full circle connects these points of the mode. The zero phase difference case is demonstrated at 45 degrees, where sin(45°) and cos(45°) both equal 0.707; here the plot is a straight line, indicating that the polarization has equal phase along each axis of the plot.
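The modes discussed above can be sketched in a few lines. The helper `jones_mode` and its arguments are illustrative: the first element is the x component, and a relative phase is applied to the y component.

```python
import cmath
import math

def jones_mode(angle_deg, phase_deg=0.0):
    """Normalized Jones vector (mode) for a wave whose field makes the given
    angle in the transverse plane, with a relative phase (in degrees)
    applied to the y component."""
    a = math.radians(angle_deg)
    p = cmath.exp(1j * math.radians(phase_deg))
    return (math.cos(a), math.sin(a) * p)

linear_y = jones_mode(90)        # linear polarization along y: (0, 1)
linear_45 = jones_mode(45)       # equal in-phase components: straight line at 45 deg
circular = jones_mode(45, 90)    # equal magnitudes, 90 deg phase difference: circle
print(linear_y, linear_45, circular)
```

The zero-phase-difference case traces a line in the transverse plane, while the 90 degree case traces a circle, matching the plots described above.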

• Acoustics and Sound: The Vocal Apparatus May 20, 2020

The study of modulation of signals for wireless transmission can, to some extent, be applied to the human body. In the RF wireless world, a “carrier” signal of a high frequency has a “message” encoded on it (message signal) in some form or fashion. This is then transmitted through a medium (generally air) as a radio frequency electromagnetic wave.

In a similar way, the vocal apparatus of the human body performs a similar function. The lungs forcibly expel air in a steady stream comparable to a carrier wave. This steady stream gets encoded with information by periodically varying its velocity and pressure into two forms of sound: voiced and unvoiced. Voiced sounds produce vowels and are modulated by the larynx and vocal cords. The vocal cords are bands with a narrow slit between them which are flexed in certain ways to produce sounds. Tightening the cords produces a higher pitch; loosening or relaxing them produces a lower pitch. In general, thicker vocal cords will produce deeper voices. The relaxation oscillation produced by this effect converts a steady air flow into a periodic pressure wave. Unvoiced sounds do not use the vocal cords.

The tightness of the vocal cords produces a fundamental frequency which characterizes the tone of voice. In addition, resonating cavities above and below the larynx have certain resonant frequencies which also contribute to the tone of voice through inharmonic frequencies, as these are not necessarily spaced evenly.

Although the lowest frequency is the fundamental and most recognizable tone within the human voice, higher frequencies tend to be of a greater amplitude. Different sounds produced will of course have different spectral characteristics. This is demonstrated in the subsequent image.

The “oo” sound appears to contain a prominent 3rd harmonic, for example. In none of these sounds is the fundamental of highest amplitude. The image also shows how varying the position of the tongue as well as the constriction or release of the larynx contributes to the spectrum.

It is interesting to note the difference between male and female voices: male voices contain more harmonic content. This is because lower multiples of the fundamental are more strongly represented in the male voice and are spaced close to one another in the frequency domain.

• The Cavity Magnetron May 19, 2020

The operation of a cavity magnetron is comparable to that of a vacuum tube, a nonlinear device that was mostly replaced by the transistor. The vacuum tube operates using thermionic emission: a material with a high melting point is heated and expels electrons. When the work function of the material is overcome through thermal energy transferred to its electrons, these particles can escape the material.

Magnetrons are comprised of two main elements: the cathode and anode. The cathode is at the center and contains the filament which is heated to create the thermionic emission effect. The outside part of the anode acts as a one-turn inductor to provide a magnetic field to bend the movement of the electrons in a circular manner. If not for the magnetic field, the electrons would simply be expelled outward. The magnetic field sweeps the electrons around, exciting the resonant cavities of the anode block.

The resonant cavities behave much like a passive LC filter circuit which resonates at a certain frequency. In fact, the tipped end of each resonant cavity looks much like a capacitor storing charge between two plates, and the back wall acts as an inductor. It is well known that a parallel resonant circuit has a high voltage output at one particular frequency (the resonant frequency) depending on the reactance of the capacitor and inductor. This can be contrasted with a series resonant circuit, which has a current peak at the resonant frequency where the two devices act as a low impedance short circuit. The resonant cavities in question are parallel resonant.

Just like the soundhole of a guitar, the resonant frequency of the magnetron's cavity is determined by the size of the cavity. Therefore, the magnetron should be designed to have a resonant frequency that makes sense for the application. For a microwave oven, the frequency should be around 2.4GHz for optimum cooking. For an X-band RADAR, this should be closer to 10GHz. An interesting aspect of the magnetron is that when a cavity is excited, the adjacent cavity is also excited, 180 degrees out of phase.
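The parallel LC analogy can be sketched with the standard resonance formula f₀ = 1/(2π√(LC)). The element values below are hypothetical, picked only to land near the common 2.45 GHz microwave oven frequency:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an ideal parallel (or series) LC circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

L = 1e-9      # 1 nH (assumed equivalent inductance of a cavity back wall)
C = 4.22e-12  # 4.22 pF (assumed equivalent capacitance of the vane gap)
f0 = resonant_frequency(L, C)
print(f"f0 = {f0/1e9:.2f} GHz")
```

Scaling the cavity scales the equivalent L and C, which is why cavity size sets the operating frequency.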

The magnetron generally produces wavelengths of around several centimeters (roughly 10 cm in a microwave oven). It is known as a “crossed field” device, because the electrons are under the influence of both electric and magnetic fields, which are in orthogonal directions. An antenna is attached to the dipole for the radiation to be expelled. In a microwave oven, the microwaves are guided into the cooking chamber using a metallic waveguide.

• Optical Polarization, Malus’s Law, Brewster’s Angle May 18, 2020

In the theory of wave optics, light may be considered as a transverse electromagnetic wave. Polarization describes the orientation of an electric field on a 3D axis. If the electric field exists completely on the x-axis plane for example, light is considered to be polarized in this state.

Non-polarized light, such as natural light may change angular position randomly or rapidly. The process of polarizing light uses the property of anisotropy and the physical mechanisms of dichroism or selective absorption, reflection or scattering. A polarizer is a device that utilizes these properties. Light exiting a polarizer that is linearly polarized will be parallel to the transmission axis of the polarizer.

Malus’s law states that the transmitted intensity after an ideal polarizer is

I(θ) = I(0) * cos^2(θ),

where the angle refers to the angle difference between the incident wave and the transmission axis of the polarizer.

Brewster’s Angle, an extension of the Fresnel Equations, describes the condition at which the ray transmitted into a material is at a 90 degree angle to the reflected ray. This situation is true only at the Brewster’s Angle condition. When it is met, the angle of incidence equals the angle of reflection (as always), and the reflected and transmitted rays are perpendicular, so the angle of incidence and the angle of transmission sum to 90 degrees.

If the Brewster’s Angle condition is met, the reflected ray will be completely polarized. This is also termed the polarization angle. The polarization angle is a function of the refractive indices of the two media: tan(θ_B) = n2/n1.
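As a numerical sketch: for light passing from medium 1 into medium 2, Brewster's angle satisfies tan(θ_B) = n2/n1. The air-to-glass indices below are assumed example values.

```python
import math

# Brewster's (polarization) angle for an assumed air-to-glass interface.
n1, n2 = 1.0, 1.5
theta_B = math.degrees(math.atan(n2 / n1))

# Snell's law gives the transmission angle; at Brewster's angle the
# reflected and transmitted rays are perpendicular, so theta_B + theta_t = 90.
theta_t = math.degrees(math.asin(n1 * math.sin(math.radians(theta_B)) / n2))
print(f"theta_B = {theta_B:.2f} deg, theta_t = {theta_t:.2f} deg")
```

For air to glass this gives roughly 56.3 degrees, and the two angles sum to 90 degrees as expected.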

• Fourth Generation Optics: Thin-Film Voltage-Controlled Polarization May 17, 2020
###### Michael Benker ECE591 Fundamentals of Optics & Photonics April 20,2020

Introduction

Dr. Nelson Tabiryan of BEAM Engineering for Advanced Measurements Co. delivered a lecture to explain some of the latest advances in the field of optics. The fourth generation of optics, in short includes the use of applied voltages to liquid crystal technology to alter the polarization effects of micrometer thin film lenses. Both the theory behind this type of technology as well as the fabrication process were discussed.

First Three Generations of Optics

A summary of the four generations of optics is of value in understanding the advancements of the current age. Optics is understood by many as one of the oldest branches of science. The first generation, categorized by applications of phenomena observable by the human eye, is geometrical or refractive optics, which uses shape and refractive index to direct and control light.

The second generation of optics included the use of graded index optical components and metasurfaces. This solved the issue of needing to use exceedingly bulky components although it would be limited to narrowband applications. One application is the use of graded index optical fibers, which could allow for a selected frequency to reflect through the fiber, while other frequencies will pass through.

Anisotropic materials gave rise to the third generation of optics, which produced technologies that made use of birefringence modulation. Applications included liquid crystal displays, electro-optic modulators and other technologies that could control material properties to alter behavior of light.

Fourth Generation Optics

To advance technology related to optics, there are several key features needed for output performance. A modernized optics should be broadband, allowing many frequencies of light to pass. It should be highly efficient, micrometer thin, and switchable. This technology now exists.

Molecule alignment in liquid crystalline materials is essential to the theory of fourth generation optics. The polarization characteristics of the lens are determined by molecule alignment. As such, one can build a crystal or lens that has twice the refractive index for light polarized in one direction. Such a device is termed a half wave plate, which acts on light waves polarized parallel and perpendicular to the optical axis of the crystal. Essentially, for one direction of polarization a full-period sinusoid is transmitted through the half wave plate, but with a reversed sign of exit angle, while for the other direction of polarization only half a period is allowed through. Because the device distinguishes the sign of the input angle relative to the polarization axis, the output polarization and direction of the outgoing wave can be altered as a function of the circular direction of polarization of the incident wave.

The arrangement of molecules on these micrometer-thin lenses is not only able to alter the output direction according to polarization, but can also allow the lens to act as a converging or diverging lens. The output wave, a result of the arrangement of molecules in the liquid crystal lens, has practically an endless number of uses and can be made to behave as any graded index lens one might imagine. An applied voltage controls the molecular alignment.

How does the lens choose which molecular alignment to use when switching the lens? The answer is that, during the fabrication process, all molecular alignments are prepared that the user plans on employing or switching to at some point. These are termed diffraction wave plates.

## Problem 1.

The second lens is equivalent to the first (left) lens, rotated 180 degrees. In the case of a polarization-controlled birefringence application, one would expect lens 2 to exhibit the opposite output direction for the same input wave polarization as lens 1. For lens 1 (left), clockwise circularly polarized light will exit at an angle towards the right, while counterclockwise circularly polarized light exits at an angle to the left. This is reversed for lens 2.

## Problem 2.

There are as many states as there are diffractive waveplates. If there are six waveplates, then there will be six states to choose from.

• LED Simulation in Atlas May 16, 2020

This post features an LED structure simulated in ATLAS. The goal will be to demonstrate why this structure may be considered an LED. Light Emitting Diodes and Laser Diodes both serve as electronic-to-photonic transducers. Of importance to the operation of LEDs is the radiative recombination rate.

The following LED structure is built using the following layers (top-down):

• GaAs: 0.5 microns, p-type: 1e15
• AlGaAs: 0.5 microns, p-type: 1e15, x=0.35
• GaAs: 0.1 microns, p-type: 1e15, LED
• AlGaAs: 0.5 microns, n-type: 1e18, x=0.35
• GaAs: 2.4 microns, n-type: 1e18

This structure uses alternating GaAs and AlGaAs layers.

• Pulsed Lasers and Continuous-Wave Lasers May 15, 2020

Continuous-Wave (CW) lasers emit a constant stream of light energy. The power emitted is typically not very high, not exceeding kilowatts. Pulsed lasers were designed to produce much higher peak power output through the use of cyclical short bursts of optical power with intervals of zero optical power output. There are several important parameters to explore in relation to the pulsed laser in particular.

The period of the laser pulse Δt is the duration from the start of one pulse to the start of the next pulse. The inverse of the period Δt is the repetition rate or repetition frequency. The pulse width τ is calculated as the 3dB (half power) drop-off width.

The Duty cycle is an important concept in signals and systems for periodic pulsed systems and is described as the ratio of the pulse duration to the duration of the period. Interestingly, the continuous wave laser can be considered a pulsed laser with 100% duty cycle.

Power calculations and Pulse Energy remain as several important relations.

• Average Power: the product of the peak pulsed power, the repetition frequency and the pulse width
• Pulse Energy: the average power divided by the repetition frequency

Other formulations of these parameters are found above.
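The bullet relations above can be sketched numerically. The repetition rate, pulse width and peak power are assumed example values:

```python
# Pulsed laser relations.
# Assumed example: 10 kHz repetition rate, 10 ns pulses, 1 kW peak power.
f_rep = 10e3        # repetition frequency, Hz
tau = 10e-9         # pulse width, s
P_peak = 1e3        # peak power, W

duty_cycle = tau * f_rep           # fraction of each period the laser is on
P_avg = P_peak * f_rep * tau       # average power
E_pulse = P_avg / f_rep            # pulse energy (equivalently P_peak * tau)

print(duty_cycle, P_avg, E_pulse)
```

Note the large ratio between peak and average power: a 1 kW peak here corresponds to only 0.1 W average, which is why pulsed operation reaches peak powers a CW laser cannot.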

• Monochromaticity, Narrow Spectral Width and High Temporal & Spatial Coherence May 14, 2020

A laser is a device that emits light through a process of optical amplification based on stimulated emission of electromagnetic radiation. A laser has high monochromaticity, narrow spectral width and high temporal coherence. These three qualities are interrelated, as will be shown.

Monochromaticity is a term for a system, particularly in relation to light that references a constant frequency and wavelength. With the understanding that color is a result of frequency and wavelength, a monochromatic system also means that a single color is selected. A good laser will have only one output wavelength and frequency, typically referred to in relation to the wavelength (i.e. 1500 nanometer wavelength, 870 nanometer wavelength).

A monochromatic system, ideally made of only one frequency, is a single sinusoid function. A constant frequency sinusoid plotted in the frequency domain will have a line width approaching zero.

The time τ that the wave behaves as a perfect sinusoid is related to the spectral line width. If the sinusoid has an infinite time domain presence, the spectral line width is zero and the frequency domain plot is a perfect impulse.
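The inverse relation between linewidth and the time the wave stays sinusoidal can be sketched with the usual rule of thumb that coherence time is on the order of the inverse linewidth. The 1 MHz linewidth below is an assumed example value:

```python
# Coherence time and coherence length from spectral linewidth.
# Rule of thumb: tau_c ~ 1 / delta_nu.
c = 3e8               # speed of light, m/s
delta_nu = 1e6        # assumed linewidth, 1 MHz

tau_c = 1.0 / delta_nu      # coherence time, s
L_c = c * tau_c             # coherence length, m
print(tau_c, L_c)
```

A narrower linewidth (more monochromatic source) thus directly yields a longer coherence time and length, tying together the three laser qualities discussed in this post.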

If two frequencies are present in the time domain, the system is not monochromatic, which violates one of the principles of a perfect laser.

Temporal Coherence is essentially a different perspective on the same relation between monochromaticity and narrow spectral width. Coherence is the ability to predict the value of a system. Temporal coherence means that, given information related to the time of the system, the position or value of the system should be predictable. Given a sinusoid with a long time domain presence, the value of the sinusoid will be predictable given a time value. This is one condition of a proper laser.

Spatial coherence takes a value of distance as a given. If the system is highly spatially coherent, the value of the system at a certain distance should be predictable. This is also a condition of a proper laser, and one differentiating point between a laser and an LED, since an LED's light propagation direction is unpredictable at any given time or distance. Light emitted from an LED may travel at any angle at any time. An LED does not produce coherent light; the laser does.

• AlGaAs/GaAs Strip Laser May 13, 2020

This project features a heterostructure semiconductor strip laser, comprised of a GaAs layer sandwiched between p-doped and n-doped AlGaAs. The model parameters are outlined below. The structure is presented, followed by output optical power as a function of injection current. Thereafter, contour plots are made of the laser to depict the electron and hole densities, recombination rate, light intensity and the conduction and valence band energies.

• Quality Factor May 12, 2020

Quality factor is an extremely important fundamental concept in electrical and mechanical engineering. An oscillator (active) or resonator (passive) can be described by its Q-factor, which is inversely proportional to bandwidth. For these devices, the Q factor describes the damping of the system. In some instances it is better to have a lower quality factor; in others, a higher one. With a guitar body, for instance, you would want a lower quality factor, because a high-Q guitar would not amplify frequencies very evenly. To lower the quality factor, complex or strange shapes are introduced for the instrument body. However, the soundhole of a guitar (a Helmholtz resonator) has a very high quality factor to increase its frequency selectivity.

A very important area of discussion is the Quality Factor of a filter. Higher Q filters have higher peaks in the frequency domain and are more selective. The Quality factor is really only valid for a second order filter, which is based on a second order equation and contains both an inductor and a capacitor. At a certain frequency, the reactances of the capacitor and inductor cancel, leading to a strong output of current (lower total impedance). For a tuned circuit, the Q must be very high and is considered a “Figure of Merit”.

In terms of equations, the quality factor can be thought of in many different ways. It can be thought of as the ratio of “reactive” or wasted power to average power. It can also be thought of as the ratio of center frequency to bandwidth (NOTE: This is the FWHM bandwidth in which only frequencies that are equal to or greater than half power are part of the band). Another common equation is 2π multiplied by the ratio of energy stored in a system to energy lost in one cycle. The energy dissipated is due to damping, which again shows that Q factor is inversely related to damping, in addition to bandwidth.

Q can also be expressed as a function of frequency:

The full relationship between Q factor and damping can be expressed as the following:

When Q = 1/2, the system is critically damped (such as with a door damper). The system does not oscillate. This is also when the damping ratio is equal to one. The main difference between critical damping and overdamping is that in critical damping, the system returns to equilibrium in the minimum amount of time.

When Q > 1/2 the system is underdamped and oscillatory. With a small quality factor, an underdamped system may only oscillate for a few cycles before dying out. Higher Q factors will oscillate longer.

When Q < 1/2 the system is overdamped. The system does not oscillate but takes longer to reach equilibrium than critical damping.
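The three regimes above can be summarized in a short sketch, using the standard relation between damping ratio and quality factor, ζ = 1/(2Q):

```python
def damping_regime(Q):
    """Classify a second-order system by its quality factor.
    Damping ratio: zeta = 1 / (2 Q)."""
    zeta = 1.0 / (2.0 * Q)
    if Q > 0.5:
        return zeta, "underdamped (oscillatory)"
    if Q == 0.5:
        return zeta, "critically damped"
    return zeta, "overdamped"

for Q in (0.25, 0.5, 10):
    zeta, regime = damping_regime(Q)
    print(f"Q = {Q}: zeta = {zeta}, {regime}")
```

Q = 1/2 corresponds exactly to ζ = 1, the critically damped case described above.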

• Bragg Gratings May 11, 2020

Bragg gratings are commonly used in optical fibers. Generally, an optical fiber has a relatively constant refractive index throughout. With an FBG (Fiber Bragg Grating), the refractive index is varied periodically within the core of the fiber. This can allow certain wavelengths to be reflected while all others are transmitted.

The typical spectral response is shown above. It is clear that only a specific wavelength is reflected, while all others are transmitted. Bragg Gratings are typically only used in short lengths of the optical fiber to create a sort of optical filter. The only wavelength to be reflected is the one that is in phase with the Bragg grating distribution.

A typical usage of a Bragg Grating is for optical communications as a “notch filter”, which is essentially a band stop filter with a very high Quality factor, giving it a very narrow range of attenuated frequencies. These fibers are generally single mode, which features a very narrow core that can only support one mode as opposed to a wider multimode fiber, which can suffer from greater modal distortion.

The “Bragg Wavelength” can be calculated by the equation:

λ = 2nΛ

where n is the refractive index and Λ is the period of the Bragg grating. This wavelength can also be shifted by stretching the fiber or exposing it to varying temperature.
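A quick sketch of the Bragg wavelength relation. The effective index and grating period below are assumed example values, chosen to land near the common 1550 nm telecom wavelength:

```python
# Bragg wavelength: lambda_B = 2 * n * Lambda.
# Assumed example values: effective index 1.447, grating period 535.6 nm.
n_eff = 1.447
period = 535.6e-9           # grating period Lambda, m

lambda_B = 2 * n_eff * period
print(f"lambda_B = {lambda_B*1e9:.1f} nm")
```

Note that the grating period is roughly half the in-fiber wavelength, which is what makes the reflections from successive index variations add in phase.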

These fibers are typically made by exposing the core to a periodic pattern of intense laser light which permanently increases the refractive index periodically. This phenomenon is known as “self focusing” which is when refractive index can be permanently changed by extreme electromagnetic radiation.

• Photodetectors and Dark Current May 10, 2020

A photodetector is simply a device that converts light energy to an electrical current. These devices are very similar to lasers, although they are designed to operate in reverse bias. “Dark current” is a term that originates from this reverse bias condition. When you reverse bias any diode, there is some leakage current, appropriately named reverse bias leakage current. For photosensitive devices it is called dark current because no light absorption is involved. The main cause of this current is random generation of electrons and holes in the depletion region. Ideally, this dark current is minimal.

The basic structure of the photodiode is the “PIN” structure, similar to a semiconductor laser diode. An intrinsic (undoped) region sits between the P-doped and N-doped regions. Although PIN diodes are poor rectifiers, they are much better suited for high speed, high frequency applications due to the high level injection process. The wide intrinsic region provides a lowered capacitance at high frequencies. For photodetectors, the process is photon energy being absorbed in the depletion region, creating an electron-hole pair as the electron moves to a higher energy level (from valence to conduction band). This is what causes an electrical current to be created from light.

Photodetectors are “photoconductive”: conductivity changes with applied light. Like amplifiers and other devices, photodetectors have “Figures of Merit” which signify characteristics of the device. These will be briefly examined.

Quantum Efficiency

Quantum efficiency refers to the number of carriers generated per photon. It is normally denoted by η. It can also be stated as carrier flux/incident photon flux. Sometimes anti-reflection coatings are applied to photodetectors to increase QE.

Responsivity

Responsivity is closely related to the QE (quantum efficiency). The units are amperes/watt. It can also be known as the “input-output gain” of any photosensitive or detective device. For amplifiers this is known as “gain”. Responsivity can be increased by maximizing the quantum efficiency.
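The standard relation between responsivity and quantum efficiency is R = ηqλ/(hc), in amperes per watt. The sketch below uses assumed example values (η = 0.8 at 1550 nm):

```python
# Responsivity from quantum efficiency: R = eta * q * lambda / (h * c), in A/W.
q = 1.602e-19        # elementary charge, C
h = 6.626e-34        # Planck's constant, J*s
c = 3e8              # speed of light, m/s

def responsivity(eta, wavelength):
    """Responsivity in A/W for quantum efficiency eta at a given wavelength (m)."""
    return eta * q * wavelength / (h * c)

R = responsivity(0.8, 1550e-9)   # assumed eta = 0.8 at 1550 nm
print(f"R = {R:.2f} A/W")
```

For these values the responsivity works out to about 1 A/W, and it scales linearly with both quantum efficiency and wavelength.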

Response Time

This is the time required for the photodiode to increase its output from 10% to 90% of final output level.

Noise Equivalent power

This value corresponds to units of Watts/sqrt(Hz). It is another measure of the sensitivity of the device: the optical power that gives a signal-to-noise ratio of one in a one hertz output bandwidth. A small NEP indicates increased sensitivity of the device.

• Carrier Recombination May 9, 2020

Carrier recombination is an effect in which electrons and holes (carriers) interact with each other in a way in which both particles are eliminated. The energy given off in this process is related to the difference between the energy of the initial and final state of the electron that is moved during this process. Recombination can be stimulated by temperature changes, exposure to light or electric fields. Radiative recombination occurs when a photon is emitted in the process. Non-radiative recombination occurs when a phonon (a quantum of lattice vibration) is given off rather than a photon. A special case known as “Auger recombination” causes kinetic energy to be transferred to another electron.

Band to band recombination occurs when an electron moves from one band to another. In thermal equilibrium, the carrier generation rate is equal to the recombination rate. This type of recombination is dependent on carrier density. In a direct bandgap material, this will radiate a photon.

An impurity atom or another type of defect in the material can form “traps” which can capture an electron when the particle falls into them. Essentially, trap assisted recombination is a two step transitional process as opposed to the one step band to band transition. This is sometimes known as R-G center recombination. This two step recombination is known as “Shockley Read Hall” recombination. It is typically indirect recombination, which emits lattice vibrations rather than light.

The final type is Auger Recombination caused by collisions. These collisions between carriers transfer motional energy to another particle. One of the main reasons why this is distinct from the other two types is that this transfer of energy also causes a change in the recombination rate. Like the previous type, this tends to be non radiative.

A distinction should be made for band-to-band recombination between stimulated and spontaneous emission. Spontaneous emission is not started by a photon, but rather due to temperature or some other means (sometimes called luminescence). As stated in a previous post, stimulated emission is what emits coherent light in lasers, however spontaneous emission is responsible for most light emission in general.

• Rayleigh Scattering May 8, 2020

Rayleigh scattering is the scattering of light or electromagnetic radiation by particles much smaller in size than the wavelength. For example, when sunlight emits photons which enter the earth’s atmosphere, scattering occurs. The average wavelength of sunlight is around 500nm, which is in the visible light spectrum. However, it is known that sunlight also contains infrared waves and, of course, ultraviolet radiation. Interestingly enough, Rayleigh scattering influences the color of the sky due to diffuse sky radiation.

The reason why a comparatively huge wavelength (compare 400 nm with nitrogen and oxygen molecules, which are only hundreds of picometers) can scatter off such a small particle is electromagnetic interactions. When the nitrogen/oxygen molecules vibrate at a certain frequency, the photons interact and vibrate at the same frequency. The molecule essentially absorbs and reradiates the energy, scattering it. Because the horizontal direction is the primary direction of vibration, the air scatters the sunlight. The polarization is dependent on the direction of the incoming sunlight. The intensity is proportional to the inverse of the wavelength to the fourth power: the shorter the wavelength, the more scattering. This explains why the sky is blue, since blue light is more strongly scattered by Rayleigh scattering due to its higher frequency (smaller wavelength). The sky is not dark blue because other wavelengths are also scattered, just much less so.
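The inverse fourth-power dependence can be illustrated with a quick ratio. The 450 nm and 700 nm wavelengths are assumed representative values for blue and red light:

```python
# Rayleigh scattering intensity scales as 1 / wavelength^4.
# Compare scattering of blue (450 nm) vs red (700 nm) light.
blue, red = 450e-9, 700e-9
ratio = (red / blue) ** 4
print(f"Blue light is scattered about {ratio:.1f}x more strongly than red")
```

This factor of roughly six is why the scattered skylight we see is dominated by the blue end of the spectrum.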

Rayleigh scattering is quite important in optical fibers. Because silica glass has microscopic variations in refractive index within the material, Rayleigh scattering occurs, which leads to losses. The following coefficient determines the scattering.
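A commonly quoted form of this coefficient, written here from standard fiber-optics references (conventions for the photoelastic term vary, so check against your own source), is:

```latex
\alpha_{\text{scat}} = \frac{8\pi^3}{3\lambda^4}\, n^8 p^2 \beta \, k_B T_f
```

where n is the refractive index, p the photoelastic coefficient, β the isothermal compressibility, T_f the fictive temperature, and k_B the Boltzmann constant.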

The equation shows that the scattering coefficient is proportional to the isothermal compressibility (β), the photoelastic coefficient, the refractive index, and the fictive temperature, and is inversely proportional to the fourth power of the wavelength.

Rayleigh scattering accounts for 96% of attenuation in optical fibers. In a perfectly pure fiber, this would not occur. The scattering centers are typically atoms or molecules, so in comparison to the wavelength they are quite small. The Rayleigh scattering sets the lower limit for propagation loss. In low loss fibers, the attenuation is close to the Rayleigh scattering level, such as in Silica Fibers optimized for long distance propagation.

• The Electronic Oscillator May 7, 2020

The semiconductor laser is a device that can be compared to an electronic oscillator. An oscillator can be thought of as a resonator (a circuit that resonates or produces a strong output at a specific frequency) with gain. Resonators naturally decay over time by some factor, so adding in gain (so long as the gain is greater than or equal to the loss) can allow the resonator to become an oscillator that does not decay or dampen.

The oscillations of an oscillator are initially stimulated by electronic noise. A block diagram can demonstrate an oscillator in an abstract, easier-to-understand way.

The oscillator is built using an amplifier (transistor that is biased into active/saturation region) or op amp with positive and negative feedback. Noise in the circuit begins the oscillation, and this output is fed back into the input and is filtered along the way. This becomes an oscillation at a single frequency.

Oscillators can be built from RC circuits, LC circuits, or crystal oscillators. RC circuit oscillators tend to be lower frequency oscillators in the audio range. The LC oscillator is often compared to the laser in terms of functionality: the negative (capacitive) reactance and the positive (inductive) reactance cancel at a specific frequency, leaving the circuit with only resistance, and a strong current is achieved. LC oscillators are much more important for RF/microwave purposes. A crystal oscillator produces its frequency through mechanical vibrations and has a much higher Q factor than the other resonator types, which provides greater temperature and frequency stability.
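The frequency at which the two reactances cancel follows from setting X_L = X_C, giving f₀ = 1/(2π√(LC)). A quick check with illustrative component values:

```python
import math

# Resonant frequency of an ideal LC tank: f0 = 1 / (2*pi*sqrt(L*C)).
# At f0 the capacitive and inductive reactances cancel, as described above.

def resonant_frequency(L, C):
    """f0 in Hz for inductance L (henries) and capacitance C (farads)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: 100 nH and 10 pF give an RF-range oscillator.
f0 = resonant_frequency(100e-9, 10e-12)
print(f"f0 = {f0 / 1e6:.1f} MHz")  # ~159 MHz
```

At this frequency the inductive reactance 2πf₀L and the capacitive reactance 1/(2πf₀C) are both 100 Ω and cancel, leaving only resistance.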

Two very important oscillator types for RF/microwave/mmWave circuits are dielectric resonators and SAW (surface acoustic wave) resonators. Dielectric resonators are mainly used as mmWave oscillators to drive antennas. They are generally made of a “puck” of ceramic which oscillates at a certain frequency dependent on its dimensions. Waves are confined inside the material due to an abrupt change in the permittivity. When the waves inside interfere and produce a standing wave, this increase of amplitude creates the resonance effect. SAW resonators are often used in cell phones and have distinct advantages over the LC oscillator or other types due to cost and size.

In a semiconductor laser (laser diode), the source of oscillations is the noise generated by spontaneous emission. Spontaneous emission is the result of recombination of electron and hole pairs within the material which produces photons. This spontaneous emission is how lasers begin their operation, and this is continued by stimulated emission. Stimulated emission is electron hole recombination due to photon energy which also produces a photon. The light emitted by this type of emission is coherent, a characteristic of a laser.

• Deriving Newton’s Lens Equation for a diverging lens May 6, 2020

For a Diverging lens, derive a formula for the output angle with respect to the refractive indexes and input angle. Assume paraxial approximation and thin lens.

For a Diverging lens, construct a derivation of Newton’s lens equation x_o*x_i = f^2.
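For reference, here is a sketch of the algebra in the converging case; the diverging case follows the same steps once sign conventions for f, x_o, and x_i are fixed. Define the Newtonian distances x_o = s_o − f and x_i = s_i − f (measured from the focal points) and substitute into the Gaussian lens equation 1/s_o + 1/s_i = 1/f:

```latex
\frac{1}{x_o + f} + \frac{1}{x_i + f} = \frac{1}{f}
\;\Longrightarrow\; f(x_i + f) + f(x_o + f) = (x_o + f)(x_i + f)
\;\Longrightarrow\; x_o x_i = f^2
```

Expanding both sides, the cross terms f·x_o, f·x_i, and one f² cancel, leaving Newton’s form directly.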

• Pseudomorphic HEMT May 5, 2020

The Pseudomorphic HEMT makes up the majority of High Electron Mobility Transistors, so it is important to discuss this topology. The pHEMT differentiates itself in many ways, including its increased mobility and distinct quantum well shape. The basic idea is to create a lattice mismatch in the heterostructure.

A standard HEMT is a field effect transistor formed through a heterostructure rather than PN junctions. This means that the HEMT is made up of compound semiconductors instead of traditional silicon FETs (MOSFET). The heterojunction is formed when two materials with different band gaps between valence and conduction bands are combined. GaAs (with a band gap of 1.42eV) and AlGaAs (with a band gap of 1.42 to 2.16eV) is a common combination. One advantage of this topology is that the lattice constant is almost independent of the material composition (the fractions of each element represented in the material). An important distinction between the MESFET and the HEMT is that in the HEMT, a triangular potential well is formed, which reduces Coulomb scattering effects. Also, the MESFET modulates the thickness of the inversion layer while keeping the density of charge carriers constant; with the HEMT, the opposite is true. Ideally, the two compound semiconductors grown together have the same or almost similar lattice constants to mitigate the effects of discontinuities. The lattice constant refers to the spacing between the atoms of the material.

However, the pseudomorphic HEMT purposely violates this rule by using an extremely thin layer of one material which stretches over the other. For example, InGaAs can be combined with AlGaAs to form a pseudomorphic HEMT. A huge advantage of the pseudomorphic topology is that there is much greater flexibility when choosing materials, and it provides double the maximum density of the 2D electron gas (2DEG). As previously mentioned, the electron mobility also increases. The image below illustrates the band diagram of this pHEMT. As shown, the discontinuity between the bandgaps of InGaAs and AlGaAs is greater than between AlGaAs and GaAs. This is what leads to the higher carrier density as well as increased output conductance, giving the device higher gain and higher current for more power when compared to the traditional HEMT.

The 2DEG is confined in the InGaAs channel, shown below. Pulse doping is generally utilized in place of uniform doping to reduce the effects of parasitic current. To increase the conduction band discontinuity ΔEc, higher indium concentrations can be used, which requires that the layer be thinner. The indium content tends to be around 15-25% to increase the density of the 2DEG.

• Parameter Analysis of the MESFET, Channel Width Calculation May 4, 2020

Engineering design regularly involves an analysis of the formulae behind the various parameters of a system one is trying to build or improve. Some parameters are static, such as particular qualities of the materials being used. Perhaps a constraint or goal is placed on the system, such as achieving function at a certain frequency or reducing the size as much as possible. Today, many programs exist that can perform complicated calculations for the engineer. Constructing a problem or calculation that produces the desired result, however, may require more attention.

The MESFET uses a contact between n-doped semiconductor material with highly n-doped semiconductor material to form a junction field effect transistor. The great advantage of not using a p-doped semiconductor material is that the transistor can be built without using hole transfer. Since hole transfer is much slower than electron transfer, the MESFET can function much faster than other types of transistors.

For the MESFET, it may not be possible to examine all parameters. Consider first the following:

Potential variation along the channel (notice the similarity of the following to Ohm’s law, V=IR):

Where the resistance along the channel is:

Depletion Width (also referenced in the above formula) under the gate:

Pinch-off Voltage:

Threshold Voltage:

Built-in Potential:

The above formulas alone would be enough to put to use. While constructing a MESFET, it was found that the doping concentration of donor electrons in the channel played an important role. N_D, the donor doping concentration, appears in most of the above formulas. The doping concentration is of particular importance, since it can be directly manipulated. The pinch-off voltage and the donor concentration are directly proportional. By obtaining an estimate (or if the values are known) for the other parameters, it would be possible to perform a parameter sweep of the MESFET system over doping concentration. This method may become critical for optimizing semiconductor device designs.

# MESFET Design Problem

Let’s say we want to calculate the channel thickness of an n-channel GaAs MESFET with a gold Schottky barrier contact. The barrier height (φ_bn) is 0.89 V. The temperature is 300 K. The n-channel doping N_d is 2*10^15 cm^(-3). Design the channel thickness such that V_T = +0.25 V.
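A sketch of the calculation: the built-in potential is V_bi = φ_bn − φ_n with φ_n = (kT/q)ln(Nc/Nd), the required pinch-off voltage is V_p0 = V_bi − V_T, and the channel thickness follows from V_p0 = qN_d a²/(2ε_s). The effective density of states Nc = 4.7×10^17 cm⁻³ and the GaAs permittivity below are assumed textbook values:

```python
import math

# Channel-thickness calculation for the MESFET design problem above.
# Assumed material constants (typical textbook values for GaAs at 300 K):
q = 1.6e-19            # C, electron charge
kT = 0.0259            # eV at 300 K
Nc = 4.7e17            # cm^-3, conduction-band effective density of states
eps = 13.1 * 8.85e-14  # F/cm, GaAs permittivity

phi_bn = 0.89          # V, gold Schottky barrier height
Nd = 2e15              # cm^-3, channel doping
V_T = 0.25             # V, target threshold voltage

phi_n = kT * math.log(Nc / Nd)            # conduction band edge to Fermi level
V_bi = phi_bn - phi_n                     # built-in potential
V_p0 = V_bi - V_T                         # required pinch-off voltage
a = math.sqrt(2 * eps * V_p0 / (q * Nd))  # channel thickness, cm

print(f"V_bi = {V_bi:.3f} V, V_p0 = {V_p0:.3f} V")
print(f"channel thickness a = {a * 1e4:.2f} um")  # ~0.60 um
```

With these assumed constants, the design comes out to a channel thickness of roughly 0.6 microns.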

• GaAs MESFET Designs May 3, 2020

A GaAs MESFET structure was built using Silvaco TCAD:

• Channel donor electrons: 2e17
• Channel thickness: 0.1 microns
• Bottom layer: p-doped GaAs (5 microns thick, 1e15 p doping)
• Gate length: 0.3 microns
• Gate metal work function: 4.77 eV
• Separation between the source and drain electrodes: 1 micron

The IV curve is as follows. Of primary importance are the two bottom curves, which are for a gate voltage of -0.2V and -0.5V. The top curve is 0V, over which would be undesirable for the MESFET operation.

Now, in terms of designing a MESFET, there is a large amount of theory that one may need to grasp to build one from scratch – you would probably first start by building one similar to a more common iteration. That said, there are a number of parameters that one may wish to tune, to name a few: saturation current, threshold voltage, transit frequency, maximum frequency, and pinch-off voltage.

The iteration above does not show a highly doped region under the source and drain contacts. The separation between source and drain may also be increased and the size of the gate decreased.

Channel doping level was found to make a significant difference in overall function. The channel must be doped to a certain level, otherwise the structure may not behave properly as a transistor.

go atlas

Title GaAs MESFET

# Define the mesh

mesh auto
x.m loc = 0 Spac=0.1
x.m loc = 1 Spac=0.05
x.m loc = 3 Spac=0.05
x.m loc = 4 Spac=0.1

# n region

region num=1 bottom thick = 0.1 material = GaAs NY = 10 donor = 2e17

# p region

region num=2 bottom thick = 5 material = GaAs NY = 4 acceptor = 1e15

# Electrode specification
elec num=1 name=source x.min=0.0 x.max=1.0 top
elec num=2 name=gate x.min=1.95 x.max=2.05 top
elec num=3 name=drain x.min=3.0 x.max=4 top

doping uniform conc=5.e18 n.type x.left=0. x.right=1 y.min=0 y.max=0.05
doping uniform conc=5.e18 n.type x.left=3 x.right=4 y.min=0 y.max=0.05

#Gate Metal Work Function
models fldmob srh optr fermidirac conmob print EVSATMOD=1
contact num=2 work=4.77

# specify lifetimes in GaAs and models
material material=GaAs taun0=1.e-8 taup0=1.e-8
method newton

solve vdrain=0.5
LOG outf=proj2mesfet500mVm.log
solve vgate=-2 vstep=0.25 vfinal=0 name=gate
save outf=proj2mesft.str
#Plotting
output band.param photogen opt.intens con.band val.band

tonyplot proj2mesft.str
tonyplot proj2mesfet500mVm.log
quit

• Basic Energy Band Theory May 2, 2020

Band theory is essential in the study of solid state physics. The basic idea tends to center around two bands: the conduction and valence band (for reasons discussed later on). Between the two bands is a forbidden energy level (Energy gap) which depends on the resistivity or conductance of the material. In order to fully understand solid state devices such as transistors or solar cells, this must be discussed.

For a single atom, electrons occupy discrete energy levels. When two atoms join together to form a diatomic molecule (such as hydrogen), their orbitals overlap. The Pauli exclusion principle states that no two electrons can have the same set of quantum numbers (keep in mind that there are four types of quantum numbers). This means that when these two atoms combine, the atomic orbitals must split so that no two electrons have the same energy. For a macroscopic piece of a solid, however, the number of atoms is quite high (on the order of 10^22) and therefore the number of energy levels is also high. For this reason, adjacent energy levels are almost continuous, forming an energy band. The main bands under consideration are the valence band (the outermost band involved in chemical bonding) and the conduction band, because the inner electron bands are so narrow. Band gaps or “forbidden zones” are the leftover energy levels that are not covered by a band.

In order to apply band theory to a solid, the medium must be homogeneous (evenly distributed). The piece of material must also be of considerable size, which is not unreasonable considering the number of atoms in an appreciable piece of a solid. It must also be assumed that electrons do not interact with phonons or photons.

The “density of states” is a function that describes the number of states per unit volume, per unit energy. It is represented by a Probability Density function.

A Fermi-Dirac distribution function demonstrates the probability of a state of energy being filled with an electron. The probability is given below.

The μ is generally expressed as EF, the Fermi energy level or total chemical potential. kT is the familiar thermal energy, the product of the Boltzmann constant and the temperature. From this equation it is clear that at absolute zero temperature, the exponential term goes to infinity for any energy above the Fermi level, driving the occupation probability to zero. This leads to the conclusion that semiconductors behave as insulators at 0 K.

The density of electrons can be calculated by multiplying this value with the density of states function and integrating over all energy.
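The Fermi-Dirac function described above is easy to evaluate directly; note how the occupation above E_F collapses toward zero as kT shrinks:

```python
import math

# Fermi-Dirac occupation probability: f(E) = 1 / (1 + exp((E - EF) / kT)).
# Illustrates the limiting behavior discussed above.

def fermi_dirac(E, EF, kT):
    """Occupation probability for a state at energy E (all values in eV)."""
    return 1.0 / (1.0 + math.exp((E - EF) / kT))

kT = 0.0259  # eV, room temperature
print(fermi_dirac(1.0, 1.0, kT))     # exactly at EF: 0.5
print(fermi_dirac(1.3, 1.0, kT))     # well above EF: nearly 0
print(fermi_dirac(1.3, 1.0, 0.001))  # near 0 K (kT = 1 meV): essentially 0
```

Integrating this probability against the density of states, as described above, yields the electron density.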

Band-gap engineering is the process of changing a material’s band gap. This is usually done to semiconductors by changing the composition of alloys in the material.

• Object Oriented Programming and C#: Dictionaries/Hash Tables May 1, 2020

A “dictionary” in C# is an ADT (abstract data type) that maps “keys” to “values”. Normally with an array, the values within the collection are accessed by index. For the dictionary, there are keys instead of indexes. Another name for a dictionary is a “hash table”, although a distinction can be made in that C#’s hash table is a non-generic type while the dictionary is generic. The namespace required for dictionaries is “System.Collections.Generic”.

The dictionary is initialized much like a list (dynamic array), however the dictionary takes two type parameters (“TKey”, “TValue”): the first is the data type of the key and the second the data type of the value. Similarly to dynamic arrays, entries can be added to the dictionary using the “Add(key, value)” method, and an entry can be removed using the “Remove(key)” method. It is important to note that keys do not have to be integers, unlike indexes; they can be of nearly any data type imaginable. However, a dictionary cannot contain duplicate keys.

The functionality of a dictionary in C# is similar to a physical dictionary. A dictionary contains words and their definitions and analogously, a programming dictionary maps a key (word) to a value (definition).

The following program illustrates adding values to a dictionary. The key is of type integer and the value of type string. The values “one”, “two” and “three” are added with corresponding integer keys.

Much like with arrays, a “foreach” statement can be used to iterate over all the values of a dictionary.

It is important to note that for a hash table, the relationship between a key and its value must be one-to-one. When different keys have the same hash value, a “collision” occurs. One way to resolve a collision is to create a linked list that chains the colliding elements at a single location.

An important concept with hash tables: the speed of a lookup does not depend on the size of the collection. With an array, finding a specific value requires a linear search, which takes a long time if the array is very long. With a hash table, size does not matter because the hashing function runs in constant time. The “ContainsKey()” method can be used to find a specific key without the need for a linear search.

When would you use a dictionary/hash table over a list? Dictionaries can be helpful in instances where indexes have special meaning. A particular use of a dictionary could be to count the words in a text using the “String.Split()” method and adding each word to the dictionary. In this instance, the “foreach” statement could easily be used to iterate over every value and find the number of words. In short, the dictionary maps meaningful keys to values whereas the list simply maps indexes to values.
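The word-counting use case can be sketched as follows; Python’s built-in dict is used here as a stand-in for the C# Dictionary&lt;string, int&gt; described above, with str.split() playing the role of String.Split():

```python
# Word-count sketch using a hash table (Python dict), mirroring the
# C# Dictionary<string, int> approach described above.

def count_words(text):
    counts = {}
    for word in text.split():  # analogous to String.Split() in C#
        counts[word] = counts.get(word, 0) + 1
    return counts

counts = count_words("the quick brown fox jumps over the lazy dog the end")
print(counts["the"])  # 3
print(len(counts))    # 9 distinct words
```

Each word is a meaningful key mapping to its count, which is exactly the "meaningful keys" advantage over a plain list.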

• The Half Wave Dipole Antenna April 30, 2020

The dipole is a type of linear antenna commonly formed from two quarter-wavelength monopole elements, each bent at 90 degrees from the feeding transmission line. Another common size for the dipole is 1.25λ. These sizes will be discussed later.

It is important for beginning the study of the dipole antenna to discuss the infinitesimal dipole. This is a dipole smaller than 1/50 of the wavelength, also known as a Hertzian dipole. It is an idealized component that does not physically exist, although it serves as an approximation: larger antennas can be broken into small segments, each treated as an infinitesimal dipole. The mathematics behind this can be found in “Antenna Theory: Analysis and Design” by Constantine Balanis.

More importantly, three regions of radiation can be defined: the far field (where the shape of the radiation pattern is independent of distance – this is where the radiation pattern is calculated), the reactive near field, and the radiating near field.

As shown in the image, the reactive near field is where the range is less than the wavelength divided by 2π, or roughly less than 1/6 of the wavelength. The electric and magnetic fields in this region are 90 degrees out of phase and do not radiate; the E and H fields must be in phase to propagate. The radiating near field is where the range is between 1/6 of the wavelength and 2D^2 divided by the wavelength, where D is the largest dimension of the antenna. This is also known as the Fresnel zone. Although the radiation pattern is not fully formed, propagating waves exist in this region. For the far field, r must be much, much greater than λ/2π.
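The region boundaries described above are straightforward to compute; the example uses an assumed half-wave dipole at 300 MHz (λ = 1 m, D = 0.5 m):

```python
import math

# Boundaries of the antenna field regions described above, for an antenna
# of largest dimension D at wavelength lam (both in the same units).

def field_regions(D, lam):
    reactive_near = lam / (2 * math.pi)  # reactive near field: r < lam / 2pi
    fresnel_outer = 2 * D**2 / lam       # radiating near field ends at 2D^2/lam
    return reactive_near, fresnel_outer

# Illustrative half-wave dipole at 300 MHz: lam = 1 m, D = 0.5 m.
# (For electrically small antennas, the r >> lambda criterion dominates
# the far-field boundary rather than 2D^2/lam.)
reactive, fresnel = field_regions(0.5, 1.0)
print(f"reactive near field ends at ~{reactive:.3f} m")
print(f"Fresnel zone ends at ~{fresnel:.3f} m")
```

For this dipole, the reactive near field extends to about 0.16 m and the Fresnel zone to 0.5 m.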

The radiation patterns of the dipole antenna are pictured below, in both the E and H planes. The E plane (elevation angle pattern) is pictured on the bottom right and the H plane (azimuthal angle) beside it on the left. The plots are given in dB scale. The radiation patterns can be understood by considering a pen. While facing the pen you can see the full length of the pen, but if you look down on the pen you can only see the tip or end. This is analogous to the dipole antenna, where maximum radiation is broadside to the antenna and minimum radiation is off the ends, leading to the figure-8 radiation pattern. When this radiation pattern is extended to three dimensions, the top-left image is obtained.

• Focal Length of a Submerged Lens April 29, 2020

# Is the focal length of a spherical mirror affected by the medium in which it is immersed? …. of a thin lens? What’s the difference?

Mirrors

A spherical mirror may be either convex or concave. In either case, the focal length for a spherical mirror is one-half the radius of curvature.

The formula for focal length of a mirror is independent of the refractive index of the medium:

Lens

The thin lens equation, including the refractive index of the surrounding material (“air”):

The effect of the refractive index of the surrounding material can be summarized as follows:

• The focal length is inversely proportional to the refractive index of the lens minus the refractive index of the surrounding medium.
• As the refractive index of the surrounding medium increases, the focal length also increases.
• If the refractive index of the surrounding medium is larger than the refractive index of the lens, the incident ray will diverge upon exiting the lens.
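These points follow from the thin-lens (lensmaker’s) equation with the surrounding medium included, 1/f = (n_lens/n_med − 1)(1/r1 − 1/r2). A quick numerical check, with assumed indices of 1.5 for glass, 1.33 for water, and 1.6 for a denser medium:

```python
# Thin-lens focal length with the surrounding medium included:
#   1/f = (n_lens/n_med - 1) * (1/r1 - 1/r2)
# Indices and radii below are illustrative assumed values.

def focal_length(n_lens, n_med, r1, r2):
    return 1.0 / ((n_lens / n_med - 1.0) * (1.0 / r1 - 1.0 / r2))

r1, r2 = 0.10, -0.10  # m, biconvex lens (r2 negative by sign convention)
f_air = focal_length(1.5, 1.0, r1, r2)
f_water = focal_length(1.5, 1.33, r1, r2)
f_heavy = focal_length(1.5, 1.6, r1, r2)   # medium denser than the lens

print(f"in air:   f = {f_air * 100:.1f} cm")
print(f"in water: f = {f_water * 100:.1f} cm")  # longer focal length
print(f"in n=1.6: f = {f_heavy * 100:.1f} cm")  # negative: rays diverge
```

The focal length grows from 10 cm in air to about 39 cm in water, and turns negative (diverging) once the medium is optically denser than the lens.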

• Infinite Lateral Magnification of Lenses and Mirrors April 28, 2020

# Under what conditions would the lateral magnification (m=-i/o) for lenses and mirrors become infinite? Is there any practical significance to such condition?

Magnification of a lens or mirror is the ratio of projected image distance to object distance. Simply put, how much closer does the object appear as a result of the features of the lens or mirror? The object may seem larger or smaller as a result of its projection through a lens or mirror. Take, for instance, positive magnification:

If the virtual image appears further than the real object, there will be negative magnification:

The formula for magnification is the following:

The question then is, how can there be an infinite ratio of image size to object size? Consider the equation for focal length:

For magnification to be infinite, the image distance should be infinite, in which case the object distance is equal to the focal length:

In this case, the magnification is infinite:

The meaning of this case is that the image forms at infinity: the rays emerge parallel, as if the image were coming from very far away, and no image is visible on a screen. A negative magnification means that the image is upside-down.
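This limiting behavior can be seen numerically from the thin-lens equation, 1/o + 1/i = 1/f, solved for the image distance (a focal length of 10 cm is assumed here):

```python
# Thin-lens image distance: 1/o + 1/i = 1/f  =>  i = o*f / (o - f).
# As the object distance o approaches the focal length f, the image
# distance (and hence the magnification m = -i/o) diverges.

def image_distance(o, f):
    return o * f / (o - f)

f = 0.10  # m, illustrative focal length
for o in (0.20, 0.11, 0.101, 0.1001):
    i = image_distance(o, f)
    print(f"o = {o:.4f} m -> i = {i:.2f} m, m = {-i / o:.1f}")
```

Each step of the object toward the focal point multiplies the image distance, and the magnification grows without bound.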

• Focal Length of a Lens as a function of light frequency April 27, 2020

# How does the focal length of a glass lens for blue light compare with that for red light? Consider the case of either a diverging lens or a converging lens.

This question really has three parts:

• Focal Length of a lens
• Effect of light frequency (color)
• Diverging and Converging lens

Focal Length of the Converging and Diverging Lens

For the converging and diverging lens, the focal point has a different meaning. First, consider the converging lens. Parallel rays entering a converging lens will be brought to focus at the focal point F of the lens. The distance between the lens and the focal point F is called the focal length, f. The focal length is a function of the radius of curvature of both sides or planes of the lens as well as the refractive index of the lens. The formula for focal length is below,
(1/f) = (n-1)((1/r1)-(1/r2)).

This formula also works for a diverging lens; however, the directions of the radii of curvature must be taken into account. If, for instance, the center of the circle for one side of the lens is to the left of the lens, one may choose that direction to be positive and the other direction to be negative, as long as one maintains the same sign convention throughout.

If the focal length of a lens is negative, meaning that the focal point is behind the lens, on the side at which the rays entered, this is a diverging lens.

Interaction of Color with Focal Length

The other part of this question deals with how the focal length changes for one color, such as blue, versus another, such as red. The key to this relationship is the refractive index of the lens, as the refractive index varies with the color (i.e. frequency) of the light.

The material from which the lens is made is not known; however, as demonstrated by the following table, the refractive index is consistently higher for shorter-wavelength colors.

Reviewing the focal length formula, it is understood from the inverse proportionality of the equation that as the refractive index increases, the focal length will decrease. Blue has a higher refractive index than red. Therefore, blue will have a smaller focal length than red.
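A quick numerical check using the lensmaker’s formula quoted above, with assumed crown-glass indices of roughly 1.53 for blue and 1.51 for red:

```python
# Lensmaker's equation: 1/f = (n - 1) * (1/r1 - 1/r2).
# n = 1.53 (blue) and n = 1.51 (red) are assumed representative values
# for crown glass; the radii are illustrative.

def focal_length(n, r1, r2):
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

r1, r2 = 0.10, -0.10  # m, biconvex lens (r2 negative by sign convention)
f_blue = focal_length(1.53, r1, r2)
f_red = focal_length(1.51, r1, r2)
print(f"f_blue = {f_blue * 100:.2f} cm, f_red = {f_red * 100:.2f} cm")
```

The blue focal length comes out shorter than the red, consistent with the conclusion above (this wavelength dependence is the origin of chromatic aberration).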

• Object Oriented and C#: Quadratic Roots Program April 26, 2020

The following program is designed to accept three doubles as inputs and prints the roots of a quadratic, whether complex or real. If a non-double is inputted into the program, the program should display “Bad Input”. The program contains two files: a “program” file to run several of the main methods and a “complex” file which creates the class for handling complex numbers and overrides the built in “ToString” method.

The first goal is to initialize the part of the program that handles real roots. The easiest portion is to create a method that reads doubles. It is important that the method uses a nullable type, because the method should return null if a non-double such as a string is passed in. This provides an easy way to use a conditional statement around the “TryParse” method. The “TryParse” method returns a boolean value of true or false. The “if” statement checks whether the return is true and, if so, returns the result; if not, null is returned.

Next, the “getQuadraticString” method is implemented to format the printed result in the form “AX^2+BX+C”. This is also done within the “program” file. Format specifiers are placed within the placeholders to round the printed values to two decimal places if necessary.

The “getRealRoots” method produces the roots of the quadratic, given that they are purely real. First the discriminant (the part of the quadratic formula under the square root symbol) is calculated. Several “if” statements check how many real roots there are, and the method returns that quantity as an integer. For example, if the discriminant is negative, there are no real roots: both “out” variables are set to null and the function returns 0. For a discriminant of 0, the quadratic formula reduces to -B/2A and the second root is null; the return value is again the number of roots (1). It is important to note that an “if-else” chain should end in “else” rather than “else if”, so that all other possibilities are covered.

Within the “main” function, three numbers are taken from the console using the “getDouble” method. An integer value is obtained from the “getRealRoots” method, which states the number of roots; this will be used for the conditional statements. For ease of reading, a string variable is created to store the return value of the “getQuadraticString” method.

Next, an “if” statement is used to print “Bad Input” if any of the a, b, c variables are null. A return statement is included within the “if” statement so that an “else” does not have to be provided; the method simply exits once the statement has completed.

Now the logic for the imaginary numbers must be implemented. The default constructor is shown with default inputs of zero. It doesn’t need any code within it because it inherits the Complex constructor. The “ToString()” method must be overridden because the formatting must be changed to adhere to complex numbers.

In addition, logic must be implemented for the “getImaginaryRoots()” method. The discriminant is calculated the same way as before, however the absolute value is taken. The real part must be calculated separately and the denominator is split for this reason. For clarification, this is the real part of a complex root. The two roots are the same, but complex conjugates.

The “main” function must be updated to reflect the imaginary roots.

The “getQuadraticString()” method is updated as shown. Three pieces of string must be created with several conditions imposed. They begin as empty strings and are filled in. Separating them into parts lets the logic be implemented for when each coefficient is 1 or -1. When C is zero, an empty string will be printed.
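The root-finding logic itself (discriminant test, real pair, or complex-conjugate pair) is language independent; here is a compact Python sketch of the same behavior, using cmath so both cases fall out of one formula:

```python
import cmath

# Python analogue of the quadratic-root logic described above: returns
# both roots, real or complex, with cmath handling a negative
# discriminant automatically (complex-conjugate pair).

def quadratic_roots(a, b, c):
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -3, 2))  # real roots: (2+0j) and (1+0j)
print(quadratic_roots(1, 0, 1))   # complex conjugates: 1j and -1j
```

The C# version described above instead branches on the discriminant’s sign explicitly and formats the complex case through the overridden ToString().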

• E-K Diagrams April 25, 2020

As previously concluded, solids can be characterized by their energy band diagrams. A conductor has valence and conduction bands that are very close together or overlap, with a completely filled valence band and a partially filled conduction band. The “forbidden region” of the conductor is very small, and little energy is required for an electron to move from the valence band to the conduction band. In the presence of an external field, electrons therefore move easily into the conduction band.

For semiconductors, at absolute zero the valence band is also completely full, and the bandgap is typically about 1 eV to 3 eV, although even a bandgap of 0.1 eV could be considered a semiconductor. Therefore, a semiconductor at 0 K is an insulator. Semiconductors are very temperature sensitive, as the subsequent figure illustrates: the resistivity is very high near absolute zero, making the semiconductor behave like an insulator, but at higher temperatures the semiconductor can become quite conductive. At room temperature (300 K), the semiconductor behaves more like a conductor.

Band diagrams alone do not give much information, so it is also necessary to analyze an E-k (energy-momentum) diagram. E is the energy required for an electron to traverse the bandgap. For example, in silicon with a bandgap of 1.1 eV, it would take 1.1 eV of energy for an electron to move from the valence band to the conduction band. Thermal energy is given as E = kT, where T is a given temperature.

For intrinsic semiconductors like Silicon, the structure is crystalline and periodic. The wavefunction (which describes probability of finding an electron) should therefore be of periodic nature (sinusoidal). From the Schrodinger equation, it can be found that the Energy is periodic with k as well. For the diagrams, E is plotted against k.

The borders of the first Brillouin zone are from -π/a to π/a, where a is the lattice constant of the crystal. Since the wavefunction is periodic, we only care about one of the zones. The above figure can be considered the “reduced zone” figure. Sometimes the x axis is given as the momentum rather than the wavenumber, since these differ only by a factor of Planck’s constant. From this diagram, the bandgap energy, the effective masses of electrons and holes, and the density of states can all be read. The effective mass is shown by the curvature of the bands; for example, a heavy-hole band can be identified as the band that is less curved. From the above diagram it is also noticeable that the material has a direct bandgap (such as GaAs). The basic energy gap diagram compares to the E-k diagram in that the maxima and minima correspond, but the original band gap diagram does not give any of the other characteristics. It is for this reason that the E-k diagram is so useful.
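The statement that curvature encodes effective mass is m* = ħ²/(d²E/dk²); a flatter (less curved) band means a heavier carrier. A numerical check on an assumed parabolic band, using the commonly quoted GaAs conduction-band value m* = 0.067 m₀:

```python
# Effective mass from E-k curvature: m* = hbar^2 / (d2E/dk2).
# For a parabolic band E = hbar^2 k^2 / (2 m*), a finite-difference
# second derivative at k = 0 recovers m* exactly.

hbar = 1.0545718e-34     # J*s
m0 = 9.109e-31           # kg, free electron mass
m_eff_true = 0.067 * m0  # assumed GaAs conduction-band effective mass

def E(k):
    """Parabolic band energy (J) for wavenumber k (1/m)."""
    return hbar**2 * k**2 / (2 * m_eff_true)

dk = 1e8  # 1/m, finite-difference step
d2E = (E(dk) - 2 * E(0.0) + E(-dk)) / dk**2  # central difference at k = 0
m_eff = hbar**2 / d2E
print(f"recovered m*/m0 = {m_eff / m0:.3f}")  # 0.067
```

A heavier carrier (larger m*) would give a smaller d²E/dk², i.e. a flatter band, exactly as read off the diagram.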

• The Radar Range Equation April 24, 2020

To derive the RADAR range equation, it is first necessary to define the power density at a distance from an isotropic radiator. An isotropic radiator is a fictional antenna that radiates equally in all directions (azimuthal and elevation angle accounted for). The power density (in watts/sq meter) is given as:

However, of course RADARs are not going to be isotropic, but rather directional. The power density for this can be taken directly from the isotropic radiator with an additional scaling factor (antenna gain). This simply means that the power is concentrated into a smaller surface area of the sphere. To review, gain is directivity scaled by antenna efficiency. This means that gain accounts for attenuation and loss as it travels through the input port of the antenna to where it is radiated into the atmosphere.

To determine the power intercepted by a target, this value can be scaled by another quantity known as RCS (radar cross section), which has units of square meters. The RCS of a target is dependent on three main parameters: interception, reflection, and directivity. The RCS is a function of target viewing angle and therefore is not a constant. In short, the RCS describes how much energy is intercepted by the target, how much is reflected, and how much is directed back toward the receiver. An invisible stealth target would have an RCS of zero. So, to determine received power, the incident power density is scaled by the RCS:

The power density back at the receiver can then be calculated from the reflected power, with the result that received power falls off as range to the fourth power. This means that if the range from radar to target is doubled, the received power is reduced by 12 dB (a factor of 16). Scaling this power density by the antenna’s effective area gives the power received at the radar. However, it is customary to replace this effective area (which is less than the physical area due to losses) with a receive gain term:

The symbol η represents antenna efficiency and is a coefficient between 0 and 1. It is important to note that the RCS value (σ) is an average RCS value, since as discussed RCS is not a constant. For a monostatic radar, the two gain terms can be replaced by a single G² term, because the receive and transmit gains tend to be the same, especially for mechanically scanned array antennas.
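The standard monostatic form of the range equation, Pr = Pt·G²·λ²·σ / ((4π)³·R⁴), can be checked numerically; this is a minimal sketch and the function name and parameter values below are illustrative, not from the post:

```python
import math

def radar_received_power(pt_w, gain_db, freq_hz, rcs_m2, range_m):
    """Monostatic radar range equation: Pr = Pt*G^2*lambda^2*sigma / ((4*pi)^3 * R^4)."""
    c = 3e8
    wavelength = c / freq_hz
    g = 10 ** (gain_db / 10)  # convert gain from dB to linear
    return pt_w * g**2 * wavelength**2 * rcs_m2 / ((4 * math.pi) ** 3 * range_m**4)

# Doubling the range reduces received power by a factor of 16 (about 12 dB)
p1 = radar_received_power(1e3, 30, 10e9, 1.0, 10e3)
p2 = radar_received_power(1e3, 30, 10e9, 1.0, 20e3)
ratio_db = 10 * math.log10(p1 / p2)
```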

• HFSS: Conical Horn Antenna Simulation April 23, 2020

For the following simulation, the solution type is Driven Modal. Driven modal gives solutions in terms of power, as opposed to Driven Terminal which displays results in terms of voltages and currents. The units are set to inches.

The first step is to create the circular waveguide with a radius of .838 inches and a height of three inches:

To make the building process easier, a relative coordinate system is implemented through the Modeler window. The coordinate system is moved up to z = 3. A conical transition region (taper) is built at that origin point. The lower radius is 0.838 and the upper radius is 1.547. The height is 1.227. The coordinate system is then adjusted to be on top of the taper.

The “throat” is created by placing yet another cylinder on top of the taper, with a height of 3.236. Now all the objects are selected and a Boolean unite is performed. All objects can be selected using the shortcut “CTRL + A”. From this point, a single object is obtained and named “Horn_Air”. This can be seen in the project tree on the left.

The coordinate system is displaced back to the standard origin and “pec” is selected as the default material (perfect electrical conductor). This will be used to create the horn wall, shown below. A Boolean subtract is performed between the vacuum parts and the conductive portion to create a hollowed out antenna.

Because the simulation is of a radiating antenna, an air box of some sort must be implemented. In our case, we use a cylindrical radiation boundary. The bottom of the device is chosen for the waveport. Upon assigning the two mode waveport, the coordinate system is redefined for the radiation setup. For the radiation, the azimuthal angle is incremented from 0 to 90 in one 90 degree increment and the elevation angle is incremented from -180 to 180 with a step size of 2:

The simulation is done at 5 GHz with 10 as the maximum number of passes. The S-Matrix data is shown below.

As well as the convergence plot:

The radiation pattern is shown for the gain below:

The plot is in decibels and is swept over the elevation angle. Both the left-hand and right-hand circularly polarized patterns are shown at angles phi = 90 and phi = 0. The two larger curves are the RHCP and the two smaller are the LHCP.

• Object Oriented Programming and C#: Program to Determine Interrupt Levels April 22, 2020

The following is a program designed to detect environmental interrupts based on data inputted by the user. The idea is to generate a certain threshold based on the standard deviation and twenty second average of the data set.

A bit of background first: The standard deviation, much like the variance of a data set, describes the “spread” of the data. The standard deviation is the square root of the variance, to be specific. This leaves the standard deviation with the same units as the mean, whereas the variance has squared units. In simple terms, the standard deviation describes how close the values are to the mean. A low standard deviation indicates a narrow spread with values closer to the mean.

Often, physical data which involves the averaging of many samples of a random experiment can be approximated as a Gaussian or Normal distribution curve, which is symmetrical about the mean. As a real world example, this approximation can be made for the height of adult men in the United States. The mean of this is about 5’10 with a standard deviation of three inches. This means that for a normal distribution, roughly 68% of adult men are within three inches of the mean, as shown in the following figure.

In the first part of the program, the variables are initialized. The value “A” represents the multiple of standard deviations. Previous calculations deemed that the minimum threshold level would be roughly 4 times the standard deviation added to the twenty second average. Two arrays are defined: an array to calculate the two second average which was set to a length of 200 and also an array of length 10 for the twenty second average.

The next part of the program is the infinite “while(true)” loop. The current time is printed to the console so the user is aware of it. Then, the user is prompted to input a minimum and maximum value for a reasonable range of audible values, and these are parsed into integers. Next, the Random class is instantiated and a for loop runs 200 times, storing a random value in the “inputdata_two[]” array on each iteration. The random value is constrained to the max and min values provided by the user. The “Average()” extension method (from the System.Linq namespace) gives an easy means to calculate the two second average from the array.

Next, a foreach statement is used to iterate through every value (10 values) of the twenty second average array and print them to the console. An interrupt is triggered if two conditions are met: the time has incremented to a full 20 seconds and the two second average is greater than the calculated minimum threshold. “Alltime” is set to -2 to reset the value for the next set of data. Once the time has incremented to 20 seconds, a twenty second average is calculated and from this, the standard deviation is calculated and printed to the console.

The rest of code is pictured below. The time is incremented by two seconds until the time is at 18 seconds.

The code is shown in action:

If a high max and min is inputted, an interrupt will be triggered and the clock will be reset:
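The threshold logic described above (minimum threshold = twenty second average plus four standard deviations) can be sketched compactly in Python rather than C#; the array sizes and the multiplier A follow the post, while the function and variable names here are illustrative:

```python
import random
import statistics

A = 4  # multiple of standard deviations, per the original program

def two_second_average(min_val, max_val, n=200):
    # emulate 200 random samples constrained to the user-supplied range
    samples = [random.randint(min_val, max_val) for _ in range(n)]
    return sum(samples) / len(samples)

# collect ten two-second averages -> one twenty-second window
window = [two_second_average(0, 100) for _ in range(10)]
twenty_avg = sum(window) / len(window)
threshold = twenty_avg + A * statistics.pstdev(window)

# an interrupt fires when a new two-second average exceeds the threshold
new_avg = two_second_average(0, 100)
interrupt = new_avg > threshold
```

This is a simplified model of the control flow; the original also tracks elapsed time and resets the clock after each twenty-second window.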

• Object Oriented Programming and C#: Fractions Program April 21, 2020

The following post explains the functionality of a C# program in Visual Studio designed to perform basic operations on fractions, i.e. ratios of whole numbers.

To begin, three namespaces are included using the “using” directive statement.

The “System” namespace is included with every program. The next two must be included to use certain classes. Without the directives in place, these namespaces would have to be written out fully with every usage of the classes that belong to them.

The next bit of code is pictured above. Two integers are created with a “private” access modifier to indicate they can only be used within the Fraction class. Next, the constructor for Fraction is defined and supplied with the two integers. The “this” keyword refers to the current instance of the class and is used to assign one of the inputs (num) to a member of the class. “This” helps distinguish constructor parameters from members of the class, since “this” always refers to members of the current instance. An “if” statement is included to throw an exception when a denominator of zero is supplied. You can always identify a constructor by its lack of a return type.

A second, parameterless constructor is also defined, which chains to the two-argument constructor using the “: this(0, 1)” initializer syntax. This is constructor chaining: when no arguments are supplied, a sensible default fraction of 0/1 is created.

The Reduce function is meant to reduce the fraction to its canonical (simplest) form. It is important to note that the method is private, which means it cannot be used outside the class “Fraction”. The greatest common divisor (gcd) is initialized to zero, and a for loop cycles from 1 up to the value of “denomenator”. The field “denomenator” may be accessed here because the method lives inside the class. Trial division is used to find the gcd: dividing by the loop index and checking for a remainder of zero for both the numerator and denominator shows whether the index is a common divisor. If both conditions are true, the loop index is a common divisor, and the last one found is the greatest. The next step is to divide the numerator and denominator through by this value. For example, if the numerator were 3 and the denominator 6, by the time the loop counter reached three both conditions would return TRUE and the gcd would be set to 3. Both values would then be divided by 3, reducing the fraction to 1/2.
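The trial-division loop can be sketched in Python (the original is C#; the function name here is illustrative):

```python
def reduce(numerator, denominator):
    """Reduce a fraction by finding the greatest common divisor via trial division."""
    gcd = 1
    for i in range(1, abs(denominator) + 1):
        # if i divides both numerator and denominator evenly, it is a common divisor
        if numerator % i == 0 and denominator % i == 0:
            gcd = i  # the last such i found is the greatest
    return numerator // gcd, denominator // gcd

reduce(3, 6)  # → (1, 2)
```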

The next step is to define the properties. Properties allow private variables to be accessed publicly in a controlled way. This is useful when certain data must be protected from arbitrary modification but still needs to be exposed. This is accomplished using “getters” and “setters”. The “value” keyword is automatically provided inside a “setter” and holds the value being assigned to the private variable. In short, “numerator” and “denomenator” are private fields that can only be changed within the class. Encapsulation refers to this control over the scope of members within classes or structs, and properties provide a flexible way to manage their accessibility.

The last method is used to convert a fraction to the double data type. This conversion operator is marked with the “explicit” keyword, meaning a cast is required to invoke it. The result is a returned “fractiondecimal” of data type double.

The following code sections are collapsed using the “#region” directive. By expanding each region, the code can be viewed. Within the first block of code, the custom arithmetic operators are defined, two of which are shown below.

The addition is slightly complicated, because a common denominator must be found. Different implementations of the Fraction class are supplied to the input of the operator method. The fields “numerator” and “denomenator” are accessed through the class and assigned to a variable. A new object (“c”) is instantiated from the Fraction class which is the sum of “a” and “b”. The multiplication custom operator is slightly simpler, because it is straight across multiplication. Additional code is provided to change the sign of both the numerator and denominator if the denominator is negative. The operators for division and subtraction employ similar logic.

The comparison operators are defined using the same common denominator technique. The only difference between each operator method is the symbol used in the “if” statement. Six methods are provided (<, >, ==, !=, >= and <=).

The last bit of code is pictured below. The “ToString” method is inherited by every class and therefore can be overridden. This allows flexibility to define the “ToString” method however you want. In this case, we want a fraction to be printed. The “as” keyword can convert between nullable or reference types and returns null if the conversion is not possible. When this conversion from obj to Fraction is possible, the numerator and denominator are set and the fraction is returned.

• Refractive Index as a Function of Wavelength April 20, 2020

Previously, we discussed how the resultant wavelength and velocity in an optical system depend on the refractive index. What we didn’t explain, however, is that the refractive index itself depends on the wavelength of the incident light. After all, it is easier to change the wavelength of a light wave than to change the material it propagates through. So in fact, the refractive index varies according to the wavelength of the incident wave. If the system is not monochromatic, each frequency component present will see a slightly different index.

As we know from ray optics or geometrical optics, the refractive index determines how a ray travels through an optical system. The wavelength dependence of the refractive index implies that the same material will produce a different transmission angle (or perhaps a completely different result) for two rays of different wavelengths.

Consider the range of refractive indices for several different media as the wavelength and color (i.e. frequency) are varied:

The differences in refractive index for these materials at different wavelengths and frequencies may seem small; however, the difference is enough that rays of different wavelengths will interact slightly differently with optical systems.

Now, what if a ray contained more than one wavelength? Or a blend of all colors? This case is called white light. Since white light contains a sum of many wavelengths and frequencies, each component of white light will behave according to its own refractive index.

The classic example of this is of course the prism.
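The prism’s wavelength-dependent bending can be sketched numerically. A common empirical model for dispersion is the Cauchy approximation n(λ) = A + B/λ²; the coefficients below are roughly those of BK7 glass and are illustrative assumptions, not values from the post:

```python
import math

def cauchy_n(wavelength_um, A=1.5046, B=0.00420):
    # Cauchy approximation n(lambda) = A + B / lambda^2 (coefficients ~ BK7 glass)
    return A + B / wavelength_um**2

def refraction_angle_deg(n, incident_deg):
    # Snell's law for a ray entering from air: sin(theta_t) = sin(theta_i) / n
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) / n))

n_red = cauchy_n(0.656)   # red component of white light
n_blue = cauchy_n(0.486)  # blue component of white light
# blue sees the higher index, so it bends more: theta_t(blue) < theta_t(red)
```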

• Refractive Index, Speed of Light, Wavelength and Frequency April 19, 2020

The relationship between the speed of light in a medium and the refractive index is the following:

Therefore it can be understood that for a medium of higher refractive index, the speed of light in that medium will be slower. Light will not achieve a speed higher than c, approximately 2.998 x 10^8 m/s. When light is traveling at this speed, the refractive index of the medium is 1.00.

Now, what about the wavelength? Interestingly, one might assume that wavelength is the determining factor for color. In fact, this is not the case. Frequency is what defines the color of light, which can range from the invisible infrared, through the visible, to the invisible ultraviolet. In a monochromatic system, the frequency of the light (and therefore its color) stays the same; the velocity and wavelength change with the refractive index.

As the above picture suggests, we might believe that wavelength and frequency are forever tied together. That view is incomplete at best, once we consider that light can travel at more than one speed. Let us review the relationship between wavelength and frequency. The following formula is normally presented for wavelength:

Now, here is the question: does c in this equation correspond to the speed of light in a vacuum, or to the speed of the travelling light wave? Consider: what does the speed of light in a vacuum have to say about the speed of light in water? Not much, which is why we can instead use v to denote the speed of the wave in the medium.

Note that I’ve written the wavelength as a function of the speed of light in the medium. Taking this to its conclusion, we see that the wavelength is not exclusively dependent on frequency: multiple wavelengths may correspond to one frequency. The determining factor in that case is the refractive index, given that the frequency is constant.

Given the wavelength, frequency and refractive index, the speed of the light wave may also be calculated.

Physically, one may picture the frequency as the rate at which wave peaks pass a fixed point. A wave with a longer wavelength must move faster to maintain the same frequency.
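This relationship, v = c/n together with λ = v/f, is easy to verify numerically; the green-light frequency below is an illustrative value:

```python
C_VACUUM = 2.998e8  # speed of light in vacuum, m/s

def wavelength_in_medium(freq_hz, n):
    # v = c/n, and lambda = v/f: frequency (color) is fixed, wavelength shrinks with n
    v = C_VACUUM / n
    return v / freq_hz

f = 5.45e14  # ~green light
lam_vacuum = wavelength_in_medium(f, 1.00)  # ~550 nm in vacuum
lam_water = wavelength_in_medium(f, 1.33)   # same frequency, shorter wavelength in water
```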

The applications and implications of this physical relationship will be explored next.

• Yagi-Uda Antenna/Parasitic Array April 18, 2020

The Yagi-Uda antenna is a highly directional antenna which operates above 10 MHz and is commonly used in satellite communications, as well as with amateur radio operators and as rooftop television antennas. The radiation pattern for the Yagi-Uda antenna shows strong gain in one particular direction, along with undesirable side lobes and a back lobe. The Yagi is similar to the log periodic antenna with a major distinction between the two being that the Yagi is designed for only one frequency, whereas the log periodic is wideband. The Yagi is much more directional, so it provides a higher gain in that one particular direction that it is designed for.

The “Yagi” antenna has two types of elements: the driven element and the parasitic elements. The driven element is the element directly connected to the AC source in the transmitter or receiver. A reflector element (parasitic) is placed behind the driven element in order to split the undesirable back lobe into two smaller lobes. By adding directive parasitic elements in front of the driven element, the radiation pattern becomes stronger and more directional. All of these elements are parallel to each other and are usually half-wave dipoles. The parasitic elements work by absorbing and reradiating the signal from the driven element. The reflector is slightly longer (inductive) than the driven element and the director elements are slightly shorter (capacitive).

It is well known in transmission line theory that a low impedance/short circuit load reflects all power with a 180 degree phase shift (reflection coefficient of -1). From this, a parasitic element can be considered a normal dipole with a short circuit at the feed point. Since the parasitic elements reradiate power 180 degrees out of phase, the superposition of this wave with the wave from the transmitter leads to a complete cancellation of voltage (a short circuit). Due to the inductive effect of the reflector element and the capacitive effect of the director elements, different phase shifts are created by lagging or leading current (ELI the ICE man). This cleverly makes the superposition of the waves constructive in the forward direction and destructive in the backward direction, increasing directivity in the forward direction.

Advantages of the Yagi include high directivity, low cost and high front to back ratio. Disadvantages include increased sizing when attempting to increase gain as well as a gain limitation of 20dB.

• III-V Semiconductor Materials & Compounds April 17, 2020

The Bandgap Engineer’s Periodic Table

In contrast with an elemental semiconductor such as Silicon, III-V semiconductor compounds do not occur in nature and are instead combinations of materials from groups III and V of the periodic table. Silicon, although proven as a functional semiconductor for lower-frequency electronic applications, is unable to perform a number of roles that III-V semiconductors can. This is in large part due to the indirect bandgap of Silicon. Many III-V semiconductor materials and combinations are direct bandgap semiconductors, which allows operation at much higher speeds. Indirect bandgap materials are unable to produce light efficiently.

Ternary and Quaternary III-V

The following list introduces the main III-V semiconductor compounds used today. In a follow-up discussion, ternary and quaternary III-V semiconductors will be covered in greater depth. For now, these may be understood as blends that mix, vary or transition between two or more binary materials. For instance, a transition between GaAs and GaP is described as GaAsxP1-x. This is the compound GaAsP, a blend of both GaAs and GaP; at one end of the composition range it is GaAs, and at the other end it is GaP.

GaAs
GaAs was the first III-V material to play a major role in photonics. The first LED was fabricated using this material in 1961. GaAs is frequently used in microwave frequency devices and monolithic microwave integrated circuits. GaAs is used in a number of optical and optoelectronic near-infra-red range devices. The bandgap wavelength is λg = 0.873 μm.

GaSb
Not long after GaAs was used, other III-V semiconductor materials were grown, such as GaSb. The bandgap wavelength of GaSb λg = 1.70 μm, making it useful for operation in the Infra-red band. GaSb can be used for infrared detectors, LEDs, lasers and transistors.

InP
Similar to GaAs, Indium Phosphide is used in high-frequency electronics, photonic integrated circuits and optoelectronics. InP is widely used in the optical telecommunications industry for wavelength-division multiplexing applications. It is also used in photovoltaics.

GaAsP
An alloy of GaAs and GaP, Gallium Arsenide Phosphide is used for the manufacture of red, orange and yellow LEDs.

InGaAs
Indium Gallium Arsenide is used in high-speed and high-sensitivity photodetectors and sees common use in optical fiber telecommunications. InGaAs is an alloy often written as GaxIn1-xAs when specifying compositions. The bandgap energy is approximately 0.75 eV, which is convenient for detection and transmission at longer optical wavelengths.

InGaAsP
Indium Gallium Arsenide Phosphide is commonly used to create quantum wells, waveguides and other photonic structures. InGaAsP can be lattice-matched well to InP, which is the most common substrate material for photonic integrated circuits.

InGaAsSb
Indium Gallium Arsenide Antimonide has a narrow bandgap (0.5 eV to 0.6 eV), making it useful for the absorption of longer wavelengths. InGaAsSb faces a number of difficulties in manufacture and can be expensive to make, although when these difficulties are avoided, devices (such as photovoltaics) that use it may achieve high quantum efficiency (~90%).

AlGaAs
Aluminum Gallium Arsenide has nearly the same lattice constant as GaAs, but with a larger bandgap, between 1.42 eV and 2.16 eV depending on composition. AlGaAs may be used as the barrier region of a quantum well with GaAs as the inner well.

AlInGaP
AlInGaP sees wide use in the construction of diode lasers and LEDs, particularly in the red through yellow-green range.

GaN
GaN has a wide bandgap of 3.4 eV and sees use in high frequency, high power devices and optoelectronics. GaN transistors operate at higher voltages than GaAs microwave transistors and see possible use in THz devices.

InGaN
InxGa1−xN is another ternary III-V semiconductor that can be tuned for use in optoelectronics from the ultraviolet (see GaN) to infrared (see InN) wavelengths.

AlGaN
AlxGa1−xN is another compound that sees use in LEDs for blue to ultraviolet wavelengths.

AlInGaN
Although AlInGaN is not used much independently, it sees wide use in lattice matching the compounds GaN and AlGaN.

InSb
Indium Antimonide is an interesting compound, given that it has a very narrow bandgap of 0.17 eV and the highest electron mobility of any known semiconductor. InSb can be used in quantum wells and in bipolar transistors operating up to 85 GHz, with field-effect transistors operating at even higher frequencies. It can also be used as a terahertz radiation source.
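The bandgap energies and bandgap wavelengths quoted throughout this list are related by E = hc/λ, which works out to the handy rule λg (μm) ≈ 1.24 / Eg (eV). A quick sketch:

```python
def bandgap_wavelength_um(eg_ev):
    # lambda_g (um) ~= 1.24 / Eg (eV), from E = h*c / lambda
    return 1.240 / eg_ev

bandgap_wavelength_um(1.424)  # GaAs -> ~0.87 um (near-infrared)
bandgap_wavelength_um(0.17)   # InSb -> ~7.3 um (long-wave infrared)
```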

• HFSS – Simulation of a Square Pillar April 16, 2020

The following is an EM simulation of the backscatter of a golden square object. This is by no means a professional achievement, but rather provides a basic introduction to the HFSS program.

The model is generated using the “Draw -> Box” command. The model is placed a distance away from the origin, where the excitation is placed, shown below. The excitation is of spherical vector form in order to generate a monostatic plot.

The basic structure is a square model (10 mm in all three coordinates) with an airbox surrounding it. The airbox is coated with PML radiation boundaries to simulate a perfectly matched layer, which emulates a reflection-free region. This is necessary to simulate radiating structures in an unbounded, infinite domain: the PML absorbs all electromagnetic waves that interact with the boundary. The following image is the plot of the monostatic RCS vs. the incident wave elevation angle.

The subsequent figure was generated by using a “bistatic” configuration and is plotted against the elevation angle.

• Miller Effect April 15, 2020

The Miller Effect is a generally negative consequence for broadband circuitry, because bandwidth is reduced as capacitance increases. The Miller effect is common to inverting amplifiers with negative gain, and Miller capacitance can also limit the gain of a transistor due to its parasitic capacitance. A common way to mitigate the Miller Effect, which causes an increase in equivalent input capacitance, is the cascode configuration: a two-stage amplifier consisting of a common emitter stage feeding into a common base stage. Configuring transistors this way can lead to much wider bandwidth. For FET devices, capacitance exists between the electrodes (conductors), which in turn leads to the Miller Effect. The Miller capacitance is typically calculated at the input, but for high output impedance applications it is important to note the output capacitance as well.

Interesting note: the Miller effect can be used to create a larger capacitor from a smaller one. So in this way, it can be used for something productive. This can be important for designing integrated circuits, where having large bulky capacitors is not ideal as “real estate” must be conserved.
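The capacitance multiplication at work here follows from the standard Miller relations: for an inverting amplifier with gain -|Av|, a feedback capacitance C appears at the input as C(1 + |Av|) and at the output as C(1 + 1/|Av|). A minimal sketch (function names illustrative):

```python
def miller_input_capacitance(c_feedback, av):
    # For an inverting amplifier with gain -|Av|, the feedback capacitor
    # appears multiplied at the input: C_in = C * (1 + |Av|)
    return c_feedback * (1 + abs(av))

def miller_output_capacitance(c_feedback, av):
    # The same capacitor reflected to the output: C_out = C * (1 + 1/|Av|)
    return c_feedback * (1 + 1 / abs(av))

miller_input_capacitance(1e-12, -100)  # 1 pF of feedback looks like ~101 pF at the input
```

This is also why the effect can be used "productively" on an IC: a small physical capacitor across a gain stage behaves like a much larger one.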

• Beamforming April 14, 2020

Beamforming (spatial filtering) is a huge part of Fifth Generation wireless technology. Beamforming is basically using multiple antennas and varying the phase and amplitude of the inputs to these antennas. The result is a directed beam in a specific direction. This is a great method of preventing interference by focusing the energy of the antennas. Constructive and Destructive interference is used to channel the energy and increase the antennas’ directivity. The receiver receives the multitude of waves and depending on the receiver’s location will determine whether there is mostly constructive or destructive interference. Beamforming is not only used in RF wireless communication but also in Acoustics and Sonar.

An important concept to know is that placing multiple radiating elements (antennas) together increases the directivity of the radiation pattern. Putting two antennas side by side creates a main lobe with 3 dB more gain in the forward direction; with four radiating elements this becomes 6 dB (quadruple the gain). Feeding all of the elements with the same signal means the elements still act as one single antenna, but with more forward gain. The major issue here is that you only benefit in one single stationary direction unless the beam can be moved. This is where feeding the antennas with different phases and amplitudes comes in. The number of input signals becomes equal to the number of antennas, and having more separate antennas (and more input signals) creates a more directed antenna pattern. Spatial multiplexing can also be implemented to serve multiple users wirelessly by reusing the same space multiple times.

Using electronic phase shifters at the input of the antennas can decrease cost of driving the elements quite a bit. This is known as a phased array and can steer the beam pattern as necessary but can only point in one direction at a time.
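The constructive/destructive interference described above is captured by the array factor of a uniform linear array, a standard antenna-theory quantity not named in the post; the sketch below sums the element phasors directly (parameters illustrative):

```python
import math

def array_factor_db(n, d_wavelengths, theta_deg, phase_step_deg=0.0):
    """Normalized array factor (dB) of an N-element uniform linear array."""
    # phase difference between adjacent elements seen at angle theta,
    # plus any electronic phase shift applied per element
    psi = (2 * math.pi * d_wavelengths * math.cos(math.radians(theta_deg))
           + math.radians(phase_step_deg))
    # sum N phasors spaced by psi and normalize by N
    re = sum(math.cos(k * psi) for k in range(n))
    im = sum(math.sin(k * psi) for k in range(n))
    mag = math.hypot(re, im) / n
    return 20 * math.log10(max(mag, 1e-12))

# broadside (theta = 90 deg): all elements add in phase -> 0 dB peak
peak = array_factor_db(4, 0.5, 90)
```

Sweeping `phase_step_deg` steers the peak away from broadside, which is exactly what the phase shifters in a phased array do.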

• RF Mixer basics April 13, 2020

Mixers are three port devices that can be active or passive, linear or nonlinear. They are used to modulate (upconvert) a signal to a higher frequency for transmission, or to demodulate (downconvert) a received signal to a lower frequency.

Two major mixer categories are switching and nonlinear. Nonlinear mixers allow for higher frequency upconversion, but are less prevalent due to their unpredictable performance. In the diagram above, the three ports are shown. The RF signal is the product or sum of the IF (intermediate frequency) and LO (Local Oscillator) signal during upconversion. Due to reciprocity, any mixer can be used for either upconversion or downconversion. For a downconversion mixer, the output is the IF and the RF is fed on the left hand side.

The above diagram illustrates the concept of frequency translation. In a receiver, the mixer translates the frequency from a higher RF frequency (frequency that the wave propagated wirelessly through air) to a lower Intermediate frequency. The mixer cannot be LTI; it must be either nonlinear or time varying. The mixer is used in conjunction with a filter to select either upper or lower sideband which are the result of the multiplication of two signals with different frequencies. These new frequencies are the sum or difference of the two frequencies at the two input ports.
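The sum and difference products come straight from the trig identity cos(a)cos(b) = ½[cos(a−b) + cos(a+b)], and this can be demonstrated numerically; the frequencies below are illustrative, not from the post:

```python
import math

f_rf, f_lo = 100.0, 90.0  # Hz, illustrative RF and LO tones
fs = 1000.0               # sample rate, Hz
n = 1000

# ideal multiplying mixer: output is the product of RF and LO
mixed = [math.cos(2 * math.pi * f_rf * k / fs) * math.cos(2 * math.pi * f_lo * k / fs)
         for k in range(n)]

def tone_power(samples, f, fs):
    # correlate against a complex exponential at frequency f (a single DFT bin)
    re = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(samples))
    return math.hypot(re, im) / len(samples)

# energy appears at the difference (10 Hz) and sum (190 Hz), not at 100 or 90 Hz
p_if = tone_power(mixed, 10, fs)
p_sum = tone_power(mixed, 190, fs)
```

A filter after the mixer then keeps only the desired sideband (the 10 Hz "IF" for a downconverter, the 190 Hz product for an upconverter).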

In addition to frequency translation during modulation, RF mixers can also be used as phase comparators, such as in phase locked loops.

To maintain linearity and avoid distortion, the LO input should be roughly 10dB higher than the input RF signal (downconverter). Unfortunately this increases cost and so therein lies the tradeoff between cost and performance.

• High Speed Waveguide UTC Photodetector I-V Curve (ATLAS Simulation) April 12, 2020

The following project uses Silvaco TCAD semiconductor software to build and plot the I-V curve of a waveguide UTC photodetector. The design specifications including material layers are outlined below.

# Simulation results

The structure is shown below:

Forward Bias Curve:

Negative Bias Curve:

Current Density Plot:

Acceptor and Donor Concentration Plot:

Bandgap, Conduction Band and Valence Band Plots:

# DESIGN SPECIFICATIONS

Construct an Atlas model for a waveguide UTC photodetector. The P contact is on top of layer R5, and N contact is on layer 16. The PIN diode’s ridge width is 3 microns. Please find: The IV curve of the photodetector (both reverse biased and forward bias).

The material layers and ATLAS code is shown in the following PDF: ece530proj1_mbenker

• VHF and UHF April 11, 2020

The RF and microwave spectrum can be subdivided into many bands of varying purpose, shown below.

On the lower frequency end, VLF (Very Low Frequency) tends to be used in submarine communication while LF (Low Frequency) is generally used for navigation. The MF (Medium Frequency) band is noted for AM broadcast (see posts on Amplitude modulation). The HF (shortwave) band is famous for use by HAM radio enthusiasts. The reason for the widespread usage is that HF does not require line of sight to propagate, but instead can reflect from the ionosphere and the surface of the earth, allowing the waves to travel great distances. VHF tends to be used for FM radio and TV stations. UHF covers the cellphone band as well as most TV stations. Satellite communication is covered in the SHF (Super High Frequency) band.

Regarding UHF and VHF propagation, line of sight must be achieved in order for the signals to propagate uninhibited. With increasing frequency comes increasing attenuation. This is especially apparent when dealing with 5G nodes, which are easily attenuated by buildings, trees and weather conditions. 5G uses bands within the UHF, SHF and EHF ranges.

Speaking of line of sight, the curvature of the earth must be taken into account.

The receiving and transmitting antennas must be visible to each other. This is the most common form of RF propagation. Twenty-five miles (sometimes 30 or 40) tends to be the maximum range of line-of-sight propagation (the radio horizon). The higher the frequency of the wave, the less bending or diffraction occurs, which means the wave will not propagate as far. Propagation distance is a strong function of antenna height: increasing the height of an antenna by 10 feet is like doubling its output power. Impedance matching should be employed at the antennas and feedlines, as losses increase dramatically with frequency.
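The dependence of range on antenna height can be sketched with the commonly quoted 4/3-earth radio-horizon approximation, d (miles) ≈ 1.415·√h (feet); the constant and the heights below are illustrative assumptions, not figures from the post:

```python
import math

def radio_horizon_miles(h_feet):
    # 4/3-earth approximation: d (miles) ~= 1.415 * sqrt(antenna height in feet)
    return 1.415 * math.sqrt(h_feet)

def max_los_range_miles(h_tx_feet, h_rx_feet):
    # total line-of-sight path: the sum of both antennas' radio horizons
    return radio_horizon_miles(h_tx_feet) + radio_horizon_miles(h_rx_feet)

max_los_range_miles(100, 30)  # a 100 ft tower to a 30 ft antenna -> ~22 miles
```

Note how the square root rewards height: quadrupling antenna height only doubles the horizon distance.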

Despite small wavelengths, UHF signals can still propagate through buildings and foliage but NOT the surface of the earth. One huge advantage of using UHF propagation is reuse of frequencies. Because the waves only travel a short distance when compared to HF waves, the same frequency channels can be reused by repeaters to re-propagate the signal. VHF signals (which have lower frequency) can sometimes travel farther than what the radio horizon allows due to some (limited) reflection by the ionosphere.

Both VHF and UHF signals can travel long distances through the use of “tropospheric ducting”. Ducting occurs when a temperature inversion changes the index of refraction of a layer of the troposphere. This bends the signals, allowing them to propagate farther than usual.

• P-I-N Junction Simulation in ATLAS April 10, 2020

Introduction to ATLAS

ATLAS by Silvaco is a powerful tool for modeling and simulating a great number of electronic and optoelectronic components, particularly those related to semiconductors. Electrical structures are developed using scripts, which are simulated to display a wide range of parameters, including solutions to equations that would otherwise require extensive calculation.

P-I-N Diode

The performance of the PN junction diode typically falls off at higher frequencies (~3 GHz), where the depletion layer becomes very small. To operate beyond that point, an intrinsic semiconductor layer is added between the p-doped and n-doped semiconductors to extend the depletion layer, allowing a working junction structure in the RF domain and into the optical domain. The following file, a P-I-N junction diode, is an example provided with ATLAS by Silvaco. The net doping regions are, as expected, at either end of the PIN diode. This structure is 10 microns by 10 microns.
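The point about the thin depletion layer can be made quantitative with the standard depletion approximation. In an abrupt PN junction the depletion width shrinks as doping rises, whereas in a P-I-N diode the depleted region spans at least the intrinsic layer, so its width is a design parameter. A minimal sketch (silicon permittivity and the specific doping/intrinsic-layer numbers are illustrative assumptions, not taken from the ATLAS example):

```python
import math

# Physical constants (SI units)
q = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_si = 11.7 * eps0   # silicon permittivity (illustrative material choice)

def pn_depletion_width(Na, Nd, Vbi, V=0.0):
    """Zero-bias-capable depletion width (m) of an abrupt PN junction.
    Doping in m^-3; V is the applied bias (negative for reverse bias)."""
    return math.sqrt(2 * eps_si * (Vbi - V) / q * (1 / Na + 1 / Nd))

# Heavily doped junction: the depletion layer is only tens of nanometers,
# so junction capacitance is large and high-frequency response suffers.
W_pn = pn_depletion_width(Na=1e24, Nd=1e24, Vbi=0.8)
# In a P-I-N diode the depletion region spans at least the intrinsic
# layer, so its width is set by design rather than by doping alone:
W_i = 2e-6  # a 2-micron intrinsic layer (illustrative)
print(f"PN depletion width:   {W_pn * 1e9:.1f} nm")
print(f"PIN depletion width: >= {W_i * 1e6:.1f} um")
```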

The code used to create this structure is depicted below.

The cutline tool is used through the center of the PIN diode after simulating the code. The Tonyplot tool allows for the plotting of a variety of parameters, such as electric field, electron fermi level, net doping, voltage potential, electron and hole concentration and more.

• Introduction to Electro-Optic Modulators April 9, 2020

Electro-optics is a branch or topic in photonics that deals with the modulation, switching and redirection of optical signals. These functions are produced through the application of an electric field, which alters the optical properties of a material, such as the refractive index. The refractive index refers to the speed of light propagation in a medium relative to the speed of light in a vacuum.

Modulators vs. Switches

In a number of situations, the same device may function as both a modulator and a switch. One factor determining whether a device is better suited as a switch or a modulator is the strength of the effect that an electric field has on the device. If the device’s primary role is to impress information onto a light-wave signal by temporarily varying the signal, it is referred to as a modulator. A switch, on the other hand, either changes the direction or spatial position of light or turns it off completely.

# Theory of Operation

Electro-optic Effect

The electro-optic effect presumes the dependence of the refractive index on the applied electric field. The change in refractive index, although small, allows for various applications. For instance, an electric field may be applied to a lens and, depending on the material and the applied field, the focal length of the lens can change. Other optical instruments that utilize this effect may also see use, such as a prism. A very small adjustment to the refractive index may still produce a delay in the signal large enough to detect; if information was encoded in the delay imposed on the signal, the delay can be phase demodulated at the receiving end.

Electroabsorption

Electroabsorption is another effect that is used to modify the optical properties of a material by the application of an electric field. An applied electric field may increase the bandgap of the optical semiconductor material, turning the material from optically transparent to optically opaque. This process is useful for making modulators and switches.

Kerr Effect and Pockels Effect

The Pockels Effect and the Kerr Effect both account for the change in refractive index through the application of an electric field. The Kerr Effect states that this change is nonlinear (quadratic in the field), while the Pockels Effect states that it is linear. Although the Pockels Effect dominates in electro-optic modulator design, both are applied in many situations. The linear electro-optic effect exists only in crystals without inversion symmetry. The design of electro-optic modulators or switches requires special attention to the waveguide material and how the electric field interacts with the material. Common materials (which also maintain large Pockels coefficients) are GaAs, GaP, LiNbO3, LiTaO3 and quartz. The Kerr Effect is relatively weak in commonly used waveguide materials.
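The linear (Pockels) effect above can be put into numbers. For a field E, the index change is |Δn| = ½ n³ r E, and a transverse phase modulator of length L and electrode gap d needs a half-wave voltage Vπ = λ d / (n³ r L) to produce a π phase shift. A sketch using typical textbook values for LiNbO3 (nₑ ≈ 2.2, r33 ≈ 30.8 pm/V; the gap and length are illustrative assumptions):

```python
n = 2.2          # unperturbed extraordinary index of LiNbO3 (textbook value)
r33 = 30.8e-12   # Pockels coefficient, m/V (textbook value)
lam = 1.55e-6    # wavelength, m (telecom C-band)
d = 10e-6        # electrode gap, m (illustrative)
L = 1e-2         # interaction length, m (illustrative)

def pockels_delta_n(E):
    """Linear electro-optic index change: |dn| = 0.5 * n^3 * r * E."""
    return 0.5 * n**3 * r33 * E

# Half-wave voltage: voltage giving a pi phase shift over the length L.
V_pi = lam * d / (n**3 * r33 * L)

print(f"dn at 1 V/um: {pockels_delta_n(1e6):.2e}")  # on the order of 1e-4
print(f"V_pi: {V_pi:.2f} V")                        # a few volts
```

The tiny Δn (~10⁻⁴) is exactly the “very small adjustment” discussed earlier: accumulated over a centimeter of waveguide it still yields a full π of phase, which is why practical LiNbO3 modulators have drive voltages of only a few volts.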

# Properties of the Electro-Optic Modulator

Modulation Depth

Important for both modulators and switches is the modulation depth, also known as the modulation index. Modulation depth applies to several types of optical modulators, such as intensity modulators, phase modulators and interference modulators. The modulation depth may be conceptually understood as the relative strength of the effect applied to the signal. In other words, is the modulation very noticeable? Is it a strong modulation or a weak one?

Bandwidth

The bandwidth of the modulator is critically important as it determines what range of signal frequencies may be modulated onto the optical signal. Switching time or switching speed may be equally applied to an optical switch.

Insertion Loss

Insertion loss of optical modulators and switches is a form of optical power loss and is expressed in dB. Insertion loss often means the system requires more electrical power; it does not directly degrade the modulation or switching function of the device.
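The dB figure follows directly from the input and output optical powers, IL = 10·log₁₀(P_in / P_out). A one-line sketch:

```python
import math

def insertion_loss_db(p_in_mw: float, p_out_mw: float) -> float:
    """Insertion loss in dB: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

# A device that passes 2.5 mW of a 5 mW input has ~3.01 dB insertion loss,
# i.e. it eats half the optical power.
print(f"{insertion_loss_db(5.0, 2.5):.2f} dB")
```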

Power Consumption

Apart from the applied electric field, a modulator or switch also needs its own power supply. The amount of power required increases with modulation frequency. A common figure of merit is the drive power per unit bandwidth, typically expressed in milliwatts per megahertz.

References: [1], [4], [6]

• Optical System Design using MATLAB April 8, 2020

Previously featured was an article that derived a matrix formulation of an equation for a thick lens. This matrix equation, it was said, can be used to build a variety of optical systems. This will be undertaken using MATLAB. One of the great parts of using a matrix formula in MATLAB is that essentially any known parameter in the optical system can not only be altered directly, but a parameter sweep can be used to see how the parameter will affect the system. Parameters that can be altered include the radius of curvature of the lens, the thickness of the lens or distance between two lenses, wavelength, incidence angle, refractive indices and more. You could also have MATLAB solve for a parameter such as the radius of curvature, given a desired angle. All of these parameters can be varied and the results can be plotted.

Matrix Formation for Thick Lens Equation

The matrix equation for the thick lens is modeled below:

Where:

• nt2 is the refractive index beyond surface 2
• αt2 is the angle of the exiting or transmitted ray
• Yt2 is the height of the transmitted ray
• D2 is the power of curvature of surface 2
• D1 is the power of curvature of surface 1
• R1 is the radius of curvature of surface 1
• R2 is the radius of curvature of surface 2
• d1 is the thickness of the lens or distance between surface 1 and 2
• ni is the refractive index before surface 1
• αi is the angle of the incident ray
• Yi1 is the height of the incident ray

The following plots show a parameter sweep on a number of these variables. The following attachment includes the code that was used for these calculations and plots: optics1hw
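The matrix machinery and parameter sweep described above can be sketched outside MATLAB as well. In one common convention, the thick-lens system matrix acting on the column vector (n·α, y) is M = R₂·T·R₁, with refraction matrices built from the surface powers D₁ = (n_lens − n_i)/R₁ and D₂ = (n_t2 − n_lens)/R₂ and a translation across the lens thickness d₁. The exact sign convention of the original derivation is not recoverable from this page, so this is an assumed (but standard) form:

```python
import numpy as np

def thick_lens_matrix(n_i, n_l, n_t2, R1, R2, d1):
    """System matrix M such that [n_t2*a_t2; y_t2] = M @ [n_i*a_i; y_i1].
    Lengths in consistent units (mm here); assumed sign convention."""
    D1 = (n_l - n_i) / R1        # power of surface 1
    D2 = (n_t2 - n_l) / R2       # power of surface 2
    Ref1 = np.array([[1.0, -D1], [0.0, 1.0]])   # refraction at surface 1
    Ref2 = np.array([[1.0, -D2], [0.0, 1.0]])   # refraction at surface 2
    T = np.array([[1.0, 0.0], [d1 / n_l, 1.0]]) # translation through lens
    return Ref2 @ T @ Ref1

# Parameter sweep over the radius of curvature of surface 1, tracing a
# single incident ray (angle 0.01 rad, height 5 mm) through a glass lens:
for R1_mm in (50.0, 100.0, 200.0):
    M = thick_lens_matrix(n_i=1.0, n_l=1.5, n_t2=1.0,
                          R1=R1_mm, R2=-R1_mm, d1=5.0)
    n_a, y = M @ np.array([1.0 * 0.01, 5.0])
    print(f"R1 = {R1_mm:5.1f} mm -> exit n*angle {n_a:.5f}, height {y:.3f} mm")
```

A useful sanity check on any such ray matrix in this convention is that its determinant is 1, which holds for each refraction and translation factor individually.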

• HEMT – High Electron Mobility Transistor April 7, 2020

One of the main limitations of the MESFET is that although this device extends well into the mmWave range (30 to 300 GHz or the upper part of the microwave spectrum), it suffers from low field mobility due to the fact that free charge carriers and ionized dopants share the same space.

To demonstrate the need for HEMT transistors, let us first consider the mobility of the GaAs compound semiconductor. As shown in the picture, with decreasing temperature, Coulomb scattering becomes prevalent as opposed to phonon (lattice vibration) scattering. For an n-channel MESFET, the main electrostatic Coulomb force is between positively ionized donor atoms and electrons. As shown, the mobility is heavily dependent on doping concentration: Coulomb scattering effectively limits mobility. In addition, decreasing the length of the gate in a MESFET will increase Coulomb scattering due to the need for a higher doping concentration in the channel. This means that for an effective device, the separation of free and fixed charge is needed.
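The competition between the two scattering mechanisms above is usually combined via Matthiessen's rule, 1/μ = 1/μ_phonon + 1/μ_Coulomb, where phonon-limited mobility falls roughly as T^(-3/2) and ionized-impurity-limited mobility rises as T^(3/2) but drops with doping. A qualitative sketch (the prefactors are arbitrary illustrative numbers, not measured GaAs data):

```python
def mu_phonon(T):
    """Phonon-limited mobility, ~T^-3/2 (textbook trend, arbitrary scale)."""
    return 4.0e8 * T**-1.5          # cm^2/(V*s)

def mu_coulomb(T, Nd):
    """Ionized-impurity-limited mobility, ~T^3/2 / Nd (textbook trend)."""
    return 2.0e18 * T**1.5 / Nd     # cm^2/(V*s), Nd in cm^-3

def mu_total(T, Nd):
    """Matthiessen's rule: 1/mu = 1/mu_phonon + 1/mu_coulomb."""
    return 1.0 / (1.0 / mu_phonon(T) + 1.0 / mu_coulomb(T, Nd))

# Raising the channel doping drags the total mobility down -- the MESFET's
# dilemma, since short gates demand exactly that higher doping:
for Nd in (1e16, 1e17, 1e18):
    print(f"Nd = {Nd:.0e} cm^-3 -> mu = {mu_total(300.0, Nd):8.0f} cm^2/Vs")
```

This is precisely why the HEMT separates the donors from the channel: the 2DEG sees the phonon limit, not the Coulomb one.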

A heterojunction consisting of n+ AlGaAs and p- GaAs material is used to combat this effect. A spacer layer of undoped AlGaAs is placed in between the materials. In a heterojunction, materials with different bandgaps are placed together (as opposed to a homojunction where they are the same).

This formation leads to the confinement of electrons from the n+ layer in quantum wells, which reduces Coulomb scattering. An important distinction between the HEMT and the MESFET is that the MESFET (like all FETs) modulates the channel thickness, whereas in a HEMT the density of charge carriers in the channel is changed but not the thickness. In other words, applying a voltage to the gate of a HEMT changes the density of free electrons: it increases with a positive voltage and decreases with a negative voltage. The channel is composed of a 2D electron gas (2DEG). The electrons in the gas move freely without obstruction, leading to high electron mobility.

HEMTs are generally packed into MMIC chips and can be used for RADAR applications, amplifiers (small signal and PAs), oscillators and mixers. They offer low noise performance for high frequency applications.

The pHEMT (pseudomorphic HEMT) is an enhancement to the HEMT which features structures with different lattice constants (conventional HEMTs use roughly the same lattice constant for both materials). This allows materials with wider bandgap differences and generally better performance.

• Off Topic: Planet Earth – Climates and Deserts April 6, 2020

The following post is an off-topic discussion of planet earth, consisting of miscellaneous topics involving climate types and deserts.

We can begin our study of the planet earth by discussing different types of sand dunes. Dunes are found wherever sand is blown around, as sand dunes are the product of Aeolian processes in which wind erodes loose sand. There are five main types: Barchan, Star, Parabolic, Transverse and Longitudinal, though these sometimes go by other names. These dune types are the product of wind direction. With Barchan dunes, the wind is predominantly in one direction, which leads to the development of a crescent-shaped dune. The shape is convex and the “horns” point in the direction of the wind. The other two types of dunes formed by wind in one direction are parabolic and transverse dunes. Parabolic dunes are similar to Barchan dunes, although the “horns” point opposite to the direction of the wind. The key defining feature of this dune type is the presence of vegetation and the fact that they are affected by “blowouts”, which is erosion of the vegetated sand.

As shown above, transverse dunes are also quite similar to barchans, but have wavy ridges instead of a crescent shape. The ridges are at right angles to the wind direction. Sand dunes formed by wind going in multiple directions are either Linear/Longitudinal or Star dunes. Star dunes are the result of wind moving in many directions, whereas Longitudinal dunes are formed where wind converges toward a single point, forming ridges parallel to the direction of the winds.

An important term concerning dunes is “saltation”. Saltation is the rolling and bouncing of sand grains due to wind. The distinction between saltation, creep and suspension is that saltating grains follow a parabolic trajectory, though all three are wind-driven transport processes.

Within hot deserts (as opposed to cold deserts), it is common to find structures such as mesas and buttes.

From left to right in the image, the difference between each type of landform is apparent. The pinnacle (or spire) is the most narrow. It is important to note that all of these desert structures are formed by not only wind, but also water (and heat). In addition, a desert surface is generally made of sand, rock and mountainous formations.

An important feature of deserts is desert pavement. This sheet-like surface of packed rock particles forms when wind or water has removed the finer sand, a very slow process. There are several theories as to why desert pavement exists, including intermittent removal of sand by wind and later rain, or possibly shrinking and swelling of clay.

Another concept related to sand erosion is deflation, defined as the removal of loose sand from soil by wind.

An important characteristic of deserts is the extreme temperature of the region. During the day, (hot) deserts are hot, as the heat from the sun is absorbed by the sand at the surface. This raises the temperature of the ground due to the lack of water near the surface. If water were present near the surface, most of the heat would go into evaporating the water. However, even hot deserts are cold at night because the dry surface does not store heat as well as a moist surface. Since water vapor is a greenhouse gas (and there is little water vapor in desert air), infrared radiation is lost to outer space, which contributes to the cold night temperatures.

Ventifacts, pictured below, are stones shaped by wind erosion. They are commonly found in arid climates with very little vegetation and feature strong winds. This is because vegetation often interferes with particle transport.

An inselberg, as its German name implies, is a type of mountain that is isolated and tends to be surrounded by sand. The area around an inselberg tends to be relatively flat, another defining characteristic of the structure. The word “Insel” means island, which reinforces this concept.

A playa lake is a temporary body of water, also referred to as a dry lake. Playas are created whenever water collects in a depression; when the evaporation rate exceeds the rate of incoming water, the lake dries up. This tends to leave a buildup of salt.

An interesting piece of information about deserts is that they tend to be located at 30-degree latitudes in both the northern and southern hemispheres. At the equator there is a low-pressure zone due to direct sunlight, whereas at the 30-degree points there is high pressure, which leads to dry weather. At the equator, the climate tends to be relatively stable with heavy rainfall. The sinking of air is what leads to these deserts, so in that way high-pressure regions are very important to the development of deserts. The world’s largest hot desert (H climate) is the Sahara, and the largest cold desert (K) is Antarctica. The major difference between a BW (arid) climate and a BS (semiarid) climate is the amount of precipitation: less than ten inches indicates an arid climate, while 10-20 inches generally indicates semiarid.