All Posts on one Page

POST ARCHIVE:

  • IMD3: Third Order Intermodulation Distortion September 11, 2020

    We’ll begin a discussion on the topic of analog system quality. How do we measure how well an analog system works? One overly simplistic answer is to say that power gain determines how well a system operates, but this is not sufficient. Instead, we must analyze the system to determine how well it works as intended, which may include the gain of the fundamental signal. Whether it is an audio amplifier, an acoustic transducer, a wireless communication system or an optical link, the desired signal (either transmitted or received) needs to be distinguishable from the system noise. Noise, although situationally problematic, can usually be averaged out. The presence of other signals, however, cannot. This raises the question: which other signals could we be speaking of, if there is supposed to be only one signal? The answer is that the fundamental signal also comes with second order, third order, fourth order and higher order harmonic and intermodulation distortion products, which cannot be averaged out the way noise can. Consider the following plot:

    In such systems we primarily talk about Third Order Intermodulation Distortion, or IMD3. Unlike the second and fourth order products, the third order intermodulation products are found in the same spectral region as the first order fundamental signals. Second and fourth order distortion can be filtered out using a bandpass filter around the in-band region. Note that the fifth and seventh order intermodulation distortion can also cause an issue in-band, although these signals are usually much weaker.
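
    To make the in-band problem concrete, here is a minimal Python sketch of where two-tone intermodulation products land; the tone frequencies are arbitrary examples, not from a particular system:

    # Two-tone test: where do the intermodulation products land?
    f1, f2 = 1.000e9, 1.010e9   # Hz, example tones 10 MHz apart

    # Second order products fall far out of band:
    imd2 = [f2 - f1, f1 + f2]
    # Third order products fall right next to the fundamentals:
    imd3 = [2*f1 - f2, 2*f2 - f1]

    print("IMD2 (GHz):", [f/1e9 for f in imd2])   # [0.01, 2.01] -- easily filtered
    print("IMD3 (GHz):", [f/1e9 for f in imd3])   # [0.99, 1.02] -- in band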

    Consider the use of a radar system. If a return signal is expected in a certain band, we need to be able to distinguish the actual return from IMD3; otherwise we may not be able to trust our result. We will discuss next how IMD3 is avoided.

  • Mode Converters and Spot Size Converters September 10, 2020

    Spot size converters are important in photonic integrated circuits wherever coupling is required between two different waveguide sizes or shapes. The most obvious place to find a spot size converter is between a PIC waveguide and a fiber coupling lens.

    Spot size converters feature tapered layers on top of a ridge waveguide, for instance, to gradually transform the mode while minimizing coupling loss.

    The RSoft example below shows how an optical path is converted from a narrower path (such as a waveguide) to a wider path (which could be for a fiber).

    While the following simulation is designed in Silicon, similar structures are realized in other platforms such as InP or GaAs/AlGaAs.

    RSoft Beamprop simulation, demonstrating conversion between two mode sizes. Optical power loss is calculated in the simulation for the structure.

    rsoft13.2

    This is the 3D structure. Notice that the red section carries the narrower optical path, and this section is tapered to a wider path.

    rsoft13.1

     The material layers are shown:

    rsoft13.3

    Structure profile:

    rsoft13.5

  • Discrete-Time Signals and System Properties September 8, 2020

    First, a comparison between Discrete-Time and Digital signals:

    Discrete-Time: The independent variable (most commonly time) is represented by a sequence of numbers at a fixed interval.
    Digital: Both the independent variable and the dependent variable are represented by sequences of numbers at a fixed interval.

    Discrete-Time and Digital signal examples are shown below:

    Discrete-Time Systems and Digital Systems are defined by their inputs and outputs: both are Discrete-Time Signals for the former, and both are Digital Signals for the latter.

    Discrete-Time Signals

    A Discrete-Time Signal x[n] is a sequence defined for all integers n.

    Unit Sample Sequence:
    𝜹[n] = 1 at n = 0, 0 otherwise.

    Unit Step:
    u[n] = 1 for n >= 0, 0 otherwise.

    Any sequence can be written as a weighted sum of shifted unit samples:
    x[n] = a1*𝜹[n-1] + a2*𝜹[n-2] + …
    where ak is the magnitude at integer n = k.

    Exponential & Sinusoidal Sequences

    Exponential sequence: x[n] = A*𝞪^n.
    Where 𝞪 is complex, with A = |A|e^(j𝜙) and 𝞪 = |𝞪|e^(jω0):
    x[n] = |A||𝞪|^n e^(j(ω0*n + 𝜙)) = |A||𝞪|^n (cos(ω0*n + 𝜙) + j*sin(ω0*n + 𝜙)).
    Complex and sinusoidal sequences are unique over -𝝅 < ω0 < 𝝅 or 0 < ω0 < 2𝝅.

    Exponential sequences for given 𝞪 (complex 𝞪 left, real 𝞪 right):

    Periodicity: x[n] = x[n+N] for all n (definition). Period = N.

    Sinusoid: x[n] = A*cos(ω0*n + 𝜙) = A*cos(ω0*n + ω0*N + 𝜙).
    Test: ω0*N = 2𝝅k (k is an integer).

    Exponential: x[n] = e^(jω0(n+N)) = e^(jω0*n).
    Test: ω0*N = 2𝝅k (k is an integer).

    System Properties

    System: applied transformation y[n] = T{x[n]}.

    Memoryless Systems:

    The output y[n] depends only on the input x[n] at that same index n (no time delay or advance).

    Linear Systems: adherence to superposition, i.e. the additive property and the scaling property.

    Additive property: where y1[n] = T{x1[n]} and y2[n] = T{x2[n]},
    y1[n] + y2[n] = T{x1[n] + x2[n]}.

    Scaling property: T{a*x[n]} = a*y[n].

    Time-Invariant Systems:

    A time shift of the input causes an equal time shift of the output: T{x[n-M]} = y[n-M].

    Causality:

    The system is causal if the output y[n] depends only on x[n+M] for M <= 0 (present and past inputs).

    Stability:

    A bounded input x[n] produces a bounded output y[n]: both stay below some finite maximum. This must hold for all values of n.

    Linear Time-Invariant Systems

    An LTI system has two properties: it is both linear and time-invariant.

    The “response” hk[n] describes how the system behaves to an impulse 𝜹[n-k] occurring at n = k.

    • Convolution Sum: y[n] = x[n]*h[n] = Σ x[k]h[n-k] over all k.

    Performing Discrete-Time convolution sum:

    1. Identify the bounds of x[k] (where x[k] is non-zero) as N1 and N2.
    2. Determine an expression for x[k]h[n-k].
    3. Solve for y[n] = Σ x[k]h[n-k] over k = N1 to N2 (see the sketch below).
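
    As a concrete illustration, the following is a minimal numpy sketch of the convolution sum; the sequences are arbitrary examples:

    import numpy as np

    # Example finite-length input and impulse response (arbitrary values)
    x = np.array([1.0, 2.0, 0.5])     # non-zero for n = 0..2
    h = np.array([1.0, 0.5, 0.25])    # non-zero for n = 0..2

    # Convolution sum: y[n] = sum over k of x[k]*h[n-k]
    y = np.convolve(x, h)
    print(y)    # length is len(x) + len(h) - 1 = 5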

    General solution for exponential (else use tables):

    Graphical solution: superposition of responses hk[n] for corresponding input x[n].

    LTI System Properties

    As LTI systems are described by convolution…

    LTI is commutative: x[n]*h[n] = h[n]*x[n].

    … is additive: x[n]*(h1[n]+h2[n]) = x[n]*h1[n] + x[n]*h2[n].

    … is associative: (x[n]*h1[n])*h2[n] = x[n]*(h1[n]*h2[n]).

    LTI is stable if the sum of the impulse response samples is finite: Bh = Σ|h[k]| < ∞.

    … is causal if h[n] = 0 for n < 0 (causality definition).

    Finite-duration Impulse response (FIR) systems:

    The impulse response h[n] has a limited number of non-zero samples, so stability (above) is simple to determine.

    Infinite-duration impulse response (IIR) systems:

    Example: h[n] = a^n u[n], giving Bh = Σ |a|^n over n = 0 to ∞.

    If |a| < 1, the system is stable and (using the geometric series) Bh = 1/(1 - |a|).

    Delay on an impulse response: h[n] = sequence * delay = (𝜹[n+1] - 𝜹[n]) * 𝜹[n-1] = 𝜹[n] - 𝜹[n-1].


  • MMIC – A Revolution in Microwave Engineering September 3, 2020

    One of the most revolutionary inventions in microwave engineering was the MMIC (Monolithic Microwave Integrated Circuit) for high frequency applications. The major advantage of the MMIC was integrating previously bulky components into tiny non-discrete components on a chip. The image below shows the integrated components of an MMIC – spiral inductors (red) and FETs (blue), for example.

    It is apparent that smaller transistors are present towards the input of the MMIC. This is because less power is required to amplify the weak input signal; as the signal becomes stronger, higher power (and hence a larger FET) is required. The input terminal (indicated by the arrow) is the gate, and the output is the drain. Like almost all RF devices, an MMIC’s input and output are usually matched to 50 ohms, making MMICs easy to cascade.

    Originally, MMICs found their place within the DoD for use in phased array systems in fighter jets. Today, they are present in cellular phones, which operate in the GHz range much like military RADARs. MMICs have largely switched from MESFET configurations to HEMTs, which utilize compound semiconductors to create heterostructures. MMICs can be fabricated in Silicon (low cost) or in III-V semiconductors, which offer higher speed. Additionally, MOSFET transistors are becoming increasingly common due to improved performance over the years: the gate of the MOSFET has been shortened from several microns to several nanometers, allowing better performance at higher frequencies.

  • Arrayed Waveguide Grating for Wavelength Division Multiplexing August 30, 2020

    Arrayed Waveguide Grating (or AWG) is a method for wavelength division multiplexing or demultiplexing. The approach for multiplexing is to use unequal path lengths to generate a phase delay and constructive interference for each wavelength at an output port of the AWG. Demultiplexing is the same process, reversed.

    Arrayed Waveguide Gratings are commonly used in photonic integrated circuits. While ring resonators are also used for WDM, they see other uses as well, such as tunable or static filters. Further, a ring resonator selects a single wavelength to remove from the input, whereas an AWG separates all of the light according to wavelength. For many applications this makes the AWG the superior WDM component, as it offers great capability for encoding and modulating large amounts of information per wavelength.

    The design of the star coupler and of the path length differences for the designed wavelength division make up most of the complexity of this component. RSoft by Synopsys includes an AWG Utility for designing arrayed waveguide gratings.
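
    To give a sense of the central design relation, here is a hedged Python sketch using the standard grating-order condition n_c*dL = m*lambda_c; the index, order and wavelength below are illustrative numbers, not from a real design:

    # Constructive interference at the center wavelength requires
    # n_c * dL = m * lambda_c, where m is the grating order.
    lambda_c = 1.55e-6   # center wavelength (m), illustrative
    n_c = 2.4            # effective index of the arrayed waveguides, illustrative
    m = 30               # grating order, illustrative

    dL = m * lambda_c / n_c   # incremental length between adjacent waveguides
    print("path length increment: %.3f um" % (dL * 1e6))   # ~19.375 um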

    RSoft AWG Utility Guide

    Using this utility, a star coupler is created below:

    Star Coupler for AWG designed in RSoft using AWG Utility

  • Methods of Optical Coupling August 29, 2020

    An optical coupler is necessary for transferring optical energy into or out of a waveguide. Optical couplers are used both for free-space-to-waveguide transmission and for transmission from one waveguide to another, although the coupling methods for these scenarios differ. Some couplers selectively couple energy to a specific waveguide mode, while others are multimode. For the PIC designer, both the coupling efficiency and the mode selectivity are important considerations for optical couplers.

    Where the coupling efficiency η is equal to the power transmitted into the waveguide divided by the total incident power, the coupling loss (units: dB) is equal to
    L = 10*log(1/η).
    For example, η = 0.5 corresponds to a coupling loss of about 3 dB.

    Methods of optical coupling include:

    • Direct Focusing
    • End-Butt Coupling
    • Prism Coupling
    • Grating Coupling
    • Tapered Coupling (and Tapered Mode Size Converters)
    • Fiber to Waveguide Butt Coupling

    Direct Focusing for Optical Coupling

    Focusing a beam onto a waveguide with a lens in free space is termed direct focusing. The beam is angled parallel with the waveguide. This is one type of transverse coupling, and is also sometimes referred to as end-fire coupling. This method is generally deemed impractical outside of precision laboratory applications.

    End-Butt Coupling

    A prime example of end-butt coupling is a laser affixed directly to a waveguide, with the waveguide placed in front of the laser at its light-emitting layer.

    Prism Couplers

    Prism coupling is used to direct a beam onto a waveguide when the beam is at an oblique incidence. A prism is used to match the phase velocities of the incident beam and the waveguide.

    Prism Coupling

    Grating Couplers

    Similar to the prism coupler, the grating coupler also functions to produce a phase match between a waveguide mode and an obliquely incident beam. Gratings perturb the waveguide modes in the region below the grating, producing a set of spatial harmonics. It is through gratings that an incident beam can be coupled selectively into a chosen mode of the waveguide.

    Grating Coupler in RSoft

    Tapered Couplers

    Explained in one way, a tapered coupler intentionally disturbs the conditions of total internal reflection by tapering or narrowing the waveguide. Light thereby leaves the waveguide in a predictable manner, based on the tapering of the waveguide.

    Tapered Mode Size Converters

    Mode size converters exist to transfer light from one waveguide to another with a different cross-sectional dimension.

    Butt Coupling

    The procedure of placing the waveguide region of a fiber directly to a waveguide is termed butt coupling.

  • RF Spectrum Analyzers August 27, 2020

    A spectrum analyzer (whether in the RF domain or optical) is a tool that is the dual of the oscilloscope. An oscilloscope displays a waveform in the time domain; taking the Fourier transform of that waveform yields its spectrum, and the spectrum analyzer displays this content.

    Spectrum analyzers are very similar to radio receivers. A radio receiver can be classified into many types: (super)heterodyne, crystal video, etc. Like a heterodyne receiver, which features a bandpass filter, mixer and low pass filter, a spectrum analyzer must tune over a specific range. This range must be very narrow, which requires a high quality factor bandpass filter. This is where the YIG (Yttrium Iron Garnet) filter comes into play: YIG has a very high quality factor and resonates when exposed to a DC magnetic field. This filter determines the spectrum analyzer’s “resolution bandwidth” (RBW). A narrow RBW means a less noisy display and better resolution; the tradeoff is increased sweep time, which is inversely proportional to the RBW squared.

    A sweep generator is used to repetitively scan over the frequency band. The oscillator sweeps and repetitively mixes/multiplies with the input signal, and the product is filtered with a low pass filter. The low pass filter determines the spectrum analyzer’s “video bandwidth” (VBW).

    An important concept with regards to bandwidth is thermal noise, the single greatest source of noise in systems under 100 GHz (past 100 GHz and into optics, shot noise becomes more apparent). Noise power is given as kTB. Since k is a constant and T has a relatively small effect (the main requirement is that T is nonzero: at absolute zero there is no thermal noise, and the difference between a cold device and a scorching hot one is only perhaps 10 dB or so, just ballparking), bandwidth is the greatest contributor to thermal noise. A higher RBW raises the spectrum’s noise floor and makes closely spaced frequency components harder to distinguish, as more frequency components are passed through the envelope detector.
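
    As a rough illustration of the RBW’s effect on the noise floor, here is a minimal Python sketch using the kTB relation above, assuming room temperature and ignoring the receiver’s own noise figure:

    import math

    def noise_floor_dbm(rbw_hz):
        # Thermal noise: kTB = -174 dBm in a 1 Hz bandwidth at room temperature,
        # plus the bandwidth ratio in dB.
        return -174.0 + 10 * math.log10(rbw_hz)

    for rbw in (1.0, 1e3, 1e6):
        print("RBW = %9.0f Hz -> noise floor = %6.1f dBm" % (rbw, noise_floor_dbm(rbw)))
    # 1 Hz -> -174 dBm, 1 kHz -> -144 dBm, 1 MHz -> -114 dBm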

    Video bandwidth, on the other hand, typically determines resolution between power levels and smooths the display. It is important to note that the VBW smoothing is applied after the data has been collected and does not affect the measurement results, whereas the RBW dictates the minimum measurable bandwidth.

    Phase noise is also present in a spectrum analyzer; it results from phase jitter and can affect measurements near the center frequency. Since this is effectively a phase modulation, sidebands are produced near the center frequency which can interfere with measurement. Jitter refers to deviation from the periodicity of a signal.

  • Programs for PIC (photonic Integrated Circuit) Design August 25, 2020

    For building PICs, or Photonic Integrated Circuits, there are a number of platforms used in industry today. Lumerical Suite, for instance, is a major player with built-in simulators. Cadence has a platform that can simulate photonic and electronic circuits together, which for certain applications is a major advantage. There are two platforms I’ve become familiar with: the Synopsys PIC Design Suite (available for students under an agreement, underwritten by a professor at your university, ensuring its use is for educational purposes only) and Klayout with the Nazca Design packages.

    Synopsys is another great company with advanced programs for photonic simulation and PIC design. The Synopsys Photonic Design Suite can include components designed in RSoft. OptoDesigner is the program in the PIC design suite where PICs are laid out, yet the learning curve may not be what you were hoping for. The 3,000+ page manual lets the user dive into the scripting language PheoniX, which is necessary to learn for PIC design using Synopsys. Using a scripting language means that designing your PIC can be automated, eliminating repetitive design work. Other advantages follow, such as being able to fine tune a design without needing to click and drag components. Coding for PIC design might sound tedious, but once you start using it, I think you’ll realize that it’s really not, and that it’s a very powerful way of designing PICs. If you’d like to use the PheoniX scripting language in the Synopsys PIC design suite, note that the language is similar to C.

    Synopsys PIC Design Suite, Tutorial Program for Ring Resonator

    One of the greatest aspects of OptoDesigner and the PIC Design Suite is the simulation capabilities. Much like the simulations that can be run in Rsoft, these are available in OptoDesigner.

    Running FDTD in OptoDesigner

    The downside of the Synopsys PIC Design Suite is the difficulty of obtaining a legal copy that can be used for any and all purposes, even commercial. I mentioned that I obtained a student version. This is great for learning the software, to a certain extent; the learning stops when I would like to build something that could be sent out to a foundry for manufacture. Let’s be honest though, there is a lot to learn before getting to that point. Still, if we would like to use a Process Design Kit (PDK), which contains the real component models for a real foundry so that you can submit your design to be built on a wafer, you will need to convince Synopsys that the PDK is used not just for learning but as part of an education curriculum. If your university lets you get your hands on a PDK with the Synopsys student version, you will essentially have free rein to design PICs to your heart’s content. Even then, you’ll still have to buy a professional version if you want to design a PIC using a foundry PDK, submit it for a wafer run and sell it; I’ll let you look up the cost for that. In conclusion, the best way to use Synopsys is to work for a company that has already paid for the professional version.

    Now, if you find yourself in the situation where all the simulation benefits of using OptoDesigner are outweighed by the issue of needing to perform a wafer run, you might just want to use Klayout with Nazca Design photonic integrated circuit design packages. These are both open source. Game changer? Possibly. Suddenly, you picture yourself working as an independent contractor for PIC design someday and you’ll have Klayout to thank.

    Klayout and the Nazca Design packages are based on the very popular Python programming language. Coding can be done in Spyder, Notepad or even Command Prompt (lol?). If you aren’t familiar with how Python works, PIC design might move you to learn. Python takes the place of the PheoniX scripting language used in OptoDesigner, so you still have the automation and big brain possibilities that a scripting language gives you for designing PICs. As for simulations, you’ll have to go with your gut, but you could use discrete components to design your circuit and evaluate that.

    Klayout doesn’t come with a 3,000+ page manual, but you’ll likely find that it is simpler to use than OptoDesigner. Below is a Python script which generates a .gds file, and then the file opened in Klayout.

    Python Script for PIC Design in Klayout using Nazca Design packages
    .gds file opened in Klayout
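
    For a sense of the workflow, here is a minimal sketch of such a script, assuming Nazca’s basic building blocks (the lengths, angles and filename are arbitrary):

    import nazca as nd

    # Build a simple path: straight -> bend -> straight
    nd.strt(length=20).put(0, 0)         # straight waveguide, 20 um
    nd.bend(angle=90, radius=10).put()   # 90 degree bend, 10 um radius
    nd.strt(length=20).put()             # another straight section

    # Write the layout to a .gds file that Klayout can open
    nd.export_gds(filename='demo.gds')
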
  • Ring Resonators for Wavelength Division Multiplexing July 23, 2020

    The ring resonator is a rather simple passive photonic component; however, its uses are quite broad.

    The basic concept of the ring resonator is that light entering port 1 (in the diagram below) at the ring’s resonance frequency will be trapped in the ring and exit out of port 3. Frequencies away from the resonance frequency pass through to port 2.

    ringres

    Ring resonators can be used for Wavelength Division Multiplexing (WDM). WDM allows for the transmission of information allocated to different wavelengths simultaneously without interference. There are other methods for WDM, such as an Asymmetric Mach Zehnder Modulator.

    Here I present one scheme that utilizes four ring resonators to perform wavelength division multiplexing. The fifth output transmits the remaining wavelengths after the chosen wavelengths are removed, each determined by the resonance frequency (and in practice, the radius) of its ring resonator.

    wdm
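
    To connect the ring radius to the wavelengths it removes, here is a hedged Python sketch of the standard resonance condition m*lambda = n_eff*2πR; the effective index and radius are illustrative values:

    import math

    n_eff = 2.4    # effective index, illustrative
    R = 10e-6      # ring radius (m), illustrative

    L = 2 * math.pi * R   # round-trip length of the ring

    # Resonances near 1550 nm occur where m * wavelength = n_eff * L
    for m in range(96, 100):
        lam = n_eff * L / m
        print("m = %d: resonance at %.1f nm" % (m, lam * 1e9))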


  • Quantum Well: InP-InGaAsP-InP July 6, 2020

    Quantum wells are widely used in optoelectronic and photonic components and for a variety of purposes. Two materials that are often used together are InP and InGaAsP. Two different models will be presented here with simulations of these structures. The first is an InP pn-junction with a 10 nm InGaAsP (unintentionally doped) layer between. The second is an InP pn-junction with 10 nm InGaAsP quantum wells positioned in both the positive and negative doped regions.

    Quantum Well between pn-junction

    quantum well

    The conduction band and valence band energies are depicted below for the biased case:

    quantum well2

    The conduction current vector lines:

    qwell1

    ATLAS program:

    go atlas
    Title Quantum Wells
    # Define the mesh
    mesh auto
    x.m l = -2 Spac=0.1
    x.m l = -1 Spac=0.05
    x.m l = 1 Spac=0.05
    x.m l = 2 Spac =0.1
    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = InP NY = 10 acceptor = 1e18
    region num=3 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
    region num=2 bottom thick = 0.5 material = InP NY = 10 donor = 1e18
    # Electrode specification
    elec       num=1  name=anode  x.min=-1.0 x.max=1.0 top
    elec       num=2  name=cathode   x.min=-1.0 x.max=1.0 bottom

    #Gate Metal Work Function
    contact num=2 work=4.77
    models region=1 print conmob fldmob srh optr
    models region=2 srh optr
    material region=2

    #SOLVE AND PLOT
    solve    init outf=diode_mb1.str master
    output con.band val.band e.mobility h.mobility band.param photogen opt.intens recomb u.srh u.aug u.rad flowlines
    tonyplot diode_mb1.str
    method newton autonr trap  maxtrap=6 climit=1e-6
    solve vanode = 2 name=anode
    save outfile=diode_mb2.str
    tonyplot diode_mb2.str
    quit
    Quantum Well layers inside both p and n doped regions of the pn-junction

    Structure:

    qwell3

    Simulation results:

    qwell2

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.25 material = InP NY = 10 acceptor = 1e18
    region num=3 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
    region num=4 bottom thick = 0.25 material = InP NY = 10 acceptor = 1e18
    region num=2 bottom thick = 0.25 material = InP NY = 10 donor = 1e18
    region num=6 bottom thick = 0.01 material = InGaAsP NY = 10 x.comp=0.1393  y.comp = 0.3048
    region num=5 bottom thick = 0.25 material = InP NY = 10 donor = 1e18
  • Capacitance and Parallel Plate Capacitors July 5, 2020

    Capacitance relates two fundamental electric quantities: charge and electric potential. The formula that relates the two is C = q/φ (capacitance = charge / electric potential).

    The term equipotential surface refers to a path or surface along which moving a charge requires zero work from the field. If there are many charges along the surface of a conductor (an equipotential surface), then the potential energy of the charged conductor is equal to one half of the electric potential φ multiplied by the integral of all charges along this surface:

    Ue = ½ φ ∫ dq.

    Given a scenario in which both charge and electric potential are related, we may introduce capacitance. The following formula proves important for calculating the energy of a charged conductor:

    Ue = ½ φq = ½ Cφ² = q² / (2C).

    A parallel plate capacitor is a system of metal plates separated by a dielectric. One plate of the capacitor is positively charged, while the other is negatively charged. The potential difference and charge on the capacitor plates cause energy to be stored in the electric field between the two plates.

    caps
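
    As a worked example, here is a minimal Python sketch using the standard parallel plate formula C = ε0*εr*A/d together with the energy relation above; the dimensions and charge are arbitrary:

    eps0 = 8.85e-12   # permittivity of vacuum, C^2/(N*m^2)
    eps_r = 4.0       # relative permittivity of the dielectric, illustrative
    A = 1e-4          # plate area (m^2): 1 cm x 1 cm
    d = 1e-4          # plate separation (m): 0.1 mm

    C = eps0 * eps_r * A / d   # parallel plate capacitance, farads
    q = 5e-9                   # stored charge (C), illustrative
    U = q**2 / (2 * C)         # stored energy, from Ue = q^2 / (2C)
    print("C = %.1f pF, Ue = %.1f nJ" % (C * 1e12, U * 1e9))   # 35.4 pF, 353.1 nJ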

  • Electric Potential and Electric Potential Energy July 4, 2020

    Electric potential can be summarized as the work done by an electric force to move a charge from one point to another; its units are Volts. Electric potential does not depend on the shape of the path along which the work is applied. Because the system is conservative, the amount of energy required to move a charge in a full circle, returning it to where it started, is equal to zero.

    The work of an electrostatic field takes the formula

    W12 = keqQ(1/r1 – 1/r2),

    which is found by integrating the charge q times the electric field. The work of an electrostatic field involves both the electric potential and the electric potential energy. The electric potential energy U is equal to the electric potential φ multiplied by the charge q. The work corresponds to a difference of potential energies between two points, while the electric potential refers to the exact level of potential at a given point.

    0001

    To calculate electric potential energy, it is convenient to assume that the potential energy is zero at a distance of infinity (and surely it should be). In this case, we can write the electric potential energy as equal to the work needed to move a charge from point 1 to infinity.

    0010

    We’ll consider a quick application relating the dipole moment and the electric potential. The dipole potential takes the form shown in the figure below; it decreases faster with distance r than the potential of a point charge.

    0011

  • Dipole Moment July 3, 2020

    Consider a positive and a negative charge, separated by a distance. When applying superposition of the electric force and electric field generated by the two charges at a target point, the positive and negative charges are said to create an effect called a dipole moment. Let’s consider a few examples of how an electric field is generated at a target point in the presence of both a positive and a negative charge. Molecules also often have a dipole moment.

    Here, the target point is at distance b from the center point between the negative and positive charges. Where both charges are of the same magnitude, the vertical attraction and repulsion components cancel, leaving the electric field directed parallel to the axis of the two charges.

    Capture

    Now, we’ll consider a target point along the axis of the two charges. Remember that a positive charge produces an electric force and field radiating outward from itself, while the force and field are directed inward toward a negative charge. We can expect, then, that the electric field will be different on either side: the side of the positive charge repels and the side of the negative charge attracts. This works because the force falls off as the square of distance, so the effect of the farther charge is weaker. This is a dipole.

    Given how a dipole functions, it would be nice to have a different set of formulas and a more refined approach to solving electric field problems with dipoles. The dipole moment p is found using the formula p = ql, with units Coulomb*meter, where l is the vector pointing from the negative charge to the positive charge. The dipole moment is drawn as one point at the center of the dipole with the vector l through it.

    dipole

    In order to treat the two charges as a single dipole centered between them, there should be a minimum distance between the dipole and the target point: the distance to the target should be much larger than the length l, the magnitude of the dipole vector.

    dipole2

    Finally, the formulas for these electric fields in terms of the dipole moment are

    E1 = ke*p/b1^3  (target point on the perpendicular bisector)

    E2 = 2ke*p/b2^3  (target point along the dipole axis)

  • Electric Force & Electric Field July 2, 2020

    While the electric force describes the exertion of one charge or body on another, we must remember that the two objects need not be physically touching for this force to be applied. For this reason, we describe the force exerted through empty space (i.e. where the two objects aren’t touching) as an electric field. Any charge or body that exerts an electrical force, determined most importantly by the distance between the objects and the amount of charge present, generates an electric field.

    The electric field generated as a result of two charges is directly proportional to the electric (Coulomb) force exerted on a charge and inversely proportional to the charge of the particle it acts on. In other words, if the Coulomb force is greater, then the electric field will be stronger; if the charge the force is applied to is larger, the field is correspondingly smaller. The Coulomb force, as mentioned previously, is inversely proportional to the square of the distance between the charges. The electric field E then uses the formula E = F/q, with units of Volts per meter.

    By combining both Coulomb’s Law and our definition for the electric field, the electric field can be written as

    E1 = ke*q1/r^2 * er,

    where er is again the unit vector pointing from charge q1.

    Capturl

    When drawing electric field lines, there are three rules to pay attention to:

    1. The direction is tangent to the field line (in the direction of flow).
    2. The density of the lines is proportional to the magnitude of the electric field.
    3. Vector lines emerge from positive charges and sink towards negative charges.

    CapturX

    Adding electric fields to produce a resultant electric field is simple, thanks to the property of superposition, which applies to electric fields. Below is an example of how a resultant electric field is calculated geometrically. The direction of each individual field is determined by the polarity of its source charge.

    CapturMPNG
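
    Here is a small numerical sketch of this superposition; the charge values and positions are arbitrary examples:

    import numpy as np

    k_e = 8.99e9   # Coulomb constant, N*m^2/C^2

    def e_field(q, r_charge, r_point):
        # Field at r_point due to a point charge q at r_charge: E = ke*q/r^2 * r_hat
        r = np.asarray(r_point, float) - np.asarray(r_charge, float)
        d = np.linalg.norm(r)
        return k_e * q / d**2 * (r / d)

    # Two charges; the total field at the origin is the vector sum (superposition)
    E = e_field(+1e-9, [0.1, 0.0], [0.0, 0.0]) + e_field(-1e-9, [-0.1, 0.0], [0.0, 0.0])
    print(E)   # both contributions point in -x here, so they add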

  • Coulomb Force July 1, 2020

    Electric charge is important in determining how a body or particle will behave and interact electromagnetically. It is also key to understanding how electric fields, electric potentials and electromagnetic waves come into existence. It starts with the atom and its number of protons and electrons.

    Charges are positive or negative. In a neutral atom, the number of protons in the nucleus is equal to the number of electrons. When an atom loses or gains an electron from this state, it becomes a positively or negatively charged ion. When bodies or particles exhibit a net charge, either positive or negative, an electric force arises. Charges can be produced by friction or irradiation. The electrostatic force functions similarly to the gravitational force – in fact the formulas look very similar! The most important difference between the two is that the electrostatic force can be attractive or repulsive, whereas the gravitational force is always attractive. For small bodies, the electrostatic force dominates and the gravitational force is negligible.

    Charles Coulomb conducted experiments around 1785 to understand how electric charges interact. He devised two main relations that would become Coulomb’s Law:

    The magnitude of the force between two stationary point charges is

    1. proportional to the product of the magnitude of the charges and
    2. inversely proportional to the square of the distance between the two charges.

    The following expression describes how one charge will exert a force on another:

    coulomb

    The unit vector in the direction from charge 1 to charge 2 is written as e12; the order of the two numbers indicates the direction of the force, from the first numbered position to the second. Reversing the direction of the force reverses its polarity: F12 = -F21.

    The coefficient ke will depend on the unit system and is related to the permittivity:

    coulomb2

    The permittivity of vacuum is ε0 = 8.85*10^(-12) C^2/(N*m^2).

    Coulomb forces obey superposition, meaning that the forces from a series of charges add linearly, without affecting each charge’s independent effect on its ‘target’ charge. Coulomb’s Law extends to bodies and non-point charges to describe the applied electrostatic force on an object; the same first equation may be used in this scenario.

  • Noise Figure June 30, 2020

    Electrical noise consists of unwanted alterations to a signal of random amplitude, frequency and phase. Since RADAR is typically done at microwave frequencies, the noise contribution of most RADAR receivers is highest at the first stages. This is mostly thermal noise (Johnson noise). Each component of a receiver has its own Noise Figure (dB), which is typically kept low through the use of an LNA (Low Noise Amplifier). It is important to know that all conductors generate thermal noise when above absolute zero (0 K).

    Noise Power

    Noise power is the product of Boltzmann’s constant, temperature in Kelvin and receiver bandwidth (k*T0*B). This is typically also expressed in dBm; the value is -174 dBm at room temperature for a 1 Hz bandwidth. For a different receiver bandwidth, simply add the decibel equivalent of the bandwidth ratio to this value. For example, at a 1 MHz bandwidth, the bandwidth ratio is 60 dB (10*log(10^6) = 60), which added to the 1 Hz value gives -114 dBm. For a real receiver, this number is further increased by the Noise Figure.

     

    The Noise Figure is defined as 10*log(Na/Ni) where Na is the noise output of an actual receiver and Ni is the noise output of an ideal receiver. Alternatively these can be converted to dB and subtracted. It can also be defined as the rate at which SNR degrades. For systems on earth, Noise Figure is quite useful as temperature tends to stay around 290K (room temperature). However, for satellite communication, the antenna temperature tends to be colder than 290K and therefore effective noise temperature would be used instead.

    Noise Factor is the linear equivalent of Noise Figure. For cascaded systems, the contribution of each successive stage to the total noise factor decreases, as shown. This explains why the initial components in a receiver chain have a much greater effect on the Noise Figure.

    noisefactor
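
    Here is a quick Python sketch of the cascade (Friis) relation implied here, showing why the first stage dominates; the gains and noise figures are arbitrary examples:

    import math

    def db_to_lin(db):
        return 10 ** (db / 10)

    def cascaded_nf_db(stages):
        # stages: list of (gain_dB, nf_dB).
        # Friis: F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
        F_total, G_running = 0.0, 1.0
        for i, (g_db, nf_db) in enumerate(stages):
            F = db_to_lin(nf_db)
            F_total += F if i == 0 else (F - 1) / G_running
            G_running *= db_to_lin(g_db)
        return 10 * math.log10(F_total)

    # LNA first (low NF, good gain), then a lossy mixer and a noisier amplifier
    print(cascaded_nf_db([(20, 1.0), (-7, 7.0), (20, 5.0)]))   # ~1.5 dB, near the LNA's 1 dB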

    Noise Figure is a very important Figure of Merit for detection systems where the input signal strength is unknown. For example, it is necessary to decrease the Noise Figure in the electromagnetic components of a submarine in order to detect communication and RADAR signals.

  • Dispersion in Optical Fibers June 29, 2020

    Dispersion is defined as the spreading of a pulse as it propagates through a medium. It causes different components of the light to propagate at different speeds, leading to distortion. The most commonly discussed dispersion in optical fibers is modal dispersion, which results from different modes propagating within a MMF (multimode fiber). The fiber optic cable supports many modes because the core is of a larger diameter than that of a SMF (single mode fiber). Single mode fibers tend to be used more commonly now due to decreased attenuation and dispersion over long distances, although MMF can be cheaper over short distances.

    Let’s analyze modal dispersion. When the core is sufficiently large (for comparison, the core of a SMF is only around 8.5 microns), light enters at different angles, creating different modes. Because these modes experience total internal reflection at different angles, their speeds along the fiber differ, and over long distances this can have a huge effect; in many cases, the signal which was sent is completely unrecognizable. This type of dispersion limits the bandwidth of the signal. Often GRIN (graded index) fibers are employed to reduce this type of dispersion by gradually varying the refractive index of the core so that it decreases as you move further out. As we have learned, the refractive index directly influences the propagation velocity of light: it is defined as the ratio of the speed of light in vacuum to the speed of light in the medium, and is therefore inversely proportional to the speed in the medium (in this case silica glass).

    modal
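
    To put a rough number on modal dispersion, here is a hedged Python sketch of the standard step-index estimate dT ≈ (L*n1/c)*(n1/n2 - 1); the indices and length are typical illustrative values:

    c = 3e8      # speed of light in vacuum, m/s
    n1 = 1.48    # core refractive index, illustrative
    n2 = 1.46    # cladding refractive index, illustrative
    L = 1e3      # fiber length: 1 km

    # The fastest mode travels nearly straight; the slowest reflects at the critical angle.
    dT = (L * n1 / c) * (n1 / n2 - 1)
    print("delay spread over 1 km: %.1f ns" % (dT * 1e9))   # ~67.6 ns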

    In order to mitigate the effects of intermodal dispersion in multimode fibers, pulses are lengthened to overlap the components of different modes, or better yet, single mode fiber is used where available.

    The next type of dispersion is chromatic dispersion. All lasers suffer from this effect because no laser emits a single frequency; therefore, different wavelengths will propagate at different speeds. Sometimes chirped Bragg gratings are employed to compensate for this effect. Doped fiber lasers and solid state lasers tend to have much narrower linewidths than semiconductor PIN lasers and therefore tend to exhibit less chromatic dispersion, although semiconductor lasers have advantages such as lower cost and smaller size.

    Another dispersion type is PMD (Polarization Mode Dispersion), which is caused by different polarizations travelling at different speeds within a fiber. Generally the two polarizations travel at the same speed; however, imperfections in the material can cause spreading of pulses.

    For SMF fibers, it is important to cover waveguide dispersion. Note that since the cladding of the fiber is doped differently than the core, the core has a higher refractive index than the cladding (doping with fluorine lowers the refractive index and doping with germanium raises it). As we know, a lower refractive index indicates a faster speed of propagation. Although most of the light stays within the core, some extends into the cladding. Over long distances this leads to greater dispersion, as the light in the lower-index cladding travels faster than the light in the core, producing different propagation velocities.

  • RF Over Fiber Links June 28, 2020

    The basic principle of an RF over Fiber link is to convey a radio frequency electrical signal optically through modulation and demodulation techniques. This has many advantages, including reduced attenuation over long distances, increased bandwidth capability, and immunity to electromagnetic interference. In fact, RF over fiber links are essentially limitless in terms of propagation distance, whereas coaxial cable transmission lines tend to be limited to around 300 ft due to higher attenuation over distance.

    The simple RFoF link comprises an optical source, optical modulator, fiber optic cable and a receiver.

    rfof

    The RF signal modulates the optical carrier (f_opt), producing sidebands at the sum and difference of the RF and optical frequencies. These beat against the carrier in the photodetector to reproduce an electrical RF signal. The above picture shows the amplitude modulation and direct detection method. Impedance matching circuitry, as well as amplifiers, is generally included to match the ports of the modulator and the demodulator.

    Before designing an RFoF link, it should first be established that bypassing a transmission line is worthwhile at all. Will the system benefit from lower size and weight, or from immunity to electromagnetic interference? Is a wide bandwidth required? If not, this sort of link may not be necessary. The maximum SWaP of all the hardware at the two ends of the link must also be determined. Another important consideration is the temperature (or even the pressure, humidity or vibration levels) that the link will be exposed to. Finally, the RF bandwidth and the propagation distance must be considered.

    The Following Figures of Merit can be used to quantify the RFoF link:

    Gain

    In dB, this is defined as the signal out (dBm) minus the signal in (dBm), or 10*log(g), where g is the small-signal gain (the gain for which the amplitude is small enough that there is no amplitude compression).

    Noise Figure

    For RADAR and detection systems where the input signal strength is unknown, Noise Figure is more important than SNR. NF is the rate at which SNR degrades from input to output and is given as N_out – kTB – Gain (all in dB scale).

    Dynamic Range

    It is known that the noise floor defines the lower end of the dynamic range, while the higher end is limited by spurious frequencies or amplitude compression. The difference between the highest and lowest acceptable input powers is the dynamic range.

    For example, if defined in terms of full compression, the dynamic range (in dB) would be S_in.max - MDS, where MDS is the minimum detectable signal power.
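
    Here is a small Python sketch of that bookkeeping; the bandwidth, noise figure and compression point are arbitrary examples:

    import math

    def noise_floor_dbm(bandwidth_hz, nf_db):
        return -174 + 10 * math.log10(bandwidth_hz) + nf_db

    B = 1e6           # receiver bandwidth: 1 MHz
    NF = 6.0          # link noise figure, dB (illustrative)
    S_in_max = 10.0   # input power at full compression, dBm (illustrative)

    MDS = noise_floor_dbm(B, NF)   # minimum detectable signal, taken at the noise floor
    DR = S_in_max - MDS
    print("MDS = %.0f dBm, dynamic range = %.0f dB" % (MDS, DR))   # -108 dBm, 118 dB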

    Scattering Parameters

    Scattering parameters are frequency dependent parameters that define the loss or gain at various ports. For a two port system, this forms a 2×2 matrix. In most fiber optic links, the backwards isolation S_12 is essentially zero because the detector and modulator cannot perform each other’s functions. Generally, the return losses at ports 1 and 2 are specified to meet the system requirements.


  • Erbium Doped Fiber Amplifiers (EDFA) June 27, 2020

    EDFA

    The above figure shows the attenuation of optical fibers versus wavelength. It can be seen that Rayleigh scattering is more prevalent at higher frequencies. Rayleigh scattering occurs where minute variations in the density or refractive index of the fiber are present due to the manufacturing process; light is scattered either along the direction of propagation within the core or away from it, and in the latter case the result is increased attenuation. This accounts for 96% of attenuation in optical fibers. It can also be noted that lattice absorption varies wildly with the wavelength of light. From the graph, it is apparent that at the 1550 nm wavelength this value (and also the Rayleigh scattering) is quite low. It is for this reason that 1550 nm is a common wavelength of propagation in silica glass optical fibers. Although this wavelength allows greater design options, shorter wavelengths (such as 850 nm) are also used when the propagation distance is short. However, 1550 nm remains the common wavelength due to the development of dispersion shifted fibers as well as the EDFA (Erbium Doped Fiber Amplifier).

    EDFAs operate around the 1550 nm region (1530 to 1610 nm) and work on the principle of stimulated emission, in which a photon is emitted within an optical device when another photon causes electrons and holes to recombine. The stimulated emission creates a photon of the same wavelength travelling in the same direction (coherent light). The EDFA acts as an amplifier, boosting the intensity of light with a heavily erbium-doped core. As discussed earlier, the lowest power loss for silica fibers occurs around 1550 nm, which is the wavelength at which this stimulated emission occurs. The pump excitation, however, occurs at 980 or 1480 nm, wavelengths shown to have high loss.

    The advantages of the EDFA are high gain and the ability to operate in the C and L bands of light. It is also not polarization dependent and has low distortion at high frequencies. The major disadvantage is the requirement of optical pumping.

    EDFA

  • RSoft Tutorials 9. Using Real Materials and Multilayer Structures June 26, 2020

    Rsoft comes with a number of libraries of real materials. We can add these materials at any time from the Materials button on the side. However, to build a multilayer structure that can utilize many materials, select “Multilayer” under 3D Structure Type.

    rsoft17.2

    Now, select “Materials…” to add the desired materials. Move through the RSoft libraries to choose a material and use the button in the top right (not the X button, silly) to add the material to the project. Now select OK to be brought back to the Startup Window, where we must design a layered structure using these materials. Note that you can add more materials while building the layers.

    rsoft17.1

    Selecting “Edit Layers…” on the Startup window brings you to the following window. Here, you can define your layers by selecting “New Layer”: enter the height and material of the layer, select “Accept Layer”, and repeat the process until the structure is finished. Select OK when done, and select OK on the Startup window if all other settings are complete. This is my structure; note that the layer heights add up to 1. Remember what the sizes of your layers are.

    rsoft17.3

    Now, design the shape of the structure; I’ve made a rectangular waveguide. It is also important to consider where the beam should enter the structure. By default, the beam is focused across the entire structure. Where a particular layer is meant to be a waveguide, the beam should be reduced in size, and by remembering the sizes of the layers it is not difficult to aim the beam at a particular section of the structure. For my structure, I will aim the beam at the 0.2 GaInAsP layer. The position, width, height, angle and more of the launch beam can be edited in the “Launch Parameters” window, accessible through “Launch Fields” on the right side.

    rsoft17.4

    Finally, run a simulation with your structure!

    rsoft17.5

    rsoft17.7


  • Rsoft Tutorials 8. Air Gaps June 25, 2020

    There are cases where you may want to simulate a region of air between two components. A simple way to approach this task is to create a region with the same refractive index as air. The segment between the two waveguides (colored gray) will serve as the “air” region. Right-click on the segment to define its properties, and under “Index Difference”, choose the value to be 1 minus the background index.

    rsoft14.1

    Properties for the segment:

    rsoft14.2

    Symbol Table Editor:

    rsoft14.3

    Notice that in the “air” region the pathway monitor detects the efficiency to be zero. If the gap is short and the waveguide continues at the same angle, the beam reconvenes in the continuing waveguide, though with losses.

    rsoft14.0

     

  • Rsoft Tutorials 7. Index Grating June 24, 2020

    Index grating is a common method of altering the frequency characteristics of light. In Rsoft, a graded index component is found under the “Index Taper” tab when right-clicking on a component. By selecting “Tapers…”, one can create a new index taper.

    rsoft12.1

    Here, the taper is called “User 1” and defined by an equation step(M*z), with z being the z-coordinate location.

    rsoft12.2

    Selecting “Test” on the User Taper Editor will plot the index function of the tapered component:

    rsoft12.6

    The index contour is plotted below:

    rsoft12.5

    Here, the field pattern:

    rsoft12.4

    Light contour plot:

    rsoft12.3

     

  • Rsoft Tutorials 6. Multiple Launch Fields, Merging Parts June 23, 2020

    Launch Fields define where light will enter a photonic device in Rsoft CAD. An example that uses multiple launch fields is the beam combiner.

    rsoft11.1

    rsoft11.2

     

    On the sidebar, select “Edit Launch Fields”. To add a new launch, select New and choose the pathway; a waveguide will be selected by default. To move the launch elsewhere, input a parameter other than “default” for the location, and likewise for the other beam parameters.

    rsoft11.5

    Choosing “View Launch” will plot the field amplitude of the launches. For the plot below, the third launch was removed.

    rsoft11.4

    Merging Waveguides

    Right-clicking on the structure gives the option to choose the “Combine Mode.” Be sure that Merge is selected to allow the waveguides to combine.

    rsoft11.3

     

  • The Pockels Effect and the Kerr Effect June 22, 2020

    The electro-optic effect describes the phenomenon that, with an applied voltage, the refractive index of a material can be altered. The electro-optic effect lays the groundwork for many optical and photonic devices; one such application is the electro-optic modulator.

    If we consider a waveguide, or even a lens such as demonstrated through problems in geometrical optics, we know that the refractive index can alter the direction of propagation of a transmitted beam. A change in refractive index also changes the speed of the wave, and this change of light propagation speed in a waveguide acts as phase modulation: the applied voltage carries the modulating information and the light is the carrier signal.

    The electro-optic effect comprises both a linear and a non-linear component. The full form of the electro-optic effect equation is as follows:

    Capture

    The above formula means that, with an applied field E, the resultant change in refractive index is comprised of the linear Pockels effect rE and a non-linear Kerr effect PE².
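
    Here is a small Python sketch of the relation above, dn = rE + PE², and the phase shift it produces over a length of waveguide; the coefficient values are purely illustrative, not for a specific crystal:

    import math

    r = 30e-12   # linear (Pockels) coefficient, m/V -- illustrative
    P = 1e-21    # quadratic (Kerr) coefficient, m^2/V^2 -- illustrative
    E = 1e6      # applied field, V/m

    dn = r * E + P * E**2   # change in refractive index
    L = 1e-3                # interaction length: 1 mm
    lam = 1.55e-6           # wavelength: 1550 nm

    dphi = 2 * math.pi * dn * L / lam   # accumulated phase shift
    print("dn = %.2e, phase shift = %.3f rad" % (dn, dphi))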

    The Pockels Effect is dependent on the crystal structure and symmetry of the material, along with the direction of the electric field and light wave.

     

    99-mod-transfer-function-rev-600w

  • Rsoft Tutorials 5. Pathway Monitoring (BeamPROP) June 21, 2020

    When stringing multiple parts together, it is important to check a lightwave system for losses. The BeamPROP simulator, part of the Rsoft package, will display any losses in a waveguide pathway. Here we have an example of an S-bend simulation; there appear to be losses in a few sections.

    rsoft6.2

    Here, the design for the S-bend waveguide has a few locations that are leaking, as indicated by the BeamPROP simulation.

    rsoft6.1

    The discontinuities are shown below, which are a possible source of loss:

     

    After fixing these discontinuities, the waveguide can be simulated again using BeamPROP. In fact, the losses are not fixed: the remaining loss is called bending loss.

    rsoft5.9

    rsoft5.10

    Bending loss is an important topic for waveguides and becomes critical in Photonic Integrated Circuits (PICs).

  • Rsoft Tutorials 4. Multi-Layer Profiles June 20, 2020

    Rsoft has the ability to create multilayered devices, as was done previously using ATLAS/TCAD. Rather than defining structures through scripts as is done with ATLAS, information about the layers is defined in tables accessed in Rsoft CAD.

    rsoft5.1

    To begin adding layers to a device, such as a waveguide, first draw the device in Rsoft CAD. To design a structure with a substrate and rib waveguide, select Rib/Ridge 3D Structure Type in the Startup Window.

    rsoft4.4

    Next, design the structure in Rsoft CAD.

    rsoft5.2

    The Symbol Table Editor is needed now not only to define the size of the waveguide, but also the layer properties. The materials for this waveguide will be defined simply, using basic locally defined layers with a user-defined refractive index; later, we will discuss importing layer libraries to use real materials. Layer properties may not need to be defined here before entering the Layer Table Editor, but doing so helps one get used to the parameters typically needed for this exercise.

    rsoft5.3

    The Layer Table Editor is found on the Rsoft CAD sidebar. First assign the substrate layer index, then select New Layer. The layer name, index and height are defined for this exercise.

    rsoft5.4

    After layers have been chosen, the mode profile can be simulated.

    rsoft5.5

     

  • Rsoft Tutorials 3. Fiber Structures and BeamPROP Simulation Animations June 19, 2020

    An interesting feature of BeamPROP and the other simulators in the Rsoft packages is that the results can be displayed as a running animation. The following is the result of a simulation of an optical fiber: BeamPROP animates the transverse field as a function of the z parameter, the position along the length of the optical fiber.

    fiberBeamPROP sim

    To design an optical fiber component in Rsoft CAD, select “Fiber” under 3D Structure Type when making a new project.

    rsoft4.1

    To build the cylinder that will be the optical fiber, select the cylinder CAD tool (shown below) and draw along the axis where the base of the cylinder lies.

    rsoft4.2

    Dimensions of the fiber can be specified using the symbol tool discussed previously and by right-clicking the object to assign these values. Note that animations of mode patterns through long waveguides are not only available for cylindrical fibers: fibers may consist of a variety of shapes, and multiple pathways may be included. Simulations can indicate whether a waveguide has potential leaks, or show the interaction of light with a new surface.

    rsoft4.3


  • Rsoft Tutorials 2. Simulating a Waveguide using BeamPROP and Mode Profile June 18, 2020

    BeamPROP is a simulator found in the Rsoft package. Here, we will use BeamPROP to calculate the field distributions of our tapered waveguide. Other methods built within Rsoft CAD will also be explored.

     

    Tapered Waveguide

    The tapered waveguide that we are simulating is found below. We will use the BeamPROP tool to simulate the field distributions in the waveguide. We will also use the mode calculation tool to simulate the mode profile at each end of the waveguide.

    BeamPROP Simulation Results

    rsoft3.3

    Rsoft CAD

    rsoft3.4

    Mode Profile Simulation

    The mode simulation tool is found on the sidebar:

    rsoft3.5

    Before choosing the parameters of the Mode Simulator, let’s first take a look at the coordinates of the beginning and end of the waveguide. This dialog is found by right-clicking on the component. The window shows that the starting point along the z axis is 1 and the ending point is 43 (the units are actually micrometers, by the way). We will choose locations close to the ends of the waveguide, at z = 1.5 and z = 42.5.

    rsoft3.6

    Parameter selection window:

    rsoft3.7

    Results at z = 1.5:

    rsoft3.72

    Results at z = 42.5:

    rsoft3.71

  • Rsoft Tutorials 1. Getting Started with CAD (tapered waveguide) June 17, 2020

    Rsoft is a powerful tool for optical and photonic simulation and design. Rsoft and Synopsys packages come with a number of different tools and simulators, such as BeamPROP, FullWAVE and more. There are also other programs typically found with Rsoft, such as OptoDesigner, LaserMOD and OptSim. Here we focus on the very basics of using the Rsoft CAD environment. I am using a student version, which is free for all students in the United States.

    New File & Environment

    When starting a new file, the following window is opened. We can select the simulation tools needed, the refractive index of the environment (“background index”) and other parameters. Under dimensions, “3D” is selected.

    rsoft1.02

    The 3D environment is displayed:

    rsoft1.01

    Symbol Editor

    rsoft1.2

    On the side bar, select “Edit Symbols.” Here we can introduce a new symbol and assign it a value using “New Symbol,” filling out the name and expression and selecting “Accept Symbol.”

    rsoft1.1


    Building Components

    Next we will draw a rectangle, which will be our waveguide.  Select the rectangular segment below:

    rsoft1.2

    Now, select the bounds of the rectangle. See example below:

    rsoft1.3

    Editing Component Parameters

    Right click on the component to edit its parameters. Here, we will change the refractive index and the length of the component. The Index Difference tab sets the difference in refractive index relative to the background index, which was defined when we created the file. We'll set it to 0.1; since our background index was 1.0, the refractive index of the waveguide is 1.1. Alternatively, the value delta in that box may be edited from the Symbol menu. We also want to use our symbol "Length" to define the length of the waveguide. Finally, we want this waveguide to be tapered, so the ending vertex will be set to width*4. Note that width may also be edited in the symbol list.

    rsoft1.4

    Here, we have a tapered waveguide:

    rsoft1.5

  • Methods of Calculation for Signal Envelope June 16, 2020

    The envelope of a signal is an important concept. When a signal is modulated, meaning that information is combined with or embedded in a carrier signal, the envelope follows the shape of the signal along its uppermost and lowermost edges.

    There are a number of methods for calculating an envelope. When given an in-phase and quadrature signal, the envelope is defined as:

    E = sqrt(I^2  + Q^2).

    This envelope, if plotted, will trace the exact upper or lower edge of the signal. An exact envelope may or may not be what is sought, depending on the level of detail required by the application.
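
    As a minimal sketch of this formula (the signals and sample rate below are synthetic, invented purely for illustration), the exact envelope recovers the modulating waveform from I and Q:

    fs = 1e4;                      % sample rate (assumed)
    t  = (0:1/fs:0.1)';
    m  = 1 + 0.5*cos(2*pi*20*t);   % message: the envelope we expect to recover
    I  = m.*cos(2*pi*1e3*t);       % in-phase component on a 1 kHz carrier
    Q  = m.*sin(2*pi*1e3*t);       % quadrature component
    E  = sqrt(I.^2 + Q.^2);        % exact envelope, E = sqrt(I^2 + Q^2)
    plot(t, E)                     % traces m exactly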

    Here, this data was collected as a return from a fiber laser source. We seek to characterize this section of the data to determine which of a number of candidate descriptions the return signal fits. For this application, the exact envelope from the formula above is less useful.

    The MATLAB formula is used to calculate the envelope:

    [upI, lowI] = envelope(I, x, 'peak');

    And this is plotted below with the I and Q signals:

    envelope1

    Here are the two envelopes depicted without the signal shown. By adjusting the range of interpolation, the envelope can be made smoother; typically it is less desirable for an envelope to follow so many individual carrier oscillations, as in the following, where the range of interpolation is x = 1000.

    envelope2

    Further methods involving filters may also be considered. Below, the I and Q signals are taken through a bandpass filter (to ensure that the data comes from the desired frequency range), and finally a lowpass filter is applied to the envelope to remove higher-frequency oscillation.

    envelope3
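
    A sketch of that filtering approach, assuming the Signal Processing Toolbox bandpass and lowpass functions and invented values for the sample rate and band of interest:

    fs = 1e6;                                   % sample rate (assumed)
    t  = (0:1/fs:5e-3)';
    m  = 1 + 0.5*cos(2*pi*200*t);               % slow envelope to recover
    I  = m.*cos(2*pi*100e3*t) + 0.05*randn(size(t));   % noisy I on a 100 kHz carrier
    Q  = m.*sin(2*pi*100e3*t) + 0.05*randn(size(t));   % noisy Q
    If = bandpass(I, [90e3 110e3], fs);         % keep only the band of interest
    Qf = bandpass(Q, [90e3 110e3], fs);
    E  = lowpass(sqrt(If.^2 + Qf.^2), 2e3, fs); % remove residual carrier ripple
    plot(t, E)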

  • Receiver Dynamic Range June 15, 2020

    Dynamic range is a fairly general term for the ratio (sometimes called the DNR ratio) of the highest acceptable value to the lowest acceptable value of some quantity. It applies to a variety of fields, most notably electronics and RF/Microwave applications, and is typically expressed on a logarithmic scale. Dynamic range is an important figure of merit because weak signals often need to be received as well as stronger ones, all while rejecting unwanted signals.

    Due to spherical spreading of waves and the two-way nature of RADAR, losses experienced by the transmitted signal are proportional to 1/(R^4). This leads to a great variance in return signal level across the dynamic range of the system. For RADAR receivers, mixers and amplifiers contribute the most to the system's dynamic range and Noise Figure (also in dB). The lower end of the dynamic range is limited by the noise floor, which accounts for the accumulation of unwanted environmental and internal noise in the absence of a signal. The total noise floor of a receiver can be estimated by adding the noise figure dB levels of each component. Applying a signal raises the level above the noise floor, and the upper end is limited by the saturation of the amplifier or mixer. For a linear amplifier, the upper end is the 1 dB compression point: below this point the output increases by a constant number of dB for a given dB increase at the input, while at the compression point the actual output has fallen 1 dB below that ideal linear response. Past the 1 dB compression point, the amplifier deviates from the linear pattern.

    123

    The other points in the figure are the third and second order intercept points. Generally, the third order intercept point is the one most often quoted on data sheets, as third order distortion is the most troublesome. These intermodulation products appear at 2f_2 – f_1 and 2f_1 – f_2, so in a sense the third order intercept point is a measure of linearity. As shown in the figure, the third order distortion rises with a slope of 3:1. Extrapolating as if the device remained perfectly linear, the point where the third order distortion line intersects the extrapolated linear output line is (IIP3, OIP3). This intercept point tends to be used as more of a rule of thumb, as the system is assumed to be “weakly nonlinear,” which does not necessarily hold up in practice.

    Often manual gain control or automatic gain control can be employed to achieve the desired receiver dynamic range. This is necessary because there are such a wide variety of signals being received. Often the dynamic range can be around 120 dB or higher, for instance.

    Another term used is spurious free dynamic range. Spurs are unwanted frequency components of the receiver which are generated by the mixer, ADC or any nonlinear component. The quantity represents the distance between the fundamental tone and the largest spur.
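
    As a quick numerical illustration of why the third order products are so troublesome (the tone frequencies are arbitrary), the products land immediately next to the fundamentals:

    f1 = 100.0e6;  f2 = 101.0e6;        % two closely spaced test tones, Hz
    imd3 = [2*f1 - f2, 2*f2 - f1]       % 99 MHz and 102 MHz: in band, hard to filter
    imd2 = [f2 - f1, f2 + f1]           % 1 MHz and 201 MHz: far out of band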

  • Semiconductor Growth Technology: Molecular Beam Epitaxy and MOCVD June 14, 2020

    The development of advanced semiconductor technologies presents one important challenge: fabrication. Two fabrication methods used in bandgap engineering are Molecular Beam Epitaxy (MBE) and Metal Organic Chemical Vapour Deposition (MOCVD).

    Molecular Beam Epitaxy uses an ultra-high vacuum to fabricate compound semiconductor materials. Atoms, or molecules containing the desired atoms, are directed at a heated substrate. Molecular Beam Epitaxy is highly sensitive. The vacuums make use of diffusion pumps or cryo-pumps: diffusion pumps for gas-source MBE and cryo-pumps for solid-source MBE. Effusion cells in the MBE system allow the flow of molecules through small holes without collision. The RHEED source (Reflection High Energy Electron Diffraction) reflects high-energy electrons off the surface to register information about the epitaxial growth, such as surface smoothness and growth rate. The growth chamber is heated to 200 degrees Celsius, while the substrate temperature is kept in the range of 400-700 degrees Celsius.

    MBE is not suitable for large scale production due to the slow growth rate and higher cost of production. However, it is highly accurate, making it highly desired for research and highly complex structures.

    MBE

     

    MOCVD is a more popular method for growing layers on a semiconductor wafer. MOCVD is primarily chemical: elements are delivered as complex chemical compounds containing the desired atoms, and the remains are evaporated. MOCVD does not require a high vacuum. The process can be used for a large number of optoelectronic devices with specific properties, including quantum wells. High-quality semiconductor layers at the micrometer scale are grown using this process. MOCVD involves a number of toxic compounds, including AsH3 and PH3.

    MOCVD is recommended for simpler devices and for mass production.

     

    matscience_1

  • Discrete Time Filters: FIR and IIR June 13, 2020

    There are two basic types of digital filters: FIR and IIR. FIR stands for Finite Impulse Response and IIR stands for Infinite Impulse Response. The output of any discrete time filter can be described by a “difference equation,” similar to a differential equation but containing differences rather than derivatives. The FIR filter is described by a moving average, or weighted sum of past inputs. IIR filter difference equations are recursive in the sense that they include both a weighted sum of past inputs and a weighted sum of past outputs.

    IIR difference equation

    As shown, this specific IIR filter difference equation contains an output term (the first term on the right hand side).

    The FIR has a finite impulse response because it decays to exactly zero in a finite length of time. In the discrete time case, the impulse response is the output of the system for a Kronecker delta (impulse) input. In the IIR case, the impulse response decays but never reaches zero. The system function H(z) of an FIR filter has poles only at z = 0, with zeros elsewhere. The IIR filter is more flexible and can have poles at any location on the pole-zero plot (for stability, they must lie inside the unit circle).
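
    A minimal sketch of the two filter types using MATLAB's filter function (the coefficient values are invented for illustration):

    x = [1 zeros(1,49)];               % Kronecker delta (impulse) input

    b_fir = ones(1,5)/5;               % FIR: 5-tap moving average
    h_fir = filter(b_fir, 1, x);       % impulse response is exactly zero after 5 samples

    a_iir = [1 -0.9];                  % IIR: y[n] = x[n] + 0.9*y[n-1] (recursive)
    h_iir = filter(1, a_iir, x);       % impulse response 0.9^n decays, never reaching zero

    stem(0:49, h_iir)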

    The following is a block diagram of a two stage FIR filter. As shown, there is no recursion, simply a weighted sum. The triangles represent the values of the impulse response at particular times. Diagrams of this sort represent the difference equations and express the output as a weighted sum of the inputs. The z-inverse blocks can be thought of as memory storage blocks in a computer.

    800px-FIR_Filter_(Moving_Average).svg

    In contrast, the IIR filter contains recursion or feedback, as past outputs are fed back and added to the input. This feedback leads to a nontrivial term in the denominator of the transfer function of the filter. The stability of the filter can be tested by observing the pole-zero plot of this transfer function in the z-domain.

    IIR

    Overall, IIR filters have an advantage over FIR filters in implementation efficiency: a lower order IIR filter can achieve the same result as a higher order FIR filter. A lower order filter is less computationally expensive, requiring fewer operations, and hence is often preferable. However, FIR filters have a distinct advantage in ease of design. This mainly comes into play when designing filters with linear phase (constant group delay with frequency), which is very hard to do with an IIR filter.

  • Heterostructures & Carrier Recombination June 12, 2020

    Heterojunction is the term for the interface where two different semiconductor materials meet. A heterostructure is a combination of two or more such materials. Here, we will explore several interesting cases.

    AlGaAs-InGaAs-AlGaAs

    The AlGaAs-InGaAs combination is interesting due to the difference in energy bandgap levels: AlGaAs has a wider bandgap, while InGaAs has a narrower one. Layering two materials with such a stark difference in bandgap makes for an interesting demonstration of a heterostructure.

    Layering a smaller-bandgap material between wider-bandgap material has the effect of trapping both electrons and holes. As shown on the right side of the picture below, the center region, made of InGaAs, exhibits high concentrations of both electrons and holes. This leads to a higher rate of carrier recombination, which can generate photons.

    12Picture2

    Here, the lasing profile of the material under bias:

    2Picture2

    GaAs-InP-GaAs

    8Picture2

    4Picture2

     

    InGaAsP-InGaAs-InP

    A commonly used group of materials is InGaAsP, InGaAs and InP. Unlike the above arrangements, these materials may be lattice-matched. Lattice-matching may be explored in depth later on. Simulations suggest low or non-existent recombination rates. Although this is a heterostructure, one can see that there are no jagged or sudden movements of the conduction and valence bands with respect to each other that would create a discontinuity and result in a high recombination rate.

    inpingaaSInGaAsP

     

  • The Acoustic Guitar – Intro June 11, 2020

    We will continue our study of sound by briefly analyzing the acoustic guitar: an instrument that uses certain physical properties to “amplify” (not strictly true, as no energy is added) sound acoustically, rather than through electromagnetic induction or piezoelectric means (though piezoelectric pickups are common on acoustic-electric guitars). A guitar can be tuned many ways, but standard (E standard) tuning is E-A-D-G-B-E across the six strings from top to bottom, or thickest string to thinnest. The tuning can be changed on the fly, which differentiates the guitar from an instrument like the harp, on which the string tension cannot be adjusted.

    Just as the tuning pegs on a guitar can be loosened or tightened to change the tension, the fretting hand can be used to change the vibrating length of the string. Both of these affect the frequency, or perceived pitch. In fact, two other qualities of the string (density and thickness) also affect the frequency. These can be related through Mersenne's rule:

    unnamed

    As shown, the frequency is inversely proportional to the length of the string, proportional to the square root of the tension (so tightening the string tunes it up), and inversely proportional to the square root of the linear density; for strings of the same material, this makes the frequency inversely proportional to the string diameter.
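
    A rough worked example of Mersenne's rule (the tension and linear density below are assumed values, not measurements):

    L  = 0.648;                  % scale length in meters (a typical guitar value)
    T  = 70;                     % string tension in newtons (assumed)
    mu = 5e-3;                   % linear density in kg/m (assumed)
    f  = (1/(2*L))*sqrt(T/mu)    % fundamental frequency, roughly 91 Hz here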

    The basic operation of the guitar is that plucking or strumming strings will cause a disturbance in the air, displacing air particles and causing buildups of pressure “nodes” and “antinodes”. This leads to the creation of a longitudinal pressure wave which is perceived by the human ear as sound. However, a string on its own does not displace much air, so the rest of the guitar is needed. The soundboard (top) of the guitar acts as an impedance matching network between the string and air by increasing the surface area of contact with the air. Although this does not amplify the sound since no external energy is applied, it does increase the sound intensity greatly. So in a sense the soundboard (typically made of spruce or a good transmitter of sound) can be thought of as something like an electrical impedance matching transformer. The acoustic guitar also employs acoustic resonance in the soundhole. As with the soundboard, the soundhole also vibrates and tends to resonate at lower frequencies. When the air in the soundhole moves in phase with the strings, sound intensity increases by about 3 dB. So basically, the sound is being coupled from the string to the soundboard, from the soundboard to the soundhole and from both the soundhole and soundboard to the external air. The bridge is the part of the guitar that couples the string vibration to the soundboard. This creates a reasonably loud pressure wave.

    In terms of wood, the typical wood used in guitar making has a high stiffness-to-weight ratio. Spruce has an excellent stiffness-to-weight ratio, with a high modulus of elasticity and moderately low density. Rosewood tends to be used for the back and sides of a guitar. The main thing to note here is that the guitar is made of wood because wood does not carry vibrations well; as a result, the air resonates within the guitar body instead, creating a sound that is pleasant to the ear. Another factor, of course, is cost.

    A vibrating string produces a fundamental frequency as well as harmonics and overtones, which lead to a distinct sound. If you fret a string at the twelfth fret, this is the halfway point of the string; the note produced is the first overtone, with double the frequency. It is important to note that the frets of a guitar get closer together as you move towards the bridge. The spacing can be calculated since c = fλ is constant along the string. Each successive note is a factor of 1.0595 (the twelfth root of two) higher in pitch, so each fret shortens the vibrating length by a factor of 1.0595. This continues on, with 1.0595 raised to a higher and higher power depending on which fret is being observed, as sketched below.
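
    Each fret leaves 1/1.0595 of the previous vibrating length, so the twelfth fret sits at exactly half the scale length (the scale length below is an assumed value):

    L0 = 0.648;                  % scale length in meters (assumed)
    n  = (1:12)';
    Ln = L0 ./ 2.^(n/12);        % distance from fret n to the bridge
    d  = L0 - Ln                 % distance of each fret from the nut; d(12) = L0/2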

  • Materials & Photogeneration Rate at 1550 nm June 10, 2020

    We now seek to understand how different materials respond and interact with light. Photogeneration is the rate at which electrons are created through the absorption of light.

    A program is built in ATLAS TCAD to simulate a beam incident on a block of material. A PN junction is used, similar to previous iterations. The code for the Photogeneration Simulator is provided at the end of this article.

    The subject of photogeneration can certainly see a more thorough examination than is provided here. Consider this an introduction and initial exploration.

    GaAs-InP-GaAs PN Junction

    photogen1

    Here we see that a cross section of the unintentionally doped InP region, sandwiched between a GaAs PN junction, exhibits a level of photogeneration, while the GaAs regions do not.

    Adding more layers of other materials, as well as introducing a bias of the structure, we notice that the InP region still exhibits the highest (only) level of photogeneration of the materials tested in this condition. Interestingly, this structure emits light under the conditions tested.

    Picture1

    Also consider that a photogeneration effect may not always be sought. If, for instance, a device is supposed to act as a waveguide, photogeneration provides no benefit, and the beam suffers losses as a result of it.

     

    InGaAsP-InP-InGaAs Heterostructure

    A common set of materials for use in photodetectors is InGaAsP, InP and InGaAs. This particular structure features a simple n-doped InGaAsP layer, unintentionally doped InP and p-doped InGaAs. The absorption of InP was already demonstrated above. InGaAs proves also to exhibit absorption at 1550 nm.

    ingaasPInPInGaAs

     

    go atlas

    Title Photogeneration Simulator

    #Define the mesh
    mesh auto
    x.m l = -2 Spac=0.1
    x.m l = -1 Spac=0.05
    x.m l = 1 Spac=0.05
    x.m l = 2 Spac =0.1

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
    region num=3 bottom thick = 0.5 material = InP NY = 10
    region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

    #Electrode specification
    elec num=1 name=anode x.min=-1.0 x.max=1.0 top
    elec num=2 name=cathode x.min=-1.0 x.max=1.0 bottom

    #Gate Metal Work Function
    contact num=2 work=4.77

    models region=1 print conmob fldmob srh optr fermi
    models region=2 srh optr print conmob fldmob srh optr fermi
    models material=GaAs fldmob srh optr fermi print \
    laser gainmod=1 las_maxch=200. \
    las_xmin=-0.5 las_xmax=0.5 las_ymin=0.4 las_ymax=0.6 \
    photon_energy=1.43 las_nx=37 las_ny=33 \
    lmodes las_einit=1.415 las_efinal=1.47 cavity_length=200

    beam num=1 x.origin=0 y.origin=4 angle=270 wavelength=1550 min.window=-1 max.window=1

    output band.param ramptime TRANS.ANALY photogen opt.intens con.band val.band e.mobility h.mobility band.param photogen opt.intens recomb u.srh u.aug u.rad flowlines

    method newton autonr trap maxtrap=6 climit=1e-6

    #SOLVE AND PLOT
    solve init
    SOLVE B1=1.0
    output band.param ramptime TRANS.ANALY photogen opt.intens con.band val.band e.mobility h.mobility band.param photogen opt.intens recomb u.srh u.aug u.rad flowlines
    outf=diode_mb1.str master
    tonyplot diode_mb1.str
    method newton autonr trap maxtrap=6 climit=1e-6
    LOG outf=electrooptic1.log
    solve vanode = 0.5
    solve vanode = 1.0
    solve vanode = 1.5
    solve vanode = 2.0
    solve vanode = 2.5
    save outfile=diode_mb2.str
    tonyplot diode_mb2.str
    tonyplot electrooptic1.log
    quit

  • Microstrip Antenna – Cavity Model June 9, 2020

    The following is an alternative modeling technique for the microstrip antenna, somewhat similar to the analysis of acoustic cavities. Like all cavities, boundary conditions are important. For the microstrip antenna, the cavity model is used to calculate the radiated fields of the antenna.

    Two boundary conditions will be imposed: PEC and PMC. On the PEC walls, the tangential component of the E field and the normal component of the H field are zero. On the PMC walls, the opposite is true.

    cavity

    This supports the TM (transverse magnetic) mode of propagation, which means the magnetic field is orthogonal to the propagation direction. In order to use this model, a time independent wave equation (Helmholtz equation) must be solved.

    helmholtz

    The solution to any wave equation will have wavelike properties, which means it will be sinusoidal. The solution looks like:

    1234

    Integer multiples of π satisfy the boundary conditions because the vector potential must be at a maximum at the boundaries in x, y and z. The three mode integers cannot all simultaneously be zero. The resonant frequency can be solved as shown:

    res

    The units work out: the square root of the product of the permeability and permittivity in the denominator corresponds to the velocity of propagation (m/s), the 2π term carries units of radians, and the rest of the expression is the magnitude of the k vector, or wave number (rad/m). Together this yields units of inverse seconds, or Hz. Different modes can be found by plugging in various integers and solving for the frequency. The lowest resonant mode is found to be f_010, which is intuitively true because the longest dimension is L (which is in the denominator). The f_000 mode cannot exist, because that would yield a trivial solution of 0 Hz. The field components for the dominant (lowest frequency) mode are given.

    1x
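
    As a quick numerical check of the dominant mode, using the familiar simplification f_010 = c/(2L√εr) (the patch length and permittivity below are invented values, not taken from the figure):

    c0    = 3e8;                       % speed of light, m/s
    L     = 0.04;                      % patch length, m (the longest dimension)
    eps_r = 2.2;                       % substrate relative permittivity (assumed)
    f010  = c0/(2*L*sqrt(eps_r))       % about 2.53 GHz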

     

     

  • HF Antenna Matched Network for a Radio Broadcasting Station June 8, 2020

    The goal of this demonstration is to explain the importance of a matched network and the role of transmission lines (coax) for an HF Antenna matched network. This network is designed for the 20-meter band in the HF domain of the radio frequency region of the electromagnetic spectrum.

    Suppose you have an HF antenna load positioned on a tower. The tower height is a consideration, as a feed coax line will run from the antenna down to (roughly) the bottom of the tower. From there, a second coax line will be connected from the base of the tower to the radio station.

    The reflection coefficient is the measure of how well a network is impedance matched. A matched network means that loss will be minimal. SimSmith is a free tool that is useful for Smith chart matching. In SimSmith, the load (left), the transmission lines (as mentioned in the previous paragraph) and the radio are plotted on the Smith chart.

    unsmith2
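
    For reference, the reflection coefficient and SWR follow directly from the load and line impedances. A minimal sketch with an invented load impedance:

    Z0 = 50;                           % coax characteristic impedance, ohms
    ZL = 75 + 25i;                     % hypothetical antenna load impedance
    gamma = (ZL - Z0)/(ZL + Z0);       % reflection coefficient
    swr   = (1 + abs(gamma))/(1 - abs(gamma))   % 1.0 would be a perfect match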

    The length of T1 was chosen as 18.23 feet, which gives a clear shot for an impedance match towards the center using a stub transmission line.

    unsmith1

    We now add a shorted stub between both coax lines and adjust the length of the excess line until the impedance is matched at the radio station.

    smith1

    smith2

    As shown above, the excess length on the stub is about 6′. Plotting the SWR shows that the system is matched well across the whole band, meaning this station is set up well for HF broadcasting by extra class amateur radio operators.

    swr1

  • Microstrip Patch Antennas Introduction – Transmission Line Model June 7, 2020

    Microstrip antennas (or patch antennas) are extremely important in modern electrical engineering for the simple fact that they can directly be printed to a circuit board. This makes them necessary for things like cellular antennas for GPS, communication with cell towers and bluetooth/WiFi. Patch antennas are notoriously narrowband, especially those with a rectangular shape (patch antennas can have a wide variety of shapes). Patch antennas can be configured as single antennas or in an array. The excitation is usually fed by a microstrip line which usually has a characteristic impedance of 50 ohms.

    One of the most common methods for analyzing microstrip antennas is the transmission line model. It is important to note that the microstrip transmission line does not support the TEM mode, unlike the coaxial cable, which has radial symmetry. The microstrip line supports quasi-TEM: there is a small field component along the direction of propagation. For the purposes of the model, this component can be ignored, and the TEM mode, which has no field component in the direction of propagation, can be used. This reduces the model to:

    microstrip

    Where the effective dielectric constant can be approximated as:

    eff

    The width of the strip must be greater than the height of the substrate. It is important to note that the dielectric constant is not constant with frequency; as a consequence, the above approximation is only valid at lower microwave frequencies.

    Another note for the transmission line model is that the effective length differs from the physical length of the patch. The effective length is longer by 2ΔL due to fringing effects. ΔL can be expressed as a function of the effective dielectric constant.

    123
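
    A sketch of these two approximations using the standard closed-form expressions (the substrate and patch values are invented; W/h > 1 is assumed):

    eps_r = 4.4;  h = 1.6e-3;  W = 9.4e-3;      % assumed substrate and patch width
    eps_eff = (eps_r+1)/2 + (eps_r-1)/2*(1 + 12*h/W)^(-1/2);
    dL = 0.412*h*((eps_eff+0.3)*(W/h+0.264))/((eps_eff-0.258)*(W/h+0.8));
    extension = 2*dL             % effective length exceeds physical length by 2*dL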

     

     

     

  • The Helical Antenna June 6, 2020

    The helical antenna is a frequently overlooked antenna type commonly used for VHF and UHF applications; it provides high directivity, wide bandwidth and, interestingly, circular polarization. Circular polarization provides a huge advantage: if two antennas are both circularly polarized, they will not suffer loss due to polarization mismatch. Circular polarization is a special case of elliptical polarization, and occurs when the electric field vector (which defines the polarization of any antenna) has two components in quadrature with equal amplitudes. In this case, the electric field vector rotates in a circular pattern when observed at the target, whether RHP or LHP (right hand or left hand polarized).

    Generally, the axial mode of the helix antenna is used but normal mode may also be used. Usually the helix is mounted on a ground plane which is connected to a coaxial cable using a N type or SMA connector.

    The helix antenna can be broken down into triangles, shown below.

    traignel

    The circumference of each loop is given by πD. S represents the spacing between loops. When S is zero (and hence the angle of the triangle is zero), the helix reduces to a flat loop. When the angle reaches 90 degrees, the helix reduces to a monopole linear wire antenna. L0 represents the length of one loop, and the total axial length L of the antenna is given as NS, where N is the number of loops. The total conductor length can be calculated by multiplying the number of loops by the length of one loop, L0.

    An important thing to note is that the helix antenna is elliptically polarized by default and must be manually designed to achieve circular polarization for a specific bandwidth. Another note is that the input impedance of the antenna depends greatly on the pitch angle (alpha).

    The axial (endfire) mode, which is more common, occurs when the circumference of the antenna is roughly the size of the wavelength. Circular polarization is easier to achieve in this mode. The normal mode features a much smaller circumference and is more omnidirectional in terms of radiation pattern.

    The Axial ratio is the numerical quantity that governs the polarization. When AR = 1, the antenna is circularly polarized. When AR = ∞ or 0, the antenna is linearly polarized. Any other quantity means elliptical polarization.

    itsover

    The axial ratio can also be approximated by:

    AR

    For axial mode, the radiation pattern is much more directional, as the axis of the antenna contains the bulk of the radiation. For this mode, the following conditions must be met to achieve circular polarization.

    Axial

    These are less stringent than the normal mode conditions.
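
    A design sketch for the axial mode along these lines, using Kraus' commonly quoted rules of thumb (C ≈ λ, S ≈ λ/4, input impedance R ≈ 140·C/λ, AR = (2N+1)/2N; the frequency and turn count are assumed values):

    f      = 435e6;              % design frequency, Hz (an assumed UHF value)
    lambda = 3e8/f;
    C = lambda;                  % circumference of one turn
    D = C/pi;                    % helix diameter
    S = lambda/4;                % spacing between turns
    N = 10;                      % number of turns (assumed)
    R  = 140*C/lambda            % approximate input impedance, ohms
    AR = (2*N+1)/(2*N)           % axial ratio, approaching 1 for large N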

    It is also important to consider that the input impedance of these antennas tends to be higher than the standard impedance of a coaxial line (100-200 ohms compared to 50). Flattening the feed wire of the antenna and covering the ground plane with dielectric material helps achieve a better SWR.

    h

    This equation can be used to calculate the height of the dielectric used for the ground plane. It is dependent on the transmission line characteristic impedance, the strip width and the dielectric constant of the material used.

  • The Superheterodyne Receiver June 5, 2020

    “Heterodyning” is a commonly used term in the design of RF wireless communication systems. It is the process of mixing the input signal with a local oscillator of a nearby frequency in order to produce a lower frequency output signal at the difference of the two frequencies. It is contrasted with “homodyning,” which uses the same frequency for the local oscillator and the input. In a superhet receiver, the RF input and the local oscillator are easily tunable, whereas the output IF (intermediate frequency) is fixed.

    1

    After the antenna, the front end of the receiver consists of a band select filter and an LNA (low noise amplifier). This is needed because the electrical output of the antenna is often as small as a few microvolts and needs to be amplified, but not in a way that leads to a higher Noise Figure. The typical superhet NF should be around 8-10 dB. The signal is then frequency multiplied, or heterodyned, with the local oscillator; in the frequency domain, this corresponds to a shift in frequency. The next filter is the channel select filter, which has a higher quality factor than the band select filter for enhanced selectivity.

    For the filtering, the local oscillator can either be fixed or variable for downconversion to the baseband IF. If it is variable, a variable capacitor or a tuning diode is used. The local oscillator can be higher or lower in frequency than the desired frequency resulting from the heterodyning (high side or low side injection).

    A common issue in the superhet receiver is image frequency, which needs to be suppressed by the initial filter to prevent interference. Often multiple mixer stages are used (called multiple conversion) to overcome the image issue. The image frequencies are given below.

    image

    Higher IF frequencies tend to be better at suppressing the image, as demonstrated by the term 2f_IF. The level of attenuation (in dB) of a receiver to the image is given by the Image Rejection Ratio (the ratio of the output of the receiver from a signal at the received frequency to its output for an equal-strength signal at the image frequency).
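
    A small worked example of the image relationship (the frequencies are invented; high-side injection is assumed):

    f_rf = 100e6;                % desired receive frequency, Hz
    f_if = 10.7e6;               % fixed intermediate frequency, Hz
    f_lo = f_rf + f_if;          % high-side local oscillator
    f_im = f_rf + 2*f_if         % image: abs(f_im - f_lo) also equals f_if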

  • Conduction & Valence Band Energies under Biasing (PN & PIN Junctions) June 4, 2020

    Previously, we discussed the effect of doping concentrations on the energy band gap. The conclusion of this process was that the doping concentration alone does not alter the band gap. The band gap is the difference between the conduction band and valence bands. Under biasing, the conduction and valence bands are in fact affected by doping concentration.

    One way to show how the doping level influences the conduction and valence bands under bias is to compare the energy bands of a PN junction with those of a PIN junction. Simulations of both are presented below. The intrinsic section found between the p-doped and n-doped regions of the PIN junction diode offers a more gradual transition between the two levels. A PN junction exhibits a sharper transition in the conduction and valence band levels simultaneously. A heterostructure, which is made of more than one material (with different band gaps), may produce even greater discontinuities. Depending on the application, a discontinuity may be sought (think: quantum well), while in other situations it may be necessary to smooth the transition between band levels for a desired result.

    The conduction and valence bands are of great importance for determining the carrier concentrations and carrier mobilities in a semiconductor structure. These will be discussed soon.

    PN Junction under biasing (conduction and valence band energies):

    pnjunctionbandenergies

    Code Used (PN Junction):

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e18
    region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e18

     

    PIN Junction Biased:

    pinjunction

    PIN Junction Unbiased:

    pinjunction_unbiased

    Code Used (PIN Junction):

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e18
    region num=3 bottom thick = 0.2 material = GaAs NY = 10
    region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e18

    Here, the carrier concentrations are plotted:

    pinconc

  • RADAR Range Resolution June 3, 2020

    Before delving into the topic of pulse compression, it is necessary to briefly discuss the advantages of pulse RADAR over CW RADAR. The main difference between the two is duty cycle (time high vs. total time): for CW RADARs this is 100%, while for pulse RADARs it is typically much lower. The efficiency comes from the fact that the scattered signal can be observed while the transmitter is quiet, making it much clearer. With CW RADARs (which are much less common than pulse RADARs), since the transmitter is constantly transmitting, the return signal must be read over the transmitted signal. In all cases, the return signal is far weaker than the transmitted signal, due to spreading and absorption by the target, and this leads to difficulties with continuous wave RADAR. Pulse RADARs can also provide high peak power without increasing average power, leading to greater efficiency.

    “Pulse Compression” is a signal processing technique that tries to keep the advantages of pulse RADAR while mitigating its disadvantages. The major dilemma is that the accuracy of RADAR depends on pulse width: a short pulse gives fine range resolution, but illuminates the target with only a small amount of energy. The digital processing of pulse compression grants the best of both worlds: high range resolution while also illuminating the target with greater energy. This is done using Linear Frequency Modulation or “chirp” modulation, illustrated below.

    290px-Linear-chirp.svg

    As shown above, the frequency gradually increases with time (x axis).

    A “matched filter” is a processing technique that optimizes the SNR and outputs a compressed pulse.
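
    A minimal sketch of an LFM chirp and its matched filter (all parameter values are invented for illustration):

    fs = 100e6;                        % sample rate, Hz
    T  = 10e-6;                        % pulse width, s
    B  = 20e6;                         % swept bandwidth, Hz
    t  = (0:1/fs:T)';
    s  = exp(1i*pi*(B/T)*t.^2);        % LFM chirp: frequency rises linearly with time
    mf = conj(flipud(s));              % matched filter: time-reversed conjugate of the pulse
    y  = conv(s, mf);                  % compressed pulse, mainlobe width ~ 1/B instead of T
    plot(abs(y))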

    Range resolution can be calculated as follows:

    Resolution = (c*T)/2,

    where T is the pulse width and c is the speed of light.

    With finer range resolution, a RADAR can distinguish two objects that are very close together. As the formula shows, a shorter pulse gives finer resolution; achieving both fine resolution and high pulse energy requires pulse compression.

    It can also be demonstrated that range resolution is proportional to bandwidth:

    Resolution = c/2B

    This means that with RADARs operating at higher frequencies (which tend to support higher bandwidths), finer resolution can be achieved.

     

     

  • Energy Bandgaps June 2, 2020

    Previously, a PN Junction Simulator in ATLAS program was posted. Now, we will use and modify this program to explore more theory in respect to semiconductor materials, high speed electronics and optoelectronics.

    The bandgap, as mentioned previously is the difference between the conduction band energy and valence band energy. The materials GaAs, InP, AlGaAs, InGaAs and InGaAsP are simulated and the bandgap values for each are estimated (just don’t use these values for anything important).

    • GaAs: ~ 1.2 eV
    • InP: ~ 1.35 eV
    • AlGaAs: ~ 1.8 eV
    • InGaAs: ~0.75 eV
    • InGaAsP: 1.1 eV

    bandgaps

    Here the conduction band and valence band are shown.

    bandgaps2

    The structure used in the PN Junction Simulator is found below:

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
    region num=3 bottom thick = 0.001 material = InP NY = 10
    region num=4 bottom thick = 0.001 material = GaAs NY = 10
    region num=5 bottom thick = 0.001 material = AlGaAs NY = 10 x.composition=0.3 grad.3=0.002
    region num=6 bottom thick = 0.001 material = GaAs NY = 10
    region num=7 bottom thick = 0.001 material = InGaAs NY = 10 x.comp=0.468
    region num=8 bottom thick = 0.001 material = GaAs NY = 10
    region num=9 bottom thick = 0.001 material = InGaAsP NY = 10 x.comp=0.145 y.comp = 0.317
    region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

    Is the bandgap affected by the doping concentration level?

    A quick simulation (below) will tell us that the answer is no. What might influence the bandgap however? And what could the concentration level change?

    bandgap4

    This (above) is a simulation of GaAs with layers at different doping concentration levels. The top left is a contour of the bandgap, which is constant, as expected. The top right is a cross section of this GaAs structure (technically still a pn junction diode); the bandgap is still constant. The bottom two images are the donor and acceptor concentrations.

    The bandgap energy E_g is the amount of energy needed for a valence electron to move to the conduction band. The short answer to the question of how the bandgap may be altered is that the bandgap energy is mostly fixed for a single material. In practice, however, bandgap engineering employs thin epitaxial layers, quantum dots and blends of materials to form a different bandgap. Bandgap smoothing is employed, as are the concentrations of specific elements in ternary and quaternary compounds. The bandgap cannot, however, be altered by changing the doping level of the material.

  • PN Junction Simulator in ATLAS June 1, 2020

    This post will outline a program for ATLAS that can simulate a pn junction. The mesh definition and structure between the anode and cathode will be defined by the user. The simulator plots both an unbiased and biased pn junction.

    go atlas

    Title PN JUNCTION SIMULATOR

    #Define the mesh

    mesh auto
    x.m l = -2 Spac=0.1
    x.m l = -1 Spac=0.05
    x.m l = 1 Spac=0.05
    x.m l = 2 Spac =0.1

    #TOP TO BOTTOM – Structure Specification
    region num=1 bottom thick = 0.5 material = GaAs NY = 20 acceptor = 1e17
    region num=2 bottom thick = 0.5 material = GaAs NY = 20 donor = 1e17

    #Electrode specification
    elec num=1 name=anode x.min=-1.0 x.max=1.0 top
    elec num=2 name=cathode x.min=-1.0 x.max=1.0 bottom
    #Gate Metal Work Function
    contact num=2 work=4.77
    models region=1 print conmob fldmob srh optr
    models region=2 srh optr
    material region=2

    #SOLVE AND PLOT
    solve init outf=diode_mb1.str master
    output con.band val.band
    tonyplot diode_mb1.str

    method newton autonr trap maxtrap=6 climit=1e-6
    solve vanode = 2.5 name=anode
    save outfile=diode_mb2.str
    tonyplot diode_mb2.str
    quit

    This program may also be useful for understanding how different materials interact in a PN junction. The simulation below is for a simple GaAs pn junction.

    The first image shows four contour plots for the pn junction with an applied 2.5 volts. With this bias applied, the recombination rate is high at the junction, while there is low recombination throughout the unbiased pn junction. The hole and electron currents are plotted on the bottom left and bottom right, respectively.

    pnjunction_biased

    Here is the pn junction with no biasing.

    pnjunction_unbiased

    The beam profile can also be obtained:

    beamprof

  • ATLAS TCAD: Simulation of Frequency Response from Light Impulse May 31, 2020

    Recently a project was posted for a high speed photodetector. Part of that project was to develop a program that takes the frequency response of a light impulse. My thought is to create a program that can perform these tasks, including an impulse response for any structure.

    Generic Light Frequency Response Simulator Program in ATLAS TCAD

    The first part of the program should include all the particulars of the structure that is being simulated:

    go atlas

    [define mesh]

    [define structure]

    [define electrodes]

    [define materials]

    Then, the beam is defined. x.origin and y.origin describe where the beam originates on the 2D x-y plane. The angle shown, 270 degrees, means that the beam will be facing upwards. One may think of this angle as starting on the right hand side of the x-y coordinate plane and moving clockwise. The wavelength is the optical wavelength of the beam, and the window defines how wide the beam will be.

    beam num=1 x.origin=0 y.origin=5 angle=270 wavelength=1550 min.window=-15 max.window=15

    The program now should run an initial solution and set the conditions (such as if a voltage is applied to a contact) for the frequency response.

    METHOD HALFIMPL

    solve init
    outf = lightpulse_frequencyresponse.str
    LOG lightpulse_frequencyresponse.log

    [simulation conditions such as applied voltage]

    LOG off

    Now the optical pulse is simulated as follows:

    LOG outf=transient.log
    SOLVE B1=1.0 RAMPTIME=1E-9 TSTOP=1E-9 TSTEP=1E-12
    SOLVE B1=0.0 RAMPTIME=1E-9 TSTOP=20E-9 TSTEP=1E-12

    tonyplot transient.log

    outf=lightpulse_frequencyresponse.str master onefile
    log off

    The optical pulse “transient.log” is simulated using Tonyplot at the end of the program. It is a good idea to separate transient plots from frequency plots to ensure that these parameters may be chosen in Tonyplot. Tonyplot does not give the option to use a parameter if it is not the object that is being solved before saving the .log file.

    log outf=frequencyplot.log
    FOURIER INFILE=transient.log OUTFILE=frequencyplot.log T.START=0 T.STOP=20E-9 INTERPOLATE
    tonyplot frequencyplot.log
    log off

    output band.param ramptime TRANS.ANALY photogen opt.intens con.band val.band e.mobility h.mobility band.param photogen opt.intens recomb u.srh u.aug u.rad flowlines

    save outf=lightpulse_frequencyresponse.str
    tonyplot lightpulse_frequencyresponse.str

    quit

    Now you can focus on the structure and mesh for a light impulse frequency response. Note that adjustments may be warranted on the light impulse and beam.

    And so, here is a structure simulation that could be done easily using the process above.

    trr

     

  • High Speed UTC Photodetector Simulation with Frequency Response in TCAD May 30, 2020

    The following is a TCAD simulation of a high speed UTC photodetector. An I-V curve is simulated for the photodetector, forward and reverse. A light beam is simulated to enter the photodetector. The photo-current response to a light impulse is simulated, followed by a frequency response in TCAD.

    Structure:

    121

    I-V Curve

    1211

    Beam Simulation Entering Photodetector:

    12111

     

    Light Impulse:

    121111

    Frequency Response in ATLAS:

    1211111

    The full project (pdf) is here: ece530_final_mbenker

     

  • Sinusoidal and Exponential Sequences, Periodicity of Sequences May 29, 2020

    Continuing our discussion on discrete-time sequences, we now come to define exponential and sinusoidal sequences. The general formula for a discrete-time exponential sequence is as follows:

    x[n] = Aα^n.

    This exponential behaves differently according to the value of α. If the sequence starts at n=0, the formula is as follows:

    x[n] = Aα^n * u[n].

    expo

    If α is a complex number, the exponential function exhibits new characteristics. The envelope of the exponential is |α|. If |α| < 1, the system is decaying; if |α| > 1, the system is growing.

    cexpo

    When α is complex, the sequence may be analyzed as follows, using the definition of Euler’s formula to express a complex relationship as a magnitude and phase difference.

    Captu56 ma

    where ω0 is the frequency and φ is the phase. Because the index n is always an integer, a complex exponential sequence of the form Ae^(jω0n) is indistinguishable from one with frequency ω0 + 2πk; only frequencies within an interval of width 2π are distinct.

    A sinusoidal sequence is defined as follows:

    x[n] = Acos(ω0*n + φ), for all n, and A, φ are real constants.

    Periodicity for discrete-time signals means that the sequence will repeat itself for a certain delay, N.

    x[n] = x[n+N] : the sequence is periodic.
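
    A quick check of this condition: a discrete sinusoid cos(ω0·n) is periodic only if ω0·N is a multiple of 2π for some integer N. For ω0 = 0.3π, N = 20 works:

    n = 0:40;
    x = cos(0.3*pi*n);                       % w0*N = 0.3*pi*20 = 6*pi, a multiple of 2*pi
    all(abs(x(1:21) - x(21:41)) < 1e-12)     % returns true: x[n] = x[n+20]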

    t = (-5:1:15)';

    impulse = t==0;
    unitstep = t>=0;
    Alpha1 = -0.5;
    Alpha2 = 0.5;
    Alpha3 = 2.5;
    Alpha4 = -2.5;
    cAlpha1 = -0.5 - 0.5i;
    cAlpha2 = 0.5 + 0.5i;
    cAlpha3 = 2.5 - 2.5i;
    cAlpha4 = -2.5 + 2.5i;
    A = 1;

    Exp1 = A.*unitstep.*Alpha1.^t;
    Exp2 = A.*unitstep.*Alpha2.^t;
    Exp3 = A.*unitstep.*Alpha3.^t;
    Exp4 = A.*unitstep.*Alpha4.^t;

    cExp1 = A.*unitstep.*cAlpha1.^t;
    cExp2 = A.*unitstep.*cAlpha2.^t;
    cExp3 = A.*unitstep.*cAlpha3.^t;
    cExp4 = A.*unitstep.*cAlpha4.^t;

    %%
    figure(1)
    subplot(2,1,1)
    stem(t, impulse)
    xlabel('x')
    ylabel('y')
    title('Impulse')

    subplot(2,1,2)
    stem(t, unitstep)
    xlabel('x')
    ylabel('y')
    title('Unit Step')
    %%
    figure(2)
    subplot(2,2,1)
    stem(t, cExp1)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = -0.5 - 0.5i')

    subplot(2,2,2)
    stem(t, cExp2)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = 0.5 + 0.5i')

    subplot(2,2,3)
    stem(t, cExp3)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = 2.5 - 2.5i')

    subplot(2,2,4)
    stem(t, cExp4)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = -2.5 + 2.5i')
    %%
    figure(3)
    subplot(2,2,1)
    stem(t, Exp1)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = -0.5')

    subplot(2,2,2)
    stem(t, Exp2)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = 0.5')

    subplot(2,2,3)
    stem(t, Exp3)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = 2.5')

    subplot(2,2,4)
    stem(t, Exp4)
    xlabel('n')
    ylabel('x[n]')
    title('Exponential: alpha = -2.5')

     

     

  • Mathematical Formulation for Antennas: Radiation Integrals and Auxiliary Potentials May 28, 2020

    This short paper will attempt to clarify some useful mathematical tools for antenna analysis that seem overly “mathematical” but can aid in understanding antenna theory. A solid background in Maxwell’s equations and vector calculus would be helpful.

    Two sources will be introduced: the electric and magnetic sources (E and M respectively). These will be integrated either to obtain the electric and magnetic fields directly, or to obtain vector potentials, which are then differentiated to obtain the E and H fields. We will use A for the magnetic vector potential and F for the electric vector potential.

    Using Gauss’ laws (first two equations) for a source free region:

    cfr

    And also the identity:

    1

    It can be shown that:

    2

    This is the case of the magnetic field in response to the magnetic vector potential A: because the divergence of B and the divergence of the curl of A both equal zero, the two can be equated. The same can be done with Gauss's law of electricity (the first equation) and the divergence of the curl of F.
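
    Restating that step in symbols: since the divergence of B and the divergence of any curl both vanish identically, B may be defined through a curl of A:

    \nabla \cdot \mathbf{B}_A = 0, \quad \nabla \cdot (\nabla \times \mathbf{A}) = 0 \;\Rightarrow\; \mathbf{B}_A = \nabla \times \mathbf{A}, \quad \mathbf{H}_A = \frac{1}{\mu}\,\nabla \times \mathbf{A}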

    Using Maxwell’s equations (not necessary to know how) the following can be derived:

    3

    For total fields, the two auxiliary potentials can be summed. In the case of the Electric field this leads to:

    4

    The following integrals can be used to solve for the vector potentials, if the current densities are known:

    5

    For some cases, the volume integral is reduced to a surface or line integral.

    An important note: most antenna calculations and also the above integrals are independent of distance, and therefore are done in the far field (region greater than 2D^2/λ, where D is the largest dimension of the antenna).

    The familiar duality theorem from Fourier Transform properties can be applied in a similar way to Maxwell’s equations, as shown.

    mxw

    In the chart, Faraday's Law, Ampere's Law, the Helmholtz equations and the above mentioned integrals are shown. To be perfectly honest, I think the top right equation is wrong: I believe it should have permittivity rather than permeability.

    Another important antenna property is reciprocity: the receive and transmit radiation patterns are the same, given that the medium of propagation is linear and isotropic. This can be compared to the reciprocity theorem of circuits, which says that a voltmeter and source can be interchanged if a constant current or voltage source is used and the circuit components are linear, bilateral and discrete elements.

     

  • Discrete-Time Impulse and Unit Step Functions May 27, 2020

    Discrete-Time Signals are understood as a set or sequence of numbers. These sequences possess magnitudes or values at a given index.

    One mark of Discrete-Time Signals is that the index value is an integer. Thus, the sequence will have a magnitude or value for a whole number index such as -5, -4, 0, 6, 10000, etc.

    A discrete-time signal represented as a sequence of numbers takes the following form:

    x[n] = {x[n]},          -∞ < n < ∞,

    where n is any real integer (the index).

    A sampled analog signal is described by its values at times nT, where T is the sampling period. The sampling frequency is the inverse of the sampling period.

    x[n] = X_a(nT),      -∞ < n < ∞.

     

    Common Sequences

    A very simple and important sequence is the unit sample sequence, also called the “discrete time impulse” or simply “impulse”: it is equal to 1 at index zero and equal to zero otherwise.

    12

    The discrete time impulse can be used to describe an entire sequence using delayed impulses. A sequence may be shifted or delayed using the following relation:

    y[n] = x[n – n0],

    where n0 is an integer (the number of indices by which the sequence is delayed). The impulse function, delayed to any index and multiplied by the value of the sequence at that index, can describe any discrete-time sequence. The general formula for this relationship is:

    122

    The unit step sequence is related to the unit impulse. The unit step sequence is equal to zero for all indices less than zero and equal to one for all indices greater than or equal to zero.

    1222

    The unit step sequence is therefore equal to a sequence of delta impulses with a zero and greater delay.

    u[n] = δ[n] + δ[n-1] + δ[n-2] + . . .

    12222

    The unit impulse can also be represented by unit step functions:

    δ[n] = u[n] – u[n-1].

    Below I've plotted both the impulse and unit step functions in MATLAB.

    122222

    t = (-10:1:10)';
    
    impulse = t==0;
    unitstep = t>=0;
    
    figure(1)
    subplot(2,1,1)
    stem(t, impulse)
    xlabel('x')
    ylabel('y')
    title('Impulse')
    figure(1)
    subplot(2,1,2)
    stem(t, unitstep)
    xlabel('x')
    ylabel('y')
    title('Unit Step')

     

     

  • Image Resolution May 26, 2020

    Consider that we are interested in building an optical sensor. This sensor contains a number of pixels, which is dependent on the size of the sensor. The sensor has two dimensions, horizontal and vertical. Knowing the size of the pixels, we will be able to find the total number of pixels on this sensor.

    The horizontal field of view, HFOV, is the total angle of view about the normal of the sensor. The effective focal length, EFL, of the sensor is then:

    Effective Focal Length: EFL = V / (tan(HFOV/2)),

    where V is the vertical sensor size (in meters, not in number of pixels) and HFOV is the horizontal field of view. The field of view angle is halved to account for the fact that the HFOV extends to both sides of the normal of the sensor.

    The system resolution using the Kell Factor: R = 1000 * KellFactor * (1 / (PixelSize)),

    where the pixel size is typically given, and the Kell factor (less than 1) approximates a best-case real-world result, accounting for aberrations and other potential issues.

    Angular resolution: AR = R * EFL / 1000,

    where R is the resolution using the Kell factor and EFL is the effective focal length. It is possible to compute the angular resolution using either pixels per millimeter or cycles per millimeter, however one would need to be consistent with units.

    Minimum field of view: Δl = 1.22 * f * λ / D,

    which was used previously for the calculation of the spatial resolution of a microscope. The minimum field of view is exactly a different wording for the minimum spatial resolution, or minimum size resolvable.

    Below is a MATLAB program that computes these parameters while sweeping the diameter of the lens aperture. The wavelength admittedly may not be appropriate for a microscope, but let's say that you are looking for something in the infrared spectrum. Maybe you are trying to view some tiny laser beams, of the kind used in the telecom industry, at 1550 nanometers.

    Pixel size: 3 um. HFOV: 4 degrees. Sensor size: 8.9mm x 11.84mm.

    2245225
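
    A rough sketch of such a program, consistent with the formulas above (the variable names are my own, the Kell factor is an assumed value, and the pixel size is taken in micrometers so that R comes out per millimeter):

    pixel_um = 3;                      % pixel size, micrometers
    kell     = 0.7;                    % assumed Kell factor
    HFOV     = 4*pi/180;               % horizontal field of view, radians
    V        = 8.9e-3;                 % sensor size used in the EFL formula, m

    EFL = V/tan(HFOV/2);               % effective focal length, m
    R   = 1000*kell/pixel_um;          % system resolution with the Kell factor
    AR  = R*EFL/1000;                  % angular resolution, as defined above

    lam = 1550e-9;                     % wavelength, m
    f   = EFL;                         % assume the object sits at the focal distance
    D   = (1:50)*1e-3;                 % sweep of lens aperture diameters, m
    dl  = 1.22*f*lam./D;               % minimum resolvable size at each aperture
    plot(D*1e3, dl*1e6)
    xlabel('Aperture D (mm)'), ylabel('Minimum field of view (um)')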

  • Spatial Resolution of a Microscope May 25, 2020

    Angular resolution describes the smallest angle between two objects that are able to be resolved.

    θ = 1.22 * λ / D,

    where λ is the wavelength of the light and D is the diameter of the lens aperture.

    Spatial resolution on the other hand describes the smallest object that a lens can resolve. While angular resolution was employed for the telescope, the following formula for spatial resolution is applied to microscopes.

    Spatial resolution: Δl = θf = 1.22 * f * λ / D,

    where θ is the angular resolution, f is the focal length (assumed to be distance to object from lens as well), λ is the wavelength and D is the diameter of the lens aperture.

    223

     

    The Numerical Aperture (NA) is a measure of the ability of the lens to gather light and resolve fine detail. In the case of fiber optics, the numerical aperture applies to the maximum acceptance angle of light entering a fiber. The full angle subtended by the lens at its focus is θ = 2α, where α is shown in the first diagram.

    Numerical Aperture for a lens: NA = n * sin(α),

    where n is the index of refraction of the medium between the lens and the object. Further,

    sin(α) = D / (2d).

    The resolving power of a microscope is related.

    Resolving power: x = 1.22 * d * λ / D,

    where d is the distance from the lens aperture to the region of focus.

    224

    Using the definition of NA,

    Resolving power: x = 1.22 * d * λ / D = 1.22 * λ / (2sin(α)) = 0.61 * λ / NA.
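
    A small worked example of the last relation (the wavelength and half-angle are assumed values):

    lambda = 550e-9;             % wavelength, m
    n      = 1.0;                % medium between lens and object (air)
    alpha  = 35*pi/180;          % half-angle of the focused cone (assumed)
    NA = n*sin(alpha);           % numerical aperture, about 0.57 here
    x  = 0.61*lambda/NA          % resolving power, about 0.59 micrometers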

     

  • Telescope Resolution & Distance Between Stars using the Rayleigh Limit May 24, 2020

    Previously, the Rayleigh Criterion and the concept of maximum resolution was explained. As mentioned, Rayleigh found this formula performing an experiment with telescopes and stars, exploring the concept of resolution. This formula may be used to determine the distance between two stars.

    θ = 1.22 * λ / D.

    Consider a telescope with a lens diameter of 2.4 meters, observing stars of visible white light at approximately 550 nanometer wavelength. The minimum resolvable distance between two stars may be calculated as follows, for stars approximately 2.0 million lightyears away from the lens.

    θ = 1.22 * (550*10^(-9)m)/(2.4m)

    θ =2.80*10^(-7) rad

    Distance between two objects (s) at a distance away (r), separated by angle (θ): s = rθ

    s = rθ = (2.0*10^(6) ly)*(2.80*10^(-7)) = 0.56 ly.

    This means that, for this lens size, star distance and wavelength, two stars would need to be separated by at least 0.56 lightyears to be distinguishable from one another.

    telescope

  • Diffraction, Resolution and the Rayleigh Criterion May 23, 2020

    The wave theory of light includes the understanding that light diffracts as it moves through space, bending around obstacles and interfering with itself constructively and destructively. Diffraction grating disperses light according to wavelength. The intensity pattern of monochromatic light going through a small, circular aperture will produce a pattern of a central maximum and other local minima and maxima.

    diffraction

    The wave nature of light and the diffraction pattern of light plays an interesting role in another subject: resolution. The light which comes through the hole, as demonstrated by the concept of diffraction, will not appear as a small circle with sharply defined edges. There will appear some amount of fuzziness to the perimeter of the light circle.

    Consider if there are two sources of light that are near to each other. In this case, the light circles will overlap each other. Move them even closer together and they may appear as one light source. This means that they cannot be resolved: the resolution is not high enough for the two to be distinguished from one another.

    Capture6543

    Considering diffraction through a circular aperture the angular resolution is as follows:

    Angular resolution: θ = 1.22 * λ/D,

    where λ is the wavelength of light, D is the diameter of the lens aperture, and the factor 1.22 corresponds to the resolution limit formulated and empirically tested through telescope observations and astronomical measurements by John William Strutt, a.k.a. Rayleigh, hence the “Rayleigh Criterion.” This factor sets the minimum angle at which two objects remain distinguishable.

  • Optical Polarizers in Series May 22, 2020

    The following problems deal with polarizers: devices used to alter the polarization of an optical wave.

    1. Unpolarized light of intensity I is incident on an ideal linear polarizer (no absorption). What is the transmitted intensity?

      Unpolarized light contains all possible angles relative to the linear polarizer. On a two dimensional plane, the linear polarizer will transmit only the component of light intensity found along the axis of polarization. Therefore, the intensity of light emitted from a linear polarizer, given incident unpolarized light, will be half the intensity of the incident light.

    2. Four ideal linear polarizers are placed in a row with the polarizing axes vertical, 20 degrees to vertical, 55 degrees to vertical, and 90 degrees to vertical. Natural light of intensity I is incident on the first polarizer.

      a) Calculate the intensity of light emerging from the last polarizer.

      b) Is it possible to reduce the intensity of transmitted light (while maintaining some light transmission) by removing one of the polarizers?

      c) Is it possible to reduce the intensity of transmitted light to zero by removing a polarizer(s)?

      a) Using Malus’s Law, the intensity of light from a polarizer is equal to the incident intensity multiplied by the cosine squared of the angle between the polarization of the incident light and the polarizer’s axis. This formula is used in subsequent calculations (below). The intensity of light from the last polarizer is 19.8% of the incident light intensity.

      b) By removing polarizer three, the total intensity is reduced to 0.0516 times the incident intensity.

      c) In order to achieve an intensity of zero on the output of the polarizer, there will need to exist an angle difference of 90 degrees between two of the polarizers. This is not achievable by removing only one of the polarizers, however it would be possible by removing both the second and third polarizer, leaving a difference of 90 degrees between two polarizers.

     

    Capturepol
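
    The cascade in part (a) can be checked with a short C# sketch of Malus’s Law applied stage by stage:

    using System;
    // Ideal linear polarizers in series; axis angles in degrees relative to vertical.
    double[] axes = { 0, 20, 55, 90 };
    double I = 0.5;    // unpolarized input: the first polarizer passes half the intensity
    for (int k = 1; k < axes.Length; k++)
    {
        double dTheta = (axes[k] - axes[k - 1]) * Math.PI / 180.0;
        I *= Math.Pow(Math.Cos(dTheta), 2);    // Malus's Law for each stage
    }
    Console.WriteLine($"Transmitted fraction: {I:F3}");   // ~0.199, i.e. the post's 19.8%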


  • Jones Vector: Polarization Modes May 21, 2020

    The Jones Vector is a method of describing the direction of polarization of light. It uses a two element matrix for the complex amplitude of the polarized wave. The polarization of a light wave can be described in a two dimensional plane as the cross section of the light wave. The two elements in the Jones Vector are a function of the angle that the wave makes in the two dimensional cross section plane of the wave as well as the amplitude of the wave.

    CaptureXNTA

    The amplitude may be separated from the ‘mode’ of the vector. The mode of the vector describes only the direction of polarization. Below is a first example with a linear polarization in the y direction.

    Capturet5

    Using the Jones Vector the mode can be calculated for any angle. See calculations below:

    Capture553

    The phase differences of the Jones Vector are plotted for a visual representation of the mode. If the two components of the mode differ in phase, the plot depicts a circular or elliptical pattern that intersects both components of the mode on a two dimensional plot. The simplest plot to understand is a polarization with a 90 degree phase difference: both magnitudes of the components of the mode will be 1, and a full circle is drawn to connect these points of the mode. The zero phase difference case is demonstrated at 45 degrees, where both sin(45°) and cos(45°) equal 0.707; here the plot is a straight line, indicating that the two components of the polarization are in phase.

    Capture554
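
    A minimal C# sketch of a normalized Jones vector [cos(θ); sin(θ)e^(iδ)], using System.Numerics.Complex; the 45 degree angle and 90 degree phase difference are the example values discussed above:

    using System;
    using System.Numerics;
    // Jones vector mode for field angle theta and phase difference delta.
    double theta = 45 * Math.PI / 180.0;
    double delta = 90 * Math.PI / 180.0;    // 90-degree phase difference -> circular mode
    Complex jx = Math.Cos(theta);
    Complex jy = Math.Sin(theta) * Complex.Exp(Complex.ImaginaryOne * delta);
    Console.WriteLine($"Jx = {jx}, Jy = {jy}");   // ~0.707 and ~0.707i: circular polarization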


  • Acoustics and Sound: The Vocal Apparatus May 20, 2020

    The study of modulation of signals for wireless transmission can, to some extent, be applied to the human body. In the RF wireless world, a “carrier” signal of a high frequency has a “message” encoded onto it (the message signal) in some form or fashion. This is then transmitted through a medium (generally air) as a radio frequency electromagnetic wave.

    The vocal apparatus of the human body performs a similar function. The lungs forcibly expel air in a steady stream comparable to a carrier wave. This steady stream gets encoded with information by periodically varying its velocity and pressure into two forms of sound: voiced and unvoiced. Voiced sounds produce vowels and are modulated by the larynx and vocal cords. The vocal cords are bands with a narrow slit between them which are flexed in certain ways to produce sounds. Tightening of the cords produces a higher pitch, and loosening or relaxing produces a lower pitch. In general, thicker vocal cords will produce deeper voices. The relaxation oscillation produced by this effect converts a steady air flow into a periodic pressure wave. Unvoiced sounds do not use the vocal cords.

    The tightness of the vocal cords produces a fundamental frequency which characterizes the tone of voice. In addition, resonating cavities above and below the larynx have certain resonant frequencies which also contribute to the tone of voice through inharmonic frequencies, as these are not necessarily spaced evenly.

    Although the lowest frequency is the fundamental and most recognizable tone within the human voice, higher frequencies tend to be of a greater amplitude. Different sounds produced will of course have different spectrum characteristics. This is demonstrated in the subsequent image.

    (image: frequency spectra of vowel sounds)

    The “oo” sound appears to contain a prominent 3rd harmonic, for example. In none of these sounds is the fundamental of highest amplitude. The image also shows how varying the position of the tongue as well as the constriction or release of the larynx contributes to the spectrum.

    It is interesting to note the difference between male and female voices: male voices contain more harmonic content. This is because lower multiples of the fundamental are more strongly represented in the male voice and are spaced close to one another in the frequency domain.

     

  • The Cavity Magnetron May 19, 2020

    The operation of a cavity magnetron is comparable to a vacuum tube: a nonlinear device that was mostly replaced by the transistor. The vacuum tube operates using thermionic emission, in which a material with a high melting point is heated until it expels electrons. When the work function of the material is overcome through thermal energy transferred to electrons, these particles can escape the material.

    Magnetrons are composed of two main elements: the cathode and the anode. The cathode is at the center and contains the filament, which is heated to create the thermionic emission effect. The outside part of the anode acts as a one-turn inductor providing a magnetic field that bends the movement of the electrons in a circular manner. If not for the magnetic field, the electrons would simply be expelled outward. The magnetic field sweeps the electrons around, exciting the resonant cavities of the anode block.

    The resonant cavities behave much like a passive LC filter circuit which resonate a certain frequency. In fact, the tipped end of each resonant cavity looks much like a capacitor storing charge between two plates, and the back wall acts an inductor. It is well known that a parallel resonant circuit has a high voltage output at one particular frequency (the resonant frequency) depending on the reactance of the capacitor and inductor. This can be contrasted with a series resonant circuit, which has a current peak at the resonant frequency where the two devices act as a low impedance short circuit. The resonant cavities in question are parallel resonant.
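
    As a rough illustration of the parallel-resonance idea, here is a C# sketch with guessed equivalent L and C values (actual cavity values depend on geometry and are not given in this post):

    using System;
    // Resonant frequency of the cavity's equivalent parallel LC circuit.
    double L = 1e-9;      // equivalent inductance of the cavity back wall [H] (assumed)
    double C = 4.4e-12;   // equivalent capacitance of the vane gap [F] (assumed)
    double f0 = 1 / (2 * Math.PI * Math.Sqrt(L * C));
    Console.WriteLine($"f0 = {f0 / 1e9:F2} GHz");   // ~2.40 GHz with these values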

    Just like the soundhole of a guitar, the resonant frequency of the magnetron’s cavity is determined by the size of the cavity. Therefore, the magnetron should be designed to have a resonant frequency that makes sense for the application. For a microwave oven, the frequency should be around 2.4GHz for optimum cooking. For an X-band RADAR, this should be closer to 10GHz. An interesting aspect of the magnetron is that when one cavity is excited, the adjacent cavity is also excited, 180 degrees out of phase.

    The magnetron generally produces wavelengths of around several centimeters (roughly 10 cm in a microwave oven). It is known as a “crossed field” device, because the electrons are under the influence of both electric and magnetic fields, which point in orthogonal directions. An antenna is attached for the radiation to be expelled. In a microwave oven, the microwaves are guided into the cooking chamber using a metallic waveguide.

    unnamed

     

  • Optical Polarization, Malus’s Law, Brewster’s Angle May 18, 2020

    In the theory of wave optics, light may be considered as a transverse electromagnetic wave. Polarization describes the orientation of an electric field on a 3D axis. If the electric field exists completely on the x-axis plane for example, light is considered to be polarized in this state.

    Non-polarized light, such as natural light, may change angular position randomly or rapidly. The process of polarizing light uses the property of anisotropy and the physical mechanisms of dichroism (selective absorption), reflection or scattering. A polarizer is a device that utilizes these properties. Light exiting a polarizer that is linearly polarized will be parallel to the transmission axis of the polarizer.

    Circular.Polarization.Circularly.Polarized.Light_Circular.Polarizer_Creating.Left.Handed.Helix.View.svg

     

    Malus’s law states that the transmitted intensity after an ideal polarizer is

    I(θ) = I_0 * cos^2(θ),

    where the angle refers to the angle difference between the incident wave and the transmission axis of the polarizer.

    Brewster’s Angle, which follows from the Fresnel equations, is the angle of incidence at which the ray or wave transmitted into a material travels at a 90 degree angle to the reflected ray or wave. This situation is true only when the Brewster’s Angle condition is met. In that scenario, the angle between the incident ray and the surface normal equals the angle between the reflected ray and the surface normal (as always), and the transmitted ray is perpendicular to the reflected ray.

    Picture2

    If the Brewster’s Angle condition is met, the reflected ray will be completely polarized. This angle is also termed the polarization angle. The polarization angle is a function of the refractive indices of the two media: tan(θ_B) = n_2 / n_1.

    Picture3
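
    A quick C# check of the polarization angle for an assumed air-to-glass interface:

    using System;
    // Brewster (polarization) angle from the two refractive indices.
    double n1 = 1.00;   // incident medium: air (assumed)
    double n2 = 1.50;   // transmitting medium: glass (assumed)
    double thetaB = Math.Atan(n2 / n1) * 180.0 / Math.PI;
    Console.WriteLine($"Brewster angle = {thetaB:F1} deg");   // ~56.3 deg for air to glass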


  • Fourth Generation Optics: Thin-Film Voltage-Controlled Polarization May 17, 2020
    Michael Benker
    ECE591 Fundamentals of Optics & Photonics
    April 20, 2020

    Introduction

    Dr. Nelson Tabiryan of BEAM Engineering for Advanced Measurements Co. delivered a lecture to explain some of the latest advances in the field of optics. The fourth generation of optics, in short includes the use of applied voltages to liquid crystal technology to alter the polarization effects of micrometer thin film lenses. Both the theory behind this type of technology as well as the fabrication process were discussed.

     

    First Three Generations of Optics

    A summary of the four generations of optics is of value to understanding the advancements of the current age. Optics is understood by many as one of the oldest branches of science. The first generation, geometrical or refractive optics, is categorized by applications of phenomena observable by the human eye and uses shape and refractive index to direct and control light.

    The second generation of optics included the use of graded index optical components and metasurfaces. This solved the issue of needing to use exceedingly bulky components although it would be limited to narrowband applications. One application is the use of graded index optical fibers, which could allow for a selected frequency to reflect through the fiber, while other frequencies will pass through.

    Anisotropic materials gave rise to the third generation of optics, which produced technologies that made use of birefringence modulation. Applications included liquid crystal displays, electro-optic modulators and other technologies that could control material properties to alter behavior of light.

     

    Fourth Generation Optics

    To advance optics technology, there are several key features needed for output performance. A modernized optics should be broadband, allowing many frequencies of light to pass. It should be highly efficient and micrometer thin, and it should also be switchable. Such technology now exists.

    Molecule alignment in liquid crystalline materials is essential to the theory of fourth generation optics. The polarization characteristics of the lens are determined by molecule alignment. As such, one can build a crystal or lens that has twice the refractive index for light polarized in one direction. This device is termed the half wave plate, which acts on light waves polarized parallel and perpendicular to the optical axis of the crystal. Essentially, for one direction of polarization a full-period sinusoid is transmitted through the half wave plate, but with a reversed-sign exit angle, while for the other direction of polarization only half a period is allowed through. Because the device can differentiate the sign of the input angle to the polarization axis, it can alter the output polarization and the direction of the outgoing wave as a function of the circular direction of polarization of the incident wave.

    The arrangement of molecules on these micrometer-thin lenses is not only able to alter the output direction according to polarization, but can also make the lens act as a converging or diverging lens. The output wave, a result of the arrangement of molecules in the liquid crystal lens, has practically an endless number of uses, and the lens can be configured to behave as any graded index lens one might imagine. An applied voltage controls the molecular alignment.

    How does the lens choose which molecular alignment to use when switching? The answer is that, during the fabrication process, every molecular alignment that the user plans on employing or switching to is prepared in advance. These are termed diffractive waveplates.


    Problem 1.

    4go1

    The second lens is equivalent to the first (left) lens rotated 180 degrees. In the case of a polarization-controlled birefringence application, one would expect lens 2 to exhibit the opposite output direction for the same input wave polarization as lens 1. For lens 1 (left), clockwise circularly polarized light will exit at an angle towards the right, while counterclockwise circularly polarized light exits at an angle to the left. This is reversed for lens 2.


    Problem 2.

    4go2

    There are as many states as there are diffractive waveplates. If there are six waveplates, then there will be 6 states to choose from.

     

  • LED Simulation in Atlas May 16, 2020

    This post features an LED structure simulated in ATLAS. The goal will be to demonstrate why this structure may be considered an LED. Light Emitting Diodes and Laser Diodes both serve as electronic-to-photonic transducers. Of importance to the operation of LEDs is the radiative recombination rate.

    The following LED structure is built using the following layers (top-down):

    • GaAs: 0.5 microns, p-type: 1e15
    • AlGaAs: 0.5 microns, p-type: 1e15, x=0.35
    • GaAs: 0.1 microns, p-type: 1e15, LED
    • AlGaAs: 0.5 microns, n-type: 1e18, x=0.35
    • GaAs: 2.4 microns, n-type: 1e18

    This structure uses alternating GaAs and AlGaAs layers.

     

    06_0106_02

  • Pulsed Lasers and Continuous-Wave Lasers May 15, 2020

    Continuous-Wave (CW) lasers emit a constant stream of light energy. The power emitted is typically not very high, not exceeding kilowatts. Pulsed lasers were designed to produce much higher peak power output through the use of cyclical short bursts of optical power with intervals of zero optical power output. There are several important parameters to explore in relation to the pulsed laser in particular.

    The period of the laser pulse Δt is the duration from the start of one pulse to the start of the next pulse. The inverse of the period Δt is the repetition rate or repetition frequency. The pulse width τ is calculated as the 3dB (half power) drop-off width.

    The duty cycle, an important concept in signals and systems for periodic pulsed systems, is the ratio of the pulse duration to the duration of the period. Interestingly, the continuous-wave laser can be considered as a pulsed laser with 100% duty cycle.

    CaptureA

    Several important relations govern power and pulse energy:

    • Average Power: the product of Peak pulsed power, repetition frequency and the pulse width
    • Pulsed Energy: Average power divided by the repetition frequency

    Other formulations of these parameters are found above.
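
    These relations are easy to sanity-check in a short C# sketch; the peak power, repetition rate and pulse width below are illustrative assumptions:

    using System;
    // Pulsed-laser bookkeeping: duty cycle, average power, pulse energy.
    double peakPower = 1e6;    // peak pulsed power [W] (assumed)
    double repRate = 1e3;      // repetition frequency [Hz] (assumed)
    double tau = 10e-9;        // pulse width [s] (assumed)
    double dutyCycle = tau * repRate;               // fraction of each period the laser is on
    double avgPower = peakPower * repRate * tau;    // average power [W]
    double pulseEnergy = avgPower / repRate;        // energy per pulse [J]
    Console.WriteLine($"duty = {dutyCycle:E1}, Pavg = {avgPower} W, E = {pulseEnergy} J");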

     

  • Monochromaticity, Narrow Spectral Width and High Temporal & Spatial Coherence May 14, 2020

    A laser is a device that emits light through a process of optical amplification based on stimulated emission of electromagnetic radiation. A laser has high monochromaticity, narrow spectral width and high temporal coherence. These three qualities are interrelated, as will be shown.

    Monochromaticity is a term for a system, particularly in relation to light that references a constant frequency and wavelength. With the understanding that color is a result of frequency and wavelength, a monochromatic system also means that a single color is selected. A good laser will have only one output wavelength and frequency, typically referred to in relation to the wavelength (i.e. 1500 nanometer wavelength, 870 nanometer wavelength).

    A monochromatic system, ideally made of only one frequency, is a single sinusoid function. A constant-frequency sinusoid plotted in the frequency domain will have a line width approaching zero.

    Cap00e!

    The time τ over which the wave behaves as a perfect sinusoid is related to the spectral line width. If the sinusoid has an infinite time-domain presence, the spectral line width is zero, and the frequency domain plot in this scenario is a perfect impulse.

    If two frequencies are present in the time domain, the system is not monochromatic, which violates one of the principles of a perfect laser.

    CaptureXU

    Temporal coherence is essentially a different perspective on the same relation between monochromaticity and narrow spectral width. Coherence is the ability to predict the value of a system. Temporal coherence means that, given information about the time of the system, the position or value of the system should be predictable. Given a sinusoid with a long time-domain presence, the value of the sinusoid will be predictable at any given time value. This is one condition of a proper laser.

    Spatial coherence takes a value of distance as a given. If the system is highly spatially coherent, the value of the system at a certain distance should be predictable. This is also a condition of a proper laser, and it is one differentiating point between a laser and an LED: light emitted from an LED may travel at any angle at any time, so its propagation is not predictable at a given time or distance. An LED does not produce coherent light; the laser does.
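
    One way to quantify the link between linewidth and temporal coherence is the order-of-magnitude relation τ_c ~ 1/Δν, sketched in C# below; exact prefactors depend on the lineshape, so treat this as an estimate rather than a definitive formula:

    using System;
    // Narrower spectral width -> longer coherence time and coherence length.
    double c = 3e8;                    // speed of light [m/s]
    double linewidthHz = 1e6;          // assumed 1 MHz laser linewidth
    double tauC = 1 / linewidthHz;     // coherence time estimate [s]
    double Lc = c * tauC;              // coherence length [m]
    Console.WriteLine($"tau_c = {tauC:E1} s, L_c = {Lc:F0} m");   // ~1E-006 s, ~300 m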

  • AlGaAs/GaAs Strip Laser May 13, 2020

    This project features a heterostructure semiconductor strip laser, comprised of a GaAs layer sandwiched between p-doped and n-doped AlGaAs. The model parameters are outlined below. The structure is presented, followed by output optical power as a function of injection current. Thereafter, contour plots are made of the laser to depict the electron and hole densities, recombination rate, light intensity and the conduction and valence band energies.

     

    1

    2345678

  • Quality Factor May 12, 2020

    Quality factor is an extremely important fundamental concept in electrical and mechanical engineering. An oscillator (active) or resonator (passive) can be described by its Q-factor, which is inversely proportional to bandwidth. For these devices, the Q factor describes the damping of the system. In some instances, it is better to have either a lower or a higher quality factor. For instance, a guitar body should have a lower quality factor: a high-Q guitar would not amplify frequencies evenly. To lower the quality factor, complex or irregular shapes are introduced into the instrument body. However, the soundhole of a guitar (a Helmholtz resonator) has a very high quality factor to increase its frequency selectivity.

    A very important area of discussion is the Quality Factor of a filter. Higher Q filters have higher peaks in the frequency domain and are more selective. The quality factor is really only valid for a second order filter, which is based on a second order differential equation and contains both an inductor and a capacitor. At a certain frequency, the reactances of the capacitor and inductor cancel, leading to a strong output current (lower total impedance). For a tuned circuit, the Q must be very high and is considered a “Figure of Merit”.

    In terms of equations, the quality factor can be thought of in many different ways. It can be thought of as the ratio of “reactive” or wasted power to average power. It can also be thought of as the ratio of center frequency to bandwidth (NOTE: This is the FWHM bandwidth in which only frequencies that are equal to or greater than half power are part of the band). Another common equation is 2π multiplied by the ratio of energy stored in a system to energy lost in one cycle. The energy dissipated is due to damping, which again shows that Q factor is inversely related to damping, in addition to bandwidth.

    Q can also be expressed as a function of frequency:

    1

    The full relationship between Q factor and damping can be expressed as the following:

    When Q = 1/2, the system is critically damped (such as with a door damper). The system does not oscillate. This is also when the damping ratio is equal to one. The main difference between critical damping and overdamping is that in critical damping, the system returns to equilibrium in the minimum amount of time.

    When Q > 1/2 the system is underdamped and oscillatory. With a small quality factor, an underdamped system may only oscillate for a few cycles before dying out. Higher Q factors will oscillate longer.

    When Q < 1/2 the system is overdamped. The system does not oscillate but takes longer to reach equilibrium than critical damping.


  • Bragg Gratings May 11, 2020

    Bragg gratings are commonly used in optical fibers. Generally, an optical fiber has a relatively constant refractive index throughout. With an FBG (Fiber Bragg Grating), the refractive index is varied periodically within the core of the fiber. This can allow certain wavelengths to be reflected while all others are transmitted.

    spec

    The typical spectral response is shown above. It is clear that only a specific wavelength is reflected, while all others are transmitted. Bragg Gratings are typically only used in short lengths of the optical fiber to create a sort of optical filter. The only wavelength to be reflected is the one that is in phase with the Bragg grating distribution.

    A typical usage of a Bragg Grating is for optical communications as a “notch filter”, which is essentially a band stop filter with a very high Quality factor, giving it a very narrow range of attenuated frequencies. These fibers are generally single mode, which features a very narrow core that can only support one mode as opposed to a wider multimode fiber, which can suffer from greater modal distortion.

    The “Bragg Wavelength” can be calculated by the equation:

    λ_B = 2 * n * Λ,

    where n is the refractive index and Λ is the period of the Bragg grating. This wavelength can also be shifted by stretching the fiber or exposing it to varying temperature.
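
    A quick C# check of the Bragg condition; the effective index and period below are assumed values chosen to land near 1550 nm:

    using System;
    // Bragg wavelength: lambda_B = 2 * n * Lambda.
    double n = 1.447;           // effective index of a silica core (assumed)
    double period = 535.6e-9;   // grating period [m] (assumed)
    double lambdaB = 2 * n * period;
    Console.WriteLine($"Bragg wavelength = {lambdaB * 1e9:F0} nm");   // ~1550 nm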

    These fibers are typically made by exposing the core to a periodic pattern of intense laser light, which permanently increases the refractive index periodically. This effect, in which extreme electromagnetic radiation permanently changes the refractive index, is the photosensitivity of the fiber.

     

  • Photodetectors and Dark Current May 10, 2020

    A photodetector is simply a device that converts light energy to an electrical current. These devices are very similar to lasers, although they are designed to operate in reverse bias. “Dark current” is a term that originates from this reverse bias condition. When you reverse bias any diode, there is some leakage current, which is appropriately named reverse bias leakage current. For photosensitive devices it is called dark current because no light absorption is involved. The main cause of this current is random generation of electrons and holes in the depletion region. Ideally, this dark current is negligible.

    1

    The basic structure of the photodiode is the “PIN” structure, similar to a semiconductor laser diode. An intrinsic (undoped) region occurs between the P-doped and N-doped region.  Although PIN diodes are poor rectifiers, they are much better suited for high speed, high frequency applications due to the high level injection process. The wide intrinsic region provides a lowered capacitance at high frequencies. For photodetectors, the process is photon energy being absorbed into the depletion region, causing an electron hole pair to be created when the electron moves to a higher energy level (from valence to conduction band). This is what causes an electrical current to be created from light.

    Photodetectors are “photoconductive”: conductivity changes with applied light. Like amplifiers and other devices, photodetectors have “Figures of Merit” which signify characteristics of the device. These will be briefly examined.

    Quantum Efficiency

    Quantum efficiency refers to the number of carriers generated per photon. It is normally denoted by η. It can also be stated as carrier flux/incident photon flux. Sometimes anti-reflection coatings are applied to photodetectors to increase QE.

    Responsivity

    Responsivity is closely related to the QE (quantum efficiency). The units are amperes/watt. It can also be seen as the “input-output gain” of any photosensitive or photodetecting device; for amplifiers, the analogous quantity is known as “gain”. Responsivity can be increased by maximizing the quantum efficiency.
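
    Responsivity follows from quantum efficiency as R = η * q * λ / (h * c). A short C# sketch with assumed values for η and λ:

    using System;
    // Responsivity of a photodetector from its quantum efficiency.
    double q = 1.602e-19;      // electron charge [C]
    double h = 6.626e-34;      // Planck constant [J*s]
    double c = 3e8;            // speed of light [m/s]
    double eta = 0.8;          // quantum efficiency (assumed)
    double lambda = 1550e-9;   // wavelength [m] (assumed)
    double R = eta * q * lambda / (h * c);
    Console.WriteLine($"Responsivity = {R:F2} A/W");   // ~1.00 A/W at 1550 nm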

    Response Time

    This is the time required for the photodiode to increase its output from 10% to 90% of final output level.

    Noise Equivalent power

    This value has units of Watts/sqrt(Hz). It is another measure of the sensitivity of the device: the optical power that produces a signal-to-noise ratio of one in a 1 Hz output bandwidth. A small NEP corresponds to increased sensitivity of the device.

  • Carrier Recombination May 9, 2020

    Carrier recombination is an effect in which electrons and holes (carriers) interact with each other in a way that eliminates both particles. The energy given off in this process is related to the difference between the energy of the initial and final state of the electron that moves during the process. Recombination can be stimulated by temperature changes, or by exposure to light or electric fields. Radiative recombination occurs when a photon is emitted in the process. Non-radiative recombination occurs when a phonon (a quantum of lattice vibration) is given off rather than a photon. A special case known as “Auger recombination” causes kinetic energy to be transferred to another electron.

    1

    Band to band recombination occurs when an electron moves from one band to another. In thermal equilibrium, the carrier generation rate is equal to the recombination rate. This type of recombination is dependent on carrier density. In a direct bandgap material, this will radiate a photon.

    A foreign atom or a defect in the material can form “traps”, which can capture an electron when the particle falls into them. Essentially, trap-assisted recombination is a two-step transitional process, as opposed to the one-step band to band transition. This is sometimes known as R-G center recombination. Such two-step recombination is known as “Shockley Read Hall” recombination. This is typically indirect recombination, which emits lattice vibrations rather than light.

    The final type is Auger Recombination caused by collisions. These collisions between carriers transfer motional energy to another particle. One of the main reasons why this is distinct from the other two types is that this transfer of energy also causes a change in the recombination rate. Like the previous type, this tends to be non radiative.

    A distinction should be made for band-to-band recombination between stimulated and spontaneous emission. Spontaneous emission is not started by a photon, but rather due to temperature or some other means (sometimes called luminescence). As stated in a previous post, stimulated emission is what emits coherent light in lasers, however spontaneous emission is responsible for most light emission in general.

  • Rayleigh Scattering May 8, 2020

    Rayleigh scattering is the scattering of light or electromagnetic radiation by particles much smaller in size than the wavelength. For example, when photons from sunlight enter the earth’s atmosphere, scattering occurs. The average wavelength of sunlight is around 500nm, which is in the visible light spectrum. However, sunlight also contains infrared waves and, of course, ultraviolet radiation. Interestingly enough, Rayleigh scattering influences the color of the sky through diffuse sky radiation.

    The reason why a comparatively huge wavelength (compare roughly 500 nm light with nitrogen and oxygen molecules, which are only hundreds of picometers) can scatter off such a small particle is electromagnetic interaction. When the nitrogen/oxygen molecules vibrate at a certain frequency, the photons interact and vibrate at the same frequency. The molecule essentially absorbs and reradiates the energy, scattering it. Because the horizontal direction is the primary direction of vibration, the air scatters the sunlight. The polarization is dependent on the direction of the incoming sunlight. The intensity is proportional to the inverse of the wavelength to the fourth power: the shorter the wavelength, the more scattering. This explains why the sky is blue: blue is scattered more strongly by Rayleigh scattering due to its higher frequency (smaller wavelength). The sky is not dark blue because other wavelengths are also scattered, just much less so.
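
    The inverse fourth power dependence is easy to check in C#; the representative wavelengths below are assumptions:

    using System;
    // Relative Rayleigh scattering strength: intensity proportional to 1/lambda^4.
    double blue = 450e-9, red = 700e-9;    // representative wavelengths [m] (assumed)
    double ratio = Math.Pow(red / blue, 4);
    Console.WriteLine($"Blue scatters ~{ratio:F1}x more strongly than red");   // ~5.9x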

    emagspec

    Rayleigh Scattering is quite important in optical fibers. Because the silica glass have microscopic differences in the refractive index within the material, Rayleigh scattering occurs which leads to losses. The following coefficient determines the scattering.

    scattering

    The equation shows that the scattering coefficient is proportional to the isothermal compressibility (β), the photoelastic coefficient, the refractive index and the fictive temperature, and is inversely proportional to the fourth power of the wavelength.

    Rayleigh scattering accounts for 96% of attenuation in optical fibers. In a perfectly pure fiber, this would not occur. The scattering centers are typically atoms or molecules, so in comparison to the wavelength they are quite small. The Rayleigh scattering sets the lower limit for propagation loss. In low loss fibers, the attenuation is close to the Rayleigh scattering level, such as in Silica Fibers optimized for long distance propagation.

  • The Electronic Oscillator May 7, 2020

    The semiconductor laser is a device that can be compared to an electronic oscillator. An oscillator can be thought of as a resonator (a circuit that resonates or produces a strong output at a specific frequency) with gain. Resonators naturally decay over time by some factor, so adding in gain (so long as the gain is greater than or equal to the loss) can allow the resonator to become an oscillator that does not decay or dampen.

    The oscillations of an oscillator are initially stimulated by electronic noise. A block diagram can demonstrate an oscillator in an abstract, easier to understand way.

    block

    The oscillator is built using an amplifier (transistor that is biased into active/saturation region) or op amp with positive and negative feedback. Noise in the circuit begins the oscillation, and this output is fed back into the input and is filtered along the way. This becomes an oscillation at a single frequency.

    Oscillators can be built from RC circuits, LC circuits or can be crystal oscillators. RC circuit oscillators tend to be lower frequency oscillators in the audio range. The LC oscillator is often compared to the laser in terms of functionality. The negative reactance of the capacitor and positive inductive reactance cancel at a specific frequency, leaving the circuit with only resistance and a strong current is achieved. LC oscillators are much more important for RF/microwave purposes. A crystal oscillator produces its frequency through mechanical vibrations and has a much higher Q factor than the other resonator types, which provides greater temperature and frequency stability.

    Two very important oscillator types for RF/microwave/mmWave circuits are dielectric resonators and SAW (surface acoustic wave) resonators. Dielectric resonators are mainly used as mmWave oscillators to drive antennas. They are generally made of a “puck” of ceramic which oscillates at a certain frequency dependent on its dimensions. Waves are confined inside the material due to an abrupt change in the permittivity. When the waves inside interfere and produce a standing wave, this increase of amplitude creates the resonance effect. SAW resonators are often used in cell phones and have distinct advantages over the LC oscillator or other types due to cost and size.

    In a semiconductor laser (laser diode), the source of oscillations is the noise generated by spontaneous emission. Spontaneous emission is the result of recombination of electron and hole pairs within the material which produces photons. This spontaneous emission is how lasers begin their operation, and this is continued by stimulated emission. Stimulated emission is electron hole recombination due to photon energy which also produces a photon. The light emitted by this type of emission is coherent, a characteristic of a laser.

  • Deriving Newton’s Lens Equation for a diverging lens May 6, 2020

    For a Diverging lens, derive a formula for the output angle with respect to the refractive indices and the input angle. Assume the paraxial approximation and a thin lens.

    C

     

    For a Diverging lens, construct a derivation of Newton’s lens equation x_o*x_i = f^2.

    Ca
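
    For reference, a short derivation of Newton’s form from the thin lens equation, measuring x_o from the front focal point and x_i from the back focal point (so o = f + x_o and i = f + x_i):

    \frac{1}{o} + \frac{1}{i} = \frac{1}{f}
    \;\Rightarrow\; f(o + i) = o\,i
    \;\Rightarrow\; f(2f + x_o + x_i) = (f + x_o)(f + x_i) = f^2 + f(x_o + x_i) + x_o x_i
    \;\Rightarrow\; 2f^2 = f^2 + x_o x_i
    \;\Rightarrow\; x_o x_i = f^2.

    For the diverging lens the same algebra holds with f negative.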

  • Pseudomorphic HEMT May 5, 2020

    The Pseudomorphic HEMT makes up the majority of High Electron Mobility Transistors, so it is important to discuss this device type. The pHEMT differentiates itself in many ways, including its increased mobility and distinct quantum well shape. The basic idea is to create a lattice mismatch in the heterostructure.

    A standard HEMT is a field effect transistor formed through a heterostructure rather than PN junctions. This means that the HEMT is made up of compound semiconductors instead of traditional silicon (as in the MOSFET). The heterojunction is formed when two materials with different band gaps between valence and conduction bands are combined. GaAs (with a band gap of 1.42eV) and AlGaAs (with a band gap of 1.42 to 2.16eV) are a common combination. One advantage of this structure is that the lattice constant is almost independent of the material composition (the fractions of each element represented in the material). An important distinction between the MESFET and the HEMT is that in the HEMT, a triangular potential well is formed, which reduces Coulomb scattering effects. Also, the MESFET modulates the thickness of the inversion layer while keeping the density of charge carriers constant; with the HEMT, the opposite is true. Ideally, the two compound semiconductors grown together have the same or almost similar lattice constants to mitigate the effects of discontinuities. The lattice constant refers to the spacing between the atoms of the material.

    However, the pseudomorphic HEMT purposely violates this rule by using an extremely thin layer of one material which stretches over the other. For example, InGaAs can be combined with AlGaAs to form a pseudomorphic HEMT. A huge advantage of the pseudomorphic structure is that there is much greater flexibility when choosing materials. This provides double the maximum density of the 2D electron gas (2DEG). As previously mentioned, the field mobility also increases. The image below illustrates the band diagram of this pHEMT. As shown, the discontinuity between the bandgaps of InGaAs and AlGaAs is greater than between AlGaAs and GaAs. This is what leads to the higher carrier density as well as increased output conductance. This provides the device with higher gain and higher current for more power when compared to the traditional HEMT.

    1

    The 2DEG is confined in the InGaAs channel, shown below. Pulse doping is generally utilized in place of uniform doping to reduce the effects of parasitic current. To increase the conduction band discontinuity ΔEc, higher Indium concentrations can be used, which requires that the layer be thinner. The Indium content tends to be around 15-25% to increase the density of the 2DEG.

    1

  • Parameter Analysis of the MESFET, Channel Width Calculation May 4, 2020

    Engineering design regularly involves an analysis of the formulae behind the various parameters of a system one is trying to build or improve. Some parameters are static, such as particular qualities of the materials being used. Perhaps a constraint or goal is placed on the system, such as achieving function at a certain frequency or reducing the size as much as possible. Today, many programs exist that can perform complicated calculations for the engineer. Constructing a problem or calculation that produces the desired result may require more attention.

    The MESFET uses a contact between n-doped semiconductor material with highly n-doped semiconductor material to form a junction field effect transistor. The great advantage of not using a p-doped semiconductor material is that the transistor can be built without using hole transfer. Since hole transfer is much slower than electron transfer, the MESFET can function much faster than other types of transistors.

    For the MESFET, it may not be possible to examine all parameters. Consider first the following:

    eqmesfet

    Potential variation along the channel (notice the similarity of the following to Ohm’s law, V=IR):

    potentialvariation

    Where the resistance along the channel is:

    res2

    Depletion Width (also referenced in the above formula) under the gate:

    depletionwidth

    Pinch-off Voltage:

    vpinch

    Threshold Voltage:

    re4

    Built-in Potential:

    builtin

    The above formulas alone would be enough to put to use. While constructing a MESFET, it was found that the doping concentration of donor electrons in the channel plays an important role. N_D, the donor doping concentration, is found in most of the above formulas. The doping concentration is of particular importance, since it can be directly manipulated. The pinch-off voltage and the donor concentration are directly proportional. By obtaining estimates for (or knowing the values of) the other parameters, it would be possible to perform a parameter sweep of the MESFET system over doping concentration. This method may become critical for optimizing semiconductor device designs.

     

    MESFET Design Problem

    Let’s say we want to calculate the channel width of an n-channel GaAs MESFET with a gold Schottky barrier contact. The barrier height (φ_bn) is 0.89 V. The temperature is 300 K. The n-channel doping N_d is 2*10^15 cm^(-3). Design the channel thickness such that V_T = +0.25V.

    mesfetcalc1
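
    For reference, a C# sketch of this design calculation. The GaAs constants used here (Nc ≈ 4.7e17 cm^-3 and εr ≈ 13.1 at 300 K) are standard textbook values assumed for this sketch rather than taken from the post:

    using System;
    // Channel thickness a such that V_T = V_bi - V_p = +0.25 V.
    double kT = 0.0259;                 // thermal voltage at 300 K [V]
    double Nc = 4.7e17;                 // GaAs conduction-band density of states [cm^-3] (assumed)
    double Nd = 2e15;                   // channel donor doping [cm^-3]
    double phiBn = 0.89;                // Schottky barrier height [V]
    double eps = 13.1 * 8.85e-14;       // GaAs permittivity [F/cm] (assumed er = 13.1)
    double q = 1.6e-19;                 // electron charge [C]
    double VT = 0.25;                   // target threshold voltage [V]
    double phiN = kT * Math.Log(Nc / Nd);              // Ec - EF in the channel [V]
    double Vbi = phiBn - phiN;                         // built-in potential [V]
    double Vp = Vbi - VT;                              // required pinch-off voltage [V]
    double a = Math.Sqrt(2 * eps * Vp / (q * Nd));     // channel thickness [cm]
    Console.WriteLine($"Vbi = {Vbi:F3} V, a = {a * 1e4:F2} um");   // ~0.60 um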

  • GaAs MESFET Designs May 3, 2020

    A GaAs MESFET structure was built using Silvaco TCAD:

    • Channel donor concentration: 2e17
    • Channel thickness: 0.1 microns
    • Bottom layer: p-doped GaAs (5 microns thick, 1e15 acceptor doping)
    • Gate length: 0.3 microns
    • Gate metal work function: 4.77 eV
    • Separation between the source and drain electrodes: 1 micron

    p4struct

    The IV curves are as follows. Of primary importance are the two bottom curves, for gate voltages of -0.2V and -0.5V. The top curve is for 0V; operating above this would be undesirable for the MESFET.

    p4iv

    Now, in terms of designing a MESFET, there is a large amount of theory that one may need to grasp to build one from scratch – you would probably first start by building one similar to a more common iteration. That said, there are a number of parameters that one may wish to tweak or achieve, to name a few: saturation current, threshold voltage, transit frequency, maximum frequency and pinch-off voltage.

    The iteration above does not show a highly doped region under the source and drain contacts. The separation between source and drain may also be increased and the size of the gate decreased.

    08

    Channel doping level was found to make a significant difference in overall function. The channel must be doped to a certain level, otherwise the structure may not behave properly as a transistor.

    go atlas

    Title GaAs MESFET

    # Define the mesh

    mesh auto
    x.m loc = 0 Spac=0.1
    x.m loc = 1 Spac=0.05
    x.m loc = 3 Spac=0.05
    x.m loc = 4 Spac=0.1

    # n region

    region num=1 bottom thick = 0.1 material = GaAs NY = 10 donor = 2e17

    # p region

    region num=2 bottom thick = 5 material = GaAs NY = 4 acceptor = 1e15

    # Electrode specification
    elec num=1 name=source x.min=0.0 x.max=1.0 top
    elec num=2 name=gate x.min=1.95 x.max=2.05 top
    elec num=3 name=drain x.min=3.0 x.max=4 top

    doping uniform conc=5.e18 n.type x.left=0. x.right=1 y.min=0 y.max=0.05
    doping uniform conc=5.e18 n.type x.left=3 x.right=4 y.min=0 y.max=0.05

    #Gate Metal Work Function
    models fldmob srh optr fermidirac conmob print EVSATMOD=1
    contact num=2 work=4.77

    # specify lifetimes in GaAs and models
    material material=GaAs taun0=1.e-8 taup0=1.e-8
    method newton

    solve vdrain=0.5
    LOG outf=proj2mesfet500mVm.log
    solve vgate=-2 vstep=0.25 vfinal=0 name=gate
    save outf=proj2mesft.str
    #Plotting
    output band.param photogen opt.intens con.band val.band

    tonyplot proj2mesft.str
    tonyplot proj2mesfet500mVm.log
    quit

  • Basic Energy Band Theory May 2, 2020

    Band theory is essential in the study of solid state physics. The basic idea tends to center around two bands: the conduction and valence band (for reasons discussed later on). Between the two bands is a forbidden energy level (Energy gap) which depends on the resistivity or conductance of the material. In order to fully understand solid state devices such as transistors or solar cells, this must be discussed.

    For a single atom, electrons occupy discrete energy levels. When two atoms join together to form a diatomic molecule (such as hydrogen), their orbitals overlap. The Pauli Exclusion Principle states that no two electrons can have the same set of quantum numbers (keep in mind that there are four types of quantum numbers). This means that when two atoms combine, the atomic orbitals must split so that no two electrons have the same energy. For a macroscopic piece of a solid, however, the number of atoms is quite high (on the order of 10^22) and therefore the number of energy levels is also high. For this reason, adjacent energy levels are almost continuous, forming an energy band. The main bands under consideration are the valence band (the outermost band involved in chemical bonding) and the conduction band, because the inner electron bands are so narrow. Band gaps or “forbidden zones” are leftover energy levels that are not covered by a band.

    In order to apply band theory to a solid, the medium must be homogeneous, or evenly distributed. The size of the material must be considerable as well, which is not unreasonable considering the number of atoms in an appreciable piece of a solid. The assumptions also include that electrons do not interact with phonons or photons.

    The “density of states” is a function that describes the number of states per unit volume, per unit energy. It is represented by a Probability Density function.

    A Fermi-Dirac distribution function demonstrates the probability of a state of energy being filled with an electron. The probability is given below.

    (image: Fermi-Dirac distribution function)

    The μ is generally expressed as EF, the Fermi energy level or total chemical potential. kT is the familiar thermal energy, the product of the Boltzmann constant and the temperature. From this equation it is clear that at absolute zero temperature, the exponential term increases to infinity for energies above the Fermi level, causing the occupation probability to trend to zero. This leads to the conclusion that semiconductors behave as insulators at 0K.

    The density of electrons can be calculated by multiplying this value with the density of states function and integrating over all energy.
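
    The occupation probability itself is easy to evaluate; a minimal C# sketch, where the Fermi level value is an assumed example:

    using System;
    // Fermi-Dirac occupation probability f(E) = 1 / (1 + exp((E - EF)/kT)).
    double kT = 0.0259;   // thermal energy at 300 K [eV]
    double EF = 0.56;     // Fermi level [eV] (assumed)
    for (int i = 0; i <= 4; i++)
    {
        double E = 0.40 + 0.08 * i;   // sample energies around EF [eV]
        double f = 1.0 / (1.0 + Math.Exp((E - EF) / kT));
        Console.WriteLine($"E = {E:F2} eV -> f(E) = {f:F4}");
    }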

    Band-gap engineering is the process of changing a material’s band gap. This is usually done to semiconductors by changing the composition of alloys in the material.

  • Object Oriented Programming and C#: Dictionaries/Hash Tables May 1, 2020

    A “dictionary” in C# is an ADT (abstract data type) that maps “keys” to “values”. Normally with an array, the values within the collection are accessed using indexes. For the dictionary, instead of indexes there are keys. Another name for a dictionary is a “hash table”, although the distinction can be made in the sense that the hash table is a non-generic type and the dictionary is a generic type. The namespace required for dictionaries is the “System.Collections.Generic” namespace.

    The dictionary is initialized much like a list (dynamic array); however, the dictionary takes two type parameters (“TKey”, “TValue”). The first is the data type of the key and the second the data type of the value. Similarly to dynamic arrays, entries can be added to the dictionary using the “Add(key, value)” method, and an entry can be removed using the “Remove(key)” method. It is important to note that keys do not have to be integers, unlike indexes: they can be of nearly any data type. However, a dictionary cannot contain duplicate keys.

    The functionality of a dictionary in C# is similar to a physical dictionary. A dictionary contains words and their definitions and analogously, a programming dictionary maps a key (word) to a value (definition).

    The following program illustrates adding values to a dictionary. The key is of type integer and the value of type string. The values “one”, “two” and “three” are added with corresponding integer keys.

    1

    Much like with arrays, a “foreach” statement can be used to iterate over all the values of a dictionary.

    2

    It is important to note that for a hash table, the relationship between a key and its value must be one-to-one. When different keys have the same hash value, a “collision” occurs. In order to resolve the collision, a linked list can be created to chain elements at a single location.

    An important concept with hash tables: the speed of lookup does not depend on size. For arrays, in order to find a specific value, a linear search must be performed, which takes a long time if the array is very long. With a hash table, size does not matter because hashing takes constant time. The “ContainsKey()” method can be used to find a specific key without the need for a linear search.

    When would you use a dictionary/hash table over a list? Dictionaries can be helpful in instances where indexes have special meaning. A particular use of a dictionary could be to count the words in a text using the “String.Split()” method and adding each word to the dictionary. In this instance, the “foreach” statement could easily be used to iterate over every value and find the number of words. In short, the dictionary maps meaningful keys to values whereas the list simply maps indexes to values.
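
    As a concrete illustration of the word-count idea, here is a minimal C# example; the text and variable names are just for demonstration:

    using System;
    using System.Collections.Generic;

    // Count word occurrences by mapping each word (key) to a count (value).
    string text = "the quick brown fox jumps over the lazy dog the end";
    var counts = new Dictionary<string, int>();
    foreach (string word in text.Split(' '))
    {
        if (counts.ContainsKey(word))
            counts[word]++;          // existing key: bump the count
        else
            counts.Add(word, 1);     // new key: start at one
    }
    foreach (KeyValuePair<string, int> pair in counts)
        Console.WriteLine($"{pair.Key}: {pair.Value}");   // "the: 3", etc.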

  • The Half Wave Dipole Antenna April 30, 2020

    The dipole is a type of linear antenna which commonly features two quarter-wavelength monopole elements bent at 90 degree angles away from the feed line, giving a total length of a half wavelength. Another common size for the dipole is 1.25λ. These sizes will be discussed later.

    It is important for beginning the study of the dipole antenna to discuss the infinitesimal dipole. This is a dipole smaller than 1/50 of the wavelength, also known as a Hertzian dipole. It is an idealized component which does not exist, although it can serve as an approximation for large antennas which can be broken into smaller segments. The mathematics behind this can be found in “Antenna Theory: Analysis and Design” by Constantine Balanis.

    More importantly, three regions of radiation can be defined: the far field (where the radiation pattern is constant – this is where the radiation pattern is calculated), the reactive near field and the radiative near field.

    regions

    As shown in the image, the reactive near field is when the range is less than the wavelength divided by 2π or when the range is less than 1/6 of the wavelength. The electric and magnetic fields in this region are 90 degrees out of phase and do not radiate. It is known that the E and H fields must be in phase to propagate. The radiating near field is where the range is between 1/6 of the wavelength and the value 2D^2 divided by the wavelength. This is also known as the Fresnel zone. Although the radiation pattern is not fully formed, propagating waves exist in this region. For the far field, r must be much, much greater than λ/2π.
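
    These boundaries are easy to compute; a C# sketch with illustrative values for the antenna dimension and wavelength:

    using System;
    // Field-region boundaries for an antenna of largest dimension D at wavelength lambda.
    double D = 0.5;          // largest antenna dimension [m] (assumed)
    double lambda = 0.1;     // wavelength [m], i.e. 3 GHz (assumed)
    double reactiveNear = lambda / (2 * Math.PI);    // reactive near field: r < lambda/2pi
    double fresnel = 2 * D * D / lambda;             // radiating near field extends to 2D^2/lambda
    Console.WriteLine($"Reactive near field: r < {reactiveNear:F3} m");
    Console.WriteLine($"Radiating near field up to r = {fresnel:F1} m; far field beyond");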

    The radiation patterns of the dipole antenna are pictured below, with both the E and H planes. The E plane (elevation angle pattern) is pictured on the bottom right and the H plane (azimuthal angle) beside it on the left. The plots are given in dB scale. The radiation patterns can be understood by considering a pen: while facing the pen you can see its full length, but if you look down on the pen you can only see the tip or end. This is analogous to the dipole antenna, where maximum radiation occurs broadside to the antenna and minimum radiation off the ends, leading to the figure-8 radiation pattern. When this radiation pattern is extended to three dimensions, the top left image is obtained.

    patterns

     

  • Focal Length of a Submerged Lens April 29, 2020

    Is the focal length of a spherical mirror affected by the medium in which it is immersed? …. of a thin lens? What’s the difference?

     

    Mirrors

    A spherical mirror may be either convex or concave. In either case, the focal length for a spherical mirror is one-half the radius of curvature.

    mirror1

    The formula for focal length of a mirror is independent of the refractive index of the medium:

    mirrorlens

    Lens

    lens1

    The thin lens equation, including the refractive index of the surrounding material (“air”):lens2

    The effect of the refractive index of the surrounding material can be summarized as follows:

    • The focal length is inversely proportional to the refractive index of the lens minus the refractive index of the surrounding medium.
    • As the refractive index of the surrounding medium increases, the focal length also increases.
    • If the refractive index of the surrounding medium is larger than the refractive index of the lens, the incident ray will diverge upon exiting the lens.

     

  • Infinite Lateral Magnification of Lenses and Mirrors April 28, 2020

    Under what conditions would the lateral magnification (m=-i/o) for lenses and mirrors become infinite? Is there any practical significance to such condition?

     

    Magnification of a lens or mirror is the ratio of projected image distance to object distance. Simply put, how much closer does the object appear as a result of the features of the lens or mirror? The object may seem larger or it may seem smaller as a result of its projection through a lens or mirror. Take for instance, positive magnification:

    mag

    If the virtual image appears further than the real object, there will be negative magnification:

    mag3

    The formula for magnification is the following:

    mag2

    The question then is, how can there be an infinite ratio of image size to object size? Consider the equation for focal length:

    f1

    For magnification to be infinite, the image distance should be infinite, in which case the object distance is equal to the focal length:

    f2

    In this case, the magnification is infinite:

    mag7

    The meaning of this case is that the image appears as if it were coming from a distance of infinity, or very far away, and is not visible. A negative magnification means that the image is upside-down.

    mag5

  • Focal Length of a Lens as a function of light frequency April 27, 2020

    How does the focal length of a glass lens for blue light compare with that for red light? Consider the case of either a diverging lens or a converging lens.

     

    This question really has three parts:

    • Focal Length of a lens
    • Effect of light frequency (color)
    • Diverging and Converging lens

     

    Focal Length of the Converging and Diverging Lens

    For the converging and diverging lens, the focal point has a different meaning. First, consider the converging lens. Parallel rays entering a converging lens will be brought to focus at the focal point F of the lens. The distance between the lens and the focal point F is called the focal length, f. The focal length is a function of the radius of curvature of both sides or planes of the lens as well as the refractive index of the lens. The formula for focal length is below,
    (1/f) = (n-1)((1/r1)-(1/r2)).

    This formula also works for a diverging lens; however, the directions of the radii of curvature must be taken into account. If, for instance, the center of the circle for one side of the lens is to the left of the lens, one may choose that direction to be positive and the other direction to be negative, as long as one maintains the same sign convention throughout.

    converg

    If the focal length of a lens is negative, meaning that the focal point is behind the lens, on the side at which the rays entered, it is a diverging lens.

    diverg

     

    Interaction of Color with Focal Length

    The other part of this question dealt with how the focal length would change for one color such as blue versus another color such as red. The key to this relationship is the refractive index of the lens, as the refractive index can change with regards to the color (i.e. frequency).

    The material from which the lens is made is not known; however, as demonstrated by the following table, the refractive index is consistently higher for shorter-wavelength colors.

    33

    Reviewing the focal length formula, it is understood from the inverse proportionality of the equation that as the refractive index increases, the focal length will decrease. Blue has a higher refractive index than red. Therefore, blue will have a smaller focal length than red.
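    As an illustrative worked example (the lens dimensions and indexes here are assumed, typical of crown glass, and are not taken from the table): consider a symmetric biconvex lens with R1 = 10 cm, R2 = -10 cm, n = 1.53 for blue light and n = 1.51 for red light. Then:

    \frac{1}{f_{blue}} = (1.53 - 1)\left(\frac{1}{10} + \frac{1}{10}\right) = 0.106\ \mathrm{cm^{-1}} \quad\Rightarrow\quad f_{blue} \approx 9.4\ \mathrm{cm}

    \frac{1}{f_{red}} = (1.51 - 1)\left(\frac{1}{10} + \frac{1}{10}\right) = 0.102\ \mathrm{cm^{-1}} \quad\Rightarrow\quad f_{red} \approx 9.8\ \mathrm{cm}

    The blue focal length comes out shorter, as stated above.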

    focallength2

     

     

  • Object Oriented and C#: Quadratic Roots Program April 26, 2020

    The following program is designed to accept three doubles as inputs and prints the roots of a quadratic, whether complex or real. If a non-double is inputted into the program, the program should display “Bad Input”. The program contains two files: a “program” file to run several of the main methods and a “complex” file which creates the class for handling complex numbers and overrides the built in “ToString” method.

    The first goal is to initialize the part of the program that handles real roots. The easiest portion is to create a method that reads doubles. It is important that the method has a nullable return type because the method should return null if a non-double such as a string is put into the method. This provides an easy way to use a conditional statement with the “TryParse” method. The “TryParse” method returns a boolean value of true or false. The “if” statement checks if the return is true and, if so, returns the result. If not, null is returned.

    1
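    As a minimal sketch of what such a method might look like (the method name and structure are inferred from the description above, not copied from the screenshot):

    static double? GetDouble()
    {
        // TryParse returns true on success and writes the parsed value to 'result'.
        string input = Console.ReadLine();
        if (double.TryParse(input, out double result))
            return result;
        return null; // non-double input (e.g. a string) yields null
    }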

    Next, the “getquadraticstring” method is implemented to format the printed result in the form “AX^2+BX+C”. This is also done within the “program” file. Format specifiers are put within the placeholders to set the printed values to two decimal places if necessary.

    2

    The “getrealroots” method produces the roots of the quadratic given that they are purely real. First the discriminant (the part in the quadratic formula under the square root symbol) is calculated. Several if statements check how many real roots there are, and the method returns that quantity as an integer. For example, if the discriminant is negative, there will be no real roots returned. This means both of the “out” variables should be set to null and the function should return a 0. For a discriminant of 0, the quadratic formula reduces to -B/2A and the second root should be null. The return value is again the number of roots (1). It is important to note that the “if-else” chain must end in “else” rather than “else if”, so that the final branch covers all remaining possibilities.

    3
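    A sketch of that logic under the same assumptions (nullable out parameters, integer return count; not the screenshot’s exact code):

    static int GetRealRoots(double a, double b, double c,
                            out double? root1, out double? root2)
    {
        double discriminant = b * b - 4 * a * c;
        if (discriminant < 0)
        {
            root1 = null;
            root2 = null;
            return 0; // no real roots
        }
        else if (discriminant == 0)
        {
            root1 = -b / (2 * a); // repeated root, -B/2A
            root2 = null;
            return 1;
        }
        else
        {
            root1 = (-b + Math.Sqrt(discriminant)) / (2 * a);
            root2 = (-b - Math.Sqrt(discriminant)) / (2 * a);
            return 2;
        }
    }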

    Within the “main” function, three numbers are taken from the console using the getDouble method. An integer value is obtained from the getRealroots method which states the number of roots. This will be used for the conditional statements. For ease of reading code, a string variable is created to store the return from the “getQuadraticString” method.

    Next, an “if” statement is used to print a bad input if any of the a, b, c variables are null. A return statement is included within the “if” statement so that an else does not have to be provided. This will exit the statement after it has completed.

    4

    Now the logic for the imaginary numbers must be implemented. The default constructor is shown with default inputs of zero. It doesn’t need any code within it because it inherits the Complex constructor. The “ToString()” method must be overridden because the formatting must be changed to adhere to complex numbers.

    ToString

    In addition, logic must be implemented for the “getImaginaryRoots()” method. The discriminant is calculated the same way as before; however, the absolute value is taken. The real part must be calculated separately, and the denominator is split for this reason. For clarification, this is the real part of a complex root. The two roots share the same real part and are complex conjugates of each other.

    5
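    A sketch of the complex case (assuming the post’s Complex class takes real and imaginary parts in its constructor):

    static void GetImaginaryRoots(double a, double b, double c,
                                  out Complex root1, out Complex root2)
    {
        // The discriminant is negative here, so take its absolute value.
        double disc = Math.Abs(b * b - 4 * a * c);
        double realPart = -b / (2 * a);           // real part shared by both roots
        double imagPart = Math.Sqrt(disc) / (2 * a);
        root1 = new Complex(realPart, imagPart);
        root2 = new Complex(realPart, -imagPart); // complex conjugate
    }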

    The “main” function must be updated to reflect the imaginary roots.

    main

    The “getQuadraticString()” method is updated as shown. Three string segments must be created with several conditions imposed. They begin as empty strings and are filled in. Separating them into parts lets the logic be implemented for when each coefficient is 1 or -1. When C is zero, an empty string will be printed.

    6

     

  • E-K Diagrams April 25, 2020

    As previously concluded, solids can be characterized based on energy band diagrams. A conductor has valence and conduction bands that are very close together or overlapping. In addition, a conductor will have a completely filled valence band and a partially filled conduction band. The “forbidden region” of the conductor is very small and little energy is required for an electron to move from the valence band to the conduction band. In the presence of an external field, it is very easy for electrons to move from the valence band to the conduction band.

    For semiconductors, at absolute zero the valence band is also completely full and the bandgap is typically about 1 eV to 3 eV, however even a bandgap of 0.1 eV could be considered a semiconductor. Therefore, a semiconductor at 0 K is an insulator. Semiconductors are very temperature sensitive. The subsequent figure illustrates the temperature dependence. The resistivity is very high at absolute zero, making the semiconductor behave like an insulator. However, at higher temperatures the semiconductor can become quite conductive. At room temperature (300 K), the semiconductor behaves more like a conductor.

    temp_semi

    With band diagrams, not much information is given, therefore it is necessary to also analyze an E-K (energy-momentum) diagram. E is the energy required for an electron to traverse the bandgap. For example, in Silicon with a bandgap of 1.1 eV, it would take 1.1 eV of energy for an electron to move from the valence band to the conduction band. Thermal energy is given as E = kT, where T is a given temperature and k is Boltzmann’s constant.

    For intrinsic semiconductors like Silicon, the structure is crystalline and periodic. The wavefunction (which describes probability of finding an electron) should therefore be of periodic nature (sinusoidal). From the Schrodinger equation, it can be found that the Energy is periodic with k as well. For the diagrams, E is plotted against k.

    ek

    The borders of the first Brillouin zone are from -π/a to π/a, where a is the lattice constant of the crystal. Since the wavefunction is periodic, we only care about one of the zones. The above figure can be considered the “reduced zone” figure. Sometimes the x axis is given as the momentum or wavenumber, since these only differ by a factor of Planck’s constant. From this diagram the bandgap energy, the effective mass of electrons and holes, and the density of states can all be read. The effective mass is shown by the curvature of the bands; for example, a heavy hole band can be identified as the band that is less curved. From the above diagram, it is also noticeable that the material is direct bandgap (such as GaAs), since the conduction band minimum and valence band maximum occur at the same k. The basic energy gap diagram compares to the E-k diagram in that the maximums and minimums correspond. However, the original band gap diagram does not give any other characteristics. It is for this reason the E-k diagram is so useful.
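    In equation form, the curvature relationship mentioned above is the standard definition of effective mass:

    m^{*} = \hbar^{2} \left( \frac{d^{2}E}{dk^{2}} \right)^{-1}

    A flatter (less curved) band therefore corresponds to a heavier effective mass, which is why the heavy-hole band is the less curved one.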

  • The Radar Range Equation April 24, 2020

    To derive the RADAR range equation, it is first necessary to define the power density at a distance from an isotropic radiator. An isotropic radiator is a fictional antenna that radiates equally in all directions (azimuthal and elevation angle accounted for). The power density (in watts/sq meter) is given as:

    1

    However, of course RADARs are not going to be isotropic, but rather directional. The power density for this can be taken directly from the isotropic radiator with an additional scaling factor (antenna gain). This simply means that the power is concentrated into a smaller surface area of the sphere. To review, gain is directivity scaled by antenna efficiency. This means that gain accounts for attenuation and loss as it travels through the input port of the antenna to where it is radiated into the atmosphere.

    2

    To determine the power received by a target, this value can be scaled by another value known as RCS (RADAR Cross Section), which has units of square meters. The RCS of a target is dependent on three main parameters: interception, reflection and directivity. The RCS is a function of target viewing angle and is therefore not a constant. In short, the RCS describes how much power is intercepted by the target, how much of that power is reflected, and how much of the reflection is directed back towards the receiver. A perfectly invisible stealth target would have an RCS of zero. So in order to determine the received power, the incident power density is scaled by the RCS:

    3

    The power density back at the receiver can then be calculated from the received power, resulting in the range being to the fourth power. This means that if the range of the radar to target is doubled, the received power is reduced by 12 dB (a factor of 16). When this number is scaled by Antenna effective area, the power received at the radar can be found. However it is customary to replace this effective area (which is less than actual area due to losses) with a receive gain term:

    4

    5

    6

    The symbol η represents antenna efficiency and is a coefficient between 0 and 1. It is important to note that the RCS value (σ) is an average RCS value since, as discussed, RCS is not a constant. For a monostatic radar, the two gain terms can be replaced by a G^2 term because the receive and transmit gains tend to be the same, especially for mechanically scanned array antennas.

    7
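    Assembling these pieces, the standard monostatic form of the radar range equation (consistent with the derivation above) is:

    P_r = \frac{P_t \, G^{2} \lambda^{2} \sigma}{(4\pi)^{3} R^{4}}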

  • HFSS: Conical Horn Antenna Simulation April 23, 2020

    For the following simulation, the solution type is Driven Modal. Driven modal gives solutions in terms of power, as opposed to Driven Terminal which displays results in terms of voltages and currents. The units are set to inches.

    The first step is to create the circular waveguide with a radius of .838 inches and a height of three inches:

    1

    To make the building process easier, a relative coordinate system is implemented through the Modeler window. The coordinate system is moved up to z = 3. A conical transition region (taper) is built at that origin point. The lower radius is 0.838 and the upper radius is 1.547. The height is 1.227. The coordinate system is then adjusted to be on top of the taper.

    2

    The “throat” is created by placing yet another cylinder on top of the taper. The height is 3.236. Now, all the objects are selected and a Boolean unite is performed. All objects can be selected by using the shortcut “CTRL + A”. From this point, a single object is obtained and named “Horn_Air”. This can be seen in the project tree on the left.

    3

    The coordinate system is displaced back to the standard origin and “pec” is selected as the default material (perfect electrical conductor). This will be used to create the horn wall, shown below. A Boolean subtract is performed between the vacuum parts and the conductive portion to create a hollowed out antenna.

    4

    Because the simulation is of a radiating antenna, an air box of some sort must be implemented. In our case, we use a cylindrical radiation boundary. The bottom of the device is chosen for the waveport. Upon assigning the two mode waveport, the coordinate system is redefined for the radiation setup. For the radiation, the azimuthal angle is incremented from 0 to 90 in one 90 degree increment and the elevation angle is incremented from -180 to 180 with a step size of 2:

    5

    The simulation is done at 5 GHz with 10 as the maximum number of passes. The S-Matrix data is shown below.

    smatrix

    As well as the convergence plot:

    plot

    The radiation pattern is shown for the gain below:

    radiation

    The plot is in decibels and is swept over the elevation angle. Both the left-hand and right-hand circularly polarized wave patterns are shown at angles phi = 90 and phi = 0. The two larger curves are the RHCP and the two smaller are LHCP.

  • Object Oriented Programming and C#: Program to Determine Interrupt Levels April 22, 2020

    The following is a program designed to detect environmental interrupts based on data inputted by the user. The idea is to generate a certain threshold based on the standard deviation and twenty second average of the data set.

    A bit of background first: The standard deviation, much like the variance of a data set, describes the “spread” of the data. The standard deviation is the square root of the variance, to be specific. This leaves the standard deviation with the same units as the mean, whereas the variance has squared units. In simple terms, the standard deviation describes how close the values are to the mean. A low standard deviation indicates a narrow spread with values closer to the mean.

    std
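    In equation form, the (population) standard deviation of N samples x_i with mean μ is:

    \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^{2}}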

    Often, physical data which involves the averaging of many samples of a random experiment can be approximated as a Gaussian or normal distribution curve, which is symmetrical about the mean. As a real world example, this approximation can be made for the height of adult men in the United States. The mean is about 5’10” with a standard deviation of three inches. This means that for a normal distribution, roughly 68% of adult men are within three inches of the mean, as shown in the following figure.

    normal

    In the first part of the program, the variables are initialized. The value “A” represents the multiple of standard deviations. Previous calculations deemed that the minimum threshold level would be roughly 4 times the standard deviation added to the twenty second average. Two arrays are defined: an array to calculate the two second average which was set to a length of 200 and also an array of length 10 for the twenty second average.

    prog1

    The next part of the program is the infinite “while(true)” loop. The current time is printed to the console for the user’s awareness. Then, the user is prompted to input a minimum and maximum value for a reasonable range of audible values, and these are parsed into integers. Next, the Random class is instantiated and a for loop is incremented 200 times to store a random value within the “inputdata_two[]” array for each iteration. The random value is constrained to the max and min values provided by the user. The “Average()” extension method (from the System.Linq namespace) gives an easy means to calculate the two second average of the array.

    prog2
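    A minimal sketch of that sampling loop (the array name comes from the prose; min and max are assumed to be the integers parsed from the user’s input, and Average() requires a using System.Linq; directive):

    var rand = new Random();
    double[] inputdata_two = new double[200];
    for (int i = 0; i < inputdata_two.Length; i++)
    {
        // Next(min, max + 1) returns an integer in [min, max]
        inputdata_two[i] = rand.Next(min, max + 1);
    }
    double twoSecondAverage = inputdata_two.Average(); // System.Linq extension method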

    Next, a foreach statement is used to iterate through every value (10 values) of the twenty second average array and print them to the console. An interrupt is triggered if two conditions are met: the time has incremented to a full 20 seconds and the two second average is greater than the calculated minimum threshold. “Alltime” is set to -2 to reset the value for the next set of data. Once the time has incremented to 20 seconds, a twenty second average is calculated and from this, the standard deviation is calculated and printed to the console.

    prog3

    The rest of the code is pictured below. The time is incremented by two seconds until the time is at 18 seconds.

    prog4

    The code is shown in action:

    resultprog

    If a high max and min are inputted, an interrupt will be triggered and the clock will be reset:

    inter

  • Object Oriented Programming and C#: Fractions Program April 21, 2020

    The following is a post explaining the functionality of a C# program in Visual Studio, which is designed to perform basic operations on fractions, which are ratios of whole numbers.

    To begin, three namespaces are included using the “using” directive statement.

    directives

    The “System” namespace is included with every program. The next two must be included to implement certain classes. Without the directives in place, these namespaces would have to be written out in full with every usage of the classes that are a part of them.

    fraction

    The next bit of code is pictured above. Two integers are created with a “private” access modifier to indicate they can only be used within the Fraction class. Next, the constructor for Fraction is defined and supplied with the two integers. The “this” keyword uses the current instance of the class to assign one of the inputs (num) to a member of the class. “This” can be helpful to distinguish between constructor inputs and members of the class, since “this” always refers to members of the current instance. An “if” statement is included to handle the exception thrown by having a denominator of zero. You can always identify a constructor by its lack of a return type.

    wtf

    The constructor is then overloaded with a parameterless version that chains to the two-argument constructor, supplying 0 and 1 as default arguments (the fraction 0/1). Constructor chaining with the “this” keyword avoids duplicating the validation logic.
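    A hypothetical reconstruction of the two constructors (field names follow the post’s spelling of “denomenator”):

    public Fraction(int num, int den)
    {
        // Reject the undefined case before assigning fields.
        if (den == 0)
            throw new ArgumentException("Denominator cannot be zero.");
        this.numerator = num;
        this.denomenator = den;
    }

    // The parameterless constructor chains to the one above with 0/1.
    public Fraction() : this(0, 1)
    {
    }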

    reduce

    The Reduce method is meant to reduce the fraction to its canonical (simplest) form. It is important to note that the method is private, which means that it cannot be used outside the class “Fraction”. The greatest common divisor is initialized, then a for loop cycles from 2 up to the value of “denomenator”. “Denomenator” is allowed to be used here because the method is used within the class. Successive division is used to check whether the canonical form has been achieved: by dividing by the loop index and checking for a remainder of zero for both the numerator and denominator, it can be determined whether more division should be done. If both conditions are true, the greatest common divisor has been found to be the loop index. The next step is to divide the numerator and denominator through by this value. For example, if the numerator was set to 3 and the denominator was set to 6, by the time the loop counter reached three, both checks would return a boolean “TRUE” and the gcd would be set to 3. Then both values would be divided by 3, reducing the fraction to 1/2.
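    A sketch of that logic (the gcd is initialized to 1 here so the final division is always safe, a small departure from the post’s description of initializing it to zero):

    private void Reduce()
    {
        int gcd = 1;
        // Trial division: the largest index dividing both fields evenly is the gcd.
        for (int i = 2; i <= Math.Abs(denomenator); i++)
        {
            if (numerator % i == 0 && denomenator % i == 0)
                gcd = i;
        }
        numerator /= gcd;
        denomenator /= gcd;
    }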

    numden

    The next step is to define the properties. Properties allow private variables to be used publicly. This can be useful when you need to protect certain data from arbitrary modification while still exposing it in a controlled way. This is accomplished using “getters” and “setters”. The “value” keyword is automatically provided when using a “setter” and sets the private variable to that value. Basically, numerator and denomenator are private variables and can only be changed within the class. Encapsulation refers to the scope of members within classes or structs; properties provide a flexible way to control the accessibility of these members.

    The last method is used to convert integer fractions to the double data type. This functionality is provided by the “explicit” keyword, which defines an explicit conversion operator. The result is a returned “fractiondecimal” of data type double.

    justgiveup
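    A sketch of that conversion operator (the variable name is taken from the prose):

    public static explicit operator double(Fraction f)
    {
        // Cast one operand to double so the division is not integer division.
        double fractiondecimal = (double)f.numerator / f.denomenator;
        return fractiondecimal;
    }

    With this in place, a cast such as double d = (double)new Fraction(1, 2); would yield 0.5.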

    The following codes are suppressed using the “#region” keyword. By entering each region, the code can be viewed. Within the first block of code, the custom arithmetic operators are defined, two of which are shown below.

    plusmultiply

    The addition is slightly complicated, because a common denominator must be found. Different instances of the Fraction class are supplied as inputs to the operator method. The fields “numerator” and “denomenator” are accessed through the class and assigned to a variable. A new object (“c”) is instantiated from the Fraction class which is the sum of “a” and “b”. The multiplication custom operator is slightly simpler, because it is straight across multiplication. Additional code is provided to change the sign of both the numerator and denominator if the denominator is negative. The operators for division and subtraction employ similar logic.
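    A sketch of the addition operator using that common-denominator approach (the actual implementation presumably reduces the result via the constructor and Reduce):

    public static Fraction operator +(Fraction a, Fraction b)
    {
        // Cross-multiply to place both fractions over a common denominator.
        int num = a.numerator * b.denomenator + b.numerator * a.denomenator;
        int den = a.denomenator * b.denomenator;
        Fraction c = new Fraction(num, den);
        return c;
    }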

    The comparison operators are defined using the same common denominator technique. The only difference between each operator method is the symbol used in the “if” statement. Six methods are provided (<, >, ==, !=, >= and <=).

    comparison

    The last bit of code is pictured below. The “ToString” method is inherited by every class and therefore can be overridden. This allows the flexibility to define the “ToString” method however you want; in this case, we want a fraction to be printed. The “as” keyword converts between compatible nullable or reference types and returns null if the conversion is not possible. When the conversion from obj to Fraction is possible, the numerator and denominator are set and the fraction is returned.

    override

  • Refractive Index as a Function of Wavelength April 20, 2020

    Previously, we discussed how the resultant wavelength and velocity in an optical system depend on the refractive index. What we didn’t explain, however, is that the relationship runs the other way as well: the refractive index itself varies according to the incident wavelength. After all, it is easier to change the wavelength of a light wave than it is to change the material that it is propagating through. So in fact, the refractive index will vary according to the wavelength of the incident wave. If the system is not monochromatic, multiple frequencies are present and each sees a different refractive index.

    32

    As we know from ray optics (geometrical optics), the refractive index is used to determine how a ray will travel through an optical system. The relationship between wavelength and refractive index implies that an optical system with the same material will produce a different transmission angle (or perhaps a completely different result) for two rays of different wavelength.

    Consider the range of refractive indexes for several different mediums with an altered wavelength and color (i.e. frequency):

    33

    The differences in refractive indexes for these materials given different wavelengths and frequencies may seem small, however the difference is enough that rays of different wavelengths will interact slightly differently through optical systems.
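    A common empirical model for this wavelength dependence, worth noting here though it is not mentioned in the original post, is Cauchy’s equation, where A, B and C are fitted material constants:

    n(\lambda) = A + \frac{B}{\lambda^{2}} + \frac{C}{\lambda^{4}}

    Shorter wavelengths give a higher index, matching the trend in the table above.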

    Now, what if a ray managed to contain more than one wavelength? Or, if it were a blend of all colors? This case is called white light. Since white light contains a sum of a number of wavelengths and frequencies, each component of white light will behave according to its own refractive index.

    The classic example of this is of course the prism.

    34

     

  • Refractive Index, Speed of Light, Wavelength and Frequency April 19, 2020

    The relationship between the speed of light in a medium and the refractive index is the following:

    9

    Therefore it can be understood that for a medium of higher refractive index, the speed of light in that medium will be slower. Light will not achieve a speed higher than c, or 2.99 x 10^8 m/s. When light is traveling at this speed, the refractive index of the medium is 1.00.

    Now, what about the wavelength? Interestingly, one might begin to understand that the wavelength is the determining factor for color. In fact, this is not the case. Frequency is what defines the color of the light, which can vary from an invisible infrared range to the visible range to the invisible ultraviolet range. In a monochromatic system, the frequency of light (and therefore color) will stay the same. The velocity and wavelength will change with the refractive index.

    wavelengthfrequency

    As the above picture suggests, we might believe that wavelength and frequency are forever tied together. The above example would in fact be incomplete at best, were we to consider that light can travel at more than one speed. However, let us review the relationship between wavelength and frequency. The following formula is normally presented for wavelength:

    91

    Now, here is the question: does c in this equation correspond to the speed of light in a vacuum, or does it correspond to the speed of the travelling light wave? Let’s consider: what does the speed of light in a vacuum have to say about the speed of light in water? It really doesn’t have much to say, does it? This is why we can instead use v to denote the speed of light in the medium.

    92

    Note that I’ve written the wavelength as a function of the speed of light in the medium. Taking this to its conclusion, we understand that the wavelength is not exclusively dependent on frequency and that multiple wavelengths may exist for one frequency. The determining factor in such a case is the refractive index, given that frequency is constant.

    93

    Given the wavelength, frequency and refractive index, the speed of the light wave may also be calculated.

    94
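    To summarize the relations the figures above present, with v = c/n the speed in the medium and λ0 the vacuum wavelength:

    \lambda = \frac{v}{f} = \frac{c}{n f} = \frac{\lambda_{0}}{n}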

    Physically, one may picture that the frequency is the rate at which the peak of a wave passes by a point. A longer wavelength wave will need to move faster to keep at the same frequency.

    The applications and implications of this physical relationship will be explored next.

     

  • Yagi-Uda Antenna/Parasitic Array April 18, 2020

    The Yagi-Uda antenna is a highly directional antenna which operates above 10 MHz and is commonly used in satellite communications, as well as with amateur radio operators and as rooftop television antennas. The radiation pattern for the Yagi-Uda antenna shows strong gain in one particular direction, along with undesirable side lobes and a back lobe. The Yagi is similar to the log periodic antenna with a major distinction between the two being that the Yagi is designed for only one frequency, whereas the log periodic is wideband. The Yagi is much more directional, so it provides a higher gain in that one particular direction that it is designed for.

    The “Yagi” antenna has two types of elements: the driven element and the parasitic elements. The driven element is the antenna element that is directly connected to the AC source in the transmitter or receiver. A reflector element (parasitic) is placed behind the driven element in order to split the undesirable back lobe into two smaller lobes. By adding directive parasitic elements in front of the driven element, the radiation pattern becomes stronger and more directional. All of these elements are parallel to each other and are usually half-wave dipoles. These elements work by absorbing and reradiating the signal from the driven element. The reflector is slightly longer (inductive) than the driven element and the director elements are slightly shorter (capacitive).

    It is well known in transmission line theory that a low impedance/short circuit load will reflect all power with a 180 degree phase shift (reflection coefficient of -1). From this knowledge, the parasitic element can be considered a normal dipole with a short circuit at the feed point. Since the parasitic elements reradiate power 180 degrees out of phase, the superposition of this wave and the wave from the driven element leads to a complete cancellation of voltage at the element, as expected at a short circuit. Due to the inductive effects of the reflector element and the capacitive effects of the director elements, different phase shifts are created by lagging or leading current (“ELI the ICE man”). This cleverly causes the superposition of the waves to be constructive in the forward direction and destructive in the backward direction, increasing directivity in the forward direction.

    Advantages of the Yagi include high directivity, low cost and high front to back ratio. Disadvantages include increased sizing when attempting to increase gain as well as a gain limitation of 20dB.

    yagi

  • III-V Semiconductor Materials & Compounds April 17, 2020

    iii-v
    The Bandgap Engineer’s Periodic Table

    In contrast with an elemental semiconductor such as Silicon, III-V semiconductor compounds do not occur in nature and are instead combinations of materials from the group III and group V columns of the periodic table. Silicon, although proven as a functional semiconductor for electronic applications at lower frequencies, is unable to perform a number of roles that III-V semiconductors can. This is in large part due to the indirect bandgap of Silicon. III-V semiconductor materials, under a number of applications and combinations, are direct bandgap semiconducting materials. This allows for operation at much higher speeds. Indirect bandgap materials are unable to produce light efficiently.

     

    Ternary and Quaternary III-V

    The following list introduces the main III-V semiconductor material compounds used today. In a follow-up discussion, ternary and quaternary III-V semiconductors will be discussed in greater depth. To begin however, these may be understood as a process of mixing, varying or transitioning between two or more material types. For instance, a transition region between GaAs and GaP is described as GaAsxP1-x. This is the compound GaAsP, a blend of both GaAs and GaP: at one end of the material region it is GaAs and at the other end it is equal to GaP.

     

    GaAs
    GaAs was the first III-V material to play a major role in photonics. The first LED was fabricated using this material in 1961. GaAs is frequently used in microwave frequency devices and monolithic microwave integrated circuits. GaAs is used in a number of optical and optoelectronic near-infra-red range devices. The bandgap wavelength is λg = 0.873 μm.

    GaSb
    Not long after GaAs was used, other III-V semiconductor materials were grown, such as GaSb. The bandgap wavelength of GaSb is λg = 1.70 μm, making it useful for operation in the infrared band. GaSb can be used for infrared detectors, LEDs, lasers and transistors.

    InP
    Similar to GaAs, Indium Phosphide is used in high-frequency electronics, photonic integrated circuits and optoelectronics. InP is widely used in the optical telecommunications industry for wavelength-division multiplexing applications. It is also used in photovoltaics.

    GaAsP
    An alloy of GaAs and GaP, Gallium Arsenide Phosphide is used for the manufacture of red, orange and yellow LEDs.

    InGaAs
    Indium Gallium Arsenide is used in high-speed and high-sensitivity photodetectors and sees common use in optical fiber telecommunications. InGaAs is an alloy often written as GaxIn1-xAs when defining compositions. The bandgap energy is approximately 0.75 eV, which is convenient for longer-wavelength optical detection and transmission.

    InGaAsP
    Indium Gallium Arsenide Phosphide is commonly used to create quantum wells, waveguides and other photonic structures. InGaAsP can be lattice-matched to InP well, which is the most common substrate material for photonic integrated circuits.

    InGaAsSb
    Indium Gallium Arsenide Antimonide has a narrow bandgap (0.5 eV to 0.6 eV), making it useful for the absorption of longer wavelengths. InGaAsSb faces a number of difficulties in manufacture and can be expensive to make, although when these difficulties are avoided, devices (such as photovoltaics) that use it may achieve high quantum efficiency (~90%).

    AlGaAs
    Aluminum Gallium Arsenide has nearly the same lattice constant as GaAs, but a larger bandgap, between 1.42 eV and 2.16 eV. AlGaAs may be used as the barrier region of a quantum well with GaAs as the inner section.

    AlInGaP
    AlInGaP sees wide use in the construction of diode lasers and LEDs in the visible range, particularly at red, orange and yellow wavelengths.

    GaN
    GaN has a wide bandgap of 3.4 eV and sees use in high-frequency, high-power devices and optoelectronics. GaN transistors operate at higher voltages than GaAs microwave transistors and see possible use in THz devices.

    InGaN
    InxGa1−xN is another ternary III-V semiconductor that can be tuned for use in optoelectronics from the ultraviolet (see GaN) to infrared (see InN) wavelengths.

    AlGaN
    AlxGa1−xN is another compound that sees use in LEDs for blue to ultraviolet wavelengths.

    AlInGaN
    Although AlInGaN is not used much independently, it sees wide use in lattice matching the compounds GaN and AlGaN.

    InSb
    Indium Antimonide is an interesting compound, given that it has a very narrow bandgap of 0.17 eV and the highest electron mobility of any known semiconductor. InSb can be used in quantum wells and in bipolar transistors operating up to 85 GHz, with field-effect transistors operating at even higher frequencies. It can also be used as a terahertz radiation source.

  • HFSS – Simulation of a Square Pillar April 16, 2020

    The following is an EM simulation of the backscatter of a golden square object. This is by no means a professional achievement, but rather provides a basic introduction to the HFSS program.

    HFSS_sq_model

    The model is generated using the “Draw -> Box” command. The model is placed a distance away from the origin, where the excitation is placed, shown below. The excitation is of spherical vector form in order to generate a monostatic plot.

    excitation

    The basic structure is a square model (10 mm in all three dimensions) with an airbox surrounding it. The airbox is coated with PML radiation boundaries to simulate a perfectly matched layer, emulating a reflection-free region. This is necessary to simulate radiating structures in an unbounded, infinite domain. The PML absorbs all electromagnetic waves that interact with the boundary. The following image is the plot of the monostatic RCS vs the incident wave elevation angle.

    Monostatic_HFSS

    The subsequent figure was generated by using a “bistatic” configuration and is plotted against the elevation angle.

    bistatic

  • Miller Effect April 15, 2020

    The Miller Effect is a generally negative consequence in broadband circuitry, since bandwidth is reduced as equivalent input capacitance increases. The Miller effect is common to inverting amplifiers with negative gain. Miller capacitance can also limit the gain of a transistor due to the transistor’s parasitic capacitance. A common way to mitigate the Miller Effect, which causes an increase in equivalent input capacitance, is to use the cascode configuration. The cascode configuration features a two stage amplifier circuit consisting of a common emitter stage feeding into a common base stage. Configuring transistors in this way can lead to much wider bandwidth. For FET devices, capacitance exists between the electrodes (conductors), which in turn leads to the Miller Effect. The Miller capacitance is typically calculated at the input, but for high output impedance applications it is important to note the output capacitance as well.
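    Quantitatively, for an inverting amplifier of voltage gain -A_v with capacitance C between input and output, the feedback capacitance is multiplied up at the input:

    C_{in} = C\,(1 + A_v)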

    cascode

    Interesting note: the Miller effect can be used to create a larger capacitor from a smaller one. So in this way, it can be used for something productive. This can be important for designing integrated circuits, where having large bulky capacitors is not ideal as “real estate” must be conserved.

  • Beamforming April 14, 2020

    Beamforming (spatial filtering) is a huge part of fifth generation (5G) wireless technology. Beamforming is essentially using multiple antennas and varying the phase and amplitude of the inputs to these antennas. The result is a directed beam in a specific direction. This is a great method of preventing interference by focusing the energy of the antennas. Constructive and destructive interference is used to channel the energy and increase the antennas’ directivity. The receiver’s location determines whether the multitude of arriving waves interfere mostly constructively or destructively. Beamforming is used not only in RF wireless communication but also in acoustics and sonar.

    An important concept to know is that placing multiple radiating elements (antennas) together increases the directivity of the radiation pattern. Putting two antennas side by side creates a main lobe with 3 dB of additional forward gain. With four radiating elements, this becomes 6 dB (quadruple gain). Feeding all of the elements with the same signal means that the elements still act as one single antenna, but with more forward gain. The major issue here is that you only benefit from this in one single stationary direction unless the beam can be moved. This is where feeding the antennas with different phases and amplitudes comes in. The number of antennas becomes equal to the number of input signals. Having more separate antennas (and more input signals) creates a more directed antenna pattern. Spatial multiplexing can also be implemented to service multiple users wirelessly by utilizing space multiple times over.

    Using electronic phase shifters at the input of the antennas can decrease the cost of driving the elements quite a bit. This is known as a phased array; it can steer the beam pattern as necessary, but can only point in one direction at a time.
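    For a uniform linear array with element spacing d, steering the main lobe to an angle θ0 from broadside requires a progressive phase shift between adjacent elements of:

    \beta = -k d \sin\theta_{0}, \qquad k = \frac{2\pi}{\lambda}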

    phased array

     

  • RF Mixer basics April 13, 2020

    Mixers are three port devices that can be active or passive, linear or nonlinear. They are used to modulate (upconvert) or demodulate (downconvert) a signal to change its frequency to be sent to a receiver or to demodulate at the receiving end to a lower frequency.

    mixer

    Two major mixer categories are switching and nonlinear. Nonlinear mixers allow for higher frequency upconversion, but are less prevalent due to their unpredictable performance. In the diagram above, the three ports are shown. The RF signal is the product or sum of the IF (intermediate frequency) and LO (Local Oscillator) signal during upconversion. Due to reciprocity, any mixer can be used for either upconversion or downconversion. For a downconversion mixer, the output is the IF and the RF is fed on the left hand side.

    freqtran

    The above diagram illustrates the concept of frequency translation. In a receiver, the mixer translates the frequency from a higher RF frequency (frequency that the wave propagated wirelessly through air) to a lower Intermediate frequency. The mixer cannot be LTI; it must be either nonlinear or time varying. The mixer is used in conjunction with a filter to select either upper or lower sideband which are the result of the multiplication of two signals with different frequencies. These new frequencies are the sum or difference of the two frequencies at the two input ports.
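    The sum and difference terms follow directly from multiplying two sinusoids:

    \cos(\omega_{RF} t)\cos(\omega_{LO} t) = \tfrac{1}{2}\left[\cos\big((\omega_{RF}+\omega_{LO})t\big) + \cos\big((\omega_{RF}-\omega_{LO})t\big)\right]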

    In addition to frequency translation during modulation, RF mixers can also be used as phase comparators, such as in phase locked loops.

    To maintain linearity and avoid distortion, the LO input should be roughly 10dB higher than the input RF signal (downconverter). Unfortunately this increases cost and so therein lies the tradeoff between cost and performance.

  • High Speed Waveguide UTC Photodetector I-V Curve (ATLAS Simulation) April 12, 2020

    The following project uses Silvaco TCAD semiconductor software to build and plot the I-V curve of a waveguide UTC photodetector. The design specifications including material layers are outlined below.

    Simulation results

    The structure is shown below:

    3

    analyzeband

    Forward Bias Curve:

    2

    Negative Bias Curve:

    1

    Current Density Plot:

    5

    Acceptor and Donor Concentration Plot:

    4

    Bandgap, Conduction Band and Valence Band Plots:

    6

    DESIGN SPECIFICATIONS

    Construct an Atlas model for a waveguide UTC photodetector. The P contact is on top of layer R5, and N contact is on layer 16. The PIN diode’s ridge width is 3 microns. Please find: The IV curve of the photodetector (both reverse biased and forward bias).

    The material layers and ATLAS code is shown in the following PDF: ece530proj1_mbenker

  • VHF and UHF April 11, 2020

    The RF and microwave spectrum can be subdivided into many bands of varying purpose, shown below.

    radiospec

    On the lower frequency end, VLF (Very Low Frequency) tends to be used in submarine communication while LF (Low Frequency) is generally used for navigation. The MF (Medium Frequency) band is noted for AM broadcast (see posts on Amplitude modulation). The HF (shortwave) band is famous for use by HAM radio enthusiasts. The reason for the widespread usage is that HF does not require line of sight to propagate, but instead can reflect from the ionosphere and the surface of the earth, allowing the waves to travel great distances. VHF tends to be used for FM radio and TV stations. UHF covers the cellphone band as well as most TV stations. Satellite communication is covered in the SHF (Super High Frequency) band.

    Regarding UHF and VHF propagation, line of sight must be achieved in order for the signals to propagate uninhibited. With increasing frequency comes increasing attenuation. This is especially apparent when dealing with 5G nodes, which are easily attenuated by buildings, trees and weather conditions. 5G uses bands within the UHF, SHF and EHF ranges.

    Speaking of line of sight, the curvature of the earth must be taken into account.

    los

    The receiving and transmitting antennas must be visible to each other. This is the most common form of RF propagation. Twenty-five miles (sometimes 30 or 40) tends to be the maximum range of line of sight propagation (the radio horizon). The higher the frequency of the wave, the less bending or diffraction occurs, which means the wave will not propagate as far. Propagation distance is a strong function of antenna height. Increasing the height of an antenna by 10 feet is like doubling the output power of the antenna. Impedance matching should be employed at the antennas and feedlines as losses increase dramatically with frequency.
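    As a rule of thumb (assuming the standard 4/3-earth model for refraction), the radio horizon for transmit and receive antenna heights h_t and h_r in meters is approximately:

    d \approx 4.12\left(\sqrt{h_t} + \sqrt{h_r}\right)\ \mathrm{km}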

    Despite small wavelengths, UHF signals can still propagate through buildings and foliage but NOT the surface of the earth. One huge advantage of using UHF propagation is reuse of frequencies. Because the waves only travel a short distance when compared to HF waves, the same frequency channels can be reused by repeaters to re-propagate the signal. VHF signals (which have lower frequency) can sometimes travel farther than what the radio horizon allows due to some (limited) reflection by the ionosphere.

    Both VHF and UHF signals can travel long distances through “tropospheric ducting”. This occurs when a temperature inversion alters the index of refraction in a region of the troposphere, bending the signals and allowing them to propagate farther than usual.

  • P-I-N Junction Simulation in ATLAS April 10, 2020

    Introduction to ATLAS

    ATLAS by Silvaco is a powerful tool for modeling and simulating a great number of electronic and optoelectronic components, particularly those related to semiconductors. Electrical structures are developed using scripts, which are simulated to display a wide range of parameters, including solutions to equations that would otherwise require extensive calculation.

     

    P-I-N Diode

    The function of the PN junction diode typically falls off at higher frequencies (~3 GHz), where the depletion layer begins to be very small. Beyond that point, an intrinsic semiconductor is typically added between the p-doped and n-doped semiconductors to extend the depletion layer, allowing for a working junction structure from the RF domain to the optical domain. The following file, a P-I-N junction diode, is an example provided with ATLAS by Silvaco. The net doping regions are, as expected, at either end of the PIN diode. This structure is 10 microns by 10 microns.

    optoex01_plot0

    The code used to create this structure is depicted below.

    449

     

    The cutline tool is used through the center of the PIN diode after simulating the code. The Tonyplot tool allows for the plotting of a variety of parameters, such as electric field, electron fermi level, net doping, voltage potential, electron and hole concentration and more.

    445

    446

    447

    448

  • Introduction to Electro-Optic Modulators April 9, 2020

    Electro-optics is a branch or topic in photonics that deals with the modulation, switching and redirection of optical signals. These functions are produced through the application of an electric field, which alters the optical properties of a material, such as the refractive index. The refractive index refers to the speed of light propagation in a medium relative to the speed of light in a vacuum.

     

    Modulators vs. Switches

    In a number of situations, the same device may function as both a modulator and a switch. One factor determining whether the device would be suitable as a switch as opposed to a modulator is the strength of the effect that an electric field has on the device. If the device’s primary role is to impress information onto a light wave signal by temporarily varying the signal, then it is referred to as a modulator. A switch, on the other hand, either changes the direction or spatial position of light or turns it off completely.

    phase-modulators

     

    Theory of Operation

    Electro-optic Effect

    The electro-optic effect refers to the dependence of the refractive index on the applied electric field. The change in refractive index, although small, allows for various applications. For instance, an electric field may be applied to a lens, and depending on the material and the applied field, the focal length of the lens can change. Other optical instruments that utilize this effect may also see use, such as a prism. A very small adjustment to the refractive index may still produce a delay in the signal that is large enough to detect; if information was impressed through the delay produced on the signal, the delay can be phase demodulated at the receiving end.

     

    Electroabsorption

    Electroabsorption is another effect that is used to modify the optical properties of a material by the application of an electric field. An applied electric field can shift the absorption edge of the optical semiconductor material (effectively narrowing the bandgap), turning the material from optically transparent to optically opaque at the operating wavelength. This process is useful for making modulators and switches.

     

    Kerr Effect and Pockels Effect

    The Pockels Effect and the Kerr Effect both account for the change in refractive index through the application of an electric field. The Kerr Effect states that this change is quadratic (nonlinear) in the field, while the Pockels Effect states that the change is linear. Although the Pockels Effect is more pronounced in electro-optic modulator design, both are applied in many situations. The linear electro-optic effect exists only in crystals without inversion symmetry. The design of electro-optic modulators or switches requires special attention to the waveguide material and how the electric field interacts with the material. Common materials (also maintaining large Pockels coefficients) are GaAs, GaP, LiNbO3, LiTaO3 and quartz. The Kerr Effect is relatively weak in commonly used waveguide materials.
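    In terms of the Pockels coefficient r and the Kerr coefficient s, the index change for an applied field E is commonly written, to first order in each effect, as:

    n(E) \approx n - \tfrac{1}{2} r n^{3} E - \tfrac{1}{2} s n^{3} E^{2}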

     

    Properties of the Electro-Optic Modulator

    Modulation Depth

    Important for both modulators and switches is the modulation depth, also known as the modulation index. Modulation depth applies to several types of optical modulators, such as intensity modulators, phase modulators and interference modulators. The modulation depth may be conceptually understood as the ratio of effect that is applied to the signal. In other words, is the modulation very noticeable? Is it a strong modulation or is it a weak modulation?

     

    Bandwidth

    The bandwidth of the modulator is critically important as it determines what range of signal frequencies may be modulated onto the optical signal. Switching time or switching speed may be equally applied to an optical switch.

     

    Insertion Loss

    Insertion loss of optical modulators and switches is a form of optical power loss and is expressed in dB. Insertion loss typically results in the system requiring more electrical power; it does not explicitly reduce the performance of the modulation or switching function of the device.

     

    Power Consumption

    In distinction from the electric field, a modulator or switch also needs a power supply for itself. The amount of power required increases with modulation frequency. A common figure of merit is the drive power per unit bandwidth, typically expressed in milliwatts per megahertz.

     

    References: [1], [4], [6]

  • Optical System Design using MATLAB April 8, 2020

    Previously featured was an article that derived a matrix formation of an equation for a thick lens. This matrix equation, it was said, can be used to build a variety of optical systems. This will be undertaken using MATLAB. One of the great parts of using a matrix formula in MATLAB is that essentially any known parameter in the optical system can not only be altered directly, but a parameter sweep can be used to see how the parameter will affect the system. Parameters that can be altered include radius of curvature of the lens, thickness of the lens or distance between two lenses, wavelength, incidence angle, refractive indexes and more. You could also have MATLAB solve for a parameter such as the radius of curvature, given a desired angle. All of these parameters can be varied and the results can be plotted.

     

    Matrix Formation for Thick Lens Equation

    The matrix equation for the thick lens is modeled below:

    thicklens3

    Where:

    thicklens4

    • nt2 is the refractive index beyond surface 2
    • αt2 is the angle of the exiting or transmitted ray
    • Yt2 is the height of the transmitted ray
    • D2 is the power of curvature of surface 2
    • D1 is the power of curvature of surface 1
    • R1 is the radius of curvature of surface 1
    • R2 is the radius of curvature of surface 2
    • d1 is the thickness of the lens or distance between surface 1 and 2
    • ni is the refractive index before surface 1
    • αi is the angle of the incident ray
    • Yi1 is the height of the incident ray
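    In standard matrix-optics notation consistent with the variable list above (n_l, the refractive index of the lens itself, is an assumed symbol not defined in the post), the system presumably shown in the figure multiplies a refraction matrix for each surface with a translation matrix between them:

    \begin{bmatrix} n_{t2}\,\alpha_{t2} \\ Y_{t2} \end{bmatrix} = \begin{bmatrix} 1 & -D_{2} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ d_{1}/n_{l} & 1 \end{bmatrix} \begin{bmatrix} 1 & -D_{1} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} n_{i}\,\alpha_{i} \\ Y_{i1} \end{bmatrix}, \qquad D_{j} = \frac{n' - n}{R_{j}}

    where n' and n are the refractive indexes after and before surface j, respectively.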

    The following plots show a parameter sweep on a number of these variables. The following attachment includes the code that was used for these calculations and plots: optics1hw

    thirefheirad

  • HEMT – High Electron Mobility Transistor April 7, 2020

    One of the main limitations of the MESFET is that although this device extends well into the mmWave range (30 to 300 GHz or the upper part of the microwave spectrum), it suffers from low field mobility due to the fact that free charge carriers and ionized dopants share the same space.

    To demonstrate the need for HEMT transistors, let us first consider the mobility of the GaAs compound semiconductor. As shown in the picture, with decreasing temperature, Coulomb scattering becomes prevalent as opposed to phonon (lattice vibration) scattering. For an n-channel MESFET, the main electrostatic Coulomb force is between positively ionized donors and electrons. As shown, the mobility is heavily dependent on doping concentration; Coulomb scattering effectively limits mobility. In addition, decreasing the length of the gate in a MESFET will increase Coulomb scattering due to the need for a higher doping concentration in the channel. This means that for an effective device, the separation of free and fixed charge is needed.

    mobility

    A heterojunction consisting of n+ AlGaAs and p- GaAs material is used to combat this effect. A spacer layer of undoped AlGaAs is placed in between the materials. In a heterojunction, materials with different bandgaps are placed together (as opposed to a homojunction where they are the same).

    hetero

    This formation leads to the confinement of electrons from the n+ layer in a quantum well, which reduces Coulomb scattering. An important distinction between the HEMT and the MESFET is that the MESFET (like all FETs) modulates the channel thickness, whereas with an HEMT, the density of charge carriers in the channel is changed but not the thickness. In other words, applying a voltage to the gate of an HEMT will cause the density of free electrons to increase (positive voltage) or decrease (negative voltage). The channel is composed of a two-dimensional electron gas (2DEG). The electrons in the gas move freely without obstruction, leading to high electron mobility.

    HEMTs are generally packed into MMIC chips and can be used for RADAR applications, amplifiers (small signal and PAs), oscillators and mixers. They offer low noise performance for high frequency applications.

    The pHEMT (pseudomorphic HEMT) is an enhancement to the HEMT which features structures with different lattice constants (standard HEMTs feature roughly the same lattice constant for both materials). This leads to materials with wider bandgap differences and generally better performance.

  • Off Topic: Planet Earth – Climates and Deserts April 6, 2020

    The following post is an off topic discussion of planet earth, which will consists of miscellaneous topics involving climate types and deserts.

    We can begin our study of the planet earth by discussing different types of sand dunes. Dunes are found wherever sand is blown around, as sand dunes are the construct of Aeolian processes in which wind erodes loose sand. There are five main types: Barchan, Star, Parabolic, Transverse and Longitudinal, though these sometimes go by other names. These dunes are the product of wind direction. With Barchan dunes, the wind is predominantly in one direction, which leads to the development of a crescent-shaped dune. The shape is convex and the “horns” point in the direction the wind blows. The other two types of dunes where the wind is in one direction are parabolic and transverse dunes. Parabolic dunes are similar to Barchan dunes, although the “horns” point opposite to the direction of the wind. The key defining features of this dune type are the presence of vegetation and the fact that they are affected by “blowouts”, which are erosion of the vegetated sand.

    dunes

    As shown above, transverse dunes are also quite similar to barchans, but have wavy ridges instead of a crescent shape. The ridges are at right angles to the wind direction. Sand dunes which are formed by wind going in multiple directions are either Linear/Longitudinal or Star dunes. Star dunes are the result of wind moving in many directions whereas Longitudinal dunes are formed where wind converges into a single point, forming parallel lines to the direction of the winds.

    An important term concerning dunes is “saltation”. Saltation is the rolling and bouncing of sand grains due to wind. The distinction between saltation, creep and suspension is that saltation forms a parabolic shape, though these are all wind-based processes.

    aeolian

    Within hot deserts (as opposed to cold deserts), it is common to find structures such as mesas and buttes.

    plateau

    From left to right in the image, the difference between each type of landform is apparent. The pinnacle (or spire) is the most narrow. It is important to note that all of these desert structures are formed by not only wind, but also water (and heat). In addition, a desert surface is generally made of sand, rock and mountainous formations.

    An important feature of deserts is desert pavement. This sheet-like formation of rock particles forms when wind or water has removed the sand, which is a very slow process. There are several theories as to why desert pavement exists, including intermittent removal of sand by wind and later rain, or possibly shrinking and swelling of clay.

    pavement

    Another concept of sand erosion is deflation, defined as the release of sand from soil by wind.

    An important characteristic of deserts is the extreme temperature of the region. During the day, (hot) deserts are hot, as the heat from the sun is absorbed by the sand in the ground. This raises the temperature of the ground due to the lack of water near the surface. If water were present near the surface, most of the heat would go into evaporating the water. However, even hot deserts are cold at night because the dry surface does not store heat as well as a moist surface. Since water vapor is a greenhouse gas (and there is little water vapor in the air in a desert), infrared radiation is lost to outer space, which contributes to the cold night temperatures.

    Ventifacts, pictured below, are stones shaped by wind erosion. They are commonly found in arid climates with very little vegetation and feature strong winds. This is because vegetation often interferes with particle transport.

    ventifact

    An inselberg, as its Germanic name implies, is a type of mountain that is isolated and tends to be surrounded by sand. The area around the inselberg tends to be relatively flat, another defining characteristic of the structure. The word “insel” refers to island, which reinforces this concept.

    saustralia-inselbergs-1024x768

    A playa lake is a temporary body of water, also referred to as a dry lake. They are created whenever water ends up in a depression; when the evaporation rate is quicker than the incoming water, the lake dries up. This tends to leave a buildup of salt.

    An interesting piece of information about deserts is that they tend to be located at 30 degree latitudes in both the northern and southern hemispheres. At the equator there is a low pressure zone due to direct sunlight, and the climate there tends to be relatively stable with heavy rainfall. At the 30 degree latitudes, however, the air that rose at the equator sinks, creating high pressure and dry weather; this sinking air is what leads to these deserts, so high pressure regions are very important to their development. The world’s largest hot desert (Köppen “h” suffix) is the Sahara and the largest cold desert (“k” suffix) is Antarctica. The major difference between a BW (arid) climate and a BS (semiarid) climate is the amount of precipitation: less than ten inches indicates an arid climate, and generally 10–20 inches indicates semiarid.

     

  • Thick Lens Equation – Trigonometric Derivation and Matrix Formation April 5, 2020

    The following set of notes first presents a trigonometric derivation of the thick lens equation using principles such as Snell’s law and the paraxial approximation. The final formula for the thick lens equation is rather unwieldy; a matrix form, we will find, is much more usable. Moreover, a matrix form allows one to cascade a number of lenses in series with ease, and the parameters of the lenses can be altered as well. Soon, the matrix formulation of these equations will be used in MATLAB to demonstrate the ease with which an optical system can be built from matrices. The matrix formulation of the thick lens equation can be summarized as the product of three matrices: one for the first curved surface, one for the separation between the surfaces, and one for the final curved surface. By altering the radii of curvature, the refractive indices at each position, and the distances between surfaces in these matrices, a new lens can also be made, such as a convex thin lens, by inverting the curvature and reducing the thickness of the lens. A second lens can be added in series. Once a matrix formulation is in hand, numerous applications become simple.

    trtr2tr3tr4tr5
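    As a sketch of this formulation (under one common ray-transfer convention that tracks ray height and reduced angle nθ; here n0 is the index of the surrounding medium, nL the lens index, t the thickness, and R1, R2 the surface radii, names chosen for illustration):

    $$
    M = \begin{pmatrix} 1 & 0 \\ -P_2 & 1 \end{pmatrix}
    \begin{pmatrix} 1 & t/n_L \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} 1 & 0 \\ -P_1 & 1 \end{pmatrix},
    \qquad
    P_1 = \frac{n_L - n_0}{R_1}, \quad P_2 = \frac{n_0 - n_L}{R_2}
    $$

    Letting the thickness t go to zero recovers the thin lens, with total power P = P1 + P2.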

  • Semiconductor Distribution of Electrons and Holes April 4, 2020

    Charge Flow in Semiconductors

    Charge flow in a semiconductor is characterized by the movement of electrons and holes. Considering that the density and availability of electrons and holes in a material are determined by the valence and conduction bands of that material, it follows that different materials will have different densities of electrons and holes. The electron and hole density determines the current throughput in the semiconductor, which makes it useful to map out the density of holes and electrons in a semiconductor.

     

    Density of States

    The density of electrons and holes is related to the density of states function and the Fermi distribution function. A state is an allowed quantum configuration that an electron or hole may occupy in a semiconductor, and the density of states is the number of such states available per unit energy. The Fermi-Dirac probability function determines how likely it is that a state at a given energy is occupied. The following formula counts the number of ways a distribution can be realized; by varying Ni (the number of particles at each energy level), the most probable distribution can be found, where gi refers to the number of available quantum states at that level.

    W
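    (The formula in the image is presumably the standard Fermi-Dirac counting expression, the number of ways of arranging the Ni particles among the gi states at each level:)

    $$
    W = \prod_i \frac{g_i!}{N_i!\,\left(g_i - N_i\right)!}
    $$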

    Density of States Calculation using ATLAS

    Integrating the density of states in the conduction and valence bands, weighted by Fermi-Dirac statistics, yields the formulae for the electron and hole concentrations in a semiconductor:

    eleholeconc

    where Nc and Nv are the effective densities of states for the conduction and valence bands, which are characteristics of a chosen material. If using a program such as ATLAS, the material selection will contain the parameters NC300 and NV300 (the effective densities of states at 300 K).

    NcNv
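    (In the nondegenerate/Boltzmann approximation, these concentrations presumably take the standard forms, with EF the Fermi level and k Boltzmann’s constant:)

    $$
    n = N_c \exp\left(-\frac{E_c - E_F}{kT}\right), \qquad
    p = N_v \exp\left(-\frac{E_F - E_v}{kT}\right)
    $$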

     

    Charge Carrier Density

    Charge carriers simply refer to electrons and holes, which both contribute to the flow of charge in a semiconductor. The electron distribution in the conduction band is given by the density of quantum states multiplied by the probability (Fermi-Dirac probability function) that a state is occupied by an electron.

    Conduction Band Electron Distribution:

    electrondist

    The distribution of holes in the valence band is the density of quantum states in the valence band multiplied by the probability that a state is not occupied by an electron:

    holedist
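    (Written out, the two distributions above are presumably:)

    $$
    n(E) = g_c(E)\, f_F(E), \qquad
    p(E) = g_v(E)\left[1 - f_F(E)\right], \qquad
    f_F(E) = \frac{1}{1 + e^{(E - E_F)/kT}}
    $$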

     

    Intrinsic Semiconductor

    An intrinsic semiconductor maintains the same concentration of electrons in the conduction band as holes in the valence band. Where n is the electron concentration and p is the hole concentration, the following formulae apply:

    rre

    The overall intrinsic carrier concentration is:

    nie

    Eg is the band gap energy, which is equal to the difference between the energy of the conduction band and the energy of the valence band: Eg = Ec – Ev.

    Electron and Hole concentrations expressed in terms of the intrinsic carrier concentration, where Ψ is the intrinsic potential and φ is the potential corresponding to the Fermi level (Ef = qφ):

    pnnie
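    (Presumably these are the standard relations; following the convention stated above, Ef = qφ, and assuming by analogy Ei = qψ for the intrinsic level:)

    $$
    n_i = \sqrt{N_c N_v}\; e^{-E_g/2kT}, \qquad
    n = n_i\, e^{\,q(\varphi - \psi)/kT}, \qquad
    p = n_i\, e^{\,q(\psi - \varphi)/kT}
    $$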

     

    Donor Atoms Effect on Distribution of Electrons and Holes (Extrinsic Semiconductor)

    Adding donor or acceptor impurity atoms to a semiconductor will change the distribution of electrons and holes in the material. The Fermi energy will change as dopant atoms are added. If the density of holes is greater than the density of electrons, the semiconductor is p-type, and when the density of electrons is greater than the density of holes, the semiconductor is n-type (see the Density of States formulas above).

    [8], [10]

     

  • Applications of the Paraxial Approximation April 3, 2020

    It was discussed in a previous article, Mirrors in Geometrical Optics, Paraxial Approximation, that the paraxial approximation is used to treat an apparently imperfect or flawed system as a perfect system.

    Paraxial Approximation

    The paraxial approximation was proposed in response to a common occurrence in optical systems where the focal point is inconsistent for incident rays of higher incidence angles. The focal point F for a spherical mirror is understood under the paraxial approximation to be half the radius of curvature. Without the paraxial approximation, the system becomes increasingly complicated, as the focal point is a varying trigonometric function of the angle of incidence. The paraxial approximation assumes that all incident angles will be small.

    par

    The paraxial approximation can be likened (and when analyzed fully, this is it exactly) to the case of a triangle with base B, hypotenuse H and angle θ. Consider a case where H/B is very close to 1; θ will also be very small. In this case, it is of little harm to treat such a triangle as one with θ = 0, virtually two lines on top of each other, H and B, and more explicitly, H = B. This is precisely what is done when using the paraxial approximation.
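    In terms of the trigonometric functions, the approximation amounts to keeping only the leading term of each series for small θ (in radians):

    $$
    \sin\theta \approx \theta, \qquad \tan\theta \approx \theta, \qquad \cos\theta \approx 1
    $$

    This is what reduces the focal length of a spherical mirror to the constant f = R/2.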

     

    An interesting question to ask is, what angle should be the limit to which we allow a paraxial approximation? The answer would be, it depends on how accurate, or clear the image must be. When discussing optical systems, an aberration is a case in which rays are not precisely focused at the focal point of a mirror (or another type of optical system involving focusing). An aberration will actually cause the image clarity to be reduced at the output of the system. The following image would be an example of the result of an aberration to an image in an optical system:

    Chromatic_aberration_(comparison)

    Here is a problem that illustrates the issue of an aberration. Two rays appear to be correctly aligned to the focal point; however, another ray with an angle of incidence of 55 degrees is not focused at the focal point. A system that allows a ray of incidence of 55 degrees may be acceptable under some circumstances, but one would expect an aberration or some level of blurriness in the image.

    op7

  • Thermoelectric Effect, Thermoelectric current and the Seebeck Effect April 2, 2020

    There are three types of current flow in a semiconductor: drift, diffusion, and thermoelectric. Drift current is the most familiar: from the study of conductors we know that when a potential gradient (voltage) is established, electrons flow to balance it out. The same effect happens in semiconductors. However, there are two types of charge carriers in semiconductors: electrons AND holes. This leads to diffusion current, which is caused by a concentration gradient rather than a potential gradient.

    The third kind of current within a semiconductor is called thermoelectric current, which involves the conversion of a temperature gradient to a voltage. A thermocouple is a device which measures the difference in potential across two dissimilar materials where one end is heated and the other is cold. It was found that the temperature difference was proportional to the potential difference. Although Alessandro Volta first discovered this effect, it was later rediscovered by Thomas Seebeck. The combination of potential differences leads to the full definition of current density.

    j1

    eemf

    S is called the “thermopower” or “Seebeck coefficient”, which has units of volts per kelvin. The two equations, Ohm’s law (point form) and E_emf, look remarkably similar.
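    (Presumably the two relations in the images above are the standard ones:)

    $$
    \vec{E}_{emf} = -S\,\nabla T, \qquad
    \vec{J} = \sigma\left(-\nabla V - S\,\nabla T\right)
    $$

    With no temperature gradient, the second equation reduces to the point form of Ohm’s law, J = σE.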

    thermo

    The Seebeck coefficient is negative for negative charge carriers and positive for positive charge carriers, leading to a difference in the Seebeck coefficient between the P and N sides of the PN junction above. This leads to the above circuit being used as a thermoelectric generator. If a voltage source replaces the resistor, the circuit becomes a thermal sensor. Thermoelectric generators are often employed by power plants to convert wasted heat energy into additional electric power. They are also used in car engines for the same reason (fuel efficiency). Solid state devices have a huge advantage in the sense that they require no moving parts or fluids, which eliminates much of the need for maintenance. They also reduce environmental impact by converting waste heat into electrical energy.

  • Object Oriented Programming and C#: Simple Program to add three numbers April 1, 2020

    The following is a simple program that takes a user input of three numbers and adds them, but does not crash when an exception would otherwise be thrown (e.g. if a user inputs a non-integer value). The “int?” (nullable integer) type is used so that the “null” value can signify that a bad input was received. The user is notified instantly when an incorrect input is received by the program with a “Bad input” command prompt message.

    c1

    The code above shows that the GetNumber() method is called (shown below) three times, and as long as these are integers, they are summed and printed to the console after being converted to a string.

    c2

    The code shows that as long as the sum of the three integers is not equal to null (anything plus null is equal to null, so if at least one input is a non-integer this will be triggered) the Console will print the sum of the three numbers. The GetNumber() method uses the “TryParse” method to convert each string input to an integer. This will handle exceptions that are triggered by passing a non-integer to the command line. It also gives a convenient return of “null” which is used above.
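    A minimal sketch of what the code in the two images likely looks like (the structure follows the description above; exact identifiers are assumptions):

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            // Read three numbers; a bad input yields null.
            int? sum = GetNumber() + GetNumber() + GetNumber();

            // Anything plus null is null, so one bad input nullifies the whole sum.
            if (sum != null)
                Console.WriteLine("Sum: " + sum.ToString());
        }

        static int? GetNumber()
        {
            // TryParse converts the string without throwing on non-integer input.
            if (int.TryParse(Console.ReadLine(), out int number))
                return number;

            Console.WriteLine("Bad input"); // notify the user immediately
            return null;
        }
    }
    ```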

    The following shows both a successful summation and a summation failure caused by an incorrect input.

    correct

    incorrect

  • Object Oriented Programming and C#: Shallow vs Deep Copying March 31, 2020

    The following will be a brief but important post illustrating the difference between reference and value types. In C#, value types are things like integers, floats, enumerations and doubles. A value type holds the data assigned to it in its own memory allocation, whereas a reference type only holds an address which points to the actual data. A reference type is anything that is not an int, float, double, etc., such as a dynamic array (list), static array, class object, or string. It is important to know the difference because when code such as the code below is executed, it can have some confusing effects.

    shallow copy

    The image above illustrates what is known as a “shallow copy”. Because variables of class types do not store the actual data but act as pointers to it, when the object is copied to the second object, the memory address is copied instead of the data contained within “obj”. Therefore, any changes made to “obj2” will also affect “obj”, because they point to the same data. The following image shows the difference between deep and shallow copies.

    diff

    To do a deep copy of an array, for instance, every element of that array must be copied to the new array. You can do that using the “Array.Copy(sourceArray, destinationArray, length)” method. As shown in the image, this creates two references and two separate copies of the data, instead of two references pointing to the same data. Shallow copying only copies a memory pointer.
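    A short sketch of both behaviors using an array (variable names are illustrative):

    ```csharp
    using System;

    class Program
    {
        static void Main()
        {
            int[] source = { 1, 2, 3 };

            // Shallow copy: only the reference (memory address) is copied.
            int[] shallow = source;
            shallow[0] = 99;
            Console.WriteLine(source[0]); // prints 99 -- the original was affected

            // Deep copy: every element is copied into a brand-new array.
            int[] deep = new int[source.Length];
            Array.Copy(source, deep, source.Length);
            deep[1] = 42;
            Console.WriteLine(source[1]); // still prints 2 -- the original is untouched
        }
    }
    ```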

  • Power Factor and the Power Triangle March 30, 2020

    Power factor is a very important concept for commercial and industrial applications, which require higher current draw than domestic buildings. For a passive load (containing only resistance, inductance or capacitance and no active components), the power factor ranges from 0 to 1. Power factor is only negative with active loads. Before delving into power factor, it is important to discuss the different types of power. The type of power most are familiar with is measured in watts. This is called active or useful power, as it represents the actual energy per unit time dissipated or “used” by the load in question. Another type of power is reactive power, which is caused by inductance or capacitance and leads to a phase shift between voltage and current. To demonstrate how a lagging power factor causes “wasted” power, it is helpful to look at some waveforms. For a purely resistive load, the voltage and current are in phase, so no power is wasted (the instantaneous power p = vi is never negative at any point).

    eli

    The above image captures the concept of leading and lagging power factor (leading and lagging are always in reference to the current waveform). For a purely inductive load, the current will lag because the inductor creates a “back EMF”, an inertial voltage that opposes changes in current. This EMF is proportional to the rate of change of the current, so when the current is zero the voltage is maximum. For a capacitive load, the power factor is leading: a capacitor must charge up with current before establishing a voltage across its plates. This explains the PF “leading” or “lagging”. Most of the time, when power factor is decreased it is because the PF is lagging due to induction motors. To account for this, capacitors are used as part of power factor correction.

    The third type of power is apparent power, which is the complex combination of real and reactive power.

    triangle

    The power factor is the cosine of the angle in this triangle. Therefore, as the PF angle increases, the power factor decreases. The power factor is maximum when the reactive power is zero. Ideally, the PF would be between 0.95 and 1, but for many industrial buildings it can fall to even 0.7. This leads to higher electric bills for these buildings, because a lower power factor increases the current in the power lines feeding the building, which causes higher losses in the lines. It also leads to voltage drops and wasted energy. To conserve energy, power factor correction must be employed. Often capacitors are used in conjunction with contactors that are controlled by regulators that measure power factor. When necessary, the contactors switch on and allow the capacitors to improve the power factor.
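    In equation form, the sides of the power triangle relate as:

    $$
    S = \sqrt{P^2 + Q^2}, \qquad \text{PF} = \cos\theta = \frac{P}{S}
    $$

    where P is active power (W), Q is reactive power (VAR), S is apparent power (VA) and θ is the angle between P and S. For example, a load drawing P = 7 kW with Q = 7 kVAR has S ≈ 9.9 kVA and a power factor of about 0.71.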

    For linear loads, power factor is called the displacement power factor, as it only accounts for the phase difference between the voltage and current. For nonlinear loads, harmonics are added to the output. This is because nonlinear loads cause distortion, which changes the shape of the output sinusoids. Nonlinear loads and power factor will be explored in a subsequent post.

  • Photovoltaic Effect and Theory of Solar Cells March 29, 2020

    Just as plants receive energy from the sun and use it to produce glucose, a photovoltaic cell receives energy from the sun and generates an electrical current. The working principle is based on the PN junction, which will be revisited here.

    The allowed electron energies in silicon are grouped into several discrete ranges called “bands”. The major bands of concern are the valence and conduction bands. The bottom bands are fully occupied and do not change.

    siliconenergy

    For silicon, the bandgap energy is 1.1 eV. For an intrinsic semiconductor, the Fermi level is directly between the conduction and valence band. This is because there is an equal number of holes in the valence band as electrons in the conduction band, which means the probability of occupation of energy levels in both bands is equal. The Fermi level rises in the case of an n-type semiconductor (doped with phosphorus) and declines towards the valence band in a p-type (doped with boron).

    The following illustrates an energy band diagram for a semiconductor with no bias across it. Photodiodes (light sensors) operate in this manner.

    intrinsicenergy

    The Fermi energy is shown to be constant. On the far right hand side, away from the depletion region, the PN junction appears to be only P-type (hence the low Fermi level with respect to the conduction band). Likewise, on the left the Fermi level is high with respect to the conduction band. The slope of the junction is proportional to the electric field. A strong electric field in the depletion region makes it harder for holes and electrons to move away from the region. When a forward bias is applied, the barrier decreases and current begins to flow (assuming the applied voltage is higher than the turn-on voltage of 0.7 V). Current flows whenever recombination occurs: every time an electron recombines on the P side, an electron is pushed out of the N side and begins to flow in an external circuit. The device wants to stay in equilibrium and balance out. This is why solar cells (as opposed to photodiodes) are designed to operate in a forward bias mode.

    Sunlight delivers energy in the ultraviolet, infrared and visible bands. In order to harness this energy, silicon is employed (produced from sand and carbon). Silicon wafers are employed in solar cells. The top layer of the silicon is a very thin layer doped with phosphorus (n-type). The bottom is doped p-type (with boron). This forms the familiar PN junction. The top layer has thin metal strips and the bottom is conductive as well (usually aluminum). Only frequencies around the visible light spectrum are absorbed into the middle region of the solar cell. The photon energy from the sun knocks electrons loose in the depletion region, which causes a current to flow. The output power of a single solar cell is only a few watts. To increase power, solar cells are wired in series and parallel to increase the voltage and current. Because the output of the solar cells is DC, it is run through an inverter, a high power oscillator that converts the DC to a 240 V AC output compatible with household appliances.

    solar_16x9_2

  • Ray Tracing Examples (1) Curved Mirrors March 28, 2020

    The following ray tracing examples all utilize Fermat’s principle in examining rays incident on a mirror.

    Example 1. Draw a ray trace for a ray angled at a convex mirror.

    The ray makes a 40 degree angle with the normal of the mirror at the point of incidence. In accordance with the law of reflection (Fermat’s Principle), the ray will exit at 40 degrees on the other side of the normal.

    op12

     

    The above example shows a single ray at an angle. Often, rays are drawn together as a group of parallel rays. This example shows how an incident set of parallel rays will no longer be parallel when reflected by a non-uniform (not flat) mirror surface.

    op6

     

    This example brings up an important concept that arises especially with concave mirrors. Two of the rays drawn appear to be directed towards the same point, known as the focal point. A focal point, however, is only consistent for smaller angles. The third ray at the bottom makes a 55 degree incident angle with the normal of the surface. The reflected ray is also 55 degrees from the normal, but directed to the other side of it, and does not converge at the focal point as the others do. This effect is known as an aberration and may be discussed further at length in a later article.

    op7

     

    This example makes use of the above concept of the focal point. An object placed at the focal point will not form an image at the focal point. This is useful if, for instance, some type of lens or collector should be placed at the focus of the mirror; this can be done without worry of it causing disturbances to the image that is formed at the focal point by the reflected rays.

    op8

  • Principles of Ray Tracing (1) March 27, 2020

    In geometrical optics, light is treated as rays, typically drawn as lines that propagate in a straight line from one point to another. Ray tracing is a method of determining how a ray will react to a surface or mirror. Rays are understood to propagate always in a straight line, however when entering an angled surface, rebounding from an angled surface or propagating through a different medium, there are a few techniques that are needed to reliably determine the direction and path of a light ray. The following properties are the basis for ray tracing.

    Refractive Index

    The refractive index is a property intrinsic to a medium that describes how fast or slow light propagates in that medium. The speed of light in a vacuum is 3*10^8 m/s; light only travels more slowly in real media. The refractive index is the speed of light in vacuum, c, divided by the velocity of light in the medium.

    op2

    The refractive index of air is approximately 1. The refractive index of glass, for instance, is about 1.5. This has implications for how light will propagate when changing from one medium to another.
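    As a quick worked example, for glass with η ≈ 1.5:

    $$
    v = \frac{c}{\eta} = \frac{3\times 10^{8}\ \text{m/s}}{1.5} = 2\times 10^{8}\ \text{m/s}
    $$

    so light travels at only two-thirds of its vacuum speed inside glass.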

    refractivein1

     

     

    Snell’s Law

    Snell’s law uses the angle of incidence (incoming ray), the angle of refraction (exiting ray) and the refractive indexes of each medium at a boundary to determine the path of propagation. Consider the example below:

    op1

    Snell’s Law: η1*sin(θ1) = η2*sin(θ2)

    The angle of incidence and the angle of refraction are both with respect to the normal of the surface!
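    A quick worked example: for a ray passing from air (η1 = 1) into glass (η2 = 1.5) at θ1 = 30°,

    $$
    \sin\theta_2 = \frac{\eta_1}{\eta_2}\sin\theta_1 = \frac{0.5}{1.5} \approx 0.333
    \quad\Rightarrow\quad \theta_2 \approx 19.5^\circ
    $$

    The ray bends toward the normal as it enters the optically denser medium.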

     

    Fermat’s Principle

    Fermat’s Principle is also demonstrated in the above figure. From Fermat’s Principle follows the law of reflection: the angle of incidence of a ray will be equal to the angle of reflection, exiting from the other side of the normal of the surface.

     

    Using these principles alone, many optical instruments and technologies can be designed and built that manipulate the direction of light rays.

  • RFID – Radio Frequency Identification March 26, 2020

    RFID is an important concept in the modern era. The basic principle of operation is simple: radio waves are sent out from an RF reader to an RFID tag in order to track or identify an object, whether it is a supermarket item, a car, or an Alzheimer’s patient.

    RFID tags are subdivided into three main categories: active, passive and semipassive. Active RFID tags employ a battery to power them, whereas passive tags utilize the incoming radio wave as a power source. The semipassive tag also employs a battery source, but relies on the RFID reader signal for the return signal. For this reason, the active and semipassive tags have a greater range than the passive type. The passive types are more compact and also cheaper, and for this reason are more common than the other two types. The RFID tag picks up the incoming radio waves with an antenna, which directs the electrical signal to a transponder. Transponders receive RF/microwaves and transmit a signal of a different frequency. After the transponder is the rectifier circuit, which converts the received RF signal to DC and charges a capacitor that (for the passive tag) is used to power the device.

    The RFID reader consists of a microcontroller, an RF signal generator and a receiver. Both the transmitter and receiver have antennas, which convert radio waves to electrical currents and vice versa.

    The following table shows frequencies and ranges for the various bands used in RFID

    RFIDtable

    As expected, lower frequencies travel further distances. The lower frequencies tend to be used for the passive type of RFID tags.

    For LF and HF tags, the working principle is inductive coupling whereas with the UHF and Microwave, the principle is electromagnetic coupling. The following image shows inductive coupling.

    inductive coupling

    A transformer is formed between the two coils of the reader and tag. The transformer links the two circuits together through electromagnetic induction. This is also known as near field coupling.

    Far field coupling/radiative coupling uses backscatter by reradiating from the tag to the reader. This depends on the load matching, so changing the load impedance will change the intensity of the return wave. The load condition can be changed according to the data in order for the data to be sent back to the reader. This is known as backscatter modulation.

  • Automotive Electrical System March 25, 2020

    In the early days of automobiles, electricity was not utilized within these machines. Car lights were powered by gas and engines were started by crank rather than a chemical battery.

    The three major components within a car’s electrical system are the battery (12 VDC), the alternator and the starter. The battery is the backbone of the car’s electrical system and the main source of electrical current. The electrical system can be split into two main parts. The main feed goes from the battery’s positive terminal to the starter motor; the cables attached to the battery are capable of carrying up to 400 amperes of current. This is the high current part of the circuit. The other part of the electrical system runs from the ignition switch and carries a lower current. When the ignition switch is turned all the way to the “engine start” position, the starter motor is powered, which begins the engine start process. What actually happens is that when the starter solenoid receives a small current from the ignition switch, it closes a pair of contacts and sends a large current to the starter. The starter needs a huge amount of current to spin the engine, which most humans cannot physically do.

    The starter motor rotates the flywheel, which turns the crankshaft on the engine. This allows the engine’s pistons to move and begin the process of internal combustion. Fuel is injected into the pistons and combined with air and spark, creates explosions which drive the engine.

    The alternator uses the principle of electromagnetic induction to supply energy to the battery and other electrical components. It is important to note that although the alternator produces AC (as induction always does), the output is rectified, much as in a dynamo, so the output is DC. The alternator is driven by a serpentine belt which causes the rotor to rotate and, in the presence of a stator, induces a current. The stator is made of tightly wound copper and the rotor is made of a collection of magnets, which produces the familiar Faraday induction effect. Diodes are used to rectify the output and also to direct current from the alternator to the battery to charge it.

    alternator

  • Using GIT – Introduction March 24, 2020

    Git is essentially a version control system for tracking changes in computer files. It can be used in conjunction with Visual Studio to program in C#, for example, and can be accessed through commands in the command window in Windows. Git is generally used to coordinate changes to code between multiple developers; a developer works in a local repository, which is then “pushed” to a remote repository such as Github.com.

    Git tracks changes to files by taking snapshots. This is done by the user by typing “git commit ….” in the command prompt. The files should first be added to the staging area using the command “git add <filename>”. “Git push” and “git pull” are used to interact with the remote repository. “Git clone” copies and downloads a repository to your local machine. Git saves every version that gets committed, so a previous version can always be accessed if necessary. The following image illustrates the concept of committing.

    gitcommit

    You can essentially “branch” your commits, which can later be merged together with a merge, creating a commit with multiple parents. The master branch is the main, linear list of saves. This can be done in the remote repository or the local one. A “pull request” essentially means taking changes that were made in a certain branch and pulling them into another branch. This means multiple people can be editing multiple branches which can then be merged together.

    Git is extremely useful for collaboration (as with websites such as google docs) where multiple authors can work on something at the same time. It also is excellent for keeping track of the history of projects.

  • Mobility and Saturation Velocity in Semiconductors March 23, 2020

    In solid state physics, mobility describes how quickly a charge carrier can move within a semiconductor device in the presence of a force (electric field). When an electric field is applied, the particles begin to move at a certain drift velocity, given by the mobility of the carrier (electron or hole) and the electric field. The equation can be written as: density

    This is also related to Ohm’s law in point form, which states that the current density is the conductivity multiplied by the electric field. This shows that the conductivity of a material is related to the number of charge carriers as well as their mobility within the material. Mobility is heavily dependent on doping, which introduces defects to the material. This means that intrinsic semiconductor material (Si or Ge) has higher mobility, but this is of limited benefit because intrinsic semiconductor has very few charge carriers. In addition, mobility is inversely proportional to effective mass, so a heavier particle will move at a slower rate.
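    Collecting these relations in their standard forms (μn and μp are the electron and hole mobilities):

    $$
    \vec{v}_d = \mu \vec{E}, \qquad
    \vec{J} = \sigma \vec{E}, \qquad
    \sigma = q\left(n\mu_n + p\mu_p\right)
    $$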

    Phonons also contribute to a loss of mobility through an effect known as “lattice scattering”. When the temperature of semiconductor material is raised above absolute zero, the atoms vibrate and create phonons. The higher the temperature, the more phonons, which means more collisions and lower mobility.

    Saturation velocity refers to the maximum velocity a charge carrier can attain within a semiconductor in the presence of a strong electric field. As previously stated, the drift velocity is proportional to mobility, but with increasing electric field a point is reached where the velocity saturates. Beyond this point, increasing the field only leads to more collisions with the lattice structure and phonons, which does not help the drift speed. Different semiconductor materials have different saturation velocities, which are strong functions of impurities.

  • Transistor IV curves and Modes of Operation/Biasing March 22, 2020

    In the field of electronics, the most important active device is without a doubt the transistor. A transistor acts as an ON/OFF switch or as an amplifier. It is important to understand the modes of operation for these devices, both voltage controlled (FET) and current controlled (BJT).

    For the MOSFET, the cutoff region is where no current flows through the inversion channel and the device functions as an open switch. In the “Ohmic” or linear region, the drain-source current increases linearly with the drain-source voltage; here the FET acts as a closed switch or “ON” state. The “saturation” region is where the drain-source current stays roughly constant despite the drain-source voltage increasing. In this region the FET functions as an amplifier.

    ivcurve

    The image above illustrates that for an enhancement mode FET, the gate-source voltage must be higher than a certain threshold voltage for the device to conduct. Before that happens, there is no channel for charge to flow. From there, the device enters the linear region until the drain-source voltage is high enough to be in saturation.

    DC biasing is an extremely important topic in electronics. For example, if a designer wishes for the transistor to operate as an amplifier, the FET must stay within the saturation region. To achieve this, a biasing circuit is implemented. Another condition which affects the operating point of the transistor is temperature, but this can be mitigated with a DC bias circuit as well (this is known as stabilization). “Stability factor” is a measure of how well the biasing circuit achieves this effect. Biasing a MOSFET sets its DC operating point, or Q point, and is usually implemented with a simple voltage divider circuit. This can be done with a single DC voltage supply. The following voltage transfer curve shows that the MOSFET amplifies best in the saturation region, with less distortion than in the triode/ohmic region.

    output

  • Quantum Wells in LEDs March 21, 2020

    Previously, the general functionality of a quantum well was discussed. Here, the function of quantum wells specifically within Light Emitting Diodes is discussed. In fact, LEDs often implement multiple quantum wells to increase their luminescence, or total light emission.

    Quantum wells are formed when a type of semiconductor (or compound semiconductor) with a narrower bandgap between its conduction and valence band is placed in between two wider bandgap semiconductors (such as GaN or AlN). The quantum well traps electrons within it at the conduction band, so as to increase recombination. Holes from the valence band will recombine with the conduction band electrons to emit photons, which gives the LED its distinct emission of light. The quantum well is the reason why the LED does not function strictly as a diode: if the electrons were not trapped, the current would simply flow through as in a regular diode. Although a greater number of quantum wells increases the luminescence of the LED, it can also lead to defects in the device.

    LEDs generate different colors of light by using different semiconductor material and different amounts of doping. This changes the energy gaps and leads to a different wavelength of light being produced. Gallium is a common element used in these compound materials.

  • Power Amplifiers basics March 20, 2020

    Multistage amplification is used to increase the overall gain of an amplifier chain; the total gain is the product of each stage’s gain. For example, a microphone can be connected first to a small signal amplifier (voltage amplifier), then to a power amplifier before being supplied to a speaker or some other load. The PA (large signal amplifier) is the final stage of the amplifier chain and is the most power hungry. The major features of these PAs are their efficiency (usually drain efficiency for FET archetypes or collector efficiency for BJT amplifiers) and impedance matching to the load. The output power of a PA is typically in the tens of watts (small signal amplifiers generally output in milliwatts, up to 1 watt maximum).

    “Small signal” transistors are used for small signal amplifiers whereas “power” transistors are used for PAs. Small signal transistors behave linearly whereas power transistors can suffer from nonlinear distortion.

    PAs can be classified based on the operating point (Q point) location. Class A amplifiers have a Q point at the center of the active region. For Class B, the Q point is at the cutoff region. For Class AB, the Q point is between that of class A and class B. For Class C, it is below cutoff.

    A major parameter of a PA is its efficiency. This is the ratio of AC output power to DC input power and is generally expressed as a percentage.

    Harmonic distortion of a PA involves the presence of harmonic multiples of the fundamental frequency at the output. A large input signal can cause this type of distortion.

  • Object Oriented Programming and C#: Methods March 19, 2020

    Methods in C# are quite similar to functions in C. A function is something that takes input parameters and returns outputs. The major difference between methods, which are part of object oriented programming, and functions is that methods are associated with objects. Methods give the programmer a huge advantage in that code becomes much easier to read and can be reused to avoid repetition. It is important to note that methods can only be contained within classes; methods declared within methods are not allowed. The method “Main” is included in every program.

    The following is the syntax for a method:

    method

    The return type for “Main” is void since it does not return a value. The name of the method and its list of parameters are called the “method signature”. The name of the method should contain either a verb or a noun and a verb. An “access modifier” gives the compiler information about how the method can be used, just as with classes; examples are public, private, etc. A “static” method is one that can be called without instantiating an object of the class.

    “Local variables” are variables defined within a method that can only be used within the method. The area of visibility for this variable is from where it is defined to the last bracket of the method body.

    Methods can be called from the “main” method or from some other method. In fact, methods can call themselves (this is called recursion). You can also call a method BEFORE it is declared!

    A method’s parameters are the inputs necessary to complete whatever task the method needs to achieve. Arrays can be used as parameters if necessary. When declaring a method with parameters, the values in the parentheses are called parameters, but when the method is called, the values actually used are called arguments.

    Another important note is that when a variable is passed as a method argument, its value is copied into the parameter for use within the method. Therefore, if the method reassigns its parameter internally (e.g. hardcodes a new value), the method will use the new value, but the caller’s variable will not be affected. Here is an example:

    printnum

    main

    This will print the number 5 (not 3). In the Main() method, however, numberArg is 3.
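    A sketch of what the code in the two images likely does (names such as PrintNum and numberArg are inferred from the description):

    ```csharp
    using System;

    class Program
    {
        // The parameter 'number' receives a copy of the caller's value.
        static void PrintNum(int number)
        {
            number = 5;                // reassigns the copy, not the caller's variable
            Console.WriteLine(number); // prints 5
        }

        static void Main()
        {
            int numberArg = 3;
            PrintNum(numberArg);          // prints 5
            Console.WriteLine(numberArg); // prints 3 -- unaffected by the method
        }
    }
    ```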

  • Hermitian Operators, Time-Shifting Wavefunction March 18, 2020

    It was mentioned in the previous article on Quantum Mechanics [link] that if the integral of a wavefunction over all space at one time is equal to one (thereby meaning that it is normalized and that the probability of the particle existing is 100%), then the wavefunction remains normalized at a later time, t.

    qm11

    A function in the place of Ψ*Ψ is used as a probability density function, ρ(x,t). The function N(t) is the resultant probability at a given time, given that the probability was found to be equal to 1 at a given time t0. Shown below, it is proposed that for dN/dt to equal zero, the Hamiltonian must be a Hermitian operator.

    qm12

    A Hermitian operator would satisfy the following:

    herm

    Hermiticity in general may be thought of as a type of conjugate form of an operator. An operator is Hermitian if its Hermitian conjugate is equal to itself. One may compare this relationship to that of a real number, whose complex conjugate is equal to itself.
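    (In integral form, the Hermiticity condition is presumably the standard one, holding for all admissible wavefunctions ψ and φ:)

    $$
    \int \psi^{*}\left(\hat{H}\phi\right) dx = \int \left(\hat{H}\psi\right)^{*} \phi\, dx
    $$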

    herm2

    Returning to the calculation of dN/dt,

    qm13

  • Ψ Wavefunction Describes Probability March 17, 2020

    Schroedinger’s first interpretation of the wavefunction was that Ψ would describe how a particle dissipates: where the wavefunction Ψ was highest, that was where more of the particle was present. Max Born disagreed, arguing that a particle does not disintegrate or spread itself out. Max Born proposed that the wavefunction instead describes the probability of a particle inhabiting a space. Both Schroedinger and Einstein were initially opposed to the idea of a probabilistic interpretation of the Schroedinger equation. The probabilistic interpretation of Max Born, however, later became the consensus view of quantum mechanics.

    The wavefunction Ψ therefore describes the probability of finding a particle at position x at time t, not the amount of the particle that exists there.

    qm6

    Since the Schroedinger equation is a function of both position and time, it can only be solved for one variable at a time. Solving for position is preferable, due to the fact that if the wavefunction is known for all x, this provides information about the wavefunction at a later time.

    qm7

    Among the constraints on the wavefunction, it is also required that the wavefunction be convergent (normalizable): the wavefunction must therefore tend to zero, not to a finite constant, as x approaches infinity.

    qm8

    We also recall that a wavefunction may be multiplied by a number. It would appear that doing so would violate the above expression. The resolution is that the above formula represents a normalized wavefunction. It turns out that not all wavefunctions are normalizable; a wavefunction multiplied by a constant magnitude, however, would still be normalizable. A wavefunction can be normalized if its integral is a finite number less than infinity, using the following method:

    qm10
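    (The method shown is presumably the standard one: if the integral N is finite, divide the wavefunction by its square root.)

    $$
    N = \int_{-\infty}^{\infty} \left|\Psi(x,t)\right|^{2} dx < \infty
    \quad\Rightarrow\quad
    \Psi_{\text{normalized}} = \frac{\Psi}{\sqrt{N}}
    $$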

     

     

  • Matrices, Multiple Dimensions in Quantum Mechanics March 16, 2020

    There are two main approaches to Quantum Mechanics. One is an equation-based approach which uses wavefunctions, operators and sometimes eigenstates. The other is a linear algebra approach that uses matrices, vectors and eigenvectors to describe quantum mechanics.

    matwave

    Consider an example of a quantum mechanical problem that uses linear algebra for the description of particle spin:

    matrqm

    This allows for a more direct view of commutators as discussed in the previous article on quantum mechanics [link]. Matrices have an advantage of storing much more information elegantly and are convenient for commutations.

    qmcommutate1

    Matrices in fact can be written for x_hat, p_hat and other operators. Matrices are also useful for introducing more than one dimension. We can also make use of this method to give us a three-dimensional Schroedinger equation. First we will start by forming three dimensions of momentum p vectors.

    qm4

     

  • Operators in Quantum Mechanics March 15, 2020

    Before getting into problems relating to the free particle Schroedinger equation, let’s review the full Schroedinger equation. The energy operator E_hat appears in the first equation below. Thus far, the equation relates only kinetic energy. Including potential energy allows the Schroedinger equation to be applied in a wide range of possible applications, describing the interactions of atoms and molecules in free space, wells, and other environments, thanks to the linearity of quantum mechanics. One major point to take from deriving the free particle Schroedinger equation is how important it is in Quantum Mechanics to create energy operators. An operator can be as simple as a constant or as complicated as a partial differential. Allowing an ‘operator’ to take on this wider range of features, as opposed to a basic variable, forms the basis of many quantum mechanical calculations. It then follows that the potential, V(x,t), can also be treated as an operator that modifies the system.

    schr3009

    Consider an operator X_hat that when multiplied by a function, results in the function being multiplied by x. Remember that although this may look like a variable, it is useful to consider this as an operator in Quantum mechanics.

    operator

    Does the order in which operators are multiplied matter?

    ops2

    Considering that operators are not always constants or variables, but also sometimes differentials, the order of operations for operators does matter.

    ops3

    A commutator measures the difference between applying two linear operators in the two possible orders. The commutator of x_hat and p_hat is i*h_bar.

    ops4
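    Worked out explicitly with p̂ = −iħ ∂/∂x acting on a test function ψ:

    $$
    [\hat{x},\hat{p}]\,\psi
    = -i\hbar\, x\frac{\partial \psi}{\partial x} + i\hbar\frac{\partial}{\partial x}\left(x\psi\right)
    = i\hbar\,\psi
    \quad\Rightarrow\quad
    [\hat{x},\hat{p}] = i\hbar
    $$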

  • Direct-Bandgap & Indirect-Bandgap Semiconductors March 14, 2020

    Direct Semiconductors

    When light reaches a semiconductor, the light is absorbed if the photon energy is greater than or equal to the band gap, creating electron-hole pairs. In a direct semiconductor, the minimum of the conduction band is aligned with the maximum of the valence band.

    qwell2

    gaas

    One example of a direct semiconductor is GaAs. The band diagram for GaAs is shown to the right. As the gap between the valence band and conduction band is 1.42 eV, if a photon of the same or greater energy is applied to the semiconductor, a hole-electron pair is created for each photon. This is termed the photo-excitation of semiconductors. The photon is thereby absorbed into the semiconductor.

     

    2232

     

    Indirect Semiconductors and Phonons

    indiresemic

    For an indirect semiconductor to absorb a photon, the process must be mediated by phonons, which are quanta of sound, in this case referring to the acoustic vibrations of the crystal lattice. A phonon is also used to provide energy for radiative recombination. When considering the essence of a phonon, one should recall that sound is not necessarily within hearing range (20 Hz – 20 kHz). In fact, the sound vibrations in a semiconductor may well be in the terahertz range. The diagram to the right shows how an indirect semiconductor band would appear, and also how phonon energy mediates the process, allowing the indirect semiconductor to behave as a semiconductor.

     

    Excitons

    Excitons are bound electron-hole pairs that are created in pure semiconductors when a photon with bandgap energy or larger is absorbed. In bulk semiconductors, these excitons will dissipate rapidly. In quantum wells however, the excitons may remain, even at room temperature. The effect of the quantum well is to force an electron and hole to be very close to each other. This allows for a strong bonding effect to take place and allows the quantum well the ability to generate light as a semiconductor laser.

     

    Quiz

    The band structure of a semiconductor is given by:

    sc1
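    (The expression in the image is presumably the standard two-band parabolic form:)

    $$
    E(k) =
    \begin{cases}
    E_c + \dfrac{\hbar^{2}k^{2}}{2m_c}, & \text{conduction band}\\[6pt]
    E_v - \dfrac{\hbar^{2}k^{2}}{2m_v}, & \text{valence band}
    \end{cases}
    $$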

    Where mc = 0.2 * m0 and mv = 0.8 * m0 and Eg = 1.6 eV. Sketch the E-k Diagram.

    sc2

  • DeBroglie Relations and the Scale of Quantum Effects (MIT OpenCourseWare) March 13, 2020

    Assignment Sheet MIT OpenCourseWare – Quantum Physics I

    debroglie22333333333

    qmscan1

    PDF of solutions

    Barton Zwiebach. 8.04 Quantum Physics I. Spring 2016. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.

     

  • Advanced Electronics and Optoelectronics: The MESFET March 12, 2020

    One of the more common FET transistor topologies is the MESFET (metal semiconductor field effect transistor). This active device is the oldest FET concept. The MESFET is similar in structure to a JFET (junction field effect transistor) but includes a Schottky junction instead of a P-N junction.

    The MESFET’s channel depends on three parameters: the velocity of the charge carriers, the density of these charge carriers, and the geometric cross section the carriers flow through. The gate electrode is connected directly to the semiconductor material, creating a Schottky diode. The MESFET is generally constructed from the compound semiconductor GaAs (Gallium Arsenide) to provide higher electron mobility. As shown, the substrate is semi-insulating to decrease parasitic capacitance.

    MESFET

    The device works by limiting the electron flow from source to drain, similar to a JFET. The Schottky diode controls the resistance of the channel (size of depletion region). Varying the voltage across the Schottky gate changes the channel size. Similar to other FETs, there is a certain pinch off voltage that causes the current to be very small, making the MESFET a switch or variable resistor. MESFETs can be depletion mode or enhancement mode. The MESFET is often used in high frequency wireless communication devices such as cell phones or military radars.

    (All information and photos obtained from “High Speed Electronics and Optoelectronics Devices and Circuits” by Sheila Prasad)

  • Ray Tracing with Snell’s Law – Optics, ECE591 March 11, 2020

    Of the four ways of manipulating light, these examples employ shaping of a lens and the refractive index to change the path of a ray.

    591-1