Recent Updates

  • mbenkerumass 10:11 am on November 9, 2019 Permalink | Reply  

    Welcome to the Students’ Page for the RF/Photonics Lab at UMASS Dartmouth 

    What we do in this lab:

    • RF – Radio Frequency, Microwave Electronics
    • Photonics – Lightwave Frequency Electronics, Fiber Optics, Lasers, Optics
    • Research
    • Hands-on learning


    If you are a student and you want to do research in the RF/Photonics lab, focus on the following and talk to Dr. Li:

    Mathematics, Fourier Transform, Differential Equations, Electrical Theory, Electromagnetic Theory, Communication Theory, Analog Electronics, Signal Processing (ECE 321, ECE 384), English Writing (research writing) and 3.0+ GPA

    If you would like to get in contact with the RF/Photonics Lab at UMASS Dartmouth, feel free to fill out the form on the Contact page. Also, visit the official page for the RF/Photonics Lab.

    This blog is written as one long, working document, with references listed on the page available through this hyperlink.

  • mbenkerumass 5:00 am on April 5, 2020 Permalink | Reply

    Thick Lens Equation – Trigonometric Derivation and Matrix Formation 

    The following set of notes first presents a trigonometric derivation of the thick lens equation using principles such as Snell’s law and the paraxial approximation. The final formula for the thick lens equation is rather unwieldy; a matrix form, we will find, is much more usable. Moreover, a matrix form allows one to add a number of lenses in series with ease, and the parameters of the lenses can be altered as well. Soon, the matrix formation of these equations will be used in MATLAB to demonstrate the ease with which an optical system can be built using matrix formations. The matrix formation of the thick lens equation can be summarized as the product of three matrices: one for the first curved surface, one for the separation between the surfaces, and one for the second curved surface. By altering the radii of curvature, the refractive indices at each position, and the distances between the surfaces in these matrices, a new lens can also be made, such as a convex thin lens, by inverting the curvature of the second surface and reducing the thickness of the lens. A second lens can be added in series. Once a matrix formation is at hand, numerous applications become simple.
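As a preview of the matrix approach, here is a minimal sketch in Python (rather than the MATLAB mentioned above); the lens parameters are illustrative assumptions, not values from the notes. It builds the three-matrix product and reads the effective focal length off the system matrix:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def refraction(n1, n2, R):
    # Ray transfer matrix for refraction at a spherical surface of radius R,
    # going from a medium of index n1 into one of index n2
    return [[1.0, 0.0], [-(n2 - n1) / (n2 * R), n1 / n2]]

def translation(d):
    # Straight-line propagation over a distance d
    return [[1.0, d], [0.0, 1.0]]

def thick_lens_matrix(n_lens, R1, R2, d):
    # First curved surface, then the glass thickness, then the second curved surface
    return matmul(refraction(n_lens, 1.0, R2),
                  matmul(translation(d), refraction(1.0, n_lens, R1)))

def focal_length(n_lens, R1, R2, d):
    M = thick_lens_matrix(n_lens, R1, R2, d)
    return -1.0 / M[1][0]   # effective focal length from the "C" element

# Biconvex lens (assumed example): n = 1.5, R1 = +10 cm, R2 = -10 cm, 1 cm thick
f_thick = focal_length(1.5, 0.10, -0.10, 0.01)
f_thin = focal_length(1.5, 0.10, -0.10, 1e-9)  # shrinking thickness recovers the thin lens
```

Shrinking the thickness toward zero reproduces the thin lensmaker's result (f = 10 cm for these radii), which is exactly the "alter a parameter, get a new lens" workflow described above.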


  • mbenkerumass 5:00 am on April 4, 2020 Permalink | Reply
    Tags: ATLAS

    Semiconductor Distribution of Electrons and Holes 

    Charge Flow in Semiconductors

    Charge flow in a semiconductor is characterized by the movement of electrons and holes. Considering that the density and availability of electrons and holes in a material is determined by the valence and conduction bands of that material, it follows that for different materials, there will be different densities of electrons and holes. The electron and hole density will determine the current throughput in the semiconductor, which makes it useful to map out the density of holes and electrons in a semiconductor.


    Density of States

    The density of electrons and holes is related to the density of states function and the Fermi distribution function. States are the configurations of electrons and holes that can form in a semiconductor, and the density of states is the number of such possible configurations that can exist. The Fermi-Dirac probability function is used for determining the density of quantum states. The following formula determines the most probable distribution or state: by varying Ni (the number of particles) along energy levels, the most probable state can be found, while gi refers to the remaining particle positions in the distribution.
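The Fermi-Dirac function itself is easy to evaluate numerically; a minimal Python sketch (energies in eV, chosen only for illustration):

```python
import math

def fermi_dirac(E, Ef, T):
    """Probability that a state at energy E (eV) is occupied,
    for Fermi level Ef (eV) and temperature T (K)."""
    k = 8.617e-5  # Boltzmann constant in eV/K
    return 1.0 / (1.0 + math.exp((E - Ef) / (k * T)))

# At the Fermi level the occupation probability is exactly 1/2
p_at_Ef = fermi_dirac(1.0, 1.0, 300.0)

# A state 0.2 eV above the Fermi level at room temperature is rarely occupied
p_above = fermi_dirac(1.2, 1.0, 300.0)
```

This is the probability factor that, multiplied by the density of states, gives the carrier distributions discussed below.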


    Density of States Calculation using ATLAS

    Integrating the Fermi-Dirac statistics over the density of states in the conduction and valence bands yields the formulae for the electron and hole concentrations in a semiconductor:

    n = Nc*exp(-(Ec - Ef)/kT) and p = Nv*exp(-(Ef - Ev)/kT)


    where Nc and Nv are the effective density of states for the conduction bands and valence bands, which are characteristics of a chosen material. If using a program such as ATLAS, the material selection will contain parameters NC300 and NV300.



    Charge Carrier Density

    Charge carriers simply refer to electrons and holes, which both contribute to the flow of charge in a semiconductor. The electron distribution in the conduction band is given by the density of quantum states multiplied by the probability (Fermi-Dirac probability function) that a state is occupied by an electron.

    Conduction Band Electron Distribution:


    The distribution of holes in the valence band is the density of quantum states in the valence band multiplied by the probability that a state is not occupied by an electron:



    Intrinsic Semiconductor

    An intrinsic semiconductor maintains the same concentration of electrons in the conduction band as holes in the valence band. Where n is the electron concentration and p is the hole concentration, the following formulae apply:


    The overall intrinsic carrier concentration is:

    ni = sqrt(Nc*Nv)*exp(-Eg/2kT)


    Eg is the band gap energy, which is equal to the difference between the energy of the conduction band and the energy of the valence band: Eg = Ec – Ev.
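As a numerical check of this relation, here is a short Python sketch. The room-temperature silicon values used (Nc, Nv, Eg) are rough textbook-order assumptions, not parameters taken from ATLAS:

```python
import math

k = 8.617e-5  # Boltzmann constant, eV/K

def intrinsic_concentration(Nc, Nv, Eg, T):
    """ni = sqrt(Nc*Nv) * exp(-Eg / 2kT); Nc, Nv in cm^-3, Eg in eV, T in K."""
    return math.sqrt(Nc * Nv) * math.exp(-Eg / (2 * k * T))

# Assumed silicon values: Nc ~ 2.8e19 cm^-3, Nv ~ 1.04e19 cm^-3, Eg ~ 1.1 eV
ni = intrinsic_concentration(2.8e19, 1.04e19, 1.1, 300.0)
```

The result lands on the order of 1e10 cm^-3, the familiar magnitude for intrinsic silicon at room temperature.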

    Electron and Hole concentrations expressed in terms of the intrinsic carrier concentration, where Ψ is the intrinsic potential and φ is the potential corresponding to the Fermi level (Ef = qφ):



    Donor Atoms Effect on Distribution of Electrons and Holes (Extrinsic Semiconductor)

    Adding donor or acceptor impurity atoms to a semiconductor changes the distribution of electrons and holes in the material, and the Fermi energy changes as dopant atoms are added. If the density of holes is greater than the density of electrons, the semiconductor is p-type; when the density of electrons is greater than the density of holes, the semiconductor is n-type (see the density of states formulas above).

    [8], [10]


  • mbenkerumass 6:00 am on April 3, 2020 Permalink | Reply

    Applications of the Paraxial Approximation 

    It was discussed in a previous article, Mirrors in Geometrical Optics, Paraxial Approximation, that the paraxial approximation allows an apparently imperfect or flawed system to be treated as a perfect one.

    Paraxial Approximation

    The paraxial approximation was proposed in response to a common occurrence in optical systems: the focal point is inconsistent for incident rays of higher incidence angles. The focal point F for a spherical mirror is understood under the paraxial approximation to be half the radius of curvature. Without the paraxial approximation, the system becomes increasingly complicated, as the focal point is a varying trigonometric function of the angle of incidence. The paraxial approximation assumes that all incident angles will be small.


    The paraxial approximation can be likened (and when analyzed fully, this is exactly the case) to a triangle of base B, hypotenuse H, and angle θ. Consider a case where H/B is very close to 1; θ will then also be very small. In this case, it does little harm to treat the triangle as one with θ = 0, virtually two lines H and B on top of each other, and more explicitly, H = B. This is precisely what is done when using the paraxial approximation.
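A quick numerical sketch in Python makes the limit of the approximation concrete (the 55-degree case matches the aberration example discussed below):

```python
import math

def paraxial_error(degrees):
    """Relative error of replacing sin(theta) with theta itself (radians)."""
    theta = math.radians(degrees)
    return abs(theta - math.sin(theta)) / theta

small = paraxial_error(5)    # well within the paraxial regime, ~0.1% error
large = paraxial_error(55)   # the 55-degree ray: over 10% error
```

At 5 degrees the approximation is essentially exact; at 55 degrees it is off by more than a tenth, which is why such rays miss the focal point.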


    An interesting question to ask is: up to what angle should we allow the paraxial approximation? The answer depends on how accurate, or how clear, the image must be. When discussing optical systems, an aberration is a case in which rays are not precisely focused at the focal point of a mirror (or of another type of focusing optical system). An aberration reduces the image clarity at the output of the system. The following image is an example of the effect of an aberration on an image in an optical system:


    Here is a problem that illustrates the issue of an aberration. Two rays appear to be correctly aligned to the focal point; however, another ray with an angle of incidence of 55 degrees is not focused at the focal point. A system that allows a ray of incidence of 55 degrees may be acceptable under some circumstances, but one should expect an aberration, or some level of blurriness, in the image.


  • jalves61 8:19 pm on April 2, 2020 Permalink | Reply

    Thermoelectric Effect, Thermoelectric current and the Seebeck Effect 

    There are three types of current flow in a semiconductor: drift, diffusion, and thermoelectric. Drift current is familiar from the study of conductors: when a potential gradient (voltage) is established, electrons flow to balance it out. The same effect happens in semiconductors. However, there are two types of charge carriers in semiconductors: electrons AND holes. This leads to diffusion current, which is caused by a concentration gradient rather than a potential gradient.

    The third kind of current within a semiconductor is called thermoelectric current, which involves the conversion of a temperature gradient to a voltage. A thermocouple is a device which measures the difference in potential across two dissimilar materials where one end is heated and the other is cold. It was found that the temperature difference is proportional to the potential difference. Although Alessandro Volta first discovered this effect, it was later rediscovered by Thomas Seebeck. The combination of potential differences leads to the full definition of current density.



    S is called the “thermopower” or “Seebeck coefficient” and has units of volts per kelvin. The two equations, Ohm’s law (point form) and E_emf, look remarkably similar.


    The Seebeck coefficient is negative for negative charge carriers and positive for positive charge carriers, leading to a difference in the Seebeck coefficient between the P and N sides of the PN junction above. This allows the above circuit to be used as a thermoelectric generator. If a voltage source replaces the resistor, the circuit becomes a thermal sensor. Thermoelectric generators are often employed by power plants to convert wasted heat energy into additional electric power. They are also used in car engines for the same reason (fuel efficiency). Solid state devices have a huge advantage in that they require no moving parts or fluids, which eliminates much of the need for maintenance. They also reduce environmental impact by converting waste heat into electrical energy.
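The generator's open-circuit voltage follows directly from the Seebeck relation V = (Sp − Sn)ΔT. A small Python sketch; the coefficient values and temperatures below are illustrative assumptions, not measured data:

```python
# Seebeck coefficients of the two legs, V/K (assumed illustrative values).
# Sp is positive (positive carriers, P side); Sn is negative (negative carriers, N side).
Sp = 200e-6
Sn = -200e-6

T_hot, T_cold = 500.0, 300.0  # junction temperatures in kelvin

# Open-circuit thermoelectric voltage of the couple
V_oc = (Sp - Sn) * (T_hot - T_cold)   # volts
```

With a 200 K temperature difference this couple develops 80 mV, which is why practical generators stack many couples in series.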

  • jalves61 4:53 pm on April 1, 2020 Permalink | Reply

    Object Oriented Programming and C#: Simple Program to add three numbers 

    The following is a simple program that takes a user input of three numbers and adds them, but does not crash when an exception is thrown (e.g. if a user inputs a non-integer value). The “int?” variable is used to include the “null” value, which signifies that a bad input was received. The user is notified instantly when an incorrect input is received by the program, with a “Bad input” command prompt message.


    The code above shows that the GetNumber() method is called (shown below) three times, and as long as these are integers, they are summed and printed to the console after being converted to a string.


    The code shows that as long as the sum of the three integers is not equal to null (anything plus null is equal to null, so if at least one input is a non-integer this will be triggered) the Console will print the sum of the three numbers. The GetNumber() method uses the “TryParse” method to convert each string input to an integer. This will handle exceptions that are triggered by passing a non-integer to the command line. It also gives a convenient return of “null” which is used above.
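The C# listings themselves appear as images above; as a rough, hypothetical cross-language sketch of the same pattern (a parse helper that returns a null-like value on bad input, and a sum that propagates the failure), here is the logic in Python:

```python
def get_number(text):
    """Return the parsed integer, or None on bad input
    (analogous to the described int.TryParse returning null)."""
    try:
        return int(text)
    except ValueError:
        print("Bad input")
        return None

def add_three(a, b, c):
    """Sum three parsed values; any None (bad input) makes the whole sum None,
    mirroring 'anything plus null is null'."""
    nums = [get_number(x) for x in (a, b, c)]
    if None in nums:
        return None
    return sum(nums)

result = add_three("1", "2", "3")      # 6
failed = add_three("1", "two", "3")    # None, after printing "Bad input"
```

The names and structure here are my own illustration of the described behavior, not the post's actual C# code.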

    The following shows the effect of both a summation and an incorrect input summation failure.



  • jalves61 11:23 pm on March 31, 2020 Permalink | Reply

    Object Oriented Programming and C#: Shallow vs Deep Copying 

    The following will be a brief but important post illustrating the difference between reference and value types. In C#, value types are things like integers, floats, enumerations and doubles. A value type holds the data assigned to it in its own memory allocation, whereas a reference type only holds an address which points to the actual data. A reference type is anything that is not an int, float, double, etc., such as a dynamic array (list), static array, class object, or string. It is important to know the difference because when code such as the code below is executed, it can have some confusing effects.

    shallow copy

    The image above illustrates what is known as a “shallow copy”. Because instances of classes are not storing actual data but are used as pointers, when the object is copied to the second object, the memory address is copied instead of the data contained within “obj”. Therefore, any changes made to “obj2” will also affect “obj” because they point to the same data. The following image shows the difference between deep and shallow copies.


    To do a deep copy of an array, for instance, every element of that array must be copied to the new array, for example with the “Array.Copy” method. As shown in the image, this creates two references and two sets of data instead of two references pointing to the same data. Shallow copying only copies a memory pointer.
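The same reference-versus-value behavior can be demonstrated in Python (used here purely as an illustration; the post's own examples are in C#), where the standard `copy` module makes the shallow/deep distinction explicit:

```python
import copy

obj = [[1, 2], [3, 4]]

shallow = copy.copy(obj)    # new outer list, but the inner lists are shared references
deep = copy.deepcopy(obj)   # every element copied: two references, two sets of data

obj[0][0] = 99
shared = shallow[0][0]  # the shallow copy sees the change through the shared reference
own = deep[0][0]        # the deep copy keeps its own, unchanged data
```

After the mutation, the shallow copy reads 99 while the deep copy still reads 1, exactly the confusing effect described above.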

  • jalves61 8:28 pm on March 30, 2020 Permalink | Reply  

    Power Factor and the Power Triangle 

    Power factor is a very important concept for commercial and industrial applications, which require a higher current draw to operate than domestic buildings. For a passive load (containing only resistance, inductance or capacitance and no active components), the power factor ranges from 0 to 1. Power factor is only negative with active loads. Before delving into power factor, it is important to discuss different types of power. The type of power most people are familiar with is measured in watts. This is called active or useful power, as it represents actual energy per unit time dissipated or “used” by the load in question. Another type of power is reactive power, which is caused by inductance or capacitance and leads to a phase shift between voltage and current. To demonstrate how a lagging power factor causes “wasted” power, it is helpful to look at some waveforms. For a purely resistive load, the voltage and current are in phase, so no power is wasted (P=VI is never zero at any point).


    The above image captures the concept of leading and lagging power factor (leading and lagging are always in reference to the current waveform). For a purely inductive load, the current lags because the inductor creates a “back EMF”, an inertial voltage that opposes changes in current. This EMF drives a current within the inductor, but only in response to the applied voltage. It can also be seen that this EMF is proportional to the rate of change of the current, so when the current is zero the voltage is maximum. For a capacitive load, the power factor is leading: a capacitor must charge up with current before establishing a voltage across the plates. This explains the PF “leading” or “lagging”. Most of the time, when power factor is decreased it is because the PF is lagging due to induction motors. To account for this, capacitors are used as part of power factor correction.

    The third type of power is apparent power, which is the complex combination of real and reactive power.


    The power factor is the cosine of the angle made in this triangle. Therefore, as the PF angle increases, the power factor decreases. The power factor is maximum when the reactive power is zero. Ideally, the PF would be between 0.95 and 1, but for many industrial buildings this can fall to as low as 0.7. This leads to higher electric bills for these buildings, because a lower power factor leads to increased current in the power lines feeding the building, which causes higher losses in the lines. It also leads to voltage drops and wasted energy. To conserve energy, power factor correction must be employed. Often capacitors are used in conjunction with contactors that are controlled by regulators that measure power factor; when necessary, the contactors switch on and allow the capacitors to improve the power factor.
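The power-triangle arithmetic is compact enough to sketch directly; the example numbers below are assumptions for illustration:

```python
import math

def power_triangle(P, Q):
    """Apparent power S (VA) and power factor from real power P (W)
    and reactive power Q (VAR), per the power triangle."""
    S = math.hypot(P, Q)   # |S| = sqrt(P^2 + Q^2), the hypotenuse
    pf = P / S             # cosine of the power-triangle angle
    return S, pf

# Assumed example load: 800 W real, 600 VAR reactive
S, pf = power_triangle(800.0, 600.0)

# With zero reactive power, the power factor is maximum (unity)
S0, pf0 = power_triangle(800.0, 0.0)
```

The 800 W / 600 VAR load draws 1000 VA at a power factor of 0.8, illustrating why reactive power inflates line current without doing useful work.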

    For linear loads, the power factor is called the displacement power factor, as it only accounts for the phase difference between the voltage and current. For nonlinear loads, harmonics are added to the output, because nonlinear loads cause distortion, which changes the shape of the output sinusoids. Nonlinear loads and power factor will be explored in a subsequent post.

  • jalves61 8:00 pm on March 29, 2020 Permalink | Reply

    Photovoltaic Effect and Theory of Solar Cells 

    Just as plants receive energy from the sun and use it to produce glucose, a photovoltaic cell receives energy from the sun and generates an electrical current. The working principle is based on the PN junction, which will be revisited here.

    The energy states of silicon can be subdivided into several discrete energy levels called “bands”. The major bands of concern are the valence and conduction bands. The bottom bands are fully occupied and do not change.


    For silicon, the bandgap energy is 1.1 eV. For an intrinsic semiconductor, the Fermi level is directly between the conduction and valence bands. This is because there are equal numbers of holes in the valence band and electrons in the conduction band, which means the probabilities of occupation of energy levels in both bands are equal. The Fermi level rises in the case of an n-type semiconductor (doped with phosphorus) and declines toward the valence band in a p-type (doped with boron).

    The following illustrates an energy band diagram for a semiconductor with no bias across it. Photodiodes (light sensors) operate in this manner.


    The Fermi energy is shown to be constant. On the far right hand side, away from the depletion region, the PN junction appears to be only P-type (hence the low Fermi level with respect to the conduction band). Likewise, on the left the Fermi level is high with respect to the conduction band. The slope of the bands in the junction is proportional to the electric field. A strong electric field in the depletion region makes it harder for holes and electrons to move away from the region. When a forward bias is applied, the barrier decreases and current begins to flow (assuming the applied voltage is higher than the turn-on voltage of 0.7 V). Current flows whenever recombination occurs, because every time an electron recombines on the P side, an electron is pushed out of the N side and begins to flow in an external circuit; the device wants to stay in equilibrium and balance out. This is why solar cells (as opposed to photodiodes) are designed to operate in a forward bias mode.

    Sunlight delivers energy in the ultraviolet, infrared and visible frequency bands. To harness this energy, silicon (produced from sand and carbon) is employed in the form of silicon wafers. The top layer of the silicon is a very thin layer doped with phosphorus (n-type). The bottom is doped with boron (p-type). This forms the familiar PN junction. The top layer has thin metal strips and the bottom is conductive as well (usually aluminum). Only frequencies around the visible light spectrum are absorbed into the middle region of the solar cell. The photon energy from the sun knocks electrons loose in the depletion region, which causes a current to flow. The output power of a single solar cell is only a few watts. To increase power, solar cells are wired in series and parallel to increase the voltage and current. Because the output of the solar cells is DC, it is run through an inverter, a high power oscillator that converts the DC to 240 V AC compatible with household appliances.
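The series/parallel scaling described above is simple arithmetic; here is a Python sketch, with the single-cell voltage and current below chosen as rough illustrative assumptions:

```python
def panel_output(v_cell, i_cell, n_series, n_parallel):
    """Series-connected cells add voltage; parallel strings add current."""
    return v_cell * n_series, i_cell * n_parallel

# Assumed single cell: ~0.6 V at ~3 A (a couple of watts, as noted above).
# 36 cells in series, 2 parallel strings:
v, i = panel_output(0.6, 3.0, 36, 2)
power = v * i   # panel output in watts (DC, before the inverter)
```

Two strings of 36 cells give roughly 21.6 V at 6 A, about 130 W of DC power, which the inverter then converts to AC.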


  • mbenkerumass 5:00 am on March 28, 2020 Permalink | Reply

    Ray Tracing Examples (1) Curved Mirrors 

    The following ray tracing examples all utilize Fermat’s principle in examining ray traces incident at a mirror.

    Example 1. Draw a ray trace for a ray angled at a convex mirror.

    The ray makes a 40 degree angle with the normal of the mirror at the point of incidence. In accordance with the law of reflection (Fermat’s Principle), the ray will exit at 40 degrees on the other side of the normal.



    The above example shows a single ray at an angle. Often, rays are drawn together in a group of parallel rays. This example shows how an incident set of parallel rays will no longer be parallel when reflected by a non-uniform (not flat) mirror surface.



    This example brings up an important concept that arises especially with concave mirrors. Two of the rays drawn appear to be directed towards the same point, known as the focal point. A focal point, however, is only consistent for smaller angles. The third ray at the bottom makes a 55 degree incident angle with the normal of the surface. The reflected ray is also 55 degrees from the normal, but directed to the other side of it. This ray does not converge at the focal point as the others do. This effect is known as an aberration and may be discussed further at length in a later article.
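This can be checked numerically. For a concave spherical mirror of radius R, a standard geometric result (derived from the law of reflection, assumed here rather than taken from the post) puts the axial crossing point of a ray parallel to the axis at R − R/(2·cos θ) from the mirror vertex, where θ is the angle of incidence. A Python sketch:

```python
import math

def focus_distance(R, degrees):
    """Axial crossing point, measured from the mirror vertex, of a ray
    parallel to the axis striking a concave spherical mirror of radius R
    at the given angle of incidence (with the normal)."""
    theta = math.radians(degrees)
    return R - R / (2.0 * math.cos(theta))

R = 1.0
paraxial = focus_distance(R, 1)    # essentially R/2 = 0.5, the paraxial focal point
marginal = focus_distance(R, 55)   # the 55-degree ray crosses far from R/2
```

The near-axis ray crosses at R/2 as expected, while the 55-degree ray crosses at roughly 0.13R, quantifying the aberration in the figure.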



    This example makes use of the above concept of the focal point. An object placed at the focal point will not make an image at the focal point. This is useful if, for instance, some type of lens or collector should be placed at the focus of the mirror; this can be done without worry of it disturbing the image that is formed at the focal point by the reflected rays.


  • mbenkerumass 5:01 am on March 27, 2020 Permalink | Reply

    Principles of Ray Tracing (1) 

    In geometrical optics, light is treated as rays, typically drawn as lines that propagate in a straight line from one point to another. Ray tracing is a method of determining how a ray will react to a surface or mirror. Rays are understood to always propagate in a straight line; however, when entering an angled surface, rebounding from an angled surface, or propagating through a different medium, a few techniques are needed to reliably determine the direction and path of a light ray. The following properties are the basis for ray tracing.

    Refractive Index

    The refractive index is a property intrinsic to a medium that describes how fast or slow light propagates in it. The speed of light in a vacuum is 3*10^8 m/s; light only travels more slowly in real media. The refractive index is the speed of light c divided by the velocity of light in the medium.


    The refractive index of air is approximately 1; the refractive index of glass, for instance, is about 1.5. This has implications for how light propagates when passing from one medium to another.




    Snell’s Law

    Snell’s law uses the angle of incidence (incoming ray), the angle of refraction (exiting ray) and the refractive indexes of each medium at a boundary to determine the path of propagation. Consider the example below:


    Snell’s Law: η1*sin(θ1) = η2*sin(θ2)

    The angle of incidence and the angle of refraction are both with respect to the normal of the surface!
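Snell's law as stated above translates directly into a few lines of Python; the function name and example angles are my own illustration:

```python
import math

def snell(n1, theta1_deg, n2):
    """Angle of refraction (degrees) from n1*sin(t1) = n2*sin(t2).
    Angles are measured from the surface normal. Returns None past
    the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

into_glass = snell(1.0, 45.0, 1.5)   # air to glass: the ray bends toward the normal
tir = snell(1.5, 60.0, 1.0)          # glass to air past the critical angle: no refraction
```

A 45-degree ray entering glass refracts to about 28 degrees, and the 60-degree ray inside glass undergoes total internal reflection, the condition exploited in fiber optics.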


    Fermat’s Principle

    Fermat’s Principle is also demonstrated in the above figure. From Fermat’s Principle follows the law of reflection: the angle of incidence of a ray is equal to the angle of reflection, with the reflected ray exiting on the other side of the normal to the surface.


    Using these principles alone, many optical instruments and technologies can be designed and built that manipulate the direction of light rays.
