Miller Effect

The Miller Effect is a generally negative consequence in broadband circuits because it reduces bandwidth by increasing the effective input capacitance. It arises in inverting (negative-gain) amplifiers, where a feedback capacitance between output and input appears at the input multiplied by the gain. Miller capacitance can also limit the gain of a transistor through its parasitic capacitances. A common way to mitigate the Miller Effect, which causes an increase in equivalent input capacitance, is the cascode configuration: a two-stage amplifier in which a common emitter stage feeds into a common base stage. Because the common emitter stage then sees a very low load impedance, its voltage gain (and hence its Miller multiplication) is small, which yields much wider bandwidth. In FET devices, capacitance exists between the electrodes (conductors), which likewise gives rise to the Miller Effect. The Miller capacitance is typically calculated at the input, but for high output impedance applications the output capacitance matters as well.


Interesting note: the Miller effect can also be used productively, to create a larger effective capacitance from a smaller physical one. This matters in integrated circuit design, where large, bulky capacitors are not ideal because "real estate" must be conserved.
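The arithmetic behind the multiplication can be sketched directly; the component values below are illustrative, not from any particular device.

```python
# Miller effect: a feedback capacitance C_f across an inverting amplifier
# of voltage gain -A appears at the input multiplied by (1 + A).

def miller_input_capacitance(c_feedback, gain_magnitude):
    """Equivalent input capacitance of a feedback cap across an inverting stage."""
    return c_feedback * (1 + gain_magnitude)

def miller_output_capacitance(c_feedback, gain_magnitude):
    """Equivalent output capacitance; approaches C_f for large gain."""
    return c_feedback * (1 + 1 / gain_magnitude)

c_gd = 2e-12   # 2 pF gate-drain (feedback) capacitance, illustrative
gain = 50      # |Av| of the inverting stage, illustrative

print(miller_input_capacitance(c_gd, gain))    # 1.02e-10, i.e. 102 pF at the input
print(miller_output_capacitance(c_gd, gain))
```

A 2 pF parasitic becoming 102 pF at the input is exactly the bandwidth-killing multiplication the cascode avoids.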


Beamforming (spatial filtering) is a major part of Fifth Generation (5G) wireless technology. Beamforming uses multiple antennas, varying the phase and amplitude of the signal fed to each one, to produce a beam directed in a specific direction. Focusing the antennas' energy this way is an effective method of preventing interference: constructive and destructive interference channel the energy and increase the array's directivity. Depending on the receiver's location, the superposition of the transmitted waves will be mostly constructive or mostly destructive. Beamforming is used not only in RF wireless communication but also in acoustics and sonar.

An important concept to know is that placing multiple radiating elements (antennas) together increases the directivity of the radiation pattern. Putting two antennas side by side creates a main lobe with 3 dB of additional forward gain; with four radiating elements this becomes 6 dB (quadruple the gain). If all of the elements are fed with the same signal, they still act as one single antenna, just with more forward gain. The major limitation is that this gain points in one fixed direction unless the beam can be moved, which is where feeding the antennas with different phases and amplitudes comes in. Each antenna then gets its own input signal, and having more separate antennas (and more input signals) creates a more directed antenna pattern. Spatial multiplexing can also be implemented to serve multiple users wirelessly by utilizing space multiple times over.

Using electronic phase shifters at the inputs of the antennas can considerably decrease the cost of driving the elements. This arrangement is known as a phased array; it can steer the beam pattern as necessary, but can only point in one direction at a time.
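The steering idea can be sketched numerically with the array factor of a uniform linear array; the element count, spacing, and steering angle below are arbitrary choices for illustration.

```python
import numpy as np

# Array factor of an N-element uniform linear array with element spacing d
# (in wavelengths). A progressive phase shift across the elements steers
# the main lobe toward steer_deg (angle measured from the array axis).

def array_factor_db(n_elements, d_wavelengths, steer_deg, theta_deg):
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    k_d = 2 * np.pi * d_wavelengths
    # per-element phase: geometric path difference minus the steering phase
    psi = k_d * (np.cos(theta)[:, None] - np.cos(steer)) * np.arange(n_elements)
    af = np.abs(np.exp(1j * psi).sum(axis=1)) / n_elements   # normalized
    return 20 * np.log10(np.maximum(af, 1e-12))

theta = np.linspace(0, 180, 721)
af = array_factor_db(8, 0.5, 90, theta)   # 8 elements, half-wave spacing, broadside
print(theta[np.argmax(af)])               # main lobe lands at the steering angle, 90 deg
```

Changing `steer_deg` electronically (the job of the phase shifters) moves the main lobe without physically rotating the antenna.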



RF Mixer basics

Mixers are three-port devices that can be active or passive, linear (time-varying) or nonlinear. They are used to modulate (upconvert) a signal to a higher frequency for transmission, or to demodulate (downconvert) a received signal to a lower frequency.


Two major mixer categories are switching and nonlinear. Nonlinear mixers allow for higher-frequency upconversion, but are less prevalent due to their less predictable performance. In the diagram above, the three ports are shown. During upconversion, the RF output contains the sum and difference of the IF (intermediate frequency) and LO (local oscillator) frequencies. Due to reciprocity, any mixer can be used for either upconversion or downconversion. For a downconversion mixer, the output is the IF and the RF is fed in on the left-hand side.


The above diagram illustrates the concept of frequency translation. In a receiver, the mixer translates a higher RF frequency (the frequency at which the wave propagated wirelessly through the air) down to a lower intermediate frequency. The mixer cannot be LTI; it must be either nonlinear or time-varying. The mixer is used in conjunction with a filter to select either the upper or lower sideband that results from multiplying two signals of different frequencies: the new frequencies are the sum and difference of the frequencies at the two input ports.

In addition to frequency translation during modulation, RF mixers can also be used as phase comparators, such as in phase locked loops.

To maintain linearity and avoid distortion, the LO input of a downconverter should be roughly 10 dB higher in power than the input RF signal. Unfortunately a stronger LO increases cost, and therein lies the tradeoff between cost and performance.
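The sum-and-difference behavior described above can be demonstrated numerically; the frequencies here are arbitrary illustrative choices.

```python
import numpy as np

# An ideal mixer multiplies the RF and LO sinusoids. The trig identity
# cos(a)cos(b) = 1/2[cos(a-b) + cos(a+b)] says the product contains only
# the difference (IF) and sum frequencies, which the FFT confirms.

fs = 100_000                       # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)      # 0.1 s of samples
f_rf, f_lo = 10_000, 8_000         # illustrative RF and LO frequencies
product = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(sorted(set(np.round(peaks, -1))))   # [2000.0, 18000.0]: difference and sum
```

A filter after the mixer selects one of the two products, e.g. the 2 kHz difference term for downconversion.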

High Speed Waveguide UTC Photodetector I-V Curve (ATLAS Simulation)

The following project uses Silvaco TCAD semiconductor software to build a waveguide UTC photodetector and plot its I-V curve. The design specifications, including material layers, are outlined below.

Simulation results

The structure is shown below:



Forward Bias Curve:


Negative Bias Curve:


Current Density Plot:


Acceptor and Donor Concentration Plot:


Bandgap, Conduction Band and Valence Band Plots:



Construct an ATLAS model for a waveguide UTC photodetector. The P contact is on top of layer R5, and the N contact is on layer 16. The PIN diode's ridge width is 3 microns. Please find: the I-V curve of the photodetector (both reverse biased and forward biased).

The material layers and ATLAS code are shown in the following PDF: ece530proj1_mbenker


The RF and microwave spectrum can be subdivided into many bands of varying purpose, as shown below.


On the lower frequency end, VLF (Very Low Frequency) tends to be used in submarine communication, while LF (Low Frequency) is generally used for navigation. The MF (Medium Frequency) band is noted for AM broadcast (see posts on amplitude modulation). The HF (shortwave) band is famous for its use by ham radio enthusiasts. The reason for this widespread usage is that HF does not require line of sight to propagate; instead it can reflect off the ionosphere and the surface of the earth, allowing the waves to travel great distances. VHF tends to be used for FM radio and TV stations. UHF covers the cellphone band as well as most TV stations. Satellite communication is covered in the SHF (Super High Frequency) band.

Regarding UHF and VHF propagation, line of sight must be achieved in order for the signals to propagate uninhibited. With increasing frequency comes increasing attenuation. This is especially apparent when dealing with 5G nodes, which are easily attenuated by buildings, trees and weather conditions. 5G uses bands within the UHF, SHF and EHF ranges.

Speaking of line of sight, the curvature of the earth must be taken into account.


The receiving and transmitting antennas must be visible to each other; this is the most common form of RF propagation. Twenty-five miles (sometimes 30 or 40) tends to be the maximum range of line-of-sight propagation (the radio horizon). The higher the frequency of the wave, the less bending or diffraction occurs, which means the wave will not propagate as far. Propagation distance is a strong function of antenna height: a rule of thumb is that raising an antenna by 10 feet is like doubling its output power. Impedance matching should be employed at the antennas and feedlines, as losses increase dramatically with frequency.
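The dependence of range on antenna height follows a standard rule of thumb; a minimal sketch, assuming the common 4/3-earth-radius approximation d(miles) ≈ 1.415·√h(feet):

```python
import math

# Approximate radio horizon for an antenna of height h (feet), using the
# standard 4/3-earth-radius model. For a complete link, the horizons of
# the transmitting and receiving antennas add together.

def radio_horizon_miles(height_ft):
    return 1.415 * math.sqrt(height_ft)

def max_line_of_sight_miles(tx_height_ft, rx_height_ft):
    return radio_horizon_miles(tx_height_ft) + radio_horizon_miles(rx_height_ft)

print(round(radio_horizon_miles(100), 1))          # ~14 miles for a 100 ft tower
print(round(max_line_of_sight_miles(100, 30), 1))  # tower-to-rooftop link
```

The square root is why height helps so much: quadrupling the height only doubles the horizon, but every extra foot near the ground pays off quickly.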

Despite their small wavelengths, UHF signals can still propagate through buildings and foliage, but NOT through the surface of the earth. One huge advantage of UHF propagation is the reuse of frequencies: because the waves travel only a short distance compared to HF waves, the same frequency channels can be reused by repeaters to re-propagate the signal. VHF signals (which have lower frequency) can sometimes travel farther than the radio horizon allows due to some (limited) reflection by the ionosphere.

Both VHF and UHF signals can travel long distances through "tropospheric ducting." Ducting occurs when a temperature inversion alters the refractive index of a layer of the troposphere; the signals are bent back along the layer, which allows them to propagate farther than usual.

P-I-N Junction Simulation in ATLAS

Introduction to ATLAS

ATLAS by Silvaco is a powerful tool for modeling and simulating a great number of electronic and optoelectronic components, particularly semiconductor devices. Structures are described using scripts, which are then simulated to extract a wide range of parameters, including solutions to equations that would otherwise require extensive calculation.


P-I-N Diode

The performance of the PN junction diode typically falls off at higher frequencies (~3 GHz), where the depletion layer becomes very small. To operate beyond that point, an intrinsic semiconductor layer is typically added between the p-doped and n-doped semiconductors to extend the depletion layer, allowing a working junction structure in the RF domain and up to the optical domain. The following file, a P-I-N junction diode, is an example provided with ATLAS by Silvaco. The net doping regions are, as expected, at either end of the PIN diode. The structure is 10 microns by 10 microns.
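ATLAS solves the full device physics, but the qualitative shape of a simulated diode I-V curve can be sanity-checked against the ideal (Shockley) diode equation; the saturation current and ideality factor below are illustrative, not fitted to this structure.

```python
import math

# Ideal-diode (Shockley) I-V: forward current rises exponentially once the
# applied voltage exceeds a few thermal voltages; reverse current saturates
# at -I_s. A rough reference curve, not a PIN device model.

def diode_current(v, i_s=1e-12, n=1.0, t_kelvin=300.0):
    v_t = 1.380649e-23 * t_kelvin / 1.602176634e-19   # thermal voltage kT/q, ~25.9 mV
    return i_s * (math.exp(v / (n * v_t)) - 1)

for v in (-1.0, 0.0, 0.3, 0.6, 0.7):
    print(f"{v:+.1f} V -> {diode_current(v):.3e} A")
```

The simulated PIN curve differs in detail (series resistance, high-level injection in the intrinsic region), but the exponential-forward / flat-reverse shape is the same.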


The code used to create this structure is depicted below.



After simulating the code, the cutline tool is used through the center of the PIN diode. The TonyPlot tool allows the plotting of a variety of parameters, such as electric field, electron Fermi level, net doping, voltage potential, electron and hole concentration and more.


Introduction to Electro-Optic Modulators

Electro-optics is a branch of photonics that deals with the modulation, switching and redirection of optical signals. These functions are produced through the application of an electric field, which alters the optical properties of a material, such as its refractive index. The refractive index is the ratio of the speed of light in a vacuum to the speed at which light propagates in the medium.


Modulators vs. Switches

In a number of situations, the same device may function as both a modulator and a switch. One factor determining whether the device is better suited as a switch or as a modulator is the strength of the effect an electric field has on the device. If the device's primary role is to impress information onto a light-wave signal by temporarily varying the signal, it is referred to as a modulator. A switch, on the other hand, either changes the direction or spatial position of light or turns it off completely.



Theory of Operation

Electro-optic Effect

The electro-optic effect is the dependence of the refractive index on an applied electric field. The change in refractive index, although small, allows for various applications. For instance, an electric field may be applied to a lens; depending on the material and the applied field, the focal length of the lens can change. Other optical instruments, such as prisms, may exploit the same effect. Even a very small adjustment to the refractive index produces a delay in the signal large enough to detect, and if information was impressed on the signal through that delay, it can be phase-demodulated at the receiving end.



Electroabsorption is another effect used to modify the optical properties of a material through the application of an electric field. An applied field shifts the effective absorption edge of the optical semiconductor, turning the material from optically transparent to optically opaque at the signal wavelength. This process is useful for making modulators and switches.


Kerr Effect and Pockels Effect

The Pockels Effect and the Kerr Effect both describe the change in refractive index under an applied electric field: in the Pockels Effect the index change is linear in the field, while in the Kerr Effect it is quadratic (nonlinear). Although the Pockels Effect is the more pronounced of the two in electro-optic modulator design, both are applied in many situations. The linear electro-optic effect exists only in crystals without inversion symmetry. The design of electro-optic modulators or switches requires special attention to the waveguide material and how the electric field interacts with it. Common materials (which also maintain large Pockels coefficients) are GaAs, GaP, LiNbO3, LiTaO3 and quartz. The Kerr Effect is relatively weak in commonly used waveguide materials.
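For a rough sense of scale, here is a sketch of the linear (Pockels) index change Δn ≈ ½·n³·r·E, using textbook-order values for LiNbO3; treat the numbers as illustrative rather than design data.

```python
# Linear electro-optic (Pockels) index change for an applied field E:
#   delta_n ≈ (1/2) * n^3 * r * E
# n_e and r33 below are textbook-order values for lithium niobate.

def pockels_delta_n(n, r, e_field):
    """Index change for field E in V/m and electro-optic coefficient r in m/V."""
    return 0.5 * n**3 * r * e_field

n_e = 2.2            # extraordinary index of LiNbO3, approximate
r33 = 30.8e-12       # electro-optic coefficient, m/V (~30.8 pm/V)
e = 1e6              # applied field: 1 V across a 1 um gap = 1e6 V/m

dn = pockels_delta_n(n_e, r33, e)
print(dn)            # on the order of 1e-4
```

An index change of ~10^-4 sounds tiny, but over thousands of wavelengths of propagation it accumulates into a full π phase shift, which is exactly what a phase modulator needs.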


Properties of the Electro-Optic Modulator

Modulation Depth

Important for both modulators and switches is the modulation depth, also known as the modulation index. Modulation depth applies to the several types of optical modulators, such as intensity modulators, phase modulators and interference modulators. The modulation depth may be understood as the strength of the effect applied to the signal; in other words, is the modulation very noticeable? Is it a strong modulation or a weak one?



Bandwidth

The bandwidth of the modulator is critically important, as it determines what range of signal frequencies may be modulated onto the optical signal. Switching time or switching speed is the equivalent figure for an optical switch.


Insertion Loss

Insertion loss of optical modulators and switches is a form of optical power loss, expressed in dB. Insertion loss typically means the system requires more electrical power; it does not explicitly reduce the quality of the modulation or switching function of the device.


Power Consumption

In distinction from the applied electric field, a modulator or switch also needs a power supply of its own. The amount of power required increases with modulation frequency, so a common figure of merit is the drive power per unit bandwidth, typically expressed in milliwatts per megahertz.


References: [1], [4], [6]

Optical System Design using MATLAB

Previously featured was an article that derived a matrix form of the thick lens equation. This matrix equation, it was said, can be used to build a variety of optical systems; this will be undertaken here using MATLAB. One of the great advantages of a matrix formula in MATLAB is that essentially any known parameter in the optical system can not only be altered directly, but also swept to see how it affects the system. Parameters that can be altered include the radius of curvature of a lens, the thickness of a lens or the distance between two lenses, wavelength, incidence angle, refractive indices and more. MATLAB can also solve for a parameter, such as the radius of curvature, given a desired angle. All of these parameters can be varied and the results plotted.


Matrix Formation for Thick Lens Equation

The matrix equation for the thick lens is modeled below:




  • nt2 is the refractive index beyond surface 2
  • αt2 is the angle of the exiting or transmitted ray
  • Yt2 is the height of the transmitted ray
  • D2 is the power of curvature of surface 2
  • D1 is the power of curvature of surface 1
  • R1 is the radius of curvature of surface 1
  • R2 is the radius of curvature of surface 2
  • d1 is the thickness of the lens or distance between surface 1 and 2
  • ni is the refractive index before surface 1
  • αi is the angle of the incident ray
  • Yi1 is the height of the incident ray

The following plots show a parameter sweep over a number of these variables. The following attachment includes the code used for these calculations and plots: optics1hw
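The original calculations use MATLAB (attached above); the same matrix chain can be sketched in Python. The radii, thickness and indices below are illustrative, and the ray state is the standard (height, n·angle) pair, matching the variables in the list above.

```python
import numpy as np

# Thick lens as a product of ray-transfer matrices acting on (y, n*alpha):
# refraction at surface 1, translation through the glass, refraction at
# surface 2. Sign convention: R > 0 when the center of curvature lies to
# the right of the surface.

def refraction(n1, n2, radius):
    power = (n2 - n1) / radius          # D = (n2 - n1) / R
    return np.array([[1.0, 0.0], [-power, 1.0]])

def translation(d, n):
    return np.array([[1.0, d / n], [0.0, 1.0]])

n_air, n_glass = 1.0, 1.5
r1, r2, thickness = 0.05, -0.05, 0.01   # meters; illustrative biconvex lens

system = refraction(n_glass, n_air, r2) @ translation(thickness, n_glass) @ refraction(n_air, n_glass, r1)

# Trace one paraxial ray: height 5 mm, entering parallel to the axis.
ray_in = np.array([0.005, 0.0])
ray_out = system @ ray_in
focal_length = -ray_in[0] / ray_out[1]  # effective focal length from the parallel-ray trace
print(ray_out, focal_length)            # f ≈ 51.7 mm for these values
```

Sweeping a parameter is then one loop: rebuild `system` for each value of, say, `r1`, and record `focal_length`, which is exactly what the MATLAB sweeps above do.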


HEMT – High Electron Mobility Transistor

One of the main limitations of the MESFET is that, although the device extends well into the mmWave range (30 to 300 GHz, the upper part of the microwave spectrum), it suffers from low field mobility because free charge carriers and ionized dopants share the same space.

To demonstrate the need for HEMT transistors, let us first consider the mobility of the GaAs compound semiconductor. As shown in the picture, with decreasing temperature, Coulomb scattering becomes prevalent as opposed to phonon (lattice vibration) scattering. For an n-channel MESFET, the main electrostatic Coulomb force is between positively ionized donor atoms and electrons. As shown, the mobility is heavily dependent on doping concentration: Coulomb scattering effectively limits mobility. In addition, decreasing the gate length of a MESFET increases Coulomb scattering, because a shorter gate requires a higher doping concentration in the channel. This means that for an effective device, the separation of free and fixed charge is needed.


A heterojunction consisting of n+ AlGaAs and p- GaAs is used to combat this effect, with a spacer layer of undoped AlGaAs placed between the two materials. In a heterojunction, materials with different bandgaps are placed together (as opposed to a homojunction, where the bandgaps are the same).


This structure confines electrons donated by the n+ AlGaAs layer in a quantum well, separating them from the ionized donors and reducing Coulomb scattering. An important distinction between the HEMT and the MESFET is that the MESFET (like all FETs) modulates the channel thickness, whereas in an HEMT the density of charge carriers in the channel is changed, not its thickness. In other words, applying a voltage to the gate of an HEMT causes the density of free electrons to increase (positive voltage) or decrease (negative voltage). The channel is composed of a two-dimensional electron gas (2DEG); the electrons in this gas move freely without obstruction, leading to high electron mobility.

HEMTs are generally packed into MMIC chips and can be used for RADAR applications, amplifiers (small signal and PAs), oscillators and mixers. They offer low noise performance for high frequency applications.

The pHEMT (pseudomorphic HEMT) is an enhancement of the HEMT that features layers with different lattice constants (standard HEMTs use materials with roughly the same lattice constant). This allows material pairs with wider bandgap differences and generally better performance.

Off Topic: Planet Earth – Climates and Deserts

The following post is an off-topic discussion of planet earth, consisting of miscellaneous topics involving climate types and deserts.

We can begin our study of the planet earth by discussing different types of sand dunes. Dunes are found wherever sand is blown around, as sand dunes are the product of Aeolian processes in which wind erodes loose sand. There are five main types: Barchan, Star, Parabolic, Transverse and Longitudinal, though these sometimes go by other names. The shape of a dune is a product of wind direction. With Barchan dunes, the wind is predominantly in one direction, which leads to the development of a crescent-shaped dune; the shape is convex and the "horns" point downwind. The other two types of dunes formed by wind in one direction are parabolic and transverse dunes. Parabolic dunes are similar to Barchan dunes, although the "horns" point opposite to the direction of the wind. The key defining features of this dune type are the presence of vegetation and the fact that they are affected by "blowouts," the erosion of vegetated sand.


As shown above, transverse dunes are also quite similar to barchans, but have wavy ridges instead of a crescent shape; the ridges lie at right angles to the wind direction. Sand dunes formed by wind from multiple directions are either Linear/Longitudinal or Star dunes. Star dunes are the result of wind moving in many directions, whereas Longitudinal dunes form where winds converge, producing ridges parallel to the overall wind direction.

An important term concerning dunes is "saltation," the rolling and bouncing of sand grains due to wind. The distinction between saltation, creep and suspension, all of which are wind-driven transport processes, is that saltating grains follow a parabolic trajectory.


Within hot deserts (as opposed to cold deserts), it is common to find structures such as mesas and buttes.


From left to right in the image, the difference between each type of landform is apparent. The pinnacle (or spire) is the most narrow. It is important to note that all of these desert structures are formed by not only wind, but also water (and heat). In addition, a desert surface is generally made of sand, rock and mountainous formations.

An important feature of deserts is desert pavement: a sheet-like surface of rock particles left behind after wind or water has removed the sand, a very slow process. There are several theories as to why desert pavement forms, including intermittent removal of sand by wind and later rain, or possibly the shrinking and swelling of clay.


Another concept of sand erosion is deflation, defined as the release of sand from soil by wind.

An important characteristic of deserts is the extreme temperature range of the region. During the day, (hot) deserts are hot: the sunlight reaching the ground is absorbed by the sand, raising the temperature of the ground, because of the lack of water near the surface. If water were present near the surface, most of the heat would go into evaporating the water. However, even hot deserts are cold at night, because the dry surface does not store heat as well as a moist surface. Since water vapor is a greenhouse gas (and there is little of it in desert air), infrared radiation escapes to outer space, which contributes to the cold night temperatures.

Ventifacts, pictured below, are stones shaped by wind erosion. They are commonly found in arid climates with very little vegetation and strong winds, since vegetation often interferes with particle transport.


An inselberg, as its German name implies ("Insel" means island), is an isolated mountain that tends to be surrounded by sand. The area around an inselberg tends to be relatively flat, another defining characteristic of the structure.


A playa lake is a temporary body of water, also referred to as a dry lake. Playas are created when water collects in a depression; when the evaporation rate exceeds the inflow, the lake dries up, which tends to leave a buildup of salt.

An interesting piece of information about deserts is that they tend to be located at 30-degree latitudes in both the northern and southern hemispheres. At the equator there is a low-pressure zone due to direct sunlight, and the climate tends to be relatively stable with heavy rainfall; at the 30-degree latitudes, sinking air creates high pressure, which leads to dry weather, so high-pressure regions are very important to the development of deserts. The world's largest hot desert (H climate) is the Sahara, and the largest cold desert (K) is Antarctica. The major difference between a BW (arid) climate and a BS (semiarid) climate is the amount of precipitation: less than ten inches of annual precipitation indicates an arid climate, and roughly 10-20 inches indicates semiarid.


Thick Lens Equation – Trigonometric Derivation and Matrix Formation

The following set of notes first presents a trigonometric derivation of the thick lens equation using principles such as Snell's law and the paraxial approximation. The final formula for the thick lens equation is rather unwieldy; a matrix form, we will find, is much more usable. Moreover, a matrix form allows one to chain a number of lenses together in series with ease, and the parameters of the lenses can be altered. Soon, the matrix formation of these equations will be used in MATLAB to demonstrate the ease with which an optical system can be built from matrices. The matrix form of the thick lens equation can be summarized as the product of three matrices: one for the first curved surface, one for the separation to the next curved surface, and one for the final curved surface. By altering the radii of curvature, the refractive indices, and the distances in these matrices, a new lens can be made, such as a thin convex lens, by inverting the curvature and reducing the thickness of the lens. A second lens can be added in series. Once a matrix formation is at hand, numerous applications become simple.


Semiconductor Distribution of Electrons and Holes

Charge Flow in Semiconductors

Charge flow in a semiconductor is characterized by the movement of electrons and holes. Considering that the density and availability of electrons and holes in a material is determined by the valence and conduction bands of that material, it follows that for different materials, there will be different densities of electrons and holes. The electron and hole density will determine the current throughput in the semiconductor, which makes it useful to map out the density of holes and electrons in a semiconductor.


Density of States

The density of electrons and holes is related to the density of states function and the Fermi distribution function. A state is a configuration that electrons and holes can occupy in a semiconductor, and the density of states is the number of such configurations available. The Fermi-Dirac probability function is used to determine the occupation of these quantum states. The following formula determines the most probable distribution: by varying Ni (the number of particles) along energy levels, the most probable state can be found, where gi refers to the remaining particle positions in the distribution.


Density of States Calculation using ATLAS

Integrating the Fermi-Dirac statistics over the density of states in the conduction and valence bands gives the formulae for electron and hole concentration in a semiconductor:


where Nc and Nv are the effective densities of states for the conduction and valence bands, which are characteristics of the chosen material. In a program such as ATLAS, the material selection sets the corresponding parameters NC300 and NV300 (the 300 K values).
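Under the Boltzmann approximation the concentrations above reduce to simple exponentials; a sketch using commonly quoted 300 K silicon values (illustrative inputs, not ATLAS output):

```python
import math

# Boltzmann-approximation carrier concentrations:
#   n = Nc * exp(-(Ec - Ef)/kT),   p = Nv * exp(-(Ef - Ev)/kT)
# Energies in eV; concentrations in cm^-3.

K_T_300 = 0.025852           # kT at 300 K, in eV

def electron_concentration(nc, ec_minus_ef_ev):
    return nc * math.exp(-ec_minus_ef_ev / K_T_300)

def hole_concentration(nv, ef_minus_ev_ev):
    return nv * math.exp(-ef_minus_ev_ev / K_T_300)

nc, nv, eg = 2.8e19, 1.04e19, 1.12   # Si near 300 K: Nc, Nv (cm^-3), Eg (eV)

# Intrinsic concentration: ni = sqrt(Nc * Nv) * exp(-Eg / (2 kT))
ni = math.sqrt(nc * nv) * math.exp(-eg / (2 * K_T_300))
print(f"{ni:.2e}")   # same order as the commonly quoted ~1e10 cm^-3 for silicon
```

Moving Ef toward Ec raises n and lowers p exponentially, which is exactly the doping effect discussed below.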



Charge Carrier Density

Charge carriers simply refer to electrons and holes, which both contribute to the flow of charge in a semiconductor. The electron distribution in the conduction band is given by the density of quantum states multiplied by the probability (Fermi-Dirac probability function) that a state is occupied by an electron.

Conduction Band Electron Distribution:


The distribution of holes in the valence band is the density of quantum states in the valence band multiplied by the probability that a state is not occupied by an electron:



Intrinsic Semiconductor

An intrinsic semiconductor maintains the same concentration of electrons in the conduction band as holes in the valence band. Where n is the electron concentration and p is the hole concentration, the following formulae apply:


The overall intrinsic carrier concentration is:


Eg is the band gap energy, equal to the difference between the energy of the conduction band and the energy of the valence band: Eg = Ec – Ev.

Electron and Hole concentrations expressed in terms of the intrinsic carrier concentration, where Ψ is the intrinsic potential and φ is the potential corresponding to the Fermi level (Ef = qφ):



Donor Atoms Effect on Distribution of Electrons and Holes (Extrinsic Semiconductor)

Adding donor or acceptor impurity atoms to a semiconductor will change the distribution of electrons and holes in the material, and the Fermi energy shifts as dopant atoms are added. If the density of holes is greater than the density of electrons, the semiconductor is p-type; when the density of electrons is greater than the density of holes, it is n-type (see the Density of States formulas above).

[8], [10]


Applications of the Paraxial Approximation

It was discussed in a previous article, Mirrors in Geometrical Optics, Paraxial Approximation, that the paraxial approximation is used to treat an apparently imperfect or flawed system as a perfect system.

Paraxial Approximation

The paraxial approximation was proposed in response to a common occurrence in optical systems: the focal point is inconsistent for incident rays at higher angles of incidence. The focal point F of a spherical mirror is understood under the paraxial approximation to be half the radius of curvature. Without the paraxial approximation, the system becomes increasingly complicated, as the focal point is a varying trigonometric function of the angle of incidence. The paraxial approximation assumes that all incident angles are small.


The paraxial approximation can be likened (and when analyzed fully, this is exactly the case) to a triangle of base B, hypotenuse H and angle θ. Consider a case where H/B is very close to 1; θ will then be very small. In this case, it does little harm to treat the triangle as one with θ = 0, with H and B virtually two lines on top of each other, and more explicitly H = B. This is precisely what is done when using the paraxial approximation.
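The size of the error the approximation introduces can be tabulated directly, since the paraxial step amounts to replacing sin θ with θ:

```python
import math

# Relative error of the small-angle replacement sin(theta) -> theta,
# tabulated at a few angles of incidence.

def paraxial_error(deg):
    theta = math.radians(deg)
    return (theta - math.sin(theta)) / math.sin(theta)

for deg in (1, 5, 10, 20, 55):
    print(f"{deg:3d} deg: {100 * paraxial_error(deg):.3f}% error")
```

At a few degrees the error is a fraction of a percent; by 55 degrees it is well over 15%, which is why such a ray misses the paraxial focal point so visibly.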


An interesting question to ask is: what angle should be the limit up to which we allow a paraxial approximation? The answer depends on how accurate, or clear, the image must be. In optical systems, an aberration is a case in which rays are not precisely focused at the focal point of a mirror (or of another type of focusing optical system). An aberration reduces the image clarity at the output of the system. The following image is an example of the result of an aberration in an optical system:


Here is an example of a problem that makes the issue of aberration clear. Two rays appear to be correctly aligned to the focal point; however, another ray with an angle of incidence of 55 degrees is not focused at the focal point. A system that admits a ray at 55 degrees of incidence may be acceptable under some circumstances, but one should expect an aberration, or some level of blurriness, in the image.


Thermoelectric Effect, Thermoelectric current and the Seebeck Effect

There are three types of current flow in a semiconductor: drift, diffusion, and thermoelectric. Drift current is familiar from the study of conductors: when a potential gradient (voltage) is established, electrons flow to balance it out. The same effect happens in semiconductors; however, semiconductors have two types of charge carriers, electrons AND holes. This leads to diffusion current, which is caused by a concentration gradient rather than a potential gradient.

The third kind of current within a semiconductor is thermoelectric current, which involves the conversion of a temperature gradient into a voltage. A thermocouple is a device that measures the difference in potential across two dissimilar materials where one end is heated and the other is cold. It was found that the temperature difference is proportional to the potential difference. Although Alessandro Volta first discovered this effect, it was later rediscovered by Thomas Seebeck. The combination of potential differences leads to the full definition of current density.



S is called the "thermopower" or "Seebeck coefficient" and has units of volts per kelvin. The two equations, Ohm's law (point form) and the thermoelectric EMF, look remarkably similar.


The Seebeck coefficient is negative for negative charge carriers and positive for positive charge carriers, leading to a difference in the Seebeck coefficient between the P and N sides of the PN junction above. This allows the above circuit to be used as a thermoelectric generator. If a voltage source replaces the resistor, the circuit becomes a thermal sensor. Thermoelectric generators are often employed by power plants to convert wasted heat into additional electric power; they are also used in car engines for the same reason (fuel efficiency). As solid-state devices, they have the huge advantage of requiring no moving parts or fluids, which eliminates much of the need for maintenance, and they reduce environmental impact by converting waste heat into electrical energy.
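A minimal sketch of the thermocouple relationship V ≈ (S_a − S_b)·ΔT; the type-K coefficient used below is a rough room-temperature value, for illustration only.

```python
# Open-circuit Seebeck voltage for a junction of two dissimilar materials:
#   V ≈ (S_a - S_b) * (T_hot - T_cold)
# A chromel/alumel (type K) pair gives roughly 41 uV/K near room temperature.

def seebeck_voltage(s_a, s_b, t_hot, t_cold):
    """EMF in volts, for Seebeck coefficients in V/K and temperatures in K."""
    return (s_a - s_b) * (t_hot - t_cold)

v = seebeck_voltage(41e-6, 0.0, 373.15, 293.15)   # 80 K temperature difference
print(f"{v * 1e3:.2f} mV")                        # a few millivolts
```

The millivolt-scale output is why practical generators stack many junctions in series, and why a thermocouple readout needs a sensitive meter.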

Object Oriented Programming and C#: Simple Program to add three numbers

The following is a simple program that takes three numbers as user input and adds them, but does not crash when an exception would be thrown (e.g. if the user inputs a non-integer value). The "int?" type is used because it includes the "null" value, which signifies that a bad input was received. The user is notified immediately of an incorrect input with a "Bad input" message at the command prompt.


The code above shows that the GetNumber() method (shown below) is called three times; as long as the results are integers, they are summed, converted to a string, and printed to the console.


The code shows that as long as the sum of the three integers is not null (anything plus null equals null, so a single non-integer input triggers this), the console prints the sum of the three numbers. The GetNumber() method uses "TryParse" to convert each string input to an integer; when parsing fails, no exception is thrown and GetNumber() returns the convenient "null" value used above.
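The original program is in C# (shown in the screenshots); the same null-propagation pattern can be sketched in Python, with None standing in for null. The function names here are illustrative, not the ones in the C# source.

```python
# Mirror of the C# pattern: a failed parse yields None instead of raising,
# and any None among the three inputs spoils the total.

def try_parse_int(text):
    """Return the parsed integer, or None on bad input (like int.TryParse)."""
    try:
        return int(text)
    except ValueError:
        return None

def sum_or_none(inputs):
    """Sum the parsed values; None if any input failed to parse."""
    values = [try_parse_int(s) for s in inputs]
    if None in values:           # one bad input is enough to abort the sum
        return None
    return sum(values)

print(sum_or_none(["1", "2", "3"]))   # 6
print(sum_or_none(["1", "x", "3"]))   # None -> the "Bad input" case
```

The design choice is the same in both languages: signal failure through the return value rather than letting an exception crash the program.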

The following shows the result of both a successful summation and a summation that fails due to incorrect input.