# The Half Wave Dipole Antenna

The dipole is a type of linear antenna. Its most common form, the half-wave dipole, consists of two quarter-wavelength conductors placed end to end (collinear) and fed at the center. Another common size for the dipole is 1.25λ. These sizes will be discussed later.

To begin the study of the dipole antenna, it is useful to discuss the infinitesimal dipole: a dipole shorter than 1/50 of the wavelength, also known as a Hertzian dipole. This is an idealized component which does not physically exist, but it serves as a building block for analyzing larger antennas, which can be broken into small segments. The mathematics behind this can be found in "Antenna Theory: Analysis and Design" by Constantine Balanis.

More importantly, three regions of radiation can be defined: the reactive near field, the radiating near field, and the far field (where the radiation pattern no longer changes shape with distance – this is where the radiation pattern is calculated). The reactive near field extends out to a range of about λ/2π, roughly 1/6 of a wavelength. The electric and magnetic fields in this region are 90 degrees out of phase and do not radiate; the E and H fields must be in phase to propagate. The radiating near field, also known as the Fresnel zone, extends from about λ/6 out to 2D²/λ, where D is the largest dimension of the antenna. Although the radiation pattern is not fully formed there, propagating waves do exist in this region. In the far field, r must be beyond 2D²/λ and much, much greater than λ/2π.
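As a quick numerical illustration of these boundaries, the sketch below (plain Python; the function name and the example antenna are my own choices, not from the original post) computes the two transition ranges for a given wavelength and largest antenna dimension D:

```python
import math

def field_region_boundaries(wavelength, d):
    """Boundaries of the three radiation regions for an antenna of
    largest dimension d operating at the given wavelength (meters)."""
    reactive_edge = wavelength / (2 * math.pi)  # reactive near field ends (~lambda/6)
    fresnel_edge = 2 * d ** 2 / wavelength      # radiating near field (Fresnel) ends
    return reactive_edge, fresnel_edge

# Half-wave dipole at 300 MHz: wavelength 1 m, d = 0.5 m
r_reactive, r_far = field_region_boundaries(1.0, 0.5)
```

Here the reactive near field ends around 0.16 m (about λ/6) and the far field begins beyond 2D²/λ = 0.5 m; for electrically small antennas the λ/2π criterion dominates instead.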

The radiation patterns of the dipole antenna are pictured below, in both the E and H planes. The E-plane (elevation angle) pattern is pictured on the bottom right and the H-plane (azimuthal angle) pattern beside it on the left. The plots are given in dB scale. The radiation patterns can be understood by considering a pen: facing the pen you can see its full length, but looking down on the pen you can only see the tip or end. This is analogous to the dipole antenna, where maximum radiation is broadside to the antenna and minimum radiation is off the ends, leading to the figure-8 radiation pattern. When this radiation pattern is extended to three dimensions, the top-left image is obtained.

# Is the focal length of a spherical mirror affected by the medium in which it is immersed? …. of a thin lens? What’s the difference?

Mirrors

A spherical mirror may be either convex or concave. In either case, the focal length of a spherical mirror is one-half its radius of curvature:

f = R/2

Because reflection involves no refraction, the focal length of a mirror is independent of the refractive index of the surrounding medium.

Lens

The thin-lens (lensmaker's) equation, including the refractive index of the surrounding material (not necessarily air), is:

(1/f) = ((n_lens/n_medium) - 1)((1/r1) - (1/r2))

The effect of the refractive index of the surrounding material can be summarized as follows:

• The focal length is inversely proportional to the refractive index of the lens minus the refractive index of the surrounding medium.
• As the refractive index of the surrounding medium increases (while remaining below that of the lens), the focal length also increases.
• If the refractive index of the surrounding medium is larger than that of the lens, the focal length changes sign: rays diverge upon exiting, and a converging lens acts as a diverging one.
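A small sketch (Python, with illustrative radii and indices of my own choosing) makes these bullet points concrete using the immersed-lens form of the lensmaker's equation:

```python
def focal_length(n_lens, n_medium, r1, r2):
    """Thin-lens focal length for a lens of index n_lens immersed in a
    medium of index n_medium: 1/f = (n_lens/n_medium - 1)(1/r1 - 1/r2)."""
    power = (n_lens / n_medium - 1.0) * (1.0 / r1 - 1.0 / r2)
    return 1.0 / power

# Symmetric biconvex glass lens, |R| = 10 cm
f_air = focal_length(1.50, 1.00, 0.10, -0.10)    # in air: f = 10 cm
f_water = focal_length(1.50, 1.33, 0.10, -0.10)  # in water: focal length grows
f_dense = focal_length(1.50, 1.70, 0.10, -0.10)  # denser medium: f < 0, diverging
```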

# Under what conditions would the lateral magnification (m=-i/o) for lenses and mirrors become infinite? Is there any practical significance to such condition?

Magnification of a lens or mirror is the ratio of image size to object size:

m = -i/o

where i is the image distance and o is the object distance. Simply put: how much larger or smaller does the object appear as a result of the lens or mirror? An upright (virtual) image corresponds to positive magnification, while an inverted (real) image corresponds to negative magnification. The question then is, how can this ratio become infinite? Consider the thin-lens equation relating object distance, image distance and focal length:

(1/f) = (1/o) + (1/i)

For the magnification to be infinite, the image distance i must go to infinity, which occurs when the object distance equals the focal length (o = f). In this case the outgoing rays are parallel and no image forms at any finite distance; the image appears as if it were coming from infinity. A negative magnification means that the image is upside-down. The practical significance of the infinite case is collimation: placing a source at the focal point, as in a searchlight or collimator, produces a parallel beam.

# How does the focal length of a glass lens for blue light compare with that for red light? Consider the case of either a diverging lens or a converging lens.

This question really has three parts:

• Focal Length of a lens
• Effect of light frequency (color)
• Diverging and Converging lens

Focal Length of the Converging and Diverging Lens

For the converging and diverging lens, the focal point has a different meaning. First, consider the converging lens. Parallel rays entering a converging lens are brought to focus at the focal point F of the lens. The distance between the lens and the focal point F is called the focal length, f. The focal length is a function of the radii of curvature of both surfaces of the lens as well as the refractive index of the lens. The formula for the focal length is below,
(1/f) = (n-1)((1/r1)-(1/r2)).

This formula also works for a diverging lens; however, the sign conventions for the radii of curvature must be respected. If, for instance, the center of curvature for one surface lies to the left of the lens, one may choose that direction to be positive and the opposite direction to be negative, as long as the same convention is maintained throughout. If the focal length of a lens is negative, meaning that the focal point is on the side from which the rays entered, the lens is a diverging lens.

Interaction of Color with Focal Length

The other part of this question dealt with how the focal length changes for one color such as blue versus another such as red. The key to this relationship is the refractive index of the lens, as the refractive index varies with the color (i.e. frequency) of the light.
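As a numerical illustration of this dependence (the indices below are typical crown-glass-like values chosen for the example, not measured data), the shorter blue wavelength sees a higher index and thus a shorter focal length:

```python
def thin_lens_focal_length(n, r1, r2):
    # 1/f = (n - 1)(1/r1 - 1/r2), lens in air
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# Symmetric biconvex lens, |R| = 20 cm; illustrative dispersion values
f_red = thin_lens_focal_length(1.51, 0.20, -0.20)   # red: lower index
f_blue = thin_lens_focal_length(1.53, 0.20, -0.20)  # blue: higher index
```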

The material from which the lens is made is not specified; however, for common optical glasses the refractive index is consistently higher at shorter wavelengths. Reviewing the focal-length formula, the inverse proportionality shows that as the refractive index increases, the focal length decreases. Blue light sees a higher refractive index than red, so blue will have a shorter focal length than red. For a diverging lens the same holds in magnitude: the blue focal length is shorter, i.e. more strongly negative.

# Object Oriented and C#: Quadratic Roots Program

The following program is designed to accept three doubles as inputs and print the roots of a quadratic, whether complex or real. If a non-double is entered, the program should display "Bad Input". The program contains two files: a "Program" file to run the main methods and a "Complex" file which defines the class for handling complex numbers and overrides the built-in "ToString" method.

The first goal is to implement the part of the program that handles real roots. The easiest piece is a method that reads doubles. It is important that the method uses a nullable return type (double?), because it should return null if a non-double such as a string is entered. This pairs naturally with the built-in "TryParse" method, which returns a boolean: the "if" statement checks whether the parse succeeded and, if so, returns the result; otherwise null is returned.

Next, the "GetQuadraticString" method is implemented to format the printed result in the form "AX^2+BX+C". This is also done within the "Program" file. Format specifiers are placed within the placeholders to round the printed values to two decimal places where necessary.

The "GetRealRoots" method produces the roots of the quadratic when they are purely real. First the discriminant (the expression under the square root in the quadratic formula) is calculated. Several "if" statements check how many real roots exist and return that count as an integer. For example, if the discriminant is negative, there are no real roots: both "out" variables are set to null and the function returns 0. For a discriminant of 0, the quadratic formula reduces to -B/2A, the second root is null, and the return value is again the number of roots (1). It is important to note that an "if-else" chain must end in a plain "else" rather than "else if", so that all remaining possibilities are covered.

Within the "Main" function, three numbers are read from the console using the GetDouble method. An integer from GetRealRoots gives the number of roots, which drives the conditional statements. For ease of reading, a string variable stores the return value of GetQuadraticString.
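The C# listing itself is not reproduced here, but the root-counting logic just described can be sketched in Python (with None standing in for the nullable out parameters; the function name mirrors the post's GetRealRoots):

```python
import math

def get_real_roots(a, b, c):
    """Return (count, root1, root2) for a*x^2 + b*x + c = 0, real roots only."""
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return 0, None, None          # no real roots
    elif discriminant == 0:
        return 1, -b / (2 * a), None  # repeated root -B/2A
    else:
        root = math.sqrt(discriminant)
        return 2, (-b + root) / (2 * a), (-b - root) / (2 * a)
```

For example, get_real_roots(1, -3, 2) reports two roots, 2 and 1.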

Next, an "if" statement prints "Bad Input" if any of the a, b, c variables are null. A return statement inside the "if" exits the method after printing, so no "else" has to be provided.

Now the logic for the complex roots must be implemented. The default constructor is declared with default inputs of zero; it needs no body because it simply forwards those defaults to the main Complex constructor. The "ToString()" method must be overridden because the formatting must be changed to suit complex numbers. In addition, logic must be implemented for the "GetImaginaryRoots()" method. The discriminant is calculated the same way as before, except its absolute value is taken. The real part of the complex root must be calculated separately, which is why the denominator is split out. The two roots are complex conjugates of each other.

The "Main" function must be updated to reflect the imaginary roots, and the "GetQuadraticString()" method is updated as shown. Three pieces of string are created, beginning as empty strings and filled in subject to several conditions. Separating them into parts allows the logic to handle coefficients of 1 or -1, and when C is zero an empty string is printed for that term.

# E-K Diagrams

As previously concluded, solids can be characterized by their energy band diagrams. A conductor has valence and conduction bands that are very close together or overlap; its valence band is completely filled and its conduction band is partially filled. The "forbidden region" of a conductor is very small, so little energy is required for an electron to move from the valence band to the conduction band. In the presence of an external field, electrons move very easily from the valence band to the conduction band.

For semiconductors, at absolute zero the valence band is also completely full, and the bandgap is typically about 1 eV to 3 eV, although a material with a bandgap as small as 0.1 eV could still be considered a semiconductor. A semiconductor at 0 K is therefore an insulator. Semiconductors are very temperature sensitive, as the subsequent figure illustrates: the resistivity is very high near absolute zero, making the semiconductor behave like an insulator, but at higher temperatures it can become quite conductive. At room temperature (300 K), the semiconductor behaves more like a conductor. Because a simple band diagram conveys limited information, it is also necessary to analyze an E-k (energy versus momentum) diagram. The bandgap energy is what an electron needs to traverse the gap: for example, in Silicon with a bandgap of 1.1 eV, it takes 1.1 eV for an electron to move from the valence band to the conduction band. The available thermal energy is given as E = kT, where T is the temperature and k is Boltzmann's constant.
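As a sanity check on the E = kT relation (the constants below are standard CODATA values), the thermal energy at room temperature is far smaller than a typical semiconductor bandgap:

```python
K_B = 1.380649e-23       # Boltzmann constant, J/K
Q = 1.602176634e-19      # joules per electronvolt

kT_eV = K_B * 300.0 / Q  # thermal energy at 300 K, in eV (~0.026 eV)
ratio = 1.1 / kT_eV      # Si bandgap (1.1 eV) versus thermal energy
```

Roughly 0.026 eV of thermal energy against a 1.1 eV gap explains why only a small fraction of electrons is thermally excited across the bandgap at room temperature.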

For intrinsic semiconductors like Silicon, the structure is crystalline and periodic. The wavefunction (which describes the probability of finding an electron) should therefore also be periodic (sinusoidal in character). From the Schrodinger equation, it can be shown that the energy is periodic in k as well. For the diagrams, E is plotted against k. The borders of the first Brillouin zone run from -π/a to π/a, where a is the lattice constant of the crystal. Since the energy is periodic in k, only one zone needs to be considered; such a plot is called the "reduced zone" scheme. Sometimes the x axis is given as momentum rather than wavenumber, since these differ only by a factor of the reduced Planck constant (p = ħk). From this diagram the bandgap energy, the effective masses of electrons and holes, and the density of states can all be read off. The effective mass is indicated by the curvature of the bands: for example, a heavy-hole band can be identified as the band with less curvature. From the diagram it is also apparent whether the material has a direct bandgap (such as GaAs). The basic energy band diagram agrees with the E-k diagram in that the band maxima and minima correspond, but it gives no other characteristics. It is for this reason that the E-k diagram is so useful.
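The curvature-to-mass relationship can be demonstrated numerically. Assuming a simple parabolic band E(k) = ħ²k²/(2m*), the effective mass recovered from the curvature via m* = ħ²/(d²E/dk²) matches the mass used to build the band (a self-consistency sketch, not data from the post):

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # free-electron mass, kg

def parabolic_band(k, m_eff):
    # parabolic approximation to a band near its edge
    return HBAR ** 2 * k ** 2 / (2.0 * m_eff)

dk = 1e8  # 1/m, finite-difference step
curvature = (parabolic_band(dk, M_E) - 2.0 * parabolic_band(0.0, M_E)
             + parabolic_band(-dk, M_E)) / dk ** 2
m_effective = HBAR ** 2 / curvature  # recovers M_E for the parabolic band
```

A flatter band (smaller curvature) would yield a larger m_effective, which is exactly the heavy-hole observation above.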

To derive the RADAR range equation, it is first necessary to define the power density at a distance from an isotropic radiator. An isotropic radiator is a fictional antenna that radiates equally in all directions (in both azimuth and elevation). The power density (in watts per square meter) at range R is:

S = Pt/(4πR²)

However, RADARs are of course not isotropic but directional, so the power density gains a scaling factor, the antenna gain G:

S = (Pt·G)/(4πR²)

This simply means that the power is concentrated into a smaller portion of the sphere's surface. To review, gain is directivity scaled by antenna efficiency, meaning that gain accounts for attenuation and loss from the input port of the antenna to where the power is radiated into the atmosphere. The power intercepted by a target is this density scaled by a value known as the RCS (RADAR Cross Section, σ), which has units of square meters. The RCS of a target depends on three main parameters: interception, reflection and directivity. The RCS is a function of target viewing angle and therefore is not a constant. In short, the RCS describes how much power the target intercepts, how much it reflects, and how much is directed back toward the receiver; a perfectly invisible stealth target would have an RCS of zero. The intercepted power is therefore:

Pint = (Pt·G·σ)/(4πR²)

The power density back at the receiver is this power spread over another sphere, which puts the range to the fourth power:

Sr = (Pt·G·σ)/((4πR²)²)

This means that if the range from radar to target is doubled, the received power is reduced by 12 dB (a factor of 16). Scaling this density by the antenna effective area gives the power received at the radar.
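The effective-aperture form derived so far can be checked numerically (all parameter values below are arbitrary illustrations, not from the post):

```python
import math

def radar_received_power(p_t, gain, rcs, a_e, r):
    """Monostatic radar equation, effective-aperture form:
    Pr = Pt * G * sigma * Ae / ((4*pi)^2 * R^4)."""
    return p_t * gain * rcs * a_e / ((4.0 * math.pi) ** 2 * r ** 4)

p_near = radar_received_power(1e3, 1000.0, 1.0, 0.1, 10e3)  # target at 10 km
p_far = radar_received_power(1e3, 1000.0, 1.0, 0.1, 20e3)   # range doubled
drop_db = 10.0 * math.log10(p_near / p_far)                 # ~12 dB
```

Doubling the range drops the received power by a factor of 16, about 12 dB, confirming the fourth-power dependence.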
However, it is customary to replace this effective area (which is less than the physical area due to losses) with a receive gain term, using Ae = Gr·λ²/(4π):

Pr = (Pt·Gt·Gr·λ²·σ)/((4π)³·R⁴)

The symbol η represents antenna efficiency and is a coefficient between 0 and 1. It is important to note that the RCS value (σ) is an average RCS, since as discussed the RCS is not a constant. For a monostatic radar, the two gain terms can be replaced by a single G² term, because the receive and transmit gains tend to be the same, especially for mechanically scanned array antennas.

# HFSS: Conical Horn Antenna Simulation

For the following simulation, the solution type is Driven Modal. Driven modal gives solutions in terms of power, as opposed to Driven Terminal which displays results in terms of voltages and currents. The units are set to inches.

The first step is to create the circular waveguide with a radius of 0.838 inches and a height of three inches. To make the building process easier, a relative coordinate system is implemented through the Modeler window, and the coordinate system is moved up to z = 3. A conical transition region (taper) is built at that origin: the lower radius is 0.838, the upper radius is 1.547, and the height is 1.227. The coordinate system is then adjusted to sit on top of the taper, and the "throat" is created by placing yet another cylinder on top of the taper, with a height of 3.236.

Now all the objects are selected (the shortcut "CTRL + A" selects everything) and a Boolean unite is performed. From this point, a single object is obtained and named "Horn_Air", visible in the project tree on the left. The coordinate system is moved back to the standard origin and "pec" (perfect electric conductor) is selected as the default material; this is used to create the horn wall, shown below. A Boolean subtract between the vacuum parts and the conductive portion creates the hollowed-out antenna.

Because the simulation is of a radiating antenna, an air box of some sort must be implemented; in this case, a cylindrical radiation boundary is used. The bottom of the device is chosen for the wave port. After assigning the two-mode wave port, the coordinate system is redefined for the radiation setup. For the radiation sweep, the azimuthal angle runs from 0 to 90 degrees in a single 90-degree increment, and the elevation angle runs from -180 to 180 degrees with a step size of 2.

The simulation is run at 5 GHz with a maximum of 10 adaptive passes. The S-matrix data is shown below, as well as the convergence plot, followed by the gain radiation pattern. The plot is in decibels and is swept over the elevation angle. Both the left-hand and right-hand circularly polarized wave patterns are shown at angles phi = 90 and phi = 0.
The two larger curves are the RHCP and the two smaller are LHCP.
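One quick sanity check on this geometry (plain Python, standard circular-waveguide theory rather than anything taken from the HFSS project): the 0.838-inch-radius guide has its TE11 cutoff below the 5 GHz solve frequency, so the dominant mode propagates. The constant 1.8412 is the first root of the derivative of the Bessel function J1, which governs the TE11 mode.

```python
import math

C0 = 299792458.0            # speed of light, m/s
radius_m = 0.838 * 0.0254   # waveguide radius: 0.838 inches in meters
P11_PRIME = 1.8412          # first root of J1', sets the TE11 cutoff

f_cutoff = P11_PRIME * C0 / (2.0 * math.pi * radius_m)  # ~4.13 GHz
operates = f_cutoff < 5e9   # TE11 propagates at the 5 GHz solve frequency
```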

# Object Oriented Programming and C#: Program to Determine Interrupt Levels

The following is a program designed to detect environmental interrupts based on data entered by the user. The idea is to generate a threshold from the standard deviation and the twenty-second average of the data set.

A bit of background first: the standard deviation, like the variance, describes the "spread" of a data set. The standard deviation is the square root of the variance, to be specific, which leaves it with the same units as the mean, whereas the variance has squared units. In simple terms, the standard deviation describes how close the values are to the mean: a low standard deviation indicates a narrow spread with values closer to the mean. Physical data obtained by averaging many samples of a random experiment can often be approximated by a Gaussian (Normal) distribution, which is symmetric about the mean. As a real-world example, this approximation can be made for the height of adult men in the United States, where the mean is about 5'10" with a standard deviation of three inches: for a normal distribution, roughly 68% of adult men are within three inches of the mean, as shown in the following figure.

In the first part of the program, the variables are initialized. The value "A" represents the multiple of standard deviations; previous calculations determined that the minimum threshold level should be roughly four standard deviations added to the twenty-second average. Two arrays are defined: one of length 200 to calculate the two-second average, and one of length 10 for the twenty-second average.

The next part of the program is the infinite "while(true)" loop. The current time is printed to the console for the user's awareness. Then the user is prompted to enter minimum and maximum values for a reasonable range of audible values, and these are parsed into integers. Next, the Random class is instantiated, and a for loop runs 200 iterations, storing a random value in the "inputdata_two[]" array on each pass. The random value is constrained to the max and min values provided by the user.
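The sampling loop and the two-second average can be sketched in Python (the random module standing in for the C# Random class; names and the seed are illustrative):

```python
import random
import statistics

def two_second_average(minimum, maximum, samples=200, seed=42):
    """Fill a 200-element buffer with random values in [minimum, maximum]
    and return the buffer plus its mean, as in the post's inner for loop."""
    rng = random.Random(seed)
    data = [rng.randint(minimum, maximum) for _ in range(samples)]
    return data, statistics.fmean(data)

data, avg = two_second_average(10, 100)
```

The interrupt threshold later compares an average like this one against mean + 4·stdev of the twenty-second data, per the "A" multiplier described above.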
The LINQ "Average()" extension method gives an easy means to calculate the two-second average from the array. Next, a foreach statement iterates through every value (10 values) of the twenty-second-average array and prints them to the console. An interrupt is triggered if two conditions are met: the time has incremented to a full 20 seconds, and the two-second average is greater than the calculated minimum threshold. "Alltime" is then set to -2 to reset the clock for the next set of data. Once the time has incremented to 20 seconds, the twenty-second average is calculated, and from it the standard deviation is calculated and printed to the console. The rest of the code is pictured below: the time is incremented by two seconds until it reaches 18 seconds. The code is shown in action; if high max and min values are entered, an interrupt is triggered and the clock is reset.

# Object Oriented Programming and C#: Fractions Program

The following is a post explaining the functionality of a C# program in Visual Studio, which is designed to do basic operations between fractions which are ratios of whole numbers.

To begin, three namespaces are included with "using" directive statements. The "System" namespace is included with virtually every program. The next two must be included to use certain classes; without the directives in place, those namespaces would have to be written out fully at every usage of the classes they contain.

The next bit of code is pictured above. Two integers are declared with the "private" access modifier to indicate they can only be used within the Fraction class. Next, the constructor for Fraction is defined, taking the two integers. The "this" keyword refers to the current instance of the class and is used to assign one of the inputs (num) to a member of the class; it is helpful for distinguishing constructor parameters from class members, since "this" always refers to the current instance. An "if" statement handles the exception thrown by a denominator of zero. You can always identify a constructor by its lack of a return type. A second, parameterless constructor is also provided; rather than duplicating any code, it chains to the main constructor with 0 and 1 supplied as arguments (via ": this(0, 1)"), so a default Fraction is 0/1.

The Reduce method reduces the fraction to its canonical (simplest) form. It is important to note that the method is private, which means it cannot be called from outside the class "Fraction". The greatest common divisor is initialized, and a for loop cycles through every candidate value up to "denomenator" (which is accessible here because the method is inside the class). Trial division is used to check whether the canonical form has been reached: dividing by the loop index and checking for a remainder of zero in both the numerator and denominator shows whether more division can be done. When both conditions are true, the loop index is the greatest common divisor found so far.
The next step is to divide the numerator and denominator through by this value. For example, if the numerator is 3 and the denominator is 6, then by the time the loop counter reaches three, both remainder checks return true and the gcd is set to 3; both values are then divided by 3, reducing the fraction to 1/2.

The next step is to define the properties. Properties allow private variables to be exposed publicly in a controlled way, which is useful when data must be protected from arbitrary access but sometimes needs to be exposed. This is accomplished using "getters" and "setters". The "value" keyword is automatically available inside a "setter" and holds the value being assigned to the private variable. Basically, numerator and denomenator are private fields that can only be changed within the class. Encapsulation refers to controlling the scope of members within classes or structs, and properties provide a flexible way to manage the accessibility of these members.
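The brute-force reduction just described translates directly (a Python sketch; the loop mirrors the post's Reduce method, including the trial-division search for the greatest common divisor):

```python
def reduce(numerator, denominator):
    """Divide out the greatest common divisor found by trial division."""
    gcd = 1
    for i in range(1, abs(denominator) + 1):
        if numerator % i == 0 and denominator % i == 0:
            gcd = i  # the last divisor of both is the GCD
    return numerator // gcd, denominator // gcd
```

reduce(3, 6) walks the loop up to 6, lands on gcd = 3, and returns (1, 2), matching the worked example.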

The last method converts integer fractions to the double data type; this functionality is marked by the "explicit" keyword, which requires the conversion to be written as a cast. The result is a returned "fractiondecimal" of type double. The following code blocks are collapsed using the "#region" directive; expanding each region reveals the code. Within the first block, the custom arithmetic operators are defined, two of which are shown below. Addition is slightly complicated because a common denominator must be found: two instances of the Fraction class are supplied to the operator method, their "numerator" and "denomenator" fields are read into variables, and a new object ("c") is instantiated from the Fraction class as the sum of "a" and "b". The multiplication operator is slightly simpler, because multiplication is straight across. Additional code changes the sign of both the numerator and denominator if the denominator is negative. The operators for division and subtraction employ similar logic.
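Operator overloading works the same way in Python via dunder methods; this sketch (my own minimal version, not the post's C# listing) shows the common-denominator addition, straight-across multiplication, and the sign normalization just mentioned:

```python
class Fraction:
    def __init__(self, num=0, den=1):
        if den == 0:
            raise ValueError("denominator cannot be zero")
        if den < 0:          # keep the sign on the numerator
            num, den = -num, -den
        self.num, self.den = num, den

    def __add__(self, other):
        # common denominator: a/b + c/d = (a*d + c*b) / (b*d)
        return Fraction(self.num * other.den + other.num * self.den,
                        self.den * other.den)

    def __mul__(self, other):
        # straight-across multiplication
        return Fraction(self.num * other.num, self.den * other.den)

    def __str__(self):
        return f"{self.num}/{self.den}"

total = Fraction(1, 2) + Fraction(1, 3)
```

Note this sketch omits the Reduce step, so products like 1/2 × 2/3 come back as 2/6 rather than 1/3.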

The comparison operators are defined using the same common-denominator technique; the only difference between the operator methods is the symbol used in the "if" statement. Six methods are provided (<, >, ==, !=, >= and <=). The last bit of code is pictured below. The "ToString" method is inherited by every class and can therefore be overridden, which allows the flexibility to define it however you want; in this case, we want a fraction to be printed. The "as" keyword converts between nullable or reference types and returns null if the conversion is not possible. When the conversion from obj to Fraction succeeds, the numerator and denominator are set and the fraction is returned.

# Refractive Index as a Function of Wavelength

Previously, we discussed how the wavelength and velocity in an optical system depend on the refractive index. What we did not explain is that the refractive index itself more often depends on the incident wavelength. After all, it is easier to change the wavelength of a light wave than to change the material it propagates through. So in fact, the refractive index varies with the wavelength of the incident wave; in a system that is not monochromatic, multiple frequencies are present at once. From ray optics (geometrical optics) we know that the refractive index determines how a ray travels through an optical system. The dependence of refractive index on wavelength implies that an optical system made of a single material will produce a different transmission angle (or perhaps a completely different result) for two rays of different wavelengths.

Consider the range of refractive indices for several different media as the wavelength (and therefore color, i.e. frequency) changes. The differences in refractive index for these materials at different wavelengths may seem small; however, the difference is enough that rays of different wavelengths will interact slightly differently with optical systems.

Now, what if a ray contained more than one wavelength? Or a blend of all colors? This case is called white light. Since white light contains a sum of many wavelengths and frequencies, each component of white light behaves according to its own refractive index.

The classic example of this is, of course, the prism.

# Refractive Index, Speed of Light, Wavelength and Frequency

The relationship between the speed of light in a medium and the refractive index is:

v = c/n

Therefore, for a medium of higher refractive index, the speed of light in that medium is slower. Light will not achieve a speed higher than c, about 3.00 x 10^8 m/s; when light travels at this speed, the refractive index of the medium is 1.00.

Now, what about the wavelength? Interestingly, one might assume that the wavelength is the determining factor for color. In fact, this is not the case: frequency is what defines the color of light, which can range from the invisible infrared through the visible band to the invisible ultraviolet. In a monochromatic system, the frequency of light (and therefore its color) stays the same, while the velocity and wavelength change with the refractive index.

We might believe that wavelength and frequency are forever tied together, but that view is incomplete once we consider that light can travel at more than one speed. The familiar formula for wavelength is:

λ = c/f

Now, here is the question: does c in this equation correspond to the speed of light in a vacuum, or to the speed of the travelling light wave? The vacuum speed says little about the speed of light in water, which is why we instead write:

λ = v/f

Note that the wavelength is now a function of the speed of light in the medium. Taking this to its conclusion, the wavelength is not exclusively dependent on frequency: multiple wavelengths may exist for one frequency, with the refractive index as the determining factor when frequency is held constant. Given the wavelength, frequency and refractive index, the speed of the light wave may also be calculated. Physically, one may picture the frequency as the rate at which wave peaks pass a fixed point: a longer-wavelength wave must move faster to maintain the same frequency.
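These relations can be put together in a few lines (Python; the sodium-line-in-water example values are my own illustration):

```python
C0 = 299792458.0  # speed of light in vacuum, m/s

def in_medium(vacuum_wavelength, n):
    """Speed and wavelength inside a medium of index n.
    Frequency is fixed by the source and does not change."""
    frequency = C0 / vacuum_wavelength
    v = C0 / n                  # slower in a denser medium
    return v, v / frequency     # wavelength shrinks by the factor n

v_water, lam_water = in_medium(589e-9, 1.33)  # yellow light entering water
```

The frequency is untouched, while both the speed and the wavelength drop by the factor n.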

The applications and implications of this physical relationship will be explored next.

# Yagi-Uda Antenna/Parasitic Array

The Yagi-Uda antenna is a highly directional antenna which operates above 10 MHz and is commonly used in satellite communications, as well as with amateur radio operators and as rooftop television antennas. The radiation pattern for the Yagi-Uda antenna shows strong gain in one particular direction, along with undesirable side lobes and a back lobe. The Yagi is similar to the log periodic antenna with a major distinction between the two being that the Yagi is designed for only one frequency, whereas the log periodic is wideband. The Yagi is much more directional, so it provides a higher gain in that one particular direction that it is designed for.

The "Yagi" antenna has two types of elements: the driven element and the parasitic elements. The driven element is the antenna element that is directly connected to the AC source in the transmitter or receiver. A reflector element (parasitic) is placed behind the driven element in order to reduce the undesirable back lobe, splitting it into two smaller lobes. By adding directive parasitic elements (directors) in front of the driven element, the radiation pattern becomes stronger and more directional. All of these elements are parallel to each other and are usually half-wave dipoles. The parasitic elements work by absorbing and reradiating the signal from the driven element. The reflector is slightly longer (inductive) than the driven element and the director elements are slightly shorter (capacitive).

It is well known from transmission line theory that a low-impedance/short-circuit load reflects all power with a 180-degree phase shift (reflection coefficient of -1). From this, a parasitic element can be modeled as a normal dipole with a short circuit at its feed point: since it reradiates power 180 degrees out of phase, the superposition of its wave with the wave from the transmitter yields a complete cancellation of voltage at the element (a short circuit). Due to the inductive effect of the reflector element and the capacitive effect of the directors, additional phase shifts arise from lagging or leading currents ("ELI the ICE man"). This cleverly makes the superposition of the waves constructive in the forward direction and destructive in the backward direction, increasing directivity in the forward direction.

Advantages of the Yagi include high directivity, low cost and a high front-to-back ratio. Disadvantages include the growth in physical size required to increase gain, as well as a practical gain limitation of around 20 dB.

# III-V Semiconductor Materials & Compounds

In contrast with an elemental semiconductor such as Silicon, III-V semiconductor compounds do not occur in nature and are instead combinations of materials from groups III and V of the periodic table. Silicon, although proven as a functional semiconductor for electronic applications at lower frequencies, is unable to perform a number of roles that III-V semiconductors can. This is in large part due to the indirect bandgap of Silicon. Many III-V semiconductor materials, across a range of compositions, are direct-bandgap semiconductors, which allows for operation at much higher speeds and for efficient light emission; indirect-bandgap materials cannot efficiently produce light.

Ternary and Quaternary III-V

The following list introduces the main III-V semiconductor material compounds used today. In a follow-up discussion, ternary and quaternary III-V semiconductors will be discussed in greater depth. To begin, however, these may be understood as the result of mixing, varying or transitioning between two or more material types. For instance, a transition between GaAs and GaP is described as GaAsxP1-x. This is the compound GaAsP, a blend of both GaAs and GaP; at one end of the composition range (x = 1) it is equal to GaAs, and at the other end (x = 0) it is equal to GaP.
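As a sketch of this mixing idea, the bandgap of GaAsxP1-x can be estimated by linear interpolation between the endpoint bandgaps, in the spirit of Vegard's law. The endpoint values below are room-temperature textbook figures, and real alloys deviate from the linear estimate via a "bowing" term (and GaAsP becomes indirect at high GaP content), both of which this sketch ignores:

```python
# Linear (Vegard's-law-style) bandgap interpolation for the alloy GaAs_x P_(1-x).
# Endpoint bandgaps are room-temperature textbook values; real alloys deviate
# from linearity via a "bowing" parameter, ignored in this sketch.
EG_GAAS = 1.42  # eV, direct gap of GaAs
EG_GAP = 2.26   # eV, indirect gap of GaP

def gaasp_bandgap(x):
    """Estimated bandgap (eV) of GaAs_x P_(1-x), for 0 <= x <= 1.

    x = 1 recovers GaAs and x = 0 recovers GaP, matching the endpoints
    of the composition range described in the text.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("composition fraction x must be in [0, 1]")
    return x * EG_GAAS + (1.0 - x) * EG_GAP

for x in (0.0, 0.6, 1.0):
    print(f"x = {x:.1f}: Eg ~ {gaasp_bandgap(x):.2f} eV")
```

Varying x therefore tunes the emission energy between the two endpoint materials, which is exactly how GaAsP reaches the red-to-yellow LED range mentioned below.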

GaAs
GaAs was the first III-V material to play a major role in photonics; the first LED was fabricated using this material in 1961. GaAs is frequently used in microwave-frequency devices and monolithic microwave integrated circuits, as well as in a number of optical and optoelectronic near-infrared devices. The bandgap wavelength is λg = 0.873 μm.
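The bandgap wavelength quoted here follows from λg = hc/Eg, often remembered as λg(μm) ≈ 1.24/Eg(eV). Using the room-temperature GaAs gap of about 1.42 eV reproduces the 0.873 μm figure:

```python
# Convert a semiconductor bandgap energy to its bandgap wavelength:
# lambda_g = h * c / E_g, i.e. lambda_g(um) ~ 1.23984 / E_g(eV).
H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458.0     # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def bandgap_wavelength_um(eg_ev):
    """Bandgap wavelength in micrometres for a gap energy given in eV."""
    return H * C / (eg_ev * EV) * 1e6

print(f"GaAs (1.42 eV): {bandgap_wavelength_um(1.42):.3f} um")  # -> 0.873 um
```

The same conversion links the other bandgap energies in this list to their operating bands, e.g. the ~0.75 eV gap of InGaAs falls near 1.65 μm, in the long-wavelength telecom region.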

GaSb
Not long after GaAs was used, other III-V semiconductor materials were grown, such as GaSb. The bandgap wavelength of GaSb is λg = 1.70 μm, making it useful for operation in the infrared band. GaSb can be used for infrared detectors, LEDs, lasers and transistors.

InP
Similar to GaAs, Indium Phosphide is used in high-frequency electronics, photonic integrated circuits and optoelectronics. InP is widely used in the optical telecommunications industry for wavelength-division multiplexing applications. It is also used in photovoltaics.

GaAsP
An alloy of GaAs and GaP, Gallium Arsenide Phosphide is used for the manufacture of red, orange and yellow LEDs.

InGaAs
Indium Gallium Arsenide is used in high-speed, high-sensitivity photodetectors and sees common use in optical fiber telecommunications. InGaAs is an alloy often written as GaxIn1-xAs when defining compositions. The bandgap energy is approximately 0.75 eV, which is convenient for detection and transmission at longer optical wavelengths.

InGaAsP
Indium Gallium Arsenide Phosphide is commonly used to create quantum wells, waveguides and other photonic structures. InGaAsP can be lattice-matched well to InP, which is the most common substrate material for photonic integrated circuits.

InGaAsSb
Indium Gallium Arsenide Antimonide has a narrow bandgap (0.5 eV to 0.6 eV), making it useful for the absorption of longer wavelengths. InGaAsSb faces a number of difficulties in manufacture and can be expensive to make, although when these difficulties are avoided, devices (such as photovoltaics) that use it may achieve high quantum efficiency (~90%).

AlGaAs
Aluminum Gallium Arsenide has nearly the same lattice constant as GaAs, but a larger bandgap, between 1.42 eV and 2.16 eV. AlGaAs may be used as the barrier region of a quantum well with GaAs as the inner well material.

AlInGaP
AlInGaP sees wide use in the construction of diode lasers and high-brightness LEDs in the red, orange and yellow range.

GaN
GaN has a wide bandgap of 3.4 eV and sees use in high-frequency, high-power devices and optoelectronics. GaN transistors operate at higher voltages than GaAs microwave transistors, and GaN is a candidate for THz devices.

InGaN
InxGa1−xN is another ternary III-V semiconductor that can be tuned for use in optoelectronics from the ultraviolet (see GaN) to infrared (see InN) wavelengths.

AlGaN
AlxGa1−xN is another compound that sees use in LEDs for blue to ultraviolet wavelengths.

AlInGaN
Although AlInGaN is not used much independently, it sees wide use in lattice-matching the compounds GaN and AlGaN.

InSb
Indium Antimonide is an interesting compound: it has a very narrow bandgap of 0.17 eV and the highest electron mobility of any known semiconductor. InSb can be used in quantum wells, in bipolar transistors operating up to 85 GHz, and in field-effect transistors operating at higher frequencies. It can also be used as a terahertz radiation source.

# HFSS – Simulation of a Square Pillar

The following is an EM simulation of the backscatter of a gold square pillar. This is by no means a professional achievement, but rather provides a basic introduction to the HFSS program. The model is generated using the “Draw -> Box” command and is placed a distance away from the origin, where the excitation is located, shown below. The excitation is of spherical vector form in order to generate a monostatic plot. The basic structure is a square model (10 mm in all three coordinates) surrounded by an airbox. The airbox is bounded by PML (perfectly matched layer) radiation boundaries to emulate a reflection-free region, which is necessary to simulate radiating structures in an unbounded, infinite domain; the PML absorbs all electromagnetic waves that interact with the boundary. The following image is the plot of the monostatic RCS vs. the incident-wave elevation angle. The subsequent figure was generated using a “bistatic” configuration and is plotted against the elevation angle.
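Simulated RCS values are often sanity-checked against simple closed-form results. For a flat square plate of area A at normal incidence, the physical-optics estimate is σ = 4πA²/λ², valid when the plate is large compared to the wavelength. The 10 mm side and 10 GHz frequency below are illustrative values only; a proper comparison with the cube simulated above would need more care:

```python
import math

def flat_plate_rcs(side_m, freq_hz):
    """Physical-optics RCS (m^2) of a square flat plate at normal incidence.

    sigma = 4 * pi * A^2 / lambda^2 -- valid when the plate is large
    compared to the wavelength; used here only as a rough sanity check
    against simulated monostatic backscatter.
    """
    wavelength = 299_792_458.0 / freq_hz
    area = side_m ** 2
    return 4 * math.pi * area ** 2 / wavelength ** 2

def to_dbsm(sigma_m2):
    """Convert an RCS in square metres to dBsm (dB relative to 1 m^2)."""
    return 10 * math.log10(sigma_m2)

# Illustrative check: a 10 mm plate at 10 GHz (lambda ~ 30 mm). Note the
# plate is *smaller* than the wavelength here, so physical optics is only
# a crude estimate -- exactly why full-wave tools like HFSS are used.
sigma = flat_plate_rcs(0.010, 10e9)
print(f"sigma ~ {sigma:.2e} m^2 ({to_dbsm(sigma):.1f} dBsm)")
```

When the analytical estimate and the simulated peak disagree wildly, it usually points to a setup problem (airbox too small, PML too close, or a coarse mesh) rather than physics.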