Object Oriented Programming and C#: Shallow vs Deep Copying

The following will be a brief but important post illustrating the difference between reference and value types. In C#, value types include integers, floats, doubles, enumerations and structs. A value type holds the data assigned to it in its own memory allocation, whereas a reference type only holds an address which points to the actual data. Reference types include things like dynamic arrays (lists), static arrays, class instances and strings. It is important to know the difference because when code such as the code below is executed, it can have some confusing effects.

shallow copy

The image above illustrates what is known as a “shallow copy”. Because a variable of a class type does not store the actual data but rather a reference (pointer) to it, copying the object to a second object copies the memory address instead of the data contained within “obj”. Therefore, any changes made to “obj2” will also affect “obj” because they point to the same data. The following image shows the difference between deep and shallow copies.

diff

To do a deep copy of an array, for instance, every element of that array must be copied to the new array. This can be done with the “Array.Copy(sourceArray, destinationArray, length)” method. As shown in the image, this creates two references and two separate copies of the data, instead of two references pointing to the same data. Shallow copying only copies a memory pointer.
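As a minimal sketch of the difference (the variable and class names here are only for illustration), the following C# snippet contrasts a shallow copy made by assignment with a deep, element-by-element copy made with Array.Copy:

using System;

class CopyDemo
{
    static void Main()
    {
        int[] source = { 1, 2, 3 };

        // Shallow copy: both variables now refer to the same array in memory.
        int[] shallow = source;
        shallow[0] = 99;
        Console.WriteLine(source[0]); // prints 99, because "source" was affected

        // Deep copy: a new array is allocated and every element is copied over.
        int[] deep = new int[source.Length];
        Array.Copy(source, deep, source.Length);
        deep[1] = 42;
        Console.WriteLine(source[1]); // prints 2, because "source" is unchanged
    }
}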

Power Factor and the Power Triangle

Power factor is a very important concept for commercial and industrial applications, which require a higher current draw to operate than domestic buildings. For a passive load (containing only resistance, inductance or capacitance and no active components), the power factor ranges from 0 to 1. Power factor is only negative with active loads. Before delving into power factor, it is important to discuss the different types of power. The type of power most are familiar with is measured in Watts. This is called active or useful power, as it represents the actual energy per unit time dissipated or “used” by the load in question. Another type of power is reactive power, which is caused by inductance or capacitance and leads to a phase shift between voltage and current. To demonstrate how a lagging power factor causes “wasted” power, it is helpful to look at some waveforms. For a purely resistive load, the voltage and current are in phase, so no power is wasted (the instantaneous power p = vi is never negative, meaning energy always flows toward the load).

eli

The above image captures the concept of leading and lagging power factor (leading and lagging are always in reference to the current waveform). For a purely inductive load, the current will lag because the inductor creates a “back EMF” or inertial voltage to oppose changes in current; the current in the inductor therefore builds up only after the voltage has been applied. It can also be seen that this EMF is proportional to the rate of change of the current, so when the current is zero the voltage is at its maximum. For a capacitive load, the power factor is leading: a capacitor must charge up with current before a voltage is established across the plates. This explains the PF “leading” or “lagging” terminology. Most of the time, when the power factor is poor it is because the PF is lagging due to induction motors. To account for this, capacitors are used as part of power factor correction.
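For reference, the standard element equations make these 90 degree phase shifts explicit (general relations, not specific to the figure above):

v_L = L\,\frac{di}{dt} \quad\Rightarrow\quad \text{current lags voltage by } 90^\circ \text{ in a pure inductor}

i_C = C\,\frac{dv}{dt} \quad\Rightarrow\quad \text{current leads voltage by } 90^\circ \text{ in a pure capacitor}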

The third type of power is apparent power, which is the complex combination of real and reactive power.

triangle

The power factor is the cosine of the angle made in this triangle. Therefore, as the PF angle increases, the power factor decreases. The power factor is at its maximum when the reactive power is zero. Ideally, the PF would be between 0.95 and 1, but for many industrial buildings it can fall to even 0.7. This leads to higher electric bills for these buildings, because a lower power factor means increased current in the power lines feeding the building, which causes higher losses in the lines. It also leads to voltage drops and wasted energy. To conserve energy, power factor correction must be employed. Often capacitors are used in conjunction with contactors that are controlled by regulators that measure power factor. When necessary, the contactors switch on and allow the capacitors to improve the power factor.
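As a worked example with assumed numbers (not taken from the figure): suppose a plant draws P = 70 kW of real power at a power factor of 0.7. Then

S = \frac{P}{\cos\theta} = \frac{70\ \text{kW}}{0.7} = 100\ \text{kVA}, \qquad Q = \sqrt{S^2 - P^2} \approx 71.4\ \text{kVAR}.

Correcting to a power factor of 0.95 requires only Q_{new} = P\,\tan(\cos^{-1}0.95) \approx 23\ \text{kVAR}, so the correction capacitors must supply roughly 71.4 - 23 \approx 48\ \text{kVAR}.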

For linear loads, the power factor is called the displacement power factor, as it only accounts for the phase difference between the voltage and current. For nonlinear loads, harmonics are added to the output. This is because nonlinear loads cause distortion, which changes the shape of the output sinusoids. Nonlinear loads and power factor will be explored in a subsequent post.

Photovoltaic Effect and Theory of Solar Cells

Just as plants receive energy from the sun and use it to produce glucose, a photovoltaic cell receives energy from the sun and generates an electrical current. The working principle is based on the PN junction, which will be revisited here.

The allowed electron energies in silicon are subdivided into several discrete ranges called “bands”. The major bands of concern are the valence and conduction bands; the lower bands are fully occupied and do not take part in conduction.

siliconenergy

For silicon, the bandgap energy is 1.1 eV. For an intrinsic semiconductor, the Fermi level sits directly between the conduction and valence bands. This is because there is an equal number of holes in the valence band and electrons in the conduction band, which means the probability of occupation of energy levels in both bands is equal. The Fermi level rises toward the conduction band in an n-type semiconductor (doped with phosphorus) and falls toward the valence band in a p-type semiconductor (doped with boron).
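To put the 1.1 eV bandgap in perspective, the longest wavelength a silicon cell can absorb follows from the photon energy relation (a standard calculation, not from the figure):

\lambda_{max} = \frac{hc}{E_g} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{1.1\ \text{eV}} \approx 1130\ \text{nm},

so silicon absorbs visible light and some near-infrared, while longer-wavelength photons pass through without creating electron-hole pairs.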

The following illustrates an energy band diagram for a semiconductor with no bias across it. Photodiodes (light sensors) operate in this manner.

intrinsicenergy

The Fermi energy is shown to be constant. On the far right-hand side, away from the depletion region, the junction appears to be only P-type (hence the low Fermi level with respect to the conduction band). Likewise, on the left (the N-type side) the Fermi level sits high, close to the conduction band. The slope of the bands in the diagram is proportional to the electric field. A strong electric field in the depletion region makes it harder for holes and electrons to move away from the region. When a forward bias is applied, the barrier decreases and current begins to flow (assuming the applied voltage is higher than the turn-on voltage of about 0.7 V). Current flows whenever recombination occurs: every time an electron recombines on the P side, an electron is pushed out of the N side and begins to flow in an external circuit, since the device wants to stay in equilibrium and balance out. This is why solar cells (as opposed to photodiodes) are designed to operate in a forward-bias mode.

Sunlight delivers solar energy in the ultraviolet, visible and infrared bands. In order to harness this energy, silicon is employed (refined from sand using carbon). Silicon wafers are used in solar cells. The top layer of the silicon is a very thin layer doped with phosphorus (n-type); the bottom layer is doped p-type with boron. This forms the familiar PN junction. The top layer carries thin metal strips and the bottom is conductive as well (usually aluminum). Only frequencies around the visible light spectrum are absorbed into the middle region of the solar cell. The photon energy from the sun knocks electrons loose in the depletion region, which causes a current to flow. The output power of a single solar cell is only a few watts. To increase power, solar cells are wired in series and parallel to increase the voltage and current. Because the output of the solar cells is DC, it is run through an inverter, a high-power oscillator circuit that converts the DC into a 240 V AC supply compatible with household appliances.

solar_16x9_2
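As a rough illustration of the series/parallel scaling (the cell voltage here is a typical assumed value, not from the post): a single crystalline silicon cell operates at roughly 0.5 V, so a string of cells gives

V_{string} \approx N_{series}\,V_{cell} \approx 36 \times 0.5\ \text{V} \approx 18\ \text{V},

and wiring several such strings in parallel multiplies the available current, so the panel power P = V \times I scales with the total number of cells.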

Ray Tracing Examples (1) Curved Mirrors

The following ray tracing examples all utilize Fermat’s principle in examining rays incident on a mirror.

Example 1. Draw a ray trace for a ray angled at a convex mirror.

The ray makes a 40 degree angle with the normal of the mirror at the point of incidence. In accordance with the law of reflection (Fermat’s Principle), the ray will exit at 40 degrees on the other side of the normal.

op12

 

The above example shows a single ray at an angle. Often, rays are drawn together as a group of parallel rays. This example shows how an incident set of parallel rays will no longer be parallel when reflected by a non-uniform (not flat) mirror surface.

op6

 

This example brings up an important concept that arises especially with concave mirrors. The two rays drawn appear to be directed towards the same point, known as the focal point. A single focal point, however, only holds for rays at small angles. The third ray at the bottom makes a 55 degree incident angle with the normal of the surface. The reflected ray is also 55 degrees from the normal but directed to the other side of it, and it does not converge at the focal point as the others do. This effect is known as an aberration and may be discussed further at length in a later article.

op7
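For a spherical concave mirror (an assumption, since the post does not state the mirror profile), the small-angle behavior can be summarized by the standard paraxial relation between focal length and radius of curvature:

f \approx \frac{R}{2},

which holds only for rays close to the axis; rays striking the mirror at large angles, like the 55 degree ray above, cross the axis closer to the mirror, producing the aberration described.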

 

This example makes use of the above concept of the focal point. An object placed at the focal point will not form an image at the focal point (its rays reflect off the mirror as a parallel beam). This is useful if, for instance, some type of lens or collector should be placed at the focus of the mirror: it can be done without worrying that it will disturb the image formed at the focal point by the reflected rays.

op8

Principles of Ray Tracing (1)

In geometrical optics, light is treated as rays, typically drawn as lines that propagate from one point to another. Ray tracing is a method of determining how a ray will react to a surface or mirror. Rays always propagate in a straight line; however, when a ray strikes an angled surface, rebounds from one, or passes into a different medium, a few techniques are needed to reliably determine its direction and path. The following properties are the basis for ray tracing.

Refractive Index

The refractive index is a property intrinsic to a medium that describes how fast or slow light propagates in that medium. The speed of light in a vacuum is 3*10^8 m/s, and light only travels more slowly in real media. The refractive index is the speed of light in a vacuum, c, divided by the velocity of light in the medium.

op2

The refractive index of air is approximately 1. The refractive index of glass, for instance, is about 1.5. This has implications for how light propagates when passing from one medium to another.

refractivein1
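As a quick worked example using the definition above: in glass with a refractive index of 1.5, the speed of light drops to

v = \frac{c}{n} = \frac{3\times 10^{8}\ \text{m/s}}{1.5} = 2\times 10^{8}\ \text{m/s}.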

 

 

Snell’s Law

Snell’s law uses the angle of incidence (incoming ray), the angle of refraction (exiting ray) and the refractive indexes of each medium at a boundary to determine the path of propagation. Consider the example below:

op1

Snell’s Law: n1·sin(θ1) = n2·sin(θ2)

The angle of incidence and the angle of refraction are both with respect to the normal of the surface!
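For a concrete example (assumed numbers): a ray traveling in air (n1 = 1) strikes a glass surface (n2 = 1.5) at 30 degrees from the normal. Snell’s law gives

\sin\theta_2 = \frac{n_1}{n_2}\sin\theta_1 = \frac{1}{1.5}\sin 30^\circ \approx 0.333, \qquad \theta_2 \approx 19.5^\circ,

so the ray bends toward the normal as it enters the denser medium.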

 

Fermat’s Principle

Fermat’s Principle is also demonstrated in the above figure. Fermat’s Principle states that light travels the path that takes the least time; for reflection, this implies that the angle of incidence of a ray is equal to the angle of reflection, with the reflected ray exiting on the other side of the normal of the surface.

 

Using these principles alone, many optical instruments and technologies can be designed and built that manipulate the direction of light rays.

RFID – Radio Frequency Identification

RFID is an important concept in the modern era. The basic principle of operation is simple: radio waves are sent out from an RF reader to an RFID tag in order to track or identify an object, whether it is a supermarket item, a car, or an Alzheimer’s patient.

RFID tags are subdivided into three main categories: active, passive and semi-passive. Active RFID tags employ a battery to power them, whereas passive tags utilize the incoming radio wave as a power source. The semi-passive tag also employs a battery, but still relies on the RFID reader’s signal for the return signal. For this reason, the active and semi-passive tags have a greater range than the passive type. The passive types are more compact and also cheaper, and for this reason are more common than the other two types. The RFID tag picks up the incoming radio waves with an antenna, which then directs the electrical signal to a transponder. Transponders receive RF/microwaves and transmit a signal of a different frequency. After the transponder is the rectifier circuit, which converts the received RF into a DC current that charges a capacitor, which (for the passive tag) is used to power the device.

The RFID reader consists of a microcontroller, an RF signal generator and a receiver. Both the transmitter and receiver have antennas, which convert radio waves to electrical currents and vice versa.

The following table shows frequencies and ranges for the various bands used in RFID.

RFIDtable

As expected, lower frequencies travel further distances. The lower frequencies tend to be used for the passive type of RFID tags.

For LF and HF tags, the working principle is inductive coupling whereas with the UHF and Microwave, the principle is electromagnetic coupling. The following image shows inductive coupling.

inductive coupling

A transformer is formed between the two coils of the reader and tag. The transformer links the two circuits together through electromagnetic induction. This is also known as near field coupling.

Far field coupling/radiative coupling uses backscatter by reradiating from the tag to the reader. This depends on the load matching, so changing the load impedance will change the intensity of the return wave. The load condition can be changed according to the data in order for the data to be sent back to the reader. This is known as backscatter modulation.

Automotive Electrical System

In the early days of automobiles, electricity was not utilized within these machines. Car lights were powered by gas, and engines were started by a hand crank rather than with the help of a chemical battery.

The three major components of a car’s electrical system are the battery (12 VDC), the alternator and the starter. The battery is the backbone of the car’s electrical system and its main source of electrical current. The electrical system can be split into two main parts. The main feed goes from the battery’s positive terminal to the starter motor. This cable, attached directly to the battery, is capable of carrying up to 400 amperes of current; this is the high-current part of the circuit. The other part of the electrical system runs from the ignition switch and carries a lower current. When the ignition switch is turned all the way to the “engine start” position, the starter motor is powered, which begins the engine start process. What actually happens is that the starter solenoid is engaged: when a small current is received from the ignition switch, the solenoid closes a pair of contacts and sends a large current to the starter. The starter needs a huge amount of current to spin the engine, something most humans cannot physically do.

The starter motor rotates the flywheel, which turns the crankshaft of the engine. This allows the engine’s pistons to move and begin the process of internal combustion. Fuel is injected into the cylinders and, combined with air and a spark, creates the explosions which drive the engine.

The alternator uses the principle of electromagnetic induction to supply energy to the battery and other electrical components. It is important to note that although the alternator produces AC (as is always the case for induction), the output is rectified, much as in a dynamo, so that the output is DC. The alternator is driven by a serpentine belt which causes the rotor to rotate and, in the presence of a stator, induces a current. The stator is made of tightly wound copper and the rotor carries a set of magnetic poles, which produces the familiar Faraday induction effect. Diodes are used to rectify the output and also to direct current from the alternator to the battery to charge it.

alternator

Using GIT – Introduction

Git is essentially a version control system for tracking changes in computer files. It can be used in conjunction with Visual Studio to program in C#, for example, and can be accessed through commands typed into the command prompt in Windows. Git is generally used to coordinate changes to code between multiple developers, and is also used to work in a local repository which is then “pushed” to a remote repository such as GitHub.com.

Git tracks changes to files by taking snapshots. This is done by the user by typing “git commit ….” in the command prompt. The files should be added to the staging area first by using the command “git add <filename>”. “Git push” and “git pull” are used to interact with the remote repository. “Git clone” will copy and download a repository to your local machine. Git saves every version that gets committed, so a previous version can always be accessed if necessary. The following image illustrates the concept of committing.

gitcommit
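To make the workflow concrete, a minimal example session might look like the following (the repository URL and file name are hypothetical):

git clone https://github.com/user/example-repo.git   # download a copy of the remote repository
cd example-repo
git add Program.cs                    # stage a changed file
git commit -m "Describe the change"   # take a snapshot of the staged files
git pull                              # fetch and merge any new commits from the remote
git push                              # upload local commits to the remote repository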

You can essentially “branch” your commits, which can later be merged back together using the “git merge” command (the resulting merge commit has multiple parents). The master branch is the main, linear list of saves. Branching can be done in the remote repository or the local one. A “pull request” essentially means taking changes that were made in a certain branch and pulling them into another branch. This means multiple people can be editing multiple branches which can then be merged together.

Git is extremely useful for collaboration (much like websites such as Google Docs), where multiple authors can work on something at the same time. It is also excellent for keeping track of the history of projects.

Mobility and Saturation Velocity in Semiconductors

In solid state physics, mobility describes how quickly a charge carrier can move within a semiconductor device in the presence of a force (an electric field). When an electric field is applied, the carriers begin to move at a certain drift velocity, given by the mobility of the carrier (electron or hole) multiplied by the electric field. The equation can be written as:

density

This is also related to Ohm’s law in point form, which states that the current density is the conductivity multiplied by the electric field. This shows that the conductivity of a material is related to the number of charge carriers as well as their mobility within the material. Mobility is heavily dependent on doping, which introduces defects into the material. This means that intrinsic semiconductor material (Si or Ge) has higher mobility, but this is something of a paradox because intrinsic semiconductor has very few charge carriers to take advantage of it. In addition, mobility is inversely proportional to the (effective) mass of the carrier, so heavier particles move at a slower rate.
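Writing out the standard relations referenced above (textbook definitions, not taken from the missing figure): the drift velocity and the point form of Ohm’s law are

v_d = \mu E, \qquad J = \sigma E, \qquad \sigma = q\,(n\,\mu_n + p\,\mu_p),

where n and p are the electron and hole concentrations and \mu_n, \mu_p their mobilities, which shows directly that conductivity depends on both the number of carriers and their mobility.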

Phonons also contribute to a loss of mobility through an effect known as “lattice scattering”. When the temperature of a semiconductor is raised above absolute zero, the atoms vibrate and create phonons. The higher the temperature, the more phonons are present, which means more collisions and lower mobility.

Saturation velocity refers to the maximum velocity a charge carrier can reach within a semiconductor in the presence of a strong electric field. As previously stated, the drift velocity is proportional to the mobility and the field, but as the electric field increases a point is reached where the velocity saturates. Beyond this point, increasing the field only leads to more collisions with the lattice and with phonons, which does not increase the drift speed. Different semiconductor materials have different saturation velocities, and these are strong functions of impurity concentration.

Transistor IV curves and Modes of Operation/Biasing

In the field of electronics, the most important active device is without a doubt the transistor. A transistor acts as an ON/OFF switch or as an amplifier. It is important to understand the modes of operation for these devices, both voltage controlled (FET) and current controlled (BJT).

For the MOSFET, the cutoff region is where no current flows through the inversion channel and the device functions as an open switch. In the “ohmic” or linear region, the drain-source current increases linearly with the drain-source voltage; here the FET acts as a closed switch or “ON” state. The “saturation” region is where the drain-source current stays roughly constant despite the drain-source voltage increasing. In this region the FET functions as an amplifier.

ivcurve

The image above illustrates that for an enhancement mode FET, the gate-source voltage must be higher than a certain threshold voltage for the device to conduct. Before that happens, there is no channel for charge to flow. From there, the device enters the linear region until the drain-source voltage is high enough to be in saturation.
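For reference, the standard long-channel (square-law) equations for these regions are as follows, where k_n is the device transconductance parameter and V_th the threshold voltage:

\text{Cutoff } (V_{GS} < V_{th}): \quad I_D \approx 0

\text{Triode } (V_{DS} < V_{GS} - V_{th}): \quad I_D = k_n\left[(V_{GS}-V_{th})V_{DS} - \tfrac{1}{2}V_{DS}^2\right]

\text{Saturation } (V_{DS} \ge V_{GS} - V_{th}): \quad I_D = \tfrac{k_n}{2}(V_{GS}-V_{th})^2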

DC biasing is an extremely important topic in electronics. For example, if a designer wishes for the transistor to operate as an amplifier, the FET must stay within the saturation region. To achieve this, a biasing circuit is implemented. Another condition which affects the operating point of the transistor is temperature, but this can be mitigated with a DC bias circuit as well (this is known as stabilization). The “stability factor” is a measure of how well the biasing circuit achieves this. Biasing a MOSFET sets its DC operating point, or Q point, and is usually implemented with a simple voltage divider circuit; this can be done with a single DC voltage supply. The following voltage transfer curve shows that the MOSFET amplifies best in the saturation region, with less distortion than in the triode/ohmic region.

output
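As a sketch of the voltage-divider bias mentioned above (the component names are generic, not from the post): with a supply V_DD, gate resistors R_1 and R_2, and a source resistor R_S, the gate and gate-source voltages are

V_G = V_{DD}\,\frac{R_2}{R_1 + R_2}, \qquad V_{GS} = V_G - I_D R_S,

and R_1, R_2 and R_S are chosen so that V_{GS} places the Q point in the saturation region.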

Quantum Wells in LEDs

Previously, the general functionality of a quantum well was discussed. Here, the function of quantum wells specifically within Light Emitting Diodes is discussed. In fact, LEDs often implement multiple quantum wells to increase their luminescence, or total light emission.

Quantum wells are formed when a semiconductor (or compound semiconductor) with a narrower bandgap between its conduction and valence bands is placed between two wider-bandgap semiconductors (such as GaN or AlN). The quantum well traps electrons at the conduction band so as to increase recombination. Holes from the valence band recombine with the conduction band electrons to emit photons, which gives the LED its distinct emission of light. The quantum well is the reason why the LED does not function strictly as a diode: if the electrons were not trapped, the current would simply flow normally, as in a regular diode. Although a greater number of quantum wells increases the luminescence of the LED, it can also lead to defects in the device.

LEDs generate different colors of light by using different semiconductor material and different amounts of doping. This changes the energy gaps and leads to a different wavelength of light being produced. Gallium is a common element used in these compound materials.
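The link between bandgap and color follows from the photon energy relation (a standard rule of thumb, not from the post):

\lambda \approx \frac{hc}{E_g} \approx \frac{1240\ \text{nm}\cdot\text{eV}}{E_g},

so, for example, a material with a gap of about 2.0 eV emits near 620 nm (red), while a wider gap of about 2.7 eV emits near 460 nm (blue).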

Power Amplifiers basics

Multistage amplification is used to increase the overall gain of an amplifier chain; the total gain is the product of each stage’s gain. For example, a microphone can be connected first to a small signal (voltage) amplifier and then to a power amplifier before being supplied to a speaker or some other load. The PA (large signal amplifier) is the final stage of the amplifier chain and is the most power hungry. The major features of these PAs are their efficiency (usually drain efficiency for FET architectures or collector efficiency for BJT amplifiers) and impedance matching to the load. The output power of a PA is typically in the tens of watts (small signal amplifiers generally output milliwatts, up to about 1 watt maximum).

“Small signal” transistors are used for small signal amplifiers whereas “power” transistors are used for PAs. Small signal transistors behave linearly whereas power transistors can suffer from nonlinear distortion.

PAs can be classified based on the operating point (Q point) location. Class A amplifiers have a Q point at the center of the active region. For Class B, the Q point is at the cutoff region. For Class AB, the Q point is between that of class A and class B. For Class C, it is below cutoff.

A major parameter of a PA is its efficiency. This is the ratio of AC output power delivered to the load to the DC input power, and it is generally expressed as a percentage.
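In symbols, and with the commonly quoted theoretical maxima for each class (standard textbook figures, not from the post):

\eta = \frac{P_{out(AC)}}{P_{in(DC)}} \times 100\%,

with ideal limits of 25% for a Class A amplifier with a resistive load (50% if transformer-coupled), about 78.5% (\pi/4) for Class B, between those for Class AB, and higher still for Class C at the cost of heavy distortion.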

Harmonic distortion of a PA involves the presence of harmonic multiples of the fundamental frequency at the output. A large input signal, which drives the transistor into its nonlinear region, can cause this type of distortion.

Object Oriented Programming and C#: Methods

Methods in C# are quite similar to functions in C. A function is something that takes input parameters and returns outputs. The major difference between methods, which belong to object oriented programming, and functions is that methods are associated with objects. Methods give the programmer a huge advantage in the sense that code becomes much easier to read and can be reused to avoid repetition. It is important to note that methods can only be contained within classes, and methods declared within other methods are not allowed (newer versions of C# do add “local functions”, but that is a separate feature). The method “Main” is included in every program.

The following is the syntax for a method:

method

The return type for “Main” is void since it does not return a value. The name of the method together with its list of parameters is called the “method signature”. The name of a method should contain either a verb or a noun plus a verb. An “access modifier” gives the compiler information about how the method can be used, just as with classes; examples are public, private, etc. A “static” method belongs to the class itself and can be called without the class being instantiated.
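A minimal sketch tying these pieces together (the class and method names here are hypothetical):

using System;

class Calculator
{
    // access modifier, "static" keyword, return type, name, parameter list
    public static int AddNumbers(int first, int second)
    {
        int sum = first + second;   // a local variable, visible only inside this method
        return sum;
    }

    static void Main()
    {
        int result = AddNumbers(2, 3);  // 2 and 3 are the arguments
        Console.WriteLine(result);      // prints 5
    }
}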

“Local variables” are variables defined within a method that can only be used within the method. The area of visibility for this variable is from where it is defined to the last bracket of the method body.

Methods can be called from the “main” method or from some other method. In fact, methods can call themselves (this is called recursion). You can also call a method BEFORE it is declared!

A method’s parameters are the inputs necessary to complete whatever task the method needs to achieve. Arrays can be used as parameters if necessary. When declaring a method with parameters, the values in the parentheses are called parameters, but when the method is called, the values actually used are called arguments.

Another important note is that when a variable is passed to a method, its value is copied into the method’s parameter for use within the method. If the method then overwrites (hardcodes) the parameter with another value, the method will use the hardcoded value, but the variable declared by the caller will not be affected. Here is an example:

printnum

main

This will print the number 5 (not 3). In the Main() method, however, numberArg is 3.
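Since the code screenshots are not shown here, the following is a sketch of what the example plausibly looks like, using the PrintNum and numberArg names referenced above (the exact original code is assumed):

using System;

class Program
{
    static void PrintNum(int number)
    {
        number = 5;                  // the parameter is overwritten (hardcoded) inside the method
        Console.WriteLine(number);   // prints 5
    }

    static void Main()
    {
        int numberArg = 3;
        PrintNum(numberArg);             // the value 3 is copied into the parameter
        Console.WriteLine(numberArg);    // prints 3; the caller's variable is unaffected
    }
}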

Hermitian Operators, Time-Shifting Wavefunction

It was mentioned in the previous article on Quantum Mechanics [link] that if the integral of a wavefunction over all space at one time is equal to one (meaning that it is normalized and that the probability of the particle existing somewhere is 100%), then the wavefunction remains normalized at a later time, t.

qm11

A function in the place of Ψ*Ψ is used as the probability density function, ρ(x,t). The function N(t) is the resulting total probability at a given time, given that the probability was found to be equal to 1 at an initial time t0. Shown below, it is proposed that for dN/dt to equal zero, the Hamiltonian must be a Hermitian operator.

qm12
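In symbols, the quantity being examined is the standard total probability,

N(t) = \int_{-\infty}^{\infty} \Psi^*(x,t)\,\Psi(x,t)\,dx,

and conservation of probability is the statement that dN/dt = 0 for all t.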

A Hermitian operator would satisfy the following:

herm

Hermiticity is defined in terms of the Hermitian conjugate (adjoint) of an operator. An operator is Hermitian if its Hermitian conjugate is equal to the operator itself. One may compare this relationship to that of a real number, whose complex conjugate is equal to itself.

herm2
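Written out, the standard Hermiticity condition for an operator Â acting on wavefunctions ψ and φ is

\int \psi^*\,(\hat{A}\,\varphi)\,dx = \int (\hat{A}\,\psi)^*\,\varphi\,dx, \qquad \text{i.e.} \quad \hat{A}^\dagger = \hat{A}.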

Returning to the calculation of dN/dt,

qm13

Ψ Wavefunction Describes Probability

Schroedinger’s first interpretation of the wavefunction was that Ψ described how a particle spreads out or dissipates: where the wavefunction Ψ was largest, more of the particle was present. Max Born disagreed, arguing that a particle does not disintegrate in this way, and took a different approach: he proposed that the wavefunction actually describes the probability of a particle inhabiting a region of space. Both Schroedinger and Einstein were initially opposed to the idea of a probabilistic interpretation of the Schroedinger equation. The probabilistic interpretation of Max Born, however, later became the consensus view of quantum mechanics.

The wavefunction Ψ therefore describes the probability of finding a particle at position x at time t, not the amount of the particle that exists there.

qm6

Since the Schroedinger equation is a function of both position and time, it can only be solved for one variable at a time. Solving for position is preferable, because if the wavefunction is known for all x at one time, this provides the information needed to determine the wavefunction at a later time.

qm7

Among the requirements on the wavefunction, it is also said that the wavefunction must be convergent: it must fall to zero as x approaches infinity rather than approaching a nonzero constant.

qm8

Recall that a wavefunction may be multiplied by a constant. It would appear that doing so would violate the above expression. The answer to this objection is that the above formula applies to a normalized wavefunction. It turns out that not all wavefunctions are normalizable; however, a wavefunction scaled by a constant remains normalizable. A wavefunction can be normalized whenever the integral is a finite number (less than infinity), using the following method:

qm10
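In symbols, the standard normalization procedure is: if

\int_{-\infty}^{\infty} |\Psi(x,t)|^2\,dx = A \quad \text{with } 0 < A < \infty,

then the normalized wavefunction is \Psi_{norm} = \Psi/\sqrt{A}, which satisfies \int |\Psi_{norm}|^2\,dx = 1.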