Monday, July 31, 2017

Day 18

This morning I went to hear Alexander Rasskazov's dissertation defense for his PhD. He talked about the evolution of supermassive black holes in stellar nuclei and the gravitational-wave background that such interactions produce. I found it incredibly interesting, and it tied in with the first defense I heard the other day; it was fascinating to see the connecting themes in such similar lines of research.

I spent most of today learning how to create a program to automate the process of creating these PSFs, and adding steps to reduce background noise, such as taking several (20-30) exposures at each exposure time and averaging them. After much trial and error, I believe I should have the code up and running by the end of tomorrow.

The electricians finally came by to look at our power distribution modules for the clean room, but unfortunately it looks like it will be a while before they are operational. Since we will need to hardwire the cables into the wall, we need to propose it as a project and get it sent up the chain of command to be approved, which could mean that it's not ready until September.

Sunday, July 30, 2017

Outline

Purpose:

  • Determine a wavelength-dependent Intrapixel Response Function (IPRF) describing the sensitivity of different areas within a single pixel. This will allow us to better analyze past and current data from the Kepler Space Telescope and determine the extent to which sensitivity depends on wavelength.
Background Info:
  • The original Kepler mission was launched into an Earth-trailing heliocentric orbit in March of 2009 with the goal of finding exoplanets around stars within a given area of the sky. The spacecraft would stay pointed at the exact same patch of space, continually observing the same set of stars and looking for the transits of exoplanets.
  • Transits can be detected by observing the dips in brightness of stars as their planets pass in front of them.
  • All kinds of information about these exoplanets can be gleaned from transit data: planet size, orbital period, etc.
  • Unfortunately, two of the spacecraft's four reaction wheels malfunctioned, so Kepler is no longer as stable and currently drifts about 3 milliarcseconds every ~6.5 hours.
  • As such, it now shifts its field of view along the ecliptic every set number of days.
  • The PRFs currently used in flight were produced only by mathematical modeling and derivation, so taking measured data will greatly improve the accuracy of analysis.
Method and Interpretation:
  • By scanning the area (27 microns by 27 microns) of an individual pixel on a copy of the cameras flying on the spacecraft with a tiny spot of light (approximately 2 microns) and taking thousands of measurements, we can determine the response of the pixel across its area (see the sketch after this list).
  • Using a monochromator, we can produce any wavelength light within the visible to near IR spectrum.
  • We can then map IPRFs for various wavelengths of light and determine the rate at which sensitivity drops off for different wavelengths.
  • The light is focused by a microscope lens, which brings it to a focal point about 20 mm away.
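To make the scan concrete, here's a minimal MATLAB sketch of the raster pattern; moveTo and readSignal are hypothetical stand-ins for the real stage and detector commands, and the 0.5-micron step is just an example:

    % Hypothetical sketch of the intrapixel scan; moveTo/readSignal are
    % placeholders, not the real ALIO or detector commands.
    step = 0.5;                          % assumed step size [microns]
    xs = 0:step:27;  ys = 0:step:27;     % grid covering one 27-micron pixel
    response = zeros(numel(ys), numel(xs));
    for i = 1:numel(ys)
        for j = 1:numel(xs)
            moveTo(xs(j), ys(i));              % position the ~2-micron spot
            response(i, j) = readSignal();     % record the pixel's output
        end
    end
    imagesc(xs, ys, response); axis image; colorbar;   % map of the IPRF

    % Local stubs so the sketch runs as a saved script (R2016b or later):
    function moveTo(x, y)
        % placeholder: replace with the real stage-move command
    end
    function s = readSignal()
        s = rand;   % dummy value; replace with a real camera/meter read
    end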
Results - TBD

Conclusions - TBD

Next Steps - cont. research

Friday, July 28, 2017

Day 17

Today I wrote the rest of the code for our Point-Spread Function. I created logical arrays marking the points whose signal-to-noise values fell between 100 and 220. A logical array consists of ones and zeroes, with a one representing a value within our desired range and a zero a value outside it. By multiplying this logical array element-wise with the array of intensity values in the image, all points that didn't fit the range were removed (multiplied by 0), while the others were multiplied by 1 and remained the same. Then, by dividing each new image by its own exposure length, the data was normalized so that everything reads in counts per second. Taking the average of these normalized datasets produced the image of our Point-Spread Function.
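In rough MATLAB terms, the pipeline looks something like this sketch (the frames here are synthetic stand-ins; the 100-220 window, the logical mask, and the counts-per-second normalization follow the steps above):

    % Sketch of the masking-and-averaging step, using synthetic frames.
    exposures = [0.0007 0.01 0.1 1 10];               % exposure times [s] (examples)
    frames = rand(101, 101, numel(exposures)) * 300;  % stand-in intensity images
    psf = zeros(101, 101);
    for k = 1:numel(exposures)
        img = frames(:, :, k);
        mask = (img >= 100) & (img <= 220);   % logical array: 1 in range, 0 outside
        img = img .* mask;                    % out-of-range values multiplied by 0
        psf = psf + img / exposures(k);       % normalize to counts per second
    end
    psf = psf / numel(exposures);             % average the normalized frames
    imagesc(psf); axis image; colorbar;       % view the resulting PSF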


We also took our sheets of what we thought were acrylic over to "The Construct" so that we could laser cut them to be 3/4" shorter. Unfortunately, Lexan is actually a type of polycarbonate, not acrylic, which we only discovered after lightly burning the first sheet. So tomorrow we're going to have to find another way to cut it, because polycarbonate, as you may have guessed, isn't compatible with the laser cutter.

Thursday, July 27, 2017

Day 16

Today we didn't need to come into work until a bit later because of the field trip, but I still came in fairly early and went to a PhD defense, which I found incredibly interesting. The candidate defended his thesis on the General Relativistic Hydrodynamics of Merging Binary Black Holes, in which he was analyzing and simulating the inspiral phase of a pair of black holes in the process of merging. After the defense, I headed back to the lab and worked on calibrating our beam of light with a different camera. The camera has larger pixels, but because of this the pixels have much greater well depths, giving us images with much larger dynamic range. This lets us determine the spot size at only a rough resolution, but with greater precision in contrast (the deeper wells make these pixels harder to saturate).

After lunch I took a series of pictures of the spot of light, varying the exposure time with each image. I covered a range from 700 microseconds to 10 million microseconds (10 seconds) on a generally logarithmic scale. With these images I began writing code to map Point-Spread Functions (PSFs) of the data: first importing the directory and cropping all the images to a 101x101-pixel area (the original images were 1024x1360 pixels, and the spots of light are 20 pixels in diameter at their largest), then subtracting a rough value of the dark current for each pixel to minimize the noise.
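A minimal sketch of that first stage, assuming TIFF files in a folder and using the frame median as a rough dark estimate (both assumptions on my part):

    % Import, crop to 101x101 around the spot, and subtract a rough dark level.
    files = dir(fullfile('psf_exposures', '*.tif'));   % assumed folder/pattern
    half = 50;
    crops = zeros(2*half+1, 2*half+1, numel(files));
    for k = 1:numel(files)
        img = double(imread(fullfile(files(k).folder, files(k).name)));
        [~, idx] = max(img(:));                     % brightest pixel = spot center
        [r, c] = ind2sub(size(img), idx);
        crop = img(r-half:r+half, c-half:c+half);   % assumes the spot isn't at an edge
        dark = median(img(:));                      % rough dark-current estimate
        crops(:, :, k) = crop - dark;               % dark-subtracted 101x101 frame
    end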

We all went to dinner together before heading out to the C.E.K. Mees Observatory on a bus. Upon arrival, Henry (who also works at the observatory as a tour guide) told us about the history of the observatory, the recent news in astronomy, and the Astronomy Club of Rochester, while we waited for it to get dark. Just before nightfall, we headed up to the observatory, and while it was too cloudy to actually observe anything, we did get to see the telescope, which was amazing, and we got to learn all about how it operates and the different aspects of using it for observation. It was so cool to see how a telescope from the 1960s is still working so well.

Wednesday, July 26, 2017

Day 15

Today we set up and conducted the knife-edge experiment. We fixed a razor blade across our power meter and moved our beam of light across it, starting with the razor blade blocking all of the light, then shifting the beam so that more and more light fell on the sensor. From this data we should be able to map a Gaussian curve, but unfortunately our signal-to-noise ratio (SNR) was too low, and our signal was entirely drowned out by the noise. We'll have to rework the experiment and try again tomorrow.



I also went to a fascinating tech talk today from Professor Joel Kastner. He spoke about young, nearby stars which host planets, members of the known class of 'T Tauri' stars. These stars are "toddlers," incredibly young and in the early-to-mid stages of forming planets. Using a coronagraph, you can block out light from the host star and observe areas where gas giant planets are forming (only very large ones for now, the smallest being about twice the mass of Jupiter). By taking measurements of these planetary nesting grounds in every wavelength of light we can manage, we are able to identify certain important elements and compounds, such as hydrocarbons, which are some of the building blocks of life.

Tuesday, July 25, 2017

Day 14

Since we have yet to hear back from ThorLabs, we spent today setting up the physical assembly for our experiment. We moved the stages into position and mounted our lens on top of them. This added about 2 pounds to the load, so we had to talk to ALIO Industries again and retune the stages; the previous settings were not providing enough current for the stages to go through the homing procedure.


The holes on the bottom two stages were imperial, so they fit with the standard screws, but for some reason ALIO decided to make the Z (vertical) stage have metric M5 holes, so we had to find some metric screws and figure out a way to mount the lens. In the end we had to ask Anton to counter-bore holes in our (previously) imperial pieces so that we could screw them to the Z stage, but it's now attached and stable.

We also spent a good deal of time organizing our lab space; there were a lot of extra parts and pieces on our optical table which we were no longer using and which were taking up quite a bit of space. We then separated and untangled the many cables from each other, securing different bundles to the table. This allows us to better manage and move our cables if need be.

(Photo: before mounting the lens)

Monday, July 24, 2017

Day 13

Today we switched gears; we realized that the native software for both the stages and the power meter was not compatible with MATLAB, so we had to switch languages and downloaded Visual Studio to program in C++. While that does mean my MATLAB training was (for now at least) in vain, it also means I get the chance to learn to program in C++, an incredibly widely known and respected language. It may be quite difficult to learn, but it will undoubtedly be worth it. Today I made a great deal of progress creating and debugging the basic code for our power meter. I'm creating an application for it, which we will be able to run from the console and automate to synchronize with our movements of the stages. I have completely written the code and am now working on one last bug; once that's fixed, the code should run.

I also upgraded my setup today! Now that the original issue with the monitor ports has been fixed, I can use dual monitors. The double screenspace is incredibly helpful to have, as it allows me to bring up a manual, guide, or webpage on one monitor while the window for code and debugging is open on the other.

Friday, July 21, 2017

Day 12

Today was full of code errors and numerous attempts at debugging. I finally seemed to have some code that worked, but I ran into some errors with the 'include' files, and since ThorLabs has yet to respond to me with any programming help, I decided to shift focus and work on connecting the controls for the stages to MATLAB. There seems to be quite a bit more documentation available for using these ALIO stages with MATLAB; I have all the necessary compilers and drivers now, and the only remaining issue is figuring out how to work the loadlibrary function, which has been throwing errors too. Luckily, Dmitry will be back next week, so we should be able to figure that out pretty quickly on Monday. We're still waiting on the parts for the rest of the cleanroom, but once they come, assembling it shouldn't be too difficult a task, and we can move on to taking our actual measurements. In the meantime, we must be very careful and precise with our calibration (ergo the precision stages).
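For reference, the general loadlibrary pattern I've been trying to get working looks roughly like this; the DLL, header, and function names are placeholders, not ALIO's actual files:

    % General shape of loading a vendor DLL into MATLAB (names are placeholders).
    if ~libisloaded('stagelib')
        loadlibrary('stagelib.dll', 'stagelib.h', 'alias', 'stagelib');
    end
    libfunctions('stagelib', '-full');                  % list the DLL's exported functions
    err = calllib('stagelib', 'SomeMoveCommand', 0);    % hypothetical exported call
    unloadlibrary('stagelib');                          % clean up when done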

Thursday, July 20, 2017

Day 11

Today I worked on debugging and troubleshooting the code to let us connect the power meter to MATLAB. I called ThorLabs again, and they asked me to send them the error response I've been getting (now a different one, from a later line in the code). The necessary drivers and compilers are all loaded onto the computer, with Anton's help, but the sticking point has been getting the load library working, which would let us access the power meter's commands through MATLAB. I have emailed the error response to ThorLabs, but as of yet they have not responded with any help.

While waiting for a response from ThorLabs, I helped Ashley and Pete with some of their own setup and code, and I also learned some more Python from the online course I have been using. Additionally, I have been looking through the documentation for various MATLAB commands and functions to see if there is any other way to connect the power meter. The power meter is properly connected to the computer, at least, and seems to run fine with its native software.
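One alternative I've been looking into is talking to the PM100D over VISA with SCPI-style commands; this needs the Instrument Control Toolbox and an installed VISA driver, and the resource string below is a made-up placeholder:

    % Query the power meter over VISA (resource string is a placeholder).
    pm = visa('ni', 'USB0::0x1313::0x8078::XXXXXXX::INSTR');
    fopen(pm);
    disp(query(pm, '*IDN?'));                     % ask the instrument to identify itself
    power = str2double(query(pm, 'MEAS:POW?'));   % request a power reading [W]
    fclose(pm);
    delete(pm);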

Wednesday, July 19, 2017

Day 10

This morning I went up and down a lot of staircases, trying to fix a few issues with the monitor in our lab. I downloaded the software for the new power meter which arrived yesterday, and the setup wizard asked me to restart the computer to finalize the installation, so I (luckily) saved everything and rebooted the computer. Unfortunately, when it turned back on, the screen just cycled between "VGA No Signal" and "DVI No Signal." We had been using an HDMI cable to connect it, but the monitor was acting as if the HDMI port did not exist. So after many trips between the 2nd and 3rd floors and the lab in the basement, I got a new monitor and DVI cable, then managed to get the computer up and running again (after Brett got rid of a rogue VGA cable that was hijacking the video signal). I was then able to install the rest of the drivers and software.

At the lunch hour, I went with Ashley and Ryan to the tech talk across from the reading room. It ended up being an hour and a half long, as the speaker just had so much to share and the audience had some really great questions. I loved the talk; it was on quantum mechanics and the possibilities for quantum computers. I found that the background information Dmitry had given us about bits and data was incredibly helpful in understanding more of what the speaker was saying, and it let me be more mentally engaged with the topic.

I managed to make some headway on the issue of connecting both the power meter and the ALIO stages to MATLAB. Code for the PM100D (our power meter) was very difficult to find online; I found quite a few samples that only partially fit our needs or were written for a different model, but none for ours. So I called ThorLabs and spoke to an engineer who was eventually able to find some documents to email me. I've been working through them, locating and downloading compilers and creating a load library for our power meter.

Tuesday, July 18, 2017

Day 9

This morning I showed Ashley the code I was working on for the Gaussian functions, as she and Peter are going to be working with MATLAB too (Pete was unfortunately out today). I found that showing someone what I had done and explaining each step really helped me to properly understand the code, so it was just as helpful to me. I also read about the knife-edge experiment, which can be used to recover a Gaussian profile for a beam of light; the measured power traces out an error function. The test is done by blocking the beam with the flat side of a razor or knife blade, then slowly moving the blade aside to let the light through to the detector. The power is measured as a function of blade position and, through the standard error function, converted into a Gaussian curve.

The power function:

P(x) = (P₀/2) [1 + erf(√2 (x - x₀) / w)]

The error function:

erf(z) = (2/√π) ∫₀ᶻ e^(−t²) dt

Where P₀ is the total beam power, x₀ is the position of the beam center, and w is the 1/e² beam radius.

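As a sanity check on the math, here's a small MATLAB sketch that generates fake knife-edge data from the power function above and fits it back with fminsearch (all the numbers are invented):

    % Fit synthetic knife-edge data to the error-function model above.
    x = (0:0.5:20)';                                % blade positions [microns]
    P = 0.5*(1 + erf(sqrt(2)*(x - 10)/4)) ...       % fake data: x0 = 10, w = 4
        + 0.01*randn(size(x));                      % plus a little noise
    model = @(p, x) p(1)/2 .* (1 + erf(sqrt(2)*(x - p(2)) ./ p(3)));
    sse   = @(p) sum((P - model(p, x)).^2);         % least-squares objective
    pFit  = fminsearch(sse, [1; 9; 3]);             % initial guesses for [P0; x0; w]
    fprintf('fitted 1/e^2 beam radius: w = %.2f microns\n', pFit(3));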

Today we also set up a phone meeting with a representative from ALIO Industries, who helped us configure our new stages with the help of screen sharing. This cleared up some issues we were having trouble with yesterday, and everything appears to be running smoothly now. The power meter which Dmitry had ordered from ThorLabs came today (as did some LabSnacks!). So over the next few days, while Dmitry is out, I will be working out a way to connect to the power meter's software and control it remotely through MATLAB. If I can work that out, I will also work on a way to control the ALIO stages through MATLAB. It will be much easier to calibrate things and run the tests cleanly if we are controlling both from the same place.

Monday, July 17, 2017

Day 8

Today we made good progress on preparing our ALIO stages, which we will use to direct the beam of light pointed at the Kepler camera. The stages move in the x, y, and z directions and allow us to very precisely position our spot of light, which is critical for the analysis of intrapixel responses. We hooked up the z (vertical) stage to the regulators and our tank of pressurized nitrogen gas, which we use as a counterbalance to lift some of the load from the motor. There was a lot of software to download as well, and many setup wizards to run and read through. The manual I read last week definitely helped as we worked through this. Unfortunately, there appear to be some pieces of code missing from the preset program, so we are now in the process of setting up a phone call with some experts from ALIO to help walk us through this next section of the setup.

I also read and learned about Gaussian beams of light and the mathematics behind them. Essentially, if a beam of light is collimated and then passes through a convex lens, or through a series of lenses as it does in our setup, it will behave in a Gaussian-like manner (the approximation works almost perfectly for laser light). Through the explanation of the formulas, I learned that the behavior of a Gaussian beam depends only on the "waist," or narrowest part of the beam, and the wavelength of the light. This is very good for us, as it means we can analyze the beam much more easily, changing just one or two variables and calculating the rest of the beam's properties.



A model of a Gaussian beam's "waist" as position varies from the focus point (at the center).
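To see the "waist plus wavelength" idea in numbers, here's a tiny MATLAB sketch of the standard beam-radius formula w(z) = w₀·√(1 + (z/zR)²), with example values of my own choosing:

    % Gaussian beam radius vs. distance from the waist (example numbers).
    lambda = 550e-9;                      % wavelength [m]
    w0     = 1e-6;                        % waist radius [m]
    zR     = pi*w0^2/lambda;              % Rayleigh range, set by w0 and lambda alone
    z      = linspace(-5*zR, 5*zR, 201);  % positions around the focus
    w      = w0*sqrt(1 + (z/zR).^2);      % beam radius at each position
    plot(z*1e6, w*1e6); xlabel('z (\mum)'); ylabel('w(z) (\mum)');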

Abstract


The Kepler telescope uses a charge-coupled device (CCD) to measure the brightness of distant stars, looking for the transits of exoplanets passing in front of their host stars. The Kepler mission currently uses a Pixel Response Function (PRF) that was mathematically modeled and derived, rather than measured, to analyze the distribution of light from a given star. This PRF can be drastically improved by mapping it from measured data in a controlled environment, which we will do with wavelengths across the visible spectrum. We will direct a spot of light across the area of each pixel, measuring differences in intensity so that we can create an Intrapixel Response Function (IPRF) for each wavelength. This IPRF will allow us to better understand previous and current data from the Kepler missions.

Friday, July 14, 2017

Day 7

Today we wrote a lot of code using MATLAB. A couple weeks ago I started learning Python through an online course. Python is the first language I've programmed in, so I'm still pretty new to the concept, but luckily I've been able to pick up the main ideas fairly quickly. Using the photos I took yesterday, we wrote code to crop each of the 100 pictures to a 50x50-pixel region around the spot of light (the original pictures were about 3000x4000 pixels of mostly black space). We then ordered the pictures into groups of 10 for each of the 10 distances (spotStacks) and took the median of each group (medianSpot), putting each of these medians into its own variable group (medianStack). Taking these medianSpots, we used a function called fmgaussfit to fit a Gaussian curve to each of the datasets (we tried several other methods, but this function was the most efficient).
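Sketched in MATLAB, the stacking step looks roughly like this (the folder name, file ordering, and crop centering are my assumptions; fmgaussfit itself is a File Exchange function, not built into MATLAB):

    % Group 100 images into 10 spotStacks, take medianSpots, build medianStack.
    files = dir(fullfile('focus_scan', '*.tif'));   % assumed: 100 files, in order
    nDist = 10; nPer = 10; half = 25;               % 10 distances x 10 shots, 50x50 crop
    medianStack = zeros(2*half, 2*half, nDist);
    for d = 1:nDist
        spotStack = zeros(2*half, 2*half, nPer);
        for k = 1:nPer
            f   = files((d-1)*nPer + k);
            img = double(imread(fullfile(f.folder, f.name)));
            [~, idx] = max(img(:));                 % brightest pixel = spot center
            [r, c] = ind2sub(size(img), idx);
            spotStack(:, :, k) = ...                % assumes the spot isn't at an edge
                img(r-half+1:r+half, c-half+1:c+half);
        end
        medianStack(:, :, d) = median(spotStack, 3);   % medianSpot for this distance
    end
    % Each medianSpot can then be passed to a 2-D Gaussian fitter (e.g. fmgaussfit).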

This is an example of a Gaussian curve where, in our case, the height would correspond to the intensity of light incident on each pixel. Our raw data is asymmetrical and rather jagged, so by fitting a close mathematical function to it, we can more easily interpret the data, extracting information such as the FWHM, or Full-Width at Half-Maximum (the width of the Gaussian at half its peak; for a Gaussian, FWHM ≈ 2.355σ), in the x- and y-directions. This allows us to compare the size of our spot of light as we focus and unfocus the camera.

Thursday, July 13, 2017

Day 6

Today we spent almost all of our time working on the alignment of our setup. In the morning, we figured out that the lens collimator (the cheaper one) was not focusing because it was too close to our source of light. After lengthening the distance, the collimator worked quite a bit better. Unfortunately, the collimator spreads the light over a larger area, so not as much light passed through the monochromator as when it was uncollimated. So after a couple hours, we decided to scrap the idea of collimation before the monochromator.

We spent a great deal of time working on the transfer of light from the monochromator to a collimator at its exit, making minute changes over a couple of hours to the collimator's position along the x, y, and z axes as well as its vertical tilt and horizontal rotation. We did this using stacked manual stages, different parts of which we could move to adjust the position. We also discovered that the optical tube between the final collimator and the microscope lens was too short: the image of the fiber-optic core from the cable, reflected from the mirror of the collimator, was appearing as a dark spot in our camera view. After switching out the tube, the image appeared more uniform.

After fixing up the alignment to a high degree of accuracy, we experimented with different exposure settings on the camera, allowing it to take in more or less light to reduce saturation in the image. I unfocused our point of light and moved the camera through a range of 50 microns, taking 10 pictures at each 5-micron step and recording the camera's position and exposure-time setting from the properties shown on the monitor. Tomorrow we will import all the pictures into MATLAB and crop the images to the right zones so that we can average the sets of pictures for each distance, cancelling out most of the noise that came in with our signal.

Wednesday, July 12, 2017

Day 5

In today's staff meeting we watched a short video about imaging science at RIT, and a few people shared what they were doing in their labs; it's always so interesting to hear about the different kinds of work everyone is doing. I coded for about half an hour this morning before heading over to the lab to begin collecting sample data for calibration purposes. Our smaller camera sits on a single-axis actuator so that we can position it at the focus point of the light beam emitted from our setup. I adjusted it using our monitor and controls until it was as in focus as our current setup allows, then moved it backward, a little out of focus, and then back toward and past the focus point over a range of about 20 microns in 0.5-micron steps, taking a picture at each step to map the transformation.

At the lunch hour, I went to attend a weekly tech talk on biomedical engineering, and various new techniques in the field. While biology isn't my area of expertise, I found the talk very interesting and insightful, and I'm very glad I went.

I spent a good deal of time today, both before and after lunchtime, running tests to determine whether or not we should collimate our light beam before sending it through the monochromator. Collimating the light essentially makes the light waves parallel to each other, making the beam more uniform so that the light does not spread (much) and weaken the signal, even over a larger distance. As there is already a collimating mirror inside the monochromator, we wanted to see if collimating the light before it entered would affect the strength of the signal coming out the other side. So I placed a power meter in front of the exit slit of the monochromator and measured the current (in nanoamps) across a range of wavelengths from 400 nm to 700 nm, in 50 nm intervals. In the first trial, the uncollimated light returned higher currents, and oddly enough, longer wavelengths generally returned higher currents as well. Sensing that something might be off with the collimator, we ran the test once more, this time using our much nicer (and, since we need a second one, unfortunately more expensive) collimator. The second set of results was completely the opposite and behaved more as expected with respect to wavelength. The collimated light generated nearly double the current of the uncollimated light. More current means a stronger signal, so unfortunately for us, we're going to have to fit a new collimator into our budget somehow, as the first one was quite a bit cheaper and uses a lens (less precise, more scattering) as opposed to a mirror.

We will also need a collimator on the other end of the monochromator (hence the second), so we will position it on a manual actuator which will allow us to adjust the angle and path through which light enters it. At the end of the day we planned to set up the camera controls with MATLAB code, but the license had expired, so we had to spend a fair amount of time downloading a new copy; it's now ready to go for tomorrow.

Tuesday, July 11, 2017

Day 4

After this morning's staff meeting, I reviewed the ALIO systems manual and practiced programming with Python until we met up with Dmitry for our last lesson of background in optics. We missed Peter today because he was out sick, but hopefully he'll be back with us tomorrow. We learned about image saturation, binary (16-bit in particular), digital-to-analog converters, and how HDR imaging was invented. Dmitry and I then set up the regulators for the pressurized nitrogen tank: the first one fit but had a leak; the second wasn't strong enough to take the pressure from the tank; and the third was strong enough but had the wrong nozzle shape, so we had to remove it (with the help of vise-grips and WD-40). We will be using the nitrogen gas to reduce some of the load on the motor, feeding it through two regulators at about 30 psi to create a slight upward pressure that helps the small motor function more easily.

We then took a lunch break, after which I spent about an hour learning more coding techniques in Python. Then we returned to the lab to begin setting up our lab space for the experiment. With much rearranging of power cables and various setups, and several trips between labs to locate materials, we made good progress. We connected our monitor, computer tower, and power supply, and we set up our laser, which sends light through a fiber-optic cable to our "lamp," a ball of stable gas that is heated to generate all wavelengths of light. That light travels through another fiber-optic cable to our monochromator, which we can use to select any wavelength of light we need.


We collected the light on the other side of the monochromator and sent it through another, smaller fiber-optic cable, then through a collimator and a microlens to focus the light into one point. When we first placed the camera in front of the lens, the response was extraordinarily out of focus and much too large for our purposes, with an odd halo around it. But by adjusting the position of the light source (manually) and the camera itself (using the computer and an actuator), we were eventually able to bring the point of light down to a spot roughly 2 by 2 pixels (approximately 3 microns by 3 microns) and much clearer. This would be an acceptable size for the purposes of our project, and we were very glad to achieve it in our initial tests, but we know we can create a smaller, more uniform beam, so that is our next goal.

Monday, July 10, 2017

Day 3

After a quick staff meeting and an overview of scientific abstracts, I started the day off by reviewing Dmitry's proposal and background information for the research we will be doing. Dmitry then taught us some more of the basics of optical physics. We learned about Rayleigh and Mie Scattering and spent a good amount of time focusing on image formation. We analyzed the creation of an image using a convex lens and connected it with the use of Snell's Law of Refraction to determine how rays of light from the object would bend when encountering the lens.

Snell's Law: n₁sinθ₁ = n₂sinθ₂
Where n is the refractive index of each material and θ is the angle of the ray, measured with respect to the normal, within each material.


I read the first 50-some pages of the manual for the detector I'll be working with, detailing the setup, tuning, and calibration processes, and I'll finish the rest of it tonight. We're going to be using Python to control our detector, which is a wonderful coincidence, as I just started an online course to learn some of the Python basics a week or so ago, so I know a little starting out. I'm excited to be setting up our workspace and starting to collect data soon!

Friday, July 7, 2017

Day 2

After a quick staff meeting this morning, I researched the wave equation and its different forms in one, two, and three dimensions, and its nature as a partial differential equation. We then discussed the topics which we had researched for homework with Dmitry, and he explained to us why it's important to cool the cameras we use for measurements (to about -50°C, or -90°C if we could fund it). Because the detectors work by counting the number of electrons pushed into the conduction band by incident light, it is crucial to know that number with as little error as possible. The temperature of a substance is representative of its kinetic energy, so at greater temperatures the silicon has more energy to push so-called "dark electrons" into the conduction band, and therefore a fraction of the measurement may come from within the material itself, independent of the light we are aiming to measure. These are called dark electrons because they appear in the measurement without any light being directed at the detector. This process, the creation of "dark current," does not occur at a fixed rate; it is random, and therefore we can only know the approximate rate to within about +/-10%, as long as the temperature remains constant. If the temperature is constantly fluctuating, we lose any sense of the rate at which dark current is being produced.
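For reference, here is the wave equation I was reading about, in its one- and three-dimensional forms (standard textbook notation, not taken from our materials):

∂²u/∂t² = c² ∂²u/∂x²  (one dimension)

∂²u/∂t² = c² ∇²u  (three dimensions)

Where u is the displacement of the wave and c is its propagation speed.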

Dmitry gave me a copy of the lab's comprehensive report and research proposal to read today, through which I learned more about the history of the Kepler project and the changes made when the reaction wheels malfunctioned and the telescope lost a fair amount of its ability to hold its aim steady. The project shifted its method from pointing at one spot in the sky to shifting between areas along the ecliptic every set number of days. The loss of 2 of the 4 reaction wheels caused the view to drift about 3 milliarcseconds in a given timeframe, moving about one pixel every 6.5 hours. I also learned the background and basics of the project we will be working on, which involves measuring pixels' responses to different wavelengths of light to determine the extent to which wavelength is a factor in the Pixel Response Function (PRF), as well as measuring and determining the IPRF, the Intrapixel Response Function, as a way to mitigate error in the Kepler spacecraft's detectors.

Peter, Ashley, and I spent the rest of the afternoon in the lab, beginning to prep the cleanroom. The structural work has, for the most part, been set up by a previous group of interns, but six of the ceiling modules are missing and the power distribution modules (PDMs) still need to be set up. We cleaned out some of the area, installed the master switch in the wall at the entry to the cleanroom, and then investigated where the PDMs needed to be installed, concluding that we should mount them on the "roof" of the cleanroom. Additionally, we connected the REC cables to the PDMs; the REC cables provide power to the internal outlets within the cleanroom. The only things missing are the LPA cables, which provide power to the distribution modules.

Thursday, July 6, 2017

The First Day!

So today I received a high five from a rubber chicken. After our debriefing and staff meeting in the morning, we walked over to the Red Barn, where we did some team-building exercises in which we learned the importance of communication and collaboration, as well as being mindful not to assume constraints exist when they may not. It particularly struck me, as we reflected on a challenge in which we had to flip over two carpets while 7 of us stood on each of them, that we had automatically assumed our two groups were to work separately from each other; and yet our task would have been far easier had we all simply stepped onto one carpet and then flipped the other.

After our team building exercises, we had lunch together and got to know each other better over the pizza and juice, before splitting up into our smaller groups for each department. When we met up with Dmitry, he showed me, Peter, and Ashley around the lab, and to our offices, just across the hall. The offices didn't have swipe access so we headed back upstairs to collect our keys from Joe. Shortly after, Dmitry showed us to our "classroom" of sorts, where he spent the next couple hours teaching us the basics of the physics of light and answering any questions we had.

I found this really fascinating, as it gave me a chance to expand upon the instruction I had received this past year in physics and chemistry. Dmitry taught us about various ways of detecting light, the general nature of the wave function as a series of sine and cosine curves, polarization and scattering of light, semiconductors (the most common/famous of which is Silicon), the band gap, CCDs (charge-coupled detectors), quantum efficiency and yield, and the operation of pixels and why the minimization of pixel size and increase in number of pixels will produce a sharper, clearer image. To learn more, we each continued at home and did some research on a couple topics that interested us. I personally pursued further research of semiconductors and CCDs in C. Kittel's Introduction to Solid State Physics.