
Koret Scholars: Eclipse Phase Detection

by Hannah Hellman, 2024-08-20

Thank you to Ethan Martinez, Matthew Vitullo, Obinna Kalu, Nick Carales, and Dr. Gurman Gill for undertaking this machine learning project, whose purpose was to identify the various stages of a solar eclipse in photographs. The project used the 2017 Eclipse Megamovie dataset, which contains thousands of eclipse images submitted by participatory scientists (previously referred to as citizen scientists) after the 2017 total solar eclipse.

The eclipse phase machine learning project is divided into two fundamental parts: a model that uses a Convolutional Neural Network (CNN) and a model that uses a Bag of Words method. Unfortunately, the main obstacle for both methods turned out to be the distribution of the collected data. Each model is intended to sort the images into eight different eclipse states:

  1. 0-25%
  2. 26-55%
  3. 56-95%
  4. Darks
  5. Diamond Ring / Baily’s Beads
  6. Flats
  7. Not an Eclipse
  8. Total Eclipse

[Image: Koret Scholars poster]

The distribution of the data, however, is heavily skewed toward the total eclipse phase, with very little representation of any of the other phases. For a machine learning model, this means it will learn to identify a total eclipse very well but will struggle to identify the other seven categories.
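One standard way to soften this kind of imbalance, though not necessarily what the team did, is to weight each class inversely to its frequency during training. Here is a minimal Python sketch with made-up counts standing in for the real distribution:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical label counts standing in for the real (unpublished)
# distribution; class 7 represents "Total Eclipse", which dominates.
labels = np.array([7] * 900 + [0] * 20 + [3] * 30 + [6] * 50)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(labels),
                               y=labels)

# Rare classes get large weights, so misclassifying them costs more
# during training. Most frameworks accept such weights directly
# (e.g. the class_weight argument of Keras's model.fit).
print(dict(zip(np.unique(labels).tolist(), weights.round(2))))
```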

Convolutional Neural Network

The data provided to the team was unlabeled, meaning that a large amount of it had to be labeled manually before a model could be trained to do the same. Manual labeling involves drawing boxes around the parts of an image that we want the machine to recognize and label or orient accordingly. Thankfully, this process was sped up by a Python-based classification script, which analyzes each image and places it in the correct category. This helped streamline the preparation of a balanced dataset for future models to work with.
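The script itself was not published with the post, but a minimal sketch of what an image-sorting script of this kind might look like follows; the classify_image() heuristic here is a hypothetical stand-in, not the team's actual logic:

```python
import shutil
from pathlib import Path

import numpy as np
from PIL import Image

CATEGORIES = ["0-25", "26-55", "56-95", "Darks",
              "Diamond Ring - Bailys Beads", "Flats",
              "Not an Eclipse", "Total Eclipse"]

def classify_image(path: Path) -> str:
    # Toy stand-in heuristic: very dark frames are filed under "Darks",
    # everything else is left for manual review. The real script's
    # logic was not published.
    brightness = np.asarray(Image.open(path).convert("L")).mean()
    return "Darks" if brightness < 20 else "Not an Eclipse"

def sort_images(src: Path, dst: Path) -> None:
    # Create one folder per category, then copy each image into the
    # folder its predicted category names.
    for category in CATEGORIES:
        (dst / category).mkdir(parents=True, exist_ok=True)
    for image in sorted(src.glob("*.jpg")):
        shutil.copy(image, dst / classify_image(image) / image.name)
```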

The convolutional neural network takes an image belonging to one of the categories as input and passes it through a collection of filters that generate features. These features are then used to calculate the probability of the image belonging to each category.
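As a rough illustration, here is a minimal CNN classifier of the kind described, written with Keras; the input size, layer sizes, and hyperparameters are assumptions, not the team's actual architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 8  # the eight eclipse states listed above

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # resized RGB image
    layers.Conv2D(32, 3, activation="relu"),  # filters extract features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    # Softmax turns the final features into a per-class probability.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```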

Bag of Words

The Bag of Words model takes an image as input and uses the Scale-Invariant Feature Transform (SIFT) algorithm to extract its most important or noticeable features. These features are then quantized into a Bag of Visual Words model that describes each image, and a K-Nearest Neighbors (KNN) algorithm is used to predict and classify the input images.
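A minimal sketch of this SIFT + Bag of Visual Words + KNN pipeline, using OpenCV and scikit-learn, might look like the following; the vocabulary size and neighbor count are assumptions rather than the team's actual settings:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

VOCAB_SIZE = 100  # number of "visual words" (an assumed value)
sift = cv2.SIFT_create()

def descriptors(image_path: str) -> np.ndarray:
    # Extract 128-dimensional SIFT descriptors from a grayscale image.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def histogram(desc: np.ndarray, vocab: KMeans) -> np.ndarray:
    # Map each descriptor to its nearest visual word, then build a
    # normalized word-count histogram describing the whole image.
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=VOCAB_SIZE, range=(0, VOCAB_SIZE))
    return hist / max(hist.sum(), 1)

def train(paths: list[str], labels: list[int]):
    # Cluster all descriptors into a visual vocabulary, describe each
    # image as a word histogram, then fit a KNN classifier on those.
    all_desc = np.vstack([descriptors(p) for p in paths])
    vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(all_desc)
    features = np.array([histogram(descriptors(p), vocab) for p in paths])
    knn = KNeighborsClassifier(n_neighbors=5).fit(features, labels)
    return vocab, knn
```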


The project team concluded that the convolutional neural network was ultimately the better model for classifying the images. If given the opportunity to pursue further research, the team intends to improve the CNN's accuracy even further and to explore different ways of splitting the training and testing data to see how the split affects the model. The students' project poster can be found below:

View Poster