This program inserts heading information in a comment at the top of source code files. It uses user-defined templates to generate the headings.
This was a 299 directed study project with Yvonne Coady. My goal was to learn how binary and assembly language are related to modern programming languages, operating systems, and programs.
This program is an implementation of the RSA encryption algorithm. It generates a public/private key pair, which is used to encrypt and decrypt messages. This was for educational purposes and should not be used for securing important information.
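The idea can be sketched in a few lines of Python. This is a toy illustration, not the project's actual code; the primes here are far too small for real security.

```python
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    # p and q are distinct primes; e must be coprime to phi(n).
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi(n)"
    return (e, n), (d % phi, n)   # public key, private key

def crypt(m, key):
    # Both encryption and decryption are modular exponentiation.
    k, n = key
    return pow(m, k, n)

public, private = make_keys(61, 53)   # n = 3233, a textbook-sized example
cipher = crypt(42, public)
plain = crypt(cipher, private)        # recovers 42
```

Encryption and decryption are the same operation with different exponents, which is why a single `crypt` function suffices.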
This program is an implementation of the LZ78 data compression algorithm. It isn't very effective on small data inputs, but it accurately reflects how the algorithm works; it was written for educational purposes.
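The core of LZ78 fits in a short sketch: the compressor emits (dictionary index, next character) pairs while growing a phrase dictionary, and the decompressor rebuilds the same dictionary as it reads them. This is an illustrative version, not the project's code.

```python
def lz78_compress(text):
    # Each output pair is (index of longest known prefix, next character).
    dictionary = {}
    pairs = []
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            pairs.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:
        # Input ended mid-phrase; flush it with an empty character.
        pairs.append((dictionary[phrase], ""))
    return pairs

def lz78_decompress(pairs):
    # Index 0 is the empty phrase; each pair appends a new phrase.
    phrases = [""]
    out = []
    for index, ch in pairs:
        phrase = phrases[index] + ch
        phrases.append(phrase)
        out.append(phrase)
    return "".join(out)
```

On tiny inputs each pair costs more than the text it encodes, which is exactly why the algorithm underperforms on small data.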
This program is designed to work as a web-based front-end to the database I use to manage my personal expenses. This project is still a work in progress.
This is my implementation of a blob tree. It is an implicit modelling system that supports the union, intersection, and blending of implicit blobs.
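The three operations can be sketched with signed-distance fields (the actual blob tree may use a different field formulation; the smooth-minimum blend below is one common choice, not necessarily the one used here):

```python
import math

def sphere(cx, cy, cz, r):
    # Signed-distance field: negative inside the surface, positive outside.
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def union(a, b):
    # A point is inside the union if it is inside either blob.
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    # A point is inside the intersection only if inside both blobs.
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def blend(a, b, k=1.0):
    # Smooth minimum: rounds off the crease where the two surfaces meet.
    def field(x, y, z):
        da, db = a(x, y, z), b(x, y, z)
        h = max(k - abs(da - db), 0.0) / k
        return min(da, db) - h * h * k * 0.25
    return field

shape = blend(sphere(0, 0, 0, 1), sphere(1.5, 0, 0, 1))
```

Because every operator returns another field function, the operators compose naturally into a tree with primitives at the leaves, which is the essence of a blob tree.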
The marching cubes algorithm produces a polygonal mesh from an implicit surface. This implementation uses my implicit system to generate meshes from blobs.
This is a simple implementation of a ray-tracer to test and generate images from the implicit system.
This is part of the Fundamentals of Computer Science course, where we explored using a SAT solver to solve games of Sudoku. We compared minimal and extended encodings of Sudoku as a SAT problem across various difficulty levels.
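To give a flavour of the encoding: a standard approach assigns one Boolean variable per (row, column, digit) triple, then emits CNF clauses constraining each cell to hold exactly one digit (with similar clauses for rows, columns, and boxes). This sketch shows only the per-cell clauses and is illustrative, not the project's exact encoding.

```python
def var(r, c, d):
    # Map (row, column, digit), each in 1..9, to a unique DIMACS variable.
    return 81 * (r - 1) + 9 * (c - 1) + d

def cell_clauses(r, c):
    # At least one digit in the cell: a single 9-literal clause.
    clauses = [[var(r, c, d) for d in range(1, 10)]]
    # At most one digit: pairwise exclusion, as negated literal pairs.
    for d1 in range(1, 10):
        for d2 in range(d1 + 1, 10):
            clauses.append([-var(r, c, d1), -var(r, c, d2)])
    return clauses
```

The extended encoding adds redundant clauses (e.g. "each digit appears at least once per row") that are logically implied by the minimal encoding but can speed up solving, which is what the comparison measures.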
This is a group project for my Self-Adaptive Systems course. It is a web-based Python interpreter split into four components: a Flask front-end server, an RPC-controlled CPU scheduler, an overlying autonomic manager, and a core database. It was never intended for production. Very few security measures are implemented, so it shouldn't be used for anything other than a demonstration in a trusted environment.
The system offers variable price plans that a user can select. Each plan is assigned a different CPU quota, allowing processing to run longer before timing out and longer before a context switch. Administrators can use the autonomic manager to designate a desired price plan. The autonomic manager then compares the usage of the desired plan against the usage of the other plans and adjusts prices accordingly to steer demand toward it.
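One plausible reading of that policy, sketched in Python (the plan names, data shape, and step size here are all illustrative assumptions, not the project's actual interface):

```python
def adjust_prices(plans, desired, step=0.05):
    # plans: {name: {"price": float, "usage": float}}.
    # Steer demand toward the desired plan: raise the price of any
    # competing plan that is more heavily used than the desired one,
    # and slightly lower the price of less-used competitors.
    target_usage = plans[desired]["usage"]
    for name, plan in plans.items():
        if name == desired:
            continue
        if plan["usage"] > target_usage:
            plan["price"] *= 1 + step
        else:
            plan["price"] *= 1 - step
    return plans
```

This is the classic MAPE-style loop of an autonomic manager in miniature: monitor usage, compare against the goal, and actuate through prices.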
This project is intended to assist in the maintenance of the Linux kernel by providing clearer visualization of and navigation through the commit information.
We use a tree-based model, with the merges as the inner nodes, the commits as the leaf nodes, and the merge into the master branch of the kernel as the root node.
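The model above can be sketched as a simple recursive structure (illustrative only; the project's real implementation works over the actual git DAG):

```python
class Node:
    # Merge-tree node: merges are inner nodes, plain commits are leaves,
    # and the root is the merge into the kernel's master branch.
    def __init__(self, sha, children=None):
        self.sha = sha
        self.children = children or []

def leaf_commits(node):
    # Collect every commit brought in beneath a merge, in order.
    if not node.children:
        return [node.sha]
    commits = []
    for child in node.children:
        commits.extend(leaf_commits(child))
    return commits
```

A maintainer asking "what did this merge bring into master?" is then a single subtree traversal, rather than a walk over the full ancestry graph.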
This is where I keep the repositories for my courses. To avoid issues with academic dishonesty, I keep my course repositories private until the term has ended. I probably won't post all of my course repositories here, but they are all available on my GitHub account.
This contains the materials for Introduction to Computer Graphics. It includes some very sparse notes on the lectures and labs, but primarily my program submissions for the assignments.
This contains the materials for Introduction to Computer Networking: my notes for the course and my assignment submissions.
This contains my notes and assignments on DFAs, NFAs, PDAs, Turing Machines, and the other materials from the course.
Contains my notes and assignments for the artificial intelligence course.
Our assignments covered search algorithms, logic, and Bayesian networks.
These are the papers I have written for various projects. They are not intended for submission to conferences or journals.
This is the accompanying paper to the Sudoku as a SAT project repository. It describes the methods my group used to evaluate the differences between the extended and minimal encodings of Sudoku.
This outlines the capabilities of my blob tree at the end of the term. The project is being continued for the purpose of experimenting with polygonization algorithms.
This project implements sentiment analysis on news articles using the GDELT dataset. We investigate the behaviours of Naive Bayes, Decision Trees, Extra Trees, Random Forests, and Linear SVM models to determine which model is best suited to the task.
GitHub is a popular source code hosting site which serves as a collaborative coding platform. The many features of GitHub have greatly facilitated developers' collaboration, communication, and coordination. Gists are one such feature, which GitHub defines as "a simple way to share snippets and pastes with others." This three-part study explores how users are using Gists. The first part is a quantitative analysis of Gist metadata and contents. The second part investigates the information contained in a Gist: we sampled 750k users and their Gists (totalling 762k Gists), then manually categorized the contents of 398. The third part investigates what users say Gists are for, by reading the contents of web pages and Twitter feeds. The results indicate that Gists are used by a small portion of GitHub users, and those that use them typically have only a few. We found that Gists are usually small and composed of a single file. However, Gists serve a wide variety of uses, from saving snippets of code to creating reusable components for web pages.
With an average of more than 900 top-level merges into the Linux kernel per release, many containing hundreds of commits and some containing thousands, maintenance of older versions of the kernel becomes nearly impossible. Various commercial products, such as the Android platform, run older versions of the kernel. Due to security, performance, and changing hardware needs, maintainers must understand what changes (commits) have been added to the current version of the kernel since the last time they inspected it in order to make the necessary patches. Current tools provide information about repositories through the directed acyclic graph (DAG) of the repository, which is helpful for smaller projects. However, with the scale and number of branches in the kernel, the DAG becomes overwhelming very quickly. Furthermore, the DAG contains every ancestor of every commit, while maintainers are more interested in how and when a commit arrives in the official Linux repository. In this paper, we propose the merge-tree, a simplified transformation of the DAG of the Linux git repository that shows the way in which commits are merged into the master branch of Linux. Using the merge-tree, we build Linvis, a tool designed to allow users to explore how commits are merged into the Linux kernel.
This was an entry for a competition on creating a futuristic car, and it won second place. The area that I worked on was modeling.
This was a simple project that I did when I probably should have been doing homework during the summer of 2013. Areas that I worked on were compositing, lighting, and materials.
This is the result of a video tutorial that I recorded.
This is a grand piano that I made for fun and for practice. Areas that I worked on were modeling techniques, materials, lighting, and compositing.
This was an exercise using hair particles. I saw an image online and attempted to make something similar. Areas that I worked on were texture-dependence and hair particle systems.
The jacks in this image are implicit models from my implicit system, and polygonized with my marching cubes implementation.