My Projects
I wanted a personal portfolio website to showcase and document my projects. Most developers have one, so I asked myself "how hard could it be?".
I thought about using React or Flutter to make the site, but I wanted to challenge myself to make something that looks and functions like a modern site using nothing but raw HTML, CSS and JS. I initially tried to use an HTML template I found online, but it was terribly written and a nightmare to work with, so I decided to go even more RAW and write everything from scratch. As a result, all the code for this site is written by yours truly (me) from scratch!
Writing everything from scratch meant learning responsive CSS properly and developing a deep understanding of its responsive and adaptive layout tools. The initial versions of the site were made responsive using purely clever HTML and CSS. However, as I added more features to the site, I ended up using some plain ol' JavaScript, such as for the mobile navigation bar overflow menu, the carousel navigation logic and developer mode. I also took the liberty of splitting the files into more manageable sections and components, which simplified my workflow dramatically.
Inspired by Flutter's stateful widgets, I decided to make a Python library that incorporates statefulness into Tkinter. The name comes from flop and Py, with flop apparently being a synonym of flutter and Py being an obvious shorthand for Python.
I will write more here once I have more implementation details
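In the meantime, here's a rough sketch of the kind of API I'm aiming for. Class and method names like StatefulWidget and set_state are placeholders borrowed from Flutter's terminology, not the final design:

```python
import tkinter as tk

class StatefulWidget:
    """Rebuilds its Tkinter subtree whenever set_state() is called,
    mimicking Flutter's setState()/build() cycle."""

    def __init__(self, parent):
        self.frame = tk.Frame(parent)
        self.frame.pack()
        self._rebuild()

    def set_state(self, **changes):
        # Update state, then throw away and rebuild the widget tree.
        self.__dict__.update(changes)
        self._rebuild()

    def _rebuild(self):
        for child in self.frame.winfo_children():
            child.destroy()
        self.build(self.frame)

    def build(self, frame):
        raise NotImplementedError  # subclasses describe their UI here

class Counter(StatefulWidget):
    count = 0

    def build(self, frame):
        tk.Label(frame, text=f"Count: {self.count}").pack()
        tk.Button(frame, text="+1",
                  command=lambda: self.set_state(count=self.count + 1)).pack()

root = tk.Tk()
Counter(root)
root.mainloop()
```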
I couldn't find a lightweight local file backup solution to back up my main Windows PC to my SMB NAS, so I looked around and found RoboCopy, a Windows CLI tool that provides the backend functionality for this. I then wanted to make a GUI for it so that I could use it in a slightly more refined way.
Flutter Solution
The current solution is a Flutter Windows app, which I chose because I was learning Flutter at the time. This solution is around 50% complete: it does work, but the layout is nonexistent and it isn't very thoroughly tested. You can find it in the repo linked above.
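Conceptually, all the app does is assemble and run a RoboCopy command and report the result. A rough illustration of that idea, sketched in Python rather than Dart, with made-up paths and a typical (but not definitive) set of flags:

```python
import subprocess

# Mirror a local folder to the NAS. /MIR mirrors the tree, /Z makes copies
# restartable, /R and /W limit retries, /LOG+ appends to a log file.
# The source, destination and log paths are placeholders.
result = subprocess.run([
    "robocopy",
    r"C:\Users\me\Documents",
    r"\\nas\backup\Documents",
    "/MIR", "/Z", "/R:3", "/W:5", r"/LOG+:C:\backup\robocopy.log",
])

# RoboCopy exit codes below 8 mean success (with or without files copied);
# 8 and above indicate errors.
print("backup ok" if result.returncode < 8 else "backup failed")
```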
Windows Tray Solution
I am planning to make another version of the app that is Windows tray native, so that it stays in your tray notification area. This would make it even better for the intended purpose, as it could then be unintrusively launched at startup and stay in the background, with the option to be run manually when needed and opened up to configure settings.
I noticed a market gap for a certain kind of tracker app on the Google Play Store, so I decided to come up with a rough design for an app that could fill that niche. This project is highly speculative at this point.
I will write more here once I have more implementation details
The name of this project is a portmanteau of Thymio and Pi, with Thymio being the name of the robot platform that I used and Pi referencing the Raspberry Pi 3B+ that was the brains of the prototype.
The official title of my dissertation was "An Intelligent Approach to Navigating Environments With Compliant Obstacles". Like most academic titles, it likely means very little to you. The gist of it is that obstacles fall on a spectrum of compliance, which is a fancy way of saying how easy they are to move aside or get through.
Essentially, if you have a robot that is navigating solely visually, it has no context for what an obstacle's compliance is. You would have to manually add compliance context to the object detection and classification model. This is suboptimal for many reasons, which is why you'd probably want the robot to be able to do this on its own.
As a side note: with recent advancements in AI, training a model on a limited set of known object/compliance pairs and having it extrapolate to unseen objects is a much more realistic and robust solution.
The goal of my project was to design, implement and test a prototype robotics platform to demonstrate "An Intelligent Approach to Navigating Environments With Compliant Obstacles".
I had a Raspberry Pi 3 running the OpenCV object classification model, the compliance testing logic and the Thymio control program (asebamedulla). Everything was written in Python, with the Pi 3 connected to a Pi camera, the MPU6050 inertial measurement unit (accelerometer and gyroscope) and the Thymio robot.
The final prototype was able to identify and classify obstacles, then decide to test an obstacle's compliance if it was unknown to it. In theory, this would then be coupled with a navigation algorithm, using the compliance values to augment the different path "weights" to come up with the most efficient real-life path.
I considered several methods of determining compliance from sensor values:
In order to be able to use a function to calculate a compliance_value from the deceleration on collision, the sensor must have the following properties.
I played around with the following accelerometers before choosing one for the final approach.
I needed some kind of mobile robotic test platform to develop the prototype on. In theory, I could've used anything that had the ability to move, even a couple of servos and wheels bolted to a slab.
However, my supervisor had access to, and experience with, Thymio robots, which are simple small robots with a few built-in sensors. Most importantly though, they include a first-party control library, which meant I wouldn't have to directly control servos for movement.
Up until this project, I had done some modules on ML theory, but I had never actually implemented or used anything in a practical setting. I did a lot of research and found OpenCV, which allowed me to use pre-trained object classification models to detect and classify objects. I eventually settled on a model trained specifically for "everyday objects", which I believed would allow me to classify anything I would use in testing.
Obviously, running an object recognition model on a Raspberry Pi 3 single-board computer was going to run into some performance constraints. I tried my best to optimize by disabling the preview and sending the camera input directly to the model. Eventually I had to sacrifice the input resolution to get the model to run in somewhat real time, which meant less accurate object classification. However, I managed to work around this by caching the recognition data once and reusing it for subsequent frames. The final solution ended up with a minimal recognition delay, in the few-hundred-millisecond range, while staying consistent and accurate enough in practice.
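I don't have the exact pipeline to hand, but the caching idea looks roughly like this. The MobileNet-SSD model files, the 300x300 input size and the scaling constants are illustrative assumptions rather than the exact setup I used:

```python
import cv2

# Load a pre-trained detection model (file names are illustrative).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

cached_detections = None  # reuse one frame's detections for later frames

def detect(frame, refresh=False):
    """Run the model only when needed; otherwise return cached results."""
    global cached_detections
    if cached_detections is None or refresh:
        # Downscaling the input is what keeps the Pi 3 close to real time.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        cached_detections = net.forward()
    return cached_detections
```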
I had a Raspberry Pi 3 running the control program, written in Python. The Pi 3 was also running the OpenCV object classification model using data from a Pi camera. The MPU6050 IMU was wired directly to the GPIO pins on the Raspberry Pi and transmitted data over the I2C bus. The Thymio robot was controlled by the Pi 3 through its first-party dbus Python interface, asebamedulla. Everything was powered off a standard USB-PD compatible battery pack.
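Reading the MPU6050 over I2C from Python is straightforward; here is a minimal sketch of the kind of polling code involved (the register addresses are the MPU6050 defaults, and the scaling assumes the default ±2g range):

```python
import smbus

MPU_ADDR = 0x68        # default MPU6050 I2C address
PWR_MGMT_1 = 0x6B      # power management register
ACCEL_XOUT_H = 0x3B    # accelerometer data: X, Y, Z as high/low byte pairs

bus = smbus.SMBus(1)                          # I2C bus 1 on the Pi's GPIO header
bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the sensor from sleep

def read_accel():
    """Return (x, y, z) acceleration in g, assuming the default +/-2g range."""
    raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_XOUT_H, 6)

    def to_signed(hi, lo):
        value = (hi << 8) | lo
        return value - 65536 if value > 32767 else value

    return tuple(to_signed(raw[i], raw[i + 1]) / 16384.0 for i in (0, 2, 4))
```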
The final prototype was able to identify and classify obstacles, then decide to test an obstacle's compliance if it was unknown to it. In theory, this would then be coupled with a navigation algorithm, using the compliance values to augment the different path "weights" to come up with a more efficient path, not just by distance but also by the compliance of obstacles along the way.
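That navigation layer was never built, but the idea can be sketched as a standard weighted-graph search where edges blocked by an obstacle get extra cost the less compliant it is. The graph structure, the penalty constant and the compliance scale below are all made up for illustration:

```python
import heapq

def plan(graph, start, goal, compliance, penalty=5.0):
    """Dijkstra's algorithm over a weighted graph.
    graph[u] is a list of (v, distance, obstacle_or_None) tuples.
    compliance[o] is in [0, 1]: 1 = trivially movable, 0 = solid wall."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist, obstacle in graph.get(node, []):
            extra = 0.0
            if obstacle is not None:
                # Low compliance -> large penalty, so rigid obstacles are
                # avoided unless the detour is even more expensive.
                extra = penalty * (1.0 - compliance.get(obstacle, 0.0))
            heapq.heappush(frontier, (cost + dist + extra, nxt, path + [nxt]))
    return float("inf"), []
```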
By calculating compliance_value from f(internal_variable), we are mapping a single sensor's data to a numerical value. This inherently has two issues: first, information is lost during the mapping process, regardless of what function is used; second, using a single sensor value is inherently limiting, as it can often miss certain nuances.
On the function side, the simplest function for turning a range of values, such as deceleration readings, into a single value is an average. An average, however, would dilute a sharp single spike (such as colliding with a completely immovable, non-compliant obstacle) into a lower compliance_value than a prolonged deceleration (such as pushing a heavy but movable, slightly compliant obstacle), misrepresenting their relative compliance.
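A tiny made-up example of the problem (the numbers are invented, not from my test data):

```python
# Deceleration samples (m/s^2) over the same time window.
wall      = [0.0, 0.0, 9.0, 0.0, 0.0, 0.0]   # brief sharp spike: rigid obstacle
heavy_box = [2.0, 2.5, 2.5, 2.0, 2.0, 1.5]   # prolonged push: movable obstacle

for name, trace in [("wall", wall), ("heavy_box", heavy_box)]:
    print(f"{name:9s}  mean={sum(trace) / len(trace):.2f}  peak={max(trace):.2f}")

# The mean ranks the wall (1.50) below the heavy box (2.08), even though the
# wall is far less compliant; the peak (9.0 vs 2.5) tells the opposite story.
```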
Therefore, it becomes a task in itself to choose or create a function best suited for the job, one that consistently outputs obstacle compliance values that are representative and usable for comparison purposes.
On the variables side, using a single variable simplifies things, but it can also easily become inaccurate. For example, if only a single accelerometer axis is used and the robot tilts or otherwise doesn't collide with the obstacle head-on every time, it will generate inconsistent data because only one component of the measured acceleration vector is captured.
Therefore, it makes sense to take as many variables as possible into account when calculating an obstacle's compliance_value, so that errors like this can either be accounted for with erroneous-data detection and correction or absorbed through simple mathematical redundancy.
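As a simple example of that kind of redundancy, combining all three accelerometer axes into a single magnitude makes the reading largely independent of the collision angle. Again, this is just a sketch rather than the exact code from the prototype:

```python
import math

def deceleration_magnitude(ax, ay, az):
    """Combine all three accelerometer axes (in g) into a single magnitude,
    so an off-axis or tilted collision reads roughly the same as a head-on
    one instead of producing an undersized single-axis value."""
    return math.sqrt(ax * ax + ay * ay + az * az)

# The same impact, head-on vs at roughly 45 degrees: the X axis alone drops
# from 1.8 to about 1.27, but the combined magnitude stays the same.
print(deceleration_magnitude(1.8, 0.0, 0.0))
print(deceleration_magnitude(1.27, 1.27, 0.0))
```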
The goal of this project was to design a querying language for Turtle RDF files (read more here), then implement an interpreter for that language in Haskell. The name is a play on SQL, or Structured Query Language, which this querying language strongly resembled.
The coursework was meant to be done in a group of three. I proposed that we split it such that one person designs the Toy language and interpreter, one person does the underlying querying logic, and another person links the two together and codes the evaluation logic. This would split the workload relatively evenly.
However, one of my group members didn't show up to any of the "scrum" meetings or contribute any code at all. Despite confronting him about the lack of contribution and eventually reporting it to the module lead, we didn't get anywhere.
Eventually, the remaining two of us decided to just split the work in two, with me designing the Toy language and coding the interpreter and evaluation logic, while my groupmate implemented the entire backend querying functionality and the actual execution in Haskell.
While my groupmate worked on the backend logic, I requested that everything be encapsulated and made available through an API that I could make function calls to. As the library was initially meant to modify files, I realized that I wouldn't actually need to implement runtime variables; I could instead use intermediary files as pseudo-variables.
The second thing I realized was that the actual functionality could be implemented using combinations of simpler functions rather than a larger set of more complex functions, essentially reducing the interpreter's complexity at the "cost" of requiring more effort to write the code in the Toy language.
The goal of this project was to work in a group of six developers, using Agile project management and development practices to deliver a software product to "customers", with the customers being seniors who would give feedback on deliverables.
The software product in question was a "runway redeclaration" application. Essentially, when there is an obstacle on a runway, instead of shutting the runway down, the airport can choose to "redeclare" the runway's declared distances so that aircraft can operate safely around the obstacle. The application would have to recalculate these values and visualize the changes for the user.
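I won't reproduce the real aviation rules from memory, but a deliberately simplified sketch gives the flavour of the calculations involved. The single "take-off away from the obstacle" case and the 300 m blast protection value below are assumptions for illustration only:

```python
def redeclared_tora_takeoff_away(original_tora, obstacle_distance_from_start,
                                 blast_protection=300):
    """Simplified: when taking off away from an obstacle, the usable take-off
    run is whatever runway remains beyond the obstacle plus a blast protection
    buffer. The real redeclaration rules cover many more cases and values."""
    return original_tora - (obstacle_distance_from_start + blast_protection)

# e.g. a 3902 m runway with an obstacle 500 m from the start of the roll:
print(redeclared_tora_takeoff_away(3902, 500))  # 3102 m of redeclared TORA
```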
This project had us using a lot of industry-standard things and some not-so-standard things:
What I did
The six of us were split into pairs to work on the Model, View and Controller code respectively. Initially, I was the main person doing the data modelling with UML class diagrams and the backend calculation logic. However, as the project progressed and the backend functionality reached completion ahead of the frontend, I switched over to help integrate the frontend with the backend.