Anyone following along with my little experiment to blog my FYP/dissertation progress will have noticed that it’s been a while since any update has been posted up here. Apologies! I’ll be posting more from now on (for reasons that’ll be explained shortly), and I hope that these posts will continue to be useful to at least someone out in the world who’s interested in flying drones using code. I’m very glad to have received some feedback that people are enjoying the content and getting something out of it, thank you for that! Please let me know if I can add more to these posts to help out further. 🙂
So what’s been going on throughout this period of silence? In a nutshell, I have been working hard to confirm a better spot to fly the drone on campus here at university. In doing this, I’ve been working with my supervisor to make sure everything is above board and good to go, and I’m very relieved to report that yesterday we were able to confirm that the practical coding and flight of the drone can now go ahead!
Small victories aside, the title of this post is intended to give an idea of my current position – resetting my project, reviewing the relevant literature and working from the ground up. Before I get into the ins and outs of the work behind the scenes up to now, here’s a recap of my project’s focus:
“Exploring the implementation of consumer-level (off-the-shelf) drone technology in the process of inspecting and gathering data on physical structures for the purposes of performing structural surveys.”
In other words…
“Let’s see if I can get a drone to fly around with some level of autonomy and enable a user to perform survey-related tasks while they’re at it. Preferably without crashing.”
Although some work had previously been done on getting the drone airborne (see .takeOff();), I decided it best to take a step back, do more research and start afresh once I was able to fly the drone again. During this time I was able to really get into some background research, which is now helping me steer development a little better. Through my research, I came to the following conclusions:
- Drones are currently being used by some companies to perform building surveys; however, this is a field in its infancy as far as I can tell (Kestrel Cam, Flying Eye).
- The drones used in these (up-close) surveys are manually operated. Let’s see about getting some autonomy in there!
- Single hi-res images and videos are taken, so there’s an opportunity to implement Structure From Motion (SFM) to build on this foundation without the need for additional hardware.
- Recommended by my supervisor: what about taking two photos of the exact same structure at two different points in time and programmatically comparing them for any differences?
- Unfortunately, these techniques are (in the main) pretty poorly documented. At least, for my tiny brain to comprehend anyway.
- LiDAR (Light Detection and Ranging) payloads are being used on drones for the purposes of larger-scale surveying and 3D modelling.
- Re-purposing this technology for up-close structural surveys could be a more accurate and useful means of collecting and presenting data of structures in hard to reach or unsafe environments or parts of a building?
- However, these payloads are too heavy for my off-the-shelf drone and are over my budget (not so off-the-shelf or consumer-level!).
- The Xbox Kinect might be a better option for this project! The Kinect uses depth-imaging in a similar manner to LiDAR and has a dedicated SDK provided by Microsoft for developers.
There’s quite a lot to take in, as you can probably tell; my job at the moment is to pick a specific direction and go from there. After discussing this with my supervisor, we decided that I will start by working on semi-autonomous navigation of the drone, as well as building a function to compare two images of the same structure (angle, lighting etc. all the same) and check for any differences. This “spot-the-difference” function should allow a user to compare images taken at different points in time and automatically identify changes or developments in the structure (or “subject”) in question.
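To give a rough idea of what I mean by “spot-the-difference”, here’s a minimal sketch of the pixel-by-pixel approach in Python with NumPy. To be clear, this isn’t the project’s actual code: `find_differences` and the threshold value are placeholder names and numbers of my own, and real photos would be loaded with something like Pillow or OpenCV rather than the tiny synthetic arrays standing in for images here.

```python
import numpy as np

def find_differences(before, after, threshold=25):
    """Return a boolean mask of pixels whose intensity changed by more
    than `threshold` between the two greyscale images."""
    # Cast to int first so the subtraction can't wrap around on uint8.
    diff = np.abs(before.astype(int) - after.astype(int))
    return diff > threshold

# Two 4x4 greyscale "photos" of the same structure, one with a change.
before = np.full((4, 4), 100, dtype=np.uint8)
after = before.copy()
after[1, 2] = 200  # simulate a new crack or feature appearing

mask = find_differences(before, after)
changed = int(mask.sum())
print(f"{changed} pixel(s) changed")  # -> 1 pixel(s) changed
```

In practice the two photos would never be perfectly aligned, which is exactly why the “angle, lighting etc. all the same” constraint matters; a thresholded difference like this is only a starting point before worrying about alignment and lighting changes.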
Once these initial steps have been worked on, the more advanced aspects such as LiDAR or similar will be looked into. As the project moves on, it will grow in technical complexity and in the number of tasks it can address.
That’s all for now from this post, I’ll be sure to post as this project progresses. Now that full development of the software has been given the green light, there will no doubt be a lot to say about mistakes made, lessons learned and tips that I recommend. Anything that I think could be of any use will be posted!
As ever, I hope this helps and please get in touch if you have any feedback at all 🙂
Thanks for reading.