Brave new world – first manual flight!

As promised in my previous post (“Hard Reset”), these articles will be coming in much more regularly now that I have a proper place to fly and test the drone. Progress has already started to build 🙂

Over the last 24 hours the project has gone from sending the drone off on a pre-programmed flight to enabling manual control through a laptop. Thanks to some helpful signposting from a fellow student at Brunel (Ben Evans – thanks for the advice!), I was able to implement Webflight (all credit to Laurent Eschenauer (eschnou) for building and providing this application). Below is a screenshot of me and two of my coursemates getting this working today, taken from the front camera of my drone.


This program uses NodeJS to connect to the drone and display a live video stream straight in a web browser (“localhost:3000”), while also letting the user manually navigate the drone around using the keyboard. If you’d like to use Webflight yourself, below are some tips that helped me get up and running with it, which should hopefully have you set up and flying in no time at all:

  1. Make sure you’ve done the tutorial over on the Instructables site as described in my earlier post, takeOff();. This guide provides a great foundation for understanding NodeJS and npm library usage in a practical manner, and will greatly help you get to grips with further drone/NodeJS/npm-related work.
    • Completing the Instructables tutorial will ensure that you have a separate “Drone” folder, as well as NodeJS and FFmpeg installed, so it’s a good head start!
  2. Visit the Webflight Github page and spend some time reading the README file; the author has put considerable time and thought into providing a comprehensive guide to the application. This guide is immensely useful if you follow it word-for-word.
    • If (like me) you are too keen to get the drone airborne and think you know best, you might end up spending quite a bit of time rectifying your mistakes later on!
  3. To be 100% clear, install Git, NodeJS and FFmpeg before starting work on the Webflight installation.
    • Also, once all of these have been installed on your machine through the hyperlinks in step 3, make sure that your PATH (environment variable) includes a reference to “c:\…\Ffmpeg\bin\”, “c:\…\nodejs\”, “c:\…\npm\” and “c:\…\Git\bin”.
  4. Using Git Bash, clone the Webflight project into your Drone folder (see Instructables tutorial) on your laptop as shown in the first step of the Install instructions on the Github page.
  5. Open up Windows Command Prompt to complete the second and third lines of the Install guide on the Github page, making sure you have used “cd” to navigate into the Webflight folder.
  6. The fourth/final line about installing Bower caused me some issues, as npm couldn’t find Git on the laptop I was using. Here’s what happened:
    • Firstly, I made sure I was running “npm install --global bower” from inside the Webflight folder that I had cloned (see step two in the Install guide)
    • I checked the environment variables – Git was in there
    • I ran “npm install --global bower” through Command Prompt as administrator – no change in the output
    • I ran the same command in Git Bash (while inside the Webflight folder) and success! I have no idea why this worked, it just did!
  7. The initial control layout is not in QWERTY format; to change this, edit the config.js file and save it.

If you follow the guide on the Github page and use the above for reference, you should be fine. The Bower install issue was by far the biggest pain, as it resulted in only seeing a blank webpage with the Webflight banner at the top when the app was running. Once all the above is set up, it’s as simple as connecting to the drone’s WiFi, executing the program in Command Prompt (cd to the Webflight folder and type “node app.js”) and entering “localhost:3000” in a new web browser window or tab.
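For quick reference, the whole install-and-run sequence from the steps above boils down to something like the following. The repository URL and the exact name of the cloned folder come from the Webflight Github page, so swap the placeholders below for the real values from there:

```
cd Drone                           # the folder from the Instructables tutorial
git clone <Webflight repo URL>     # step 4 – run from Git Bash
cd <cloned Webflight folder>       # steps 5–6 – the remaining Install lines
npm install
npm install --global bower         # from Git Bash if npm can't find Git (step 6)
node app.js                        # then browse to localhost:3000
```

This is a reconstruction from my notes rather than a copy of the official guide, so treat the README on the Github page as the authority if the two disagree.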

The above has been more of a how-to style post, as this caused me a little trouble while setting it up. Below I’ll talk about why I chose this existing application as part of my project.

When determining the best means of approaching drone programming, there were several options out there in various languages (see “Hard Reset”). Considering all of the available options, it became clear to me that NodeJS is the most established and well documented means of creating drone applications for beginners. As I am a beginner and this project isn’t about reinventing the wheel in terms of drone control, this offered the best platform for me to get on with programming a meaningful solution towards the objectives of my dissertation.

Webflight itself was recommended to me as the best starting point for understanding and working with drone programming; it also gives a great insight into client-server interactions and JavaScript. For my part, I’m aiming to build a plugin for the existing application that will enable the drone to perform the specific navigational and surveying tasks that I require of it. In doing this, I’m looking to attempt the following:

  • Semi-autonomous navigation – upon request of the user (JS)
  • Autonomous detection of objects (JS)
  • Image capture (JS)
  • Image comparison (Java)
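To give a flavour of the first of those aims, here’s a toy sketch of how a plugin might translate a high-level user request into a sequence of low-level drone commands, which is essentially what semi-autonomous navigation boils down to. Every name in it is mine for illustration – this is not Webflight’s (or any drone library’s) actual API:

```javascript
// Toy sketch: turn a high-level survey request into a list of low-level
// command steps. Names and structure are illustrative only -- this is not
// Webflight's or the drone SDK's real API.
function planSurveyPass(request) {
  var steps = [{ cmd: 'takeoff' }];
  // Climb to the requested inspection height (metres -> seconds of 'up',
  // assuming a crude fixed climb rate of 0.5 m/s for the sake of the sketch).
  steps.push({ cmd: 'up', speed: 0.5, seconds: request.height / 0.5 });
  // Face each side of the structure in turn, hold steady, and capture an image.
  for (var i = 0; i < request.sides; i++) {
    steps.push({ cmd: 'clockwise', speed: 0.3, seconds: 2 });
    steps.push({ cmd: 'hover', seconds: 1 });
    steps.push({ cmd: 'capture' });
  }
  steps.push({ cmd: 'land' });
  return steps;
}

// Example: survey a four-sided structure at 2 m.
var plan = planSurveyPass({ height: 2, sides: 4 });
console.log(plan.length + ' steps, starting with ' + plan[0].cmd);
```

The point of a plan like this is that the user asks for one thing (“survey that wall”) and the plugin, not the pilot, worries about the individual movement commands.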

Bundling all of this together in a Java project will enable a single point of usage, removing the need for text-based input from the user and allowing an image comparison function to be implemented in the same project (comparing a past and present version of a structure for differences). All in good time though…

So there you go, I’ll keep posting as this project progresses and hopefully provide some content to help other beginners get into drone programming 🙂

Thanks for reading,





Hard Reset

Anyone following along with my little experiment to blog my FYP/dissertation progress will have noticed that it’s been a while since any update has been posted up here. Apologies – I’ll be posting more from now on (for reasons that’ll be explained shortly), and I hope that these posts will continue to be useful to at least someone out in the world who’s interested in flying drones using code. I’m very glad to have received some feedback that people are enjoying the content and getting something out of it – thank you for that! Please let me know if I can add more to these posts to help out further. 🙂

So what’s been going on throughout this period of silence? In a nutshell, I have been working hard to confirm a better spot to fly the drone on campus here at university. In doing this, I’ve been liaising with my supervisor to make sure everything is above board and good to go, and I’m very relieved to report that yesterday we were able to confirm that the practical coding and flight of the drone can now go ahead!

Small victories aside, the title of this post is intended to give an idea of my current position – resetting my project, reviewing the relevant literature and working from the ground up. Before I get into the ins and outs of the work behind the scenes up to now, here’s a recap of my project’s focus:

“Exploring the implementation of consumer-level (off-the-shelf) drone technology in the process of inspecting and gathering data on physical structures for the purposes of performing structural surveys.”

In other words…

“Let’s see if I can get a drone to fly around with some level of autonomy and enable a user to perform survey-related tasks while they’re at it. Preferably without crashing.”

Although some work had previously been done in getting the drone airborne (see takeOff();), I decided it best to take a step back, do more research and start afresh once I was able to fly the drone again. During that time I was able to really get into some background research, which is now helping me steer development a little better. Through my research, I came to the following conclusions:

  1. Drones are currently being used by some companies to perform building surveys; however, as far as I can tell this is a field in its infancy (Kestrel Cam, Flying Eye).
    • The drones used in these (up-close) surveys are manually operated. Let’s see about getting some autonomy in there!
    • Single hi-res images and videos are taken, so there’s an opportunity to implement Structure From Motion (SFM) to develop on this foundation without the need for additional hardware.
    • Recommended by my supervisor – what about taking two photos of the exact same structure at two different points in time and comparing them for any differences programmatically?
  2. There are APIs (Application Programming Interfaces) and SDKs (Software Development Kits) out there for the drone in C#, Java (Android), JavaScript etc.
    • Unfortunately these are (in the main) pretty poorly documented. At least, for my tiny brain to comprehend anyway.
  3. LiDAR (Light Detection and Ranging) payloads are being used on drones for the purposes of larger-scale surveying and 3D modelling.
    • Re-purposing this technology for up-close structural surveys could be a more accurate and useful means of collecting and presenting data on structures in hard-to-reach or unsafe environments or parts of a building.
    • However, these payloads are too heavy for my off-the-shelf drone and are over my budget (not so off-the-shelf or consumer-level!).
  4. The Xbox Kinect might be a better option for this project! The Kinect uses depth imaging in a similar manner to LiDAR and has a dedicated SDK provided by Microsoft for developers.

There’s quite a lot to take in, as you can probably tell, and my job at the moment is to pick a specific direction and go from there. Discussing this with my supervisor, it was decided that I will start by working on semi-autonomous navigation of the drone, as well as building a function to compare two images of the same structure (angle, lighting etc. all the same) and check for any differences. This “spot-the-difference” function should allow a user to compare images taken at different points in time and automatically identify changes or developments in the structure (or “subject”) in question.
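As a first stab at the logic behind that spot-the-difference idea, here’s a toy sketch (in JavaScript, since that’s where the rest of the project lives) of a naive pixel-by-pixel comparison. Real photographs would need alignment and noise handling before anything this simple is useful, and the function name and shape are entirely my own:

```javascript
// Toy spot-the-difference: compare two greyscale images (flat arrays of
// 0-255 values, same dimensions) and report which pixels changed by more
// than a tolerance. Purely illustrative -- real survey images would need
// to be aligned and de-noised before a naive diff like this means anything.
function diffImages(before, after, tolerance) {
  if (before.length !== after.length) {
    throw new Error('Images must be the same size');
  }
  var changed = [];
  for (var i = 0; i < before.length; i++) {
    if (Math.abs(before[i] - after[i]) > tolerance) {
      changed.push(i); // record the index of every pixel that differs
    }
  }
  // Report the fraction of the image that changed alongside the locations.
  return { changed: changed, fraction: changed.length / before.length };
}

var past = [10, 10, 200, 200];
var present = [12, 10, 90, 200]; // one pixel has darkened noticeably
var result = diffImages(past, present, 20);
console.log(result.changed, result.fraction); // pixel 2 changed; 25% of image
```

The tolerance is there so that small lighting differences between the two visits don’t get flagged as structural change.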

Once these initial steps have been worked on, the more advanced aspects such as LiDAR or similar will be looked into. As the project moves on, it will grow in technical complexity and in the number of tasks it can address.

That’s all for now from this post, I’ll be sure to post as this project progresses. Now that full development of the software has been given the green light, there will no doubt be a lot to say about mistakes made, lessons learned and tips that I recommend. Anything that I think could be of any use will be posted!

As ever, I hope this helps and please get in touch if you have any feedback at all 🙂

Thanks for reading.