Monday 23 November 2020

The Plan Part 3 - The Challenges

So - how will we attempt the challenges? For the arena challenges, we feel the environment is controlled enough that we should be able to attempt them all autonomously. The obstacle course will be done remotely, as it'll be more fun to come up with obstacles. 

I decided not to compete in Pi Wars 2020 as I wanted to spend a little bit of time looking into computer vision. The plan was to use OpenCV - but then during the Pi Wars 2019 Mini Conference, I was inspired by Brian Starkey's talk on "Computer Vision from Scratch" to try and write my own. 

So I came up with "Blocky Vision" - a very simple colour-matching processor that can be used to find the largest 'block' of a certain colour. I've had mixed success and performance is not great - I may still break out the OpenCV tutorials, but the current plan is to use this to find coloured blocks in the "Tidy Up The Toys" and "Feed the Fish" challenges. The screenshot below shows the original image, the processed image where the matched colours are found, and finally the original image with the found block outlines and names overlaid. 
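The post doesn't include the Blocky Vision code, but the idea it describes - match pixels against a target colour, then find the largest connected block of matches - can be sketched in plain Python. This is an illustrative reimplementation under my own assumptions (a per-channel tolerance match and a 4-connected flood fill), not the author's actual processor:

```python
# Sketch of a "Blocky Vision"-style colour matcher. The image is a 2D grid
# of (r, g, b) tuples; a pixel "matches" if every channel is within `tol`
# of the target colour, and the largest 4-connected block of matches wins.
from collections import deque

def matches(pixel, target, tol=40):
    """True if every channel is within `tol` of the target colour."""
    return all(abs(p - t) <= tol for p, t in zip(pixel, target))

def largest_block(image, target, tol=40):
    """Return (size, bounding_box) of the biggest matching block.

    bounding_box is (min_row, min_col, max_row, max_col), or None if
    nothing matches.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    best = (0, None)
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not matches(image[r][c], target, tol):
                continue
            # Flood-fill this connected block of matching pixels.
            queue = deque([(r, c)])
            seen[r][c] = True
            size, box = 0, [r, c, r, c]
            while queue:
                y, x = queue.popleft()
                size += 1
                box = [min(box[0], y), min(box[1], x),
                       max(box[2], y), max(box[3], x)]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and matches(image[ny][nx], target, tol)):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if size > best[0]:
                best = (size, tuple(box))
    return best
```

The bounding box is exactly what you'd need to draw the block outlines and names over the original image, as in the screenshot.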


Tidy Up The Toys

This is probably the one we'll spend the most time on, as it's the trickiest. We plan to manoeuvre the robot around the arena using the distance sensor to give us an idea of where we are, and the camera to find the blocks. 

As we find the blocks, we'll have a forklift attachment to pick them up one by one. I bought some Lego-compatible servo motors and a stepper motor. The stepper motor will lift the blocks, since it lets us raise them to an accurate, repeatable height. 
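The reason a stepper gives an accurate height is that step count maps directly to distance. A quick sketch of that mapping - the 200 steps/rev motor and the 8 mm of travel per revolution are assumptions for illustration, not measurements from the real forklift:

```python
# Why a stepper gives repeatable lift heights: steps map directly to
# distance. Both constants below are illustrative assumptions.
STEPS_PER_REV = 200      # common for small stepper motors
MM_PER_REV = 8.0         # assumed fork travel per motor revolution

def steps_for_height(height_mm):
    """Whole steps needed to raise the forks by height_mm."""
    return round(height_mm * STEPS_PER_REV / MM_PER_REV)
```

With those figures, lifting a block 40 mm is always exactly 1000 steps, which is what makes the height repeatable from one block to the next.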

I then built a prototype out of Lego Technic and tried it on some cardboard cubes under remote control. 


This was using a Red Robotics 6 wheel robot for testing that we were lucky enough to win in a raffle at the Pi Wars 2019 Mini Conference. 

The basic mechanism worked OK, but it was a bit bulky, so it may need a little bit of a redesign. I suspect we'll stick with Lego, though, as we have quite a bit of it and it's easy to work with. 

Feed The Fish

This is the challenge that has the most question marks. How to propel small projectiles into a box 40 cm away? 

Currently the plan is to use Nerf balls as the 'fish food'. Originally I had an idea to repurpose an old pair of tongs as the catapult. I created a prototype, but it just didn't have enough power to launch the Nerf balls far enough into the air. I may still go down this road, but for the moment this one has been put on the shelf whilst we rethink. 
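Whatever launcher we end up with, a back-of-envelope projectile calculation shows how little speed the 40 cm throw actually needs. Ignoring air drag, the speed for range R at launch angle theta is v = sqrt(g * R / sin(2 * theta)); the 40 cm distance is from the challenge, while the 45 degree angle is just the optimum-range assumption:

```python
# Launch speed needed to throw a ball a horizontal distance `range_m`,
# ignoring air resistance (a rough guide only - Nerf balls are draggy).
import math

def launch_speed(range_m, angle_deg=45.0, g=9.81):
    """Launch speed in m/s for a given range and launch angle."""
    return math.sqrt(g * range_m / math.sin(2 * math.radians(angle_deg)))
```

At 45 degrees that comes out at roughly 2 m/s for 40 cm - not much, which suggests the tongs failed on energy transfer to the ball rather than the target being unreasonably far.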

In terms of the actual process, the idea is that the fish bowl will have quite a large goldfish on the front that the camera can use for targeting. For distance we'll either use the distance sensor, or use the stop line as a reference in the camera. 
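Once the goldfish marker has been found as a bounding box (however the colour detection ends up working), the targeting step reduces to steering by the horizontal offset of the box centre from the image centre. A minimal sketch - the function name, box format, and dead-band threshold are all my own illustrative assumptions:

```python
# Steer towards a target given its bounding box in the camera frame.
# Box format is (min_row, min_col, max_row, max_col); `deadband` is the
# pixel tolerance within which we call the aim good enough.
def aim_correction(box, image_width, deadband=10):
    """Return 'left', 'right' or 'on_target' for a detected bounding box."""
    centre_x = (box[1] + box[3]) / 2
    offset = centre_x - image_width / 2
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "on_target"
```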

Then - launch the balls using whatever method we come up with. 

Up The Garden Path 

For this one, we're planning on using a mixture of camera vision to follow the line, and voice control to indicate which direction to take when we get to a junction. Plan B is to just use voice control - but that scores fewer points and will be slower. 

Although the course is given in advance, it doesn't sit right with me that a robot could be hard-coded to follow a particular course. I like to think that when it comes to a maze challenge, the robot would be able to cope even if it has no idea of the actual path on the day of the competition. 

So that's why we're adding the voice control - so the robot will find the junctions itself, and then get direction from us which path to take. 

For the voice control we'll probably use a Bluetooth headset for the microphone as I can't see how a microphone on the robot would be able to cope with the background noise. Unless the robot stops every time it gets to a junction and waits for instruction. 
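The stop-at-junction idea above can be sketched quite simply, assuming line detection yields a row of booleans (True where an image row looks line-coloured): a junction is guessed when the detected line is far wider than a plain line, at which point the robot would stop and wait for a spoken direction. The width figures here are illustrative assumptions:

```python
# Guess at a junction by line width: when a side path joins, the line
# under the camera suddenly gets much wider than normal.
def is_junction(line_row, normal_width=8, factor=2.5):
    """True if the detected line is wide enough to look like a junction.

    line_row    - booleans, True where a pixel matched the line colour
    normal_width - assumed pixel width of a plain line
    factor      - how many times wider counts as a junction
    """
    return sum(line_row) >= normal_width * factor
```

Stopping at each junction like this would also neatly sidestep the background-noise problem, since the robot only listens while stationary.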

We've experimented with the Google voice recognition API and got good results. However, as Svetlana is currently taking a part-time degree in Machine Learning, the ideal solution would be to build and train a simple voice recognition model ourselves. 
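Whichever recogniser wins, its output still has to be mapped onto the three directions the maze needs. On the Pi the recognition step itself might use the `SpeechRecognition` package (its `recognize_google(audio)` method wraps the Google API the post mentions); the command-matching sketch below and its keyword lists are my own assumptions:

```python
# Map a recognised phrase onto a drive command. The phrase lists are
# illustrative guesses at what we'd actually say to the robot.
COMMANDS = {
    "left": ("left", "turn left"),
    "right": ("right", "turn right"),
    "straight": ("straight", "straight on", "forward", "ahead"),
}

def parse_command(text):
    """Return 'left', 'right', 'straight', or None if nothing matched."""
    text = text.lower().strip()
    for command, phrases in COMMANDS.items():
        if any(phrase in text for phrase in phrases):
            return command
    return None
```

Keeping the vocabulary this small is also what would make a home-trained model realistic: recognising three command words is a far easier problem than general speech.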


So - that's the plan. But time will tell how successful we are. 




