Post-Competition Reflections

So… how did it go?

Well, we had ups and downs. At one stage, it felt like a lot of downs, but thankfully, things came good at the end.

We started with the “Blast Off!” challenge – the straight-ish line speed test. This was the one that we decided to use vision for this year, and it worked. We didn’t get the fastest times in the world, but they were respectable, and apart from the first run, successful. We would have done even better here if Angus had realised sooner that he could rescue Glitterator in that first run.

Next up was Spirit of Curiosity, and Angus anticipated doing really well here. Glitterator was made for that type of terrain, and should have aced it. Unfortunately… someone forgot to pack the AA battery charger, and this meant that our controller wasn’t quite at full charge when we started this challenge. All was going brilliantly, until all of a sudden Glitterator set off with a mind of its own, zooming away off the course. No matter how hard we tried, we couldn’t get the controller to stay paired, and so the robot kept running away. We had to abandon the challenge with zero samples collected. 😭

Next up was Pi Noon, and at this stage, Angus was convinced that it was something about a dead spot in the room rather than low batteries causing the problem. Since Pi Noon was in a different spot, he reasoned, it would be OK. And at first it was. Up against a beginner (sorry, I forget who), we started at a disadvantage of four balloons to five, but after some battle, Angus first equalised it, and then went ahead by one. But then controller woes struck again, and Glitterator zoomed off into the wall. Another challenge we had to abandon, and this one with only 18 seconds remaining, while we were ahead.

Hubble Telescope was next, and as this didn’t involve the games controller we were more confident. It had been tested multiple times at home, so we felt we were on safe ground. Unfortunately… we put Glitterator in the arena, pressed go, and all it wanted to do was travel backwards! It took time, but the realisation dawned that the front sensor was out of alignment. It probably got knocked en route, and we foolishly hadn’t tested it before the challenge. Once we figured that out, we were able to bash it back into position and get some points on the last of three runs, but unfortunately we muddled red and blue there, and so didn’t get a clear run anyway. 😩

All these challenges occurred during the period of judging for artistic and technical merit, so we raced from here to join the queue for this judging. Unfortunately, after spending forty minutes in the queue, we reached the front just as we had to leave for our next challenge: Space Invaders. We made our apologies, promising to be back, and made our way downstairs.

It was on to Canyons of Mars, the challenge we should have done well in last year, but just couldn’t start because of controller problems. This year, I started it from the keyboard, and for the most part it performed (using completely different code to last year, because this year Angus coded it). I think we might have had one clear run, and two that required rescues. I’m not sure, but it was enough to make us happy, and so we headed back to the artistic and technical judging in a better frame of mind.

We headed back to the artistic and technical merit judging, and managed to squeeze both in before our next challenge, though this meant our technical explanations were very brief!

I’d tried to find a vendor with a battery charger for sale, but the best I’d managed was one selling pre-charged AA batteries. Better than nothing, I thought, and we headed to this challenge with those. Not everything was great though… we had a second battery pack we were using to drive the laser and gun motors, and as we set up for this challenge, we realised that it wasn’t working 😱 Fortunately, Angus was able to adapt quickly, and while he had to abandon the laser, at least he could run the gun motors from the primary battery pack. From there, things sort of worked… we got some hits. Some bounced off the targets, but some targets went down. And then Glitterator went rogue again, heading off with a mind of its own. At least we got some points though!

Only one challenge remained: the obstacle course. This was the one Angus really wanted to do well in, but also one that needed a working controller. I did another round of the vendors, hoping to find either non-rechargeable batteries (these have a higher starting voltage) or an AA battery charger. Thankfully I discovered a vendor with non-rechargeables. If only we’d found them earlier!!!

With fresh batteries in, we headed to the course, full of hope. Angus managed three clean rounds, with a very respectable best time of 1:00. And this was good enough to win this challenge for the intermediate category. Of course he didn’t know this at the time, but when he found out, it totally made his day.

And Overall?

We were astounded to achieve 4th place in the intermediate category. Just think how well we could have done if things had gone to plan! (But I know everyone is saying that.) Seriously though, it just goes to show how having a go at everything really makes a difference.

Lessons Learned

  • Check that you’ve remembered all your equipment. I still can’t believe that we forgot the AA battery charger.
  • Non-rechargeables work better than rechargeables. We already knew this, but learned it again!
  • Check your sensors. I fully take responsibility for this failure. Why I didn’t, I don’t know. Nerves perhaps? I usually do this check without thinking about it. We still would have lost points because we messed up the colours, but we would have got more.
  • We didn’t quite get our hardware done by Christmas (which was our aim), but it wasn’t too long after. This gave us plenty of time to focus on the coding for the autonomous challenges. This in turn meant we were finished, bar tuning, two weeks before the competition. As you’ve just read, things still didn’t go to plan, but it was a much less stressful lead-up than we’ve had in the past.

Regrets

So many people I could have met but didn’t. A small number of people I sort of met but didn’t properly. I don’t cope well with crowds. I’m perfectly happy standing at the front of a lecture theatre full of 200 students and talking for an hour, but interacting with people one-on-one is really hard, and I went into semi-shutdown on the day and barely talked to anyone 😢 I really wish I’d chatted with more people.

Next Time

It took a lot of convincing from my family for me to agree to put in an application this year. I only did so because the kids wanted to do more, and they did. Angus did the bulk of it, but Erin made a great go at the colour challenge. Perhaps unsurprisingly, I really want a year off next year.

Angus however is inspired. We got home late on Sunday night, and on Monday morning I headed off to work before Angus and Erin were even awake. The minute I walked in the door after work, Angus was begging me to look at PiBorg motors, planning for his next robot. This morning (Wednesday) before work, he wanted “a spare 3A+, pi zero and the monster borg controller.” When I came home from work, he told me that he wanted to create a raspberry pi based games controller, using the bluetooth comms I’d set up between Glitter brain and Glitter brawns. (Something I’d already thought about myself, but not mentioned at all to him.) I think next time he will be in it on his own though, as a solo effort.

The question is: what category should he be targeting??

A weekend of testing and tweaking (and building a stand)

So, as indicated in my last blog post, we’re pretty happy with Glitterator III at this stage. The weekend was therefore spent testing and tweaking. Tests on the Nebula Challenge and Blast Off! were accepted as passes; the maze had a little bit of tweaking to smooth out some over-corrections. If we really wanted to, we could probably spend much more time tweaking and tuning, but enough is enough! Angus originally told me that I had to pass the NASA test: 60 repeated successes. He got bored after four though, so we left it at that 🤣

Except for the gun… Angus decided that it just wasn’t good enough: it was jamming too often. So after much effort at redesign on Saturday, and much frustration, he finally came up with a more reliable mechanism on Sunday. I think he’s still not completely happy, and in reality would prefer to be working with different ammunition than we have. (This year, we are using NERF balls.) It’s too late to change that now though!

So the latter part of the weekend was spent starting a build of the stand that will be covered in GLITTER (aka LIGHTS). Erin worked hard at the woodwork; now we just need to get it painted and actually attach the lights. When this is going to happen, I’m not sure!

The (blog) judging’s over but the work goes on…

There’s still tweaking to do on the robot, such as trying to keep it straighter as it travels the maze, and adjusting the gun feed to prevent occasional jams, but we’re pretty happy with what we have now, and we were a week ago too. That’s a good thing, because we didn’t really do much with Glitterator III last weekend: I was busy marking stuff, and the kids were out and about running a fell race for part of it.

With the pressure off in terms of performance, thoughts this week turned to aesthetics. Two years ago, in our first appearance, our robot Glitterator lived up to its name, with a blinding array of lights. Last year, we really didn’t have much glitter on Glitterator II (just a Blinkt, if I remember correctly). So this week I dug through my boxes to find just what I could.

I’ve been a bit constrained by the fact that with the Space theme, we’ve designed Glitterator III with strong reference to the Lost in Space chariot. It’s a reference rather than a replica, but lots of glittery lights wouldn’t really fit. (Yes, I know that this is the first time in the blog that I’ve mentioned Lost in Space, but the plan’s been there from the start – we’ve just been keeping it secret!)

Anyway, after digging through my stuff, I came up with the following:

  • 2* 5×5 RGB breakouts
  • unicorn pHAT
  • unicorn HAT HD
  • scroll pHAT
  • scroll pHAT HD
  • blinkt (multiple 🤭)
  • LED SHIM
  • 2m of APA102 lights (originally bought with the intention of using them on Glitterator II)
  • …and probably other stuff!

As I said though, I didn’t really want to overload Glitterator III with lights which would detract from its style. I could immediately find a use for the 5×5 RGB breakouts though: headlights! And that was all well and good if all I wanted to do was headlights, but then I thought: indicators! The problem here is that they were both sitting on the same I2C address. A bit of poking at the specification and example code revealed that there was an optional alternative address, and at first glance, it looked like there was a solder pad on the back of the board to use that address. I’m glad I bothered to check before I proceeded though! Turns out there’s a fine trace between the pads (my eyesight is shocking, I didn’t spot it), and in fact you need to cut this trace to use the second address. You can then solder the pads if you want to restore the original address… Anyway, that done, first small steps were achieved:

Next step was to go from solid blocks of colour to something looking like indicators… which was fairly straightforward. (And while I have a video of that, I won’t inflict it upon you, because it was terrible quality.) My only concern right now is whether those lights will interfere with the ToF sensor at the front (or elsewhere). I’d love for it to use the indicators when turning (e.g. in the maze), but need to check to see if it will work in practice. In any case, the code is all there, and it’s simple enough to turn on or off.
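
For anyone curious, the indicator flashing boils down to something like the sketch below. I’m assuming the Pimoroni rgbmatrix5x5 library here, and the address argument for the second (cut-trace) board is my best recollection rather than gospel – check the library’s own examples before copying this.

    # Minimal headlight/indicator sketch, assuming the Pimoroni rgbmatrix5x5
    # library. The address keyword (and alternate 0x77 address for the board
    # with the trace cut) are my best guess - check against your own docs!
    import time
    from rgbmatrix5x5 import RGBMatrix5x5

    left = RGBMatrix5x5()               # default I2C address (0x74)
    right = RGBMatrix5x5(address=0x77)  # the board with the trace cut

    def headlight(matrix):
        matrix.set_all(255, 255, 255)   # solid white block
        matrix.show()

    def indicate(matrix, flashes=5):
        for _ in range(flashes):
            matrix.set_all(255, 96, 0)  # amber on
            matrix.show()
            time.sleep(0.4)
            matrix.clear()              # off again
            matrix.show()
            time.sleep(0.4)

    headlight(left)
    indicate(right)                     # flash the right-hand indicator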

The only other source of glitter that I’ve actually added to the robot is the LED SHIM. Anything more would be overkill. But there was so much more stuff that I could use, if only I could work out how. And then I had a brainwave: Glitterator III needs a display stand! And it will be GLITTERY!!!

So… first up: LEDs. We used APA102 LEDs on the original Glitterator. I used a mote HAT to drive them that time, but I decided to go lower level this time, and just hang them off the GPIO pins. The problem is, they want 5V signals, but the GPIO pins deliver 3V3. So the first step was to get hold of a level shifter: I used a 74AHCT125 from The Pi Hut. After wiring that up, I grabbed tinue’s library from GitHub and hey presto: working lights, very pretty.
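
For reference, getting the strip lit was roughly the sketch below (not my exact code). It assumes tinue’s apa102-pi package; the constructor arguments are from memory, so check the library’s examples – and don’t forget the level shifter between the Pi and the strip.

    # Minimal APA102 test, assuming tinue's apa102-pi library
    # (pip install apa102-pi). Constructor arguments are from memory.
    import time
    from apa102_pi.driver import apa102

    NUM_LEDS = 120                      # however many LEDs your strip has
    strip = apa102.APA102(num_led=NUM_LEDS, global_brightness=10)

    try:
        # Chase a single red pixel along the strip
        for i in range(NUM_LEDS):
            strip.clear_strip()
            strip.set_pixel(i, 255, 0, 0)
            strip.show()
            time.sleep(0.02)
    finally:
        strip.clear_strip()
        strip.cleanup()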

I figure that these can form a border for a stand for Glitterator III. But then I started to think about how we could make it interactive… and I decided to see about adding some facial recognition. So I started following this tutorial, and it was disturbing just how quickly I could put together a system that fairly systematically distinguishes between members of our team. (Although for some bizarre reason, when it captures Peter’s face in profile, it detects his ear as a face and thinks that it’s Erin. Gave us all a good laugh.) The biggest problem is that the training algorithm is a bit memory hungry, and kept crashing until I increased the swap size. (And yes, I decreased it again when the training was complete – failing to do this would significantly decrease the life of the SD card.)
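
If you’re wondering what the recognition step looks like, it’s roughly the sketch below, using the face_recognition library. This isn’t the tutorial’s exact code, and “encodings.pickle” is just my name for whatever the training step produced.

    # Very simplified sketch of the recognition step (face_recognition library).
    # "encodings.pickle" is assumed to hold the encodings/names from training.
    import pickle
    import face_recognition

    with open("encodings.pickle", "rb") as f:
        data = pickle.load(f)           # {"encodings": [...], "names": [...]}

    image = face_recognition.load_image_file("test.jpg")
    boxes = face_recognition.face_locations(image)
    encodings = face_recognition.face_encodings(image, boxes)

    for encoding in encodings:
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            # Pick the name that matched the most stored encodings
            matched = [data["names"][i] for i, m in enumerate(matches) if m]
            name = max(set(matched), key=matched.count)
        print(name)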

The tutorial has code to show the video stream, put boxes around any faces it detects, and label ones that it recognises. After checking that it worked (apart from the Peter’s-ear-as-Erin issue), I removed the video stream display, hooked up a unicorn HAT HD, and had it display the name there instead. And then I realised that I finally have an excuse to buy an ubercorn! I’ve been wanting one of these since they were first released, if only for the PCB artwork. Since an ubercorn is basically a giant unicorn HAT HD, I should be able to swap them over and have MORE GLITTER!!! (My ubercorn should arrive today 🤭)

So, I had my LED strip being pretty, and my face recognition and reporting behaving, but each working independently (on different pis) at this point. Since this was for the stand, I guess there’s no reason why I couldn’t keep it separate, but then if I wanted them both interactive, I’d have to set up comms between them. Much easier to chuck everything on the one pi.

Oops! Both the LED strip control and the unicorn HAT HD were using SPI. Thankfully, Twitter (in the form of Phil Howard from Pimoroni) pointed out I could easily just use different pins for my strip. So I switched them over, put it all together, and yup, it works (with the help of a mini Black HAT Hack3r). I’m not sure it’s going to fit together quite so gracefully with the ubercorn (because of positioning issues), but I’m sure I’ll be able to figure something out. So now I’ll just need to construct a stand to accommodate Glitterator III and all those lights.

So, tasks for this weekend:

  • See if those headlights/indicators interfere with the ToF sensors
  • General testing/tweaking for all challenges
  • Build that stand and affix all my GLITTER plus the pi and camera
  • Check our team t-shirts. Hopefully they still fit the younger members 😬

Bullet Proofing

Last year, we had a problem with the maze.

There was nothing wrong with our maze code. The performance on the maze was maybe a bit slow, but it was thoroughly tested and worked every time.

The problem was, we had a “smart” programme selector for the robot, whereby it booted up, then you used the controller to select the mode and start it running.

Why was this a problem? Well, when we got to the maze, we discovered that right beside it was a robot football demo, in which the robots were using exactly the same types of controller. And we couldn’t get our controller to pair with our robot. So we couldn’t even get our robot started on the maze. 😭

One of the things we’ve done to combat this is to simply separate out all of our challenge code, so that we only launch the code specific to a given challenge. But the other issue that I’ve been worrying over is the bluetooth communication between our two Pis. Last year we used UART communication (physical wires), and although the communication failed sometimes during testing, I could never figure out why. Thankfully it worked fine on the day!

But bluetooth communication worries me… A couple of years ago we used a genuine PS3 controller, which uses bluetooth to connect to the pi. It worked perfectly during testing, but on the day, we had a number of drop-outs. That’s why we switched to the wifi-based controller last year. We think the dropouts were due to the high number of BT devices around causing resource contention, which meant the pi drew too much power and ultimately shut down peripherals. If this happens when our “brains” are talking to our “brawn” via BT, they’ll just stop communicating 😫

The way our bluetooth comms work is that the “brawns” launches a server which, when it receives a connection, listens for instructions and, when instructions are received, sends them to the motors. The “brains” searches for a server, and once it has found one and connected to it, sends the instructions. All good, so long as the comms link is maintained.

Yesterday I added some extra bullet-proofing around that comms link.

First up, if the “brains” can’t find a server, it waits a bit and retries (and keeps doing this until it finds one). This is useful because the server is running on a pi zero, whereas the “brains” is running on a 3A+, so the latter tends to be up and running faster than the server when booted.
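
In rough terms, the retry logic looks something like the sketch below. This is a PyBluez-flavoured sketch rather than our exact code, and the service UUID is made up for illustration.

    # Sketch of the "brains" retry-until-found logic (PyBluez-style RFCOMM;
    # the service UUID here is made up, and the real code may differ).
    import time
    import bluetooth

    BRAWN_UUID = "94f39d29-7d6d-437d-973b-fba39e49d4ee"   # illustrative only

    def connect_to_brawn():
        while True:
            services = bluetooth.find_service(uuid=BRAWN_UUID)
            if services:
                svc = services[0]
                sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                sock.connect((svc["host"], svc["port"]))
                return sock
            # The server (pi zero) probably isn't up yet - wait and retry
            time.sleep(2)

    sock = connect_to_brawn()
    sock.send("forward 50\n")   # whatever your instruction format is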

Next, if the “brawn” discovers the comms link is broken, it shuts down the motors and tries to re-establish the link. This prevents the robot heading off on its previous path while it’s not receiving instructions from the brain.

Finally, if the “brain” discovers the comms link is broken, it tries to re-establish the link. Unfortunately, it has no way of shutting down the motors, because if the comms link is down, it can’t talk to the brawns. One possible solution to this is to implement a “heartbeat”, whereby the “brains” must send a message every fixed period, and the “brawns” expects this message and assumes the comms link is dead otherwise. Another possibility is to have a physical connection between the two Pis, and set this high from the “brains” end when all is good, and low otherwise. The switch from high to low could then trigger an interrupt which would stop the motors. Or I could just cross my fingers, ignore the potential problem, and hope that it never arises… time limitations might lead to the latter approach!
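
For what it’s worth, the heartbeat idea on the “brawns” side would look roughly like the sketch below. None of this is written yet; handle_instruction, wait_for_new_connection and the motors object are all stand-ins for things that would need to exist.

    # Rough sketch of the heartbeat/watchdog idea on the "brawns" side.
    # The brains would have to send *something* at least every timeout period;
    # if nothing arrives in time, assume the link is dead and stop the motors.
    HEARTBEAT_TIMEOUT = 0.5   # seconds - illustrative value

    def brawn_loop(sock, motors):
        sock.settimeout(HEARTBEAT_TIMEOUT)
        while True:
            try:
                msg = sock.recv(1024)
                if not msg:
                    raise ConnectionError("link closed")
                handle_instruction(msg, motors)   # heartbeats count as traffic
            except Exception:
                motors.stop()                     # fail safe
                sock = wait_for_new_connection()  # try to re-establish the link
                sock.settimeout(HEARTBEAT_TIMEOUT)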

The Kids are Stars

We’re in the home stretch now, and while there’s still some fine tuning that can be done, I think we’re at the stage now where we could have a credible attempt at everything. The last weekend was spent testing and tweaking the code provided by Angus for the maze (Canyons of Mars) and Erin for the colour challenge (Hubble Telescope Nebula Challenge).

On Saturday we focused on the maze, trying to get it to turn more nicely. It was working well for the shorter sections of maze, but not so well for the longer sections. Angus explained his algorithm thus:

When the robot has a wall in front of it, find the side with the biggest distance. Save that distance, and turn towards that side until the front measures that distance (with a margin for error).

So what was the problem? Well, the sensors only provide new data periodically. The robot turns at a certain speed, which means that it covers approximately the same angle in each sample period. When the distance you want at the front is relatively short, you’ll “catch” that point as you turn. But beyond a certain distance, it’s quite possible that the point you want will fall between two sample points, so the robot just keeps turning…
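
To make that concrete, here’s a back-of-the-envelope illustration. The numbers are made up (not our actual turn rate or sensor timing), but with the wall 30cm ahead, the distance the front sensor reads to that wall grows roughly like 30/cos(angle) as the robot turns, so near the end of the turn it leaps between samples by far more than any sensible margin for error.

    # Back-of-the-envelope illustration of the sampling problem. Made-up
    # numbers: wall 30cm ahead, turning at 90 deg/s, sensor updating at 20Hz.
    import math

    wall = 30.0                      # cm to the wall we're currently facing
    step = 90.0 / 20.0               # degrees covered per sample = 4.5

    angle = 0.0
    while angle < 86.0:
        front = wall / math.cos(math.radians(angle))
        print(f"{angle:5.1f} deg -> front reads ~{front:6.1f} cm")
        angle += step
    # Near the end the reading jumps by well over 100cm per sample, so a target
    # like "stop when the front reads 200cm, give or take" can be skipped entirely.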

So with a bit of prompting, he came up with this alternative:

When the robot has a wall in front of it, find the side with the biggest distance, and turn in that direction until you have a clear space in front and the side that had the smaller distance before is aligned with a wall.

Our robot has two sensors on each side, which makes it easy to check alignment with the wall (you just want them roughly both the same distance from the wall). Because we’re going to be checking their alignment with the wall that was previously a short distance in front of us, we shouldn’t encounter that “small angle, big distance change” issue that was the problem before. So by the end of Saturday we had this:

Looking mostly good, but the poor motors were struggling to get the robot to turn on the carpet, and we know it won’t be carpet on the day! So on Sunday morning we took over the kitchen floor. The first attempt was a disaster: lots and lots of spinning too far. So we turned down the power to the motors for the turns, and it wasn’t too bad:

Still some problems with over-correction on the straights, and also we could probably speed it up a little other than on the turns, but it’s not bad. We’ll look at tweaking some parameters a bit more.
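
For the record, the revised turn routine boils down to something like the sketch below. This isn’t Angus’s actual code, and the sensor/motor helpers are stand-ins for whatever your robot provides.

    # Rough sketch of the revised turn logic (illustrative, not the real code).
    CLEAR_AHEAD = 40       # cm of clear space required in front (illustrative)
    ALIGN_TOLERANCE = 3    # cm difference allowed between the two side sensors

    def turn_at_wall(sensors, motors):
        # Turn towards the more open side; the side that *was* the short
        # distance is the one we expect to end up running parallel to.
        if sensors.side_distance("left") > sensors.side_distance("right"):
            direction, check_side = "left", "right"
        else:
            direction, check_side = "right", "left"

        motors.spin(direction, speed=0.4)
        while True:
            front = sensors.front_distance()
            a, b = sensors.side_pair(check_side)   # the two sensors on that side
            # Stop once there's clear space ahead AND that side is roughly
            # parallel to its wall (both sensors reading about the same).
            if front > CLEAR_AHEAD and abs(a - b) < ALIGN_TOLERANCE:
                break
        motors.stop()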

Next up was Erin’s colour code. She sat down with me the previous weekend and gave me the basic algorithm, which we then worked together to refine to this:

solve colours:
    goto_colour(red)
    goto_colour(blue)
    goto_colour(yellow)
    goto_colour(green)

goto_colour(colour):
    find_colour(colour)
    while not close enough:
        move_to_colour
    move_back

find_colour(colour):
    repeat:
        get_picture
        check_for_colour(colour)
        if can't see colour:
            if too close:
                move_back
            else:
                spin
        else:
            if the colour is centred in the image:
                return <- we are ready to go forward!
            else:
                turn to centre it

OK, so it’s not quite at the level of Python code, and there are still some hazy details, but she’s only used Scratch up to now, so I think it’s a pretty good effort. I didn’t dream of scaring her with the complexities of openCV, so deliberately avoided the “check_for_colour” section. Essentially though, that takes the image, applies a blurring filter, then a mask to find only the bits of that colour, and finally finds the contours of the bit of that colour (hopefully finding only one bit of that colour!)
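
For the curious, check_for_colour is essentially the sketch below (simplified; the HSV ranges shown are placeholder values – the real ones come out of the tuning program described next).

    # The essence of check_for_colour, sketched with OpenCV. The HSV ranges
    # here are placeholders; the real values come from the tuning program.
    import cv2
    import numpy as np

    COLOUR_RANGES = {
        "red": ((0, 120, 70), (10, 255, 255)),      # example values only!
        "green": ((40, 80, 60), (80, 255, 255)),
    }

    def check_for_colour(frame, colour):
        """Return (centre_x, area) of the biggest blob of that colour, or None."""
        blurred = cv2.GaussianBlur(frame, (11, 11), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        lower, upper = COLOUR_RANGES[colour]
        mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
        # Note: OpenCV 3 returns (image, contours, hierarchy) here instead
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        biggest = max(contours, key=cv2.contourArea)
        m = cv2.moments(biggest)
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), cv2.contourArea(biggest)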

The biggest improvement over last year’s code though has come about by having a colour tuning program, which allows me to tune the H values for each of the target colours (and S & V for all of the colours) under different lighting conditions – that’s the screenshot you see up above, and again just below. This then spits out these values to be pasted into our colour finder code, ensuring that we have the right values for the run. The trickiest bit is ensuring that they are set such that each colour is detected when looking at that colour, and not detected when not looking at it. Our black walls were being picked up as blue without careful tuning, and similarly, the cream carpet as yellow. (The original image is shown in the window with the S/V tuning, so we’re looking at green. The aim is to have that show in the green window but nowhere else… then do the same for each of the other colours.)
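
The tuning program itself is nothing fancy – essentially OpenCV trackbars over a live mask, something like the sketch below (simplified to a single colour, and using cv2.VideoCapture here where the robot really uses the Pi camera).

    # Simplified sketch of the colour tuning idea: adjust HSV bounds with
    # trackbars until only the target colour survives in the mask.
    import cv2

    def nothing(_):
        pass

    cv2.namedWindow("tuning")
    for name, maximum, initial in [("H lo", 179, 0), ("H hi", 179, 179),
                                   ("S lo", 255, 0), ("S hi", 255, 255),
                                   ("V lo", 255, 0), ("V hi", 255, 255)]:
        cv2.createTrackbar(name, "tuning", initial, maximum, nothing)

    cap = cv2.VideoCapture(0)            # the robot uses the Pi camera instead
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lo = tuple(cv2.getTrackbarPos(n, "tuning") for n in ("H lo", "S lo", "V lo"))
        hi = tuple(cv2.getTrackbarPos(n, "tuning") for n in ("H hi", "S hi", "V hi"))
        mask = cv2.inRange(hsv, lo, hi)
        cv2.imshow("tuning", cv2.bitwise_and(frame, frame, mask=mask))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()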

I’d mentioned that we’d done some basic checking the previous weekend, and it worked for red and green on our test pi, which was not on the robot, because Angus was busy using that. Well, this week we got some nice black walls, and some yellow and blue targets too, so it was time to test it on board the robot. Initially we set it up in the kitchen, since we’d been testing the maze in there, but we soon discovered a problem: there’s no floor on our test rig, and the kitchen lino is a non-solid colour, which includes quite a bit of yellow. 🥺 It worked reasonably well though apart from this… just didn’t get quite as close to the colours as I wanted. On closer inspection I discovered I’d simply set the distance parameter wrongly for this. ☹️ Still, that was an easy fix… yellow not so much.

I didn’t think I’d do much better upstairs, because the carpet there is cream, but decided to give it a try. To my surprise, it was relatively easy to distinguish between the yellow and cream (and in fact the biggest problem was the reflection of red onto the carpet, creating a small pool of red in front of the target). So fairly quickly we were able to achieve this:

What happened here? Well, maybe I tuned that distance parameter a little too far, resulting in a bit of aggression from the robot. Also, once the robot had bashed my red card, the shadows falling on it made it look blue, which is why it went for it twice. So a little more tuning resulted in this:

Obviously still not perfect, but getting better!

So what’s our status?

Well, we’ve been pretty right for the remote control challenges for some time now. The one thing missing there is the laser targeter we’d really like for the space invaders challenge. We do have a laser, but at the moment I think it’s not got the right resistor on it – when we attach it to the battery pack, it trips the system and all power shuts down. Not good! Still, I think we’ll be able to solve that in the next couple of days. We also need to find our Pi Noon / sample collection attachment. It’s around somewhere 🙄🤭

In terms of autonomous challenges, as you’ve seen above, Canyons of Mars and Hubble Telescope Nebula Challenge are working but could be improved. We’ll probably try to tweak them a bit more before the competition, but the current clunky-but-working versions are backed up in our git repository, so in the worst case we’ll roll them out. The Blast-Off code was completed over half term, but since then I don’t even know what happened to our test track. Hopefully it was just put away somewhere, rather than thrown away, because it wouldn’t hurt to test it again…

What I’d really like to do at this stage is work on the aesthetics. It’s finding time that’s the issue though… 😢

More refinement…

As I said, the next stage of the coding was the maze for Angus, and the colour detection for Erin and myself (or at least in the latter case, the testing thereof). Angus painted a lot of maze walls on Friday, and on Saturday assembled (part of) the maze. Only part, because that’s really as much as fits in the space we have. And luckily we’re in the middle of redecorating a room, so it’s cleared of furniture and we have that space.

Redecorating is now unofficially on hold for a month… not that we have time for it after robot development/testing anyway 😂

So Saturday saw Angus trying out his maze code. He wasn’t content to use my code, which saw us get a clear run on our first attempt two years ago. He wanted to do it himself. And given how well he’d done on the line following, I thought “fair enough!” Unfortunately though, things didn’t go as planned…

We got past this…

…but at the second turn, things were consistently going fubar, with the sensor connections coming loose. It might have been something to do with the rapid starts/stops that were part of his solution, but still, it was something that needed addressing. So I spent a few hours changing headers and DuPont connectors to soldered connections, and another try was had…

Better progress (it made it past the third corner), but then things did not look good. Unfortunately school’s back in this week, so there will be no chance to look at it until Friday after school at the earliest (*sigh* yes, there are after-school activities every other day!), and we have a busy weekend this weekend too, with a family birthday and a school band performance.

Still, fingers crossed that we have good news by next Monday, and if not, there’s always my code to fall back on 😏

As for the code I wrote, based on Erin’s instructions: it’s been tested on a testbed, using just red and green detection, and looks promising. Just a testbed because Angus hogged the robot last weekend, and only red and green because I didn’t have blue and yellow test panels! I’m aiming to have a full test set this weekend, and hopefully I’ll also have a chance to use the robot for testing then.

Half term action

Earlier this week I posted the video of Angus’s first attempt at the straight-ish line speed test. Over the course of the next few days, he refined it further, first producing this:

And then this:

Very nice! We do need to make use of at least one of those time of flight sensors though, to make sure that it doesn’t crash at the end 🤣

Next up for him was the maze… but to do this, he really needed some way to test it, so while I was at work yesterday, the rest of the family headed off to buy some hardware. By the time I came home yesterday, there was work in progress:

Then this morning, a basic corridor was set up to start testing. After a few hiccups, that seems to be working, including a turn at the end, and now he’s ready for a more complex maze. So tomorrow morning, that will be one of the first tasks.

Meanwhile, being the weekend, I was involved today, but other than helping to troubleshoot the hiccups (which were just slight logic errors – pluses instead of minuses), I was working with Erin. Erin’s getting quite proficient at her Scratch coding these days, but hasn’t progressed to Python yet. So we agreed that, for the colour challenge, she should tell me what she wants the robot to do as if she were writing Scratch blocks, and then I would implement it in Python.

This is what she came up with:

main block:
    goto_colour(red)
    goto_colour(blue)
    goto_colour(yellow)
    goto_colour(green)
    stop_motors

goto_colour(colour):
    find_colour(colour)
    repeat:
        move_forward
    until closer than 10cm
    stop

find_colour(colour):
    repeat:
        move_backward
    until (at least 50cm from wall)
    stop
    repeat:
        turn left
    until (see colour)
    stop
Now it’s up to me to implement that in Python. Well actually, I think I have, but I don’t want to try testing until tomorrow, in natural light. Also, we’ll have to do a simplified test for now, because I’ve only got red and green boards. 😆

With the infrastructure in place…

… the fun begins! It’s half term this week, but I don’t have any time off work. So it was high on my priority list to get all the infrastructure code in place by the end of last weekend, so that Angus could do some actual code development this week. After a day off yesterday, he got to work on “Blast Off!” today. And this is what he’s achieved since lunch time:

This is using the code I wrote on the weekend, which takes a frame from the camera and finds the mid-point of the white at a certain (programmer-specified) distance from the robot. He then uses that information to adjust the heading of the robot. (The robot gets confused at the end when it loses the line, then when searching, sees the white tag on the fleece Angus has put beside the test track.)
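
The heading adjustment itself is just a proportional nudge based on how far the line’s midpoint sits from the centre of the frame. Roughly the sketch below – the gain, frame width and motor helper are placeholders, not our actual values or code.

    # Rough sketch of steering from the line midpoint (placeholder values).
    FRAME_WIDTH = 320
    GAIN = 0.004          # how hard to steer per pixel of error (illustrative)
    BASE_SPEED = 0.6      # forward speed, 0..1

    def heading_correction(midpoint_x):
        error = midpoint_x - FRAME_WIDTH / 2       # +ve: line is to the right
        turn = max(-0.3, min(0.3, GAIN * error))   # clamp the correction
        return BASE_SPEED + turn, BASE_SPEED - turn   # left, right speeds

    # e.g. set_motor_speeds(*heading_correction(mid_x))  # hypothetical helper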

Camera work underway!

After spending a lot of time last weekend compiling openCV to work with Python 3 on my raspberry pi, this weekend (or rather, today, since I was working yesterday) has been devoted mostly to using it.

For the “Blast off!” challenge this year we’re using the camera, rather than the ToF sensors that we’ve used in the past. This might be a mistake, as this was one challenge we’ve done pretty well at previously, but our sensors are sitting too high to be useful in this challenge this year. Rather than alter the physical design, we decided to go for a different approach. Also, this gives Angus a chance to solve the problem, and since he wants to write code for the robot this year, this is a GOOD THING ™ So it’s been set up to find where the line is, a certain distance in front of the robot. Next step, adjust the robot’s heading based upon that position. It’s half term this week, so hopefully Angus will have some success with this.
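
In rough terms, “find where the line is, a certain distance in front” means thresholding the frame and taking the midpoint of the white pixels in one particular row of the image (the row maps to a distance ahead of the robot). A simplified sketch, not our exact code:

    # Simplified sketch of "where is the white line, N pixels ahead?"
    import cv2
    import numpy as np

    def line_midpoint(frame, row):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, white = cv2.threshold(grey, 200, 255, cv2.THRESH_BINARY)
        xs = np.where(white[row] > 0)[0]   # columns that are white in that row
        if len(xs) == 0:
            return None                    # lost the line
        return int(xs.mean())              # midpoint of the white section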

The other thing we’re using the camera for is the “Hubble Telescope Nebula Challenge” – or in other words, colour detection. This challenge is one that Angus won’t be coding, because Erin wanted some coding to do. However she’s not quite up to writing Python yet (though getting pretty proficient at Scratch), so she’ll be pair programming with me, telling me the algorithm, which I will translate into code. Last year I rushed together a solution for the equivalent challenge in the last week before the competition (we were running considerably behind schedule due to the whole family being extremely unwell during the Christmas holidays), and it partially worked. The main problem was actually “seeing” the colours – the lighting conditions on the day were considerably different to the conditions under which I had tested, and if I remember correctly, we failed to detect one of the colours. Anyway, the key thing I’ve been working on so far for this is some code so that we can calibrate our colour detection on the day. I think that’s working now, so when I get a chance, I’ll sit down with Erin to write the main code for it. (Sadly, while the kids have got half term, there’s no time off for me!)

Last but not least, I’ve spent some time bulletproofing the bluetooth communications between my Pi Zero W and Pi 3A+. Up until now, if the “brains” (3A+) closed the communication link, the “brawns” Zero W would drop out too. Now, the “brawns” gets ready to accept a new connection from the “brains” if this happens. This means that when I’m testing the autonomous control code, constantly starting and stopping it, I don’t need to restart the comms link at both ends every time. And hopefully this will make it more robust in practice as well. (I probably still need to add more bulletproofing on the “brains” end, but have bookmarked that for later attention.)
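
The change on the “brawns” side amounts to wrapping the accept/receive logic in an outer loop, along the lines of the sketch below. This is PyBluez-flavoured and not the exact code; the UUID and service name are made up, and handle_instruction/stop_motors are stand-ins.

    # Sketch of the "brawns" end re-accepting connections after a drop-out.
    import bluetooth

    BRAWN_UUID = "94f39d29-7d6d-437d-973b-fba39e49d4ee"   # illustrative only

    server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    server.bind(("", bluetooth.PORT_ANY))
    server.listen(1)
    bluetooth.advertise_service(server, "GlitterBrawn", service_id=BRAWN_UUID,
                                service_classes=[BRAWN_UUID, bluetooth.SERIAL_PORT_CLASS],
                                profiles=[bluetooth.SERIAL_PORT_PROFILE])

    while True:                               # outer loop: keep accepting
        client, addr = server.accept()
        try:
            while True:
                data = client.recv(1024)
                if not data:
                    break                     # brains closed the link cleanly
                handle_instruction(data)      # stand-in: drive the motors
        except OSError:
            pass                              # link dropped mid-stream
        finally:
            stop_motors()                     # stand-in: don't keep driving blind
            client.close()                    # then go round and wait again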

