A few weeks ago my parents came to visit, and my mother mentioned that she recently started making a jigsaw puzzle. Now, I happen to like jigsaw puzzles as well, but haven’t actually made one for fifteen years or so. Of course I can decide to start one, but I have too many hobbies already, and I really should quit one hobby before I start another. Alternatively, I can try to combine two hobbies and ‘hit two flies in one blow’ (Dutch saying). That set me thinking: what if I combine making a jigsaw puzzle with my Arduino/sensor/IoT/RaspberryPi experiments? Can I perhaps build a RaspberryPi-based robot (or a team of them) to make a jigsaw puzzle?

I instantly loved this idea! How cool is it to have some robots driving across the floor of my apartment, slowly putting the pieces of a jigsaw puzzle together! Besides the obvious ‘cool’ factor, this experiment also requires a broad range of skills, all of which I either have or can learn. I need some math to determine if two pieces fit together, sensors to make sure the robots don’t collide, computer vision to detect the shape of a piece from a photo of it, AI to make the robots collaborate, technical stuff to build the motors/wheels etc. of the robot, and more technical stuff to actually pick up a piece and connect it to another. Below are a few ideas, for future reference …

Shape of the pieces

I would like to have the robots drive around, with all the pieces of the puzzle scattered randomly around the room. Once a robot drives over/close to a piece, it could photograph the piece. The color of my floor is quite uniform, so I can probably filter out the background and extract the outline of the piece (a convex hull alone would smooth away the tabs and blanks, so I really need the full contour). Then it would have to figure out if this is the piece that it is looking for (assuming the robot is looking for one particular piece). Discovering if two given pieces fit together is probably not so easy, but I have some ideas. One idea is to find the center of gravity of a piece, use that as an ‘origin’, and choose an axis through that point. Then, for each point on the contour, consider it a vector (relative to the origin) and determine the angle it makes with the axis chosen earlier. This converts the boundary into a graph, and hopefully I can compare two graphs by seeing if they have ‘opposite curves’ on about one-fourth of the graph (assuming a jigsaw piece has four sides). Whatever approach I choose for comparing pieces, I can develop the entire algorithm on my laptop before deploying it on the robots.
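A minimal sketch of that angle-and-radius idea, using the mean of the contour points as a stand-in for the center of gravity (an approximation; the function name and the toy square are just illustrations):

```python
import math

def polar_profile(contour):
    """Convert a closed contour (a list of (x, y) points) into a list of
    (angle, radius) pairs relative to the centroid of the points.
    The angle is measured against the positive x-axis through that
    'origin'; plotting radius against angle gives the boundary graph."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    profile = []
    for x, y in contour:
        dx, dy = x - cx, y - cy
        angle = math.atan2(dy, dx)    # angle with the chosen axis
        radius = math.hypot(dx, dy)   # distance from the 'origin'
        profile.append((angle, radius))
    return profile

# Toy 'piece': a unit square, whose corners all sit sqrt(2)/2 from the center.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polar_profile(square))
```

Comparing two pieces would then mean sliding one profile along the other and looking for a stretch where the curves mirror each other.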

The robots

There is a lot of robotics stuff available on the net, so I’m hoping that getting a driving RaspberryPi is not too difficult. At least I have some experience with Arduino now, and back in the day at Leiden University I made a Lego Mindstorms robot that drove around a maze. I don’t know yet how I am going to implement the collision avoidance, but I’ll make it pretty restrictive, because I’m worried that if one robot hits another one that is photographing a piece, determining the boundary of that piece will fail. Picking up pieces of the puzzle is probably not too difficult with a suction cup, but I do expect a big challenge when actually fitting that piece to the current puzzle. I hope I can reuse/adapt the algorithm for determining the shape of a piece. Now that I think of it, for simplicity it is probably a good idea to build the robot in the form of a container-lifting crane like those used in the docks (‘Straddle Carrier’, see picture below).

Straddle Carrier, Kotka, Finland, 2008

(source: konecranes.ind)
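As a first stab at that restrictive collision avoidance, a sketch that averages the last few distance readings (values like an ultrasonic sensor would give; the threshold and window size are guesses that would need tuning on the real robot):

```python
from collections import deque

SAFE_DISTANCE_CM = 30   # generous margin, since I want it restrictive
WINDOW = 5              # smooth out noisy sensor readings

class CollisionGuard:
    def __init__(self):
        self.readings = deque(maxlen=WINDOW)

    def update(self, distance_cm):
        """Feed in one distance reading and return True if the robot
        should stop, based on the average of the recent readings."""
        self.readings.append(distance_cm)
        avg = sum(self.readings) / len(self.readings)
        return avg < SAFE_DISTANCE_CM

guard = CollisionGuard()
for d in [120, 90, 40, 25, 20, 18, 15]:   # another robot approaching
    stop = guard.update(d)
print(stop)   # the averaged distance has dropped below the margin: True
```

Averaging makes the robot a bit slower to react, but also keeps one noisy reading from stopping it in the middle of the floor.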

AI: working together

If I get to this stage, I will already be happy, because I will have all (or most of) the necessary tools to have one robot make a puzzle. But if I am going to make a big puzzle, having a team of robots will speed things up considerably. Making a few copies of the same robot is quickly done, so with some collision detection the team should be up and running. But each robot would work independently, and I would prefer having some cooperation. At this point, I don’t know yet what this ‘cooperation’ will be. Options are:

  1. If I have n robots, I can have them jointly searching for n pieces and notify each other when one is found
  2. I can have separate ‘searching robots’ and ‘placing robots’, with the latter asking the former for the location of a particular piece
  3. I can make each robot share all knowledge with all others, or make them slightly selfish
  4. I could make them work more human-like, with a very limited memory and very little overview of the complete set of unfitted pieces
  5. I could have each robot make its own section of the puzzle, and connect the sections once that is possible
  6. I could try to make the contour of the puzzle first, followed by the interior (that’s the way most humans do it)
  7. Maybe I can sort the unfitted pieces based on shape and/or color to speed up searching (again, this is how I make a puzzle)
  8. Perhaps I can/should print some code in invisible ink on each piece of the puzzle to identify it
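The first option can be played with before any robot exists. Below is a deliberately simplified, single-process simulation: real robots would exchange messages over the network, but here a shared dict stands in for the ‘notify each other’ part (all names and numbers are made up):

```python
import random

def team_search(pieces_on_floor, wanted, n_robots=3, seed=1):
    """Toy simulation of option 1: n robots jointly look for the wanted
    pieces; the shared 'found' dict plays the role of the notification
    each robot would broadcast to the team."""
    random.seed(seed)
    found = {}                    # piece -> robot that found it (shared)
    remaining = set(wanted)       # pieces nobody has reported yet
    robots = ["robot-%d" % i for i in range(n_robots)]
    while remaining:
        for robot in robots:
            piece = random.choice(pieces_on_floor)  # wander to a random piece
            if piece in remaining:
                remaining.discard(piece)            # 'notify' the whole team
                found[piece] = robot
    return found

print(team_search(list(range(20)), wanted=[3, 7, 11]))
```

Swapping in the other cooperation strategies should mostly mean changing what goes into the shared knowledge and who is allowed to read it.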

This requires further thought, but I’d like to make them mimic human behavior, and I still need a good reason why puzzle-making calls for a team of robots in the first place.

Visualization

I would also like to make a cool video of the experiment once it is running. Maybe footage from a camera attached to my ceiling, combined with information on what piece each robot is looking for, what they communicate to each other, what they know of the environment and the puzzle, etc. This is not very complicated, as long as I remember to log everything I need during the experiment; I can then put together the complete visualization once the experiment is done.