In a recent post I described a really fun idea: can I make a Raspberry Pi-based robot that can solve a jigsaw puzzle? The robot is supposed to know the current state of the puzzle, and has to drive around looking for the piece that is most likely to fit a particular open position in the puzzle. Once the piece is found, the robot should pick it up, drive to the puzzle, and attach the piece to the puzzle.

Today I started with the algorithmic part of the project, which should ultimately have the following functionality:

  • Find the shape of a jigsaw piece from a picture.
  • Determine how well two specific ‘sides’ of two pieces fit together, based on the shapes of the pieces.
  • An extension of the previous task: if the current open position in the puzzle borders k (1 ≤ k ≤ 4) already fitted pieces, determine how well an unfitted piece fits the k bordering pieces.
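As a rough sketch of how that last point could work once pairwise side-matching exists: `combined_fit` below is my own placeholder (not a settled design), and taking the minimum over the k side scores is just one plausible choice, on the reasoning that one badly fitting side should disqualify a candidate piece.

```python
def combined_fit(side_scores):
    """Aggregate per-side fit scores against the k bordering pieces.

    side_scores: one score per already-fitted neighbour (1 <= k <= 4),
    where higher means a better shape match. Taking the minimum means a
    single bad side rules the candidate out, even if the others fit well.
    """
    assert 1 <= len(side_scores) <= 4
    return min(side_scores)
```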

I’m hoping that this is sufficient to make the puzzle, although I did make some assumptions and simplifications:

  • I’m not using any color information of the jigsaw puzzle or pieces. The algorithm is purely based on the shape of the pieces (in fact, I am going to make the puzzle with the pieces face-down).
  • I’m assuming that a puzzle fits together in a unique way. I can imagine that this is not a valid assumption, but if I am able to build a robot that makes a well-fitting puzzle which does not reflect the picture on the box, then I am already super happy!
  • I’m assuming that each piece of the finished puzzle has 2, 3 or 4 neighbours. This is true for the puzzle I am using for testing.
  • Looking forward, I will probably also assume that a jigsaw piece has 4 borders that I can identify.

A nice aspect of the algorithmic part is that I can develop it using my laptop, without actually building the robot yet.

Extracting the piece

But first things first, so let’s figure out how to find the shape of a piece from a picture. The figure below shows the piece that I used for testing – note that it is the back of the piece. I took the picture against a white paper background (it looks more like gray in the image below), hoping that the contrast would be high enough to distinguish the piece from the background.

There are many algorithms available that may be able to separate the piece from the background. The first one I tried is called Active Contours, and it produces a black-and-white image with the detected (filled) contour in white and the background in black. The figure below shows the result after some experimentation with the parameters, and it already looks like a proper jigsaw piece. However, the shadows in the original image seem to make it difficult for the method to detect the shape.
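I won’t claim this is the exact implementation used above, but scikit-image’s morphological Chan-Vese method is one active-contour variant that yields exactly this kind of filled binary mask. A minimal sketch on a synthetic stand-in image (a bright square “piece” on a darker background):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic stand-in for the photo: a bright "piece" on a darker background.
img = np.full((80, 80), 0.2)
img[20:60, 20:60] = 0.9

# Evolve the contour for a fixed number of iterations (passed positionally,
# as the keyword name differs across scikit-image versions). The result is
# a binary image: one label for the piece, the other for the background.
mask = morphological_chan_vese(img, 35, init_level_set="checkerboard")
```

On the real photo, the number of iterations and the smoothing strength are the parameters that would need the experimentation mentioned above.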

My next step was to convert the image from RGB to HSV (a different color coordinate system), and look for a channel with better contrast between the piece and the background. The three channels are shown in the picture below, and the second channel (S) seems to have the best contrast. And indeed, if we apply active contours again, we see a much cleaner piece. I highlighted the boundary of the piece in green, and in the next step I would like to use this boundary to determine how well two given pieces fit together. Looking at the boundary, I am a bit worried that it is too jagged for that.
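The channel comparison is easy to reproduce with scikit-image’s `rgb2hsv`. In this sketch the colors are made up for illustration: the “piece” pixel has the same brightness as the background but a different saturation, which is exactly the situation where the S channel wins over V.

```python
import numpy as np
from skimage.color import rgb2hsv

# Synthetic RGB image: an unsaturated near-white background with a
# "piece" that is equally bright but noticeably more saturated.
img = np.full((60, 60, 3), 0.9)
img[15:45, 15:45] = (0.5, 0.7, 0.9)

hsv = rgb2hsv(img)
s, v = hsv[..., 1], hsv[..., 2]

# Compare piece-vs-background contrast in the S and V channels by
# sampling one pixel inside the piece and one in the background.
contrast_s = abs(s[30, 30] - s[5, 5])
contrast_v = abs(v[30, 30] - v[5, 5])
```

Here the saturation channel carries all the contrast, so thresholding or active contours on S would separate the piece cleanly.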


Next I tried two edge detection methods, Sobel and Canny, of which the latter worked best. The figure below shows the result of the Canny edge detector: a nice clean edge that follows the shape of the piece very closely. This picture required some trial-and-error experimentation with the parameters; in particular, the Gaussian smoothing parameter and the grayscale threshold helped a lot to get such a clean shape.
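Assuming a scikit-image-style pipeline, the knobs mentioned above correspond directly to the `sigma` (Gaussian smoothing) and threshold parameters of `skimage.feature.canny`. The exact values here are illustrative, not the ones used on the real photo:

```python
import numpy as np
from skimage.feature import canny

# Synthetic grayscale stand-in: a bright square "piece" on a darker background.
img = np.full((80, 80), 0.2)
img[20:60, 20:60] = 0.9

# sigma controls the Gaussian pre-smoothing; the two thresholds control
# the hysteresis step that decides which gradient responses count as edges.
edges = canny(img, sigma=2.0, low_threshold=0.1, high_threshold=0.3)
```

The output is a boolean image with thin `True` pixels along the piece outline, which is what the figure above shows for the real piece.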

The experiments with the Canny edge detector gave me an idea: if I first smooth the (high-contrast) picture by applying a Gaussian filter, and then convert it to black and white using a threshold, this may give me a nice boundary. And indeed, it does:
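The smooth-then-threshold idea can be sketched in a few lines; `find_contours` then traces the green boundary as a list of points. As before, the image and the parameter values are stand-ins:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.measure import find_contours

# Synthetic stand-in: a bright square "piece" on a darker background.
img = np.full((80, 80), 0.2)
img[20:60, 20:60] = 0.9

# 1) Gaussian smoothing suppresses the jagged noise along the silhouette.
smooth = gaussian(img, sigma=2.0)

# 2) A global threshold converts the smoothed image to black and white.
bw = smooth > 0.55

# 3) Trace the boundary of the white region as an (N, 2) array of points.
boundary = find_contours(bw.astype(float), 0.5)[0]
```

That `boundary` array is the representation I would carry forward into the shape-matching step.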

The boundary is a lot smoother than before, and also the process is a lot faster than the active contours method. Comparing the green boundary to the original picture, the protrusions on the left/right and the inlets on the top/bottom are quite accurate, but the straight edges have lost some of their sharpness (especially near the corners). I don’t know yet if this is going to be a problem, but at least the boundary highlighted in the picture above is good enough to continue to the next step in the process. Most likely, I will revisit the steps above at a later stage to make it more robust.