Following up on yesterday’s post about Quick, Draw!, I have been playing with it a bit more. One thing I discovered is that you can examine any of your drawings. For each of them, you can see the other things the AI considered while trying to recognise your scribbles, as well as other people’s attempts that it recognised. Another is that it can be frustrating. I’m sure the carrot I drew this morning was as good as yesterday’s, but today it gave up and suggested that I might have been drawing a clarinet! So far, the best drawing surface I’ve found is my iPad Pro, though using my finger rather than the Apple Pencil.
One further observation: I realise that I am learning too. There is a limited range of subjects, so the same things come up again and again. The first time I tried to draw a pig, I think the guess was “monkey”. The next time, I recalled the characteristics I had observed in other people’s drawings, and I’ve nailed pig every time since. I wonder how this plays into the learning process of the neural network? I’m no longer drawing what comes to mind when I see the target word, but what I think the computer will recognise. Essentially, I’m adapting my strategy to communicate, which makes for a fun game, but how does that affect the learning aim of the program?