This blog is for teaching myself Machine Learning. I took a class called Nature of Code with Dan Shiffman and became interested in Machine Learning. I also worked at AMNH on the machine learning table for the 'Our Senses' exhibition. To be honest, I have not dug deep into ML yet, so I want to understand it myself through my project called 'MLwand'. This post is purely my study log, so it may be wrong or naive. But hopefully, by the end of this blog, I will understand the process well enough to see the next possibilities.
But I soon realized that positional tracking is really difficult to get! So instead I would like to classify different gestures and train on them, like Wekinator or Teachable Machine.
A vector format with sequences of images might be really interesting. But for now, I wanted to use just images and train on them.
These are the images from Processing and the sensor. It needs some calibration at first, but it seems to work very well.
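A minimal sketch of the kind of calibration I mean: average the first few readings while the sensor is at rest to get a baseline, then subtract that baseline from every later reading. The three-axis values and numbers here are made up for illustration, not my actual sensor output.

```python
# Hypothetical calibration sketch: resting readings give a baseline
# offset that we subtract from later readings.

def calibrate(samples):
    """Average resting-state samples into a per-axis baseline."""
    n = len(samples)
    return [sum(axis) / n for axis in zip(*samples)]

def apply_offset(reading, baseline):
    """Subtract the baseline so a resting sensor reads about (0, 0, 0)."""
    return [r - b for r, b in zip(reading, baseline)]

# Example: three resting readings, then one real reading while moving.
resting = [[0.1, 0.2, 9.8], [0.1, 0.2, 9.8], [0.1, 0.2, 9.8]]
baseline = calibrate(resting)
print(apply_offset([0.6, 0.2, 9.8], baseline))  # roughly [0.5, 0.0, 0.0]
```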
A problem I did not expect is that I cannot use a model trained on random photographs, because my drawings look so different from them: compared to webcam images, mine are too plain and graphical.
So I need to train my own model from a bunch of these images, like MNIST, or find a more illustrative pre-trained model or an MNIST model.
I will look into Keras (a higher-level API on top of TensorFlow) and KNN or Convolutional Networks.
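To make the KNN idea concrete: each training image becomes a flat list of pixel values with a gesture label, and a new image gets the majority label of its nearest neighbors. This is only a toy sketch of the general technique, not my actual pipeline; the tiny 2x2 "images" and the gesture names are made up.

```python
# Toy KNN over flattened image vectors (labels and pixels are invented).
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (pixels, label) pairs; query: list of pixels."""
    dists = sorted(
        (math.dist(pixels, query), label) for pixels, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [
    ([0, 0, 1, 1], "swipe"), ([0, 0, 1, 0], "swipe"), ([0, 1, 1, 1], "swipe"),
    ([1, 1, 0, 0], "circle"), ([1, 0, 0, 0], "circle"), ([1, 1, 0, 1], "circle"),
]
print(knn_classify(train, [0, 0, 1, 1]))  # -> swipe
```

A convolutional network would replace the raw pixel distance with learned features, which is why it usually works much better on real images.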
- Sensor data -> web
- Draw them in P5.js
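The two steps above could be sketched like this, assuming the sensor data reaches the browser as comma-separated "x,y" lines and that the values sit in a -1..1 range (both assumptions are mine, not measured). The pure mapping function could then be called inside a p5.js draw() loop to plot each point.

```javascript
// Map a sensor value from [lo, hi] to [0, size] canvas pixels.
function toCanvas(value, lo, hi, size) {
  return ((value - lo) / (hi - lo)) * size;
}

// Parse one hypothetical "x,y" sensor line into canvas coordinates.
function sensorLineToPoint(line, width, height) {
  const [x, y] = line.split(",").map(Number);
  return {
    x: toCanvas(x, -1, 1, width),
    y: toCanvas(y, -1, 1, height),
  };
}

console.log(sensorLineToPoint("0,0", 400, 400)); // center of a 400x400 canvas
```

In a p5.js sketch this would be something like `const p = sensorLineToPoint(latestLine, width, height); point(p.x, p.y);` inside `draw()`.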