First, since steppers are great at positioning (there is no need for position feedback), you should certainly limit their movement, as you've said yourself. I am not sure how the motor shaft is engineered right now, but if it is rigidly coupled to the motor, letting it continue spinning would risk damaging the equipment.
Next, the 200 ms transport delay in your sensor will probably be too slow; to work around it, you would need to slow everything down a lot in order to slow down the ball itself. Similar to what Rocket Surgeon said, you should simplify the image processing: calculate the path only once, and then quickly calculate only the position of the ball in each frame. If you want a quick shortcut, use a red ball instead of this one and check only the red component of your RGB picture, until you've found a better algorithm.
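The red-component shortcut can be sketched in a few lines. This is a minimal pure-Python version, assuming the frame arrives as rows of (R, G, B) tuples; the function name and thresholds are my own placeholders, and in practice you would do the same thing with OpenCV array operations for speed:

```python
def find_red_ball(frame, r_min=180, gb_max=80):
    """Return the (x, y) centroid of strongly red pixels, or None.

    frame: a 2-D list of (R, G, B) tuples, one per pixel.
    A pixel counts as 'ball' when its red component is high and
    its green/blue components are low -- a crude but fast test.
    """
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if r >= r_min and g <= gb_max and b <= gb_max:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no ball found in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Since this is a per-frame operation, it must run well inside your frame period; thresholding a single colour channel is about as cheap as detection gets.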
For the PID control, start with the fact that you actually need two separate PID controllers: one for the east-west motor, the other for the north-south one. If the two motors are identical, their tuning parameters should end up (roughly) equal.
For a PID controller to act, it needs to know the error: the difference between the desired position and the actual position of the ball. The X and Y components of this offset will be the inputs of the two PID controllers (one for each motor). To get the error, you first need the desired position on your path: a trajectory.
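The two-controller arrangement can be sketched as follows. This is a minimal discrete PID, assuming a fixed sample time; the class name and the gains are my own illustrative choices, not values from your system:

```python
class PID:
    """Discrete PID controller; the output drives one motor axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """error: desired position minus measured ball position."""
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# One controller per axis; with identical motors the gains can start equal.
pid_ew = PID(kp=1.0, ki=0.1, kd=0.05)  # east-west
pid_ns = PID(kp=1.0, ki=0.1, kd=0.05)  # north-south

# delta-X feeds the east-west controller, delta-Y the north-south one
u_x = pid_ew.update(error=2.0, dt=0.2)
u_y = pid_ns.update(error=-1.0, dt=0.2)
```

With a 200 ms sensor delay you will likely have to keep the gains low (and the setpoint speed slow) to stay stable.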
To get the trajectory, you need to process the image and extract the path, as well as its starting and ending points. I am not sure whether your algorithm can distinguish the path from the rest of the board right now; if not, note that this is an algorithm of its own to handle before continuing. Again, you can skip this part by manually entering the junction points, if you are eager to see some results quickly. In any case, you should be able to define a setpoint speed and have your software move the desired coordinate along the path, from start towards the end. Obviously, you will start with a low desired speed.
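Moving the desired coordinate along the path at a chosen speed might look like this. A minimal sketch, assuming the path is given as a list of manually entered junction points; the function name and units are my own:

```python
import math


def setpoint_at(waypoints, speed, t):
    """Position of the moving setpoint after t seconds.

    waypoints: list of (x, y) junction points defining the path
               (these could be entered manually at first).
    speed:     desired setpoint speed, in path units per second.
    Walks a distance of speed*t along the polyline; clamps at the end.
    """
    remaining = speed * t
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg == 0:
            continue  # skip duplicate junction points
        if remaining <= seg:
            f = remaining / seg
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        remaining -= seg
    return waypoints[-1]  # past the end: hold the final position
```

Starting with a low `speed` and raising it once the ball tracks reliably is then just a one-parameter change.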
So, before starting with control, you should go through the following checklist first:
- Simplify your image processing algorithm to get faster response
- Create an algorithm which creates a trajectory over your path using a predefined speed
- In each frame:
  - Calculate the difference between the trajectory and the ball position
  - Pass the delta-X component to the east-west PID, pass the delta-Y to the north-south PID
It may turn out that it is better to create the trajectory one segment at a time, and continue with the next segment only when the ball completes the previous one. Otherwise, you will need to take care that the ball doesn't overshoot the desired trajectory (which may be hard to accomplish).
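The segment-at-a-time idea can be sketched like this: the setpoint sits at the current junction and only advances when the ball actually gets there, so the trajectory can never run away from the ball. The class name and tolerance value are my own illustrative choices:

```python
import math


class SegmentTracker:
    """Feed the PID one junction at a time.

    The setpoint stays at the current junction until the ball is
    within `tolerance` of it, then advances to the next junction.
    """

    def __init__(self, waypoints, tolerance=1.0):
        self.waypoints = waypoints
        self.tolerance = tolerance
        self.index = 0

    def setpoint(self, ball_x, ball_y):
        """Return the current target junction, advancing if reached."""
        tx, ty = self.waypoints[self.index]
        reached = math.hypot(ball_x - tx, ball_y - ty) <= self.tolerance
        if reached and self.index < len(self.waypoints) - 1:
            self.index += 1
        return self.waypoints[self.index]
```

The per-frame loop then becomes: detect the ball, ask the tracker for the setpoint, and feed the X and Y errors to the two PID controllers.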
Yes, image processing can be done using microcontrollers and microprocessors. You can either do image processing using an Arduino with OpenCV or MATLAB, or, if you are more interested in microprocessors, use an embedded computer such as the Raspberry Pi (RPi) or BeagleBone (BB), which is more suitable for demanding image processing projects. The RPi has a built-in GPU, which is better for your application, but the BBB can also be used for image processing, considering that it has a faster ARM processor. Since the BB and RPi both run Linux, you can use the common OpenCV or SimpleCV libraries to do the task.
You can do image processing using the BBB and OpenCV, or the RPi and OpenCV. OpenCV is an open-source computer vision library; it has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. I would suggest using OpenCV with C++ on the BBB (C++ is faster, and the BBB doesn't have a GPU, so there is a chance of slow processing otherwise), and OpenCV with Python on the RPi (Python is much easier to code, and the RPi has a GPU). I have used both the RPi and the BBB in the past, but I would suggest you buy an RPi for your application, since it's cheaper and has a huge amount of documentation online.
Update: There are many OCR-based algorithms available online for OpenCV, but they are not that reliable. I think the best open-source OCR engine is Tesseract. You can get more ideas about this from the thread Tesseract or OpenCV for OCR.
That would be an optics question, not an image processing question. Basically, you need lenses to make the apparent image appear farther away, where the eye can focus on it better. This is how all head-worn heads-up displays work.
You can't "un-fuzzyfy" the image in software, print it on a transparency, look through it, and expect it to somehow be in focus. Physics doesn't work that way.
As for how to use lenses to make it work-- I don't know. I'm not that good with optics.