The Raspberry Turk uses a Raspberry Pi camera module and computer vision algorithms to determine which pieces are in which squares. In order to see the board properly, it needs to be calibrated first.
```python
def current_chessboard_frame(self):
    # Grab the latest raw frame from the Pi camera.
    _, frame = self._capture.read()
    # Warp the raw frame into a square, top-down view of the board.
    M = get_chessboard_perspective_transform()
    bgr_img = cv2.warpPerspective(frame, M, (BOARD_SIZE, BOARD_SIZE))
    # OpenCV captures in BGR; convert to RGB, then rotate 180 degrees
    # (flip around both axes) to match the board's orientation.
    img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2RGB)
    flipped_img = cv2.flip(img, -1)
    return ChessboardFrame(flipped_img)
```
In order to go from A (the raw camera frame) to B (a square, top-down image of the board), cv2.warpPerspective from OpenCV is used.
There is a small script used to find the correct transformation matrix. The full source can be found here.
Essentially it does the following:
If the camera moves, or the robot is deconstructed and reconstructed, the perspective transformation matrix can easily be recalculated.
```shell
$ python -m raspberryturk.embedded.vision.chessboard_perspective_transform
2017-02-11 16:05:41,371 - __main__ - INFO - Begin camera position recalibration...
2017-02-11 16:05:41,371 - __main__ - INFO - Camera position recalibration successful.
```