[Developer says] Automating the process of testing codebender's hosted examples

Greetings.

One of the things I find great about codebender is its large collection of hosted libraries and examples. As this collection grows, we need to keep track of the status of each example: whether it compiles successfully, the set of boards it can compile against and, in case of failure, the error that occurred during compilation. This is a process that can be automated, and we managed to do so with the help of Selenium.

Selenium is a Java-based server that automates web browsers. One can write navigation scenarios using its API in one of its supported languages. Combining it with a testing framework, such as pytest in Python, gives us a testing platform that fits our needs.

An example of combining py.test with Selenium can be seen in the code snippet that follows:

import pytest
from selenium import webdriver

TEST_TARGET = 'https://codebender.cc'

@pytest.fixture(scope='module')
def driver(request):
    # A single browser instance is shared by every test in this module.
    driver = webdriver.Firefox()
    # quit() ends the WebDriver session and closes the browser
    # once all tests have completed.
    request.addfinalizer(lambda: driver.quit())
    return driver

def test_index(driver):
    # The home page title should mention codebender.
    driver.get(TEST_TARGET)
    assert 'codebender' in driver.title

def test_editor(driver):
    # The "how it works" page loads the editor with the Blink example.
    driver.get(TEST_TARGET + '/how_it_works')
    assert 'Blink : codebender' in driver.title

After the required imports, we define a function that serves as a fixture for our tests, using the decorator provided by pytest. In this function we create an instance of the webdriver object, which in turn is used to communicate with the Selenium WebDriver API and express our navigation steps. We also register an anonymous function that quits the webdriver instance once all tests are completed. Because of the scope argument passed to the decorator, the fixture is available to all the test functions contained in our test module (the file that contains the above code), and it is executed only once across the tests, which lets us reuse the browser session between them. The other two functions, whose names carry the test_ prefix, are the actual tests. Each one visits a page and asserts that the page title is the expected one. There is certainly a lot more to pytest and Selenium, but I hope you get the idea. The documentation for each library is quite extensive and there are many examples on the web for both.
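Assuming the snippet above is saved as test_codebender.py (a hypothetical name; pytest discovers files matching test_*.py by default), the tests can be run with:

py.test test_codebender.py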

Depending on the browsers the Selenium tests must cover, it can be hard to maintain different versions of each browser on different platforms locally. Sauce Labs comes in handy in this case. It's a service on which one can run their Selenium tests against a large collection of supported browsers and platforms.
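Pointing the tests at Sauce Labs mostly means swapping the local Firefox driver for a remote one. Here is a minimal sketch of what the driver creation in the fixture could look like; the username, access key and browser capabilities below are placeholders, not our actual configuration:

from selenium import webdriver

# Placeholder credentials, taken from your Sauce Labs account page.
SAUCE_USERNAME = 'my-username'
SAUCE_ACCESS_KEY = 'my-access-key'

# Request a specific browser/platform combination from Sauce Labs.
capabilities = {
    'browserName': 'firefox',
    'platform': 'Windows 7',
    'version': '35.0',
}

driver = webdriver.Remote(
    command_executor='http://%s:%s@ondemand.saucelabs.com:80/wd/hub'
                     % (SAUCE_USERNAME, SAUCE_ACCESS_KEY),
    desired_capabilities=capabilities)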

In order to keep track of each example, we log the result of each compile. When an example finishes compiling against its set of boards, a comment hosted on the Disqus service is updated to inform users about the compilation result and the date the test was performed. We update the comments using the Disqus API through its Python bindings.
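As a rough sketch of this step, using the disqus-python bindings; the API keys, access token, post id and message below are all placeholders for illustration:

from disqusapi import DisqusAPI

# Placeholder API credentials from the Disqus application settings.
disqus = DisqusAPI('secret_key', 'public_key')

# Hypothetical status message for an example's comment thread.
message = 'Compiled successfully for: Arduino Uno, Arduino Mega'

# posts.update edits an existing comment; the post id is a placeholder.
disqus.posts.update(
    post='1234567890',
    message=message,
    access_token='access_token')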

Finally, when all examples have been tested, a report is composed, notifying us about the number of differences since the last compilation cycle, along with a log of these differences.
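The gist of that comparison can be sketched in a few lines of Python. The data shapes here are hypothetical (the real suite may store results differently); the idea is simply to diff the status of each example between two cycles:

def diff_results(previous, current):
    # Compare two {example: status} mappings from consecutive cycles
    # and return the examples whose status changed.
    changes = {}
    for example, status in current.items():
        if previous.get(example) != status:
            changes[example] = (previous.get(example), status)
    return changes

previous = {'Blink': 'success', 'Fade': 'error: SPI.h not found'}
current = {'Blink': 'success', 'Fade': 'success'}

changes = diff_results(previous, current)
print('%d difference(s) since the last cycle:' % len(changes))
for example, (old, new) in changes.items():
    print('  %s: %s -> %s' % (example, old, new))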

Combining the above tools into a testing suite that runs periodically gives us insight into the state of the hosted libraries and their examples. This, in turn, helps us identify problems and fix them.

Thanks for stopping by and reading. Happy holidays!