Good tests are always executed against a predefined, reproducible application state; that is, whenever you run a test in that state, the result will always be equivalent. Usually, this is achieved by setting up your database data yourself and clearing your cache and any temporary files before each test (if you make use of external services, you should mock them). Clearing caches and temporary files is easy; setting up your database data, on the other hand, is not.
If you're using Flask-SQLAlchemy to hold your data, you would need to hardcode your data somewhere in your tests, as follows:
attributes = { … }
model = MyModel(**attributes)
db.session.add(model)
db.session.commit()
This approach does not scale because it is not easily reusable (even if you wrap it in a function or method, you still have to set up the data for each test). There are two main ways to populate your database for testing: fixtures and pseudo-random data.
Using pseudo-random data is usually library-specific and produces better test data, as the generated data is context-specific rather than static, but it may require extra coding now and then, such as when you define your own fields or need a different value range for a field.
Fixtures are the most straightforward approach: you just define your data in a file and load it in each test. You can produce the file by exporting your database data and editing it at your convenience, or by writing it yourself. The JSON format is quite popular for this. Let's take a look at how to implement both:
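A `fixtures/users.json` file for the code below might look like this (the exact records are an assumption; the test further down only requires a user whose name starts with "Marie" and whose gender is "F"):

```json
[
    {"name": "Marie Curie", "gender": "F"},
    {"name": "John Doe", "gender": "M"},
    {"name": "Sam Smith", "gender": "U"}
]
```

Each object maps directly onto the model's keyword arguments, which is what lets the loader instantiate records with `model_cls(**data)`.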
# coding:utf-8

# == USING FIXTURES ===

import tempfile, os
import json

from flask import Flask
from flask_testing import TestCase
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    gender = db.Column(db.String(1), default='U')

    def __unicode__(self):
        return self.name


def app_factory(name=None):
    name = name or __name__
    app = Flask(name)
    return app


class MyTestCase(TestCase):
    def create_app(self):
        app = app_factory()
        app.config['TESTING'] = True
        # db_fd: database file descriptor
        # we create a temporary file to hold our data
        self.db_fd, app.config['DATABASE'] = tempfile.mkstemp()
        # point SQLAlchemy at the temporary SQLite file
        app.config['SQLALCHEMY_DATABASE_URI'] = (
            'sqlite:///' + app.config['DATABASE'])
        db.init_app(app)
        return app

    def load_fixture(self, path, model_cls):
        """Loads a JSON fixture into the database"""
        with open(path) as fixture_file:
            fixture = json.load(fixture_file)
        for data in fixture:
            # the model accepts dict-like parameters
            instance = model_cls(**data)
            # make sure our session knows about our new instance
            db.session.add(instance)
        db.session.commit()

    def setUp(self):
        db.create_all()
        # you could load more fixtures if needed
        self.load_fixture('fixtures/users.json', User)

    def tearDown(self):
        # make sure the session is removed
        db.session.remove()
        # close the file descriptor
        os.close(self.db_fd)
        # delete the temporary database file
        # as an SQLite database is a single file, this is equivalent to a drop_all
        os.unlink(self.app.config['DATABASE'])

    def test_fixture(self):
        marie = User.query.filter(User.name.ilike('Marie%')).first()
        self.assertEqual(marie.gender, "F")


if __name__ == '__main__':
    import unittest
    unittest.main()
The preceding code is simple. We create an SQLAlchemy model, link it to our app, and, during setup, we load our fixture. In tearDown, we make sure our database and SQLAlchemy session are brand new for the next test. Our fixture is written in JSON because it is fast enough to parse and is readable.
Were we to use pseudo-random generators to create our users (look up Google fuzzy testing for more on the subject), we could do it like this:
from random import choice

# sample pools to draw names from; extend as needed
names = ['John', 'Marie', 'Sam']
surnames = ['Smith', 'Curie', 'Doe']


def new_user(**kw):
    # this way we only know the user data at execution time;
    # tests should take that into account
    kw['name'] = kw.get('name', "%s %s" % (choice(names), choice(surnames)))
    kw['gender'] = kw.get('gender', choice(['M', 'F', 'U']))
    return kw


user = User(**new_user())
db.session.add(user)
db.session.commit()
Be aware that our tests would also have to change, as we are no longer testing against a static scenario. As a rule, fixtures are enough for most cases, but pseudo-random test data is often better, as it forces your application to handle realistic scenarios that are usually left out.
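To show how such tests change, here is a minimal sketch (using a self-contained copy of the `new_user` generator above, with assumed sample name pools): with random data, tests assert properties of the data rather than exact values.

```python
from random import choice

# sample pools, mirroring the generator sketched above
names = ['John', 'Marie', 'Sam']
surnames = ['Smith', 'Curie', 'Doe']


def new_user(**kw):
    kw['name'] = kw.get('name', "%s %s" % (choice(names), choice(surnames)))
    kw['gender'] = kw.get('gender', choice(['M', 'F', 'U']))
    return kw


# with random data we assert properties, not exact values
data = new_user()
assert data['gender'] in ('M', 'F', 'U')
first, last = data['name'].split()
assert first in names and last in surnames

# explicit overrides still win, so specific edge cases remain testable
assert new_user(gender='F')['gender'] == 'F'
```

Notice the escape hatch: passing an explicit keyword pins a value, so you can still write deterministic tests for particular edge cases while everything else stays randomized.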
Integration testing is a very widely used term with a rather narrow meaning: it refers to the act of testing multiple modules together to verify their integration. Because testing multiple modules from the same code base in Python is usually trivial and transparent (an import here, a call there, and some output checking), you'll usually hear people use the term integration testing while referring to testing their code against a different code base, an application they did not create or maintain, or after a new key functionality was added to the system.