Today we’re going to attach a real database to our application and stop using the hardcoded list of projects (well, partially – we’ll still use it to populate the database on startup). First we need to understand several things – where the model is stored, how to fetch the data, and how to perform database migrations.

For now we need just a simple feature: fetching project records by their unique identifier – not the one assigned by Persistent (although that would work out of the box), but the <code>identifier</code> field. Luckily, Persistent supports unique constraints on fields (not supported in MongoDB) – in an entity definition, lines starting with an uppercase word declare a unique constraint. In our case, we add <code>UID identifier</code> to the Project definition in Domain/Project.hs. That’s it for the constraint; now let’s get to fetching. That’s also fairly easy – it’s sufficient to change the selectPage implementation to:

selectPage :: Int -> Widget
selectPage projectId = do
  (Entity _ project) <- handlerToWidget $ runDB $ getBy404 $ UID projectId
  renderPage project

That’s it for fetching. We also get 404 Not Found handling for free – if the record is not found in the database, a 404 is returned automatically and no further processing is needed. The Entity type here might be a little confusing – it means the value was taken directly from the database and is an actual database record. Its first field (omitted here) is the id automatically assigned to the row (we don’t need it here). So, that’s it, right? Should it already work? Let’s try it out: stack exec -- yesod devel. It compiles – that’s a good sign. Let’s navigate to localhost:3000/project/1. Whoops, an Internal Server Error? What has gone wrong?
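To make the Entity type less confusing, here is a simplified sketch of how persistent defines it – a row fetched from the database is just the record value paired with the id the database assigned to it (in the real library, Key is an associated data family of the PersistEntity class; it is stubbed as a plain type family here so the sketch stands alone, and the toy Project record below is only an illustration):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Simplified sketch of persistent's Entity type. In persistent proper,
-- Key is a data family tied to the PersistEntity class; here it is an
-- open type family just so the sketch compiles on its own.
type family Key record

data Entity record = Entity
    { entityKey :: Key record  -- the id the database assigned to the row
    , entityVal :: record      -- the actual record value
    }

-- A toy record standing in for our Project model (illustration only):
newtype Project = Project { projectName :: String }
type instance Key Project = Int
```

Pattern matching with (Entity _ project), as in selectPage above, simply discards the key and keeps the record.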

The error says that the project table doesn’t exist in the database. Oh, right – we defined the migration, but never ran it! The application still compiled, because we aren’t obliged to run migrations – for example, the database may be owned by another team or shared by several applications. That’s a logic error which Yesod currently doesn’t typecheck. Let’s run the migration then. The scaffolding runs migrations in the makeFoundation function, in the Application.hs file. The function is quite long, but the important part is runLoggingT (runSqlPool (runMigration migrateAll) pool) logFunc. It seems that some migration is already being run – what’s going on?

That’s the migration for the scaffolding’s own data structures – users, comments and the like. Where are they defined? If you inspect Model.hs, you’ll find the following code:

share [mkPersist sqlSettings, mkMigrate "migrateAll"]
    $(persistFileWith lowerCaseSettings "config/models")

Apparently that’s also some Template Haskell magic – and the entities are defined in config/models.
This file looks quite reasonable:

User
    ident Text
    password Text Maybe
    UniqueUser ident
    deriving Typeable
Email
    email Text
    userId UserId Maybe
    verkey Text Maybe
    UniqueEmail email
Comment json -- Adding "json" causes ToJSON and FromJSON instances to be derived.
    message Text
    userId UserId Maybe
    deriving Eq
    deriving Show

 -- By default this file is used in Model.hs (which is imported by Foundation.hs)
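To demystify the Template Haskell a little: from each block in this file, mkPersist generates a record type whose fields are prefixed with the entity name. A simplified sketch of what it derives for the User block (the real generated code also includes keys, PersistEntity instances and a Unique GADT for UniqueUser, all omitted here):

```haskell
import Data.Text (Text, pack)

-- Simplified sketch of what mkPersist generates from the User block:
-- field names get the entity name as a prefix, and Maybe columns become
-- Maybe fields. Keys and type-class instances are omitted.
data User = User
    { userIdent    :: Text
    , userPassword :: Maybe Text
    }
```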

These are simply data definitions, structurally identical to the one we put into Domain/Project.hs. So, have we put our definitions in the wrong file? Not quite. Yesod as a framework is quite composable. We can use the structures provided by the scaffolding or ignore them – it’s not going to blame us for either choice.
So in theory we could just add our Project definition here and be done with it. If you prefer it that way – go ahead, I won’t blame you. I don’t like the idea, because it mixes two separate domains, which tends to hurt later on. There are two other ways to solve this – either import Domain.Project everywhere it is needed, or rely on some semi-default behavior. I’ve chosen the semi-default behavior – extract the models to a separate file, but handle it the same way the default one is handled. I’m not going to argue whether this is better or worse than keeping a separate Domain.Project – both have their pros and cons. For example, by putting the data definitions into a separate file included from Import.hs, the whole application may start to rely on these data structures. This may cause quite a lot of trouble during later refactoring, even though we have a strong type system to save us from the simplest mistakes. On the other hand, importing Domain/Project.hs everywhere is not what you would expect at first. For now I’m going to stay with the scaffolding’s approach, and refactor if it proves hard to maintain.

Let’s create a config/projectModels file with the content from the quasiquoted part of Domain/Project.hs:

Project
    identifier Int
    name Text
    shortName Text
    deadline UTCTime
    UID identifier
    deriving Show

Now we can remove Domain/Project.hs and all its imports – the definitions will soon become visible by default (via Import.hs). To make them visible in all files that import Import.hs (unfortunately, that means all of them), add the following call to Model.hs:

share [mkPersist sqlSettings, mkMigrate "migrateProject"]
    $(persistFileWith lowerCaseSettings "config/projectModels")

That’s all nice and fine, but we still don’t have our migration! To run it on application startup, add the line runLoggingT (runSqlPool (runMigration migrateProject) pool) logFunc to Application.hs, just below runLoggingT (runSqlPool (runMigration migrateAll) pool) logFunc. We could actually remove the latter, but it may prove useful later on when we learn more Yesod’ish patterns.

You might have guessed by now that the string argument of mkMigrate is the name of the function that will be generated for the migration. That’s quite a common pattern in places where Yesod uses Template Haskell.

In a big web application I would seriously consider either dropping this pattern or importing everything explicitly to avoid tangling the code – but I believe this app will be small enough to ignore the problem. After all, our main objective here is to learn some Yesod, isn’t it?

That’s all well and good, but now every project read returns a 404, and we’d actually prefer to read real records from the database, right? So we have to populate it. Today we’ll do it in a brute-force way, but soon we’ll be honing this part as well.

For now, it’s sufficient to add _ <- mapM (\p -> runLoggingT (runSqlPool (insertBy p) pool) logFunc) projects to makeFoundation, just after the migrations. You might be tempted to use insert instead of insertBy, but that would make your application require much more maintenance – namely, removing the database before each startup (SQLite returns an error when inserting entries with the same UID, and an error on startup means the whole application goes down). Of course, such a solution is ugly and puts hacks into production code just to handle development needs – still, it’s fine for us for now.
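This works because insertBy checks unique constraints before inserting. To see why that makes the seeding step safe to re-run, here is a toy in-memory analogue of its contract (an illustration only, not Persistent’s actual API):

```haskell
import qualified Data.Map.Strict as Map

-- Toy analogue of insertBy's contract, with a Map playing the role of
-- the table and Int playing the role of the UID unique key:
--   Left existing  -> a row with that unique key already exists; it is
--                     reported instead of raising a database error
--   Right updated  -> the insert went through
insertByUID :: Int -> String -> Map.Map Int String
            -> Either String (Map.Map Int String)
insertByUID uid row db =
    case Map.lookup uid db of
        Just existing -> Left existing                -- duplicate: report it
        Nothing       -> Right (Map.insert uid row db)
```

With insert, a second startup would hit the UID constraint and crash; with insertBy, the existing row is simply reported and startup continues.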

Try the app out after these changes – the functionality is the same as before, but the data is now fetched straight from the database, which means we can also insert new entries at runtime. We’re going to deal with forms and inserting in the near future, but the next post will again be about databases – where is it? Why is our list of projects still a constant?

Stay tuned!