
Adding GHCJS to equation

September 26, 2016 GHCJS, Haskell, Yesod

In this post we’ll try to remove the JS-ey part of our application – the Julius templates. Before we move on to the actual implementation, I have to warn you: GHCJS is not yet a fully mature project, and installing it on Windows is a pain. I wasted nearly two days trying to do it using stack, mostly due to a problem with locale – the Internet said it was already solved, but for some reason it wasn’t. Only the solution described in https://github.com/commercialhaskell/stack/issues/1448 worked – setting a non-UTF application locale (en_US). Still, that’s not all: I also had to install pkg-config (this StackOverflow question covers how to do it) and nodejs. And even that’s not all – older versions of GHCJS (for example the one proposed on the Stack GHCJS page) have trouble with Cabal. In the end, I had to install GHCJS from sources on Windows. Surprisingly, that went without serious problems.

On Linux machines it also requires nasty hacks – like a symbolic link from node to nodejs. During a standalone installation this can be solved with the --with-node nodejs flag, but not in a stack installation (unless it’s somehow configurable in stack.yaml, which I’m not aware of).
Installation takes really long – I suppose it was over an hour, and it downloaded, like, half of the Internet.
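On Ubuntu, the symlink hack mentioned above boils down to a single command – a sketch only, as the exact target path may differ on your system:

```shell
# Make the `node` binary GHCJS expects point at Ubuntu's `nodejs`
# (illustrative; adjust the destination if your PATH setup differs)
sudo ln -s "$(command -v nodejs)" /usr/local/bin/node
```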

Anyway, since I’ve managed to do it, you should be able to install it too (if you’re not able to – write in the comments, maybe I can help?), so let’s go to teh codez!
First, let’s create the GHCJS project as a subproject of our main one. Go to the main project directory and run stack new ghcjs. That creates some files, but – surprisingly – none related to GHCJS in any way. To request compilation with GHCJS, you have to add the following to the subproject’s stack.yaml:

resolver: lts-6.18
compiler: ghcjs-0.2.0.9006015_ghc-7.10.3
compiler-check: match-exact
setup-info:
  ghcjs:
    source:
      ghcjs-0.2.0.9006015_ghc-7.10.3:
        url: "https://tolysz.org/ghcjs/lts-6.15-9006015.tar.gz"
        sha1: 4d513006622bf428a3c983ca927837e3d14ab687

If you are wondering where I got these paths from (and you should be – never trust an unknown blogger asking you to install some arbitrary packages!), it’s from a ghcjs-dom GitHub issue.

After that, run stack setup and spend some time solving the problems (it takes a while, not even counting the really long installation). For example, on Windows you won’t be able to set up the environment, because this package uses a particular resolver – lts-6.15 – which pins version 0.8.18.4 of the yaml package, and that version cannot be built on Windows because it contains non-ASCII characters in its path (it’s this problem). Seriously, that’s one of the weirder problems I’ve encountered. The bad news is that there’s no elegant way to solve it. I manually changed the downloaded package to use the lts-6.18 resolver, which works fine. If you choose this solution, remember to remove the sha1 from stack.yaml. Also remember that creating tar.gz files on Windows works a bit differently than on Linux, and you may have trouble repacking (luckily, 7-Zip offers an in-place change feature, which solves this issue).

And voila – we’re ready to code!
First, let’s check whether the current version (which – by default – does only some printing to the console) works with Yesod.
To do that, first compile the GHCJS project (simply stack build), then copy the all.js file from .stack-work/install/[some-hash]/bin/*.jsexe to a new directory, static/js. Then we need to include this file – this is a little cumbersome in Yesod, but it adds to type safety. In the desired handler (in our case Project.hs) you have to add addScript $ StaticR js_all_js to the implementation of renderPage. The way of inclusion is a bit weird (all slashes and dots are substituted with underscores), but it guarantees compile-time safety (if the file does not exist, the app will not compile), which is good.
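The mangling rule is simple enough to sketch in a few lines – note this is my own illustration of what the generated identifiers look like, not Yesod’s actual implementation:

```haskell
-- Illustrative sketch (not Yesod's real code): static file paths are
-- turned into Haskell identifiers by replacing slashes and dots with
-- underscores, so static/js/all.js yields the identifier js_all_js.
toStaticName :: String -> String
toStaticName = map (\c -> if c `elem` "/." then '_' else c)

main :: IO ()
main = putStrLn (toStaticName "js/all.js")  -- prints js_all_js
```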

After these changes, run stack exec -- yesod devel. Here’s a fun fact: most probably it won’t compile, saying that js_all_js is not in scope. Thanks to this StackOverflow question we can diagnose the problem – Yesod didn’t recognize the new static file (identifiers are generated automatically for them). You need to either change or touch Settings/StaticFiles.hs, and after recompilation it’ll work.

Now run the server and open a project. And yay, it works! You can verify it by the someFunc print in the JavaScript console. So far so good; now we have to find a way to perform the same action we did in project.julius. There are three steps to perform:

  1. Hook on #delete-project button
  2. onClick – issue a DELETE request
  3. after a response is received – redirect user to site of response

In JavaScript it was really simple – what will it look like in Haskell? Let’s find out!
The first thing we need to do is add ghcjs-dom to our dependencies: extra-deps: [ghcjs-dom-0.3.1.0, ghcjs-dom-jsffi-0.3.1.0] in stack.yaml (the FFI package is also required) and in the .cabal file. It turns out we also have to add ghcjs-base-0.2.0.0 to stack.yaml, but apparently it’s not a published package… luckily, stack can handle this. To the packages field in stack.yaml add:

- location:
   git: https://github.com/ghcjs/ghcjs-base.git
   commit: 552da6d30fd25857a2ee8eb995d01fd4ae606d22

this will fetch the latest (as of today) commit of ghcjs-base. This is version 0.2.0.0, so it’ll compile now, right? Right. But wait, we didn’t code anything yet! Oh dear, let’s fix that now.

Open the ghcjs/src/Lib.hs file and write:

module Lib (setupClickHandlers) where

import GHCJS.DOM (runWebGUI, webViewGetDomDocument)
import GHCJS.DOM.HTMLButtonElement (castToHTMLButtonElement)
import GHCJS.DOM.Document (getElementById)
import GHCJS.DOM.EventTarget
import GHCJS.DOM.EventTargetClosures
import GHCJS.DOM.Types

setupClickHandlers :: IO()
setupClickHandlers = do 
  runWebGUI $ \webView -> do
    Just doc <- webViewGetDomDocument webView
    Just element <- getElementById doc "delete-project"
    button <- castToHTMLButtonElement element
    clickCallback <- eventListenerNew printNow
    addEventListener button "click" (Just clickCallback) False

printNow :: MouseEvent -> IO()
printNow _ = print "Clicked!"

We won’t go through all the imports (I was heavily inspired by some GitHub examples) – it is possible that some of them aren’t necessary, and others could definitely be narrowed. Nevertheless, I think going through the code will be hard enough, so let’s ignore the details for now.

The first call, runWebGUI, is responsible for setting our code in the proper context. It calls the given function (a lambda, in our case) with the GUI view. It’s pretty cool that you can use exactly the same code for both the browser and native GTK apps (with proper linkage, obviously). Then we extract the DOM document from the GUI and the desired button from the document. In the next line, we create a callback (from the function defined a few lines lower) and attach it to the "click" event of our button. The syntax for the listener might seem a bit weird, so let’s take a look at the signature and the argument names:

addEventListener ::
                 (MonadIO m, IsEventTarget self, ToJSString type') =>
                   self -> type' -> Maybe EventListener -> Bool -> m ()
addEventListener self type' listener useCapture = ...

The first two arguments – the event target (button) and event type (click) – are fairly intuitive, but why is the EventListener a Maybe, and what is useCapture? useCapture is a parameter controlling the way events propagate. It’s explained in more detail here (a link from the ghcjs-dom-jsffi source). Unfortunately, I still don’t know why EventListener is a Maybe – possibly to allow changing event propagation without any actual handler? If you have an idea, let me know in the comments!

You also need to call this function (instead of someFunc) in app/Main.hs. Then compile, copy all.js to static/js (just like before) and remove the templates/project.julius file. Now, be careful, this might hurt a little: yesod devel alone won’t spot that you’ve removed the file, so you’ll get a 500: Internal Server Error wherever project.julius would have been used.

The current version implements the 1st point from our checklist – we’ve added a hook on the #delete-project button. Now it’s time for some bad news – we won’t be able to easily use Yesod’s type-safe routes. Obviously, we could work around it by generating an interface file – but that’s another level of infrastructure we’d need to build. That’s why we’ll leave type-safe routes for now and stick with plain strings.

With that knowledge, let’s implement the AJAX call:

data DeletionResponse = DeletionResponse { target :: String } deriving (Generic, Show)
instance ToJSON DeletionResponse where
  toEncoding = genericToEncoding defaultOptions
instance FromJSON DeletionResponse

makeRequest :: String -> Request
makeRequest projectId = Request {
  reqMethod = DELETE,
  reqURI = pack $ "/project/" ++ projectId,
  reqLogin = Nothing,
  reqHeaders = [],
  reqWithCredentials = False,
  reqData = NoData
}

requestProjectDeletion :: String -> IO (Response String)
requestProjectDeletion projectId = xhrString $ makeRequest projectId

deleteProject :: MouseEvent -> IO()
deleteProject _ = do 
  currentLocation <- getWindowLocation
  currentHref <- getHref currentLocation
  response <- requestProjectDeletion $ unpack $ extractLastPiece currentHref
  redirect currentLocation currentHref response
  where
    redirect currentLocation oldHref resp = setHref (pack $ getTarget oldHref resp) currentLocation
    extractLastPiece href = last $ splitOn' (pack "/") href
    getTarget fallback resp = maybe (unpack fallback) target $ (BS.pack `fmap` contents resp) >>= decode

I’ve also added a few imports (Data.JSString, JavaScript.Web.XMLHttpRequest, JavaScript.Web.Location, GHC.Generics, Data.Aeson and Data.ByteString.Lazy.Char8 as BS) and a language extension – DeriveGeneric – for automatic generation of Aeson serialization/deserialization routines. Of course, this required changes in the cabal file as well (aeson and bytestring dependencies). deleteProject becomes our new clickCallback, and it works the same way as before again!

Now, that’s quite a lot of code, so let’s go through it and examine what happens there.
We start with a definition of our response data type – to be honest, it’s not really necessary, and we could’ve just extracted the target field from the response JSON without intermediate Haskell objects. If GHCJS offers some helpers to do that (with Aeson alone it wouldn’t be much simpler), it could spare us a few packs and unpacks.
Next we create an HTTP request – the only variable part here is the URI, determined by projectId. An important thing to note is that this code works slightly differently than the previous version (in Julius) – we now have a single script that determines routes dynamically, whereas previously a separate script was generated and sent with each page, which could add to delays (if it got bigger). Files generated by GHCJS are fairly big (hundreds of kB), so we can’t really afford sending dozens of them to each user – network bandwidth may be cheaper than it used to be, but not cheap enough to simply throw the throughput away.
Fun fact: the first time I implemented this in a way that compiled, I mistyped the route. The lack of static typing for routes is quite sad, but probably solvable with some work.
And then, after a short AJAX wrapper, we have the main event listener – deleteProject. It starts by determining the current path, for two reasons – first, that’s the location to set if something goes wrong (“no change”), and second, it determines the ID of the project. While this works now, it poses several threats. First of all, if two teams work separately on frontend and backend, at some point the route will change (probably without notice) and this mechanism will break. That can of course be prevented with thorough testing and strict processes, but there is also a second problem – no URL minifiers will work. While this might not be a problem now, it may become one when you switch to MongoDB identifiers.
The next line might be one of the most interesting features of Haskell in asynchronous applications. Due to lazy I/O, we can request the redirection to be performed “when the data is ready” (after the response is received – the response is required to proceed here). That’s a really nice solution compared to chains of promises (which are, in turn, really nice compared to typical Node.js callbacks) – it doesn’t break the code flow but, at the same time, performs implicit waits wherever needed.
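The fallback logic inside getTarget can be isolated into a pure sketch – the Maybe argument below stands in for the result of the aeson decoding step, and the names are mine, not from the app:

```haskell
-- Pure sketch of getTarget's fallback logic: if the response body
-- decodes to a DeletionResponse, redirect to its target; otherwise
-- fall back to the current href ("no change").
data DeletionResponse = DeletionResponse { target :: String }

getTarget :: String -> Maybe DeletionResponse -> String
getTarget fallback = maybe fallback target

main :: IO ()
main = do
  putStrLn (getTarget "/project/42" Nothing)                        -- no/bad body: stay put
  putStrLn (getTarget "/project/42" (Just (DeletionResponse "/")))  -- redirect to target
```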

The time has come for some final thoughts – after all, we managed to implement the same (or at least a very similar) simple functionality using GHCJS and Julius. From my perspective, using GHCJS for simple scripts is vast overkill – JavaScript is sufficient, and if you want more type safety, choose TypeScript (it is also supported in the Shakespearean family). You get out-of-the-box integration with Yesod, a simple type system and route interpolation. That might not be much, but remember that right now we’re aiming for rather simple scripts. And hey – people write *big* apps in JavaScript with no types, so a few lines are not a tragedy (if required).
As for GHCJS – it’s a powerful and promising tool, but still very immature. Its targets are ambitious, but for now it simply isn’t usable (at least on Windows) – it’s simply unacceptable for an installation from packages to take over two days and require digging through dozens of GitHub issues. Installation from sources might be more convenient, but I expect a mature tool to provide an installable package (even if all the package does is orchestrate the entire compilation locally). More importantly – if it provides any package, it should work out of the box (regardless of whether it’s old or new – assuming it’s the newest one; older ones may have bugs). Right now the start-up overhead is simply too big to be acceptable, at least for me (over half of this post is just about setting up GHCJS!). Programming in it is quite nice, but the documentation is ultra-sparse, and most things have to be looked up in the source – that’s also not what I’d expect from a mature tool. Nevertheless, GHCJS caught my attention and I’ll definitely take a closer look at it again in several months. Maybe then it’ll be possible to apply it to some bigger project (for small ones the infrastructure costs – setups, installations etc. – are much too high for me).

Looks like I’ll have to look for a different tool for frontend development (assuming I’m not happy with interpolated JavaScript/TypeScript/CoffeeScript, which is true). I’m going to consider Elm as the next tool – while it’s not exactly Haskell, at first glance it looks quite haskelly, has static types and several other nice features, as well as decent performance and some Yesod integration. Perhaps it’s worth checking out in one of the next posts?

Stay tuned!

Constructing the pipeline

September 22, 2016 DevOps, Gitlab, Haskell, Ubuntu

If you’ve successfully added a Gitlab project and pushed our Yesod code there, you might notice that some builds are being executed on your runner. That’s because the project already contains a .gitlab-ci.yml file. As you can see, it’s pretty much empty – just some prints for the sake of checking whether the runner is configured properly.
Since it is, now is the time to adjust our pipeline to a more complex scenario. Obviously, there are dozens of pipelines used for many cases. Here I want to present one quite simple deployment routine. We won’t be using all the steps yet (since we only have unit tests now), but they will come in handy later during development (if not on this blog, then during your own coding sessions).

I propose a four-step pipeline, in .gitlab-ci.yml coded as:

stages:
  - dev-testing
  - packaging
  - integration-testing
  - publishing

dev-testing is the part executed by developers on their local machines – this usually boils down to a linter, compilation and unit test execution. I treat “unit tests” as tests which do not require any particular binary available on the build server (for example a separate database instance or some set of services). For this reason, tests that use only sqlite are fine with me in this phase. Of course, feel free to disagree; I’m not going to argue about it. This phase goes first (before the “official” build phase) because – for compiled languages like Haskell – a separate build is required for test cases, and it’s a kind of commit sanity check. This stage should only fail if the developer didn’t run the proper scripts before committing (ideally never), or if he’s not required to (e.g. in a really small project).

The next stage, packaging, is a phase that should never fail. It consists of building the whole deployment package: resolving dependencies, constructing an RPM, DEB, Docker image or whatever deployment format you use, and pushing it to a test-package repository (not necessarily, though – it may sometimes be passed as an artifact between builds).

The third stage, integration-testing, is arguably the most important piece of the whole pipeline. It is needed to verify whether all the pieces fit together. These tests require a full environment set up, including databases, servers, security rules, routing rules etc. I’m a big fan of performing this phase automatically, but many real-world projects require manual attention. If you have such a project, the best advice I can give you is: run whatever is reasonable here, and publish internally if it passes. Then hand the passing build over to your testers and add another layer of testing-publishing (possibly using a tool dedicated to release management). This stage will fail often – mostly due to bugs in either your code or your scripts (which are also your code) – there will be races, data overrides and environment misalignments. Be prepared. Still, that’s the purpose of this stage – things that fail here would most probably fail on production otherwise, so it’s still good!

The last stage, publishing, is simple and should never fail – it should simply connect to the release repository and put the new package there. It might be an input point for your Ops people to take it and deploy, or an input point for the testers. This stage should be executed only for your release branches (not ones hidden in developer repositories) and is the end of the automated road – the next step has to be initiated by a human being, be it deployment to production or further testing. This job should also put a proper version tag on the repository (this may be done in packaging as well, but I prefer to have fewer versions).
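A sketch of what such a job could look like – the release host, path, branch name and tag scheme below are placeholders, not part of the project:

```yaml
# hypothetical publishing job: adjust destination and tag scheme
# to whatever your release repository actually looks like
publishing:
  stage: publishing
  only:
    - master
  script:
    - scp -r build/ deploy@releases.example.com:/srv/releases/my-app-${CI_PIPELINE_ID}
    - git tag "v-${CI_PIPELINE_ID}" && git push origin "v-${CI_PIPELINE_ID}"
```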

Of course, all stages may additionally fail for a number of reasons – invalid server configuration, network outage, out-of-memory errors, misconfiguration etc. I didn’t mention them earlier because they aren’t really related to (most of) the code you create and will occur pretty much at random. However, remember my warning: while they might seem random, you should investigate them the first time you encounter any of them. Later on they will only become more and more annoying, and in the end you’ll either spend your most-important-time-just-before-release-oh-my solving them or start ignoring the testing stage (which is bad).

A few more words about the choice of tooling: I tend to agree that Gitlab CI might not be the best Continuous Deployment platform ever, especially due to limited release management capabilities and its tight coupling to automated-everything (I like it, but most projects require some manual testing). Perhaps Jenkins or Electric Flow would be a better choice, but they would require significantly more attention – first, installing and configuring a separate service, and second, managing the integration. Configuring Gitlab CI only takes a few lines of YAML, but with Jenkins it’s not that easy anymore!

Now that we’ve designed the pipeline, let us create example jobs for it.

dev-testing is easy – it should simply run stack setup && stack test (we have no linters for now).
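As a .gitlab-ci.yml job, a minimal sketch following the stage names defined above:

```yaml
dev-testing:
  stage: dev-testing
  script:
    - stack setup
    - stack test
```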
preparing-package is a little trickier:

preparing-package:
  stage: packaging
  script:
    - stack setup
    - stack install --local-bin-path build
  artifacts:
    paths:
      - build/
    expire_in: 1 hr
  cache:
    paths:
      - .stack-work

First, we need to install the package to the build directory (otherwise it would remain in a hash-based location or be installed to the local system – which is not what we want), then define the artifacts (the whole build directory) and their expiration (1 hour – should be enough for us). The cache option is useful to speed up compilation – the workspace is not fully cleared between builds. Note that this might be dangerous if your tools don’t deal well with such “leftovers”. However, a clean installation of GHC and all packages takes about a year, so caching is required (of course, you may also set up your own package server with a cache for the packages you use, if your company is a tad bigger).
The rest of the stages are just prints for now – we have no integration tests, and installing an Apt repository or a Hackage server seems to be a bit of an overkill right now. I also hate polluting the public space (public Hackage) with dozens of packages, so I won’t do anything there right now (I might reconsider later on, of course!).

If you download the code from GitHub, you will see that it doesn’t work in Gitlab. Apparently, stack is not installed in our runner container! Installing it requires quite a few commands, but luckily, they are listed in the Stack GitHub installation manual.

For Ubuntu Server 16.04 it goes as follows:

# add repository key
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 575159689BEFB442
# add repository
echo 'deb http://download.fpcomplete.com/ubuntu xenial main'|sudo tee /etc/apt/sources.list.d/fpco.list
# update index and install stack
sudo apt-get update && sudo apt-get install stack -y

Manual configuration management and tool installation is not the best practice ever, but it’s often good enough as long as the project is relatively small (or you have dedicated people managing your servers). We might consider switching to a configuration management tool later on, when the dependencies get more complex.

Aaand, that’s it! The first pipeline builds should already be successfully leaving the Gitlab area. Congratulations!

The next post – promised a long time ago: GHCJS instead of jQuery in our app – comes soon.

Stay tuned!

Testing the app

September 8, 2016 Haskell, Languages, Testing, Yesod

Every application of reasonable complexity obviously has to be tested before releasing it to the customer – Yesod applications are no different. Of course, type safety gives us strong guarantees – guarantees which are far beyond the reach of Django, Rails or Node developers. Still, that’s not enough to purge all the bugs (or at least most of them). That’s why we have to write tests.

There are hundreds of levels of testing specified by dozens of documents, standards and specifications – we won’t be discussing most of them here, as only a few people care about the technicalities; most of us care only about “what to do to make the app work and guarantee that it will keep working”. Formally, these are tests of functionality (there are also other groups, like performance or UX tests), and they are the most common group of tests. For our application, I’d divide them into three or four kinds that require different setups and different testing methods (we won’t be implementing all of them here – this is just a conceptual discussion):

  • Unit tests
  • Server (backend) tests
  • Interface (frontend) tests
  • System tests

They become needed as complexity increases, so you won’t necessarily need all of them in your first project, but with time they become increasingly useful (so do performance and UX tests, but those are a different story). Unit tests are one of the most popular tools, and pretty much everybody claims to use them – they are the basic tool for assessing application sanity in unityped languages (like Python or JavaScript). Of course, Haskell also has several frameworks for writing them; among the most popular are Tasty and Hspec. Tasty is a framework, but the actual tests and assertions are provided by other libraries, like QuickCheck, SmallCheck or HUnit. The third one is quite a typical xUnit library for Haskell, but the first two are a bit different – instead of testing specific input/output combinations, they test properties of your code. They do this by injecting pseudo-random values and analyzing features of the result (instead of the result value). For example, if we define prepend as a function which – applied to a list – increases its length by 1 and makes the given element the first element of the new structure, we could express it as:


prepend :: a -> [a] -> [a]
prepend elem list = elem : list

x = testGroup "prepend features" 
  [
    QC.testProperty "after prepend list is bigger" $ 
      \list elem -> (length $ prepend elem (list :: [Int])) == length list + 1,
    QC.testProperty "after prepend becomes first element" $ 
      \list elem -> (head $ prepend elem (list :: [Int])) == elem
  ]

Of course, there are lots of different properties that can be asserted on most structures. However, there is a catch in these tests – you cannot ignore intermittent failures. Since they use random input (pseudo-random, but the seed is usually truly random), every test failure may be an actual bug. Plus, it’s possible that some bugs will go unnoticed for a few runs. That’s a bit different philosophy from the usual approach, where the data is always the same and a bug is either detected or not on each run (barring “intermittent” environment failures). It’s not better or worse, it’s simply different. Arguably better for lower-level tests, such as unit tests, which is why it’s used there. These tests are not suitable for edge-case testing, but are very good at exploring the domain.
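To demystify the “pseudo-random values” part, here is a hand-rolled, dependency-free sketch of what a property-based runner does under the hood – a toy generator standing in for QuickCheck’s (which also does shrinking, size scaling and much more):

```haskell
-- Toy sketch of property-based testing: generate pseudo-random lists
-- with a linear congruential generator and check the length property
-- on each of them. QuickCheck does this (and much more) for real.
prepend :: a -> [a] -> [a]
prepend x xs = x : xs

-- a simple linear congruential generator (illustrative only)
lcg :: Int -> Int
lcg s = (1103515245 * s + 12345) `mod` 2147483648

-- an infinite stream of pseudo-random lists, up to 9 elements each
randomLists :: Int -> [[Int]]
randomLists seed = go (lcg seed)
  where
    go s = let (xs, s') = draw (s `mod` 10) s in xs : go s'
    draw 0 s = ([], s)
    draw k s = let s'        = lcg s
                   (xs, s'') = draw (k - 1) s'
               in (s' `mod` 100 : xs, s'')

propLongerByOne :: Int -> [Int] -> Bool
propLongerByOne x xs = length (prepend x xs) == length xs + 1

main :: IO ()
main = print (all (propLongerByOne 42) (take 100 (randomLists 7)))  -- True
```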

There is nothing special about unit tests in applications using Yesod – they are just plain Haskell UTs, ignoring the whole Yesod thing.

The next group are server tests – ones that exercise the backend. They should be run against a fairly complete app, with the backend database set up (preferably on the same host), but without connections to external services or a full deployment (proxies, load balancers etc.). They should mostly test reactions to API calls – in most cases you will not want to test the HTML page but rather some JSON or XML responses (testing HTML is much harder). Yesod provides such tests (called integration tests there, though this name is used in many contexts) in a helper library, yesod-test. Examples of such tests, together with the Hspec framework, are included in the default scaffolding in the test/Handler directory. As you can see, these look quite like HTTP requests, except that they don’t really go through any port – the communication happens inside a single binary. I really recommend writing this kind of tests – they give a (nearly) end-to-end view of the processing, and are still quite efficient (a single binary with occasional DB access). One more thing about the database: beware. While you’ll probably want to use it in these tests, you have to be sure which applications communicate directly with the database. I don’t mean “the same instance of the database as your test database” – I’m sure you’ll have a special database for tests, preferably set up from scratch on each test run. What I mean is that it’s quite common for more than one application to communicate with the same database – for example, for market analysis. That’s an important point, because if you use a shared database, the database is also one of your interfaces and you should treat it with the same care as your other interfaces.

There are two types of interface (UI) tests – the first is testing the view – as in Jest, a library for testing React views – and the second is testing the functionality – as in Selenium. I’m focusing on tools for web projects here, because Yesod is a web framework, but the same types matter in pretty much any area, including mobile and desktop applications. Of all testing tools, these are probably responsible for the most hatred from development teams towards testing. That is because both these types are brittle: small, seemingly unrelated changes can break them – moving a button a few pixels left or right, changing the tone of the background, removing one or two nested divs. Of course, properly written tests will yield more reasonable results (if you’re using Selenium, check out the Page Object pattern), but still, they’re much less change-tolerant than the other types. Additionally, they are not yet fully mature – despite the fact that Selenium has been around for years, the driver for Firefox is still not ready (I know it was broken by Firefox 48 and that geckodriver is not the Selenium team’s responsibility – still, the lack of a driver for one of the top browsers signals immaturity of the tooling as a whole), so you may encounter quite a few glitches. Nevertheless, I really recommend implementing tests for some basic functionalities of the app. In the beginning it might seem that manually clicking through the app is faster, but the amount of manual clicking never ceases to increase while our patience does the opposite – and the quality of testing suffers. Of course, I’m not asking you to cover every single detail with UI tests – but at least check the basic features: that checkboxes work, that submit buttons cause submits and that data is available in the UI after its submission. Oh, and there are Selenium bindings for Haskell.
For functional web tests your setup should be similar to the one created for server tests, while for view tests it may be simpler – possibly as simple as your unit test setup.

The last type of tests is arguably the most complex one, and most IT projects don’t have them. They are run in the actual production environment (except that it’s not serving real clients yet) and/or its clone. Their purpose is to guarantee that the deployment was done properly, that communication with external services is fine, and that generally the application is ready to start serving real clients. This is no longer the time for checking functionality – that should have been done earlier – only a few basic scenarios are executed, mostly to guarantee that the interfaces between system components (services, the external world and things like the OS) are working fine. Static typing helps here as well – perhaps Yesod is not the best example, but Servant is kind of famous for generating type-safe web APIs. Still, we have to check that services were built in compatible versions, that ports are not blocked etc. Altogether – this step is more of a job for a DevOps guy and simplifies operations rather than development, but hey – in your startup you’ll have to write everything on your own and deal with the administration as well, so you’d better get to know it!

By the way, that’s precisely what we’re going to deal with in the next post – setting up an automated deployment routine to provide us with a fully automated continuous deployment pipeline. The whole task probably won’t fit in a single post, but hey – let’s see.

Stay tuned!