Archives

All posts for December 2015

Home Retail’s performance in 2015 could be regarded as an ‘annus horribilis’: a return of -52% for the year (under-performing the broader sector by 58%), exacerbated by its pre-Christmas profit warning, suggests directional short sellers have to date made a healthy return. However, screening the company on OTAS across our range of factors highlights some interesting analysis of the risk landscape, which could prompt shorts (and long investors) to reconsider their current positioning.


  • Valuation: Home Retail is currently trading on a 9.23x 12-month forward P/E and is now statistically cheap. Contextualising such valuation levels on the OTAS Stacked Graph shows that over the last 8 years the stock has reached a similar extreme only once, back in November 2011, which coincided with lows in the share price and a positive inflection point.
  • Short Interest: The current total percentage of free-float shares on loan is around 10.5%, significantly high compared with the last 2 years. With roughly 11 days of trading volume required to cover the existing short base, the risk of a short squeeze is notable.
  • Divergence: The performance of the shares over the last month (-6.82%) has prompted a contraction in the short base of around 2%, suggesting directional accounts are now covering existing positions.
  • Dividend: As well as offering a cheap valuation, from a fundamental perspective Home Retail also offers a high relative yield of just under 4%, which is comfortably covered more than 2.7x by next year’s earnings.
  • Other notables:
    Home Retail are currently rated a consensus ‘Hold’ and trade at the widest discount (-23%) to the median analyst price target among UK peers.
    Home Retail releases a Sales update shortly after Christmas, on 14th January.

    With fundamental factors arguably appealing and offering positive historical context, short interest high but showing some signs of contraction, and the depressed share price attracting bid chatter, it may be time for investors to re-assess their current positioning in Home Retail shares.

 

Merry Christmas & a prosperous New Year from everyone at OTAS

As a general rule of thumb, an environment of rising interest rates is seen as a positive catalyst for the banking sector, as net interest margins can begin to expand when banks exploit the spread between loan and deposit rates. Whilst the reaction of European share prices in the sector today would certainly affirm this (the SX7P is +2.1% at pixel time), there is a trend developing in OTAS which identifies thematic geographic dispersion, suggesting some investors may not believe the rally is all-encompassing.

One of the best ways to interpret the markets’ forward share price expectations of a stock is to analyse its implied volatility and associated sentiment indicators.

Currently, in the European Banks sector, all of the Scandinavian large-cap banks are seeing above-average implied volatility readings, with three companies in particular, SEB, Nordea and Svenska Handelsbanken, seeing extremely high volatility levels relative to the rest of the sector.

To contextualise this, comparing the current median implied volatility of the Scandi banks with that of the wider Euro Banks sector (versus history), it is evident that expectations of significant share price moves are presently being priced into these companies. Indeed, you would have to go back to Q1 2010 to see similar levels.

Whilst volatility is high across the curve specifically for the Scandinavian banks, it is in the single-stock detail that we get a true sense of investor sentiment and positioning. Focussing on the three companies OTAS previously identified as exhibiting extreme behaviour (SEB, Handelsbanken & Nordea), some important correlations are observed:

The Put Ratio (the total number of put options traded versus the total number of options traded) in all three companies is flagging as extremely high compared with normal, suggesting downside protection is being sought. This is further evidenced by our analysis indicating that the heaviest recent volume has been dominated by at-the-money and out-of-the-money puts.

Put Ratio charts: SEB, Nordea and SHBA

Additionally, skew curve analysis gives a good indication of the demand dynamic from investors. Currently, both Nordea and Handelsbanken have flags indicating that the cost of upside exposure is at or around year lows, implying that bullish positioning is minimal.

Skew curves: Nordea and SHBA

Interestingly, whilst the vol markets are seeing considerable activity and bias, there does not seem to be specific directional positioning in the cash market, as depicted by short interest. The current percentage of free float on loan for all of the large-cap Scandi banks remains largely unchanged, suggesting traditional equity long/short funds are not playing this theme.

OTAS Lingo is our natural language app to summarise many of our analyses in a textual report. Users can subscribe to daily, weekly or monthly Lingo emails for a custom portfolio of stocks or a public index, and receive a paragraph for each metric we offer, highlighting important changes across the time period. Our method of natural language generation is simple: given the state of the underlying data, we generate at least one sentence for each of the metric’s sub-topics, and concatenate the results to form a paragraph. This method of text generation has proved to be very customisable and extensible, despite being time-consuming to implement, and the results are very readable. However, given that we specify hundreds of these sentences in the code base, some of which are rendered very infrequently, how do we leverage compile-time checking to overcome the runtime issues of tools such as printf?

First, a little background. We render our natural language both as HTML and plain text, so we abstract the underlying structure with a GeneratedText type. This type represents dates, times and our internal security identifiers (OtasSecurityIds), in addition to raw text values. HTML and plain text are rendered with two respective functions, which use additional information, such as security identifier format (name, RIC code etc.), to customise the output. Here’s a simplified representation of the generated text type.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import qualified Data.Text as T
import Data.Time (Day)

type HtmlClass = T.Text

data Token
  = Text T.Text -- ^ Strict Text from the text package
  | Day Day -- ^ From the time package
  | OtasId OtasSecurityId
  | Span [HtmlClass] [Token] -- ^ Wrap underlying tokens in a span tag, with the given classes, when rendering as HTML

newtype GeneratedText = GeneratedText [Token]
  deriving (Monoid)

We expose GeneratedText as a newtype, and not a type synonym, to hide the internal implementation as much as possible. Users are provided with functions such as liftOtasId :: OtasSecurityId -> GeneratedText, which generates a single-token text value, an IsString instance for convenience and a Monoid instance for composition. With this, we can also express useful combinators such as list :: [GeneratedText] -> GeneratedText, which will render a list more naturally as English. These combinators may be used to construct sentences such as the following.

-- | List stocks with high volume.
-- E.g. High volume was observed in Vodafone, Barclays and Tesco.
volumeText :: [OtasSecurityId] -> GeneratedText
volumeText sids
  = "High volume was observed in " <> list (map liftOtasId sids) <> "."

The volumeText definition above is okay, but it is helpful to separate the format of the generated text from the underlying data as much as possible. Think about C-style format strings, which completely separate the format from the arguments to the format. Our code base contains hundreds of text sentences, so the code is more readable with this distinction in place. Additionally, there’s value in being able to treat formats as first class, allowing the structure of a sentence to be built up from sub-formats before being rendered to a generated text value. Enter formatting.

The formatting package is a fantastic type-safe equivalent to printf, allowing construction of typed format strings which can be rendered to strict or lazy text values. The type and number of expected arguments are encoded in the type of the formatter, so everything is checked at compile time: there are no runtime exceptions with formatting. The underlying logic is built upon the HoleyMonoid package, and many formatters are provided to render, for example, numbers with varying precision and in different bases. Formatting uses a text Builder under the hood, but the types will work with any monoid, such as our GeneratedText type, although we do need to reimplement the underlying functionality and formatters. However, this gives us a very clean way of expressing the format of our text:

volumeFormat :: Format r ([OtasSecurityId] -> r)
volumeFormat
  = "High volume was observed in " % list sid % "."

volumeText :: [OtasSecurityId] -> GeneratedText
volumeText 
  = format volumeFormat

The Format type expresses the type and number of arguments expected by the format string, in the correct order. volumeFormat above expects a list of OtasSecurityIds, so has type Format r ([OtasSecurityId] -> r), whereas a format string expecting an OtasSecurityId and a Double would have type Format r (OtasSecurityId -> Double -> r). The r parameter represents the result type, which, in our case, will always be a GeneratedText. Formatters are composed with (%), which keeps track of the expected argument, and a helpful IsString instance exists to avoid lifting text values. Finally, format will convert a format string to a function expecting the correct arguments, and yielding a GeneratedText value.
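As a further illustration (the sid and fixed formatters below stand in for our reimplemented formatters; the names are assumptions for this sketch rather than our exact API), a format expecting a security identifier and a price could be written and rendered like so:

-- Illustrative only: a format taking a security id and a Double.
priceFormat :: Format r (OtasSecurityId -> Double -> r)
priceFormat
  = sid % " closed at " % fixed 2 % "."

priceText :: OtasSecurityId -> Double -> GeneratedText
priceText
  = format priceFormat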

The format string style is especially beneficial as the text becomes longer, as the format is described without any noise from the data being textualised. We use a range of formatters for displaying dates, times, numeric values and security IDs, but we can also encode more complex behaviours as higher-order formatters, such as list above. Users can also define custom formatters to abstract more specialised representations. In practice, we specify all our generated text in Lingo in this way: it’s simply a clearer style of programming.

This will be a short post discussing how risk is normally handled in finance and how OTAS uses risk models. Today I’ve been improving code that deals with them, so I thought I would write a bit about the subject.

The audience for this is either someone from outside finance who isn’t familiar with finance norms, or someone in finance who has not had the chance to study risk models in detail.

The definition of risk

Intuitively, the term ‘risk’ should be something to do with the chances that you will lose an uncomfortable amount of money. In the equities business it is normally defined to be the standard deviation of returns. So if, in a given year, your portfolio makes on average perhaps £500k, but fluctuates so that on a bad year it loses £500k and on a good year it makes £1.5M, your risk is probably about £1M.

This can catch people out – that the definition that is almost universally used (for equities) includes the risk of making lots of money as well as the risk of losing lots of money. You could make the argument that if the stock can go up by 10%, then it could go down 10% just as easily. You could imagine situations where that’s not true though: If 10 companies were bidding for an amazing contract that only one of them would win, then you’re more likely to make lots of money than lose it (if you buy shares in one company).

In fact, the reason that standard deviation of returns is used is that it’s simple to calculate. That might sound as if the technical teams are deciding to be lazy in order to make life easy, but actually trying to estimate risk in a better way is nightmarishly difficult – it’s not that the quant team would have to sit and think about the problem for *ages*, it’s that the problem becomes guesswork. Getting the standard deviation of a portfolio’s returns takes a surprisingly large number of data points in finance (because fat tails make the calculation converge more slowly than expected), but getting a complete picture of how the risk works including outliers, catastrophic events, bidding wars, etc., takes far, far more data.

Since there isn’t enough data out there, the missing gaps would have to be filled by guesswork, and so most people stick to a plain standard-deviation-based risk model.

Having a simple definition means that everyone agrees on what risk numbers are: If someone asks you to keep your risk to less than 5% per year, they and you can look at your portfolio and largely agree that a good estimate of risk would be under the threshold. Then most people can look at the actual returns at the end of the year and say whether or not the risk was under the 5% target.

How risk models work

Let’s accept that risk is modelled in terms of the standard deviation of portfolio returns. To estimate your realised risk, you just take how many dollars you made each day for the last year or so, and take a standard deviation. The risk model, though, is used to make predictions for the future.
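As a rough sketch of that calculation (illustrative Haskell, not OTAS library code, and assuming we annualise daily P&L by the square root of 252 trading days):

import Data.List (genericLength)

-- Annualised standard deviation of a series of daily P&L numbers: a minimal
-- sketch of "take your daily P&L and take a standard deviation".
realisedRisk :: [Double] -> Double
realisedRisk dailyPnl = sqrt 252 * sqrt variance
  where
    n        = genericLength dailyPnl
    mean     = sum dailyPnl / n
    variance = sum [(x - mean) ^ 2 | x <- dailyPnl] / (n - 1)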

The risk model could just contain a table of every possible or probable portfolio and the predicted risk for that portfolio, but it would be a huge table. On the other hand, that is a complete description of what a risk model does: it just tells you the risk for any hypothetical portfolio. We can simplify this a bit by noting that if you double a portfolio’s position, the risk must double, so we don’t have to store every portfolio. In fact, similar reasoning means that if we have the standard deviation for N(N+1)/2 suitably chosen portfolios, we can work out the standard deviation for every portfolio.

Another way of saying the same is that all we need is the standard deviation for each stock, and the correlation between every stock and every other: If we know how volatile Vodafone is, and how volatile Apple is, and the correlation between them, then we can work out the volatility of any portfolio just containing Vodafone and Apple.
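As a small worked sketch of the two-stock case (illustrative code, not from our risk library):

-- Volatility of a two-stock portfolio (e.g. Vodafone and Apple) from the
-- individual volatilities, the correlation between them, and the weights.
portfolioVol :: (Double, Double)  -- ^ weights (w1, w2)
             -> (Double, Double)  -- ^ volatilities (s1, s2)
             -> Double            -- ^ correlation rho
             -> Double
portfolioVol (w1, w2) (s1, s2) rho =
  sqrt (w1 ^ 2 * s1 ^ 2 + w2 ^ 2 * s2 ^ 2 + 2 * w1 * w2 * s1 * s2 * rho)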

In the first instance, all you can do to predict the future correlation between two stocks is to look at their history – if they were historically correlated, we can say that they probably will be correlated in the future. However, we can probably do slightly better than that, and simplify the risk model at the same time using the following trick:

We make the assumption that the only reason that two stocks are correlated is that they share some factor in common: if a little paper manufacturer in Canada is highly correlated to a mid-sized management consultancy firm in Australia, we might say that it’s only because they’re both correlated to the market. Basically you have, say, 50 hypothetical influences (known as “factors”), such as “telecoms”, “large cap stocks” or “the market”, and you say that stocks can be correlated to those factors. You then ban the risk model from having any other view of correlation: the risk model won’t accept that two stocks are simply correlated to each other – it will only say that they’re both correlated to the same factors.

This actually helps quite a bit because the risk model ends up being much smaller – this step reduces the size of the risk model on the hard drive by up to 100 times, and it also speeds up most calculations that use it. If the factors are chosen carefully, it can also improve accuracy – the approximation that stocks are only correlated via a smallish number of factors can theoretically end up averaging out quite a lot of noise that would otherwise make the risk model less accurate.
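In matrix terms this is the usual factor-model decomposition of the covariance matrix, Sigma = B F B' + D. A minimal sketch using hmatrix (layout and names are my own illustration, not our production code) is:

import Numeric.LinearAlgebra

-- Stock covariance implied by a factor model: factor exposures B (N x K),
-- factor covariance F (K x K) and a diagonal of stock-specific variances.
factorModelCovariance :: Matrix Double   -- ^ B: factor exposures
                      -> Matrix Double   -- ^ F: factor covariance
                      -> Vector Double   -- ^ specific variances (length N)
                      -> Matrix Double   -- ^ N x N stock covariance
factorModelCovariance b f specific = b <> f <> tr b + diag specific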

What OTAS Tech does with them

OTAS Technologies uses risk models for correlation calculations, for estimating clients’ portfolio risk, and for coming up with hedging advice. Risk models are also useful for working out whether a price movement was due to something stock-specific or to a factor move.

 

Across the entire FTSE 250 index, OTAS is flagging 4 companies whose shares have seen a significant move higher in short interest over the last week compared to normal: Petrofac, Electra Private Equity, AutoTrader and Provident Financial.

In terms of absolute percentage moves, the latter two, AutoTrader and Provident Financial, stand out as particularly unusual, as our analysis shows that these companies had practically zero stock on loan prior to the recent move.

Provident Financial:

Provident Financial historic Share Price vs Short Interest

Other OTAS Factors to note:

  • Shares are making recent new highs and continue to outperform the broader Diversified Financials sector.
  • Performance driven by index-tracker buying in expectation of inclusion in the FTSE 100 at the quarterly review (now confirmed).
  • Short Interest change could suggest Hedge Funds are positioning to take advantage of the move higher and that current demand will drop away.
  • Forward P/E valuation now looking extremely high compared to rest of the sector.
  • Shares trading at a 17% premium to the mean sell-side price target estimate; consensus Hold rating.


AutoTrader:

AutoTrader historic Share Price vs Short Interest

Other OTAS Factors to note:

  • Following solid H1 numbers mid-November, Autotrader shares are also touching new highs.
  • Supported by recent positive technicals, the shares are currently in an uptrend.
  • Having previously been zero, Short Interest has seen a 2.5% increase in the last week (the largest since listing), suggesting substantial Hedge Fund activity in the borrow/loan market.
  • Divergence is noted between short interest and price, indicating a two-way pull in sentiment between long-only and fast-money accounts.
  • Autotrader shares are trading at a 2% premium to prevailing analyst price target, consensus Buy.
  • Ex-Div in 30 days, 12m Fwd yield is unremarkable at 0.76%

 

 

Several months ago, Stack added support for building and running Haskell modules on-the-fly. This means that you no longer need to include a Cabal file to build and run a single-module Haskell application. Here’s a minimal example, HelloWorld.hs.


#!/usr/bin/env stack
-- stack --resolver lts-3.12 --install-ghc runghc

main
  = putStrLn "Hello, world"

Given HelloWorld.hs has executable permissions, ./HelloWorld.hs will produce the expected output!

Stack can be configured in the second line of the file, with general form -- stack [options] runghc [options]. Above, the --install-ghc switch instructs Stack to download and install the appropriate version of GHC if it’s not already present. Specifying the resolver version is also crucial, as it enables the build to be forever reproducible, regardless of updates to any package dependencies. Happily, no extra work is required to import modules from other packages included in the resolver’s snapshot, although non-resolver packages can be loaded with the --package switch.
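For example, a script that pulls in an extra package via --package might look like this (a minimal sketch; the random package is just an arbitrary illustration):

#!/usr/bin/env stack
-- stack --resolver lts-3.12 --install-ghc runghc --package random

import System.Random (randomRIO)

-- Roll a die, using a package made available with the --package switch.
main :: IO ()
main = randomRIO (1, 6 :: Int) >>= print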

This feature finally allows programmers to leverage the advantages of Haskell in place of traditional scripting languages such as Python. This would be especially useful for ‘glue’ scripts, transforming data representations between programs, given Haskell’s type safety and excellent support for a myriad of data formats.

The Turtle Library

Gabriel Gonzalez’ turtle library attempts to leverage Haskell in place of Bash. I write Bash scripts very occasionally and am prone to forgetting the syntax, so immediately this is an interesting proposition to me. However, does the flexibility of the command line utilities translate usefully to Haskell?

The turtle library serves two purposes. Firstly, it re-exports commonly-used utilities provided by other packages, reducing the number of dependencies in your scripts. Some of these functions are renamed to match their UNIX equivalents. Secondly, it exposes the Shell type, which encapsulates streaming the results of running some of these utilities. For example, stdin :: Shell Text streams all lines from the standard input, while ls :: FilePath -> Shell FilePath streams all immediate children of the given directory.

The Shell type is similar to ListT IO, and shares its monadic behaviour. Binding to a Shell computation extracts an element from the computation’s output stream, and mzero terminates the current stream of computation. Text operations are line-oriented, so the monadic instance is all about feeding a line of output into another Shell computation. Putting this together lets us write really expressive operations, such as the following.


readCurrentDir :: Shell Text
readCurrentDir = do
  file <- ls "."
  guard $ testfile file -- Ensure 'file' is a regular file!
  input file -- Read the file

readCurrentDir is a Shell which streams each line of text of each file in the current directory. Remember, guard will terminate the computation when file isn’t a regular file, so its contents will never be read. Without it, we risk an exception being thrown when trying to read a non-regular file such as a directory.

The library exposes functions for aggregating and filtering the output of a Shell. grep :: Pattern a -> Shell Text -> Shell Text uses a Pattern to discard non-conforming lines, identical to its namesake. The Pattern type is a monadic backtracking parser capable of both matching and parsing values from Text, and is also used in the sed :: Pattern Text -> Shell Text -> Shell Text utility. The Fold interface collapses the Shell to a single result, and sh and view run a shell to completion in the IO monad.
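Putting these pieces together, a small usage sketch of my own (using the Text-based turtle API current at the time of writing) might be:

{-# LANGUAGE OverloadedStrings #-}

import Turtle

-- Print every line of standard input that contains "error"; view runs the
-- Shell to completion, showing each surviving element.
main :: IO ()
main = view (grep (has "error") stdin)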

Perhaps unsurprisingly, Turtle’s utilities are not as flexible as their command-line equivalents, as no variants exist for different command-line arguments. Of course, it would be possible for the library to expose a ‘Grep’ module with a typed representation of the program’s numerous optional parameters, but this seems overkill when I can execute the real program and stream the results with inshell. This, then, informs my use of Turtle in the future. I do see myself using it in place of shell scripts, and not for typed representations of UNIX utilities. The library lifts shell output into an environment which I, as a Haskell programmer, can manipulate efficiently and expressively. It serves as the glue whose syntax and semantics I already reason with every day.

See Also

Stack script interpreter documentation

This post is less finance focused and more software-development focused.

At OTAS Technologies, we invest considerable time into making good software-development decisions. A poor choice can lead to thousands of man-hours of extra labour. We make a point of having an ecosystem that can accommodate several languages, even if one of them has to be used to glue the rest together. Our current question is how much Haskell to embed into our ecosystem. We have some major components (Lingo, for example) in Haskell, and we will continue to use and enjoy it. However, it is unlikely to be our “main” language for some time. So it is largely guaranteed a place in our future, but we are still pacing out the boundaries of its position with us.

We started off assuming that Python was going to simplify life. It is widely used and links to fast C libraries reasonably well. This worked well at first, but there were significant disadvantages which led us away from Python.

Python

Python’s first disadvantage was obvious from the start: Python is slow, several hundred times slower than C#, Haskell, etc., unless you can vectorise the entire calculation. People coming from a Matlab or Python background would say that you can vectorise almost everything. In my opinion, if you’ve only used Matlab or Python, you shy away from the vast number of useful algorithms that you can’t vectorise. We find that vectorising often works, but in any difficult task there’ll be at least something that has to be redone with a for-loop. It is true, however, that you can work round this problem by vectorising what you can, and using C (or just learning patience) for what you can’t.

Python’s second disadvantage though, is that large projects become unwieldy because they aren’t statically typed. If you change one object, you can’t know that the whole program still works. And when you look at a difficult set of code, there’s often nowhere that you can go to see what an object is made of. An extreme example actually came from code that went to a database, got an object and worked on that object. The object’s structure was determined by the table structure in the database. That makes it impossible to look at the code and see if it’s correct without comparing it to the tables in the database. This unwieldiness gets worse if you have C modules: Having some functionality that’s separated from the rest of the logic simply because it needs a for-loop makes it hard to look through the code.

Consequently, we now use Python only for really quick analysis. An example is loading JSON from a file and quickly plotting a histogram of market cap. For this task it’s still good: there’s no danger of it getting too complicated, nobody needs to read over the code later, and speed generally isn’t a problem.

The comparison process

The last week, though, has been spent comparing Haskell to C#. Haskell is an enormously sophisticated language that has, somewhat surprisingly, become easy to use and set up, thanks to the efforts of the Haskell community (special thanks to FP Complete there). However, it’s a very different language from C#, and this makes it hard to compare. There is an impulse to avoid comparing languages at all, because the comparison can decay into a flame war. If you google X vs Y, where X and Y are languages, you come across a few well-informed opinions, and a lot of badly informed wars.

There are several reasons for this, in my opinion. Often the languages’ advantages are too complicated for people to clearly see what they are and how important they are, and each language has some advantages and some disadvantages. A lot of the comparisons are entirely subjective – so two people can have the same information and disagree about it. The choice is often a psychology question regarding which is clearer (for both the first developer, and the year-later maintainer) and a sociology question about how the code will work with a team’s development habits.

There’s another difficulty: most of the evaluators are invested in one or the other language – nobody wants to be told that their favourite language, which they have spent 5 years using, is worse. We even evaluate ourselves according to how good we are at language X and how useful that is overall. A final difficulty is that there is peer pressure, stigma and kudos attached to various languages.

So what’s the point? The task seems so difficult that perhaps it’s not worth attempting.

The point is that it’s the only way to decide between languages. All the way through the history of science and technology there have been comparisons between different technologies; sure, the comparison is difficult, but you can’t avoid comparing things, because the conclusions are valuable. The important thing is to remember that the comparison is not going to be final, that mistakes can be made, and that it will be somewhat subjective – but that doesn’t make it pointless.

Haskell

As a disclaimer, I am not a Haskell expert. I have spent a couple of months using Haskell, and I gravitate towards the lens and HMatrix libraries. I have benefited from sitting beside Alex Bates, who is considerably better at Haskell than I am.

Haskell has a very sophisticated type system, and the language is considered to be safe, concise and fast. I suspect that I could write a lot of code in a way that makes it more general than I could in C# – often I find that my C# code is tied to a specific implementation. It doesn’t have to be – you could make heavy use of interfaces – but my impression is that Haskell is better for writing very general code.

Haskell also has an advantage when you wish to use only pure functions – its type system makes it explicit when you are using state and when you are not. However, in my experience, unintended side effects are actually a very minor problem in C#. Sure, a function that says it’s estimating the derivative of a spline curve *might* be using system environment variables as temporary variables. But probably not. If you code badly, you can get into a mess, but normal coding practices make it rare (in my experience) that state actually causes problems. Haskell’s purity guarantees can theoretically help the compiler, though, and may pay dividends when parallelising code – but I personally do not know. I personally reject the idea that state in itself is inherently bad – a lot of calculations are simpler when expressed using state. Of course, Haskell can manipulate state if you want it to, but at the cost of slightly more complexity (in my opinion) and, in the 3 or so examples that I’ve looked at, slightly longer code too.

Haskell is often surprisingly fast – this tends to surprise non-Haskell users (and the surprise is often accompanied by having to admit that the Haskell way is, in fact, faster). The extra complexity and loss of conciseness is something that better-designed libraries might overcome. This is possibly accentuated for the highest-level code: my impression is that lambda functions and the equivalent of LINQ expressions produce less speed overhead in Haskell than in C#.

Another advantage of Haskell is safety – null reference exceptions are not easy to come by in Haskell, and some other runtime errors are eliminated. However, you can still get a runtime exception in Haskell from other things (like taking the head of an empty list, or asking for the minus-1st element of an HMatrix array). On the other hand, exception handling is currently less uniform (I think) than in C#, and possibly less powerful, so again, we have room for uncertainty.

Some disadvantages of Haskell seem to be that you can accidentally do things that ruin performance (for instance using lists in a way that makes an O(N) algorithm O(N^2)) or give yourself stack overflows (by doing recursion the wrong way). However, I know that these are considered to be newbie-related, and not serious problems. When I was using Haskell for backtests, I quickly settled on a numerics library and a way of doing things that didn’t in fact have either of these problems. However, when I look over online tutorials, it’s remarkable how many tutorial examples don’t scale up to N=1e6 because of stack overflow problems.
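As a concrete illustration of the stack overflow trap (my own example, not taken from any particular tutorial):

import Data.List (foldl')

-- The lazy foldl builds up a chain of unevaluated thunks, which can blow the
-- stack once the list has around a million elements; the strict foldl'
-- forces the accumulator as it goes and runs in constant space.
sumLazy, sumStrict :: [Double] -> Double
sumLazy   = foldl  (+) 0   -- risky at N = 1e6 and beyond
sumStrict = foldl' (+) 0   -- fine at any N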

For me, perhaps the most unambiguously beneficial Haskell example was to produce HTML in a type-safe way. A set of functions are defined that form a domain-specific language. This library is imported, and the remaining Haskell code just looks essentially like a variant of HTML – except that it gets checked at compile time, and you have access to all the tools at Haskell’s disposal. You could do that in C#, but it would not look pretty, and as far as I know, it’s not how many people embed HTML in C#.
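For a flavour of what this looks like, here is a sketch using blaze-html as one plausible choice of library (the post above does not name the one we actually use):

{-# LANGUAGE OverloadedStrings #-}

import Text.Blaze.Html5 (Html, (!))
import qualified Text.Blaze.Html5 as H
import qualified Text.Blaze.Html5.Attributes as A
import Text.Blaze.Html.Renderer.Pretty (renderHtml)

-- The page reads much like HTML, but it is ordinary Haskell and is checked
-- at compile time.
page :: Html
page = H.docTypeHtml $ do
  H.head $ H.title "OTAS"
  H.body $
    H.p ! A.class_ "greeting" $ "Checked at compile time."

main :: IO ()
main = putStrLn (renderHtml page)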

But we are just starting the comparison process, and it will be interesting in the coming days, months and years to find out exactly what the strengths and weaknesses of this emerging technology are.

In a future blog post, we’ll write about the areas in which the two win and lose. But for now, it’s better to leave it unconcluded.

Janet Yellen signalled increased confidence in the US economy and reaffirmed the case for a rate hike at the December FOMC meeting, but reiterated that the decision will be data dependent, while the ECB’s additional stimulus measures disappointed the market. Despite the rebound in oil, commodities slid overnight: iron ore dropped to a decade low and copper retreated to a 5.5-year low. With the gloomy commodity price outlook and global demand growth continuing to slow, it is not surprising that OTAS is indicating negative signals on one of the global miners, Rio Tinto.

From a valuation perspective, RIO’s price-to-book value makes it the most expensive name compared with its peers.



RIO’s cost of credit has risen to 161.26 bps, significantly higher than the mean over the past year (101 bps) and up by 5.8% in the past week.


Heavily shorted name – 1.5% of Rio Tinto’s free-float shares are on loan, up by 27.8% in the past week, which is unremarkable compared with previous five-day moves.


Insiders sold shares lately – Group Director, Gregory Lilleyman, sold 13,000 shares at $50.78 last month.
