All posts for the month February, 2016


A previous post discussed the problem that the average beta of stocks in a typical portfolio is less than 1.

The consequence of this was that if you try to use a dollar hedge, you frequently end up with an overall short position — in other words, the dollar-hedged portfolio should be expected to lose money if the market goes up. The conclusion is that you should typically only hedge about 60%-70% of your dollar position, depending on the exact makeup of your portfolio.

However, you could worry that in times of crisis, the stocks might move together much more — if they’re driven by large-scale macro forces, then perhaps their correlations might go up.

I attempt to answer this question here by plotting average beta vs time. The article is somewhat technical, but the interesting result is in Figure 1, so feel free to scroll to that.




I picked the 1000 largest euro-denominated stocks (so that we don’t just measure currency vol) and worked with their betas against the Eurostoxx-50 index over the last 10 years. Each stock has its beta calculated in a moving (Gaussian) window with a width of 100 trading days, and the mean across stocks is taken.
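As a concrete sketch of how such a rolling beta can be computed, here is a minimal Python version (illustrative only: the pandas Gaussian window and its kernel width are my assumptions, not the original methodology):

```python
import numpy as np
import pandas as pd


def rolling_beta(stock: pd.Series, market: pd.Series, width: int = 100) -> pd.Series:
    """Beta of `stock` vs `market` in a moving, Gaussian-weighted window.

    beta = Cov(stock, market) / Var(market), with both moments computed as
    Gaussian-weighted rolling means (win_type='gaussian' requires scipy).
    """
    std = width / 4.0  # kernel standard deviation: an arbitrary illustrative choice

    def wmean(s: pd.Series) -> pd.Series:
        return s.rolling(width, win_type="gaussian", min_periods=width).mean(std=std)

    cov_sm = wmean(stock * market) - wmean(stock) * wmean(market)
    var_m = wmean(market * market) - wmean(market) ** 2
    return cov_sm / var_m
```

Averaging the resulting series across the universe of stocks then gives a mean-beta time series of the kind plotted in the figures here.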

Beta is hard to estimate from too little data. This is because it involves dividing two quantities which are both built from products of returns. Given that stock returns tend to have quite noticeable outliers, the product of two returns (for instance, the stock’s return multiplied by the market’s return) can vary wildly from day to day. If you keep outliers in the calculation, the resulting beta is weighted heavily towards what happened to stocks on just one or two high-volatility days; but if you take the outliers out (or clip them to a permissible range), then you don’t answer the question “what happens in high-volatility conditions?”
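To see how heavily one or two such days can dominate, here is a toy example on synthetic data (everything here is illustrative; the ±3-standard-deviation clip is an arbitrary choice of “permissible range”):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # roughly three months of daily returns
market = rng.normal(0.0, 0.01, n)
stock = market + rng.normal(0.0, 0.01, n)  # true beta is 1

# Inject a single high-volatility day where the stock moved twice the market.
market[30], stock[30] = -0.10, -0.20


def beta(s, m):
    return np.cov(s, m)[0, 1] / np.var(m, ddof=1)


def clip3(x):
    # Clip each series to +/- 3 sample standard deviations.
    lim = 3 * np.std(x, ddof=1)
    return np.clip(x, -lim, lim)


raw = beta(stock, market)                   # pulled towards the outlier day
robust = beta(clip3(stock), clip3(market))  # closer to the quiet-day beta of 1
```

With the outlier kept, the single crash day drags the estimate well above 1; with it clipped, the estimate mostly reflects the other 59 quiet days, and says little about high-volatility behaviour.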

So we need several months’ worth of data realistically, to get a handle on whether beta is currently high or low. There’s also a risk that the numbers we get will be specific to the methodology we choose. For instance, if, during a crash, all the stocks were to crash, but not all on the same day, then the 1-day returns might show low beta, but the 5-day returns might have a much higher beta.

The only way round this is to try lots of methodologies and see if they agree.
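The horizon effect is easy to reproduce in a toy simulation, under the purely illustrative assumption that half of each stock’s response to the market arrives one day late:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 250_000  # long sample, so the estimates are tight
market = rng.normal(0.0, 0.01, n)

# The stock picks up half of each market move on the day and half the next day.
stock = 0.5 * market + 0.5 * np.roll(market, 1) + rng.normal(0.0, 0.005, n)


def beta_at_horizon(s, m, h):
    """Beta estimated from non-overlapping h-day returns."""
    s_h = s[: n - n % h].reshape(-1, h).sum(axis=1)
    m_h = m[: n - n % h].reshape(-1, h).sum(axis=1)
    return np.cov(s_h, m_h)[0, 1] / np.var(m_h, ddof=1)


b1 = beta_at_horizon(stock, market, 1)  # ~0.5: misses the lagged half
b5 = beta_at_horizon(stock, market, 5)  # ~0.9: captures most of it
```

The 1-day beta sees only the same-day half of the response, while the 5-day beta also captures the lagged half, so longer-horizon betas come out higher, exactly as described above.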



The average beta has been fairly constant over the last 10 years, and seems not to be particularly correlated with market volatility:


The beta fluctuates from year to year, but seems not to be directly related to volatility.

The uncertainty comes from assuming that the beta variation for a 10-day window is completely noise, and scaling the observed noise to the window used here (100 days).
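In code, that scaling is just the usual square-root shrinkage of independent noise (the function below is a hypothetical sketch of the procedure described; the window lengths are from the text):

```python
import numpy as np


def scaled_beta_noise(betas_short, short_window=10, long_window=100):
    """Treat the spread of short-window beta estimates as pure noise, then
    scale it to the longer window: averaging long_window / short_window
    times as much data shrinks independent noise by the square root."""
    noise_short = np.std(betas_short, ddof=1)
    return noise_short / np.sqrt(long_window / short_window)
```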

If we try varying the return time, we get a similar shape, but shorter timescales have lower betas. This is entirely expected once we take short-term mean reversion into account: there is a slight tendency for stocks to revert from one day to the next, and although it is difficult to profit from, the effect is strong enough to increase correlations over longer timescales:



Shorter return periods give lower betas because of short-term mean-reverting price fluctuations.


Then, trying median beta rather than mean, and trying a less aggressive outlier reducing process:


Different methods give vaguely similar betas.

So it seems that, at least for these 1000 stocks, beta is fairly unconnected to volatility.

One last test was to try the beta vs a home-made market (the largest 100 stocks in the same universe):


When a different definition of ‘Market’ is used, you get a comparable beta time series, but the details are different.



Beta did vary from year to year, and the variation appears to be significant, although the uncertainty in the beta estimate is itself hard to quantify. There seems not to be a strong link between mean beta and volatility, though.


Underlying data courtesy of Stoxx. The Stoxx indices are the intellectual property (including registered trademarks) of STOXX Limited, Zurich, Switzerland and/or its licensors (“Licensors”), which is used under license. None of the products based on those Indices are sponsored, endorsed, sold or promoted by STOXX and its Licensors and neither of the Licensors shall have any liability with respect thereto.


The pricing of risk for large blocks of stock can be a somewhat arbitrary process, with portfolio and risk managers applying their own techniques and models to establish what they deem to be the fair transaction price for a block of shares.

Typical starting points include: identifying what percentage of historic ADV the block represents, using generic historical return screens to provide percentage discount estimates, and analysing recent performance or price levels. Whilst informative, such basic inputs may not always provide the whole picture.

Before a level can be truly established, a multitude of current market variables needs to be analysed and understood in terms of their potential influence on the final price. Within the OTAS suite, two components bring together a complete range of pricing and factor analysis, providing money managers with the tools to do this effectively.

Price Formation –
The Schedule provides an optimised estimate (‘Utility’) establishing an indicative cost framework from which an initial risk price can be derived. It considers the total size of the block and then models the total expected trading costs* of executing in the market, based on our proprietary impact and risk forecasts. The live Microstructure charts can also be referenced to analyse the current market characteristics of the stock, further assisting in establishing a price level.

*assuming low/moderate risk aversion

Applying Risk Factors –
Core Summary for single stocks lets you independently evaluate the current state of a company by analysing a complete range of market factors whilst highlighting extremes and outliers across the data set. Based on the evidence, users can evaluate their potential effect on pricing and how/if this should translate in terms of discounting from the Utility.

  • Benchmark performance, including current chart technicals, and compare with trading volume analytics.
  • Evaluate fundamental factors such as recent earnings momentum, relative valuation extremes, dividend expectations and growth assumptions.
  • Gauge current internal/external company sentiment via insider transactions, news intensity or unusual behaviour in the derivative and credit markets.
  • Establish the extent of event pre-positioning and demand from short covering via the Alpha Capture trade ideas monitor and Short Interest indicators.
  • Monitor upcoming event risk.



The Korean cosmetics names had a strong run yesterday, after being hit hard by concerns of slowing revenue growth in China when Cosmax (192820 KS), one of the largest cosmetics original equipment manufacturers in Korea, released a disappointing result. With the KOSDAQ showing weakness, a number of high-P/E names were sold off and switched into value names. From OTAS we can see a few positive signals suggesting opportunities to buy at this level:


LG Household & Healthcare (051900 KS), one of the largest cosmetics brand names in Korea, saw heavy bargain hunting by foreign investors yesterday, with volume historically high relative to its ADV. Yesterday’s volume was 101.5k shares against a 30-day ADV of 47.7k:


Its valuation looks appealing again. The 12-month forward P/E of 24.4x has gradually come down to an inexpensive level over the past six months:


In terms of OTAS Technical signals, an OTAS Full Stochastic (+) signal fired on yesterday’s close. On average this signal generates a 4.3% return over the following 20 trading days, and its success rate has been as high as 72% since 2006:


OTAS stamps are one of the most recognisable ways in which we visually represent our data. We generate stamps for both single-stock and list data across many metrics, always seeking to present the most important information clearly and concisely. Currently, our stamps are handwritten in the SVG vector graphic format. In this post, I’m going to explore recreating one of our stamps in Haskell using the diagrams framework, leveraging the expressiveness and reuse that come with the abstraction as much as possible. This is not a diagrams tutorial; please refer to the manual for a comprehensive introduction.

OTAS single-stock stamps


I’m going to recreate one of our simpler stamps, the Signals stamp. This is purely for simplicity: I am confident diagrams is expressive enough to recreate all of our stamps. The Signals stamp displays the most recent close-to-close technical signal which has fired for the stock, showing the number of days ago the signal fired and the average one-month return the stock has experienced when this signal has fired in the past. Here’s an enlarged image of the stamp:

diagrams provides a very general Diagram type which can represent, amongst others, two-dimensional and three-dimensional objects. I’ll build up the stamp by combining smaller elements into a larger whole, focusing first on the core details contained in the inner rectangle: the row of boxes and lines of text. As we’ll see, the framework provides a rich array of primitive two-dimensional shapes with which to begin our diagram, and a host of combinators for combining them intuitively. Bear in mind that the Diagram type is wholly independent of the back-end used to render it: back-ends exist for software such as Cairo and PostScript as well as a first-class Haskell SVG representation provided by the diagrams-svg library.

First, let’s get some imports and definitions out of the way:

{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE OverloadedStrings #-}

module Stamps where

import Diagrams.Prelude
import Diagrams.Backend.SVG
import Text.Printf

data SignalsFlag = SignalsFlag
  { avgHistReturn :: Double
  , daysAgo :: Int
  }

lightGrey, mediumGrey, darkGrey, brightGreen, steel, mud :: Colour Double
lightGrey   = sRGB24read "#BBBBBB"
mediumGrey  = sRGB24read "#AAAAAA"
darkGrey    = sRGB24read "#999999"
brightGreen = sRGB24read "#40981C"
steel       = sRGB24read "#101010"
mud         = sRGB24read "#2F2F2F"

Our point of reference is Diagrams.Prelude in the diagrams-lib package (I’m using version 1.3), which exports the core types and combinators together with handy tools from lens and base. Additionally, we import Diagrams.Backend.SVG from diagrams-svg as our back-end. As I explained above, Diagrams is back-end agnostic: we import one here only to simplify the type signatures. The remainder of the above snippet defines a representation of a signal and a set of colours, read from a hexadecimal colour code.

Primitive shapes such as squares and rectangles, as well as combinations of these, have type Diagram B (the B type is provided by the back-end). Crucially, each diagram has a local origin which determines how it composes with other diagrams. As you may guess, regular polygons place their local origin at their centre by default, but the origin may be harder to determine for more complex structures.

The most basic combinators are (<>), (|||) and (===) and their list equivalents mconcat, hcat and vcat. (<>) (monoidal append) superimposes the first diagram atop the second (clearly diagrams form a monoid under such an operation), so circle 5 <> square 3 is a circle of radius 5 overlaid on a 3 by 3 square, such that their origins coincide (the resulting diagram has the same origin). (|||) combines two diagrams side-by-side, with the resulting origin equal to the origin of the first shape, while (===) performs vertical composition. These combinators are instances of the more general beside function, but they are sufficient here. Additionally, beneath is a handy synonym for flip (<>), and # is reverse function application, provided to aid readability.

Here’s the function to generate the internals of the signals stamp. Remember, we’re using a SignalsFlag to generate a Diagram B, hence the type signature of signalsStamp.

signalsStamp :: SignalsFlag -> Diagram B
signalsStamp flag
  = vcat
      [ strutY 1.2 `beneath` (text perf
          # fontSizeL 1.2
          # fc white)
      , strutY 1 `beneath` (text "20d avg. perf."
          # fontSizeL 0.8
          # fc mediumGrey)
      , strutY 0.5
      , boxRow
          # alignX 0
      , strutY 1 `beneath` (text "days ago"
          # fontSizeL 0.8
          # fc mediumGrey)
      , strutY 1 -- Padding for aesthetics
      ]
  where
    perf :: String
    perf = printf "%+.2f%%" (avgHistReturn flag)

    boxRow :: Diagram B
    boxRow = hcat boxes
      where
        boxes = map (\k -> box k (k == daysAgo flag)) [5, 4 .. 1]

        box n hasSignal
          = sq `beneath` (text (show n)
              # fc white)
          where
            sq = unitSquare
                   # lc lightGrey
                   # (if hasSignal then fc brightGreen else id)

Before I explain the code, here’s the resulting SVG:

The centre of the signals stamp


The top level of the function vertically concatenates (vcat) the four components of the diagram. Text is enclosed in strutYs, invisible constructs which represent empty space in the diagram, and which are also used for padding. Font size is adjusted relative to the enclosing strut with the fontSizeL function, so the size of the rendered text is a function both of the size of the strut and the font size. boxRow is a Diagram B representing the row of boxes and needs to be aligned with alignX to recentre the origin horizontally prior to vertical concatenation.

Constructing boxRow requires horizontally concatenating (hcat) five boxes labelled 5 through 1, with the box corresponding to the signal’s daysAgo field shaded green. The unitSquare primitive constructs a 1 by 1 square, which we fill with fc if the boolean hasSignal is true. Remember, to insert text into the square we superimpose the string onto it with (<>). A simple map generating the box indices and hasSignal values finishes the construction of the row.

From this small example, I hope it’s clear that simple diagrams, at least, may be constructed in a clear, declarative way. Next, I want to focus on constructing the series of rectangles which will surround the above image, the bezel. This is where the higher-level representation of diagrams really shines: reusability. We can construct a function which will take any diagram, scale it to a uniform size, and then enclose it in our bezel. This could be reused for every stamp we create, with no code duplication. To implement this, we’ll create a function bezel :: String -> Diagram B -> Diagram B which will wrap the given diagram in a series of rectangles and add the given title:

bezel :: String -> Diagram B -> Diagram B
bezel title stamp
  = mconcat
      [ (title' === centre)
          # alignY 0
      , roundedRect (r + innerWidth + outerWidth + outerWidth')
                    (1 + innerWidth + outerWidth + titleHeight + outerWidth')
                    curvature
          # fc mud
          # font "Arial"
      ]
  where
    -- Ratio between width and height of stamp
    r = 0.8

    stamp' = stamp
        # scaleToY 1
        # scaleToX r

    curvature   = 0.1
    innerWidth  = 0.025
    outerWidth  = 0.075
    outerWidth' = 0.075
    titleHeight = 0.15

    title' = strutY titleHeight `beneath` (text title
        # fc darkGrey
        # fontSizeL titleHeight
        # font "Arial")

    centre = mconcat
        [ stamp'
            # alignY 0
        , roundedRect (r + innerWidth) (1 + innerWidth) curvature
            # fc steel
        , rect (r + innerWidth + outerWidth) (1 + innerWidth + outerWidth)
            # fc black
        ]

The core specification is given at the top level and in the definition of centre. centre encloses the scaled stamp (of aspect ratio r) in first a steel-coloured rounded rectangle and then a black regular rectangle. The scaling is important, as it allows the function to operate correctly regardless of the width and height of the diagram argument. Then, the title is vertically concatenated (===) to this object and superimposed onto a final mud-coloured rounded rectangle.

Here’s the result of bezel "Signals" mempty, the bezel enclosing the empty diagram:

The empty bezel


And here’s the final result, bezel "Signals" (signalsStamp $ SignalsFlag 2.06 5):

The final stamp


I’m very happy with this result, especially for such straightforward code. I haven’t been truly faithful to the dimensions of the original image, but they could be recreated with a little more effort. As I mentioned, our bezel definition is reusable across other diagrams, so to recreate our other stamps I would only need to focus on the core structure, logic which I’m confident would be straightforward with diagrams. But diagrams is capable of far more than this, and I encourage you to explore the manual for an in-depth introduction.

How do you currently demonstrate best execution, monitor in-trade benchmarking and analyse post-trade performance? Within its suite of applications, OTAS offers a number of components which assist traders in synchronising their execution capabilities, from strategy implementation to trade-efficiency measurement, from receipt of order through to trade completion.

Trade Schedule –
Used for pre-trade strategy selection: calculates the optimal trading schedule to minimize liquidity impact, market risk and information dissemination, and displays the suggested participation rate and expected trading costs. It also offers cost sensitivities for standardised POV rates. The Schedule dynamically adjusts intra-day, using forecasting models that monitor liquidity, volume and spread to account for changing market conditions.

In-Trade Performance –
Monitors trader performance with real-time benchmarking of live orders including actual and expected PnL, slippage, order completion calculations and POV estimates.

Post Trade Analysis –
OTAS Microstructure offers the unique ability to visualise the historic intra-day market conditions of an order, providing a record of what happened and offering supporting evidence for why trading decisions were made, in a way that compliance will also recognise and approve.

What’s new: coming soon is our Lingo for Microstructure, which provides an intra-day narrative on single-stock behaviour to overlay with order TCA metrics. Here is an early preview of what’s to come. All feedback is welcome.

If you require further information on our integrated solutions, including channel partners, or help navigating any other aspect of OTAS, please contact your Account Director.