FINANCIAL MARKETS AND QUANTITATIVE MODELS
Quantitative modelling of financial markets
The analysis and study of financial markets and related instruments – with a
view to achieving greater predictability – began to strongly attract the attention
of international economists during the boom years following the end of World War
II. While conceding that the nature of economic and financial phenomena is different
from that of physics, scholars have pursued an approach based on mathematics and statistics, the so-called quantitative approach. The models developed were appreciated not only for the sophisticated
mathematics and rigorous argumentation offered, but also for the relatively strong
prescriptions that could be drawn from their application.
For example, the “Capital Asset Pricing Model” (Treynor, Sharpe, Lintner and
Mossin, in the early 1960s) tells us that – given market efficiency – it is not
necessary to analyse the individual securities of a given market: it is enough
to invest in an appropriate mix of the market index (the S&P 500, for example)
and of liquid assets (or debt), a mix which in turn depends on our propensity or
aversion to risk. In other words, in an efficient market – i.e. one where share
prices incorporate all available information – prudent investors will put a small
portion of their capital in a fund representing the S&P 500 index and the rest
in risk-free securities, while the aggressive investor would be willing even to
get into debt to invest in the fund/index.
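As a rough illustration of this two-fund prescription, the sketch below (in Python, with purely hypothetical figures for the risk-free rate, the index return and its volatility) shows how the mix of index fund and cash or debt determines the expected return and risk of the overall portfolio:

def two_fund_portfolio(weight_in_index, risk_free, market_return, market_vol):
    """Expected return and volatility of a mix of the index fund and cash.

    A weight above 1 corresponds to the aggressive investor who borrows at
    the risk-free rate to lever up the index position.
    """
    expected = weight_in_index * market_return + (1 - weight_in_index) * risk_free
    volatility = weight_in_index * market_vol   # cash is treated as riskless
    return expected, volatility

rf, rm, vol_m = 0.03, 0.08, 0.15                # hypothetical annual figures
for w in (0.3, 1.0, 1.5):                       # prudent, fully invested, leveraged
    r, v = two_fund_portfolio(w, rf, rm, vol_m)
    print(f"index weight {w:.1f}: expected return {r:.1%}, volatility {v:.1%}")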
However, if we have expectations regarding the performance of individual securities,
“Modern Portfolio Theory” shows us how to select an optimised portfolio, given
expected yields, volatility (considered as a measure of the variability of prices
over time) and the related correlations. The model selects those securities whose
returns are expected to be highest, with the lowest expected volatility and the
smallest degree of correlation with the other securities (i.e. whose prices tend
to change in an “independent” manner, on the basis of factors specific to each
security).
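The role played by correlation can be seen in a minimal two-asset example; the volatilities and weights below are invented, and only the effect of lowering the correlation is of interest:

import numpy as np

def portfolio_volatility(weights, vols, corr):
    """Portfolio volatility from asset volatilities and a correlation matrix."""
    cov = np.outer(vols, vols) * corr
    return float(np.sqrt(weights @ cov @ weights))

weights = np.array([0.5, 0.5])
vols = np.array([0.20, 0.20])                   # both assets have 20% volatility
for rho in (1.0, 0.5, 0.0, -0.5):
    corr = np.array([[1.0, rho], [rho, 1.0]])
    print(f"correlation {rho:+.1f} -> portfolio volatility "
          f"{portfolio_volatility(weights, vols, corr):.1%}")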
As for the broader issue of Risk Management (i.e. those techniques designed to
measure and manage financial risk), “Value at Risk” (VaR) estimates the maximum
level of losses we can expect from our portfolio at a given confidence level. If
the model functions correctly, losses are, say, lower than 3% in 95% of cases,
allowing us to gauge the portfolio's risk.
As with all probabilistic models, a degree of uncertainty must, nevertheless,
be taken into due consideration.
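A minimal sketch of the idea, under the common (and, as discussed below, debatable) assumption of normally distributed returns, could look as follows; the portfolio value, mean and daily volatility are hypothetical:

from scipy.stats import norm

def parametric_var(portfolio_value, mean_return, vol, confidence=0.95):
    """Loss threshold that should not be exceeded with the given confidence."""
    worst_return = mean_return + vol * norm.ppf(1 - confidence)
    return -portfolio_value * worst_return

value = 1_000_000                      # portfolio worth 1 million
daily_mean, daily_vol = 0.0, 0.01
var_95 = parametric_var(value, daily_mean, daily_vol, 0.95)
print(f"95% one-day VaR: {var_95:,.0f} "
      "(losses should stay below this in 95% of days, if the model holds)")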
Criticised for their aggressive and speculative investment style, hedge funds typically rely on a series of highly sophisticated models for the valuation
of assets, the building of portfolios and the management of risk.
Similarly sophisticated algorithms are at the base of “flash trading”: software
operating within investment banks that reacts automatically to market supply and
demand in nanoseconds, generating buy and sell orders to make very short-term
profits. These and other forms of “automatic” transactions are
believed to have been behind the “flash crash” of May 6, 2010, when the stock market suddenly plunged about 10% within minutes,
only partially recovering those losses by the end of the day.
Criticism of quantitative modelling and the crisis of 2008
Although these models have generally been held in high regard, they have
also been criticised and have often failed to meet expectations.
Firstly, the social sciences provide “lower quality” results than the physical
sciences because their main object of study, man, is endowed with freedom of
choice and is, as such, less predictable. It should also be observed that in the
socio-economic sciences the object of study can be modified by the researchers
themselves, since the application of a model by investors can itself influence
market prices.
Secondly, given that a model is a simplification of reality, many believe the
gap between the two is particularly conspicuous in the world of finance – to the
extent, in fact, that the very validity of a model can be challenged. The
Black-Scholes model, for example, assumes rational operators who are risk
neutral, endowed with unlimited credit at constant rates, and all having access
to the same information.
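For reference, the formula itself prices a European call option from a handful of idealised inputs; the sketch below is a standard textbook implementation with invented figures, not a description of any bank's actual pricing code:

from math import log, sqrt, exp
from scipy.stats import norm

def black_scholes_call(spot, strike, rate, vol, maturity):
    """Black-Scholes price of a European call under the stated assumptions."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm.cdf(d1) - strike * exp(-rate * maturity) * norm.cdf(d2)

# Hypothetical inputs: at-the-money call, 3% rate, 20% volatility, one year.
print(f"Call price: {black_scholes_call(100, 100, 0.03, 0.20, 1.0):.2f}")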
Thirdly, models are developed on assumptions that are often unrealistic. A case
in point is the so-called “normality” of returns, an assumption typical of most
models. In practice it is well known that at daily frequencies this assumption
clashes with real data: extreme events, such as great crashes and recoveries,
occur more frequently than the normal distribution would predict (“fat tails”).
Some argue, on the other hand, that the results of a given model remain
substantially valid even when the assumption on which it is based proves to be
flawed.
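A back-of-the-envelope comparison makes the point. Under a normal distribution, a daily move beyond four standard deviations should occur roughly once every sixty years of trading; a fat-tailed alternative (here a Student-t with three degrees of freedom, chosen purely for illustration) produces such moves far more often:

from scipy.stats import norm, t

threshold = 4.0                                # a four-standard-deviation daily move
days_per_year = 252

p_normal = 2 * norm.sf(threshold)              # two-sided tail probability, normal law

df = 3                                         # degrees of freedom (assumed)
scale = ((df - 2) / df) ** 0.5                 # rescale the t so it has unit variance
p_fat = 2 * t.sf(threshold / scale, df)

print(f"Normal law:    one such move every {1 / (p_normal * days_per_year):,.0f} years")
print(f"Student-t({df}): one such move every {1 / (p_fat * days_per_year):,.2f} years")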
Lastly, models are seriously undermined by market stress conditions, during which
traditional patterns fall out of joint. Risk management models were stretched to
the limit by the 2008 crisis because, during a financial crash, correlations soar
and all financial assets plunge at the same time, nullifying all efforts at
diversification.
The need for liquidity forces operators to sell not only those financial instruments that are not performing
satisfactorily but often, and indiscriminately, their entire portfolio. When this
occurs, portfolio losses can be far higher than what would have been expected
from the application of a traditional risk management model. As a matter of fact,
commonly applied models are based on Value at Risk, in which correlations are
assumed to be stable.
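A stylised example of the effect: the same equally weighted portfolio, evaluated with the same individual volatilities, looks much riskier once the average correlation jumps from a calm-market level towards one. All figures are invented:

import numpy as np

weights = np.full(4, 0.25)                     # equally weighted, four assets
vols = np.array([0.15, 0.20, 0.25, 0.30])

def portfolio_vol(corr_level):
    corr = np.full((4, 4), corr_level)         # same correlation between every pair
    np.fill_diagonal(corr, 1.0)
    cov = np.outer(vols, vols) * corr
    return float(np.sqrt(weights @ cov @ weights))

for label, rho in (("calm market", 0.2), ("crisis", 0.9)):
    print(f"{label}: average correlation {rho:.1f} -> "
          f"portfolio volatility {portfolio_vol(rho):.1%}")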
The investments of the famous Long Term Capital Management (LTCM) fund were
similarly based on highly sophisticated models. Dubbed the “hedge fund of the
Nobel Prize winners,” LTCM crashed in 1998 and had to be bailed out, under the
supervision of the Fed, by a consortium of its creditor banks.
Asset allocation models
Asset allocation is aimed at building a portfolio of financial assets to meet the specific requirements of an investor, and based on a set of given conditions with a view to limiting risk. Current models derive from the original work by
Markowitz, who proved that on the basis of a number of standard assumptions, and
on the knowledge of expected returns, volatility and correlations of underlying
assets, it is possible to determine an “optimised portfolio” – that is, one generating the maximum return for a given level of volatility/risk.
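A minimal sketch of this optimisation, with invented expected returns, volatilities and correlations, is shown below; it searches for the lowest-volatility portfolio that reaches a target expected return, with weights summing to one and short selling allowed for simplicity:

import numpy as np
from scipy.optimize import minimize

mu = np.array([0.04, 0.06, 0.09])              # expected returns (assumed)
vols = np.array([0.05, 0.10, 0.18])
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr
target = 0.065                                 # required expected return

def variance(w):
    return w @ cov @ w

constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
               {"type": "eq", "fun": lambda w: w @ mu - target})
result = minimize(variance, x0=np.full(3, 1 / 3), constraints=constraints)
w = result.x
print("optimised weights:", np.round(w, 3))
print(f"expected return {w @ mu:.2%}, volatility {np.sqrt(variance(w)):.2%}")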
While the methodology seemed to be the instrument investors had longed for in
their task of setting up a portfolio in a rigorous way, it once again rested on
assumptions that were not realistic (normality of returns, no transaction costs
or taxes). The structure of the model also appeared too simplistic, based as it
was on the figure of the “optimising” investor, one capable of estimating the
distribution of returns with a fair degree of accuracy. As a result, the model
tended to produce portfolios overly concentrated in the assets that appeared
least correlated, least volatile or highest in expected return. In other words,
it is a model that tends to magnify estimation errors and is consequently not
sufficiently “robust.”
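The lack of robustness is easy to reproduce: re-running the same optimisation after nudging a single expected return by half a percentage point (an entirely plausible estimation error) shifts the resulting weights markedly. The inputs are the same invented figures as in the previous sketch, repeated so that the snippet stands on its own:

import numpy as np
from scipy.optimize import minimize

vols = np.array([0.05, 0.10, 0.18])
corr = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.5], [0.2, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr
target = 0.065

def optimise(mu):
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w, m=mu: w @ m - target})
    return minimize(lambda w: w @ cov @ w, np.full(3, 1 / 3), constraints=cons).x

base = np.array([0.04, 0.06, 0.09])
bumped = base + np.array([0.0, 0.005, 0.0])    # small estimation error on asset 2
print("weights with base estimates:  ", np.round(optimise(base), 3))
print("weights with bumped estimates:", np.round(optimise(bumped), 3))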
Over the past 10-20 years, many proposals have been advanced to improve the model,
some based on the optimisation process, others on the estimation of the parameters.
Although a lot has been done to overcome the shortcomings mentioned above, much
still needs to be done.
In this light, the Black-Litterman model, which relies on sophisticated statistics
designed to reduce the number of parameters to be estimated, and the Michaud model,
which approaches optimisation through a statistical resampling technique designed
to further stabilise results, are just two of the approaches that have been
developed in this field.
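A rough sketch of the resampling idea: rather than optimising once on the point estimates, one simulates many plausible sets of expected returns, optimises on each, and averages the resulting weights. The inputs, the size of the assumed estimation error and the number of simulations below are all invented, and this is only a schematic rendering of the approach, not Michaud's actual procedure:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu = np.array([0.04, 0.06, 0.09])
vols = np.array([0.05, 0.10, 0.18])
corr = np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.5], [0.2, 0.5, 1.0]])
cov = np.outer(vols, vols) * corr
target = 0.065

def optimise(m):
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ m - target})
    return minimize(lambda w: w @ cov @ w, np.full(3, 1 / 3), constraints=cons).x

# Each draw perturbs the expected returns to mimic estimation uncertainty
# (cov / 60 is an assumed error covariance, as if means came from 60 observations).
samples = [optimise(rng.multivariate_normal(mu, cov / 60)) for _ in range(200)]
print("single-run weights:  ", np.round(optimise(mu), 3))
print("resampled (averaged):", np.round(np.mean(samples, axis=0), 3))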
And there is no doubt these latest developments will not be the last, because
the topic is not only of key interest but also continually evolving.
New input, it should be observed, has now come from the field of behavioural
psychology applied to economic choices: the so-called “behavioural finance”,
still in its embryonic stage, which opens up unexplored areas of investigation.
Conclusion: the Generali approach
Fully aware of the increasingly specialised technical content of financial
investment, the Generali Group has set up adequate structures for the management
of its assets. In 2000 it established an investment company, renamed Generali
Investment, which has since developed into the holding company of a network of
local asset management companies operating in Italy, France, Germany and other
countries; the network manages assets worth over € 300 billion in a coordinated
way and employs nearly 100 analysts and asset managers.
Current Group asset management activity requires more extensive quantitative
modelling than in the past, with a focus on various areas of competence, ranging
from macro-economic forecasts based on econometric modelling, to the assessment
of equity markets based on the dividend discount model, to the quantitative
management of portfolios targeting the industry's highest benchmarks.
In the words of Mr. Andrea Rabusin, Chief Portfolio Officer of Generali
Investments: “if on the one hand it is important to fully understand financial
models – to be able to use them and gain as much information as possible from
them – it is on the other just as crucial to take into account their shortcomings
and limitations, without overstating their potential. They are instruments that
have to be assessed with both skill and caution, and with ordinary due diligence”.
Daniele Marvulli, CFA
Quantitative Portfolio Management