Methods for network (re)construction -
an application to systemic risk assessment

Method 1: systemic risk assessment via maximum-entropy network reconstruction

--basic idea

use publicly available data (size + leverage + asset class capitalization) to reconstruct a bipartite network of investors and investments; then employ the vulnerable banks methodology to measure the systemicness of each investor.

The rationale behind the use of maximum entropy is that it enables the reconstruction of the (bipartite) network of portfolio compositions of companies from publicly available node features only, namely the size and leverage of each company and the total capitalization of each asset class. Di Gangi, Lillo, Pirino (2015) show that the systemic risk metric computed on the reconstructed network (via the vulnerable banks framework of Greenwood et al. (2015), which in general requires the full portfolio composition of each company) is a good approximation of the same metric computed on the real network of credits and liabilities. Thus, the method allows for systemic risk assessment from partial information.

--what the method does
The methodology is composed of two distinct parts: (i) the reconstruction of the bipartite network of portfolio holdings from partial information via maximum entropy, and (ii) the computation of the systemic risk measures of the vulnerable banks framework on the reconstructed network.

Matlab codes for the entire method are available for download at
---> DOWNLOAD LINK With Instruction Page
The bipartite network reconstruction from partial information is completely independent of the systemic risk/financial application and can be used to reconstruct any kind of bipartite network starting from information about node strengths and/or degrees.

--workflow

systemic risk of a market participant, or of a group of them, is the amount of loss that the whole market would suffer in case of a negative shock to that specific agent or group.

Greenwood et al. (2015) propose network metrics to measure the effect of fire sales in response to a shock on asset prices. The network considered is a bipartite network of financial institutions and the asset classes they invest in. The idea is to measure the vulnerability of the system (and of single banks) as the percentage of equity wiped out if the price of assets undergoes a negative shock. When a financial institution incurs a loss on the asset side, the typical behavior is to sell a portion of its assets, a mechanism known as leverage targeting, carried out in order to keep the asset-to-equity ratio constant. This behavior can trigger a negative loop by causing the market value of an asset class to drop due to massive sales, thus propagating the initial negative shock. This is what is known as the fire sale channel of systemic risk contagion.
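To make the leverage-targeting mechanism concrete, here is a minimal Python sketch of one round of fire sales in the spirit of the vulnerable banks framework. The holdings matrix, the linear price-impact coefficient and all numbers are illustrative assumptions, not data or code from the papers.

import numpy as np

# Illustrative inputs (not real data): W[i, j] = dollars bank i holds of asset class j
W = np.array([[80.0, 20.0],
              [30.0, 70.0]])
equity = np.array([10.0, 12.0])          # book equity of each bank
shock = np.array([-0.10, 0.0])           # initial return shock to each asset class
impact = 1e-3                            # assumed linear price impact per dollar sold

assets = W.sum(axis=1)                   # total assets of each bank
leverage = assets / equity               # leverage target (assumed constant)

# Step 1: direct losses from the initial shock hit both assets and equity
loss = W @ shock                         # dollar loss of each bank (negative numbers)
assets1 = assets + loss
equity1 = equity + loss

# Step 2: leverage targeting -> each bank sells assets to restore its leverage,
# selling proportionally to its current portfolio weights
sales = np.maximum(assets1 - leverage * equity1, 0.0)   # dollars each bank must sell
weights = W / assets[:, None]
sold_per_asset = weights.T @ sales                      # total dollars sold of each asset class

# Step 3: the price impact of the fire sales generates a second round of losses
second_shock = -impact * sold_per_asset
second_loss = W @ second_shock

# Aggregate-vulnerability-style summary: share of system equity wiped out by the fire-sale round
aggregate_vulnerability = -second_loss.sum() / equity.sum()
print(f"aggregate vulnerability ~ {aggregate_vulnerability:.2%} of system equity")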

By looking only at the total size of a bank (= total value of its assets), its equity (= total capitalization, i.e. the value of its shares) and the total value of each asset class, the maximum entropy method reconstructs a random network compatible with those constraints. The reconstructed network is a bipartite network banks → asset classes, where each link (i → j) is weighted by the amount bank i owns of asset class j. Computing the systemic risk measures is then a direct application of the methodology described in Greenwood et al. (2015).
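As a minimal illustration, the simplest maximum-entropy estimate compatible with known row sums (bank sizes) and column sums (asset class capitalizations) assigns to each link the product of the two marginals divided by their grand total. Di Gangi, Lillo, Pirino (2015) consider several refined variants, so the Python sketch below (with made-up numbers) only conveys the basic idea.

import numpy as np

# Publicly available marginals (illustrative numbers, not real data)
bank_assets = np.array([100.0, 250.0, 60.0])   # total invested assets of each bank
class_caps = np.array([180.0, 130.0, 100.0])   # total capitalization of each asset class

assert np.isclose(bank_assets.sum(), class_caps.sum()), "marginals must match"

total = bank_assets.sum()

# Dense maximum-entropy solution: W[i, j] = a_i * c_j / total.
# Row sums equal bank_assets and column sums equal class_caps by construction.
W_me = np.outer(bank_assets, class_caps) / total

print(W_me)
print(W_me.sum(axis=1))   # -> bank_assets
print(W_me.sum(axis=0))   # -> class_caps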

The utility of this process lies in the fact that the required input information is publicly available. However, in order to test the method, knowledge of the full network is mandatory. In Di Gangi, Lillo, Pirino (2015) an analysis is carried out on US financial institutions, which disclose information about their portfolio composition in order to meet Federal Reserve regulation requirements. Their analysis shows that the systemic risk measures computed on the reconstructed networks are a good approximation of those computed on the true network, as illustrated by the figures below.



Aggregate Vulnerability

This plot displays the time evolution of the Aggregate Vulnerability (i.e. the sum of the systemicness of all considered banks): both the true one, computed on the real network from FED data, and the reconstructed ones (via three different entropy methods: see Di Gangi, Lillo, Pirino (2015) for details). It is apparent that, at the aggregate scale, network reconstruction is indeed a good approximation.

Here we report the (quarterly) time evolution of the systemicness of two banks: both the real value (i.e. the value computed on the real network from FED data, thick line) and the 95% confidence interval of the maximum entropy reconstructed network (red dashed). Magenta dots indicate quarters in which the true systemicness is above the 95% confidence level of Q1-2001, i.e. quarters in which the systemicness of that bank is significantly larger than in Q1-2001 (taken as reference). In the first panel, where systemicness does increase, notice that the increase starts well before the onset of the Lehman crisis (Q3-2008), suggesting that systemicness may be a useful tool for surveillance activity and an early warning indicator of financial turmoil. The second panel, instead, is an example of systemicness evolution without a significant increase with respect to Q1-2001.

The European Banking Authority also publishes stress test results and related datasets on roughly 100 large European financial institutions: testing the method on these data is in progress.

--conclusion

This methodology aims at computing the systemic risk measures defined by Greenwood et al. (2015) (vulnerable banks framework) without the need to know the full portfolio composition of financial institutions. Indeed, on FED data it can be shown that the maximum entropy reconstruction of the network, starting from the sole knowledge of publicly available balance sheet data and asset class capitalization, is a good proxy for calculating systemic risk measures.

Method 2: contagion risk assessment via network construction by tail Granger-causality

--basic idea

use market transaction data to build time series of stock market returns, then employ Granger-causality on tails to build an “extreme event” causality network among stocks. Use a centrality measure (e.g. simply the degree) to measure the contagion risk of each node/stock.

The rationale behind this method is to build a causality network for extreme events among stocks, namely tail events in the equity variation of a company. The centrality of a node in this network, as measured e.g. by its degree, is then a proxy for the contagion systemicness of that company.

--what the method does
The methodology builds a network of causality relations among the components of a multivariate time series. An application to (equity → bond) risk contagion is also available.
Matlab codes implementing the methodology and its application here described (see also Corsi, Lillo, Pirino (2015)) are available for download at
---> DOWNLOAD LINK With Instruction Page
Tail Granger-causality network building for multivariate time series is completely independent of the financial application described here and can be used on any collection of time series with the aim of finding statistically significant causal relations between tail events.

--workflow

In Corsi, Lillo, Pirino (2015) this idea is applied to a dataset of (bi-daily) equity prices of 33 Global Systemically Important Banks (G-SIBs), as defined in the Basel III framework, and (bi-daily) prices of 5-year maturity government bonds of 36 countries. The authors build a bipartite causality network of bond selling/buying: a link between a bank equity and a bond is present when an extreme event in the equity is followed by an extreme drop or increase of the bond price. In the paper, the Granger causality relation is interpreted as distress buying / distress selling of bonds.

To evaluate the presence of a link, a tail Granger-causality test is implemented. Roughly speaking, a time series X is said to “Granger cause” another time series Y at time t when the history of X before t provides information that helps to predict the value of Y at time t.
Tail Granger-causality focuses on extreme (i.e. tail) events of the time series and aims at finding causal relations among them.
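The exact test used in Corsi, Lillo, Pirino (2015) may differ, but the following Python sketch illustrates one simple way to implement a tail Granger-causality test: tail events are coded as binary indicators (returns below a low quantile) and an F-test compares a model of Y's indicator on its own lags against one that also includes lagged indicators of X. The function and variable names are hypothetical.

import numpy as np
from scipy import stats

def tail_granger_pvalue(x, y, quantile=0.05, lags=1):
    """Test whether tail events of x help predict tail events of y."""
    ix = (x < np.quantile(x, quantile)).astype(float)   # tail indicator of x
    iy = (y < np.quantile(y, quantile)).astype(float)   # tail indicator of y

    T = len(y)
    target = iy[lags:]
    own_lags = np.column_stack([iy[lags - k:T - k] for k in range(1, lags + 1)])
    cross_lags = np.column_stack([ix[lags - k:T - k] for k in range(1, lags + 1)])
    const = np.ones((T - lags, 1))

    def rss(design):
        # Residual sum of squares of a least-squares fit of target on design
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_restricted = rss(np.hstack([const, own_lags]))              # y's own lags only
    rss_full = rss(np.hstack([const, own_lags, cross_lags]))        # plus x's lags

    k_full = 1 + 2 * lags                                           # parameters in full model
    f_stat = ((rss_restricted - rss_full) / lags) / (rss_full / (T - lags - k_full))
    return stats.f.sf(f_stat, lags, T - lags - k_full)

# Usage on synthetic data: a link would be drawn when the p-value is below the chosen level
rng = np.random.default_rng(0)
equity_returns = rng.standard_normal(1500)
bond_returns = rng.standard_normal(1500)
print(tail_granger_pvalue(equity_returns, bond_returns))   # large p-value expected here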

The workflow for network building is the following: (1) build the (bi-daily) return series of bank equities and bond prices; (2) identify extreme (tail) events in each series; (3) for each (bank equity, bond) pair, run the tail Granger-causality test on a rolling 3-year window; (4) draw a link whenever the test is significant, keeping causal increases and causal decreases of the bond price in separate networks.

Thus, one ends up with, for each time t, a bipartite network of causal increases and a bipartite network of causal decreases of bond prices in response to equity drops.

Finally, one can compute the density of links in the two bipartite networks as a measure of the amount of causal relations (equity drop → bond increase/decrease) during a specific 3-year window.

One can interpret these extreme events as massive buying and selling of sovereign bonds in a distressed scenario. In this respect, the higher the density of the network, the more bonds are sold or bought in response to bank distress. The network density can thus be seen as an indicator of the amount of distress selling/buying in a given time period.

Moreover, centrality measures on the causality network may be taken as proxies for systemicness. Namely, a bank with a high degree in the (bank equity drop → bond price decrease) Granger-causality network can be interpreted as a bank whose distress is ‘causing’ a massive selling of many different sovereign bonds.
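On the resulting binary biadjacency matrices, the link density and the bank degrees mentioned above reduce to simple counts; here is a minimal Python sketch with a made-up matrix (all values are hypothetical).

import numpy as np

# Hypothetical biadjacency matrix of one 3-year window:
# rows = bank equities, columns = sovereign bonds,
# A[i, j] = 1 if an equity drop of bank i Granger-causes a move of bond j
A = np.array([[1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0]])

n_banks, n_bonds = A.shape

# Link density: fraction of possible (bank, bond) pairs that are connected
density = A.sum() / (n_banks * n_bonds)

# Degree centrality: number of bonds 'caused' by each bank's distress
bank_degree = A.sum(axis=1)

print(f"density = {density:.2f}")     # -> 0.33
print("bank degrees =", bank_degree)  # -> [2 2 0]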

Reconstructed Networks

On the rows, the reconstructed networks for 3 different non-overlapping time windows (2003-2006 / 2006-2009 / 2009-2012); on the columns, the causality networks (equity drop → bond price increase) on the left and (equity drop → bond price decrease) on the right. Each panel shows a bipartite network banks → bonds. The coloring refers to S&P’s government ratings (blue=AAA, magenta=AA, green=A, orange=BBB, red=BB or worse). A link in the network means that, during the 3-year period, drops in the equity of that bank are significantly associated with an increase/decrease of the corresponding bond price.
Notice that the first period, 2003-2006, was a financially ‘calm’ period compared with 2006-2009 and 2009-2012, when two financial crises occurred: the Lehman crisis and the Eurozone crisis, respectively. Indeed, it is apparent that the density of extreme-event causality relations is much higher after 2006.

Link Density

Time evolution of the densities of the selling and buying networks, together with a third density computed via a similar Granger test but not restricted to extreme events (i.e. standard Granger-causality, answering the question of whether a change in equity significantly helps to predict the bond price change). Each data point refers to the density of the network of the window ending at that time. Horizontal lines represent confidence levels, namely the number of links that would be expected if the null hypothesis of no causal relation were true; in other words, the percentage of expected false positives.
It is clear that the two distressed periods --Lehman crisis and Eurozone crisis-- are characterized by a different behavior of financial institutions in response to distress: a peak of bond selling for the former and of bond buying for the latter.

--conclusion

This methodology is able to build an extreme-event causality network among time series in order to calculate centrality metrics of that network.
Input data can be any collection of time series; in the application described here, they are the (bi-daily) equity returns of the G-SIBs and the (bi-daily) prices of 5-year maturity sovereign bonds.

