Monitoring and assessing systemic risk in financial markets is of great importance, but it often requires data that are unavailable or available only at very low frequency. For this reason, systemic risk assessment with partial information is potentially very useful for regulators and other stakeholders. In this paper we consider systemic risk due to fire sales spillovers and portfolio rebalancing, using the risk metrics defined by Greenwood et al. (2015). By means of a method based on the constrained minimization of the cross entropy, we show that it is possible to assess aggregate and individual banks' systemicness and vulnerability using only information on the size of each bank and the capitalization of each investment asset. We also compare our approach with an alternative, widespread application of the Maximum Entropy principle, which allows one to derive graph probability distributions and to generate scenarios, and we use the latter to propose a statistical test for a change in banks' vulnerability to systemic events.
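The kind of reconstruction involved (recovering a bank-asset holdings matrix from its margins) can be sketched with iterative proportional fitting, the classical algorithm for constrained cross-entropy minimization. This is a generic illustration, not the paper's specification; the prior, sizes and capitalizations below are hypothetical.

```python
import numpy as np

def cross_entropy_fit(prior, row_totals, col_totals, tol=1e-10, max_iter=10_000):
    """Find the non-negative matrix closest (in cross entropy) to `prior`
    whose row sums equal `row_totals` (bank sizes) and whose column sums
    equal `col_totals` (asset capitalizations), via iterative proportional
    fitting (RAS)."""
    X = np.asarray(prior, float).copy()
    r = np.asarray(row_totals, float)
    c = np.asarray(col_totals, float)
    for _ in range(max_iter):
        X *= (r / X.sum(axis=1))[:, None]   # rescale rows to match bank sizes
        X *= (c / X.sum(axis=0))[None, :]   # rescale columns to match asset caps
        if np.allclose(X.sum(axis=1), r, atol=tol):
            break
    return X
```

With a uniform prior the solution reduces to the outer product of the margins; a non-uniform prior (e.g. encoding known zero holdings) makes the iteration non-trivial.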
We introduce an econometric method to detect and analyze events of flight-to-quality by financial institutions. Specifically, using the recently proposed test for the detection of Granger causality in risk (Hong et al. 2009), we construct a bipartite network of systemically important banks and sovereign bonds, where the presence of a link between two nodes indicates the existence of a tail causal relation: a tail event in the equity variation of a bank helps in forecasting a tail event in the price variation of a bond. Inspired by a simple theoretical model of flight-to-quality, we interpret links of the bipartite network as distressed trading by banks directed toward the sovereign debt market, and we use them to define indicators of flight-to-quality episodes. Based on the quality of the bonds involved, we distinguish different patterns of flight-to-quality in the 2006-2014 period. In particular, we document that, during the recent Eurozone crisis, banks with considerable systemic importance significantly impacted the sovereign debt market by chasing the top-quality government bonds. Finally, an out-of-sample analysis shows that connectedness and centrality network metrics have significant cross-sectional forecasting power for bond quality measures.
In this paper, we develop a geometric model of social choice among bundles of interdependent elements (objects). Social choice can be seen as a process of search for optima in a complex multidimensional space, and objects determine a decomposition of such a space into subspaces. We present a series of numerical and probabilistic results showing that such decompositions into objects can greatly increase decidability, as new kinds of optima (called local and u-local) are very likely to appear even in cases in which no generalized Condorcet winner exists in the original search space.
The upper tail of the firm size distribution is often assumed to follow a power law. Several recent papers, using different estimators and different data sets, conclude that Zipf's law, in particular, provides a good fit, implying that the fraction of firms with size above a given value is inversely proportional to the value itself. In this article we compare the asymptotic and small-sample properties of the different methods through which this conclusion has been reached. We find that the family of estimators most widely adopted, based on an OLS regression, is in fact unreliable and essentially useless for proper inference. This finding casts doubt on previously identified Zipf behavior. Based on extensive numerical analysis, we recommend the adoption of the Hill estimator over any other method when individual observations are available.
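For reference, the Hill estimator recommended above takes only a few lines; the snippet below is a generic illustration (not the paper's code), assuming an i.i.d. positive sample and a user-chosen number k of tail observations.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail exponent alpha based on the k largest
    order statistics; alpha = 1 corresponds to Zipf's law."""
    x = np.sort(np.asarray(sample, float))[::-1]   # descending order
    # average log-spacing of the top-k observations over the (k+1)-th
    return 1.0 / (np.mean(np.log(x[:k])) - np.log(x[k]))
```

The choice of k trades off bias (large k) against variance (small k) and is itself the subject of a large literature.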
Repurchase agreements (repos) are one of the most important sources of funding liquidity for many financial investors and intermediaries. In a repo, a borrower posts assets as collateral in exchange for funding. The capital given to the borrower is the market value of the collateral, reduced by an amount termed the haircut (or margin). The haircut protects the lender from a loss of value of the collateral contingent on the borrower's default. For this reason, the haircut is typically calculated with a simple Value-at-Risk estimation of the collateral, in order to cover the risk associated with its volatility. However, other risk factors should be included in the haircut, and a severe undervaluation of them could result in a significant loss of value of the collateral if the borrower defaults. In this paper we present a stylized model of the financial system which allows us to compute the haircut incorporating the liquidity risk of the collateral and, most importantly, possible systemic effects. These are mainly due to the similarity of bank portfolios, excessive leverage of financial institutions, and illiquidity of assets. The model is analytically solvable under some simplifying assumptions and robust to their relaxation, as shown through Monte Carlo simulations. We also identify the model parameters most critical for the determination of haircuts.
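For concreteness, the simple Value-at-Risk benchmark that the paper takes as its starting point can be sketched as follows. The confidence level, horizon and square-root-of-time scaling are illustrative conventions only; the paper's own haircut adds liquidity and systemic components on top of this.

```python
import numpy as np

def var_haircut(daily_returns, confidence=0.99, horizon_days=10):
    """Haircut as the historical Value-at-Risk of the collateral: the
    relative loss on the collateral's market value exceeded with
    probability 1 - confidence over the margin period of risk."""
    q = np.quantile(np.asarray(daily_returns, float), 1.0 - confidence)
    # scale the one-day quantile with the square-root-of-time rule
    return max(0.0, -q) * np.sqrt(horizon_days)
```

The lender then advances `(1 - haircut)` times the collateral's market value.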
We introduce a novel stochastic quantity, named excess idle time (EXIT), measuring the extent of sluggishness in observed high-frequency financial prices. Using a limit theory robust to market microstructure noise, we provide econometric support for the fact that high-frequency transaction prices are, consistently with liquidity and asymmetric-information theories of price determination, generally stickier than implied by the ubiquitous semimartingale assumption (and its microstructure-noise-contaminated counterpart). EXIT provides, for every asset and each trading day, a proxy for the extent of frictions (liquidity and asymmetric information) which is conceptually different from traditional price-impact measures. We relate it to existing measures and show its favorable performance under realistic data-generating processes. We conclude by showing that EXIT uncovers an economically meaningful short-term and long-term liquidity premium in market returns.
Sequence motifs are words of nucleotides in DNA with biological functions, e.g., gene regulation. Identification of such words proceeds through rejection of Markov models for the expected motif frequency along the genome. Additional biological information can be extracted from the correlation structure among patterns of motif occurrences. In this paper a log-linear multivariate Poisson intensity model is estimated via expectation maximization on a set of motifs along the genome of E. coli K12. The proposed approach allows for excitatory as well as inhibitory interactions among motifs and between motifs and other genomic features such as gene occurrences. Our findings confirm previous stylized facts about such interactions and shed new light on the genome-maintenance functions of some particular motifs. We expect these methods to be applicable to a wider set of genomic features.
Since the seminal contribution of Teece et al. (1994), the strength, scope and quality of corporate diversification have often been assessed by comparing the observed value of some statistic derived from the diversification patterns of a sample of firms with its expected value. The latter is obtained under a null hypothesis which assumes some random assignment of sectors to firms. The approaches generally adopted in the literature present two problems. First, being based on the observed value of a statistic, these methods can lead, depending on the nature of the sample, to noisy and non-homogeneous estimates. Second, the benchmark values used to identify the presence and strength of deterministic patterns are obtained under a specific, privileged null hypothesis. Both effects can lead to the erroneous classification of spurious random effects as deterministic. This paper shows that the adoption of p-scores as a measure of relatedness strongly alleviates the first problem, leading to cleaner and more homogeneous estimates. We design and implement a null hypothesis which rules out random artifacts and effectively identifies new features in firm diversification patterns. Using the NBER database on patents, we apply our results to the study of the relationship between the coherence and the scope of corporate patent portfolios.
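A p-score of the kind mentioned above can be illustrated with a hypergeometric null: the probability, under random assignment of sectors to firms, of observing at least as many co-diversified firms as in the data. This is a generic sketch of the idea, not the paper's exact specification.

```python
from math import comb

def p_score(n_firms, n_i, n_j, observed_joint):
    """P-value of observing at least `observed_joint` firms active in both
    sector i and sector j, when n_i and n_j firms are active in each sector
    and sectors are assigned to the n_firms firms at random
    (hypergeometric null)."""
    total = comb(n_firms, n_j)
    upper = min(n_i, n_j)
    return sum(comb(n_i, k) * comb(n_firms - n_i, n_j - k)
               for k in range(observed_joint, upper + 1)) / total
```

A small p-score flags a sector pair whose co-occurrence is unlikely under the random benchmark, i.e. a strongly related pair.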
We propose a simple univariate model for the dynamics of spot electricity prices. The model is nonparametric in the sense that it is free from parametric model assumptions and flexible in capturing the dynamics of the data. The estimation is performed in two steps. Preliminarily, spikes are identified by means of an iterative filtering technique. The series of spikes is used to estimate a seasonal spike intensity function and fitted with an exponential law. We then implement Nadaraya-Watson estimators for the drift and the diffusion coefficients on the filtered series. Monte Carlo simulations are used to evaluate estimation errors.
We fit the model to European and American time series of spot day-ahead electricity prices; in spite of the simplicity of the proposed model, our specification tests indicate a successful goodness of fit. We provide evidence of mean reversion, nonlinear volatility and seasonal spike intensity; moreover, we find that American markets show a very low level of mean reversion and lower volatility with respect to their European counterparts.
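The Nadaraya-Watson step can be sketched as follows for a generic diffusion dX = mu(X)dt + sigma(X)dW. The Gaussian kernel and bandwidth here are illustrative choices, and spike filtering (the first step above) is assumed to have been done already.

```python
import numpy as np

def nw_drift_diffusion(x, dt, grid, bandwidth):
    """Nadaraya-Watson kernel estimators of the drift mu(.) and squared
    diffusion sigma^2(.) of a diffusion observed at time step dt."""
    x = np.asarray(x, float)
    dx = np.diff(x)
    xs = x[:-1]
    mu = np.empty(len(grid))
    sigma2 = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((xs - g) / bandwidth) ** 2)   # Gaussian kernel weights
        w /= w.sum()
        mu[i] = np.sum(w * dx) / dt           # local mean of increments
        sigma2[i] = np.sum(w * dx ** 2) / dt  # local mean of squared increments
    return mu, sigma2
```

On a mean-reverting series, the estimated drift should be positive below the long-run level and negative above it.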
This study reconsiders the role of jumps in volatility forecasting by showing that jumps have a positive and mostly significant impact on future volatility. This result becomes apparent once volatility is separated into its continuous and discontinuous components using estimators which are not only consistent but also only mildly affected by small-sample bias. To this end, we introduce the concept of threshold bipower variation, based on the joint use of bipower variation and threshold estimation. We show that its generalization (threshold multipower variation) admits a feasible central limit theorem in the presence of jumps and provides, in finite samples, less biased estimates of the continuous quadratic variation than standard multipower variation. We further provide a new test for jump detection which has substantially more power than tests based on multipower variation. Empirical analysis (of the S&P500 index, individual stocks and US bond yields) shows that the proposed techniques significantly improve the accuracy of volatility forecasts, especially in periods following the occurrence of a jump.
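A minimal sketch of the threshold bipower variation idea, assuming equally spaced intraday returns and a fixed threshold (the paper's estimator is more refined, e.g. in how discarded terms are treated and how the threshold is selected):

```python
import numpy as np

MU1 = np.sqrt(2.0 / np.pi)   # E|Z| for a standard normal Z

def realized_variance(r):
    """Sum of squared returns: converges to the total quadratic
    variation, jumps included."""
    return np.sum(np.asarray(r, float) ** 2)

def threshold_bipower_variation(r, threshold):
    """Bipower variation restricted to pairs of adjacent returns whose
    squares both stay below the threshold, so that jump returns are
    excluded from the estimate of the continuous quadratic variation."""
    r = np.asarray(r, float)
    keep = (r[:-1] ** 2 <= threshold) & (r[1:] ** 2 <= threshold)
    return np.sum(np.abs(r[:-1]) * np.abs(r[1:]) * keep) / MU1 ** 2
```

Setting the threshold to infinity recovers plain bipower variation; the gap between realized variance and the thresholded estimator is then a natural jump statistic.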
The modelling of production in microeconomics has been the subject of heated debate. The controversial issues include the substitutability between production inputs, the role of time and the economic consequences of irreversibility in the production process. A case in point is the use of Cobb-Douglas-type production functions, which completely ignore the physical process underlying the production of a good. We examine these issues in the context of the production of a basic commodity (such as copper or aluminium). We model the extraction and the refinement of a valuable substance which is mixed with waste material, in a way which is fully consistent with the physical constraints of the process. The resulting analytical description of production unambiguously reveals that perfect substitutability between production inputs fails once the thermodynamics of the process is correctly accounted for. We analyze the equilibrium pricing of a commodity extracted in an irreversible way. We force consumers to purchase goods using energy as the means of payment and force the firm to account in terms of energy. The resulting market provides the firm with a form of reversibility in its use of energy. Under an energy numeraire, energy resources will naturally be used in a more parsimonious way.
Memory properties of financial assets are investigated. Using detrended fluctuation analysis (DFA), we show that long memory detection in volatility is affected by the presence of jumps, realized volatility being a biased volatility proxy in that case. We propose threshold bipower variation as an alternative volatility estimator that is unaffected by discontinuous variations. We also show that, with typical sample sizes, DFA is unable to disentangle long memory from short-range dependence with characteristic time comparable to the whole sample length.
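For reference, the DFA procedure can be sketched in a few lines; this is the standard first-order DFA on non-overlapping windows, not the paper's exact implementation.

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order detrended fluctuation analysis: slope of log RMS
    fluctuation vs log window size (0.5 for white noise, above 0.5 for
    long-range persistence)."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    fluctuations = []
    for s in scales:
        segments = y[: (len(y) // s) * s].reshape(-1, s)
        t = np.arange(s)
        f2 = []
        for seg in segments:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope
```

The short-sample caveat above corresponds to the largest scales: with few windows per scale, the fitted slope becomes too noisy to separate genuine long memory from slowly decaying short-range dependence.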
Astrocytes can sense local synaptic release of glutamate by metabotropic glutamate receptors. Receptor activation in turn can mediate transient increases of astrocytic intracellular calcium concentration through inositol 1,4,5-trisphosphate production. Notably, the perturbation of calcium concentration can propagate to other adjacent astrocytes. Astrocytic calcium signaling can therefore be linked to synaptic information transfer between neurons. On the other hand, astrocytes can also modulate neuronal activity by feeding back onto synaptic terminals in a fashion that depends on their intracellular calcium concentration. Thus, astrocytes can also be active partners in neuronal network activity. The aim of our study is to provide a computationally simple network model of mutual neuron-astrocyte interactions, in order to investigate the possible roles of astrocytes in neuronal network dynamics. In particular, we focus on the information entropy of neuronal firing of the whole network, considering how it could be affected by neuron-glial interactions.