Woodbine Associates, Inc.
Capital Markets Consulting and Research
The new era of financial reform has promised increased transparency and a reduction in systemic risk. With the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) moving through the research and rule-making phases of the new regulation, there has been substantial discussion of the cost, efficiency, and practicality of the process. We are all for clarity and consistency, where they make sense. However, there are instances where establishing a rigid boundary is impractical. In our opinion, such is the case with identifying proprietary trading.
Regulators and practitioners alike assume it would be easy to differentiate market making from proprietary trading by examining individual trades; we do not think the distinction is that clear. Both the trading community and regulators are trying to establish criteria that would definitively identify trades as either proprietary or customer trades. Such clarification would give regulators the “clear cut” ability to penalize violators of the Volcker Rule. We think a highly precise set of criteria distinguishing one type of trading from the other is not entirely possible: one person’s proprietary trade is another person’s “pre-hedge” in a client book of business.
So, how could regulators successfully distinguish proprietary trading from market making in a straightforward, easily understood way? Our answer would be to apply a general framework that evaluates trading using metrics indicating the possibility of proprietary trading. In-depth follow-up using other measures and unbiased, expert evaluation of trading patterns could then be undertaken, where necessary, to make a determination in cases where suspicious trading may or may not be proprietary. Is this preferred? We think so, as each instance would receive the individualized scrutiny needed to confirm or rule out proprietary trading. Is this practical? We believe it is, as elaborated below; the number of instances should not be substantial in the long run, as any false positive can be used to refine the flagging of suspicious trading. Is this appropriate? Again, we believe so, because proprietary intent cannot be determined on a trade-by-trade basis. It should be determined by the manner in which a book is handled over an extended period and by how that book compares with the industry norm in terms of risk and return.
So, how should regulators go about distinguishing proprietary trading from market making? One would expect a book of agency business to be hedged in such a way as to minimize market and basis risk. Return on the book is generated largely by volume of trading at the bid/ask spread and changes in inventory value. One would expect a book with embedded proprietary business to make more pronounced market risk plays to amplify returns. We would expect a book managed in a proprietary manner to display greater market risk, on an adjusted basis, than a pure agency book. We think normalized comparisons can be made across firms of various sizes by looking at the ratio of the total market risk relative to the amount of customer-generated market risk incurred over a specified period.
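As a minimal sketch of the ratio we have in mind (the function, figures, and labels below are hypothetical illustrations of ours, not data from any firm or a prescribed regulatory formula):

```python
def risk_ratio(total_market_risk: float, customer_risk: float) -> float:
    """Ratio of a book's total market risk to the market risk
    generated by its customer business over the same period.

    A pure agency book, hedged to minimize market and basis risk,
    should keep this ratio in line with peer books; a book with
    embedded proprietary positions would drift persistently higher.
    """
    if customer_risk <= 0:
        raise ValueError("customer-generated risk must be positive")
    return total_market_risk / customer_risk

# Hypothetical figures (in, say, $mm of VaR-equivalent risk):
agency_book = risk_ratio(total_market_risk=1.2, customer_risk=1.0)   # 1.2
suspect_book = risk_ratio(total_market_risk=3.0, customer_risk=1.0)  # 3.0
```

The point of the ratio, rather than a raw risk number, is that it travels across firms: a large desk and a small desk trading the same product can be compared directly.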
To illustrate this approach: if a particular book of business at one market-making firm were shown to have decisively greater market risk, on an ongoing basis, than comparable books across the industry, that would be an indication that the additional risk is proprietary. Such an indicator should serve as a flag for regulators to examine the book, trades, trading patterns, and returns of the firm to determine whether a case can be made that the book is being managed in a proprietary manner, in violation of the Volcker Rule.
How would risk be viewed on a normalized basis across firms? We think this should be done with existing metrics in the marketplace and some additional reporting mandates. We would begin by taking a basic value-at-risk (VaR) measurement across comparable business units (e.g., desks) at similar financial institutions. We would then evaluate the customer-driven business and risk traded in each book on a product-by-product basis. This could be measured in a variety of ways, the simplest being the sum of the absolute values of the risk associated with the individual trades in each book. This would provide a basis to normalize differences in the size of firms’ books and allow for an appropriate comparison. We believe that the financial risk associated with running a customer-facing and properly hedged book for a particular product would be roughly constant on a risk-adjusted basis: a book’s VaR for a particular product should reflect, and be a function of, the volume of customer business in that product. A book that shows abnormally greater value-at-risk relative to the amount of customer risk would be suspected of being run in a proprietary manner. However, the added risk would have to be shown to persist over the long term, not merely as a daily anomaly.
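A sketch of the normalization step, assuming each trade's risk contribution is available (the desk figures below are hypothetical; a real implementation would draw on the firm's own VaR engine and reported trade data):

```python
def normalized_var(book_var: float, trade_risks: list[float]) -> float:
    """Normalize a book's VaR by the sum of the absolute values of
    the risk contributions of its individual customer trades, making
    books of very different sizes directly comparable."""
    customer_risk = sum(abs(r) for r in trade_risks)
    if customer_risk == 0:
        raise ValueError("no customer-generated risk to normalize by")
    return book_var / customer_risk

# Two hypothetical desks of very different size in the same product:
small_desk = normalized_var(0.8, [0.3, -0.2, 0.4, -0.1])   # customer risk ~1.0
large_desk = normalized_var(8.5, [2.9, -3.1, 2.6, -1.4])   # customer risk ~10.0
# Despite a 10x size difference, the normalized figures are comparable
# (~0.80 vs ~0.85); a desk showing, say, 2.5 on this measure would be
# flagged for closer review, but only if the elevation were sustained.
```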
Who would make the ultimate decision as to whether a book was being managed in a proprietary manner? Perhaps it is possible to create an expert committee of current or former buy-side and sell-side traders who have the expertise, are unbiased, and lack allegiance to any market maker. If, in the eyes of these experts, there were sufficient evidence that a book was being managed as a proprietary book rather than a market-making book, that market maker could be subject to appropriate action.
Admittedly, there is work to be done on the specifics, such as setting industry baselines for the appropriate risk ratios; this will be accomplished over time. With proprietary trading defined for us in the Dodd-Frank Act, the challenge now lies in identifying proprietary activity at the institutional and trading-book level. Setting criteria for what does and does not constitute proprietary trading is a topic on which many have views, yet we believe that a majority of experienced traders, when presented with the facts, can determine when a book is being handled in a proprietary manner and in clear violation of the rule. Selecting the indicators, such as a measure of how long a book of business is “out of line” with other books on a VaR-normalized basis, will take work. However, we believe that if it is done right, it would not be overly difficult or resource intensive; it should largely be an extension of existing internal risk-management practices.
The too-difficult-to-model argument would also fall away. We believe that existing VaR metrics should be utilized, as described previously. However, instead of setting some arbitrary “hard” quantitative risk limit that would trigger an investigation or audit, we would advocate monitoring risk relative to the industry mean for a particular product line or asset class. This type of comparison is particularly useful because it effectively eliminates the variability brought about by market movements (since all desks will likely hedge against anticipated risks). It would also substantially weaken the defense of a book manager who claims merely to be aggressively “pre-hedging” customer business.
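One way to sketch this peer comparison (the desk labels, ratios, and two-standard-deviation cutoff below are hypothetical choices of ours, not regulatory thresholds):

```python
import statistics

def flag_outliers(desk_ratios: dict[str, float], n_sigmas: float = 2.0) -> list[str]:
    """Flag desks whose normalized risk ratio deviates from the peer
    mean by more than n_sigmas standard deviations.

    Because every desk is measured against peers in the same product
    line, market-wide moves (which shift all ratios together) wash
    out of the comparison; no hard limit is needed.
    """
    mean = statistics.mean(desk_ratios.values())
    sigma = statistics.stdev(desk_ratios.values())
    return [desk for desk, r in desk_ratios.items()
            if abs(r - mean) > n_sigmas * sigma]

# Hypothetical normalized VaR ratios for ten comparable desks:
ratios = {"A": 0.79, "B": 0.81, "C": 0.80, "D": 0.82, "E": 0.78,
          "F": 0.80, "G": 0.81, "H": 0.79, "I": 0.80, "J": 2.40}
flagged = flag_outliers(ratios)  # -> ["J"]
```

In practice a flag would only escalate to an audit if a desk stayed on this list over an extended observation window, consistent with the persistence requirement discussed above.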
One could argue that, absent “clear cut” standards, trading audits would be arbitrary. As noted above, we believe that comparison to the industry norm is a tractable answer: unless there is some well-kept market-wide collusion, market risk levels across firms (on an adjusted basis) should be comparable. By examining books over time and on a risk-adjusted basis, suspicious situations could be individually audited at a more granular level. This would allow them to be handled with a degree of care and responsibility not possible where hard limits are established.
In summary, regulators should be looking for a pattern of proprietary trading, not individual “proprietary trades.” Instances of suspected proprietary trading can best be identified by using a risk-based methodology to examine comparable books across institutions on a relative basis. Furthermore, the threat of investigation under this methodology would provide a deterrent that would not exist under a hard-limit paradigm. No method of regulation or oversight will eliminate the rogue trader seeking to make an individual bet for a quick buck. However, a method such as the one we propose would go a long way toward preventing large-scale, sustained abuses, maintaining efficient market operations, and enabling regulators to flush out suspect trading activity, with the objective of reducing systemic risk.