Sunday, December 27, 2015

Financial and Economic Literacy: Implications for a Fed policy rule

FINRA and a multitude of other online sources allow members of the public to test their financial literacy. Sadly, the results of these online quizzes showcase a lack of knowledge of basic financial matters among the public at large. On FINRA’s version, the national average score is only 2.88 correct answers out of the following 5 questions:
  • Suppose you have $100 in a savings account earning 2 percent interest a year. After five years, how much would you have? (Answer choices are simply: “More than $102”, “Exactly $102”, “Less than $102”, and “Don’t Know”)
  • Imagine that the interest rate on your savings account is 1 percent a year and inflation is 2 percent a year. After one year, would the money in the account buy more than it does today, exactly the same or less than today?
  • If interest rates rise, what will typically happen to bond prices? Rise, fall, stay the same, or is there no relationship?
  • True or false: A 15-year mortgage typically requires higher monthly payments than a 30-year mortgage but the total interest over the life of the loan will be less.
  • True or false: Buying a single company's stock usually provides a safer return than a stock mutual fund.
On the National Financial Educators Council’s version, the average score for participants aged 15-18 is only 60.08%. These disappointing results serve as evidence of the need for improved financial and economic literacy among the general public.
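For readers who want to check their own answers, the arithmetic behind the first two quiz questions is a one-liner each. A quick sketch in Python:

```python
# Question 1: $100 at 2% a year, compounded annually for 5 years.
balance = 100 * (1 + 0.02) ** 5
print(round(balance, 2))  # 110.41 -> "More than $102" is correct

# Question 2: 1% nominal interest against 2% inflation erodes purchasing power.
real_value = 100 * 1.01 / 1.02
print(round(real_value, 2))  # 99.02 -> the money buys less than it does today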

Most fields of science face challenges when it comes to communication and engagement with the general public. While this is perhaps most pressing for some fields of the natural sciences—for example, those scientists communicating climate change science—it also applies to topics in the social sciences, and in particular to those that guide policy.

A lack of financial and economic literacy jeopardizes not only the individual financial health of households, but also the health of the economy at large. Be it a chronic lack of saving (a recent Google Consumer Survey found that 62% of American households have less than $1,000 in their savings accounts, and that 21% don’t even have savings accounts) or the implications for economic growth and the effectiveness of monetary policy, the ability of economic policymakers to craft effective policy is hampered by the public’s misunderstanding—or simple lack of understanding—of the policies or the issues underlying them.

Indeed, many—both inside and outside the political sphere—call for strict audits of the Fed, and for the central bank to follow a policy rule, by which some formula would dictate the target for the Federal Funds rate (as opposed to its being set by a vote of the members of the FOMC). As any testimony by Chair Yellen would attest, the Fed faces much pressure to adopt such a rule, but doing so would severely hamper the Fed’s ability to take extraordinary action during times of crisis, or even of expansion.

After all, when the time comes to set policy, the Fed must be able to weigh the different moving pieces and dynamic factors of the economy as they arise and evolve. A formula, no matter how robust or how well it fits past data, would not—at least for the moment—handle policy-setting as effectively as human brains trained in economics.

After all, many point out that well-known “rules” (such as the Taylor rule) would likely have implied negative interest rates during the financial crisis and ensuing recession of 2008-2009. Yet, at the moment, a negative interest rate is not part of the Fed’s arsenal. As such, tying the Fed to the prescriptions of such a rule would have implied a policy less responsive to the gravity of the situation: the Federal Funds rate would likely have been set at its floor of 0 to 25 basis points, but without the options of quantitative easing or forward guidance, two powerful tools the Fed later rolled out in this recovery.
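For concreteness, the original Taylor (1993) rule sets the nominal Federal Funds rate from inflation and the output gap. A minimal sketch in Python, using stylized inputs loosely resembling the depths of the 2008-2009 recession (the crisis numbers are illustrative assumptions, not official estimates):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, target=2.0):
    """Original Taylor (1993) rule:
    i = r* + pi + 0.5 * (pi - pi*) + 0.5 * (output gap)."""
    return r_star + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# A normal year: 2% inflation, zero output gap -> the 4% "neutral" rate.
print(taylor_rule(inflation=2.0, output_gap=0.0))    # 4.0

# Stylized crisis inputs: mild deflation and a deep output gap.
print(taylor_rule(inflation=-1.0, output_gap=-6.0))  # -3.5, a negative prescribed rate
```

Mechanically following a rule of this form in 2009 would have prescribed a rate the Fed could not actually set.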

Moreover, rules would have to follow data as observed historically and regressed with different econometric models, and be at least informed by economic theory. Yet even now we face a time of a tightening labor market but persistently low inflation (a phenomenon many have cited in calling for patience from the Fed in raising rates). A model would likely respond inadequately to such moving forces, as traditionally and theoretically an expansion implies falling unemployment and rising inflation. The Fed must be able to adapt to the future and to extraordinary circumstances, such as the ones we observe today; a rule would instead limit the Fed’s flexibility in responding to crises.

Lastly, the Fed must be able to guide policy gently, while a rule might instead lead to abrupt changes, or lag the economy. By definition, a rule or formula must be data-dependent and driven by what has been historically observed. As such, a rule is almost exclusively backward-looking, so that policy changes may be mistimed, too strong or too weak for the situation, or may fail to adjust for future expectations.

All in all, rules and formulas—like algorithms in general—help simplify processes. But monetary policy is not inherently simple, and should not follow a cookie-cutter process, as the economy is not itself a predictable mechanism. While more transparency from central banks is always welcome, the truth is that a rule would not benefit anyone in the economy. Policy would likely remain above the level of financial and economic literacy currently observed, so no one’s thirst for complete transparency would really be satisfied. A rule might be as error-prone as human policymakers, or more so, given the limitations of observing all the data and of weighing the different factors according to purely historical relationships rather than evolving phenomena. The models used to determine the rule would likely face as much criticism as the current human policymakers, if not more. And a rule might limit the effectiveness of monetary policy itself—making our economy dangerously more policy-neutral—as it would allow agents in the economy to absorb information more quickly and to anticipate changes in policy before they actually happen, as prescribed by the rule.

As even some Fed papers have laid out, the Fed can often be a bit of a black box. As such, economists and policymakers face a challenge in increasing the financial and economic literacy of the general public, such that their own policy recommendations and research results become more meaningful, fruitful, and effective. However, a rule tying the Fed’s policy-making power goes too far in the direction of “increased transparency,” instead limiting the Fed’s ability to carry out its dual mandate. A happy medium—combining outreach to the public at large, simplified policy statements, and increased knowledge of the workings of economic policy—would go a long way toward bridging the gap between economists and the laymen affected by their policy actions.



Sunday, December 20, 2015

Graduate School Application Process

Over the last couple of months, on top of working and trying to enjoy the city (now that I can properly do so as a graduate), I’ve been trying to put together all my materials for applying next Fall to graduate programs in Economics.

While that has spelled some lapses in my ability to write for this blog, it has given me a window into the world of academia and graduate schools that I didn’t fully appreciate until now. On top of reaching out to people who’ve already gone through graduate school and could help answer my questions, registering for some extra math courses, and crossing my t’s and dotting my i’s for materials such as my transcripts, my papers, etc., perhaps the most time-intensive portion of my applications so far has been the statement of purpose.

Trying to condense my background, my interests, and my aspirations for a PhD in Economics into several hundred words is certainly challenging. Moreover, trying to explain why each particular school’s program will allow me to fully pursue my interests can also be challenging—especially when trying to do so in only a few paragraphs.

Trying to do so has forced me to delve deep into the strengths of different schools in diverse fields of study, the particular areas of interest of their faculties, and—most importantly—what kind of research would be most viable, valuable, and original for me to pursue.

Trying to explain one’s passion for a subject—all while simultaneously describing why one wants to pursue graduate work, how one is qualified to do so, what research areas one would be interested in pursuing, what one aspires to do after obtaining a PhD, and why a school is a particularly good fit given all of the above—can be a daunting task. But it has challenged me to articulate in concise and clear terms why I’m up for the task of a doctorate in Economics—and to give a deeper look into the field that I’m trying to communicate I’m very passionate about.

Hopefully, in less than a year now, I’ll be pressing submit on my applications with a statement of purpose that conveys to each program that I’ll be a productive, passionate, and dedicated researcher, with all the tools in his belt to provide valuable contributions to the field of Economics.

Sunday, December 13, 2015

The Fed’s Decision: On the Federal Funds Rate and Optimism

This Wednesday, December 16th, members of the Federal Open Market Committee are widely expected to raise the Federal Funds rate—and with it, short-term interest rates—for the first time in nearly a decade. The choice is far from clear-cut—many remain skeptical of the appropriateness of a rate hike, and there are of course many risks and uncertainties—but I am currently of the opinion that an interest rate increase is the proper course of action at this time.

While there are ongoing and vigorous debates in macroeconomics about the role of policy in the economy—in particular, whether fiscal and monetary policy are effective tools at all in a “policy-neutral” economy—there are several reasons why an announcement by Chair Yellen and the members of the FOMC regarding an increase in the Federal Funds rate is appropriate this Wednesday. I will try to list these reasons here, as well as acknowledge and potentially address some of the main concerns that arise with this potential action.

Why the Fed should raise rates:

  • A higher Federal Funds rate is a sort of macroeconomic insurance policy, potentially pressing the brakes on the economy and slowing its momentum (the policy’s premium), but for the sake of more flexibility at a time when a stimulus policy is required (the payout).  Having an interest rate above 25 basis points provides a cushion and some wiggle room for potential further slowdowns.
  • Whether we believe or not in the role of psychology and behavior in the strength of the economy, raising interest rates is a signal of confidence from the Fed in the economy, that may spur further confidence in the agents that participate in the economy itself. As learned in any introductory macroeconomics course, inflation (among other economic variables) can often be self-fulfilling. When people expect or believe the economy to be heating up, they may behave in a way that confirms and produces that very expectation.
  • Much of the concern over the last few weeks has centered on falling commodity prices, which many see as a sign of weakness in the global economy. Coupled with still comparatively weak Eurozone and Chinese economies, many are concerned that we are still in a highly unstable situation. However, the Fed has expressed its belief that this downward pressure on inflation should abate with time. Naturally, there should be a floor below which prices of commodities—such as oil—will not fall. When that process ends (at the time of this blog post, a barrel of crude oil is well below $40, as are its futures), there should in fact be upward pressure on inflation. The performance of commodity prices should be seen as a temporary phenomenon, and one that is considerably supply-driven. As we know, production of crude oil by the United States is near record highs, and the members of OPEC have not pared down their production targets to lower global supply. As such, falling commodity prices (in particular, oil) can be seen as a predominantly supply-driven phenomenon, which is less concerning in the immediate term than the demand-driven view (where falling oil prices might signal a weakening in manufacturing, production, etc.).
  • While financial stability is under the purview of the Federal Reserve, financial markets should not be the main consideration behind the Fed’s decisions. In fact, many of the notable movements over the last weeks in the prices of several important assets (the EURUSD exchange rate, bond yields, etc.) show that markets have long begun pricing in the effects of a rate hike, and widely expect the Fed to raise rates this Wednesday. Deciding otherwise at this point might be a negative signal on the strength of the economy. While there are of course valid concerns over the level of inflation, the labor markets and growth, the U.S. economy in particular is considerably healthier than it has been in nearly a decade (since the beginning of the recession in 2008 and the first rumblings of crisis in 2007).
  • If we view inflation rates as tied to (and potentially lagging) interest rates, raising the latter may in fact be a good way for the Fed to address concerns over disinflation and a potential deflationary spiral. Of course, we should not fall into the fallacy of believing that the Fisher equation—or, generally, the fact that interest rates and inflation rates are highly correlated—implies a causal relationship between the two variables. After all, the omitted variable here is likely that higher inflation rates follow a strong and vigorous economy, which may be accompanied by increasing interest rates as the Fed prevents overheating. But to the extent that prices follow increased costs of capital, and other factors provide a more causal relationship between interest rates and inflation, a hike in interest rates may help breathe further life into U.S. inflation.
  • Wage growth, which has been closely observed as it is tied to and is a strong signal of inflation, should face upward pressure as the labor market tightens further. Concerns over the low labor participation rate are partly addressed by the beginning and continuing retirement of the large generation of baby boomers.
  • “Creative destruction”: easy money makes risk-taking easier (the Fed is often blamed for not raising interest rates quickly enough in the years preceding the financial crisis of 2008, as low interest rates encouraged risk-taking in mortgages and other financial products), and allows the inefficient survival of firms that—in a more competitive environment—would potentially go under. While the closing down of firms and the subsequent loss of jobs is often not a positive development, if these firms are sluggish and inefficient, they can be a drag on the economy and prevent more innovative, entrepreneurial firms from taking their place. Indeed, it can be as problematic to make running a business easy as it is to make it difficult; “creative destruction” entails that it is better for inefficient businesses not to survive. Enabling them to survive means a weaker economy, less innovation, more complacency, and deadweight losses on the economy. Tighter money and credit will mean “survival of the fittest” among firms, which some might argue is how economies should thrive.
  • Many have remarked on the abundance of excess reserves in banks. In a world where we see the supply of loans (as opposed to demand) as the main driver of the amount of credit in the economy—meaning that a large determinant of how much loanable funds exist in the market is banks actually being willing to lend, which we saw was not the case during and immediately after the financial crisis—raising rates may actually be an incentive for banks to increase their lending, enabling a smooth flow of credit into the economy that might help counter any negative effects of a rate hike. Indeed, with higher interest rates the supply of loanable funds should increase, allowing us to raise our savings rate and, in the context of the Solow growth model, helping drive a faster accumulation of capital.
  • A slow, gradual tightening of policy should be good for the economy; the brakes need to be applied at some point, and it’s better that it be done smoothly and with warning (see criticisms that the Fed did not raise interest rates in time before the last crisis). We wouldn’t want to raise rates too quickly, without warning, actually causing trouble for markets and consumers alike. The way the Fed has framed this move, with extensive forward guidance, should ensure that expectations have been primed so as to avoid a major shock to the economy.
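As a reminder of the notation behind one of the bullets above, the Fisher equation relates the nominal interest rate i, the real interest rate r, and expected inflation:

```latex
i \approx r + \pi^{e}
```

Holding the real rate roughly constant, higher nominal rates coincide with higher expected inflation. But the relation is an accounting-style identity, not by itself evidence of causation in either direction, which is exactly the caveat raised above.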

Of course, as with any other major policy action, there are significant risks and uncertainties. Perhaps the most cited concern by economists and commentators alike is a potential deflationary spiral, as an increase in interest rates provokes a slowdown in the economy (potentially a recession) and the vicious cycle of continuously falling prices (Japan presents a clear case-study).


To address this issue, it’s important to first gain some perspective. After all, the likely increase in interest rates is only a 25 basis point hike, which would then be followed by slow, gradual increases in line with the guidelines the Fed has set through forward guidance. Because of this new practice, in fact, the economy has absorbed and “priced in” the effect of this hike already. As such, this policy action should not actually be a shock to the economy. Considering that many macroeconomic models assign a role to unexpected shocks (for example, the Phillips curve, and rational expectations models generally), the interest rate hike should ideally not have a major negative effect on the economy, as it has already been incorporated into our collective information set. Of course, this brings us tangentially to the topic of policy neutrality. After all, if the economy prices in and adjusts for this change in the first place, is the Fed actually able to provoke any changes in the real economy? That is a debate I’m likely not prepared to address—at least not at this time.

Another concern relates to the politics of the situation—some (on all sides of the political spectrum) are concerned that the Fed might provoke a change in the performance of the economy that will have real implications for the Presidential and Congressional elections next fall. To Democrats, a rate hike carries the potential for a recession that would destroy any positive legacy of the Obama administration’s role in the recovery, and potentially make it more difficult for the Democratic nominee to win the White House (with all the subsequent policy implications that would entail). Republicans, on the other hand, are concerned that a delay in the interest rate hike would be a concerted ploy to aid the party in power by stimulating the economy further.

Of course, this concern should be easy—and is highly critical—to assuage, as independence from political pressures is an incredibly important factor in the credibility of central banks and, in turn, the effectiveness of their policy actions and their ability to actually have an impact on the real economy. I think it should be clear enough that the Fed’s role is not to hamper or abet the political campaign of any one candidate or political party, and (while it might be necessary to do so) addressing these concerns directly might go too far in granting validity to those who already criticize the Fed for lack of political independence. To validate these concerns is to accept that they are well-founded and pressing—which we would all like to think they are not.

Lastly, many are concerned about the dynamics of the Fed raising interest rates and, in turn, strengthening the value of the dollar, while the European Central Bank simultaneously continues and expands its own program of monetary stimulus that might further strengthen the USD. This, of course, plays into the recessionary fears, as a significantly strengthened dollar would reduce American exports, increase imports, and in turn, lower growth and GDP. However, this can also be seen as a positive for American consumers and for foreign investment, as the higher interest rates provoke a larger inflow of capital looking to invest in the United States.


I would like to end by once again providing some perspective, and perhaps some caveats. The action the members of the FOMC may take on Wednesday represents a long, careful deliberation process years in the making, and one that in the end may represent a comparatively small increase (likely 25 basis points) in the Federal Funds rate. Like with every policy action, there are risks and concerns. At some point, however, we must lift off. Now is looking better than it has in a while for it to happen.

The financial crisis and Great Recession have done a lot in the way of provoking a sense of pessimism and lack of confidence in the American and global economy. The repercussions of the crisis were indeed painful, far-reaching, and long-lasting, so any hesitation or skepticism at this time from both professional economists and laymen is understandable. Moreover, some may be concerned that the Fed is rushing into action simply to follow the psychological “clock” of moving before the year is out. The turn of the calendar page could be seen as yet another failure—another year in which the economy has not fully and entirely recovered.

Yet, the economy has weathered both the original crisis and many other, smaller panics over the last years, and particularly over the last few months and weeks in the financial markets. I am confident in our policymakers, and optimistic that, with forward guidance and rational expectations, markets and consumers have absorbed all the relevant information about a rate hike. Unemployment should not rise (especially given the strength in the labor market anyway), weak companies might be replaced by stronger and more innovative firms, and prices should follow the rise in interest rates as people ask for and obtain higher wages.

We can certainly expect some turbulence, but an increase in interest rates also represents a return to normalcy. In her testimony to Congress recently, Janet Yellen stated: “In closing, the economy has come a long way toward the FOMC's objectives of maximum employment and price stability. When the Committee begins to normalize the stance of policy, doing so will be a testament, also, to how far our economy has come in recovering from the effects of the financial crisis and the Great Recession. In that sense, it is a day that I expect we all are looking forward to.” Indeed, psychologically, Wednesday will be a day when we can all finally say: we made it.

After this entire discussion, the most important thing is that an interest rate hike perhaps means the return to one of the factors that’s been missing and, personally, I think is critical to the strength and performance of an economy: optimism. It is optimism that makes us move forward, take [healthy] risks, and progress. It is time for us to be optimistic about the economy once again, and Wednesday will hopefully be just a smooth and careful first baby step.

Sunday, December 6, 2015

Polling and Measuring Economic Data

Anyone following electoral politics for the last couple of years will likely admit: opinion polling has become both less accurate and less precise. As this article from The New York Times outlines, there are several factors underlying the decline in the accuracy of public polling of election races. For the apolitical, or those unconcerned with elections, this may not be a problem. But when it comes to economics, it may be a problem that should concern us all.

It is often easy to overlook data and its sources, and take the information we obtain for granted. Yet, behind the numbers on unemployment, growth and GDP, inflation, and other macro and micro variables, are strict methodologies that attempt to measure data as accurately and precisely as possible. These methodologies are of course oriented to ensure large random samples when necessary, and generally to avoid bias and ensure consistency in the variables that organizations and government agencies report on a regular basis. Some of these methods—such as seasonal adjustments—are more familiar than others. But the problem is that, if standard and tested methods are failing when it comes to measuring political sentiment, those economic variables that take in the public’s answers and opinions may be inaccurate as well.
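As a concrete illustration of why sampling methodology matters, the textbook margin of error for a poll assumes a simple random sample, which is exactly the assumption that modern polling increasingly struggles to satisfy. A quick sketch in Python:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll at the worst case, p = 0.5:
print(round(margin_of_error(0.5, 1000), 3))  # 0.031, i.e. about +/- 3.1 points
```

When respondents are not reached at random (nonresponse, cell-phone-only households, and so on), the true error can be far larger than this formula suggests, with no warning from the reported margin.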

Most economic variables are measured very consistently over time—such as unemployment, by interviewing a set number of households over a time period—and generally, since economic surveys try to capture facts on the economic situation as opposed to opinions, there should in theory be less of a concern than there is in political polling.

However, many other variables cited frequently both by the media and practicing economists—such as consumer confidence, indices of job creation, etc.—rely on the opinions of those polled (who may not constitute a random sample, or a consistent sample across time) in response to less clearly defined questions (i.e. questions such as “is your company hiring?”, “is the economy headed in a good direction?”, or “rate the strength of the economy” that have a much less objective set of criteria or answers). Moreover, the methodology used to ask these questions often does not rely on the sophisticated methods used to measure other economic variables, such as the Current Population Survey.

As such, the use of these measures to gauge the strength of the economy, potentially define economic or public policy, or sway the outcomes of elections can be highly problematic. While economics is a rigorous science, and policy-making an intense study of all the factors in play, how we measure the economy has always been a topic of discussion, and one that we should continue to examine carefully—particularly regarding the sources and, in turn, the accuracy and precision of our measures. From the criticism of economists—including Nobel laureate Joseph Stiglitz—regarding GDP and our measures of welfare, to the surprising inaccuracy of political opinion polling in the run-up to elections, how we measure the data around us should be at least as important as the applications we find for that data. After all, without ascertaining the reliability of the data itself, all inferences obtained from that data are moot.

Sunday, October 4, 2015

The Economy and Art Production

While this is not always necessarily true, we often attempt to interpret art by investigating the societal context in which it was created. Thus, what can the state of the economy or the society in which an artist lived tell us about the meaning or the aesthetic composition of an art piece? The resulting empirical question is, then, to what extent does the economy inform and translate into the quantity, quality, and content of artistic production? Moreover, how can that be tested?

The first variable (quantity of art) should be simple enough to study: run a regression of the quantity of art produced (represented by an index, factor, or instrument for artistic production, which in itself is perhaps no simple feat) on different economic measures. Of course, such a regression would have to be adequately controlled for potential endogeneity issues (for example, are the price level in an economy and the quantity of art produced jointly determined?), omitted variable biases, etc. With such a regression, we could ask: do periods of economic crisis (i.e. recessions, stagflation, etc.) cause more or less art to be produced, either contemporaneously or with a lag? Does art production boom in times of prosperity? Moreover, do the relationships depend on the political and demographic makeup of societies (i.e. how do the coefficients change between democracies and authoritarian societies? Between younger and older nations in terms of their populations?).
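As a toy illustration of the regression in question, here is a minimal OLS sketch in Python. The art-production index and GDP growth figures below are entirely invented for illustration; a real study would need the instruments and endogeneity controls just discussed:

```python
# Invented data: annual GDP growth (%) and a hypothetical art-production index.
gdp_growth = [-2.0, -0.5, 1.0, 2.5, 3.0, 4.0]
art_index = [8.0, 7.0, 6.5, 6.0, 5.5, 5.0]

# Ordinary least squares by hand: slope = cov(x, y) / var(x).
n = len(gdp_growth)
mean_x = sum(gdp_growth) / n
mean_y = sum(art_index) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(gdp_growth, art_index))
slope /= sum((x - mean_x) ** 2 for x in gdp_growth)
intercept = mean_y - slope * mean_x
print(round(slope, 3))  # negative here: counter-cyclical production in this toy data
```

A negative coefficient in real data would suggest that art production rises in downturns; a positive one, that it booms with prosperity. Either way, the sign alone says nothing about causality.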

Perhaps more importantly, how do the quality and the message content and aesthetic composition of art pieces vary with the economic situation? That is, do artworks generally reflect the times in which they were produced, whether those times are difficult or prosperous? Will an artist produce works portraying issues of economics or society, or messages regarding economic situations, when times are tough? A simple hypothesis—based perhaps on prospect theory—would argue that art reflects economic topics and messages when times are difficult, i.e. when the state of the economy is more salient and more impactful in the minds of artists.

An unsophisticated way of studying this question would involve cataloguing artworks in a systematic fashion: gathering information on their time and place of production and biographical information on the artist, and ultimately classifying each work as revolving (or not) around a theme of its society or economic times. That binary variable (“related to economics and society?”) would then be regressed on economic variables.

An obvious difficulty, of course, is how to systematically classify works. After all, if during the Great Depression an artist had painted a scene of shantytowns or jobless lines, it might be simple to classify that as related to the economic situation. However, if an artist had instead painted a scene of prosperity and wealth, who’s to say that painting did not revolve around a yearning for better times, and thus also related to the economic situation of the times? Moreover, the classification of art pieces must be conducted independently from the information on the works. In order to avoid confirmation bias, the person classifying the relevance of an artwork to socioeconomic contexts must not be aware of that very same context. Lastly, how should abstract works be classified? How should pieces of art where only the implied message is economic in nature be classified, if art pieces themselves are often open to interpretation?

Of course, trying to systematically and objectively study a field that is inherently without constraints poses a multitude of problems. Yet, it could also be fruitful. After all, it could help answer questions like: should the government subsidize the arts based on patterns of production during different economic contexts? Should the government be involved in the arts at all? Should our interpretation of some works be informed further (or less) by the context in which they were created? In turn, should our interpretation of the objectives of an artist depend on his or her context as well?

The study of art can only be enriched by uncovering how it relates to economic and societal forces, which invariably shape and influence the hands and the minds that shape each work.

Sunday, September 13, 2015

Public Funding of Football Stadiums

In recent months, various National Football League franchises (primarily the San Diego Chargers and the St. Louis Rams) have threatened their respective home cities with a move to a locale remarkably without such a sports team: Los Angeles. With the expressed motive of receiving public funding for new or enhanced football stadiums, these teams and their supporters invariably cite many an economic argument for why public funding of these pieces of infrastructure should be approved. Fittingly, back in 2001, the St. Louis Fed published a summary of the economics of public funding of stadiums, available here: https://www.stlouisfed.org/Publications/Regional-Economist/April-2001/Should-Cities-Pay-for-Sports-Facilities.

Among the arguments included is the fact that most revenue from these upgrades or new stadiums stays with the franchise owners themselves, since franchises usually sign contracts giving them full benefit of naming rights, concessions, ticket sales, and so on. In particular, the oft-cited claim that these stadiums will benefit their home cities through increased tourism, spending in and around the stadium, and multipliers that amplify these revenue gains is usually a misrepresentation.

It doesn’t require advanced training in economics to see the opportunity costs at play. By investing public money in these stadiums, cities not only bear the direct costs of funding but also forgo the benefits of other, often far more lucrative, alternatives. Many economists point out that the projected revenue increases often never materialize: a new stadium does not inherently draw new tourists to a city that already housed the team. Moreover, if we treat the spending budgets of residents and tourists as mostly fixed, a new stadium for an already-existing team may raise revenues for the franchise owners, but only at the expense of the other businesses where those people would otherwise have spent their money. In short, the economic benefits of new or upgraded stadiums, when they exist at all, are typically nullified (and often overpowered) by the costs to the cities that house them. And most critically, those benefits largely stay in the hands of the franchise owners, rarely seeping back into the economy that helped fund them. Public funding of new stadiums, particularly when used as leverage against a threatened move, represents almost entirely public costs and private benefits. Given the marginal benefits of alternative projects at the same marginal cost, refusing these deals should be a no-brainer.

Of course, this question spans more than economic arguments and assumptions. After all, these sports teams possess loyal and committed fan bases that very much fight for their team’s permanence in their cities. Their loyalty and commitment are not something I can argue against, but from an economic and financial perspective, using their and their fellow taxpayers’ money to fund teams that hold these cities hostage for funds they could easily raise on their own, and that will offer taxpayers few returns, is not a sound decision. Research in behavioral economics would likely provide some intuition for these fans’ willingness to support such projects.

Lastly, a major problem is that new stadiums are an unsound investment for cities that already house these teams. For a city currently without a team, however, attracting one might actually provide large economic benefits (depending, of course, on the profile of the city in question: would more tourists really visit LA because of a football team?). One might assume the economic impact of a sports team is largest for cities without an already-established tourist base, implying diminishing marginal gains in tourism as a city and its existing tourist base grow. This consideration extends, in turn, to an interesting game theory question: even if San Diego or St. Louis understood that the returns to public investment in these stadiums would be insufficient, the fact that LA would benefit from such a move, and would likely offer money to attract a team, means San Diego and St. Louis effectively have to bid beyond what they would otherwise be willing to pay, just to beat LA. Cities, invariably, lose in the end. This raises the question: if the NFL can essentially operate as a legal cartel, and its member teams can use their influence to extort their home cities, why don’t the cities cooperate among themselves to call the teams’ bluffs, so that this kind of bidding war simply cannot happen?
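This bidding dynamic can be sketched with a toy model; every dollar figure below is invented purely for illustration:

```python
# Toy hold-up model: the team credibly threatens to move to LA, so the
# incumbent city must outbid LA's offer even when that offer exceeds the
# stadium's economic return to the incumbent. All numbers are invented.
incumbent_return = 150   # $M of economic benefit a new stadium brings the city
la_offer = 400           # $M LA would put up to attract the team

# Without the threat, a rational city caps its subsidy at its own return.
standalone_bid = incumbent_return

# With the threat, keeping the team requires (marginally) beating LA's offer.
bid_to_keep_team = la_offer + 1

overpayment = bid_to_keep_team - incumbent_return
print(f"subsidy needed to keep the team: ${bid_to_keep_team}M "
      f"(${overpayment}M above the city's own return)")
```

Under these assumed numbers, the credible outside offer forces the incumbent to pay far past the point where the stadium earns its keep, which is exactly the hold-up that cooperation among cities could prevent.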

Sunday, September 6, 2015

Topics in Financial Economics

Just a few months of experience in economic consulting have opened my eyes further to the incredibly interesting research topics to be found in the field of Financial Economics, and have in fact contributed to adding a PhD in Finance or Financial Economics to my list of future academic interests (on top of the already-established interests in Industrial Organization and Arts Economics).

Among the ideas that interest me are ways to make event studies for determining market efficiency a more objective and systematic process, particularly when it comes to estimating the expected direction and magnitude of a residual based on the new, value-relevant information arising in a given period (to then compare against the observed residual and assess consistency with efficiency). One approach would be a regression model that takes the different pieces of new information on a security on a particular day (this would first require a study of which pieces of information are actually value-relevant, or important to investors) and outputs the expected magnitude and direction of the residual we would observe after regressing the security’s price on market and industry indices. In essence, this would be a “measure of surprise” or “unexpected news” against which to compare observed residuals and judge market efficiency. This would hopefully provide a more objective basis for determining a security’s consistency with market efficiency, as opposed to subjective and inconsistently applied measures that qualitatively weigh information ex post, particularly when there is conflicting information in a given period.
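To make the residual comparison concrete, here is a minimal simulated sketch (not real data): fit a one-factor market model over an estimation window, then compute the abnormal return on a hypothetical event day, the quantity one would compare against a modeled “measure of surprise”:

```python
import random
import statistics

random.seed(0)

# Simulated daily returns for a market index and a single security over a
# 250-day estimation window (all parameters invented for illustration).
market = [random.gauss(0.0005, 0.01) for _ in range(250)]
stock = [0.0002 + 1.2 * m + random.gauss(0, 0.008) for m in market]

# One-factor market model, stock_t = alpha + beta * market_t + e_t, fit by OLS.
mean_m = statistics.fmean(market)
mean_s = statistics.fmean(stock)
cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(market, stock))
var = sum((m - mean_m) ** 2 for m in market)
beta = cov / var
alpha = mean_s - beta * mean_m

# Abnormal return (residual) on a hypothetical event day: what remains of the
# stock's move after netting out the market's move.
event_market, event_stock = -0.003, 0.045  # stock jumps despite a flat market
abnormal = event_stock - (alpha + beta * event_market)
print(f"beta = {beta:.2f}, abnormal return on event day = {abnormal:.3%}")
```

An objective efficiency test would then ask whether this abnormal return is consistent, in sign and magnitude, with a pre-specified model of that day’s news, rather than with an ex-post qualitative reading.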

Along the lines of market efficiency (or in this case, inefficiency), I would also like to consider the issue of market overreactions, or knee-jerk reactions, to individual pieces of macroeconomic news. In particular, there is a sense that markets are overly sensitive on a day-to-day basis to macro-related financial or economic news that may ultimately have little impact on the fundamentals of underlying companies and their future cash flows. Too often, volatility is driven by macro news, causing large swings in one direction on a given day and “corrective” movements in the opposite direction on the days that follow.

Leaving the issue of efficiency behind, I would consider ways to improve the accuracy of firm valuations through management-related variables that weight projected cash flows (and ultimately their present values) upward or downward, based on observed relationships between management quality and subsequent earnings or cash flow surprises. Regarding equity valuations, I would want to investigate the timing of analysts’ price-target and stock-rating changes: do they simply lag price movements and follow the observed price series ex post, or do they actually play a leading role in determining the future behavior of stock prices?
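As a first pass at the lead/lag question, one could compare the correlation of target-price revisions with past versus future price moves. The sketch below simulates a world in which analysts merely chase prices; all series are invented:

```python
import random

random.seed(1)

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Simulated world: this period's target-price revision echoes LAST period's
# price move plus noise, i.e., analysts follow prices rather than lead them.
price_moves = [random.gauss(0, 1) for _ in range(500)]
target_revisions = [0.8 * price_moves[t - 1] + random.gauss(0, 0.5)
                    for t in range(1, 500)]

# Does the revision correlate more with the PAST move (lagging behavior)
# or with the NEXT move (leading behavior)?
lagging = pearson(target_revisions, price_moves[:499])        # move at t-1
leading = pearson(target_revisions[:-1], price_moves[2:500])  # move at t+1
print(f"corr with past move: {lagging:.2f}, with future move: {leading:.2f}")
```

On real data, a strong correlation with past moves and none with future moves would suggest analysts follow rather than lead; the reverse pattern would suggest their revisions carry genuine forward-looking information.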

And of course, I could not consider topics in financial economics without thinking as well about the financial economics of art, looking at art as an asset and how prices, investors, and markets in the financial world of art behave.

Sunday, August 30, 2015

The Masterpiece Effect, and the Market Efficiency of Art

In the empirical research regarding returns from art investments, an interesting phenomenon has been observed called the masterpiece effect. Intuitively, if art pieces are indeed believed to be “masterpieces”—or works of exceptional quality or renown—then we might expect the returns to investing in these art pieces to uniformly outperform the general portfolio and market. However, James Pesando, Jianping Mei, and Michael Moses (among others) have found that masterpieces tend to underperform the market and, in fact, provide lower cumulative returns than non-masterpieces.

Many economists have contributed their thoughts to explain this phenomenon. For example, some believe it’s due to overbidding followed by mean reversion. Thus, masterpieces outperform in one period—we could theorize the one in which their “masterpiece” status was originated or consolidated—and then underperform once they’re more established and change hands less frequently (presumably, these pieces would be coveted and thus not traded as often). Others suggest that masterpieces are less risky because they’re more liquid—they may not trade as often, but are definitely easy enough to sell in the market when they do enter it.

I tend to sympathize most with this second theory (though the first also has its merits and probably explains some portion of the effect as well). It is elementary intuition in financial economics that lower risk implies lower expected returns. To that extent, non-masterpieces would provide higher returns because they are indeed riskier than established pieces of art. For these riskier assets, you can buy at a low price and, with luck, sell at a much higher price later as art tastes change (that is, you buy “speculatively”, acquiring an emerging or obscure artist’s work in the hope she will catch on in the art market, and then sell once your prediction has come true). This, of course, comes with the substantial risk that the lesser-known work will never catch on, or will fail to sell at all.

Masterpieces, by contrast, should in theory always be considered eminent, and as such are less risky. You buy at a high price and expect to sell at a similarly high price. The very definition of a masterpiece, after all, is that it is a tried and tested work. Because tastes regarding established works don’t change much (that is why they are “established” parts of the art canon), the main factor driving price increases for these pieces should be inflation, allowing perhaps for slight shifts in the attention paid to a given artist at a given time (which reduces the investing game to simply having a sense for when an artist is attracting more attention).

Given the above discussion, the masterpiece effect almost becomes a market efficiency question, in that masterpieces could be considered assets that trade “efficiently”, while non-masterpieces may not. Applying the concept of an efficient security to artwork: because of their very status as masterpieces, we can presume we know most if not all available information about these works and their artists, so nothing new or material (barring, perhaps, deterioration of the work itself or its discovery as a fake) should ever come out about them. Because prices for an efficiently traded asset should adjust only to new, material, public information, we should expect masterpiece prices to change very little over time, and thus these works to provide very low (if not zero) returns.

Because we may lack much more information on non-masterpieces, and because there is a higher likelihood that some particular investors or participants in the art market receive more or better information on them than others (a specification falling more under the “strong” form of market efficiency as defined by Eugene Fama), non-masterpieces may thus show inaccurate prices and allow for outsized returns that deviate from their true value, as compared to masterpieces.

Ashenfelter and Graddy summarize James Pesando’s discussion of the market efficiency question: when pieces trade efficiently, “the market should internalize the favorable properties of masterpieces into their prices, so that risk-adjusted returns should not exceed that of other pieces.” Note that Pesando explains the masterpiece effect without relying on the inefficiency of non-masterpieces. In fact, for Pesando, it is precisely because the market is efficient for both masterpieces and non-masterpieces that the former do not outperform the latter: for non-masterpieces, there is simply more “new” (presumably good) information coming to the market, and thus more positive price adjustment for newer, non-established works. I might instead claim that the masterpiece effect is observed because market efficiency breaks down for non-eminent pieces: in the context of the art market, assets that trade very infrequently, receive comparatively little attention, and have only sparse information available would plausibly trade inefficiently compared to better-known assets.

Of course, this entire discussion relies on some critical questions: firstly, empirical research into this topic requires us to appropriately control for survivorship bias. After all, as mentioned above, masterpieces are presumably much more liquid than non-masterpieces. As such, they are likely to sell much more often so that, when we consider all the non-masterpieces that don’t “survive” in the art market, the cumulative returns of these non-masterpieces may ultimately be below that of the more reliable masterpieces, making the latter ultimately still the better investment.

Secondly and more philosophically (but with much relevance to econometric models), how do we even define masterpieces? The results of any models will ultimately rely on what works are defined as masterpieces, be it through a dummy or other methods.

Lastly, when does an art asset actually trade efficiently? How would we show that an art piece, or certain sectors of the art market, trades in an efficient way? The “weak” form of efficiency is easy enough to think about: a “weakly” efficient asset is one whose returns cannot be predicted using standard time series methods, where the information set consists only of the asset’s past prices; prices follow a random walk and respond non-randomly only to new information. Stronger forms of efficiency, however, don’t extend as easily to the art market. Conceptually, showing “semi-strong” efficiency would require us to prove that prices respond as expected to new, material information about these pieces. Yet an art piece, by definition, shouldn’t really change much, so there should be very little new information to move its price. How, then, would we go about conducting an event study of an art piece’s returns?
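A weak-form check is at least straightforward to sketch. Assuming two simulated return series, one i.i.d. (no predictability from past prices) and one persistent, the lag-1 autocorrelation separates them:

```python
import random

random.seed(2)

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Returns of a hypothetical efficiently traded asset: i.i.d. noise, so past
# returns carry no information about future ones (the random-walk idea).
efficient = [random.gauss(0, 1) for _ in range(1000)]

# Returns of a hypothetical thinly traded artwork index with strong
# persistence (an AR(1) process), so past returns DO predict future ones.
sticky = [0.0]
for _ in range(999):
    sticky.append(0.6 * sticky[-1] + random.gauss(0, 1))

print(f"lag-1 autocorrelation, efficient: {autocorr(efficient, 1):.2f}")
print(f"lag-1 autocorrelation, sticky:    {autocorr(sticky, 1):.2f}")
```

A near-zero autocorrelation is consistent with weak-form efficiency; a large one means past prices alone predict returns, which is exactly what the weak form rules out.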

This lastly brings us to the most philosophical question of them all: why do prices change so dramatically for artworks at all? Other than inflation, why would a Picasso 50 years from now sell at a much higher price than today? The most obvious answer would simply be changing consumer preferences: perhaps 50 years from now, Picasso is even more popular than he is today. Yet, how do we calculate when the popularity of an artist has changed? How do we define that popularity to then measure and apply to the art market equivalent of an event study? How, at the end of the day, do we know where art prices should adjust—what the true, accurate value of an artwork should be—when art itself can transcend all attempts at understanding?

Noon: Rest from Work (after Millet), Vincent van Gogh (1890)

Sunday, August 16, 2015

The Value of Art

A common topic of interest in the field of arts economics is that of art as an investment. For a great survey of issues regarding art prices, returns, and, in turn, investment potential, see “Art Auctions” by Orley Ashenfelter and Kathryn Graddy in the Handbook of the Economics of Art and Culture.

Among the many things this article covers is the topic of finding the value of art (in terms of prices), with the end goal of considering art returns, and how the auction process informs price formation.

When we think about valuing a piece of art (here, finding the price one should pay for ownership of the work), we can approach it as we would a more “traditional” asset. We could look to “comparable” works involved in past transactions to estimate the value of the piece in question. Alternatively, one can conduct a valuation inspired by the discounted cash flow method: evaluate the kind of “cash flows” one would obtain from a given painting (where “cash flow” is abstracted heavily to “value” in general, monetary, cultural, and otherwise) along with the risk involved in owning it, essentially valuing the piece as the sum of the discounted value you would obtain from the work in future periods.
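As a minimal sketch of the discounted-value idea, with every number invented for illustration (an assumed dollar equivalent of annual enjoyment, an assumed resale price, and an assumed discount rate):

```python
# Hypothetical stream of annual "value flows" from owning a painting: an
# assumed dollar equivalent of cultural/aesthetic enjoyment each year, plus
# an assumed resale price in year 5. All figures are invented.
flows = [2_000, 2_000, 2_000, 2_000, 2_000 + 150_000]  # resale in final year
discount_rate = 0.08  # assumed rate reflecting the riskiness of ownership

# Present value: each flow discounted back from the end of its year.
value = sum(cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(flows))
print(f"present value of owning the work: ${value:,.0f}")
```

The hard part, of course, is everything this sketch assumes away: translating cultural value into a yearly flow, and choosing a discount rate for an asset this idiosyncratic.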

Now, thinking about art valuation less traditionally, one can turn to what is known as a hedonic model. This model, very simply, regresses observed prices on characteristics of the respective works, yielding coefficients on the different features of a piece. One would preferably estimate separate models for different mediums, since the same characteristic can have different impacts on value depending on the type of work: one can imagine that a dark red color would have a much more positive impact on a painting than on a marble statue. Simply adding a medium dummy to a regression that pools different kinds of art may therefore not produce correct coefficients (we can also think about the impact on the standard errors of those coefficients if some mediums have more widely dispersed price observations than others for the same variables).
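A hedonic regression is easy to sketch on simulated data. Here prices are generated from two invented characteristics, canvas area and a signature dummy, and ordinary least squares recovers the implicit price of each (all coefficients are made up for illustration):

```python
import random

random.seed(3)

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * v for r, v in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

# Simulated painting sales: price depends on canvas area (sq. in.) and on
# whether the work is signed. True coefficients are invented: $12 per sq. in.
# and a $4,000 signature premium.
rows, prices = [], []
for _ in range(300):
    area = random.uniform(100, 2000)
    signed = random.random() < 0.5
    price = 500 + 12 * area + 4_000 * signed + random.gauss(0, 800)
    rows.append([1.0, area, float(signed)])  # intercept, area, signed dummy
    prices.append(price)

intercept, per_sq_inch, signed_premium = ols(rows, prices)
print(f"estimated premium per sq. inch: ${per_sq_inch:.1f}, "
      f"signature premium: ${signed_premium:,.0f}")
```

With real auction data the hard questions return immediately: which characteristics belong in the model, and whether they mean the same thing across mediums.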

The way a hedonic model would then be applied (if these models were used predictively or prescriptively, which may involve both philosophical and economic issues) is to input the values of the variables for each work under consideration and output an estimated price for that work. In short, an appraiser would add up the “values” of each characteristic of the work to reach the piece’s final value.

Of course, a hedonic model based on panel data could be adapted into a fixed effects model in order to control for the different perceived “quality” of each painting. In theory, though, paintings with identical values of the explanatory variables should, by the definition of this model, have identical “quality”, a point that brings us quickly to the more transcendent and literally “price”-less dimensions of art. After all, what defines “quality”, and shouldn’t the model’s coefficients by definition capture it, by defining the value of a work given its characteristics? Why are some paintings that are objectively similar to others worth much more (or treated as much higher in quality) than others?

Alternatively, another regression model used to value art is the repeat-sales model, frequently applied in another “alternative” asset class: real estate. This model is perhaps better suited to constructing indices of art prices overall than to valuing individual pieces. It controls for the mix of art products under consideration (in short, the quality of the works sold at a given time) by only considering works that have sold more than once. An index that fails to do this could register an increase in the “price of art” that actually only captures the entry of new, high-quality or fashionable pieces which, because of their quality, push the index upward.
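The repeat-sales estimator can be sketched on simulated data as well: each pair of sales pins down the change in the log index between its purchase and sale periods, and the index levels are recovered by least squares (the index values below are invented):

```python
import random

random.seed(4)

true_index = [0.0, 0.10, 0.25]  # assumed log price index by period (base = 0)

# Each repeat sale: (buy period, sell period, observed log price ratio).
pairs = []
for _ in range(200):
    buy, sell = sorted(random.sample(range(3), 2))
    ratio = true_index[sell] - true_index[buy] + random.gauss(0, 0.05)
    pairs.append((buy, sell, ratio))

# Design matrix: one column per non-base period, +1 at sale, -1 at purchase.
X = [[(s == 1) - (b == 1), (s == 2) - (b == 2)] for b, s, _ in pairs]
y = [r for _, _, r in pairs]

# Normal equations for the 2x2 case, solved by Cramer's rule.
a11 = sum(x[0] * x[0] for x in X)
a12 = sum(x[0] * x[1] for x in X)
a22 = sum(x[1] * x[1] for x in X)
c1 = sum(x[0] * v for x, v in zip(X, y))
c2 = sum(x[1] * v for x, v in zip(X, y))
det = a11 * a22 - a12 * a12
b1 = (c1 * a22 - c2 * a12) / det
b2 = (a11 * c2 - a12 * c1) / det
print(f"estimated log index: period 1 = {b1:.3f}, period 2 = {b2:.3f}")
```

Because every observation is a price *ratio* for the same physical object, unobserved quality differences across works drop out, which is exactly the appeal of the method.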

Of course, an issue endemic to the repeat-sales model (and generally most regression models observing prices at sale) is that of survivorship bias: a model may be overestimating prices of art because it only observes those pieces that were in fact sold (meaning, those that have “survived” in the market). By definition, a model that only looks at sales would not be accounting for the multitude of art pieces that failed to sell, so coefficients would likely be overestimated given this bias.
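A quick simulation shows how severe this bias can be. Assume (purely for illustration) that works only appear in the data when their return clears some threshold, a crude stand-in for “failing to sell”:

```python
import random

random.seed(5)

# Simulate 10,000 artworks whose true log returns are centered at zero.
returns = [random.gauss(0.0, 0.20) for _ in range(10_000)]

# Assume a work only comes to auction (and into our dataset) if its return
# clears a threshold; heavy losers are withheld and never observed.
observed = [r for r in returns if r > -0.10]

true_mean = sum(returns) / len(returns)
observed_mean = sum(observed) / len(observed)
print(f"true mean return: {true_mean:+.3f}, "
      f"mean among observed sales: {observed_mean:+.3f}")
```

Even though the true average return is essentially zero, the sample of realized sales shows a healthy positive return, which is precisely the overestimation the repeat-sales literature has to guard against.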

And of course, an even broader issue with valuing art is the major philosophical question: can we even put a price on art? Clearly, auction houses, galleries, dealers, and independent artists have long put price tags on their work, and mostly for good reason: artists (and the market around them) deserve to make a living off their labor and the value they add to culture and society. So art should never really be “free”; it always adds some value. Yet art can, on many occasions, be “priceless.” And this is where the major issue arises: will art valuation ever truly systematically, consistently, and accurately capture all the value an art piece offers to the world? More deeply, how can we even calculate the value an artwork provides? Beauty (and I use this term very generally, to encompass all forms and styles of “beauty”, including the grotesque and the ugly, the conceptual and the performative) is, after all, in the eye of the beholder. What factors should we include, and by attempting to run models and quantify values, are we not imposing a norm on what actually “counts” in pricing a piece of art?


Why should we limit and define the characteristics that have value to us when, so often, art is transcendental and beyond the features we can visually discern (or perceive with other senses)? And who’s to say our calculated values should apply to all (or, for that matter, any) pieces of art? That a blue is worth more than a red? Or a larger frame more than a smaller canvas? That a painting’s exhibition at the Met, or its more prestigious provenance or artist, makes it more “valuable” than the one by your grandfather, sitting over the mantle at home? These and many other questions, like art itself, are perhaps beyond the limiting interpretations and assumptions of the human mind.


Sunday, August 9, 2015

Life in Economic Consulting

About two months ago, I began my first full-time job as an analyst at an economic consulting firm. As a member of the team, I conduct economic analysis on a wide range of issues across the lifespan of commercial litigation and regulatory proceedings. Located at the unique intersection of the law, business, finance, and economics, economic consulting firms aid experts in their analysis of the case in question. Under the experts’ direction, we apply both quantitative and qualitative methods to uncover the economic phenomena at play in a given case. As such, these firms are a great primer for a career in research in any of the fields covered by practice areas of the firm.

So far, a job in economic consulting has allowed me to think much more deeply about applied issues in finance and economics. In particular, I’ve expanded my repertoire of research skills and approaches that should be useful for any field of study. For my preferred areas of Industrial Organization and Arts Economics, the kinds of cases economic consulting firms are retained on have already inspired me to think about the role of information in financial decision-making, and of the limits and boundaries of intellectual property rights.

For example, what do investors consider to be material information? When and how do they use information about a financial instrument? When are expectations about an instrument absorbed into its price, if at all? When an expectation becomes a confirmed (or a disconfirmed) fact, should we expect a movement in the price of that instrument? And of course, the classic question: are investors entirely rational?

As for intellectual property rights—and this relates to my previous post on this blog—what are the boundaries between creator and contributor to a given work? Particularly when it comes to the issue of copyrights for creative works—how small of a change in somebody else’s work is enough to grant the editor a claim over the transformed work? Can “concepts” ever be fully claimed by an “owner”? And how long should copyrights even last?

All in all, I’ve thoroughly enjoyed the work, the people, and the culture of my first two months in economic consulting. I’m eager to learn many more skills and to gain an even better understanding of the quantitative and qualitative methods involved in researching topics in law, finance, and economics. I highly recommend economic consulting to anyone with an interest in any of these fields.


Any views expressed in this post regarding the work involved in economic consulting are mine only and do not necessarily reflect those of my peers or the firm I work for.