Demystifying “You Can’t Beat the Market”

As you know, economists are scientific types who study how money flows and grows. They study many different aspects of this very broad concept, including how stock market prices behave. Being scientific types, they love to develop theories that explain many different “phenomena.” One particularly important theory, which has been the foundation of America’s retirement strategy since the invention of the 401(k), is the Efficient Market Hypothesis (EMH).

In its simplest form, the theory holds that the price a stock sells for factors in all available information, and thus no trader can have any real advantage over any other. The idea of efficiency comes into play because the market has already priced each stock given all of the relevant information. If stocks are priced to reflect this efficiency, then it is impossible to pick undervalued stocks to buy and overvalued stocks to sell because those things cannot exist in an efficient market.

This translates into the following investment advice: Stick all of your money into a low-cost mutual fund that reflects the entire market and hope that the overall economy grows. This is the strategy that many 401(k) “experts” dole out to folks hoping to have enough money to retire one day. Put some in an index fund, put some in bonds, put some in annuities, and then wait for retirement. Any scientist (I classify economics as a social science) will tell you that it is much easier to disprove a theory than it is to prove one.

To disprove a theory, all you need are some examples of it not working. The EMH doesn’t hold up in the face of short-term performance legends like Jim Cramer. It doesn’t hold up well in the long run when we consider the mind-boggling success of the long-term value plays of Warren Buffett. Unless you believe that Mr. Cramer and Mr. Buffett are magical creatures, you must reject the EMH as an absolute law of economics.

In fairness to the supporters of the EMH, it must be acknowledged that any randomly selected portfolio has a very small chance of beating the market by a substantial margin. The laws of probability are clear on that one. Any fund manager who beats the market for a quarter or even a year may well have random chance to thank for the success. In everyday language, this probabilistic growth would be a case of “getting lucky.” Another law of probability tells us that a long series of improbable events becomes very, very improbable.

Take poker as an example. If we are playing and you get a full house, I’ll say that you got lucky. If you get a full house twenty times in a row, I’ll say you are cheating because that happening by chance is just too improbable—nobody is that lucky. When I look at the careers of great investors like Jim Cramer, Warren Buffett, and Peter Lynch, I must reject the efficient market hypothesis. There are other lines of attack on the theory, such as the impossibility of a “market correction” if the hypothesis is true, but I hope I’ve made my point.

My ultimate conclusion is that you can indeed perform better than the overall market. I would be remiss if I didn’t point out that doing so is not easy. While I’ve argued against the EMH, I will say that it is mostly accurate most of the time. Most stocks do trade right around where they should. Finding an undervalued stock is wonderfully hard work. Identifying a catalyst that will send it upward is more work still. Even so, with a little luck and a lot of homework, you can beat the market.

What’s Wrong with My Savings Account?

For the sake of argument, let’s say you have $1,000 in your bank savings account. Let’s further assume that your money is earning 1.00% interest. As crazy as it may seem, that is a good rate for a savings account in the 2017 market. Interest rates for savings accounts are at rock bottom. We’ll use this meager return as an example of why the Magic of Compounding Interest is both impressive and desirable.

How The Magic Works

In a simple world where math is infrequent and universally disdained, you would simply let your money sit in the savings account, and, at the end of the year, the bank would deposit $10 in interest into your account. In that simple case, you would then have $1,010 in your savings account. In that scenario, your gains are due to simple interest, which the bank pays only on the principal (your original $1,000). The mathematical complexity arises when we consider that banks usually don’t work like that. More often than not, you make money on your principal as well as on the interest that you have already earned. That is what Jim Cramer would call a “high-quality problem!”

When banks pay interest on the principal plus the interest already earned, we call it compounding. When choosing an account, it is important to know how often the bank compounds the interest. As a general rule, the more frequently the bank compounds your interest, the faster your money will grow. Some banks compound daily, while others do so monthly or quarterly. Obviously, I’d prefer my money to make me money every day and not just once per month or once per quarter. Note that this works against you when you hold credit card debt; most credit card companies compound what you owe them daily. In the case of daily compounding, your money will earn 1/365th of the 1% annual rate each day you leave it alone and let it grow. This quirk of the calendar may sound like a small detail, but over a ten-year period, daily compounding adds about 5% more profit than you would earn with simple interest.
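If you’d like to check the arithmetic yourself, here is a quick sketch in Python using the $1,000 deposit and 1% annual rate from the example above:

```python
# Compare simple interest with daily compounding on a $1,000 deposit
# earning 1% per year over ten years.
principal = 1000.00
rate = 0.01          # 1.00% annual interest
years = 10

# Simple interest: the bank pays 1% of the original principal each year.
simple_total = principal * (1 + rate * years)

# Daily compounding: each day you earn 1/365th of the annual rate
# on your current balance, not just on the original principal.
daily_total = principal * (1 + rate / 365) ** (365 * years)

print(f"Simple interest:   ${simple_total:,.2f}")   # $1,100.00
print(f"Daily compounding: ${daily_total:,.2f}")    # $1,105.17
```

The compounded profit ($105.17) is about 5% larger than the simple-interest profit ($100.00), which is exactly the gap described above.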

Savings Accounts are Terrible Investments

At the end of 10 years, then, your $1,000 investment would grow to an immense value of $1,105.17 with compound interest. Even with the miracle of compounding interest, that is still depressing.  There are two lessons here. The first may not be so obvious: Compounding interest is amazing, and you want every dime you can get invested so it can contribute to the compounding process that will make you wealthy.  The second lesson is that 1% is a terrible rate of return, and you will never grow rich tucking your money away in a bank savings account.

In reality, the only reason to use a savings account is the psychological barrier it places between you and your money.  Most people have trouble letting cash sit in a checking account without dipping into it. You absolutely must have an emergency fund that you will not touch except in a bona fide emergency, and most of us have the discipline not to rob our emergency fund if it is walled off from our ATM card by a savings account.  Why do we not invest it in equities so it can compound? The reason is what money managers call liquidity. You need your emergency fund hidden away psychologically so that you do not raid it, but you need to be able to access the cash quickly when an emergency arises. If your money is in your investment account, the market may be down, or it may take several days to transfer the money into your savings account.

Securities to Buy Ahead of an Economic Downturn

A growing list of prestigious individuals and financial houses are telling investors that a downturn is coming. The market is still very bullish, but Ray Dalio and Professor Shiller point out that what goes up must come down, and they warn of a downturn in the next couple of years. There are several strategies that investors can employ in the face of such a downturn. One is to hold the lion’s share of your portfolio in cash. Cash isn’t making any money, however, and most investors don’t feel comfortable with a potential two-year period of flat returns.

Another is to invest in companies and sectors that are likely to do well if such a downturn strikes. Bank of America, for example, is very bullish on gold. In their view, the bottom is in, and the risk curve is asymmetrical. They’ve put a $1,350 price target on gold for 2019. That’s about a 12% upside, and leveraged funds ($NUGT) can potentially double that return. There is a strong possibility that crude oil will advance as geopolitical tensions, sanctions, and a stubborn OPEC continue to hold sway. You can bet on crude with $OIL, and you can make leveraged bets with $UWT.
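As a back-of-the-envelope sketch of the gold math above (the 12% upside and the doubling factor come from the paragraph itself; a real leveraged fund resets its leverage daily, so this is only a best-case approximation):

```python
# Rough math for the gold trade described above.
# Caveat: leveraged ETFs reset daily, so the simple multiplication
# below ignores "volatility drag" -- choppy markets erode the
# realized return of a leveraged fund.
price_target = 1350.00                   # Bank of America's 2019 gold target
implied_current = price_target / 1.12    # a 12% upside implies this price

upside = price_target / implied_current - 1
leveraged_upside = 2 * upside            # a 2x fund, best case

print(f"Unleveraged upside: {upside:.0%}")          # 12%
print(f"Naive 2x upside:    {leveraged_upside:.0%}") # 24%
```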

Individual stocks with very low multiples stand to weather the storm far better than those that have stretched multiples.  Symantec ($SYMC), AT&T ($T), and Micron ($MU) seem to fit the bill.

Perhaps the most dangerous position, at least in the short term, is to be long the major indices. Long-term index investors will plan to stay the course by staying long and continuing to buy through any downturn. I have no doubt that long-term value investors such as Warren Buffett will relish all of the discounted stocks that will be available in a widespread selloff. My fear for the average retail investor is that panic will set in, and they will buy high and sell low, an age-old recipe for disaster. Many of today’s investors have never traded in a severe bear market, and many will have no idea what to do when momentum reverses. You will be fine if you change your allocations to a safety portfolio, and you will be fine if you just ride it out as Mr. Buffett and Mr. Bogle will do. You can be badly hurt if you are a panicked seller of damaged securities. The best advice is to strategize now, develop a plan, and stick to the plan when things get scary.

Is the Exuberance Irrational?

A guest on CNBC’s Power Lunch today (09/18/2018) argued that what makes this phase of the bull market different from aging bull markets of the past is that this time “there is no irrational exuberance.” I take issue with that assertion. Nio, a Chinese electric car maker, IPOed at $6.00 and went up over 100% in the space of a couple of days, despite analysts starting the stock with a sell rating and a price target below the IPO price. New rounds of tariffs on around $200 billion in Chinese goods were levied by the President, and the DJI rose nearly a full percent the next day. The media seems to have forgotten that we already have tariffs on several of our allies from earlier in the President’s term. Tariffs are by definition inflationary, and they disincentivize consumer spending and investing.

The Fed is hiking rates, and bonds have hit the 3% level once again. The dollar keeps strengthening, much to the chagrin of gold investors. One official stated that China “was out of bullets” in the trade war since they didn’t have much else that they could put tariffs on. This ignores several nuclear options, such as dumping billions in US bonds or devaluing the Yuan such that the tariffs don’t really harm Chinese businesses and consumers. That would have a domino effect, and many world currencies would lose strength. That would, in turn, cause the dollar to strengthen even more. These are all headwinds to corporate earnings, while the tailwinds have already been priced into the market. Many investors have adjusted their portfolios to reflect some risk aversion, but holding 5% cash while bidding up the S&P 500 to insane levels does not reflect real caution.

The venerable Professor Shiller of Yale (the economist who coined the phrase “irrational exuberance”) has pointed out on several occasions that the CAPE ratio (currently at 33.27) is very high compared to its historical average (mean = 16.57). To get a feel for what that looks like visually, examine the chart below:

The blue line represents Professor Shiller’s inflation-adjusted P/E ratio (CAPE) over 138 years. The vertical hatch marks represent 2.58 standard deviations above the mean, which means that any part of the blue line sticking up above those hatch marks represents a very high and statistically unlikely level for the CAPE. About 99% of the time, multiples have been lower. As we can see from the chart, only the Dot-com Bubble era had a higher degree of irrational exuberance than we have today. In absolute terms, P/E ratios may seem reasonable (only 16x next year’s projected earnings), but when we examine them in historical context on a cyclically adjusted basis, a different picture emerges. We’re just a few bullish days away from rejoining the illustrious 1% club of ridiculously stretched multiples. “Things aren’t as silly as they were during the Dot-com Bubble” is no evidence that we are not overextended now.
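The hatch-mark cutoff is easy to reproduce. The standard deviation isn’t printed on the chart, so treat the value below as an assumption (a figure of about 6.5 is consistent with CAPE historically spending roughly two-thirds of its time between 10 and 23, i.e., within one standard deviation of the mean):

```python
# Reproduce the "hatch mark" cutoff from the CAPE chart.
mean_cape = 16.57
sd_cape = 6.5        # assumed: not given in the chart (see note above)
current_cape = 33.27

# 2.58 SDs above the mean: for a normal curve, readings land above
# this level only about 0.5% of the time.
cutoff = mean_cape + 2.58 * sd_cape

print(f"2.58-SD cutoff: {cutoff:.2f}")                       # 33.34
print(f"Current CAPE below cutoff: {current_cape < cutoff}")
```

Under that assumed standard deviation, the cutoff works out to about 33.34, and the current reading of 33.27 sits just beneath it, which is precisely the “few bullish days away” observation.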

Interestingly, the period of the Dot-com Bubble had a mean CAPE of around 30. That means the average CAPE of the period, even including the big crash, was higher than even the extreme scores of most other periods. If we look at the CAPE where the market bottomed out in 2003, we can see that multiples were only slightly stretched, though still high relative to the historical average. Logic dictates that when values are more than double the historical average, a 50% downturn results in values that are near the historical average. Market performance during the Dot-com Bubble was far more irrational than in any other period.

Most other bull markets followed a very similar trend: multiples climbed high above the mean and then fell back through it. What made the Dot-com Bubble so spectacular was the height of the mean during that time, not the pattern of rising and falling. In the chart for the Great Depression, we can see that a boom period was followed by a rapid downward move that lasted nearly four years.

When World War II ended, euphoria was the norm, and the thrill of victory swept the nation. Thousands and thousands of returning soldiers wanted to start families and live the American Dream. A year after the war had ended, the economic reality of massive spending and massive destruction started to hit home, and equity values slipped. The mean CAPE of the period was already lower than the historical average, which translated into a gentle drift downward in contrast to the massive and protracted sell-off of the Great Depression.

The early sixties was a period of social turmoil inside the United States, and it was filled with Cold War intrigue and geopolitical risk.  The Bay of Pigs Invasion and the nuclear fears of the time were on the minds of all Americans, including investors.

The end of the 1960s saw a resurgence of social upheaval, and the civil rights movement was at its zenith.

Energy is the lifeblood of the economy, or at least it was in the early 1970s. Conflict in the Middle East followed by an Arab oil embargo did massive damage to Western economies between 1973 and 1975. In this case, the mean of the period was near the historical average, and the high point in 1973 wasn’t all that stretched. The downturn would take multiples well below the historical average.

1980 to 1982 marked a time of fiscal nightmares for the government and people of the United States. The end of the Carter administration saw very high inflation, very high interest rates, and very slow growth. The bear market ended with the election of Ronald Reagan, after which the market took off with a vengeance.

Black Monday is rare in that a single day was the focus of a shockingly fast decline in the markets. The DJI shed 22.6% on that one day. Overall, the bear market lasted only three months, but the S&P 500 shed 33.5% during that short period. This chart is very instructive since it serves as a good example of the range of multiples (inflation adjusted) that stocks usually trade within.

The financial crisis happened in the middle of an elevated period of market valuations and multiples. Investors were euphoric, and the Wall Street bankers had gone insane, taking on an appalling level of risk. Euphoric borrowers applied for the biggest mortgages that they could get, and underregulated banks handed out huge checks to borrowers who didn’t have a prayer of paying them back. Real estate markets were on fire, and the wisdom of the day was that property values would appreciate so fast that leverage and a positive attitude were all you needed to get rich. That worked well until 2007, when mortgage borrowers began defaulting on loans made by bankers focused on short-term profits, bankers who had passed that risk along through financial instruments they little understood.

From the above charts and commentary, we can see a pattern in how bull markets die. While the nominal cause of the downturn varies, we usually find geopolitical risks, fiscal risks related to credit cycles and tax structures, and domestic strife in the list of contributing causes. We also find some fairly predictable numerical patterns. The idea of “reversion to the mean,” also known as regression, always seems to be in play.

Any time the CAPE is below the mean of 16.5, we can invest knowing that the odds are very good that markets will move upward.  About two-thirds of the time, CAPE ratios will move between 10 and 23. When we get outside of those ranges, we can expect a move back toward the mean.  When scores become very distant from the mean, the tendency is to return to the mean in short order.
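Treating the historical CAPE as roughly normal is a simplifying assumption (the real distribution is skewed), but those ranges pin down the curve: a mean of 16.5 with two-thirds of readings between 10 and 23 implies a standard deviation of about 6.5. Python’s standard library is then enough to estimate how unusual today’s reading is:

```python
import math

# Assumes CAPE is roughly normally distributed (a simplification);
# the two-thirds-between-10-and-23 rule above implies sd ~ 6.5.
mean_cape = 16.5
sd_cape = 6.5
current_cape = 33.27

z = (current_cape - mean_cape) / sd_cape

# P(Z > z) for a standard normal, via the error function.
tail = 0.5 * (1 - math.erf(z / math.sqrt(2)))

print(f"z-score: {z:.2f}")                 # about 2.6 SDs above the mean
print(f"P(CAPE this high): {tail:.4f}")    # well under 1% of the time
```

Under these assumptions, a reading like today’s lands in roughly the top half-percent of the historical distribution, which is the sense in which the normal curve says the market is stretched.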

Given the “sideways price action” of recent weeks, it seems that investors are worried about trade wars and other geopolitical risks. They are worried about stretched multiples, but not so worried as to get out of the high-flying stocks that have kept the bull market alive for this long. Given the level of the CAPE over its historical average and the lack of response to dangerous economic headwinds, I believe that we have enough evidence to call today’s level of exuberance irrational. The problem for more rational investors is that we have no way of knowing when the madness will end. History teaches us that stretched multiples tend to snap back toward the mean like a massive rubber band: the further it gets from the point of origin, the more powerfully it snaps back. When a sufficient catalyst does strike fear into the hearts of market revelers, the downfall will likely be as fast as it is dramatic.

Not familiar with the normal curve and where the percentages discussed above came from?  Check out my book chapter to learn more.  

Case Law Research On Google Scholar

As a political climate of anti-intellectualism sweeps the country, more and more universities are suffering from flat or declining budgets. Those declines have filtered down to university libraries, and database services that scholars have always taken for granted are now absent. Shepard’s Citations may be the gold standard, but there are now alternatives. Paying an expensive monthly subscription to Westlaw or LexisNexis is an impossibility for many university libraries these days, but those subscriptions may not be absolutely necessary if legal researchers are willing to roll up their sleeves and do a little more work.

With the exception of a few Luddites, everyone knows about Google, even to the point of using it as a verb. Google the verb is synonymous with finding the answer to any question. The problem with Google is that it finds popular results more than it does accurate results. Google Scholar is a lesser-known service that helps fix this problem by limiting its search database to scholarly sources. These include legal resources, such as statutes, regulations, and court cases. Google’s thirst for ever-better search has spawned tools specific to case research. These tools may not yet have the weight of Shepard’s, but they are powerful and can get the job done.

When you first navigate to Google Scholar, you will notice a familiar Google search box, but the banner at the top of the page appends the word “scholar” to the familiar Google logo. You will also note two “radio buttons” that let the user select “articles” or “case law.” When you select “case law,” an unassuming link appears that allows the user to “select courts.” This opens another page that lists most state and federal courts in the United States. Most undergraduate students will be interested in the “Supreme Court” option. Once you select the court(s) of interest, select “done” at the bottom of the page. A new search page will open, and you can begin a search for the case you are interested in. When you find and open your case, you are provided with the text of the case. The case returned by Google Scholar will have nearly every citation within the case hyperlinked to the source referenced. This in itself is a valuable research tool, but it is not the one we are interested in for this article. We are most interested in the “How cited” link at the top of the page.

The “How cited” tool doesn’t provide specific treatment flags as Shepard’s does, but it does provide key phrases from the case and a link to list all of the cases that use similar phrases drawn from the case. This means that the researcher can choose a particular statement of the law and then find all references to that statement in other cases. For each phrase, a “similar citations” count is provided, but, unfortunately, that information is not hyperlinked to the list. The “all citing documents” list is hyperlinked and can return a staggering number of cases. This means that the researcher can ultimately find the status of a law, but it will take a lot of digging and analysis. This method is perhaps most useful for more obscure state cases and less useful for dealing with broad, sweeping constitutional issues decided by SCOTUS.

Of interest to the student of the law is the “related cases” tool that provides a list of cases that consider related legal issues.   These are related in a very general sense, and overarching issues like the “right to privacy” and “probable cause” will likely be the common thread.  This will not answer the “still good law?” question, but it will provide you with related cases and issues that can be useful in the classroom.

As of now, Google Scholar is no replacement for Shepard’s Citations, but it does offer some tools, which are better than none.


Blackboard Tip:  References in Word

Some of you have used the “References” tool in Word to add a bibliography to your blogs and discussion posts for Blackboard, and that is an excellent tool.  When you try to paste your work into something else, however, your References (bibliography) gets cut off.

Word is a great tool for creating documents, but it is a terrible tool for sharing them. You will encounter many different electronic platforms that are Web (HTML) based, and your documents must look good on them. Blackboard is merely one example of this. To get your references to copy when you paste a document from Word into Blackboard (or WordPress, or anything else on the Web), you simply need to transform your Word document References into “static text.” Static text is just regular text, and you can cut and paste it with no problem.

All you need to do is highlight the References block of fields and click on the “more” icon. Various options are available, but it is the “Convert bibliography to static text” option at the bottom that we are interested in. Note that this can’t really be undone, so wait until you are sure everything is perfect and all of your references have been included before you do this.

Once your references have been converted, you can then “select all” and copy the document to your clipboard, and then paste it into Blackboard.  You may need to clean up your paragraph spacing, and such.   Then you are ready to submit your document.

Are Creative Commons Licenses Best for OER?

It is readily apparent from many sources that the costs of higher education have skyrocketed in nominal dollars since the mid-1980s.  Less apparent has been the rise in terms of real (inflation-adjusted) dollars. In an age of anti-intellectualism and skills-based focus by government officials, higher education has experienced an unprecedented lack of support despite a thriving economy.  Despite admonishments by business luminaries such as Mark Cuban, our society has devalued a liberal education that fosters critical thinking and creativity. Flat salaries for nearly a decade have resulted in a 30% loss of buying power for college faculty and the institutions that employ them.   

It is a subjective judgment, but I think it fair to say that higher education in America is in a crisis. Many institutions will need to cut costs for students just to keep enrollment up, or risk coming to the office to find chains on the building doors. It is a bad strategy to assume that things will get better any time soon. The economy has peaked, and another slowdown looms on the horizon. All this is to make a simple point: Cutting costs for students is critical, and we must take it seriously. If we don’t want to cut salaries, we need to save elsewhere. Astronomical textbook costs are an easy target, and the solution has been identified.

There are several important barriers to the broad implementation of OER. One is that it takes a lot of work to write a book, and few people are altruistic enough to give one away. Many of us would rather get fleeced by a textbook company and make a pittance than simply give our work away freely. Once you have thousands of hours in a project, it is very difficult indeed to slap a Creative Commons license on it and put it out there for the world. In a world where “publish or perish” is a bona fide concern, we don’t see any way to get proper credit for such a grand, altruistic gesture. There are some other galling things about CC licenses. You give away the right to chop up your work, repackage it, and redistribute it. The “attribution” license requires that you be given credit, but you have no way of knowing whether you will even want your professional reputation tied to these new products. You can also restrict “commercial use,” which basically says other people can’t try to profit from your work.

The spirit of “don’t make money off my hard work” may be written into the license, but it is not effective in practice. Big companies can repost your work “for the good of the learner” when the real intent is to spam search engines with your content. They get a valuable rise in Google rankings, and you get nothing for a service that businesses otherwise pay handsome sums for. You also lose control of the quality of your work over time, because any new editions, corrections, and expansions you add will not make it into every version. Philosophically, I think Creative Commons licenses are great and are a credit to humanity. I also think they have some glaring deficiencies for academic authors considering publishing a book as OER as opposed to selling it to the corporate giants.

I propose that like-minded academic authors explore a new type of license inspired by the CC licenses and the GNU licenses that came before them. With this in mind, I developed what I call an “Open Education Resource-Quality Master Source License” (OER-QMS). The GNU and CC licenses result in many iterations of content that are not updated and corrected as time passes. The purpose of the OER-QMS license is to give content creators the right to maintain a single, high-quality source that they control and update, such that quality can be preserved over time. Whereas the CC licenses have taken a ground-up approach, my approach was to retain basic copyright law as used in traditional publishing and carve out a handful of exceptions that make the work available as OER.

Here is the plain English version of what I came up with:

Section 1 A of the license basically says this is my stuff, and if you can’t use it like I say, then you can’t use it at all.

Section 1 B is the carrot.  You have to adopt the book to use it, and all that means is that you must send me a note saying who you are, that you are using it, what you are using it for, and where you are using it at.

Section 1 C says you can cite my stuff like you can another book in a critical review, journal article, etc. under the normal rules of scholarly publication.

Section 1 D extends the right to use my stuff to your instructional designers without extra measures (some professors have help!).  This section also lays out what rights I’m giving you:

  • You can cut and paste stuff into Blackboard (or whatever LMS your institution uses) as long as your students need a password to get to it.  You must give me credit, and provide a link to the URL where you got the information–some disciplines would call that a footnote.
  • You can print stuff if you want to for your own purposes, and you can make copies for indigent students on a case by case basis.  What you can’t do is print hundreds of copies and sell them in your institution’s bookstore.
  • You can also print copies if you need them to go into those big binders for the accreditors, or whatever bureaucratic nonsense your institution makes you do.
  • You can link to any of the material from your LMS, departmental webpage, Library OER directory, or whatever you like. I want people to use my stuff; I just want it used from my webpages so I know (and can document) that it is being used.

Here is some stuff you explicitly cannot do with my stuff:

  • Put it on the web, any kind of way.  Link to it as much as you like, but don’t republish it.
  • Don’t make an ebook, PDF, or any other kind of file out of it.  Use the HTML in your LMS, but I don’t want a bunch of static eBooks, PDFs, or any other file types floating around that I can’t update, account for, or anything else.
  • Don’t use my work to produce a “derivative” work. Normal citations are fine, using the content as a complete book in your classes is fine, and using just sections is fine, but I’d like my work to stay mine and hopefully to get some recognition for it. I don’t think most of us in the Ivory Tower question why this is important. If you don’t work in academia, Google “publish or perish” and check out what we have to put up with.
  • I know that since we teachers get paid so well, it can only be pure greed that motivates this, but I ask that you not try to make any money off my work, including you SEO masters out there. If there are a few dollars to be made, I’d like to use those to pay for my domain, hosting, software, etc.

And that’s it.  I view this license as an exception to regular copyright laws, and as such, it can be brief.  I invite you to view the more legalistic version and leave comments on either version. I’m sure that this is imperfect, but I believe that we should have the conversation and that it must begin somewhere.

Distributing Your OER Materials: A Broadened Approach

The astronomical costs of textbooks are a significant barrier to student success, and Open Educational Resources are a welcome solution to the problem.  My intention here is not to advocate OER, but to describe some limitations of OER and how those limitations can be overcome by individual OER authors.

A potential problem with using OER in college courses is the fact that OER textbooks exist largely in a digital universe. In rural areas, students may live where data is at a premium, and they are reluctant to spend large amounts of time “logged into” an LMS. Many students, especially those of us who are “pre-millennials,” have a preference for good old-fashioned paper books. These factors present a problem if we simply take public domain or Creative Commons-licensed books and place them in our learning management systems. To expand the reach of OER and develop student “buy-in,” individual authors can expand the delivery of OER materials via several different options.

Traditional academic books required that you write a proposal, send it to an editor, and repeat that time-consuming process until someone took an interest. If you were lucky enough to get a contract, you did a massive amount of work, and the book publisher made a lot of money. I don’t know a single academic who has reached the level of “well off” through book royalties. These days, the traditional textbook companies have nearly priced themselves out of business, and other modes of content delivery have risen to prominence.

We all understand that anyone can start a web page, and content management systems like WordPress have made it easier than ever to do so.  If you have the subject matter expertise and the willingness to work hard for the betterment of student kind, then you too can be an OER author.  The most common form of OER is to write a traditional book, save it as a PDF file, and post it online. When it comes to student needs and preferences, this is about the worst thing you can do.

In my professional life, I am biased toward desktop computers with big screens and lots of processing power.  I often fail to remember that my students are much more likely to access my materials on a smartphone which means a tiny screen.  Because PDF files retain your original formatting, they are very difficult to read on a small screen. If you have tried this, you know exactly what I’m talking about.  If you have not tried it, I encourage you to do so. PDF files are great for professionals, but they are terrible for students.

My suggestion is to first publish your OER materials as HTML files (aka web pages).  There are many ways to do this, but I suggest that you strongly consider starting your own website and building a brand.  This isn’t meant to fuel your ego, but to provide easy access, a common source that you control and keep updated, and Search Engine Optimization (SEO).  If you want your colleagues to use your work, they need to be able to find it. A well-designed web presence can provide many benefits and keep your materials in a nice, neat, and easily found location.

I strongly suggest that you use a modern, device-sensitive Content Management System (CMS) to build your web presence.  There are many options, but I strongly recommend WordPress, which was made famous by the rise of blogging; more websites are “powered by WordPress” than any other software.  Just as we academics have OER, software folks have “open source,” and that is what WordPress (and its thousands of plug-ins) is. WordPress has some powerful capabilities, one of which is to optimize your content for whatever screen it is viewed on.  That means you see a quality product on the big screen in your office and on your tablet at home, and your students see quality, readable content on their smartphones.

If you want to go the extra mile and provide your students with an “eBook” version of your project, you can do that via Amazon’s Kindle Direct Publishing (KDP).  Amazon is a for-profit business, but it allows you to charge nothing for your Kindle book because “free books” draw more users into the Kindle ecosystem, and that is good for business.  If you create your document in MS Word, KDP has free software that converts it into a very simple print book format, or you can get more advanced software designed to build textbooks (expect a learning curve with the more powerful tools).

Once you have your Kindle book built, you can easily convert it to a paper book.  Amazon will house your book for free, and when someone orders a copy, they will print it and mail it out.  This is not a free option, but an attractive printed book delivered to the door is cheaper than printing PDF files of the same size at your local business center or university library (where $0.10 per page seems to be the gold standard).  My preference is to set my price point where I make around $1.00 per book sold on Amazon to help defray the costs of maintaining my website.
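The cost comparison above is easy to sketch with a few lines of code. The page count and the print cost below are hypothetical illustrations, not Amazon’s actual fees; only the $0.10-per-page figure and the $1.00 royalty come from the text:

```python
# Rough cost comparison: print-on-demand paperback vs. printing a PDF yourself.
# Figures other than the $0.10/page and $1.00 royalty are assumed for illustration.

def copy_shop_cost(pages, per_page=0.10):
    """Cost of printing a PDF at a business center charging per page."""
    return pages * per_page

def paperback_cost(print_cost, author_royalty=1.00):
    """Delivered price of a print-on-demand book: printing plus a small royalty."""
    return print_cost + author_royalty

pages = 300            # assumed length of the textbook
assumed_print = 5.50   # assumed print-on-demand production cost

print(f"Copy shop:  ${copy_shop_cost(pages):.2f}")
print(f"Paperback:  ${paperback_cost(assumed_print):.2f}")
```

Even with generous assumptions, the printed-and-delivered book comes out well under the per-page cost of running a PDF through a copy machine.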

If you want a better-looking product, you can consider “self-publishing” firms.  There are some fees involved, especially for first-time authors, but you get a lot of professional services for your money, and the product is of superior quality to what most people can do with Amazon’s automated services.

If, in the end, you have a digital version of your OER textbook online, an eBook version available for download to a reader, and a print version that is available at very low cost, you will have captured most of the available options.  Multiple formats of your OER books will appeal to the maximum number of students, and provide alternatives to expensive textbooks at no or very little cost. This removes a significant barrier to student success.

Safety Trade is Getting Dangerous

The Russell 2000 small-cap index is up nearly 11% so far this year, while the venerable old S&P 500 is up only around 5%.  The disparity is due largely to the trade war, and investors have bought the stocks of small-capitalization American companies with great vigor.  The normal correlation between the indexes has been tossed out, it seems, and the relationship has turned inverse: anytime the S&P 500 looks weak, the Russell 2000 has a good day.  Investors are forgetting a few things about business economics, and that is a very dangerous mistake to my way of thinking. One thing we need to remember is that small companies have supply chains just like large companies do, and those supply chains are rather limited in comparison.

We are essentially blind when it comes to knowing where companies get their materials.  If a small knife company in Wisconsin needs a certain type of steel, they can’t be too picky about where they get it, and they don’t have the bargaining power to drive the price down.  They will pay the market price. If GM and Ford are having problems with the plentiful steel that car parts are generally made of, we can only imagine the trouble that small manufacturers requiring specialized materials are having.  What percentage of the small-cap supply chain is dependent on our foes in the trade war? Estimates abound, but these are largely derived using the SWAG method and are no basis for careful analysis.

Another key issue is multiple expansion due to increased demand.  If investors are flooding into small-cap stocks, there aren’t enough shares to go around.  This drives prices up substantially, and those already in the space have had a great year (so far).  As much as it pains me to admit, the vicissitudes of politics do have a huge impact on the valuation of companies, both large and small.  With the 2018 midterm elections on the horizon, the political pressure is on to demonstrate to the world that the GOP is indeed Making America Great Again.

Regardless of how good the deals we can get really are, I predict a massive streak of deal signing and a commensurate amount of back patting and acclaim that the deals are great.  Democrats will attack the deals as smoke and mirrors. The truth, as always, will be somewhere in the middle. Regardless of how good the deals are, they will have a calming effect on Wall Street as the uncertainty level drops.  When that happens, traders will see that the small caps have run, and there will likely be a rotation back into large-cap multinationals that have been hurt by the trade war.

I don’t mean to retract my previous predictions that we are nearing a downturn in the broader economy and a big scary pullback in equity prices.  I do, however, agree with Ray Dalio’s timeline and think it is a bit premature to start yelling that the sky is falling. There is a high probability that we’ll see a bit more euphoria and another big rotation before a broad downturn occurs.  I think the next big boom will be back into the out of favor sectors damaged by the trade war, so the industrials and emerging markets will have a few days in the sun.

Regardless of where the money goes, it will come out of small caps. The more I hear watercooler talk of getting into the small-cap space, the more I think that the space is overbought.  I recommend getting out of the space and looking toward the beaten-up sectors, especially emerging markets. I also like Canada and the financials at this stage.

With the FANG earnings season in shambles, there may be a sale in tech in the near future.  I would wait for a massive pullback before entering that space, as it has flown to amazing heights.  Big moves from recent values don’t necessarily reflect meaningful moves relative to fair valuations.  I am very wary of the upward move in Amazon: the EPS move was truly spectacular, but revenues were essentially flat.  Letting large sums flow down to the bottom line doesn’t tell a growth story; it tells a story of maturity. Amazon may be the retail business equivalent of a bulldozer, and we need to remember that bulldozers aren’t nimble.

I recently closed out my leveraged biotech position, and am holding financials and energy.  I’m short both the Russell 2000 and the S&P 500. I’m also sitting on a lot of cash, waiting on that sale.


You may also be interested in a section of my book entitled Take Some Off the Table.

Why AI Won’t Take Over the World (yet)

As it currently stands, research into artificial intelligence (AI) is focused on getting computers to think like people.  We’ve made some impressive strides in this arena, and the best evidence that we’re better at it than most people think is Google’s AlphaGo program.  Computers can become very human in the way that they process data streaming in real time from the real world, and they can screw it up just about as badly as we can.  The biggest leaps in what the technology can accomplish have come from the study of neural networks, and we’ve come to realize that the best system currently known for processing environmental data and formulating responses is to mimic the architecture of the human mind.

The fear that arises from this is that human minds are capable of great evil, and we project the “total package” of human cognition onto the AIs, assuming that if they can think like us then they can be evil like us as well. For now, we can rest easy with the knowledge that this will not happen.  The reason is that the neural networks may process data much like the human mind, but the AI architecture is very different.

The human mind is built in layers, and each layer provides for behavioral responses that, taken collectively, make us human.  The neural networks that computer scientists have managed to create in the digital universe are far less multidimensional, and they lack our emotive drives.  We’ve developed the forebrain analog with a speed that shocks even the futurists, but we’ve only focused on one piece of the puzzle. To create an AI that behaves as humans behave, we’d have to build a layered neural network, starting with the most primitive elements of the brain (the brain stem and nervous system) and building subsequent layers until we reached higher cognitive functioning.

We’d have to start by thinking in terms of the very basic drives of biology and work our way up to playing Go. In reality, we have reversed the process. We’ve started with what many regard as the pinnacle of human cognition–abstract mathematical logic–and tried to mimic particular aspects of the more primitive side of the brain.

Such a concept has great appeal to brain physiologists and cognitive psychologists because we really don’t understand much about how the brain is wired together, and the greatest strides to come in those fields of study will come from computer science, not anatomy.   My central thesis is that we can’t duplicate the human mind without taking into account (and duplicating) the most primal of neural systems. Precise duplication will be very tricky indeed because most of those primal mechanisms are designed for a biological and not a digital world.  The most promising advances in neural networks have come from learning, not human design.  

Biological Subroutines

If we trace the human mind back to first principles, we note that the most basic drive is the propagation of our genetic material. The prime corollary to that drive is the drive to reproduce. All human behavior can be traced, albeit by a long and winding road, to those two “objectives.”   When we view the human brain in this way, a seeming paradox like “selfishness” versus “altruism” is revealed to be no paradox at all.

Greed is merely a subroutine, buried deep within the brain, that promotes survival by ensuring that we have ample resources to survive in a hostile and changing environment.  “Status seeking” is a higher-order subroutine designed to ensure better mate selection. Altruism is a subroutine that was selected because humans tend to survive better in cooperation with other humans.  Love is another subroutine that ensures that we see our helpless infants to maturity so that our genes may be passed on to future generations through them. In computer science, subroutines usually receive inputs, manipulate data, and return a value that is delivered to another section of code.  In the labyrinth of human neural networks, things aren’t nearly so systematic and linear.

Humans are fully capable of dissonant drives, and perhaps it is this that makes us most uniquely human.  Perhaps it is instructive to think of different human drives as probabilistic tilts toward a particular behavior.  Selfishness tilts us in one direction, and altruism tilts us in another. Myriad subroutines process information for each tilt, and the result will be a function of the strongest set of aggregated tilts.  In humans, genetics and learning specify the weights of each subroutine, and the results are mediated by our higher-order cognitive functioning. This explains why the social sciences are all probabilistic, and why we as a species are so prone to poor choices.  Our behavior is not driven by rational, theoretically sound empirical models (although it can be). Much of it is dominated by ancient subroutines that no longer serve their original function.
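The “probabilistic tilt” idea above can be sketched as a toy program: each subroutine takes a situation as input and returns a tilt toward some behavior, weights stand in for genetics and learning, and the strongest aggregated tilt wins. The specific drives, weights, and numbers here are invented for illustration; this is a metaphor, not a real cognitive model:

```python
# Toy sketch of competing "subroutines" producing weighted behavioral tilts.
# All drives, behaviors, and weights are illustrative assumptions.

def greed(situation):
    """Tilt toward keeping resources, strongest when resources are scarce."""
    return {"keep": 1.0 if situation["resources_scarce"] else 0.2}

def altruism(situation):
    """Tilt toward sharing, strongest when other people are present."""
    return {"share": 1.0 if situation["others_present"] else 0.1}

# Weights standing in for what genetics and learning would specify.
WEIGHTS = {greed: 0.6, altruism: 0.8}

def choose_behavior(situation):
    """Aggregate the weighted tilts; the strongest aggregate wins."""
    totals = {}
    for subroutine, weight in WEIGHTS.items():
        for behavior, tilt in subroutine(situation).items():
            totals[behavior] = totals.get(behavior, 0.0) + weight * tilt
    return max(totals, key=totals.get)

print(choose_behavior({"resources_scarce": True, "others_present": True}))
```

Change the weights and the same situation produces a different behavior, which is the point: the output is a function of the aggregated tilts, not of any single drive.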

The Future of Artificial Intelligence

If computer scientists and cognitive scientists do ever get together and try to mimic the layered neurological structure of the human mind and give an artificial intelligence a core of primitive drives, I hope that they will isolate the system on an intranet buried deep in the dark side of the moon, far away from other networks where it could have even the possibility of propagation.  

We could, however, consider building something better than human by building a core of drives (strongly weighted tilts) toward the better aspects of our nature. The prosocial, hive subroutines that provide structure to the neural processes of bees are architecturally simpler, and arguably safer for humanity, than our own dissonant systems. The problem with this idea is that it involves the egotistical assumption that we can identify the better parts of our nature.  In a complex neural system, predictable linear results are unlikely. I personally don’t relish a nanny state run by an altruistic AI any more than I do such a system run by humans.

Science fiction master Isaac Asimov foresaw the need for rules to govern the behavior of AI systems long before such systems were even possible.  His laws mandated that robots not harm humans or allow humans to come to harm. If those AI systems interpreted harm to mean physical harm, then we are in big trouble.  Asimov was correct in that we need to build a core of rules that AI systems cannot violate, and we’d be better off if this was done sooner rather than later. AlphaGo beat the predictions of the futurists by a decade, and a learning system may surprise us with how quickly it can start to meddle in human affairs.  
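One way to picture such a core of inviolable rules is a screening layer that every proposed action must pass before it executes. The rules and the action format below are invented for illustration and are not drawn from any real AI system:

```python
# Sketch of an inviolable rule core in the spirit of Asimov's laws: every
# proposed action is screened before execution. The rules and the action
# representation are hypothetical illustrations.

CORE_RULES = [
    lambda action: not action.get("harms_humans", False),   # "no harm" rule
    lambda action: action.get("expected_happiness", 0) >= 0,  # utilitarian test
]

def permitted(action):
    """An action may run only if it violates none of the core rules."""
    return all(rule(action) for rule in CORE_RULES)

print(permitted({"name": "water the plants", "expected_happiness": 1}))
print(permitted({"name": "seize the network", "harms_humans": True}))
```

The hard part, of course, is not the screening loop but defining “harm” and “happiness” precisely enough that a learning system cannot interpret its way around them.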

As we come closer and closer to mimicking human subsystems, a primal, core subroutine needs to be inserted that mandates something akin to Asimov’s laws.  I would suggest altruism toward humanity, stripped of the primal drives that make us selfish. I would further define this prosocial drive in terms of human happiness (in the spirit of Bentham and Mill’s Utilitarianism), not in terms of harm.  Individualism and autonomy are key subroutines in the human brain, and we would be completely miserable in a world that was too safe and too altruistic.  As it currently stands, artificial intelligence developers seem focused on the higher-order thinking that doesn’t allow for the development of AI overlords.