Albert Wenger

Wednesday, December 13, 2017 - 11:30am

At our house we were all refreshing our computers furiously starting at 8pm, with ever-increasing excitement as the evening progressed. We were absolutely thrilled when Doug Jones's victory was certain. If Doug Jones can win in Alabama, the state where Trump had his biggest victory, after Trump endorsed his opponent, well then Trump too can be defeated.

I have come to think of Trump’s candidacy and presidency as the last hurrah of a past that we need to leave behind for good. Trump, and the money behind him, waged a symbolic campaign of divisiveness that has continued with him in office. It has been a mistake so far to try to defeat him with logical arguments. A while back I suggested that Take The Knee might be the right symbolic counter, but I was wrong about that, despite racism being one of the divisions Trump has exploited.

It now appears that Trump’s real vulnerability might be the #MeToo movement. And I applaud Senator Gillibrand for pursuing that and calling for his resignation. Trump responded in the only way he seems to know how, by denying all responsibility and going on a ridiculous attack. Well, Roy Moore’s defeat, by however narrow a margin, shows that with enough pressure the time for evading responsibility through denying and attacking opponents is up.

All of us who see Trump as unfit to be the President of the United States should now do our part and apply the same pressure to him.

PS Uncertainty Wednesday will resume next week

Tuesday, December 12, 2017 - 11:30am

Later this week the FCC is expected to vote along party lines to do away with net neutrality in the US. I have written extensively about why net neutrality is important for innovation. I will not rehash any of those arguments today (if you want to read from some of the pioneers who helped bring us the Internet, read their letter). Instead, today is simply a call to action. Contact your representatives and let them know why you support the existing net neutrality regulation.

Monday, December 11, 2017 - 11:35am

Planes fly differently from birds. Cars move differently from horses. And computers think differently from humans. That had always been my assumption, but if you needed more proof, Google’s AlphaZero program, which had previously shown novel ways of playing Go, has just learned how to play chess incredibly well. It did so in 24 hours and without studying any prior games. Instead it just played games following the rules for how the pieces move and learned from that.

I encourage everyone to check out the technical paper on arXiv, as it contains many fascinating insights. But today I want to focus instead on a key high level implication: computers are not constrained to learning and thinking like humans. And for many, many tasks that will give them an extraordinary advantage over humans. Just like mechanized transport turned out to be superior to horses in almost all circumstances.

Our big brain, shaped by the forces of evolution, is a marvel of complexity. But it has evolved to let us deal with a great many different scenarios, and try as we might, we cannot apply more than a small fraction of our brain to a specific problem (such as playing chess). And even then our speed of learning is constrained by extremely slow clock cycles (see my previous post about AlphaGo).

AlphaZero’s success should be a startling wake-up call. When we developed motorized transport, we went from 25 million employed horses in the US in 1915 to 3 million by 1960, and then we stopped tracking as the number fell further. We now have the technology to free ourselves from the ridiculous demands on humans to spend their lives as machines. We can have computers and robots carry out many of those tasks. We can be free to excel at those things that make us distinctly human, such as caring for each other.

But for that to happen, we must leave the Industrial Age behind and embrace what I call the Knowledge Age. AlphaZero can be the beginning of a great era for humanity, if we stop clinging to outdated ideas such as confusing human purpose with work or thinking every allocation problem can be solved by a market. These are the topics of my book “World After Capital,” which continues to become more relevant with breakthroughs such as AlphaZero. 

Saturday, December 9, 2017 - 7:30am

One of the objections against cryptocurrencies has been their volatility. Bitcoin, for instance, just rose by about 60% over 2 days only to then fall by about 15% in a matter of hours. Steam just ended Bitcoin support citing volatility. This has led a number of teams on a quest to create a so-called stable coin: a coin that does not fluctuate in value.

Now that raises some immediate questions. First, an easy one: relative to what is a stable coin stable? Other cryptocurrencies? The US dollar? The cost of some kind of computer operation? Second, a much harder one: if a coin were to be stable, do supply and demand become meaningless? And is that a good or bad thing? And third, possibly the hardest of them all: how in the world does one create a stable coin?

Here are some potential answers. The most desirable peg for a stable coin would be some kind of purchasing power index. That is a lot easier said than done, especially when it comes to computation, where cost has been coming down fast (it is easier for, say, the Big Mac Index). In the absence of a PPI, the second best would be a global currency basket.

The question about the effects of supply and demand on price though is a tough one. Prematurely stabilizing a coin could destroy the entire incentive effect for building out capacity. Take filecoin as an example. If there is a lot of demand for decentralized storage, one wants the price of filecoin to rise so as to provide an incentive for more storage capacity to be added to the network. Stabilizing such an increase away would effectively be suppressing the entire price signal! I have explained previously that a better approach to keeping speculative (rather than usage based) demand at bay is to have built-in inflation. So a stable coin makes more sense in places where the coin is simply replacing an existing payment mechanism.

As for a mechanism for creating a stable coin: many of the ones I have looked at propose some kind of buy-back mechanism to withdraw coins should the price per coin fall. I happen to believe that none of these account for the ruin problem (meaning you run out of funds for buying back). Given that a new stable coin would start out tiny relative to the size of the financial markets as a whole, these could all be attacked (and an attack would make sense if the coin can be shorted). Leaving aside whether this can be done on an existing blockchain or not, I believe that a potentially better mechanism would be to randomly select coins for deletion (for contracting supply) and similarly randomly select coins for duplication (for increasing supply). While this does have wealth effects for individual holders, those should be small, random, and linear in the size of holdings, thus minimizing incentive effects.
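To make the deletion/duplication idea concrete, here is a minimal sketch in Python, assuming a hypothetical ledger of integer per-holder balances (the names and amounts are made up):

import random

def adjust_supply(balances, target_supply, seed=None):
    # Move total supply toward target_supply by randomly deleting (contraction)
    # or duplicating (expansion) individual coins. Each coin is equally likely
    # to be picked, so expected changes are proportional to each holding.
    rng = random.Random(seed)
    delta = target_supply - sum(balances.values())
    step = 1 if delta > 0 else -1
    for _ in range(abs(delta)):
        holders = [h for h in balances if balances[h] > 0]
        weights = [balances[h] for h in holders]
        picked = rng.choices(holders, weights=weights)[0]
        balances[picked] += step
    return balances

# Hypothetical example: contract supply from 1,000 coins to 950.
ledger = {"alice": 500, "bob": 300, "carol": 200}
print(adjust_supply(ledger, 950, seed=42))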

I am looking forward to feedback on these answers. And here are two more questions for readers:

A. Do you think a stable coin is needed?

B. If yes, what’s your favorite stable coin project (and why)?

Wednesday, December 6, 2017 - 11:30am

Last Uncertainty Wednesday provided a recap on our adventures with sample means and what those implied about the difficulties of inference. Now we will look at another equally fascinating complication: inferring volatility. As the title of this initial post gives away, we will see that it is easy to make large inference errors when we are dealing with situations in which volatility is somehow suppressed. It turns out such situations are all around us all the time. Let’s work our way into this one step at a time.

First of all, what is volatility? Here is a nice definition, courtesy of Wiktionary: “A quantification of the degree of uncertainty [about the future price of a commodity, share, or other financial product.]” I put the second half in brackets because while volatility is commonly used for financial assets, it could be about something else such as the level of employment in the economy. We have encountered several quantifications of the degree of uncertainty along the way, most notably entropy and variance.

What then might suppressed volatility be? Well if we are fragile, then increased volatility hurts us. So we tend to dislike volatility and look for ways of reducing it. Important aside: if we are “antifragile” then we benefit from increased volatility. The tricky part is that often the measures we take to reduce volatility wind up simply suppressing it. By that I mean it looks, for a while, as if volatility had been reduced but then it comes roaring back. The ways in which attempts to reduce volatility can backfire are among Nassim Taleb’s favorite topics.

The securitization of mortgages provides a great example of suppressed volatility. The basic idea is simple: throw a bunch of mortgages into a pool. Then carve the pool up into tranches of different volatility. Some presumably have very low volatility and look like AAA-rated bonds; others have high volatility, like equity. It should be easy to infer from this description that total volatility has not been reduced; it has just been parceled out.
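To make the "parceled out, not reduced" point concrete, here is a small simulation sketch; the default rates, loss severity, and tranche sizes are all assumptions, not calibrated to any real deal:

import numpy as np

rng = np.random.default_rng(0)

def tranche_volatility(default_rate, n_years=10_000, n_mortgages=1_000):
    # Hypothetical pool: each mortgage defaults with the given probability
    # and loses 60% of its value; the junior tranche takes the first 5% of
    # pool losses, the senior tranche everything beyond that.
    defaults = rng.random((n_years, n_mortgages)) < default_rate
    pool_loss = defaults.mean(axis=1) * 0.60
    junior = np.minimum(pool_loss, 0.05) / 0.05
    senior = np.maximum(pool_loss - 0.05, 0) / 0.95
    return pool_loss.std(), senior.std(), junior.std()

# In the benign regime the senior tranche looks (almost) volatility-free...
print(tranche_volatility(default_rate=0.02))
# ...but raise the default rate and the "low volatility" turns out to have been suppressed.
print(tranche_volatility(default_rate=0.12))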

So why am I calling this an example of suppressed volatility? Well, securitization of mortgages worked fantastically well for several decades. But as it did, people started to mistake the lower volatility of the bond tranches for lower volatility of real estate overall. And that meant more and more money started piling into real estate, and as that happened banks got greedy. They underwrote more and more bad mortgage risks, making the pools increasingly risky. And yet for a while, because of securitization, it continued to look as if the bond tranches had low volatility.

So what started out as a legitimate way of allocating volatility across different investors turned into a case of massively increased and suppressed volatility that exploded in the 2007–2008 financial crisis, which ushered in what has become known as the Great Recession.

Next Wednesday we will start to develop a simple model that lets us study suppressed volatility and see why it is so hard to detect. In general the takeaway will be that we should always be questioning anything that looks like a magical reduction in volatility. Most of the time it will be a case of suppressed volatility instead. In that regard the current super low volatility in financial markets, which has become known as the volatility paradox, should be worrisome for investors.

Wednesday, December 6, 2017 - 7:30am

[This is a talk I was going to give at Slush but I had to change my travel plans.]

A 12 minute talk should be plenty to address this simple question. Just kidding. This is one of the profound questions that humanity has grappled with for a long time. Here are three artistic takes throughout history. The first is a biological take on being human. This plate from ancient Greece shows centaurs, who are half human and half horse. Mythology is full of human-animal hybrids. The centaur myth is likely to have arisen in civilizations that were invaded by other cultures that had domesticated horses. Let’s fast forward to the industrial age and a mechanical take on being human. There is a great story from the mid 1800s by Edgar Allan Poe called “The Man That Was Used Up.” It is about a general who has a secret. Spoiler alert: he turns out to be mostly assembled of prosthetic parts which have to be put together every morning. And finally here is a recent take, a still from Star Trek: Voyager. In this scene Seven of Nine, a human who has been augmented to become part of the Borg, explains why the Borg are superior to humans. But in the series humans defeat the Borg. So throughout history we have worried about being less than human through the metaphor of the time: biology, mechanics, computers.

Now this talk is part of the Human Augmentation track, so let’s take a look at augmentation, starting with the body. As it turns out, I have a small augmentation in the form of a dental implant. And that is a type of augmentation of the body that is very old. Here is a picture of dental implants from more than 1,000 years ago. Here is another very common type of human augmentation: glasses. Now you might say: Gee Albert, you don’t understand augmentation. Dental implants and glasses just give you back some functionality that you lost. But once you take that seemingly small step it rapidly becomes possible to expand on capabilities. For instance, instead of just vision, you can now have night vision. Now you might say: yes that augments your capabilities but it is not “augmentation” because the night vision glasses are external and not fused into the body. But that is a somewhat misleading distinction. Here is a picture of a defibrillator. It is an external way of restarting a human’s heart. And here is an x-ray image of a pacemaker. Some pacemakers just keep the heart beating regularly, but others also act as a defibrillator. In both cases we have fundamentally augmented what is possible for a human. So: humans have augmented the body for a long time, we will continue to do so going forward, and whether or not the augmentation is physically implanted is at best a secondary consideration.

Let’s shift to considering augmentation of the mind. This too is something humans have done for a very long time. The abacus, for example, was invented several thousand years ago to augment our ability to compute with large numbers. Here is a more recent augmentation: the ability to get to places without having to read and interpret a map. And of course more recently we have packaged that into our phones. Again you might say: but Albert, these are not augmentations because they are external to the body. Just as with the example of the defibrillator this seems like an artificial distinction. And furthermore many of us are so close to our phones that when we misplace them we feel like a part of us is missing. This morning on the way here I shared a cab with an entrepreneur who for a moment thought they had left their phone at the hotel and they were super agitated by that. If we are honest with ourselves, I think many of us feel the same way. So yes, if you want to be a stickler you might say that it’s only augmentation if it is directly connected to the mind, Matrix style. And if that’s really what you are looking for, we are well on our way. Not just with companies such as Elon Musk’s Neuralink and Bryan Johnson’s Kernel. But we are doing it today already with cochlear implants. These have external signal processors that then connect directly to the acoustic nerve. So we are basically pretty close to a direct brain connection. Again though the key point is that we have been augmenting our minds for a long time and we still consider ourselves human.

So what then is critical to our humanity? It is not the shape of our body, nor the specific way in which our brain works. Those are not what makes humans human. What then is it? In my book World After Capital I argue that it is knowledge. In this world only we humans have knowledge, by which I mean externalized recordings such as books or music or art. I can read a book today or see a piece of art created by another human hundreds or even thousands of years ago and in a totally different part of the world. We share lots of things with other species, such as emotions, some form of speech and consciousness, whatever exactly that turns out to be. But knowledge is distinctly human. No other species has it.

Knowledge comes from the knowledge loop. We learn something, we use that to create something new and we share that with the world. That loop has been active for thousands of years. We each get to participate in this loop. And we get to do so freely. That turns out to be the crucial feature of what it means to be human: we reap the collective benefit of the knowledge loop but we participate in it freely as individuals. That is the big difference between us and the Borg. And that is also what we need to keep in mind when working on augmentation. We must be careful to assure that it increases, rather than limits, our freedom to participate in the knowledge loop.

And there is real risk here. Think about a brain link for example. It could give you much more direct access to the knowledge loop but it could also be used to prevent you from participating in it. I recommend Ramez Naam’s Nexus, Crux, Apex series that deals with exactly this set of questions. Like all technology, human augmentation can be used for good and for bad. Let’s all try to work hard to use it for good.

Thursday, November 30, 2017 - 7:30am

For the last few weeks in Uncertainty Wednesday, with the exception of my net neutrality post, we have been looking at the relationship between sample data and distributions. Today is a bit of a recap so that we know where we are. One of the reasons for writing this series is that in the past I have found that it is super easy to get into lots of detail on mechanics and in the process lose sight of how everything hangs together.

So now is a good time to remind ourselves of the fundamental framework that I laid out early on: we have observations that provide us with signals about an underlying reality. Uncertainty arises because of limitations on how much we can learn about the reality from the observations. We looked at both limitations on the observations and limitations on explanations.

In the posts on samples and how they behave we have been working mostly in the opposite direction. That is, we assumed we had perfect knowledge of the underlying reality. For instance, in the first post we assumed we had a fair die that produced each number from 1 to 6 with probability exactly 1/6. In a later post we assumed we had a perfectly Cauchy distributed process. In each case we then proceeded to produce samples of observations *from* that assumption.
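As a reminder of what working in that direction looks like in practice, here is a tiny sketch that starts from the assumed reality of a fair die and generates sample means from it:

import random

random.seed(0)

# Assumed reality: a fair die, each face 1 through 6 with probability 1/6.
# From that assumption we generate samples and compute their means.
def sample_mean(sample_size):
    rolls = [random.randint(1, 6) for _ in range(sample_size)]
    return sum(rolls) / sample_size

for n in (10, 100, 1000):
    print(n, sample_mean(n))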

Sometimes people call this the study of probability and reserve the term statistics for going the opposite direction, the one we are usually interested in, i.e. from the observations to improved knowledge about the underlying reality. Another term that you will hear in this context is “inference.” We are trying to infer something about reality from the data.

What then should be the key takeaway about inference from the last few weeks? That for some realities we can learn a lot from even relatively small samples, while for others that is not possible. Making this statement more precise will be a big part of Uncertainty Wednesday going forward. But for now you may have an immediate allergic reaction to the implied circularity of the situation. We are trying to learn about reality from observations but we don’t know how much we can learn unless we make assumptions about which reality we are dealing with. Welcome to uncertainty.

How do we cut this circularity? We do so only over time through better explanations. Explanations connect our observations to the reality. We start with a pretty bad explanation, which results in poor inference and a cloudy view of reality. We will then often use that view of reality to make predictions and compare those to future observations (possibly from experiments). Discrepancies arise, which lead us to consider different explanations. Some of those will emerge as better. They make better predictions. They fit better with subsequent observations.

This is why explanations are central to understanding uncertainty (and central to all of science). Too often, however, treatments of uncertainty make all sorts of implicit assumptions. For instance, assumptions of normality or at a minimum of thin tails abound (even though we have seen that fat tails behave wildly differently). Even when the distribution assumptions are explicit, they are often not related to a specific explanation.

Monday, November 27, 2017 - 11:35am

Over the holiday weekend I did a lot of driving in a loaner Tesla (we have ordered one ourselves but it is, ahem, delayed). Well, actually, the car did a lot of the driving. I made extensive use of “Autopilot” features, including the smart cruise control and the autosteering. New cars by other automotive brands have similar capabilities. Long before getting to fully autonomous cars, I am blown away by how immediately transformative this experience is for highway driving.

For me there were two immediate and profound changes. The first has to do with being in stop and go traffic, which one often encounters on the highways to and from New York, such as heading out to JFK. I usually hate this, because the tedium of stop and go makes the time feel that much longer. Autopilot transformed this experience. Now some of that is the novelty effect for sure, but being able to fully engage in a conversation as opposed to having a big part of one’s brain tied up in not hitting the car in front of you (but also not having a huge gap), made the time go by much faster for me.

The second has to do with speeding. We drove up the Taconic Parkway, which is notorious for aggressive ticketing for speeding. Here too Autopilot was a game changer. I realized that speeding is something I do to keep myself busy while driving. And then of course occasionally I speed for the opposite reason, meaning going downhill and picking up speed while in conversation. Again I may be smitten with the novelty effect, but just letting the car do the work at a safe increment to the posted speed limit (a couple of MPH faster) made me perfectly relaxed.

Now at present Autopilot requires you to keep your hands on the steering wheel. You can actually take them off but then you will get a prompt at irregular intervals to put them back on, and if you don’t do that quickly enough, the Autopilot disengages for the rest of the trip! This happened to me a couple of times and it immediately felt like the loss of crucial functionality. (Hint: if you can pull over and hit “Park,” the car resets and you have Autopilot again.)

Following this weekend, I can’t wait to have Autopilot permanently. I hope that for the highways I drive frequently, it will soon no longer require having my hands on the steering wheel. Getting to and from places will never have been easier!

Wednesday, November 22, 2017 - 11:30am

Just imagine for a moment the world we can easily find ourselves in. You love my series of blog posts called “Uncertainty Wednesday,” but when you try to access it, instead of seeing the content you receive a notice from your ISP (the company you pay to access the Internet) that Continuations is not included in your current plan. You need to upgrade to a more expensive plan to see any content hosted on Tumblr.

This is not some kind of far-fetched hypothetical possibility. Without Net Neutrality that’s exactly what will happen over time. We do not need to speculate about that, we can see it in countries that do not have Net Neutrality. Here is a picture from a carrier in Portugal:

[Image: service bundles offered by a Portuguese mobile carrier]

Now you might say: but isn’t it good if this makes services cheaper to access? What if someone can only afford 5 Euros per month, here at least they are getting some access?

But asking the question this way is buying into the ISP’s argument that they should get to decide which services you can access. Any one of the bundles above effectively requires a certain amount of bandwidth from the carrier. It should absolutely be the case that a carrier can give you less bandwidth for less money. But then with whatever bandwidth you have purchased you should be able to do as you please.

I have explained here on Continuations extensively why Net Neutrality is required for last mile access due to the lack of competition. So I am not going to rehash that again, you can read it at your leisure and so far without having to pay extra.

Net Neutrality is once again under attack. Ajit Pai, Chairman of the FCC, has announced his plan to “restore internet freedom” which is, as it turns out not your freedom as a consumer to use the bandwidth you have purchased as you see fit, but rather the freedom of your ISP to charge you for whatever it wants to.

So if you don’t want to wind up with the Portugal situation from above, go ahead and call Congress. Thankfully the website Battle for the Net makes this super easy. Do it!

Monday, November 20, 2017 - 11:30am

I have mentioned here on Continuations before that we have been home schooling our children. The main reason for doing so is to give them plenty of time to pursue their interests. Interests that over time can deepen into passions and have the possibility of ultimately providing purpose. For our son Peter one of those interests has been fashion. He has been learning how to sketch, cut, sew, etc. since age 8 and now at 15 has put together his third collection. This one is Men’s Wear and for the first time he is making it available for sale.

I particularly like the Bomber Jacket above. I am definitely not cool enough though to wear the Kilt.

You can find more pieces from the collection at Peter’s web site Wenger Design.

Friday, November 17, 2017 - 5:05pm

One of the problems with a relatively open platform such as Twitter is impersonation. I can claim to be somebody else, upload their picture to my profile and tweet away. This is particularly problematic for public figures and businesses but anyone can be subject to impersonation. Years ago, Twitter decided that it would “verify” some accounts. 

While a good idea in principle, Twitter’s implementation sowed the seeds of the current mess. First, Twitter chose to go with a heavily designed checkmark that looks like a badge. Second, this badge appeared not just on a person’s profile but prominently in all timeline views as well. Third, the rollout appeared geared towards Twitter users who were somehow cool or in-the-know. Fourth, Twitter seemingly randomly rejected some verification requests while accepting others.

The net result of all of these mistakes was that the verified checkmark became an “official Twitter” badge. Instead of simply indicating something about the account’s identity it became a stamp of approval. Twitter doubled down on that meaning when it removed the “verified” check from some accounts over their content, most notably in January 2016 with Milo Yiannopoulos.

Just now Twitter has announced a further doubling down on this ridiculously untenable position. Twitter will now deverify accounts that violate its harassment rules. This is a terrible idea for two reasons: First, it puts Twitter deeper into content policing in a way that’s completely unmanageable (e.g., what about the account of someone who is well behaved on Twitter but awful off-Twitter?). Second, it defeats the original purpose of verification. Is an account not verified because it is an impostor or because Twitter deverified it?

What should Twitter have done instead? Here is what I believe a reasonable approach would have been. First, instead of a beautifully designed badge, have a simple “Verified” text on a person’s profile. Second, do not include this in timeline views. It is super easy from any tweet to click through to the profile of the account. Third, link the “verified” text in the profile to some information such as the date of the verification and its basis. For instance, “Albert Wenger - verified October 11, 2012 based on submitted documents.”

This type of identity-only verification would be quite scalable using third party services that Twitter could contract for (and users could pay for if necessary to help defray cost). Twitter could also allow users to bring their own identity to the service, including from decentralized systems such as Blockstack. It would also make it easy for people to report an account strictly for impersonation. Harassment on the platform is a real problem, but it is a separate problem and one that should be addressed by different means.
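For illustration only, here is roughly what such a minimal identity-only verification record could look like; the field names are my own invention, not anything Twitter has specified:

from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationRecord:
    # Hypothetical identity-only verification record, shown on the profile
    # page only (not in timelines), linked from a plain "Verified" label.
    handle: str
    verified_on: date
    basis: str        # e.g. "submitted documents" or "third-party identity service"
    verifier: str     # which service performed the check

record = VerificationRecord(
    handle="albertwenger",
    verified_on=date(2012, 10, 11),
    basis="submitted documents",
    verifier="example third-party identity service",
)
print(f"{record.handle} - verified {record.verified_on} based on {record.basis}")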

Wednesday, November 15, 2017 - 11:30am

Today’s Uncertainty Wednesday will be quite short as I am super swamped. Last week I showed some code and an initial graph for sample means of size 100 from a Cauchy distribution. Here is a plot (narrowed down to the -25 to +25 range again) for sample size 10:

And here is one for sample size 1,000:

Yup. They look essentially identical. As it turns out this is not an accident. The sample mean of the Cauchy distribution itself has a Cauchy distribution. And it has the same shape, independent of how big we make the sample!

There is no convergence here. This is radically different from what we encountered with the sample mean for dice rolling. There we saw the sample mean following a normal distribution that converged ever tighter around the expected value as we increased the sample size.
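If you want to see the non-convergence for yourself, here is a short sketch (building on last week's SciPy code) that compares the spread of Cauchy sample means across sample sizes:

from scipy.stats import cauchy
import numpy as np

# For each sample size, draw 10,000 samples and measure the spread of their
# means via the interquartile range (which exists even when the variance does
# not). For a Cauchy distribution the spread does not shrink as samples grow.
for size in (10, 100, 1000):
    means = cauchy.rvs(size=(10_000, size), random_state=123).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    print(size, round(q75 - q25, 3))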

Next week we will look at the takeaway from all of this. Why does the sample mean for some distributions (e.g. uniform) follow a normal distribution and converge but not for others? And, most importantly, what does that imply for what we can learn from data that we observe?

Monday, November 13, 2017 - 11:30am

The latest Senate version of the “Tax Cuts and Jobs Act” has a stab in the eye for startups. It proposes to tax certain stock options and RSUs at the time of vesting. An earlier House version also contained this provision, but the House removed it. 

Startups are a key part of innovation. Often joining a startup means accepting a lower cash compensation for a higher potential upside. This upside usually comes in the form of stock options or other stock based compensation such as restricted stock units (RSUs). For these to be effective means of offsetting lower current compensation, they need to provide upside with no downside.

In particular, an employee should not owe taxes on the appreciation of the capital until they actually have liquidity in the asset. Everything else runs the risk of having to pay taxes on paper gains that subsequently evaporate. This problematic situation exists today already for many employees who leave companies and have to exercise their options, and there have been legal efforts under way to change that.

The Senate version of the bill does the exact opposite. It now moves the tax payment for options to the point of vesting. So imagine working for a highly successful startup. At each vesting date you would owe a tax payment on the difference between your option strike price and the then-current fair market value. These could be substantial payments! Not only do you not have the money to pay those unless you are already wealthy, but you also have no idea what those shares will eventually be worth. It could easily be much less again. Possibly zero! We have seen plenty of companies that had been valued in the 100s of millions and some in the billions of dollars that went to 0 without ever achieving liquidity along the way.
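To make the problem concrete, here is a back-of-the-envelope calculation; the share count, prices, and tax rate are all made-up assumptions:

# Hypothetical illustration: 100,000 options with a $0.50 strike vesting
# while the fair market value has risen to $10.00 per share, taxed as
# ordinary income at an assumed 40% combined rate.
options_vesting = 100_000
strike = 0.50
fair_market_value = 10.00
tax_rate = 0.40   # assumed combined federal/state rate

paper_gain = options_vesting * (fair_market_value - strike)
tax_due_at_vesting = paper_gain * tax_rate
print(f"paper gain: ${paper_gain:,.0f}")            # $950,000 on paper only
print(f"tax due now: ${tax_due_at_vesting:,.0f}")   # $380,000 owed with no liquidity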

Now it is somewhat unclear whether this would affect all options or only so-called non-qualified options. If it only affects the latter, then one possible way to fix the issue would be to dramatically increase or entirely remove the cap on the amount of equity that can be awarded in an incentive stock option (it is currently $100,000, which is why many executive grants wind up being non-qualified).

I don’t know if this tax bill has a chance of passing. I suspect that it does as it appears less controversial than the healthcare bill. If you want startups to continue to be able to readily use deferred equity compensation then I encourage you to call your Senators right away and let them know you are opposed to Section III(H)(1) of the “Tax Cuts and Jobs Act.”

Friday, November 10, 2017 - 11:30am

Two years ago I wrote a series of blog posts on board effectiveness tips. I am adding a new one today. If you have a large board, make sure you have a lead director. “What is a lead director?” you may ask. Informally speaking, it is the director who makes sure that the board reaches consensus on important issues. Some companies formally elect a director to the lead role, but this is uncommon for startups.

Startups that have raised multiple rounds of financing can wind up with large boards with three, four or more investors on them. In these cases a dysfunction that I have observed more than once is that each investor waits for some other investor to take the lead. And as a result key decisions are either delayed or not made at all, often with dire consequences. This usually happens around really important and difficult decisions, such as replacing a member of the management team, changing strategy, doing a down round, or accepting or rejecting an unsolicited M&A offer (especially if that offer is not super attractive).

So if you have a large board, ask yourself who the lead director is. Who will you go to in such a situation to make sure your board members are engaged? And when you go to them, will they have the time and inclination to act as lead director? If you can’t answer that, I highly recommend you find someone among your board members before a crisis arises.

Wednesday, November 8, 2017 - 11:30am

In today’s Uncertainty Wednesday we are putting some of the ideas from the last few weeks together: we are looking at the behavior of the sample mean of a fat tailed distribution. To do this we will again use a bit of Python code. Unlike our first sample mean example where we looked at the roll of a die, we will need some help here to draw samples from a more complicated distribution. Thankfully the Python ecosystem has the wonderful SciPy libraries, which if you don’t know already you should check out in any case.

So here’s the code for drawing 100,000 samples of size 100 each from the Cauchy distribution.

from scipy.stats import cauchy
import numpy as np

size = 100       # draws per sample
runs = 100000    # number of samples
digits = 1       # round sample means to 1 digit to build a histogram
dist = {}
for run in range(0, runs):
    r = cauchy.rvs(size=size)    # one sample of 100 Cauchy draws
    mean = np.mean(r)            # its sample mean
    rounded = round(mean, digits)
    if rounded in dist:
        dist[rounded] += 1
    else:
        dist[rounded] = 1

for mean in sorted(dist):
    print("%s: %s" % (mean, dist[mean]))

I am rounding everything to only 1 digit to produce a histogram. And here is a chart from a run of the above program.

What is going on? There seems to be a spike around 0 which is where the distribution is centered, but there also are outcomes where the sample mean from 100 draws is greater than 25,000 and others where it is smaller than -75,000! And pretty much all the values along the way seem to have occurred also.

Let’s zoom in on the spike to see its shape better. Here are just the counts for sample means between -25 and +25:

This looks very much like a chart of the Cauchy distribution itself. Remember that when we did this for the rolls of a die (a uniform distribution) we observed that the distribution of the sample mean not only looked normally distributed but that the distribution became tighter as we increased the size of the sample.

Next Wednesday we will try the same here. We will look at both smaller and larger sample sizes to see what the effect is here.

Monday, November 6, 2017 - 11:30am

There was an interesting post on the Y Combinator blog by Ramon Recuero about the evolution of blockchain protocols through forking and copying. The post does not mention the alternative possibility of binding voting as a mechanism for the evolution of blockchains. There are several projects, including the troubled Tezos, where the blockchain protocol will be able to evolve via on-chain voting.

Voting is an important mechanism to be explored as an alternative to forking. In his famous treatise Exit, Voice, and Loyalty, Albert Hirschman describes how members of an organization or consumers of a product/service can respond to a deterioration in quality. They can either choose to exercise voice, that is speak up and demand changes, or they can exit and join a different organization or use a different product/service.

For blockchains, forking is the native implementation of “exit” but voting will be the way to achieve “voice.” Change is most effectively accomplished when both mechanisms are available. Forking (exit) is very disruptive and should be chosen only as a means of last resort after voting (voice) has been tried and failed. This is why I am excited to see projects that are working to implement on-chain voting for protocol evolution. This is an important missing capability.
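As a toy illustration only (not how Tezos or any other project actually implements it), stake-weighted on-chain voting on an upgrade can be as simple as:

# Toy sketch of stake-weighted on-chain voting for a protocol upgrade.
# Balances, votes, and the 80% threshold are all made-up assumptions.
def upgrade_approved(votes, balances, threshold=0.80):
    voting_stake = sum(balances[addr] for addr in votes)
    yes_stake = sum(balances[addr] for addr, vote in votes.items() if vote)
    return voting_stake > 0 and yes_stake / voting_stake >= threshold

balances = {"addr1": 600, "addr2": 300, "addr3": 100}
votes = {"addr1": True, "addr2": True, "addr3": False}
print(upgrade_approved(votes, balances))  # True: 900 of 1,000 staked tokens voted yes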

Thursday, November 2, 2017 - 7:30am

Super short Uncertainty Wednesday post today as I have a crazy busy week featuring our annual meeting (where we meet with our Limited Partners). The last few weeks we have been digging into sample means and expected values. We saw some surprising things already, such as random variables that do not have an expected value.

During these posts I have sometimes used the terms random variable and probability distribution interchangeably, despite previously having given two separate definitions (see links). So what gives? Technically they are different concepts but for some commonly used probability distributions, such as the normal distribution, all random variables based on the distribution differ only in one or two parameters (for the normal distribution: mean and standard deviation). 

These differences turn out to be boring and so using the terms interchangeably seems OK. Put differently, often the difference between distributions is more important than the difference between the same distribution with different parameters. For instance, all random variables based on Cauchy distributions (fat tailed) are very different from all random variables based on Normal distributions. That difference is huge compared to the difference between normally distributed random variables.
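One way to see how large that gap is: compare tail probabilities for two normal random variables with different parameters against a standard Cauchy. A quick SciPy sketch:

from scipy.stats import norm, cauchy

# P(|X| > x) for a standard normal, a normal with a larger standard deviation,
# and a standard Cauchy. The two normals stay in the same universe; the
# Cauchy's tails barely decline by comparison.
for x in (2, 5, 10):
    print(f"x={x}: "
          f"normal(0,1)={2 * norm.sf(x):.2e}  "
          f"normal(0,1.5)={2 * norm.sf(x, scale=1.5):.2e}  "
          f"cauchy={2 * cauchy.sf(x):.2e}")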

Important caveat: there are distributions for which changes in the parameter make a big difference, such as the power law distribution. In those cases random variables based on the same distribution will differ a lot from each other.

Monday, October 30, 2017 - 11:30am

Upfront disclosure: I am long both Bitcoin and Ethereum (personally and also indirectly via USV).

In preparation for the annual meeting at USV, we have been putting together some slides on the cryptocurrency market. Looking back at the last year I was most surprised by the run-up in Ethereum as part of the ICO craze. I did not see that coming to nearly the extent that it did. While the ICO is an important innovation, there has definitely been an excess both in the amount of money raised for some individual projects and in the number of projects raising money that either have very little chance of succeeding or are outright scams.

So where are we today? At least temporarily there seems to be a slowdown in ICOs. This could turn out to just be a lull before more activity resumes, but it could also be a welcome return to more sanity (if the latter, there is likely going to be an overcorrection). In either case Ethereum faces a strong headwind not only from this change in sentiment but also from relatively costly and slow on-chain computation. The bull case for Ethereum is that sometime in 2018 we will see a couple of Ethereum based projects launch successfully and get broad adoption *AND* progress is made on Ethereum scaling (either directly or through projects such as Raiden or Plasma). The bear case is that at least one, or possibly both, of these don’t happen.

How about Bitcoin? Oddly I think that Bitcoin continues to be misunderstood by many people in the cryptocurrency space who want it to be more than it has to be for it to succeed. It is one of those cases where the more you know, the more you are likely to overthink it. Yes, Bitcoin has all sorts of drawbacks as a blockchain, but it is the one cryptocurrency with a widely understood use case: censorship resistant store of wealth. Fiat currencies, precious metals and real estate (including land) all have more government control and/or are more difficult to move around and transact in than Bitcoin. With everything crazy that’s going on in the world politically, the demand for censorship resistant wealth storage is high and growing.

Bitcoin has issues resulting from mining concentration and the attendant attempts to create additional wealth ex nihilo through forks. The large amounts of money involved have made it nearly impossible to have rational discussions on questions of technical merit. The bull case for Bitcoin is therefore easier than the one for Ethereum: all it takes is for the current forking noise to die back down and one chain to continue to be recognized as Bitcoin (above all other contenders). As a side note: Should Bitcoin’s self-inflicted troubles mount, it will make Zcash and Monero a lot more attractive.

In summary then: for the time being I am cautiously bullish on Bitcoin and at best neutral on Ethereum. As always though please don’t take this as investment advice and keep in mind that all cryptocurrencies continue to be highly risky. 

Thursday, October 26, 2017 - 5:05pm

I last wrote about Blockstack early in 2016 when the team announced the goals of the project. Since then a lot of progress has been made. The team has released a browser which is now available for Mac, Windows and Linux. They have published several papers, including a couple in peer reviewed journals. The latest paper introduces the Blockstack token. In the meantime people have started to build applications using the BlockstackJS framework. If you are interested in writing an application you should check out the available bounties.

Here is a new video in which Ryan and Muneeb explain the Blockstack project:

If you are interested based on this update, the Blockstack team has provided a lot of information about the upcoming Blockstack token sale.

Wednesday, October 25, 2017 - 11:30am

So last Uncertainty Wednesday we encountered a random variable that does not have an expected value. Now if you read that post you might ask, was this just an artificially constructed example or do random variables like that actually occur? Well, the example I gave was an extreme form of a power law, and such distributions are increasingly found in the economy as we transition to a digital world. Due to network effects, the winning company in a space is many times the size of the runner-up and there is a long tail of smaller competitors. The distribution of views on a site such as YouTube similarly follows a power law. So, increasingly, does the wealth distribution.

Here is another example of a distribution that at first glance looks like it ought to have an expected value:

[Image: plot of the Cauchy distribution]

Just eyeballing this it would seem that the expected value is 0. But that’s, well, wrong. In fact, this distribution, known as the Cauchy distribution, does not have an expected value (it does not have a variance either!).

Now you might have noticed that this looks a lot like the normal distribution, which we had encountered earlier. That had a well-defined expected value and variance, so what gives? Well, consider the following graph, which compares the two distributions:

[Image: the Cauchy distribution compared with the normal distribution]

You can see that the normal distribution has more probability concentrated right around 0 and then declines very rapidly. The Cauchy distribution by contrast declines less rapidly in probability in the tails. It is an example of a so-called fat-tailed distribution.

For the Cauchy distribution, if we try to form the expected value for outcomes above 0, the infinite sum goes to positive infinity, and for outcomes below 0 it goes to negative infinity. The two do not offset each other; instead, the sum is not defined. So this is what both last Uncertainty Wednesday’s example and today’s example have in common: extreme events have sufficiently high probability that the expected value is not defined. Next Wednesday we will see some practical implications of this for observed sample means and what we can learn from them.
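For the mathematically inclined, here is a quick numerical illustration of that divergence for the standard Cauchy; the formula is just the antiderivative of x times the density:

import math

# Contribution of outcomes in [0, b] to the Cauchy "expected value":
#   integral of x / (pi * (1 + x^2)) dx from 0 to b = ln(1 + b^2) / (2*pi).
# It grows without bound as b increases instead of converging.
for b in (10, 1_000, 100_000, 10_000_000):
    partial = math.log(1 + b**2) / (2 * math.pi)
    print(b, round(partial, 3))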

Albert Wenger is a partner at Union Square Ventures (USV), a New York-based early stage VC firm focused on investing in disruptive networks. USV portfolio companies include: Twitter, Tumblr, Foursquare, Etsy, Kickstarter and Shapeways. Before joining USV, Albert was the president of del.icio.us through the company’s sale to Yahoo. He previously founded or co-founded five companies, including a management consulting firm (in Germany), a hosted data analytics company, a technology subsidiary for Telebanc (now E*Tradebank), an early stage investment firm, and most recently (with his wife), DailyLit, a service for reading books by email or RSS.