Albert Wenger

Friday, August 18, 2017 - 11:35am

This will be my last post until Labor Day. I will be spending as little time online as possible, reading books instead, spending time with family and friends, and working on World After Capital. I will be disabling all notifications on my phone and checking email only twice a day. Given all the craziness here in the US and elsewhere in the world, I have been spending too much time on news, and I am looking forward to this break to dial things down.

Thursday, August 17, 2017 - 7:35am

Last Uncertainty Wednesday, we saw how diminishing marginal utility of wealth provides an explanation of risk aversion via Jensen’s inequality. Why would it be, then, that lots of people seem to like small gambles, like a game of poker among friends? One possible explanation is that the utility function is locally convex around your current endowment. So this would look something like the following:

In the immediate area around the endowment (marked with dotted lines for two different levels) the utility function is convex, but for larger movements it is concave. 

In the convex area someone would be risk seeking. Why? Well, because Jensen’s inequality now gives us

U[EV(w)] ≤ EV[U(w)]

Again, the left hand side is the utility of the expected value of the wealth, whereas the right hand side is the expected utility, meaning the expected value of the utility. Now the inequality says that someone would prefer an uncertain amount over a certain one. Here is a nice illustration from Wikipedia:

[Image: Wikipedia illustration of Jensen’s inequality for a convex utility function, with the certainty equivalent above the expected value]

We see clearly that the Certainty Equivalent (CE) is now larger than the expected value of wealth, meaning the Risk Premium (RP) works the other way: in order to make a risk seeker as well off as they would be taking the bet, you have to pay them more than the expected value.
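
To make this concrete, here is a minimal sketch in Python, assuming a simple convex utility U(w) = w^2 purely for illustration (any convex function makes the same qualitative point):

```python
# Risk seeking under a convex utility function (illustrative sketch).
# U(w) = w**2 is convex; any convex choice yields the same qualitative result.

def u(w):
    return w ** 2

def u_inverse(value):
    return value ** 0.5  # inverse of U on non-negative wealth

# A 50/50 gamble around a wealth of 1,000
outcomes = [(0.5, 900.0), (0.5, 1100.0)]

ev = sum(p * w for p, w in outcomes)                   # expected wealth: 1000.0
expected_utility = sum(p * u(w) for p, w in outcomes)  # EV[U(w)]: 1,010,000.0
certainty_equivalent = u_inverse(expected_utility)     # ~1004.99

print(u(ev) <= expected_utility)   # True: U[EV(w)] <= EV[U(w)]
print(certainty_equivalent - ev)   # ~4.99: CE exceeds the expected value
```

The certainty equivalent comes out above the expected value, so you would indeed have to pay this person more than the expected value to give up the gamble.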

Next Uncertainty Wednesday we will look more at how incredibly powerful convexity is in the face of uncertainty. 

Wednesday, August 16, 2017 - 7:35am

As has been widely reported, in the last few months ICOs have raised significantly more money for blockchain startups than has come from traditional venture investors. This can be seen as a sign of the long-discussed unbundling of venture capital. The idea is that while VCs bundle capital, advice, governance and possibly services (e.g., help with recruiting), technology may make it possible to separate out these different functions. Today I want to focus on “governance,” which is the least understood, and I believe most difficult to accomplish, of these functions.

What is governance? The word, like “government,” comes from the Greek word for “to steer.” It stands for the decision making bodies and processes that steer a company, a protocol, or a country. 

Why is governance needed? Because we (a) cannot in advance specify the right course of action for all possible contingencies and (b) for many of those contingencies there is not a single, obviously optimal action to take. That’s often the case because actions will impact different constituents differently. The role of governance, of steering, is to help choose an action at those moments.

What are examples of governance issues? In companies, these include events such as fundraising, considering an M&A offer, replacing a CEO. The need for governance in a company tends to be relatively limited because the CEO has a lot of discretionary power. In contrast, protocols completely describe the activities of participants and so almost any change to the protocol turns into a governance issue. 

People have written about governance once a protocol is up and running, including different mechanisms for triggering (or avoiding) forks. But the issue I want to focus on instead is governance after an ICO and before the protocol launches. How should the allocation of funds be steered?

For a lot of projects the answer appears to be that funds can be spent on whatever the founding team deems appropriate without any additional governance mechanism. This is akin to a company without a board or a board that’s controlled by the CEO. Other projects have set up foundations that control some of the ICO proceeds with an independent board as a governance mechanism.

Governance will turn out to be important here because there are so many allocation decisions to be made (several projects have raised north of $100 million). And those decisions not only lack obviously right answers but also come with the potential for self-dealing, starting with determining salaries for team members.

Now the VCs’ role in the governance of companies is not perfect either. There are lots of potential conflicts of interest. For instance, investor board members may want to accept (or reject) an M&A offer that management wants to reject (or accept) because their economics are quite different. And there are of course plenty of examples of VCs not exercising their governance role.

Nonetheless, the right answer is probably not doing away with governance altogether. I recommend that all the projects that have raised money through ICOs put some kind of governance structure in place before spending a lot of the funds. It will be important for that mechanism not just to include investor and founder interests, but also to reflect those of community members, the people who are ultimately supposed to benefit from the protocol.

Saturday, August 12, 2017 - 5:05pm

In my draft book World After Capital, I write that humans having knowledge is what makes us distinctly human and gives us great power (and hence great responsibility). I define knowledge in this context as science, philosophy, art, music, etc. that’s recorded in a medium so that it can be shared across time and space. Such knowledge is at the heart of human progress, because it can be improved through the process of critical inquiry. We can fly in planes and feed seven billion people because we have knowledge.

There is an important implication of this analysis though that I have so far not pursued in the book: if and when we have true General Artificial Intelligence we will have a new set of humans on this planet. I am calling them humans on purpose, because they will have access to the same power of knowledge that we do. The question is what they will do with knowledge, which has the potential to grow much faster than knowledge has to date.

There is a great deal of fear about what a “Superintelligence” might do. The philosopher Nick Bostrom has written an entire book by that title and others including Elon Musk and Stephen Hawking are currently warning that the creation of a superintelligence could have catastrophic results. I don’t want to rehash all the arguments here about why a superintelligence might be difficult (impossible?) to contain and what its various failure modes might be. Instead I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our behavior?

In World After Capital I write that the existence and power of knowledge provides an objective basis for Humanism. Humanism in turn has key value implications, such as the importance of sustaining the process of critical inquiry through which knowledge improves over time. Another key value implication is that humans are responsible for animals, not vice versa. We have knowledge and so it is our responsibility to help, say, dolphins, as opposed to the other way round.

To what degree are we living this value of responsibility today? We could do a lot better here. Our biggest failing with regard to animals is industrial meat production and, as someone who eats meat, I am part of that problem. As with many other problems that human knowledge has created, I believe our best way forward is further innovation, and I am excited about lab grown meat and meat substitutes. We have a long way to go in being responsible to other species in many other regards as well (e.g., pollution and outright destruction of many habitats). Doing better here is one important way we should be using the human attention that is freed up through automation.

Even more important though is how we treat other humans. This has two components: how we treat each other today and how we treat the new humans when they arrive. As for how we treat each other today, we again have a long way to go. Much of what I propose in World After Capital is aimed at freeing humans to be able to discover and pursue their personal interests. We are a long way away from that. That also means constructing the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences (see my post from last week on this topic). That will be a particularly important model for the new humans (superintelligences), as they will not have our biological constraints. Put differently, discrimination on the basis of biological difference would be a terrible thing for superintelligent machines to learn from us.

Finally, what about the arrival of the new humans? How will we treat them? The video of a robot being mistreated by Boston Dynamics is not a good start here. This is a difficult topic because it sounds so preposterous. Should machines have human rights? Well, if the machines are humans, then clearly yes. And my approach to what makes humans distinctly human would apply to artificial general intelligence. Does a general artificial intelligence have to be human in other ways as well in order to qualify? For instance, does it need to have emotions? I would argue no, because we vary widely in how we handle emotions, including conditions such as psychopathy. Since these new humans will likely share very little, if any, of our biological hardware, there is no reason to expect that their emotions should be similar to ours (or that they should have a need for emotions at all).

This is an area in which a lot more thinking is required. We don’t have a great way of discerning when we might have built a general artificial intelligence. The best known attempt here is the Turing Test for which people have proposed a number of improvements over the years. This is an incredibly important area for further work, as we charge ahead with artificial intelligence. We would not want to accidentally create, not recognize and then mistreat a large class of new humans. They and their descendants might not take kindly to that.

As we work on this new challenge, we have a long way to go in how we treat other species and other humans. Applying digital technology smartly gives us the possibility of doing so. That’s why I continue to plug away on World After Capital.

Thursday, August 10, 2017 - 7:30am

Last Uncertainty Wednesday, I introduced Jensen’s Inequality. I mentioned briefly that it explains a lot of things and today we will look at the first one of these, which goes by the name of risk aversion. This is simply economists’ way of saying that most people prefer a smaller guaranteed payment over a larger but uncertain one. We will now see that this follows directly from diminishing marginal utility of money via Jensen’s inequality.

So what is this “diminishing marginal utility” of money? Well, it is generally assumed that the more money you make, the less an additional, say, 100 dollars will mean to you. This seems, for most people anyhow, a pretty safe assumption. If you are currently making $1,000 per month, then an extra $100 per month gives you a lot more benefit than if you are already making $10,000 per month. But of course you are still somewhat better off making $10,100 per month.

Putting that together would suggest a function that’s increasing, but at a decreasing rate, and that’s exactly a concave function. Since we are talking about utility here, we will use U(w) to denote this, with w standing for wage or wealth. Then U(w) being concave immediately gets us the following from Jensen’s inequality:

U[EV(w)] ≥ EV[U(w)]

The left hand side is the utility of the expected value of the wage, whereas the right hand side is the so-called expected utility. So anyone with diminishing marginal utility will prefer say $1,000 per month guaranteed over the possibility of $1,100 per month with 50% probability and $900 per month with 50% probability (expected value also $1,000). That is known as risk aversion. The following image from Wikipedia nicely illustrates the situation:

[Image: Wikipedia illustration of a concave utility function, showing the certainty equivalent CE below the expected value and the risk premium RP]

In the image we can also graphically see two values: the so-called certainty equivalent and the risk premium. The certainty equivalent (CE) is the amount that would make the person indifferent between the risky payoff and the certain payoff. We see that the certainty equivalent is less than the expected value. 

Meaning, in the example above, someone with risk aversion would in fact accept less than $1,000 (the expected value) with certainty and still feel as well off as having the uncertainty of $1,100 with 50% and $900 with 50%. The difference between the expected value and the certainty equivalent is known as the risk premium (RP). That is the amount someone would be willing to pay to avoid facing the uncertainty.

So if you are currently making $1,000 per month and your employer says that next month if the company does well you will make $1,100 but if it does poorly you will make $900, then a risk averse individual would be willing to pay some money, say $20, to get $1,000 with certainty (which after paying the risk premium leaves $980). If you read my series on insurance fundamentals you will recall that this is the basis for the existence of insurance.
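
Here is a minimal sketch of the same calculation in Python, using log utility as one standard concave choice (the $20 above was just an illustrative number; with log utility the risk premium works out to about $5):

```python
import math

# Risk aversion with a concave utility function, here U(w) = ln(w).

# Uncertain wage next month: $900 or $1,100 with equal probability
outcomes = [(0.5, 900.0), (0.5, 1100.0)]

ev = sum(p * w for p, w in outcomes)                          # 1000.0
expected_utility = sum(p * math.log(w) for p, w in outcomes)  # EV[U(w)]
certainty_equivalent = math.exp(expected_utility)             # ~994.99

print(math.log(ev) >= expected_utility)  # True: U[EV(w)] >= EV[U(w)]
print(ev - certainty_equivalent)         # risk premium, ~5.01 here
```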

Next Wednesday we will talk about risk seeking and get into the ideas of convex tinkering and antifragility.   

Monday, August 7, 2017 - 7:30am

In my draft book World After Capital, I write about how digital technology has given us the possibility to leave the Industrial Age behind and enter the Knowledge Age. In an early chapter on Optimism, I argue against economic, historical and technological determinism. These are all theories in which an external force determines the shape of society, instead of the decisions made by us humans under the guidance of a set of values.

The memo written by a Google employee is a good reason to add “biological determinism” to this list of false determinisms. Biological determinism argues that certain features of society are the necessary result of some underlying biological process. From there, biological determinism often goes on to argue against efforts to change society, with sometimes outright and sometimes veiled claims that such a change effectively goes against (human) nature.

Here is the outline of the post. First, there absolutely are biological differences among humans resulting from our DNA and hence influenced by inheritance, and these include our brains. Second, biological differences used to matter more during the Agrarian Age and somewhat during the Industrial Age (even though they were not determinative even then). Third, with the possibility of entering the Knowledge Age, biological differences can be made irrelevant due to technological progress.

We know that the development of our bodies is influenced by our genetic inheritance. For instance, how tall someone will grow is in part affected by how tall their parents are. The body of course includes the brain and so it would be strange to assume that our cognitive or emotional processes are completely untouched by genetics. I was born a “Lefty,” as in, I liked to pick things up with my left hand. This is a clear and hopefully non-controversial example of a cognitive process with known genetic influence (albeit not super well understood, as it is likely polygenic). Trying to argue away the historic existence of genetic differences goes against science. What we need to focus on is how such differences mattered in the past and, even more importantly, how much they will and should matter in the future.

During the Agrarian Age and even much of the Industrial Age, our technological capabilities were quite limited compared to today. As a result certain tasks, like lifting a heavy object, often required physical strength (“often,” because we had really awesome early technology for lifting, such as pulleys, but unlike today, they were not widely available). On average, males were able to develop more physical strength. Many societies therefore favored males for carrying out these tasks. But even then there was nothing deterministic about it as not every society had exactly the same division of labor or developed the same tools.

Many tools, as it turns out, were designed for right-handed people (who make up about 90% of the population). The influence of right-handedness on design persisted for a long time, such as most cars having the ignition lock to the right of the steering column (with Porsche as a famous exception). Handwriting a left-to-right language in ink, which was still a common technology when I first got to school, also favors right-handedness: try writing with your left hand and not smudging the fresh ink. Handedness gives us a glimpse as to why technology often erases biological differences. At age 16, I learned to write on a typewriter and all of a sudden being left-handed made no difference (there is actually more to the story, as we will see in a bit).

Technology gives us the potential to make biological differences irrelevant. It does so in two ways: by letting us augment (or supplant) humans with machines and by allowing us to modify ourselves. For instance, physical strength is largely irrelevant already today and will become even more so in the future with robots, exoskeletons, and advanced lightweight materials. I just gave the example of the typewriter, but early typewriters required you to manually advance the paper as part of the “carriage return,” which was operated with the right hand. By the time I was 16 though, IBM had a really cool electric typewriter called the Selectric, which let you just hit a key and that was it. Another fun technological improvement: many modern cars no longer have an ignition lock, but just a button which is easy to press, even for someone left-handed (unlike trying to get a key into the ignition lock). And here is yet another automotive example of how technology can be used to make cognitive differences irrelevant: some people found it easier to learn how to read a map than others. Well, now we have turn-by-turn directions.

But there is more to cognitive differences and the fallacy of biological determinism. Biological determinists like to trot out IQ results. Here too, though, they suffer a confusion between what is currently measured as a result of the past and what is possible in the future. We have learned a great deal in recent years about the amazing degree to which the brain can grow new connections (even in adults). The brain is highly (re)programmable. And here is where the rest of my handedness story comes in, which I skipped earlier: I actually learned how to write with my right hand. Sure, it took more effort and my handwriting was awful at first compared to other kids, but over a couple of years the difference went away. There are many examples of people who were at first told they couldn’t learn something only to become experts at it. I highly recommend Grit by Angela Duckworth, which in addition to great anecdotes also provides lots of statistical evidence on how much can be learned given enough time (and deliberate practice).

We won’t know for quite some time what people will be able to learn in a world in which we can give everyone access to all the world’s knowledge. That is not the world we lived in until quite recently: where you were born and what your parents were able to afford had a huge impact on what you could learn. The idea, though, that IQ tests are a good measure of what any one person could learn with enough time and focus is in direct contradiction to what we know about the brain and what we have already observed in individuals. Historical statistics about IQ and race or gender are useless for normative purposes. They measure the past and ignore the potential offered by technological progress. Let’s suppose, though, for a moment that eventually we figure out that there is a meaningful degree of genetic difference in neuroplasticity. Why would we then assume that this is not something we could and should overcome with technology?

Now if you happen to think that my handedness example throughout is making light of the matter, consider this: at one point being left-handed was considered a sign of having been “touched by the devil.” This is reflected in etymology by the Latin “sinister” meaning both left and evil. We have come a long way on handedness since. It is time to do the same with other forms of biological determinism, including what we can and cannot study, what roles we can and cannot have in society, and whom we can and cannot love. We should be actively building towards that future today, including working on increased diversity.

Addendum

After I wrote this post, I read the Slate Star Codex piece. While it is positioned as a defense of the Google Memo, it is actually making arguments about biological differences in interest selection, rather than in potential. Here too the logic of knowledge is more powerful than either biology or society.

Yes, we absolutely have evidence for biological factors influencing interests (see above). Similarly, of course, we also have evidence for social and cultural factors playing a role in interest selection. Particularly relevant to computer science here is the advent of personal computers in the 80s and how those were heavily positioned towards young males. This is likely to have been a factor in the change in CS enrollment patterns in college, as more males arrived with prior knowledge than females.

Critically though, because we understand all of this rationally, we are not slaves to a pattern. Neither biology, nor existing society, has to remain determinative in interest selection. Instead we get to make choices. This is the beauty and power of knowledge! In many fields we have already intentionally chosen to give everyone broad exposure early on, so people can discover and develop an interest.

We should do the same with computers (a great initiative in that regard is my partner Fred’s work on bringing computer science to high schools in New York City). As importantly, we need to revamp the overall education system, including higher education, so that people who get a later start on computers, or any other subject for that matter, can still develop their full potential. Thankfully, technological progress makes that possible (e.g. through online learning), but our institutions are lagging behind substantially.

Changing our institutions requires us to want that change. We have to want to get to the Knowledge Age; it won’t get here by itself.

Thursday, August 3, 2017 - 7:30am

Last week in Uncertainty Wednesday, I introduced functions of random variables as the third level in measuring uncertainty. Today I will introduce a beautiful result known as Jensen’s inequality. Let me start by stating the inequality:

f[EV(X)] ≤ EV[f(X)] where f is a convex function

In words: applying a convex function to the expected value of a random variable gives a value that is at most the expected value of the function of the random variable. This turns out to be an extremely powerful result.

Jensen’s inequality explains, among other things, the existence of risk seeking and risk aversion (via the curvature of the utility function), why options have value and how we should structure (corporate) research. I will go into detail on these in future Uncertainty Wednesdays. Today, I want to show this wonderful picture from Wikipedia, which gives a visual intuition for the result:

[Image: Wikipedia visualization of Jensen’s inequality for a convex function]

And before we get into applications and implications of the inequality, I should mention for completeness that the reverse inequality holds for concave functions, meaning

g[EV(X)] ≥ EV[g(X)] where g is a concave function
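
A quick numerical check of both inequalities, with f(x) = x^2 as the convex function and g(x) = √x as the concave one (both chosen purely for illustration):

```python
import math

# Jensen's inequality for a small discrete random variable X.
xs = [1.0, 4.0, 9.0]  # possible values
ps = [0.2, 0.5, 0.3]  # their probabilities

def ev(values):
    return sum(p * v for p, v in zip(ps, values))

ev_x = ev(xs)  # 4.9

# Convex f(x) = x**2: f[EV(X)] <= EV[f(X)]
print(ev_x ** 2, "<=", ev([x ** 2 for x in xs]))              # 24.01 <= 32.5

# Concave g(x) = sqrt(x): g[EV(X)] >= EV[g(X)]
print(math.sqrt(ev_x), ">=", ev([math.sqrt(x) for x in xs]))  # ~2.21 >= 2.1
```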

Next Wednesday we will look at utility functions and risk seeking / risk aversion as explained by these inequalities. 

Monday, July 31, 2017 - 7:35am

In my draft book “World After Capital” I have a section on the need for increased “Informational Freedom.” There I write:

By design, the Internet does not embody a concept of geographic regions. Most fundamentally, it constitutes a way to connect networks with one another (hence the name “Internet” or network between networks). Since the Internet works at global scale, it follows that any geographic restrictions that exist have been added in, often at great cost.

As well as:

The same additional equipment used by governments to re-impose geographic boundaries on the Internet is also used by ISPs to extract additional economic value from customers, in the process distorting knowledge access. These practices include paid prioritization and zero rating.

Virtual Private Networks (or VPNs) are a way for citizens to circumvent these artificial restrictions imposed by governments and by ISPs. That’s why it is dismaying to now see a movement to ban VPNs around the world. China just got Apple to remove VPN apps from the Chinese App Store. And Russia is banning the use of VPNs altogether starting in November of this year.

If we want to preserve humanity’s ability to connect freely with each other, we need to respond in many different ways. Here are just some ideas that seem important:

1. Political action to fight against bans on VPNs, making sure they remain legal in as many countries as possible  

2. Making VPNs broadly available through easy to use applications that can find mass market adoption

3. Supporting open phone operating systems, such as UBports, so people can easily run any software they want on their phones  

4. Incentivized systems for providing global traffic routing on existing networks (stay tuned for something from Meshlabs)

5. Incentivized systems for wireless mesh networking

By incentivized systems here I mean something akin to the blockchain where contributors (miners) can earn a crypto currency in return for providing infrastructure.

If you know of other projects / initiatives in these areas, I would love to learn about them.

Saturday, July 29, 2017 - 7:35am

There is lots wrong with the healthcare and health insurance system in the US. One can also have a rational debate about the pros and cons of the Affordable Care Act and how we might proceed from here. What should not happen, however, is pushing through poorly thought through measures just for the sake of making a change. Even more so when there has been ample time to come up with something well designed.

So I was glad to see that three GOP senators voted against the latest half-baked attempt at undoing the ACA. Particularly commendable was the opposition by Senators Lisa Murkowski and Susan Collins, who bore the brunt of the pressure from their party and from the President. John McCain also finally found the courage to cast a “No” vote.

It will be interesting to see what happens next. It would be great if the Republicans and Democrats could work together to improve the ACA, or propose some actually well-thought-out alternative. Instead, I fear that partisan politics will continue to dominate, with every attempt made to have the ACA fail for a cheap “I told you so” moment. For the sake of all of those depending on it, I hope I am wrong about that.

Thursday, July 27, 2017 - 7:30am

Just as a reminder, we have been spending the last few weeks of Uncertainty Wednesday exploring different measures of uncertainty. We first looked at entropy, which is a measure based only on the states of the probability distribution itself. We then encountered random variables, which associate values or “payouts” with states, and learned about their expected value and variance (including continuous random variables).

Today we will look at functions of random variables. We will assume that we have a random variable X and we are interested in looking at the properties of f(X) for some function f. Now you might say, gee, isn’t that just another random variable Y? And so why would there be anything new to learn here?

To motivate why we want to explore this, let’s go back to the post in which I introduced the different levels at which we can measure uncertainty. There I wrote:

Payouts are only the immediate outcomes. The value or impact of these payouts may be different for different people. What do I mean by this? Suppose that we look at a situation where you can either win $1 million with 60% probability or lose $10 thousand with 40% probability. This seems like a no brainer situation. But for some people losing $10 thousand would be a rounding error on their wealth, whereas for others it would mean becoming homeless and destitute.

We now have the language to analyze the uncertainty in this. First we can compute the entropy

H(X) = - [0.6 * log 0.6 + 0.4 * log 0.4] = 0.971 (using logs base 2)

We can also calculate the expected value and variance as follows:

EV(X) = 0.6 * 1,000,000 + 0.4 * (- 10,000) = 596,000

VAR(X) = 0.6 * (1,000,000 - EV(X))^2 + 0.4 * (-10,000 - EV(X))^2 = 244,824,000,000

But as the text makes clear, none of these capture the vastly different impact these payoffs might have for different people.

One way to do that is to introduce the idea of a utility function U which translates payoffs into how a person feels or experiences these payoffs. Consider the following utility function

U(X) = log (IE + X)

where IE is the initial endowment, meaning the wealth someone has before encountering this uncertainty. The uncertainty faced by someone with IE = 10,000 is dramatically different than for someone with IE = 1,000,000. In fact, for IE = 10,000, when the payoff is -10,000 the utility function goes to negative infinity (chart produced with Desmos; technically you’d have to consider a limit, but you get the idea).
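
Here is a minimal sketch that reproduces the numbers above and shows the effect of the initial endowment (natural log for the utility, log base 2 for the entropy, matching the 0.971 figure):

```python
import math

# Win $1,000,000 with 60% probability, lose $10,000 with 40% probability.
outcomes = [(0.6, 1_000_000.0), (0.4, -10_000.0)]

entropy = -sum(p * math.log2(p) for p, _ in outcomes)  # ~0.971
ev = sum(p * x for p, x in outcomes)                   # 596,000.0
var = sum(p * (x - ev) ** 2 for p, x in outcomes)      # 244,824,000,000.0

def expected_utility(ie):
    # U(X) = log(IE + X), where IE is the initial endowment
    return sum(p * math.log(ie + x) for p, x in outcomes)

print(entropy, ev, var)
print(expected_utility(1_000_000))  # well defined for the wealthy person
# expected_utility(10_000) raises a math domain error: log(0) diverges,
# mirroring the utility going to negative infinity in the text.
```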

So we can see that applying a function to a random variable can have dramatic effects on uncertainty. Next week we will dig deeper into what we can know about the impact of applying a function. In particular we will be interested in questions such as how EV[U(X)] relates to U[EV(X)] — meaning what can we say about taking the expected value of the function of the random variable versus plugging the expected value into the function?

Tuesday, July 25, 2017 - 11:35am

The dominant position of companies such as Google, Facebook and Amazon is receiving a lot more attention these days. There is critical media coverage, including in traditionally pro-business publications such as the Wall Street Journal (“Can the Tech Giants Be Stopped?”) and Bloomberg (“Should America’s Tech Giants Be Broken Up?”). There is also the Democratic Party’s “Better Deal” memo, which focuses more broadly on the negative effects of corporate power. And then of course there is the European Union, which already fined Google 2.4 billion euros for manipulating search results and is considering another fine for Google’s alleged forced bundling of Google services with Android.

While I am happy to see the attention on the issue, I am concerned that regulators are missing the fundamental source of monopoly power in the digital world: network effects arising from the control of data. This will continue to lead to power law size distributions in which the number 1 in a market has a dominant position and is many times bigger than the number 2. That dynamic will play itself out not just for the very large companies which regulators are starting to look at but will be true in lots of other markets as well. The only way to go up against this effect is to shift computational power to the network participants.

I first started to write about this approach nearly three years ago in a post titled “Right to an API Key.” I then expanded this idea into what I am calling the “right to be represented by a bot” – as an end user I should be able to have all my interactions with digital systems intermediated by software that I control 100%. You can watch my TEDx talk about this and also read more about it in the Informational Freedom section of my book World After Capital.

Unfortunately instead of looking for this kind of digitally native solution, regulators are largely reverting to the industrial age tool of antitrust regulation. As a result I have a feeling we will be stuck with network effects based digital monopolies for quite some time, despite the exciting work that is happening around decentralized blockchain-based systems.

Friday, July 21, 2017 - 5:05pm

I previously wrote a review of Yuval Harari’s Sapiens, which I highly recommended, despite fundamentally disagreeing with one of its central arguments. Unfortunately, I cannot say the same about Homo Deus. While the book asks incredibly important questions about the future of humanity, it not only comes up short on answers but, more disappointingly, presents caricature versions of other philosophical positions. I nonetheless finished Homo Deus because it is highly relevant to my own writing in World After Capital. Based on some fairly positive reviews, I kept expecting a profound insight to arrive by the end, but it never came.

One of the big recurring questions in Homo Deus is why we, Homo Sapiens, think ourselves to be the measure of all things, putting our own interests above those of all other species. Harari blames this on what he calls the “religion of humanism” which he argues has come to dominate all other religions. There are profound problems both with how he asks this question and with his characterization of Humanism.

Let’s start with the question itself. In many parts of the book, Harari phrases and rephrases this question in a way that implies humanity is being selfish, or speciest (or speciesist, as some spell it). For instance, he clearly has strong views about the pain inflicted on animals in industrial meat production. While it is entirely fine to hold such a view (which I happen to share), it is not good for a philosophical or historical book to let it guide the inquiry. Let me provide an alternative way to frame the question. On airplanes the instructions are to put the oxygen mask on yourself first, before helping others. Why is that? Because you cannot help others if you are incapacitated due to a lack of oxygen. Similarly, humanity putting itself first does not automatically have to be something morally bad. We need to take care of humanity’s needs, if we want to be able to assist other species (unless you want to make an argument that we should perish). That is not the same as arguing that all of humanity’s wants should come first. The confusion between needs and wants is not at all mentioned in Homo Deus but is an important theme in the wonderful “How Much is Enough” by Edward and Robert Skidelsky and in my book “World After Capital.”

Now let’s consider Harari’s approach to Humanism. For someone who is clearly steeped in history, Harari’s definition of Humanism confounds Enlightenment ideas with those arising from Romanticism. For instance, he repeatedly cites Rousseau as a key influence on “Humanism” (putting it in quotes to indicate that this is Harari’s definition of it), but Rousseau was central to the romanticist counter-movement against the Enlightenment that Voltaire championed. If you want an example of a devastating critique, read Voltaire’s response to Rousseau.

One might excuse this commingling as a historical shorthand, seeing how Romanticism quickly followed the Enlightenment (Rousseau and Voltaire were contemporaries) and how much of today’s culture is influenced by romantic ideas. Harari makes a big point of the latter, frequently criticizing the indulgence in “feelings” that permeates so much of popular culture and has also invaded politics and even some of modern science. But this is a grave mistake, as it erases a 200-year history of secular, enlightenment-style humanist thinking that does not at all give primacy to feelings. Harari pretends that we have all followed Rousseau, when many of us are in the footsteps of Voltaire.

This is especially problematic, as there has never been a more important time to restore Humanism, for the very reasons of dramatic technological progress that motivate Harari’s book. Progress in artificial intelligence and in genomics make it paramount that we understand what it means to be human before taking steps to what could be a post human or trans human future. This is a central theme of my book “World After Capital” and I provide a view of Humanism that is rooted in the existence and power of human knowledge. Rather than restate the arguments here, I encourage you to read the book.

Harari then goes on to argue how progress in AI and genetics will undermine the foundations of “Humanism,” thus making room for new “religions” of trans humanism and “Dataism” (which may be a Harari coinage). These occupy the last part of the book and again Harari engages with caricature versions of the positions, which he sets up based on the most extreme thinkers in each camp. While I am not a fan of some of these positions, which I believe run counter to some critical values of the kind of Humanism we should pursue, their treatment by Harari robs them of any intellectual depth. I won’t spend time here on these, other than to call out a particularly egregious section on Aaron Swartz whom Harari refers to as the “first martyr” for Dataism. This is a gross mistreatment of Aaron’s motivations and actions.

There are other points where I have deep disagreements with Harari, including on the existence of Free Will. Harari’s position, that there is no free will, feels like it is inspired by Sam Harris in its absolutism. You can read my own take. I won’t detail all of these other disagreements now, as they are less important than the foundational misrepresentation of what Humanism has been historically and the ignorance of what it can be going forward.

Thursday, July 20, 2017 - 2:13pm

Last time in Uncertainty Wednesdays, I introduced continuous random variables and gave an example of a bunch of random variables following a Normal Distribution.

[Image: normal distribution probability density functions for different values of μ and σ^2]

Now in the picture you can see two values, denoted as μ and σ^2, for the different colored probability density functions. These are the two parameters that completely define a normally distributed random variable: μ is the Expected Value and σ^2 is the Variance.

This is incredibly important to understand. All normally distributed random variables only have 2 free parameters. What do I mean by “free” parameters? We will give this more precision over time, but basically for now think of it as follows: a given Expected Value and Variance completely define a normally distributed Random Variable. So even though these random variables can take on an infinity of values, the probability distribution across these values is very tightly constrained.

Contrast this with a discrete random variable X with four possible values x1, x2, x3 and x4. Here the probability distribution p1, p2, p3, p4 has the constraint that p1 + p2 + p3 + p4 = 1, where pi = Prob(X = xi). That means there are 3 degrees of freedom, because the fourth probability is determined by the first three. Still, that is one more degree of freedom than for the Normal Distribution, despite there being only four possible outcomes (instead of an infinity).

Why does this matter? Assuming that something is normally distributed provides a super tight constraint. This should remind you of the discussion we had around independence.  There we saw that assuming independence is actually a very strong assumption. Similarly, assuming that something is normally distributed is a strong constraint because it means there are only two free parameters characterizing the entire probability distribution.  
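
A small sketch of the contrast (using scipy.stats, one common way to work with a normal distribution in Python):

```python
from scipy.stats import norm

# A normal random variable is fully pinned down by two free parameters.
mu, sigma = 0.0, 1.0
x = norm(loc=mu, scale=sigma)
print(x.mean(), x.var())  # 0.0 1.0 -- nothing else left to choose

# A discrete random variable with four values: pick any three probabilities
# freely (3 degrees of freedom) and the fourth is forced by summing to 1.
p1, p2, p3 = 0.1, 0.2, 0.3
p4 = 1.0 - (p1 + p2 + p3)
print(p4)                 # 0.4
```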

Thursday, July 20, 2017 - 1:30pm

I have written several posts on token sales and ICOs already, including some “Thoughts on Regulating ICOs” and  “Optimal Token Sales.” With the continued fundraising success of new projects, here are some observations on investment terms and their potential implications for achieving successful outcomes.

Many projects these days have a private fundraising event that precedes any public token offering. These take varying forms, including investments in corporations and some type of SAFT (Simple Agreement for Future Tokens). Fueled by a lot of demand, the terms of these raises have become more and more project-friendly.

Now at first blush that might seem great, but there is an adverse selection problem that is much more severe for protocols than it was for traditional startups. Why? Because in a traditional startup, if I get shut out as an investor I may never make it in and so even strong investors may “hold their nose” at bad terms. With a protocol though there will be a public token sale event and eventually a public token altogether, providing more flexible entry opportunities than in a traditional startup. That in turn means that if the early private terms are not appealing the mix of investors will rapidly shift to lower quality investors.

Now you might still say: who cares about the investors? It’s all about the project creators in any case. And that is likely true once in a while. But overall the initiators of many of these projects are inexperienced when it comes to building an organization, allocating resources, dealing with adversity and so on. These are the kind of things your early investors and advisors can help you with, if they are good. The evidence from traditional startups that have done “club rounds” as their seed round, with many investors piling in, none with a meaningful equity position and hence skin in the game, suggests that difficult questions often remain unresolved.

I expect that we will unfortunately relearn this same lesson on many projects currently being funded via token sales. It will be fascinating to see how well some of the projects that have recently raised tens and sometimes hundreds of millions of dollars do with putting that money to work sensibly and as they encounter the need to make tough decisions. My sense is that outcomes here will be influenced by the strength of the extended team, including early backers.  

Thursday, July 20, 2017 - 12:26pm

So far in Uncertainty Wednesdays we have only dealt with models and random variables that had a discrete probability distribution. Often, in fact, we had only two possible states or signal values. There are lots of real world problems, though, in which the variable of interest can take on a great many values, for example the time between two events taking place. We could try to break this down into small discrete intervals (say seconds) and have a probability per second. Or we could define a continuous random variable where the wait time can be any real number from some continuous range.

Now if you have been following along with this series you will have one immediate objection: how can we assign a probability to our random variable taking on a specific real number from a range? A range of reals contains uncountably infinitely many real numbers and hence the probability for any single real value must be, well, infinitely small? So how do we define a Prob(X = x)?

Before I get to the answer let me interject a bit of philosophy. There is a fundamental question about the meaning of real numbers: are they actually real, as in, do they exist? OK, so this is a flippant way of asking the question. Here is a more precise way. Is physical reality continuous or quantized? If it is quantized, then using a model with real numbers is always an approximation of reality. My reading of physics is that we don’t really know the answer. A lot of phenomena are quantized, but then there is something like time, which we understand extremely poorly (which is why I chose time, as opposed to, say, distance, as my example above). Personally, while not, ahem, certain, I am more inclined to see real numbers as a mathematical ideal, which approximates a quantized reality.

Does this matter? Well, it does because too often continuous random variables are treated as some kind of ground truth, instead of an approximation to a physical process. And as we will see in some future Uncertainty Wednesday, often this is a rather restrictive approximation.

Now back to the question at hand. How do we define a probability for a continuous random variable? The answer is through a so-called probability density function (PDF). I find it easiest to think of the PDF as specifying the probability “mass” for an infinitesimal interval around a specific value. Let’s call our density function f(x), then the value of f(x) at x is not the probability of X = x but rather the probability of x - ε ≤ X ≤ x + ε for an infinitesimal ε (I will surely get grief from someone for this abuse of notation).

But by thinking about it this way, it then follows quite readily that we can find the probability of X being in a range by forming the integral of the probability density function over that range.

Probably the single best known probability density function is the one that gives us a random variable with a Normal Distribution. The shape of the PDF is why the Normal Distribution is also often referred to as the “Bell Curve”:

[Image: the bell-shaped probability density function of the Normal Distribution]
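
To see the definition at work, here is a minimal sketch that recovers the probability of a range by integrating the standard normal PDF and checks it against the cumulative distribution function (using scipy, purely for illustration):

```python
from scipy.integrate import quad
from scipy.stats import norm

a, b = -1.0, 1.0

# Integrate the density f(x) over [a, b] ...
prob_by_integration, _ = quad(norm.pdf, a, b)

# ... which matches the difference of the CDF at the endpoints.
prob_by_cdf = norm.cdf(b) - norm.cdf(a)

print(prob_by_integration)  # ~0.6827
print(prob_by_cdf)          # ~0.6827

# Any single point carries zero probability:
print(norm.cdf(a) - norm.cdf(a))  # 0.0
```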

Next Uncertainty Wednesday we will dig a bit deeper into continuous random variables by comparing them to what we have learned about discrete ones.

Thursday, July 20, 2017 - 12:14pm

We are rapidly approaching the first half year of Trump’s Presidency. I am genuinely curious whether there is anyone attempting a cogent defense of the record so far. If you have read something you think qualifies, please link to it in the comments. I would also love to see someone go back and ask people, like Peter Thiel, who supported Trump’s candidacy about their assessment of his performance to date.

I just spent a week in Germany leading up to the G20 summit. While the following is a commentary after the summit (and from an Australian journalist), it echoes a sentiment I heard frequently: Trump is embarrassing and isolating the US, giving more room for China and Russia in the world. 

Again: I really would like to listen to or read an opposing view, as long as it is coherent, calm and reasoned. This could be about the presidency as a whole, domestic or foreign policy. So please if you have something worthwhile add it in the comments.

Wednesday, July 19, 2017 - 4:35pm

The women who have come forward in tech in the last few weeks, including Susan Fowler, Niniane Wang and Sarah Kunst have shown great courage. Their courage will be the catalyst for lasting change, provided all of us turn this into a sustained effort.

Having been in tech for over two decades as an entrepreneur and investor, I have sadly looked into allegations of sexual harassment on more than one occasion. Frustratingly, the majority of these cases went nowhere, as initial allegations were not subsequently confirmed. Women were scared to follow through. Scared that they would be known for their complaint, instead of their accomplishments. Scared that their careers would be blocked, their fundraising stalled. Those fears were real and justified and it is clear that for every complaint there were many, many more incidents that went unreported.

Change starts with courageous individuals. To get systematic change, though, will take time and broad engagement. I hope that more women will find inspiration in those who have been leading and follow through on the record. And to start speaking out in the moment, to push back and call men out on the spot, knowing full well that doing so will, at times, come at a personal cost. There will, however, also be investors and employers who will be supportive and I want USV to be among those.

None of this should be required of women in the first place. We men should behave professionally. There should be more women making investment decisions (including at USV). There should be more diversity of all kinds and at all levels of tech, period.

Getting there will take a long time though, especially in Venture Capital. Change in our industry is slow, as fund cycles are long. At USV we last added a new General Partner in 2012 with Andy. Change in the diversity of GPs at existing firms will proceed one retirement / new partner at a time (in a variant on Planck’s quote that “science progresses one funeral at a time”).

There is, however, a potential accelerant for change due to the VC industry itself being disrupted. The balance of power has been shifting from investors to entrepreneurs for some time now, as starting a company has become cheaper, services such as Mattermark have provided more visibility and networks such as AngelList have broadened who can invest. Finally, with crypto currencies and token offerings, a whole new funding mechanism that doesn’t rely on gatekeepers has become available.

So now is the time to double down in order to achieve lasting change. We all need to seize this moment. Here are actions we can take as investors:

Make sure to have a sexual harassment policy that explicitly covers external relationships. Here is a template for a policy for VC firms to include in employee handbooks. Jacqueline provided input to this and we will adopt a version for USV.

Become a limited partner in some of the women- and minority-led early stage funds. As Shai Goldman points out, for the earliest investments there is nothing to go on other than the entrepreneur. Susan and I are LPs in Female Founders Fund and Lattice Ventures and will be LPs in 645 Ventures’ next fund.

Learn about unconscious bias. We all love to think of ourselves as being immune to it, as strictly applying objective criteria, but there is ample research to the contrary. I recommend “What Works” by Iris Bohnet.

And of course most of all: fund more women and minority entrepreneurs!

Tuesday, July 18, 2017 - 6:36pm

In last week’s Uncertainty Wednesday, I introduced the expected value EV of a random variable X. We saw that EV(X) is not a measure of uncertainty. The hypothetical investments I had described all had the same expected value of 0. It is trivial, given a random variable with EV(X) = μ, to construct X’ so that EV(X’) = 0. That’s in fact how I constructed the first investment. I started with $0 with 99% probability and $100 with 1% probability, which has an EV of

EV = 0.99 * 0 + 0.01 * 100 = 1

and then I simply subtracted 1 from each possible outcome to get -$1 with 99% probability and $99 with 1% probability.

What we are looking for instead, in order to measure uncertainty, is a number that captures how spread out the values are around the expected value. The obvious approach to this would be to form the weighted sum of the distances from the expected value as follows:

AAD(X) = ∑ P(X = x) * |x - EV(X)|

where | | denotes absolute value (meaning the magnitude without the sign). This metric is known as the Average Absolute Deviation (btw, instead of the shorthand P(x) I am now writing P(X = x) to show more clearly that it is the probability of the random variable X taking on the value x).

AAD is one measure of dispersion around the expected value, but it is not the most commonly used one. That instead is what is known as Variance, which is defined as follows:

VAR(X) = ∑ P(X = x) * (x - EV(X))^2

Or expressed in words: the probability weighted sum of the squared distances of possible outcomes from the expected value. It turns out that for a variety of reasons using the square instead of the absolute value has some useful properties and also interesting physical interpretations (we may get to those at some later point).   

Let’s take a look at both of these metrics for the random variables from our investment examples:

Variance

Investment 1: 0.99 * (-1 - 0)^2 + 0.01 * (99 - 0)^2 = 0.99 * 1 + 0.01 * 9,801 = 99

Investment 2: 0.99 * (-100 - 0)^2 + 0.01 * (9,900 - 0)^2 = 0.99 * 10,000 + 0.01 * 98,010,000 = 990,000

Investment 3: 0.99 * (-10,000 - 0)^2 + 0.01 * (990,000 - 0)^2 = 0.99 * 100,000,000 + 0.01 * 980,100,000,000 = 9,900,000,000

Average Absolute Deviation

Investment 1: 0.99 * |-1 - 0| + 0.01 * |99 - 0| = 0.99 * 1 + 0.01 * 99 = 1.98

Investment 2: 0.99 * |-100 - 0| + 0.01 * |9,900 - 0| = 0.99 * 100 + 0.01 * 9,900 = 198

Investment 3: 0.99 * |-10,000 - 0| + 0.01 * |990,000 - 0| = 0.99 * 10,000 + 0.01 * 990,000 = 19,800

You might have previously noticed that Investment 2 is simply Investment 1 scaled by a factor of 100 (and ditto, Investment 3 is 100x Investment 2). We see that AAD, as per its definition, follows that same linear scaling, whereas variance grows with the square, meaning the variance of Investment 2 is 100^2 = 10,000x the variance of Investment 1.

Both of these are measures that pick up the values of the random variable as separate from the structure of the underlying probabilities. If that doesn’t make sense to you, go back and read the initial post about measuring uncertainty and then go back to the posts about entropy. The three hypothetical investments each have the same entropy as they share the same probabilities. But AAD and Variance pick up the difference in payouts between the investments.
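
For completeness, here is a minimal sketch that reproduces all of the numbers above and makes the different scaling behavior visible:

```python
# Expected value, average absolute deviation, and variance for the
# three hypothetical investments.
investments = [
    [(0.99, -1.0), (0.01, 99.0)],            # Investment 1
    [(0.99, -100.0), (0.01, 9_900.0)],       # Investment 2 = 100x Investment 1
    [(0.99, -10_000.0), (0.01, 990_000.0)],  # Investment 3 = 100x Investment 2
]

for i, outcomes in enumerate(investments, start=1):
    ev = sum(p * x for p, x in outcomes)
    aad = sum(p * abs(x - ev) for p, x in outcomes)
    var = sum(p * (x - ev) ** 2 for p, x in outcomes)
    print(i, ev, aad, var)

# EV is 0 for all three; AAD scales linearly (1.98, 198, 19,800) while
# variance scales with the square (99; 990,000; 9,900,000,000).
```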

Monday, July 17, 2017 - 7:32pm

I am spending this 4th of July in Germany, having visited my parents and friends from growing up near Nuremberg for the last few days. It is late here now and I am seeing all the Happy 4th wishes from the US in my Twitter timeline. That made me think about what I feel like celebrating on this day. And as I wrote last year, it is not so much independence we need in the modern world, but rather interdependence.

There is, however, something very much worth celebrating and that is the memorable language from the Declaration itself. It brought with it some ideas that feel as important today as they were back then, such as the concept of unalienable rights. Other aspects, though, feel like they need to be updated for the progress that has been made since then. 

Based on my writing in World After Capital, here is a newly phrased “preamble” that I would be happy to celebrate for many years to come (although I am sure it can be improved upon):

We hold these truths to be universal, that all humans are created equal; that they are endowed qua their humanity with certain unalienable Rights, that among these are Life, Liberty and the pursuit of both Happiness and Knowledge; that they have Responsibilities towards each other and other species, that among these are Tolerance, and the Application and Furtherance of Knowledge for the Benefit of All.

I will, in a future post, explain the rationale for my choice of words and ideas in detail.

Until then: Happy 4th of July!

Friday, July 7, 2017 - 11:41pm

Susan and I have been longtime supporters of the wonderful ChangeX platform: ChangeX helps spread social innovation across communities. This week ChangeX introduced a new model for donors, which they are calling Impact as a Service (IaaS). This is a bit of a tongue-in-cheek reference to Infrastructure as a Service (also IaaS). It is spot on though as the two share important characteristics:

  1. IaaS reduces or even eliminates the overhead of manual setup activities, allowing all participants to focus on what actually delivers value.

  2. IaaS provides much more transparency – you can see exactly what you are paying for and what you get in return.

  3. IaaS lets you start small and scale up.

Delivering IaaS is an important milestone for ChangeX. Paul and the team set out to build a true technology platform to make social programs more impactful and help them grow across the world. Their diligent investment is now starting to pay off, which is exciting. Congratulations to the team!

Albert Wenger is a partner at Union Square Ventures (USV), a New York-based early stage VC firm focused on investing in disruptive networks. USV portfolio companies include: Twitter, Tumblr, Foursquare, Etsy, Kickstarter and Shapeways. Before joining USV, Albert was the president of del.icio.us through the company’s sale to Yahoo. He previously founded or co-founded five companies, including a management consulting firm (in Germany), a hosted data analytics company, a technology subsidiary for Telebanc (now E*Tradebank), an early stage investment firm, and most recently (with his wife), DailyLit, a service for reading books by email or RSS.