
Albert Wenger


Wednesday, January 16, 2019 - 12:03pm

Today’s Uncertainty Wednesday revisits a favorite topic of mine: correlation. I first wrote about the importance of thinking about correlation in modeling over 10 years ago, long before starting the Uncertainty Wednesday series (the reference to Excel is a giveaway). I then had a three part series about spurious correlation which you can find here: part 1, part 2 and part 3. Here is the key introductory paragraph:

Well, as you have seen from the posts on sample mean and sample variance, whenever you are dealing with a sample, the observed values of a statistical measure have their own distribution. The same is of course true for correlation. So two random variables may be completely independent, but when you draw a sample, the sample happens to have correlation. That is known as spurious correlation.

But the situation is way worse than that when one of the random variables involved has a fat tailed distribution in which extremes occur with higher probability than say a normal distribution. Why? Because many fat tailed distributions do not have a well-defined variance. Instead their variance explodes towards infinity. Yet any sample from a fat tailed distribution will have a finite variance (by construction). The sample variance in this situation is not an estimate of the actual variance – since the latter does not exist. By extension, a correlation in which at least one variable is fat tailed has the same problem.
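To make this concrete, here is a minimal simulation sketch (mine, not from the earlier posts; the Pareto distribution and tail index are assumptions chosen so that the theoretical variance is infinite):

```python
# Minimal sketch: the sample variance of a fat tailed distribution never settles
# down, because the true variance does not exist.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5  # tail index <= 2 means the theoretical variance is infinite

for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    draws = rng.pareto(alpha, size=n)  # heavy-tailed sample
    print(f"n = {n:>10,}   sample variance = {draws.var():,.1f}")

# The printed sample variance is always a finite number (by construction), but
# it keeps growing and jumping around by orders of magnitude instead of converging.
```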

This should be a complete “Duh” moment and yet people cannot help themselves but use sample correlation all the time in settings where fat tails are extremely likely. Again, the sample correlation will always exist (by construction), but it doesn’t have to mean a thing! This is incredibly hard for us to accept: we follow a recipe (how to calculate correlation), we get a number (sample correlation) and yet we are supposed to ignore it?

Here is a way of understanding why. Ask yourself what would happen to the correlation if you had more data points. This is an important mental exercise as it is a counter-factual (you only have your existing data points). Sometimes a deeper truth can only be arrived at by realizing that your sample is misleading you.

Let’s look at a concrete example: outcomes in venture investing are generally seen to be fat tailed. You have a sample of venture fund sizes and returns. The sample shows negative correlation – larger fund sizes appear correlated with lower returns. Can you actually conclude that this is the case? You have to ask yourself what would happen if just one large fund completely hit it out of the park. Or maybe two funds did. Would the sign on your correlation flip to positive?

If outcomes really are fat tailed you will find that your sample correlation is not really robust (you could actually simulate this by “drawing” from the distribution and adding new hypothetical points to your sample and then recalculating the sample correlation). This also turns out to be a central argument that Nassim Taleb made in his recent criticism of IQ as a valid concept. As per usual his criticism leaves lots of room for further debate, but I have yet to see a response that at least attempts to address this problem of sample correlation under fat tails.
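Here is a rough sketch of that simulation (my own illustration; the lognormal fund sizes and the fat tailed returns are assumed distributions, and size and return are independent by construction):

```python
# Sketch: how stable is a sample correlation when one variable is fat tailed
# and the two variables are in fact independent? "Draw" extra points and
# recalculate, as suggested above.
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    fund_size = rng.lognormal(mean=4.0, sigma=1.0, size=n)   # hypothetical fund sizes
    fund_return = rng.standard_t(df=2, size=n) ** 2          # fat tailed, independent of size
    return fund_size, fund_return

x, y = sample(30)
print("correlation with 30 funds:", round(np.corrcoef(x, y)[0, 1], 3))

for extra in (5, 10, 20):
    x_new, y_new = sample(extra)
    x_all = np.concatenate([x, x_new])
    y_all = np.concatenate([y, y_new])
    print(f"after {extra:>2} more hypothetical funds:", round(np.corrcoef(x_all, y_all)[0, 1], 3))

# Rerun with different seeds: the sample correlation swings widely and can flip
# sign, even though size and return are independent by construction here.
```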

Monday, January 14, 2019 - 12:58pm

NOTE: Today’s excerpt from World After Capital continues the topic of Informational Freedom, discussing overreach in the patent system and offering prizes as an alternative mechanism. This is timely as the patent office has unfortunately issued new rules that will make it easier to obtain software patents, undoing the tightening from prior Supreme Court decisions.

While copyright limits our ability to share knowledge, patents limit our ability to use knowledge to create something. Much like having a copyright confers a monopoly on reproduction, a patent confers a monopoly on use. And the rationale for the existence of patents is similar to the argument for copyright. The monopoly that is granted results in economic rents (i.e., profits) that are supposed to provide an incentive for people to invest in research and development.

As with copyright, the incentive argument here should be suspect. People invented long before patents existed and some people have continued to invent without seeking patents. We can trace early uses of patents to Venice in the mid 1400s; Britain had a fairly well established system by the 1600s [106]. That leaves thousands of years of invention, a time that saw such critical breakthroughs as the alphabet, movable type, the wheel, and gears. This is to say nothing of those inventors who more recently chose not to patent their inventions because they saw how that would interrupt the knowledge loop and impose a loss on society. These inventors include Jonas Salk, who created the polio vaccine; other unpatented breakthroughs include X-rays, penicillin, ether as an anesthetic, and many more (see [107]). Since we know that limits on knowledge use impose a cost, we should therefore ask what alternatives exist to patents to stimulate innovation.

Many people are motivated simply by wanting to solve a problem. This could be a problem they are having themselves or something that impacts family or friends or the world at large. With a Universal Basic Income, more of these people will be able to spend their time on inventing, following their intrinsic motivation.

We will also see more invention because digital technologies are reducing the cost of inventing. One example of this is the USV portfolio company Science Exchange, which has created a marketplace for laboratory experiments. Let’s say you have an idea that requires you to sequence a bunch of genes. The fastest gene sequencing available to date comes from a company called Illumina, whose machines cost from $850K to $1M to buy [108]. Via Science Exchange, however, you can access such a machine on a per use basis for less than $1000 [109]. Furthermore, the next generation of sequencing machines is already on the way, and these machines will further reduce the cost. Here too we see the phenomenon of technological deflation at work.

A lot of recent legislation has needlessly inflated the cost of innovation. In particular, rules around drug testing have made drug discovery prohibitively expensive. We have gone too far in the direction of protecting patients during the research process and also of allowing for large medical damage claims. As a result, many drugs are either not developed at all or are withdrawn from the market despite their efficacy (for example the vaccine against Lyme disease, which is no longer available for humans [110]).

Patents (i.e., granting a temporary monopoly) are not the only way to provide incentives for innovation. Another historically successful strategy has been the offering of public prizes. Britain famously offered the Longitude rewards starting in 1714 to induce solutions to the problem of determining a ship’s longitude at sea (latitude can be determined easily from the position of the sun). Several people were awarded prizes for their designs of chronometers, lunar distance tables and other methods for determining longitude (including improvements to existing methods). As quid pro quo for receiving the prize money, inventors generally had to make their innovations available to others to use as well [111].

At a time when we wish to accelerate the Knowledge Loop, we must shift the balance towards knowledge that can be used freely and that is not encumbered by patents. It is promising to see successful recent prize programs, such as the X Prizes, DARPA Grand Challenges, and NIST competitions. There is also potential for crowdfunding future prizes. Medical research in particular should be a target for prizes to help bring down the cost of healthcare.

Going forward, we can achieve this by using prizes more frequently. And yet, that leaves a lot of existing patents in place. Here I believe a lot can be done to reform the existing system and make it more functional, in particular by reducing the impact of so-called Non Practicing Entities (NPEs, commonly referred to as “patent trolls”). These are companies that have no operating business of their own, and exist solely for the purpose of litigating patents.

In recent years, many NPEs have been litigating patents of dubious validity. They tend to sue not just a company but also that company’s customers. This forces a lot of companies into a quick settlement. The NPE then turns around and uses the early settlement money to finance further lawsuits. Just a few dollars for them go a long way because their attorneys do much of the legal work on a contingency basis, expecting further settlements. Fortunately, a recent Supreme Court ruling placed limits on where patent lawsuits can be filed, which should help limit the activity of these NPEs going forward [112].

As a central step in patent reform, we thus must make it easier and faster to invalidate existing patents while at the same time making it more difficult to obtain new patents. Thankfully, we have seen some progress on both counts in the U.S., but we still have a long way to go. Large parts of what is currently patentable should be excluded from patentability in the first place, including designs and utility patents. University research that has received even small amounts of public funding should not be eligible for patents at all. Universities have frequently delayed the publication of research in areas where they have hoped for patents that they could subsequently license out. This practice has constituted one of the worst consequences of the patent system for the Knowledge Loop.

We have also gone astray by starting to celebrate patents as a measure of technological progress and prowess instead of treating them as a necessary evil (and maybe not even necessary). Ideally, we would succeed in rolling back the reach of existing patents and raising the bar for new patents while also inducing as much unencumbered innovation as possible through the bestowing of prizes and social recognition.

Friday, January 11, 2019 - 1:50pm

What does it mean to trust someone? Usually, that person (or company, or technology) can take actions that share a bunch of characteristics: the action has a material impact on us, we cannot directly observe the action or judge its consequences relative to alternatives at least until later, and we have an expectation that the other will act in a way that is primarily to our benefit as opposed to their own.

One way to tell that trust was involved in a relationship is when we discover that the person (or company, or technology) acted in a way that harmed us and benefited them. At that point we feel betrayed. This provides a useful distinction between the concepts of trust and reliance. We rely on a clock to tell time. When the clock breaks we will feel disappointed. But when we buy a clock from someone who tells us it is a working clock, we trust them and when it doesn’t work, we feel betrayed (thanks to philosopher Annette Baier for this distinction).

There are many different mechanisms for maintaining trust, such as reputation, bonding, repeated interactions (loyalty), professional codes of conduct, regulation, etc. Trust is an essential facilitator of a broad range of interactions and relationships; societies and economies with high levels of trust can accomplish a lot more than those without.

Here is an example of trust through loyalty. When Susan and I lived in Munich, our apartment was close to the famous Viktualienmarkt, a beautiful old-fashioned market in the middle of town. There are many sellers of fresh fruit and vegetables in the market but we would always go to the same vendor. The following was a routine occurrence.

Vendor to tourist in front of us: [Loudly] The oranges are fresh from Spain and super juicy!

Vendor to us (regulars) seconds later: [Hushed] Don’t have the oranges today, they are terrible.

We could trust the vendor to not sell us bad oranges because we were loyal customers who provided steady business.

Now some people have been saying that crypto is exciting because it has “trust built in.” I, however, prefer a different formulation, which is that crypto systems are “trust minimized.”

To see the difference, let’s look at the example of buying oranges. A blockchain based market for oranges does not automatically mean I will only buy juicy oranges. In fact, in most crypto marketplaces I will know less about the seller than I did with our local vendor (who was at the same place every day and whom I knew by name) and may be buying from different vendors so that loyalty at the vendor level may not be available as a trust mechanism. But let’s envision that every orange when it is picked is stamped with a code which is registered on a blockchain together with a proof of location. Now I could scan an orange in the market and look up for myself where and when it was picked.

Among other things I could thus determine for myself whether the orange is in fact from Spain and how recently it was picked. But we see now that this system is only “trust minimized.” Trust hasn’t been built into it nor has it been entirely eliminated. For example, if the payoff was big enough, someone might truck oranges to a region to pretend that they picked them there. And my local vendor can still add value on top of such a trust minimized system. For instance, they could cut open an orange from each batch and let me sample it.
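As a toy illustration (entirely hypothetical; none of these codes, growers or services exist), the on-chain registration could be as simple as a hash of the provenance record that anyone can recompute and compare:

```python
# Toy sketch of the orange example: each orange gets a provenance record whose
# hash is registered on a blockchain at picking time; a buyer can recompute the
# hash and check it against the on-chain entry.
import hashlib, json

def record_hash(record: dict) -> str:
    # Canonical JSON so the grower and the buyer hash exactly the same bytes.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

orange = {
    "code": "ES-VAL-0001234",                       # code stamped on the orange (made up)
    "picked_at": "2019-01-09T08:30:00+00:00",
    "location_proof": {"lat": 39.47, "lon": -0.38},  # however the proof is produced
    "grower": "Example Finca, Valencia",
}

on_chain_entry = record_hash(orange)   # stand-in for the blockchain registration

# At the market: scan the code, fetch the claimed record, verify it against the chain.
assert record_hash(orange) == on_chain_entry
print("record matches the registry:", orange["picked_at"], orange["location_proof"])
```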

So what would we gain from a trust minimized system? To stick with the example, one can see how the tracking system would dramatically reduce the ability of intermediaries along the supply chain to mess around (for instance by substituting oranges from one region for another). This matters a lot in markets where I as a customer don’t have a relationship with the intermediary and where the intermediary is potentially powerful (e.g. due to economies of scale in distribution) relative to both the grower and the vendor.

We live in a world where large corporations (especially ones with scale or network effects) have often abused trust due to a misalignment of incentives driven by short-term oriented capital markets. There are different ways of tackling this problem, including new regulation, innovative forms of ownership and trust minimized crypto systems.

Wednesday, January 9, 2019 - 2:44pm

In today’s edition of Uncertainty Wednesday, we will look at how uncertainty in markets is highly dynamic. Something can be a sure thing for a while but not forever.

There is a – potentially apocryphal – story about Ron Conway’s first fund: every single investment was a write-off except for Google, which resulted in a 7x return for the fund as a whole. Such is the power of outsized returns in a world of network effects and winner-take-all (or most) markets. One winner more than offsets many losers. And the approach of making lots of investments turned out to be a great strategy for many years and different firms.

Along the way though, as this strategy became more and more widely understood, the inevitable started to happen: prices and round sizes went up across the board. The average seed round today is as big as a Series A was a decade ago. And the pricing for later rounds for a company that has momentum and some story about how it might have network effects is often premised on that company becoming dominant in its market.

So the strategy will stop working. At least it will stop working for small fund sizes. As prices and investment amounts go up, you need to make ever more and larger investments to have enough winners (and enough of a stake in the winners) to pay for the losers. Not surprisingly then, we have seen an explosion in fund size. Much like in a casino, somebody will still win in the end (including possibly a small fund), but the probability that it is you becomes tiny and the expected value of the strategy turns negative.
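A rough Monte Carlo sketch (my own stylized assumptions about hit rates and exit values, not actual data) shows the mechanism: hold the outcome distribution fixed and only raise entry valuations, and the fund multiple scales down while the probability of losing money goes up:

```python
# Stylized "spray" portfolio: each company has a small chance of a large,
# Pareto-distributed exit; a check at entry valuation V returns exit / V per dollar.
import numpy as np

rng = np.random.default_rng(7)

def fund_multiples(n_companies, entry_valuation_m, trials=20_000):
    hit = rng.random((trials, n_companies)) < 0.05                    # ~5% become big winners
    exit_m = hit * 100 * (1 + rng.pareto(1.2, (trials, n_companies)))  # winner exits in $M
    per_dollar = exit_m / entry_valuation_m                            # return per dollar invested
    return per_dollar.mean(axis=1)                                     # equal checks across the portfolio

for entry in (5, 15, 40):                                              # seed post-money valuations in $M
    m = fund_multiples(n_companies=30, entry_valuation_m=entry)
    print(f"entry ${entry:>2}M:  mean multiple {m.mean():.1f}x,  P(fund loses money) {(m < 1).mean():.0%}")

# The outcome distribution never changes; only the entry price does. The mean
# multiple falls in proportion to the entry valuation and the loss probability rises.
```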

This is the power of markets and the price mechanism. It does away with sure bets. The “spread money among a bunch of network effects companies and sit back” strategy was a sure bet when prices were low. Now prices are high and uncertainty has returned in full force. 

It will take a long time for the effects of this to work themselves through the venture capital market. Fund cycles are long and if you have had several successful funds you can raise a new fund even after one bad one (or possibly several). As a result, strategies tend to persist far past their expiration date and that will be true for this strategy as well.

Monday, January 7, 2019 - 12:37pm

NOTE: Excerpts from World After Capital will resume next Monday

We just returned from a wonderful week skiing. One of our family traditions is to all work together completing a puzzle. It’s a fun activity that involves a surprising amount of teamwork, such as finding pieces that belong in a part of the puzzle someone else is working on and trading off working on different areas.

We had always picked 1,000 piece puzzles. This time we wound up buying a 3,000 piece puzzle. After all, how much harder could it be? Well, a lot harder, as it turns out. As a rough approximation, nearly 10x as hard!
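Here is the back-of-the-envelope reasoning (a naive model, nothing more): if finding the home for each piece means scanning roughly all the pieces still on the table, total effort grows with the square of the piece count:

```python
def effort(pieces: int) -> int:
    # Placing each piece means scanning the pieces still loose: n + (n-1) + ... + 1
    return pieces * (pieces + 1) // 2

print(effort(3000) / effort(1000))      # ~9.0: tripling the pieces is roughly 10x the work
print(3 * effort(1000) / effort(3000))  # ~0.33: three 1,000 piece puzzles are far less work
```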

Here is the puzzle in a semi-finished state. That’s as far as we got before the vacation ended, despite spending quite a bit of collective time on it.

There is a good business and life lesson in this. As you add more parts to a problem, the complexity tends to grow explosively. Here are some strategies to combat this problem in business:

Whenever you add a feature, consider removing one that’s used by only a small fraction of users (important caveat: if you have 100x more consumers than producers on your service, you need to do this analysis separately for each group!)

As you hire employees, make sure they are organized into logical units that can work as independently of each other as possible (doing three 1,000 piece puzzles is only 3x the work of one 1,000 piece puzzle).

Architect your software so that component services are loosely coupled via APIs.

What’s your favorite strategy for avoiding growth in complexity in business and/or in life?

Tuesday, January 1, 2019 - 10:16am

It is easy to feel pessimistic at the end of 2018. CO2 emissions are still climbing rapidly and reached an all time high in 2018, with severe weather events accelerating globally. Facebook and Twitter continue to be used for manipulation and their approaches to moderation are just as problematic. And the political response to all of this is largely one of chaos dominated by strongmen politicians, including the recent election of Bolsonaro in Brazil.

So how can one stay optimistic? One way is to look at things that happened in 2018 that can be seen as early signs of positive change. Signs that we can and will do better over time. Here are just some examples:

Climate

We have evidence that when we get our act together on an environmental issue, a recovery is possible. While it is an admittedly smaller-scale problem, the ozone layer is on track to a complete recovery. The aggregate growth in CO2 emissions hides real progress that’s being made, such as the UK going for 1,000 hours without coal or global deployment of solar and wind energy reaching 1 TW in capacity. In the US, the Tesla Model 3 became the fastest-selling car (by revenue) and China is leading the world in electric vehicle sales with a commitment to going all electric by 2040.

Networks

It also turns out that we do not need to be slaves to online networks. In 2018 Apple released Screen Time as part of iOS 12 and Google released Digital Wellbeing for Android to help people track and limit their usage of apps like Instagram. 2018 was also the year when Facebook engagement in the United States started to decrease for the first time (important footnote: Instagram, which also belongs to Facebook, is still growing). Regulators globally started to take more serious interest in online networks in 2018, including a US congressional hearing and an EU hearing, among many other inquiries and court cases.

Politics

The congressional class elected in 2018 is the most diverse ever elected, with a record number of women entering politics. Voter turnout in the midterm elections was much higher than in the last three decades, including many more young voters. In general, young people started to engage more in politics in 2018, including organizing the March for Our Lives. We are also starting to make long overdue improvements to the democratic process, starting at the state level. Maine carried out its first ranked-choice voting in the midterms and several states including Colorado, Michigan and Missouri adopted anti-gerrymandering amendments.

For 2019 let’s continue to build the momentum of these positive developments. And in that spirit: Happy New Year!

PS If you have developments from 2018 that give you reason for optimism please share them in the comments or on this Twitter thread.

Wednesday, December 19, 2018 - 2:38pm

I have written fairly extensively here on Continuations about climate change, including several times as part of Uncertainty Wednesday. I have also previously stated my support for geo-engineering research as far back as 2009 and again more recently in 2016. A common objection to geo-engineering is that the earth is a complex highly non-linear system. So the argument goes, anything we might do will likely be a cure that’s worse than the disease.

There are two flaws with this argument. First, we are already engaged in geo-engineering on a global scale. That’s exactly what our massive industrial emissions of CO2 are. So to argue that we should not geo-engineer is to really say we should not engage in “additional geo-engineering” (it really doesn’t matter that our initial “experiment” was inadvertent, we are fully engaged in it nonetheless).

The second flaw with the argument is that the most interesting geo-engineering proposals all draw on existing natural processes. This is true for both the dispersal of aerosols to reduce the amount of solar radiation entering the atmosphere and for iron fertilization to spur the growth of phytoplankton as a way to turn atmospheric CO2 into oxygen. Both of these processes have occurred naturally many times in the past as a result of volcanic eruptions.

Now does that mean we know exactly what will happen if we recreate eruption conditions? No, because human made alternatives will be slightly different. But that is why it is all the more important to run experiments both in labs and in situ and observe what happens so that we can come as close as possible to the naturally occurring conditions. We have already donated some money to lab research on stratospheric aerosols and I would be excited to do the same for ocean fertilization.

If you don’t think this is urgent, I want to leave you with two charts that show what is happening to emissions growth:

And here is a breakdown by country

So while it is great that there has been some progress made with the recent UN agreement in Poland, we need to be prepared that it is too little, too late, and that we will need geo-engineering instead.

Tuesday, December 18, 2018 - 11:34am

NOTE: I have been posting excerpts from my book World After Capital. Currently we are on the Informational Freedom section and last week’s post was on being represented by a bot. Today’s excerpt looks at rolling back copyright.

Limiting the Limits to Sharing and Creating

Once we have fought back geographical and prioritization limits and have bots in place so that all users can meaningfully control their own interactions with the global knowledge network, we still come up against limits that restrict which information you can share and what you can create based on how you obtained the information. We’ll first look at copyright and patent laws and suggest policies for reducing how much these limit the knowledge loop. Then we’ll turn to confidentiality and privacy laws.

Earlier I remarked how expensive it was to make a copy of a book when human beings literally had to copy it one letter at a time. Eventually we invented movable type and the printing press. Together the two provided for much faster and cheaper reproduction of information. Even back then, governments and also the church saw this as a threat to their authority. In England, the Licensing of the Press Act of 1662 predated modern attempts to censor the web by more than 300 years: if you operated a printing press and wanted the right to make copies, you needed the government’s approval [97]. You received it in exchange for agreeing to censor content critical of the government or that ran counter to church teachings. And that’s the origin of copyright. It is the right to make copies in return for agreeing to censorship.

Over time, as economies grew and publishing companies emerged as business enterprises, copyright became commercially meaningful, less as an instrument of government control and more as a source of profit. The logic runs like this: “If I have the copyright to a specific material, then you cannot make copies of it, which means that I essentially have a monopoly in providing this content. I am the only one allowed to produce and sell copies of it.”

Legitimating this shift was the idea that in order to get content produced in the first place, incentives needed to exist for the creators of content, just as incentives needed to exist for people to create tangible or material goods. If you own your factory, then you will invest in it because you get to keep the benefits from those improvements. Similarly, the thinking goes, if you are working on a book, you should own the book so that you have an incentive to write it in the first place and improve it over time through revisions.

Over time the holders of copyrights have worked to strengthen their claims and extend their reach. For instance, with the passing of The Copyright Act of 1976, the requirement to register a copyright was removed. Instead, if you created content you automatically had copyright in it [98]. Then in 1998 with passage of the Copyright Term Extension Act, the years for which you had a copyright were extended from 50 to 70 years beyond the life of the author. This became known as the “Mickey Mouse Protection Act,” because Disney had lobbied the hardest for it, having built a very large and profitable business based on protected content, and mindful that a number of its copyrights were slated to expire [99].

More recently, copyright lobbying has attempted to interfere with the publication of content on the Internet through legislation such as PIPA and SOPA, and through the TPP. In these latest expansion attempts, the conflict between copyright and the digital knowledge loop becomes especially clear. Copyright severely limits what you can do with content, essentially down to consuming the content. It dramatically curtails your ability to share it and create other works that use some or all of the content. Some of the more extreme examples include takedowns of videos from YouTube that used the Happy Birthday song, which, yes, was copyrighted until recently.

From a societal standpoint, given digital technology, it is never optimal to prevent someone from listening to a song or watching a baseball game once the content exists. Since the marginal cost of accessing it is zero, the world is better off if that person gets just a little bit of enjoyment from that content. And if that person turns out to be inspired and write an amazing poem that millions read, well then the world is a lot better off.

Now, you might say, it’s all well and good that the marginal cost for making a copy is zero, but what about all the fixed and variable costs that go into making content? If all content were to be free, then where would the money come from for producing any of it? Don’t we need copyright to give people the incentive to produce content in the first place?

Some degree of copyright is probably needed, especially for large-scale projects such as movies. Society may have an interest in seeing $100 million blockbuster films being made, and it may be that nobody will make them if, in the absence of copyright protection, they aren’t economically viable. Yet here the protections should be fairly limited (for instance, you shouldn’t be able to take down an entire site or service just because it happens to contain a link to a pirated stream of your movie). More generally, I believe copyright can be dramatically reduced in its scope and made much more costly to obtain and maintain. The only automatic right accruing to content should be one of attribution. The reservation of additional rights should require a registration fee, because you are asking for content to be removed from the digital knowledge loop.

Let’s take music as an example. Musical instruments were made as far back as 30,000 years ago, pre-dating any kind of copyright by many millennia. Even the earliest known musical notation, which marks music’s transition from information to knowledge (again, defined as something that can be maintained and passed on by humans over time and distance), is around 3,400 years old [100]. Clearly people made music, composed it, shared it long before copyright existed. In fact, the period during which someone could make a significant amount of money making and then selling recorded music is extraordinarily short, starting with the invention of the gramophone in the 1870s and reaching its heyday in 1999, the year that saw the biggest profits in the music industry [101].

During the thousands of years before this short period, musicians made a living either from live performances or through patronage. If copyrighted music ceased to exist tomorrow, people would still compose, perform, and record music. And musicians would make money from live performances and patronage, just as they did prior to the rise of copyright. Indeed, as Steven Johnson found when he recently examined this issue, that’s already what is happening to some degree: “the decline in recorded-music revenue has been accompanied by an increase in revenues from live music… Recorded music, then, becomes a kind of marketing expense for the main event of live shows” [102]. Many musicians have voluntarily chosen to give away digital versions of their music. They release tracks for free on Soundcloud or YouTube and raise money to make music from performing live and/or using crowdfunding methods such as Kickstarter and Patreon.

Now imagine a situation where the only automatic right accruing to an intellectual work was one of attribution. Anyone wanting to copy or distribute your song in whole or in part has to credit you. Such attribution can happen digitally at zero marginal cost and does not inhibit any part of the knowledge loop. Attribution imposes no restrictions on learning (making, accessing, distributing copies), on creating derivative works, and on sharing those. Attribution can include reference to who wrote the lyrics, who composed the music, who played which instrument and so on. Attribution can also include where you found this particular piece of music (i.e., giving credit to people who discover music or curate playlists). This practice is already becoming more popular using tools such as the Creative Commons License, or the MIT License often used for attribution in open source software development.

Now, what if you’re Taylor Swift and you don’t want others to be able to use your music without paying you? Well, then you are asking for your music to be removed from the knowledge loop, thus removing all the benefits that loop confers upon society. So you should be paying for that right, which not only represents a loss to society but will be costly to enforce. I don’t know how big the registration fee should be — that’s something that will require further work — but it should be a monthly or annual fee, and when you stop paying it, your work should revert back to possessing attribution-only rights.

Importantly, in order to reserve rights, you should have to register your music with a registry, and some part of the copyright fee would go towards maintenance of these registries. Thanks to blockchain technology, competing registries can exist that all use the same global database. The registries themselves would be free for anyone to search, and registration would involve a prior search to ensure that you are not trying to register someone else’s work. The search could and should be built in a way so that anyone operating a music sharing service, such as Spotify or Soundcloud, can trivially implement compliance to make sure they are not freely sharing music that has reserved rights.
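To sketch what such a registry lookup might look like (a toy model of my own, not an actual design): works are keyed by a content hash, default to attribution-only, and reserving rights requires a registration with a paid-through date that any sharing service can check:

```python
# Toy sketch of the attribution-first model: every work defaults to attribution
# only; reserving more rights requires a paid, renewable registration that a
# sharing service can query before distributing.
import hashlib

registry = {}   # content hash -> registration record (stand-in for the shared database)

def content_key(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register(data: bytes, creator: str, reserved: list, paid_through: str):
    key = content_key(data)
    if key in registry:
        raise ValueError("already registered - the prior search failed")
    registry[key] = {"creator": creator, "reserved": reserved, "paid_through": paid_through}

def rights_for(data: bytes, today: str) -> dict:
    rec = registry.get(content_key(data))
    if rec is None or rec["paid_through"] < today:   # never registered, or fee lapsed
        return {"creator": rec["creator"] if rec else None, "reserved": []}  # attribution only
    return rec

song = b"...audio bytes of the recording..."
register(song, creator="Example Artist", reserved=["commercial_use"], paid_through="2020-01-01")
print(rights_for(song, today="2019-06-01"))   # rights still reserved
print(rights_for(song, today="2021-06-01"))   # fee lapsed: back to attribution-only
```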

It would even be possible to make the registration fee dependent on how many rights you want to retain. All of this could be modeled after the wildly successful Creative Commons licenses. For instance, your fee might decrease if you allow non-commercial use of your music and also allow others to create derivative works. The fee might increase significantly if you want all your rights reserved. The same or similar systems could be used for all content types, including text, images and video.

Critics might object that the registration I’m proposing imposes a financial burden on creators. It is important to remember the converse: Removing content from the knowledge loop imposes a cost on society. And enforcing this removal, for instance by finding people who are infringing and imposing penalties on them, imposes additional costs on society. For these reasons, asking creators to pay is fair, especially if creators’ economic freedom is already assured by a Universal Basic Income. We have generated so much economic prosperity that nobody needs to be a starving artist anymore!

Universal Basic Income also helps us dismantle another argument frequently wielded in support of excessive copyright: Employment at publishers. The major music labels combined currently employ roughly 17,000 people [103] [104] [105]. When people propose limiting the extent of copyright, others point to the potential loss of these jobs. Never mind that the existence of this employment to some degree reflects the cost to society from having copyright. Owners, managers and employees of music labels are after all not the creators of the music.

Before turning to patents, let me point out one more reason why a return to a system of paid registration of rights makes sense. None of us creates intellectual works in a vacuum. Any author who writes a book has read lots of writing by other people. Any musician has listened to tons of music. Any filmmaker has watched lots of movies. Much of what makes art so enjoyable these days is the vast body of prior art that it draws upon and can explicitly or implicitly reference. There is no “great man” or woman who creates in a vacuum and from scratch. We are all part of the knowledge loop that has already existed for millennia.

Saturday, December 15, 2018 - 10:26am

Our twins are about to wrap up their first semester and one thing is clear: 30 years after I went, and 20 years after the web exploded, college has barely changed. Sure, students all have smartphones and laptops, but they also carry around big and heavy textbooks, go to lectures with frontal presentation from professors and take exams in blue books. They go for four years with maybe a semester (or possibly even a year) abroad if they are lucky. They declare a major at some point and the majority of students pick a traditional discipline.

Now I am sure that my n = 2 is not enough of a sample and there may be schools out there that are doing something dramatically new already, but clearly there is a ton of room for innovation. There is also a huge need for it, with over $1 trillion in student debt and many students not able to earn back the expense of college. I am convinced that 30 years from now, higher education will have changed significantly. This strikes me as an instance where it has been easy to overestimate the rate of change but where it will turn out to be just as easy to underestimate the ultimate degree of change.

What are some of the things we might see? The biggest change I expect to see over time is a move away from four years in one location. There has been some pickup in two-year programs, which is a start. But much further unbundling will likely occur and Udacity’s Nanodegrees are an interesting innovation in that regard.

Another example of what the future may bring comes from a non-profit, University of the People, which Susan and I have supported. University of the People is US-accredited but tuition-free. While initially financed from donations, the model is now self-sustaining based on assessment fees. This works because the university has lots of volunteers and no physical infrastructure (students learn via online courses).

I am excited that USV portfolio company Top Hat is building some of the ingredients for a new experience. They provide a collaborative content authoring platform on which professors can work together to create engaging interactive content. They also allow for an engaging in-classroom experience that gives professors data on who among their students is coming along easily and who is falling behind. These are ingredients to a different learning experience and when I attended Top Hat’s Engage conference earlier this year, I was inspired by the way some professors are using the tools in really innovative ways.

If you are aware of a college or a startup that is doing something really innovative in higher education, I would love to know. We need lots of experimentation here to find a new system.

Wednesday, December 12, 2018 - 7:16pm

A startup founder I know likes to say that their leadership style is “frequently wrong, but never in doubt.” Often that expression is applied as a critique, such as in Cheryl Wheeler’s song Driving Home, but the founder meant it as a positive model along the lines of the idea that even a bad decision is better than no decision. Given the high degree of uncertainty inherent in startups, how to lead in its presence is one of the crucial founder/CEO challenges. So should a leader share their doubts about a course of action with the team?

That framing of the question has an implicit assumption: that the leader has doubts to begin with and hence needs to make a decision whether to share those or not. To some this may seem like a preposterous question; after all, who doesn’t have doubts? Only an overly sure fool would seem not to. But the word doubt has a lot of connotations, including lack of confidence and even distrust. So what do we even mean by asking about doubt and sharing it?

To help narrow this down, I therefore want to use other words and distinguish between “second guessing” and “re-evaluating.” The former is questioning a decision without material new information. The latter is revisiting a decision after material new information has been obtained. It is second guessing that is destructive for morale, because it not only calls the decision into question but also undermines the legitimacy of the decision making process itself. As a leader you should keep any second guessing strictly to yourself.

Re-evaluating on the other hand is healthy but requires a good decision making process. In particular, there has to be a relatively clear way of assessing whether something is in fact material new information. There is a famous quote, often attributed to Keynes: when the facts change, I change my opinion – what do you do? If you have a good process for making decisions then it will be quite clear whether something is a material new fact and the team will be able to be quite dispassionate about re-evaluating the decision.

So as a good exercise, next time you feel doubt about a decision, ask yourself if you are second guessing or if you are re-evaluating. And if you find yourself second guessing a lot, then it likely says something about problems with the decision making process (and potentially about your own fears).

Monday, December 10, 2018 - 5:05pm

NOTE: I have been posting excerpts from my book World After Capital. Currently we are on the Informational Freedom section and the previous excerpt was on Internet Access. Today looks at the right to be represented by a bot (code that works on your behalf).

Bots for All of Us

Once you have access to the Internet, you need software to connect to its many information sources and services. When Sir Tim Berners-Lee first invented the World Wide Web in 1989 to make information sharing on the Internet easier, he did something very important [95]. He specified an open protocol, the Hypertext Transfer Protocol or HTTP, that anyone could use to make information available and to access such information. By specifying the protocol, Berners-Lee opened the way for anyone to build software, so-called web servers and browsers that would be compatible with this protocol. Many did, including, famously, Marc Andreessen with Netscape. Many of the web servers and browsers were available as open source and/or for free.

The combination of an open protocol and free software meant two things: Permissionless publishing and complete user control. If you wanted to add a page to the web, you didn’t have to ask anyone’s permission. You could just download a web server (e.g. the open source Apache), run it on a computer connected to the Internet, and add content in the HTML format. Voila, you had a website up and running that anyone from anywhere in the world could visit with a web browser running on his or her computer (at the time there were no smartphones yet). Not surprisingly, content available on the web proliferated rapidly. Want to post a picture of your cat? Upload it to your webserver. Want to write something about the latest progress on your research project? No need to convince an academic publisher of the merits. Just put up a web page.

People accessing the web benefited from their ability to completely control their own web browser. In fact, in the Hypertext Transfer Protocol, the web browser is referred to as a “user agent” that accesses the Web on behalf of the user. Want to see the raw HTML as delivered by the server? Right click on your screen and use “view source.” Want to see only text? Instruct your user agent to turn off all images. Want to fill out a web form but keep a copy of what you are submitting for yourself? Create a script to have your browser save all form submissions locally as well.

Over time, popular platforms on the web have interfered with some of the freedom and autonomy that early users of the web used to enjoy. I went on Facebook the other day to find a witty note I had written some time ago on a friend’s wall. It turns out that Facebook makes finding your own wall posts quite difficult. You can’t actually search all the wall posts you have written in one go; rather, you have to go friend by friend and scan manually backwards in time. Facebook has all the data, but for whatever reason, they’ve decided not to make it easily searchable. I’m not suggesting any misconduct on Facebook’s part—that’s just how they’ve set it up. The point, though, is that you experience Facebook the way Facebook wants you to experience it. You cannot really program Facebook differently for yourself. If you don’t like how Facebook’s algorithms prioritize your friends’ posts in your newsfeed, then tough luck, there is nothing you can do.

Or is there? Imagine what would happen if everything you did on Facebook was mediated by a software program—a “bot”—that you controlled. You could instruct this bot to go through and automate for you the cumbersome steps that Facebook lays out for finding past wall posts. Even better, if you had been using this bot all along, the bot could have kept your own archive of wall posts in your own data store (e.g., a Dropbox folder); then you could simply instruct the bot to search your own archive. Now imagine we all used bots to interact with Facebook. If we didn’t like how our newsfeed was prioritized, we could simply ask our friends to instruct their bots to send us status updates directly so that we can form our own feeds. With Facebook on the web this was entirely possible because of the open protocol, but it is no longer possible in a world of proprietary and closed apps on mobile phones.

Although this Facebook example might sound trivial, bots have profound implications for power in a networked world. Consider on-demand car services provided by companies such as Uber and Lyft. If you are a driver today for these services, you know that each of these services provides a separate app for you to use. And yes you could try to run both apps on one phone or even have two phones. But the closed nature of these apps means you cannot use the compute power of your phone to evaluate competing offers from the networks and optimize on your behalf. What would happen, though, if you had access to bots that could interact on your behalf with these networks? That would allow you to simultaneously participate in all of these marketplaces, and to automatically play one off against the other.

Using a bot, you could set your own criteria for which rides you want to accept. Those criteria could include whether a commission charged by a given network is below a certain threshold. The bot, then, would allow you to accept rides that maximize the net fare you receive. Ride sharing companies would no longer be able to charge excessive commissions, since new networks could easily arise to undercut those commissions. For instance, a network could arise that is cooperatively owned by drivers and that charges just enough commission to cover its costs. Likewise, as a passenger, using a bot would allow you to simultaneously evaluate the prices of different car services and choose the service with the lowest price for your current trip. The mere possibility that a network like this could exist would substantially reduce the power of the existing networks.
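Here is a sketch of what such a driver-side bot could look like (the networks, offers and acceptance criterion are all made up, since today’s apps expose nothing of the sort):

```python
# Hypothetical driver-side bot: collect offers from several ride networks,
# filter out those whose commission exceeds my threshold, and accept the one
# with the highest net fare.
from dataclasses import dataclass

@dataclass
class Offer:
    network: str        # e.g. "bigride_a", "driver_coop" - all hypothetical
    gross_fare: float   # what the rider would pay
    commission: float   # fraction the network keeps

    @property
    def net_fare(self) -> float:
        return self.gross_fare * (1 - self.commission)

MAX_COMMISSION = 0.10   # my acceptance criterion: ignore networks taking more than 10%

def choose(offers):
    acceptable = [o for o in offers if o.commission <= MAX_COMMISSION]
    return max(acceptable, key=lambda o: o.net_fare, default=None)

offers = [
    Offer("bigride_a", gross_fare=22.0, commission=0.25),
    Offer("bigride_b", gross_fare=21.0, commission=0.20),
    Offer("driver_coop", gross_fare=20.0, commission=0.05),
]
print("accept:", choose(offers))   # the low-commission cooperative wins on net fare
```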

We could also use bots as an alternative to anti-trust regulation to counter the overwhelming power of technology giants like Google or Facebook without foregoing the benefits of their large networks. These companies derive much of their revenue from advertising, and on mobile devices, consumers currently have no way of blocking the ads. But what if they did? What if users could change mobile apps to add Ad-Blocking functionality just as they can with web browsers?

Many people decry ad-blocking as an attack on journalism that dooms the independent web, but that’s an overly pessimistic view. In the early days, the web was full of ad-free content published by individuals. In fact, individuals first populated the web with content long before institutions joined in. When they did, they brought with them their offline business models, including paid subscriptions and of course advertising. Along with the emergence of platforms such as Facebook and Twitter with strong network effects, this resulted in a centralization of the web. More and more content was produced either on a platform or moved behind a paywall.

Ad-blocking is an assertion of power by the end-user, and that is a good thing in all respects. Just as a judge recently found that taxi companies have no special right to see their business model protected, neither do ad-supported publishers [96]. And while in the short term this might prompt publishers to flee to apps, in the long run it will mean more growth for content that is paid for by end-users, for instance through a subscription, or even crowdfunded (possibly through a service such as Patreon).

To curtail the centralizing power of network effects more generally, we should shift power to the end-users by allowing them to have user agents for mobile apps, too. The reason users don’t wield the same power on mobile is that native apps relegate end-users once again to interacting with services using only their eyes, ears, brains and fingers. No code can execute on their behalf, while the centralized providers use hundreds of thousands of servers and millions of lines of code. Like a web browser, a mobile user agent could do things such as strip ads, keep copies of my responses to services, let me participate simultaneously in multiple services (and bridge those services for me), and so on. The way to help end-users is not to have government smash big tech companies, but rather for government to empower individuals to have code that executes on their behalf.

What would it take to make bots a reality? One approach would be to require companies like Uber, Google, and Facebook to expose all of their functionality, not just through standard human usable interfaces such as apps and web sites, but also through so-called Application Programming Interfaces (APIs). An API is for a bot what an app is for a human. The bot can use it to carry out operations, such as posting a status update on a user’s behalf. In fact, companies such as Facebook and Twitter have APIs, but they tend to have limited capabilities. Also, companies presently have the right to control access so that they can shut down bots, even when a user has clearly authorized a bot to act on his or her behalf.

Why can’t I simply write code today that interfaces on my behalf with, say, Facebook? After all, Facebook’s own app uses an API to talk to their servers. Well, in order to do so, I would have to “hack” the existing Facebook app to figure out what the API calls are and also how to authenticate myself to those calls. Unfortunately, there are three separate laws on the books that make those necessary steps illegal.

The first is the anti-circumvention provision of the DMCA. The second is the Computer Fraud and Abuse Act (CFAA). The third is the legal construction that by clicking “I accept” on a EULA (End User License Agreement) or a set of Terms of Service I am actually legally bound. The last one is a civil matter, but criminal convictions under the first two carry mandatory prison sentences.

So if we were willing to remove all three of these legal obstacles, then hacking an app to give you programmatic access to systems would be possible. Now people might object, saying those provisions were created in the first place to solve important problems. That’s not entirely clear though. The anti-circumvention provision of the DMCA was created specifically to allow the creation of DRM systems for copyright enforcement. So what you think of this depends on what you believe about the extent of copyright (a subject we will look at in the next section).

The CFAA too could be tightened up substantially without limiting its potential for prosecuting real fraud and abuse. The same goes for what kind of restriction on usage a company should be able to impose via a EULA or a TOS. In each case if I only take actions that are also available inside the company’s app but just happen to take these actions programmatically (as opposed to manually) why should that constitute a violation?

But, don’t companies need to protect their encryption keys? Aren’t “bot nets” the culprits behind all those so-called DDOS (distributed denial of service) attacks? Yes, there are a lot of compromised machines in the world, including set top boxes and home routers that some are using for nefarious purposes. Yet that only demonstrates how ineffective the existing laws are at stopping illegal bots. Because those laws don’t work, companies have already developed the technological infrastructure to deal with the traffic from bots.

How would we prevent people from adopting bots that turn out to be malicious code? Open source seems like the best answer here. Many people could inspect a piece of code to make sure it does what it claims. But that’s not the only answer. Once people can legally be represented by bots, many markets currently dominated by large companies will face competition from smaller startups.

Legalizing representation by a bot would eat into the revenues of large companies, and we might worry that they would respond by slowing their investment in infrastructure. I highly doubt this would happen. Uber, for instance, was recently valued at $50 billion. The company’s “take rate” (the percentage of the total amount paid for rides that they keep) is 20%. If competition forced that rate down to 5%, Uber’s value would fall to $10 billion as a first approximation. That is still a huge number, leaving Uber with ample room to grow. As even this bit of cursory math suggests, capital would still be available for investment, and those investments would still be made.

That’s not to say that no limitations should exist on bots. A bot representing me should have access to any functionality that I can access through a company’s website or apps. It shouldn’t be able to do something that I can’t do, such as pretend to be another user or gain access to private posts by others. Companies can use technology to enforce such access limits for bots; there is no need to rely on regulation.

Even if I have convinced you of the merits of bots, you might still wonder how we might ever get there from here. The answer is that we can start very small. We could run an experiment with the right to be represented by a bot in a city like New York. New York’s municipal authorities control how on demand transportation services operate. The city could say, “If you want to operate here, you have to let drivers interact with your service programmatically.” And I’m pretty sure, given how big a market New York City is, these services would agree.

Friday, December 7, 2018 - 5:55pm

When Brad Smith from Microsoft called for the regulation of facial recognition technology in July, I was concerned about where that might go, as it could easily result in stifling innovation. I was therefore relieved to see the principles that Microsoft put forth yesterday, which are for the most part quite sensible.


In particular, I agree with and strongly support the due process suggestion on government use of facial recognition technology. Surveillance of an individual using facial recognition should require a court order.

I am also a big fan of requiring an API to enable third party testing. This is in fact the first instance I am aware of in which a large tech company proposes such a requirement, something I have written about frequently before and which is a central part of what I call “Informational Freedom” in my book World After Capital. A great approach here would be for an organization such as NIST to publish a reference data set against which all facial recognition systems could be tested.

The only section of the Microsoft proposal that I think is somewhat underspecified and potentially subject to bad regulation is titled “Ensuring meaningful human review.” The goal of this section is laudable, which is to require a human in the loop for high stakes decisions instead of operating fully automated. But the criteria for when that might apply are broad and vague and could wind up encompassing a lot of the positive use cases. I would suggest limiting this part of the proposal to the exercise of government power.

Overall this is an incredibly thoughtful contribution from a technology leader to the discussion of how we can use our new capabilities for good.

Thursday, December 6, 2018 - 12:53pm

In investing there is uncertainty about returns. Some investments do well, others do poorly. But that is not the only risk that investors are concerned about when they are investing professionally on behalf of others. There is also the issue of perception: it is one thing not to make money in a sector, it is another not to make money in that sector when everyone else appears to be making money in it. Similarly, it is one thing to lose money on a trade, it is another to lose money on a trade that people have tried many times before and is now widely “known” to be a money-losing trade.

In each case the investor is not just taking return risk but also perception risk. If the others are right, then not only will returns be below the benchmarks but there is also the question: why did you think you were smarter than everyone else? And, well, nobody really wants to hear that. Beyond a bruised ego, the perception risk will eventually also impact one’s ability to raise money for a fund. Why? Because most of the money put into funds is put there by people who are also professional investors and hence face similar perception risk!

I believe perception risk explains why there is so much herding into popular sectors and why, conversely, some sectors go underfunded for long periods of time after big losses have been incurred. For example, in 2001, Brad and I, together with a mutual friend, tried to raise a fund on an investment program roughly similar to what eventually became USV. Nobody wanted to give us money. I remember one meeting with someone at Goldman Sachs particularly well. After we had explained how the next value creation in the Internet would be at the application level (because so much had gone into infrastructure during the dotcom bubble), the person we were pitching looked at us and said “So you are saying you will invest in shitty little companies?”

It will be interesting to see how this plays out in crypto now. Longtime crypto investors like to point out that Bitcoin has had multiple previous big corrections. While that is correct from a return risk perspective, it fails to account for perception risk. None of the prior corrections had remotely the same level of public visibility. So to think that institutional investors will be piling in right now is to ignore perception risk. To invest now means taking both return risk and perception risk. That’s why climbing out of the winter of the burst dotcom bubble took time and that’s why the same is likely to be true for crypto.

Tuesday, December 4, 2018 - 12:47pm

NOTE: I have been posting excerpts from my book World After Capital. The previous section introduced the concept of Informational Freedom. Today looks specifically at internet access.

Access to the Internet

On occasion, the Internet has come in for derision from those who claim it is only a small innovation compared to, say, electricity or vaccinations. Yet it is not small at all. If you want to learn how electricity or vaccinations work, the Internet suddenly makes that possible for anyone, anywhere in the world. Absent artificial limitations re-imposed on it, the Internet provides the means of access to and distribution of all human knowledge—including all of history, art, music, science, and so on—to all of humanity. As such, the Internet is the crucial enabler of the digital Knowledge Loop and access to the Internet is a central aspect of Informational Freedom.

At present, over 3.5 billion people are connected to the Internet, and we are connecting over 200 million more every year [88]. This tremendous growth has become possible because the cost of access has fallen so dramatically. A capable smartphone costs less than $100 to manufacture, and in places with strong competition 4G bandwidth is provided at prices as low as $8 per month (this is a plan in Seoul that provides 500 MB at 4G speeds; a 2GB plan is $17 per month) [89] [90].

Even connecting people in remote places is getting much cheaper, as the cost for wireless networking is coming down and we are building more satellite capacity. For instance, there is a project underway that connects rural communities in Mexico for less than $10,000 in equipment cost per community. At the same time in highly developed economies such as the U.S., ongoing technological innovation, such as MIMO wireless technology, will further lower prices for bandwidth in dense urban areas [91].

All of this is to say that even at relatively low levels, a UBI will cover the cost of access to the Internet, provided that we keep innovating and have highly competitive and/or properly regulated access markets. This is a first example of how the three different freedoms mutually reinforce each other: Economic Freedom allows people to access the Internet, which is the foundation for Informational Freedom.

As we work to give everyone affordable access to the Internet, we still must address other limitations to the flow of information on the Internet. In particular, we should oppose restrictions on the Internet imposed by either our governments or our Internet Service Providers (ISPs, the companies we use to get access to the Internet). Both of them have been busily imposing artificial restrictions, driven by a range of economic and policy considerations.

One Global Internet

By design, the Internet does not include a concept of geographic regions. Most fundamentally, it constitutes a way to connect networks with one another (hence the name “Internet” or network between networks). Any geographic restrictions that exist today have been added in, often at great cost. For instance, Australia and the UK have recently built so-called “firewalls” around their countries that are not unlike the much better-known Chinese firewall. These firewalls are not cheap. It cost the Australian government about $44 million to build its geographic-based, online perimeter [92]. This is extra equipment added to the Internet that places it under government control, restricting Informational Freedom. Furthermore, as of 2017 both China and Russia have moved to block VPN (Virtual Private Network) services, a tool that allowed individuals to circumvent these artificial restrictions and censorship online [93]. As citizens, we should be outraged that our own governments are spending our money to restrict our Informational Freedom. Imagine, as an analogy, if the government in an earlier age had come out to say “we will spend more taxpayer money so that you can call fewer phone numbers in the world.”

No Artificial Fast and Slow Lanes

The same additional equipment used by governments to re-impose geographic boundaries on the Internet is also used by ISPs to extract additional economic value from customers, in the process distorting access. These practices include paid prioritization and zero rating. To understand them better and why they are a problem, let’s take a brief technical detour.

When you buy access to the Internet, you pay for a connection of a certain capacity. Let’s say that is 10 Mbps (that is 10 Megabits per second). So if you use that connection fully for, say, sixty seconds, you would have downloaded (or uploaded for that matter) 600 Megabits, the equivalent of 15-25 songs on Spotify or SoundCloud (assuming 3-5 Megabytes per song). The fantastic thing about digital information is that all bits are the same. So it really doesn’t matter whether you used this to access Wikipedia, to check out Khan Academy, or to browse images of kittens. Your ISP should have absolutely no say in that. You have paid for the bandwidth, and you should be free to use it to access whatever parts of human knowledge you want.
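
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python; the per-song sizes are simply the 3-5 Megabytes assumed above:

```python
# Back-of-the-envelope check of the numbers above (illustrative only).
link_mbps = 10                      # connection capacity in megabits per second
seconds = 60                        # one minute of full utilization
megabits = link_mbps * seconds      # 600 megabits transferred
megabytes = megabits / 8            # 75 megabytes

for song_mb in (5, 3):              # assumed size per song in megabytes
    print(f"~{megabytes / song_mb:.0f} songs at {song_mb} MB each")
# ~15 songs at 5 MB each, ~25 songs at 3 MB each
```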

That principle, however, doesn’t maximize profit for the ISP. To do so, the ISP seeks to discriminate between different types of information based on consumer demand and the supplier’s ability to pay. Again, this has nothing to do with the underlying cost of delivering those bits. How do ISPs discriminate between different kinds of data? They start by installing equipment that lets them identify bits based on their origin. They then go to a company like YouTube or Netflix and ask them to pay money to the ISP to have their traffic “prioritized,” relative to the traffic from other sources that are not paying. Another form of this manipulation is so-called “zero rating” which is common among wireless providers, where some services pay to be excluded from the monthly bandwidth cap. And if permitted, ISPs will go even a step further: in early 2017 the U.S. Senate voted to allow ISPs to sell customer data including browsing history without prior customer consent [94].

The regulatory solution to this issue goes by the technical and boring name of Net Neutrality. But what is really at stake here is Informational Freedom. Our access to human knowledge should not be skewed by the financial incentives of our ISPs. Why do we need regulation? Why not just switch to another ISP, one that provides neutral access? As it turns out in most geographic areas, especially in the United States, there is no competitive market for Internet access. ISPs either have outright monopolies (often granted by regulators) or they operate in small oligopolies. For instance, in the part of New York City (Chelsea) where I live at the moment, there is just one broadband ISP, with speeds that barely qualify as real broadband.

Over time, technological advances such as wireless broadband and mesh networking may make the Internet access market more competitive. Until then, however, we need regulation to prevent ISPs from limiting our Informational Freedom. This concern is shared by people in diverse geographies. For instance, India recently objected to a plan by Facebook to provide subsidized Internet access that would have given priority to Facebook services.

Sunday, December 2, 2018 - 8:42am

We have entered a new phase of the discussion on what to do about speech on platforms such as Twitter. On the plus side many more people are engaged. On the minus side the calls to treat Twitter as a traditional publisher are growing.

Let me start by repeating something I have written before: many of Twitter’s problems are self-inflicted. In particular, Twitter has messed up its checkmark system and has been woefully slow to add moderation tools. But we shouldn’t base decisions about what to do in the future on Twitter’s current state.

There are many related problems when it comes to speech on a large public platform and often they cut in different directions:

  1. Censorship and suppression of speech
  2. Direct harassment and threatening of individuals
  3. Hate speech
  4. Misinformation and manipulation
  5. Comments that are offensive to someone based on their beliefs
  6. Being trapped in a filter bubble

The idea that there could or should be a single central institution, let alone a commercial company, which, as a benevolent dictator, resolves all of these issues to everyone’s satisfaction is a complete non-starter. Yet that is essentially what Twitter is attempting at the moment and it is, unsurprisingly, failing badly.

Here is just one example of the kind of problems Twitter’s current approach runs into. David Pinsen just had his Twitter account temporarily suspended. I initially connected with David on Twitter years ago; we have since met in person for an interview and stay in touch over Twitter, comments on my blog, and email. Here are the tweets cited in David’s suspension:

Now people might disagree about how useful these tweets are. Some people might even have a negative reaction, such as “why did he have to bring that up?” and those people should be free to mute or block David. But to have Twitter suspend him, even for some time, seems like too much centralized power exercising censorship, i.e. it fails #1 above. The calls for Twitter to be a publisher would massively aggravate this problem. Even a large traditional publisher has a tiny number of writers compared to the millions of voices on Twitter.

What should be done? Well, my preferred go-to answer is to shift more power to the network participants by requiring Twitter (and other scaled services) to have an API. That would allow end users to programmatically create the best version of Twitter and would also make it easier to simultaneously use Twitter and new decentralized alternatives. But since we are likely years away from accomplishing that via new regulations or removing existing regulations, let me set out what could be done within the existing legal framework.

For starters, Twitter should fix its verification system by having it mean one thing only: this account actually belongs to the person whose name appears on the account. Together with some other simple changes, such as requiring verified accounts to use the person’s (or organization’s) name as their name (not their handle) and not allowing verified accounts to change their name without re-verification, this would go a long way toward dealing with misinformation and manipulation. Twitter would also need to make it much easier to actually obtain verification than it did historically (e.g. DM Twitter an image of your driver’s license from the account you want to verify).

To understand the next set of my proposals it is important to differentiate between the existence of a tweet and its visibility. Consider two extreme bookends: a tweet that exists but can only be seen by someone who has the URL for the tweet, and a tweet that is inserted into everyone’s timelines repeatedly so that eventually all users see it. Twitter as a centralized system controls this entire range of visibility.

Twitter should never delete a tweet, unless either the user chooses to delete it or Twitter is forced by law to delete it. The latter should be extremely rare, because even if one country asks for deletion of a tweet, Twitter could choose to make that tweet inaccessible from that geography based on IP or other geolocation technology. As an aside, Twitter could make tweets editable if it kept around all prior versions of a tweet and linked to them from the current version (it could also limit how many characters one can edit per iteration of a tweet and/or how many times one can edit a tweet).
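
As a rough illustration of that aside, here is a minimal sketch of editable tweets that keep every prior version reachable; the class names and the cap on edits are hypothetical, not a description of anything Twitter has built:

```python
from dataclasses import dataclass
from typing import Optional

MAX_EDITS = 3  # hypothetical cap on how many times a single tweet can be edited

@dataclass
class TweetVersion:
    text: str
    previous: Optional["TweetVersion"] = None  # link to the prior version, which is never deleted

@dataclass
class Tweet:
    current: TweetVersion
    edit_count: int = 0

    def edit(self, new_text: str) -> bool:
        """Replace the visible text while keeping every prior version linked from the current one."""
        if self.edit_count >= MAX_EDITS:
            return False
        self.current = TweetVersion(new_text, previous=self.current)
        self.edit_count += 1
        return True

    def history(self):
        """Walk the chain from the newest version back to the original."""
        version = self.current
        while version is not None:
            yield version.text
            version = version.previous

t = Tweet(TweetVersion("Hello wrold"))
t.edit("Hello world")
print(list(t.history()))  # ['Hello world', 'Hello wrold']
```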

Crucially, Twitter should significantly expand the features that let individuals and groups manage the visibility of tweets for themselves. There are already useful features such as muting a conversation or blocking an individual. These could be expanded in ways that allow for delegation. For instance, users should be able to say that they want to subscribe to mute and block lists from other individuals, groups or organizations they trust. One example of this might be that I could choose to automatically block anyone who is blocked by more than x% of the people I follow (where I can choose x), as in the sketch below. Ideally these features could be implemented at the tweet/conversation level and not just the account level.
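
To make the delegation idea concrete, here is a minimal sketch of the “block anyone blocked by more than x% of the people I follow” rule; the data structures and account names are hypothetical:

```python
def delegated_blocks(following, block_lists, threshold=0.5):
    """Return accounts blocked by more than `threshold` of the people I follow.

    following:   set of account ids I follow
    block_lists: dict mapping each account id to the set of accounts it blocks
    threshold:   the x% from the text, chosen by the user
    """
    counts = {}
    for account in following:
        for blocked in block_lists.get(account, set()):
            counts[blocked] = counts.get(blocked, 0) + 1
    cutoff = threshold * len(following)
    return {account for account, n in counts.items() if n > cutoff}

# Example: I follow three accounts; two of them block "spam_bot".
following = {"alice", "bob", "carol"}
block_lists = {"alice": {"spam_bot"}, "bob": {"spam_bot", "troll42"}, "carol": set()}
print(delegated_blocks(following, block_lists, threshold=0.5))  # {'spam_bot'}
```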

The goal here is to retain Twitter as a platform for expression and empower individuals and groups to more clearly shape how they experience Twitter (but without immediately spilling over into what others can see). Now if a tweet is blocked or muted by more and more people belonging to widely different groups (something Twitter can tell through network analysis), Twitter could also gradually dial down the visibility of that tweet, but it would be doing so on the basis of a lot of signal.

There is one problem that this approach does not solve and may in fact worsen, at least initially. And that’s #6 above: people living in a filter bubble. This problem is endemic to any system that lets people shape, directly or indirectly (via algorithms), what they want to see. As it turns out, people generally prefer having their existing beliefs confirmed rather than challenged. Asking any one system to overcome this deeply ingrained human bias is asking a lot. This will require us to work on much broader changes along the lines that I propose in my book World After Capital. Nonetheless, with a delegated muting approach as proposed above, Twitter would have more data than ever that could also be used to selectively raise the visibility of some tweets in what I have called an “Opposing View Reader.”

No solution here will be perfect and it will take many iterations to get better. But to give up on platforms and revert to the publisher model would be a huge mistake.

Wednesday, November 28, 2018 - 2:32pm

I have been skeptical about Turing-complete on-chain computation for a long time. Many early proponents took the position that there is no issue because a mechanism such as Ethereum’s gas limits how long a computation can run. They argued that this means computation will always eventually run out of gas and then stop. But the danger of a program not stopping is not the only thing to worry about in a completely open and interconnected computation system. The more pertinent issue is: what will a new smart contract do to the total system state? With Turing-complete languages, there is one and only one way of finding out, and that is to actually go ahead and run everything.

Another way of saying this is that focusing on the Halting Problem in its most literal sense, as in “Will this program stop?”, is too narrow a view of the issue. As I explain in the Halting Problem post from my old Tech Tuesday series, any question such as “will this program output the number 42?” or “will this program ever execute line 42 of its code?” is not answerable in the general case for Turing-complete languages. For blockchains, an example question is: will a new smart contract cause any existing smart contract to misbehave? And gas limits do not help with answering that question. In a system where any contract can refer to any other contract, that question is not generally answerable for arbitrary Turing-complete contracts other than by executing the system.
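
For readers who want the underlying intuition, here is a minimal sketch of the standard reduction: if a general procedure for “will this program output 42?” existed, it could be used to solve the Halting Problem, which is known to be impossible. The oracle below is purely hypothetical and deliberately unimplemented:

```python
def would_output_42(program, inp):
    """Hypothetical oracle deciding whether program(inp) ever outputs 42."""
    raise NotImplementedError("no such general decision procedure can exist")

def halts(program, inp):
    """If the oracle above existed, this would decide the Halting Problem -- a contradiction."""
    def wrapper(x):
        program(x)   # run the program we are asking about...
        return 42    # ...and output 42 only if it ever finishes
    return would_output_42(wrapper, inp)
```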

Some people have proposed formal verification as an answer. I absolutely believe that smart contract systems should be built in a way that makes formal verification possible. That will help with some problems, such as detecting and preventing certain kinds of bugs that make possible attacks like the one that drained one third of The DAO. But formal verification does not get around the problem that even trivial questions about how the system as a whole will behave with a new contract added cannot be generally answered.

So what is to be done? I am partial to having on-chain computation be non-Turing complete. If done correctly this will not impose a huge limitation on the kind of relatively simple and yet hugely helpful computations one would want to carry out as part of smart contracts, such as taking a conditional action. Such an approach would not only make formal verification dramatically easier, but it would make it so powerful that one could in fact answer questions about the impact of new smart contracts on the system as a whole.
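
As a toy illustration of what such a restricted language might look like, here is a sketch of a contract expressed as a fixed list of condition/action rules with no loops or recursion, so every execution terminates after a bounded number of steps and all paths can be enumerated ahead of time. This is a hypothetical mini-language, not any existing smart contract system:

```python
from typing import Callable, Dict, List, Tuple

# A rule is a (condition, action) pair over the contract state.
Rule = Tuple[Callable[[Dict], bool], Callable[[Dict], Dict]]

def run_contract(rules: List[Rule], state: Dict) -> Dict:
    # Each rule is checked at most once, in order; execution always terminates
    # after len(rules) steps, which is what makes full analysis tractable.
    for condition, action in rules:
        if condition(state):
            state = action(state)
    return state

# Example: release escrowed funds only once both parties have signed.
escrow_rules: List[Rule] = [
    (lambda s: s["buyer_signed"] and s["seller_signed"],
     lambda s: {**s, "funds_released": True}),
]

state = {"buyer_signed": True, "seller_signed": True, "funds_released": False}
print(run_contract(escrow_rules, state))  # funds_released becomes True
```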

Does that mean we have to give up on Turing complete computation entirely for blockchains? No. I believe the right place for Turing complete computation is off-chain. But the blockchain should natively support verification of zero knowledge proof certificates for off-chain Turing complete computation. In this scenario as long as a single off-chain node properly carries out a computation and submits a certificate, then following successful on-chain validation the results of that computation can be used as inputs for smart contracts.
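
Schematically, that flow might look like the sketch below; the proof generation and verification steps are placeholders standing in for a real zero knowledge proof system, which the text does not name:

```python
# Schematic sketch of the off-chain compute / on-chain verify flow described above.
# The "proof" here is a placeholder standing in for a real zero knowledge proof.

def off_chain_compute_and_prove(program, inputs):
    result = program(inputs)                  # arbitrary Turing-complete work, done off-chain
    return {"result": result, "proof": "zk-certificate-placeholder"}

def on_chain_verify(certificate):
    # The chain only checks the (cheap) certificate; it never re-runs the computation.
    return certificate["proof"] == "zk-certificate-placeholder"

certificate = off_chain_compute_and_prove(lambda xs: sum(xs), [1, 2, 3])
if on_chain_verify(certificate):
    # The verified result can now be used as an input to a (non-Turing-complete) smart contract.
    print("accepted result:", certificate["result"])
```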

There is no system today that provides this combination of non-Turing complete smart contracts with native support for on-chain validation of zero knowledge proofs for Turing complete off-chain computation. But projects are starting to experiment with and plan for such an approach and I expect to see some version of this become available over the coming year. 

Monday, November 26, 2018 - 12:24pm

NOTE: I have been posting excerpts from my book World After Capital. The last few excerpts have been about Universal Basic Income as a way of expanding Economic Freedom. Today’s section introduces the concept of Informational Freedom.

Informational Freedom

Can you read any book you want to? Can you listen to all the music that has ever been recorded? Do you have access to any web page at all you wish to consult? Can you easily see your own medical record? Other people’s medical records?

Historically, questions like these would not have made much sense, as copying and distributing information was quite expensive. In the early days of writing, for instance, when humans literally copied text by hand, copies of books were rare, costly, and also subject to copy errors (unintentional or intentional). Few people in the world at that time had access to books, and even if some power had wanted to expand access, it would have been difficult to do so because of the immense cost involved.

In the age of digital information, when the marginal cost of making a copy and distributing it has shrunk to zero, all limitations on digital information are in a profound sense artificial. They involve adding cost back to the system in order to impose scarcity on something that is abundant. As an example, billions of dollars have been spent on trying to prevent people from copying digital music files and sharing them with their friends or the world at large [87].

Why are we spending money to make information less accessible? When information existed only in analog form, the cost of copying and distribution allowed us—to some degree required us—to build an economy and a society grounded on information scarcity. A music label, for instance, had to recruit talent, record in expensive studios, market the music (often by paying for radio airplay), and finally make and distribute physical records. Charging for the records allowed the label to cover its costs and turn a profit. Now individuals can produce music on their laptops and distribute it for free to the entire world: the fixed cost is dramatically lower and the marginal cost of a listen is zero. And with that, the business model of charging per record, per song, or per listen, and the extensive copyright protections required to sustain it, no longer make sense. Despite the ridiculous fight put up by the music industry, in the end we are winding up with listening that is either free (ad supported) or part of a subscription. In either case the marginal listen is free.

Despite this progress in music, we accept many other artificial restrictions on information access and distribution as a given because we, and a couple of generations before us, have grown up with them. This is the only system we know and much of our personal behavior, our public policies and our intellectual inquiries are shaped by what we and our recent ancestors have experienced. To transition into the Knowledge Age, however, we should jettison much of this baggage and strive for dramatically increased informational freedom. This is not unprecedented in human history. Prior to the advent of the printing press, stories and music were passed on largely in an oral tradition or through copying by hand. There were no restrictions on who could tell a story or perform a song.

Just to be clear: Information is not the same as knowledge. Information is a broader concept, including, for instance, the huge amounts of log files generated every day by computers around the world, much of which may never be analyzed. We don’t know in advance what information will turn out to be the basis for knowledge (i.e., information meant for other humans and which humans choose to maintain over time). Hence it makes sense to keep as much information as possible and make access to that information as broad as possible.

In this section we will explore various ways to expand informational freedom, the second important regulatory step to facilitate our transition to a Knowledge Age.

Friday, November 23, 2018 - 5:14pm

Wishing everyone a belated Happy Thanksgiving. If you have some time today or over the weekend, I recommend watching this video of VP Mike Pence talking about the administration’s approach to China.

Then follow it by watching this video by Hank Paulson, who proposes a different approach but seems clearly pessimistic about where this is headed.

The deterioration of US-China relations is a massive tail risk. The recent sell-off in the stock market may be a partial reflection of that, but it doesn’t even begin to capture the potential downside. As an important reminder, China is the largest holder of US sovereign debt, which it could use strategically by selling treasuries. That could drive up US borrowing rates just at a time when the Trump administration’s tax cut is resulting in massive deficits, making borrowing even more expensive.

I have talked to a number of American and Chinese entrepreneurs over the last few months about this. What was most surprising to me was how few seemed to have the possibility of a US-China cold war on their radar as a real risk.

Monday, November 19, 2018 - 12:56pm

NOTE: I have been posting excerpts from my book World After Capital. Today’s section wraps up the section on Universal Basic Income (UBI) by addressing a few common objections and arguing that it is a moral imperative.

Other Objections to UBI

I have already addressed the three biggest objections to UBI by showing that it is affordable, will not result in inflation, and will be positive for the labor market. There are some other common objections that are worth addressing. The foremost of these is a moral objection that people have done nothing to deserve receiving an income. That one is important enough that it merits its own section which follows in a bit and closes out this chapter on Economic Freedom.  

Another objection is that UBI diminishes the value of work in society. The opposite is true, because UBI recognizes how much unpaid work exists in the world, starting with child rearing. We have created a weird situation where the word “work” has become synonymous with getting paid, and we use that to conclude that if you do not get paid for an activity (at least not in an obvious, direct way), then it cannot be work. As an interesting counter to this, Montessori schools use “work” to refer to any “purposeful activity” [Citation needed].

That leads us to a different objection, which is that UBI robs people of a purpose which, the argument goes, is provided by work. But work as the sole source of human purpose is a relatively new view, largely attributable to the Protestant work ethic (which signals its focus on work by its name). Previously, human purpose tended to be much more broadly based in following the precepts of religion, which might include work as one of many commandments, and in being a good member of the community. Put differently, the source of human purpose is subject to redefinition over time and, as I have argued earlier in this book, contribution to the knowledge loop is a better candidate for the future than work.

One other objection that is frequently brought up is that people will simply spend their basic income on alcohol and drugs. This objection is often accompanied by claims that the casino money received by Native Americans is the cause of drug problems among that population. There is no evidence to support this objection or the accompanying claim. None of the UBI pilots and experiments have found a significant increase in drug or alcohol abuse (in the meantime we have, in the absence of UBI, the largest drug epidemic in U.S. history with the opioid crisis). And the research on casino money shows that, contrary to apparently widely held belief, casino money has in fact contributed to declines in obesity, smoking and heavy drinking [Citation needed].

And then there are people who object to UBI not because they think it will not work, but because they claim it is a cynical ploy by the rich to silence the poor, a financial version of the “opium of the people” designed to keep people from rebelling against their situation. This criticism is voiced by some who genuinely believe it but is also used by others as a convenient tool of political division. Whatever the case, the impact of UBI is likely to be the opposite, as was recognized by Thomas Paine (see above). Today, in many parts of the world, including the United States, poor people are effectively shut out from the political process. They are too busy holding down one or more jobs to be able to run for office, to organize, and sometimes even just to vote (in the United States we vote on a weekday and there is no requirement for employers to give employees time off from work to vote).

UBI as a Moral Imperative

Finally, before proceeding to examine Informational Freedom, we should remind ourselves why individuals deserve to have their basic needs taken care of. Why should they have this right by virtue of being born, just as they do the right to breathe air?

None of us did anything to make the air. It was just here. We inherited it from the planet. Nobody ever asks, what did you do to deserve air? None of us alive today did anything to invent electricity. It had already been invented, and we have inherited its benefits. But you might say: electricity costs money and people have to pay for it. True, but they do not pay for the invention of electricity, just for the cost of making it. Yet, nobody asks: what did you do to deserve living in a world where electricity has already been invented? We can substitute many other amazing parts of human knowledge for electricity, such as antibiotics.

Human knowledge is our collective inheritance. We are all incredibly fortunate to have been born into a world where capital is no longer scarce. Using our knowledge to take care of everyone’s basic needs is therefore a moral imperative. And UBI accomplishes that by giving people Economic Freedom, allowing them to exit the Job Loop and thus accelerating the Knowledge Loop that gave us this incredible inheritance in the first place.

Monday, November 12, 2018 - 12:32pm

NOTE: I have been posting excerpts from my book World After Capital. Today’s section looks at the effects a Universal Basic Income (UBI) might have on the labor market.

Impact of UBI on the Labor Market

One of the many attractive features of a UBI is that it doesn’t do away with people’s ability to sell their labor. Suppose someone offers you $5/hour to watch her dog. Under a UBI system you are completely free to accept or reject that proposal. There is no distortion from a minimum wage. The reason we need a minimum wage in the current system is to guard against exploitation. But why does the opportunity for exploitation exist in the first place? Because people do not have an option to walk away from potential employment. With a UBI in place, they will.

The $5 per hour dog sitting example shows why a minimum wage is a crude instrument that results in all sorts of distortions. You might like dogs. You might be able to watch several dogs at once. You might be able to do it while writing a blog post or watching YouTube. Clearly government should have no role in interfering with such a transaction. The same is true, though, for working in a fast food restaurant. If people have a walk-away option, then the labor market will naturally find the right clearing price for how much it takes to get someone to work in, say, a McDonald’s. That could turn out to be $15/hour, or it could turn out to be $5/hour, or it could turn out to be $30/hour.

One frequently expressed concern about UBI is that people would stop working altogether and the labor market would collapse. Prior experiments with UBI, such as the Mincome experiment in Canada, showed that while people somewhat reduced their working hours, there was no dramatic labor shortage [NEED CITATION]. This should not come as a surprise, as people will generally want to earn more than basic income provides, and the price adjustment of labor will make working more attractive. That is especially true because UBI, in conjunction with the income tax change discussed in the previous section, removes the perverse incentive problem of many existing welfare programs, in which people lose their entire benefit when they start to work, thus facing effective tax rates of greater than 100%. With UBI, whatever you earn is incremental to your UBI and you pay the normal marginal tax rate.
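
To see the perverse incentive in numbers, here is a small illustrative comparison; the dollar amounts, cutoff, and tax rate are mine, chosen only to show the shape of the problem, and are not from the book:

```python
def take_home_with_cliff(earnings, benefit=12000, cutoff=10000):
    # Traditional welfare: the entire benefit disappears once earnings cross the cutoff.
    return earnings + (benefit if earnings < cutoff else 0)

def take_home_with_ubi(earnings, ubi=12000, marginal_rate=0.3):
    # UBI: earnings are simply added on top of the UBI and taxed at the normal marginal rate.
    return ubi + earnings * (1 - marginal_rate)

for earnings in (9000, 11000):
    print(earnings, take_home_with_cliff(earnings), take_home_with_ubi(earnings))
# With the cliff, raising earnings from 9,000 to 11,000 *lowers* take-home pay
# (21,000 -> 11,000), an effective tax rate far above 100%; with UBI it always rises.
```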

But what about dirty and dangerous jobs? Will there be a price of labor high enough to motivate anyone to do those? And will the companies that need this labor still be able to stay in business at that higher price? This is exactly where automation comes in: businesses will have a choice between paying people a lot more to do such work, or investing much more heavily in automation. In all likelihood, the answer will be a combination of both. But we should not fear that there is such a thing as an excessive price for labor. Because of the pressures created by technological deflation, we will not return to labor-price induced inflation.

UBI has two other, hugely important impacts on the Labor Market. The first has to do with volunteering. Today there are not enough people cleaning up the environment. Not enough people taking care of the sick and elderly. Not enough teachers. Labor is under-supplied in these sectors because there often is insufficient money behind the demand. For instance, the environment itself has no money and so the demand for clean up relies entirely on donations. As for the elderly, many of them do not have enough savings to afford personal care.

When you have to work pretty much every free hour just to meet your basic needs and/or have no control over your schedule, you cannot effectively volunteer. Providing people with UBI has the potential to vastly increase the number of volunteers. It won’t do this all by itself; we will also require changes in attitude, but historically people have thought differently about volunteering.

UBI’s second big impact on the labor market is to dramatically expand the scope for entrepreneurial activity. A lot of people would like to start a local business, such as a nail salon or a restaurant, but have no financial cushion and so can never quit their job to give it a try. UBI changes that, which is why I sometimes refer to it as “seed money for everyone.” More businesses getting started in a community means more opportunities for fulfilling local employment.

Once they get going some of these new ventures can receive more traditional financing, including bank lending and venture capital, but UBI also has the potential to significantly expand the reach and importance of crowdfunding. If your basic needs are taken care of, you will be much more likely to want to start an activity that has the potential to attract some crowdfunding, such as recording music videos and putting them up on YouTube. Also, if your basic needs are taken care of, you will be much more likely to use a fraction of any income you make to participate in crowdfunding.

Albert Wenger is a partner at Union Square Ventures (USV), a New York-based early stage VC firm focused on investing in disruptive networks. USV portfolio companies include: Twitter, Tumblr, Foursquare, Etsy, Kickstarter and Shapeways. Before joining USV, Albert was the president of del.icio.us through the company’s sale to Yahoo. He previously founded or co-founded five companies, including a management consulting firm (in Germany), a hosted data analytics company, a technology subsidiary for Telebanc (now E*Tradebank), an early stage investment firm, and most recently (with his wife), DailyLit, a service for reading books by email or RSS.