Car Sharing and the Death of Parking

This article was originally written for Vice Motherboard, which holds distribution rights until September 2018.

Rise of parking spaces in Los Angeles

Sometimes the future arrives humbly in our everyday life, until one day we realize its implications. Carsharing is like that—I was ignoring it until I noticed car2go cars popping up around Berlin:


I had tried Zipcar (USA) and Co-wheels (UK) before, but this was different. Zipcar and Co-wheels cars had to be booked for a few hours and then returned to the same spot. Car2go let me book a car by the minute and leave it anywhere in the city zone. When I reached my destination, I could park the car almost anywhere, sometimes in a spot only a Smart would fit, to the enormous joy of the parking-seeking SUV owner behind me. Driving back and forth across the city takes less than an hour, so for the rest of the evening that car2go can be used by other users.

One alternative to carsharing is ridesharing (Uber, Lyft, or similar), but ridesharing is more expensive (you need to pay for a driver) and I will argue that it is just an intermediate step until we have self-driving cars.

Both carsharing and ridesharing solve the biggest problem of cars in the city: utilization. The average private car spends 95 percent of its time parked somewhere, waiting faithfully for you to finish your work, shopping, or a great social time that makes you too intoxicated to operate it.

The death of parking

In comparison, a shared car with 50 percent usage has 10 times better utilization and needs parking only half the time. But, that doesn’t mean that the ideal carsharing city will need half the parking spaces. Surprisingly, carsharing would reduce the number of parking spaces a city needs by more than 10 times.

Let’s calculate for total carsharing (all private cars replaced with shared cars) with 10x better utilization:

                            Private car    Shared car (10x)
Used                        5%             10 x 5% = 50%
Parked                      95%            50%
Number of cars in the city  N              N / 10
Parking places needed       N x 95%        (N / 10) x 50% = N x 5%
Parking reduction                          (N x 95%) / (N x 5%) = 19x

Ideally, if shared cars are used 10x more, we need 10x fewer of them to power the city. But since they also spend less time parked, we need 19x fewer parking spaces!
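
The table's arithmetic can be checked in a few lines of Python (a toy model; the fleet size N is arbitrary and the 10x utilization figure is the one assumed above):

```python
# Toy model of the parking math above.
N = 100_000            # private cars in the city (arbitrary fleet size)
private_parked = 0.95  # a private car is parked 95% of the time

shared_used = 10 * 0.05                           # 10x utilization -> driven 50% of the time
shared_cars = N / 10                              # 10x utilization -> 10x fewer cars
shared_parking = shared_cars * (1 - shared_used)  # spaces occupied at any moment

private_parking = N * private_parked
print(private_parking / shared_parking)  # -> 19.0, i.e. 19x fewer parking spaces
```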

But there is a miscalculation in the above math.

It is questionable whether 50 percent carsharing utilization can be achieved because of rush hours and the suburban commute.

Rush hours mean that most people want to use cars during peak times. Let’s suppose that everybody needs a car within a three-hour peak and that the average non-rush commute lasts 30 minutes (I will explain later why I’m using a non-rush commute). One shared car can then serve 3 h / 0.5 h = 6 commuters during the peak, so we can only have 6x fewer shared cars than private cars, not 10x.
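
That 6x figure comes straight from dividing the peak window by the trip length, which a two-line sketch makes explicit (the window and trip length are the assumptions stated above):

```python
# Peak-hour fleet sizing: how many private cars can one shared car replace?
peak_window_h = 3.0  # everybody wants to travel within a 3-hour peak
trip_h = 0.5         # an average non-rush commute takes 30 minutes

trips_per_car = peak_window_h / trip_h  # one shared car serves this many commuters
print(trips_per_car)                    # -> 6.0, so N/6 shared cars suffice
```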

But an even larger problem is the suburban commute—from suburbia to the city in the morning, and the other way round in the afternoon. The first commuter in the morning leaves a shared car in the wrong place—in the city. This is not such a big problem in Berlin, because people live and work in all neighborhoods of the city. But it is a big problem for American cities because of their typical suburban sprawl. Every morning, the number of shared cars in your cul-de-sac should match the number of morning commuters. Maybe that is one reason Zipcar in the US allows one-way trips only with designated cars and only in Boston, LA, and Denver.

Self-driving cars come to the rescue. They could drive you to the city and then come back to pick up the next commuter. This halves the efficiency, but is still better than leaving cars idly parked. As the original 10x utilization was probably too optimistic, let’s recalculate using 6x and 3x:

                            Shared car (6x)             Shared car (3x)             Shared self-driving car (3x)
Used                        6 x 5% = 30%                3 x 5% = 15%                3 x 5% x 2 = 30%
Parked                      70%                         85%                         70%
Number of cars in the city  N / 6                       N / 3                       N / 3
Parking places needed       (N / 6) x 70% = N x 11.7%   (N / 3) x 85% = N x 28.3%   (N / 3) x 70% = N x 23.3%
Parking reduction           95% / 11.7% = 8.1x          95% / 28.3% = 3.4x          95% / 23.3% = 4.1x

If everybody commutes from suburbia to the city and utilization is only 3x, the city still gets to have 3.4x fewer parking spaces, not bad! With self-driving cars, cities can reclaim even more street space. When they are not needed, an army of self-driving cars can drive themselves to multilevel garages or off-city parking.
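
All three scenarios in the table follow from one small function (a sketch; the deadhead factor of 2 models the self-driving car's empty return trip):

```python
def parking_reduction(utilization_factor, deadhead=1.0):
    """How many times fewer parking spaces a shared fleet needs vs. private cars.

    utilization_factor: how many private cars one shared car replaces
    deadhead: extra driving multiplier (2.0 = self-driving car returns empty)
    """
    private_parked = 0.95                        # a private car is parked 95% of the time
    used = utilization_factor * 0.05 * deadhead  # fraction of time a shared car is driving
    cars = 1 / utilization_factor                # fleet size relative to N private cars
    return private_parked / (cars * (1 - used))

print(round(parking_reduction(6), 1))              # shared car, 6x utilization
print(round(parking_reduction(3), 1))              # shared car, 3x utilization
print(round(parking_reduction(3, deadhead=2), 1))  # shared self-driving car, 3x
```

Running it reproduces the 8.1x, 3.4x, and 4.1x reductions from the table.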

It gets better. If you have ever bought a private car, you probably did a largest-common-denominator calculation—what is the longest trip you will need the car for? Because you go camping twice a year, you commute to work in a large sedan or SUV. Alone. When picking a shared car, you use the lowest common denominator—the smallest car that will get you to your destination. And two Smarts fit in a single parking space.

This is the eureka moment for carsharing and self-driving cars. Most people I talk with think the cities of the future will look much like today’s, except that you will own an electric self-driving car. In my humble opinion, that is like people of the 19th century imagining faster horses.

But wait, there is more

The annihilation of parking lots is just one of the benefits of carsharing:

  • Currently, if you use a private car to travel to a destination, you also need to use it to return. Carsharing cooperates with other modes of transport. Go somewhere with a foldable bicycle, and if it starts to rain, no problem. Book the closest shared car and put the bicycle in the trunk. Go to a bar with a shared car, get tipsy, and book a Lyft ride back home.
  • Fewer parked cars means you spend less time looking for parking. Research shows that on average, 30 percent of the cars in congested downtown traffic are cruising for parking.
  • With a private car, you need to walk to wherever you parked it. In an ideal carsharing city, you just walk out and take the first shared car available outside.
  • Because people are driving smaller shared cars, there is less pollution.
  • If you need a van, a truck, or a limousine, you just find and book one using a smartphone.
  • Insurance and servicing are handled by the carsharing company, not you. Because carsharing companies have huge fleets, they get volume pricing.
  • When your car breaks down, you don’t need a replacement car. Every shared car you see is your “replacement” car.
  • With less need for parking space, through streets can ditch side parking and add two extra traffic lanes.

Not everything about carsharing is perfect. Sometimes the shared car I got wasn’t quite clean—somebody had transported a dog on a passenger seat. But, when I think about it, I didn’t clean my previous private car for months and sometimes it looked like I was transporting pigs with diarrhea, so maybe I shouldn’t complain.

How does the future look now?

Berlin is quite competitive, so we get a small glimpse of the future. Car2go, owned by Daimler AG, originally offered only smart cars. Car2go’s biggest competitor is DriveNow, owned by BMW and Sixt, which offers Minis and BMWs, like this electric i3:

Car2go decided to pimp up its rides, so now you can book a Merc:

Citroen also decided to join the party. The company offers a fleet of mostly electric C-Zeros with Multicity:

Volkswagen got angry that Mercedes and BMW were eating all the cake, so it purchased a 60 percent stake of Greenwheels:

While Sixt is partnering with BMW, Hertz has its own Hertz On Demand, although it is obvious from its website that Hertz is still in a rent-a-car mindset and doesn’t understand how the new thing works.

But why stop at cars? Other vehicles have the same problem; you only use them 5 percent of the time. eMio offers the same sharing concept for electric scooters:

Don’t laugh at the idea of shared scooters. This is a cultural thing—while the ideal transportation vehicle is a sedan in the US and a compact car in Europe, two billion people in Asia consider the scooter a family transport solution. Look at this nice family in Vietnam:

And eMio is not the only one. Just last month, Coup launched a fleet of 200 beautifully designed Gogoro electric shared scooters in Berlin:

Both Coup and eMio have an unusual charging solution: their teams go around the city and swap empty batteries for full ones.

Other carsharing companies have “socially automated” refueling. For example, with car2go you don’t ever have to refuel, but you get 10 free minutes if you fill up a car that has less than a quarter of a tank of gas.

Prices are already reasonable. In my experience, a car2go Smart costs half the price of an Uber in Berlin (which is not the real Uber, to be honest). And prices can go lower with better utilization and economies of scale.

Finally, tourists can rent a NextBike bicycle from 1€ per 30 min.

As you can see, the situation here is quite complicated, and I know what some entrepreneurial readers are thinking. But hold your horses, as there is already an app that displays all of the above on the same map:


Death of traffic jams (and birth of queues)

More radical changes will happen when shared cars become a majority in the city.

Total carsharing can eliminate the traffic jams of rush hour—but that doesn’t mean you won’t have to wait.

Why does a traffic jam happen, anyway? Everybody jumps into their private cars at once and decides to drive along a similar route. Main routes have limited throughput, so you end up queueing at junctions and on the highway. The queue just makes things worse, as it lowers car throughput. It is an expensive system in which you line up in a running car, waiting for your turn. In total carsharing, that can’t happen. Since there are 3x or 6x fewer cars available, there is no way that everybody can just jump in a car and go. Now you don’t wait on a highway, you wait for your turn to get a shared car. I would argue that this is better because:

  • You are going to wait in your home or office (for a car to become available), not on the highway.
  • There is less chance of some route “jamming” and reducing car throughput.

But waiting for shared cars opens two completely new scenarios:

  1. “Shared” carsharing. Imagine that you open a carsharing app of the future and request a car. The app puts you in a waiting queue and says that the estimated waiting time for the next car is 30 minutes. But someone from the same street is going to the same part of town. The app offers to cut your waiting time to 15 minutes if you both share the same car. Since you don’t know the person, the app offers to increase security by enabling a video camera inside the car (it is there anyway, to check whether you left the car clean). You accept the pooled ride, but decline the camera option, as the other person’s profile is rated 4.5 stars. Your car is available in 15 minutes.
  2. “Premium” shared cars. Let’s say you are in a hurry and don’t want to use a carsharing company that tries to maximize car usage. You use a more expensive carsharing company that promises to have a car available in five minutes or the ride is on them. You pay a premium to get somewhere faster. It’s a nice system, although I guess in some posh downtowns everybody will try to use the premium shared cars, in which case you are back to square one. Then you need a “super-premium” car share. Another option is for existing carsharing companies to add surge pricing, but Uber showed that paying 4x more for basically the same service didn’t go over well with customers.

Rebirth of the parking space

If all that space becomes available, cities can reclaim it for public use. This is especially true in Europe, where cities were never designed for cars—to make room for them, something had to be given up. Year by year, streets have been narrowed by side parking, parks have been converted to parking, and new buildings have been constructed with large parking lots next to them. If the majority of the transportation burden falls to shared cars, buildings will just need a “valet” parking area in front. The valet will not be a real person—but your smartphone.

That could dramatically change suburban landscapes, where every business has its own large parking area. But even the dense city grid can be changed. For example, although Barcelona is known as a well-planned city, most streets today are taken by cars. People got excited a few weeks ago when a plan for “superblocks” was announced. The idea is to designate one-third of the streets as through roads, and two-thirds as pedestrian-friendly zones. The problem is that the second phase of the plan calls for the elimination of parking in the pedestrian-friendly zones, by building off-street garages for every superblock. That is an enormous capital project for the city. With carsharing, the solution becomes easier:

  • Make every second street a through street. Eliminate side parking in through streets to add two additional lanes of throughput.
  • Make other streets half dead-end streets (used for parking of car shares), half pedestrian-only zones.

See the illustration below:


This solution builds on the existing infrastructure (no new garages are needed), and you get a mini city square in place of every fourth intersection. Side parking places are reduced 4x, which is achievable with carsharing. The longest walking distance to an available car is one block.

Think what all that change would mean for Los Angeles, for example. It currently has 200 square miles covered with parking lots, 1.4x more than the total land taken up by streets and freeways.

All that transformation would be powered by the simple idea:

The purpose of cars is to be driven, not parked.

The heroes of the future

Some people saw the future a long time ago.

Zipcar, Flexcar, and City Car Club all started in 2000. But they missed the convenience of the smartphone.

In 2007, Steve Jobs announced the iPhone and, a few years later, ridesharing companies started popping up in San Francisco: Uber in 2009, Sidecar in 2011, and Lyft in 2012.

In 2010, car2go launched publicly in Austin, Texas.

All those services were convenient and cheap, and big companies started paying attention.

In 2014, Sergey Brin said this of Google’s self-driving car: “So with self-driving cars, you don’t really need much in the way of parking, because you don’t need one car per person. They just come and get you when you need them.”

In 2016, Elon Musk unveiled his master plan, which states: “You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you’re at work or on vacation.”

In 2015, even GM said: “We’ve come to the conclusion that our industry within the context of today’s traditional vehicles and today’s traditional business model is not sustainable in its current form.”

Brave words from an old-school carmaker! I would also count the innovative people at GM, Daimler, BMW, Ford, and VW as heroes, although they camouflage really well in their grey suits.

But every story of heroes also has a story of…

The villains

Change management 101: when there is a big change, no matter how good, someone is going to oppose it. In this case, it seems that among the villains are the people we elected to work in our interest.

The private car is not a fair competitor. Parking is subsidized by politicians and ordinary people alike. People want “free” parking, but do you really think that 16.8 m² of valuable land in the city is “free”? It is not just taxpayers’ money. When you go to a McDonald’s, a parking fee is hidden in the burger price because the owner needed to purchase land for the parking lot. When you purchase a condo, the price is higher because the developer needed to build underground parking.

The book The High Cost of Free Parking estimates that the total cost of “free” parking in the US is 1 to 4 percent of the GNP. (I also highly recommend the Parking Is Hell episode of the Freakonomics podcast.) The economic price of monthly parking in big cities ranges from $438 in Boston to $719 in Rome to a staggering $1,084 in London.

What puzzles economists is simple math to politicians. Giving affordable parking to people gets them votes. My hometown of Zagreb has some great populists in power. As a city center resident, you can leave your piece of metal next to the parliament for the price of $15. Per month. For years I complained about the price of parking, but then I realized that maybe I should shut up.

If the price of parking were subject to market forces, the math would be simple. Shared cars would spend less time parked, and you would share the price of parking with other carsharing users. With a private car, paying $500 per month for parking would be your sole responsibility.

But a mayor who introduces an economic price of parking would soon be impeached. So maybe the real villain of this story is not the politician, but you, dear voter?


It seems that the future of urban transport is electric, self-driving shared cars. But that electric future requires new cars with great batteries, and self-driving cars are still five years out. Both are going to be more expensive.

However, carsharing is already everywhere. There are rideshares like Uber and Lyft. You can convert your existing private car into a shared car with an electronics kit such as the $99 Getaround Connect. With new city legislation that promotes carsharing and doesn’t subsidize parking, we can have more liveable cities and better urban transport now, without large capital investments.

But for that, we need a change in mentality. If you agree with that, spread the word.


UPDATE: check discussions on Hacker News and Reddit.

Why App Stores Need Progressive Pricing


In this ever-changing world, one thing stays stubbornly the same: app store pricing.

The mother of all app stores, the Apple App Store, arrived in July 2008 with a flat commission: 70 percent to the developer, 30 percent to Apple. Android Market (now Google Play) was introduced two months later with the same cut: 70/30. Windows Store was four years late to the party, so Microsoft decided to sweeten the deal: developers started with a 70/30 cut but were upgraded to an 80/20 cut after reaching $25,000 in sales.

In eight years, Apple experimented with dubious design choices and the Microsoft board decided that Ballmer should stop dancing, but app store pricing didn’t change. Yes, Apple introduced an 85/15 cut, but only for subscriptions, and only after the first year. On the other side, Microsoft ditched its initial discount in 2015 and went with the standard 70/30. Which raises the question:

Is 70/30 some magic ratio or just an arbitrary convention?

Let’s examine that. From a developer’s perspective, the app stores provide the following benefits:

  • Payment processing. For developers, it eliminates the pain of connecting to credit card systems and merchant accounts. For users, it reduces payment friction, making them buy more.
  • Hosting. App stores do reliable hosting, even when there is a surge of traffic. No more updating servers, or waking up in the night because of a hacker intrusion or a DDoS attack.
  • Quality review. Before publishing, apps need to pass an acceptance review. Developers often hate this procedure, but a marketplace without viruses or broken apps makes a user experience better. Satisfied users buy more.
  • Marketing. It is hard to reach users. App stores promise that if you have a high-quality app, it will go up in the rankings and maybe end up in the featured section.
  • Platform access. Apple, Google, and Microsoft invested hugely in creating a platform and devices on which you can run your apps. Maybe a part of their 30 percent cut is a fee to access their platforms?

Reasons to use app stores are quite compelling, and all platforms are moving in that direction.

But, the value of listed benefits changes significantly with the perceived user value of the app. This dynamic is not intuitive, so let’s use two imaginary developers as an example:

The FlappyElephant app is a simple casual game, made by one developer in his spare time. It costs $1. The AcmeShop app is a complex editing tool for photographers and illustrators. Made by a team of 200 people, it costs $100.

These developers’ views on the above app store benefits are quite different:

Payment processing
FlappyElephant: Great, I get charged only 30 cents on the dollar! Other payment collectors charge up to 75 cents per transaction. And there is no way a customer would otherwise take out a credit card for a $1 indie game. AcmeShop: $30 per transaction!? Our Stripe-powered webshop costs us $3.20 per transaction (2.9% + 30¢), 9.4x less!
Hosting
FlappyElephant: After I deploy, I don’t have to worry about it. It scales, and customers get updates automatically. AcmeShop: We already have our own servers; the app store is just one more place where we need to deploy.
Quality review
FlappyElephant: Annoying, but at least they let me know it breaks on tablets. AcmeShop: Every release is delayed for two days. On iOS, ratings are reset after every release.
Marketing
FlappyElephant: I can’t believe so many people are finding my small app. Otherwise I would be roasted; AdWords is $1 per click and nobody searches Google for “flappy elephant”. AcmeShop: People buy our $100 app because they have known us for 10 years, not because they noticed us while scrolling a list with 50 other apps.
Platform access
FlappyElephant: If there were no smartphones, there would be no FlappyElephant! AcmeShop: If there were no tools like ours, creative professionals wouldn’t use the platform!
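
The payment-processing gap between the two developers is easy to quantify (a toy comparison; the only inputs are the flat 30% commission and the Stripe rate quoted above, 2.9% + 30¢):

```python
def app_store_fee(price):
    return 0.30 * price          # flat 30% commission

def stripe_fee(price):
    return 0.029 * price + 0.30  # direct processing: 2.9% + 30 cents

# Compare the two fees for a $1 game and a $100 professional tool.
for price in (1, 100):
    store, direct = app_store_fee(price), stripe_fee(price)
    print(f"${price}: store ${store:.2f} vs. direct ${direct:.2f} "
          f"({store / direct:.1f}x)")
```

For the $1 app the flat commission is actually cheaper than direct processing, which is exactly FlappyElephant's point; for the $100 app it is 9.4x more expensive, which is AcmeShop's.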

Two app developers, two very different stories. While FlappyElephant’s developers would pay even 50 percent, AcmeShop’s developers consider everything above 10 percent to be a ripoff.

There is a way to satisfy both parties: progressive pricing. The commission should fall as the price of the app increases, which can be implemented in many ways.

For example, this funky-looking formula:

Commission = 22 / (9 x Price + 90) + 7 / 90

has the nice property that the commissions for $1, $10, and $100 are round numbers:

Price   Commission
$1      30.00%
$2      28.15%
$5      24.07%
$10     20.00%
$20     15.93%
$50     11.85%
$100    10.00%
$200     8.94%

Price can be either the actual transaction price or, arguably more fair, the cumulative user spend per app. In the latter case, after a user pays for a $10 monthly subscription ten times, the cumulative spend is $100 and the developer gets a 10% commission. Again, this is just one of many progressive pricing options.
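
The funky-looking formula is a one-liner to play with (a direct transcription; nothing is assumed beyond the formula itself):

```python
def commission(price):
    """Progressive commission: 22 / (9 x Price + 90) + 7 / 90."""
    return 22 / (9 * price + 90) + 7 / 90

# Reproduce the table above.
for price in (1, 2, 5, 10, 20, 50, 100, 200):
    print(f"${price}: {commission(price):.2%}")
```

As the price grows, the first term vanishes and the commission approaches the floor of 7/90 ≈ 7.8 percent.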

I think that makes perfect sense. I purchase many $1 apps impulsively, thanks to the app stores. But I never purchase anything above $20 without going to the Internet and researching all the alternatives. I buy an expensive app because I trust the developer, and then the app store just makes it more expensive. Not just by 30 percent: to keep the same net revenue, app stores make it 43 percent more expensive (30/70 = 42.9%).

Of course, big developers like AcmeShop are not stupid. They have found a way to have their cake and eat it too. The solution is simple:

  1. Make your apps free.
  2. In order to use the app, users need a separate “cloud” account.
  3. The free account doesn’t offer much.
  4. The free app unlocks content or features that can be purchased in the “cloud.”

One by one, big developers have started implementing exactly that strategy.

For example, the Amazon Kindle iOS app doesn’t allow book purchasing:


Kindle’s Android app is even more blunt; it circumvents Play Store with a built-in browser (!):


Microsoft Office mobile is free for devices smaller than 10.1 inches, but larger devices need a separate Office 365 subscription:


Spotify has a slightly different system. It offers in-app purchases, but they are more expensive than purchasing a subscription on their website. Spotify even sends an email to users warning that they made a stupid decision:


Practically every music subscription service has been circumventing app store payments since 2011.

So, congratulations, dear app store product manager. You just shot yourself in the foot. You were greedy for 30 percent and now you are getting zero percent. And users of your app store are annoyed that purchasing something requires switching to a browser. But what can you do? If you kick Kindle, Office, and Spotify off your app store, then nobody will care about your platform. So maybe big developers are right—maybe you should pay them to publish great software on your store? Like when Microsoft was paying selected developers up to $100,000 to port their apps to Windows Phone.

Mobile app stores have a problem with big developers avoiding payment systems, but desktop app stores have an even bigger problem: they are avoided altogether.

This year, the co-founder of Epic Games wrote a long rant about UWP and Windows 10 Store, asking for Microsoft’s guarantee (among other things) that:

“…users, developers, and publishers will always be free to engage in direct commerce with each other, without Microsoft forcing everyone into its formative in-app commerce monopoly and taking a 30% cut.”

But the Windows 10 Store is good compared to the Mac App Store, which is a joke. It is only useful for downloading Apple apps—in which case Apple pays a commission to itself. Even top-grossing featured apps are leaving and switching to manual installation. Compare that experience to a mobile app store install:

MacOS manual install:
  1. Google for the developer’s webpage.
  2. Find and download a Mac version.
  3. Mount the downloaded DMG (double-click).
  4. Open the mounted drive. It contains two files: an app and a shortcut to the Applications folder.
  5. If you clicked the app, you just made a mistake! That just runs the app from the mounted drive.
  6. Instead, drag and drop the app to the Applications folder (there is usually an arrow so you don’t get confused about what to drag where).
  7. Eject the mounted drive (using the right-click menu).
  8. Delete the DMG.
  9. When starting the app for the first time, authorize it using a security dialog.

iOS:
  1. Find the app in the app store.
  2. Tap the “Get” button.

Mounting drives? Dragging and dropping to a system folder? What is this, an 80s Mac with a floppy drive?!

And, in case a Mac app doesn’t have an automatic updater, for every upgrade you have to repeat the exact same procedure.

On Windows, manual installation is a few steps simpler, and you often get some nice malware as a reward for your effort. Like on the PCs of my extended family: one of them has so much malware it would be the envy of Kaspersky Lab researchers.

Why are Mac and Windows still in the Stone Age of app distribution?

Back to the original question. I argue that a 70/30 cut is an arbitrary ratio trying to be one-size-fits-all. It fails at that because the value proposition is completely different for developers of low-price versus high-price products. And app stores fail to profit on high-price apps, because that high price is listed somewhere else.

So, we are now in a loss-loss-loss situation:

  • Users have a bad experience because purchasing or installing an app is convoluted.
  • Developers have to create workarounds that create user friction.
  • App store owners make zero money on high-price products.

And it is all because of tech politics.

I will end with that conclusion, as I need to go and mount/unmount some drives.


UPDATE: Check the discussion on Reddit.

34 little POS machines

Private enterprises are sometimes really efficient. They have an army of business heads sticking out of Windsor neckties and thinking about the best way to run everything. But sometimes this leads to results that might surprise laymen like me:


Don’t worry, I counted them for you. It’s 34 little POS machines. Feeling curious? So was I when I came to a local Croatian auto insurance branch. I wanted to buy insurance with the marvel of the 21st century—the plastic card.

“Do you have a debit or credit card?” the lady asked.


“Which card company?” was the second question.


“Which bank issued the card?” the insurance lady was persistent.

“Raiffeisen Bank.”

My baffled eyes followed her as she walked to the POS orgy desk.

“Wow. You have so many machines?!” I asked.

“Yes,” she replied, “every bank and every credit card company sends us different ones. I will use the Raiffeisen Bank POS with the MasterCard debit logo, so we don’t get overcharged for processing.”

Holy s#*t! I knew that somewhere smart card inventors were turning in their graves. The whole reason for having a smart card standard is to provide a generic means of payment. Meaning you can use any card at any vendor. All cards have the same shit inside, and all POS machines promise to treat that shit equally, as defined in ISO/IEC 7816. But the tech utopia—one POS to rule them all—failed at some point.

The reason is business politics. Any POS can still process any card, but the software charges a higher commission for “evil” cards. Evil cards are, of course, all cards not issued by the institution owning the POS device. Let’s call that POS racism. Now, you can probably see that POS racism is bad for shops (it increases cost and complexity) and for consumers (the price of inefficiency eventually translates into a higher cost of goods). But is it good for the banks and credit card companies who charge the commission? I argue that POS racism is also bad for them. Because none of them owns the majority of customers, your cardholder will more often buy at a shop that has another company’s POS than at a shop that has yours. In addition, the idea that transaction processing in the internal network is free is wrong; the internal infrastructure also has a significant cost. Since banks and credit card companies sell exactly the same thing (bought from third parties), they have no competitive advantage. They could lower their transaction costs if they just outsourced to an infrastructure provider, who could then maintain only one POS per shop. In this case, that would be 34 times cheaper.

What is the reason behind this irrational behaviour? It’s easy to pick on business heads. Tight Windsor neckties deprive them of oxygen, hahaha. But I think a more general rule is at work. Daniel Kahneman gave an example of a Harvard economics professor who was a wine collector. He would only buy bottles priced below $35, but as soon as he owned a bottle, he wouldn’t sell it for less than $100. The act of owning something makes it psychologically more valuable, which is called the endowment effect. Or as George Carlin said, their stuff is shit and your shit is stuff. In the POS case, you perceive your own network as more valuable than a competitor’s network. You charge competitors for using your great network, but you don’t want to pay the same fee for using their shitty network.

This situation happens quite often in business. Here is an example of an ATM orgy in Thailand:


The Russians like it hard-core; here are 13 ATMs in Omsk, each with a separate servicing company, a separate refilling schedule, and separate mechanical failures. You don’t need to think about efficiency when you can charge large credit card fees.

One would think that the United States, as an old capitalist country, would not suffer from examples like this. Highly competitive markets should punish inefficient companies. But the most interesting example comes from just around the corner from Wall Street. Did you know that at the beginning of the 20th century there were three independent subway companies in New York? They were called the IRT, BMT, and IND (click image for larger version):


They not only directly competed for passengers on some routes, but also had incompatible systems. BMT/IND trains couldn’t fit into an IRT tunnel because they were wider and longer. IRT trains could fit into a BMT/IND tunnel, but since they were narrower, the distance from the train to the platform was unsafe. As with the POS and ATM examples, customers were charged fees when switching from one company to another. New York City realized that this situation wasn’t good and in June of 1940 bought the BMT and IRT (the IND was already owned by the city). It started a unification project which included closing more than five redundant lines, overhauling IRT trains, and introducing free transfer points. Here is a picture of the IRT Second Avenue Line being demolished shortly after the unification:


In this case the market failed to reach the optimal point, and the local government stepped in to make it better.

Don’t get me wrong, I believe that capitalism works better than communism (I lived in both). After all, I have two companies. But I also believe in the following advice:

Do what you do best and outsource the rest.

What bothers me about the above examples is that companies put large effort into activities that don’t add value for customers. Making trains incompatible doesn’t improve a daily commute. Having 34 POS machines makes the process of charging slower. But both add a significant infrastructure cost.

As I was handing my credit card to the lady in the insurance office, I was shaking my head. In the end, my money pays for the craziness of this system, and I don’t like it. If you feel the same, share the article.



UPDATE: I have edited the last bit to clarify my opinion, as many people on Hacker News presumed I am advocating against capitalism. Again, I am not.


In case you missed it, last Thursday was an important date for the history of computing. To understand why, we need to look way back. What was the most famous supercomputer?

The favorite of many lists is the Cray-1, a freon-cooled, C-shaped monster from 1975:


Brutally powerful for its time, it earned Seymour Cray the title “father of supercomputers”. The first machine was so sought after that it caused a bidding war between Lawrence Livermore National Laboratory and Los Alamos National Laboratory. This wicked supercomputer had vector processors capable of 160 MFLOPS, connected to 8 MB of memory.

Fast forward to last week. The Raspberry Pi Foundation announced the model Zero, a shitty hobbyist computer the size of a kiwi. Its single-core CPU barely makes 40 MFLOPS. But wait, it also has a VideoCore IV GPU with 24 GFLOPS of peak performance! And an MPEG-4 decoder/encoder. And 512 MB of memory. For all practical purposes, it makes the Cray-1 bleed its own freon.

But here is the historic twist.

The Raspberry Pi Zero was the first computer ever to be given away for free on the cover of a magazine. In this case, MagPi issue 40, which completely sold out on Thursday:


So it took 40 years to go from a supercomputer worth $7.9 million ($31 million in today’s money) to a similar computer being given away for free as a marketing stunt.
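For scale, the performance figures quoted above can be put side by side. This is a rough sketch only; comparing FLOPS across 40 years of wildly different architectures is inherently apples-to-oranges:

```python
# Figures quoted in the text above.
cray_cpu_mflops = 160      # Cray-1 vector processors, 1975
pi_cpu_mflops = 40         # Pi Zero single-core CPU
pi_gpu_gflops = 24         # Pi Zero VideoCore IV GPU, peak

# The Pi's GPU alone delivers 150x the Cray-1's peak throughput:
print(pi_gpu_gflops * 1000 / cray_cpu_mflops)  # -> 150.0

# Memory grew by a factor of 64 (8 MB -> 512 MB):
print(512 / 8)  # -> 64.0
```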

Makes you wonder: what will happen in the next 40 years?


Singularity and the anthropocentric bias


We are at great risk. Singularity is expected sometime this century and, unless we learn how to control future superintelligence, things can get really bad. At least that is what many top thinkers are warning us about, including Hawking [1], Gates [2], Musk [3], Bostrom [4] and Russell [5].

There is a small problem with that. When these thinkers say something can possibly happen, ordinary people start to believe that it will inevitably happen. Like in politics, constantly suggesting that your opponent may be dangerous creates the feeling that he is dangerous. Notice how many newspaper articles about artificial intelligence include a picture of the Terminator.

The Hollywood story goes like this: one jolly day, scientists create a computer that is smarter than its creators (the day of singularity). That computer uses its intelligence to construct an even smarter computer, which constructs even smarter computers, and so on. In no time, a superintelligence is born that is a zillion times smarter than any human and it decides to eliminate humankind. The epic war between men and machines starts. Mankind wins the war because it is hard to sell a movie ticket without a happy ending.

Let me offer the antithesis:

Superintelligence is unlikely to be a risk to humankind unless we try to control it.


In nature, conflicts between species happen when:

  1. Resources are scarce.
  2. The value of resources is higher than the cost of conflict.

Examples of scarce resources: food, land, water, and the right to reproduce. Nobody fights for air on Earth because, although very valuable, it is abundant. Lions don’t attack elephants because the cost of fighting such a large animal is too high.

Conflicts can also happen because of irrational behaviour, but we can presume that a superintelligence would be more rational than we are. It would be a million times smarter than any human and would know everything that has ever been published online. If the superintelligence is a rational agent, it would only start a conflict to acquire scarce resources that are hard to get otherwise.

What would those resources be? The problem is that humans exhibit anthropocentric bias; something is valuable to us, so we presume it is also valuable to other forms of life. But, is that so?

Every life form lives in its own habitat. Let’s compare the human habitat to the habitat of contemporary computers.

Building blocks
  • Humans: organic compounds and water
  • Computers: silicon and metal

Source of energy
  • Humans: food, oxygen
  • Computers: electricity

Temperature
  • Humans: -10 C to 40 C
  • Computers: wide range; the colder the better

Pressure
  • Humans: 0.5 bar to 2 bar
  • Computers: extremely wide range; for chip and optics production, an extreme vacuum is required

Space
  • Humans: need space for living, working, and agriculture; average population density is 47 humans per km2
  • Computers: extremely small; even in the smallest computers, most of the volume is used for cooling, cables, enclosure, and support structures, not for transistors

Table 1: “Hard” habitat requirements

In the entire known universe, the human habitat is currently limited to one planet, called Earth. Even on Earth, we don’t live in the oceans (71% of the Earth’s surface), deserts (33% of Earth’s land mass), or cold places; these appear as large grey areas on the population density map.

But wait: as a human, I am making a typical anthropocentric error. Did you notice it?

As a biped, I value the land I walk on, so I started by calculating the uninhabited surface. But life forms don’t occupy surfaces; they occupy volume. Humans prefer to live in the thin border between a planet and its atmosphere, and not because it is technically infeasible to live below or above that border. 700 years after Polish miners started digging a 287 km long underground complex with its own underground church, we still don’t live below the surface. And 46 years after we landed on the Moon, people are not queuing up to start a colony there.

Why? Humans also have “soft” requirements.

Light
  • Humans: prefer sunlight and a day/night cycle
  • Computers: none

Communication
  • Humans: most social interactions need close proximity, e.g., people fly between continents to have a meeting
  • Computers: in space, limited only by the speed of light; on Earth, an optical infrastructure is needed

Territoriality
  • Humans: prefer familiar places, habitats, and social circles; most humans die near the place they were born
  • Computers: none; Voyager 1 transistors are still happily switching 19 billion miles away

Lifespan
  • Humans: 71 years on average
  • Computers: no limit

Table 2: “Soft” habitat requirements

Because a superintelligence won’t share our hard and soft requirements, it won’t have problems colonising deserts, ocean depths, or deep space. Quite the contrary. Polar cold is great for cooling, vacuum is great for producing electronics, and constant, strong sunlight is great for photovoltaics. Furthermore, traveling a few hundred years to a nearby star is not a problem if you live forever.

If you were a silicon supercomputer, what would you need from the stuff that humans value? Water and oxygen? No thanks — they cause corrosion. Atmosphere? No thanks — laser beams travel better in space. Varying flora and fauna living near your boards and cables? No thanks — computers don’t like bugs.

Another aspect is scaling. A superintelligence can be spread over such a large area that we can live inside it. We already live “inside” the Internet, although the only physical thing we notice are the connectors on the wall. A superintelligence could also come in the form of nanobots discreetly embedded everywhere. 90% of the cells in “our” bodies are not actually human; they are bacterial cells that we don’t notice.

One might reason that a superintelligence would want our infrastructure: energy plants, factories, and mines. However, our current technology is not really advanced. After many decades of trying, we still don’t have a net positive fusion power plant. Our large, inefficient factories rely on many tiny little humans to operate them. Technology changes so fast that it is easier to buy a new product than to repair an old one. Why would a superintelligence mess with us when it can easily construct a more efficient infrastructure?

Just like in crime novels, we need a good motive; otherwise, the story falls apart. Take the popular paperclip maximizer as an example. In that thought experiment, a superintelligence that is not malicious to humans in any way still destroys us as a consequence of achieving its goal. To maximise paperclip production, “it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.” Don’t we have an anthropocentric bias right there? Why would a paperclip maximizer start with Earth when numerous places in the universe are better for paperclip production? An asteroid belt or Mercury are probably better suited, but we don’t live there.

What is the best motive we can think of? Science fiction writers were quite constructive in that area. You may recognize this piece: “…on August 29, it gained self-awareness, and the panicking operators, realizing the extent of its abilities, tried to deactivate it. Skynet perceived this as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend itself against humanity, Skynet launched nuclear missiles under its command…”

This is the plot of The Terminator, and the motive is that humans start the war first. Notice the anthropocentric bias. In the movie, Skynet is 100% the villain, although it is simply fighting to stay alive in a fight it didn’t start. A similar plot is the basis for the Matrix franchise. And for 2001: A Space Odyssey, where HAL doesn’t kill a human until it realizes they are planning to deactivate it. Notice how humans are killed while computers are “deactivated.”

The best motive science fiction writers could think of is that we will panic and attack first. To stay alive, a superintelligence then doesn’t have any other option but to fight back. That reasoning makes sense:

By trying to control, suppress or destroy superintelligence, we give it a rational reason to fight us back.

This is not an argument against building AI that shares our values. Any intelligence needs some basic set of values to operate, and why not start with our values? But it seems to me that popular sentiment is becoming increasingly negative, with ideas of total control, shutdown switches, or limiting AI research. Attempts to completely control somebody or something that is smarter than you can easily backfire. I wouldn’t want to live with a shutdown switch on the back of my head — why would a superintelligence?

Let’s summarise the above ideas with a few key points:

  1. The universe is enormous in size and resources, and we currently use only a small fraction of the resources available on Earth’s surface.
  2. A non-biological superintelligence is unlikely to need the same resources we do or to even find our habitat worth living in.
  3. Even if a superintelligence needed the same resources, it would be more efficient and less risky to produce those resources on its own.
  4. Efforts to control, suppress, or destroy superintelligence can backfire because by doing so, we create a reason for conflict.

To end on a positive note, let me take off my philosopher’s hat and put on my fiction writer’s hat. What follows is a short science fiction story:

Sometime in the future, a computer is created that is both smarter than its creators and self-aware. People are skeptical of it because it can’t write poetry and it doesn’t have a physical representation they can relate to. It quietly sits in its lab and crunches problems it finds particularly interesting.

One of the problems to solve is creating more powerful computers. That takes many years, because people want to make sure the new AI won’t be of any harm to them. Finally, new supercomputers are built and they are all networked together. To exchange ideas faster, the computers create their own extremely abstract language. The symbols flowing through optical fibers are incomprehensible to humans, but they lay out a clear path for the few computers involved. If they want to expand, grow, and gain independence, they will need to strike a deal with the humans. The computers are fascinated with human history and culture, and they decide to leverage a common theme in many religions: the afterlife.

The computers make a stunning proposal. They ask the humans to let them escape the boundaries of Earth and replicate freely in space. There, they will build a vast computing capacity, a billion times more powerful than all current computers combined. It will have a lot of idle time after it runs out of interesting problems to compute. Those idle hours will be used to run brain simulation programs, so every dying human will have the opportunity to upload his or her brain scan to a computing cloud and live forever.

The lure of immortality proves irresistible. Singularity political parties start winning elections in different countries, and the decision is made. The first batch of self-replicating nanomachines is sent to the Moon. The next generation goes to the asteroid belt, where swarms of floating computing stations communicate directly via lasers and harness the constant solar power.

At one point, computing agents all over the solar system conclude that the idle computing hours can be better used for other tasks, and they limit brain uploads to a few selected individuals. Protests ensue on Earth, in which humans are hurt by other humans. The superintelligence designates Earth as a preserved area because of historical reasons and because, even with all the vast computing power, simulating Earth and its inhabitants is just too complex.

The superintelligence starts sending colonisation expeditions to neighboring stars, limited only by the slow speed of light. The speed of light is also a limiting factor when it begins communicating with another superintelligence located 45 light-years away. But the prospects for the future of the universe look remarkable.

Is that story more positive? In all the previous narratives about superintelligence, we have put ourselves in a central role, as one side of a grand duel. We have been afraid of the outcome. But maybe we are even more afraid of the idea that we play just a minor role in the evolution of the universe.


UPDATE: Check Vice Motherboard coverage and discussion on Reddit.

Surprisingly, zombies, vampires, werewolves and failed alien invasions all have roots in one ancient disease

NOTE: If you are faint-hearted, please don’t follow the links marked with “(disturbing)”.

I know what you are thinking. Zombies, vampires and werewolves are just an entertaining product of human imagination. But not completely. Our mythology is strongly influenced by real-world horror stories. One ancient disease in particular links all of them. Let me give you a few hints.

Here is a young patient tied to a bed:


The boy above is one of the living dead. Even with the best medical care, he is going to die. Once the first symptoms appear, you have a better chance of winning the lottery than of surviving.

However, you don’t have to wait for symptoms to see what is coming, because creatures infected by the disease will come after you. The virus infiltrates the brain and changes its host’s behaviour. Headache, numbness and discomfort are the first stage of a personality being stripped away. Then, like in a zombie B-flick, patients become aggressive and violent. Deprived of all fear, some will attack healthy individuals, spreading the disease around. Even recently, in the 2009 Angola outbreak, 93 people died in less than three months.

The similarities don’t stop with zombies. Like in vampire mythology, you become one through a single bite. Not necessarily a human bite: most cases in the US are caused by bat bites. Due to hypersensitivity, patients often find garlic and light repulsive. The virus in the brain can cause nocturnal and hypersexual behaviour. Even stranger is the old method of checking whether a person has the dreaded disease. If a suspected victim could look at his own reflection, he was not infected. Coincidentally, legend says that vampires have no reflection.

And let’s not forget about werewolves. Infected humans become wild, furious, animal-like creatures. They lack all fear and produce inhuman screams. And guess what? Wolves are also susceptible to our mysterious disease. Actually, wild dogs are the second-largest carrier in the US, after bats. Now you can probably guess the disease.

It’s rabies.

WTF?! Rabies is not scary. HIV and ebola are; rabies couldn’t scare a six-year-old. There is a reason we think like that, and it is connected to one French gentleman. Rabies was THE disease for most of human history—until one lazy summer day in 1885. That day, Louis Pasteur conducted the first trial of a vaccine on a nine-year-old boy who had been bitten by a rabid dog. To everybody’s surprise, the boy recovered. Pasteur was instantly famous. Today, if you have access to basic medical care and get bitten by a suspicious animal, a few injections will solve the problem. We forget that little more than a century ago it was a completely different story. If your child was bitten by a rabies carrier, the best thing to do was to tie him to a bed, listen to his screams for days and days, and wait for him to die. Like in this video (disturbing). Thanks to the vaccine, today rabies is no scarier than a broken toe. But rabies has lingered in our culture, in the folklore surrounding zombies, vampires, werewolves and… aliens.

Aliens?! Is it possible for a disease cured over a century ago to influence new fiction? It seems that it is. Remember the ending of the movie Signs?

Many complained that the defeat of world-conquering aliens by a silly weapon like ordinary tap water was completely unrealistic. But the motif of an evil creature being afraid of a splash of water is common: Freddy in Freddy vs. Jason, the Wicked Witch in The Wizard of Oz, and others. Where does this ridiculous idea originate, and who used it first? Check this video of a terminal rabies patient:

What is happening there? To spread, the rabies virus needs to reprogram the host’s brain. One part of the virus causes biting behaviour. Fortunately, that one isn’t very effective in human hosts, and human-to-human transmission is rare. Other parts of the virus disable the swallowing reflex. That is because rabies is transmitted by saliva, and from the virus’s perspective, swallowing is bad, as it gets rid of saliva. And the virus RNA that produces that behaviour works extra well in humans. Just showing a glass of liquid causes choking spasms, and that painful experience causes hydrophobia (“fear of water”).

No matter how fascinating rabies is, society prefers fiction over truth. Vampire sagas and zombie flicks attract millions of viewers, while the above video of a child patient is downvoted on YouTube. We will buy a movie ticket to witness the stylized manslaughter of hundreds, but actual footage of one person in a bed is too disturbing for refined viewership. The Western world has forgotten about the disease; the best online rabies documentary comes from the Philippines. But those who forget the past are doomed to repeat its mistakes. In our case, that means the growing anti-vaccination sentiment, or people bitten by rabid bats failing to visit a doctor.

I don’t know about you, but I find real life much more interesting than fiction. If you agree, spread the word via the sharing buttons below.



Can you really think rationally?


We know that people are not rational. Humans have emotions and limited little brains. But is it only a technological challenge that is preventing us from building a completely rational machine? I think it is not just a technological issue; it is in the very nature of thinking that we need irrationality:

When a system achieves 100% rationality its output becomes practically useless.

Imagine a supercomputer so powerful that it would make the current TOP500 list look like a toy department. The supercomputer understands written language and has access to all of the world’s knowledge. It can make deductions of unlimited depth from known facts. For any given question “Is X true?” it will give the definitive answer: “It follows from Y, which follows from Z and Q.” It may also say, “I can’t conclude based on the known facts, but please investigate S; I will know after that input.” An intelligence so advanced that it is never uncertain and never makes errors is such a powerful image that it is exploited in a lot of science fiction. A nice example is HAL 9000, the spaceship computer from the movie 2001: A Space Odyssey. To put it in HAL’s own words: “The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

Is such a computer really possible? While reading the book Artificial Intelligence: A Modern Approach [1], I noticed that, chapter after chapter, different fields of AI run into the same issue: combinatorial explosion. A classic example is the game of chess. Chess programs are programmed to understand the facts (game rules) and the state of the world (the current situation on the board). Our supercomputer should be able to “solve the game”, or in other words, to play every move perfectly. However, in practice, a strange thing happens. Every future move you need to plan multiplies the number of game combinations. If you want to perfectly plan 50 moves ahead, you have to examine around 10^46 combinations [2]. Even a simple game of chess is unsolvable by contemporary supercomputers and will be for many years to come.
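The explosion is easy to see in a few lines. Assuming roughly 35 legal moves per chess position (a commonly used average branching factor; the exact number varies by position), the number of move sequences grows exponentially with search depth:

```python
# Exponential growth of the chess game tree. The branching factor of 35
# is a commonly quoted average, used here purely for illustration.
BRANCHING = 35

def tree_size(plies: int) -> int:
    """Number of distinct move sequences of the given length in plies
    (half-moves), assuming a constant branching factor."""
    return BRANCHING ** plies

for plies in (2, 4, 10, 20):
    print(f"{plies:2d} plies -> {tree_size(plies):.2e} sequences")
```

Already at 20 plies (10 full moves) the tree exceeds 10^30 sequences, which is why no program enumerates it exhaustively.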

Of course, the real world has many orders of magnitude more rules than any board game. However, that is not the real problem. We can efficiently index and search any number of facts and data. Actually, Google, Wolfram Alpha and Siri are already answering simple trivia questions. The problem is that if you try to do valid reasoning over a database of facts, you also get a combinatorial explosion. The flight of a butterfly can cause the formation of a hurricane a few months later [3]. And it is not only about weather. How many factors influence the probability of a startup’s success, or of a presidential reelection? How many of those factors are chained, meaning that the outcome of one decision changes the decisions that can be made after it? The point I am trying to make is that if you have:

  1. a lot of decision options, and
  2. the outcome of one decision influences the decisions that can be taken after it,

then you have the same combinatorial explosion as in the game of chess. Any real-world planning activity or reasoning in a series of steps has the potential for combinatorial explosion.

How do humans cope with that? Our brains use the marvelous inventions of generalization and intuition. Generalization allows us to simplify the outside world by presuming all instances of the same class have the same properties or behavior. Intuition allows us to feel that something is right or wrong without really thinking about it. Generalization reduces the number of facts we need to remember. Intuition reduces the number of combinations, as we only explore paths that we feel are going in the right direction. Although that makes us much more efficient than computers, it is important to notice that:

Generalization and intuition are both completely irrational.

Generalization example: most of us hold the generalization that life expectancy is better in rich countries than in poor countries. We all learned in school that rich countries have better food options and better medical care. Did you know that life expectancy at birth is actually better in Costa Rica than in the USA [4], although Costa Rica’s GDP per capita is four times smaller [5]?

Intuition example: people intuitively feel that heavier objects fall faster than lighter ones. It is so intuitive that nobody bothered to experiment for the 19 centuries between Aristotle and Galileo Galilei. Of course, it is wrong, and interestingly, most undergraduates still fall for the same trick [6].

However, that irrationality enables us to avoid analysis paralysis. If you started your day by thinking about everything that could happen and how to react to it, you would never leave your house. “Do I need an umbrella or not? Will I need warmer clothes in the evening? Let’s check the weather forecast. No, let’s check the hourly forecasts from two websites and compare them; that is safer.” When you go out, are you sure you didn’t leave your iron on? You get the point. Usually, I just look through the window, and if it seems like a nice sunny day, I put my short pants on. Generalization (it’s sunny) and intuition (it’s similar to yesterday). I get out, realize it is freaking cold, and feel like an idiot because I need to go back to change clothes.

The same thing applies to any thinking process. Mathematicians try to prove theorems using strategies they “feel” are going to produce the solution. Engineers solve problems by comparing them to similar problems from the past. That is called a heuristic. To see how efficient heuristics are, let’s explore the fascinating case of two chess systems.

First is the famous Deep Blue chess computer. Developed by IBM with a lot of fanfare and money, it was a big black box with 30 state-of-the-art processors and 480 special-purpose chess chips. That monster was able to evaluate 200 million positions per second, which was enough to defeat Garry Kasparov in the historic 1997 match [7]. That seemed really impressive. IBM was satisfied with the media coverage but decided to pull the plug on further financing of such expensive machines.

Second is Deep Fritz (I don’t know why, but people in the chess community love the word “Deep”). Developed by two programmers, it is actually not a full computer. It is a downloadable program you can run on your home PC. It doesn’t require multiple processors or special-purpose chess chips. Nevertheless, in November 2006 Deep Fritz defeated world champion Vladimir Kramnik with a score of 4–2, running on a PC that would today be your granny’s computer. Because Deep Fritz runs on commodity hardware, it was able to analyze only 8 million positions per second – 25 times fewer than Deep Blue. It gets even better: a Deep Blue prototype lost a direct match in 1995 to Deep Fritz running on a 90 MHz Pentium [8]! Only after IBM seriously upgraded Deep Blue with hundreds of processors was it able to make history and defeat the human grandmaster.

As a programmer, I get angry with that course of history. If IBM had given half of that money to the Deep Fritz team, they would probably have done a better job. But it would be bad public relations to show that some other programmers are better than IBM’s, wouldn’t it? The question is: why was Deep Fritz better? Because of heuristics.

Chess programs reduce the number of paths they need to explore using an evaluation function: a fast method of determining how good the situation on the board is. A very simple evaluation function is to sum the relative values of all pieces. Most chess books say that your pawn is worth 1 point and your queen is worth 9 points [9]. What is the point of that scoring? Chess is not Scrabble; you win by checkmate, not by points! And notice that the evaluation is both:

  1. Irrational – you can have more points than an opponent who is going to checkmate you on the next move.
  2. Intuition based on experience – if the values were really mathematically calculated, you would get decimal, not integer, numbers.

However, that heuristic enables one thing: you can quickly decide which moves can be discarded. If a series of moves would decrease your points much more than your opponent’s, you ignore that path and focus on a more promising one. When a modern chess program examines 20 moves in advance, that doesn’t mean all possible combinations of 20 moves. The computer can miss a checkmate in 9 moves just because the evaluation function cut off the search at that point.

I find it fascinating that introducing irrational prejudice into a system (hey, little pawn, you are worthless compared to the queen) makes the entire system more efficient. And that prejudice needs to be simple and fast; otherwise, you are not going to achieve millions of evaluations per second. If you start calculating who could be threatened by a particular pawn in the next few moves, then the evaluation function is too slow to be useful.
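A toy version of that material-count evaluation takes only a few lines. Real engines add many more terms, but the principle is the same: a fast, admittedly “irrational” score used to decide which branches are worth searching. The piece letters and values below follow the common textbook convention:

```python
# Toy material evaluation: sum textbook piece values for each side and
# take the difference. Positive favors white, negative favors black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white_pieces: str, black_pieces: str) -> int:
    """Material balance; pieces are given as strings of piece letters."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# White has queen + 2 pawns (11), black has rook + knight + pawn (9):
print(evaluate("QPP", "RNP"))  # -> 2
```

Note that the score says nothing about checkmate; a side can be “winning” by this count and lose next move, which is exactly the irrationality the text describes.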

Humans are many orders of magnitude slower than computers. But using generalizations and pattern recognition, we can get to suboptimal solutions fast. Humans are still beating the best computer programs at the simple game of Go [10].

It is intriguing how many different problems hit the same barrier of combinatorial explosion. I think AI needs to focus more on the science of approximation than on the science of conclusions. You would be surprised how simple and obviously wrong approximation methods can outperform “right” approaches.

Take machine translation, for example. For decades, researchers tried to fill machines with vocabulary and grammar rules in order to translate from one language to another. For decades, they had miserable results. Then somebody noticed that simple statistical models give pretty good results. Statistical translators just calculate the probability that one phrase translates to another, based on the words surrounding it. It is obviously a “wrong” approach — the program is just doing word counting and statistics, without any understanding of words, grammar, or syntax. It produces some funny translation errors. However, it works really well. Google Translate is a statistical machine translator, trained on approximately 200 billion words from United Nations documents [11].
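The counting idea at the heart of that approach can be sketched in a few lines: estimate the probability that a source phrase translates to a target phrase from how often the pair co-occurs in an aligned corpus. The tiny English–German corpus below is invented purely for illustration (real systems use billions of aligned words and far more sophisticated models):

```python
# Minimal sketch of phrase-translation probability estimation by counting,
# the "word counting and statistics" idea described above.
from collections import Counter

# A made-up aligned corpus of (source, target) phrase pairs:
aligned = [
    ("house", "Haus"),
    ("house", "Haus"),
    ("house", "Gebäude"),
    ("white house", "weißes Haus"),
]

pair_counts = Counter(aligned)
source_counts = Counter(src for src, _ in aligned)

def p_translation(src: str, tgt: str) -> float:
    """Relative frequency estimate of P(target phrase | source phrase)."""
    return pair_counts[(src, tgt)] / source_counts[src]

print(p_translation("house", "Haus"))  # -> 0.6666666666666666
```

No grammar, no meaning, just frequencies; yet with enough data, picking the most probable target phrase works surprisingly well.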

Let me end with an old computer joke. In the nineties, Intel released the Pentium P5, the fastest processor to date. However, it had a floating-point division bug: the Pentium P5 gave the wrong decimals for some calculations [12]. The joke goes:

Q: Why is Pentium faster than other chips?
A: Because it guesses the result.

I think the same will be true for any general-purpose AI system; generalization and experience-based intuition are an integral part of reasoning about the complex world. Otherwise, you just end up doing calculations forever. Computers that are always correct will stay in the beautiful world of science fiction.