Breaking the Ice for Underwater Hockey

“This is for you kid,” said my uncle, “have fun!”

I was 12 when I got my first mask and fins. I loved them. I would go every day to the seaside, to the rocky pier, and dive next to rusty, oily boats. I would take a deep breath and plunge into the depths, eager to see what was below. And there were so many things: broken plates, Coca-Cola cans, and even some fish. I got better and better until I could freedive 14 meters down to the old car tire hosting a crab, who was surprised by his pimply visitor.

“This must be what being a dolphin feels like!” I thought to myself. They are air-breathing mammals, same as me, periodically coming to the surface for a breath. Then they dive under and play catch with each other.

28 years later, I was killing time drinking beer in a smoky Berlin bar. “…underwater hockey,” mentioned a girl next to me. I turned my head and joined the enthusiastic conversation about the obscure underwater sport. “If you did freediving, you should totally do it, it is great!” Sumi said. Playing underwater like dolphins? I was in.

When I came to Wednesday training, the entire team assembled to help me. I got pink fins from one person, a white hockey stick from another, and a mask from a third. Alex took it as his job to train me before the practice match. We did puck pushing, passing, and turning in a small kids’ pool. Easy-peasy-lemon-squeezy. I couldn’t wait to get into the big pool and score some goals. It was going to be great.

We entered the big pool and divided into two teams. I was assigned to play front right, as a passing partner to Alex. “Start!” someone shouted, and we all swam towards the little green puck. As I dived in, Alex was already two meters in front of me. He didn’t have the puck for long; two people from the other team attacked and engaged in something like underwater tennis. I got myself into the action, meaning I got somebody’s fin in my butt and somebody else’s knee in my face. I mean, I am not sure the knee and fin were not from the same person; we were all one big ball of human flesh. In the excitement of that sexy moment, I forgot that air is a necessary requirement for a long life. My brain suddenly reminded me with a gentle thought, “Air! Get some air or you will die!” and an adrenaline overdose. I jumped out of the water like a mating salmon and started hyperventilating. A girl surfaced next to me, took three breaths, gave me a look, and dived down again. “Boy, these people are hard-core,” I said to myself and dived down. When I looked around, the puck was already in a different part of the pool. I mean, I am not sure the puck was there, but that is where I saw an amorphous ball of hands, panties, and fins, so I swam in that direction. As I approached, the battle of underwater tennis was abruptly interrupted by a white-pants enemy player who decided to play for himself and separated from the crowd. I swam to block his way, but it was like chasing a torpedo. He whizzed by me and put the puck in the metal goal.

Our brave team regrouped and started talking strategy for the next round. My strategy was not to forget to breathe. At the start signal, we rushed again to the center, and this time I was watching for my chance from the surface. When the crowd around the puck got smaller, I dived in and actually got to touch the puck with my stick. My joy was short-lived, as a member of the other team started fighting me for it. In that tug-of-war, I was pushing as hard as I could, but my opponent had the same cunning plan. My brain screamed “Air!” and I went up. By the time my heart stopped pounding, the puck was again in our goal. It seems both my team’s strategy and my personal strategy of remembering to breathe were failing. In addition, a team member told me that it is not allowed to push the stick with two hands. I guess the underwater police didn’t care, because I lost the puck anyway.

In the third round, I didn’t get to any action, but at least I got some air.

Then, our team got a “penalty shot”. My team member would start in the back and I would be in the front. We assembled at the bottom of the pool and started. The puck was suddenly next to me: finally, my chance! I took it, swam past the opposing player, and went unstoppably towards their goal. Wow, I can’t believe it, I am going to score. Then I noticed everybody had stayed in the first half of the pool. I swam to the surface, and Alex told me, “You are not supposed to take the puck away when there is a penalty shot; we were just supposed to protect the sides of the penalty taker.” Andy Warhol once said, “In the future, everyone will be famous for 15 minutes.” It seems my future was not going to happen that day, and I would have to swallow much more chlorinated water before I became good at these underwater battles.

The game continued at the same pace for 20 more minutes. A lot of feeding-frenzy scenes, with human bodies all over each other like a Berghain dark room at 4 am. Diving down, fighting for the puck, remembering to get air. Rinse, repeat. I got a better feeling for the strategy of the game, which is not surprising considering where I started. One time I even succeeded in intercepting the torpedo with white pants, yay!

After half an hour the game ended. I thanked everybody for lending me their equipment and went for a long hot-shower meditation. My heart was still racing, and chlorinated water was running from my nose. When I arrived home, I dropped tiredly onto my bed. With a smile on my face. Maybe my underwater play was more that of a sea cow than a dolphin, but still, it was a great, great experience that I want to repeat.


Computers Have Had Emotions for Quite Some Time

A common assumption is that computers can’t have emotions. But there is a strong philosophical argument that AI systems have had emotions for many decades now.

Before making the argument, we need to define “emotion”. That definition shouldn’t require self-awareness (Reddit was quick to correct this) or physical manifestation.

Self-awareness can’t be a requirement for the presence of emotions, because that would contradict current research findings that even simple animals have emotions. Experiments on honeybees in 2011 showed that agitated honeybees display an increased expectation of bad outcomes, similar to the emotional states displayed by vertebrates. Research published in Science in 2014 concluded that crayfish show anxiety-like behavior controlled by serotonin. Yet we wouldn’t consider honeybees or crayfish to be self-aware. And you don’t even have to look to the animal world. When you are sleeping, you are not self-aware, yet when a nightmare wakes you up, would you say you didn’t experience emotions?

Physical manifestation in any form (facial expression, gesture, voice, sweating, heart rate, etc.) can’t be a requirement for the presence of emotions, because it would imply that people with complete paralysis (e.g. Stephen Hawking) don’t experience emotions. And, as before, we have the sleep problem: you experience emotions in your dreams, even when your body doesn’t show it.

This is a bit of a problem. As self-awareness is not a requirement, we can’t simply ask the subject if they experience emotions. As a physical manifestation is not a requirement, we can’t simply observe the subject. So, how do we determine if one is capable of emotional response?

As a starting point, let’s look at evolution:

The evolutionary purpose of emotions in animals and humans is to direct behavior toward specific, simple, innate needs: food, sex, shelter, teamwork, raising offspring, etc.

Emotional subsystems in living creatures do that by constantly analyzing their current model of the world. Generally wanted behavior produces positive emotions (happiness, love, etc.) while generally unwanted behavior produces negative emotions (fear, sadness, etc.).

Emotions are simple and sometimes irrational, so evolution enabled intelligence to partially suppress emotions. When we sense that lovely smell of freshly baked goods, we feel a craving to eat them, but we can suppress the urge because we know they are not healthy for us.

Based on that, we can provide a more general definition of “emotion” for any intelligent agent:

Emotion is an output of an irrational, built-in, fast subsystem that constantly evaluates the agent’s world model and directs the agent’s focus toward desired behavior.
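This definition can be sketched in a few lines of code. Everything below is hypothetical (the weights in `utility`, the world model as a plain dictionary); the point is only the shape the definition describes: a fast, simple scorer sitting beside the world model and steering the agent.

```python
# A minimal sketch of the definition above. All names and weights are
# hypothetical; the world model is a plain dictionary.

def utility(world):
    """The 'emotion': irrational, built-in, fast. It ignores almost all
    of the world model and reduces it to a single number."""
    return world.get("food", 0) * 2 - world.get("danger", 0) * 3

def choose_action(world, actions):
    """Pick the action whose predicted outcome the utility 'likes' most."""
    def predict(world, action):
        new = dict(world)
        new.update(action)  # naive prediction: the action overwrites state
        return new
    return max(actions, key=lambda a: utility(predict(world, a)))

world = {"food": 1, "danger": 2}
actions = [{"danger": 0}, {"food": 5, "danger": 5}]
safest = choose_action(world, actions)  # the low-danger action wins
```

Note that `utility` never enumerates consequences; it just scores whatever model it is handed, which is what makes it cheap enough to run constantly.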

Take a look at the classic diagram of a model-based, utility-based agent (from the textbook Artificial Intelligence: A Modern Approach), and you will find something similar:

Do you notice it? In the middle of the diagram stands this funny little artifact:

Even professional philosophers in the realm of AI have overlooked this. Many presume AI systems are rational problem solvers that calculate an optimal plan for achieving a goal. Utility-based agents are nothing like that. A utility function is always simple, ignores a lot of model detail, and is often wrong. It is an irrational component of the system.

But why would anybody put such a silly thing in code? Because introducing “happiness” to an AI system solves the computational explosion problem. The real world, and even many mathematical problems, have more possible outcomes than there are particles in the universe. A nonoptimal solution is better than no solution at all. And, paradoxically, utility-based agents make more efficient use of computational resources, so they produce better solutions.

To understand this, let’s examine two famous AI systems from the 1990s that used utility functions to play a simple game.

The first one is Deep Blue, a computer specifically designed to crunch chess data. It was a big black box with 30 processors and 480 special-purpose chess chips, capable of evaluating 200 million chess positions per second. But even that is not enough to play perfect chess: the Shannon number puts the lower bound on the number of possible chess games at 10^120. To overcome this, engineers could have limited the search to only N future chess moves. But there was a better approach: Deep Blue could plan further into the future if it could discard unpromising combinations.

Human chess players had long known an incorrect but fast way to do that: count the chess pieces on the board, weighting each by its value. Most chess books say that a pawn is worth one point and the queen is worth nine points. Deep Blue had such a utility function, which enabled it to search many moves deeper. With the help of this utility function, Deep Blue defeated Garry Kasparov in 1997.
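The piece-counting heuristic is simple enough to sketch in full. This is not Deep Blue's actual evaluation function (which weighed many more factors); it is the textbook material count described above, using the standard one-point pawn and nine-point queen.

```python
# The material-count heuristic described above, as a sketch. The board
# is just a list of piece letters (uppercase = white, lowercase = black);
# the values are the standard textbook ones.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_utility(board):
    """Fast, irrational evaluation: white's material minus black's.

    It ignores position, threats, and strategy entirely, which is
    exactly why it is cheap enough to run at every node of a deep search.
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# White is up a rook, black is up a pawn: 5 - 1 = +4 for white.
example = material_utility(["K", "R", "p", "k"])
```

A search program would call this millions of times per second to prune branches that look materially hopeless, exactly the trade-off the next two points describe.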

It is important to note two things:

  1. A utility function is irrational. Kids play chess by counting pieces; grandmasters do not. In the Game of the Century, 13-year-old Bobby Fischer defeated a top chess master by sacrificing his queen. He was executing a strategy, not counting pieces.
  2. A utility function needs to be irrational. If it were rational, it would calculate every possible move, which would make it slow and defeat its purpose. Instead, it needs to be simple and very fast, so it can be computed at every step of the search.

This chess experiment proved that utility-based agents that use “intuition” to reach solutions vastly outperform perfectly rational AI systems.

But it gets even better.

At the same time that IBM was pouring money into Deep Blue, two programmers started developing a downloadable chess program you could run on any PC. Deep Fritz ran on retail hardware, so it could analyze only 8 million positions per second, 25 times fewer than Deep Blue. But the developers realized they could compensate with a better utility function. After all, that is how humans play: they are slower but have stronger intuition.

In 1995, the Deep Blue prototype lost to Deep Fritz, which was running on a 90 MHz Pentium. How is it possible that the 25-times-slower computer won? It had a better utility function, one that made the program “happy” with better moves. Or should we say it had better “emotional intelligence”?

This shows the power of emotion. The immediacy of the real world requires that you sometimes stop thinking and just go with your gut feeling, programmed into you by billions of years of evolution. Not only is there a conflict between emotions and rationality, but different emotions also play tug-of-war with each other. For example, a hungry animal will overcome its fear and take risks to get food.

Note that in both higher-order animals and advanced AI systems, the fixed part of a utility function is augmented with utility calculations based on experience. For example, a fixed part of human taste perception is a love of sugars and a strong dislike for rotten eggs. But if one gets sick after eating a bowl of gummy bears, the association “gummy bears cause sickness” is stored and later retrieved as a disgusting taste. The author of this article is painfully aware of that association, after a particular gummy bear incident in his childhood.

To summarize the main points:

  • Emotions are fast subsystems that evaluate the agent’s current model of the world and constantly provide positive or negative feedback, directing action.
  • Because emotional subsystems need to provide immediate feedback, they need to be computationally fast. As a consequence, they are irrational.
  • Emotions are still rational on a statistical level, as they condense “knowledge” that has worked many times in the past.
  • In the case of animals, utility functions are crafted by evolution. In the case of AI agents, they are crafted by us. In both cases, a utility function can rapidly look up past experience to guide actions.
  • Real-world agents don’t have only one emotion but a myriad of them, the interplay of which directs agents into satisfying solutions.

In conclusion, an AI agent is emotional if it has a utility function that (a) is separate from the main computational part that contains the world model and (b) constantly monitors its world model and provides positive or negative feedback.

Utility-based agents that play chess satisfy those criteria, so I consider them emotional—although in a very primitive way.

Obviously, this is not the same as human emotions, which are much more intricate. But the principle is the same. The fact that honeybees and crayfish have very simple emotional subsystems doesn’t change the fact that they experience emotions. And if we consider honeybees and crayfish emotional, then we should do the same with complex utility-based agents.

This may feel implausible. But we need to ask ourselves, is that because the above arguments are wrong? Or, maybe, because the utility function in our brain is a little out of date?



Zeljko Svedic is a Berlin-based tech philosopher. If you liked this piece of modern philosophy, you will probably like Singularity and the Anthropocentric Bias and Car Sharing and the Death of Parking.

You Donate $400/Year Thanks To The Best Business Trick Ever

I’m going to tell you a story about one ingenious business model that the majority of people are not aware of. It costs the average US household around $400 per year. To understand the model, you’ll need to understand three economic concepts: the penny gap, the razor and blades business model, and Milton Friedman’s principle that there is “no such thing as a free lunch.”

Do you know what the penny gap is? If not, it boils down to this one eternal truth: people are cheap. They love free stuff and hate getting their wallets out. Even if you raise a price from free to one penny, the majority of people will refuse to pay that ridiculously tiny amount, unless they really need the product you’re selling. This obviously sucks for businesses.

Which is where the razor and blades model comes in, trying to get around the age-old problem of people being cheap. The trick is this — businesses lure customers in with some cheap product (like razors) or give it away for free. Well, “free.” Think: a free phone with a cellphone plan. The moment companies attract new customers, they then make money on the things the customers need to make the product work or via their service costs. Examples are inkjet printers and ink cartridges, phones and phone plans, gaming consoles and the games that go with them. And, of course, razors and blades — doubly so thanks to Gillette’s elaborate marketing claims of “innovative shaving technology”.

Businesses that succeed in pulling this off make so much money that even Scrooge McDuck drools over their profits. But there’s a catch — they’re going to need to lock customers in. These same companies don’t want competitors with low margins. So how do they stop someone from going to the cheaper ink cartridge shop down the road? Businesses add security modules to ink cartridges, patent blades, often lock phones to one carrier and make sure you can only use licensed games with their corresponding games console.

And sure, it’s smart. But no matter how streamlined their razor and blades model is, it still doesn’t solve the penny gap issue because customers still need to make peace with paying more money for additional products. Customers are human, which means they’re all about saving some of those dollar bills. They bellyache about the blade prices, fill ink cartridges with cheap replacement ink or switch their phone plan as soon as the contract is up. And businesses using the model may get rich, but their customers think they’re basically Satan with a tax identification number.

So what if you could hide those recurring costs? This is the ingenious part:

Indirect razors and blades model is an extension of the razors and blades model, where customers are not aware of the recurring costs, because they pay them indirectly.

Which is exactly what credit card companies do. Customers get credit cards for free. As a result, the average American owns 2.6 credit cards.

Then, every time a customer uses a credit card, there is a credit card processing fee. According to Helcim Inc’s list of interchange fees, US Mastercard and Visa credit card fees are between 1.51% and 2.95%. That doesn’t include extra fees like chargebacks or set-up fees.

Most customers don’t think about the processing fees, because they assume the businesses are shouldering those costs. However, economists know that there ain’t no such thing as a free lunch. Shops aren’t charities and they’re not going to donate money just for the hell of it. They calculate all of their business costs and then add their margin to it. Consider the following examples of “free” stuff:

  • Restaurants with “free service”, which cost more than self-service restaurants.
  • Businesses with “free parking”, which cost more than those without.
  • Shops in expensive rental locations, which have higher prices than the same shops in cheap rental locations.

As such, the effect of processing fees on the final price depends on how many customers use credit cards. If everybody used credit cards, the average price of goods would rise by around 2%. So if you had a choice between buying a laptop with a credit card for $500 or with cash for $490, would you still opt for the credit card? Presumably most people would opt for $490 and spend the change on lunch. But you don’t have that choice.

You’re not given that option for two reasons. Firstly, for many businesses it simply isn’t convenient to add a credit card surcharge. Secondly, even if businesses wanted to do that, surcharging everyday transactions is illegal in 10 US states. Molly Faust, a spokeswoman for American Express, justified their legal stance in the following statement: “We believe that surcharging credit card purchases is harmful to consumers.” How sweet of them to be so concerned for consumers’ well-being!

As a result, most businesses charge the same price regardless of whether a customer pays with cash or card. Which means all customers share the burden of credit card fees. If 50% of Acme Donuts’ customers use a credit card with a 2% fee, then the average price of donuts will be 1% higher, even for those customers who pay for their morning dose of sugar with cash. Apparently AmEx doesn’t think it is “harmful to consumers” when even people who don’t own a credit card pay a hidden fee.
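The Acme Donuts arithmetic generalizes to a single line. The helper below is a hypothetical back-of-the-envelope model, not a claim about how any real merchant prices goods: the hidden markup is just the card-paying share of customers times the fee rate.

```python
# The hidden-fee arithmetic from the Acme Donuts example above,
# generalized into one hypothetical helper.

def price_markup(card_share, fee_rate):
    """Average markup a shop must add so that card fees are covered,
    assuming every customer is charged the same price."""
    return card_share * fee_rate

# 50% of customers pay by card, and the card fee is 2%:
markup = price_markup(card_share=0.50, fee_rate=0.02)  # 0.01, i.e. 1%
```

With 100% card usage the markup equals the full fee, which is the 2% price rise mentioned earlier.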

However, credit card companies invented something way better than legal pressure. What if they could motivate customers to flex that plastic all the time, even when it’s not more convenient than paying with cash?

Welcome to reward programmes like Cash Back, Points, or Miles. Every time customers use the card they get a “reward”, even though the thing they get is actually their own money back, paid via higher prices. This prompts customers to use a credit card for a $5 drink despite having a $5 bill in their pocket. Unlike razors and blades, where customers try to consume less, in the indirect razors and blades model customers try to spend more. Doubly ingenious.

You can’t quibble with the results. The total credit card volume in the US in 2014 was $4 trillion — enough to “buy a Nissan Versa for every man, woman and a child”. But if that’s the total sales volume, how much do the customers pay in transaction fees after reward programs are paid out? Merchants Payments Coalition calculated that the average household in the US pays more than $400 annually in credit card fees. If customers knew in advance that they would have to pay over $400 per year, would they still use credit cards?

Which raises the question:

What can we, as a society, do about the credit card fee problem?

The Fight Club Approach

This is the radical solution your college-era socialist self would have been proud of: fighting against “evil” banks and credit card companies.

Unlikely as it sounds, this is exactly the approach taken here in Berlin. American visitors to the city are always shocked by the fact that establishments big and small refuse credit cards. Berlin is cheap and prides itself on being alternative. So it’s not exactly surprising that so many shop owners are trying to lower the costs by refusing credit cards.

There’s no disputing the anarchist charm of this. But I think that in the long run, it’s a little silly. Electronic payments are convenient and the future of currency; we can’t just ignore them.

Passing The Hot Potato Approach

This is the legal approach where countries adopt laws which limit how much credit companies can charge, in which ways they can charge, and who foots the bill.

For example, in 2014, the EU introduced legislation limiting credit card fees to 0.3% and debit card fees to 0.2%.

Similarly, in 2013, an anti-trust settlement in the US placed a cap on the fees that banks can charge merchants for handling debit card purchases. This was rolled back in 2016.

Lobbying for such laws can be well-intentioned but is ultimately useless: it doesn’t solve the structural payment problems. Since the cost of the technology remains the same, the credit card companies just shift costs elsewhere. For example, you can limit the processing fee, but the credit card companies and banks then ask for extra money elsewhere to cover their costs, like raising or introducing set-up fees and monthly maintenance fees.

A Proposal For A Modern Solution

We need to understand why fees are so high. In my opinion, it boils down to three components: high margins, high fraud rates and expensive proprietary transaction systems.

The margins in the US are set by payment networks such as Visa. The two largest credit card companies, Visa and Mastercard, have such a chokehold on the market that they change their interchange fees twice a year, in April and October. At the end of the first quarter of 2017, Forbes cited the ten largest card issuers in America as accounting for “almost 88% of total outstanding card balances in the country.” It is extremely hard for a new company to break into this market, as the major players have one key advantage: the network effect. Legislation is needed to help smaller, more efficient competitors carve out a slice of the market.

High fraud rates are a colossal problem for current credit card technology. The Nilson Report calculated that in 2015 credit card fraud totaled $21.84 billion. But that report doesn’t take into account the indirect costs of fraud, like the cost of issuing replacement cards or the cost of prevention. In 2016, LexisNexis estimated that for every $1 of direct fraud, there is $2.40 of indirect cost. “Yeah, and? Why should I care? The fat-cat credit card companies cover that cost.” Again, because of Milton Friedman’s “no such thing as a free lunch”: you’re shouldering the cost of credit card fraud via increased credit card fees. Current credit card technology is inherently insecure.

And finally, the third issue is that in order to validate credit card transactions you need to use the backend provided by credit card companies. They are proprietary, legacy systems that have no incentive to cut costs.

And because of the issues above, fees are higher than they should be. But by how much? For comparison, at the time of writing (August 2017), the average bitcoin transaction fee was 0.56%. Admittedly, this isn’t the best comparison, because bitcoin’s architecture rewards only the miners who win the arms race for the best custom hardware. Still, it is obvious that a modern cryptocurrency can deliver inherently safe transactions at a much lower cost than current credit card fees.

In my opinion, in order to change a status quo that has existed for the last half century, legislators need to pass bills which address these issues. In the 21st century, electronic payments are a vital part of common infrastructure, just like roads, the postal service, or the internet. And if you look back in history, there’s a particularly relevant comparison to be made. Credit card companies today can be compared to the railroad tycoons of the 19th century. After they built the railroads across the US, these same corporations had the power to “squeeze out competitors, force down prices paid for labor and raw materials, charge customers more and get special favors and treatments from National and State government.” Sounds familiar, right? Sometimes two companies would even compete on the same route with different track widths and different train specifications, as was the case with parts of the New York subway.

What we need now is government intervention, much like when President Eisenhower’s administration introduced the national network of highways that was the US interstate project and helped solve the transport issue. Private companies built portions of the interstate highways (and made a profit), but all highways were built to the same exacting standard, connected to each other in a meaningful nationwide network, open for use by all citizens, and connected parts of the country that were of vital interest.

So legislators, if you’re reading this, instead of spending time on laws which cap fees or pass them round the economy like a hot potato, consider focusing on laws which make the payment system more efficient.

The government could demand:

  • Transaction security: Future electronic payment protocols must be cryptographically safe, which will eliminate fraud costs. That’s a win for companies and a win for consumers.
  • Openness: Popular electronic payment protocols need to be open and have transparent charging.
  • Competition-friendliness: Small companies should be able to connect to the open payment network and offer transaction validation services on their own servers, encouraging healthy competition.

If the ideas above are implemented, there won’t be a need to limit credit card fees. With multiple companies competing over a modern, cryptographically safe protocol, fees will naturally go down. That old Adam Smith chestnut, the invisible hand of the market, will do its job. It might even give us a thumbs up.

We’ll have laid the bricks for a road leading to innovation, not away from it. And hey, things can stay flexible. It’s no big deal if new protocols are introduced, as long as they satisfy security, openness and pro-competition requirements.

Compare that to the current situation, where new players like Apple Pay and PayPal offer better technology but are still proprietary systems. If Apple succeeds in dominating the market with Apple Pay, then they’ll be the new king of the indirect razors and blades model, and will probably respond to that power much as any organization responds to dominating a market: by taking advantage of consumers. What is even worse, governments around the world are already familiar with exactly the kind of legislation I’ve outlined above, but for different markets. Just peep the regulations for energy markets or TV and radio broadcasting. They are all run by private companies, but legislators understand that electricity and broadcasting need to be run in the public’s interest, because they grasp one basic rule of economics. Namely, that it isn’t in the public’s interest to allow monopolies.

Meanwhile, the credit card industry still seems to be stuck in the decade Mad Men was set in. And funnily enough, my nostalgia for the era of the Beach Boys doesn’t extend to how financial security was handled back then. Every time I hit a restaurant, I worry that the waiter is going to copy out my credit card details to support his online gambling habit, because it is so freaking easy. But hey, I shouldn’t worry, because the 2% credit card fee includes insurance against this current, inefficient system.

Whether or not you agree with the exact improvements I’ve outlined above, it’s clear that the system needs reform. If you agree that the current credit card system is ridiculous in this day and age, share this article.


Authors: Zeljko Svedic and Sophie Atkinson. Reprints are possible under the terms of the Creative Commons BY-NC-ND License, but please send us an email.

Wanted: Collaborative Writer in Berlin

“The advantage of collaborative writing is that you end up with something for which you will not be personally blamed.”—Scott Adams

This is a unique job, for unique writers. The client is a well-off individual, the owner of a boring software company. To compensate for that, he writes long, in-depth articles for his blog, Vice Motherboard or scripts for his YouTube channel. The problem is that he writes slowly, has little time, and has another 50+ ideas for unfinished articles. This is where you come in.

Your job will be to meet the client in Prenzlauer Berg, Berlin, and collaboratively work on new writing projects. The client will provide you with an idea, the reasoning behind an article, and an outline of a text. Your creative neurons will then do the magic of converting the rough idea into a popular article that will be loved and shared by geeks worldwide. This is not ghostwriting; you are going to be a co-author of the piece. The salary starts from 260 EUR per thousand words.

Sounds interesting? However, there are some requirements you need to fulfill:

  • You need to be a better writer than the client. “Better” is a subjective term, but the number of readers and shares is not. Be prepared to show your best work and its impact.
  • You need to be on the geeky/science/philosophy side. As you may have noticed, all the articles above are non-fiction and deep into geek culture.
  • You need to be funnier than the client. That is not going to be hard.
  • Native or near-native English writing skills.

And to recap, the benefits are:

  • 20 hours per week (half-time position).
  • Location in Prenzlauer Berg, Berlin.
  • Working on a variety of interesting tech and science topics.
  • Competitive salary, starting from 260 EUR per thousand words.

Are you ready to change the world with your writing? Apply here.


MasterCard Serbia asked ladies to share FB photos of, among other things, their credit card

Credit card companies should know all about phishing, right? McCann should know all about marketing, right? Combine the two in Serbia and you will get a marketing campaign that just went viral, although for the wrong reasons.

Mastercard Serbia organised a prize contest, “Always with you”, asking female customers to share the contents of their purses on Facebook. If you read the text carefully, you are not required to photograph your card. However, the example photo clearly shows the credit card details of a fictitious customer:

Lured by prizes, many customers posted photos of their private stuff. And some copied the Mastercard promo: their credit card, with full details, is visible in the photo:

This is the first phishing campaign I know of that was organised by a credit card company itself!

The funny thing is that nobody at Mastercard, at the McCann agency, or on the legal team noticed the problem. There is a lengthy legal document explaining the conditions of the prize contest:

That document is signed by Mastercard Europe SA and McCann Ltd Belgrade, so it seems to have passed multiple levels of corporate approval. And Mastercard didn’t notice the problem until six days later, when a Serbian security blogger wrote about it.

In my modest opinion, the lesson of this story is to be careful how you hire. I am biased because I run an employee assessment company, but smiling people with lovely résumés can still be bozos. And when you have incompetent people in the company, it doesn’t matter what formal company procedures you have in place.


P.S. As user edent from HN noticed, sharing photos of credit cards is nothing unusual on Twitter:

P.P.S. As of today (May 18), the entire “Always with you” campaign has been deleted from Facebook.


10 years of experience selling IaaS or PaaS

Today, a friend sent me a funny Google job posting. Here is the highlight:

10 years of sales experience? Amazon EC2 (IaaS) only came out of beta in Oct 2008, and Google App Engine (PaaS) only had a limited release in Apr 2008. It is now Feb 2017, so even if you had started selling EC2 or App Engine on the very first day, you would only have 8 years of experience.

I know you are Google, but that is a bit too high a bar. You still haven’t invented the time machine.


Car Sharing and the Death of Parking

Article was originally created for Vice Motherboard, which holds distribution rights till Sep 2018.

Rise of parking spaces in Los Angeles

Sometimes the future arrives humbly in our everyday life, until one day we realize its implications. Carsharing is like that—I was ignoring it until I noticed car2go cars popping up around Berlin:


I had tried ZipCar (USA) and Co-wheels (UK) before, but this was different. ZipCar and Co-wheels cars had to be booked for a few hours and returned to the same spot. Car2go let me book a car by the minute and leave it anywhere in the city zone. When I reach my destination, I can park anywhere, sometimes using smart parking, to the enormous joy of the parking-seeking SUV owner behind me. When going somewhere in the city, driving back and forth takes less than an hour, so for the rest of the evening that car2go can be used by other users.

One alternative to carsharing is ridesharing (Uber, Lyft, or similar), but ridesharing is more expensive (you need to pay for a driver) and I will argue that it is just an intermediate step until we have self-driving cars.

Both carsharing and ridesharing solve the biggest problem of cars in the city: utilization. The average private car spends 95 percent of its time parked somewhere, waiting faithfully for you to finish your work, your shopping, or a great social time that leaves you too intoxicated to operate it.

The death of parking

In comparison, a shared car with 50 percent usage has 10 times better utilization and needs parking only half the time. But, that doesn’t mean that the ideal carsharing city will need half the parking spaces. Surprisingly, carsharing would reduce the number of parking spaces a city needs by more than 10 times.

Let’s calculate for total carsharing (all private cars replaced with shared cars) with 10x better utilization:

                            Private car    Shared car (10x)
Used                        5%             10 × 5% = 50%
Number of cars in the city  N              N / 10
Parking places needed       N × 95%        (N / 10) × 50% = N × 5%
Parking reduction                          (N × 95%) / (N × 5%) = 19x

Ideally, if shared cars are used 10x more, we need 10x fewer of them to power the city. But since they also spend less time parked, we need 19x fewer parking spaces!
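The 19x figure can be checked with a few lines of arithmetic. A minimal sketch, using the article’s own assumptions (5 percent usage for private cars, a 10x utilization multiplier for shared cars):

```python
# Article's figures: a private car sits parked 95% of the time; a shared
# car used 10x as much sits parked only 50% of the time.
private_parked = 1 - 0.05                 # fraction of N private cars parked
shared_cars = 1 / 10                      # N/10 shared cars replace them
shared_parked = shared_cars * (1 - 0.50)  # parked fraction of the smaller fleet

reduction = private_parked / shared_parked
print(f"{reduction:.0f}x fewer parking spaces")  # 19x fewer parking spaces
```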

But there is a miscalculation in the above math.

It is questionable whether 50 percent carsharing utilization can be achieved because of rush hours and the suburban commute.

Rush hours mean that most people want to use cars during peak times. Let’s suppose that everyone needs a car during a three-hour peak and that the average non-rush commute lasts 30 minutes (I will explain later why I’m using a non-rush commute). A shared car can then serve six commutes per peak, so we can only have 6x fewer shared cars than private cars, not 10x.

But an even larger problem is the suburban commute—from suburbia to the city in the morning, and the other way round in the afternoon. The first commuter in the morning leaves a shared car in the wrong place—in the city. This is not such a big problem in Berlin, because people live and work in all neighborhoods of the city. But it is a big problem for American cities because of their typical suburban sprawl. Every morning, the number of shared cars in your cul-de-sac should match the number of morning commuters. Maybe that is one reason ZipCar in the US allows one-way trips only with designated cars and only in Boston, LA, and Denver.

Self-driving cars come to the rescue. They could drive you to the city and then come back to pick up the next commuter. This halves the efficiency, but is still better than leaving cars idly parked. As the original 10x utilization was probably too optimistic, let’s recalculate using 6x and 3x:

                            Shared car (6x)   Shared car (3x)   Shared self-driving car (3x)
Used                        6 × 5% = 30%      3 × 5% = 15%      3 × 5% × 2 = 30%
Number of cars in the city  N / 6             N / 3             N / 3
Parking places needed       (N / 6) × 70%     (N / 3) × 85%     (N / 3) × 70%
                            = N × 11.7%       = N × 28.3%       = N × 23.3%
Parking reduction           95% / 11.7%       95% / 28.3%       95% / 23.3%
                            = 8.1x            = 3.4x            = 4.07x

If everybody commutes from suburbia to the city and utilization is only 3x, the city gets to have 3.4x fewer parking lots, not bad! With self-driving cars, cities can reclaim even more street space. When they are not needed, an army of self-driving cars can drive themselves to multilevel garages or off-city parking.
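The 6x and 3x scenarios follow the same arithmetic, with one twist for self-driving cars: returning empty for the next commuter doubles a car’s time on the road per trip. A minimal sketch, using the article’s 5 percent usage figure:

```python
PRIVATE_USE = 0.05  # a private car is in use 5% of the time (article's figure)

def parking_reduction(k, self_driving=False):
    """Parking spaces saved when N private cars are replaced by N/k shared
    cars, each used k times as much. A self-driving car drives back empty,
    so it spends twice as long on the road for the same trips."""
    in_use = PRIVATE_USE * k * (2 if self_driving else 1)
    shared_parking = (1 / k) * (1 - in_use)  # parked fraction of the N/k fleet
    return (1 - PRIVATE_USE) / shared_parking

print(round(parking_reduction(6), 1))                     # 8.1
print(round(parking_reduction(3), 1))                     # 3.4
print(round(parking_reduction(3, self_driving=True), 2))  # 4.07
```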

It gets better. If you have ever bought a private car, you probably sized it for the worst case—what is the longest trip you will ever need it for? Because you go camping twice a year, you commute to work in a large sedan or SUV. Alone. When picking a shared car, you do the opposite: you pick the smallest car that will get you to your destination. And two smart cars fit in a single parking space.

This is a eureka moment for carsharing and self-driving cars. Most people I talk with think the cities of the future will be similar to today, except that you will own an electric self-driving car. In my modest opinion, that is similar to people of the 19th century imagining faster horses.

But wait, there is more

The annihilation of parking lots is just one of the benefits of carsharing:

  • Currently, if you use a private car to travel to a destination, you also need to use it to return from a destination. Carsharing cooperates with other modes of transport. Go somewhere with a foldable bicycle, and if it starts to rain, no problem. Book the closest shared car and put the bicycle in the trunk. Go to a bar with a shared car, get tipsy, and book a Lyft ride back home.
  • Fewer parked cars means you spend less time looking for parking. Research shows that on average, 30 percent of the cars in congested downtown traffic are cruising for parking.
  • You need to walk from your location to the location where you parked a private car. In an ideal carsharing city, you just walk out and take the first shared car available outside.
  • Because people are driving smaller shared cars, there is less pollution.
  • If you need a van, a truck, or a limousine, you just find and book one using a smartphone.
  • Insurance and servicing is handled by the carsharing company, not you. Because they have a huge fleet, they get volume pricing.
  • When your car breaks, you don’t need a replacement car. Every carshare you see is your “replacement” car.
  • With less need for parking space, through streets can ditch side parking and add two extra traffic lanes.

Not everything about carsharing is perfect. Sometimes the shared car I got wasn’t quite clean—somebody had transported a dog on a passenger seat. But, when I think about it, I didn’t clean my previous private car for months and sometimes it looked like I was transporting pigs with diarrhea, so maybe I shouldn’t complain.

How does the future look now?

Berlin is quite competitive, so we get a small glimpse of the future. Car2go, owned by Daimler AG, originally offered only smart cars. Car2go’s biggest competitor is DriveNow, owned by BMW and Sixt, which offers Minis and BMWs, like this electric i3:

Car2go decided to pimp up its rides, so now you can book a Merc:

Citroen also decided to join the party. The company offers a fleet of mostly electric C-Zeros with Multicity:

Volkswagen got angry that Mercedes and BMW were eating all the cake, so it purchased a 60 percent stake in Greenwheels:

While Sixt is partnering with BMW, Hertz has its own Hertz On Demand, although it is obvious from its website that Hertz is still in a rent-a-car mindset and doesn’t understand how the new thing works.

But why stop at cars? Other vehicles have the same problem; you only use them 5 percent of the time. eMio offers the same sharing concept for electric scooters:

Don’t laugh at the idea of shared scooters. This is a cultural thing—while in the US, the ideal transportation vehicle is a sedan and in Europe a compact car, two billion people in Asia consider scooters a family transport solution. Look at this nice family in Vietnam:

And eMio is not the only one. Just last month, Coup launched a fleet of 200 beautifully designed Gogoro electric shared scooters in Berlin:

Both Coup and eMio have an unusual charging solution: their teams go around the city and swap empty batteries for full ones.

Other carsharing companies have “socially automated” refueling. For example, in car2go you don’t ever have to refuel, but they give you 10 free minutes if you fill up a car with less than a quarter of a tank of gas.

Prices are already reasonable. In my experience, a car2go smart is half the price of Uber in Berlin (which is not the real Uber, to be honest). But prices can go even lower with better utilization and economies of scale.

Finally, tourists can rent a NextBike bicycle from 1€ per 30 min.

As you can see, the situation here is quite complicated, and I know what some entrepreneurial readers are thinking. But hold your horses, as there is already an app that displays all of the above on the same map:


Death of traffic jams (and birth of queues)

More radical changes will happen when shared cars become a majority in the city.

Total carsharing can eliminate the traffic jams of rush hour—but that doesn’t mean you won’t have to wait.

Why does a traffic jam happen, anyway? Everyone jumps into their private car at once and decides to drive along a similar route. Main routes have limited throughput, so you end up queueing at junctions and on the highway. The queue just makes things worse, as it lowers car throughput. It is an expensive system in which you line up in a running car, waiting for your turn. With total carsharing, that can’t happen. Since there are 3x or 6x fewer cars available, there is no way everybody can just jump in a car and go. Now you don’t wait on the highway, you wait for your turn to get a shared car. I would argue that this is better because:

  • You are going to wait in your home or office (for a car to become available), not on the highway.
  • There is less chance of some route “jamming” and reducing car throughput.

But waiting for shared cars opens two completely new scenarios:

  1. “Shared” carsharing. Imagine that you open a carsharing app of the future and request a car. The app puts you in a waiting queue and says that the estimated waiting time for the next car is 30 minutes. But someone from the same street is going to the same part of town. The app offers to cut your waiting time to 15 minutes if you both share the same car. Since you don’t know the person, the app offers to increase security by enabling a video camera inside the car (it is there anyway, to check whether you left the car clean). You accept the pooled ride, but decline the camera option, as the other person’s profile is rated 4.5 stars. Your car is available in 15 minutes.
  2. “Premium” shared cars. Let’s say you are in a hurry and don’t want to use a carsharing company that tries to maximize car usage. You use a more expensive carsharing company that promises to have a car available in five minutes or the ride is on them. You pay a premium to get somewhere faster. It’s a nice system, although I guess in some posh downtowns everybody will try to use the premium shared cars, in which case you are back to square one. Then you need a “super-premium” car share. Another option is for existing carsharing companies to add surge pricing, but Uber showed that paying 4x more for basically the same service didn’t go down well with customers.

Rebirth of the parking space

If all that space becomes available, cities can reclaim it for public use. This is especially true in Europe, where cities were never designed for cars—to make room for them, something had to be given away. Year by year, streets have been made narrower by side parking, parks have been converted to parking, and new buildings have been constructed with large parking lots next to them. If the majority of the transportation burden falls to shared cars, buildings will just need a “valet” parking area in the front. The valet will not be a real person—but your smartphone.

That could dramatically change suburban landscapes, where every business has its own large parking area. But even the dense city grid can be changed. For example, although Barcelona is known as a well-planned city, most streets today are taken by cars. People got excited a few weeks ago when a plan for “superblocks” was announced. The idea is to designate one-third of the streets as through roads, and two-thirds as pedestrian-friendly zones. The problem is that the second phase of the plan calls for eliminating parking in the pedestrian-friendly zones by building off-street garages for every superblock. That is an enormous capital project for the city. With carsharing, the solution becomes easier:

  • Make every second street a through street. Eliminate side parking in through streets to add two additional lanes of throughput.
  • Make other streets half dead-end streets (used for parking of car shares), half pedestrian-only zones.

See the illustration below:


This solution builds on the existing infrastructure (no new garages are needed), and you get a mini city square in place of every fourth intersection. Side parking places are reduced 4x, which is achievable with carsharing. The longest walking distance to an available car is one block.

Think what all that change would mean for Los Angeles, for example. It currently has 200 square miles covered with parking lots, 1.4x more than the total land taken up by streets and freeways.

All that transformation would be powered by the simple idea:

The purpose of cars is to be driven, not parked.

The heroes of the future

Some people saw the future a long time ago.

Zipcar, Flexcar, and City Car Club were all started in 2000. But they missed the convenience of a smartphone.

In 2007, Steve Jobs announced the iPhone and, a few years later, ridesharing companies started popping up in San Francisco: Uber in 2009, Sidecar in 2011, and Lyft in 2012.

In 2010, car2go launched publicly in Austin, Texas.

All those services were convenient and cheap, and big companies started paying attention.

In 2014, Sergey Brin said this of Google’s self-driving car: “So with self-driving cars, you don’t really need much in the way of parking, because you don’t need one car per person. They just come and get you when you need them.”

In 2016, Elon Musk unveiled his master plan, which states: “You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you’re at work or on vacation.”

In 2015, even GM said: “We’ve come to the conclusion that our industry within the context of today’s traditional vehicles and today’s traditional business model is not sustainable in its current form.”

Brave words from an old-school carmaker! I would also count the innovative people at GM, Daimler, BMW, Ford, and VW as heroes, although they disguise themselves really well under their grey suits.

But every story of heroes also has a story of…

The villains

Change management 101: when there is a big change, no matter how good, someone is going to oppose it. In this case, it seems that among the villains are the people we elected to work in our interest.

The private car is not a fair competitor. Parking is subsidized by politicians and ordinary people alike. People want “free” parking, but do you really think that 16.8 m² of valuable city land is “free”? It is not just taxpayers’ money. When you go to a McDonald’s, a parking fee is hidden in the price of the burger, because the owner needed to purchase land for the parking lot. When you purchase a condo, the price is higher because the developer needed to build underground parking.

The book The High Cost of Free Parking estimates that the total cost of “free” parking in the U.S. is 1-4% of the GNP. (I also highly recommend that you listen to the Parking is Hell episode of the Freakonomics podcast.)  The economic price of monthly parking in big cities goes from $438 in Boston, to $719 in Rome, to a staggering $1084 in London.

What puzzles economists is simple math to politicians. Giving people affordable parking gets them votes. My hometown of Zagreb has some great populists in power. As a city center resident, you can leave your piece of metal next to the parliament for $15. Per month. For years I complained about the price of parking, but then I realized that maybe I should shut up.

If the price of parking were subject to market forces, math would be simple. Shared cars would spend less time parked and you would share the price of parking with other carsharing users. With private cars, it would be your sole responsibility to pay $500 per month for parking.

But a mayor who introduces an economic price of parking would soon be impeached. So maybe the real villain of this story is not the politician, but you, dear voter?


It seems that the future of urban transport is electric, self-driving shared cars. But that electric future requires new cars with great batteries, while self-driving cars are five years out. Both are going to be more expensive.

However, carsharing is already everywhere. There are rideshares like Uber and Lyft. You can convert your existing private car to a shared car with an electronics kit, such as the $99 Getaround Connect. With new legislation in cities that promotes the sharing of cars and doesn’t subsidize parking, we can have more liveable cities and better urban transport now, without large capital investments.

But for that, we need a change in mentality. If you agree with that, spread the word.


UPDATE: check discussions on Hacker News and Reddit.

Why App Stores Need Progressive Pricing


In this ever-changing world, one thing stays stubbornly the same: app store pricing.

The mother of all app stores, the Apple App Store, arrived in July 2008 with a flat commission: 70 percent to the developer, 30 percent to Apple. Android Market (now Google Play) was introduced two months later with the same cut: 70/30. Windows Store was four years late to the party, so Microsoft decided to set out bait: developers started with a 70/30 cut, but were upgraded to an 80/20 cut after reaching $25,000 in sales.

In eight years, Apple experimented with dubious design choices and the Microsoft board decided that Ballmer should stop dancing, but app store pricing didn’t change. Yes, Apple introduced an 85/15 cut, but only for subscriptions, and only after the first year. On the other side, Microsoft ditched its initial discount in 2015 and went with the standard 70/30. Which begs the question:

Is 70/30 some magic ratio or just an arbitrary convention?

Let’s examine that. From a developer’s perspective, the app stores provide the following benefits:

  • Payment processing. For developers, it eliminates the pain of connecting to credit card systems and merchant accounts. For users, it reduces payment friction, making them buy more.
  • Hosting. App stores do reliable hosting, even when there is a surge of traffic. No more updating servers, or waking up in the night because of a hacker intrusion or a DDoS attack.
  • Quality review. Before publishing, apps need to pass an acceptance review. Developers often hate this procedure, but a marketplace without viruses or broken apps makes a user experience better. Satisfied users buy more.
  • Marketing. It is hard to reach users. App stores promise that if you have a high-quality app, it will go up in the rankings and maybe end up in the featured section.
  • Platform access. Apple, Google, and Microsoft invested hugely in creating a platform and devices on which you can run your apps. Maybe a part of their 30 percent cut is a fee to access their platforms?

Reasons to use app stores are quite compelling, and all platforms are moving in that direction.

But, the value of listed benefits changes significantly with the perceived user value of the app. This dynamic is not intuitive, so let’s use two imaginary developers as an example:

The FlappyElephant app is a simple, casual game, made by one developer in his spare time. It costs $1.

The AcmeShop app is a complex editing tool for photographers and illustrators. Made by a team of 200 people, it costs $100.

These developers’ views on the above app store benefits are quite different:

Payment processing
FlappyElephant: Great, I get charged only 30 cents on the dollar! Other payment collectors charge up to 75 cents per transaction. And there is no way a customer would otherwise take out a credit card for a $1 indie game.
AcmeShop: $30 per transaction!? Our Stripe-powered webshop costs us $3.20 per transaction (2.9% + 30¢), 9.4x less!
Hosting
FlappyElephant: After I deploy, I don’t have to worry about it. It can scale and customers will get the update automatically.
AcmeShop: We already have our own servers; the app store is just one more place where we need to deploy.
Quality review
FlappyElephant: Annoying, but at least they let me know it breaks on tablets.
AcmeShop: Every release is delayed for two days. On iOS, ratings are reset after every release.
Marketing
FlappyElephant: I can’t believe so many people are finding my small app. Otherwise I would be roasted; AdWords is $1 per click and nobody searches Google for “flappy elephant”.
AcmeShop: People buy our $100 app because they have known us for 10 years, not because they noticed us while scrolling a list with 50 other apps.
Platform access
FlappyElephant: If there were no smartphones, there would be no FlappyElephant!
AcmeShop: If there were no tools like ours, creative professionals wouldn’t use the platform!

Two app developers, two very different stories. While FlappyElephant’s developers would pay even 50 percent, AcmeShop’s developers consider everything above 10 percent to be a ripoff.
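The payment-processing gap is simple arithmetic. A minimal sketch, using the flat 30 percent store cut and the Stripe 2.9% + 30¢ rate quoted in the article:

```python
def app_store_fee(price, cut=0.30):
    """Flat 30 percent app store commission."""
    return price * cut

def stripe_fee(price, rate=0.029, fixed=0.30):
    """Stripe's standard per-transaction pricing: 2.9% + 30 cents."""
    return price * rate + fixed

price = 100.0  # AcmeShop's price point
print(round(app_store_fee(price), 2))                      # 30.0
print(round(stripe_fee(price), 2))                         # 3.2
print(round(app_store_fee(price) / stripe_fee(price), 1))  # 9.4
```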

There is a way to satisfy both parties: progressive pricing. The commission should fall as the price of the app increases, which can be implemented in many ways.

For example, this funky-looking formula:

Commission = 22 / (9 x Price + 90) + 7 / 90

has the nice property that the commissions for $1, $10, and $100 are round numbers:


Price can be either the actual transaction price or, arguably more fairly, the cumulative user spend per app. In the latter case, after a user pays for a $10 monthly subscription ten times, the cumulative spend is $100 and the developer gets a 10% commission. Again, this is just one of many progressive pricing options.
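Evaluating the formula directly confirms the round numbers. A minimal sketch of the article’s own proposal (not any store’s actual pricing):

```python
def commission(price):
    """The article's proposed progressive commission:
    22 / (9 * Price + 90) + 7 / 90."""
    return 22 / (9 * price + 90) + 7 / 90

for price in (1, 10, 100):
    print(f"${price} app -> {commission(price):.0%} commission")
# $1 app -> 30% commission
# $10 app -> 20% commission
# $100 app -> 10% commission
```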

I think that makes perfect sense. I purchase many $1 apps impulsively, thanks to the app stores. But I never purchase anything above $20 without going to the Internet and researching all the alternatives. I buy an expensive app because I trust the developer, and then the app store just makes it more expensive. Not just 30 percent: from the developer’s side, app stores make it 42.8 percent more expensive (30/70 = 42.8%).

Of course, big developers like AcmeShop are not stupid. They have found a way to have their cake and eat it too. The solution is simple:

  1. Make your apps free.
  2. In order to use the app, users need a separate “cloud” account.
  3. The free account doesn’t offer much.
  4. The free app unlocks content or features that can be purchased in the “cloud.”

One by one, big developers have started implementing exactly that strategy.

For example, the Amazon Kindle iOS app doesn’t allow book purchasing:


Kindle’s Android app is even more blunt; it circumvents Play Store with a built-in browser (!):


Microsoft Office mobile is free for devices smaller than 10.1 inches, but larger devices need a separate Office 365 subscription:


Spotify has a slightly different system. It offers in-app purchases, but they are more expensive than purchasing a subscription on their website. Spotify even sends an email to users warning that they made a stupid decision:


Practically every music subscription service has been circumventing app store payments since 2011.

So, congratulations, dear app store product manager. You just shot yourself in the foot. You were greedy for 30 percent and now you are getting zero percent. And users of your app store are annoyed that purchasing something requires switching to a browser. But what can you do? If you kick Kindle, Office, and Spotify off your app store, then nobody will care about your platform. So maybe big developers are right—maybe you should pay them to publish great software on your store? Like when Microsoft was paying selected developers up to $100,000 to port their apps to Windows Phone.

Mobile app stores have a problem with big developers avoiding payment systems, but desktop app stores have an even bigger problem: they are avoided altogether.

This year, the co-founder of Epic Games wrote a long rant about UWP and Windows 10 Store, asking for Microsoft’s guarantee (among other things) that:

“…users, developers, and publishers will always be free to engage in direct commerce with each other, without Microsoft forcing everyone into its formative in-app commerce monopoly and taking a 30% cut.”

But the Windows 10 Store is good compared to the Mac App Store, which is a joke. It is only useful for downloading Apple apps—in which case Apple pays a commission to itself. Even top-grossing featured apps are leaving, and switching to manual installation. Compare that experience to that of a mobile app store install:

macOS manual install:
  1. Google for the developer’s webpage.
  2. Find and download the Mac version.
  3. Mount the downloaded DMG (double-click).
  4. Open the mounted drive. It contains two files: an app and a shortcut to the Applications folder.
  5. If you clicked the app, you just made a mistake! That just runs the app from the mounted drive.
  6. Instead, drag and drop the app to the Applications folder (there is usually an arrow so you don’t get confused about what to drag where).
  7. Eject the mounted drive (using the right-click menu).
  8. Delete the DMG.
  9. When starting the app for the first time, authorize it in a security dialog.

iOS:
  1. Find the app in the app store.
  2. Tap the “Get” button.

Mounting drives? Dragging and dropping to a system folder? What is this, an 80s Mac with a floppy drive?!

And, in case a Mac app doesn’t have an automatic updater, for every upgrade you have to repeat the exact same procedure.

On Windows, manual installation is a few steps simpler, and you often get some nice malware as a reward for your effort. Take the PCs of my extended family: one of them has so much malware it would be the envy of Kaspersky Lab researchers.

Why are Mac and Windows still in the Stone Age of app distribution?

Back to the original question. I argue that the 70/30 cut is an arbitrary ratio trying to be one-size-fits-all. It fails because the value proposition is completely different for developers of low-price versus high-price products. And app stores fail to profit on high-price apps, because the actual purchase happens somewhere else.

So, we are now in a triple loss-loss-loss situation:

  • Users have a bad experience because purchasing or installing an app is convoluted.
  • Developers have to create workarounds that create user friction.
  • App store owners make zero money on high-price products.

And it is all because of tech politics.

I will end with that conclusion, as I need to go and mount/unmount some drives.


UPDATE: Check the discussion on Reddit.

Web bloat solution: PXT Protocol

After many months in the making, today we are happy to announce v1 of PXT Protocol (MIT license). This is a big thing for our small team, as we aim to provide an alternative to HTTP/HTML.
Before I dive into technical details of our unconventional approach, I must explain the rationale. Bear with me.

Web bloat

Today’s web is in a deep obesity crisis. Bloggers like Maciej, Ronan, and Tammy have been writing about it, and this chart summarizes it all:


Notice the exponential growth. As of July 2016, the average web page is 2468 kB in size and requires 143 requests.

But computers and bandwidth are also getting exponentially faster, so what’s the problem?

Web bloat creates four “S” problems:

  1. Size. A few years ago, a 200 MB/month phone data plan was enough. Today my 2 GB plan disappears faster than a Vaporeon in Pokémon Go.
  2. Speed. The web can be 10x faster. Especially over mobile networks, as phone screens need to show fewer elements.
  3. Security. The modern browser is actually an OS that needs to support multiple versions of HTML, CSS, JavaScript, SVG, 8+ image formats, 9+ video formats, 8+ audio formats, and often adds a crappy plugin system just for fun. That means the browser you are looking at right now has more holes than a pasta strainer. Some of them would allow me root access to your system right now; I just need to offer enough bitcoins on a marketplace for zero-day exploits.
  4. Support. All that bloat needs to be implemented and maintained by people. Front-end has become so complicated that now designers who can also code are called unicorns.

One can say “Problems, schmoblems! We had problems like this in the past, and we lived with them. The average web page will continue to grow.”

No, it will not. Because there is a magic limit—let’s call it the bloat inflection point:


For pages that are small and non-bloated (most pre-2010 pages), PXT only solves security and support problems. But today’s average web page will also gain big size and speed improvements. The Internet passed the bloat inflection point early this year, and nobody noticed.

PXT solves these problems by focusing on the core: the presentation. The majority of bloat pushed to client browsers has only one purpose—to render the page. Even JavaScript is mostly used to manipulate DOM. Images alone comprise 62% of a page’s total weight. Often images are not resized or optimized.

Responsive webs just make it worse. The fashion now is to have one sentence per viewport and then a gigantic background image behind it.

Developers have gotten lazier and lazier over the years. At the same time, compression technologies got better, both lossless and lossy. So we got an idea…

What if a client-specific page was rendered on a server, and then
sent to a “dumb browser” using the most efficient compression?

Like all great ideas, this sounds quite dumb. I mean, sending text as compressed images?! But I did a quick test…

Demo time

Let me show you a simple non-PXT demo; you can follow it without installing any software.

The procedure is simple:

  1. Find a typical bloated web page.
  2. Measure total page size and # of requests. I used the Pingdom speed test.
  3. Take a full page screenshot. I used the Full page screen capture Chrome extension.
  4. Put the results into a table and calculate the bloat score.

Bloat score (BS for short) is defined as:

BS = TotalPageSize / ImageSize

We can derive a nice rule from the bloat score:

You know your web is crap if the full image representation of the page is smaller than the actual page (BS > 1).
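The arithmetic behind the bloat score is trivial; here is a minimal sketch, using the Betahaus numbers from the comparison below (5,100 kB page, 3,670 kB full-page PNG, 393 kB viewport TinyPNG):

```python
# Bloat score: how many times heavier a page is than a plain
# screenshot of itself. BS > 1 means the page is crap.

def bloat_score(total_page_kb: float, image_kb: float) -> float:
    """BS = TotalPageSize / ImageSize."""
    return total_page_kb / image_kb

full_png = round(bloat_score(5100, 3670), 1)   # full-page lossless PNG
viewport = round(bloat_score(5100, 393), 1)    # viewport TinyPNG
print(full_png, viewport)  # 1.4 13.0
```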

I expected some screenshots to beat full page loads, but I was wrong. Screenshots won in every case. See for yourself in the table below: Image columns contain links to comparison images.


(1366 x ?)
(1366 x ?)

(1366 x 768)
PageSize (kB)# of req.Image (kB)BSImage (kB)BSImage (kB)BS
TechTimes Google
Tags Slow Websites
Vice Bootnet to
Destroy Spotify
RTWeekly Future of
Data Computing
Betahaus Creative Problem Solving5,100553,6701.48715.939313.0

Which column should you look at? That is highly debatable:

  • Full PNG column represents the entire page as a lossless PNG. Pixel perfect, but a bit unfair, because PNG screenshots are lossless and therefore compress worse if the original page contained lossy JPEGs.
  • Full TinyPNG column represents the entire page as a color-indexed PNG.
  • Viewport TinyPNG column uses a color-indexed PNG of a typical viewport. The idea is that since 77% of users close the page without scrolling down, it doesn’t make sense to load the entire page for them.

So, depending on how aggressive you want to be with buffer size and compression, data saving for above pages varies from 3.6x to 51.7x!

But, to be honest, I cheated a bit. Images are static—the interaction part is missing. And you’ll notice in the table that I hand-picked bloated websites; they are all above average. What happens with normal websites?

For simple interaction, let’s use a technology that’s been around since 1997. And it works in IE! The people drafting HTML 3.2 got annoyed with designers requesting a “designer” look and consistent display across browsers. Rounded rectangles and stuff. In a moment of despair they said f**k you, we’ll give you everything. Create a UI out of an image and then draw arbitrary vector shapes over the clickable areas. And so client-side image maps were born.
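The mechanism is a single screenshot plus clickable rectangles. A minimal sketch of how such a page could be generated (the image name, coordinates, and link target are placeholder assumptions, not the actual demo files):

```python
# Emit a client-side image map: one <img> with usemap, plus one
# <area> rectangle per clickable region. Names here are hypothetical.

def image_map(image: str, areas: list[tuple[int, int, int, int, str]]) -> str:
    """Render an <img usemap> page; each area is (x1, y1, x2, y2, href)."""
    tags = "\n".join(
        f'  <area shape="rect" coords="{x1},{y1},{x2},{y2}" href="{href}">'
        for x1, y1, x2, y2, href in areas
    )
    return (f'<img src="{image}" usemap="#page">\n'
            f'<map name="page">\n{tags}\n</map>')

html = image_map("serp.png", [(170, 180, 600, 210, "https://example.com/result1")])
print(html)
```

One request for the screenshot, one for the HTML wrapper: the 2 requests measured in the SERP demo below.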

For an example of a “normal” page, should we use a really popular page or a really optimized page? How about both—let’s use the most popular web page created by the smartest computer scientists: the Google SERP. SERPs are loaded over 3.5 billion times per day, and they are perfect for optimization. SERPs have no images, just a logo and text. Unlike other pages, you know user behavior exactly: 76% of users click on one of the first five links. Fewer than 9% of users click through to the next page or perform another search.

I measured SERP for “web bloat”, and found that its size is 389.4 kB and it uses 13 requests.

I took a full page screenshot and created a simple HTML page with an image map. The total is 106.7 kB and 2 requests. Therefore, the Google SERP has a bloat score of 3.6.

People always bash media sites for being bloated and flooded with ads. But Google SERPs increased in size from 10 kB in 1998 to 389 kB today, and the content is pretty much the same: 10 links. The SERP is fast to load not because of optimization; it is fast because today you have a fast connection.

The image map for the SERP demo above has a fixed width and height, which is one of the reasons we need PXT. The first PXT request sends device viewport details, so the server knows which image to render.

But before we get into PXT, we need to ask ourselves a question…

How did this happen?

Since the first computers were connected, there was a fight. Between the “thin” tribe and the “fat” tribe.

The thin tribe wanted to render everything on the source server and make the destination machine a “dumb” terminal. Quick, simple, and zero dependency. But the fat tribe said no, it’s stupid to transfer every graphics element. Let’s make a “smart” client that executes rendering or part of the business logic on the destination machine. Then you don’t need to transfer every graphics element, just the minimum of data. The fat tribe always advertised three benefits of smart clients: smaller bandwidth, lower latency, and the ability of the client to render arbitrary stuff.

But, in the early days of computing, “graphics” was just plain text. Data was pretty much the same as its graphic representation, and people could live with a short latency after they pressed enter at a command line. The thin tribe won and the text terminal conquered the world. The peak of this era was the IBM mainframe, a server that could simultaneously serve thousands of clients thanks to its I/O processors. The fat tribe retreated, shaking its collective fist, saying, “Just you wait—one day graphics will come, and we’ll be back!”

They waited until the 80s. Graphics terminals became popular, but they were sluggish. Sending every line, color, or icon over the wire sucked up the bandwidth. When dragging and rearranging elements with the mouse, you could see the latency. Unlike simple text flow, graphics brought myriad screen resolutions, color depths, and DPIs.

“We told you so!” said the fat tribe, and started creating smart client-server solutions. Client-servers and PCs were all the rage in the 80s. But even bigger things were on the horizon.

In 1989, a guy named Tim was thinking about how to create a world wide web of information. He decided not to join either tribe but to go down the middle route. His invention, HTML, would transfer only the semantic information, not the representation. You could override how fonts or colors looked in your client, to the joy of the fat tribe. But for all relevant computing you would do a round trip to the server, to the delight of the thin tribe. Scrolling, resizing, and text selection were instantaneous; there was only a wait when you decided to go to the next page. Tim’s invention took the world by storm. It was exactly the “graphics terminal” that nobody wished for but everybody needed. It was open, and people started creating clients and adding more features.

The first candy was inline images. They required more bandwidth, but the designers promised to be careful and always embed an optimized thumbnail in the page. They also didn’t like free-floating text, so they started using tables to make fixed layouts.

Programmers wanted to add code on the client for validation, animation, or just for reducing round trips. First they got Java applets, then JavaScript, then Flash.

Publishers wanted audio and video, and then they wanted ads.

Soon the web became a true fat client, and everybody liked it.

The thin tribe was acting like a crybaby: “You can’t have so many dependencies—the latest Java, the latest Flash, the latest Real media encoder, different styles for different browsers, it’s insane!” They went on to develop Remote Desktop, Citrix XenDesktop, VNC, and other uncool technologies used by guys in grey suits. But they knew that adding crap to the client couldn’t last forever. And there is a fundamental problem with HTML…

HTML was designed for academics, not the average Joe

Look at the homepages of Tim Berners-Lee, Bjarne Stroustrup, and Donald Knuth. All three together total 235 kB, less than one Google SERP. Images are optimized, most of the content is above the fold, and their pages were “responsive” two decades before responsive design became a thing. But they are all ugly. If the father of the WWW, the father of C++, and the father of computer algorithms were in an evening web development class, they would all get an F and be asked to do their homepages again.

The average Joe prefers form over content and is too lazy to write optimized code. And the average Joe includes me. A few months ago the homepage of my previous startup became slightly slower. I opened the source HTML and found out that nine customer reference logos were embedded in full resolution, like this 150 kB monster. I asked a developer to optimize the pages using CSS sprites. He complied, but told me he would leave the 13 other requests for the web chat unchanged, because they are async and provided by a third party (Olark). To be honest, I would behave the same if I were a web developer. Implementing customer features will bring us more money than implementing CSS sprites. And no web developer ever got a promotion because he spent the whole night tweaking JPEG compression from 15% to 24%. To summarize:

You can’t blame web developers for making a completely rational decision.

Web developers always get the blame for web bloat. But if a 2468 kB page weight is the average, not an exception, then it is a failure of the technology, not all the people who are using it.

At one point, Google realized there was an issue with the web. Their solution: SPDY (now part of HTTP/2) and Brotli. The idea is that, although the web is crap, we will create the technology to fix the crap on the fly. Brotli is particularly interesting, as it uses a predefined 120 kB dictionary containing the most common words in English, Chinese, and Arabic, as well as common phrases in HTML and JavaScript! But there is only so much that lipstick can do for a pig. Even the best web compressor can’t figure out whether all that JS and CSS is actually going to be used, or replace images with thumbnails, or improve the JPEG compression ratio because the user would never notice the difference. The best compressors always start from the target. MP3 achieved a 10:1 compression ratio by starting with the human ear. A web compressor should start with the human eye. Lossless compression of some 260 kB JS library doesn’t help much.

The thin tribe realized that with a good compressor and good bandwidth the game changes. OnLive Game Service was launched in 2010, allowing you to stream games from the cloud. The next year, Gaikai launched their service for cloud gaming. They were not competitors for long: Sony purchased Gaikai in 2012, and all OnLive patents in 2015. They used the technology to create PlayStation Now. Today I can play more than 400 live games on a Samsung Smart TV, at 30 frames per second. But I still need to wait 8.3 seconds to fully load the CNN homepage. Who is crazy here?

Remember the main arguments of the fat tribe: smaller bandwidth, lower latency, and the ability of the client to render arbitrary stuff. It seems that with the websites of 2016, the thin tribe can do all of that equally well or better.

I want my web to be as snappy as PlayStation Now. That is why we need…

PXT protocol

Which is short for PiXel Transfer protocol. Let’s see how the full stack works, all the way from a designer to an end user.

  1. Design. Designers create designs the same as they do now, in Photoshop. After the design is approved, they make it “responsive” by creating narrow, medium, and wide versions of the design (same as now). In addition, they need to use a plugin to mark some elements in the PSD as clickable (navigation, buttons) or dynamic (changeable by the server).
  2. Front-end coding. No such thing. No two-week delay until design is implemented in code.
  3. Back-end coding. Similar to now, you can use any language, but there’s a bit more work, as you need to modify the display on the server. We provide libraries to change PSD elements marked as dynamic.
  4. Deployment. On your Linux server or, better, the PXT cloud. Why the cloud? An old terminal trick is to always move the server closer to the user. As we grow, we plan to have servers in every major city. One of the major reasons PlayStation Now works is that they have data centers distributed all over North America.
  5. Browser. Currently users need to install a browser plugin. But because of that, you can mix PXT and HTML pages.

Specifically, this is how browsing happens:

  1. The browser requests the URL of a PXT page and sends the viewport size, DPI, and color depth.
  2. Server checks the cache or renders a new image, breaks into text and image zones, and uses lossless or lossy compression appropriately.
  3. Browser receives a single stream with different zones, assembles them, and caches them for the future.
  4. When the user clicks, zooms, or scrolls out of the available zones, a request for new image(s) is sent to the server.

Notice the heavy use of caching. If you have a page footer or logo, they are going to be transferred only once, as on the subsequent pages the server is going to send only the zone ID.
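The browsing steps above can be sketched as a toy client-server exchange. The zone IDs, codecs, and byte counts here are my own illustrative assumptions, not the actual PXT wire format:

```python
# Toy sketch of the PXT flow: the server holds rendered zones (text
# zones compressed losslessly, photo zones lossily); the client caches
# zones by ID, so repeat visits transfer only the IDs of cached zones.

SERVER_ZONES = {
    "logo#1":   {"codec": "png",  "bytes": 4_200},   # lossless logo zone
    "hero#7":   {"codec": "jpeg", "bytes": 38_000},  # lossy photo zone
    "footer#2": {"codec": "png",  "bytes": 1_100},   # lossless text zone
}

def fetch_page(zone_ids, cache):
    """Return bytes actually transferred; cached zones cost nothing."""
    transferred = 0
    for zid in zone_ids:
        if zid not in cache:
            cache[zid] = SERVER_ZONES[zid]
            transferred += SERVER_ZONES[zid]["bytes"]
    return transferred

cache = {}
first = fetch_page(["logo#1", "hero#7", "footer#2"], cache)  # cold cache
second = fetch_page(["logo#1", "footer#2"], cache)           # warm: IDs only
print(first, second)  # 43300 0
```

On the second page, the shared logo and footer are already cached, so nothing but zone IDs goes over the wire.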

I know what you are thinking. This all looks nice for presentation, but the web is more than a display. Flash was loved by designers, but one of its biggest flaws was that indexing of Flash content by web crawlers never worked well. So, what about SEO?

The future of search is optical recognition and deep learning. Google Drive has done OCR on PDFs and images since 2010. Google Photos recognizes people and things, for example any bicycle in my personal photos. And YouTube does voice recognition over videos, so people can easily skip the boring parts of my video. With the web becoming much more than text, why rely on text metadata at all?

With that final point, I invite you to check the PXT project page at GitHub.


UPDATE: Check the discussion on Reddit.