Two Rules of AI Business and the Startups That Ignore Them

These rules are not new, and they are not mine; I stole them from Andrew Ng and Benedict Evans, two men with a huge following. Still, a large majority of AI entrepreneurs and engineers don’t pay attention to them, maybe because these rules show why their AI project will fail.

AI’s Law of Diminishing Returns

To paraphrase Andrew’s words from Coursera’s Deep Learning Specialization course:

The effort to halve an AI system’s error rate is similar, regardless of the starting error rate.

This is not very intuitive. If an AI system passes 90% of test cases and fails on 10%, then you are 90% done, right? Fix the remaining 10% of errors, and you will have 100% accuracy? Absolutely not. If it took you six months to halve the error rate from 20% to 10%, it will take you approximately another six months to halve 10% to 5%. And another six months to halve 5% to 2.5%. Ad infinitum. You will never achieve a 0% error rate on a real-world AI system. For an illustrative example, see this typical chart of error rate vs the number of training samples:

Notice that further along the curve, the training set size increases exponentially with each halving of the error rate, and the error rate never reaches zero. Sure, you will get more efficient at acquiring training data (e.g., by using low-quality sources or synthetic data). Still, it is hard to believe that acquiring 10X more data is going to be much easier than acquiring the initial set.
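
To make the arithmetic concrete, here is a toy calculation in Python. It assumes the error rate follows a power law in training set size; the exponent is made up for illustration and is not taken from Andrew Ng’s chart or any real benchmark:

```python
# Toy illustration of the diminishing-returns curve described above.
# Assumes a power-law scaling err(n) = a * n**(-b); the exponent b = 0.3
# is a made-up value for illustration, not a measured one.

def required_samples(target_error, a=1.0, b=0.3):
    """Invert err(n) = a * n**(-b) to get the required dataset size n."""
    return (a / target_error) ** (1.0 / b)

error = 0.20          # start at a 20% error rate
for _ in range(4):    # halve the error rate four times
    print(f"error {error:5.1%} -> ~{required_samples(error):,.0f} training samples")
    error /= 2
```

With this (assumed) exponent, every halving of the error rate costs roughly 10X more training data, which is exactly the pattern the chart shows.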

This rule becomes more intuitive when dissecting what an AI system error rate represents: uncovered real-world special cases. There are an infinite number of them. For example, one of the easiest machine learning (ML) tasks is classifying images of dogs and cats. It is an introductory task with online tutorials that get 99% accuracy. But solving the last 1% is incredibly hard. For example, is the creature in the image below a dog or a cat?

It is Atchoum the cat, who rose to fame because half of humans recognized him as a dog. Human accuracy on dog/cat classification within 30 seconds is 99.6%, so a dog/cat classifier with less than a 0.4% error rate would be superhuman. But it is possible: a training set with hundreds of thousands of strange-looking dogs and cats would teach a neural network to focus only on details encoded in dog or cat chromosomes (e.g., cat eyes). However, building such a dataset is orders of magnitude more work than a tutorial with 99% accuracy. Other problems lurk in that 1% error rate: photos that are too dark, photos in low resolution, compression artifacts, post-processing by modern smartphones (which adds non-existent details), dogs and cats with medical conditions, etc. The problem space is infinite. This is still considered a solved ML problem, though, because a 1% error rate is low enough for all practical purposes.

But for some problems, even a 0.01% error rate is not satisfactory. Full self-driving (FSD) is one example. In a 2015 Forbes article, Elon Musk said:

“We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.”

Tesla was so confident in that prediction that they started selling a full self-driving add-on package in 2016, and they weren’t the only ones. Kyle Vogt, CEO of Cruise, wrote a piece called How we built the first real self-driving car (really) in 2017, in which he claimed:

“the most critical requirement for deployment at scale is actually the ability to manufacture the cars that run that software”

So, the software and the working prototype are done; they just need to mass-produce “100,000 vehicles per year.” 

Fast forward to 2024. Elon Musk’s predictions for autonomous Tesla vehicles deserved a lengthy Wikipedia table, mostly in red.

What about Kyle Vogt? In October 2023, a Cruise car dragged a pedestrian 20 feet, after which California’s DMV suspended Cruise’s self-driving taxi license. Kyle “resigned” as CEO in November 2023.

Don’t misunderstand me—I believe autonomous cars will have a significant market share, probably in the next decade. The failed predictions above illustrate what happens when entrepreneurs don’t respect the AI law of diminishing returns. Elon and Kyle probably saw a demo of a full self-driving car that could drive on its own, on a sunny day, on a marked road. Sure, a safety driver needed to intervene sometimes, but that was only 1% of the drive time. It is easy to conclude that “autonomous driving is a solved problem,” as Elon said in 2016. Notice how ML scientists and engineers didn’t make such bombastic claims. They were aware of the many edge cases, some of which are described in crash reports.

Why so many companies promised a drastic reduction in self-driving error rates in such a short time without having a completely new ML architecture is an open question. Scaling laws for convolutional neural networks have been known for some time, and the new transformer architecture obeys a similar scaling law. 

AI’s Product vs Feature Rule

When is an AI system a good stand-alone product, and when is it just a feature? In the words of Benedict Evans from The AI Summer podcast: “Is this feature or a product? Well, if you can’t guarantee it is right, it’s a feature. It needs to be wrapped in something that manages or controls expectations.” I love that statement. The “it is right” part can be broken down using error rate:

If your AI system has a higher error rate than target users, you have an AI feature in an existing workflow, not a stand-alone AI product.

This rule is more intuitive than the law of diminishing returns. If target users are better at a task than the AI, they will not like the results of a stand-alone AI system. They could still use AI to save effort and time, but they will want to review and edit the AI’s output. If the AI completely fails at a task, humans will use the old workflow and the old software to finish the task.

Let’s take MidJourney, for example, which generates whole images from a text prompt. When I used it for a hobby project last year, satisfying artistic images appeared instantly, like magic. But then I spent hours fixing creepy hands, similar to the ones below:

Each time MidJourney created a new image, one of the hands had strange artifacts. Finally, it generated an image with two normal hands—but then it destroyed the ears in another part of the image. The problem was less with wrong details and more with bad UI, which didn’t allow correction of the AI’s mistakes.

Adobe’s approach is different—it treats generative AI as just one feature in its product suite. You use an existing tool, select an area, and then do a generative fill:

You can use it for the smallest of tasks, like removing cigarette butts from grass in a wedding photoshoot. If you dislike AI grass, no problem—revert to the old designer joy of manually cloning grass. Also, Adobe Illustrator has generative Vector AI that generates vector shapes you can edit to your liking.

MidJourney makes more impressive demos, but Adobe’s approach is more useful to professional designers. That doesn’t mean MidJourney doesn’t make sense as a product; its target users are the ones who don’t care about details. For example, last Christmas, I got the following greeting image over WhatsApp:

Did you notice baby Jesus’ hands and eyes? Take another look:

That would never pass with a designer, but that is not the point. There is a whole army of users who don’t care about image composition and details; they just want images that go with their content. In other words, MidJourney is not a replacement for Adobe’s Creative Suite—it is a replacement for stock photo libraries like Shutterstock and Getty Images. And judging by the recent popularity of AI-generated images on social media and the web, people like artsy MidJourney images more than stock photos.

The low-hanging fruit for stand-alone AI products is use cases where a high error rate either doesn’t matter or is still lower than the human error rate. An unfortunate example is guided missiles; in the Gulf War, the accuracy of Tomahawk missiles was less than 60%. But the army was happy to buy Tomahawks because they were still much more accurate than the older alternatives, as fewer than 1 in 14 unguided bombs hit their targets.

Evaluating startups based on the above rules

The great thing is that error rates are measurable, so the above rules give a framework to judge an AI startup quickly. Below is a simple startup example.

Devin AI made quite a splash in March of 2024 with a video demo of an AI developer that can create fully working software projects. The announcement says that Devin was “evaluated on the SWE-Bench” (the relevant benchmark) and “correctly resolves 13.86% of the issues unassisted, far exceeding the previous state-of-the-art model performance of 1.96% unassisted.” So, the previous state-of-the-art (SOTA) had a 98% error rate, and they claim an 86% error rate. Even if that claim is valid (it wasn’t independently verified), why do their promo videos show success after success? It turns out that the video examples were cherry-picked, the task descriptions were changed, and Devin took hours to complete them.

In my opinion, Microsoft took the right approach with GitHub Copilot. Although LLMs work surprisingly well for coding, they still make a ton of mistakes and don’t make sense as a stand-alone product. Copilot is a feature integrated into popular IDEs that pops up with suggestions when they are likely to help. You can review, edit, or improve on each suggestion.  

Again, don’t get me wrong. I think the coding SOTA will drastically improve over the next few years, and one day AI will be able to solve 80% of GitHub issues. Devin AI is still far from that day, although the company reached a valuation of $2 billion in 2024.

More formally, the framework for evaluation is:

  1. Find a relevant benchmark for a specific AI use case. 
  2. Find the current state-of-the-art (SOTA) error rate and human error rate on that benchmark.
  3. Is the SOTA error rate better than or comparable to the human error rate?
    1. If yes (unlikely): Great, the problem is solved, and you can create a stand-alone AI product by reproducing SOTA results.
    2. If no (likely): Check if there is a niche customer segment that is more tolerant of errors. If yes, you can still have a niche stand-alone product. If you can’t find such a niche, go to the next step.
  4. You can’t release a stand-alone AI product. Wait for SOTA to get better, pour money into research, or go to the next step.
  5. Think about how to integrate AI as a feature into the existing product. Make it easy for users to detect and correct AI’s mistakes. Then, measure AI’s return on investment:

    AI_ROI = Effort_saved_by_AI / Effort_lost_correcting_AI

    If too much user time is spent checking and correcting AI errors (AI_ROI <= 1), you don’t even have a feature. A small sketch of this framework follows below.
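
Here is a minimal sketch of that framework in Python. The verdict strings, the niche-tolerance flag, and the example numbers are simplifications I made up for illustration; a real evaluation needs a benchmark-specific error metric:

```python
# Minimal sketch of the evaluation framework above (illustrative only).

def evaluate_ai_use_case(sota_error: float,
                         human_error: float,
                         has_error_tolerant_niche: bool,
                         effort_saved: float,
                         effort_correcting: float) -> str:
    """Rough verdict: stand-alone product, niche product, feature, or nothing yet."""
    if sota_error <= human_error:
        return "stand-alone product (SOTA matches or beats humans)"
    if has_error_tolerant_niche:
        return "niche stand-alone product"
    ai_roi = effort_saved / effort_correcting   # AI_ROI from the formula above
    if ai_roi > 1:
        return f"feature in an existing workflow (AI_ROI = {ai_roi:.1f})"
    return "not even a feature yet; wait for SOTA to improve"

# Example with made-up numbers for a coding assistant:
print(evaluate_ai_use_case(sota_error=0.86, human_error=0.20,
                           has_error_tolerant_niche=False,
                           effort_saved=10, effort_correcting=4))
```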

Or, to summarize everything discussed here in one sentence:

Every innovative AI use case will eventually become a feature or a product, once the error rates allow it. If you want to make it happen faster, become a researcher. OpenAI’s early employees spent seven years on AI research before the overnight success of ChatGPT. Ilya Sutskever, OpenAI’s chief scientist, didn’t even want to release the GPT-3.5-based ChatGPT because he was afraid it hallucinated too much. Science takes time.

If you found this article useful, please share.

 

 

MasterCard Serbia asked ladies to share FB photos of, among other things, their credit card

Credit card companies should know all about phishing, right? McCann should know all about marketing, right? Combine the two in Serbia and you will get a marketing campaign that just went viral, although for the wrong reasons.

Mastercard Serbia organised a prize contest, “Always with you”, that asked female customers to share the contents of their purses on Facebook. If you read the text carefully, you are not required to photograph your card. However, the example photo clearly shows the credit card details of a fictional customer:

Lured by prizes, many customers posted photos of their private stuff. And some copied the Mastercard promo — their credit card, with full details, is visible in the photo:

This is the first phishing campaign I know of that was organised by a credit card company itself!

The funny thing is that nobody at Mastercard, the McCann agency, or the legal team noticed the problem. There is a lengthy legal document explaining the conditions of the prize contest:

That document is signed by Mastercard Europe SA and McCann Ltd Belgrade, so it seems it passed multiple levels of corporate approval. And Mastercard didn’t seem to notice the problem until six days later, when a Serbian security blogger wrote about it.

In my modest opinion, the lesson of this story is to be careful how you hire. I am biased because I run an employee assessment company, but smiling people with lovely résumés can still be bozos. And when you have incompetent people in the company, it doesn’t matter what formal company procedures you have in place.

 

P.S. As user edent from HN noticed, sharing photos of credit cards is nothing unusual on Twitter: https://twitter.com/needadebitcard

P.P.S. As of today (May 18), the entire “Always with you” campaign has been deleted from Facebook.

 

10 years of experience selling IaaS or PaaS

Today, a friend sent me a funny Google job posting. Here is the highlight:

10 years of sales experience? Amazon EC2 (IaaS) only came out of beta in Oct 2008, and Google App Engine (PaaS) only had a limited release in Apr 2008. It is now Feb 2017, so even if you had started selling EC2 or App Engine from the very first day, you would only have about 8 years of experience.

I know you are Google, but it is a bit too high of a bar. You still haven’t invented the time machine.

 

Web bloat solution: PXT Protocol

After many months in the making, today we are happy to announce v1 of PXT Protocol (MIT license). This is a big thing for our small team, as we aim to provide an alternative to HTTP/HTML.
Before I dive into technical details of our unconventional approach, I must explain the rationale. Bear with me.

Web bloat

Today’s web is in a deep obesity crisis. Bloggers like Maciej, Ronan, and Tammy have been writing about it, and this chart summarizes it all:


Notice the exponential growth. As of July 2016, the average web page is 2468 kB in size and requires 143 requests.

But computers and bandwidth are also getting exponentially faster, so what’s the problem?

Web bloat creates four “S” problems:

  1. Size. A few years ago, a 200 MB/month phone data plan was enough. Today my 2 GB plan disappears faster than a Vaporeon Pokémon.
  2. Speed. The web can be 10x faster. Especially over mobile networks, as phone screens need to show fewer elements.
  3. Security. The modern browser is actually an OS that needs to support multiple versions of HTML, CSS, SVG, 8+ image formats, 9+ video formats, 8+ audio formats, and it often adds a crappy plugin system just for fun. That means the browser you are looking at right now has more holes than a pasta strainer. And some of them would give me root access to your system right now. I just need to offer enough bitcoins on a marketplace for zero-day exploits.
  4. Support. All that bloat needs to be implemented and maintained by people. Front-end has become so complicated that now designers who can also code are called unicorns.

One can say “Problems, schmoblems! We had problems like this in the past, and we lived with them. The average web page will continue to grow.”

No, it will not. Because there is a magic limit—let’s call it the bloat inflection point:


For pages that are small and non-bloated (most pre-2010 pages), PXT only solves security and support problems. But today’s average web page will also gain big size and speed improvements. The Internet passed the bloat inflection point early this year, and nobody noticed.

PXT solves these problems by focusing on the core: the presentation. The majority of bloat pushed to client browsers has only one purpose—to render the page. Even JavaScript is mostly used to manipulate the DOM. Images alone comprise 62% of a page’s total weight. Often, images are not even resized or optimized.

Responsive web design just makes it worse. The fashion now is to have one sentence per viewport and then a gigantic background image behind it.

Developers have gotten lazier and lazier over the years. At the same time, compression technologies got better, both lossless and lossy. So we got an idea…

What if a client-specific page was rendered on a server, and then streamed to a “dumb browser” using the most efficient compression?

Like all great ideas, this sounds quite dumb. I mean, sending text as compressed images?! But I did a quick test…

Demo time

Let me show you a simple non-PXT demo; you can follow it without installing any software.

The procedure is simple:

  1. Find a typical bloated web page.
  2. Measure total page size and # of requests. I used the Pingdom speed test.
  3. Take a full-page screenshot. I used the Full Page Screen Capture Chrome extension.
  4. Put the results into a table and calculate the bloat score.

Bloat score (BS for short) is defined as:

BS = TotalPageSize / ImageSize

We can derive a nice rule from the bloat score:

You know your web is crap if the full image representation of the page is smaller than the actual page (BS > 1).
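
For the curious, here is a minimal Python sketch of the calculation. The page size comes from a speed test (I used Pingdom) and the image size from a screenshot file; the example numbers are the TechTimes row from the table below:

```python
import os

# Minimal bloat-score calculator: BS = TotalPageSize / ImageSize (both in kB).

def bloat_score(total_page_size_kb: float, image_size_kb: float) -> float:
    return total_page_size_kb / image_size_kb

def screenshot_size_kb(path: str) -> float:
    """Size of a full-page screenshot on disk, in kB."""
    return os.path.getsize(path) / 1024

# Example with the TechTimes row from the table below:
# a 22,000 kB page vs. a 2,368 kB full-page PNG gives a BS of about 9.3.
print(round(bloat_score(22_000, 2_368), 1))
```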

I expected some screenshots to beat full page loads, but I was wrong. Screenshots won in every case. See for yourself in the table below: Image columns contain links to comparison images.

| Page | PageSize (kB) | # of req. | Full PNG (1366 × ?), kB | BS | Full TinyPNG (1366 × ?), kB | BS | Viewport TinyPNG (1366 × 768), kB | BS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TechTimes Google Tags Slow Websites | 22,000 | 494 | 2,368 | 9.3 | 527 | 41.8 | 139 | 158.3 |
| Vice Bootnet to Destroy Spotify | 5,000 | 174 | 2,346 | 2.1 | 584 | 8.6 | 228 | 22.0 |
| RTWeekly Future of Data Computing | 3,400 | 118 | 2,009 | 1.7 | 581 | 5.9 | 249 | 13.6 |
| Betahaus Creative Problem Solving | 5,100 | 55 | 3,670 | 1.4 | 871 | 5.9 | 393 | 13.0 |
| AVERAGE | | | | 3.6 | | 15.5 | | 51.7 |

Which column should you look at? That is highly debatable:

  • Full PNG column represents the entire page as a lossless PNG. Pixel-perfect, but a bit unfair, because PNG screenshots are lossless and therefore compress worse if the original page contained lossy JPEGs.
  • Full TinyPNG column represents the entire page as a color-indexed PNG.
  • Viewport TinyPNG column uses a color-indexed PNG of a typical viewport. The idea is that since 77% of users close the page without scrolling down, it doesn’t make sense to load the entire page for them.

So, depending on how aggressive you want to be with buffer size and compression, the data savings for the above pages vary from 3.6x to 51.7x!

But, to be honest, I cheated a bit. Images are static—the interaction part is missing. And you’ll notice in the table that I hand-picked bloated websites; they are all above average. What happens with normal websites?

For simple interaction, let’s use a technology that’s been around since 1997. And it works in IE! The people drafting HTML 3.2 got annoyed with designers requesting a “designer” look and consistent display across browsers. Rounded rectangles and stuff. In a moment of despair they said f**k you, we’ll give you everything: create a UI out of an image and then draw arbitrary vector shapes over the clickable areas. And so client-side image maps were born.

For an example of a “normal” page, should we use a really popular page or a really optimized page? How about both—let’s use the most popular web page created by the smartest computer scientists: the Google SERP. SERPs are loaded over 3.5 billion times per day, and they are perfect for optimization. SERPs have no images, just a logo and text. Unlike other pages, you know user behavior exactly: 76% of users click on the first five links. Fewer than 9% of users click on the next page or perform another search.

I measured the SERP for “web bloat” and found that its size is 389.4 kB over 13 requests.

I took a full-page screenshot and created a simple HTML page with an image map. The total is 106.7 kB and 2 requests. Therefore, the Google SERP has a BloatScore of 3.6.
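
For reference, such a page is nothing more than an image wrapped in a client-side image map. Here is a rough Python sketch of how it could be generated; the screenshot name and the link rectangles are made-up placeholders, not the actual coordinates from my demo:

```python
# Rough sketch of generating an image-map page like the SERP demo above.
# The screenshot name and link rectangles are made-up examples.

results = [
    # (left, top, right, bottom, target URL)
    (160, 180, 760, 210, "https://example.com/first-result"),
    (160, 280, 760, 310, "https://example.com/second-result"),
]

areas = "\n".join(
    f'    <area shape="rect" coords="{l},{t},{r},{b}" href="{url}">'
    for (l, t, r, b, url) in results
)

page = f"""<!DOCTYPE html>
<html><body style="margin:0">
  <img src="serp.png" usemap="#links" style="display:block">
  <map name="links">
{areas}
  </map>
</body></html>"""

with open("serp.html", "w") as f:
    f.write(page)   # two requests total: this HTML file plus serp.png
```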

People always bash media sites for being bloated and flooded with ads. But Google SERPs increased in size from 10 kB in 1998 to 389 kB today. And the content is pretty much the same: 10 links. Google.com is fast to load not because of optimization; it is fast because today you have a fast connection.

The image map for the SERP demo above has a fixed width and height, which is one of the reasons we need PXT. The first PXT request sends device viewport details, so the server knows which image to render.

But before we get into PXT, we need to ask ourselves a question…

How did this happen?

Since the first computers were connected, there was a fight. Between the “thin” tribe and the “fat” tribe.

The thin tribe wanted to render everything on the source server and make the destination machine a “dumb” terminal. Quick, simple, and zero dependencies. But the fat tribe said no, it’s stupid to transfer every graphics element. Let’s make a “smart” client that executes the rendering, or part of the business logic, on the destination machine. Then you don’t need to transfer every graphics element, just the minimum data. The fat tribe always advertised three benefits of smart clients: smaller bandwidth, lower latency, and a client that can render arbitrary stuff.

But, in the early days of computing, “graphics” was just plain text. Data was pretty much the same as its graphic representation, and people could live with a short latency after they pressed Enter at a command line. The thin tribe won, and the text terminal conquered the world. The peak of this era was the IBM mainframe, a server that could simultaneously serve thousands of clients thanks to its I/O processors. The fat tribe retreated, shaking its collective fist, saying, “Just you wait—one day graphics will come, and we’ll be back!”

They waited until the 80s. Graphics terminals became popular, but they were sluggish. Sending every line, color, or icon over the wire sucked up the bandwidth. When dragging and rearranging elements with the mouse, you could see the latency. Unlike simple text flow, graphics brought myriad screen resolutions, color depths, and DPIs.

“We told you so!” said the fat tribe, and started creating smart client-server solutions. Client-servers and PCs were all the rage in the 80s. But even bigger things were on the horizon.

In 1989, a guy named Tim was thinking about how to create a world wide web of information. He decided not to join either tribe but to go the middle route. His invention, HTML, would transfer only the semantic information, not the representation. You could override how fonts or colors looked in your client, to the joy of the fat tribe. But for all relevant computing you would do a round trip to the server, to the delight of the thin tribe. Scrolling, resizing, and text selection were instantaneous; there was only a wait when you decided to go to the next page. Tim’s invention took the world by storm. It was exactly the “graphics terminal” that nobody wished for but everybody needed. It was open, and people started creating clients and adding more features.

The first candy was inline images. They required more bandwidth, but the designers promised to be careful and always embed optimized thumbnails in the page. They also didn’t like the free-floating text, so they started using tables to make fixed layouts.

Programmers wanted to add code on the client for validation, animation, or just for reducing round trips. First they got Java applets, then JavaScript, then Flash.

Publishers wanted audio and video, and then they wanted ads.

Soon the web became a true fat client, and everybody liked it.

The thin tribe was acting like a crybaby: “You can’t have so many dependencies—the latest Java, latest Flash, latest Real media encoder, different styles for different browsers, it’s insane!” They went on to develop Remote desktop, Citrix XenDesktop, VNC, and other uncool technologies used by guys in grey suits. But they knew that adding crap to the client couldn’t last forever. And there is a fundamental problem with HTML…

HTML was designed for academics, not the average Joe

Look at the homepages of Tim Berners-Lee, Bjarne Stroustrup, and Donald Knuth. All three pages together total 235 kB, less than one Google SERP. Images are optimized, most of the content is above the fold, and their pages were “responsive” two decades before responsive design became a thing. But they are all ugly. If the father of the WWW, the father of C++, and the father of computer algorithms were in an evening web development class, they would all get an F and be asked to do their homepages again.

The average Joe prefers form over content and is too lazy to write optimized code. And the average Joe includes me. A few months ago, the homepage of my previous startup became slightly slower. I opened the source HTML and found out that nine customer reference logos were embedded in full resolution, like this 150 kB monster. I asked a developer to optimize the pages using CSS sprites. He complied, but told me he would leave the 13 other requests for the web chat unchanged, because they are async and provided by a third party (Olark). To be honest, I would behave the same if I were a web developer. Implementing customer features will bring us more money than implementing CSS sprites. And no web developer ever got a promotion because he spent the whole night tweaking JPEG compression from 15% to 24%. To summarize:

You can’t blame web developers for making a completely rational decision.

Web developers always get the blame for web bloat. But if a 2468 kB page weight is the average, not an exception, then it is a failure of the technology, not of all the people who are using it.

At one point, Google realized there was an issue with the web. Their solution: SPDY (now part of HTTP/2) and Brotli. The idea is that, although the web is crap, we will create the technology to fix the crap on the fly. Brotli is particularly interesting, as it uses a predefined 120 kB dictionary containing the most common words in English, Chinese, and Arabic, as well as common phrases in HTML and JavaScript! But there is only so much that lipstick can do for a pig. Even the best web compressor can’t figure out whether all that JS and CSS is actually going to be used, nor can it replace images with thumbnails or bump up JPEG compression because the user would never notice the difference. The best compressors always start from the target: MP3 achieved a 10:1 compression ratio by starting with the human ear, and a web compressor should start with the human eye. Lossless compression of some 260 kB JS library doesn’t help much.
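
To get a feel for what generic lossless compression can and cannot do, here is a quick experiment with the Brotli Python bindings (assuming the Brotli package is installed; the HTML snippet is a made-up example):

```python
import brotli  # pip install Brotli

# Highly repetitive, made-up markup: the best case for a generic compressor.
html = (b"<!DOCTYPE html><html><head><title>Example</title></head>"
        b"<body><div class='article'><p>Some repetitive markup</p></div>"
        b"</body></html>") * 200

compressed = brotli.compress(html, quality=11)
print(f"{len(html)} bytes -> {len(compressed)} bytes "
      f"({len(html) / len(compressed):.0f}x smaller)")
```

Brotli squeezes repetitive markup impressively, but no compressor can decide that an unused 260 kB JS library should never have been shipped at all.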

The thin tribe realized that with a good compressor and good bandwidth, the game changes. OnLive Game Service was launched in 2010, allowing you to stream games from the cloud. The next year, Gaikai launched their service for cloud gaming. They were not competitors for long: Sony purchased Gaikai in 2012, and all OnLive patents in 2015. They used the technology to create PlayStation Now. Today I can play more than 400 live games on my Samsung Smart TV, at 30 frames per second. But I still need to wait 8.3 seconds to fully load the CNN homepage. Who is crazy here?

Remember the main arguments of the fat tribe: smaller bandwidth, lower latency, and a client that can render arbitrary stuff. It seems that with the websites of 2016, the thin tribe can do all of that equally well or better.

I want my web to be as snappy as PlayStation Now. That is why we need…

PXT protocol

Which is short for PiXel Transfer protocol. Let’s see how the full stack works, all the way from a designer to an end user.

  1. Design. Designers create designs the same as they do now, in Photoshop. After the design is approved, they make it “responsive” by creating narrow, medium, and wide versions of the design (same as now). In addition, they need to use a plugin to mark some elements in the PSD as clickable (navigation, buttons) or dynamic (changeable by the server).
  2. Front-end coding. No such thing. No two-week delay until the design is implemented in code.
  3. Back-end coding. Similar to now, you can use any language, but there’s a bit more work, as you need to modify the display on the server. We provide libraries to change the PSD elements marked as dynamic.
  4. Deployment. On your own Linux server or, better, the PXT cloud. Why the cloud? An old terminal trick is to always move the server closer to the user. As we grow, we plan to have servers in every major city. One of the main reasons PlayStation Now works is that it has data centers distributed all over North America.
  5. Browser. Currently, users need to install a browser plugin. But because of that, you can mix PXT and HTML pages.

Specifically, this is how browsing happens:

  1. The browser requests the URL of a PXT page and sends the viewport size, DPI, and color depth.
  2. The server checks its cache or renders a new image, breaks it into text and image zones, and applies lossless or lossy compression as appropriate.
  3. The browser receives a single stream with the different zones, assembles them, and caches them for the future.
  4. When the user clicks, zooms, or scrolls out of the available zones, a request for new image(s) is sent to the server.

Notice the heavy use of caching. If a page has a footer or a logo, it is transferred only once; on subsequent pages, the server sends only the zone ID.
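
To make the flow above more concrete, here is a rough Python sketch of the client side: the initial request message and the zone cache. All field names and the zone-ID scheme as written here are my own invention for illustration; they are not taken from the actual PXT spec:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PxtRequest:
    url: str
    viewport: tuple[int, int]   # (width, height) in pixels
    dpi: int
    color_depth: int

@dataclass
class Zone:
    zone_id: str                # stable ID of the rendered content (e.g., a hash)
    kind: str                   # "text" -> lossless, "image" -> lossy
    payload: bytes | None       # None means the client already has this zone cached

@dataclass
class PxtClient:
    cache: dict[str, bytes] = field(default_factory=dict)

    def first_request(self, url: str) -> PxtRequest:
        # The browser sends viewport size, DPI, and color depth with the URL.
        return PxtRequest(url=url, viewport=(1366, 768), dpi=96, color_depth=24)

    def assemble(self, zones: list[Zone]) -> list[bytes]:
        """Store new zones, reuse cached ones, and return the page in order."""
        page = []
        for zone in zones:
            if zone.payload is not None:
                self.cache[zone.zone_id] = zone.payload   # new zone: cache it
            page.append(self.cache[zone.zone_id])         # reuse cached pixels
        return page
```

On a second page that shares the footer or logo zone, the server sends only the zone ID with an empty payload, and assemble() pulls the pixels from the local cache.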

I know what you are thinking. This all looks nice for presentation, but the web is more than a display. Although Flash was loved by designers, one of its biggest flaws was that indexing by web crawlers never worked well. So, what about SEO?

The future of search is optical recognition and deep learning. Google Drive has done OCR on PDFs and images since 2010. Google Photos recognizes people and things, for example any bicycle in my personal photos. And YouTube does speech recognition on videos, so people can easily skip the boring parts of my videos. With the web becoming much more than text, why rely on text metadata at all?

With that final point, I invite you to check the PXT project page at GitHub.

 

UPDATE: Check the discussion on Reddit.

Worst CAPTCHA Ever

By definition, a CAPTCHA should be easy for humans to read but hard for machines. Apparently, they don’t agree with that at D&B:


They put the CAPTCHA in plain HTML text, and then put an ugly background behind it so it can’t be read by humans.
I knew corporate developers could be of lower quality, but this is hilarious 🙂

 

UPDATE 1: Check discussion on Hacker News and discussion on Reddit.

 

UPDATE 2: Many people got offended, but I stick to my personal opinion: many corporations have recruitment practices that reject good programmers and attract bad ones. Good programmers don’t want to work in an environment where meetings require a tie, the development process is waterfall, and the only way to increase your salary is to become a manager. But most of all, the screening of programmers should be done by the technical department, not by corporate HR. And that is not hard: here you can create programming tests for Java and C# and send them to your candidates in less than 5 minutes.