Before the Startup

The second counterintuitive point is that it’s not that important to know a lot about startups. The way to succeed in a startup is not to be an expert on startups, but to be an expert on your users and the problem you’re solving for them. Mark Zuckerberg didn’t succeed because he was an expert on startups. He succeeded despite being a complete noob at startups, because he understood his users really well.

If you don’t know anything about, say, how to raise an angel round, don’t feel bad on that account. That sort of thing you can learn when you need to, and forget after you’ve done it.

In fact, I worry it’s not merely unnecessary to learn in great detail about the mechanics of startups, but possibly somewhat dangerous. If I met an undergrad who knew all about convertible notes and employee agreements and (God forbid) class FF stock, I wouldn’t think “here is someone who is way ahead of their peers.” It would set off alarms. Because another of the characteristic mistakes of young founders is to go through the motions of starting a startup. They make up some plausible-sounding idea, raise money at a good valuation, rent a cool office, hire a bunch of people. From the outside that seems like what startups do. But the next step after rent a cool office and hire a bunch of people is: gradually realize how completely fucked they are, because while imitating all the outward forms of a startup they have neglected the one thing that’s actually essential: making something people want.

– Paul Graham

Paradoxes of Software Architecture

What is Bitcoin?

With its value in US dollars reaching $200, Bitcoin has captured the attention and imagination of numerous people throughout the world, growing from a technological curiosity used to pay for geeky items and illegal substances into a phenomenon widely discussed even in mainstream media.

However, it seems to me that most of the coverage misses the point of what Bitcoin really is, mostly deceived by the terminology used since it was created. I’m not going to go into either the technical details or the ideas behind its creation, as others have done a much better job of that, but let me explain how I see it from a purely economic standpoint.

Although it’s usually touted as such, Bitcoin is not really a currency, although it has some of a currency’s properties – chiefly the ability to transfer it electronically from one holder to another. But unlike a currency its supply is limited – once the number of coins reaches a predetermined figure it will not be possible to create (or “mine”) more, and long before that the processing power required to mine new ones will be so large that the production rate will be zero for all practical purposes. This also means that its value can’t be inflated, as is the case with actual currencies.
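The fixed supply falls out of a simple geometric series. A back-of-the-envelope sketch, using Bitcoin’s commonly cited parameters (an initial block reward of 50 BTC, halved every 210,000 blocks, with 1 satoshi = 10⁻⁸ BTC as the smallest unit), shows the cap converging just under 21 million coins:

```typescript
// Sketch of how Bitcoin's predetermined supply cap arises.
// Parameters: 50 BTC initial block reward, halving every 210,000 blocks,
// rewards below 1 satoshi (1e-8 BTC) treated as zero.
const BLOCKS_PER_HALVING = 210_000;
const SATOSHI = 1e-8;

function totalSupply(): number {
  let reward = 50; // BTC per block in the first era
  let supply = 0;
  while (reward >= SATOSHI) {
    supply += reward * BLOCKS_PER_HALVING; // coins minted in this era
    reward /= 2;                           // reward halves each era
  }
  return supply;
}

console.log(totalSupply()); // just under 21,000,000
```

This is only an approximation (the real protocol truncates each reward to whole satoshis), but it shows why issuance effectively stops long before the last fraction of a coin is mined: the per-block reward shrinks geometrically while the work required does not.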

This means that Bitcoin also has some properties of a resource or commodity, not unlike gold, water, arable land or other things that humans have used, traded and killed for over the centuries. However, unlike most other resources (actually all that I can think of), Bitcoin a) can be divided almost without limit and traded for other resources (or money) in any amount (a bar of gold can be divided only down to the level of an atom; below that it’s not gold any more), and b) has no intrinsic function that would let it keep some value if it loses its role as a measure of value: gold can be turned into jewellery and electronic parts, water can be drunk, oil can be burned and land can be worked or lived on.

In other words, Bitcoin is really an experiment, something new in the global economy perhaps for the first time since paper money was introduced. We’ll see how it fares, whether it succeeds or fails, what it turns into, and whether it’s just the next step in the evolution of economic instruments.

The Trouble With Non-tech Cofounders

“I’ve seen the problem with non-tech founders a few times now, different people, different ages and backgrounds, with different levels of skill, but all with the same thing in common: having to rely on someone else to bring an idea from paper to screen. The most common mistake I think people like this make is to think that they know in advance what they need to get built, and once they’ve paid for that to be done, and a website has been delivered, that they then have a business.”

Web First, Mobile Perhaps

This was originally a comment on the article titled Web Second, Mobile First on Mark Suster’s excellent blog Both Sides of the Table. While Mark maintains a consistently high level of quality in his articles, I was, to be honest, a bit disappointed with this one. First of all, it states a number of observations that seem pretty obvious to me (and therefore, I believe, to almost everyone), such as that a smartphone (or “mobile”) is increasingly the first computing device for many new users.

But more importantly, it continues the false dichotomy of “mobile vs. Web”. Why is it false? Simply because modern mobile devices – at least those with their own ecosystems – are perfectly able to display the Web, and it’s becoming extremely easy to develop for both mobile and Web at the same time, with only a few extra resources devoted to ensuring cross-platform compatibility (which is necessary even if you develop only for the “Web”, since you need to take into account different browsers and OSs: e.g. if you’re aiming at China, 77% of your visitors will use IE 8.0 or earlier).

If you’re strapped for resources, there is absolutely no need to develop a mobile app and depend on the whims of the App Store and other walled gardens out there – you can develop for the Web, and make your front-end switch styles automatically according to the device it’s viewed on. You say that you love the new LinkedIn mobile app; but have you seen their mobile Web? It’s pretty much as functional as the mobile app, looks just as good, and has probably required only a bit more resources than the “traditional” Web app.

Essentially, in my opinion, a mobile app makes sense only if it requires no Internet connection to operate properly, so it’s perfect for games, fart jokes and similar use cases. For all the examples Mark mentioned in his article – Yelp, LinkedIn, Foursquare etc. – ubiquitous Internet is a prerequisite, which means that there is no advantage over a mobile Web app. Actually, with modern HTML5 features such as local storage, even a less than 100% reliable connection is not necessarily a problem, as some data can be stored locally and used when offline, then synced back to the server when online.

So, instead of the “Mobile first, Web second” approach, I’d suggest a different strategy to most new startups: “Web (classic and mobile) first, mobile perhaps (if necessary)”.

A couple early 2012 trends

The new year has barely begun, and I have already started noticing a few small trends that may well be signs of greater shifts that will develop over the course of the year and beyond:

  • Non-programmers learning to code: There was a small wave of tweets from non-programmers stating that their New Year resolution is to learn to code, mainly using Codecademy. To be honest I am not surprised, as it seems that in the present job crunch and general downturn, skilled coders are in ever-rising demand. Even if you don’t aim to find employment as a programmer, knowing how to code is growing more and more useful for a number of smaller tasks in your daily life. I think the main obstacle to their resolution will come when they realise that “coding” is not a single thing – never have more languages, platforms and even target audiences been available to programmers than today.
  • Getting back to work: This Joy of Tech cartoon is the latest and most visible example of something I’ve been noticing everywhere around me: people are getting back to work. Probably a combination of the ongoing depression and the optimism of the New Year is prompting people to turn away from looking for ways to entertain themselves and look for ways to create some value and have some fun doing it. Of course, this attitude has always been present in startups, but it’s spilling over to the general population.

Of course Web development is broken

Because Web was never meant to be developed for.

Originally, the Web was intended as a collection of resources – a filesystem of sorts, really. But it grew out of proportion as it became popular, since it allowed users to see color, pictures and animations on the Internet, which until then had been either limited to plain text or required heavyweight, non-standard applications to be installed on the client. In fact, the name of the application used to access the Web – a browser – says a lot about how it was intended to be used: to “browse” the resources, not to execute them. Could you imagine what desktop development would look like if you were limited to programming through some sort of file viewer?

Each Web application is actually two completely unrelated applications. One is executed on the host, preparing the data for the Web server to serve; the other runs in each user’s browser, connected to the first only by asymmetric pairs of requests and responses. Even if it consists only of HTML (and CSS), there is still code being interpreted and evaluated on the client; Ajax apps only emphasize this.
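The two-programs-in-one split can be made concrete with a toy sketch. All names here are illustrative, and the HTTP layer is modeled as a plain function call; the point is that the server-side and client-side programs share nothing but the response text passing between them:

```typescript
// Server-side "application": prepares data and serializes it into a
// document. The response text is its only channel to the client program.
function serverHandle(_request: { path: string }): string {
  const items = ["alpha", "beta"]; // e.g. fetched from a database
  return `<ul>${items.map(i => `<li>${i}</li>`).join("")}</ul>`;
}

// Client-side "application": a separate program that must reconstruct
// the state from the markup it received. In a browser this would walk
// the DOM; here we just parse the string.
function clientRender(html: string): string[] {
  return [...html.matchAll(/<li>(.*?)<\/li>/g)].map(m => m[1]!);
}

// The only link between the two halves is the request/response pair.
const response = serverHandle({ path: "/items" });
console.log(clientRender(response)); // ["alpha", "beta"]
```

Every piece of shared state has to be explicitly serialized on one side and re-parsed on the other – which is exactly the coordination burden that makes Web development feel “broken”.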

So it’s not Web development that is broken; in fact, it is a miracle what has been created by the developers to work around the fundamental limitations of the platform, which was never meant to be one.

(This was originally a comment on a posting on Hacker News, linking to an article titled Web development is just broken.)