First there was Moore’s law, now the Nvidia boss has upped the ante. It’s all fuelling a dangerous conviction that everything can be solved by technology
In 2001 I interviewed the late Gordon Moore, the co-founder of Intel. He was in Cambridge to attend the opening of a new library that he and his wife, Betty, had endowed. We met in the university library – Cambridge’s central library – and had an agreeable chat about the history of the tech industry and the role he had played in it. As ever, he was wearing a tacky digital watch that served as a cue for a party trick he used to play on people. He would ask them what they thought it had cost, and most would suggest a trivial sum – $10, say. Nope, he’d reply: the actual cost was $15m, because that was what it had cost Intel to get into – and out of – the market for digital watches. One of the lessons he learned from that was that his company should stay away from selling consumer goods.
Moore was world famous because of an observation he had made in the early days of the semiconductor industry that Intel once dominated. In 1965 he noticed that the number of transistors on an integrated circuit (or chip) had been doubling every year, and he predicted that this would continue – as it did, for several decades. Inevitably, this became known as “Moore’s law”, as if it were a law of physics rather than just an empirical observation and an extrapolation into the future.
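The arithmetic behind that observation is worth a moment. A minimal sketch, assuming an illustrative starting count of 50 transistors (not a historical figure), shows how annual doubling compounds over a decade:

```python
# Back-of-the-envelope illustration of Moore's observation: a count that
# doubles every year grows roughly a thousand-fold over a decade.
# The starting figure of 50 transistors is illustrative, not historical.
transistors = 50
for year in range(1965, 1976):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```

By the end of the illustrative decade the count has grown more than a thousand-fold, which is why the extrapolation proved so consequential.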