No winning in the age of AI hype
We are currently living in the era of AI hype.
In recent years, it was the blockchain, or the metaverse. Today, we have AI. The corporate tech world fixating on the current hot thing is a recurring pattern. However, while the hype itself is the common feature, the specifics vary with each subject of the hype. The subject of AI is, in particular, fertile ground for the sort of pernicious maneuvers that the world of Big Tech is infamous for.
Machine learning has been around for decades. It was, however, in the late 2010s and early 2020s that the first hype-inspiring models were produced. Some of these models could produce text expounding on a topic with the competence of a high school student confidently delivering a report on a half-remembered subject. Other models could generate images that could pass for actual photos if one did not look too closely. All you needed was a short piece of natural language input. Some people with a lot of money saw a future in these sorts of things, and the hype cycle got going.
Building an impressive model like this requires a lot of resources. With the blockchain, one could just bolt it onto whatever software project one had going, and check that particular buzzword off the list. One could gesture vaguely in the direction of all the nodes out there running various blockchains, and the built-in financial incentives, if the relevant questions came up, even if all of this was of dubious use in the end. With the metaverse, Facebook decided to bring its anodyne giant tech corporation vibes into the space of VR. This was a space where much smaller corporations were already providing what the actual audience—like, say, assorted queer furries—wanted from the technology, without necessarily being the size of Facebook, and without having Facebook-level resources. Of course, Facebook is called Meta now, to remind everyone of their failure.
On the other hand, with the hype for general-purpose AI chatbots, corporations need to actually slurp up large amounts of input for their training corpora, and then need computing power to actually train the large language models from those corpora. Smaller enterprises, if they wish to include a chatbot in their service (or to replace their entire writing staff with one), are likely to simply contract one of those larger corporations.
Search engines are in a similar position, when it comes to the resources required. Google achieved dominance by offering a Web search engine that was better than its competitors at the time of its introduction. Yes, for a long time now, it has maintained its position in large part thanks to brand recognition, and the fact that it is the main established monopolistic player in the area. However, at least for some time, Google also actually offered a decent search engine, alongside other useful services. The usefulness of these offerings was tied in large part to the fact that Google was able to continue pouring resources into them. A search engine which indexes only a small part of the Web, at an infrequent rate, and cannot efficiently query its index is hardly a search engine at all. This makes competing with Google difficult.
Likewise, general-purpose AI chatbots capable of generating answers (or sufficiently close imitations of such) in response to natural language queries require ingesting large amounts of input, the ability to process that input into a model, and then the ability to efficiently use that model to create novel output. If you cannot do that, then you do not really have the sort of product that can impress the intended audience.
All of this requires resources, which means it requires money. Still, the hype tells us that if these AI products in their various categories can be monetized, there is money to be made. The resource requirements mean that offering the AI products as a service is a natural fit, and so anyone who can establish themselves as a major player in the new market stands to make a lot of money, and to continue making that money. People with money imagine the possibility of being the next Google Search, the next Amazon Web Services, or the next Spotify, and get really excited about shoveling more cash into the hype engine.
All of this is either going to pan out or not. Either the AI hype goes through the Gartner hype cycle, and AI eventually becomes an ordinary part of how everyone uses computers, or it fizzles out, and the AI hype years become another thing to point at when talking about the silliness of Silicon Valley.
The latter scenario involves people not showing the sort of enthusiasm for AI products that the industry wants or expects. The hype cycle feeds itself, so more stuff may end up having AI bolted onto it, and more AI products may end up being advertised to end users through obnoxious means. The people with the money have a deep fondness for throwing money at what they perceive to be the next big thing, but they still have their limits. Eventually, they may realize that they have misidentified the next big thing, as it fails to pour money back into their pockets. The services that were to be replaced by their AI equivalents may end up being slightly worse in the end, but if it becomes apparent that the pre-AI versions are what people actually overwhelmingly prefer, those non-AI versions may stay around. The experts may start mumbling about another AI winter, and some new next big thing will be located, to provide the next set of buzzwords.
Or, maybe, the AI hype pans out. From a certain vantage point, it is hard to see anything but the current AI products failing to be fit for purpose, and a preponderance of vicious hatred from the public, for both the products and the companies behind them. But, one can always dismiss any number of people as haters, so perhaps the number of people who do prefer to ask a chatbot instead of using a search engine or a calculator will end up outweighing the group that would rather not.
And, maybe, people do not even need to like it. After all, people did not excitedly clamor for the pivot to video, or for every website being replaced by an app that sends unwanted notifications. "Download our cool app to interact with us" is not that distant from "talk to our cool chatbot". People are also certainly not clamoring to be fired from their jobs and replaced by a generative model that is worse at them, but if corporations decide the AI option is cheaper, then the employees can be gotten rid of.
Either things get worse, and then slightly less worse; or they get worse, and stay worse. Either the damage is localized, or it keeps being a bigger problem for the broader us of people who are obligated to exist in the system alongside all the profit-driven entities that care not for the damage they inflict. The only things this can possibly end up producing are subscription services, more annoying and obnoxious versions of things that previously worked better, obstacles thrown in our way. There is no winning for us—the us that are the people who use technology.
Whatever benefits may be extracted from the new technology, it will be the corporate, capitalist parts of society that extract them, for themselves. The technology does not necessarily have some inherent evil in it. It does not need to represent some moral failing, or dramatic societal decay. It is very much of the world that it emerges from, and so the way it exists—the way it is most likely to exist in our current world—means bad things for most people.