AI scares me, and it should scare you too.


I used to believe that the AI infatuation would eventually fade, perhaps causing some economic damage when the bubble finally popped. I no longer hold that belief. I now see this AI craze as a sign of something far more pernicious and, ultimately, catastrophic for society. Does that sound exaggerated? Let me explain.

A few weeks ago, The Information reported that OpenAI was on the brink of bankruptcy, facing a $5 billion loss for the year. Moreover, the cost of its AI development program is expected to climb from roughly $3 billion per year to well over $7 billion per year as it tries to build the larger, more powerful models on which its growth and survival depend. OpenAI, to put it simply, was dying, and dying fast. Shortly after this report, OpenAI announced that it was seeking $5 billion in bank credit and $6.5 billion in new investment at a valuation of $150 billion, nearly double its previous valuation. Even if secured in full, that would only keep OpenAI going for a year or so. To make matters worse, there is ample evidence that OpenAI cannot deliver these improved models and reach profitability even with this funding (more on that in a moment).

You would think nobody would hand OpenAI this enormous amount of money, wouldn't you? Wrong. In addition to securing $4 billion in unsecured revolving credit from institutions including JP Morgan, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC, OpenAI recently announced that it had raised $6.6 billion in investment from Nvidia, Microsoft, SoftBank, and Thrive Capital at a valuation of $157 billion.

So why would some of the world's biggest companies, investment banks, and funds pour so much money into OpenAI? Is it the business opportunity of the century? Or is something else going on?

Well, let’s take a look at OpenAI’s fundamentals and see if this is a good investment opportunity (hint: it’s absolutely not).

First off, as mentioned, OpenAI is not profitable. Again, it was projected to post an operating loss of $5 billion by the end of the year, about $3 billion of which went to AI development. Despite having hundreds of millions of users and far exceeding every revenue projection, the company is nowhere near breaking even. Worse still, its existing AIs are so expensive to run that even if it spent nothing on developing better models, the very thing its valuation rests on, it would still lose several billion dollars a year.

In short, OpenAI's commercial fundamentals are appalling. You could still argue that OpenAI is worth billions despite this embarrassing balance sheet, provided it has a credible path to much higher revenue and much lower costs in the future.

Sadly, this isn't the case at all.

First, AI development is hitting sharply diminishing returns. For AI to keep advancing at its current pace, training data, infrastructure, and power consumption all have to grow exponentially. That puts OpenAI and other AI firms in a serious bind: the cost of developing, building, and running AI will rise exponentially if progress is to continue at this rate, making cost by far the biggest constraint.
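To see why "the same pace of progress" translates into exploding cost, here is a minimal sketch assuming a stylized power-law scaling relation, with loss falling as a power of training compute, in the spirit of published scaling-law research. The constants K and ALPHA and the 10% improvement target are hypothetical, chosen purely to show the shape of the curve, not OpenAI's actual figures:

```python
# Minimal sketch assuming a stylized power-law scaling relation:
#     loss = K * compute**(-ALPHA)
# All constants here are hypothetical, for illustration only.

K = 10.0      # hypothetical constant
ALPHA = 0.05  # hypothetical scaling exponent

def compute_needed(target_loss: float) -> float:
    """Invert loss = K * compute**(-ALPHA) to get the compute required."""
    return (K / target_loss) ** (1 / ALPHA)

loss = K  # loss at 1 unit of baseline compute
for step in range(1, 6):
    loss *= 0.9  # aim for another fixed 10% reduction in loss
    print(f"after cut {step}: ~{compute_needed(loss):,.0f}x baseline compute")
```

Under these assumed numbers, every additional 10% cut in loss multiplies the compute bill by roughly 8x: equal-sized gains get exponentially more expensive, which is exactly the dynamic described above.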

Companies like OpenAI have deep pockets, but not that deep. Consequently, AI progress is starting to stall. You can see this in the ChatGPT models themselves. Every version from GPT-1 through GPT-3.5 was a significant leap forward. But the differences between 3.5 and 4, 4 and 4o, and 4o and o1 were far smaller. In fact, many of the changes were aimed at making the AI more usable rather than at improving its raw performance.

This is also why OpenAI is expected to spend $7 billion annually on AI training: to keep making even small, incremental progress, it has to drastically scale up its development effort.

Furthermore, even if OpenAI raises the staggering sums needed to build its next-generation AIs, the problem doesn't go away. First, because costs will rise rather than fall, the company becomes even less profitable. Second, it runs the risk of "model collapse."

When a model is trained on AI-generated data, it can quickly destabilize to the point of producing absurd gibberish. This is because AI-generated content carries subtle, nearly imperceptible statistical patterns that human-generated text does not. As a model trains on this data, it gradually assigns more and more weight to those spurious patterns, until the statistical model underpinning the AI falls apart.
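To make that mechanism concrete, here is a toy sketch, nothing like OpenAI's actual pipeline, in which a trivially simple "language model" (just a word-frequency table) is repeatedly retrained on its own output. The Zipf-style vocabulary and sample sizes are made-up assumptions; the point is only that rare words vanish for good once the model stops seeing fresh human data:

```python
# Toy illustration of "model collapse": a model that only estimates word
# frequencies is retrained on its own generated text each round and steadily
# loses rare words. Vocabulary and sample sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

vocab_size = 1_000
sample_size = 5_000

# Generation 0: "human" text drawn from a long-tailed (Zipf-like) distribution.
true_probs = 1.0 / np.arange(1, vocab_size + 1)
true_probs /= true_probs.sum()
corpus = rng.choice(vocab_size, size=sample_size, p=true_probs)

for generation in range(1, 11):
    # "Train": estimate word probabilities from the current corpus.
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()
    # "Generate": produce the next corpus from the trained model,
    # then train the next generation on that synthetic corpus.
    corpus = rng.choice(vocab_size, size=sample_size, p=probs)
    surviving = int((np.bincount(corpus, minlength=vocab_size) > 0).sum())
    print(f"generation {generation}: {surviving} of {vocab_size} words still appear")
```

Each generation, a few rare words fail to appear in the synthetic corpus, and once gone they can never come back. Scaled up to a real LLM trained on a web increasingly full of its own output, that steady loss of diversity is what "model collapse" refers to.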

To train its ChatGPT models, OpenAI has scraped literally billions of lines of text from the internet. Until recently, the internet was an excellent source of human-written text. But as ChatGPT's popularity has grown, so has the amount of AI-generated content online. Over 13% of Google search results are now estimated to be AI-generated, and that share is only expected to rise. Meanwhile, most of this AI-generated content isn't labeled or easy to spot. So if OpenAI keeps scraping data from the internet, it risks catastrophic model collapse.

To avoid this, OpenAI and other AI firms have turned to reliable sources of human-written material, such as books and video transcripts. But those collections are guarded by large corporate publishers and copyright holders.

These AI companies operate in a legal grey area when it comes to sourcing their training data. They claim they can take this data and train their models on it without permission or payment under "fair use," the part of copyright law that allows copyrighted material to be used for commentary or in a transformative way. However, training an AI on someone's work effectively lets the AI reproduce that work, sometimes almost exactly, for next to nothing, which cuts against copyright law. There is also the argument that using data this way amounts to "unjust enrichment," the legal doctrine that stops a person or business from profiting from your labor without compensating you. As a result, several major players, including Sony and WB, are suing AI companies either to make them pay millions for their content or to stop them using their copyrighted data, and the number of cases against the AI sector as a whole keeps growing.

Because copyright law is now being used against OpenAI, there is a good likelihood that they will have to remove the great majority of the data that drives their models within the next year or two.

Even if that doesn't happen, OpenAI's technology will never be reliable enough to deliver the unsupervised automation it promises. Recent evidence suggests that as an AI is trained on more data, it gets better at narrow, specific tasks but worse at general ones. In other words, more data won't fix AI's mistakes or hallucinations. So even if OpenAI produces these next-generation AIs, they will still need heavy human supervision to perform even simple tasks, which completely undermines the core promise that this technology will transform every industry.

Consider computer programming, supposedly one of the sectors AI can disrupt most easily. It isn't. Yes, these AIs can write code ten times faster than even the most proficient programmers. But the code they generate is so flawed and riddled with errors that debugging it takes hundreds of times longer than debugging a human programmer's work. In practice, the human coder ends up more cost-effective and efficient overall.
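As a rough back-of-the-envelope, here is a sketch that plugs the article's 10x speed claim, plus some entirely hypothetical baseline hours and a deliberately conservative debug multiplier (far below the "hundreds of times" claimed above), into a simple total-time comparison. None of these figures are measured data:

```python
# Back-of-the-envelope comparison using hypothetical, illustrative numbers.

human_write_hours = 10.0   # hypothetical: time for a human to write a feature
human_debug_hours = 2.0    # hypothetical: time to debug that human-written code

ai_speedup = 10.0          # article's claim: AI writes code ~10x faster
ai_debug_multiplier = 20.0 # assumed, deliberately conservative vs. the article

ai_write_hours = human_write_hours / ai_speedup
ai_debug_hours = human_debug_hours * ai_debug_multiplier

print(f"human total: {human_write_hours + human_debug_hours:.1f} h")  # 12.0 h
print(f"AI total:    {ai_write_hours + ai_debug_hours:.1f} h")        # 41.0 h
```

Even with numbers this generous to the AI, its head start on writing code is swamped by the extra debugging time, which is how the human ends up cheaper overall.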

This will remain a fatal flaw in OpenAI's products, and in the AI industry overall, until these errors can be eliminated entirely, something neither OpenAI nor anyone else in the industry can currently do.

What is the real purpose of AI, then, if that is the case?

Take the examples touted as AI fully automating a job: Amazon's checkout-less stores and Cruise's robotaxis. In practice, these companies have to employ roughly as many human supervisors to watch over the AIs as the positions they replaced, and even then the quality of service is lower and the overall cost is higher than simply hiring humans to do the work in the first place. As a practical business proposition, there are essentially no AI applications in which the customer, the company, and the AI provider all benefit and keep making money.

The banks and corporations that just poured billions into OpenAI know all of this. They employ some of the world's top market and technology analysts precisely to understand these kinds of details. So why have they invested so heavily in a ship that is clearly sinking?

It's simple. My entire analysis of OpenAI assumes that we live in a meritocracy. We don't. Our capitalist society is sliding ever further toward a plutocratic, monopolistic, power-hungry "volumocracy."

These investors know full well that robotaxis, AI journalists, AI HR bots, AI programmers, and the like perform far worse than humans and don't make money, and they don't give a damn. What these systems can do is produce and deliver vastly more than any human, flooding the market with an inferior product and stifling competition. That lets them seize unlimited market share through sheer force rather than merit. And it strengthens the near-monopolies in big tech and media, Microsoft among them, in which these banks and companies hold large stakes.

This is the reality of the AI push. It is caused by the dehumanizing, power-hungry, fascistic tendency of our financial sector and near-monopolistic big corporations. They want more power; damn the consequences.

That is why they have poured billions into a technology that erodes what little meritocracy has underpinned our economy and society for the past few centuries. By drowning out competitors, it dehumanizes and devalues work. It means the near-monopolies in which these investors hold enormous stakes no longer have to win by offering superior services; they can simply rest on their laurels and drown out the opposition with their vast sums of money. That OpenAI isn't profitable doesn't matter to these investors, because it will still increase their influence and market dominance.

To put it succinctly, AI applied in this manner produces an unfair concentration of power while successfully undermining the free-market principles that capitalists profess to cherish.

In the meantime, we, the global populace, bear the consequences. Our quality of life declines, our jobs are eliminated or diminished, money that could be allocated to more important areas is diverted to AI, and the goods and services we depend on become almost worthless.

This isn't hyperbole; it's already happening. Remember what I said about programmers? Over the past year, the number of programming job vacancies has dropped drastically because of AI. At the same time, the quality of code has fallen noticeably, prompting a growing chorus of complaints.

This is why AI terrifies me. This technology isn't revolutionary; it is a symptom of the rot at the heart of our modern society. Companies like OpenAI can only exist because the massive corporations that dominate our lives are willing to dehumanize and degrade society for even a sliver of additional power and control. They want an all-powerful throne, no matter how dysfunctional the world they rule over becomes. Yes, when these models fail or their training data is pulled, this house of cards will inevitably topple. And yes, before they are stopped, they will do incalculable damage to countless businesses by displacing talent and hollowing out knowledge and skills. But as long as they hold that power, they don't give a damn.

 

 

Sources: Originality, BBC, The Guardian, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Planet Earth & Beyond, CNBC, Deeplearning.ai, Tech Startups, The Economic Times, The Wrap, AI Snake Oil, The Independent
