
Opportunities and Challenges to AI’s ‘Unstoppable’ Development


Artificial intelligence (hereafter referred to as AI) has made remarkable progress over the last few years, more than it made in the preceding five decades. For this reason, many people who do not work in the information technology industry, or who do not keep up with it, believe that the growth of this technology is on a steady, unstoppable exponential path. Nevertheless, as with any development or project, there are obstacles in the way of this advancement. AI development frequently hits plateaus that engineers must overcome, some technical and others external, arising from outside its code. This article goes over the obstacles that AI engineers and companies currently face while developing this technology.

Technical Challenges

As with any technology, artificial intelligence has constraints on what it can achieve, as well as a margin of error in its operation. These impediments to AI’s progress depend on various factors, including the quality of its models, algorithms, and training data, and even the communication between the system and its end users.

Even occasional users of generative AI services can notice defects in the results it delivers at its current stage, where even basic tasks can produce glaring errors. The most notable of these is “hallucination,” a term for the phenomenon of AI delivering false or incoherent information in response to its input. The underlying reason is that generative AI chatbots are currently built to predict a reasonable-sounding answer, not to understand the meaning behind it (Fui-Hoon Nah et al., 2023). A solution is still far off and much too open-ended, but in general the current algorithms must be adapted and improved so that models can cross-reference their own output.
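
To make the mechanism concrete, the sketch below is a deliberately tiny, hand-built next-word sampler in Python. It is a hypothetical toy, not a real language model or any vendor’s system: the “model” only knows how likely one word is to follow another, so it produces fluent text with no way of checking whether that text is true.

```python
import random

# Toy, hand-built "language model": for each word it only knows how likely
# each next word is. It has no notion of whether the resulting sentence is
# true -- it simply continues with something that sounds plausible.
NEXT_WORD_PROBS = {
    "the":    [("Eiffel", 0.6), ("Louvre", 0.4)],
    "Eiffel": [("Tower", 1.0)],
    "Louvre": [("museum", 1.0)],
    "Tower":  [("is", 1.0)],
    "museum": [("is", 1.0)],
    "is":     [("in", 1.0)],
    "in":     [("Paris", 0.7), ("Rome", 0.3)],  # plausible but sometimes wrong
}

def generate(start: str, max_steps: int = 6) -> str:
    """Extend the starting word by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(max_steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        candidates, weights = zip(*options)
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

for _ in range(5):
    print(generate("the"))
# Output is always fluent, but roughly a third of the time it confidently
# places the landmark "in Rome" -- a miniature hallucination.
```

Nothing in that loop ever asks whether the sentence is correct; scaled up by many orders of magnitude, the same gap between plausibility and truth is what users experience as hallucination.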

Sometimes, AI responses will also trail off and end up incongruous with the input. Engineers working on Claude 3 Sonnet have been testing the ability to assign values to “features” they specify, so as to make sure those topics wind up as part of the response. The example they used was to register the Golden Gate Bridge as a feature while prompting responses about the city of San Francisco. One bug that illustrates the state of this technology is that when the value of this feature was ramped up beyond its intended maximum, the chatbot began to refer to itself as the Golden Gate Bridge in the first person (Templeton et al., 2024).
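
The sketch below illustrates the general idea behind that experiment: amplifying a chosen feature direction inside a model’s internal activations. Everything here is invented for illustration (random vectors standing in for activations, a made-up “bridge” direction); the published work identifies real feature directions with a sparse autoencoder. The point is only to show how an over-amplified feature can swamp the rest of the representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a model's internal state: an activation vector and a unit
# direction that represents the "Golden Gate Bridge" feature. Both are
# random here, purely for illustration.
activation = rng.normal(size=16)
bridge_direction = rng.normal(size=16)
bridge_direction /= np.linalg.norm(bridge_direction)

def steer(act: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Amplify a feature by adding a scaled copy of its direction."""
    return act + strength * direction

for strength in (0.0, 2.0, 50.0):
    steered = steer(activation, bridge_direction, strength)
    # Fraction of the steered activation that points along the feature:
    alignment = np.dot(steered, bridge_direction) / np.linalg.norm(steered)
    print(f"strength={strength:5.1f}  alignment={alignment:5.2f}")
# At moderate strengths the feature nudges the representation toward the
# topic; at extreme strengths it dominates everything else, which is the
# kind of regime in which the chatbot began answering as if it were the bridge.
```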

The accuracy of an AI model depends largely on the dataset it is given. As such, gathering and formatting data is a large part of the job for engineers in this field, and many commonly seen errors in the output of AI systems can be traced back to flaws in their datasets. Large language models (LLMs) serve as a good case study. LLMs currently work well for creating short responses to simple inputs, and conventional logic would dictate that larger datasets improve their efficacy by giving them more content to learn from. The results have proven to be the opposite, however, with output becoming skewed once datasets reach a certain size threshold. Some engineering teams have opted to use multiple small datasets instead of a single large bank, but this turns out to be ineffective for long-form content, as smaller amounts of data lack the linguistic complexity of those longer forms (Dohmatob et al., 2024).

Expanding on the issue of datasets, one problem that AI models have yet to overcome is nuance in the importance of concepts. Referring again to the example of the Golden Gate Bridge, unless an idea is explicitly set as high-value in its algorithm, an AI will commonly not grasp its significance, nor the reason behind it. In this way, a model can be very good at identifying what the next word in a sentence might be based on how commonly it is used, but might for this very reason avoid rarely used words that carry much more impact or expression (Dohmatob et al., 2024).
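
The toy example below makes this concrete with invented scores for a handful of candidate next words: greedy decoding and low sampling temperatures keep choosing the most frequent, blandest continuation, while rarer but more expressive words are effectively never picked.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical next-word scores after "the view was ..."; higher means the
# word appeared more often in training data. All numbers are invented.
words = ["nice", "good", "great", "breathtaking", "vertiginous"]
logits = np.array([4.0, 3.8, 3.5, 1.5, 0.5])

def sampling_frequencies(logits: np.ndarray, temperature: float, n: int = 10_000) -> np.ndarray:
    """Sample n next words from a softmax at the given temperature."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    picks = rng.choice(len(logits), size=n, p=probs)
    return np.bincount(picks, minlength=len(logits)) / n

print("greedy choice:", words[int(np.argmax(logits))])  # always the most common word
for temperature in (0.5, 1.0, 1.5):
    freqs = sampling_frequencies(logits, temperature)
    print(f"T={temperature}:", {w: round(f, 3) for w, f in zip(words, freqs)})
# Greedy decoding and low temperatures keep choosing frequent, bland words;
# the rarer, more expressive options are almost never selected.
```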

Economic Concerns

Being such a new and exciting phenomenon for investors and large, established technology companies, AI as a whole has financial backing that rivals the GDPs of entire countries, with the global AI market valued at 142.3 billion US dollars as of 2023 (Thormundsson, 2024). This economic muscle brings unique challenges that other industries might never have to face.

As Meta CEO Mark Zuckerberg recently stated, “The downside of being behind [on AI] is that you’re out of position for the most important technology for the next 10 to 15 years” (Leswing, 2024). Given how popular the trend already is and is predicted to become, every company wants to be ahead of it, and so “the risk of underinvesting is dramatically greater than the risk of overinvesting,” as Alphabet CEO Sundar Pichai has said (Leswing, 2024). This has led to an unprecedented spending surge among big tech companies, which want their infrastructure ready for when the time comes to make money from AI. Microsoft has even said that it would purchase more GPU chips if more were currently available (Weise, 2024), an indication that despite this tremendous spending, supply still cannot satisfy AI’s development needs. Wizeline is also a major player in AI investment, especially in Latin America; a prime example is the GAIL (Generative Artificial Intelligence Laboratory) research center now being built with funding of approximately 9 million Mexican pesos (around 445,000 US dollars) (González et al., 2023).

While many companies fear being left behind, they may still face dire consequences from overspending. Given the rate and volume at which companies are currently spending, roughly $600 billion in annual AI revenue would be needed to make up for it, as calculated by David Cahn, a partner at the venture firm Sequoia (Leswing, 2024). Amid this frenzy, the only company making quantifiable profits is Nvidia, which makes the GPUs and chips needed to train AI models and whose revenue has tripled for three straight quarters and is expected to rise further (Leswing, 2024). This uncertainty and risk around revenue have generated doubts among important investors, especially after companies such as Google and Tesla reported lower earnings and saw their share prices dip for the same reason. Together with the general public’s rising distrust of AI, this is making the financial world apprehensive about investing, especially as businesses find it difficult to make money from the technology (Marr, 2024). Many interpret the current upward trend in AI interest as a bubble ready to burst, but regardless of whether the market for AI drops as predicted, companies must still sort out their business models to make their monumental expenditures worthwhile.

Societal Dilemmas

Society is deeply divided in its acceptance of AI. Those who advocate against it voice reasonable concerns that may outweigh its benefits if not managed properly.

Training an AI model to a high level requires millions upon millions of iterations, so it comes as no surprise that energy consumption scales accordingly, placing a great burden on the environment. According to a study from the Association for Computational Linguistics, training one large language model can emit about 315 tons of carbon dioxide (Chui et al., 2023). Water consumption is another major drawback. OpenAI’s GPT-3, a popular large language model, required approximately 700,000 liters of water in its training phase alone, owing to indirect consumption from generating the necessary electricity and direct usage in the cooling systems of data centers as well as in the production of microchips (George et al., 2023). This is only one example, and as the technology becomes widespread, it could severely impact communities that already deal with shortages.

The general purpose of AI models is to optimize themselves and their solutions, but this may often stray them from a humane path, offering cruelly practical (some may say Machiavellian) solutions to problems (Ozmen Garibay et al., 2023). The complication is that “humane” is a subjective term, so a model will only constrain itself morally as far as its data and algorithm lead it to. This becomes a problem when AI is given complex administrative responsibilities, yet the rate at which the world invests in and works on this technology indicates that it will be implemented in exactly this way. One example of AI with administrative responsibility making harmful moral decisions is the set of algorithms used to aid lending decisions by financial companies across the United States, which in 2021 were found to be 80% more likely to reject Black mortgage applicants than comparable white applicants (Vargas, 2024). Some companies, such as Wizeline, are dedicated to fostering AI models that are conscientious toward all groups, taking steps such as involving the community in product evaluation and working from development all the way through implementation to reduce bias. However, this is evidently not an investment that all companies exploiting AI are willing to make of their own volition (Vargas, 2024).

Public acceptance of AI is also significantly lowered by the looming risk of job opportunities lost to automation. This could have significant economic consequences, especially in developed countries, where wages are higher and automation could therefore be considered more worthwhile (Chui et al., 2023).

Risks of Artificial General Intelligence

The current big milestone that developers in this field strive toward is reaching “general intelligence” (AGI, or artificial general intelligence), the point at which an AI’s reasoning capabilities equal or surpass those of a human. The obstacles mentioned thus far stand in the way of getting to this point, but once society has arrived at it, a new suite of perils will surface.

It is hypothesized that once AI reaches this point, it will be able to recursively self-improve by creating more intelligent versions of itself and altering its own pre-programmed goals. This obviously raises the concern of AGI being used for the purposes of malicious groups, but its potential is such that some have even posed the idea of allowing it to destroy humanity and take its place in the evolutionary process (McLean et al., 2021). Seen this way, it is a weapon with society-changing potential, much as nuclear warheads are today.

Given the current theory of AGI’s rapid growth potential, it is crucial that good risk management be implemented in its first iteration, since its rapid advances will render any later configuration obsolete. It is therefore important to start engineering failsafes for AGI now, rather than leaving them until AGI has been achieved (McLean et al., 2021). Unfortunately, current research efforts tend to be oriented toward indiscriminate progress rather than toward questioning whether the current countermeasures against loss of control are adequate.

Conclusions

Viewing the range of obstacles in AI’s path at a grand level, the problem stems from the fact that tackling one category means neglecting the others. Because the industry is such a competitive space right now, a company that wishes to tackle the technical challenges must advance at such a rapid pace that there is no time to implement the technology responsibly in a way that benefits society; conversely, companies that take the time to research beneficial implementation risk being left out of the trend. To tackle these obstacles in a way that allows for progress without harming human well-being, the private sector, as the biggest proponent of this development, must come together and make these decisions in the way that brings the most benefit to humanity. As long as AI developers remain in conflict, their obstacles only grow greater.

References

Ajami, R. A., & Karimi, H. A. (2023). Artificial Intelligence: Opportunities and Challenges. Journal of Asia-Pacific Business, 24(2), 73–75. https://doi.org/10.1080/10599231.2023.2210239

George, A. S., George, A. S. H., & Martin, A. S. G. (2023). The environmental impact of AI: A case study of water consumption by Chat GPT. Partners Universal International Innovation Journal (PUIIJ), 1(2), 91–104. https://doi.org/10.5281/zenodo.7855594

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A, 376, 20180080. http://doi.org/10.1098/rsta.2018.0080

Chui, M., et al. (June 14, 2023). The economic potential of generative AI: The next productivity frontier. Retrieved from https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction

Dohmatob, E., Feng, Y., Yang, P., Charton, F., & Kempe, J. (2024). A tale of tails: Model collapse as a change of scaling laws. Retrieved from https://arxiv.org/pdf/2402.07043

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814

González, C. & Robles, K. (October 11, 2023). Tec creará 1er Laboratorio de Inteligencia Artificial Generativa. Retrieved from https://conecta.tec.mx/es/noticias/guadalajara/educacion/tec-creara-1er-laboratorio-de-inteligencia-artificial-generativa

Leswing, K. (July 25, 2024). Tech’s splurge on AI chips has companies in ‘arms race’ that’s forcing more spending. Retrieved from https://www.cnbc.com/2024/07/25/techs-splurge-on-ai-chips-has-meta-alphabet-tesla-in-arms-race.html

Marr, B. (August 7, 2024). Is The AI Bubble About To Burst? Retrieved from https://www.forbes.com/sites/bernardmarr/2024/08/07/is-the-ai-bubble-about-to-burst/

McLean, S., Read, G. J. M., Thompson, J., Baber, C., Stanton, N. A., & Salmon, P. M. (2021). The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence, 35(5), 649–663. https://doi.org/10.1080/0952813X.2021.1964003

Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., … Xu, W. (2023). Six Human-Centered Artificial Intelligence Grand Challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320

Smith, E. M., Graham, D., Morgan, C., & MacLachlan, M. (2023). Artificial intelligence and assistive technology: risks, rewards, challenges, and opportunities. Assistive Technology, 35(5), 375–377. https://doi.org/10.1080/10400435.2023.2259247

Szczepański, M. (2019). Economic impacts of artificial intelligence. Retrieved from https://www.semanticscholar.org/paper/Economic-impacts-of-artificial-intelligence-Szczepa%C5%84ski/52ce1997465fa8b52a7f1648ebae447ef7006167

Templeton, A., et al. (2024). Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Retrieved from https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

Thormundsson, B. (August 12, 2024). AI corporate investment worldwide 2015–2022. Retrieved from https://www.statista.com/statistics/941137/ai-investment-and-funding-worldwide/

Vargas, D. (July 18, 2024). How Can Designers Deal with Bias in AI? Retrieved from https://www.wizeline.com/how-can-designers-deal-with-bias-in-ai/

Weise, K. (July 30, 2024). Microsoft Profit Jumps 10%, but Cloud Computing Grows Less Than Expected. Retrieved from https://www.nytimes.com/2024/07/30/technology/microsoft-earnings-profit.html

Zhang, et al. (October 3, 2024). Intelligence at the edge of chaos. Retrieved from https://arxiv.org/html/2410.02536v1#bib


Posted on December 17, 2024