How quickly is AI growing?

26th February 2024 by Pratik Mitra | Pharmaceutical

The timeline begins in the early 2010s, when deep learning started to reshape machine translation and push artificial intelligence (AI) toward human-level handling of natural language. The journey, though ongoing, saw a surge in accuracy and contextual understanding propelled by deep learning techniques and neural machine translation. Traditional statistical systems struggled with context, but the introduction of recurrent neural networks (RNNs) and, from 2017 onward, transformer models showcased a remarkable improvement. Trained on extensive datasets, these models grasped linguistic nuances and context, producing translations that approach human quality.

Fast forward to 2013-2014, and AI made notable strides in representing word meanings, marking a milestone in natural language processing (NLP). The adoption of word embeddings, such as Word2Vec (2013) and GloVe (2014), played a pivotal role. Representing words as vectors in a high-dimensional space enabled nuanced capture of semantic relationships, transcending mere syntactic patterns. This laid the foundation for enhanced contextual understanding, influencing subsequent developments in NLP applications.
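
To make the words-as-vectors idea concrete, here is a minimal sketch, using made-up four-dimensional vectors rather than real Word2Vec or GloVe output, of how cosine similarity exposes semantic relatedness in such a space:

```python
import numpy as np

# Toy 4-dimensional "embeddings", invented purely for illustration; real
# Word2Vec or GloVe vectors have hundreds of dimensions learned from large corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.7, 0.2, 0.4]),
    "apple": np.array([0.1, 0.2, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; values near 0 mean little relation."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))   # high: related meanings
print(cosine_similarity(vectors["king"], vectors["apple"]))   # lower: unrelated meanings
```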

The period from 2017 to 2022 witnessed a transformative shift with the emergence of large language foundation models. Models like OpenAI's GPT series, especially GPT-3 (2020), boasting 175 billion parameters, showcased unprecedented capabilities in language generation and contextual reasoning. The paradigm of pre-training large language models became dominant, allowing versatility across applications. Multimodal models, exemplified by OpenAI's CLIP, extended capabilities to process diverse content types. Google's BERT, introduced in 2018, enhanced contextual understanding with its bidirectional approach. The applications of large language models expanded across domains, addressing tasks like text generation, sentiment analysis, translation, and code generation. However, their widespread use raised ethical concerns, prompting considerations of fairness and transparency. Open-source initiatives, including releases of pre-trained models and frameworks, fostered collaborative research and global AI community contributions.

Insights: The global artificial intelligence (AI) market, valued at $136.55 billion in 2022, is poised for exponential growth, driven by substantial investments, digital disruption, and the quest for a competitive edge in a rapidly evolving global economy. Projections indicate a compound annual growth rate (CAGR) of 37.3% from 2023 to 2030, putting the market at roughly $1,811.8 billion by the end of the decade. AI's potential extends beyond market figures: by 2030 it is expected to add an estimated $15.7 trillion to the global economy, more than the current combined output of China and India. China is expected to reap the greatest gains, with a projected 26% rise in GDP by 2030, followed by North America with a 14.5% boost; together the two regions account for about $10.7 trillion, nearly 70% of the global AI-driven economic impact. These figures underscore AI's transformative potential as a key driver of economic growth and technological evolution, with profound implications for industries, societies, and global competitiveness in the years to come.
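
As a rough sanity check on those projections, compounding the 2022 base at the quoted CAGR for the eight years to 2030 lands in the same ballpark as the cited market size (the small gap comes down to rounding and the exact base year assumed):

```python
# Back-of-the-envelope check of the market projection quoted above.
# Assumes the 37.3% CAGR compounds over the eight years from the 2022 base to 2030.
base_2022_bn = 136.55   # global AI market, USD billions (2022)
cagr = 0.373            # projected compound annual growth rate, 2023-2030
years = 8

projected_2030_bn = base_2022_bn * (1 + cagr) ** years
print(f"Projected 2030 market size: ${projected_2030_bn:,.0f}B")
# Roughly $1,725B, in the same ballpark as the ~$1,812B figure cited.
```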

Input: AI systems operate by constructing models that capture relationships between variables within their training data. This can range from understanding the likelihood of words appearing together, such as "home" and "run" in language models, to discerning patterns in amino acid sequences that govern protein folding, which ultimately defines a protein's 3D structure and function.

In essence, a larger dataset provides AI systems with more information to construct accurate models of variable relationships, thereby enhancing their overall performance. Consider a language model fed with an extensive amount of text; it gains a wealth of examples, improving its ability to recognize patterns, like the co-occurrence of "home" and "run" in sentences describing baseball games or emphatic success.
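
A minimal sketch of that idea, counting how often two words appear in the same sentence across a toy corpus (real language models learn far richer statistics than raw co-occurrence counts):

```python
from collections import Counter
from itertools import combinations

# Tiny toy corpus; a real language model would see billions of sentences.
corpus = [
    "he hit a home run in the ninth inning",
    "the startup hit a home run with its new product",
    "she drove home after the run in the park",
]

# Count how often pairs of words appear together in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        pair_counts[pair] += 1

print(pair_counts[("home", "run")])  # co-occurrence count for "home" and "run"
```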

Highlighting the evolution of data scale, the Perceptron Mark I, as detailed in its original research paper, was trained on just six data points. In stark contrast, LLaMA, a large language model developed by Meta and released in 2023, was trained on around one billion data points, a roughly 160-million-fold increase over the Perceptron Mark I. LLaMA's dataset comprised text from diverse sources, including 67% from Common Crawl (a non-profit that scrapes and shares internet data), 4.5% from GitHub (used by software developers), and another 4.5% from Wikipedia. This shift in data scale underlines the relentless pursuit of ever larger and more diverse datasets to propel advances in AI capabilities.

Algorithms, comprising sets of rules or instructions that prescribe a sequence of operations, play a pivotal role in dictating how AI systems harness computational power to model relationships between variables in the data they are given. Beyond simply training AI systems with larger datasets and more computational resources, developers have been exploring ways to extract more value from limited resources. Research by Epoch found that "every nine months, the introduction of improved algorithms yields results comparable to doubling computational budgets," underscoring how much algorithmic advances contribute to the efficiency and performance of AI systems.
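
To see why that matters, here is an illustrative sketch, not Epoch's own methodology, of how an "algorithmic doubling" every nine months compounds into effective compute:

```python
# Illustrative sketch of Epoch's observation: algorithmic progress worth a
# doubling of compute roughly every nine months. The formula and time spans
# below are assumptions for illustration, not Epoch's published methodology.
def effective_compute(physical_compute, months_elapsed, doubling_months=9):
    """Physical compute scaled by the algorithmic-efficiency multiplier."""
    return physical_compute * 2 ** (months_elapsed / doubling_months)

budget = 1.0  # normalized compute budget held constant over time
for years in (1, 3, 5):
    print(f"{years} year(s): {effective_compute(budget, years * 12):.1f}x effective compute")
# Under this assumption, the same hardware budget buys roughly a 16x
# improvement after three years from algorithms alone.
```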

Upward benchmarks

Natural Language Processing (NLP) Advancements:

Case Study: OpenAI's GPT-3 demonstrated the ability to perform a wide range of NLP tasks, such as language translation, question-answering, and content generation. It showcased the power of large-scale pre-trained language models in understanding and generating human-like text.
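
GPT-3 itself is available only through OpenAI's hosted API, but the prompt-in, text-out workflow of a pre-trained generative model can be sketched locally with a much smaller open model; this sketch uses the Hugging Face transformers library with GPT-2 as a stand-in:

```python
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2 is a far smaller stand-in for GPT-3, used here only to show the
# prompt-in, text-out workflow of a pre-trained generative language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is growing quickly because"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```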

Computer Vision Improvements:

Case Study: EfficientNet, introduced by Google, presented an efficient convolutional neural network architecture that achieved state-of-the-art performance on image classification tasks while maintaining computational efficiency. Vision Transformers (ViTs) showed that transformer-based architectures could be successfully applied to computer vision tasks by treating images as sequences of patches.
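
As a hedged illustration of how accessible these pre-trained vision models have become, the sketch below loads a ViT checkpoint fine-tuned on ImageNet and classifies a local image (the file path is a placeholder to be replaced with a real image):

```python
# Requires: pip install transformers torch pillow
from transformers import pipeline

# Pre-trained Vision Transformer (ViT) fine-tuned on ImageNet-1k class labels.
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

image_path = "example.jpg"  # placeholder: point this at any local image file
for prediction in classifier(image_path):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```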

AI in Healthcare: Case Study: PathAI, a company using AI for pathology, developed models to assist pathologists in diagnosing diseases from medical images. AI applications also played a role in predicting patient outcomes and optimizing treatment plans based on individual health data.

Ethical AI and Bias Mitigation: Case Study: Google's research on "Equality of Opportunity in Supervised Learning" addressed the issue of bias in machine learning models. The study focused on developing algorithms that not only minimize overall error but also reduce disparate impact on different demographic groups.
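
Equality of opportunity, as framed in that work, asks that the true positive rate be the same across demographic groups; a minimal sketch on synthetic labels shows how the gap can be measured:

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the model correctly flags."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

# Synthetic labels and predictions for two demographic groups, invented for illustration.
y_true_a = np.array([1, 1, 1, 0, 0, 1]); y_pred_a = np.array([1, 1, 0, 0, 0, 1])
y_true_b = np.array([1, 1, 0, 0, 1, 1]); y_pred_b = np.array([1, 0, 0, 0, 0, 1])

gap = abs(true_positive_rate(y_true_a, y_pred_a) - true_positive_rate(y_true_b, y_pred_b))
print(f"Equal-opportunity gap (TPR difference): {gap:.2f}")  # 0 would mean equal opportunity
```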

AI for Generative Tasks: Case Study: ChatGPT, built on OpenAI's GPT-3.5 series of models, demonstrated the ability to generate coherent and contextually relevant text across various prompts. This technology found applications in chatbots, content creation, and even creative writing.

AI in Autonomous Vehicles: Case Study: Waymo, a subsidiary of Alphabet Inc. (Google's parent company), made significant strides in autonomous driving technology. Waymo's self-driving cars leveraged AI for real-time perception of the environment, decision-making in complex traffic scenarios, and safe navigation.

AI and Robotics Integration: Case Study: Boston Dynamics' robot, Spot, showcased the integration of AI in robotics for tasks like autonomous navigation, dynamic obstacle avoidance, and manipulation. Spot found applications in industries such as construction, inspection, and public safety.

Edge AI: Case Study: NVIDIA's Jetson series of edge computing platforms facilitated the deployment of AI models directly on edge devices. This allowed for real-time processing of data on devices like cameras and sensors, enabling applications such as smart surveillance and industrial automation.

Market Impact:

Increased Efficiency and Productivity:

Amazon's use of AI-powered robots in its fulfillment centers is a prime example. The robots assist in tasks such as picking and transporting items, leading to increased operational efficiency and faster order fulfillment.

Innovation and New Business Models: Netflix's recommendation system is a standout example. The platform uses AI algorithms to analyze user viewing history and preferences, providing personalized recommendations. This innovation has played a key role in Netflix's success and user retention.

Improved Customer Experience: Chatbots in customer service, such as those used by companies like American Express, leverage AI to understand and respond to customer queries. These chatbots provide instant support, improving the overall customer experience.

Data-driven Decision Making: The multinational retailer Walmart utilizes AI for demand forecasting and inventory management. By analyzing vast amounts of data, Walmart can optimize its supply chain, reduce stockouts, and make informed decisions on restocking.

Transformation in Healthcare: IBM's Watson for Oncology is used in cancer care. It analyzes medical literature, patient records, and clinical trial data to assist oncologists in making treatment decisions. This AI application helps in providing personalized and evidence-based treatment recommendations.

Enhanced Security and Fraud Detection: PayPal employs AI for fraud detection. Machine learning algorithms analyze transaction patterns and detect anomalies to identify potentially fraudulent activities, enhancing the security of online transactions.
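
PayPal's production systems are proprietary, but the underlying idea of flagging transactions that look unlike the rest can be sketched with an off-the-shelf unsupervised detector such as scikit-learn's IsolationForest, here on synthetic data:

```python
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: [amount in USD, hour of day]; all values are made up.
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
suspicious = np.array([[4800, 3], [5200, 4]])  # unusually large, late-night transfers
transactions = np.vstack([normal, suspicious])

# Unsupervised anomaly detection: flag the transactions least like the rest.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomaly, 1 = normal
print("Flagged indices:", np.where(flags == -1)[0])
```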

Autonomous Systems and Robotics: Tesla's Autopilot is an example of AI in vehicles. The system uses computer vision and machine learning to provide advanced driver assistance, including features such as traffic-aware cruise control and automatic lane-keeping, influencing the automotive industry and transportation sector.

E-commerce and Personalization: Alibaba's e-commerce platform uses AI for personalized shopping experiences. The platform analyzes user behavior, preferences, and purchase history to provide tailored product recommendations, contributing to increased sales and customer satisfaction.

Financial Services and Risk Management: JPMorgan Chase utilizes AI in its contract intelligence platform, COiN. The platform automates the review of legal documents, reducing the time required for manual contract analysis and enhancing efficiency in the financial services sector.

Job Market and Skills Demand: The growth of AI has led to increased demand for skilled professionals. Companies like Google, Microsoft, and Facebook are actively recruiting AI and machine learning experts to drive innovation in their AI-related projects and products.

Progressing step

According to Jaime Sevilla, director of the research organization Epoch, the escalation in the compute employed by AI developers is expected to persist at its current accelerated pace. Companies are expected to increase spending on individual AI systems, helped by the steadily falling cost of compute. This trend is projected to continue until further investment yields only marginal improvements in performance, at which point growth in compute usage may slow to the pace set by declining compute costs under Moore's law.

The data flowing into contemporary AI systems, exemplified by models like LLaMA, is sourced primarily from internet scraping. Traditionally, the limiting factor on data input has been the availability of enough compute to process it. Recently, however, the amount of data used for AI training has grown faster than the production of new text on the internet, and researchers at Epoch predict that a scarcity of high-quality language data could become apparent by 2026. AI developers, however, seem less troubled by this prospect. Ilya Sutskever of OpenAI, appearing on the Lunar Society podcast, expressed confidence in the current data situation, stating that "there's still lots to go." Dario Amodei, CEO of Anthropic, speaking on the Hard Fork podcast, put the probability of scaling being interrupted by insufficient data at around 10%.

Sevilla remains optimistic that a dearth of high-quality data won't impede AI advancements. He believes AI developers will find innovative workarounds, such as training on lower-quality language data, to address the challenge. Algorithmic progress is expected to keep increasing how much value can be extracted from both compute and data during AI training; while most past improvements have focused on using compute more efficiently, the emphasis may shift toward addressing data shortages in the future.

In summary, experts, including Sevilla, foresee rapid AI progress for the next few years. The trajectory involves increased compute, utilization of remaining useful internet data, and ongoing efforts to enhance the efficiency of AI systems. Despite the positive outlook, concerns linger among experts. During a Senate Committee hearing, Amodei highlighted the potential for widespread access to scientific knowledge through AI systems, raising apprehensions about misuse in areas like cybersecurity, nuclear technology, chemistry, and biology, posing risks that need careful consideration.

Pratik Mitra

Research Associate

A dynamic market research specialist with expertise in industry research, market assessment, competitive intelligence, and strategic market intelligence, providing information for business decisions.
