The Current Situation and Prospects of Decentralized Computing Power

The rise of AI and other cutting-edge technologies is set to transform many industries. Computing power will become increasingly crucial, and the industry is exploring many ways to supply it. Decentralized computing power networks hold unique advantages: they mitigate the risks of centralization and serve as a powerful complement to centralized computing power.

Computing power in demand

In 2009, the release of "Avatar" ushered in the 3D movie revolution with unprecedentedly lifelike imagery. Weta Digital, which handled much of the film's visual effects rendering, processed up to 1.4 million tasks per day and 8 GB of data per second in its 10,000-square-foot server farm in New Zealand. Even with that scale of hardware and investment, completing all the renders took more than a month.

On January 3 of the same year, Satoshi Nakamoto mined the genesis block of Bitcoin, reportedly on a small server in Helsinki, Finland, and received a block reward of 50 BTC. Computing power has played a vital role in the cryptocurrency industry ever since. As the whitepaper puts it, "The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power."

— Bitcoin Whitepaper

The PoW consensus mechanism relies on computing power to secure the chain. A steadily rising hashrate reflects both miners' continued investment in hardware and their positive income expectations. This real demand for computing power has driven chip manufacturers forward: mining chips evolved from CPUs through GPUs and FPGAs to ASICs as a direct result.
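
To make the mechanism concrete, here is a minimal sketch of the hash-grinding loop at the heart of PoW. The header contents and difficulty are illustrative, not Bitcoin's actual block format:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the double SHA-256 of the header
    falls below the target, i.e. has enough leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found a valid proof of work
        nonce += 1

# Tiny difficulty so the demo finishes quickly; the real Bitcoin network
# performs on the order of quintillions of such hashes per second.
print(mine(b"example block header", difficulty_bits=16))
```

Raising `difficulty_bits` by one doubles the expected work, which is exactly why a rising hashrate translates directly into hardware investment.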

Bitcoin mining machines are primarily ASIC-based, built to execute one specific algorithm, SHA-256, as efficiently as possible. The enormous economic rewards of Bitcoin drove up demand for mining compute, but the over-specialization of the equipment and clustering effects produced a siphon effect among participants, concentrating the industry into capital-intensive operations.

Ethereum's smart contracts opened up a wide range of applications, particularly in DeFi, which drove a sharp increase in the price of ETH. Under its PoW consensus mechanism, mining difficulty kept rising, and with it miners' demand for computing power. Unlike Bitcoin, Ethereum's mining algorithm was designed to run on general-purpose GPUs such as Nvidia's RTX series. The resulting competition for GPUs repeatedly drove high-end graphics cards out of stock.

OpenAI's ChatGPT, released on November 30, 2022, marked a watershed moment for AI. Users were awed by a system that could complete a wide variety of tasks in context, much like a real person. The version launched in September 2023 added generative multi-modal features such as voice and images, taking the experience to a whole new level.

However, the GPT-4 model reportedly involves more than a trillion parameters across pre-training and subsequent fine-tuning, making it one of the most demanding workloads in AI. The pre-training phase digests a huge volume of text to learn language patterns, grammar, and context, enabling the model to generate coherent, contextual output. The fine-tuning phase then adapts it to specific content types or styles, improving performance and specialization. The Transformer architecture behind GPT uses a self-attention mechanism whose cost grows quadratically with sequence length, making long sequences especially expensive. Mainstream LLMs built on the same architecture all require high-performance GPUs, which is why the investment cost of large AI models is so enormous. According to SemiAnalysis estimates, training GPT-4 once costs up to $63 million, and keeping it running with a good interactive experience demands a massive amount of computing power every day.
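
A minimal PyTorch sketch of vanilla scaled dot-product attention makes the quadratic cost visible; the dimensions are illustrative, and real models add batching, multiple heads, and masking:

```python
import torch

def attention(q, k, v):
    # The scores matrix is (seq_len x seq_len): its compute and memory
    # grow quadratically with sequence length, which is why long
    # contexts are so expensive.
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

seq_len, d = 4096, 64
q = k = v = torch.randn(seq_len, d)
out = attention(q, k, v)   # the scores alone hold 4096 * 4096 floats
print(out.shape)           # torch.Size([4096, 64])
```

Doubling the sequence length quadruples the size of the scores matrix, so context-window growth translates directly into compute demand.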

Computing hardware classification

First, it helps to understand the main types of computing hardware on the market today, and which workloads CPUs, GPUs, FPGAs, and ASICs are each suited to.

1. GPUs

The GPU trumps the CPU in raw processing throughput thanks to its multitude of cores, which let it handle many computing tasks simultaneously. Parallel computing, which is vital in machine learning and deep learning, is exactly what GPUs are built for: they chew through large volumes of identical operations efficiently. The CPU, by contrast, has only a handful of cores and is better suited to a small number of complex, sequential tasks. For rendering and neural-network workloads, the GPU reigns supreme, handling parallel, repetitive calculations far more effectively than the CPU.
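
A quick, hedged way to see the gap on your own machine is to time the same large matrix multiply on each device. This sketch uses PyTorch and only exercises the GPU path if CUDA is available:

```python
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # GPU work is async; sync for accurate timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():             # only run the GPU path if one exists
    print(f"GPU: {timed_matmul('cuda'):.3f}s")
```

The exact speedup depends on the hardware, but an order of magnitude or more is typical for dense linear algebra of this kind.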

2. FPGA (Field Programmable Gate Array)

A field-programmable gate array is a semi-custom circuit that sits between general-purpose chips and fully application-specific integrated circuits (ASICs). Comprising a vast number of small processing units, an FPGA is a programmable digital-logic chip that can be configured to accelerate specific hardware tasks, working in tandem with a CPU that handles the remaining work.

3. ASIC (Application Specific Integrated Circuit)

An Application Specific Integrated Circuit (ASIC) is designed for one specific user requirement or electronic system. Compared with general-purpose chips, ASICs offer smaller size, lower power consumption, higher performance, better confidentiality, improved reliability, and lower unit cost at volume. For a fixed workload like Bitcoin mining, which repeats a single specific computation, the ASIC is therefore the best fit.

However, an ASIC is a fixed circuit. An FPGA, by contrast, integrates a large array of basic digital logic gates and memory whose behavior developers define by loading a configuration, and that configuration can be replaced. Given how fast the AI field moves, customized or semi-customized chips cannot be re-tooled quickly enough to handle new tasks or adapt to new algorithms.

This is where Graphics Processing Units (GPUs) shine in the field of AI, thanks to their general adaptability and flexibility. Major GPU manufacturers have optimized their products for AI workloads. Nvidia, for instance, ships GPUs such as the Tesla series and the Ampere architecture with dedicated hardware units (Tensor Cores) optimized for machine learning and deep learning, letting them run the forward and backward passes of neural networks efficiently and at low energy cost.
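
As a hedged illustration, here is a minimal PyTorch mixed-precision training step; under autocast, the matrix multiplies run in reduced precision, which is how frameworks typically engage Tensor Cores. It assumes an Nvidia GPU is present:

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()   # keeps fp16 gradients numerically stable

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():        # matmuls run in reduced precision on Tensor Cores
    loss = torch.nn.functional.mse_loss(model(x), target)
scaler.scale(loss).backward()          # the backward pass benefits as well
scaler.step(optimizer)
scaler.update()
```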

Nvidia also provides a wide range of tools and libraries to support AI development, such as CUDA (Compute Unified Device Architecture), which lets developers use GPUs for general-purpose parallel computing.
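
To show what "general-purpose parallel computing" looks like in practice, here is a sketch using Numba's CUDA bindings from Python: each GPU thread handles one element of a vector operation. It requires an Nvidia GPU with the CUDA toolkit installed:

```python
from numba import cuda
import numpy as np

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads = 256
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](2.0, x, y, out)   # Numba copies the arrays to the GPU
print(out[:3])
```

The same element-per-thread pattern, scaled up, underlies the matrix and tensor kernels that AI frameworks dispatch to the GPU.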

Decentralized computing power

Decentralized computing power means providing processing power through distributed computing resources. It leverages blockchain or similar distributed-ledger technology to pool idle computing resources and distribute them to users in need, enabling resource sharing, transactions, and management.

  1. Strong demand for computing hardware: The rise of the creator economy and the increasing need for digital media processing have led to a surge in demand for visual effects rendering, resulting in the emergence of specialized rendering outsourcing studios and cloud rendering platforms. However, this requires significant investments in computing power hardware.

  2. Limited supply of computing power hardware: The development of AI has increased the demand for computing hardware, with Nvidia being the dominant player in the market. As a result, the company's market value has exceeded $1 trillion, and its supply capacity has become a limiting factor for the growth of some industries.

  3. Centralized cloud platforms dominate computing power provision: Cloud vendors like AWS offer GPU cloud computing services and benefit from the growing demand for high-performance computing. For instance, AWS's p4d.24xlarge instance, designed for machine learning and packing eight Nvidia A100 40GB GPUs, rents for $32.8 per hour at a gross margin of 61% (a rough rent-versus-buy comparison follows this list). Other cloud providers are also stockpiling hardware to compete for market share.

  4. Uneven distribution of computing resources: The ownership and concentration of GPUs are skewed towards organizations and countries with ample funds and technological expertise, leading to an imbalance in the industry. Moreover, chip and semiconductor manufacturing powers like the United States have implemented stricter restrictions on the export of AI chips to hinder the research capabilities of other nations in the field of general artificial intelligence.

  5. Concentration of computing resources: The development of AI is largely controlled by a few giant companies, such as OpenAI, which enjoys the support of Microsoft and has access to extensive computing resources through Microsoft Azure. This allows OpenAI to shape the AI industry with each new product release, making it challenging for other teams to catch up in the field of large models.
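
The pricing in point 3 invites a quick back-of-the-envelope comparison. The hourly rate is the AWS figure quoted above; the ~$100k hardware outlay is an assumed, illustrative number, not a quoted price:

```python
# Rough cost comparison for an 8x A100 machine.
hourly_rate = 32.8                     # USD/hour, AWS p4d.24xlarge figure above
hours_per_year = 24 * 365

rent_per_year = hourly_rate * hours_per_year
print(f"One year of on-demand rental: ${rent_per_year:,.0f}")   # ~$287,328

hardware_cost = 100_000                # hypothetical purchase price, for illustration
breakeven_hours = hardware_cost / hourly_rate
print(f"Rental matches purchase after ~{breakeven_hours:,.0f} hours "
      f"({breakeven_hours / 24:.0f} days)")
```

Under these assumptions, steady usage pays for the hardware within months, which is precisely the margin structure that decentralized supply aims to undercut.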

Amidst the challenges posed by high hardware costs, geographical limitations, and uneven industrial development, alternative solutions are imperative. That is where decentralized computing power platforms come in, aiming to establish an open, transparent, self-regulating market that optimizes the utilization of global computing resources.

Suitability analysis

Decentralized computing power supply side

  1. Current inflated hardware prices and controlled supply dynamics have laid the groundwork for the emergence of decentralized computing networks.

  2. Decentralized computing power comprises a spectrum of providers, ranging from personal computers and IoT devices to expansive data centers and IDCs. This diverse pool of computing resources offers flexible and scalable solutions, enabling AI developers and organizations to optimize resource utilization. Leveraging idle computing power from individuals or organizations facilitates decentralized sharing, albeit subject to user restrictions or sharing limits.

  3. A potential source of high-quality computing power is the repurposing of mining resources after Ethereum's transition to PoS. CoreWeave, a prominent integrated GPU provider in the US, exemplifies the trend: formerly the largest Ethereum mining operation in North America, it retains established infrastructure and a surplus of retired mining GPUs. With an estimated 27 million GPUs active at the peak of Ethereum mining, rejuvenating these resources could significantly bolster decentralized computing networks.

Decentralized computing power demand side

  1. From a technical standpoint, decentralized computing resources have so far been used for tasks like graphics rendering and video transcoding, which are computationally heavy but easy to partition across machines. Combining blockchain with web3 provides tangible income incentives to encourage participation while keeping data transmission secure. In AI, where parallel computing and inter-node communication matter greatly, current applications focus on fine-tuning, inference, and the application layer rather than low-level training.

  2. Looking at it from a business perspective, a market solely focused on buying and selling computing power lacks innovation. Such markets typically handle supply chain and pricing strategies, which are strengths of centralized cloud services. This limits the market's potential for growth and creativity, prompting networks originally designed for simple graphics rendering to transition towards AI. For instance, Render Network introduced a native integrated Stability AI toolset in Q1 2023, expanding beyond rendering to the AI domain.

  3. Considering the main customer groups, large businesses usually prefer centralized cloud services due to their need for efficient computing power aggregation in developing large-scale models. Decentralized computing power, on the other hand, caters more to small and medium-sized development teams or individuals involved in model fine-tuning or application layer development. They prioritize cost-effectiveness, as decentralized computing significantly lowers initial investment costs. Despite not dominating current industry spending, as AI applications continue to expand, the future market for decentralized computing is promising.

  4. In terms of services provided, current projects resemble decentralized cloud platforms, offering comprehensive management from development to deployment and transaction. This approach attracts developers by simplifying development and deployment processes while enticing users with complete application products. Building an ecosystem based on its computing power network creates a competitive advantage, but it requires effective project operations to attract and retain both developers and users.

The Future Market Size of the AI Computing Power Economy

According to the "2022–2023 Global Computing Power Index Assessment Report," a collaboration between IDC, Inspur Information, and the Global Industry Research Institute of Tsinghua University, the global AI computing market is projected to grow from $19.5 billion in 2022 to $34.66 billion in 2026. The generative AI computing market is expected to surge from $820 million in 2022 to $10.99 billion in 2026, lifting generative AI's share of the overall AI computing market from 4.2% to 31.7%.
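
A quick sanity check of the report's arithmetic, computing the implied compound annual growth rates and the generative share from the figures above:

```python
# Implied growth rates and generative share of the AI computing market.
overall_2022, overall_2026 = 19.50, 34.66      # USD billions
gen_2022, gen_2026 = 0.82, 10.99

def cagr(start, end, years):
    """Compound annual growth rate over the given span."""
    return (end / start) ** (1 / years) - 1

print(f"Overall AI compute CAGR:    {cagr(overall_2022, overall_2026, 4):.1%}")  # ~15.5%
print(f"Generative AI compute CAGR: {cagr(gen_2022, gen_2026, 4):.1%}")          # ~91%
print(f"Generative share 2022: {gen_2022 / overall_2022:.1%}, "
      f"2026: {gen_2026 / overall_2026:.1%}")                                    # 4.2% -> 31.7%
```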

Monopoly in Computing Power Economy

NVIDIA's monopoly on AI GPUs, with the H100 reportedly selling for around $43,000 per unit, is hard to accept. Silicon Valley's tech giants grab these scarce resources first and then rent them back to developers through cloud platforms owned by Google, Amazon, and Microsoft, a form of control that deserves to be challenged. Developers who cannot afford a dedicated GPU are forced to rent marked-up cloud servers from those same giants. Financial reports showing a 61% gross margin for AWS's cloud services, and an even higher 72% for Microsoft's, point to an extractive business model. It is time to demand a fairer distribution of computing resources and to act against the monopolization of the next era.

Challenges in Establishing a Decentralized AGI Computing Power Network

Decentralization has been touted as a solution to many challenges facing the technology industry, including those related to artificial intelligence (AI). However, creating a decentralized AI computing power network poses several difficulties. Existing projects have not yet achieved the level of sophistication required to tackle the complexities involved. Here are some of the obstacles that must be overcome:

1. Verifying Work Completion:

Deep learning models rely heavily on sequential processes, making it difficult to verify individual layers' computations independently. As a result, verifying work completion requires executing the entire model from scratch, which is computationally expensive.
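
A hedged sketch of the naive approach makes the cost obvious: the verifier must re-run the whole computation just to check one claimed result. (In practice, bitwise reproducibility of floating-point results across different hardware is itself not guaranteed, which makes the problem even harder.)

```python
import torch

torch.manual_seed(0)                       # determinism matters for re-execution
model = torch.nn.Sequential(torch.nn.Linear(32, 32), torch.nn.ReLU())

def fingerprint(model, x):
    """Hash the model output so a verifier can compare it against a claim."""
    with torch.no_grad():
        out = model(x)
    return hash(out.numpy().tobytes())

x = torch.randn(8, 32)
claimed = fingerprint(model, x)            # the worker reports this value

# Naive verification: re-run the *entire* computation and compare.
# The verifier pays the same cost as the worker, which is exactly
# the inefficiency described above.
assert fingerprint(model, x) == claimed
```

Schemes like probabilistic checking or staking-and-dispute games exist precisely to avoid this full re-execution, but each brings its own trust assumptions.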

2. Building a Viable Market:

Establishing a functional AI computing power market faces challenges related to supply and demand. Ensuring a balance between available computing resources and the demand for them is essential. Participants need incentives to contribute their resources, and a mechanism to track and compensate them for their efforts is necessary. Traditional markets struggle with scalability due to high operational costs, leaving only a small pool of suppliers.

3. The Halting Problem:

Determining whether a computational task will finish in a finite time is a well-known problem in computer science. Smart contracts on platforms like Ethereum face a similar challenge. It's impossible to predict how much computational resources a contract will require or if it will execute within a reasonable time frame. This issue intensifies in deep learning, where models and frameworks constantly evolve.
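
Ethereum's practical workaround is gas metering: execution simply halts when the budget runs out, so no one needs to predict termination in advance. A toy Python sketch of the idea:

```python
class OutOfGas(Exception):
    pass

def metered_run(instructions, gas_limit: int):
    """Execute a list of (cost, fn) steps, charging gas as we go.
    Execution is cut off when the budget is exhausted, sidestepping
    the need to predict whether the program halts."""
    gas = gas_limit
    for cost, fn in instructions:
        if gas < cost:
            raise OutOfGas(f"halted with {gas} gas left; step needs {cost}")
        gas -= cost
        fn()
    return gas

# A program of unknown length is fine: it just runs out of gas eventually.
program = [(3, lambda: None)] * 10_000
try:
    metered_run(program, gas_limit=1_000)
except OutOfGas as e:
    print(e)
```

Applying the same idea to deep learning is harder, because the "instructions" are long-running GPU kernels whose costs vary across hardware and framework versions.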

4. Privacy Concerns:

Machine learning research relies heavily on public datasets, but tailoring models to specific applications requires private data. Protecting sensitive information while enabling effective model training is crucial. Projects must consider privacy concerns during design and development.

5. Heterogeneous Parallelization:

Current decentralized projects struggle to parallelize compute-intensive tasks across many devices. Deep learning models are normally trained on tightly coupled, specialized hardware with minimal latency, whereas distributed networks suffer delays from frequent data transfers, and overall throughput is capped by the slowest device. Achieving efficient parallelization across untrusted and unreliable computing sources remains a significant hurdle, as the sketch below illustrates.
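
The straggler effect is easy to quantify. In this sketch the device names and per-step times are hypothetical, chosen only to mimic a heterogeneous volunteer pool:

```python
# Synchronous data-parallel training advances at the pace of the slowest
# worker: every device must finish its step before gradients are averaged.
step_times = {"datacenter_a100": 0.12, "gaming_rtx3080": 0.35,
              "laptop_gpu": 1.40, "old_workstation": 2.10}   # seconds, hypothetical

sync_step = max(step_times.values())        # everyone waits for the straggler
mean_step = sum(step_times.values()) / len(step_times)

print(f"Synchronized step time: {sync_step:.2f}s (set by the slowest device)")
print(f"Mean device step time:  {mean_step:.2f}s")
print(f"Fast-device utilization: {step_times['datacenter_a100'] / sync_step:.0%}")
```

Here the fastest device sits idle most of each step, which is why asynchronous and communication-efficient training schemes are an active research focus for decentralized networks.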

Addressing These Challenges:

While current decentralized AGI computing power projects face limitations, there are initiatives that have made progress in addressing these challenges. Gensyn and Together offer valuable insights into the design and implementation of decentralized computing power networks for model training and inference. By analyzing their approaches, we can better understand the path forward.

The future of computing power will be shaped by advancements in AI and other emerging technologies. Decentralized computing power networks offer distinct advantages, mitigating centralization risks and complementing traditional computing power. Meeting the growing demand for computing resources necessitates exploring novel solutions.