3 E Network Technology Group Limited (NASDAQ:MASK) (the "Company" or "3 E Network"), a business-to-business ("B2B") information technology ("IT") business solutions provider with the vision to become a next-generation AI infrastructure solutions provider, today announced a new strategic development blueprint for its semiconductor and AI server operations. This blueprint was prepared under the leadership of Mr. Siyang Hu, Vice President of the Company, following the recently announced expansion of its executive team.

This strategic blueprint is intended to support adjustments to the Company's business structure, spanning from the underlying chip architecture to the upper-layer software ecosystem, and extending to infrastructure operations. Mr. Hu stated that 3 E Network plans to build the infrastructure foundation needed for artificial general intelligence through the following three core strategic directions:

I. Vertically Integrated Ecosystem: Enabling Synergy from Semiconductor Logic to System Integration

In the era of large-scale AI models, demand for computing power continues to grow, and core technology barriers are increasingly moving down to the chip level. 3 E Network intends to build a vertically integrated ecosystem that spans from custom chip solutions to high-end complete systems. Leveraging the team's deep expertise in integrated circuit design and underlying communication protocols, the Company is connecting the entire industrial chain, from custom semiconductor design to full server system delivery.

By participating in system-on-chip definition and storage controller development, the Company seeks to optimize architecture at the data flow level. This "silicon-level" streamlining of data paths and protocol acceleration is designed to deliver lower I/O latency, higher throughput, and better energy efficiency when processing large volumes of concurrent AI tasks, building competitive differentiation in the compute infrastructure sector.

II. Heterogeneous Computing Matrix: Complementary Arm and x86 Architectures Precisely Targeting AI All-Flash Scenarios

To address the complex and varied demands of data center environments, the Company has established a dual-track heterogeneous computing matrix based on Arm and x86 architectures. This complementary approach targets AI all-flash storage scenarios:

  • Driven by Advanced Architecture (Arm Architecture and Software-Defined Networking): In line with the technology trend toward higher bandwidth and lower power consumption in the data center, the Company's high-performance product line incorporates Arm architecture. This integration combines computing, networking, and storage acceleration functions into a unified data path. By adopting a fully software-defined model, this architecture overcomes the flexibility limits of traditional hardware pipelines and improves data processing efficiency.
  • Performance-Focused Compatibility (x86 Architecture Optimization): For general computing and large-scale storage markets where total cost of ownership is a key priority, the Company uses deep hardware customization and underlying firmware optimization. With a forward-looking approach to future high-speed interfaces (such as PCIe 6.0 and beyond), 3 E Network delivers compute platforms that combine throughput with cost efficiency.
  • Focus on AI All-Flash Scenarios: 3 E Network is steadily shifting its product focus from traditional distributed storage to AI all-flash server systems that demand extremely high I/O performance. The Company is working toward the commercial delivery of a new generation of high-performance AI computing systems. By introducing a deeply optimized proprietary networking architecture, 3 E Network intends to address communication bottlenecks in the data exchanges required by large AI models.