OneFlow makes AI powerful and simple
An original core design and technical roadmap
Distributed performance, that is, efficiency at scale, is the core technical challenge for deep learning frameworks. Focusing on performance and heterogeneous distributed scaling, OneFlow is built around a core design of static compilation and streaming parallelism, which addresses the memory-wall challenge at the cluster level and keeps OneFlow at the technical forefront of the field.
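As a concrete illustration of what static compilation means here: in later OneFlow releases, a model can be wrapped in a static graph so that the entire computation is compiled into an execution plan before the first run. The nn.Graph names below follow OneFlow's public documentation and are offered as a minimal sketch, not the framework's only interface.

# A minimal sketch of static compilation, assuming the nn.Graph
# interface from later OneFlow releases (names per public docs).
import oneflow as flow
import oneflow.nn as nn

class LinearGraph(nn.Graph):
    def __init__(self, model):
        super().__init__()
        self.model = model  # module captured into the static graph

    def build(self, x):
        # build() is traced once; OneFlow compiles the whole
        # computation into a static execution plan before running it.
        return self.model(x)

graph = LinearGraph(nn.Linear(3, 4))
y = graph(flow.randn(2, 3))  # the first call triggers compilation, then executes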
Improved performance and efficiency
OneFlow greatly reduces communication and scheduling overhead within a computing cluster, improves hardware utilization, speeds up model training, and substantially cuts training cost and time. In authoritative third-party evaluations, OneFlow leads domestic and international competitors on common model workloads.
Automatic support for model parallelism and pipeline parallelism
Data parallelism alone cannot support large-model scenarios, which usually forces deep customization of open-source deep learning frameworks. OneFlow natively supports data parallelism, model parallelism, and hybrid parallelism with no customized development, and has been deployed at leading Internet and AI companies (see the sketch after the list below).
∙ OneFlow natively supports data parallelism, model parallelism, and hybrid parallelism
∙ Deployed at an AI company in the security industry, enabling face recognition at the scale of tens of millions of identities
∙ Model parallelism and data parallelism can be combined within a single network, making full use of both compute and communication resources
∙ Deployed at leading Internet companies, supporting ultra-large-scale Deep&Wide model training with tens of billions of features
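To make "hybrid parallelism without customized development" concrete, here is a minimal sketch using OneFlow's global-tensor interface; the flow.placement and flow.sbp names follow OneFlow's public documentation for later releases and are assumptions relative to this page. Each tensor carries a placement and an SBP (split / broadcast / partial-sum) signature, and the framework infers how every operator is parallelized.

# A minimal sketch of mixing data and model parallelism via
# global tensors (API names assumed from OneFlow's public docs).
import oneflow as flow

# Lay the job out across two GPU ranks.
placement = flow.placement(type="cuda", ranks=[0, 1])

# split(0) shards the batch dimension across ranks (data parallel);
# broadcast replicates the weight on every rank.
x = flow.randn(8, 4, placement=placement, sbp=flow.sbp.split(0))
w = flow.randn(4, 4, placement=placement, sbp=flow.sbp.broadcast)

y = flow.matmul(x, w)   # OneFlow infers y's SBP signature automatically
print(y.sbp, y.placement)

Launched as one process per rank (e.g., via OneFlow's distributed launcher), the same script expresses data, model, or hybrid parallelism just by changing the sbp signatures, which is the sense in which no framework customization is needed.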
Distributed, easy to use, and stable
OneFlow is a heterogeneous, distributed, streaming system for deep learning. Its design greatly reduces runtime overhead, and once a job is successfully launched, it runs without runtime errors. Among deep learning frameworks, OneFlow's distributed mode is the easiest to use, requiring minimal code while parallelizing fully automatically.
Officially open-sourced on July 31, 2020...