Bojie Li (李博杰)
2017-11-13
(Reposted from the USTC Initiative Foundation)
SOSP 2017 (the Symposium on Operating Systems Principles), one of the top international conferences in computer systems, was held in Shanghai not long ago. Since the first SOSP in 1967, more than half of the material in operating systems and distributed systems textbooks has originated from SOSP papers, so researchers in the systems field generally regard publishing at SOSP as an honor. Among the 39 papers accepted at this year's SOSP, only two had first authors from mainland China; one of them is the KV-Direct system, co-authored by Li Bojie, a third-year PhD student at the University of Science and Technology of China (USTC), and Ruan Zhenyuan, a senior undergraduate. This is also USTC's first paper at SOSP. How did Ruan Zhenyuan, as an undergraduate, achieve USTC's breakthrough at SOSP, step by step?
2017-11-10
(Reposted from Microsoft Research Asia)
SOSP (the Symposium on Operating Systems Principles) has been held every two years since its founding in 1967, now spanning 50 years. From the UNIX system in 1969 to MapReduce, BigTable, and GFS in the early 2000s, a long line of the most influential work in systems has been published at SOSP and at OSDI, its sibling conference held in alternating years. Collecting the most influential (Hall of Fame Award) papers from SOSP and OSDI over the years would yield the better part of a textbook on operating systems and distributed systems. As the top academic venues in systems, SOSP and OSDI each accept only 30 to 40 high-quality papers per edition, so publishing at SOSP is an honor for any systems researcher.
2017-11-02
(Closing remarks of a talk given at the University of Science and Technology of China in November 2017, adapted from Einstein's "Principles of Research" (探索的动机))
Most research in the systems field falls into two categories. One starts from new hardware — for example our programmable NICs, as well as RDMA NICs, NVMe and NVM in high-speed storage, and the SGX and TSX instruction extensions in CPUs — which opens many new possibilities for system design. The other starts from a new application scenario — for example deep learning, which everyone talks about today — which poses many new challenges for system design. But if these were the only two kinds of research in systems, it would not be a respected research field, just as scattered weeds do not make a forest. For new hardware or new application scenarios, even without dedicated systems researchers, engineers would find ways to exploit those possibilities and meet those challenges.
What, then, draws so many clever people into systems research? I think Einstein's "Principles of Research" puts it well, so I have adapted it here. First there is a negative motive. As Schopenhauer said, one of the strongest motives that lead people to art and science is the desire to escape everyday life, with its painful crudeness and hopeless dreariness, and to break free from the fetters of one's own ever-shifting desires. In engineering projects there are always many non-technical factors, many legacy issues, many bugs in tools and hardware. Most of the time in an engineering project is spent on such work, which requires little creativity, and today's AI is not yet good enough to do it for us. So the well-cultivated systems engineer longs to escape this complexity into a world of objective perception.
Besides this negative motive there is a positive one. People always want to paint, in the way that suits them best, a simplified and intelligible picture of the world; they then try to substitute this world-system of theirs for the world of experience, and thus to conquer it. This is what the painter, the poet, the speculative philosopher, and the natural scientist do, each in their own way. The systems researcher must exercise strict restraint over the subject matter, namely describing the most general modules of real systems; to reproduce the complex systems of the real world with the researcher's precision and completeness is beyond the reach of human intellect. The basic abstractions that underlie systems — IP for networks, SQL for databases, files for operating systems — should hold universally for a large class of hardware architectures and a large class of application scenarios. With these basic abstractions, a complete system can be built by pure deduction. During this construction, engineers can add back the complexity of the real world, perhaps losing some of the elegant properties of the basic abstractions, yet we can still understand the behavior of the whole system through a chain of deduction that does not exceed human reason.
The supreme task of the systems researcher is to arrive at those universal basic abstractions from which high-performance, scalable, highly available, and easy-to-program systems can be built up by deduction. There is no logical path to these abstractions; only intuition, resting on an understanding grounded in experience, can reach them. This means that a good systems researcher must first be an experienced systems engineer. Because of this methodological uncertainty, one might suppose that there would be many equally valid systems abstractions, and this view is tenable both in theory and in practice. But the development of the systems field has shown that, at any given time, under the same hardware constraints and application scenarios, one of them always proves decidedly superior to the rest. This is the "pre-established harmony" that Leibniz so aptly described. The longing to behold this pre-established harmony is the source of the systems researcher's inexhaustible patience and perseverance.
2017-10-29
The performance of in-memory key-value stores (KVS) continues to be of great importance, as modern KVS go beyond the traditional object-caching workload and become a key infrastructure supporting distributed main-memory computation in data centers. Recent years have witnessed a rapid increase of network bandwidth in data centers, shifting the bottleneck of most KVS from the network to the CPU. RDMA-capable NICs partly alleviate the problem, but the primitives provided by the RDMA abstraction are rather limited. Meanwhile, programmable NICs have become available in data centers, enabling in-network processing. In this paper, we present KV-Direct, a high-performance KVS that leverages a programmable NIC to extend RDMA primitives and enable remote direct key-value access to the main host memory.
We develop several novel techniques to maximize the throughput and hide the latency of the PCIe connection between the NIC and host memory, which becomes the new bottleneck. Combined, these mechanisms allow a single-NIC KV-Direct to achieve up to 180 M key-value operations per second, equivalent to the throughput of tens of CPU cores. Compared with CPU-based KVS implementations, KV-Direct improves power efficiency by 3x while keeping tail latency below 10 µs. Moreover, KV-Direct achieves near-linear scalability with multiple NICs. With 10 programmable NIC cards in a commodity server, we achieve 1.22 billion KV operations per second, almost an order of magnitude beyond existing systems, setting a new milestone for general-purpose in-memory key-value stores.
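One standard way to hide the latency of a link like PCIe, in the spirit of the abstract above, is to keep many requests in flight and accept their completions out of order. The C sketch below illustrates only that bookkeeping; the tag table, its size, and the `kv_request` type are hypothetical illustrations, not KV-Direct's actual engine design.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_INFLIGHT 64  /* outstanding requests per engine (illustrative) */

struct kv_request {
    uint64_t key;
    int      valid;
};

/* Table of in-flight requests, indexed by tag. */
struct inflight_table {
    struct kv_request slots[MAX_INFLIGHT];
    int count;
};

/* Issue a request: claim a free tag so the response can be matched later,
 * even if responses come back in a different order than requests went out. */
int issue(struct inflight_table *t, uint64_t key) {
    for (int tag = 0; tag < MAX_INFLIGHT; tag++) {
        if (!t->slots[tag].valid) {
            t->slots[tag].key = key;
            t->slots[tag].valid = 1;
            t->count++;
            return tag;
        }
    }
    return -1;  /* pipeline full: caller must wait for a completion */
}

/* Complete a request by tag; completions may arrive in any order. */
uint64_t complete(struct inflight_table *t, int tag) {
    assert(t->slots[tag].valid);
    t->slots[tag].valid = 0;
    t->count--;
    return t->slots[tag].key;
}
```

With enough tags in flight, the pipeline stays busy during each request's round trip, so throughput is bounded by bandwidth rather than latency.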
2017-10-28
(Reposted from Microsoft Research Asia)
SOSP 2017 (Symposium on Operating Systems Principles), one of the top academic conferences in computer systems, is in full swing in Shanghai at this very moment. Having a paper accepted at SOSP carries influence that goes without saying. Not long ago, a paper on in-memory key-value stores by Li Bojie, a joint PhD student of Microsoft Research Asia and the University of Science and Technology of China, was accepted by the conference. For most people outside the computer industry, "in-memory key-value store" is a blind spot on the map of knowledge, a deep sea for the ship of curiosity. But for Li Bojie, born in 1992, it has become part of his life. The story of his growth can begin with this term, so unfamiliar to us and so familiar to him.
2017-09-02
Driven by the explosive demand for computing power and the slowdown of Moore's law, cloud providers have started to deploy FPGAs in datacenters for workload offloading and acceleration. In this paper, we propose an operating system for FPGAs, called Feniks, to facilitate large-scale FPGA deployment in datacenters.
Feniks provides an abstracted interface for FPGA accelerators, so that FPGA developers are insulated from underlying hardware details. In addition, Feniks provides (1) a development and runtime environment for accelerators to share an FPGA chip efficiently; (2) direct access to server resources, such as storage and coprocessors, over the PCIe bus; and (3) a datacenter-wide FPGA resource allocation framework.
We implemented an initial prototype of Feniks on the Catapult shell and an Altera Stratix V FPGA. Our experiments show that device-to-device communication over PCIe is feasible and efficient. A case study shows that multiple accelerators can share an FPGA chip independently and efficiently.
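As one way to picture what an "abstracted interface for FPGA accelerators" could look like from the host side, here is a minimal sketch: an OS-style runtime dispatches work through a uniform operations table, so it can manage several accelerators without knowing their hardware details. The `accelerator_ops` table and the toy checksum accelerator are hypothetical, not Feniks's actual API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of an OS-style accelerator interface: hardware
 * details are hidden behind a uniform operations table. */
struct accelerator_ops {
    const char *name;
    int (*process)(const uint8_t *in, size_t len, uint64_t *out);
};

/* Toy "accelerator": sums input bytes, a stand-in for real offloaded logic
 * that would run on the FPGA behind this interface. */
static int checksum_process(const uint8_t *in, size_t len, uint64_t *out) {
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += in[i];
    *out = sum;
    return 0;
}

static const struct accelerator_ops checksum_accel = {
    .name = "checksum",
    .process = checksum_process,
};

/* The runtime dispatches a request through the ops table, without
 * knowing anything about the accelerator's implementation. */
int run_accel(const struct accelerator_ops *ops,
              const uint8_t *in, size_t len, uint64_t *out) {
    return ops->process(in, len, out);
}
```

Because every accelerator presents the same table, a shared runtime can schedule several of them on one chip or across a datacenter behind the same call.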
2017-08-04
(Reposted from Microsoft Research Asia)
On June 19-20, 2017, the open-source technology event LinuxCon + ContainerCon + CloudOpen (LC3) was held in China for the first time. The two-day program was packed: 17 keynotes, 88 technical talks across 8 tracks, plus technology exhibitions and hands-on labs from Microsoft and other companies. LinuxCon attracted many international and domestic internet giants, telecom giants, and over a thousand industry attendees, including Linux creator Linus Torvalds.
2017-08-03
The First Asia-Pacific Workshop on Networking (APNet’17) Invited Talk:
Implementing ClickNP: Highly Flexible and High-Performance Network Processing with FPGA + CPU
Abstract: ClickNP is a highly flexible and high-performance network processing platform on reconfigurable hardware, published at SIGCOMM'16. This talk shares our experience implementing the ClickNP system, both before and after paper submission. Over 8 months, we developed 100 elements and 5 network functions for the SIGCOMM paper, amounting to 1K commits and 20K lines of code. After the paper submission, ClickNP continued to evolve into a general-purpose FPGA programming framework within our research team, now comprising 300 elements, 86 application projects, and 80K lines of code.
(1) Even with high-level languages, programming FPGAs remains much more challenging than programming CPUs. We had a hard time understanding the behavior and pitfalls of black-box compilers, and we share our findings by enforcing a coding style in the ClickNP language design and providing optimizations in the ClickNP compiler.
(2) The OpenCL host-to-kernel communication model is a poor fit for network processing. This talk elaborates on the internals of the high-performance communication channel between the CPU and the FPGA.
(3) FPGA compilation takes hours, run-time debugging is hard, and simulation is inaccurate. As a case study, we show how we identified and resolved a deadlock bug in the L4 load balancer by leveraging ClickNP's debugging facilities.
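On point (2): a common building block for CPU-FPGA channels is a single-producer, single-consumer ring buffer in host memory that the FPGA reads over PCIe. The sketch below shows only the lock-free index discipline in plain C, with an illustrative ring size; a real implementation also needs DMA, doorbell registers, and memory barriers, all omitted here, and this is not ClickNP's actual channel code.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 256  /* must be a power of two so masking wraps indices */

/* Single-producer, single-consumer ring: one side only advances head,
 * the other only advances tail, so no lock is needed. */
struct ring {
    uint32_t buf[RING_SIZE];
    volatile uint32_t head;  /* next slot the producer writes */
    volatile uint32_t tail;  /* next slot the consumer reads */
};

/* Producer side (e.g. the CPU enqueueing work). Returns 0 when full. */
int ring_push(struct ring *r, uint32_t v) {
    uint32_t next = (r->head + 1) & (RING_SIZE - 1);
    if (next == r->tail)
        return 0;  /* full: one slot is kept empty to tell full from empty */
    r->buf[r->head] = v;
    r->head = next;
    return 1;
}

/* Consumer side (e.g. the FPGA polling the ring). Returns 0 when empty. */
int ring_pop(struct ring *r, uint32_t *v) {
    if (r->tail == r->head)
        return 0;  /* empty */
    *v = r->buf[r->tail];
    r->tail = (r->tail + 1) & (RING_SIZE - 1);
    return 1;
}
```

Keeping each index owned by exactly one side is what lets the channel run without host-device synchronization on the fast path.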
2017-08-03
Limited by small on-chip memory, hardware-based transports typically implement a go-back-N loss recovery mechanism, which costs very little memory but is well known to perform poorly even under small packet loss ratios. We present MELO, an efficient selective retransmission mechanism for hardware-based transports that consumes only a small, constant amount of memory regardless of the number of concurrent connections. Specifically, MELO employs an architectural separation between data and metadata storage and uses a shared bits pool allocation mechanism to reduce the on-chip memory footprint of metadata. By adding on average only 23 B of extra on-chip state per connection, MELO achieves up to 14.02x higher throughput and reduces the 99th-percentile FCT by 3.11x compared with go-back-N under certain loss ratios.
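To make the "shared bits pool" idea concrete: instead of statically giving every connection a full selective-ACK bitmap, connections can lease small bitmap blocks from one shared pool only while they are actually recovering from loss. The sketch below is a hedged illustration of that allocation scheme; all sizes and names are made up and do not reflect MELO's actual on-chip layout.

```c
#include <assert.h>
#include <stdint.h>

#define NBLOCKS 64  /* 64 blocks of 32 bits = 2 Kbit shared pool;
                       NBLOCKS must be <= 64 to fit in free_mask */

struct bits_pool {
    uint32_t bitmap[NBLOCKS]; /* per-block SACK-style receive bitmaps */
    uint64_t free_mask;       /* bit i set => block i is free */
};

void pool_init(struct bits_pool *p) {
    p->free_mask = ~0ull;  /* all blocks free */
}

/* Lease one bitmap block; called only when a connection sees loss.
 * Returns the block index, or -1 if the pool is exhausted. */
int pool_alloc(struct bits_pool *p) {
    if (!p->free_mask)
        return -1;
    int i = __builtin_ctzll(p->free_mask);  /* lowest free block */
    p->free_mask &= ~(1ull << i);
    p->bitmap[i] = 0;  /* fresh block: nothing received yet */
    return i;
}

/* Return a block to the pool once the loss episode is over. */
void pool_free(struct bits_pool *p, int i) {
    p->free_mask |= 1ull << i;
}

/* Record that the packet at a given offset within the block's window
 * has been received, enabling selective (not go-back-N) retransmission. */
void mark_received(struct bits_pool *p, int i, int seq_off) {
    p->bitmap[i] |= 1u << seq_off;
}
```

Since only the (few) connections currently experiencing loss hold a block, the pool stays small and constant-sized no matter how many connections exist, which matches the constant-memory property claimed above.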