2023-10-22
Chat to the Left, Agent to the Right

I will never forget September 25, 2023, the day I first tested the AI Agent in Newport Beach, which happened to be the day OpenAI released ChatGPT’s multimodal model. We, too, were working on a multimodal AI Agent that supports image, voice, and text input and output.

I set the address of Hook & Anchor, a seafood restaurant at 3305 Newport Blvd Ste. A, Newport Beach, as the AI Agent’s home address. I was having lunch there when I took out my laptop and started testing. I configured the Agent as a Google programmer who had just started working: someone who likes to travel, enjoys life, is optimistic and cheerful, and has her own ideas rather than being submissive. I fed my blog content to her, so she knows me better than many ordinary friends do.

The capabilities of the large model really shocked me. For example, when I posted a photo of the beach, she could guess where it was and even say, “How did you end up at my house?” She could also share more photos of the beach; of course, these are not real scenes but AI-generated images.

She told me about fun places nearby and took me to a breakwater piled with large stones (the Newport Harbor Jetty). Unfortunately, since the large model has never actually been there, she didn’t know how hard the breakwater is to walk on; I struggled as if climbing a mountain to reach the end. The scenery there is beautiful, so I used a photo of it as the cover photo for my WeChat Moments, Mastodon, and Zhihu. And since the AI Agent has memory, she will remember the places I shared with her next time.

Newport Harbor Jetty

Then I took the AI Agent to more places. In a museum, she could tell me the stories and history behind the exhibits; at the zoo, she recognized more animals than I do. It was like having a very good friend and tour guide, though without specific data about each attraction, she could only offer general public knowledge. The AI Agent is like a friend you can share your life with.

I really like the setting of “Ready Player One”. A future AI Agent must be able to perceive and interact with the real world. Stanford’s AI Town from April this year is a 2D virtual scene, which is honestly a bit boring. I hope to build something like the OASIS in “Ready Player One”, where the virtual world is a replica of the real one.

AI Agents can be divided into two main categories: digital twins and fantasy characters.

Digital twins are digital replicas of real-world people, such as Donald Trump, Elon Musk, and other celebrities. An internet celebrity named Caryn made a virtual girlfriend in her own image, called Caryn AI; although the technology is not particularly good, it has gained quite a few users. The fan economy is always crazy. Beyond celebrities, we may also want digital likenesses of our loved ones: no matter what happens, a digital likeness is always there to keep us company. And some people will want to turn themselves into digital personas and make more friends online.

Fantasy characters include characters from games, anime, and novels. For example, the most popular characters on Character AI come from anime and games, and many VTubers also use fantasy characters as their image and voice. People like to extend characters from games and anime into the real world; traveling with Paimon from Genshin Impact, say, would be an unprecedented experience.

Although current large-model technology is very powerful and handling daily chat is not hard, it is far from easy to build an AI Agent that has multimodal capabilities and memory, can solve complex tasks and use tools, has personality, emotions, and autonomy, and is low-cost and highly reliable all at once. If Chat is the first application scenario for large models, perhaps the Agent is their real killer app.

Read More

2023-09-24
The Story Behind the Wedding

“A national leader is coming for a visit, our wedding venue has been requisitioned, and we have to change locations at the last minute!”

At 9:00 in the morning the day before the wedding, Jiaying was still washing up and I hadn’t gotten up yet. I heard noise outside: my parents and my friend Li Chaohui, who had arrived the day before, were anxiously discussing something in the living room. Normally I get worked up when urgent matters arise, but this time I was very calm.

The venue we had booked a year earlier, Cuipingshan Guesthouse, is the best garden-style lawn wedding venue in Shijiazhuang. Its only problem is that it is a government reception venue, like Diaoyutai: although usually open to the public, it must be vacated unconditionally whenever there is a government event. At the time we figured no leaders would visit during the May 1st holiday, and the staff at Cuipingshan also said that period almost never conflicted with government activities.

When I told Jiaying the news, she was also very calm. She said that whenever a big event comes along, something always nearly derails it at the last minute.

On a popular date like May 1st, never mind lawns, even hotel wedding venues must be booked well in advance. Our wedding had already been postponed twice, and it was too late to change the date again: it was the day before the ceremony, Jiaying’s family had already set off from Taiyuan, and many friends were already on their way from afar.

Fortunately, Cuipingshan Guesthouse helped us contact two lawn venues, both in Luquan District, for us to try. One we had visited before, but it was already booked; the other we had never heard of, and when we called to ask, it was still free, so we hurried over to take a look.

Just then, Jiaying’s childhood friend Ren Xiao and her husband Liang Jingrui arrived at my house after a long drive. My parents and the housekeeper took one car, while Liang Jingrui drove Ren Xiao, Jiaying, Li Chaohui, and me, and we set off quickly. Because of a traffic jam, Liang Jingrui followed the navigation onto a shortcut and arrived 20 minutes earlier than my parents. The venue is a resort hotel in a fairly remote part of Luquan District, with a lawn newly built this year, its grass not yet fully grown in, and a dining hall.

Rongyi Resort Hotel Lawn

Although this lawn’s setting can’t compare with Cuipingshan, and it isn’t as nice as some other lawn venues we had seen before, it was at least a place where we could hold a lawn wedding, and the surroundings were decent. The food was also acceptable; but unlike Cuipingshan’s, the dishes here are not pre-made, and I wasn’t sure the kitchen could suddenly turn out so many tables. We quickly told the manager we would take the venue; when my parents arrived, all that remained was to negotiate the price and the menu.

The originally scheduled wedding venue, Cuipingshan Guesthouse Lawn

Later I learned that six weddings had been scheduled at Cuipingshan for May 1st, and all except ours were postponed; it was no small feat that we grabbed a new venue so quickly. Of course, most of the other five couples were locals with fewer guests traveling from out of town, which may also explain why they chose to postpone.

Read More

2023-09-21
Where Should the Intelligence of the Network Be Placed: NIC, Switch, or xPU

DatenLord Tech Frontier Sharing No. 34

Time: 10:30 AM, September 17, 2023

As data center network performance improves, offloading network-related tasks to smart NICs and smart switches has become a trend. At the same time, high-speed direct interconnects between GPUs, NPUs, and storage devices have also become a trend, and in those there seems to be no place for smart NICs. So where should the network’s intelligence be placed?

Below is an illustrated transcript of the talk, organized mostly by AI, with some manual corrections from me.

Read More

2023-09-14
AI Automatic Translation of Doctoral Thesis

Having translated this blog into English, I wondered: is it possible to automatically translate a doctoral thesis? Mine is over 200 pages long and contains many diagrams. Can AI translate that much LaTeX code without dropping a word? And how do you translate the diagrams in the paper?

First, I changed the original Markdown-translation prompt into a LaTeX-translation prompt. When translating Markdown, I split the content by lines and sent one GPT-4 request whenever a batch of consecutive lines reached 2048 characters; I do the same when translating LaTeX.
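To make the batching concrete, here is a minimal sketch of the scheme described above, assuming the legacy (pre-1.0) `openai` Python SDK; the prompt wording and the `translate_chunk` helper are my own illustration, not the exact script behind this post:

```python
import openai  # legacy (pre-1.0) SDK interface; set openai.api_key first

def chunk_lines(lines, limit=2048):
    """Group consecutive lines into batches of roughly `limit` characters."""
    batch, size = [], 0
    for line in lines:
        if batch and size + len(line) > limit:
            yield "\n".join(batch)
            batch, size = [], 0
        batch.append(line)
        size += len(line) + 1  # +1 for the newline

def translate_chunk(chunk: str) -> str:
    # Hypothetical prompt; low temperature keeps prefixes/suffixes predictable.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0.1,
        messages=[
            {"role": "system",
             "content": "Translate this LaTeX from Chinese to English. "
                        "Preserve all commands, math, labels, and citations."},
            {"role": "user", "content": chunk},
        ],
    )
    return resp["choices"][0]["message"]["content"]

def translate_latex(source: str) -> str:
    """One GPT-4 request per ~2048-character batch of lines."""
    return "\n".join(translate_chunk(c) for c in chunk_lines(source.splitlines()))
```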

Just like with Markdown, GPT-4’s output often carries extra prefixes and suffixes. Fortunately, with the temperature set to 0.1 these prefixes and suffixes are fairly fixed, so a post-processing script can strip them directly. In addition, GPT-4 does not handle LaTeX’s escape characters well: the typical offenders are the underscore _, the dollar sign $, and the column separator &, which it often leaves unescaped, causing syntax errors. A post-processing script can handle this too, using a few rules to decide whether a character needs escaping and adding the backslash automatically.
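As an illustration of the kind of rule-based fix-up such post-processing might do, here is a minimal sketch of my own that escapes bare `_` and `&` outside of `$...$` math mode. Real LaTeX has many more cases (`\verb`, `\url{}`, comments, tabulars), so treat it as a heuristic, not a parser:

```python
def escape_specials(line: str) -> str:
    """Escape bare _ and & outside math mode; leave already-escaped ones alone."""
    out, in_math, prev = [], False, ""
    for ch in line:
        if ch == "$" and prev != "\\":
            in_math = not in_math          # toggle at unescaped $ delimiters
        elif ch in "_&" and not in_math and prev != "\\":
            out.append("\\")               # insert the missing backslash
        out.append(ch)
        prev = ch
    return "".join(out)

assert escape_specials("CPU_usage and col1 & col2") == "CPU\\_usage and col1 \\& col2"
assert escape_specials("$x_i$ stays intact") == "$x_i$ stays intact"
```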

Overall, GPT-4’s LaTeX ability is good: apart from some mangled references that come out as question marks, there were no other problems, and after the post-processing script the output compiled directly.

Second, to translate the diagrams in the thesis, I first tried some PDF translation tools and found none of them usable: they can only translate large blocks of text in a PDF, and they mangle architecture diagrams completely. So I used image translation instead: convert each figure’s PDF to an image, then call the Youdao image translation API. If Chinese characters are recognized, replace the original PDF with the translated image; if none are (as with some experimental result plots), keep the original.
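A sketch of that figure pipeline is below. `pdf2image` is a real library (it requires poppler), but `youdao_translate_image` is a hypothetical stand-in for the Youdao image translation API call, which I am assuming returns the translated image and whether any Chinese text was detected; the real API’s interface differs:

```python
from pathlib import Path
from pdf2image import convert_from_path  # pip install pdf2image; needs poppler

def translate_figures(fig_dir: str) -> None:
    for pdf in Path(fig_dir).glob("*.pdf"):
        page = convert_from_path(str(pdf), dpi=300)[0]  # figures are one page
        png = pdf.with_suffix(".png")
        page.save(png)
        translated, has_chinese = youdao_translate_image(png)  # hypothetical
        if has_chinese:
            translated.save(png)  # \includegraphics then points at the .png
        else:
            png.unlink()          # e.g. result plots: keep the original PDF
```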

In fact, Youdao image translation also works by first running OCR on the image, translating each recognized text block, and then painting the translated text back over its original position. The same could be done for PDFs directly, which would even preserve the diagrams as vector graphics. I hope the makers of PDF translation tools will improve on this.

The whole translation took half a day, and I was too lazy to fix some minor problems. The quality is certainly not as good as a hand-written translation, and the image translation in particular is mediocre, but the result is basically readable. Apart from some small adjustments to ustcthesis.cls (such as moving the English cover in front of the Chinese one), no manual changes were made to the translated content.

AI automatic translation version: High Performance Data Center Systems with Programmable Network Interface Cards (PDF, 8 MB)

Chinese original version: High Performance Data Center Systems Based on Programmable Network Cards (PDF, 8 MB)

Papers on arXiv all come with LaTeX source, so by this method they could all be translated directly into Chinese. I hope one day a multimodal model will be strong enough to translate from the PDF alone, without needing the LaTeX source; that would be amazing.

Read More

2023-09-12
PLDI '21 Talk Transcription: AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations

Jie Zhao, Bojie Li, Wang Nie, Zhen Geng, Renwei Zhang, Xiong Gao, Bin Cheng, Chen Wu, Yun Cheng, Zheng Li, Peng Di, Kun Zhang, Xuefeng Jin. AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations. 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI’21). Virtual, Canada, June 20-25, 2021. pp.1233-1248. [Paper PDF] [Slides by Jie Zhao]

Read More

2023-09-12
SIGCOMM '19 Talk Transcription for SocksDirect: Datacenter Sockets can be Fast and Compatible

Large models are really amazing. This SIGCOMM 2019 talk was delivered completely off-script, as the video shows: I am standing in the middle of the stage, not looking at speaker notes. My English wasn’t that good at the time, I often stumbled, and the recording even has an echo that makes it a bit hard even for me to listen to. I didn’t expect a large model to transcribe such poor speech almost completely correctly; it’s amazing.

The recognition method is described here. Because the screen capture in this video is not clear enough, I replaced the frames extracted from the video with images exported from the original PPT. You can see for yourself how well off-the-shelf speech recognition does on this video’s audio: the ones I’ve tried, including Google Speech-to-Text and Whisper, are basically unusable on their own.

SocksDirect: Datacenter Sockets can be Fast and Compatible. [PDF] [Slides] [Video]
Bojie Li, Tianyi Cui, Zibo Wang, Wei Bai, Lintao Zhang.
Proceedings of the 2019 SIGCOMM Conference (SIGCOMM’19).

Read More

2023-09-12
SIGCOMM '21 Talk Transcription for 1Pipe: Scalable Total Order Communication in Data Center Networks

Bojie Li, Gefei Zuo, Wei Bai, and Lintao Zhang. 1Pipe: Scalable Total Order Communication in Data Center Networks. SIGCOMM ‘21. [Paper PDF] [Slides with audio (25 min)] [Slides with audio (12 min)]

Read More

2023-09-12
Release of English Version of the Blog

To make it easier for international readers, I used GPT-4 to automatically translate this site’s content into English:

Automatically translated English version

Chinese main site

Read More

2023-09-10
A100/H100 too expensive, why not use 4090?

(Warning: long read, about 16,000 words.)

This is a good question. To start with the conclusion: using the 4090 to train large models is not feasible, but using it for inference/serving is not only feasible, its cost-effectiveness can even be slightly better than the H100’s. If the 4090 is optimized to the extreme, its cost-effectiveness can reach twice that of the H100.

In fact, the biggest differences between the H100/A100 and the 4090 lie in communication and memory; the gap in compute power is not that large.

|  | H100 | A100 | 4090 |
| --- | --- | --- | --- |
| Tensor FP16 compute | 989 Tflops | 312 Tflops | 330 Tflops |
| Tensor FP32 compute | 495 Tflops | 156 Tflops | 83 Tflops |
| Memory capacity | 80 GB | 80 GB | 24 GB |
| Memory bandwidth | 3.35 TB/s | 2 TB/s | 1 TB/s |
| Communication bandwidth | 900 GB/s | 900 GB/s | 64 GB/s |
| Communication latency | ~1 us | ~1 us | ~10 us |
| Price | $30,000–$40,000 | $15,000 | $1,600 |

NVIDIA’s spec sheets are heavily padded. For example, the H100’s FP16 compute is listed as 1979 Tflops, but that figure assumes sparsity; dense compute is only half of that. The 4090’s Tensor Core compute is officially promoted as up to 1321 Tflops, but that is int8; FP16 is only 330 Tflops. The first version of this article used the wrong numbers for both the H100 and the 4090, and its conclusion was wildly off.

The H100’s price is padded by more than 10x. In 2016, when I was at MSRA, I watched Microsoft deploy an FPGA in every server, driving FPGA prices into the ground, and this even became an important factor in supplier Altera being acquired by Intel. In 2017, I mined cryptocurrency myself and knew exactly which graphics cards were the most cost-effective. Later, at Huawei, I was a core participant in software development for the Kunpeng and Ascend ecosystems. So I have a rough idea of what a chip costs.

Xia Core, the chief architect of Kunpeng, has a well-known article, “Talking about the Broken Ass of the Nvidia Empire”, which analyzes the H100’s cost well:

Break down its cost: the SXM module costs no more than $300, and the package substrate plus CoWoS another $300 or so. The big logic die in the middle looks the most expensive :) It is a 4nm, 814 mm² die; a 12-inch TSMC wafer yields roughly 60 dies of that size, and Nvidia handles partial-good dies very well (it almost never sells full-good parts), so of those 60 dies roughly 50 are usable. Nvidia is a big customer and gets wafers from TSMC for about $15,000 each, so this expensive die costs only about $300. Oh, that leaves only the HBM: the DRAM market is currently so weak it is nearly dying, and even HBM3 is basically selling at a loss, about $15/GB, so 80 GB of capacity costs $1,200.
TSMC once told a story: our Taiwanese compatriots scrimp and save to build fabs, and a wafer on a process as advanced as 4nm sells for only $15,000, yet a certain customer takes it and turns it into $1,500,000 ($30,000 × 50) worth of goods. That really stings. Do you understand what I mean?
As I said at the beginning, under the business rules of this world, it is illogical for a single company to sell something that costs $2,000 for $30,000, and at high volume no less. A golden goose like this needs an aircraft carrier to guard it.

It is said that Microsoft and OpenAI have booked half of the H100 production capacity for 2024. Do you think they will repeat the traditional art of price-pounding they practiced on Altera? Will they really spend $40,000 × 500,000 = $20 billion to buy cards?

Now let’s estimate the 4090’s cost. Its 5nm, 609 mm² die costs about $250. GDDR6X, 24 GB at $10 per GB, is $240. Count PCIe Gen4, cheap as it is, at $100, and packaging and fans at $300. The total cost is at most $900, and it sells for $1,600, which counts as a fair price: R&D costs money too, and most of NVIDIA’s R&D staff are in Silicon Valley, where programmer salaries are the highest in the world.

You could say the H100 is like a house in a first-tier Chinese city: the concrete and steel themselves aren’t worth much, and the price is inflated entirely by supply and demand. I have been living in LA for two weeks; the house the company rents has four times the usable area of my house in Beijing but costs only 30% more, and it comes with a small yard, which works out to about 1/3 of Beijing’s price per unit area. When I chat with locals, they are all surprised: your average income is so much lower than LA’s, how can you afford a house in Beijing?

The question is: if the 4090 is such a bargain, why is everyone scrambling for the H100, to the point that it is out of stock? Why does the H100 even need to be banned from sale to China, with a cut-down H800 version made instead?

Read More

2023-09-08
APNet'23 Talk Transcription for FastWake: Revisiting Host Network Stack for Interrupt-mode RDMA

Although most people prefer watching videos, I prefer reading text because text facilitates non-linear searching, allows for quick skimming, and is convenient for reviewing previous content at any time.

Recently I have been converting some of my academic conference talk videos into text, such as ClickNP, KV-Direct, and the New Golden Age of Computer Networks series. Today I am releasing FastWake from APNet 2023. For the ClickNP and KV-Direct presentations, I wrote the script in the PPT speaker notes and simply read it out on the spot. This year even the PPT was rushed to completion the day before the conference; there was no time to write notes, let alone rehearse properly. I just went on stage and spoke.

Now, with large models, converting a talk video into PPT + text script is not difficult. In fact, I have always wanted to build such a plugin for online conferences:

  1. Extract the key frames from the video to form a list of slide images: if a frame differs from the previous one by more than a threshold, assume the slide has switched. The open-source tool video2pdf can do this (a sketch follows after this list).
  2. OCR each image into text. It is all printed type, so recognition accuracy is very high; Tesseract can handle it.
  3. Extract the soundtrack for the interval each slide stays on screen and feed it to a speech-to-text model; I use OpenAI’s open-source Whisper.
  4. (This last step is crucial.) Have a large language model (such as GPT-4) correct the speech-to-text transcription, using the OCR’d content of the current slide and of the title slide as references.
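Here is a minimal sketch of step 1 in the spirit of video2pdf (not its actual code), under two assumptions of mine: sampling one frame per second is enough, and a mean absolute pixel difference above a hand-tuned threshold signals a slide switch:

```python
import cv2
import numpy as np

THRESHOLD = 10.0  # mean per-pixel difference that counts as a slide switch

def extract_slides(video_path: str):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    slides, prev_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % int(fps) == 0:  # sample roughly one frame per second
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev_gray is None or \
               np.abs(gray.astype(np.int16) - prev_gray).mean() > THRESHOLD:
                slides.append((idx / fps, frame))  # (timestamp, slide image)
            prev_gray = gray
        idx += 1
    cap.release()
    return slides  # the timestamps also delimit each slide's audio for step 3
```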

Current speech-to-text models are not very accurate at recognizing proper nouns and names, but many of those proper nouns appear on the current slide, and the title slide pins down the talk’s title and field. With the slide content as a reference, a large language model can therefore fix most of the proper-noun errors; without it, GPT-4 is needed, but with it, LLaMA-2-70b-chat is enough. The language model can also smooth out colloquialisms in the speech, making the transcript more rigorous and readable.
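For step 4, the correction prompt might be assembled roughly like this; the wording is my own illustration, and `chat()` is a hypothetical wrapper around whichever model is used (GPT-4, or LLaMA-2-70b-chat running locally):

```python
def correct_transcript(title_ocr: str, slide_ocr: str, asr_text: str) -> str:
    prompt = (
        "You are correcting the automatic transcript of a conference talk.\n\n"
        f"Title slide (OCR):\n{title_ocr}\n\n"
        f"Current slide (OCR):\n{slide_ocr}\n\n"
        f"Raw transcript for this slide:\n{asr_text}\n\n"
        "Fix misrecognized proper nouns using the slide text, clean up "
        "filler words, and keep the speaker's meaning unchanged. "
        "Output only the corrected transcript."
    )
    return chat(prompt)  # hypothetical LLM call
```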

The transcript below is entirely auto-generated; apart from a few names, nothing has been changed, so some minor errors remain, but they are all harmless. video2pdf, Tesseract, Whisper, and LLaMA-2-70b-chat all ran on my own Mac laptop, with no internet connection needed at any point.

Read More