2023-09-06
The Story of Collecting Large Model Training Corpus

Starting in July, I spent a month on my own collecting over 200 TB of training corpus for large models, spending 200,000 RMB on bandwidth and cloud storage fees. Just like the recently released Mate 60 Pro, it is truly a case of “the apes on both banks cry without pause, yet the light boat has already passed ten thousand mountains”.

What’s in the 200 TB Corpus

  • Z-library e-books, 22.43 million volumes, totaling 31 TB
  • Libgen library e-books, 3.78 million volumes, totaling 33 TB
  • Scimag academic papers and journal articles, 87.6 million items, totaling 77 TB
  • Various Chinese corpora, totaling 4 TB, including:
    • Complete set of primary, middle, and high school textbooks, 35 GB
    • Over 10,000 university textbooks and professional books, 142 GB
    • Collections of dozens of classic newspapers and magazines such as “People’s Daily”, “Reference News”, “Sanlian Life Weekly”, “Global Science”, “Reader”, “China National Geographic”, totaling 1 TB
    • Baidu Encyclopedia with 12 million entries, 20 GB
    • Ancient books and local gazetteers (county annals), 1.6 TB
    • Various recommended book lists, English-Chinese bilingual world classics, translations of Chinese classics, etc., over 20,000 books, about 300 GB
    • Various dictionaries, 100 GB
    • Various Chinese novels, about 100 GB
  • Various datasets:
    • RedPajama dataset, an open-source reproduction of the LLaMA training data, 2.8 TB
    • MNBVC dataset, 1 TB
    • CommonCrawl May–June 2023 WET plain-text data, 8.6 TB compressed (see the parsing sketch after this list)
    • Historical Whois data for almost all domain names worldwide (3 billion entries), 2.5 TB
    • TheStack dataset, source code of well-known open-source projects on GitHub, 3 TB
    • The-Eye dataset, a collection of many AI training datasets, 15 TB
    • AmazonReviews dataset, 55 GB
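
As a side note on the CommonCrawl WET data above: WET files are WARC-format archives whose “conversion” records hold the extracted page text. Here is a minimal Python sketch for reading them with the warcio library; the file name is a placeholder, and language filtering and deduplication are left out.

```python
# Minimal sketch: pull plain text out of a CommonCrawl WET file.
# Assumes the warcio package is installed; the file path is a placeholder.
from warcio.archiveiterator import ArchiveIterator

def iter_wet_texts(path):
    """Yield (target_uri, text) pairs from a .warc.wet.gz file."""
    with open(path, "rb") as stream:          # warcio detects gzip itself
        for record in ArchiveIterator(stream):
            # WET files store extracted page text as "conversion" records.
            if record.rec_type != "conversion":
                continue
            uri = record.rec_headers.get_header("WARC-Target-URI")
            text = record.content_stream().read().decode("utf-8", errors="ignore")
            yield uri, text

if __name__ == "__main__":
    for uri, text in iter_wet_texts("CC-MAIN-2023-23-sample.warc.wet.gz"):
        print(uri, len(text))
```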

Why did I collect so many books? Many of them are PDFs made of scanned images and need OCR before they can serve as training corpus for text models (a minimal OCR sketch follows the two points below). I have two considerations:

  1. Corpus quality matters more than quantity. Baidu Tieba may have more posts than there are books, but Tieba posts can only train a large model into a jokester, not into something that does serious work; to master knowledge, a model still needs to learn systematically from books and the literature.
  2. In the future, multimodal large models will become mainstream. Vision carries a great deal of important information about the human world, and today’s text-only large models train on text alone, which loses much of it. Future multimodal models will be able to learn knowledge that mixes images and text directly from PDF books.
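
As mentioned above, turning scanned PDFs into text requires an OCR step. Here is a minimal sketch of what that step could look like, assuming the pdf2image and pytesseract packages (with the poppler and tesseract binaries and a Simplified Chinese language pack) are installed; the file path and language codes are illustrative.

```python
# Minimal OCR sketch: render a scanned PDF to page images, then OCR each page.
# Assumes pdf2image + poppler and pytesseract + tesseract (with chi_sim data)
# are installed; the input path is a placeholder.
from pdf2image import convert_from_path
import pytesseract

def ocr_pdf(path, lang="chi_sim+eng", dpi=300):
    """Return the OCR'd text of a scanned PDF, one string per page."""
    pages = convert_from_path(path, dpi=dpi)
    return [pytesseract.image_to_string(page, lang=lang) for page in pages]

if __name__ == "__main__":
    for i, text in enumerate(ocr_pdf("some_scanned_book.pdf")):
        print(f"--- page {i + 1} ---")
        print(text[:200])
```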

Whois Domain Registration History Dataset

Today I put one of the more interesting datasets to use. With GPT-4 writing the code for me, I spent 3 hours building a query website for 3 billion historical Whois records of domains worldwide: whois.os.ai.

For example, if you search for Microsoft, you can see that there are actually many microsoft.* domains, and it takes a while to load them all. You can also search for your own domain. Most domains that have existed in history are in this database, and most newly registered domains can be queried in this system the next day.

This dataset originated from my course project for the Advanced Software Engineering course at MSRA in 2013–2014. Back then I built a website, soip.net (its domain registration history can still be found on whois.os.ai), obtained the .com and .net DNS zone files from Verisign (these gTLD zone files can now be obtained through ICANN), slowly crawled the Whois records of those tens of millions of domains (the number of .com domains has since passed 100 million), and also crawled the IP addresses each domain resolved to.
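
For readers curious what “crawling Whois data” looks like at the protocol level: WHOIS (RFC 3912) is just a plain-text query over TCP port 43. Below is a minimal Python sketch; the server and domain are examples, and real bulk collection needs per-registry servers, rate limiting, and response parsing, all omitted here.

```python
# Minimal WHOIS lookup sketch: the WHOIS protocol (RFC 3912) is a plain-text
# query over TCP port 43. Server and domain are examples only; bulk crawling
# needs per-registry servers, rate limiting, and parsing of the responses.
import socket

def whois_query(domain, server="whois.verisign-grs.com", port=43, timeout=10):
    """Send a WHOIS query and return the raw text response."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="ignore")

if __name__ == "__main__":
    print(whois_query("example.com"))
```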

This formed a linked dataset of domains, IPs, and Whois registration records: from an IP you can look up which domains are hosted on that machine, and from registration details you can look up which domains a person has registered. Whois privacy protection was not yet common then, so a registrant’s real name, address, email, and phone number could be found publicly through Whois. There were already companies offering such services at the time, so I built the site only for the course assignment and did not keep operating it.

But I think the history of Whois registration records is quite valuable: like the Internet Archive’s Wayback Machine, it records one facet of Internet history. So I kept maintaining the dataset and later added more gTLD and ccTLD sources. Of course, a hobby project like mine cannot reach 100% coverage, unlike companies such as WhoisXMLAPI that provide Whois history professionally.

Ten years have passed, and the Whois dataset now covers more than 700 million domains and close to 3 billion historical Whois records. Only 200-odd million of those domains are still active; more than 400 million have vanished into the dust of history. Most of these domains were bought by “domain farmers” (domain investors) for speculation or collection and were never really used to build websites. Some people without a technical background think that if they just don’t tell anyone about a domain after registering it, no one will know, but that is not how it works: for most top-level domains, the daily increments of registration and DNS data are public, and anyone with the right partnership can obtain them. With a domain dataset like this, you can crawl many websites that search engines never index.

If I had written this query website from scratch, it would have taken at least 2 days. With GPT-4 it took only 3 hours, and the front end is even prettier than anything I could have made myself. The source code of the entire website was essentially written by GPT-4: the front end, the Flask backend, and the script that imports the CSV data into MongoDB (importing the data itself still took a day or two). The front end is a single file and so is the backend, a bit over 500 lines of code in total. Whenever the code had a problem, I had GPT-4 fix it. I was just a product manager stating requirements, without writing a single line of code myself.
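
The site’s actual code is not shown here, but a minimal sketch of what a Flask endpoint querying Whois history out of MongoDB could look like is below; the collection name, field names, and query shape are assumptions for illustration, not the real schema behind whois.os.ai.

```python
# Minimal sketch of a Whois history query API: Flask + MongoDB (pymongo).
# Collection and field names ("whois_history", "domain", "snapshot_date")
# are illustrative assumptions, not the actual schema behind whois.os.ai.
import re

from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")
collection = client["whois"]["whois_history"]

@app.route("/api/whois")
def whois_history():
    domain = request.args.get("domain", "").strip().lower()
    if not domain:
        return jsonify({"error": "missing ?domain= parameter"}), 400
    # Anchored prefix match so "microsoft" also returns microsoft.* domains;
    # with billions of records this needs an index on the "domain" field.
    query = {"domain": {"$regex": "^" + re.escape(domain)}}
    cursor = collection.find(query, {"_id": 0}).sort("snapshot_date", -1).limit(100)
    return jsonify(list(cursor))

if __name__ == "__main__":
    app.run(debug=True)
```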

Data Collection and Purchasing Data

I have also been in contact with some companies that sell data. Cleaned data is actually quite expensive, far more than the cost of collecting it yourself. But some data is hard to crawl on your own: Tianya Forum no longer exists today, it is hard to enumerate every article on WeChat official accounts, and some industry data is simply not public.

However, for websites like Zhihu there is no need to buy data. Zhihu now has hundreds of millions of questions and billions of answers; at data companies’ pricing, who knows how much that would cost. That is why the ability to crawl data yourself is so important.

Data cleaning is also crucial. I have seen large language models whose answers still contain things like “expand all”, “previous page”, and “next page”, which shows the data was never properly cleaned.
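
As a small illustration of the kind of cleaning meant here (a sketch, not the pipeline actually used), the snippet below drops lines that are pure web-navigation boilerplate; the phrase list is an example and would need to be extended per source.

```python
# Minimal cleaning sketch: drop lines that are pure web-navigation boilerplate
# ("expand all", "previous page", "next page", ...). The phrase list is only
# an example; real pipelines need per-source rules and deduplication.
import re

BOILERPLATE = re.compile(
    r"^\s*(展开全部|上一页|下一页|expand all|previous page|next page|"
    r"share|登录|sign in)\s*$",
    re.IGNORECASE,
)

def clean_text(text: str) -> str:
    """Remove boilerplate-only lines and collapse repeated blank lines."""
    lines = [ln for ln in text.splitlines() if not BOILERPLATE.match(ln)]
    cleaned = "\n".join(lines)
    return re.sub(r"\n{3,}", "\n\n", cleaned).strip()

if __name__ == "__main__":
    sample = "正文第一段\n展开全部\n正文第二段\n下一页\n"
    print(clean_text(sample))
```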

I just used my spare time to do some preliminary data collection and cleaning, and I will share any new progress with everyone in the future.

Read More

2023-08-27
10 Soul-Searching Questions for AI Large Model Startups

  1. To build or not to build a foundational large model?
  2. To B or to C? Domestic or overseas?
  3. RMB capital or USD capital?
  4. Is AI Native application a mobile internet-level opportunity?
  5. Is your vision AGI?
  6. Can the problem of large models talking nonsense be solved?
  7. How does large model infra make money?
  8. Where is your moat?
  9. Can your business model scale?
  10. How to deal with the regulation and legal responsibility of large models?

Below are my views on these 10 soul-searching questions.

Read More

2023-08-24
Tsinghua's Link Genius Boy: When Top Workers Start Their Own Business

Original video by Bilibili uploader “Bao Bao Ba 2022”

Backup of the video on this site (25:58, 121 MB)

The following is a transcript produced by AI speech recognition:

Read More

2023-08-17
Speeches at Our Wedding by Various Guests

May 1, 2023, Shijiazhuang

  • Speech by Tan Bo
  • Speech by Mentor Lin Tao
  • Speech by Professor Tan Haisheng
  • Wedding vows of the groom, Li Bojie
  • Wedding vows of the bride, Meng Jiaying
  • Speech by the father of the groom
  • Speech by the father of the bride
  • Speech by the parents of the bride at the change-of-address (gaikou) ceremony
  • Speech by the bride at the change-of-address (gaikou) ceremony
  • Speech by the parents of the groom at the change-of-address (gaikou) ceremony
Read More

2023-08-15
Our Wedding Videos and Photos

May 1, 2023, Shijiazhuang

Photos

Click here to view the online album of wedding photos (110 edited photos)

Trailer

(00:31, 73 MB, 19 Mbps)

Highlight Edit

(04:47, 216 MB, 6 Mbps)

Full Documentary

(01:30:24, 3.35 GB, 5 Mbps)

Read More

2023-08-13
Five Years of PhD at MSRA (Part 3): Underground Mining Server Room and Digital Ex Project

The third in the “Five Years of PhD at MSRA” series, to be continued…

Underground Mining Server Room

In the basement of an ordinary residential building in Wanliu, Beijing, behind a heavy air-raid-shelter iron door and down a corridor so dark you cannot see your hand in front of your face without the lights on, lies my underground mining room.

In the basement next door live many workers scraping by in Beijing; the smallest room there costs only a thousand yuan a month. More than a dozen strangers share one bathroom and one washroom, and the communal sinks and washing machines are all rusty. At the end of the corridor is a 30-square-meter hall with a ventilation opening that lets in a little light from the outside world. I rented this hall and a small room next to it as my mining server room.

I built the infrastructure of this underground mining room myself. It ran six water-cooled 1080 Ti mining rigs, oil-cooled rigs, several 6-card 1060 rigs, several 9-card dedicated mining rigs, and various ASIC miners for Bitcoin and Litecoin, worth 300,000 RMB in total, and it also hosted my most covert personal project: the Digital Ex Project.

Read More

2023-08-13
Preview of AI Operating System os.ai

The concept of an AI operating system has been proposed by many people. Traditional AI operating systems may be more about infrastructure (infra), essentially managing hardware; the AI operating system we propose is about managing large models.

Today, I registered the domain os.ai, temporarily put up a placeholder webpage, briefly introducing the AI operating system we are building.

The AI operating system is a bridge between large language models and applications. Our professional team is committed to providing low-cost solutions, building highly predictable and controllable generative AI infrastructure, supporting the generation of text, images, videos, 3D metaverses, and generative agents.

Why do we need an AI operating system? The current large models face many challenges in terms of cost, predictability, multimodality, evaluation testing, etc. We believe that not only improvements in the model itself are needed, but more importantly, it needs to be closely co-designed with data and systems.

Low Cost

Currently, it costs $10 to use GPT-4 to read a paper, and $95 to generate a 7.5-minute video with Runway ML.

As experts in AI infrastructure, we provide low-cost generative AI services by building our own state-of-the-art AI data center composed of GPUs, and co-optimizing models, data, and underlying hardware architecture.

Predictability

  • Reduce hallucinations at the model level
  • Sandbox
  • System/user permission isolation (to avoid command injection)
  • Fact-checking
  • Reliable execution of long process tasks
  • Integration of industry private datasets and databases

Multimodality

Low-cost pipelines for creating text, images, 3D metaverses, and personalized generative assistants, with highly controllable generation details.

  • Text → Image/Video/3D Model
  • Text + Image → Image/Video/3D Model
  • Text + Video → Video/3D Model
  • Text/Image/Video → Personalized Generative Assistant

Model Evaluation

Automatically evaluate, test, and select large language models at high throughput in an open environment, enabling a marketplace of large language models and a metaverse built from generative agents.

The AI operating system is still only a preliminary concept, and many of its technologies are still under research. Follow os.ai, and let’s look forward to the arrival of the large-model AI operating system.

Read More

2023-08-07
How to Prevent Screen Photography, File Uploads, and Other Leaks with Technical Measures

(This article was first published on Zhihu)

Companies dealing with confidential information usually divide their areas into low, medium, and high confidentiality zones:

  • Low confidentiality zone: For image streams, video streams, and information streams, it has a certain leak detection and traceability capability;
  • Medium confidentiality zone: For image streams, video streams, and information streams, it has a certain ability to prevent leaks in advance and detect them, and a strong ability to trace leaks afterwards;
  • High confidentiality zone: For image streams, video streams, and information streams, it has a strong ability to prevent leaks in advance.

The high confidentiality zone is the simplest: it is physically isolated, with security screening at the entrance, and electronic devices such as mobile phones and USB drives may not be brought in.

The medium and low confidentiality zones are harder, because the office computers inside can access the Internet and mobile phones can be brought into the office. The discussion below covers how to maintain information security along the dimensions of leak prevention, leak detection, and leak tracing. Leak prevention means stopping data from leaking out in the first place; leak detection means discovering and reporting when a leak may be happening; leak tracing means identifying who leaked the data after a leak has already occurred.

Read More

2023-08-05
Should AI Clusters Use RoCEv2 or Infiniband?

(This article was first published on Zhihu)

Most major internet companies are deploying RDMA technology, with the main scenarios being storage and AI/HPC, divided into two technical routes, RoCEv2 and Infiniband.

RoCEv2 is RDMA over Ethernet: it runs the RDMA protocol on top of a traditional data center Ethernet network. Infiniband (IB) has an even longer history, and HPC (high-performance computing) clusters have been built on IB since the early 2000s.

The current leader in RDMA network cards is Mellanox, which has been acquired by NVIDIA. You could say RoCEv2 is the community edition of RDMA and Infiniband is the enterprise edition. The advantage of the community edition is openness: there is a lot to configure, but that is also its disadvantage, because only network experts can really handle it. Moreover, a large-scale RoCEv2 cluster is not something a single network expert can manage alone; it takes a team to deal with PFC storms and all sorts of strange NIC and switch problems. Of course, with only a few machines, one switch, and NICs of the same model, a small RoCEv2 cluster will basically run into no problems.

The RDMA circle is very small, and basically everyone in it has some academic background. If you have never heard of the problems above, it is better to just use IB: spend a little more money and keep things simple. I have heard of AI companies that think buying A100/H100 cards is all it takes; they cannot even tell the SXM version from the PCIe version, do not know they need IB NICs and switches for large-scale training, and assume an ordinary 10G network will do. Such companies are best off letting an AI cluster solution vendor match the IB NICs, switches, and network topology for them. Do not show off, and do not try to save money by dabbling in RoCEv2.

Most of OpenAI’s GPU clusters currently use Infiniband, and some small and medium AI companies also use IB. Most newly built GPU clusters at large companies use RoCEv2, because these companies need to support scales of tens of thousands of cards, which IB cannot scale up to, and at that scale cost matters a great deal. Some of them have even started building their own NICs. Another reason is that large companies have professional network teams, and a closed system like IB leaves them little to optimize; otherwise how would those network experts tune performance and write their slide decks?

Read More

2023-08-05
Is Cache Coherency Necessary for Load/Store?

(This article was first published on Zhihu)

Cache Coherency (CC) can be divided into two scenarios:

  1. CC between the CPU and device within the host
  2. CC across hosts

CC between the CPU and device within the host

I believe that CC between the CPU and devices within the host is very necessary. When I was interning at Microsoft in 2017, I used an FPGA to expose a block of memory through a PCIe BAR. A Linux system could run out of this BAR memory, but a boot that should have taken only 3 seconds took 30 minutes, 600 times slower than host memory. This is because PCIe does not support CC: the CPU’s direct accesses to device memory can only be uncacheable, and every memory access has to cross PCIe to the FPGA, which is extremely inefficient.

Therefore, today’s PCIe BAR space is only used by the CPU to issue MMIO commands to the device, and bulk data transfer must go through device-initiated DMA. That is why both NVMe disks and RDMA NICs follow the complicated doorbell → WQE/command → DMA flow, as shown in the figure below.

Read More