Add details about v1.1
README.md CHANGED
@@ -10,6 +10,50 @@ size_categories:
# TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
<center><img src="llm360_logo(1).png" alt="k2 eval table" /></center>

## Changelog
| Version | Details |
|---------|---------|
| v1.1 | Added new data sources: TxT360_BestOfWeb, TxT360_QA, europarl-aligned, and wikipedia_extended. |

## Details of v1.1 Additions
- **TxT360_BestOfWeb**: This is a filtered version of the TxT360 dataset, created using the [ProX document filtering model](https://huggingface.co/gair-prox/web-doc-refining-lm). The model is similar to the [FineWeb-Edu classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier), but also assigns an additional format score that considers how a document is formatted.
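  For illustration, here is a minimal sketch of this kind of score-based document filtering. It uses the FineWeb-Edu classifier mentioned above; the actual ProX-based pipeline, its extra format score, and the cutoff value below are assumptions, not the released recipe.

```python
# Sketch of score-based document filtering in the spirit of TxT360_BestOfWeb.
# Uses the FineWeb-Edu classifier for illustration; the ProX format score and
# the 3.0 threshold below are assumptions, not the released pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

def quality_score(text: str) -> float:
    """Return a scalar quality score for one document."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze(-1).float().item()

def keep_document(doc: dict, threshold: float = 3.0) -> bool:
    """Keep a TxT360-style record {"text": ..., "meta": ...} if it scores above the cutoff."""
    return quality_score(doc["text"]) >= threshold
```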
- **TxT360_QA**: Synthetic QA pairs generated for each document using Mistral-7B-Instruct-v0.3. QA pairs are appended to the end of every document in the format:

```json
{
  "text": "{ORIGINAL_DOCUMENT_TEXT}\n\nQ: {QUESTION_1}\nA: {ANSWER_1}\n\nQ: {QUESTION_2}\nA: {ANSWER_2}......{ANSWER_N}\n",
  "meta": {original TxT360 meta}
}
```

  The number of QA pairs may differ for each document, providing diverse question-answering supervision.
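  As a rough sketch of how a record in this format is assembled (the helper name and example pairs below are illustrative; in the real pipeline the pairs come from Mistral-7B-Instruct-v0.3):

```python
# Sketch: append synthetic QA pairs to a TxT360 record in the format shown above.
# The example pairs are placeholders, not generated data.
def append_qa_pairs(record: dict, qa_pairs: list[tuple[str, str]]) -> dict:
    qa_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return {
        "text": f"{record['text']}\n\n{qa_block}\n",
        "meta": record["meta"],  # original TxT360 meta is carried over unchanged
    }

example = append_qa_pairs(
    {"text": "The Eiffel Tower is in Paris.", "meta": {"source": "example"}},
    [("Where is the Eiffel Tower?", "In Paris.")],
)
```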
- **europarl-aligned**: Europarl v7 data processed to align English source text with parallel corpora in multiple languages. Each sample concatenates the same content in different languages: the English source text is read, matched against the parallel corpus data, and concatenated with the other languages (in no fixed order) for robust cross-lingual training, e.g.:

```json
{
  "text": "# English\n\n[English content]\n\n# French\n\n[French content]\n\n# Italian\n\n[Italian content]\n...",
  "meta": {
    "language": "fi-de-nl-el-it-fr-en-pt-sv-da-es",
    "src_file": "ep-00-01-17"
  }
}
```
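  A minimal sketch of this concatenation step is shown below; the input structure and language names are assumptions for illustration, not the released processing code.

```python
# Sketch: build one europarl-aligned record by concatenating the same content
# in several languages under "# <Language>" headers, as in the example above.
# The input dict is illustrative; the real pipeline reads Europarl v7 source
# files (e.g. ep-00-01-17) and their parallel corpora.
LANG_NAMES = {"en": "English", "fr": "French", "it": "Italian", "de": "German"}

def build_aligned_record(contents: dict[str, str], src_file: str) -> dict:
    """contents maps a language code to the parallel text for that language."""
    sections = [f"# {LANG_NAMES.get(code, code)}\n\n{text}" for code, text in contents.items()]
    return {
        "text": "\n\n".join(sections),
        "meta": {"language": "-".join(contents.keys()), "src_file": src_file},
    }

record = build_aligned_record(
    {"en": "Resumption of the session.", "fr": "Reprise de la session."},
    src_file="ep-00-01-17",
)
```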
- **wikipedia_extended**: An enhanced version of Wikipedia data that:
  - **Appends the abstracts** of articles linked from the source article's abstract to each Wikipedia document.
  - **Creates a contextually dense document** with interconnected information from related articles.
  - **Enables long-context training** by allowing models to process extended sequences of linked content.
  - **Enhances model ability** to understand topic relationships, maintain coherence in long contexts, and generate accurate responses across topics.

```json
{
  "text": "{ORIGINAL ARTICLE}\n\n{First Link Title}\n\"{First Link Abstract}\"\n\n{Second Link Title}\n\"{Second Link Abstract}\"..."
}
```
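  A rough sketch of assembling one such extended document is given below; the input fields (title, abstract, linked_articles) are assumptions for illustration, not the exact pipeline code.

```python
# Sketch: append the titles and abstracts of linked articles to a source
# Wikipedia article, following the "text" layout shown above. The input
# structure is an assumption for illustration.
def build_extended_article(article_text: str, linked_articles: list[dict]) -> dict:
    parts = [article_text]
    for linked in linked_articles:
        parts.append(f"{linked['title']}\n\"{linked['abstract']}\"")
    return {"text": "\n\n".join(parts)}

doc = build_extended_article(
    "Alan Turing was a British mathematician and computer scientist.",
    [{"title": "Computer science", "abstract": "Computer science is the study of computation."}],
)
```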
## We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open-source dataset, and train the most performant models.
# TxT360 Compared to Common Pretraining Datasets